Investigating Integer Restrictions in Linear Programming
ERIC Educational Resources Information Center
Edwards, Thomas G.; Chelst, Kenneth R.; Principato, Angela M.; Wilhelm, Thad L.
2015-01-01
Linear programming (LP) is an application of graphing linear systems that appears in many Algebra 2 textbooks. Although not explicitly mentioned in the Common Core State Standards for Mathematics, linear programming blends seamlessly into modeling with mathematics, the fourth Standard for Mathematical Practice (CCSSI 2010, p. 7). In solving a…
Semilinear programming: applications and implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohan, S.
Semilinear programming is a method of solving optimization problems with linear constraints where the non-negativity restrictions on the variables are dropped and the objective function coefficients can take on different values depending on whether the variable is positive or negative. The simplex method for linear programming is modified in this thesis to solve general semilinear and piecewise linear programs efficiently without having to transform them into equivalent standard linear programs. Several models in widely different areas of optimization, such as production smoothing, facility location, goal programming, and L1 estimation, are presented first to demonstrate the compact formulation that arises when such problems are formulated as semilinear programs. A code, SLP, is constructed using the semilinear programming techniques. Problems in aggregate planning and L1 estimation are solved using SLP and as equivalent linear programs using a linear programming simplex code. Comparisons of CPU times and number of iterations indicate SLP to be far superior. The semilinear programming techniques are extended to piecewise linear programming in the implementation of the code PLP. Piecewise linear models in aggregate planning are solved using PLP and as equivalent standard linear programs using a simple upper-bounded linear programming code, SUBLP.
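To make the equivalence concrete, here is a minimal Python sketch (toy numbers, SciPy's linprog as the solver) of the standard-LP transformation that SLP is designed to avoid: each sign-dependent-cost variable x is split as x = x+ - x- with separate costs on the two parts.

```python
# Minimal sketch: a cost that depends on the sign of each variable,
#   minimize  sum_i ( c_pos_i * xpos_i + c_neg_i * xneg_i ),  x = xpos - xneg,
# recast as the equivalent standard LP. All numbers are invented.
import numpy as np
from scipy.optimize import linprog

c_pos = np.array([3.0, 2.0])   # unit cost of each variable's positive part
c_neg = np.array([1.0, 4.0])   # unit cost of each variable's negative part
A = np.array([[1.0, 1.0],      # original linear constraints A x = b
              [2.0, -1.0]])
b = np.array([1.0, -2.0])

c = np.concatenate([c_pos, c_neg])
A_eq = np.hstack([A, -A])      # encodes A (xpos - xneg) = b
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * 4)
x = res.x[:2] - res.x[2:]
print("x =", x, " cost =", res.fun)
```

The split doubles the number of variables, which is exactly the overhead the thesis's direct semilinear simplex avoids.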
ERIC Educational Resources Information Center
British Standards Institution, London (England).
To promote interchangeability of teaching machines and programs, so that the user is not so limited in his choice of programs, the British Standards Institution has offered a standard. Part I of the standard deals with linear teaching machines and programs that make use of the roll or sheet methods of presentation. Requirements cover: spools,…
Two Computer Programs for the Statistical Evaluation of a Weighted Linear Composite.
ERIC Educational Resources Information Center
Sands, William A.
1978-01-01
Two computer programs (one batch, one interactive) are designed to provide statistics for a weighted linear combination of several component variables. Both programs provide mean, variance, standard deviation, and a validity coefficient. (Author/JKS)
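For illustration only, a small Python sketch of the statistics such a program reports: mean, variance, standard deviation, and a validity coefficient, taken here as the Pearson correlation between the composite and an assumed external criterion. All data and weights are simulated.

```python
# Illustrative only: statistics for a weighted linear composite y = X @ w.
# The "validity coefficient" is interpreted as the correlation of the
# composite with an assumed external criterion variable.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))            # 50 cases, 3 component variables
w = np.array([0.5, 0.3, 0.2])           # component weights
criterion = X @ np.array([0.4, 0.4, 0.2]) + rng.normal(scale=0.5, size=50)

composite = X @ w
print("mean              :", composite.mean())
print("variance          :", composite.var(ddof=1))
print("standard deviation:", composite.std(ddof=1))
print("validity          :", np.corrcoef(composite, criterion)[0, 1])
```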
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gearhart, Jared Lee; Adair, Kristin Lynn; Durfee, Justin David.
When developing linear programming models, issues such as budget limitations, customer requirements, or licensing may preclude the use of commercial linear programming solvers. In such cases, one option is to use an open-source linear programming solver. A survey of linear programming tools was conducted to identify potential open-source solvers. From this survey, four open-source solvers were tested using a collection of linear programming test problems and the results were compared to IBM ILOG CPLEX Optimizer (CPLEX) [1], an industry standard. The solvers considered were: COIN-OR Linear Programming (CLP) [2], [3], GNU Linear Programming Kit (GLPK) [4], lp_solve [5], and Modular In-core Nonlinear Optimization System (MINOS) [6]. As no open-source solver outperforms CPLEX, this study demonstrates the power of commercial linear programming software. CLP was found to be the top-performing open-source solver considered in terms of capability and speed. GLPK also performed well but cannot match the speed of CLP or CPLEX. lp_solve and MINOS were considerably slower and encountered issues when solving several test problems.
Computer Program For Linear Algebra
NASA Technical Reports Server (NTRS)
Krogh, F. T.; Hanson, R. J.
1987-01-01
Collection of routines provided for basic vector operations. Basic Linear Algebra Subprograms (BLAS) library is collection of FORTRAN-callable routines for employing standard techniques to perform basic operations of numerical linear algebra.
A Block-LU Update for Large-Scale Linear Programming
1990-01-01
linear programming problems. Results are given from runs on the Cray Y-MP. 1. Introduction. We wish to use the simplex method [Dan63] to solve the ... standard linear program: minimize cᵀx subject to Ax = b, l ≤ x ≤ u, where A is an m by n matrix and c, x, l, u, and b are of appropriate dimension. The simplex ... the identity matrix. The basis is used to solve for the search direction y and the dual variables π in the following linear systems: B_k y = a_q (1.2) and
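As a concrete instance of the quoted standard form, a minimal SciPy sketch with toy data (unrelated to the Cray Y-MP experiments or to the block-LU update itself):

```python
# Toy instance of the standard form quoted above:
#   minimize c^T x  subject to  A x = b,  l <= x <= u.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0, -1.0])
A = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([4.0, 3.0])
bounds = [(0.0, 3.0), (0.0, 2.0), (-1.0, 2.0)]   # elementwise l and u

res = linprog(c, A_eq=A, b_eq=b, bounds=bounds)
print("x* =", res.x, " objective =", res.fun)
```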
Falcone, John L; Middleton, Donald B
2013-01-01
The Accreditation Council for Graduate Medical Education (ACGME) sets residency performance standards for the American Board of Family Medicine Certification Examination. The aims of this study are to describe the compliance of residency programs with ACGME standards and to determine whether residency pass rates depend on program size and location. In this retrospective cohort study, residency performance from 2007 to 2011 was compared with the ACGME performance standards. Simple linear regression was performed to see whether program pass rates were dependent on program size. Regional differences in performance were compared with χ² tests, using an α level of 0.05. Of 429 total residency programs, 205 (47.8%) violated ACGME performance standards. Linear regression showed that program pass rates were positively correlated with and dependent on program size (P < .001). The median pass rate per state was 86.4% (interquartile range, 82.0-90.8). χ² tests showed that states in the West performed better than the other 3 US Census Bureau regions (all P < .001). Approximately half of the family medicine training programs do not meet the ACGME examination performance standards. Pass rates are associated with residency program size, and regional variation occurs. These findings have the potential to affect ACGME policy and residency program application patterns.
Portfolio optimization by using linear programming models based on genetic algorithm
NASA Astrophysics Data System (ADS)
Sukono; Hidayat, Y.; Lesmana, E.; Putra, A. S.; Napitupulu, H.; Supian, S.
2018-01-01
In this paper, we discuss investment portfolio optimization using a linear programming model based on genetic algorithms. It is assumed that the portfolio risk is measured by absolute standard deviation, and each investor has a risk tolerance on the investment portfolio. To solve the investment portfolio optimization problem, the issue is arranged into a linear programming model. Furthermore, determination of the optimum solution for the linear program is done by using a genetic algorithm. As a numerical illustration, we analyze some of the stocks traded on the capital market in Indonesia. Based on the analysis, it is shown that the portfolio optimization performed by the genetic algorithm approach produces a more efficient portfolio than the portfolio optimization performed by a linear programming algorithm approach. Therefore, genetic algorithms can be considered as an alternative for determining the investment portfolio optimization, particularly using linear programming models.
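A hedged sketch of the kind of linear program involved: the classic mean-absolute-deviation portfolio model, linearized with auxiliary deviation variables and solved here with SciPy rather than a genetic algorithm. The returns are fabricated, not Indonesian market data.

```python
# Sketch: mean-absolute-deviation (MAD) portfolio LP. The d_t variables
# linearize |deviation in period t|, which keeps the model an LP.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
R = rng.normal(0.01, 0.05, size=(60, 4))   # fabricated returns: 60 periods, 4 stocks
T, n = R.shape
mu = R.mean(axis=0)
target = mu.mean()                         # a return level that is always attainable
dev = R - mu                               # per-period deviations from the mean

# variables: [w_1..w_n, d_1..d_T]; minimize (1/T) * sum_t d_t
c = np.concatenate([np.zeros(n), np.ones(T) / T])
# linearize |dev_t . w| <= d_t as two inequalities
A_ub = np.vstack([np.hstack([dev, -np.eye(T)]),
                  np.hstack([-dev, -np.eye(T)])])
b_ub = np.zeros(2 * T)
A_eq = np.vstack([np.concatenate([np.ones(n), np.zeros(T)]),   # budget: sum w = 1
                  np.concatenate([mu, np.zeros(T)])])          # expected return = target
b_eq = np.array([1.0, target])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n + T))                    # long-only, d_t >= 0
print("weights:", res.x[:n].round(3), " MAD:", round(res.fun, 5))
```

Measuring risk by absolute deviation rather than variance is what keeps the problem linear; a variance objective would require quadratic programming.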
Trinker, Horst
2011-10-28
We study the distribution of triples of codewords of codes and ordered codes. Schrijver [A. Schrijver, New code upper bounds from the Terwilliger algebra and semidefinite programming, IEEE Trans. Inform. Theory 51 (8) (2005) 2859-2866] used the triple distribution of a code to establish a bound on the number of codewords based on semidefinite programming. In the first part of this work, we generalize this approach for ordered codes. In the second part, we consider linear codes and linear ordered codes and present a MacWilliams-type identity for the triple distribution of their dual code. Based on the non-negativity of this linear transform, we establish a linear programming bound and conclude with a table of parameters for which this bound yields better results than the standard linear programming bound.
NASA Technical Reports Server (NTRS)
Ferencz, Donald C.; Viterna, Larry A.
1991-01-01
ALPS is a computer program which can be used to solve general linear program (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve integer, mixed-integer, and binary problems. Pure linear programs are solved with the revised simplex method. Integer or mixed-integer programs are solved initially with the revised simplex method and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC-compatible computer are included in the appendices along with a general introduction to linear programming. A programmer's guide is also included for assistance in modifying and maintaining the program.
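ALPS itself is menu-driven (see the companion ALPS entry below), but the solution pipeline it describes, revised simplex on the relaxation followed by branch-and-bound, can be sketched generically. A minimal Python illustration with a toy integer program, not ALPS code:

```python
# Sketch of an LP-relaxation branch-and-bound loop: solve the relaxation,
# branch on a fractional variable, keep the best integral solution found.
import math
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, bounds):
    best = (math.inf, None)                     # (objective, solution)
    stack = [bounds]
    while stack:
        bnds = stack.pop()
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bnds)
        if not res.success or res.fun >= best[0]:
            continue                            # infeasible or dominated node
        frac = [i for i, v in enumerate(res.x) if abs(v - round(v)) > 1e-6]
        if not frac:                            # integral: new incumbent
            best = (res.fun, res.x.round())
            continue
        i, v = frac[0], res.x[frac[0]]
        lo, hi = bnds[i]
        stack.append([*bnds[:i], (lo, math.floor(v)), *bnds[i+1:]])  # x_i <= floor(v)
        stack.append([*bnds[:i], (math.ceil(v), hi), *bnds[i+1:]])   # x_i >= ceil(v)
    return best

# maximize 5x + 4y  s.t.  6x + 4y <= 24, x + 2y <= 6, x, y >= 0 integer
obj, x = branch_and_bound([-5, -4], [[6, 4], [1, 2]], [24, 6],
                          [(0, None), (0, None)])
print("optimum:", x, " value:", -obj)
```

Pruning against the incumbent (`res.fun >= best[0]`) is what keeps the search tractable: any node whose relaxation is already worse than the best integer solution cannot contain the optimum.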
USDA-ARS?s Scientific Manuscript database
Ready-to-use therapeutic food (RUTF) is the standard of care for children suffering from noncomplicated severe acute malnutrition (SAM). The objective was to develop a comprehensive linear programming (LP) tool to create novel RUTF formulations for Ethiopia. A systematic approach that surveyed inter...
Airborne Tactical Crossload Planner
2017-12-01
set out in the Airborne Standard Operating Procedure (ASOP). Subject terms: crossload, airborne, optimization, integer linear programming ... they land to their respective sub-mission locations. In this thesis, we formulate and implement an integer linear program called the Tactical ... to meet any desired crossload objectives. We demonstrate TCP with two real-world tactical problems from recent airborne operations: one by the
Runtime Analysis of Linear Temporal Logic Specifications
NASA Technical Reports Server (NTRS)
Giannakopoulou, Dimitra; Havelund, Klaus
2001-01-01
This report presents an approach to checking a running program against its Linear Temporal Logic (LTL) specifications. LTL is a widely used logic for expressing properties of programs viewed as sets of executions. Our approach consists of translating LTL formulae to finite-state automata, which are used as observers of the program behavior. The translation algorithm we propose modifies standard LTL to Büchi automata conversion techniques to generate automata that check finite program traces. The algorithm has been implemented in a tool, which has been integrated with the generic JPaX framework for runtime analysis of Java programs.
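For intuition about what "checking finite program traces" means, a small Python sketch of finite-trace LTL semantics follows: an illustrative recursive evaluator, not the automata construction or the JPaX algorithm of the report.

```python
# Illustrative finite-trace LTL evaluator. Formulae are nested tuples,
# states are dicts of proposition values; not the paper's automata method.
def holds(f, trace, i=0):
    if isinstance(f, str):                       # atomic proposition
        return trace[i][f]
    op = f[0]
    if op == "not": return not holds(f[1], trace, i)
    if op == "and": return holds(f[1], trace, i) and holds(f[2], trace, i)
    if op == "X":   return i + 1 < len(trace) and holds(f[1], trace, i + 1)
    if op == "G":   return all(holds(f[1], trace, j) for j in range(i, len(trace)))
    if op == "F":   return any(holds(f[1], trace, j) for j in range(i, len(trace)))
    if op == "U":   return any(holds(f[2], trace, j) and
                               all(holds(f[1], trace, k) for k in range(i, j))
                               for j in range(i, len(trace)))
    raise ValueError(f"unknown operator {op!r}")

trace = [{"req": True, "ack": False}, {"req": False, "ack": True}]
print(holds(("G", ("not", ("and", "req", "ack"))), trace))  # mutual exclusion: True
print(holds(("U", "req", "ack"), trace))                    # req holds until ack: True
```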
Poleti, Marcelo Lupion; Fernandes, Thais Maria Freire; Pagin, Otávio; Moretti, Marcela Rodrigues; Rubira-Bullen, Izabel Regina Fischer
2016-01-01
The aim of this in vitro study was to evaluate the reliability and accuracy of linear measurements on three-dimensional (3D) surface models obtained by standard pre-set thresholds in two segmentation software programs. Ten mandibles with 17 silica markers were scanned at a 0.3-mm voxel size in the i-CAT Classic (Imaging Sciences International, Hatfield, PA, USA). Twenty linear measurements were carried out twice by two observers on the 3D surface models: in Dolphin Imaging 11.5 (Dolphin Imaging & Management Solutions, Chatsworth, CA, USA), using two filters (Translucent and Solid-1), and in InVesalius 3.0.0 (Centre for Information Technology Renato Archer, Campinas, SP, Brazil). The physical measurements were made twice by another observer using a digital caliper on the dry mandibles. Excellent intra- and inter-observer reliability for the markers, physical measurements, and 3D surface models was found (intra-class correlation coefficient (ICC) and Pearson's r ≥ 0.91). The linear measurements on 3D surface models by the Dolphin and InVesalius software programs were accurate (Dolphin Solid-1 > InVesalius > Dolphin Translucent). The highest absolute and percentage errors were obtained for the variables R1-R1 (1.37 mm) and MF-AC (2.53%) in the Dolphin Translucent and InVesalius software, respectively. Linear measurements on 3D surface models obtained by standard pre-set thresholds in the Dolphin and InVesalius software programs are reliable and accurate compared with physical measurements. Studies that evaluate the reliability and accuracy of 3D models are necessary to ensure error predictability and to establish diagnosis, treatment plan, and prognosis in a more realistic way.
Probabilistic dual heuristic programming-based adaptive critic
NASA Astrophysics Data System (ADS)
Herzallah, Randa
2010-02-01
Adaptive critic (AC) methods have common roots as generalisations of dynamic programming for neural reinforcement learning approaches. Since they approximate the dynamic programming solutions, they are potentially suitable for learning in noisy, non-linear and non-stationary environments. In this study, a novel probabilistic dual heuristic programming (DHP)-based AC controller is proposed. Distinct from current approaches, the proposed probabilistic DHP AC method takes uncertainties of the forward model and inverse controller into consideration. Therefore, it is suitable for deterministic and stochastic control problems characterised by functional uncertainty. Theoretical development of the proposed method is validated by analytically evaluating the correct value of the cost function which satisfies the Bellman equation in a linear quadratic control problem. The target value of the probabilistic critic network is then calculated and shown to be equal to the analytically derived correct value. Full derivation of the Riccati solution for this non-standard stochastic linear quadratic control problem is also provided. Moreover, the performance of the proposed probabilistic controller is demonstrated on linear and non-linear control examples.
NASA Technical Reports Server (NTRS)
Huang, L. C. P.; Cook, R. A.
1973-01-01
Models utilizing various sub-sets of the six degrees of freedom are used in trajectory simulation. A 3-D model with only linear degrees of freedom is especially attractive, since the coefficients for the angular degrees of freedom are the most difficult to determine and the angular equations are the most time consuming for the computer to evaluate. A computer program is developed that uses three separate subsections to predict trajectories. A launch rail subsection is used until the rocket has left its launcher. The program then switches to a special 3-D section which computes motions in two linear and one angular degrees of freedom. When the rocket trims out, the program switches to the standard, three linear degrees of freedom model.
Introductory Linear Regression Programs in Undergraduate Chemistry.
ERIC Educational Resources Information Center
Gale, Robert J.
1982-01-01
Presented are simple programs in BASIC and FORTRAN to apply the method of least squares. They calculate gradients and intercepts and express errors as standard deviations. An introduction of undergraduate students to such programs in a chemistry class is reviewed, and issues instructors should be aware of are noted. (MP)
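A Python analogue of what such an introductory program computes, gradient, intercept, and their standard deviations from the usual least-squares formulas, is sketched below; the x/y data are invented calibration-style values.

```python
# Sketch: least-squares fit of y = m*x + b with standard deviations of the
# fitted parameters, using the standard textbook formulas.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])        # e.g. concentrations
y = np.array([0.11, 0.19, 0.32, 0.40, 0.51])   # e.g. instrument responses

n = len(x)
m, b = np.polyfit(x, y, 1)                     # gradient and intercept
resid = y - (m * x + b)
s2 = resid @ resid / (n - 2)                   # residual variance
sxx = ((x - x.mean()) ** 2).sum()
sm = np.sqrt(s2 / sxx)                         # std deviation of the gradient
sb = np.sqrt(s2 * (1 / n + x.mean() ** 2 / sxx))  # std deviation of the intercept
print(f"gradient  = {m:.4f} +/- {sm:.4f}")
print(f"intercept = {b:.4f} +/- {sb:.4f}")
```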
Duality in non-linear programming
NASA Astrophysics Data System (ADS)
Jeyalakshmi, K.
2018-04-01
In this paper we consider duality and converse duality for a programming problem involving convex objective and constraint functions with finite dimensional range. We do not assume any constraint qualification. The dual is presented by reducing the problem to a standard Lagrange multiplier problem.
Computing Linear Mathematical Models Of Aircraft
NASA Technical Reports Server (NTRS)
Duke, Eugene L.; Antoniewicz, Robert F.; Krambeer, Keith D.
1991-01-01
Derivation and Definition of Linear Aircraft Model (LINEAR) computer program provides user with powerful, flexible, standard, documented, and verified software tool for linearization of mathematical models of aerodynamics of aircraft. Intended for use in software tool to drive linear analysis of stability and design of control laws for aircraft. Capable of both extracting such linearized engine effects as net thrust, torque, and gyroscopic effects, and including these effects in linear model of system. Designed to provide easy selection of state, control, and observation variables used in particular model. Also provides flexibility of allowing alternate formulations of both state and observation equations. Written in FORTRAN.
A binary linear programming formulation of the graph edit distance.
Justice, Derek; Hero, Alfred
2006-08-01
A binary linear programming formulation of the graph edit distance for unweighted, undirected graphs with vertex attributes is derived and applied to a graph recognition problem. A general formulation for editing graphs is used to derive a graph edit distance that is proven to be a metric, provided the cost function for individual edit operations is a metric. Then, a binary linear program is developed for computing this graph edit distance, and polynomial time methods for determining upper and lower bounds on the solution of the binary program are derived by applying solution methods for standard linear programming and the assignment problem. A recognition problem of comparing a sample input graph to a database of known prototype graphs in the context of a chemical information system is presented as an application of the new method. The costs associated with various edit operations are chosen by using a minimum normalized variance criterion applied to pairwise distances between nearest neighbors in the database of prototypes. The new metric is shown to perform quite well in comparison to existing metrics when applied to a database of chemical graphs.
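A sketch of the assignment-problem bound mentioned above, under assumed unit edit costs: vertices of one graph are matched to vertices of the other or to deletion/insertion slots, and the optimal assignment cost lower-bounds the edit distance. The cost choices here are illustrative, not the paper's variance-calibrated ones.

```python
# Sketch: assignment-problem lower bound on graph edit distance over vertex
# labels. Substitution, deletion, and insertion costs are placeholders.
import numpy as np
from scipy.optimize import linear_sum_assignment

def assignment_bound(labels1, labels2, sub_cost=1.0, indel_cost=1.0):
    n, m = len(labels1), len(labels2)
    size = n + m
    C = np.zeros((size, size))
    C[:n, :m] = [[0.0 if a == b else sub_cost for b in labels2] for a in labels1]
    C[:n, m:] = np.inf; np.fill_diagonal(C[:n, m:], indel_cost)   # delete u_i
    C[n:, :m] = np.inf; np.fill_diagonal(C[n:, :m], indel_cost)   # insert v_j
    rows, cols = linear_sum_assignment(C)      # bottom-right zeros: dummy-dummy
    return C[rows, cols].sum()

print(assignment_bound(["C", "C", "O"], ["C", "O"]))   # e.g. chemical vertex labels
```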
Frequency assignments for HFDF receivers in a search and rescue network
NASA Astrophysics Data System (ADS)
Johnson, Krista E.
1990-03-01
This thesis applies a multiobjective linear programming approach to the problem of assigning frequencies to high frequency direction finding (HFDF) receivers in a search-and-rescue network in order to maximize the expected number of geolocations of vessels in distress. The problem is formulated as a multiobjective integer linear programming problem. The integrality of the solutions is guaranteed by the total unimodularity of the A-matrix. Two approaches are taken to solve the multiobjective linear programming problem: (1) the multiobjective simplex method as implemented in ADBASE; and (2) an iterative approach. In this approach, the individual objective functions are weighted and combined in a single additive objective function. The resulting single-objective problem is expressed as a network programming problem and solved using SAS NETFLOW. The process is then repeated with different weightings for the objective functions. The solutions obtained from the multiobjective linear programs are evaluated using a FORTRAN program to determine which solution provides the greatest expected number of geolocations. This solution is then compared to the sample mean and standard deviation for the expected number of geolocations resulting from 10,000 random frequency assignments for the network.
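A minimal sketch of the iterative weighted-sum step (toy objectives and constraints, not the HFDF network model): each weighting yields one single-objective LP whose solution becomes a candidate for the later evaluation step.

```python
# Sketch: weighted-sum scalarization of a two-objective LP. Sweeping the
# weight produces a set of candidate solutions for downstream evaluation.
import numpy as np
from scipy.optimize import linprog

c1 = np.array([-3.0, -1.0])          # objective 1 (maximize 3x + y)
c2 = np.array([-1.0, -4.0])          # objective 2 (maximize x + 4y)
A_ub = np.array([[1.0, 1.0], [2.0, 1.0]])
b_ub = np.array([4.0, 6.0])

solutions = []
for w in np.linspace(0.0, 1.0, 5):   # sweep the weighting between objectives
    c = w * c1 + (1 - w) * c2
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
    solutions.append((w, tuple(res.x.round(3))))
for w, x in solutions:
    print(f"w = {w:.2f} -> x = {x}")
```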
Trelease, R B; Nieder, G L; Dørup, J; Hansen, M S
2000-04-15
Continuing evolution of computer-based multimedia technologies has produced QuickTime, a multiplatform digital media standard that is supported by stand-alone commercial programs and World Wide Web browsers. While its core functions might be most commonly employed for production and delivery of conventional video programs (e.g., lecture videos), additional QuickTime VR "virtual reality" features can be used to produce photorealistic, interactive "non-linear movies" of anatomical structures ranging in size from microscopic through gross anatomic. But what is really included in QuickTime VR and how can it be easily used to produce novel and innovative visualizations for education and research? This tutorial introduces the QuickTime multimedia environment, its QuickTime VR extensions, basic linear and non-linear digital video technologies, image acquisition, and other specialized QuickTime VR production methods. Four separate practical applications are presented for light and electron microscopy, dissectable preserved specimens, and explorable functional anatomy in magnetic resonance cinegrams.
Algorithmic Trading with Developmental and Linear Genetic Programming
NASA Astrophysics Data System (ADS)
Wilson, Garnett; Banzhaf, Wolfgang
A developmental co-evolutionary genetic programming approach (PAM DGP) and a standard linear genetic programming (LGP) stock trading system are applied to a number of stocks across market sectors. Both GP techniques were found to be robust to market fluctuations and reactive to opportunities associated with stock price rise and fall, with PAM DGP generating notably greater profit in some stock trend scenarios. Both algorithms were very accurate at buying to achieve profit and selling to protect assets, while exhibiting both moderate trading activity and the ability to maximize or minimize investment as appropriate. The content of the trading rules produced by both algorithms is also examined in relation to stock price trend scenarios.
Using linear programming to minimize the cost of nurse personnel.
Matthews, Charles H
2005-01-01
Nursing personnel costs make up a major portion of most hospital budgets. This report evaluates and optimizes the utility of the nurse personnel at the Internal Medicine Outpatient Clinic of Wake Forest University Baptist Medical Center. Linear programming (LP) was employed to determine the effective combination of nurses that would allow all weekly clinic tasks to be covered while providing the lowest possible cost to the department. Linear programming is a standard application of spreadsheet software: it allows the operator to establish the variables to be optimized and then requires the operator to enter a series of constraints that will each have an impact on the ultimate outcome. The application is therefore able to quantify and stratify the nurses necessary to execute the tasks. With the report, a specific sensitivity analysis can be performed to assess just how sensitive the outcome is to the stress of adding or deleting a nurse to or from the payroll. The nurse employee cost structure in this study consisted of five certified nurse assistants (CNAs), three licensed practical nurses (LPNs), and five registered nurses (RNs). The LP revealed that the outpatient clinic should staff four RNs, three LPNs, and four CNAs with 95 percent confidence of covering nurse demand on the floor. This combination of nurses would enable the clinic to: 1. Reduce annual staffing costs by 16 percent; 2. Force each level of nurse to be optimally productive by focusing on tasks specific to their expertise; 3. Assign accountability more efficiently as the nurses adhere to their specific duties; and 4. Ultimately provide a competitive advantage to the clinic as it relates to nurse employee and patient satisfaction. Linear programming can be used to solve capacity problems for just about any staffing situation, provided the model is indeed linear.
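A hedged sketch of such a staffing model in Python: integer counts of each nurse type chosen to cover weekly task-hour demand at minimum cost. All wages, coverage rates, and demands are invented placeholders, not the clinic's data.

```python
# Sketch: integer staffing model. Choose counts of RNs, LPNs, and CNAs to
# cover weekly task-hour requirements at minimum annual cost.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

wages = np.array([70000.0, 48000.0, 35000.0])      # annual cost: RN, LPN, CNA
# hours each nurse type contributes per week to three task categories
cover = np.array([[20.0, 10.0,  0.0],              # skilled clinical tasks
                  [10.0, 20.0, 10.0],              # routine clinical tasks
                  [ 5.0, 10.0, 25.0]])             # support tasks
demand = np.array([110.0, 140.0, 130.0])           # required hours per week

res = milp(c=wages,
           constraints=LinearConstraint(cover, lb=demand),  # cover all demand
           integrality=np.ones(3),                          # whole nurses only
           bounds=Bounds(lb=0, ub=[5, 3, 5]))               # current headcount caps
print("staff (RN, LPN, CNA):", res.x, " annual cost:", res.fun)
```

Sensitivity analysis of the kind described then amounts to perturbing `demand` or the headcount caps and re-solving.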
LINEAR - DERIVATION AND DEFINITION OF A LINEAR AIRCRAFT MODEL
NASA Technical Reports Server (NTRS)
Duke, E. L.
1994-01-01
The Derivation and Definition of a Linear Model program, LINEAR, provides the user with a powerful and flexible tool for the linearization of aircraft aerodynamic models. LINEAR was developed to provide a standard, documented, and verified tool to derive linear models for aircraft stability analysis and control law design. Linear system models define the aircraft system in the neighborhood of an analysis point and are determined by the linearization of the nonlinear equations defining vehicle dynamics and sensors. LINEAR numerically determines a linear system model using nonlinear equations of motion and a user supplied linear or nonlinear aerodynamic model. The nonlinear equations of motion used are six-degree-of-freedom equations with stationary atmosphere and flat, nonrotating earth assumptions. LINEAR is capable of extracting both linearized engine effects, such as net thrust, torque, and gyroscopic effects and including these effects in the linear system model. The point at which this linear model is defined is determined either by completely specifying the state and control variables, or by specifying an analysis point on a trajectory and directing the program to determine the control variables and the remaining state variables. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to provide easy selection of state, control, and observation variables to be used in a particular model. Thus, the order of the system model is completely under user control. Further, the program provides the flexibility of allowing alternate formulations of both the state and observation equations. Data describing the aircraft and the test case is input to the program through a terminal or formatted data files. All data can be modified interactively from case to case. The aerodynamic model can be defined in two ways: a set of nondimensional stability and control derivatives for the flight point of interest, or a full non-linear aerodynamic model as used in simulations. LINEAR is written in FORTRAN and has been implemented on a DEC VAX computer operating under VMS with a virtual memory requirement of approximately 296K of 8 bit bytes. Both an interactive and batch version are included. LINEAR was developed in 1988.
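LINEAR works from the full six-degree-of-freedom equations of motion; as a generic illustration of the numerical linearization step, here is a Python sketch that extracts state-space A and B matrices from a toy nonlinear model by central differences. The dynamics are a stand-in, not LINEAR's aircraft model.

```python
# Sketch: numerical linearization of xdot = f(x, u) about an analysis point
# (x0, u0), producing the state matrix A and control matrix B.
import numpy as np

def f(x, u):
    # toy two-state dynamics: [velocity, pitch-like angle]; invented physics
    v, theta = x
    return np.array([-0.02 * v**2 + u[0] - 9.81 * np.sin(theta),
                      0.5 * u[1] - 0.1 * theta])

def linearize(f, x0, u0, eps=1e-6):
    n, m = len(x0), len(u0)
    A = np.zeros((n, n)); B = np.zeros((n, m))
    for j in range(n):                          # central-difference Jacobians
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

A, B = linearize(f, x0=np.array([100.0, 0.05]), u0=np.array([205.0, 0.0]))
print("A =\n", A, "\nB =\n", B)
```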
SUBOPT: A CAD program for suboptimal linear regulators
NASA Technical Reports Server (NTRS)
Fleming, P. J.
1985-01-01
An interactive software package which provides design solutions for both standard linear quadratic regulator (LQR) and suboptimal linear regulator problems is described. Intended for time-invariant continuous systems, the package is easily modified to include sampled-data systems. LQR designs are obtained by established techniques while the large class of suboptimal problems containing controller and/or performance index options is solved using a robust gradient minimization technique. Numerical examples demonstrate features of the package and recent developments are described.
NASA pyrotechnically actuated systems program
NASA Technical Reports Server (NTRS)
Schulze, Norman R.
1993-01-01
The Office of Safety and Mission Quality initiated a Pyrotechnically Actuated Systems (PAS) Program in FY-92 to address problems experienced with pyrotechnically actuated systems and devices used both on the ground and in flight. The PAS Program will provide the technical basis for NASA's projects to incorporate new technological developments in operational systems. The program will accomplish that objective by developing/testing current and new hardware designs for flight applications and by providing a pyrotechnic data base. This marks the first applied pyrotechnic technology program funded by NASA to address pyrotechnic issues. The PAS Program has been structured to address the results of a survey of pyrotechnic device and system problems with the goal of alleviating or minimizing their risks. Major program initiatives include the development of a Laser Initiated Ordnance System, a pyrotechnic systems data base, NASA Standard Initiator model, a NASA Standard Linear Separation System and a NASA Standard Gas Generator. The PAS Program sponsors annual aerospace pyrotechnic systems workshops.
Tonkin, Matthew J.; Tiedeman, Claire; Ely, D. Matthew; Hill, Mary C.
2007-01-01
The OPR-PPR program calculates the Observation-Prediction (OPR) and Parameter-Prediction (PPR) statistics that can be used to evaluate the relative importance of various kinds of data to simulated predictions. The data considered fall into three categories: (1) existing observations, (2) potential observations, and (3) potential information about parameters. The first two are addressed by the OPR statistic; the third is addressed by the PPR statistic. The statistics are based on linear theory and measure the leverage of the data, which depends on the location, the type, and possibly the time of the data being considered. For example, in a ground-water system the type of data might be a head measurement at a particular location and time. As a measure of leverage, the statistics do not take into account the value of the measurement. As linear measures, the OPR and PPR statistics require minimal computational effort once sensitivities have been calculated. Sensitivities need to be calculated for only one set of parameter values; commonly these are the values estimated through model calibration. OPR-PPR can calculate the OPR and PPR statistics for any mathematical model that produces the necessary OPR-PPR input files. In this report, OPR-PPR capabilities are presented in the context of using the ground-water model MODFLOW-2000 and the universal inverse program UCODE_2005. The method used to calculate the OPR and PPR statistics is based on the linear equation for prediction standard deviation. Using sensitivities and other information, OPR-PPR calculates (a) the percent increase in the prediction standard deviation that results when one or more existing observations are omitted from the calibration data set; (b) the percent decrease in the prediction standard deviation that results when one or more potential observations are added to the calibration data set; or (c) the percent decrease in the prediction standard deviation that results when potential information on one or more parameters is added.
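A small sketch of the linear theory behind the OPR idea, under assumed standard regression formulas (not the OPR-PPR code): the prediction standard deviation is computed from sensitivities, and the percent increase is reported when each observation is omitted in turn.

```python
# Sketch (assumed linear-theory formulas): prediction standard deviation from
# sensitivities, and its percent change when one observation is dropped.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(8, 3))       # observation sensitivities (8 obs, 3 parameters)
s = rng.normal(size=3)            # prediction sensitivity to the parameters
w = np.ones(8)                    # observation weights

def pred_sd(X, w, s):
    cov = np.linalg.inv(X.T @ (w[:, None] * X))   # parameter covariance (to scale)
    return np.sqrt(s @ cov @ s)

base = pred_sd(X, w, s)
for i in range(len(w)):
    keep = np.arange(len(w)) != i
    opr = 100 * (pred_sd(X[keep], w[keep], s) - base) / base
    print(f"omit obs {i}: prediction sd changes by {opr:6.2f} %")
```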
NASA Astrophysics Data System (ADS)
Ryan, D. P.; Roth, G. S.
1982-04-01
Complete documentation of the 15 programs and 11 data files of the EPA Atomic Absorption Instrument Automation System is presented. The system incorporates the following major features: (1) multipoint calibration using first, second, or third degree regression or linear interpolation, (2) timely quality control assessments for spiked samples, duplicates, laboratory control standards, reagent blanks, and instrument check standards, (3) reagent blank subtraction, and (4) plotting of calibration curves and raw data peaks. The programs of this system are written in Data General Extended BASIC, Revision 4.3, as enhanced for multi-user, real-time data acquisition. They run in a Data General Nova 840 minicomputer under the operating system RDOS, Revision 6.2. There is a functional description, a symbol definitions table, a functional flowchart, a program listing, and a symbol cross reference table for each program. The structure of every data file is also detailed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abe, T.; et al.
This Resource Book reviews the physics opportunities of a next-generation e+e- linear collider and discusses options for the experimental program. Part 3 reviews the possible experiments that can be done at a linear collider on strongly coupled electroweak symmetry breaking, exotic particles, and extra dimensions, and on the top quark, QCD, and two-photon physics. It also discusses the improved precision electroweak measurements that this collider will make available.
NASA Astrophysics Data System (ADS)
Chen, Buxin; Zhang, Zheng; Sidky, Emil Y.; Xia, Dan; Pan, Xiaochuan
2017-11-01
Optimization-based algorithms for image reconstruction in multispectral (or photon-counting) computed tomography (MCT) remain a topic of active research. The challenge of optimization-based image reconstruction in MCT stems from the inherently non-linear data model that can lead to a non-convex optimization program for which no mathematically exact solver seems to exist for achieving globally optimal solutions. In this work, based upon a non-linear data model, we design a non-convex optimization program, derive its first-order-optimality conditions, and propose an algorithm to solve the program for image reconstruction in MCT. In addition to consideration of image reconstruction for the standard scan configuration, the emphasis is on investigating the algorithm’s potential for enabling non-standard scan configurations with no or minimum hardware modification to existing CT systems, which has potential practical implications for lowered hardware cost, enhanced scanning flexibility, and reduced imaging dose/time in MCT. Numerical studies are carried out for verification of the algorithm and its implementation, and for a preliminary demonstration and characterization of the algorithm in reconstructing images and in enabling non-standard configurations with varying scanning angular range and/or x-ray illumination coverage in MCT.
Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures
ERIC Educational Resources Information Center
Jeon, Minjeong; Rabe-Hesketh, Sophia
2012-01-01
In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…
Parlesak, Alexandr; Geelhoed, Diederike; Robertson, Aileen
2014-06-01
Chronic undernutrition is prevalent in Mozambique, where children suffer from stunting, vitamin A deficiency, anemia, and other nutrition-related disorders. Complete diet formulation products (CDFPs) are increasingly promoted to prevent chronic undernutrition. Linear programming was used to investigate whether diet diversification using local foods should be prioritized in order to reduce the prevalence of chronic undernutrition. Market prices of local foods were collected in Tete City, Mozambique. Linear programming was applied to calculate the cheapest possible fully nutritious food baskets (FNFB) by stepwise addition of micronutrient-dense local foods. Only the top quintile of Mozambican households, using average expenditure data, could afford the FNFB that was designed using linear programming from a spectrum of local standard foods. The addition of beef heart or liver, dried fish, and fresh moringa leaves before applying linear programming decreased the price by a factor of up to 2.6. As a result, the top three quintiles could afford the FNFB optimized using both the diversification strategy and linear programming. CDFPs, when added to the baskets, were unable to overcome the micronutrient gaps without greatly exceeding recommended energy intakes, due to their high ratio of energy to micronutrient density. Dietary diversification strategies using local, low-cost, nutrient-dense foods can meet all micronutrient recommendations and overcome all micronutrient gaps. The success of linear programming in identifying a low-cost FNFB depends entirely on the investigators' ability to select appropriate micronutrient-dense foods. CDFPs added to food baskets are unable to overcome micronutrient gaps without greatly exceeding recommended energy intake.
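A minimal sketch of the food-basket LP in question: minimize basket cost subject to nutrient recommendations. The prices and nutrient contents are invented placeholders, not the Tete City market data.

```python
# Sketch: least-cost fully nutritious basket as a diet-style LP.
import numpy as np
from scipy.optimize import linprog

foods = ["maize meal", "beans", "dried fish", "moringa leaves"]
price = np.array([0.05, 0.12, 0.40, 0.08])        # cost per 100 g (invented)
# rows: energy (kcal), protein (g), iron (mg) per 100 g of each food (invented)
nutrients = np.array([[360.0, 340.0, 300.0,  60.0],
                      [  9.0,  21.0,  60.0,   9.0],
                      [  2.5,   5.0,   8.0,   4.0]])
minimum = np.array([2100.0, 50.0, 15.0])          # daily recommendations

# linprog minimizes, so express "nutrients @ x >= minimum" as "-N x <= -min"
res = linprog(price, A_ub=-nutrients, b_ub=-minimum,
              bounds=[(0, 10)] * len(foods))      # at most 1 kg of any food
for food, amount in zip(foods, res.x):
    print(f"{food:15s} {100 * amount:7.1f} g")
print("basket cost:", round(res.fun, 2))
```

The diversification finding corresponds to adding columns (foods) to `nutrients`: a denser food can lower the optimal cost even at a higher unit price.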
Lorenzo-Seva, Urbano; Ferrando, Pere J
2011-03-01
We provide an SPSS program that implements currently recommended techniques and recent developments for selecting variables in multiple linear regression analysis via the relative importance of predictors. The approach consists of: (1) optimally splitting the data for cross-validation, (2) selecting the final set of predictors to be retained in the equation regression, and (3) assessing the behavior of the chosen model using standard indices and procedures. The SPSS syntax, a short manual, and data files related to this article are available as supplemental materials from brm.psychonomic-journals.org/content/supplemental.
A Whirlwind Tour of Computational Geometry.
ERIC Educational Resources Information Center
Graham, Ron; Yao, Frances
1990-01-01
Described is computational geometry, which uses concepts and results from classical geometry, topology, and combinatorics, as well as standard algorithmic techniques such as sorting and searching, graph manipulations, and linear programming. Also included are special techniques and paradigms. (KR)
Lefkoff, L.J.; Gorelick, S.M.
1987-01-01
A FORTRAN-77 computer program that helps solve a variety of aquifer management problems involving the control of groundwater hydraulics. It is intended for use with any standard mathematical programming package that uses Mathematical Programming System input format. The computer program creates the input files to be used by the optimization program. These files contain all the hydrologic information and management objectives needed to solve the management problem. Used in conjunction with a mathematical programming code, the computer program identifies the pumping or recharge strategy that achieves a user's management objective while maintaining groundwater hydraulic conditions within desired limits. The objective may be linear or quadratic, and may involve the minimization of pumping and recharge rates or of variable pumping costs. The problem may contain constraints on groundwater heads, gradients, and velocities for a complex, transient hydrologic system. Linear superposition of solutions to the transient, two-dimensional groundwater flow equation is used by the computer program in conjunction with the response matrix optimization method. A unit stress is applied at each decision well and transient responses at all control locations are computed using a modified version of the U.S. Geological Survey two-dimensional aquifer simulation model. The program also computes discounted cost coefficients for the objective function and accounts for transient aquifer conditions. (Author's abstract)
NASA Astrophysics Data System (ADS)
McDonald, Michael C.; Kim, H. K.; Henry, J. R.; Cunningham, I. A.
2012-03-01
The detective quantum efficiency (DQE) is widely accepted as a primary measure of x-ray detector performance in the scientific community. A standard method for measuring the DQE, based on IEC 62220-1, requires the system to have a linear response, meaning that the detector output signals are proportional to the incident x-ray exposure. However, many systems have a non-linear response due to characteristics of the detector, or post-processing of the detector signals, that cannot be disabled and may involve unknown algorithms considered proprietary by the manufacturer. For these reasons, the DQE has not been considered as a practical candidate for routine quality assurance testing in a clinical setting. In this article we describe a method that can be used to measure the DQE of both linear and non-linear systems that employ only linear image processing algorithms. The method was validated on a cesium iodide-based flat-panel system that simultaneously stores a raw (linear) and processed (non-linear) image for each exposure. It was found that the resulting DQE was equivalent to a conventional standards-compliant DQE within measurement precision, and the gray-scale inversion and linear edge enhancement did not affect the DQE result. While not IEC 62220-1 compliant, it may be adequate for QA programs.
Automata-Based Verification of Temporal Properties on Running Programs
NASA Technical Reports Server (NTRS)
Giannakopoulou, Dimitra; Havelund, Klaus; Lan, Sonie (Technical Monitor)
2001-01-01
This paper presents an approach to checking a running program against its Linear Temporal Logic (LTL) specifications. LTL is a widely used logic for expressing properties of programs viewed as sets of executions. Our approach consists of translating LTL formulae to finite-state automata, which are used as observers of the program behavior. The translation algorithm we propose modifies standard LTL to Büchi automata conversion techniques to generate automata that check finite program traces. The algorithm has been implemented in a tool, which has been integrated with the generic JPaX framework for runtime analysis of Java programs.
Wood, Scott T; Dean, Brian C; Dean, Delphine
2013-04-01
This paper presents a novel computer vision algorithm to analyze 3D stacks of confocal images of fluorescently stained single cells. The goal of the algorithm is to create representative in silico model structures that can be imported into finite element analysis software for mechanical characterization. Segmentation of cell and nucleus boundaries is accomplished via standard thresholding methods. Using novel linear programming methods, a representative actin stress fiber network is generated by computing a linear superposition of fibers having minimum discrepancy compared with an experimental 3D confocal image. Qualitative validation is performed through analysis of seven 3D confocal image stacks of adherent vascular smooth muscle cells (VSMCs) grown in 2D culture. The presented method is able to automatically generate 3D geometries of the cell's boundary, nucleus, and representative F-actin network based on standard cell microscopy data. These geometries can be used for direct importation and implementation in structural finite element models for analysis of the mechanics of a single cell to potentially speed discoveries in the fields of regenerative medicine, mechanobiology, and drug discovery. Copyright © 2012 Elsevier B.V. All rights reserved.
ALPS - A LINEAR PROGRAM SOLVER
NASA Technical Reports Server (NTRS)
Viterna, L. A.
1994-01-01
Linear programming is a widely-used engineering and management tool. Scheduling, resource allocation, and production planning are all well-known applications of linear programs (LP's). Most LP's are too large to be solved by hand, so over the decades many computer codes for solving LP's have been developed. ALPS, A Linear Program Solver, is a full-featured LP analysis program. ALPS can solve plain linear programs as well as more complicated mixed integer and pure integer programs. ALPS also contains an efficient solution technique for pure binary (0-1 integer) programs. One of the many weaknesses of LP solvers is the lack of interaction with the user. ALPS is a menu-driven program with no special commands or keywords to learn. In addition, ALPS contains a full-screen editor to enter and maintain the LP formulation. These formulations can be written to and read from plain ASCII files for portability. For those less experienced in LP formulation, ALPS contains a problem "parser" which checks the formulation for errors. ALPS creates fully formatted, readable reports that can be sent to a printer or output file. ALPS is written entirely in IBM's APL2/PC product, Version 1.01. The APL2 workspace containing all the ALPS code can be run on any APL2/PC system (AT or 386). On a 32-bit system, this configuration can take advantage of all extended memory. The user can also examine and modify the ALPS code. The APL2 workspace has also been "packed" to be run on any DOS system (without APL2) as a stand-alone "EXE" file, but has limited memory capacity on a 640K system. A numeric coprocessor (80X87) is optional but recommended. The standard distribution medium for ALPS is a 5.25 inch 360K MS-DOS format diskette. IBM, IBM PC and IBM APL2 are registered trademarks of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation.
Linear programming phase unwrapping for dual-wavelength digital holography.
Wang, Zhaomin; Jiao, Jiannan; Qu, Weijuan; Yang, Fang; Li, Hongru; Tian, Ailing; Asundi, Anand
2017-01-20
A linear programming phase unwrapping method in dual-wavelength digital holography is proposed and verified experimentally. The proposed method uses the square of height difference as a convergence standard and theoretically gives the boundary condition in a searching process. A simulation was performed by unwrapping step structures at different levels of Gaussian noise. As a result, our method is capable of recovering the discontinuities accurately. It is robust and straightforward. In the experiment, a microelectromechanical systems sample and a cylindrical lens were measured separately. The testing results were in good agreement with true values. Moreover, the proposed method is applicable not only in digital holography but also in other dual-wavelength interferometric techniques.
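For background, the dual-wavelength principle the method builds on can be shown in a few lines: two wrapped phase maps combine into a beat phase at the synthetic wavelength Λ = λ₁λ₂/|λ₁ − λ₂|, which extends the unambiguous measurement range. The wavelengths and phase values below are assumed, not the paper's.

```python
# Sketch: synthetic-wavelength arithmetic for dual-wavelength interferometry.
import numpy as np

lam1, lam2 = 532e-9, 633e-9                   # assumed laser wavelengths (m)
lam_synth = lam1 * lam2 / abs(lam1 - lam2)    # beat (synthetic) wavelength
print(f"synthetic wavelength: {lam_synth * 1e6:.2f} um")

phi1, phi2 = 1.7, 0.9                         # wrapped phases at each wavelength
phi_synth = np.mod(phi1 - phi2, 2 * np.pi)    # beat phase, wrapped to [0, 2*pi)
height = lam_synth * phi_synth / (4 * np.pi)  # reflection geometry: h = L*phi/(4*pi)
print(f"height estimate: {height * 1e9:.1f} nm")
```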
Integer Linear Programming in Computational Biology
NASA Astrophysics Data System (ADS)
Althaus, Ernst; Klau, Gunnar W.; Kohlbacher, Oliver; Lenhof, Hans-Peter; Reinert, Knut
Computational molecular biology (bioinformatics) is a young research field that is rich in NP-hard optimization problems. The problem instances encountered are often huge and comprise thousands of variables. Since their introduction into the field of bioinformatics in 1997, integer linear programming (ILP) techniques have been successfully applied to many optimization problems. These approaches have added much momentum to development and progress in related areas. In particular, ILP-based approaches have become a standard optimization technique in bioinformatics. In this review, we present applications of ILP-based techniques developed by members and former members of Kurt Mehlhorn’s group. These techniques were introduced to bioinformatics in a series of papers and popularized by demonstration of their effectiveness and potential.
Kozhimannil, Katy Backes; Valera, Madeleine R; Adams, Alyce S; Ross-Degnan, Dennis
2009-09-01
Adequate prenatal and delivery care are vital components of successful maternal health care provision. Starting in 1998, two programs were widely expanded in the Philippines: a national health insurance program (PhilHealth); and a donor-funded franchise of midwife clinics (Well Family Midwife Clinics). This paper examines population-level impacts of these interventions on achievement of minimum standards for prenatal and delivery care. Data from two waves of the Demographic and Health Surveys, conducted before (1998) and after (2003) scale-up of the interventions, are employed in a pre/post-study design, using longitudinal multivariate logistic and linear regression models. After controlling for demographic and socioeconomic characteristics, the PhilHealth insurance program scale-up was associated with increased odds of receiving at least four prenatal visits (OR 1.04 [95% CI 1.01-1.06]) and receiving a visit during the first trimester of pregnancy (OR 1.03 [95% CI 1.01-1.06]). Exposure to midwife clinics was not associated with significant changes in achievement of prenatal care standards. While both programs were associated with slight increases in the odds of delivery in a health facility, these increases were not statistically significant. These results suggest that expansion of an insurance program with accreditation standards was associated with increases in achievement of minimal standards for prenatal care among women in the Philippines.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pickles, W.L.; McClure, J.W.; Howell, R.H.
1978-05-01
A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2%-accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the Chi-Squared Matrix or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg freeze-dried UNO3 can have an accuracy of 0.2% in 1000 s.
A parallel solver for huge dense linear systems
NASA Astrophysics Data System (ADS)
Badia, J. M.; Movilla, J. L.; Climente, J. I.; Castillo, M.; Marqués, M.; Mayo, R.; Quintana-Ortí, E. S.; Planelles, J.
2011-11-01
HDSS (Huge Dense Linear System Solver) is a Fortran Application Programming Interface (API) to facilitate the parallel solution of very large dense systems to scientists and engineers. The API makes use of parallelism to yield an efficient solution of the systems on a wide range of parallel platforms, from clusters of processors to massively parallel multiprocessors. It exploits out-of-core strategies to leverage the secondary memory in order to solve huge linear systems (on the order of 100 000 equations). The API is based on the parallel linear algebra library PLAPACK, and on its Out-Of-Core (OOC) extension POOCLAPACK. Both PLAPACK and POOCLAPACK use the Message Passing Interface (MPI) as the communication layer and BLAS to perform the local matrix operations. The API provides a friendly interface to the users, hiding almost all the technical aspects related to the parallel execution of the code and the use of the secondary memory to solve the systems. In particular, the API can automatically select the best way to store and solve the systems, depending on the dimension of the system, the number of processes, and the main memory of the platform. Experimental results on several parallel platforms report high performance, reaching more than 1 TFLOP with 64 cores to solve a system with more than 200 000 equations and more than 10 000 right-hand side vectors.
New version program summary
Program title: Huge Dense System Solver (HDSS)
Catalogue identifier: AEHU_v1_1
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHU_v1_1.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 87 062
No. of bytes in distributed program, including test data, etc.: 1 069 110
Distribution format: tar.gz
Programming language: Fortran90, C
Computer: Parallel architectures: multiprocessors, computer clusters
Operating system: Linux/Unix
Has the code been vectorized or parallelized?: Yes, includes MPI primitives.
RAM: Tested for up to 190 GB
Classification: 6.5
External routines: MPI (http://www.mpi-forum.org/), BLAS (http://www.netlib.org/blas/), PLAPACK (http://www.cs.utexas.edu/~plapack/), POOCLAPACK (ftp://ftp.cs.utexas.edu/pub/rvdg/PLAPACK/pooclapack.ps) (code for PLAPACK and POOCLAPACK is included in the distribution)
Catalogue identifier of previous version: AEHU_v1_0
Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 533
Does the new version supersede the previous version?: Yes
Nature of problem: Huge scale dense systems of linear equations, Ax = B, beyond standard LAPACK capabilities.
Solution method: The linear systems are solved by means of parallelized routines based on the LU factorization, using efficient secondary storage algorithms when the available main memory is insufficient.
Reasons for new version: In many applications we need to guarantee a high accuracy in the solution of very large linear systems, and we can do it by using double-precision arithmetic.
Summary of revisions: Version 1.1 can be used to solve linear systems using double-precision arithmetic. New version of the initialization routine. The user can choose the kind of arithmetic and the values of several parameters of the environment.
Running time: About 5 hours to solve a system with more than 200 000 equations and more than 10 000 right-hand side vectors using double-precision arithmetic on an eight-node commodity cluster with a total of 64 Intel cores.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kane, V.E.
1979-10-01
The standard maximum likelihood and moment estimation procedures are shown to have some undesirable characteristics for estimating the parameters in a three-parameter lognormal distribution. A class of goodness-of-fit estimators is found which provides a useful alternative to the standard methods. The class of goodness-of-fit tests considered includes the Shapiro-Wilk and Shapiro-Francia tests, which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted-order-statistic estimators are compared to the standard procedures in Monte Carlo simulations. Bias and robustness of the procedures are examined and example data sets are analyzed, including geochemical data from the National Uranium Resource Evaluation Program.
Computed Tomography Inspection and Analysis for Additive Manufacturing Components
NASA Technical Reports Server (NTRS)
Beshears, Ronald D.
2016-01-01
Computed tomography (CT) inspection was performed on test articles additively manufactured from metallic materials. Metallic AM and machined wrought-alloy test articles with programmed flaws were inspected using a 2 MeV linear-accelerator-based CT system. Performance of CT inspection on identically configured wrought and AM components and programmed flaws was assessed using standard image analysis techniques to determine the impact of additive manufacturing on the inspectability of objects with complex geometries.
Kozhimannil, Katy Backes; Valera, Madeleine R.; Adams, Alyce S.; Ross-Degnan, Dennis
2009-01-01
Objectives Adequate prenatal and delivery care are vital components of successful maternal health care provision. Starting in 1998, two programs were widely expanded in the Philippines: a national health insurance program (PhilHealth); and a donor-funded franchise of midwife clinics (Well-Family Midwife Clinics). This paper examines population-level impacts of these interventions on achievement of minimum standards for prenatal and delivery care. Methods Data from two waves of the Demographic and Health Surveys, conducted before (1998) and after (2003) scale up of the interventions, are employed in a pre/post study design, using longitudinal multivariate logistic and linear regression models. Results After controlling for demographic and socioeconomic characteristics, the PhilHealth insurance program scale up was associated with increased odds of receiving at least four prenatal visits (OR 1.04 [95% CI 1.01–1.06]) and receiving a visit during the first trimester of pregnancy (OR 1.03 [95% CI 1.01–1.06]). Exposure to midwife clinics was not associated with significant changes in achievement of prenatal care standards. While both programs were associated with slight increases in the odds of delivery in a health facility, these increases were not statistically significant. Conclusions These results suggest that expansion of an insurance program with accreditation standards was associated with increases in achievement of minimal standards for prenatal care among women in the Philippines. PMID:19327862
NASA Technical Reports Server (NTRS)
Morozov, S. K.; Krasitskiy, O. P.
1978-01-01
A computational scheme and a standard program are proposed for solving systems of nonstationary, spatially one-dimensional, nonlinear differential equations using Newton's method. The proposed scheme is universal in its applicability, and it reduces the work of programming to a minimum. The program is written in the FORTRAN language and can be used without change on electronic computers of the YeS and BESM-6 types. The standard program described permits the identification of nonstationary (or stationary) solutions to systems of spatially one-dimensional nonlinear (or linear) partial differential equations. The proposed method may be used to solve a series of geophysical problems which take chemical reactions, diffusion, and heat conductivity into account, to evaluate nonstationary thermal fields in two-dimensional structures when one of the geometrical directions can take a small number of discrete levels, and to solve problems in nonstationary gas dynamics.
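A generic Python sketch of the Newton iteration such a program automates follows: a toy discretized two-point boundary-value problem with a finite-difference Jacobian, not the FORTRAN program's scheme.

```python
# Sketch: Newton's method for a nonlinear system F(y) = 0 with a
# forward-difference Jacobian, applied to a toy discretized ODE.
import numpy as np

def newton(F, y0, tol=1e-10, max_iter=50, eps=1e-7):
    y = y0.astype(float)
    for _ in range(max_iter):
        Fy = F(y)
        if np.linalg.norm(Fy) < tol:
            break
        n = len(y)
        J = np.empty((n, n))                  # forward-difference Jacobian
        for j in range(n):
            dy = np.zeros(n); dy[j] = eps
            J[:, j] = (F(y + dy) - Fy) / eps
        y = y - np.linalg.solve(J, Fy)        # Newton step
    return y

# toy steady-state reaction-diffusion: u'' = u^2 on [0, 1], u(0) = 1, u(1) = 2
def F(u, h=0.25):
    full = np.concatenate([[1.0], u, [2.0]])  # attach boundary values
    return (full[2:] - 2 * full[1:-1] + full[:-2]) / h**2 - full[1:-1] ** 2

print(newton(F, np.linspace(1.0, 2.0, 3)))    # interior grid values
```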
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pickles, W.L.; McClure, J.W.; Howell, R.H.
1978-01-01
A sophisticated nonlinear multiparameter fitting program has been used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the chi-squared matrix or from error relaxation techniques. It has been shown that non-dispersive x-ray fluorescence analysis of 0.1 to 1 mg freeze-dried UNO/sub 3/ can have an accuracy of 0.2% in 1000 sec.
AITRAC: Augmented Interactive Transient Radiation Analysis by Computer. User's information manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1977-10-01
AITRAC is a program designed for on-line, interactive, DC, and transient analysis of electronic circuits. The program solves the linear and nonlinear simultaneous equations which characterize the mathematical models used to predict circuit response. The program features 100-external-node, 200-branch capability; a conversational, free-format input language; built-in junction, FET, MOS, and switch models; a sparse matrix algorithm with extended-precision H matrix and T vector calculations for fast and accurate execution; linear transconductances: beta, GM, MU, ZM; accurate and fast radiation effects analysis; a special interface for user-defined equations; selective control of multiple outputs; graphical outputs in wide and narrow formats; and on-line parameter modification capability. The user describes the problem by entering the circuit topology and part parameters. The program then automatically generates and solves the circuit equations, providing the user with printed or plotted output. The circuit topology and/or part values may then be changed by the user and a new analysis requested. Circuit descriptions may be saved on disk files for storage and later use. The program contains built-in standard models for resistors, voltage and current sources, capacitors, inductors including mutual couplings, switches, junction diodes and transistors, FETs, and MOS devices. Nonstandard models may be constructed from standard models or by using the special equations interface. Time functions may be described by straight-line segments or by sine, damped sine, and exponential functions. 42 figures, 1 table. (RWR)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Létourneau, Daniel, E-mail: daniel.letourneau@rmp.uh.on.ca; Department of Radiation Oncology, University of Toronto, Toronto, Ontario; McNiven, Andrea
2013-05-01
Purpose: The objective of this work was to develop a collaborative quality assurance (CQA) program to assess the performance of intensity modulated radiation therapy (IMRT) planning and delivery across the province of Ontario, Canada. Methods and Materials: The CQA program was designed to be a comprehensive end-to-end test that can be completed on multiple planning and delivery platforms. The first year of the program included a head-and-neck (H and N) planning exercise and an on-site visit to acquire dosimetric measurements to assess planning and delivery performance. A single dosimeter was used at each institution, and the planned-to-measured dose agreement was evaluated for both the H and N plan and a standard plan (linear-accelerator specific) that was created to enable a direct comparison between centers with similar infrastructure. Results: CQA program feasibility was demonstrated through participation of all 13 radiation therapy centers in the province. Planning and delivery were completed on a variety of infrastructure (treatment planning systems and linear accelerators). The planning exercise was completed using both static gantry and rotational IMRT, and planned-to-delivered dose agreement (pass rates) for 3%/3-mm gamma evaluation was greater than 90% (92.6%-99.6%). Conclusions: All centers had acceptable results, but variation in planned-to-delivered dose agreement for the same planning and delivery platform was noted. The upper end of the range will provide an achievable target for other centers through continued quality improvement, aided by feedback provided by the program through the use of standard plans and simple test fields.
AN ADA LINEAR ALGEBRA PACKAGE MODELED AFTER HAL/S
NASA Technical Reports Server (NTRS)
Klumpp, A. R.
1994-01-01
This package extends the Ada programming language to include linear algebra capabilities similar to those of the HAL/S programming language. The package is designed for avionics applications such as Space Station flight software. In addition to the HAL/S built-in functions, the package incorporates the quaternion functions used in the Shuttle and Galileo projects, and routines from LINPAK that solve systems of equations involving general square matrices. Language conventions in this package follow those of HAL/S to the maximum extent practical and minimize the effort required for writing new avionics software and translating existing software into Ada. Valid numeric types in this package include scalar, vector, matrix, and quaternion declarations. (Quaternions are four-component vectors used in representing motion between two coordinate frames.) Single-precision and double-precision floating-point arithmetic are available in addition to the standard double-precision integer manipulation. Infix operators are used instead of function calls to define dot products, cross products, quaternion products, and mixed scalar-vector, scalar-matrix, and vector-matrix products. The package contains two generic programs: one for floating point and one for integer. The actual component type is passed as a formal parameter to the generic linear algebra package. The procedures for solving systems of linear equations defined by general matrices include GEFA, GECO, GESL, and GIDI. The HAL/S functions include ABVAL, UNIT, TRACE, DET, INVERSE, TRANSPOSE, GET, PUT, FETCH, PLACE, and IDENTITY. This package is written in Ada (Version 1.2) for batch execution and is machine independent. The linear algebra software depends on nothing outside the Ada language except for a call to a square root function for floating point scalars (such as SQRT in the DEC VAX MATHLIB library). This program was developed in 1989 and is a copyrighted work with all copyright vested in NASA.
Handling Math Expressions in Economics: Recoding Spreadsheet Teaching Tool of Growth Models
ERIC Educational Resources Information Center
Moro-Egido, Ana I.; Pedauga, Luis E.
2017-01-01
In the present paper, we develop a teaching methodology for economic theory. The main contribution of this paper relies on combining the interactive characteristics of spreadsheet programs such as Excel and Unicode plain-text linear format for mathematical expressions. The advantage of Unicode standard rests on its ease for writing and reading…
Simultaneous multielement atomic absorption spectrometry with graphite furnace atomization
NASA Astrophysics Data System (ADS)
Harnly, James M.; Miller-Ihli, Nancy J.; O'Haver, Thomas C.
The extended analytical range capability of a simultaneous multielement atomic absorption continuum source spectrometer (SIMAAC) was tested for furnace atomization with respect to the signal measurement mode (peak height and area), the atomization mode (from the wall or from a platform), and the temperature program mode (stepped or ramped atomization). These parameters were evaluated with respect to the shapes of the analytical curves, the detection limits, carry-over contamination, and accuracy. Peak area measurements gave more linear calibration curves. Methods for slowing the atomization-step heating rate (the use of a ramped temperature program or a platform) produced similar calibration curves and longer linear ranges than atomization with a stepped temperature program. Peak height detection limits were best using stepped atomization from the wall. Peak area detection limits for all atomization modes were similar. Carry-over contamination was worse for peak area than peak height, worse for ramped atomization than stepped atomization, and worse for atomization from a platform than from the wall. Accurate determinations (100 ± 12%) for Ca, Cu, Fe, Mn, and Zn in National Bureau of Standards' Standard Reference Materials Bovine Liver 1577 and Rice Flour 1568 were obtained using peak area measurements with ramped atomization from the wall and stepped atomization from a platform. Only stepped atomization from a platform gave accurate recoveries for K. Accurate recoveries, 100 ± 10%, with precisions ranging from 1 to 36% (standard deviation), were obtained for the determination of Al, Co, Cr, Fe, Mn, Mo, Ni, Pb, V, and Zn in Acidified Waters (NBS SRM 1643 and 1643a) using stepped atomization from a platform.
Menu-Driven Solver Of Linear-Programming Problems
NASA Technical Reports Server (NTRS)
Viterna, L. A.; Ferencz, D.
1992-01-01
Program assists inexperienced user in formulating linear-programming problems. A Linear Program Solver (ALPS) is a full-featured LP analysis program. Solves plain linear-programming problems as well as more-complicated mixed-integer and pure-integer programs. Also contains efficient technique for solution of purely binary linear-programming problems. Written entirely in IBM's APL2/PC software, Version 1.01. Packed program contains licensed material, property of IBM (copyright 1988, all rights reserved).
Galactic chemical evolution and nucleocosmochronology - Standard model with terminated infall
NASA Technical Reports Server (NTRS)
Clayton, D. D.
1984-01-01
Some exactly soluble families of models for the chemical evolution of the Galaxy are presented. The parameters considered include gas mass, the age-metallicity relation, the star mass vs. metallicity, the age distribution, and the mean age of dwarfs. A short BASIC program for calculating these parameters is given. The calculation of metallicity gradients, nuclear cosmochronology, and extinct radioactivities is addressed. An especially simple, mathematically linear model is recommended as a standard model of galaxies with truncated infall due to its internal consistency and compact display of the physical effects of the parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vargas, L.S.; Quintana, V.H.; Vannelli, A.
This paper deals with the use of Successive Linear Programming (SLP) for the solution of the Security-Constrained Economic Dispatch (SCED) problem. The authors tutorially describe an Interior Point Method (IPM) for the solution of Linear Programming (LP) problems, discussing important implementation issues that really make this method far superior to the simplex method. A study of the convergence of the SLP technique and a practical criterion to avoid oscillatory behavior in the iteration process are also proposed. A comparison of the proposed method with an efficient simplex code (MINOS) is carried out by solving SCED problems on two standard IEEE systems. The results show that the interior point technique is reliable, accurate, and more than two times faster than the simplex algorithm.
Tools for Basic Statistical Analysis
NASA Technical Reports Server (NTRS)
Luz, Paul L.
2005-01-01
Statistical Analysis Toolset is a collection of eight Microsoft Excel spreadsheet programs, each of which performs calculations pertaining to an aspect of statistical analysis. These programs present input and output data in user-friendly, menu-driven formats, with automatic execution. The following types of calculations are performed: Descriptive statistics are computed for a set of data x(i) (i = 1, 2, 3 . . . ) entered by the user. Normal Distribution Estimates will calculate the statistical value that corresponds to cumulative probability values, given a sample mean and standard deviation of the normal distribution. Normal Distribution from two Data Points will extend and generate a cumulative normal distribution for the user, given two data points and their associated probability values. Two programs perform two-way analysis of variance (ANOVA) with no replication or generalized ANOVA for two factors with four levels and three repetitions. Linear Regression-ANOVA will curve-fit data to the linear equation y=f(x) and will do an ANOVA to check its significance.
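The last of these tools admits a compact sketch (illustrative only; the spreadsheet implementation is not reproduced, and the data below are invented): fit y = a + b*x by least squares and test significance with the ANOVA F statistic.

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])

    n = len(x)
    b, a = np.polyfit(x, y, 1)                 # slope, intercept
    yhat = a + b * x
    ss_reg = np.sum((yhat - y.mean()) ** 2)    # regression sum of squares
    ss_res = np.sum((y - yhat) ** 2)           # residual sum of squares
    F = ss_reg / (ss_res / (n - 2))            # 1 and n-2 degrees of freedom
    print(f"slope={b:.3f}, intercept={a:.3f}, F={F:.1f}")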
ERIC Educational Resources Information Center
Moody, John Charles
Assessed were the effects of linear and modified linear programed materials on the achievement of slow learners in tenth grade Biological Sciences Curriculum Study (BSCS) Special Materials biology. Two hundred and six students were randomly placed into four programed materials formats: linear programed materials, modified linear program with…
NASA Technical Reports Server (NTRS)
Gupta, K. K.
1997-01-01
A multidisciplinary, finite element-based, highly graphics-oriented, linear and nonlinear analysis capability that includes such disciplines as structures, heat transfer, linear aerodynamics, computational fluid dynamics, and controls engineering has been achieved by integrating several new modules in the original STARS (STructural Analysis RoutineS) computer program. Each individual analysis module is general-purpose in nature and is effectively integrated to yield aeroelastic and aeroservoelastic solutions of complex engineering problems. Examples of advanced NASA Dryden Flight Research Center projects analyzed by the code in recent years include the X-29A, F-18 High Alpha Research Vehicle/Thrust Vectoring Control System, B-52/Pegasus Generic Hypersonics, National AeroSpace Plane (NASP), SR-71/Hypersonic Launch Vehicle, and High Speed Civil Transport (HSCT) projects. Extensive graphics capabilities exist for convenient model development and postprocessing of analysis results. The program is written in modular form in standard FORTRAN language to run on a variety of computers, such as the IBM RISC/6000, SGI, DEC, Cray, and personal computer; associated graphics codes use OpenGL and IBM/graPHIGS language for color depiction. This program is available from COSMIC, the NASA agency for distribution of computer programs.
Pluchino, Alessandra; Lee, Sae Yong; Asfour, Shihab; Roos, Bernard A; Signorile, Joseph F
2012-07-01
To compare the impacts of Tai Chi, a standard balance exercise program, and a video game balance board program on postural control and perceived falls risk. Randomized controlled trial. Research laboratory. Independent seniors (N=40; 72.5±8.40) began the training; 27 completed. Tai Chi, a standard balance exercise program, and a video game balance board program. The following were used as measures: Timed Up & Go, One-Leg Stance, functional reach, Tinetti Performance Oriented Mobility Assessment, force plate center of pressure (COP) and time to boundary, dynamic posturography (DP), Falls Risk for Older People-Community Setting, and Falls Efficacy Scale. No significant differences were seen between groups for any outcome measure at baseline, nor were there significant time or group × time differences for any field test or questionnaire. No group × time differences were seen for any COP measures; however, significant time differences were seen across time for the entire sample for total COP, for 3 of 4 anterior/posterior displacement and both velocity measures, and for 1 displacement and 1 velocity medial/lateral measure. For DP, significant improvements in the overall score (dynamic movement analysis score) and in 2 of the 3 linear and angular measures were seen for the sample. The video game balance board program, which can be performed at home, was as effective as Tai Chi and the standard balance exercise program in improving postural control and balance as dictated by the force plate postural sway and DP measures. This finding may have implications for exercise adherence because the at-home nature of the intervention eliminates many obstacles to exercise training. Copyright © 2012 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Multi-dimensional Rankings, Program Termination, and Complexity Bounds of Flowchart Programs
NASA Astrophysics Data System (ADS)
Alias, Christophe; Darte, Alain; Feautrier, Paul; Gonnord, Laure
Proving the termination of a flowchart program can be done by exhibiting a ranking function, i.e., a function from the program states to a well-founded set, which strictly decreases at each program step. A standard method to automatically generate such a function is to compute invariants for each program point and to search for a ranking in a restricted class of functions that can be handled with linear programming techniques. Previous algorithms based on affine rankings either are applicable only to simple loops (i.e., single-node flowcharts) and rely on enumeration, or are not complete in the sense that they are not guaranteed to find a ranking in the class of functions they consider, if one exists. Our first contribution is to propose an efficient algorithm to compute ranking functions: It can handle flowcharts of arbitrary structure, the class of candidate rankings it explores is larger, and our method, although greedy, is provably complete. Our second contribution is to show how to use the ranking functions we generate to get upper bounds for the computational complexity (number of transitions) of the source program. This estimate is a polynomial, which means that we can handle programs with more than linear complexity. We applied the method on a collection of test cases from the literature. We also show the links and differences with previous techniques based on the insertion of counters.
Baran, Richard; Northen, Trent R
2013-10-15
Untargeted metabolite profiling using liquid chromatography and mass spectrometry coupled via electrospray ionization is a powerful tool for the discovery of novel natural products, metabolic capabilities, and biomarkers. However, the elucidation of the identities of uncharacterized metabolites from spectral features remains challenging. A critical step in the metabolite identification workflow is the assignment of redundant spectral features (adducts, fragments, multimers) and calculation of the underlying chemical formula. Inspection of the data by experts using computational tools solving partial problems (e.g., chemical formula calculation for individual ions) can be performed to disambiguate alternative solutions and provide reliable results. However, manual curation is tedious and not readily scalable or standardized. Here we describe an automated procedure for the robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming optimization (RAMSI). Chemical rules among related ions are expressed as linear constraints and both the spectra interpretation and chemical formula calculation are performed in a single optimization step. This approach is unbiased in that it does not require predefined sets of neutral losses and positive and negative polarity spectra can be combined in a single optimization. The procedure was evaluated with 30 experimental mass spectra and was found to effectively identify the protonated or deprotonated molecule ([M + H](+) or [M - H](-)) while being robust to the presence of background ions. RAMSI provides a much-needed standardized tool for interpreting ions for subsequent identification in untargeted metabolomics workflows.
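The chemical-formula step can be illustrated with a toy mixed-integer linear program (a hedged sketch; RAMSI's full adduct/fragment constraint set is not reproduced, and the element set, bounds, and target mass below are assumptions): find integer counts of C, H, and O whose exact masses best match a measured neutral mass, using an auxiliary variable t to linearize the absolute error.

    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    masses = np.array([12.0, 1.007825, 15.994915])   # exact masses of C, H, O
    target = 180.063388                              # e.g., glucose C6H12O6
    # variables: [nC, nH, nO, t]; minimize t >= |masses . n - target|
    c = np.array([0.0, 0.0, 0.0, 1.0])
    A = np.array([[*masses, -1.0], [*(-masses), -1.0]])
    cons = LinearConstraint(A, ub=[target, -target])
    res = milp(c, constraints=cons, integrality=[1, 1, 1, 0],
               bounds=Bounds([0, 0, 0, 0], [40, 80, 40, np.inf]))
    print("best formula: C{:.0f}H{:.0f}O{:.0f}".format(*res.x[:3]))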
Reference measurement procedure for total glycerides by isotope dilution GC-MS.
Edwards, Selvin H; Stribling, Shelton L; Pyatt, Susan D; Kimberly, Mary M
2012-04-01
The CDC's Lipid Standardization Program established the chromotropic acid (CA) reference measurement procedure (RMP) as the accuracy base for standardization and metrological traceability for triglyceride testing. The CA RMP has several disadvantages, including lack of ruggedness. It uses obsolete instrumentation and hazardous reagents. To overcome these problems the CDC developed an isotope dilution GC-MS (ID-GC-MS) RMP for total glycerides in serum. We diluted serum samples with Tris-HCl buffer solution and spiked 200-μL aliquots with [(13)C(3)]-glycerol. These samples were incubated and hydrolyzed under basic conditions. The samples were dried, derivatized with acetic anhydride and pyridine, extracted with ethyl acetate, and analyzed by ID-GC-MS. Linearity, imprecision, and accuracy were evaluated by analyzing calibrator solutions, 10 serum pools, and a standard reference material (SRM 1951b). The calibration response was linear for the range of calibrator concentrations examined (0-1.24 mmol/L) with a slope and intercept of 0.717 (95% CI, 0.7123-0.7225) and 0.3122 (95% CI, 0.3096-0.3140), respectively. The limit of detection was 14.8 μmol/L. The mean %CV for the sample set (serum pools and SRM) was 1.2%. The mean %bias from NIST isotope dilution MS values for SRM 1951b was 0.7%. This ID-GC-MS RMP has the specificity and ruggedness to accurately quantify total glycerides in the serum pools used in the CDC's Lipid Standardization Program and demonstrates sufficiently acceptable agreement with the NIST primary RMP for total glyceride measurement.
Projective-Dual Method for Solving Systems of Linear Equations with Nonnegative Variables
NASA Astrophysics Data System (ADS)
Ganin, B. V.; Golikov, A. I.; Evtushenko, Yu. G.
2018-02-01
In order to solve an underdetermined system of linear equations with nonnegative variables, the projection of a given point onto its solution set is sought. The dual of this problem—the problem of unconstrained maximization of a piecewise-quadratic function—is solved by Newton's method. The problem of unconstrained optimization dual to the regularized problem of finding the projection onto the solution set of the system is also considered. A connection of duality theory and Newton's method with some known algorithms for projecting onto a standard simplex is shown. Using the constraint structure of the transportation linear programming problem as an example, the possibility of increasing the efficiency of calculating the generalized Hessian matrix is demonstrated. Some examples of numerical calculations using MATLAB are presented.
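The simplex-projection connection mentioned above can be illustrated with the classical sort-and-threshold projection onto the standard simplex (a well-known algorithm given here as a hedged Python sketch; it is not the authors' projective-dual method):

    import numpy as np

    def project_to_simplex(v):
        # Euclidean projection of v onto {x : x >= 0, sum(x) = 1}.
        u = np.sort(v)[::-1]                        # sort descending
        css = np.cumsum(u)
        j = np.arange(1, len(v) + 1)
        rho = np.nonzero(u + (1.0 - css) / j > 0)[0][-1]
        theta = (1.0 - css[rho]) / (rho + 1)        # common shift
        return np.maximum(v + theta, 0.0)

    x = project_to_simplex(np.array([0.8, 1.2, -0.3]))
    print(x, x.sum())    # nonnegative components summing to 1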
Multiple object tracking using the shortest path faster association algorithm.
Xi, Zhenghao; Liu, Heping; Liu, Huaping; Yang, Bin
2014-01-01
To solve the problem of persistent multiple object tracking in cluttered environments, this paper presents a novel tracking association approach based on the shortest path faster algorithm. First, the multiple object tracking is formulated as an integer programming problem on a flow network. Then we relax the integer program to a standard linear programming problem, so that the global optimum can be quickly obtained using the shortest path faster algorithm. The proposed method avoids the difficulties of integer programming, and it has a lower worst-case complexity than competing methods but better robustness and tracking accuracy in complex environments. Simulation results show that the proposed algorithm takes less time than other state-of-the-art methods and can operate in real time. PMID:25215322
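The relaxation step can be illustrated on a toy two-track, two-detection assignment (a hedged sketch with invented costs; the paper's shortest path faster solver is not used, and SciPy's LP solver stands in):

    import numpy as np
    from scipy.optimize import linprog

    cost = np.array([[1.0, 4.0],
                     [3.0, 2.0]])      # cost[i, j]: track i -> detection j
    c = cost.ravel()                   # variables x00, x01, x10, x11
    A_eq = np.array([[1, 1, 0, 0],     # each track assigned exactly once
                     [0, 0, 1, 1],
                     [1, 0, 1, 0],     # each detection used exactly once
                     [0, 1, 0, 1]], dtype=float)
    res = linprog(c, A_eq=A_eq, b_eq=np.ones(4),
                  bounds=[(0, 1)] * 4, method="highs")
    # The constraint matrix is totally unimodular, so the LP relaxation
    # returns an integral assignment here.
    print(res.x.reshape(2, 2))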
New assay of protective activity of Rocky Mountain spotted fever vaccines.
Anacker, R L; Smith, R F; Mann, R E; Hamilton, M A
1976-01-01
Areas under the fever curves of guinea pigs inoculated with Rocky Mountain spotted fever vaccine over a restricted dose range and infected with a standardized dose of Rickettsia rickettsii varied linearly with log10 dose of vaccine. A calculator was programmed to plot fever curves and calculate the vaccine dose that reduced the fever of infected animals by 50%. PMID:823177
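The calculation amounts to inverting a straight-line fit of response against log10 dose (a hedged sketch with invented numbers, not the original calculator program):

    import numpy as np

    log_dose = np.array([0.0, 0.5, 1.0, 1.5, 2.0])       # log10 vaccine dose
    fever_area = np.array([9.8, 8.1, 5.9, 4.2, 2.1])     # area under fever curve
    unvaccinated = 10.0                                  # control response (assumed)

    b, a = np.polyfit(log_dose, fever_area, 1)           # slope, intercept
    pd50 = 10 ** ((0.5 * unvaccinated - a) / b)          # dose halving the fever
    print(f"50% protective dose ~ {pd50:.2f} dose units")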
NASA Astrophysics Data System (ADS)
Caffo, Michele; Czyż, Henryk; Gunia, Michał; Remiddi, Ettore
2009-03-01
We present the program BOKASUN for fast and precise evaluation of the Master Integrals of the two-loop self-mass sunrise diagram for arbitrary values of the internal masses and the external four-momentum. We use a combination of two methods: a Bernoulli accelerated series expansion and a Runge-Kutta numerical solution of a system of linear differential equations. Program summary: Program title: BOKASUN Catalogue identifier: AECG_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECG_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 9404 No. of bytes in distributed program, including test data, etc.: 104 123 Distribution format: tar.gz Programming language: FORTRAN77 Computer: Any computer with a Fortran compiler accepting FORTRAN77 standard. Tested on various PC's with LINUX Operating system: LINUX RAM: 120 kbytes Classification: 4.4 Nature of problem: Any integral arising in the evaluation of the two-loop sunrise Feynman diagram can be expressed in terms of a given set of Master Integrals, which should be calculated numerically. The program provides a fast and precise evaluation method of the Master Integrals for arbitrary (but not vanishing) masses and arbitrary value of the external momentum. Solution method: The integrals depend on three internal masses and the external momentum squared p. The method is a combination of an accelerated expansion in 1/p in its (pretty large!) region of fast convergence and of a Runge-Kutta numerical solution of a system of linear differential equations. Running time: To obtain 4 Master Integrals on PC with 2 GHz processor it takes 3 μs for series expansion with pre-calculated coefficients, 80 μs for series expansion without pre-calculated coefficients, from a few seconds up to a few minutes for Runge-Kutta method (depending on the required accuracy and the values of the physical parameters).
An Instructional Note on Linear Programming--A Pedagogically Sound Approach.
ERIC Educational Resources Information Center
Mitchell, Richard
1998-01-01
Discusses the place of linear programming in college curricula and the advantages of using linear-programming software. Lists important characteristics of computer software used in linear programming for more effective teaching and learning. (ASK)
Polarimetric measures of selected variable stars
NASA Astrophysics Data System (ADS)
Elias, N. M., II; Koch, R. H.; Pfeiffer, R. J.
2008-10-01
Aims: The purpose of this paper is to summarize and interpret unpublished optical polarimetry for numerous program stars that were observed over the past decades at the Flower and Cook Observatory (FCO), University of Pennsylvania. We also make the individual calibrated measures available for long-term comparisons with new data. Methods: We employ three techniques to search for intrinsic variability within each dataset. First, when the observations for a given star and filter are numerous enough and when a period has been determined previously via photometry or spectroscopy, the polarimetric measures are plotted versus phase. If a statistically significant pattern appears, we attribute it to intrinsic variability. Second, we compare means of the FCO data to means from other workers. If they are statistically different, we conclude that the object exhibits long-term intrinsic variability. Third, we calculate the standard deviation for each program star and filter and compare it to the standard deviation estimated from comparable polarimetric standards. If the standard deviation of the program star is at least three times the value estimated from the polarimetric standards, the former is considered intrinsically variable. All of these statements are strengthened when variability appears in multiple filters. Results: We confirm the existence of an electron-scattering cloud at L1 in the β Per system, and find that LY Aur and HR 8281 possess scattering envelopes. Intrinsic polarization was detected for Nova Cas 1993 as early as day +3. We detected polarization variability near the primary eclipse of 32 Cyg. There is marginal evidence for polarization variability of the β Cepheid type star γ Peg. The other objects of this class exhibited no variability. All but one of the β Cepheid objects (ES Vul) fall on a tight linear relationship between linear polarization and E(B-V), in spite of the fact that the stars lie along different lines of sight. This dependence falls slightly below the classical upper limit of Serkowski, Mathewson, and Ford. The table, which contains the polarization observations of the program stars discussed in this paper, is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/489/911
Using Coarrays to Parallelize Legacy Fortran Applications: Strategy and Case Study
Radhakrishnan, Hari; Rouson, Damian W. I.; Morris, Karla; ...
2015-01-01
This paper summarizes a strategy for parallelizing a legacy Fortran 77 program using the object-oriented (OO) and coarray features that entered Fortran in the 2003 and 2008 standards, respectively. OO programming (OOP) facilitates the construction of an extensible suite of model-verification and performance tests that drive the development. Coarray parallel programming facilitates a rapid evolution from a serial application to a parallel application capable of running on multicore processors and many-core accelerators in shared and distributed memory. We delineate 17 code modernization steps used to refactor and parallelize the program and study the resulting performance. Our initial studies were done using the Intel Fortran compiler on a 32-core shared memory server. Scaling behavior was very poor, and profile analysis using TAU showed that the bottleneck in the performance was due to our implementation of a collective, sequential summation procedure. We were able to improve the scalability and achieve nearly linear speedup by replacing the sequential summation with a parallel, binary tree algorithm. We also tested the Cray compiler, which provides its own collective summation procedure. Intel provides no collective reductions. With Cray, the program shows linear speedup even in distributed-memory execution. We anticipate similar results with other compilers once they support the new collective procedures proposed for Fortran 2015.
Li, Chuan; Li, Lin; Zhang, Jie; Alexov, Emil
2012-01-01
The Gauss-Seidel method is a standard iterative numerical method widely used to solve a system of equations and, in general, is more efficient compared with other iterative methods, such as the Jacobi method. However, the standard implementation of the Gauss-Seidel method restricts its utilization in parallel computing due to its requirement of using updated neighboring values (i.e., in the current iteration) as soon as they are available. Here we report an efficient and exact (not requiring assumptions) method to parallelize iterations and to reduce the computational time as a linear/nearly linear function of the number of CPUs. In contrast to other existing solutions, our method does not require any assumptions and is equally applicable for solving linear and nonlinear equations. This approach is implemented in the DelPhi program, which is a finite difference Poisson-Boltzmann equation solver to model electrostatics in molecular biology. This development makes the iterative procedure for obtaining the electrostatic potential distribution in the parallelized DelPhi severalfold faster than that in the serial code. Further, we demonstrate the advantages of the new parallelized DelPhi by computing the electrostatic potential and the corresponding energies of large supramolecular structures. PMID:22674480
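For reference, the serial iteration whose data dependence makes naive parallelization hard looks like this (a minimal sketch on a small symmetric positive definite system, not the DelPhi implementation):

    import numpy as np

    def gauss_seidel(A, b, iters=2000, tol=1e-12):
        # Each unknown is updated with neighbors' values from the *current*
        # sweep, which is exactly the dependence discussed above.
        x = np.zeros_like(b)
        for _ in range(iters):
            x_old = x.copy()
            for i in range(len(b)):
                s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                x[i] = (b[i] - s) / A[i, i]
            if np.linalg.norm(x - x_old, np.inf) < tol:
                break
        return x

    n = 10                                  # 1-D Poisson test matrix (SPD)
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    print("residual:", np.max(np.abs(A @ gauss_seidel(A, b) - b)))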
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, J; Gao, H
2016-06-15
Purpose: Different from conventional computed tomography (CT), spectral CT based on energy-resolved photon-counting detectors is able to provide unprecedented material composition. However, an important missing piece for accurate spectral CT is to incorporate the detector response function (DRF), which is distorted by factors such as pulse pileup and charge sharing. In this work, we propose material reconstruction methods for spectral CT with the DRF. Methods: The polyenergetic X-ray forward model takes the DRF into account for accurate material reconstruction. Two image reconstruction methods are proposed: a direct method based on the nonlinear data fidelity from the DRF-based forward model, and a linear-data-fidelity based method that relies on spectral rebinning so that the corresponding DRF matrix is invertible. Then the image reconstruction problem is regularized with an isotropic TV term and solved by the alternating direction method of multipliers. Results: The simulation results suggest that the proposed methods provided more accurate material compositions than the standard method without the DRF. Moreover, the proposed method with linear data fidelity had improved reconstruction quality compared with the proposed method with nonlinear data fidelity. Conclusion: We have proposed material reconstruction methods for spectral CT with the DRF, which provided more accurate material compositions than the standard methods without the DRF. Moreover, the proposed method with linear data fidelity had improved reconstruction quality compared with the proposed method with nonlinear data fidelity. Jiulong Liu and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).
Model predictive control of P-time event graphs
NASA Astrophysics Data System (ADS)
Hamri, H.; Kara, R.; Amari, S.
2016-12-01
This paper deals with model predictive control of discrete event systems modelled by P-time event graphs. First, the model is obtained by using the dater evolution model written in standard algebra. Then, for the control law, we use finite-horizon model predictive control. For the closed-loop control, we use infinite-horizon model predictive control (IH-MPC). The latter is an approach that calculates static feedback gains that ensure the stability of the closed-loop system while respecting the constraints on the control vector. The IH-MPC problem is formulated as a convex linear program subject to a linear matrix inequality constraint. Finally, the proposed methodology is applied to a transportation system.
User's manual for LINEAR, a FORTRAN program to derive linear aircraft models
NASA Technical Reports Server (NTRS)
Duke, Eugene L.; Patterson, Brian P.; Antoniewicz, Robert F.
1987-01-01
This report documents a FORTRAN program that provides a powerful and flexible tool for the linearization of aircraft models. The program LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model.
Analysis of Slope Limiters on Irregular Grids
NASA Technical Reports Server (NTRS)
Berger, Marsha; Aftosmis, Michael J.
2005-01-01
This paper examines the behavior of flux and slope limiters on non-uniform grids in multiple dimensions. Many slope limiters in standard use do not preserve linear solutions on irregular grids, impacting both accuracy and convergence. We rewrite some well-known limiters to highlight their underlying symmetry, and use this form to examine the properties of both traditional and novel limiter formulations on non-uniform meshes. A consistent method of handling stretched meshes is developed which is linearity preserving for arbitrary mesh stretchings and reduces to common limiters on uniform meshes. In multiple dimensions we analyze the monotonicity region of the gradient vector and show that the multidimensional limiting problem may be cast as the solution of a linear programming problem. For some special cases we present a new directional limiting formulation that preserves linear solutions in multiple dimensions on irregular grids. Computational results using model problems and complex three-dimensional examples are presented, demonstrating accuracy, monotonicity, and robustness.
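A classical cell-wise limiter of the kind analyzed above is Barth-Jespersen limiting, sketched here in one dimension on an irregular stencil (a hedged illustration; the paper's linearity-preserving formulation differs):

    import numpy as np

    def barth_jespersen(u_c, grad, dx_faces, u_neighbors):
        # Scale the gradient so reconstructed face values stay within the
        # min/max of the cell and its neighbors.
        u_min = min(u_neighbors.min(), u_c)
        u_max = max(u_neighbors.max(), u_c)
        phi = 1.0
        for dx in dx_faces:                       # offsets to face midpoints
            du = grad * dx
            if du > 0:
                phi = min(phi, (u_max - u_c) / du)
            elif du < 0:
                phi = min(phi, (u_min - u_c) / du)
        return phi * grad

    # Irregular stencil: faces at unequal distances from the centroid.
    print(barth_jespersen(1.0, grad=2.0,
                          dx_faces=np.array([-0.3, 0.7]),
                          u_neighbors=np.array([0.4, 1.8])))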
Multiple regression technique for Pth degree polynomials with and without linear cross products
NASA Technical Reports Server (NTRS)
Davis, J. W.
1973-01-01
A multiple regression technique was developed by which the nonlinear behavior of specified independent variables can be related to a given dependent variable. The polynomial expression can be of Pth degree and can incorporate N independent variables. Two cases are treated such that mathematical models can be studied both with and without linear cross products. The resulting surface fits can be used to summarize trends for a given phenomenon and provide a mathematical relationship for subsequent analysis. To implement this technique, separate computer programs were developed for the case without linear cross products and for the case incorporating such cross products; these programs evaluate the various constants in the model regression equation. In addition, the significance of the estimated regression equation is considered, and the standard deviation, the F statistic, the maximum absolute percent error, and the average of the absolute values of the percent error are evaluated. The computer programs and their manner of utilization are described. Sample problems are included to illustrate the use and capability of the technique; they show the output formats and typical plots comparing computer results to each set of input data.
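The two cases can be sketched for N = 2 variables and degree P = 2 (an illustrative Python stand-in for the FORTRAN programs; the synthetic data are invented):

    import numpy as np

    rng = np.random.default_rng(0)
    x1, x2 = rng.uniform(0, 1, 100), rng.uniform(0, 1, 100)
    y = 5 + 2 * x1 - x2 + 0.5 * x1 * x2 + rng.normal(0, 0.01, 100)

    X_pure = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2])
    X_cross = np.column_stack([X_pure, x1 * x2])   # add linear cross product

    for name, X in [("without cross products", X_pure),
                    ("with cross products", X_cross)]:
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        pct_err = 100 * np.abs((X @ coef - y) / y)
        print(name, "max abs % error:", pct_err.max())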
NASA Technical Reports Server (NTRS)
Folta, David; Bauer, Frank H. (Technical Monitor)
2001-01-01
The autonomous formation flying control algorithm developed by the Goddard Space Flight Center (GSFC) for the New Millennium Program (NMP) Earth Observing-1 (EO-1) mission is investigated for applicability to libration point orbit formations. In the EO-1 formation-flying algorithm, control is accomplished via linearization about a reference transfer orbit with a state transition matrix (STM) computed from state inputs. The effect of libration point orbit dynamics on this algorithm architecture is explored via computation of STMs using the flight-proven code, a monodromy matrix developed from an N-body model of a libration orbit, and a standard STM developed from the gravitational and Coriolis effects as measured at the libration point. A comparison of formation flying Delta-Vs calculated from these methods is made to a standard linear quadratic regulator (LQR) method. The universal 3-D approach is optimal in the sense that it can be accommodated as an open-loop or closed-loop control using only state information.
On the linear programming bound for linear Lee codes.
Astola, Helena; Tabus, Ioan
2016-01-01
Based on an invariance-type property of the Lee-compositions of a linear Lee code, additional equality constraints can be introduced to the linear programming problem of linear Lee codes. In this paper, we formulate this property in terms of an action of the multiplicative group of the field [Formula: see text] on the set of Lee-compositions. We show some useful properties of certain sums of Lee-numbers, which are the eigenvalues of the Lee association scheme, appearing in the linear programming problem of linear Lee codes. Using the additional equality constraints, we formulate the linear programming problem of linear Lee codes in a very compact form, leading to fast execution, which makes it possible to efficiently compute the bounds for large parameter values of the linear codes.
Corner Polyhedron and Intersection Cuts
2011-03-01
…finding valid inequalities for the set (1) that are violated by the point x̄. Typically, x̄ is an optimal solution of the linear programming (LP…
NASA Technical Reports Server (NTRS)
Oden, J. Tinsley; Fly, Gerald W.; Mahadevan, L.
1987-01-01
A hybrid stress finite element method is developed for accurate stress and vibration analysis of problems in linear anisotropic elasticity. A modified form of the Hellinger-Reissner principle is formulated for dynamic analysis, and an algorithm for the determination of the anisotropic elastic and compliance constants from experimental data is developed. These schemes were implemented in a finite element program for static and dynamic analysis of linear anisotropic two-dimensional elasticity problems. Specific numerical examples are considered to verify the accuracy of the hybrid stress approach and compare it with that of the standard displacement method, especially for highly anisotropic materials. It is shown that the hybrid stress approach gives much better results than the displacement method. Preliminary work on extensions of this method to three-dimensional elasticity is discussed, and the stress shape functions necessary for this extension are included.
[The linear dimensions of human body measurements of Chinese male pilots in standing posture].
Guo, Xiao-chao; Liu, Bao-shan; Xiao, Hui; Wang, Zhi-jie; Li, Rong; Guo, Hui
2003-02-01
To provide the latest anthropometric data of Chinese male pilots on a large scale, 94 linear dimensions of human body measurements were defined, of which 42 are fundamental items and 52 are recommended items. The computer databanks were programmed with subprograms preset for data checking, such as extreme value examination, logical judgment of data relationships, and measuring-remeasuring difference tests. All workers were well trained before pilot measurements. 1739 male pilots from the China Air Force were measured for the 42 fundamental items, and 904 of these pilots were measured for the 52 recommended items. Mean, standard deviation, the maximum value, the minimal value, and the 5th, 50th, and 95th percentile data for all 94 items are given. The quality of the data was stable and reliable. All data for the 94 linear dimensions of human body measurements are valid and reliable with high precision.
National economic models of industrial water use and waste treatment. [technology transfer
NASA Technical Reports Server (NTRS)
Thompson, R. G.; Calloway, J. A.
1974-01-01
The effects of air emission and solid waste restrictions on production costs and resource use by industry are investigated. A linear program is developed to analyze how resource use, production cost, and waste discharges in different types of production may be affected by resource-limiting policies of the government. The method is applied to modeling ethylene and ammonia plants at the design stage. Results show that the effects of increasingly restrictive wastewater effluent standards on increased energy use were small in both plants. Plant models were developed for other industries, and the program estimated the effects of wastewater discharge policies on industry production costs.
EZLP: An Interactive Computer Program for Solving Linear Programming Problems. Final Report.
ERIC Educational Resources Information Center
Jarvis, John J.; And Others
Designed for student use in solving linear programming problems, the interactive computer program described (EZLP) permits the student to input the linear programming model in exactly the same manner in which it would be written on paper. This report includes a brief review of the development of EZLP; narrative descriptions of program features,…
VENVAL : a plywood mill cost accounting program
Henry Spelter
1991-01-01
This report documents a package of computer programs called VENVAL. These programs prepare plywood mill data for a linear programming (LP) model that, in turn, calculates the optimum mix of products to make, given a set of technologies and market prices. (The software to solve a linear program is not provided and must be obtained separately.) Linear programming finds...
Ranking Forestry Investments With Parametric Linear Programming
Paul A. Murphy
1976-01-01
Parametric linear programming is introduced as a technique for ranking forestry investments under multiple constraints; it combines the advantages of simple ranking and linear programming as capital budgeting tools.
NASA Astrophysics Data System (ADS)
Mimasu, Ken; Sanz, Verónica; Williams, Ciaran
2016-08-01
We present predictions for the associated production of a Higgs boson at NLO+PS accuracy, including the effect of anomalous interactions between the Higgs and gauge bosons. We present our results in different frameworks, one in which the interaction vertex between the Higgs boson and Standard Model W and Z bosons is parameterized in terms of general Lorentz structures, and one in which Electroweak symmetry breaking is manifestly linear and the resulting operators arise through a six-dimensional effective field theory framework. We present analytic calculations of the Standard Model and Beyond the Standard Model contributions, and discuss the phenomenological impact of the higher order pieces. Our results are implemented in the NLO Monte Carlo program MCFM, and interfaced to shower Monte Carlos through the Powheg box framework.
Implementation of context independent code on a new array processor: The Super-65
NASA Technical Reports Server (NTRS)
Colbert, R. O.; Bowhill, S. A.
1981-01-01
The feasibility of rewriting standard uniprocessor programs into code which contains no context-dependent branches is explored. Context independent code (CIC) would contain no branches that might require different processing elements to branch different ways. In order to investigate the possibilities and restrictions of CIC, several programs were recoded into CIC and a four-element array processor was built. This processor (the Super-65) consisted of three 6502 microprocessors and the Apple II microcomputer. The results obtained were somewhat dependent upon the specific architecture of the Super-65 but within bounds, the throughput of the array processor was found to increase linearly with the number of processing elements (PEs). The slope of throughput versus PEs is highly dependent on the program and varied from 0.33 to 1.00 for the sample programs.
User's manual for interactive LINEAR: A FORTRAN program to derive linear aircraft models
NASA Technical Reports Server (NTRS)
Antoniewicz, Robert F.; Duke, Eugene L.; Patterson, Brian P.
1988-01-01
An interactive FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft aerodynamic models is documented in this report. The program LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied linear or nonlinear aerodynamic model. The nonlinear equations of motion used are six-degree-of-freedom equations with stationary atmosphere and flat, nonrotating earth assumptions. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model.
NASA standard: Trend analysis techniques
NASA Technical Reports Server (NTRS)
1988-01-01
This Standard presents descriptive and analytical techniques for NASA trend analysis applications. Trend analysis is applicable in all organizational elements of NASA connected with, or supporting, developmental/operational programs. Use of this Standard is not mandatory; however, it should be consulted for any data analysis activity requiring the identification or interpretation of trends. Trend Analysis is neither a precise term nor a circumscribed methodology, but rather connotes, generally, quantitative analysis of time-series data. For NASA activities, the appropriate and applicable techniques include descriptive and graphical statistics, and the fitting or modeling of data by linear, quadratic, and exponential models. Usually, but not always, the data is time-series in nature. Concepts such as autocorrelation and techniques such as Box-Jenkins time-series analysis would only rarely apply and are not included in this Standard. The document presents the basic ideas needed for qualitative and quantitative assessment of trends, together with relevant examples. A list of references provides additional sources of information.
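The model set named in the Standard (linear, quadratic, exponential) can be fit as follows (a hedged sketch with synthetic data; the Standard itself prescribes no particular software):

    import numpy as np
    from scipy.optimize import curve_fit

    t = np.arange(10.0)                     # time index of the series
    y = 3.0 * np.exp(0.25 * t) + np.random.default_rng(2).normal(0, 0.2, 10)

    lin = np.polyfit(t, y, 1)               # linear trend model
    quad = np.polyfit(t, y, 2)              # quadratic trend model
    (a, b), _ = curve_fit(lambda t, a, b: a * np.exp(b * t), t, y, p0=(1.0, 0.1))

    for name, yhat in [("linear", np.polyval(lin, t)),
                       ("quadratic", np.polyval(quad, t)),
                       ("exponential", a * np.exp(b * t))]:
        print(name, "RMS error:", np.sqrt(np.mean((yhat - y) ** 2)))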
Bohling, Geoffrey C.; Butler, J.J.
2001-01-01
We have developed a program for inverse analysis of two-dimensional linear or radial groundwater flow problems. The program, 1r2dinv, uses standard finite difference techniques to solve the groundwater flow equation for a horizontal or vertical plane with heterogeneous properties. In radial mode, the program simulates flow to a well in a vertical plane, transforming the radial flow equation into an equivalent problem in Cartesian coordinates. The physical parameters in the model are horizontal or x-direction hydraulic conductivity, anisotropy ratio (vertical to horizontal conductivity in a vertical model, y-direction to x-direction in a horizontal model), and specific storage. The program allows the user to specify arbitrary and independent zonations of these three parameters and also to specify which zonal parameter values are known and which are unknown. The Levenberg-Marquardt algorithm is used to estimate parameters from observed head values. Particularly powerful features of the program are the ability to perform simultaneous analysis of heads from different tests and the inclusion of the wellbore in the radial mode. These capabilities allow the program to be used for analysis of suites of well tests, such as multilevel slug tests or pumping tests in a tomographic format. The combination of information from tests stressing different vertical levels in an aquifer provides the means for accurately estimating vertical variations in conductivity, a factor profoundly influencing contaminant transport in the subsurface. © 2001 Elsevier Science Ltd. All rights reserved.
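The estimation step alone can be sketched with SciPy's Levenberg-Marquardt interface (a toy analytic drawdown model stands in for the program's finite-difference flow solver; the model form, data, and starting values are assumptions):

    import numpy as np
    from scipy.optimize import least_squares

    r_obs = np.array([1.0, 2.0, 4.0, 8.0, 16.0])        # observation radii
    h_obs = np.array([2.31, 1.92, 1.55, 1.13, 0.78])    # observed heads

    def model(p, r):
        a, b = p               # stand-ins for the zonal flow parameters
        return a * np.log(b / r)

    sol = least_squares(lambda p: model(p, r_obs) - h_obs,
                        x0=[1.0, 50.0], method="lm")    # Levenberg-Marquardt
    print("estimated parameters:", sol.x)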
Development and validation of a general purpose linearization program for rigid aircraft models
NASA Technical Reports Server (NTRS)
Duke, E. L.; Antoniewicz, R. F.
1985-01-01
A FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft models is discussed. The program LINEAR numerically determines a linear systems model using nonlinear equations of motion and a user-supplied, nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model. Also included in the report is a comparison of linear and nonlinear models for a high-performance aircraft.
Linearized Programming of Memristors for Artificial Neuro-Sensor Signal Processing
Yang, Changju; Kim, Hyongsuk
2016-01-01
A linearized programming method for memristor-based neural weights is proposed. The memristor is known as an ideal element for implementing a neural synapse due to its embedded functions of analog memory and analog multiplication. Its resistance variation under a voltage input is generally a nonlinear function of time, and linearizing the memristance variation in time is very important for ease of memristor programming. In this paper, a method utilizing an anti-serial architecture for linear weight programming is proposed. The anti-serial architecture is composed of two memristors with opposite polarities; it linearizes the variation of memristance through the complementary actions of the two memristors. For programming a memristor, an additional memristor with opposite polarity is employed. The linearization effect of weight programming in an anti-serial architecture is investigated, and the memristor bridge synapse, which is built with two sets of the anti-serial memristor architecture, is taken as an application example of the proposed method. Simulations are performed with memristors of both the linear drift model and a nonlinear model. PMID:27548186
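The linearization claim can be checked with a small simulation of two linear-drift memristors in anti-series (device values are illustrative assumptions, not the paper's): because both devices carry the same current with opposite polarity, one state variable grows as the other shrinks, the series resistance stays constant, and the differential weight varies linearly in time.

    import numpy as np

    Ron, Roff, D, mu = 100.0, 16e3, 10e-9, 1e-14   # assumed device constants
    dt, V = 1e-3, 1.0                              # time step, programming voltage
    w1 = w2 = 0.5 * D                              # initial state variables

    for step in range(201):
        R1 = Ron * w1 / D + Roff * (1 - w1 / D)
        R2 = Ron * w2 / D + Roff * (1 - w2 / D)
        i = V / (R1 + R2)                          # same current through both
        if step % 50 == 0:
            w = (R2 - R1) / (R1 + R2)              # bridge-style weight
            print(f"t={step * dt:.2f}s  R1+R2={R1 + R2:.0f} ohm  weight={w:+.4f}")
        w1 = np.clip(w1 + mu * Ron / D * i * dt, 0.0, D)   # forward polarity
        w2 = np.clip(w2 - mu * Ron / D * i * dt, 0.0, D)   # reversed polarity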
Optimization Research of Generation Investment Based on Linear Programming Model
NASA Astrophysics Data System (ADS)
Wu, Juan; Ge, Xueqian
Linear programming is an important branch of operational research and a mathematical method to assist people in carrying out scientific management. GAMS is an advanced simulation and optimization modeling language that combines large, complex mathematical programming formulations, such as linear programming (LP), nonlinear programming (NLP), and mixed-integer programming (MIP), with system simulation. In this paper, based on a linear programming model, optimal generation investment decision-making is simulated and analyzed. At last, the optimal installed capacity of power plants and the final total cost are obtained, which provides a rational decision-making basis for optimized investments.
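A toy version of such a generation-investment LP, written with SciPy rather than GAMS and with invented coefficients, shows the shape of the model (a hedged sketch, not the paper's formulation):

    import numpy as np
    from scipy.optimize import linprog

    cost = [60.0, 45.0, 80.0]            # annualized cost per MW by plant type
    credit = [0.85, 0.90, 0.50]          # capacity credit toward peak demand
    peak_demand = 1000.0                 # MW
    cap_limits = [(0, 800), (0, 600), (0, 400)]

    # minimize total cost subject to sum(credit_i * x_i) >= peak_demand
    res = linprog(cost, A_ub=[[-a for a in credit]], b_ub=[-peak_demand],
                  bounds=cap_limits, method="highs")
    print("installed MW by type:", res.x, " total cost:", res.fun)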
High-performance liquid chromatographic analysis of methadone hydrochloride oral solution.
Beasley, T H; Ziegler, H W
1977-12-01
A direct and rapid high-performance liquid chromatographic assay for methadone hydrochloride in a flavored oral solution dosage form is described. A syrup sample, one part diluted with three parts of water, is introduced onto a column packed with octadecylsilane bonded on 10 micrometer porous silica gel (reversed phase). A formic acid-ammonium formate-buffered mobile phase is linearly programmed with acetonitrile. The absorbance is monitored continuously at 280 or 254 nm, using a flow-through, UV, double-beam photometer. An aqueous methadone hydrochloride solution is used for external standardization. The relative standard deviation was not more than 1.0%. Drug recovery from a syrup base was better than 99.8%.
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem, which is equivalent to a linear program, is derived by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving a series of linear relaxation programming problems. Global convergence has been proved, and the results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.
Very Low-Cost Nutritious Diet Plans Designed by Linear Programming.
ERIC Educational Resources Information Center
Foytik, Jerry
1981-01-01
Provides procedural details of Linear Programming, developed by the U.S. Department of Agriculture to devise a dietary guide for consumers that minimizes food costs without sacrificing nutritional quality. Compares Linear Programming with the Thrifty Food Plan, which has been a basis for allocating coupons under the Food Stamp Program. (CS)
NASA Astrophysics Data System (ADS)
Kusumawati, Rosita; Subekti, Retno
2017-04-01
The fuzzy bi-objective linear programming (FBOLP) model is a bi-objective linear programming model over fuzzy numbers, in which the coefficients of the equations are fuzzy numbers. This model is proposed to solve the portfolio selection problem of generating an asset portfolio with the lowest risk and the highest expected return. The FBOLP model with normal fuzzy numbers for the risk and expected return of stocks is transformed into a linear programming (LP) model using a magnitude ranking function.
Graphite grain-size spectrum and molecules from core-collapse supernovae
NASA Astrophysics Data System (ADS)
Clayton, Donald D.; Meyer, Bradley S.
2018-01-01
Our goal is to compute the abundances of carbon atomic complexes that emerge from the C + O cores of core-collapse supernovae. We utilize our chemical reaction network in which every atomic step of growth employs a quantum-mechanically guided reaction rate. This tool follows step-by-step the growth of linear carbon chain molecules from C atoms in the oxygen-rich C + O cores. We postulate that once linear chain molecules reach a sufficiently large size, they isomerize to ringed molecules, which serve as seeds for graphite grain growth. We demonstrate our technique for merging the molecular reaction network with a parallel program that can follow 10^17 steps of C addition onto the rare seed species. Due to radioactivity within the C + O core, abundant ambient oxygen is unable to convert C to CO, except to a limited degree that actually facilitates carbon molecular ejecta. But oxygen severely minimizes the linear-carbon-chain abundances. Despite the tiny abundances of these linear-carbon-chain molecules, they can give rise to a small abundance of ringed-carbon molecules that serve as the nucleations on which graphite grain growth builds. We expand the C + O-core gas adiabatically from 6000 K for 10^9 s, when reactions have essentially stopped. These adiabatic tracks emulate the actual expansions of the supernova cores. Using a standard model of 10^56 atoms of C + O core ejecta having O/C = 3, we calculate standard ejection yields of graphite grains of all sizes produced, of the CO molecular abundance, of the abundances of linear-carbon molecules, and of Buckminsterfullerene. None of these except CO was expected from the C + O cores just a few years past.
ERIC Educational Resources Information Center
Dyehouse, Melissa; Bennett, Deborah; Harbor, Jon; Childress, Amy; Dark, Melissa
2009-01-01
Logic models are based on linear relationships between program resources, activities, and outcomes, and have been used widely to support both program development and evaluation. While useful in describing some programs, the linear nature of the logic model makes it difficult to capture the complex relationships within larger, multifaceted…
Anderson, Carl A; McRae, Allan F; Visscher, Peter M
2006-07-01
Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.
NASA Astrophysics Data System (ADS)
Ziegler, Benjamin; Rauhut, Guntram
2016-03-01
The transformation of multi-dimensional potential energy surfaces (PESs) from a grid-based multimode representation to an analytical one is a standard procedure in quantum chemical programs. Within the framework of linear least squares fitting, a simple and highly efficient algorithm is presented, which relies on a direct product representation of the PES and a repeated use of Kronecker products. It shows the same scalings in computational cost and memory requirements as the potfit approach. In comparison to customary linear least squares fitting algorithms, this corresponds to a speed-up and memory saving by several orders of magnitude. Different fitting bases are tested, namely, polynomials, B-splines, and distributed Gaussians. Benchmark calculations are provided for the PESs of a set of small molecules.
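A small sketch of the factorization such an approach exploits, under the assumption of a separable (direct product) basis: the Kronecker-structured least squares problem decomposes into per-dimension pseudo-inverses, avoiding the explicit design matrix. The 2D Gaussian stand-in for a PES and the polynomial basis are assumptions for illustration.

```python
import numpy as np

# Grid data F(x, y) playing the role of a 2D PES on a product grid.
x = np.linspace(-1, 1, 40)
y = np.linspace(-1, 1, 50)
F = np.exp(-(x[:, None] ** 2 + y[None, :] ** 2))

A1 = np.vander(x, 6, increasing=True)  # 1D polynomial basis, dim 1
A2 = np.vander(y, 6, increasing=True)  # 1D polynomial basis, dim 2

# Full problem: min || kron(A1, A2) c - vec(F) ||. For a separable
# design this factorizes into two small pseudo-inverses:
C = np.linalg.pinv(A1) @ F @ np.linalg.pinv(A2).T

# Cross-check against the explicit Kronecker design matrix.
c_full, *_ = np.linalg.lstsq(np.kron(A1, A2), F.ravel(), rcond=None)
print(np.allclose(C.ravel(), c_full))   # same coefficients
print(np.abs(A1 @ C @ A2.T - F).max())  # fit residual on the grid
```

The factored solve touches only 40x6 and 50x6 matrices instead of a 2000x36 design matrix, which is the source of the memory and speed gains the abstract reports, here shown in miniature.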
Linear Programming across the Curriculum
ERIC Educational Resources Information Center
Yoder, S. Elizabeth; Kurz, M. Elizabeth
2015-01-01
Linear programming (LP) is taught in different departments across college campuses with engineering and management curricula. Modeling an LP problem is taught in every linear programming class. As faculty teaching in Engineering and Management departments, the depth to which teachers should expect students to master this particular type of…
Fundamental solution of the problem of linear programming and method of its determination
NASA Technical Reports Server (NTRS)
Petrunin, S. V.
1978-01-01
The idea of a fundamental solution to a problem in linear programming is introduced. A method of determining the fundamental solution and of applying this method to the solution of a problem in linear programming is proposed. Numerical examples are cited.
A Sawmill Manager Adapts To Change With Linear Programming
George F. Dutrow; James E. Granskog
1973-01-01
Linear programming provides guidelines for increasing sawmill capacity and flexibility and for determining stumpage-purchasing strategy. The operator of a medium-sized sawmill implemented improvements suggested by linear programming analysis; results indicate a 45 percent increase in revenue and a 36 percent hike in volume processed.
Generic Kalman Filter Software
NASA Technical Reports Server (NTRS)
Lisano, Michael E., II; Crues, Edwin Z.
2005-01-01
The Generic Kalman Filter (GKF) software provides a standard basis for the development of application-specific Kalman-filter programs. Historically, Kalman filters have been implemented by customized programs that must be written, coded, and debugged anew for each unique application, then tested and tuned with simulated or actual measurement data. Total development times for typical Kalman-filter application programs have ranged from months to weeks. The GKF software can simplify the development process and reduce the development time by eliminating the need to re-create the fundamental implementation of the Kalman filter for each new application. The GKF software is written in the ANSI C programming language. It contains a generic Kalman-filter-development directory that, in turn, contains a code for a generic Kalman filter function; more specifically, it contains a generically designed and generically coded implementation of linear, linearized, and extended Kalman filtering algorithms, including algorithms for state- and covariance-update and -propagation functions. The mathematical theory that underlies the algorithms is well known and has been reported extensively in the open technical literature. Also contained in the directory are a header file that defines generic Kalman-filter data structures and prototype functions and template versions of application-specific subfunction and calling navigation/estimation routine code and headers. Once the user has provided a calling routine and the required application-specific subfunctions, the application-specific Kalman-filter software can be compiled and executed immediately. During execution, the generic Kalman-filter function is called from a higher-level navigation or estimation routine that preprocesses measurement data and post-processes output data. The generic Kalman-filter function uses the aforementioned data structures and five implementation-specific subfunctions, which have been developed by the user on the basis of the aforementioned templates. The GKF software can be used to develop many different types of unfactorized Kalman filters. A developer can choose to implement either a linearized or an extended Kalman filter algorithm, without having to modify the GKF software. Control dynamics can be taken into account or neglected in the filter-dynamics model. Filter programs developed by use of the GKF software can be made to propagate equations of motion for linear or nonlinear dynamical systems that are deterministic or stochastic. In addition, filter programs can be made to operate in user-selectable "covariance analysis" and "propagation-only" modes that are useful in design and development stages.
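A compact sketch of the propagate/update split that such a generic filter encapsulates, here for an ordinary (unfactorized) linear Kalman filter; the constant-velocity model, noise covariances, and measurement setup are illustrative assumptions rather than GKF code.

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
H = np.array([[1.0, 0.0]])              # we measure position only
Q = 0.01 * np.eye(2)                    # process noise covariance
R = np.array([[0.5]])                   # measurement noise covariance

x = np.zeros(2)   # state estimate
P = np.eye(2)     # state covariance

def propagate(x, P):
    """State and covariance propagation."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Measurement update with Kalman gain K."""
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

rng = np.random.default_rng(0)
for t in range(20):
    truth = np.array([2.0 * t, 2.0])   # true position and velocity
    z = H @ truth + rng.normal(0, np.sqrt(R[0, 0]), 1)
    x, P = propagate(x, P)
    x, P = update(x, P, z)
print("estimated state:", x)   # close to the final true state [38, 2]
```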
London Measure of Unplanned Pregnancy: guidance for its use as an outcome measure
Hall, Jennifer A; Barrett, Geraldine; Copas, Andrew; Stephenson, Judith
2017-01-01
Background The London Measure of Unplanned Pregnancy (LMUP) is a psychometrically validated measure of the degree of intention of a current or recent pregnancy. The LMUP is increasingly being used worldwide, and can be used to evaluate family planning or preconception care programs. However, beyond recommending the use of the full LMUP scale, there is no published guidance on how to use the LMUP as an outcome measure. Ordinal logistic regression has been recommended informally, but studies published to date have all used binary logistic regression and dichotomized the scale at different cut points. There is thus a need for evidence-based guidance to provide a standardized methodology for multivariate analysis and to enable comparison of results. This paper makes recommendations for the regression method for analysis of the LMUP as an outcome measure. Materials and methods Data collected from 4,244 pregnant women in Malawi were used to compare five regression methods: linear, logistic with two cut points, and ordinal logistic with either the full or grouped LMUP score. The recommendations were then tested on the original UK LMUP data. Results There were small but no important differences in the findings across the regression models. Logistic regression resulted in the largest loss of information, and assumptions were violated for the linear and ordinal logistic regression. Consequently, robust standard errors were used for linear regression and a partial proportional odds ordinal logistic regression model attempted. The latter could only be fitted for grouped LMUP score. Conclusion We recommend the linear regression model with robust standard errors to make full use of the LMUP score when analyzed as an outcome measure. Ordinal logistic regression could be considered, but a partial proportional odds model with grouped LMUP score may be required. Logistic regression is the least-favored option, due to the loss of information. For logistic regression, the cut point for un/planned pregnancy should be between nine and ten. These recommendations will standardize the analysis of LMUP data and enhance comparability of results across studies. PMID:28435343
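A minimal sketch of the recommended analysis, assuming simulated data: an OLS regression of the full LMUP score on a predictor, with heteroskedasticity-robust (sandwich) standard errors via statsmodels. The predictor and the data-generating numbers are invented for the demonstration.

```python
import numpy as np
import statsmodels.api as sm

# Simulated stand-in data: a 0-12 LMUP-like score and one predictor.
rng = np.random.default_rng(1)
n = 500
age = rng.uniform(18, 45, n)
score = np.clip(12 - 0.15 * (age - 18) + rng.normal(0, 2, n), 0, 12)

# Linear regression on the full scale with robust standard errors.
X = sm.add_constant(age)
fit = sm.OLS(score, X).fit(cov_type="HC1")
print(fit.params)   # intercept and slope on the full LMUP scale
print(fit.bse)      # heteroskedasticity-robust standard errors
```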
Preliminary demonstration of a robust controller design method
NASA Technical Reports Server (NTRS)
Anderson, L. R.
1980-01-01
Alternative computational procedures for obtaining a feedback control law which yields a control signal based on measurable quantities are evaluated. The three methods evaluated are: (1) the standard linear quadratic regulator design model; (2) minimization of the norm of the feedback matrix K via nonlinear programming, subject to the constraint that the closed-loop eigenvalues be in a specified domain in the complex plane; and (3) maximization of the angles between the closed-loop eigenvectors in combination with minimization of the norm of K, also via constrained nonlinear programming. The third, or robust, design method was chosen to yield a closed-loop system whose eigenvalues are insensitive to small changes in the A and B matrices. The relationship between the orthogonality of closed-loop eigenvectors and the sensitivity of closed-loop eigenvalues is described. Computer programs are described.
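Method (1), the standard linear quadratic regulator, is easy to reproduce for a toy plant; the double-integrator A and B matrices and the Q, R weights below are assumptions for illustration, not the model of the study.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical double-integrator plant and LQR weights.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)           # state weighting
R = np.array([[1.0]])   # control weighting

P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati solution
K = np.linalg.solve(R, B.T @ P)        # feedback gain, u = -K x
print("gain K:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```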
Comparative Effectiveness of After-School Programs to Increase Physical Activity
Gesell, Sabina B.; Sommer, Evan C.; Lambert, E. Warren; Vides de Andrade, Ana Regina; Davis, Lauren; Beech, Bettina M.; Mitchell, Stephanie J.; Neloms, Stevon; Ryan, Colleen K.
2013-01-01
Background. We conducted a comparative effectiveness analysis to evaluate the difference in the amount of physical activity children engaged in when enrolled in a physical activity-enhanced after-school program based in a community recreation center versus a standard school-based after-school program. Methods. The study was a natural experiment with 54 elementary school children attending the community ASP and 37 attending the school-based ASP. Accelerometry was used to measure physical activity. Data were collected at baseline, 6 weeks, and 12 weeks, with 91% retention. Results. At baseline, 43% of the multiethnic sample was overweight/obese, and the mean age was 7.9 years (SD = 1.7). Linear latent growth models suggested that the average difference between the two groups of children at Week 12 was 14.7 percentage points in moderate-vigorous physical activity (P < .001). Cost analysis suggested that children attending traditional school-based ASPs—at an average cost of $17.67 per day—would need an additional daily investment of $1.59 per child for 12 weeks to increase their moderate-vigorous physical activity by a model-implied 14.7 percentage points. Conclusions. A low-cost, alternative after-school program featuring adult-led physical activities in a community recreation center was associated with increased physical activity compared to standard-of-care school-based after-school program. PMID:23984052
Ranking Surgical Residency Programs: Reputation Survey or Outcomes Measures?
Wilson, Adam B; Torbeck, Laura J; Dunnington, Gary L
2015-01-01
The release of general surgery residency program rankings by Doximity and U.S. News & World Report accentuates the need to define and establish measurable standards of program quality. This study evaluated the extent to which program rankings based solely on peer nominations correlated with familiar program outcomes measures. Publicly available data were collected for all 254 general surgery residency programs. To generate a rudimentary outcomes-based program ranking, surgery programs were rank-ordered according to an average percentile rank that was calculated using board pass rates and the prevalence of alumni publications. A Kendall τ-b rank correlation computed the linear association between program rankings based on reputation alone and those derived from outcomes measures to validate whether reputation was a reasonable surrogate for globally judging program quality. For the 218 programs with complete data eligible for analysis, the mean board pass rate was 72% with a standard deviation of 14%. A total of 60 programs were placed in the 75th percentile or above for the number of publications authored by program alumni. The correlational analysis reported a significant correlation of 0.428, indicating only a moderate association between programs ranked by outcomes measures and those ranked according to reputation. Seventeen programs that were ranked in the top 30 according to reputation were also ranked in the top 30 based on outcomes measures. This study suggests that reputation alone does not fully capture a representative snapshot of a program's quality. Rather, the use of multiple quantifiable indicators and attributes unique to programs ought to be given more consideration when assigning ranks to denote program quality. It is advised that the interpretation and subsequent use of program rankings be met with caution until further studies can rigorously demonstrate best practices for awarding program standings. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
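The correlation statistic used here is straightforward to compute; a sketch with invented rank data follows (scipy's kendalltau implements the tau-b variant, which handles tied ranks).

```python
from scipy.stats import kendalltau

# Hypothetical ranks of ten programs under two ranking schemes.
reputation_rank = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
outcomes_rank   = [2, 1, 5, 3, 4, 8, 6, 10, 7, 9]

tau, p_value = kendalltau(reputation_rank, outcomes_rank)
print(f"tau-b = {tau:.3f}, p = {p_value:.4f}")
```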
Timetabling an Academic Department with Linear Programming.
ERIC Educational Resources Information Center
Bezeau, Lawrence M.
This paper describes an approach to faculty timetabling and course scheduling that uses computerized linear programming. After reviewing the literature on linear programming, the paper discusses the process whereby a timetable was created for a department at the University of New Brunswick. Faculty were surveyed with respect to course offerings…
ERIC Educational Resources Information Center
Schmitt, M. A.; And Others
1994-01-01
Compares traditional manure application planning techniques calculated to meet agronomic nutrient needs on a field-by-field basis with plans developed using computer-assisted linear programming optimization methods. Linear programming provided the most economical and environmentally sound manure application strategy. (Contains 15 references.) (MDH)
Applications of Goal Programming to Education.
ERIC Educational Resources Information Center
Van Dusseldorp, Ralph A.; And Others
This paper discusses goal programming, a computer-based operations research technique that is basically a modification and extension of linear programming. The authors first discuss the similarities and differences between goal programming and linear programming, then describe the limitations of goal programming and its possible applications for…
NASA Technical Reports Server (NTRS)
Klumpp, A. R.; Lawson, C. L.
1988-01-01
Routines are provided for common scalar, vector, matrix, and quaternion operations. The computer program extends the Ada programming language to include linear-algebra capabilities similar to those of the HAL/S programming language. Designed for such avionics applications as software for the Space Station.
Shastry, Shamee; Ramya, B; Ninan, Jefy; Srinidhi, G C; Bhat, Sudha S; Fernandes, Donald J
2013-12-01
Dedicated devices for blood irradiation are available at only a few centers in developing countries; thus, irradiation remains a service with limited availability due to prohibitive cost. The aim was to implement a blood irradiation program at our center using a linear accelerator. The study details the specific operational and quality assurance measures employed in providing a blood component irradiation service at a tertiary care hospital. X-rays generated from a linear accelerator were used to irradiate the blood components. To facilitate and standardize blood component irradiation, a blood irradiator box was designed and fabricated in acrylic. Using an Elekta Precise linear accelerator, a dose of 25 Gy was delivered at the centre of the irradiation box. Standardization was done using five units of blood obtained from healthy voluntary blood donors. Each unit was divided into two parts, and one aliquot was subjected to irradiation. Biochemical and hematological parameters were analyzed on various days of storage, and the cost incurred was analyzed. A progressive increase in plasma hemoglobin, potassium and lactate dehydrogenase was noted in the irradiated units, but all parameters were within the acceptable range, indicating the suitability of the product for transfusion. The irradiation process was completed in less than 30 min. Validation of the radiation dose using TLD showed less than ±3% variation. This study shows that blood component irradiation is within the scope of most hospitals in developing countries, even in the absence of dedicated blood irradiators, at affordable cost. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Tuey, R. C.
1972-01-01
Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
Computer Program for Point Location And Calculation of ERror (PLACER)
Granato, Gregory E.
1999-01-01
A program designed for point location and calculation of error (PLACER) was developed as part of the Quality Assurance Program of the Federal Highway Administration/U.S. Geological Survey (USGS) National Data and Methodology Synthesis (NDAMS) review process. The program provides a standard method to derive study-site locations from site maps in highway-runoff, urban-runoff, and other research reports. This report provides a guide for using PLACER, documents methods used to estimate study-site locations, documents the NDAMS Study-Site Locator Form, and documents the FORTRAN code used to implement the method. PLACER is a simple program that calculates the latitude and longitude coordinates of one or more study sites plotted on a published map and estimates the uncertainty of these calculated coordinates. PLACER calculates the latitude and longitude of each study site by interpolating between the coordinates of known features and the locations of study sites using any consistent, linear, user-defined coordinate system. This program will read data entered from the computer keyboard and(or) from a formatted text file, and will write the results to the computer screen and to a text file. PLACER is readily transferable to different computers and operating systems with few (if any) modifications because it is written in standard FORTRAN. PLACER can be used to calculate study site locations in latitude and longitude, using known map coordinates or features that are identifiable in geographic information data bases such as USGS Geographic Names Information System, which is available on the World Wide Web.
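The core interpolation step is simple to sketch: given two reference features with known map and geographic coordinates, a site's latitude and longitude follow by linear interpolation along each axis. The coordinates below are hypothetical, and this sketch omits the uncertainty estimate that PLACER also reports.

```python
# PLACER-style point location by linear interpolation; a re-expression
# of the idea, not the original FORTRAN code.
def interpolate_coords(p, ref1, ref2):
    """p: (x, y) map coords; refs: (x, y, lon, lat) known features."""
    (x1, y1, lon1, lat1), (x2, y2, lon2, lat2) = ref1, ref2
    x, y = p
    lon = lon1 + (x - x1) * (lon2 - lon1) / (x2 - x1)
    lat = lat1 + (y - y1) * (lat2 - lat1) / (y2 - y1)
    return lon, lat

# Two map features with known positions, then a plotted study site.
ref_a = (10.0, 5.0, -71.50, 42.20)
ref_b = (90.0, 85.0, -71.10, 42.60)
print(interpolate_coords((50.0, 45.0), ref_a, ref_b))  # (-71.30, 42.40)
```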
Object matching using a locally affine invariant and linear programming techniques.
Li, Hongsheng; Huang, Xiaolei; He, Lei
2013-02-01
In this paper, we introduce a new matching method based on a novel locally affine-invariant geometric constraint and linear programming techniques. To model and solve the matching problem in a linear programming formulation, all geometric constraints should be able to be exactly or approximately reformulated into a linear form. This is a major difficulty for this kind of matching algorithm. We propose a novel locally affine-invariant constraint which can be exactly linearized and requires far fewer auxiliary variables than other linear-programming-based methods. The key idea behind it is that each point in the template point set can be exactly represented by an affine combination of its neighboring points, whose weights can be solved easily by least squares. Errors of reconstructing each matched point using such weights are used to penalize the disagreement of geometric relationships between the template points and the matched points. The resulting overall objective function can be solved efficiently by linear programming techniques. Our experimental results on both rigid and nonrigid object matching show the effectiveness of the proposed algorithm.
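The key idea, reconstructing each template point as an affine combination of its neighbors, can be sketched with a small least-squares system; the points and the affine map below are arbitrary illustrative data. The final check demonstrates the claimed affine invariance: the same weights reconstruct the point after any affine transform.

```python
import numpy as np

# A template point and its neighboring points (arbitrary example data).
p = np.array([1.0, 2.0])
neighbors = np.array([[0.0, 0.0], [3.0, 1.0], [1.0, 4.0], [2.0, 2.0]])

# Solve  sum_i w_i * n_i = p  with  sum_i w_i = 1  by least squares.
A = np.vstack([neighbors.T, np.ones(len(neighbors))])   # 3 x k system
b = np.append(p, 1.0)
w, *_ = np.linalg.lstsq(A, b, rcond=None)
print("weights:", w, "reconstruction:", neighbors.T @ w)

# Affine invariance: the same weights reconstruct the transformed point.
M = np.array([[2.0, 0.5], [-0.3, 1.5]])
t = np.array([4.0, -1.0])
print(np.allclose((neighbors @ M.T + t).T @ w, M @ p + t))  # True
```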
How does non-linear dynamics affect the baryon acoustic oscillation?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sugiyama, Naonori S.; Spergel, David N., E-mail: nao.s.sugiyama@gmail.com, E-mail: dns@astro.princeton.edu
2014-02-01
We study the non-linear behavior of the baryon acoustic oscillation in the power spectrum and the correlation function by decomposing the dark matter perturbations into the short- and long-wavelength modes. The evolution of the dark matter fluctuations can be described as a global coordinate transformation caused by the long-wavelength displacement vector acting on short-wavelength matter perturbation undergoing non-linear growth. Using this feature, we investigate the well known cancellation of the high-k solutions in the standard perturbation theory. While the standard perturbation theory naturally satisfies the cancellation of the high-k solutions, some of the recently proposed improved perturbation theories do not guarantee the cancellation. We show that this cancellation clarifies the success of the standard perturbation theory at the 2-loop order in describing the amplitude of the non-linear power spectrum even at high-k regions. We propose an extension of the standard 2-loop level perturbation theory model of the non-linear power spectrum that more accurately models the non-linear evolution of the baryon acoustic oscillation than the standard perturbation theory. The model consists of simple and intuitive parts: the non-linear evolution of the smoothed power spectrum without the baryon acoustic oscillations and the non-linear evolution of the baryon acoustic oscillations due to the large-scale velocity of dark matter and due to the gravitational attraction between dark matter particles. Our extended model predicts the smoothing parameter of the baryon acoustic oscillation peak at z = 0.35 as ∼7.7 Mpc/h and describes the small non-linear shift in the peak position due to the galaxy random motions.
NASA Technical Reports Server (NTRS)
Lovegreen, J. R.; Prosser, W. J.; Millet, R. A.
1975-01-01
A site in the Great Valley subsection of the Valley and Ridge physiographic province in eastern Pennsylvania was studied to evaluate the use of digital and analog image processing for geologic investigations. Ground truth at the site was obtained by a field mapping program, a subsurface exploration investigation and a review of available published and unpublished literature. Remote sensing data were analyzed using standard manual techniques. LANDSAT-1 imagery was analyzed using digital image processing employing the multispectral Image 100 system and using analog color processing employing the VP-8 image analyzer. This study deals primarily with linears identified by image processing, the correlation of these linears with known structural features and with linears identified by manual interpretation, and the identification of rock outcrops in areas of extensive vegetative cover employing image processing. The results of this study indicate that image processing can be a cost-effective tool for evaluating geologic and linear features for regional studies encompassing large areas such as for power plant siting. Digital image processing can be an effective tool for identifying rock outcrops in areas of heavy vegetative cover.
BLAS- BASIC LINEAR ALGEBRA SUBPROGRAMS
NASA Technical Reports Server (NTRS)
Krogh, F. T.
1994-01-01
The Basic Linear Algebra Subprogram (BLAS) library is a collection of FORTRAN callable routines for employing standard techniques in performing the basic operations of numerical linear algebra. The BLAS library was developed to provide a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The subprograms available in the library cover the operations of dot product, multiplication of a scalar and a vector, vector plus a scalar times a vector, Givens transformation, modified Givens transformation, copy, swap, Euclidean norm, sum of magnitudes, and location of the largest magnitude element. Since these subprograms are to be used in an ANSI FORTRAN context, the cases of single precision, double precision, and complex data are provided for. All of the subprograms have been thoroughly tested and produce consistent results even when transported from machine to machine. BLAS contains Assembler versions and FORTRAN test code for any of the following compilers: Lahey F77L, Microsoft FORTRAN, or IBM Professional FORTRAN. It requires the Microsoft Macro Assembler and a math co-processor. The PC implementation allows individual arrays of over 64K. The BLAS library was developed in 1979. The PC version was made available in 1986 and updated in 1988.
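Several of the level-1 operations named above are callable today through SciPy's bindings to the FORTRAN BLAS, which makes for a quick way to see them in action; the double-precision variants are shown, and the values in the comments assume the inputs given.

```python
import numpy as np
from scipy.linalg import blas

x = np.array([3.0, -4.0, 1.0])
y = np.array([1.0, 1.0, 1.0])

print(blas.ddot(x, y))          # dot product: 0.0
print(blas.daxpy(x, y, a=2.0))  # a*x + y: [7., -7., 3.]
print(blas.dnrm2(x))            # Euclidean norm: ~5.099
print(blas.dasum(x))            # sum of magnitudes: 8.0
print(blas.idamax(x))           # index of the largest-magnitude element
```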
Programming Models for Concurrency and Real-Time
NASA Astrophysics Data System (ADS)
Vitek, Jan
Modern real-time applications are increasingly large, complex and concurrent systems which must meet stringent performance and predictability requirements. Programming those systems requires fundamental advances in programming languages and runtime systems. This talk presents our work on Flexotasks, a programming model for concurrent, real-time systems inspired by stream-processing and concurrent active objects. Among the key innovations in Flexotasks is that it supports both real-time garbage collection and region-based memory with an ownership type system for static safety. Communication between tasks is performed by channels with a linear type discipline to avoid copying messages, and by a non-blocking transactional memory facility. We have evaluated our model empirically within two distinct implementations, one based on Purdue's Ovm research virtual machine framework and the other on WebSphere, IBM's production real-time virtual machine. We have written a number of small programs, as well as a 30 KLOC avionics collision detector application. We show that Flexotasks are capable of executing periodic threads at 10 KHz with a standard deviation of 1.2 µs and have performance competitive with hand-coded C programs.
2013-03-30
Abstract: We study multi-robot routing problems (MR-LDR) where a team of robots has to visit a set of given targets with linear decreasing rewards over time, such as required for the delivery of goods to rescue sites after disasters. The objective of MR-LDR is to find an assignment of targets to... We develop a mixed integer program that solves MR-LDR optimally with a flow-type formulation and can be solved faster than the standard TSP-type
1980-10-01
faster than previous algorithms. Indeed, with only minor modifications, the standard multigrid programs solve the LCP with essentially the same efficiency... Lemma 2.2. Let U^k be the solution of the LCP (2.3), and let u^k > 0 be an approximate solution obtained after one or more G^k projected sweeps. Let...in Figure 3.2, ||vu||_G decreased from .293 10 to .110 10 with the expenditure of (99.039-94.400) = 4.639 work units. While minor variations do arise, a
Henry, B I; Langlands, T A M; Wearne, S L
2006-09-01
We have revisited the problem of anomalously diffusing species, modeled at the mesoscopic level using continuous time random walks, to include linear reaction dynamics. If a constant proportion of walkers are added or removed instantaneously at the start of each step then the long time asymptotic limit yields a fractional reaction-diffusion equation with a fractional order temporal derivative operating on both the standard diffusion term and a linear reaction kinetics term. If the walkers are added or removed at a constant per capita rate during the waiting time between steps then the long time asymptotic limit has a standard linear reaction kinetics term but a fractional order temporal derivative operating on a nonstandard diffusion term. Results from the above two models are compared with a phenomenological model with standard linear reaction kinetics and a fractional order temporal derivative operating on a standard diffusion term. We have also developed further extensions of the CTRW model to include more general reaction dynamics.
Christman, Stephen D; Weaver, Ryan
2008-05-01
The nature of temporal variability during speeded finger tapping was examined using linear (standard deviation) and non-linear (Lyapunov exponent) measures. Experiment 1 found that right hand tapping was characterised by lower amounts of both linear and non-linear measures of variability than left hand tapping, and that linear and non-linear measures of variability were often negatively correlated with one another. Experiment 2 found that increased non-linear variability was associated with relatively enhanced performance on a closed-loop motor task (mirror tracing) and relatively impaired performance on an open-loop motor task (pointing in a dark room), especially for left hand performance. The potential uses and significance of measures of non-linear variability are discussed.
ERIC Educational Resources Information Center
Rocconi, Louis M.
2011-01-01
Hierarchical linear models (HLM) solve the problems associated with the unit of analysis problem such as misestimated standard errors, heterogeneity of regression and aggregation bias by modeling all levels of interest simultaneously. Hierarchical linear modeling resolves the problem of misestimated standard errors by incorporating a unique random…
Gullo, Charles A.
2016-01-01
Biomedical programs have a potential treasure trove of data they can mine to assist admissions committees in identifying students who are likely to do well, and to help educational committees identify students who are likely to do poorly on standardized national exams and who may need remediation. In this article, we provide a step-by-step approach that schools can utilize to generate data that are useful when predicting the future performance of current students in any given program. We discuss the use of linear regression analysis as the means of generating those data and highlight some of the limitations. Finally, we lament that these institution-specific data sets are not being fully utilized at the national level, where they could greatly assist programs at large. PMID:27374246
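A bare-bones version of the step-by-step idea, assuming simulated records: fit a linear regression of a national exam outcome on an in-program predictor for past cohorts, then apply the fitted coefficients to current students. Everything here (the predictor, cohort size, coefficients) is a made-up stand-in, not institutional data.

```python
import numpy as np

# Simulated past-cohort records: one predictor and an exam outcome.
rng = np.random.default_rng(7)
gpa = rng.normal(3.4, 0.3, 200)
exam = 150 + 25 * gpa + rng.normal(0, 8, 200)

# Fit the regression line with an intercept column.
X = np.column_stack([np.ones_like(gpa), gpa])
beta, *_ = np.linalg.lstsq(X, exam, rcond=None)

# Apply it to current students to flag likely low performers.
current_gpa = np.array([2.9, 3.2, 3.8])
predicted = beta[0] + beta[1] * current_gpa
print("predicted exam scores:", predicted)
```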
FSILP: fuzzy-stochastic-interval linear programming for supporting municipal solid waste management.
Li, Pu; Chen, Bing
2011-04-01
Although many studies on municipal solid waste (MSW) management have been conducted under uncertain conditions where fuzzy, stochastic, and interval information coexist, solving the resulting conventional linear programming problems by integrating the fuzzy method with the other two was inefficient. In this study, a fuzzy-stochastic-interval linear programming (FSILP) method is developed by integrating Nguyen's method with conventional linear programming for supporting municipal solid waste management. Nguyen's method was used to convert the fuzzy and fuzzy-stochastic linear programming problems into conventional linear programs, by measuring the attainment values of fuzzy numbers and/or fuzzy random variables, as well as the superiority and inferiority between triangular fuzzy numbers/triangular fuzzy-stochastic variables. The developed method can effectively tackle uncertainties described in terms of probability density functions, fuzzy membership functions, and discrete intervals. Moreover, the method improves upon the conventional interval fuzzy programming and two-stage stochastic programming approaches, with advantageous capabilities that are easily achieved with fewer constraints and significantly reduced computation times. The developed model was applied to a case study of a municipal solid waste management system in a city. The results indicated that reasonable solutions had been generated. The solution can help quantify the relationship between the change of system cost and the uncertainties, which could support further analysis of tradeoffs between the waste management cost and the system failure risk. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Indarsih, Indrati, Ch. Rini
2016-02-01
In this paper, we define the variance of fuzzy random variables through alpha levels. We have a theorem that can be used to show that the variance of a fuzzy random variable is a fuzzy number. We consider a multi-objective linear programming (MOLP) problem with fuzzy random objective function coefficients and solve it by a variance approach. The approach transforms the MOLP with fuzzy random objective function coefficients into an MOLP with fuzzy objective function coefficients. By the weighting method, we obtain a linear program with fuzzy coefficients, which we solve by the simplex method for fuzzy linear programming.
Bruhn, Peter; Geyer-Schulz, Andreas
2002-01-01
In this paper, we introduce genetic programming over context-free languages with linear constraints for combinatorial optimization, apply this method to several variants of the multidimensional knapsack problem, and discuss its performance relative to Michalewicz's genetic algorithm with penalty functions. With respect to Michalewicz's approach, we demonstrate that genetic programming over context-free languages with linear constraints improves convergence. A final result is that genetic programming over context-free languages with linear constraints is ideally suited to modeling complementarities between items in a knapsack problem: The more complementarities in the problem, the stronger the performance in comparison to its competitors.
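The penalty-function baseline the authors compare against is easy to sketch (this is the comparison method, not their grammar-guided genetic programming); the knapsack instance, penalty weight, and GA settings below are arbitrary illustrative choices.

```python
import random

# Small made-up 0/1 knapsack instance.
values  = [10, 13, 7, 8, 12, 4, 9]
weights = [ 5,  6, 3, 4,  6, 2, 5]
CAP, POP, GENS, PMUT = 15, 40, 120, 0.05
rng = random.Random(0)

def fitness(ind):
    v = sum(g * val for g, val in zip(ind, values))
    w = sum(g * wt for g, wt in zip(ind, weights))
    return v - 20 * max(0, w - CAP)   # linear penalty for infeasibility

pop = [[rng.randint(0, 1) for _ in values] for _ in range(POP)]
for _ in range(GENS):
    nxt = []
    for _ in range(POP):
        p1 = max(rng.sample(pop, 2), key=fitness)   # binary tournament
        p2 = max(rng.sample(pop, 2), key=fitness)
        cut = rng.randrange(1, len(values))         # one-point crossover
        child = [1 - g if rng.random() < PMUT else g
                 for g in p1[:cut] + p2[cut:]]      # bit-flip mutation
        nxt.append(child)
    pop = nxt

best = max(pop, key=fitness)
print("best item subset:", best, "fitness:", fitness(best))
```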
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weiss, Chester J
Software solves the three-dimensional Poisson equation div(k grad(u)) = f by the finite element method for the case when material properties, k, are distributed over a hierarchy of edges, facets and tetrahedra in the finite element mesh. The method is described in Weiss, CJ, Finite element analysis for model parameters distributed on a hierarchy of geometric simplices, Geophysics, v82, E155-167, doi:10.1190/GEO2017-0058.1 (2017). A standard finite element method for solving Poisson's equation is augmented by including in the 3D stiffness matrix additional 2D and 1D stiffness matrices representing the contributions from material properties associated with mesh faces and edges, respectively. The resulting linear system is solved iteratively using the conjugate gradient method with Jacobi preconditioning. To minimize computer storage for program execution, the linear solver computes matrix-vector contractions element-by-element over the mesh, without explicit storage of the global stiffness matrix. Program output is vtk-compliant for visualization and rendering by 3rd-party software. The program uses dynamic memory allocation, and as such there are no hard limits on problem size beyond those imposed by the operating system and configuration on which the software is run. The dimension, N, of the finite element solution vector, and hence total storage requirements for the problem, is constrained by the addressable space of 32- versus 64-bit operating systems. Total working space required for the program is approximately 13*N double-precision words.
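A language-agnostic sketch of the solver core, assuming a stand-in operator: matrix-free Jacobi-preconditioned conjugate gradients, with the 1D (-1, 2, -1) Laplacian playing the role of the element-by-element stiffness contraction.

```python
import numpy as np

n = 200
b = np.ones(n)
diag = np.full(n, 2.0)   # Jacobi preconditioner applies 1/diag

def apply_A(v):
    """y = A v for the tridiagonal (-1, 2, -1) operator, matrix-free."""
    y = 2.0 * v
    y[:-1] -= v[1:]
    y[1:] -= v[:-1]
    return y

# Preconditioned conjugate gradient loop.
x = np.zeros(n)
r = b - apply_A(x)
z = r / diag
p = z.copy()
rz = r @ z
for it in range(1000):
    Ap = apply_A(p)
    alpha = rz / (p @ Ap)
    x += alpha * p
    r -= alpha * Ap
    if np.linalg.norm(r) < 1e-10:
        break
    z = r / diag
    rz_new = r @ z
    p = z + (rz_new / rz) * p
    rz = rz_new
print("iterations:", it, "residual:", np.linalg.norm(b - apply_A(x)))
```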
Heegaard, P M; Holm, A; Hagerup, M
1993-01-01
A personal computer program for the conversion of linear amino acid sequences to multiple, small, overlapping peptide sequences has been developed. Peptide lengths and "jumps" (the distance between two consecutive overlapping peptides) are defined by the user. To facilitate the use of the program for parallel solid-phase chemical peptide syntheses for the synchronous production of multiple peptides, amino acids at each acylation step are laid out by the program in a convenient standard multi-well setup. Also, the total number of equivalents, as well as the derived amount in milligrams (depending on user-defined equivalent weights and molar surplus), of each amino acid are given. The program facilitates the implementation of multipeptide synthesis, e.g., for the elucidation of polypeptide structure-function relationships, and greatly reduces the risk of introducing mistakes at the planning step. It is written in Pascal and runs on any DOS-based personal computer. No special graphic display is needed.
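The two core bookkeeping tasks are compact to sketch in any language; the sequence, peptide length, and jump below are arbitrary examples (the original is a Pascal DOS program, so this is a re-expression of the idea, not its code).

```python
from collections import Counter

def overlapping_peptides(seq, length, jump):
    """Slice seq into overlapping peptides of the given length and jump."""
    return [seq[i:i + length] for i in range(0, len(seq) - length + 1, jump)]

seq = "MKTAYIAKQRQISFVKSHFSRQ"   # arbitrary example sequence
peptides = overlapping_peptides(seq, length=8, jump=2)
print(peptides[:3])   # ['MKTAYIAK', 'TAYIAKQR', 'YIAKQRQI']

# Total residue usage across all acylation steps, for ordering reagents.
print(Counter("".join(peptides)))
```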
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mundy, D; Tryggestad, E; Beltran, C
Purpose: To develop daily and monthly quality assurance (QA) programs in support of a new spot-scanning proton treatment facility using a combination of commercial and custom equipment and software. Emphasis was placed on efficiency and evaluation of key quality parameters. Methods: The daily QA program was developed to test output, spot size and position, proton beam energy, and image guidance using the Sun Nuclear Corporation rf-DQA™3 device and Atlas QA software. The program utilizes standard Atlas linear accelerator tests repurposed for proton measurements and a custom jig for indexing the device to the treatment couch. The monthly QA program was designed to test mechanical performance, image quality, radiation quality, isocenter coincidence, and safety features. Many of these tests are similar to linear accelerator QA counterparts, but many require customized test design and equipment. Coincidence of imaging, laser marker, mechanical, and radiation isocenters, for instance, is verified using a custom film-based device devised and manufactured at our facility. Proton spot size and position as a function of energy are verified using a custom spot pattern incident on film and analysis software developed in-house. More details concerning the equipment and software developed for monthly QA are included in the supporting document. Thresholds for daily and monthly tests were established via perturbation analysis, early experience, and/or proton system specifications and associated acceptance test results. Results: The periodic QA program described here has been in effect for approximately 9 months and has proven efficient and sensitive to sub-clinical variations in treatment delivery characteristics. Conclusion: Tools and professional guidelines for periodic proton system QA are not as well developed as their photon and electron counterparts. The program described here efficiently evaluates key quality parameters and, while specific to the needs of our facility, could be readily adapted to other proton centers.
NASA Technical Reports Server (NTRS)
Lawson, C. L.; Krogh, F. T.; Gold, S. S.; Kincaid, D. R.; Sullivan, J.; Williams, E.; Hanson, R. J.; Haskell, K.; Dongarra, J.; Moler, C. B.
1982-01-01
The Basic Linear Algebra Subprograms (BLAS) library is a collection of 38 FORTRAN-callable routines for performing basic operations of numerical linear algebra. The BLAS library is a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The BLAS library is supplied in portable FORTRAN and Assembler code versions for IBM 370, UNIVAC 1100 and CDC 6000 series computers.
Sedgewick, Gerald J.; Ericson, Marna
2015-01-01
Obtaining digital images of color brightfield microscopy is an important aspect of biomedical research and the clinical practice of diagnostic pathology. Although the field of digital pathology has had tremendous advances in whole-slide imaging systems, little effort has been directed toward standardizing color brightfield digital imaging to maintain image-to-image consistency and tonal linearity. Using a single camera and microscope to obtain digital images of three stains, we show that microscope and camera systems inherently produce image-to-image variation. Moreover, we demonstrate that post-processing with a widely used raster graphics editor software program does not completely correct for session-to-session inconsistency. We introduce a reliable method for creating consistent images with a hardware/software solution (ChromaCal™; Datacolor Inc., NJ) along with its features for creating color standardization, preserving linear tonal levels, providing automated white balancing and setting automated brightness to consistent levels. The resulting image consistency using this method will also streamline mean density and morphometry measurements, as images are easily segmented and single thresholds can be used. We suggest that this is a superior method for color brightfield imaging, which can be used for quantification and can be readily incorporated into workflows. PMID:25575568
Zörnig, Peter
2015-08-01
We present integer programming models for some variants of the farthest string problem. The number of variables and constraints is substantially less than that of the integer linear programming models known in the literature. Moreover, the solution of the linear programming-relaxation contains only a small proportion of noninteger values, which considerably simplifies the rounding process. Numerical tests have shown excellent results, especially when a small set of long sequences is given.
NASA Astrophysics Data System (ADS)
Li, Mo; Fu, Qiang; Singh, Vijay P.; Ma, Mingwei; Liu, Xiao
2017-12-01
Water scarcity causes conflicts among natural resources, society and economy and reinforces the need for optimal allocation of irrigation water resources in a sustainable way. Uncertainties caused by natural conditions and human activities make optimal allocation more complex. An intuitionistic fuzzy multi-objective non-linear programming (IFMONLP) model for irrigation water allocation under the combination of dry and wet conditions is developed to help decision makers mitigate water scarcity. The model is capable of quantitatively solving multiple problems including crop yield increase, blue water saving, and water supply cost reduction to obtain a balanced water allocation scheme using a multi-objective non-linear programming technique. Moreover, it can deal with uncertainty as well as hesitation based on the introduction of intuitionistic fuzzy numbers. Consideration of the combination of dry and wet conditions for water availability and precipitation makes it possible to gain insights into the various irrigation water allocations, and joint probabilities based on copula functions provide decision makers an average standard for irrigation. A case study on optimally allocating both surface water and groundwater to different growth periods of rice in different subareas in Heping irrigation area, Qing'an County, northeast China shows the potential and applicability of the developed model. Results show that the crop yield increase target especially in tillering and elongation stages is a prevailing concern when more water is available, and trading schemes can mitigate water supply cost and save water with an increased grain output. Results also reveal that the water allocation schemes are sensitive to the variation of water availability and precipitation with uncertain characteristics. The IFMONLP model is applicable for most irrigation areas with limited water supplies to determine irrigation water strategies under a fuzzy environment.
ERIC Educational Resources Information Center
Wang, Tianyou
2009-01-01
Holland and colleagues derived a formula for analytical standard error of equating using the delta-method for the kernel equating method. Extending their derivation, this article derives an analytical standard error of equating procedure for the conventional percentile rank-based equipercentile equating with log-linear smoothing. This procedure is…
Ajtony, Zsolt; Laczai, Nikoletta; Dravecz, Gabriella; Szoboszlai, Norbert; Marosi, Áron; Marlok, Bence; Streli, Christina; Bencs, László
2016-12-15
HR-CS-GFAAS methods were developed for the fast determination of Cu in domestic and commercially available Hungarian distilled alcoholic beverages (called pálinka), in order to decide whether their Cu content exceeds the permissible limit as legislated by the WHO. A few microliters of sample were dispensed directly into the atomizer. Graphite furnace heating programs, the effects and amounts of the Pd modifier, alternative wavelengths (e.g., Cu I 249.2146 nm), external calibration and internal standardization methods were studied. Applying a fast graphite furnace heating program without any chemical modifier, the Cu content of a sample could be quantitated within 1.5 min. The detection limit of the method is 0.03 mg/L. Calibration curves are linear up to 10-15 mg/L Cu. Spike recoveries ranged from 89% to 119% with an average of 100.9±8.5%. Internal calibration could be applied with the assistance of Cr, Fe, and/or Rh standards. The accuracy of the GFAAS results was verified by TXRF analyses. Copyright © 2016 Elsevier Ltd. All rights reserved.
Linear System of Equations, Matrix Inversion, and Linear Programming Using MS Excel
ERIC Educational Resources Information Center
El-Gebeily, M.; Yushau, B.
2008-01-01
In this note, we demonstrate with illustrations two different ways that MS Excel can be used to solve Linear Systems of Equation, Linear Programming Problems, and Matrix Inversion Problems. The advantage of using MS Excel is its availability and transparency (the user is responsible for most of the details of how a problem is solved). Further, we…
High-profile students' growth of mathematical understanding in solving linear programming problems
NASA Astrophysics Data System (ADS)
Utomo; Kusmayadi, TA; Pramudya, I.
2018-04-01
Linear programming plays an important role in human life. It is learned at the senior high school and college levels and is applied in economics, transportation, the military and elsewhere. Therefore, mastering linear programming is a useful provision for life. This research describes the growth of mathematical understanding in solving linear programming problems based on the Pirie-Kieren model of the growth of understanding. Thus, this research used a qualitative approach. The subjects were two grade XI students in Salatiga city with high profiles. The researcher chose the subjects based on the growth of understanding shown in a classroom test result; their marks on the prerequisite material were ≥ 75. Both subjects were interviewed to examine their growth of mathematical understanding in solving linear programming problems. The finding of this research showed that the subjects often folded back to the primitive knowing level before moving forward to the next level. This happened because the subjects' primitive understanding was not comprehensive.
Verification of spectrophotometric method for nitrate analysis in water samples
NASA Astrophysics Data System (ADS)
Kurniawati, Puji; Gusrianti, Reny; Dwisiwi, Bledug Bernanti; Purbaningtias, Tri Esti; Wiyantoko, Bayu
2017-12-01
The aim of this research was to verify the spectrophotometric method for analyzing nitrate in water samples using the APHA 2012 Section 4500 NO3-B method. The verification parameters used were linearity, method detection limit, level of quantitation, level of linearity, accuracy and precision. Linearity was assessed using 0 to 50 mg/L nitrate standard solutions, and the correlation coefficient of the standard calibration linear regression equation was 0.9981. The method detection limit (MDL) was determined to be 0.1294 mg/L and the limit of quantitation (LOQ) 0.4117 mg/L. The level of linearity (LOL) was 50 mg/L, and nitrate concentrations from 10 to 50 mg/L were linear at a 99% confidence level. The accuracy, determined through a recovery test, was 109.1907%. The precision was evaluated as the percent relative standard deviation (%RSD) of repeatability, with a result of 1.0886%. The tested performance criteria showed that the methodology was verified under the laboratory conditions.
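A sketch of how such verification numbers are typically computed, assuming seven made-up replicate readings of a low-level standard and an invented calibration series; using a Student-t multiplier on the replicate standard deviation for the MDL and a 10-s rule for the LOQ follows common practice and is an assumption here, not a detail stated in the abstract.

```python
import numpy as np
from scipy import stats

# Hypothetical replicate readings of a low-level nitrate standard (mg/L).
replicates = np.array([0.52, 0.48, 0.55, 0.50, 0.47, 0.53, 0.49])

s = np.std(replicates, ddof=1)
t99 = stats.t.ppf(0.99, df=len(replicates) - 1)   # ~3.14 for 7 replicates
print("MDL:", t99 * s, "mg/L")                    # method detection limit
print("LOQ:", 10 * s, "mg/L")                     # level of quantitation

# Linearity from a calibration series (standard conc vs. absorbance).
conc = np.array([0, 10, 20, 30, 40, 50], float)
absb = np.array([0.002, 0.101, 0.198, 0.305, 0.399, 0.502])
slope, intercept, r, *_ = stats.linregress(conc, absb)
print("correlation coefficient:", r)
```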
Can Linear Superiorization Be Useful for Linear Optimization Problems?
Censor, Yair
2017-01-01
Linear superiorization considers linear programming problems but instead of attempting to solve them with linear optimization methods it employs perturbation resilient feasibility-seeking algorithms and steers them toward reduced (not necessarily minimal) target function values. The two questions that we set out to explore experimentally are (i) Does linear superiorization provide a feasible point whose linear target function value is lower than that obtained by running the same feasibility-seeking algorithm without superiorization under identical conditions? and (ii) How does linear superiorization fare in comparison with the Simplex method for solving linear programming problems? Based on our computational experiments presented here, the answers to these two questions are: “yes” and “very well”, respectively. PMID:29335660
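A small sketch of the experiment's ingredients, under assumptions: a random feasible linear system, a feasibility-seeking sweep of sequential half-space projections, and, in the superiorized run, summable perturbation steps along -c taken between sweeps. The instance size, step schedule, and iteration counts are invented; the superiorized run typically ends feasible with a noticeably lower target value, which is the paper's question (i).

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 30, 10
A = rng.normal(size=(m, n))
b = A @ rng.uniform(1.0, 2.0, n) + 1.0   # ensures a nonempty feasible set
c = rng.normal(size=n)                   # linear target function

def sweep(x):
    """One pass of sequential projections onto the half-spaces a.x <= b."""
    for a_i, b_i in zip(A, b):
        viol = a_i @ x - b_i
        if viol > 0:
            x = x - (viol / (a_i @ a_i)) * a_i
    return x

def run(superiorize):
    x, step = rng.normal(size=n), 1.0
    for _ in range(200):
        if superiorize:
            x = x - step * c / np.linalg.norm(c)   # target-reducing nudge
            step *= 0.95                           # summable step sizes
        x = sweep(x)
    for _ in range(100):                           # pure feasibility sweeps
        x = sweep(x)
    return x

for flag in (False, True):
    x = run(flag)
    print("superiorized" if flag else "plain",
          "| feasible:", bool(np.all(A @ x <= b + 1e-6)), "| c.x:", c @ x)
```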
NASA Technical Reports Server (NTRS)
Egebrecht, R. A.; Thorbjornsen, A. R.
1967-01-01
Digital computer programs determine steady-state performance characteristics of active and passive linear circuits. The ac analysis program solves the basic circuit parameters. The compiler program solves these circuit parameters and in addition provides a more versatile program by allowing the user to perform mathematical and logical operations.
Twist Model Development and Results from the Active Aeroelastic Wing F/A-18 Aircraft
NASA Technical Reports Server (NTRS)
Lizotte, Andrew M.; Allen, Michael J.
2007-01-01
Understanding the wing twist of the active aeroelastic wing (AAW) F/A-18 aircraft is a fundamental research objective for the program and offers numerous benefits. In order to clearly understand the wing flexibility characteristics, a model was created to predict real-time wing twist. A reliable twist model allows the prediction of twist for flight simulation, provides insight into aircraft performance uncertainties, and assists with computational fluid dynamic and aeroelastic issues. The left wing of the aircraft was heavily instrumented during the first phase of the active aeroelastic wing program allowing deflection data collection. Traditional data processing steps were taken to reduce flight data, and twist predictions were made using linear regression techniques. The model predictions determined a consistent linear relationship between the measured twist and aircraft parameters, such as surface positions and aircraft state variables. Error in the original model was reduced in some cases by using a dynamic pressure-based assumption. This technique produced excellent predictions for flight between the standard test points and accounted for nonlinearities in the data. This report discusses data processing techniques and twist prediction validation, and provides illustrative and quantitative results.
Twist Model Development and Results From the Active Aeroelastic Wing F/A-18 Aircraft
NASA Technical Reports Server (NTRS)
Lizotte, Andrew; Allen, Michael J.
2005-01-01
Understanding the wing twist of the active aeroelastic wing F/A-18 aircraft is a fundamental research objective for the program and offers numerous benefits. In order to clearly understand the wing flexibility characteristics, a model was created to predict real-time wing twist. A reliable twist model allows the prediction of twist for flight simulation, provides insight into aircraft performance uncertainties, and assists with computational fluid dynamic and aeroelastic issues. The left wing of the aircraft was heavily instrumented during the first phase of the active aeroelastic wing program allowing deflection data collection. Traditional data processing steps were taken to reduce flight data, and twist predictions were made using linear regression techniques. The model predictions determined a consistent linear relationship between the measured twist and aircraft parameters, such as surface positions and aircraft state variables. Error in the original model was reduced in some cases by using a dynamic pressure-based assumption and by using neural networks. These techniques produced excellent predictions for flight between the standard test points and accounted for nonlinearities in the data. This report discusses data processing techniques and twist prediction validation, and provides illustrative and quantitative results.
Buckley, Elaine Jayne; Markwell, Stephen; Farr, Debb; Sanfey, Hilary; Mellinger, John
2015-10-01
American Board of Surgery In-Service Training Examination (ABSITE) scores are used to assess individual progress and predict board pass rates. We reviewed strategies to enhance ABSITE performance and their impact within a surgery residency. Several interventions were introduced from 2010 to 2014. A retrospective review was undertaken evaluating these and correlating them to ABSITE performance. Analyses of variance and linear trends were performed for ABSITE, United States Medical Licensing Examination (USMLEs), mock oral, and mock ABSITE scores followed by post hoc analyses if significant. Results were correlated with core curricular changes. ABSITE mean percentile increased 34% in 4 years with significant performance improvement and increasing linear trends in postgraduate year (PGY)1 and PGY4 ABSITE scores. Mock ABSITE introduction correlated to significant improvement in ABSITE scores for PGY4 and PGY5. Mock oral introduction correlated with significant improvement in PGY1 and PGY3. Our study demonstrates an improvement in mean program ABSITE percentiles correlating with multiple interventions. Similar strategies may be useful for other programs. Copyright © 2015 Elsevier Inc. All rights reserved.
Ohno, Hajime; Matsubae, Kazuyo; Nakajima, Kenichi; Kondo, Yasushi; Nakamura, Shinichiro; Fukushima, Yasuhiro; Nagasaka, Tetsuya
2017-11-21
The importance of end-of-life vehicles (ELVs) as an urban mine is expected to grow, as more people in developing countries experience increased standards of living, while automobiles are increasingly made using high-quality materials to meet stricter environmental and safety requirements. While most materials in ELVs, particularly steel, have been recycled at high rates, quality issues have not been adequately addressed due to the complex use of automobile materials, leading to considerable losses of valuable alloying elements. This study highlights the maximal potential of quality-oriented recycling of ELV steel, by exploring the utilization methods of scrap, sorted by parts, to produce electric-arc-furnace-based crude alloy steel with minimal losses of alloying elements. Using linear programming on the case of the Japanese economy in 2005, we found that adoption of parts-based scrap sorting could result in the recovery of around 94-98% of the alloying elements occurring in parts scrap (manganese, chromium, nickel, and molybdenum), which may replace 10% of the virgin sources in electric arc furnace-based crude alloy steel production.
Improved Linear-Ion-Trap Frequency Standard
NASA Technical Reports Server (NTRS)
Prestage, John D.
1995-01-01
Improved design concept for linear-ion-trap (LIT) frequency-standard apparatus proposed. Apparatus contains lengthened linear ion trap, and ions processed alternately in two regions: ions prepared in upper region of trap, then transported to lower region for exposure to microwave radiation, then returned to upper region for optical interrogation. Improved design intended to increase long-term frequency stability of apparatus while reducing size, mass, and cost.
Linear Programming and Its Application to Pattern Recognition Problems
NASA Technical Reports Server (NTRS)
Omalley, M. J.
1973-01-01
Linear programming and linear-programming-like techniques as applied to pattern recognition problems are discussed. Three relatively recent research articles on such applications are summarized. The main results of each paper are described, indicating the theoretical tools needed to obtain them. A synopsis of the author's comments is presented with regard to the applicability or non-applicability of his methods to particular problems, including computational results wherever given.
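One classic way linear programming enters pattern recognition is the search for a separating hyperplane. The sketch below is illustrative and not drawn from the surveyed articles: it minimizes total slack, so the optimum is zero exactly when the two point classes are linearly separable; the data are made up.

```python
# Minimal LP for pattern recognition: find a hyperplane w.x + b separating
# two classes by minimizing total slack (zero slack <=> linearly separable).
import numpy as np
from scipy.optimize import linprog

X = np.array([[0.0, 0.0], [1.0, 0.5], [0.5, 1.0],    # class -1
              [3.0, 3.0], [4.0, 2.5], [3.5, 4.0]])   # class +1
y = np.array([-1, -1, -1, 1, 1, 1])

n, d = X.shape
# Decision variables: [w (d), b, s (n)]; minimize the sum of slacks s.
c = np.concatenate([np.zeros(d + 1), np.ones(n)])
# y_i (w.x_i + b) + s_i >= 1  ->  -y_i x_i . w - y_i b - s_i <= -1
A_ub = np.hstack([-y[:, None] * X, -y[:, None], -np.eye(n)])
b_ub = -np.ones(n)
bounds = [(None, None)] * (d + 1) + [(0, None)] * n

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
w, b = res.x[:d], res.x[d]
print("separable:", np.isclose(res.fun, 0.0), " w =", np.round(w, 3), " b =", round(b, 3))
```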
On the linear relation between the mean and the standard deviation of a response time distribution.
Wagenmakers, Eric-Jan; Brown, Scott
2007-07-01
Although it is generally accepted that the spread of a response time (RT) distribution increases with the mean, the precise nature of this relation remains relatively unexplored. The authors show that in several descriptive RT distributions, the standard deviation increases linearly with the mean. Results from a wide range of tasks from different experimental paradigms support a linear relation between RT mean and RT standard deviation. Both R. Ratcliff's (1978) diffusion model and G. D. Logan's (1988) instance theory of automatization provide explanations for this linear relation. The authors identify and discuss 3 specific boundary conditions for the linear law to hold. The law constrains RT models and supports the use of the coefficient of variation to (a) compare variability while controlling for differences in baseline speed of processing and (b) assess whether changes in performance with practice are due to quantitative speedup or qualitative reorganization. Copyright 2007 APA.
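As a toy demonstration of the linear law (not the authors' datasets or analyses), gamma-distributed response times with a fixed shape parameter have a standard deviation exactly proportional to the mean, so (mean, SD) pairs across conditions fall on a line with slope equal to the coefficient of variation:

```python
# Sketch of the linear mean-SD relation: with a fixed gamma shape parameter,
# SD is proportional to the mean across conditions, so a line fits well.
import numpy as np

rng = np.random.default_rng(1)
shape = 4.0                               # fixed shape -> constant CV = 1/sqrt(shape)
means, sds = [], []
for scale in (60, 80, 100, 120, 150):     # hypothetical condition difficulty (ms)
    rt = rng.gamma(shape, scale, size=5000)
    means.append(rt.mean())
    sds.append(rt.std(ddof=1))

slope, intercept = np.polyfit(means, sds, 1)
print(f"fitted SD ~ {slope:.3f} * mean + {intercept:.1f}")
print(f"coefficient of variation (theory): {1/np.sqrt(shape):.3f}")
```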
NASA Technical Reports Server (NTRS)
Folta, David C.; Carpenter, J. Russell
1999-01-01
A decentralized control is investigated for applicability to the autonomous formation flying control algorithm developed by GSFC for the New Millennium Program Earth Observer-1 (EO-1) mission. This decentralized framework has the following characteristics: The approach is non-hierarchical, and coordination by a central supervisor is not required; Detected failures degrade the system performance gracefully; Each node in the decentralized network processes only its own measurement data, in parallel with the other nodes; Although the total computational burden over the entire network is greater than it would be for a single, centralized controller, fewer computations are required locally at each node; Requirements for data transmission between nodes are limited to only the dimension of the control vector, at the cost of maintaining a local additional data vector. The data vector compresses all past measurement history from all the nodes into a single vector of the dimension of the state; and The approach is optimal with respect to standard cost functions. The current approach is valid for linear time-invariant systems only. Similar to the GSFC formation flying algorithm, the extension to linear LQG time-varying systems requires that each node propagate its filter covariance forward (navigation) and controller Riccati matrix backward (guidance) at each time step. Extension of the GSFC algorithm to non-linear systems can also be accomplished via linearization about a reference trajectory in the standard fashion, or linearization about the current state estimate as with the extended Kalman filter. To investigate the feasibility of the decentralized integration with the GSFC algorithm, an existing centralized LQG design for a single spacecraft orbit control problem is adapted to the decentralized framework while using the GSFC algorithm's state transition matrices and framework. The existing GSFC design uses both reference trajectories of each spacecraft in formation and by appropriate choice of coordinates and simplified measurement modeling is formulated as a linear time-invariant system. Results for improvements to the GSFC algorithm and a multiple satellite formation will be addressed. The goal of this investigation is to progressively relax the assumptions that result in linear time-invariance, ultimately to the point of linearization of the non-linear dynamics about the current state estimate as in the extended Kalman filter. An assessment will then be made about the feasibility of the decentralized approach to the realistic formation flying application of the EO-1/Landsat 7 formation flying experiment.
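The per-node propagation described for the time-varying case can be sketched as two standard recursions: a backward Riccati pass for the controller gains (guidance) and a forward covariance pass for the filter (navigation). The matrices below are hypothetical stand-ins, not the EO-1 dynamics.

```python
# Hedged sketch of the per-node LQG propagation: controller Riccati matrix
# backward (guidance) and filter covariance forward (navigation).
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # state transition (assumed)
B = np.array([[0.005], [0.1]])           # control input map (assumed)
Q, R = np.eye(2), np.array([[1.0]])      # quadratic cost weights (assumed)
Qw = 1e-4 * np.eye(2)                    # process noise covariance (assumed)

# Guidance: backward Riccati recursion for finite-horizon LQR gains.
N = 50
S = Q.copy()
gains = []
for _ in range(N):
    K = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
    S = Q + A.T @ S @ (A - B @ K)
    gains.append(K)
gains.reverse()                           # gains[k] applies at step k

# Navigation: forward propagation of the filter covariance (prediction step).
P = 0.1 * np.eye(2)
for _ in range(N):
    P = A @ P @ A.T + Qw
print("first LQR gain:", np.round(gains[0], 3), " final covariance trace:", P.trace())
```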
Implementing Nonlinear Feedback Controllers Using DNA Strand Displacement Reactions.
Sawlekar, Rucha; Montefusco, Francesco; Kulkarni, Vishwesh V; Bates, Declan G
2016-07-01
We show how an important class of nonlinear feedback controllers can be designed using idealized abstract chemical reactions and implemented via DNA strand displacement (DSD) reactions. Exploiting chemical reaction networks (CRNs) as a programming language for the design of complex circuits and networks, we show how a set of unimolecular and bimolecular reactions can be used to realize input-output dynamics that produce a nonlinear quasi sliding mode (QSM) feedback controller. The kinetics of the required chemical reactions can then be implemented as enzyme-free, enthalpy/entropy driven DNA reactions using a toehold mediated strand displacement mechanism via Watson-Crick base pairing and branch migration. We demonstrate that the closed loop response of the nonlinear QSM controller outperforms a traditional linear controller by facilitating much faster tracking response dynamics without introducing overshoots in the transient response. The resulting controller is highly modular and is less affected by retroactivity effects than standard linear designs.
Efficient parallel architecture for highly coupled real-time linear system applications
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Homaifar, Abdollah; Barua, Soumavo
1988-01-01
A systematic procedure is developed for exploiting the parallel constructs of computation in a highly coupled, linear system application. An overall top-down design approach is adopted. Differential equations governing the application under consideration are partitioned into subtasks on the basis of a data flow analysis. The interconnected task units constitute a task graph which has to be computed in every update interval. Multiprocessing concepts utilizing parallel integration algorithms are then applied for efficient task graph execution. A simple scheduling routine is developed to handle task allocation while in the multiprocessor mode. Results of simulation and scheduling are compared on the basis of standard performance indices. Processor timing diagrams are developed on the basis of program output accruing to an optimal set of processors. Basic architectural attributes for implementing the system are discussed together with suggestions for processing element design. Emphasis is placed on flexible architectures capable of accommodating widely varying application specifics.
Donkin, Chris; Averell, Lee; Brown, Scott; Heathcote, Andrew
2009-11-01
Cognitive models of the decision process provide greater insight into response time and accuracy than do standard ANOVA techniques. However, such models can be mathematically and computationally difficult to apply. We provide instructions and computer code for three methods for estimating the parameters of the linear ballistic accumulator (LBA), a new and computationally tractable model of decisions between two or more choices. These methods (a Microsoft Excel worksheet, scripts for the statistical program R, and code for implementation of the LBA into the Bayesian sampling software WinBUGS) vary in their flexibility and user accessibility. We also provide scripts in R that produce a graphical summary of the data and model predictions. In a simulation study, we explored the effect of sample size on parameter recovery for each method. The materials discussed in this article may be downloaded as a supplement from http://brm.psychonomic-journals.org/content/supplemental.
Working Group Report: Higgs Boson
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dawson, Sally; Gritsan, Andrei; Logan, Heather
2013-10-30
This report summarizes the work of the Energy Frontier Higgs Boson working group of the 2013 Community Summer Study (Snowmass). We identify the key elements of a precision Higgs physics program and document the physics potential of future experimental facilities as elucidated during the Snowmass study. We study Higgs couplings to gauge boson and fermion pairs, double Higgs production for the Higgs self-coupling, its quantum numbers and $CP$-mixing in Higgs couplings, the Higgs mass and total width, and prospects for direct searches for additional Higgs bosons in extensions of the Standard Model. Our report includes projections of measurement capabilities from detailed studies of the Compact Linear Collider (CLIC), a Gamma-Gamma Collider, the International Linear Collider (ILC), the Large Hadron Collider High-Luminosity Upgrade (HL-LHC), Very Large Hadron Colliders up to 100 TeV (VLHC), a Muon Collider, and a Triple-Large Electron Positron Collider (TLEP).
The Healthy Mind, Healthy Mobility Trial: A Novel Exercise Program for Older Adults.
Gill, Dawn P; Gregory, Michael A; Zou, Guangyong; Liu-Ambrose, Teresa; Shigematsu, Ryosuke; Hachinski, Vladimir; Fitzgerald, Clara; Petrella, Robert J
2016-02-01
More evidence is needed to conclude that a specific program of exercise and/or cognitive training warrants prescription for the prevention of cognitive decline. We examined the effect of a group-based standard exercise program for older adults, with and without dual-task training, on cognitive function in older adults without dementia. We conducted a proof-of-concept, single-blinded, 26-wk randomized controlled trial whereby participants recruited from preexisting exercise classes at the Canadian Centre for Activity and Aging in London, Ontario, were randomized to the intervention group (exercise + dual-task [EDT]) or the control group (exercise only [EO]). Each week (2 or 3 d·wk⁻¹), both groups accumulated a minimum of 50 min of aerobic exercise (target 75 min) from standard group classes and completed 45 min of beginner-level square-stepping exercise. The EDT group was also required to answer cognitively challenging questions while doing beginner-level square-stepping exercise (i.e., dual-task training). The effect of interventions on standardized global cognitive function (GCF) scores at 26 wk was compared between the groups using the linear mixed effects model approach. Participants (n = 44; 68% female; mean [SD] age: 73.5 [7.2] yr) had, on average, objective evidence of cognitive impairment (Montreal Cognitive Assessment scores, mean [SD]: 24.9 [1.9]) but not dementia (Mini-Mental State Examination scores, mean [SD]: 28.8 [1.2]). After 26 wk, the EDT group showed greater improvement in GCF scores compared with the EO group (difference between groups in mean change [95% CI]: 0.20 SD [0.01-0.39], P = 0.04). A 26-wk group-based exercise program combined with dual-task training improved GCF in community-dwelling older adults without dementia.
Evaluation of Visual Field Progression in Glaucoma: Quasar Regression Program and Event Analysis.
Díaz-Alemán, Valentín T; González-Hernández, Marta; Perera-Sanz, Daniel; Armas-Domínguez, Karintia
2016-01-01
To determine the sensitivity, specificity and agreement between the Quasar program, glaucoma progression analysis (GPA II) event analysis and expert opinion in the detection of glaucomatous progression. The Quasar program is based on linear regression analysis of both mean defect (MD) and pattern standard deviation (PSD). Each series of visual fields was evaluated by three methods; Quasar, GPA II and four experts. The sensitivity, specificity and agreement (kappa) for each method was calculated, using expert opinion as the reference standard. The study included 439 SITA Standard visual fields of 56 eyes of 42 patients, with a mean of 7.8 ± 0.8 visual fields per eye. When suspected cases of progression were considered stable, sensitivity and specificity of Quasar, GPA II and the experts were 86.6% and 70.7%, 26.6% and 95.1%, and 86.6% and 92.6% respectively. When suspected cases of progression were considered as progressing, sensitivity and specificity of Quasar, GPA II and the experts were 79.1% and 81.2%, 45.8% and 90.6%, and 85.4% and 90.6% respectively. The agreement between Quasar and GPA II when suspected cases were considered stable or progressing was 0.03 and 0.28 respectively. The degree of agreement between Quasar and the experts when suspected cases were considered stable or progressing was 0.472 and 0.507. The degree of agreement between GPA II and the experts when suspected cases were considered stable or progressing was 0.262 and 0.342. The combination of MD and PSD regression analysis in the Quasar program showed better agreement with the experts and higher sensitivity than GPA II.
An Optimized Method for the Measurement of Acetaldehyde by High-Performance Liquid Chromatography
Guan, Xiangying; Rubin, Emanuel; Anni, Helen
2011-01-01
Background Acetaldehyde is produced during ethanol metabolism predominantly in the liver by alcohol dehydrogenase, and rapidly eliminated by oxidation to acetate via aldehyde dehydrogenase. Assessment of circulating acetaldehyde levels in biological matrices is performed by headspace gas chromatography and reverse phase high-performance liquid chromatography (RP-HPLC). Methods We have developed an optimized method for the measurement of acetaldehyde by RP-HPLC in hepatoma cell culture medium, blood and plasma. After sample deproteinization, acetaldehyde was derivatized with 2,4-dinitrophenylhydrazine (DNPH). The reaction was optimized for pH, amount of derivatization reagent, time and temperature. Extraction methods of the acetaldehyde-hydrazone (AcH-DNP) stable derivative and product stability studies were carried out. Acetaldehyde was identified by its retention time in comparison to an AcH-DNP standard, using a new chromatography gradient program, and quantitated based on external reference standards and standard addition calibration curves in the presence and absence of ethanol. Results Derivatization of acetaldehyde was performed at pH 4.0 with a 80-fold molar excess of DNPH. The reaction was completed in 40 min at ambient temperature, and the product was stable for 2 days. A clear separation of AcH-DNP from DNPH was obtained with a new 11-min chromatography program. Acetaldehyde detection was linear up to 80 μM. The recovery of acetaldehyde was >88% in culture media, and >78% in plasma. We quantitatively determined the ethanol-derived acetaldehyde in hepatoma cells, rat blood and plasma with a detection limit around 3 μM. The accuracy of the method was <9% for intraday and <15% for interday measurements, in small volume (70 μl) plasma sampling. Conclusions An optimized method for the quantitative determination of acetaldehyde in biological systems was developed using derivatization with DNPH, followed by a short RP-HPLC separation of AcH-DNP. The method has an extended linear range, is reproducible and applicable to small volume sampling of culture media and biological fluids. PMID:21895715
An optimized method for the measurement of acetaldehyde by high-performance liquid chromatography.
Guan, Xiangying; Rubin, Emanuel; Anni, Helen
2012-03-01
Acetaldehyde is produced during ethanol metabolism predominantly in the liver by alcohol dehydrogenase and rapidly eliminated by oxidation to acetate via aldehyde dehydrogenase. Assessment of circulating acetaldehyde levels in biological matrices is performed by headspace gas chromatography and reverse phase high-performance liquid chromatography (RP-HPLC). We have developed an optimized method for the measurement of acetaldehyde by RP-HPLC in hepatoma cell culture medium, blood, and plasma. After sample deproteinization, acetaldehyde was derivatized with 2,4-dinitrophenylhydrazine (DNPH). The reaction was optimized for pH, amount of derivatization reagent, time, and temperature. Extraction methods of the acetaldehyde-hydrazone (AcH-DNP) stable derivative and product stability studies were carried out. Acetaldehyde was identified by its retention time in comparison with AcH-DNP standard, using a new chromatography gradient program, and quantitated based on external reference standards and standard addition calibration curves in the presence and absence of ethanol. Derivatization of acetaldehyde was performed at pH 4.0 with an 80-fold molar excess of DNPH. The reaction was completed in 40 minutes at ambient temperature, and the product was stable for 2 days. A clear separation of AcH-DNP from DNPH was obtained with a new 11-minute chromatography program. Acetaldehyde detection was linear up to 80 μM. The recovery of acetaldehyde was >88% in culture media and >78% in plasma. We quantitatively determined the ethanol-derived acetaldehyde in hepatoma cells, rat blood and plasma with a detection limit around 3 μM. The accuracy of the method was <9% for intraday and <15% for interday measurements, in small volume (70 μl) plasma sampling. An optimized method for the quantitative determination of acetaldehyde in biological systems was developed using derivatization with DNPH, followed by a short RP-HPLC separation of AcH-DNP. The method has an extended linear range, is reproducible and applicable to small-volume sampling of culture media and biological fluids. Copyright © 2011 by the Research Society on Alcoholism.
Statistical power for detecting trends with applications to seabird monitoring
Hatch, Shyla A.
2003-01-01
Power analysis is helpful in defining goals for ecological monitoring and evaluating the performance of ongoing efforts. I examined detection standards proposed for population monitoring of seabirds using two programs (MONITOR and TRENDS) specially designed for power analysis of trend data. Neither program models within- and among-years components of variance explicitly and independently, thus an error term that incorporates both components is an essential input. Residual variation in seabird counts consisted of day-to-day variation within years and unexplained variation among years in approximately equal parts. The appropriate measure of error for power analysis is the standard error of estimation (S.E.est) from a regression of annual means against year. Replicate counts within years are helpful in minimizing S.E.est but should not be treated as independent samples for estimating power to detect trends. Other issues include a choice of assumptions about variance structure and selection of an exponential or linear model of population change. Seabird count data are characterized by strong correlations between S.D. and mean, thus a constant CV model is appropriate for power calculations. Time series were fit about equally well with exponential or linear models, but log transformation ensures equal variances over time, a basic assumption of regression analysis. Using sample data from seabird monitoring in Alaska, I computed the number of years required (with annual censusing) to detect trends of -1.4% per year (50% decline in 50 years) and -2.7% per year (50% decline in 25 years). At α = 0.05 and a desired power of 0.9, estimated study intervals ranged from 11 to 69 years depending on species, trend, software, and study design. Power to detect a negative trend of 6.7% per year (50% decline in 10 years) is suggested as an alternative standard for seabird monitoring that achieves a reasonable match between statistical and biological significance.
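A simulation version of this power calculation, under the constant-CV assumption the author recommends, might look like the following sketch; the population size, CV, and decline rate are illustrative, not the Alaska monitoring data.

```python
# Hedged sketch: simulate annual counts with constant CV declining at a fixed
# rate, regress log counts on year, and estimate power as the fraction of
# replicates with a significant negative slope.
import numpy as np
from scipy import stats

def power_for_trend(rate=-0.027, years=25, cv=0.20, alpha=0.05, nrep=2000, seed=2):
    rng = np.random.default_rng(seed)
    t = np.arange(years)
    sigma = np.sqrt(np.log(1 + cv**2))        # lognormal sigma for a given CV
    hits = 0
    for _ in range(nrep):
        mu = 1000 * np.exp(rate * t)          # expected counts (hypothetical)
        counts = mu * rng.lognormal(-sigma**2 / 2, sigma, years)
        res = stats.linregress(t, np.log(counts))
        if res.pvalue < alpha and res.slope < 0:
            hits += 1
    return hits / nrep

print("power, 2.7%/yr decline over 25 yr:", power_for_trend())
```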
The RANDOM computer program: A linear congruential random number generator
NASA Technical Reports Server (NTRS)
Miles, R. F., Jr.
1986-01-01
The RANDOM Computer Program is a FORTRAN program for generating random number sequences and testing linear congruential random number generators (LCGs). The linear congruential form of random number generator is discussed, and the selection of parameters of an LCG for a microcomputer is described. This document describes the following: (1) The RANDOM Computer Program; (2) RANDOM.MOD, the computer code needed to implement an LCG in a FORTRAN program; and (3) The RANCYCLE and the ARITH Computer Programs that provide computational assistance in the selection of parameters for an LCG. The RANDOM, RANCYCLE, and ARITH Computer Programs are written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles. With only minor modifications, the RANDOM Computer Program and its LCG can be run on most microcomputers or mainframe computers.
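For reference, a linear congruential generator has the form x_{n+1} = (a·x_n + c) mod m. A minimal sketch follows; the constants shown are the well-known "minimal standard" Lehmer parameters, not necessarily those selected by the RANDOM program.

```python
# Minimal linear congruential generator: x_{n+1} = (a * x_n + c) mod m.
# Constants are the classic Lehmer "minimal standard" (assumed, not RANDOM's).
class LCG:
    def __init__(self, seed=1, a=16807, c=0, m=2**31 - 1):
        self.state = seed
        self.a, self.c, self.m = a, c, m

    def next_int(self):
        self.state = (self.a * self.state + self.c) % self.m
        return self.state

    def next_float(self):
        return self.next_int() / self.m      # uniform in (0, 1)

gen = LCG(seed=12345)
print([round(gen.next_float(), 6) for _ in range(5)])
```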
Accommodation of practical constraints by a linear programming jet select. [for Space Shuttle
NASA Technical Reports Server (NTRS)
Bergmann, E.; Weiler, P.
1983-01-01
An experimental spacecraft control system will be incorporated into the Space Shuttle flight software and exercised during a forthcoming mission to evaluate its performance and handling qualities. The control system incorporates a 'phase space' control law to generate rate change requests and a linear programming jet select to compute jet firings. Posed as a linear programming problem, jet selection must represent the rate change request as a linear combination of jet acceleration vectors where the coefficients are the jet firing times, while minimizing the fuel expended in satisfying that request. This problem is solved in real time using a revised Simplex algorithm. In order to implement the jet selection algorithm in the Shuttle flight control computer, it was modified to accommodate certain practical features of the Shuttle such as limited computer throughput, lengthy firing times, and a large number of control jets. To the authors' knowledge, this is the first such application of linear programming. It was made possible by careful consideration of the jet selection problem in terms of the properties of linear programming and the Simplex algorithm. These modifications to the jet select algorithm may be useful for the design of reaction controlled spacecraft.
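A miniature of this formulation, with invented jet data rather than Shuttle values: express the requested rate change as a nonnegative combination of jet acceleration vectors while minimizing fuel.

```python
# Hedged sketch of the jet-select LP: firing times t >= 0 satisfy
# A_eq t = rate change request while minimizing fuel. Jet data are invented.
import numpy as np
from scipy.optimize import linprog

# Columns: per-jet angular acceleration vectors (roll, pitch, yaw), assumed.
A_eq = np.array([[ 1.0, -1.0,  0.2, -0.2, 0.0,  0.0],
                 [ 0.1,  0.1,  1.0, -1.0, 0.3, -0.3],
                 [ 0.0,  0.0,  0.2,  0.2, 1.0, -1.0]])
fuel_rate = np.array([1.0, 1.0, 1.2, 1.2, 0.8, 0.8])   # fuel per second of firing
request = np.array([0.5, -0.3, 0.4])                    # desired rate change

res = linprog(fuel_rate, A_eq=A_eq, b_eq=request,
              bounds=[(0, None)] * 6)                   # firing times >= 0
print("firing times:", np.round(res.x, 4), " fuel:", round(res.fun, 4))
```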
Timber management planning with timber ram and goal programming
Richard C. Field
1978-01-01
By using goal programming to enhance the linear programming of Timber RAM, multiple decision criteria were incorporated in the timber management planning of a National Forest in the southeastern United States. Combining linear and goal programming capitalizes on the advantages of the two techniques and produces operationally feasible solutions. This enhancement may...
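A small sketch of the goal-programming idea (with invented numbers, not Timber RAM data): each goal becomes a soft constraint with deviation variables, and the objective minimizes weighted deviations.

```python
# Goal programming sketch: goal k becomes a_k . x + d_minus - d_plus = target_k,
# and the LP minimizes weighted deviations. All quantities are hypothetical.
import numpy as np
from scipy.optimize import linprog

# Two decision variables (e.g., acres cut in two periods), two goals:
# harvest volume ~ 100 units, and even flow: x1 - x2 ~ 0.
# Variables: [x1, x2, d1-, d1+, d2-, d2+]
c = np.array([0, 0, 1.0, 1.0, 5.0, 5.0])      # even-flow deviations weighted 5x
A_eq = np.array([[2.0, 1.5,  1, -1,  0,  0],  # 2*x1 + 1.5*x2 + d1- - d1+ = 100
                 [1.0, -1.0, 0,  0,  1, -1]]) # x1 - x2 + d2- - d2+ = 0
b_eq = np.array([100.0, 0.0])
A_ub = np.array([[1.0, 1.0, 0, 0, 0, 0]])     # land available: x1 + x2 <= 60
b_ub = np.array([60.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 6)
print("acres:", np.round(res.x[:2], 2), " goal deviations:", np.round(res.x[2:], 2))
```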
A comparison of Heuristic method and Llewellyn’s rules for identification of redundant constraints
NASA Astrophysics Data System (ADS)
Estiningsih, Y.; Farikhin; Tjahjana, R. H.
2018-03-01
An important technique in linear programming is the modelling and solving of practical optimization problems. Redundant constraints are considered for their effects on general linear programming problems. Identifying and removing redundant constraints avoids the unnecessary calculations associated with them when solving an associated linear programming problem. Many methods have been proposed for identifying redundant constraints. This paper presents a comparison of the Heuristic method and Llewellyn's rules for the identification of redundant constraints.
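Both approaches refine the basic LP test for redundancy, which can be sketched as follows: constraint k is redundant if maximizing its left-hand side subject to the remaining constraints cannot exceed its right-hand side. The example system is made up.

```python
# Standard LP redundancy test: a_k . x <= b_k is redundant if
# max a_k . x over the other constraints never exceeds b_k.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0],      # x + y <= 4
              [1.0, 0.0],      # x <= 3
              [2.0, 2.0]])     # 2x + 2y <= 10  (redundant: implied by row 0)
b = np.array([4.0, 3.0, 10.0])

for k in range(len(b)):
    others = [i for i in range(len(b)) if i != k]
    # linprog minimizes, so maximize a_k . x via -a_k.
    res = linprog(-A[k], A_ub=A[others], b_ub=b[others],
                  bounds=[(0, None), (0, None)])
    redundant = res.status == 0 and -res.fun <= b[k] + 1e-9
    print(f"constraint {k}: redundant = {redundant}")
```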
An Occupational Performance Test Validation Program for Fire Fighters at the Kennedy Space Center
NASA Technical Reports Server (NTRS)
Schonfeld, Brian R.; Doerr, Donald F.; Convertino, Victor A.
1990-01-01
We evaluated performance of a modified Combat Task Test (CTT) and of standard fitness tests in 20 male subjects to assess the prediction of occupational performance standards for Kennedy Space Center fire fighters. The CTT consisted of stair-climbing, a chopping simulation, and a victim rescue simulation. Average CTT performance time was 3.61 ± 0.25 min (SEM) and all CTT tasks required 93% to 97% maximal heart rate. By using scores from the standard fitness tests, a multiple linear regression model was fitted to each parameter: the stair-climb (r² = .905, P < .05), the chopping performance time (r² = .582, P < .05), the victim rescue time (r² = .218, P = not significant), and the total performance time (r² = .769, P < .05). Treadmill time was the predominant variable, being the major predictor in two of four models. These results indicated that standardized fitness tests can predict performance on some CTT tasks and that test predictors were amenable to exercise training.
Solution of the Generalized Noah's Ark Problem.
Billionnet, Alain
2013-01-01
The phylogenetic diversity (PD) of a set of species is a measure of the evolutionary distance among the species in the collection, based on a phylogenetic tree. Such a tree is composed of a root, internal nodes, and leaves that correspond to the set of taxa under study. With each edge of the tree is associated a non-negative branch length (evolutionary distance). If a particular survival probability is associated with each taxon, the PD measure becomes the expected PD measure. In the Noah's Ark Problem (NAP) introduced by Weitzman (1998), these survival probabilities can be increased at some cost. The problem is to determine how best to allocate a limited amount of resources to maximize the expected PD of the considered species. It is easy to formulate the NAP as a (difficult) nonlinear 0-1 programming problem. The aim of this article is to show that a general version of the NAP (GNAP) can be solved simply and efficiently with any set of edge weights and any set of survival probabilities by using standard mixed-integer linear programming software. The crucial point to move from a nonlinear program in binary variables to a mixed-integer linear program is to approximate the logarithmic function by the lower envelope of a set of tangents to the curve. Solving the obtained mixed-integer linear program provides not only a near-optimal solution but also an upper bound on the value of the optimal solution. We also applied this approach to a generalization of the nature reserve problem (GNRP) that consists of selecting a set of regions to be conserved so that the expected PD of the set of species present in these regions is maximized. In this case, the survival probabilities of different taxa are not independent of each other. Computational results are presented to illustrate the potential of the approach. Near-optimal solutions with hypothetical phylogenetic trees comprising about 4000 taxa are obtained in a few seconds or minutes of computing time for the GNAP, and in about 30 min for the GNRP. In all cases the average guarantee varies from 0% to 1.20%.
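The tangent-envelope linearization can be checked numerically: because the logarithm is concave, every tangent lies above the curve, so the pointwise minimum of a few tangents over-approximates log tightly and can be encoded as linear constraints t ≤ slope_i·x + intercept_i in the mixed-integer program. A small sketch with arbitrarily chosen touch points:

```python
# Numeric check of the linearization: the "lower envelope" (pointwise min)
# of tangents to the concave log function over-approximates it from above.
import numpy as np

def tangent_envelope(x, touch_points):
    """min over tangents of log drawn at the given touch points."""
    # tangent at p: log(p) + (x - p)/p, i.e. slope 1/p, intercept log(p) - 1
    vals = [np.log(p) + (x - p) / p for p in touch_points]
    return np.min(vals, axis=0)

x = np.linspace(0.05, 1.0, 200)
approx = tangent_envelope(x, touch_points=[0.05, 0.1, 0.2, 0.4, 0.7, 1.0])
err = approx - np.log(x)
print(f"max over-approximation error: {err.max():.4f} (always >= 0: {bool((err >= -1e-12).all())})")
```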
NASA Astrophysics Data System (ADS)
Kondayya, Gundra; Shukla, Alok
2012-03-01
Pariser-Parr-Pople (P-P-P) model Hamiltonian is employed frequently to study the electronic structure and optical properties of π-conjugated systems. In this paper we describe a Fortran 90 computer program which uses the P-P-P model Hamiltonian to solve the Hartree-Fock (HF) equation for infinitely long, one-dimensional, periodic, π-electron systems. The code is capable of computing the band structure, as well as the linear optical absorption spectrum, by using the tight-binding and the HF methods. Furthermore, using our program the user can solve the HF equation in the presence of a finite external electric field, thereby allowing the simulation of gated systems. We apply our code to compute various properties of polymers such as trans-polyacetylene, poly-para-phenylene, and armchair and zigzag graphene nanoribbons, in the infinite length limit.
Program summary
Program title: ppp_bulk.x
Catalogue identifier: AEKW_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKW_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 87 464
No. of bytes in distributed program, including test data, etc.: 2 046 933
Distribution format: tar.gz
Programming language: Fortran 90
Computer: PCs and workstations
Operating system: Linux. Code was developed and tested on various recent versions of 64-bit Fedora, including Fedora 14 (kernel version 2.6.35.12-90).
Classification: 7.3
External routines: This program needs to link with LAPACK/BLAS libraries compiled with the same compiler as the program. For the Intel Fortran Compiler we used the ACML library version 4.4.0, while for the gfortran compiler we used the libraries supplied with the Fedora distribution.
Nature of problem: The electronic structure of one-dimensional periodic π-conjugated systems is an intense area of research at present because of the tremendous interest in the physics of conjugated polymers and graphene nanoribbons. The computer program described in this paper provides an efficient way of solving the Hartree-Fock equations for such systems within the P-P-P model. In addition to the Bloch orbitals, band structure, and the density of states, the program can also compute quantities such as the linear absorption spectrum, and the electro-absorption spectrum of these systems.
Solution method: For a one-dimensional periodic π-conjugated system lying in the xy-plane, the single-particle Bloch orbitals are expressed as linear combinations of p-orbitals of individual atoms. Then using various parameters defining the P-P-P Hamiltonian, the Hartree-Fock equations are set up as a matrix eigenvalue problem in the k-space. Thereby, its solutions are obtained in a self-consistent manner, using the iterative diagonalizing technique at several k points. The band structure and the corresponding Bloch orbitals thus obtained are used to perform a variety of calculations such as the density of states, linear optical absorption spectrum, electro-absorption spectrum, etc.
Running time: Most of the examples provided take only a few seconds to run. For a large system, however, depending on the system size, the run time may be a few minutes to a few hours.
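The first step of the solution method, building and diagonalizing the k-space single-particle Hamiltonian, can be illustrated for the simplest case: a dimerized chain with two sites per unit cell, a standard tight-binding caricature of trans-polyacetylene. The hoppings below are illustrative, and the P-P-P interaction terms are omitted.

```python
# Tight-binding band structure sketch for a dimerized 1D chain: build the
# 2x2 Bloch Hamiltonian H(k) and diagonalize at each k. Parameters assumed.
import numpy as np

t1, t2 = 2.5, 2.0                         # intra/inter-cell hoppings (eV, assumed)
ks = np.linspace(-np.pi, np.pi, 101)      # crystal momentum, lattice constant = 1
bands = []
for k in ks:
    off = t1 + t2 * np.exp(-1j * k)       # coupling between the two sublattices
    Hk = np.array([[0, off], [np.conj(off), 0]])
    bands.append(np.linalg.eigvalsh(Hk))  # valence and conduction energies
bands = np.array(bands)
gap = bands[:, 1].min() - bands[:, 0].max()
print(f"band gap ~ {gap:.3f} eV (expected 2|t1 - t2| = {2 * abs(t1 - t2):.3f})")
```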
High-speed multiple sequence alignment on a reconfigurable platform.
Oliver, Tim; Schmidt, Bertil; Maskell, Douglas; Nathan, Darran; Clemens, Ralf
2006-01-01
Progressive alignment is a widely used approach to compute multiple sequence alignments (MSAs). However, aligning several hundred sequences by popular progressive alignment tools requires hours on sequential computers. Due to the rapid growth of sequence databases, biologists have to compute MSAs in a far shorter time. In this paper we present a new approach to MSA on reconfigurable hardware platforms to gain high performance at low cost. We have constructed a linear systolic array to perform pairwise sequence distance computations using dynamic programming. This results in an implementation with significant runtime savings on a standard FPGA.
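The per-cell recurrence that such a systolic array parallelizes is the standard dynamic-programming distance computation; in hardware, the cells along each anti-diagonal can be updated concurrently. A scalar sketch, with edit distance standing in for the actual scoring scheme:

```python
# Standard DP pairwise distance (edit distance shown); a systolic array
# computes the cells of each anti-diagonal of this table in parallel.
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution/match
        prev = curr
    return prev[-1]

print(edit_distance("GATTACA", "GCATGCU"))  # -> 4
```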
Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A
2017-01-01
Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. PMID:29106476
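A hedged sketch of the linear TSRI estimator with a bootstrap standard error, on simulated data rather than the paper's examples: stage one regresses the exposure on the instrument, and stage two includes the stage-one residuals in the outcome regression.

```python
# Linear two-stage residual inclusion with a bootstrap SE. Data simulated;
# the true causal effect is 0.3 and u is an unmeasured confounder.
import numpy as np

rng = np.random.default_rng(4)
n = 2000
z = rng.binomial(2, 0.3, n).astype(float)        # genotype instrument
u = rng.normal(0, 1, n)                          # unmeasured confounder
x = 0.5 * z + u + rng.normal(0, 1, n)            # exposure
y = 0.3 * x + u + rng.normal(0, 1, n)            # outcome

def tsri(z, x, y):
    Z = np.column_stack([np.ones_like(z), z])
    resid = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # stage 1 residuals
    X2 = np.column_stack([np.ones_like(x), x, resid])      # residual inclusion
    return np.linalg.lstsq(X2, y, rcond=None)[0][1]        # coefficient on x

est = tsri(z, x, y)
boot = [tsri(z[i], x[i], y[i])
        for i in (rng.integers(0, n, n) for _ in range(500))]
print(f"estimate: {est:.3f}  bootstrap SE: {np.std(boot, ddof=1):.3f}")
```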
ERIC Educational Resources Information Center
Matzke, Orville R.
The purpose of this study was to formulate a linear programming model to simulate a foundation type support program and to apply this model to a state support program for the public elementary and secondary school districts in the State of Iowa. The model was successful in producing optimal solutions to five objective functions proposed for…
NASA standard: Trend analysis techniques
NASA Technical Reports Server (NTRS)
1990-01-01
Descriptive and analytical techniques for NASA trend analysis applications are presented in this standard. Trend analysis is applicable in all organizational elements of NASA connected with, or supporting, developmental/operational programs. This document should be consulted for any data analysis activity requiring the identification or interpretation of trends. Trend analysis is neither a precise term nor a circumscribed methodology: it generally connotes quantitative analysis of time-series data. For NASA activities, the appropriate and applicable techniques include descriptive and graphical statistics, and the fitting or modeling of data by linear, quadratic, and exponential models. Usually, but not always, the data is time-series in nature. Concepts such as autocorrelation and techniques such as Box-Jenkins time-series analysis would only rarely apply and are not included in this document. The basic ideas needed for qualitative and quantitative assessment of trends along with relevant examples are presented.
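The curve-fitting portion of the standard can be sketched as follows, fitting linear, quadratic, and exponential models to a synthetic time series and comparing residual sums of squares; the data and models here are illustrative only.

```python
# Sketch of trend-model fitting: linear, quadratic, and exponential models
# fit to synthetic time-series data, compared by residual sum of squares.
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(20, dtype=float)
rng = np.random.default_rng(3)
y = 5.0 * np.exp(0.08 * t) + rng.normal(0, 0.5, t.size)   # synthetic trend data

lin = np.polyfit(t, y, 1)                  # linear model
quad = np.polyfit(t, y, 2)                 # quadratic model
expo, _ = curve_fit(lambda t, a, b: a * np.exp(b * t), t, y, p0=(1.0, 0.05))

for name, pred in [("linear", np.polyval(lin, t)),
                   ("quadratic", np.polyval(quad, t)),
                   ("exponential", expo[0] * np.exp(expo[1] * t))]:
    rss = np.sum((y - pred) ** 2)
    print(f"{name:12s} residual sum of squares: {rss:8.2f}")
```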
On the LHC sensitivity for non-thermalised hidden sectors
NASA Astrophysics Data System (ADS)
Kahlhoefer, Felix
2018-04-01
We show under rather general assumptions that hidden sectors that never reach thermal equilibrium in the early Universe are also inaccessible for the LHC. In other words, any particle that can be produced at the LHC must either have been in thermal equilibrium with the Standard Model at some point or must be produced via the decays of another hidden sector particle that has been in thermal equilibrium. To reach this conclusion, we parametrise the cross section connecting the Standard Model to the hidden sector in a very general way and use methods from linear programming to calculate the largest possible number of LHC events compatible with the requirement of non-thermalisation. We find that even the HL-LHC cannot possibly produce more than a few events with energy above 10 GeV involving states from a non-thermalised hidden sector.
Generating Linear Equations Based on Quantitative Reasoning
ERIC Educational Resources Information Center
Lee, Mi Yeon
2017-01-01
The Common Core's Standards for Mathematical Practice encourage teachers to develop their students' ability to reason abstractly and quantitatively by helping students make sense of quantities and their relationships within problem situations. The seventh-grade content standards include objectives pertaining to developing linear equations in…
NASA Astrophysics Data System (ADS)
Ebrahimnejad, Ali
2015-08-01
There are several methods, in the literature, for solving fuzzy variable linear programming problems (fuzzy linear programming in which the right-hand-side vectors and decision variables are represented by trapezoidal fuzzy numbers). In this paper, the shortcomings of some existing methods are pointed out and to overcome these shortcomings a new method based on the bounded dual simplex method is proposed to determine the fuzzy optimal solution of that kind of fuzzy variable linear programming problems in which some or all variables are restricted to lie within lower and upper bounds. To illustrate the proposed method, an application example is solved and the obtained results are given. The advantages of the proposed method over existing methods are discussed. Also, one application of this algorithm in solving bounded transportation problems with fuzzy supplies and demands is dealt with. The proposed method is easy to understand and to apply for determining the fuzzy optimal solution of bounded fuzzy variable linear programming problems occurring in real-life situations.
NASA Astrophysics Data System (ADS)
İnkaya, Ersin; Günnaz, Salih; Özdemir, Namık; Dayan, Osman; Dinçer, Muharrem; Çetinkaya, Bekir
2013-02-01
The title molecule, 2,6-bis(1-benzyl-1H-benzo[d]imidazol-2-yl)pyridine (C33H25N5), was synthesized and characterized by elemental analysis, FT-IR spectroscopy, one- and two-dimensional NMR spectroscopies, and single-crystal X-ray diffraction. In addition, the molecular geometry, vibrational frequencies and gauge-independent atomic orbital (GIAO) 1H and 13C NMR chemical shift values of the title compound in the ground state have been calculated using the density functional theory at the B3LYP/6-311G(d,p) level, and compared with the experimental data. The complete assignments of all vibrational modes were performed by potential energy distributions using the VEDA 4 program. The geometrical parameters of the optimized structure are in good agreement with the X-ray crystallographic data, and the theoretical vibrational frequencies and GIAO 1H and 13C NMR chemical shifts show good agreement with experimental values. Besides, the molecular electrostatic potential (MEP) distribution, frontier molecular orbitals (FMO) and non-linear optical properties of the title compound were investigated by theoretical calculations at the B3LYP/6-311G(d,p) level. The linear polarizabilities and first hyperpolarizabilities of the molecule indicate that the compound is a good candidate for nonlinear optical materials. The thermodynamic properties of the compound at different temperatures were calculated, revealing the correlations between standard heat capacity, standard entropy, standard enthalpy changes and temperature.
The XXth International Workshop High Energy Physics and Quantum Field Theory
NASA Astrophysics Data System (ADS)
The Workshop continues a series of workshops started by the Skobeltsyn Institute of Nuclear Physics of Lomonosov Moscow State University (SINP MSU) in 1985 and conceived with the purpose of presenting topics of current interest and providing a stimulating environment for scientific discussion on new developments in theoretical and experimental high energy physics and physics programs for future colliders. Traditionally the list of workshop attendees includes a great number of active young scientists and students from Russia and other countries. This year's Workshop is organized jointly by the SINP MSU and the Southern Federal University (SFedU) and will take place in the holiday hotel "Luchezarniy" (Effulgent) situated on the Black Sea shore in a picturesque natural park in the suburb of the largest Russian resort city Sochi - the host city of the XXII Olympic Winter Games to be held in 2014. The main topics to be covered are: Experimental results from the LHC. Tevatron summary: the status of the Standard Model and the boundaries on BSM physics. Future physics at Linear Colliders and super B-factories. Extensions of the Standard Model and their phenomenological consequences at the LHC and Linear Colliders: SUSY extensions of the Standard Model; particle interactions in space-time with extra dimensions; strings, quantum groups and new ideas from modern algebra and geometry. Higher order corrections and resummations for collider phenomenology. Automatic calculations of Feynman diagrams and Monte Carlo simulations. LHC/LC and astroparticle/cosmology connections. Modern nuclear physics and relativistic nucleus-nucleus collisions.
Protection of Workers and Third Parties during the Construction of Linear Structures
NASA Astrophysics Data System (ADS)
Vlčková, Jitka; Venkrbec, Václav; Henková, Svatava; Chromý, Adam
2017-12-01
The minimization of risk in the workplace through a focus on occupational health and safety (OHS) is one of the primary objectives for every construction project. The most serious accidents in the construction industry occur during work on earthworks and linear structures. The character of such structures places them among those posing the greatest threat to the public (referred to as “third parties”). They can be characterized as large structures whose construction may involve the building site extending in a narrow lane alongside previously constructed objects currently in use by the public. Linear structures are often directly connected to existing objects or buildings, making it impossible to guard the whole construction site. However, many OHS problems related to linear structures can be prevented during the design stage. The aim of this article is to introduce a new methodology which has been implemented into a computer program that deals with safety measures at construction sites where work is performed on linear structures. Based on existing experience with the design of such structures and their execution and supervision by safety coordinators, the basic types of linear structures, their location in the terrain, the conditions present during their execution and other marginal conditions and influences were modelled. Basic safety information has been assigned to this elementary information, which is strictly necessary for the construction process. The safety provisions can be grouped according to type, e.g. technical, organizational and other necessary documentation, or into sets of provisions concerning areas such as construction site safety, transport safety, earthworks safety, etc. The selection of the given provisions takes place using multiple criteria. The aim of creating this program is to provide a practical tool for designers, contractors and construction companies. The model can help ensure that these participants are sufficiently aware of the technical and organizational provisions that will help them meet workplace safety requirements. The software for the selection of safety provisions also contains a module that can calculate necessary cost estimates using a calculation formula chosen by the user. All software data conform to European standards harmonized for the Czech Republic.
Can linear superiorization be useful for linear optimization problems?
NASA Astrophysics Data System (ADS)
Censor, Yair
2017-04-01
Linear superiorization (LinSup) considers linear programming problems but instead of attempting to solve them with linear optimization methods it employs perturbation resilient feasibility-seeking algorithms and steers them toward reduced (not necessarily minimal) target function values. The two questions that we set out to explore experimentally are: (i) does LinSup provide a feasible point whose linear target function value is lower than that obtained by running the same feasibility-seeking algorithm without superiorization under identical conditions? (ii) How does LinSup fare in comparison with the Simplex method for solving linear programming problems? Based on our computational experiments presented here, the answers to these two questions are: ‘yes’ and ‘very well’, respectively.
Portfolio optimization using fuzzy linear programming
NASA Astrophysics Data System (ADS)
Pandit, Purnima K.
2013-09-01
Portfolio Optimization (PO) is a problem in Finance, in which an investor tries to maximize return and minimize risk by carefully choosing different assets. Expected return and risk are the most important parameters with regard to optimal portfolios. In its simple form, PO can be modeled as a quadratic programming problem which can be put into an equivalent linear form. PO problems with fuzzy parameters can be solved as multi-objective fuzzy linear programming problems. In this paper we give the solution to such problems with an illustrative example.
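One common route to the "equivalent linear form" (assumed here; the paper may use a different linearization) is to replace variance by mean absolute deviation, as in the Konno-Yamazaki model, which turns portfolio selection into a plain LP. The return scenarios below are invented.

```python
# Mean-absolute-deviation portfolio LP (Konno-Yamazaki form, assumed):
# minimize mean |deviation| subject to a return target and full investment.
import numpy as np
from scipy.optimize import linprog

R = np.array([[0.02, 0.05, -0.01],      # scenario returns, 4 periods x 3 assets
              [0.01, -0.02, 0.03],
              [0.03, 0.04, 0.02],
              [-0.01, 0.03, 0.01]])
mu = R.mean(axis=0)
T, m = R.shape
target = 0.015                           # required expected return (assumed)

# Variables: [w (m), d (T)]; minimize mean of d with d_t >= |(R - mu) w|_t.
c = np.concatenate([np.zeros(m), np.ones(T) / T])
D = R - mu
A_ub = np.vstack([np.hstack([D, -np.eye(T)]),               #  (D w) - d <= 0
                  np.hstack([-D, -np.eye(T)]),              # -(D w) - d <= 0
                  np.hstack([-mu, np.zeros(T)])[None, :]])  # mu.w >= target
b_ub = np.concatenate([np.zeros(2 * T), [-target]])
A_eq = np.hstack([np.ones(m), np.zeros(T)])[None, :]        # fully invested
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (m + T))
print("weights:", np.round(res.x[:m], 3), " MAD:", round(res.fun, 5))
```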
Users manual for linear Time-Varying Helicopter Simulation (Program TVHIS)
NASA Technical Reports Server (NTRS)
Burns, M. R.
1979-01-01
A linear time-varying helicopter simulation program (TVHIS) is described. The program is designed as a realistic yet efficient helicopter simulation. It is based on a linear time-varying helicopter model which includes rotor, actuator, and sensor models, as well as a simulation of flight computer logic. The TVHIS can generate a mean trajectory simulation along a nominal trajectory, or propagate covariance of helicopter states, including rigid-body, turbulence, control command, controller states, and rigid-body state estimates.
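The two propagation modes described can be sketched for a generic linear time-varying system; the matrices below are illustrative placeholders, not a helicopter model.

```python
# Sketch of what an LTV simulation like TVHIS propagates along a nominal
# trajectory: the mean state x_{k+1} = A_k x_k and the covariance
# P_{k+1} = A_k P A_k^T + Q. Dynamics and noise levels are invented.
import numpy as np

def A_k(k, dt=0.05):
    # hypothetical time-varying dynamics (e.g., scheduling with time)
    w = 1.0 + 0.1 * np.sin(0.1 * k)
    return np.array([[1.0, dt], [-w * dt, 1.0]])

Q = 1e-5 * np.eye(2)                     # turbulence/process noise (assumed)
x = np.array([1.0, 0.0])                 # mean state along nominal trajectory
P = 1e-3 * np.eye(2)                     # initial state covariance

for k in range(200):
    A = A_k(k)
    x = A @ x                            # mean propagation (zero input here)
    P = A @ P @ A.T + Q                  # covariance propagation
print("mean:", np.round(x, 4), " std devs:", np.round(np.sqrt(np.diag(P)), 4))
```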
Development of a North American paleoclimate pollen-based reconstruction database application
NASA Astrophysics Data System (ADS)
Ladd, Matthew; Mosher, Steven; Viau, Andre
2013-04-01
Recent efforts in synthesizing paleoclimate records across the globe have warranted an effort to standardize the different paleoclimate archives currently available in order to facilitate data-model comparisons and hence improve our estimates of future climate change. It is often the case that the methodology and programs make it challenging for other researchers to reproduce the results of a reconstruction; therefore there is a need to standardize paleoclimate reconstruction databases in an application specific to proxy data. Here we present a methodology using the open source R language with North American pollen databases (e.g. NAPD, NEOTOMA), where this application can easily be used to perform new reconstructions and quickly analyze and output/plot the data. The application was developed to easily test methodological and spatial/temporal issues that might affect the reconstruction results. The application allows users to spend more time analyzing and interpreting results instead of on data management and processing. Some of the unique features of this R program are the two modules, each with a menu, making the user feel at ease with the program; the ability to use different pollen sums; the option to select one of the 70 climate variables available; the ability to substitute an appropriate modern climate dataset; a user-friendly regional target domain; temporal resolution criteria; linear interpolation; and many other features for a thorough exploratory data analysis. The application program will be available for North American pollen-based reconstructions and will eventually be made available as a package through the CRAN repository by late 2013.
Team Training in the Perioperative Arena: A Methodology for Implementation and Auditing Behavior.
Rhee, Amanda J; Valentin-Salgado, Yessenia; Eshak, David; Feldman, David; Kischak, Pat; Reich, David L; LoPachin, Vicki; Brodman, Michael
Preventable medical errors in the operating room are most often caused by ineffective communication and suboptimal team dynamics. TeamSTEPPS is a government-funded, evidence-based program that provides tools and education to improve teamwork in medicine. The study hospital implemented TeamSTEPPS in the operating room and merged the program with a surgical safety checklist. Audits were performed to collect both quantitative and qualitative information on time out (brief) and debrief conversations, using a standardized audit tool. A total of 1610 audits over 6 months were performed by live auditors. Performance was sustained at desired levels or improved for all qualitative metrics using χ² and linear regression analyses. Additionally, the absolute number of wrong site/side/person surgery and unintentionally retained foreign body counts decreased after TeamSTEPPS implementation.
Linear Programming for Vocational Education Planning. Interim Report.
ERIC Educational Resources Information Center
Young, Robert C.; And Others
The purpose of the paper is to define for potential users of vocational education management information systems a quantitative analysis technique and its utilization to facilitate more effective planning of vocational education programs. Defining linear programming (LP) as a management technique used to solve complex resource allocation problems…
[Determination of 25 quinolones in cosmetics by liquid chromatography-tandem mass spectrometry].
Lin, Li; Zhang, Yi; Tu, Xiaoke; Xie, Liqi; Yue, Zhenfeng; Kang, Haining; Wu, Weidong; Luo, Yao
2015-03-01
An analytical method was developed for the simultaneous determination of 25 quinolones, including danofloxacin mesylate, enrofloxacin, flumequine, oxolinic acid, ciprofloxacin, sarafloxacin, nalidixic acid, norfloxacin, and ofloxacin, in cosmetics using direct extraction and liquid chromatography-electrospray ionization tandem mass spectrometry (LC-ESI-MS/MS). The cosmetic sample was extracted by acidified acetonitrile, defatted by n-hexane and separated on a Poroshell EC-C18 column with a gradient elution program using acetonitrile and water (both containing 0.1% formic acid) as the mobile phases, and analyzed by LC-ESI-MS/MS under the positive mode using multiple reaction monitoring (MRM). The interference of the matrix was reduced by the matrix-matched calibration standard curve. The method showed good linearities over the range of 1-200 mg/kg for the 25 quinolones with good linear correlation coefficients (r ≥ 0.999). The method detection limit of the 25 quinolones was 1.0 mg/kg, and the recoveries of all analytes in lotion, milky and cream cosmetic matrices ranged from 87.4% to 105% at the spiked levels of 1, 5 and 10 mg/kg with relative standard deviations (RSD) of 4.54%-19.7% (n = 6). The results indicated that this method is simple, fast and credible, and suitable for the simultaneous determination of the quinolones in the above three types of cosmetics.
Evaluation of flaws in carbon steel piping. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zahoor, A.; Gamble, R.M.; Mehta, H.S.
1986-10-01
The objective of this program was to develop flaw evaluation procedures and allowable flaw sizes for ferritic piping used in light water reactor (LWR) power generation facilities. The program results provide relevant ASME Code groups with the information necessary to define flaw evaluation procedures, allowable flaw sizes, and their associated bases for Section XI of the code. Because there are several possible flaw-related failure modes for ferritic piping over the LWR operating temperature range, three analysis methods were employed to develop the evaluation procedures. These include limit load analysis for plastic collapse, elastic plastic fracture mechanics (EPFM) analysis for ductile tearing, and linear elastic fracture mechanics (LEFM) analysis for non ductile crack extension. To ensure the appropriate analysis method is used in an evaluation, a step by step procedure also is provided to identify the relevant acceptance standard or procedure on a case by case basis. The tensile strength and toughness properties required to complete the flaw evaluation for any of the three analysis methods are included in the evaluation procedure. The flaw evaluation standards are provided in tabular form for the plastic collapse and ductile tearing modes, where the allowable part through flaw depth is defined as a function of load and flaw length. For non ductile crack extension, linear elastic fracture mechanics analysis methods, similar to those in Appendix A of Section XI, are defined. Evaluation flaw sizes and procedures are developed for both longitudinal and circumferential flaw orientations and normal/upset and emergency/faulted operating conditions. The tables are based on margins on load of 2.77 and 1.39 for circumferential flaws and 3.0 and 1.5 for longitudinal flaws for normal/upset and emergency/faulted conditions, respectively.
Ryan, Kelsey N; Adams, Katherine P; Vosti, Stephen A; Ordiz, M Isabel; Cimo, Elizabeth D; Manary, Mark J
2014-12-01
Ready-to-use therapeutic food (RUTF) is the standard of care for children suffering from noncomplicated severe acute malnutrition (SAM). The objective was to develop a comprehensive linear programming (LP) tool to create novel RUTF formulations for Ethiopia. A systematic approach that surveyed international and national crop and animal food databases was used to create a global and local candidate ingredient database. The database included information about each ingredient regarding nutrient composition, ingredient category, regional availability, and food safety, processing, and price. An LP tool was then designed to compose novel RUTF formulations. For the example case of Ethiopia, the objective was to minimize the ingredient cost of RUTF; the decision variables were ingredient weights and the extent of use of locally available ingredients, and the constraints were nutritional and product-quality related. Of the new RUTF formulations found by the LP tool for Ethiopia, 32 were predicted to be feasible for creating a paste, and these were prepared in the laboratory. Palatable final formulations contained a variety of ingredients, including fish, different dairy powders, and various seeds, grains, and legumes. Nearly all of the macronutrient values calculated by the LP tool differed by <10% from results produced by laboratory analyses, but the LP tool consistently underestimated total energy. The LP tool can be used to develop new RUTF formulations that make more use of locally available ingredients. This tool has the potential to lead to production of a variety of low-cost RUTF formulations that meet international standards and thereby potentially allow more children to be treated for SAM. © 2014 American Society for Nutrition.
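At its core the tool solves a diet-style cost-minimizing LP. A hedged miniature with invented ingredients, nutrient contents, and prices (not the Ethiopian database) follows; the nutrient targets are loosely RUTF-like per 100 g of product.

```python
# Diet-style LP: choose ingredient weights minimizing cost subject to
# nutrient bounds. All numbers are illustrative, not the tool's database.
import numpy as np
from scipy.optimize import linprog

# Rows: energy (kcal/g), protein (g/g), fat (g/g); columns: 4 ingredients
# (roughly: peanut paste, milk powder, sugar, oil - values assumed).
N = np.array([[5.7, 4.0, 3.9, 8.8],
              [0.26, 0.35, 0.0, 0.0],
              [0.49, 0.01, 0.0, 1.0]])
cost = np.array([2.0, 4.5, 0.8, 1.5])   # price per gram (hypothetical units)

# Per 100 g of product: energy >= 520 kcal, protein >= 13 g, fat >= 26 g
# (loosely RUTF-like targets), total weight exactly 100 g.
A_ub = -N                                # turn >= into <=
b_ub = -np.array([520.0, 13.0, 26.0])
A_eq = np.ones((1, 4))
b_eq = [100.0]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4)
print("grams per ingredient:", np.round(res.x, 1), " cost:", round(res.fun, 1))
```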
Talsma, Elise F; Borgonjen-van den Berg, Karin J; Melse-Boonstra, Alida; Mayer, Eva V; Verhoef, Hans; Demir, Ayşe Y; Ferguson, Elaine L; Kok, Frans J; Brouwer, Inge D
2018-02-01
Introduction of biofortified cassava as a school lunch can increase vitamin A intake, but may increase the risk of other deficiencies because of the poor nutrient profile of cassava. We assessed the potential effect of introducing a yellow cassava-based school lunch combined with additional food-based recommendations (FBR) on vitamin A and overall nutrient adequacy using Optifood, a linear programming tool. A cross-sectional study was conducted to assess dietary intakes (24 h recall) and derive model parameters (list of foods consumed, median serving sizes, food and food (sub)group frequency distributions, food cost). Three scenarios were modelled for the daily diet: (i) no school lunch; (ii) a standard 5 d school lunch with maize/beans; and (iii) a 5 d school lunch with yellow cassava. Each scenario, and scenario (iii) with additional FBR, was assessed for overall nutrient adequacy using recommended nutrient intakes (RNI). The setting was Eastern Kenya, and the subjects were primary-school children (n = 150) aged 7-9 years. The best food pattern of the yellow cassava-based lunch scenario achieved 100 % RNI for six nutrients, compared with the no-lunch (three nutrients) and standard-lunch (five nutrients) scenarios. FBR with yellow cassava and including small dried fish improved nutrient adequacy, but could not ensure adequate intake of fat (52 % of average requirement), riboflavin (50 % RNI), folate (59 % RNI) and vitamin A (49 % RNI). Introduction of a yellow cassava-based school lunch complemented with FBR potentially improved vitamin A adequacy, but alternative interventions are needed to ensure dietary adequacy. Optifood is useful for assessing the potential contribution of a biofortified crop to nutrient adequacy and for developing additional FBR to address remaining nutrient gaps.
Dynamic Monitoring of Cleanroom Fallout Using an Air Particle Counter
NASA Technical Reports Server (NTRS)
Perry, Radford
2011-01-01
The particle fallout limitations and periodic allocations for the James Webb Space Telescope are very stringent. Standard prediction methods are complicated by non-linearity and by monitoring methods that are insufficiently responsive. A method for dynamically predicting the particle fallout in a cleanroom using air particle counter data was determined by numerical correlation. This method provides a simple linear correlation to both time and air quality, which can be monitored in real time. The summation of effects gives the program a better understanding of the cleanliness and assists in the planning of future activities. Definition of fallout rates within a cleanroom during assembly and integration of contamination-sensitive hardware, such as the James Webb Space Telescope, is essential for budgeting purposes. Balancing the activity levels for assembly and test with the particle accumulation rate is paramount. The current approach to predicting particle fallout in a cleanroom assumes a constant air quality based on the rated class of a cleanroom, with adjustments for projected work or exposure times. Actual cleanroom class can also depend on the number of personnel present and the type of activities. A linear correlation of air quality and normalized particle fallout was determined numerically. An air particle counter (standard cleanroom equipment) can be used to monitor the air quality on a real-time basis and determine the "class" of the cleanroom (per FED-STD-209 or ISO-14644). The correlation function provides an area coverage coefficient per class-hour of exposure. The prediction of particle accumulations provides scheduling inputs for activity levels and cleanroom class requirements.
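A minimal sketch of the kind of linear correlation described above: regressing measured fallout (percent area coverage) on accumulated class-hours of exposure. The data values are illustrative placeholders, not measurements from the JWST program.

```python
import numpy as np

# Illustrative observations: cumulative "class-hours" of exposure (cleanroom
# class x hours, derived from air particle counter data) vs. measured percent
# area coverage by fallout. Placeholder values only.
class_hours = np.array([0.0, 5.0, 12.0, 20.0, 31.0, 45.0])
coverage = np.array([0.000, 0.011, 0.026, 0.043, 0.068, 0.099])  # % area

# Least-squares slope = area coverage coefficient per class-hour of exposure.
coeff, intercept = np.polyfit(class_hours, coverage, 1)
predicted = coeff * class_hours + intercept
```

With the slope in hand, real-time particle counter readings convert directly into a running fallout prediction for scheduling decisions.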
ERIC Educational Resources Information Center
Smith, Karan B.
1996-01-01
Presents activities which highlight major concepts of linear programming. Demonstrates how technology allows students to solve linear programming problems using exploration prior to learning algorithmic methods. (DDR)
Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A
2017-11-01
Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approaches of Newey (1987) and Terza (2016), and bootstrapping. In our simulations, the Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
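A minimal sketch of one of the compared approaches, the nonparametric bootstrap standard error. It assumes a user-supplied (hypothetical) `tsri_estimate` function that re-runs the full two-stage fit on resampled individuals; it is not the paper's exact implementation.

```python
import numpy as np

def bootstrap_se(tsri_estimate, G, X, Y, n_boot=2000, seed=0):
    """Bootstrap SE: resample individuals, re-run the full two-stage fit.

    tsri_estimate is a hypothetical user-supplied function returning the
    TSRI causal-effect estimate from genotype G, exposure X, outcome Y
    (numpy arrays indexed by individual).
    """
    rng = np.random.default_rng(seed)
    n = len(Y)
    est = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)   # sample n individuals with replacement
        est[b] = tsri_estimate(G[idx], X[idx], Y[idx])
    return est.std(ddof=1)
```

Resampling whole individuals (rather than residuals) keeps both estimation stages inside the resampling loop, which is what corrects the naive standard error.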
Advancing School and Community Engagement Now for Disease Prevention (ASCEND).
Treu, Judith A; Doughty, Kimberly; Reynolds, Jesse S; Njike, Valentine Y; Katz, David L
2017-03-01
To compare two intensity levels (standard vs. enhanced) of a nutrition and physical activity intervention vs. a control (usual programs) on nutrition knowledge, body mass index, fitness, academic performance, behavior, and medication use among elementary school students. The design was quasi-experimental with three arms, conducted in elementary schools, students' homes, and a supermarket with a total of 1487 third-grade students. The standard intervention (SI) provided daily physical activity in classrooms and a program on making healthful foods, using food labels. The enhanced intervention (EI) provided these plus additional components for students and their families. Outcome measures were body mass index (zBMI), food label literacy, physical fitness, academic performance, behavior, and medication use for asthma or attention-deficit hyperactivity disorder (ADHD). Multivariable generalized linear models and logistic regression were used to assess change in outcome measures. Both the SI and EI groups gained less weight than the control (p < .001), but zBMI did not differ between groups (p = 1.00). There were no apparent effects on physical fitness or academic performance. Both intervention groups improved significantly but similarly in food label literacy (p = .36). Asthma medication use was reduced significantly in the SI group, and nonsignificantly (p = .10) in the EI group. Use of ADHD medication remained unchanged (p = .34). The standard intervention may improve food label literacy and reduce asthma medication use in elementary school children, but an enhanced version provides no further benefit.
Many-core graph analytics using accelerated sparse linear algebra routines
NASA Astrophysics Data System (ADS)
Kozacik, Stephen; Paolini, Aaron L.; Fox, Paul; Kelmelis, Eric
2016-05-01
Graph analytics is a key component in identifying emerging trends and threats in many real-world applications. Large-scale graph analytics frameworks provide a convenient and highly-scalable platform for developing algorithms to analyze large datasets. Although conceptually scalable, these techniques exhibit poor performance on modern computational hardware. Another model of graph computation has emerged that promises improved performance and scalability by using abstract linear algebra operations as the basis for graph analysis, as laid out by the GraphBLAS standard. By using sparse linear algebra as the basis, existing highly efficient algorithms can be adapted to perform computations on the graph. This approach, however, is often less intuitive to graph analytics experts, who are accustomed to vertex-centric APIs such as Giraph, GraphX, and Tinkerpop. We are developing an implementation of the high-level operations supported by these APIs in terms of linear algebra operations. This implementation is backed by many-core implementations of the fundamental GraphBLAS operations required, and offers the advantages of both the intuitive programming model of a vertex-centric API and the performance of a sparse linear algebra implementation. This technology can reduce the number of nodes required, as well as the run-time for a graph analysis problem, enabling customers to perform more complex analysis with less hardware at lower cost. All of this can be accomplished without requiring the customer to make any changes to their analytics code, thanks to compatibility with existing graph APIs.
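A minimal sketch of the linear-algebra view of graph traversal that GraphBLAS promotes: breadth-first search expressed as repeated sparse matrix-vector products. This uses scipy as a stand-in and is not the many-core implementation described above.

```python
import numpy as np
from scipy.sparse import csr_matrix

def bfs_levels(adj: csr_matrix, source: int) -> np.ndarray:
    """BFS levels via repeated sparse matrix-vector products.

    adj[i, j] != 0 means an edge from i to j; -1 marks unreachable vertices.
    """
    n = adj.shape[0]
    levels = np.full(n, -1)
    frontier = np.zeros(n, dtype=bool)
    frontier[source] = True
    level = 0
    while frontier.any():
        levels[frontier] = level
        # Next frontier = vertices reached from the current frontier,
        # restricted to vertices not yet assigned a level.
        reached = (adj.T @ frontier.astype(float)) > 0
        frontier = reached & (levels == -1)
        level += 1
    return levels
```

Each `adj.T @ frontier` product is exactly the kind of fundamental sparse operation a GraphBLAS backend can execute on many-core hardware.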
NASA Technical Reports Server (NTRS)
1979-01-01
The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.
NASA Technical Reports Server (NTRS)
Pilkey, W. D.; Chen, Y. H.
1974-01-01
An indirect synthesis method is used in the efficient optimal design of multi-degree of freedom, multi-design element, nonlinear, transient systems. A limiting performance analysis which requires linear programming for a kinematically linear system is presented. The system is selected using system identification methods such that the designed system responds as closely as possible to the limiting performance. The efficiency is a result of the method avoiding the repetitive systems analyses accompanying other numerical optimization methods.
ERIC Educational Resources Information Center
Nakhanu, Shikuku Beatrice; Musasia, Amadalo Maurice
2015-01-01
The topic Linear Programming is included in the compulsory Kenyan secondary school mathematics curriculum at form four. The topic provides skills for determining best outcomes in a given mathematical model involving some linear relationship. This technique has found application in business, economics as well as various engineering fields. Yet many…
Vicente de Sousa, Odete; Soares Guerra, Rita; Sousa, Ana Sofia; Pais Henriques, Bebiana; Pereira Monteiro, Anabela; Amaral, Teresa Freitas
2017-09-01
This study aims to evaluate the impact of oral nutritional supplementation (ONS) and a psychomotor rehabilitation program on the nutritional and functional status of community-dwelling patients with Alzheimer's disease (AD). A 21-day prospective randomized controlled trial was conducted in which a third intervention group performed a psychomotor rehabilitation program. Patients were followed up for 180 days. The mean (standard deviation) score of the Mini Nutritional Assessment (MNA) increased both in the nutritional supplementation group (NSG; n = 25), 0.4 (0.8), and in the nutritional supplementation psychomotor rehabilitation program group (NSPRG; n = 11), 1.5 (1.0), versus -0.1 (1.1) in the control group (CG; n = 43), P < .05. Further improvements at the 90-day follow-up for MNA were observed in the NSG, 1.3 (1.2), and NSPRG, 1.6 (1.0), versus 0.3 (1.7) in the CG (P < .05). General linear model analysis showed that the ΔMNA score improvement in the NSG and NSPRG after intervention, at 21 days and 90 days, was independent of the MNA and Mini-Mental State Examination scores at baseline (Ps > .05). ONS and a psychomotor rehabilitation program have a positive impact on the long-term nutritional and functional status of patients with AD.
SYSTEMS ANALYSIS, * WATER SUPPLIES, MATHEMATICAL MODELS, OPTIMIZATION, ECONOMICS, LINEAR PROGRAMMING, HYDROLOGY, REGIONS, ALLOCATIONS, RESTRAINT, RIVERS, EVAPORATION, LAKES, UTAH, SALVAGE, MINES(EXCAVATIONS).
BIODEGRADATION PROBABILITY PROGRAM (BIODEG)
The Biodegradation Probability Program (BIODEG) calculates the probability that a chemical under aerobic conditions with mixed cultures of microorganisms will biodegrade rapidly or slowly. It uses fragment constants developed using multiple linear and non-linear regressions and d...
The Use of Linear Programming for Prediction.
ERIC Educational Resources Information Center
Schnittjer, Carl J.
The purpose of the study was to develop a linear programming model to be used for prediction, test the accuracy of the predictions, and compare the accuracy with that produced by curvilinear multiple regression analysis. (Author)
Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.
Sun, Shiliang; Xie, Xijiong
2016-09-01
Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while there are large numbers of unlabeled examples available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to the data manifold and favors linear functions on the manifold. Fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. Simultaneously, we explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs). They effectively integrate the tangent space intrinsic manifold regularization consideration. The optimization of TiSVMs can be solved by a standard quadratic program, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programs. The experimental results of semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.
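For readers unfamiliar with the reduction, here is a minimal sketch of solving a problem in the standard quadratic program form, minimize (1/2) x'Px + q'x subject to Gx <= h, using cvxopt. The small matrices are illustrative only, not a TiSVM training problem.

```python
import numpy as np
from cvxopt import matrix, solvers

# Standard QP form: minimize (1/2) x'Px + q'x  subject to  Gx <= h.
P = matrix(2.0 * np.eye(2))            # quadratic term
q = matrix(np.array([-2.0, -5.0]))     # linear term
G = matrix(-np.eye(2))                 # encodes x >= 0
h = matrix(np.zeros(2))

solvers.options['show_progress'] = False
sol = solvers.qp(P, q, G, h)
x_opt = np.array(sol['x']).ravel()     # -> approximately [1.0, 2.5]
```

SVM-style training problems put their kernel (or regularized Gram) matrix into P and the margin constraints into G and h, so any off-the-shelf QP solver applies.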
NASA Astrophysics Data System (ADS)
Mazzie, Dawn Danielle
This study investigated the relationship between students' standardized test scores in science and (a) increases in teacher assessment literacy and (b) teacher participation in a Teacher Quality Research (TQR) project on classroom assessment. The samples for these studies were teachers from underperforming schools who volunteered to take part in a professional development program in classroom assessment. School groups were randomly assigned to the treatment group. For Study 1, teachers in the treatment group received professional development in classroom assessment from a trained assessment coach, while teachers in the control group received no professional development. For Study 2, teachers in Treatment 1 received professional development in classroom assessment from a trained assessment coach and teachers in Treatment 2 received professional development in classroom assessment from a facilitator with one day of training. Teachers in both groups completed a measure of assessment literacy, the Teacher Quality Research Test of Assessment Literacy Skills (TQR_TALS), prior to the beginning and then again at the conclusion of the four-month professional development program. A hierarchical linear model (HLM) analysis was conducted to determine the relationship between students' standardized test scores in science and (a) increases in teacher assessment literacy and (b) teacher TQR status. Based upon these analyses, the professional development program increased teachers' assessment literacy skills; however, the professional development had no significant impact on students' achievement.
A simple approach to lifetime learning in genetic programming-based symbolic regression.
Azad, Raja Muhammad Atif; Ryan, Conor
2014-01-01
Genetic programming (GP) coarsely models natural evolution to evolve computer programs. Unlike in nature, where individuals can often improve their fitness through lifetime experience, the fitness of GP individuals generally does not change during their lifetime, and there is usually no opportunity to pass on acquired knowledge. This paper introduces the Chameleon system to address this discrepancy and augment GP with lifetime learning by adding a simple local search that operates by tuning the internal nodes of individuals. Although not the first attempt to combine local search with GP, its simplicity means that it is easy to understand and cheap to implement. A simple cache is added which leverages the local search to reduce the tuning cost to a small fraction of the expected cost; we provide a theoretical upper limit on the maximum tuning expense given the average tree size of the population and show that this limit grows very conservatively as that average tree size increases. We show that Chameleon uses available genetic material more efficiently by exploring more actively than with standard GP, and demonstrate that not only does Chameleon outperform standard GP (on both training and test data) over a number of symbolic regression type problems, it does so by producing smaller individuals, and it works harmoniously with two other well-known extensions to GP, namely, linear scaling and a diversity-promoting tournament selection method.
NASA Astrophysics Data System (ADS)
Thompson, Russell G.; Singleton, F. D., Jr.
1986-04-01
With the methodology recommended by Baumol and Oates, comparable estimates of wastewater treatment costs and industry outlays are developed for effluent standard and effluent tax instruments for pollution abatement in five hypothetical organic petrochemicals (olefins) plants. The computational method uses a nonlinear simulation model for wastewater treatment to estimate the system state inputs for linear programming cost estimation, following a practice developed in a National Science Foundation (Research Applied to National Needs) study at the University of Houston and used to estimate Houston Ship Channel pollution abatement costs for the National Commission on Water Quality. Focusing on best practical and best available technology standards, with effluent taxes adjusted to give nearly equal pollution discharges, shows that average daily treatment costs (and the confidence intervals for treatment cost) would always be less for the effluent tax than for the effluent standard approach. However, industry's total outlay for these treatment costs, plus effluent taxes, would always be greater for the effluent tax approach than the total treatment costs would be for the effluent standard approach. Thus the practical necessity of showing smaller outlays as a prerequisite for a policy change toward efficiency dictates the need to link the economics at the microlevel with that at the macrolevel. Aggregation of the plants into a programming modeling basis for individual sectors and for the economy would provide a sound basis for effective policy reform, because the opportunity costs of the salient regulatory policies would be captured. Then, the government's policymakers would have the informational insights necessary to legislate more efficient environmental policies in light of the wealth distribution effects.
Synthesizing Dynamic Programming Algorithms from Linear Temporal Logic Formulae
NASA Technical Reports Server (NTRS)
Rosu, Grigore; Havelund, Klaus
2001-01-01
The problem of testing a linear temporal logic (LTL) formula on a finite execution trace of events, generated by an executing program, occurs naturally in runtime analysis of software. We present an algorithm which takes an LTL formula and generates an efficient dynamic programming algorithm. The generated algorithm tests whether the LTL formula is satisfied by a finite trace of events given as input. The generated algorithm runs in linear time, its constant depending on the size of the LTL formula. The memory needed is constant, also depending on the size of the formula.
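A minimal sketch of this style of algorithm: evaluating an LTL formula on a finite trace with a single backward pass that keeps only the subformula truth values for the current and next positions, giving linear time in the trace and constant memory in its length. The finite-trace semantics chosen here (e.g., a strong "next" that is false at the trace end) is an assumption, not necessarily the paper's exact convention.

```python
# Formulas are nested tuples: ('ap', 'p'), ('not', f), ('and', f, g),
# ('next', f), ('until', f, g), ('eventually', f), ('always', f).
def subformulas(f):
    """Postorder list, so children are always evaluated before parents."""
    out = []
    def walk(g):
        for child in g[1:]:
            if isinstance(child, tuple):
                walk(child)
        out.append(g)
    walk(f)
    return out

def evaluate(formula, trace):
    """trace: list of sets of atomic propositions, e.g. [{'p'}, {'p', 'q'}]."""
    subs = subformulas(formula)
    nxt = {}                                  # truth values at position i + 1
    for i in range(len(trace) - 1, -1, -1):   # single backward pass
        last = (i == len(trace) - 1)
        now = {}
        for g in subs:
            op = g[0]
            if op == 'ap':
                now[g] = g[1] in trace[i]
            elif op == 'not':
                now[g] = not now[g[1]]
            elif op == 'and':
                now[g] = now[g[1]] and now[g[2]]
            elif op == 'next':                # strong next: false at trace end
                now[g] = (not last) and nxt[g[1]]
            elif op == 'until':
                now[g] = now[g[2]] or (now[g[1]] and (not last) and nxt[g])
            elif op == 'eventually':
                now[g] = now[g[1]] or ((not last) and nxt[g])
            elif op == 'always':
                now[g] = now[g[1]] and (last or nxt[g])
        nxt = now
    return nxt[formula]

# Example: p holds until q on a three-event trace.
print(evaluate(('until', ('ap', 'p'), ('ap', 'q')), [{'p'}, {'p'}, {'q'}]))  # True
```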
On the stability and instantaneous velocity of grasped frictionless objects
NASA Technical Reports Server (NTRS)
Trinkle, Jeffrey C.
1992-01-01
A quantitative test for form closure valid for any number of contact points is formulated as a linear program, the optimal objective value of which provides a measure of how far a grasp is from losing form closure. Another contribution of the study is the formulation of a linear program whose solution yields the same information as the classical approach. The benefit of the formulation is that explicit testing of all possible combinations of contact interactions can be avoided by the algorithm used to solve the linear program.
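A minimal sketch of one plausible LP formulation of such a form-closure measure, assuming a wrench matrix G whose columns are the contact wrenches: maximize d subject to G c = 0 and c_i >= d, with a normalization to keep the LP bounded. A positive optimum indicates form closure; the paper's exact formulation may differ.

```python
import numpy as np
from scipy.optimize import linprog

def form_closure_measure(G):
    """G: m x nc wrench matrix (m = 3 planar, 6 spatial; nc contact wrenches).

    Returns the optimal d; d > 0 indicates form closure, and larger d means
    the grasp is farther from losing it.
    """
    m, nc = G.shape
    # Decision vector x = [c_1 .. c_nc, d]; maximize d == minimize -d.
    c_obj = np.zeros(nc + 1)
    c_obj[-1] = -1.0
    A_eq = np.hstack([G, np.zeros((m, 1))])              # G c = 0
    b_eq = np.zeros(m)
    A_ub = np.hstack([-np.eye(nc), np.ones((nc, 1))])    # d - c_i <= 0
    b_ub = np.zeros(nc)
    A_ub = np.vstack([A_ub, np.append(np.ones(nc), 0.0)])  # sum c_i <= nc
    b_ub = np.append(b_ub, nc)                              # (normalization)
    res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, None)] * nc + [(None, None)])
    return res.x[-1] if res.success else None
```

The appeal of the LP form is the one named in the abstract: the solver implicitly handles all combinations of contact interactions, so none need to be enumerated explicitly.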
A novel recurrent neural network with finite-time convergence for linear programming.
Liu, Qingshan; Cao, Jinde; Chen, Guanrong
2010-11-01
In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.
Characteristics of an Imaging Polarimeter for the Powell Observatory
NASA Astrophysics Data System (ADS)
Hall, Shannon; Henson, G.
2010-01-01
A dual-beam imaging polarimeter has been built for use on the 14-inch Schmidt-Cassegrain telescope at the ETSU Harry D. Powell Observatory. The polarimeter includes a rotating half-wave plate and a Wollaston prism to separate light into two orthogonal linearly polarized rays. A TEC-cooled CCD camera is used to detect the modulated polarized light. We present here measurements of the polarization of polarimetric standard stars. By measuring unpolarized and polarized standard stars we are able to establish the instrumental polarization and the efficiency of the instrument. The polarimeter will initially be used as a dedicated instrument in an ongoing project to monitor the eclipsing binary star, Epsilon Aurigae. This project was funded by a partnership between the National Science Foundation (NSF AST-0552798), Research Experience for Undergraduates (REU), and the Department of Defense (DoD) ASSURE (Awards to Stimulate and Support Undergraduate Research Experiences) programs.
Acceleration of Linear Finite-Difference Poisson-Boltzmann Methods on Graphics Processing Units.
Qi, Ruxi; Botello-Smith, Wesley M; Luo, Ray
2017-07-11
Electrostatic interactions play crucial roles in biophysical processes such as protein folding and molecular recognition. Poisson-Boltzmann equation (PBE)-based models have emerged as widely used tools for modeling these important processes. Though great efforts have been put into developing efficient PBE numerical models, challenges still remain due to the high dimensionality of typical biomolecular systems. In this study, we implemented and analyzed commonly used linear PBE solvers for the ever-improving graphics processing units (GPU) for biomolecular simulations, including both standard and preconditioned conjugate gradient (CG) solvers with several alternative preconditioners. Our implementation utilizes the standard Nvidia CUDA libraries cuSPARSE, cuBLAS, and CUSP. Extensive tests show that good numerical accuracy can be achieved given that single precision is often used for numerical applications on GPU platforms. The optimal GPU performance was observed with the Jacobi-preconditioned CG solver, with a significant speedup over the standard CG solver on CPU in our diversified test cases. Our analysis further shows that different matrix storage formats also considerably affect the efficiency of different linear PBE solvers on GPU, with the diagonal format best suited for our standard finite-difference linear systems. Further efficiency may be possible with matrix-free operations and an integrated grid stencil setup specifically tailored for the banded matrices in PBE-specific linear systems.
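A minimal sketch of the Jacobi-preconditioned CG iteration that performed best in the study, written for a dense symmetric positive definite matrix in numpy for clarity; the paper's solvers operate on sparse finite-difference systems on GPU.

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-8, max_iter=1000):
    """Conjugate gradient with a Jacobi (diagonal) preconditioner for SPD A."""
    x = np.zeros_like(b)
    M_inv = 1.0 / np.diag(A)          # Jacobi preconditioner: inverse diagonal
    r = b - A @ x                     # initial residual
    z = M_inv * r                     # preconditioned residual
    p = z.copy()
    rz = r @ z
    bnorm = np.linalg.norm(b)
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * bnorm:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p     # new search direction
        rz = rz_new
    return x
```

The Jacobi preconditioner costs only an elementwise multiply per iteration, which is part of why it maps so well onto GPU hardware.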
Quantitative histogram analysis of images
NASA Astrophysics Data System (ADS)
Holub, Oliver; Ferreira, Sérgio T.
2006-11-01
A routine for histogram analysis of images has been written in the object-oriented, graphical development environment LabVIEW. The program converts an RGB bitmap image into an intensity-linear greyscale image according to selectable conversion coefficients. This greyscale image is subsequently analysed by plots of the intensity histogram and probability distribution of brightness, and by calculation of various parameters, including average brightness, standard deviation, variance, minimal and maximal brightness, mode, skewness and kurtosis of the histogram and the median of the probability distribution. The program allows interactive selection of specific regions of interest (ROI) in the image and definition of lower and upper threshold levels (e.g., to permit the removal of a constant background signal). The results of the analysis of multiple images can be conveniently saved and exported for plotting in other programs, which allows fast analysis of relatively large sets of image data. The program file accompanies this manuscript together with a detailed description of two application examples: the analysis of fluorescence microscopy images, specifically of tau-immunofluorescence in primary cultures of rat cortical and hippocampal neurons, and the quantification of protein bands by Western blot. The possibilities and limitations of this kind of analysis are discussed.
Program summary:
- Title of program: HAWGC
- Catalogue identifier: ADXG_v1_0
- Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXG_v1_0
- Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
- Computers: Mobile Intel Pentium III, AMD Duron
- Installations: No installation necessary; executable file together with necessary files for the LabVIEW Run-time engine
- Operating systems under which the program has been tested: Windows ME/2000/XP
- Programming language used: LabVIEW 7.0
- Memory required to execute with typical data: ~16 MB at startup and ~160 MB when loading an image
- No. of bits in a word: 32
- No. of processors used: 1
- Has the code been vectorized or parallelized?: No
- No. of lines in distributed program, including test data, etc.: 138,946
- No. of bytes in distributed program, including test data, etc.: 15,166,675
- Distribution format: tar.gz
- Nature of physical problem: Quantification of image data (e.g., for discrimination of molecular species in gels or fluorescent molecular probes in cell cultures) requires proprietary or complex software packages, which might not include the relevant statistical parameters or make the analysis of multiple images a tedious procedure for the general user.
- Method of solution: Tool for conversion of an RGB bitmap image into a luminance-linear image and extraction of the luminance histogram, probability distribution, and statistical parameters (average brightness, standard deviation, variance, minimal and maximal brightness, mode, skewness and kurtosis of the histogram, and median of the probability distribution), with possible selection of a region of interest (ROI) and lower and upper threshold levels.
- Restrictions on the complexity of the problem: Does not incorporate application-specific functions (e.g., morphometric analysis)
- Typical running time: Seconds (depending on image size and processor speed)
- Unusual features of the program: None
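A minimal sketch, in Python rather than LabVIEW, of the same pipeline: weighted RGB-to-greyscale conversion followed by the listed histogram statistics. The conversion weights shown are the common Rec. 601 luma coefficients, an assumption rather than the program's defaults.

```python
import numpy as np
from scipy import stats

def histogram_stats(rgb, weights=(0.299, 0.587, 0.114)):
    """Greyscale conversion plus the summary statistics listed above.

    rgb: (H, W, 3) array of 8-bit values; weights: selectable RGB-to-luminance
    conversion coefficients (Rec. 601 shown as an illustrative default).
    """
    grey = np.tensordot(rgb.astype(float), np.asarray(weights), axes=([2], [0]))
    v = grey.ravel()
    return {
        'average': v.mean(),
        'std': v.std(),
        'variance': v.var(),
        'min': v.min(),
        'max': v.max(),
        'mode': np.bincount(v.astype(int)).argmax(),
        'skewness': stats.skew(v),
        'kurtosis': stats.kurtosis(v),
        'median': np.median(v),
    }
```

ROI selection and thresholding amount to slicing the array and masking values before computing the same statistics.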
Large-scale linear programs in planning and prediction.
DOT National Transportation Integrated Search
2017-06-01
Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...
Computer-aided linear-circuit design.
NASA Technical Reports Server (NTRS)
Penfield, P.
1971-01-01
Usually computer-aided design (CAD) refers to programs that analyze circuits conceived by the circuit designer. Among the services such programs should perform are direct network synthesis, analysis, optimization of network parameters, formatting, storage of miscellaneous data, and related calculations. The program should be embedded in a general-purpose conversational language such as BASIC, JOSS, or APL. Such a program is MARTHA, a general-purpose linear-circuit analyzer embedded in APL.
Planning Student Flow with Linear Programming: A Tunisian Case Study.
ERIC Educational Resources Information Center
Bezeau, Lawrence
A student flow model in linear programming format, designed to plan the movement of students into secondary and university programs in Tunisia, is described. The purpose of the plan is to determine a sufficient number of graduating students that would flow back into the system as teachers or move into the labor market to meet fixed manpower…
Linear decomposition approach for a class of nonconvex programming problems.
Shen, Peiping; Wang, Chunfeng
2017-01-01
This paper presents a linear decomposition approach for a class of nonconvex programming problems by dividing the input space into polynomially many grids. It shows that under certain assumptions the original problem can be transformed and decomposed into a polynomial number of equivalent linear programming subproblems. By solving a series of linear programming subproblems corresponding to those grid points, we can obtain a near-optimal solution of the original problem. Compared with existing results in the literature, the proposed algorithm does not require the assumptions of quasi-concavity and differentiability of the objective function, and it differs significantly from them, giving an interesting approach to solving the problem with a reduced running time.
Improved Equivalent Linearization Implementations Using Nonlinear Stiffness Evaluation
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Muravyov, Alexander A.
2001-01-01
This report documents two new implementations of equivalent linearization for solving geometrically nonlinear random vibration problems of complicated structures. The implementations are given the acronym ELSTEP, for "Equivalent Linearization using a STiffness Evaluation Procedure." Both implementations of ELSTEP are fundamentally the same in that they use a novel nonlinear stiffness evaluation procedure to numerically compute otherwise inaccessible nonlinear stiffness terms from commercial finite element programs. The commercial finite element program MSC/NASTRAN (NASTRAN) was chosen as the core of ELSTEP. The FORTRAN implementation calculates the nonlinear stiffness terms and performs the equivalent linearization analysis outside of NASTRAN. The Direct Matrix Abstraction Program (DMAP) implementation performs these operations within NASTRAN. Both provide nearly identical results. Within each implementation, two error minimization approaches for the equivalent linearization procedure are available - force and strain energy error minimization. Sample results for a simply supported rectangular plate are included to illustrate the analysis procedure.
Linear combination reading program for capture gamma rays
Tanner, Allan B.
1971-01-01
This program computes a weighting function, Qj, which gives a scalar output value of unity when applied to the spectrum of a desired element and a minimum value (considering statistics) when applied to spectra of materials not containing the desired element. Intermediate values are obtained for materials containing the desired element, in proportion to the amount of the element they contain. The program is written in the BASIC language in a format specific to the Hewlett-Packard 2000A Time-Sharing System, and is an adaptation of an earlier program for linear combination reading for X-ray fluorescence analysis (Tanner and Brinkerhoff, 1971). Following the program is a sample run from a study of the application of the linear combination technique to capture-gamma-ray analysis for calcium (report in preparation).
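A minimal sketch of one plausible way to compute such a weighting function: minimize the response to interferent spectra subject to unit response to the desired element's spectrum, via the KKT system of the equality-constrained least-squares problem. This formulation is an assumption for illustration; the original BASIC program's method may differ.

```python
import numpy as np

def lc_weights(desired, others, ridge=1e-9):
    """Weights Q_j: unit response to `desired`, minimal response to `others`.

    desired: (n,) spectrum of the element of interest;
    others: (m, n) spectra of materials not containing that element.
    Solves min_w ||others @ w||^2 subject to desired @ w = 1 via its KKT system.
    """
    n = desired.size
    C = others.T @ others                     # interferent response matrix
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = 2.0 * C + ridge * np.eye(n)   # small ridge for stability
    K[:n, n] = desired                        # constraint gradient
    K[n, :n] = desired
    rhs = np.zeros(n + 1)
    rhs[n] = 1.0                              # desired @ w = 1
    return np.linalg.solve(K, rhs)[:n]
```

Applied to the desired element's spectrum the weights return 1 by construction, and intermediate values scale with the amount of the element present, matching the behavior described above.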
Evaluating forest management policies by parametric linear programing
Daniel I. Navon; Richard J. McConnen
1967-01-01
Parametric linear programming, an analytical and simulation technique, explores alternative conditions and devises an optimal management plan for each condition. Its application to solving policy-decision problems in the management of forest lands is illustrated with an example.
Guevara, V R
2004-02-01
A nonlinear programming optimization model was developed to maximize margin over feed cost in broiler feed formulation and is described in this paper. The model identifies the optimal feed mix that maximizes profit margin. Optimum metabolizable energy level and performance were found by using Excel Solver nonlinear programming. Data from an energy density study with broilers were fitted to quadratic equations to express weight gain, feed consumption, and the objective function income over feed cost in terms of energy density. Nutrient:energy ratio constraints were transformed into equivalent linear constraints. National Research Council nutrient requirements and feeding program were used for examining changes in variables. The nonlinear programming feed formulation method was used to illustrate the effects of changes in different variables on the optimum energy density, performance, and profitability and was compared with conventional linear programming. To demonstrate the capabilities of the model, I determined the impact of variation in prices. Prices for broiler, corn, fish meal, and soybean meal were increased and decreased by 25%. Formulations were identical in all other respects. Energy density, margin, and diet cost changed compared with conventional linear programming formulation. This study suggests that nonlinear programming can be more useful than conventional linear programming to optimize performance response to energy density in broiler feed formulation because an energy level does not need to be set.
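A minimal sketch of the approach: fit quadratic response curves in energy density, then maximize income over feed cost by one-dimensional nonlinear optimization. Every coefficient and price below is a hypothetical placeholder, not a value from the study.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical quadratic response curves in dietary energy density E (kcal/kg);
# all coefficients and prices are placeholders, not data from the study.
gain = lambda E: -1.2e-6 * E**2 + 8.0e-3 * E - 11.0    # kg live weight per bird
feed = lambda E: 2.0e-6 * E**2 - 1.6e-2 * E + 35.0     # kg feed per bird

def margin(E, broiler_price=1.0, cost_per_mcal=0.05):
    diet_cost = feed(E) * (E / 1000.0) * cost_per_mcal  # denser diets cost more
    return broiler_price * gain(E) - diet_cost          # income over feed cost

res = minimize_scalar(lambda E: -margin(E), bounds=(2800, 3400), method='bounded')
optimal_energy_density = res.x
```

This is the key difference from conventional linear programming: the energy level is a decision variable inside a nonlinear objective rather than a fixed input to the formulation.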
NASA Technical Reports Server (NTRS)
Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw
1988-01-01
This project has two objectives. The first is to determine whether linear programming techniques can improve performance when handling design optimization problems with a large number of design variables and constraints relative to the feasible directions algorithm. The second purpose is to determine whether using the Kreisselmeier-Steinhauser (KS) function to replace the constraints with one constraint will reduce the cost of total optimization. Comparisons are made using solutions obtained with linear and non-linear methods. The results indicate that there is no cost saving using the linear method or in using the KS function to replace constraints.
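For reference, the Kreisselmeier-Steinhauser (KS) function aggregates many constraints g_j <= 0 into one smooth, conservative envelope of their maximum. A numerically stable sketch:

```python
import numpy as np

def ks_aggregate(g, rho=50.0):
    """Kreisselmeier-Steinhauser envelope of constraint values g_j <= 0.

    Returns (1/rho) * ln(sum_j exp(rho * g_j)), a smooth upper bound on
    max(g); written in shifted form to avoid overflow for large rho.
    """
    g = np.asarray(g, dtype=float)
    gmax = g.max()
    return gmax + np.log(np.exp(rho * (g - gmax)).sum()) / rho
```

Larger rho tightens the envelope toward the true maximum at the cost of sharper curvature, which is the practical trade-off when replacing many constraints with this single one.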
NASA Astrophysics Data System (ADS)
Gorelick, Steven M.; Voss, Clifford I.; Gill, Philip E.; Murray, Walter; Saunders, Michael A.; Wright, Margaret H.
1984-04-01
A simulation-management methodology is demonstrated for the rehabilitation of aquifers that have been subjected to chemical contamination. Finite element groundwater flow and contaminant transport simulation are combined with nonlinear optimization. The model is capable of determining well locations plus pumping and injection rates for groundwater quality control. Examples demonstrate linear or nonlinear objective functions subject to linear and nonlinear simulation and water management constraints. Restrictions can be placed on hydraulic heads, stresses, and gradients, in addition to contaminant concentrations and fluxes. These restrictions can be distributed over space and time. Three design strategies are demonstrated for an aquifer that is polluted by a constant contaminant source: they are pumping for contaminant removal, water injection for in-ground dilution, and a pumping, treatment, and injection cycle. A transient model designs either contaminant plume interception or in-ground dilution so that water quality standards are met. The method is not limited to these cases. It is generally applicable to the optimization of many types of distributed parameter systems.
Black-hole kicks from numerical-relativity surrogate models
NASA Astrophysics Data System (ADS)
Gerosa, Davide; Hébert, François; Stein, Leo C.
2018-05-01
Binary black holes radiate linear momentum in gravitational waves as they merge. Recoils imparted to the black-hole remnant can reach thousands of km/s, thus ejecting black holes from their host galaxies. We exploit recent advances in gravitational waveform modeling to quickly and reliably extract recoils imparted to generic, precessing, black-hole binaries. Our procedure uses a numerical-relativity surrogate model to obtain the gravitational waveform given a set of binary parameters; then, from this waveform we directly integrate the gravitational-wave linear momentum flux. This entirely bypasses the need for fitting formulas which are typically used to model black-hole recoils in astrophysical contexts. We provide a thorough exploration of the black-hole kick phenomenology in the parameter space, summarizing and extending previous numerical results on the topic. Our extraction procedure is made publicly available as a module for the Python programming language named surrkick. Kick evaluations take ~0.1 s on a standard off-the-shelf machine, thus making our code ideal to be ported to large-scale astrophysical studies.
Bayerstadler, Andreas; Benstetter, Franz; Heumann, Christian; Winter, Fabian
2014-09-01
Predictive Modeling (PM) techniques are gaining importance in the worldwide health insurance business. Modern PM methods are used for customer relationship management, risk evaluation, and medical management. This article illustrates a PM approach that enables the economic potential of (cost-)effective disease management programs (DMPs) to be fully exploited through optimized candidate selection, as an example of successful data-driven business management. The approach is based on a Generalized Linear Model (GLM) that is easy for health insurance companies to apply. By means of a small portfolio from an emerging country, we show that our GLM approach is stable compared with more sophisticated regression techniques in spite of the difficult data environment. Additionally, we demonstrate for this example setting that our model can compete with the expensive solutions offered by professional PM vendors and outperforms non-predictive standard approaches for DMP selection commonly used in the market.
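A minimal sketch of a GLM of the broad kind described, fit with statsmodels on synthetic data: a Gamma family with log link predicting right-skewed claims cost, with members ranked by predicted cost for DMP candidate selection. The family/link choice and covariates are assumptions, not the paper's specification, and the link class name may vary across statsmodels versions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
# Synthetic member covariates: e.g., age (scaled), prior cost, morbidity score.
X = sm.add_constant(rng.normal(size=(n, 3)))
mu = np.exp(X @ np.array([5.0, 0.3, 0.6, 0.2]))       # true mean annual cost
cost = rng.gamma(shape=2.0, scale=mu / 2.0)           # right-skewed outcomes

# Gamma GLM with log link, a common choice for positive, skewed cost data.
model = sm.GLM(cost, X, family=sm.families.Gamma(link=sm.families.links.Log()))
result = model.fit()
ranked = np.argsort(-result.predict(X))               # highest predicted cost first
```

Ranking by predicted cost (or predicted cost savings) is what turns the regression into a candidate-selection rule for the DMP.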
Prediction of elemental creep. [steady state and cyclic data from regression analysis
NASA Technical Reports Server (NTRS)
Davis, J. W.; Rummler, D. R.
1975-01-01
Cyclic and steady-state creep tests were performed to provide data which were used to develop predictive equations. These equations, describing creep as a function of stress, temperature, and time, were developed through the use of a least-squares regression analysis computer program for both the steady-state and cyclic data sets. Comparison of the data from the two types of tests revealed that there was no significant difference between the cyclic and steady-state creep strains for the L-605 sheet under the experimental conditions investigated (for the same total time at load). Attempts to develop a single linear equation describing the combined steady-state and cyclic creep data resulted in standard errors of estimate higher than those obtained for the individual data sets. A proposed approach to predicting elemental creep in metals uses the cyclic creep equation and a computer program which applies the strain- and time-hardening theories of creep accumulation.
İnkaya, Ersin; Günnaz, Salih; Özdemir, Namık; Dayan, Osman; Dinçer, Muharrem; Çetinkaya, Bekir
2013-02-15
The title molecule, 2,6-bis(1-benzyl-1H-benzo[d]imidazol-2-yl)pyridine (C(33)H(25)N(5)), was synthesized and characterized by elemental analysis, FT-IR spectroscopy, one- and two-dimensional NMR spectroscopies, and single-crystal X-ray diffraction. In addition, the molecular geometry, vibrational frequencies and gauge-independent atomic orbital (GIAO) (1)H and (13)C NMR chemical shift values of the title compound in the ground state have been calculated using density functional theory at the B3LYP/6-311G(d,p) level and compared with the experimental data. The complete assignments of all vibrational modes were performed by potential energy distributions using the VEDA 4 program. The geometrical parameters of the optimized structure are in good agreement with the X-ray crystallographic data, and the theoretical vibrational frequencies and GIAO (1)H and (13)C NMR chemical shifts show good agreement with experimental values. Furthermore, the molecular electrostatic potential (MEP) distribution, frontier molecular orbitals (FMO) and non-linear optical properties of the title compound were investigated by theoretical calculations at the B3LYP/6-311G(d,p) level. The linear polarizabilities and first hyperpolarizabilities of the molecule indicate that the compound is a good candidate for nonlinear optical materials. The thermodynamic properties of the compound at different temperatures were calculated, revealing the correlations between standard heat capacity, standard entropy, standard enthalpy changes and temperature. Copyright © 2012 Elsevier B.V. All rights reserved.
Inconsistent Investment and Consumption Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kronborg, Morten Tolver, E-mail: mtk@atp.dk; Steffensen, Mogens, E-mail: mogens@math.ku.dk
In a traditional Black–Scholes market we develop a verification theorem for a general class of investment and consumption problems where the standard dynamic programming principle does not hold. The theorem is an extension of the standard Hamilton–Jacobi–Bellman equation in the form of a system of non-linear differential equations. We derive the optimal investment and consumption strategy for a mean-variance investor without pre-commitment endowed with labor income. In the case of constant risk aversion it turns out that the optimal amount of money to invest in stocks is independent of wealth. The optimal consumption strategy is given as a deterministic bang-bang strategy. In order to have a more realistic model we allow the risk aversion to be time and state dependent. Of special interest is the case where the risk aversion is inversely proportional to present wealth plus the financial value of future labor income net of consumption. Using the verification theorem we give a detailed analysis of this problem. It turns out that the optimal amount of money to invest in stocks is given by a linear function of wealth plus the financial value of future labor income net of consumption. The optimal consumption strategy is again given as a deterministic bang-bang strategy. We also calculate, for a general time and state dependent risk aversion function, the optimal investment and consumption strategy for a mean-standard deviation investor without pre-commitment. In that case, it turns out that it is optimal to take no risk at all.
ERIC Educational Resources Information Center
Zu, Jiyun; Yuan, Ke-Hai
2012-01-01
In the nonequivalent groups with anchor test (NEAT) design, the standard error of linear observed-score equating is commonly estimated by an estimator derived assuming multivariate normality. However, real data are seldom normally distributed, causing this normal estimator to be inconsistent. A general estimator, which does not rely on the…
NASA Technical Reports Server (NTRS)
Fleming, P.
1985-01-01
A design technique is proposed for linear regulators in which a feedback controller of fixed structure is chosen to minimize an integral quadratic objective function subject to the satisfaction of integral quadratic constraint functions. Application of a non-linear programming algorithm to this mathematically tractable formulation results in an efficient and useful computer-aided design tool. Particular attention is paid to computational efficiency and various recommendations are made. Two design examples illustrate the flexibility of the approach and highlight the special insight afforded to the designer.
Zoellner, Jamie M; Hill, Jennie; You, Wen; Brock, Donna; Frisard, Madlyn; Alexander, Ramine; Silva, Fabiana; Price, Bryan; Marshall, Ruby; Estabrooks, Paul A
2017-09-28
Few interventions have evaluated the influence of parent health literacy (HL) status on weight-related child outcomes. This study explores how parent HL affects the reach, attendance, and retention of and outcomes in a 3-month multicomponent family-based program to treat childhood obesity (iChoose). This pre-post, quasi-experimental trial occurred in the Dan River Region, a federally designated medically underserved area. iChoose research protocol and intervention strategies were designed using an HL universal precautions approach. We used validated measures, standardized data collection techniques, and generalized linear mixed-effect parametric models to determine the moderation effect of parent HL on outcomes. No significant difference in HL scores was found between parents who enrolled their child in the study and those who did not. Of 94 enrolled parents, 34% were low HL, 49% had an annual household income of less than $25,000, and 39% had a high school education or less. Of 101 enrolled children, 60% were black, and the mean age was 9.8 (standard deviation, 1.3) years. Children of parents with both low and high HL attended and were retained at similar rates. Likewise, parent HL status did not significantly influence improvements in effectiveness outcomes (eg, child body mass index [BMI] z scores, parent BMI, diet and physical activity behaviors, quality of life), with the exception of child video game/computer screen time; low HL decreased and high HL increased screen time (coefficient = 0.52; standard error, 0.11; P < .001). By incorporating design features that attended to the HL needs of parents, children of parents with low HL engaged in and benefited from a family-based childhood obesity treatment program similar to children of parents with high HL.
A sequential linear optimization approach for controller design
NASA Technical Reports Server (NTRS)
Horta, L. G.; Juang, J.-N.; Junkins, J. L.
1985-01-01
A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive-structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.
Vanilla technicolor at linear colliders
NASA Astrophysics Data System (ADS)
Frandsen, Mads T.; Järvinen, Matti; Sannino, Francesco
2011-08-01
We analyze the reach of linear colliders for models of dynamical electroweak symmetry breaking. We show that linear colliders can efficiently test the compositeness scale, identified with the mass of the new spin-one resonances, until the maximum energy in the center of mass of the colliding leptons. In particular we analyze the Drell-Yan processes involving spin-one intermediate heavy bosons decaying either leptonically or into two standard model gauge bosons. We also analyze the light Higgs production in association with a standard model gauge boson stemming also from an intermediate spin-one heavy vector.
A New Pattern of Getting Nasty Number in Graphical Method
NASA Astrophysics Data System (ADS)
Sumathi, P.; Indhumathi, N.
2018-04-01
This paper proposes a new technique for obtaining nasty numbers using the graphical method in linear programming problems, and the technique is demonstrated on various linear programming problems. Some characterisations of nasty numbers are also discussed.
NASA Astrophysics Data System (ADS)
Pradanti, Paskalia; Hartono
2018-03-01
Determination of the insulin injection dose in diabetes mellitus treatment can be considered an optimal control problem. This article aims to simulate optimal blood glucose control for a patient with diabetes mellitus. The blood glucose regulation of a diabetic patient is represented by Ackerman's Linear Model. The problem is then solved using the dynamic programming method. The desired blood glucose level is obtained by minimizing a performance index in Lagrange form. The results show that dynamic programming based on Ackerman's Linear Model solves the problem quite well.
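A minimal sketch of the dynamic programming machinery involved: a backward Riccati recursion for a discrete-time linear-quadratic regulator. The 2x2 system matrices are illustrative stand-ins for a glucose/hormone model, not Ackerman's fitted coefficients.

```python
import numpy as np

# Illustrative 2-state glucose/hormone model; not Ackerman's fitted values.
A = np.array([[0.98, -0.02],
              [0.05,  0.95]])
B = np.array([[0.0],
              [0.1]])        # insulin injection acts on the hormone state
Q = np.diag([1.0, 0.0])      # penalize glucose deviation from the set point
R = np.array([[0.1]])        # penalize injected dose

def lqr_gains(A, B, Q, R, N=100):
    """Backward dynamic programming (Riccati recursion) over N steps."""
    P = Q.copy()
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]       # time-ordered feedback: u_k = -K_k @ x_k
```

Each backward step is one application of the dynamic programming principle, with the quadratic cost-to-go matrix P playing the role of the value function.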
SPAR reference manual. [for stress analysis
NASA Technical Reports Server (NTRS)
Whetstone, W. D.
1974-01-01
SPAR is a system of related programs which may be operated either in batch or demand (teletype) mode. Information exchange between programs is automatically accomplished through one or more direct access libraries, known collectively as the data complex. Card input is command-oriented, in free-field form. Capabilities available in the first production release of the system are fully documented, and include linear stress analysis, linear bifurcation buckling analysis, and linear vibrational analysis.
Code of Federal Regulations, 2010 CFR
2010-07-01
Section 101-29.221 (subpart 101-29.2, Definitions) defines the Federal Specifications, Standards and Commercial Item Description Program (Federal Standardization Program) as the standardization program developed under authority of the Federal Property and Administrative Services Act of 1949.
ERIC Educational Resources Information Center
Leff, H. Stephen; Turner, Ralph R.
This report focuses on the use of linear programming models to address the issues of how vocational rehabilitation (VR) resources should be allocated in order to maximize program efficiency within given resource constraints. A general introduction to linear programming models is first presented that describes the major types of models available,…
Ni, Meng; Mooney, Kiersten; Richards, Luca; Balachandran, Anoop; Sun, Mingwei; Harriell, Kysha; Potiaumpai, Melanie; Signorile, Joseph F
2014-09-01
To compare the effect of a custom-designed yoga program with 2 other balance training programs, a randomized controlled trial was conducted in a research laboratory with a group of older adults (N=39; mean age, 74.15 ± 6.99 y) with a history of falling. Three different exercise interventions (Tai Chi, standard balance training, yoga) were given for 12 weeks. Balance performance was examined during pre- and posttest using field tests, including the 8-foot up-and-go test, 1-leg stance, functional reach, and usual and maximal walking speed. The static and dynamic balances were also assessed by postural sway and dynamic posturography, respectively. Training produced significant improvements in all field tests (P<.005), but group difference and time × group interaction were not detected. For postural sway, significant decreases in the area of the center of pressure with eyes open (P=.001) and eyes closed (P=.002) were detected after training. For eyes open, maximum medial-lateral velocity significantly decreased for the sample (P=.013). For eyes closed, medial-lateral displacement decreased for Tai Chi (P<.01). For dynamic posturography, significant improvements in overall score (P=.001), time on the test (P=.006), and 2 linear measures in lateral (P=.001) and anterior-posterior (P<.001) directions were seen for the sample. Yoga was as effective as Tai Chi and standard balance training for improving postural stability and may offer an alternative to more traditional programs. Copyright © 2014 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
The 1998 Australian external beam radiotherapy survey and IAEA/WHO TLD postal dose quality audit.
Huntley, R; Izewska, J
2000-03-01
The results of an updated Australian survey of external beam radiotherapy centres are presented. Most of the centres provided most of the requested information. The relative caseloads of various linear accelerator photon and electron beams have not changed significantly since the previous survey in 1995. The mean age of Australian LINACs is 7.1 years and that of other radiotherapy machines is 14.7 years. Every Australian radiotherapy centre participated in a special run of the IAEA/WHO TLD postal dose quality audit program, which was provided for Australian centres by the IAEA and WHO in May 1998. The dose quoted by the centres was in nearly every case within 1.5% of the dose assessed by the IAEA. This is within the combined standard uncertainty of the IAEA TLD service (1.8%). The results confirm the accuracy and precision of radiotherapy dosimetry in Australia and the adequate dissemination of the Australian standards from ARL (now ARPANSA) to the centres. The Australian standards have recently been shown to agree with those of other countries to within 0.25% by comparison with the BIPM.
Changes of Linearity in MF2 Index with R12 and Solar Activity Maximum
NASA Astrophysics Data System (ADS)
Villanueva, L.
2013-05-01
The critical frequency of the F2 layer is related to solar activity, and the sunspot number has been the standard index for ionospheric prediction programs. This layer, considered the most important for HF radio communications because of its high electron density, determines the maximum frequency returned from ground-based transmitter signals and shows irregular variation in time and space. The spatial variation is now better understood thanks to the availability of TEC measurements, which let Space Weather Centers make observations almost in real time; however, the F2 layer remains the most difficult layer to predict in time. Short-term variations are improved in the IRI model, but long-term predictions rely only on the well-known CCIR and URSI coefficients and on solar activity R12 predictions (or on ionospheric indexes in regional models). The concept of "saturation" of the ionosphere is based on observations from about three solar cycles before 1970 (NBS, 1968): there is a linear relationship between MUF(0 km) and R12 for smoothed sunspot numbers R12 below 100, but MUF is constant for higher R12, so no rise of MUF is expected for R12 above 100. This recommendation has been used in most known ionospheric prediction programs for HF radio communication. In this work, observations of the smoothed ionospheric index MF2 as a function of R12 are presented to find common features of the linear relationship, which is found to persist over different ranges of R12 depending on the specific maximum level of each solar cycle. In the analysis of individual solar cycles, the range of linearity extends to R12 below 100 for a low solar cycle and above 100 for a high solar cycle. To improve ionospheric predictions, levels of solar cycle maximum sunspot number R12 can be established at roughly 100 (low), 150 (medium), and 200 (high), and the ranges of linearity of MUF(0 km) with respect to R12 specified accordingly, rather than assuming a single cutoff of 100 for all solar cycles. Observations for lower levels of solar cycle are also discussed.
NASA Technical Reports Server (NTRS)
Geyser, L. C.
1978-01-01
A digital computer program, DYGABCD, was developed that generates linearized, dynamic models of simulated turbofan and turbojet engines. DYGABCD is based on an earlier computer program, DYNGEN, that is capable of calculating simulated nonlinear steady-state and transient performance of one- and two-spool turbojet engines or two- and three-spool turbofan engines. Most control design techniques require linear system descriptions. For multiple-input/multiple-output systems such as turbine engines, state space matrix descriptions of the system are often desirable. DYGABCD computes the state space matrices commonly referred to as the A, B, C, and D matrices required for a linear system description. The report discusses the analytical approach and provides a user's manual, FORTRAN listings, and a sample case.
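DYGABCD itself is a FORTRAN code, but the core idea of extracting A, B, C, and D matrices from a nonlinear simulation can be sketched compactly. The following Python fragment is an illustration only (the function names and the pendulum example are hypothetical, not part of DYGABCD): it linearizes dx/dt = f(x, u), y = g(x, u) about an operating point by central finite differences.

```python
import numpy as np

def linearize(f, g, x0, u0, eps=1e-6):
    """Finite-difference linearization of dx/dt = f(x, u), y = g(x, u)
    about an operating point (x0, u0), returning state-space A, B, C, D."""
    n, m = len(x0), len(u0)
    p = len(g(x0, u0))
    A = np.zeros((n, n)); B = np.zeros((n, m))
    C = np.zeros((p, n)); D = np.zeros((p, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
        C[:, j] = (g(x0 + dx, u0) - g(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
        D[:, j] = (g(x0, u0 + du) - g(x0, u0 - du)) / (2 * eps)
    return A, B, C, D

# Example: damped pendulum dynamics with a torque input, output = angle
f = lambda x, u: np.array([x[1], -np.sin(x[0]) - 0.1 * x[1] + u[0]])
g = lambda x, u: np.array([x[0]])
A, B, C, D = linearize(f, g, np.zeros(2), np.zeros(1))
```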
NASA Astrophysics Data System (ADS)
Vasant, P.; Ganesan, T.; Elamvazuthi, I.
2012-11-01
Fairly reasonable results were obtained for non-linear engineering problems in the past using optimization techniques such as neural networks, genetic algorithms, and fuzzy logic independently. Increasingly, hybrid techniques are being used to solve non-linear problems to obtain better output. This paper discusses the use of a neuro-genetic hybrid technique to optimize geological structure mapping, known as seismic survey. It involves the minimization of an objective function subject to geophysical and operational constraints. In this work, the optimization was initially performed using genetic programming, followed by hybrid neuro-genetic programming approaches. Comparative studies and analysis were then carried out on the optimized results. The results indicate that the hybrid neuro-genetic technique produced better results than the stand-alone genetic programming method.
NASA Technical Reports Server (NTRS)
Hanson, D. B.; Mccolgan, C. J.; Ladden, R. M.; Klatte, R. J.
1991-01-01
Results of the program for the generation of a computer prediction code for noise of advanced single rotation, turboprops (prop-fans) such as the SR3 model are presented. The code is based on a linearized theory developed at Hamilton Standard in which aerodynamics and acoustics are treated as a unified process. Both steady and unsteady blade loading are treated. Capabilities include prediction of steady airload distributions and associated aerodynamic performance, unsteady blade pressure response to gust interaction or blade vibration, noise fields associated with thickness and steady and unsteady loading, and wake velocity fields associated with steady loading. The code was developed on the Hamilton Standard IBM computer and has now been installed on the Cray XMP at NASA-Lewis. The work had its genesis in the frequency domain acoustic theory developed at Hamilton Standard in the late 1970s. It was found that the method used for near field noise predictions could be adapted as a lifting surface theory for aerodynamic work via the pressure potential technique that was used for both wings and ducted turbomachinery. In the first realization of the theory for propellers, the blade loading was represented in a quasi-vortex lattice form. This was upgraded to true lifting surface loading. Originally, it was believed that a purely linear approach for both aerodynamics and noise would be adequate. However, two sources of nonlinearity in the steady aerodynamics became apparent and were found to be a significant factor at takeoff conditions. The first is related to the fact that the steady axial induced velocity may be of the same order of magnitude as the flight speed and the second is the formation of leading edge vortices which increases lift and redistribute loading. Discovery and properties of prop-fan leading edge vortices were reported in two papers. The Unified AeroAcoustic Program (UAAP) capabilites are demonstrated and the theory verified by comparison with the predictions with data from tests at NASA-Lewis. Steady aerodyanmic performance, unsteady blade loading, wakes, noise, and wing and boundary layer shielding are examined.
On the Feasibility of a Generalized Linear Program
1989-03-01
generalized linear program by applying the same algorithm to a "phase-one" problem, without requiring that the initial basic feasible solution to the latter be non-degenerate.
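As a hedged illustration of the phase-one idea mentioned above (not the report's algorithm), feasibility of a standard-form program Ax = b, x >= 0 can be tested by minimizing the sum of artificial variables; the data below are made up.

```python
import numpy as np
from scipy.optimize import linprog

# Phase-one problem for Ax = b, x >= 0: minimize the sum of artificial
# variables s in [A | I][x; s] = b with x, s >= 0. The original program is
# feasible iff the optimal phase-one objective is zero.
A = np.array([[1.0, 2.0, 1.0],
              [2.0, 1.0, 3.0]])
b = np.array([4.0, 6.0])
m, n = A.shape
c_phase1 = np.concatenate([np.zeros(n), np.ones(m)])  # cost only on artificials
A_eq = np.hstack([A, np.eye(m)])
res = linprog(c_phase1, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (n + m))
print("feasible:", res.status == 0 and abs(res.fun) < 1e-9)
print("basic feasible point:", res.x[:n])
```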
Fixed Point Problems for Linear Transformations on Pythagorean Triples
ERIC Educational Resources Information Center
Zhan, M.-Q.; Tong, J.-C.; Braza, P.
2006-01-01
In this article, an attempt is made to find all linear transformations that map a standard Pythagorean triple (a Pythagorean triple [x y z][superscript T] with y being even) into a standard Pythagorean triple, which have [3 4 5][superscript T] as their fixed point. All such transformations form a monoid S* under matrix product. It is found that S*…
On the Linear Relation between the Mean and the Standard Deviation of a Response Time Distribution
ERIC Educational Resources Information Center
Wagenmakers, Eric-Jan; Brown, Scott
2007-01-01
Although it is generally accepted that the spread of a response time (RT) distribution increases with the mean, the precise nature of this relation remains relatively unexplored. The authors show that in several descriptive RT distributions, the standard deviation increases linearly with the mean. Results from a wide range of tasks from different…
ERIC Educational Resources Information Center
Li, Deping; Oranje, Andreas
2007-01-01
Two versions of a general method for approximating standard error of regression effect estimates within an IRT-based latent regression model are compared. The general method is based on Binder's (1983) approach, accounting for complex samples and finite populations by Taylor series linearization. In contrast, the current National Assessment of…
Generating AN Optimum Treatment Plan for External Beam Radiation Therapy.
NASA Astrophysics Data System (ADS)
Kabus, Irwin
1990-01-01
The application of linear programming to the generation of an optimum external beam radiation treatment plan is investigated. MPSX, an IBM linear programming software package, was used. All data originated from the CAT scan of an actual patient who was treated for a malignant pancreatic tumor before this study began. An examination of several alternatives for representing the cross section of the patient showed that it was sufficient to use a set of strategically placed points in the vital organs and tumor and a grid of points spaced about one half inch apart for the healthy tissue. Optimum treatment plans were generated from objective functions representing various treatment philosophies. The optimum plans were based on allowing for 216 external radiation beams, which accounted for wedges of any size. A beam reduction scheme then reduced the number of beams in the optimum plan to a number small enough for implementation. Regardless of the objective function, the linear programming treatment plan preserved about 95% of the patient's right kidney vs. 59% for the plan the hospital actually administered to the patient. The clinician on the case found most of the linear programming treatment plans to be superior to the hospital plan. An investigation was made, using parametric linear programming, of any possible benefits derived from generating treatment plans based on objective functions made up of convex combinations of two objective functions; however, this proved to have only limited value. This study also found, through dual variable analysis, that there was no benefit gained from relaxing some of the constraints on the healthy regions of the anatomy. This conclusion was supported by the clinician. Finally, several schemes were found that, under certain conditions, can further reduce the number of beams in the final linear programming treatment plan.
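The LP formulation described, dose points in organs and tumor plus beam weights as variables, can be sketched in a few lines. The dose matrices and the 60 Gy prescription below are invented for illustration; the study's actual model used 216 beams, wedges, and clinical dose limits.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_beams = 8
D_tumor = rng.uniform(0.5, 1.0, (5, n_beams))    # dose per unit weight at tumor points
D_healthy = rng.uniform(0.0, 0.4, (20, n_beams)) # dose per unit weight at healthy points

# Minimize total healthy-tissue dose subject to every tumor point
# receiving at least the prescribed 60 Gy.
c = D_healthy.sum(axis=0)
res = linprog(c,
              A_ub=-D_tumor, b_ub=-np.full(5, 60.0),   # D_tumor @ w >= 60
              bounds=[(0, None)] * n_beams)
print("beam weights:", np.round(res.x, 2))
```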
Quantifying relative importance: Computing standardized effects in models with binary outcomes
Grace, James B.; Johnson, Darren; Lefcheck, Jonathan S.; Byrnes, Jarrett E.K.
2018-01-01
Results from simulation studies show that both the LT and OE methods of standardization support a similarly broad range of coefficient comparisons. The LT method estimates effects that reflect underlying latent-linear propensities, while the OE method computes a linear approximation for the effects of predictors on binary responses. The contrast between assumptions for the two methods is reflected in persistently weaker standardized effects associated with OE standardization. Reliance on standard deviations for standardization (the traditional approach) is critically examined and shown to introduce substantial biases when predictors are non-Gaussian. The use of relevant ranges in place of standard deviations has the capacity to place LT and OE standardized coefficients on a more comparable scale. As ecologists address increasingly complex hypotheses, especially those that involve comparing the influences of different controlling factors (e.g., top-down versus bottom-up or biotic versus abiotic controls), comparable coefficients become a necessary component for evaluations.
A Flash X-Ray Facility for the Naval Postgraduate School
1985-06-01
ionizing radiation. NPS has had active programs with a Van de Graaff generator, a reactor, radioactive sources, X-ray machines and a linear electron … interaction of radiation with matter and with coherent radiation. Currently the most active program is at the linear electron accelerator, which over … twenty years has produced some 75 theses. The flash X-ray machine was obtained to expand and complement the capabilities of the linear electron
Discrete Methods and their Applications
1993-02-03
problem of finding all near-optimal solutions to a linear program. In paper [18], we give a brief and elementary proof of a result of Hoffman [1952] about … relies only on linear programming duality; second, we obtain geometric and algebraic representations of the bounds that are determined explicitly in … same. We have studied the problem of finding the minimum n such that a given unit interval graph is an n-graph. A linear time algorithm to compute
1992-12-01
desirable. In this study, the proposed model consists of a thick-walled, highly deformable elastic tube in which the blood flow is described by linearized … presented a mechanical model consisting of linearized Navier-Stokes and finite elasticity equations to predict blood pooling under acceleration stress … linear multielement model of the cardiovascular system which can calculate blood pressures and flows at any point in the cardiovascular system. It
Business Education. Vocational Education Program Courses Standards.
ERIC Educational Resources Information Center
Florida State Dept. of Education, Tallahassee. Div. of Vocational, Adult, and Community Education.
This document contains vocational education program courses standards (curriculum frameworks and student performance standards) for business technology education programs in Florida. Each program courses standard is composed of two parts: a curriculum framework and student performance standards. The curriculum framework includes four major…
Wang, Ya; Wang, Junsu; Xiang, Lu; Xi, Cunxian; Chen, Dongdong; Peng, Tao; Wang, Guomin; Mu, Zhaode
2014-05-01
A novel method was established for the determination and identification of biurea in flour and its products using liquid chromatography-tandem mass spectrometry (LC-MS/MS). The biurea was extracted with water and oxidized to azodicarbonamide by potassium permanganate. The azodicarbonamide was then derivatized using sodium p-toluene sulfinate solution. The separation was performed on a Shimpak XR-ODS II column (150 mm x 2.0 mm, 2.2 microm) using a mobile phase composed of acetonitrile and 2 mmol/L ammonium acetate aqueous solution (containing 0.2% (v/v) formic acid) with a gradient elution program. Tandem mass spectrometric detection was performed in multiple reaction monitoring (MRM) scan mode with a positive electrospray ionization (ESI(+)) source. The method used stable isotope internal standard quantitation. The calibration curve showed good linearity over the range of 1-20 000 microg/kg (R2 = 0.9999). The limit of quantification was 5 microg/kg for biurea spiked in flour and its products. At the spiking levels of 5.0, 10.0 and 50.0 microg/kg in different matrices, the average recovery of biurea was 78.3%-108.0% with relative standard deviations (RSDs) < or = 5.73%. The method developed is novel, reliable and sensitive with a wide linear range, and can be used to determine biurea in flour and its products.
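A minimal sketch of the linearity check reported above, using made-up response ratios rather than the paper's data: fit the internal-standard response ratio against concentration, compute R², and back-calculate a spike recovery.

```python
import numpy as np

# Isotope-dilution style calibration: response ratio (analyte / internal
# standard) vs. spiked concentration; illustrative numbers only.
conc = np.array([1, 10, 100, 1000, 10000, 20000], dtype=float)  # ug/kg
ratio = np.array([0.011, 0.098, 1.02, 9.95, 101.2, 199.0])

slope, intercept = np.polyfit(conc, ratio, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((ratio - pred) ** 2) / np.sum((ratio - ratio.mean()) ** 2)
print(f"slope={slope:.5g} intercept={intercept:.3g} R^2={r2:.5f}")

# Back-calculate a hypothetical 50 ug/kg spiked sample and report recovery
measured_ratio = 0.52
recovery = ((measured_ratio - intercept) / slope) / 50.0 * 100
print(f"recovery: {recovery:.1f}%")
```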
ERIC Educational Resources Information Center
KANTASEWI, NIPHON
THE PURPOSE OF THE STUDY WAS TO COMPARE THE EFFECTIVENESS OF (1) LECTURE PRESENTATIONS, (2) LINEAR PROGRAM USE IN CLASS WITH AND WITHOUT DISCUSSION, AND (3) LINEAR PROGRAMS USED OUTSIDE OF CLASS WITH INCLASS PROBLEMS OR DISCUSSION. THE 126 COLLEGE STUDENTS ENROLLED IN A BACTERIOLOGY COURSE WERE RANDOMLY ASSIGNED TO THREE GROUPS. IN A SUCCEEDING…
ERIC Educational Resources Information Center
Nowak, Christoph; Heinrichs, Nina
2008-01-01
A meta-analysis encompassing all studies evaluating the impact of the Triple P-Positive Parenting Program on parent and child outcome measures was conducted in an effort to identify variables that moderate the program's effectiveness. Hierarchical linear models (HLM) with three levels of data were employed to analyze effect sizes. The results (N =…
User's manual for interfacing a leading edge, vortex rollup program with two linear panel methods
NASA Technical Reports Server (NTRS)
Desilva, B. M. E.; Medan, R. T.
1979-01-01
Sufficient instructions are provided for interfacing the Mangler-Smith leading-edge vortex rollup program with a vortex lattice (POTFAN) method and an advanced higher-order singularity linear analysis for computing the vortex effects for simple canard-wing combinations.
NASA Technical Reports Server (NTRS)
Utku, S.
1969-01-01
A general purpose digital computer program for the in-core solution of linear equilibrium problems of structural mechanics is documented. The program requires minimum input for the description of the problem. The solution is obtained by means of the displacement method and the finite element technique. Almost any geometry and structure may be handled because of the availability of linear, triangular, quadrilateral, tetrahedral, hexahedral, conical, triangular torus, and quadrilateral torus elements. The assumption of piecewise linear deflection distribution ensures monotonic convergence of the deflections from the stiffer side with decreasing mesh size. The stresses are obtained from least-squares best-fit strain tensors at the mesh points where the deflections are given. The selection of local coordinate systems whenever necessary is automatic. The core memory is used by means of dynamic memory allocation, an optional mesh-point relabelling scheme and imposition of the boundary conditions during the assembly time.
Diffendorfer, James E.; Richards, Paul M.; Dalrymple, George H.; DeAngelis, Donald L.
2001-01-01
We present the application of Linear Programming for estimating biomass fluxes in ecosystem and food web models. We use the herpetological assemblage of the Everglades as an example. We developed food web structures for three common Everglades freshwater habitat types: marsh, prairie, and upland. We obtained a first estimate of the fluxes using field data, literature estimates, and professional judgment. Linear programming was used to obtain a consistent and better estimate of the set of fluxes, while maintaining mass balance and minimizing deviations from point estimates. The results support the view that the Everglades is a spatially heterogeneous system, with changing patterns of energy flux, species composition, and biomasses across the habitat types. We show that a food web/ecosystem perspective, combined with Linear Programming, is a robust method for describing food webs and ecosystems that requires minimal data, produces useful post-solution analyses, and generates hypotheses regarding the structure of energy flow in the system.
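The flux-balancing step can be written as a small LP: minimize the total absolute deviation from the point estimates subject to mass balance, with absolute values handled by split deviation variables. The toy web below is illustrative, not the Everglades model.

```python
import numpy as np
from scipy.optimize import linprog

# Toy web: 3 fluxes, one mass-balance row (inflow = sum of two outflows).
M = np.array([[1.0, -1.0, -1.0]])      # mass balance: f0 - f1 - f2 = 0
f_hat = np.array([10.0, 7.0, 2.0])     # field/literature point estimates (unbalanced)
n = len(f_hat)

# Variables z = [f, d_plus, d_minus]; minimize sum(d_plus + d_minus).
c = np.concatenate([np.zeros(n), np.ones(2 * n)])
# f - d_plus + d_minus = f_hat, so d_plus - d_minus is the deviation from f_hat
A_dev = np.hstack([np.eye(n), -np.eye(n), np.eye(n)])
A_eq = np.vstack([np.hstack([M, np.zeros((1, 2 * n))]), A_dev])
b_eq = np.concatenate([[0.0], f_hat])
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (3 * n))
print("balanced fluxes:", res.x[:n])   # adjusts the estimates to satisfy balance
```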
NASA Technical Reports Server (NTRS)
Arneson, Heather M.; Dousse, Nicholas; Langbort, Cedric
2014-01-01
We consider control design for positive compartmental systems in which each compartment's outflow rate is described by a concave function of the amount of material in the compartment. We address the problem of determining the routing of material between compartments to satisfy time-varying state constraints while ensuring that material reaches its intended destination over a finite time horizon. We give sufficient conditions for the existence of a time-varying state-dependent routing strategy which ensures that the closed-loop system satisfies basic network properties of positivity, conservation and interconnection while ensuring that capacity constraints are satisfied, when possible, or adjusted if a solution cannot be found. These conditions are formulated as a linear programming problem. Instances of this linear programming problem can be solved iteratively to generate a solution to the finite horizon routing problem. Results are given for the application of this control design method to an example problem. Key words: linear programming; control of networks; positive systems; controller constraints and structure.
Train repathing in emergencies based on fuzzy linear programming.
Meng, Xuelei; Cui, Bingmou
2014-01-01
Train pathing is the problem of assigning train trips to sets of rail segments, such as rail tracks and links. This paper focuses on the train pathing problem of determining the paths of train trips in emergencies. We analyze the factors influencing train pathing, such as transfer cost, running cost, and social adverse-effect cost. With overall consideration of segment and station capacity constraints, we build a fuzzy linear programming model to solve the train pathing problem. We design fuzzy membership functions to describe the fuzzy coefficients. Furthermore, contraction-expansion factors are introduced to contract or expand the value ranges of the fuzzy coefficients, coping with the uncertainty of those ranges. We propose a method based on triangular fuzzy coefficients and transform the train pathing model (a fuzzy linear programming model) into a deterministic linear model to solve the fuzzy linear programming problem. An emergency scenario is constructed based on real data from the Beijing-Shanghai Railway. The model was solved and the computational results demonstrate the validity of the model and the efficiency of the algorithm.
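A compact sketch of the defuzzify-then-solve approach follows; the triangular expected-value operator and the toy two-path instance are assumptions for illustration, not the paper's exact model.

```python
import numpy as np
from scipy.optimize import linprog

def expected_value(tri):
    """Expected value of a triangular fuzzy number (l, m, r). The operator
    (l + 2m + r) / 4 is a common defuzzification -- an assumption here, not
    necessarily the exact transformation used in the paper."""
    l, m, r = tri
    return (l + 2 * m + r) / 4.0

# Fuzzy per-path routing costs for two candidate train paths
fuzzy_costs = [(8.0, 10.0, 13.0), (9.0, 11.0, 12.0)]
c = np.array([expected_value(t) for t in fuzzy_costs])

# One trip must be split across the two paths; each path's capacity is 0.8
res = linprog(c, A_eq=[[1.0, 1.0]], b_eq=[1.0], bounds=[(0, 0.8)] * 2)
print("path shares:", res.x, "expected cost:", res.fun)
```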
Armored DNA in recombinant Baculoviruses as controls in molecular genetic assays.
Freystetter, Andrea; Paar, Christian; Stekel, Herbert; Berg, Jörg
2017-10-01
The widespread use of molecular PCR-based assays in analytical and clinical laboratories brings about the need for test-specific, stable, and reliable external controls (EC) as well as standards and internal amplification controls (IC), in order to arrive at consistent test results. In addition, there is also a growing need to produce and provide stable, well-characterized molecular controls for quality assurance programs. In this study, we describe a novel approach to generate armored double-stranded DNA controls, which are encapsulated in baculovirus (BV) particles of the species Autographa californica multiple nucleopolyhedrovirus. We used the well-known BacPAK™ Baculovirus Expression System (Takara-Clontech), removed the polyhedrin promoter used for protein expression, and generated recombinant BV-armored DNAs. The obtained BV-armored DNAs were readily extracted by standard clinical DNA extraction methods, showed favorable linearity and performance in our clinical PCR assays, were resistant to DNase I digestion, and exhibited marked stability in human plasma and serum. BV-armored DNA ought to be used as ECs, quantification standards, and ICs in molecular assays, with the latter application allowing for the entire monitoring of clinical molecular assays for sample adequacy. BV-armored DNA may also be used to produce double-stranded DNA reference materials for, e.g., quality assurance programs. The ease to produce BV-armored DNA should make this approach feasible for a broad spectrum of molecular applications. Finally, as BV-armored DNAs are non-infectious to mammals, they may be even more conveniently shipped than clinical specimens.
Precision, accuracy and linearity of radiometer EML 105 whole blood metabolite biosensors.
Cobbaert, C; Morales, C; van Fessem, M; Kemperman, H
1999-11-01
The analytical performance of a new whole blood glucose and lactate electrode system (EML 105 analyser, Radiometer Medical A/S, Copenhagen, Denmark) was evaluated. Between-day coefficients of variation were < or = 1.9% and < or = 3.1% for glucose and lactate, respectively. Recoveries of glucose were 100 +/- 10% using either aqueous or protein-based standards. Recoveries of lactate depended on the matrix, being underestimated in aqueous standards (approximately -10%) and 95-100% in standards containing 40 g/L albumin at lactate concentrations of 15 and 30 mmol/L. However, recoveries were high (up to 180%) at low lactate concentrations in protein-based standards. Carry-over, investigated according to National Committee for Clinical Laboratory Standards (NCCLS) document EP10-T2, was negligible (alpha = 0.01). Glucose and lactate biosensors equipped with new membranes were linear up to 60 and 30 mmol/L, respectively. However, linearity fell upon daily use with increasing membrane lifetime. We conclude that the Radiometer metabolite biosensor results are reproducible and do not suffer from specimen-related carry-over. However, lactate recovery depends on the protein content and the lactate concentration.
A Rewriting-Based Approach to Trace Analysis
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Rosu, Grigore; Clancy, Daniel (Technical Monitor)
2002-01-01
We present a rewriting-based algorithm for efficiently evaluating future time Linear Temporal Logic (LTL) formulae on finite execution traces online. While the standard models of LTL are infinite traces, finite traces appear naturally when testing and/or monitoring real applications that only run for limited time periods. The presented algorithm is implemented in the Maude executable specification language and essentially consists of a set of equations establishing an executable semantics of LTL using a simple formula transforming approach. The algorithm is further improved to build automata on-the-fly from formulae, using memoization. The result is a very efficient and small Maude program that can be used to monitor program executions. We furthermore present an alternative algorithm for synthesizing provably minimal observer finite state machines (or automata) from LTL formulae, which can be used to analyze execution traces without the need for a rewriting system, and can hence be used by observers written in conventional programming languages. The presented work is part of an ambitious runtime verification and monitoring project at NASA Ames, called PATHEXPLORER, and demonstrates that rewriting can be a tractable and attractive means for experimenting and implementing program monitoring logics.
The linear sizes tolerances and fits system modernization
NASA Astrophysics Data System (ADS)
Glukhov, V. I.; Grinevich, V. A.; Shalay, V. V.
2018-04-01
The study addresses the urgent topic of assuring the quality of technical products during the tolerancing of component parts. The aim of the paper is to develop alternatives for improving the system of linear size tolerances and dimensional fits in the international standard ISO 286-1. The tasks of the work are, first, to classify as linear sizes the additional linear coordinating sizes that determine the location of detail elements and, second, to justify the basic deviation of the tolerance interval for an element's linear size. The research uses geometrical modeling of real detail elements together with analytical and experimental methods. It is shown that the linear coordinates are the dimensional basis of the elements' linear sizes. To standardize the accuracy of linear coordinating sizes in all accuracy classes, it is sufficient to select in the standardized tolerance system only one tolerance interval with symmetrical deviations: Js for internal dimensional elements (holes) and js for external elements (shafts). The main deviation of this coordinating tolerance is the average zero deviation, which coincides with the nominal value of the coordinating size. The other intervals of the tolerance system remain for normalizing the accuracy of the elements' linear sizes, with a fundamental change in the basic deviation of all tolerance intervals: the maximum deviation corresponding to the material limit of the element, EI being the lower tolerance deviation for the sizes of internal elements (holes) and es the upper tolerance deviation for the sizes of external elements (shafts). It is the maximum-material sizes that are involved in mating the dimensional elements of shafts and holes and that determine the type of fit.
Program Flow Analyzer. Volume 3
1984-08-01
metrics are defined using these basic terms. Of interest is another measure for the size of the program, called the volume: V = N × log₂(n). The unit of … correlated to actual data and most useful for test. The formula describing difficulty may be expressed as: D = (n₁ × N₂) / (2 × n₂). Difficulty, then, is the … linearly independent program paths through any program graph. A maximal set of these linearly independent paths, called a "basis set," can always be found
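For reference, the standard Halstead definitions behind the reconstructed formulas can be computed directly from the four basic counts; the counts below are arbitrary.

```python
import math

def halstead(n1, n2, N1, N2):
    """Standard Halstead metrics from distinct operators/operands (n1, n2)
    and total operator/operand occurrences (N1, N2)."""
    n = n1 + n2                  # program vocabulary
    N = N1 + N2                  # program length
    V = N * math.log2(n)         # volume
    D = (n1 / 2.0) * (N2 / n2)   # difficulty
    E = D * V                    # effort
    return V, D, E

print(halstead(n1=10, n2=15, N1=40, N2=60))
```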
NASA Technical Reports Server (NTRS)
Bowman, L. M.
1984-01-01
An interactive steady-state frequency response computer program with graphics is documented. Single or multiple forces may be applied to the structure using a modal superposition approach to calculate response. The method can be applied to linear, proportionally damped structures in which the damping may be viscous or structural. The theoretical approach and program organization are described. Example problems, user instructions, and a sample interactive session are given to demonstrate the program's capability in solving a variety of problems.
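A minimal sketch of steady-state response by modal superposition, assuming mass-normalized modes and viscous modal damping (the two-mode data are invented, not from the documented program):

```python
import numpy as np

def modal_frf(phi, omegas, zetas, f, omega):
    """Steady-state displacement via modal superposition:
    x(w) = sum_r phi_r (phi_r^T f) / (w_r^2 - w^2 + 2 i zeta_r w_r w),
    with phi an (n, r) matrix of mass-normalized mode shapes."""
    x = np.zeros(phi.shape[0], dtype=complex)
    for r in range(phi.shape[1]):
        denom = omegas[r]**2 - omega**2 + 2j * zetas[r] * omegas[r] * omega
        x += phi[:, r] * (phi[:, r] @ f) / denom
    return x

# Two-mode example: response of a 2-DOF structure to a unit force on DOF 0
phi = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
x = modal_frf(phi, omegas=[10.0, 25.0], zetas=[0.02, 0.02],
              f=np.array([1.0, 0.0]), omega=12.0)
print(np.abs(x))
```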
NASA Astrophysics Data System (ADS)
Tian, Wenli; Cao, Chengxuan
2017-03-01
A generalized interval fuzzy mixed integer programming model is proposed for the multimodal freight transportation problem under uncertainty, in which the optimal mode of transport and the optimal amount of each type of freight transported through each path need to be decided. For practical purposes, three mathematical methods, i.e. the interval ranking method, fuzzy linear programming method and linear weighted summation method, are applied to obtain equivalents of constraints and parameters, and then a fuzzy expected value model is presented. A heuristic algorithm based on a greedy criterion and the linear relaxation algorithm are designed to solve the model.
ERIC Educational Resources Information Center
Kansas State Univ., Manhattan. Dept. of Adult and Occupational Education.
This document contains recommended standards for quality vocational programs in agricultural/agribusiness education which are divided into (1) standards common to all programs, (2) standards specific to adult education in agriculture/agribusiness, and (3) standards specific to production agriculture, secondary. The sixty common standards are…
Cox, Joanne E; Buman, Matthew; Valenzuela, Jennifer; Joseph, Natalie Pierre; Mitchell, Anna; Woods, Elizabeth R
2008-10-01
To investigate the associations between depressive symptoms in adolescent mothers and their perceived maternal caretaking ability and social support. Subjects were participants enrolled in a parenting program that provided comprehensive multidisciplinary medical care to teen mothers and their children. Baseline data of a prospective cohort study were collected by interview at 2 weeks postpartum and follow-up, and standardized measures on entry into postnatal parenting groups. Demographic data included education, social supports, psychological history, family history and adverse life events. Depressive symptoms were measured with the Center for Epidemiological Studies Depression Scale for Children short version (CES-DC). The Maternal Self-report Inventory (MSRI) measured perceived maternal self-esteem, and Duke-UNC Functional Social Support Questionnaire measured social support. Data were analyzed with bivariate analyses and linear regression modeling focusing on depressive symptoms as the outcome variable. In the 168 teen mothers, mean age 17.6 +/- 1.2 years, African American (50%), Latina (31%) or Biracial (13%), the prevalence of depressive symptoms was 53.6%. In the linear model, controlling for baby's age, teen's age, ethnicity, Temporary Aid for Families with Dependent Children (TAFDC), and previous suicidal gesture, increased depressive symptoms were associated with decreased perceived maternal caretaking ability (P = 0.003) and lower social support (P < 0.001). In a linear model controlling for the same variables, MSRI total score (P = 0.001) and social support (P < 0.001) contributed significantly to the model as did the interaction term (MSRI x Social Support, P = 0.044). Depression is associated with decreased maternal confidence in their ability to parent and decreased perceived maternal social support, with a possible moderating effect of social support on the relationship of maternal self-esteem and depression.
Dibari, Filippo; Diop, El Hadji I; Collins, Steven; Seal, Andrew
2012-05-01
According to the United Nations (UN), 25 million children <5 y of age are currently affected by severe acute malnutrition and need to be treated using special nutritional products such as ready-to-use therapeutic foods (RUTF). Improved formulations are in demand, but a standardized approach for RUTF design has not yet been described. A method relying on linear programming (LP) analysis was developed and piloted in the design of a RUTF prototype for the treatment of wasting in East African children and adults. The LP objective function and decision variables consisted of the lowest formulation price and the weights of the chosen commodities (soy, sorghum, maize, oil, and sugar), respectively. The LP constraints were based on current UN recommendations for the macronutrient content of therapeutic food and included palatability, texture, and maximum food ingredient weight criteria. Nonlinear constraints for nutrient ratios were converted to linear equations to allow their use in LP. The formulation was considered accurate if laboratory results confirmed an energy density difference <10% and a protein or lipid difference <5 g · 100 g(-1) compared to the LP formulation estimates. With this test prototype, the differences were 7%, and 2.3 and -1.0 g · 100 g(-1), respectively, and the formulation accuracy was considered good. LP can contribute to the design of ready-to-use foods (therapeutic, supplementary, or complementary), targeting different forms of malnutrition, while using commodities that are cheaper, regionally available, and meet local cultural preferences. However, as with all prototype feeding products for medical use, composition analysis, safety, acceptability, and clinical effectiveness trials must be conducted to validate the formulation.
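The LP at the heart of this design method is a least-cost blending problem. The sketch below uses invented prices, nutrient contents, and constraint levels purely to show the structure (cost objective, minimum-nutrient rows, fraction and palatability-style bounds):

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative numbers only (not real commodity data): price per 100 g and
# macronutrients per 100 g of each ingredient.
ingredients = ["soy", "sorghum", "maize", "oil", "sugar"]
price   = np.array([0.30, 0.15, 0.12, 0.40, 0.25])   # $ / 100 g
kcal    = np.array([450., 340., 360., 880., 400.])
protein = np.array([36.0, 11.0, 9.0, 0.0, 0.0])      # g / 100 g
lipid   = np.array([20.0, 3.0, 4.0, 100.0, 0.0])

# x[i] = fraction of ingredient i in the blend; minimize cost subject to
# minimum energy, protein and lipid per 100 g of finished product.
A_ub = -np.vstack([kcal, protein, lipid])
b_ub = -np.array([520.0, 14.0, 30.0])
res = linprog(price, A_ub=A_ub, b_ub=b_ub,
              A_eq=[np.ones(5)], b_eq=[1.0],
              bounds=[(0.0, 0.6)] * 5)               # palatability-style caps
print(dict(zip(ingredients, np.round(res.x, 3))), "cost:", round(res.fun, 3))
```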
Frequency-domain full-waveform inversion with non-linear descent directions
NASA Astrophysics Data System (ADS)
Geng, Yu; Pan, Wenyong; Innanen, Kristopher A.
2018-05-01
Full-waveform inversion (FWI) is a highly non-linear inverse problem, normally solved iteratively, with each iteration involving an update constructed through linear operations on the residuals. Incorporating a flexible degree of non-linearity within each update may have important consequences for convergence rates, determination of low model wavenumbers and discrimination of parameters. We examine one approach for doing so, wherein higher order scattering terms are included within the sensitivity kernel during the construction of the descent direction, adjusting it away from that of the standard Gauss-Newton approach. These scattering terms are naturally admitted when we construct the sensitivity kernel by varying not the current but the to-be-updated model at each iteration. Linear and/or non-linear inverse scattering methodologies allow these additional sensitivity contributions to be computed from the current data residuals within any given update. We show that in the presence of pre-critical reflection data, the error in a second-order non-linear update to a background s₀ is, in our scheme, proportional to at most (Δs/s₀)³ in the actual parameter jump Δs causing the reflection. In contrast, the error in a standard Gauss-Newton FWI update is proportional to (Δs/s₀)². For numerical implementation of more complex cases, we introduce a non-linear frequency-domain scheme, with an inner and an outer loop. A perturbation is determined from the data residuals within the inner loop, and a descent direction based on the resulting non-linear sensitivity kernel is computed in the outer loop. We examine the response of this non-linear FWI using acoustic single-parameter synthetics derived from the Marmousi model. The inverted results vary depending on data frequency ranges and initial models, but we conclude that the non-linear FWI has the capability to generate high-resolution model estimates in both shallow and deep regions, and to converge rapidly, relative to a benchmark FWI approach involving the standard gradient.
Generalised Assignment Matrix Methodology in Linear Programming
ERIC Educational Resources Information Center
Jerome, Lawrence
2012-01-01
Discrete Mathematics instructors and students have long been struggling with various labelling and scanning algorithms for solving many important problems. This paper shows how to solve a wide variety of Discrete Mathematics and OR problems using assignment matrices and linear programming, specifically using Excel Solvers although the same…
NASA Technical Reports Server (NTRS)
Mitchell, C. E.; Eckert, K.
1979-01-01
A program for predicting the linear stability of liquid propellant rocket engines is presented. The underlying model assumptions and analytical steps necessary for understanding the program and its input and output are also given. The rocket engine is modeled as a right circular cylinder with an injector with a concentrated combustion zone, a nozzle, finite mean flow, and combustion response represented by either an acoustic admittance or the sensitive time lag theory. The resulting partial differential equations are combined into two governing integral equations by the use of the Green's function method. These equations are solved using a successive approximation technique for the small amplitude (linear) case. The computational method used as well as the various user options available are discussed. Finally, a flow diagram, sample input and output for a typical application, and a complete program listing for program MODULE are presented.
Technology Education. Vocational Education Program Courses Standards.
ERIC Educational Resources Information Center
Florida State Dept. of Education, Tallahassee. Div. of Vocational, Adult, and Community Education.
This document contains vocational education program course standards for technology education programs in Florida. Standards are provided for a total of 32 exploratory courses, practical arts courses, and pretechnical programs offered at the secondary or postsecondary level. Each program course standard consists of a curriculum framework and…
The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.
Pang, Haotian; Liu, Han; Vanderbei, Robert
2014-02-01
We develop an R package fastclime for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L1 Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.
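The CLIME column subproblem is itself a small LP, which is what makes a parametric simplex solver applicable. Below is a sketch of the formulation only (not the fastclime implementation, which is in C with an R interface):

```python
import numpy as np
from scipy.optimize import linprog

def clime_column(S, j, lam):
    """One column of the CLIME estimator posed as an LP:
        min ||beta||_1  s.t.  ||S beta - e_j||_inf <= lam
    using the split beta = u - v with u, v >= 0."""
    p = S.shape[0]
    e = np.zeros(p); e[j] = 1.0
    c = np.ones(2 * p)                      # sum of u and v = ||beta||_1
    A = np.vstack([np.hstack([S, -S]),      #  S(u - v) - e <=  lam
                   np.hstack([-S, S])])     # -S(u - v) + e <=  lam
    b = np.concatenate([lam + e, lam - e])
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (2 * p))
    u, v = res.x[:p], res.x[p:]
    return u - v

S = np.cov(np.random.default_rng(1).normal(size=(200, 4)), rowvar=False)
print(clime_column(S, j=0, lam=0.1))
```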
A comparison of linear and nonlinear statistical techniques in performance attribution.
Chan, N H; Genovese, C R
2001-01-01
Performance attribution is usually conducted under the linear framework of multifactor models. Although commonly used by practitioners in finance, linear multifactor models are known to be less than satisfactory in many situations. After a brief survey of nonlinear methods, nonlinear statistical techniques are applied to performance attribution of a portfolio constructed from a fixed universe of stocks, using factors derived from some commonly used cross-sectional linear multifactor models. By rebalancing this portfolio monthly, the cumulative returns for procedures based on the standard linear multifactor model and three nonlinear techniques (model selection, additive models, and neural networks) are calculated and compared. It is found that the first two nonlinear techniques, especially in combination, outperform the standard linear model. The results in the neural-network case are inconclusive because of the great variety of possible models. Although these methods are more complicated and may require some tuning, toolboxes are developed and suggestions on calibration are proposed. This paper demonstrates the usefulness of modern nonlinear statistical techniques in performance attribution.
NASA Astrophysics Data System (ADS)
Zhao, Dang-Jun; Song, Zheng-Yu
2017-08-01
This study proposes a multiphase convex programming approach for rapid reentry trajectory generation that satisfies path, waypoint and no-fly zone (NFZ) constraints on Common Aerial Vehicles (CAVs). Because the time when the vehicle reaches the waypoint is unknown, the trajectory of the vehicle is divided into several phases according to the prescribed waypoints, rendering a multiphase optimization problem with free final time. Due to the requirement of rapidity, the minimum flight time of each phase is preferred over other indices in this research. Sequential linearization is used to approximate the nonlinear dynamics of the vehicle as well as the nonlinear concave path constraints on the heat rate, dynamic pressure, and normal load; meanwhile, convexification techniques are proposed to relax the concave constraints on control variables. Next, the original multiphase optimization problem is reformulated as a standard second-order convex programming problem. Theoretical analysis is conducted to show that the original problem and the converted problem have the same solution. Numerical results are presented to demonstrate that the proposed approach is efficient and effective.
Williams, Quinn I; Gunn, Alexander H; Beaulieu, John E; Benas, Bernadette C; Buley, Bruce; Callahan, Leigh F; Cantrell, John; Genova, Andrew P; Golightly, Yvonne M; Goode, Adam P; Gridley, Christopher I; Gross, Michael T; Heiderscheit, Bryan C; Hill, Carla H; Huffman, Kim M; Kline, Aaron; Schwartz, Todd A; Allen, Kelli D
2015-09-28
Physical activity improves pain and function among individuals with knee osteoarthritis (OA), but most people with this condition are inactive. Physical therapists play a key role in helping people with knee OA to increase appropriate physical activity. However, health care access issues, financial constraints, and other factors impede some patients from receiving physical therapy (PT) for knee OA. A need exists to develop and evaluate other methods to provide physical activity instruction and support to people with knee OA. This study is examining the effectiveness of an internet-based exercise training (IBET) program for knee OA, designed by physical therapists and other clinicians. This is a randomized controlled trial of 350 participants with symptomatic knee OA, allocated to three groups: IBET, standard PT, and a wait list (WL) control group (in a 2:2:1 ratio, respectively). The study was funded by the Patient Centered Outcomes Research Institute, which conducted a peer review of the proposal. The IBET program provides patients with a tailored exercise program (based on functional level, symptoms, and current activity), video demonstrations of exercises, and guidance for appropriate exercise progression. The PT group receives up to 8 individual visits with a physical therapist, mirroring standard practice for knee OA and with an emphasis on a home exercise program. Outcomes are assessed at baseline, 4 months (primary time point) and 12 months (to assess maintenance of treatment effects). The primary outcome is the Western Ontario and McMaster Universities Osteoarthritis Index, and secondary outcomes include objective physical function, satisfaction with physical function, physical activity, depressive symptoms and global assessment of change. Linear mixed models will be used to compare both the IBET and standard PT groups to the WL control group, examine whether IBET is non-inferior to PT (a treatment that has an established evidence base for knee OA), and explore whether participant characteristics are associated with differential effects of IBET and/or standard PT. This research is in compliance with the Helsinki Declaration and was approved by the Institutional Review Board of the University of North Carolina at Chapel Hill. The IBET program could be disseminated widely at relatively low cost and could be an important resource for helping patients with knee OA to adopt and maintain appropriate physical activity. This trial will provide an important evaluation of the effectiveness of this IBET program for knee OA. NCT02312713.
Linear Goal Programming as a Military Decision Aid.
1988-04-01
air warfare, advanced armour warfare, the potential for space warfare, and many other advances have expanded the breadth of weapons employed to the … written by A. Charnes and W. W. Cooper, Management Models and Industrial Applications of Linear Programming, in 1961. (3:5) Since this time linear
Comparative Effectiveness of Two Walking Interventions on Participation, Step Counts, and Health.
Smith-McLallen, Aaron; Heller, Debbie; Vernisi, Kristin; Gulick, Diana; Cruz, Samantha; Snyder, Richard L
2017-03-01
To (1) compare the effects of two worksite-based walking interventions on employee participation rates; (2) compare average daily step counts between conditions; and (3) examine the effects of increases in average daily step counts on biometric and psychologic outcomes. We conducted a cluster-randomized trial in which six employer groups were randomly selected and randomly assigned to condition. Four manufacturing worksites and two office-based worksites served as the setting. A total of 474 employees from six employer groups were included. A standard walking program was compared to an enhanced program that included incentives, feedback, competitive challenges, and monthly wellness workshops. Walking was measured by self-reported daily step counts. Survey measures and biometric screenings were administered at baseline and 3, 6, and 9 months after baseline. Analysis used linear mixed models with repeated measures. During 9 months, participants in the enhanced condition averaged 726 more steps per day compared with those in the standard condition (p < .001). A 1000-step increase in average daily steps was associated with significant weight loss for both men (-3.8 lbs.) and women (-2.1 lbs.), and reductions in body mass index (-0.41 men, -0.31 women). Higher step counts were also associated with improvements in mood, having more energy, and higher ratings of overall health. An enhanced walking program significantly increases participation rates and daily step counts, which were associated with weight loss and reductions in body mass index.
Health Occupations Education. Vocational Education Program Courses Standards.
ERIC Educational Resources Information Center
Florida State Dept. of Education, Tallahassee. Div. of Vocational, Adult, and Community Education.
This document contains vocational education program course standards for health occupations programs in Florida. Standards are provided for a total of 71 exploratory courses, practical arts courses, and job preparatory programs offered at the secondary or postsecondary level. Each program courses standard consists of a curriculum framework and…
Stochastic Dynamic Mixed-Integer Programming (SD-MIP)
2015-05-05
stochastic linear programming (SLP) problems. By using a combination of ideas from cutting plane theory of deterministic MIP (especially disjunctive … developed to date. b) As part of this project, we have also developed tools for very large scale stochastic linear programming (SLP). There are … several reasons for this. First, SLP models continue to challenge many of the fastest computers to date, and many applications within the DoD (e.g.
Notes of the Design of Two Supercavitating Hydrofoils
1975-07-01
Foil section characteristics and definitions: Tulin two-term; Levi-Civita; Larock and Street two-term, three-parameter. Program and inputs: linearized two … Nomenclature (symbol, description, dimensions): A1, A2, angle distribution multipliers in the Levi-Civita program, radians; AR, aspect ratio; CL, lift coefficient; … angle of attack, radians; B, constant angle in the Levi-Civita program, radians; δ, linearized angle of attack superposed, degrees; C, Wu's 1955 program parameter
Standards for Standardized Logistic Regression Coefficients
ERIC Educational Resources Information Center
Menard, Scott
2011-01-01
Standardized coefficients in logistic regression analysis have the same utility as standardized coefficients in linear regression analysis. Although there has been no consensus on the best way to construct standardized logistic regression coefficients, there is now sufficient evidence to suggest a single best approach to the construction of a…
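One simple, partial standardization, z-scoring the predictors so coefficients become per-SD effects, can be sketched as follows; Menard's fuller proposals also scale by the variation of the predicted logit, which this fragment deliberately does not attempt.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) * [1.0, 10.0]   # predictors on very unequal scales
eta = 0.8 * X[:, 0] + 0.05 * X[:, 1]
y = (rng.random(500) < 1 / (1 + np.exp(-eta))).astype(int)

# Z-score the predictors, then fit: coefficients are now effects per one
# standard deviation of each predictor and can be compared to each other.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
fit = sm.Logit(y, sm.add_constant(Z)).fit(disp=0)
print(fit.params[1:])   # comparable per-SD effects (~0.8 and ~0.5 here)
```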
Alcator C-Mod Digital Plasma Control System
NASA Astrophysics Data System (ADS)
Wolfe, S. M.
2005-10-01
A new digital plasma control system (DPCS) has been implemented for Alcator C-Mod. The new system was put into service at the start of the 2005 run campaign and has been in routine operation since. The system consists of two 64-input, 16-output cPCI digitizers attached to a rack-mounted single-CPU Linux server, which performs both the I/O and the computation. During initial operation, the system was set up to directly emulate the original C-Mod "Hybrid" MIMO linear control system. Compatibility with the previous control system allows the existing user interface software and data structures to be used with the new hardware. The control program is written in IDL and runs under standard Linux. Interrupts are disabled during the plasma pulses to achieve real-time operation. A synchronous loop is executed with a nominal cycle rate of 10 kHz. Emulation of the original linear control algorithms requires 50 μsec per iteration, with the time evenly split between I/O and computation, so rates of about 20 kHz are achievable. Reliable vertical position control has been demonstrated with cycle rates as low as 5 kHz. Additional computations, including non-linear algorithms and adaptive response, are implemented as optional procedure calls within the main real-time loop.
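The structure of such a synchronous loop (read inputs, apply a linear MIMO gain, write outputs, pace to the cycle rate) can be sketched as below. This is an illustration in Python, not the IDL control program; the gain matrix and I/O stubs are placeholders.

```python
import time
import numpy as np

def control_loop(read_inputs, write_outputs, K, rate_hz=10_000, duration_s=2.0):
    """Synchronous fixed-rate MIMO loop: each cycle reads the 64 inputs,
    applies a linear gain matrix, and writes the 16 outputs. Busy-waits to
    pace the loop (a stand-in for the interrupt-disabled real-time setup)."""
    period = 1.0 / rate_hz
    n_cycles = int(duration_s * rate_hz)
    next_t = time.perf_counter()
    for _ in range(n_cycles):
        y = read_inputs()            # (64,) measurement vector
        u = K @ y                    # linear MIMO control law
        write_outputs(u)             # (16,) actuator commands
        next_t += period
        while time.perf_counter() < next_t:
            pass                     # busy-wait; real systems pin a CPU

K = np.zeros((16, 64))               # placeholder gain matrix
control_loop(lambda: np.zeros(64), lambda u: None, K, duration_s=0.01)
```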
Preliminary SPE Phase II Far Field Ground Motion Estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steedman, David W.
2014-03-06
Phase II of the Source Physics Experiment (SPE) program will be conducted in alluvium. Several candidate sites were identified. These include existing large diameter borehole U1e. One criterion for acceptance is expected far field ground motion. In June 2013 we were requested to estimate peak response 2 km from the borehole due to the largest planned SPE Phase II experiment: a contained 50-ton event. The cube-root scaled range for this event is 5423 m/kt^(1/3). The generally accepted first-order estimate of ground motions from an explosive event is to refer to the standard database for explosive events (Perrett and Bass, 1975). This reference is a compilation and analysis of ground motion data from numerous nuclear and chemical explosive events from the Nevada National Security Site (formerly the Nevada Test Site, or NTS) and other locations. The data were compiled and analyzed for various geologic settings including dry alluvium, which we believe is an accurate descriptor for the SPE Phase II setting. The Perrett and Bass plots of peak velocity and peak yield-scaled displacement, both vs. yield-scaled range, are provided here. Their analysis of both variables resulted in bi-linear fits: a close-in non-linear regime and a more distant linear regime.
The Next Linear Collider Program
Microwave and Electron Beam Computer Programs
1988-06-01
Research (ONR). SCRIBE was adapted by MRC from the Stanford Linear Accelerator Center Beam Trajectory Program, EGUN. … achieved with SCRIBE. It is a version of the Stanford Linear Accelerator (SLAC) code EGUN (Ref. 8), extensively modified by MRC for research on
Interior-Point Methods for Linear Programming: A Review
ERIC Educational Resources Information Center
Singh, J. N.; Singh, D.
2002-01-01
The paper reviews some recent advances in interior-point methods for linear programming and indicates directions in which future progress can be made. Most of the interior-point methods belong to any of three categories: affine-scaling methods, potential reduction methods and central path methods. These methods are discussed together with…
AN EVALUATION OF HEURISTICS FOR THRESHOLD-FUNCTION TEST-SYNTHESIS,
Linear programming offers the most attractive procedure for testing and obtaining optimal threshold gate realizations for functions generated in … The design of the experiments may be of general interest to students of automatic problem solving; the results should be of interest in threshold logic and linear programming. (Author)
Østbye, Truls; Krause, Katrina M; Swamy, Geeta K; Lovelady, Cheryl A
2010-11-01
Pregnancy-related weight retention can contribute to obesity, and breastfeeding may facilitate postpartum weight loss. We investigated the effect of breastfeeding on long-term postpartum weight retention. Using data from the North Carolina Special Supplemental Nutrition Program for Women, Infants, and Children (WIC; 1996-2004), weight retention was assessed in women aged 18 years or older who had more than one pregnancy available for analysis (n=32,920). Using multivariable linear regression, the relationship between duration of breastfeeding after the first pregnancy and change in pre-pregnancy weight from the first pregnancy to the second pregnancy was estimated, controlling for demographic and weight-related covariates. Mean time between pregnancies was 2.8 years (standard deviation (SD) 1.5), and mean weight retention from the first to the second pregnancy was 4.9 kg (SD 8.7). In covariate-adjusted analyses, breastfeeding for 20 weeks or more resulted in 0.39 kg (standard error (SE) 0.18) less weight retention at the beginning of the second pregnancy relative to no breastfeeding (p=0.025). In this large, racially diverse sample of low-income women, long-term weight retention was lower among those who breastfed for at least 20 weeks. Copyright © 2010 Elsevier Inc. All rights reserved.
The Next Linear Collider Program-News
Automating linear accelerator quality assurance.
Eckhause, Tobias; Al-Hallaq, Hania; Ritter, Timothy; DeMarco, John; Farrey, Karl; Pawlicki, Todd; Kim, Gwe-Ya; Popple, Richard; Sharma, Vijeshwar; Perez, Mario; Park, SungYong; Booth, Jeremy T; Thorwarth, Ryan; Moran, Jean M
2015-10-01
The purpose of this study was 2-fold. One purpose was to develop an automated, streamlined quality assurance (QA) program for use by multiple centers. The second purpose was to evaluate machine performance over time for multiple centers using linear accelerator (Linac) log files and electronic portal images. The authors sought to evaluate variations in Linac performance to establish a reference for other centers. The authors developed analytical software tools for a QA program using both log files and electronic portal imaging device (EPID) measurements. The first tool is a general analysis tool which can read and visually represent data in the log file. This tool, which can be used to automatically analyze patient treatment or QA log files, examines the files for Linac deviations which exceed thresholds. The second set of tools consists of a test suite of QA fields, a standard phantom, and software to collect information from the log files on deviations from the expected values. The test suite was designed to focus on the mechanical tests of the Linac, including jaw, MLC, and collimator positions during static, IMRT, and volumetric modulated arc therapy delivery. A consortium of eight institutions delivered the test suite at monthly or weekly intervals on each Linac using a standard phantom. The behavior of various components was analyzed for eight TrueBeam Linacs. For the EPID and trajectory log file analysis, all observed deviations which exceeded established thresholds for Linac behavior resulted in a beam hold off. In the absence of an interlock-triggering event, the maximum observed log file deviations between the expected and actual component positions (such as MLC leaves) varied from less than 1% to 26% of published tolerance thresholds. The maximum and standard deviations of the variations due to gantry sag, collimator angle, jaw position, and MLC positions are presented. Gantry sag among Linacs was 0.336 ± 0.072 mm. The standard deviation in MLC position, as determined by EPID measurements, across the consortium was 0.33 mm for IMRT fields. With respect to the log files, the deviations between expected and actual positions for parameters were small (<0.12 mm) for all Linacs. Considering both log files and EPID measurements, all parameters were well within published tolerance values. Variations in collimator angle, MLC position, and gantry sag were also evaluated for all Linacs. The performance of the TrueBeam Linac model was shown to be consistent based on automated analysis of trajectory log files and EPID images acquired during delivery of a standardized test suite. The results can be compared directly to tolerance thresholds. In addition, sharing of results from standard tests across institutions can facilitate the identification of QA process and Linac changes. These reference values are presented along with the standard deviation for common tests so that the test suite can be used by other centers to evaluate their Linac performance against those in this consortium.
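The log-file check described, comparing expected against actual positions and flagging deviations that exceed tolerance, reduces to a few lines. The axis names and tolerance values below are hypothetical stand-ins for the consortium's published thresholds.

```python
import numpy as np

# Hypothetical field names and tolerances (mm) for illustration only; the
# consortium's actual test suite and tolerance values are defined in the paper.
TOLERANCE_MM = {"mlc_leaf": 1.0, "jaw": 1.0, "gantry_sag": 0.5}

def flag_deviations(log, tolerances=TOLERANCE_MM):
    """Compare expected vs. actual positions from a trajectory log and
    report each axis's maximum deviation plus whether it exceeds tolerance."""
    report = {}
    for axis, rec in log.items():
        expected = np.asarray(rec["expected"])
        actual = np.asarray(rec["actual"])
        max_dev = np.max(np.abs(expected - actual))
        report[axis] = (max_dev, max_dev > tolerances[axis])
    return report

log = {"mlc_leaf": {"expected": [10.0, 20.0], "actual": [10.05, 19.9]},
       "jaw": {"expected": [50.0], "actual": [51.2]}}
print(flag_deviations(log))   # jaw exceeds its 1.0 mm tolerance
```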
A program for identification of linear systems
NASA Technical Reports Server (NTRS)
Buell, J.; Kalaba, R.; Ruspini, E.; Yakush, A.
1971-01-01
A program has been written for the identification of parameters in certain linear systems. These systems appear in biomedical problems, particularly in compartmental models of pharmacokinetics. The method presented here assumes that some of the state variables are regularly modified by jump conditions. This simulates the administration of drugs following some prescribed dosing regimen. Parameters are identified by a least-squares fit of the linear differential system to a set of experimental observations. The method is especially suitable when the interval of observation of the system is very long.
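A sketch of the approach for a one-compartment model with bolus doses (the jump conditions): because the system is linear, the doses superpose, and the elimination rate k can be fit by least squares. All numbers are invented.

```python
import numpy as np
from scipy.optimize import least_squares

# One-compartment model dx/dt = -k x, with x jumping by `dose` at each
# dosing time (the jump conditions of a prescribed dosing regimen).
dose_times, dose = np.array([0.0, 12.0, 24.0]), 100.0

def simulate(k, t_obs):
    x = np.zeros_like(t_obs)
    for i, t in enumerate(t_obs):
        past = dose_times[dose_times <= t]
        x[i] = np.sum(dose * np.exp(-k * (t - past)))  # superposition (linearity)
    return x

t_obs = np.linspace(1.0, 36.0, 12)
k_true = 0.15
y_obs = simulate(k_true, t_obs) + np.random.default_rng(2).normal(0, 1.0, 12)

# Least-squares fit of the elimination rate to the noisy observations
fit = least_squares(lambda k: simulate(k[0], t_obs) - y_obs, x0=[0.05])
print("estimated k:", fit.x[0])
```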
Robust Neighboring Optimal Guidance for the Advanced Launch System
NASA Technical Reports Server (NTRS)
Hull, David G.
1993-01-01
In recent years, optimization has become an engineering tool through the availability of numerous successful nonlinear programming codes. Optimal control problems are converted into parameter optimization (nonlinear programming) problems by assuming the control to be piecewise linear, making the unknowns the nodes or junction points of the linear control segments. Once the optimal piecewise linear (suboptimal) control is known, a guidance law for operating near the suboptimal path is the neighboring optimal piecewise linear control (neighboring suboptimal control). Research conducted under this grant has been directed toward the investigation of neighboring suboptimal control as a guidance scheme for an advanced launch system.
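A toy version of this conversion in Python: a double integrator steered to a target state with the control piecewise linear between nodes, the node values found by a nonlinear programming search (here a simple penalty method with BFGS). The plant, horizon, and penalty weight are illustrative assumptions, not the launch-system problem of the grant:

    import numpy as np
    from scipy.optimize import minimize

    # Minimize integral of u^2 for x'' = u, driving (x, v) from (0, 0) to
    # (1, 0) over T = 1, with u(t) piecewise linear between N+1 nodes.
    N, T = 8, 1.0
    tk = np.linspace(0.0, T, N + 1)

    def simulate(u_nodes, m=400):
        t = np.linspace(0.0, T, m)
        u = np.interp(t, tk, u_nodes)      # piecewise-linear control
        dt = t[1] - t[0]
        v = np.cumsum(u) * dt              # crude Euler quadrature
        x = np.cumsum(v) * dt
        return x[-1], v[-1], np.sum(u**2) * dt

    def objective(u_nodes):
        xT, vT, cost = simulate(u_nodes)
        # terminal constraints handled by a simple quadratic penalty
        return cost + 1e3 * ((xT - 1.0)**2 + vT**2)

    res = minimize(objective, np.zeros(N + 1), method='BFGS')
    print("suboptimal control nodes:", np.round(res.x, 2))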
Estimating linear temporal trends from aggregated environmental monitoring data
Erickson, Richard A.; Gray, Brian R.; Eager, Eric A.
2017-01-01
Trend estimates are often used as part of environmental monitoring programs. These trends inform managers (e.g., are desired species increasing or undesired species decreasing?). Data collected from environmental monitoring programs are often aggregated (i.e., averaged), which confounds sampling and process variation. State-space models allow sampling variation and process variation to be separated. We used simulated time series to compare linear trend estimates from three state-space models, a simple linear regression model, and an auto-regressive model. We also compared the performance of these five models in estimating trends from a long-term monitoring program. We specifically estimated trends for two species of fish and four species of aquatic vegetation from the Upper Mississippi River system. We found that the simple linear regression had the best performance of the models considered because it was best able to recover parameters and had consistent numerical convergence. Conversely, the simple linear regression did the worst job of estimating populations in a given year. The state-space models did not estimate trends well, but estimated population sizes best when the models converged. We found that a simple linear regression performed better than more complex autoregressive and state-space models when used to analyze aggregated environmental monitoring data.
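A minimal Python sketch of the estimator the study found most reliable for aggregated series: ordinary least squares on index ~ intercept + slope x year, reporting the trend and its standard error (the simulated series is invented):

    import numpy as np

    rng = np.random.default_rng(1)
    years = np.arange(1993, 2013)
    index = 4.2 - 0.03 * (years - years[0]) + rng.normal(0.0, 0.3, years.size)

    # Simple linear regression: index ~ intercept + slope * (year - first_year)
    X = np.column_stack([np.ones(years.size), years - years[0]])
    beta, res, rank, _ = np.linalg.lstsq(X, index, rcond=None)
    sigma2 = res[0] / (years.size - 2)             # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)
    print(f"trend = {beta[1]:.4f} per year (se {np.sqrt(cov[1, 1]):.4f})")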
HOLEGAGE 1.0 - STRAIN GAGE HOLE DRILLING ANALYSIS PROGRAM
NASA Technical Reports Server (NTRS)
Hampton, R. W.
1994-01-01
There is no simple and perfect way to measure residual stresses in metal parts that have been welded or deformed to make complex structures such as pressure vessels and aircraft, yet these locked-in stresses can contribute to structural failure by fatigue and fracture. However, one proven and tested technique for determining the internal stress of a metal part is to drill a test hole while measuring the relieved strains around the hole, such as the hole-drilling strain gage method described in ASTM E 837. The program HOLEGAGE processes strain gage data and provides additional calculations of internal stress variations that are not obtained with standard E 837 analysis methods. The typical application of the technique uses a three-gage rosette with a special hole-drilling fixture for drilling a hole through the center of the rosette to produce a hole with very small gage-pattern eccentricity error. Another device is used to control the drilling and halt the drill at controlled depth steps. At each step, strains from all three strain gages are recorded. The influence coefficients used by HOLEGAGE to compute stresses from relieved hole strains were developed by published finite element method studies of thick plates for specific hole sizes and depths. The program uses a parabolic fit and an interpolating scheme to project the coefficients to other hole sizes and depths. Additionally, published experimental data are used to extend the coefficients to relatively thin plates. These influence coefficients are used to compute the stresses in the original part from the strain data. HOLEGAGE will compute interior planar stresses using strain data from each drilled hole depth layer. Planar stresses may be computed in three ways: a least-squares fit for a linear variation with depth; an integral method that gives incremental stress data for each layer; or a linear fit to the integral data (with some surface data points omitted) to predict surface stresses before strain gage sanding preparations introduced additional residual stresses. Options are included for estimating the effect of hole eccentricity on calculations, smoothing noise from the strain data, and inputting the program data either interactively or from a data file. HOLEGAGE was written in FORTRAN 77 for DEC VAX computers under VMS, and is transportable except for system-unique TIME and DATE system calls. The program requires 54K of main memory and was developed in 1990. The program is available on a 9-track 1600 BPI VAX BACKUP format magnetic tape (standard media) or a TK50 tape cartridge. The documentation is included on the tape. DEC VAX and VMS are trademarks of Digital Equipment Corporation.
An Inquiry-Based Linear Algebra Class
ERIC Educational Resources Information Center
Wang, Haohao; Posey, Lisa
2011-01-01
Linear algebra is a standard undergraduate mathematics course. This paper presents an overview of the design and implementation of an inquiry-based teaching material for the linear algebra course which emphasizes discovery learning, analytical thinking and individual creativity. The inquiry-based teaching material is designed to fit the needs of a…
A Constrained Linear Estimator for Multiple Regression
ERIC Educational Resources Information Center
Davis-Stober, Clintin P.; Dana, Jason; Budescu, David V.
2010-01-01
"Improper linear models" (see Dawes, Am. Psychol. 34:571-582, "1979"), such as equal weighting, have garnered interest as alternatives to standard regression models. We analyze the general circumstances under which these models perform well by recasting a class of "improper" linear models as "proper" statistical models with a single predictor. We…
Fault detection and initial state verification by linear programming for a class of Petri nets
NASA Technical Reports Server (NTRS)
Rachell, Traxon; Meyer, David G.
1992-01-01
The authors present an algorithmic approach to determining when the marking of an LSMG (live safe marked graph) or an LSFC (live safe free choice) net is in the set of live safe markings M. Hence, once the marking of a net is determined to be in M, then if at some time thereafter the marking of this net is determined not to be in M, this indicates a fault. It is shown how linear programming can be used to determine whether m is an element of M. The worst-case computational complexity of each algorithm is bounded by the number of linear programs that must be solved.
Two algorithms for neural-network design and training with application to channel equalization.
Sweatman, C Z; Mulgrew, B; Gibson, G J
1998-01-01
We describe two algorithms for designing and training neural-network classifiers. The first, the linear programming slab algorithm (LPSA), is motivated by the problem of reconstructing digital signals corrupted by passage through a dispersive channel and by additive noise. It constructs a multilayer perceptron (MLP) to separate two disjoint sets by using linear programming methods to identify network parameters. The second, the perceptron learning slab algorithm (PLSA), avoids the computational costs of linear programming by using an error-correction approach to identify parameters. Both algorithms operate in highly constrained parameter spaces and are able to exploit symmetry in the classification problem. Using these algorithms, we develop a number of procedures for the adaptive equalization of a complex linear 4-quadrature amplitude modulation (QAM) channel, and compare their performance in a simulation study. Results are given for both stationary and time-varying channels, the latter based on the COST 207 GSM propagation model.
Linear programming computational experience with onyx
DOE Office of Scientific and Technical Information (OSTI.GOV)
Atrek, E.
1994-12-31
ONYX is a linear programming software package based on an efficient variation of the gradient projection method. When fully configured, it is intended for application to industrial size problems. While the computational experience is limited at the time of this abstract, the technique is found to be robust and competitive with existing methodology in terms of both accuracy and speed. An overview of the approach is presented together with a description of program capabilities, followed by a discussion of up-to-date computational experience with the program. Conclusions include advantages of the approach and envisioned future developments.
STAR adaptation of QR algorithm. [Program for solving over-determined systems of linear equations]
NASA Technical Reports Server (NTRS)
Shah, S. N.
1981-01-01
The QR algorithm used on a serial computer and executed on the Control Data Corporation 6000 computer was adapted to execute efficiently on the Control Data STAR-100 computer. How the scalar program was adapted for the STAR-100, and why these adaptations yielded an efficient STAR program, is described. Program listings of the old scalar version and the vectorized SL/1 version are presented in the appendices. Execution times for the two versions, applied to the same system of linear equations, are compared.
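For the over-determined systems named in the title, the QR route to a least-squares solution factors A = QR and back-solves Rx = Q'b. A small Python sketch with synthetic data (not the SL/1 code of the report):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(100, 5))    # over-determined: 100 equations, 5 unknowns
    b = A @ np.array([1.0, -2.0, 0.5, 3.0, 0.0]) + rng.normal(0, 0.01, 100)

    # QR factorization: A = QR, then solve the triangular system R x = Q^T b
    Q, R = np.linalg.qr(A)           # reduced QR: Q is 100x5, R is 5x5
    x = np.linalg.solve(R, Q.T @ b)
    print(np.round(x, 3))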
LDRD final report on massively-parallel linear programming: the parPCx system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parekh, Ojas; Phillips, Cynthia Ann; Boman, Erik Gunnar
2005-02-01
This report summarizes the research and development performed from October 2002 to September 2004 at Sandia National Laboratories under the Laboratory-Directed Research and Development (LDRD) project "Massively-Parallel Linear Programming". We developed a linear programming (LP) solver designed to use a large number of processors. LP is the optimization of a linear objective function subject to linear constraints. Companies and universities have expended huge efforts over decades to produce fast, stable serial LP solvers. Previous parallel codes run on shared-memory systems and have little or no distribution of the constraint matrix. We have seen no reports of general LP solver runs on large numbers of processors. Our parallel LP code is based on an efficient serial implementation of Mehrotra's interior-point predictor-corrector algorithm (PCx). The computational core of this algorithm is the assembly and solution of a sparse linear system. We have substantially rewritten the PCx code and based it on Trilinos, the parallel linear algebra library developed at Sandia. Our interior-point method can use either direct or iterative solvers for the linear system. To achieve a good parallel data distribution of the constraint matrix, we use a (pre-release) version of a hypergraph partitioner from the Zoltan partitioning library. We describe the design and implementation of our new LP solver called parPCx and give preliminary computational results. We summarize a number of issues related to efficient parallel solution of LPs with interior-point methods including data distribution, numerical stability, and solving the core linear system using both direct and iterative methods. We describe a number of applications of LP specific to US Department of Energy mission areas and we summarize our efforts to integrate parPCx (and parallel LP solvers in general) into Sandia's massively-parallel integer programming solver PICO (Parallel Integer and Combinatorial Optimizer). We conclude with directions for long-term future algorithmic research and for near-term development that could improve the performance of parPCx.
14 CFR 151.99 - Modifications of programming standards.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 3 2012-01-01 2012-01-01 false Modifications of programming standards. 151... (CONTINUED) AIRPORTS FEDERAL AID TO AIRPORTS Project Programming Standards § 151.99 Modifications of programming standards. The Director, Airports, Service, or the Regional Director concerned may, on individual...
14 CFR 151.99 - Modifications of programming standards.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 3 2013-01-01 2013-01-01 false Modifications of programming standards. 151... (CONTINUED) AIRPORTS FEDERAL AID TO AIRPORTS Project Programming Standards § 151.99 Modifications of programming standards. The Director, Airports, Service, or the Regional Director concerned may, on individual...
14 CFR 151.99 - Modifications of programming standards.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 3 2014-01-01 2014-01-01 false Modifications of programming standards. 151... (CONTINUED) AIRPORTS FEDERAL AID TO AIRPORTS Project Programming Standards § 151.99 Modifications of programming standards. The Director, Airports, Service, or the Regional Director concerned may, on individual...
Industrial Education. Vocational Education Program Courses Standards.
ERIC Educational Resources Information Center
Florida State Dept. of Education, Tallahassee. Div. of Vocational, Adult, and Community Education.
This document contains vocational education program courses standards (curriculum frameworks and student performance standards) for exploratory courses, practical arts courses, and job preparatory programs offered at the secondary or postsecondary level in Florida. Each program courses standard is composed of two parts: a curriculum framework and…
14 CFR 151.99 - Modifications of programming standards.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Modifications of programming standards. 151... (CONTINUED) AIRPORTS FEDERAL AID TO AIRPORTS Project Programming Standards § 151.99 Modifications of programming standards. The Director, Airports, Service, or the Regional Director concerned may, on individual...
14 CFR 151.99 - Modifications of programming standards.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 3 2011-01-01 2011-01-01 false Modifications of programming standards. 151... (CONTINUED) AIRPORTS FEDERAL AID TO AIRPORTS Project Programming Standards § 151.99 Modifications of programming standards. The Director, Airports, Service, or the Regional Director concerned may, on individual...
Academic productivity among fellowship associated adult total joint reconstruction surgeons.
Khan, Adam Z; Kelley, Benjamin V; Patel, Ankur D; McAllister, David R; Leong, Natalie L
2017-12-01
The Hirsch index (h-index) is a measure that evaluates both research volume and quality, taking into consideration both the publications and the citations of a single author. No prior work has evaluated the academic productivity and literature contributions of adult total joint replacement surgeons. This study uses the h-index to benchmark the academic impact and identify characteristics associated with the productivity of faculty members at joint replacement fellowships. Adult reconstruction fellowship programs were obtained via the American Association of Hip and Knee Surgeons website. Via the San Francisco match and program-specific websites, program characteristics (Accreditation Council for Graduate Medical Education approval, academic affiliation, region, number of fellows, fellow research requirement), associated faculty members, and faculty-specific characteristics (gender, academic title, formal fellowship training, years in practice) were obtained. H-index and total faculty publications served as primary outcome measures. Multivariable linear regression determined statistical significance. Sixty-six adult total joint reconstruction fellowship programs were identified: 30% were Accreditation Council for Graduate Medical Education approved and 73% had an academic affiliation. At these institutions, 375 adult reconstruction surgeons were identified; 98.1% were men and 85.3% had formal arthroplasty fellowship training. The average number of publications per faculty member was 50.1 (standard deviation 76.8; range 0-588); the mean h-index was 12.8 (standard deviation 13.8; range 0-67). Number of fellows, faculty academic title, years in practice, and formal fellowship training had a significant (P < .05) positive correlation with both h-index and total publications. The statistical overview presented in this work can help total joint surgeons quantitatively benchmark their academic performance against that of their peers.
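The h-index itself is simple to compute from a citation list: the largest h such that at least h publications each have at least h citations. A short, self-contained Python sketch:

    def h_index(citations):
        """Largest h such that h publications have >= h citations each."""
        h = 0
        for i, c in enumerate(sorted(citations, reverse=True), start=1):
            if c >= i:
                h = i
            else:
                break
        return h

    print(h_index([10, 8, 5, 4, 3]))  # -> 4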
Optimal Facility Location Tool for Logistics Battle Command (LBC)
2015-08-01
..."should city planners have located emergency service facilities so that all households (the demand) had equal access to coverage?" The critical... programming language called Visual Basic for Applications (VBA). CPLEX is a commercial solver for linear, integer, and mixed integer linear programming problems.
Fitting program for linear regressions according to Mahon (1996)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trappitsch, Reto G.
2018-01-09
This program takes the user's input data and fits a linear regression to it using the prescription presented by Mahon (1996). Compared with the commonly used York fit, this method has the correct prescription for measurement error propagation. This software should facilitate the proper fitting of measurements with a simple interface.
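For orientation, the slope iteration shared by this family of errors-in-both-coordinates fits can be sketched in Python; this follows York's iterative solution with uncorrelated errors (Mahon 1996 chiefly revises the uncertainty propagation, which is omitted here):

    import numpy as np

    def york_slope(x, y, sx, sy, tol=1e-10, max_iter=200):
        """Straight-line fit with errors in both coordinates (uncorrelated);
        returns (intercept, slope). A sketch, not Mahon's full prescription."""
        wx, wy = 1.0 / sx**2, 1.0 / sy**2
        b = np.polyfit(x, y, 1)[0]              # ordinary least squares start
        for _ in range(max_iter):
            W = wx * wy / (wx + b**2 * wy)      # combined point weights
            xbar = np.sum(W * x) / np.sum(W)
            ybar = np.sum(W * y) / np.sum(W)
            u, v = x - xbar, y - ybar
            beta = W * (u / wy + b * v / wx)
            b_new = np.sum(W * beta * v) / np.sum(W * beta * u)
            if abs(b_new - b) < tol:
                b = b_new
                break
            b = b_new
        return ybar - b * xbar, b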
Secret Message Decryption: Group Consulting Projects Using Matrices and Linear Programming
ERIC Educational Resources Information Center
Gurski, Katharine F.
2009-01-01
We describe two short group projects for finite mathematics students that incorporate matrices and linear programming into fictional consulting requests presented as a letter to the students. The students are required to use mathematics to decrypt secret messages in one project involving matrix multiplication and inversion. The second project…
LCPT: a program for finding linear canonical transformations. [In MACSYMA]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Char, B.W.; McNamara, B.
This article describes a MACSYMA program to compute symbolically a canonical linear transformation between coordinate systems. The difficulties encountered in implementing this canonical small physics problem are also discussed, along with the implications that may be drawn from such difficulties about widespread MACSYMA usage by the community of computational/theoretical physicists.
ERIC Educational Resources Information Center
Mills, James W.; And Others
1973-01-01
The study reported here tested an application of the Linear Programming Model at the Reading Clinic of Drew University. Results, while not conclusive, indicate that this approach yields greater gains in speed scores than a traditional approach for this population. (Author)
Visual, Algebraic and Mixed Strategies in Visually Presented Linear Programming Problems.
ERIC Educational Resources Information Center
Shama, Gilli; Dreyfus, Tommy
1994-01-01
Identified and classified solution strategies of (n=49) 10th-grade students who were presented with linear programming problems in a predominantly visual setting in the form of a computerized game. Visual strategies were developed more frequently than either algebraic or mixed strategies. Appendix includes questionnaires. (Contains 11 references.)…
A model for managing sources of groundwater pollution
Gorelick, Steven M.
1982-01-01
The waste disposal capacity of a groundwater system can be maximized while maintaining water quality at specified locations by using a groundwater pollutant source management model that is based upon linear programming and numerical simulation. The decision variables of the management model are solute waste disposal rates at various facilities distributed over space. A concentration response matrix is used in the management model to describe transient solute transport and is developed using the U.S. Geological Survey solute transport simulation model. The management model was applied to a complex hypothetical groundwater system. Large-scale management models were formulated as dual linear programming problems to reduce numerical difficulties and computation time. Linear programming problems were solved using a numerically stable, available code. Optimal solutions to problems with successively longer management time horizons indicated that disposal schedules at some sites are relatively independent of the number of disposal periods. Optimal waste disposal schedules exhibited pulsing rather than constant disposal rates. Sensitivity analysis using parametric linear programming showed that a sharp reduction in total waste disposal potential occurs if disposal rates at any site are increased beyond their optimal values.
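The core of such a management model is a compact LP: maximize total disposal subject to the concentration response matrix keeping concentrations below water-quality limits. A minimal Python sketch with an invented two-site, three-observation-point response matrix (SciPy's linprog minimizes, so the objective is negated):

    import numpy as np
    from scipy.optimize import linprog

    # R[i, j]: concentration increase at observation point i per unit
    # disposal rate at facility j (hypothetical values).
    R = np.array([[0.8, 0.1],
                  [0.3, 0.5],
                  [0.1, 0.9]])
    c_max = np.array([5.0, 4.0, 6.0])  # water-quality limits at the points

    # Maximize total disposal = minimize its negative, s.t. R x <= c_max, x >= 0
    res = linprog(c=[-1.0, -1.0], A_ub=R, b_ub=c_max, bounds=[(0, None)] * 2)
    print("optimal disposal rates:", res.x)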
Modification and Adaptation of the Program Evaluation Standards in Saudi Arabia
ERIC Educational Resources Information Center
Alyami, Mohammed
2013-01-01
The Joint Committee on Standards for Educational Evaluation's Program Evaluation Standards is probably the most recognized and applied set of evaluation standards globally. The most recent edition of The Program Evaluation Standards includes five categories and 30 standards. The five categories are Utility, Feasibility, Propriety, Accuracy, and…
NASA Technical Standards Program
NASA Technical Reports Server (NTRS)
Gill, Paul S.; Vaughan, William W.; Parker, Nelson C. (Technical Monitor)
2002-01-01
The NASA Technical Standards Program was officially established in 1997 as a result of a directive issued by the Administrator. It is responsible for Agency-wide technical standards development, adoption (endorsement), and conversion of Center-unique standards for Agency-wide use. One major element of the Program is the review of NASA technical standards products and their replacement with non-Government Voluntary Consensus Standards in accordance with directions issued by the Office of Management and Budget. As part of the Program's function, it developed a NASA Integrated Technical Standards Initiative that consists of an Agency-wide full-text system, a standards update notification system, and a lessons learned-standards integration system. The Program maintains a 'one-stop shop' Website for technical standards and related information on aerospace materials, etc. This paper provides information on the development, current status, and plans for the NASA Technical Standards Program, along with metrics on the utility of the products provided to users both within the nasa.gov Domain and in the Public Domain.
NASA Technical Standards Program
NASA Technical Reports Server (NTRS)
Gill, Paul S.; Vaughan, William W.
2003-01-01
The NASA Technical Standards Program was officially established in 1997 as a result of a directive issued by the Administrator. It is responsible for Agency-wide technical standards development, adoption (endorsement), and conversion of Center-unique standards for Agency-wide use. One major element of the Program is the review of NASA technical standards products and their replacement with non-Government Voluntary Consensus Standards in accordance with directions issued by the Office of Management and Budget. As part of the Program's function, it developed a NASA Integrated Technical Standards Initiative that consists of an Agency-wide full-text system, a standards update notification system, and a lessons learned-standards integration system. The Program maintains a "one-stop shop" Website for technical standards and related information on aerospace materials, etc. This paper provides information on the development, current status, and plans for the NASA Technical Standards Program, along with metrics on the utility of the products provided to users both within the nasa.gov Domain and in the Public Domain.
FPT- FORTRAN PROGRAMMING TOOLS FOR THE DEC VAX
NASA Technical Reports Server (NTRS)
Ragosta, A. E.
1994-01-01
The FORTRAN Programming Tools (FPT) are a series of tools used to support the development and maintenance of FORTRAN 77 source codes. Included are a debugging aid, a CPU time monitoring program, source code maintenance aids, print utilities, and a library of useful, well-documented programs. These tools assist in reducing development time and encouraging high quality programming. Although intended primarily for FORTRAN programmers, some of the tools can be used on data files and other programming languages. BUGOUT is a series of FPT programs that have proven very useful in debugging a particular kind of error and in optimizing CPU-intensive codes. The particular type of error is the illegal addressing of data or code as a result of subtle FORTRAN errors that are not caught by the compiler or at run time. A TRACE option also allows the programmer to verify the execution path of a program. The TIME option assists the programmer in identifying the CPU-intensive routines in a program to aid in optimization studies. Program coding, maintenance, and print aids available in FPT include: routines for building standard format subprogram stubs; cleaning up common blocks and NAMELISTs; removing all characters after column 72; displaying two files side by side on a VT-100 terminal; creating a neat listing of a FORTRAN source code including a Table of Contents, an Index, and Page Headings; converting files between VMS internal format and standard carriage control format; changing text strings in a file without using EDT; and replacing tab characters with spaces. The library of useful, documented programs includes the following: time and date routines; a string categorization routine; routines for converting between decimal, hex, and octal; routines to delay process execution for a specified time; a Gaussian elimination routine for solving a set of simultaneous linear equations; a curve fitting routine for least squares fit to polynomial, exponential, and sinusoidal forms (with a screen-oriented editor); a cubic spline fit routine; a screen-oriented array editor; routines to support parsing; and various terminal support routines. These FORTRAN programming tools are written in FORTRAN 77 and ASSEMBLER for interactive and batch execution. FPT is intended for implementation on DEC VAX series computers operating under VMS. This collection of tools was developed in 1985.
Development of NASA Technical Standards Program Relative to Enhancing Engineering Capabilities
NASA Technical Reports Server (NTRS)
Gill, Paul S.; Vaughan, William W.
2003-01-01
The enhancement of engineering capabilities is an important aspect of any organization, especially those engaged in aerospace development activities. Technical standards are one of the key elements of this endeavor. The NASA Technical Standards Program was formed in 1997 in response to the NASA Administrator's directive to develop an Agency-wide Technical Standards Program. The Program's principal objective involved converting Center-unique technical standards into Agency-wide standards and the adoption/endorsement of non-Government technical standards in lieu of government standards. In the process of these actions, the potential for further enhancement of the Agency's engineering capabilities was noted relative to the value of being able to access, Agency-wide, the necessary full-text technical standards, standards update notifications, and integration of lessons learned with technical standards, all available to the user from one Website. This was accomplished and is now being enhanced based on feedback from the Agency's engineering staff and supporting contractors. This paper addresses the development experiences with the NASA Technical Standards Program and the enhancement of the Agency's engineering capabilities provided by the Program's products. Metrics are provided on significant aspects of the Program.
Breton, J J
1999-04-01
Confusion regarding definitions and standards of prevention and promotion programs is pervasive, as revealed by a review of such programs in Canada. This paper examines how a discussion of scientific paradigms can help clarify models of prevention and mental health promotion and proposes the complementary development of prevention and promotion programs. A paradigm shift in science contributed to the emergence of the transactional model, advocating multiple causes and dynamic transactions between the individual and the environment. Consequently, the view of prevention applying over a linear continuum and of single stressful events causing mental disorders may no longer be appropriate. It is the author's belief that the new science of chaos theory, which addresses processes involved in the development of systems, can be applied to child development and thus to the heart of prevention and promotion programs. Critical moments followed by transitions or near-chaotic behaviours lead to stable states better adapted to the environment. Prevention programs would focus on the critical moments and target groups at risk to reduce risk factors. Promotion programs would focus on stable states and target the general population to develop age-appropriate life skills. The concept of sensitive dependence on initial conditions and certain empirical studies suggest that the programs would have the greatest impact at the beginning of life. It is hoped that this effort to organize knowledge about conceptual models of prevention and mental health promotion programs will foster the development of these programs to meet the urgent needs of Canadian children.
Health Occupations Education. Vocational Education Program Courses Standards.
ERIC Educational Resources Information Center
Florida State Dept. of Education, Tallahassee. Div. of Vocational, Adult, and Community Education.
This document contains vocational education program courses standards for exploratory courses, practical arts courses, and job preparatory programs offered at the secondary or postsecondary level. Each program standard is composed of two parts: a curriculum framework and student performance standards. The curriculum framework includes four major…
Kanamori, Shogo; Castro, Marcia C; Sow, Seydou; Matsuno, Rui; Cissokho, Alioune; Jimba, Masamine
2016-01-01
The 5S method is a lean management tool for workplace organization, with 5S being an abbreviation for five Japanese words that translate to English as Sort, Set in Order, Shine, Standardize, and Sustain. In Senegal, the 5S intervention program was implemented in 10 health centers in two regions between 2011 and 2014. The objective was to identify the impact of the 5S intervention program on the satisfaction of clients (patients and caretakers) who visited the health centers. A standardized 5S intervention protocol was implemented in the health centers using a quasi-experimental separate pre-post samples design (four intervention and three control health facilities). A questionnaire with 10 five-point Likert items was used to measure client satisfaction. Linear regression analysis was conducted to identify the intervention's effect on the client satisfaction scores, represented by an equally weighted average of the 10 Likert items (Cronbach's alpha = 0.83). Additional regression analyses were conducted to identify the intervention's effect on the scores of each Likert item. Backward stepwise linear regression (n = 1,928) indicated a statistically significant effect of the 5S intervention, represented by an increase of 0.19 points in the client satisfaction scores in the intervention group 6 to 8 months after the intervention (p = 0.014). Additional regression analyses showed significant score increases of 0.44 (p = 0.002), 0.14 (p = 0.002), 0.06 (p = 0.019), and 0.17 (p = 0.044) points on four items, which were, respectively, healthcare staff members' communication, explanations about illnesses or cases, consultation duration, and clients' overall satisfaction. The 5S method has the potential to improve client satisfaction at resource-poor health facilities and could therefore be recommended as a strategic option for improving the quality of healthcare service in low- and middle-income countries. To explore more effective intervention modalities, further studies need to address the mechanisms by which 5S leads to attitude changes in healthcare staff.
DOE standard: The Department of Energy Laboratory Accreditation Program for radiobioassay
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1998-12-01
This technical standard describes the US Department of Energy Laboratory Accreditation Program (DOELAP) for Radiobioassay, for use by the US Department of Energy (DOE) and DOE Contractor radiobioassay programs. This standard is intended to be used in conjunction with the general administrative technical standard that describes the overall DOELAP accreditation process--DOE-STD-1111-98, Department of Energy Laboratory Accreditation Program Administration. This technical standard pertains to radiobioassay service laboratories that provide either direct or indirect (in vivo or in vitro) radiobioassay measurements in support of internal dosimetry programs at DOE facilities or for DOE and DOE contractors. Similar technical standards have been developed for other DOELAP dosimetry programs. This program consists of providing an accreditation to DOE radiobioassay programs based on successful completion of a performance-testing process and an on-site evaluation by technical experts. This standard describes the technical requirements and processes specific to the DOELAP Radiobioassay Accreditation Program as required by 10 CFR 835 and as specified generically in DOE-STD-1111-98.
Nutrient density score of typical Indonesian foods and dietary formulation using linear programming.
Jati, Ignasius Radix A P; Vadivel, Vellingiri; Nöhr, Donatus; Biesalski, Hans Konrad
2012-12-01
The present research aimed to analyse the nutrient density (ND), nutrient adequacy score (NAS) and energy density (ED) of Indonesian foods and to formulate a balanced diet using linear programming. Data on typical Indonesian diets were obtained from the Indonesian Socio-Economic Survey 2008. ND was investigated for 122 Indonesian foods. NAS was calculated for single nutrients such as Fe, Zn and vitamin A. Correlation analysis was performed between ND and ED, as well as between monthly expenditure class and food consumption pattern in Indonesia. Linear programming calculations were performed using the software POM-QM for Windows version 3. Setting: Republic of Indonesia, 2008. Subjects: public households (n = 68,800). Vegetables had the highest ND of the food groups, followed by animal-based foods, fruits and staple foods. Based on NAS, the top ten food items for each food group were identified. Most of the staple foods had high ED and contributed towards daily energy fulfillment, followed by animal-based foods, vegetables and fruits. Commodities with high ND tended to have low ED. Linear programming could be used to formulate a balanced diet. In contrast to staple foods, purchases of fruit, vegetables and animal-based foods increased with the rise of monthly expenditure. People should select food items based on ND and NAS to alleviate micronutrient deficiencies in Indonesia. Dietary formulation calculated using linear programming to achieve RDA levels for micronutrients could be recommended for different age groups of the Indonesian population.
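The balanced-diet formulation is a classic minimum-cost LP: choose food quantities meeting nutrient floors at least cost. A minimal Python sketch with invented per-100 g numbers for three foods (the study used 122 foods and POM-QM, not SciPy):

    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical per-100 g data: cost, energy (kcal), iron (mg), zinc (mg)
    foods  = ["rice", "spinach", "fish"]
    cost   = np.array([0.05, 0.10, 0.40])
    energy = np.array([130.0, 23.0, 120.0])
    iron   = np.array([0.4, 2.7, 0.8])
    zinc   = np.array([0.5, 0.5, 0.7])

    # Minimize cost s.t. energy >= 2000, iron >= 10, zinc >= 8.
    # linprog uses <=, so the >= rows are negated.
    A_ub = -np.vstack([energy, iron, zinc])
    b_ub = -np.array([2000.0, 10.0, 8.0])
    res = linprog(c=cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
    print(dict(zip(foods, np.round(res.x, 1))))   # 100 g units of each food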
Maillot, Matthieu; Ferguson, Elaine L; Drewnowski, Adam; Darmon, Nicole
2008-06-01
Nutrient profiling ranks foods based on their nutrient content; such profiles may help identify foods with a good nutritional quality for their price. This hypothesis was tested using diet modeling with linear programming. Analyses were undertaken using food intake data from the nationally representative French INCA (enquête Individuelle et Nationale sur les Consommations Alimentaires) survey and its associated food composition and price database. For each food, a nutrient profile score was defined as the ratio between the previously published nutrient density score (NDS) and the limited nutrient score (LIM); a nutritional-quality-for-price indicator was developed and calculated from the relationship between its NDS:LIM and energy cost (in euro/100 kcal). We developed linear programming models to design diets that fulfilled increasing levels of nutritional constraints at a minimal cost. The median NDS:LIM values of foods selected in modeled diets increased as the levels of nutritional constraints increased (P = 0.005). In addition, the proportion of foods with a good nutritional-quality-for-price indicator was higher (P < 0.0001) among foods selected (81%) than among foods not selected (39%) in modeled diets. This agreement between the linear programming and the nutrient profiling approaches indicates that nutrient profiling can help identify foods of good nutritional quality for their price. Linear programming is a useful tool for testing nutrient profiling systems and validating the concept of nutrient profiling.
De Carvalho, Irene Stuart Torrié; Granfeldt, Yvonne; Dejmek, Petr; Håkansson, Andreas
2015-03-01
Linear programming has been used extensively as a tool for nutritional recommendations. Extending the methodology to food formulation presents new challenges, since not all combinations of nutritious ingredients will produce an acceptable food. Furthermore, it would help in implementation and in ensuring the feasibility of the suggested recommendations. To extend the previously used linear programming methodology from diet optimization to food formulation using consistency constraints. In addition, to exemplify usability using the case of a porridge mix formulation for emergency situations in rural Mozambique. The linear programming method was extended with a consistency constraint based on previously published empirical studies on swelling of starch in soft porridges. The new method was exemplified using the formulation of a nutritious, minimum-cost porridge mix for children aged 1 to 2 years for use as a complete relief food, based primarily on local ingredients, in rural Mozambique. A nutritious porridge fulfilling the consistency constraints was found; however, the minimum cost was unfeasible with local ingredients only. This illustrates the challenges in formulating nutritious yet economically feasible foods from local ingredients. The high cost was caused by the high cost of mineral-rich foods. A nutritious, low-cost porridge that fulfills the consistency constraints was obtained by including supplements of zinc and calcium salts as ingredients. The optimizations were successful in fulfilling all constraints and provided a feasible porridge, showing that the extended constrained linear programming methodology provides a systematic tool for designing nutritious foods.
Anomalous dielectric relaxation with linear reaction dynamics in space-dependent force fields.
Hong, Tao; Tang, Zhengming; Zhu, Huacheng
2016-12-28
The anomalous dielectric relaxation of disordered reaction with linear reaction dynamics is studied via the continuous time random walk model in the presence of space-dependent electric field. Two kinds of modified reaction-subdiffusion equations are derived for different linear reaction processes by the master equation, including the instantaneous annihilation reaction and the noninstantaneous annihilation reaction. If a constant proportion of walkers is added or removed instantaneously at the end of each step, there will be a modified reaction-subdiffusion equation with a fractional order temporal derivative operating on both the standard diffusion term and a linear reaction kinetics term. If the walkers are added or removed at a constant per capita rate during the waiting time between steps, there will be a standard linear reaction kinetics term but a fractional order temporal derivative operating on an anomalous diffusion term. The dielectric polarization is analyzed based on the Legendre polynomials and the dielectric properties of both reactions can be expressed by the effective rotational diffusion function and component concentration function, which is similar to the standard reaction-diffusion process. The results show that the effective permittivity can be used to describe the dielectric properties in these reactions if the chemical reaction time is much longer than the relaxation time.
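Transcribing the abstract's verbal description into symbols (a sketch assuming standard CTRW notation: C the walker concentration, K the generalized diffusion coefficient, k the linear reaction rate, 0 < gamma < 1, and the Riemann-Liouville fractional derivative of order 1-gamma), the two modified reaction-subdiffusion equations read:

    \frac{\partial C}{\partial t} = {}_0D_t^{1-\gamma}\left[ K\,\nabla^2 C - kC \right]
        (instantaneous removal at the end of each step)

    \frac{\partial C}{\partial t} = K\,\nabla^2\,{}_0D_t^{1-\gamma} C - kC
        (constant per capita removal during the waiting time between steps)

In the first form the fractional operator acts on both the diffusion and reaction terms; in the second the reaction term remains standard, matching the two cases described above.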
Users manual for flight control design programs
NASA Technical Reports Server (NTRS)
Nalbandian, J. Y.
1975-01-01
Computer programs for the design of analog and digital flight control systems are documented. The program DIGADAPT uses linear-quadratic-Gaussian synthesis algorithms in the design of command response controllers and state estimators, and it applies covariance propagation analysis to the selection of sampling intervals for digital systems. The program SCHED executes correlation and regression analyses for the development of gain and trim schedules to be used in open-loop explicit-adaptive control laws. A linear time-varying simulation of aircraft motions is provided by the program TVHIS, which includes guidance and control logic, as well as models for control actuator dynamics. The programs are coded in FORTRAN and are compiled and executed on both IBM and CDC computers.
Lyubetsky, Vassily; Gershgorin, Roman; Gorbunov, Konstantin
2017-12-06
Chromosome structure is a very limited model of the genome, including information about its chromosomes such as their linear or circular organization, the order of genes on them, and the DNA strand encoding a gene. Gene lengths, nucleotide composition, and intergenic regions are ignored. Although highly incomplete, such a structure can be used in many cases, e.g., to reconstruct phylogeny and evolutionary events, to identify gene synteny, regulatory elements and promoters (considering highly conserved elements), etc. Three problems are considered; all assume unequal gene content and the presence of gene paralogs. The distance problem is to determine the minimum number of operations required to transform one chromosome structure into another, together with the corresponding transformation itself, including the identification of paralogs in the two structures. We use the DCJ model, which is one of the most studied combinatorial rearrangement models. Double-, sesqui-, and single-operations as well as deletion and insertion of a chromosome region are considered in the model; the single ones comprise cut and join. In the reconstruction problem, a phylogenetic tree with chromosome structures in the leaves is given. It is necessary to assign structures to the inner nodes of the tree so as to minimize the sum of distances between the terminal structures of each edge, and to identify the mutual paralogs in a fairly large set of structures. A linear algorithm is known for the distance problem without paralogs, while the presence of paralogs makes it NP-hard. If paralogs are allowed but the insertion and deletion operations are missing (and special constraints are imposed), a reduction of the distance problem to integer linear programming is known. Apparently, the reconstruction problem is NP-hard even in the absence of paralogs. The problem of contigs is to find the optimal arrangements of each given set of contigs, which also includes the mutual identification of paralogs. We proved that these problems can be reduced to integer linear programming formulations, which allows them to be solved with a very special case of the integer linear programming tool. The results were tested on synthetic and biological samples. Three well-known problems were thus reduced to a very special case of integer linear programming, which constitutes a new method for their solution. Integer linear programming is clearly among the main computational methods and, as generally accepted, is fast on average; in particular, computation systems specifically targeted at it are available. The challenges are to reduce the size of the corresponding integer linear programming formulations and to incorporate a more detailed biological concept in our model of the reconstruction.
25 CFR 36.20 - Standard V-Minimum academic programs/school calendar.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 1 2012-04-01 2011-04-01 true Standard V-Minimum academic programs/school calendar. 36... ACADEMIC STANDARDS FOR THE BASIC EDUCATION OF INDIAN CHILDREN AND NATIONAL CRITERIA FOR DORMITORY SITUATIONS Minimum Program of Instruction § 36.20 Standard V—Minimum academic programs/school calendar. (a...
25 CFR 36.20 - Standard V-Minimum academic programs/school calendar.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 1 2013-04-01 2013-04-01 false Standard V-Minimum academic programs/school calendar. 36... ACADEMIC STANDARDS FOR THE BASIC EDUCATION OF INDIAN CHILDREN AND NATIONAL CRITERIA FOR DORMITORY SITUATIONS Minimum Program of Instruction § 36.20 Standard V—Minimum academic programs/school calendar. (a...
25 CFR 36.20 - Standard V-Minimum academic programs/school calendar.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 1 2014-04-01 2014-04-01 false Standard V-Minimum academic programs/school calendar. 36... ACADEMIC STANDARDS FOR THE BASIC EDUCATION OF INDIAN CHILDREN AND NATIONAL CRITERIA FOR DORMITORY SITUATIONS Minimum Program of Instruction § 36.20 Standard V—Minimum academic programs/school calendar. (a...
Libraries for Software Use on Peregrine | High-Performance Computing | NREL
...-specific libraries. The libraries list (Name: Description) includes: BLAS (Basic Linear Algebra Subroutines, libraries only); ... (managing hierarchically structured data); LAPACK (standard Netlib offering for computational linear algebra).
The Next Linear Collider Program
Meetings of the Linear Collider International Study Group (ISG): Eleventh Linear Collider International Study Group, at KEK, December 16-19, 2003; Tenth (X) Linear Collider International Study Group, at SLAC, June 2003; Ninth Linear Collider International Study Group, at KEK, December 10-13...
Caution on the use of liquid nitrogen traps in stable hydrogen isotope-ratio mass spectrometry
Coplen, Tyler B.; Qi, Haiping
2010-01-01
An anomalous stable hydrogen isotopic fractionation of 4‰ in gaseous hydrogen has been correlated with the process of adding liquid nitrogen (LN2) to top off the dewar of a stainless-steel water trap on a gaseous hydrogen-water platinum equilibration system. Although the cause of this isotopic fractionation is unknown, its effect can be mitigated by (1) increasing the capacity of any dewars so that they do not need to be filled during a daily analytic run, (2) interspersing isotopic reference waters among unknowns, and (3) applying a linear drift correction and linear normalization to isotopic results with a program such as the Laboratory Information Management System (LIMS) for Light Stable Isotopes. With adoption of the above guidelines, measurement uncertainty can be substantially improved. For example, the long-term (months to years) δ2H reproducibility (1σ standard deviation) of nine local isotopic reference waters analyzed daily improved substantially from about 1‰ to 0.58‰. This isotopically fractionating mechanism might affect other isotope-ratio mass spectrometers in which LN2 is used as a moisture trap for gaseous hydrogen.
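Steps (2) and (3) amount to two small linear fits. A Python sketch with invented numbers: the drift is estimated from interspersed reference waters as a linear function of position in the run, subtracted, and the result mapped onto the reference scale by two-point normalization (a generic sketch, not the LIMS implementation):

    import numpy as np

    # Reference water of known delta2H measured at several sequence positions.
    ref_pos  = np.array([1, 10, 20, 30])
    ref_true = np.full(4, -50.0)                        # accepted value
    ref_meas = np.array([-49.0, -49.8, -50.9, -51.7])   # drifting measurements

    # (1) Linear drift correction: fit (measured - true) vs. position.
    drift = np.polyfit(ref_pos, ref_meas - ref_true, 1)
    def drift_correct(delta, pos):
        return delta - np.polyval(drift, pos)

    # (2) Two-point linear normalization onto the reference scale
    # (anchor values are invented for illustration).
    lo_meas, hi_meas, lo_true, hi_true = -120.5, 2.1, -123.0, 1.2
    slope = (hi_true - lo_true) / (hi_meas - lo_meas)
    def normalize(delta):
        return lo_true + slope * (delta - lo_meas)

    print(normalize(drift_correct(-75.3, pos=15)))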
Zhang, Meng-Qi; Jia, Jing-Ying; Lu, Chuan; Liu, Gang-Yi; Yu, Cheng-Yin; Gui, Yu-Zhou; Liu, Yun; Liu, Yan-Mei; Wang, Wei; Li, Shui-Jun; Yu, Chen
2010-06-01
A simple, reliable and sensitive liquid chromatography-isotope dilution mass spectrometry (LC-ID/MS) method was developed and validated for the quantification of olanzapine in human plasma. Plasma samples (50 microL) were extracted with tert-butyl methyl ether, and an isotope-labeled internal standard (olanzapine-D3) was used. The chromatographic separation was performed on an XBridge Shield RP18 column (100 mm x 2.1 mm, 3.5 microm, Waters). An isocratic program was used at a flow rate of 0.4 mL x min(-1) with a mobile phase consisting of acetonitrile and ammonium buffer (pH 8). The protonated ions of the analytes were detected in positive ionization by multiple reaction monitoring (MRM) mode. The plasma method, with a lower limit of quantification (LLOQ) of 0.1 ng x mL(-1), demonstrated good linearity over the range 0.1-30 ng x mL(-1) of olanzapine. Specificity, linearity, accuracy, precision, recovery, matrix effect and stability were evaluated during method validation. The validated method was successfully applied to analyzing human plasma samples in a bioavailability study.
ERIC Educational Resources Information Center
Georgia Univ., Athens. Dept. of Vocational Education.
This publication contains statewide standards for the masonry program in Georgia. The standards are divided into 12 categories: foundations (philosophy, purpose, goals, program objectives, availability, evaluation); admissions (admission requirements, provisional admission requirements, recruitment, evaluation and planning); program structure…
Linearization: Students Forget the Operating Point
ERIC Educational Resources Information Center
Roubal, J.; Husek, P.; Stecha, J.
2010-01-01
Linearization is a standard part of modeling and control design theory for a class of nonlinear dynamical systems taught in basic undergraduate courses. Although linearization is a straight-line methodology, it is not applied correctly by many students since they often forget to keep the operating point in mind. This paper explains the topic and…
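As a one-equation reminder of the point at issue (a generic sketch, not the paper's worked example): for \dot{x} = f(x, u) with operating point (x_0, u_0) chosen so that f(x_0, u_0) = 0, the linear model governs the deviations \Delta x = x - x_0 and \Delta u = u - u_0,

    \Delta\dot{x} \approx A\,\Delta x + B\,\Delta u, \qquad
    A = \left.\frac{\partial f}{\partial x}\right|_{(x_0,u_0)}, \quad
    B = \left.\frac{\partial f}{\partial u}\right|_{(x_0,u_0)},

so feeding the absolute signals x and u into the linear model, i.e., forgetting the operating point, is exactly the error described above.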
Maternal Quality Standards for Children's Television Programs.
ERIC Educational Resources Information Center
Nikken, Peter; And Others
1996-01-01
Investigates the standards mothers use to evaluate four types of children's television programs: (1) cartoons; (2) news programs for children; (3) educational children's programs; and (4) dramatic children's programs. Three quality standards considered most important were comprehensibility, aesthetic quality, and elicitation of involvement.…
Schwarz maps of algebraic linear ordinary differential equations
NASA Astrophysics Data System (ADS)
Sanabria Malagón, Camilo
2017-12-01
A linear ordinary differential equation is called algebraic if all its solutions are algebraic over its field of definition. In this paper we solve the problem of finding closed-form solutions to algebraic linear ordinary differential equations in terms of standard equations. Furthermore, we obtain a method to compute all algebraic linear ordinary differential equations with rational coefficients by studying their associated Schwarz map through Picard-Vessiot theory.
A primer for biomedical scientists on how to execute model II linear regression analysis.
Ludbrook, John
2012-04-01
1. There are two very different ways of executing linear regression analysis. One is Model I, when the x-values are fixed by the experimenter. The other is Model II, in which the x-values are free to vary and are subject to error. 2. I have received numerous complaints from biomedical scientists that they have great difficulty in executing Model II linear regression analysis. This may explain the results of a Google Scholar search, which showed that the authors of articles in journals of physiology, pharmacology and biochemistry rarely use Model II regression analysis. 3. I repeat my previous arguments in favour of using least products linear regression analysis for Model II regressions. I review three methods for executing ordinary least products (OLP) and weighted least products (WLP) regression analysis: (i) scientific calculator and/or computer spreadsheet; (ii) specific purpose computer programs; and (iii) general purpose computer programs. 4. Using a scientific calculator and/or computer spreadsheet, it is easy to obtain correct values for OLP slope and intercept, but the corresponding 95% confidence intervals (CI) are inaccurate. 5. Using specific purpose computer programs, the freeware computer program smatr gives the correct OLP regression coefficients and obtains 95% CI by bootstrapping. In addition, smatr can be used to compare the slopes of OLP lines. 6. When using general purpose computer programs, I recommend the commercial programs systat and Statistica for those who regularly undertake linear regression analysis and I give step-by-step instructions in the Supplementary Information as to how to use loss functions. © 2011 The Author. Clinical and Experimental Pharmacology and Physiology. © 2011 Blackwell Publishing Asia Pty Ltd.
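For readers without those packages, the OLP point estimates (the geometric mean regression: slope magnitude s_y/s_x carrying the sign of the correlation) and a bootstrap CI for the slope can be sketched in a few lines of Python; this is a generic sketch, not the smatr or systat implementation:

    import numpy as np

    def olp_fit(x, y):
        """Ordinary least products (geometric mean) regression."""
        r = np.corrcoef(x, y)[0, 1]
        slope = np.sign(r) * np.std(y, ddof=1) / np.std(x, ddof=1)
        return np.mean(y) - slope * np.mean(x), slope   # intercept, slope

    def bootstrap_slope_ci(x, y, n_boot=2000, seed=0):
        """95% percentile bootstrap CI for the OLP slope."""
        rng, n = np.random.default_rng(seed), len(x)
        slopes = [olp_fit(x[idx], y[idx])[1]
                  for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
        return np.percentile(slopes, [2.5, 97.5])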
Eric J. Gustafson; L. Jay Roberts; Larry A. Leefers
2006-01-01
Forest management planners require analytical tools to assess the effects of alternative strategies on the sometimes disparate benefits from forests such as timber production and wildlife habitat. We assessed the spatial patterns of alternative management strategies by linking two models that were developed for different purposes. We used a linear programming model (...
A Partitioning and Bounded Variable Algorithm for Linear Programming
ERIC Educational Resources Information Center
Sheskin, Theodore J.
2006-01-01
An interesting new partitioning and bounded variable algorithm (PBVA) is proposed for solving linear programming problems. The PBVA is a variant of the simplex algorithm which uses a modified form of the simplex method followed by the dual simplex method for bounded variables. In contrast to the two-phase method and the big M method, the PBVA does…
Radar Resource Management in a Dense Target Environment
2014-03-01
...problem faced by networked multifunction phased array radars (MFRs). While relaxing our assumptions concerning information gain presents numerous challenges worth exploring, future research... Multifunction phased array radars (MFRs) are capable of performing various tasks in rapid succession. The performance of target search...
Linear circuit analysis program for IBM 1620 Monitor 2, 1311/1443 data processing system /CIRCS/
NASA Technical Reports Server (NTRS)
Hatfield, J.
1967-01-01
CIRCS is a modification of the IBSNAP Circuit Analysis Program for use on smaller systems. This data processing system retains the basic dc analysis, transient analysis, and FORTRAN 2 formats. It can be used on the IBM 1620/1311 Monitor I Mod 5 system, and solves a linear network containing 15 nodes and 45 branches.
Ren, Jingzheng; Dong, Liang; Sun, Lu; Goodsite, Michael Evan; Tan, Shiyu; Dong, Lichun
2015-01-01
The aim of this work was to develop a model for optimizing the life cycle cost of a biofuel supply chain under uncertainties. Multiple agriculture zones, multiple transportation modes for the transport of grain and biofuel, multiple biofuel plants, and multiple market centers were considered in this model, and the prices of the resources, the yield of grain, and the market demands were regarded as interval numbers instead of constants. An interval linear programming model was developed, and a method for solving interval linear programs was presented. An illustrative case was studied using the proposed model, and the results showed that the proposed model is feasible for designing biofuel supply chains under uncertainties. Copyright © 2015 Elsevier Ltd. All rights reserved.
LFSPMC: Linear feature selection program using the probability of misclassification
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.; Marion, B. P.
1975-01-01
The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
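A Python sketch of the idea for two classes with invented parameters: project onto b, model each projected class as univariate normal, and numerically minimize the resulting one-dimensional probability of misclassification over b and a decision threshold t (the technique itself produces a single linear combination for m classes; only the two-class, equal-prior case is sketched):

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    # Two hypothetical Gaussian classes with known means and covariances.
    mu1, mu2 = np.array([0.0, 0.0]), np.array([2.0, 1.0])
    S1 = np.array([[1.0, 0.3], [0.3, 1.0]])
    S2 = np.array([[1.5, -0.2], [-0.2, 0.8]])

    def pmc(params):
        # Project onto b; classify below/above threshold t (class 1 assumed
        # to lie below along b). The common scale of (b, t) is arbitrary.
        b, t = params[:2], params[2]
        m1, m2 = b @ mu1, b @ mu2
        s1, s2 = np.sqrt(b @ S1 @ b), np.sqrt(b @ S2 @ b)
        return 0.5 * norm.sf(t, loc=m1, scale=s1) + 0.5 * norm.cdf(t, loc=m2, scale=s2)

    res = minimize(pmc, x0=np.array([1.0, 0.5, 1.0]))
    print("b =", res.x[:2], " P(misclassification) =", res.fun)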
TI-59 Programs for Multiple Regression.
1980-05-01
general linear hypothesis model of full rank [Graybill, 1961] can be written as Y = Xβ + ε, ε ~ N(0, σ²I), where Y (n×1) is the vector of n observations, X is n×k, β is k×1, and ε is n×1... a "reduced model" solution, and confidence intervals for linear functions of the coefficients can be obtained using (X'X) and σ̂², based on the t... PROGRAM DESCRIPTION: For the general linear hypothesis model Y = Xβ + ε, calculates
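A compact Python sketch of this model on simulated data: the least-squares estimate (X'X)⁻¹X'Y, the unbiased residual variance, and t-based 95% confidence intervals for the coefficients:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n, k = 30, 3
    X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])  # full rank
    Y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(0, 1.0, n)

    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ Y            # least-squares estimate of beta
    resid = Y - X @ beta_hat
    s2 = resid @ resid / (n - k)            # unbiased estimate of sigma^2

    t = stats.t.ppf(0.975, df=n - k)        # 95% t confidence intervals
    se = np.sqrt(s2 * np.diag(XtX_inv))
    for bh, e in zip(beta_hat, se):
        print(f"{bh:.3f} +/- {t * e:.3f}")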
SLFP: a stochastic linear fractional programming approach for sustainable waste management.
Zhu, H; Huang, G H
2011-12-01
A stochastic linear fractional programming (SLFP) approach is developed for supporting sustainable municipal solid waste management under uncertainty. The SLFP method can solve ratio optimization problems associated with random information, where chance-constrained programming is integrated into a linear fractional programming framework. It has advantages in: (1) comparing objectives of two aspects, (2) reflecting system efficiency, (3) dealing with uncertainty expressed as probability distributions, and (4) providing optimal-ratio solutions under different system-reliability conditions. The method is applied to a case study of waste flow allocation within a municipal solid waste (MSW) management system. The obtained solutions are useful for identifying sustainable MSW management schemes with maximized system efficiency under various constraint-violation risks. The results indicate that SLFP can support in-depth analysis of the interrelationships among system efficiency, system cost and system-failure risk. Copyright © 2011 Elsevier Ltd. All rights reserved.
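The linear fractional core of such a model can be reduced to an ordinary LP via the Charnes-Cooper transformation; a minimal sketch with illustrative data follows (the chance-constraint conversion that SLFP adds on top of this is omitted):

# Hedged sketch: Charnes-Cooper turns
#   max (c'x + alpha) / (d'x + beta)  s.t.  Ax <= b, x >= 0
# into an LP in (y, t) with y = t*x and t = 1/(d'x + beta).
import numpy as np
from scipy.optimize import linprog

c, alpha = np.array([3.0, 1.0]), 0.0     # numerator: e.g. system benefit
d, beta = np.array([1.0, 2.0]), 1.0      # denominator: e.g. system cost
A = np.array([[1.0, 1.0]])
b = np.array([10.0])

# variables z = [y1, y2, t]; maximize c'y + alpha*t  ==>  minimize the negative
obj = -np.concatenate([c, [alpha]])
A_ub = np.hstack([A, -b[:, None]])              # A y - b t <= 0
A_eq = np.array([np.concatenate([d, [beta]])])  # d'y + beta t = 1
res = linprog(obj, A_ub=A_ub, b_ub=np.zeros(1), A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * 3)
y, t = res.x[:2], res.x[2]
print("optimal ratio: %.4f  at x =" % -res.fun, y / t)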
Writing and compiling code into biochemistry.
Shea, Adam; Fett, Brian; Riedel, Marc D; Parhi, Keshab
2010-01-01
This paper presents a methodology for translating iterative arithmetic computation, specified as high-level programming constructs, into biochemical reactions. From an input/output specification, we generate biochemical reactions that produce output quantities of proteins as a function of input quantities performing operations such as addition, subtraction, and scalar multiplication. Iterative constructs such as "while" loops and "for" loops are implemented by transferring quantities between protein types, based on a clocking mechanism. Synthesis first is performed at a conceptual level, in terms of abstract biochemical reactions - a task analogous to high-level program compilation. Then the results are mapped onto specific biochemical reactions selected from libraries - a task analogous to machine language compilation. We demonstrate our approach through the compilation of a variety of standard iterative functions: multiplication, exponentiation, discrete logarithms, raising to a power, and linear transforms on time series. The designs are validated through transient stochastic simulation of the chemical kinetics. We are exploring DNA-based computation via strand displacement as a possible experimental chassis.
Monitoring Programs Using Rewriting
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Rosu, Grigore; Lan, Sonie (Technical Monitor)
2001-01-01
We present a rewriting algorithm for efficiently testing future time Linear Temporal Logic (LTL) formulae on finite execution traces. The standard models of LTL are infinite traces, reflecting the behavior of reactive and concurrent systems which conceptually may be continuously alive. In most past applications of LTL, theorem provers and model checkers have been used to formally prove that down-scaled models satisfy such LTL specifications. Our goal is instead to use LTL for up-scaled testing of real software applications, corresponding to analyzing the conformance of finite traces against LTL formulae. We first describe what it means for a finite trace to satisfy an LTL property and then suggest an optimized algorithm based on transforming LTL formulae. We use the Maude rewriting logic, which turns out to be a good notation and is supported by an efficient rewriting engine for performing these experiments. The work constitutes part of the Java PathExplorer (JPAX) project, the purpose of which is to develop a flexible tool for monitoring Java program executions.
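A hedged sketch of the formula-rewriting idea in ordinary Python (the paper itself works in Maude rewriting logic): each event rewrites ("progresses") the formula, and the residual formula is evaluated when the finite trace ends. The operator encoding below is an assumption for illustration only.

# Formulas are nested tuples; propositions are strings; an event is the set
# of propositions true at that step.
def step(f, ev):
    """Rewrite formula f against one event."""
    if f is True or f is False: return f
    if isinstance(f, str): return f in ev
    op = f[0]
    if op == 'not':
        r = step(f[1], ev)
        return (not r) if isinstance(r, bool) else ('not', r)
    if op == 'and':  return land(step(f[1], ev), step(f[2], ev))
    if op == 'or':   return lor(step(f[1], ev), step(f[2], ev))
    if op == 'next': return f[1]
    if op == 'always':     return land(step(f[1], ev), f)
    if op == 'eventually': return lor(step(f[1], ev), f)
    if op == 'until':      return lor(step(f[2], ev), land(step(f[1], ev), f))

def land(a, b):   # simplifying 'and'
    if a is False or b is False: return False
    if a is True: return b
    if b is True: return a
    return ('and', a, b)

def lor(a, b):    # simplifying 'or'
    if a is True or b is True: return True
    if a is False: return b
    if b is False: return a
    return ('or', a, b)

def final(f):
    """Value of the residual formula at the end of the finite trace."""
    if isinstance(f, bool): return f
    if isinstance(f, str): return False       # no more states
    op = f[0]
    if op == 'not': return not final(f[1])
    if op == 'and': return final(f[1]) and final(f[2])
    if op == 'or':  return final(f[1]) or final(f[2])
    if op == 'always': return True
    return False                               # next / eventually / until

def check(formula, trace):
    for ev in trace:
        formula = step(formula, ev)
    return final(formula)

# G(request -> F grant) over a finite trace:
prop = ('always', ('or', ('not', 'request'), ('eventually', 'grant')))
print(check(prop, [{'request'}, set(), {'grant'}]))   # True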
Generalized hydrodynamics and non-equilibrium steady states in integrable many-body quantum systems
NASA Astrophysics Data System (ADS)
Vasseur, Romain; Bulchandani, Vir; Karrasch, Christoph; Moore, Joel
The long-time dynamics of thermalizing many-body quantum systems can typically be described in terms of a conventional hydrodynamics picture that results from the decay of all but a few slow modes associated with standard conservation laws (such as particle number, energy, or momentum). However, hydrodynamics is expected to fail for integrable systems that are characterized by an infinite number of conservation laws, leading to unconventional transport properties and to complex non-equilibrium states beyond the traditional dogma of statistical mechanics. In this talk, I will describe recent attempts to understand such stationary states far from equilibrium using a generalized hydrodynamics picture. I will discuss the consistency of "Bethe-Boltzmann" kinetic equations with linear response Drude weights and with density-matrix renormalization group calculations. This work was supported by the Department of Energy through the Quantum Materials program (R. V.), NSF DMR-1206515, AFOSR MURI and a Simons Investigatorship (J. E. M.), DFG through the Emmy Noether program KA 3360/2-1 (C. K.).
Visual tool for estimating the fractal dimension of images
NASA Astrophysics Data System (ADS)
Grossu, I. V.; Besliu, C.; Rusu, M. V.; Jipa, Al.; Bordeianu, C. C.; Felea, D.
2009-10-01
This work presents a new Visual Basic 6.0 application for estimating the fractal dimension of images, based on an optimized version of the box-counting algorithm. In an attempt to separate the real information from "noise", we also considered the family of all band-pass filters with the same band-width (specified as a parameter). The fractal dimension can thus be represented as a function of the pixel color code. The program was used for the study of cracks in paintings, as an additional tool to help the critic decide whether an artistic work is original or not.
Program summary
Program title: Fractal Analysis v01
Catalogue identifier: AEEG_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEG_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 29 690
No. of bytes in distributed program, including test data, etc.: 4 967 319
Distribution format: tar.gz
Programming language: MS Visual Basic 6.0
Computer: PC
Operating system: MS Windows 98 or later
RAM: 30M
Classification: 14
Nature of problem: Estimating the fractal dimension of images.
Solution method: Optimized implementation of the box-counting algorithm. Use of a band-pass filter for separating the real information from "noise". User-friendly graphical interface.
Restrictions: Although various file types can be used, the application was mainly conceived for the 8-bit grayscale Windows bitmap file format.
Running time: In a first approximation, the algorithm is linear.
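A minimal sketch of the basic box-counting estimate such a tool relies on (the published program adds band-pass filtering, color-code analysis, and a GUI, all omitted here): count occupied boxes at several scales and fit log N(s) against log(1/s).

# Hedged sketch of the plain box-counting dimension estimate.
import numpy as np

def box_counting_dimension(binary_image, sizes=(2, 4, 8, 16, 32)):
    img = np.asarray(binary_image, dtype=bool)
    counts = []
    for s in sizes:
        n = 0
        for i in range(0, img.shape[0], s):
            for j in range(0, img.shape[1], s):
                if img[i:i + s, j:j + s].any():   # box contains structure?
                    n += 1
        counts.append(n)
    # slope of log N versus log(1/s) estimates the fractal dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# a filled square should give a dimension close to 2
test = np.ones((64, 64), dtype=bool)
print(box_counting_dimension(test))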
Three-dimensional modeling of flexible pavements : research implementation plan.
DOT National Transportation Integrated Search
2006-02-14
Many of the asphalt pavement analysis programs are based on linear elastic models. A linear viscoelastic models : would be superior to linear elastic models for analyzing the response of asphalt concrete pavements to loads. There : is a need to devel...
NASA Astrophysics Data System (ADS)
Scherer, Artur; Valiron, Benoît; Mau, Siun-Chuon; Alexander, Scott; van den Berg, Eric; Chapuran, Thomas E.
2017-03-01
We provide a detailed estimate for the logical resource requirements of the quantum linear-system algorithm (Harrow et al. in Phys Rev Lett 103:150502, 2009) including the recently described elaborations and application to computing the electromagnetic scattering cross section of a metallic target (Clader et al. in Phys Rev Lett 110:250504, 2013). Our resource estimates are based on the standard quantum-circuit model of quantum computation; they comprise circuit width (related to parallelism), circuit depth (total number of steps), the number of qubits and ancilla qubits employed, and the overall number of elementary quantum gate operations as well as more specific gate counts for each elementary fault-tolerant gate from the standard set {X, Y, Z, H, S, T, CNOT}. In order to perform these estimates, we used an approach that combines manual analysis with automated estimates generated via the Quipper quantum programming language and compiler. Our estimates pertain to the explicit example problem size N = 332,020,680 beyond which, according to a crude big-O complexity comparison, the quantum linear-system algorithm is expected to run faster than the best known classical linear-system solving algorithm. For this problem size, a desired calculation accuracy ε = 0.01 requires an approximate circuit width of 340 and a circuit depth of order 10^25 if oracle costs are excluded, and a circuit width and circuit depth of order 10^8 and 10^29, respectively, if the resource requirements of oracles are included, indicating that the commonly ignored oracle resources are considerable. In addition to providing detailed logical resource estimates, it is also the purpose of this paper to demonstrate explicitly (using a fine-grained approach rather than relying on coarse big-O asymptotic approximations) how these impressively large numbers arise with an actual circuit implementation of a quantum algorithm. While our estimates may prove to be conservative as more efficient advanced quantum-computation techniques are developed, they nevertheless provide a valid baseline for research targeting a reduction of the algorithmic-level resource requirements, implying that a reduction by many orders of magnitude is necessary for the algorithm to become practical.
Space Flyable Hg(sup +) Frequency Standards
NASA Technical Reports Server (NTRS)
Prestage, John D.; Maleki, Lute
1994-01-01
We discuss a design for a space-based atomic frequency standard (AFS) based on Hg(sup +) ions confined in a linear ion trap. This newly developed AFS should be well suited for space-borne applications because it can supply the ultra-high stability of a H-maser while its total mass is comparable to that of a NAVSTAR/GPS cesium clock, i.e., about 11 kg. This paper will compare the proposed Hg(sup +) AFS to present-day GPS cesium standards to arrive at the 11 kg mass estimate. The proposed space-borne Hg(sup +) standard is based upon the recently developed extended linear ion trap architecture, which has reduced the size of existing trapped Hg(sup +) standards to a physics package comparable in size to a cesium beam tube. The demonstrated frequency stability below 10(sup -15) of existing Hg(sup +) standards should be maintained or even improved upon in this new architecture. This clock would deliver far more frequency stability per kilogram than any current space-qualified standard.
Final Report---Optimization Under Nonconvexity and Uncertainty: Algorithms and Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeff Linderoth
2011-11-06
The goal of this work was to develop new algorithmic techniques for solving large-scale numerical optimization problems, focusing on problem classes that have proven to be among the most challenging for practitioners: those involving uncertainty and those involving nonconvexity. This research advanced the state of the art in solving mixed integer linear programs containing symmetry, mixed integer nonlinear programs, and stochastic optimization problems. The focus of the work done in the continuation was on Mixed Integer Nonlinear Programs (MINLPs) and Mixed Integer Linear Programs (MILPs), especially those containing a great deal of symmetry.
Solution Methods for Stochastic Dynamic Linear Programs.
1980-12-01
16, No. 11, pp. 652-675, July 1970. [28] Glassey, C.R., "Dynamic linear programs for production scheduling", OR 19, pp. 45-56, 1971. [29] Glassey, C.R... Huang, C.C., I. Vertinsky, W.T. Ziemba, "Sharp bounds on the value of perfect information", OR 25, pp. 128-139, 1977. [37] Kall, P., "Computational... 1971. [70] Ziemba, W.T., "Computational algorithms for convex stochastic programs with simple recourse", OR 8, pp. 414-431, 1970.
NASA Technical Reports Server (NTRS)
Cooke, C. H.
1975-01-01
STICAP (Stiff Circuit Analysis Program) is a FORTRAN 4 computer program written for the CDC-6400-6600 computer series and SCOPE 3.0 operating system. It provides the circuit analyst a tool for automatically computing the transient responses and frequency responses of large linear time invariant networks, both stiff and nonstiff (algorithms and numerical integration techniques are described). The circuit description and user's program input language is engineer-oriented, making simple the task of using the program. Engineering theories underlying STICAP are examined. A user's manual is included which explains user interaction with the program and gives results of typical circuit design applications. Also, the program structure from a systems programmer's viewpoint is depicted and flow charts and other software documentation are given.
Liu, Yan; Cai, Wensheng; Shao, Xueguang
2016-12-05
Calibration transfer is essential for practical applications of near infrared (NIR) spectroscopy because the spectra may be measured on different instruments and the differences between the instruments must be corrected. For most calibration transfer methods, standard samples are necessary to construct the transfer model using the spectra of the samples measured on two instruments, named the master and slave instrument, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. This fact makes the coefficients of the linear models constructed from the spectra measured on different instruments similar in profile. Therefore, by using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with only a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not necessary in the method, it may be more useful in practical applications. Copyright © 2016 Elsevier B.V. All rights reserved.
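A hedged sketch of the general idea, not the authors' exact constrained optimization: keep the slave coefficients close in profile to the master coefficients while fitting a few slave-instrument spectra, written here as ridge-style penalized least squares with synthetic placeholder data.

# Hedged sketch: transfer master regression coefficients to a slave
# instrument using a handful of slave spectra. b_master, X_slave, y and
# lam are all illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
b_master = np.sin(np.linspace(0, 3, 50))        # master model coefficients
X_slave = rng.normal(size=(10, 50))             # a few slave-instrument spectra
y = X_slave @ b_master + rng.normal(scale=0.01, size=10)

lam = 1.0                                        # strength of the profile constraint
# minimize ||X b - y||^2 + lam * ||b - b_master||^2  (closed-form solution)
A = X_slave.T @ X_slave + lam * np.eye(50)
b_slave = np.linalg.solve(A, X_slave.T @ y + lam * b_master)
print("max coefficient shift:", np.abs(b_slave - b_master).max())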
Application of snapshot imaging spectrometer in environmental detection
NASA Astrophysics Data System (ADS)
Sun, Kai; Qin, Xiaolei; Zhang, Yu; Wang, Jinqiang
2017-10-01
This study aimed at the application of a snapshot imaging spectrometer in environmental detection. Simulated sewage and dyeing wastewater were prepared and the optimal experimental conditions were determined. A white LED array was used as the detection light source and the image of the sample was collected by the imaging spectrometer developed in the laboratory to obtain the spectral information of the sample in the range of 400-800 nm. A standard curve between the absorbance and the concentration of the samples was established. The linear range of a single component of Rhodamine B was 1-50 mg/L, the linear correlation coefficient was more than 0.99, the recovery was 93%-113% and the relative standard deviation (RSD) was 7.5%. The linear range of the chemical oxygen demand (COD) standard solution was 50-900 mg/L, the linear correlation coefficient was 0.981, the recovery was 91%-106% and the relative standard deviation (RSD) was 6.7%. This rapid, accurate and precise method for detecting dyes shows excellent promise for on-site and emergency detection in the environment. At the request of the proceedings editor, an updated version of this article was published on 17 October 2017. The original version of this article was replaced due to an accidental inversion of Figure 2 and Figure 3. The Figures have been corrected in the updated and republished version.
ERIC Educational Resources Information Center
Heafner, Tina; McIntyre, Ellen; Spooner, Melba
2014-01-01
Responding to the challenge of more rigorous and outcome-oriented program evaluation criteria of the Council for the Accreditation of Educator Preparation (CAEP), authors take a critical look at the intersection of two standards: Clinical Partnerships and Practice (Standard 2) and Program Impact (Standard 4). Illustrating one aspect of a secondary…
41 CFR 102-194.5 - What is the Standard and Optional Forms Management Program?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false What is the Standard and Optional Forms Management Program? 102-194.5 Section 102-194.5 Public Contracts and Property Management... PROGRAMS 194-STANDARD AND OPTIONAL FORMS MANAGEMENT PROGRAM § 102-194.5 What is the Standard and Optional...
An implicit boundary integral method for computing electric potential of macromolecules in solvent
NASA Astrophysics Data System (ADS)
Zhong, Yimin; Ren, Kui; Tsai, Richard
2018-04-01
A numerical method using implicit surface representations is proposed to solve the linearized Poisson-Boltzmann equation that arises in mathematical models for the electrostatics of molecules in solvent. The proposed method uses an implicit boundary integral formulation to derive a linear system defined on Cartesian nodes in a narrowband surrounding the closed surface that separates the molecule and the solvent. The needed implicit surface is constructed from the given atomic description of the molecules, by a sequence of standard level set algorithms. A fast multipole method is applied to accelerate the solution of the linear system. A few numerical studies involving some standard test cases are presented and compared to other existing results.
Accelerating scientific computations with mixed precision algorithms
NASA Astrophysics Data System (ADS)
Baboulin, Marc; Buttari, Alfredo; Dongarra, Jack; Kurzak, Jakub; Langou, Julie; Langou, Julien; Luszczek, Piotr; Tomov, Stanimire
2009-12-01
On modern architectures, the performance of 32-bit operations is often at least twice as fast as the performance of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. The approach presented here can apply not only to conventional processors but also to other technologies such as Field Programmable Gate Arrays (FPGA), Graphical Processing Units (GPU), and the STI Cell BE processor. Results on modern processor architectures and the STI Cell BE are presented.
Program summary
Program title: ITER-REF
Catalogue identifier: AECO_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECO_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 7211
No. of bytes in distributed program, including test data, etc.: 41 862
Distribution format: tar.gz
Programming language: FORTRAN 77
Computer: desktop, server
Operating system: Unix/Linux
RAM: 512 Mbytes
Classification: 4.8
External routines: BLAS (optional)
Nature of problem: On modern architectures, the performance of 32-bit operations is often at least twice as fast as that of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution.
Solution method: Mixed precision algorithms stem from the observation that, in many cases, a single precision solution of a problem can be refined to the point where double precision accuracy is achieved. A common approach to the solution of linear systems, either dense or sparse, is to perform the LU factorization of the coefficient matrix using Gaussian elimination. First, the coefficient matrix A is factored into the product of a lower triangular matrix L and an upper triangular matrix U. Partial row pivoting is generally used to improve numerical stability, resulting in a factorization PA = LU, where P is a permutation matrix. The solution for the system is achieved by first solving Ly = Pb (forward substitution) and then solving Ux = y (backward substitution). Due to round-off errors, the computed solution x carries a numerical error magnified by the condition number of the coefficient matrix A. In order to improve the computed solution, an iterative process can be applied which produces a correction to the computed solution at each iteration, yielding the method commonly known as the iterative refinement algorithm. Provided that the system is not too ill-conditioned, the algorithm produces a solution correct to the working precision.
Running time: seconds/minutes
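A minimal NumPy/SciPy sketch of the mixed-precision iterative refinement scheme described in the solution method (ITER-REF itself is Fortran 77): factor the matrix once in 32-bit arithmetic, then refine the solution with 64-bit residuals until double-precision accuracy is reached.

# Hedged sketch of mixed-precision iterative refinement; the test matrix
# is an illustrative well-conditioned example.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def mixed_precision_solve(A, b, tol=1e-12, max_iter=30):
    lu, piv = lu_factor(A.astype(np.float32))        # cheap 32-bit factorization
    x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
    for _ in range(max_iter):
        r = b - A @ x                                # residual in 64-bit
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        d = lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)
        x += d                                       # refinement step
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 200)) + 200 * np.eye(200)  # diagonally dominant test
b = rng.normal(size=200)
x = mixed_precision_solve(A, b)
print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))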
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirchhoff, William H.
2012-09-15
The extended logistic function provides a physically reasonable description of interfaces such as depth profiles or line scans of surface topological or compositional features. It describes these interfaces with the minimum number of parameters, namely, position, width, and asymmetry. Logistic Function Profile Fit (LFPF) is a robust, least-squares fitting program in which the nonlinear extended logistic function is linearized by a Taylor series expansion (equivalent to a Newton-Raphson approach) with no apparent introduction of bias in the analysis. The program provides reliable confidence limits for the parameters when systematic errors are minimal and provides a display of the residuals from the fit for the detection of systematic errors. The program will aid researchers in applying ASTM E1636-10, 'Standard practice for analytically describing sputter-depth-profile and linescan-profile data by an extended logistic function,' and may also prove useful in applying ISO 18516:2006, 'Surface chemical analysis-Auger electron spectroscopy and x-ray photoelectron spectroscopy-determination of lateral resolution.' Examples are given of LFPF fits to a secondary ion mass spectrometry depth profile, an Auger surface line scan, and synthetic data generated to exhibit known systematic errors for examining the significance of such errors to the extrapolation of partial profiles.
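As a hedged illustration of fitting a logistic interface profile with position, width, and asymmetry by least squares (the exact extended-logistic parameterization of ASTM E1636 may differ from the simple asymmetry term assumed here):

# Hedged sketch: logistic profile fit; the asymmetry term (width varying
# across the interface) is an illustrative choice, not the ASTM form.
import numpy as np
from scipy.optimize import curve_fit

def logistic_profile(x, lo, hi, x0, w, asym):
    width = w * np.exp(asym * (x - x0))       # simple asymmetry assumption
    return lo + (hi - lo) / (1.0 + np.exp(-(x - x0) / width))

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 200)                   # e.g. sputter depth
truth = logistic_profile(x, 0.1, 1.0, 5.0, 0.6, 0.05)
y = truth + rng.normal(scale=0.01, size=x.size)

p0 = [0.0, 1.0, 5.0, 1.0, 0.0]                # rough starting guesses
popt, pcov = curve_fit(logistic_profile, x, y, p0=p0)
perr = np.sqrt(np.diag(pcov))                 # asymptotic standard errors
print("position = %.3f +/- %.3f" % (popt[2], perr[2]))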
Low-Loss Materials for Josephson Qubits
2014-10-09
quantum circuit. It also intuitively explains how, for a linear circuit, the standard results for electrical circuits are obtained, justifying the use of... linear concepts for a weakly nonlinear device such as the transmon. It has also become common to use a double-sided noise spectrum to represent... loss tangent of a large-area pad junction. (c) Effective linearized circuit for the double junction, which makes up the admittance Y. L_j is the
Composite Linear Models | Division of Cancer Prevention
By Stuart G. Baker The composite linear models software is a matrix approach to compute maximum likelihood estimates and asymptotic standard errors for models for incomplete multinomial data. It implements the method described in Baker SG. Composite linear models for incomplete multinomial data. Statistics in Medicine 1994;13:609-622. The software includes a library of thirty
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-05
... Conservation Program: Treatment of ``Smart'' Appliances in Energy Conservation Standards and Test Procedures... well as in test procedures used to demonstrate compliance with DOE's standards and qualification as an... development of energy conservation standards and test procedures for DOE's Appliance Standards Program and the...
Impact of community tracer teams on treatment outcomes among tuberculosis patients in South Africa.
Bronner, Liza E; Podewils, Laura J; Peters, Annatjie; Somnath, Pushpakanthi; Nshuti, Lorna; van der Walt, Martie; Mametja, Lerole David
2012-08-07
Tuberculosis (TB) indicators in South Africa currently remain well below global targets. In 2008, the National Tuberculosis Program (NTP) implemented a community mobilization program in all nine provinces to trace TB patients that had missed a treatment or clinic visit. Implementation sites were selected by TB program managers and teams liaised with health facilities to identify patients for tracing activities. The objective of this analysis was to assess the impact of the TB Tracer Project on treatment outcomes among TB patients. The study population included all smear positive TB patients registered in the Electronic TB Registry from Quarter 1 2007-Quarter 1 2009 in South Africa. Subdistricts were used as the unit of analysis, with each designated as either tracer (standard TB program plus tracer project) or non-tracer (standard TB program only). Mixed linear regression models were utilized to calculate the percent quarterly change in treatment outcomes and to compare changes in treatment outcomes from Quarter 1 2007 to Quarter 1 2009 between tracer and non-tracer subdistricts. For all provinces combined, the percent quarterly change decreased significantly for default treatment outcomes among tracer subdistricts (-0.031%; p < 0.001) and increased significantly for successful treatment outcomes among tracer subdistricts (0.003%; p = 0.03). A significant decrease in the proportion of patient default was observed for all provinces combined over the time period comparing tracer and non-tracer subdistricts (p = 0.02). Examination in stratified models revealed the results were not consistent across all provinces; significant differences were observed between tracer and non-tracer subdistricts over time in five of nine provinces for treatment default. Community mobilization of teams to trace TB patients that missed a clinic appointment or treatment dose may be an effective strategy to mitigate default rates and improve treatment outcomes. Additional information is necessary to identify best practices and elucidate discrepancies across provinces; these findings will help guide the NTP in optimizing the adoption of tracing activities for TB control.
PROTECT YOUR HEART: A CULTURE-SPECIFIC, MULTIMEDIA CARDIOVASCULAR HEALTH EDUCATION PROGRAM
Shah, Amy; Clayman, Marla L.; Lauderdale, Diane S.; Khurana, Neerja; Glass, Sara; Kandula, Namratha R.
2016-01-01
Objectives: South Asians (SAs), the second fastest growing racial/ethnic minority in the United States, have high rates of coronary heart disease (CHD). Few CHD prevention efforts target this population. We developed and tested a culture-specific, multimedia CHD prevention education program in English and Hindi for SAs. Methods: Participants were recruited from community organizations in Chicago, IL between June and October 2011. Bilingual interviewers used questionnaires to assess participants' knowledge and perceptions before and after the patient education program. Change from pre- to post-test score was calculated using a paired t-test. Linear regression was used to determine the association between post-test scores and education and language. Results: Participants' (n=112) average age was 41 years, 67% had more than a high school education, and 50% spoke Hindi. Participants' mean pre-test score was 15 (standard deviation = 4). After the patient education program, post-test scores increased significantly among all participants (post-test score = 24, SD = 4), including those with limited English proficiency. Lower education was associated with a lower post-test score (beta coefficient = −2.2; 95% CI −3.8 to −0.68) in adjusted regression. Conclusions: A culture-specific, multimedia patient education program significantly improved knowledge and perceptions about CHD prevention among SA immigrants. Culturally salient, multimedia education may be an effective and engaging way to deliver health information to diverse patient populations. PMID:25647363
User document for computer programs for ring-stiffened shells of revolution
NASA Technical Reports Server (NTRS)
Cohen, G. A.
1973-01-01
A user manual and related program documentation are presented for six compatible computer programs for structural analysis of axisymmetric shell structures. The programs apply to a common structural model but analyze different modes of structural response. In particular, they are: (1) Linear static response under asymmetric loads; (2) Buckling of linear states under asymmetric loads; (3) Nonlinear static response under axisymmetric loads; (4) Buckling of nonlinear states under axisymmetric loads; (5) Imperfection sensitivity of buckling modes under axisymmetric loads; and (6) Vibrations about nonlinear states under axisymmetric loads. These programs treat branched shells of revolution with an arbitrary arrangement of a large number of open branches but with at most one closed branch.
Alternative mathematical programming formulations for FSS synthesis
NASA Technical Reports Server (NTRS)
Reilly, C. H.; Mount-Campbell, C. A.; Gonsalvez, D. J. A.; Levis, C. A.
1986-01-01
A variety of mathematical programming models and two solution strategies are suggested for the problem of allocating orbital positions to (synthesizing) satellites in the Fixed Satellite Service. Mixed integer programming and almost linear programming formulations are presented in detail for each of two objectives: (1) positioning satellites as closely as possible to specified desired locations, and (2) minimizing the total length of the geostationary arc allocated to the satellites whose positions are to be determined. Computational results for mixed integer and almost linear programming models, with the objective of positioning satellites as closely as possible to their desired locations, are reported for three six-administration test problems and a thirteen-administration test problem.
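A minimal sketch of the first objective for a fixed satellite ordering, which keeps the model purely linear (the mixed integer formulations in the paper also choose the ordering); the desired longitudes and separation requirement below are illustrative:

# Hedged sketch: place satellites as close as possible to desired orbital
# longitudes while keeping a minimum angular separation, as an LP.
import numpy as np
from scipy.optimize import linprog

desired = np.array([10.0, 12.0, 13.0, 20.0])   # desired longitudes, deg
s = 3.0                                         # required separation, deg
n = len(desired)

# variables: [x_1..x_n, u_1..u_n]; minimize sum(u_i) with u_i >= |x_i - d_i|
c = np.concatenate([np.zeros(n), np.ones(n)])
A, b = [], []
for i in range(n):
    row = np.zeros(2 * n); row[i] = 1.0; row[n + i] = -1.0
    A.append(row); b.append(desired[i])         #  x_i - u_i <= d_i
    row = np.zeros(2 * n); row[i] = -1.0; row[n + i] = -1.0
    A.append(row); b.append(-desired[i])        # -x_i - u_i <= -d_i
for i in range(n - 1):
    row = np.zeros(2 * n); row[i] = 1.0; row[i + 1] = -1.0
    A.append(row); b.append(-s)                 # x_{i+1} - x_i >= s
res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
              bounds=[(None, None)] * n + [(0, None)] * n)
print("allocated positions:", np.round(res.x[:n], 2))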
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-31
...We are giving notice of changes to the Program Standards for the chronic wasting disease (CWD) herd certification program. The CWD herd certification program is a voluntary, cooperative program that establishes minimum requirements for the interstate movement of farmed or captive cervids, provisions for participating States to administer Approved State CWD Herd Certification Programs, and provisions for participating herds to become certified as having a low risk of being infected with CWD. The Program Standards provide optional guidance, explanation, and clarification on how to meet the requirements for interstate movement and for the Herd Certification Programs. Recently, we convened a group of State, laboratory, and industry representatives to discuss possible changes to the current Program Standards. The revised Program Standards reflect these discussions, and we believe the revised version will improve understanding of the program among State and industry cooperators. We are making the revised version of the Program Standards available for review and comment.
ERIC Educational Resources Information Center
Findorff, Irene K.
This document summarizes the results of a project at Tulane University that was designed to adapt, test, and evaluate a computerized information and menu planning system utilizing linear programming techniques for use in school lunch food service operations. The objectives of the menu planning were to formulate menu items into a palatable,…
Spline smoothing of histograms by linear programming
NASA Technical Reports Server (NTRS)
Bennett, J. O.
1972-01-01
An algorithm is presented for obtaining an approximating function to the frequency distribution from a sample of size n. To obtain the approximating function, a histogram is made from the data. Next, Euclidean space approximations to the graph of the histogram using central B-splines as basis elements are obtained by linear programming. The approximating function has area one and is nonnegative.
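A hedged sketch of the scheme with a modern LP solver: fit nonnegative B-spline coefficients with unit area by linear programming, here minimizing the maximum deviation from the histogram (knots and data are illustrative; BSpline.design_matrix needs SciPy 1.8+):

# Hedged sketch: nonnegative, area-one B-spline fit to a histogram via LP.
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import linprog

rng = np.random.default_rng(3)
sample = rng.normal(size=2000)
hist, edges = np.histogram(sample, bins=30, range=(-4, 4), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])            # bin centers

k = 3                                            # cubic B-splines
t = np.concatenate([[-4.0] * k, np.linspace(-4, 4, 9), [4.0] * k])  # knots
m = len(t) - k - 1                               # number of basis functions
B = BSpline.design_matrix(mid, t, k).toarray()   # basis at bin centers
area = (t[k + 1:] - t[:m]) / (k + 1)             # integral of each basis spline

# variables [c_1..c_m, e]; minimize e subject to |B c - hist| <= e, c >= 0,
# and sum(area_i * c_i) = 1 so the fitted density integrates to one.
c_obj = np.concatenate([np.zeros(m), [1.0]])
A_ub = np.vstack([np.hstack([B, -np.ones((len(mid), 1))]),
                  np.hstack([-B, -np.ones((len(mid), 1))])])
b_ub = np.concatenate([hist, -hist])
A_eq = np.array([np.concatenate([area, [0.0]])])
res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (m + 1))
print("max deviation from histogram:", res.x[-1])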
Patricia K. Lebow; Henry Spelter; Peter J. Ince
2003-01-01
This report provides documentation and user information for FPL-PELPS, a personal computer price endogenous linear programming system for economic modeling. Originally developed to model the North American pulp and paper industry, FPL-PELPS follows its predecessors in allowing the modeling of any appropriate sector to predict consumption, production and capacity by...
Use of telemedicine in the remote programming of cochlear implants.
Ramos, Angel; Rodriguez, Carina; Martinez-Beneyto, Paz; Perez, Daniel; Gault, Alexandre; Falcon, Juan Carlos; Boyle, Patrick
2009-05-01
Remote cochlear implant (CI) programming is a viable, safe, user-friendly and cost-effective procedure, equivalent to standard programming in terms of efficacy and users' perception, which can complement the standard procedures. The potential benefits of this technique are outlined. We assessed the technical viability, risks and difficulties of remote CI programming, and evaluated the benefits for the user by comparing standard on-site CI programming with remote CI programming. The Remote Programming System (RPS) basically consists of completing the usual programming protocol in a regular CI centre, assisted by local staff but guided by a remote expert, who programs the CI device using a remote programming station that takes control of the local station through the Internet. A randomized prospective study was designed with the appropriate controls comparing RPS to standard on-site CI programming. Study subjects were implanted adults with a HiRes 90K(R) CI with post-lingual onset of profound deafness and 4-12 weeks of device use. Subjects underwent two daily CI programming sessions, either remote or standard, on 4 programming days separated by 3-month intervals. A total of 12 remote and 12 standard sessions were completed. To compare both CI programming modes we analysed: program parameters, subjects' auditory progress, subjects' perceptions of the CI programming sessions, and technical aspects, risks and difficulties of remote CI programming. Control of the local station from the remote station was carried out successfully and remote programming sessions were completed without incident. Remote and standard program parameters were compared and no significant differences were found between the groups. The performance evaluated in subjects who had been using either standard or remote programs for 3 months showed no significant difference. Subjects were satisfied with both the remote and standard sessions. Safety was proven by checking emergency stops in different conditions. A very small delay was noticed that did not affect the ease of the fitting. The oral and video communication between the local and the remote equipment was established without difficulties and was of high quality.
NASA Technical Reports Server (NTRS)
Deng, Xiaomin; Newman, James C., Jr.
1997-01-01
ZIP2DL is a two-dimensional, elastic-plastic finite element program for stress analysis and crack growth simulations, developed for the NASA Langley Research Center. It has many of the salient features of the ZIP2D program. For example, ZIP2DL contains five material models (linearly elastic, elastic-perfectly plastic, power-law hardening, linear hardening, and multi-linear hardening models), and it can simulate mixed-mode crack growth for prescribed crack growth paths under plane stress, plane strain and mixed state of stress conditions. Further, as an extension of ZIP2D, it also includes a number of new capabilities. The large-deformation kinematics in ZIP2DL allow it to handle elastic problems with large strains and large rotations, and elastic-plastic problems with small strains and large rotations. Loading conditions in terms of surface traction, concentrated load, and nodal displacement can be applied with a default linear time dependence or they can be programmed according to a user-defined time dependence through a user subroutine. The restart capability of ZIP2DL makes it possible to stop the execution of the program at any time, analyze the results and/or modify execution options, and resume and continue the execution of the program. This report includes three sections: a theoretical manual section, a user manual section, and an example manual section. In the theoretical section, the mathematics behind the various aspects of the program are concisely outlined. In the user manual section, a line-by-line explanation of the input data is given. In the example manual section, three types of examples are presented to demonstrate the accuracy and illustrate the use of this program.
Al-Ekrish, Asma'a A; Al-Shawaf, Reema; Schullian, Peter; Al-Sadhan, Ra'ed; Hörmann, Romed; Widmann, Gerlig
2016-10-01
To assess the comparability of linear measurements of dental implant sites recorded from multidetector computed tomography (MDCT) images obtained using standard-dose filtered backprojection (FBP) technique with those from various ultralow doses combined with FBP, adaptive statistical iterative reconstruction (ASIR), and model-based iterative reconstruction (MBIR) techniques. The results of the study may contribute to MDCT dose optimization for dental implant site imaging. MDCT scans of two cadavers were acquired using a standard reference protocol and four ultralow-dose test protocols (TP). The volume CT dose index of the different dose protocols ranged from a maximum of 30.48-36.71 mGy to a minimum of 0.44-0.53 mGy. All scans were reconstructed using FBP, ASIR-50, ASIR-100, and MBIR, and either a bone or standard reconstruction kernel. Linear measurements were recorded from standardized images of the jaws by two examiners. Intra- and inter-examiner reliability of the measurements were analyzed using Cronbach's alpha and inter-item correlation. Agreement between the measurements obtained with the reference-dose/FBP protocol and each of the test protocols was determined with Bland-Altman plots and linear regression. Statistical significance was set at a P-value of 0.05. No systematic variation was found between the linear measurements obtained with the reference protocol and the other imaging protocols. The only exceptions were TP3/ASIR-50 (bone kernel) and TP4/ASIR-100 (bone and standard kernels). The mean measurement differences between these three protocols and the reference protocol were within ±0.1 mm, with the 95 % confidence interval limits being within the range of ±1.15 mm. A nearly 97.5 % reduction in dose did not significantly affect the height and width measurements of edentulous jaws regardless of the reconstruction algorithm used.
Sun, Yan; Lang, Maoxiang; Wang, Danzhu
2016-01-01
The transportation of hazardous materials is always accompanied by considerable risk that impacts public and environmental security. As an efficient and reliable transportation organization, a multimodal service should participate in the transportation of hazardous materials. In this study, we focus on transporting hazardous materials through the multimodal service network and explore the hazardous materials multimodal routing problem from the operational level of network planning. To formulate this problem more practically, minimizing the total generalized costs of transporting the hazardous materials and minimizing the social risk along the planned routes are set as the two optimization objectives. Meanwhile, the following formulation characteristics are comprehensively modelled: (1) specific customer demands; (2) multiple hazardous material flows; (3) capacitated schedule-based rail service and uncapacitated time-flexible road service; and (4) environmental risk constraint. A bi-objective mixed integer nonlinear programming model is first built to formulate the routing problem that combines the formulation characteristics above. Then linear reformulations are developed to linearize and improve the initial model so that it can be effectively solved by exact solution algorithms in standard mathematical programming software. By utilizing the normalized weighted sum method, we can generate the Pareto solutions to the bi-objective optimization problem for a specific case. Finally, a large-scale empirical case study from the Beijing–Tianjin–Hebei Region in China is presented to demonstrate the feasibility of the proposed methods in dealing with the practical problem. Various scenarios are also discussed in the case study. PMID:27483294
Developments on the Toroid Ion Trap Analyzer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lammert, S.A.; Thompson, C.V.; Wise, M.B.
1999-06-13
Investigations into several areas of research have been undertaken to address the performance limitations of the toroid analyzer. The Simion 3D6 (2) ion optics simulation program was used to determine whether the potential well minimum of the toroid trapping field is in the physical center of the trap electrode structure. The results (Figure 1) indicate that the minimum of the potential well is shifted towards the inner ring electrode by an amount approximately equal to 10% of the r0 dimension. A simulation of the standard 3D ion trap under similar conditions was performed as a control. In this case, the ions settle to the minimum of the potential well at a point that is coincident with the physical center (both radial and axial) of the trapping electrodes. It is proposed that by using simulation programs, a set of new analyzer electrodes can be fashioned that will correct for the non-linear fields introduced by curving the substantially quadrupolar field about the toroid axis, in order to provide a trapping field similar to the 3D ion trap cross-section. A new toroid electrode geometry has been devised to allow the use of channeltron-style detectors in place of the more expensive multichannel plate detector. Two different versions have been designed and constructed - one using the current ion trap cross-section (Figure 2) and another using the linear quadrupole cross-section design first reported by Bier and Syka (3).
NASA Astrophysics Data System (ADS)
Salazar, William
2003-01-01
The Standard Advanced Dewar Assembly (SADA) is the critical module in the Department of Defense (DoD) standardization effort of scanning second-generation thermal imaging systems. DoD has established a family of SADA's to address requirements for high performance (SADA I), mid-to-high performance (SADA II), and compact class (SADA III) systems. SADA's consist of the Infrared Focal Plane Array (IRFPA), Dewar, Command and Control Electronics (C&CE), and the cryogenic cooler. SADA's are used in weapons systems such as Comanche and Apache helicopters, the M1 Abrams Tank, the M2 Bradley Fighting Vehicle, the Line of Sight Antitank (LOSAT) system, the Improved Target Acquisition System (ITAS), and Javelin's Command Launch Unit (CLU). DOD has defined a family of tactical linear drive coolers in support of the family of SADA's. The Stirling linear drive cryo-coolers are utilized to cool the SADA's Infrared Focal Plane Arrays (IRFPAs) to their operating cryogenic temperatures. These linear drive coolers are required to meet strict cool-down time requirements along with lower vibration output, lower audible noise, and higher reliability than currently fielded rotary coolers. This paper will (1) outline the characteristics of each cooler, (2) present the status and results of qualification tests, and (3) present the status and test results of efforts to increase linear drive cooler reliability.
Standards for Adult Education ESL Programs
ERIC Educational Resources Information Center
TESOL Press, 2013
2013-01-01
What are the components of a quality education ESL program? TESOL's "Standards for Adult Education ESL Programs" answers this question by defining quality components from a national perspective. Using program indicators in eight distinct areas, the standards can be used to review an existing program or as a guide in setting up a new…
NASA Technical Reports Server (NTRS)
Muravyov, Alexander A.; Turner, Travis L.; Robinson, Jay H.; Rizzi, Stephen A.
1999-01-01
In this paper, the problem of random vibration of geometrically nonlinear MDOF structures is considered. The solutions obtained by application of two different versions of a stochastic linearization method are compared with exact (F-P-K) solutions. The formulation of a relatively new version of the stochastic linearization method (energy-based version) is generalized to the MDOF system case. Also, a new method for determination of nonlinear stiffness coefficients for MDOF structures is demonstrated. This method, in combination with the equivalent linearization technique, is implemented in a new computer program. Results in terms of root-mean-square (RMS) displacements obtained by using the new program and an existing in-house code are compared for two examples of beam-like structures.
Bulzacchelli, Maria T; Vernick, Jon S; Webster, Daniel W; Lees, Peter S J
2007-10-01
To evaluate the impact of the United States' federal Occupational Safety and Health Administration's control of hazardous energy (lockout/tagout) standard on rates of machinery-related fatal occupational injury. The standard, which took effect in 1990, requires employers in certain industries to establish an energy control program and sets minimum criteria for energy control procedures, training, inspections, and hardware. An interrupted time-series design was used to determine the standard's effect on fatality rates. Machinery-related fatalities, obtained from the National Traumatic Occupational Fatalities surveillance system for 1980 through 2001, were used as a proxy for lockout/tagout-related fatalities. Linear regression was used to control for changes in demographic and economic factors. The average annual crude rate of machinery-related fatalities in manufacturing changed little from 1980 to 1989, but declined by 4.59% per year from 1990 to 2001. However, when controlling for demographic and economic factors, the regression model estimate of the standard's effect is a small, non-significant increase of 0.05 deaths per 100 000 production worker full-time equivalents (95% CI -0.14 to 0.25). When fatality rates in comparison groups that should not have been affected by the standard are incorporated into the analysis, there is still no significant change in the rate of machinery-related fatalities in manufacturing. There is no evidence that the lockout/tagout standard decreased fatality rates relative to other trends in occupational safety over the study period. A possible explanation is voluntary use of lockout/tagout by some employers before introduction of the standard and low compliance by other employers after.
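A minimal sketch of an interrupted time-series regression of this general kind (covariate adjustment omitted; the rates below are synthetic): level and slope are allowed to change at the 1990 intervention, and the coefficients of interest measure the change attributable to the standard.

# Hedged sketch: segmented regression for an interrupted time series.
import numpy as np

years = np.arange(1980, 2002)
rate = np.where(years < 1990, 1.20, 1.20 * 0.9541 ** (years - 1989))  # ~-4.59%/yr
rate = rate + np.random.default_rng(4).normal(scale=0.02, size=years.size)

t = years - 1980
post = (years >= 1990).astype(float)            # step change in level
t_post = np.clip(years - 1990, 0, None)         # change in slope after 1990
X = np.column_stack([np.ones_like(t), t, post, t_post])
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)
print("pre-slope %.4f, level shift %.4f, slope change %.4f"
      % (beta[1], beta[2], beta[3]))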
Iterative algorithms for a non-linear inverse problem in atmospheric lidar
NASA Astrophysics Data System (ADS)
Denevi, Giulia; Garbarino, Sara; Sorrentino, Alberto
2017-08-01
We consider the inverse problem of retrieving aerosol extinction coefficients from Raman lidar measurements. In this problem the unknown and the data are related through the exponential of a linear operator, the unknown is non-negative and the data follow the Poisson distribution. Standard methods work on the log-transformed data and solve the resulting linear inverse problem, but neglect to take into account the noise statistics. In this study we show that proper modelling of the noise distribution can improve substantially the quality of the reconstructed extinction profiles. To achieve this goal, we consider the non-linear inverse problem with non-negativity constraint, and propose two iterative algorithms derived using the Karush-Kuhn-Tucker conditions. We validate the algorithms with synthetic and experimental data. As expected, the proposed algorithms outperform standard methods in terms of sensitivity to noise and reliability of the estimated profile.
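A hedged sketch of the constrained-iteration idea for a toy version of this problem; the paper derives its algorithms from the KKT conditions, whereas the sketch below simply takes projected gradient steps on the Poisson negative log-likelihood under an assumed forward model y ~ Poisson(I0·exp(−Ax)):

# Hedged sketch: Poisson maximum likelihood with non-negativity, toy data.
import numpy as np

rng = np.random.default_rng(5)
n = 60
A = np.tril(np.ones((n, n))) * 0.1              # cumulative-extinction operator
x_true = 0.5 * np.exp(-np.linspace(0, 3, n))    # extinction profile
I0 = 5000.0
y = rng.poisson(I0 * np.exp(-A @ x_true))       # simulated photon counts

x = np.full(n, 0.1)
eta = 1e-5                                       # step size (illustrative)
for _ in range(5000):
    lam = I0 * np.exp(-A @ x)                    # expected counts
    grad = A.T @ (y - lam)                       # gradient of the Poisson
    x = np.maximum(x - eta * grad, 0.0)          #   neg-log-lik; project x >= 0
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))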
Pease, J M; Morselli, M F
1987-01-01
This paper deals with a computer program adapted to a statistical method for analyzing an unlimited quantity of binary-recorded data of an independent circular variable (e.g. wind direction) and a linear variable (e.g. maple sap flow volume). Circular variables cannot be statistically analyzed with linear methods unless they have been transformed. The program calculates a critical quantity, the acrophase angle (PHI, φ0). The technique is adapted from original mathematics [1] and is written in Fortran 77 for easier conversion between computer networks. Correlation analysis can be performed following the program, or regression analysis which, because of the circular nature of the independent variable, becomes periodic regression. The technique was tested on a file of approximately 4050 data pairs.
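A minimal sketch of the core computation in modern terms (an illustration, not the Fortran 77 program): the circular predictor enters the regression through sine and cosine terms, and the acrophase angle is recovered from the fitted coefficients; the data below are synthetic.

# Hedged sketch: periodic regression on a circular predictor.
import numpy as np

rng = np.random.default_rng(6)
theta = rng.uniform(0, 2 * np.pi, 300)           # wind direction, radians
flow = 2.0 + 1.5 * np.cos(theta - 0.8) + rng.normal(scale=0.2, size=300)

X = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
b, *_ = np.linalg.lstsq(X, flow, rcond=None)
amplitude = np.hypot(b[1], b[2])
acrophase = np.arctan2(b[2], b[1])               # angle of peak response
print("amplitude %.2f, acrophase %.2f rad (true 0.80)" % (amplitude, acrophase))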
Marketing Education. Vocational Education Program Courses Standards.
ERIC Educational Resources Information Center
Florida State Dept. of Education, Tallahassee. Div. of Vocational, Adult, and Community Education.
This document contains vocational education program courses standards (curriculum frameworks and student performance standards) for exploratory courses, practical arts courses, and job preparatory programs in marketing offered at the secondary or postsecondary level as a part of Florida's comprehensive vocational education program. Each standard…
41 CFR 101-26.501-2 - Standardized buying programs.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Management Regulations System FEDERAL PROPERTY MANAGEMENT REGULATIONS SUPPLY AND PROCUREMENT 26-PROCUREMENT SOURCES AND PROGRAM 26.5-GSA Procurement Programs § 101-26.501-2 Standardized buying programs. Wherever... school age passenger. (4) Sedans and station wagons (based on standardized, consolidated requirements...
14 CFR 91.1017 - Amending program manager's management specifications.
Code of Federal Regulations, 2014 CFR
2014-01-01
... proposed amendment. (2) The Flight Standards District Office that issued the program manager's management... presented, the Flight Standards District Office that issued the program manager's management specifications... Standards District Office that issued the program manager's management specifications issues an amendment of...
14 CFR 91.1017 - Amending program manager's management specifications.
Code of Federal Regulations, 2012 CFR
2012-01-01
... proposed amendment. (2) The Flight Standards District Office that issued the program manager's management... presented, the Flight Standards District Office that issued the program manager's management specifications... Standards District Office that issued the program manager's management specifications issues an amendment of...
Technology Education. Vocational Education Program Courses Standards.
ERIC Educational Resources Information Center
Florida State Dept. of Education, Tallahassee. Div. of Vocational, Adult, and Community Education.
This document contains vocational education program courses standards (curriculum frameworks and student performance standards) for exploratory courses, practical arts courses, and job preparatory programs in technology education offered at the secondary or postsecondary level as a part of Florida's comprehensive vocational education program.…
Application of Sequential Quadratic Programming to Minimize Smart Active Flap Rotor Hub Loads
NASA Technical Reports Server (NTRS)
Kottapalli, Sesi; Leyland, Jane
2014-01-01
In an analytical study, SMART active flap rotor hub loads have been minimized using nonlinear programming constrained optimization methodology. The recently developed NLPQLP system (Schittkowski, 2010) that employs Sequential Quadratic Programming (SQP) as its core algorithm was embedded into a driver code (NLP10x10) specifically designed to minimize active flap rotor hub loads (Leyland, 2014). Three types of practical constraints on the flap deflections have been considered. To validate the current application, two other optimization methods have been used: i) the standard, linear unconstrained method, and ii) the nonlinear Generalized Reduced Gradient (GRG) method with constraints. The new software code NLP10x10 has been systematically checked out. It has been verified that NLP10x10 is functioning as desired. The following are briefly covered in this paper: relevant optimization theory; implementation of the capability of minimizing a metric of all, or a subset, of the hub loads as well as the capability of using all, or a subset, of the flap harmonics; and finally, solutions for the SMART rotor. The eventual goal is to implement NLP10x10 in a real-time wind tunnel environment.
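A hedged sketch of this optimization setup using SciPy's SQP implementation (SLSQP) in place of NLPQLP; the sensitivity matrix, baseline loads, and deflection limits below are illustrative placeholders, not SMART rotor data:

# Hedged sketch: minimize a quadratic hub-load metric over flap harmonic
# amplitudes with bound and norm constraints on the flap deflections.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
T = rng.normal(size=(6, 4))                     # flap harmonics -> hub loads
h0 = rng.normal(size=6)                         # baseline vibratory loads

def hub_load_metric(x):
    return np.sum((h0 + T @ x) ** 2)            # sum-of-squares hub loads

cons = [{'type': 'ineq',
         'fun': lambda x: 6.25 - np.sum(x ** 2)}]   # deflection norm <= 2.5 deg
res = minimize(hub_load_metric, x0=np.zeros(4), method='SLSQP',
               constraints=cons, bounds=[(-2.0, 2.0)] * 4)
print("optimal flap amplitudes:", np.round(res.x, 3))
print("metric reduction: %.1f%%"
      % (100 * (1 - res.fun / hub_load_metric(np.zeros(4)))))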
Yon, Bethany A; Johnson, Rachel K
2014-03-01
The United States Department of Agriculture's (USDA) new nutrition standards for school meals include sweeping changes setting upper limits on calories served and limit milk offerings to low fat or fat-free and, if flavored, only fat-free. Milk processors are lowering the calories in flavored milks. As changes to milk impact school lunch participation and milk consumption, it is important to know the impact of these modifications. Elementary and middle schools from 17 public school districts that changed from standard flavored milk (160-180 kcal/8 oz) to lower calorie flavored milk (140-150 kcal/8 oz) between 2008 and 2009 were enrolled. Milk shipment and National School Lunch Program (NSLP) participation rates were collected for 3 time periods over 12 months (pre-reformulation, at the time of reformulation, and after reformulation). Linear mixed models were used with adjustments for free/reduced meal eligibility. No changes were seen in shipment of flavored milk or all milk, including unflavored. The NSLP participation rates dropped when lower calorie flavored milk was first offered, but recovered over time. While school children appear to accept lower calorie flavored milk, further monitoring is warranted as most of the flavored milks offered were not fat-free as was required by USDA as of fall 2012. © 2014, American School Health Association.
NASA Astrophysics Data System (ADS)
Lee, Hyunho; Jeong, Seonghoon; Jo, Yunhui; Yoon, Myonggeun
2015-07-01
Quality assurance (QA) for medical linear accelerators is indispensable for appropriate cancer treatment. Several international organizations and advanced Western countries have provided QA guidelines for linear accelerators. Currently, QA regulations for linear accelerators in Korean hospitals specify a system in which each hospital stipulates its own hospital-based protocols for QA procedures (HP_QAPs) and conducts QA based on those HP_QAPs, while regulatory authorities verify whether the items in those HP_QAPs have been performed. However, because this regulatory method cannot guarantee uniform treatment quality, and because QA items and tolerance criteria differ among hospitals, the presentation of standardized QA items and tolerance criteria is essential. In this study, QA items in HP_QAPs from various hospitals were compared with those presented by international organizations, such as the International Atomic Energy Agency, the European Union, and the American Association of Physicists in Medicine, and by advanced Western countries, such as the USA, the UK, and Canada. Concordance rates between QA items for linear accelerators presented by these organizations and those currently implemented in Korean hospitals were 50% for daily QA, 22% for weekly QA, 43% for monthly QA, and 65% for annual QA; the overall concordance rate across all QA items was approximately 48%. In the comparison between QA items implemented in Korean hospitals and those implemented in advanced Western countries, concordance rates were 50% for daily QA, 33% for weekly QA, 60% for monthly QA, and 67% for annual QA; the overall concordance rate was approximately 57%. These results indicate that the HP_QAPs currently implemented by Korean hospitals as QA standards for linear accelerators used in radiation therapy do not meet international standards. To solve this problem, nationally standardized QA items and procedures for linear accelerators need to be developed.
Program for the solution of multipoint boundary value problems of quasilinear differential equations
NASA Technical Reports Server (NTRS)
1973-01-01
Linear equations are solved by superposition of solutions of a sequence of initial value problems. For nonlinear equations and/or boundary conditions, the solution is iterative, and each iteration solves a problem like the linear case. A simple Taylor series expansion is used to linearize both the nonlinear equations and the nonlinear boundary conditions. The perturbation method of solution is used in preference to quasilinearization because of programming ease and smaller storage requirements; experiments indicate that the desired convergence properties exist, although no proof of convergence is given.
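The superposition idea in the abstract can be illustrated on a toy linear two-point boundary value problem; the equation, interval, and boundary values below are invented for the example, not taken from the report.

```python
# Superposition sketch for the linear BVP y'' = -y + t on [0, 1],
# y(0) = 0, y(1) = 2: combine one particular and one homogeneous IVP.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, Y):                     # Y = [y, y']; the full (inhomogeneous) ODE
    return [Y[1], -Y[0] + t]

def rhs_hom(t, Y):                 # homogeneous counterpart y'' = -y
    return [Y[1], -Y[0]]

t_eval = np.linspace(0.0, 1.0, 101)
p = solve_ivp(rhs,     (0, 1), [0.0, 0.0], t_eval=t_eval)  # particular IVP
h = solve_ivp(rhs_hom, (0, 1), [0.0, 1.0], t_eval=t_eval)  # homogeneous IVP

c = (2.0 - p.y[0, -1]) / h.y[0, -1]    # weight so y(1) matches the far BC
y = p.y[0] + c * h.y[0]                # superposed solution of the BVP
print(y[0], y[-1])                     # ~0.0 and ~2.0
```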
NASA Technical Reports Server (NTRS)
Dieudonne, J. E.
1978-01-01
A numerical technique was developed which generates linear perturbation models from nonlinear aircraft vehicle simulations. The technique is very general and can be applied to simulations of any system that is described by nonlinear differential equations. The computer program used to generate these models is discussed, with emphasis placed on generation of the Jacobian matrices, calculation of the coefficients needed for solving the perturbation model, and generation of the solution of the linear differential equations. An example application of the technique to a nonlinear model of the NASA terminal configured vehicle is included.
Associations between quality indicators of internal medicine residency training programs
2011-01-01
Background Several residency program characteristics have been suggested as measures of program quality, but associations between these measures are unknown. We set out to determine associations between these potential measures of program quality. Methods Survey of internal medicine residency programs that shared an online ambulatory curriculum on hospital type, faculty size, number of trainees, proportion of international medical graduate (IMG) trainees, Internal Medicine In-Training Examination (IM-ITE) scores, three-year American Board of Internal Medicine Certifying Examination (ABIM-CE) first-try pass rates, Residency Review Committee-Internal Medicine (RRC-IM) certification length, program director clinical duties, and use of pharmaceutical funding to support education. Associations were assessed using chi-square tests, Spearman rank correlation, and univariate and multivariable linear regression. Results Fifty-one of 67 programs responded (response rate 76.1%), including 29 (56.9%) community teaching hospitals and 17 (33.3%) university hospitals, with a mean of 68 trainees and 101 faculty. Forty-four percent of trainees were IMGs. The average post-graduate year (PGY)-2 IM-ITE raw score was 63.1; for PGY-3s it was 66.8. The average 3-year ABIM-CE pass rate was 95.8%, and the average RRC-IM certification length was 4.3 years. ABIM-CE results, IM-ITE results, and length of RRC-IM certification were strongly associated with each other (p < 0.05). PGY-3 IM-ITE scores were higher in programs with more IMGs and in programs that accepted pharmaceutical support (p < 0.05). RRC-IM certification was shorter in programs with higher numbers of IMGs. In multivariable analysis, a higher proportion of IMGs was associated with 1.17 years shorter RRC accreditation. Conclusions Associations between quality indicators are complex, but suggest that the presence of IMGs is associated with better performance on standardized tests but decreased duration of RRC-IM certification. PMID:21651768
Recent Enrollment Trends in American Soil Science Programs
NASA Astrophysics Data System (ADS)
Brevik, Eric C.; Abit, Sergio; Brown, David; Dolliver, Holly; Hopkins, David; Lindbo, David; Manu, Andrew; Mbila, Monday; Parikh, Sanjai J.; Schulze, Darrell; Shaw, Joey; Weil, Ray; Weindorf, David
2015-04-01
Soil science student enrollment was on the decline in the United States from the early 1990s through the early 2000s, even as overall undergraduate enrollment in American colleges and universities rose by about 11% over the same period. This created considerable consternation among the American soil science community. As we head into the International Year of Soils, it seemed a good time to revisit this issue and examine current enrollment trends. Fourteen universities that offer undergraduate and/or graduate programs in soil science were surveyed for their enrollments over the period 2007-2014 (the last seven academic years). The 14 schools represent about 20% of the institutions that offer soil science degrees/programs in the United States. Thirteen institutions submitted undergraduate data and 10 submitted graduate data, which were analyzed by individual institution and in aggregate. Simple linear regression was used to find the slope of best-fit trend lines, as sketched in the example below. For individual institutions, a slope of ≥ 0.5 (on average, the school gained 0.5 students per year or more) was considered growing enrollment, ≤ -0.5 was considered shrinking enrollment, and between -0.5 and 0.5 was considered stable enrollment. For aggregated data, the 0.5 slope standard was multiplied by the number of schools in the aggregated survey to determine whether enrollment was growing, shrinking, or stable. Over the period of the study, six of the 13 schools reporting undergraduate data showed enrollment gains, five showed stable enrollments, one showed declining enrollments, and one discontinued its undergraduate degree program. The linear regression trend line for the undergraduate schools' composite data had a slope of 55.0 students/year (R2 = 0.96), indicating a strong overall trend of undergraduate enrollment growth at these schools. However, the largest school had also seen large growth in enrollment. To ensure that this one institution was not masking an overall declining enrollment trend, the regression was also run with that institution removed. This gave a linear trend line with a slope of 6.6 students/year (R2 = 0.90), indicating more moderate growth but still a trend toward growth in undergraduate enrollment. Four of the 10 graduate programs showed enrollment gains, five showed stable enrollments, and one showed declining enrollments. The linear regression trend line for the composite graduate school data had a slope of 12.0 students/year (R2 = 0.97), indicating an overall trend of enrollment growth at these schools. As a whole, both the undergraduate and graduate programs investigated showed moderate growth trends, which represents a reversal of the enrollment trends reported at the beginning of the 21st century. Challenges in obtaining the data for this study included 1) differences in data collection and archiving by institutions and 2) the fact that only some schools still offer a soil science degree; many offer another degree (e.g., agricultural studies, agronomy, environmental resource science, environmental science, plant and soil science, etc.) with a soils option or emphasis. In the second case it was necessary to identify which students in these other degree programs pursued the soil science option or emphasis.
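The slope-based classification described above is easy to reproduce; the sketch below applies it to a made-up enrollment series using scipy.stats.linregress.

```python
# Slope-of-best-fit trend classification on hypothetical enrollment counts.
from scipy.stats import linregress

years = list(range(2007, 2015))
enrollment = [18, 19, 21, 22, 24, 27, 28, 30]   # invented program data

fit = linregress(years, enrollment)
if fit.slope >= 0.5:
    trend = "growing"
elif fit.slope <= -0.5:
    trend = "shrinking"
else:
    trend = "stable"
print(f"slope={fit.slope:.2f} students/year, R^2={fit.rvalue**2:.2f}, {trend}")
```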
Public Service Education. Vocational Education Program Courses Standards.
ERIC Educational Resources Information Center
Florida State Dept. of Education, Tallahassee. Div. of Vocational, Adult, and Community Education.
This document contains vocational education program courses standards (curriculum frameworks and student performance standards) for exploratory courses, practical arts courses, and job preparatory programs in public service education offered at the secondary or postsecondary level as a part of Florida's comprehensive vocational education program.…
Program Helps Standardize Documentation Of Software
NASA Technical Reports Server (NTRS)
Howe, G.
1994-01-01
Intelligent Documentation Management System, IDMS, is a computer program developed to assist project managers in implementing the information-system documentation standard known as NASA-STD-2100-91, NASA STD, COS-10300, of NASA's Software Management and Assurance Program. The standard consists of data-item descriptions, or templates, each of which governs a particular component of software documentation. IDMS helps the program manager tailor the documentation standard to a project. Written in the C language.
Automating approximate Bayesian computation by local linear regression.
Thornton, Kevin R
2009-07-07
In several biological contexts, parameter inference often relies on computationally intensive techniques. "Approximate Bayesian Computation," or ABC, methods based on summary statistics have become increasingly popular. A particular flavor of ABC uses local linear regression to approximate the posterior distribution of the parameters, conditional on the summary statistics; it is computationally appealing, yet no standalone tool exists to automate the procedure. Here, I describe a program to implement the method. The software package ABCreg implements the local linear-regression approach to ABC. The advantages are: 1. The code is standalone and fully documented. 2. The program automatically processes multiple data sets and creates unique output files for each (which may be processed immediately in R), facilitating the testing of inference procedures on simulated data and the analysis of multiple data sets. 3. The program implements two different transformation methods for the regression step. 4. Analysis options are controlled on the command line by the user, and the program is designed to output warnings for cases where the regression fails. 5. The program does not depend on any particular simulation machinery (coalescent, forward-time, etc.), and therefore is a general tool for processing the results of any simulation. 6. The code is open-source and modular. Examples of applying the software to empirical data from Drosophila melanogaster, and of testing the procedure on simulated data, are shown. In practice, ABCreg simplifies implementing ABC based on local linear regression.
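For readers unfamiliar with the method, the following is a compact sketch of Beaumont-style local linear-regression adjustment in ABC, written from the general description above rather than from the ABCreg source; the toy model, weights, and threshold choice are illustrative.

```python
# Sketch of ABC with a local linear-regression adjustment (toy example).
import numpy as np

def abc_reg_adjust(theta, S, s_obs, accept_frac=0.1):
    """theta: (n,) simulated parameters; S: (n, k) simulated summary stats."""
    d = np.linalg.norm(S - s_obs, axis=1)
    eps = np.quantile(d, accept_frac)        # acceptance threshold
    keep = d <= eps
    w = 1.0 - (d[keep] / eps) ** 2           # Epanechnikov-style weights
    X = np.column_stack([np.ones(keep.sum()), S[keep] - s_obs])
    sw = np.sqrt(np.clip(w, 1e-12, None))    # weighted least squares via scaling
    beta = np.linalg.lstsq(X * sw[:, None], theta[keep] * sw, rcond=None)[0]
    # Adjust accepted draws toward the posterior at s_obs.
    return theta[keep] - (S[keep] - s_obs) @ beta[1:]

rng = np.random.default_rng(1)
theta = rng.normal(0.0, 1.0, 5000)                     # prior draws (toy model)
S = theta[:, None] + rng.normal(0.0, 0.5, (5000, 1))   # one noisy summary stat
post = abc_reg_adjust(theta, S, s_obs=np.array([1.0]))
print(post.mean(), post.std())    # ~0.8 and ~0.45 for this conjugate toy case
```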
Perspectives in Peer Programs. Volume 28, Number 1, Winter 2018
ERIC Educational Resources Information Center
Tindall, Judith, Ed.; Black, David R., Ed.; Routson, Sue, Ed.
2018-01-01
This issue of "Perspectives in Peer Programs," the official journal of the National Association of Peer Program Professionals (NAPP), includes: (1) Introduction to this Issue on NAPPP Programmatic Standards Checklist, Programmatic Standards, Ethics, and Rubric; (2) NAPPP Programmatic Standards Checklist; (3) NAPPP Programmatic Standards;…
Manual of Accreditation Standards for Adventure Programs 1995.
ERIC Educational Resources Information Center
Williamson, John E., Comp.; Gass, Michael, Comp.
This manual presents standards for adventure education programs seeking accreditation from the Association for Experiential Education. The manual is set up sequentially, focusing both on objective standards such as technical risk management aspects, and on subjective standards such as teaching approaches used in programs. Chapter titles provide…
Implementing Linear Algebra Related Algorithms on the TI-92+ Calculator.
ERIC Educational Resources Information Center
Alexopoulos, John; Abraham, Paul
2001-01-01
Demonstrates a less utilized feature of the TI-92+: its natural and powerful programming language. Shows how to implement several linear algebra related algorithms including the Gram-Schmidt process, Least Squares Approximations, Wronskians, Cholesky Decompositions, and Generalized Linear Least Square Approximations with QR Decompositions.…
NASA Technical Reports Server (NTRS)
Lehtinen, B.; Geyser, L. C.
1984-01-01
AESOP is a computer program for use in designing feedback controls and state estimators for linear multivariable systems. AESOP is meant to be used in an interactive manner. Each design task that the program performs is assigned a "function" number. The user accesses these functions either (1) by inputting a list of desired function numbers or (2) by inputting a single function number. In the latter case the choice of the function will in general depend on the results obtained by the previously executed function. The most important of the AESOP functions are those that design linear quadratic regulators and Kalman filters. The user interacts with the program when using these design functions by inputting design weighting parameters and by viewing graphic displays of designed system responses. Supporting functions are provided that obtain system transient and frequency responses, transfer functions, and covariance matrices. The program can also compute open-loop system information such as stability (eigenvalues), eigenvectors, controllability, and observability. The program is written in ANSI-66 FORTRAN for use on an IBM 3033 using TSS 370. Descriptions of all subroutines and results of two test cases are included in the appendixes.
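As a taste of what one AESOP "function" does, the sketch below designs a linear quadratic regulator for an invented second-order plant via SciPy's Riccati solver; AESOP itself is interactive FORTRAN, so this is only a modern analogue, not its code.

```python
# Minimal LQR design sketch: solve the continuous algebraic Riccati equation
# and form the optimal state-feedback gain u = -K x. Plant is illustrative.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, -0.5]])   # toy plant dynamics
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])                   # state weighting (design knob)
R = np.array([[1.0]])                      # control weighting (design knob)

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)            # optimal feedback gain
print("gain K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```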
NASA Technical Reports Server (NTRS)
Dillenius, Marnix F. E.
1985-01-01
Program LRCDM2 was developed for supersonic missiles with axisymmetric bodies and up to two finned sections. Predicted are pressure distributions and loads acting on a complete configuration including effects of body separated flow vorticity and fin-edge vortices. The computer program is based on supersonic panelling and line singularity methods coupled with vortex tracking theory. Effects of afterbody shed vorticity on the afterbody and tail-fin pressure distributions can be optionally treated by companion program BDYSHD. Preliminary versions of combined shock expansion/linear theory and Newtonian/linear theory have been implemented as optional pressure calculation methods to extend the Mach number and angle-of-attack ranges of applicability into the nonlinear supersonic flow regime. Comparisons between program results and experimental data are given for a triform tail-finned configuration and for a canard controlled configuration with a long afterbody for Mach numbers up to 2.5. Initial tests of the nonlinear/linear theory approaches show good agreement for pressures acting on a rectangular wing and a delta wing with attached shocks for Mach numbers up to 4.6 and angles of attack up to 20 degrees.
Ideal Standards, Acceptance, and Relationship Satisfaction: Latitudes of Differential Effects
Buyukcan-Tetik, Asuman; Campbell, Lorne; Finkenauer, Catrin; Karremans, Johan C.; Kappen, Gesa
2017-01-01
We examined whether the relations of consistency between ideal standards and perceptions of a current romantic partner with partner acceptance and relationship satisfaction level off, or decelerate, above a threshold. We tested our hypothesis using a 3-year longitudinal data set collected from heterosexual newlywed couples. We used two indicators of consistency: pattern correspondence (within-person correlation between ideal standards and perceived partner ratings) and mean-level match (difference between ideal standards score and perceived partner score). Our results revealed that pattern correspondence had no relation with partner acceptance, but a positive linear/exponential association with relationship satisfaction. Mean-level match had a significant positive association with actor’s acceptance and relationship satisfaction up to the point where perceived partner score equaled ideal standards score. Partner effects did not show a consistent pattern. The results suggest that the consistency between ideal standards and perceived partner attributes has a non-linear association with acceptance and relationship satisfaction, although the results were more conclusive for mean-level match. PMID:29033876
NASA Astrophysics Data System (ADS)
Abbondanza, Claudio; Altamimi, Zuheir; Chin, Toshio; Collilieux, Xavier; Dach, Rolf; Gross, Richard; Heflin, Michael; König, Rolf; Lemoine, Frank; Macmillan, Dan; Parker, Jay; van Dam, Tonie; Wu, Xiaoping
2014-05-01
The International Terrestrial Reference Frame (ITRF) adopts a piece-wise linear model to parameterize regularized station positions and velocities. The space-geodetic (SG) solutions from VLBI, SLR, GPS, and DORIS used as input in the ITRF combination process account for tidal loading deformations but ignore the non-tidal part. As a result, the non-linear signal observed in the time series of SG-derived station positions in part reflects non-tidal loading displacements not introduced in the SG data reduction. In this analysis, we assess the impact of non-tidal atmospheric loading (NTAL) corrections on the TRF computation. Focusing on the a posteriori approach, (i) the NTAL model derived from the National Centers for Environmental Prediction (NCEP) surface pressure is removed from the SINEX files of the SG solutions used as inputs to the TRF determinations; (ii) adopting a Kalman-filter-based approach, two distinct linear TRFs are estimated by combining the 4 SG solutions with (corrected TRF solution) and without the NTAL displacements (standard TRF solution). Linear fits (offset and atmospheric velocity) of the NTAL displacements removed during step (i) are estimated, accounting for the station position discontinuities introduced in the SG solutions and adopting different weighting strategies. The NTAL-derived (atmospheric) velocity fields are compared to those obtained from the TRF reductions during step (ii). The consistency between the atmospheric and the TRF-derived velocity fields is examined. We show how the presence of station position discontinuities in SG solutions degrades the agreement between the velocity fields and compare the effects of the different weighting structures adopted while estimating the linear fits to the NTAL displacements. Finally, we evaluate the effect of restoring the atmospheric velocities determined through the linear fits of the NTAL displacements to the single-technique linear reference frames obtained by stacking the standard SG SINEX files. Differences between the velocity fields obtained by restoring the NTAL displacements and the standard stacked linear reference frames are discussed.
ERIC Educational Resources Information Center
Walker, Robert W.; Hemp, Paul E.
A study was made of Phase 1 of the long-term standards program for agricultural occupations programs for Illinois community colleges. The unique feature of this project was the procedure used to maximize the input of community college teachers in the validation and revision of the national standards. Survey instruments were sent to community…
Standards Participation Guidance : ITS Standards Program
DOT National Transportation Integrated Search
2018-04-15
The Intelligent Transportation System Joint Program Office (ITS JPO) focuses on research projects, exploratory studies and deployment support for the intelligent transportation system. The ITS Architecture and Standards Programs are foundational to t...
NASA Technical Reports Server (NTRS)
Heidergott, K. W.
1979-01-01
The computer program known as QR is described. Classical control systems analysis and synthesis (root locus, time response, and frequency response) can be performed using this program. Programming details of the QR program are presented.
NASA Technical Reports Server (NTRS)
Gernhardt, Michael I.; Abercromby, Andrew; Conklin, Johnny
2007-01-01
Conventional saturation decompression protocols use linear decompression rates that become progressively slower at shallower depths, consistent with free gas phase control vs. dissolved gas elimination kinetics. If decompression is limited by control of free gas phase, linear decompression is an inefficient strategy. The NASA prebreathe reduction program demonstrated that exercise during O2 prebreathe resulted in a 50% reduction (2 h vs. 4 h) in the saturation decompression time from 14.7 to 4.3 psi and a significant reduction in decompression sickness (DCS: 0 vs. 23.7%). Combining exercise with intermittent recompression, which controls gas phase growth and eliminates supersaturation before exercising, may enable more efficient saturation decompression schedules. A tissue bubble dynamics model (TBDM) was used in conjunction with a NASA exercise prebreathe model (NEPM) that relates tissue inert gas exchange rate constants to exercise (ml O2/kg-min), to develop a schedule for decompression from helium saturation at 400 fsw. The models provide significant prediction (p < 0.001) and goodness of fit with 430 cases of DCS in 6437 laboratory dives for TBDM (p = 0.77) and with 22 cases of DCS in 159 altitude exposures for NEPM (p = 0.70). The models have also been used operationally in over 25,000 dives (TBDM) and 40 spacewalks (NEPM). The standard U.S. Navy (USN) linear saturation decompression schedule from saturation at 400 fsw required 114.5 h with a maximum Bubble Growth Index (BGI(sub max)) of 17.5. Decompression using intermittent recompression combined with two 10 min exercise periods (75% VO2 (sub peak)) per day required 54.25 h (BGI(sub max): 14.7). Combined intermittent recompression and exercise resulted in a theoretical 53% (2.5 day) reduction in decompression time and theoretically lower DCS risk compared to the standard USN decompression schedule. These results warrant future decompression trials to evaluate the efficacy of this approach.
A candidate reference method using ICP-MS for sweat chloride quantification.
Collie, Jake T; Massie, R John; Jones, Oliver A H; Morrison, Paul D; Greaves, Ronda F
2016-04-01
The aim of the study was to develop a method for sweat chloride (Cl) quantification using Inductively Coupled Plasma Mass Spectrometry (ICP-MS) to present to the Joint Committee for Traceability in Laboratory Medicine (JCTLM) as a candidate reference method for the diagnosis of cystic fibrosis (CF). Calibration standards were prepared from sodium chloride (NaCl) to cover the expected range of sweat Cl values. Germanium (Ge) and scandium (Sc) were selected as on-line (instrument based) internal standards (IS) and gallium (Ga) as the off-line (sample based) IS. The method was validated through linearity, accuracy and imprecision studies as well as enrolment into the Royal College of Pathologists of Australasia Quality Assurance Program (RCPAQAP) for sweat electrolyte testing. Two variations of the ICP-MS method were developed, an on-line and off-line IS, and compared. Linearity was determined up to 225 mmol/L with a limit of quantitation of 7.4 mmol/L. The off-line IS demonstrated increased accuracy through the RCPAQAP performance assessment (CV of 1.9%, bias of 1.5 mmol/L) in comparison to the on-line IS (CV of 8.0%, bias of 3.8 mmol/L). Paired t-tests confirmed no significant differences between sample means of the two IS methods (p=0.53) or from each method against the RCPAQAP target values (p=0.08 and p=0.29). Both on and off-line IS methods generated highly reproducible results and excellent linear comparison to the RCPAQAP target results. ICP-MS is a highly accurate method with a low limit of quantitation for sweat Cl analysis and should be recognised as a candidate reference method for the monitoring and diagnosis of CF. Laboratories that currently practice sweat Cl analysis using ICP-MS should include an off-line IS to help negate any pre-analytical errors.
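The internal-standard calibration implied above reduces, in outline, to fitting analyte-to-IS signal ratios against standard concentrations and inverting the line; the count ratios below are invented placeholders, not data from the study.

```python
# Internal-standard calibration sketch: linear fit of Cl/Ga signal ratio
# vs. standard concentration, then inversion for an unknown sample.
import numpy as np

conc = np.array([0.0, 25.0, 50.0, 100.0, 150.0, 225.0])   # mmol/L Cl standards
ratio = np.array([0.01, 0.26, 0.51, 1.02, 1.49, 2.26])    # hypothetical ratios

slope, intercept = np.polyfit(conc, ratio, 1)

def quantify(sample_ratio):
    """Invert the calibration line to get concentration in mmol/L."""
    return (sample_ratio - intercept) / slope

print(round(quantify(0.62), 1), "mmol/L")   # sweat Cl of an unknown sample
```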
Xu, Yiling; Oh, Heesoo; Lagravère, Manuel O
2017-09-01
The purpose of this study was to locate traditionally used landmarks in two-dimensional (2D) images and newly suggested ones in three-dimensional (3D) images (cone-beam computed tomography [CBCT] scans) and to determine possible relationships between them for categorizing patients with Class II-1 malocclusion. CBCTs from 30 patients diagnosed with Class II-1 malocclusion were obtained from the University of Alberta Graduate Orthodontic Program database. The reconstructed images were downloaded and visualized using the software platform AVIZO®. Forty-two landmarks were chosen, and their coordinates were obtained and analyzed using linear and angular measurements. Ten images were analyzed three times to determine the reliability and measurement error of each landmark using the Intra-Class Correlation coefficient (ICC). Descriptive statistics were computed using the SPSS statistical package to determine any relationships. ICC values were excellent for all landmarks in all axes, with the highest measurement error of 2 mm in the y-axis for the Gonion Left landmark. Linear and angular measurements were calculated using the coordinates of each landmark. Descriptive statistics showed that the linear and angular measurements used in the 2D images did not correlate well with the 3D images. The lowest standard deviation obtained was 0.6709 for S-GoR/N-Me, with a mean of 0.8016. The highest standard deviation was 20.20704 for ANS-InfraL, with a mean of 41.006. The traditional landmarks used for 2D malocclusion analysis show good reliability when transferred to 3D images. However, they did not reveal specific skeletal or dental patterns when used to analyze 3D images for malocclusion. Thus, another technique should be considered when classifying 3D CBCT images for Class II-1 malocclusion. Copyright © 2017 CEO. Published by Elsevier Masson SAS. All rights reserved.
Mass Optimization of Battery/Supercapacitors Hybrid Systems Based on a Linear Programming Approach
NASA Astrophysics Data System (ADS)
Fleury, Benoit; Labbe, Julien
2014-08-01
The objective of this paper is to show that, on a specific launcher-type mission profile, a 40% gain of mass is expected using a battery/supercapacitors active hybridization instead of a single battery solution. This result is based on the use of a linear programming optimization approach to perform the mass optimization of the hybrid power supply solution.
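A linear-programming sizing model of the kind the abstract describes can be sketched as follows: choose battery and supercapacitor masses of minimum total mass such that the pair covers the profile's peak power and total energy. The specific-power and specific-energy figures are generic placeholders, not the paper's data.

```python
# Hybrid power-supply mass sizing as a tiny LP (illustrative numbers only).
import numpy as np
from scipy.optimize import linprog

sp = np.array([300.0, 4000.0])    # specific power, W/kg  (battery, supercap)
se = np.array([150.0, 5.0])       # specific energy, Wh/kg (battery, supercap)
P_peak, E_mission = 20_000.0, 3_000.0   # W and Wh demanded by the profile

A_ub = -np.vstack([sp, se])       # encode sp.m >= P_peak and se.m >= E_mission
b_ub = -np.array([P_peak, E_mission])
res = linprog(np.ones(2), A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2,
              method="highs")
print(dict(zip(["battery_kg", "supercap_kg"], res.x)), "total:", res.fun)
# A battery-only design would need ~66.7 kg here; the hybrid needs ~23 kg,
# echoing the mass-gain argument of the abstract (with made-up numbers).
```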
An O(√nL) primal-dual affine scaling algorithm for linear programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Siming
1994-12-31
We present a new primal-dual affine scaling algorithm for linear programming. The search direction of the algorithm is a combination of the classical affine scaling direction of Dikin and a recent new affine scaling direction of Jansen, Roos, and Terlaky. The algorithm has an iteration complexity of O(√nL), compared to the O(nL) complexity of Jansen, Roos, and Terlaky.
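For orientation, the sketch below implements one classical (Dikin) primal affine scaling iteration, the building block that the paper's primal-dual direction combines with newer directions; it is not the paper's algorithm, and safeguards (e.g., for unbounded problems) are omitted.

```python
# Dikin-style primal affine scaling for min c'x s.t. Ax = b, x > 0 (sketch).
import numpy as np

def affine_scaling(A, b, c, x, tol=1e-9, max_iter=500, step=0.66):
    assert np.allclose(A @ x, b) and np.all(x > 0)   # strictly feasible start
    for _ in range(max_iter):
        D2 = x**2                                    # diag(x)^2 kept as a vector
        y = np.linalg.solve((A * D2) @ A.T, A @ (D2 * c))  # dual estimate
        r = c - A.T @ y                              # reduced costs
        if np.all(r >= -tol) and x @ r < tol:        # near-optimality test
            break
        dx = -(D2 * r)                               # descent step; A @ dx = 0
        neg = dx < 0
        alpha = step * np.min(-x[neg] / dx[neg]) if neg.any() else 1.0
        x = x + alpha * dx
    return x

A = np.array([[1.0, 1.0]]); b = np.array([1.0]); c = np.array([2.0, 1.0])
print(affine_scaling(A, b, c, x=np.array([0.5, 0.5])))   # -> approx [0, 1]
```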
ERIC Educational Resources Information Center
Huitzing, Hiddo A.
2004-01-01
This article shows how set covering with item sampling (SCIS) methods can be used in the analysis and preanalysis of linear programming models for test assembly (LPTA). LPTA models can construct tests, fulfilling a set of constraints set by the test assembler. Sometimes, no solution to the LPTA model exists. The model is then said to be…
A Revised Simplex Method for Test Construction Problems. Research Report 90-5.
ERIC Educational Resources Information Center
Adema, Jos J.
Linear programming models with 0-1 variables are useful for the construction of tests from an item bank. Most solution strategies for these models start with solving the relaxed 0-1 linear programming model, allowing the 0-1 variables to take on values between 0 and 1. Then, a 0-1 solution is found by just rounding, optimal rounding, or a…
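The relax-then-round strategy the abstract refers to can be shown on a toy item bank: solve the 0-1 relaxation with bounds [0, 1], then round. The item information values and content constraint below are invented for the sketch.

```python
# Relaxed 0-1 test-construction LP plus naive rounding (toy item bank):
# maximize total item information, at most 3 items, at least 2 algebra items.
import numpy as np
from scipy.optimize import linprog

info = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4])   # item information (toy)
is_algebra = np.array([1, 0, 1, 0, 1, 0])          # content flags (toy)

A_ub = np.vstack([np.ones(6), -is_algebra])        # length cap; content floor
b_ub = np.array([3.0, -2.0])

res = linprog(-info, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 6,
              method="highs")
x_rounded = (res.x > 0.5).astype(int)              # "just rounding" step
print(res.x, "->", x_rounded)
```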
An Interactive Method to Solve Infeasibility in Linear Programming Test Assembling Models
ERIC Educational Resources Information Center
Huitzing, Hiddo A.
2004-01-01
In optimal assembly of tests from item banks, linear programming (LP) models have proved to be very useful. Assembly by hand has become nearly impossible, but these LP techniques are able to find the best solutions, given the demands and needs of the test to be assembled and the specifics of the item bank from which it is assembled. However,…
A system for aerodynamic design and analysis of supersonic aircraft. Part 4: Test cases
NASA Technical Reports Server (NTRS)
Middleton, W. D.; Lundry, J. L.
1980-01-01
An integrated system of computer programs was developed for the design and analysis of supersonic configurations. The system uses linearized theory methods for the calculation of surface pressures and supersonic area rule concepts in combination with linearized theory for calculation of aerodynamic force coefficients. Interactive graphics are optional at the user's request. Representative test cases and associated program output are presented.
Development of Standards for Textiles and Clothing Postsecondary Programs. Final Report.
ERIC Educational Resources Information Center
Iowa State Univ. of Science and Technology, Ames. Dept. of Home Economics Education.
A project was conducted to validate program standards and performance standards for four postsecondary occupational areas--fashion merchandising, fashion design, apparel, and window treatment services. Returns from 117 questionnaires from postsecondary institutions in fifty states were used to develop program standards statements and to provide…
Automotive Technology Skill Standards
ERIC Educational Resources Information Center
Garrett, Tom; Asay, Don; Evans, Richard; Barbie, Bill; Herdener, John; Teague, Todd; Allen, Scott; Benshoof, James
2009-01-01
The standards in this document are for Automotive Technology programs and are designed to clearly state what the student should know and be able to do upon completion of an advanced high-school automotive program. Minimally, the student will complete a three-year program to achieve all standards. Although these exit-level standards are designed…
Computer Programs for the Semantic Differential: Further Modifications.
ERIC Educational Resources Information Center
Lawson, Edwin D.; And Others
The original nine programs for semantic differential analysis have been condensed into three programs which have been further refined and augmented. They yield: (1) means, standard deviations, and standard errors for each subscale on each concept; (2) Evaluation, Potency, and Activity (EPA) means, standard deviations, and standard errors; (3)…
24 CFR 200.948 - Building product standards and certification program for carpet cushion.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Minimum Property Standards, § 200.948: Building product standards and certification program for carpet cushion. (Title 24, Housing and Urban Development, Vol. 2, 2010-04-01 edition, Housing and Urban Development Regulations.)
7 CFR 1726.304 - List of electric program standard contract forms.
Code of Federal Regulations, 2010 CFR
2010-01-01
... RUS Standard Forms, § 1726.304: List of electric program standard contract forms. (Title 7, Agriculture, Vol. 11, 2010-01-01 edition; Rural Utilities Service, Department of Agriculture; Electric System Construction Policies and Procedures.) (a) General. This section...
Relating the Hadamard Variance to MCS Kalman Filter Clock Estimation
NASA Technical Reports Server (NTRS)
Hutsell, Steven T.
1996-01-01
The Global Positioning System (GPS) Master Control Station (MCS) currently makes significant use of the Allan Variance. This two-sample variance equation has proven excellent as a handy, understandable tool, both for time-domain analysis of GPS cesium frequency standards and for fine-tuning the MCS's state estimation of these atomic clocks. The Allan Variance does not explicitly converge for noise types with alpha less than or equal to minus 3 and can be greatly affected by frequency drift. Because GPS rubidium frequency standards exhibit non-trivial aging and aging-noise characteristics, the basic Allan Variance analysis must be augmented in order to (a) compensate for a dynamic frequency drift and (b) characterize two additional noise types, specifically alpha = minus 3 and alpha = minus 4. As the GPS program progresses, we will utilize a larger percentage of rubidium frequency standards than ever before. Hence, GPS rubidium clock characterization will require more attention than ever before. The three-sample variance, commonly referred to as a renormalized Hadamard Variance, is unaffected by linear frequency drift, converges for alpha greater than minus 5, and thus has utility for modeling noise in GPS rubidium frequency standards. This paper demonstrates the potential of Hadamard Variance analysis in GPS operations and presents an equation that relates the Hadamard Variance to the MCS's Kalman filter process noises.
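The contrast drawn above is easy to see numerically: in the sketch below (written for this summary, not from the paper), a deliberately exaggerated linear frequency drift inflates the two-sample (Allan) variance while canceling exactly in the three-sample (Hadamard) second difference.

```python
# Allan vs. Hadamard variance on a fractional-frequency series (sketch).
import numpy as np

def allan_var(y):
    return 0.5 * np.mean(np.diff(y) ** 2)        # two-sample variance

def hadamard_var(y):
    return np.mean(np.diff(y, n=2) ** 2) / 6.0   # three-sample variance

rng = np.random.default_rng(2)
y = rng.normal(0.0, 1e-12, 100_000)              # white frequency noise
drift = 2e-12 * np.arange(y.size)                # exaggerated linear drift
print(allan_var(y), hadamard_var(y))             # both ~1e-24 for white noise
print(allan_var(y + drift), hadamard_var(y + drift))
# The Allan result picks up the drift; the Hadamard result is unchanged,
# since the second difference of a linear ramp is identically zero.
```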
Lead-lag relationships between stock and market risk within linear response theory
NASA Astrophysics Data System (ADS)
Borysov, Stanislav; Balatsky, Alexander
2015-03-01
We study historical correlations and lead-lag relationships between individual stock risks (standard deviation of daily stock returns) and market risk (standard deviation of daily returns of a market-representative portfolio) in the US stock market. We consider the cross-correlation functions averaged over stocks, using historical stock prices from the Standard & Poor's 500 index for 1994-2013. The observed historical dynamics suggests that the dependence between the risks was almost linear during the US stock market downturn of 2002 and after the US housing bubble in 2007, remaining at that level until 2013. Moreover, the averaged cross-correlation function often had an asymmetric shape with respect to zero lag in the periods of high correlation. We develop the analysis by the application of the linear response formalism to study underlying causal relations. The calculated response functions suggest the presence of characteristic regimes near financial crashes, when individual stock risks affect market risk and vice versa. This work was supported by VR 621-2012-2983.
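A bare-bones version of the lagged cross-correlation used in such studies is sketched below on synthetic series, with the "stock" series built to lag the "market" series by two steps; nothing here uses the paper's data.

```python
# Lagged cross-correlation between two standardized series (sketch).
import numpy as np

def cross_corr(a, b, max_lag):
    """c[k] = corr(a_{t+k}, b_t); k > 0 means a lags b."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    n = len(a)
    out = {}
    for k in range(-max_lag, max_lag + 1):
        if k >= 0:
            out[k] = np.mean(a[k:] * b[:n - k])
        else:
            out[k] = np.mean(a[:n + k] * b[-k:])
    return out

rng = np.random.default_rng(3)
market = rng.normal(size=2000)
stock = np.roll(market, 2) + 0.5 * rng.normal(size=2000)  # lags market by 2
cc = cross_corr(stock, market, max_lag=5)
print(max(cc, key=cc.get))   # -> 2: the asymmetry reveals the lead-lag
```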
Liquid-chromatographic determination of cephalosporins and chloramphenicol in serum.
Danzer, L A
1983-05-01
A "high-performance" liquid-chromatographic technique involving a radial compression module is used for measuring chloramphenicol and five cephalosporin antibiotics: cefotaxime, cefoxitin, cephapirin, and cefamandol. Serum proteins are precipitated with acetonitrile solution containing 4'-nitroacetanilide as the internal standard. The drugs are eluted with a mobile phase of methanol/acetate buffer (30/70 by vol), pH 5.5. Absorbance of the cephalosporins is monitored at 254 nm. Standard curves are linear to at least 100 mg/L. The absorbance of chloramphenicol is monitored at 254 nm and 280 nm, and its standard curve is linear to at least 50 mg/L. The elution times for various other drugs were also determined, to check for potential interferents.
Falcone, John L; Gonzalo, Jed D
2014-01-19
To determine Internal Medicine residency program compliance with the Accreditation Council for Graduate Medical Education 80% pass-rate standard and the correlation between residency program size and performance on the American Board of Internal Medicine Certifying Examination. Using a cross-sectional study design applied to 2010-2012 American Board of Internal Medicine Certifying Examination data from all Internal Medicine residency programs, program pass rates were compared with the Accreditation Council for Graduate Medical Education pass-rate standard. To assess the correlation between program size and performance, a Spearman's rho was calculated. To evaluate program size in relation to the pass-rate standard, receiver operating characteristic curves were calculated. Of 372 Internal Medicine residency programs, 276 programs (74%) achieved a pass rate of ≥80%, meeting the Accreditation Council for Graduate Medical Education minimum standard. A weak positive correlation was found between residency program size and pass rate for the three-year period (ρ=0.19, p<0.001). The area under the receiver operating characteristic curve was 0.69 (95% Confidence Interval [0.63-0.75]), suggesting that programs with fewer than 12 examinees/year are less likely to meet the minimum Accreditation Council for Graduate Medical Education pass-rate standard (sensitivity 63.8%, specificity 60.4%, positive predictive value 82.2%, p<0.001). Although a majority of Internal Medicine residency programs complied with Accreditation Council for Graduate Medical Education pass-rate standards, a quarter of the programs failed to meet this requirement. Program size is positively but weakly associated with American Board of Internal Medicine Certifying Examination performance, suggesting that other, unidentified variables contribute significantly to program performance.
Linear programming model to develop geodiversity map using utility theory
NASA Astrophysics Data System (ADS)
Sepehr, Adel
2015-04-01
In this article, the classification and mapping of geodiversity were accomplished with a quantitative methodology based on linear programming, the central idea being that geosites and geomorphosites, as the main indicators of geodiversity, can be evaluated by utility theory. A linear programming method was applied to geodiversity mapping over Khorasan Razavi Province in northeastern Iran. To this end, the main criteria for distinguishing geodiversity potential in the study area were rock type (lithology), fault positions (tectonic processes), karst areas (dynamic processes), the frequency of aeolian landforms, and river surface forms. These parameters were investigated using thematic maps (geology, topography, and geomorphology at scales of 1:100,000, 1:50,000, and 1:250,000), imagery (SPOT and Landsat 7 ETM+), and direct field surveys. The geological thematic layer was simplified from the original map using a practical lithologic criterion based on a primary genetic classification of rocks into metamorphic, igneous, and sedimentary types. The geomorphology map was produced using a 30-m DEM extracted from ASTER data, the geology map, and Google Earth images. The geology map shows tectonic status, and the geomorphology map indicates dynamic processes and landforms (karst, aeolian, and river). Then, following utility theory, we proposed a linear program to classify the degree of geodiversity in the study area based on geological and geomorphological parameters. The algorithm consisted of a linear objective function maximizing geodiversity subject to constraints in the form of linear equations. The results of this research indicated three classes of geodiversity potential: low, medium, and high. Geodiversity potential was highest in the karstic areas and the aeolian landscape. The use of utility theory also decreased the uncertainty of the evaluations.
Determination of optimum values for maximizing the profit in bread production: Daily bakery Sdn Bhd
NASA Astrophysics Data System (ADS)
Muda, Nora; Sim, Raymond
2015-02-01
An integer programming problem is a mathematical optimization or feasibility program in which some or all of the variables are restricted to be integers. In many settings the term refers to integer linear programming (ILP), in which the objective function and the constraints (other than the integrality constraints) are linear. ILP has many applications in industrial production, including job-shop modelling. A typical objective is to maximize total production without exceeding the available resources. In some cases this can be expressed as a linear program, but the variables must be constrained to be integers. ILP is concerned with the optimization of a linear function subject to a set of linear equality and inequality constraints and integrality restrictions. It has been used to solve optimization problems in many industries, such as banking, nutrition, agriculture, and baking. The main purpose of this study is to formulate the best combination of ingredients for producing different types of bread at Daily Bakery in order to gain maximum profit; a toy model of this kind is sketched below. This study also examines the sensitivity of the solution to changes in the profit and the cost of each ingredient. The optimum result obtained from the QM software is RM 65,377.29 per day. This study will benefit Daily Bakery and other similar industries: by formulating the combination of ingredients, they can easily determine their total daily profit from bread production.
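The study's actual model and coefficients belong to Daily Bakery, but the flavor of the formulation can be sketched with invented numbers: maximize profit over integer batch counts of two bread types under flour and oven-time limits (SciPy >= 1.9 exposes HiGHS's integer support through linprog).

```python
# Toy integer LP: maximize profit subject to resource limits (made-up data).
import numpy as np
from scipy.optimize import linprog

profit = np.array([4.0, 6.0])            # RM per batch (hypothetical)
A_ub = np.array([[2.0, 3.0],             # kg flour per batch
                 [1.0, 2.0]])            # oven-hours per batch
b_ub = np.array([120.0, 70.0])           # daily flour and oven capacity

res = linprog(-profit, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2,
              integrality=np.ones(2), method="highs")
print(res.x, "profit:", -res.fun)        # -> batches [30, 20], profit 240
```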
Avionics Maintenance Technology Program Standards.
ERIC Educational Resources Information Center
Georgia Univ., Athens. Dept. of Vocational Education.
This publication contains statewide standards for the avionics maintenance technology program in Georgia. The standards are divided into the following categories: foundations, diploma/degree (philosophy, purpose, goals, program objectives, availability, evaluation); admissions, diploma/degree (admission requirements, provisional admission…
Radiologic Technology Program Standards.
ERIC Educational Resources Information Center
Georgia Univ., Athens. Dept. of Vocational Education.
This publication contains statewide standards for the radiologic technology program in Georgia. The standards are divided into 12 categories; Foundations (philosophy, purpose, goals, program objectives, availability, evaluation); Admissions (admission requirements, provisional admission requirements, recruitment, evaluation and planning); Program…
Structural overview and learner control in hypermedia instructional programs
NASA Astrophysics Data System (ADS)
Burke, Patricia Anne
1998-09-01
This study examined the effects of a structural overview and learner control in a computer-based program on the achievement, attitudes, time in program, and linearity of path of fifth-grade students. Four versions of a computer-based instructional program about the Sun and planets were created in a 2 x 2 factorial design. The program consisted of ten sections, one for each planet and one for the Sun. Two structural overview conditions (structural overview, no structural overview) were crossed with two control conditions (learner control, program control). Subjects in the structural overview condition chose the order in which they would learn about the planets from among three options: ordered by distance from the Sun, ordered by size, or ordered by temperature. Subjects in the learner control condition were able to move freely among screens within a section and to choose their next section after finishing the previous one. In contrast, those in the program control condition advanced through the program in a prescribed linear manner. A 2 x 2 ANOVA yielded no significant differences in posttest scores for either independent variable or for their interaction. The structural overview was most likely not effective because subjects spent only a small percentage of their total time on the structural overview screens and were not required to act upon the information in those screens. Learner control over content sequencing may not have been effective because most learner-control subjects chose the same overall sequence of instruction (i.e., distance from the Sun) prescribed for program-control subjects. Learner-control subjects chose to view an average of 40 more screens than the fixed 160 screens in the program-control version. However, program-control subjects spent significantly more time per screen than learner-control subjects, and total time in program did not differ significantly between the two groups. Learner-control subjects receiving the structural overview deviated from the linear path significantly more often than subjects who did not have the structural overview, but deviation from the linear path was not associated with higher posttest scores.
Welch, Thomas R; Olson, Brad G; Nelsen, Elizabeth; Beck Dallaghan, Gary L; Kennedy, Gloria A; Botash, Ann
2017-09-01
To determine whether training site or prior examinee performance on the US Medical Licensing Examination (USMLE) step 1 and step 2 might predict pass rates on the American Board of Pediatrics (ABP) certifying examination. Data from graduates of pediatric residency programs completing the ABP certifying examination between 2009 and 2013 were obtained. For each, results of the initial ABP certifying examination were obtained, as well as results on the National Board of Medical Examiners (NBME) step 1 and step 2 examinations. Hierarchical linear modeling was used to nest first-time ABP results within training programs, isolating the program's contribution to ABP results while controlling for USMLE step 1 and step 2 scores. Stepwise linear regression was then used to determine which of these examinations was the better predictor of ABP results. A total of 1110 graduates of 15 programs had complete testing results and were subject to analysis. Mean ABP scores for these programs ranged from 186.13 to 214.32. The hierarchical linear model suggested that the interaction of step 1 and step 2 scores predicted ABP performance (F[1,1007.70] = 6.44, P = .011). In a multilevel model by training program, both USMLE step examinations predicted first-time ABP results (b = .002, t = 2.54, P = .011). Linear regression analyses indicated that step 2 results were a better predictor of ABP performance than step 1 or a combination of the two USMLE scores. Performance on the USMLE examinations, especially step 2, predicts performance on the ABP certifying examination. The contribution of training site to ABP performance was statistically significant, though it was modest compared with that of prior USMLE scores. Copyright © 2017 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Johnsen, Susan K., Ed.
2012-01-01
The new Pre-K-Grade 12 Gifted Education Programming Standards should be part of every school district's repertoire of standards to ensure that the learning needs of advanced students are being met. "NAGC Pre-K-Grade 12 Gifted Education Programming Standards: A Guide to Planning and Implementing High-Quality Services" details six standards that…
Parallel implementation of an adaptive and parameter-free N-body integrator
NASA Astrophysics Data System (ADS)
Pruett, C. David; Ingham, William H.; Herman, Ralph D.
2011-05-01
Previously, Pruett et al. (2003) [3] described an N-body integrator of arbitrarily high order M with an asymptotic operation count of O(MN). The algorithm's structure lends itself readily to data parallelization, which we document and demonstrate here in the integration of point-mass systems subject to Newtonian gravitation. High order is shown to benefit parallel efficiency. The resulting N-body integrator is robust, parameter-free, highly accurate, and adaptive in both time-step and order. Moreover, it exhibits linear speedup on distributed parallel processors, provided that each processor is assigned at least a handful of bodies.
Program summary:
Program title: PNB.f90
Catalogue identifier: AEIK_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIK_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 3052
No. of bytes in distributed program, including test data, etc.: 68 600
Distribution format: tar.gz
Programming language: Fortran 90 and OpenMPI
Computer: All shared or distributed memory parallel processors
Operating system: Unix/Linux
Has the code been vectorized or parallelized?: The code has been parallelized but has not been explicitly vectorized.
RAM: Dependent upon N
Classification: 4.3, 4.12, 6.5
Nature of problem: High-accuracy numerical evaluation of trajectories of N point masses, each subject to Newtonian gravitation.
Solution method: Parallel and adaptive extrapolation in time via power series of arbitrary degree.
Running time: 5.1 s for the demo program supplied with the package.
The Teaching of Ethics and Professionalism in Plastic Surgery Residency: A Cross-Sectional Survey.
Bennett, Katelyn G; Ingraham, John M; Schneider, Lisa F; Saadeh, Pierre B; Vercler, Christian J
2017-05-01
The ethical practice of medicine has always been of utmost importance, and plastic surgery is no exception. The literature is devoid of information on the teaching of ethics and professionalism in plastic surgery. In light of this, a survey was sent to ascertain the status of ethics training in plastic surgery residencies. A 21-question survey was sent from the American Council of Academic Plastic Surgeons meeting to 180 plastic surgery program directors and coordinators via email. Survey questions inquired about practice environment, number of residents, presence of a formal ethics training program, among others. Binary regression was used to determine if any relationships existed between categorical variables, and Poisson linear regression was used to assess relationships between continuous variables. Statistical significance was set at a P value of 0.05. A total of 104 members responded to the survey (58% response rate). Sixty-three percent were program directors, and most (89%) practiced in academic settings. Sixty-two percent in academics reported having a formal training program, and 60% in private practice reported having one. Only 40% of programs with fewer than 10 residents had ethics training, whereas 78% of programs with more than 20 residents did. The odds of having a training program were slightly higher (odds ratio, 1.1) with more residents (P = 0.17). Despite the lack of information in the literature, formal ethics and professionalism training does exist in many plastic surgery residencies, although barriers to implementation do exist. Plastic surgery leadership should be involved in the development of standardized curricula to help overcome these barriers.
Klamt, Steffen; Müller, Stefan; Regensburger, Georg; Zanghellini, Jürgen
2018-05-01
The optimization of metabolic rates (as linear objective functions) represents the methodical core of flux-balance analysis techniques which have become a standard tool for the study of genome-scale metabolic models. Besides (growth and synthesis) rates, metabolic yields are key parameters for the characterization of biochemical transformation processes, especially in the context of biotechnological applications. However, yields are ratios of rates, and hence the optimization of yields (as nonlinear objective functions) under arbitrary linear constraints is not possible with current flux-balance analysis techniques. Despite the fundamental importance of yields in constraint-based modeling, a comprehensive mathematical framework for yield optimization is still missing. We present a mathematical theory that allows one to systematically compute and analyze yield-optimal solutions of metabolic models under arbitrary linear constraints. In particular, we formulate yield optimization as a linear-fractional program. For practical computations, we transform the linear-fractional yield optimization problem to a (higher-dimensional) linear problem. Its solutions determine the solutions of the original problem and can be used to predict yield-optimal flux distributions in genome-scale metabolic models. For the theoretical analysis, we consider the linear-fractional problem directly. Most importantly, we show that the yield-optimal solution set (like the rate-optimal solution set) is determined by (yield-optimal) elementary flux vectors of the underlying metabolic model. However, yield- and rate-optimal solutions may differ from each other, and hence optimal (biomass or product) yields are not necessarily obtained at solutions with optimal (growth or synthesis) rates. Moreover, we discuss phase planes/production envelopes and yield spaces, in particular, we prove that yield spaces are convex and provide algorithms for their computation. We illustrate our findings by a small example and demonstrate their relevance for metabolic engineering with realistic models of E. coli. We develop a comprehensive mathematical framework for yield optimization in metabolic models. Our theory is particularly useful for the study and rational modification of cell factories designed under given yield and/or rate requirements. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
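The paper's transformation of the linear-fractional yield problem to a linear program is, in outline, a Charnes-Cooper lifting; the toy sketch below maximizes c'v / d'v over linear constraints G v <= h by solving the lifted LP in (w, t) and recovering v = w / t. All matrices here are invented toy data, not a metabolic model.

```python
# Charnes-Cooper sketch: max c'v / d'v s.t. G v <= h  becomes the LP
# max c'w  s.t.  G w - h t <= 0,  d'w = 1,  t >= 0, in variables (w, t).
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 0.0])        # "product rate" numerator (toy)
d = np.array([0.0, 1.0])        # "uptake rate" denominator (toy)
G = np.array([[1.0, -2.0],      # v1 <= 2 v2
              [-1.0, 0.0],      # v1 >= 0
              [0.0, -1.0],      # v2 >= 0.1
              [0.0, 1.0]])      # v2 <= 10
h = np.array([0.0, 0.0, -0.1, 10.0])

n = len(c)
A_ub = np.hstack([G, -h[:, None]])          # G w - h t <= 0
b_ub = np.zeros(len(h))
A_eq = np.array([np.append(d, 0.0)])        # normalization d'w = 1
b_eq = np.array([1.0])
bounds = [(None, None)] * n + [(0, None)]   # w free, t >= 0
res = linprog(np.append(-c, 0.0), A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
w, t = res.x[:n], res.x[n]
print("optimal yield:", -res.fun, "flux v = w/t:", w / t)   # yield -> 2.0
```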
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-28
... Interim Staff Guidance on Standard Review Plan, Section 17.4, "Reliability Assurance Program." AGENCY... design reliability assurance program (RAP). This ISG updates the guidance provided to the staff in Standard Review Plan (SRP), Section 17.4, "Reliability Assurance Program," of NUREG-0800, "Standard...
Applying your corporate compliance skills to the HIPAA security standard.
Carter, P I
2000-01-01
Compliance programs are an increasingly hot topic among healthcare providers. These programs establish policies and procedures covering billing, referrals, gifts, confidentiality of patient records, and many other areas. The purpose is to help providers prevent and detect violations of the law. These programs are voluntary, but are also simply good business practice. Any compliance program should now incorporate the Health Insurance Portability and Accountability Act (HIPAA) security standard. Several sets of guidelines for development of compliance programs have been issued by the federal government, and each is directed toward a different type of healthcare provider. These guidelines share certain key features with the HIPAA security standard. This article examines the common areas between compliance programs and the HIPAA security standard to help you to do two very important things: (1) Leverage your resources by combining compliance with the security standard with other legal and regulatory compliance efforts, and (2) apply the lessons learned in developing your corporate compliance program to developing strategies for compliance with the HIPAA security standard.
Modulation Transfer Function (MTF) measurement techniques for lenses and linear detector arrays
NASA Technical Reports Server (NTRS)
Schnabel, J. J., Jr.; Kaishoven, J. E., Jr.; Tom, D.
1984-01-01
The application is the determination of the Modulation Transfer Function (MTF) for linear detector arrays. The system setup requires knowledge of the MTF of the imaging lens; the procedure for this measurement with standard optical laboratory equipment is described. Given this information, various possible approaches to MTF measurement for linear arrays are described. The knife-edge method is then described in detail, and a sketch of the computation follows.
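A minimal numerical version of the knife-edge computation, on a synthetic edge: differentiate the edge-spread function to get the line-spread function, then take the normalized FFT magnitude as the MTF. The edge width and sampling below are arbitrary choices for the sketch, not values from the report.

```python
# Knife-edge MTF sketch on a synthetic edge-spread function (ESF).
import numpy as np

x = np.linspace(-1, 1, 512)                     # position across the edge (mm)
esf = 0.5 * (1 + np.tanh(x / 0.05))             # synthetic edge response
lsf = np.gradient(esf, x)                       # line-spread function (LSF)
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                                   # normalize so MTF(0) = 1
freqs = np.fft.rfftfreq(len(x), d=x[1] - x[0])  # spatial frequency, cycles/mm
print(freqs[mtf < 0.5][0])                      # frequency where MTF drops to 0.5
```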
White, Helen E; Hedges, John; Bendit, Israel; Branford, Susan; Colomer, Dolors; Hochhaus, Andreas; Hughes, Timothy; Kamel-Reid, Suzanne; Kim, Dong-Wook; Modur, Vijay; Müller, Martin C; Pagnano, Katia B; Pane, Fabrizio; Radich, Jerry; Cross, Nicholas C P; Labourier, Emmanuel
2013-06-01
Current guidelines for managing Philadelphia-positive chronic myeloid leukemia include monitoring the expression of the BCR-ABL1 (breakpoint cluster region/c-abl oncogene 1, non-receptor tyrosine kinase) fusion gene by quantitative reverse-transcription PCR (RT-qPCR). Our goal was to establish and validate reference panels to mitigate the interlaboratory imprecision of quantitative BCR-ABL1 measurements and to facilitate global standardization on the international scale (IS). Four-level secondary reference panels were manufactured under controlled and validated processes with synthetic Armored RNA Quant molecules (Asuragen) calibrated to reference standards from the WHO and the NIST. Performance was evaluated in IS reference laboratories and with non-IS-standardized RT-qPCR methods. For most methods, percent ratios for BCR-ABL1 e13a2 and e14a2 relative to ABL1 or BCR were robust at 4 different levels and linear over 3 logarithms, from 10% to 0.01% on the IS. The intraassay and interassay imprecision was <2-fold overall. Performance was stable across 3 consecutive lots, in multiple laboratories, and over a period of 18 months to date. International field trials demonstrated the commutability of the reagents and their accurate alignment to the IS within the intra- and interlaboratory imprecision of IS-standardized methods. The synthetic calibrator panels are robust, reproducibly manufactured, analytically calibrated to the WHO primary standards, and compatible with most BCR-ABL1 RT-qPCR assay designs. The broad availability of secondary reference reagents will further facilitate interlaboratory comparative studies and independent quality assessment programs, which are of paramount importance for worldwide standardization of BCR-ABL1 monitoring results and the optimization of current and new therapeutic approaches for chronic myeloid leukemia. © 2013 American Association for Clinical Chemistry.
Thayer, Edward C.; Olson, Maynard V.; Karp, Richard M.
1999-01-01
Genetic and physical maps display the relative positions of objects or markers occurring within a target DNA molecule. In constructing maps, the primary objective is to determine the ordering of these objects. A further objective is to assign a coordinate to each object, indicating its distance from a reference end of the target molecule. This paper describes a computational method and a body of software for assigning coordinates to map objects, given a solution or partial solution to the ordering problem. We describe our method in the context of multiple–complete–digest (MCD) mapping, but it should be applicable to a variety of other mapping problems. Because of errors in the data or insufficient clone coverage to uniquely identify the true ordering of the map objects, a partial ordering is typically the best one can hope for. Once a partial ordering has been established, one often seeks to overlay a metric along the map to assess the distances between the map objects. This problem often proves intractable because of data errors such as erroneous local length measurements (e.g., large clone lengths on low-resolution physical maps). We present a solution to the coordinate assignment problem for MCD restriction-fragment mapping, in which a coordinated set of single-enzyme restriction maps are simultaneously constructed. We show that the coordinate assignment problem can be expressed as the solution of a system of linear constraints. If the linear system is free of inconsistencies, it can be solved using the standard Bellman–Ford algorithm. In the more typical case where the system is inconsistent, our program perturbs it to find a new consistent system of linear constraints, close to those of the given inconsistent system, using a modified Bellman–Ford algorithm. Examples are provided of simple map inconsistencies and the methods by which our program detects candidate data errors and directs the user to potential suspect regions of the map. PMID:9927487
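The coordinate-assignment formulation in this record, a system of difference constraints x[v] - x[u] <= w, maps directly onto a shortest-path problem solvable with Bellman-Ford, as the abstract states. The sketch below shows that correspondence on three hypothetical map objects; it is not the authors' software, and it only detects inconsistency rather than perturbing the system as their modified algorithm does.

```python
# Each constraint x[v] - x[u] <= w becomes an edge u -> v with weight w;
# a virtual source reaches every node with distance 0. Bellman-Ford then
# yields feasible coordinates, or reports a negative cycle (inconsistency).

def bellman_ford_coordinates(n, constraints):
    """constraints: list of (u, v, w) meaning x[v] - x[u] <= w."""
    dist = [0.0] * n                      # virtual source: distance 0 to all
    for _ in range(n - 1):
        changed = False
        for u, v, w in constraints:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            break
    for u, v, w in constraints:           # one more pass: any inconsistency?
        if dist[u] + w < dist[v]:
            return None                   # negative cycle -> no solution
    return dist

# x1 - x0 <= 5, x2 - x1 <= 3, x0 - x2 <= -7  (i.e., x2 >= x0 + 7)
coords = bellman_ford_coordinates(3, [(0, 1, 5), (1, 2, 3), (2, 0, -7)])
print(coords)   # feasible coordinates relative to the reference end
```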
A Unique Technique to get Kaprekar Iteration in Linear Programming Problem
NASA Astrophysics Data System (ADS)
Sumathi, P.; Preethy, V.
2018-04-01
This paper explores the curious number popularly known as the Kaprekar constant and the related Kaprekar numbers. A large number of courses and differing classroom capacities, with differences in study periods, make the assignment between classrooms and courses complicated. An approach to obtaining the minimum and maximum number of iterations needed to reach the Kaprekar constant for four-digit numbers through linear programming techniques is presented.
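For readers unfamiliar with the routine this record refers to, the following sketch performs the Kaprekar iteration directly by brute force rather than by the paper's linear programming approach: sort the digits descending and ascending, subtract, and repeat until 6174. It assumes four-digit inputs padded with leading zeros and at least two distinct digits.

```python
# Kaprekar's routine: every four-digit number with at least two distinct
# digits reaches the constant 6174. Count iterations from each start value.

def kaprekar_steps(n):
    steps = 0
    while n != 6174:
        digits = f"{n:04d}"                              # keep leading zeros
        big = int("".join(sorted(digits, reverse=True)))
        small = int("".join(sorted(digits)))
        n = big - small
        steps += 1
    return steps

counts = {n: kaprekar_steps(n) for n in range(1000, 10000)
          if len(set(f"{n:04d}")) > 1}                   # skip repdigits
print(min(counts.values()), max(counts.values()))        # 0 and 7
```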
ERIC Educational Resources Information Center
Li, Yuan H.; Yang, Yu N.; Tompkins, Leroy J.; Modarresi, Shahpar
2005-01-01
The statistical technique, "Zero-One Linear Programming," that has successfully been used to create multiple tests with similar characteristics (e.g., item difficulties, test information and test specifications) in the area of educational measurement, was deemed to be a suitable method for creating multiple sets of matched samples to be…
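Since the abstract is truncated, here is one generic zero-one linear programming formulation of matched sampling, sketched with the PuLP library: pair each treated unit with one control so total covariate distance is minimized. The unit names, scores, and pairing rules are hypothetical and not necessarily the authors' formulation.

```python
import pulp

# Zero-one LP sketch of matched sampling. Scores are invented propensity-like
# covariate values; the objective is total absolute distance of chosen pairs.

treated = {"t1": 0.30, "t2": 0.55}
control = {"c1": 0.28, "c2": 0.60, "c3": 0.52}

prob = pulp.LpProblem("matched_samples", pulp.LpMinimize)
x = pulp.LpVariable.dicts("pair", (treated, control), cat="Binary")

prob += pulp.lpSum(abs(treated[t] - control[c]) * x[t][c]
                   for t in treated for c in control)
for t in treated:                      # each treated unit gets one match
    prob += pulp.lpSum(x[t][c] for c in control) == 1
for c in control:                      # each control used at most once
    prob += pulp.lpSum(x[t][c] for t in treated) <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for t in treated:
    for c in control:
        if pulp.value(x[t][c]) > 0.5:
            print(t, "matched with", c)
```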
Louisiana Standards for Programs Serving Four-Year-Old Children: Bulletin.
ERIC Educational Resources Information Center
Picard, Cecil J.
As part of Louisiana's efforts to expand and improve the quality of its early childhood programs, a committee of educators from across the state collaborated to develop standards for programs serving 4-year-olds. This guide presents program standards to assist the ongoing development, evaluation, and improvement of early childhood center-based…
25 CFR 36.40 - Standard XIII-Library/media program.
Code of Federal Regulations, 2010 CFR
2010-04-01
Title 25 (Indians), revised as of 2010-04-01. § 36.40 Standard XIII—Library/media program. (a) Each school shall provide a library/media program... objectives have been met. (2) A written policy for the selection of materials and equipment shall be...
25 CFR 36.40 - Standard XIII-Library/media program.
Code of Federal Regulations, 2012 CFR
2012-04-01
Title 25 (Indians), revised as of 2012-04-01. § 36.40 Standard XIII—Library/media program. (a) Each school shall provide a library/media program... developed by a library committee in collaboration with the librarian and be approved by the school board...
25 CFR 36.40 - Standard XIII-Library/media program.
Code of Federal Regulations, 2014 CFR
2014-04-01
Title 25 (Indians), revised as of 2014-04-01. § 36.40 Standard XIII—Library/media program. (a) Each school shall provide a library/media program... developed by a library committee in collaboration with the librarian and be approved by the school board...
25 CFR 36.40 - Standard XIII-Library/media program.
Code of Federal Regulations, 2013 CFR
2013-04-01
Title 25 (Indians), revised as of 2013-04-01. § 36.40 Standard XIII—Library/media program. (a) Each school shall provide a library/media program... developed by a library committee in collaboration with the librarian and be approved by the school board...
The Linear Programming to evaluate the performance of Oral Health in Primary Care.
Colussi, Claudia Flemming; Calvo, Maria Cristina Marino; Freitas, Sergio Fernando Torres de
2013-01-01
To show the use of Linear Programming to evaluate the performance of oral health in primary care. This study used data from the 19 municipalities of the state of Santa Catarina that participated in the 2009 state evaluation and have more than 50,000 inhabitants. A total of 40 indicators were evaluated, calculated using Microsoft Excel 2007 and converted to the interval [0, 1] in ascending order (one indicating the best situation and zero the worst). Applying the Linear Programming technique, municipalities were assessed and compared according to a performance curve named the "quality estimated frontier". Municipalities on the frontier were classified as excellent. Indicators were aggregated into synthetic indicators. Most municipalities not on the quality frontier (values different from 1.0) had values lower than 0.5, indicating poor performance. The model applied to the municipalities of Santa Catarina assessed municipal management and local priorities rather than goals imposed by predefined parameters. In the final analysis, three municipalities were included in the "perceived quality frontier". The Linear Programming technique made it possible to identify gaps that must be addressed by municipal managers to enhance the actions taken, and to observe each municipality's performance and compare results among similar municipalities.
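One common LP construction behind frontier-style scoring of the kind this record describes is the "benefit of the doubt" model: each municipality gets the indicator weights most favorable to it, subject to no municipality scoring above 1. The sketch below uses scipy and four hypothetical municipalities with three indicators already scaled to [0, 1]; it is one plausible reading of the method, not the authors' exact model.

```python
import numpy as np
from scipy.optimize import linprog

# Benefit-of-the-doubt frontier scoring: maximize w . Y[i] subject to
# Y @ w <= 1 for all units and w >= 0. Frontier units score exactly 1.
# The indicator matrix is invented illustration data.

Y = np.array([[0.9, 0.7, 0.8],
              [0.5, 0.4, 0.6],
              [1.0, 0.9, 0.7],
              [0.3, 0.5, 0.4]])

def bod_score(Y, i):
    # linprog minimizes, so negate the objective to maximize w . Y[i]
    res = linprog(c=-Y[i], A_ub=Y, b_ub=np.ones(len(Y)), bounds=(0, None))
    return -res.fun

for i in range(len(Y)):
    print(f"municipality {i}: score = {bod_score(Y, i):.3f}")
```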
Enhanced algorithms for stochastic programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishna, Alamuru S.
1993-09-01
In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems, and we describe improvements to the current techniques in both areas. We studied different ways of using importance sampling techniques in the context of stochastic programming, by varying the choice of approximation functions used in this method. We concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient: it reduces the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance-reduction techniques to estimate the mean of the piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying the variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected-value problem is usually solved before the stochastic solution, both to provide a starting point and to speed up the algorithm by making use of the information obtained from the solution of the expected-value problem. We have devised a new decomposition scheme to improve the convergence of this algorithm.
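The core idea, a cheap piecewise-linear surrogate standing in for an expensive recourse function, can be shown in miniature with a control variate, a variance-reduction technique in the same family as those the dissertation studies (its own schemes, importance sampling within decomposition, are more elaborate). The functions f and g and the demand distribution below are toy stand-ins.

```python
import numpy as np

# Estimate E[f(xi)] where f is "expensive": use a cheap piecewise-linear
# surrogate g as a control variate, with E[g] estimated very precisely
# because g is cheap to evaluate.

rng = np.random.default_rng(0)

def f(xi):                  # pretend this is expensive to evaluate
    return np.maximum(1.5 * (xi - 2.0), 0.0) ** 1.2 + 3.0

def g(xi):                  # cheap piecewise-linear approximation of f
    return np.maximum(1.7 * (xi - 2.1), 0.0) + 3.0

xi = rng.exponential(scale=2.0, size=200_000)
xi_big = rng.exponential(scale=2.0, size=2_000_000)
mu_g = g(xi_big).mean()     # g is cheap, so pin down E[g] with many samples

fx, gx = f(xi), g(xi)
beta = np.cov(fx, gx)[0, 1] / gx.var()      # optimal control-variate weight
cv = fx - beta * (gx - mu_g)                # control-variate estimator

n = len(xi)
print("crude MC:  mean %.4f, std err %.5f" % (fx.mean(), fx.std() / n**0.5))
print("with CV :  mean %.4f, std err %.5f" % (cv.mean(), cv.std() / n**0.5))
```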
Monthly Progress Report No. 60 for April 1948
DOE Office of Scientific and Technical Information (OSTI.GOV)
Various
This report gives a short summary of each of the following programs: (1) 184-inch Cyclotron Program; (2) 60-inch Cyclotron Program; (3) Synchrotron Program; (4) Linear Accelerator Program; (5) Experimental Physics; (6) Theoretical Physics; (7) Chemistry; (8) Medical Physics; and (9) Health Physics and Chemistry.
Satisfying Friendship Maintenance Expectations: The Role of Friendship Standards and Biological Sex
ERIC Educational Resources Information Center
Hall, Jeffrey A.; Larson, Kiley A.; Watts, Amber
2011-01-01
The ideal standards model predicts linear relationships among friendship standards, expectation fulfillment, and relationship satisfaction. Using a diary method, 197 participants reported on expectation fulfillment in interactions with one best, one close, and one casual friend (N = 591) over five days (2,388 interactions). Using multilevel…
NASA Astrophysics Data System (ADS)
Guo, Sangang
2017-09-01
There are two stages in solving security-constrained unit commitment (SCUC) problems within the Lagrangian framework: one is to obtain feasible unit states (UC); the other is power economic dispatch (ED) for each unit. An accurate solution of the ED is the more important for enhancing the efficiency of the solution to the SCUC given fixed feasible unit states. Two novel methods, named the Convex Combinatorial Coefficient Method and the Power Increment Method, are proposed for solving the ED; both are based on linear programming via a piecewise linear approximation of the nonlinear convex fuel cost functions. Numerical testing results show that the methods are effective and efficient.
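The standard segment formulation of piecewise-linear economic dispatch, not necessarily either of the authors' two new methods, looks like the sketch below: each unit's output above its minimum is split into segments with increasing marginal cost, and because slopes increase the LP fills segments in merit order automatically. The two-unit data are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Economic dispatch as an LP with piecewise-linear fuel costs.
# Variables: one power segment per cost piece; one demand-balance equality.

demand = 300.0
pmin = [50.0, 40.0]                        # MW minimum output per unit
# two segments per unit: (width in MW, marginal cost in $/MWh), increasing
segs = [[(100.0, 20.0), (100.0, 28.0)],
        [(80.0, 22.0), (80.0, 30.0)]]

widths = [w for unit in segs for (w, s) in unit]
slopes = [s for unit in segs for (w, s) in unit]

A_eq = np.ones((1, len(widths)))           # segment powers sum to net demand
b_eq = [demand - sum(pmin)]
res = linprog(c=slopes, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, w) for w in widths])

p, k = [], 0
for i, unit in enumerate(segs):
    p.append(pmin[i] + sum(res.x[k + j] for j in range(len(unit))))
    k += len(unit)
print("dispatch (MW):", [round(v, 1) for v in p],
      " segment cost:", round(res.fun, 1))
```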
Mahdavifar, Neda; Ghoncheh, Mahshid; Pakzad, Reza; Momenimovahed, Zohre; Salehiniya, Hamid
2016-01-01
Bladder cancer is an international public health problem. It is the ninth most common cancer and the fourteenth leading cause of death due to cancer worldwide. Given aging populations, the incidence of this cancer is rising. Information on the incidence and mortality of the disease, and their relationship with level of economic development, is essential for better planning. The aim of the study was to investigate bladder cancer incidence and mortality rates, and their relationship with the Human Development Index (HDI), in the world. Data were obtained from incidence and mortality rates presented by GLOBOCAN in 2012. Data on HDI and its components were extracted from the World Bank site. The numbers and standardized incidence and mortality rates were reported by region, and the distribution of the disease was mapped worldwide. For data analysis, the relationship between incidence and death rates and HDI and its components was measured using correlation coefficients and SPSS software. The level of significance was set at 0.05. In 2012, 429,793 bladder cancer cases and 165,084 bladder cancer deaths occurred in the world. The five countries with the highest age-standardized incidence were Belgium 17.5 per 100,000, Lebanon 16.6/100,000, Malta 15.8/100,000, Turkey 15.2/100,000, and Denmark 14.4/100,000. The five countries with the highest age-standardized death rates were Turkey 6.6 per 100,000, Egypt 6.5/100,000, Iraq 6.3/100,000, Lebanon 6.3/100,000, and Mali 5.2/100,000. There was a positive linear relationship between the standardized incidence rate and HDI (r=0.653, P<0.001), such that the standardized incidence rate correlated positively with life expectancy at birth, average years of schooling, and the level of income per person. A positive linear relationship was also noted between the standardized mortality rate and HDI (r=0.308, P<0.001); the standardized mortality rate likewise correlated positively with life expectancy at birth, average years of schooling, and the level of income per person. The incidence of bladder cancer was higher in developed countries and parts of Africa, while the highest mortality rates were observed in the countries of North Africa and the Middle East. Programs for better treatment in developing countries to reduce mortality from this cancer, and more detailed studies of its etiology, are essential.
A dose-response curve for biodosimetry from a 6 MV electron linear accelerator
Lemos-Pinto, M.M.P.; Cadena, M.; Santos, N.; Fernandes, T.S.; Borges, E.; Amaral, A.
2015-01-01
Biological dosimetry (biodosimetry) is based on the investigation of radiation-induced biological effects (biomarkers), mainly dicentric chromosomes, in order to correlate them with radiation dose. To interpret the dicentric score in terms of absorbed dose, a calibration curve is needed. Each curve should be constructed with respect to basic physical parameters, such as the type of ionizing radiation characterized by low or high linear energy transfer (LET) and dose rate. This study was designed to obtain dose calibration curves by scoring of dicentric chromosomes in peripheral blood lymphocytes irradiated in vitro with a 6 MV electron linear accelerator (Mevatron M, Siemens, USA). Two software programs, CABAS (Chromosomal Aberration Calculation Software) and Dose Estimate, were used to generate the curve. The two software programs are discussed; the results obtained were compared with each other and with other published low LET radiation curves. Both software programs resulted in identical linear and quadratic terms for the curve presented here, which was in good agreement with published curves for similar radiation quality and dose rates. PMID:26445334
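The linear-quadratic model underlying dose-response curves of the kind this record describes, Y = c + alpha*D + beta*D^2, can be fitted and inverted with a few lines of numpy. The doses and dicentric yields below are invented illustration values, not the paper's data, and a production fit would use Poisson maximum likelihood as CABAS and Dose Estimate do rather than least squares.

```python
import numpy as np

# Fit Y = c + alpha*D + beta*D^2 to (dose, dicentrics/cell) points,
# then invert the curve to estimate dose from an observed yield.

dose = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])           # Gy
yields = np.array([0.001, 0.02, 0.08, 0.25, 0.55, 0.95])  # dicentrics/cell

beta_, alpha_, c_ = np.polyfit(dose, yields, 2)            # highest power first
print(f"Y = {c_:.4f} + {alpha_:.4f} D + {beta_:.4f} D^2")

y_obs = 0.30                                               # observed yield
roots = np.roots([beta_, alpha_, c_ - y_obs])
dose_est = max(r.real for r in roots if abs(r.imag) < 1e-9)
print(f"estimated dose: {dose_est:.2f} Gy")
```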
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fridley, David; Zheng, Nina; Zhou, Nan
Since the late 1970s, energy labeling programs and mandatory energy performance standards have been used in many different countries to improve the efficiency levels of major residential and commercial equipment. As more countries and regions launch programs covering a greater range of products that are traded worldwide, greater attention has been given to harmonizing the specific efficiency criteria in these programs and the test methods for measurements. For example, an international compact fluorescent light (CFL) harmonization initiative was launched in 2006 to focus on collaboration between Australia, China, Europe and North America. Given the long history of standards and labeling programs, most major energy-consuming residential appliances and commercial equipment are already covered under minimum energy performance standards (MEPS) and/or energy labels. For these products, such as clothes washers and CFLs, harmonization may still be possible when national MEPS or labeling thresholds are revised. Greater opportunity for harmonization exists in newer energy-consuming products that are not commonly regulated but are under consideration for new standards and labeling programs. This may include commercial products such as water dispensers and vending machines, which are only covered by MEPS or energy labels in a few countries or regions. As China continues to expand its appliance standards and labeling programs and revise existing standards and labels, it is important to learn from recent international experiences with efficiency criteria and test procedures for the same products. Specifically, various types of standards and labeling programs already exist in North America, Europe and throughout Asia for products in China's 2010 standards and labeling programs, namely clothes washers, water dispensers, vending machines and CFLs. This report thus examines similarities and critical differences in energy efficiency values, test procedure specifications and other technical performance requirements in existing international programs in order to shed light on where Chinese programs currently stand and considerations for their 2010 programs.
NASA space station software standards issues
NASA Technical Reports Server (NTRS)
Tice, G. D., Jr.
1985-01-01
The selection and application of software standards present the NASA Space Station Program with the opportunity to serve as a pacesetter for United States software in the area of software standards. The strengths and weaknesses of each of the NASA-defined software standards issues are summarized and discussed. Several significant standards issues are offered for NASA consideration. A challenge is presented for the NASA Space Station Program to serve as a pacesetter for the U.S. software industry through: (1) management commitment to software standards; (2) overall program participation in software standards; and (3) employment of the best available technology to support software standards.
Freiberger, Manuel; Egger, Herbert; Liebmann, Manfred; Scharfetter, Hermann
2011-11-01
Image reconstruction in fluorescence optical tomography is a three-dimensional nonlinear ill-posed problem governed by a system of partial differential equations. In this paper we demonstrate that a combination of state-of-the-art numerical algorithms and a careful hardware-optimized implementation makes it possible to solve this large-scale inverse problem in a few seconds on standard desktop PCs with modern graphics hardware. In particular, we present methods to solve not only the forward but also the nonlinear inverse problem by massively parallel programming on graphics processors. A comparison of optimized CPU and GPU implementations shows that the reconstruction can be accelerated by factors of about 15 through the use of graphics hardware without compromising the accuracy of the reconstructed images.
Accelerating Recovery from Poverty: Prevention Effects for Recently Separated Mothers.
Forgatch, Marion S; Degarmo, David S
2007-01-15
This study evaluated benefits of a preventive intervention to the living standards of recently separated mothers. In the Oregon Divorce Study's randomized experimental design, data were collected 5 times over 30 months and evaluated with Hierarchical Linear Growth Models. Relative to their no-intervention control counterparts, experimental mothers had greater improvements in gross annual income, discretionary annual income, poverty threshold, income-to-needs ratios, and financial stress. Comparisons showed the intervention to produce a greater increase in income-to-needs and a greater rise-above-poverty threshold. Benefits to income-to-needs were statistically independent of maternal depressed mood, divorce status, child support, and repartnering. Financial stress reductions were explained by the intervention effect on income-to-needs. The importance of helping disadvantaged families with evidence-based programs is discussed.
NASA Astrophysics Data System (ADS)
Xu, Jiuping; Li, Jun
2002-09-01
In this paper a class of stochastic multiple-objective programming problems with one quadratic objective function, several linear objective functions, and linear constraints is introduced. This model is transformed into a deterministic multiple-objective nonlinear programming model by means of the random variables' expectations. The reference direction approach is used to deal with the linear objectives and results in a linear parametric optimization formula with a single linear objective function. This objective function is combined with the quadratic function using weighted sums. The quadratic problem is transformed into a linear (parametric) complementarity problem, the basic formulation for the proposed approach. The sufficient and necessary conditions for (properly, weakly) efficient solutions and some construction characteristics of (weakly) efficient solution sets are obtained. An interactive algorithm is proposed based on the reference direction and weighted sums. By varying the parameter vector on the right-hand side of the model, the decision maker (DM) can freely search the efficient frontier. An extended portfolio selection model is formed when liquidity is considered as another objective to be optimized besides expectation and risk. The interactive approach is illustrated with a practical example.
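The weighted-sums scalarization at the heart of this record can be sketched on a small portfolio example: expected return and liquidity enter linearly, risk enters quadratically, and one weighted objective is optimized. The asset statistics and objective weights w1-w3 below are hypothetical, and a general-purpose solver stands in for the paper's parametric linear complementarity algorithm.

```python
import numpy as np
from scipy.optimize import minimize

# Weighted-sums scalarization of a mean-risk-liquidity portfolio model:
# maximize w1*return + w3*liquidity, minimize w2*risk, as one objective.

mu = np.array([0.08, 0.12, 0.10])           # expected returns (invented)
liq = np.array([0.90, 0.40, 0.70])          # liquidity scores (invented)
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.05]])        # return covariance (invented)

w1, w2, w3 = 1.0, 2.0, 0.5                  # weights chosen by the DM

def scalarized(x):
    return -(w1 * mu @ x + w3 * liq @ x) + w2 * (x @ cov @ x)

cons = ({"type": "eq", "fun": lambda x: x.sum() - 1.0},)  # fully invested
res = minimize(scalarized, x0=np.ones(3) / 3, bounds=[(0, 1)] * 3,
               constraints=cons, method="SLSQP")
print("portfolio weights:", np.round(res.x, 3))
```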
ERIC Educational Resources Information Center
Stohlmann, Micah Stephen
2012-01-01
This case study explored the impact of a standards-based mathematics and pedagogy class on preservice elementary teachers' beliefs and conceptual subject matter knowledge of linear functions. The framework for the standards-based mathematics and pedagogy class in this study involved the National Council of Teachers of Mathematics Standards,…
A GPS Phase-Locked Loop Performance Metric Based on the Phase Discriminator Output
Stevanovic, Stefan; Pervan, Boris
2018-01-01
We propose a novel GPS phase-lock loop (PLL) performance metric based on the standard deviation of tracking error (defined as the discriminator’s estimate of the true phase error), and explain its advantages over the popular phase jitter metric using theory, numerical simulation, and experimental results. We derive an augmented GPS PLL linear model, which includes the effect of coherent averaging, to be used in conjunction with this proposed metric. The augmented linear model allows more accurate calculation of tracking error standard deviation in the presence of additive white Gaussian noise (AWGN) as compared to traditional linear models. The standard deviation of tracking error, with a threshold corresponding to half of the arctangent discriminator pull-in region, is shown to be a more reliable/robust measure of PLL performance under interference conditions than the phase jitter metric. In addition, the augmented linear model is shown to be valid up until this threshold, which facilitates efficient performance prediction, so that time-consuming direct simulations and costly experimental testing can be reserved for PLL designs that are much more likely to be successful. The effect of varying receiver reference oscillator quality on the tracking error metric is also considered. PMID:29351250
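A Monte Carlo sketch of the metric this record proposes: feed a two-quadrant arctangent discriminator with noisy prompt correlator samples and take the standard deviation of its output. The SNR values are illustrative, and the pi/4 threshold assumes the arctangent pull-in region is ±pi/2, as the abstract's "half of the pull-in region" suggests; this is not the authors' full augmented linear model.

```python
import numpy as np

# Tracking-error standard deviation of an arctangent phase discriminator
# under AWGN, at several post-correlation SNRs. Zero mean phase error is
# assumed (the loop is tracking).

rng = np.random.default_rng(1)
true_phase_err = 0.0            # rad

for snr_db in (20, 10, 5, 3):
    a = 10 ** (snr_db / 20)     # signal amplitude relative to unit noise
    i = a * np.cos(true_phase_err) + rng.standard_normal(100_000)
    q = a * np.sin(true_phase_err) + rng.standard_normal(100_000)
    disc = np.arctan(q / i)     # two-quadrant arctangent discriminator
    sigma = disc.std()
    status = "locked" if sigma < np.pi / 4 else "at risk"
    print(f"SNR {snr_db:2d} dB: sigma = {sigma:.3f} rad ({status})")
```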
A preliminary look at techniques used to obtain airdata from flight at high angles of attack
NASA Technical Reports Server (NTRS)
Moes, Timothy R.; Whitmore, Stephen A.
1990-01-01
Flight research at high angles of attack has posed new problems for airdata measurements. New sensors and techniques for measuring the standard airdata quantities of static pressure, dynamic pressure, angle of attack, and angle of sideslip were subsequently developed. The ongoing airdata research supporting NASA's F-18 high alpha research program is updated. Included are the techniques used and the preliminary results. The F-18 aircraft was flown with three research airdata systems: a standard airdata probe on the right wingtip, a self-aligning airdata probe on the left wingtip, and a flush airdata system on the nose cone. The primary research goal was to obtain steady-state calibrations for each airdata system up to an angle of attack of 50 deg. This goal was accomplished and preliminary accuracies of the three airdata systems were assessed and are presented. An effort to improve the fidelity of the airdata measurements during dynamic maneuvering is also discussed. This involved enhancement of the aerodynamic data with data obtained from linear accelerometers, rate gyros, and attitude gyros. Preliminary results of this technique are presented.
Hospitality and Tourism Education Skill Standards: Grade 12
ERIC Educational Resources Information Center
Underwood, Ryan; Spann, Lynda; Erickson, Karin; Povilaitis, Judy; Menditto, Louis; Jones, Terri; Sario, Vivienne; Verbeck, Kimberlee; Jacobi, Katherine; Michnal, Kenneth; Shelton-Meader, Sheree; Richens, Greg; Jones, Karin Erickson; Tighe, Denise; Wilhelm, Lee; Scott, Melissa
2010-01-01
The standards in this document are for Hospitality and Tourism programs and are designed to clearly state what the student should know and be able to do upon completion of an advanced high-school program. Minimally, the student will complete a two-year program to achieve all standards. The Hospitality and Tourism Standards Writing Team followed…
NASA Technical Reports Server (NTRS)
Jamison, J. W.
1994-01-01
CFORM was developed by the Kennedy Space Center Robotics Lab to assist in linear control system design and analysis using closed-form and transient-response mechanisms. The program computes the closed-form solution and transient response of a linear (constant-coefficient) differential equation. CFORM allows a choice of three input functions: the Unit Step (a unit change in displacement), the Ramp function (step velocity), and the Parabolic function (step acceleration). It is only accurate in cases where the differential equation has distinct roots, and it does not handle roots at the origin (s = 0). Initial conditions must be zero. Differential equations may be input to CFORM in two forms: polynomial and product of factors. In some linear control analyses, it may be more appropriate to use a related program, Linear Control System Design and Analysis (KSC-11376), which uses root locus and frequency response methods. CFORM was written in VAX FORTRAN for a VAX 11/780 under VAX VMS 4.7. It has a central memory requirement of 30K. CFORM was developed in 1987.
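The closed-form machinery CFORM automates can be reproduced in a few lines: expand H(s)/s in partial fractions, so the unit-step response is y(t) = sum of r_i * exp(p_i * t). The sketch below assumes distinct roots and zero initial conditions, the same restrictions CFORM imposes; the second-order system is an arbitrary example, not from the program's documentation.

```python
import numpy as np
from scipy.signal import residue

# Closed-form unit-step response via partial-fraction (residue) expansion
# of H(s)/s, for H(s) = 10 / (s^2 + 3s + 2)  (poles at -1 and -2).

num = [10.0]
den = [1.0, 3.0, 2.0]

# multiply the denominator by s to include the step input 1/s
r, p, _ = residue(num, np.polymul(den, [1.0, 0.0]))

t = np.linspace(0.0, 5.0, 6)
y = sum(ri * np.exp(pi * t) for ri, pi in zip(r, p)).real
for ti, yi in zip(t, y):
    print(f"t = {ti:.1f}  y = {yi:.4f}")   # approaches the DC gain of 5
```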
Genome-based prediction of test cross performance in two subsequent breeding cycles.
Hofheinz, Nina; Borchardt, Dietrich; Weissleder, Knuth; Frisch, Matthias
2012-12-01
Genome-based prediction of genetic values is expected to overcome shortcomings that limit the application of QTL mapping and marker-assisted selection in plant breeding. Our goal was to study the genome-based prediction of test cross performance with genetic effects that were estimated using genotypes from the preceding breeding cycle. In particular, our objectives were to employ a ridge regression approach that approximates best linear unbiased prediction of genetic effects, compare cross validation with validation using genetic material of the subsequent breeding cycle, and investigate the prospects of genome-based prediction in sugar beet breeding. We focused on the traits sugar content and standard molasses loss (ML) and used a set of 310 sugar beet lines to estimate genetic effects at 384 SNP markers. In cross validation, correlations >0.8 between observed and predicted test cross performance were observed for both traits. However, in validation with 56 lines from the next breeding cycle, a correlation of 0.8 could only be observed for sugar content; for standard ML the correlation reduced to 0.4. We found that ridge regression based on preliminary estimates of the heritability provided a very good approximation of best linear unbiased prediction and was not accompanied by a loss in prediction accuracy. We conclude that prediction accuracy assessed with cross validation within one cycle of a breeding program cannot be used as an indicator for the accuracy of predicting lines of the next cycle. Prediction of lines of the next cycle seems promising for traits with high heritabilities.
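A minimal ridge-regression BLUP sketch of the approach this record describes: marker effects solve (Z'Z + lambda*I) u = Z'y, with the ridge parameter fixed from a preliminary heritability estimate. The data are simulated with the study's dimensions (310 lines, 384 markers), and lambda = m*(1 - h2)/h2 assumes equal variance per roughly standardized marker, one common convention rather than the authors' exact estimator.

```python
import numpy as np

# RR-BLUP: simulate genotypes and phenotypes, estimate marker effects by
# ridge regression, and check prediction accuracy against the truth.

rng = np.random.default_rng(2)
n, m, h2 = 310, 384, 0.7

Z = rng.choice([-1.0, 0.0, 1.0], size=(n, m))        # marker genotypes
u_true = rng.normal(0.0, np.sqrt(h2 / m), size=m)    # true marker effects
y = Z @ u_true + rng.normal(0.0, np.sqrt(1 - h2), size=n)

lam = m * (1 - h2) / h2                              # ridge parameter
u_hat = np.linalg.solve(Z.T @ Z + lam * np.eye(m), Z.T @ y)

gebv = Z @ u_hat                                     # predicted genetic values
print("correlation(predicted, true):",
      round(np.corrcoef(gebv, Z @ u_true)[0, 1], 3))
```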
Craniofacial changes in Icelandic children between 6 and 16 years of age - a longitudinal study.
Thordarson, Arni; Johannsdottir, Berglind; Magnusson, Thordur Eydal
2006-04-01
The aim of the present study was to describe the craniofacial changes between 6 and 16 years of age in a sample of Icelandic children. Complete sets of lateral cephalometric radiographs were available from 95 males and 87 females. Twenty-two reference points were digitized and processed by standard methods, using the Dentofacial Planner computer software program. Thirty-three angular and linear variables were calculated, including: basal sagittal and vertical measurements, facial ratio, and dental, cranial base and mandibular measurements. For the angular measurements, gender differences were not statistically different for any of the measurements, in either age group, except for the variable s-n-na, which was larger in the 16-year-old boys (P ≤ 0.001). Linear variables were consistently larger in the boys compared with the girls at both age levels. During the observation period mandibular prognathism increased but the basal sagittal jaw relationship, the jaw angle, the mandibular plane angle and cranial base flexure (n-s-ba) decreased in both genders (P ≤ 0.001). Maxillary prognathism increased only in the boys from 6 to 16 years. Inclination of the lower incisors and all the cranial base dimensions increased in both genders during the observation period. When the Icelandic sample was compared with a similar Norwegian sample, small differences could be noted in the maxillary prognathism, mandibular plane angle and in the inclination of the maxilla. Larger differences were identified in the inclination of the lower incisors. These findings could be used as normative cephalometric standards for 6- and 16-year-old Icelandic children.
NIST Mechanisms for Disseminating Measurements
Gills, T. E.; Dittman, S.; Rumble, J. R.; Brickenkamp, C. S.; Harris, G. L.; Trahey, N. M.
2001-01-01
The national responsibilities assigned to the National Bureau of Standards (NBS) early in the last century for providing measurement assistance and service are carried out today by the four programs that comprise the National Institute of Standards and Technology (NIST) Office of Measurement Services (OMS). They are the Calibration Program (CP), the Standard Reference Materials Program (SRMP), the Standard Reference Data Program (SRDP), and the Weights and Measures Program (W&MP). Organized when the U.S. Congress changed the NBS name to NIST, the OMS facilitates access to the measurement and standards activities of NIST laboratories and programs through the dissemination of NIST products, data, and services. A brief historical introduction followed by a perspective of pivotal measurement developments from 1901 to the present and concluding with a look to the future of NIST measurement services in the next decade of the new millennium are presented for each OMS program. PMID:27500025
Teaching Machines and Programmed Instruction.
ERIC Educational Resources Information Center
Kay, Harry; And Others
The various devices used in programed instruction range from the simple linear programed book to branching and skip branching programs, adaptive teaching machines, and even complex computer based systems. In order to provide a background for the would-be programer, the essential principles of each of these devices is outlined. Different ideas of…
On the sparseness of 1-norm support vector machines.
Zhang, Li; Zhou, Weida
2010-04-01
There is some empirical evidence showing that 1-norm Support Vector Machines (1-norm SVMs) have good sparseness; however, how much sparseness 1-norm SVMs can achieve, and whether they have a sparser representation than standard SVMs, have been unclear. In this paper we examine the sparseness of 1-norm SVMs. Two upper bounds on the number of nonzero coefficients in the decision function of 1-norm SVMs are presented. First, the number of nonzero coefficients in a 1-norm SVM is at most equal to the number of exact support vectors lying on the +1 and -1 discriminating surfaces, while that in a standard SVM equals the number of all support vectors, which implies that 1-norm SVMs have better sparseness than standard SVMs. Second, the number of nonzero coefficients is at most equal to the rank of the sample matrix. A brief review of the geometry of linear programming and the primal steepest-edge pricing simplex method is given, which allows us to prove the two upper bounds and evaluate their tightness by experiments. Experimental results on toy data sets and the UCI data sets illustrate our analysis. Copyright 2009 Elsevier Ltd. All rights reserved.
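The standard 1-norm SVM linear program that this record analyzes can be written out directly: minimize the 1-norm of the coefficients plus a slack penalty, subject to margin constraints, with coefficients split into nonnegative parts. The sketch below uses a linear kernel and toy 2-D data; the data, C value, and sparsity threshold are illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

# 1-norm SVM as an LP: min sum(a+ + a-) + C*sum(xi)
# s.t. y_i * (K_i (a+ - a-) + b) >= 1 - xi_i,  all variables >= 0,
# with b = b+ - b- so the bias is also nonnegative-split.

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-1, 0.5, (20, 2)), rng.normal(1, 0.5, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
m, C = len(y), 1.0

K = X @ X.T                                       # linear kernel matrix
# variable order: [a+ (m), a- (m), b+, b-, xi (m)]
c = np.concatenate([np.ones(2 * m), [0.0, 0.0], C * np.ones(m)])
A_ub = np.hstack([-y[:, None] * K, y[:, None] * K,
                  -y[:, None], y[:, None], -np.eye(m)])
b_ub = -np.ones(m)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
alpha = res.x[:m] - res.x[m:2 * m]
print("nonzero coefficients:", int(np.sum(np.abs(alpha) > 1e-6)), "of", m)
```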
Jiang, Hua; Yang, Jing; Fan, Li; Li, Fengmin; Huang, Qiliang
2013-01-01
The toxic inert ingredients in pesticide formulations are strictly regulated in many countries. In this paper, a simple and efficient headspace-gas chromatography-mass spectrometry (HS-GC-MS) method using fluorobenzene as an internal standard (IS) for rapid simultaneous determination of benzene and toluene in pesticide emulsifiable concentrate (EC) was established. The headspace and GC-MS conditions were investigated and developed. A nonpolar fused silica Rtx-5 capillary column (30 m × 0.20 mm i.d. and 0.25 μm film thickness) with temperature programming was used. Under optimized headspace conditions (equilibration temperature of 120°C, equilibration time of 5 min, and sample size of 50 μL), the regression of the peak area ratios of benzene and toluene to IS on the concentrations of the analytes fitted a linear relationship well at concentration levels ranging from 3.2 g/L to 16.0 g/L. Standard additions of benzene and toluene to blank solutions of different matrices lead to recoveries of 100.1%–109.5% with a relative standard deviation (RSD) of 0.3%–8.1%. The method presented here stands out as simple and easily applicable, and it provides a way to determine toxic volatile adjuvants in liquid pesticide formulations. PMID:23607048
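The internal-standard calibration underlying this record is ordinary linear regression of the analyte/IS peak-area ratio on concentration, inverted for unknowns. The area ratios below are invented for illustration; only the 3.2-16.0 g/L linear range comes from the abstract.

```python
import numpy as np

# Internal-standard calibration: fit ratio = slope*conc + intercept,
# then solve for the concentration of an unknown from its area ratio.

conc = np.array([3.2, 6.4, 9.6, 12.8, 16.0])       # benzene, g/L
ratio = np.array([0.41, 0.83, 1.22, 1.65, 2.04])   # area(benzene)/area(IS)

slope, intercept = np.polyfit(conc, ratio, 1)
r2 = np.corrcoef(conc, ratio)[0, 1] ** 2
print(f"ratio = {slope:.4f} * conc + {intercept:.4f}   (R^2 = {r2:.4f})")

unknown_ratio = 1.40
print("estimated concentration:",
      round((unknown_ratio - intercept) / slope, 2), "g/L")
```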
PIFCGT: A PIF autopilot design program for general aviation aircraft
NASA Technical Reports Server (NTRS)
Broussard, J. R.
1983-01-01
This report documents the PIFCGT computer program. Written in FORTRAN, PIFCGT is a computer design aid for determining Proportional-Integral-Filter (PIF) control laws for aircraft autopilots implemented with a Command Generator Tracker (CGT). The program uses Linear-Quadratic-Regulator synthesis algorithms to determine feedback gains, and includes software to solve the feedforward matrix equation, which is useful in determining the command generator tracker feedforward gains. The program accepts aerodynamic stability derivatives and computes the corresponding aerodynamic linear model. The nine autopilot modes that can be designed include four maneuver modes (ROLL SEL, PITCH SEL, HDG SEL, ALT SEL), four final-approach modes (APR GS, APR LOCI, APR LOCR, APR LOCP), and a BETA HOLD mode. The program has been compiled and executed on a CDC computer.
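The Linear-Quadratic-Regulator synthesis step this record mentions reduces to solving an algebraic Riccati equation for the feedback gains. A minimal sketch with scipy follows; the second-order A, B matrices and Q, R weights are placeholders, not an aircraft model or PIFCGT's actual interface.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# LQR synthesis: solve the continuous-time algebraic Riccati equation,
# then form the state-feedback gain K = R^-1 B' P.

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])     # placeholder plant dynamics
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])         # state weighting
R = np.array([[0.1]])            # control weighting

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
print("feedback gains K =", np.round(K, 3))
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```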
Neoclassical transport including collisional nonlinearity.
Candy, J; Belli, E A
2011-06-10
In the standard δf theory of neoclassical transport, the zeroth-order (Maxwellian) solution is obtained analytically via the solution of a nonlinear equation. The first-order correction δf is subsequently computed as the solution of a linear, inhomogeneous equation that includes the linearized Fokker-Planck collision operator. This equation admits analytic solutions only in extreme asymptotic limits (banana, plateau, Pfirsch-Schlüter), and so must be solved numerically for realistic plasma parameters. Recently, numerical codes have appeared which attempt to compute the total distribution f more accurately than in the standard ordering by retaining some nonlinear terms related to finite-orbit width, while simultaneously reusing some form of the linearized collision operator. In this work we show that higher-order corrections to the distribution function may be unphysical if collisional nonlinearities are ignored.
NASA Technical Reports Server (NTRS)
Magnus, A. E.; Epton, M. A.
1981-01-01
Panel Aerodynamics (PAN AIR) is a system of computer programs designed to analyze subsonic and supersonic inviscid flows about arbitrary configurations. A panel method is a program which solves a linear partial differential equation by approximating the configuration surface by a set of panels. An overview of the theory of potential flow in general and PAN AIR in particular is given, along with detailed mathematical formulations. Fluid dynamics, the Navier-Stokes equation, and the theory of panel methods are also discussed.
An Ada Linear-Algebra Software Package Modeled After HAL/S
NASA Technical Reports Server (NTRS)
Klumpp, Allan R.; Lawson, Charles L.
1990-01-01
New avionics software written more easily. Software package extends Ada programming language to include linear-algebra capabilities similar to those of HAL/S programming language. Designed for such avionics applications as Space Station flight software. In addition to built-in functions of HAL/S, package incorporates quaternion functions used in Space Shuttle and Galileo projects and routines from LINPAK solving systems of equations involving general square matrices. Contains two generic programs: one for floating-point computations and one for integer computations. Written on IBM/AT personal computer running under PC DOS, v.3.1.
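The quaternion routines this record highlights are straightforward to illustrate: Hamilton product and rotation of a 3-vector. The sketch below uses the scalar-first (w, x, y, z) convention as an assumption; it is a generic illustration in Python, not the package's actual Ada interface.

```python
import numpy as np

# Quaternion utilities: Hamilton product and rotation v' = q v q*.

def qmul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qrotate(q, v):
    qv = np.concatenate([[0.0], v])                 # embed vector
    qc = q * np.array([1.0, -1.0, -1.0, -1.0])      # conjugate
    return qmul(qmul(q, qv), qc)[1:]

theta = np.pi / 2                                   # 90 deg about z-axis
q = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])
print(qrotate(q, np.array([1.0, 0.0, 0.0])))        # ~ [0, 1, 0]
```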
Wilson, Ryan B; Siegler, W Christopher; Hoggard, Jamin C; Fitz, Brian D; Nadeau, Jeremy S; Synovec, Robert E
2011-05-27
By taking into consideration band broadening theory and using those results to select experimental conditions, and also by reducing the injection pulse width, peak capacity production (i.e., peak capacity per separation time) is substantially improved for one-dimensional (1D-GC) and comprehensive two-dimensional (GC×GC) gas chromatography. A theoretical framework for determining the optimal linear gas velocity (the linear gas velocity producing the minimum H) from experimental parameters provides an in-depth understanding of the potential for GC separations in the absence of extra-column band broadening. The extra-column band broadening is referred to herein as off-column band broadening since it is additional band broadening not due to the on-column separation processes. The theory provides the basis to experimentally evaluate and improve temperature-programmed 1D-GC separations, but in order to do so with a commercial 1D-GC instrument platform, off-column band broadening from injection and detection needed to be significantly reduced. Specifically for injection, a resistively heated transfer line is coupled to a high-speed diaphragm valve to provide a suitable injection pulse width (referred to herein as modified injection). Additionally, flame ionization detection (FID) was modified to provide a data collection rate of 5 kHz. The use of long, relatively narrow open tubular capillary columns and a 40°C/min programming rate were explored for 1D-GC, specifically a 40 m, 180 μm i.d. capillary column operated at or above the optimal average linear gas velocity. Injection using standard auto-injection with a 1:400 split resulted in an average peak width of ∼1.5 s, hence a peak capacity production of 40 peaks/min. In contrast, use of modified injection produced ∼500 ms peak widths for 1D-GC, i.e., a peak capacity production of 120 peaks/min (a 3-fold improvement over standard auto-injection). Implementation of modified injection resulted in retention time, peak width, peak height, and peak area average RSD%'s of 0.006, 0.8, 3.4, and 4.0%, respectively. Modified injection onto the first column of a GC×GC coupled with another high-speed valve injection onto the second column produced an instrument with high peak capacity production (500-800 peaks/min), ∼5-fold to 8-fold higher than typically reported for GC×GC. Copyright © 2011 Elsevier B.V. All rights reserved.
From master slave interferometry to complex master slave interferometry: theoretical work
NASA Astrophysics Data System (ADS)
Rivet, Sylvain; Bradu, Adrian; Maria, Michael; Feuchter, Thomas; Leick, Lasse; Podoleanu, Adrian
2018-03-01
A general theoretical framework is described to establish the advantages and drawbacks of two novel Fourier-domain Optical Coherence Tomography (OCT) methods, denoted Master/Slave Interferometry (MSI) and its extension, Complex Master/Slave Interferometry (CMSI). Instead of linearizing the digital data representing the channeled spectrum before a Fourier transform is applied to it (as in standard OCT methods), the channeled spectrum is decomposed on a basis of local oscillations. This removes the need for linearization, which is generally time-consuming, before any calculation of the depth profile in the range of interest. In this model two functions, g and h, are introduced. The function g describes the modulation chirp of the channeled spectrum signal due to nonlinearities in the decoding process from wavenumber to time. The function h describes the dispersion in the interferometer. The use of these two functions brings two major improvements to previous implementations of the MSI method. The paper details the steps to obtain the functions g and h, and expresses the CMSI in a matrix formulation that makes the method easy to implement in LabVIEW using parallel programming with multiple cores.
Tesija Kuna, Andrea; Dukic, Kristina; Nikolac Gabaj, Nora; Miler, Marijana; Vukasovic, Ines; Langer, Sanja; Simundic, Ana-Maria; Vrkic, Nada
2018-03-08
To compare the analytical performances of the enzymatic method (EM) and capillary electrophoresis (CE) for hemoglobin A1c (HbA1c) measurement. Imprecision, carryover, stability, linearity, method comparison, and interferences were evaluated for HbA1c via EM (Abbott Laboratories, Inc) and CE (Sebia). Both methods showed overall within-laboratory imprecision of less than 3% in International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) units (<2% in National Glycohemoglobin Standardization Program [NGSP] units). Carryover effects were within acceptable criteria. The linearity of both methods proved to be excellent (R2 = 0.999). Significant proportional and constant differences were found for EM compared with CE, but they were not clinically relevant (<5 mmol/mol; NGSP <0.5%). At the clinically relevant HbA1c concentration, the stability observed with both methods was acceptable (bias <3%). Triglyceride levels of 8.11 mmol per L or greater were shown to interfere with EM, and fetal hemoglobin (HbF) of 10.6% or greater with CE. The enzymatic method proved comparable to the CE method in analytical performance; however, certain interferences can influence the measurements of each method.
Caution on the use of liquid nitrogen traps in stable hydrogen isotope-ratio mass spectrometry
Coplen, T.B.; Qi, H.
2010-01-01
An anomalous stable hydrogen isotopic fractionation of 4 ‰ in gaseous hydrogen has been correlated with the process of adding liquid nitrogen (LN2) to top off the dewar of a stainless-steel water trap on a gaseous hydrogen-water platinum equilibration system. Although the cause of this isotopic fractionation is unknown, its effect can be mitigated by (1) increasing the capacity of any dewars so that they do not need to be filled during a daily analytic run, (2) interspersing isotopic reference waters among unknowns, and (3) applying a linear drift correction and linear normalization to isotopic results with a program such as the Laboratory Information Management System (LIMS) for Light Stable Isotopes. With adoption of the above guidelines, measurement uncertainty can be substantially improved. For example, the long-term (months to years) δ2H reproducibility (1σ standard deviation) of nine local isotopic reference waters analyzed daily improved substantially from about 1 ‰ to 0.58 ‰. This isotopically fractionating mechanism might affect other isotope-ratio mass spectrometers in which LN2 is used as a moisture trap for gaseous hydrogen. This article not subject to U.S. Copyright. Published 2010 by the American Chemical Society.
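The two corrections recommended in this record, a linear drift correction fitted to reference waters interspersed through the run and a two-point linear normalization to accepted values, are simple enough to sketch directly. All sequence positions and delta values below are invented illustration numbers, not LIMS output.

```python
import numpy as np

# (1) Linear drift correction: fit the drift of a reference water's
#     measured d2H against run position, then subtract it everywhere.
# (2) Two-point normalization: map drift-corrected values onto the
#     accepted values of two reference waters.

pos_ref = np.array([1, 10, 20, 30, 40])                    # run positions
meas_ref = np.array([-50.2, -49.5, -48.9, -48.1, -47.6])   # measured d2H
true_ref = -50.0                                           # accepted, per mil

drift_slope, _ = np.polyfit(pos_ref, meas_ref - true_ref, 1)

def drift_correct(delta, position):
    return delta - drift_slope * position

m1, m2 = drift_correct(-50.1, 5), drift_correct(2.3, 25)   # measured refs
t1, t2 = -50.0, 1.8                                        # accepted values
scale = (t2 - t1) / (m2 - m1)

def normalize(delta_corrected):
    return t1 + scale * (delta_corrected - m1)

sample = drift_correct(-31.4, 33)
print("normalized d2H:", round(normalize(sample), 2), "per mil")
```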
Modeling Longitudinal Data Containing Non-Normal Within Subject Errors
NASA Technical Reports Server (NTRS)
Feiveson, Alan; Glenn, Nancy L.
2013-01-01
The mission of the National Aeronautics and Space Administration’s (NASA) human research program is to advance safe human spaceflight. This involves conducting experiments, collecting data, and analyzing data. The data are longitudinal and result from a relatively few number of subjects; typically 10 – 20. A longitudinal study refers to an investigation where participant outcomes and possibly treatments are collected at multiple follow-up times. Standard statistical designs such as mean regression with random effects and mixed–effects regression are inadequate for such data because the population is typically not approximately normally distributed. Hence, more advanced data analysis methods are necessary. This research focuses on four such methods for longitudinal data analysis: the recently proposed linear quantile mixed models (lqmm) by Geraci and Bottai (2013), quantile regression, multilevel mixed–effects linear regression, and robust regression. This research also provides computational algorithms for longitudinal data that scientists can directly use for human spaceflight and other longitudinal data applications, then presents statistical evidence that verifies which method is best for specific situations. This advances the study of longitudinal data in a broad range of applications including applications in the sciences, technology, engineering and mathematics fields.
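Of the four methods this record compares, median (quantile) regression is the easiest to sketch: unlike mean regression, it makes no normality assumption about within-subject errors. The toy longitudinal data below (12 subjects, 5 visits, skewed errors) are simulated, not NASA data, and this sketch ignores the within-subject correlation that the lqmm approach models.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Median regression on simulated longitudinal data with skewed
# (exponential) errors, where ordinary mean regression is a poor fit.

rng = np.random.default_rng(4)
subjects, visits = 12, 5
time = np.tile(np.arange(visits), subjects)
outcome = 10 + 0.8 * time + rng.exponential(2.0, subjects * visits)
df = pd.DataFrame({"time": time, "outcome": outcome})

median_fit = smf.quantreg("outcome ~ time", df).fit(q=0.5)
print(median_fit.params)     # intercept and slope at the median
```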
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Guptill, James D.; Hopkins, Dale A.; Lavelle, Thomas M.
2000-01-01
The NASA Engine Performance Program (NEPP) can configure and analyze almost any type of gas turbine engine that can be generated through the interconnection of a set of standard physical components. In addition, the code can optimize engine performance by changing adjustable variables under a set of constraints. However, for engine cycle problems at certain operating points, the NEPP code can encounter difficulties: nonconvergence in the currently implemented Powell's optimization algorithm and deficiencies in the Newton-Raphson solver during engine balancing. A project was undertaken to correct these deficiencies. Nonconvergence was avoided through a cascade optimization strategy, and deficiencies associated with engine balancing were eliminated through neural network and linear regression methods. An approximation-interspersed cascade strategy was used to optimize the engine's operation over its flight envelope. Replacement of Powell's algorithm by the cascade strategy improved the optimization segment of the NEPP code. The performance of the linear regression and neural network methods as alternative engine analyzers was found to be satisfactory. This report considers two examples, a supersonic mixed-flow turbofan engine and a subsonic wave-rotor-topped engine, to illustrate the results and discusses insights gained from the improved version of the NEPP code.