Science.gov

Sample records for linear mixed-integer models

  1. Learning oncogenetic networks by reducing to mixed integer linear programming.

    PubMed

    Shahrabi Farahani, Hossein; Lagergren, Jens

    2013-01-01

    Cancer can result from the accumulation of different types of genetic mutations, such as copy number aberrations. Tumor data are cross-sectional and do not contain the temporal order of the genetic events. Finding the order in which the genetic events occurred, and the progression pathways, is of vital importance in understanding the disease. To model cancer progression, we propose Progression Networks, a special case of Bayesian networks tailored to model disease progression. Progression networks have similarities with Conjunctive Bayesian Networks (CBNs) [1], a variation of Bayesian networks also proposed for modeling disease progression. We also describe a learning algorithm for Bayesian networks in general and progression networks in particular. We reduce the hard problem of learning Bayesian and progression networks to Mixed Integer Linear Programming (MILP). MILP is NP-complete, but very good heuristics exist for it. We tested our algorithm on synthetic and real cytogenetic data from renal cell carcinoma, and compared our learned progression networks with the networks proposed in earlier publications. The software is available at https://bitbucket.org/farahani/diprog.
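    The reduction described in this abstract maps structure learning to optimization over binary edge indicators subject to acyclicity. The toy sketch below uses hypothetical edge scores and exhaustive enumeration in place of a MILP solver; in a real MILP formulation each edge would be a 0/1 variable and acyclicity would be enforced with linear constraints:

```python
from itertools import combinations

# Hypothetical data-fit gain for including each candidate edge i -> j.
scores = {(0, 1): 2.0, (1, 2): 1.5, (0, 2): -0.5, (2, 0): 1.0}

def is_acyclic(edges, n=3):
    # Kahn-style check: repeatedly remove nodes with no incoming edges.
    remaining = set(range(n))
    es = set(edges)
    while remaining:
        free = [v for v in remaining if not any((u, v) in es for u in remaining)]
        if not free:
            return False
        remaining -= set(free)
    return True

# Enumerate all edge subsets (the 0/1 assignments), keep DAGs, maximize score.
best = max(
    (s for r in range(len(scores) + 1) for s in combinations(scores, r) if is_acyclic(s)),
    key=lambda s: sum(scores[e] for e in s),
)
print(sorted(best))
```

    On this instance the best-scoring structure keeps the two positively scored edges that do not close a cycle; the edge (2, 0) is rejected because adding it would create the cycle 0 -> 1 -> 2 -> 0.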

  2. Mixed integer linear programming for maximum-parsimony phylogeny inference.

    PubMed

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2008-01-01

    Reconstruction of phylogenetic trees is a fundamental problem in computational biology. While excellent heuristic methods are available for many variants of this problem, new advances in phylogeny inference will be required if we are to continue making effective use of the rapidly growing stores of variation data now being gathered. In this paper, we present two integer linear programming (ILP) formulations for finding the most parsimonious phylogenetic tree from a set of binary variation data. One method uses a flow-based formulation that can produce exponential numbers of variables and constraints in the worst case. The method has, however, proven extremely efficient in practice on datasets well beyond the reach of the available provably efficient methods, solving several large mtDNA and Y-chromosome instances within a few seconds and giving provably optimal results in times competitive with fast heuristics that cannot guarantee optimality. An alternative formulation establishes that the problem can be solved with a polynomial-sized ILP. We further present a web server, built on the exponential-sized ILP, that performs fast maximum parsimony inferences and serves as a front end to a database of precomputed phylogenies spanning the human genome.

  3. Optimized Waterspace Management and Scheduling Using Mixed-Integer Linear Programming

    DTIC Science & Technology

    2016-01-01

    TECHNICAL REPORT NSWC PCD TR 2015-003, Optimized Waterspace Management and Scheduling Using Mixed-Integer Linear Programming ... effects on optimization quality ... The use of autonomous systems to perform increasingly ... constraints required for the mathematical formulation of the MCM scheduling problem pertaining to the survey constraints and logistics management.

  4. A Mixed Integer Linear Program for Solving a Multiple Route Taxi Scheduling Problem

    NASA Technical Reports Server (NTRS)

    Montoya, Justin Vincent; Wood, Zachary Paul; Rathinam, Sivakumar; Malik, Waqar Ahmad

    2010-01-01

    Aircraft movements on taxiways at busy airports often create bottlenecks. This paper introduces a mixed integer linear program to solve a Multiple Route Aircraft Taxi Scheduling Problem. The outputs of the model are optimal taxi schedules, which include routing decisions for taxiing aircraft. The model extends an existing single route formulation to include routing decisions. An efficient comparison framework compares the multi-route formulation and the single route formulation. The multi-route model is exercised for east side airport surface traffic at Dallas/Fort Worth International Airport to determine whether any arrival taxi time savings can be achieved by allowing arrivals to have two taxi routes: a route that crosses an active departure runway and a perimeter route that avoids the crossing. Results indicate that the multi-route formulation yields reduced arrival taxi times over the single route formulation only when the perimeter taxiway is used. When the departure aircraft are given an optimal and fixed takeoff sequence, the cumulative arrival taxi time savings of the multi-route formulation can be as high as 3.6 hours over the single route formulation. If the departure sequence is not optimal, the multi-route formulation yields smaller taxi time savings over the single route formulation, but the average arrival taxi time is significantly decreased.

  5. Mixed Integer Linear Programming based machine learning approach identifies regulators of telomerase in yeast.

    PubMed

    Poos, Alexandra M; Maicher, André; Dieckmann, Anna K; Oswald, Marcus; Eils, Roland; Kupiec, Martin; Luke, Brian; König, Rainer

    2016-06-02

    Understanding telomere length maintenance mechanisms is central in cancer biology, as their dysregulation is one of the hallmarks of immortalization of cancer cells. Important for this well-balanced control is the transcriptional regulation of the telomerase genes. We integrated Mixed Integer Linear Programming models into a comparative machine learning based approach to identify regulatory interactions that best explain the discrepancy of telomerase transcript levels in yeast mutants with deleted regulators showing aberrant telomere length, when compared to mutants with normal telomere length. We uncover novel regulators of telomerase expression, several of which affect histone levels or modifications. In particular, our results point to the transcription factors Sum1, Hst1 and Srb2 as being important for the regulation of EST1 transcription, and we validated the effect of Sum1 experimentally. We compiled our machine learning method into a user-friendly package for R that can be applied straightforwardly to similar problems integrating gene regulator binding information and expression profiles of samples of, e.g., different phenotypes, diseases or treatments.

  6. An inexact two-stage mixed integer linear programming method for solid waste management in the City of Regina.

    PubMed

    Li, Y P; Huang, G H

    2006-11-01

    In this study, an interval-parameter two-stage mixed integer linear programming (ITMILP) model is developed for supporting long-term planning of waste management activities in the City of Regina. In the ITMILP, both two-stage stochastic programming and interval linear programming are introduced into a general mixed integer linear programming framework. Uncertainties expressed as not only probability density functions but also discrete intervals can be reflected. The model can help tackle the dynamic, interactive and uncertain characteristics of the solid waste management system in the City, and can address issues concerning plans for cost-effective waste diversion and landfill prolongation. Three scenarios are considered based on different waste management policies. The results indicate that reasonable solutions have been generated. They are valuable for supporting the adjustment or justification of the existing waste flow allocation patterns, the long-term capacity planning of the City's waste management system, and the formulation of local policies and regulations regarding waste generation and management.
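    The two-stage structure underlying a model like the ITMILP can be illustrated with a deliberately small sketch: binary first-stage capacity decisions, discrete load scenarios, and a recourse cost for shortfall. All numbers below are hypothetical, and exhaustive enumeration stands in for the MILP solver:

```python
from itertools import product

# Hypothetical first-stage binary choices with capital costs, and two
# equally likely waste-load scenarios (probability, load).
capital = {"landfill": 8.0, "recycling": 5.0}
scenarios = [(0.5, 30.0), (0.5, 50.0)]

def recourse_cost(choices, load):
    # Capacity from the chosen facilities; shortfall is hauled at a premium.
    capacity = 25.0 + (20.0 if choices["landfill"] else 0.0) \
                    + (15.0 if choices["recycling"] else 0.0)
    shortfall = max(0.0, load - capacity)
    return 0.2 * min(load, capacity) + 1.5 * shortfall

best = None
for landfill, recycling in product([0, 1], repeat=2):
    choices = {"landfill": landfill, "recycling": recycling}
    cost = sum(capital[k] for k, v in choices.items() if v)       # stage 1
    cost += sum(p * recourse_cost(choices, load)                  # stage 2
                for p, load in scenarios)
    if best is None or cost < best[0]:
        best = (cost, choices)
print(best)
```

    The first-stage choice that minimizes capital plus expected recourse cost is the model's hedge against the uncertain second-stage load; here expanding the landfill alone wins.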

  7. Robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming.

    PubMed

    Baran, Richard; Northen, Trent R

    2013-10-15

    Untargeted metabolite profiling using liquid chromatography and mass spectrometry coupled via electrospray ionization is a powerful tool for the discovery of novel natural products, metabolic capabilities, and biomarkers. However, the elucidation of the identities of uncharacterized metabolites from spectral features remains challenging. A critical step in the metabolite identification workflow is the assignment of redundant spectral features (adducts, fragments, multimers) and calculation of the underlying chemical formula. Inspection of the data by experts using computational tools solving partial problems (e.g., chemical formula calculation for individual ions) can be performed to disambiguate alternative solutions and provide reliable results. However, manual curation is tedious and not readily scalable or standardized. Here we describe an automated procedure for the robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming optimization (RAMSI). Chemical rules among related ions are expressed as linear constraints and both the spectra interpretation and chemical formula calculation are performed in a single optimization step. This approach is unbiased in that it does not require predefined sets of neutral losses, and positive and negative polarity spectra can be combined in a single optimization. The procedure was evaluated with 30 experimental mass spectra and was found to effectively identify the protonated or deprotonated molecule ([M+H]+ or [M-H]-) while being robust to the presence of background ions. RAMSI provides a much-needed standardized tool for interpreting ions for subsequent identification in untargeted metabolomics workflows.
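    The core of the formula-calculation step is an integer search: choose non-negative atom counts whose mass-weighted sum matches the observed mass within tolerance. A minimal sketch (CHO compositions only, brute force in place of the MILP's linear constraints; masses are standard monoisotopic values to ~4 decimals):

```python
from itertools import product

# Monoisotopic masses of the elements considered.
MASS = {"C": 12.0, "H": 1.00783, "O": 15.9949}

def candidate_formulas(target, tol=0.01, max_atoms=12):
    # Enumerate small (C, H, O) integer compositions; a MILP solver would
    # search this lattice with linear constraints instead of brute force.
    hits = []
    for c, h, o in product(range(max_atoms + 1), repeat=3):
        mass = c * MASS["C"] + h * MASS["H"] + o * MASS["O"]
        if abs(mass - target) <= tol:
            hits.append((c, h, o))
    return hits

# Ethanol, C2H6O, has monoisotopic mass ~46.0419.
print(candidate_formulas(46.0419))
```

    In RAMSI the same kind of integer variables are additionally tied together across related ions (adducts, fragments) by linear constraints, so one optimization explains the whole spectrum.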

  8. A mixed integer bi-level DEA model for bank branch performance evaluation by Stackelberg approach

    NASA Astrophysics Data System (ADS)

    Shafiee, Morteza; Lotfi, Farhad Hosseinzadeh; Saleh, Hilda; Ghaderi, Mehdi

    2016-11-01

    One of the most complicated decision-making problems for managers is the evaluation of bank performance, which involves various criteria. There are many studies on bank efficiency evaluation by network DEA in the literature, but they do not address multi-level networks. Wu (Eur J Oper Res 207:856-864, 2010) was the first to propose a bi-level structure for cost efficiency, using multi-level programming and cost efficiency, and solved the model with nonlinear programming. In this paper, we focus on the multi-level structure and propose a bi-level DEA model, which we then solve with linear programming. Moreover, we significantly improve the way the optimal solution is reached compared with Wu (2010) by converting the NP-hard nonlinear program into a mixed integer linear program. This study uses a bi-level programming data envelopment analysis model that embodies internal structure with Stackelberg-game relationships to evaluate the performance of a banking chain. The perspective of decentralized decisions is taken in this paper to cope with complex interactions in the banking chain. The results derived from bi-level programming DEA can provide valuable insights and detailed information for managers, helping them evaluate the performance of the banking chain as a whole using Stackelberg-game relationships. Finally, the model was applied to an Iranian bank to evaluate cost efficiency.

  9. Automatic Design of Synthetic Gene Circuits through Mixed Integer Non-linear Programming

    PubMed Central

    Huynh, Linh; Kececioglu, John; Köppe, Matthias; Tagkopoulos, Ilias

    2012-01-01

    Automatic design of synthetic gene circuits poses a significant challenge to synthetic biology, primarily due to the complexity of biological systems, and the lack of rigorous optimization methods that can cope with the combinatorial explosion as the number of biological parts increases. Current optimization methods for synthetic gene design rely on heuristic algorithms that are usually not deterministic, deliver sub-optimal solutions, and provide no guaranties on convergence or error bounds. Here, we introduce an optimization framework for the problem of part selection in synthetic gene circuits that is based on mixed integer non-linear programming (MINLP), which is a deterministic method that finds the globally optimal solution and guarantees convergence in finite time. Given a synthetic gene circuit, a library of characterized parts, and user-defined constraints, our method can find the optimal selection of parts that satisfy the constraints and best approximates the objective function given by the user. We evaluated the proposed method in the design of three synthetic circuits (a toggle switch, a transcriptional cascade, and a band detector), with both experimentally constructed and synthetic promoter libraries. Scalability and robustness analysis shows that the proposed framework scales well with the library size and the solution space. The work described here is a step towards a unifying, realistic framework for the automated design of biological circuits. PMID:22536398

  11. Enhanced index tracking modeling in portfolio optimization with a mixed-integer programming approach

    NASA Astrophysics Data System (ADS)

    Siew, Lam Weng; Jaaman, Saiful Hafizah Hj.; Ismail, Hamizun bin

    2014-09-01

    Enhanced index tracking is a popular form of portfolio management in stock market investment. It aims to construct an optimal portfolio that generates excess return over the return achieved by the stock market index, without purchasing all of the stocks that make up the index. The objective of this paper is to construct an optimal portfolio using a mixed-integer programming model that adopts a regression approach in order to generate a higher portfolio mean return than the stock market index return. The data consist of 24 component stocks of the Malaysian market index, the FTSE Bursa Malaysia Kuala Lumpur Composite Index, from January 2010 until December 2012. The results show that the optimal portfolio of the mixed-integer programming model is able to generate a higher mean return than the FTSE Bursa Malaysia Kuala Lumpur Composite Index while selecting only 30% of the index components.
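    The cardinality-constrained selection at the heart of such a model can be sketched with hypothetical return data: binary variables decide which stocks enter the portfolio, and for simplicity the sketch equal-weights the chosen stocks and enumerates all subsets rather than calling a MIP solver:

```python
from itertools import combinations
from statistics import mean

# Hypothetical weekly returns for the index and four component stocks.
index_r = [0.010, -0.005, 0.020, 0.000]
stocks = {
    "A": [0.012, -0.006, 0.018, 0.001],
    "B": [0.030, -0.020, 0.050, -0.010],
    "C": [0.000, 0.000, 0.001, 0.000],
    "D": [0.009, -0.004, 0.021, -0.001],
}

def tracking_error(names):
    # Squared deviation of the equal-weight portfolio from the index.
    port = [mean(stocks[n][t] for n in names) for t in range(len(index_r))]
    return sum((p - i) ** 2 for p, i in zip(port, index_r))

k = 2  # cardinality limit: this is the binary (integer) part of the MIP
best = min(combinations(stocks, k), key=tracking_error)
print(best)
```

    A full enhanced-index-tracking model would also optimize real-valued weights and target excess return rather than pure tracking error; the sketch only shows where the integer variables come from.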

  12. PySP: modeling and solving stochastic mixed-integer programs in Python.

    SciTech Connect

    Woodruff, David L.; Watson, Jean-Paul

    2010-08-01

    Although stochastic programming is a powerful tool for modeling decision-making under uncertainty, various impediments have historically prevented its widespread use. One key factor involves the ability of non-specialists to easily express stochastic programming problems as extensions of deterministic models, which are often formulated first. A second key factor relates to the difficulty of solving stochastic programming models, particularly the general mixed-integer, multi-stage case. Intricate, configurable, and parallel decomposition strategies are frequently required to achieve tractable run-times. We simultaneously address both of these factors in our PySP software package, which is part of the COIN-OR Coopr open-source Python project for optimization. To formulate a stochastic program in PySP, the user specifies both the deterministic base model and the scenario tree with associated uncertain parameters in the Pyomo open-source algebraic modeling language. Given these two models, PySP provides two paths for solution of the corresponding stochastic program. The first alternative involves writing the extensive form and invoking a standard deterministic (mixed-integer) solver. For more complex stochastic programs, we provide an implementation of Rockafellar and Wets' Progressive Hedging algorithm. Our particular focus is on the use of Progressive Hedging as an effective heuristic for approximating general multi-stage, mixed-integer stochastic programs. By leveraging the combination of a high-level programming language (Python) and the embedding of the base deterministic model in that language (Pyomo), we are able to provide completely generic and highly configurable solver implementations. PySP has been used by a number of research groups, including our own, to rapidly prototype and solve difficult stochastic programming problems.
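    Progressive Hedging itself is easy to demonstrate on a toy problem. The sketch below is not PySP code; it is a minimal hand-rolled illustration with two equally likely scenarios and quadratic objectives, so each scenario subproblem has a closed-form minimizer. The dual weights drive the scenario copies of the first-stage variable toward a consensus:

```python
# Toy Progressive Hedging: each scenario wants x near a different target;
# PH penalizes deviation from the consensus xbar and updates dual weights.
targets = [4.0, 10.0]   # hypothetical scenario data, equally likely
rho = 1.0               # PH penalty parameter
w = [0.0, 0.0]          # dual weights, one per scenario
xbar = sum(targets) / len(targets)

for _ in range(100):
    # Scenario subproblem min_x (x - t)^2 + w*x + (rho/2)(x - xbar)^2
    # has the closed-form minimizer used below.
    xs = [(2 * t - wi + rho * xbar) / (2 + rho) for t, wi in zip(targets, w)]
    xbar = sum(xs) / len(xs)                      # consensus (implementability)
    w = [wi + rho * (x - xbar) for wi, x in zip(w, xs)]   # dual update

print(round(xbar, 4))
```

    With targets 4 and 10 the scenario copies converge to the consensus value 7, their mean; in a real mixed-integer application each subproblem would be solved by a MIP solver instead of a formula.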

  13. Mixed Integer Programming Model and Incremental Optimization for Delivery and Storage Planning Using Truck Terminals

    NASA Astrophysics Data System (ADS)

    Sakakibara, Kazutoshi; Tian, Yajie; Nishikawa, Ikuko

    We discuss the planning of transportation by trucks over a multi-day period. Each truck collects loads from suppliers and delivers them to assembly plants or a truck terminal. By exploiting the truck terminal as temporary storage, we aim to increase the load ratio of each truck and to minimize the lead time for transportation. In this paper, we show a mixed integer programming model that represents each product explicitly, and discuss the decomposition of the problem into a delivery-and-storage problem and a vehicle routing problem. Based on this model, we propose a relax-and-fix type heuristic in which decision variables are fixed one by one by mathematical programming techniques such as branch-and-bound methods.
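    A relax-and-fix pass can be sketched on a toy 0/1 knapsack: the decision variables are split into blocks, each block in turn is solved as integer while later blocks stay LP-relaxed (here, fractional knapsack), and the block's solution is then fixed. All data are hypothetical:

```python
from itertools import product

# Hypothetical 0/1 knapsack instance.
values  = [10, 7, 6, 4, 3, 1]
weights = [ 5, 4, 4, 3, 2, 1]
CAP = 10

def relaxed_value(free, cap):
    # Fractional (LP-relaxation) knapsack over the still-relaxed items.
    total = 0.0
    for i in sorted(free, key=lambda i: values[i] / weights[i], reverse=True):
        take = min(1.0, cap / weights[i]) if cap > 0 else 0.0
        total += take * values[i]
        cap -= take * weights[i]
    return total

fixed = {}
blocks = [[0, 1], [2, 3], [4, 5]]
for b, block in enumerate(blocks):
    later = [i for blk in blocks[b + 1:] for i in blk]
    best = None
    for bits in product([0, 1], repeat=len(block)):   # this block is integer
        trial = dict(fixed)
        trial.update(zip(block, bits))
        used = sum(weights[i] for i, v in trial.items() if v)
        if used > CAP:
            continue
        val = sum(values[i] for i, v in trial.items() if v) \
              + relaxed_value(later, CAP - used)
        if best is None or val > best[0]:
            best = (val, dict(zip(block, bits)))
    fixed.update(best[1])   # fix this block and move to the next

chosen = sorted(i for i, v in fixed.items() if v)
print(chosen, sum(values[i] for i in chosen))
```

    On this instance the heuristic happens to reach the optimal value 18; in general relax-and-fix yields good feasible solutions quickly but without an optimality guarantee.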

  14. COMSAT: Residue contact prediction of transmembrane proteins based on support vector machines and mixed integer linear programming.

    PubMed

    Zhang, Huiling; Huang, Qingsheng; Bei, Zhendong; Wei, Yanjie; Floudas, Christodoulos A

    2016-03-01

    In this article, we present COMSAT, a hybrid framework for residue contact prediction of transmembrane (TM) proteins, integrating a support vector machine (SVM) method and a mixed integer linear programming (MILP) method. COMSAT consists of two modules: COMSAT_SVM, which is trained mainly on position-specific scoring matrix features, and COMSAT_MILP, which is an ab initio method based on optimization models. Contacts predicted by the SVM model are ranked by SVM confidence scores, and a threshold is trained to improve the reliability of the predicted contacts. For TM proteins with no contacts above the threshold, COMSAT_MILP is used. The proposed hybrid contact prediction scheme was tested on two independent TM protein sets based on the contact definition of 14 Å between Cα-Cα atoms. First, using a rigorous leave-one-protein-out cross validation on the training set of 90 TM proteins, an accuracy of 66.8%, a coverage of 12.3%, a specificity of 99.3% and a Matthews correlation coefficient (MCC) of 0.184 were obtained for residue pairs that are at least six amino acids apart. Second, when tested on a test set of 87 TM proteins, the proposed method showed a prediction accuracy of 64.5%, a coverage of 5.3%, a specificity of 99.4% and an MCC of 0.106. COMSAT shows satisfactory results when compared with 12 other state-of-the-art predictors, and is more robust in terms of prediction accuracy as the length and complexity of TM proteins increase. COMSAT is freely accessible at http://hpcc.siat.ac.cn/COMSAT/.

  15. Mixed integer programming model for optimizing the layout of an ICU vehicle

    PubMed Central

    2009-01-01

    Background This paper presents a Mixed Integer Programming (MIP) model for designing the layout of the Intensive Care Units' (ICUs) patient care space. In particular, this MIP model was developed for optimizing the layout for materials to be used in interventions. This work was developed within the framework of a joint project between the Madrid Technical University and the Medical Emergency Services of the Madrid Regional Government (SUMMA 112). Methods The first task was to identify the relevant information to define the characteristics of the new vehicles and, in particular, to obtain a satisfactory interior layout to locate all the necessary materials. This information was gathered from health workers related to ICUs. With that information, an optimization model was developed in order to obtain a solution. Results Solving the MIP model produced a grid on which to locate the different materials needed in the ICUs. This grid had to be slightly altered to meet some requirements that had not been included in the mathematical model, and the adjusted solution was discussed with, and approved by, the persons responsible for specifying the characteristics of the new vehicles. Conclusion According to the SUMMA 112 medical group responsible for improving the ambulances (the so-called "coaching group"), the outcome was highly satisfactory. Indeed, the final design served as a basis to draw up the requirements of a public tender.

  16. Models and Algorithms Involving Very Large Scale Stochastic Mixed-Integer Programs

    DTIC Science & Technology

    2011-02-28

    ... give rise to a non-convex and discontinuous recourse function that may be difficult to optimize. As a result of this project, there have been ... convex, the master problem in (3.1.6)-(3.1.9) is a non-convex mixed-integer program, and as indicated in [C.1], this approach is not scalable without ... the first stage would result in a Benders' master program which is non-convex, leading to a problem that is not any easier than (3.1.5). Nevertheless

  17. High-Speed Planning Method for Cooperative Logistics Networks using Mixed Integer Programming Model and Dummy Load

    NASA Astrophysics Data System (ADS)

    Onoyama, Takashi; Kubota, Sen; Maekawa, Takuya; Komoda, Norihisa

    Adequate response performance is required when planning a cooperative logistics network covering multiple enterprises, because the process needs a human expert's evaluation from many aspects. To satisfy this requirement, we propose an accurate model based on mixed integer programming for optimizing cooperative logistics networks in which “round transportation” coexists with “depot transportation”, including lower-limit constraints on the loading ratio of round-transportation vehicles. Furthermore, to achieve interactive response performance, a dummy load is introduced into the model in place of integer variables. Experimental results show that the proposed method obtains an accurate solution within interactive response time.

  18. MISO - Mixed Integer Surrogate Optimization

    SciTech Connect

    Mueller, Juliane

    2016-01-20

    MISO is an optimization framework for solving computationally expensive mixed-integer, black-box, global optimization problems. MISO uses surrogate models to approximate the computationally expensive objective function. Hence, derivative information, which is generally unavailable for black-box simulation objective functions, is not needed. MISO allows the user to choose the initial experimental design strategy, the type of surrogate model, and the sampling strategy.
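    The surrogate loop MISO describes can be caricatured in a few lines: evaluate an initial design, fit a cheap surrogate to the evaluated points, and repeatedly evaluate the point the surrogate likes best. The sketch below is not MISO's algorithm (it uses a crude inverse-distance-weighted surrogate and a purely exploitative sampling rule on a small integer domain); it only illustrates the structure:

```python
def expensive(z):
    # Stand-in for a costly black-box simulation over the integers 0..20.
    return (z - 13) ** 2 + 5 * ((z % 3) == 0)

domain = list(range(21))
evaluated = {z: expensive(z) for z in (0, 7, 14, 20)}   # initial design

def surrogate(z):
    # Cheap inverse-distance-weighted interpolant of the evaluated points.
    num = den = 0.0
    for zk, fk in evaluated.items():
        if zk == z:
            return fk
        w = 1.0 / (z - zk) ** 2
        num += w * fk
        den += w
    return num / den

for _ in range(10):   # sampling loop: evaluate the most promising new point
    candidates = [z for z in domain if z not in evaluated]
    z_next = min(candidates, key=surrogate)
    evaluated[z_next] = expensive(z_next)

z_best = min(evaluated, key=evaluated.get)
print(z_best, evaluated[z_best])
```

    Because the surrogate is cheap, many candidate points can be screened per expensive evaluation; real frameworks like MISO balance this exploitation against exploration of sparsely sampled regions.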

  19. Mixed integer nonlinear programming model of wireless pricing scheme with QoS attribute of bandwidth and end-to-end delay

    NASA Astrophysics Data System (ADS)

    Irmeilyana, Puspita, Fitri Maya; Indrawati

    2016-02-01

    The pricing for wireless networks is developed by considering linearity factors, price elasticity and price factors. A mixed integer nonlinear programming wireless pricing model is proposed as a nonlinear programming problem that can be solved optimally using LINGO 13.0. The solutions are expected to give some information about the connection between the acceptance factor and the price. The previous model focused on bandwidth as the QoS attribute. The models attempt to maximize the total price for a connection based on QoS parameters; here the QoS attributes are the bandwidth and the end-to-end delay that affect the traffic. The maximum price is achieved when the provider determines the requirements for price increments or decrements due to QoS changes and the amount of QoS value.

  20. Reverse engineering of logic-based differential equation models using a mixed-integer dynamic optimization approach

    PubMed Central

    Henriques, David; Rocha, Miguel; Saez-Rodriguez, Julio; Banga, Julio R.

    2015-01-01

    Motivation: Systems biology models can be used to test new hypotheses formulated on the basis of previous knowledge or of new experimental data that contradict a previously existing model. New hypotheses often come in the shape of a set of possible regulatory mechanisms. This search is usually not limited to finding a single regulation link, but rather targets a combination of links, subject to great uncertainty or no information about the kinetic parameters. Results: In this work, we combine a logic-based formalism, to describe all the possible regulatory structures for a given dynamic model of a pathway, with mixed-integer dynamic optimization (MIDO). This framework aims to simultaneously identify the regulatory structure (represented by binary parameters) and the real-valued parameters that are consistent with the available experimental data, resulting in a logic-based differential equation model. The alternative would be to perform real-valued parameter estimation for each possible model structure, which is not tractable for models of the size presented in this work. The performance of the method is illustrated with several case studies: a synthetic pathway problem of signaling regulation, a two-component signal transduction pathway in bacterial homeostasis, and a signaling network in liver cancer cells. Supplementary information: Supplementary data are available at Bioinformatics online. Contact: julio@iim.csic.es or saezrodriguez@ebi.ac.uk PMID:26002881

  1. Stochastic Dynamic Mixed-Integer Programming (SD-MIP)

    DTIC Science & Technology

    2015-05-05

    Recent research has opened the door to stochastic optimization models, which are typically dynamic in nature. This project lays the foundation for stochastic dynamic mixed-integer and linear programming (SD-MIP), and has produced several new ideas in ... models.

  2. A Mixed Integer Programming Model for Improving Theater Distribution Force Flow Analysis

    DTIC Science & Technology

    2013-03-01

    ... the introduction to LINGO in OPER 510. Next, I wish to thank LINDO Systems, particularly Kevin Cunningham, for software assistance with LINGO. ... Appendix A: LINGO 13 Settings File Contents; Appendix B: Additional Model ... optimization software LINGO 13 (LINDO Systems Inc, 2012). A Decision Support System was built in the Excel environment where the user uploads a

  3. Comparison of two non-convex mixed-integer nonlinear programming algorithms applied to autoregressive moving average model structure and parameter estimation

    NASA Astrophysics Data System (ADS)

    Uilhoorn, F. E.

    2016-10-01

    In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search algorithm and a real-coded genetic algorithm. The aim is to estimate the real-valued parameters and the non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are performed on existing time series and one new data set.
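    The integer part of such a MINLP is the model order. The sketch below illustrates order selection on a simulated AR(2) series by enumerating candidate orders, fitting each by ordinary least squares, and scoring with AIC. This is a deliberate simplification of the article's setup (pure AR instead of ARMA, least squares instead of a Kalman-filter likelihood):

```python
import math
import random

random.seed(1)

# Simulate an AR(2) series: y_t = 0.6 y_{t-1} - 0.3 y_{t-2} + e_t.
y = [0.0, 0.0]
for _ in range(400):
    y.append(0.6 * y[-1] - 0.3 * y[-2] + random.gauss(0, 1))
y = y[2:]

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting (normal equations are tiny).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[c][c]:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * v for a, v in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ar_aic(p):
    # Least-squares AR(p) fit; AIC = n*log(RSS/n) + 2*(p+1).
    if p == 0:
        rss = sum(v * v for v in y)
        return len(y) * math.log(rss / len(y)) + 2
    n = len(y) - p
    X = [[y[t - j - 1] for j in range(p)] for t in range(p, len(y))]
    A = [[sum(X[t][i] * X[t][j] for t in range(n)) for j in range(p)] for i in range(p)]
    b = [sum(X[t][i] * y[t + p] for t in range(n)) for i in range(p)]
    coef = solve(A, b)
    rss = sum((y[t + p] - sum(c * X[t][j] for j, c in enumerate(coef))) ** 2
              for t in range(n))
    return n * math.log(rss / n) + 2 * (p + 1)

best_p = min(range(4), key=ar_aic)   # the integer decision: model order
print(best_p)
```

    On a true AR(2) series the AIC minimum lands at order 2 (occasionally 3, since AIC can mildly overfit); the article's MINLP solvers search this same integer dimension jointly with the real-valued coefficients.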

  4. Magnetic properties of mixed integer and half-integer spins in a Blume-Capel model: A Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Masrour, R.; Jabar, A.; Bahmad, L.; Hamedoun, M.; Benyoussef, A.

    2017-01-01

    In this paper, we study the magnetic properties of ferrimagnetic mixed spins with integer spin σ = 2 and half-integer spin S = 7/2 in a Blume-Capel model, using Monte Carlo simulations. The Hamiltonian includes first-nearest-neighbor exchange coupling interactions on the two sublattices. The effect of these exchange coupling interactions, in the presence of both the external magnetic field and the crystal field, is studied. The magnetizations and the corresponding susceptibilities are presented and discussed. Finally, we interpret the magnetic hysteresis behavior of this model.
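    A minimal Metropolis Monte Carlo sketch of such a mixed-spin Blume-Capel system, reduced to a one-dimensional chain with illustrative parameter values, shows the basic machinery (single-site updates, exchange, crystal-field and Zeeman terms, a crude annealing schedule):

```python
import math
import random

random.seed(42)

# Mixed-spin Blume-Capel chain: integer spins sigma in {-2..2} on even sites,
# half-integer spins S in {-7/2..7/2} on odd sites; exchange J, crystal
# field D, external field h. All parameter values are illustrative.
SIGMA = [-2, -1, 0, 1, 2]
SHALF = [x / 2 for x in range(-7, 8, 2)]
N, J, D, h = 50, 1.0, 0.1, 0.2

spins = [random.choice(SIGMA if i % 2 == 0 else SHALF) for i in range(N)]

def local_energy(i, s):
    left, right = spins[(i - 1) % N], spins[(i + 1) % N]
    return -J * s * (left + right) - D * s * s - h * s

def sweep(T):
    for _ in range(N):   # one Metropolis sweep: N single-site update attempts
        i = random.randrange(N)
        new = random.choice(SIGMA if i % 2 == 0 else SHALF)
        dE = local_energy(i, new) - local_energy(i, spins[i])
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins[i] = new

for T in (5.0, 2.0, 1.0, 0.5, 0.2):   # crude annealing schedule
    for _ in range(400):
        sweep(T)

m = sum(spins) / N   # magnetization per site; +/-2.75 when fully aligned
print(round(m, 3))
```

    With ferromagnetic J and a small positive field, the chain orders with both sublattices aligned along the field at low temperature; sweeping h back and forth at fixed T would trace the hysteresis loops the abstract discusses.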

  5. Optimal planning of co-firing alternative fuels with coal in a power plant by grey nonlinear mixed integer programming model.

    PubMed

    Ko, Andi Setiady; Chang, Ni-Bin

    2008-07-01

    Energy supply and use is of fundamental importance to society. Although the interactions between energy and environment were originally local in character, they have now widened to cover regional and global issues, such as acid rain and the greenhouse effect. It is for this reason that the direct and indirect economic and environmental impacts of energy acquisition, transport, production and use need to be covered. In this paper, particular attention is directed to ways of resolving the conflict between economic and environmental goals by encouraging a power plant to consider co-firing biomass and refuse-derived fuel (RDF) with coal simultaneously. It aims at reducing the emission level of sulfur dioxide (SO2) in an uncertain environment, using the power plant in Michigan City, Indiana as an example. To assess the uncertainty in a comparative way, both deterministic and grey nonlinear mixed integer programming (MIP) models were developed to minimize the net operating cost with respect to possible fuel combinations, generating the optimal portfolio of alternative fuels while maintaining the same electricity generation. To ease the solution procedure, a stepwise relaxation algorithm was developed for solving the grey nonlinear MIP model. The breakeven alternative fuel value can be identified in the post-optimization stage for decision-making. Research findings show that the inclusion of RDF does not exhibit a comparative advantage in terms of net cost, albeit a relatively lower air pollution impact. Yet it can be sustained by a charge system, subsidy program, or emission credit as the price of coal increases over time.

  6. Simulation and Mixed Integer Linear Programming Models for Analysis of Semi-Automated Mail Processing

    DTIC Science & Technology

    1989-12-01

    Indexed excerpt from the report's C source code: file-pointer declarations (infp, outfp1-outfp6), fopen() calls for the input file "GMFA.DAT" and the output file "XL5.DAT", routines for reading the input stream data, an hourly wage rate array for machine operators, and output of the objective ("MAXIMIZE IITWSTS").

  7. Item Pool Construction Using Mixed Integer Quadratic Programming (MIQP). GMAC® Research Report RR-14-01

    ERIC Educational Resources Information Center

    Han, Kyung T.; Rudner, Lawrence M.

    2014-01-01

    This study uses mixed integer quadratic programming (MIQP) to construct multiple highly equivalent item pools simultaneously, and compares the results with those from mixed integer programming (MIP). Three different MIP/MIQP models were implemented and evaluated using real CAT item pool data with 23 different content areas and a goal of equal information…

  8. Mixed-Integer Formulations for Constellation Scheduling

    NASA Astrophysics Data System (ADS)

    Valicka, C.; Hart, W.; Rintoul, M.

    Remote sensing systems have expanded the set of capabilities available for and critical to national security. Cooperating, high-fidelity sensing systems and growing mission applications have exponentially increased the set of potential schedules. A definitive lack of advanced tools places an increased burden on operators, as planning and scheduling remain largely manual tasks. This is particularly true in time-critical planning activities where operators aim to accomplish a large number of missions through optimal utilization of single or multiple sensor systems. Automated scheduling through identification and comparison of alternative schedules remains a challenging problem applicable across all remote sensing systems. Previous approaches focus on a subset of sensor missions and do not consider ad-hoc tasking. We have begun development of a robust framework that leverages the Pyomo optimization modeling language for the design of a tool to assist sensor operators planning under the constraints of multiple concurrent missions and uncertainty. Our scheduling models have been formulated to address the stochastic nature of ad-hoc tasks inserted under a variety of scenarios. Operator experience is being leveraged to select appropriate model objectives. Successful development of the framework will include iterative development of high-fidelity mission models that consider and expose various schedule performance metrics. Creating this tool will aid time-critical scheduling by increasing planning efficiency, clarifying the value of alternative modalities uniquely provided by multi-sensor systems, and presenting both sets of organized information to operators. Such a tool will help operators more quickly and fully utilize sensing systems, a high-interest objective within the current remote sensing operations community. Preliminary results for mixed-integer programming formulations of a sensor scheduling problem will be presented, along with assumptions regarding sensor geometry.

  9. Solution of Mixed-Integer Programming Problems on the XT5

    SciTech Connect

    Hartman-Baker, Rebecca J; Busch, Ingrid Karin; Hilliard, Michael R; Middleton, Richard S; Schultze, Michael

    2009-01-01

    In this paper, we describe our experience with solving difficult mixed-integer linear programming problems (MILPs) on the petaflop Cray XT5 system at the National Center for Computational Sciences at Oak Ridge National Laboratory. We describe the algorithmic, software, and hardware needs for solving MILPs and present the results of using PICO, an open-source, parallel, mixed-integer linear programming solver developed at Sandia National Laboratories, to solve canonical MILPs as well as problems of interest arising from the logistics and supply chain management field.
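
    The solver discussed above, like most MILP solvers, is built around branch and bound with relaxation-based pruning. The toy sketch below is not PICO's algorithm; it is a minimal illustration of the idea on a 0/1 knapsack, using the fractional (LP-relaxation) bound to prune subtrees that cannot beat the incumbent.

```python
def knapsack_bnb(values, weights, cap):
    """Minimal branch and bound for the 0/1 knapsack problem.
    Illustrative only: real MILP solvers branch on LP relaxations of
    general models and run in parallel."""
    # branch in order of decreasing value density; best[0] is the incumbent
    items = sorted(zip(values, weights), key=lambda vw: vw[0] / vw[1], reverse=True)
    best = [0]

    def bound(k, cap_left, val):
        """LP-relaxation bound: fill remaining capacity greedily,
        taking the last item fractionally."""
        b = val
        for v, w in items[k:]:
            if w <= cap_left:
                cap_left -= w
                b += v
            else:
                return b + v * cap_left / w
        return b

    def branch(k, cap_left, val):
        if val > best[0]:
            best[0] = val
        if k == len(items) or bound(k, cap_left, val) <= best[0]:
            return  # prune: relaxation cannot improve on the incumbent
        v, w = items[k]
        if w <= cap_left:
            branch(k + 1, cap_left - w, val + v)  # take item k
        branch(k + 1, cap_left, val)              # skip item k

    branch(0, cap, 0)
    return best[0]

best_value = knapsack_bnb([60, 100, 120], [10, 20, 30], 50)
```

    The pruning test is the essential mechanism: whenever the optimistic bound of a subtree falls below the incumbent, the subtree is discarded without enumeration.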

  10. Finding community structures in complex networks using mixed integer optimisation

    NASA Astrophysics Data System (ADS)

    Xu, G.; Tsoka, S.; Papageorgiou, L. G.

    2007-11-01

    The detection of community structure has been used to reveal the relationships between individual objects and their groupings in networks. This paper presents a mathematical programming approach to identify the optimal community structures in complex networks, based on the maximisation of a network modularity metric for partitioning a network into modules. The overall problem is formulated as a mixed integer quadratic programming (MIQP) model, which can then be solved to global optimality using standard optimisation software. The solution procedure is further enhanced by developing special symmetry-breaking constraints to eliminate equivalent solutions. It is shown that additional features such as minimum/maximum module size and balancing among modules can easily be incorporated in the model. The applicability of the proposed optimisation-based approach is demonstrated on four examples. Comparative results with other approaches from the literature show that the proposed methodology has superior performance while the global optimum is guaranteed.
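
    For a concrete sense of the modularity metric being maximised, the sketch below computes Newman's modularity Q and finds the best two-way split of a tiny graph by exhaustive search. This brute-force stand-in replaces the abstract's MIQP solver (which scales far beyond six nodes); the graph is an invented example of two triangles joined by one bridge edge.

```python
from itertools import product

def modularity(adj, labels):
    """Newman modularity: Q = (1/2m) * sum_ij (A_ij - k_i*k_j/2m) [c_i == c_j]."""
    n = len(adj)
    degrees = [sum(row) for row in adj]
    two_m = sum(degrees)  # 2m = sum of degrees
    q = 0.0
    for i in range(n):
        for j in range(n):
            if labels[i] == labels[j]:
                q += adj[i][j] - degrees[i] * degrees[j] / two_m
    return q / two_m

def best_two_way_split(adj):
    """Exhaustive search over all 2-colourings (toy stand-in for the MIQP)."""
    n = len(adj)
    best = max(product([0, 1], repeat=n), key=lambda lab: modularity(adj, lab))
    return list(best), modularity(adj, best)

# two triangles (nodes 0-2 and 3-5) joined by the bridge edge (2, 3)
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
adj = [[0] * 6 for _ in range(6)]
for u, v in edges:
    adj[u][v] = adj[v][u] = 1

labels, q = best_two_way_split(adj)
```

    The optimal split recovers the two triangles, with Q = 5/14; the symmetry the abstract's constraints break is visible here too, since swapping the two colour labels gives an equivalent solution.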

  11. Optimized oral cholera vaccine distribution strategies to minimize disease incidence: A mixed integer programming model and analysis of a Bangladesh scenario.

    PubMed

    Smalley, Hannah K; Keskinocak, Pinar; Swann, Julie; Hinman, Alan

    2015-11-17

    In addition to improved sanitation, hygiene, and better access to safe water, oral cholera vaccines can help to control the spread of cholera in the short term. However, there is currently no systematic method for determining the best allocation of oral cholera vaccines to minimize disease incidence in a population where the disease is endemic and resources are limited. We present a mathematical model for optimally allocating vaccines in a region under varying levels of demographic and incidence data availability. The model addresses the questions of where, when, and how many doses of vaccines to send. Considering vaccine efficacies (which may vary based on age and the number of years since vaccination), we analyze distribution strategies which allocate vaccines over multiple years. Results indicate that, given appropriate surveillance data, targeting age groups and regions with the highest disease incidence should be the first priority, followed by other groups primarily in order of disease incidence, as this approach is the most life-saving and cost-effective. A lack of detailed incidence data results in distribution strategies which are not cost-effective and can lead to thousands more deaths from the disease. The mathematical model allows for what-if analysis for various vaccine distribution strategies by providing the ability to easily vary parameters such as numbers and sizes of regions and age groups, risk levels, vaccine price, vaccine efficacy, production capacity and budget.
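
    The headline finding above, that doses should go to the age groups and regions with the highest incidence first, can be sketched as a greedy allocation. This is a simplification with invented population and incidence numbers, not the paper's optimization model, which also handles timing, multi-year efficacy, and budgets.

```python
def allocate_doses(groups, supply):
    """Greedy sketch: allocate doses to (region, age-group) cells in
    decreasing order of disease incidence until supply runs out.
    Illustrative stand-in for the paper's mixed integer program."""
    plan = {}
    for name, pop, incidence in sorted(groups, key=lambda g: g[2], reverse=True):
        doses = min(pop, supply)
        plan[name] = doses
        supply -= doses
        if supply == 0:
            break
    return plan

groups = [  # (cell, population, annual incidence per 1000) - hypothetical numbers
    ("regionA_under5", 40_000, 9.0),
    ("regionA_over5", 160_000, 2.1),
    ("regionB_under5", 30_000, 6.5),
    ("regionB_over5", 120_000, 1.2),
]
plan = allocate_doses(groups, supply=80_000)
```

    With 80,000 doses, both high-incidence under-5 cells are fully covered before any lower-incidence group receives vaccine, mirroring the priority ordering the abstract reports.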

  12. Solving mixed integer nonlinear programming problems using spiral dynamics optimization algorithm

    NASA Astrophysics Data System (ADS)

    Kania, Adhe; Sidarto, Kuntjoro Adji

    2016-02-01

    Many engineering and practical problems can be modeled as mixed integer nonlinear programs. This paper proposes solving such problems with a modified version of the spiral dynamics inspired optimization method of Tamura and Yasuda. Four test cases have been examined, including problems in engineering and sport. The method succeeds in obtaining the optimal result in all test cases.
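
    The core of spiral dynamics optimization is simple to state: every search point rotates by a fixed angle around the current best point while contracting toward it. The sketch below is a minimal continuous 2-D version of that update rule (after Tamura and Yasuda) on an invented quadratic test function; it omits the paper's modifications and its handling of integer variables.

```python
import math
import random

def spiral_minimize(f, n_points=20, iters=200, r=0.95, theta=math.pi / 4, seed=0):
    """Minimal 2-D spiral dynamics optimization: each point rotates by
    theta around the incumbent best and contracts toward it by factor r.
    A sketch of the basic method, not the paper's modified algorithm."""
    rng = random.Random(seed)
    pts = [(rng.uniform(-10, 10), rng.uniform(-10, 10)) for _ in range(n_points)]
    best = min(pts, key=f)
    c, s = math.cos(theta), math.sin(theta)
    for _ in range(iters):
        new_pts = []
        for x, y in pts:
            dx, dy = x - best[0], y - best[1]
            # rotate the offset from the best point, then contract it
            new_pts.append((best[0] + r * (c * dx - s * dy),
                            best[1] + r * (s * dx + c * dy)))
        pts = new_pts
        cand = min(pts, key=f)
        if f(cand) < f(best):
            best = cand  # recentre the spiral on the improved incumbent
    return best

f = lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2  # minimum at (3, -1)
xy = spiral_minimize(f)
```

    The contraction factor r trades exploration against convergence speed: values near 1 sweep the search region more densely before the points collapse onto the incumbent.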

  13. Distributed mixed-integer fuzzy hierarchical programming for municipal solid waste management. Part I: System identification and methodology development.

    PubMed

    Cheng, Guanhui; Huang, Guohe; Dong, Cong; Xu, Ye; Chen, Xiujuan; Chen, Jiapei

    2017-03-01

    Due to the complexities of heterogeneity, hierarchy, discreteness, and interactions in municipal solid waste management (MSWM) systems such as that of Beijing, China, a series of socio-economic and eco-environmental problems may emerge or worsen and result in irredeemable damages in the following decades. Meanwhile, existing studies, especially ones focusing on MSWM in Beijing, could hardly reflect these complexities in system simulations and provide reliable decision support for management practices. Thus, a framework of distributed mixed-integer fuzzy hierarchical programming (DMIFHP) is developed in this study for MSWM under these complexities. Beijing is selected as a representative case. The Beijing MSWM system is comprehensively analyzed in many aspects such as socio-economic conditions, natural conditions, spatial heterogeneities, treatment facilities, and system complexities, building a solid foundation for system simulation and optimization. Correspondingly, the MSWM system in Beijing is discretized into 235 grids to reflect spatial heterogeneity. A DMIFHP model, which is a nonlinear programming problem, is constructed to parameterize the Beijing MSWM system. To solve it, a solution algorithm is proposed based on the coupling of fuzzy programming and mixed-integer linear programming. Innovations and advantages of the DMIFHP framework are discussed. The optimal MSWM schemes and mechanism revelations will be discussed in a companion paper due to length limitations.

  14. Constrained spacecraft reorientation using mixed integer convex programming

    NASA Astrophysics Data System (ADS)

    Tam, Margaret; Glenn Lightsey, E.

    2016-10-01

    A constrained attitude guidance (CAG) system is developed using convex optimization to autonomously achieve spacecraft pointing objectives while meeting the constraints imposed by on-board hardware. These constraints include bounds on the control input and slew rate, as well as pointing constraints imposed by the sensors. The pointing constraints consist of inclusion and exclusion cones that dictate permissible orientations of the spacecraft in order to keep objects in or out of the field of view of the sensors. The optimization scheme drives a body vector towards a target inertial vector along a trajectory that consists solely of permissible orientations in order to achieve the desired attitude for a given mission mode. The non-convex rotational kinematics are handled by discretization, which also ensures that the quaternion retains unit norm. In order to guarantee an admissible path, the pointing constraints are relaxed. Depending on how strict the pointing constraints are, the degree of relaxation is tuneable. The use of binary variables permits the inclusion of logical expressions in the pointing constraints in the case that a set of sensors has redundancies. The resulting mixed integer convex programming (MICP) formulation generates a steering law that can be easily integrated into an attitude determination and control (ADC) system. A sample simulation of the system is performed for the Bevo-2 satellite, including disturbance torques and actuator dynamics which are not modeled by the controller. Simulation results demonstrate the robustness of the system to disturbances while meeting the mission requirements with desirable performance characteristics.
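
    The inclusion/exclusion cone constraints above reduce to angle tests between a body-fixed vector and each cone axis. The sketch below checks such cone constraints for a single orientation; it is an invented standalone check, not the paper's MICP steering law, and the vectors and half-angles are hypothetical.

```python
import math

def orientation_ok(body_vec, inclusion=(), exclusion=()):
    """Check cone-style pointing constraints. Each constraint is a pair
    (axis_vector, half_angle_deg): inclusion cones must contain body_vec,
    exclusion cones must not. Illustrative sketch only."""
    def angle_deg(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))
    return (all(angle_deg(body_vec, ax) <= half for ax, half in inclusion) and
            all(angle_deg(body_vec, ax) > half for ax, half in exclusion))

sun = (1.0, 0.0, 0.0)
# hypothetical constraint: camera boresight must stay >= 30 deg from the sun
ok = orientation_ok((0.0, 1.0, 0.0), exclusion=[(sun, 30.0)])   # 90 deg away
bad = orientation_ok((0.9, 0.1, 0.0), exclusion=[(sun, 30.0)])  # ~6 deg away
```

    In the MICP formulation these tests become constraints on the discretized trajectory, and binary variables encode the logical combinations of cones the abstract mentions for redundant sensor sets.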

  15. Mixed integer simulation optimization for optimal hydraulic fracturing and production of shale gas fields

    NASA Astrophysics Data System (ADS)

    Li, J. C.; Gong, B.; Wang, H. G.

    2016-08-01

    Optimal development of shale gas fields involves designing a most productive fracturing network for hydraulic stimulation processes and operating wells appropriately throughout the production time. A hydraulic fracturing network design (determining well placement, number of fracturing stages, and fracture lengths) is defined by specifying a set of integer ordered blocks to drill wells and create fractures in a discrete shale gas reservoir model. The well control variables such as bottom hole pressures or production rates for well operations are real valued. Shale gas development problems, therefore, can be mathematically formulated with mixed-integer optimization models. A shale gas reservoir simulator is used to evaluate the production performance for a hydraulic fracturing and well control plan. Finding the optimal fracturing design and well operation is challenging because the problem is a mixed integer optimization problem and entails computationally expensive reservoir simulation. A dynamic simplex interpolation-based alternate subspace (DSIAS) search method is applied for mixed integer optimization problems associated with shale gas development projects. The optimization performance is demonstrated with the example case of the development of the Barnett Shale field. The optimization results of DSIAS are compared with those of a pattern search algorithm.

  16. Footstep Planning on Uneven Terrain with Mixed-Integer Convex Optimization

    DTIC Science & Technology

    2014-08-01

    Excerpt: planning footstep placements for a robot walking on uneven terrain with obstacles, using a mixed-integer quadratically-constrained quadratic program.

  17. Inexact fuzzy-stochastic mixed-integer programming approach for long-term planning of waste management--Part A: methodology.

    PubMed

    Guo, P; Huang, G H

    2009-01-01

    In this study, an inexact fuzzy chance-constrained two-stage mixed-integer linear programming (IFCTIP) approach is proposed for supporting long-term planning of waste-management systems under multiple uncertainties in the City of Regina, Canada. The method improves upon the existing inexact two-stage programming and mixed-integer linear programming techniques by incorporating uncertainties expressed as multiple uncertainties of intervals and dual probability distributions within a general optimization framework. The developed method can provide an effective linkage between the predefined environmental policies and the associated economic implications. Four special characteristics of the proposed method make it unique compared with other optimization techniques that deal with uncertainties. Firstly, it provides a linkage to predefined policies that have to be respected when a modeling effort is undertaken; secondly, it is useful for tackling uncertainties presented as intervals, probabilities, fuzzy sets and their incorporation; thirdly, it facilitates dynamic analysis for decisions of facility-expansion planning and waste-flow allocation within a multi-facility, multi-period, multi-level, and multi-option context; fourthly, the penalties are exercised with recourse against any infeasibility, which permits in-depth analyses of various policy scenarios that are associated with different levels of economic consequences when the promised solid waste-generation rates are violated. In a companion paper, the developed method is applied to a real case for the long-term planning of waste management in the City of Regina, Canada.

  18. Mixed Integer Programming and Heuristic Scheduling for Space Communication Networks

    NASA Technical Reports Server (NTRS)

    Cheung, Kar-Ming; Lee, Charles H.

    2012-01-01

    We developed a framework and the mathematical formulation for optimizing a communication network using mixed integer programming. The design yields a system whose search space is much smaller than that of the earlier approach. Our constrained network optimization takes into account the dynamics of link performance within the network along with mission and operation requirements. A unique penalty function is introduced to transform the mixed integer program into the more manageable problem of searching in a continuous space. The constrained optimization problem is solved in two stages: first using the heuristic Particle Swarm Optimization algorithm to get a good initial starting point, and then feeding the result into the Sequential Quadratic Programming algorithm to achieve the final optimal schedule. We demonstrate the above planning and scheduling methodology with a scenario of 20 spacecraft and 3 ground stations of a Deep Space Network site. Our approach and framework are simple and flexible, so that problems with larger numbers of constraints and larger networks can be easily adapted and solved.

  19. Mixed Integer Programming and Heuristic Scheduling for Space Communication

    NASA Technical Reports Server (NTRS)

    Lee, Charles H.; Cheung, Kar-Ming

    2013-01-01

    Optimal planning and scheduling for a communication network was developed so that the nodes within the network communicate at the highest possible rates while meeting the mission requirements and operational constraints. The planning and scheduling problem was formulated in the framework of Mixed Integer Programming (MIP); a special penalty function was introduced to convert the MIP problem into a continuous optimization problem, which was then solved using heuristic optimization. The communication network consists of space and ground assets with the link dynamics between any two assets varying with respect to time, distance, and telecom configurations. One asset could be communicating with another at very high data rates at one time, while at other times communication is impossible, as the asset could be inaccessible from the network due to planetary occultation. Based on the network's geometric dynamics and link capabilities, the start time, end time, and link configuration of each view period are selected to maximize the communication efficiency within the network. Mathematical formulations for the constrained mixed integer optimization problem were derived, and efficient analytical and numerical techniques were developed to find the optimal solution. By setting up the problem using MIP, the search space for the optimization problem is reduced significantly, thereby speeding up the solution process. The ratio of the dimension of the traditional method over the proposed formulation is approximately an order N (single) to 2*N (arraying), where N is the number of receiving antennas of a node. By introducing a special penalty function, the MIP problem with a non-differentiable cost function and nonlinear constraints can be converted into a continuous-variable problem that can be solved.
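
    The penalty-function idea above, turning an integrality requirement into a continuous optimization problem, can be illustrated on one variable. The sketch below is a generic textbook-style penalty x(1-x) for a binary variable, not the papers' special penalty function: as the penalty weight grows, the continuous minimizer is driven to a 0/1 value.

```python
def continuous_relaxation_min(f, mu, grid=10001):
    """Minimise f(x) + mu * x * (1 - x) over [0, 1] by grid search.
    The term x*(1-x) vanishes only at x = 0 or x = 1, so large mu
    penalises fractional values. Generic sketch, not the papers' penalty."""
    return min((i / (grid - 1) for i in range(grid)),
               key=lambda x: f(x) + mu * x * (1 - x))

f = lambda x: (x - 0.7) ** 2           # invented objective, unconstrained min at 0.7
relaxed = continuous_relaxation_min(f, 0.0)      # no penalty: fractional minimiser
penalised = continuous_relaxation_min(f, 100.0)  # heavy penalty: driven to binary
```

    With no penalty the minimizer sits at the fractional value 0.7; with a heavy penalty it snaps to the better of the two binary values, here x = 1 since f(1) < f(0).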

  20. A two-level approach to large mixed-integer programs with application to cogeneration in energy-efficient buildings

    DOE PAGES

    Lin, Fu; Leyffer, Sven; Munson, Todd

    2016-04-12

    We study a two-stage mixed-integer linear program (MILP) with more than 1 million binary variables in the second stage. We develop a two-level approach by constructing a semi-coarse model that coarsens with respect to variables and a coarse model that coarsens with respect to both variables and constraints. We coarsen binary variables by selecting a small number of prespecified on/off profiles. We aggregate constraints by partitioning them into groups and taking convex combination over each group. With an appropriate choice of coarsened profiles, the semi-coarse model is guaranteed to find a feasible solution of the original problem and hence provides an upper bound on the optimal solution. We show that solving a sequence of coarse models converges to the same upper bound with proven finite steps. This is achieved by adding violated constraints to coarse models until all constraints in the semi-coarse model are satisfied. We demonstrate the effectiveness of our approach in cogeneration for buildings. Here, the coarsened models allow us to obtain good approximate solutions at a fraction of the time required by solving the original problem. Extensive numerical experiments show that the two-level approach scales to large problems that are beyond the capacity of state-of-the-art commercial MILP solvers.
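
    The constraint-aggregation step above replaces a group of constraints by a single convex combination of them. The toy sketch below shows the mechanics on two invented constraints and why the aggregate is weaker: every point feasible for the group satisfies the aggregate, but not conversely, which is why the abstract's method adds violated constraints back until the coarse model is consistent.

```python
def aggregate(constraints, weights):
    """Convex combination of linear constraints a.x <= b: returns one
    surrogate constraint. Sketch of the coarse-model aggregation step."""
    assert abs(sum(weights) - 1.0) < 1e-9 and min(weights) >= 0
    n = len(constraints[0][0])
    a = [sum(w * c[0][i] for w, c in zip(weights, constraints)) for i in range(n)]
    b = sum(w * c[1] for w, c in zip(weights, constraints))
    return a, b

def satisfies(a, b, x):
    return sum(ai * xi for ai, xi in zip(a, x)) <= b + 1e-9

# invented group: x1 <= 1 and x2 <= 2, aggregated with equal weights
group = [([1.0, 0.0], 1.0), ([0.0, 1.0], 2.0)]
a, b = aggregate(group, [0.5, 0.5])   # surrogate: 0.5*x1 + 0.5*x2 <= 1.5

# (0, 3) violates x2 <= 2 in the original group, yet satisfies the surrogate
relaxed_only = satisfies(a, b, (0.0, 3.0))
```

    This one-sided implication makes the aggregated (coarse) model a relaxation of the group it replaces, so its solutions must be checked against the full constraint set.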

  1. A two-level approach to large mixed-integer programs with application to cogeneration in energy-efficient buildings

    SciTech Connect

    Lin, Fu; Leyffer, Sven; Munson, Todd

    2016-04-12

    We study a two-stage mixed-integer linear program (MILP) with more than 1 million binary variables in the second stage. We develop a two-level approach by constructing a semi-coarse model that coarsens with respect to variables and a coarse model that coarsens with respect to both variables and constraints. We coarsen binary variables by selecting a small number of prespecified on/off profiles. We aggregate constraints by partitioning them into groups and taking convex combination over each group. With an appropriate choice of coarsened profiles, the semi-coarse model is guaranteed to find a feasible solution of the original problem and hence provides an upper bound on the optimal solution. We show that solving a sequence of coarse models converges to the same upper bound with proven finite steps. This is achieved by adding violated constraints to coarse models until all constraints in the semi-coarse model are satisfied. We demonstrate the effectiveness of our approach in cogeneration for buildings. Here, the coarsened models allow us to obtain good approximate solutions at a fraction of the time required by solving the original problem. Extensive numerical experiments show that the two-level approach scales to large problems that are beyond the capacity of state-of-the-art commercial MILP solvers.

  2. Final Report---Next-Generation Solvers for Mixed-Integer Nonlinear Programs: Structure, Search, and Implementation

    SciTech Connect

    Linderoth, Jeff T.; Luedtke, James R.

    2013-05-30

    The mathematical modeling of systems often requires the use of both nonlinear and discrete components. Problems involving both discrete and nonlinear components are known as mixed-integer nonlinear programs (MINLPs) and are among the most challenging computational optimization problems. This research project added to the understanding of this area by making a number of fundamental advances. First, the work demonstrated many novel, strong, tractable relaxations designed to deal with non-convexities arising in mathematical formulation. Second, the research implemented the ideas in software that is available to the public. Finally, the work demonstrated the importance of these ideas on practical applications and disseminated the work through scholarly journals, survey publications, and conference presentations.

  3. Estimating Tree-Structured Covariance Matrices via Mixed-Integer Programming.

    PubMed

    Bravo, Héctor Corrada; Wright, Stephen; Eng, Kevin H; Keles, Sündüz; Wahba, Grace

    2009-01-01

    We present a novel method for estimating tree-structured covariance matrices directly from observed continuous data. Specifically, we estimate a covariance matrix from observations of p continuous random variables encoding a stochastic process over a tree with p leaves. A representation of these classes of matrices as linear combinations of rank-one matrices indicating object partitions is used to formulate estimation as instances of well-studied numerical optimization problems. In particular, our estimates are based on projection, where the covariance estimate is the nearest tree-structured covariance matrix to an observed sample covariance matrix. The problem is posed as a linear or quadratic mixed-integer program (MIP) where a setting of the integer variables in the MIP specifies a set of tree topologies of the structured covariance matrix. We solve these problems to optimality using efficient and robust existing MIP solvers. We present a case study in phylogenetic analysis of gene expression and a simulation study comparing our method to distance-based tree estimating procedures.

  4. Designing cost-effective biopharmaceutical facilities using mixed-integer optimization.

    PubMed

    Liu, Songsong; Simaria, Ana S; Farid, Suzanne S; Papageorgiou, Lazaros G

    2013-01-01

    Chromatography operations are identified as critical steps in a monoclonal antibody (mAb) purification process and can represent a significant proportion of the purification material costs. This becomes even more critical with increasing product titers that result in higher mass loads onto chromatography columns, potentially causing capacity bottlenecks. In this work, a mixed-integer nonlinear programming (MINLP) model was created and applied to an industrially relevant case study to optimize the design of a facility by determining the most cost-effective chromatography equipment sizing strategies for the production of mAbs. Furthermore, the model was extended to evaluate the ability of a fixed facility to cope with higher product titers up to 15 g/L. Examination of the characteristics of the optimal chromatography sizing strategies across different titer values enabled the identification of the maximum titer that the facility could handle using a sequence of single column chromatography steps as well as multi-column steps. The critical titer levels for different ratios of upstream to downstream trains where multiple parallel columns per step resulted in the removal of facility bottlenecks were identified. Different facility configurations in terms of number of upstream trains were considered and the trade-off between their cost and ability to handle higher titers was analyzed. The case study insights demonstrate that the proposed modeling approach, combining MINLP models with visualization tools, is a valuable decision-support tool for the design of cost-effective facility configurations and to aid facility fit decisions.

  5. Identification of regulatory structure and kinetic parameters of biochemical networks via mixed-integer dynamic optimization

    PubMed Central

    2013-01-01

    Background Recovering the network topology and associated kinetic parameter values from time-series data are central topics in systems biology. Nevertheless, methods that simultaneously do both are few and lack generality. Results Here, we present a rigorous approach for simultaneously estimating the parameters and regulatory topology of biochemical networks from time-series data. The parameter estimation task is formulated as a mixed-integer dynamic optimization problem with: (i) binary variables, used to model the existence of regulatory interactions and kinetic effects of metabolites in the network processes; and (ii) continuous variables, denoting metabolites concentrations and kinetic parameters values. The approach simultaneously optimizes the Akaike criterion, which captures the trade-off between complexity (measured by the number of parameters), and accuracy of the fitting. This simultaneous optimization mitigates a possible overfitting that could result from addition of spurious regulatory interactions. Conclusion The capabilities of our approach were tested in one benchmark problem. Our algorithm is able to identify a set of plausible network topologies with their associated parameters. PMID:24176044
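
    The Akaike criterion that the abstract's objective optimizes has a simple closed form, AIC = 2k - 2 ln L, trading fit quality against the number of parameters. The sketch below evaluates it for two hypothetical fits (the log-likelihoods and parameter counts are invented for illustration).

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: AIC = 2k - 2 ln L (lower is better)."""
    return 2 * n_params - 2 * log_likelihood

# hypothetical fits: a 7-parameter network fits slightly better than a
# 3-parameter one, but the complexity penalty favours the smaller model
aic_small = aic(-52.0, 3)  # sparser network, slightly worse fit
aic_large = aic(-50.5, 7)  # denser network, slightly better fit
```

    Here the denser network improves the log-likelihood by only 1.5 at the cost of 4 extra parameters, so AIC selects the sparser topology; this is exactly the mechanism the abstract credits with suppressing spurious regulatory interactions.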

  6. A Composite Algorithm for Mixed Integer Constrained Nonlinear Optimization.

    DTIC Science & Technology

    1980-01-01

    nature of the problem, auxiliary techniques including Lagrange multipliers, penalty functions, linearization, and rounding have all been used to aid in...result is a series of problems Pn with solutions Sn. If the sequence of problems is appropriately selected, two useful properties result. First, knowledge of the solution to the (n)th problem aids in the solution of the (n+1)st problem. Second, the sequence of solutions Sn tends to the solution

  7. Large-scale bi-level strain design approaches and mixed-integer programming solution techniques.

    PubMed

    Kim, Joonhoon; Reed, Jennifer L; Maravelias, Christos T

    2011-01-01

    The use of computational models in metabolic engineering has been increasing as more genome-scale metabolic models and computational approaches become available. Various computational approaches have been developed to predict how genetic perturbations affect metabolic behavior at a systems level, and have been successfully used to engineer microbial strains with improved primary or secondary metabolite production. However, identification of metabolic engineering strategies involving a large number of perturbations is currently limited by computational resources due to the size of genome-scale models and the combinatorial nature of the problem. In this study, we present (i) two new bi-level strain design approaches using mixed-integer programming (MIP), and (ii) general solution techniques that improve the performance of MIP-based bi-level approaches. The first approach (SimOptStrain) simultaneously considers gene deletion and non-native reaction addition, while the second approach (BiMOMA) uses minimization of metabolic adjustment to predict knockout behavior in a MIP-based bi-level problem for the first time. Our general MIP solution techniques significantly reduced the CPU times needed to find optimal strategies when applied to an existing strain design approach (OptORF) (e.g., from ∼10 days to ∼5 minutes for metabolic engineering strategies with 4 gene deletions), and identified strategies for producing compounds where previous studies could not (e.g., malate and serine). Additionally, we found novel strategies using SimOptStrain with higher predicted production levels (for succinate and glycerol) than could have been found using an existing approach that considers network additions and deletions in sequential steps rather than simultaneously. Finally, using BiMOMA we found novel strategies involving large numbers of modifications (for pyruvate and glutamate), which sequential search and genetic algorithms were unable to find. The approaches and solution

  8. Mixed-integer programming methods for transportation and power generation problems

    NASA Astrophysics Data System (ADS)

    Damci Kurt, Pelin

    This dissertation conducts theoretical and computational research to solve challenging problems in application areas such as supply chain and power systems. The first part of the dissertation studies a transportation problem with market choice (TPMC) which is a variant of the classical transportation problem in which suppliers with limited capacities have a choice of which demands (markets) to satisfy. We show that TPMC is strongly NP-complete. We consider a version of the problem with a service level constraint on the maximum number of markets that can be rejected and show that if the original problem is polynomial, its cardinality-constrained version is also polynomial. We propose valid inequalities for mixed-integer cover and knapsack sets with variable upper bound constraints, which appear as substructures of TPMC and use them in a branch-and-cut algorithm to solve this problem. The second part of this dissertation studies a unit commitment (UC) problem in which the goal is to minimize the operational cost of power generators over a time period subject to physical constraints while satisfying demand. We provide several exponential classes of multi-period ramping and multi-period variable upper bound inequalities. We prove the strength of these inequalities and describe polynomial-time separation algorithms. Computational results show the effectiveness of the proposed inequalities when used as cuts in a branch-and-cut algorithm to solve the UC problem. The last part of this dissertation investigates the effects of uncertain wind power on the UC problem. A two-stage robust model and a three-stage stochastic program are compared.
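
    Valid inequalities of the kind described above are generated by separation routines inside a branch-and-cut loop. As a small illustration, the sketch below performs greedy separation of a classical cover inequality for a plain 0/1 knapsack set (simpler than the variable-upper-bound sets studied in the dissertation): a cover C with total weight exceeding b yields the cut sum_{i in C} x_i <= |C| - 1, and the routine looks for one violated by a fractional point.

```python
def violated_cover(a, b, xstar, tol=1e-9):
    """Greedy separation of a cover inequality for the knapsack set
    sum_i a_i x_i <= b, given a fractional point xstar. Returns
    (cover, violation) or None. Heuristic sketch, not an exact separator."""
    # prefer items that are cheap to include in the cover: small (1 - x*_i)/a_i
    idx = sorted(range(len(a)), key=lambda i: (1 - xstar[i]) / a[i])
    cover, weight = [], 0
    for i in idx:
        cover.append(i)
        weight += a[i]
        if weight > b:
            break
    if weight <= b:
        return None  # no cover exists among all items
    # cover inequality: sum_{i in C} x_i <= |C| - 1
    violation = sum(xstar[i] for i in cover) - (len(cover) - 1)
    return (sorted(cover), violation) if violation > tol else None

# invented instance: 4*x0 + 3*x1 + 2*x2 <= 5, fractional point (1, 0.33, 0)
cut = violated_cover([4, 3, 2], 5, [1.0, 0.33, 0.0])
```

    Here the cover {0, 1} has weight 7 > 5 and x0 + x1 = 1.33 > 1, so the cut x0 + x1 <= 1 is violated and would be added to the LP relaxation in a branch-and-cut algorithm.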

  9. A solution procedure for mixed-integer nonlinear programming formulation of supply chain planning with quantity discounts under demand uncertainty

    NASA Astrophysics Data System (ADS)

    Yin, Sisi; Nishi, Tatsushi

    2014-11-01

    Quantity discount policy is decision-making for trade-off prices between suppliers and manufacturers while production is changeable due to demand fluctuations in a real market. In this paper, quantity discount models which consider selection of contract suppliers, production quantity and inventory simultaneously are addressed. The supply chain planning problem with quantity discounts under demand uncertainty is formulated as a mixed-integer nonlinear programming problem (MINLP) with integral terms. We apply an outer-approximation method to solve MINLP problems. In order to improve the efficiency of the proposed method, the problem is reformulated as a stochastic model replacing the integral terms by using a normalisation technique. We present numerical examples to demonstrate the efficiency of the proposed method.

  10. Flexible interval mixed-integer bi-infinite programming for environmental systems management under uncertainty.

    PubMed

    He, L; Huang, G H; Lu, H W

    2009-04-01

    A number of inexact programming methods have been developed for municipal solid waste management under uncertainty. However, most of them do not allow the parameters in the objective and constraints of a programming problem to be functional intervals (i.e., the lower and upper bounds of the intervals are functions of impact factors). In this study, a flexible interval mixed-integer bi-infinite programming (FIMIBIP) method is developed in response to the above concern. A case study is also conducted; the solutions are then compared with those obtained from interval mixed-integer bi-infinite programming (IMIBIP) and fuzzy interval mixed-integer programming (FIMIP) methods. It is indicated that the solutions through FIMIBIP can provide decision support for cost-effectively diverting municipal solid waste, and for sizing, timing and siting the facilities' expansion during the entire planning horizon. These schemes are more flexible than those identified through IMIBIP since the tolerance intervals are introduced to measure the level of constraints satisfaction. The FIMIBIP schemes may also be robust since the solutions are "globally-optimal" under all scenarios caused by the fluctuation of gas/energy prices, while the conventional ones are merely "locally-optimal" under a certain scenario.

  11. Comparison of penalty functions on a penalty approach to mixed-integer optimization

    NASA Astrophysics Data System (ADS)

    Francisco, Rogério B.; Costa, M. Fernanda P.; Rocha, Ana Maria A. C.; Fernandes, Edite M. G. P.

    2016-06-01

In this paper, we present a comparative study involving several penalty functions that can be used in a penalty approach for globally solving bound mixed-integer nonlinear programming (bMINLP) problems. The penalty approach relies on a continuous reformulation of the bMINLP problem obtained by adding a particular penalty term to the objective function. A penalty function based on the `erf' function is proposed. The continuous nonlinear optimization problems are sequentially solved by the population-based firefly algorithm. Preliminary numerical experiments are carried out in order to analyze the quality of the produced solutions when compared with other penalty functions available in the literature.
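As a generic illustration of the penalty idea (not the paper's erf-based function, and with a hypothetical objective), one can penalize each variable's distance to its nearest integer and minimize the continuous reformulation by a simple seeded random search; increasing the penalty weight drives the minimizer toward an integer point:

```python
import random

def f(x):
    """Hypothetical bound-constrained objective on [0, 4]."""
    return (x - 2.3) ** 2

def int_penalty(x):
    """Distance to the nearest integer -- a generic penalty choice
    (the cited paper proposes an erf-based variant instead)."""
    return abs(x - round(x))

def penalized_min(mu, trials=20000, seed=0):
    """Minimize f + mu * int_penalty over [0, 4] by seeded random search."""
    rng = random.Random(seed)
    best_x, best_v = None, float("inf")
    for _ in range(trials):
        x = rng.uniform(0.0, 4.0)
        v = f(x) + mu * int_penalty(x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x
```

With mu = 0 the search settles near the continuous minimizer 2.3; with a large weight such as mu = 50 it is driven to the integer point x = 2. This pull toward integrality is the mechanism that all the compared penalty functions share, differing only in the shape of the penalty term.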

  12. Obtaining lower bounds from the progressive hedging algorithm for stochastic mixed-integer programs

    DOE PAGES

    Gade, Dinakar; Hackebeil, Gabriel; Ryan, Sarah M.; ...

    2016-04-02

    We present a method for computing lower bounds in the progressive hedging algorithm (PHA) for two-stage and multi-stage stochastic mixed-integer programs. Computing lower bounds in the PHA allows one to assess the quality of the solutions generated by the algorithm contemporaneously. The lower bounds can be computed in any iteration of the algorithm by using dual prices that are calculated during execution of the standard PHA. In conclusion, we report computational results on stochastic unit commitment and stochastic server location problem instances, and explore the relationship between key PHA parameters and the quality of the resulting lower bounds.
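The bounding idea can be sketched on a two-scenario toy problem (all data hypothetical): with dual prices whose probability-weighted mean is zero, as holds for the PHA's multipliers, the scenario subproblems separate, and their expected minimum is a valid lower bound on the true optimum.

```python
# Two-scenario toy SMIP: build a facility (x = 1, cost 3) or not (x = 0),
# then pay a scenario-dependent recourse cost. All data are hypothetical.
p = [0.5, 0.5]                     # scenario probabilities
build_cost = 3.0
recourse_if_built = [1.0, 2.0]
recourse_if_not = [5.0, 3.0]

def scenario_min(s, w):
    """Scenario subproblem with dual price w on the first-stage copy."""
    built = (build_cost + w) + recourse_if_built[s]
    not_built = recourse_if_not[s]
    return min(built, not_built)

def lagrangian_bound(w):
    """Expected scenario minima: a valid lower bound when E[w] = 0."""
    assert abs(sum(pi * wi for pi, wi in zip(p, w))) < 1e-9
    return sum(p[s] * scenario_min(s, w[s]) for s in range(len(p)))

def true_optimum():
    return min(x * build_cost
               + sum(p[s] * (recourse_if_built[s] if x else recourse_if_not[s])
                     for s in range(len(p)))
               for x in (0, 1))
```

For this instance `lagrangian_bound([0.0, 0.0])` gives 3.5 while the optimum is 4.0, and the prices `[1.0, -1.0]` close the gap entirely; this mirrors how the dual prices produced during PHA execution can tighten the reported bound as iterations progress.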

  13. Greenhouse gas emissions control in integrated municipal solid waste management through mixed integer bilevel decision-making.

    PubMed

    He, Li; Huang, G H; Lu, Hongwei

    2011-10-15

    Recent studies indicated that municipal solid waste (MSW) is a major contributor to global warming due to extensive emissions of greenhouse gases (GHGs). However, most of them focused on investigating impacts of MSW on GHG emission amounts. This study presents two mixed integer bilevel decision-making models for integrated municipal solid waste management and GHG emissions control: MGU-MCL and MCU-MGL. The MGU-MCL model represents a top-down decision process, with the environmental sectors at the national level dominating the upper-level objective and the waste management sectors at the municipal level providing the lower-level objective. The MCU-MGL model implies a bottom-up decision process where municipality plays a leading role. Results from the models indicate that: the top-down decisions would reduce metric tonne carbon emissions (MTCEs) by about 59% yet increase about 8% of the total management cost; the bottom-up decisions would reduce MTCE emissions by about 13% but increase the total management cost very slightly; on-site monitoring and downscaled laboratory experiments are still required for reducing uncertainty in GHG emission rate from the landfill facility.

  14. Optimization of a wood dryer kiln using the mixed integer programming technique: A case study

    SciTech Connect

    Gustafsson, S.I.

    1999-07-01

When wood is to be utilized as a raw material for furniture, buildings, etc., it must be dried from approximately 100% to 6% moisture content. This is achieved at least partly in a drying kiln. Heat for this purpose is provided by electrical means, or by steam from boilers fired with wood chips or oil. By closely examining monitored values from an actual drying kiln, it has been possible to optimize the use of steam and electricity using the so-called mixed integer programming technique. Owing to the operating schedule for the drying kiln, it has been necessary to divide the drying process into very short time intervals, i.e., a number of minutes. Since a drying cycle takes about two or three weeks, this presents a considerable mathematical problem that must be solved.

  15. PIPS-SBB: A Parallel Distributed-Memory Branch-and-Bound Algorithm for Stochastic Mixed-Integer Programs

    SciTech Connect

    Munguia, Lluis-Miquel; Oxberry, Geoffrey; Rajan, Deepak

    2016-05-01

    Stochastic mixed-integer programs (SMIPs) deal with optimization under uncertainty at many levels of the decision-making process. When solved as extensive formulation mixed- integer programs, problem instances can exceed available memory on a single workstation. In order to overcome this limitation, we present PIPS-SBB: a distributed-memory parallel stochastic MIP solver that takes advantage of parallelism at multiple levels of the optimization process. We also show promising results on the SIPLIB benchmark by combining methods known for accelerating Branch and Bound (B&B) methods with new ideas that leverage the structure of SMIPs. Finally, we expect the performance of PIPS-SBB to improve further as more functionality is added in the future.

  17. Incorporation of Fixed Installation Costs into Optimization of Groundwater Remediation with a New Efficient Surrogate Nonlinear Mixed Integer Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Shoemaker, Christine; Wan, Ying

    2016-04-01

Optimization of nonlinear water resources management problems that have a mixture of fixed (e.g., construction cost for a well) and variable (e.g., cost per gallon of water pumped) costs has not been well addressed, because prior algorithms for the resulting nonlinear mixed integer problems have required many groundwater simulations (with different configurations of the decision variables), especially when the solution space is multimodal. In particular, heuristic methods like genetic algorithms have often been used in the water resources area, but they require so many groundwater simulations that only small systems have been solved. Hence there is a need for a method that reduces the number of expensive groundwater simulations. A recently published algorithm for nonlinear mixed integer programming using surrogates was shown in this study to greatly reduce the computational effort for obtaining accurate answers to problems involving fixed costs for well construction as well as variable costs for pumping, owing to a substantial reduction in the number of groundwater simulations required. Results are presented for a US EPA hazardous waste site. The nonlinear mixed integer surrogate algorithm is general and can be applied to other problems arising in hydrology, with open source codes in Matlab and Python ("pySOT" on Bitbucket).

  18. Interval-parameter semi-infinite fuzzy-stochastic mixed-integer programming approach for environmental management under multiple uncertainties

    SciTech Connect

    Guo, P.; Huang, G.H.

    2010-03-15

In this study, an interval-parameter semi-infinite fuzzy-chance-constrained mixed-integer linear programming (ISIFCIP) approach is developed for supporting long-term planning of waste-management systems under multiple uncertainties in the City of Regina, Canada. The method improves upon the existing interval-parameter semi-infinite programming (ISIP) and fuzzy-chance-constrained programming (FCCP) by incorporating uncertainties expressed as dual uncertainties of functional intervals and multiple uncertainties of distributions with a fuzzy-interval admissible probability of constraint violation within a general optimization framework. The binary-variable solutions represent the decisions of waste-management-facility expansion, and the continuous ones are related to decisions of waste-flow allocation. The interval solutions can help decision-makers to obtain multiple decision alternatives, as well as provide bases for further analyses of tradeoffs between waste-management cost and system-failure risk. In the application to the City of Regina, Canada, two scenarios are considered. In Scenario 1, the City's waste-management practices would be based on the existing policy over the next 25 years. The total diversion rate for the residential waste would be approximately 14%. Scenario 2 is associated with a policy for waste minimization and diversion, where 35% diversion of residential waste should be achieved within 15 years, and 50% diversion over 25 years. In this scenario, not only the landfill but also the CF and MRF would be expanded. Through the scenario analyses, useful decision support for the City's solid-waste managers and decision-makers has been generated. Three special characteristics of the proposed method make it unique compared with other optimization techniques that deal with uncertainties. Firstly, it is useful for tackling multiple uncertainties expressed as intervals, functional intervals, probability distributions, fuzzy sets, and their

  19. BBPH: Using progressive hedging within branch and bound to solve multi-stage stochastic mixed integer programs

    SciTech Connect

    Barnett, Jason; Watson, Jean -Paul; Woodruff, David L.

    2016-11-27

Progressive hedging (PH), though an effective heuristic for solving stochastic mixed integer programs (SMIPs), is not guaranteed to converge in this case. Here, we describe BBPH, a branch and bound algorithm that uses PH at each node in the search tree such that, given sufficient time, it will always converge to a globally optimal solution. In addition to providing a theoretically convergent “wrapper” for PH applied to SMIPs, computational results demonstrate that for some difficult problem instances branch and bound can find improved solutions after exploring only a few nodes.

  20. Linear models: permutation methods

    USGS Publications Warehouse

    Cade, B.S.; Everitt, B.S.; Howell, D.C.

    2005-01-01

    Permutation tests (see Permutation Based Inference) for the linear model have applications in behavioral studies when traditional parametric assumptions about the error term in a linear model are not tenable. Improved validity of Type I error rates can be achieved with properly constructed permutation tests. Perhaps more importantly, increased statistical power, improved robustness to effects of outliers, and detection of alternative distributional differences can be achieved by coupling permutation inference with alternative linear model estimators. For example, it is well-known that estimates of the mean in linear model are extremely sensitive to even a single outlying value of the dependent variable compared to estimates of the median [7, 19]. Traditionally, linear modeling focused on estimating changes in the center of distributions (means or medians). However, quantile regression allows distributional changes to be estimated in all or any selected part of a distribution or responses, providing a more complete statistical picture that has relevance to many biological questions [6]...
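A minimal permutation test for the slope of a simple linear regression might look like the following pure-Python sketch (responses are shuffled to generate the null distribution of the slope statistic; the add-one p-value correction is a common convention):

```python
import random

def slope(x, y):
    """Least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

def permutation_pvalue(x, y, n_perm=2000, seed=1):
    """Two-sided permutation test of H0: no linear association."""
    rng = random.Random(seed)
    observed = abs(slope(x, y))
    y = list(y)                          # work on a copy
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(y)                   # permute responses under H0
        if abs(slope(x, y)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)     # add-one correction
```

For a strongly linear response the p-value collapses to roughly 1/(n_perm + 1). Swapping the least-squares slope for an alternative estimator (e.g., a quantile-regression slope) inside the same loop is how the robustness and power gains discussed above are obtained.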

  1. A novel mixed integer programming for multi-biomarker panel identification by distinguishing malignant from benign colorectal tumors.

    PubMed

    Zou, Meng; Zhang, Peng-Jun; Wen, Xin-Yu; Chen, Luonan; Tian, Ya-Ping; Wang, Yong

    2015-07-15

Multi-biomarker panels can capture the nonlinear synergy among biomarkers, and they are important for aiding early diagnosis and ultimately battling complex diseases. However, identification of these multi-biomarker panels from case and control data is challenging. For example, the exhaustive search method is computationally infeasible when the data dimension is high. Here, we propose a novel method, MILP_k, to identify a serum-based multi-biomarker panel to distinguish colorectal cancers (CRC) from benign colorectal tumors. Specifically, the multi-biomarker panel detection problem is modeled as a mixed integer program that maximizes the classification accuracy. We then measured the serum profiling data for 101 CRC patients and 95 benign patients. The 61 biomarkers were analyzed individually, and their combinations were analyzed by our method. We discovered 4 biomarkers as the optimal small multi-biomarker panel, including the known CRC biomarkers CEA and IL-10 as well as the novel biomarkers IMA and NSE. This multi-biomarker panel achieves a leave-one-out cross-validation (LOOCV) accuracy of 0.7857 with a nearest centroid classifier. An independent test of this panel by support vector machine (SVM) with threefold cross-validation gives an AUC of 0.8438. This greatly improves the predictive accuracy by 20% over the single best biomarker. Further extension of this 4-biomarker panel to a larger 13-biomarker panel improves the LOOCV accuracy to 0.8673 with an independent AUC of 0.8437. Comparison with the exhaustive search method shows that our method dramatically reduces the searching time by 1000-fold. Experiments on the early cancer stage samples reveal two panels of biomarkers and show promising accuracy. The proposed method allows us to select the subset of biomarkers with the best accuracy to distinguish case and control samples given the number of selected biomarkers. Both the receiver operating characteristic curve and the precision-recall curve show our method's consistent performance gain in accuracy.
Our method
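The panel-selection objective can be illustrated with a tiny exhaustive-search stand-in for the MILP (the data below are invented, and the real method scales far beyond enumeration): choose the k markers whose nearest-centroid classifier attains the best accuracy.

```python
from itertools import combinations

# Invented toy data: rows are samples, columns are candidate biomarkers.
cases = [[5.0, 1.0, 0.2], [4.5, 1.2, 0.1], [5.2, 0.9, 0.3]]
controls = [[2.0, 1.1, 0.2], [2.2, 1.0, 0.3], [1.9, 1.2, 0.1]]
n_markers = 3

def centroid(rows, panel):
    return [sum(r[j] for r in rows) / len(rows) for j in panel]

def accuracy(panel):
    """Resubstitution accuracy of a nearest-centroid classifier on a panel."""
    cc, cn = centroid(cases, panel), centroid(controls, panel)
    def dist(r, c):
        return sum((r[j] - v) ** 2 for j, v in zip(panel, c))
    correct = sum(dist(r, cc) < dist(r, cn) for r in cases)
    correct += sum(dist(r, cc) >= dist(r, cn) for r in controls)
    return correct / (len(cases) + len(controls))

def best_panel(k):
    """Exhaustive stand-in for the MILP: best k-marker panel by accuracy."""
    return max(combinations(range(n_markers), k), key=accuracy)
```

On this toy data the first marker alone separates the groups perfectly, so `best_panel(1)` returns it. The MILP formulation encodes the same accuracy-maximization objective with binary marker-selection variables, which is what makes the search tractable at 61 markers where enumeration is not.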

  2. Multi-objective Mixed Integer Programming approach for facility layout design by considering closeness ratings, material handling, and re-layout cost

    NASA Astrophysics Data System (ADS)

    Purnomo, Muhammad Ridwan Andi; Satrio Wiwoho, Yoga

    2016-01-01

Facility layout is one of the production system factors that should be managed well, as it determines where production takes place. In managing the layout, it is essential to design it with regard to the optimal layout conditions that support working conditions. One method for facility layout optimization is Mixed Integer Programming (MIP). In this study, the MIP is solved using Lingo 9.0 software, considering quantitative and qualitative objectives simultaneously: minimizing material handling cost, maximizing closeness rating, and minimizing re-layout cost. The research took place at Rekayasa Wangdi, a make-to-order company, focusing on the making of a concrete brick dough stirring machine with 10 departments involved. The result shows an improvement of 333.72 points in objective value for the new layout compared with the initial layout. In conclusion, the proposed MIP is shown to model the facility layout problem under multi-objective considerations for a more realistic result.

  3. An interval-parameter mixed integer multi-objective programming for environment-oriented evacuation management

    NASA Astrophysics Data System (ADS)

    Wu, C. Z.; Huang, G. H.; Yan, X. P.; Cai, Y. P.; Li, Y. P.

    2010-05-01

    Large crowds are increasingly common at political, social, economic, cultural and sports events in urban areas. This has led to attention on the management of evacuations under such situations. In this study, we optimise an approximation method for vehicle allocation and route planning in case of an evacuation. This method, based on an interval-parameter multi-objective optimisation model, has potential for use in a flexible decision support system for evacuation management. The modeling solutions are obtained by sequentially solving two sub-models corresponding to lower- and upper-bounds for the desired objective function value. The interval solutions are feasible and stable in the given decision space, and this may reduce the negative effects of uncertainty, thereby improving decision makers' estimates under different conditions. The resulting model can be used for a systematic analysis of the complex relationships among evacuation time, cost and environmental considerations. The results of a case study used to validate the proposed model show that the model does generate useful solutions for planning evacuation management and practices. Furthermore, these results are useful for evacuation planners, not only in making vehicle allocation decisions but also for providing insight into the tradeoffs among evacuation time, environmental considerations and economic objectives.

  4. Solving a Class of Stochastic Mixed-Integer Programs With Branch and Price

    DTIC Science & Technology

    2006-01-01

model is called the (deterministic) capacitated facility-location problem with sole-sourcing (FLP) (Barcelo and Casanova [5]). Assume now that some...Appelgren, L.H.: A column generation algorithm for a ship scheduling problem. Transp. Sci. 3, 53–68 (1969) 5. Barcelo, J., Casanova, J.: A heuristic

  5. Distributed mixed-integer fuzzy hierarchical programming for municipal solid waste management. Part II: scheme analysis and mechanism revelation.

    PubMed

    Cheng, Guanhui; Huang, Guohe; Dong, Cong; Xu, Ye; Chen, Jiapei; Chen, Xiujuan; Li, Kailong

    2017-02-16

As presented in the first companion paper, distributed mixed-integer fuzzy hierarchical programming (DMIFHP) was developed for municipal solid waste management (MSWM) under complexities of heterogeneities, hierarchy, discreteness, and interactions. Beijing was selected as a representative case. This paper focuses on presenting the obtained schemes and the revealed mechanisms of the Beijing MSWM system. The optimal MSWM schemes for Beijing under various solid waste treatment policies, and their differences, are deliberated. The impacts of facility expansion, hierarchy, and spatial heterogeneities, as well as potential extensions of DMIFHP, are also discussed. A few findings are revealed from the results and a series of comparisons and analyses. For instance, DMIFHP is capable of robustly reflecting these complexities in MSWM systems, especially for Beijing. The optimal MSWM schemes show fragmented patterns due to the dominant role of the proximity principle in allocating solid waste treatment resources, and they are closely related to the regulated ratios of landfilling, incineration, and composting. Communities without significant differences among distances to different types of treatment facilities are more sensitive to these ratios than others. The complexities of hierarchy and heterogeneities have significant impacts on MSWM practices. Spatial dislocation of MSW generation rates and facility capacities, caused by unreasonable planning in the past, may result in insufficient utilization of treatment capacities under the substantial influence of transportation costs. The problems of unreasonable MSWM planning, e.g., severe imbalance among different technologies and complete vacancy of ten facilities, deserve deliberation by the public and the municipal or local governments in Beijing. These findings are helpful for gaining insights into MSWM systems under these complexities, mitigating key challenges in the planning of these systems, improving the related management

  6. A robust multi-objective global supplier selection model under currency fluctuation and price discount

    NASA Astrophysics Data System (ADS)

    Zarindast, Atousa; Seyed Hosseini, Seyed Mohamad; Pishvaee, Mir Saman

    2016-11-01

A robust supplier selection problem is proposed in a scenario-based approach, where demand and exchange rates are subject to uncertainty. First, a deterministic multi-objective mixed integer linear program is developed; then, the robust counterpart of the proposed mixed integer linear program is presented using recent extensions in robust optimization theory. We determine the decision variables, respectively, by a two-stage stochastic planning model, a robust stochastic optimization planning model that integrates the worst-case scenario into the modeling approach, and finally an equivalent deterministic planning model. An experimental study is carried out to compare the performance of the three models. The robust model resulted in remarkable cost savings, illustrating that to cope with such uncertainties we should account for them in advance in our planning. In our case study, different suppliers were selected because of these uncertainties; since supplier selection is a strategic decision, it is crucial to consider these uncertainties in the planning approach.

  7. Non-Linear Control Allocation Using Piecewise Linear Functions

    DTIC Science & Technology

    2003-08-01

A novel method is presented for the solution of the non-linear control allocation problem. Historically, control allocation has been performed by... linear control allocation problem to be cast as a piecewise linear program. The piecewise linear program is ultimately cast as a mixed-integer linear...piecewise linear control allocation method is shown to be markedly improved when compared to the performance of a more traditional control allocation approach that assumes linearity.

  8. Equivalent Linear Logistic Test Models.

    ERIC Educational Resources Information Center

    Bechger, Timo M.; Verstralen, Huub H. F. M.; Verhelst, Norma D.

    2002-01-01

    Discusses the Linear Logistic Test Model (LLTM) and demonstrates that there are many equivalent ways to specify a model. Analyzed a real data set (300 responses to 5 analogies) using a Lagrange multiplier test for the specification of the model, and demonstrated that there may be many ways to change the specification of an LLTM and achieve the…

  9. Parameterized Linear Longitudinal Airship Model

    NASA Technical Reports Server (NTRS)

    Kulczycki, Eric; Elfes, Alberto; Bayard, David; Quadrelli, Marco; Johnson, Joseph

    2010-01-01

    A parameterized linear mathematical model of the longitudinal dynamics of an airship is undergoing development. This model is intended to be used in designing control systems for future airships that would operate in the atmospheres of Earth and remote planets. Heretofore, the development of linearized models of the longitudinal dynamics of airships has been costly in that it has been necessary to perform extensive flight testing and to use system-identification techniques to construct models that fit the flight-test data. The present model is a generic one that can be relatively easily specialized to approximate the dynamics of specific airships at specific operating points, without need for further system identification, and with significantly less flight testing. The approach taken in the present development is to merge the linearized dynamical equations of an airship with techniques for estimation of aircraft stability derivatives, and to thereby make it possible to construct a linearized dynamical model of the longitudinal dynamics of a specific airship from geometric and aerodynamic data pertaining to that airship. (It is also planned to develop a model of the lateral dynamics by use of the same methods.) All of the aerodynamic data needed to construct the model of a specific airship can be obtained from wind-tunnel testing and computational fluid dynamics

  10. Puerto Rico water resources planning model program description

    USGS Publications Warehouse

    Moody, D.W.; Maddock, Thomas; Karlinger, M.R.; Lloyd, J.J.

    1973-01-01

Because the use of the Mathematical Programming System-Extended (MPSX) to solve large linear and mixed integer programs requires the preparation of many input data cards, a matrix generator program that produces the MPSX input data from a much more limited set of data may expedite the use of the mixed integer programming optimization technique. The Model Definition and Control Program (MODCOP) is intended to assist a planner in preparing MPSX input data for the Puerto Rico Water Resources Planning Model. The model utilizes a mixed-integer mathematical program to identify a minimum present cost set of water resources projects (diversions, reservoirs, ground-water fields, desalinization plants, water treatment plants, and inter-basin transfers of water) which will meet a set of future water demands and to determine their sequence of construction. While MODCOP was specifically written to generate MPSX input data for the planning model described in this report, the program can be easily modified to reflect changes in the model's mathematical structure.
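The matrix-generator idea can be sketched as a function that emits a heavily simplified MPS deck from coefficient data. The fragment below covers only the NAME/ROWS/COLUMNS/RHS sections with '<=' rows and free-format spacing; real MPSX decks also carry RANGES, BOUNDS, MARKER records for integer variables, and fixed column positions.

```python
def write_mps(name, objective, constraints):
    """Emit a minimal MPS deck. `objective` maps column -> cost coefficient;
    `constraints` is a list of (row_name, {column: coef}, rhs), all '<=' rows.
    A sketch only: real MPSX input has more sections and fixed-field layout."""
    lines = [f"NAME          {name}", "ROWS", " N  COST"]
    lines += [f" L  {r}" for r, _, _ in constraints]
    lines.append("COLUMNS")
    cols = sorted(set(objective) | {c for _, coefs, _ in constraints for c in coefs})
    for col in cols:
        if col in objective:
            lines.append(f"    {col}  COST  {objective[col]}")
        for r, coefs, _ in constraints:
            if col in coefs:
                lines.append(f"    {col}  {r}  {coefs[col]}")
    lines.append("RHS")
    lines += [f"    RHS1  {r}  {rhs}" for r, _, rhs in constraints]
    lines.append("ENDATA")
    return "\n".join(lines)

# Hypothetical two-variable, one-constraint model.
deck = write_mps("PRWATER", {"X1": 2, "X2": 3},
                 [("CAP", {"X1": 1, "X2": 1}, 10)])
```

A generator of this kind is what lets a planner supply only project costs, capacities, and demands while the full solver input deck is produced mechanically, which is the role MODCOP plays for the planning model.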

  11. Improving the Performance of a Mixed-Integer Production Scheduling Model for LKAB’s Iron Ore Mine, Kiruna, Sweden

    DTIC Science & Technology

    2006-05-01

on a Sunblade 1000 computer with 1 GB RAM, while also conducting additional runs calculating lower bounds on a Beowulf Parallel Cluster with 96...problem instances, allowing us to reduce the optimality gap, as shown in the third column of the table. We perform these lengthy runs on a Beowulf

  12. LINEAR - DERIVATION AND DEFINITION OF A LINEAR AIRCRAFT MODEL

    NASA Technical Reports Server (NTRS)

    Duke, E. L.

    1994-01-01

    The Derivation and Definition of a Linear Model program, LINEAR, provides the user with a powerful and flexible tool for the linearization of aircraft aerodynamic models. LINEAR was developed to provide a standard, documented, and verified tool to derive linear models for aircraft stability analysis and control law design. Linear system models define the aircraft system in the neighborhood of an analysis point and are determined by the linearization of the nonlinear equations defining vehicle dynamics and sensors. LINEAR numerically determines a linear system model using nonlinear equations of motion and a user supplied linear or nonlinear aerodynamic model. The nonlinear equations of motion used are six-degree-of-freedom equations with stationary atmosphere and flat, nonrotating earth assumptions. LINEAR is capable of extracting both linearized engine effects, such as net thrust, torque, and gyroscopic effects and including these effects in the linear system model. The point at which this linear model is defined is determined either by completely specifying the state and control variables, or by specifying an analysis point on a trajectory and directing the program to determine the control variables and the remaining state variables. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to provide easy selection of state, control, and observation variables to be used in a particular model. Thus, the order of the system model is completely under user control. Further, the program provides the flexibility of allowing alternate formulations of both the state and observation equations. Data describing the aircraft and the test case is input to the program through a terminal or formatted data files. All data can be modified interactively from case to case. The aerodynamic model can be defined in two ways: a set of nondimensional stability and control derivatives for the flight point of

  13. Graph-Switching Based Modeling of Mode Transition Constraints for Model Predictive Control of Hybrid Systems

    NASA Astrophysics Data System (ADS)

    Kobayashi, Koichi; Hiraishi, Kunihiko

The model predictive/optimal control problem for hybrid systems reduces to a mixed integer quadratic programming (MIQP) problem. However, the MIQP problem has one serious weakness: the computation time required to solve it is too long for practical plants. Several approaches exist for overcoming this technical issue. In this paper, we focus on the modeling of mode transition constraints, which are expressed by a directed graph, and propose a new method to represent such a graph. The effectiveness of the proposed method is shown by numerical examples on linear switched systems and piecewise linear systems.
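The role of the transition graph can be sketched by enumerating the mode sequences it admits (the toy graph below is hypothetical); in the MIQP these restrictions become linear constraints linking binary mode indicators across consecutive time steps.

```python
from itertools import product

# Hypothetical mode-transition graph: (i, j) means mode i may switch to j.
edges = {(0, 0), (0, 1), (1, 1), (1, 2), (2, 2), (2, 0)}

def feasible_sequences(n_modes, horizon, start):
    """All mode sequences the graph admits from `start`. In the MIQP these
    restrictions appear as linear constraints on binary indicators delta[i][t],
    e.g. delta[j][t+1] <= sum of delta[i][t] over predecessors i of j."""
    seqs = []
    for seq in product(range(n_modes), repeat=horizon):
        path = (start,) + seq
        if all((a, b) in edges for a, b in zip(path, path[1:])):
            seqs.append(path)
    return seqs
```

Because mode 2 is not reachable from mode 0 in one step here, every admitted sequence defers it by at least one period; encoding exactly this reachability structure compactly is what reduces the number of binaries and speeds up the MIQP solve.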

  14. A two-stage mixed-integer fuzzy programming with interval-valued membership functions approach for flood-diversion planning.

    PubMed

    Wang, S; Huang, G H

    2013-03-15

    Flood disasters have been extremely severe in recent decades, and they account for about one third of all natural catastrophes throughout the world. In this study, a two-stage mixed-integer fuzzy programming with interval-valued membership functions (TMFP-IMF) approach is developed for flood-diversion planning under uncertainty. TMFP-IMF integrates the fuzzy flexible programming, two-stage stochastic programming, and integer programming within a general framework. A concept of interval-valued fuzzy membership function is introduced to address complexities of system uncertainties. TMFP-IMF can not only deal with uncertainties expressed as fuzzy sets and probability distributions, but also incorporate pre-regulated water-diversion policies directly into its optimization process. TMFP-IMF is applied to a hypothetical case study of flood-diversion planning for demonstrating its applicability. Results indicate that reasonable solutions can be generated for binary and continuous variables. A variety of flood-diversion and capacity-expansion schemes can be obtained under four scenarios, which enable decision makers (DMs) to identify the most desired one based on their perceptions and attitudes towards the objective-function value and constraints.

  15. PILOT_PROTEIN: Identification of unmodified and modified proteins via high-resolution mass spectrometry and mixed-integer linear optimization

    PubMed Central

    Baliban, Richard C.; DiMaggio, Peter A.; Plazas-Mayorca, Mariana D.; Garcia, Benjamin A.; Floudas, Christodoulos A.

    2012-01-01

    A novel protein identification framework, PILOT_PROTEIN, has been developed to construct a comprehensive list of all unmodified proteins that are present in a living sample. It uses the peptide identification results from the PILOT_SEQUEL algorithm to initially determine all unmodified proteins within the sample. Using a rigorous biclustering approach that groups incorrect peptide sequences with other homologous sequences, the number of false positives reported is minimized. A sequence tag procedure is then incorporated along with the untargeted PTM identification algorithm, PILOT_PTM, to determine a list of all modification types and sites for each protein. The unmodified protein identification algorithm, PILOT_PROTEIN, is compared to the methods SEQUEST, InsPecT, X!Tandem, VEMS, and ProteinProspector using both prepared protein samples and a more complex chromatin digest. The algorithm demonstrates superior protein identification accuracy with a lower false positive rate. All materials are freely available to the scientific community at http://pumpd.princeton.edu. PMID:22788846

  16. Linear systems, and ARMA- and Fliess models

    NASA Astrophysics Data System (ADS)

    Lomadze, Vakhtang; Khurram Zafar, M.

    2010-10-01

    Linear (dynamical) systems are central objects of study in linear system theory, and ARMA- and Fliess models are two very important classes of models used to represent them. This article is concerned with the relationship between them in the higher-dimensional case. It is shown that the category of linear systems, the 'weak' category of ARMA-models and the category of Fliess models are equivalent to each other.

  17. Linear Logistic Test Modeling with R

    ERIC Educational Resources Information Center

    Baghaei, Purya; Kubinger, Klaus D.

    2015-01-01

    The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The…

  18. Nonlinear Modeling by Assembling Piecewise Linear Models

    NASA Technical Reports Server (NTRS)

    Yao, Weigang; Liou, Meng-Sing

    2013-01-01

    To preserve the nonlinearity of a full order system over a parameter range of interest, we propose a simple modeling approach by assembling a set of piecewise local solutions, including the first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model remains robust and accurate for inputs considerably different from the base trajectory in form and magnitude. This modeling preserves the nonlinearity of the problems considered in a rather simple and accurate manner.

  19. Investigating data envelopment analysis model with potential improvement for integer output values

    NASA Astrophysics Data System (ADS)

    Hussain, Mushtaq Taleb; Ramli, Razamin; Khalid, Ruzelan

    2015-12-01

    In a DEA model, decreasing input proportions corresponds to input reduction. This reduction is apparently good for the economy, since it can cut unnecessary resource costs. In some situations, however, reducing relevant inputs such as labour can create social problems; such inputs should thus be maintained or increased. This paper develops an advanced radial DEA model, formulated as a mixed integer linear program, to improve integer output values through the combination of inputs. The model can deal with real input values and integer output values. It is valuable for situations, faced by most organizations, in which inputs must be combined to improve integer output values.

  20. Composite Linear Models | Division of Cancer Prevention

    Cancer.gov

    By Stuart G. Baker The composite linear models software is a matrix approach to compute maximum likelihood estimates and asymptotic standard errors for models for incomplete multinomial data. It implements the method described in Baker SG. Composite linear models for incomplete multinomial data. Statistics in Medicine 1994;13:609-622. The software includes a library of thirty examples from the literature. |

  1. MILP model for resource disruption in parallel processor system

    NASA Astrophysics Data System (ADS)

    Nordin, Syarifah Zyurina; Caccetta, Louis

    2015-02-01

    In this paper, we consider the existence of disruption in an unrelated parallel processor scheduling system. The disruption occurs due to a resource shortage, where one of the parallel processors breaks down during task allocation, which affects the initial scheduling plan. Our objective is to reschedule the original unrelated parallel processor schedule after the resource disruption so as to minimize the makespan. A mixed integer linear programming model is presented for the recovery scheduling that considers the post-disruption policy. We conduct a computational experiment with different stopping time limits to evaluate the performance of the model, using the CPLEX 12.1 solver in AIMMS 3.10 software.
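
    As a hedged sketch of the recovery-scheduling idea (not the paper's MILP or its CPLEX solution), the tiny instance below reassigns the remaining tasks to the surviving processors and returns the minimum makespan by exhaustive enumeration; the processing times are invented.

```python
from itertools import product

# Toy recovery scheduling after a processor breakdown: reassign tasks to
# the two surviving unrelated processors to minimize makespan. The tiny
# instance is solved by brute force instead of a MILP solver.

p = [[3, 5], [4, 2], [6, 7], [2, 3]]  # p[task][processor] = processing time

def min_makespan(times):
    n_proc = len(times[0])
    best = float("inf")
    for assign in product(range(n_proc), repeat=len(times)):
        loads = [0] * n_proc
        for task, proc in enumerate(assign):
            loads[proc] += times[task][proc]   # unrelated: time depends on pair
        best = min(best, max(loads))           # makespan = most loaded processor
    return best
```

    Brute force is exponential in the number of tasks, which is exactly why the paper resorts to a MILP formulation and a commercial solver for realistic instances.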

  2. A MILP-Based Distribution Optimal Power Flow Model for Microgrid Operation

    SciTech Connect

    Liu, Guodong; Starke, Michael R; Zhang, Xiaohu; Tomsovic, Kevin

    2016-01-01

    This paper proposes a distribution optimal power flow (D-OPF) model for the operation of microgrids. The proposed model minimizes not only the operating cost, including fuel cost, purchasing cost and demand charge, but also several performance indices, including voltage deviation, network power loss and power factor. It co-optimizes the real and reactive power from distributed generators (DGs) and batteries, considering their capacity and power factor limits. The D-OPF problem is formulated as a mixed-integer linear program (MILP). Numerical simulation results show the effectiveness of the proposed model.

  3. Menu-Driven Solver Of Linear-Programming Problems

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.; Ferencz, D.

    1992-01-01

    Program assists inexperienced user in formulating linear-programming problems. A Linear Program Solver (ALPS) computer program is full-featured LP analysis program. Solves plain linear-programming problems as well as more-complicated mixed-integer and pure-integer programs. Also contains efficient technique for solution of purely binary linear-programming problems. Written entirely in IBM's APL2/PC software, Version 1.01. Packed program contains licensed material, property of IBM (copyright 1988, all rights reserved).

  4. Extended Generalized Linear Latent and Mixed Model

    ERIC Educational Resources Information Center

    Segawa, Eisuke; Emery, Sherry; Curry, Susan J.

    2008-01-01

    The generalized linear latent and mixed modeling (GLLAMM) framework includes many models, such as hierarchical and structural equation models. However, GLLAMM cannot currently accommodate some models because it does not allow some parameters to be random. GLLAMM is extended to overcome this limitation by adding a submodel that specifies a…

  5. Spaghetti Bridges: Modeling Linear Relationships

    ERIC Educational Resources Information Center

    Kroon, Cindy D.

    2016-01-01

    Mathematics and science are natural partners. One of many examples of this partnership occurs when scientific observations provide data that can be used for mathematical modeling. Developing mathematical relationships elucidates such scientific principles. The article describes a data-collection activity in which students employ…

  6. Reasons for Hierarchical Linear Modeling: A Reminder.

    ERIC Educational Resources Information Center

    Wang, Jianjun

    1999-01-01

    Uses examples of hierarchical linear modeling (HLM) at local and national levels to illustrate proper applications of HLM and dummy variable regression. Raises cautions about the circumstances under which hierarchical data do not need HLM. (SLD)

  7. Aircraft engine mathematical model - linear system approach

    NASA Astrophysics Data System (ADS)

    Rotaru, Constantin; Roateşi, Simona; Cîrciu, Ionicǎ

    2016-06-01

    This paper examines a simplified mathematical model of the aircraft engine, based on the theory of linear and nonlinear systems. The dynamics of the engine were represented by a linear, time-variant model near a nominal operating point within a finite time interval. The linearized equations were expressed in matrix form, suitable for incorporation in the MAPLE program solver. The behavior of the engine was described in terms of the variation of rotational speed following a deflection of the throttle. The engine inlet parameters can cover a wide range of altitudes and Mach numbers.

  8. A Vernacular for Linear Latent Growth Models

    ERIC Educational Resources Information Center

    Hancock, Gregory R.; Choi, Jaehwa

    2006-01-01

    In its most basic form, latent growth modeling (latent curve analysis) allows an assessment of individuals' change in a measured variable X over time. For simple linear models, as with other growth models, parameter estimates associated with the a construct (amount of X at a chosen temporal reference point) and b construct (growth in X per unit…

  9. Semi-Parametric Generalized Linear Models.

    DTIC Science & Technology

    1985-08-01

    Only OCR fragments of this abstract survive: "…is nonsingular, upper triangular, and of full rank r. It is known (Dongarra et al., 1979) that G(-1)F(T) is the Moore-Penrose inverse of L…" (Semi-Parametric Generalized Linear Models, Mathematics Research Center, University of Wisconsin-Madison, August 1985.)

  10. Congeneric Models and Levine's Linear Equating Procedures.

    ERIC Educational Resources Information Center

    Brennan, Robert L.

    In 1955, R. Levine introduced two linear equating procedures for the common-item non-equivalent populations design. His procedures make the same assumptions about true scores; they differ in terms of the nature of the equating function used. In this paper, two parameterizations of a classical congeneric model are introduced to model the variables…

  11. Application of Hierarchical Linear Models/Linear Mixed-Effects Models in School Effectiveness Research

    ERIC Educational Resources Information Center

    Ker, H. W.

    2014-01-01

    Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data and compares the data analytic results from three regression…

  12. Space Surveillance Network Scheduling Under Uncertainty: Models and Benefits

    NASA Astrophysics Data System (ADS)

    Valicka, C.; Garcia, D.; Staid, A.; Watson, J.; Rintoul, M.; Hackebeil, G.; Ntaimo, L.

    2016-09-01

    Advances in space technologies continue to reduce the cost of placing satellites in orbit. With more entities operating space vehicles, the number of orbiting vehicles and debris has reached unprecedented levels, and the number continues to grow. Sensor operators responsible for maintaining the space catalog and providing space situational awareness face increasingly complex and demanding scheduling requirements. Despite these trends, a lack of advanced tools continues to prevent sensor planners and operators from fully utilizing space surveillance resources. One key challenge involves optimally selecting sensors from a network of varying capabilities for missions with differing requirements. Another open challenge, the primary focus of our work, is building robust schedules that effectively plan for uncertainties associated with weather, ad hoc collections, and other target uncertainties. Existing tools and techniques are not amenable to rigorous analysis of schedule optimality and do not adequately address the presented challenges. Building on prior research, we have developed stochastic mixed-integer linear optimization models to address uncertainty due to weather's effect on collection quality. By making use of the open source Pyomo optimization modeling software, we have posed and solved sensor network scheduling models addressing both forms of uncertainty. We present herein models that allow for concurrent scheduling of collections with the same sensor configuration and for proactively scheduling against uncertain ad hoc collections. The suitability of stochastic mixed-integer linear optimization for building sensor network schedules under different run-time constraints will be discussed.

  13. Are all Linear Paired Comparison Models Equivalent

    DTIC Science & Technology

    1990-09-01

    Previous authors (Jackson and Fleckenstein 1957, Mosteller 1958, Noether 1960) have found that different models of paired comparisons data lead to similar… …exponential distribution with a location parameter (Mosteller 1958, Noether 1960). Formal statements describing the limiting behavior of the gamma… …models that are not convolution-type linear models (the uniform model considered by Smith (1956), Mosteller (1958), Noether (1960)) and other convolution…

  14. Managing Clustered Data Using Hierarchical Linear Modeling

    ERIC Educational Resources Information Center

    Warne, Russell T.; Li, Yan; McKyer, E. Lisako J.; Condie, Rachel; Diep, Cassandra S.; Murano, Peter S.

    2012-01-01

    Researchers in nutrition research often use cluster or multistage sampling to gather participants for their studies. These sampling methods often produce violations of the assumption of data independence that most traditional statistics share. Hierarchical linear modeling is a statistical method that can overcome violations of the independence…

  15. ALPS: A Linear Program Solver

    NASA Technical Reports Server (NTRS)

    Ferencz, Donald C.; Viterna, Larry A.

    1991-01-01

    ALPS is a computer program which can be used to solve general linear program (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve integer, mixed integer, and binary type problems. Pure linear programs are solved with the revised simplex method. Integer or mixed integer programs are solved initially with the revised simplex and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC compatible computer are included in the appendices along with a general introduction to linear programming. A programmers guide is also included for assistance in modifying and maintaining the program.
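
    The branch-and-bound step mentioned above can be illustrated on a knapsack-shaped integer program. ALPS itself bounds each node with the revised simplex; the sketch below substitutes a greedy fractional-relaxation bound, which plays the same pruning role. The instance data are invented.

```python
# Branch-and-bound sketch for a 0/1 knapsack integer program. Each node
# fixes a prefix of the binary variables; the LP relaxation (here solved
# greedily as a fractional knapsack) gives an upper bound used to prune.

def lp_bound(values, weights, cap, fixed):
    """Upper bound: take items fixed to 1, then fill greedily, allowing a fraction."""
    total_v = sum(v for v, take in zip(values, fixed) if take == 1)
    total_w = sum(w for w, take in zip(weights, fixed) if take == 1)
    if total_w > cap:
        return -1.0  # infeasible node
    free = [(v / w, v, w) for v, w, take in zip(values, weights, fixed) if take is None]
    for ratio, v, w in sorted(free, reverse=True):
        if total_w + w <= cap:
            total_w += w
            total_v += v
        else:
            total_v += ratio * (cap - total_w)  # fractional fill ends the relaxation
            break
    return total_v

def branch_and_bound(values, weights, cap):
    best = 0.0
    stack = [[None] * len(values)]  # None = undecided variable
    while stack:
        fixed = stack.pop()
        bound = lp_bound(values, weights, cap, fixed)
        if bound <= best:
            continue            # prune: relaxation cannot beat the incumbent
        if None not in fixed:
            best = bound        # all-integer leaf improves the incumbent
            continue
        i = fixed.index(None)   # branch on the first undecided variable
        for choice in (1, 0):
            child = fixed[:]
            child[i] = choice
            stack.append(child)
    return best

best = branch_and_bound([10, 13, 7], [4, 6, 3], 9)
```

    The same skeleton works with any relaxation solver in place of `lp_bound`, which is why simplex-based LP codes like ALPS can reuse their LP machinery for integer problems.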

  16. Non-linear memristor switching model

    NASA Astrophysics Data System (ADS)

    Chernov, A. A.; Islamov, D. R.; Pik'nik, A. A.

    2016-10-01

    We introduce a thermodynamical model of filament growth when a current pulse flows through a memristor. The model is a boundary value problem comprising a nonstationary heat conduction equation with a non-linear Joule heat source, a Poisson equation, and Shockley-Read-Hall equations taking into account strong electron-phonon interactions in trap ionization and charge transport processes. The charge current, which defines the heating in the model, depends on the rate of oxygen vacancy generation, which in turn depends on the local temperature. The solution of this problem allows one to describe the kinetics of the switching process and the final filament morphology.

  17. [From clinical judgment to linear regression model].

    PubMed

    Palacios-Cruz, Lino; Pérez, Marcela; Rivas-Ruiz, Rodolfo; Talavera, Juan O

    2013-01-01

    When we think about mathematical models, such as the linear regression model, we assume these terms are used only by those engaged in research, a notion far from the truth. Legendre described the first mathematical model in 1805, and Galton introduced the formal term in 1886. Linear regression is one of the most commonly used regression models in clinical practice. It is useful for predicting or showing the relationship between two or more variables, as long as the dependent variable is quantitative and normally distributed. Stated another way, regression is used to predict a measure based on knowledge of at least one other variable. The first objective of linear regression is to determine the slope or inclination of the regression line: Y = a + bx, where "a" is the intercept or regression constant, equivalent to the value of "Y" when "X" equals 0, and "b" (also called the slope) indicates the increase or decrease that occurs when the variable "x" increases or decreases by one unit. In the regression line, "b" is called the regression coefficient. The coefficient of determination (R(2)) indicates the importance of the independent variables in the outcome.
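
    The regression line Y = a + bx described above can be fitted by ordinary least squares in a few lines; the data points below are invented and happen to lie exactly on Y = 1 + 2x.

```python
# Ordinary least squares fit of Y = a + b*x, with the coefficient of
# determination R^2. Data are invented illustrative points.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx                      # slope (regression coefficient)
    a = my - b * mx                    # intercept: value of Y when X = 0
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1 - ss_res / ss_tot           # coefficient of determination
    return a, b, r2

a, b, r2 = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
```

    For real clinical data the points will scatter around the line and R(2) will fall below 1, quantifying how much of the outcome the independent variable explains.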

  18. User's manual for LINEAR, a FORTRAN program to derive linear aircraft models

    NASA Technical Reports Server (NTRS)

    Duke, Eugene L.; Patterson, Brian P.; Antoniewicz, Robert F.

    1987-01-01

    This report documents a FORTRAN program that provides a powerful and flexible tool for the linearization of aircraft models. The program LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model.

  19. Nonlinear damping and quasi-linear modelling.

    PubMed

    Elliott, S J; Ghandchi Tehrani, M; Langley, R S

    2015-09-28

    The mechanism of energy dissipation in mechanical systems is often nonlinear. Even though there may be other forms of nonlinearity in the dynamics, nonlinear damping is the dominant source of nonlinearity in a number of practical systems. The analysis of such systems is simplified by the fact that they show no jump or bifurcation behaviour, and indeed can often be well represented by an equivalent linear system, whose damping parameters depend on the form and amplitude of the excitation, in a 'quasi-linear' model. The diverse sources of nonlinear damping are first reviewed in this paper, before some example systems are analysed, initially for sinusoidal and then for random excitation. For simplicity, it is assumed that the system is stable and that the nonlinear damping force depends on the nth power of the velocity. For sinusoidal excitation, it is shown that the response is often also almost sinusoidal, and methods for calculating the amplitude are described based on the harmonic balance method, which is closely related to the describing function method used in control engineering. For random excitation, several methods of analysis are shown to be equivalent. In general, iterative methods need to be used to calculate the equivalent linear damper, since its value depends on the system's response, which itself depends on the value of the equivalent linear damper. The power dissipation of the equivalent linear damper, for both sinusoidal and random cases, matches that dissipated by the nonlinear damper, providing both a firm theoretical basis for this modelling approach and clear physical insight. Finally, practical examples of nonlinear damping are discussed: in microspeakers, vibration isolation, energy harvesting and the mechanical response of the cochlea.
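
    The iterative calculation of the equivalent linear damper can be sketched for a single-degree-of-freedom system under white-noise excitation, using statistical linearization for a damping force c_n|v|^(n-1)v. The relation var(v) = pi*S0/(m*c_eq) and the Gaussian moment formula are standard results; the relaxed fixed-point update and all parameter values are illustrative choices, not the paper's method.

```python
import math

# Statistical linearization of a nonlinear damper c_n*|v|**(n-1)*v on a
# SDOF system driven by white noise of spectral density S0 (mass m).
# The velocity variance depends on the equivalent damper, var_v =
# pi*S0/(m*c_eq), while the equivalent damper depends on the response,
# c_eq = c_n * E[|v|**(n+1)] / E[v**2] for Gaussian v, so the two
# relations are iterated to a fixed point.

def equivalent_damper(c_n, n, m, S0, c0=1.0):
    c_eq = c0
    for _ in range(200):
        var_v = math.pi * S0 / (m * c_eq)        # response for current damper
        moment = (var_v ** ((n - 1) / 2) * 2 ** ((n + 1) / 2)
                  * math.gamma((n + 2) / 2) / math.sqrt(math.pi))
        c_new = c_n * moment                     # damper matching dissipated power
        if abs(c_new - c_eq) < 1e-12 * c_eq:
            return c_eq
        c_eq = 0.5 * (c_eq + c_new)  # relaxed update; plain substitution oscillates
    return c_eq

c_eq = equivalent_damper(c_n=2.0, n=3, m=1.0, S0=0.5)  # cubic damping
```

    For cubic damping (n = 3) the moment formula reduces to the well-known c_eq = 3*c_n*var(v), so the fixed point has the closed form sqrt(3*c_n*pi*S0/m), which the iteration reproduces.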

  20. From spiking neuron models to linear-nonlinear models.

    PubMed

    Ostojic, Srdjan; Brunel, Nicolas

    2011-01-20

    Neurons transform time-varying inputs into action potentials emitted stochastically at a time-dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to what extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in the presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.
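
    A generic LN cascade (not the parameter-free filters derived in the paper) can be sketched as a low-pass linear filter followed by a static softplus nonlinearity; the kernel time constant, gain, and threshold below are invented.

```python
import math

# Generic linear-nonlinear (LN) cascade: a low-pass (exponential-kernel)
# linear filter followed by a static softplus nonlinearity mapping the
# filtered input to an instantaneous firing rate. Parameters are invented.

DT = 0.001                   # time step, s
TAU = 0.02                   # filter time constant, s
GAIN, THRESHOLD = 40.0, 1.0  # nonlinearity parameters (illustrative)

def ln_rate(inputs):
    """Map an input sequence to firing rates via linear filter + nonlinearity."""
    rates, filtered = [], 0.0
    decay = math.exp(-DT / TAU)
    for x in inputs:
        filtered = decay * filtered + (1 - decay) * x     # linear temporal filter
        drive = filtered - THRESHOLD
        rates.append(GAIN * math.log1p(math.exp(drive)))  # static softplus nonlinearity
    return rates

rates = ln_rate([2.0] * 100)  # step input: rate relaxes toward its steady value
```

    In the paper the filter and nonlinearity are derived analytically from the spiking model rather than chosen by hand, but the pipeline (filter, then static transformation, then rate) is the same.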

  1. B-737 Linear Autoland Simulink Model

    NASA Technical Reports Server (NTRS)

    Belcastro, Celeste (Technical Monitor); Hogge, Edward F.

    2004-01-01

    The Linear Autoland Simulink model was created to be a modular test environment for testing of control system components in commercial aircraft. The input variables, physical laws, and reference frames used are summarized. The state space theory underlying the model is surveyed and the location of the control actuators described. The equations used to realize the Dryden gust model to simulate winds and gusts are derived. A description of the pseudo-random number generation method used in the wind gust model is included. The longitudinal autopilot, lateral autopilot, automatic throttle autopilot, engine model and automatic trim devices are considered as subsystems. The experience in converting the Airlabs FORTRAN aircraft control system simulation to a graphical simulation tool (Matlab/Simulink) is described.

  2. Log-Linear Models for Gene Association

    PubMed Central

    Hu, Jianhua; Joshi, Adarsh; Johnson, Valen E.

    2009-01-01

    We describe a class of log-linear models for the detection of interactions in high-dimensional genomic data. This class of models leads to a Bayesian model selection algorithm that can be applied to data that have been reduced to contingency tables using ranks of observations within subjects, and discretization of these ranks within gene/network components. Many normalization issues associated with the analysis of genomic data are thereby avoided. A prior density based on Ewens’ sampling distribution is used to restrict the number of interacting components assigned high posterior probability, and the calculation of posterior model probabilities is expedited by approximations based on the likelihood ratio statistic. Simulation studies are used to evaluate the efficiency of the resulting algorithm for known interaction structures. Finally, the algorithm is validated in a microarray study for which it was possible to obtain biological confirmation of detected interactions. PMID:19655032

  3. User's manual for interactive LINEAR: A FORTRAN program to derive linear aircraft models

    NASA Technical Reports Server (NTRS)

    Antoniewicz, Robert F.; Duke, Eugene L.; Patterson, Brian P.

    1988-01-01

    An interactive FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft aerodynamic models is documented in this report. The program LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied linear or nonlinear aerodynamic model. The nonlinear equations of motion used are six-degree-of-freedom equations with stationary atmosphere and flat, nonrotating earth assumptions. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model.

  4. Estimating population trends with a linear model

    USGS Publications Warehouse

    Bart, J.; Collins, B.; Morrison, R.I.G.

    2003-01-01

    We describe a simple and robust method for estimating trends in population size. The method may be used with Breeding Bird Survey data, aerial surveys, point counts, or any other program of repeated surveys at permanent locations. Surveys need not be made at each location during each survey period. The method differs from most existing methods in being design based, rather than model based. The only assumptions are that the nominal sampling plan is followed and that sample size is large enough for use of the t-distribution. Simulations based on two bird data sets from natural populations showed that the point estimate produced by the linear model was essentially unbiased even when counts varied substantially and 25% of the complete data set was missing. The estimating-equation approach, often used to analyze Breeding Bird Survey data, performed similarly on one data set but had substantial bias on the second data set, in which counts were highly variable. The advantages of the linear model are its simplicity, flexibility, and that it is self-weighting. A user-friendly computer program to carry out the calculations is available from the senior author.

  5. Wealth redistribution in conservative linear kinetic models

    NASA Astrophysics Data System (ADS)

    Toscani, G.

    2009-10-01

    We introduce and discuss kinetic models for wealth distribution which include both taxation and uniform redistribution. The evolution of the continuous density of wealth obeys a linear Boltzmann equation, where the background density represents the action of an external subject on the taxation mechanism. The case in which the mean wealth is conserved is analyzed in full detail, by recovering the analytical form of the steady states. These states are probability distributions of convergent random series of a special structure, called perpetuities. Among others, the Gibbs distribution appears as a steady state in the case of total taxation and uniform redistribution.
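
    The conservative taxation-and-redistribution mechanism can be illustrated with a simple agent-based sketch (not the Boltzmann equation itself): each step taxes every agent a fixed fraction of wealth and returns the collected amount uniformly, so mean wealth is conserved exactly. The tax rate and population are invented.

```python
import random

# Agent-based sketch of conservative taxation + uniform redistribution:
# each step removes a fraction EPS of every agent's wealth and returns
# the pooled amount equally, so total (and mean) wealth is conserved.

random.seed(1)
EPS = 0.1  # tax rate (illustrative)

def step(wealth):
    collected = sum(EPS * w for w in wealth)
    share = collected / len(wealth)
    return [(1 - EPS) * w + share for w in wealth]

wealth = [random.expovariate(1.0) for _ in range(1000)]
mean0 = sum(wealth) / len(wealth)
var0 = sum((w - mean0) ** 2 for w in wealth) / len(wealth)
for _ in range(50):
    wealth = step(wealth)
```

    This deterministic rule contracts every agent toward the mean; the nontrivial steady states (perpetuities, the Gibbs distribution) in the paper arise from the random trade terms of the full kinetic model, which are omitted here.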

  6. The Piecewise Linear Reactive Flow Rate Model

    SciTech Connect

    Vitello, P; Souers, P C

    2005-07-22

    Conclusions are: (1) Early calibrations of the Piece Wise Linear reactive flow model have shown that it allows for very accurate agreement with data for a broad range of detonation wave strengths. (2) The ability to vary the rate at specific pressures has shown that corner turning involves competition between the strong wave that travels roughly in a straight line and growth at low pressure of a new wave that turns corners sharply. (3) The inclusion of a low pressure de-sensitization rate is essential to preserving the dead zone at large times as is observed.

  7. The Piece Wise Linear Reactive Flow Model

    SciTech Connect

    Vitello, P; Souers, P C

    2005-08-18

    For non-ideal explosives a wide range of behavior is observed in experiments dealing with differing sizes and geometries. A predictive detonation model must be able to reproduce many phenomena, including such effects as: variations in the detonation velocity with the radial diameter of rate sticks; slowing of the detonation velocity around gentle corners; production of dead zones for abrupt corner turning; failure of small diameter rate sticks; and failure for rate sticks with sufficiently wide cracks. Most models have been developed to explain one effect at a time. Often, changes are made in the input parameters used to fit each succeeding case, with the implication that this is sufficient for the model to be valid over differing regimes. We feel that it is important to develop a model that is able to fit experiments with one set of parameters. To address this we are creating a new generation of models that are able to produce better fits to individual data sets than prior models and to simultaneously fit distinctly different regimes of experiments. Presented here are details of our new Piece Wise Linear reactive flow model applied to LX-17.

  8. Model Selection with the Linear Mixed Model for Longitudinal Data

    ERIC Educational Resources Information Center

    Ryoo, Ji Hoon

    2011-01-01

    Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…

  9. Ira Remsen, saccharin, and the linear model.

    PubMed

    Warner, Deborah J

    2008-03-01

    While working in the chemistry laboratory at Johns Hopkins University, Constantin Fahlberg oxidized the 'ortho-sulfamide of benzoic acid' and, by chance, found the result to be incredibly sweet. Several years later, now working on his own, he termed this stuff saccharin, developed methods of making it in quantity, obtained patents on these methods, and went into production. As the industrial and scientific value of saccharin became apparent, Ira Remsen pointed out that the initial work had been done in his laboratory and at his suggestion. The ensuing argument, carried out in the courts of law and public opinion, illustrates the importance of the linear model to scientists who staked their identities on the model of disinterested research but who also craved credit for important practical results.

  10. Extension of the hybrid linear programming method to optimize simultaneously the design and operation of groundwater utilization systems

    NASA Astrophysics Data System (ADS)

    Bostan, Mohamad; Hadi Afshar, Mohamad; Khadem, Majed

    2015-04-01

    This article proposes a hybrid linear programming (LP-LP) methodology for the simultaneous optimal design and operation of groundwater utilization systems. The proposed model is an extension of an earlier LP-LP model proposed by the authors for the optimal operation of a set of existing wells. The proposed model can be used to optimally determine the number, configuration and pumping rates of the operational wells out of potential wells with fixed locations to minimize the total cost of utilizing a two-dimensional confined aquifer under steady-state flow conditions. The model is able to take into account the well installation, piping and pump installation costs in addition to the operational costs, including the cost of energy and maintenance. The solution to the problem is defined by well locations and their pumping rates, minimizing the total cost while satisfying a downstream demand, lower/upper bound on the pumping rates, and lower/upper bound on the water level drawdown at the wells. A discretized version of the differential equation governing the flow is first embedded into the model formulation as a set of additional constraints. The resulting mixed-integer highly constrained nonlinear optimization problem is then decomposed into two subproblems with different sets of decision variables, one with a piezometric head and the other with the operational well locations and the corresponding pumping rates. The binary variables representing the well locations are approximated by a continuous variable leading to two LP subproblems. Having started with a random value for all decision variables, the two subproblems are solved iteratively until convergence is achieved. The performance and ability of the proposed method are tested against a hypothetical problem from the literature and the results are presented and compared with those obtained using a mixed-integer nonlinear programming method. 
The results show the efficiency and effectiveness of the proposed method for the simultaneous optimal design and operation of groundwater utilization systems.
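The LP subproblem over pumping rates described above can be sketched as follows; the well costs, demand, and rate bounds here are illustrative assumptions, not values from the article:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data for 4 candidate wells; unit pumping costs, downstream
# demand, and per-well rate bounds are illustrative, not from the article.
costs = np.array([2.0, 1.5, 3.0, 2.5])   # operational cost per unit rate
demand = 10.0                            # downstream demand to satisfy
bounds = [(0.0, 6.0)] * 4                # lower/upper bounds on each rate

# One LP subproblem over pumping rates: linprog expects A_ub @ x <= b_ub,
# so the demand constraint sum(q) >= demand becomes -sum(q) <= -demand.
res = linprog(costs, A_ub=-np.ones((1, 4)), b_ub=[-demand], bounds=bounds)
print(res.x, res.fun)  # optimal rates and minimum total cost
```

In the full method this LP would be re-solved iteratively against the head subproblem until the two sets of decision variables converge.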

  11. Modeling patterns in data using linear and related models

    SciTech Connect

    Engelhardt, M.E.

    1996-06-01

    This report considers the use of linear models for analyzing data related to reliability and safety issues of the type usually associated with nuclear power plants. The report discusses some of the general results of linear regression analysis, such as the model assumptions and properties of the estimators of the parameters. The results are motivated with examples of operational data. Results about the important case of a linear regression model with one covariate are covered in detail. This case includes analysis of time trends. The analysis is applied with two different sets of time trend data. Diagnostic procedures and tests for the adequacy of the model are discussed. Some related methods such as weighted regression and nonlinear models are also considered. A discussion of the general linear model is also included. Appendix A gives some basic SAS programs and outputs for some of the analyses discussed in the body of the report. Appendix B is a review of some of the matrix theoretic results which are useful in the development of linear models.
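A minimal sketch of the one-covariate case discussed in the report, fitting a time trend by ordinary least squares; the data are synthetic (an exact linear trend) chosen purely for illustration:

```python
import numpy as np

# Illustrative time-trend regression: y_t = b0 + b1 * t fit by least squares.
t = np.arange(10, dtype=float)          # e.g. operating years 0..9
y = 5.0 + 0.8 * t                       # e.g. synthetic annual event counts

X = np.column_stack([np.ones_like(t), t])       # design matrix with intercept
beta, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # estimated intercept and slope of the time trend
```

With real operational data, the residuals from this fit would feed the diagnostic procedures and adequacy tests the report discusses.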

  12. Numerical linearized MHD model of flapping oscillations

    NASA Astrophysics Data System (ADS)

    Korovinskiy, D. B.; Ivanov, I. B.; Semenov, V. S.; Erkaev, N. V.; Kiehas, S. A.

    2016-06-01

    Kink-like magnetotail flapping oscillations in a Harris-like current sheet with earthward growing normal magnetic field component Bz are studied by means of time-dependent 2D linearized MHD numerical simulations. The dispersion relation and two-dimensional eigenfunctions are obtained. The results are compared with analytical estimates of the double-gradient model, which are found to be reliable for configurations with small Bz up to values ˜ 0.05 of the lobe magnetic field. Coupled with previous results, present simulations confirm that the earthward/tailward growth direction of the Bz component acts as a switch between stable/unstable regimes of the flapping mode, while the mode dispersion curve is the same in both cases. It is confirmed that flapping oscillations may be triggered by a simple Gaussian initial perturbation of the Vz velocity.

  13. Linear programming models for cost reimbursement.

    PubMed Central

    Diehr, G; Tamura, H

    1989-01-01

    Tamura, Lauer, and Sanborn (1985) reported a multiple regression approach to the problem of determining a cost reimbursement (rate-setting) formula for facilities providing long-term care (nursing homes). In this article we propose an alternative approach to this problem, using an absolute-error criterion instead of the least-squares criterion used in regression, with a variety of side constraints incorporated in the derivation of the formula. The mathematical tool for implementation of this approach is linear programming (LP). The article begins with a discussion of the desirable characteristics of a rate-setting formula. The development of a formula with these properties can be easily achieved, in terms of modeling as well as computation, using LP. Specifically, LP provides an efficient computational algorithm to minimize absolute error deviation, thus protecting rates from the effects of unusual observations in the data base. LP also offers modeling flexibility to impose a variety of policy controls. These features are not readily available if a least-squares criterion is used. Examples based on actual data are used to illustrate alternative LP models for rate setting. PMID:2759871
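The absolute-error criterion described above can be cast as a linear program in the standard way; the toy data and matrix construction below are illustrative, not the article's rate-setting model:

```python
import numpy as np
from scipy.optimize import linprog

# Toy rate-setting data (illustrative): one cost driver x, observed cost y,
# with one unusual observation that would distort a least-squares fit.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 2.0, 3.0, 10.0])
n = len(x)
X = np.column_stack([np.ones(n), x])    # design matrix [1, x]

# Least-absolute-deviations as an LP: variables are [b0, b1, e1..en];
# minimise sum(e) subject to  X@b - e <= y  and  -X@b - e <= -y,
# which together enforce e_i >= |y_i - (X@b)_i|.
c = np.concatenate([np.zeros(2), np.ones(n)])
A_ub = np.block([[X, -np.eye(n)], [-X, -np.eye(n)]])
b_ub = np.concatenate([y, -y])
bounds = [(None, None)] * 2 + [(0, None)] * n

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.fun)  # total absolute deviation of the fitted formula
```

The policy side constraints mentioned in the article would enter as additional rows of `A_ub`.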

  14. Evaluating the double Poisson generalized linear model.

    PubMed

    Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique

    2013-10-01

    The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant) which is not available in closed form. This study proposed a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similar to the NB GLM. Considering the fact that the DP GLM can be easily estimated with inexpensive computation and that it is simpler to interpret coefficients, it offers a flexible and efficient alternative for researchers to model count data.
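A sketch of the normalizing-constant issue discussed above, using Efron's form of the double Poisson density; the brute-force truncated sum and the closed-form correction shown here are the standard textbook versions, not necessarily the article's new approximation method:

```python
import numpy as np
from scipy.special import gammaln

def dp_unnormalized_logpmf(y, mu, theta):
    """Log of the unnormalised double-Poisson density (Efron's form)."""
    y = np.asarray(y, dtype=float)
    ysafe = np.where(y > 0, y, 1.0)     # avoid log(0); the y = 0 terms vanish
    t = y * np.log(ysafe) - y                            # log(e^{-y} y^y)
    c = theta * y * (1.0 + np.log(mu) - np.log(ysafe))   # log((e*mu/y)^{theta*y})
    return 0.5 * np.log(theta) - theta * mu + t - gammaln(y + 1) + c

mu, theta = 5.0, 0.5
ys = np.arange(0, 200)
# Numeric normalizing sum (truncated) vs. Efron's closed-form approximation.
norm_sum = np.exp(dp_unnormalized_logpmf(ys, mu, theta)).sum()
approx = 1.0 + (1.0 - theta) / (12.0 * theta * mu) * (1.0 + 1.0 / (theta * mu))
print(norm_sum, approx)
```

Note that at theta = 1 the density reduces exactly to the ordinary Poisson, a useful sanity check on any approximation scheme.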

  15. Piecewise Linear-Linear Latent Growth Mixture Models with Unknown Knots

    ERIC Educational Resources Information Center

    Kohli, Nidhi; Harring, Jeffrey R.; Hancock, Gregory R.

    2013-01-01

    Latent growth curve models with piecewise functions are flexible and useful analytic models for investigating individual behaviors that exhibit distinct phases of development in observed variables. As an extension of this framework, this study considers a piecewise linear-linear latent growth mixture model (LGMM) for describing segmented change of…

  16. From linear to generalized linear mixed models: A case study in repeated measures

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Compared to traditional linear mixed models, generalized linear mixed models (GLMMs) can offer better correspondence between response variables and explanatory models, yielding more efficient estimates and tests in the analysis of data from designed experiments. Using proportion data from a designed...

  17. Comparison of linear and non-linear models for predicting energy expenditure from raw accelerometer data.

    PubMed

    Montoye, Alexander H K; Begum, Munni; Henning, Zachary; Pfeiffer, Karin A

    2017-02-01

    This study had three purposes, all related to evaluating energy expenditure (EE) prediction accuracy from body-worn accelerometers: (1) compare linear regression to linear mixed models, (2) compare linear models to artificial neural network (ANN) models, and (3) compare accuracy of accelerometers placed on the hip, thigh, and wrists. Forty individuals performed 13 activities in a 90 min semi-structured, laboratory-based protocol. Participants wore accelerometers on the right hip, right thigh, and both wrists and a portable metabolic analyzer (EE criterion). Four EE prediction models were developed for each accelerometer: linear regression, linear mixed, and two ANN models. EE prediction accuracy was assessed using correlations, root mean square error (RMSE), and bias and was compared across models and accelerometers using repeated-measures analysis of variance. For all accelerometer placements, there were no significant differences in correlations or RMSE between linear regression and linear mixed models (correlations: r = 0.71-0.88, RMSE: 1.11-1.61 METs; p > 0.05). For the thigh-worn accelerometer, there were no differences in correlations or RMSE between linear and ANN models (ANN correlations: r = 0.89, RMSE: 1.07-1.08 METs; linear models correlations: r = 0.88, RMSE: 1.10-1.11 METs; p > 0.05). Conversely, one ANN had higher correlations and lower RMSE than both linear models for the hip (ANN correlation: r = 0.88, RMSE: 1.12 METs; linear models correlations: r = 0.86, RMSE: 1.18-1.19 METs; p < 0.05), and both ANNs had higher correlations and lower RMSE than both linear models for the wrist-worn accelerometers (ANN correlations: r = 0.82-0.84, RMSE: 1.26-1.32 METs; linear models correlations: r = 0.71-0.73, RMSE: 1.55-1.61 METs; p < 0.01). For studies using wrist-worn accelerometers, machine learning models offer a significant improvement in EE prediction accuracy.
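The three accuracy metrics used in the study (correlation, RMSE, and bias) can be computed as follows; the MET values are invented for illustration:

```python
import numpy as np

# Illustrative criterion vs. predicted energy expenditure, in METs.
measured = np.array([2.0, 3.5, 5.0, 6.5, 8.0])    # metabolic-analyzer values
predicted = np.array([2.4, 3.2, 5.3, 6.1, 8.5])   # a model's predictions

rmse = np.sqrt(np.mean((predicted - measured) ** 2))   # root mean square error
bias = np.mean(predicted - measured)                   # mean signed error
r = np.corrcoef(measured, predicted)[0, 1]             # Pearson correlation
print(rmse, bias, r)
```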

  18. The effect of non-linear human visual system components on linear model observers

    NASA Astrophysics Data System (ADS)

    Zhang, Yani; Pham, Binh T.; Eckstein, Miguel P.

    2004-05-01

    Linear model observers have been used successfully to predict human performance in clinically relevant visual tasks for a variety of backgrounds. On the other hand, there has been another family of models used to predict human visual detection of signals superimposed on one of two identical backgrounds (masks). These masking models usually include a number of non-linear components in the channels that reflect properties of the firing of cells in the primary visual cortex (V1). The relationship between these two traditions of models has not been extensively investigated in the context of detection in noise. In this paper, we evaluated the effect of including some of these non-linear components in a linear channelized Hotelling observer (CHO), and the associated practical implications for medical image quality evaluation. In particular, we evaluate whether the rank order evaluation of two compression algorithms (JPEG vs. JPEG 2000) is changed by inclusion of the non-linear components. The results show that: (a) the simpler linear CHO model observer outperforms the CHO model with the non-linear components investigated; and (b) the rank order of model observer performance for the compression algorithms did not vary when the non-linear components were included. For the present task, the results suggest that the addition of the physiologically based channel non-linearities to a channelized Hotelling observer might add complexity to the model observers without great impact on medical image quality evaluation.

  19. Permutation inference for the general linear model

    PubMed Central

    Winkler, Anderson M.; Ridgway, Gerard R.; Webster, Matthew A.; Smith, Stephen M.; Nichols, Thomas E.

    2014-01-01

    Permutation methods can provide exact control of false positives and allow the use of non-standard statistics, making only weak assumptions about the data. With the availability of fast and inexpensive computing, their main limitation would be some lack of flexibility to work with arbitrary experimental designs. In this paper we report on results on approximate permutation methods that are more flexible with respect to the experimental design and nuisance variables, and conduct detailed simulations to identify the best method for settings that are typical for imaging research scenarios. We present a generic framework for permutation inference for complex general linear models (glms) when the errors are exchangeable and/or have a symmetric distribution, and show that, even in the presence of nuisance effects, these permutation inferences are powerful while providing excellent control of false positives in a wide range of common and relevant imaging research scenarios. We also demonstrate how the inference on glm parameters, originally intended for independent data, can be used in certain special but useful cases in which independence is violated. Detailed examples of common neuroimaging applications are provided, as well as a complete algorithm – the “randomise” algorithm – for permutation inference with the glm. PMID:24530839
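A minimal sketch of the sign-flipping variant of permutation inference described above, for a one-sample test valid under symmetric errors; the data are synthetic and this is not the "randomise" algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic one-sample data; test H0: mean = 0 by randomly flipping signs,
# which is a valid permutation scheme when errors are symmetric about zero.
data = np.array([0.8, 1.2, 0.4, 1.5, 0.9, 1.1, 0.7, 1.3])
t_obs = data.mean()

n_perm = 5000
flips = rng.choice([-1.0, 1.0], size=(n_perm, data.size))
t_null = (flips * data).mean(axis=1)
# Including the observed statistic in the count keeps the test exact.
p = (np.sum(t_null >= t_obs) + 1) / (n_perm + 1)
print(p)
```

For a full GLM, the same idea applies to a contrast of regression coefficients after accounting for nuisance variables, which is where the methods compared in the paper differ.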

  20. Linearized Functional Minimization for Inverse Modeling

    SciTech Connect

    Wohlberg, Brendt; Tartakovsky, Daniel M.; Dentz, Marco

    2012-06-21

    Heterogeneous aquifers typically consist of multiple lithofacies, whose spatial arrangement significantly affects flow and transport. The estimation of these lithofacies is complicated by the scarcity of data and by the lack of a clear correlation between identifiable geologic indicators and attributes. We introduce a new inverse-modeling approach to estimate both the spatial extent of hydrofacies and their properties from sparse measurements of hydraulic conductivity and hydraulic head. Our approach is to minimize a functional defined on the vectors of values of hydraulic conductivity and hydraulic head fields defined on regular grids at a user-determined resolution. This functional is constructed to (i) enforce the relationship between conductivity and heads provided by the groundwater flow equation, (ii) penalize deviations of the reconstructed fields from measurements where they are available, and (iii) penalize reconstructed fields that are not piece-wise smooth. We develop an iterative solver for this functional that exploits a local linearization of the mapping from conductivity to head. This approach provides a computationally efficient algorithm that rapidly converges to a solution. A series of numerical experiments demonstrates the robustness of our approach.

  1. Approximately Integrable Linear Statistical Models in Non-Parametric Estimation

    DTIC Science & Technology

    1990-08-01

    Approximately Integrable Linear Statistical Models in Non-Parametric Estimation, by B. Ya. Levit, University of Maryland. Summary: The notion of approximately integrable linear statistical models relates to the study of the "next" order optimality in non-parametric estimation. It appears consistent to keep the exposition at present at the…

  2. Equivalent linear damping characterization in linear and nonlinear force-stiffness muscle models.

    PubMed

    Ovesy, Marzieh; Nazari, Mohammad Ali; Mahdavian, Mohammad

    2016-02-01

    In the current research, the muscle equivalent linear damping coefficient which is introduced as the force-velocity relation in a muscle model and the corresponding time constant are investigated. In order to reach this goal, a 1D skeletal muscle model was used. Two characterizations of this model using a linear force-stiffness relationship (Hill-type model) and a nonlinear one have been implemented. The OpenSim platform was used for verification of the model. The isometric activation has been used for the simulation. The equivalent linear damping and the time constant of each model were extracted by using the results obtained from the simulation. The results provide a better insight into the characteristics of each model. It is found that the nonlinear models had a response rate closer to reality compared to the Hill-type models.

  3. Recent Updates to the GEOS-5 Linear Model

    NASA Technical Reports Server (NTRS)

    Holdaway, Dan; Kim, Jong G.; Errico, Ron; Gelaro, Ronald; Mahajan, Rahul

    2014-01-01

    The Global Modeling and Assimilation Office (GMAO) is close to having a working 4DVAR system and has developed a linearized version of GEOS-5. This talk outlines a series of improvements made to the linearized dynamics, physics and trajectory. Of particular interest is the development of linearized cloud microphysics, which provides the framework for 'all-sky' data assimilation.

  4. Linear control theory for gene network modeling.

    PubMed

    Shin, Yong-Jun; Bleris, Leonidas

    2010-09-16

    Systems biology is an interdisciplinary field that aims at understanding complex interactions in cells. Here we demonstrate that linear control theory can provide valuable insight and practical tools for the characterization of complex biological networks. We provide the foundation for such analyses through the study of several case studies including cascade and parallel forms, feedback and feedforward loops. We reproduce experimental results and provide rational analysis of the observed behavior. We demonstrate that methods such as the transfer function (frequency domain) and linear state-space (time domain) can be used to predict reliably the properties and transient behavior of complex network topologies and point to specific design strategies for synthetic networks.
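A sketch of the frequency-domain (transfer function) analysis applied to a cascade form, assuming two first-order stages with illustrative gains and rate constants (not parameters from the paper):

```python
import numpy as np
from scipy import signal

# A two-stage cascade (e.g. transcription then translation) modelled as two
# first-order stages in series; gains and rate constants are illustrative.
k1, a1 = 1.0, 0.5     # stage 1: k1 / (s + a1)
k2, a2 = 2.0, 1.0     # stage 2: k2 / (s + a2)

# A series connection multiplies transfer functions: k1*k2 / ((s+a1)(s+a2)).
num = [k1 * k2]
den = np.polymul([1.0, a1], [1.0, a2])
system = signal.TransferFunction(num, den)

# Transient behavior to a step input; the steady state equals the DC gain
# k1*k2 / (a1*a2) = 4, reached once both exponential modes have decayed.
t, y = signal.step(system, T=np.linspace(0, 30, 300))
print(y[-1])
```

The same `TransferFunction` object also yields Bode plots and pole locations, the quantities one would inspect when analyzing feedback and feedforward loops.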

  5. Valuation of financial models with non-linear state spaces

    NASA Astrophysics Data System (ADS)

    Webber, Nick

    2001-02-01

    A common assumption in valuation models for derivative securities is that the underlying state variables take values in a linear state space. We discuss numerical implementation issues in an interest rate model with a simple non-linear state space, formulating and comparing Monte Carlo, finite difference and lattice numerical solution methods. We conclude that, at least in low dimensional spaces, non-linear interest rate models may be viable.

  6. Determining Predictor Importance in Hierarchical Linear Models Using Dominance Analysis

    ERIC Educational Resources Information Center

    Luo, Wen; Azen, Razia

    2013-01-01

    Dominance analysis (DA) is a method used to evaluate the relative importance of predictors that was originally proposed for linear regression models. This article proposes an extension of DA that allows researchers to determine the relative importance of predictors in hierarchical linear models (HLM). Commonly used measures of model adequacy in…

  7. Tried and True: Springing into Linear Models

    ERIC Educational Resources Information Center

    Darling, Gerald

    2012-01-01

    In eighth grade, students usually learn about forces in science class and linear relationships in math class, crucial topics that form the foundation for further study in science and engineering. An activity that links these two fundamental concepts involves measuring the distance a spring stretches as a function of how much weight is suspended…

  8. Three-Dimensional Modeling in Linear Regression.

    ERIC Educational Resources Information Center

    Herman, James D.

    Linear regression examines the relationship between one or more independent (predictor) variables and a dependent variable. By using a particular formula, regression determines the weights needed to minimize the error term for a given set of predictors. With one predictor variable, the relationship between the predictor and the dependent variable…

  9. Analysis of Modeling Assumptions used in Production Cost Models for Renewable Integration Studies

    SciTech Connect

    Stoll, Brady; Brinkman, Gregory; Townsend, Aaron; Bloom, Aaron

    2016-01-01

    Renewable energy integration studies have been published for many different regions exploring the question of how higher penetration of renewable energy will impact the electric grid. These studies each make assumptions about the systems they are analyzing; however, the effect of many of these assumptions has not yet been examined and published. In this paper we analyze the impact of modeling assumptions in renewable integration studies, including the optimization method used (linear or mixed-integer programming) and the temporal resolution of the dispatch stage (hourly or sub-hourly). We analyze each of these assumptions on a large and a small system and determine the impact of each assumption on key metrics including the total production cost, curtailment of renewables, CO2 emissions, and generator starts and ramps. Additionally, we identified the impact on these metrics if a four-hour ahead commitment step is included before the dispatch step and the impact of retiring generators to reduce the degree to which the system is overbuilt. We find that the largest effect of these assumptions is at the unit level on starts and ramps, particularly for the temporal resolution, and saw a smaller impact at the aggregate level on system costs and emissions. For each fossil fuel generator type we measured the average capacity started, average run-time per start, and average number of ramps. Linear programming results saw up to a 20% difference in number of starts and average run time of traditional generators, and up to a 4% difference in the number of ramps, when compared to mixed-integer programming. Utilizing hourly dispatch instead of sub-hourly dispatch saw no difference in coal or gas CC units for either start metric, while gas CT units had a 5% increase in the number of starts and 2% increase in the average on-time per start. The number of ramps decreased up to 44%. The smallest effect seen was on the CO2 emissions and total production cost, with a 0.8% and 0
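The linear versus mixed-integer distinction examined in the study can be illustrated on a toy commitment problem (all numbers invented); relaxing integrality changes both the commitment decisions and the cost bound:

```python
import numpy as np
from scipy.optimize import milp, Bounds, LinearConstraint

# Toy commitment problem (illustrative numbers): choose which generators to
# commit so that capacity covers demand at minimum commitment cost.
cost = np.array([500.0, 420.0, 300.0])      # cost of committing each unit
capacity = np.array([100.0, 80.0, 50.0])    # capacity if committed
demand = 150.0

con = LinearConstraint(capacity[np.newaxis, :], lb=demand)
bnds = Bounds(0, 1)

# Mixed-integer version: commitments are 0/1 decisions.
mip = milp(c=cost, constraints=con, integrality=np.ones(3), bounds=bnds)
# Linear relaxation: the same problem with fractional commitments allowed.
lp = milp(c=cost, constraints=con, integrality=np.zeros(3), bounds=bnds)

print(mip.fun, lp.fun)  # integer cost vs. (lower) relaxed cost
```

The gap between the two objective values is a small-scale analogue of the unit-level differences the study reports between LP and MIP formulations.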

  10. Estimation of non-linear growth models by linearization: a simulation study using a Gompertz function.

    PubMed

    Vuori, Kaarina; Strandén, Ismo; Sevón-Aimonen, Marja-Liisa; Mäntysaari, Esa A

    2006-01-01

    A method based on Taylor series expansion for estimation of location parameters and variance components of non-linear mixed effects models was considered. An attractive property of the method is the opportunity for an easily implemented algorithm. Estimation of non-linear mixed effects models can be done by common methods for linear mixed effects models, and thus existing programs can be used after small modifications. The applicability of this algorithm in animal breeding was studied with simulation using a Gompertz function growth model in pigs. Two growth data sets were analyzed: a full set containing observations from the entire growing period, and a truncated time trajectory set containing animals slaughtered prematurely, which is common in pig breeding. The results from the 50 simulation replicates with full data set indicate that the linearization approach was capable of estimating the original parameters satisfactorily. However, estimation of the parameters related to adult weight becomes unstable in the case of a truncated data set.
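A sketch of the linearization idea with a Gompertz growth curve: `scipy.optimize.curve_fit` iterates on a local Taylor expansion of the model, which parallels (but is not identical to) the estimation scheme described above. The data and parameters are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, k):
    """Gompertz growth curve: adult weight a, shape b, rate constant k."""
    return a * np.exp(-b * np.exp(-k * t))

# Synthetic, noise-free growth trajectory with illustrative parameters.
t = np.linspace(0, 250, 26)
true = (120.0, 4.0, 0.02)
w = gompertz(t, *true)

# Each iteration linearizes the model around the current estimate and solves
# the resulting linear least-squares problem, as in the article's approach.
est, _ = curve_fit(gompertz, t, w, p0=(100.0, 3.0, 0.03))
print(est)
```

Truncating the trajectory (dropping the late, near-asymptotic observations) is an easy way to reproduce the instability in the adult-weight parameter that the simulation study reports.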

  11. A model of large-scale instabilities in the Jovian troposphere. I - Linear model. II - Quasi-linear model

    NASA Astrophysics Data System (ADS)

    Orsolini, Y.; Leovy, C. B.

    1993-12-01

    A quasi-geostrophic midlatitude beta-plane linear model is here used to study whether the decay with height and meridional circulations of near-steady jets in the tropospheric circulation of Jupiter arise as a means of stabilizing a deep zonal flow that extends into the upper troposphere. The model results obtained are analogous to the stabilizing effect of meridional shear on baroclinic instabilities. In the second part of this work, a quasi-linear model is used to investigate how an initially barotropically unstable flow develops a quasi-steady shear zone in the lower scale heights of the model domain, due to the action of the eddy fluxes.

  12. Development of a Linear Stirling Model with Varying Heat Inputs

    NASA Technical Reports Server (NTRS)

    Regan, Timothy F.; Lewandowski, Edward J.

    2007-01-01

    The linear model of the Stirling system developed by NASA Glenn Research Center (GRC) has been extended to include a user-specified heat input. Previously developed linear models were limited to the Stirling convertor and electrical load. They represented the thermodynamic cycle with pressure factors that remained constant. The numerical values of the pressure factors were generated by linearizing GRC's non-linear System Dynamic Model (SDM) of the convertor at a chosen operating point. The pressure factors were fixed for that operating point, thus, the model lost accuracy if a transition to a different operating point were simulated. Although the previous linear model was used in developing controllers that manipulated current, voltage, and piston position, it could not be used in the development of control algorithms that regulated hot-end temperature. This basic model was extended to include the thermal dynamics associated with a hot-end temperature that varies over time in response to external changes as well as to changes in the Stirling cycle. The linear model described herein includes not only dynamics of the piston, displacer, gas, and electrical circuit, but also the transient effects of the heater head thermal inertia. The linear version algebraically couples two separate linear dynamic models, one model of the Stirling convertor and one model of the thermal system, through the pressure factors. The thermal system model includes heat flow of heat transfer fluid, insulation loss, and temperature drops from the heat source to the Stirling convertor expansion space. The linear model was compared to a nonlinear model, and performance was very similar. The resulting linear model can be implemented in a variety of computing environments, and is suitable for analysis with classical and state space controls analysis techniques.

  13. An analytically linearized helicopter model with improved modeling accuracy

    NASA Technical Reports Server (NTRS)

    Jensen, Patrick T.; Curtiss, H. C., Jr.; Mckillip, Robert M., Jr.

    1991-01-01

    An analytically linearized model for helicopter flight response including rotor blade dynamics and dynamic inflow, that was recently developed, was studied with the objective of increasing the understanding, the ease of use, and the accuracy of the model. The mathematical model is described along with a description of the UH-60A Black Hawk helicopter and flight test used to validate the model. To aid in utilization of the model for sensitivity analysis, a new, faster, and more efficient implementation of the model was developed. It is shown that several errors in the mathematical modeling of the system caused a reduction in accuracy. These errors in rotor force resolution, trim force and moment calculation, and rotor inertia terms were corrected along with improvements to the programming style and documentation. Use of a trim input file to drive the model is examined. Trim file errors in blade twist, control input phase angle, coning and lag angles, main and tail rotor pitch, and uniform induced velocity, were corrected. Finally, through direct comparison of the original and corrected model responses to flight test data, the effect of the corrections on overall model output is shown.

  14. Descriptive Linear modeling of steady-state visual evoked response

    NASA Technical Reports Server (NTRS)

    Levison, W. H.; Junker, A. M.; Kenner, K.

    1986-01-01

    A study is being conducted to explore use of the steady-state visual-evoked electrocortical response as an indicator of cognitive task loading. Application of linear descriptive modeling to steady-state Visual Evoked Response (VER) data is summarized. Two aspects of linear modeling are reviewed: (1) unwrapping the phase-shift portion of the frequency response, and (2) parsimonious characterization of task-loading effects in terms of changes in model parameters. Model-based phase unwrapping appears to be most reliable in applications, such as manual control, where theoretical models are available. Linear descriptive modeling of the VER has not yet been shown to provide consistent and readily interpretable results.

  15. A Model for Quadratic Outliers in Linear Regression.

    ERIC Educational Resources Information Center

    Elashoff, Janet Dixon; Elashoff, Robert M.

    This paper introduces a model for describing outliers (observations which are extreme in some sense or violate the apparent pattern of other observations) in linear regression which can be viewed as a mixture of a quadratic and a linear regression. The maximum likelihood estimators of the parameters in the model are derived and their asymptotic…

  16. Applications of the Linear Logistic Test Model in Psychometric Research

    ERIC Educational Resources Information Center

    Kubinger, Klaus D.

    2009-01-01

    The linear logistic test model (LLTM) breaks down the item parameter of the Rasch model as a linear combination of some hypothesized elementary parameters. Although the original purpose of applying the LLTM was primarily to generate test items with specified item difficulty, there are still many other potential applications, which may be of use…

  17. Neural network models for Linear Programming

    SciTech Connect

    Culioli, J.C.; Protopopescu, V.; Britton, C.; Ericson, N. )

    1989-01-01

    The purpose of this paper is to present a neural network that solves the general Linear Programming (LP) problem. In the first part, we recall Hopfield and Tank's circuit for LP and show that although it converges to stable states, it does not, in general, yield admissible solutions. This is due to the penalization treatment of the constraints. In the second part, we propose an approach based on Lagrange multipliers that converges to primal and dual admissible solutions. We also show that the duality gap (measuring the optimality) can be rendered, in principle, as small as needed. 11 refs.
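The paper's neural circuit is not reproduced here, but the primal-dual structure it exploits can be illustrated with a tiny LP pair whose duality gap is zero at the optimum (the instance is invented for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# Primal LP: min c@x  s.t.  A@x >= b, x >= 0.
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2)

# Dual LP: max b@y  s.t.  A.T@y <= c, y >= 0 (solved as a minimisation).
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(0, None)])

gap = primal.fun - (-dual.fun)   # duality gap; zero at joint optimality
print(primal.fun, -dual.fun, gap)
```

A Lagrange-multiplier network drives exactly this gap toward zero, which is why it reaches admissible primal and dual solutions where a pure penalty circuit need not.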

  18. Employment of CB models for non-linear dynamic analysis

    NASA Technical Reports Server (NTRS)

    Klein, M. R. M.; Deloo, P.; Fournier-Sicre, A.

    1990-01-01

    The non-linear dynamic analysis of large structures is always very time, effort and CPU consuming. Whenever possible the reduction of the size of the mathematical model involved is of main importance to speed up the computational procedures. Such reduction can be performed for the part of the structure which behaves linearly. Most of the time, the classical Guyan reduction process is used. For non-linear dynamic processes where the non-linearity is present at interfaces between different structures, Craig-Bampton models can provide very rich information, and allow easy selection of the relevant modes with respect to the phenomenon driving the non-linearity. The paper presents the employment of Craig-Bampton models combined with Newmark direct integration for solving non-linear friction problems appearing at the interface between the Hubble Space Telescope and its solar arrays during in-orbit maneuvers. Theory, implementation in the FEM code ASKA, and practical results are shown.

  19. Latent log-linear models for handwritten digit classification.

    PubMed

    Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann

    2012-06-01

    We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.

  20. Energy-efficient container handling using hybrid model predictive control

    NASA Astrophysics Data System (ADS)

    Xin, Jianbin; Negenborn, Rudy R.; Lodewijks, Gabriel

    2015-11-01

    The performance of container terminals needs to be improved to adapt to the growth in container volumes while maintaining sustainability. This paper provides a methodology for determining the trajectories of three key interacting machines for carrying out the so-called bay handling task, which involves transporting containers between a vessel and the stacking area in an automated container terminal. The behaviours of the interacting machines are modelled as a collection of interconnected hybrid systems. Hybrid model predictive control (MPC) is proposed to achieve optimal performance, balancing handling capacity against energy consumption. The underlying control problem is hereby formulated as a mixed-integer linear programming problem. Simulation studies illustrate that a higher penalty on energy consumption indeed leads to improved sustainability through lower energy use. Moreover, simulations illustrate how the proposed energy-efficient hybrid MPC controller performs under different types of uncertainties.

  1. Non-linear transformer modeling and simulation

    SciTech Connect

    Archer, W.E.; Deveney, M.F.; Nagel, R.L.

    1994-08-01

    Transformer models for simulation with PSpice and Analogy's Saber are being developed using experimental B-H loop and network analyzer measurements. The models are evaluated for accuracy and convergence using several test circuits. Results are presented which demonstrate the effects on circuit performance of magnetic core losses, eddy currents, and mechanical stress on the magnetic cores.

  2. A general non-linear multilevel structural equation mixture model

    PubMed Central

    Kelava, Augustin; Brandt, Holger

    2014-01-01

    In the past two decades, latent variable modeling has become a standard tool in the social sciences. In the same time period, traditional linear structural equation models have been extended to include non-linear interaction and quadratic effects (e.g., Klein and Moosbrugger, 2000), and multilevel modeling (Rabe-Hesketh et al., 2004). We present a general non-linear multilevel structural equation mixture model (GNM-SEMM) that combines recent semiparametric non-linear structural equation models (Kelava and Nagengast, 2012; Kelava et al., 2014) with multilevel structural equation mixture models (Muthén and Asparouhov, 2009) for clustered and non-normally distributed data. The proposed approach allows for semiparametric relationships at the within and between levels. We present examples from the educational sciences to illustrate different submodels from the general framework. PMID:25101022

  3. A mathematical model for municipal solid waste management - A case study in Hong Kong.

    PubMed

    Lee, C K M; Yeung, C L; Xiong, Z R; Chung, S H

    2016-12-01

    With the booming economy and increasing population, the accumulation of waste has become an increasingly arduous issue and has attracted attention from all sectors of society. Hong Kong, which has a relatively high daily per capita domestic waste generation rate for Asia, has not yet established a comprehensive waste management system. This paper conducts a review of waste management approaches and models. Researchers highlight that mathematical models provide useful information for decision-makers to select appropriate choices and save cost. It is suggested to consider municipal solid waste management in a holistic view and improve the utilization of waste management infrastructures. A mathematical model which adopts integer linear programming and mixed integer programming has been developed for Hong Kong municipal solid waste management. A sensitivity analysis was carried out to simulate different scenarios, providing decision-makers important information for establishing a Hong Kong waste management system.
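    A toy version of the kind of integer program used in such waste-management models can be written down directly; the districts, capacities, and costs below are invented for illustration, and at this size the model can be solved by brute-force enumeration rather than a MIP solver.

```python
from itertools import product

# Toy integer model in the spirit of facility-location MIPs for waste
# management: each district sends all its waste to one facility; a used
# facility must be opened (fixed cost) and has a capacity. All data are
# invented; at this scale brute-force enumeration replaces a MIP solver.
waste     = [30, 20]            # tonnes per district
capacity  = [40, 60]            # tonnes per facility
open_cost = [100, 120]          # fixed cost of opening a facility
haul_cost = [[2, 4],            # haul_cost[district][facility] per tonne
             [3, 1]]

best_cost, best_assign = float("inf"), None
for assign in product(range(2), repeat=2):       # chosen facility per district
    load = [0, 0]
    for d, f in enumerate(assign):
        load[f] += waste[d]
    if any(load[f] > capacity[f] for f in range(2)):
        continue                                  # capacity violated
    cost = sum(open_cost[f] for f in range(2) if load[f] > 0)
    cost += sum(waste[d] * haul_cost[d][f] for d, f in enumerate(assign))
    if cost < best_cost:
        best_cost, best_assign = cost, assign
```

    Here the cheap hauling to facility 1 outweighs facility 0's lower opening cost, so the optimum opens only facility 1.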

  4. Derivation and definition of a linear aircraft model

    NASA Technical Reports Server (NTRS)

    Duke, Eugene L.; Antoniewicz, Robert F.; Krambeer, Keith D.

    1988-01-01

    A linear aircraft model for a rigid aircraft of constant mass flying over a flat, nonrotating earth is derived and defined. The derivation makes no assumptions of reference trajectory or vehicle symmetry. The linear system equations are derived and evaluated along a general trajectory and include both aircraft dynamics and observation variables.

  5. Linear and Nonlinear Thinking: A Multidimensional Model and Measure

    ERIC Educational Resources Information Center

    Groves, Kevin S.; Vance, Charles M.

    2015-01-01

    Building upon previously developed and more general dual-process models, this paper provides empirical support for a multidimensional thinking style construct comprised of linear thinking and multiple dimensions of nonlinear thinking. A self-report assessment instrument (Linear/Nonlinear Thinking Style Profile; LNTSP) is presented and…

  6. A hybrid approach to modeling and control of vehicle height for electronically controlled air suspension

    NASA Astrophysics Data System (ADS)

    Sun, Xiaoqiang; Cai, Yingfeng; Wang, Shaohua; Liu, Yanling; Chen, Long

    2016-01-01

    The control problems associated with vehicle height adjustment of electronically controlled air suspension (ECAS) still pose theoretical challenges for researchers, which manifest themselves in the publications on this subject over recent years. This paper deals with modeling and control of a vehicle height adjustment system for ECAS, which is an example of a hybrid dynamical system due to the coexistence and coupling of continuous variables and discrete events. A mixed logical dynamical (MLD) modeling approach is chosen for capturing enough details of the vehicle height adjustment process. The hybrid dynamic model is constructed on the basis of some assumptions and piecewise linear approximation of component nonlinearities. Then, the on-off statuses of solenoid valves and the piecewise approximation process are described by propositional logic, and the hybrid system is transformed into a set of linear mixed-integer equalities and inequalities, denoted as the MLD model, automatically by HYSDEL. Using this model, a hybrid model predictive controller (HMPC) is tuned based on online mixed-integer quadratic optimization (MIQP). Two different scenarios are considered in the simulation, whose results verify the height adjustment effectiveness of the proposed approach. Explicit solutions of the controller are computed using an offline multi-parametric programming technique (MPT) to control the vehicle height adjustment system in real time, thus converting the controller into an equivalent explicit piecewise affine form. Finally, bench experiments for vehicle height lifting, holding and lowering procedures are conducted, which demonstrate that the HMPC can adjust the vehicle height by controlling the on-off statuses of solenoid valves directly. This research proposes a new modeling and control method for vehicle height adjustment of ECAS, which leads to a closed-loop system with favorable dynamical properties.
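    The translation of an on-off valve status into mixed-integer linear inequalities, which is the core of the MLD step above, can be illustrated with a standard big-M style encoding; the flow bounds below are invented, and this is a generic sketch rather than the paper's HYSDEL output.

```python
# Generic sketch of one MLD ingredient (bounds are invented): a flow q
# coupled to a binary valve status delta so that
#   delta = 0  =>  q = 0,   delta = 1  =>  Q_MIN <= q <= Q_MAX,
# expressed purely as linear inequalities in (q, delta).
Q_MIN, Q_MAX = 1.0, 5.0

def mld_feasible(q, delta):
    """Check the pair of mixed-integer linear inequalities."""
    return q <= Q_MAX * delta and q >= Q_MIN * delta
```

    With delta = 0 the two inequalities pin q to exactly zero; with delta = 1 they enforce the operating range, so the logical implication becomes part of a linear constraint set an MIQP solver can handle.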

  7. Modeling of linear viscoelastic space structures

    NASA Astrophysics Data System (ADS)

    McTavish, D. J.; Hughes, P. C.

    1993-01-01

    The GHM Method provides viscoelastic finite elements derived from the commonly used elastic finite elements. Moreover, these GHM elements are used directly and conveniently in second-order structural models just like their elastic counterparts. The forms of the GHM element matrices preserve the definiteness properties usually associated with finite element matrices (the mass matrix is positive definite, the stiffness matrix is nonnegative definite, and the damping matrix is positive semidefinite). In the Laplace domain, material properties are modeled phenomenologically as a sum of second-order rational functions dubbed 'minioscillator' terms. Developed originally as a tool for the analysis of damping in large flexible space structures, the GHM method is applicable to any structure which incorporates viscoelastic materials.

  8. Linear functional minimization for inverse modeling

    SciTech Connect

    Barajas-Solano, David A.; Wohlberg, Brendt Egon; Vesselinov, Velimir Valentinov; Tartakovsky, Daniel M.

    2015-06-01

    In this paper, we present a novel inverse modeling strategy to estimate spatially distributed parameters of nonlinear models. The maximum a posteriori (MAP) estimators of these parameters are based on a likelihood functional, which contains spatially discrete measurements of the system parameters and spatiotemporally discrete measurements of the transient system states. The piecewise continuity prior for the parameters is expressed via Total Variation (TV) regularization. The MAP estimator is computed by minimizing a nonquadratic objective equipped with the TV operator. We apply this inversion algorithm to estimate hydraulic conductivity of a synthetic confined aquifer from measurements of conductivity and hydraulic head. The synthetic conductivity field is composed of a low-conductivity heterogeneous intrusion into a high-conductivity heterogeneous medium. Our algorithm accurately reconstructs the location, orientation, and extent of the intrusion from the steady-state data only. Finally, addition of transient measurements of hydraulic head improves the parameter estimation, accurately reconstructing the conductivity field in the vicinity of observation locations.
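    The two ingredients of the estimator above, a data-misfit term plus a Total Variation prior, can be sketched in one spatial dimension; the data, regularization weight, and smoothing parameter below are invented, and plain gradient descent stands in for the paper's minimization scheme.

```python
import math

# 1-D sketch of TV-regularized inversion (data, lam, eps are invented):
# minimize  0.5*||u - d||^2 + lam * sum_i sqrt((u[i+1]-u[i])^2 + eps),
# a least-squares misfit plus a smoothed Total Variation penalty, by
# plain gradient descent. TV favors piecewise-constant estimates while
# preserving sharp jumps, as in the paper's intrusion reconstruction.
d = [0.0, 0.1, -0.1, 0.05, 1.0, 0.9, 1.1, 0.95]   # noisy step profile
lam, eps, step, iters = 0.3, 1e-4, 0.01, 6000

u = list(d)
for _ in range(iters):
    grad = [u[i] - d[i] for i in range(len(u))]        # misfit gradient
    for i in range(len(u) - 1):
        diff = u[i + 1] - u[i]
        g = lam * diff / math.sqrt(diff * diff + eps)  # smoothed |diff|'
        grad[i] -= g
        grad[i + 1] += g
    u = [ui - step * gi for ui, gi in zip(u, grad)]

spread = lambda xs: max(xs) - min(xs)
```

    The estimate flattens within each region while keeping the step between the two halves, which is the qualitative behavior the TV prior is chosen for.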

  9. A linear algebra model for quasispecies

    NASA Astrophysics Data System (ADS)

    García-Pelayo, Ricardo

    2002-06-01

    In the present work we propose a simple model of the population genetics of quasispecies. We show that the error catastrophe arises because in biology the mutation rates are almost zero and the mutations themselves are almost neutral. We obtain and discuss previously known results from the point of view of this model. New results are: the fitness of a sequence in terms of its abundance in the quasispecies, a formula for the stable distribution of a quasispecies in which the fitness depends only on the Hamming distance to the master sequence, the time it takes the master sequence to generate a stable quasispecies (such as in the infection by a virus), and the fitness of quasispecies.
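    The linear-algebra view of a quasispecies can be sketched with the classic mutation-selection iteration; the sequence length, fitness values, and mutation rate below are invented, and power iteration stands in for an explicit eigenvector computation.

```python
# Toy quasispecies sketch (fitness values and mutation rate invented):
# binary sequences of length 2, a master sequence "00" with higher
# fitness, per-site mutation probability mu. The stable distribution is
# the leading eigenvector of the mutation-selection matrix, found here
# by power iteration with renormalization.
L, mu = 2, 0.1
seqs = ["00", "01", "10", "11"]
fitness = [2.0, 1.0, 1.0, 1.0]          # master sequence replicates faster

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Q[i][j]: probability that replicating sequence j yields sequence i
Q = [[mu ** hamming(si, sj) * (1 - mu) ** (L - hamming(si, sj))
      for sj in seqs] for si in seqs]

p = [0.25] * 4                          # start from a uniform population
for _ in range(500):
    w = [sum(Q[i][j] * fitness[j] * p[j] for j in range(4)) for i in range(4)]
    total = sum(w)                      # mean fitness, used to renormalize
    p = [wi / total for wi in w]
```

    The stationary population concentrates on the master sequence and, by the symmetry of the toy fitness landscape, depends only on Hamming distance to it.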

  10. Failure of Tube Models to Predict the Linear Rheology of Star/Linear Blends

    NASA Astrophysics Data System (ADS)

    Hall, Ryan; Desai, Priyanka; Kang, Beomgoo; Katzarova, Maria; Huang, Qifan; Lee, Sanghoon; Chang, Taihyun; Venerus, David; Mays, Jimmy; Schieber, Jay; Larson, Ronald

    We compare predictions of two of the most advanced versions of the tube model, namely the Hierarchical model by Wang et al. (J. Rheol. 54:223, 2010) and the BOB (branch-on-branch) model by Das et al. (J. Rheol. 50:207-234, 2006), against linear viscoelastic data on blends of monodisperse star and monodisperse linear polybutadiene polymers. The star was carefully synthesized/characterized by temperature gradient interaction chromatography, and rheological data in the high frequency region were obtained through time-temperature superposition. We found massive failures of both the Hierarchical and BOB models to predict the terminal relaxation behavior of the star/linear blends, despite their success in predicting the rheology of the pure star and pure linear. This failure occurred regardless of the choices made concerning constraint release, such as assuming arm retraction in fat or skinny tubes, or allowing for disentanglement relaxation to cut off the constraint release Rouse process at long times. The failures call into question whether constraint release can be described as a combination of constraint release Rouse processes and dynamic tube dilation within a canonical tube model of entanglement interactions.

  11. Bond models in linear and nonlinear optics

    NASA Astrophysics Data System (ADS)

    Aspnes, D. E.

    2015-08-01

    Bond models, also known as polarizable-point or mechanical models, have a long history in optics, starting with the Clausius-Mossotti relation but more accurately originating with Ewald's largely forgotten work in 1912. These models describe macroscopic phenomena such as dielectric functions and nonlinear-optical (NLO) susceptibilities in terms of the physics that takes place in real space, in real time, on the atomic scale. Their strengths lie in the insights that they provide and the questions that they raise, aspects that are often obscured by quantum-mechanical treatments. Static versions were used extensively in the late 1960s and early 1970s to correlate NLO susceptibilities among bulk materials. Interest in NLO applications revived with the 2002 work of Powell et al., who showed that a fully anisotropic version reduced by more than a factor of 2 the relatively large number of parameters necessary to describe second-harmonic-generation (SHG) data for Si(111)/SiO2 interfaces. Attention now is focused on the exact physical meaning of these parameters, and on the extent to which they represent actual physical quantities.

  12. An insight into linear quarter car model accuracy

    NASA Astrophysics Data System (ADS)

    Maher, Damien; Young, Paul

    2011-03-01

    The linear quarter car model is the most widely used suspension system model. A number of authors have expressed doubts about the accuracy of the linear quarter car model in predicting the movement of a complex nonlinear suspension system. In this investigation, a quarter car rig, designed to mimic the popular MacPherson strut suspension system, is subjected to narrowband excitation at a range of frequencies using a motor-driven cam. Linear and nonlinear quarter car simulations of the rig are developed. Both isolated and operational testing techniques are used to characterise the individual suspension system components. Simulations carried out using the linear and nonlinear models are compared to measured data from the suspension test rig at selected excitation frequencies. Results show that the linear quarter car model provides a reasonable approximation of unsprung mass acceleration but significantly overpredicts sprung mass acceleration magnitude. The nonlinear simulation, featuring a trilinear shock absorber model and a nonlinear tyre, produces results which are significantly more accurate than the linear simulation results. The effect of tyre damping on the nonlinear model is also investigated for narrowband excitation. It is found to reduce the magnitude of unsprung mass acceleration peaks and contribute to an overall improvement in simulation accuracy.
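    The linear quarter-car model under discussion can be sketched as a two-mass simulation; the parameter values below are invented (loosely typical of a passenger car), and semi-implicit Euler integration is an assumption of the sketch, not the paper's method.

```python
# Minimal linear quarter-car sketch (all parameters invented): sprung
# mass ms rides on a spring/damper above unsprung mass mu, which sits
# on a tire spring. Free response from an initial sprung-mass
# displacement, integrated with semi-implicit (symplectic) Euler.
ms, mu = 300.0, 40.0        # sprung / unsprung masses [kg]
ks, cs = 20000.0, 1500.0    # suspension stiffness [N/m], damping [N s/m]
kt = 180000.0               # tire stiffness [N/m]

zs, zu, vs, vu = 0.05, 0.0, 0.0, 0.0   # initial 5 cm sprung displacement
dt = 1e-4
peak = 0.0
for _ in range(30000):                  # simulate 3 s
    f_s = -ks * (zs - zu) - cs * (vs - vu)   # force on sprung mass
    f_u = -f_s - kt * zu                     # reaction plus tire force
    vs += dt * f_s / ms
    vu += dt * f_u / mu
    zs += dt * vs
    zu += dt * vu
    peak = max(peak, abs(zs))
```

    With roughly 30% damping on the body mode, the free response decays almost completely within the simulated 3 s.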

  13. Hierarchical Linear Modeling in Salary-Equity Studies.

    ERIC Educational Resources Information Center

    Loeb, Jane W.

    2003-01-01

    Provides information on how hierarchical linear modeling can be used as an alternative to multiple regression analysis for conducting salary-equity studies. Salary data are used to compare and contrast the two approaches. (EV)

  14. Dilatonic non-linear sigma models and Ricci flow extensions

    NASA Astrophysics Data System (ADS)

    Carfora, M.; Marzuoli, A.

    2016-09-01

    We review our recent work describing, in terms of the Wasserstein geometry over the space of probability measures, the embedding of the Ricci flow in the renormalization group flow for dilatonic non-linear sigma models.

  15. Linear functional minimization for inverse modeling

    DOE PAGES

    Barajas-Solano, David A.; Wohlberg, Brendt Egon; Vesselinov, Velimir Valentinov; ...

    2015-06-01

    In this paper, we present a novel inverse modeling strategy to estimate spatially distributed parameters of nonlinear models. The maximum a posteriori (MAP) estimators of these parameters are based on a likelihood functional, which contains spatially discrete measurements of the system parameters and spatiotemporally discrete measurements of the transient system states. The piecewise continuity prior for the parameters is expressed via Total Variation (TV) regularization. The MAP estimator is computed by minimizing a nonquadratic objective equipped with the TV operator. We apply this inversion algorithm to estimate hydraulic conductivity of a synthetic confined aquifer from measurements of conductivity and hydraulic head. The synthetic conductivity field is composed of a low-conductivity heterogeneous intrusion into a high-conductivity heterogeneous medium. Our algorithm accurately reconstructs the location, orientation, and extent of the intrusion from the steady-state data only. Finally, addition of transient measurements of hydraulic head improves the parameter estimation, accurately reconstructing the conductivity field in the vicinity of observation locations.

  16. Model checking for linear temporal logic: An efficient implementation

    NASA Technical Reports Server (NTRS)

    Sherman, Rivi; Pnueli, Amir

    1990-01-01

    This report provides evidence to support the claim that model checking for linear temporal logic (LTL) is practically efficient. Two implementations of a linear temporal logic model checker are described. One is based on transforming the model checking problem into a satisfiability problem; the other checks an LTL formula for a finite model by computing the cross-product of the finite state transition graph of the program with a structure containing all possible models for the property. An experiment was done with a set of mutual exclusion algorithms, testing safety and liveness under fairness for these algorithms.

  17. Error control of iterative linear solvers for integrated groundwater models.

    PubMed

    Dixon, Matthew F; Bai, Zhaojun; Brush, Charles F; Chung, Francis I; Dogrul, Emin C; Kadir, Tariq N

    2011-01-01

    An open problem that arises when using modern iterative linear solvers, such as the preconditioned conjugate gradient method or Generalized Minimum RESidual (GMRES) method, is how to choose the residual tolerance in the linear solver to be consistent with the tolerance on the solution error. This problem is especially acute for integrated groundwater models, which are implicitly coupled to another model, such as surface water models, and resolve both multiple scales of flow and temporal interaction terms, giving rise to linear systems with variable scaling. This article uses the theory of "forward error bound estimation" to explain the correspondence between the residual error in the preconditioned linear system and the solution error. Using examples of linear systems from models developed by the US Geological Survey and the California State Department of Water Resources, we observe that this error bound guides the choice of a practical measure for controlling the error in linear systems. We implemented a preconditioned GMRES algorithm and benchmarked it against the Successive Over-Relaxation (SOR) method, the most widely known iterative solver for nonsymmetric coefficient matrices. With forward error control, GMRES can easily replace the SOR method in legacy groundwater modeling packages, resulting in overall simulation speedups as large as 7.74×. This research is expected to broadly impact groundwater modelers through the demonstration of a practical and general approach for setting the residual tolerance in line with the solution error tolerance and the presentation of GMRES performance benchmarking results.
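    The forward error bound the article leans on is ||x − xh||/||x|| ≤ cond(A)·||r||/||b|| for residual r = b − A·xh; the 2×2 matrix and approximate solution below are invented to show why a small residual alone can badly underestimate the solution error.

```python
# Sketch of the forward error bound (matrix and approximate solution
# invented): an ill-conditioned system where the relative residual is
# tiny but the relative solution error is large, and the bound
# cond(A) * relative residual correctly accounts for the gap.
# Infinity norms throughout, 2x2 case with an explicit inverse.
A = [[1.0, 1.0],
     [1.0, 1.0001]]          # nearly singular => badly conditioned
x_true = [1.0, 1.0]
b = [sum(aij * xj for aij, xj in zip(row, x_true)) for row in A]
xh = [1.1, 0.9]              # approximate solution with a tiny residual

inf = lambda v: max(abs(vi) for vi in v)
matinf = lambda M: max(sum(abs(mij) for mij in row) for row in M)

r = [bi - sum(aij * xj for aij, xj in zip(row, xh)) for row, bi in zip(A, b)]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
Ainv = [[ A[1][1] / det, -A[0][1] / det],
        [-A[1][0] / det,  A[0][0] / det]]
cond = matinf(A) * matinf(Ainv)

rel_resid = inf(r) / inf(b)
rel_err = inf([xt - xi for xt, xi in zip(x_true, xh)]) / inf(x_true)
bound = cond * rel_resid     # forward error bound on rel_err
```

    Here the relative residual is of order 1e-5 while the relative error is 0.1; only the condition-number-scaled bound covers the true error, which is why the residual tolerance must be set with the conditioning in mind.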

  18. Modeling Compton Scattering in the Linear Regime

    NASA Astrophysics Data System (ADS)

    Kelmar, Rebeka

    2016-09-01

    Compton scattering is the collision of photons and electrons. This collision causes the photons to be scattered with increased energy and can therefore produce high-energy photons. These high-energy photons can be used in many other fields, including phase-contrast medical imaging and x-ray structure determination. Compton scattering is currently well understood for low-energy collisions; however, in order to accurately compute spectra of backscattered photons at higher energies, relativistic considerations must be included in the calculations. The focus of this work is to adapt a current program for calculating Compton backscattered radiation spectra to improve its efficiency. This was done by first translating the program from Matlab to Python. The next step was to implement a more efficient adaptive integration to replace the trapezoidal method. The new program runs in less than half the time of the original. This is important because it allows for quicker analysis and sets the stage for further optimization. The programs were developed using just one particle, while in reality there are thousands of particles involved in these collisions. This means that a more efficient program is essential to running these simulations. The development of this new and efficient program will lead to accurate modeling of Compton sources as well as their improved performance.

  19. Generation of linear dynamic models from a digital nonlinear simulation

    NASA Technical Reports Server (NTRS)

    Daniele, C. J.; Krosel, S. M.

    1979-01-01

    The results and methodology used to derive linear models from a nonlinear simulation are presented. It is shown that averaging positive and negative perturbations in the state variables can reduce numerical errors in finite-difference partial-derivative approximations and, for the control inputs, can better approximate the system response in both directions about the operating point. Both explicit and implicit formulations are addressed. Linear models are derived for the F100 engine, and comparisons of transients are made with the nonlinear simulation. The problem of startup transients in the nonlinear simulation in making these comparisons is addressed. Also, reduction of the linear models is investigated using the modal and normal techniques. Reduced-order models of the F100 are derived and compared with the full-state models.
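    The averaged-perturbation idea above is the classical central difference; a scalar example (function and step size invented for the sketch) shows how averaging the positive and negative perturbations cancels the leading error term of a one-sided difference.

```python
# Sketch of averaged (central) perturbations on a scalar stand-in for a
# nonlinear simulation output (function and step size are invented):
# the one-sided difference has O(h) error, the averaged +/- perturbation
# has O(h^2) error.
f = lambda x: x ** 3          # stand-in nonlinear "simulation output"
x0, h = 1.0, 0.1              # operating point and perturbation size
exact = 3.0                   # d/dx x^3 at x = 1

forward = (f(x0 + h) - f(x0)) / h              # one-sided perturbation
central = (f(x0 + h) - f(x0 - h)) / (2 * h)    # averaged +/- perturbations
```

    With h = 0.1 the one-sided estimate is off by about 0.31 while the averaged estimate is off by about 0.01, which is the error reduction the abstract describes for the state-variable perturbations.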

  20. Phylogenetic mixtures and linear invariants for equal input models.

    PubMed

    Casanellas, Marta; Steel, Mike

    2016-09-07

    The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees-the so called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).

  1. Computer modeling of batteries from non-linear circuit elements

    NASA Technical Reports Server (NTRS)

    Waaben, S.; Federico, J.; Moskowitz, I.

    1983-01-01

    A simple non-linear circuit model for battery behavior is given. It is based on time-dependent features of the well-known PIN charge storage diode, whose behavior is described by equations similar to those associated with electrochemical cells. The circuit simulation computer program ADVICE was used to predict non-linear response from a topological description of the battery analog built from ADVICE components. By a reasonable choice of one set of parameters, the circuit accurately simulates a wide spectrum of measured non-linear battery responses to within a few millivolts.

  2. Confirming the Lanchestrian linear-logarithmic model of attrition

    SciTech Connect

    Hartley, D.S. III.

    1990-12-01

    This paper is the fourth in a series of reports on the breakthrough research in historical validation of attrition in conflict. Significant defense policy decisions, including weapons acquisition and arms reduction, are based in part on models of conflict. Most of these models are driven by their attrition algorithms, usually forms of the Lanchester square and linear laws. None of these algorithms have been validated. The results of this paper confirm the results of earlier papers, using a large database of historical results. The homogeneous linear-logarithmic Lanchestrian attrition model is validated to the extent possible with current initial and final force size data and is consistent with the Iwo Jima data. A particular differential linear-logarithmic model is described that fits the data very well. A version of Helmbold's victory predicting parameter is also confirmed, with an associated probability function. 37 refs., 73 figs., 68 tabs.
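    The Lanchester square law mentioned in the abstract (not the paper's linear-logarithmic variant) can be integrated directly; the force sizes below are invented, and the conserved quantity x² − y² predicts the survivor count for equal effectiveness coefficients.

```python
# Euler integration of the classical Lanchester square law mentioned in
# the abstract (not the paper's linear-logarithmic model; force sizes
# invented):  dx/dt = -y,  dy/dt = -x  with equal kill rates.
# The invariant x^2 - y^2 is conserved, so the larger force wins with
# about sqrt(x0^2 - y0^2) survivors.
x, y = 100.0, 90.0
dt = 1e-4
while y > 0.01 and x > 0.01:        # fight until one side is destroyed
    x, y = x - dt * y, y - dt * x   # simultaneous attrition update
survivors = x
```

    Despite only a 10% initial advantage, the square law leaves the winner with roughly sqrt(100² − 90²) ≈ 43.6 survivors, illustrating why the choice of attrition law matters so much in these models.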

  3. Non-Linear Finite Element Modeling of THUNDER Piezoelectric Actuators

    NASA Technical Reports Server (NTRS)

    Taleghani, Barmac K.; Campbell, Joel F.

    1999-01-01

    A NASTRAN non-linear finite element model has been developed for predicting the dome heights of THUNDER (THin Layer UNimorph Ferroelectric DrivER) piezoelectric actuators. To analytically validate the finite element model, a comparison was made with a non-linear plate solution using von Kármán's approximation. A 500 volt input was used to examine the actuator deformation. The NASTRAN finite element model was also compared with experimental results. Four groups of specimens were fabricated and tested. Four different input voltages, which included 120, 160, 200, and 240 Vp-p with a 0 volt offset, were used for this comparison.

  4. Dynamic modeling of electrochemical systems using linear graph theory

    NASA Astrophysics Data System (ADS)

    Dao, Thanh-Son; McPhee, John

    An electrochemical cell is a multidisciplinary system which involves complex chemical, electrical, and thermodynamical processes. The primary objective of this paper is to develop a linear graph-theoretic model for the dynamic description of electrochemical systems through the representation of system topologies. After a brief introduction to the topic and a review of linear graphs, an approach to developing linear graphs for electrochemical systems using a circuitry representation is discussed, followed in turn by the use of branch and chord transformation techniques to generate the final dynamic equations governing the system. As an example, the application of linear graph theory to modeling a nickel metal hydride (NiMH) battery is presented. Results show that not only is the number of equations reduced significantly, but the linear graph model also simulates faster than the original lumped-parameter model. The approach presented in this paper can be extended to modeling complex systems such as an electric or hybrid electric vehicle, where a battery pack is interconnected with other components in many different domains.

  5. Optical linear algebra processors - Noise and error-source modeling

    NASA Technical Reports Server (NTRS)

    Casasent, D.; Ghosh, A.

    1985-01-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) are considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  6. Optical linear algebra processors: noise and error-source modeling.

    PubMed

    Casasent, D; Ghosh, A

    1985-06-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAP's) are considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  7. Inverse Modelling Problems in Linear Algebra Undergraduate Courses

    ERIC Educational Resources Information Center

    Martinez-Luaces, Victor E.

    2013-01-01

    This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…

  8. Application Scenarios for Nonstandard Log-Linear Models

    ERIC Educational Resources Information Center

    Mair, Patrick; von Eye, Alexander

    2007-01-01

    In this article, the authors have 2 aims. First, hierarchical, nonhierarchical, and nonstandard log-linear models are defined. Second, application scenarios are presented for nonhierarchical and nonstandard models, with illustrations of where these scenarios can occur. Parameters can be interpreted in regard to their formal meaning and in regard…

  9. Heuristic and Linear Models of Judgment: Matching Rules and Environments

    ERIC Educational Resources Information Center

    Hogarth, Robin M.; Karelaia, Natalia

    2007-01-01

    Much research has highlighted incoherent implications of judgmental heuristics, yet other findings have demonstrated high correspondence between predictions and outcomes. At the same time, judgment has been well modeled in the form of "as if" linear models. Accepting the probabilistic nature of the environment, the authors use statistical tools to…

  10. MULTIVARIATE LINEAR MIXED MODELS FOR MULTIPLE OUTCOMES. (R824757)

    EPA Science Inventory

    We propose a multivariate linear mixed model (MLMM) for the analysis of multiple outcomes, which generalizes the latent variable model of Sammel and Ryan. The proposed model assumes a flexible correlation structure among the multiple outcomes, and allows a global test of the impact of ...

  11. Johnson-Neyman Type Technique in Hierarchical Linear Model.

    ERIC Educational Resources Information Center

    Miyazaki, Yasuo

    One of the innovative approaches in the use of hierarchical linear models (HLM) is to use HLM for Slopes as Outcomes models. This implies that the researcher considers that the regression slopes vary from cluster to cluster randomly as well as systematically with certain covariates at the cluster level. Among the covariates, group indicator…

  12. Use of a linearization approximation facilitating stochastic model building.

    PubMed

    Svensson, Elin M; Karlsson, Mats O

    2014-04-01

    The objective of this work was to facilitate the development of nonlinear mixed effects models by establishing a diagnostic method for evaluation of stochastic model components. The random effects investigated were between-subject, between-occasion and residual variability. The method was based on a first-order conditional estimates linear approximation and evaluated on three real datasets with previously developed population pharmacokinetic models. The results were assessed based on the agreement in difference in objective function value between a basic model and extended models for the standard nonlinear and linearized approach respectively. The linearization was found to accurately identify significant extensions of the model's stochastic components with notably decreased runtimes as compared to the standard nonlinear analysis. The observed gain in runtimes varied from four-fold to more than 50-fold, and the largest gains were seen for models with originally long runtimes. This method may be especially useful as a screening tool to detect correlations between random effects since it substantially quickens the estimation of large variance-covariance blocks. To expedite the application of this diagnostic tool, the linearization procedure has been automated and implemented in the software package PsN.

  13. Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables

    ERIC Educational Resources Information Center

    Henson, Robert A.; Templin, Jonathan L.; Willse, John T.

    2009-01-01

    This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…

  14. Generalized linear mixed models for meta-analysis.

    PubMed

    Platt, R W; Leroux, B G; Breslow, N

    1999-03-30

    We examine two strategies for meta-analysis of a series of 2 x 2 tables with the odds ratio modelled as a linear combination of study level covariates and random effects representing between-study variation. Penalized quasi-likelihood (PQL), an approximate inference technique for generalized linear mixed models, and a linear model fitted by weighted least squares to the observed log-odds ratios are used to estimate regression coefficients and dispersion parameters. Simulation results demonstrate that both methods perform adequate approximate inference under many conditions, but that neither method works well in the presence of highly sparse data. Under certain conditions with small cell frequencies the PQL method provides better inference.
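
    The second strategy above — a linear model fitted by weighted least squares to the observed log-odds ratios — can be sketched in its simplest, intercept-only form (inverse-variance pooling, no study-level covariates). The 2x2 tables below are hypothetical, purely for illustration.

```python
import math

# Hypothetical 2x2 tables per study:
# (treated events, treated non-events, control events, control non-events)
tables = [(15, 85, 25, 75), (8, 92, 12, 88), (30, 70, 45, 55)]

def log_odds_ratio(a, b, c, d):
    """Observed log-odds ratio and its approximate variance."""
    return math.log(a * d / (b * c)), 1 / a + 1 / b + 1 / c + 1 / d

def pooled_effect(tables):
    """Intercept-only weighted least squares, i.e. inverse-variance pooling."""
    ys, ws = [], []
    for t in tables:
        y, v = log_odds_ratio(*t)
        ys.append(y)
        ws.append(1.0 / v)  # weight = reciprocal of the sampling variance
    est = sum(w * y for w, y in zip(ws, ys)) / sum(ws)
    se = math.sqrt(1.0 / sum(ws))
    return est, se

est, se = pooled_effect(tables)
```

    With covariates, the same weights would enter a weighted regression of the log-odds ratios on the study-level predictors; as the abstract notes, both this and PQL degrade when cell counts are highly sparse (the variance formula above breaks down for zero cells).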

  15. PID controller design for trailer suspension based on linear model

    NASA Astrophysics Data System (ADS)

    Kushairi, S.; Omar, A. R.; Schmidt, R.; Isa, A. A. Mat; Hudha, K.; Azizan, M. A.

    2015-05-01

    A quarter of an active trailer suspension system having the characteristics of a double wishbone type was modeled as a complex multi-body dynamic system in MSC.ADAMS. Due to the complexity of the model, a linearized version is considered in this paper. A model reduction technique is applied to the linear model, resulting in a reduced-order model. Based on this simplified model, a Proportional-Integral-Derivative (PID) controller was designed in MATLAB/Simulink environment; primarily to reduce excessive roll motions and thus improving the ride comfort. Simulation results show that the output signal closely imitates the input signal in multiple cases - demonstrating the effectiveness of the controller.
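
    The control idea can be illustrated with a minimal discrete PID loop. The plant here is a generic first-order system, a stand-in chosen for brevity rather than the reduced-order trailer model, and the gains are hypothetical.

```python
# Minimal discrete PID loop on a first-order plant x' = (u - x) / tau.
# Plant and gains are illustrative stand-ins, not the trailer suspension model.
kp, ki, kd = 4.0, 2.0, 0.1   # hypothetical PID gains
tau, dt = 0.5, 0.01          # plant time constant and integration step
setpoint = 1.0

x, integral, prev_err = 0.0, 0.0, setpoint - 0.0
for _ in range(2000):        # simulate 20 s
    err = setpoint - x
    integral += err * dt
    deriv = (err - prev_err) / dt
    u = kp * err + ki * integral + kd * deriv
    prev_err = err
    x += dt * (u - x) / tau  # forward-Euler plant update

# After the transient, x has settled close to the setpoint.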

  16. A position-aware linear solid constitutive model for peridynamics

    SciTech Connect

    Mitchell, John A.; Silling, Stewart A.; Littlewood, David J.

    2015-11-06

    A position-aware linear solid (PALS) peridynamic constitutive model is proposed for isotropic elastic solids. The PALS model addresses problems that arise, in ordinary peridynamic material models such as the linear peridynamic solid (LPS), due to incomplete neighborhoods near the surface of a body. We improved model behavior in the vicinity of free surfaces through the application of two influence functions that correspond, respectively, to the volumetric and deviatoric parts of the deformation. Furthermore, the model is position-aware in that the influence functions vary over the body and reflect the proximity of each material point to free surfaces. Demonstration calculations on simple benchmark problems show a sharp reduction in error relative to the LPS model.

  17. A position-aware linear solid constitutive model for peridynamics

    DOE PAGES

    Mitchell, John A.; Silling, Stewart A.; Littlewood, David J.

    2015-11-06

    A position-aware linear solid (PALS) peridynamic constitutive model is proposed for isotropic elastic solids. The PALS model addresses problems that arise, in ordinary peridynamic material models such as the linear peridynamic solid (LPS), due to incomplete neighborhoods near the surface of a body. We improved model behavior in the vicinity of free surfaces through the application of two influence functions that correspond, respectively, to the volumetric and deviatoric parts of the deformation. Furthermore, the model is position-aware in that the influence functions vary over the body and reflect the proximity of each material point to free surfaces. Demonstration calculations on simple benchmark problems show a sharp reduction in error relative to the LPS model.

  18. Functional Linear Models for Association Analysis of Quantitative Traits

    PubMed Central

    Fan, Ruzong; Wang, Yifan; Mills, James L.; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Xiong, Momiao

    2014-01-01

    Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants or common variants or the combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than that of sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to its optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study. PMID:24130119

  19. Functional linear models for association analysis of quantitative traits.

    PubMed

    Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao

    2013-11-01

    Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants or common variants or the combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than that of sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to its optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study.

  20. A non linear analytical model of switched reluctance machines

    NASA Astrophysics Data System (ADS)

    Sofiane, Y.; Tounzi, A.; Piriou, F.

    2002-06-01

    Switched reluctance machines are now widely used. To determine their performance and to design control strategies, the linear analytical model is generally used; unfortunately, it is not very accurate. Accurate modelling results can instead be obtained from numerical models based on either the 2D or 3D Finite Element Method. However, this approach is very expensive in terms of computation time and, while suitable for studying the behaviour of a whole device, it is not, a priori, adapted to designing control strategies for electrical machines. This paper deals with a non-linear analytical model expressed in terms of variable inductances. The theoretical development of the proposed model is introduced. Then, the model is applied to study the behaviour of a whole controlled switched reluctance machine. The parameters of the structure are identified from a 2D numerical model; they can also be determined from an experimental bench. Finally, the results given by the proposed model are compared with those from the 2D-FEM approach and from the classical linear analytical model.

  1. Piecewise linear and Boolean models of chemical reaction networks

    PubMed Central

    Veliz-Cuba, Alan; Kumar, Ajit; Josić, Krešimir

    2014-01-01

    Models of biochemical networks are frequently complex and high-dimensional. Reduction methods that preserve important dynamical properties are therefore essential for their study. Interactions in biochemical networks are frequently modeled using Hill functions (x^n/(J^n + x^n)). Reduced ODEs and Boolean approximations of such model networks have been studied extensively when the exponent n is large. However, while the case of small constant J appears in practice, it is not well understood. We provide a mathematical analysis of this limit, and show that a reduction to a set of piecewise linear ODEs and Boolean networks can be mathematically justified. The piecewise linear systems have closed form solutions that closely track those of the fully nonlinear model. The simpler Boolean network can be used to study the qualitative behavior of the original system. We justify the reduction using geometric singular perturbation theory and compact convergence, and illustrate the results in network models of a toggle switch and an oscillator. PMID:25412739
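
    The small-J limit can be seen numerically in a one-variable toy model: for small J the Hill function is nearly a step, so the fully nonlinear ODE and its piecewise-linear reduction track each other closely. The ODE dx/dt = f(x) - x and the parameter values are illustrative, not taken from the paper.

```python
def hill(x, J=0.01, n=2):
    """Hill function x^n / (J^n + x^n)."""
    return x**n / (J**n + x**n)

def step(x):
    """Step-function limit of the Hill function as J -> 0 (Boolean switch)."""
    return 1.0 if x > 0 else 0.0

def integrate(f, x0=0.5, dt=0.001, steps=10_000):
    """Forward-Euler integration of dx/dt = f(x) - x."""
    x = x0
    for _ in range(steps):
        x += dt * (f(x) - x)
    return x

x_full = integrate(hill)  # fully nonlinear model
x_pl = integrate(step)    # piecewise-linear / Boolean reduction

# Both trajectories approach the same stable fixed point near x = 1.
```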

  2. Multikernel linear mixed models for complex phenotype prediction

    PubMed Central

    Weissbrod, Omer; Geiger, Dan; Rosset, Saharon

    2016-01-01

    Linear mixed models (LMMs) and their extensions have recently become the method of choice in phenotype prediction for complex traits. However, LMM use to date has typically been limited by assuming simple genetic architectures. Here, we present multikernel linear mixed model (MKLMM), a predictive modeling framework that extends the standard LMM using multiple-kernel machine learning approaches. MKLMM can model genetic interactions and is particularly suitable for modeling complex local interactions between nearby variants. We additionally present MKLMM-Adapt, which automatically infers interaction types across multiple genomic regions. In an analysis of eight case-control data sets from the Wellcome Trust Case Control Consortium and more than a hundred mouse phenotypes, MKLMM-Adapt consistently outperforms competing methods in phenotype prediction. MKLMM is as computationally efficient as standard LMMs and does not require storage of genotypes, thus achieving state-of-the-art predictive power without compromising computational feasibility or genomic privacy. PMID:27302636

  3. A Methodology and Linear Model for System Planning and Evaluation.

    ERIC Educational Resources Information Center

    Meyer, Richard W.

    1982-01-01

    The two-phase effort at Clemson University to design a comprehensive library automation program is reported. Phase one was based on a version of IBM's business system planning methodology, and the second was based on a linear model designed to compare existing program systems to the phase one design. (MLW)

  4. A SEMIPARAMETRIC BAYESIAN MODEL FOR CIRCULAR-LINEAR REGRESSION

    EPA Science Inventory

    We present a Bayesian approach to regress a circular variable on a linear predictor. The regression coefficients are assumed to have a nonparametric distribution with a Dirichlet process prior. The semiparametric Bayesian approach gives added flexibility to the model and is usefu...

  5. Asymptotic behavior of coupled linear systems modeling suspension bridges

    NASA Astrophysics Data System (ADS)

    Dell'Oro, Filippo; Giorgi, Claudio; Pata, Vittorino

    2015-06-01

    We consider the coupled linear system describing the vibrations of a string-beam system related to the well-known Lazer-McKenna suspension bridge model. For ɛ > 0 and k > 0, the decay properties of the solution semigroup are discussed in dependence of the nonnegative parameters γ and h, which are responsible for the damping effects.

  6. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Wagler, Amy E.

    2014-01-01

    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

  7. Johnson-Neyman Type Technique in Hierarchical Linear Models

    ERIC Educational Resources Information Center

    Miyazaki, Yasuo; Maier, Kimberly S.

    2005-01-01

    In hierarchical linear models we often find that group indicator variables at the cluster level are significant predictors for the regression slopes. When this is the case, the average relationship between the outcome and a key independent variable are different from group to group. In these settings, a question such as "what range of the…

  8. Identifiability Results for Several Classes of Linear Compartment Models.

    PubMed

    Meshkat, Nicolette; Sullivant, Seth; Eisenberg, Marisa

    2015-08-01

    Identifiability concerns finding which unknown parameters of a model can be estimated, uniquely or otherwise, from given input-output data. If some subset of the parameters of a model cannot be determined given input-output data, then we say the model is unidentifiable. In this work, we study linear compartment models, which are a class of biological models commonly used in pharmacokinetics, physiology, and ecology. In past work, we used commutative algebra and graph theory to identify a class of linear compartment models that we call identifiable cycle models, which are unidentifiable but have the simplest possible identifiable functions (so-called monomial cycles). Here we show how to modify identifiable cycle models by adding inputs, adding outputs, or removing leaks, in such a way that we obtain an identifiable model. We also prove a constructive result on how to combine identifiable models, each corresponding to strongly connected graphs, into a larger identifiable model. We apply these theoretical results to several real-world biological models from physiology, cell biology, and ecology.

  9. Intuitionistic Fuzzy Weighted Linear Regression Model with Fuzzy Entropy under Linear Restrictions.

    PubMed

    Kumar, Gaurav; Bajaj, Rakesh Kumar

    2014-01-01

    In fuzzy set theory, it is well known that a triangular fuzzy number can be uniquely determined through its position and entropies. In the present communication, we extend this concept on triangular intuitionistic fuzzy number for its one-to-one correspondence with its position and entropies. Using the concept of fuzzy entropy the estimators of the intuitionistic fuzzy regression coefficients have been estimated in the unrestricted regression model. An intuitionistic fuzzy weighted linear regression (IFWLR) model with some restrictions in the form of prior information has been considered. Further, the estimators of regression coefficients have been obtained with the help of fuzzy entropy for the restricted/unrestricted IFWLR model by assigning some weights in the distance function.

  10. Linear Time Invariant Models for Integrated Flight and Rotor Control

    NASA Astrophysics Data System (ADS)

    Olcer, Fahri Ersel

    2011-12-01

    Recent developments on individual blade control (IBC) and physics based reduced order models of various on-blade control (OBC) actuation concepts are opening up opportunities to explore innovative rotor control strategies for improved rotor aerodynamic performance, reduced vibration and BVI noise, and improved rotor stability, etc. Further, recent developments in computationally efficient algorithms for the extraction of Linear Time Invariant (LTI) models are providing a convenient framework for exploring integrated flight and rotor control, while accounting for the important couplings that exist between body and low frequency rotor response and high frequency rotor response. Formulation of linear time invariant (LTI) models of a nonlinear system about a periodic equilibrium using the harmonic domain representation of LTI model states has been studied in the literature. This thesis presents an alternative method and a computationally efficient scheme for implementation of the developed method for extraction of linear time invariant (LTI) models from a helicopter nonlinear model in forward flight. The fidelity of the extracted LTI models is evaluated using response comparisons between the extracted LTI models and the nonlinear model in both time and frequency domains. Moreover, the fidelity of stability properties is studied through the eigenvalue and eigenvector comparisons between LTI and LTP models by making use of the Floquet Transition Matrix. For time domain evaluations, individual blade control (IBC) and On-Blade Control (OBC) inputs that have been tried in the literature for vibration and noise control studies are used. For frequency domain evaluations, frequency sweep inputs are used to obtain frequency responses of fixed system hub loads to a single blade IBC input. 
The evaluation results demonstrate the fidelity of the extracted LTI models and thus establish the validity of the LTI model extraction process for use in integrated flight and rotor control.

  11. Linearized flexibility models in multibody dynamics and control

    NASA Technical Reports Server (NTRS)

    Cimino, William W.

    1989-01-01

    Simulation of structural response of multi-flexible-body systems by linearized flexible motion combined with nonlinear rigid motion is discussed. Advantages and applicability of such an approach for accurate simulation with greatly reduced computational costs and turnaround times are described, restricting attention to the control design environment. Requirements for updating the linearized flexibility model to track large angular motions are discussed. Validation of such an approach by comparison with other existing codes is included. Application to a flexible robot manipulator system is described.

  12. Linear modeling of steady-state behavioral dynamics.

    PubMed Central

    Palya, William L; Walter, Donald; Kessel, Robert; Lucke, Robert

    2002-01-01

    The observed steady-state behavioral dynamics supported by unsignaled periods of reinforcement within repeating 2,000-s trials were modeled with a linear transfer function. These experiments employed improved schedule forms and analytical methods to improve the precision of the measured transfer function, compared to previous work. The refinements include both the use of multiple reinforcement periods that improve spectral coverage and averaging of independently determined transfer functions. A linear analysis was then used to predict behavior observed for three different test schedules. The fidelity of these predictions was determined. PMID:11831782

  13. Switched linear model predictive controllers for periodic exogenous signals

    NASA Astrophysics Data System (ADS)

    Wang, Liuping; Gawthrop, Peter; Owens, David. H.; Rogers, Eric

    2010-04-01

    This article develops switched linear controllers for periodic exogenous signals using the framework of continuous-time model predictive control. In this framework, the control signal is generated by an algorithm that uses the receding horizon control principle with an on-line optimisation scheme that permits inclusion of operational constraints. Unlike traditional repetitive controllers, applying this method in the form of switched linear controllers ensures bumpless transfer from one controller to another. Simulation studies are included to demonstrate the efficacy of the design with or without hard constraints.

  14. Solving linear integer programming problems by a novel neural model.

    PubMed

    Cavalieri, S

    1999-02-01

    The paper deals with integer linear programming problems. As is well known, these are extremely complex problems, even when the number of integer variables is quite low. The literature provides examples of various methods to solve such problems, some of which are of a heuristic nature. This paper proposes an alternative strategy based on the Hopfield neural network. The advantage of the strategy essentially lies in the fact that a hardware implementation of the neural model allows the time required to obtain a solution to be independent of the size of the problem to be solved. The paper presents a particular class of integer linear programming problems, including well-known problems such as the Travelling Salesman Problem and the Set Covering Problem. After a brief description of this class of problems, it is demonstrated that the original Hopfield model is incapable of supplying valid solutions. This is attributed to the presence of constant bias currents in the dynamics of the neural model. A demonstration of this is given, and then a novel neural model is presented which continues to be based on the same architecture as the Hopfield model, but introduces modifications thanks to which the integer linear programming problems presented can be solved. Some numerical examples and concluding remarks highlight the solving capacity of the novel neural model.

  15. Scalar mesons in three-flavor linear sigma models

    SciTech Connect

    Deirdre Black; Amir H. Fariborz; Sherif Moussa; Salah Nasri; Joseph Schrechter

    2001-09-01

    The three flavor linear sigma model is studied in order to understand the role of possible light scalar mesons in the pi-pi, pi-K and pi-eta elastic scattering channels. The K-matrix prescription is used to unitarize tree-level amplitudes and, with a sufficiently general model, we obtain reasonable fits to the experimental data. The effect of unitarization is very important and leads to the emergence of a nonet of light scalars, with masses below 1 GeV. We compare with a scattering treatment using a more general non-linear sigma model approach and also comment upon how our results fit in with the scalar meson puzzle. The latter involves a preliminary investigation of possible mixing between scalar nonets.

  16. Modeling pan evaporation for Kuwait by multiple linear regression.

    PubMed

    Almedeij, Jaber

    2012-01-01

    Evaporation is an important parameter for many projects related to hydrology and water resources systems. This paper constitutes the first study conducted in Kuwait to obtain empirical relations for the estimation of daily and monthly pan evaporation as functions of available meteorological data of temperature, relative humidity, and wind speed. The data used here for the modeling are daily measurements of substantial continuity coverage, within a period of 17 years between January 1993 and December 2009, which can be considered representative of the desert climate of the urban zone of the country. Multiple linear regression technique is used with a procedure of variable selection for fitting the best model forms. The correlations of evaporation with temperature and relative humidity are also transformed in order to linearize the existing curvilinear patterns of the data by using power and exponential functions, respectively. The evaporation models suggested with the best variable combinations were shown to produce results that are in reasonable agreement with observed values.
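
    The modeling recipe — transform the curvilinear predictors, then fit by ordinary least squares via the normal equations — can be sketched as follows. The transform exponents, coefficients, and data are hypothetical placeholders, not the values from the Kuwait study.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_evaporation(temps, humids, evap):
    """Least squares for E = b0 + b1*T**1.5 + b2*exp(-0.02*H).
    The power/exponential transforms are illustrative stand-ins."""
    X = [[1.0, t**1.5, math.exp(-0.02 * h)] for t, h in zip(temps, humids)]
    # Normal equations: (X'X) b = X'E
    XtX = [[sum(row[r] * row[c] for row in X) for c in range(3)] for r in range(3)]
    Xty = [sum(X[i][r] * evap[i] for i in range(len(X))) for r in range(3)]
    return solve(XtX, Xty)

# Synthetic noise-free data generated from known coefficients,
# so the fit should recover them exactly.
b_true = [0.5, 0.02, -3.0]
temps = [20, 25, 30, 35, 40, 45]
humids = [80, 60, 50, 40, 30, 20]
evap = [b_true[0] + b_true[1] * t**1.5 + b_true[2] * math.exp(-0.02 * h)
        for t, h in zip(temps, humids)]
b_hat = fit_evaporation(temps, humids, evap)
```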

  17. Modeling of thermal storage systems in MILP distributed energy resource models

    SciTech Connect

    Steen, David; Stadler, Michael; Cardoso, Gonçalo; Groissböck, Markus; DeForest, Nicholas; Marnay, Chris

    2014-08-04

    Thermal energy storage (TES) and distributed generation technologies, such as combined heat and power (CHP) or photovoltaics (PV), can be used to reduce energy costs and decrease CO2 emissions from buildings by shifting energy consumption to times with less emissions and/or lower energy prices. To determine the feasibility of investing in TES in combination with other distributed energy resources (DER), mixed integer linear programming (MILP) can be used. Such a MILP model is the well-established Distributed Energy Resources Customer Adoption Model (DER-CAM); however, it currently uses only a simplified TES model to guarantee linearity and short run-times. Loss calculations are based only on the energy contained in the storage. This paper presents a new DER-CAM TES model that allows improved tracking of losses based on ambient and storage temperatures, and compares results with the previous version. A multi-layer TES model is introduced that retains linearity and avoids creating an endogenous optimization problem. The improved model increases the accuracy of the estimated storage losses and enables use of heat pumps for low temperature storage charging. Ultimately, results indicate that the previous model overestimates the attractiveness of TES investments for cases without the possibility to invest in heat pumps and underestimates it for some locations when heat pumps are allowed. Despite a variation in optimal technology selection between the two models, the objective function value stays quite stable, illustrating the complexity of optimal DER sizing problems in buildings and microgrids.
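
    The core investment trade-off such MILP models capture can be shown with a toy: a binary "own the tank" decision plus integer charge decisions over a tiny horizon, solved here by brute-force enumeration instead of an MILP solver. All prices, demands, efficiencies, and costs are hypothetical; this is not DER-CAM.

```python
from itertools import product

prices = [0.30, 0.10, 0.30]   # $/kWh in three periods (hypothetical tariff)
demand = [2, 2, 2]            # kWh of heat needed in each period
eff = 0.9                     # storage round-trip efficiency (losses on charging)
invest = 0.25                 # annualized cost of owning the tank, $

def total_cost(own, shifts):
    """shifts[t] = kWh delivered from storage in period t+1,
    charged from the grid in period t. own = binary investment decision."""
    if not own and any(shifts):
        return float("inf")           # can't use storage you don't own
    cost = invest if own else 0.0
    for t, price in enumerate(prices):
        buy = demand[t]
        if t > 0:
            buy -= shifts[t - 1]      # part of demand met from storage
        if t < len(shifts):
            buy += shifts[t] / eff    # extra purchase to charge, incl. losses
        cost += price * buy
    return cost

# Enumerate all feasible (own, shifts) candidates and pick the cheapest.
best = min(
    ((own, s) for own in (0, 1)
     for s in product(range(3), repeat=2)
     if all(s[t] <= demand[t + 1] for t in range(2))),
    key=lambda cand: total_cost(*cand),
)
```

    In a real model the same structure — binary investment variables linked to continuous dispatch variables — is handed to an MILP solver, since enumeration explodes with horizon length.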

  18. Modeling error analysis of stationary linear discrete-time filters

    NASA Technical Reports Server (NTRS)

    Patel, R.; Toda, M.

    1977-01-01

    The performance of Kalman-type, linear, discrete-time filters in the presence of modeling errors is considered. The discussion is limited to stationary performance, and bounds are obtained for the performance index, the mean-squared error of estimates for suboptimal and optimal (Kalman) filters. The computation of these bounds requires information on only the model matrices and the range of errors for these matrices. Consequently, a designer can easily compare the performance of a suboptimal filter with that of the optimal filter, when only the range of errors in the elements of the model matrices is available.
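
    The optimal-versus-suboptimal comparison is easy to see in the scalar case: iterate the Riccati recursion to get the Kalman steady state, then evaluate the steady-state error of a fixed-gain filter (designed from a mismatched model) running on the true system. The model numbers below are hypothetical.

```python
def riccati_fixed_point(a, q, r, iters=1000):
    """Steady-state prediction covariance of the optimal (Kalman) filter
    for the scalar model x[k+1] = a*x[k] + w, y[k] = x[k] + v."""
    p = q
    for _ in range(iters):
        k = p / (p + r)
        p = a * a * (1.0 - k) * p + q
    return p

def fixed_gain_error(a, q, r, k, iters=5000):
    """Steady-state prediction-error variance when a filter with FIXED gain k
    runs on the true system (a, q, r): s = a^2(1-k)^2 s + a^2 k^2 r + q."""
    s = q
    for _ in range(iters):
        s = a * a * (1.0 - k) ** 2 * s + a * a * k * k * r + q
    return s

a, q, r = 0.95, 1.0, 1.0                 # true system (hypothetical)
p_opt = riccati_fixed_point(a, q, r)
k_opt = p_opt / (p_opt + r)              # optimal Kalman gain

# Gain designed from a mismatched model (wrong a) -- suboptimal on the truth.
p_bad = riccati_fixed_point(0.5, q, r)
k_bad = p_bad / (p_bad + r)

s_opt = fixed_gain_error(a, q, r, k_opt)  # equals p_opt, as theory predicts
s_bad = fixed_gain_error(a, q, r, k_bad)  # strictly larger
```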

  19. Mining Knowledge from Multiple Criteria Linear Programming Models

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Zhu, Xingquan; Li, Aihua; Zhang, Lingling; Shi, Yong

    As a promising data mining tool, Multiple Criteria Linear Programming (MCLP) has been widely used in business intelligence. However, a possible limitation of MCLP is that it generates unexplainable black-box models which can only tell us results without reasons. To overcome this shortage, in this paper, we propose a Knowledge Mining strategy which mines from black-box MCLP models to get explainable and understandable knowledge. Different from the traditional Data Mining strategy which focuses on mining knowledge from data, this Knowledge Mining strategy provides a new vision of mining knowledge from black-box models, which can be taken as a special topic of “Intelligent Knowledge Management”.

  20. Disorder and Quantum Chromodynamics -- Non-Linear σ Models

    NASA Astrophysics Data System (ADS)

    Guhr, Thomas; Wilke, Thomas

    2001-10-01

    The statistical properties of Quantum Chromodynamics (QCD) show universal features which can be modeled by random matrices. This has been established in detailed analyses of data from lattice gauge calculations. Moreover, systematic deviations were found which link QCD to disordered systems in condensed matter physics. To furnish these empirical findings with analytical arguments, we apply and extend the methods developed in disordered systems to construct a non-linear σ model for the spectral correlations in QCD. Our goal is to derive connections to other low-energy effective theories, such as the Nambu-Jona-Lasinio model, and to chiral perturbation theory.

  2. Residuals analysis of the generalized linear models for longitudinal data.

    PubMed

    Chang, Y C

    2000-05-30

    The generalized estimating equation (GEE) method, one of the generalized linear models for longitudinal data, has been used widely in medical research. However, the related sensitivity analysis problem has not been explored intensively. One possible reason is the correlated structure within the same subject. We showed that the conventional residual plots for model diagnosis in longitudinal data could mislead a researcher into trusting the fitted model. A non-parametric method, the Wald-Wolfowitz run test, was proposed to check the residual plots both quantitatively and graphically. The rationale proposed in this paper is well illustrated with two real clinical studies in Taiwan.
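    The diagnostic the abstract proposes can be sketched concretely. Below is a minimal Wald-Wolfowitz runs test applied to the signs of a residual sequence, using the normal approximation for the run count; the clinical data are not reproduced, so the input is a synthetic alternating series:

```python
import math

def runs_test(residuals):
    """Wald-Wolfowitz runs test on the signs of a residual sequence.

    Returns (n_runs, z, two_sided_p). A small p-value suggests the
    residuals are not randomly ordered (e.g. serial correlation).
    """
    signs = [1 if r > 0 else 0 for r in residuals if r != 0]
    n1 = sum(signs)
    n2 = len(signs) - n1
    if n1 == 0 or n2 == 0:
        raise ValueError("runs test needs both positive and negative residuals")
    # A run is a maximal block of consecutive identical signs.
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    n = n1 + n2
    mu = 2.0 * n1 * n2 / n + 1.0
    var = 2.0 * n1 * n2 * (2.0 * n1 * n2 - n) / (n * n * (n - 1))
    z = (runs - mu) / math.sqrt(var)
    p = math.erfc(abs(z) / math.sqrt(2.0))  # normal approximation
    return runs, z, p

# Alternating residuals: many runs, strong evidence against random ordering.
r_alt = [(-1) ** i * 0.5 for i in range(40)]
print(runs_test(r_alt))
```

    A graphical check would plot the residuals in observation order alongside this quantitative test, as the abstract suggests.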

  3. MAGDM linear-programming models with distinct uncertain preference structures.

    PubMed

    Xu, Zeshui S; Chen, Jian

    2008-10-01

    Group decision making with preference information on alternatives is an interesting and important research topic which has been receiving more and more attention in recent years. The purpose of this paper is to investigate multiple-attribute group decision-making (MAGDM) problems with distinct uncertain preference structures. We develop some linear-programming models for dealing with the MAGDM problems, where the information about attribute weights is incomplete, and the decision makers have their preferences on alternatives. The provided preference information can be represented in the following three distinct uncertain preference structures: 1) interval utility values; 2) interval fuzzy preference relations; and 3) interval multiplicative preference relations. We first establish some linear-programming models based on decision matrix and each of the distinct uncertain preference structures and, then, develop some linear-programming models to integrate all three structures of subjective uncertain preference information provided by the decision makers and the objective information depicted in the decision matrix. Furthermore, we propose a simple and straightforward approach in ranking and selecting the given alternatives. It is worth pointing out that the developed models can also be used to deal with the situations where the three distinct uncertain preference structures are reduced to the traditional ones, i.e., utility values, fuzzy preference relations, and multiplicative preference relations. Finally, we use a practical example to illustrate in detail the calculation process of the developed approach.

  4. Modelling hillslope evolution: linear and nonlinear transport relations

    NASA Astrophysics Data System (ADS)

    Martin, Yvonne

    2000-08-01

    Many recent models of landscape evolution have used a diffusion relation to simulate hillslope transport. In this study, a linear diffusion equation for slow, quasi-continuous mass movement (e.g., creep), which is based on a large data compilation, is adopted in the hillslope model. Transport relations for rapid, episodic mass movements are based on an extensive data set covering a 40-yr period from the Queen Charlotte Islands, British Columbia. A hyperbolic tangent relation, in which transport increases nonlinearly with gradient above some threshold gradient, provided the best fit to the data. Model runs were undertaken for typical hillslope profiles found in small drainage basins in the Queen Charlotte Islands. Results, based on linear diffusivity values defined in the present study, are compared to results based on diffusivities used in earlier studies. Linear diffusivities, adopted in several earlier studies, generally did not provide adequate approximations of hillslope evolution. The nonlinear transport relation was tested and found to provide acceptable simulations of hillslope evolution. Weathering is introduced into the final set of model runs. The incorporation of weathering into the model decreases the rate of hillslope change when theoretical rates of sediment transport exceed sediment supply. The incorporation of weathering into the model is essential to ensuring that transport rates at high gradients obtained in the model reasonably replicate conditions observed in real landscapes. An outline of landscape progression is proposed based on model results. Hillslope change initially occurs at a rapid rate following events that result in oversteepened gradients (e.g., tectonic forcing, glaciation, fluvial undercutting). Steep gradients are eventually eliminated and hillslope transport is reduced significantly.
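    As a rough sketch of the two transport relations discussed above, the following contrasts a linear (diffusive) creep flux with a threshold hyperbolic-tangent flux for rapid mass movement. The functional form and all parameter values (k, S_t, q_max, a) are illustrative assumptions, not the calibrated values from the Queen Charlotte Islands data:

```python
import math

def linear_flux(S, k=0.1):
    """Slow, quasi-continuous transport (creep): flux proportional to gradient S."""
    return k * S

def rapid_flux(S, S_t=0.7, q_max=5.0, a=0.3):
    """Illustrative hyperbolic-tangent relation for rapid, episodic mass
    movement: negligible below the threshold gradient S_t, rising
    nonlinearly above it. Parameter values are made up for illustration."""
    if S <= S_t:
        return 0.0
    return q_max * math.tanh((S - S_t) / a)

for S in (0.2, 0.5, 0.8, 1.2):
    print(S, linear_flux(S), rapid_flux(S))
```

    In a hillslope evolution model, a supply (weathering) limit would cap these theoretical rates, as the abstract notes.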

  5. A comparison of linear and non-linear data assimilation methods using the NEMO ocean model

    NASA Astrophysics Data System (ADS)

    Kirchgessner, Paul; Tödter, Julian; Nerger, Lars

    2015-04-01

    The assimilation behavior of the widely used LETKF is compared with the Equivalent Weight Particle Filter (EWPF) in a data assimilation application with an idealized configuration of the NEMO ocean model. The experiments show how the different filter methods behave when they are applied to a realistic ocean test case. The LETKF is an ensemble-based Kalman filter, which assumes Gaussian error distributions and hence implicitly requires model linearity. In contrast, the EWPF is a fully nonlinear data assimilation method that does not rely on a particular error distribution. The EWPF has been demonstrated to work well in highly nonlinear situations, like in a model solving a barotropic vorticity equation, but it is still unknown how the assimilation performance compares to ensemble Kalman filters in realistic situations. For the experiments, twin assimilation experiments with a square basin configuration of the NEMO model are performed. The configuration simulates a double gyre, which exhibits significant nonlinearity. The LETKF and EWPF are both implemented in PDAF (Parallel Data Assimilation Framework, http://pdaf.awi.de), which ensures identical experimental conditions for both filters. To account for the nonlinearity, the assimilation skill of the two methods is assessed by using different statistical metrics, like CRPS and Histograms.
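    One of the metrics mentioned, the CRPS, has a simple empirical form for a finite ensemble: CRPS = E|X − y| − ½ E|X − X′|. A minimal sketch for a scalar observation and a small toy ensemble (not the NEMO output itself):

```python
def crps_ensemble(members, obs):
    """Empirical CRPS of an ensemble forecast for a scalar observation:
    CRPS = E|X - obs| - 0.5 * E|X - X'|, averaged over ensemble members.
    For a single member this reduces to the absolute error."""
    m = len(members)
    term1 = sum(abs(x - obs) for x in members) / m
    term2 = sum(abs(xi - xj) for xi in members for xj in members) / (2 * m * m)
    return term1 - term2

ens = [0.9, 1.0, 1.1, 1.2]
print(crps_ensemble(ens, 1.05))
```

    Unlike RMSE of the ensemble mean, this score rewards an ensemble whose spread matches its error, which is why it is useful for comparing a Kalman-type filter with a particle filter.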

  6. Using Quartile-Quartile Lines as Linear Models

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    2015-01-01

    This article introduces the notion of the quartile-quartile line as an alternative to the regression line and the median-median line to produce a linear model based on a set of data. It is based on using the first and third quartiles of a set of (x, y) data. Dynamic spreadsheets are used as exploratory tools to compare the different approaches and…
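    A minimal sketch of the quartile-quartile line as described: join the point (Q1(x), Q1(y)) to (Q3(x), Q3(y)). The exact quartile convention is an assumption here (the default of `statistics.quantiles` is used); the article itself works in dynamic spreadsheets:

```python
import statistics

def quartile_quartile_line(xs, ys):
    """Line through (Q1(x), Q1(y)) and (Q3(x), Q3(y)) as a simple linear
    model for (x, y) data. Returns (slope, intercept)."""
    qx = statistics.quantiles(xs, n=4)  # [Q1, Q2, Q3]
    qy = statistics.quantiles(ys, n=4)
    slope = (qy[2] - qy[0]) / (qx[2] - qx[0])
    intercept = qy[0] - slope * qx[0]
    return slope, intercept

xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [2.1, 3.9, 6.2, 8.0, 9.8, 12.1, 14.0, 16.2]  # roughly y = 2x
print(quartile_quartile_line(xs, ys))
```

    Like the median-median line, this uses order statistics rather than least squares, so a single extreme point moves the fit far less than it moves a regression line.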

  7. Credibility analysis of risk classes by generalized linear model

    NASA Astrophysics Data System (ADS)

    Erdemir, Ovgucan Karadag; Sucu, Meral

    2016-06-01

    In this paper, the generalized linear model (GLM) and credibility theory, which are frequently used in non-life insurance pricing, are combined for credibility analysis. Using the full credibility standard, the GLM is associated with the limited fluctuation credibility approach. Comparison criteria such as asymptotic variance and credibility probability are used to analyze the credibility of risk classes. An application is performed using one-year claim frequency data of a Turkish insurance company, and the results for credible risk classes are interpreted.
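    The limited-fluctuation side of this combination can be illustrated with the classical full-credibility standard for Poisson claim frequency and the square-root rule for partial credibility. The probability and tolerance levels below (p = 0.90, k = 5%) are illustrative choices, not the paper's:

```python
import math
from statistics import NormalDist

def full_credibility_standard(p=0.90, k=0.05):
    """Limited-fluctuation full-credibility standard for Poisson claim
    frequency: the expected claim count needed so the observed frequency
    lies within 100*k% of its mean with probability p."""
    z = NormalDist().inv_cdf(0.5 + p / 2.0)  # two-sided normal quantile
    return (z / k) ** 2

def credibility_factor(n, n_full):
    """Square-root rule for partial credibility, capped at 1."""
    return min(1.0, math.sqrt(n / n_full))

n_full = full_credibility_standard()
print(n_full, credibility_factor(500, n_full))
```

    A risk class with fewer than `n_full` expected claims gets only partial credibility, and its GLM estimate would be blended with the portfolio-level estimate accordingly.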

  8. Asymptotic modeling of assemblies of thin linearly elastic plates

    NASA Astrophysics Data System (ADS)

    Licht, Christian

    2007-12-01

    We derive various models of assemblies of thin linearly elastic plates by abutting or superposition through an asymptotic analysis taking into account small parameters associated with the size and the stiffness of the adhesive. They correspond to the linkage of two Kirchhoff-Love plates by a mechanical constraint which strongly depends on the magnitudes of the previous parameters. To cite this article: C. Licht, C. R. Mecanique 335 (2007).

  9. NON-LINEAR MODELING OF THE RHIC INTERACTION REGIONS.

    SciTech Connect

    Tomas, R.; Fischer, W.; Jain, A.; Luo, Y.; Pilat, F.

    2004-07-05

    For RHIC's collision lattices the dominant sources of transverse non-linearities are located in the interaction regions. The field quality is available for most of the magnets in the interaction regions from the magnetic measurements, or from extrapolations of these measurements. We discuss the implementation of these measurements in the MADX models of the Blue and the Yellow rings and their impact on beam stability.

  10. LINEAR MODELS FOR MANAGING SOURCES OF GROUNDWATER POLLUTION.

    USGS Publications Warehouse

    Gorelick, Steven M.; Gustafson, Sven-Ake; ,

    1984-01-01

    Mathematical models for the problem of maintaining a specified groundwater quality while permitting solute waste disposal at various facilities distributed over space are discussed. The pollutants are assumed to be chemically inert and their concentrations in the groundwater are governed by linear equations for advection and diffusion. The aim is to determine a disposal policy which maximises the total amount of pollutants released during a fixed time T while meeting the condition that the concentration everywhere is below prescribed levels.
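    Because the governing advection-diffusion equations are linear, superposition lets such a disposal policy be posed as a linear program. A toy steady-state sketch, with a hypothetical 3-facility, 3-well unit-response matrix standing in for a real transport simulation:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical unit-response matrix A: A[i, j] is the steady concentration
# contribution at observation well i per unit disposal rate at facility j.
# Linearity of the transport equations makes superposition valid, which is
# what lets the management problem be posed as an LP.
A = np.array([[0.8, 0.2, 0.1],
              [0.3, 0.6, 0.2],
              [0.1, 0.3, 0.7]])
c_max = np.array([10.0, 10.0, 10.0])  # concentration standards at the wells

# linprog minimizes, so negate the objective to maximize total disposal.
res = linprog(c=-np.ones(3), A_ub=A, b_ub=c_max, bounds=[(0, None)] * 3)
print(res.x, -res.fun)
```

    The time-dependent version in the paper works the same way, with one concentration constraint per well per time step over the horizon T.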

  11. Feature Modeling in Underwater Environments Using Sparse Linear Combinations

    DTIC Science & Technology

    2010-01-01

    waters. Optics Express, 16(13), 2008. [9] J. Jaffe. Monte Carlo modeling of underwater-image formation: Validity of the linear and small-angle... turbid water, etc.), we would like to determine if these two images contain the same (or similar) object(s). One approach is as follows: 1. Detect... nearest neighbor methods on extracted feature descriptors. This methodology works well for clean, out-of-water images; however, when imaging underwater

  12. Linear theory for filtering nonlinear multiscale systems with model error

    PubMed Central

    Berry, Tyrus; Harlim, John

    2014-01-01

    In this paper, we study filtering of multiscale dynamical systems with model error arising from limitations in resolving the smaller scale processes. In particular, the analysis assumes the availability of continuous-time noisy observations of all components of the slow variables. Mathematically, this paper presents new results on higher order asymptotic expansion of the first two moments of a conditional measure. In particular, we are interested in the application of filtering multiscale problems in which the conditional distribution is defined over the slow variables, given noisy observation of the slow variables alone. From the mathematical analysis, we learn that for a continuous time linear model with Gaussian noise, there exists a unique choice of parameters in a linear reduced model for the slow variables which gives the optimal filtering when only the slow variables are observed. Moreover, these parameters simultaneously give the optimal equilibrium statistical estimates of the underlying system, and as a consequence they can be estimated offline from the equilibrium statistics of the true signal. By examining a nonlinear test model, we show that the linear theory extends in this non-Gaussian, nonlinear configuration as long as we know the optimal stochastic parametrization and the correct observation model. However, when the stochastic parametrization model is inappropriate, parameters chosen for good filter performance may give poor equilibrium statistical estimates and vice versa; this finding is based on analytical and numerical results on our nonlinear test model and the two-layer Lorenz-96 model. Finally, even when the correct stochastic ansatz is given, it is imperative to estimate the parameters simultaneously and to account for the nonlinear feedback of the stochastic parameters into the reduced filter estimates. In numerical experiments on the two-layer Lorenz-96 model, we find that the parameters estimated online, as part of a filtering procedure

  13. Comparison of linear and non-linear models for the adsorption of fluoride onto geo-material: limonite.

    PubMed

    Sahin, Rubina; Tapadia, Kavita

    2015-01-01

    The three widely used isotherms Langmuir, Freundlich and Temkin were examined in an experiment using fluoride (F⁻) ion adsorption on a geo-material (limonite) at four different temperatures by linear and non-linear models. Linear and non-linear regression models were compared in selecting the optimum isotherm for the experimental results. The coefficient of determination, r², was used to select the best theoretical isotherm. The four linearized Langmuir equations (1, 2, 3, and 4) are discussed. Langmuir isotherm parameters obtained from the four linearized Langmuir equations differed in the linear model but were identical in the non-linear model. Langmuir-2, one of the linear forms, had the highest coefficient of determination (r² = 0.99) compared to the other linearized Langmuir equations (1, 3 and 4), whereas for the non-linear model, Langmuir-4 fitted best among all the isotherms because it had the highest coefficient of determination (r² = 0.99). The results showed that the non-linear model may be a better way to obtain the parameters. In the present work, the thermodynamic parameters show that the adsorption of fluoride onto limonite is both spontaneous (ΔG < 0) and endothermic (ΔH > 0). Scanning electron microscope and X-ray diffraction images also confirm the adsorption of F⁻ ions onto limonite. The isotherm and kinetic study reveals that limonite can be used as an adsorbent for fluoride removal. In the future, limonite could support large-scale fluoride-removal technology, since it is cost-effective, eco-friendly and easily available in the study area.
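    The linear-versus-nonlinear comparison can be sketched on synthetic data: fit the Langmuir isotherm once through a linearized form (here the reciprocal form, 1/qe against 1/Ce) and once by nonlinear least squares. The data below are generated from assumed parameters (qm = 2, KL = 0.5), not the limonite measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qm, KL):
    """Langmuir isotherm: qe = qm*KL*Ce / (1 + KL*Ce)."""
    return qm * KL * Ce / (1.0 + KL * Ce)

# Synthetic equilibrium data from known parameters (qm=2, KL=0.5) with a
# little multiplicative noise; real F- adsorption data would replace this.
rng = np.random.default_rng(0)
Ce = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
qe = langmuir(Ce, 2.0, 0.5) * (1 + 0.01 * rng.standard_normal(Ce.size))

# Linearized fit (reciprocal form): 1/qe = (1/(qm*KL)) * (1/Ce) + 1/qm
slope, intercept = np.polyfit(1.0 / Ce, 1.0 / qe, 1)
qm_lin, KL_lin = 1.0 / intercept, intercept / slope

# Nonlinear least squares on the original form
(qm_nl, KL_nl), _ = curve_fit(langmuir, Ce, qe, p0=(1.0, 1.0))
print(qm_lin, KL_lin, qm_nl, KL_nl)
```

    The linearization reweights the measurement errors (small qe values dominate 1/qe), which is one reason the different linear forms disagree while the nonlinear fit is form-independent.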

  14. On the Development of Parameterized Linear Analytical Longitudinal Airship Models

    NASA Technical Reports Server (NTRS)

    Kulczycki, Eric A.; Johnson, Joseph R.; Bayard, David S.; Elfes, Alberto; Quadrelli, Marco B.

    2008-01-01

    In order to explore Titan, a moon of Saturn, airships must be able to traverse the atmosphere autonomously. To achieve this, an accurate model and accurate control of the vehicle must be developed so that it is understood how the airship will react to specific sets of control inputs. This paper explains how longitudinal aircraft stability derivatives can be used with airship parameters to create a linear model of the airship solely by combining geometric and aerodynamic airship data. This method does not require system identification of the vehicle. All of the required data can be derived from computational fluid dynamics and wind tunnel testing. This alternate method of developing dynamic airship models will reduce time and cost. Results are compared to other stable airship dynamic models to validate the methods. Future work will address a lateral airship model using the same methods.

  15. Modelling human balance using switched systems with linear feedback control

    PubMed Central

    Kowalczyk, Piotr; Glendinning, Paul; Brown, Martin; Medrano-Cerda, Gustavo; Dallali, Houman; Shapiro, Jonathan

    2012-01-01

    We are interested in understanding the mechanisms behind and the character of the sway motion of healthy human subjects during quiet standing. We assume that a human body can be modelled as a single-link inverted pendulum, and the balance is achieved using linear feedback control. Using these assumptions, we derive a switched model which we then investigate. Stable periodic motions (limit cycles) about an upright position are found. The existence of these limit cycles is studied as a function of system parameters. The exploration of the parameter space leads to the detection of multi-stability and homoclinic bifurcations. PMID:21697168

  16. Application of linear gauss pseudospectral method in model predictive control

    NASA Astrophysics Data System (ADS)

    Yang, Liang; Zhou, Hao; Chen, Wanchun

    2014-03-01

    This paper presents a model predictive control (MPC) method aimed at solving the nonlinear optimal control problem with hard terminal constraints and a quadratic performance index. The method combines the philosophies of nonlinear-approximation model predictive control, linear quadratic optimal control and the Gauss pseudospectral method. The current control is obtained by successively solving linear algebraic equations derived from the original problem via linearization and the Gauss pseudospectral method. It is not only of high computational efficiency, since it does not need to solve a nonlinear programming problem, but also of high accuracy even with only a few discretization points. Therefore, this method is suitable for on-board applications. A design of terminal impact with a specified direction is carried out to evaluate the performance of this method. An augmented PN guidance law in the three-dimensional coordinate system is applied to produce the initial guess, and various cases for targets with straight-line movement are employed to demonstrate the applicability at different impact angles. Moreover, performance of the proposed method is also assessed by comparison with other guidance laws. Simulation results indicate that this method is not only computationally efficient and accurate, but also applicable in the framework of guidance design.

  17. Wavefront Sensing for WFIRST with a Linear Optical Model

    NASA Technical Reports Server (NTRS)

    Jurling, Alden S.; Content, David A.

    2012-01-01

    In this paper we develop methods to use a linear optical model to capture the field dependence of wavefront aberrations in a nonlinear optimization-based phase retrieval algorithm for image-based wavefront sensing. The linear optical model is generated from a ray trace model of the system and allows the system state to be described in terms of mechanical alignment parameters rather than wavefront coefficients. This approach allows joint optimization over images taken at different field points and does not require separate convergence of phase retrieval at individual field points. Because the algorithm exploits field diversity, multiple defocused images per field point are not required for robustness. Furthermore, because it is possible to simultaneously fit images of many stars over the field, it is not necessary to use a fixed defocus to achieve adequate signal-to-noise ratio despite having images with high dynamic range. This allows high performance wavefront sensing using in-focus science data. We applied this technique in a simulation model based on the Wide Field Infrared Survey Telescope (WFIRST) Intermediate Design Reference Mission (IDRM) imager using a linear optical model with 25 field points. We demonstrate sub-thousandth-wave wavefront sensing accuracy in the presence of noise and moderate undersampling for both monochromatic and polychromatic images using 25 high-SNR target stars. Using these high-quality wavefront sensing results, we are able to generate upsampled point-spread functions (PSFs) and use them to determine PSF ellipticity to high accuracy in order to reduce the systematic impact of aberrations on the accuracy of galactic ellipticity determination for weak-lensing science.

  19. Repopulation Kinetics and the Linear-Quadratic Model

    NASA Astrophysics Data System (ADS)

    O'Rourke, S. F. C.; McAneney, H.; Starrett, C.; O'Sullivan, J. M.

    2009-08-01

    The standard Linear-Quadratic (LQ) survival model for radiotherapy is used to investigate different schedules of radiation treatment planning for advanced head and neck cancer. We explore how these treatment protocols may be affected by different tumour repopulation kinetics between treatments. The laws for tumour cell repopulation include the logistic and Gompertz models, and this extends the work of Wheldon et al. [1], which was concerned with the case of exponential repopulation between treatments. Treatment schedules investigated include standardized and accelerated fractionation. Calculations based on the present work show that, even with growth laws scaled to ensure that the repopulation kinetics for advanced head and neck cancer are comparable, the survival fraction still varied by orders of magnitude. Calculations show that application of the Gompertz model results in a significantly poorer prognosis for tumour eradication. Gaps in treatment also highlight the differences in the LQ model when the effect of repopulation kinetics is included.
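    A minimal sketch of fractionated LQ survival with regrowth between daily fractions, under exponential, logistic, and Gompertz repopulation laws. All parameter values (α, β, growth rate, carrying capacity, schedule) are illustrative assumptions, not those of the paper:

```python
import math

def fraction_survival(d, alpha=0.3, beta=0.03):
    """LQ surviving fraction for a single fraction of dose d (Gy):
    exp(-(alpha*d + beta*d^2)). alpha/beta here are illustrative."""
    return math.exp(-(alpha * d + beta * d * d))

def regrow(N, dt, law="exponential", lam=0.05, K=1e9):
    """Repopulation over time dt (days), using closed-form solutions of
    the exponential, logistic, and Gompertz growth laws."""
    if law == "exponential":
        return N * math.exp(lam * dt)
    if law == "logistic":
        return K * N / (N + (K - N) * math.exp(-lam * dt))
    if law == "gompertz":
        return K * (N / K) ** math.exp(-lam * dt)
    raise ValueError(law)

def treat(N0, n_fractions=30, d=2.0, dt=1.0, law="exponential"):
    """Surviving cell number after n daily fractions with regrowth between."""
    N = N0
    for _ in range(n_fractions):
        N *= fraction_survival(d)
        N = regrow(N, dt, law)
    return N

for law in ("exponential", "logistic", "gompertz"):
    print(law, treat(1e8, law=law))
```

    Because Gompertz growth accelerates as the population shrinks relative to its carrying capacity, the Gompertz run leaves far more surviving cells than the exponential run, consistent with the poorer prognosis noted above.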

  20. THE SEPARATION OF URANIUM ISOTOPES BY GASEOUS DIFFUSION: A LINEAR PROGRAMMING MODEL,

    DTIC Science & Technology

    (*URANIUM, ISOTOPE SEPARATION), (*GASEOUS DIFFUSION SEPARATION, LINEAR PROGRAMMING), (*LINEAR PROGRAMMING, GASEOUS DIFFUSION SEPARATION), MATHEMATICAL MODELS, GAS FLOW, NUCLEAR REACTORS, OPERATIONS RESEARCH

  1. Generalized linear mixed model for segregation distortion analysis

    PubMed Central

    2011-01-01

    Background Segregation distortion is the phenomenon in which the observed genotypic frequencies at a locus fall outside the expected Mendelian segregation ratio. The main cause of segregation distortion is viability selection on linked marker loci. These viability selection loci can be mapped using genome-wide marker information. Results We developed a generalized linear mixed model (GLMM) under the liability model to jointly map all viability selection loci of the genome. Using a hierarchical generalized linear mixed model, we can handle a number of loci several times larger than the sample size. We used a dataset from an F2 mouse family derived from the cross of two inbred lines to test the model and detected a major segregation distortion locus contributing 75% of the variance of the underlying liability. Replicated simulation experiments confirm that the power of viability locus detection is high and the false positive rate is low. Conclusions Not only can the method detect segregation distortion loci, but it can also be used for mapping quantitative trait loci of disease traits using case-only data in humans and selected populations in plants and animals. PMID:22078575

  2. Optimizing the Teaching-Learning Process Through a Linear Programming Model--Stage Increment Model.

    ERIC Educational Resources Information Center

    Belgard, Maria R.; Min, Leo Yoon-Gee

    An operations research method to optimize the teaching-learning process is introduced in this paper. In particular, a linear programming model is proposed which, unlike dynamic or control theory models, allows the computer to react to the responses of a learner in seconds or less. To satisfy the assumptions of linearity, the seemingly complicated…

  3. Waste management under multiple complexities: inexact piecewise-linearization-based fuzzy flexible programming.

    PubMed

    Sun, Wei; Huang, Guo H; Lv, Ying; Li, Gongchen

    2012-06-01

    To tackle nonlinear economies-of-scale (EOS) effects in interval-parameter constraints for a representative waste management problem, an inexact piecewise-linearization-based fuzzy flexible programming (IPFP) model is developed. In IPFP, interval parameters for waste amounts and transportation/operation costs can be quantified; aspiration levels for net system costs, as well as tolerance intervals for both capacities of waste treatment facilities and waste generation rates, can be reflected; and the nonlinear EOS effects transformed from the objective function to the constraints can be approximated. An interactive algorithm is proposed for solving the IPFP model, which in nature is an interval-parameter mixed-integer quadratically constrained programming model. To demonstrate the IPFP's advantages, two alternative models are developed to compare their performances. One is a conventional linear-regression-based inexact fuzzy programming model (IPFP2) and the other is an IPFP model with all right-hand sides of fuzzy constraints being the corresponding interval numbers (IPFP3). The comparison results between IPFP and IPFP2 indicate that the optimized waste amounts would have similar patterns in both models. However, when dealing with EOS effects in constraints, IPFP2 may underestimate the net system costs while IPFP can estimate the costs more accurately. The comparison results between IPFP and IPFP3 indicate that their solutions would be significantly different. The decreased system uncertainties in IPFP's solutions demonstrate its effectiveness in providing more satisfactory interval solutions than IPFP3. Following this first application to waste management, the IPFP can potentially be applied to other environmental problems under multiple complexities.
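    The piecewise-linearization step at the core of such models can be sketched generically: approximate a concave economies-of-scale cost by chords between breakpoints, each of which becomes a linear term in the optimization model (with SOS2 or binary variables when the function appears non-convexly in a constraint). The cost function and breakpoint count below are illustrative:

```python
def piecewise_linearize(f, lo, hi, n_segments):
    """Return breakpoints (xs, ys) and a piecewise-linear interpolant of f
    on [lo, hi] built from equally spaced chords."""
    xs = [lo + (hi - lo) * i / n_segments for i in range(n_segments + 1)]
    ys = [f(x) for x in xs]
    def approx(x):
        for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
            if x0 <= x <= x1:
                t = (x - x0) / (x1 - x0)
                return y0 + t * (y1 - y0)
        raise ValueError("x outside [lo, hi]")
    return xs, ys, approx

# Economies-of-scale cost: concave in throughput (illustrative exponent).
cost = lambda x: 12.0 * x ** 0.8
xs, ys, approx = piecewise_linearize(cost, 1.0, 100.0, 8)
worst = max(abs(approx(x) - cost(x)) for x in [1 + 0.5 * k for k in range(199)])
print(worst)
```

    For a concave cost each chord lies below the curve, so this approximation systematically underestimates cost between breakpoints; adding segments shrinks that gap, which mirrors the IPFP-versus-regression comparison above.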

  4. Centering, Scale Indeterminacy, and Differential Item Functioning Detection in Hierarchical Generalized Linear and Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Cheong, Yuk Fai; Kamata, Akihito

    2013-01-01

    In this article, we discuss and illustrate two centering and anchoring options available in differential item functioning (DIF) detection studies based on the hierarchical generalized linear and generalized linear mixed modeling frameworks. We compared and contrasted the assumptions of the two options, and examined the properties of their DIF…

  5. Model predictive control of a combined heat and power plant using local linear models

    SciTech Connect

    Kikstra, J.F.; Roffel, B.; Schoen, P.

    1998-10-01

    Model predictive control has been applied to control of a combined heat and power plant. One of the main features of this plant is that it exhibits nonlinear process behavior due to large throughput swings. In this application, the operating window of the plant has been divided into a number of smaller windows in which the nonlinear process behavior has been approximated by linear behavior. For each operating window, linear step weight models were developed from a detailed nonlinear first principles model, and the model prediction is calculated based on interpolation between these linear models. The model output at each operating point can then be calculated from four basic linear models, and the required control action can subsequently be calculated with the standard model predictive control approach using quadratic programming.

  6. A Structured Model Reduction Method for Linear Interconnected Systems

    NASA Astrophysics Data System (ADS)

    Sato, Ryo; Inoue, Masaki; Adachi, Shuichi

    2016-09-01

    This paper develops a model reduction method for a large-scale interconnected system that consists of linear dynamic components. In the model reduction, we aim to preserve the physical characteristics of each component. To this end, we formulate a structured model reduction problem that reduces the model order of the components while preserving the feedback structure. Although there are a few conventional methods for such structured model reduction that preserve stability, they do not explicitly consider the performance of the reduced-order feedback system. One of the difficulties in the problem with a performance guarantee comes from the nonlinearity of the feedback system with respect to each component. The problem is essentially in a class of nonlinear optimization problems, and therefore it cannot be solved efficiently even in numerical computation. In this paper, application of an equivalent transformation and a proper approximation reduces this nonlinear problem to a problem of weighted linear model reduction. Then, by using the weighted balanced truncation technique, we construct a reduced-order model that preserves the feedback structure and ensures a small modeling error. Finally, we verify the effectiveness of the proposed method through numerical experiments.

  7. Model of intermodulation distortion in non-linear multicarrier systems

    NASA Astrophysics Data System (ADS)

    Frigo, Nicholas J.

    1994-02-01

    A heuristic model is proposed which allows calculation of the individual spectral components of the intermodulation distortion present in a non-linear system with a multicarrier input. Noting that any given intermodulation product (IMP) can only be created by a subset of the input carriers, we partition them into 'signal' carriers (which create the IMP) and 'noise' carriers, modeled as a Gaussian process. The relationship between an input signal and the statistical average of its output (averaged over the Gaussian noise) is considered to be an effective transfer function. By summing all possible combinations of signal carriers which create power at the IMP frequencies, the distortion power can be calculated exactly as a function of frequency. An analysis of clipping in lightwave CATV links for AM-VSB signals is used to introduce the model, and is compared to a series of experiments.
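    The counting argument behind such models, that a given IMP frequency is reachable only by certain carrier combinations, can be sketched by enumerating third-order products f_i + f_j − f_k over a toy carrier comb; the triple-beat count peaks mid-band:

```python
from itertools import product
from collections import Counter

def third_order_imps(carriers):
    """Count third-order intermodulation products (f_i + f_j - f_k) landing
    on each frequency. In a multicarrier band the beat counts peak near the
    band centre, which makes the distortion frequency dependent."""
    counts = Counter()
    for fi, fj, fk in product(carriers, repeat=3):
        f = fi + fj - fk
        if f > 0:
            counts[f] += 1
    return counts

# Toy carrier comb with uniform spacing (loosely like 6 MHz CATV channels).
carriers = [6 * n for n in range(1, 11)]
counts = third_order_imps(carriers)
print(counts[6], counts[30], counts[60])  # band edge, mid-band, band edge
```

    Weighting each such combination by the effective transfer function described above, instead of counting it as 1, gives the frequency-dependent distortion power the model computes.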

  8. Adaptive Error Estimation in Linearized Ocean General Circulation Models

    NASA Technical Reports Server (NTRS)

    Chechelnitsky, Michael Y.

    1999-01-01

    Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by applying innovation-based methods of adaptive error estimation with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E) to TOPEX/POSEIDON (T/P) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced-state linear model that describes large-scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that the impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large

  9. Adjusting power for a baseline covariate in linear models

    PubMed Central

    Glueck, Deborah H.; Muller, Keith E.

    2009-01-01

    SUMMARY The analysis of covariance provides a common approach to adjusting for a baseline covariate in medical research. With Gaussian errors, adding random covariates does not change either the theory or the computations of general linear model data analysis. However, adding random covariates does change the theory and computation of power analysis. Many data analysts fail to fully account for this complication in planning a study. We present our results in five parts. (i) A review of published results helps document the importance of the problem and the limitations of available methods. (ii) A taxonomy for general linear multivariate models and hypotheses allows identification of a particular problem. (iii) We describe how random covariates introduce the need to consider quantiles and conditional values of power. (iv) We provide new exact and approximate methods for power analysis of a range of multivariate models with a Gaussian baseline covariate, for both small and large samples. The new results apply to the Hotelling-Lawley test and the four tests in the “univariate” approach to repeated measures (unadjusted, Huynh-Feldt, Geisser-Greenhouse, Box). The techniques allow rapid calculation and an interactive, graphical approach to sample size choice. (v) Calculating power for a clinical trial of a treatment for increasing bone density illustrates the new methods. We particularly recommend using quantile power with a new Satterthwaite-style approximation. PMID:12898543

  10. Model light curves of linear Type II supernovae

    SciTech Connect

    Swartz, D.A.; Wheeler, J.C.; Harkness, R.P.

    1991-06-01

    Light curves computed from hydrodynamic models of supernovae are compared graphically with the average observed B and V-band light curves of linear Type II supernovae. Models are based on the following explosion scenarios: carbon deflagration within a C + O core near the Chandrasekhar mass, electron-capture-induced core collapse of an O-Ne-Mg core of the Chandrasekhar mass, and collapse of an Fe core in a massive star. A range of envelope mass, initial radius, and composition is investigated. Only a narrow range of values of these parameters is consistent with observations. Within this narrow range, most of the observed light curve properties can be obtained in part, but none of the models can reproduce the entire light curve shape and absolute magnitude over the full 200 day comparison period. The observed lack of a plateau phase is explained in terms of a combination of small envelope mass and envelope helium enhancement. The final cobalt tail phase of the light curve can be reproduced only if the mass of explosively synthesized radioactive Ni-56 is small. The results presented here, in conjunction with the observed homogeneity among individual members of the supernova subclass, argue favorably for the O-Ne-Mg core collapse mechanism as an explanation for linear Type II supernovae. The Crab Nebula may have arisen from such an explosion. Carbon deflagrations may lead to brighter events like SN 1979C. 62 refs.
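
    The cobalt tail mentioned above follows directly from radioactive decay: after the plateau, the luminosity is powered by Co-56 (the daughter of explosively synthesized Ni-56) and scales linearly with the Ni-56 mass. A minimal sketch of that scaling; the decay constant is the standard Co-56 value, while the luminosity normalization `l0_per_msun` is an illustrative assumption, not a number from the paper.

```python
import math

CO56_HALFLIFE_D = 77.27                      # Co-56 half-life, days
TAU_CO = CO56_HALFLIFE_D / math.log(2.0)     # e-folding time, ~111.4 d

def tail_luminosity(t_days, m_ni_msun, l0_per_msun=1.4e43):
    """Bolometric tail luminosity (erg/s) at time t for a given Ni-56 mass.

    l0_per_msun is an illustrative normalization (assumption); the point
    is that the tail scales linearly with the Ni-56 mass, as in the abstract.
    """
    return m_ni_msun * l0_per_msun * math.exp(-t_days / TAU_CO)

def tail_decline_rate_mag_per_day():
    # A pure exponential decay declines at 2.5*log10(e)/tau magnitudes
    # per day, i.e. roughly 0.98 mag per 100 days for Co-56.
    return 2.5 * math.log10(math.e) / TAU_CO
```

    Because the tail is a fixed-slope exponential, a faint observed tail translates directly into a small Ni-56 mass, which is the constraint the abstract uses.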

  11. Modeling Pan Evaporation for Kuwait by Multiple Linear Regression

    PubMed Central

    Almedeij, Jaber

    2012-01-01

    Evaporation is an important parameter for many projects related to hydrology and water resources systems. This paper constitutes the first study conducted in Kuwait to obtain empirical relations for the estimation of daily and monthly pan evaporation as functions of available meteorological data of temperature, relative humidity, and wind speed. The data used here for the modeling are daily measurements of substantial continuity coverage, within a period of 17 years between January 1993 and December 2009, which can be considered representative of the desert climate of the urban zone of the country. A multiple linear regression technique is used with a variable selection procedure to fit the best model forms. The correlations of evaporation with temperature and relative humidity are also transformed in order to linearize the existing curvilinear patterns of the data, by using power and exponential functions, respectively. The evaporation models suggested with the best variable combinations were shown to produce results that are in reasonable agreement with observed values. PMID:23226984
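
    The approach above (linearize curvilinear predictors, then fit by ordinary least squares) can be sketched as follows. The transform choices mirror the abstract: a power function of temperature and an exponential of relative humidity. The exponents, coefficients, and synthetic data are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
temp = rng.uniform(15, 48, n)        # air temperature, deg C
rh = rng.uniform(5, 95, n)           # relative humidity, percent
wind = rng.uniform(0.5, 10, n)       # wind speed, m/s

# Synthetic "observed" pan evaporation with known structure plus noise
evap = 0.02 * temp**1.5 + 8.0 * np.exp(-0.03 * rh) + 0.4 * wind \
       + rng.normal(0, 0.3, n)

# Design matrix with the linearizing transforms applied to the predictors
X = np.column_stack([np.ones(n), temp**1.5, np.exp(-0.03 * rh), wind])
beta, *_ = np.linalg.lstsq(X, evap, rcond=None)

pred = X @ beta
r2 = 1 - np.sum((evap - pred) ** 2) / np.sum((evap - evap.mean()) ** 2)
```

    Once the predictors are transformed, the fit itself is standard multiple linear regression, so coefficient estimates and goodness-of-fit statistics carry over unchanged.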

  12. Inverse magnetic catalysis in the linear sigma model

    NASA Astrophysics Data System (ADS)

    Ayala, A.; Loewe, M.; Zamora, R.

    2016-05-01

    We compute the critical temperature for the chiral transition in the background of a magnetic field in the linear sigma model, including the quark contribution and the thermo-magnetic effects induced on the coupling constants at one loop level. For the analysis, we go beyond the mean field approximation by taking one-loop thermo-magnetic corrections to the couplings as well as plasma screening effects for the boson masses, expressed through the ring diagrams. We find inverse magnetic catalysis, i.e., a decrease of the critical chiral temperature as a function of the intensity of the magnetic field, which seems to be in agreement with recent results from the lattice community.

  13. Imbedding linear regressions in models for factor crossing

    NASA Astrophysics Data System (ADS)

    Santos, Carla; Nunes, Célia; Dias, Cristina; Varadinov, Maria; Mexia, João T.

    2016-12-01

    Given u factors with J_1, …, J_u levels we are led to test their effects and interactions. For this we consider an orthogonal partition of R^n, with n = prod_{l=1}^{u} J_l, into subspaces associated with the sets of factors. The space corresponding to the set C will have dimension g(C) = prod_{l in C} (J_l - 1), so that g({1, …, u}) will be much larger than the other numbers of degrees of freedom when J_l > 2, l = 1, …, u. This fact may be used to enrich these models by imbedding linear regressions in them.
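
    The degrees-of-freedom bookkeeping above can be checked directly: the subspace dimensions g(C) over all subsets C partition the n-dimensional space, and the top interaction term dominates. A small sketch with illustrative level counts (indices are 0-based in the code):

```python
from itertools import combinations
from math import prod

def subspace_dims(levels):
    """Map each subset C of factor indices to its dimension
    g(C) = prod_{l in C} (J_l - 1)."""
    u = len(levels)
    dims = {}
    for r in range(u + 1):
        for C in combinations(range(u), r):
            dims[C] = prod(levels[l] - 1 for l in C)  # empty product = 1
    return dims

levels = [3, 4, 5]                     # J_1, J_2, J_3 (example values)
dims = subspace_dims(levels)
n = prod(levels)                       # 60

# The subspace dimensions partition R^n: sum over all C equals n,
# since prod_l (1 + (J_l - 1)) = prod_l J_l.
assert sum(dims.values()) == n
# The full interaction dominates: g({1,...,u}) = 2*3*4 = 24 of 60.
```

    It is this large top-interaction subspace that the abstract proposes to reuse for imbedded linear regressions.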

  14. Modeling of Linear Gas Tungsten Arc Welding of Stainless Steel

    NASA Astrophysics Data System (ADS)

    Maran, P.; Sornakumar, T.; Sundararajan, T.

    2008-08-01

    A heat and fluid flow model has been developed to solve the gas tungsten arc (GTA) linear welding problem for austenitic stainless steel. The moving heat source problem associated with the electrode traverse has been simplified into an equivalent two-dimensional (2-D) transient problem. The torch residence time has been calculated from the arc diameter and torch speed. The mathematical formulation considers buoyancy, electromagnetic induction, and surface tension forces. The governing equations have been solved by the finite volume method. The temperature and velocity fields have been determined. The theoretical predictions for weld bead geometry are in good agreement with experimental measurements.

  15. Linear unmixing using endmember subspaces and physics based modeling

    NASA Astrophysics Data System (ADS)

    Gillis, David; Bowles, Jeffrey; Ientilucci, Emmett J.; Messinger, David W.

    2007-09-01

    One of the biggest issues with the Linear Mixing Model (LMM) is that it is implicitly assumed that each of the individual material components throughout the scene may be described using a single dimension (e.g. an endmember vector). In reality, individual pixels corresponding to the same general material class can exhibit a large degree of variation within a given scene. This is especially true in broad background classes such as forests, where the single dimension assumption clearly fails. In practice, the only way to account for the multidimensionality of the class is to choose multiple (very similar) endmembers, each of which represents some part of the class. To address these issues, we introduce the endmember subgroup model, which generalizes the notion of an 'endmember vector' to an 'endmember subspace'. In this model, spectra in a given hyperspectral scene are decomposed as a sum of constituent materials; however, each material is represented by some multidimensional subspace (instead of a single vector). The dimensionality of the subspace will depend on the within-class variation seen in the image. The endmember subgroups can be determined automatically from the data, or can use physics-based modeling techniques to include 'signature subspaces', which are included in the endmember subgroups. In this paper, we give an overview of the subgroup model, discuss methods for determining the endmember subgroups for a given image, and present results showing how the subgroup model improves upon traditional single endmember linear mixing. We also include results that use the 'signature subspace' approach to identifying mixed-pixel targets in HYDICE imagery.
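
    For reference, the baseline single-endmember LMM that the subgroup model generalizes can be sketched as a sum-to-one constrained least-squares problem. The augmented-row trick below (append a heavily weighted row of ones) is one common way to impose the constraint; the endmember spectra are synthetic, not from the paper. An "endmember subspace" would simply replace each single column of E with several columns spanning a material class.

```python
import numpy as np

def unmix(E, y, delta=1e3):
    """Sum-to-one constrained linear unmixing.

    E: (bands x m) endmember matrix, y: (bands,) pixel spectrum.
    Returns abundance fractions summing (approximately) to one."""
    bands, m = E.shape
    # Append a heavily weighted row of ones to enforce sum(f) = 1
    E_aug = np.vstack([E, delta * np.ones((1, m))])
    y_aug = np.concatenate([y, [delta]])
    f, *_ = np.linalg.lstsq(E_aug, y_aug, rcond=None)
    return f

rng = np.random.default_rng(1)
E = rng.uniform(0, 1, (50, 3))          # 50 bands, 3 synthetic endmembers
true_f = np.array([0.6, 0.3, 0.1])
y = E @ true_f + rng.normal(0, 0.001, 50)   # mixed pixel plus sensor noise
f = unmix(E, y)
```

    When within-class variability is large, no single column of E fits all pixels of a class well, which is exactly the failure mode the subgroup model addresses.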

  16. Filtering nonlinear dynamical systems with linear stochastic models

    NASA Astrophysics Data System (ADS)

    Harlim, J.; Majda, A. J.

    2008-06-01

    An important emerging scientific issue is the real time filtering through observations of noisy signals for nonlinear dynamical systems, as well as the statistical accuracy of spatio-temporal discretizations for filtering such systems. From the practical standpoint, the demand for operationally practical filtering methods escalates as the model resolution is significantly increased. For example, in numerical weather forecasting the current generation of global circulation models, with resolution of 35 km, has a total of billions of state variables. Numerous ensemble based Kalman filters (Evensen 2003 Ocean Dyn. 53 343-67; Bishop et al 2001 Mon. Weather Rev. 129 420-36; Anderson 2001 Mon. Weather Rev. 129 2884-903; Szunyogh et al 2005 Tellus A 57 528-45; Hunt et al 2007 Physica D 230 112-26) show promising results in addressing this issue; however, all these methods are very sensitive to model resolution, observation frequency, and the nature of the turbulent signals when a practical limited ensemble size (typically less than 100) is used. In this paper, we implement a radical filtering approach to a relatively low-dimensional (40-variable) toy model, the L-96 model (Lorenz 1996 Proc. on Predictability (ECMWF, 4-8 September 1995) pp 1-18), in various chaotic regimes in order to address the 'curse of ensemble size' for complex nonlinear systems. Practically, our approach has several desirable features such as extremely high computational efficiency and filter robustness towards variations of ensemble size (we found that the filter is reasonably stable even with a single realization), which makes it feasible for high dimensional problems, and it is independent of any tunable parameters such as the variance inflation coefficient in an ensemble Kalman filter. This radical filtering strategy decouples the problem of filtering a spatially extended nonlinear deterministic system to filtering a Fourier diagonal system of parametrized linear stochastic differential equations (Majda and Grote
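
    The Lorenz-96 testbed used above is simple to reproduce: dx_j/dt = (x_{j+1} - x_{j-2}) x_{j-1} - x_j + F with cyclic indices, where F = 8 gives a chaotic regime. A sketch with an RK4 integrator; the step size and initial perturbation are illustrative choices, not the paper's settings.

```python
import numpy as np

def l96_rhs(x, F=8.0):
    # Cyclic advection-damping-forcing form of Lorenz-96
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt, F=8.0):
    k1 = l96_rhs(x, F)
    k2 = l96_rhs(x + 0.5 * dt * k1, F)
    k3 = l96_rhs(x + 0.5 * dt * k2, F)
    k4 = l96_rhs(x + dt * k3, F)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

n, dt = 40, 0.05
x = np.full(n, 8.0)               # x = F is an (unstable) fixed point
x[0] += 0.01                      # small perturbation kicks off the chaos
for _ in range(500):              # spin up onto the attractor
    x = rk4_step(x, dt)
```

    Because the model is cheap, chaotic, and 40-dimensional, it serves as a standard proxy for studying ensemble-size and observation-frequency sensitivities before moving to operational systems.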

  17. Non-Linear Slosh Damping Model Development and Validation

    NASA Technical Reports Server (NTRS)

    Yang, H. Q.; West, Jeff

    2015-01-01

    Propellant tank slosh dynamics are typically represented by a mechanical model of a spring-mass-damper system. This mechanical model is then included in the equation of motion of the entire vehicle for Guidance, Navigation and Control (GN&C) analysis. For a partially-filled smooth wall propellant tank, the critical damping based on classical empirical correlation is as low as 0.05%. Due to this low value of damping, propellant slosh is a potential source of disturbance critical to the stability of launch and space vehicles. It is postulated that the commonly quoted slosh damping is valid only in the linear regime, where the slosh amplitude is small. With the increase of slosh amplitude, the critical damping value should also increase. If this nonlinearity can be verified and validated, the slosh stability margin can be significantly improved, and the level of conservatism maintained in the GN&C analysis can be lessened. The purpose of this study is to explore and to quantify the dependence of slosh damping on slosh amplitude. Accurately predicting the extremely low damping value of a smooth wall tank is very challenging for any Computational Fluid Dynamics (CFD) tool. One must resolve thin boundary layers near the wall and limit numerical damping to a minimum. This computational study demonstrates that with proper grid resolution, CFD can indeed accurately predict the low damping physics of smooth walls in the linear regime. Comparisons of extracted damping values with experimental data for different tank sizes show very good agreement. Numerical simulations confirm that slosh damping is indeed a function of slosh amplitude. When slosh amplitude is low, the damping ratio is essentially constant, which is consistent with the empirical correlation. Once the amplitude reaches a critical value, the damping ratio becomes a linearly increasing function of the slosh amplitude. A follow-on experiment validated the developed nonlinear damping relationship.
This discovery can

  18. Linear mixed effects models under inequality constraints with applications.

    PubMed

    Farnan, Laura; Ivanova, Anastasia; Peddada, Shyamal D

    2014-01-01

    Constraints arise naturally in many scientific experiments/studies, such as in epidemiology, biology, and toxicology, and researchers often ignore such information when analyzing their data, using standard methods such as the analysis of variance (ANOVA). Such methods may not only result in a loss of power and efficiency, and hence wasted experimental costs, but may also result in poor interpretation of the data. In this paper we discuss constrained statistical inference in the context of linear mixed effects models that arise naturally in many applications, such as in repeated measurements designs, familial studies and others. We introduce a novel methodology that is broadly applicable for a variety of constraints on the parameters. Since in many applications sample sizes are small and/or the data are not necessarily normally distributed, and furthermore error variances need not be homoscedastic (i.e. there is heterogeneity in the data), we use an empirical best linear unbiased predictor (EBLUP) type residual based bootstrap methodology for deriving critical values of the proposed test. Our simulation studies suggest that the proposed procedure maintains the desired nominal Type I error while competing well with other tests in terms of power. We illustrate the proposed methodology by re-analyzing clinical trial data on blood mercury level. The methodology introduced in this paper can be easily extended to other settings such as nonlinear and generalized regression models.

  19. Acoustic FMRI noise: linear time-invariant system model.

    PubMed

    Rizzo Sierra, Carlos V; Versluis, Maarten J; Hoogduin, Johannes M; Duifhuis, Hendrikus Diek

    2008-09-01

    Functional magnetic resonance imaging (fMRI) enables sites of brain activation to be localized in human subjects. For auditory system studies, however, the acoustic noise generated by the scanner tends to interfere with the assessments of this activation. Understanding and modeling fMRI acoustic noise is a useful step toward its reduction. To study acoustic noise, the MR scanner is modeled as a linear electroacoustical system generating sound pressure signals proportional to the time derivative of the input gradient currents. The transfer function of one MR scanner is determined for two different input specifications: 1) by using the gradient waveform calculated by the scanner software and 2) by using a recording of the gradient current. Up to 4 kHz, the first method is shown to be as reliable as the second one, and its use is encouraged when direct measurements of gradient currents are not possible. Additionally, the linear order and average damping properties of the gradient coil system are determined by impulse response analysis. Since fMRI is often based on echo planar imaging (EPI) sequences, a useful validation of the transfer function's prediction ability can be obtained by calculating the acoustic output for the EPI sequence. We found a predicted sound pressure level (SPL) for the EPI sequence of 104 dB SPL compared to a measured value of 102 dB SPL. As yet, the predicted EPI pressure waveform shows both similarities and some differences with the directly measured EPI pressure waveform.
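
    The LTI view above reduces to a pipeline of differentiation followed by convolution with the system's impulse response. A toy sketch of that pipeline; the sample rate, trapezoid-like "gradient current", and damped-resonance impulse response are all illustrative stand-ins for the measured quantities in the paper.

```python
import numpy as np

fs = 20000.0                       # sample rate, Hz (assumption)
t = np.arange(0, 0.05, 1 / fs)

# Toy "gradient current": a clipped sinusoid, roughly trapezoidal,
# reminiscent of EPI readout lobes
current = np.clip(np.sin(2 * np.pi * 100 * t), -0.7, 0.7)

# The acoustic source term is the time derivative of the current
d_current = np.gradient(current, 1 / fs)

# Illustrative impulse response: a damped 1 kHz mechanical resonance
h = np.exp(-t / 0.005) * np.sin(2 * np.pi * 1000 * t)

# Sound pressure = derivative of input convolved with impulse response
pressure = np.convolve(d_current, h)[: len(t)] / fs
```

    In the paper, h is not assumed but identified from measurements, which is why the predicted EPI waveform can be compared against the recorded one.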

  20. Linear versus quadratic portfolio optimization model with transaction cost

    NASA Astrophysics Data System (ADS)

    Razak, Norhidayah Bt Ab; Kamil, Karmila Hanim; Elias, Siti Masitah

    2014-06-01

    Optimization models have become one of the decision-making tools in investment. Hence, it is always a big challenge for investors to select the best model to fulfill their goals with respect to risk and return. In this paper we aim to discuss and compare the portfolio allocation and performance generated by quadratic and linear portfolio optimization models, namely the Markowitz and Maximin models, respectively. The application of these models has been proven to be significant and popular among others. However, transaction cost has been debated as one of the important aspects that should be considered for portfolio reallocation, as portfolio return can be significantly reduced when transaction cost is taken into consideration. Therefore, recognizing the importance of considering transaction cost when calculating portfolio return, we formulate this paper using data from Shariah-compliant securities listed on Bursa Malaysia. It is expected that results from this paper will effectively justify the advantage of one model over another and shed some light in the quest to find the best decision-making tool in investment for individual investors.
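
    The quadratic side of the comparison has a convenient closed form: the global minimum-variance Markowitz weights are w = S^{-1} 1 / (1' S^{-1} 1). A sketch with a toy covariance matrix and a proportional transaction-cost penalty of the kind the abstract discusses; the linear Maximin model would instead be solved as a linear program. All numbers are illustrative, not Bursa Malaysia data.

```python
import numpy as np

S = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.16]])   # toy covariance of 3 assets
ones = np.ones(3)

# Closed-form global minimum-variance weights
w = np.linalg.solve(S, ones)
w /= w.sum()                         # normalize so weights sum to one

# Proportional transaction cost reduces realized return on reallocation:
# r_net = w' mu - c * sum(|w - w_old|)
mu = np.array([0.08, 0.10, 0.12])    # expected returns (illustrative)
w_old = np.ones(3) / 3               # previous (equal-weight) portfolio
c = 0.005                            # round-trip cost per unit traded
r_net = w @ mu - c * np.abs(w - w_old).sum()
```

    The cost term makes reallocation a trade-off: a model that looks better gross of costs can lose its edge once the turnover it demands is priced in.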

  1. Some generalisations of linear-graph modelling for dynamic systems

    NASA Astrophysics Data System (ADS)

    de Silva, Clarence W.; Pourazadi, Shahram

    2013-11-01

    Proper modelling of a dynamic system can benefit analysis, simulation, design, evaluation and control of the system. The linear-graph (LG) approach is suitable for modelling lumped-parameter dynamic systems. By using the concepts of graph trees, it provides a graphical representation of the system, with a direct correspondence to the physical component topology. This paper systematically extends the application of LGs to multi-domain (mixed-domain or multi-physics) dynamic systems by presenting a unified way to represent different domains - mechanical, electrical, thermal and fluid. Preservation of the structural correspondence across domains is a particular advantage of LGs when modelling mixed-domain systems. The generalisation of Thevenin and Norton equivalent circuits to mixed-domain systems, using LGs, is presented. The structure of an LG model may follow a specific pattern. Vector LGs are introduced to take advantage of such patterns, giving a general LG representation for them. Through these vector LGs, the model representation becomes simpler and rather compact, both topologically and parametrically. A new single LG element is defined to facilitate the modelling of distributed-parameter (DP) systems. Examples are presented using multi-domain systems (a motion-control system and a flow-controlled pump), a multi-body mechanical system (robot manipulator) and DP systems (structural rods) to illustrate the application and advantages of the methodologies developed in the paper.

  2. Linear mixing model applied to coarse resolution satellite data

    NASA Technical Reports Server (NTRS)

    Holben, Brent N.; Shimabukuro, Yosio E.

    1992-01-01

    A linear mixing model typically applied to high resolution data such as Airborne Visible/Infrared Imaging Spectrometer, Thematic Mapper, and Multispectral Scanner System is applied to the NOAA Advanced Very High Resolution Radiometer coarse resolution satellite data. The reflective portion extracted from the middle IR channel 3 (3.55 - 3.93 microns) is used with channels 1 (0.58 - 0.68 microns) and 2 (0.725 - 1.1 microns) to run the Constrained Least Squares model to generate fraction images for an area in the west central region of Brazil. The derived fraction images are compared with an unsupervised classification and the fraction images derived from Landsat TM data acquired on the same day. In addition, the relationship between these fraction images and the well known NDVI images is presented. The results show the great potential of the unmixing techniques for application to coarse resolution data for global studies.

  3. Pointwise Description for the Linearized Fokker-Planck-Boltzmann Model

    NASA Astrophysics Data System (ADS)

    Wu, Kung-Chien

    2015-09-01

    In this paper, we study the pointwise (in the space variable) behavior of the linearized Fokker-Planck-Boltzmann model for nonsmooth initial perturbations. The result reveals both the fluid and kinetic aspects of this model. The fluid-like waves are constructed as the long-wave expansion in the spectrum of the Fourier modes for the space variable, and they have polynomial time decay rate. We design a Picard-type iteration for constructing the increasingly regular kinetic-like waves, which are carried by the transport equations and have exponential time decay rate. The Mixture Lemma plays an important role in constructing the kinetic-like waves; this lemma was originally introduced by Liu-Yu (Commun Pure Appl Math 57:1543-1608, 2004) for the Boltzmann equation, but the Fokker-Planck term in this paper creates some technical difficulties.

  4. Relating Cohesive Zone Model to Linear Elastic Fracture Mechanics

    NASA Technical Reports Server (NTRS)

    Wang, John T.

    2010-01-01

    The conditions required for a cohesive zone model (CZM) to predict a failure load of a cracked structure similar to that obtained by a linear elastic fracture mechanics (LEFM) analysis are investigated in this paper. This study clarifies why many different phenomenological cohesive laws can produce similar fracture predictions. Analytical results for five cohesive zone models are obtained, using five different cohesive laws that have the same cohesive work rate (CWR; the area under the traction-separation curve) but different maximum tractions. The effect of the maximum traction on the predicted cohesive zone length and the remote applied load at fracture is presented. Similar to the small-scale yielding condition required for an LEFM analysis to be valid, the cohesive zone length also needs to be much smaller than the crack length. This is a necessary condition for a CZM to obtain a fracture prediction equivalent to an LEFM result.

  5. Robust cross-validation of linear regression QSAR models.

    PubMed

    Konovalov, Dmitry A; Llewellyn, Lyndon E; Vander Heyden, Yvan; Coomans, Danny

    2008-10-01

    A quantitative structure-activity relationship (QSAR) model is typically developed to predict the biochemical activity of untested compounds from the compounds' molecular structures. "The gold standard" of model validation is the blindfold prediction, in which the model's predictive power is assessed from how well the model predicts the activity values of compounds that were not considered in any way during the model development/calibration. However, during the development of a QSAR model, it is necessary to obtain some indication of the model's predictive power. This is often done by some form of cross-validation (CV). In this study, the concepts of the predictive power and fitting ability of a multiple linear regression (MLR) QSAR model were examined in the CV context allowing for the presence of outliers. Commonly used predictive power and fitting ability statistics were assessed via Monte Carlo cross-validation when applied to percent human intestinal absorption, blood-brain partition coefficient, and toxicity values of saxitoxin QSAR data sets, as well as three known benchmark data sets with known outlier contamination. It was found that (1) a robust version of MLR should always be preferred over the ordinary-least-squares MLR, regardless of the degree of outlier contamination, and that (2) the model's predictive power should only be assessed via robust statistics. The MATLAB and Java source code used in this study is freely available from the QSAR-BENCH section of www.dmitrykonovalov.org for academic use. The Web site also contains the Java-based QSAR-BENCH program, which can be run online via Java's Web Start technology (supporting Windows, Mac OSX, Linux/Unix) to reproduce most of the reported results or apply the reported procedures to other data sets.
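
    Monte Carlo cross-validation as used above is straightforward to sketch: repeat random train/test splits, fit by least squares on the training part, and average a predictive q^2 over the held-out parts. The data below are synthetic and the fit is plain OLS; the robust-MLR variants the paper recommends would replace the `np.linalg.lstsq` call inside `fit()`.

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 120, 4
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5, 0.0])
y = X @ beta_true + rng.normal(0, 0.5, n)

def fit(Xtr, ytr):
    # Ordinary least squares with an intercept column
    A = np.column_stack([np.ones(len(Xtr)), Xtr])
    b, *_ = np.linalg.lstsq(A, ytr, rcond=None)
    return b

def mccv_q2(X, y, n_splits=50, test_frac=0.25):
    """Average predictive q^2 over random train/test splits."""
    scores = []
    for _ in range(n_splits):
        idx = rng.permutation(len(y))
        n_test = int(test_frac * len(y))
        te, tr = idx[:n_test], idx[n_test:]
        b = fit(X[tr], y[tr])
        pred = b[0] + X[te] @ b[1:]
        ss_res = np.sum((y[te] - pred) ** 2)
        ss_tot = np.sum((y[te] - y[tr].mean()) ** 2)
        scores.append(1 - ss_res / ss_tot)
    return float(np.mean(scores))

q2 = mccv_q2(X, y)
```

    Unlike leave-one-out CV, the number of splits and the test fraction are free parameters here, which is what makes the scheme flexible for assessing outlier sensitivity.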

  6. On the unnecessary ubiquity of hierarchical linear modeling.

    PubMed

    McNeish, Daniel; Stapleton, Laura M; Silverman, Rebecca D

    2017-03-01

    In psychology and the behavioral sciences generally, the use of the hierarchical linear model (HLM) and its extensions for discrete outcomes are popular methods for modeling clustered data. HLM and its discrete outcome extensions, however, are certainly not the only methods available to model clustered data. Although other methods exist and are widely implemented in other disciplines, it seems that psychologists have yet to consider these methods in substantive studies. This article compares and contrasts HLM with alternative methods including generalized estimating equations and cluster-robust standard errors. These alternative methods do not model random effects and thus make a smaller number of assumptions and are interpreted identically to single-level methods with the benefit that estimates are adjusted to reflect clustering of observations. Situations where these alternative methods may be advantageous are discussed including research questions where random effects are and are not required, when random effects can change the interpretation of regression coefficients, challenges of modeling with random effects with discrete outcomes, and examples of published psychology articles that use HLM that may have benefitted from using alternative methods. Illustrative examples are provided and discussed to demonstrate the advantages of the alternative methods and also when HLM would be the preferred method. (PsycINFO Database Record

  7. Linear-Nonlinear-Poisson Models of Primate Choice Dynamics

    PubMed Central

    Corrado, Greg S; Sugrue, Leo P; Sebastian Seung, H; Newsome, William T

    2005-01-01

    The equilibrium phenomenon of matching behavior traditionally has been studied in stationary environments. Here we attempt to uncover the local mechanism of choice that gives rise to matching by studying behavior in a highly dynamic foraging environment. In our experiments, 2 rhesus monkeys (Macacca mulatta) foraged for juice rewards by making eye movements to one of two colored icons presented on a computer monitor, each rewarded on dynamic variable-interval schedules. Using a generalization of Wiener kernel analysis, we recover a compact mechanistic description of the impact of past reward on future choice in the form of a Linear-Nonlinear-Poisson model. We validate this model through rigorous predictive and generative testing. Compared to our earlier work with this same data set, this model proves to be a better description of choice behavior and is more tightly correlated with putative neural value signals. Refinements over previous models include hyperbolic (as opposed to exponential) temporal discounting of past rewards, and differential (as opposed to fractional) comparisons of option value. Through numerical simulation we find that within this class of strategies, the model parameters employed by animals are very close to those that maximize reward harvesting efficiency. PMID:16596981
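
    The three stages named above compose naturally in simulation: past rewards on each option are filtered with a hyperbolic discounting kernel (linear stage), the option values are compared by a difference and squashed (nonlinear stage), and choices are emitted stochastically (the Poisson-like emission stage). All kernel, gain, and schedule parameters below are illustrative assumptions, not the fitted monkey values.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, tau = 500, 10.0
lags = np.arange(1, 31)
kernel = 1.0 / (1.0 + lags / tau)       # hyperbolic discounting of past rewards

rewards = np.zeros((2, n_trials))       # reward history per option
choices = np.zeros(n_trials, dtype=int)
p_reward = np.array([0.4, 0.1])         # option 0 is the richer schedule

for t in range(n_trials):
    past = rewards[:, max(0, t - 30):t][:, ::-1]     # most recent first
    k = kernel[: past.shape[1]]
    values = past @ k if past.size else np.zeros(2)  # linear stage
    # Nonlinear stage: differential value comparison through a sigmoid
    p0 = 1.0 / (1.0 + np.exp(-3.0 * (values[0] - values[1])))
    c = int(rng.random() > p0)                       # stochastic emission
    choices[t] = c
    rewards[c, t] = float(rng.random() < p_reward[c])

frac_rich = float(np.mean(choices == 0))             # allocation to option 0
```

    In a full variable-interval simulation the reward probabilities would depend on how long each option has gone unchosen, which is what generates matching rather than exclusive preference.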

  8. Electroweak corrections and unitarity in linear moose models

    SciTech Connect

    Chivukula, R. Sekhar; Simmons, Elizabeth H.; He, H.-J.; Kurachi, Masafumi; Tanabashi, Masaharu

    2005-02-01

    We calculate the form of the corrections to the electroweak interactions in the class of Higgsless models which can be deconstructed to a chain of SU(2) gauge groups adjacent to a chain of U(1) gauge groups, and with the fermions coupled to any single SU(2) group and to any single U(1) group along the chain. The primary advantage of our technique is that the size of corrections to electroweak processes can be directly related to the spectrum of vector bosons ('KK modes'). In Higgsless models, this spectrum is constrained by unitarity. Our methods also allow for arbitrary background 5D geometry, spatially dependent gauge-couplings, and brane kinetic energy terms. We find that, due to the size of corrections to electroweak processes in any unitary theory, Higgsless models with localized fermions are disfavored by precision electroweak data. Although we stress our results as they apply to continuum Higgsless 5D models, they apply to any linear moose model including those with only a few extra vector bosons. Our calculations of electroweak corrections also apply directly to the electroweak gauge sector of 5D theories with a bulk scalar Higgs boson; the constraints arising from unitarity do not apply in this case.

  9. Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.

    PubMed

    Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J

    2016-10-03

    Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of the femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank (up to 26 mm); this was found to limit the accuracy of all studied scaling methods. Errors for the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods.

  10. Linear mixed-effects modeling approach to FMRI group analysis

    PubMed Central

    Chen, Gang; Saad, Ziad S.; Britton, Jennifer C.; Pine, Daniel S.; Cox, Robert W.

    2013-01-01

    Conventional group analysis is usually performed with Student-type t-test, regression, or standard AN(C)OVA in which the variance–covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or the mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would be otherwise either difficult or infeasible under traditional frameworks such as AN(C)OVA and general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance–covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that the LME modeling keeps a balance between the control for false positives and the

  11. Selection between Linear Factor Models and Latent Profile Models Using Conditional Covariances

    ERIC Educational Resources Information Center

    Halpin, Peter F.; Maraun, Michael D.

    2010-01-01

    A method for selecting between K-dimensional linear factor models and (K + 1)-class latent profile models is proposed. In particular, it is shown that the conditional covariances of observed variables are constant under factor models but nonlinear functions of the conditioning variable under latent profile models. The performance of a convenient…

  12. Modeling Electric Vehicle Benefits Connected to Smart Grids

    SciTech Connect

    Stadler, Michael; Marnay, Chris; Mendes, Goncalo; Kloess, Maximillian; Cardoso, Goncalo; Mégel, Olivier; Siddiqui, Afzal

    2011-07-01

    Connecting electric storage technologies to smart grids will have substantial implications for building energy systems. Local storage will enable demand response. Mobile storage devices in electric vehicles (EVs) are in direct competition with conventional stationary sources at the building. EVs will change the financial as well as environmental attractiveness of on-site generation (e.g. PV or fuel cells). In order to examine the impact of EVs on building energy costs and CO2 emissions in 2020, a distributed-energy-resources adoption problem is formulated as a mixed-integer linear program with minimization of annual building energy costs or CO2 emissions. The mixed-integer linear program is applied to a set of 139 different commercial buildings in California, and example results as well as the aggregated economic and environmental benefits are reported. The research shows that considering the second life of EV batteries might be very beneficial for commercial buildings.
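
    The adoption problem described above is a full MILP over many buildings and technologies. As a toy illustration only, the sketch below enumerates binary install decisions for three hypothetical technologies to minimize annualized cost subject to a demand-coverage constraint; all names and numbers are invented, and brute-force enumeration stands in for the MILP solver a real model would use.

```python
from itertools import product

# Toy DER-adoption problem: binary decisions y_i on which technologies to
# install, minimizing annualized cost while covering an energy demand.
# Costs, yields, and demand are made up for illustration.

techs = {            # name: (annualized cost $/yr, energy delivered MWh/yr)
    "PV":         (12000, 150),
    "fuel_cell":  (20000, 300),
    "EV_storage": (5000,   80),
}
demand = 230  # MWh/yr that must be covered

best = None
for choice in product([0, 1], repeat=len(techs)):
    cost = sum(c * y for (c, _), y in zip(techs.values(), choice))
    energy = sum(e * y for (_, e), y in zip(techs.values(), choice))
    if energy >= demand and (best is None or cost < best[0]):
        best = (cost, choice)

cost, choice = best
picked = [name for name, y in zip(techs, choice) if y]
print(cost, picked)
```

    With these numbers the cheapest feasible combination is PV plus EV storage rather than the fuel cell alone, which is the kind of trade-off the paper's model resolves at scale.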

  13. Direction of Effects in Multiple Linear Regression Models.

    PubMed

    Wiedermann, Wolfgang; von Eye, Alexander

    2015-01-01

    Previous studies analyzed asymmetric properties of the Pearson correlation coefficient using higher than second order moments. These asymmetric properties can be used to determine the direction of dependence in a linear regression setting (i.e., establish which of two variables is more likely to be on the outcome side) within the framework of cross-sectional observational data. Extant approaches are restricted to the bivariate regression case. The present contribution extends the direction of dependence methodology to a multiple linear regression setting by analyzing distributional properties of residuals of competing multiple regression models. It is shown that, under certain conditions, the third central moments of estimated regression residuals can be used to decide upon direction of effects. In addition, three different approaches for statistical inference are discussed: a combined D'Agostino normality test, a skewness difference test, and a bootstrap difference test. Type I error and power of the procedures are assessed using Monte Carlo simulations, and an empirical example is provided for illustrative purposes. In the discussion, issues concerning the quality of psychological data, possible extensions of the proposed methods to the fourth central moment of regression residuals, and potential applications are addressed.
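
    The residual-moment idea can be seen in a small numerical sketch: with a skewed predictor and symmetric errors, residuals from the correctly oriented regression stay symmetric, while residuals from the reversed regression inherit skewness from the predictor. The data below are hand-picked toy values, not from the article.

```python
def fit(x, y):
    """OLS intercept and slope for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def third_central_moment(r):
    n = len(r)
    m = sum(r) / n
    return sum((ri - m) ** 3 for ri in r) / n

# Right-skewed predictor x, and y = x plus a symmetric error term.
x = [0.0, 0.0, 0.0, 0.0, 5.0]
y = [-1.0, 1.0, -1.0, 1.0, 5.0]

a, b = fit(x, y)
res_yx = [yi - (a + b * xi) for xi, yi in zip(x, y)]    # residuals of y ~ x
a2, b2 = fit(y, x)
res_xy = [xi - (a2 + b2 * yi) for xi, yi in zip(x, y)]  # residuals of x ~ y

# The correctly oriented model leaves symmetric residuals (third moment 0);
# the reversed model's residuals pick up the skewness of x.
print(round(third_central_moment(res_yx), 4),
      round(third_central_moment(res_xy), 4))
```

    This is only the point estimate; the article's inference procedures (normality, skewness-difference, and bootstrap tests) decide whether such a difference is statistically reliable.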

  14. Feedbacks, climate sensitivity, and the limits of linear models

    NASA Astrophysics Data System (ADS)

    Rugenstein, M.; Knutti, R.

    2015-12-01

    The term "feedback" is used ubiquitously in climate research, but implies varied meanings in different contexts. From a specific process that locally affects a quantity, to a formal framework that attempts to determine a global response to a forcing, researchers use this term to separate, simplify, and quantify parts of the complex Earth system. We combine large (>120 member) ensemble GCM and EMIC step forcing simulations over a broad range of forcing levels with a historical and educational perspective to organize existing ideas around feedbacks and linear forcing-feedback models. With a new method overcoming internal variability and initial condition problems we quantify the non-constancy of the climate feedback parameter. Our results suggest a strong state- and forcing-dependency of feedbacks, which is not considered appropriately in many studies. A non-constant feedback factor likely explains some of the differences in estimates of equilibrium climate sensitivity from different methods and types of data. We discuss implications for the definition of the forcing term and its various adjustments. Clarifying the value and applicability of the linear forcing feedback framework and a better quantification of feedbacks on various timescales and spatial scales remains a high priority in order to better understand past and predict future changes in the climate system.

  15. A linear geospatial streamflow modeling system for data sparse environments

    USGS Publications Warehouse

    Asante, Kwabena O.; Arlan, Guleid A.; Pervez, Md Shahriar; Rowland, James

    2008-01-01

    In many river basins around the world, inaccessibility of flow data is a major obstacle to water resource studies and operational monitoring. This paper describes a geospatial streamflow modeling system which is parameterized with global terrain, soils and land cover data and run operationally with satellite‐derived precipitation and evapotranspiration datasets. Simple linear methods transfer water through the subsurface, overland and river flow phases, and the resulting flows are expressed in terms of standard deviations from mean annual flow. In sample applications, the modeling system was used to simulate flow variations in the Congo, Niger, Nile, Zambezi, Orange and Lake Chad basins between 1998 and 2005, and the resulting flows were compared with mean monthly values from the open‐access Global River Discharge Database. While the uncalibrated model cannot predict the absolute magnitude of flow, it can quantify flow anomalies in terms of relative departures from mean flow. Most of the severe flood events identified in the flow anomalies were independently verified by the Dartmouth Flood Observatory (DFO) and the Emergency Disaster Database (EM‐DAT). Despite its limitations, the modeling system is valuable for rapid characterization of the relative magnitude of flood hazards and seasonal flow changes in data sparse settings.
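
    Expressing flows as standardized anomalies, as the modeling system does, is a one-line transformation; the monthly flow values below are invented for illustration.

```python
# Flow anomalies in standard-deviation units relative to the mean flow,
# mirroring how the uncalibrated model reports relative departures.
from statistics import mean, pstdev

flows = [120.0, 95.0, 80.0, 60.0, 55.0, 70.0,
         140.0, 260.0, 310.0, 220.0, 160.0, 130.0]  # one year, monthly (invented)

mu, sigma = mean(flows), pstdev(flows)
anomalies = [round((q - mu) / sigma, 2) for q in flows]
print(anomalies)
```

    A large positive anomaly (here the wet-season peak) is the kind of signal that would flag a potential flood event even though the absolute discharge is uncalibrated.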

  16. Linear model for fast background subtraction in oligonucleotide microarrays

    PubMed Central

    2009-01-01

    Background One important preprocessing step in the analysis of microarray data is background subtraction. In high-density oligonucleotide arrays this is recognized as a crucial step for the global performance of the data analysis from raw intensities to expression values. Results We propose here an algorithm for background estimation based on a model in which the cost function is quadratic in a set of fitting parameters such that minimization can be performed through linear algebra. The model incorporates two effects: 1) Correlated intensities between neighboring features in the chip and 2) sequence-dependent affinities for non-specific hybridization fitted by an extended nearest-neighbor model. Conclusion The algorithm has been tested on 360 GeneChips from publicly available data of recent expression experiments. The algorithm is fast and accurate. Strong correlations between the fitted values for different experiments as well as between the free-energy parameters and their counterparts in aqueous solution indicate that the model captures a significant part of the underlying physical chemistry. PMID:19917117

  17. Gauged linear sigma model and pion-pion scattering

    SciTech Connect

    Fariborz, Amir H.; Schechter, Joseph; Shahid, M. Naeem

    2009-12-01

    A simple gauged linear sigma model with several parameters to take the symmetry breaking and the mass differences between the vector meson and the axial vector meson into account is considered here as a possibly useful 'template' for the role of a light scalar in QCD as well as for (at a different scale) an effective Higgs sector for some recently proposed walking technicolor models. An analytic procedure is first developed for relating the Lagrangian parameters to four well established (in the QCD application) experimental inputs. One simple equation distinguishes three different cases: i. QCD with axial vector particle heavier than vector particle, ii. possible technicolor model with vector particle heavier than the axial vector one, iii. the unphysical QCD case where both the Kawarabayashi-Suzuki-Riazuddin-Fayazuddin and Weinberg relations hold. The model is applied to the s-wave pion-pion scattering in QCD. Both the near threshold region and (with an assumed unitarization) the 'global' region up to about 800 MeV are considered. It is noted that there is a little tension between the choice of 'bare' sigma mass parameter for describing these two regions. If a reasonable 'global' fit is made, there is some loss of precision in the near threshold region.

  18. Forecasting Groundwater Temperature with Linear Regression Models Using Historical Data.

    PubMed

    Figura, Simon; Livingstone, David M; Kipfer, Rolf

    2015-01-01

    Although temperature is an important determinant of many biogeochemical processes in groundwater, very few studies have attempted to forecast the response of groundwater temperature to future climate warming. Using a composite linear regression model based on the lagged relationship between historical groundwater and regional air temperature data, empirical forecasts were made of groundwater temperature in several aquifers in Switzerland up to the end of the current century. The model was fed with regional air temperature projections calculated for greenhouse-gas emissions scenarios A2, A1B, and RCP3PD. Model evaluation revealed that the approach taken is adequate only when the data used to calibrate the models are sufficiently long and contain sufficient variability. These conditions were satisfied for three aquifers, all fed by riverbank infiltration. The forecasts suggest that with respect to the reference period 1980 to 2009, groundwater temperature in these aquifers will most likely increase by 1.1 to 3.8 K by the end of the current century, depending on the greenhouse-gas emissions scenario employed.
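
    The core of the approach is a lagged regression of groundwater temperature on regional air temperature, then substitution of projected air temperatures. The miniature sketch below uses invented annual series and a one-year lag; the paper's composite models are calibrated on long observational records.

```python
# Lagged-regression forecast: fit gw[t] = a + b * air[t - lag] on history,
# then plug in a projected air temperature. All numbers are invented.

def ols(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

air = [8.0, 8.2, 8.1, 8.5, 8.4, 8.8, 9.0, 9.1]        # annual mean air temp, deg C
gw  = [10.0, 10.1, 10.2, 10.1, 10.3, 10.25, 10.45, 10.55]  # groundwater temp, deg C
lag = 1

a, b = ols(air[:-lag], gw[lag:])      # calibrate on lagged pairs
projected_air = 10.0                  # hypothetical scenario value
print(round(a + b * projected_air, 2))
```

    The evaluation caveat in the abstract applies directly here: with a short or low-variability calibration series, the fitted slope b is unreliable and the forecast with it.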

  19. The linear Ising model and its analytic continuation, random walk

    NASA Astrophysics Data System (ADS)

    Lavenda, B. H.

    2004-02-01

    A generalization of Gauss's principle is used to derive the error laws corresponding to Types II and VII distributions in Pearson's classification scheme. Student's r-p.d.f. (Type II) governs the distribution of the internal energy of a uniform, linear chain, Ising model, while the analytic continuation of the uniform exchange energy converts it into a Student t-density (Type VII) for the position of a random walk in a single spatial dimension. Higher-dimensional spaces, corresponding to larger degrees of freedom and generalizations to multidimensional Student r- and t-densities, are obtained by considering independent and identically distributed random variables, having rotationally invariant densities, whose entropies are additive and generating functions are multiplicative.

  20. Optimization in generalized linear models: A case study

    NASA Astrophysics Data System (ADS)

    Silva, Eliana Costa e.; Correia, Aldina; Lopes, Isabel Cristina

    2016-06-01

    The maximum likelihood method is usually chosen to estimate the regression parameters of Generalized Linear Models (GLM) and also for hypothesis testing and goodness of fit tests. The classical method for estimating GLM parameters is Fisher scoring. In this work we propose to compute the estimates of the parameters with two alternative methods: a derivative-based optimization method, namely the BFGS method, which is one of the most popular of the quasi-Newton algorithms, and the PSwarm derivative-free optimization method that combines features of a pattern search optimization method with a global Particle Swarm scheme. As a case study we use a dataset of biological parameters (phytoplankton) and chemical and environmental parameters of the water column of a Portuguese reservoir. The results show that, for this dataset, the BFGS and PSwarm methods provided a better fit than the Fisher scoring method and can be good alternatives for finding the estimates for the parameters of a GLM.
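
    Fisher scoring itself is compact enough to sketch. The example below fits a one-covariate Poisson GLM with log link by iterating score/information updates; the data are invented and unrelated to the reservoir dataset, and a quasi-Newton (BFGS) or PSwarm run would maximize the same likelihood.

```python
from math import exp, log

# Fisher scoring for a Poisson GLM with log link: log mu_i = b0 + b1*x_i.
# For the canonical log link this coincides with Newton-Raphson.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1.0, 2.0, 4.0, 9.0, 16.0]   # toy counts, roughly exponential in x

b0, b1 = log(sum(y) / len(y)), 0.0          # standard start: mu_i = mean(y)
for _ in range(25):
    mu = [exp(b0 + b1 * xi) for xi in x]
    s0 = sum(yi - mi for yi, mi in zip(y, mu))                 # score, intercept
    s1 = sum((yi - mi) * xi for yi, mi, xi in zip(y, mu, x))   # score, slope
    i00 = sum(mu)                                              # Fisher information
    i01 = sum(mi * xi for mi, xi in zip(mu, x))
    i11 = sum(mi * xi * xi for mi, xi in zip(mu, x))
    det = i00 * i11 - i01 * i01
    b0 += (i11 * s0 - i01 * s1) / det                          # solve 2x2 system
    b1 += (i00 * s1 - i01 * s0) / det

print(round(b0, 3), round(b1, 3))
```

    At convergence the score equations hold, i.e. fitted and observed moments match; that stationarity condition is what any of the three optimizers must reach.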

  1. Preconditioning the bidomain model with almost linear complexity

    NASA Astrophysics Data System (ADS)

    Pierre, Charles

    2012-01-01

    The bidomain model is widely used in electro-cardiology to simulate spreading of excitation in the myocardium and electrocardiograms. It consists of a system of two parabolic reaction diffusion equations coupled with an ODE system. Its discretisation displays an ill-conditioned system matrix to be inverted at each time step: simulations based on the bidomain model therefore are associated with high computational costs. In this paper we propose a preconditioning for the bidomain model either for an isolated heart or in an extended framework including a coupling with the surrounding tissues (the torso). The preconditioning is based on a formulation of the discrete problem that is shown to be symmetric positive semi-definite. A block LU decomposition of the system together with a heuristic approximation (referred to as the monodomain approximation) are the key ingredients for the preconditioning definition. Numerical results are provided for two test cases: a 2D test case on a realistic slice of the thorax based on a segmented heart medical image geometry, and a 3D test case involving a small cubic slab of tissue with orthotropic anisotropy. The analysis of the resulting computational cost (both in terms of CPU time and of iteration number) shows an almost linear complexity with the problem size, i.e. of type n·log^α(n) (for some constant α), which is optimal complexity for such problems.

  2. A Linear City Model with Asymmetric Consumer Distribution

    PubMed Central

    Azar, Ofer H.

    2015-01-01

    The article analyzes a linear-city model where the consumer distribution can be asymmetric, which is important because in real markets this distribution is often asymmetric. The model yields equilibrium price differences, even though the firms’ costs are equal and their locations are symmetric (at the two endpoints of the city). The equilibrium price difference is proportional to the transportation cost parameter and does not depend on the good's cost. The firms' markups are also proportional to the transportation cost. The two firms’ prices will be equal in equilibrium if and only if half of the consumers are located to the left of the city’s midpoint, even if other characteristics of the consumer distribution are highly asymmetric. An extension analyzes what happens when the firms have different costs and how the two sources of asymmetry – the consumer distribution and the cost per unit – interact together. The model can be useful as a tool for further development by other researchers interested in applying this simple yet flexible framework for the analysis of various topics. PMID:26034984

  3. Simulating annual glacier flow with a linear reservoir model

    NASA Astrophysics Data System (ADS)

    Span, Norbert; Kuhn, Michael

    2003-05-01

    In this paper we present a numerical simulation of the observation that most alpine glaciers have reached peak velocities in the early 1980s followed by nearly exponential decay of velocity in the subsequent decade. We propose that similarity exists between precipitation and associated runoff hydrograph in a river basin on one side and annual mean specific mass balance of the accumulation area of alpine glaciers and ensuing changes in ice flow on the other side. The similarity is expressed in terms of a linear reservoir with fluctuating input where the year to year change of ice velocity is governed by two terms, a fraction of the velocity of the previous year as a recession term and the mean specific balance of the accumulation area of the current year as a driving term. The coefficients of these terms directly relate to the timescale, the mass balance/altitude profile, and the geometric scale of the glacier. The model is well supported by observations in the upper part of the glacier where surface elevation stays constant to within ±5 m over a 30 year period. There is no temporal trend in the agreement between observed and modeled horizontal velocities and no difference between phases of acceleration and phases of deceleration, which means that the model is generally valid for a given altitude on a given glacier.
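
    The year-to-year update described above is a two-term linear recursion: a recession fraction of last year's velocity plus a mass-balance driving term. The sketch below uses invented coefficients and an invented balance series (not the observed Alpine data) to show the peak-then-near-exponential-decay behaviour the paper simulates.

```python
# Linear-reservoir sketch for annual ice velocity:
#   v[t+1] = k * v[t] + c * b[t]
# k, c, the initial velocity, and the balance series are all invented.

k = 0.8      # recession factor: fraction of last year's velocity retained
c = 0.05     # response to mean specific balance, (m/yr) per (mm w.e.)

def step(v, b):
    """Advance velocity one year given accumulation-area balance b."""
    return k * v + c * b

v = 30.0                                           # initial velocity, m/yr
balances = [200, 150, 0, -100, -100, -50]          # mm w.e.: peak, then decline
series = []
for b in balances:
    v = step(v, b)
    series.append(round(v, 2))
print(series)
```

    With positive balances the velocity peaks, and once the driving term weakens the recession factor k produces the near-exponential decay reported for the 1980s-1990s.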

  4. Comparison of Linear and Non-Linear Regression Models to Estimate Leaf Area Index of Dryland Shrubs.

    NASA Astrophysics Data System (ADS)

    Dashti, H.; Glenn, N. F.; Ilangakoon, N. T.; Mitchell, J.; Dhakal, S.; Spaete, L.

    2015-12-01

    Leaf area index (LAI) is a key parameter in global ecosystem studies. LAI is considered a forcing variable in land surface processing models since ecosystem dynamics are highly correlated to LAI. In response to environmental limitations, plants in semiarid ecosystems have smaller leaf area, making accurate estimation of LAI by remote sensing a challenging issue. Optical remote sensing (400-2500 nm) techniques to estimate LAI are based either on radiative transfer models (RTMs) or statistical approaches. Considering the complex radiation field of dry ecosystems, simple 1-D RTMs lead to poor results, and on the other hand, inversion of more complex 3-D RTMs is a demanding task which requires the specification of many variables. A good alternative to physical approaches is using methods based on statistics. Similar to many natural phenomena, there is a non-linear relationship between LAI and top of canopy electromagnetic waves reflected to optical sensors. Non-linear regression models can better capture this relationship. However, given a small number of observations relative to the size of the feature space (n < p), non-linear models will not necessarily outperform the simpler linear models. In this study linear versus non-linear regression techniques were investigated to estimate LAI. Our study area is located in southwestern Idaho, Great Basin. Sagebrush (Artemisia tridentata spp) serves a critical role in maintaining the structure of this ecosystem. Using a leaf area meter (Accupar LP-80), LAI values were measured in the field. Linear Partial Least Square (PLS) regression and non-linear, tree-based Random Forest regression have been implemented to estimate the LAI of sagebrush from hyperspectral data (AVIRIS-ng) collected in late summer 2014. Cross validation of results indicates that PLS can provide comparable results to Random Forest.

  5. Optimal CH-47 AND C-130 Workload Balance

    DTIC Science & Technology

    2011-03-01

    … LINGO-based Model Development … Summary … Appendix A. LINGO-Based Model … scenario and network is the first step towards developing a mixed integer linear program in the LINGO® software environment. The program development …

  6. Development and validation of a general purpose linearization program for rigid aircraft models

    NASA Technical Reports Server (NTRS)

    Duke, E. L.; Antoniewicz, R. F.

    1985-01-01

    A FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft models is discussed. The program LINEAR numerically determines a linear systems model using nonlinear equations of motion and a user-supplied, nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model. Also, included in the report is a comparison of linear and nonlinear models for a high performance aircraft.

  7. Fitting a linear-linear piecewise growth mixture model with unknown knots: A comparison of two common approaches to inference.

    PubMed

    Kohli, Nidhi; Hughes, John; Wang, Chun; Zopluoglu, Cengiz; Davison, Mark L

    2015-06-01

    A linear-linear piecewise growth mixture model (PGMM) is appropriate for analyzing segmented (disjointed) change in individual behavior over time, where the data come from a mixture of 2 or more latent classes, and the underlying growth trajectories in the different segments of the developmental process within each latent class are linear. A PGMM allows the knot (change point), the time of transition from 1 phase (segment) to another, to be estimated (when it is not known a priori) along with the other model parameters. To assist researchers in deciding which estimation method is most advantageous for analyzing this kind of mixture data, the current research compares 2 popular approaches to inference for PGMMs: maximum likelihood (ML) via an expectation-maximization (EM) algorithm, and Markov chain Monte Carlo (MCMC) for Bayesian inference. Monte Carlo simulations were carried out to investigate and compare the ability of the 2 approaches to recover the true parameters in linear-linear PGMMs with unknown knots. The results show that MCMC for Bayesian inference outperformed ML via EM in nearly every simulation scenario. Real data examples are also presented, and the corresponding computer codes for model fitting are provided in the Appendix to aid practitioners who wish to apply this class of models.

  8. Performance Models for the Spike Banded Linear System Solver

    DOE PAGES

    Manguoglu, Murat; Saied, Faisal; Sameh, Ahmed; ...

    2011-01-01

    With availability of large-scale parallel platforms comprised of tens-of-thousands of processors and beyond, there is significant impetus for the development of scalable parallel sparse linear system solvers and preconditioners. An integral part of this design process is the development of performance models capable of predicting performance and providing accurate cost models for the solvers and preconditioners. There has been some work in the past on characterizing performance of the iterative solvers themselves. In this paper, we investigate the problem of characterizing performance and scalability of banded preconditioners. Recent work has demonstrated the superior convergence properties and robustness of banded preconditioners, compared to state-of-the-art ILU family of preconditioners as well as algebraic multigrid preconditioners. Furthermore, when used in conjunction with efficient banded solvers, banded preconditioners are capable of significantly faster time-to-solution. Our banded solver, the Truncated Spike algorithm, is specifically designed for parallel performance and tolerance to deep memory hierarchies. Its regular structure is also highly amenable to accurate performance characterization. Using these characteristics, we derive the following results in this paper: (i) we develop parallel formulations of the Truncated Spike solver, (ii) we develop a highly accurate pseudo-analytical parallel performance model for our solver, (iii) we show excellent prediction capabilities of our model – based on which we argue the high scalability of our solver. Our pseudo-analytical performance model is based on analytical performance characterization of each phase of our solver. These analytical models are then parameterized using actual runtime information on target platforms. An important consequence of our performance models is that they reveal underlying performance bottlenecks in both serial and parallel formulations. All of our results are validated…

  9. Linear models for sound from supersonic reacting mixing layers

    NASA Astrophysics Data System (ADS)

    Chary, P. Shivakanth; Samanta, Arnab

    2016-12-01

    We perform a linearized reduced-order modeling of the aeroacoustic sound sources in supersonic reacting mixing layers to explore their sensitivities to some of the flow parameters in radiating sound. Specifically, we investigate the role of outer modes as the effective flow compressibility is raised, when some of these are expected to dominate over the traditional Kelvin-Helmholtz (K-H) -type central mode. Although the outer modes are known to be of lesser importance in the near-field mixing, how they radiate to the far field is uncertain, and this is our focus. On keeping the flow compressibility fixed, the outer modes are realized via biasing the respective mean densities of the fast (oxidizer) or slow (fuel) side. Here the mean flows are laminar solutions of two-dimensional compressible boundary layers with an imposed composite (turbulent) spreading rate, which we show to significantly alter the growth of instability waves by saturating them earlier, similar to nonlinear calculations, achieved here via solving the linear parabolized stability equations. As the flow parameters are varied, instability of the slow modes is shown to be more sensitive to heat release, potentially exceeding equivalent central modes, as these modes yield relatively compact sound sources with lesser spreading of the mixing layer, when compared to the corresponding fast modes. In contrast, the radiated sound seems to be relatively unaffected when the mixture equivalence ratio is varied, except for a lean mixture which is shown to yield a pronounced effect on the slow mode radiation by reducing its modal growth.

  10. Fourth standard model family neutrino at future linear colliders

    SciTech Connect

    Ciftci, A.K.; Ciftci, R.; Sultansoy, S.

    2005-09-01

    It is known that flavor democracy favors the existence of the fourth standard model (SM) family. In order to give nonzero masses to the first three-family fermions, flavor democracy has to be slightly broken. A parametrization for democracy breaking, which gives the correct values for fundamental fermion masses and, at the same time, predicts quark and lepton Cabibbo-Kobayashi-Maskawa (CKM) matrices in good agreement with the experimental data, is proposed. The pair production of the fourth SM family Dirac (ν₄) and Majorana (N₁) neutrinos at future linear colliders with √s = 500 GeV, 1 TeV, and 3 TeV is considered. The cross section for the process e⁺e⁻ → ν₄ν₄ (N₁N₁) and the branching ratios for possible decay modes of both neutrinos are determined. The decays of the fourth family neutrinos into muon channels (ν₄(N₁) → μ±W∓) provide the cleanest signature at e⁺e⁻ colliders. Meanwhile, in our parametrization this channel is dominant. W bosons produced in decays of the fourth family neutrinos will be seen in the detector as either di-jets or isolated leptons. As an example, we consider the production of 200 GeV mass fourth family neutrinos at √s = 500 GeV linear colliders, taking into account di-muon plus four-jet events as signatures.

  11. Estimating population trends with a linear model: Technical comments

    USGS Publications Warehouse

    Sauer, John R.; Link, William A.; Royle, J. Andrew

    2004-01-01

    Controversy has sometimes arisen over whether there is a need to accommodate the limitations of survey design in estimating population change from the count data collected in bird surveys. Analyses of surveys such as the North American Breeding Bird Survey (BBS) can be quite complex; it is natural to ask if the complexity is necessary, or whether the statisticians have run amok. Bart et al. (2003) propose a very simple analysis involving nothing more complicated than simple linear regression, and contrast their approach with model-based procedures. We review the assumptions implicit to their proposed method, and document that these assumptions are unlikely to be valid for surveys such as the BBS. One fundamental limitation of a purely design-based approach is the absence of controls for factors that influence detection of birds at survey sites. We show that failure to model observer effects in survey data leads to substantial bias in estimation of population trends from BBS data for the 20 species that Bart et al. (2003) used as the basis of their simulations. Finally, we note that the simulations presented in Bart et al. (2003) do not provide a useful evaluation of their proposed method, nor do they provide a valid comparison to the estimating-equations alternative they consider.

  12. Non linear dynamics of flame cusps: from experiments to modeling

    NASA Astrophysics Data System (ADS)

    Almarcha, Christophe; Radisson, Basile; Al-Sarraf, Elias; Quinard, Joel; Villermaux, Emmanuel; Denet, Bruno; Joulin, Guy

    2016-11-01

    The propagation of premixed flames in a medium initially at rest exhibits the appearance and competition of elementary local singularities called cusps. We investigate this problem both experimentally and numerically. An analytical solution of the two-dimensional Michelson-Sivashinsky equation is obtained as a composition of pole solutions, which is compared with experimental flame fronts propagating between glass plates separated by a thin gap width. We demonstrate that the front dynamics can be reproduced numerically with good accuracy, from the linear stages of destabilization to its late time evolution, using this model equation. In particular, the model accounts for the experimentally observed steady distribution of distances between cusps, which is well described by a one-parameter Gamma distribution, reflecting the aggregation type of interaction between the cusps. A modification of the Michelson-Sivashinsky equation taking into account gravity makes it possible to reproduce some other special features of these fronts. Aix-Marseille Univ., IRPHE, UMR 7342 CNRS, Centrale Marseille, Technopole de Château Gombert, 49 rue F. Joliot Curie, 13384 Marseille Cedex 13, France.

  13. Linear System Models for Ultrasonic Imaging: Application to Signal Statistics

    PubMed Central

    Zemp, Roger J.; Abbey, Craig K.; Insana, Michael F.

    2009-01-01

    Linear equations for modeling echo signals from shift-variant systems forming ultrasonic B-mode, Doppler, and strain images are analyzed and extended. The approach is based on a solution to the homogeneous wave equation for random inhomogeneous media. When the system is shift-variant, the spatial sensitivity function—defined as a spatial weighting function that determines the scattering volume for a fixed point of time—has advantages over the point-spread function traditionally used to analyze ultrasound systems. Spatial sensitivity functions are necessary for determining statistical moments in the context of rigorous image quality assessment, and they are time-reversed copies of point-spread functions for shift variant systems. A criterion is proposed to assess the validity of a local shift-invariance assumption. The analysis reveals realistic situations in which in-phase signals are correlated to the corresponding quadrature signals, which has strong implications for assessing lesion detectability. Also revealed is an opportunity to enhance near- and far-field spatial resolution by matched filtering unfocused beams. The analysis connects several well-known approaches to modeling ultrasonic echo signals. PMID:12839176

  14. Formal modeling and verification of fractional order linear systems.

    PubMed

    Zhao, Chunna; Shi, Likun; Guan, Yong; Li, Xiaojuan; Shi, Zhiping

    2016-05-01

    This paper presents a formalization of a fractional order linear system in a higher-order logic (HOL) theorem proving system. Based on the formalization of the Grünwald-Letnikov (GL) definition, we formally specify and verify the linear and superposition properties of fractional order systems. The proof provides rigorous and solid underpinnings for verifying concrete fractional order linear control systems. Our implementation in HOL demonstrates the effectiveness of our approach in practical applications.
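
    As a numerical companion to the GL definition the paper formalizes: the derivative of order a is the limit of a binomial-weighted sum over past samples. The sketch below is an illustrative truncated discretization (not the HOL formalization); for f(t) = t and order 1/2 the exact value at t = 1 is 2/√π ≈ 1.1284.

```python
# Truncated Grunwald-Letnikov fractional derivative:
#   D^a f(t)  ~  h^(-a) * sum_{k=0..n} (-1)^k C(a,k) f(t - k h),  h = t/n.
# The weights w_k = (-1)^k C(a,k) follow the recurrence
#   w_0 = 1,  w_{k+1} = w_k * (k - a) / (k + 1).

def gl_derivative(f, t, a, n=2000):
    """Approximate the order-a GL derivative of f at t using n+1 terms."""
    h = t / n
    total, w = 0.0, 1.0
    for k in range(n + 1):
        total += w * f(t - k * h)
        w *= (k - a) / (k + 1)
    return total / h ** a

print(round(gl_derivative(lambda t: t, 1.0, 0.5), 3))
```

    For integer a the weights terminate (for a = 1 only w_0 = 1 and w_1 = -1 survive), so the scheme degenerates to the ordinary backward difference, which is one of the consistency properties a formal verification would capture.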

  15. Predicting recycling behaviour: Comparison of a linear regression model and a fuzzy logic model.

    PubMed

    Vesely, Stepan; Klöckner, Christian A; Dohnal, Mirko

    2016-03-01

    In this paper we demonstrate that fuzzy logic can provide a better tool for predicting recycling behaviour than the customarily used linear regression. To show this, we take a set of empirical data on recycling behaviour (N=664), which we randomly divide into two halves. The first half is used to estimate a linear regression model of recycling behaviour, and to develop a fuzzy logic model of recycling behaviour. As the first comparison, the fit of both models to the data included in estimation of the models (N=332) is evaluated. As the second comparison, predictive accuracy of both models for "new" cases (hold-out data not included in building the models, N=332) is assessed. In both cases, the fuzzy logic model significantly outperforms the regression model in terms of fit. To conclude, when accurate predictions of recycling and possibly other environmental behaviours are needed, fuzzy logic modelling seems to be a promising technique.
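The linear-regression half of the split-sample procedure described above can be sketched in a few lines of numpy; the data below are synthetic stand-ins, not the recycling survey:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: two predictors and a noisy linear response.
X = rng.normal(size=(664, 2))
y = 1.5 * X[:, 0] - 0.7 * X[:, 1] + rng.normal(scale=0.5, size=664)

# Random split into estimation and hold-out halves, as in the study design.
idx = rng.permutation(664)
train, test = idx[:332], idx[332:]

# Fit ordinary least squares on the estimation half (with an intercept column).
A = np.column_stack([np.ones(332), X[train]])
coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)

# Evaluate predictive accuracy (RMSE) on the hold-out half.
A_test = np.column_stack([np.ones(332), X[test]])
rmse = np.sqrt(np.mean((A_test @ coef - y[test]) ** 2))
print(round(float(rmse), 3))
```

A fuzzy-rule model would be evaluated on the same hold-out half, so the two RMSE (or fit) figures are directly comparable.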

  16. Modeling Seismoacoustic Propagation from the Nonlinear to Linear Regimes

    NASA Astrophysics Data System (ADS)

    Chael, E. P.; Preston, L. A.

    2015-12-01

    Explosions at shallow depth-of-burial can cause nonlinear material response, such as fracturing and spalling, up to the ground surface above the shot point. These motions at the surface affect the generation of acoustic waves into the atmosphere, as well as the surface-reflected compressional and shear waves. Standard source scaling models for explosions do not account for such nonlinear interactions above the shot, while some recent studies introduce a non-isotropic addition to the moment tensor to represent them (e.g., Patton and Taylor, 2011). We are using Sandia's CTH shock physics code to model the material response in the vicinity of underground explosions, up to the overlying ground surface. Across a boundary where the motions have decayed to nearly linear behavior, we couple the signals from CTH into a linear finite-difference (FD) seismoacoustic code to efficiently propagate the wavefields to greater distances. If we assume only one-way transmission of energy through the boundary, then the particle velocities there suffice as inputs for the FD code, simplifying the specification of the boundary condition. The FD algorithm we use applies the wave equations for velocity in an elastic medium and pressure in an acoustic one, and matches the normal traction and displacement across the interface. Initially we are developing and testing a 2D, axisymmetric seismoacoustic routine; CTH can use this geometry in the source region as well. The Source Physics Experiment (SPE) in Nevada has collected seismic and acoustic data on numerous explosions at different scaled depths, providing an excellent testbed for investigating explosion phenomena (Snelson et al., 2013). We present simulations for shots SPE-4' and SPE-5, illustrating the importance of nonlinear behavior up to the ground surface. Our goal is to develop the capability for accurately predicting the relative signal strengths in the air and ground for a given combination of source yield and depth. 

  17. Amplitude relations in non-linear sigma model

    NASA Astrophysics Data System (ADS)

    Chen, Gang; Du, Yi-Jian

    2014-01-01

In this paper, we investigate tree-level scattering amplitude relations in the U(N) non-linear sigma model, working in the Cayley parametrization. As was shown in the recent works [23,24], both on-shell amplitudes and off-shell currents with odd numbers of points vanish under the Cayley parametrization. We prove the off-shell U(1) identity and fundamental BCJ relation for even-point currents. By taking the on-shell limits of the off-shell relations, we show that the color-ordered tree amplitudes with even points satisfy the U(1)-decoupling identity and the fundamental BCJ relation, which take the same form as in Yang-Mills theory. We further show that all the on-shell general KK and BCJ relations, as well as the minimal-basis expansion, are also satisfied by color-ordered tree amplitudes. As a consequence of the relations among color-ordered amplitudes, the total 2m-point tree amplitudes satisfy the DDM form of color decomposition as well as the KLT relation.

  18. Process Setting through General Linear Model and Response Surface Method

    NASA Astrophysics Data System (ADS)

    Senjuntichai, Angsumalin

    2010-10-01

The objective of this study is to improve the efficiency of the flow-wrap packaging process in the soap industry through the reduction of defectives. At the 95% confidence level, regression analysis shows that the sealing temperature and the temperatures of the upper and lower crimpers are the significant factors for the flow-wrap process with respect to the number/percentage of defectives. Twenty-seven experiments were designed and performed according to three levels of each controllable factor. With the general linear model (GLM), the suggested values for the sealing temperature and the temperatures of the upper and lower crimpers are 185, 85 and 85° C, respectively, while the response surface method (RSM) gives the optimal process conditions at 186, 89 and 88° C. Because the two methods make different assumptions about the relationship between the percentage of defectives and the three temperature parameters, their suggested conditions differ slightly. The estimated percentage of defectives, 5.51% under the GLM condition and 4.62% under the RSM condition, are not significantly different. At the 95% confidence level, however, the percentage of defectives under the RSM condition can be lower, to approximately 2.16%, than under the GLM condition, in accordance with its wider variation. Lastly, the percentages of defectives under the conditions suggested by GLM and RSM are reduced by 55.81% and 62.95%, respectively.

  19. Markov Boundary Discovery with Ridge Regularized Linear Models

    PubMed Central

    Visweswaran, Shyam

    2016-01-01

    Ridge regularized linear models (RRLMs), such as ridge regression and the SVM, are a popular group of methods that are used in conjunction with coefficient hypothesis testing to discover explanatory variables with a significant multivariate association to a response. However, many investigators are reluctant to draw causal interpretations of the selected variables due to the incomplete knowledge of the capabilities of RRLMs in causal inference. Under reasonable assumptions, we show that a modified form of RRLMs can get “very close” to identifying a subset of the Markov boundary by providing a worst-case bound on the space of possible solutions. The results hold for any convex loss, even when the underlying functional relationship is nonlinear, and the solution is not unique. Our approach combines ideas in Markov boundary and sufficient dimension reduction theory. Experimental results show that the modified RRLMs are competitive against state-of-the-art algorithms in discovering part of the Markov boundary from gene expression data. PMID:27170915
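The ridge estimator underlying RRLMs has a closed form; a minimal numpy sketch on synthetic data (not the gene-expression setting of the paper), in which the coefficients of irrelevant predictors are shrunk toward zero:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
# Predictors 1 and 3 are truly irrelevant (zero coefficients).
y = X @ np.array([2.0, 0.0, -1.0, 0.0, 0.5]) + rng.normal(scale=0.1, size=100)

lam = 1.0  # ridge penalty strength
# Closed-form ridge solution: w = (X'X + lam*I)^{-1} X'y
w = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)

print(np.round(w, 2))
```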

  20. Modelling and Inverse-Modelling: Experiences with O.D.E. Linear Systems in Engineering Courses

    ERIC Educational Resources Information Center

    Martinez-Luaces, Victor

    2009-01-01

    In engineering careers courses, differential equations are widely used to solve problems concerned with modelling. In particular, ordinary differential equations (O.D.E.) linear systems appear regularly in Chemical Engineering, Food Technology Engineering and Environmental Engineering courses, due to the usefulness in modelling chemical kinetics,…

  1. Analyzing Measurement Models of Latent Variables through Multilevel Confirmatory Factor Analysis and Hierarchical Linear Modeling Approaches.

    ERIC Educational Resources Information Center

    Li, Fuzhong; Duncan, Terry E.; Harmer, Peter; Acock, Alan; Stoolmiller, Mike

    1998-01-01

    Discusses the utility of multilevel confirmatory factor analysis and hierarchical linear modeling methods in testing measurement models in which the underlying attribute may vary as a function of levels of observation. A real dataset is used to illustrate the two approaches and their comparability. (SLD)

  2. Solving large double digestion problems for DNA restriction mapping by using branch-and-bound integer linear programming.

    PubMed

    Wu, Z; Zhang, Y

    2008-01-01

The double digestion problem for DNA restriction mapping has been proved to be NP-complete and is intractable when the number of DNA fragments becomes large. Several approaches to the problem have been tested and proved to be effective only for small problems. In this paper, we formulate the problem as a mixed-integer linear program (MIP), following (Waterman, 1995) in a slightly different form. With this formulation and state-of-the-art integer programming techniques, we can solve randomly generated problems whose search space sizes are many orders of magnitude larger than previously reported test sizes.
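The MILP machinery is generic; as a toy illustration of solving a small mixed-integer linear program with an off-the-shelf branch-and-bound solver (a knapsack-style instance, not the double-digestion formulation itself), SciPy's `milp` can be used:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Toy MILP (knapsack-style): maximize 3*x0 + 4*x1 + 2*x2
# subject to 2*x0 + 3*x1 + x2 <= 4, all variables binary.
# milp minimizes, so the objective is negated.
c = np.array([-3.0, -4.0, -2.0])
capacity = LinearConstraint(np.array([[2.0, 3.0, 1.0]]), ub=4.0)

res = milp(c, constraints=[capacity],
           integrality=np.ones(3),     # all variables integer-valued
           bounds=Bounds(0, 1))        # 0/1 variables

print(res.x, -res.fun)   # optimal selection [0, 1, 1] with value 6
```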

  3. A log-linear multidimensional Rasch model for capture-recapture.

    PubMed

    Pelle, E; Hessen, D J; van der Heijden, P G M

    2016-02-20

    In this paper, a log-linear multidimensional Rasch model is proposed for capture-recapture analysis of registration data. In the model, heterogeneity of capture probabilities is taken into account, and registrations are viewed as dichotomously scored indicators of one or more latent variables that can account for correlations among registrations. It is shown how the probability of a generic capture profile is expressed under the log-linear multidimensional Rasch model and how the parameters of the traditional log-linear model are derived from those of the log-linear multidimensional Rasch model. Finally, an application of the model to neural tube defects data is presented.

  4. Modeling of driver's collision avoidance maneuver based on controller switching model.

    PubMed

    Kim, Jong-Hae; Hayakawa, Soichiro; Suzuki, Tatsuya; Hayashi, Koji; Okuma, Shigeru; Tsuchida, Nuio; Shimizu, Masayuki; Kido, Shigeyuki

    2005-12-01

This paper presents a modeling strategy for human driving behavior based on a controller switching model, focusing on the driver's collision avoidance maneuver. The driving data are collected using a three-dimensional (3-D) driving simulator based on the CAVE Automatic Virtual Environment (CAVE), which provides a stereoscopic immersive virtual environment. In our modeling, the control scenario of the human driver, that is, the mapping from the driver's sensory information to the driver's operations such as acceleration, braking, and steering, is expressed by a Piecewise Polynomial (PWP) model. Since the PWP model includes both continuous behaviors given by polynomials and discrete logical conditions, it can be regarded as a class of Hybrid Dynamical System (HDS). The identification problem for the PWP model is formulated as a Mixed Integer Linear Program (MILP) by transforming the switching conditions into binary variables. From the obtained results, it is found that the driver appropriately switches the "control law" according to the sensory information. In addition, the driving characteristics of beginner and expert drivers are compared and discussed. These results enable us to capture not only the physical meaning of the driving skill but also the decision-making aspect (switching conditions) of the driver's collision avoidance maneuver.
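Transforming a switching condition into a binary variable is typically done with a big-M construction; a minimal sketch using SciPy's `milp` (an illustrative toy constraint, not the driving-behavior model):

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Maximize x + 9*z where z is a binary "mode" variable. The big-M trick
# x <= 2 + M*(1 - z), with M = 8, enforces x <= 2 only when z = 1.
# Rearranged for the solver: x + 8*z <= 10. milp minimizes, so negate.
c = np.array([-1.0, -9.0])
big_m = LinearConstraint(np.array([[1.0, 8.0]]), ub=10.0)

res = milp(c, constraints=[big_m],
           integrality=np.array([0, 1]),    # x continuous, z binary
           bounds=Bounds([0, 0], [10, 1]))

print(res.x, -res.fun)   # mode z = 1 wins: x = 2, objective 11
```

The same device, applied once per switching condition, turns a hybrid identification problem into a single MILP.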

  5. Comparison of linear and non-linear blade model predictions in Bladed to measurement data from GE 6MW wind turbine

    NASA Astrophysics Data System (ADS)

    Collier, W.; Milian Sanz, J.

    2016-09-01

The length and flexibility of wind turbine blades are increasing over time. Typically, the dynamic response of the blades is analysed using linear models of blade deflection, enhanced by various ad-hoc non-linear correction models. For blades undergoing large deflections, the small-deflection assumption inherent to linear models becomes less valid. It has previously been demonstrated that linear and non-linear blade models can show significantly different blade response, particularly for blade torsional deflection, leading to load prediction differences. There is a need to evaluate how load predictions from these two approaches compare to measurement data from the field. In this paper, time domain simulations in turbulent wind are carried out using the aero-elastic code Bladed with linear and non-linear blade deflection models. The simulated turbine blade loads and deflections are compared to measurement data from an onshore prototype of the GE 6MW Haliade turbine, which features 73.5m long LM blades. Both linear and non-linear blade models show a good match to the measured turbine loads and blade deflections. Only the blade loads differ significantly between the two models, with other turbine loads not strongly affected. The non-linear blade model gives a better match to the measured blade root flapwise damage equivalent load, suggesting that the flapwise dynamic behaviour is better captured by the non-linear blade model. Conversely, the linear blade model shows a better match to measurements in some areas, such as the blade edgewise damage equivalent load.

  6. Mixed-Integer Nonconvex Quadratic Optimization Relaxations and Performance Analysis

    DTIC Science & Technology

    2016-10-11

[Garbled report front matter; abstract not recoverable. Cited work includes "Sparse portfolio deleveraging with market impact" (J. Chen, L. Feng, J. Peng, Y. Ye), Operations Research, 62(1) (2014) 195-206, and "Space tensor conic programming" (L. Qi and Y. Ye), Computational Optimization...; the project was first runner-up for the Morgan Stanley 2012 Prize for Excellence in Financial Markets.]

  7. Mixed Integer Programming and Heuristic Scheduling for Space Communication Networks

    NASA Technical Reports Server (NTRS)

    Lee, Charles H.; Cheung, Kar-Ming

    2012-01-01

    In this paper, we propose to solve the constrained optimization problem in two phases. The first phase uses heuristic methods such as the ant colony method, particle swarming optimization, and genetic algorithm to seek a near optimal solution among a list of feasible initial populations. The final optimal solution can be found by using the solution of the first phase as the initial condition to the SQP algorithm. We demonstrate the above problem formulation and optimization schemes with a large-scale network that includes the DSN ground stations and a number of spacecraft of deep space missions.
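The two-phase scheme (a cheap global heuristic seeding a local SQP refinement) can be sketched with random search feeding SciPy's SLSQP solver; the objective and constraint below are hypothetical stand-ins for the scheduling model, and plain random search stands in for the ant colony, particle swarm, and genetic heuristics named above:

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in constrained problem: minimize a multimodal objective
# subject to x0 + x1 >= 1, within box bounds.
def objective(x):
    return np.sin(3 * x[0]) + (x[0] - 0.7) ** 2 + (x[1] - 0.3) ** 2

bounds = [(-2, 2), (-2, 2)]
cons = [{"type": "ineq", "fun": lambda x: x[0] + x[1] - 1.0}]

# Phase 1: crude random search over feasible candidates (heuristic stand-in).
rng = np.random.default_rng(42)
candidates = rng.uniform(-2, 2, size=(200, 2))
feasible = candidates[candidates.sum(axis=1) >= 1.0]
best = feasible[np.argmin([objective(x) for x in feasible])]

# Phase 2: polish the heuristic solution with SQP (SLSQP).
res = minimize(objective, best, method="SLSQP", bounds=bounds, constraints=cons)
print(res.x, res.fun)
```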

  8. The Simplest Complete Model of Choice Response Time: Linear Ballistic Accumulation

    ERIC Educational Resources Information Center

    Brown, Scott D.; Heathcote, Andrew

    2008-01-01

    We propose a linear ballistic accumulator (LBA) model of decision making and reaction time. The LBA is simpler than other models of choice response time, with independent accumulators that race towards a common response threshold. Activity in the accumulators increases in a linear and deterministic manner. The simplicity of the model allows…

  9. Linear Friction Welding Process Model for Carpenter Custom 465 Precipitation-Hardened Martensitic Stainless Steel

    DTIC Science & Technology

    2014-04-11

[Garbled report front matter; abstract not recoverable. The report uses Carpenter Custom 465 precipitation-hardened martensitic stainless steel to develop a linear friction welding (LFW) process model for this material. Keywords: Carpenter Custom 465 precipitation-hardened martensitic stainless steel, linear friction welding, process modeling.]

  10. Effect of linear and non-linear blade modelling techniques on simulated fatigue and extreme loads using Bladed

    NASA Astrophysics Data System (ADS)

    Beardsell, Alec; Collier, William; Han, Tao

    2016-09-01

There is a trend in the wind industry towards ever larger and more flexible turbine blades. Blade tip deflections in modern blades now commonly exceed 10% of blade length. Historically, the dynamic response of wind turbine blades has been analysed using linear models of blade deflection which include the assumption of small deflections. For modern flexible blades, this assumption is becoming less valid. In order to continue to simulate dynamic turbine performance accurately, routine use of non-linear models of blade deflection may be required. This can be achieved by representing the blade as a connected series of individual flexible linear bodies - referred to in this paper as the multi-part approach. In this paper, Bladed is used to compare load predictions using single-part and multi-part blade models for several turbines. The study examines the impact on fatigue and extreme loads and blade deflection through reduced sets of load calculations based on IEC 61400-1 ed. 3. Damage equivalent load changes of up to 16% and extreme load changes of up to 29% are observed at some turbine load locations. It is found that there is no general pattern in the loading differences observed between single-part and multi-part blade models. Rather, changes in fatigue and extreme loads with a multi-part blade model depend on the characteristics of the individual turbine and blade. Key underlying causes of damage equivalent load change are identified as differences in edgewise-torsional coupling between the multi-part and single-part models, and increased edgewise rotor mode damping in the multi-part model. Similarly, a causal link is identified between torsional blade dynamics and changes in ultimate load results.

  11. Waste management under multiple complexities: Inexact piecewise-linearization-based fuzzy flexible programming

    SciTech Connect

    Sun Wei; Huang, Guo H.; Lv Ying; Li Gongchen

    2012-06-15

Highlights: • Inexact piecewise-linearization-based fuzzy flexible programming is proposed. • It is the first application to waste management under multiple complexities. • It tackles nonlinear economies-of-scale effects in interval-parameter constraints. • It estimates costs more accurately than the linear-regression-based model. • Uncertainties are decreased and more satisfactory interval solutions are obtained. - Abstract: To tackle nonlinear economies-of-scale (EOS) effects in interval-parameter constraints for a representative waste management problem, an inexact piecewise-linearization-based fuzzy flexible programming (IPFP) model is developed. In IPFP, interval parameters for waste amounts and transportation/operation costs can be quantified; aspiration levels for net system costs, as well as tolerance intervals for both capacities of waste treatment facilities and waste generation rates, can be reflected; and the nonlinear EOS effects transformed from the objective function to the constraints can be approximated. An interactive algorithm is proposed for solving the IPFP model, which in nature is an interval-parameter mixed-integer quadratically constrained programming model. To demonstrate IPFP's advantages, two alternative models are developed to compare their performances. One is a conventional linear-regression-based inexact fuzzy programming model (IPFP2) and the other is an IPFP model with all right-hand sides of fuzzy constraints being the corresponding interval numbers (IPFP3). The comparison results between IPFP and IPFP2 indicate that the optimized waste amounts would have similar patterns in both models. However, when dealing with EOS effects in constraints, IPFP2 may underestimate the net system costs while IPFP can estimate the costs more accurately. The comparison results between IPFP and IPFP3 indicate
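An economies-of-scale cost is a concave curve, and piecewise linearization replaces it with chords between breakpoints so the optimization model stays linear; a numpy sketch with a hypothetical cost exponent (not the paper's data):

```python
import numpy as np

# Hypothetical concave cost with economies of scale: cost(x) = 50 * x**0.8
def cost(x):
    return 50.0 * x ** 0.8

# Piecewise-linear approximation on breakpoints; chords of a concave
# function lie below it, so the approximation never overestimates.
breakpoints = np.linspace(0.0, 100.0, 11)     # 10 segments
values = cost(breakpoints)

def cost_pwl(x):
    return np.interp(x, breakpoints, values)  # linear interpolation

x = np.array([5.0, 37.0, 84.0])
print(np.round(cost(x), 1), np.round(cost_pwl(x), 1))
```

Inside a MILP, each segment would get a weight (or binary selector) variable; the chord property above is why the approximation behaves predictably in the constraints.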

  12. Linear programming model to develop geodiversity map using utility theory

    NASA Astrophysics Data System (ADS)

    Sepehr, Adel

    2015-04-01

In this article, the classification and mapping of geodiversity based on a quantitative methodology was accomplished using linear programming, the central idea being that geosites and geomorphosites, as the main indicators of geodiversity, can be evaluated by utility theory. A linear programming method was applied for geodiversity mapping over Khorasan-razavi province, located in northeastern Iran. To this end, the main criteria for distinguishing geodiversity potential in the studied area were rock type (lithology), fault positions (tectonic processes), karst areas (dynamic processes), the frequency of aeolian landforms, and fluvial surface forms. These parameters were investigated using thematic maps, including geology, topography and geomorphology maps at scales of 1:100'000, 1:50'000 and 1:250'000, satellite imagery from SPOT and ETM+ (Landsat 7), and direct field operations. The geological thematic layer was simplified from the original map using a practical lithologic criterion based on a primary genetic classification of rocks into metamorphic, igneous and sedimentary. The geomorphology map was produced using a 30 m DEM extracted from ASTER data, geology maps and Google Earth images. The geology map captured tectonic status, while the geomorphology map indicated dynamic processes and landforms (karst, aeolian and fluvial). Then, following utility theory, we formulated a linear program to classify the degree of geodiversity in the studied area based on the geology/morphology parameters. The method consisted of maximizing a linear geodiversity function subject to constraints in the form of linear equations. The results of this research indicated three classes of geodiversity potential: low, medium and high. Geodiversity potential was highest in the karstic areas and aeolian landscapes. The utility-theory approach also reduced the uncertainty of the evaluations.
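The utility-maximizing core of such a method is an ordinary linear program; a sketch with SciPy's `linprog`, using hypothetical criterion weights and constraints (not the study's actual parameters):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical utility weights for three geodiversity criteria
# (lithology, tectonics, landform dynamics).
utility = np.array([0.5, 0.3, 0.2])

# Constraints: criterion scores x are normalized (sum <= 1) and
# lithology may not dominate (x0 <= 0.6). linprog minimizes, so negate.
res = linprog(c=-utility,
              A_ub=[[1.0, 1.0, 1.0], [1.0, 0.0, 0.0]],
              b_ub=[1.0, 0.6],
              bounds=[(0, 1)] * 3)

print(res.x, -res.fun)   # maximum utility 0.42 at x = [0.6, 0.4, 0.0]
```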

  13. A general approach to mixed effects modeling of residual variances in generalized linear mixed models

    PubMed Central

    Kizilkaya, Kadir; Tempelman, Robert J

    2005-01-01

    We propose a general Bayesian approach to heteroskedastic error modeling for generalized linear mixed models (GLMM) in which linked functions of conditional means and residual variances are specified as separate linear combinations of fixed and random effects. We focus on the linear mixed model (LMM) analysis of birth weight (BW) and the cumulative probit mixed model (CPMM) analysis of calving ease (CE). The deviance information criterion (DIC) was demonstrated to be useful in correctly choosing between homoskedastic and heteroskedastic error GLMM for both traits when data was generated according to a mixed model specification for both location parameters and residual variances. Heteroskedastic error LMM and CPMM were fitted, respectively, to BW and CE data on 8847 Italian Piemontese first parity dams in which residual variances were modeled as functions of fixed calf sex and random herd effects. The posterior mean residual variance for male calves was over 40% greater than that for female calves for both traits. Also, the posterior means of the standard deviation of the herd-specific variance ratios (relative to a unitary baseline) were estimated to be 0.60 ± 0.09 for BW and 0.74 ± 0.14 for CE. For both traits, the heteroskedastic error LMM and CPMM were chosen over their homoskedastic error counterparts based on DIC values. PMID:15588567

  14. Results and Comparison from the SAM Linear Fresnel Technology Performance Model: Preprint

    SciTech Connect

    Wagner, M. J.

    2012-04-01

    This paper presents the new Linear Fresnel technology performance model in NREL's System Advisor Model. The model predicts the financial and technical performance of direct-steam-generation Linear Fresnel power plants, and can be used to analyze a range of system configurations. This paper presents a brief discussion of the model formulation and motivation, and provides extensive discussion of the model performance and financial results. The Linear Fresnel technology is also compared to other concentrating solar power technologies in both qualitative and quantitative measures. The Linear Fresnel model - developed in conjunction with the Electric Power Research Institute - provides users with the ability to model a variety of solar field layouts, fossil backup configurations, thermal receiver designs, and steam generation conditions. This flexibility aims to encompass current market solutions for the DSG Linear Fresnel technology, which is seeing increasing exposure in fossil plant augmentation and stand-alone power generation applications.

  15. Linear relaxation in large two-dimensional Ising models

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Wang, F.

    2016-02-01

Critical dynamics in two-dimensional Ising lattices up to 2048 × 2048 are simulated on field-programmable-gate-array-based computing devices. Linear relaxation times are measured from extremely long Monte Carlo simulations. The longest simulation has 7.1 × 10^16 spin updates, which would take over 37 years to simulate on a general purpose computer. The linear relaxation time of the Ising lattices is found to follow the dynamic scaling law for correlation lengths as long as 2048. The dynamic exponent z of the system is found to be 2.179(12), which is consistent with previous studies of Ising lattices with shorter correlation lengths. It is also found that Monte Carlo simulations of critical dynamics in Ising lattices larger than 512 × 512 are very sensitive to the statistical correlations between pseudorandom numbers, making it even more difficult to study such large systems.
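The Metropolis dynamics behind such relaxation measurements can be shown in miniature (a small numpy lattice near the 2D critical temperature, nothing like the FPGA-scale runs described above):

```python
import numpy as np

rng = np.random.default_rng(7)
L, T = 16, 2.269            # lattice size; T near the 2D critical temperature
spins = rng.choice([-1, 1], size=(L, L))

def sweep(spins):
    """One Metropolis sweep: attempt L*L single-spin flips."""
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        # Energy change from flipping spin (i, j) with periodic neighbors.
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

for _ in range(100):
    sweep(spins)
print(abs(spins.mean()))    # magnetization per spin
```

Relaxation-time measurements track how such observables decorrelate over sweeps; the sensitivity to pseudorandom-number correlations noted above enters through `rng`.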

  16. AN ADA LINEAR ALGEBRA PACKAGE MODELED AFTER HAL/S

    NASA Technical Reports Server (NTRS)

    Klumpp, A. R.

    1994-01-01

This package extends the Ada programming language to include linear algebra capabilities similar to those of the HAL/S programming language. The package is designed for avionics applications such as Space Station flight software. In addition to the HAL/S built-in functions, the package incorporates the quaternion functions used in the Shuttle and Galileo projects, and routines from LINPACK that solve systems of equations involving general square matrices. Language conventions in this package follow those of HAL/S to the maximum extent practical and minimize the effort required for writing new avionics software and translating existing software into Ada. Valid numeric types in this package include scalar, vector, matrix, and quaternion declarations. (Quaternions are four-component vectors used to represent motion between two coordinate frames.) Single-precision and double-precision floating point arithmetic is available in addition to the standard double-precision integer manipulation. Infix operators are used instead of function calls to define dot products, cross products, quaternion products, and mixed scalar-vector, scalar-matrix, and vector-matrix products. The package contains two generic programs: one for floating point, and one for integer. The actual component type is passed as a formal parameter to the generic linear algebra package. The procedures for solving systems of linear equations defined by general matrices include GEFA, GECO, GESL, and GEDI. The HAL/S functions include ABVAL, UNIT, TRACE, DET, INVERSE, TRANSPOSE, GET, PUT, FETCH, PLACE, and IDENTITY. This package is written in Ada (Version 1.2) for batch execution and is machine independent. The linear algebra software depends on nothing outside the Ada language except for a call to a square root function for floating point scalars (such as SQRT in the DEC VAX MATHLIB library). This program was developed in 1989, and is a copyrighted work with all copyright vested in NASA.
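A quaternion product exposed through an infix operator can be mirrored in Python by overloading `__mul__`; this sketch uses the standard Hamilton convention with the scalar part first, which may differ from the component ordering the HAL/S-style package actually uses:

```python
class Quaternion:
    """Hamilton quaternion w + x*i + y*j + z*k (scalar part first)."""
    def __init__(self, w, x, y, z):
        self.w, self.x, self.y, self.z = w, x, y, z

    def __mul__(self, q):
        # Hamilton product; composing two rotation quaternions
        # gives the quaternion of the composed rotation.
        return Quaternion(
            self.w*q.w - self.x*q.x - self.y*q.y - self.z*q.z,
            self.w*q.x + self.x*q.w + self.y*q.z - self.z*q.y,
            self.w*q.y - self.x*q.z + self.y*q.w + self.z*q.x,
            self.w*q.z + self.x*q.y - self.y*q.x + self.z*q.w)

# i * j = k under the Hamilton convention.
i = Quaternion(0, 1, 0, 0)
j = Quaternion(0, 0, 1, 0)
k = i * j
print(k.w, k.x, k.y, k.z)   # 0 0 0 1
```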

  17. Quantum criticality of the two-channel pseudogap Anderson model: universal scaling in linear and non-linear conductance.

    PubMed

    Wu, Tsan-Pei; Wang, Xiao-Qun; Guo, Guang-Yu; Anders, Frithjof; Chung, Chung-Hou

    2016-05-05

The quantum criticality of the two-lead two-channel pseudogap Anderson impurity model is studied. Based on the non-crossing approximation (NCA) and numerical renormalization group (NRG) approaches, we calculate both the linear and nonlinear conductance of the model at finite temperatures with a voltage bias and a power-law vanishing conduction electron density of states, ρ_c(ω) ∝ |ω − μ_F|^r (0 < r < 1), near the Fermi energy μ_F. At a fixed lead-impurity hybridization, a quantum phase transition from the two-channel Kondo (2CK) to the local moment (LM) phase is observed with increasing r from r = 0 to r = r_c < 1. Surprisingly, in the 2CK phase, power-law scalings different from the well-known [Formula: see text] or [Formula: see text] form are found. Moreover, novel power-law scalings in the conductances at the 2CK-LM quantum critical point are identified. Clear distinctions are found in the critical exponents between the linear and non-linear conductance at criticality. The implications of these two distinct quantum critical properties for non-equilibrium quantum criticality in general are discussed.

  18. Optimal Scaling of Interaction Effects in Generalized Linear Models

    ERIC Educational Resources Information Center

    van Rosmalen, Joost; Koning, Alex J.; Groenen, Patrick J. F.

    2009-01-01

    Multiplicative interaction models, such as Goodman's (1981) RC(M) association models, can be a useful tool for analyzing the content of interaction effects. However, most models for interaction effects are suitable only for data sets with two or three predictor variables. Here, we discuss an optimal scaling model for analyzing the content of…

  19. Modeling thermal sensation in a Mediterranean climate-a comparison of linear and ordinal models.

    PubMed

    Pantavou, Katerina; Lykoudis, Spyridon

    2014-08-01

    A simple thermo-physiological model of outdoor thermal sensation adjusted with psychological factors is developed aiming to predict thermal sensation in Mediterranean climates. Microclimatic measurements simultaneously with interviews on personal and psychological conditions were carried out in a square, a street canyon and a coastal location of the greater urban area of Athens, Greece. Multiple linear and ordinal regression were applied in order to estimate thermal sensation making allowance for all the recorded parameters or specific, empirically selected, subsets producing so-called extensive and empirical models, respectively. Meteorological, thermo-physiological and overall models - considering psychological factors as well - were developed. Predictions were improved when personal and psychological factors were taken into account as compared to meteorological models. The model based on ordinal regression reproduced extreme values of thermal sensation vote more adequately than the linear regression one, while the empirical model produced satisfactory results in relation to the extensive model. The effects of adaptation and expectation on thermal sensation vote were introduced in the models by means of the exposure time, season and preference related to air temperature and irradiation. The assessment of thermal sensation could be a useful criterion in decision making regarding public health, outdoor spaces planning and tourism.

  20. Modeling thermal sensation in a Mediterranean climate—a comparison of linear and ordinal models

    NASA Astrophysics Data System (ADS)

    Pantavou, Katerina; Lykoudis, Spyridon

    2014-08-01

    A simple thermo-physiological model of outdoor thermal sensation, adjusted with psychological factors, is developed with the aim of predicting thermal sensation in Mediterranean climates. Microclimatic measurements were carried out, simultaneously with interviews on personal and psychological conditions, in a square, a street canyon and a coastal location of the greater urban area of Athens, Greece. Multiple linear and ordinal regression were applied to estimate thermal sensation, making allowance for all the recorded parameters or for specific, empirically selected subsets, producing so-called extensive and empirical models, respectively. Meteorological, thermo-physiological and overall models - considering psychological factors as well - were developed. Predictions improved when personal and psychological factors were taken into account, as compared to the meteorological models. The model based on ordinal regression reproduced extreme values of the thermal sensation vote more adequately than the linear regression one, while the empirical model produced satisfactory results in relation to the extensive model. The effects of adaptation and expectation on the thermal sensation vote were introduced in the models by means of the exposure time, season and preference related to air temperature and irradiation. The assessment of thermal sensation could be a useful criterion in decision making regarding public health, outdoor space planning and tourism.

  1. Model reference adaptive control for linear time varying and nonlinear systems

    NASA Technical Reports Server (NTRS)

    Abida, L.; Kaufman, H.

    1982-01-01

    Model reference adaptive control is applied to linear time varying systems and to nonlinear systems amenable to virtual linearization. Asymptotic stability is guaranteed even if the perfect model following conditions do not hold, provided that some sufficient conditions are satisfied. Simulations show the scheme to be capable of effectively controlling certain nonlinear systems.

  2. Item Purification in Differential Item Functioning Using Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Liu, Qian

    2011-01-01

    For this dissertation, four item purification procedures were implemented onto the generalized linear mixed model for differential item functioning (DIF) analysis, and the performance of these item purification procedures was investigated through a series of simulations. Among the four procedures, forward and generalized linear mixed model (GLMM)…

  3. Frequency Response of Synthetic Vocal Fold Models with Linear and Nonlinear Material Properties

    ERIC Educational Resources Information Center

    Shaw, Stephanie M.; Thomson, Scott L.; Dromey, Christopher; Smith, Simeon

    2012-01-01

    Purpose: The purpose of this study was to create synthetic vocal fold models with nonlinear stress-strain properties and to investigate the effect of linear versus nonlinear material properties on fundamental frequency (F[subscript 0]) during anterior-posterior stretching. Method: Three materially linear and 3 materially nonlinear models were…

  4. Computational Tools for Probing Interactions in Multiple Linear Regression, Multilevel Modeling, and Latent Curve Analysis

    ERIC Educational Resources Information Center

    Preacher, Kristopher J.; Curran, Patrick J.; Bauer, Daniel J.

    2006-01-01

    Simple slopes, regions of significance, and confidence bands are commonly used to evaluate interactions in multiple linear regression (MLR) models, and the use of these techniques has recently been extended to multilevel or hierarchical linear modeling (HLM) and latent curve analysis (LCA). However, conducting these tests and plotting the…

  5. Development of a Linear Stirling System Model with Varying Heat Inputs

    NASA Technical Reports Server (NTRS)

    Regan, Timothy F.; Lewandowski, Edward J.

    2007-01-01

    The linear model of the Stirling system developed by NASA Glenn Research Center (GRC) has been extended to include a user-specified heat input. Previously developed linear models were limited to the Stirling convertor and electrical load. They represented the thermodynamic cycle with pressure factors that remained constant. The numerical values of the pressure factors were generated by linearizing GRC's nonlinear System Dynamic Model (SDM) of the convertor at a chosen operating point. The pressure factors were fixed for that operating point; thus, the model lost accuracy if a transition to a different operating point was simulated. Although the previous linear model was used in developing controllers that manipulated current, voltage, and piston position, it could not be used in the development of control algorithms that regulated hot-end temperature. This basic model was extended to include the thermal dynamics associated with a hot-end temperature that varies over time in response to external changes as well as to changes in the Stirling cycle. The linear model described herein includes not only dynamics of the piston, displacer, gas, and electrical circuit, but also the transient effects of the heater head thermal inertia. The linear version algebraically couples two separate linear dynamic models, one model of the Stirling convertor and one model of the thermal system, through the pressure factors. The thermal system model includes heat flow of heat transfer fluid, insulation loss, and temperature drops from the heat source to the Stirling convertor expansion space. The linear model was compared to a nonlinear model, and performance was very similar. The resulting linear model can be implemented in a variety of computing environments, and is suitable for analysis with classical and state space controls analysis techniques.
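
    The constant pressure factors described in this record come from linearizing a nonlinear model at a chosen operating point. A minimal sketch of that generic step, using an invented scalar system and a finite-difference Jacobian (not GRC's SDM), is:

```python
# Sketch of linearization about an operating point. The dynamics f below are
# invented for illustration; only the technique mirrors the record.
import math

def f(x, u):
    """Hypothetical nonlinear dynamics dx/dt = f(x, u)."""
    return -math.sin(x) + 0.5 * u

def linearize(f, x0, u0, h=1e-6):
    """Central-difference Jacobian terms A = df/dx, B = df/du at (x0, u0)."""
    A = (f(x0 + h, u0) - f(x0 - h, u0)) / (2 * h)
    B = (f(x0, u0 + h) - f(x0, u0 - h)) / (2 * h)
    return A, B

A, B = linearize(f, x0=0.0, u0=0.0)
print(A, B)  # near (-1.0, 0.5): d/dx[-sin x] at 0 is -cos(0) = -1; d/du is 0.5
```

    The fixed (A, B) pair is accurate near (x0, u0) and degrades away from it, which is exactly the loss of accuracy at other operating points the record describes.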

  6. Linear moose model with pairs of degenerate gauge boson triplets

    NASA Astrophysics Data System (ADS)

    Casalbuoni, Roberto; Coradeschi, Francesco; de Curtis, Stefania; Dominici, Daniele

    2008-05-01

    The possibility of a strongly interacting electroweak symmetry breaking sector, as opposed to the weakly interacting light Higgs of the standard model, is not yet ruled out by experiments. In this paper we make an extensive study of a deconstructed model (or “moose” model) providing an effective description of such a strong symmetry breaking sector, and show its compatibility with experimental data for a wide portion of the model parameter space. The model is a direct generalization of the previously proposed D-BESS model.

  7. Linear moose model with pairs of degenerate gauge boson triplets

    SciTech Connect

    Casalbuoni, Roberto; Coradeschi, Francesco; De Curtis, Stefania; Dominici, Daniele

    2008-05-01

    The possibility of a strongly interacting electroweak symmetry breaking sector, as opposed to the weakly interacting light Higgs of the standard model, is not yet ruled out by experiments. In this paper we make an extensive study of a deconstructed model (or “moose” model) providing an effective description of such a strong symmetry breaking sector, and show its compatibility with experimental data for a wide portion of the model parameter space. The model is a direct generalization of the previously proposed D-BESS model.

  8. Output feedback model matching in linear impulsive systems with control feedthrough: a structural approach

    NASA Astrophysics Data System (ADS)

    Zattoni, Elena

    2017-01-01

    This paper investigates the problem of structural model matching by output feedback in linear impulsive systems with control feedthrough. Namely, given a linear impulsive plant, possibly featuring an algebraic link from the control input to the output, and given a linear impulsive model, the problem consists in finding a linear impulsive regulator that achieves exact matching between the respective forced responses of the linear impulsive plant and of the linear impulsive model, for all the admissible input functions and all the admissible sequences of jump times, by means of a dynamic feedback of the plant output. The problem solvability is characterized by a necessary and sufficient condition. The regulator synthesis is outlined through the proof of sufficiency, which is constructive.

  9. The linear-quadratic model is inappropriate to model high dose per fraction effects in radiosurgery.

    PubMed

    Kirkpatrick, John P; Meyer, Jeffrey J; Marks, Lawrence B

    2008-10-01

    The linear-quadratic (LQ) model is widely used to model the effect of total dose and dose per fraction in conventionally fractionated radiotherapy. Much of the data used to generate the model are obtained in vitro at doses well below those used in radiosurgery. Clinically, the LQ model often underestimates tumor control observed at radiosurgical doses. The underlying mechanisms implied by the LQ model do not reflect the vascular and stromal damage produced at the high doses per fraction encountered in radiosurgery and ignore the impact of radioresistant subpopulations of cells. The appropriate modeling of both tumor control and normal tissue toxicity in radiosurgery requires the application of emerging understanding of molecular-, cellular-, and tissue-level effects of high-dose/fraction-ionizing radiation and the role of cancer stem cells.
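
    For reference, the LQ model this record critiques predicts a single-fraction surviving fraction S = exp(-(αd + βd²)). The parameter values below are illustrative (a common α/β = 10 Gy assumption), not taken from the paper.

```python
# The linear-quadratic surviving fraction for a single fraction of dose d.
# alpha and beta values are illustrative, not from the record.
import math

def lq_survival(d, alpha=0.3, beta=0.03):  # alpha/beta = 10 Gy (assumed)
    return math.exp(-(alpha * d + beta * d * d))

# At a conventional 2 Gy fraction the linear term dominates; at a
# radiosurgical 18 Gy fraction the quadratic term dominates -- the
# high-dose regime where the record argues the model extrapolates poorly.
print(lq_survival(2.0))
print(lq_survival(18.0))
```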

  10. ROMS Tangent Linear and Adjoint Models: Testing and Applications

    DTIC Science & Technology

    2001-09-30

    long-term scientific goal is to model and predict the mesoscale circulation and the ecosystem response to physical forcing in the various regions of the world ocean through ROMS primitive equation modeling/assimilation.

  11. ROMS Tangent Linear and Adjoint Models: Testing and Applications

    DTIC Science & Technology

    2002-09-30

    long-term scientific goal is to model and predict the mesoscale circulation and the ecosystem response to physical forcing in the various regions of the world ocean through ROMS primitive equation modeling/assimilation.

  12. Analysis of operating principles with S-system models.

    PubMed

    Lee, Yun; Chen, Po-Wei; Voit, Eberhard O

    2011-05-01

    Operating principles address general questions regarding the response dynamics of biological systems as we observe or hypothesize them, in comparison to a priori equally valid alternatives. In analogy to design principles, the question arises: Why are some operating strategies encountered more frequently than others and in what sense might they be superior? It is at this point impossible to study operating principles in complete generality, but the work here discusses the important situation where a biological system must shift operation from its normal steady state to a new steady state. This situation is quite common and includes many stress responses. We present two distinct methods for determining different solutions to this task of achieving a new target steady state. Both methods utilize the property of S-system models within Biochemical Systems Theory (BST) that steady states can be explicitly represented as systems of linear algebraic equations. The first method uses matrix inversion, a pseudo-inverse, or regression to characterize the entire admissible solution space. Operations on the basis of the solution space permit modest alterations of the transients toward the target steady state. The second method uses standard or mixed integer linear programming to determine admissible solutions that satisfy criteria of functional effectiveness, which are specified beforehand. As an illustration, we use both methods to characterize alternative response patterns of yeast subjected to heat stress, and compare them with observations from the literature.
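
    The S-system property this record relies on can be sketched in a few lines: at steady state, alpha_i * prod_j X_j^g_ij = beta_i * prod_j X_j^h_ij, which in the variables y_j = ln X_j becomes the linear system sum_j (g_ij - h_ij) y_j = ln(beta_i / alpha_i). The 2-variable system below is invented for illustration.

```python
# Steady state of a toy S-system via its log-linear representation.
# Kinetic-order differences (g_ij - h_ij) and rate constants are invented.
import math

GH = [[1.0, -0.5],   # row i holds the coefficients (g_ij - h_ij)
      [0.5,  1.0]]
alpha = [2.0, 1.0]
beta = [1.0, 2.0]

rhs = [math.log(b / a) for a, b in zip(alpha, beta)]

# Solve the 2x2 linear system GH @ y = rhs by Cramer's rule.
det = GH[0][0] * GH[1][1] - GH[0][1] * GH[1][0]
y = [(rhs[0] * GH[1][1] - GH[0][1] * rhs[1]) / det,
     (GH[0][0] * rhs[1] - rhs[0] * GH[1][0]) / det]

X = [math.exp(v) for v in y]  # steady-state concentrations (always positive)
print(X)
```

    Because the steady-state conditions are linear in y, characterizing all admissible new steady states reduces to linear algebra or (mixed integer) linear programming, as the record describes.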

  13. Comparison Between Linear and Non-parametric Regression Models for Genome-Enabled Prediction in Wheat

    PubMed Central

    Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne

    2012-01-01

    In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models. PMID:23275882

  14. A deterministic aggregate production planning model considering quality of products

    NASA Astrophysics Data System (ADS)

    Madadi, Najmeh; Yew Wong, Kuan

    2013-06-01

    Aggregate Production Planning (APP) is a medium-term planning activity concerned with the lowest-cost method of production planning to meet customers' requirements and to satisfy fluctuating demand over a planning time horizon. The APP problem has been studied widely since it was introduced and formulated in the 1950s. However, most studies in the APP area have concentrated on common objectives such as minimization of cost, of fluctuation in the number of workers, and of inventory level. In particular, maintaining quality at a desirable level while minimizing cost has not been considered in previous studies. In this study, an attempt has been made to develop a multi-objective mixed integer linear programming model that serves those companies aiming to incur the minimum level of operational cost while maintaining quality at an acceptable level. In order to obtain the solution to the multi-objective model, the Fuzzy Goal Programming approach and the max-min operator of Bellman-Zadeh were applied to the model. At the final step, IBM ILOG CPLEX Optimization Studio software was used to obtain the experimental results based on data collected from an automotive parts manufacturing company. The results show that incorporating quality in the model imposes some costs; however, a trade-off should be made between the cost of producing products with higher quality and the cost that the firm may incur due to customer dissatisfaction and sales losses.
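
    A toy version of the max-min (Bellman-Zadeh) operator mentioned in this record: choose an integer production quantity that maximizes the minimum satisfaction across a cost goal and a quality goal. The membership functions and bounds are invented, and exhaustive search stands in for the paper's CPLEX-solved MILP.

```python
# Brute-force max-min fuzzy goal programming over a small integer decision.
# Both membership functions are invented for illustration.

def mu_cost(q):
    """Satisfaction with cost: 1 at low output, falling linearly to 0 at q=100."""
    return max(0.0, min(1.0, (100 - q) / 100))

def mu_quality(q):
    """Satisfaction with quality: rises linearly, saturating at q=80."""
    return max(0.0, min(1.0, q / 80))

# Bellman-Zadeh: maximize the minimum membership over all goals.
best_q = max(range(101), key=lambda q: min(mu_cost(q), mu_quality(q)))
print(best_q, min(mu_cost(best_q), mu_quality(best_q)))
```

    The optimum sits where the two memberships cross, which is the trade-off between cost and quality the record concludes with.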

  15. Model Averaging Methods for Weight Trimming in Generalized Linear Regression Models.

    PubMed

    Elliott, Michael R

    2009-03-01

    In sample surveys where units have unequal probabilities of inclusion, associations between the inclusion probability and the statistic of interest can induce bias in unweighted estimates. This is true even in regression models, where the estimates of the population slope may be biased if the underlying mean model is misspecified or the sampling is nonignorable. Weights equal to the inverse of the probability of inclusion are often used to counteract this bias. Highly disproportional sample designs have highly variable weights; weight trimming reduces large weights to a maximum value, reducing variability but introducing bias. Most standard approaches are ad hoc in that they do not use the data to optimize bias-variance trade-offs. This article uses Bayesian model averaging to create "data driven" weight trimming estimators. We extend previous results for linear regression models (Elliott 2008) to generalized linear regression models, developing robust models that approximate fully-weighted estimators when bias correction is of greatest importance, and approximate unweighted estimators when variance reduction is critical.
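
    Weight trimming itself is simple to sketch: cap the largest sampling weights at a threshold and recompute the weighted estimate, trading some bias for reduced variance. The data below are invented, and the record's Bayesian model-averaging estimators are not reproduced here.

```python
# Minimal weight-trimming sketch with invented survey data.

ys = [10.0, 12.0, 11.0, 30.0]   # observed values
ws = [1.0, 1.0, 1.0, 20.0]      # inverse-probability weights; one is extreme

def weighted_mean(ys, ws):
    return sum(w * y for w, y in zip(ws, ys)) / sum(ws)

def trim(ws, cap):
    """Reduce weights above `cap` to `cap` (the ad hoc trimming the record
    contrasts with its data-driven estimators)."""
    return [min(w, cap) for w in ws]

full = weighted_mean(ys, ws)                 # dominated by the high-weight unit
trimmed = weighted_mean(ys, trim(ws, 3.0))   # less variable, but biased
print(full, trimmed)
```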

  16. Microgrid Reliability Modeling and Battery Scheduling Using Stochastic Linear Programming

    SciTech Connect

    Cardoso, Goncalo; Stadler, Michael; Siddiqui, Afzal; Marnay, Chris; DeForest, Nicholas; Barbosa-Povoa, Ana; Ferrao, Paulo

    2013-05-23

    This paper describes the introduction of stochastic linear programming into Operations DER-CAM, a tool used to obtain optimal operating schedules for a given microgrid under local economic and environmental conditions. This application follows previous work on optimal scheduling of a lithium-iron-phosphate battery given the output uncertainty of a 1 MW molten carbonate fuel cell. Both are in the Santa Rita Jail microgrid, located in Dublin, California. This fuel cell has proven unreliable, partially justifying the consideration of storage options. Several stochastic DER-CAM runs are executed to compare different scenarios to values obtained by a deterministic approach. Results indicate that using a stochastic approach provides a conservative yet more lucrative battery schedule. Lower expected energy bills result, given fuel cell outages, in potential savings exceeding 6percent.

  17. Computational models of signalling networks for non-linear control.

    PubMed

    Fuente, Luis A; Lones, Michael A; Turner, Alexander P; Stepney, Susan; Caves, Leo S; Tyrrell, Andy M

    2013-05-01

    Artificial signalling networks (ASNs) are a computational approach inspired by the signalling processes inside cells that decode outside environmental information. Using evolutionary algorithms to induce complex behaviours, we show how chaotic dynamics in a conservative dynamical system can be controlled. Such dynamics are of particular interest as they mimic the inherent complexity of non-linear physical systems in the real world. Considering the main biological interpretations of cellular signalling, in which complex behaviours and robust cellular responses emerge from the interaction of multiple pathways, we introduce two ASN representations: a stand-alone ASN and a coupled ASN. In particular we note how sophisticated cellular communication mechanisms can lead to effective controllers, where complicated problems can be divided into smaller and independent tasks.

  18. A computational methodology for learning low-complexity surrogate models of process from experiments or simulations. (Paper 679a)

    SciTech Connect

    Cozad, A.; Sahinidis, N.; Miller, D.

    2011-01-01

    Costly and/or insufficiently robust simulations or experiments can often pose difficulties when their use extends well beyond a single evaluation. This is the case with the numerous evaluations required for uncertainty quantification, when an algebraic model is needed for optimization, and in numerous other areas. To overcome these difficulties, we generate an accurate set of algebraic surrogate models of disaggregated process blocks of the experiment or simulation. We developed a method that uses derivative-based and derivative-free optimization alongside machine learning and statistical techniques to generate the set of surrogate models using data sampled from experiments or detailed simulations. Our method begins by building a low-complexity surrogate model for each block from an initial sample set. The model is built using a best-subset technique that leverages a mixed-integer linear problem formulation to allow for very large initial basis sets. The models are then tested, exploited, and improved through the use of derivative-free solvers to adaptively sample new simulation or experimental points. The sets of surrogate models from each disaggregated process block are then combined with heat and mass balances around each disaggregated block to generate a full algebraic model of the process. The full model can be used for cheap and accurate evaluations of the original simulation or experiment, or combined with design specifications and an objective for nonlinear optimization.
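
    The best-subset step described in this record can be illustrated at toy scale: from a small library of candidate basis functions, pick the term (plus an intercept) that best fits sampled data. Exhaustive search stands in here for the paper's mixed-integer formulation; the basis library and "simulation" data are invented.

```python
# Toy best-subset surrogate fitting: choose the single basis term that
# minimizes the sum of squared errors. Everything here is illustrative.
import math

def fit_one_term(xs, ys, f):
    """Least-squares fit y ~ b0 + b1*f(x); returns (b0, b1, sse)."""
    zs = [f(x) for x in xs]
    n = len(xs)
    mz, my = sum(zs) / n, sum(ys) / n
    szz = sum((z - mz) ** 2 for z in zs)
    szy = sum((z - mz) * (y - my) for z, y in zip(zs, ys))
    b1 = szy / szz
    b0 = my - b1 * mz
    sse = sum((y - (b0 + b1 * z)) ** 2 for y, z in zip(ys, zs))
    return b0, b1, sse

basis = {"x": lambda x: x, "x^2": lambda x: x * x, "sqrt": math.sqrt}
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [x * x for x in xs]  # synthetic "simulation" output

best = min(basis, key=lambda name: fit_one_term(xs, ys, basis[name])[2])
print(best)  # the quadratic term reproduces the synthetic data exactly
```

    At realistic scale, with thousands of candidate terms and a cardinality constraint, this selection is exactly the kind of combinatorial problem a MILP formulation handles, which brute force cannot.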

  19. Fault Detection and Model Identification in Linear Dynamical Systems

    DTIC Science & Technology

    2001-02-01

    fault detection and isolation (FDI). One avenue of FDI is via the multi-model approach, in which the parameters of the nominal, unfailed model of the system are known, as well as the parameters of one or more fault models. The design goal is to obtain an indicator for when a fault has occurred, and, when more than one type is possible, which type of fault it is. A choice that must be made in the system design is how to model noise. One way is as a bounded energy signal. This approach places very few restrictions on the types of noisy systems which

  20. Fitting host-parasitoid models with CV² > 1 using hierarchical generalized linear models.

    PubMed Central

    Perry, J N; Noh, M S; Lee, Y; Alston, R D; Norowi, H M; Powell, W; Rennolls, K

    2000-01-01

    The powerful general Pacala-Hassell host-parasitoid model for a patchy environment, which allows host density-dependent heterogeneity (HDD) to be distinguished from between-patch, host density-independent heterogeneity (HDI), is reformulated within the class of the generalized linear model (GLM) family. This improves accessibility through the provision of general software within well-known statistical systems, and allows a rich variety of models to be formulated. Covariates such as age class, host density and abiotic factors may be included easily. For the case where there is no HDI, the formulation is a simple GLM. When there is HDI in addition to HDD, the formulation is a hierarchical generalized linear model. Two forms of HDI model are considered, both with between-patch variability: one has binomial variation within patches and one has extra-binomial, overdispersed variation within patches. Examples are given demonstrating parameter estimation with standard errors, and hypothesis testing. For one example given, the extra-binomial component of the HDI heterogeneity in parasitism is itself shown to be strongly density dependent. PMID:11416907

  1. A linearized and incompressible constitutive model for arteries.

    PubMed

    Liu, Y; Zhang, W; Wang, C; Kassab, G S

    2011-10-07

    In many biomechanical studies, blood vessels can be modeled as pseudoelastic orthotropic materials that are incompressible (volume-preserving) under physiological loading. To use a minimum number of elastic constants to describe the constitutive behavior of arteries, we adopt a generalized Hooke's law for the co-rotational Cauchy stress and a recently proposed logarithmic-exponential strain. This strain tensor absorbs the material nonlinearity and its trace is zero for volume-preserving deformations. Thus, the relationships between model parameters due to the incompressibility constraint are easy to analyze and interpret. In particular, the number of independent elastic constants reduces from ten to seven in the orthotropic model. As an illustrative study, we fit this model to measured data of porcine coronary arteries in inflation-stretch tests. Four parameters, n (material nonlinearity), Young's moduli E₁ (circumferential), E₂ (axial), and E₃ (radial), are necessary to fit the data. The advantages and limitations of this model are discussed.

  2. Semi-physical neural modeling for linear signal restoration.

    PubMed

    Bourgois, Laurent; Roussel, Gilles; Benjelloun, Mohammed

    2013-02-01

    This paper deals with the design methodology of an Inverse Neural Network (INN) model. The basic idea is to carry out a semi-physical model gathering two types of information: the a priori knowledge of the deterministic rules which govern the studied system, and the observation of the actual conduct of this system obtained from experimental data. This hybrid model is elaborated by drawing inspiration from the mechanisms of a neuromimetic network whose structure is constrained by the discrete reverse-time state-space equations. In order to validate the approach, tests are performed on two dynamic models. The first model is a dynamic system characterized by an unspecified r-order Ordinary Differential Equation (ODE). The second concerns the mass balance equation for a dispersion phenomenon governed by a Partial Differential Equation (PDE) discretized on a basic mesh. The performances are numerically analyzed in terms of generalization, regularization and training effort.

  3. Using multiple linear regression model to estimate thunderstorm activity

    NASA Astrophysics Data System (ADS)

    Suparta, W.; Putro, W. S.

    2017-03-01

    This paper aims to develop a numerical model, with the use of a nonlinear model, to estimate thunderstorm activity. Meteorological data such as Pressure (P), Temperature (T), Relative Humidity (H), cloud (C), Precipitable Water Vapor (PWV), and precipitation on a daily basis were used in the proposed method. The model was constructed with six configurations of input and one target output. The output tested in this work is the thunderstorm event when one year of data is used. Results showed that the model works well in estimating thunderstorm activity, with the maximum epoch reaching 1000 iterations and the percent error found to be below 50%. The model also found that thunderstorm activity in May and October is higher than in the other months, due to the inter-monsoon season.
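
    A multiple linear regression of the kind named in this record's title can be sketched via the normal equations. The two-predictor data below are synthetic (generated from a known linear relation), not the paper's observations.

```python
# Multiple linear regression with two predictors via the normal equations.
# Data are synthetic; variable names (t: temperature, h: humidity) are
# illustrative stand-ins for the record's meteorological predictors.

def solve3(M, v):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    A = [row[:] + [b] for row, b in zip(M, v)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (A[i][3] - sum(A[i][j] * x[j] for j in range(i + 1, 3))) / A[i][i]
    return x

# Synthetic data generated from y = 1 + 2*t + 3*h.
data = [(t, h, 1 + 2 * t + 3 * h) for t in (0, 1, 2) for h in (0, 1, 2)]
X = [(1.0, t, h) for t, h, _ in data]
y = [row[2] for row in data]

# Normal equations: (X^T X) beta = X^T y
XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]
beta = solve3(XtX, Xty)
print(beta)  # recovers approximately [1, 2, 3]
```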

  4. Modeling of thermal storage systems in MILP distributed energy resource models

    DOE PAGES

    Steen, David; Stadler, Michael; Cardoso, Gonçalo; ...

    2014-08-04

    Thermal energy storage (TES) and distributed generation technologies, such as combined heat and power (CHP) or photovoltaics (PV), can be used to reduce energy costs and decrease CO2 emissions from buildings by shifting energy consumption to times with less emissions and/or lower energy prices. To determine the feasibility of investing in TES in combination with other distributed energy resources (DER), mixed integer linear programming (MILP) can be used. Such a MILP model is the well-established Distributed Energy Resources Customer Adoption Model (DER-CAM); however, it currently uses only a simplified TES model to guarantee linearity and short run-times, with loss calculations based only on the energy contained in the storage. This paper presents a new DER-CAM TES model that allows improved tracking of losses based on ambient and storage temperatures, and compares results with the previous version. A multi-layer TES model is introduced that retains linearity and avoids creating an endogenous optimization problem. The improved model increases the accuracy of the estimated storage losses and enables use of heat pumps for low-temperature storage charging. Ultimately, results indicate that the previous model overestimates the attractiveness of TES investments for cases without the possibility to invest in heat pumps, and underestimates it for some locations when heat pumps are allowed. Despite a variation in optimal technology selection between the two models, the objective function value stays quite stable, illustrating the complexity of optimal DER sizing problems in buildings and microgrids.
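
    The modeling distinction this record draws, losses proportional to stored energy versus losses driven by the storage/ambient temperature difference, can be sketched as two loss rules inside one hourly energy-balance loop. All coefficients and temperatures below are invented.

```python
# Toy comparison of two TES loss models over a 24-hour horizon.
# Parameters are invented for illustration.

def simulate(hours, e0, loss_frac=None, ua=None, t_store=70.0, t_amb=20.0):
    """Hourly stored-energy trace under exactly one of two loss rules."""
    e = e0
    for _ in range(hours):
        if loss_frac is not None:       # simplified rule: loss ~ stored energy
            e -= loss_frac * e
        else:                           # improved rule: loss ~ (T_store - T_amb)
            e -= ua * (t_store - t_amb)
        e = max(e, 0.0)
    return e

energy_based = simulate(24, 100.0, loss_frac=0.01)  # ~78.6 remaining
temp_based = simulate(24, 100.0, ua=0.02)           # ~1 lost per hour -> ~76
print(energy_based, temp_based)
```

    Both rules stay linear in the decision variables (stored energy and temperatures enter linearly), which is what lets the improved loss tracking remain inside a MILP.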

  5. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.

  6. Linear summation of outputs in a balanced network model of motor cortex.

    PubMed

    Capaday, Charles; van Vreeswijk, Carl

    2015-01-01

    Given the non-linearities of the neural circuitry's elements, we would expect cortical circuits to respond non-linearly when activated. Surprisingly, when two points in the motor cortex are activated simultaneously, the EMG responses are the linear sum of the responses evoked by each of the points activated separately. Additionally, the corticospinal transfer function is close to linear, implying that the synaptic interactions in motor cortex must be effectively linear. To account for this, here we develop a model of motor cortex composed of multiple interconnected points, each comprised of reciprocally connected excitatory and inhibitory neurons. We show how non-linearities in neuronal transfer functions are eschewed by strong synaptic interactions within each point. Consequently, the simultaneous activation of multiple points results in a linear summation of their respective outputs. We also consider the effects of reduction of inhibition at a cortical point when one or more surrounding points are active. The network response in this condition is linear over an approximately two- to three-fold decrease of inhibitory feedback strength. This result supports the idea that focal disinhibition allows linear coupling of motor cortical points to generate movement related muscle activation patterns; albeit with a limitation on gain control. The model also explains why neural activity does not spread as far out as the axonal connectivity allows, whilst also explaining why distant cortical points can be, nonetheless, functionally coupled by focal disinhibition. Finally, we discuss the advantages that linear interactions at the cortical level afford to motor command synthesis.

  7. The puzzle of apparent linear lattice artifacts in the 2d non-linear σ-model and Symanzik's solution

    NASA Astrophysics Data System (ADS)

    Balog, Janos; Niedermayer, Ferenc; Weisz, Peter

    2010-01-01

    Lattice artifacts in the 2d O(n) non-linear σ-model are expected to be of the form O(a²), and hence it was (when first observed) disturbing that some quantities in the O(3) model with various actions show parametrically stronger cutoff dependence, apparently O(a), up to very large correlation lengths. In a previous letter, Balog et al. (2009) [1], we described the solution to this puzzle. Based on the conventional framework of Symanzik's effective action, we showed that there are logarithmic corrections to the O(a²) artifacts which are especially large (ln³ a) for n=3 and that such artifacts are consistent with the data. In this paper we supply the technical details of this computation. Results of Monte Carlo simulations using various lattice actions for O(3) and O(4) are also presented.

  8. Non-linear modelling and optimal control of a hydraulically actuated seismic isolator test rig

    NASA Astrophysics Data System (ADS)

    Pagano, Stefano; Russo, Riccardo; Strano, Salvatore; Terzo, Mario

    2013-02-01

    This paper investigates the modelling, parameter identification and control of an unidirectional hydraulically actuated seismic isolator test rig. The plant is characterized by non-linearities such as the valve dead zone and frictions. A non-linear model is derived and then employed for parameter identification. The results concerning the model validation are illustrated and they fully confirm the effectiveness of the proposed model. The testing procedure of the isolation systems is based on the definition of a target displacement time history of the sliding table and, consequently, the precision of the table positioning is of primary importance. In order to minimize the test rig tracking error, a suitable control system has to be adopted. The system non-linearities highly limit the performances of the classical linear control and a non-linear one is therefore adopted. The test rig mathematical model is employed for a non-linear control design that minimizes the error between the target table position and the current one. The controller synthesis is made by taking no specimen into account. The proposed approach consists of a non-linear optimal control based on the state-dependent Riccati equation (SDRE). Numerical simulations have been performed in order to evaluate the soundness of the designed control with and without the specimen under test. The results confirm that the performances of the proposed non-linear controller are not invalidated because of the presence of the specimen.

  9. Cost decomposition of linear systems with application to model reduction

    NASA Technical Reports Server (NTRS)

    Skelton, R. E.

    1980-01-01

    A means is provided to assess the value or 'cost' of each component of a large scale system, when the total cost is a quadratic function. Such a 'cost decomposition' of the system has several important uses. When the components represent physical subsystems which can fail, the 'component cost' is useful in failure mode analysis. When the components represent mathematical equations which may be truncated, the 'component cost' becomes a criterion for model truncation. In this latter event component costs provide a mechanism by which the specific control objectives dictate which components should be retained in the model reduction process. This information can be valuable in model reduction and decentralized control problems.
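The component-cost idea can be sketched for a stable linear system driven by white noise, where the steady-state quadratic cost trace(QX) splits across state components via the Lyapunov-equation covariance X. The matrices below are illustrative assumptions, not Skelton's examples.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Toy stable system x' = A x + B w driven by unit white noise w,
# with total quadratic cost V = E[x^T Q x] = trace(Q X).
A = np.array([[-1.0, 0.2],
              [0.0, -5.0]])
B = np.array([[1.0], [1.0]])
Q = np.eye(2)

# Steady-state state covariance X solves A X + X A^T + B B^T = 0
X = solve_continuous_lyapunov(A, -B @ B.T)

V = np.trace(Q @ X)        # total cost
costs = np.diag(Q @ X)     # cost attributed to each state component
print(costs / V)           # small relative share => candidate for truncation
```

Components with a negligible share of the total cost are the ones a cost-based model reduction would discard first.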

  10. Direct-Steam Linear Fresnel Performance Model for NREL's System Advisor Model

    SciTech Connect

    Wagner, M. J.; Zhu, G.

    2012-09-01

    This paper presents the technical formulation and demonstrated model performance results of a new direct-steam-generation (DSG) model in NREL's System Advisor Model (SAM). The model predicts the annual electricity production of a wide range of system configurations within the DSG Linear Fresnel technology by modeling hourly performance of the plant in detail. The quasi-steady-state formulation allows users to investigate energy and mass flows, operating temperatures, and pressure drops for geometries and solar field configurations of interest. The model includes tools for heat loss calculation using either empirical polynomial heat loss curves as a function of steam temperature, ambient temperature, and wind velocity, or a detailed evacuated tube receiver heat loss model. Thermal losses are evaluated using a computationally efficient nodal approach, where the solar field and headers are discretized into multiple nodes where heat losses, thermal inertia, steam conditions (including pressure, temperature, enthalpy, etc.) are individually evaluated during each time step of the simulation. This paper discusses the mathematical formulation for the solar field model and describes how the solar field is integrated with the other subsystem models, including the power cycle and optional auxiliary fossil system. Model results are also presented to demonstrate plant behavior in the various operating modes.

  11. Modeling results for a linear simulator of a divertor

    SciTech Connect

    Hooper, E.B.; Brown, M.D.; Byers, J.A.; Casper, T.A.; Cohen, B.I.; Cohen, R.H.; Jackson, M.C.; Kaiser, T.B.; Molvik, A.W.; Nevins, W.M.; Nilson, D.G.; Pearlstein, L.D.; Rognlien, T.D.

    1993-06-23

    A divertor simulator, IDEAL, has been proposed by S. Cohen to study the difficult power-handling requirements of the tokamak program in general and the ITER program in particular. Projections of the power density in the ITER divertor reach ≈ 1 GW/m^2 along the magnetic field lines and > 10 MW/m^2 on a surface inclined at a shallow angle to the field lines. These power densities are substantially greater than can be handled reliably on the surface, so new techniques are required to reduce the power density to a reasonable level. Although the divertor physics must be demonstrated in tokamaks, a linear device could contribute to the development because of its flexibility, the easy access to the plasma and to tested components, and long pulse operation (essentially cw). However, a decision to build a simulator requires not just the recognition of its programmatic value, but also confidence that it can meet the required parameters at an affordable cost. Accordingly, as reported here, it was decided to examine the physics of the proposed device, including kinetic effects resulting from the intense heating required to reach the plasma parameters, and to conduct an independent cost estimate. The detailed role of the simulator in a divertor program is not explored in this report.

  12. Fokker-Planck Modelling of PISCES Linear Divertor Simulator

    NASA Astrophysics Data System (ADS)

    Batishchev, O. V.; Krasheninnikov, S. I.; Schmitz, L.

    1996-11-01

    The gas target operating regime in the PISCES [1] linear divertor simulator is characterized by a relatively high plasma density, 2.5 × 10^19 m^-3, and low temperature, 8 eV, in the middle section of an ≈ 1 m long plasma column. Near the target, the plasma temperature and density as measured by Langmuir probes drop to 2 eV and 3.5 × 10^18 m^-3, respectively, as a result of electron energy loss due to dissociation, ionization, and radiation. Such a sharp gradient in the plasma parameters can enhance non-local effects. To study these, we performed kinetic simulations of the relaxation of the electron energy distribution function on the experimentally measured background plasma using the adaptive finite-volumes code ALLA [2]. We discuss the effects of the observed incompletely equilibrated electron distribution function on key plasma parameter measurements and plasma-neutral particle interactions. [1] L. Schmitz et al., Physics of Plasmas 2 (1995) 3081. [2] A.A. Batishcheva et al., Physics of Plasmas 3 (1996) 1634. ^*Under U.S. DoE Contracts No. DE-FG02-91-ER-54109 at MIT, DE-FG02-88-ER-53263 at Lodestar, and DE-FG03-95ER54301 at UCSD.

  13. On unified modeling, theory, and method for solving multi-scale global optimization problems

    NASA Astrophysics Data System (ADS)

    Gao, David Yang

    2016-10-01

    A unified model is proposed for general optimization problems in multi-scale complex systems. Based on this model and necessary assumptions in physics, the canonical duality theory is presented in a precise way to include traditional duality theories and popular methods as special applications. Two conjectures on NP-hardness are proposed, which should play important roles for correctly understanding and efficiently solving challenging real-world problems. Applications are illustrated for both nonconvex continuous optimization and mixed integer nonlinear programming.

  14. Genetic demixing and evolution in linear stepping stone models

    PubMed Central

    Korolev, K. S.; Avlund, Mikkel; Hallatschek, Oskar; Nelson, David R.

    2010-01-01

    Results for mutation, selection, genetic drift, and migration in a one-dimensional continuous population are reviewed and extended. The population is described by a continuous limit of the stepping stone model, which leads to the stochastic Fisher-Kolmogorov-Petrovsky-Piscounov equation with additional terms describing mutations. Although the stepping stone model was first proposed for population genetics, it is closely related to “voter models” of interest in nonequilibrium statistical mechanics. The stepping stone model can also be regarded as an approximation to the dynamics of a thin layer of actively growing pioneers at the frontier of a colony of micro-organisms undergoing a range expansion on a Petri dish. The population tends to segregate into monoallelic domains. This segregation slows down genetic drift and selection because these two evolutionary forces can only act at the boundaries between the domains; the effects of mutation, however, are not significantly affected by the segregation. Although fixation in the neutral well-mixed (or “zero-dimensional”) model occurs exponentially in time, it occurs only algebraically fast in the one-dimensional model. An unusual sublinear increase is also found in the variance of the spatially averaged allele frequency with time. If selection is weak, selective sweeps occur exponentially fast in both well-mixed and one-dimensional populations, but the time constants are different. The relatively unexplored problem of evolutionary dynamics at the edge of an expanding circular colony is studied as well. Also reviewed are how the observed patterns of genetic diversity can be used for statistical inference and the differences are highlighted between the well-mixed and one-dimensional models. Although the focus is on two alleles or variants, q-allele Potts-like models of gene segregation are considered as well. Most of the analytical results are checked with simulations and could be tested against recent spatial

  15. Genetic demixing and evolution in linear stepping stone models

    NASA Astrophysics Data System (ADS)

    Korolev, K. S.; Avlund, Mikkel; Hallatschek, Oskar; Nelson, David R.

    2010-04-01

    Results for mutation, selection, genetic drift, and migration in a one-dimensional continuous population are reviewed and extended. The population is described by a continuous limit of the stepping stone model, which leads to the stochastic Fisher-Kolmogorov-Petrovsky-Piscounov equation with additional terms describing mutations. Although the stepping stone model was first proposed for population genetics, it is closely related to “voter models” of interest in nonequilibrium statistical mechanics. The stepping stone model can also be regarded as an approximation to the dynamics of a thin layer of actively growing pioneers at the frontier of a colony of micro-organisms undergoing a range expansion on a Petri dish. The population tends to segregate into monoallelic domains. This segregation slows down genetic drift and selection because these two evolutionary forces can only act at the boundaries between the domains; the effects of mutation, however, are not significantly affected by the segregation. Although fixation in the neutral well-mixed (or “zero-dimensional”) model occurs exponentially in time, it occurs only algebraically fast in the one-dimensional model. An unusual sublinear increase is also found in the variance of the spatially averaged allele frequency with time. If selection is weak, selective sweeps occur exponentially fast in both well-mixed and one-dimensional populations, but the time constants are different. The relatively unexplored problem of evolutionary dynamics at the edge of an expanding circular colony is studied as well. Also reviewed are how the observed patterns of genetic diversity can be used for statistical inference and the differences are highlighted between the well-mixed and one-dimensional models. Although the focus is on two alleles or variants, q -allele Potts-like models of gene segregation are considered as well. Most of the analytical results are checked with simulations and could be tested against recent spatial
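The genetic demixing into monoallelic domains described in these records can be illustrated with a minimal one-dimensional voter-model simulation, a caricature of the stepping stone model; the lattice size, update count, and seed below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-dimensional voter-model caricature of the stepping stone model:
# each update, a random site copies the allele of a random neighbour.
L = 200
sites = rng.integers(0, 2, size=L)      # two neutral alleles, 0 and 1

def domain_boundaries(s):
    """Number of boundaries between monoallelic domains (periodic ring)."""
    return int(np.sum(s != np.roll(s, 1)))

b0 = domain_boundaries(sites)
for _ in range(50_000):
    i = rng.integers(L)
    j = (i + rng.choice([-1, 1])) % L   # random neighbour on the ring
    sites[i] = sites[j]
b1 = domain_boundaries(sites)
print(b0, b1)   # demixing: domain boundaries annihilate over time
```

The boundaries perform annihilating random walks, which is the mechanism behind the slow, boundary-limited action of drift and selection discussed in the abstract.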

  16. A Linearized and Incompressible Constitutive Model for Arteries

    PubMed Central

    Liu, Y.; Zhang, W.; Wang, C.; Kassab, G. S.

    2011-01-01

    In many biomechanical studies, blood vessels can be modeled as pseudoelastic orthotropic materials that are incompressible (volume-preserving) under physiological loading. To use a minimum number of elastic constants to describe the constitutive behavior of arteries, we adopt a generalized Hooke’s law for the co-rotational Cauchy stress and a recently proposed logarithmic-exponential strain. This strain tensor absorbs the material nonlinearity and its trace is zero for volume-preserving deformations. Thus, the relationships between model parameters due to the incompressibility constraint are easy to analyze and interpret. In particular, the number of independent elastic constants reduces from ten to seven in the orthotropic model. As an illustrative study, we fit this model to measured data of porcine coronary arteries in inflation-stretch tests. Four parameters, n (material nonlinearity), Young’s moduli E1 (circumferential), E2 (axial), and E3 (radial) are necessary to fit the data. The advantages and limitations of this model are discussed. PMID:21605567

  17. Ammonia quantitative analysis model based on miniaturized Al ionization gas sensor and non-linear bistable dynamic model

    PubMed Central

    Ma, Rongfei

    2015-01-01

    In this paper, a quantitative analysis model for ammonia based on a miniaturized Al ionization gas sensor and a non-linear bistable dynamic model is proposed. An Al plate anodic gas-ionization sensor was used to obtain current-voltage (I-V) data, which were then processed with the non-linear bistable dynamics model. Results showed that the proposed method quantitatively determines ammonia concentrations. PMID:25975362

  18. Downscaling of rainfall in Peru using Generalised Linear Models

    NASA Astrophysics Data System (ADS)

    Bergin, E.; Buytaert, W.; Onof, C.; Wheater, H.

    2012-04-01

    The assessment of water resources in the Peruvian Andes is particularly important because the Peruvian economy relies heavily on agriculture. Much of the agricultural land is situated near the coast and relies on large quantities of water for irrigation. The simulation of synthetic rainfall series is thus important for evaluating the reliability of water supplies under current and future scenarios of climate change. In addition to water resources concerns, there is also a need to understand extreme heavy rainfall events, as there was significant flooding in Machu Picchu in 2010. The region exhibited a reduction of rainfall in 1983 associated with the El Niño-Southern Oscillation (ENSO). NCEP Reanalysis 1 data were used to provide weather variable data. Correlations with rain gauge data in the Andes were calculated for several weather variables; these were used to evaluate teleconnections and to suggest covariates for the downscaling model. External covariates used in the model include sea level pressure and sea surface temperature over the region of the Humboldt Current, together with relative humidity and temperature data over the region. The SOI teleconnection is also used. Covariates are standardised using observations for 1960-1990. The GlimClim downscaling model was used to fit a stochastic daily rainfall model to 13 sites in the Peruvian Andes. Results indicate that the model reproduces rainfall statistics well, despite the large area used. Although the correlation between individual rain gauges is generally quite low, all sites are affected by similar weather patterns, which is an assumption of the GlimClim downscaling model. Climate change scenarios are considered using several GCM outputs for the A1B scenario, with GCM data corrected for bias using 1960-1990 outputs from the 20C3M scenario. Rainfall statistics for current and future scenarios are compared. The region shows an overall decrease in mean rainfall but an increase in variance.

  19. Modeling taper charge with a non-linear equation

    NASA Technical Reports Server (NTRS)

    Mcdermott, P. P.

    1985-01-01

    Work aimed at modeling the charge voltage and current characteristics of nickel-cadmium cells subject to taper charge is presented. Work reported at previous NASA Battery Workshops has shown that the voltage of cells subject to constant current charge and discharge can be modeled very accurately with the equation: voltage = A + B/(C - x) + D e^(-Ex), where A, B, C, D, and E are fit parameters and x is the amp-hours of charge removed during discharge or returned during charge. In a constant current regime, x is also equivalent to time on charge or discharge.
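A model of this form can be fitted to charge data with a standard non-linear least-squares routine. The sketch below uses synthetic data with made-up parameter values, not the NiCd measurements from the record.

```python
import numpy as np
from scipy.optimize import curve_fit

def charge_voltage(x, A, B, C, D, E):
    """Taper-charge voltage model: V = A + B/(C - x) + D*exp(-E*x),
    with x the amp-hours returned during charge."""
    return A + B / (C - x) + D * np.exp(-E * x)

# Synthetic "measured" data (illustrative parameters, not NiCd test data)
true = (1.30, 0.05, 6.0, 0.10, 2.0)
x = np.linspace(0.0, 5.0, 60)
rng = np.random.default_rng(1)
v = charge_voltage(x, *true) + rng.normal(0.0, 1e-4, x.size)

# Non-linear least squares; p0 must keep C outside the data range
popt, _ = curve_fit(charge_voltage, x, v, p0=(1.2, 0.1, 6.5, 0.2, 1.0))
print(popt)   # recovered (A, B, C, D, E)
```

The pole at x = C means the initial guess for C should lie beyond the observed charge range, or the optimizer may step through a singularity.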

  20. Computation of linear acceleration through an internal model in the macaque cerebellum

    PubMed Central

    Laurens, Jean; Meng, Hui; Angelaki, Dora E.

    2013-01-01

    A combination of theory and behavioral findings has supported a role for internal models in the resolution of sensory ambiguities and sensorimotor processing. Although the cerebellum has been proposed as a candidate for implementation of internal models, concrete evidence from neural responses is lacking. Here we exploit unnatural motion stimuli, which induce incorrect self-motion perception and eye movements, to explore the neural correlates of an internal model proposed to compensate for Einstein’s equivalence principle and generate neural estimates of linear acceleration and gravity. We show that caudal cerebellar vermis Purkinje cells and cerebellar nuclei neurons selective for actual linear acceleration also encode erroneous linear acceleration, as expected from the internal model hypothesis, even when no actual linear acceleration occurs. These findings provide strong evidence that the cerebellum might be involved in the implementation of internal models that mimic physical principles to interpret sensory signals, as previously hypothesized by theorists. PMID:24077562

  1. Computation of linear acceleration through an internal model in the macaque cerebellum.

    PubMed

    Laurens, Jean; Meng, Hui; Angelaki, Dora E

    2013-11-01

    A combination of theory and behavioral findings support a role for internal models in the resolution of sensory ambiguities and sensorimotor processing. Although the cerebellum has been proposed as a candidate for implementation of internal models, concrete evidence from neural responses is lacking. Using unnatural motion stimuli, which induce incorrect self-motion perception and eye movements, we explored the neural correlates of an internal model that has been proposed to compensate for Einstein's equivalence principle and generate neural estimates of linear acceleration and gravity. We found that caudal cerebellar vermis Purkinje cells and cerebellar nuclei neurons selective for actual linear acceleration also encoded erroneous linear acceleration, as would be expected from the internal model hypothesis, even when no actual linear acceleration occurred. These findings provide strong evidence that the cerebellum might be involved in the implementation of internal models that mimic physical principles to interpret sensory signals, as previously hypothesized.

  2. Reconstruction of Linear and Non-Linear Continuous-Time System Models from Input/output Data Using the Kernel Invariance Algorithm

    NASA Astrophysics Data System (ADS)

    BILLINGS, S. A.; LI, L. M.

    2000-06-01

    A new kernel invariance algorithm (KIA) is introduced to determine both the significant model terms and estimate the unknown parameters in non-linear continuous-time differential equation models of unknown systems.

  3. Using Set Covering with Item Sampling to Analyze the Infeasibility of Linear Programming Test Assembly Models

    ERIC Educational Resources Information Center

    Huitzing, Hiddo A.

    2004-01-01

    This article shows how set covering with item sampling (SCIS) methods can be used in the analysis and preanalysis of linear programming models for test assembly (LPTA). LPTA models can construct tests, fulfilling a set of constraints set by the test assembler. Sometimes, no solution to the LPTA model exists. The model is then said to be…

  4. Bayesian generalized linear mixed modeling of Tuberculosis using informative priors

    PubMed Central

    Woldegerima, Woldegebriel Assefa

    2017-01-01

    TB is rated as one of the world’s deadliest diseases, and South Africa ranks 9th of the 22 countries hardest hit by TB. Although much research has been carried out on this subject, this paper goes further by incorporating past knowledge into the model, using a Bayesian approach with an informative prior. Bayesian approaches are becoming popular in data analysis, but most applications of Bayesian inference are limited to situations with non-informative priors, where there is no solid external information about the distribution of the parameter of interest. The main aim of this study is to profile people living with TB in South Africa. In this paper, identical regression models are fitted under the classical and Bayesian approaches, the latter with both non-informative and informative priors, using South Africa General Household Survey (GHS) data for the year 2014. For the Bayesian model with informative prior, GHS data for the years 2011 to 2013 are used to construct priors for the 2014 model. PMID:28257437

  5. Relevance of the Hierarchical Linear Model to TIMSS Data Analyses.

    ERIC Educational Resources Information Center

    Wang, Jianjun

    Multilevel international data have been released from the Third International Mathematics and Science Study (TIMSS), providing an opportunity to apply multilevel modeling techniques in educational research. In this paper, TIMSS factors are classified in fixed and random categories according to the project design. Classifying fixed and random…

  6. Bayesian generalized linear mixed modeling of Tuberculosis using informative priors.

    PubMed

    Ojo, Oluwatobi Blessing; Lougue, Siaka; Woldegerima, Woldegebriel Assefa

    2017-01-01

    TB is rated as one of the world's deadliest diseases, and South Africa ranks 9th of the 22 countries hardest hit by TB. Although much research has been carried out on this subject, this paper goes further by incorporating past knowledge into the model, using a Bayesian approach with an informative prior. Bayesian approaches are becoming popular in data analysis, but most applications of Bayesian inference are limited to situations with non-informative priors, where there is no solid external information about the distribution of the parameter of interest. The main aim of this study is to profile people living with TB in South Africa. In this paper, identical regression models are fitted under the classical and Bayesian approaches, the latter with both non-informative and informative priors, using South Africa General Household Survey (GHS) data for the year 2014. For the Bayesian model with informative prior, GHS data for the years 2011 to 2013 are used to construct priors for the 2014 model.
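The idea of building an informative prior from earlier survey waves can be illustrated with a conjugate Beta-Binomial sketch. All counts below are hypothetical, and the paper fits full generalized linear mixed models rather than this simple proportion update.

```python
# Illustrative Beta-Binomial version of "informative prior from past surveys":
# earlier survey waves set the prior, the latest wave updates it.

# Prior from pooled 2011-2013 waves (hypothetical counts)
prior_cases, prior_n = 180, 9000
alpha0, beta0 = 1 + prior_cases, 1 + (prior_n - prior_cases)

# 2014 wave (hypothetical counts)
cases_2014, n_2014 = 70, 3000

# Conjugate update: posterior is Beta(alpha0 + cases, beta0 + non-cases)
alpha1 = alpha0 + cases_2014
beta1 = beta0 + (n_2014 - cases_2014)

posterior_mean = alpha1 / (alpha1 + beta1)
print(round(posterior_mean, 4))   # prints 0.0209
```

With an informative prior, the 2014 estimate is shrunk toward the pooled 2011-2013 prevalence, which is precisely the stabilizing effect the paper exploits.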

  7. Linear Model to Assess the Scale's Validity of a Test

    ERIC Educational Resources Information Center

    Tristan, Agustin; Vidal, Rafael

    2007-01-01

    Wright and Stone had proposed three features to assess the quality of the distribution of the items difficulties in a test, on the so called "most probable response map": line, stack and gap. Once a line is accepted as a design model for a test, gaps and stacks are practically eliminated, producing an evidence of the "scale…

  8. Item Response Theory Using Hierarchical Generalized Linear Models

    ERIC Educational Resources Information Center

    Ravand, Hamdollah

    2015-01-01

    Multilevel models (MLMs) are flexible in that they can be employed to obtain item and person parameters, test for differential item functioning (DIF) and capture both local item and person dependence. Papers on the MLM analysis of item response data have focused mostly on theoretical issues where applications have been add-ons to simulation…

  9. Multivariate Linear Models of the Multitrait-Multimethod Matrix.

    ERIC Educational Resources Information Center

    Wothke, Werner

    Several multivariate statistical methodologies have been proposed to ensure objective and quantitative evaluation of the multitrait-multimethod matrix. The paper examines the performance of confirmatory factor analysis and covariance component models. It is shown, both empirically and formally, that confirmatory factor analysis is not a reliable…

  10. An R2 statistic for fixed effects in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Muller, Keith E; Wolfinger, Russell D; Qaqish, Bahjat F; Schabenberger, Oliver

    2008-12-20

    Statisticians most often use the linear mixed model to analyze Gaussian longitudinal data. The value and familiarity of the R² statistic in the linear univariate model naturally create great interest in extending it to the linear mixed model. We define and describe how to compute a model R² statistic for the linear mixed model by using only a single model. The proposed R² statistic measures multivariate association between the repeated outcomes and the fixed effects in the linear mixed model. The R² statistic arises as a one-to-one function of an appropriate F statistic for testing all fixed effects (except typically the intercept) in a full model. The statistic compares the full model with a null model with all fixed effects deleted (except typically the intercept) while retaining exactly the same covariance structure. Furthermore, the R² statistic leads immediately to a natural definition of a partial R² statistic. A mixed model in which ethnicity gives a very small p-value as a longitudinal predictor of blood pressure (BP) compellingly illustrates the value of the statistic. In sharp contrast to the extreme p-value, a very small R², a measure of statistical and scientific importance, indicates that ethnicity has an almost negligible association with the repeated BP outcomes for the study.

  11. Predicting Tests Ordered in Hospital Laboratories using Generalized Linear Modeling.

    PubMed

    Leaven, Laquanda T

    2016-01-01

    Laboratory services in healthcare systems play a vital role in inpatient care. Most hospital laboratories face the challenge of reducing cost while improving service quality. The author focuses on identifying test order patterns in the laboratory of a large urban hospital. The data collected from this facility consist of all tests ordered over a three-month time frame and contain test orders for approximately 17,500 patients. Poisson and negative binomial regression models are used to determine how well patient characteristics (patient length of stay and the medical units in which patients are placed) predict the number of tests ordered. The test order prediction model developed in this study will aid management and phlebotomists in the hospital laboratory in developing methods to meet test order demand. By implementing the recommendations of this study, hospital laboratories should see significant improvements in phlebotomist productivity and resource utilization, which could result in cost savings.
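A minimal sketch of the Poisson-regression part of this approach, fitted by Newton-Raphson with a log link. The data are synthetic and use only a length-of-stay covariate; the study's actual covariates and counts are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: lab tests ordered vs. patient length of stay (days).
# (The study also uses the medical unit as a covariate; omitted here.)
los = rng.uniform(1.0, 10.0, size=500)
true_beta = np.array([0.5, 0.25])          # illustrative intercept and LOS effect
X = np.column_stack([np.ones_like(los), los])
y = rng.poisson(np.exp(X @ true_beta)).astype(float)

# Poisson regression with log link, fitted by Newton-Raphson (IRLS):
# beta <- beta + (X^T diag(mu) X)^(-1) X^T (y - mu),  mu = exp(X beta)
beta = np.array([np.log(y.mean()), 0.0])   # standard GLM starting point
for _ in range(25):
    mu = np.exp(X @ beta)
    XtW = X.T * mu                         # X^T diag(mu)
    beta = beta + np.linalg.solve(XtW @ X, X.T @ (y - mu))

print(beta)   # close to true_beta for this synthetic sample
```

A negative binomial model, as the study also fits, would add a dispersion parameter to handle count variance exceeding the mean.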

  12. State Space Identification of Linear Deterministic Rainfall-Runoff Models

    NASA Astrophysics Data System (ADS)

    Ramos, José; Mallants, Dirk; Feyen, Jan

    1995-06-01

    Rainfall-runoff models of the black box type abound in the water resources literature (i.e., transfer function, autoregressive moving average (ARMA), ARMAX, state space, etc.). The corresponding system identification algorithms for such models are known to be numerically efficient and accurate, leading in most cases to good parsimonious representations of the rainfall-runoff process. Alternatively, every model in transfer function, ARMA, and ARMAX form has an equivalent state space representation. However, state space models do not necessarily have simple system identification algorithms, unless the system matrices are restricted to some canonical form. Furthermore, state space system identification algorithms that work with the rainfall/runoff data directly (i.e., covariance free), require initial conditions and are inherently iterative and nonlinear. In this paper we present a state space system identification theory which overcomes these limitations. One advantage of such a theory is that the corresponding algorithms are highly robust to additive noise in the data. They are referred to as "subspace algorithms" due to their ability to separate the signal subspace from the noise subspace. The main advantages of the subspace algorithms are the automatic structure identification (system order), geometrical insights (notions of angle between subspaces), and the fact that they rely on robust numerical procedures (singular value decomposition). In this paper, two algorithms are presented. The first one is a two-step procedure, where the impulse response (unit hydrograph ordinates for the single-input, single-output case) are computed from the input/output data by solving a constrained deconvolution problem. These impulse response ordinates are then used as inputs for identifying the system matrices by means of a Hankel-based realization algorithm. The second approach uses the data directly to identify the system matrices, bypassing the deconvolution step. The
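The Hankel-based realization step of the two-step approach can be sketched as follows: build Hankel matrices from the impulse response ordinates (unit hydrograph, in the rainfall-runoff setting), take an SVD to determine the model order, and recover (A, B, C). The toy SISO system below is an assumption for illustration, not the paper's algorithm verbatim.

```python
import numpy as np

# "True" second-order system used only to generate impulse response ordinates
A_true = np.array([[0.8, 0.2], [0.0, 0.5]])
B_true = np.array([[1.0], [1.0]])
C_true = np.array([[1.0, 0.0]])
h = [(C_true @ np.linalg.matrix_power(A_true, k) @ B_true).item() for k in range(20)]

# Hankel matrices of Markov parameters: H0[i, j] = h[i + j], H1 is the shift
m = 8
H0 = np.array([[h[i + j] for j in range(m)] for i in range(m)])
H1 = np.array([[h[i + j + 1] for j in range(m)] for i in range(m)])

U, s, Vt = np.linalg.svd(H0)
n = int(np.sum(s > 1e-8 * s[0]))   # model order = number of significant singular values
Sr = np.sqrt(s[:n])
Obs = U[:, :n] * Sr                # observability factor
Con = (Vt[:n, :].T * Sr).T         # controllability factor

A = np.linalg.pinv(Obs) @ H1 @ np.linalg.pinv(Con)   # shifted-Hankel relation
B = Con[:, :1]
C = Obs[:1, :]

# Check: the realized model reproduces the impulse response
h_hat = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(20)]
print(n)   # identified order
```

The singular-value threshold is where the automatic structure (order) identification mentioned in the abstract happens; noise in real data blurs this gap.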

  13. Battery Life Estimator Manual Linear Modeling and Simulation

    SciTech Connect

    Jon P. Christophersen; Ira Bloom; Ed Thomas; Vince Battaglia

    2009-08-01

    The Battery Life Estimator (BLE) Manual has been prepared to assist developers in their efforts to estimate the calendar life of advanced batteries for automotive applications. Testing requirements and procedures are defined by the various manuals previously published under the United States Advanced Battery Consortium (USABC). The purpose of this manual is to describe and standardize a method for estimating calendar life based on statistical models and degradation data acquired from typical USABC battery testing.

  14. Vibration Model Validation for Linear Collider Detector Platforms

    SciTech Connect

    Bertsche, Kirk; Amann, J.W.; Markiewicz, T.W.; Oriunno, M.; Weidemann, A.; White, G.; /SLAC

    2012-05-16

    The ILC and CLIC reference designs incorporate reinforced-concrete platforms underneath the detectors so that the two detectors can each be moved onto and off of the beamline in a Push-Pull configuration. These platforms could potentially amplify ground vibrations, which would reduce luminosity. In this paper we compare vibration models to experimental data on reinforced concrete structures, estimate the impact on luminosity, and summarize implications for the design of a reinforced concrete platform for the ILC or CLIC detectors.

  15. Model Checking Linear-Time Properties of Probabilistic Systems

    NASA Astrophysics Data System (ADS)

    Baier, Christel; Größer, Marcus; Ciesinski, Frank

    This chapter is about the verification of Markov decision processes (MDPs) which incorporate one of the fundamental models for reasoning about probabilistic and nondeterministic phenomena in reactive systems. MDPs have their roots in the field of operations research and are nowadays used in a wide variety of areas including verification, robotics, planning, controlling, reinforcement learning, economics and semantics of randomized systems. Furthermore, MDPs served as the basis for the introduction of probabilistic automata which are related to weighted automata. We describe the use of MDPs as an operational model for randomized systems, e.g., systems that employ randomized algorithms, multi-agent systems or systems with unreliable components or surroundings. In this context we outline the theory of verifying ω-regular properties of such operational models. As an integral part of this theory we use ω-automata, i.e., finite-state automata over finite alphabets that accept languages of infinite words. Additionally, basic concepts of important reduction techniques are sketched, namely partial order reduction of MDPs and quotient system reduction of the numerical problem that arises in the verification of MDPs. Furthermore we present several undecidability and decidability results for the controller synthesis problem for partially observable MDPs.
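The core numerical problem that arises in verifying MDPs, computing maximum reachability probabilities, can be sketched with plain value iteration. The model and probabilities below are illustrative, not from the chapter.

```python
# Maximum reachability probability in a tiny MDP via value iteration.

# transitions[state][action] = list of (probability, successor)
transitions = {
    0: {"risky": [(0.5, 3), (0.5, 2)],   # gamble: goal or trap
        "safe":  [(1.0, 1)]},            # detour through state 1
    1: {"go":    [(0.9, 3), (0.1, 2)]},
}
GOAL, TRAP = 3, 2

# p[s] = maximal probability of eventually reaching GOAL from s
p = {0: 0.0, 1: 0.0, GOAL: 1.0, TRAP: 0.0}
for _ in range(100):
    for s, acts in transitions.items():
        # Bellman update: best action maximizes expected successor value
        p[s] = max(sum(pr * p[t] for pr, t in succ) for succ in acts.values())

print(p[0], p[1])   # the optimal scheduler prefers the detour: 0.9 0.9
```

Resolving the nondeterminism by maximizing over actions is what distinguishes this from a plain Markov-chain computation; ω-regular properties reduce to such reachability problems on a product with an ω-automaton.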

  16. Structure of Vector Mesons in Holographic Model with Linear Confinement

    SciTech Connect

    Anatoly Radyushkin; Hovhannes Grigoryan

    2007-11-01

    We investigate wave functions and form factors of vector mesons in the holographic dual model of QCD with an oscillator-like infrared cutoff. We introduce wave functions conjugate to solutions of the 5D equation of motion and develop a formalism based on these wave functions, which are very similar to those of a quantum-mechanical oscillator. For the lowest bound state (the rho-meson), we show that all its elastic form factors can be built from the basic form factor, which, in this model, exhibits perfect vector meson dominance, i.e., is given by the rho-pole contribution alone. We calculate the electric radius of the rho-meson and find the value 0.655 fm, which is larger than in the case of the hard-wall cutoff. We calculate the coupling constant f_rho and find that the experimental value lies between the values given by the oscillator and hard-wall models.

  17. Linear system identification via backward-time observer models

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Phan, Minh Q.

    1992-01-01

    Presented here is an algorithm to compute the Markov parameters of a backward-time observer for a backward-time model from experimental input and output data. The backward-time observer Markov parameters are decomposed to obtain the backward-time system Markov parameters (backward-time pulse response samples) for the backward-time system identification. The identified backward-time system Markov parameters are used in the Eigensystem Realization Algorithm to identify a backward-time state-space model, which can be easily converted to the usual forward-time representation. If one reverses time in the model to be identified, what were damped true system modes become modes with negative damping, growing as the reversed time increases. On the other hand, the noise modes in the identification still maintain the property that they are stable. The shift from positive damping to negative damping of the true system modes allows one to distinguish these modes from noise modes. Experimental results are given to illustrate when and to what extent this concept works.
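    The mode-separation idea in this record can be checked numerically: for a discrete-time forward model x[k+1] = A x[k], the backward-time model propagates with A⁻¹, whose eigenvalues are the reciprocals of those of A, so damped true modes (inside the unit circle) become growing backward modes while stably identified noise modes remain stable. The matrix below is a hypothetical stand-in:

    ```python
    import numpy as np

    # Hypothetical damped two-mode system; reversing time uses A^{-1}, so
    # eigenvalue magnitudes below 1 map to their reciprocals above 1.
    A = np.array([[0.9, 0.2],
                  [0.0, 0.8]])

    fwd = np.linalg.eigvals(A)                  # |lambda| < 1: damped forward
    bwd = np.linalg.eigvals(np.linalg.inv(A))   # reciprocals: |lambda| > 1

    print(np.sort(np.abs(fwd)))  # [0.8 0.9]
    print(np.sort(np.abs(bwd)))  # ≈ [1.111 1.25]
    ```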

  18. A Linearized k-ɛ Model of Forest Canopies and Clearings

    NASA Astrophysics Data System (ADS)

    Segalini, Antonio; Nakamura, Tetsuya; Fukagata, Koji

    2016-12-01

    A linearized analysis of the Reynolds-averaged Navier-Stokes (RANS) equations is proposed where the k-ɛ turbulence model is used. The flow near the forest is obtained as the superposition of the undisturbed incoming boundary layer plus a velocity perturbation due to the forest presence, similar to the approach proposed by Belcher et al. (J Fluid Mech 488:369-398, 2003). The linearized model has been compared against several non-linear RANS simulations with many leaf-area index values and large-eddy simulations using two different values of leaf-area index. All the simulations have been performed for a homogeneous forest and for four different clearing configurations. Despite the model approximations, the mean velocity and the Reynolds stress overline{u'w'} have been reasonably reproduced by the first-order model, providing insight about how the clearing perturbs the boundary layer over forested areas. However, significant departures from the linear predictions are observed in the turbulent kinetic energy and velocity variances. A second-order correction, which partly accounts for some non-linearities, is therefore proposed to improve the estimate of the turbulent kinetic energy and velocity variances. The results suggest that only a region close to the canopy top is significantly affected by the forest drag and dominated by the non-linearities, while above three canopy heights from the ground only small effects are visible and both the linearized model and the simulations have the same trends there.

  19. An accurate halo model for fitting non-linear cosmological power spectra and baryonic feedback models

    NASA Astrophysics Data System (ADS)

    Mead, A. J.; Peacock, J. A.; Heymans, C.; Joudaki, S.; Heavens, A. F.

    2015-12-01

    We present an optimized variant of the halo model, designed to produce accurate matter power spectra well into the non-linear regime for a wide range of cosmological models. To do this, we introduce physically motivated free parameters into the halo-model formalism and fit these to data from high-resolution N-body simulations. For a variety of Λ cold dark matter (ΛCDM) and wCDM models, the halo-model power is accurate to ≃ 5 per cent for k ≤ 10h Mpc-1 and z ≤ 2. An advantage of our new halo model is that it can be adapted to account for the effects of baryonic feedback on the power spectrum. We demonstrate this by fitting the halo model to power spectra from the OWLS (OverWhelmingly Large Simulations) hydrodynamical simulation suite via parameters that govern halo internal structure. We are able to fit all feedback models investigated at the 5 per cent level using only two free parameters, and we place limits on the range of these halo parameters for feedback models investigated by the OWLS simulations. Accurate predictions to high k are vital for weak-lensing surveys, and these halo parameters could be considered nuisance parameters to marginalize over in future analyses to mitigate uncertainty regarding the details of feedback. Finally, we investigate how lensing observables predicted by our model compare to those from simulations and from HALOFIT for a range of k-cuts and feedback models and quantify the angular scales at which these effects become important. Code to calculate power spectra from the model presented in this paper can be found at https://github.com/alexander-mead/hmcode.

  20. Transferability of regional permafrost disturbance susceptibility modelling using generalized linear and generalized additive models

    NASA Astrophysics Data System (ADS)

    Rudy, Ashley C. A.; Lamoureux, Scott F.; Treitz, Paul; van Ewijk, Karin Y.

    2016-07-01

    To effectively assess and mitigate risk of permafrost disturbance, disturbance-prone areas can be predicted through the application of susceptibility models. In this study we developed regional susceptibility models for permafrost disturbances using a field disturbance inventory to test the transferability of the model to a broader region in the Canadian High Arctic. Resulting maps of susceptibility were then used to explore the effect of terrain variables on the occurrence of disturbances within this region. To account for a large range of landscape characteristics, the model was calibrated using two locations: Sabine Peninsula, Melville Island, NU, and Fosheim Peninsula, Ellesmere Island, NU. Spatial patterns of disturbance were predicted with a generalized linear model (GLM) and generalized additive model (GAM), each calibrated using disturbed and randomized undisturbed locations from both locations and GIS-derived terrain predictor variables including slope, potential incoming solar radiation, wetness index, topographic position index, elevation, and distance to water. Each model was validated for the Sabine and Fosheim Peninsulas using independent data sets while the transferability of the model to an independent site was assessed at Cape Bounty, Melville Island, NU. The regional GLM and GAM validated well for both calibration sites (Sabine and Fosheim) with the area under the receiver operating curves (AUROC) > 0.79. Both models were applied directly to Cape Bounty without calibration and validated equally with AUROC's of 0.76; however, each model predicted disturbed and undisturbed samples differently. Additionally, the sensitivity of the transferred model was assessed using data sets with different sample sizes. Results indicated that models based on larger sample sizes transferred more consistently and captured the variability within the terrain attributes in the respective study areas. Terrain attributes associated with the initiation of disturbances were…
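    The GLM-plus-AUROC workflow in this record can be sketched with synthetic data. Everything below is hypothetical: the "terrain predictors" are random stand-ins, and the logistic GLM is fit by plain gradient ascent rather than the iteratively reweighted least squares a statistics package would use:

    ```python
    import numpy as np

    # Fit a logistic GLM on disturbed (1) vs. undisturbed (0) points with
    # synthetic stand-ins for terrain predictors, then summarize
    # discrimination with the area under the ROC curve (AUROC).
    rng = np.random.default_rng(0)
    n = 400
    slope = rng.normal(size=n)
    wetness = rng.normal(size=n)
    true_logit = 1.5 * slope - 1.0 * wetness
    y = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

    X = np.column_stack([np.ones(n), slope, wetness])
    w = np.zeros(3)
    for _ in range(1000):                  # gradient ascent on the Bernoulli
        p = 1 / (1 + np.exp(-X @ w))       # log-likelihood
        w += 0.5 * X.T @ (y - p) / n

    def auroc(scores, labels):
        # probability that a random positive outranks a random negative
        pos, neg = scores[labels == 1], scores[labels == 0]
        return (pos[:, None] > neg[None, :]).mean()

    print(auroc(X @ w, y) > 0.7)  # discrimination well above chance
    ```

    Transferability in the study corresponds to computing this AUROC on points from a site the model was never calibrated on.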

  1. Isobio software: biological dose distribution and biological dose volume histogram from physical dose conversion using linear-quadratic-linear model

    PubMed Central

    Jaikuna, Tanwiwat; Khadsiri, Phatchareewan; Chawapun, Nisa; Saekho, Suwit

    2017-01-01

    Purpose: To develop an in-house software program able to calculate and generate the biological dose distribution and biological dose volume histogram by physical dose conversion using the linear-quadratic-linear (LQL) model. Material and methods: The Isobio software was developed using MATLAB version 2014b to calculate and generate biological dose distributions and biological dose volume histograms. The physical dose from each voxel in the treatment plan was extracted through the Computational Environment for Radiotherapy Research (CERR), and its accuracy was verified by the difference between the dose volume histogram from CERR and that from the treatment planning system (TPS). The equivalent dose in 2 Gy fractions (EQD2) was calculated from the biologically effective dose (BED) based on the LQL model. The software calculation and a manual calculation were compared to verify the EQD2, using paired t-test statistical analysis in IBM SPSS Statistics version 22 (64-bit). Results: Two- and three-dimensional biological dose distributions and biological dose volume histograms were displayed correctly by the Isobio software. Physical doses differed between CERR and the TPS by 3.33% in the high-risk clinical target volume (HR-CTV) determined by D90%, 0.56% in the bladder and 1.74% in the rectum determined by D2cc for Oncentra, and by less than 1% for Pinnacle. The differences in EQD2 between the software and manual calculations were not statistically significant (0.00%), with p-values of 0.820, 0.095, and 0.593 for external beam radiation therapy (EBRT) and 0.240, 0.320, and 0.849 for brachytherapy (BT) in the HR-CTV, bladder, and rectum, respectively. Conclusions: The Isobio software is a feasible tool for generating biological dose distributions and biological dose volume histograms for treatment plan evaluation in both EBRT and BT. PMID:28344603
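    The per-voxel conversion this record describes can be sketched with the plain linear-quadratic (LQ) formula; note the paper's LQL model additionally switches to a linear tail above a transition dose per fraction, which this simplification omits, and the parameter values are illustrative only:

    ```python
    # Physical dose -> EQD2 via the biologically effective dose (BED),
    # using the plain LQ formula (the LQL linear tail is omitted here).

    def eqd2(total_dose, n_fractions, alpha_beta):
        d = total_dose / n_fractions                # dose per fraction (Gy)
        bed = total_dose * (1 + d / alpha_beta)     # biologically effective dose
        return bed / (1 + 2 / alpha_beta)           # re-expressed in 2 Gy fractions

    # e.g. 70 Gy in 28 fractions of 2.5 Gy, alpha/beta = 10 Gy
    print(round(eqd2(70.0, 28, 10.0), 2))  # → 72.92
    ```

    Applying this function voxel by voxel to a physical dose grid yields the biological dose distribution the software visualizes.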

  2. Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.

    PubMed

    Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi

    2017-04-03

    We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one in the sense that under missing at random (MAR), our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean, and then move forward to a generalized partial linear model, allowing for nonparametric covariate effect by using the regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE) and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study.

  3. SOME STATISTICAL ISSUES RELATED TO MULTIPLE LINEAR REGRESSION MODELING OF BEACH BACTERIA CONCENTRATIONS

    EPA Science Inventory

    As a fast and effective technique, the multiple linear regression (MLR) method has been widely used in modeling and prediction of beach bacteria concentrations. Among previous works on this subject, however, several issues were insufficiently or inconsistently addressed. Those is...

  4. Huffman and linear scanning methods with statistical language models.

    PubMed

    Roark, Brian; Fried-Oken, Melanie; Gibbons, Chris

    2015-03-01

    Current scanning access methods for text generation in AAC devices are limited to relatively few options, most notably row/column variations within a matrix. We present Huffman scanning, a new method for applying statistical language models to binary-switch, static-grid typing AAC interfaces, and compare it to other scanning options under a variety of conditions. We present results for 16 adults without disabilities and one 36-year-old man with locked-in syndrome who presents with complex communication needs and uses AAC scanning devices for writing. Huffman scanning with a statistical language model yielded significant typing speedups for the 16 participants without disabilities versus any of the other methods tested, including two row/column scanning methods. A similar pattern of results was found with the individual with locked-in syndrome. Interestingly, faster typing speeds were obtained with Huffman scanning using a more leisurely scan rate than relatively fast individually calibrated scan rates. Overall, the results reported here demonstrate great promise for the usability of Huffman scanning as a faster alternative to row/column scanning.
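    The core idea behind Huffman scanning can be sketched as building a Huffman code from language-model probabilities, so that likely letters need the fewest binary switch activations. The letter probabilities below are hypothetical, not from the paper's language model:

    ```python
    import heapq

    # Build a prefix-free Huffman code from symbol probabilities.
    def huffman_code(probs):
        heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(sorted(probs.items()))]
        heapq.heapify(heap)
        nxt = len(heap)
        while len(heap) > 1:
            p1, _, c1 = heapq.heappop(heap)   # merge the two least likely
            p2, _, c2 = heapq.heappop(heap)   # subtrees, prefixing 0 / 1
            merged = {s: "0" + b for s, b in c1.items()}
            merged.update({s: "1" + b for s, b in c2.items()})
            heapq.heappush(heap, (p1 + p2, nxt, merged))
            nxt += 1
        return heap[0][2]

    code = huffman_code({"e": 0.5, "t": 0.25, "a": 0.15, "q": 0.1})
    print(len(code["e"]), len(code["q"]))  # → 1 3 (frequent letter, short code)
    ```

    In the scanning interface, each code bit corresponds to one highlight-or-not decision, so expected switch presses per letter track the entropy of the language model.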

  5. Short-term bulk energy storage system scheduling for load leveling in unit commitment: modeling, optimization, and sensitivity analysis.

    PubMed

    Hemmati, Reza; Saboori, Hedayat

    2016-05-01

    Energy storage systems (ESSs) have experienced very rapid growth in recent years and are expected to be a promising tool for improving power system reliability and economic efficiency. ESSs offer many potential benefits in various areas of electric power systems. One of the main benefits of an ESS, especially a bulk unit, lies in smoothing the load pattern by decreasing on-peak and increasing off-peak loads, known as load leveling. These devices require new methods and tools to model and optimize their effects in power system studies. In this respect, this paper models bulk ESSs based on several technical characteristics, introduces the proposed model into the thermal unit commitment (UC) problem, and analyzes it with respect to various sensitive parameters. The technical limitations of the thermal units and transmission network constraints are also considered in the model. The proposed model is a Mixed Integer Linear Programming (MILP) formulation which can be easily solved by powerful commercial solvers (for instance, CPLEX) and is appropriate for use in practical large-scale networks. The results of implementing the proposed model on a test system reveal that proper load leveling through optimal storage scheduling leads to considerable operation cost reduction with respect to the storage system characteristics.
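    The load-leveling effect the record describes can be illustrated without the paper's MILP: the toy greedy schedule below (all numbers hypothetical) charges a lossless storage unit below the average load and discharges above it, subject to power and energy limits:

    ```python
    # Toy greedy load leveling, not the paper's MILP formulation.
    load = [50, 45, 48, 70, 95, 100, 90, 60]   # MW over 8 hours
    p_max, e_max = 15.0, 30.0                  # power limit (MW), capacity (MWh)

    target = sum(load) / len(load)             # ideal flat profile
    soc, net = 0.0, []
    for l in load:
        if l < target:                          # off-peak hour: charge
            u = min(p_max, target - l, e_max - soc)
            soc += u
            net.append(l + u)
        else:                                   # on-peak hour: discharge
            u = min(p_max, l - target, soc)
            soc -= u
            net.append(l - u)

    print(max(load) - min(load), max(net) - min(net))  # peak-to-valley shrinks
    ```

    An actual MILP would co-optimize this schedule with unit commitment and network constraints rather than leveling greedily hour by hour.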

  6. Short-term bulk energy storage system scheduling for load leveling in unit commitment: modeling, optimization, and sensitivity analysis

    PubMed Central

    Hemmati, Reza; Saboori, Hedayat

    2016-01-01

    Energy storage systems (ESSs) have experienced very rapid growth in recent years and are expected to be a promising tool for improving power system reliability and economic efficiency. ESSs offer many potential benefits in various areas of electric power systems. One of the main benefits of an ESS, especially a bulk unit, lies in smoothing the load pattern by decreasing on-peak and increasing off-peak loads, known as load leveling. These devices require new methods and tools to model and optimize their effects in power system studies. In this respect, this paper models bulk ESSs based on several technical characteristics, introduces the proposed model into the thermal unit commitment (UC) problem, and analyzes it with respect to various sensitive parameters. The technical limitations of the thermal units and transmission network constraints are also considered in the model. The proposed model is a Mixed Integer Linear Programming (MILP) formulation which can be easily solved by powerful commercial solvers (for instance, CPLEX) and is appropriate for use in practical large-scale networks. The results of implementing the proposed model on a test system reveal that proper load leveling through optimal storage scheduling leads to considerable operation cost reduction with respect to the storage system characteristics. PMID:27222741

  7. Analysis of Covariance with Linear Regression Error Model on Antenna Control Unit Tracking

    DTIC Science & Technology

    2015-10-20

    412TW-PA-15238. Analysis of Covariance with Linear Regression Error Model on Antenna Control Unit Tracking. Daniel T. Laird. Dates covered: 20–23 October 2015. The report applies analysis of variance (ANOVA) to decide between the null and alternative hypotheses regarding a telemetry antenna control unit's (ACU) ability to track on C-band…

  8. A model of a linear synchronous motor based on distribution theory

    NASA Astrophysics Data System (ADS)

    Trapanese, Marco

    2012-04-01

    The fundamental idea of this paper is to use distribution theory to analyze linear machines in order to include both ideal and non-ideal features in the mathematical model. This paper shows how distribution theory can be used to establish a mathematical model able to describe both the ordinary working conditions of a Linear Synchronous Motor (LSM) and the role of the unavoidable irregularities and non-ideal features.

  9. Bi-Objective Modelling for Hazardous Materials Road-Rail Multimodal Routing Problem with Railway Schedule-Based Space-Time Constraints.

    PubMed

    Sun, Yan; Lang, Maoxiang; Wang, Danzhu

    2016-07-28

    The transportation of hazardous materials is always accompanied by considerable risk that impacts public and environmental security. As an efficient and reliable transportation organization, a multimodal service should participate in the transportation of hazardous materials. In this study, we focus on transporting hazardous materials through the multimodal service network and explore the hazardous materials multimodal routing problem from the operational level of network planning. To formulate this problem more practicably, minimizing the total generalized costs of transporting the hazardous materials and the social risk along the planned routes are set as the optimization objectives. Meanwhile, the following formulation characteristics are comprehensively modelled: (1) specific customer demands; (2) multiple hazardous material flows; (3) capacitated schedule-based rail service and uncapacitated time-flexible road service; and (4) an environmental risk constraint. A bi-objective mixed integer nonlinear programming model is first built to formulate the routing problem combining the characteristics above. Linear reformulations are then developed to linearize and improve the initial model so that it can be effectively solved by exact solution algorithms in standard mathematical programming software. By utilizing the normalized weighted sum method, we can generate the Pareto solutions to the bi-objective optimization problem for a specific case. Finally, a large-scale empirical case study from the Beijing-Tianjin-Hebei Region in China is presented to demonstrate the feasibility of the proposed methods in dealing with the practical problem. Various scenarios are also discussed in the case study.
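    The normalized weighted sum method mentioned in this record can be illustrated on a toy instance. The candidate routes, costs, and risks below are hypothetical; the point is only that scanning the weight traces out the Pareto-efficient routes while a dominated route never wins:

    ```python
    # Normalized weighted sum over two objectives (generalized cost, social
    # risk) for a toy set of candidate routes; D is dominated by B.
    routes = {"A": (100.0, 9.0), "B": (120.0, 5.0),
              "C": (150.0, 2.0), "D": (160.0, 6.0)}

    costs = [c for c, _ in routes.values()]
    risks = [r for _, r in routes.values()]

    def score(c, r, w):
        cn = (c - min(costs)) / (max(costs) - min(costs))  # normalize to [0, 1]
        rn = (r - min(risks)) / (max(risks) - min(risks))
        return w * cn + (1 - w) * rn

    pareto = {min(routes, key=lambda k: score(*routes[k], w))
              for w in [i / 10 for i in range(11)]}
    print(sorted(pareto))  # → ['A', 'B', 'C']; the dominated route never wins
    ```

    In the paper this scan is performed over the linearized routing model rather than an enumerated route set, but the weight-sweeping logic is the same.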

  10. Bi-Objective Modelling for Hazardous Materials Road–Rail Multimodal Routing Problem with Railway Schedule-Based Space–Time Constraints

    PubMed Central

    Sun, Yan; Lang, Maoxiang; Wang, Danzhu

    2016-01-01

    The transportation of hazardous materials is always accompanied by considerable risk that impacts public and environmental security. As an efficient and reliable transportation organization, a multimodal service should participate in the transportation of hazardous materials. In this study, we focus on transporting hazardous materials through the multimodal service network and explore the hazardous materials multimodal routing problem from the operational level of network planning. To formulate this problem more practicably, minimizing the total generalized costs of transporting the hazardous materials and the social risk along the planned routes are set as the optimization objectives. Meanwhile, the following formulation characteristics are comprehensively modelled: (1) specific customer demands; (2) multiple hazardous material flows; (3) capacitated schedule-based rail service and uncapacitated time-flexible road service; and (4) an environmental risk constraint. A bi-objective mixed integer nonlinear programming model is first built to formulate the routing problem combining the characteristics above. Linear reformulations are then developed to linearize and improve the initial model so that it can be effectively solved by exact solution algorithms in standard mathematical programming software. By utilizing the normalized weighted sum method, we can generate the Pareto solutions to the bi-objective optimization problem for a specific case. Finally, a large-scale empirical case study from the Beijing–Tianjin–Hebei Region in China is presented to demonstrate the feasibility of the proposed methods in dealing with the practical problem. Various scenarios are also discussed in the case study. PMID:27483294

  11. A new adaptive multiple modelling approach for non-linear and non-stationary systems

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Gong, Yu; Hong, Xia

    2016-07-01

    This paper proposes a novel adaptive multiple-modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models, all of which are linear. With data available in an online fashion, the performance of all candidate sub-models is monitored based on the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are left unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error based on a recent data window and apply a sum-to-one constraint to the combination parameters, leading to a closed-form solution, so that maximal computational efficiency can be achieved. In addition, at each time step, the model prediction is chosen from either the resultant multiple model or the best sub-model, whichever is better. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
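    The RLS adaptation at the heart of this scheme can be sketched for a single linear sub-model tracking streaming data; the system, forgetting factor, and noise level below are hypothetical:

    ```python
    import numpy as np

    # Recursive least squares (RLS) tracking y = w_true^T x from a stream.
    rng = np.random.default_rng(1)
    w_true = np.array([2.0, -1.0])

    lam = 0.99                       # forgetting factor
    w = np.zeros(2)                  # weight estimate
    P = 1000.0 * np.eye(2)           # inverse input-correlation estimate
    for _ in range(500):
        x = rng.normal(size=2)
        y = w_true @ x + 0.01 * rng.normal()   # streaming observation
        k = P @ x / (lam + x @ P @ x)          # gain vector
        w = w + k * (y - w @ x)                # correct by the prediction error
        P = (P - np.outer(k, x) @ P) / lam     # update inverse correlation

    print(np.round(w, 2))  # ≈ [ 2. -1.]
    ```

    The forgetting factor lam < 1 discounts old samples, which is what lets each sub-model follow a non-stationary system.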

  12. A Method for Generating Reduced-Order Linear Models of Multidimensional Supersonic Inlets

    NASA Technical Reports Server (NTRS)

    Chicatelli, Amy; Hartley, Tom T.

    1998-01-01

    Simulation of high speed propulsion systems may be divided into two categories, nonlinear and linear. The nonlinear simulations are usually based on multidimensional computational fluid dynamics (CFD) methodologies and tend to provide high resolution results that show the fine detail of the flow. Consequently, these simulations are large, numerically intensive, and run much slower than real-time. The linear simulations are usually based on large lumping techniques that are linearized about a steady-state operating condition. These simplistic models often run at or near real-time but do not always capture the detailed dynamics of the plant. Under a grant sponsored by the NASA Lewis Research Center, Cleveland, Ohio, a new method has been developed that can be used to generate improved linear models for control design from multidimensional steady-state CFD results. This CFD-based linear modeling technique provides a small perturbation model that can be used for control applications and real-time simulations. It is important to note the utility of the modeling procedure; all that is needed to obtain a linear model of the propulsion system is the geometry and steady-state operating conditions from a multidimensional CFD simulation or experiment. This research represents a beginning step in establishing a bridge between the controls discipline and the CFD discipline so that the control engineer is able to effectively use multidimensional CFD results in control system design and analysis.

  13. A componential model of human interaction with graphs: 1. Linear regression modeling

    NASA Technical Reports Server (NTRS)

    Gillan, Douglas J.; Lewis, Robert

    1994-01-01

    Task analyses served as the basis for developing the Mixed Arithmetic-Perceptual (MA-P) model, which proposes (1) that people interacting with common graphs to answer common questions apply a set of component processes: searching for indicators, encoding the value of indicators, performing arithmetic operations on the values, making spatial comparisons among indicators, and responding; and (2) that the type of graph and the user's task determine the combination and order of the components applied (i.e., the processing steps). Two experiments investigated the prediction that response time will be linearly related to the number of processing steps according to the MA-P model. Subjects used line graphs, scatter plots, and stacked bar graphs to answer comparison questions and questions requiring arithmetic calculations. A one-parameter version of the model (with equal weights for all components) and a two-parameter version (with different weights for arithmetic and nonarithmetic processes) accounted for 76%-85% of individual subjects' variance in response time and 61%-68% of the variance taken across all subjects. The discussion addresses possible modifications in the MA-P model, alternative models, and design implications from the MA-P model.

  14. Predicting musically induced emotions from physiological inputs: linear and neural network models

    PubMed Central

    Russo, Frank A.; Vempala, Naresh N.; Sandstrom, Gillian M.

    2013-01-01

    Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer emotion induced in the listener? The current study explores this question by attempting to predict judgments of “felt” emotion from physiological responses alone using linear and neural network models. We measured five channels of peripheral physiology from 20 participants—heart rate (HR), respiration, galvanic skin response, and activity in corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see if a linear relationship between the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a non-linear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. Performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The non-linear model derived from the neural network was more accurate than linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the non-linear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion. PMID:23964250

  15. A Linear Programming Model to Optimize Various Objective Functions of a Foundation Type State Support Program.

    ERIC Educational Resources Information Center

    Matzke, Orville R.

    The purpose of this study was to formulate a linear programming model to simulate a foundation type support program and to apply this model to a state support program for the public elementary and secondary school districts in the State of Iowa. The model was successful in producing optimal solutions to five objective functions proposed for…

  16. A Hierarchical Linear Model with Factor Analysis Structure at Level 2

    ERIC Educational Resources Information Center

    Miyazaki, Yasuo; Frank, Kenneth A.

    2006-01-01

    In this article the authors develop a model that employs a factor analysis structure at Level 2 of a two-level hierarchical linear model (HLM). The model (HLM2F) imposes a structure on a deficient rank Level 2 covariance matrix [tau], and facilitates estimation of a relatively large [tau] matrix. Maximum likelihood estimators are derived via the…

  17. Comparing Regression Coefficients between Nested Linear Models for Clustered Data with Generalized Estimating Equations

    ERIC Educational Resources Information Center

    Yan, Jun; Aseltine, Robert H., Jr.; Harel, Ofer

    2013-01-01

    Comparing regression coefficients between models when one model is nested within another is of great practical interest when two explanations of a given phenomenon are specified as linear models. The statistical problem is whether the coefficients associated with a given set of covariates change significantly when other covariates are added into…

  18. Mechanistic model of radiation-induced cancer after fractionated radiotherapy using the linear-quadratic formula

    SciTech Connect

    Schneider, Uwe

    2009-04-15

    A simple mechanistic model for predicting cancer induction after fractionated radiotherapy is developed. The model is based upon the linear-quadratic model. The inductions of carcinomas and sarcomas are modeled separately. The linear-quadratic model of cell kill is applied to normal tissues which are unintentionally irradiated during a cancer treatment with radiotherapy. Tumor induction is modeled such that each transformation process results in a tumor cell. The microscopic transformation parameter was chosen such that in the limit of low dose and acute exposure, the parameters of the linear-no-threshold model for tumor induction were approached. The differential equations describing carcinoma and sarcoma inductions can be solved analytically. Cancer induction in this model is a function of treatment dose, the cell kill parameters ({alpha},{beta}), the tumor induction variable ({mu}), and the repopulation parameter ({xi}). Carcinoma induction shows a bell shaped behavior as long as cell repopulation is small. Assuming large cell repopulation rates, a plateaulike function is approached. In contrast, sarcoma induction is negligible for low doses and increases with increasing dose up to a constant value. The proposed model describes carcinoma and sarcoma inductions after fractionated radiotherapy as an analytical function of four parameters. In the limit of low dose and for an instant irradiation it reproduces the results of the linear-no-threshold model. The obtained dose-response curves for cancer induction can be implemented with other models such as the organ-equivalent dose model to predict second cancers after radiotherapy.
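    The bell-shaped carcinoma induction described in this record can be checked with a toy calculation: induced yield is modeled as transformations (proportional to dose) times linear-quadratic cell survival. The parameter values below are hypothetical, and repopulation is ignored:

    ```python
    import math

    # Toy carcinoma-induction shape: transformed AND surviving cells.
    alpha, beta, mu = 0.1, 0.03, 1e-5   # per Gy, per Gy^2, transformations/Gy

    def induction(dose, n_fractions=1):
        d = dose / n_fractions
        survival = math.exp(-n_fractions * (alpha * d + beta * d * d))
        return mu * dose * survival

    doses = [2, 4, 6, 8, 10, 12]
    risk = [induction(D) for D in doses]
    peak = doses[risk.index(max(risk))]
    print(peak)  # → 4: risk rises, peaks, then falls as cell kill dominates
    ```

    Fractionation (n_fractions > 1) reduces the quadratic kill term per fraction, which in the full model shifts the curve toward the plateau-like behavior described for large repopulation.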

  19. A Graphical Method for Assessing the Identification of Linear Structural Equation Models

    ERIC Educational Resources Information Center

    Eusebi, Paolo

    2008-01-01

    A graphical method is presented for assessing the state of identifiability of the parameters in a linear structural equation model based on the associated directed graph. We do not restrict attention to recursive models. In the recent literature, methods based on graphical models have been presented as a useful tool for assessing the state of…

  20. Mathematical Modelling and the Learning Trajectory: Tools to Support the Teaching of Linear Algebra

    ERIC Educational Resources Information Center

    Cárcamo Bahamonde, Andrea Dorila; Fortuny Aymemí, Josep Maria; Gómez i Urgellés, Joan Vicenç

    2017-01-01

    In this article we present a didactic proposal for teaching linear algebra based on two compatible theoretical models: emergent models and mathematical modelling. This proposal begins with a problematic situation related to the creation and use of secure passwords, which leads students toward the construction of the concepts of spanning set and…

  1. An inexact dynamic optimization model for municipal solid waste management in association with greenhouse gas emission control.

    PubMed

    Lu, H W; Huang, G H; He, L; Zeng, G M

    2009-01-01

    Municipal solid waste (MSW) should be properly disposed of in order to help protect environmental quality and human health, as well as to preserve natural resources. During MSW disposal processes, a large amount of greenhouse gas (GHG) is emitted, leading to a significant impact on climate change. In this study, an inexact dynamic optimization model (IDOM) is developed for MSW-management systems under uncertainty. It builds upon conventional mixed-integer linear programming (MILP) approaches and integrates GHG components into the modeling framework. Compared with existing models, IDOM can not only handle the complex tradeoff between system-cost minimization and GHG-emission mitigation, but also provide optimal allocation strategies under various emission-control standards. A case study is then provided to demonstrate the applicability of the developed model. The results indicate that desired waste-flow patterns with a minimized system cost and GHG-emission amount can be obtained. More importantly, the IDOM solution is associated with over 5.5 million tonnes of TEC reduction, which has significant economic implications for real-world implementation. Therefore, the proposed model could be regarded as a useful tool for realizing comprehensive MSW management with regard to mitigating climate-change impacts.
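    IDOM itself is a full MILP under uncertainty; as a minimal sketch of the cost-versus-GHG tradeoff it mediates, the toy below enumerates integer waste allocations between two invented facilities under an emission cap. All numbers are illustrative, and a problem of realistic size would use an MILP solver rather than enumeration.

```python
# Toy stand-in for the MILP core: allocate 10 units of waste between a
# landfill and an incinerator, minimising cost subject to a GHG cap.
# Costs, emission factors, and capacities are invented for illustration.
DEMAND = 10                                       # units of waste to dispose of
COST = {"landfill": 3.0, "incinerator": 5.0}      # cost per unit
GHG = {"landfill": 1.2, "incinerator": 0.4}       # tCO2e per unit
CAP = {"landfill": 8, "incinerator": 6}           # facility capacities
GHG_CAP = 9.0                                     # emission-control standard

best = None
for x in range(DEMAND + 1):                       # integer units to landfill
    y = DEMAND - x                                # remainder to incinerator
    if x > CAP["landfill"] or y > CAP["incinerator"]:
        continue                                  # capacity constraints
    emitted = GHG["landfill"] * x + GHG["incinerator"] * y
    if emitted > GHG_CAP:
        continue                                  # emission constraint
    total = COST["landfill"] * x + COST["incinerator"] * y
    if best is None or total < best[0]:
        best = (total, x, y, emitted)

cost, x, y, ghg = best
print(f"landfill={x}, incinerator={y}, cost={cost:.1f}, GHG={ghg:.1f}")
```

Tightening `GHG_CAP` pushes waste toward the cleaner but costlier incinerator, which is exactly the tradeoff the abstract describes.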

  2. Some comparisons of complexity in dictionary-based and linear computational models.

    PubMed

    Gnecco, Giorgio; Kůrková, Věra; Sanguineti, Marcello

    2011-03-01

    Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks, one may also adjust the parameters of the functions which are being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best approximation operators) are not satisfied by neural networks. Moreover, optimization of parameters in neural networks becomes more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, allowing accuracy of approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators, the traditional linear ones and so-called variable-basis types, which include neural networks, radial, and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator.

  3. Identifying optimal regional solid waste management strategies through an inexact integer programming model containing infinite objectives and constraints.

    PubMed

    He, Li; Huang, Guo-He; Zeng, Guang-Ming; Lu, Hong-Wei

    2009-01-01

    The previous inexact mixed-integer linear programming (IMILP) method can only tackle problems with coefficients of the objective function and constraints being crisp intervals, while the existing inexact mixed-integer semi-infinite programming (IMISIP) method can only deal with single-objective programming problems as it merely allows the number of constraints to be infinite. This study proposes an inexact mixed-integer bi-infinite programming (IMIBIP) method by incorporating the concept of functional intervals into the programming framework. Different from the existing methods, the IMIBIP can tackle the inexact programming problems that contain both infinite objectives and constraints. The developed method is applied to capacity planning of waste management systems under a variety of uncertainties. Four scenarios are considered for comparing the solutions of IMIBIP with those of IMILP. The results indicate that reasonable solutions can be generated by the IMIBIP method. Compared with IMILP, the system cost from IMIBIP would be relatively high since the fluctuating market factors are considered; however, the IMILP solutions are associated with a raised system reliability level and a reduced constraint violation risk level.

  4. A model of the extent and distribution of woody linear features in rural Great Britain.

    PubMed

    Scholefield, Paul; Morton, Dan; Rowland, Clare; Henrys, Peter; Howard, David; Norton, Lisa

    2016-12-01

    Hedges and lines of trees (woody linear features) are important boundaries that connect and enclose habitats, buffer the effects of land management, and enhance biodiversity in increasingly impoverished landscapes. Despite their acknowledged importance in the wider countryside, they are usually not considered in models of landscape function due to their linear nature and the difficulties of acquiring relevant data about their character, extent, and location. We present a model which uses national datasets to describe the distribution of woody linear features along boundaries in Great Britain. The method can be applied for other boundary types and in other locations around the world across a range of spatial scales where different types of linear feature can be separated using characteristics such as height or width. Satellite-derived Land Cover Map 2007 (LCM2007) provided the spatial framework for locating linear features and was used to screen out areas unsuitable for their occurrence, that is, offshore, urban, and forest areas. Similarly, Ordnance Survey Land-Form PANORAMA®, a digital terrain model, was used to screen out areas where they do not occur. The presence of woody linear features on boundaries was modelled using attributes from a canopy height dataset obtained by subtracting a digital terrain map (DTM) from a digital surface model (DSM). The performance of the model was evaluated against existing woody linear feature data in Countryside Survey across a range of scales. The results indicate that, despite some underestimation, this simple approach may provide valuable information on the extents and locations of woody linear features in the countryside at both local and national scales.
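    The core screening step above, canopy height as the difference between a digital surface model and a digital terrain model, thresholded into a "woody" class, can be sketched on a toy grid. The grids and the height range below are invented for illustration, not taken from the paper.

```python
# Canopy height = DSM - DTM; a boundary cell is flagged as a candidate
# woody linear feature when the height falls in a hedge/tree-line range.
# All values here are made up for illustration.
dsm = [[12.0, 9.5, 8.0],
       [ 8.4, 7.9, 6.0],
       [ 5.0, 5.2, 5.1]]
dtm = [[10.0, 9.0, 8.0],
       [ 7.0, 6.0, 6.0],
       [ 5.0, 5.0, 5.0]]
MIN_HEIGHT, MAX_HEIGHT = 0.5, 6.0   # assumed plausible canopy range (m)

canopy = [[round(s - t, 2) for s, t in zip(srow, trow)]
          for srow, trow in zip(dsm, dtm)]
woody = [[MIN_HEIGHT <= h <= MAX_HEIGHT for h in row] for row in canopy]
print(canopy)
print(woody)
```

In the real model this mask would only be evaluated along boundary lines that survive the LCM2007 land-cover screening.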

  5. A Comparison of Linear versus Non-Linear Models of Aversive Self-Awareness, Dissociation, and Non-Suicidal Self-Injury among Young Adults

    ERIC Educational Resources Information Center

    Armey, Michael F.; Crowther, Janis H.

    2008-01-01

    Research has identified a significant increase in both the incidence and prevalence of non-suicidal self-injury (NSSI). The present study sought to test both linear and non-linear cusp catastrophe models by using aversive self-awareness, which was operationalized as a composite of aversive self-relevant affect and cognitions, and dissociation as…

  6. Agent based reasoning for the non-linear stochastic models of long-range memory

    NASA Astrophysics Data System (ADS)

    Kononovicius, A.; Gontis, V.

    2012-02-01

    We extend Kirman's model by introducing a variable event time scale. The proposed flexible time scale is equivalent to the variable trading activity observed in financial markets. The stochastic version of the extended Kirman agent-based model is compared to the non-linear stochastic models of long-range memory in financial markets. The agent-based model, providing a matching macroscopic description, serves as a microscopic reasoning for the earlier proposed stochastic model exhibiting power-law statistics.
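    The underlying Kirman herding mechanism is simple enough to sketch directly. The version below uses a fixed event-time scale (the paper's contribution is precisely to make that scale variable), and the parameter values are illustrative, not fitted to market data.

```python
import random

def simulate_kirman(N=100, steps=20000, eps=0.002, delta=0.01, seed=1):
    """Minimal Kirman herding model: at each event a randomly chosen
    agent switches opinion with probability
    eps + delta * (agents in the opposite state) / (N - 1),
    i.e. idiosyncratic switching plus recruitment. Returns the
    trajectory of the fraction of agents in state A."""
    rng = random.Random(seed)
    n = N // 2                          # agents currently in state A
    traj = []
    for _ in range(steps):
        if rng.random() < n / N:        # chosen agent is in state A
            if rng.random() < eps + delta * (N - n) / (N - 1):
                n -= 1                  # recruited to state B
        else:                           # chosen agent is in state B
            if rng.random() < eps + delta * n / (N - 1):
                n += 1
        traj.append(n / N)
    return traj

traj = simulate_kirman()
print(min(traj), max(traj))             # the fraction wanders: herding
```

With 2·eps/delta < 1, as here, the stationary distribution is bimodal, so the population spends long stretches near one consensus before switching, which is the long-memory behaviour the stochastic models reproduce macroscopically.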

  7. Iterated non-linear model predictive control based on tubes and contractive constraints.

    PubMed

    Murillo, M; Sánchez, G; Giovanini, L

    2016-05-01

    This paper presents a predictive control algorithm for non-linear systems based on successive linearizations of the non-linear dynamic around a given trajectory. A linear time varying model is obtained and the non-convex constrained optimization problem is transformed into a sequence of locally convex ones. The robustness of the proposed algorithm is addressed adding a convex contractive constraint. To account for linearization errors and to obtain more accurate results an inner iteration loop is added to the algorithm. A simple methodology to obtain an outer bounding-tube for state trajectories is also presented. The convergence of the iterative process and the stability of the closed-loop system are analyzed. The simulation results show the effectiveness of the proposed algorithm in controlling a quadcopter type unmanned aerial vehicle.

  8. Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.

    PubMed

    Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique

    2015-05-01

    The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model.
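    The flexibility the abstract describes comes from the extra dispersion parameter in the hyper-Poisson pmf. A minimal sketch of the Bardwell-Crow form is below; the λ and γ values are arbitrary, and the truncated series stands in for the confluent hypergeometric normaliser.

```python
import math

def hyper_poisson_pmf(x, lam, gamma, terms=100):
    """Hyper-Poisson pmf (Bardwell-Crow form):
    P(X = x) = lam**x / (poch(gamma, x) * Phi), where poch is the rising
    factorial and Phi = sum_k lam**k / poch(gamma, k) is the confluent
    hypergeometric normaliser 1F1(1; gamma; lam), truncated here.
    gamma > 1 gives overdispersion, gamma < 1 underdispersion, and
    gamma = 1 recovers the ordinary Poisson distribution."""
    def poch(g, n):
        out = 1.0
        for i in range(n):
            out *= g + i
        return out
    phi = sum(lam**k / poch(gamma, k) for k in range(terms))
    return lam**x / (poch(gamma, x) * phi)

# Sanity check: gamma = 1 reduces to Poisson(lam)
lam = 2.5
p_hp = hyper_poisson_pmf(3, lam, 1.0)
p_pois = math.exp(-lam) * lam**3 / math.factorial(3)
print(p_hp, p_pois)
```

The GLM formulation in the paper goes further by letting γ depend on covariates, so dispersion becomes observation-specific.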

  9. Challenges and models in supporting logistics system design for dedicated-biomass-based bioenergy industry.

    PubMed

    Zhu, Xiaoyan; Li, Xueping; Yao, Qingzhu; Chen, Yuerong

    2011-01-01

    This paper analyzed the uniqueness and challenges in designing the logistics system for the dedicated biomass-to-bioenergy industry, which differs from other industries due to the unique features of dedicated biomass (e.g., switchgrass), including its low bulk density, restrictions on harvesting season and frequency, content variation with time and circumambient conditions, weather effects, scattered distribution over a wide geographical area, and so on. To design it, this paper proposed a mixed-integer linear programming model. It covered the stages from planting and harvesting switchgrass to delivery to a biorefinery, included residue handling, and concentrated on integrating strategic decisions on the supply chain design with tactical decisions on the annual operation schedules. Numerical examples verified the model and demonstrated its use in practice. This paper showed that the operations of the logistics system were significantly different for harvesting and non-harvesting seasons, and that under a well-designed biomass logistics system, mass production with a steady and sufficient supply of biomass can increase the unit profit of bioenergy. The analytical model and practical methodology proposed in this paper will help realize commercial production in the biomass-to-bioenergy industry.

  10. Phase Structure of the Non-Linear σ-MODEL with Oscillator Representation Method

    NASA Astrophysics Data System (ADS)

    Mishchenko, Yuriy; Ji, Chueng-R.

    2004-03-01

    The non-linear σ-model plays an important role in many areas of theoretical physics. Initially intended as a simple model for chiral symmetry breaking, this model exhibits such nontrivial effects as spontaneous symmetry breaking and asymptotic freedom, and is sometimes considered an effective field theory for QCD. Besides, the non-linear σ-model can be related to the strong-coupling limit of O(N) ϕ4-theory, the continuous limit of an N-dimensional system of quantum spins, a fermion gas and many others, and plays an important role in understanding how symmetries are realized in quantum field theories. Because of this variety of connections, theoretical study of the critical properties of the σ-model is interesting and important. The oscillator representation method is a theoretical tool for studying the phase structure of simple QFT models. It is formulated in the framework of canonical quantization and is based on the view of unitary non-equivalent representations as possible phases of a QFT model. Successful application of the ORM to ϕ4 and ϕ6 theories in 1+1 and 2+1 dimensions motivates its study in more complicated models such as the non-linear σ-model. In our talk we introduce the ORM and establish its connections with the variational approach in QFT. We then present results of the ORM in the non-linear σ-model and try to interpret them from the variational point of view. Finally, we point out possible directions for further research in this area.

  11. Modeling and control of tubular solid-oxide fuel cell systems. I: Physical models and linear model reduction

    NASA Astrophysics Data System (ADS)

    Colclasure, Andrew M.; Sanandaji, Borhan M.; Vincent, Tyrone L.; Kee, Robert J.

    This paper describes the development of a transient model of an anode-supported, tubular solid-oxide fuel cell (SOFC). Physically based conservation equations predict the coupled effects of fuel channel flow, porous-media transport, heat transfer, thermal chemistry, and electrochemistry on cell performance. The model outputs include spatial and temporal profiles of chemical composition, temperature, velocity, and current density. Mathematically the model forms a system of differential-algebraic equations (DAEs), which is solved computationally. The model is designed with process-control applications in mind, although it can certainly be applied more widely. Although the physical model is computationally efficient, it is still too costly for incorporation directly into real-time process control. Therefore, system-identification techniques are used to develop reduced-order, locally linear models that can be incorporated directly into advanced control methodologies, such as model predictive control (MPC). The paper illustrates the physical model and the reduced-order linear state-space model with examples.

  12. Internal Physical Features of a Land Surface Model Employing a Tangent Linear Model

    NASA Technical Reports Server (NTRS)

    Yang, Runhua; Cohn, Stephen E.; daSilva, Arlindo; Joiner, Joanna; Houser, Paul R.

    1997-01-01

    The Earth's land surface, including its biomass, is an integral part of the Earth's weather and climate system. Land surface heterogeneity, such as the type and amount of vegetative covering, has a profound effect on local weather variability and therefore on regional variations of the global climate. Surface conditions affect local weather and climate through a number of mechanisms. First, they determine the re-distribution of the net radiative energy received at the surface, through the atmosphere, from the sun. A certain fraction of this energy increases the surface ground temperature, another warms the near-surface atmosphere, and the rest evaporates surface water, which in turn creates clouds and causes precipitation. Second, they determine how much rainfall and snowmelt can be stored in the soil and how much instead runs off into waterways. Finally, surface conditions influence the near-surface concentration and distribution of greenhouse gases such as carbon dioxide. The processes through which these mechanisms interact with the atmosphere can be modeled mathematically, to within some degree of uncertainty, on the basis of underlying physical principles. Such a land surface model provides predictive capability for surface variables including ground temperature, surface humidity, and soil moisture and temperature. This information is important for agriculture and industry, as well as for addressing fundamental scientific questions concerning global and local climate change. In this study we apply a methodology known as tangent linear modeling to help us understand more deeply the behavior of the Mosaic land surface model, a model that has been developed over the past several years at NASA/GSFC. This methodology allows us to examine, directly and quantitatively, the dependence of prediction errors in land surface variables upon different vegetation conditions. The work also highlights the importance of accurate soil moisture information. Although surface

  13. OPLS statistical model versus linear regression to assess sonographic predictors of stroke prognosis

    PubMed Central

    Vajargah, Kianoush Fathi; Sadeghi-Bazargani, Homayoun; Mehdizadeh-Esfanjani, Robab; Savadi-Oskouei, Daryoush; Farhoudi, Mehdi

    2012-01-01

    The objective of the present study was to assess the comparable applicability of orthogonal projections to latent structures (OPLS) statistical model vs traditional linear regression in order to investigate the role of transcranial Doppler (TCD) sonography in predicting ischemic stroke prognosis. The study was conducted on 116 ischemic stroke patients admitted to a specialty neurology ward. The Unified Neurological Stroke Scale was used once for clinical evaluation on the first week of admission and again six months later. All data were first analyzed using simple linear regression and then considered for multivariate analysis using PLS/OPLS models through the SIMCA P+12 statistical software package. The linear regression analysis results used for the identification of TCD predictors of stroke prognosis were confirmed through the OPLS modeling technique. Moreover, in comparison to linear regression, the OPLS model appeared to have higher sensitivity in detecting the predictors of ischemic stroke prognosis and detected several more predictors. Applying the OPLS model made it possible to use both single TCD measures/indicators and arbitrarily dichotomized measures of TCD single vessel involvement as well as the overall TCD result. In conclusion, the authors recommend PLS/OPLS methods as complementary rather than alternative to the available classical regression models such as linear regression. PMID:22973104

  15. The Quasi-Linear Equilibration of a Thermally Maintained, Stochastically Excited Jet in a Quasigeostrophic Model.

    NASA Astrophysics Data System (ADS)

    Delsole, Timothy; Farrell, Brian F.

    1996-07-01

    A theory for quasigeostrophic turbulence in baroclinic jets is examined in which interaction between the mean flow and the perturbations is explicitly modeled by the nonnormal operator obtained by linearization about the mean flow, while the eddy-eddy interactions are parameterized by a combination of stochastic excitation and effective dissipation. The quasi-linear equilibrium is the stationary state in dynamical balance between the mean flow forcing and eddy forcing produced by the linear stochastic model. The turbulence model depends on two parameters that specify the magnitude of the effective dissipation and stochastic excitation. The quasi-linear model produces heat fluxes (upgradient), momentum fluxes, and mean zonal winds, which are remarkably consistent with those produced by the nonlinear model over a wide range of parameter values despite energy and enstrophy imbalances associated with the parameterization for eddy-eddy interactions. The quasi-linear equilibrium also appears consistent with most aspects of the energy cycle, with baroclinic adjustment (though the adjustment is accomplished in a fundamentally different manner), and with the negative correlation between transient eddy transport and other transports observed in the atmosphere. The model overestimates the equilibrium eddy kinetic energy in cases in which it achieves correct eddy fluxes and energy balance. Understanding the role of forcing orthogonal functions rationalizes this behavior and provides the basis for addressing the role of transient eddies in climate.

  16. Two models of inventory control with supplier selection in case of multiple sourcing: a case of Isfahan Steel Company

    NASA Astrophysics Data System (ADS)

    Rabieh, Masood; Soukhakian, Mohammad Ali; Mosleh Shirazi, Ali Naghi

    2016-03-01

    Selecting the best suppliers is crucial for a company's success. Since competition is a determining factor nowadays, reducing cost and increasing quality of products are two key criteria for appropriate supplier selection. In this study, the inventories of the agglomeration plant of Isfahan Steel Company were first categorized through VED and ABC methods. Then models to supply two important kinds of raw materials (inventories) were developed, considering the following items: (1) the optimal consumption composite of the materials, (2) the total cost of logistics, (3) each supplier's terms and conditions, (4) the buyer's limitations and (5) the consumption behavior of the buyers. Among the various models developed and tested using the company's actual data from the three previous years, two new innovative models of the mixed-integer non-linear programming type were found to be most suitable. The results of solving the two models with LINGO software (based on the company's data in this particular case) were equal. Comparing the results of the new models to the actual performance of the company revealed 10.9 and 7.1 % reductions in the total procurement costs of the company in two consecutive years.

  17. Reliable design of a closed loop supply chain network under uncertainty: An interval fuzzy possibilistic chance-constrained model

    NASA Astrophysics Data System (ADS)

    Vahdani, Behnam; Tavakkoli-Moghaddam, Reza; Jolai, Fariborz; Baboli, Arman

    2013-06-01

    This article seeks to offer a systematic approach to establishing a reliable network of facilities in closed loop supply chains (CLSCs) under uncertainties. Facilities located by this approach concurrently satisfy both traditional objective functions and reliability considerations in CLSC network designs. To address this problem, a novel mathematical model is developed that integrates the network design decisions in both forward and reverse supply chain networks. The model also utilizes an effective reliability approach to find a robust network design. In order to make the results of this article more realistic, a CLSC for a case study in the iron and steel industry has been explored. The considered CLSC is multi-echelon, multi-facility, multi-product and multi-supplier. Furthermore, multiple facilities exist in the reverse logistics network, leading to high complexity. Since the collection centres play an important role in this network, the reliability concept of these facilities is taken into consideration. To solve the proposed model, a novel interactive hybrid solution methodology is developed by combining a number of efficient solution approaches from the recent literature. The proposed solution methodology is a bi-objective interval fuzzy possibilistic chance-constrained mixed integer linear programming (BOIFPCCMILP) approach. Finally, computational experiments are provided to demonstrate the applicability and suitability of the proposed model in a supply chain environment and to help decision makers facilitate their analyses.

  18. Linearly first- and second-order, unconditionally energy stable schemes for the phase field crystal model

    NASA Astrophysics Data System (ADS)

    Yang, Xiaofeng; Han, Daozhi

    2017-02-01

    In this paper, we develop a series of linear, unconditionally energy stable numerical schemes for solving the classical phase field crystal model. The temporal discretizations are based on the first order Euler method, the second order backward differentiation formulas (BDF2) and the second order Crank-Nicolson method, respectively. The schemes lead to linear elliptic equations to be solved at each time step, and the induced linear systems are symmetric positive definite. We prove that all three schemes are unconditionally energy stable rigorously. Various classical numerical experiments in 2D and 3D are performed to validate the accuracy and efficiency of the proposed schemes.
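    Unconditional energy stability of the kind claimed above can be illustrated, in a much simpler setting than the paper's phase field crystal schemes, on the scalar gradient flow u' = -λu with energy E(u) = u²/2: the implicit (backward) Euler step, which like the paper's schemes requires only a linear solve, decreases the energy for any step size, while forward Euler blows up once the step exceeds 2/λ. The numbers below are arbitrary.

```python
# Generic stability illustration (NOT the paper's PDE schemes):
# u' = -lam*u, energy E(u) = u**2/2.
lam, dt, u0, nsteps = 50.0, 0.1, 1.0, 20      # dt far beyond 2/lam = 0.04

u_exp, u_imp = u0, u0
for _ in range(nsteps):
    u_exp = u_exp * (1 - lam * dt)            # forward Euler: unstable here
    u_imp = u_imp / (1 + lam * dt)            # backward Euler: a "linear solve"

print(abs(u_exp), abs(u_imp))                 # explicit diverges, implicit decays
```

For the phase field crystal model the same idea applies with the Laplacian in place of λ, which is why each time step reduces to a symmetric positive definite linear system.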

  19. Comparison of computationally frugal (linear) to expensive (nonlinear) methods for analyzing inverse modeling results

    NASA Astrophysics Data System (ADS)

    Mehl, S.; Foglia, L.; Hill, M. C.

    2009-12-01

    Methods for analyzing inverse modeling results can be separated into two categories: (1) linear methods, such as Cook's D, which are computationally frugal and do not require additional model runs, and (2) nonlinear methods, such as cross validation, which are computationally more expensive because they generally require additional model runs. Depending on the type of nonlinear analysis performed, the additional runs can range from tens to thousands. For example, cross-validation studies require the model to be recalibrated (the regression repeated) for each observation or set of observations analyzed. This can be computationally prohibitive if many observations or sets of observations are investigated and/or the model has many estimated parameters. A tradeoff exists between linear and nonlinear methods, with linear methods being computationally efficient but their results questionable when models are nonlinear. The tradeoffs between computational efficiency and accuracy are investigated by comparing results from several linear measures of observation importance (for example, Cook's D and DFBETAs) to their nonlinear counterparts based on cross validation. Examples from groundwater models of the Maggia Valley in southern Switzerland are used to make comparisons. The models include representation of the stream-aquifer interaction and range from simple to complex, with the associated modified Beale's measure ranging from mildly to highly nonlinear, respectively. These results demonstrate the applicability and limitations of linear methods over a range of model complexity and linearity and can be used to better understand when the additional computational burden of nonlinear methods may be necessary.
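    The frugal-versus-expensive comparison above can be sketched on a simple linear regression: Cook's D comes from one fit via residuals and leverages, while the "expensive" counterpart refits the model with each observation left out. Data are invented, with a deliberate outlier placed last.

```python
# Cook's D (one fit) vs leave-one-out refitting (n extra fits) for
# simple linear regression on made-up data with an outlier at the end.
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 10.0]
ys = [1.1, 2.0, 2.9, 4.2, 5.1, 20.0]
n, p = len(xs), 2                               # p = number of parameters

def fit(xv, yv):
    """Least-squares intercept and slope."""
    m = len(xv)
    xb, yb = sum(xv) / m, sum(yv) / m
    sxx = sum((x - xb) ** 2 for x in xv)
    b = sum((x - xb) * (y - yb) for x, y in zip(xv, yv)) / sxx
    return yb - b * xb, b

a, b = fit(xs, ys)
res = [y - (a + b * x) for x, y in zip(xs, ys)]
xb = sum(xs) / n
sxx = sum((x - xb) ** 2 for x in xs)
s2 = sum(e * e for e in res) / (n - p)          # residual variance
hat = [1 / n + (x - xb) ** 2 / sxx for x in xs]  # leverages
cooks = [e * e / (p * s2) * h / (1 - h) ** 2 for e, h in zip(res, hat)]

# Expensive counterpart: slope change when each observation is dropped.
dbeta = [abs(b - fit(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])[1])
         for i in range(n)]

print(cooks.index(max(cooks)), dbeta.index(max(dbeta)))  # both flag the outlier
```

Here the two measures agree; the paper's point is that for nonlinear models this agreement is not guaranteed, so the cheap linear diagnostic must be checked against the refitting approach.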

  20. Linear Separability in Categorisation and Inference: A Test of the Johnson-Laird Falsity Model

    DTIC Science & Technology

    2014-01-01

    Johnson-Laird suggests that difficulties in problem solving can be explained by the mental models theory. This study tests linear separability…

  1. Vibration Stabilization of a Mechanical Model of a X-Band Linear Collider Final Focus Magnet

    SciTech Connect

    Frisch, Josef; Chang, Allison; Decker, Valentin; Doyle, Eric; Eriksson, Leif; Hendrickson, Linda; Himel, Thomas; Markiewicz, Thomas; Partridge, Richard; Seryi, Andrei; /SLAC

    2006-09-28

    The small beam sizes at the interaction point of an X-band linear collider require mechanical stabilization of the final focus magnets at the nanometer level. While passive systems provide adequate performance at many potential sites, active mechanical stabilization is useful if the natural or cultural ground vibration is higher than expected. A mechanical model of a room temperature linear collider final focus magnet has been constructed and actively stabilized with an accelerometer-based system.

  2. Non-linear modeling using fuzzy principal component regression for Vidyaranyapuram sewage treatment plant, Mysore - India.

    PubMed

    Sulthana, Ayesha; Latha, K C; Imran, Mohammad; Rathan, Ramya; Sridhar, R; Balasubramanian, S

    2014-01-01

    Fuzzy principal component regression (FPCR) is proposed to model the non-linear processes of a sewage treatment plant (STP) data matrix. The dimension reduction of the voluminous data was done by principal component analysis (PCA). The PCA score values were partitioned by fuzzy c-means (FCM) clustering, and a Takagi-Sugeno-Kang (TSK) fuzzy model was built based on the FCM functions. The FPCR approach was used to predict the reduction in chemical oxygen demand (COD) and biological oxygen demand (BOD) of treated wastewater of the Vidyaranyapuram STP, based on the relations modeled between fuzzy-partitioned PCA scores and the target output. The designed FPCR model showed the ability to capture the behavior of non-linear processes of the STP. The predicted values of reduction in COD and BOD were analyzed by performing linear regression analysis. The predicted values for COD and BOD reduction showed positive correlation with the observed data.
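    The fuzzy partitioning step in the FPCR pipeline (PCA scores → FCM clusters → TSK rules) can be sketched as one fuzzy c-means update. The 1-D "score" values, initial centres, and fuzzifier m below are invented for illustration; a real run would iterate to convergence on multi-dimensional scores.

```python
# One fuzzy-c-means iteration on invented 1-D "PCA score" values:
# memberships from distances to the centres, then centres from memberships.
scores = [0.1, 0.2, 0.25, 2.0, 2.2, 2.4]
centres = [0.0, 2.0]
m = 2.0                                   # the usual FCM fuzzifier

def memberships(scores, centres, m):
    U = []
    for x in scores:
        d = [abs(x - c) + 1e-12 for c in centres]   # avoid division by zero
        row = [1.0 / sum((d[i] / d[j]) ** (2 / (m - 1))
                         for j in range(len(centres)))
               for i in range(len(centres))]
        U.append(row)
    return U

U = memberships(scores, centres, m)
new_centres = [
    sum(U[k][i] ** m * scores[k] for k in range(len(scores))) /
    sum(U[k][i] ** m for k in range(len(scores)))
    for i in range(len(centres))
]
print([round(r[0], 3) for r in U])        # membership in the first cluster
print([round(c, 3) for c in new_centres])
```

Each TSK rule in the full model would then attach a local linear regression to one of these fuzzy partitions.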

  3. Hierarchical linear model: thinking outside the traditional repeated-measures analysis-of-variance box.

    PubMed

    Lininger, Monica; Spybrook, Jessaca; Cheatham, Christopher C

    2015-04-01

    Longitudinal designs are common in the field of athletic training. For example, in the Journal of Athletic Training from 2005 through 2010, authors of 52 of the 218 original research articles used longitudinal designs. In 50 of the 52 studies, a repeated-measures analysis of variance was used to analyze the data. A possible alternative to this approach is the hierarchical linear model, which has been readily accepted in other medical fields. In this short report, we demonstrate the use of the hierarchical linear model for analyzing data from a longitudinal study in athletic training. We discuss the relevant hypotheses, model assumptions, analysis procedures, and output from the HLM 7.0 software. We also examine the advantages and disadvantages of using the hierarchical linear model with repeated measures and repeated-measures analysis of variance for longitudinal data.

  4. Partially linear models with autoregressive scale-mixtures of normal errors: A Bayesian approach

    NASA Astrophysics Data System (ADS)

    Ferreira, Guillermo; Castro, Mauricio; Lachos, Victor H.

    2012-10-01

    Normality and independence of the error terms are typical assumptions for partially linear models. However, such assumptions may be unrealistic in many fields, such as economics, finance and biostatistics. In this paper, we develop a Bayesian analysis for the partially linear model with first-order autoregressive errors belonging to the class of scale mixtures of normal (SMN) distributions. The proposed model provides a useful generalization of symmetrical linear regression models with independent errors, since the error distribution covers both correlated and heavy-tailed cases, and it has a convenient hierarchical representation that allows a straightforward implementation of a Markov chain Monte Carlo (MCMC) scheme. In order to examine the robustness of this distribution against outlying and influential observations, we present a Bayesian case-deletion influence diagnostic based on the Kullback-Leibler (K-L) divergence. The proposed methodology is applied to the Cuprum Company monthly returns.

  5. Optical laboratory solution and error model simulation of a linear time-varying finite element equation

    NASA Technical Reports Server (NTRS)

    Taylor, B. K.; Casasent, D. P.

    1989-01-01

    The use of simplified error models to accurately simulate and evaluate the performance of an optical linear-algebra processor is described. The optical architecture used to perform banded matrix-vector products is reviewed, along with a linear dynamic finite-element case study. The laboratory hardware and ac-modulation technique used are presented. The individual processor error-source models and their simulator implementation are detailed. Several significant simplifications are introduced to ease the computational requirements and complexity of the simulations. The error models are verified with a laboratory implementation of the processor, and are used to evaluate its potential performance.

  6. Formulation and validation of high-order linearized models of helicopter flight mechanics

    NASA Technical Reports Server (NTRS)

    Kim, Frederick D.; Celi, Roberto; Tischler, Mark B.

    1990-01-01

    A high-order linearized model of helicopter flight dynamics is extracted from a nonlinear time domain simulation. The model has 29 states that describe the fuselage rigid body degrees of freedom, the flap and lag dynamics in a nonrotating coordinate system, the inflow dynamics, the delayed entry of the horizontal tail into the main rotor wake, and, approximately, the blade torsion dynamics. The nonlinear simulation is obtained by extensively modifying the GENHEL computer program. The results indicate that the agreement between the linearized and the nonlinear model is good for small perturbations, and deteriorates for large amplitude maneuvers.

  7. Aboveground biomass and carbon stocks modelling using non-linear regression model

    NASA Astrophysics Data System (ADS)

    Ain Mohd Zaki, Nurul; Abd Latif, Zulkiflee; Nazip Suratman, Mohd; Zainee Zainal, Mohd

    2016-06-01

    Aboveground biomass (AGB) is an important source of uncertainty in carbon estimation for tropical forests because of the wide variation in species biodiversity and the complex structure of tropical rain forests. The tropical rainforest is among the most extensive forest types in the world, with a vast diversity of trees in layered canopies. Integrating optical sensor data with empirical models is a common way to assess AGB, with regression providing the linkage between remotely sensed data and biophysical parameters of the forest. This paper therefore examines the accuracy of a non-linear regression equation with a quadratic function for estimating the AGB and carbon stocks of the tropical lowland Dipterocarp forest of the Ayer Hitam forest reserve, Selangor. The main aim of this investigation is to obtain the relationship between field-plot biophysical parameters and the remotely sensed data using a non-linear regression model. The results showed a good relationship between crown projection area (CPA) and carbon stocks (CS), with a Pearson correlation coefficient of r = 0.671 (p < 0.01). The study concluded that integrating Worldview-3 imagery with a LiDAR-based canopy height model (CHM) raster is useful for quantifying the AGB and carbon stocks over larger sample areas of lowland Dipterocarp forest.

  8. A new framework for modeling decisions about changing information: The Piecewise Linear Ballistic Accumulator model

    PubMed Central

    Heathcote, Andrew

    2016-01-01

    In the real world, decision making processes must be able to integrate non-stationary information that changes systematically while the decision is in progress. Although theories of decision making have traditionally been applied to paradigms with stationary information, non-stationary stimuli are now of increasing theoretical interest. We use a random-dot motion paradigm along with cognitive modeling to investigate how the decision process is updated when a stimulus changes. Participants viewed a cloud of moving dots, where the motion switched directions midway through some trials, and were asked to determine the direction of motion. Behavioral results revealed a strong delay effect: after presentation of the initial motion direction there is a substantial time delay before the changed motion information is integrated into the decision process. To further investigate the underlying changes in the decision process, we developed a Piecewise Linear Ballistic Accumulator model (PLBA). The PLBA is efficient to simulate, enabling it to be fit to participant choice and response-time distribution data in a hierarchical modeling framework using a non-parametric approximate Bayesian algorithm. Consistent with behavioral results, PLBA fits confirmed the presence of a long delay between presentation and integration of new stimulus information, but did not support increased response caution in reaction to the change. We also found the decision process was not veridical, as symmetric stimulus change had an asymmetric effect on the rate of evidence accumulation. Thus, the perceptual decision process was slow to react to, and underestimated, new contrary motion information. PMID:26760448
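The core mechanism can be illustrated with a minimal deterministic sketch of piecewise linear accumulation (not the authors' fitted PLBA, in which start points and rates also vary randomly across trials): each accumulator rises linearly at one rate, switches to a second rate at the change time, and the first to reach threshold determines choice and response time. All numbers below are illustrative.

```python
import numpy as np

def crossing_time(start, v1, v2, t_switch, b):
    """Time for a piecewise linear path x(t) = start + v1*t (t <= t_switch),
    continuing with slope v2 afterwards, to reach threshold b (inf if never)."""
    if v1 > 0 and (b - start) / v1 <= t_switch:
        return (b - start) / v1
    x_switch = start + v1 * t_switch
    if v2 > 0:
        return t_switch + (b - x_switch) / v2
    return np.inf

def plba_trial(b=1.0, t_switch=0.5, rates=((1.2, -0.5), (0.3, 2.0))):
    """Race two accumulators; return (winning response index, response time).
    Here the rates reverse at t_switch, mimicking a motion-direction switch."""
    times = [crossing_time(0.0, v1, v2, t_switch, b) for v1, v2 in rates]
    k = int(np.argmin(times))
    return k, times[k]
```

With these illustrative rates, the initially favored accumulator stalls after the switch and the other wins, so the model's response reflects the post-change motion direction.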

  9. Fast and local non-linear evolution of steep wave-groups on deep water: A comparison of approximate models to fully non-linear simulations

    SciTech Connect

    Adcock, T. A. A.; Taylor, P. H.

    2016-01-15

    The non-linear Schrödinger equation and its higher order extensions are routinely used for analysis of extreme ocean waves. This paper compares the evolution of individual wave-packets modelled using non-linear Schrödinger type equations with packets modelled using fully non-linear potential flow models. The modified non-linear Schrödinger equation accurately models the relatively large scale non-linear changes to the shape of wave-groups, with a dramatic contraction of the group along the mean propagation direction and a corresponding extension of the width of the wave-crests. In addition, as extreme waves form, there is a local non-linear contraction of the wave-group around the crest which leads to a localised broadening of the wave spectrum which the bandwidth-limited non-linear Schrödinger equations struggle to capture. This limitation occurs for waves of moderate steepness and a narrow underlying spectrum.

  10. Does Imaging Technology Cause Cancer? Debunking the Linear No-Threshold Model of Radiation Carcinogenesis.

    PubMed

    Siegel, Jeffry A; Welsh, James S

    2016-04-01

    In the past several years, there has been a great deal of attention from the popular media focusing on the alleged carcinogenicity of low-dose radiation exposures received by patients undergoing medical imaging studies such as X-rays, computed tomography scans, and nuclear medicine scintigraphy. The media has based its reporting on the plethora of articles published in the scientific literature that claim that there is "no safe dose" of ionizing radiation, while essentially ignoring all the literature demonstrating the opposite point of view. But this reported "scientific" literature in turn bases its estimates of cancer induction on the linear no-threshold hypothesis of radiation carcinogenesis. The use of the linear no-threshold model has yielded hundreds of articles, all of which predict a definite carcinogenic effect of any dose of radiation, regardless of how small. Therefore, hospitals and professional societies have begun campaigns and policies aiming to reduce the use of certain medical imaging studies based on perceived risk:benefit ratio assumptions. However, as they are essentially all based on the linear no-threshold model of radiation carcinogenesis, the risk:benefit ratio models used to calculate the hazards of radiological imaging studies may be grossly inaccurate if the linear no-threshold hypothesis is wrong. Here, we review the myriad inadequacies of the linear no-threshold model and cast doubt on the various studies based on this overly simplistic model.
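The disagreement the review describes can be made concrete with two toy excess-risk functions. The slope and threshold values below are purely illustrative, not epidemiological estimates; the point is only that the two models diverge exactly in the low-dose regime relevant to medical imaging.

```python
def excess_risk_lnt(dose_mSv, slope=5e-5):
    """Linear no-threshold: predicted excess risk is proportional to dose,
    with no dose considered safe. Slope is illustrative only."""
    return slope * dose_mSv

def excess_risk_threshold(dose_mSv, threshold=100.0, slope=5e-5):
    """Threshold model: zero predicted excess risk below the threshold,
    linear above it. Threshold value is illustrative only."""
    return 0.0 if dose_mSv <= threshold else slope * (dose_mSv - threshold)

# For a ~10 mSv imaging study, LNT predicts a small but nonzero excess
# risk, while the threshold model predicts none. This low-dose divergence
# is what the debate in the abstract is about.
```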

  11. Non-linear spacecraft component parameters identification based on experimental results and finite element modelling

    NASA Astrophysics Data System (ADS)

    Vismara, S. O.; Ricci, S.; Bellini, M.; Trittoni, L.

    2016-06-01

    The objective of the present paper is to describe a procedure to identify and model the non-linear behaviour of structural elements. The procedure herein applied can be divided into two main steps: the system identification and the finite element model updating. The application of the restoring force surface method as a strategy to characterize and identify localized non-linearities has been investigated. This method, which works in the time domain, has been chosen because it has `built-in' characterization capabilities, it allows a direct non-parametric identification of non-linear single-degree-of-freedom systems and it can easily deal with sine-sweep excitations. Two different application examples are reported. At first, a numerical test case has been carried out to investigate the modelling techniques in the case of non-linear behaviour based on the presence of a free-play in the model. The second example concerns the flap of the Intermediate eXperimental Vehicle that successfully completed its 100-min mission on 11 February 2015. The flap was developed under the responsibility of Thales Alenia Space Italia, the prime contractor, which provided the experimental data needed to accomplish the investigation. The procedure here presented has been applied to the results of modal testing performed on the article. Once the non-linear parameters were identified, they were used to update the finite element model in order to prove its capability of predicting the flap behaviour for different load levels.

  12. Augmenting Visual Analysis in Single-Case Research with Hierarchical Linear Modeling

    ERIC Educational Resources Information Center

    Davis, Dawn H.; Gagne, Phill; Fredrick, Laura D.; Alberto, Paul A.; Waugh, Rebecca E.; Haardorfer, Regine

    2013-01-01

    The purpose of this article is to demonstrate how hierarchical linear modeling (HLM) can be used to enhance visual analysis of single-case research (SCR) designs. First, the authors demonstrated the use of growth modeling via HLM to augment visual analysis of a sophisticated single-case study. Data were used from a delayed multiple baseline…

  13. Regression Is a Univariate General Linear Model Subsuming Other Parametric Methods as Special Cases.

    ERIC Educational Resources Information Center

    Vidal, Sherry

    Although the concept of the general linear model (GLM) has existed since the 1960s, other univariate analyses such as the t-test and the analysis of variance models have remained popular. The GLM produces an equation that minimizes the mean differences of independent variables as they are related to a dependent variable. From a computer printout…

  14. Avoiding Boundary Estimates in Hierarchical Linear Models through Weakly Informative Priors

    ERIC Educational Resources Information Center

    Chung, Yeojin; Rabe-Hesketh, Sophia; Gelman, Andrew; Dorie, Vincent; Liu, Jinchen

    2012-01-01

    Hierarchical or multilevel linear models are widely used for longitudinal or cross-sectional data on students nested in classes and schools, and are particularly important for estimating treatment effects in cluster-randomized trials, multi-site trials, and meta-analyses. The models can allow for variation in treatment effects, as well as…

  15. Effects on Predictive Ability of the Linear versus Location Models in Discriminant Analysis.

    ERIC Educational Resources Information Center

    Steele, Maryann E.

    The Mahalanobis distance model was compared with the linear discriminant function model and found to provide very similar results, even when a number of the variables were binary. A group of college freshmen were categorized into two groups: 116 "leavers," students who did not return for the second year, and 269 "returners."…

  16. A Second-Order Conditionally Linear Mixed Effects Model with Observed and Latent Variable Covariates

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.; Kohli, Nidhi; Silverman, Rebecca D.; Speece, Deborah L.

    2012-01-01

    A conditionally linear mixed effects model is an appropriate framework for investigating nonlinear change in a continuous latent variable that is repeatedly measured over time. The efficacy of the model is that it allows parameters that enter the specified nonlinear time-response function to be stochastic, whereas those parameters that enter in a…

  17. INTRODUCTION TO A COMBINED MULTIPLE LINEAR REGRESSION AND ARMA MODELING APPROACH FOR BEACH BACTERIA PREDICTION

    EPA Science Inventory

    Due to the complexity of the processes contributing to beach bacteria concentrations, many researchers rely on statistical modeling, among which multiple linear regression (MLR) modeling is most widely used. Despite its ease of use and interpretation, there may be time dependence...

  18. The Use of Linear Models for Determining School Workload and Activity Level.

    ERIC Educational Resources Information Center

    Vicino, Frank L.

    This paper outlines the design and use of two linear models as decision-making tools in a school district. The problem to be solved was the allocation of resources for both clerical and custodial personnel. A solution was desired that could be quantified and documented and objectively serve the needs of the district. A clerical support model was…

  19. Missing Data Treatments at the Second Level of Hierarchical Linear Models

    ERIC Educational Resources Information Center

    St. Clair, Suzanne W.

    2011-01-01

    The current study evaluated the performance of traditional versus modern MDTs in the estimation of fixed effects and variance components for data missing at the second level of a hierarchical linear model (HLM) across 24 different study conditions. Variables manipulated in the analysis included (a) number of Level-2 variables with missing…

  20. USING LINEAR AND POLYNOMIAL MODELS TO EXAMINE THE ENVIRONMENTAL STABILITY OF VIRUSES

    EPA Science Inventory

    The article presents the development of model equations for describing the fate of viral infectivity in environmental samples. Most of the models were based upon the use of a two-step linear regression approach. The first step employs regression of log base 10 transformed viral t...

  1. A Hierarchical Linear Model for Estimating Gender-Based Earnings Differentials.

    ERIC Educational Resources Information Center

    Haberfield, Yitchak; Semyonov, Moshe; Addi, Audrey

    1998-01-01

    Estimates of gender earnings inequality in data from 116,431 Jewish workers were compared using a hierarchical linear model (HLM) and ordinary least squares model. The HLM allows estimation of the extent to which earnings inequality depends on occupational characteristics. (SK)

  2. Semiparametric Analysis of Heterogeneous Data Using Varying-Scale Generalized Linear Models.

    PubMed

    Xie, Minge; Simpson, Douglas G; Carroll, Raymond J

    2008-01-01

    This article describes a class of heteroscedastic generalized linear regression models in which a subset of the regression parameters are rescaled nonparametrically, and develops efficient semiparametric inferences for the parametric components of the models. Such models provide a means to adapt for heterogeneity in the data due to varying exposures, varying levels of aggregation, and so on. The class of models considered includes generalized partially linear models and nonparametrically scaled link function models as special cases. We present an algorithm to estimate the scale function nonparametrically, and obtain asymptotic distribution theory for regression parameter estimates. In particular, we establish that the asymptotic covariance of the semiparametric estimator for the parametric part of the model achieves the semiparametric lower bound. We also describe a bootstrap-based goodness-of-scale test. We illustrate the methodology with simulations, published data, and data from collaborative research on ultrasound safety.

  3. Optimization Routine for Generating Medical Kits for Spaceflight Using the Integrated Medical Model

    NASA Technical Reports Server (NTRS)

    Graham, Kimberli; Myers, Jerry; Goodenow, Deb

    2017-01-01

    The Integrated Medical Model (IMM) is a MATLAB model that provides probabilistic assessment of the medical risk associated with human spaceflight missions. Different simulations or profiles can be run in which input conditions regarding both mission characteristics and crew characteristics may vary. For each simulation, the IMM records the total medical events that occur and “treats” each event with resources drawn from import scripts. IMM outputs include Total Medical Events (TME), Crew Health Index (CHI), probability of Evacuation (pEVAC), and probability of Loss of Crew Life (pLOCL). The Crew Health Index is determined by the amount of quality time lost (QTL). Previously, an optimization code was implemented in order to efficiently generate medical kits. The kits were optimized to have the greatest benefit possible, given a mass and/or volume constraint. A 6-crew, 14-day lunar mission was chosen for the simulation and run through the IMM for 100,000 trials. A built-in MATLAB solver for mixed-integer linear programming was used for the optimization routine. Kits were generated in 10% increments ranging from 10%-100% of the benefit constraints. Conditions where mass alone was minimized, volume alone was minimized, and where mass and volume were minimized jointly were tested.
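The kit-selection problem described above has the structure of a small mixed-integer program: a binary include/exclude decision per resource, maximizing benefit subject to a mass budget. A sketch of the same formulation using SciPy's MILP solver (the abstract's solver is MATLAB's built-in one, and the benefit scores and masses below are made up):

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

benefit = np.array([10.0, 6.0, 5.0])   # hypothetical per-item benefit scores
mass = np.array([4.0, 3.0, 2.0])       # hypothetical item masses (kg)
mass_budget = 5.0

# Binary knapsack as a MILP: maximize benefit subject to the mass budget.
# milp() minimizes, so the objective is the negated benefit vector.
res = milp(
    c=-benefit,
    constraints=LinearConstraint(mass[np.newaxis, :], -np.inf, mass_budget),
    integrality=np.ones_like(benefit),   # every decision variable is 0/1
    bounds=Bounds(0, 1),
)
chosen = np.round(res.x).astype(int)     # which items make the kit
```

Here items 2 and 3 (total mass 5, benefit 11) beat item 1 alone (benefit 10), which is exactly the kind of trade-off the kit optimizer resolves at each benefit increment.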

  4. Aggregation of LoD 1 building models as an optimization problem

    NASA Astrophysics Data System (ADS)

    Guercke, R.; Götzelmann, T.; Brenner, C.; Sester, M.

    3D city models offered by digital map providers typically consist of several thousands or even millions of individual buildings. Those buildings are usually generated in an automated fashion from high resolution cadastral and remote sensing data and can be very detailed. However, such a high degree of detail is not desirable in every application. One way to remove complexity is to aggregate individual buildings, simplify the ground plan and assign an appropriate average building height. This task is computationally complex because it includes the combinatorial optimization problem of determining which subset of the original set of buildings should best be aggregated to meet the demands of an application. In this article, we introduce approaches to express different aspects of the aggregation of LoD 1 building models in the form of Mixed Integer Programming (MIP) problems. The advantage of this approach is that for linear (and some quadratic) MIP problems, sophisticated software exists to find exact solutions (global optima) with reasonable effort. We also propose two different heuristic approaches based on the region growing strategy and evaluate their potential for optimization by comparing their performance to a MIP-based approach.
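A minimal sketch of the region-growing idea mentioned above, under assumed inputs (per-building footprint areas and an adjacency graph). The real heuristics operate on cadastral ground plans with richer criteria, but the greedy merge-while-under-limit logic is the same idea:

```python
def aggregate(areas, adjacency, max_area):
    """Greedy region growing: repeatedly merge an adjacent building into the
    current region while the aggregate footprint stays within max_area.
    areas: {building_id: footprint area}; adjacency: {building_id: set of neighbours}.
    Returns a list of aggregated regions (sets of building ids)."""
    unassigned = set(areas)
    regions = []
    while unassigned:
        seed = min(unassigned)               # deterministic seed choice
        region, total = {seed}, areas[seed]
        unassigned.discard(seed)
        grew = True
        while grew:
            grew = False
            frontier = {n for b in region
                        for n in adjacency.get(b, ())} & unassigned
            for n in sorted(frontier):
                if total + areas[n] <= max_area:
                    region.add(n)
                    total += areas[n]
                    unassigned.discard(n)
                    grew = True
        regions.append(region)
    return regions

# Four buildings in a row (1-2-3-4): the 120-unit limit forces two regions.
regions = aggregate({1: 50, 2: 60, 3: 40, 4: 30},
                    {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}, max_area=120)
# -> [{1, 2}, {3, 4}]
```

Unlike the MIP formulation, this heuristic commits to each merge immediately, so it is fast but offers no global optimality guarantee.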

  5. A multi-objective model for sustainable recycling of municipal solid waste.

    PubMed

    Mirdar Harijani, Ali; Mansour, Saeed; Karimi, Behrooz

    2017-04-01

    The efficient management of municipal solid waste is a major problem for large and populated cities. In many countries, the majority of municipal solid waste is landfilled or dumped owing to an inefficient waste management system. Therefore, an optimal and sustainable waste management strategy is needed. This study introduces a recycling and disposal network for sustainable utilisation of municipal solid waste. In order to optimise the network, we develop a multi-objective mixed integer linear programming model in which the economic, environmental and social dimensions of sustainability are concurrently balanced. The model is able to: select the best combination of waste treatment facilities; specify the type, location and capacity of waste treatment facilities; determine the allocation of waste to facilities; consider the transportation of waste and distribution of processed products; maximise the profit of the system; minimise the environmental footprint; maximise the social impacts of the system; and eventually generate an optimal and sustainable configuration for municipal solid waste management. The proposed methodology could be applied to any region around the world. Here, the city of Tehran, Iran, is presented as a real case study to show the applicability of the methodology.

  6. Predicting recovery of cognitive function soon after stroke: differential modeling of logarithmic and linear regression.

    PubMed

    Suzuki, Makoto; Sugimura, Yuko; Yamada, Sumio; Omori, Yoshitsugu; Miyamoto, Masaaki; Yamamoto, Jun-ichi

    2013-01-01

    Cognitive disorders in the acute stage of stroke are common and are important independent predictors of adverse outcome in the long term. Despite the impact of cognitive disorders on both patients and their families, it is still difficult to predict the extent or duration of cognitive impairments. The objective of the present study was, therefore, to provide data on predicting the recovery of cognitive function soon after stroke by differential modeling with logarithmic and linear regression. This study included two rounds of data collection comprising 57 stroke patients enrolled in the first round for the purpose of identifying the time course of cognitive recovery in the early-phase group data, and 43 stroke patients in the second round for the purpose of ensuring that the correlation of the early-phase group data applied to the prediction of each individual's degree of cognitive recovery. In the first round, Mini-Mental State Examination (MMSE) scores were assessed 3 times during hospitalization, and the scores were regressed on logarithmic and linear functions of time. In the second round, calculations of MMSE scores were made for the first two scoring times after admission to tailor the structures of logarithmic and linear regression formulae to fit an individual's degree of functional recovery. The time course of early-phase recovery for cognitive functions resembled both logarithmic and linear functions. However, MMSE scores sampled at two baseline points based on logarithmic regression modeling could estimate prediction of cognitive recovery more accurately than could linear regression modeling (logarithmic modeling, R(2) = 0.676, P<0.0001; linear regression modeling, R(2) = 0.598, P<0.0001). Logarithmic modeling based on MMSE scores could accurately predict the recovery of cognitive function soon after the occurrence of stroke. This logarithmic modeling with mathematical procedures is simple enough to be adopted in daily clinical practice.
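The model comparison the authors perform can be illustrated with ordinary least squares on the two candidate time transforms. The MMSE values below are hypothetical, chosen to plateau the way early recovery often does; the point is only that a logarithmic time axis fits a plateauing trajectory better than a linear one.

```python
import numpy as np

def r_squared(y, yhat):
    """Coefficient of determination for a fitted series."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def fit(predictor, y):
    """Least-squares fit of y on an intercept plus one predictor."""
    X = np.column_stack([np.ones_like(predictor), predictor])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, X @ beta

# Hypothetical scores rising quickly then plateauing (log-like recovery)
t = np.array([1.0, 3.0, 7.0, 14.0, 28.0])        # days since stroke
mmse = np.array([14.0, 19.0, 23.0, 26.0, 29.0])  # hypothetical MMSE scores

_, yhat_log = fit(np.log(t), mmse)   # regress on log(time)
_, yhat_lin = fit(t, mmse)           # regress on linear time
# The logarithmic fit explains markedly more variance for this trajectory.
```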

  7. A note on probabilistic models over strings: the linear algebra approach.

    PubMed

    Bouchard-Côté, Alexandre

    2013-12-01

    Probabilistic models over strings have played a key role in developing methods that take into consideration indels as phylogenetically informative events. There is an extensive literature on using automata and transducers on phylogenies to do inference on these probabilistic models, in which an important theoretical question is the complexity of computing the normalization of a class of string-valued graphical models. This question has been investigated using tools from combinatorics, dynamic programming, and graph theory, and has practical applications in Bayesian phylogenetics. In this work, we revisit this theoretical question from a different point of view, based on linear algebra. The main contribution is a set of results based on this linear algebra view that facilitate the analysis and design of inference algorithms on string-valued graphical models. As an illustration, we use this method to give a new elementary proof of a known result on the complexity of inference on the "TKF91" model, a well-known probabilistic model over strings. Compared to previous work, our proving method is easier to extend to other models, since it relies on a novel weak condition, triangular transducers, which is easy to establish in practice. The linear algebra view provides a concise way of describing transducer algorithms and their compositions, opens the possibility of transferring fast linear algebra libraries (for example, based on GPUs), as well as low rank matrix approximation methods, to string-valued inference problems.
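The linear algebra view can be illustrated in its simplest form. For a probabilistic automaton whose continuation probabilities form a substochastic matrix T, the normalization (total probability mass over all finite strings) is a geometric series of matrices with the closed form pi^T (I - T)^{-1} s. A toy two-state sketch of that computation (not the TKF91 model, whose transducers are more involved):

```python
import numpy as np

# Two-state chain: from each state, continue (emitting a symbol) with the
# probabilities in the rows of T, or stop with probability s. Because the
# rows of T plus s sum to 1, the model is proper and its mass sums to 1.
pi = np.array([1.0, 0.0])          # start distribution: begin in state 0
T = np.array([[0.3, 0.5],
              [0.2, 0.4]])         # substochastic continuation matrix
s = 1.0 - T.sum(axis=1)            # stop probabilities: [0.2, 0.4]

# Normalization over all finite strings: sum_n pi^T T^n s, which the
# matrix geometric series collapses to pi^T (I - T)^{-1} s.
Z = pi @ np.linalg.inv(np.eye(2) - T) @ s
```

The same resolvent-style computation underlies complexity arguments for inference on string-valued graphical models: the cost is governed by solving linear systems in the (possibly composed) transducer's state space rather than by enumerating strings.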

  8. Robust estimation for partially linear models with large-dimensional covariates.

    PubMed

    Zhu, LiPing; Li, RunZe; Cui, HengJian

    2013-10-01

    We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance the interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of [Formula: see text], where n is the sample size. We show that the robust estimate of the linear component performs asymptotically as well as its oracle counterpart which assumes the baseline function and the unimportant covariates were known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of the nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures.
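Robust estimation of a linear component can be sketched with Huber-weighted iteratively reweighted least squares. This is a generic illustration of the robustness idea, not the paper's nonconcave-penalized, high-dimensional procedure; the data are synthetic, with one gross outlier planted on an otherwise exact line.

```python
import numpy as np

def huber_weights(r, c=1.345):
    """Huber weights: residuals larger than c (in scale units) are downweighted."""
    a = np.abs(r)
    return np.where(a <= c, 1.0, c / a)

def robust_linear_fit(X, y, n_iter=50):
    """Iteratively reweighted least squares with Huber weights and MAD scale."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)   # ordinary LS start
    for _ in range(n_iter):
        r = y - X1 @ beta
        scale = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12
        w = huber_weights(r / scale)
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(sw[:, None] * X1, sw * y, rcond=None)
    return beta

# Line y = 1 + 2x with one gross outlier at x = 7
x = np.arange(10, dtype=float)
y = 1.0 + 2.0 * x
y[7] += 50.0
beta = robust_linear_fit(x, y)   # recovers (intercept, slope) near (1, 2)
```

An ordinary least-squares fit to the same data would be pulled badly toward the outlier, which is the failure mode robust procedures guard against.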

  9. Vestibular coriolis effect differences modeled with three-dimensional linear-angular interactions.

    PubMed

    Holly, Jan E

    2004-01-01

    The vestibular coriolis (or "cross-coupling") effect is traditionally explained by cross-coupled angular vectors, which, however, do not explain the differences in perceptual disturbance under different acceleration conditions. For example, during head roll tilt in a rotating chair, the magnitude of perceptual disturbance is affected by a number of factors, including acceleration or deceleration of the chair rotation or a zero-g environment. Therefore, it has been suggested that linear-angular interactions play a role. The present research investigated whether these perceptual differences and others involving linear coriolis accelerations could be explained under one common framework: the laws of motion in three dimensions, which include all linear-angular interactions among all six components of motion (three angular and three linear). The results show that the three-dimensional laws of motion predict the differences in perceptual disturbance. No special properties of the vestibular system or nervous system are required. In addition, simulations were performed with angular, linear, and tilt time constants inserted into the model, giving the same predictions. Three-dimensional graphics were used to highlight the manner in which linear-angular interaction causes perceptual disturbance, and a crucial component is the Stretch Factor, which measures the "unexpected" linear component.

  10. Validation of linear elastic model for soft tissue simulation in craniofacial surgery

    NASA Astrophysics Data System (ADS)

    Gladilin, Evgeny; Zachow, Stefan; Deuflhard, Peter; Hege, Hans-Christian

    2001-05-01

    Physically based soft tissue modeling is the state of the art in computer assisted surgery (CAS). But even such a sophisticated approach has its limits. The biomechanical behavior of soft tissue is highly complex, so that simplified models have to be applied. Under the assumption of small deformations, usually made in soft tissue modeling, soft tissue can be approximately described as a linear elastic continuum. Since efficient techniques exist for solving linear partial differential equations, the linear elastic model allows comparatively fast calculation of soft tissue deformation and consequently the prediction of a patient's postoperative appearance. However, for the calculation of large deformations, which are not unusual in craniofacial surgery, this approach can introduce substantial error depending on the intensity of the deformation. Monitoring the linearization error could help to estimate the scope of validity of the calculations against a user-defined precision. In order to quantify this error, one does not even need to know the correct solution, since the linear theory itself provides the appropriate instruments for error detection.

  11. A LINEAR PROGRAMMING MODEL OF THE GASEOUS DIFFUSION ISOTOPE-SEPARATION PROCESS,

    DTIC Science & Technology

    (*ISOTOPE SEPARATION, LINEAR PROGRAMMING), (*GASEOUS DIFFUSION SEPARATION, LINEAR PROGRAMMING), (*LINEAR PROGRAMMING, GASEOUS DIFFUSION SEPARATION), NUCLEAR REACTORS, REACTOR FUELS, URANIUM, PURIFICATION

  12. Model predictive control of non-linear systems over networks with data quantization and packet loss.

    PubMed

    Yu, Jimin; Nan, Liangsheng; Tang, Xiaoming; Wang, Ping

    2015-11-01

    This paper studies the approach of model predictive control (MPC) for non-linear systems in a networked environment where both data quantization and packet loss may occur. The non-linear controlled plant in the networked control system (NCS) is represented by a Takagi-Sugeno (T-S) model. The sensed data and control signal are quantized in both links and described as sector-bound uncertainties by applying the sector bound approach. Then, the quantized data are transmitted over the communication networks and may suffer from packet losses, which are modeled as a Bernoulli process. A fuzzy predictive controller which guarantees the stability of the closed-loop system is obtained by solving a set of linear matrix inequalities (LMIs). A numerical example is given to illustrate the effectiveness of the proposed method.

  13. Impact of using linear optimization models in dose planning for HDR brachytherapy

    SciTech Connect

    Holm, Aasa; Larsson, Torbjoern; Carlsson Tedgren, Aasa

    2012-02-15

    Purpose: Dose plans generated with optimization models hitherto used in high-dose-rate (HDR) brachytherapy have shown a tendency to yield longer dwell times than manually optimized plans. Concern has been raised for the corresponding undesired hot spots, and various methods to mitigate these have been developed. The hypotheses upon which this work is based are (a) that one cause for the long dwell times is the use of objective functions comprising simple linear penalties and (b) that alternative penalties that are piecewise linear would lead to reduced lengths of individual dwell times. Methods: The characteristics of the linear penalties and the piecewise linear penalties are analyzed mathematically. Experimental comparisons between the two types of penalties are carried out retrospectively for a set of prostate cancer patients. Results: When the two types of penalties are compared, significant changes can be seen in the dwell times, while most dose-volume parameters do not differ significantly. On average, total dwell times were reduced by 4.2%, with a reduction of maximum dwell times by 25%, when the alternative penalties were used. Conclusions: The use of linear penalties in optimization models for HDR brachytherapy is one cause for the undesired long dwell times that arise in mathematically optimized plans. By introducing alternative penalties, a significant reduction in dwell times can be achieved for HDR brachytherapy dose plans. Although various measures for mitigating the long dwell times are already available, the observation that linear penalties contribute to their appearance is of fundamental interest.
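The contrast between the two penalty types can be sketched directly. The breakpoints and weights below are illustrative, not clinical values: the piecewise linear penalty matches the simple linear one for moderate overdoses but rises much more steeply past a hot-spot level, which is what discourages the very long individual dwell times.

```python
def linear_penalty(dose, upper, w=1.0):
    """Simple single-slope overdose penalty of the kind used in
    classical HDR dose-plan optimization objectives."""
    return w * max(0.0, dose - upper)

def piecewise_penalty(dose, upper, hot, w=1.0, w_hot=10.0):
    """Piecewise linear alternative: the mild slope w applies up to the
    hot-spot level, then a much steeper slope w_hot applies beyond it."""
    base = w * max(0.0, min(dose, hot) - upper)
    return base + w_hot * max(0.0, dose - hot)

# A moderate overdose (12 Gy vs 10 Gy limit) is penalized identically by
# both, but a hot spot (20 Gy) incurs a far larger piecewise penalty.
```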

  14. Applications of multivariate modeling to neuroimaging group analysis: A comprehensive alternative to univariate general linear model

    PubMed Central

    Chen, Gang; Adleman, Nancy E.; Saad, Ziad S.; Leibenluft, Ellen; Cox, Robert W.

    2014-01-01

    All neuroimaging packages can handle group analysis with t-tests or general linear modeling (GLM). However, they are quite hamstrung when there are multiple within-subject factors or when quantitative covariates are involved in the presence of a within-subject factor. In addition, sphericity is typically assumed for the variance–covariance structure when there are more than two levels in a within-subject factor. To overcome such limitations in the traditional AN(C)OVA and GLM, we adopt a multivariate modeling (MVM) approach to analyzing neuroimaging data at the group level with the following advantages: a) there is no limit on the number of factors as long as sample sizes are deemed appropriate; b) quantitative covariates can be analyzed together with within-subject factors; c) when a within-subject factor is involved, three testing methodologies are provided: traditional univariate testing (UVT) with sphericity assumption (UVT-UC) and with correction when the assumption is violated (UVT-SC), and within-subject multivariate testing (MVT-WS); d) to correct for sphericity violation at the voxel level, we propose a hybrid testing (HT) approach that achieves equal or higher power via combining traditional sphericity correction methods (Greenhouse–Geisser and Huynh–Feldt) with MVT-WS. PMID:24954281

  15. Solving large-scale sparse eigenvalue problems and linear systems of equations for accelerator modeling

    SciTech Connect

    Gene Golub; Kwok Ko

    2009-03-30

    The solutions of sparse eigenvalue problems and linear systems constitute one of the key computational kernels in the discretization of partial differential equations for the modeling of linear accelerators. The computational challenges faced by existing techniques for solving those sparse eigenvalue problems and linear systems call for continuing research to improve the algorithms, so that the ever-increasing problem sizes required by the physics applications can be tackled. Under the support of this award, the filter algorithm for solving large sparse eigenvalue problems was developed at Stanford to address the computational difficulties of the previous methods, with the goal of enabling accelerator simulations for this class of problems on what was then the world's largest unclassified supercomputer at NERSC. Specifically, a new method, the Hermitian/skew-Hermitian splitting method, was proposed and researched as an improved method for solving linear systems with non-Hermitian positive definite and semidefinite matrices.
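
    The Hermitian/skew-Hermitian splitting (HSS) iteration alternates two shifted half-steps, one with the Hermitian part H and one with the skew-Hermitian part S of A. A minimal dense-matrix sketch in NumPy (toy 2x2 system; the intended setting is large sparse matrices, where each half-step would use a sparse solver):

```python
import numpy as np

def hss_solve(A, b, alpha=1.0, tol=1e-10, max_iter=500):
    """HSS iteration for Ax = b with A non-Hermitian positive definite."""
    n = A.shape[0]
    H = (A + A.conj().T) / 2          # Hermitian part
    S = (A - A.conj().T) / 2          # skew-Hermitian part
    I = np.eye(n, dtype=A.dtype)
    x = np.zeros(n, dtype=A.dtype)
    for _ in range(max_iter):
        # half step: (alpha*I + H) x_half = (alpha*I - S) x + b
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        # full step: (alpha*I + S) x_new = (alpha*I - H) x_half + b
        x_new = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# small non-Hermitian matrix whose symmetric part is positive definite
A = np.array([[4.0, 1.0], [-1.0, 3.0]])
b = np.array([1.0, 2.0])
x = hss_solve(A, b)
```

    For any shift alpha > 0 the iteration is an unconditional contraction when the Hermitian part is positive definite, which is the robustness property that motivates the method.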

  16. A linear-time algorithm for Gaussian and non-Gaussian trait evolution models.

    PubMed

    Ho, Lam si Tung; Ané, Cécile

    2014-05-01

    We developed a linear-time algorithm applicable to a large class of trait evolution models, for efficient likelihood calculations and parameter inference on very large trees. Our algorithm solves the traditional computational burden associated with two key terms, namely the determinant of the phylogenetic covariance matrix V and quadratic products involving the inverse of V. Applications include Gaussian models such as Brownian motion-derived models like Pagel's lambda, kappa, delta, and the early-burst model; Ornstein-Uhlenbeck models to account for natural selection with possibly varying selection parameters along the tree; as well as non-Gaussian models such as phylogenetic logistic regression, phylogenetic Poisson regression, and phylogenetic generalized linear mixed models. Outside of phylogenetic regression, our algorithm also applies to phylogenetic principal component analysis, phylogenetic discriminant analysis or phylogenetic prediction. The computational gain opens up new avenues for complex models or extensive resampling procedures on very large trees. We identify the class of models that our algorithm can handle as all models whose covariance matrix has a 3-point structure. We further show that this structure uniquely identifies a rooted tree whose branch lengths parametrize the trait covariance matrix, which acts as a similarity matrix. The new algorithm is implemented in the R package phylolm, including functions for phylogenetic linear regression and phylogenetic logistic regression.
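
    The 3-point condition is easy to test directly: a symmetric matrix V has the required structure when, for every triple of indices, the two smallest of the three pairwise entries are equal. A small NumPy sketch (toy 3-taxon Brownian-motion covariance with invented branch lengths, purely illustrative):

```python
import numpy as np
from itertools import combinations

def has_three_point_structure(V, tol=1e-12):
    """For every triple (i, j, k), the two smallest of
    V[i,j], V[i,k], V[j,k] must be equal."""
    for i, j, k in combinations(range(V.shape[0]), 3):
        a, b, _ = sorted([V[i, j], V[i, k], V[j, k]])
        if abs(a - b) > tol:
            return False
    return True

# Brownian motion on the rooted tree ((A,B),C): V[i,j] is the length of
# the shared root-to-MRCA path, so A and B share 1.0 of root-to-tip path
# while neither shares any path with C
V = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])
print(has_three_point_structure(V))   # a tree covariance satisfies the condition
```

    Any covariance built from shared path lengths on a rooted tree passes this check, which is the sense in which the 3-point structure characterizes the models the algorithm handles.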

  17. Modelling land-fast sea ice using a linear elastic model

    NASA Astrophysics Data System (ADS)

    Plante, Mathieu; Tremblay, Bruno

    2016-04-01

    Land-fast ice is an important component of the Arctic system, capping the coastal Arctic waters for most of the year and exerting a large influence on ocean-atmosphere heat exchanges. Yet, the accurate representation of land-fast ice in most large-scale sea ice models remains a challenge, in part due to the difficult (and sometimes non-physical) parametrisation of ice fracture. In this study, a linear elastic model is developed to investigate the internal stresses induced by the wind forcing on the land-fast ice, modelled as a 2D elastic plate. The model simulates ice fracture by the implementation of a damage coefficient which causes a local reduction in internal stress. This results in a cascading propagation of damage, simulating the ice fracture that determines the position of the land-fast ice edge. The modelled land-fast ice cover is sensitive to the choice of failure criterion. The relationship between the parametrised cohesion, tensile and compressive strengths and the stability of the land-fast ice is discussed. To estimate the large-scale mechanical properties of land-fast ice, these results are compared to a set of land-fast ice break-up events and ice bridge formations observed in the Siberian Arctic. These events are identified using brightness temperature imagery from the MODIS (Moderate Resolution Imaging Spectroradiometer) Terra and Aqua satellites, in which the position of the flaw lead is identifiable by the opening of polynyas adjacent to the land-fast ice edge. The shape of the land-fast ice before, during and after these events, along with the characteristic scale of the resulting ice floes, is compared to the model results to extrapolate the stress state that corresponds to these observations. The model setting that best reproduces the scale of the observed break-up events is used to provide an estimate of the strength of the ice relative to the wind forcing. These results will then be used to investigate the relationship between the ice thickness and the

  18. Non-linear homogenized and heterogeneous FE models for FRCM reinforced masonry walls in diagonal compression

    NASA Astrophysics Data System (ADS)

    Bertolesi, Elisa; Milani, Gabriele; Poggi, Carlo

    2016-12-01

    Two FE modeling techniques are presented and critically discussed for the non-linear analysis of tuff masonry panels reinforced with FRCM and subjected to standard diagonal compression tests. The specimens, tested at the University of Naples (Italy), are unreinforced and FRCM-retrofitted walls. The extensive characterization of the constituent materials allowed very sophisticated numerical modeling techniques to be adopted here. In particular, the results obtained by means of a micro-modeling strategy and a homogenization approach are compared. The first modeling technique is a three-dimensional heterogeneous micro-modeling in which the constituent materials (bricks, joints, reinforcing mortar and reinforcing grid) are modeled separately. The second approach is based on a two-step homogenization procedure, previously developed by the authors, where the elementary cell is discretized by means of three-noded plane stress elements and non-linear interfaces. The non-linear structural analyses are performed by replacing the homogenized orthotropic continuum with a rigid element and non-linear spring assemblage (RBSM). All the simulations presented here are performed using the commercial software Abaqus. Pros and cons of the two approaches are discussed with reference to their reliability in reproducing global force-displacement curves and crack patterns, as well as to the rather different computational effort required by the two strategies.

  19. A single-degree-of-freedom model for non-linear soil amplification

    USGS Publications Warehouse

    Erdik, Mustafa Ozder

    1979-01-01

    For proper understanding of soil behavior during earthquakes and assessment of a realistic surface motion, studies of the large-strain dynamic response of non-linear hysteretic soil systems are indispensable. Most of the presently available studies are based on the assumption that the response of a soil deposit is mainly due to the upward propagation of horizontally polarized shear waves from the underlying bedrock. Equivalent-linear procedures, currently in common use in non-linear soil response analysis, provide a simple approach and have been favorably compared with the actual recorded motions in some particular cases. Strain compatibility in these equivalent-linear approaches is maintained by selecting values of shear moduli and damping ratios in accordance with the average soil strains, in an iterative manner. Truly non-linear constitutive models with complete strain compatibility have also been employed. The equivalent-linear approaches often raise some doubt as to the reliability of their results concerning the system response in high frequency regions. In these frequency regions the equivalent-linear methods may underestimate the surface motion by as much as a factor of two or more. Although studies are complete in their methods of analysis, they inevitably provide applications pertaining only to a few specific soil systems, and do not lead to general conclusions about soil behavior. This report attempts to provide a general picture of the soil response through the use of a single-degree-of-freedom non-linear-hysteretic model. Although the investigation is based on a specific type of nonlinearity and a set of dynamic soil properties, the method described does not limit itself to these assumptions and is equally applicable to other types of nonlinearity and soil parameters.

  20. The hedgehog baryon as a variational mean field solution of the spherical linear chiral soliton model

    NASA Astrophysics Data System (ADS)

    Goeke, K.; Urbano, J. N.; Fiolhais, M.; Harvey, M.

    1985-12-01

    We prove that the hedgehog baryon arises as a variational solution of the linear σ-model, if this is restricted to the chiral circle and if the boson Fock states are described by coherent states and the valence quarks by a product of three identical wave functions, each consisting of an orbital s-state multiplied by the most general one-quark spin-flavour configuration in the ud-sector. The converse is shown not to be true: the assumption of a hedgehog state in the linear σ-model does not lead to fields which obey the requirements of the chiral circle.

  1. Prediction of Nino 3 sea surface temperatures using linear inverse modeling

    SciTech Connect

    Penland, C.; Magorian, T.

    1993-06-01

    Linear inverse modeling is used to predict sea surface temperatures (SSTs) in the Nino 3 region. Predictors in three geographical locations are used: the tropical Pacific Ocean, the tropical Pacific and Indian oceans, and the global tropical oceans. Predictions did not depend crucially on any of these three domains, and evidence was found to support the assumption that linear dynamics dominates most of the record. The prediction model performs better when SST anomalies are rapidly evolving than during warm events when large anomalies persist. The rms prediction error at a lead time of 9 months is about half a degree Celsius. 31 refs., 9 figs., 1 tab.
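
    The core of linear inverse modeling can be sketched in a few lines: estimate the lag-tau propagator G = C(tau) C(0)^(-1) from the lag-covariance matrices of the anomaly record, then forecast x(t+tau) = G x(t). A toy NumPy version on synthetic bivariate data (illustrative parameters, not actual SST statistics):

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic bivariate "anomaly" series generated by a stable linear system
B = np.array([[0.9, 0.1], [-0.2, 0.8]])   # true lag-1 propagator
T = 5000
x = np.zeros((T, 2))
for t in range(T - 1):
    x[t + 1] = B @ x[t] + rng.normal(scale=0.1, size=2)

# linear inverse model: estimate the lag-tau propagator G = C(tau) C(0)^{-1}
tau = 1
C0 = x[:-tau].T @ x[:-tau] / (T - tau)     # lag-0 covariance
Ctau = x[tau:].T @ x[:-tau] / (T - tau)    # lag-tau covariance
G = Ctau @ np.linalg.inv(C0)

forecast = G @ x[-1]                       # one-step prediction
```

    When the underlying dynamics really are linear, G recovers the true propagator from the covariances alone, which is the sense in which the record "supports the assumption that linear dynamics dominates".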

  2. A Linear Regression and Markov Chain Model for the Arabian Horse Registry

    DTIC Science & Technology

    1993-04-01

    A linear regression and Markov chain model was developed for the Arabian Horse Registry, which needed to forecast its future registration of purebred Arabian horses. A linear regression model was utilized to

  3. A Robust Multiple Correlation Coefficient for the Rank Analysis of Linear Models.

    DTIC Science & Technology

    1983-09-01

    A multiple correlation coefficient is discussed to measure the degree of association between a random variable Y and a set of random variables X ... approach of analyzing linear models in a regression, prediction context. The population parameter equals the classical multiple correlation coefficient if the multivariate normal model holds but would be more robust for departures from this model. Some results are given on the consistency of the sample estimate and on a test for independence. (Author)

  4. NOTE: Estimation of renal scintigraphy parameters using a linear piecewise-continuous model

    NASA Astrophysics Data System (ADS)

    Zhang, Jeff L.; Zhang, L.; Koh, T. S.; Shuter, B.

    2003-06-01

    Instead of performing a numerical deconvolution, we propose to use a linear piecewise-continuous model of the renal impulse response function for parametric fitting of renal scintigraphy data, to obtain clinically useful renal parameters. The strengths of the present model are its simplicity and speed of computation, while not compromising on accuracy. Preliminary patient case studies show that the estimated parameters are in good agreement with a more elaborate model.
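
    Because a piecewise-continuous linear model with known breakpoints is linear in its coefficients, fitting reduces to ordinary least squares rather than numerical deconvolution. A sketch on synthetic data (hypothetical breakpoint, slopes, and noise level, not actual renogram values):

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic time-activity samples; true curve rises to t = 3, then declines
t = np.linspace(0, 10, 60)
true_curve = 2.0 * t - 3.5 * np.maximum(0.0, t - 3.0)
y = true_curve + rng.normal(scale=0.05, size=t.size)

# linear piecewise-continuous model with a known breakpoint at t = 3:
# f(t) = b0 + b1*t + b2*max(0, t - 3); linear in (b0, b1, b2), so a
# single least-squares solve fits it
X = np.column_stack([np.ones_like(t), t, np.maximum(0.0, t - 3.0)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
```

    The hinge basis max(0, t - t_k) keeps the fitted curve continuous at each breakpoint while leaving the problem linear, which is what makes this class of model simple and fast to compute.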

  5. Kershaw closures for linear transport equations in slab geometry I: Model derivation

    NASA Astrophysics Data System (ADS)

    Schneider, Florian

    2016-10-01

    This paper provides a new class of moment models for linear kinetic equations in slab geometry. These models can be evaluated cheaply while preserving the important realizability property, that is the fact that the underlying closure is non-negative. Several comparisons with the (expensive) state-of-the-art minimum-entropy models are made, showing the similarity in approximation quality of the two classes.

  6. Finite element modelling of non-linear magnetic circuits using Cosmic NASTRAN

    NASA Technical Reports Server (NTRS)

    Sheerer, T. J.

    1986-01-01

    The general-purpose finite element program COSMIC NASTRAN currently has the ability to model magnetic circuits with constant permeabilities. An approach was developed which, through small modifications to the program, allows modelling of non-linear magnetic devices including soft magnetic materials, permanent magnets and coils. Use of the NASTRAN code resulted in output which can be used for subsequent mechanical analysis using a variation of the same computer model. Test problems were found to produce theoretically verifiable results.

  7. Genetic evaluation of calf and heifer survival in Iranian Holstein cattle using linear and threshold models.

    PubMed

    Forutan, M; Ansari Mahyari, S; Sargolzaei, M

    2015-02-01

    Calf and heifer survival are important traits in dairy cattle affecting profitability. This study was carried out to estimate genetic parameters of survival traits in female calves at different age periods, until nearly the first calving. Records of 49,583 female calves born between 1998 and 2009 were considered in five age periods: days 1-30, 31-180, 181-365, 366-760 and the full period (days 1-760). Genetic components were estimated based on linear and threshold sire models and linear animal models. The models included both fixed effects (month of birth, dam's parity number, calving ease and twin/single) and random effects (herd-year, genetic effect of sire or animal, and residual). Rates of death were 2.21, 3.37, 1.97, 4.14 and 12.4% for the above periods, respectively. Heritability estimates were very low, ranging from 0.48 to 3.04%, 0.62 to 3.51% and 0.50 to 4.24% for the linear sire model, animal model and threshold sire model, respectively. Rank correlations between random effects of sires obtained with linear and threshold sire models and with linear animal and sire models were 0.82-0.95 and 0.61-0.83, respectively. The estimated genetic correlations between the five different periods were moderate and only significant for 31-180 and 181-365 (r(g) = 0.59), 31-180 and 366-760 (r(g) = 0.52), and 181-365 and 366-760 (r(g) = 0.42). The low genetic correlations in the current study suggest that survival at different periods may be affected by the same genes with different expression or by different genes. Even though the additive genetic variation of survival traits is small, it might be possible to improve these traits by traditional or genomic selection.

  8. Assessing the performance of linear and non-linear soil carbon dynamics models using the Multi-Objective Evolutionary Algorithm Borg-MOEA

    NASA Astrophysics Data System (ADS)

    Ramcharan, A. M.; Kemanian, A.; Richard, T.

    2013-12-01

    The largest terrestrial carbon pool is soil, storing more carbon than present in above ground biomass (Jobbagy and Jackson, 2000). In this context, soil organic carbon has gained attention as a managed sink for atmospheric CO2 emissions. The variety of models that describe soil carbon cycling reflects the relentless effort to characterize the complex nature of soil and the carbon within it. Previous works have laid out the range of mathematical approaches to soil carbon cycling but few have compared model structure performance in diverse agricultural scenarios. As interest in increasing the temporal and spatial scale of models grows, assessing the performance of different model structures is essential to drawing reasonable conclusions from model outputs. This research will address this challenge using the Evolutionary Algorithm Borg-MOEA to optimize the functionality of carbon models in a multi-objective approach to parameter estimation. Model structure performance will be assessed through analysis of multi-objective trade-offs using experimental data from twenty long-term carbon experiments across the globe. Preliminary results show a successful test of this proof of concept using a non-linear soil carbon model structure. Soil carbon dynamics were based on the amount of carbon inputs to the soil and the degree of organic matter saturation of the soil. The degree of organic matter saturation of the soil was correlated with the soil clay content. Six parameters of the non-linear soil organic carbon model were successfully optimized to steady-state conditions using Borg-MOEA and datasets from five agricultural locations in the United States. Given that more than 50% of models rely on linear soil carbon decomposition dynamics, a linear model structure was also optimized and compared to the non-linear case. Results indicate linear dynamics had a significantly lower optimization performance. Results show promise in using the Evolutionary Algorithm Borg-MOEA to assess

  9. Modeling and experimental validation of a linear ultrasonic motor considering rough surface contact

    NASA Astrophysics Data System (ADS)

    Lv, Qibao; Yao, Zhiyuan; Li, Xiang

    2017-04-01

    A linear ultrasonic motor is driven by the interface friction between the stator and the slider. The performance of the motor is significantly affected by the contact state between the stator and slider, which depends considerably on the morphology of the contact interface. A novel friction model is developed to evaluate the output characteristics of a linear ultrasonic motor. The proposed model, in which the roughness and plastic deformation of the contact surfaces are considered, differs from the previous spring model. Based on the developed model, the effects of surface roughness parameters on motor performance are investigated. The behavior of the force transmission between the stator and the slider is studied to understand the driving mechanism. Furthermore, a comparison between the proposed model and the spring model is made. An experiment is designed to verify the feasibility and effectiveness of the proposed model by comparing the simulation results with the measured ones. The results show that the proposed model is more accurate than the spring model. These discussions will be very useful for the improvement of control and the optimal design of linear ultrasonic motors.

  10. Properties of Linear Integral Equations Related to the Six-Vertex Model with Disorder Parameter

    NASA Astrophysics Data System (ADS)

    Boos, Hermann; Göhmann, Frank

    2011-10-01

    One of the key steps in recent work on the correlation functions of the XXZ chain was to regularize the underlying six-vertex model by a disorder parameter α. For the regularized model it was shown that all static correlation functions are polynomials in only two functions. It was further shown that these two functions can be written as contour integrals involving the solutions of a certain type of linear and non-linear integral equations. The linear integral equations depend parametrically on α and generalize linear integral equations known from the study of the bulk thermodynamic properties of the model. In this note we consider the generalized dressed charge and a generalized magnetization density. We express the generalized dressed charge as a linear combination of two quotients of Q-functions, the solutions of Baxter's t-Q-equation. With this result we give a new proof of a lemma on the asymptotics of the generalized magnetization density as a function of the spectral parameter.

  11. Development and Validation of Linear Alternator Models for the Advanced Stirling Convertor

    NASA Technical Reports Server (NTRS)

    Metscher, Jonathan F.; Lewandowski, Edward J.

    2015-01-01

    Two models of the linear alternator of the Advanced Stirling Convertor (ASC) have been developed using the Sage 1-D modeling software package. The first model relates the piston motion to electric current by means of a motor constant. The second uses electromagnetic model components to model the magnetic circuit of the alternator. The models are tuned and validated using test data and also compared against each other. Results show both models can be tuned to achieve results within 7% of ASC test data under normal operating conditions. Using Sage enables a complete ASC model to be developed and simulations to be completed quickly compared to more complex multi-dimensional models. These models allow for better insight into overall Stirling convertor performance, aid with Stirling power system modeling, and in the future will support NASA mission planning for Stirling-based power systems.

  12. Development and Validation of Linear Alternator Models for the Advanced Stirling Convertor

    NASA Technical Reports Server (NTRS)

    Metscher, Jonathan F.; Lewandowski, Edward

    2014-01-01

    Two models of the linear alternator of the Advanced Stirling Convertor (ASC) have been developed using the Sage 1-D modeling software package. The first model relates the piston motion to electric current by means of a motor constant. The second uses electromagnetic model components to model the magnetic circuit of the alternator. The models are tuned and validated using test data and compared against each other. Results show both models can be tuned to achieve results within 7% of ASC test data under normal operating conditions. Using Sage enables a complete ASC model to be developed and simulations to be completed quickly compared to more complex multi-dimensional models. These models allow for better insight into overall Stirling convertor performance, aid with Stirling power system modeling, and in the future will support NASA mission planning for Stirling-based power systems.

  13. Generating synthetic wave climates for coastal modelling: a linear mixed modelling approach

    NASA Astrophysics Data System (ADS)

    Thomas, C.; Lark, R. M.

    2013-12-01

    Numerical coastline morphological evolution models require wave climate properties to drive morphological change through time. Wave climate properties (typically wave height, period and direction) may be temporally fixed, culled from real wave buoy data, or allowed to vary in some way defined by a Gaussian or other pdf. However, to examine sensitivity of coastline morphologies to wave climate change, it seems desirable to be able to modify wave climate time series from a current to some new state along a trajectory, but in a way consistent with, or initially conditioned by, the properties of existing data, or to generate fully synthetic data sets with realistic time series properties. For example, mean or significant wave height time series may have underlying periodicities, as revealed in numerous analyses of wave data. Our motivation is to develop a simple methodology to generate synthetic wave climate time series that can change in some stochastic way through time. We wish to use such time series in a coastline evolution model to test sensitivities of coastal landforms to changes in wave climate over decadal and centennial scales. We have worked initially on time series of significant wave height, based on data from a Waverider III buoy located off the coast of Yorkshire, England. The statistical framework for the simulation is the linear mixed model. The target variable, perhaps after transformation (Box-Cox), is modelled as a multivariate Gaussian, the mean modelled as a function of a fixed effect, and two random components, one of which is independently and identically distributed (iid) and the second of which is temporally correlated. The model was fitted to the data by likelihood methods. We considered the option of a periodic mean, the period either fixed (e.g. at 12 months) or estimated from the data. We considered two possible correlation structures for the second random effect. In one the correlation decays exponentially with time. In the second
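
    The generative side of such a model is straightforward to sketch: a periodic fixed-effect mean, an iid random component, and a temporally correlated random component, the latter realized here through its discrete-time analogue, an AR(1) process whose correlation decays exponentially with lag. All parameter values below are invented for illustration, not fitted to the buoy data:

```python
import numpy as np

rng = np.random.default_rng(42)

# illustrative parameters only (not estimates from the Waverider record)
n_months = 240                     # 20 years at monthly resolution
mu, amp = 1.6, 0.5                 # mean Hs (m) and seasonal amplitude
phi, sigma_u = 0.7, 0.15           # AR(1) coefficient and innovation sd
sigma_e = 0.10                     # sd of the iid random component

t = np.arange(n_months)
mean = mu + amp * np.cos(2 * np.pi * t / 12.0)   # 12-month periodic mean

u = np.zeros(n_months)             # temporally correlated random effect
for k in range(1, n_months):       # exponential correlation decay <-> AR(1)
    u[k] = phi * u[k - 1] + rng.normal(scale=sigma_u)

hs = mean + u + rng.normal(scale=sigma_e, size=n_months)
```

    Shifting mu, amp or phi along a prescribed trajectory turns this into a synthetic wave climate that drifts from the current state to a new one while retaining realistic time-series structure.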

  14. Non-linear control logics for vibrations suppression: a comparison between model-based and non-model-based techniques

    NASA Astrophysics Data System (ADS)

    Ripamonti, Francesco; Orsini, Lorenzo; Resta, Ferruccio

    2015-04-01

    Non-linear behavior is present in the operating conditions of many mechanical systems. In these cases, a common engineering practice is to linearize the equation of motion around a particular operating point and to design a linear controller. The main disadvantage is that the stability properties and validity of the controller are only local. In order to improve the controller performance, non-linear control techniques represent a very attractive solution for many smart structures. The aim of this paper is to compare non-linear model-based and non-model-based control techniques. In particular, the model-based sliding mode control (SMC) technique is considered because of its easy implementation and the strong robustness of the controller even under heavy model uncertainties. Among the non-model-based control techniques, fuzzy control (FC), which allows the controller to be designed according to if-then rules, has been considered. It defines the controller without a system reference model, offering many advantages such as an intrinsic robustness. These techniques have been tested on a nonlinear pendulum system.

  15. Linearized aerodynamic and control law models of the X-29A airplane and comparison with flight data

    NASA Technical Reports Server (NTRS)

    Bosworth, John T.

    1992-01-01

    Flight control system design and analysis for aircraft rely on mathematical models of the vehicle dynamics. In addition to a six degree of freedom nonlinear simulation, the X-29A flight controls group developed a set of programs that calculate linear perturbation models throughout the X-29A flight envelope. The models include the aerodynamics as well as flight control system dynamics and were used for stability, controllability, and handling qualities analysis. These linear models were compared to flight test results to help provide a safe flight envelope expansion. A description is given of the linear models at three flight conditions and two flight control system modes. The models are presented with a level of detail that would allow the reader to reproduce the linear results if desired. Comparison between the response of the linear model and flight measured responses are presented to demonstrate the strengths and weaknesses of the linear models' ability to predict flight dynamics.
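
    Linear perturbation models of the kind described here can be extracted numerically from any nonlinear simulation by finite differencing about a trim point, giving x_dot ≈ A δx + B δu. A generic sketch (toy longitudinal dynamics with made-up coefficients, not the X-29A equations of motion):

```python
import numpy as np

def linearize(f, x0, u0, eps=1e-6):
    """Central-difference Jacobians A = df/dx and B = df/du about (x0, u0)."""
    n, m = len(x0), len(u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

# toy short-period-like dynamics: states (alpha, q), one control input
def f(x, u):
    alpha, q = x
    return np.array([q - 0.8 * np.sin(alpha) + 0.1 * u[0],
                     -2.0 * alpha - 0.5 * q + 3.0 * u[0]])

A, B = linearize(f, np.zeros(2), np.zeros(1))
```

    Repeating this at trim points across the envelope yields the family of linear models used for stability, controllability, and handling qualities analysis.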

  16. Mathematical modelling in engineering: an alternative way to teach Linear Algebra

    NASA Astrophysics Data System (ADS)

    Domínguez-García, S.; García-Planas, M. I.; Taberna, J.

    2016-10-01

    Technological advances require that basic science courses for engineering, including Linear Algebra, emphasize the development of mathematical strengths associated with modelling and interpretation of results, which are not limited only to calculus abilities. Based on this consideration, we have proposed a project-based learning approach, giving a dynamic classroom experience in which students model real-world problems and in turn gain a deeper knowledge of the Linear Algebra subject. Considering that most students are digital natives, we use the e-portfolio as a tool of communication between students and teachers, besides being a good place to make the work visible. In this article, we present an overview of the design and implementation of project-based learning for a Linear Algebra course taught during the 2014-2015 academic year at the 'ETSEIB' of Universitat Politècnica de Catalunya (UPC).

  17. An evaluation of bias in propensity score-adjusted non-linear regression models.

    PubMed

    Wan, Fei; Mitra, Nandita

    2016-04-19

    Propensity score methods are commonly used to adjust for observed confounding when estimating the conditional treatment effect in observational studies. One popular method, covariate adjustment of the propensity score in a regression model, has been empirically shown to be biased in non-linear models. However, no compelling underlying theoretical reason has been presented. We propose a new framework to investigate bias and consistency of propensity score-adjusted treatment effects in non-linear models that uses a simple geometric approach to forge a link between the consistency of the propensity score estimator and the collapsibility of non-linear models. Under this framework, we demonstrate that adjustment of the propensity score in an outcome model results in the decomposition of observed covariates into the propensity score and a remainder term. Omission of this remainder term from a non-collapsible regression model leads to biased estimates of the conditional odds ratio and conditional hazard ratio, but not for the conditional rate ratio. We further show, via simulation studies, that the bias in these propensity score-adjusted estimators increases with larger treatment effect size, larger covariate effects, and increasing dissimilarity between the coefficients of the covariates in the treatment model versus the outcome model.
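
    The non-collapsibility at the heart of this bias can be shown with a few lines of arithmetic: even when a covariate is independent of treatment (no confounding at all), the marginal odds ratio is attenuated relative to the conditional one. The coefficients below are made up purely for illustration:

```python
import numpy as np

expit = lambda z: 1.0 / (1.0 + np.exp(-z))

# true conditional model: logit P(Y=1 | T, X) = b0 + bT*T + bX*X,
# with X ~ Bernoulli(0.5) independent of treatment T (no confounding)
b0, bT, bX = -1.0, 1.0, 2.0
p_x = 0.5

def marginal_prob(t):
    # average the conditional risk over the distribution of X
    return (1 - p_x) * expit(b0 + bT * t) + p_x * expit(b0 + bT * t + bX)

odds = lambda p: p / (1 - p)
conditional_or = np.exp(bT)                                   # exp(1) ~ 2.72
marginal_or = odds(marginal_prob(1)) / odds(marginal_prob(0)) # ~ 2.23
```

    The gap between the two grows with the covariate effect bX, mirroring the finding that bias in the propensity score-adjusted conditional odds ratio increases with larger covariate effects.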

  18. A linearization approach for the model-based analysis of combined aggregate and individual patient data.

    PubMed

    Ravva, Patanjali; Karlsson, Mats O; French, Jonathan L

    2014-04-30

    The application of model-based meta-analysis in drug development has gained prominence recently, particularly for characterizing dose-response relationships and quantifying treatment effect sizes of competitor drugs. The models are typically nonlinear in nature and involve covariates to explain the heterogeneity in summary-level literature (or aggregate data (AD)). Inferring individual patient-level relationships from these nonlinear meta-analysis models leads to aggregation bias. Individual patient-level data (IPD) are indeed required to characterize patient-level relationships, but too often this information is limited. Since combined analyses of AD and IPD take advantage of the information the two sources share, the models developed for AD must be derived from IPD models; in the case of linear models, the solution is a closed form, while for nonlinear models, closed-form solutions do not exist. Here, we propose a linearization method based on a second-order Taylor series approximation for fitting models to AD alone or to combined AD and IPD. The application of this method is illustrated by an analysis of a continuous landmark endpoint, i.e., change from baseline in HbA1c at week 12, from 18 clinical trials evaluating the effects of DPP-4 inhibitors on hyperglycemia in diabetic patients. The performance of this method is demonstrated by a simulation study where the effects of varying the degree of nonlinearity and of heterogeneity in covariates (as assessed by the ratio of between-trial to within-trial variability) were studied. A dose-response relationship using an Emax model with linear and nonlinear effects of covariates on the emax parameter was used to simulate data. The simulation results showed that when an IPD model is simply used for modeling AD, the bias in the emax parameter estimate increased noticeably with an increasing degree of nonlinearity in the model with respect to covariates. When using an appropriately derived AD model, the linearization
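
    The idea behind the second-order correction can be illustrated on a toy Emax-type model that is nonlinear in a patient covariate (all values below are hypothetical; the paper's actual models and data differ): an AD model that simply plugs in the mean covariate is biased, while adding the Taylor term 0.5 * f''(x_bar) * sigma^2 recovers most of the aggregate mean.

```python
import numpy as np

rng = np.random.default_rng(7)

# hypothetical Emax model, nonlinear in a covariate x (here acting on ED50)
def response(dose, x, emax=1.0, ed50=50.0, beta=0.5):
    return emax * dose / (ed50 * np.exp(beta * x) + dose)

dose, mu, sd = 80.0, 0.0, 1.0

# naive AD model: plug in the mean covariate (ignores heterogeneity -> bias)
naive = response(dose, mu)

# second-order Taylor correction: f(mu) + 0.5 * f''(mu) * sd**2,
# with f'' approximated by a central second difference
h = 1e-4
f2 = (response(dose, mu + h) - 2 * naive + response(dose, mu - h)) / h**2
taylor = naive + 0.5 * f2 * sd**2

# "true" aggregate response by Monte Carlo over the covariate distribution
truth = response(dose, rng.normal(mu, sd, 200_000)).mean()
```

    The mismatch between `naive` and `truth` grows with the curvature of the model in the covariate, which is the aggregation bias the linearization method is designed to remove.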

  19. Steady-state global optimization of metabolic non-linear dynamic models through recasting into power-law canonical models

    PubMed Central

    2011-01-01

    Background Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results Based on the GMA canonical representation, we have developed in previous work a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Conclusions Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcome some of the numerical difficulties that arise during the global optimization task. PMID:21867520
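The recasting idea can be illustrated on a single rate law (this is an illustrative sketch, not the paper's algorithm): a saturable Michaelis-Menten term becomes a strict power-law (GMA) product of variables raised to exponents once an auxiliary variable is introduced for the denominator.

```python
# Recasting sketch: the Michaelis-Menten rate v = Vmax*S/(Km+S) is not a
# power law in S, but introducing the auxiliary variable w = Km + S (which
# gets its own algebraic equation in the recast system) turns it into the
# GMA form v = Vmax * S^1 * w^-1, a pure product of powers.
Vmax, Km = 10.0, 0.5     # illustrative kinetic constants

def v_original(S):
    return Vmax * S / (Km + S)

def v_recast(S):
    w = Km + S                    # auxiliary variable
    return Vmax * S**1 * w**-1    # strict power-law (GMA) term

for S in (0.1, 1.0, 5.0):
    assert abs(v_original(S) - v_recast(S)) < 1e-12
```

The recast system is exactly equivalent on the solution manifold, which is why optima found for the GMA form transpose back to the original non-linear model.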

  20. Using Hierarchical Linear Modelling to Examine Factors Predicting English Language Students' Reading Achievement

    ERIC Educational Resources Information Center

    Fung, Karen; ElAtia, Samira

    2015-01-01

    Using Hierarchical Linear Modelling (HLM), this study aimed to identify factors such as ESL/ELL/EAL status that would predict students' reading performance in an English language arts exam taken across Canada. Using data from the 2007 administration of the Pan-Canadian Assessment Program (PCAP) along with the accompanying surveys for students and…

  1. Remarks on "Equivalent Linear Logistic Test Models" by Bechger, Verstralen, and Verhelst (2002)

    ERIC Educational Resources Information Center

    Fischer, Gerhard H.

    2004-01-01

    This paper discusses a new form of specifying and normalizing a Linear Logistic Test Model (LLTM) as suggested by Bechger, Verstralen, and Verhelst ("Psychometrika," 2002). It is shown that there are infinitely many ways to specify the same normalization. Moreover, the relationship between some of their results and equivalent previous…

  2. An Interactive Method to Solve Infeasibility in Linear Programming Test Assembling Models

    ERIC Educational Resources Information Center

    Huitzing, Hiddo A.

    2004-01-01

    In optimal assembly of tests from item banks, linear programming (LP) models have proved to be very useful. Assembly by hand has become nearly impossible, but these LP techniques are able to find the best solutions, given the demands and needs of the test to be assembled and the specifics of the item bank from which it is assembled. However,…

  3. Mixed linear model approach adapted for genome-wide association studies

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Mixed linear model (MLM) methods have proven useful in controlling for population structure and relatedness within genome-wide association studies. However, MLM-based methods can be computationally challenging for large datasets. We report a compression approach, called ‘compressed MLM,’ that decrea...

  4. Classical trajectory versus quantum interference. A linear chain model for the origin of uncertainty broadening

    SciTech Connect

    Tang, Jau

    1996-02-01

    A simple linear chain model, as an alternative to the orthodox Schroedinger approach, is proposed to explain the origin of the uncertainty broadening and to improve our physical insight into the difference between classical and quantum worlds. Quantum interference in space is manifested as a result of fast exchange between adjacent particles of different internal degrees of freedom.

  5. An improved statistical model for linear antenna input impedance in an electrically large cavity.

    SciTech Connect

    Johnson, William Arthur; Warne, Larry Kevin; Jorgenson, Roy Eberhardt; Lee, Kelvin S. H.

    2005-03-01

    This report presents a modification of a previous model for the statistical distribution of linear antenna impedance. With this modification a simple formula is determined which yields accurate results for all ratios of modal spectral width to spacing. It is shown that the reactance formula approaches the known unit Lorentzian in the lossless limit.

  6. Kaon condensation in the linear sigma model at finite density and temperature

    SciTech Connect

    Tran Huu Phat; Nguyen Van Long; Nguyen Tuan Anh; Le Viet Hoa

    2008-11-15

    Based on the Cornwall-Jackiw-Tomboulis effective action approach, we formulate a theoretical formalism for studying kaon condensation in the linear sigma model at finite density and temperature. We derive the renormalized effective potential in the Hartree-Fock approximation, which preserves the Goldstone theorem. This quantity is then used to study the physical properties of kaon matter.

  7. Mathematical Modelling in Engineering: An Alternative Way to Teach Linear Algebra

    ERIC Educational Resources Information Center

    Domínguez-García, S.; García-Planas, M. I.; Taberna, J.

    2016-01-01

    Technological advances require that basic science courses for engineering, including Linear Algebra, emphasize the development of mathematical strengths associated with modelling and interpretation of results, which are not limited only to calculus abilities. Based on this consideration, we have proposed a project-based learning, giving a dynamic…

  8. What Is Wrong with ANOVA and Multiple Regression? Analyzing Sentence Reading Times with Hierarchical Linear Models

    ERIC Educational Resources Information Center

    Richter, Tobias

    2006-01-01

    Most reading time studies using naturalistic texts yield data sets characterized by a multilevel structure: Sentences (sentence level) are nested within persons (person level). In contrast to analysis of variance and multiple regression techniques, hierarchical linear models take the multilevel structure of reading time data into account. They…

  9. Multidimensional Classification of Examinees Using the Mixture Random Weights Linear Logistic Test Model

    ERIC Educational Resources Information Center

    Choi, In-Hee; Wilson, Mark

    2015-01-01

    An essential feature of the linear logistic test model (LLTM) is that item difficulties are explained using item design properties. By taking advantage of this explanatory aspect of the LLTM, in a mixture extension of the LLTM, the meaning of latent classes is specified by how item properties affect item difficulties within each class. To improve…

  10. Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth

    ERIC Educational Resources Information Center

    Jeon, Minjeong

    2012-01-01

    Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…

  11. Bayesian Analysis for Linearized Multi-Stage Models in Quantal Bioassay.

    ERIC Educational Resources Information Center

    Kuo, Lynn; Cohen, Michael P.

    Bayesian methods for estimating dose response curves in quantal bioassay are studied. A linearized multi-stage model is assumed for the shape of the curves. A Gibbs sampling approach with data augmentation is employed to compute the Bayes estimates. In addition, estimation of the "relative additional risk" and the "risk specific…

  12. Examining Factors Affecting Science Achievement of Hong Kong in PISA 2006 Using Hierarchical Linear Modeling

    ERIC Educational Resources Information Center

    Lam, Terence Yuk Ping; Lau, Kwok Chi

    2014-01-01

    This study uses hierarchical linear modeling to examine the influence of a range of factors on the science performances of Hong Kong students in PISA 2006. Hong Kong has been consistently ranked highly in international science assessments, such as Programme for International Student Assessment and Trends in International Mathematics and Science…

  13. Meta-Analysis in Higher Education: An Illustrative Example Using Hierarchical Linear Modeling

    ERIC Educational Resources Information Center

    Denson, Nida; Seltzer, Michael H.

    2011-01-01

    The purpose of this article is to provide higher education researchers with an illustrative example of meta-analysis utilizing hierarchical linear modeling (HLM). This article demonstrates the step-by-step process of meta-analysis using a recently-published study examining the effects of curricular and co-curricular diversity activities on racial…

  14. Analyzing Multilevel Data: Comparing Findings from Hierarchical Linear Modeling and Ordinary Least Squares Regression

    ERIC Educational Resources Information Center

    Rocconi, Louis M.

    2013-01-01

    This study examined the differing conclusions one may come to depending upon the type of analysis chosen, hierarchical linear modeling or ordinary least squares (OLS) regression. To illustrate this point, this study examined the influences of seniors' self-reported critical thinking abilities three ways: (1) an OLS regression with the student…

  15. A Closer Look at Charter Schools Using Hierarchical Linear Modeling. NCES 2006-460

    ERIC Educational Resources Information Center

    Braun, Henry; Jenkins, Frank; Grigg, Wendy

    2006-01-01

    Charter schools are a relatively new, but fast-growing, phenomenon in American public education. As such, they merit the attention of all parties interested in the education of the nation's youth. The present report comprises two separate analyses. The first is a "combined analysis" in which hierarchical linear models (HLMs) were…

  16. A linear model fails to predict orientation selectivity of cells in the cat visual cortex.

    PubMed Central

    Volgushev, M; Vidyasagar, T R; Pei, X

    1996-01-01

    1. Postsynaptic potentials (PSPs) evoked by visual stimulation in simple cells in the cat visual cortex were recorded using in vivo whole-cell technique. Responses to small spots of light presented at different positions over the receptive field and responses to elongated bars of different orientations centred on the receptive field were recorded. 2. To test whether a linear model can account for orientation selectivity of cortical neurones, responses to elongated bars were compared with responses predicted by a linear model from the receptive field map obtained from flashing spots. 3. The linear model faithfully predicted the preferred orientation, but not the degree of orientation selectivity or the sharpness of orientation tuning. The ratio of optimal to non-optimal responses was always underestimated by the model. 4. Thus non-linear mechanisms, which can include suppression of non-optimal responses and/or amplification of optimal responses, are involved in the generation of orientation selectivity in the primary visual cortex. PMID:8930828

  17. The Impact of Model Misspecification on Parameter Estimation and Item-Fit Assessment in Log-Linear Diagnostic Classification Models

    ERIC Educational Resources Information Center

    Kunina-Habenicht, Olga; Rupp, Andre A.; Wilhelm, Oliver

    2012-01-01

    Using a complex simulation study we investigated parameter recovery, classification accuracy, and performance of two item-fit statistics for correct and misspecified diagnostic classification models within a log-linear modeling framework. The basic manipulated test design factors included the number of respondents (1,000 vs. 10,000), attributes (3…

  18. Estimate of influenza cases using generalized linear, additive and mixed models.

    PubMed

    Oviedo, Manuel; Domínguez, Ángela; Pilar Muñoz, M

    2015-01-01

    We investigated the evolution of reported cases of influenza in Catalonia (Spain). The covariates analyzed were population, age, date of influenza report, and health region during 2010-2014, using data obtained from the SISAP program (Institut Catala de la Salut - Generalitat of Catalonia). Reported cases were related to the covariates in a descriptive analysis. Generalized Linear Models, Generalized Additive Models and Generalized Additive Mixed Models were used to estimate the evolution of the transmission of influenza. Additive models can estimate non-linear effects of the covariates by smooth functions, and mixed models can capture data dependence and variability in factor variables using correlation structures and random effects, respectively. The incidence rate of influenza was calculated as the incidence per 100 000 people. The mean rate was 13.75 (range 0-27.5) in the winter months (December, January, February) and 3.38 (range 0-12.57) in the remaining months. Statistical analysis showed that Generalized Additive Mixed Models were better adapted to the temporal evolution of influenza (serial correlation 0.59) than classical linear models.
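As a minimal sketch of the GLM building block in this toolchain (the GAM/GAMM extensions add smooth terms and random effects on top), a Poisson regression with a log link can be fitted by iteratively reweighted least squares. The seasonal data and coefficients below are simulated for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated monthly counts with a winter effect, loosely echoing the
# influenza setting; the coefficients are made up.
n = 240
winter = (np.arange(n) % 12 < 3).astype(float)
X = np.column_stack([np.ones(n), winter])
beta_true = np.array([2.0, 1.2])
y = rng.poisson(np.exp(X @ beta_true))

# Poisson GLM (log link) fitted by iteratively reweighted least squares.
beta = np.zeros(2)
for _ in range(100):
    mu = np.exp(X @ beta)
    z = X @ beta + (y - mu) / mu          # working response
    w = mu                                # working weights
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))

print(beta)   # converges close to beta_true
```

A GAM would replace the winter dummy with a smooth function of time, and a GAMM would add random effects per health region; the IRLS core stays the same.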

  19. Dual pairs of gauged linear sigma models and derived equivalences of Calabi-Yau threefolds

    NASA Astrophysics Data System (ADS)

    Gerhardus, Andreas; Jockers, Hans

    2017-04-01

    In this work we study the phase structure of skew symplectic sigma models, which are a certain class of two-dimensional N =(2 , 2) non-Abelian gauged linear sigma models. At low energies some of them flow to non-linear sigma models with Calabi-Yau target spaces, which emerge from non-Abelian strong coupling dynamics. The observed phase structure results in a non-trivial duality proposal among skew symplectic sigma models and connects non-complete intersection Calabi-Yau threefolds, which are non-birational to one another, in a common quantum Kähler moduli space. As a consequence we find non-trivial identifications of spectra of topological B-branes, which from a modern algebraic geometry perspective imply derived equivalences among Calabi-Yau varieties. To further support our proposals, we calculate the two-sphere partition function of skew symplectic sigma models to determine geometric invariants, which confirm the anticipated Calabi-Yau threefold phases. We show that the two-sphere partition functions of a pair of dual skew symplectic sigma models agree in a non-trivial fashion. To carry out these calculations, we develop a systematic approach to the study of higher-dimensional Mellin-Barnes type integrals. In particular, these techniques admit the evaluation of two-sphere partition functions for gauged linear sigma models with higher-rank gauge groups, but are applicable in other contexts as well.

  20. Linear and nonlinear models for predicting fish bioconcentration factors for pesticides.

    PubMed

    Yuan, Jintao; Xie, Chun; Zhang, Ting; Sun, Jinfang; Yuan, Xuejie; Yu, Shuling; Zhang, Yingbiao; Cao, Yunyuan; Yu, Xingchen; Yang, Xuan; Yao, Wu

    2016-08-01

    This work is devoted to the applications of the multiple linear regression (MLR), multilayer perceptron neural network (MLP NN) and projection pursuit regression (PPR) to quantitative structure-property relationship analysis of bioconcentration factors (BCFs) of pesticides tested on Bluegill (Lepomis macrochirus). Molecular descriptors of a total of 107 pesticides were calculated with the DRAGON Software and selected by inverse enhanced replacement method. Based on the selected DRAGON descriptors, a linear model was built by MLR, nonlinear models were developed using MLP NN and PPR. The robustness of the obtained models was assessed by cross-validation and external validation using test set. Outliers were also examined and deleted to improve predictive power. Comparative results revealed that PPR achieved the most accurate predictions. This study offers useful models and information for BCF prediction, risk assessment, and pesticide formulation.

  1. Model Order and Identifiability of Non-Linear Biological Systems in Stable Oscillation.

    PubMed

    Wigren, Torbjörn

    2015-01-01

    The paper presents a theoretical result that clarifies when it is at all possible to determine the nonlinear dynamic equations of a biological system in stable oscillation from measured data. As it turns out, the minimal order needed for this depends on the minimal dimension in which the stable orbit of the system does not intersect itself. This is illustrated with a simulated fourth-order Hodgkin-Huxley spiking neuron model, which is identified using a non-linear second-order differential equation model. The simulated result illustrates that the underlying higher-order model of the spiking neuron cannot be uniquely determined given only the periodic measured data. The result of the paper is of general validity when the dynamics of biological systems in stable oscillation are identified, and it illustrates the need to carefully address non-linear identifiability aspects when validating models based on periodic data.

  2. Studies in astronomical time series analysis. IV - Modeling chaotic and random processes with linear filters

    NASA Technical Reports Server (NTRS)

    Scargle, Jeffrey D.

    1990-01-01

    While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.
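The core representation in this record can be demonstrated directly (the filter taps and series length below are illustrative): convolving a short linear filter with a white innovation yields a process whose second-order statistics are determined by the filter alone, which is what the deconvolution step exploits.

```python
import numpy as np

rng = np.random.default_rng(2)

# Moving-average sketch: an observed series as the convolution of a short
# linear filter with an uncorrelated "innovation" sequence.
h = np.array([1.0, 0.6, 0.25])          # linear filter (impulse response), made up
e = rng.standard_normal(5000)           # unit-variance white innovation
x = np.convolve(e, h)[: e.size]         # observed process

# Because e is white with unit variance, the autocovariance of x equals the
# autocorrelation of the filter taps.
def acov(s, k):
    return np.mean((s[:-k] * s[k:]) if k else s * s)

print(acov(x, 0), h @ h)                        # lag-0: close to 1.4225
print(acov(x, 1), h[0] * h[1] + h[1] * h[2])    # lag-1: close to 0.75
```

Second-order statistics alone cannot distinguish chaotic from random innovations; that is why the paper's minimum phase-volume criterion looks at the phase portrait of the recovered innovation instead.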

  3. A linear dispersion relation for the hybrid kinetic-ion/fluid-electron model of plasma physics

    NASA Astrophysics Data System (ADS)

    Told, D.; Cookmeyer, J.; Astfalk, P.; Jenko, F.

    2016-07-01

    A dispersion relation for a commonly used hybrid model of plasma physics is developed, which combines fully kinetic ions and a massless-electron fluid description. Although this model and variations of it have been used to describe plasma phenomena for about 40 years, to date there exists no general dispersion relation to describe the linear wave physics contained in the model. Previous efforts along these lines are extended here to retain arbitrary wave propagation angles, temperature anisotropy effects, as well as additional terms in the generalized Ohm’s law which determines the electric field. A numerical solver for the dispersion relation is developed, and linear wave physics is benchmarked against solutions of a full Vlasov-Maxwell dispersion relation solver. This work opens the door to a more accurate interpretation of existing and future wave and turbulence simulations using this type of hybrid model.

  4. Linear orthotropic viscoelasticity model for fiber reinforced thermoplastic material based on Prony series

    NASA Astrophysics Data System (ADS)

    Endo, Vitor Takashi; de Carvalho Pereira, José Carlos

    2016-09-01

    Describing and understanding material properties is essential when computational solid mechanics is applied to product development. In order to promote injected fiber-reinforced thermoplastic materials for structural applications, it is important to develop material characterization procedures that consider the variation of mechanical properties with fiber orientation and loading time. Therefore, a methodology covering sample manufacturing, mechanical tests and data treatment is described in this study. The mathematical representation of the material properties was handled by a linear viscoelastic constitutive model described by a Prony series, which was properly adapted to orthotropic materials. Due to the large number of proposed constitutive model coefficients, a parameter identification method was employed to define the mathematical functions. This procedure yielded good correlation among the experimental tests and the analytical and numerical creep models. Such results encourage the use of numerical simulations for the development of structural components with the proposed linear viscoelastic orthotropic constitutive model. A case study is presented to illustrate an industrial application of the proposed methodology.

  5. Efficient multivariate linear mixed model algorithms for genome-wide association studies.

    PubMed

    Zhou, Xiang; Stephens, Matthew

    2014-04-01

    Multivariate linear mixed models (mvLMMs) are powerful tools for testing associations between single-nucleotide polymorphisms and multiple correlated phenotypes while controlling for population stratification in genome-wide association studies. We present efficient algorithms in the genome-wide efficient mixed model association (GEMMA) software for fitting mvLMMs and computing likelihood ratio tests. These algorithms offer improved computation speed, power and P-value calibration over existing methods, and can deal with more than two phenotypes.

  6. A wavelet-linear genetic programming model for sodium (Na+) concentration forecasting in rivers

    NASA Astrophysics Data System (ADS)

    Ravansalar, Masoud; Rajaee, Taher; Zounemat-Kermani, Mohammad

    2016-06-01

    The prediction of water quality parameters in water resources such as rivers is an important issue for the better management of irrigation systems and water supplies. In this respect, this study proposes a new hybrid wavelet-linear genetic programming (WLGP) model for the prediction of monthly sodium (Na+) concentration. The 23-year monthly data used in this study were measured from the Asi River at the Demirköprü gauging station located in Antakya, Turkey. First, the measured discharge (Q) and Na+ datasets are decomposed into several sub-series using the discrete wavelet transform (DWT). Then, these new sub-series are fed to the linear genetic programming (LGP) model as input patterns to predict monthly Na+ one month ahead. The results of the newly proposed WLGP model are compared with LGP, WANN and ANN models. The comparison demonstrates the superiority of the WLGP model over the LGP, WANN and ANN models: the Nash-Sutcliffe efficiencies (NSE) for the WLGP, WANN, LGP and ANN models were 0.984, 0.904, 0.484 and 0.351, respectively. The achieved results even point to the superiority of the single LGP model over the ANN model. The capability of the proposed WLGP model in predicting Na+ peak values is also presented in this study.
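The decomposition step can be sketched with a one-level Haar transform (the study's wavelet choice and decomposition depth may differ, and the signal values here are made up): the series splits into approximation and detail sub-series that reconstruct the original exactly, so no information is lost before the sub-series are fed to the downstream model.

```python
import numpy as np

# One level of a Haar discrete wavelet transform, the kind of decomposition
# used to split a series into approximation and detail sub-series.
x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])   # illustrative signal

approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-frequency sub-series
detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-frequency sub-series

# Perfect reconstruction: the inverse transform restores the signal exactly.
rec = np.empty_like(x)
rec[0::2] = (approx + detail) / np.sqrt(2)
rec[1::2] = (approx - detail) / np.sqrt(2)
assert np.allclose(rec, x)
```

Because the Haar transform is orthonormal, the energy of the signal is preserved across the sub-series, which helps the downstream model treat slow trends and fast fluctuations separately.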

  7. ALPS - A LINEAR PROGRAM SOLVER

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.

    1994-01-01

    Linear programming is a widely-used engineering and management tool. Scheduling, resource allocation, and production planning are all well-known applications of linear programs (LP's). Most LP's are too large to be solved by hand, so over the decades many computer codes for solving LP's have been developed. ALPS, A Linear Program Solver, is a full-featured LP analysis program. ALPS can solve plain linear programs as well as more complicated mixed integer and pure integer programs. ALPS also contains an efficient solution technique for pure binary (0-1 integer) programs. One of the many weaknesses of LP solvers is the lack of interaction with the user. ALPS is a menu-driven program with no special commands or keywords to learn. In addition, ALPS contains a full-screen editor to enter and maintain the LP formulation. These formulations can be written to and read from plain ASCII files for portability. For those less experienced in LP formulation, ALPS contains a problem "parser" which checks the formulation for errors. ALPS creates fully formatted, readable reports that can be sent to a printer or output file. ALPS is written entirely in IBM's APL2/PC product, Version 1.01. The APL2 workspace containing all the ALPS code can be run on any APL2/PC system (AT or 386). On a 32-bit system, this configuration can take advantage of all extended memory. The user can also examine and modify the ALPS code. The APL2 workspace has also been "packed" to be run on any DOS system (without APL2) as a stand-alone "EXE" file, but has limited memory capacity on a 640K system. A numeric coprocessor (80X87) is optional but recommended. The standard distribution medium for ALPS is a 5.25 inch 360K MS-DOS format diskette. IBM, IBM PC and IBM APL2 are registered trademarks of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation.
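The pure binary (0-1 integer) programs that ALPS handles can be illustrated with a toy instance solved by exhaustive search (this sketch is not ALPS's solution technique, and the knapsack numbers are invented):

```python
from itertools import product

# Toy 0-1 integer program:
#   maximize  5a + 4b + 3c
#   subject to 2a + 3b + c <= 4,  a, b, c in {0, 1}
profit = (5, 4, 3)
weight = (2, 3, 1)
capacity = 4

best_val, best_x = -1, None
for x in product((0, 1), repeat=3):          # enumerate all 2^3 assignments
    if sum(w * xi for w, xi in zip(weight, x)) <= capacity:
        val = sum(p * xi for p, xi in zip(profit, x))
        if val > best_val:
            best_val, best_x = val, x

print(best_val, best_x)   # (a, c) = (1, 1) is optimal with value 8
```

Exhaustive search is only viable for a handful of variables; dedicated binary-program techniques like the one in ALPS prune this search space instead of enumerating it.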

  8. Extraction of battery parameters using a multi-objective genetic algorithm with a non-linear circuit model

    NASA Astrophysics Data System (ADS)

    Malik, Aimun; Zhang, Zheming; Agarwal, Ramesh K.

    2014-08-01

    There is a need for a battery model that can accurately describe battery performance for an electrical system, such as the electric drive train of an electric vehicle. In this paper, both linear and non-linear equivalent circuit models (ECM) are employed as a means of extracting the battery parameters that can be used to model the performance of a battery. The linear and non-linear equivalent circuit models differ in the number of capacitances and resistances; the non-linear model has an additional circuit, but their numerical characteristics are equivalent. A multi-objective genetic algorithm is employed to accurately extract the values of the battery model parameters. The battery model parameters are obtained for several existing industrial batteries as well as for two recently proposed high-performance batteries. Once the model parameters are optimally determined, the results demonstrate that both linear and non-linear equivalent circuit models can predict with acceptable accuracy the performance of various batteries of different sizes, characteristics, capacities, and materials. However, comparison of the results with catalog and experimental data shows that the predictions of the non-linear equivalent circuit model are slightly better than those of the linear model, yielding voltages that are closer to the manufacturers' values.
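A minimal sketch of the kind of equivalent circuit being parameterized (a first-order Thevenin model with hypothetical parameter values, not the paper's specific ECM): the terminal voltage under a constant load combines the open-circuit voltage, an instantaneous ohmic drop, and a slower RC relaxation. These are the parameters a genetic algorithm would tune to match measured discharge curves.

```python
import math

# First-order Thevenin equivalent circuit (all values hypothetical):
# open-circuit voltage, series resistance, and one RC branch.
ocv, r0, r1, c1 = 3.7, 0.05, 0.03, 2000.0
i_load = 2.0                               # constant discharge current (A)

def terminal_voltage(t):
    """Terminal voltage t seconds after the load step."""
    v_rc = i_load * r1 * (1.0 - math.exp(-t / (r1 * c1)))  # RC branch relaxation
    return ocv - i_load * r0 - v_rc

print(terminal_voltage(0.0))   # instantaneous ohmic drop only: 3.6
print(terminal_voltage(1e6))   # steady state: 3.7 - 2*(0.05 + 0.03) = 3.54
```

Adding more RC branches (or, as in the paper, a non-linear circuit element) gives the optimizer more degrees of freedom at the cost of a harder identification problem.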

  9. Solving inverse problems with piecewise linear estimators: from Gaussian mixture models to structured sparsity.

    PubMed

    Yu, Guoshen; Sapiro, Guillermo; Mallat, Stéphane

    2012-05-01

    A general framework for solving image inverse problems with piecewise linear estimations is introduced in this paper. The approach is based on Gaussian mixture models, which are estimated via a maximum a posteriori expectation-maximization algorithm. A dual mathematical interpretation of the proposed framework with a structured sparse estimation is described, which shows that the resulting piecewise linear estimate stabilizes the estimation when compared with traditional sparse inverse problem techniques. We demonstrate that, in a number of image inverse problems, including interpolation, zooming, and deblurring of narrow kernels, the same simple and computationally efficient algorithm yields results in the same ballpark as that of the state of the art.

  10. Integrating real-time and manual monitored data to predict hillslope soil moisture dynamics with high spatio-temporal resolution using linear and non-linear models

    NASA Astrophysics Data System (ADS)

    Zhu, Qing; Zhou, Zhiwen; Duncan, Emily W.; Lv, Ligang; Liao, Kaihua; Feng, Huihui

    2017-02-01

    Spatio-temporal variability of soil moisture (θ) is a challenge that remains to be better understood. A trade-off exists between spatial coverage and temporal resolution when using manual and real-time θ monitoring methods, which has restricted comprehensive and intensive examination of θ dynamics. In this study, we integrated manually and real-time monitored data to depict hillslope θ dynamics with good spatial coverage and temporal resolution. Linear (stepwise multiple linear regression, SMLR) and non-linear (support vector machine, SVM) models were used to predict θ at 39 manual sites (sampled 1-2 times per month) from θ collected at three real-time monitoring sites (sampled every 5 min). By comparing the accuracies of SMLR and SVM for each depth at each manual site, an optimal prediction model was determined for that depth and site. Results showed that θ at the 39 manual sites can be reliably predicted (root mean square errors <0.028 m3 m-3) using both SMLR and SVM. The linear or non-linear relationship between θ at each manual site and at the three real-time monitoring sites was the main reason for choosing SMLR or SVM as the optimal prediction model. The subsurface flow dynamics were an important factor determining whether the relationship was linear or non-linear. Depth to bedrock, elevation, topographic wetness index, profile curvature, and θ temporal stability influenced the selection of the prediction model since they are related to subsurface soil water distribution and movement. Using this approach, hillslope θ spatial distributions at un-sampled times and dates can be predicted, and missing information on hillslope θ dynamics can be acquired successfully.

  11. Piecewise-homogeneous model for electron side injection into linear plasma waves

    NASA Astrophysics Data System (ADS)

    Golovanov, A. A.; Kostyukov, I. Yu.

    2016-09-01

    An analytical piecewise-homogeneous model for electron side injection into linear plasma waves is developed. The dynamics of transverse betatron oscillations are studied. Based on the characteristics of the transverse motion, the longitudinal motion of the electrons is described. The electron parameters for which electron trapping and subsequent acceleration are possible are estimated. The analytical results are verified by numerical simulations within the scope of the piecewise-homogeneous model. The results predicted by this model are also compared to the results given by a more realistic inhomogeneous model.

  12. Cogging force rejection method of linear motor based on internal model principle

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Chen, Zhenyu; Yang, Tianbo

    2015-02-01

    The cogging force disturbance of a linear motor is one of the main factors affecting the positioning accuracy of an ultraprecision moving platform. This drawback cannot be completely overcome by improving the design of the motor body, such as modifying the location of the permanent magnet array or optimizing the shape of the tooth slots, so active compensation algorithms have become prevalent in cogging force rejection. This paper proposes a control structure based on the internal model principle to attenuate the cogging force of a linear motor that deteriorates positioning accuracy; this structure allows the tracking and disturbance-rejection performance of the closed loop to be designed separately. First, the cogging force was treated as an intrinsic property of the linear motor, and its model, which together with the motor body model constitutes the controlled object, was obtained by a data-driven recursive identification method. Then, a control structure was designed to handle tracking and disturbance rejection separately using the internal model principle. Finally, the proposed method was verified on a long-stroke moving platform driven by a linear motor. The experimental results show that, by employing this control strategy, the positioning error caused by the cogging force was decreased by 70%.

  13. Linear Modeling and Evaluation of Controls on Flow Response in Western Post-Fire Watersheds

    NASA Astrophysics Data System (ADS)

    Saxe, S.; Hogue, T. S.; Hay, L.

    2015-12-01

    This research investigates the impact of wildfires on watershed flow regimes throughout the western United States, specifically focusing on the evaluation of fire events within specified subregions and on determining the influence of climate and geophysical variables on post-fire flow response. Fire events were collected through federal and state-level databases, and streamflow data were collected from U.S. Geological Survey stream gages. 263 watersheds were identified with at least 10 years of continuous pre-fire daily streamflow records and 5 years of continuous post-fire daily flow records. For each watershed, percent changes from pre- to post-fire were calculated in runoff ratio (RO), annual seven-day low flows (7Q2), and annual seven-day high flows (7Q10). Numerous independent variables were identified for each watershed and fire event, including topographic, land cover, climate, burn severity, and soils data. The watersheds were divided into five regions through K-means clustering, and a lasso linear regression model, calibrated with the leave-one-out method, was fitted for each region. Nash-Sutcliffe Efficiency (NSE) was used to assess the accuracy of the resulting models. The regions along and west of the Rocky Mountains, excluding the coastal watersheds, produced the most accurate linear models. The Pacific coast region models produced poor and inconsistent results, indicating that those regions need to be further subdivided. Presently, the runoff-ratio and high-flow response variables appear to be more easily modeled than the low-flow variable. The linear regression results showed varying importance of watershed and fire event variables, with conflicting correlations between land cover types and soil types across regions. The addition of further independent variables, and the pruning of current variables based on correlation indicators, is ongoing and should allow for more accurate linear regression modeling.
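
    The Nash-Sutcliffe Efficiency used to score the regional models has a simple closed form: NSE = 1 − Σ(obs − sim)² / Σ(obs − mean(obs))². A minimal sketch with made-up data:

```python
def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).

    1 is a perfect fit; 0 means the model is no better than simply
    predicting the observed mean; negative values are worse than that.
    """
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / var

obs = [2.0, 4.0, 6.0, 8.0]
print(nash_sutcliffe(obs, obs))                    # perfect model -> 1.0
print(nash_sutcliffe(obs, [5.0, 5.0, 5.0, 5.0]))  # mean-only model -> 0.0
```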

  14. Power of Latent Growth Modeling for Detecting Linear Growth: Number of Measurements and Comparison with Other Analytic Approaches

    ERIC Educational Resources Information Center

    Fan, Xitao; Fan, Xiaotao

    2005-01-01

    The authors investigated 2 issues concerning the power of latent growth modeling (LGM) in detecting linear growth: the effect of the number of repeated measurements on LGM's power in detecting linear growth and the comparison between LGM and some other approaches in terms of power for detecting linear growth. A Monte Carlo simulation design was…
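
    One of the simpler analytic approaches such power comparisons typically include is a two-stage slope analysis: fit an ordinary-least-squares slope per subject, then t-test whether the mean slope differs from zero. Its power to detect linear growth can be estimated by Monte Carlo simulation; a minimal sketch with made-up effect sizes (not the study's simulation design):

```python
import random

def subject_slope(times, values):
    """OLS slope of values regressed on times."""
    n = len(times)
    mt = sum(times) / n
    mv = sum(values) / n
    sxy = sum((t - mt) * (v - mv) for t, v in zip(times, values))
    sxx = sum((t - mt) ** 2 for t in times)
    return sxy / sxx

def power_estimate(n_sims=200, n_subj=30, seed=1):
    """Monte Carlo power of detecting a nonzero mean growth slope."""
    rng = random.Random(seed)
    times = [0.0, 1.0, 2.0, 3.0]  # four repeated measurements
    hits = 0
    for _ in range(n_sims):
        slopes = []
        for _ in range(n_subj):
            b = rng.gauss(0.5, 0.2)  # this subject's true linear slope
            ys = [1.0 + b * t + rng.gauss(0.0, 0.5) for t in times]
            slopes.append(subject_slope(times, ys))
        m = sum(slopes) / n_subj
        sd = (sum((s - m) ** 2 for s in slopes) / (n_subj - 1)) ** 0.5
        t_stat = m / (sd / n_subj ** 0.5)
        if abs(t_stat) > 2.045:  # two-sided t critical value, df = 29
            hits += 1
    return hits / n_sims

print(power_estimate())  # close to 1.0 for this (deliberately large) effect
```

    Adding measurement occasions shrinks the per-subject slope variance, which is one mechanism behind the abstract's first research question.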

  15. Modeling and Simulating of Single Side Short Stator Linear Induction Motor with the End Effect

    NASA Astrophysics Data System (ADS)

    Hamzehbahmani, Hamed

    2011-09-01

    Linear induction motors are under development for a variety of demanding applications, including high-speed ground transportation and specific industrial applications. These applications require machines that can produce large forces, operate at high speeds, and be controlled precisely to meet performance requirements. The design and implementation of such systems require fast and accurate techniques for system simulation and control system design. In this paper, a mathematical model for a single-sided short-stator linear induction motor that accounts for the end effects is presented. To study the dynamic performance of this linear motor, MATLAB/Simulink simulations are carried out, and the experimental results are compared to the simulation results.
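
    A common way to capture the end effect in short-stator LIM models is Duncan's approach (whether this matches the paper's exact formulation is an assumption): the magnetizing inductance is reduced by a speed-dependent factor f(Q) = (1 − e^(−Q))/Q, where Q is a dimensionless primary length that shrinks as speed grows. A minimal sketch with illustrative parameter values:

```python
import math

def end_effect_factor(Q):
    """Duncan end-effect factor f(Q) = (1 - exp(-Q))/Q; f -> 1 as Q -> 0."""
    return 1.0 if Q == 0 else (1.0 - math.exp(-Q)) / Q

def effective_magnetizing_inductance(Lm, primary_length, Rr, Llr, speed):
    """Speed-dependent magnetizing inductance Lm * (1 - f(Q)).

    Q = l * Rr / ((Lm + Llr) * v). Parameter values below are
    illustrative, not taken from the paper.
    """
    if speed == 0:
        return Lm  # no end effect at standstill
    Q = primary_length * Rr / ((Lm + Llr) * speed)
    return Lm * (1.0 - end_effect_factor(Q))

# The end effect strengthens with speed: effective Lm shrinks as v grows.
for v in (1.0, 10.0, 50.0):
    print(v, effective_magnetizing_inductance(0.03, 1.0, 2.0, 0.003, v))
```

    This is why a constant-parameter induction motor model, adequate for rotary machines, overestimates thrust for a LIM at high speed.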

  16. Robust unknown input observer design for state estimation and fault detection using linear parameter varying model

    NASA Astrophysics Data System (ADS)

    Li, Shanzhi; Wang, Haoping; Aitouche, Abdel; Tian, Yang; Christov, Nicolai

    2017-01-01

    This paper proposes a robust unknown input observer (UIO) for state estimation and fault detection using a linear parameter varying model. Since disturbances and actuator faults are mixed together in the physical system, it is difficult to isolate the faults from the disturbances. Using a state transformation, the estimation of the original state is associated with the transformed state. The parameters of the UIO are obtained by solving linear matrix inequalities (LMIs) and linear matrix equalities (LMEs), and the convergence of the UIO is analysed using Lyapunov theory. Finally, the proposed method is tested on a wind turbine system subject to a disturbance and an actuator fault. The simulations demonstrate the effectiveness and performance of the proposed method.
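
    The residual-generation idea behind observer-based fault detection can be sketched in one dimension with a plain Luenberger observer (not the paper's LMI-designed UIO, and with made-up values): the output-estimation residual stays near zero while the model matches the plant, then jumps past a threshold once an actuator fault appears.

```python
def detect_fault(steps=100, fault_at=50, fault=0.2, threshold=0.1):
    """Scalar plant x' = 0.9*x + u + f observed with gain L = 0.5.

    Returns the residual magnitude just before the fault and at the end.
    """
    a, L = 0.9, 0.5
    x, xh = 1.0, 0.0  # true state and observer estimate
    before = after = 0.0
    for k in range(steps):
        u = 0.1                      # arbitrary known input
        f = fault if k >= fault_at else 0.0
        y = x                        # measurement (full state, for simplicity)
        r = y - xh                   # residual: observer innovation
        if k == fault_at - 1:
            before = abs(r)
        if k == steps - 1:
            after = abs(r)
        xh = a * xh + u + L * r      # observer update
        x = a * x + u + f            # plant update (f = actuator fault)
    return before, after

before, after = detect_fault()
print(before, after)  # residual exceeds the threshold only after the fault
```

    The role of the unknown input observer in the paper is precisely to keep the "before" residual insensitive to the disturbance while leaving it sensitive to the fault, which a plain observer like this one cannot do.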

  17. Universality of Effective Medium and Random Resistor Network models for disorder-induced linear unsaturating magnetoresistance

    NASA Astrophysics Data System (ADS)

    Lara, Silvia; Lai, Ying Tong; Love, Cameron; Ramakrishnan, Navneeth; Adam, Shaffique

    In recent years, the Effective Medium Theory (EMT) and the Random Resistor Network (RRN) have been separately used to explain disorder-induced magnetoresistance that is quadratic at low fields and linear at high fields. We demonstrate that the quadratic and linear coefficients of the magnetoresistance, and the transition point from the quadratic to the linear regime, depend only on the inhomogeneous carrier density profile. We use this to find a mapping between the two models in terms of the dimensionless parameters that determine the magnetoresistance, and we show numerically that the two models belong to the same universality class. This work is supported by the Singapore National Research Foundation (NRF-NRFF2012-01) and the Singapore Ministry of Education and Yale-NUS College through Grant Number R-607-265-01312.

  18. Modelling lactation curve for milk fat to protein ratio in Iranian buffaloes (Bubalus bubalis) using non-linear mixed models.

    PubMed

    Hossein-Zadeh, Navid Ghavi

    2016-08-01

    The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, Wood, Dhanoa and Sikka mixed models provided the best fit of lactation curve for FPR in the third parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for Dijkstra model in the third lactation, under-predicted test time at which daily FPR was minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
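
    Among the compared models, the Wood curve y(t) = a·t^b·e^(−ct) has the convenient property that ln y = ln a + b·ln t − c·t is linear in its parameters, so the fixed-effects-only case (unlike the paper's PROC NLMIXED mixed models) can be fitted by ordinary least squares after a log transform. A minimal sketch on synthetic monthly records generated from known parameters:

```python
import math

def fit_wood(times, yields):
    """Fit y = a * t**b * exp(-c*t) by least squares on ln y.

    ln y = ln a + b*ln t - c*t is linear in (ln a, b, c), so we solve
    the 3x3 normal equations by Gaussian elimination.
    """
    rows = [[1.0, math.log(t), -t] for t in times]
    z = [math.log(y) for y in yields]
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    xtz = [sum(r[i] * v for r, v in zip(rows, z)) for i in range(3)]
    M = [xtx[i] + [xtz[i]] for i in range(3)]  # augmented matrix
    for col in range(3):                       # elimination with pivoting
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    beta = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                        # back substitution
        beta[i] = (M[i][3] - sum(M[i][j] * beta[j]
                                 for j in range(i + 1, 3))) / M[i][i]
    return math.exp(beta[0]), beta[1], beta[2]  # a, b, c

# Noiseless synthetic test-day records from known parameters:
a, b, c = 1.2, 0.3, 0.05
ts = [float(t) for t in range(1, 11)]
ys = [a * t ** b * math.exp(-c * t) for t in ts]
print(fit_wood(ts, ys))  # recovers approximately (1.2, 0.3, 0.05)
```

    With noisy data, the log transform biases the fit slightly, which is one reason direct non-linear (mixed) estimation, as used in the study, is preferred in practice.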

  19. Enhanced representations of lithium-ion batteries in power systems models and their effect on the valuation of energy arbitrage applications

    NASA Astrophysics Data System (ADS)

    Sakti, Apurba; Gallagher, Kevin G.; Sepulveda, Nestor; Uckun, Canan; Vergara, Claudio; de Sisternes, Fernando J.; Dees, Dennis W.; Botterud, Audun

    2017-02-01

    We develop three novel enhanced mixed-integer linear representations of the power limit of the battery and its efficiency as functions of the charge and discharge power and the state of charge of the battery; these can be directly implemented in large-scale power systems models and solved with commercial optimization solvers. Using these battery representations, we conduct a techno-economic analysis of the performance of a 10 MWh lithium-ion battery system, testing the effect of a 5-min vs. a 60-min price signal on profits using real-time prices from a selected node in the MISO electricity market. Results show that models of lithium-ion batteries in which the power limits and efficiency are held constant overestimate profits by 10% compared to those obtained from an enhanced representation that more closely matches the real behavior of the battery. When the battery system is exposed to a 5-min price signal, the energy arbitrage profitability improves by 60% compared to that from hourly price exposure. These results indicate that a more accurate representation of lithium-ion batteries, as well as of the market rules that govern the frequency of electricity prices, can play a major role in estimating the value of battery technologies for power grid applications.
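
    The baseline being improved upon, a battery with constant power limits and constant efficiency, reduces energy arbitrage to a small optimization over the state of charge. A minimal sketch as a dynamic program (a toy 1-unit battery with made-up prices, not the paper's MILP formulations):

```python
def arbitrage_profit(prices, eta=0.9):
    """Maximum arbitrage profit for a 1-unit battery: buy 1 unit at the
    current price, or sell it back with round-trip efficiency eta applied
    on discharge. Dynamic program over state of charge soc in {0, 1}.
    """
    value = {0: 0.0, 1: 0.0}  # best profit-to-go per soc, built backwards
    for p in reversed(prices):
        value = {
            0: max(value[0],             # idle while empty
                   value[1] - p),        # charge: pay p, become full
            1: max(value[1],             # idle while full
                   value[0] + eta * p),  # discharge: earn eta * p
        }
    return value[0]  # battery starts empty

prices = [10.0, 50.0, 20.0, 60.0]
print(arbitrage_profit(prices))  # buy at 10, sell at 50; buy at 20, sell at 60
```

    The paper's enhanced representations replace the constant eta and the fixed charge/discharge limit with state-of-charge- and power-dependent ones, which is what requires the mixed-integer machinery.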

  20. Lattice model of linear telechelic polymer melts. II. Influence of chain stiffness on basic thermodynamic properties

    SciTech Connect

    Xu, Wen-Sheng; Freed, Karl F.

    2015-07-14

    The lattice cluster theory (LCT) for semiflexible linear telechelic melts, developed in Paper I, is applied to examine the influence of chain stiffness on the average degree of self-assembly and the basic thermodynamic properties of linear telechelic polymer melts. Our calculations imply that chain stiffness promotes self-assembly of linear telechelic polymer melts that assemble on cooling when either polymer volume fraction ϕ or temperature T is high, but opposes self-assembly when both ϕ and T are sufficiently low. This allows us to identify a boundary line in the ϕ-T plane that separates two regions of qualitatively different influence of chain stiffness on self-assembly. The enthalpy and entropy of self-assembly are usually treated as adjustable parameters in classical Flory-Huggins type theories for the equilibrium self-assembly of polymers, but they are demonstrated here to strongly depend on chain stiffness. Moreover, illustrative calculations for the dependence of the entropy density of linear telechelic polymer melts on chain stiffness demonstrate the importance of including semiflexibility within the LCT when exploring the nature of glass formation in models of linear telechelic polymer melts.