Learning oncogenetic networks by reducing to mixed integer linear programming.
Shahrabi Farahani, Hossein; Lagergren, Jens
2013-01-01
Cancer can result from the accumulation of different types of genetic mutations, such as copy number aberrations. Tumor data are cross-sectional and do not contain the temporal order of the genetic events. Finding the order in which the genetic events have occurred, and the progression pathways, is of vital importance in understanding the disease. To model cancer progression, we propose Progression Networks, a special case of Bayesian networks tailored to model disease progression. Progression networks have similarities with Conjunctive Bayesian Networks (CBNs) [1], a variation of Bayesian networks also proposed for modeling disease progression. We also describe an algorithm for learning Bayesian networks in general and progression networks in particular. We reduce the hard problem of learning Bayesian and progression networks to Mixed Integer Linear Programming (MILP). MILP is a Non-deterministic Polynomial-time complete (NP-complete) problem for which very good heuristics exist. We tested our algorithm on synthetic and real cytogenetic data from renal cell carcinoma, and compared our learned progression networks with the networks proposed in earlier publications. The software is available at https://bitbucket.org/farahani/diprog.
A Mixed Integer Linear Program for Airport Departure Scheduling
NASA Technical Reports Server (NTRS)
Gupta, Gautam; Jung, Yoon Chul
2009-01-01
Aircraft departing from an airport are subject to numerous constraints while scheduling departure times. These constraints include wake-separation constraints for successive departures, miles-in-trail separation for aircraft bound for the same departure fixes, and time-window or prioritization constraints for individual flights. Besides these, emissions as well as increased fuel consumption due to inefficient scheduling need to be included. Addressing all the above constraints in a single framework while allowing for resequencing of the aircraft using runway queues is critical to the implementation of the Next Generation Air Transport System (NextGen) concepts. Prior work on airport departure scheduling has addressed some of the above. However, existing methods use pre-determined runway queues, and schedule aircraft from these departure queues. The source of such pre-determined queues is not explicit, and could potentially be a subjective controller input. Determining runway queues and scheduling within the same framework would potentially result in better scheduling. This paper presents a mixed integer linear program (MILP) for the departure-scheduling problem. The program takes as input the incoming sequence of aircraft for departure from a runway, along with their earliest departure times and an optional prioritization scheme based on time-window of departure for each aircraft. The program then assigns these aircraft to the available departure queues and schedules departure times, explicitly considering wake separation and departure fix restrictions to minimize total delay for all aircraft. The approach is generalized and can be used in a variety of situations, and allows for aircraft prioritization based on operational as well as environmental considerations. We present the MILP in the paper, along with benefits over the first-come-first-serve (FCFS) scheme for numerous randomized problems based on real-world settings. The MILP results in substantially reduced
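The resequencing idea described above can be illustrated with a deliberately tiny sketch: three aircraft, invented wake-separation times, and brute-force search over departure orders standing in for the paper's MILP. All numbers are made up for illustration.

```python
from itertools import permutations

# Toy departure resequencing (not the paper's MILP): aircraft classes and
# wake-separation minutes are invented example values.
aircraft = {"A": ("H", 0), "B": ("S", 0), "C": ("S", 0)}  # id -> (class, earliest time)
sep_after = {"H": 3, "S": 1}   # minutes required behind a leader of this class

def total_delay(order):
    """Schedule each aircraft at its earliest feasible time and sum the delays."""
    t_prev, lead_class, delay = None, None, 0
    for ac in order:
        cls, earliest = aircraft[ac]
        t = earliest if t_prev is None else max(earliest, t_prev + sep_after[lead_class])
        delay += t - earliest
        t_prev, lead_class = t, cls
    return delay

best = min(permutations(aircraft), key=total_delay)
print(best, total_delay(best))   # putting the heavy ("H") last minimizes total delay
```

Even this toy case shows the benefit the paper quantifies: the first-come-first-serve order ("A", "B", "C") incurs more than twice the delay of the optimal sequence.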
Mixed-Integer Conic Linear Programming: Challenges and Perspectives
2013-10-01
The novel DCCs for MISOCO may be used in branch-and-cut algorithms when solving MISOCO problems. The experimental software CICLO was developed to ... perform limited, but rigorous computational experiments. The CICLO solver utilizes continuous SOCO solvers (MOSEK, CPLEX, or SeDuMi) and builds on the open ... submitted Fall 2013. Software: 1. CICLO: Integer conic linear optimization package. Authors: J.C. Góez, T.K. Ralphs, Y. Fu, and T. Terlaky
Mixed integer linear programming for maximum-parsimony phylogeny inference.
Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell
2008-01-01
Reconstruction of phylogenetic trees is a fundamental problem in computational biology. While excellent heuristic methods are available for many variants of this problem, new advances in phylogeny inference will be required if we are to be able to continue to make effective use of the rapidly growing stores of variation data now being gathered. In this paper, we present two integer linear programming (ILP) formulations to find the most parsimonious phylogenetic tree from a set of binary variation data. One method uses a flow-based formulation that can produce exponential numbers of variables and constraints in the worst case. The method has, however, proven extremely efficient in practice on datasets that are well beyond the reach of the available provably efficient methods, solving several large mtDNA and Y-chromosome instances within a few seconds and giving provably optimal results in times competitive with fast heuristics that cannot guarantee optimality. An alternative formulation establishes that the problem can be solved with a polynomial-sized ILP. We further present a web server developed based on the exponential-sized ILP that performs fast maximum parsimony inferences and serves as a front end to a database of precomputed phylogenies spanning the human genome.
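The parsimony criterion these ILP formulations optimize can be illustrated on a fixed tree with Fitch's classic small-parsimony algorithm, a much simpler subproblem than the tree search the paper addresses. The tree and character states below are toy values.

```python
# Fitch's small-parsimony algorithm on a fixed tree: count the minimum number
# of state changes needed to explain the leaf states (toy binary data).
def fitch(node, states):
    """Return (candidate state set, mutation count) for a (left, right) tuple tree."""
    if isinstance(node, str):                     # leaf: look up its observed state
        return {states[node]}, 0
    (ls, lc), (rs, rc) = fitch(node[0], states), fitch(node[1], states)
    inter = ls & rs
    if inter:                                     # children agree: no extra mutation
        return inter, lc + rc
    return ls | rs, lc + rc + 1                   # children disagree: one mutation

tree = (("A", "B"), ("C", "D"))
site = {"A": 0, "B": 1, "C": 0, "D": 1}           # one binary variation site
print(fitch(tree, site)[1])                       # parsimony score -> 2
```

The ILP formulations in the paper search over tree topologies to minimize exactly this kind of change count, summed over all sites.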
Optimising the selection of food items for FFQs using Mixed Integer Linear Programming.
Gerdessen, Johanna C; Souverein, Olga W; van 't Veer, Pieter; de Vries, Jeanne Hm
2015-01-01
To support the selection of food items for FFQs in such a way that the amount of information on all relevant nutrients is maximised while the food list is as short as possible. Selection of the most informative food items to be included in FFQs was modelled as a Mixed Integer Linear Programming (MILP) model. The methodology was demonstrated for an FFQ with interest in energy, total protein, total fat, saturated fat, monounsaturated fat, polyunsaturated fat, total carbohydrates, mono- and disaccharides, dietary fibre and potassium. The food lists generated by the MILP model have good performance in terms of length, coverage and R² (explained variance) of all nutrients. MILP-generated food lists were 32-40 % shorter than a benchmark food list, whereas their quality in terms of R² was similar to that of the benchmark. The results suggest that the MILP model makes the selection process faster, more standardised and transparent, and is especially helpful in coping with multiple nutrients. The complexity of the method does not increase with increasing number of nutrients. The generated food lists appear either shorter or provide more information than a food list generated without the MILP model.
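The flavor of the item-selection problem can be conveyed by a toy set-cover sketch: pick the fewest foods so that every nutrient of interest is represented by at least one selected item. This is a simplification (the actual MILP maximizes explained variance per nutrient), and the foods and nutrient assignments below are invented.

```python
from itertools import combinations

# Toy "shortest food list" as set cover: each food covers some nutrients,
# and we want the fewest foods covering all of them (invented data).
foods = {
    "bread":  {"carbohydrate", "fibre"},
    "cheese": {"protein", "fat"},
    "beans":  {"protein", "carbohydrate", "fibre"},
    "butter": {"fat"},
}
nutrients = set().union(*foods.values())

def shortest_food_list():
    for k in range(1, len(foods) + 1):            # try shorter lists first
        for combo in combinations(foods, k):
            if set().union(*(foods[f] for f in combo)) == nutrients:
                return combo
    return tuple(foods)

print(shortest_food_list())                       # two foods suffice here
```

Brute force is fine at this scale; the MILP formulation is what makes the same idea tractable for real food lists with dozens of items and many nutrients.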
A Mixed Integer Linear Program for Solving a Multiple Route Taxi Scheduling Problem
NASA Technical Reports Server (NTRS)
Montoya, Justin Vincent; Wood, Zachary Paul; Rathinam, Sivakumar; Malik, Waqar Ahmad
2010-01-01
Aircraft movements on taxiways at busy airports often create bottlenecks. This paper introduces a mixed integer linear program to solve a Multiple Route Aircraft Taxi Scheduling Problem. The outputs of the model are in the form of optimal taxi schedules, which include routing decisions for taxiing aircraft. The model extends an existing single route formulation to include routing decisions. An efficient comparison framework compares the multi-route formulation and the single route formulation. The multi-route model is exercised for east side airport surface traffic at Dallas/Fort Worth International Airport to determine if any arrival taxi time savings can be achieved by allowing arrivals to have two taxi routes: a route that crosses an active departure runway and a perimeter route that avoids the crossing. Results indicate that the multi-route formulation yields reduced arrival taxi times over the single route formulation only when a perimeter taxiway is used. In conditions where the departure aircraft are given an optimal and fixed takeoff sequence, the cumulative arrival taxi time savings of the multi-route formulation can be as high as 3.6 hours over the single route formulation. If the departure sequence is not optimal, the multi-route formulation yields smaller taxi time savings over the single route formulation, but the average arrival taxi time is significantly decreased.
A mixed-integer linear programming approach to the reduction of genome-scale metabolic networks.
Röhl, Annika; Bockmayr, Alexander
2017-01-03
Constraint-based analysis has become a widely used method to study metabolic networks. While some of the associated algorithms can be applied to genome-scale network reconstructions with several thousands of reactions, others are limited to small or medium-sized models. In 2015, Erdrich et al. introduced a method called NetworkReducer, which reduces large metabolic networks to smaller subnetworks, while preserving a set of biological requirements that can be specified by the user. Already in 2001, Burgard et al. developed a mixed-integer linear programming (MILP) approach for computing minimal reaction sets under a given growth requirement. Here we present an MILP approach for computing minimum subnetworks with the given properties. The minimality (with respect to the number of active reactions) is not guaranteed by NetworkReducer, while the method by Burgard et al. does not allow specifying the different biological requirements. Our procedure is about 5-10 times faster than NetworkReducer and can enumerate all minimum subnetworks when several exist. This allows identifying common reactions that are present in all subnetworks, and reactions appearing in alternative pathways. Applying complex analysis methods to genome-scale metabolic networks is often not possible in practice. Thus it may become necessary to reduce the size of the network while keeping important functionalities. We propose an MILP solution to this problem. Compared to previous work, our approach is more efficient and allows computing not only one, but all minimum subnetworks satisfying the required properties.
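A toy stand-in for the minimum-subnetwork computation, with reachability in a five-reaction network replacing the real stoichiometric flux requirements, shows how enumerating all minimum subnetworks exposes common reactions and alternative pathways. The reaction names and network are invented.

```python
from itertools import combinations

# Tiny stand-in for the MILP: reactions as directed edges, and the
# "biological requirement" is simply that product P stays reachable from
# substrate S (the real method works on flux constraints, not reachability).
reactions = {"R1": ("S", "A"), "R2": ("A", "P"),
             "R3": ("S", "B"), "R4": ("B", "P"), "R5": ("A", "B")}

def reaches(subset, start="S", goal="P"):
    frontier, seen = {start}, {start}
    while frontier:
        frontier = {d for r in subset for (s, d) in [reactions[r]]
                    if s in frontier and d not in seen}
        seen |= frontier
    return goal in seen

def all_minimum_subnetworks():
    for k in range(1, len(reactions) + 1):        # smallest cardinality first
        hits = [set(c) for c in combinations(reactions, k) if reaches(c)]
        if hits:
            return hits                           # enumerate *all* minima
    return []

minima = all_minimum_subnetworks()
print(minima)                                     # two alternative pathways
print(set.intersection(*minima))                  # reactions common to all minima
```

Here the two minimum subnetworks share no reaction, which is exactly the kind of alternative-pathway structure the enumeration in the paper is designed to reveal.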
Poos, Alexandra M; Maicher, André; Dieckmann, Anna K; Oswald, Marcus; Eils, Roland; Kupiec, Martin; Luke, Brian; König, Rainer
2016-06-02
Understanding telomere length maintenance mechanisms is central in cancer biology, as their dysregulation is one of the hallmarks of immortalization of cancer cells. Important for this well-balanced control is the transcriptional regulation of the telomerase genes. We integrated Mixed Integer Linear Programming models into a comparative machine learning based approach to identify regulatory interactions that best explain the discrepancy of telomerase transcript levels in yeast mutants with deleted regulators showing aberrant telomere length, compared to mutants with normal telomere length. We uncover novel regulators of telomerase expression, several of which affect histone levels or modifications. In particular, our results point to the transcription factors Sum1, Hst1 and Srb2 as being important for the regulation of EST1 transcription, and we validated the effect of Sum1 experimentally. We packaged our machine learning method into a user-friendly R package that can be applied straightforwardly to similar problems integrating gene-regulator binding information and expression profiles of samples from, e.g., different phenotypes, diseases or treatments.
Modeling Road Vulnerability to Snow Using Mixed Integer Optimization
Rodriguez, Tony K; Omitaomu, Olufemi A; Ostrowski, James A; Bhaduri, Budhendra L
2017-01-01
As the number and severity of snowfall events continue to grow, the need to intelligently direct road maintenance during these snowfall events will also grow. In several locations, local governments lack the resources to completely treat all roadways during snow events. Furthermore, some governments utilize only traffic data to determine which roads should be treated. As a result, many schools, businesses, and government offices must be unnecessarily closed, which directly impacts the social, educational, and economic well-being of citizens and institutions. In this work, we propose a mixed integer programming formulation to optimally allocate resources to manage snowfall on roads using meteorological, geographical, and environmental parameters. Additionally, we evaluate the impacts of an increase in budget for winter road maintenance on snow control resources.
Li, Y P; Huang, G H
2006-11-01
In this study, an interval-parameter two-stage mixed integer linear programming (ITMILP) model is developed for supporting long-term planning of waste management activities in the City of Regina. In the ITMILP, both two-stage stochastic programming and interval linear programming are introduced into a general mixed integer linear programming framework. Uncertainties expressed as not only probability density functions but also discrete intervals can be reflected. The model can help tackle the dynamic, interactive and uncertain characteristics of the solid waste management system in the City, and can address issues concerning plans for cost-effective waste diversion and landfill prolongation. Three scenarios are considered based on different waste management policies. The results indicate that reasonable solutions have been generated. They are valuable for supporting the adjustment or justification of the existing waste flow allocation patterns, the long-term capacity planning of the City's waste management system, and the formulation of local policies and regulations regarding waste generation and management.
A mixed integer program to model spatial wildfire behavior and suppression placement decisions
Erin J. Belval; Yu Wei; Michael Bevers
2015-01-01
Wildfire suppression combines multiple objectives and dynamic fire behavior to form a complex problem for decision makers. This paper presents a mixed integer program designed to explore integrating spatial fire behavior and suppression placement decisions into a mathematical programming framework. Fire behavior and suppression placement decisions are modeled using...
On the solution of mixed-integer nonlinear programming models for computer aided molecular design.
Ostrovsky, Guennadi M; Achenie, Luke E K; Sinha, Manish
2002-11-01
This paper addresses the efficient solution of computer aided molecular design (CAMD) problems, which have been posed as mixed-integer nonlinear programming models. The models of interest are those in which the number of linear constraints far exceeds the number of nonlinear constraints, and with most variables participating in the nonconvex terms. As a result global optimization methods are needed. A branch-and-bound algorithm (BB) is proposed that is specifically tailored to solving such problems. In a conventional BB algorithm, branching is performed on all the search variables that appear in the nonlinear terms. This translates to a large number of node traversals. To overcome this problem, we have proposed a new strategy for branching on a set of linear branching functions, which depend linearly on the search variables. This leads to a significant reduction in the dimensionality of the search space. The construction of linear underestimators for a class of functions is also presented. The CAMD problem that is considered is the design of optimal solvents to be used as cleaning agents in lithographic printing.
Baran, Richard; Northen, Trent R
2013-10-15
Untargeted metabolite profiling using liquid chromatography and mass spectrometry coupled via electrospray ionization is a powerful tool for the discovery of novel natural products, metabolic capabilities, and biomarkers. However, the elucidation of the identities of uncharacterized metabolites from spectral features remains challenging. A critical step in the metabolite identification workflow is the assignment of redundant spectral features (adducts, fragments, multimers) and calculation of the underlying chemical formula. Inspection of the data by experts using computational tools solving partial problems (e.g., chemical formula calculation for individual ions) can be performed to disambiguate alternative solutions and provide reliable results. However, manual curation is tedious and not readily scalable or standardized. Here we describe a procedure for robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming optimization (RAMSI). Chemical rules among related ions are expressed as linear constraints and both the spectra interpretation and chemical formula calculation are performed in a single optimization step. This approach is unbiased in that it does not require predefined sets of neutral losses, and positive and negative polarity spectra can be combined in a single optimization. The procedure was evaluated with 30 experimental mass spectra and was found to effectively identify the protonated or deprotonated molecule ([M+H]+ or [M-H]-) while being robust to the presence of background ions. RAMSI provides a much-needed standardized tool for interpreting ions for subsequent identification in untargeted metabolomics workflows.
A mixed integer bi-level DEA model for bank branch performance evaluation by Stackelberg approach
NASA Astrophysics Data System (ADS)
Shafiee, Morteza; Lotfi, Farhad Hosseinzadeh; Saleh, Hilda; Ghaderi, Mehdi
2016-11-01
One of the most complicated decision-making problems for managers is the evaluation of bank performance, which involves various criteria. There are many studies on bank efficiency evaluation by network DEA in the literature, but these studies do not focus on multi-level networks. Wu (Eur J Oper Res 207:856-864, 2010) first proposed a bi-level structure for cost efficiency, using multi-level programming and cost efficiency, and solved the model with nonlinear programming. In this paper, we focus on the multi-level structure and propose a bi-level DEA model, which we solve with linear programming. Moreover, we significantly improve the way the optimum solution is obtained compared with Wu (2010) by converting the NP-hard nonlinear program into a mixed integer linear program. This study uses a bi-level programming data envelopment analysis model that embodies internal structure with Stackelberg-game relationships to evaluate the performance of a banking chain. The perspective of decentralized decisions is taken to cope with complex interactions in the banking chain. The results derived from bi-level programming DEA can provide valuable insights and detailed information for managers to help them evaluate the performance of the banking chain as a whole using Stackelberg-game relationships. Finally, the model was applied to an Iranian bank to evaluate cost efficiency.
Automatic Design of Synthetic Gene Circuits through Mixed Integer Non-linear Programming
Huynh, Linh; Kececioglu, John; Köppe, Matthias; Tagkopoulos, Ilias
2012-01-01
Automatic design of synthetic gene circuits poses a significant challenge to synthetic biology, primarily due to the complexity of biological systems, and the lack of rigorous optimization methods that can cope with the combinatorial explosion as the number of biological parts increases. Current optimization methods for synthetic gene design rely on heuristic algorithms that are usually not deterministic, deliver sub-optimal solutions, and provide no guarantees on convergence or error bounds. Here, we introduce an optimization framework for the problem of part selection in synthetic gene circuits that is based on mixed integer non-linear programming (MINLP), which is a deterministic method that finds the globally optimal solution and guarantees convergence in finite time. Given a synthetic gene circuit, a library of characterized parts, and user-defined constraints, our method can find the optimal selection of parts that satisfy the constraints and best approximates the objective function given by the user. We evaluated the proposed method in the design of three synthetic circuits (a toggle switch, a transcriptional cascade, and a band detector), with both experimentally constructed and synthetic promoter libraries. Scalability and robustness analysis shows that the proposed framework scales well with the library size and the solution space. The work described here is a step towards a unifying, realistic framework for the automated design of biological circuits. PMID:22536398
Enhanced index tracking modeling in portfolio optimization with a mixed-integer programming approach
NASA Astrophysics Data System (ADS)
Siew, Lam Weng; Jaaman, Saiful Hafizah Hj.; Ismail, Hamizun bin
2014-09-01
Enhanced index tracking is a popular form of portfolio management in stock market investment. Enhanced index tracking aims to construct an optimal portfolio to generate excess return over the return achieved by the stock market index without purchasing all of the stocks that make up the index. The objective of this paper is to construct an optimal portfolio using a mixed-integer programming model which adopts a regression approach in order to generate higher portfolio mean return than the stock market index return. In this study, the data consist of 24 component stocks in the Malaysia market index, the FTSE Bursa Malaysia Kuala Lumpur Composite Index, from January 2010 until December 2012. The results of this study show that the optimal portfolio of the mixed-integer programming model is able to generate higher mean return than the FTSE Bursa Malaysia Kuala Lumpur Composite Index return while selecting only 30% of the stock market index components.
PySP : modeling and solving stochastic mixed-integer programs in Python.
Woodruff, David L.; Watson, Jean-Paul
2010-08-01
Although stochastic programming is a powerful tool for modeling decision-making under uncertainty, various impediments have historically prevented its widespread use. One key factor involves the ability of non-specialists to easily express stochastic programming problems as extensions of deterministic models, which are often formulated first. A second key factor relates to the difficulty of solving stochastic programming models, particularly the general mixed-integer, multi-stage case. Intricate, configurable, and parallel decomposition strategies are frequently required to achieve tractable run-times. We simultaneously address both of these factors in our PySP software package, which is part of the COIN-OR Coopr open-source Python project for optimization. To formulate a stochastic program in PySP, the user specifies both the deterministic base model and the scenario tree with associated uncertain parameters in the Pyomo open-source algebraic modeling language. Given these two models, PySP provides two paths for solution of the corresponding stochastic program. The first alternative involves writing the extensive form and invoking a standard deterministic (mixed-integer) solver. For more complex stochastic programs, we provide an implementation of Rockafellar and Wets' Progressive Hedging algorithm. Our particular focus is on the use of Progressive Hedging as an effective heuristic for approximating general multi-stage, mixed-integer stochastic programs. By leveraging the combination of a high-level programming language (Python) and the embedding of the base deterministic model in that language (Pyomo), we are able to provide completely generic and highly configurable solver implementations. PySP has been used by a number of research groups, including our own, to rapidly prototype and solve difficult stochastic programming problems.
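The "extensive form" route that PySP offers can be mimicked in plain Python for a toy two-stage problem: one integer first-stage decision, two demand scenarios, and an expected-value objective. The newsvendor numbers are invented, and a real PySP model would be written in Pyomo and handed to a MIP solver rather than solved by enumeration.

```python
# Minimal extensive form of a two-stage stochastic program: choose an integer
# order quantity before demand is known, then realize scenario-dependent
# revenue.  Costs, prices, and scenarios are invented toy values.
scenarios = {"low": (3, 0.5), "high": (7, 0.5)}   # demand, probability
COST, PRICE = 1.0, 3.0

def expected_profit(order):
    """Probability-weighted profit across all scenarios for one first-stage choice."""
    return sum(p * (PRICE * min(order, demand) - COST * order)
               for demand, p in scenarios.values())

best = max(range(11), key=expected_profit)
print(best, expected_profit(best))   # ordering up to the high-demand scenario wins
```

The point of PySP is that this enumerate-the-extensive-form pattern, which explodes for multi-stage problems with many scenarios, can instead be generated automatically from a Pyomo base model and either solved directly or decomposed with Progressive Hedging.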
NASA Astrophysics Data System (ADS)
Sakakibara, Kazutoshi; Tian, Yajie; Nishikawa, Ikuko
We discuss the planning of transportation by trucks over a multi-day period. Each truck collects loads from suppliers and delivers them to assembly plants or a truck terminal. By exploiting the truck terminal as temporary storage, we aim to increase the load ratio of each truck and to minimize the lead time for transportation. In this paper, we present a mixed integer programming model that represents each product explicitly, and discuss the decomposition of the problem into a problem of delivery and storage, and a problem of vehicle routing. Based on this model, we propose a relax-and-fix type heuristic in which decision variables are fixed one by one using mathematical programming techniques such as branch-and-bound.
Zhang, Huiling; Huang, Qingsheng; Bei, Zhendong; Wei, Yanjie; Floudas, Christodoulos A
2016-03-01
In this article, we present COMSAT, a hybrid framework for residue contact prediction of transmembrane (TM) proteins, integrating a support vector machine (SVM) method and a mixed integer linear programming (MILP) method. COMSAT consists of two modules: COMSAT_SVM, which is trained mainly on position-specific scoring matrix features, and COMSAT_MILP, which is an ab initio method based on optimization models. Contacts predicted by the SVM model are ranked by SVM confidence scores, and a threshold is trained to improve the reliability of the predicted contacts. For TM proteins with no contacts above the threshold, COMSAT_MILP is used. The proposed hybrid contact prediction scheme was tested on two independent TM protein sets based on the contact definition of 14 Å between Cα-Cα atoms. First, using a rigorous leave-one-protein-out cross validation on the training set of 90 TM proteins, an accuracy of 66.8%, a coverage of 12.3%, a specificity of 99.3% and a Matthews' correlation coefficient (MCC) of 0.184 were obtained for residue pairs that are at least six amino acids apart. Second, when tested on a test set of 87 TM proteins, the proposed method showed a prediction accuracy of 64.5%, a coverage of 5.3%, a specificity of 99.4% and an MCC of 0.106. COMSAT shows satisfactory results when compared with 12 other state-of-the-art predictors, and is more robust in terms of prediction accuracy as the length and complexity of TM proteins increase. COMSAT is freely accessible at http://hpcc.siat.ac.cn/COMSAT/.
DRIESSEN,BRIAN; SADEGH,NADER
2000-04-25
This work presents a method for finding near-global optima to the minimum-time trajectory generation problem for systems that would be linear if it were not for the presence of Coulomb friction. The required final state of the system is assumed to be maintainable by the system, and the input bounds are assumed to be large enough that they can overcome the maximum static Coulomb friction force. Unlike previous work on generating minimum-time trajectories for non-redundant robotic manipulators for which the path in joint space is already specified, this work represents, to the best of the authors' knowledge, the first approach for generating near-global optima for minimum-time problems involving a nonlinear class of dynamic systems. The optima generated are near-global rather than exactly global because of a discrete-time approximation of the system (which is usually used anyway to simulate such a system numerically). The method closely resembles previous methods for generating minimum-time trajectories for linear systems, where the core operation is the solution of a Phase I linear programming problem. For the nonlinear systems considered herein, the core operation is instead the solution of a mixed integer linear programming problem.
Mixed integer programming model for optimizing the layout of an ICU vehicle.
Alejo, Javier Sánchez; Martín, Modoaldo Garrido; Ortega-Mier, Miguel; García-Sánchez, Alvaro
2009-12-08
This paper presents a Mixed Integer Programming (MIP) model for designing the layout of the Intensive Care Units' (ICUs) patient care space. In particular, this MIP model was developed for optimizing the layout for materials to be used in interventions. This work was developed within the framework of a joint project between the Madrid Technical University and the Medical Emergency Services of the Madrid Regional Government (SUMMA 112). The first task was to identify the relevant information to define the characteristics of the new vehicles and, in particular, to obtain a satisfactory interior layout to locate all the necessary materials. This information was gathered from health workers related to ICUs. With that information an optimization model was developed in order to obtain a solution. From the MIP model, a first solution was obtained, consisting of a grid to locate the different materials needed for the ICUs. The outcome from the MIP model was discussed with health workers to tune the solution, and after slightly altering that solution to meet some requirements that had not been included in the mathematical model, the eventual solution was approved by the persons responsible for specifying the characteristics of the new vehicles. According to the opinion stated by the SUMMA 112's medical group responsible for improving the ambulances (the so-called "coaching group"), the outcome was highly satisfactory. Indeed, the final design served as a basis to draw up the requirements of a public
Models and Algorithms Involving Very Large Scale Stochastic Mixed-Integer Programs
2011-02-28
give rise to a non-convex and discontinuous recourse function that may be difficult to optimize. As a result of this project, there have been... convex, the master problem in (3.1.6)-(3.1.9) is a non-convex mixed-integer program, and as indicated in [C.1], this approach is not scalable without...the first stage would result in a Benders’ master program which is non-convex, leading to a problem that is not any easier than (3.1.5). Nevertheless
NASA Astrophysics Data System (ADS)
Onoyama, Takashi; Kubota, Sen; Maekawa, Takuya; Komoda, Norihisa
Adequate response performance is required for the planning of a cooperative logistic network covering multiple enterprises, because this process needs a human expert's evaluation from many aspects. To satisfy this requirement, we propose an accurate model based on mixed integer programming for optimizing cooperative logistics networks where “round transportation” exists together with “depot transportation”, including lower-limit constraints on the loading ratio for round transportation vehicles. Furthermore, to achieve interactive response performance, a dummy load is introduced into the model instead of integer variables. Experimental results show that the proposed method obtains an accurate solution within interactive response time.
MISO - Mixed Integer Surrogate Optimization
Mueller, Juliane
2016-01-20
MISO is an optimization framework for solving computationally expensive mixed-integer, black-box, global optimization problems. MISO uses surrogate models to approximate the computationally expensive objective function. Hence, derivative information, which is generally unavailable for black-box simulation objective functions, is not needed. MISO allows the user to choose the initial experimental design strategy, the type of surrogate model, and the sampling strategy.
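The surrogate loop MISO describes can be sketched in a few lines. In the toy below, the objective, the inverse-distance-weighted surrogate, and the candidate-sampling rule are all invented for illustration (they are not MISO's actual components); the point is only how cheap surrogate predictions ration the expensive evaluations:

```python
import random

def expensive_f(i, y):
    """Stand-in for a costly mixed-integer black-box objective (hypothetical)."""
    return (i - 7) ** 2 + (y - 0.3) ** 2

def surrogate(pts, vals, i, y):
    """Inverse-distance-weighted prediction from points evaluated so far."""
    num = den = 0.0
    for (pi, py), v in zip(pts, vals):
        d2 = (pi - i) ** 2 + (py - y) ** 2
        if d2 < 1e-12:
            return v  # candidate coincides with an evaluated point
        w = 1.0 / d2
        num += w * v
        den += w
    return num / den

def surrogate_optimize(n_init=5, n_iter=30, seed=0):
    rng = random.Random(seed)
    # initial experimental design: random mixed-integer samples
    pts = [(rng.randint(0, 10), rng.random()) for _ in range(n_init)]
    vals = [expensive_f(i, y) for i, y in pts]
    for _ in range(n_iter):
        # score many cheap candidates on the surrogate, spend the one
        # expensive evaluation per iteration on the most promising
        cands = [(rng.randint(0, 10), rng.random()) for _ in range(50)]
        best = min(cands, key=lambda c: surrogate(pts, vals, *c))
        pts.append(best)
        vals.append(expensive_f(*best))
    k = min(range(len(vals)), key=vals.__getitem__)
    return pts[k], vals[k], vals
```

Because derivative information is never used, any black-box simulation could replace `expensive_f`.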
NASA Astrophysics Data System (ADS)
Irmeilyana, Puspita, Fitri Maya; Indrawati
2016-02-01
The pricing for wireless networks is developed by considering linearity factors, price elasticity and price factors. A mixed integer nonlinear programming model for wireless pricing is proposed and solved optimally using LINGO 13.0. The solutions are expected to give some information about the connections between the acceptance factor and the price. Previous work focused on a model that uses bandwidth as the QoS attribute. The models attempt to maximize the total price for a connection based on QoS parameters; here the QoS attributes are the bandwidth and the end-to-end delay that affect the traffic. The maximum price is achieved when the provider determines the required increment or decrement of price due to QoS changes and the amount of QoS value.
Winebrake, James J; Corbett, James J; Wang, Chengfeng; Farrell, Alexander E; Woods, Pippa
2005-04-01
Emissions from passenger ferries operating in urban harbors may contribute significantly to emissions inventories and commuter exposure to air pollution. In particular, ferries are problematic because of high emissions of oxides of nitrogen (NOx) and particulate matter (PM) from primarily unregulated diesel engines. This paper explores technical solutions to reduce pollution from passenger ferries operating in the New York-New Jersey Harbor. The paper discusses and demonstrates a mixed-integer, non-linear programming model used to identify optimal control strategies for meeting NOx and PM reduction targets for 45 privately owned commuter ferries in the harbor. Results from the model can be used by policy-makers to craft programs aimed at achieving least-cost reduction targets.
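The least-cost control-strategy search can be illustrated with a toy integer program: assign one control technology per ferry so that fleet-wide NOx and PM reduction targets are met at minimum cost. The technology options and all numbers below are invented, not the paper's data:

```python
from itertools import product

# hypothetical per-ferry options: (name, cost $k, NOx removed t/yr, PM removed t/yr)
OPTIONS = [
    ("none",        0,  0.0, 0.00),
    ("engine-tune", 20, 1.5, 0.05),
    ("SCR",         90, 6.0, 0.10),
    ("DPF",         60, 0.5, 0.40),
]

def least_cost_strategy(n_ferries, nox_target, pm_target):
    """Enumerate technology assignments (the integer decisions) and keep
    the cheapest one meeting both fleet-wide reduction targets."""
    best = None
    for choice in product(range(len(OPTIONS)), repeat=n_ferries):
        cost = sum(OPTIONS[c][1] for c in choice)
        nox = sum(OPTIONS[c][2] for c in choice)
        pm = sum(OPTIONS[c][3] for c in choice)
        if nox >= nox_target and pm >= pm_target:
            if best is None or cost < best[0]:
                best = (cost, [OPTIONS[c][0] for c in choice])
    return best
```

Brute force only works for a handful of ferries; for the 45 in the study, a MINLP solver of the kind the paper uses is needed.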
Henriques, David; Rocha, Miguel; Saez-Rodriguez, Julio; Banga, Julio R.
2015-01-01
Motivation: Systems biology models can be used to test new hypotheses formulated on the basis of previous knowledge or new experimental data, contradictory with a previously existing model. New hypotheses often come in the shape of a set of possible regulatory mechanisms. This search is usually not limited to finding a single regulation link, but rather a combination of links subject to great uncertainty or no information about the kinetic parameters. Results: In this work, we combine a logic-based formalism, to describe all the possible regulatory structures for a given dynamic model of a pathway, with mixed-integer dynamic optimization (MIDO). This framework aims to simultaneously identify the regulatory structure (represented by binary parameters) and the real-valued parameters that are consistent with the available experimental data, resulting in a logic-based differential equation model. The alternative to this would be to perform real-valued parameter estimation for each possible model structure, which is not tractable for models of the size presented in this work. The performance of the method presented here is illustrated with several case studies: a synthetic pathway problem of signaling regulation, a two-component signal transduction pathway in bacterial homeostasis, and a signaling network in liver cancer cells. Supplementary information: Supplementary data are available at Bioinformatics online. Contact: julio@iim.csic.es or saezrodriguez@ebi.ac.uk PMID:26002881
A Mixed Integer Programming Model for Improving Theater Distribution Force Flow Analysis
2013-03-01
…the introduction to LINGO in OPER 510. Next, I wish to thank LINDO Systems, particularly Kevin Cunningham, for software assistance with LINGO. I… Appendix A. LINGO 13 Settings File Contents; Appendix B. Additional Model… optimization software LINGO 13 (Lindo Systems Inc, 2012). A Decision Support System was built in the Excel environment where the user uploads a
How to Beat Flappy Bird: A Mixed-Integer Model Predictive Control Approach
NASA Astrophysics Data System (ADS)
Piper, Matthew
Flappy Bird is a mobile game that involves tapping the screen to navigate a bird through a gap between pairs of vertical pipes. When the bird passes through the gap, the score increments by one and the game ends when the bird hits the floor or a pipe. Surprisingly, Flappy Bird is a very difficult game and scores in single digits are not uncommon even after extensive practice. In this paper, we create three controllers to play the game autonomously. The controllers are: (1) a manually tuned controller that flaps the bird based on a vertical set point condition; (2) an optimization-based controller that plans and executes an optimal path between consecutive pipes; (3) a model-based predictive controller (MPC). Our results showed that on average, the optimization-based controller scored highest, followed closely by the MPC, while the manually tuned controller scored the least. A key insight was that choosing a planning horizon slightly beyond consecutive pipes was critical for achieving high scores. The average computation time per iteration for the MPC was half that of the optimization-based controller, but the worst-case (maximum) time per iteration for the MPC was three times that of the optimization-based controller. The success of the optimization-based controller was due to the intuitive tuning of the terminal position and velocity constraints, while for the MPC the important parameters were the prediction and control horizon. The MPC was straightforward to tune compared to the other two controllers. Our conclusion is that MPC provides the best compromise between performance and computation speed without requiring elaborate tuning.
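A minimal sketch of the mixed-integer MPC idea, with toy point-mass dynamics and parameters assumed here (not the paper's model): enumerate all binary flap sequences over a short horizon, keep the first action of the cheapest one, and repeat in receding-horizon fashion:

```python
from itertools import product

G, FLAP_V, DT = -9.8, 3.0, 0.1  # toy dynamics parameters (assumed)

def step(h, v, flap):
    """Point-mass bird: a flap resets vertical velocity, else gravity acts."""
    v = FLAP_V if flap else v + G * DT
    return h + v * DT, v

def mpc_action(h, v, target_h, horizon=8):
    """Enumerate all 2^horizon binary flap sequences (the mixed-integer
    part) and return the first action of the lowest-cost sequence."""
    best_cost, best_first = float("inf"), 0
    for seq in product((0, 1), repeat=horizon):
        hh, vv, cost = h, v, 0.0
        for a in seq:
            hh, vv = step(hh, vv, a)
            cost += (hh - target_h) ** 2  # track the gap center
        if cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first

def simulate(h0, v0, target_h, steps=40):
    h, v = h0, v0
    for _ in range(steps):
        h, v = step(h, v, mpc_action(h, v, target_h))
    return h
```

Real MPC formulations replace the exponential enumeration with a mixed-integer solver, which is what makes longer horizons tractable.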
NASA Astrophysics Data System (ADS)
Guo, P.; Huang, G. H.; Li, Y. P.
2010-01-01
In this study, an inexact fuzzy-chance-constrained two-stage mixed-integer linear programming (IFCTIP) approach is developed for flood diversion planning under multiple uncertainties. A concept of the distribution with fuzzy boundary interval probability is defined to address multiple uncertainties expressed as integration of intervals, fuzzy sets and probability distributions. IFCTIP integrates the inexact programming, two-stage stochastic programming, integer programming and fuzzy-stochastic programming within a general optimization framework. IFCTIP incorporates the pre-regulated water-diversion policies directly into its optimization process to analyze various policy scenarios, with each scenario having a different economic penalty when the promised targets are violated. More importantly, it can facilitate dynamic programming for decisions of capacity-expansion planning under fuzzy-stochastic conditions. IFCTIP is applied to a flood management system. Solutions from IFCTIP provide desired flood diversion plans with a minimized system cost and a maximized safety level. The results indicate that reasonable solutions are generated for objective function values and decision variables, so a number of decision alternatives can be generated under different levels of flood flows.
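The two-stage structure (a first-stage decision, then a scenario-dependent recourse penalty when the promised target is violated) can be illustrated with a toy version that omits the interval and fuzzy components of IFCTIP; all numbers are invented:

```python
def optimal_capacity(scenarios, build_cost, penalty, max_cap=20):
    """First stage: choose an integer diversion capacity x.
    Second stage (recourse): pay a penalty per unit of flood flow
    exceeding x, averaged over probabilistic scenarios."""
    def expected_cost(x):
        recourse = sum(p * penalty * max(0, q - x) for q, p in scenarios)
        return build_cost * x + recourse
    return min(range(max_cap + 1), key=expected_cost)

# scenarios: (flood volume, probability) -- illustrative numbers only
scenarios = [(4, 0.5), (8, 0.3), (15, 0.2)]
```

With a unit build cost of 3 and a unit violation penalty of 10, capacity is expanded while the expected marginal penalty saving exceeds the build cost, which is the classic two-stage trade-off.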
Mixed integer evolution strategies for parameter optimization.
Li, Rui; Emmerich, Michael T M; Eggermont, Jeroen; Bäck, Thomas; Schütz, M; Dijkstra, J; Reiber, J H C
2013-01-01
Evolution strategies (ESs) are powerful probabilistic search and optimization algorithms gleaned from biological evolution theory. They have been successfully applied to a wide range of real world applications. The modern ESs are mainly designed for solving continuous parameter optimization problems. Their ability to adapt the parameters of the multivariate normal distribution used for mutation during the optimization run makes them well suited for this domain. In this article we describe and study mixed integer evolution strategies (MIES), which are natural extensions of ES for mixed integer optimization problems. MIES can deal with parameter vectors consisting not only of continuous variables but also with nominal discrete and integer variables. Following the design principles of the canonical evolution strategies, they use specialized mutation operators tailored for the aforementioned mixed parameter classes. For each type of variable, the choice of mutation operators is governed by a natural metric for this variable type, maximal entropy, and symmetry considerations. All distributions used for mutation can be controlled in their shape by means of scaling parameters, allowing self-adaptation to be implemented. After introducing and motivating the conceptual design of the MIES, we study the optimality of the self-adaptation of step sizes and mutation rates on a generalized (weighted) sphere model. Moreover, we prove global convergence of the MIES on a very general class of problems. The remainder of the article is devoted to performance studies on artificial landscapes (barrier functions and mixed integer NK landscapes), and a case study in the optimization of medical image analysis systems. In addition, we show that with proper constraint handling techniques, MIES can also be applied to classical mixed integer nonlinear programming problems.
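A rough sketch of a MIES-style mutation follows. The operator shapes are simplified (the integer step here is a random-sign geometrically decaying magnitude approximating the difference-of-geometric-variables operator, and the step-size self-adaptation of the real algorithm is omitted); all parameter values are illustrative:

```python
import random

def mutate(cont, ints, noms, sigma=0.5, p_int=0.3, p_nom=0.2,
           int_lo=0, int_hi=10, nom_domain=("A", "B", "C"), rng=random):
    """One MIES-style mutation over the three variable classes:
    Gaussian perturbation for continuous genes, a symmetric integer
    step for integer genes, and a uniform resample for nominal genes."""
    new_cont = [x + rng.gauss(0.0, sigma) for x in cont]
    new_ints = []
    for x in ints:
        if rng.random() < p_int:
            # symmetric step with geometrically decaying magnitude
            step = (1 if rng.random() < 0.5 else -1) * (1 + int(rng.expovariate(1.0)))
            x = min(int_hi, max(int_lo, x + step))
        new_ints.append(x)
    new_noms = [rng.choice(nom_domain) if rng.random() < p_nom else x
                for x in noms]
    return new_cont, new_ints, new_noms
```

Each variable class gets the operator matching its natural metric: Euclidean for reals, integer lattice distance for integers, and the discrete metric for nominals.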
NASA Astrophysics Data System (ADS)
Uilhoorn, F. E.
2016-10-01
In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
NASA Astrophysics Data System (ADS)
Masrour, R.; Jabar, A.; Bahmad, L.; Hamedoun, M.; Benyoussef, A.
2017-01-01
In this paper, we study the magnetic properties of ferrimagnetic mixed spins with integer σ = 2 and half-integer S = 7/2 in a Blume-Capel model, using Monte Carlo simulations. The considered Hamiltonian includes the first nearest-neighbors and the exchange coupling interactions on the two sub-lattices. The effect of these exchange coupling interactions, in the presence of both the external magnetic field and the crystal field, is studied. The magnetizations and the corresponding susceptibilities are presented and discussed. Finally, we have interpreted the behaviors of the magnetic hysteresis of this model.
Ko, Andi Setiady; Chang, Ni-Bin
2008-07-01
Energy supply and use is of fundamental importance to society. Although the interactions between energy and environment were originally local in character, they have now widened to cover regional and global issues, such as acid rain and the greenhouse effect. It is for this reason that there is a need for covering the direct and indirect economic and environmental impacts of energy acquisition, transport, production and use. In this paper, particular attention is directed to ways of resolving conflict between economic and environmental goals by encouraging a power plant to consider co-firing biomass and refuse-derived fuel (RDF) with coal simultaneously. It aims at reducing the emission level of sulfur dioxide (SO(2)) in an uncertain environment, using the power plant in Michigan City, Indiana as an example. To assess the uncertainty in a comparative way, both deterministic and grey nonlinear mixed integer programming (MIP) models were developed to minimize the net operating cost with respect to possible fuel combinations. It aims at generating the optimal portfolio of alternative fuels while maintaining the same electricity generation simultaneously. To ease the solution procedure, a stepwise relaxation algorithm was developed for solving the grey nonlinear MIP model. The breakeven alternative fuel value can be identified in the post-optimization stage for decision-making. Research findings show that the inclusion of RDF does not exhibit comparative advantage in terms of the net cost, albeit with a relatively lower air pollution impact. Yet it can be sustained by a charge system, subsidy program, or emission credit as the price of coal increases over time.
1989-12-01
/* pay[]: hourly wage rate for operators on machine m */ FILE *infp, *outfp1, *outfp2, *outfp3, *outfp4, *outfp5, *outfp6, *fopen(); main() { /* open I/O files and assign pointers */ infp = fopen("GMFA.DAT", "r"); ... outfp5 = fopen("XL5.DAT", "w"); /* read in input stream data from "GMFA.DAT" */ rddstrm... /* ...and closing the files */ fprintf(outfp4, "$ MAXIMIZE IITWSTS\
ERIC Educational Resources Information Center
Han, Kyung T.; Rudner, Lawrence M.
2014-01-01
This study uses mixed integer quadratic programming (MIQP) to construct multiple highly equivalent item pools simultaneously, and compares the results with those from mixed integer programming (MIP). Three different MIP/MIQP models were implemented and evaluated using real CAT item pool data with 23 different content areas and a goal of equal information…
Optimal Allocation of Static Var Compensator via Mixed Integer Conic Programming
Zhang, Xiaohu; Shi, Di; Wang, Zhiwei; Huang, Junhui; Wang, Xu; Liu, Guodong; Tomsovic, Kevin
2017-01-01
Shunt FACTS devices, such as the Static Var Compensator (SVC), are capable of providing local reactive power compensation. They are widely used in the network to reduce the real power loss and improve the voltage profile. This paper proposes a planning model based on mixed integer conic programming (MICP) to optimally allocate SVCs in the transmission network considering load uncertainty. The load uncertainties are represented by a number of scenarios. Reformulation and linearization techniques are utilized to transform the original non-convex model into a convex second order cone programming (SOCP) model. Numerical case studies based on the IEEE 30-bus system demonstrate the effectiveness of the proposed planning model.
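One standard reformulation-linearization device of the kind the abstract alludes to (not necessarily the exact one used in the paper) is the big-M linearization of a product z = x·y between a binary variable x and a bounded continuous variable y, which removes the bilinear non-convexity:

```python
def product_bounds(x, y, U):
    """McCormick-style linearization of z = x*y with x in {0,1},
    0 <= y <= U.  The four linear constraints
        z <= U*x,   z <= y,   z >= y - U*(1 - x),   z >= 0
    pin z to exactly x*y whenever x is binary."""
    lo = max(0.0, y - U * (1 - x))
    hi = min(U * x, y)
    return lo, hi

# at x = 0 the constraints force z = 0; at x = 1 they force z = y
for y in [0.0, 1.7, 5.0]:
    for x in (0, 1):
        lo, hi = product_bounds(x, y, U=5.0)
        assert abs(lo - x * y) < 1e-12 and abs(hi - x * y) < 1e-12
```

The same pattern generalizes: once every bilinear term is replaced by linear constraints, the remaining quadratic structure can often be cast as a second-order cone.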
Mixed-Integer Formulations for Constellation Scheduling
NASA Astrophysics Data System (ADS)
Valicka, C.; Hart, W.; Rintoul, M.
Remote sensing systems have expanded the set of capabilities available for and critical to national security. Cooperating, high-fidelity sensing systems and growing mission applications have exponentially increased the set of potential schedules. A definitive lack of advanced tools places an increased burden on operators, as planning and scheduling remain largely manual tasks. This is particularly true in time-critical planning activities where operators aim to accomplish a large number of missions through optimal utilization of single or multiple sensor systems. Automated scheduling through identification and comparison of alternative schedules remains a challenging problem applicable across all remote sensing systems. Previous approaches focused on a subset of sensor missions and do not consider ad-hoc tasking. We have begun development of a robust framework that leverages the Pyomo optimization modeling language for the design of a tool to assist sensor operators planning under the constraints of multiple concurrent missions and uncertainty. Our scheduling models have been formulated to address the stochastic nature of ad-hoc tasks inserted under a variety of scenarios. Operator experience is being leveraged to select appropriate model objectives. Successful development of the framework will include iterative development of high-fidelity mission models that consider and expose various schedule performance metrics. Creating this tool will aid time-critical scheduling by increasing planning efficiency, clarifying the value of alternative modalities uniquely provided by multi-sensor systems, and by presenting both sets of organized information to operators. Such a tool will help operators more quickly and fully utilize sensing systems, a high interest objective within the current remote sensing operations community. Preliminary results for mixed-integer programming formulations of a sensor scheduling problem will be presented. Assumptions regarding sensor geometry
Solution of Mixed-Integer Programming Problems on the XT5
Hartman-Baker, Rebecca J; Busch, Ingrid Karin; Hilliard, Michael R; Middleton, Richard S; Schultze, Michael
2009-01-01
In this paper, we describe our experience with solving difficult mixed-integer linear programming problems (MILPs) on the petaflop Cray XT5 system at the National Center for Computational Sciences at Oak Ridge National Laboratory. We describe the algorithmic, software, and hardware needs for solving MILPs and present the results of using PICO, an open-source, parallel, mixed-integer linear programming solver developed at Sandia National Laboratories, to solve canonical MILPs as well as problems of interest arising from the logistics and supply chain management field.
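The core algorithmic mechanism inside such MILP solvers, LP-relaxation branch and bound, can be sketched on a 0/1 knapsack. This is an illustration of the technique, not PICO's implementation:

```python
def knapsack_bb(values, weights, cap):
    """Minimal LP-relaxation branch and bound for a 0/1 knapsack."""
    # sort items by value density so the LP bound is easy to compute
    items = sorted(zip(values, weights), key=lambda vw: vw[0] / vw[1], reverse=True)
    best = 0

    def bound(k, rem, val):
        # LP relaxation: fill remaining capacity greedily,
        # allowing the last item to be taken fractionally
        for v, w in items[k:]:
            if w <= rem:
                rem -= w
                val += v
            else:
                return val + v * rem / w
        return val

    def branch(k, rem, val):
        nonlocal best
        if val > best:
            best = val  # new incumbent (feasible integer solution)
        if k == len(items) or bound(k, rem, val) <= best:
            return      # prune: the LP bound cannot beat the incumbent
        v, w = items[k]
        if w <= rem:
            branch(k + 1, rem - w, val + v)  # branch x_k = 1
        branch(k + 1, rem, val)              # branch x_k = 0

    branch(0, cap, 0)
    return best
```

Parallel solvers like PICO distribute the branch-and-bound tree across processors, sharing the incumbent so every worker can prune aggressively.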
Finding community structures in complex networks using mixed integer optimisation
NASA Astrophysics Data System (ADS)
Xu, G.; Tsoka, S.; Papageorgiou, L. G.
2007-11-01
The detection of community structure has been used to reveal the relationships between individual objects and their groupings in networks. This paper presents a mathematical programming approach to identify the optimal community structures in complex networks based on the maximisation of a network modularity metric for partitioning a network into modules. The overall problem is formulated as a mixed integer quadratic programming (MIQP) model, which can then be solved to global optimality using standard optimisation software. The solution procedure is further enhanced by developing special symmetry-breaking constraints to eliminate equivalent solutions. It is shown that additional features such as minimum/maximum module size and balancing among modules can easily be incorporated in the model. The applicability of the proposed optimisation-based approach is demonstrated by four examples. Comparative results with other approaches from the literature show that the proposed methodology has superior performance while global optimum is guaranteed.
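On a tiny graph, the modularity-maximization problem that the MIQP solves can be checked by brute force. The sketch below uses pure enumeration as a stand-in for the optimization software, with the standard Newman modularity metric:

```python
from itertools import product

def modularity(edges, comm):
    """Newman modularity Q for an undirected graph and a community
    assignment (dict node -> community label)."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    inside = sum(1 for u, v in edges if comm[u] == comm[v])
    dsum = {}  # total degree per community
    for n, d in deg.items():
        dsum[comm[n]] = dsum.get(comm[n], 0) + d
    return inside / m - sum((d / (2 * m)) ** 2 for d in dsum.values())

def best_bipartition(edges, nodes):
    """Brute-force stand-in for the MIQP: try every 2-community
    assignment and return the one maximizing Q (tiny graphs only)."""
    best = max((dict(zip(nodes, a)) for a in product((0, 1), repeat=len(nodes))),
               key=lambda c: modularity(edges, c))
    return best, modularity(edges, best)

# two triangles joined by a bridge: the optimum separates the triangles
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
```

The MIQP formulation expresses exactly this objective with binary assignment variables, which is what lets standard solvers certify global optimality.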
Smalley, Hannah K; Keskinocak, Pinar; Swann, Julie; Hinman, Alan
2015-11-17
In addition to improved sanitation, hygiene, and better access to safe water, oral cholera vaccines can help to control the spread of cholera in the short term. However, there is currently no systematic method for determining the best allocation of oral cholera vaccines to minimize disease incidence in a population where the disease is endemic and resources are limited. We present a mathematical model for optimally allocating vaccines in a region under varying levels of demographic and incidence data availability. The model addresses the questions of where, when, and how many doses of vaccines to send. Considering vaccine efficacies (which may vary based on age and the number of years since vaccination), we analyze distribution strategies which allocate vaccines over multiple years. Results indicate that, given appropriate surveillance data, targeting age groups and regions with the highest disease incidence should be the first priority, followed by other groups primarily in order of disease incidence, as this approach is the most life-saving and cost-effective. A lack of detailed incidence data results in distribution strategies which are not cost-effective and can lead to thousands more deaths from the disease. The mathematical model allows for what-if analysis for various vaccine distribution strategies by providing the ability to easily vary parameters such as numbers and sizes of regions and age groups, risk levels, vaccine price, vaccine efficacy, production capacity and budget.
Solving mixed integer nonlinear programming problems using spiral dynamics optimization algorithm
NASA Astrophysics Data System (ADS)
Kania, Adhe; Sidarto, Kuntjoro Adji
2016-02-01
Many engineering and practical problems can be modeled by mixed integer nonlinear programming. This paper proposes to solve such problems with a modified spiral dynamics inspired optimization method of Tamura and Yasuda. Four test cases have been examined, including problems in engineering and sport. This method succeeds in obtaining the optimal result in all test cases.
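The spiral update of Tamura and Yasuda, which the paper modifies, rotates and contracts every search point toward the incumbent best point. A continuous 2-D sketch follows (the paper's mixed-integer modification is not reproduced here, and the parameter values are illustrative):

```python
import math, random

def spiral_optimize(f, n_points=20, iters=200, r=0.95, theta=math.pi / 4, seed=0):
    """Spiral dynamics sketch: each point follows
    x_new = x* + r * R(theta) * (x - x*), where x* is the best point."""
    rng = random.Random(seed)
    pts = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(n_points)]
    best = min(pts, key=f)
    c, s = math.cos(theta), math.sin(theta)
    for _ in range(iters):
        new_pts = []
        for (x, y) in pts:
            dx, dy = x - best[0], y - best[1]
            # rotate by theta and contract by r toward the best point
            new_pts.append((best[0] + r * (c * dx - s * dy),
                            best[1] + r * (s * dx + c * dy)))
        pts = new_pts
        cand = min(pts, key=f)
        if f(cand) < f(best):
            best = cand
    return best, f(best)

sphere = lambda p: p[0] ** 2 + p[1] ** 2
```

The rotation sweeps points around the incumbent while the contraction narrows the search, balancing exploration and convergence with only two parameters.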
Cheng, Guanhui; Huang, Guohe; Dong, Cong; Xu, Ye; Chen, Xiujuan; Chen, Jiapei
2017-03-01
Due to the existence of complexities of heterogeneities, hierarchy, discreteness, and interactions in municipal solid waste management (MSWM) systems such as Beijing, China, a series of socio-economic and eco-environmental problems may emerge or worsen and result in irredeemable damages in the following decades. Meanwhile, existing studies, especially ones focusing on MSWM in Beijing, could hardly reflect these complexities in system simulations and provide reliable decision support for management practices. Thus, a framework of distributed mixed-integer fuzzy hierarchical programming (DMIFHP) is developed in this study for MSWM under these complexities. Beijing is selected as a representative case. The Beijing MSWM system is comprehensively analyzed in many aspects such as socio-economic conditions, natural conditions, spatial heterogeneities, treatment facilities, and system complexities, building a solid foundation for system simulation and optimization. Correspondingly, the MSWM system in Beijing is discretized as 235 grids to reflect spatial heterogeneity. A DMIFHP model which is a nonlinear programming problem is constructed to parameterize the Beijing MSWM system. To solve it, an algorithm is proposed based on coupling fuzzy programming with mixed-integer linear programming. Innovations and advantages of the DMIFHP framework are discussed. The optimal MSWM schemes and mechanism revelations are discussed in a companion paper due to length limitations.
Constrained spacecraft reorientation using mixed integer convex programming
NASA Astrophysics Data System (ADS)
Tam, Margaret; Glenn Lightsey, E.
2016-10-01
A constrained attitude guidance (CAG) system is developed using convex optimization to autonomously achieve spacecraft pointing objectives while meeting the constraints imposed by on-board hardware. These constraints include bounds on the control input and slew rate, as well as pointing constraints imposed by the sensors. The pointing constraints consist of inclusion and exclusion cones that dictate permissible orientations of the spacecraft in order to keep objects in or out of the field of view of the sensors. The optimization scheme drives a body vector towards a target inertial vector along a trajectory that consists solely of permissible orientations in order to achieve the desired attitude for a given mission mode. The non-convex rotational kinematics are handled by discretization, which also ensures that the quaternion stays unity norm. In order to guarantee an admissible path, the pointing constraints are relaxed. Depending on how strict the pointing constraints are, the degree of relaxation is tuneable. The use of binary variables permits the inclusion of logical expressions in the pointing constraints in the case that a set of sensors has redundancies. The resulting mixed integer convex programming (MICP) formulation generates a steering law that can be easily integrated into an attitude determination and control (ADC) system. A sample simulation of the system is performed for the Bevo-2 satellite, including disturbance torques and actuator dynamics which are not modeled by the controller. Simulation results demonstrate the robustness of the system to disturbances while meeting the mission requirements with desirable performance characteristics.
NASA Astrophysics Data System (ADS)
Li, J. C.; Gong, B.; Wang, H. G.
2016-08-01
Optimal development of shale gas fields involves designing a most productive fracturing network for hydraulic stimulation processes and operating wells appropriately throughout the production time. A hydraulic fracturing network design (determining well placement, number of fracturing stages, and fracture lengths) is defined by specifying a set of integer ordered blocks to drill wells and create fractures in a discrete shale gas reservoir model. The well control variables such as bottom hole pressures or production rates for well operations are real valued. Shale gas development problems, therefore, can be mathematically formulated with mixed-integer optimization models. A shale gas reservoir simulator is used to evaluate the production performance for a hydraulic fracturing and well control plan. Finding the optimal fracturing design and well operation is challenging because the problem is a mixed integer optimization problem and entails computationally expensive reservoir simulation. A dynamic simplex interpolation-based alternate subspace (DSIAS) search method is applied for mixed integer optimization problems associated with shale gas development projects. The optimization performance is demonstrated with the example case of the development of the Barnett Shale field. The optimization results of DSIAS are compared with those of a pattern search algorithm.
Fast scaffolding with small independent mixed integer programs
Salmela, Leena; Mäkinen, Veli; Välimäki, Niko; Ylinen, Johannes; Ukkonen, Esko
2011-01-01
Motivation: Assembling genomes from short read data has become increasingly popular, but the problem remains computationally challenging especially for larger genomes. We study the scaffolding phase of sequence assembly where preassembled contigs are ordered based on mate pair data. Results: We present MIP Scaffolder that divides the scaffolding problem into smaller subproblems and solves these with mixed integer programming. The scaffolding problem can be represented as a graph and the biconnected components of this graph can be solved independently. We present a technique for restricting the size of these subproblems so that they can be solved accurately with mixed integer programming. We compare MIP Scaffolder to two state of the art methods, SOPRA and SSPACE. MIP Scaffolder is fast and produces scaffolds as good as or better than its competitors' on large genomes. Availability: The source code of MIP Scaffolder is freely available at http://www.cs.helsinki.fi/u/lmsalmel/mip-scaffolder/. Contact: leena.salmela@cs.helsinki.fi PMID:21998153
Stochastic Dynamic Mixed-Integer Programming (SD-MIP)
2015-05-05
…and Matos (2012) the authors try to include a Markov chain within an SDDP framework (see also Higle and Kempf 2011). However, the MSD framework… provides a more natural setting for such applications because the setup is based on a dynamic systems framework which admits Markov chains seamlessly… Engineering, University of Southern California, Los Angeles, CA 90089, April 2015. Abstract: Mixed-Integer Programming has traditionally been
Footstep Planning on Uneven Terrain with Mixed-Integer Convex Optimization
2014-08-01
infocenter/cosinfoc/v12r2/topic/com.ibm.common.doc/doc/banner.htm [18] “GNU linear programming kit.” [Online]. Available: http://www.gnu.org/software… planning footstep placements for a robot walking on uneven terrain with obstacles, using a mixed-integer quadratically-constrained quadratic program…
Guo, P; Huang, G H
2009-01-01
In this study, an inexact fuzzy chance-constrained two-stage mixed-integer linear programming (IFCTIP) approach is proposed for supporting long-term planning of waste-management systems under multiple uncertainties in the City of Regina, Canada. The method improves upon the existing inexact two-stage programming and mixed-integer linear programming techniques by incorporating uncertainties expressed as multiple uncertainties of intervals and dual probability distributions within a general optimization framework. The developed method can provide an effective linkage between the predefined environmental policies and the associated economic implications. Four special characteristics of the proposed method make it unique compared with other optimization techniques that deal with uncertainties. Firstly, it provides a linkage to predefined policies that have to be respected when a modeling effort is undertaken; secondly, it is useful for tackling uncertainties presented as intervals, probabilities, fuzzy sets and their incorporation; thirdly, it facilitates dynamic analysis for decisions of facility-expansion planning and waste-flow allocation within a multi-facility, multi-period, multi-level, and multi-option context; fourthly, the penalties are exercised with recourse against any infeasibility, which permits in-depth analyses of various policy scenarios that are associated with different levels of economic consequences when the promised solid waste-generation rates are violated. In a companion paper, the developed method is applied to a real case for the long-term planning of waste management in the City of Regina, Canada.
Mixed Integer Programming and Heuristic Scheduling for Space Communication Networks
NASA Technical Reports Server (NTRS)
Cheung, Kar-Ming; Lee, Charles H.
2012-01-01
We developed a framework and the mathematical formulation for optimizing a communication network using mixed integer programming. The design yields a system that is much smaller, in search-space size, than the earlier approach. Our constrained network optimization takes into account the dynamics of link performance within the network along with mission and operation requirements. A unique penalty function is introduced to transform the mixed integer program into the more manageable problem of searching in a continuous space. We proposed to solve the constrained optimization problem in two stages: first using the heuristic Particle Swarm Optimization algorithm to obtain a good initial starting point, and then feeding the result into the Sequential Quadratic Programming algorithm to reach the final optimal schedule. We demonstrate the above planning and scheduling methodology with a scenario of 20 spacecraft and 3 ground stations of a Deep Space Network site. Our approach and framework are simple and flexible, so problems with larger numbers of constraints and larger networks can be easily adapted and solved.
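The penalty idea above, turning an integer search into a continuous one, can be sketched generically. The abstract does not give the exact penalty, so the quadratic term mu * x * (1 - x) below, which vanishes only at binary points, is an illustrative stand-in; the toy cost function and coefficients are hypothetical.

```python
def penalized_objective(x, cost, mu):
    """Continuous relaxation of a 0/1 objective: the original cost plus a
    penalty mu * sum(x_i * (1 - x_i)) that vanishes only at binary points."""
    return cost(x) + mu * sum(xi * (1.0 - xi) for xi in x)

# Hypothetical cost over two candidate links; coefficients are made up.
cost = lambda x: 3.0 * x[0] + 5.0 * x[1]

# At a binary point the penalty contributes nothing...
assert penalized_objective([1.0, 0.0], cost, mu=100.0) == 3.0
# ...while fractional points are pushed away as mu grows.
assert penalized_objective([0.5, 0.0], cost, mu=100.0) == 1.5 + 25.0
```

As mu increases, continuous minimizers are driven toward binary points, which is the usual rationale for this kind of conversion.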
Mixed Integer Programming and Heuristic Scheduling for Space Communication
NASA Technical Reports Server (NTRS)
Lee, Charles H.; Cheung, Kar-Ming
2013-01-01
Optimal planning and scheduling were developed for a communication network in which the nodes communicate at the highest possible rates while meeting mission requirements and operational constraints. The planning and scheduling problem was formulated in the framework of Mixed Integer Programming (MIP); a special penalty function was introduced to convert the MIP problem into a continuous optimization problem, which was then solved using heuristic optimization. The communication network consists of space and ground assets, with the link dynamics between any two assets varying with time, distance, and telecom configuration. One asset may communicate with another at very high data rates at one time, while at other times communication is impossible because the asset is inaccessible from the network due to planetary occultation. Based on the network's geometric dynamics and link capabilities, the start time, end time, and link configuration of each view period are selected to maximize the communication efficiency within the network. Mathematical formulations for the constrained mixed integer optimization problem were derived, and efficient analytical and numerical techniques were developed to find the optimal solution. By setting up the problem as a MIP, the search space for the optimization problem is reduced significantly, thereby speeding up the solution process. The ratio of the dimension of the traditional method to that of the proposed formulation is approximately N (single) to 2*N (arraying), where N is the number of receiving antennas of a node. By introducing a special penalty function, the MIP problem with a non-differentiable cost function and nonlinear constraints can be converted into a continuous-variable problem that is amenable to solution.
Lin, Fu; Leyffer, Sven; Munson, Todd
2016-04-12
We study a two-stage mixed-integer linear program (MILP) with more than 1 million binary variables in the second stage. We develop a two-level approach by constructing a semi-coarse model that coarsens with respect to variables and a coarse model that coarsens with respect to both variables and constraints. We coarsen binary variables by selecting a small number of prespecified on/off profiles. We aggregate constraints by partitioning them into groups and taking a convex combination over each group. With an appropriate choice of coarsened profiles, the semi-coarse model is guaranteed to find a feasible solution of the original problem and hence provides an upper bound on the optimal solution. We show that solving a sequence of coarse models converges to the same upper bound in a provably finite number of steps. This is achieved by adding violated constraints to coarse models until all constraints in the semi-coarse model are satisfied. We demonstrate the effectiveness of our approach in cogeneration for buildings. Here, the coarsened models allow us to obtain good approximate solutions at a fraction of the time required by solving the original problem. Extensive numerical experiments show that the two-level approach scales to large problems that are beyond the capacity of state-of-the-art commercial MILP solvers.
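The constraint-aggregation step admits a minimal sketch. The group and weights below are hypothetical; the point is only that a convex combination of linear constraints a·x <= b is implied by the group it summarizes, so the coarse model is a relaxation of the original.

```python
def aggregate(constraints, weights):
    """Convex-combine a group of linear constraints (a, b), meaning a.x <= b,
    into a single constraint. Any x satisfying every member of the group also
    satisfies the aggregate, so aggregation relaxes the model."""
    assert abs(sum(weights) - 1.0) < 1e-9 and all(w >= 0 for w in weights)
    n = len(constraints[0][0])
    a = [sum(w * c[0][j] for w, c in zip(weights, constraints)) for j in range(n)]
    b = sum(w * c[1] for w, c in zip(weights, constraints))
    return a, b

# Two hypothetical constraints: x1 + 2*x2 <= 4 and 3*x1 + x2 <= 6.
group = [([1.0, 2.0], 4.0), ([3.0, 1.0], 6.0)]
a, b = aggregate(group, [0.5, 0.5])
assert a == [2.0, 1.5] and b == 5.0
x = [1.0, 1.0]  # satisfies both originals (3 <= 4 and 4 <= 6)
assert sum(ai * xi for ai, xi in zip(a, x)) <= b  # and the aggregate: 3.5 <= 5.0
```

In the paper's scheme, violated original constraints are added back until the coarse solution is feasible for the semi-coarse model.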
Orbital rendezvous mission planning using mixed integer nonlinear programming
NASA Astrophysics Data System (ADS)
Zhang, Jin; Tang, Guo-jin; Luo, Ya-Zhong; Li, Hai-yang
2011-04-01
The rendezvous and docking mission is usually divided into several phases, and the mission planning is performed phase by phase. A new planning method using mixed integer nonlinear programming, which investigates single-phase parameters and phase-connecting parameters simultaneously, is proposed to improve the rendezvous mission's overall performance. The design variables are composed of integers and continuous-valued numbers. The integer part consists of the parameters for station-keeping and sensor-switching, the number of maneuvers in each rendezvous phase, and the number of repeating periods before starting the rendezvous mission. The continuous part consists of the orbital transfer time and the station-keeping duration. The objective function is a combination of the propellant consumed, the sun angle, which represents the power available, and the terminal precision of each rendezvous phase. The operational requirements for spacecraft-ground communication, sun illumination, and the sensor transition are considered. The simple genetic algorithm, which is a combination of the integer-coded and real-coded genetic algorithm, is chosen to obtain the optimal solution. A practical rendezvous mission planning problem is solved by the proposed method. The results show that the proposed method can solve the integrated rendezvous mission planning problem effectively, and that the solution obtained satisfies the operational constraints and has good overall performance.
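A mixed integer/continuous design vector of this kind is straightforward to encode for a genetic algorithm. The gene names and ranges below are illustrative, not taken from the paper:

```python
import random

def random_mixed_plan(rng):
    """One candidate rendezvous plan: integer genes (maneuver counts, repeat
    periods) alongside continuous genes (durations, in hours). All names and
    ranges here are hypothetical, for illustration only."""
    return {
        "maneuvers_per_phase": [rng.randint(2, 4) for _ in range(3)],
        "repeat_periods": rng.randint(0, 5),
        "transfer_time_h": rng.uniform(1.0, 10.0),
        "station_keeping_h": rng.uniform(0.5, 24.0),
    }

rng = random.Random(0)
plan = random_mixed_plan(rng)
assert all(2 <= m <= 4 for m in plan["maneuvers_per_phase"])
assert 1.0 <= plan["transfer_time_h"] <= 10.0
```

An integer-coded crossover then operates on the first two genes and a real-coded crossover on the last two, matching the combined encoding the abstract describes.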
Linderoth, Jeff T.; Luedtke, James R.
2013-05-30
The mathematical modeling of systems often requires the use of both nonlinear and discrete components. Problems involving both discrete and nonlinear components are known as mixed-integer nonlinear programs (MINLPs) and are among the most challenging computational optimization problems. This research project added to the understanding of this area by making a number of fundamental advances. First, the work demonstrated many novel, strong, tractable relaxations designed to deal with non-convexities arising in mathematical formulation. Second, the research implemented the ideas in software that is available to the public. Finally, the work demonstrated the importance of these ideas on practical applications and disseminated the work through scholarly journals, survey publications, and conference presentations.
A DSN optimal spacecraft scheduling model
NASA Technical Reports Server (NTRS)
Webb, W. A.
1982-01-01
A computer model is described which uses mixed-integer linear programming to provide optimal DSN spacecraft schedules given a mission set and specified scheduling requirements. A solution technique is proposed which uses Benders' method and a heuristic starting algorithm.
Estimating Tree-Structured Covariance Matrices via Mixed-Integer Programming.
Bravo, Héctor Corrada; Wright, Stephen; Eng, Kevin H; Keles, Sündüz; Wahba, Grace
2009-01-01
We present a novel method for estimating tree-structured covariance matrices directly from observed continuous data. Specifically, we estimate a covariance matrix from observations of p continuous random variables encoding a stochastic process over a tree with p leaves. A representation of these classes of matrices as linear combinations of rank-one matrices indicating object partitions is used to formulate estimation as instances of well-studied numerical optimization problems. In particular, our estimates are based on projection, where the covariance estimate is the nearest tree-structured covariance matrix to an observed sample covariance matrix. The problem is posed as a linear or quadratic mixed-integer program (MIP) where a setting of the integer variables in the MIP specifies a set of tree topologies of the structured covariance matrix. We solve these problems to optimality using efficient and robust existing MIP solvers. We present a case study in phylogenetic analysis of gene expression and a simulation study comparing our method to distance-based tree estimating procedures.
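The rank-one representation the authors exploit can be sketched directly: each internal node of the tree contributes a block-constant indicator matrix over its leaf set, scaled by a branch length, and the covariance is their sum. The small tree below is hypothetical.

```python
def tree_covariance(partitions, branch_lengths, p):
    """Build a p-by-p tree-structured covariance as a sum of block-constant
    rank-one indicator matrices, one per node's leaf set. Entry (i, j) ends
    up equal to the shared path length from the root to leaves i and j."""
    cov = [[0.0] * p for _ in range(p)]
    for leaves, d in zip(partitions, branch_lengths):
        for i in leaves:
            for j in leaves:
                cov[i][j] += d
    return cov

# Hypothetical 3-leaf tree: root covers all leaves, one internal node
# covers {0, 1}, plus a terminal branch per leaf.
parts = [{0, 1, 2}, {0, 1}, {0}, {1}, {2}]
lens = [1.0, 0.5, 0.2, 0.3, 0.4]
C = tree_covariance(parts, lens, 3)
assert C[0][1] == 1.5   # shared path: root branch + internal branch
assert C[0][2] == 1.0   # shared path: root branch only
assert C[0][0] == 1.7   # total path length from root to leaf 0
```

The MIP in the paper searches over which leaf sets (i.e., which tree topologies) enter this sum; here the topology is simply fixed by hand.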
An optimal spacecraft scheduling model for the NASA deep space network
NASA Technical Reports Server (NTRS)
Webb, W. A.
1985-01-01
A computer model is described which uses mixed-integer linear programming to provide optimal DSN spacecraft schedules given a mission set and specified scheduling requirements. A solution technique is proposed which uses Benders' method and a heuristic starting algorithm.
Henriques, David; González, Patricia; Doallo, Ramón; Saez-Rodriguez, Julio; Banga, Julio R.
2017-01-01
Background We consider a general class of global optimization problems dealing with nonlinear dynamic models. Although this class is relevant to many areas of science and engineering, here we are interested in applying this framework to the reverse engineering problem in computational systems biology, which yields very large mixed-integer dynamic optimization (MIDO) problems. In particular, we consider the framework of logic-based ordinary differential equations (ODEs). Methods We present saCeSS2, a parallel method for the solution of this class of problems. This method is based on a parallel cooperative scatter search metaheuristic, with new mechanisms of self-adaptation and specific extensions to handle large mixed-integer problems. We have paid special attention to the avoidance of convergence stagnation using adaptive cooperation strategies tailored to this class of problems. Results We illustrate its performance with a set of three very challenging case studies from the domain of dynamic modelling of cell signaling. The simplest case study considers a synthetic signaling pathway and has 84 continuous and 34 binary decision variables. A second case study considers the dynamic modeling of signaling in liver cancer using high-throughput data, and has 135 continuous and 109 binary decision variables. The third case study is an extremely difficult problem related to breast cancer, involving 690 continuous and 138 binary decision variables. We report computational results obtained on different infrastructures, including a local cluster, a large supercomputer and a public cloud platform. Interestingly, the results show how the cooperation of individual parallel searches modifies the systemic properties of the sequential algorithm, achieving superlinear speedups compared to an individual search (e.g. speedups of 15 with 10 cores), and significantly improving (by more than 60%) the performance with respect to a non-cooperative parallel scheme. The scalability of the
Designing cost-effective biopharmaceutical facilities using mixed-integer optimization.
Liu, Songsong; Simaria, Ana S; Farid, Suzanne S; Papageorgiou, Lazaros G
2013-01-01
Chromatography operations are identified as critical steps in a monoclonal antibody (mAb) purification process and can represent a significant proportion of the purification material costs. This becomes even more critical with increasing product titers that result in higher mass loads onto chromatography columns, potentially causing capacity bottlenecks. In this work, a mixed-integer nonlinear programming (MINLP) model was created and applied to an industrially relevant case study to optimize the design of a facility by determining the most cost-effective chromatography equipment sizing strategies for the production of mAbs. Furthermore, the model was extended to evaluate the ability of a fixed facility to cope with higher product titers up to 15 g/L. Examination of the characteristics of the optimal chromatography sizing strategies across different titer values enabled the identification of the maximum titer that the facility could handle using a sequence of single-column chromatography steps as well as multi-column steps. The critical titer levels for different ratios of upstream to downstream trains, where multiple parallel columns per step resulted in the removal of facility bottlenecks, were identified. Different facility configurations in terms of the number of upstream trains were considered and the trade-off between their cost and ability to handle higher titers was analyzed. The case study insights demonstrate that the proposed modeling approach, combining MINLP models with visualization tools, is a valuable decision-support tool for the design of cost-effective facility configurations and to aid facility-fit decisions.
2013-01-01
Background Recovering the network topology and associated kinetic parameter values from time-series data are central topics in systems biology. Nevertheless, methods that simultaneously do both are few and lack generality. Results Here, we present a rigorous approach for simultaneously estimating the parameters and regulatory topology of biochemical networks from time-series data. The parameter estimation task is formulated as a mixed-integer dynamic optimization problem with: (i) binary variables, used to model the existence of regulatory interactions and kinetic effects of metabolites in the network processes; and (ii) continuous variables, denoting metabolite concentrations and kinetic parameter values. The approach optimizes the Akaike criterion, which captures the trade-off between complexity (measured by the number of parameters) and accuracy of the fitting. This simultaneous optimization mitigates the overfitting that could result from the addition of spurious regulatory interactions. Conclusion The capabilities of our approach were tested on a benchmark problem. Our algorithm is able to identify a set of plausible network topologies with their associated parameters. PMID:24176044
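The Akaike trade-off the abstract describes can be made concrete. Under Gaussian residuals the criterion reduces (up to an additive constant) to n*ln(RSS/n) + 2k; the residual sums and parameter counts below are made up for illustration.

```python
import math

def aic(rss, n, k):
    """Akaike information criterion under Gaussian residuals: a fit term
    plus a penalty of 2 per estimated parameter (smaller is better)."""
    return n * math.log(rss / n) + 2 * k

n = 50  # hypothetical number of time-series observations
# A richer topology fits slightly better but pays for extra parameters.
simple = aic(rss=10.0, n=n, k=4)
rich = aic(rss=9.8, n=n, k=9)
assert simple < rich  # the small gain in fit does not justify 5 extra parameters
```

This is exactly how spurious regulatory interactions get rejected: an added binary interaction must reduce the residual enough to pay its 2-point penalty.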
A Composite Algorithm for Mixed Integer Constrained Nonlinear Optimization.
1980-01-01
nature of the problem, auxiliary techniques including Lagrange multipliers, penalty functions, linearization, and rounding have all been used to aid in...result is a series of problems Pn with solutions Sn. If the sequence of problems is appropriately selected, two useful properties result. First, knowledge of the solution to the (n)th problem aids in the solution of the (n+1)st problem. Second, the sequence of solutions Sn tends to the solution
Synchronic interval Gaussian mixed-integer programming for air quality management.
Cheng, Guanhui; Huang, Guohe Gordon; Dong, Cong
2015-12-15
To reveal the synchronism of interval uncertainties, the tradeoff between system optimality and security, the discreteness of facility-expansion options, the uncertainty of pollutant dispersion processes, and the seasonality of wind features in air quality management (AQM) systems, a synchronic interval Gaussian mixed-integer programming (SIGMIP) approach is proposed in this study. A robust interval Gaussian dispersion model is developed for approaching the pollutant dispersion process under interval uncertainties and seasonal variations. The reflection of synchronic effects of interval uncertainties in the programming objective is enabled through introducing interval functions. The proposition of constraint violation degrees helps quantify the tradeoff between system optimality and constraint violation under interval uncertainties. The overall optimality of system profits of an SIGMIP model is achieved based on the definition of an integrally optimal solution. Integer variables in the SIGMIP model are resolved by the existing cutting-plane method. Combining these efforts leads to an effective algorithm for the SIGMIP model. An application to an AQM problem in a region in Shandong Province, China, reveals that the proposed SIGMIP model can facilitate identifying the desired scheme for AQM. The enhancement of the robustness of optimization exercises may be helpful for increasing the reliability of suggested schemes for AQM under these complexities. The interrelated tradeoffs among control measures, emission sources, flow processes, receptors, influencing factors, and economic and environmental goals are effectively balanced. Interests of many stakeholders are reasonably coordinated. The harmony between economic development and air quality control is enabled. Results also indicate that the constraint violation degree is effective at reflecting the compromise relationship between constraint-violation risks and system optimality under interval uncertainties. This can
Managing daily surgery schedules in a teaching hospital: a mixed-integer optimization approach.
Pulido, Raul; Aguirre, Adrian M; Ortega-Mier, Miguel; García-Sánchez, Álvaro; Méndez, Carlos A
2014-10-15
This study examined the daily surgical scheduling problem in a teaching hospital. This problem relates to the use of multiple operating rooms and different types of surgeons in a typical surgical day with deterministic operation durations (preincision, incision, and postincision times). Teaching hospitals play a key role in the health-care system; however, existing models assume that the duration of surgery is independent of the surgeon's skills. This problem has not been properly addressed in other studies. We analyze the case of a Spanish public hospital, in which continuous pressures and budgeting reductions entail the more efficient use of resources. To obtain an optimal solution for this problem, we developed a mixed-integer programming model and user-friendly interface that facilitate the scheduling of planned operations for the following surgical day. We also implemented a simulation model to assist the evaluation of different dispatching policies for surgeries and surgeons. The typical aspects we took into account were the type of surgeon, potential overtime, idling time of surgeons, and the use of operating rooms. It is necessary to consider the expertise of a given surgeon when formulating a schedule: such skill can decrease the probability of delays that could affect subsequent surgeries or cause cancellation of the final surgery. We obtained optimal solutions for a set of given instances, which we obtained through surgical information related to acceptable times collected from a Spanish public hospital. We developed a computer-aided framework with a user-friendly interface for use by a surgical manager that presents a 3-D simulation of the problem. Additionally, we obtained an efficient formulation for this complex problem. However, the spread of this kind of operation research in Spanish public health hospitals will take a long time since there is a lack of knowledge of the beneficial techniques and possibilities that operational research can offer for
Large-Scale Bi-Level Strain Design Approaches and Mixed-Integer Programming Solution Techniques
Kim, Joonhoon; Reed, Jennifer L.; Maravelias, Christos T.
2011-01-01
The use of computational models in metabolic engineering has been increasing as more genome-scale metabolic models and computational approaches become available. Various computational approaches have been developed to predict how genetic perturbations affect metabolic behavior at a systems level, and have been successfully used to engineer microbial strains with improved primary or secondary metabolite production. However, identification of metabolic engineering strategies involving a large number of perturbations is currently limited by computational resources due to the size of genome-scale models and the combinatorial nature of the problem. In this study, we present (i) two new bi-level strain design approaches using mixed-integer programming (MIP), and (ii) general solution techniques that improve the performance of MIP-based bi-level approaches. The first approach (SimOptStrain) simultaneously considers gene deletion and non-native reaction addition, while the second approach (BiMOMA) uses minimization of metabolic adjustment to predict knockout behavior in a MIP-based bi-level problem for the first time. Our general MIP solution techniques significantly reduced the CPU times needed to find optimal strategies when applied to an existing strain design approach (OptORF) (e.g., from ∼10 days to ∼5 minutes for metabolic engineering strategies with 4 gene deletions), and identified strategies for producing compounds where previous studies could not (e.g., malate and serine). Additionally, we found novel strategies using SimOptStrain with higher predicted production levels (for succinate and glycerol) than could have been found using an existing approach that considers network additions and deletions in sequential steps rather than simultaneously. Finally, using BiMOMA we found novel strategies involving large numbers of modifications (for pyruvate and glutamate), which sequential search and genetic algorithms were unable to find. The approaches and solution
Mixed-integer programming methods for transportation and power generation problems
NASA Astrophysics Data System (ADS)
Damci Kurt, Pelin
This dissertation conducts theoretical and computational research to solve challenging problems in application areas such as supply chain and power systems. The first part of the dissertation studies a transportation problem with market choice (TPMC) which is a variant of the classical transportation problem in which suppliers with limited capacities have a choice of which demands (markets) to satisfy. We show that TPMC is strongly NP-complete. We consider a version of the problem with a service level constraint on the maximum number of markets that can be rejected and show that if the original problem is polynomial, its cardinality-constrained version is also polynomial. We propose valid inequalities for mixed-integer cover and knapsack sets with variable upper bound constraints, which appear as substructures of TPMC and use them in a branch-and-cut algorithm to solve this problem. The second part of this dissertation studies a unit commitment (UC) problem in which the goal is to minimize the operational cost of power generators over a time period subject to physical constraints while satisfying demand. We provide several exponential classes of multi-period ramping and multi-period variable upper bound inequalities. We prove the strength of these inequalities and describe polynomial-time separation algorithms. Computational results show the effectiveness of the proposed inequalities when used as cuts in a branch-and-cut algorithm to solve the UC problem. The last part of this dissertation investigates the effects of uncertain wind power on the UC problem. A two-stage robust model and a three-stage stochastic program are compared.
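The cover inequalities mentioned for knapsack substructures admit a compact illustration: if a set C of items overflows the capacity, no feasible 0/1 packing can select all of C, so sum(x_i for i in C) <= |C| - 1 is valid. The item weights below are hypothetical.

```python
from itertools import product

def is_cover(weights, cover, capacity):
    """A set C is a cover for the knapsack constraint a.x <= b if the items
    in C together overflow the capacity b."""
    return sum(weights[i] for i in cover) > capacity

weights = [4, 3, 5, 2]
capacity = 8
cover = [0, 1, 2]  # 4 + 3 + 5 = 12 > 8, so this is a cover
assert is_cover(weights, cover, capacity)

# Every feasible 0/1 packing satisfies the cover inequality:
for x in product([0, 1], repeat=4):
    if sum(w * xi for w, xi in zip(weights, x)) <= capacity:
        assert sum(x[i] for i in cover) <= len(cover) - 1
```

In a branch-and-cut algorithm, such inequalities are separated on the fly and added as cuts to tighten the LP relaxation, which is the role they play in the TPMC work above.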
NASA Astrophysics Data System (ADS)
Yin, Sisi; Nishi, Tatsushi
2014-11-01
Quantity discount policy is a decision-making mechanism for trading off prices between suppliers and manufacturers when production volumes change with demand fluctuations in a real market. In this paper, quantity discount models that simultaneously consider the selection of contract suppliers, production quantity, and inventory are addressed. The supply chain planning problem with quantity discounts under demand uncertainty is formulated as a mixed-integer nonlinear programming problem (MINLP) with integral terms. We apply an outer-approximation method to solve MINLP problems. In order to improve the efficiency of the proposed method, the problem is reformulated as a stochastic model, replacing the integral terms by using a normalisation technique. We present numerical examples to demonstrate the efficiency of the proposed method.
He, L; Huang, G H; Lu, H W
2009-04-01
A number of inexact programming methods have been developed for municipal solid waste management under uncertainty. However, most of them do not allow the parameters in the objective and constraints of a programming problem to be functional intervals (i.e., the lower and upper bounds of the intervals are functions of impact factors). In this study, a flexible interval mixed-integer bi-infinite programming (FIMIBIP) method is developed in response to the above concern. A case study is also conducted; the solutions are then compared with those obtained from interval mixed-integer bi-infinite programming (IMIBIP) and fuzzy interval mixed-integer programming (FIMIP) methods. It is indicated that the solutions through FIMIBIP can provide decision support for cost-effectively diverting municipal solid waste, and for sizing, timing and siting the facilities' expansion during the entire planning horizon. These schemes are more flexible than those identified through IMIBIP since the tolerance intervals are introduced to measure the level of constraints satisfaction. The FIMIBIP schemes may also be robust since the solutions are "globally-optimal" under all scenarios caused by the fluctuation of gas/energy prices, while the conventional ones are merely "locally-optimal" under a certain scenario.
Obtaining lower bounds from the progressive hedging algorithm for stochastic mixed-integer programs
Gade, Dinakar; Hackebeil, Gabriel; Ryan, Sarah M.; ...
2016-04-02
We present a method for computing lower bounds in the progressive hedging algorithm (PHA) for two-stage and multi-stage stochastic mixed-integer programs. Computing lower bounds in the PHA allows one to assess the quality of the solutions generated by the algorithm contemporaneously. The lower bounds can be computed in any iteration of the algorithm by using dual prices that are calculated during execution of the standard PHA. Finally, we report computational results on stochastic unit commitment and stochastic server location problem instances, and explore the relationship between key PHA parameters and the quality of the resulting lower bounds.
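One standard way such dual prices yield a bound is the Lagrangian argument: relax nonanticipativity with scenario weights whose probability-weighted sum is zero, minimize each scenario separately, and the weighted total can never exceed the true optimum. The two-scenario toy instance below is hypothetical and not from the paper.

```python
def ph_lower_bound(scenarios, candidates):
    """Lagrangian lower bound of the kind used with progressive hedging:
    each scenario is minimized independently with its dual price w_s added
    to the first-stage cost; valid when sum of prob * w_s equals zero."""
    total = 0.0
    for prob, cost, w in scenarios:
        total += prob * min(cost(x) + w * x for x in candidates)
    return total

candidates = [0, 1]  # a single binary first-stage decision
scenarios = [        # (probability, scenario cost, dual price); made-up data
    (0.5, lambda x: 4 * x + 10 * (1 - x), -1.0),
    (0.5, lambda x: 7 * x + 2 * (1 - x), 1.0),
]
lb = ph_lower_bound(scenarios, candidates)
true_opt = min(
    0.5 * (4 * x + 10 * (1 - x)) + 0.5 * (7 * x + 2 * (1 - x))
    for x in candidates
)
assert lb <= true_opt  # 2.5 <= 5.5 here
```

Note the dual prices satisfy 0.5*(-1) + 0.5*(1) = 0, which is what makes the bound valid; in the paper these prices fall out of the standard PHA iterations at no extra cost.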
Comparison of penalty functions on a penalty approach to mixed-integer optimization
NASA Astrophysics Data System (ADS)
Francisco, Rogério B.; Costa, M. Fernanda P.; Rocha, Ana Maria A. C.; Fernandes, Edite M. G. P.
2016-06-01
In this paper, we present a comparative study involving several penalty functions that can be used in a penalty approach for globally solving bound mixed-integer nonlinear programming (bMINLP) problems. The penalty approach relies on a continuous reformulation of the bMINLP problem by adding a particular penalty term to the objective function. A penalty function based on the `erf' function is proposed. The continuous nonlinear optimization problems are sequentially solved by the population-based firefly algorithm. Preliminary numerical experiments are carried out in order to analyze the quality of the produced solutions when compared with other penalty functions available in the literature.
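A penalty built from `erf' can be shaped to vanish exactly at integer points and saturate smoothly in between. The form below is an illustrative guess at such a shape, not the paper's exact function; the steepness parameter beta is also made up.

```python
import math

def integrality_penalty(x, beta=10.0):
    """Continuous penalty that is zero exactly at integer points; built from
    erf applied to the distance to the nearest integer, so it rises steeply
    and saturates near 1 between integers. Illustrative shape only."""
    frac = min(x - math.floor(x), math.ceil(x) - x)  # distance to nearest integer
    return math.erf(beta * frac)

assert integrality_penalty(3.0) == 0.0
assert integrality_penalty(2.5) > 0.9  # midpoints are heavily penalized
assert integrality_penalty(2.01) < integrality_penalty(2.3)
```

Adding such a term to the objective lets a continuous solver (here, the firefly algorithm) steer iterates toward integer-feasible points without explicit branching.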
He, Li; Huang, G H; Lu, Hongwei
2011-10-15
Recent studies indicated that municipal solid waste (MSW) is a major contributor to global warming due to extensive emissions of greenhouse gases (GHGs). However, most of them focused on investigating impacts of MSW on GHG emission amounts. This study presents two mixed integer bilevel decision-making models for integrated municipal solid waste management and GHG emissions control: MGU-MCL and MCU-MGL. The MGU-MCL model represents a top-down decision process, with the environmental sectors at the national level dominating the upper-level objective and the waste management sectors at the municipal level providing the lower-level objective. The MCU-MGL model implies a bottom-up decision process where the municipality plays a leading role. Results from the models indicate that: the top-down decisions would reduce metric tonne carbon emissions (MTCEs) by about 59% yet increase the total management cost by about 8%; the bottom-up decisions would reduce MTCEs by about 13% while increasing the total management cost only slightly; and on-site monitoring and downscaled laboratory experiments are still required for reducing uncertainty in the GHG emission rate from the landfill facility. Copyright © 2011 Elsevier B.V. All rights reserved.
Optimization of a wood dryer kiln using the mixed integer programming technique: A case study
Gustafsson, S.I.
1999-07-01
When wood is to be utilized as a raw material for furniture, buildings, etc., it must be dried from approximately 100% to 6% moisture content. This is achieved at least partly in a drying kiln. Heat for this purpose is provided by electrical means, or by steam from boilers fired with wood chips or oil. By making a close examination of monitored values from an actual drying kiln, it has been possible to optimize the use of steam and electricity using the so-called mixed integer programming technique. Owing to the operating schedule for the drying kiln, it has been necessary to divide the drying process into very short time intervals, i.e., a number of minutes. Since a drying cycle takes about two or three weeks, this presents a considerable mathematical problem to be solved.
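The structure of such a time-discretized mixed integer model can be conveyed on a toy instance. Everything below (periods, demands, costs, capacities) is invented for illustration, and the binary boiler on/off choices are brute-forced rather than handed to a MIP solver:

```python
from itertools import product

# Toy kiln heating schedule: in each period, heat demand can be met by a
# wood-chip boiler (cheap, limited capacity, start-up cost) and/or by
# electricity (expensive, unlimited). All numbers are made up.
demand = [5, 5, 8, 3]          # heat units per period
boiler_cap, boiler_cost = 6, 1.0
elec_cost = 3.0
startup_cost = 4.0

def schedule_cost(on):         # on: tuple of 0/1 boiler states per period
    cost, prev = 0.0, 0
    for t, d in enumerate(demand):
        if on[t] and not prev:
            cost += startup_cost
        boiler_heat = min(d, boiler_cap) if on[t] else 0
        cost += boiler_cost * boiler_heat + elec_cost * (d - boiler_heat)
        prev = on[t]
    return cost

best = min(product((0, 1), repeat=len(demand)), key=schedule_cost)
print(best, schedule_cost(best))
```

The real problem replaces the brute-force enumeration with a MIP solver, since minute-level discretization over a multi-week drying cycle makes the number of binary variables far too large to enumerate.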
Optimized Waterspace Management and Scheduling Using Mixed-Integer Linear Programming
2016-01-01
Bays, Matthew J. (Automation and Dynamics Branch, Unmanned Systems and Threat Analysis Division, Science and Technology Department)
MDTri: Robust and Efficient Global Mixed Integer Search of Spaces of Multiple Ternary Alloys
Graf, Peter A; Billups, Stephen
2017-07-24
Computational materials design has suffered from a lack of algorithms formulated in terms of experimentally accessible variables. Here we formulate the problem of (ternary) alloy optimization at the level of choice of atoms and their composition that is normal for synthesists. Mathematically, this is a mixed integer problem where a candidate solution consists of a choice of three elements, and how much of each of them to use. This space has the natural structure of a set of equilateral triangles. We solve this problem by introducing a novel version of the DIRECT algorithm that (1) operates on equilateral triangles instead of rectangles and (2) works across multiple triangles. We demonstrate on a test case that the algorithm is both robust and efficient. Finally, we offer an explanation of the efficacy of DIRECT -- specifically, its balance of global and local search -- by showing that 'potentially optimal rectangles' of the original algorithm are akin to the Pareto front of the 'multi-component optimization' of global and local search.
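A heavily simplified, greedy variant of triangle-based search can convey the geometry of the composition space. This is not the MDTri/DIRECT algorithm itself (there is no potentially-optimal-rectangle selection here); the toy objective and the refine-the-best rule are assumptions for the sketch:

```python
# Greedy refinement over the ternary-composition simplex {(a, b, c): a+b+c = 1}.
# Each candidate region is a triangle given by three composition vertices;
# the best triangle (by centroid value) is split into four via edge midpoints.
target = (0.2, 0.3, 0.5)       # hypothetical optimal composition

def centroid(tri):
    return tuple(sum(v[i] for v in tri) / 3.0 for i in range(3))

def score(comp):               # toy objective: distance to the target composition
    return sum((comp[i] - target[i]) ** 2 for i in range(3)) ** 0.5

def split(tri):
    a, b, c = tri
    mid = lambda u, v: tuple((u[i] + v[i]) / 2.0 for i in range(3))
    ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

tris = [((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))]
for _ in range(40):
    best = min(tris, key=lambda t: score(centroid(t)))
    tris.remove(best)
    tris.extend(split(best))

best_comp = min((centroid(t) for t in tris), key=score)
print(best_comp)
```

Note that subdivision by edge midpoints keeps every candidate on the simplex (component sums stay 1), which mirrors the paper's point that equilateral triangles, not rectangles, are the natural search cells for alloy composition.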
NASA Astrophysics Data System (ADS)
Baran, Ismet; Tutum, Cem C.; Hattel, Jesper H.
2013-08-01
In this paper thermo-chemical simulation of the pultrusion process of a composite rod is first used as a validation case to ensure that the utilized numerical scheme is stable and converges to results given in literature. Following this validation case, a cylindrical die block with heaters is added to the pultrusion domain of a composite part and thermal contact resistance (TCR) regions at the die-part interface are defined. Two optimization case studies are performed on this new configuration. In the first one, optimal die radius and TCR values are found by using a hybrid genetic algorithm based on a sequential combination of a genetic algorithm (GA) and a local search technique to fit the centerline temperature of the composite with the one calculated in the validation case. In the second optimization study, the productivity of the process is improved by using a mixed integer genetic algorithm (MIGA) such that the total number of heaters is minimized while satisfying the constraints for the maximum composite temperature, the mean of the cure degree at the die exit and the pulling speed.
Munguia, Lluis-Miquel; Oxberry, Geoffrey; Rajan, Deepak
2016-05-01
Stochastic mixed-integer programs (SMIPs) deal with optimization under uncertainty at many levels of the decision-making process. When solved as extensive formulation mixed-integer programs, problem instances can exceed available memory on a single workstation. In order to overcome this limitation, we present PIPS-SBB: a distributed-memory parallel stochastic MIP solver that takes advantage of parallelism at multiple levels of the optimization process. We also show promising results on the SIPLIB benchmark by combining methods known for accelerating Branch and Bound (B&B) methods with new ideas that leverage the structure of SMIPs. Finally, we expect the performance of PIPS-SBB to improve further as more functionality is added in the future.
NASA Astrophysics Data System (ADS)
Shoemaker, Christine; Wan, Ying
2016-04-01
Optimization of nonlinear water resources management problems that have a mixture of fixed (e.g. construction cost for a well) and variable (e.g. cost per gallon of water pumped) costs has not been well addressed, because prior algorithms for the resulting nonlinear mixed integer problems have required many groundwater simulations (with different configurations of the decision variables), especially when the solution space is multimodal. In particular, heuristic methods like genetic algorithms have often been used in the water resources area, but they require so many groundwater simulations that only small systems have been solved. Hence there is a need for a method that reduces the number of expensive groundwater simulations. A recently published algorithm for nonlinear mixed integer programming using surrogates was shown in this study to greatly reduce the computational effort for obtaining accurate answers to problems involving fixed costs for well construction as well as variable costs for pumping, owing to a substantial reduction in the number of groundwater simulations required. Results are presented for a US EPA hazardous waste site. The nonlinear mixed integer surrogate algorithm is general and can be used on other problems arising in hydrology, with open source codes in Matlab and Python ("pySOT" on Bitbucket).
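The fixed-plus-variable cost structure can be written down concretely. The tiny instance below is invented; it enumerates the binary well-construction choices and allocates pumping greedily, standing in for the expensive simulation-based objective that the surrogate algorithm is designed to avoid evaluating repeatedly:

```python
from itertools import combinations

# Wells: (fixed construction cost, variable cost per unit pumped, capacity).
wells = [(10.0, 1.0, 6), (12.0, 0.5, 5), (8.0, 2.0, 4)]
demand = 8

def cost(chosen):
    if sum(wells[i][2] for i in chosen) < demand:
        return float("inf")                     # cannot meet demand
    total = sum(wells[i][0] for i in chosen)    # fixed construction costs
    remaining = demand
    # Pump from the cheapest chosen wells first (greedy is optimal here).
    for i in sorted(chosen, key=lambda i: wells[i][1]):
        q = min(remaining, wells[i][2])
        total += wells[i][1] * q
        remaining -= q
    return total

subsets = (c for r in range(len(wells) + 1)
           for c in combinations(range(len(wells)), r))
best = min(subsets, key=cost)
print(best, cost(best))
```

In the real application each `cost` evaluation requires a groundwater simulation, so the exponential enumeration over well subsets is exactly what makes surrogate-assisted search necessary.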
Guo, P; Huang, G H
2010-03-01
In this study, an interval-parameter semi-infinite fuzzy-chance-constrained mixed-integer linear programming (ISIFCIP) approach is developed for supporting long-term planning of waste-management systems under multiple uncertainties in the City of Regina, Canada. The method improves upon the existing interval-parameter semi-infinite programming (ISIP) and fuzzy-chance-constrained programming (FCCP) by incorporating uncertainties expressed as dual uncertainties of functional intervals and multiple uncertainties of distributions with fuzzy-interval admissible probability of violating constraint within a general optimization framework. The binary-variable solutions represent the decisions of waste-management-facility expansion, and the continuous ones are related to decisions of waste-flow allocation. The interval solutions can help decision-makers to obtain multiple decision alternatives, as well as provide bases for further analyses of tradeoffs between waste-management cost and system-failure risk. In the application to the City of Regina, Canada, two scenarios are considered. In Scenario 1, the City's waste-management practices would be based on the existing policy over the next 25 years. The total diversion rate for the residential waste would be approximately 14%. Scenario 2 is associated with a policy for waste minimization and diversion, where 35% diversion of residential waste should be achieved within 15 years, and 50% diversion over 25 years. In this scenario, not only landfill would be expanded, but also CF and MRF would be expanded. Through the scenario analyses, useful decision support for the City's solid-waste managers and decision-makers has been generated. Three special characteristics of the proposed method make it unique compared with other optimization techniques that deal with uncertainties. Firstly, it is useful for tackling multiple uncertainties expressed as intervals, functional intervals, probability distributions, fuzzy sets, and their
Barnett, Jason; Watson, Jean -Paul; Woodruff, David L.
2016-11-27
Progressive hedging (PH), though an effective heuristic for solving stochastic mixed integer programs (SMIPs), is not guaranteed to converge in this case. Here, we describe BBPH, a branch and bound algorithm that uses PH at each node in the search tree such that, given sufficient time, it will always converge to a globally optimal solution. In addition to providing a theoretically convergent “wrapper” for PH applied to SMIPs, computational results demonstrate that for some difficult problem instances branch and bound can find improved solutions after exploring only a few nodes.
Linear models: permutation methods
Cade, B.S.; Everitt, B.S.; Howell, D.C.
2005-01-01
Permutation tests (see Permutation Based Inference) for the linear model have applications in behavioral studies when traditional parametric assumptions about the error term in a linear model are not tenable. Improved validity of Type I error rates can be achieved with properly constructed permutation tests. Perhaps more importantly, increased statistical power, improved robustness to effects of outliers, and detection of alternative distributional differences can be achieved by coupling permutation inference with alternative linear model estimators. For example, it is well known that estimates of the mean in a linear model are extremely sensitive to even a single outlying value of the dependent variable, compared to estimates of the median [7, 19]. Traditionally, linear modeling focused on estimating changes in the center of distributions (means or medians). However, quantile regression allows distributional changes to be estimated in all or any selected part of a distribution of responses, providing a more complete statistical picture that has relevance to many biological questions [6]...
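A minimal permutation test for a two-group mean difference (the simplest linear model) looks like the following; the data and the permutation count are invented for the example:

```python
import random

# Two small samples; test H0: no difference in group means.
a = [5.1, 5.5, 5.3, 5.7, 5.2]
b = [6.8, 7.0, 6.5, 6.9, 7.2]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(b) - mean(a)
pooled = a + b
rng = random.Random(0)                # fixed seed for reproducibility

hits = 0
n_perm = 2000
for _ in range(n_perm):
    rng.shuffle(pooled)
    diff = mean(pooled[len(a):]) - mean(pooled[:len(a)])
    if diff >= observed:              # one-sided test
        hits += 1
p_value = (hits + 1) / (n_perm + 1)   # add-one correction
print(p_value)
```

The same scheme extends to regression: permute the response (or residuals) and recompute the slope statistic, which requires no distributional assumption about the error term.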
Zou, Meng; Zhang, Peng-Jun; Wen, Xin-Yu; Chen, Luonan; Tian, Ya-Ping; Wang, Yong
2015-07-15
Multi-biomarker panels can capture the nonlinear synergy among biomarkers, and they are important to aid early diagnosis and ultimately battle complex diseases. However, identification of these multi-biomarker panels from case and control data is challenging. For example, the exhaustive search method is computationally infeasible when the data dimension is high. Here, we propose a novel method, MILP_k, to identify a serum-based multi-biomarker panel to distinguish colorectal cancers (CRC) from benign colorectal tumors. Specifically, the multi-biomarker panel detection problem is modeled as a mixed integer program to maximize the classification accuracy. Then we measured the serum profiling data for 101 CRC patients and 95 benign patients. The 61 biomarkers were analyzed individually and further in combinations by our method. We discovered 4 biomarkers as the optimal small multi-biomarker panel, including the known CRC biomarkers CEA and IL-10 as well as the novel biomarkers IMA and NSE. This multi-biomarker panel achieves a leave-one-out cross-validation (LOOCV) accuracy of 0.7857 by the nearest centroid classifier. An independent test of this panel by a support vector machine (SVM) with threefold cross-validation gives an AUC of 0.8438. This greatly improves the predictive accuracy by 20% over the single best biomarker. Further extension of this 4-biomarker panel to a larger 13-biomarker panel improves the LOOCV accuracy to 0.8673, with an independent AUC of 0.8437. Comparison with the exhaustive search method shows that our method dramatically reduces the searching time by 1000-fold. Experiments on the early cancer stage samples reveal two panels of biomarkers and show promising accuracy. The proposed method allows us to select the subset of biomarkers with best accuracy to distinguish case and control samples given the number of selected biomarkers. Both the receiver operating characteristic curve and the precision-recall curve show our method's consistent performance gain in accuracy. Our method
NASA Astrophysics Data System (ADS)
Purnomo, Muhammad Ridwan Andi; Satrio Wiwoho, Yoga
2016-01-01
Facility layout is one of the production system factors that should be managed well, as it designates the locations of production. In managing the layout, it is essential to design it with the optimal layout condition that supports the work condition in mind. One method for facility layout optimization is Mixed Integer Programming (MIP). In this study, the MIP is solved using Lingo 9.0 software, considering quantitative and qualitative objectives to be achieved simultaneously: minimizing material handling cost, maximizing closeness rating, and minimizing re-layout cost. The research took place in Rekayasa Wangdi, a make-to-order company, focusing on the making of a concrete brick dough stirring machine with 10 departments involved. The result shows an improvement of 333.72 points in objective value for the new layout compared with the initial layout. In conclusion, the proposed MIP is shown to be usable for modeling the facility layout problem under multi-objective considerations for a more realistic result.
Villante, F. L.; Ricci, B.
2010-05-01
We present a new approach to studying the properties of the Sun. We consider small variations of the physical and chemical properties of the Sun with respect to standard solar model predictions and we linearize the structure equations to relate them to the properties of the solar plasma. By assuming that the (variation of) present solar composition can be estimated from the (variation of) nuclear reaction rates and elemental diffusion efficiency in the present Sun, we obtain a linear system of ordinary differential equations which can be used to calculate the response of the Sun to an arbitrary modification of the input parameters (opacity, cross sections, etc.). This new approach is intended to be a complement to the traditional methods for solar model (SM) calculation and allows us to investigate in a more efficient and transparent way the role of parameters and assumptions in SM construction. We verify that these linear solar models recover the predictions of the traditional SMs with a high level of accuracy.
NASA Astrophysics Data System (ADS)
Wu, C. Z.; Huang, G. H.; Yan, X. P.; Cai, Y. P.; Li, Y. P.
2010-05-01
Large crowds are increasingly common at political, social, economic, cultural and sports events in urban areas. This has led to attention on the management of evacuations under such situations. In this study, we optimise an approximation method for vehicle allocation and route planning in case of an evacuation. This method, based on an interval-parameter multi-objective optimisation model, has potential for use in a flexible decision support system for evacuation management. The modeling solutions are obtained by sequentially solving two sub-models corresponding to lower- and upper-bounds for the desired objective function value. The interval solutions are feasible and stable in the given decision space, and this may reduce the negative effects of uncertainty, thereby improving decision makers' estimates under different conditions. The resulting model can be used for a systematic analysis of the complex relationships among evacuation time, cost and environmental considerations. The results of a case study used to validate the proposed model show that the model does generate useful solutions for planning evacuation management and practices. Furthermore, these results are useful for evacuation planners, not only in making vehicle allocation decisions but also for providing insight into the tradeoffs among evacuation time, environmental considerations and economic objectives.
Solving a Class of Stochastic Mixed-Integer Programs With Branch and Price
2006-01-01
The underlying model is the (deterministic) capacitated facility-location problem with sole-sourcing (FLP) (Barcelo and Casanova [5]).
Cheng, Guanhui; Huang, Guohe; Dong, Cong; Xu, Ye; Chen, Jiapei; Chen, Xiujuan; Li, Kailong
2017-02-16
As presented in the first companion paper, distributed mixed-integer fuzzy hierarchical programming (DMIFHP) was developed for municipal solid waste management (MSWM) under complexities of heterogeneities, hierarchy, discreteness, and interactions. Beijing was selected as a representative case. This paper focuses on presenting the obtained schemes and the revealed mechanisms of the Beijing MSWM system. The optimal MSWM schemes for Beijing under various solid waste treatment policies, and their differences, are deliberated. The impacts of facility expansion, hierarchy, and spatial heterogeneities, as well as potential extensions of DMIFHP, are also discussed. Several findings are revealed by the results and a series of comparisons and analyses. For instance, DMIFHP is capable of robustly reflecting these complexities in MSWM systems, especially for Beijing. The optimal MSWM schemes are of fragmented patterns due to the dominant role of the proximity principle in allocating solid waste treatment resources, and they are closely related to the regulated ratios of landfilling, incineration, and composting. Communities without significant differences among distances to different types of treatment facilities are more sensitive to these ratios than others. The complexities of hierarchy and heterogeneities pose significant impacts on MSWM practices. Spatial dislocation of MSW generation rates and facility capacities, caused by unreasonable planning in the past, may result in insufficient utilization of treatment capacities under the substantial influence of transportation costs. The problems of unreasonable MSWM planning, e.g., severe imbalance among different technologies and complete vacancy of ten facilities, deserve deliberation by the public and by the municipal or local governments in Beijing. These findings are helpful for gaining insights into MSWM systems under these complexities, mitigating key challenges in the planning of these systems, improving the related management
NASA Astrophysics Data System (ADS)
Zarindast, Atousa; Seyed Hosseini, Seyed Mohamad; Pishvaee, Mir Saman
2016-11-01
A robust supplier selection problem is proposed in a scenario-based approach, where demand and exchange rates are subject to uncertainty. First, a deterministic multi-objective mixed integer linear programming model is developed; then, the robust counterpart of the proposed mixed integer linear program is presented using recent extensions in robust optimization theory. We discuss the decision variables, respectively, via a two-stage stochastic planning model, a robust stochastic optimization planning model which integrates the worst-case scenario in the modeling approach, and finally an equivalent deterministic planning model. An experimental study is carried out to compare the performances of the three models. The robust model resulted in remarkable cost savings, illustrating that to cope with such uncertainties, we should consider them in advance in our planning. In our case study, different suppliers were selected because of these uncertainties, and since supplier selection is a strategic decision, it is crucial to account for these uncertainties in the planning approach.
Equivalent Linear Logistic Test Models.
ERIC Educational Resources Information Center
Bechger, Timo M.; Verstralen, Huub H. F. M.; Verhelst, Norma D.
2002-01-01
Discusses the Linear Logistic Test Model (LLTM) and demonstrates that there are many equivalent ways to specify a model. Analyzed a real data set (300 responses to 5 analogies) using a Lagrange multiplier test for the specification of the model, and demonstrated that there may be many ways to change the specification of an LLTM and achieve the…
Parameterized Linear Longitudinal Airship Model
NASA Technical Reports Server (NTRS)
Kulczycki, Eric; Elfes, Alberto; Bayard, David; Quadrelli, Marco; Johnson, Joseph
2010-01-01
A parameterized linear mathematical model of the longitudinal dynamics of an airship is undergoing development. This model is intended to be used in designing control systems for future airships that would operate in the atmospheres of Earth and remote planets. Heretofore, the development of linearized models of the longitudinal dynamics of airships has been costly in that it has been necessary to perform extensive flight testing and to use system-identification techniques to construct models that fit the flight-test data. The present model is a generic one that can be relatively easily specialized to approximate the dynamics of specific airships at specific operating points, without need for further system identification, and with significantly less flight testing. The approach taken in the present development is to merge the linearized dynamical equations of an airship with techniques for estimation of aircraft stability derivatives, and to thereby make it possible to construct a linearized dynamical model of the longitudinal dynamics of a specific airship from geometric and aerodynamic data pertaining to that airship. (It is also planned to develop a model of the lateral dynamics by use of the same methods.) All of the aerodynamic data needed to construct the model of a specific airship can be obtained from wind-tunnel testing and computational fluid dynamics
Puerto Rico water resources planning model program description
Moody, D.W.; Maddock, Thomas; Karlinger, M.R.; Lloyd, J.J.
1973-01-01
Because the use of the Mathematical Programming System-Extended (MPSX) to solve large linear and mixed integer programs requires the preparation of many input data cards, a matrix generator program that produces the MPSX input data from a much more limited set of data may expedite the use of the mixed integer programming optimization technique. The Model Definition and Control Program (MODCOP) is intended to assist a planner in preparing MPSX input data for the Puerto Rico Water Resources Planning Model. The model utilizes a mixed-integer mathematical program to identify a minimum-present-cost set of water resources projects (diversions, reservoirs, ground-water fields, desalinization plants, water treatment plants, and inter-basin transfers of water) which will meet a set of future water demands, and to determine their sequence of construction. While MODCOP was specifically written to generate MPSX input data for the planning model described in this report, the program can be easily modified to reflect changes in the model's mathematical structure.
2006-05-01
Computational experiments were run on a Sunblade 1000 computer with 1 GB RAM, with additional runs calculating lower bounds conducted on a Beowulf parallel cluster; these lengthy runs allowed the optimality gap to be reduced on some problem instances.
LINEAR - DERIVATION AND DEFINITION OF A LINEAR AIRCRAFT MODEL
NASA Technical Reports Server (NTRS)
Duke, E. L.
1994-01-01
The Derivation and Definition of a Linear Model program, LINEAR, provides the user with a powerful and flexible tool for the linearization of aircraft aerodynamic models. LINEAR was developed to provide a standard, documented, and verified tool to derive linear models for aircraft stability analysis and control law design. Linear system models define the aircraft system in the neighborhood of an analysis point and are determined by the linearization of the nonlinear equations defining vehicle dynamics and sensors. LINEAR numerically determines a linear system model using nonlinear equations of motion and a user supplied linear or nonlinear aerodynamic model. The nonlinear equations of motion used are six-degree-of-freedom equations with stationary atmosphere and flat, nonrotating earth assumptions. LINEAR is capable of extracting both linearized engine effects, such as net thrust, torque, and gyroscopic effects and including these effects in the linear system model. The point at which this linear model is defined is determined either by completely specifying the state and control variables, or by specifying an analysis point on a trajectory and directing the program to determine the control variables and the remaining state variables. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to provide easy selection of state, control, and observation variables to be used in a particular model. Thus, the order of the system model is completely under user control. Further, the program provides the flexibility of allowing alternate formulations of both the state and observation equations. Data describing the aircraft and the test case is input to the program through a terminal or formatted data files. All data can be modified interactively from case to case. The aerodynamic model can be defined in two ways: a set of nondimensional stability and control derivatives for the flight point of
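The core operation LINEAR performs, extracting a linear state-space model from nonlinear equations of motion by numerical perturbation about an analysis point, can be sketched for a toy system. The dynamics below are a damped pendulum rather than an aircraft, and the step size is an assumption:

```python
import math

# Nonlinear dynamics xdot = f(x, u) for a damped pendulum: x = (theta, omega).
def f(x, u):
    theta, omega = x
    return [omega, -math.sin(theta) - 0.1 * omega + u]

def jacobians(f, x0, u0, eps=1e-6):
    """Central-difference A = df/dx and B = df/du at the analysis point."""
    n = len(x0)
    A = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp = list(x0); xp[j] += eps
        xm = list(x0); xm[j] -= eps
        fp, fm = f(xp, u0), f(xm, u0)
        for i in range(n):
            A[i][j] = (fp[i] - fm[i]) / (2 * eps)
    fp, fm = f(x0, u0 + eps), f(x0, u0 - eps)
    B = [(fp[i] - fm[i]) / (2 * eps) for i in range(n)]
    return A, B

A, B = jacobians(f, [0.0, 0.0], 0.0)
print(A, B)   # analytically A = [[0, 1], [-1, -0.1]], B = [0, 1]
```

The resulting `A` and `B` matrices define the linear model xdot = A x + B u in the neighborhood of the analysis point, which is the form handed to stability analysis and control-law design.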
NASA Astrophysics Data System (ADS)
Kobayashi, Koichi; Hiraishi, Kunihiko
The model predictive/optimal control problem for hybrid systems is reduced to a mixed integer quadratic programming (MIQP) problem. However, the MIQP problem has one serious weakness: the computation time to solve it is too long for practical plants. Several approaches exist for overcoming this technical issue. In this paper, we focus on the modeling of mode transition constraints, which are expressed by a directed graph, and propose a new method to represent a directed graph. The effectiveness of the proposed method is shown by numerical examples on linear switched systems and piecewise linear systems.
Wang, S; Huang, G H
2013-03-15
Flood disasters have been extremely severe in recent decades, and they account for about one third of all natural catastrophes throughout the world. In this study, a two-stage mixed-integer fuzzy programming with interval-valued membership functions (TMFP-IMF) approach is developed for flood-diversion planning under uncertainty. TMFP-IMF integrates the fuzzy flexible programming, two-stage stochastic programming, and integer programming within a general framework. A concept of interval-valued fuzzy membership function is introduced to address complexities of system uncertainties. TMFP-IMF can not only deal with uncertainties expressed as fuzzy sets and probability distributions, but also incorporate pre-regulated water-diversion policies directly into its optimization process. TMFP-IMF is applied to a hypothetical case study of flood-diversion planning for demonstrating its applicability. Results indicate that reasonable solutions can be generated for binary and continuous variables. A variety of flood-diversion and capacity-expansion schemes can be obtained under four scenarios, which enable decision makers (DMs) to identify the most desired one based on their perceptions and attitudes towards the objective-function value and constraints.
Graf, Peter A.; Billups, Stephen
2017-07-24
Computational materials design has suffered from a lack of algorithms formulated in terms of experimentally accessible variables. Here we formulate the problem of (ternary) alloy optimization at the level of choice of atoms and their composition that is normal for synthesists. Mathematically, this is a mixed integer problem where a candidate solution consists of a choice of three elements, and how much of each of them to use. This space has the natural structure of a set of equilateral triangles. We solve this problem by introducing a novel version of the DIRECT algorithm that (1) operates on equilateral triangles instead of rectangles and (2) works across multiple triangles. We demonstrate on a test case that the algorithm is both robust and efficient. Lastly, we offer an explanation of the efficacy of DIRECT -- specifically, its balance of global and local search -- by showing that 'potentially optimal rectangles' of the original algorithm are akin to the Pareto front of the 'multi-component optimization' of global and local search.
Nonlinear Modeling by Assembling Piecewise Linear Models
NASA Technical Reports Server (NTRS)
Yao, Weigang; Liou, Meng-Sing
2013-01-01
To preserve the nonlinearity of a full order system over a parameter range of interest, we propose a simple modeling approach by assembling a set of piecewise local solutions, including the first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model remains robust and accurate for inputs considerably different from the base trajectory in form and magnitude. This modeling preserves the nonlinearity of the problems considered in a rather simple and accurate manner.
Linear systems, and ARMA- and Fliess models
NASA Astrophysics Data System (ADS)
Lomadze, Vakhtang; Khurram Zafar, M.
2010-10-01
Linear (dynamical) systems are central objects of study in linear system theory, and ARMA- and Fliess models are two very important classes of models used to represent them. This article is concerned with the relation between them in the case of higher dimensions. It is shown that the category of linear systems, the 'weak' category of ARMA-models, and the category of Fliess models are equivalent to each other.
Linear Logistic Test Modeling with R
ERIC Educational Resources Information Center
Baghaei, Purya; Kubinger, Klaus D.
2015-01-01
The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The…
Investigating data envelopment analysis model with potential improvement for integer output values
NASA Astrophysics Data System (ADS)
Hussain, Mushtaq Taleb; Ramli, Razamin; Khalid, Ruzelan
2015-12-01
The decrement of input proportions in a DEA model is associated with its input reduction. This reduction is apparently good for the economy, since it could cut unnecessary resource costs. However, in some situations the reduction of relevant inputs, such as labour, could create social problems. Such inputs should thus be maintained or increased. This paper develops an advanced radial DEA model, based on mixed integer linear programming, to improve integer output values through the combination of inputs. The model can deal with real input values and integer output values, making it valuable for situations, faced by most organizations, in which inputs must be combined to improve integer output values.
MILP model for resource disruption in parallel processor system
NASA Astrophysics Data System (ADS)
Nordin, Syarifah Zyurina; Caccetta, Louis
2015-02-01
In this paper, we consider the existence of disruption in an unrelated parallel processor scheduling system. The disruption occurs due to a resource shortage in which one of the parallel processors breaks down during task allocation, which impacts the initial scheduling plan. Our objective is to reschedule the original unrelated parallel processor schedule after the resource disruption so as to minimize the makespan. A mixed integer linear programming model is presented for the recovery scheduling that considers the post-disruption policy. We conduct a computational experiment with different stopping time limits to assess the performance of the model, using the CPLEX 12.1 solver in AIMMS 3.10 software.
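The makespan objective on unrelated processors can be illustrated on a toy instance. The processing times below are invented, and where the paper formulates a MILP solved with CPLEX, this sketch simply enumerates all assignments, which is only feasible for tiny instances.

```python
from itertools import product

# Invented instance: proc_time[i][j] = time of task j on processor i
# ("unrelated" means each processor has its own time for each task).
proc_time = [
    [2, 3, 4, 2],   # processor 0
    [3, 2, 3, 4],   # processor 1
]

def min_makespan(times):
    """Brute-force the task-to-processor assignment minimizing makespan."""
    n_proc, n_task = len(times), len(times[0])
    best = float("inf")
    for assign in product(range(n_proc), repeat=n_task):
        load = [0] * n_proc
        for task, proc in enumerate(assign):
            load[proc] += times[proc][task]
        best = min(best, max(load))   # makespan = most loaded processor
    return best

print(min_makespan(proc_time))  # -> 5
```

A recovery schedule after a breakdown would re-run the same optimization over the surviving processors and the unfinished tasks.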
Composite Linear Models | Division of Cancer Prevention
By Stuart G. Baker The composite linear models software is a matrix approach to compute maximum likelihood estimates and asymptotic standard errors for models for incomplete multinomial data. It implements the method described in Baker SG. Composite linear models for incomplete multinomial data. Statistics in Medicine 1994;13:609-622. The software includes a library of thirty examples from the literature.
A MILP-Based Distribution Optimal Power Flow Model for Microgrid Operation
Liu, Guodong; Starke, Michael R; Zhang, Xiaohu; Tomsovic, Kevin
2016-01-01
This paper proposes a distribution optimal power flow (D-OPF) model for the operation of microgrids. The proposed model minimizes not only the operating cost, including fuel cost, purchasing cost and demand charge, but also several performance indices, including voltage deviation, network power loss and power factor. It co-optimizes the real and reactive power from distributed generators (DGs) and batteries, considering their capacity and power factor limits. The D-OPF is formulated as a mixed-integer linear program (MILP). Numerical simulation results show the effectiveness of the proposed model.
Menu-Driven Solver Of Linear-Programming Problems
NASA Technical Reports Server (NTRS)
Viterna, L. A.; Ferencz, D.
1992-01-01
Program assists inexperienced user in formulating linear-programming problems. A Linear Program Solver (ALPS) computer program is a full-featured LP analysis program. Solves plain linear-programming problems as well as more-complicated mixed-integer and pure-integer programs. Also contains efficient technique for solution of purely binary linear-programming problems. Written entirely in IBM's APL2/PC software, Version 1.01. Packed program contains licensed material, property of IBM (copyright 1988, all rights reserved).
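The binary-program case mentioned above can be sketched on a tiny invented instance. ALPS itself is written in APL2 and uses implicit enumeration; this Python sketch enumerates exhaustively, which conveys the idea only for a handful of variables.

```python
from itertools import product

# Invented binary program: maximize c.x subject to A.x <= b, x in {0,1}^3.
c = [5, 4, 3]
A = [[2, 3, 1],
     [4, 1, 2]]
b = [5, 6]

best_val, best_x = None, None
for x in product((0, 1), repeat=len(c)):
    feasible = all(
        sum(a * xi for a, xi in zip(row, x)) <= bi
        for row, bi in zip(A, b)
    )
    if feasible:
        val = sum(ci * xi for ci, xi in zip(c, x))
        if best_val is None or val > best_val:
            best_val, best_x = val, x

print(best_val, best_x)  # -> 9 (1, 1, 0)
```

Implicit enumeration improves on this by pruning partial assignments whose bound already rules them out, so most of the 2^n vectors are never examined.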
Spaghetti Bridges: Modeling Linear Relationships
ERIC Educational Resources Information Center
Kroon, Cindy D.
2016-01-01
Mathematics and science are natural partners. One of many examples of this partnership occurs when scientific observations are made, thus providing data that can be used for mathematical modeling. Developing mathematical relationships elucidates such scientific principles. This activity describes a data-collection activity in which students employ…
Convex set and linear mixing model
NASA Technical Reports Server (NTRS)
Xu, P.; Greeley, R.
1993-01-01
A major goal of optical remote sensing is to determine surface compositions of the earth and other planetary objects. For assessment of composition, single pixels in multi-spectral images usually record a mixture of the signals from various materials within the corresponding surface area. In this report, we introduce a closed and bounded convex set as a mathematical model for linear mixing. This model has a clear geometric implication because the closed and bounded convex set is a natural generalization of a triangle in n-space. The endmembers are extreme points of the convex set. Every point in the convex closure of the endmembers is a linear mixture of those endmembers, which is exactly how linear mixing is defined. With this model, some general criteria for selecting endmembers could be described. This model can lead to a better understanding of linear mixing models.
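With two endmembers, the convex-combination definition of linear mixing can be solved by hand. The reflectance numbers below are made up for illustration: a pixel spectrum is written as f*e1 + (1-f)*e2 with a fraction f between 0 and 1, i.e. a point inside the convex hull of the endmembers.

```python
# Invented two-band spectra for two endmembers and one mixed pixel.
e1 = [0.2, 0.6]     # endmember 1 reflectance in bands 1 and 2
e2 = [0.8, 0.4]     # endmember 2
pixel = [0.5, 0.5]  # observed mixture

# In band 1, f*e1[0] + (1-f)*e2[0] = pixel[0] gives f directly.
f = (pixel[0] - e2[0]) / (e1[0] - e2[0])
print(f)  # an equal mix of the two endmembers

# The same fraction must reproduce the pixel in band 2 if the
# mixture is truly linear (i.e. the pixel lies in the convex hull).
residual = f * e1[1] + (1 - f) * e2[1] - pixel[1]
print(abs(residual) < 1e-9)
```

With more endmembers and bands, the fractions are found by solving the analogous linear system under the nonnegativity and sum-to-one constraints.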
Extended Generalized Linear Latent and Mixed Model
ERIC Educational Resources Information Center
Segawa, Eisuke; Emery, Sherry; Curry, Susan J.
2008-01-01
The generalized linear latent and mixed modeling (GLLAMM framework) includes many models such as hierarchical and structural equation models. However, GLLAMM cannot currently accommodate some models because it does not allow some parameters to be random. GLLAMM is extended to overcome the limitation by adding a submodel that specifies a…
Classical Testing in Functional Linear Models
Kong, Dehan; Staicu, Ana-Maria; Maity, Arnab
2016-01-01
We extend four tests common in classical regression - Wald, score, likelihood ratio and F tests - to functional linear regression, for testing the null hypothesis, that there is no association between a scalar response and a functional covariate. Using functional principal component analysis, we re-express the functional linear model as a standard linear model, where the effect of the functional covariate can be approximated by a finite linear combination of the functional principal component scores. In this setting, we consider application of the four traditional tests. The proposed testing procedures are investigated theoretically for densely observed functional covariates when the number of principal components diverges. Using the theoretical distribution of the tests under the alternative hypothesis, we develop a procedure for sample size calculation in the context of functional linear regression. The four tests are further compared numerically for both densely and sparsely observed noisy functional data in simulation experiments and using two real data applications. PMID:28955155
Reasons for Hierarchical Linear Modeling: A Reminder.
ERIC Educational Resources Information Center
Wang, Jianjun
1999-01-01
Uses examples of hierarchical linear modeling (HLM) at local and national levels to illustrate proper applications of HLM and dummy variable regression. Raises cautions about the circumstances under which hierarchical data do not need HLM. (SLD)
Aircraft engine mathematical model - linear system approach
NASA Astrophysics Data System (ADS)
Rotaru, Constantin; Roateşi, Simona; Cîrciu, Ionicǎ
2016-06-01
This paper examines a simplified mathematical model of the aircraft engine, based on the theory of linear and nonlinear systems. The dynamics of the engine were represented by a linear, time-variant model near a nominal operating point within a finite time interval. The linearized equations were expressed in matrix form, suitable for incorporation into the MAPLE program solver. The behavior of the engine was described in terms of the variation of rotational speed following a deflection of the throttle. The engine inlet parameters can cover a wide range of altitudes and Mach numbers.
A Vernacular for Linear Latent Growth Models
ERIC Educational Resources Information Center
Hancock, Gregory R.; Choi, Jaehwa
2006-01-01
In its most basic form, latent growth modeling (latent curve analysis) allows an assessment of individuals' change in a measured variable X over time. For simple linear models, as with other growth models, parameter estimates associated with the a construct (amount of X at a chosen temporal reference point) and b construct (growth in X per unit…
Semi-Parametric Generalized Linear Models.
1985-08-01
is nonsingular, upper triangular, and of full rank r. It is known (Dongarra et al., 1979) that G^{-1}F^T is the Moore-Penrose inverse of L. Semi-Parametric Generalized Linear Models, Mathematics Research Center, University of Wisconsin-Madison, August 1985.
Congeneric Models and Levine's Linear Equating Procedures.
ERIC Educational Resources Information Center
Brennan, Robert L.
In 1955, R. Levine introduced two linear equating procedures for the common-item non-equivalent populations design. His procedures make the same assumptions about true scores; they differ in terms of the nature of the equating function used. In this paper, two parameterizations of a classical congeneric model are introduced to model the variables…
ERIC Educational Resources Information Center
Ker, H. W.
2014-01-01
Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data and compares the data-analytic results from three regression…
Space Surveillance Network Scheduling Under Uncertainty: Models and Benefits
NASA Astrophysics Data System (ADS)
Valicka, C.; Garcia, D.; Staid, A.; Watson, J.; Rintoul, M.; Hackebeil, G.; Ntaimo, L.
2016-09-01
Advances in space technologies continue to reduce the cost of placing satellites in orbit. With more entities operating space vehicles, the number of orbiting vehicles and debris has reached unprecedented levels, and the number continues to grow. Sensor operators responsible for maintaining the space catalog and providing space situational awareness face increasingly complex and demanding scheduling requirements. Despite these trends, a lack of advanced tools continues to prevent sensor planners and operators from fully utilizing space surveillance resources. One key challenge involves optimally selecting sensors from a network of varying capabilities for missions with differing requirements. Another open challenge, the primary focus of our work, is building robust schedules that effectively plan for uncertainties associated with weather, ad hoc collections, and other target uncertainties. Existing tools and techniques are not amenable to rigorous analysis of schedule optimality and do not adequately address the presented challenges. Building on prior research, we have developed stochastic mixed-integer linear optimization models to address uncertainty due to weather's effect on collection quality. By making use of the open source Pyomo optimization modeling software, we have posed and solved sensor network scheduling models addressing both forms of uncertainty. We present herein models that allow for concurrent scheduling of collections with the same sensor configuration and for proactively scheduling against uncertain ad hoc collections. The suitability of stochastic mixed-integer linear optimization for building sensor network schedules under different run-time constraints will be discussed.
Managing clustered data using hierarchical linear modeling.
Warne, Russell T; Li, Yan; McKyer, E Lisako J; Condie, Rachel; Diep, Cassandra S; Murano, Peter S
2012-01-01
Researchers in nutrition research often use cluster or multistage sampling to gather participants for their studies. These sampling methods often produce violations of the assumption of data independence that most traditional statistics share. Hierarchical linear modeling is a statistical method that can overcome violations of the independence assumption and lead to correct analysis of data, yet it is rarely used in nutrition research. The purpose of this viewpoint is to illustrate the benefits of hierarchical linear modeling within a nutrition research context. Copyright © 2012 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
Are all Linear Paired Comparison Models Equivalent
1990-09-01
Previous authors (Jackson and Fleckenstein 1957, Mosteller 1958, Noether 1960) have found that different models of paired comparisons data lead to simi...ponential distribution with a location parameter (Mosteller 1958, Noether 1960). Formal statements describing the limiting behavior of the gamma...that are not convolution type linear models (the uniform model considered by Smith (1956), Mosteller (1958), Noether (1960)) and other convolution
Modelling female fertility traits in beef cattle using linear and non-linear models.
Naya, H; Peñagaricano, F; Urioste, J I
2017-06-01
Female fertility traits are key components of the profitability of beef cattle production. However, these traits are difficult and expensive to measure, particularly under extensive pastoral conditions, and consequently, fertility records are in general scarce and somewhat incomplete. Moreover, fertility traits are usually dominated by the effects of herd-year environment, and it is generally assumed that relatively small margins are left for genetic improvement. New ways of modelling genetic variation in these traits are needed. Inspired by the methodological developments made by Prof. Daniel Gianola and co-workers, we assayed linear (Gaussian), Poisson, probit (threshold), censored Poisson and censored Gaussian models on three different kinds of endpoints, namely calving success (CS), number of days from first calving (CD) and number of failed oestrus (FE). For models involving FE and CS, non-linear models outperformed their linear counterparts. For models derived from CD, linear versions displayed better adjustment than the non-linear counterparts. Non-linear models showed consistently higher estimates of heritability and repeatability in all cases (h² < 0.08 and r < 0.13 for linear models; h² > 0.23 and r > 0.24 for non-linear models). While additive and permanent environment effects showed highly favourable correlations between all models (>0.789), consistency in selecting the 10% best sires showed important differences, mainly amongst the considered endpoints (FE, CS and CD). In consequence, the endpoints should be considered as modelling different underlying genetic effects, with linear models more appropriate for describing CD and non-linear models better for FE and CS. © 2017 Blackwell Verlag GmbH.
Managing Clustered Data Using Hierarchical Linear Modeling
ERIC Educational Resources Information Center
Warne, Russell T.; Li, Yan; McKyer, E. Lisako J.; Condie, Rachel; Diep, Cassandra S.; Murano, Peter S.
2012-01-01
Researchers in nutrition research often use cluster or multistage sampling to gather participants for their studies. These sampling methods often produce violations of the assumption of data independence that most traditional statistics share. Hierarchical linear modeling is a statistical method that can overcome violations of the independence…
Bayesian Methods for High Dimensional Linear Models
Mallick, Himel; Yi, Nengjun
2013-01-01
In this article, we present a selective overview of some recent developments in Bayesian model and variable selection methods for high dimensional linear models. While most of the reviews in literature are based on conventional methods, we focus on recently developed methods, which have proven to be successful in dealing with high dimensional variable selection. First, we give a brief overview of the traditional model selection methods (viz. Mallow’s Cp, AIC, BIC, DIC), followed by a discussion on some recently developed methods (viz. EBIC, regularization), which have occupied the minds of many statisticians. Then, we review high dimensional Bayesian methods with a particular emphasis on Bayesian regularization methods, which have been used extensively in recent years. We conclude by briefly addressing the asymptotic behaviors of Bayesian variable selection methods for high dimensional linear models under different regularity conditions. PMID:24511433
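Two of the classical criteria surveyed above have closed forms for Gaussian linear models. The numbers below are fabricated solely to show the mechanics: up to an additive constant, AIC = n*ln(RSS/n) + 2k and BIC = n*ln(RSS/n) + k*ln(n) for a model with k parameters and residual sum of squares RSS on n points.

```python
import math

def aic(rss, n, k):
    """Akaike information criterion for a Gaussian linear model
    (up to an additive constant that cancels when comparing models)."""
    return n * math.log(rss / n) + 2 * k

def bic(rss, n, k):
    """Bayesian information criterion; the k*ln(n) term penalizes
    parameters more heavily than AIC whenever n > e^2 ~ 7.4."""
    return n * math.log(rss / n) + k * math.log(n)

n = 50
# A richer model (k=5) reduces RSS only marginally over a small one (k=2):
small_aic, small_bic = aic(12.0, n, 2), bic(12.0, n, 2)
rich_aic, rich_bic = aic(11.5, n, 5), bic(11.5, n, 5)
print(small_aic < rich_aic)   # both criteria prefer the small model here,
print(small_bic < rich_bic)   # and BIC does so by a wider margin
```

The penalty gap between the two criteria is exactly (k difference) * (ln n - 2), which is why BIC tends to select sparser models on moderate-to-large samples.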
NASA Technical Reports Server (NTRS)
Ferencz, Donald C.; Viterna, Larry A.
1991-01-01
ALPS is a computer program which can be used to solve general linear program (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve for integer, mixed integer, and binary type problems. Pure linear programs are solved with the revised simplex method. Integer or mixed integer programs are solved initially with the revised simplex method, and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC compatible computer are included in the appendices along with a general introduction to linear programming. A programmers guide is also included for assistance in modifying and maintaining the program.
Non-linear memristor switching model
NASA Astrophysics Data System (ADS)
Chernov, A. A.; Islamov, D. R.; Pik'nik, A. A.
2016-10-01
We introduce a thermodynamical model of filament growth during a current pulse through a memristor. The model is a boundary value problem comprising a nonstationary heat conduction equation with a non-linear Joule heat source, a Poisson equation, and Shockley-Read-Hall equations that take into account strong electron-phonon interactions in trap ionization and charge transport processes. The charge current, which defines the heating in the model, depends on the rate of oxygen vacancy generation, which in turn depends on the local temperature. The solution of this problem allows one to describe the kinetics of the switching process and the final filament morphology.
[From clinical judgment to linear regression model].
Palacios-Cruz, Lino; Pérez, Marcela; Rivas-Ruiz, Rodolfo; Talavera, Juan O
2013-01-01
When we think about mathematical models, such as the linear regression model, we think that these terms are only used by those engaged in research, a notion that is far from the truth. Legendre described the first mathematical model in 1805, and Galton introduced the formal term in 1886. Linear regression is one of the most commonly used regression models in clinical practice. It is useful to predict or show the relationship between two or more variables as long as the dependent variable is quantitative and has a normal distribution. Stated another way, regression is used to predict a measure based on the knowledge of at least one other variable. Linear regression has as its first objective to determine the slope or inclination of the regression line: Y = a + bx, where "a" is the intercept or regression constant, equivalent to the value of "Y" when "X" equals 0, and "b" (also called the slope) indicates the increase or decrease that occurs when the variable "x" increases or decreases by one unit. In the regression line, "b" is called the regression coefficient. The coefficient of determination (R²) indicates the importance of the independent variables in the outcome.
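The line Y = a + bx and the coefficient of determination described above can be computed in a few lines on small made-up data (the x and y values below are invented for illustration).

```python
# Invented data: x is the predictor, y the quantitative outcome.
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 8.0, 9.8]

n = len(x)
mx, my = sum(x) / n, sum(y) / n

# Slope b = Sxy / Sxx; intercept a = mean(y) - b*mean(x).
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
b = sxy / sxx           # change in Y per one-unit change in x
a = my - b * mx         # value of Y when x = 0

# R^2 = 1 - SS_residual / SS_total.
ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
ss_tot = sum((yi - my) ** 2 for yi in y)
r2 = 1 - ss_res / ss_tot

print(round(b, 2), round(a, 2), round(r2, 3))  # -> 1.95 0.15 0.998
```

An R² close to 1, as here, means nearly all the variation in Y is explained by the regression line.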
Synaptic dynamics: linear model and adaptation algorithm.
Yousefi, Ali; Dibazar, Alireza A; Berger, Theodore W
2014-08-01
In this research, temporal processing in brain neural circuitries is addressed by a dynamic model of synaptic connections in which the synapse model accounts for both pre- and post-synaptic processes determining its temporal dynamics and strength. Neurons, which are excited by the post-synaptic potentials of hundreds of synapses, build the computational engine capable of processing dynamic neural stimuli. Temporal dynamics in neural models with dynamic synapses are analyzed, and learning algorithms for synaptic adaptation of neural networks with hundreds of synaptic connections are proposed. The paper starts by introducing a linear approximate model for the temporal dynamics of synaptic transmission. The proposed linear model substantially simplifies the analysis and training of spiking neural networks. Furthermore, it is capable of replicating the synaptic response of the non-linear facilitation-depression model with an accuracy better than 92.5%. In the second part of the paper, a supervised spike-in-spike-out learning rule for synaptic adaptation in dynamic synapse neural networks (DSNN) is proposed. The proposed learning rule is a biologically plausible process, and it is capable of simultaneously adjusting both pre- and post-synaptic components of individual synapses. The last section of the paper presents a rigorous analysis of the learning algorithm in a system identification task with hundreds of synaptic connections, which confirms the learning algorithm's accuracy, repeatability and scalability. The DSNN is utilized to predict the spiking activity of cortical neurons and in pattern recognition tasks. The DSNN model is demonstrated to be a generative model capable of producing different cortical neuron spiking patterns and CA1 pyramidal neuron recordings. A single-layer DSNN classifier on a benchmark pattern recognition task outperforms a 2-layer neural network and GMM classifiers while having fewer free parameters and
User's manual for LINEAR, a FORTRAN program to derive linear aircraft models
NASA Technical Reports Server (NTRS)
Duke, Eugene L.; Patterson, Brian P.; Antoniewicz, Robert F.
1987-01-01
This report documents a FORTRAN program that provides a powerful and flexible tool for the linearization of aircraft models. The program LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model.
Nonlinear damping and quasi-linear modelling.
Elliott, S J; Ghandchi Tehrani, M; Langley, R S
2015-09-28
The mechanism of energy dissipation in mechanical systems is often nonlinear. Even though there may be other forms of nonlinearity in the dynamics, nonlinear damping is the dominant source of nonlinearity in a number of practical systems. The analysis of such systems is simplified by the fact that they show no jump or bifurcation behaviour, and indeed can often be well represented by an equivalent linear system, whose damping parameters depend on the form and amplitude of the excitation, in a 'quasi-linear' model. The diverse sources of nonlinear damping are first reviewed in this paper, before some example systems are analysed, initially for sinusoidal and then for random excitation. For simplicity, it is assumed that the system is stable and that the nonlinear damping force depends on the nth power of the velocity. For sinusoidal excitation, it is shown that the response is often also almost sinusoidal, and methods for calculating the amplitude are described based on the harmonic balance method, which is closely related to the describing function method used in control engineering. For random excitation, several methods of analysis are shown to be equivalent. In general, iterative methods need to be used to calculate the equivalent linear damper, since its value depends on the system's response, which itself depends on the value of the equivalent linear damper. The power dissipation of the equivalent linear damper, for both sinusoidal and random cases, matches that dissipated by the nonlinear damper, providing both a firm theoretical basis for this modelling approach and clear physical insight. Finally, practical examples of nonlinear damping are discussed: in microspeakers, vibration isolation, energy harvesting and the mechanical response of the cochlea.
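The power-matching idea behind the quasi-linear model can be sketched numerically for the sinusoidal case (the parameter values below are arbitrary): for a damper force c_n*|v|^(n-1)*v and velocity v = A*w*cos(w*t), the equivalent linear damper c_eq is the one dissipating the same energy per cycle, which for a linear damper is pi*A^2*w*c_eq.

```python
import math

def c_equivalent(c_n, n, A, w, steps=100_000):
    """Equivalent linear damping coefficient for F = c_n*|v|**(n-1)*v
    under sinusoidal motion x = A*sin(w*t), by matching dissipated
    energy per cycle (numerical integration of force * velocity)."""
    T = 2 * math.pi / w
    dt = T / steps
    e_nonlinear = 0.0
    for i in range(steps):
        v = A * w * math.cos(w * i * dt)
        e_nonlinear += c_n * abs(v) ** (n - 1) * v * v * dt
    # A linear damper c_eq dissipates pi * A^2 * w * c_eq per cycle.
    return e_nonlinear / (math.pi * A ** 2 * w)

# Sanity check: for n = 1 the damper is already linear, so c_eq == c_n.
print(round(c_equivalent(2.0, 1, 0.5, 3.0), 6))  # -> 2.0
# For n = 3 (cubic damping) c_eq grows with response amplitude A,
# which is the amplitude dependence the abstract describes.
print(c_equivalent(2.0, 3, 0.5, 3.0) < c_equivalent(2.0, 3, 1.0, 3.0))
```

This matches the harmonic-balance result only approximately for the full response problem, since in practice c_eq and the amplitude A must be found iteratively, as the abstract notes.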
From spiking neuron models to linear-nonlinear models.
Ostojic, Srdjan; Brunel, Nicolas
2011-01-20
Neurons transform time-varying inputs into action potentials emitted stochastically at a time-dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input, successively, a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to what extent the input-output mapping of biophysically more realistic spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in the presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of the input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally, we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.
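A toy LN cascade in the spirit of the abstract can be written directly; the exponential filter and threshold nonlinearity below are invented stand-ins, not the parameter-free forms derived in the paper.

```python
import math

DT = 0.001   # time step, s
TAU = 0.02   # filter time constant, s (arbitrary choice)

def ln_rate(signal, gain=100.0, threshold=0.2):
    """Linear-nonlinear cascade: convolve the input with an exponential
    temporal filter, then apply a static rectifying nonlinearity."""
    kernel = [math.exp(-k * DT / TAU) * DT / TAU for k in range(100)]
    rates = []
    for t in range(len(signal)):
        filtered = sum(kernel[k] * signal[t - k]
                       for k in range(min(t + 1, len(kernel))))
        rates.append(gain * max(filtered - threshold, 0.0))
    return rates

inp = [1.0] * 500          # step input
r = ln_rate(inp)
print(r[0], r[-1] > 50.0)  # rate starts at 0, then settles at a high value
```

In the paper the filter and nonlinearity are instead derived analytically for specific spiking models (LIF, EIF, Wang-Buzsáki) and checked against reverse correlation; the sketch only shows the cascade's structure.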
B-737 Linear Autoland Simulink Model
NASA Technical Reports Server (NTRS)
Belcastro, Celeste (Technical Monitor); Hogge, Edward F.
2004-01-01
The Linear Autoland Simulink model was created to be a modular test environment for testing of control system components in commercial aircraft. The input variables, physical laws, and referenced frames used are summarized. The state space theory underlying the model is surveyed and the location of the control actuators described. The equations used to realize the Dryden gust model to simulate winds and gusts are derived. A description of the pseudo-random number generation method used in the wind gust model is included. The longitudinal autopilot, lateral autopilot, automatic throttle autopilot, engine model and automatic trim devices are considered as subsystems. The experience in converting the Airlabs FORTRAN aircraft control system simulation to a graphical simulation tool (Matlab/Simulink) is described.
Log-Linear Models for Gene Association
Hu, Jianhua; Joshi, Adarsh; Johnson, Valen E.
2009-01-01
We describe a class of log-linear models for the detection of interactions in high-dimensional genomic data. This class of models leads to a Bayesian model selection algorithm that can be applied to data that have been reduced to contingency tables using ranks of observations within subjects, and discretization of these ranks within gene/network components. Many normalization issues associated with the analysis of genomic data are thereby avoided. A prior density based on Ewens’ sampling distribution is used to restrict the number of interacting components assigned high posterior probability, and the calculation of posterior model probabilities is expedited by approximations based on the likelihood ratio statistic. Simulation studies are used to evaluate the efficiency of the resulting algorithm for known interaction structures. Finally, the algorithm is validated in a microarray study for which it was possible to obtain biological confirmation of detected interactions. PMID:19655032
User's manual for interactive LINEAR: A FORTRAN program to derive linear aircraft models
NASA Technical Reports Server (NTRS)
Antoniewicz, Robert F.; Duke, Eugene L.; Patterson, Brian P.
1988-01-01
An interactive FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft aerodynamic models is documented in this report. The program LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied linear or nonlinear aerodynamic model. The nonlinear equations of motion used are six-degree-of-freedom equations with stationary atmosphere and flat, nonrotating earth assumptions. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model.
Running vacuum cosmological models: linear scalar perturbations
NASA Astrophysics Data System (ADS)
Perico, E. L. D.; Tamayo, D. A.
2017-08-01
In cosmology, phenomenologically motivated expressions for running vacuum are commonly parameterized as linear functions, typically denoted by Λ(H²) or Λ(R). Such models assume an equation of state for the vacuum given by P̄_Λ = −ρ̄_Λ, relating its background pressure P̄_Λ with its mean energy density ρ̄_Λ ≡ Λ/8πG. This equation of state suggests that the vacuum dynamics is due to an interaction with the matter content of the universe. Most of the approaches studying the observational impact of these models only consider the interaction between the vacuum and the transient dominant matter component of the universe. We extend such models by assuming that the running vacuum is the sum of independent contributions, namely ρ̄_Λ = Σ_i ρ̄_Λi. Each Λ_i vacuum component is associated and interacting with the i-th matter component at both the background and perturbation levels. We derive the evolution equations for the linear scalar vacuum and matter perturbations in those two scenarios, and identify the running vacuum imprints on the cosmic microwave background anisotropies as well as on the matter power spectrum. In the Λ(H²) scenario the vacuum is coupled with every matter component, whereas the Λ(R) description only leads to a coupling between vacuum and non-relativistic matter, producing different effects on the matter power spectrum.
Estimating population trends with a linear model
Bart, J.; Collins, B.; Morrison, R.I.G.
2003-01-01
We describe a simple and robust method for estimating trends in population size. The method may be used with Breeding Bird Survey data, aerial surveys, point counts, or any other program of repeated surveys at permanent locations. Surveys need not be made at each location during each survey period. The method differs from most existing methods in being design based, rather than model based. The only assumptions are that the nominal sampling plan is followed and that sample size is large enough for use of the t-distribution. Simulations based on two bird data sets from natural populations showed that the point estimate produced by the linear model was essentially unbiased even when counts varied substantially and 25% of the complete data set was missing. The estimating-equation approach, often used to analyze Breeding Bird Survey data, performed similarly on one data set but had substantial bias on the second data set, in which counts were highly variable. The advantages of the linear model are its simplicity, flexibility, and that it is self-weighting. A user-friendly computer program to carry out the calculations is available from the senior author.
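One simple design-based flavor of this idea, not necessarily the authors' exact estimator, is to fit a least-squares slope at each permanent survey location, skipping missing visits, and then average the slopes across sites with a t-style standard error. The counts below are invented:

```python
from statistics import mean, stdev

# Hedged sketch of a design-based trend estimate: one OLS slope per site,
# averaged across sites. Missing survey years are simply absent from a site.
def site_slope(years, counts):
    """Ordinary least-squares slope of counts on years for one site."""
    ybar, cbar = mean(years), mean(counts)
    num = sum((y - ybar) * (c - cbar) for y, c in zip(years, counts))
    den = sum((y - ybar) ** 2 for y in years)
    return num / den

sites = [
    ([0, 1, 2, 3], [10, 12, 13, 16]),   # surveyed every year
    ([0, 2, 3],    [8, 9, 11]),         # one year missing
    ([0, 1, 3],    [15, 15, 18]),
]
slopes = [site_slope(y, c) for y, c in sites]
trend = mean(slopes)                     # point estimate of yearly change
se = stdev(slopes) / len(slopes) ** 0.5  # standard error across sites
```

A confidence interval would then use the t-distribution with (number of sites − 1) degrees of freedom, matching the abstract's large-sample assumption.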
Wealth redistribution in conservative linear kinetic models
NASA Astrophysics Data System (ADS)
Toscani, G.
2009-10-01
We introduce and discuss kinetic models for wealth distribution which include both taxation and uniform redistribution. The evolution of the continuous density of wealth obeys a linear Boltzmann equation where the background density represents the action of an external subject on the taxation mechanism. The case in which the mean wealth is conserved is analyzed in full detail, by recovering the analytical form of the steady states. These states are probability distributions of convergent random series of a special structure, called perpetuities. Among others, the Gibbs distribution appears as a steady state in the case of total taxation and uniform redistribution.
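A toy, purely deterministic variant of the conservative taxation-redistribution step (the kinetic model in the paper also contains random interaction terms, which is what produces the nontrivial perpetuity steady states; here only the conservative tax/redistribute part is kept, so total wealth is preserved exactly):

```python
import random

# Each step: every agent pays a fraction eps of its wealth in tax, and the
# collected pot is redistributed uniformly. Total wealth is conserved exactly.
random.seed(1)
eps = 0.1
wealth = [random.uniform(0.0, 2.0) for _ in range(1000)]
total0 = sum(wealth)

for _ in range(200):
    tax_pot = sum(eps * w for w in wealth)
    share = tax_pot / len(wealth)
    wealth = [(1.0 - eps) * w + share for w in wealth]

total = sum(wealth)   # unchanged up to rounding
```

Without randomness this map simply contracts everyone toward the mean wealth; the interesting steady-state distributions in the paper arise once stochastic trades are added on top of this conservative mechanism.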
The Piecewise Linear Reactive Flow Rate Model
Vitello, P; Souers, P C
2005-07-22
Conclusions are: (1) Early calibrations of the Piece Wise Linear reactive flow model have shown that it gives very accurate agreement with data for a broad range of detonation wave strengths. (2) The ability to vary the rate at specific pressures has shown that corner turning involves competition between the strong wave, which travels roughly in a straight line, and the growth at low pressure of a new wave that turns corners sharply. (3) The inclusion of a low-pressure de-sensitization rate is essential to preserving the dead zone at large times, as is observed.
The Piece Wise Linear Reactive Flow Model
Vitello, P; Souers, P C
2005-08-18
For non-ideal explosives a wide range of behavior is observed in experiments dealing with differing sizes and geometries. A predictive detonation model must be able to reproduce many phenomena including such effects as: variations in the detonation velocity with the radial diameter of rate sticks; slowing of the detonation velocity around gentle corners; production of dead zones for abrupt corner turning; failure of small diameter rate sticks; and failure for rate sticks with sufficiently wide cracks. Most models have been developed to explain one effect at a time. Often, changes are made in the input parameters used to fit each succeeding case, with the implication that this is sufficient for the model to be valid over differing regimes. We feel that it is important to develop a model that is able to fit experiments with one set of parameters. To address this we are creating a new generation of models that are able to produce better fits to individual data sets than prior models and to simultaneously fit distinctly different regimes of experiments. Presented here are details of our new Piece Wise Linear reactive flow model applied to LX-17.
NASA Astrophysics Data System (ADS)
Bostan, Mohamad; Hadi Afshar, Mohamad; Khadem, Majed
2015-04-01
This article proposes a hybrid linear programming (LP-LP) methodology for the simultaneous optimal design and operation of groundwater utilization systems. The proposed model is an extension of an earlier LP-LP model proposed by the authors for the optimal operation of a set of existing wells. The proposed model can be used to optimally determine the number, configuration and pumping rates of the operational wells out of potential wells with fixed locations to minimize the total cost of utilizing a two-dimensional confined aquifer under steady-state flow conditions. The model is able to take into account the well installation, piping and pump installation costs in addition to the operational costs, including the cost of energy and maintenance. The solution to the problem is defined by well locations and their pumping rates, minimizing the total cost while satisfying a downstream demand, lower/upper bound on the pumping rates, and lower/upper bound on the water level drawdown at the wells. A discretized version of the differential equation governing the flow is first embedded into the model formulation as a set of additional constraints. The resulting mixed-integer highly constrained nonlinear optimization problem is then decomposed into two subproblems with different sets of decision variables, one with a piezometric head and the other with the operational well locations and the corresponding pumping rates. The binary variables representing the well locations are approximated by a continuous variable leading to two LP subproblems. Having started with a random value for all decision variables, the two subproblems are solved iteratively until convergence is achieved. The performance and ability of the proposed method are tested against a hypothetical problem from the literature and the results are presented and compared with those obtained using a mixed-integer nonlinear programming method. The results show the efficiency and effectiveness of the proposed method for
Model Selection with the Linear Mixed Model for Longitudinal Data
ERIC Educational Resources Information Center
Ryoo, Ji Hoon
2011-01-01
Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…
Ira Remsen, saccharin, and the linear model.
Warner, Deborah J
2008-03-01
While working in the chemistry laboratory at Johns Hopkins University, Constantin Fahlberg oxidized the 'ortho-sulfamide of benzoic acid' and, by chance, found the result to be incredibly sweet. Several years later, now working on his own, he termed this stuff saccharin, developed methods of making it in quantity, obtained patents on these methods, and went into production. As the industrial and scientific value of saccharin became apparent, Ira Remsen pointed out that the initial work had been done in his laboratory and at his suggestion. The ensuing argument, carried out in the courts of law and public opinion, illustrates the importance of the linear model to scientists who staked their identities on the model of disinterested research but who also craved credit for important practical results.
Modeling patterns in data using linear and related models
Engelhardt, M.E.
1996-06-01
This report considers the use of linear models for analyzing data related to reliability and safety issues of the type usually associated with nuclear power plants. The report discusses some of the general results of linear regression analysis, such as the model assumptions and properties of the estimators of the parameters. The results are motivated with examples of operational data. Results about the important case of a linear regression model with one covariate are covered in detail. This case includes analysis of time trends. The analysis is applied with two different sets of time trend data. Diagnostic procedures and tests for the adequacy of the model are discussed. Some related methods such as weighted regression and nonlinear models are also considered. A discussion of the general linear model is also included. Appendix A gives some basic SAS programs and outputs for some of the analyses discussed in the body of the report. Appendix B is a review of some of the matrix theoretic results which are useful in the development of linear models.
Numerical linearized MHD model of flapping oscillations
NASA Astrophysics Data System (ADS)
Korovinskiy, D. B.; Ivanov, I. B.; Semenov, V. S.; Erkaev, N. V.; Kiehas, S. A.
2016-06-01
Kink-like magnetotail flapping oscillations in a Harris-like current sheet with an earthward-growing normal magnetic field component Bz are studied by means of time-dependent 2D linearized MHD numerical simulations. The dispersion relation and two-dimensional eigenfunctions are obtained. The results are compared with analytical estimates of the double-gradient model, which are found to be reliable for configurations with small Bz, up to values of ~0.05 of the lobe magnetic field. Coupled with previous results, the present simulations confirm that the earthward/tailward growth direction of the Bz component acts as a switch between the stable/unstable regimes of the flapping mode, while the mode dispersion curve is the same in both cases. It is confirmed that flapping oscillations may be triggered by a simple Gaussian initial perturbation of the Vz velocity.
Linear programming models for cost reimbursement.
Diehr, G; Tamura, H
1989-01-01
Tamura, Lauer, and Sanborn (1985) reported a multiple regression approach to the problem of determining a cost reimbursement (rate-setting) formula for facilities providing long-term care (nursing homes). In this article we propose an alternative approach to this problem, using an absolute-error criterion instead of the least-squares criterion used in regression, with a variety of side constraints incorporated in the derivation of the formula. The mathematical tool for implementation of this approach is linear programming (LP). The article begins with a discussion of the desirable characteristics of a rate-setting formula. The development of a formula with these properties can be easily achieved, in terms of modeling as well as computation, using LP. Specifically, LP provides an efficient computational algorithm to minimize absolute error deviation, thus protecting rates from the effects of unusual observations in the data base. LP also offers modeling flexibility to impose a variety of policy controls. These features are not readily available if a least-squares criterion is used. Examples based on actual data are used to illustrate alternative LP models for rate setting. PMID:2759871
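The article builds the absolute-error fit with LP; as a tiny self-contained stand-in for the criterion itself, simple one-covariate least-absolute-deviations regression can be solved exactly by brute force, since some optimal absolute-error line passes through two of the data points. The data below are invented, with one gross outlier to show the robustness to unusual observations that the authors mention:

```python
from itertools import combinations

# Brute-force simple LAD regression: enumerate lines through pairs of points
# and keep the one minimizing total absolute error. Invented data; the last
# point is a deliberate outlier.
points = [(1, 2.1), (2, 4.0), (3, 6.2), (4, 7.9), (5, 30.0)]

def total_abs_error(a, b):
    """Sum of absolute residuals for the line y = a + b*x."""
    return sum(abs(y - (a + b * x)) for x, y in points)

best = None
for (x1, y1), (x2, y2) in combinations(points, 2):
    if x1 == x2:
        continue                      # vertical candidate, skip
    b = (y2 - y1) / (x2 - x1)
    a = y1 - b * x1
    err = total_abs_error(a, b)
    if best is None or err < best[0]:
        best = (err, a, b)

err, a, b = best                      # slope stays near 2 despite the outlier
```

An LP formulation (minimizing the sum of nonnegative residual variables) scales to many covariates and, as the article stresses, also accepts policy side constraints, which this brute-force toy cannot.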
Evaluating the double Poisson generalized linear model.
Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique
2013-10-01
The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion, and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant), which is not available in closed form. This study proposes a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM, with its normalizing constant approximated by the new method, can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although the COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similarly to the negative binomial (NB) GLM. Considering that the DP GLM can be estimated with inexpensive computation and that its coefficients are simpler to interpret, it offers a flexible and efficient alternative for researchers modeling count data.
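To see the normalizing-constant issue concretely, here is a sketch of Efron's double Poisson mass function with the constant computed by brute-force truncated summation (the paper's contribution is a more accurate closed-form approximation; this is just the direct numeric route, and the truncation point is an assumption):

```python
import math

def dp_unnormalized(y, mu, theta):
    """Unnormalized double Poisson mass at count y (Efron 1986 form)."""
    if y == 0:
        return math.sqrt(theta) * math.exp(-theta * mu)
    log_term = (0.5 * math.log(theta) - theta * mu
                - y + y * math.log(y) - math.lgamma(y + 1)
                + theta * y * (1.0 + math.log(mu) - math.log(y)))
    return math.exp(log_term)

def normalizing_constant(mu, theta, y_max=1000):
    """c(mu, theta) such that c * f*(y) sums to one (truncated at y_max)."""
    return 1.0 / sum(dp_unnormalized(y, mu, theta) for y in range(y_max + 1))

# At theta = 1 the double Poisson reduces to the ordinary Poisson, so the
# unnormalized mass already sums to ~1 and the constant is ~1.
c_poisson = normalizing_constant(5.0, 1.0)
c_under = normalizing_constant(5.0, 2.0)   # theta > 1: under-dispersion
```

Inside a GLM likelihood this summation would be far too slow to repeat at every parameter value, which is why a fast, accurate approximation of c(μ, θ) matters.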
Piecewise Linear-Linear Latent Growth Mixture Models with Unknown Knots
ERIC Educational Resources Information Center
Kohli, Nidhi; Harring, Jeffrey R.; Hancock, Gregory R.
2013-01-01
Latent growth curve models with piecewise functions are flexible and useful analytic models for investigating individual behaviors that exhibit distinct phases of development in observed variables. As an extension of this framework, this study considers a piecewise linear-linear latent growth mixture model (LGMM) for describing segmented change of…
From linear to generalized linear mixed models: A case study in repeated measures
USDA-ARS?s Scientific Manuscript database
Compared to traditional linear mixed models, generalized linear mixed models (GLMMs) can offer better correspondence between response variables and explanatory models, yielding more efficient estimates and tests in the analysis of data from designed experiments. Using proportion data from a designed...
Montoye, Alexander H K; Begum, Munni; Henning, Zachary; Pfeiffer, Karin A
2017-02-01
This study had three purposes, all related to evaluating energy expenditure (EE) prediction accuracy from body-worn accelerometers: (1) compare linear regression to linear mixed models, (2) compare linear models to artificial neural network models, and (3) compare accuracy of accelerometers placed on the hip, thigh, and wrists. Forty individuals performed 13 activities in a 90 min semi-structured, laboratory-based protocol. Participants wore accelerometers on the right hip, right thigh, and both wrists and a portable metabolic analyzer (EE criterion). Four EE prediction models were developed for each accelerometer: linear regression, linear mixed, and two ANN models. EE prediction accuracy was assessed using correlations, root mean square error (RMSE), and bias and was compared across models and accelerometers using repeated-measures analysis of variance. For all accelerometer placements, there were no significant differences for correlations or RMSE between linear regression and linear mixed models (correlations: r = 0.71-0.88, RMSE: 1.11-1.61 METs; p > 0.05). For the thigh-worn accelerometer, there were no differences in correlations or RMSE between linear and ANN models (ANN-correlations: r = 0.89, RMSE: 1.07-1.08 METs. Linear models-correlations: r = 0.88, RMSE: 1.10-1.11 METs; p > 0.05). Conversely, one ANN had higher correlations and lower RMSE than both linear models for the hip (ANN-correlation: r = 0.88, RMSE: 1.12 METs. Linear models-correlations: r = 0.86, RMSE: 1.18-1.19 METs; p < 0.05), and both ANNs had higher correlations and lower RMSE than both linear models for the wrist-worn accelerometers (ANN-correlations: r = 0.82-0.84, RMSE: 1.26-1.32 METs. Linear models-correlations: r = 0.71-0.73, RMSE: 1.55-1.61 METs; p < 0.01). For studies using wrist-worn accelerometers, machine learning models offer a significant improvement in EE prediction
Guo, P; Huang, G H
2009-01-01
In this study, a solid waste decision-support system was developed for the long-term planning of waste management in the City of Regina, Canada. Interactions among various system components, objectives, and constraints were analyzed. Issues concerning planning for cost-effective diversion and prolongation of the landfill were addressed. Decisions on system-capacity expansion and waste allocation within a multi-facility, multi-option, and multi-period context were obtained. The obtained results would provide useful information and decision support for the City's solid waste management and planning. In the application, four scenarios were considered. Through these scenario analyses under different waste-management policies, useful decision support for the City's solid waste managers and decision makers was generated. Analyses of the effects of varied policies (for allowable waste flows to different facilities) under 35 and 50% diversion goals were also undertaken. Tradeoffs between system cost and constraint-violation risk were analyzed. Generally, a policy with lower allowable waste-flow levels corresponded to a lower system cost under advantageous conditions but, at the same time, a higher penalty when such allowances were violated. A policy with higher allowable flow levels corresponded to a higher cost under disadvantageous conditions. The modeling results were useful for (i) scheduling adequate time and capacity for long-term planning of facility development and/or expansion in the city's waste management system, (ii) adjusting the existing waste-flow allocation patterns to satisfy the city's diversion goal, and (iii) generating desired policies for managing the city's waste generation, collection and disposal.
Linearized Functional Minimization for Inverse Modeling
Wohlberg, Brendt; Tartakovsky, Daniel M.; Dentz, Marco
2012-06-21
Heterogeneous aquifers typically consist of multiple lithofacies, whose spatial arrangement significantly affects flow and transport. The estimation of these lithofacies is complicated by the scarcity of data and by the lack of a clear correlation between identifiable geologic indicators and attributes. We introduce a new inverse-modeling approach to estimate both the spatial extent of hydrofacies and their properties from sparse measurements of hydraulic conductivity and hydraulic head. Our approach is to minimize a functional defined on the vectors of values of hydraulic conductivity and hydraulic head fields defined on regular grids at a user-determined resolution. This functional is constructed to (i) enforce the relationship between conductivity and heads provided by the groundwater flow equation, (ii) penalize deviations of the reconstructed fields from measurements where they are available, and (iii) penalize reconstructed fields that are not piece-wise smooth. We develop an iterative solver for this functional that exploits a local linearization of the mapping from conductivity to head. This approach provides a computationally efficient algorithm that rapidly converges to a solution. A series of numerical experiments demonstrates the robustness of our approach.
The effect of non-linear human visual system components on linear model observers
NASA Astrophysics Data System (ADS)
Zhang, Yani; Pham, Binh T.; Eckstein, Miguel P.
2004-05-01
Linear model observers have been used successfully to predict human performance in clinically relevant visual tasks for a variety of backgrounds. On the other hand, there has been another family of models used to predict human visual detection of signals superimposed on one of two identical backgrounds (masks). These masking models usually include a number of non-linear components in the channels that reflect properties of the firing of cells in the primary visual cortex (V1). The relationship between these two traditions of models has not been extensively investigated in the context of detection in noise. In this paper, we evaluated the effect of including some of these non-linear components in a linear channelized Hotelling observer (CHO), and the associated practical implications for medical image quality evaluation. In particular, we evaluated whether the rank-order evaluation of two compression algorithms (JPEG vs. JPEG 2000) is changed by inclusion of the non-linear components. The results show that: (a) the simpler linear CHO model observer outperforms the CHO model with the non-linear components investigated; and (b) the rank order of model observer performance for the compression algorithms did not vary when the non-linear components were included. For the present task, the results suggest that adding the physiologically based channel non-linearities to a channelized Hotelling observer might add complexity to the model observer without great impact on medical image quality evaluation.
Learning generative models of natural images.
Wu, Jiann-Ming; Lin, Zheng-Han
2002-04-01
This work proposes an unsupervised learning process for the analysis of natural images. The derivation is based on a generative model, a stochastic coin-flip process directly operating on many disjoint multivariate Gaussian distributions. Following the maximal likelihood principle and using the Potts encoding, the goodness-of-fit of the generative model to a large number of patches randomly sampled from natural images is quantitatively expressed by an objective function subject to a set of constraints. By further combining the objective function with the minimal wiring criterion, we arrive at a mixed integer linear program. A hybrid of mean field annealing and the gradient descent method is applied to the mathematical framework and produces three sets of interactive dynamics for the learning process. Numerical simulations show that the learning process is effective for the extraction of orientation, localization and bandpass features, and that the generative model can produce a sparse code for natural images.
Integrated model for pricing, delivery time setting, and scheduling in make-to-order environments
NASA Astrophysics Data System (ADS)
Garmdare, Hamid Sattari; Lotfi, M. M.; Honarvar, Mahboobeh
2017-05-01
Usually, in make-to-order environments, which work only in response to customers' orders, manufacturers seeking to maximize profit should offer the best price and delivery time for an order, considering the existing capacity and the customer's sensitivity to both factors. In this paper, an integrated approach to pricing, delivery time setting and scheduling of newly arriving orders is proposed, based on the existing capacity and the orders already accepted in the system. In the problem, the acquired market demand depends on the price and delivery time of both the manufacturer and its competitors. A mixed-integer non-linear programming model is presented for the problem. After conversion to a pure non-linear model, it is validated through a case study. The efficiency of the proposed model is confirmed by comparing it to both the literature and current practice. Finally, a sensitivity analysis for the key parameters is carried out.
Reduction techniques and model analysis for linear models
Amhemad, A.; Lucas, C.A.
1994-12-31
Techniques for reducing the complexity of linear programs are well known. By suitable analysis, many model redundancies can be removed and inconsistencies detected before an attempt is made at optimising a linear programming model. In carrying out such analysis, a structured approach is presented whereby an efficient amount of bound analysis is carried out under a row-ranking scheme. When new lower bounds are detected for variables, these can be included in a starting basis by making such columns free variables. Quite often, introducing new upper bounds results in the model being more difficult to solve. We include our investigations into a strategy for deciding which new upper bounds should be passed to the optimiser. Finally, most model reduction is carried out on models created by a modeling language. To aid the teaching of model analysis, we show how such a procedure can be embedded in a modeling language and how the analysis can be presented to the modeller. We also discuss how the solution to a preprocessed problem is post-processed to present the solution in terms of the original problem.
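A sketch of one classic reduction step of this kind: deriving implied variable bounds from a single "less-than-or-equal" row, given the current bounds on the other variables (toy data, not tied to any particular optimiser or to the authors' row-ranking scheme):

```python
# Presolve-style bound tightening on one row sum(a[j] * x[j]) <= b.
def tighten(a, b, lower, upper):
    """Return possibly tightened (lower, upper) bounds implied by the row."""
    lower, upper = list(lower), list(upper)
    for k, ak in enumerate(a):
        if ak == 0:
            continue
        # Smallest possible contribution of the other terms, given bounds:
        rest = sum(a[j] * (lower[j] if a[j] > 0 else upper[j])
                   for j in range(len(a)) if j != k)
        if ak > 0:
            upper[k] = min(upper[k], (b - rest) / ak)   # implied upper bound
        else:
            lower[k] = max(lower[k], (b - rest) / ak)   # implied lower bound
    return lower, upper

# Row: 2*x0 + 3*x1 <= 12 with 0 <= x0 <= 10, 0 <= x1 <= 10.
lo, up = tighten([2.0, 3.0], 12.0, [0.0, 0.0], [10.0, 10.0])
```

Repeating such passes over all rows until no bound moves is the basic loop of LP presolve; as the abstract notes, detected lower bounds can seed a starting basis, while new upper bounds are worth screening before being passed on.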
Approximately Integrable Linear Statistical Models in Non-Parametric Estimation
1990-08-01
Approximately Integrable Linear Statistical Models in Non-Parametric Estimation, by B. Ya. Levit, University of Maryland. Summary: The notion of approximately integrable linear statistical models ... models related to the study of the "next" order optimality in non-parametric estimation. It appears consistent to keep the exposition at present at the
Equivalent linear damping characterization in linear and nonlinear force-stiffness muscle models.
Ovesy, Marzieh; Nazari, Mohammad Ali; Mahdavian, Mohammad
2016-02-01
In the current research, the muscle equivalent linear damping coefficient, which is introduced as the force-velocity relation in a muscle model, and the corresponding time constant are investigated. To reach this goal, a 1D skeletal muscle model was used. Two characterizations of this model, using a linear force-stiffness relationship (Hill-type model) and a nonlinear one, have been implemented. The OpenSim platform was used for verification of the model. Isometric activation was used for the simulation. The equivalent linear damping and the time constant of each model were extracted from the simulation results. The results provide a better insight into the characteristics of each model. It is found that the nonlinear models had a response rate closer to reality than the Hill-type models.
NASA Astrophysics Data System (ADS)
Azmi, N. I. L. Mohd; Ahmad, R.; Zainuddin, Z. M.
2017-09-01
This research explores the Mixed-Model Two-Sided Assembly Line (MMTSAL). There are two interrelated problems in MMTSAL, namely line balancing and model sequencing. In previous studies, many researchers considered these problems separately, and only a few studied them simultaneously for a one-sided line. In this study, however, the two problems are solved simultaneously to obtain a more efficient solution. A Mixed Integer Linear Programming (MILP) model with the objectives of minimizing total utility work and idle time is generated by considering a variable launching interval and an assignment restriction constraint. The problem is analysed using small-size test cases to validate the integrated model. Throughout this paper, numerical experiments were conducted using the General Algebraic Modelling System (GAMS) with the solver CPLEX. Experimental results indicate that integrating the model sequencing and line balancing problems helps to minimise the proposed objective functions.
Linear control theory for gene network modeling.
Shin, Yong-Jun; Bleris, Leonidas
2010-09-16
Systems biology is an interdisciplinary field that aims at understanding complex interactions in cells. Here we demonstrate that linear control theory can provide valuable insight and practical tools for the characterization of complex biological networks. We provide the foundation for such analyses through the study of several case studies including cascade and parallel forms, feedback and feedforward loops. We reproduce experimental results and provide rational analysis of the observed behavior. We demonstrate that methods such as the transfer function (frequency domain) and linear state-space (time domain) can be used to predict reliably the properties and transient behavior of complex network topologies and point to specific design strategies for synthetic networks.
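As an illustrative aside (not taken from the paper), the linear state-space machinery the abstract refers to can be sketched for a hypothetical two-gene cascade, where gene 1 activates gene 2 and both decay linearly; all rate constants below are made up:

```python
import numpy as np

# Hypothetical two-gene cascade as a linear state-space model
#   dx/dt = A x + B u,   y = C x
# with made-up production rates k1, k2 and decay rates d1, d2.
k1, k2, d1, d2 = 1.0, 0.5, 0.2, 0.1
A = np.array([[-d1, 0.0],
              [ k2, -d2]])     # gene 2 is produced in proportion to gene 1
B = np.array([[k1], [0.0]])    # the input u drives gene 1 only
C = np.array([[0.0, 1.0]])     # we observe gene 2

def simulate(u, dt=0.01, steps=5000):
    """Forward-Euler simulation under a constant input u."""
    x = np.zeros((2, 1))
    for _ in range(steps):
        x = x + dt * (A @ x + B * u)
    return (C @ x).item()

y_ss = simulate(u=1.0)   # approaches the steady state k1*k2/(d1*d2) = 25
```

Setting dx/dt = 0 gives the closed-form steady state x1 = k1·u/d1 and y = k2·x1/d2, which the time-domain simulation approaches; a transfer-function analysis of the same A, B, C matrices would give the frequency-domain view the abstract mentions.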
Recent Updates to the GEOS-5 Linear Model
NASA Technical Reports Server (NTRS)
Holdaway, Dan; Kim, Jong G.; Errico, Ron; Gelaro, Ronald; Mahajan, Rahul
2014-01-01
The Global Modeling and Assimilation Office (GMAO) is close to having a working 4DVAR system and has developed a linearized version of GEOS-5. This talk outlines a series of improvements made to the linearized dynamics, physics and trajectory. Of particular interest is the development of linearized cloud microphysics, which provides the framework for 'all-sky' data assimilation.
Tried and True: Springing into Linear Models
ERIC Educational Resources Information Center
Darling, Gerald
2012-01-01
In eighth grade, students usually learn about forces in science class and linear relationships in math class, crucial topics that form the foundation for further study in science and engineering. An activity that links these two fundamental concepts involves measuring the distance a spring stretches as a function of how much weight is suspended…
Three-Dimensional Modeling in Linear Regression.
ERIC Educational Resources Information Center
Herman, James D.
Linear regression examines the relationship between one or more independent (predictor) variables and a dependent variable. By using a particular formula, regression determines the weights needed to minimize the error term for a given set of predictors. With one predictor variable, the relationship between the predictor and the dependent variable…
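As a concrete aside (the data below are made up), the "particular formula" for the regression weights is the ordinary least-squares solution of the normal equations, w = (XᵀX)⁻¹Xᵀy, which minimizes the sum of squared errors:

```python
import numpy as np

# Ordinary least squares: one predictor plus an intercept column.
# The data are invented, following roughly y = 1.1 + 1.96*x.
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])            # first column of ones -> intercept
y = np.array([3.1, 4.9, 7.1, 8.9])

w = np.linalg.solve(X.T @ X, X.T @ y)  # normal equations (X'X) w = X'y
intercept, slope = w
```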
Valuation of financial models with non-linear state spaces
NASA Astrophysics Data System (ADS)
Webber, Nick
2001-02-01
A common assumption in valuation models for derivative securities is that the underlying state variables take values in a linear state space. We discuss numerical implementation issues in an interest rate model with a simple non-linear state space, formulating and comparing Monte Carlo, finite difference and lattice numerical solution methods. We conclude that, at least in low dimensional spaces, non-linear interest rate models may be viable.
Analysis of Modeling Assumptions used in Production Cost Models for Renewable Integration Studies
Stoll, Brady; Brinkman, Gregory; Townsend, Aaron; Bloom, Aaron
2016-01-01
Renewable energy integration studies have been published for many different regions exploring the question of how higher penetration of renewable energy will impact the electric grid. These studies each make assumptions about the systems they are analyzing; however, the effect of many of these assumptions has not yet been examined and published. In this paper we analyze the impact of modeling assumptions in renewable integration studies, including the optimization method used (linear or mixed-integer programming) and the temporal resolution of the dispatch stage (hourly or sub-hourly). We analyze each of these assumptions on a large and a small system and determine the impact of each assumption on key metrics including the total production cost, curtailment of renewables, CO2 emissions, and generator starts and ramps. Additionally, we identified the impact on these metrics if a four-hour-ahead commitment step is included before the dispatch step, and the impact of retiring generators to reduce the degree to which the system is overbuilt. We find that the largest effect of these assumptions is at the unit level on starts and ramps, particularly for the temporal resolution, with a smaller impact at the aggregate level on system costs and emissions. For each fossil fuel generator type we measured the average capacity started, average run-time per start, and average number of ramps. Linear programming results saw up to a 20% difference in number of starts and average run time of traditional generators, and up to a 4% difference in the number of ramps, when compared to mixed-integer programming. Utilizing hourly dispatch instead of sub-hourly dispatch saw no difference in coal or gas CC units for either start metric, while gas CT units had a 5% increase in the number of starts and a 2% increase in the average on-time per start. The number of ramps decreased up to 44%. The smallest effect seen was on the CO2 emissions and total production cost, with a 0.8% and 0
Linear and Nonlinear Models of Agenda Setting in Television.
ERIC Educational Resources Information Center
Brosius, Hans-Bernd; Kepplinger, Hans Mathias
1992-01-01
A content analysis of major German television news shows and 53 weekly surveys on 16 issues were used to compare linear and nonlinear models as ways to describe the relationship between media coverage and the public agenda. Results indicate that nonlinear models are in some cases superior to linear models in terms of explained variance. (34…
Determining Predictor Importance in Hierarchical Linear Models Using Dominance Analysis
ERIC Educational Resources Information Center
Luo, Wen; Azen, Razia
2013-01-01
Dominance analysis (DA) is a method used to evaluate the relative importance of predictors that was originally proposed for linear regression models. This article proposes an extension of DA that allows researchers to determine the relative importance of predictors in hierarchical linear models (HLM). Commonly used measures of model adequacy in…
Vuori, Kaarina; Strandén, Ismo; Sevón-Aimonen, Marja-Liisa; Mäntysaari, Esa A
2006-01-01
A method based on Taylor series expansion for estimation of location parameters and variance components of non-linear mixed effects models was considered. An attractive property of the method is that it permits an easily implemented algorithm. Estimation of non-linear mixed effects models can be done by common methods for linear mixed effects models, and thus existing programs can be used after small modifications. The applicability of this algorithm in animal breeding was studied by simulation using a Gompertz function growth model in pigs. Two growth data sets were analyzed: a full set containing observations from the entire growing period, and a truncated time trajectory set containing animals slaughtered prematurely, which is common in pig breeding. The results from the 50 simulation replicates with the full data set indicate that the linearization approach was capable of estimating the original parameters satisfactorily. However, estimation of the parameters related to adult weight becomes unstable in the case of a truncated data set.
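As an illustrative aside, the linearization idea can be sketched for a Gompertz curve W(t) = A·exp(−b·exp(−k·t)): the model is expanded to first order in its parameters around a current guess, so that linear mixed-model machinery can be applied. All numerical values below are hypothetical, not from the study:

```python
import math

def gompertz(t, A, b, k):
    """Gompertz growth curve W(t) = A * exp(-b * exp(-k*t))."""
    return A * math.exp(-b * math.exp(-k * t))

def gompertz_taylor(t, A0, b0, k0, dA, db, dk):
    """First-order Taylor expansion of the curve around (A0, b0, k0)."""
    e = math.exp(-k0 * t)
    w0 = gompertz(t, A0, b0, k0)
    dW_dA = w0 / A0             # partial derivative w.r.t. A
    dW_db = -w0 * e             # partial derivative w.r.t. b
    dW_dk = w0 * b0 * t * e     # partial derivative w.r.t. k
    return w0 + dW_dA * dA + dW_db * db + dW_dk * dk

# A small step away from the guess (A0, b0, k0) = (120, 4.0, 0.02)
# is reproduced closely by the linearization at t = 100.
exact = gompertz(100.0, 121.0, 4.05, 0.021)
approx = gompertz_taylor(100.0, 120.0, 4.0, 0.02, 1.0, 0.05, 0.001)
```

Replacing the non-linear curve by this expansion turns each iteration of the fit into a linear mixed-model problem in the parameter increments, which is why existing linear mixed-model programs can be reused.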
NASA Astrophysics Data System (ADS)
Orsolini, Y.; Leovy, C. B.
1993-12-01
A quasi-geostrophic midlatitude beta-plane linear model is here used to study whether the decay with height and meridional circulations of near-steady jets in the tropospheric circulation of Jupiter arise as a means of stabilizing a deep zonal flow that extends into the upper troposphere. The model results obtained are analogous to the stabilizing effect of meridional shear on baroclinic instabilities. In the second part of this work, a quasi-linear model is used to investigate how an initially barotropically unstable flow develops a quasi-steady shear zone in the lower scale heights of the model domain, due to the action of the eddy fluxes.
Modeling of linear time-varying systems by linear time-invariant systems of lower order.
NASA Technical Reports Server (NTRS)
Nosrati, H.; Meadows, H. E.
1973-01-01
A method for modeling linear time-varying differential systems by linear time-invariant systems of lower order is proposed, extending the results obtained by Bierman (1972) by resolving such questions as model stability, the various possible models of differing dimensions, and the uniqueness or nonuniqueness of the model coefficient matrix. In addition to the advantages cited by Heffes and Sarachik (1969) and Bierman, modeling a subsystem of a larger system often makes it possible to analyze the overall system behavior more easily, with resulting savings in computation time.
Estimation of the linear mixed integrated Ornstein-Uhlenbeck model.
Hughes, Rachael A; Kenward, Michael G; Sterne, Jonathan A C; Tilling, Kate
2017-05-24
The linear mixed model with an added integrated Ornstein-Uhlenbeck (IOU) process (linear mixed IOU model) allows for serial correlation and estimation of the degree of derivative tracking. It is rarely used, partly due to the lack of available software. We implemented the linear mixed IOU model in Stata and using simulations we assessed the feasibility of fitting the model by restricted maximum likelihood when applied to balanced and unbalanced data. We compared different (1) optimization algorithms, (2) parameterizations of the IOU process, (3) data structures and (4) random-effects structures. Fitting the model was practical and feasible when applied to large and moderately sized balanced datasets (20,000 and 500 observations), and large unbalanced datasets with (non-informative) dropout and intermittent missingness. Analysis of a real dataset showed that the linear mixed IOU model was a better fit to the data than the standard linear mixed model (i.e. independent within-subject errors with constant variance).
An analytically linearized helicopter model with improved modeling accuracy
NASA Technical Reports Server (NTRS)
Jensen, Patrick T.; Curtiss, H. C., Jr.; Mckillip, Robert M., Jr.
1991-01-01
An analytically linearized model for helicopter flight response including rotor blade dynamics and dynamic inflow, that was recently developed, was studied with the objective of increasing the understanding, the ease of use, and the accuracy of the model. The mathematical model is described along with a description of the UH-60A Black Hawk helicopter and flight test used to validate the model. To aid in utilization of the model for sensitivity analysis, a new, faster, and more efficient implementation of the model was developed. It is shown that several errors in the mathematical modeling of the system caused a reduction in accuracy. These errors in rotor force resolution, trim force and moment calculation, and rotor inertia terms were corrected along with improvements to the programming style and documentation. Use of a trim input file to drive the model is examined. Trim file errors in blade twist, control input phase angle, coning and lag angles, main and tail rotor pitch, and uniform induced velocity, were corrected. Finally, through direct comparison of the original and corrected model responses to flight test data, the effect of the corrections on overall model output is shown.
Development of a Linear Stirling Model with Varying Heat Inputs
NASA Technical Reports Server (NTRS)
Regan, Timothy F.; Lewandowski, Edward J.
2007-01-01
The linear model of the Stirling system developed by NASA Glenn Research Center (GRC) has been extended to include a user-specified heat input. Previously developed linear models were limited to the Stirling convertor and electrical load. They represented the thermodynamic cycle with pressure factors that remained constant. The numerical values of the pressure factors were generated by linearizing GRC's non-linear System Dynamic Model (SDM) of the convertor at a chosen operating point. The pressure factors were fixed for that operating point; thus, the model lost accuracy if a transition to a different operating point was simulated. Although the previous linear model was used in developing controllers that manipulated current, voltage, and piston position, it could not be used in the development of control algorithms that regulated hot-end temperature. This basic model was extended to include the thermal dynamics associated with a hot-end temperature that varies over time in response to external changes as well as to changes in the Stirling cycle. The linear model described herein includes not only the dynamics of the piston, displacer, gas, and electrical circuit, but also the transient effects of the heater head thermal inertia. The linear version algebraically couples two separate linear dynamic models, one of the Stirling convertor and one of the thermal system, through the pressure factors. The thermal system model includes the heat flow of the heat transfer fluid, insulation loss, and temperature drops from the heat source to the Stirling convertor expansion space. The linear model was compared to a nonlinear model, and performance was very similar. The resulting linear model can be implemented in a variety of computing environments and is suitable for analysis with classical and state-space controls analysis techniques.
Descriptive Linear modeling of steady-state visual evoked response
NASA Technical Reports Server (NTRS)
Levison, W. H.; Junker, A. M.; Kenner, K.
1986-01-01
A study is being conducted to explore use of the steady-state visual evoked electrocortical response as an indicator of cognitive task loading. Application of linear descriptive modeling to steady-state Visual Evoked Response (VER) data is summarized. Two aspects of linear modeling are reviewed: (1) unwrapping the phase-shift portion of the frequency response, and (2) parsimonious characterization of task-loading effects in terms of changes in model parameters. Model-based phase unwrapping appears to be most reliable in applications, such as manual control, where theoretical models are available. Linear descriptive modeling of the VER has not yet been shown to provide consistent and readily interpretable results.
Graphical Tools for Linear Structural Equation Modeling
2014-06-01
equally valuable in their bias-reduction potential (Pearl and Paz, 2010). This problem pertains to prediction tasks as well. A researcher wishing to predict...in the regression Y = αX + β1Z1 + ... + βnZn + ε, or equivalently, when does βYX.Z = βYX.W? Here we adapt Theorem 3 in (Pearl and Paz, 2010) for linear SEMs...Identification of Causal Mediation," (R-389). Pearl, J., and Paz, A. (2010). Confounding equivalence in causal inference. In Proceedings of the Twenty
Neural network models for Linear Programming
Culioli, J.C.; Protopopescu, V.; Britton, C.; Ericson, N.
1989-01-01
The purpose of this paper is to present a neural network that solves the general Linear Programming (LP) problem. In the first part, we recall Hopfield and Tank's circuit for LP and show that although it converges to stable states, it does not, in general, yield admissible solutions. This is due to the penalization treatment of the constraints. In the second part, we propose an approach based on Lagrange multipliers that converges to primal and dual admissible solutions. We also show that the duality gap (measuring the optimality) can be rendered, in principle, as small as needed. 11 refs.
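The Lagrange-multiplier idea can be sketched numerically (this is a generic augmented-Lagrangian scheme, not the paper's circuit; the toy LP below is made up). A plain primal-descent/dual-ascent iteration can oscillate on LPs because the Lagrangian is linear in the primal variables, which is why a quadratic penalty on the constraint residual is added here:

```python
import numpy as np

# Toy LP in equality form:  min c^T z  s.t.  A z = b, z >= 0.
# This encodes  max x1 + 2*x2  s.t.  x1 + x2 <= 3  with a slack variable s.
c = np.array([-1.0, -2.0, 0.0])      # minimize -x1 - 2*x2
A = np.array([[1.0, 1.0, 1.0]])      # x1 + x2 + s = 3
b = np.array([3.0])

z = np.zeros(3)                      # primal variables (x1, x2, s)
lam = np.zeros(1)                    # Lagrange multiplier for A z = b
rho, eta = 1.0, 0.1                  # penalty weight and step size
for _ in range(200):                 # outer loop: dual (multiplier) updates
    for _ in range(200):             # inner loop: projected gradient on primal
        grad = c + A.T @ (lam + rho * (A @ z - b))
        z = np.maximum(0.0, z - eta * grad)   # projection keeps z >= 0
    lam = lam + rho * (A @ z - b)    # dual ascent step

# z converges to (0, 3, 0) and lam to 2: primal and dual optima of the toy LP,
# with zero duality gap, mirroring the admissibility claim in the abstract.
```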
Applications of the Linear Logistic Test Model in Psychometric Research
ERIC Educational Resources Information Center
Kubinger, Klaus D.
2009-01-01
The linear logistic test model (LLTM) breaks down the item parameter of the Rasch model as a linear combination of some hypothesized elementary parameters. Although the original purpose of applying the LLTM was primarily to generate test items with specified item difficulty, there are still many other potential applications, which may be of use…
A Model for Quadratic Outliers in Linear Regression.
ERIC Educational Resources Information Center
Elashoff, Janet Dixon; Elashoff, Robert M.
This paper introduces a model for describing outliers (observations which are extreme in some sense or violate the apparent pattern of other observations) in linear regression which can be viewed as a mixture of a quadratic and a linear regression. The maximum likelihood estimators of the parameters in the model are derived and their asymptotic…
Modeling Non-Linear Material Properties in Composite Materials
2016-06-28
Technical Report ARWSB-TR-16013; Michael F. Macri, Andrew G... systems are increasingly incorporating composite materials into their design. Many of these systems subject the composites to environmental conditions
Modeling local item dependence with the hierarchical generalized linear model.
Jiao, Hong; Wang, Shudong; Kamata, Akihito
2005-01-01
Local item dependence (LID) can emerge when the test items are nested within common stimuli or item groups. This study proposes a three-level hierarchical generalized linear model (HGLM) to model LID when LID is due to such contextual effects. The proposed three-level HGLM was examined by analyzing simulated data sets and was compared with the Rasch-equivalent two-level HGLM that ignores such a nested structure of test items. The results demonstrated that the proposed model could capture LID and estimate its magnitude. Also, the two-level HGLM resulted in larger mean absolute differences between the true and the estimated item difficulties than those from the proposed three-level HGLM. Furthermore, it was demonstrated that the proposed three-level HGLM estimated the ability distribution variance unaffected by the LID magnitude, while the two-level HGLM with no LID consideration increasingly underestimated the ability variance as the LID magnitude increased.
Employment of CB models for non-linear dynamic analysis
NASA Technical Reports Server (NTRS)
Klein, M. R. M.; Deloo, P.; Fournier-Sicre, A.
1990-01-01
The non-linear dynamic analysis of large structures is always very time-, effort- and CPU-consuming. Whenever possible, reducing the size of the mathematical model involved is of main importance to speed up the computational procedures. Such reduction can be performed for the parts of the structure which behave linearly. Most of the time, the classical Guyan reduction process is used. For non-linear dynamic processes where the non-linearity is present at interfaces between different structures, Craig-Bampton models can provide very rich information and allow easy selection of the relevant modes with respect to the phenomenon driving the non-linearity. The paper presents the employment of Craig-Bampton models combined with Newmark direct integration for solving non-linear friction problems appearing at the interface between the Hubble Space Telescope and its solar arrays during in-orbit maneuvers. Theory, implementation in the FEM code ASKA, and practical results are shown.
Latent log-linear models for handwritten digit classification.
Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann
2012-06-01
We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.
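For orientation, a plain log-linear model (without the paper's latent-variable extensions) assigns class posteriors p(c|x) ∝ exp(w_c·x); the weight vectors below are made up:

```python
import numpy as np

def posteriors(W, x):
    """Log-linear class posteriors: softmax over linear scores W @ x."""
    scores = W @ x
    scores = scores - scores.max()   # subtract max for numerical stability
    p = np.exp(scores)
    return p / p.sum()

W = np.array([[1.0, 0.0],            # made-up weight vectors, one per class
              [0.0, 1.0]])
p = posteriors(W, np.array([2.0, 0.0]))
# the posteriors sum to one, and class 0 wins for this input
```

Training the plain model maximizes a concave log-likelihood; the mixture and deformation-aware variants described in the abstract lose this convexity, which is why they are trained by alternating optimization.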
Energy-efficient container handling using hybrid model predictive control
NASA Astrophysics Data System (ADS)
Xin, Jianbin; Negenborn, Rudy R.; Lodewijks, Gabriel
2015-11-01
The performance of container terminals needs to be improved to accommodate the growth in container volumes while maintaining sustainability. This paper provides a methodology for determining the trajectories of three key interacting machines for carrying out the so-called bay handling task, which involves transporting containers between a vessel and the stacking area in an automated container terminal. The behaviours of the interacting machines are modelled as a collection of interconnected hybrid systems. Hybrid model predictive control (MPC) is proposed to achieve optimal performance, balancing handling capacity and energy consumption. The underlying control problem is hereby formulated as a mixed-integer linear programming problem. Simulation studies illustrate that a higher penalty on energy consumption indeed leads to improved sustainability through lower energy use. Moreover, simulations illustrate how the proposed energy-efficient hybrid MPC controller performs under different types of uncertainties.
NASA Astrophysics Data System (ADS)
Chiadamrong, N.; Piyathanavong, V.
2017-04-01
Models that aim to optimize the design of supply chain networks have gained increasing interest in the supply chain literature. Mixed-integer linear programming and discrete-event simulation are widely used for such optimization problems. We present a hybrid approach to support decisions for supply chain network design using a combination of analytical and discrete-event simulation models. The proposed approach is based on iterative procedures that continue until the difference between subsequent solutions satisfies a pre-determined termination criterion. The effectiveness of the proposed approach is illustrated by an example, which yields near-optimal results with much shorter solving times than the conventional simulation-based optimization model. The efficacy of this hybrid approach is promising, and it can be applied as a powerful tool in designing a real supply chain network. It also provides the possibility to model and solve more realistic problems that incorporate dynamism and uncertainty.
Non-linear transformer modeling and simulation
Archer, W.E.; Deveney, M.F.; Nagel, R.L.
1994-08-01
Transformer models for simulation with PSpice and Analogy's Saber are being developed using experimental B-H loop and network analyzer measurements. The models are evaluated for accuracy and convergence using several test circuits. Results are presented which demonstrate the effects on circuit performance of magnetic core losses, eddy currents, and mechanical stress on the magnetic cores.
A general non-linear multilevel structural equation mixture model
Kelava, Augustin; Brandt, Holger
2014-01-01
In the past 2 decades latent variable modeling has become a standard tool in the social sciences. In the same period, traditional linear structural equation models have been extended to include non-linear interaction and quadratic effects (e.g., Klein and Moosbrugger, 2000) and multilevel modeling (Rabe-Hesketh et al., 2004). We present a general non-linear multilevel structural equation mixture model (GNM-SEMM) that combines recent semiparametric non-linear structural equation models (Kelava and Nagengast, 2012; Kelava et al., 2014) with multilevel structural equation mixture models (Muthén and Asparouhov, 2009) for clustered and non-normally distributed data. The proposed approach allows for semiparametric relationships at the within and between levels. We present examples from educational science to illustrate different submodels from the general framework. PMID:25101022
A mathematical model for municipal solid waste management - A case study in Hong Kong.
Lee, C K M; Yeung, C L; Xiong, Z R; Chung, S H
2016-12-01
With the booming economy and increasing population, the accumulation of waste has become an increasingly arduous issue and has attracted attention from all sectors of society. Hong Kong, which has a relatively high daily per-capita domestic waste generation rate for Asia, has not yet established a comprehensive waste management system. This paper conducts a review of waste management approaches and models. Researchers highlight that mathematical models provide useful information for decision-makers to select appropriate choices and save cost. It is suggested to consider municipal solid waste management in a holistic view and improve the utilization of waste management infrastructures. A mathematical model which adopts integer linear programming and mixed integer programming has been developed for Hong Kong municipal solid waste management. A sensitivity analysis was carried out to simulate different scenarios, providing decision-makers with important information for establishing a Hong Kong waste management system.
An optimization model for energy generation and distribution in a dynamic facility
NASA Technical Reports Server (NTRS)
Lansing, F. L.
1981-01-01
An analytical model is described using linear programming for the optimum generation and distribution of energy among competing energy resources under different economic criteria. The model, which will be used as a general engineering tool in the analysis of the Deep Space Network ground facility, considers several essential decisions for better design and operation. The decisions sought for the particular energy application include: the optimum time to build an assembly of elements, the inclusion of a storage medium of some type, and the size or capacity of the elements that will minimize the total life-cycle cost over a given number of years. The model, which is structured in multiple time divisions, employs the decomposition principle for large matrices, the branch-and-bound method in mixed-integer programming, and the revised simplex technique for efficient and economic computer use.
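As an aside, the branch-and-bound method mentioned above can be sketched on a toy 0/1 knapsack (all data made up): each subtree is bounded by its LP relaxation, here a greedy fractional fill by value density, and pruned when that bound cannot beat the incumbent:

```python
# Toy branch-and-bound for a 0/1 knapsack with invented data.
values = [60, 100, 120]      # items are already sorted by value density
weights = [10, 20, 30]
capacity = 50

def bound(i, value, room):
    """Upper bound from the LP relaxation: fill fractionally from item i on."""
    for j in range(i, len(values)):
        if weights[j] <= room:
            room -= weights[j]
            value += values[j]
        else:
            return value + values[j] * room / weights[j]
    return value

best = 0
def branch(i, value, room):
    global best
    best = max(best, value)          # update the incumbent
    if i == len(values) or bound(i, value, room) <= best:
        return                       # leaf reached, or subtree pruned
    if weights[i] <= room:
        branch(i + 1, value + values[i], room - weights[i])  # take item i
    branch(i + 1, value, room)                               # skip item i

branch(0, 0, capacity)
# best is now 220 (items 2 and 3, weights 20 + 30 = 50)
```

The same bound-and-prune skeleton underlies MILP solvers, with the simplex method playing the role of the relaxation solver.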
Neural network modelling of non-linear hydrological relationships
NASA Astrophysics Data System (ADS)
Abrahart, R. J.; See, L. M.
2007-09-01
Two recent studies have suggested that neural network modelling offers no worthwhile improvements in comparison to the application of weighted linear transfer functions for capturing the non-linear nature of hydrological relationships. The potential of an artificial neural network to perform simple non-linear hydrological transformations under controlled conditions is examined in this paper. Eight neural network models were developed: four full or partial emulations of a recognised non-linear hydrological rainfall-runoff model; four solutions developed on an identical set of inputs and a calculated runoff coefficient output. The use of different input combinations enabled the competencies of solutions developed on a reduced number of parameters to be assessed. The selected hydrological model had a limited number of inputs and contained no temporal component. The modelling process was based on a set of random inputs that had a uniform distribution and spanned a modest range of possibilities. The initial cloning operations permitted a direct comparison to be performed with the equation-based relationship. It also provided more general information about the power of a neural network to replicate mathematical equations and model modest non-linear relationships. The second group of experiments explored a different relationship that is of hydrological interest; the target surface contained a stronger set of non-linear properties and was more challenging. Linear modelling comparisons were performed against traditional least squares multiple linear regression solutions developed on identical datasets. The reported results demonstrate that neural networks are capable of modelling non-linear hydrological processes and are therefore appropriate tools for hydrological modelling.
ENSO Diversity in Climate Models: A Linear Inverse Modeling Approach
NASA Astrophysics Data System (ADS)
Capotondi, A.; Sardeshmukh, P. D.
2013-12-01
As emphasized in a large recent literature, ENSO events differ in the longitudinal location of the largest sea surface temperature (SST) anomalies along the equator. These differences in peak longitude are associated with different atmospheric teleconnections and global-scale impacts, whose large societal relevance makes it very important to understand the origin and predictability of the various ENSO 'flavors'. In this study we use Linear Inverse Modeling (LIM) to examine ENSO diversity in a 1000-year pre-industrial control integration of the National Center for Atmospheric Research (NCAR) Community Climate System Model version 4 (CCSM4). We choose a pre-industrial control integration for its multi-century duration, and also to examine ENSO diversity in the context of natural variability. The NCAR-CCSM4 has relatively realistic ENSO variability, and a rich spectrum of ENSO diversity, and is thus well suited for studying the origin of ENSO flavors. In particular, the relative frequency of events peaking in the eastern and central equatorial Pacific ('EP' versus 'CP') undergoes inter-decadal modulations in this 1000-yr run. By constructing separate LIMs for the EP and CP epochs, as well as for the entire simulation, we examine to what extent the dominance of a specific ENSO flavor can be attributed to changes in the system dynamics (i.e in the LIM's linear operator) or is merely due to noise. Results from this study provide insights into the predictability of different ENSO types, establish a baseline for assessing ENSO changes due to global warming, and help define new dynamically meaningful ENSO metrics for evaluating climate models.
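For orientation, the core LIM computation recovers a linear operator L from the lag-τ propagator, which in practice is estimated from covariances as G(τ) = C(τ)C(0)⁻¹. The sketch below (synthetic, not the study's code) verifies only the log/exp recovery step for a known, diagonalizable operator:

```python
import numpy as np

def matrix_exp(M):
    """exp(M) via eigendecomposition (assumes M is diagonalizable)."""
    vals, vecs = np.linalg.eig(M)
    return (vecs @ np.diag(np.exp(vals)) @ np.linalg.inv(vecs)).real

def matrix_log(M):
    """Principal log(M) via eigendecomposition (assumes M is diagonalizable)."""
    vals, vecs = np.linalg.eig(M)
    return (vecs @ np.diag(np.log(vals)) @ np.linalg.inv(vecs)).real

L_true = np.array([[-0.5, 0.3],     # made-up stable linear operator
                   [ 0.0, -0.2]])
tau = 1.0
G = matrix_exp(L_true * tau)        # lag-tau propagator of the true system
L_est = matrix_log(G) / tau         # LIM step: recover L from the propagator
```

With real data, G would be estimated from the zero-lag and lagged covariance matrices of the anomaly fields before taking the matrix logarithm; comparing L estimated from different epochs is what distinguishes changes in dynamics from noise.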
Linear and Nonlinear Thinking: A Multidimensional Model and Measure
ERIC Educational Resources Information Center
Groves, Kevin S.; Vance, Charles M.
2015-01-01
Building upon previously developed and more general dual-process models, this paper provides empirical support for a multidimensional thinking style construct comprised of linear thinking and multiple dimensions of nonlinear thinking. A self-report assessment instrument (Linear/Nonlinear Thinking Style Profile; LNTSP) is presented and…
Derivation and definition of a linear aircraft model
NASA Technical Reports Server (NTRS)
Duke, Eugene L.; Antoniewicz, Robert F.; Krambeer, Keith D.
1988-01-01
A linear aircraft model for a rigid aircraft of constant mass flying over a flat, nonrotating earth is derived and defined. The derivation makes no assumptions of reference trajectory or vehicle symmetry. The linear system equations are derived and evaluated along a general trajectory and include both aircraft dynamics and observation variables.
NASA Astrophysics Data System (ADS)
Sun, Xiaoqiang; Cai, Yingfeng; Wang, Shaohua; Liu, Yanling; Chen, Long
2016-01-01
The control problems associated with vehicle height adjustment of electronically controlled air suspension (ECAS) still pose theoretical challenges for researchers, as reflected in the publications on this subject in recent years. This paper deals with modeling and control of a vehicle height adjustment system for ECAS, which is an example of a hybrid dynamical system due to the coexistence and coupling of continuous variables and discrete events. A mixed logical dynamical (MLD) modeling approach is chosen for capturing enough details of the vehicle height adjustment process. The hybrid dynamic model is constructed on the basis of some assumptions and piecewise linear approximation of component nonlinearities. Then, the on-off statuses of solenoid valves and the piecewise approximation process are described by propositional logic, and the hybrid system is transformed automatically by HYSDEL into a set of linear mixed-integer equalities and inequalities, denoted the MLD model. Using this model, a hybrid model predictive controller (HMPC) is tuned based on online mixed-integer quadratic optimization (MIQP). Two different scenarios are considered in the simulation, whose results verify the height adjustment effectiveness of the proposed approach. Explicit solutions of the controller are then computed offline using multi-parametric programming technology (MPT), converting the controller into an equivalent explicit piecewise affine form suitable for real-time control of the vehicle height adjustment system. Finally, bench experiments for vehicle height lifting, holding and lowering procedures are conducted, which demonstrate that the HMPC can adjust the vehicle height by controlling the on-off statuses of solenoid valves directly. This research proposes a new modeling and control method for vehicle height adjustment of ECAS, which leads to a closed-loop system with favorable dynamical properties.
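The MLD machinery above turns valve on-off logic into mixed-integer constraints. A toy illustration of the standard big-M encoding (hypothetical gains and costs, not the paper's ECAS model), brute-forcing the binaries as a MIQP solver does implicitly:

```python
import itertools

# Toy big-M encoding in the MLD spirit (hypothetical gains/costs, not the
# paper's ECAS model). Binary delta_i is the on-off status of valve i and
# q_i its flow; -M*delta_i <= q_i <= M*delta_i forces q_i = 0 when closed.
M = 10.0
k = [2.0, 3.0]        # assumed flow gains of the two valves when open
target = 3.0          # desired net flow (stand-in for a height change)

best = None
for delta in itertools.product([0, 1], repeat=2):
    q = [k[i] * delta[i] for i in range(2)]           # continuous part, valves fixed
    assert all(-M * d <= qi <= M * d for d, qi in zip(delta, q))
    cost = (sum(q) - target) ** 2 + 0.1 * sum(delta)  # tracking + switching penalty
    if best is None or cost < best[0]:
        best = (cost, delta, q)

print(best)
```

Enumerating the binaries is what a branch-and-bound MIQP solver does in effect; pruning is what keeps it tractable at realistic problem sizes.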
A linear algebra model for quasispecies
NASA Astrophysics Data System (ADS)
García-Pelayo, Ricardo
2002-06-01
In the present work we introduce a simple model of the population genetics of quasispecies. We show that the error catastrophe arises because in biology the mutation rates are almost zero and the mutations themselves are almost neutral. We obtain and discuss previously known results from the point of view of this model. New results are: the fitness of a sequence in terms of its abundance in the quasispecies, a formula for the stable distribution of a quasispecies in which the fitness depends only on the Hamming distance to the master sequence, the time it takes the master sequence to generate a stable quasispecies (such as in the infection by a virus), and the fitness of quasispecies.
Modeling of linear viscoelastic space structures
NASA Astrophysics Data System (ADS)
McTavish, D. J.; Hughes, P. C.
1993-01-01
The GHM Method provides viscoelastic finite elements derived from the commonly used elastic finite elements. Moreover, these GHM elements are used directly and conveniently in second-order structural models just like their elastic counterparts. The forms of the GHM element matrices preserve the definiteness properties usually associated with finite element matrices (the mass matrix is positive definite, the stiffness matrix is nonnegative definite, and the damping matrix is positive semidefinite). In the Laplace domain, material properties are modeled phenomenologically as a sum of second-order rational functions dubbed 'minioscillator' terms. Developed originally as a tool for the analysis of damping in large flexible space structures, the GHM method is applicable to any structure which incorporates viscoelastic materials.
Linear functional minimization for inverse modeling
Barajas-Solano, David A.; Wohlberg, Brendt Egon; Vesselinov, Velimir Valentinov; Tartakovsky, Daniel M.
2015-06-01
In this paper, we present a novel inverse modeling strategy to estimate spatially distributed parameters of nonlinear models. The maximum a posteriori (MAP) estimators of these parameters are based on a likelihood functional, which contains spatially discrete measurements of the system parameters and spatiotemporally discrete measurements of the transient system states. The piecewise continuity prior for the parameters is expressed via Total Variation (TV) regularization. The MAP estimator is computed by minimizing a nonquadratic objective equipped with the TV operator. We apply this inversion algorithm to estimate hydraulic conductivity of a synthetic confined aquifer from measurements of conductivity and hydraulic head. The synthetic conductivity field is composed of a low-conductivity heterogeneous intrusion into a high-conductivity heterogeneous medium. Our algorithm accurately reconstructs the location, orientation, and extent of the intrusion from the steady-state data only. Finally, addition of transient measurements of hydraulic head improves the parameter estimation, accurately reconstructing the conductivity field in the vicinity of observation locations.
Bond models in linear and nonlinear optics
NASA Astrophysics Data System (ADS)
Aspnes, D. E.
2015-08-01
Bond models, also known as polarizable-point or mechanical models, have a long history in optics, starting with the Clausius-Mossotti relation but more accurately originating with Ewald's largely forgotten work in 1912. These models describe macroscopic phenomena such as dielectric functions and nonlinear-optical (NLO) susceptibilities in terms of the physics that takes place in real space, in real time, on the atomic scale. Their strengths lie in the insights that they provide and the questions that they raise, aspects that are often obscured by quantum-mechanical treatments. Static versions were used extensively in the late 1960s and early 1970s to correlate NLO susceptibilities among bulk materials. Interest in NLO applications revived with the 2002 work of Powell et al., who showed that a fully anisotropic version reduced by more than a factor of 2 the relatively large number of parameters necessary to describe second-harmonic generation (SHG) data for Si(111)/SiO2 interfaces. Attention now is focused on the exact physical meaning of these parameters, and the extent to which they represent actual physical quantities.
Failure of Tube Models to Predict the Linear Rheology of Star/Linear Blends
NASA Astrophysics Data System (ADS)
Hall, Ryan; Desai, Priyanka; Kang, Beomgoo; Katzarova, Maria; Huang, Qifan; Lee, Sanghoon; Chang, Taihyun; Venerus, David; Mays, Jimmy; Schieber, Jay; Larson, Ronald
We compare predictions of two of the most advanced versions of the tube model, namely the Hierarchical model by Wang et al. (J. Rheol. 54:223, 2010) and the BOB (branch-on-branch) model by Das et al. (J. Rheol. 50:207-234, 2006), against linear viscoelastic data on blends of monodisperse star and monodisperse linear polybutadiene polymers. The star was carefully synthesized/characterized by temperature gradient interaction chromatography, and rheological data in the high frequency region were obtained through time-temperature superposition. We found massive failures of both the Hierarchical and BOB models to predict the terminal relaxation behavior of the star/linear blends, despite their success in predicting the rheology of the pure star and pure linear. This failure occurred regardless of the choices made concerning constraint release, such as assuming arm retraction in fat or skinny tubes, or allowing for disentanglement relaxation to cut off the constraint release Rouse process at long times. The failures call into question whether constraint release can be described as a combination of constraint release Rouse processes and dynamic tube dilation within a canonical tube model of entanglement interactions.
An insight into linear quarter car model accuracy
NASA Astrophysics Data System (ADS)
Maher, Damien; Young, Paul
2011-03-01
The linear quarter car model is the most widely used suspension system model. A number of authors expressed doubts about the accuracy of the linear quarter car model in predicting the movement of a complex nonlinear suspension system. In this investigation, a quarter car rig, designed to mimic the popular MacPherson strut suspension system, is subject to narrowband excitation at a range of frequencies using a motor driven cam. Linear and nonlinear quarter car simulations of the rig are developed. Both isolated and operational testing techniques are used to characterise the individual suspension system components. Simulations carried out using the linear and nonlinear models are compared to measured data from the suspension test rig at selected excitation frequencies. Results show that the linear quarter car model provides a reasonable approximation of unsprung mass acceleration but significantly overpredicts sprung mass acceleration magnitude. The nonlinear simulation, featuring a trilinear shock absorber model and nonlinear tyre, produces results which are significantly more accurate than linear simulation results. The effect of tyre damping on the nonlinear model is also investigated for narrowband excitation. It is found to reduce the magnitude of unsprung mass acceleration peaks and contribute to an overall improvement in simulation accuracy.
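The linear quarter-car model examined above is a two-mass state-space system driven by the road profile. A minimal simulation sketch (illustrative parameter values, not those of the paper's MacPherson rig):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Linear quarter car: sprung mass ms on spring ks and damper cs, above an
# unsprung mass mu on tyre stiffness kt. Parameters are illustrative, not
# those of the paper's test rig.
ms, mu = 300.0, 40.0              # kg
ks, cs, kt = 20e3, 1.5e3, 180e3   # N/m, N s/m, N/m

def f(t, x):
    zs, vs, zu, vu = x
    zr = 0.01 * np.sin(2 * np.pi * 2.0 * t)   # 2 Hz, 10 mm road input
    a_s = (-ks * (zs - zu) - cs * (vs - vu)) / ms
    a_u = (ks * (zs - zu) + cs * (vs - vu) - kt * (zu - zr)) / mu
    return [vs, a_s, vu, a_u]

sol = solve_ivp(f, (0.0, 5.0), [0.0, 0.0, 0.0, 0.0], max_step=1e-3)
zs_peak = np.max(np.abs(sol.y[0][sol.t > 2.0]))   # after start-up transient
print(f"peak sprung-mass displacement ~ {zs_peak * 1e3:.1f} mm")
```

The nonlinear refinements the paper finds necessary (trilinear shock absorber, nonlinear tyre, tyre damping) would replace the constant `cs` and `kt` terms with state-dependent functions.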
The General Linear Model and Direct Standardization: A Comparison.
ERIC Educational Resources Information Center
Little, Roderick J. A.; Pullum, Thomas W.
1979-01-01
Two methods of analyzing nonorthogonal (uneven cell sizes) cross-classified data sets are compared. The methods are direct standardization and the general linear model. The authors illustrate when direct standardization may be a desirable method of analysis. (JKS)
Hierarchical Linear Modeling in Salary-Equity Studies.
ERIC Educational Resources Information Center
Loeb, Jane W.
2003-01-01
Provides information on how hierarchical linear modeling can be used as an alternative to multiple regression analysis for conducting salary-equity studies. Salary data are used to compare and contrast the two approaches. (EV)
Dilatonic non-linear sigma models and Ricci flow extensions
NASA Astrophysics Data System (ADS)
Carfora, M.; Marzuoli, A.
2016-09-01
We review our recent work describing, in terms of the Wasserstein geometry over the space of probability measures, the embedding of the Ricci flow in the renormalization group flow for dilatonic non-linear sigma models.
Linear and non-linear chemometric modeling of THM formation in Barcelona's water treatment plant.
Platikanov, Stefan; Martín, Jordi; Tauler, Romà
2012-08-15
The complex behavior observed for the dependence of trihalomethane formation on forty one water treatment plant (WTP) operational variables is investigated by means of linear and non-linear regression methods, including kernel-partial least squares (K-PLS), and support vector machine regression (SVR). Lower prediction errors of total trihalomethane concentrations (lower than 14% for external validation samples) were obtained when these two methods were applied in comparison to when linear regression methods were applied. A new visualization technique revealed the complex nonlinear relationships among the operational variables and displayed the existing correlations between input variables and the kernel matrix on one side and the support vectors on the other side. Whereas some water treatment plant variables like river water TOC and chloride concentrations, and breakpoint chlorination were not considered to be significant due to the multi-collinear effect in straight linear regression modeling methods, they were now confirmed to be significant using K-PLS and SVR non-linear modeling regression methods, proving the better performance of these methods for the prediction of complex formation of trihalomethanes in water disinfection plants. Copyright © 2012 Elsevier B.V. All rights reserved.
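The linear-versus-kernel gap the study reports can be reproduced in miniature. A sketch with synthetic data, using a plain RBF kernel ridge regression as a stand-in for K-PLS/SVR (both are kernel methods; this is not the paper's pipeline, and the data are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic nonlinear relationship standing in for THM formation vs. an
# operational variable; a straight-line fit misses it, while an RBF
# kernel ridge regression captures it.
X = rng.uniform(-2.0, 2.0, size=(200, 1))
y = np.sin(2.0 * X[:, 0]) + 0.1 * rng.normal(size=200)

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf(X, X)
alpha = np.linalg.solve(K + 1e-2 * np.eye(len(X)), y)   # kernel ridge fit

lin = np.linalg.lstsq(np.column_stack([np.ones(len(X)), X]), y, rcond=None)[0]
mse_lin = np.mean((y - (lin[0] + lin[1] * X[:, 0])) ** 2)
mse_ker = np.mean((y - K @ alpha) ** 2)
print(mse_lin, mse_ker)
```

The same effect explains the paper's finding that variables dismissed by straight linear regression (due to multi-collinearity) turn out to be significant under nonlinear models.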
Model checking for linear temporal logic: An efficient implementation
NASA Technical Reports Server (NTRS)
Sherman, Rivi; Pnueli, Amir
1990-01-01
This report provides evidence to support the claim that model checking for linear temporal logic (LTL) is practically efficient. Two implementations of a linear temporal logic model checker are described. One is based on transforming the model checking problem into a satisfiability problem; the other checks an LTL formula for a finite model by computing the cross-product of the finite state transition graph of the program with a structure containing all possible models for the property. Experiments were carried out on a set of mutual exclusion algorithms, testing safety and liveness under fairness for these algorithms.
A Probabilistic Risk Mitigation Model for Cyber-Attacks to PMU Networks
Mousavian, Seyedamirabbas; Valenzuela, Jorge; Wang, Jianhui
2015-01-01
The power grid is becoming more dependent on information and communication technologies. Complex networks of advanced sensors such as phasor measurement units (PMUs) are used to collect real time data to improve the observability of the power system. Recent studies have shown that the power grid has significant cyber vulnerabilities which could increase when PMUs are used extensively. Therefore, recognizing and responding to vulnerabilities are critical to the security of the power grid. This paper proposes a risk mitigation model for optimal response to cyber-attacks to PMU networks. We model the optimal response action as a mixed integer linear programming (MILP) problem to prevent propagation of the cyber-attacks and maintain the observability of the power system.
Modeling Compton Scattering in the Linear Regime
NASA Astrophysics Data System (ADS)
Kelmar, Rebeka
2016-09-01
Compton scattering is the collision of photons and electrons. This collision causes the photons to be scattered with increased energy and can therefore produce high-energy photons, which are used in many other fields, including phase-contrast medical imaging and x-ray structure determination. Compton scattering is currently well understood for low-energy collisions; however, to accurately compute spectra of backscattered photons at higher energies, relativistic considerations must be included in the calculations. The focus of this work is to adapt a current program for calculating Compton backscattered radiation spectra to improve its efficiency. This was done by first translating the program from MATLAB to Python. The next step was to implement a more efficient adaptive integration to replace the trapezoidal method. The new program runs in less than half the time of the original. This is important because it allows for quicker analysis and sets the stage for further optimization. The programs were developed using just one particle, while in reality thousands of particles are involved in these collisions, which makes an efficient program essential to running these simulations. The development of this new and efficient program will lead to accurate modeling of Compton sources as well as their improved performance.
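The trapezoid-to-adaptive switch described above pays off most on sharply peaked integrands. A generic illustration (a Lorentzian stand-in, not the Compton kernel) comparing a fixed-step trapezoid against QUADPACK's adaptive quadrature:

```python
import numpy as np
from scipy.integrate import quad

# Sharply peaked integrand: adaptive quadrature concentrates points
# near the peak; a fixed-step trapezoid does not.
a = 1e-2
f = lambda x: 1.0 / (a**2 + (x - 0.5) ** 2)
exact = (np.arctan(0.5 / a) - np.arctan(-0.5 / a)) / a   # closed form

xs = np.linspace(0.0, 1.0, 201)
ys = f(xs)
trap = np.sum((ys[1:] + ys[:-1]) / 2 * np.diff(xs))      # fixed-step trapezoid

adapt, est_err = quad(f, 0.0, 1.0, points=[0.5])         # adaptive (QUADPACK)
print(trap, adapt, exact)
```

The `points` hint tells the adaptive routine where the difficult feature lies; even without it, subdivision would find the peak at the cost of a few extra refinement levels.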
Error control of iterative linear solvers for integrated groundwater models.
Dixon, Matthew F; Bai, Zhaojun; Brush, Charles F; Chung, Francis I; Dogrul, Emin C; Kadir, Tariq N
2011-01-01
An open problem that arises when using modern iterative linear solvers, such as the preconditioned conjugate gradient method or Generalized Minimum RESidual (GMRES) method, is how to choose the residual tolerance in the linear solver to be consistent with the tolerance on the solution error. This problem is especially acute for integrated groundwater models, which are implicitly coupled to another model, such as surface water models, and resolve both multiple scales of flow and temporal interaction terms, giving rise to linear systems with variable scaling. This article uses the theory of "forward error bound estimation" to explain the correspondence between the residual error in the preconditioned linear system and the solution error. Using examples of linear systems from models developed by the US Geological Survey and the California State Department of Water Resources, we observe that this error bound guides the choice of a practical measure for controlling the error in linear systems. We implemented a preconditioned GMRES algorithm and benchmarked it against the Successive Over-Relaxation (SOR) method, the most widely known iterative solver for nonsymmetric coefficient matrices. With forward error control, GMRES can easily replace the SOR method in legacy groundwater modeling packages, resulting in overall simulation speedups as large as 7.74×. This research is expected to broadly impact groundwater modelers through the demonstration of a practical and general approach for setting the residual tolerance in line with the solution error tolerance and presentation of GMRES performance benchmarking results.
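The residual-versus-solution-error correspondence the article formalizes can be sketched with plain NumPy. A simple Jacobi iteration stands in for preconditioned GMRES here (the bound itself is solver-agnostic): tightening the residual tolerance by an estimate of cond(A) guarantees the solution-error tolerance.

```python
import numpy as np

# Forward error bound:  ||x - x_k||/||x||  <=  cond(A) * ||r_k||/||b||.
# So, to guarantee a solution-error tolerance, tighten the residual
# tolerance by (an estimate of) the condition number.
n = 100
A = (np.diag(np.full(n, 4.0))
     + np.diag(np.full(n - 1, -1.0), -1)
     + np.diag(np.full(n - 1, -2.0), 1))     # nonsymmetric, diagonally dominant
x_true = np.ones(n)
b = A @ x_true

sol_tol = 1e-8
res_tol = sol_tol / np.linalg.cond(A)        # tightened residual tolerance

D = np.diag(A)
x = np.zeros(n)
for k in range(10_000):
    r = b - A @ x
    if np.linalg.norm(r) <= res_tol * np.linalg.norm(b):
        break
    x = x + r / D                            # Jacobi sweep (stand-in solver)

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(k, err)
```

In practice the exact condition number is too expensive to compute for large groundwater matrices, so a cheap estimate (e.g. from the preconditioned operator) plays its role, which is precisely where the article's "practical measure" comes in.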
Hierarchical Generalized Linear Models for the Analysis of Judge Ratings
ERIC Educational Resources Information Center
Muckle, Timothy J.; Karabatsos, George
2009-01-01
It is known that the Rasch model is a special two-level hierarchical generalized linear model (HGLM). This article demonstrates that the many-faceted Rasch model (MFRM) is also a special case of the two-level HGLM, with a random intercept representing examinee ability on a test, and fixed effects for the test items, judges, and possibly other…
Assessing Developmental Hypotheses with Cross Classified Data: Log Linear Models.
ERIC Educational Resources Information Center
Lehrer, Richard
Log linear models are proposed for the analysis of structural relations among multidimensional developmental contingency tables. Models of quasi-independence are suggested for testing specific hypothesized patterns of development. Transitions in developmental categorizations are described by Markov models applied to successive contingency tables. A…
A hierarchical linear model for tree height prediction.
Vicente J. Monleon
2003-01-01
Measuring tree height is a time-consuming process. Often, tree diameter is measured and height is estimated from a published regression model. Trees used to develop these models are clustered into stands, but this structure is ignored and independence is assumed. In this study, hierarchical linear models that account explicitly for the clustered structure of the data...
The Use of the Linear Mixed Model in Human Genetics.
Dandine-Roulland, Claire; Perdry, Hervé
2015-01-01
We give a short but detailed review of the methods used to deal with linear mixed models (restricted likelihood, AIREML algorithm, best linear unbiased predictors, etc.), with a few original points. Then we describe three common applications of the linear mixed model in contemporary human genetics: association testing (pathways analysis or rare variants association tests), genomic heritability estimates, and correction for population stratification in genome-wide association studies. We also consider the performance of best linear unbiased predictors for prediction in this context, through a simulation study for rare variants in a short genomic region, and through a short theoretical development for genome-wide data. For each of these applications, we discuss the relevance and the impact of modeling genetic effects as random effects. © 2016 S. Karger AG, Basel.
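For known variance components, the BLUP machinery reviewed above reduces to Henderson's mixed model equations: one linear solve yields the BLUE of the fixed effects and the BLUP of the random effects. A self-contained toy with random-intercept groups standing in for genetic random effects (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# y = X beta + Z u + e, with u ~ N(0, s2u I) and e ~ N(0, s2e I)
n, p, q = 500, 2, 50
groups = np.repeat(np.arange(q), n // q)          # 10 observations per group
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Z = np.zeros((n, q)); Z[np.arange(n), groups] = 1.0
beta = np.array([1.0, 0.5])
u = rng.normal(scale=0.3, size=q)                 # random effects
y = X @ beta + Z @ u + rng.normal(scale=0.5, size=n)

lam = 0.5**2 / 0.3**2                             # s2e / s2u, assumed known
# Henderson's equations: BLUE of beta and BLUP of u from one linear solve
lhs = np.block([[X.T @ X,            X.T @ Z],
                [Z.T @ X, Z.T @ Z + lam * np.eye(q)]])
rhs = np.concatenate([X.T @ y, Z.T @ y])
sol = np.linalg.solve(lhs, rhs)
beta_hat, u_blup = sol[:p], sol[p:]
print(np.round(beta_hat, 2))
```

In the genetic applications discussed above, `Z` would hold genotypes rather than group indicators and the variance components would be estimated (e.g. by AIREML) rather than assumed.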
Phase II monitoring of auto-correlated linear profiles using linear mixed model
NASA Astrophysics Data System (ADS)
Narvand, A.; Soleimani, P.; Raissi, Sadigh
2013-05-01
In many circumstances, the quality of a process or product is best characterized by a given mathematical function between a response variable and one or more explanatory variables, typically referred to as a profile. Several investigations in recent years have addressed the monitoring of auto-correlated linear and nonlinear profiles. In the present paper, we use linear mixed models to account for autocorrelation within observations gathered in phase II of process monitoring. We assume that the structure of correlated linear profiles simultaneously has both random and fixed effects. The work employs a Hotelling's T² statistic, a multivariate exponentially weighted moving average (MEWMA) chart, and a multivariate cumulative sum (MCUSUM) chart to monitor the process. We also compared their performances in terms of the average run length criterion and showed that the proposed control chart schemes can effectively detect shifts in process parameters. Finally, the results are applied to a real case study in an agricultural field.
Estimation of the linear mixed integrated Ornstein–Uhlenbeck model
Hughes, Rachael A.; Kenward, Michael G.; Sterne, Jonathan A. C.; Tilling, Kate
2017-01-01
The linear mixed model with an added integrated Ornstein–Uhlenbeck (IOU) process (linear mixed IOU model) allows for serial correlation and estimation of the degree of derivative tracking. It is rarely used, partly due to the lack of available software. We implemented the linear mixed IOU model in Stata and using simulations we assessed the feasibility of fitting the model by restricted maximum likelihood when applied to balanced and unbalanced data. We compared different (1) optimization algorithms, (2) parameterizations of the IOU process, (3) data structures and (4) random-effects structures. Fitting the model was practical and feasible when applied to large and moderately sized balanced datasets (20,000 and 500 observations), and large unbalanced datasets with (non-informative) dropout and intermittent missingness. Analysis of a real dataset showed that the linear mixed IOU model was a better fit to the data than the standard linear mixed model (i.e. independent within-subject errors with constant variance). PMID:28515536
Generation of linear dynamic models from a digital nonlinear simulation
NASA Technical Reports Server (NTRS)
Daniele, C. J.; Krosel, S. M.
1979-01-01
The results and methodology used to derive linear models from a nonlinear simulation are presented. It is shown that averaged positive and negative perturbations in the state variables can reduce numerical errors in finite difference, partial derivative approximations and, in the control inputs, can better approximate the system response in both directions about the operating point. Both explicit and implicit formulations are addressed. Linear models are derived for the F100 engine, and comparisons of transients are made with the nonlinear simulation. The problem that startup transients in the nonlinear simulation pose for these comparisons is addressed. Also, reduction of the linear models is investigated using the modal and normal techniques. Reduced-order models of the F100 are derived and compared with the full-state models.
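Averaging positive and negative perturbations, as described above, is a central difference: perturbing each state and input both ways and averaging cancels the leading error term. A generic sketch (the plant below is a stand-in pendulum, not the F100 engine deck):

```python
import numpy as np

# Central-difference linearization of dx/dt = f(x, u) about (x0, u0).
def plant(x, u):
    # stand-in nonlinear plant, not the F100 simulation
    return np.array([x[1], -np.sin(x[0]) - 0.1 * x[1] + u[0]])

def linearize(f, x0, u0, eps=1e-5):
    nx, nu = len(x0), len(u0)
    A = np.zeros((nx, nx))
    B = np.zeros((nx, nu))
    for i in range(nx):
        dx = np.zeros(nx); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)  # averaged +/- perturbation
    for j in range(nu):
        du = np.zeros(nu); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

A, B = linearize(plant, np.array([0.0, 0.0]), np.array([0.0]))
print(A, B)
```

One-sided differences would incur an O(eps) error where the central form's error is O(eps²), which is the numerical-error reduction the abstract refers to.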
A supply chain network design model for biomass co-firing in coal-fired power plants
Md. S. Roni; Sandra D. Eksioglu; Erin Searcy; Krishna Jha
2014-01-01
We propose a framework for designing the supply chain network for biomass co-firing in coal-fired power plants. This framework is inspired by existing practices with products with similar physical characteristics to biomass. We present a hub-and-spoke supply chain network design model for long-haul delivery of biomass. This model is a mixed integer linear program solved using the Benders decomposition algorithm. Numerical analysis indicates that 100 million tons of biomass are located within 75 miles from a coal plant and could be delivered at $8.53/dry-ton; 60 million tons of biomass are located beyond 75 miles and could be delivered at $36/dry-ton.
Phylogenetic mixtures and linear invariants for equal input models.
Casanellas, Marta; Steel, Mike
2017-04-01
The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees, the so-called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).
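The equal input model has a closed-form transition matrix, which is what makes its invariants tractable: substitution to state j occurs at a rate proportional to π_j regardless of the current state. A small sanity-check sketch (parameter values are arbitrary):

```python
import numpy as np

# Equal input model: P_ij(t) = exp(-mu t) * delta_ij + (1 - exp(-mu t)) * pi_j.
# With four states this is Felsenstein 1981 (Jukes-Cantor when pi is uniform).
pi = np.array([0.1, 0.2, 0.3, 0.4])
mu, t = 1.0, 0.7
a = np.exp(-mu * t)

P = a * np.eye(4) + (1.0 - a) * pi     # rows: from-state, columns: to-state

# Sanity checks: stochastic rows, pi stationary, and the semigroup property
row_sums = P.sum(axis=1)
P2 = np.exp(-2 * mu * t) * np.eye(4) + (1 - np.exp(-2 * mu * t)) * pi
print(row_sums, pi @ P, np.allclose(P @ P, P2))
```

Because every row mixes the identity with the same fixed vector π, many linear combinations of pattern probabilities collapse, which is the source of the linear invariants the paper characterizes.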
Computer modeling of batteries from non-linear circuit elements
NASA Technical Reports Server (NTRS)
Waaben, S.; Federico, J.; Moskowitz, I.
1983-01-01
A simple non-linear circuit model for battery behavior is given. It is based on time-dependent features of the well-known PIN charge-storage diode, whose behavior is described by equations similar to those associated with electrochemical cells. The circuit simulation computer program ADVICE was used to predict non-linear response from a topological description of the battery analog built from ADVICE components. By a reasonable choice of one set of parameters, the circuit accurately simulates a wide spectrum of measured non-linear battery responses to within a few millivolts.
Confirming the Lanchestrian linear-logarithmic model of attrition
Hartley, D.S. III.
1990-12-01
This paper is the fourth in a series of reports on the breakthrough research in historical validation of attrition in conflict. Significant defense policy decisions, including weapons acquisition and arms reduction, are based in part on models of conflict. Most of these models are driven by their attrition algorithms, usually forms of the Lanchester square and linear laws. None of these algorithms have been validated. The results of this paper confirm the results of earlier papers, using a large database of historical results. The homogeneous linear-logarithmic Lanchestrian attrition model is validated to the extent possible with current initial and final force size data and is consistent with the Iwo Jima data. A particular differential linear-logarithmic model is described that fits the data very well. A version of Helmbold's victory predicting parameter is also confirmed, with an associated probability function. 37 refs., 73 figs., 68 tabs.
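The Lanchester square law mentioned above serves as the baseline that the paper's linear-logarithmic variant modifies (by changing the attrition exponents). A sketch of the classic square law with invented coefficients, checking its conserved "square law" quantity under forward Euler integration:

```python
# Classic Lanchester square law:
#   dB/dt = -beta * R,   dR/dt = -alpha * B,
# which conserves the invariant alpha*B^2 - beta*R^2 (its sign predicts
# the victor). Coefficients and force sizes are illustrative.
alpha, beta = 0.02, 0.01     # effectiveness coefficients (assumed)
B, R = 1000.0, 1200.0        # initial force sizes
dt = 0.01

invariant0 = alpha * B**2 - beta * R**2   # > 0, so B is predicted to win

for _ in range(20_000):                   # forward Euler until annihilation
    B, R = B - beta * R * dt, R - alpha * B * dt
    if B <= 0.0 or R <= 0.0:
        break

invariant = alpha * B**2 - beta * R**2
print(B, R, invariant, invariant0)
```

Historical validation of the kind the paper performs amounts to asking which exponent pair makes observed initial and final force sizes consistent with such a conserved quantity.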
Non-Linear Finite Element Modeling of THUNDER Piezoelectric Actuators
NASA Technical Reports Server (NTRS)
Taleghani, Barmac K.; Campbell, Joel F.
1999-01-01
A NASTRAN non-linear finite element model has been developed for predicting the dome heights of THUNDER (THin Layer UNimorph Ferroelectric DrivER) piezoelectric actuators. To analytically validate the finite element model, a comparison was made with a non-linear plate solution using von Kármán's approximation. A 500 volt input was used to examine the actuator deformation. The NASTRAN finite element model was also compared with experimental results. Four groups of specimens were fabricated and tested. Four different input voltages, which included 120, 160, 200, and 240 Vp-p with a 0 volts offset, were used for this comparison.
Dynamic modeling of electrochemical systems using linear graph theory
NASA Astrophysics Data System (ADS)
Dao, Thanh-Son; McPhee, John
An electrochemical cell is a multidisciplinary system which involves complex chemical, electrical, and thermodynamical processes. The primary objective of this paper is to develop a linear graph-theoretic model for the dynamic description of electrochemical systems through the representation of the system topologies. After a brief introduction to the topic and a review of linear graphs, an approach to developing linear graphs for electrochemical systems using a circuitry representation is discussed, followed in turn by the use of the branch and chord transformation techniques to generate the final dynamic equations governing the system. As an example, the application of linear graph theory to modeling a nickel metal hydride (NiMH) battery is presented. Results show that not only is the number of equations reduced significantly, but the linear graph model also simulates faster than the original lumped parameter model. The approach presented in this paper can be extended to modeling complex systems such as an electric or hybrid electric vehicle where a battery pack is interconnected with other components in many different domains.
Optical linear algebra processors - Noise and error-source modeling
NASA Technical Reports Server (NTRS)
Casasent, D.; Ghosh, A.
1985-01-01
The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) is considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.
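The error-source analysis described above can be illustrated with a toy numerical model (the function name, noise levels, and error structure here are illustrative assumptions, not taken from the paper): an optical matrix-vector multiplier intends to compute y = Ax, but each modulator cell carries a multiplicative gain error and the detectors add noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_matvec(A, x, gain_sigma=0.01, detector_sigma=0.001, rng=rng):
    """Simulate y = Ax on an optical processor with multiplicative
    gain errors on each modulator cell and additive detector noise."""
    gains = 1.0 + gain_sigma * rng.standard_normal(A.shape)
    y_ideal = A @ x
    y_noisy = (A * gains) @ x + detector_sigma * rng.standard_normal(A.shape[0])
    return y_ideal, y_noisy

A = rng.standard_normal((4, 4))
x = rng.standard_normal(4)
y_ideal, y_noisy = noisy_matvec(A, x)
rel_err = np.linalg.norm(y_noisy - y_ideal) / np.linalg.norm(y_ideal)
print(f"relative output error: {rel_err:.4f}")
```

Sweeping `gain_sigma` and `detector_sigma` in such a simulator is one way to obtain empirical error-versus-noise curves of the kind the paper derives analytically.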
Johnson-Neyman Type Technique in Hierarchical Linear Model.
ERIC Educational Resources Information Center
Miyazaki, Yasuo
One of the innovative approaches in the use of hierarchical linear models (HLM) is to use HLM for Slopes as Outcomes models. This implies that the researcher considers that the regression slopes vary from cluster to cluster randomly as well as systematically with certain covariates at the cluster level. Among the covariates, group indicator…
Application Scenarios for Nonstandard Log-Linear Models
ERIC Educational Resources Information Center
Mair, Patrick; von Eye, Alexander
2007-01-01
In this article, the authors have 2 aims. First, hierarchical, nonhierarchical, and nonstandard log-linear models are defined. Second, application scenarios are presented for nonhierarchical and nonstandard models, with illustrations of where these scenarios can occur. Parameters can be interpreted in regard to their formal meaning and in regard…
MULTIVARIATE LINEAR MIXED MODELS FOR MULTIPLE OUTCOMES. (R824757)
We propose a multivariate linear mixed model (MLMM) for the analysis of multiple outcomes, which generalizes the latent variable model of Sammel and Ryan. The proposed model assumes a flexible correlation structure among the multiple outcomes, and allows a global test of the impact of ...
Heuristic and Linear Models of Judgment: Matching Rules and Environments
ERIC Educational Resources Information Center
Hogarth, Robin M.; Karelaia, Natalia
2007-01-01
Much research has highlighted incoherent implications of judgmental heuristics, yet other findings have demonstrated high correspondence between predictions and outcomes. At the same time, judgment has been well modeled in the form of "as if" linear models. Accepting the probabilistic nature of the environment, the authors use statistical tools to…
Locally Dependent Linear Logistic Test Model with Person Covariates
ERIC Educational Resources Information Center
Ip, Edward H.; Smits, Dirk J. M.; De Boeck, Paul
2009-01-01
The article proposes a family of item-response models that allow the separate and independent specification of three orthogonal components: item attribute, person covariate, and local item dependence. Special interest lies in extending the linear logistic test model, which is commonly used to measure item attributes, to tests with embedded item…
Inverse Modelling Problems in Linear Algebra Undergraduate Courses
ERIC Educational Resources Information Center
Martinez-Luaces, Victor E.
2013-01-01
This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…
Neural Network Hydrological Modelling: Linear Output Activation Functions?
NASA Astrophysics Data System (ADS)
Abrahart, R. J.; Dawson, C. W.
2005-12-01
The power to represent non-linear hydrological processes is of paramount importance in neural network hydrological modelling operations. The accepted wisdom requires non-polynomial activation functions to be incorporated in the hidden units such that a single tier of hidden units can thereafter be used to provide a 'universal approximation' to whatever particular hydrological mechanism or function is of interest to the modeller. The user can select from a set of default activation functions or, in certain software packages, define their own function - the most popular options being logistic, sigmoid and hyperbolic tangent. If a unit does not transform its inputs it is said to possess a 'linear activation function', and a combination of linear activation functions will produce a linear solution; the use of non-linear activation functions will instead produce non-linear solutions in which the principle of superposition does not hold. For hidden units, speed of learning and network complexity are important issues. For the output units, it is desirable to select an activation function that is suited to the distribution of the target values: e.g. binary targets (logistic); categorical targets (softmax); continuous-valued targets with a bounded range (logistic / tanh); positive target values with no known upper bound (exponential, but beware of overflow); continuous-valued targets with no known bounds (linear). It is also standard practice in most hydrological applications to use the default software settings and to insert a set of identical non-linear activation functions in the hidden layer and output layer processing units. Mixed combinations - i.e. non-linear activation functions in the hidden units connected to linear or clipped-linear activation functions in the output unit - have nevertheless been reported in several hydrological modelling papers, and the full ramifications of such configurations require further investigation and assessment. There are two
Use of a linearization approximation facilitating stochastic model building.
Svensson, Elin M; Karlsson, Mats O
2014-04-01
The objective of this work was to facilitate the development of nonlinear mixed effects models by establishing a diagnostic method for evaluation of stochastic model components. The random effects investigated were between-subject, between-occasion and residual variability. The method was based on a first-order conditional estimates linear approximation and evaluated on three real datasets with previously developed population pharmacokinetic models. The results were assessed based on the agreement in the difference in objective function value between a basic model and extended models for the standard nonlinear and linearized approaches, respectively. The linearization was found to accurately identify significant extensions of the model's stochastic components, with notably decreased runtimes compared to the standard nonlinear analysis. The observed gain in runtimes ranged from four-fold to more than 50-fold, and the largest gains were seen for models with originally long runtimes. This method may be especially useful as a screening tool to detect correlations between random effects, since it substantially speeds up the estimation of large variance-covariance blocks. To expedite the application of this diagnostic tool, the linearization procedure has been automated and implemented in the software package PsN.
Generalized linear mixed models for meta-analysis.
Platt, R W; Leroux, B G; Breslow, N
1999-03-30
We examine two strategies for meta-analysis of a series of 2 x 2 tables with the odds ratio modelled as a linear combination of study level covariates and random effects representing between-study variation. Penalized quasi-likelihood (PQL), an approximate inference technique for generalized linear mixed models, and a linear model fitted by weighted least squares to the observed log-odds ratios are used to estimate regression coefficients and dispersion parameters. Simulation results demonstrate that both methods perform adequate approximate inference under many conditions, but that neither method works well in the presence of highly sparse data. Under certain conditions with small cell frequencies the PQL method provides better inference.
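The second strategy above, a linear model fitted by weighted least squares to observed log-odds ratios, reduces in the intercept-only case to an inverse-variance weighted mean, which can be sketched in a few lines (the 2 x 2 tables below are made-up illustrative data, not from the paper):

```python
import numpy as np

# Each row: (events_trt, n_trt, events_ctl, n_ctl) -- illustrative data
tables = np.array([
    [12, 100, 20, 100],
    [ 8,  50, 15,  50],
    [30, 200, 45, 200],
], dtype=float)

a = tables[:, 0]; b = tables[:, 1] - a   # treatment: events / non-events
c = tables[:, 2]; d = tables[:, 3] - c   # control:   events / non-events

log_or = np.log(a * d / (b * c))         # study-level log-odds ratios
var = 1/a + 1/b + 1/c + 1/d              # Woolf variance approximation
w = 1.0 / var

pooled = np.sum(w * log_or) / np.sum(w)  # inverse-variance weighted estimate
se = np.sqrt(1.0 / np.sum(w))
print(f"pooled log-OR = {pooled:.3f} (SE {se:.3f})")
```

As the abstract notes, this large-sample approximation breaks down for highly sparse tables, where small cell counts make the Woolf variances unreliable.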
Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables
ERIC Educational Resources Information Center
Henson, Robert A.; Templin, Jonathan L.; Willse, John T.
2009-01-01
This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…
A position-aware linear solid constitutive model for peridynamics
Mitchell, John A.; Silling, Stewart A.; Littlewood, David J.
2015-11-06
A position-aware linear solid (PALS) peridynamic constitutive model is proposed for isotropic elastic solids. The PALS model addresses problems that arise, in ordinary peridynamic material models such as the linear peridynamic solid (LPS), due to incomplete neighborhoods near the surface of a body. We improved model behavior in the vicinity of free surfaces through the application of two influence functions that correspond, respectively, to the volumetric and deviatoric parts of the deformation. Furthermore, the model is position-aware in that the influence functions vary over the body and reflect the proximity of each material point to free surfaces. Demonstration calculations on simple benchmark problems show a sharp reduction in error relative to the LPS model.
PID controller design for trailer suspension based on linear model
NASA Astrophysics Data System (ADS)
Kushairi, S.; Omar, A. R.; Schmidt, R.; Isa, A. A. Mat; Hudha, K.; Azizan, M. A.
2015-05-01
A quarter model of an active trailer suspension system having the characteristics of a double-wishbone type was modeled as a complex multi-body dynamic system in MSC.ADAMS. Due to the complexity of the model, a linearized version is considered in this paper. A model reduction technique is applied to the linear model, resulting in a reduced-order model. Based on this simplified model, a Proportional-Integral-Derivative (PID) controller was designed in the MATLAB/Simulink environment, primarily to reduce excessive roll motions and thus improve ride comfort. Simulation results show that the output signal closely imitates the input signal in multiple cases, demonstrating the effectiveness of the controller.
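The control loop described above can be sketched in a few lines (a minimal sketch: the plant here is a generic first-order lag standing in for the reduced-order suspension model, and the gains are illustrative, not the paper's tuned values):

```python
# Discrete PID controller driving a simple first-order linear plant
# (illustrative stand-in for the reduced-order suspension model).
def simulate_pid(kp=2.0, ki=1.0, kd=0.1, setpoint=1.0, dt=0.01, steps=2000):
    y, integral, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv     # PID control law
        y += dt * (-y + u) / 0.5                      # plant: 0.5*dy/dt = -y + u
        prev_err = err
    return y

final = simulate_pid()
print(f"output after 20 s: {final:.3f}")
```

The integral term drives the steady-state error to zero, which is why the output settles at the setpoint rather than merely near it.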
Functional linear models for association analysis of quantitative traits.
Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao
2013-11-01
Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants or common variants or the combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than that of sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to its optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study.
A non linear analytical model of switched reluctance machines
NASA Astrophysics Data System (ADS)
Sofiane, Y.; Tounzi, A.; Piriou, F.
2002-06-01
Nowadays, switched reluctance machines are widely used. To determine their performance and to elaborate control strategies, the linear analytical model is generally used. Unfortunately, this model is not very accurate. To yield accurate modelling results, numerical models based on either the 2D or 3D Finite Element Method are used instead. However, this approach is very expensive in terms of computation time; it remains suitable for studying the behaviour of a whole device, but it is not, a priori, adapted to elaborating control strategies for electrical machines. This paper deals with a non-linear analytical model in terms of variable inductances. The theoretical development of the proposed model is introduced. Then, the model is applied to study the behaviour of a whole controlled switched reluctance machine. The parameters of the structure are identified from a 2D numerical model; they can also be determined from an experimental bench. Finally, the results given by the proposed model are compared to those obtained from the 2D-FEM approach and from the classical linear analytical model.
Multikernel linear mixed models for complex phenotype prediction
Weissbrod, Omer; Geiger, Dan; Rosset, Saharon
2016-01-01
Linear mixed models (LMMs) and their extensions have recently become the method of choice in phenotype prediction for complex traits. However, LMM use to date has typically been limited by assuming simple genetic architectures. Here, we present multikernel linear mixed model (MKLMM), a predictive modeling framework that extends the standard LMM using multiple-kernel machine learning approaches. MKLMM can model genetic interactions and is particularly suitable for modeling complex local interactions between nearby variants. We additionally present MKLMM-Adapt, which automatically infers interaction types across multiple genomic regions. In an analysis of eight case-control data sets from the Wellcome Trust Case Control Consortium and more than a hundred mouse phenotypes, MKLMM-Adapt consistently outperforms competing methods in phenotype prediction. MKLMM is as computationally efficient as standard LMMs and does not require storage of genotypes, thus achieving state-of-the-art predictive power without compromising computational feasibility or genomic privacy. PMID:27302636
Piecewise linear and Boolean models of chemical reaction networks
Veliz-Cuba, Alan; Kumar, Ajit; Josić, Krešimir
2014-01-01
Models of biochemical networks are frequently complex and high-dimensional. Reduction methods that preserve important dynamical properties are therefore essential for their study. Interactions in biochemical networks are frequently modeled using Hill functions (x^n/(J^n + x^n)). Reduced ODEs and Boolean approximations of such model networks have been studied extensively when the exponent n is large. However, while the case of small constant J appears in practice, it is not well understood. We provide a mathematical analysis of this limit, and show that a reduction to a set of piecewise linear ODEs and Boolean networks can be mathematically justified. The piecewise linear systems have closed form solutions that closely track those of the fully nonlinear model. The simpler Boolean network can be used to study the qualitative behavior of the original system. We justify the reduction using geometric singular perturbation theory and compact convergence, and illustrate the results in network models of a toggle switch and an oscillator. PMID:25412739
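The intuition behind the Boolean reduction can be checked numerically (a minimal sketch with made-up parameters, not the paper's analysis): for large exponent n the Hill function x^n/(J^n + x^n) approaches a step at the threshold x = J, so a Boolean switch approximates it well everywhere except in a narrow band around the threshold.

```python
import numpy as np

def hill(x, J=0.5, n=20):
    """Hill function x^n / (J^n + x^n)."""
    return x**n / (J**n + x**n)

def boolean_switch(x, J=0.5):
    """Boolean (step) approximation used in reduced network models."""
    return (x > J).astype(float)

x = np.linspace(0.0, 1.0, 1001)
err = np.abs(hill(x) - boolean_switch(x))
# away from the threshold the two descriptions agree closely
mask = np.abs(x - 0.5) > 0.15
print(f"max error away from threshold: {err[mask].max():.4f}")
```

Near x = J the error approaches 0.5 (the Hill function passes through 1/2 there), which is exactly the regime where the paper's singular perturbation analysis is needed to justify the reduction.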
The determination of third order linear models from a seventh order nonlinear jet engine model
NASA Technical Reports Server (NTRS)
Lalonde, Rick J.; Hartley, Tom T.; De Abreu-Garcia, J. Alex
1989-01-01
Results are presented that demonstrate how good reduced-order models can be obtained directly by recursive parameter identification using input/output (I/O) data of high-order nonlinear systems. Three different methods of obtaining a third-order linear model from a seventh-order nonlinear turbojet engine model are compared. The first method is to obtain a linear model from the original model and then reduce the linear model by standard reduction techniques such as residualization and balancing. The second method is to identify directly a third-order linear model by recursive least-squares parameter estimation using I/O data of the original model. The third method is to obtain a reduced-order model from the original model and then linearize the reduced model. Frequency responses are used as the performance measure to evaluate the reduced models. The reduced-order models along with their Bode plots are presented for comparison purposes.
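The second method above, direct identification of a low-order linear model from I/O data, can be sketched with ordinary least squares (batch least squares gives the same solution that recursive least squares converges to; the "high-order" system below is a made-up stable filter, not the jet engine model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate I/O data from a higher-order system (illustrative stand-in).
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(5, 500):
    y[k] = (1.2*y[k-1] - 0.5*y[k-2] + 0.1*y[k-3] - 0.02*y[k-4]
            + 0.4*u[k-1] + 0.2*u[k-2] + 0.05*u[k-3])

# Fit a third-order ARX model:
# y[k] ~ a1 y[k-1] + a2 y[k-2] + a3 y[k-3] + b1 u[k-1] + b2 u[k-2] + b3 u[k-3]
rows = range(5, 500)
Phi = np.array([[y[k-1], y[k-2], y[k-3], u[k-1], u[k-2], u[k-3]] for k in rows])
theta, *_ = np.linalg.lstsq(Phi, y[5:], rcond=None)

y_hat = Phi @ theta
fit = 1 - np.linalg.norm(y[5:] - y_hat) / np.linalg.norm(y[5:])
print(f"third-order model fit: {fit:.3f}")
```

As in the paper, the quality of such a reduced model would then be judged by comparing frequency responses (Bode plots) of the identified model against the original system.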
Johnson-Neyman Type Technique in Hierarchical Linear Models
ERIC Educational Resources Information Center
Miyazaki, Yasuo; Maier, Kimberly S.
2005-01-01
In hierarchical linear models we often find that group indicator variables at the cluster level are significant predictors for the regression slopes. When this is the case, the average relationship between the outcome and a key independent variable are different from group to group. In these settings, a question such as "what range of the…
Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models
ERIC Educational Resources Information Center
Wagler, Amy E.
2014-01-01
Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…
Use of Linear Models for Thermal Processing Acidified Foods
USDA-ARS?s Scientific Manuscript database
Acidified vegetable products with a pH above 3.3 must be pasteurized to assure the destruction of acid resistant pathogenic bacteria. The times and temperatures needed to assure a five log reduction by pasteurization have previously been determined using a non-linear (Weibull) model. Recently, the F...
Mathematical modelling and linear stability analysis of laser fusion cutting
Hermanns, Torsten; Schulz, Wolfgang; Vossen, Georg; Thombansen, Ulrich
2016-06-08
A model for laser fusion cutting is presented and investigated by linear stability analysis in order to study the tendency for dynamic behavior and subsequent ripple formation. The result is a so-called stability function that describes the correlation between the setting values of the process and the amount of dynamic behavior the process exhibits.
Non-linear duality invariant partially massless models?
Cherney, D.; Deser, S.; Waldron, A.; ...
2015-12-15
We present manifestly duality invariant, non-linear equations of motion for maximal depth, partially massless higher spins. These are based on a first order, Maxwell-like formulation of the known partially massless systems. Our models mimic Dirac–Born–Infeld theory, but it is unclear whether they are Lagrangian.
A SEMIPARAMETRIC BAYESIAN MODEL FOR CIRCULAR-LINEAR REGRESSION
We present a Bayesian approach to regress a circular variable on a linear predictor. The regression coefficients are assumed to have a nonparametric distribution with a Dirichlet process prior. The semiparametric Bayesian approach gives added flexibility to the model and is usefu...
Asymptotic behavior of coupled linear systems modeling suspension bridges
NASA Astrophysics Data System (ADS)
Dell'Oro, Filippo; Giorgi, Claudio; Pata, Vittorino
2015-06-01
We consider the coupled linear system describing the vibrations of a string-beam system related to the well-known Lazer-McKenna suspension bridge model. For ɛ > 0 and k > 0, the decay properties of the solution semigroup are discussed in terms of the nonnegative parameters γ and h, which are responsible for the damping effects.
A Methodology and Linear Model for System Planning and Evaluation.
ERIC Educational Resources Information Center
Meyer, Richard W.
1982-01-01
The two-phase effort at Clemson University to design a comprehensive library automation program is reported. Phase one was based on a version of IBM's business system planning methodology, and the second was based on a linear model designed to compare existing program systems to the phase one design. (MLW)
Identifiability Results for Several Classes of Linear Compartment Models.
Meshkat, Nicolette; Sullivant, Seth; Eisenberg, Marisa
2015-08-01
Identifiability concerns finding which unknown parameters of a model can be estimated, uniquely or otherwise, from given input-output data. If some subset of the parameters of a model cannot be determined given input-output data, then we say the model is unidentifiable. In this work, we study linear compartment models, which are a class of biological models commonly used in pharmacokinetics, physiology, and ecology. In past work, we used commutative algebra and graph theory to identify a class of linear compartment models that we call identifiable cycle models, which are unidentifiable but have the simplest possible identifiable functions (so-called monomial cycles). Here we show how to modify identifiable cycle models by adding inputs, adding outputs, or removing leaks, in such a way that we obtain an identifiable model. We also prove a constructive result on how to combine identifiable models, each corresponding to strongly connected graphs, into a larger identifiable model. We apply these theoretical results to several real-world biological models from physiology, cell biology, and ecology.
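Unidentifiability of the kind discussed above can be detected numerically in a toy case (the model and rate values here are illustrative, not from the paper): in a one-compartment model with two leak rates, the output depends only on the sum k1 + k2, so the sensitivity matrix of the output with respect to the parameters is rank-deficient.

```python
import numpy as np

def output(params, x0=1.0, dt=0.01, steps=500):
    """One-compartment model with two leaks k1, k2:
    dx/dt = -(k1 + k2) x, observed y = x.
    Only the sum k1 + k2 affects the output."""
    k1, k2 = params
    t = dt * np.arange(steps)
    return x0 * np.exp(-(k1 + k2) * t)

p0 = np.array([0.3, 0.5])
eps = 1e-6
# Central-difference sensitivities of the output to each parameter.
S = np.column_stack([
    (output(p0 + eps * np.eye(2)[i]) - output(p0 - eps * np.eye(2)[i])) / (2 * eps)
    for i in range(2)
])
rank = np.linalg.matrix_rank(S, tol=1e-6)
print(f"sensitivity rank: {rank} of {len(p0)} parameters")
```

A full-rank sensitivity matrix is consistent with local identifiability; here the two columns coincide, signalling that only one parameter combination (the sum of the leaks) can be estimated, in line with the "removing leaks" remedy the paper develops.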
Intuitionistic Fuzzy Weighted Linear Regression Model with Fuzzy Entropy under Linear Restrictions.
Kumar, Gaurav; Bajaj, Rakesh Kumar
2014-01-01
In fuzzy set theory, it is well known that a triangular fuzzy number can be uniquely determined through its position and entropies. In the present communication, we extend this concept on triangular intuitionistic fuzzy number for its one-to-one correspondence with its position and entropies. Using the concept of fuzzy entropy the estimators of the intuitionistic fuzzy regression coefficients have been estimated in the unrestricted regression model. An intuitionistic fuzzy weighted linear regression (IFWLR) model with some restrictions in the form of prior information has been considered. Further, the estimators of regression coefficients have been obtained with the help of fuzzy entropy for the restricted/unrestricted IFWLR model by assigning some weights in the distance function.
Monitoring acute effects on athletic performance with mixed linear modeling.
Vandenbogaerde, Tom J; Hopkins, Will G
2010-07-01
There is a need for a sophisticated approach to track athletic performance and to quantify factors affecting it in practical settings. The aim of this study was to demonstrate the application of mixed linear modeling for monitoring athletic performance. Elite sprint and middle-distance swimmers (three females and six males; aged 21-26 yr) performed 6-13 time trials in training and competition in the 9 wk before and including Olympic-qualifying trials, all in their specialty event. We included a double-blind, randomized, diet-controlled crossover intervention, in which the swimmers consumed caffeine (5 mg x kg(-1) body mass) or placebo. The swimmers also knowingly consumed varying doses of caffeine in some time trials. We used mixed linear modeling of log-transformed swim time to quantify effects on performance in training versus competition, in morning versus evening swims, and with use of caffeine. Predictor variables were coded as 0 or 1 to represent absence or presence, respectively, of each condition and were included as fixed effects. The date of each performance test was included as a continuous linear fixed effect and interacted with the random effect for the athlete to represent individual differences in linear trends in performance. Most effects were clear, owing to the high reliability of performance times in training and competition (typical errors of 0.9% and 0.8%, respectively). Performance time improved linearly by 0.8% per 4 wk. The swimmers performed substantially better in evenings versus mornings and in competition versus training. A 100-mg dose of caffeine enhanced performance in training and competition by approximately 1.3%. There were substantial but unclear individual responses to training and caffeine (SD of 0.3% and 0.8%, respectively). Mixed linear modeling can be applied successfully to monitor factors affecting performance in a squad of elite athletes.
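The modeling idea above - dummy-coded fixed effects on log-transformed performance times, with percentage effects recovered by exponentiating coefficients - can be sketched in simplified form (this is a fixed-effects-only ordinary least squares sketch on synthetic, made-up data; the authors' full mixed model additionally includes random athlete effects and trends):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic (made-up) swim times: linear improvement over weeks,
# an evening effect, and a caffeine effect, all on the log scale.
n = 200
weeks = rng.uniform(0, 9, n)
evening = rng.integers(0, 2, n).astype(float)
caffeine = rng.integers(0, 2, n).astype(float)
log_time = (np.log(60.0)             # baseline ~60 s event
            - 0.002 * weeks          # ~0.2% improvement per week
            - 0.010 * evening        # ~1% faster in the evening
            - 0.013 * caffeine       # ~1.3% faster with caffeine
            + 0.008 * rng.standard_normal(n))

X = np.column_stack([np.ones(n), weeks, evening, caffeine])
beta, *_ = np.linalg.lstsq(X, log_time, rcond=None)
caffeine_pct = 100 * (np.exp(beta[3]) - 1)
print(f"estimated caffeine effect: {caffeine_pct:.2f}%")
```

Working on the log scale is what makes effects multiplicative, so a coefficient of -0.013 corresponds to roughly a 1.3% reduction in swim time.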
Linear Sigma Model Toolshed for D-brane Physics
Hellerman, Simeon
2001-08-23
Building on earlier work, we construct linear sigma models for strings on curved spaces in the presence of branes. Our models include an extremely general class of brane-worldvolume gauge field configurations. We explain in an accessible manner the mathematical ideas which suggest appropriate worldsheet interactions for generating a given open string background. This construction provides an explanation for the appearance of the derived category in D-brane physics complementary to that of recent work of Douglas.
Linear Time Invariant Models for Integrated Flight and Rotor Control
NASA Astrophysics Data System (ADS)
Olcer, Fahri Ersel
2011-12-01
Recent developments on individual blade control (IBC) and physics based reduced order models of various on-blade control (OBC) actuation concepts are opening up opportunities to explore innovative rotor control strategies for improved rotor aerodynamic performance, reduced vibration and BVI noise, and improved rotor stability, etc. Further, recent developments in computationally efficient algorithms for the extraction of Linear Time Invariant (LTI) models are providing a convenient framework for exploring integrated flight and rotor control, while accounting for the important couplings that exist between body and low frequency rotor response and high frequency rotor response. Formulation of linear time invariant (LTI) models of a nonlinear system about a periodic equilibrium using the harmonic domain representation of LTI model states has been studied in the literature. This thesis presents an alternative method and a computationally efficient scheme for implementation of the developed method for extraction of linear time invariant (LTI) models from a helicopter nonlinear model in forward flight. The fidelity of the extracted LTI models is evaluated using response comparisons between the extracted LTI models and the nonlinear model in both time and frequency domains. Moreover, the fidelity of stability properties is studied through the eigenvalue and eigenvector comparisons between LTI and LTP models by making use of the Floquet Transition Matrix. For time domain evaluations, individual blade control (IBC) and On-Blade Control (OBC) inputs that have been tried in the literature for vibration and noise control studies are used. For frequency domain evaluations, frequency sweep inputs are used to obtain frequency responses of fixed system hub loads to a single blade IBC input. The evaluation results demonstrate the fidelity of the extracted LTI models, and thus, establish the validity of the LTI model extraction process for use in integrated flight and rotor control
A Derivation of Linearized Griffith Energies from Nonlinear Models
NASA Astrophysics Data System (ADS)
Friedrich, Manuel
2017-07-01
We derive Griffith functionals in the framework of linearized elasticity from nonlinear and frame-indifferent energies in brittle fracture via Γ-convergence. The convergence is given in terms of rescaled displacement fields measuring the distance of deformations from piecewise rigid motions. The configurations of the limiting model consist of partitions of the material, corresponding piecewise rigid deformations and displacement fields which are defined separately on each component of the cracked body. Apart from the linearized Griffith energy, the limiting functional also comprises the segmentation energy, which is necessary to disconnect the parts of the specimen.
Linearized flexibility models in multibody dynamics and control
NASA Technical Reports Server (NTRS)
Cimino, William W.
1989-01-01
Simulation of structural response of multi-flexible-body systems by linearized flexible motion combined with nonlinear rigid motion is discussed. Advantages and applicability of such an approach for accurate simulation with greatly reduced computational costs and turnaround times are described, restricting attention to the control design environment. Requirements for updating the linearized flexibility model to track large angular motions are discussed. Validation of such an approach by comparison with other existing codes is included. Application to a flexible robot manipulator system is described.
Switched linear model predictive controllers for periodic exogenous signals
NASA Astrophysics Data System (ADS)
Wang, Liuping; Gawthrop, Peter; Owens, David. H.; Rogers, Eric
2010-04-01
This article develops switched linear controllers for periodic exogenous signals within the framework of continuous-time model predictive control. In this framework, the control signal is generated by an algorithm that uses the receding horizon control principle with an on-line optimisation scheme that permits the inclusion of operational constraints. Unlike traditional repetitive controllers, applying this method in the form of switched linear controllers ensures bumpless transfer from one controller to another. Simulation studies are included to demonstrate the efficacy of the design with and without hard constraints.
Linear modeling of steady-state behavioral dynamics.
Palya, William L; Walter, Donald; Kessel, Robert; Lucke, Robert
2002-01-01
The observed steady-state behavioral dynamics supported by unsignaled periods of reinforcement within repeating 2,000-s trials were modeled with a linear transfer function. These experiments employed improved schedule forms and analytical methods to improve the precision of the measured transfer function, compared to previous work. The refinements include both the use of multiple reinforcement periods that improve spectral coverage and averaging of independently determined transfer functions. A linear analysis was then used to predict behavior observed for three different test schedules. The fidelity of these predictions was determined. PMID:11831782
Solving linear integer programming problems by a novel neural model.
Cavalieri, S
1999-02-01
The paper deals with integer linear programming problems. As is well known, these are extremely complex problems, even when the number of integer variables is quite low. The literature provides examples of various methods to solve such problems, some of which are of a heuristic nature. This paper proposes an alternative strategy based on the Hopfield neural network. The advantage of the strategy essentially lies in the fact that hardware implementation of the neural model allows the time required to obtain a solution not to depend on the size of the problem to be solved. The paper presents a particular class of integer linear programming problems, including well-known problems such as the Travelling Salesman Problem and the Set Covering Problem. After a brief description of this class of problems, it is demonstrated that the original Hopfield model is incapable of supplying valid solutions. This is attributed to the presence of constant bias currents in the dynamics of the neural model. A demonstration of this is given, and then a novel neural model is presented which continues to be based on the same architecture as the Hopfield model, but introduces modifications thanks to which the integer linear programming problems presented can be solved. Some numerical examples and concluding remarks highlight the solving capacity of the novel neural model.
Validation of a non-linear model of health.
Topolski, Stefan; Sturmberg, Joachim
2014-12-01
The purpose of this study was to evaluate the veracity of a theoretically derived model of health that describes a non-linear trajectory of health from birth to death with available population data sets. The distribution of mortality by age is directly related to health at that age, thus health approximates 1/mortality. The inverse of available all-cause mortality data from various time periods and populations was used as proxy data to compare with the theoretically derived non-linear health model predictions, using both qualitative approaches and quantitative one-sample Kolmogorov-Smirnov analysis with Monte Carlo simulation. The mortality data's inverse resembles a log-normal distribution as predicted by the proposed health model. The curves have identical slopes from birth and follow a logarithmic decline from peak health in young adulthood. A majority of the sampled populations had a good to excellent quantitative fit to a log-normal distribution, supporting the underlying model assumptions. Post hoc manipulation showed the model predictions to be stable. This is a first theory of health to be validated by proxy data, namely the inverse of all-cause mortality. This non-linear model, derived from the notion of the interaction of physical, environmental, mental, emotional, social and sense-making domains of health, gives physicians a more rigorous basis to direct health care services and resources away from disease-focused elder care towards broad-based biopsychosocial interventions earlier in life. © 2014 John Wiley & Sons, Ltd.
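The proxy used in the study above (health approximated by the inverse of all-cause mortality) is simple to sketch. The mortality rates below are invented for illustration and are not taken from the paper's data sets; they are shaped only to reproduce the qualitative pattern the model predicts.

```python
# Illustrative all-cause mortality rates (deaths per person-year) by age band.
# These numbers are invented for the sketch, not drawn from the study's data.
ages =      [0,    10,   20,   30,   40,     50,   60,   70,   80]
mortality = [5e-3, 3e-4, 2e-4, 6e-4, 1.2e-3, 3e-3, 8e-3, 2e-2, 6e-2]

# The study's proxy: health(age) is approximated by 1 / mortality(age).
health = [1.0 / m for m in mortality]

# On these numbers the proxy peaks in young adulthood and declines thereafter,
# the qualitative shape (rise from birth, log-like decline) the model predicts.
peak_age = ages[health.index(max(health))]
print(peak_age)
```

With real population life tables, the same inversion yields the curves the authors compared against a log-normal distribution.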
Attracted to de Sitter: cosmology of the linear Horndeski models
Martín-Moruno, Prado; Nunes, Nelson J.; Lobo, Francisco S.N.
2015-05-01
We consider Horndeski cosmological models, with a minisuperspace Lagrangian linear in the field derivative, that are able to screen any vacuum energy and material content leading to a spatially flat de Sitter vacuum fixed by the theory itself. Furthermore, we investigate particular models with a cosmic evolution independent of the material content and use them to understand the general characteristics of this framework. We also consider more realistic models, which we denote the ''term-by-term'' and ''tripod'' models, focusing attention on cases in which the critical point is indeed an attractor solution and the cosmological history is of particular interest.
Can the Non-linear Ballooning Model describe ELMs?
NASA Astrophysics Data System (ADS)
Henneberg, S. A.; Cowley, S. C.; Wilson, H. R.
2015-11-01
The explosive, filamentary plasma eruptions described by the non-linear ideal MHD ballooning model are tested quantitatively against experimental observations of ELMs in MAST. The equations describing this model were derived by Wilson and Cowley for tokamak-like geometry and comprise two differential equations: the linear ballooning equation, which describes the spatial distribution along the field lines, and the non-linear ballooning mode envelope equation, a two-dimensional, non-linear differential equation that can involve fractional temporal derivatives but is often second-order in time and space. To employ the second differential equation for a specific geometry one has to evaluate its coefficients, which is non-trivial as it involves field-line averaging of slowly converging functions. We have solved this system for MAST, superimposing the solutions of both differential equations and mapping them onto a MAST plasma. Comparisons with the evolution of ELM filaments in MAST will be reported in order to test the model. The support of the EPSRC for the FCDT (Grant EP/K504178/1), of the Euratom research and training programme 2014-2018 (No 633053) and of the RCUK Energy Programme [grant number EP/I501045] is gratefully acknowledged.
A simplified approach to quasi-linear viscoelastic modeling.
Nekouzadeh, Ali; Pryse, Kenneth M; Elson, Elliot L; Genin, Guy M
2007-01-01
The fitting of quasi-linear viscoelastic (QLV) constitutive models to material data often involves somewhat cumbersome numerical convolution. A new approach to treating quasi-linearity in 1-D is described and applied to characterize the behavior of reconstituted collagen. This approach is based on a new principle for including nonlinearity and requires considerably less computation than other comparable models for both model calibration and response prediction, especially for smoothly applied stretching. Additionally, the approach allows relaxation to adapt with the strain history. The modeling approach is demonstrated through tests on pure reconstituted collagen. Sequences of "ramp-and-hold" stretching tests were applied to rectangular collagen specimens. The relaxation force data from the "hold" was used to calibrate a new "adaptive QLV model" and several models from literature, and the force data from the "ramp" was used to check the accuracy of model predictions. Additionally, the ability of the models to predict the force response on a reloading of the specimen was assessed. The "adaptive QLV model" based on this new approach predicts collagen behavior comparably to or better than existing models, with much less computation.
Modeling pan evaporation for Kuwait by multiple linear regression.
Almedeij, Jaber
2012-01-01
Evaporation is an important parameter for many projects related to hydrology and water resources systems. This paper constitutes the first study conducted in Kuwait to obtain empirical relations for the estimation of daily and monthly pan evaporation as functions of available meteorological data of temperature, relative humidity, and wind speed. The data used here for the modeling are daily measurements of substantial continuity coverage, within a period of 17 years between January 1993 and December 2009, which can be considered representative of the desert climate of the urban zone of the country. Multiple linear regression technique is used with a procedure of variable selection for fitting the best model forms. The correlations of evaporation with temperature and relative humidity are also transformed in order to linearize the existing curvilinear patterns of the data by using power and exponential functions, respectively. The evaporation models suggested with the best variable combinations were shown to produce results that are in a reasonable agreement with observation values.
Scalar mesons in three-flavor linear sigma models
Deirdre Black; Amir H. Fariborz; Sherif Moussa; Salah Nasri; Joseph Schechter
2001-09-01
The three-flavor linear sigma model is studied in order to understand the role of possible light scalar mesons in the pi-pi, pi-K and pi-eta elastic scattering channels. The K-matrix prescription is used to unitarize tree-level amplitudes and, with a sufficiently general model, we obtain reasonable fits to the experimental data. The effect of unitarization is very important and leads to the emergence of a nonet of light scalars, with masses below 1 GeV. We compare with a scattering treatment using a more general non-linear sigma model approach and also comment upon how our results fit in with the scalar meson puzzle. The latter involves a preliminary investigation of possible mixing between scalar nonets.
Technical note: A linear model for predicting δ13 Cprotein.
Pestle, William J; Hubbe, Mark; Smith, Erin K; Stevenson, Joseph M
2015-08-01
Development of a model for the prediction of δ13Cprotein from δ13Ccollagen and Δ13Cap-co. Model-generated values could, in turn, serve as "consumer" inputs for multisource mixture modeling of paleodiet. Linear regression analysis of previously published controlled-diet data facilitated the development of a mathematical model for predicting δ13Cprotein (and an experimentally generated error term) from isotopic data routinely generated during the analysis of osseous remains (δ13Cco and Δ13Cap-co). Regression analysis resulted in a two-term linear model, δ13Cprotein (‰) = (0.78 × δ13Cco) - (0.58 × Δ13Cap-co) - 4.7, possessing a high R value of 0.93 (r² = 0.86, P < 0.01), and experimentally generated error terms of ±1.9‰ for any predicted individual value of δ13Cprotein. This model was tested using isotopic data from Formative Period individuals from northern Chile's Atacama Desert. The model presented here appears to hold significant potential for the prediction of the carbon isotope signature of dietary protein using only such data as are routinely generated in the course of stable isotope analysis of human osseous remains. These predicted values are ideal for use in multisource mixture modeling of dietary protein source contribution. © 2015 Wiley Periodicals, Inc.
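The two-term model quoted in the abstract above is simple enough to encode directly. The function below does so; the example input values are chosen arbitrarily for illustration and are not drawn from the Atacama sample.

```python
def predict_d13c_protein(d13c_collagen, delta13c_ap_co):
    """Two-term linear model from the abstract (all values in per mil):
    d13C_protein = 0.78 * d13C_collagen - 0.58 * Delta13C_ap-co - 4.7,
    with an experimentally derived error of about +/- 1.9 per mil."""
    return 0.78 * d13c_collagen - 0.58 * delta13c_ap_co - 4.7

# Arbitrary illustrative inputs, not measurements from the study:
print(round(predict_d13c_protein(-19.0, 5.0), 2))  # -22.42
```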
A simplified approach to quasi-linear viscoelastic modeling
Nekouzadeh, Ali; Pryse, Kenneth M.; Elson, Elliot L.; Genin, Guy M.
2007-01-01
The fitting of quasi-linear viscoelastic (QLV) constitutive models to material data often involves somewhat cumbersome numerical convolution. A new approach to treating quasi-linearity in one dimension is described and applied to characterize the behavior of reconstituted collagen. This approach is based on a new principle for including nonlinearity and requires considerably less computation than other comparable models for both model calibration and response prediction, especially for smoothly applied stretching. Additionally, the approach allows relaxation to adapt with the strain history. The modeling approach is demonstrated through tests on pure reconstituted collagen. Sequences of “ramp-and-hold” stretching tests were applied to rectangular collagen specimens. The relaxation force data from the “hold” was used to calibrate a new “adaptive QLV model” and several models from literature, and the force data from the “ramp” was used to check the accuracy of model predictions. Additionally, the ability of the models to predict the force response on a reloading of the specimen was assessed. The “adaptive QLV model” based on this new approach predicts collagen behavior comparably to or better than existing models, with much less computation. PMID:17499254
Classifying linearly shielded modified gravity models in effective field theory.
Lombriser, Lucas; Taylor, Andy
2015-01-23
We study the model space generated by the time-dependent operator coefficients in the effective field theory of the cosmological background evolution and perturbations of modified gravity and dark energy models. We identify three classes of modified gravity models that reduce to Newtonian gravity on the small scales of linear theory. These general classes contain enough freedom to simultaneously admit a matching of the concordance model background expansion history. In particular, there exists a large model space that mimics the concordance model on all linear quasistatic subhorizon scales as well as in the background evolution. Such models also exist when restricting the theory space to operators introduced in Horndeski scalar-tensor gravity. We emphasize that whereas the partially shielded scenarios might be of interest to study in connection with tensions between large-scale and small-scale data, the ability to distinguish the fully shielded scenarios from the concordance model on near-horizon scales with conventional cosmological probes will remain limited by cosmic variance. Novel tests of the large-scale structure that remedy this deficiency and account for the full covariant nature of the alternative gravitational theories, however, might yield further insights on gravity in this regime.
Modeling of thermal storage systems in MILP distributed energy resource models
Steen, David; Stadler, Michael; Cardoso, Gonçalo; Groissböck, Markus; DeForest, Nicholas; Marnay, Chris
2014-08-04
Thermal energy storage (TES) and distributed generation technologies, such as combined heat and power (CHP) or photovoltaics (PV), can be used to reduce energy costs and decrease CO2 emissions from buildings by shifting energy consumption to times with less emissions and/or lower energy prices. To determine the feasibility of investing in TES in combination with other distributed energy resources (DER), mixed integer linear programming (MILP) can be used. Such a MILP model is the well-established Distributed Energy Resources Customer Adoption Model (DER-CAM); however, it currently uses only a simplified TES model to guarantee linearity and short run-times. Loss calculations are based only on the energy contained in the storage. This paper presents a new DER-CAM TES model that allows improved tracking of losses based on ambient and storage temperatures, and compares results with the previous version. A multi-layer TES model is introduced that retains linearity and avoids creating an endogenous optimization problem. The improved model increases the accuracy of the estimated storage losses and enables use of heat pumps for low temperature storage charging. Ultimately, results indicate that the previous model overestimates the attractiveness of TES investments for cases without possibility to invest in heat pumps and underestimates it for some locations when heat pumps are allowed. Despite a variation in optimal technology selection between the two models, the objective function value stays quite stable, illustrating the complexity of optimal DER sizing problems in buildings and microgrids.
Disorder and Quantum Chromodynamics -- Non-Linear σ Models
NASA Astrophysics Data System (ADS)
Guhr, Thomas; Wilke, Thomas
2001-10-01
The statistical properties of Quantum Chromodynamics (QCD) show universal features which can be modeled by random matrices. This has been established in detailed analyses of data from lattice gauge calculations. Moreover, systematic deviations were found which link QCD to disordered systems in condensed matter physics. To furnish these empirical findings with analytical arguments, we apply and extend the methods developed in disordered systems to construct a non-linear σ model for the spectral correlations in QCD. Our goal is to derive connections to other low-energy effective theories, such as the Nambu-Jona-Lasinio model, and to chiral perturbation theory.
Residuals analysis of the generalized linear models for longitudinal data.
Chang, Y C
2000-05-30
The generalized estimating equation (GEE) method, one of the generalized linear model approaches for longitudinal data, has been used widely in medical research. However, the related sensitivity analysis problem has not been explored intensively. One possible reason for this is the correlated structure within the same subject. We showed that the conventional residual plots for model diagnosis in longitudinal data could mislead a researcher into trusting the fitted model. A non-parametric method, the Wald-Wolfowitz runs test, was proposed to check the residual plots both quantitatively and graphically. The rationale proposed in this paper is well illustrated with two real clinical studies in Taiwan.
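A minimal sketch of the runs test applied to residual signs, using the usual normal approximation for the run count; the residual sequence below is invented for illustration (long same-sign stretches mimic within-subject correlation).

```python
import math

def runs_test(residuals):
    """Wald-Wolfowitz runs test on the signs of a residual sequence.
    Returns (number_of_runs, z_statistic); a large |z| suggests the
    residuals are not randomly ordered (e.g. within-subject correlation)."""
    signs = [r > 0 for r in residuals if r != 0]  # drop exact zeros
    n1 = sum(signs)             # count of positive residuals
    n2 = len(signs) - n1        # count of negative residuals
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    mean = 2 * n1 * n2 / (n1 + n2) + 1
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
    return runs, (runs - mean) / math.sqrt(var)

# Long same-sign stretches -> too few runs, negative z (non-random ordering):
res = [1.2, 0.8, 1.1, 0.9, -0.7, -1.3, -0.5, -1.0, 1.4, 0.6]
runs, z = runs_test(res)
print(runs)  # 3 runs in 10 residuals
```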
Mining Knowledge from Multiple Criteria Linear Programming Models
NASA Astrophysics Data System (ADS)
Zhang, Peng; Zhu, Xingquan; Li, Aihua; Zhang, Lingling; Shi, Yong
As a promising data mining tool, Multiple Criteria Linear Programming (MCLP) has been widely used in business intelligence. However, a possible limitation of MCLP is that it generates unexplainable black-box models which can only tell us results without reasons. To overcome this shortage, in this paper, we propose a Knowledge Mining strategy which mines from black-box MCLP models to get explainable and understandable knowledge. Different from the traditional Data Mining strategy which focuses on mining knowledge from data, this Knowledge Mining strategy provides a new vision of mining knowledge from black-box models, which can be taken as a special topic of “Intelligent Knowledge Management”.
Graphical tools for model selection in generalized linear models.
Murray, K; Heritier, S; Müller, S
2013-11-10
Model selection techniques have existed for many years; however, to date, simple, clear and effective methods of visualising the model building process are sparse. This article describes graphical methods that assist in the selection of models and comparison of many different selection criteria. Specifically, we describe for logistic regression, how to visualize measures of description loss and of model complexity to facilitate the model selection dilemma. We advocate the use of the bootstrap to assess the stability of selected models and to enhance our graphical tools. We demonstrate which variables are important using variable inclusion plots and show that these can be invaluable plots for the model building process. We show with two case studies how these proposed tools are useful to learn more about important variables in the data and how these tools can assist the understanding of the model building process.
MAGDM linear-programming models with distinct uncertain preference structures.
Xu, Zeshui S; Chen, Jian
2008-10-01
Group decision making with preference information on alternatives is an interesting and important research topic which has been receiving more and more attention in recent years. The purpose of this paper is to investigate multiple-attribute group decision-making (MAGDM) problems with distinct uncertain preference structures. We develop some linear-programming models for dealing with the MAGDM problems, where the information about attribute weights is incomplete, and the decision makers have their preferences on alternatives. The provided preference information can be represented in the following three distinct uncertain preference structures: 1) interval utility values; 2) interval fuzzy preference relations; and 3) interval multiplicative preference relations. We first establish some linear-programming models based on decision matrix and each of the distinct uncertain preference structures and, then, develop some linear-programming models to integrate all three structures of subjective uncertain preference information provided by the decision makers and the objective information depicted in the decision matrix. Furthermore, we propose a simple and straightforward approach in ranking and selecting the given alternatives. It is worth pointing out that the developed models can also be used to deal with the situations where the three distinct uncertain preference structures are reduced to the traditional ones, i.e., utility values, fuzzy preference relations, and multiplicative preference relations. Finally, we use a practical example to illustrate in detail the calculation process of the developed approach.
A Linear Stochastic Dynamical Model of ENSO. Part II: Analysis.
NASA Astrophysics Data System (ADS)
Thompson, C. J.; Battisti, D. S.
2001-02-01
In this study the behavior of a linear, intermediate model of ENSO is examined under stochastic forcing. The model was developed in a companion paper (Part I) and is derived from the Zebiak-Cane ENSO model. Four variants of the model are used whose stabilities range from slightly damped to moderately damped. Each model is run as a simulation while being perturbed by noise that is uncorrelated (white) in space and time. The statistics of the model output show the moderately damped models to be more realistic than the slightly damped models. The moderately damped models have power spectra that are quantitatively quite similar to observations, and a seasonal pattern of variance that is qualitatively similar to observations. All models produce ENSOs that are phase locked to the annual cycle, and all display the `spring barrier' characteristic in their autocorrelation patterns, though in the models this `barrier' occurs during the summer and is less intense than in the observations (inclusion of nonlinear effects is shown to partially remedy this deficiency). The more realistic models also show a decadal variability in the lagged autocorrelation pattern that is qualitatively similar to observations. Analysis of the models shows that the greatest part of the variability comes from perturbations that project onto the first singular vector, which then grow rapidly into the ENSO mode. Essentially, the model output represents many instances of the ENSO mode, with random phase and amplitude, stimulated by the noise through the optimal transient growth of the singular vectors. The limit of predictability for each model is calculated and it is shown that the more realistic (moderately damped) models have worse potential predictability (9-15 months) than the deterministic chaotic models that have been studied widely in the literature. The predictability limits are strongly correlated with the stability of the models' ENSO mode, the more highly damped models having much shorter
Modelling hillslope evolution: linear and nonlinear transport relations
NASA Astrophysics Data System (ADS)
Martin, Yvonne
2000-08-01
Many recent models of landscape evolution have used a diffusion relation to simulate hillslope transport. In this study, a linear diffusion equation for slow, quasi-continuous mass movement (e.g., creep), which is based on a large data compilation, is adopted in the hillslope model. Transport relations for rapid, episodic mass movements are based on an extensive data set covering a 40-yr period from the Queen Charlotte Islands, British Columbia. A hyperbolic tangent relation, in which transport increases nonlinearly with gradient above some threshold gradient, provided the best fit to the data. Model runs were undertaken for typical hillslope profiles found in small drainage basins in the Queen Charlotte Islands. Results, based on linear diffusivity values defined in the present study, are compared to results based on diffusivities used in earlier studies. Linear diffusivities, adopted in several earlier studies, generally did not provide adequate approximations of hillslope evolution. The nonlinear transport relation was tested and found to provide acceptable simulations of hillslope evolution. Weathering is introduced into the final set of model runs. The incorporation of weathering into the model decreases the rate of hillslope change when theoretical rates of sediment transport exceed sediment supply. The incorporation of weathering into the model is essential to ensuring that transport rates at high gradients obtained in the model reasonably replicate conditions observed in real landscapes. An outline of landscape progression is proposed based on model results. Hillslope change initially occurs at a rapid rate following events that result in oversteepened gradients (e.g., tectonic forcing, glaciation, fluvial undercutting). Steep gradients are eventually eliminated and hillslope transport is reduced significantly.
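The contrast above between linear diffusion and a threshold-dominated hyperbolic tangent law can be sketched as below. The functional form and all constants are illustrative assumptions, not the calibrated Queen Charlotte Islands relation from the study.

```python
import math

def linear_transport(gradient, k=0.01):
    """Linear diffusion analogue: flux proportional to slope gradient."""
    return k * gradient

def tanh_transport(gradient, q_max=1.0, threshold=0.7, width=0.15):
    """One plausible tanh-type law: transport is negligible below the
    threshold gradient and rises steeply toward q_max above it.
    All parameter values here are assumptions for illustration."""
    return q_max * 0.5 * (1.0 + math.tanh((gradient - threshold) / width))

# Below threshold the two laws are comparable in magnitude; above it the
# nonlinear law dominates, mimicking rapid, episodic mass movement.
for g in (0.3, 0.7, 1.1):
    print(g, round(linear_transport(g), 4), round(tanh_transport(g), 4))
```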
Flood Nowcasting With Linear Catchment Models, Radar and Kalman Filters
NASA Astrophysics Data System (ADS)
Pegram, Geoff; Sinclair, Scott
A pilot study using real time rainfall data as input to a parsimonious linear distributed flood forecasting model is presented. The aim of the study is to deliver an operational system capable of producing flood forecasts, in real time, for the Mgeni and Mlazi catchments near the city of Durban in South Africa. The forecasts can be made at time steps which are of the order of a fraction of the catchment response time. To this end, the model is formulated in Finite Difference form in an equation similar to an Auto Regressive Moving Average (ARMA) model; it is this formulation which provides the required computational efficiency. The ARMA equation is a discretely coincident form of the State-Space equations that govern the response of an arrangement of linear reservoirs. This results in a functional relationship between the reservoir response constants and the ARMA coefficients, which guarantees stationarity of the ARMA model. Input to the model is a combined "Best Estimate" spatial rainfall field, derived from a combination of weather RADAR and satellite rainfield estimates with point rainfall given by a network of telemetering raingauges. Several strategies are employed to overcome the uncertainties associated with forecasting. Principal among these are the use of optimal (double Kalman) filtering techniques to update the model states and parameters in response to current streamflow observations and the application of short term forecasting techniques to provide future estimates of the rainfield as input to the model.
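The reservoir-to-ARMA idea above can be illustrated with the smallest case, a single linear reservoir discretised exactly over a fixed time step. The cascade arrangement and operational parameter values are not given in the abstract, so the constants below are assumptions for the sketch.

```python
import math

def simulate(rain, k=0.5, dt=1.0, q0=0.0):
    """Single linear reservoir dS/dt = I - k*S, discretised exactly over
    step dt, which yields the ARMA-type recursion
        Q[t+1] = a*Q[t] + (1 - a)*I[t],  a = exp(-k*dt).
    Because 0 < a < 1, the recursion is automatically stationary/stable,
    mirroring the stationarity guarantee described in the abstract."""
    a = math.exp(-k * dt)
    q, out = q0, []
    for i in rain:
        q = a * q + (1.0 - a) * i  # ARMA-form update
        out.append(q)
    return out

# A single rainfall pulse produces an exponential recession in the outflow:
flows = simulate([10.0, 0.0, 0.0, 0.0])
print([round(q, 3) for q in flows])
```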
A comparison of linear and non-linear data assimilation methods using the NEMO ocean model
NASA Astrophysics Data System (ADS)
Kirchgessner, Paul; Tödter, Julian; Nerger, Lars
2015-04-01
The assimilation behavior of the widely used LETKF is compared with the Equivalent Weight Particle Filter (EWPF) in a data assimilation application with an idealized configuration of the NEMO ocean model. The experiments show how the different filter methods behave when they are applied to a realistic ocean test case. The LETKF is an ensemble-based Kalman filter, which assumes Gaussian error distributions and hence implicitly requires model linearity. In contrast, the EWPF is a fully nonlinear data assimilation method that does not rely on a particular error distribution. The EWPF has been demonstrated to work well in highly nonlinear situations, like in a model solving a barotropic vorticity equation, but it is still unknown how the assimilation performance compares to ensemble Kalman filters in realistic situations. Twin assimilation experiments are performed with a square basin configuration of the NEMO model. The configuration simulates a double gyre, which exhibits significant nonlinearity. The LETKF and EWPF are both implemented in PDAF (Parallel Data Assimilation Framework, http://pdaf.awi.de), which ensures identical experimental conditions for both filters. To account for the nonlinearity, the assimilation skill of the two methods is assessed by using different statistical metrics, such as the continuous ranked probability score (CRPS) and rank histograms.
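The CRPS mentioned above has a simple closed form for a finite ensemble (this is the standard kernel estimator for the ensemble CRPS, not code from the study):

```python
import numpy as np

def crps_ensemble(members, obs):
    """Continuous ranked probability score of an ensemble forecast against
    a scalar observation, kernel form: E|X - y| - 0.5 * E|X - X'|.
    Lower is better; for a single-member ensemble it reduces to |X - y|."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

ens = np.array([1.0, 1.5, 2.0, 2.5])
score = crps_ensemble(ens, 1.75)
```

Unlike the RMSE of the ensemble mean, the CRPS rewards an ensemble whose spread matches its error, which is why it is useful for comparing a Gaussian-assuming filter (LETKF) against a fully nonlinear one (EWPF).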
Using Quartile-Quartile Lines as Linear Models
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2015-01-01
This article introduces the notion of the quartile-quartile line as an alternative to the regression line and the median-median line to produce a linear model based on a set of data. It is based on using the first and third quartiles of a set of (x, y) data. Dynamic spreadsheets are used as exploratory tools to compare the different approaches and…
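One plausible reading of the quartile-quartile construction (the article should be consulted for the exact definition) is the line through the points formed by the first and third quartiles of the x- and y-values:

```python
import numpy as np

def quartile_quartile_line(x, y):
    """Slope and intercept of the line through (Q1(x), Q1(y)) and
    (Q3(x), Q3(y)).  A plausible reading of the quartile-quartile
    construction; it assumes y increases with x, so that the quartiles
    of the two variables pair up in order."""
    q1x, q3x = np.percentile(x, [25, 75])
    q1y, q3y = np.percentile(y, [25, 75])
    slope = (q3y - q1y) / (q3x - q1x)
    intercept = q1y - slope * q1x
    return slope, intercept

x = np.arange(1.0, 11.0)
m, b = quartile_quartile_line(x, 2.0 * x + 1.0)   # exactly linear data
```

Like the median-median line, this estimator depends only on order statistics, so a single extreme (x, y) pair moves it far less than it moves the least-squares regression line.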
Modelling and Resource Allocation of Linearly Restricted Operating Systems.
1979-12-01
services thus rendered are the products. Clearly, in a limited resource situation, how best to dispense the available resources to achieve some... Preliminary: Although a linear programming model for an economic problem had been developed as early as 1939 by the Russian mathematician L. Kantorovich... individual user programs) to achieve productions (computations). The purpose is then to devise a way (a plan) to allocate those available memory spaces
LINEAR MODELS FOR MANAGING SOURCES OF GROUNDWATER POLLUTION.
Gorelick, Steven M.; Gustafson, Sven-Ake; ,
1984-01-01
Mathematical models for the problem of maintaining a specified groundwater quality while permitting solute waste disposal at various facilities distributed over space are discussed. The pollutants are assumed to be chemically inert and their concentrations in the groundwater are governed by linear equations for advection and diffusion. The aim is to determine a disposal policy which maximises the total amount of pollutants released during a fixed time T while meeting the condition that the concentration everywhere is below prescribed levels.
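Because advection-diffusion transport is linear, steady-state concentrations at monitoring points respond linearly to the releases, and the disposal-policy problem has the structure of a linear program. A toy instance (all numbers invented) using SciPy:

```python
import numpy as np
from scipy.optimize import linprog

# Toy version: 3 disposal facilities, 4 monitoring wells.  Row i of R gives
# the unit concentration response at well i to a unit release at each
# facility (derived, in the real problem, from the linear advection and
# diffusion equations); the numbers here are illustrative.
R = np.array([[0.9, 0.1, 0.0],
              [0.3, 0.6, 0.1],
              [0.1, 0.5, 0.4],
              [0.0, 0.2, 0.8]])
c_max = np.full(4, 10.0)        # prescribed concentration limits at the wells

# Maximise total release w1 + w2 + w3  ->  minimise the negative
res = linprog(c=-np.ones(3), A_ub=R, b_ub=c_max, bounds=[(0, None)] * 3)
total_release = -res.fun
```

At the optimum at least one concentration constraint is binding, which is the defining feature of this class of groundwater management models: the policy releases as much as possible until a water-quality limit is just met.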
NON-LINEAR MODELING OF THE RHIC INTERACTION REGIONS.
Tomas, R.; Fischer, W.; Jain, A.; Luo, Y.; Pilat, F.
2004-07-05
For RHIC's collision lattices the dominant sources of transverse non-linearities are located in the interaction regions. The field quality is available for most of the magnets in the interaction regions from the magnetic measurements, or from extrapolations of these measurements. We discuss the implementation of these measurements in the MADX models of the Blue and the Yellow rings and their impact on beam stability.
Feature Modeling in Underwater Environments Using Sparse Linear Combinations
2010-01-01
waters. Optics Express, 16(13), 2008. 2, 3 [9] J. Jaffe. Monte Carlo modeling of underwater image formation: Validity of the linear and small-angle... turbid water, etc), we would like to determine if these two images contain the same (or similar) object(s). One approach is as follows: 1. Detect... nearest neighbor methods on extracted feature descriptors. This methodology works well for clean, out-of-water images; however, when imaging underwater
Electromagnetic axial anomaly in a generalized linear sigma model
NASA Astrophysics Data System (ADS)
Fariborz, Amir H.; Jora, Renata
2017-06-01
We construct the electromagnetic anomaly effective term for a generalized linear sigma model with two chiral nonets, one with a quark-antiquark structure, the other one with a four-quark content. We compute in the leading order of this framework the decays into two photons of six pseudoscalars: π0(137), π0(1300), η(547), η(958), η(1295) and η(1760). Our results agree well with the available experimental data.
Credibility analysis of risk classes by generalized linear model
NASA Astrophysics Data System (ADS)
Erdemir, Ovgucan Karadag; Sucu, Meral
2016-06-01
In this paper generalized linear model (GLM) and credibility theory which are frequently used in nonlife insurance pricing are combined for reliability analysis. Using full credibility standard, GLM is associated with limited fluctuation credibility approach. Comparison criteria such as asymptotic variance and credibility probability are used to analyze the credibility of risk classes. An application is performed by using one-year claim frequency data of a Turkish insurance company and results of credible risk classes are interpreted.
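The limited fluctuation (full credibility) standard named above has a standard square-root form; the following sketch shows how a GLM prior estimate could be blended with class experience under that rule (the blending weights and tolerances are textbook defaults, not the paper's values):

```python
from statistics import NormalDist

def full_credibility_standard(p=0.95, k=0.05):
    """Expected claim count for full credibility of a Poisson claim
    frequency: the observed frequency should lie within +/- k of its mean
    with probability p.  Defaults give the classical ~1537 claims."""
    z = NormalDist().inv_cdf((1.0 + p) / 2.0)
    return (z / k) ** 2

def credibility_factor(n, p=0.95, k=0.05):
    """Limited fluctuation (square-root rule) credibility of n claims."""
    return min(1.0, (n / full_credibility_standard(p, k)) ** 0.5)

def credible_premium(n_claims, class_mean, glm_mean, p=0.95, k=0.05):
    """Blend class experience with a GLM-based prior: Z*X + (1 - Z)*mu."""
    z = credibility_factor(n_claims, p, k)
    return z * class_mean + (1.0 - z) * glm_mean
```

A risk class with few observed claims is pulled toward the GLM estimate, while a class at or above the full credibility standard relies entirely on its own experience.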
Validating a quasi-linear transport model versus nonlinear simulations
NASA Astrophysics Data System (ADS)
Casati, A.; Bourdelle, C.; Garbet, X.; Imbeaux, F.; Candy, J.; Clairet, F.; Dif-Pradalier, G.; Falchetto, G.; Gerbaud, T.; Grandgirard, V.; Gürcan, Ö. D.; Hennequin, P.; Kinsey, J.; Ottaviani, M.; Sabot, R.; Sarazin, Y.; Vermare, L.; Waltz, R. E.
2009-08-01
In order to gain reliable predictions on turbulent fluxes in tokamak plasmas, physics based transport models are required. Nonlinear gyrokinetic electromagnetic simulations for all species are still too costly in terms of computing time. On the other hand, interestingly, the quasi-linear approximation seems to retain the relevant physics for fairly reproducing both experimental results and nonlinear gyrokinetic simulations. Quasi-linear fluxes are made of two parts: (1) the quasi-linear response of the transported quantities and (2) the saturated fluctuating electrostatic potential. The first one is shown to follow well nonlinear numerical predictions; the second one is based on both nonlinear simulations and turbulence measurements. The resulting quasi-linear fluxes computed by QuaLiKiz (Bourdelle et al 2007 Phys. Plasmas 14 112501) are shown to agree with the nonlinear predictions when varying various dimensionless parameters, such as the temperature gradients, the ion to electron temperature ratio, the dimensionless collisionality and the effective charge, in regimes ranging from ion temperature gradient to trapped electron mode turbulence.
Linear theory for filtering nonlinear multiscale systems with model error
Berry, Tyrus; Harlim, John
2014-01-01
In this paper, we study filtering of multiscale dynamical systems with model error arising from limitations in resolving the smaller scale processes. In particular, the analysis assumes the availability of continuous-time noisy observations of all components of the slow variables. Mathematically, this paper presents new results on higher order asymptotic expansion of the first two moments of a conditional measure. In particular, we are interested in the application of filtering multiscale problems in which the conditional distribution is defined over the slow variables, given noisy observation of the slow variables alone. From the mathematical analysis, we learn that for a continuous time linear model with Gaussian noise, there exists a unique choice of parameters in a linear reduced model for the slow variables which gives the optimal filtering when only the slow variables are observed. Moreover, these parameters simultaneously give the optimal equilibrium statistical estimates of the underlying system, and as a consequence they can be estimated offline from the equilibrium statistics of the true signal. By examining a nonlinear test model, we show that the linear theory extends in this non-Gaussian, nonlinear configuration as long as we know the optimal stochastic parametrization and the correct observation model. However, when the stochastic parametrization model is inappropriate, parameters chosen for good filter performance may give poor equilibrium statistical estimates and vice versa; this finding is based on analytical and numerical results on our nonlinear test model and the two-layer Lorenz-96 model. Finally, even when the correct stochastic ansatz is given, it is imperative to estimate the parameters simultaneously and to account for the nonlinear feedback of the stochastic parameters into the reduced filter estimates. In numerical experiments on the two-layer Lorenz-96 model, we find that the parameters estimated online, as part of a filtering procedure
[Linear mixed modeling of branch biomass for Korean pine plantation].
Dong, Li-Hu; Li, Feng-Ri; Jia, Wei-Wei
2013-12-01
Based on the measurement of 3643 branch biomass samples of 60 Korean pine (Pinus koraiensis) trees from Mengjiagang Forest Farm, Heilongjiang Province, all subset regressions techniques were used to develop the branch biomass model (branch, foliage, and total biomass models). The optimal base model of branch biomass was developed as ln w = k1 + k2 ln Lb + k3 ln Db. Then, linear mixed models were developed based on PROC MIXED of SAS 9.3 software, and evaluated with AIC, BIC, Log Likelihood and Likelihood ratio tests. The results showed that the foliage and total biomass models with parameters k1, k2 and k3 as mixed effects showed the best performance. The branch biomass model with parameters k5 and k2 as mixed effects showed the best performance. Finally, we evaluated the optimal base model and the mixed model of branch biomass. Model validation confirmed that the mixed model was better than the optimal base model. The mixed model with random parameters could not only provide more accurate and precise prediction, but also showed the individual difference based on variance-covariance structure.
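The base model quoted above is linear in the logs, so its fixed-effects part can be fit by ordinary least squares; the random (tree-level) effects would need a dedicated mixed-model routine such as SAS PROC MIXED. A sketch on synthetic data with invented coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
Lb = rng.uniform(0.5, 4.0, n)          # branch length, synthetic
Db = rng.uniform(0.5, 5.0, n)          # branch basal diameter, synthetic
true_k = np.array([-2.0, 1.1, 1.8])    # illustrative coefficients
lnw = (true_k[0] + true_k[1] * np.log(Lb) + true_k[2] * np.log(Db)
       + rng.normal(0.0, 0.05, n))     # log-scale noise

# Design matrix for the base model ln(w) = k1 + k2*ln(Lb) + k3*ln(Db)
X = np.column_stack([np.ones(n), np.log(Lb), np.log(Db)])
k_hat, *_ = np.linalg.lstsq(X, lnw, rcond=None)
```

In the mixed-model version, some or all of k1-k3 additionally receive tree-specific random deviations, which is what lets the fitted model express individual differences through the variance-covariance structure.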
Sahin, Rubina; Tapadia, Kavita
2015-01-01
The three widely used isotherms Langmuir, Freundlich and Temkin were examined in an experiment using fluoride (F⁻) ion adsorption on a geo-material (limonite) at four different temperatures by linear and non-linear models. A comparison of linear and non-linear regression models was made to select the optimum isotherm for the experimental results. The coefficient of determination, r², was used to select the best theoretical isotherm. The four Langmuir linear equations (1, 2, 3, and 4) are discussed. Langmuir isotherm parameters obtained from the four Langmuir linear equations using the linear model differed but they were the same when using the nonlinear model. Langmuir-2 isotherm is one of the linear forms, and it had the highest coefficient of determination (r² = 0.99) compared to the other Langmuir linear equations (1, 3 and 4) in linear form, whereas, for non-linear, Langmuir-4 fitted best among all the isotherms because it had the highest coefficient of determination (r² = 0.99). The results showed that the non-linear model may be a better way to obtain the parameters. In the present work, the thermodynamic parameters show that the absorption of fluoride onto limonite is both spontaneous (ΔG < 0) and endothermic (ΔH > 0). Scanning electron microscope and X-ray diffraction images also confirm the adsorption of F⁻ ion onto limonite. The isotherm and kinetic study reveals that limonite can be used as an adsorbent for fluoride removal. In the future, large-scale fluoride-removal technology could be developed using limonite, which is cost-effective, eco-friendly and easily available in the study area.
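The contrast between linearized and direct non-linear Langmuir fits can be illustrated on synthetic data (one common linearized form is used below; the four numbered Langmuir linear forms in the paper are different rearrangements of the same isotherm, and the parameter values here are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qm, b):
    """Langmuir isotherm: qe = qm*b*Ce / (1 + b*Ce)."""
    return qm * b * Ce / (1.0 + b * Ce)

Ce = np.linspace(1.0, 50.0, 12)
qe = langmuir(Ce, qm=4.0, b=0.15)              # noise-free synthetic data

# One common linearised form: Ce/qe = Ce/qm + 1/(qm*b)
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qm_lin, b_lin = 1.0 / slope, slope / intercept

# Direct non-linear fit of the same isotherm
(qm_nl, b_nl), _ = curve_fit(langmuir, Ce, qe, p0=[1.0, 0.1])
```

On noise-free data both routes recover the same parameters; with real measurement error the linearizations distort the error structure differently (which is why the four linear forms give different parameters), while the non-linear fit works on the untransformed data.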
On the Development of Parameterized Linear Analytical Longitudinal Airship Models
NASA Technical Reports Server (NTRS)
Kulczycki, Eric A.; Johnson, Joseph R.; Bayard, David S.; Elfes, Alberto; Quadrelli, Marco B.
2008-01-01
In order to explore Titan, a moon of Saturn, airships must be able to traverse the atmosphere autonomously. To achieve this, an accurate model and accurate control of the vehicle must be developed so that it is understood how the airship will react to specific sets of control inputs. This paper explains how longitudinal aircraft stability derivatives can be used with airship parameters to create a linear model of the airship solely by combining geometric and aerodynamic airship data. This method does not require system identification of the vehicle. All of the required data can be derived from computational fluid dynamics and wind tunnel testing. This alternate method of developing dynamic airship models will reduce time and cost. Results are compared to other stable airship dynamic models to validate the methods. Future work will address a lateral airship model using the same methods.
Daily runoff prediction using the linear and non-linear models.
Sharifi, Alireza; Dinpashoh, Yagob; Mirabbasi, Rasoul
2017-08-01
Runoff prediction, as a nonlinear and complex process, is essential for designing canals, water management and planning, flood control and predicting soil erosion. There are a number of techniques for runoff prediction based on the hydro-meteorological and geomorphological variables. In recent years, several soft computing techniques have been developed to predict runoff. There are some challenging issues in runoff modeling including the selection of appropriate inputs and determination of the optimum length of training and testing data sets. In this study, the gamma test (GT), forward selection and factor analysis were used to determine the best input combination. In addition, GT was applied to determine the optimum length of training and testing data sets. Results showed the input combination based on the GT method with five variables has better performance than other combinations. For modeling, among four techniques: artificial neural networks, local linear regression, an adaptive neural-based fuzzy inference system and support vector machine (SVM), results indicated the performance of the SVM model is better than other techniques for runoff prediction in the Amameh watershed.
Modelling human balance using switched systems with linear feedback control
Kowalczyk, Piotr; Glendinning, Paul; Brown, Martin; Medrano-Cerda, Gustavo; Dallali, Houman; Shapiro, Jonathan
2012-01-01
We are interested in understanding the mechanisms behind and the character of the sway motion of healthy human subjects during quiet standing. We assume that a human body can be modelled as a single-link inverted pendulum, and the balance is achieved using linear feedback control. Using these assumptions, we derive a switched model which we then investigate. Stable periodic motions (limit cycles) about an upright position are found. The existence of these limit cycles is studied as a function of system parameters. The exploration of the parameter space leads to the detection of multi-stability and homoclinic bifurcations. PMID:21697168
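A minimal sketch of such a switched model, assuming the single-link pendulum is linearized about the upright position and the feedback torque switches off inside a small sensory dead zone (one simple switching mechanism; all parameter values are invented, not the paper's):

```python
# Single-link inverted pendulum linearised about upright:
#   I * th'' = m*g*h * th + T,
# with feedback torque T = -(K*th + D*th') applied only when the sensed
# angle exceeds a small dead zone -- a simple way to obtain a switched
# system.  The saddle dynamics inside the dead zone push the state out
# again, producing sustained sway (a limit cycle) rather than convergence.
m, g, h, I = 70.0, 9.81, 1.0, 70.0     # mass, gravity, CoM height, inertia
K, D, dead = 1200.0, 300.0, 0.001      # gains and dead-zone half-width (rad)

def step(th, om, dt=0.001):
    torque = -(K * th + D * om) if abs(th) > dead else 0.0
    alpha = (m * g * h * th + torque) / I
    return th + dt * om, om + dt * alpha    # explicit Euler step

th, om = 0.01, 0.0
traj = []
for _ in range(20000):                  # 20 s of simulated quiet standing
    th, om = step(th, om)
    traj.append(th)
```

With K larger than m*g*h the controlled phase is a stable spiral, while the uncontrolled phase is a saddle; the interplay of the two is what generates the bounded sway motion the paper analyses.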
Non-linear model for compression tests on articular cartilage.
Grillo, Alfio; Guaily, Amr; Giverso, Chiara; Federico, Salvatore
2015-07-01
Hydrated soft tissues, such as articular cartilage, are often modeled as biphasic systems with individually incompressible solid and fluid phases, and biphasic models are employed to fit experimental data in order to determine the mechanical and hydraulic properties of the tissues. Two of the most common experimental setups are confined and unconfined compression. Analytical solutions exist for the unconfined case with the linear, isotropic, homogeneous model of articular cartilage, and for the confined case with the non-linear, isotropic, homogeneous model. The aim of this contribution is to provide an easily implementable numerical tool to determine a solution to the governing differential equations of (homogeneous and isotropic) unconfined and (inhomogeneous and isotropic) confined compression under large deformations. The large-deformation governing equations are reduced to equivalent diffusive equations, which are then solved by means of finite difference (FD) methods. The solution strategy proposed here could be used to generate benchmark tests for validating complex user-defined material models within finite element (FE) implementations, and for determining the tissue's mechanical and hydraulic properties from experimental data.
A note on a model for quay crane scheduling with non-crossing constraints
NASA Astrophysics Data System (ADS)
Santini, Alberto; Alsing Friberg, Henrik; Ropke, Stefan
2015-06-01
This article studies the quay crane scheduling problem with non-crossing constraints, which is an operational problem that arises in container terminals. An enhancement to a mixed integer programming model for the problem is proposed and a new class of valid inequalities is introduced. Computational results show the effectiveness of these enhancements in solving the problem to optimality.
Application of linear gauss pseudospectral method in model predictive control
NASA Astrophysics Data System (ADS)
Yang, Liang; Zhou, Hao; Chen, Wanchun
2014-03-01
This paper presents a model predictive control(MPC) method aimed at solving the nonlinear optimal control problem with hard terminal constraints and quadratic performance index. The method combines the philosophies of the nonlinear approximation model predictive control, linear quadrature optimal control and Gauss Pseudospectral method. The current control is obtained by successively solving linear algebraic equations transferred from the original problem via linearization and the Gauss Pseudospectral method. It is not only of high computational efficiency since it does not need to solve nonlinear programming problem, but also of high accuracy though there are a few discrete points. Therefore, this method is suitable for on-board applications. A design of terminal impact with a specified direction is carried out to evaluate the performance of this method. Augmented PN guidance law in the three-dimensional coordinate system is applied to produce the initial guess. And various cases for target with straight-line movements are employed to demonstrate the applicability in different impact angles. Moreover, performance of the proposed method is also assessed by comparison with other guidance laws. Simulation results indicate that this method is not only of high computational efficiency and accuracy, but also applicable in the framework of guidance design.
Wavefront Sensing for WFIRST with a Linear Optical Model
NASA Technical Reports Server (NTRS)
Jurling, Alden S.; Content, David A.
2012-01-01
In this paper we develop methods to use a linear optical model to capture the field dependence of wavefront aberrations in a nonlinear optimization-based phase retrieval algorithm for image-based wavefront sensing. The linear optical model is generated from a ray trace model of the system and allows the system state to be described in terms of mechanical alignment parameters rather than wavefront coefficients. This approach allows joint optimization over images taken at different field points and does not require separate convergence of phase retrieval at individual field points. Because the algorithm exploits field diversity, multiple defocused images per field point are not required for robustness. Furthermore, because it is possible to simultaneously fit images of many stars over the field, it is not necessary to use a fixed defocus to achieve adequate signal-to-noise ratio despite having images with high dynamic range. This allows high performance wavefront sensing using in-focus science data. We applied this technique in a simulation model based on the Wide Field Infrared Survey Telescope (WFIRST) Intermediate Design Reference Mission (IDRM) imager using a linear optical model with 25 field points. We demonstrate sub-thousandth-wave wavefront sensing accuracy in the presence of noise and moderate undersampling for both monochromatic and polychromatic images using 25 high-SNR target stars. Using these high-quality wavefront sensing results, we are able to generate upsampled point-spread functions (PSFs) and use them to determine PSF ellipticity to high accuracy in order to reduce the systematic impact of aberrations on the accuracy of galactic ellipticity determination for weak-lensing science.
Wavefront sensing for WFIRST with a linear optical model
NASA Astrophysics Data System (ADS)
Jurling, Alden S.; Content, David A.
2012-09-01
In this paper we develop methods to use a linear optical model to capture the field dependence of wavefront aberrations in a nonlinear optimization-based phase retrieval algorithm for image-based wavefront sensing. The linear optical model is generated from a ray trace model of the system and allows the system state to be described in terms of mechanical alignment parameters rather than wavefront coefficients. This approach allows joint optimization over images taken at different field points and does not require separate convergence of phase retrieval at individual field points. Because the algorithm exploits field diversity, multiple defocused images per field point are not required for robustness. Furthermore, because it is possible to simultaneously fit images of many stars over the field, it is not necessary to use a fixed defocus to achieve adequate signal-to-noise ratio despite having images with high dynamic range. This allows high performance wavefront sensing using in-focus science data. We applied this technique in a simulation model based on the Wide Field Infrared Survey Telescope (WFIRST) Intermediate Design Reference Mission (IDRM) imager using a linear optical model with 25 field points. We demonstrate sub-thousandth-wave wavefront sensing accuracy in the presence of noise and moderate undersampling for both monochromatic and polychromatic images using 25 high-SNR target stars. Using these high-quality wavefront sensing results, we are able to generate upsampled point-spread functions (PSFs) and use them to determine PSF ellipticity to high accuracy in order to reduce the systematic impact of aberrations on the accuracy of galactic ellipticity determination for weak-lensing science.
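The joint estimation idea in these two records, recovering a few mechanical alignment parameters from wavefront data at many field points at once, reduces in the linear regime to a single stacked least-squares solve. A sketch with an invented sensitivity matrix standing in for the ray-trace-derived linear optical model:

```python
import numpy as np

rng = np.random.default_rng(1)
n_field, n_zern, n_align = 25, 10, 5

# Sensitivity matrix of the linear optical model: maps mechanical
# alignment parameters to Zernike wavefront coefficients at every field
# point.  In practice it comes from a ray trace; here it is random.
A = rng.normal(size=(n_field * n_zern, n_align))

p_true = np.array([5e-6, -2e-6, 1e-6, 3e-6, -4e-6])   # alignment state
w_meas = A @ p_true + rng.normal(0.0, 1e-7, n_field * n_zern)  # noisy retrievals

# Joint solve over all field points: estimate the few mechanical
# parameters rather than independent wavefronts per field point.
p_hat, *_ = np.linalg.lstsq(A, w_meas, rcond=None)
```

Because 250 measurements constrain only 5 unknowns, the field diversity itself provides the robustness that multiple defocused images per field point would otherwise have to supply.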
Models of protein linear molecular motors for dynamic nanodevices.
Fulga, Florin; Nicolau, Dan V; Nicolau, Dan V
2009-02-01
Protein molecular motors are natural nano-machines that convert the chemical energy from the hydrolysis of adenosine triphosphate into mechanical work. These efficient machines are central to many biological processes, including cellular motion, muscle contraction and cell division. The remarkable energetic efficiency of the protein molecular motors coupled with their nano-scale has prompted an increasing number of studies focusing on their integration in hybrid micro- and nanodevices, in particular using linear molecular motors. The translation of these tentative devices into technologically and economically feasible ones requires an engineering, design-orientated approach based on a structured formalism, preferably mathematical. This contribution reviews the present state of the art in the modelling of protein linear molecular motors, as relevant to the future design-orientated development of hybrid dynamic nanodevices.
Repopulation Kinetics and the Linear-Quadratic Model
NASA Astrophysics Data System (ADS)
O'Rourke, S. F. C.; McAneney, H.; Starrett, C.; O'Sullivan, J. M.
2009-08-01
The standard Linear-Quadratic (LQ) survival model for radiotherapy is used to investigate different schedules of radiation treatment planning for advanced head and neck cancer. We explore how these treatment protocols may be affected by different tumour repopulation kinetics between treatments. The laws for tumour cell repopulation include the logistic and Gompertz models and this extends the work of Wheldon et al. [1], which was concerned with the case of exponential repopulation between treatments. Treatment schedules investigated include standardized and accelerated fractionation. Calculations based on the present work show that, even with growth laws scaled to ensure that the repopulation kinetics for advanced head and neck cancer are comparable, considerable variation in the survival fraction, up to orders of magnitude, emerged. Calculations show that application of the Gompertz model results in a significantly poorer prognosis for tumour eradication. Gaps in treatment also highlight the differences in the LQ model with the effect of repopulation kinetics included.
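The mechanism can be sketched directly: LQ cell kill exp(-(αd + βd²)) per fraction, with a repopulation step between fractions. The α, β, growth rates and cell counts below are illustrative textbook-style values, not the paper's:

```python
import numpy as np

alpha, beta = 0.3, 0.03          # Gy^-1, Gy^-2; illustrative LQ parameters
d, n_frac, gap = 2.0, 30, 1.0    # 2 Gy per daily fraction, 30 fractions

def treat(N0=1e9, regrow="exponential", lam=0.05, K=1e12):
    """Surviving cell number after n_frac fractions, with exponential,
    logistic or Gompertz repopulation in the gaps between fractions."""
    N = N0
    sf = np.exp(-(alpha * d + beta * d ** 2))     # LQ survival per fraction
    for _ in range(n_frac):
        N *= sf                                    # cell kill
        if regrow == "exponential":
            N *= np.exp(lam * gap)
        elif regrow == "logistic":
            N = K * N / (N + (K - N) * np.exp(-lam * gap))
        elif regrow == "gompertz":
            N = K * (N / K) ** np.exp(-lam * gap)
    return N

N_exp = treat()
N_gom = treat(regrow="gompertz")
```

The Gompertz law regrows fastest when the tumour is far below its carrying capacity, i.e. exactly when treatment has been most effective, which is why it yields the poorer prognosis reported above.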
A Linear Stratified Ocean Model of the Equatorial Undercurrent
NASA Astrophysics Data System (ADS)
McCreary, J. P.
1981-01-01
A linear stratified ocean model is used to study the wind-driven response of the equatorial ocean. The model is an extension of the Lighthill (1969) model that allows the diffusion of heat and momentum into the deeper ocean, and so can develop non-trivial steady solutions. To retain the ability to expand solutions into sums of vertical normal modes, mixing coefficients must be inversely proportional to the square of the background Vaisala frequency. The model is also similar to the earlier homogeneous ocean model of Stommel (1960). He extended Ekman dynamics to the equator by allowing his model to generate a barotropic pressure field. The present model differs in that the presence of stratification allows the generation of a baroclinic pressure field as well. The most important result of this paper is that linear theory can produce a realistic equatorial current structure. The model Undercurrent has a reasonable width and depth scale. There is westward flow both above and below the Undercurrent. The meridional circulation conforms to the 'classical' picture suggested by Cromwell (1953). Unlike the Stommel solution, the response here is less sensitive to variations of parameters. Ocean boundaries are not necessary for the existence of the Undercurrent but are necessary for the existence of the deeper Equatorial Intermediate Current. The radiation of equatorially trapped Rossby and Kelvin waves is essential to the development of a realistic Undercurrent. Because the system supports the existence of these waves, low-order vertical modes can very nearly adjust to Sverdrup balance (defined below), which in a bounded ocean and for winds without curl is a state of rest. As a result, higher-order vertical modes are much more visible in the total solution. This property accounts for the surface trapping and narrow width scale of the equatorial currents. The high-order modes tend to be in Yoshida balance (defined below) and generate the characteristic meridional circulation
Sun, Wei; Huang, Guo H; Lv, Ying; Li, Gongchen
2012-06-01
To tackle nonlinear economies-of-scale (EOS) effects in interval-parameter constraints for a representative waste management problem, an inexact piecewise-linearization-based fuzzy flexible programming (IPFP) model is developed. In IPFP, interval parameters for waste amounts and transportation/operation costs can be quantified; aspiration levels for net system costs, as well as tolerance intervals for both capacities of waste treatment facilities and waste generation rates can be reflected; and the nonlinear EOS effects transformed from objective function to constraints can be approximated. An interactive algorithm is proposed for solving the IPFP model, which in nature is an interval-parameter mixed-integer quadratically constrained programming model. To demonstrate the IPFP's advantages, two alternative models are developed to compare their performances. One is a conventional linear-regression-based inexact fuzzy programming model (IPFP2) and the other is an IPFP model with all right-hand-sides of fuzzy constraints being the corresponding interval numbers (IPFP3). The comparison results between IPFP and IPFP2 indicate that the optimized waste amounts would have the similar patterns in both models. However, when dealing with EOS effects in constraints, the IPFP2 may underestimate the net system costs while the IPFP can estimate the costs more accurately. The comparison results between IPFP and IPFP3 indicate that their solutions would be significantly different. The decreased system uncertainties in IPFP's solutions demonstrate its effectiveness for providing more satisfactory interval solutions than IPFP3. Following its first application to waste management, the IPFP can be potentially applied to other environmental problems under multiple complexities.
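The piecewise-linearization step at the heart of such models can be sketched in isolation: a concave economies-of-scale cost curve is approximated by chords between breakpoints (the cost function and segment count below are invented):

```python
import numpy as np

def piecewise_points(f, lo, hi, n_seg):
    """Breakpoints for a piecewise-linear approximation of f on [lo, hi]."""
    xs = np.linspace(lo, hi, n_seg + 1)
    return xs, f(xs)

def pw_eval(xs, ys, x):
    """Evaluate the piecewise-linear interpolant at x."""
    return np.interp(x, xs, ys)

cost = lambda x: 10.0 * x ** 0.8          # concave EOS cost, exponent illustrative
xs, ys = piecewise_points(cost, 1.0, 100.0, 8)

x_test = np.linspace(1.0, 100.0, 500)
max_err = np.max(np.abs(pw_eval(xs, ys, x_test) - cost(x_test)))
```

For a concave cost every chord lies on or below the curve, so this approximation never overestimates the cost; embedding it in an optimization model in which the concave function must be selected segment-by-segment is what requires the mixed-integer (or SOS2) machinery alluded to above.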
On Discontinuous Piecewise Linear Models for Memristor Oscillators
NASA Astrophysics Data System (ADS)
Amador, Andrés; Freire, Emilio; Ponce, Enrique; Ros, Javier
2017-06-01
In this paper, we provide for the first time rigorous mathematical results regarding the rich dynamics of piecewise linear memristor oscillators. In particular, for each nonlinear oscillator given in [Itoh & Chua, 2008], we show the existence of an infinite family of invariant manifolds and that the dynamics on such manifolds can be modeled without resorting to discontinuous models. Our approach provides topologically equivalent continuous models with one dimension less but with one extra parameter associated to the initial conditions. It is possible to justify the periodic behavior exhibited by three-dimensional memristor oscillators, by taking advantage of known results for planar continuous piecewise linear systems. The analysis developed not only confirms the numerical results contained in previous works [Messias et al., 2010; Scarabello & Messias, 2014] but also goes much further by showing the existence of closed surfaces in the state space which are foliated by periodic orbits. The important role of initial conditions, which justify the infinite number of periodic orbits exhibited by these models, is stressed. The possibility of unsuspected bistable regimes under specific configurations of parameters is also emphasized.
Generalized linear mixed model for segregation distortion analysis.
Zhan, Haimao; Xu, Shizhong
2011-11-11
Segregation distortion is a phenomenon that the observed genotypic frequencies of a locus fall outside the expected Mendelian segregation ratio. The main cause of segregation distortion is viability selection on linked marker loci. These viability selection loci can be mapped using genome-wide marker information. We developed a generalized linear mixed model (GLMM) under the liability model to jointly map all viability selection loci of the genome. Using a hierarchical generalized linear mixed model, we can handle the number of loci several times larger than the sample size. We used a dataset from an F(2) mouse family derived from the cross of two inbred lines to test the model and detected a major segregation distortion locus contributing 75% of the variance of the underlying liability. Replicated simulation experiments confirm that the power of viability locus detection is high and the false positive rate is low. Not only can the method be used to detect segregation distortion loci, but also used for mapping quantitative trait loci of disease traits using case only data in humans and selected populations in plants and animals.
Finite Population Correction for Two-Level Hierarchical Linear Models.
Lai, Mark H C; Kwok, Oi-Man; Hsiao, Yu-Yu; Cao, Qian
2017-03-16
The research literature has paid little attention to the issue of finite populations at higher levels in hierarchical linear modeling. In this article, we propose a method to obtain finite-population-adjusted standard errors of Level-1 and Level-2 fixed effects in 2-level hierarchical linear models. When the finite population at Level-2 is incorrectly assumed to be infinite, the standard errors of the fixed effects are overestimated, resulting in lower statistical power and wider confidence intervals. The impact of ignoring finite population correction is illustrated by using both a real data example and a simulation study with a random intercept model and a random slope model. Simulation results indicated that the bias in the unadjusted fixed-effect standard errors was substantial when the Level-2 sample size exceeded 10% of the Level-2 population size; the bias increased with a larger intraclass correlation, a larger number of clusters, and a larger average cluster size. We also found that the proposed adjustment produced unbiased standard errors, particularly when the number of clusters was at least 30 and the average cluster size was at least 10. We encourage researchers to consider the characteristics of the target population for their studies and to adjust for finite population when appropriate.
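The direction of the bias described above can be illustrated with the classical finite population correction factor; this is a generic sketch of the idea, not the authors' exact Level-1/Level-2 adjustment (the function name and numbers are illustrative):

```python
import math

def fpc_factor(n_sampled, n_population):
    """Classical finite population correction: sqrt(1 - n/N).

    Multiplying an infinite-population standard error by this factor
    shrinks it toward zero as the sample exhausts the population.
    """
    if not 0 < n_sampled <= n_population:
        raise ValueError("need 0 < n <= N")
    return math.sqrt(1.0 - n_sampled / n_population)

# Sampling 10% of the Level-2 units (the threshold at which the article
# reports substantial bias) already shrinks the SE by about 5%.
print(round(fpc_factor(10, 100), 4))   # prints 0.9487
```

An unadjusted analysis implicitly uses a factor of 1, which is why its standard errors are too large whenever the sampling fraction is non-negligible.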
THE SEPARATION OF URANIUM ISOTOPES BY GASEOUS DIFFUSION: A LINEAR PROGRAMMING MODEL
Descriptors: uranium; isotope separation; gaseous diffusion separation; linear programming; mathematical models; gas flow; nuclear reactors; operations research
ERIC Educational Resources Information Center
Cheong, Yuk Fai; Kamata, Akihito
2013-01-01
In this article, we discuss and illustrate two centering and anchoring options available in differential item functioning (DIF) detection studies based on the hierarchical generalized linear and generalized linear mixed modeling frameworks. We compared and contrasted the assumptions of the two options, and examined the properties of their DIF…
Optimizing the Teaching-Learning Process Through a Linear Programming Model--Stage Increment Model.
ERIC Educational Resources Information Center
Belgard, Maria R.; Min, Leo Yoon-Gee
An operations research method to optimize the teaching-learning process is introduced in this paper. In particular, a linear programming model is proposed which, unlike dynamic or control theory models, allows the computer to react to the responses of a learner in seconds or less. To satisfy the assumptions of linearity, the seemingly complicated…
Model predictive control of a combined heat and power plant using local linear models
Kikstra, J.F.; Roffel, B.; Schoen, P.
1998-10-01
Model predictive control has been applied to control of a combined heat and power plant. One of the main features of this plant is that it exhibits nonlinear process behavior due to large throughput swings. In this application, the operating window of the plant has been divided into a number of smaller windows in which the nonlinear process behavior has been approximated by linear behavior. For each operating window, linear step weight models were developed from a detailed nonlinear first principles model, and the model prediction is calculated based on interpolation between these linear models. The model output at each operating point can then be calculated from four basic linear models, and the required control action can subsequently be calculated with the standard model predictive control approach using quadratic programming.
Non Linear Force Free Field Modeling for a Pseudostreamer
NASA Astrophysics Data System (ADS)
Karna, Nishu; Savcheva, Antonia; Gibson, Sarah; Tassev, Svetlin V.
2017-08-01
In this study we present the magnetic configuration of a pseudostreamer embedding a filament cavity, observed on April 18, 2015 at the southwest limb. We constructed a Non-Linear Force-Free Field (NLFFF) model using the flux rope insertion method. The NLFFF model produces the three-dimensional coronal magnetic field constrained by observed coronal structures and the photospheric magnetogram. An SDO/HMI magnetogram was used as input for the model. The high spatial and temporal resolution of SDO/AIA allows us to select the best-fit models that match the observations. The MLSO/CoMP observations provide full-Sun observations of the magnetic field in the corona. The primary observables of CoMP are the four Stokes parameters (I, Q, U, V). In addition, we perform a topology analysis of the models in order to determine the location of quasi-separatrix layers (QSLs). QSLs are used as a proxy to determine where strong electric current sheets can develop in the corona and also provide important information about the connectivity in complicated magnetic field configurations. We present the major properties of the 3D QSL and FLEDGE maps and the evolution of 3D coronal structures during the magnetofrictional process. We produce FORWARD-modeled observables from our NLFFF models and compare them to a toy MHD FORWARD model and the observations.
A Structured Model Reduction Method for Linear Interconnected Systems
NASA Astrophysics Data System (ADS)
Sato, Ryo; Inoue, Masaki; Adachi, Shuichi
2016-09-01
This paper develops a model reduction method for a large-scale interconnected system that consists of linear dynamic components. In the model reduction, we aim to preserve the physical characteristics of each component. To this end, we formulate a structured model reduction problem that reduces the model order of the components while preserving the feedback structure. Although there are a few conventional methods for such structured model reduction that preserve stability, they do not explicitly consider the performance of the reduced-order feedback system. One of the difficulties in the problem with a performance guarantee comes from the nonlinearity of the feedback system with respect to each component. The problem is essentially in a class of nonlinear optimization problems, and therefore it cannot be solved efficiently even by numerical computation. In this paper, application of an equivalent transformation and a proper approximation reduces this nonlinear problem to a weighted linear model reduction problem. Then, by using the weighted balanced truncation technique, we construct a reduced-order model that preserves the feedback structure and ensures a small modeling error. Finally, we verify the effectiveness of the proposed method through numerical experiments.
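As a point of reference for the technique named above, here is a minimal, unweighted balanced truncation in Python on a stable random test system; the authors' method additionally applies frequency weights and preserves the interconnection structure, which this sketch does not:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Reduce a stable LTI system (A, B, C) to order r by balanced truncation."""
    # Gramians: A P + P A' + B B' = 0 and A' Q + Q A + C' C = 0
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    Lc = cholesky(P, lower=True)
    Lo = cholesky(Q, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)            # s = Hankel singular values
    S = np.diag(s[:r] ** -0.5)
    T = Lc @ Vt[:r].T @ S                # balancing projection (n x r)
    Ti = S @ U[:, :r].T @ Lo.T           # left inverse (r x n)
    return Ti @ A @ T, Ti @ B, C @ T, s

# Stable random SISO system, reduced from order 6 to 2
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)); A -= (np.linalg.eigvals(A).real.max() + 1) * np.eye(6)
B = rng.standard_normal((6, 1)); C = rng.standard_normal((1, 6))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, 2)
# DC gains of full and reduced models, for comparison
g_full = (C @ np.linalg.solve(-A, B)).item()
g_red = (Cr @ np.linalg.solve(-Ar, Br)).item()
```

The modeling error of the reduced system is bounded by twice the sum of the truncated Hankel singular values, which is what makes the truncation order a principled choice.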
Linear mixing model applied to AVHRR LAC data
NASA Technical Reports Server (NTRS)
Holben, Brent N.; Shimabukuro, Yosio E.
1993-01-01
A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55 - 3.93 microns channel was extracted and used with the two reflective channels 0.58 - 0.68 microns and 0.725 - 1.1 microns to run a Constrained Least Squares model to generate vegetation, soil, and shade fraction images for an area in the western region of Brazil. The Landsat Thematic Mapper data covering the Emas National Park region were used for estimating the spectral response of the mixture components and for evaluating the mixing model results. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images shows the potential of the unmixing techniques when using coarse resolution data for global studies.
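A minimal sketch of the fraction computation for a single pixel, assuming hypothetical three-band endmember spectra and enforcing the sum-to-one constraint softly via a heavily weighted extra row (a common FCLS-style trick; not necessarily the exact solver used in the paper):

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel, endmembers, delta=1e3):
    """Estimate fractions f >= 0 with sum(f) = 1 such that pixel ≈ endmembers @ f.

    The sum-to-one constraint is imposed softly by appending a heavily
    weighted row of ones to the endmember matrix before solving NNLS.
    """
    E = np.vstack([endmembers, delta * np.ones(endmembers.shape[1])])
    y = np.append(pixel, delta)
    f, _ = nnls(E, y)
    return f

# Hypothetical 3-band spectra; columns = vegetation, soil, shade endmembers
E = np.array([[0.05, 0.30, 0.02],
              [0.40, 0.35, 0.03],
              [0.45, 0.25, 0.02]])
truth = np.array([0.6, 0.3, 0.1])
pixel = E @ truth                 # noiseless synthetic mixed pixel
fractions = unmix(pixel, E)
```

Applying this per pixel over the whole image yields the vegetation, soil, and shade fraction images the abstract describes.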
New holographic dark energy model with non-linear interaction
NASA Astrophysics Data System (ADS)
Oliveros, A.; Acero, Mario A.
2015-05-01
In this paper the cosmological evolution of a holographic dark energy model with a non-linear interaction between the dark energy and dark matter components in a FRW-type flat universe is analysed. In this context, the deceleration parameter q and the equation of state wΛ are obtained. We found that, as the square of the speed of sound remains positive, the model is stable under perturbations from early times on; it also shows that the evolution of the matter and dark energy densities are of the same order for a long period of time, avoiding the so-called coincidence problem. We have also made the correspondence of the model with the dark energy densities and pressures for the quintessence and tachyon fields. From this correspondence we have reconstructed the potential of the scalar fields and their dynamics.
Adaptive Error Estimation in Linearized Ocean General Circulation Models
NASA Technical Reports Server (NTRS)
Chechelnitsky, Michael Y.
1999-01-01
Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by testing innovation based methods of adaptive error estimation with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E) to TOPEX/POSEIDON (TIP) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced state linear model that describes large scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al.(1999) with the method of Fu et al.(1993). Most of the model error is explained by the barotropic mode. However, we find that impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large
Model of intermodulation distortion in non-linear multicarrier systems
NASA Astrophysics Data System (ADS)
Frigo, Nicholas J.
1994-02-01
A heuristic model is proposed which allows calculation of the individual spectral components of the intermodulation distortion present in a non-linear system with a multicarrier input. Noting that any given intermodulation product (IMP) can only be created by a subset of the input carriers, we partition them into 'signal' carriers (which create the IMP) and 'noise' carriers, modeled as a Gaussian process. The relationship between an input signal and the statistical average of its output (averaged over the Gaussian noise) is considered to be an effective transfer function. By summing all possible combinations of signal carriers which create power at the IMP frequencies, the distortion power can be calculated exactly as a function of frequency. An analysis of clipping in lightwave CATV links for AM-VSB signals is used to introduce the model, and is compared to a series of experiments.
Model light curves of linear Type II supernovae
Swartz, D.A.; Wheeler, J.C.; Harkness, R.P. )
1991-06-01
Light curves computed from hydrodynamic models of supernovae are compared graphically with the average observed B and V-band light curves of linear Type II supernovae. Models are based on the following explosion scenarios: carbon deflagration within a C + O core near the Chandrasekhar mass, electron-capture-induced core collapse of an O-Ne-Mg core of the Chandrasekhar mass, and collapse of an Fe core in a massive star. A range of envelope mass, initial radius, and composition is investigated. Only a narrow range of values of these parameters is consistent with observations. Within this narrow range, most of the observed light curve properties can be obtained in part, but none of the models can reproduce the entire light curve shape and absolute magnitude over the full 200 day comparison period. The observed lack of a plateau phase is explained in terms of a combination of small envelope mass and envelope helium enhancement. The final cobalt tail phase of the light curve can be reproduced only if the mass of explosively synthesized radioactive Ni-56 is small. The results presented here, in conjunction with the observed homogeneity among individual members of the supernova subclass, argue favorably for the O-Ne-Mg core collapse mechanism as an explanation for linear Type II supernovae. The Crab Nebula may have arisen from such an explosion. Carbon deflagrations may lead to brighter events like SN 1979C. 62 refs.
Modeling Pan Evaporation for Kuwait by Multiple Linear Regression
Almedeij, Jaber
2012-01-01
Evaporation is an important parameter for many projects related to hydrology and water resources systems. This paper constitutes the first study conducted in Kuwait to obtain empirical relations for the estimation of daily and monthly pan evaporation as functions of available meteorological data of temperature, relative humidity, and wind speed. The data used here for the modeling are daily measurements with substantially continuous coverage, within a period of 17 years between January 1993 and December 2009, which can be considered representative of the desert climate of the urban zone of the country. The multiple linear regression technique is used with a procedure of variable selection for fitting the best model forms. The correlations of evaporation with temperature and relative humidity are also transformed in order to linearize the existing curvilinear patterns of the data by using power and exponential functions, respectively. The evaporation models suggested with the best variable combinations were shown to produce results that are in reasonable agreement with observed values. PMID:23226984
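A hedged sketch of the regression setup described above, on synthetic data: evaporation is fit as a linear function of a power-transformed temperature, an exponentially transformed relative humidity, and wind speed. The exponents, coefficients, and noise level are assumptions, not the paper's fitted values:

```python
import numpy as np

# Synthetic daily records: temperature (deg C), relative humidity (%), wind (m/s)
rng = np.random.default_rng(1)
T = rng.uniform(15, 48, 500)
RH = rng.uniform(10, 90, 500)
W = rng.uniform(0, 10, 500)

# Linearizing transforms in the spirit of the abstract: a power law in T and
# an exponential in RH (the exponents chosen here are illustrative only).
X = np.column_stack([np.ones_like(T), T ** 1.5, np.exp(-0.02 * RH), W])
true_beta = np.array([1.0, 0.05, 3.0, 0.4])
E_pan = X @ true_beta + rng.normal(0, 0.1, 500)   # synthetic pan evaporation, mm/day

# Ordinary least squares fit of the multiple linear regression
beta, *_ = np.linalg.lstsq(X, E_pan, rcond=None)
```

Because the transforms make the model linear in its parameters, a single least-squares solve recovers all coefficients at once.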
Adjusting power for a baseline covariate in linear models
Glueck, Deborah H.; Muller, Keith E.
2009-01-01
The analysis of covariance provides a common approach to adjusting for a baseline covariate in medical research. With Gaussian errors, adding random covariates does not change either the theory or the computations of general linear model data analysis. However, adding random covariates does change the theory and computation of power analysis. Many data analysts fail to fully account for this complication in planning a study. We present our results in five parts. (i) A review of published results helps document the importance of the problem and the limitations of available methods. (ii) A taxonomy for general linear multivariate models and hypotheses allows identifying a particular problem. (iii) We describe how random covariates introduce the need to consider quantiles and conditional values of power. (iv) We provide new exact and approximate methods for power analysis of a range of multivariate models with a Gaussian baseline covariate, for both small and large samples. The new results apply to the Hotelling-Lawley test and the four tests in the “univariate” approach to repeated measures (unadjusted, Huynh-Feldt, Geisser-Greenhouse, Box). The techniques allow rapid calculation and an interactive, graphical approach to sample size choice. (v) Calculating power for a clinical trial of a treatment for increasing bone density illustrates the new methods. We particularly recommend using quantile power with a new Satterthwaite-style approximation. PMID:12898543
Inverse magnetic catalysis in the linear sigma model
NASA Astrophysics Data System (ADS)
Ayala, A.; Loewe, M.; Zamora, R.
2016-05-01
We compute the critical temperature for the chiral transition in the background of a magnetic field in the linear sigma model, including the quark contribution and the thermo-magnetic effects induced on the coupling constants at the one-loop level. For the analysis, we go beyond the mean field approximation by taking one-loop thermo-magnetic corrections to the couplings as well as plasma screening effects for the boson masses, expressed through the ring diagrams. We found inverse magnetic catalysis, i.e., a decrease of the critical chiral temperature as a function of the intensity of the magnetic field, which seems to be in agreement with recent results from the lattice community.
Modeling of Linear Gas Tungsten Arc Welding of Stainless Steel
NASA Astrophysics Data System (ADS)
Maran, P.; Sornakumar, T.; Sundararajan, T.
2008-08-01
A heat and fluid flow model has been developed to solve the gas tungsten arc (GTA) linear welding problem for austenitic stainless steel. The moving heat source problem associated with the electrode traverse has been simplified into an equivalent two-dimensional (2-D) transient problem. The torch residence time has been calculated from the arc diameter and torch speed. The mathematical formulation considers buoyancy, electromagnetic induction, and surface tension forces. The governing equations have been solved by the finite volume method. The temperature and velocity fields have been determined. The theoretical predictions for weld bead geometry are in good agreement with experimental measurements.
Imbedding linear regressions in models for factor crossing
NASA Astrophysics Data System (ADS)
Santos, Carla; Nunes, Célia; Dias, Cristina; Varadinov, Maria; Mexia, João T.
2016-12-01
Given u factors with J1, …, Ju levels we are led to test their effects and interactions. For this we consider an orthogonal partition of R^n, with n = ∏_{l=1}^{u} J_l, into subspaces associated with the sets of factors. The space corresponding to the set C will have density g(C) = ∏_{l∈C}(J_l − 1), so that g({1, …, u}) will be much larger than the other numbers of degrees of freedom when J_l > 2, l = 1, …, u. This fact may be used to enrich these models by imbedding linear regressions in them.
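The dominance of g({1, …, u}) can be checked with a small computation; here u = 3 and J_l = 4 for all factors (illustrative values, not from the paper):

```python
from itertools import combinations
from math import prod

def g(C, J):
    """g(C) = prod_{l in C} (J_l - 1) for a subset C of the factor indices."""
    return prod(J[l] - 1 for l in C)

J = {1: 4, 2: 4, 3: 4}            # u = 3 factors with J_l = 4 levels each
factors = sorted(J)
dofs = {C: g(C, J) for k in range(1, 4) for C in combinations(factors, k)}
# g({1, 2, 3}) = 27, versus 3 for singletons and 9 for pairs
```

The highest-order interaction subspace thus carries far more degrees of freedom than any other set of factors, which is what makes room for the imbedded regressions.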
Linear unmixing using endmember subspaces and physics based modeling
NASA Astrophysics Data System (ADS)
Gillis, David; Bowles, Jeffrey; Ientilucci, Emmett J.; Messinger, David W.
2007-09-01
One of the biggest issues with the Linear Mixing Model (LMM) is that it is implicitly assumed that each of the individual material components throughout the scene may be described using a single dimension (e.g. an endmember vector). In reality, individual pixels corresponding to the same general material class can exhibit a large degree of variation within a given scene. This is especially true in broad background classes such as forests, where the single dimension assumption clearly fails. In practice, the only way to account for the multidimensionality of the class is to choose multiple (very similar) endmembers, each of which represents some part of the class. To address these issues, we introduce the endmember subgroup model, which generalizes the notion of an 'endmember vector' to an 'endmember subspace'. In this model, spectra in a given hyperspectral scene are decomposed as a sum of constituent materials; however, each material is represented by some multidimensional subspace (instead of a single vector). The dimensionality of the subspace will depend on the within-class variation seen in the image. The endmember subgroups can be determined automatically from the data, or can use physics-based modeling techniques to include 'signature subspaces', which are included in the endmember subgroups. In this paper, we give an overview of the subgroup model; discuss methods for determining the endmember subgroups for a given image, and present results showing how the subgroup model improves upon traditional single endmember linear mixing. We also include results that use the 'signature subspace' approach to identifying mixed-pixel targets in HYDICE imagery.
Filtering nonlinear dynamical systems with linear stochastic models
NASA Astrophysics Data System (ADS)
Harlim, J.; Majda, A. J.
2008-06-01
An important emerging scientific issue is the real time filtering through observations of noisy signals for nonlinear dynamical systems as well as the statistical accuracy of spatio-temporal discretizations for filtering such systems. From the practical standpoint, the demand for operationally practical filtering methods escalates as the model resolution is significantly increased. For example, in numerical weather forecasting the current generation of global circulation models with resolution of 35 km has a total of billions of state variables. Numerous ensemble based Kalman filters (Evensen 2003 Ocean Dyn. 53 343-67 Bishop et al 2001 Mon. Weather Rev. 129 420-36 Anderson 2001 Mon. Weather Rev. 129 2884-903 Szunyogh et al 2005 Tellus A 57 528-45 Hunt et al 2007 Physica D 230 112-26) show promising results in addressing this issue; however, all these methods are very sensitive to model resolution, observation frequency, and the nature of the turbulent signals when a practical limited ensemble size (typically less than 100) is used. In this paper, we implement a radical filtering approach to a relatively low (40) dimensional toy model, the L-96 model (Lorenz 1996 Proc. on Predictability (ECMWF, 4-8 September 1995) pp 1-18) in various chaotic regimes in order to address the 'curse of ensemble size' for complex nonlinear systems. Practically, our approach has several desirable features such as extremely high computational efficiency, filter robustness towards variations of ensemble size (we found that the filter is reasonably stable even with a single realization) which makes it feasible for high dimensional problems, and it is independent of any tunable parameters such as the variance inflation coefficient in an ensemble Kalman filter. This radical filtering strategy decouples the problem of filtering a spatially extended nonlinear deterministic system to filtering a Fourier diagonal system of parametrized linear stochastic differential equations (Majda and Grote
Non-Linear Slosh Damping Model Development and Validation
NASA Technical Reports Server (NTRS)
Yang, H. Q.; West, Jeff
2015-01-01
Propellant tank slosh dynamics are typically represented by a spring-mass-damper mechanical model. This mechanical model is then included in the equation of motion of the entire vehicle for Guidance, Navigation and Control (GN&C) analysis. For a partially-filled smooth wall propellant tank, the critical damping based on classical empirical correlation is as low as 0.05%. Due to this low value of damping, propellant slosh is a potential source of disturbance critical to the stability of launch and space vehicles. It is postulated that the commonly quoted slosh damping is valid only in the linear regime where the slosh amplitude is small. With the increase of slosh amplitude, the critical damping value should also increase. If this nonlinearity can be verified and validated, the slosh stability margin can be significantly improved, and the level of conservatism maintained in the GN&C analysis can be lessened. The purpose of this study is to explore and to quantify the dependence of slosh damping on slosh amplitude. Accurately predicting the extremely low damping value of a smooth wall tank is very challenging for any Computational Fluid Dynamics (CFD) tool. One must resolve thin boundary layers near the wall and limit numerical damping to a minimum. This computational study demonstrates that with proper grid resolution, CFD can indeed accurately predict the low damping physics from smooth walls in the linear regime. Comparisons of extracted damping values with experimental data for different tank sizes show very good agreement. Numerical simulations confirm that slosh damping is indeed a function of slosh amplitude. When slosh amplitude is low, the damping ratio is essentially constant, which is consistent with the empirical correlation. Once the amplitude reaches a critical value, the damping ratio becomes a linearly increasing function of the slosh amplitude. A follow-on experiment validated the developed nonlinear damping relationship. This discovery can
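The reported amplitude dependence suggests a simple piecewise damping law: constant at the empirical 0.05% in the linear regime, then linearly increasing past a critical amplitude. The critical amplitude and slope below are placeholders, not values fitted in the study:

```python
def slosh_damping(amplitude, zeta0=0.0005, a_crit=0.1, slope=0.004):
    """Damping ratio as a function of nondimensional slosh amplitude.

    zeta0 = 0.05% matches the classical smooth-wall empirical value quoted
    in the abstract; a_crit and slope are illustrative placeholders.
    """
    if amplitude <= a_crit:
        return zeta0                       # linear regime: constant damping
    return zeta0 + slope * (amplitude - a_crit)   # linear growth beyond a_crit

low = slosh_damping(0.05)    # linear regime: stays at the empirical 0.05%
high = slosh_damping(0.3)    # beyond the critical amplitude: larger damping
```

A GN&C analysis using such a relationship, once validated, could claim a larger stability margin at large slosh amplitudes than the constant 0.05% assumption allows.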
Linear mixed effects models under inequality constraints with applications.
Farnan, Laura; Ivanova, Anastasia; Peddada, Shyamal D
2014-01-01
Constraints arise naturally in many scientific experiments/studies, such as in epidemiology, biology, and toxicology, and researchers often ignore such information when analyzing their data and use standard methods such as the analysis of variance (ANOVA). Such methods may not only result in a loss of power and efficiency in the costs of experimentation but may also result in poor interpretation of the data. In this paper we discuss constrained statistical inference in the context of linear mixed effects models that arise naturally in many applications, such as repeated measurements designs, familial studies and others. We introduce a novel methodology that is broadly applicable for a variety of constraints on the parameters. Since in many applications sample sizes are small and/or the data are not necessarily normally distributed, and furthermore error variances need not be homoscedastic (i.e., heterogeneity in the data), we use an empirical best linear unbiased predictor (EBLUP) type residual based bootstrap methodology for deriving critical values of the proposed test. Our simulation studies suggest that the proposed procedure maintains the desired nominal Type I error while competing well with other tests in terms of power. We illustrate the proposed methodology by re-analyzing clinical trial data on blood mercury level. The methodology introduced in this paper can be easily extended to other settings such as nonlinear and generalized regression models.
Acoustic FMRI noise: linear time-invariant system model.
Rizzo Sierra, Carlos V; Versluis, Maarten J; Hoogduin, Johannes M; Duifhuis, Hendrikus Diek
2008-09-01
Functional magnetic resonance imaging (fMRI) enables sites of brain activation to be localized in human subjects. For auditory system studies, however, the acoustic noise generated by the scanner tends to interfere with the assessments of this activation. Understanding and modeling fMRI acoustic noise is a useful step toward its reduction. To study the acoustic noise, the MR scanner is modeled as a linear electroacoustical system generating sound pressure signals proportional to the time derivative of the input gradient currents. The transfer function of one MR scanner is determined for two different input specifications: 1) by using the gradient waveform calculated by the scanner software and 2) by using a recording of the gradient current. Up to 4 kHz, the first method is shown to be as reliable as the second one, and its use is encouraged when direct measurements of gradient currents are not possible. Additionally, the linear order and average damping properties of the gradient coil system are determined by impulse response analysis. Since fMRI is often based on echo planar imaging (EPI) sequences, a useful validation of the transfer function's prediction ability can be obtained by calculating the acoustic output for the EPI sequence. We found a predicted sound pressure level (SPL) for the EPI sequence of 104 dB SPL compared to a measured value of 102 dB SPL. As yet, the predicted EPI pressure waveform shows similarity as well as some differences with the directly measured EPI pressure waveform.
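The modeling idea can be sketched under stated assumptions: differentiate the gradient current, then pass the derivative through an LTI transfer function. The second-order resonance used here is a stand-in for the measured scanner response, and all frequencies and amplitudes are illustrative:

```python
import numpy as np
from scipy.signal import lti, lsim

fs = 20_000.0                        # sample rate (Hz), an assumption
t = np.arange(0, 0.05, 1 / fs)
# Ramped 500 Hz gradient current, a stand-in for a real gradient waveform
i_grad = np.sin(2 * np.pi * 500 * t) * np.clip(t / 0.01, 0, 1)

# Sound pressure is driven by dI/dt, then shaped by the scanner's
# electroacoustic response, modeled here as one damped resonance at 1 kHz.
dI = np.gradient(i_grad, t)
w0, zeta = 2 * np.pi * 1000, 0.05
H = lti([w0 ** 2], [1, 2 * zeta * w0, w0 ** 2])   # unit-DC-gain resonator
_, p, _ = lsim(H, dI, t)             # predicted pressure waveform (arbitrary units)
```

With the real measured transfer function in place of the toy resonator, the same pipeline yields the EPI pressure prediction that the abstract validates against the 102 dB SPL measurement.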
Linear versus quadratic portfolio optimization model with transaction cost
NASA Astrophysics Data System (ADS)
Razak, Norhidayah Bt Ab; Kamil, Karmila Hanim; Elias, Siti Masitah
2014-06-01
Optimization models have become decision making tools in investment. Hence, it is always a big challenge for investors to select the best model to fulfill their goals in investment with respect to risk and return. In this paper we aim to discuss and compare the portfolio allocation and performance generated by quadratic and linear portfolio optimization models, namely the Markowitz and Maximin models respectively. The application of these models has been proven to be significant and popular among others. However, transaction cost has been debated as one of the important aspects that should be considered in portfolio reallocation, as portfolio return can be significantly reduced when transaction cost is taken into consideration. Therefore, recognizing the importance of transaction cost when calculating portfolio return, we formulate this paper using data from Shariah compliant securities listed in Bursa Malaysia. It is expected that results from this paper will effectively justify the advantage of one model over the other and shed some light in the quest to find the best decision making tool in investment for individual investors.
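For concreteness, the linear (Maximin) model can be written as an LP: maximize the worst-case portfolio return over historical periods subject to fully invested, long-only weights. The scenario matrix below is synthetic, standing in for the Bursa Malaysia data, and transaction costs are omitted for brevity:

```python
import numpy as np
from scipy.optimize import linprog

# Scenario returns: rows = historical periods, columns = assets (synthetic
# stand-in for returns of Shariah compliant securities).
rng = np.random.default_rng(2)
R = rng.normal(0.01, 0.05, size=(60, 4))
T, n = R.shape

# Maximin LP: maximize t  s.t.  R @ w >= t (every period), sum(w) = 1, w >= 0.
# Variables x = [w_1..w_n, t]; linprog minimizes, so the objective is -t.
c = np.append(np.zeros(n), -1.0)
A_ub = np.hstack([-R, np.ones((T, 1))])   # t - R_i @ w <= 0 for each period i
b_ub = np.zeros(T)
A_eq = np.append(np.ones(n), 0.0).reshape(1, -1)   # fully invested
b_eq = [1.0]
bounds = [(0, None)] * n + [(None, None)]          # long-only weights, free t
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
weights, worst_return = res.x[:n], res.x[-1]
```

Proportional transaction costs can be folded in by subtracting a cost term linear in the weight changes, which keeps the problem an LP; the Markowitz counterpart replaces the objective with a quadratic variance term.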
Linear dynamic models for classification of single-trial EEG.
Samdin, S Balqis; Ting, Chee-Ming; Salleh, Sh-Hussain; Ariff, A K; Mohd Noor, A B
2013-01-01
This paper investigates the use of linear dynamic models (LDMs) to improve classification of single-trial EEG signals. Existing dynamic classification of EEG uses discrete-state hidden Markov models (HMMs) based on a piecewise-stationary assumption, which is inadequate for modeling the highly non-stationary dynamics underlying EEG. The continuous hidden states of LDMs could better describe this continuously changing characteristic of EEG, and thus improve the classification performance. We consider two examples of LDM: a simple local level model (LLM) and a time-varying autoregressive (TVAR) state-space model. AR parameters and band power are used as features. Parameter estimation of the LDMs is performed by using the expectation-maximization (EM) algorithm. We also investigate different covariance modeling of Gaussian noises in LDMs for EEG classification. The experimental results on two-class motor-imagery classification show that both types of LDMs outperform the HMM baseline, with the best relative accuracy improvement of 14.8% achieved by the LLM with full covariance for the Gaussian noises. This may be because LDMs offer more flexibility in fitting the underlying dynamics of EEG.
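The simpler of the two LDMs above, the local level model (x_t = x_{t-1} + w_t, y_t = x_t + v_t), can be filtered with a scalar Kalman filter. The noise variances and the synthetic drifting signal below are illustrative assumptions, not EEG parameters or the paper's EM estimates:

```python
import random

random.seed(0)

def kalman_local_level(ys, q=0.01, r=1.0):
    """Filtered state means for the local level model (process var q, obs var r)."""
    x, p = ys[0], 1.0              # initial state estimate and variance
    out = []
    for y in ys:
        p += q                     # predict: state variance grows by q
        k = p / (p + r)            # Kalman gain
        x += k * (y - x)           # update with the innovation
        p *= (1 - k)
        out.append(x)
    return out

# Noisy observations of a slowly drifting level.
truth = [0.001 * t for t in range(500)]
obs = [m + random.gauss(0, 1.0) for m in truth]
filtered = kalman_local_level(obs)
```

The filtered trajectory tracks the drifting level far more closely than the raw observations, which is the continuous-state smoothing behavior the paper exploits for classification.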
Some generalisations of linear-graph modelling for dynamic systems
NASA Astrophysics Data System (ADS)
de Silva, Clarence W.; Pourazadi, Shahram
2013-11-01
Proper modelling of a dynamic system can benefit analysis, simulation, design, evaluation and control of the system. The linear-graph (LG) approach is suitable for modelling lumped-parameter dynamic systems. By using the concepts of graph trees, it provides a graphical representation of the system, with a direct correspondence to the physical component topology. This paper systematically extends the application of LGs to multi-domain (mixed-domain or multi-physics) dynamic systems by presenting a unified way to represent different domains - mechanical, electrical, thermal and fluid. Preservation of the structural correspondence across domains is a particular advantage of LGs when modelling mixed-domain systems. The generalisation of Thevenin and Norton equivalent circuits to mixed-domain systems, using LGs, is presented. The structure of an LG model may follow a specific pattern. Vector LGs are introduced to take advantage of such patterns, giving a general LG representation for them. Through these vector LGs, the model representation becomes simpler and rather compact, both topologically and parametrically. A new single LG element is defined to facilitate the modelling of distributed-parameter (DP) systems. Examples are presented using multi-domain systems (a motion-control system and a flow-controlled pump), a multi-body mechanical system (robot manipulator) and DP systems (structural rods) to illustrate the application and advantages of the methodologies developed in the paper.
Linear mixing model applied to coarse resolution satellite data
NASA Technical Reports Server (NTRS)
Holben, Brent N.; Shimabukuro, Yosio E.
1992-01-01
A linear mixing model typically applied to high resolution data such as Airborne Visible/Infrared Imaging Spectrometer, Thematic Mapper, and Multispectral Scanner System is applied to the NOAA Advanced Very High Resolution Radiometer coarse resolution satellite data. The reflective portion extracted from the middle IR channel 3 (3.55 - 3.93 microns) is used with channels 1 (0.58 - 0.68 microns) and 2 (0.725 - 1.1 microns) to run the Constrained Least Squares model to generate fraction images for an area in the west central region of Brazil. The derived fraction images are compared with an unsupervised classification and the fraction images derived from Landsat TM data acquired on the same day. In addition, the relationship between these fraction images and the well known NDVI images is presented. The results show the great potential of the unmixing techniques when applied to coarse resolution data for global studies.
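The linear mixing idea reduces to constrained least squares. For the two-endmember case with fractions constrained to sum to one, the solution is closed-form; the channel signatures below are invented for illustration (the paper uses vegetation, soil, and shade endmembers on AVHRR channels):

```python
def unmix_two(pixel, em1, em2):
    """Least-squares fraction f of em1, with f + (1 - f) = 1 and 0 <= f <= 1."""
    num = sum((p - b) * (a - b) for p, a, b in zip(pixel, em1, em2))
    den = sum((a - b) ** 2 for a, b in zip(em1, em2))
    f = num / den
    return min(1.0, max(0.0, f))   # clip to the feasible simplex

veg  = [0.05, 0.45, 0.30]          # illustrative 3-channel signatures
soil = [0.20, 0.25, 0.35]

# A pixel constructed as 70% vegetation, 30% soil.
pixel = [0.7 * v + 0.3 * s for v, s in zip(veg, soil)]
f_veg = unmix_two(pixel, veg, soil)
```

On noise-free data the known fraction is recovered exactly; with real radiometer data the residual of the fit indicates how well the linear mixture assumption holds for that pixel.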
Probabilistic model of ligaments and tendons: Quasistatic linear stretching
NASA Astrophysics Data System (ADS)
Bontempi, M.
2009-03-01
Ligaments and tendons have a significant role in the musculoskeletal system and are frequently subjected to injury. This study presents a model of collagen fibers, based on the study of a statistical distribution of fibers when they are subjected to quasistatic linear stretching. With respect to other methodologies, this model is able to describe the behavior of the bundle using less ad hoc hypotheses and is able to describe all the quasistatic stretch-load responses of the bundle, including the yield and failure regions described in the literature. It has two other important results: the first is that it is able to correlate the mechanical behavior of the bundle with its internal structure, and it suggests a methodology to deduce the fibers population distribution directly from the tensile-test data. The second is that it can follow fibers’ structure evolution during the stretching and it is possible to study the internal adaptation of fibers in physiological and pathological conditions.
Pointwise Description for the Linearized Fokker-Planck-Boltzmann Model
NASA Astrophysics Data System (ADS)
Wu, Kung-Chien
2015-09-01
In this paper, we study the pointwise (in the space variable) behavior of the linearized Fokker-Planck-Boltzmann model for nonsmooth initial perturbations. The result reveals both the fluid and kinetic aspects of this model. The fluid-like waves are constructed as the long-wave expansion in the spectrum of the Fourier modes for the space variable, and they have polynomial time decay rate. We design a Picard-type iteration for constructing the increasingly regular kinetic-like waves, which are carried by the transport equations and have exponential time decay rate. The Mixture Lemma plays an important role in constructing the kinetic-like waves; this lemma was originally introduced by Liu-Yu (Commun Pure Appl Math 57:1543-1608, 2004) for the Boltzmann equation, but the Fokker-Planck term in this paper creates some technical difficulties.
Relating Cohesive Zone Model to Linear Elastic Fracture Mechanics
NASA Technical Reports Server (NTRS)
Wang, John T.
2010-01-01
The conditions required for a cohesive zone model (CZM) to predict a failure load of a cracked structure similar to that obtained by a linear elastic fracture mechanics (LEFM) analysis are investigated in this paper. This study clarifies why many different phenomenological cohesive laws can produce similar fracture predictions. Analytical results for five cohesive zone models are obtained, using five different cohesive laws that have the same cohesive work rate (CWR, the area under the traction-separation curve) but different maximum tractions. The effect of the maximum traction on the predicted cohesive zone length and the remote applied load at fracture is presented. Similar to the small-scale yielding condition required for an LEFM analysis to be valid, the cohesive zone length also needs to be much smaller than the crack length. This is a necessary condition for a CZM to obtain a fracture prediction equivalent to an LEFM result.
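The dependence of cohesive zone length on maximum traction can be illustrated with the classical strip-yield (Dugdale) estimate, l = (pi/8)(K/sigma_max)^2, which stands in here for the paper's five cohesive laws; the crack length and stress intensity values are assumed:

```python
import math

def dugdale_zone_length(K_applied, sigma_max):
    """Strip-yield (Dugdale) estimate of the cohesive zone length."""
    return (math.pi / 8.0) * (K_applied / sigma_max) ** 2

a = 0.05                    # crack half-length, m (assumed)
K = 30e6                    # applied stress intensity, Pa*sqrt(m) (assumed)
# Doubling the maximum traction quarters the zone length.
lengths = [dugdale_zone_length(K, s) for s in (200e6, 400e6, 800e6)]
```

The quadratic shrinkage with sigma_max shows why, for a high enough maximum traction, the zone becomes much smaller than the crack length and the CZM prediction approaches the LEFM result.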
Robust cross-validation of linear regression QSAR models.
Konovalov, Dmitry A; Llewellyn, Lyndon E; Vander Heyden, Yvan; Coomans, Danny
2008-10-01
A quantitative structure-activity relationship (QSAR) model is typically developed to predict the biochemical activity of untested compounds from the compounds' molecular structures. The "gold standard" of model validation is blindfold prediction, in which the model's predictive power is assessed by how well the model predicts the activity values of compounds that were not considered in any way during the model development/calibration. However, during the development of a QSAR model, it is necessary to obtain some indication of the model's predictive power. This is often done by some form of cross-validation (CV). In this study, the concepts of the predictive power and fitting ability of a multiple linear regression (MLR) QSAR model were examined in the CV context allowing for the presence of outliers. Commonly used predictive power and fitting ability statistics were assessed via Monte Carlo cross-validation when applied to percent human intestinal absorption, blood-brain partition coefficient, and toxicity values of saxitoxin QSAR data sets, as well as three benchmark data sets with known outlier contamination. It was found that (1) a robust version of MLR should always be preferred over the ordinary-least-squares MLR, regardless of the degree of outlier contamination, and that (2) the model's predictive power should only be assessed via robust statistics. The Matlab and Java source code used in this study is freely available from the QSAR-BENCH section of www.dmitrykonovalov.org for academic use. The Web site also contains the Java-based QSAR-BENCH program, which can be run online via Java's Web Start technology (supporting Windows, Mac OSX, Linux/Unix) to reproduce most of the reported results or apply the reported procedures to other data sets.
Electroweak corrections and unitarity in linear moose models
Chivukula, R. Sekhar; Simmons, Elizabeth H.; He, H.-J.; Kurachi, Masafumi; Tanabashi, Masaharu
2005-02-01
We calculate the form of the corrections to the electroweak interactions in the class of Higgsless models which can be deconstructed to a chain of SU(2) gauge groups adjacent to a chain of U(1) gauge groups, and with the fermions coupled to any single SU(2) group and to any single U(1) group along the chain. The primary advantage of our technique is that the size of corrections to electroweak processes can be directly related to the spectrum of vector bosons ('KK modes'). In Higgsless models, this spectrum is constrained by unitarity. Our methods also allow for arbitrary background 5D geometry, spatially dependent gauge-couplings, and brane kinetic energy terms. We find that, due to the size of corrections to electroweak processes in any unitary theory, Higgsless models with localized fermions are disfavored by precision electroweak data. Although we stress our results as they apply to continuum Higgsless 5D models, they apply to any linear moose model including those with only a few extra vector bosons. Our calculations of electroweak corrections also apply directly to the electroweak gauge sector of 5D theories with a bulk scalar Higgs boson; the constraints arising from unitarity do not apply in this case.
Gradient-Stable Linear Time Steps for Phase Field Models
NASA Astrophysics Data System (ADS)
Vollmayr-Lee, Benjamin
2013-03-01
Phase field models, which are nonlinear partial differential equations, are widely used for modeling the dynamics and equilibrium properties of materials. Unfortunately, time marching the equations of motion by explicit methods is usually numerically unstable unless the size of the time step is kept below a lattice-dependent threshold. Consequently, the amount of numerical computation is determined by avoidance of the instability rather than by the natural time scale of the dynamics. This can be a severe overhead. In contrast, a gradient-stable method ensures a decreasing free energy, consistent with the relaxational dynamics of the continuous-time model. Eyre's theorem proved that gradient-stable schemes are possible, and Eyre presented a framework for constructing gradient-stable, semi-implicit time steps for a given phase-field model. Here I present a new theorem that provides a broader class of gradient-stable steps, in particular ones in which the implicit part of the equation is linear. This enables use of fast Fourier transforms to solve for the updated field, providing a considerable advantage in speed and simplicity. Examples will be presented for the Allen-Cahn and Cahn-Hilliard equations, an Ehrlich-Schwoebel-type interface growth model, and block copolymers.
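A linearly stabilized, gradient-stable step can be illustrated in zero dimensions with the gradient flow du/dt = -(u^3 - u) of the double-well energy F(u) = (u^2 - 1)^2 / 4. The explicit part carries the nonlinearity and the implicit part is linear, so each update is a single division; in the full PDE setting that division becomes a per-Fourier-mode solve. The stabilization constant C = 3 is an assumed value, not one from the abstract:

```python
def energy(u):
    """Double-well free energy F(u) = (u^2 - 1)^2 / 4."""
    return (u * u - 1.0) ** 2 / 4.0

def stabilized_step(u, dt, C=3.0):
    """Semi-implicit step: nonlinearity explicit, linear stabilizer implicit."""
    return (u * (1.0 + C * dt) - dt * (u ** 3 - u)) / (1.0 + C * dt)

def evolve(u0, dt, steps=200):
    us = [u0]
    for _ in range(steps):
        us.append(stabilized_step(us[-1], dt))
    return us

# A time step far beyond any explicit stability limit.
trajectory = evolve(0.1, dt=1000.0)
```

Even at dt = 1000 the energy decreases monotonically and the field relaxes to the well at u = 1, which is exactly the gradient-stability property (free energy nonincreasing for any step size) that the talk's theorem guarantees for the linear-implicit class.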
Subthreshold linear modeling of dendritic trees: a computational approach.
Khodaei, Alireza; Pierobon, Massimiliano
2016-08-01
The design of communication systems based on the transmission of information through neurons is envisioned as a key technology for the pervasive interconnection of future wearable and implantable devices. While previous literature has mainly focused on modeling propagation of electrochemical spikes carrying natural information through the nervous system, in recent work the authors of this paper proposed so-called subthreshold electrical stimulation as a viable technique to propagate artificial information through neurons. This technique promises to limit the interference with natural communication processes, and it can be successfully approximated with linear models. In this paper, a novel model is proposed to account for subthreshold stimuli propagation from the dendritic tree to the soma of a neuron. A computational approach is detailed to obtain this model for a given realistic 3D dendritic tree with an arbitrary morphology. Numerical results from the model are obtained over a stimulation signal bandwidth of 1 kHz, and compared with the results of a simulation through the NEURON software.
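The linearity of subthreshold propagation is what makes such models tractable. As a minimal stand-in for the paper's dendritic-tree model, steady-state passive cable theory gives exponential attenuation with distance, V(x) = V0 * exp(-x/lambda); the length constant and distances below are assumed, not taken from the paper:

```python
import math

def steady_attenuation(x, lam):
    """Steady-state subthreshold attenuation over distance x on an infinite passive cable."""
    return math.exp(-x / lam)

lam = 0.5e-3                                   # electrotonic length constant, m (assumed)
# A 10 mV subthreshold stimulus applied 1 mm from the soma.
v_soma = 10e-3 * steady_attenuation(1.0e-3, lam)
```

Because the system is linear, responses to stimuli at different dendritic sites simply superpose at the soma, which is the property the paper's computational approach exploits for arbitrary morphologies.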
Linear-Nonlinear-Poisson Models of Primate Choice Dynamics
Corrado, Greg S; Sugrue, Leo P; Sebastian Seung, H; Newsome, William T
2005-01-01
The equilibrium phenomenon of matching behavior traditionally has been studied in stationary environments. Here we attempt to uncover the local mechanism of choice that gives rise to matching by studying behavior in a highly dynamic foraging environment. In our experiments, 2 rhesus monkeys (Macaca mulatta) foraged for juice rewards by making eye movements to one of two colored icons presented on a computer monitor, each rewarded on dynamic variable-interval schedules. Using a generalization of Wiener kernel analysis, we recover a compact mechanistic description of the impact of past reward on future choice in the form of a Linear-Nonlinear-Poisson model. We validate this model through rigorous predictive and generative testing. Compared to our earlier work with this same data set, this model proves to be a better description of choice behavior and is more tightly correlated with putative neural value signals. Refinements over previous models include hyperbolic (as opposed to exponential) temporal discounting of past rewards, and differential (as opposed to fractional) comparisons of option value. Through numerical simulation we find that within this class of strategies, the model parameters employed by animals are very close to those that maximize reward harvesting efficiency. PMID:16596981
On the unnecessary ubiquity of hierarchical linear modeling.
McNeish, Daniel; Stapleton, Laura M; Silverman, Rebecca D
2017-03-01
In psychology and the behavioral sciences generally, the use of the hierarchical linear model (HLM) and its extensions for discrete outcomes are popular methods for modeling clustered data. HLM and its discrete outcome extensions, however, are certainly not the only methods available to model clustered data. Although other methods exist and are widely implemented in other disciplines, it seems that psychologists have yet to consider these methods in substantive studies. This article compares and contrasts HLM with alternative methods including generalized estimating equations and cluster-robust standard errors. These alternative methods do not model random effects and thus make a smaller number of assumptions and are interpreted identically to single-level methods with the benefit that estimates are adjusted to reflect clustering of observations. Situations where these alternative methods may be advantageous are discussed including research questions where random effects are and are not required, when random effects can change the interpretation of regression coefficients, challenges of modeling with random effects with discrete outcomes, and examples of published psychology articles that use HLM that may have benefitted from using alternative methods. Illustrative examples are provided and discussed to demonstrate the advantages of the alternative methods and also when HLM would be the preferred method.
Modeling Electric Vehicle Benefits Connected to Smart Grids
Stadler, Michael; Marnay, Chris; Mendes, Goncalo; Kloess, Maximillian; Cardoso, Goncalo; Mégel, Olivier; Siddiqui, Afzal
2011-07-01
Connecting electric storage technologies to smart grids will have substantial implications for building energy systems. Local storage will enable demand response. Mobile storage devices in electric vehicles (EVs) are in direct competition with conventional stationary sources at the building. EVs will change the financial as well as environmental attractiveness of on-site generation (e.g., PV or fuel cells). In order to examine the impact of EVs on building energy costs and CO2 emissions in 2020, a distributed-energy-resources adoption problem is formulated as a mixed-integer linear program with minimization of annual building energy costs or CO2 emissions. The mixed-integer linear program is applied to a set of 139 different commercial buildings in California, and example results as well as the aggregated economic and environmental benefits are reported. The research shows that considering second life of EV batteries might be very beneficial for commercial buildings.
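The shape of the adoption problem, binary technology decisions minimizing annualized capital plus residual grid purchases, can be sketched with a toy brute-force search. A real MILP solver replaces the enumeration at scale, and every number below (loads, prices, technology costs and yields) is invented for illustration:

```python
from itertools import product

load = 100.0                                   # annual energy demand, MWh (assumed)
grid_price = 120.0                             # $/MWh (assumed)
techs = {                                      # name: (annualized cost $, MWh supplied)
    "PV": (4000.0, 40.0),
    "fuel_cell": (6000.0, 55.0),
    "EV_storage": (1500.0, 10.0),
}
names = list(techs)

def annual_cost(choice):
    """Total cost for a tuple of 0/1 adoption decisions."""
    capital = sum(techs[n][0] for n, on in zip(names, choice) if on)
    supplied = sum(techs[n][1] for n, on in zip(names, choice) if on)
    residual = max(0.0, load - supplied)       # energy still bought from the grid
    return capital + grid_price * residual

best = min(product((0, 1), repeat=len(names)), key=annual_cost)
```

With these assumed figures the optimum adopts PV and the fuel cell but not EV storage; in the paper's formulation such trade-offs are resolved by the MILP across 139 building load profiles, with CO2 emissions as an alternative objective.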
The linear reservoir model: conceptual or physically based?
NASA Astrophysics Data System (ADS)
Skaugen, Thomas; Lawrence, Deborah
2017-04-01
From a gridded catchment (25 x 25 m), we have investigated the distribution of distances from grid points to the nearest river reach. Based on 130 Norwegian catchments, we find that an exponential distribution fits the empirical distance distributions very well. Such a distribution is very informative regarding how the catchment area is organised with respect to the river network and can be used to easily determine the catchment fractional area as a function of distance from the river network. This is important for runoff dynamics since the travel time of water in the soil is slower than that in the river network by several orders of magnitude. If we consider the fractional areas for each distance interval, the properties of the exponential distance distribution dictate that the ratio between consecutive fractional areas is a constant, κ. Furthermore, if we assume that after a precipitation event, water is propagated through the soils to the river network with a constant celerity/velocity, the ratio between volumes of water drained into the river network at each time step is a constant and equal to κ. A linear reservoir has the same property of consecutive runoff volumes having a constant ratio, and if the velocity/celerity is such that the distance interval between the consecutive areas is the distance travelled by water for each time step, Δt, then the rate constant, θ, of the linear reservoir is a straightforward function of the constant κ, θ = (1-κ)/Δt. The fact that exponential distance distributions are found for so many (actually all we have investigated) Norwegian catchments suggests that rainfall-runoff models based on linear reservoirs can no longer be dismissed as purely conceptual, as they clearly reflect the physical dynamics of the runoff generation processes at the catchment scale.
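The relation derived above can be checked numerically: with exponentially distributed distances (mean Λ) and constant celerity v, the fractional areas drained in successive time steps have the constant ratio κ = exp(-vΔt/Λ), and the equivalent linear reservoir rate is θ = (1-κ)/Δt. The values of Λ, v, and Δt below are assumed for illustration:

```python
import math

Lam, v, dt = 400.0, 0.1, 3600.0        # mean distance (m), celerity (m/s), step (s)
dx = v * dt                            # distance band drained per time step

# Fractional catchment area in each successive distance band of width dx.
areas = [math.exp(-i * dx / Lam) - math.exp(-(i + 1) * dx / Lam)
         for i in range(10)]
ratios = [b / a for a, b in zip(areas, areas[1:])]

kappa = math.exp(-dx / Lam)            # constant ratio of consecutive areas
theta = (1.0 - kappa) / dt             # equivalent linear-reservoir rate constant
```

Every consecutive-area ratio equals κ, so the drained volumes decay geometrically exactly as a linear reservoir's outflow does, which is the paper's argument that the linear reservoir is physically based rather than purely conceptual.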
Linear mixed-effects modeling approach to FMRI group analysis
Chen, Gang; Saad, Ziad S.; Britton, Jennifer C.; Pine, Daniel S.; Cox, Robert W.
2013-01-01
Conventional group analysis is usually performed with Student-type t-test, regression, or standard AN(C)OVA in which the variance–covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or the mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would be otherwise either difficult or unfeasible under traditional frameworks such as AN(C)OVA and general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance–covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that the LME modeling keeps a balance between the control for false positives and the
Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.
Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J
2016-10-03
Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of the femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank - up to 26 mm; this was found to limit the accuracy of all studied scaling methods. Errors for the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models our proposed method is an alternative to linear scaling methods.
Linear regression models for solvent accessibility prediction in proteins.
Wagner, Michael; Adamczak, Rafał; Porollo, Aleksey; Meller, Jarosław
2005-04-01
The relative solvent accessibility (RSA) of an amino acid residue in a protein structure is a real number that represents the solvent exposed surface area of this residue in relative terms. The problem of predicting the RSA from the primary amino acid sequence can therefore be cast as a regression problem. Nevertheless, RSA prediction has so far typically been cast as a classification problem. Consequently, various machine learning techniques have been used within the classification framework to predict whether a given amino acid exceeds some (arbitrary) RSA threshold and would thus be predicted to be "exposed," as opposed to "buried." We have recently developed novel methods for RSA prediction using nonlinear regression techniques which provide accurate estimates of the real-valued RSA and outperform classification-based approaches with respect to commonly used two-class projections. However, while their performance seems to provide a significant improvement over previously published approaches, these Neural Network (NN) based methods are computationally expensive to train and involve several thousand parameters. In this work, we develop alternative regression models for RSA prediction which are computationally much less expensive, involve orders-of-magnitude fewer parameters, and are still competitive in terms of prediction quality. In particular, we investigate several regression models for RSA prediction using linear L1-support vector regression (SVR) approaches as well as standard linear least squares (LS) regression. Using rigorously derived validation sets of protein structures and extensive cross-validation analysis, we compare the performance of the SVR with that of LS regression and NN-based methods. In particular, we show that the flexibility of the SVR (as encoded by metaparameters such as the error insensitivity and the error penalization terms) can be very beneficial to optimize the prediction accuracy for buried residues. We conclude that the simple
Selection between Linear Factor Models and Latent Profile Models Using Conditional Covariances
ERIC Educational Resources Information Center
Halpin, Peter F.; Maraun, Michael D.
2010-01-01
A method for selecting between K-dimensional linear factor models and (K + 1)-class latent profile models is proposed. In particular, it is shown that the conditional covariances of observed variables are constant under factor models but nonlinear functions of the conditioning variable under latent profile models. The performance of a convenient…
Direction of Effects in Multiple Linear Regression Models.
Wiedermann, Wolfgang; von Eye, Alexander
2015-01-01
Previous studies analyzed asymmetric properties of the Pearson correlation coefficient using higher than second order moments. These asymmetric properties can be used to determine the direction of dependence in a linear regression setting (i.e., establish which of two variables is more likely to be on the outcome side) within the framework of cross-sectional observational data. Extant approaches are restricted to the bivariate regression case. The present contribution extends the direction of dependence methodology to a multiple linear regression setting by analyzing distributional properties of residuals of competing multiple regression models. It is shown that, under certain conditions, the third central moments of estimated regression residuals can be used to decide upon direction of effects. In addition, three different approaches for statistical inference are discussed: a combined D'Agostino normality test, a skewness difference test, and a bootstrap difference test. Type I error and power of the procedures are assessed using Monte Carlo simulations, and an empirical example is provided for illustrative purposes. In the discussion, issues concerning the quality of psychological data, possible extensions of the proposed methods to the fourth central moment of regression residuals, and potential applications are addressed.
Feedbacks, climate sensitivity, and the limits of linear models
NASA Astrophysics Data System (ADS)
Rugenstein, M.; Knutti, R.
2015-12-01
The term "feedback" is used ubiquitously in climate research, but implies varied meanings in different contexts. From a specific process that locally affects a quantity, to a formal framework that attempts to determine a global response to a forcing, researchers use this term to separate, simplify, and quantify parts of the complex Earth system. We combine large (>120 member) ensemble GCM and EMIC step forcing simulations over a broad range of forcing levels with a historical and educational perspective to organize existing ideas around feedbacks and linear forcing-feedback models. With a new method overcoming internal variability and initial condition problems, we quantify the non-constancy of the climate feedback parameter. Our results suggest a strong state- and forcing-dependency of feedbacks, which is not considered appropriately in many studies. A non-constant feedback factor likely explains some of the differences in estimates of equilibrium climate sensitivity from different methods and types of data. We discuss implications for the definition of the forcing term and its various adjustments. Clarifying the value and applicability of the linear forcing-feedback framework and a better quantification of feedbacks on various timescales and spatial scales remain high priorities in order to better understand past and predict future changes in the climate system.
Forecasting Groundwater Temperature with Linear Regression Models Using Historical Data.
Figura, Simon; Livingstone, David M; Kipfer, Rolf
2015-01-01
Although temperature is an important determinant of many biogeochemical processes in groundwater, very few studies have attempted to forecast the response of groundwater temperature to future climate warming. Using a composite linear regression model based on the lagged relationship between historical groundwater and regional air temperature data, empirical forecasts were made of groundwater temperature in several aquifers in Switzerland up to the end of the current century. The model was fed with regional air temperature projections calculated for greenhouse-gas emissions scenarios A2, A1B, and RCP3PD. Model evaluation revealed that the approach taken is adequate only when the data used to calibrate the models are sufficiently long and contain sufficient variability. These conditions were satisfied for three aquifers, all fed by riverbank infiltration. The forecasts suggest that with respect to the reference period 1980 to 2009, groundwater temperature in these aquifers will most likely increase by 1.1 to 3.8 K by the end of the current century, depending on the greenhouse-gas emissions scenario employed.
Linear No-Threshold Model VS. Radiation Hormesis
Doss, Mohan
2013-01-01
The atomic bomb survivor cancer mortality data have been used in the past to justify the use of the linear no-threshold (LNT) model for estimating the carcinogenic effects of low dose radiation. An analysis of the recently updated atomic bomb survivor cancer mortality dose-response data shows that the data no longer support the LNT model but are consistent with a radiation hormesis model when a correction is applied for a likely bias in the baseline cancer mortality rate. If the validity of the phenomenon of radiation hormesis is confirmed in prospective human pilot studies, and is applied to the wider population, it could result in a considerable reduction in cancers. The idea of using radiation hormesis to prevent cancers was proposed more than three decades ago, but was never investigated in humans to determine its validity because of the dominance of the LNT model and the consequent carcinogenic concerns regarding low dose radiation. Since cancer continues to be a major health problem and the age-adjusted cancer mortality rates have declined by only ∼10% in the past 45 years, it may be prudent to investigate radiation hormesis as an alternative approach to reduce cancers. Prompt action is urged. PMID:24298226
Variational Bayesian Parameter Estimation Techniques for the General Linear Model
Starke, Ludger; Ostwald, Dirk
2017-01-01
Variational Bayes (VB), variational maximum likelihood (VML), restricted maximum likelihood (ReML), and maximum likelihood (ML) are cornerstone parametric statistical estimation techniques in the analysis of functional neuroimaging data. However, the theoretical underpinnings of these model parameter estimation techniques are rarely covered in introductory statistical texts. Because of the widespread practical use of VB, VML, ReML, and ML in the neuroimaging community, we reasoned that a theoretical treatment of their relationships and their application in a basic modeling scenario may be helpful for both neuroimaging novices and practitioners alike. In this technical study, we thus revisit the conceptual and formal underpinnings of VB, VML, ReML, and ML and provide a detailed account of their mathematical relationships and implementational details. We further apply VB, VML, ReML, and ML to the general linear model (GLM) with non-spherical error covariance as commonly encountered in the first-level analysis of fMRI data. To this end, we explicitly derive the corresponding free energy objective functions and ensuing iterative algorithms. Finally, in the applied part of our study, we evaluate the parameter and model recovery properties of VB, VML, ReML, and ML, first in an exemplary setting and then in the analysis of experimental fMRI data acquired from a single participant under visual stimulation. PMID:28966572
Linear model for fast background subtraction in oligonucleotide microarrays
2009-01-01
Background One important preprocessing step in the analysis of microarray data is background subtraction. In high-density oligonucleotide arrays this is recognized as a crucial step for the global performance of the data analysis from raw intensities to expression values. Results We propose here an algorithm for background estimation based on a model in which the cost function is quadratic in a set of fitting parameters such that minimization can be performed through linear algebra. The model incorporates two effects: 1) Correlated intensities between neighboring features in the chip and 2) sequence-dependent affinities for non-specific hybridization fitted by an extended nearest-neighbor model. Conclusion The algorithm has been tested on 360 GeneChips from publicly available data of recent expression experiments. The algorithm is fast and accurate. Strong correlations between the fitted values for different experiments as well as between the free-energy parameters and their counterparts in aqueous solution indicate that the model captures a significant part of the underlying physical chemistry. PMID:19917117
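The core computational point of this abstract, a cost function quadratic in the fitting parameters so that minimization reduces to linear algebra, can be sketched in a few lines. The design matrix here is hypothetical: in the paper its columns would be built from neighboring-feature intensities and sequence-affinity terms, which we do not reproduce.

```python
import numpy as np

def fit_background_weights(X, y):
    """Minimise the quadratic cost ||X w - y||^2 in closed form via
    least squares.  Columns of X stand in for features such as
    neighbouring-probe intensities and sequence-affinity terms;
    y holds observed background intensities (both hypothetical)."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def predict_background(X, w):
    """Background estimate for each probe from the fitted weights."""
    return X @ w
```

Because the minimizer is closed-form, the whole background fit costs one linear solve per chip, which is consistent with the speed the authors report.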
Gauged linear sigma model and pion-pion scattering
Fariborz, Amir H.; Schechter, Joseph; Shahid, M. Naeem
2009-12-01
A simple gauged linear sigma model with several parameters to take the symmetry breaking and the mass differences between the vector meson and the axial vector meson into account is considered here as a possibly useful 'template' for the role of a light scalar in QCD as well as for (at a different scale) an effective Higgs sector for some recently proposed walking technicolor models. An analytic procedure is first developed for relating the Lagrangian parameters to four well established (in the QCD application) experimental inputs. One simple equation distinguishes three different cases: i. QCD with axial vector particle heavier than vector particle, ii. possible technicolor model with vector particle heavier than the axial vector one, iii. the unphysical QCD case where both the Kawarabayashi-Suzuki-Riazuddin-Fayazuddin and Weinberg relations hold. The model is applied to the s-wave pion-pion scattering in QCD. Both the near threshold region and (with an assumed unitarization) the 'global' region up to about 800 MeV are considered. It is noted that there is a little tension between the choice of 'bare' sigma mass parameter for describing these two regions. If a reasonable 'global' fit is made, there is some loss of precision in the near threshold region.
A linear geospatial streamflow modeling system for data sparse environments
Asante, Kwabena O.; Arlan, Guleid A.; Pervez, Md Shahriar; Rowland, James
2008-01-01
In many river basins around the world, inaccessibility of flow data is a major obstacle to water resource studies and operational monitoring. This paper describes a geospatial streamflow modeling system which is parameterized with global terrain, soils and land cover data and run operationally with satellite‐derived precipitation and evapotranspiration datasets. Simple linear methods transfer water through the subsurface, overland and river flow phases, and the resulting flows are expressed in terms of standard deviations from mean annual flow. In sample applications, the modeling system was used to simulate flow variations in the Congo, Niger, Nile, Zambezi, Orange and Lake Chad basins between 1998 and 2005, and the resulting flows were compared with mean monthly values from the open‐access Global River Discharge Database. While the uncalibrated model cannot predict the absolute magnitude of flow, it can quantify flow anomalies in terms of relative departures from mean flow. Most of the severe flood events identified in the flow anomalies were independently verified by the Dartmouth Flood Observatory (DFO) and the Emergency Disaster Database (EM‐DAT). Despite its limitations, the modeling system is valuable for rapid characterization of the relative magnitude of flood hazards and seasonal flow changes in data sparse settings.
Identifying multiple change points in a linear mixed effects model.
Lai, Yinglei; Albert, Paul S
2014-03-15
Although change-point analysis methods for longitudinal data have been developed, it is often of interest to detect multiple change points in longitudinal data. In this paper, we propose a linear mixed effects modeling framework for identifying multiple change points in longitudinal Gaussian data. Specifically, we develop a novel statistical and computational framework that integrates the expectation-maximization and the dynamic programming algorithms. We conduct a comprehensive simulation study to demonstrate the performance of our method. We illustrate our method with an analysis of data from a trial evaluating a behavioral intervention for the control of type I diabetes in adolescents with HbA1c as the longitudinal response variable. Copyright © 2013 John Wiley & Sons, Ltd.
Optimization in generalized linear models: A case study
NASA Astrophysics Data System (ADS)
Silva, Eliana Costa e.; Correia, Aldina; Lopes, Isabel Cristina
2016-06-01
The maximum likelihood method is usually chosen to estimate the regression parameters of Generalized Linear Models (GLM) and also for hypothesis testing and goodness-of-fit tests. The classical method for estimating GLM parameters is Fisher scoring. In this work we propose to compute the estimates of the parameters with two alternative methods: a derivative-based optimization method, namely the BFGS method, which is one of the most popular quasi-Newton algorithms, and the PSwarm derivative-free optimization method, which combines features of a pattern search optimization method with a global Particle Swarm scheme. As a case study we use a dataset of biological parameters (phytoplankton) and chemical and environmental parameters of the water column of a Portuguese reservoir. The results show that, for this dataset, the BFGS and PSwarm methods provided a better fit than the Fisher scoring method and can be good alternatives for finding the estimates of the parameters of a GLM.
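The derivative-based alternative is easy to illustrate: hand the negative log-likelihood of a GLM directly to a quasi-Newton optimizer. The sketch below uses a Poisson GLM with log link and SciPy's BFGS implementation as stand-ins; the paper's actual phytoplankton dataset and model family are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def fit_poisson_glm(X, y):
    """Fit a Poisson GLM with log link by minimising the negative
    log-likelihood with BFGS, supplying the analytic gradient.
    (Constant terms log(y!) are dropped from the objective.)"""
    def nll(beta):
        eta = X @ beta
        return np.sum(np.exp(eta) - y * eta)

    def grad(beta):
        mu = np.exp(X @ beta)
        return X.T @ (mu - y)

    res = minimize(nll, np.zeros(X.shape[1]), jac=grad, method="BFGS")
    return res.x
```

Fisher scoring would instead iterate weighted least-squares steps; the point of the comparison in the paper is that a general-purpose optimizer applied to the same likelihood can match or improve the fit.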
The linear Ising model and its analytic continuation, random walk
NASA Astrophysics Data System (ADS)
Lavenda, B. H.
2004-02-01
A generalization of Gauss's principle is used to derive the error laws corresponding to Types II and VII distributions in Pearson's classification scheme. Student's r-p.d.f. (Type II) governs the distribution of the internal energy of a uniform, linear chain, Ising model, while the analytic continuation of the uniform exchange energy converts it into a Student t-density (Type VII) for the position of a random walk in a single spatial dimension. Higher-dimensional spaces, corresponding to larger degrees of freedom and generalizations to multidimensional Student r- and t-densities, are obtained by considering independent and identically random variables, having rotationally invariant densities, whose entropies are additive and generating functions are multiplicative.
A Linear City Model with Asymmetric Consumer Distribution
Azar, Ofer H.
2015-01-01
The article analyzes a linear-city model where the consumer distribution can be asymmetric, which is important because in real markets this distribution is often asymmetric. The model yields equilibrium price differences, even though the firms’ costs are equal and their locations are symmetric (at the two endpoints of the city). The equilibrium price difference is proportional to the transportation cost parameter and does not depend on the good's cost. The firms' markups are also proportional to the transportation cost. The two firms’ prices will be equal in equilibrium if and only if half of the consumers are located to the left of the city’s midpoint, even if other characteristics of the consumer distribution are highly asymmetric. An extension analyzes what happens when the firms have different costs and how the two sources of asymmetry – the consumer distribution and the cost per unit – interact together. The model can be useful as a tool for further development by other researchers interested in applying this simple yet flexible framework for the analysis of various topics. PMID:26034984
Preconditioning the bidomain model with almost linear complexity
NASA Astrophysics Data System (ADS)
Pierre, Charles
2012-01-01
The bidomain model is widely used in electro-cardiology to simulate spreading of excitation in the myocardium and electrocardiograms. It consists of a system of two parabolic reaction diffusion equations coupled with an ODE system. Its discretisation displays an ill-conditioned system matrix to be inverted at each time step: simulations based on the bidomain model therefore are associated with high computational costs. In this paper we propose a preconditioning for the bidomain model either for an isolated heart or in an extended framework including a coupling with the surrounding tissues (the torso). The preconditioning is based on a formulation of the discrete problem that is shown to be symmetric positive semi-definite. A block LU decomposition of the system together with a heuristic approximation (referred to as the monodomain approximation) are the key ingredients for the preconditioning definition. Numerical results are provided for two test cases: a 2D test case on a realistic slice of the thorax based on a segmented heart medical image geometry, a 3D test case involving a small cubic slab of tissue with orthotropic anisotropy. The analysis of the resulting computational cost (both in terms of CPU time and of iteration number) shows an almost linear complexity with the problem size, i.e. of type n log^α(n) (for some constant α), which is the optimal complexity for such problems.
Simulating annual glacier flow with a linear reservoir model
NASA Astrophysics Data System (ADS)
Span, Norbert; Kuhn, Michael
2003-05-01
In this paper we present a numerical simulation of the observation that most alpine glaciers have reached peak velocities in the early 1980s followed by nearly exponential decay of velocity in the subsequent decade. We propose that similarity exists between precipitation and associated runoff hydrograph in a river basin on one side and annual mean specific mass balance of the accumulation area of alpine glaciers and ensuing changes in ice flow on the other side. The similarity is expressed in terms of a linear reservoir with fluctuating input where the year to year change of ice velocity is governed by two terms, a fraction of the velocity of the previous year as a recession term and the mean specific balance of the accumulation area of the current year as a driving term. The coefficients of these terms directly relate to the timescale, the mass balance/altitude profile, and the geometric scale of the glacier. The model is well supported by observations in the upper part of the glacier where surface elevation stays constant to within ±5 m over a 30 year period. There is no temporal trend in the agreement between observed and modeled horizontal velocities and no difference between phases of acceleration and phases of deceleration, which means that the model is generally valid for a given altitude on a given glacier.
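The two-term recurrence described above, a recession term plus a driving term, can be written in a few lines. The coefficient values below are purely illustrative: in the paper they are tied to the glacier's timescale, mass balance/altitude profile, and geometry.

```python
import numpy as np

def simulate_velocity(v0, balance, k=0.8, c=0.05):
    """Year-to-year ice velocity as a linear reservoir:
    v[t+1] = k * v[t] + c * b[t], where b[t] is the annual mean
    specific balance of the accumulation area.  The values of the
    recession coefficient k and driving coefficient c are assumptions."""
    v = [float(v0)]
    for b in balance:
        v.append(k * v[-1] + c * b)
    return np.array(v)
```

As with any linear reservoir, a sustained drop in the balance forcing produces the nearly exponential decay of velocity that the authors observed in the decade after the early-1980s peak: the memory of the initial velocity fades like k^t toward a new steady state c·b/(1 - k).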
Optimal CH-47 AND C-130 Workload Balance
2011-03-01
Defining the scenario and network is the first step toward developing a mixed integer linear program in the LINGO® software environment, in which the LINGO-based model is developed.
Comparison of Linear and Non-Linear Regression Models to Estimate Leaf Area Index of Dryland Shrubs.
NASA Astrophysics Data System (ADS)
Dashti, H.; Glenn, N. F.; Ilangakoon, N. T.; Mitchell, J.; Dhakal, S.; Spaete, L.
2015-12-01
Leaf area index (LAI) is a key parameter in global ecosystem studies. LAI is considered a forcing variable in land surface process models since ecosystem dynamics are highly correlated with LAI. In response to environmental limitations, plants in semiarid ecosystems have smaller leaf area, making accurate estimation of LAI by remote sensing a challenging issue. Optical remote sensing (400-2500 nm) techniques to estimate LAI are based either on radiative transfer models (RTMs) or statistical approaches. Considering the complex radiation field of dry ecosystems, simple 1-D RTMs lead to poor results, and on the other hand, inversion of more complex 3-D RTMs is a demanding task which requires the specification of many variables. A good alternative to physical approaches is to use methods based on statistics. As with many natural phenomena, there is a non-linear relationship between LAI and the top-of-canopy electromagnetic waves reflected to optical sensors. Non-linear regression models can better capture this relationship. However, considering the problem of a small number of observations relative to the size of the feature space (n
models will not necessarily outperform simpler linear models. In this study linear versus non-linear regression techniques were investigated to estimate LAI. Our study area is located in southwestern Idaho, Great Basin. Sagebrush (Artemisia tridentata spp) serves a critical role in maintaining the structure of this ecosystem. Using a leaf area meter (Accupar LP-80), LAI values were measured in the field. Linear Partial Least Square (PLS) regression and non-linear, tree-based Random Forest regression have been implemented to estimate the LAI of sagebrush from hyperspectral data (AVIRIS-ng) collected in late summer 2014. Cross validation of results indicates that PLS can provide comparable results to Random Forest.
Modeling containment of large wildfires using generalized linear mixed-model analysis
Mark Finney; Isaac C. Grenfell; Charles W. McHugh
2009-01-01
Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...
On the Relation between the Linear Factor Model and the Latent Profile Model
ERIC Educational Resources Information Center
Halpin, Peter F.; Dolan, Conor V.; Grasman, Raoul P. P. P.; De Boeck, Paul
2011-01-01
The relationship between linear factor models and latent profile models is addressed within the context of maximum likelihood estimation based on the joint distribution of the manifest variables. Although the two models are well known to imply equivalent covariance decompositions, in general they do not yield equivalent estimates of the…
Performance Models for the Spike Banded Linear System Solver
Manguoglu, Murat; Saied, Faisal; Sameh, Ahmed; ...
2011-01-01
With the availability of large-scale parallel platforms comprised of tens-of-thousands of processors and beyond, there is significant impetus for the development of scalable parallel sparse linear system solvers and preconditioners. An integral part of this design process is the development of performance models capable of predicting performance and providing accurate cost models for the solvers and preconditioners. There has been some work in the past on characterizing performance of the iterative solvers themselves. In this paper, we investigate the problem of characterizing performance and scalability of banded preconditioners. Recent work has demonstrated the superior convergence properties and robustness of banded preconditioners, compared to the state-of-the-art ILU family of preconditioners as well as algebraic multigrid preconditioners. Furthermore, when used in conjunction with efficient banded solvers, banded preconditioners are capable of significantly faster time-to-solution. Our banded solver, the Truncated Spike algorithm, is specifically designed for parallel performance and tolerance to deep memory hierarchies. Its regular structure is also highly amenable to accurate performance characterization. Using these characteristics, we derive the following results in this paper: (i) we develop parallel formulations of the Truncated Spike solver, (ii) we develop a highly accurate pseudo-analytical parallel performance model for our solver, (iii) we show excellent prediction capabilities of our model, based on which we argue the high scalability of our solver. Our pseudo-analytical performance model is based on analytical performance characterization of each phase of our solver. These analytical models are then parameterized using actual runtime information on target platforms. An important consequence of our performance models is that they reveal underlying performance bottlenecks in both serial and parallel formulations. All of our results are validated
Kohli, Nidhi; Hughes, John; Wang, Chun; Zopluoglu, Cengiz; Davison, Mark L
2015-06-01
A linear-linear piecewise growth mixture model (PGMM) is appropriate for analyzing segmented (disjointed) change in individual behavior over time, where the data come from a mixture of 2 or more latent classes, and the underlying growth trajectories in the different segments of the developmental process within each latent class are linear. A PGMM allows the knot (change point), the time of transition from 1 phase (segment) to another, to be estimated (when it is not known a priori) along with the other model parameters. To assist researchers in deciding which estimation method is most advantageous for analyzing this kind of mixture data, the current research compares 2 popular approaches to inference for PGMMs: maximum likelihood (ML) via an expectation-maximization (EM) algorithm, and Markov chain Monte Carlo (MCMC) for Bayesian inference. Monte Carlo simulations were carried out to investigate and compare the ability of the 2 approaches to recover the true parameters in linear-linear PGMMs with unknown knots. The results show that MCMC for Bayesian inference outperformed ML via EM in nearly every simulation scenario. Real data examples are also presented, and the corresponding computer codes for model fitting are provided in the Appendix to aid practitioners who wish to apply this class of models.
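The linear-linear mean curve with a knot that underlies a PGMM can be sketched as a single growth function; class membership, random effects, and estimation of the knot are all omitted, and the parameter names are ours, not the authors'.

```python
import numpy as np

def piecewise_linear(t, b0, b1, b2, knot):
    """Linear-linear growth curve: intercept b0, slope b1 before the
    knot, slope b1 + b2 after it, continuous at the knot."""
    t = np.asarray(t, dtype=float)
    return b0 + b1 * t + b2 * np.maximum(t - knot, 0.0)
```

Writing the second segment as an added slope b2 on max(t - knot, 0) keeps the curve continuous at the change point by construction, which is why the knot can be treated as just another parameter when it is unknown.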
Fourth standard model family neutrino at future linear colliders
Ciftci, A.K.; Ciftci, R.; Sultansoy, S.
2005-09-01
It is known that flavor democracy favors the existence of the fourth standard model (SM) family. In order to give nonzero masses to the first three-family fermions, flavor democracy has to be slightly broken. A parametrization for democracy breaking, which gives the correct values for fundamental fermion masses and, at the same time, predicts quark and lepton Cabibbo-Kobayashi-Maskawa (CKM) matrices in good agreement with the experimental data, is proposed. The pair productions of the fourth SM family Dirac (ν₄) and Majorana (N₁) neutrinos at future linear colliders with √s = 500 GeV, 1 TeV, and 3 TeV are considered. The cross section for the process e⁺e⁻ → ν₄ν₄ (N₁N₁) and the branching ratios for possible decay modes of both neutrinos are determined. The decays of the fourth family neutrinos into muon channels (ν₄(N₁) → μ±W±) provide the cleanest signature at e⁺e⁻ colliders. Meanwhile, in our parametrization this channel is dominant. W bosons produced in decays of the fourth family neutrinos will be seen in the detector as either di-jets or isolated leptons. As an example, we consider the production of 200 GeV mass fourth family neutrinos at √s = 500 GeV linear colliders by taking into account di-muon plus four-jet events as signatures.
Linear models for sound from supersonic reacting mixing layers
NASA Astrophysics Data System (ADS)
Chary, P. Shivakanth; Samanta, Arnab
2016-12-01
We perform a linearized reduced-order modeling of the aeroacoustic sound sources in supersonic reacting mixing layers to explore their sensitivities to some of the flow parameters in radiating sound. Specifically, we investigate the role of outer modes as the effective flow compressibility is raised, when some of these are expected to dominate over the traditional Kelvin-Helmholtz (K-H) -type central mode. Although the outer modes are known to be of lesser importance in the near-field mixing, how these radiate to the far-field is uncertain, on which we focus. On keeping the flow compressibility fixed, the outer modes are realized via biasing the respective mean densities of the fast (oxidizer) or slow (fuel) side. Here the mean flows are laminar solutions of two-dimensional compressible boundary layers with an imposed composite (turbulent) spreading rate, which we show to significantly alter the growth of instability waves by saturating them earlier, similar to in nonlinear calculations, achieved here via solving the linear parabolized stability equations. As the flow parameters are varied, instability of the slow modes is shown to be more sensitive to heat release, potentially exceeding equivalent central modes, as these modes yield relatively compact sound sources with lesser spreading of the mixing layer, when compared to the corresponding fast modes. In contrast, the radiated sound seems to be relatively unaffected when the mixture equivalence ratio is varied, except for a lean mixture which is shown to yield a pronounced effect on the slow mode radiation by reducing its modal growth.
Linear Models Based on Noisy Data and the Frisch Scheme*
Ning, Lipeng; Georgiou, Tryphon T.; Tannenbaum, Allen; Boyd, Stephen P.
2016-01-01
We address the problem of identifying linear relations among variables based on noisy measurements. This is a central question in the search for structure in large data sets. Often a key assumption is that measurement errors in each variable are independent. This basic formulation has its roots in the work of Charles Spearman in 1904 and of Ragnar Frisch in the 1930s. Various topics such as errors-in-variables, factor analysis, and instrumental variables all refer to alternative viewpoints on this problem and on ways to account for the anticipated way that noise enters the data. In the present paper we begin by describing certain fundamental contributions by the founders of the field and provide alternative modern proofs to certain key results. We then go on to consider a modern viewpoint and novel numerical techniques to the problem. The central theme is expressed by the Frisch–Kalman dictum, which calls for identifying a noise contribution that allows a maximal number of simultaneous linear relations among the noise-free variables—a rank minimization problem. In the years since Frisch’s original formulation, there have been several insights, including trace minimization as a convenient heuristic to replace rank minimization. We discuss convex relaxations and theoretical bounds on the rank that, when met, provide guarantees for global optimality. A complementary point of view to this minimum-rank dictum is presented in which models are sought leading to a uniformly optimal quadratic estimation error for the error-free variables. Points of contact between these formalisms are discussed, and alternative regularization schemes are presented. PMID:27168672
Development and validation of a general purpose linearization program for rigid aircraft models
NASA Technical Reports Server (NTRS)
Duke, E. L.; Antoniewicz, R. F.
1985-01-01
A FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft models is discussed. The program LINEAR numerically determines a linear systems model using nonlinear equations of motion and a user-supplied, nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model. Also, included in the report is a comparison of linear and nonlinear models for a high performance aircraft.
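The core numerical step such a tool performs, extracting state-space matrices A and B from a user-supplied nonlinear model about a chosen operating point, can be sketched with central finite differences. This is a generic illustration of numerical linearization, not the FORTRAN program LINEAR's actual algorithm, and the step size is an assumption.

```python
import numpy as np

def linearize(f, x0, u0, eps=1e-6):
    """Central-difference Jacobians of xdot = f(x, u) about (x0, u0),
    returning the state matrix A = df/dx and input matrix B = df/du."""
    x0, u0 = np.asarray(x0, float), np.asarray(u0, float)
    n, m = x0.size, u0.size
    A, B = np.zeros((n, n)), np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B
```

Applied to a nonlinear model of rigid-body aircraft dynamics, the resulting (A, B) pair is exactly the kind of linear systems model the report describes, ready for control design or comparison against the nonlinear simulation.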
Estimating population trends with a linear model: Technical comments
Sauer, John R.; Link, William A.; Royle, J. Andrew
2004-01-01
Controversy has sometimes arisen over whether there is a need to accommodate the limitations of survey design in estimating population change from the count data collected in bird surveys. Analyses of surveys such as the North American Breeding Bird Survey (BBS) can be quite complex; it is natural to ask if the complexity is necessary, or whether the statisticians have run amok. Bart et al. (2003) propose a very simple analysis involving nothing more complicated than simple linear regression, and contrast their approach with model-based procedures. We review the assumptions implicit to their proposed method, and document that these assumptions are unlikely to be valid for surveys such as the BBS. One fundamental limitation of a purely design-based approach is the absence of controls for factors that influence detection of birds at survey sites. We show that failure to model observer effects in survey data leads to substantial bias in estimation of population trends from BBS data for the 20 species that Bart et al. (2003) used as the basis of their simulations. Finally, we note that the simulations presented in Bart et al. (2003) do not provide a useful evaluation of their proposed method, nor do they provide a valid comparison to the estimating-equations alternative they consider.
Non linear dynamics of flame cusps: from experiments to modeling
NASA Astrophysics Data System (ADS)
Almarcha, Christophe; Radisson, Basile; Al-Sarraf, Elias; Quinard, Joel; Villermaux, Emmanuel; Denet, Bruno; Joulin, Guy
2016-11-01
The propagation of premixed flames in a medium initially at rest exhibits the appearance and competition of elementary local singularities called cusps. We investigate this problem both experimentally and numerically. An analytical solution of the two-dimensional Michelson-Sivashinsky equation is obtained as a composition of pole solutions, which is compared with experimental flame fronts propagating between glass plates separated by a thin gap width. We demonstrate that the front dynamics can be reproduced numerically with good accuracy, from the linear stages of destabilization to its late-time evolution, using this model equation. In particular, the model accounts for the experimentally observed steady distribution of distances between cusps, which is well described by a one-parameter Gamma distribution, reflecting the aggregation type of interaction between the cusps. A modification of the Michelson-Sivashinsky equation taking gravity into account allows some other special features of these fronts to be reproduced.
Linear System Models for Ultrasonic Imaging: Application to Signal Statistics
Zemp, Roger J.; Abbey, Craig K.; Insana, Michael F.
2009-01-01
Linear equations for modeling echo signals from shift-variant systems forming ultrasonic B-mode, Doppler, and strain images are analyzed and extended. The approach is based on a solution to the homogeneous wave equation for random inhomogeneous media. When the system is shift-variant, the spatial sensitivity function—defined as a spatial weighting function that determines the scattering volume for a fixed point of time—has advantages over the point-spread function traditionally used to analyze ultrasound systems. Spatial sensitivity functions are necessary for determining statistical moments in the context of rigorous image quality assessment, and they are time-reversed copies of point-spread functions for shift variant systems. A criterion is proposed to assess the validity of a local shift-invariance assumption. The analysis reveals realistic situations in which in-phase signals are correlated to the corresponding quadrature signals, which has strong implications for assessing lesion detectability. Also revealed is an opportunity to enhance near- and far-field spatial resolution by matched filtering unfocused beams. The analysis connects several well-known approaches to modeling ultrasonic echo signals. PMID:12839176
Wear-caused deflection evolution of a slide rail, considering linear and non-linear wear models
NASA Astrophysics Data System (ADS)
Kim, Dongwook; Quagliato, Luca; Park, Donghwi; Murugesan, Mohanraj; Kim, Naksoo; Hong, Seokmoo
2017-05-01
The research presented in this paper details an experimental-numerical approach for the quantitative correlation between wear and end-point deflection in a slide rail. Focusing on slide rails utilized in white-goods applications, the aim is to evaluate the number of cycles the slide rail can operate, under different load conditions, before it should be replaced due to unacceptable end-point deflection. In this paper, two formulations are utilized to describe the wear: the Archard model for the linear wear and the Lemaitre damage model for the nonlinear wear. The linear wear gradually reduces the surface of the slide rail, whereas the nonlinear one accounts for surface element deletion (i.e. due to pitting). To determine the constants to use in the wear models, a simple tension test and a sliding wear test, utilizing a purpose-designed and developed experimental machine, were carried out. A full slide rail model simulation has been implemented in ABAQUS, including both linear and non-linear wear models, and the results have been compared with those of real rails under different load conditions, provided by the rail manufacturer. The comparison between numerically estimated and real rail results proved the reliability of the developed numerical model, limiting the error to a ±10% range. The proposed approach allows the displacement-versus-cycles curves, parametrized for different loads, to be predicted and, based on a chosen failure criterion, the lifetime of the rail to be estimated.
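Archard's law referenced above has the standard form h = K·p·s/H: worn depth grows linearly with contact pressure and sliding distance and inversely with hardness. A minimal sketch of that law and the resulting lifetime estimate, with illustrative constants rather than the values calibrated in the paper:

```python
def archard_wear_depth(pressure_mpa, sliding_m, k_wear=1e-5, hardness_mpa=2000.0):
    """Worn depth under Archard's linear wear law, h = K * p * s / H.
    k_wear and hardness_mpa are illustrative placeholder values."""
    return k_wear * pressure_mpa * sliding_m / hardness_mpa

def cycles_to_failure(pressure_mpa, stroke_m, max_depth):
    """Cycles until cumulative wear depth reaches a chosen failure depth."""
    per_cycle = archard_wear_depth(pressure_mpa, stroke_m)
    return max_depth / per_cycle
```

This captures only the linear (Archard) part of the paper's model; the nonlinear Lemaitre-damage element deletion is not reproduced here.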
Formal modeling and verification of fractional order linear systems.
Zhao, Chunna; Shi, Likun; Guan, Yong; Li, Xiaojuan; Shi, Zhiping
2016-05-01
This paper presents a formalization of a fractional order linear system in a higher-order logic (HOL) theorem proving system. Based on the formalization of the Grünwald-Letnikov (GL) definition, we formally specify and verify the linear and superposition properties of fractional order systems. The proof provides rigorous and solid underpinnings for verifying concrete fractional order linear control systems. Our implementation in HOL demonstrates the effectiveness of our approach in practical applications.
Modeling Seismoacoustic Propagation from the Nonlinear to Linear Regimes
NASA Astrophysics Data System (ADS)
Chael, E. P.; Preston, L. A.
2015-12-01
Explosions at shallow depth-of-burial can cause nonlinear material response, such as fracturing and spalling, up to the ground surface above the shot point. These motions at the surface affect the generation of acoustic waves into the atmosphere, as well as the surface-reflected compressional and shear waves. Standard source scaling models for explosions do not account for such nonlinear interactions above the shot, while some recent studies introduce a non-isotropic addition to the moment tensor to represent them (e.g., Patton and Taylor, 2011). We are using Sandia's CTH shock physics code to model the material response in the vicinity of underground explosions, up to the overlying ground surface. Across a boundary where the motions have decayed to nearly linear behavior, we couple the signals from CTH into a linear finite-difference (FD) seismoacoustic code to efficiently propagate the wavefields to greater distances. If we assume only one-way transmission of energy through the boundary, then the particle velocities there suffice as inputs for the FD code, simplifying the specification of the boundary condition. The FD algorithm we use applies the wave equations for velocity in an elastic medium and pressure in an acoustic one, and matches the normal traction and displacement across the interface. Initially we are developing and testing a 2D, axisymmetric seismoacoustic routine; CTH can use this geometry in the source region as well. The Source Physics Experiment (SPE) in Nevada has collected seismic and acoustic data on numerous explosions at different scaled depths, providing an excellent testbed for investigating explosion phenomena (Snelson et al., 2013). We present simulations for shots SPE-4' and SPE-5, illustrating the importance of nonlinear behavior up to the ground surface. Our goal is to develop the capability for accurately predicting the relative signal strengths in the air and ground for a given combination of source yield and depth. Sandia National
Complex dynamics in the Oregonator model with linear delayed feedback
NASA Astrophysics Data System (ADS)
Sriram, K.; Bernard, S.
2008-06-01
The Belousov-Zhabotinsky (BZ) reaction can display rich dynamics when a delayed feedback is applied. We used the Oregonator model of the oscillating BZ reaction to explore the dynamics brought about by a linear delayed feedback. The time-delayed feedback can generate a succession of complex dynamics: a period-doubling bifurcation route to chaos; amplitude death; fat, wrinkled, fractal, and broken tori; and mixed-mode oscillations. We observed that these dynamics arise due to a delay-driven transition, or toggling of the system between large- and small-amplitude oscillations, through a canard bifurcation. We used a combination of numerical bifurcation continuation techniques and other numerical methods to explore the dynamics in the feedback strength-delay parameter space. We observed that the period-doubling and quasiperiodic routes to chaos span a low-dimensional subspace, perhaps due to the trapping of the trajectories in the small-amplitude regime near the canard; the trapped chaotic trajectories then get ejected from the small-amplitude regime due to a crowding effect to generate chaotic-excitable spikes. We also qualitatively explained the observed dynamics by projecting a three-dimensional phase portrait of the delayed dynamics onto the two-dimensional nullclines. This is the first instance in which it is shown that the interaction of delay and canard can bring about complex dynamics.
Process Setting through General Linear Model and Response Surface Method
NASA Astrophysics Data System (ADS)
Senjuntichai, Angsumalin
2010-10-01
The objective of this study is to improve the efficiency of the flow-wrap packaging process in the soap industry through the reduction of defectives. At the 95% confidence level, with the regression analysis, the sealing temperature and the temperatures of the upper and lower crimpers are found to be the significant factors for the flow-wrap process with respect to the number/percentage of defectives. Twenty-seven experiments were designed and performed according to three levels of each controllable factor. With the general linear model (GLM), the suggested values for the sealing temperature and the temperatures of the upper and lower crimpers are 185, 85 and 85 °C, respectively, while the response surface method (RSM) provides the optimal process conditions at 186, 89 and 88 °C. Due to the different assumptions relating the percentage of defectives to the three temperature parameters, the conditions suggested by the two methods differ slightly. The estimated percentage of defectives of 5.51% under the GLM process condition and the predicted percentage of 4.62% under the RSM process condition are not significantly different. At the 95% confidence level, however, the percentage of defectives under the RSM condition can be lower, at approximately 2.16%, than under the GLM condition, albeit with wider variation. Lastly, the percentages of defectives under the conditions suggested by GLM and RSM are reduced by 55.81% and 62.95%, respectively.
Amplitude relations in non-linear sigma model
NASA Astrophysics Data System (ADS)
Chen, Gang; Du, Yi-Jian
2014-01-01
In this paper, we investigate tree-level scattering amplitude relations in the U(N) non-linear sigma model, using Cayley parametrization. As was shown in the recent works [23,24], both on-shell amplitudes and off-shell currents with odd numbers of points vanish under Cayley parametrization. We prove the off-shell U(1) identity and the fundamental BCJ relation for even-point currents. By taking the on-shell limits of the off-shell relations, we show that the color-ordered tree amplitudes with even numbers of points satisfy the U(1)-decoupling identity and the fundamental BCJ relation, which take the same form as in Yang-Mills theory. We further show that all the on-shell general KK and BCJ relations, as well as the minimal-basis expansion, are also satisfied by color-ordered tree amplitudes. As a consequence of the relations among color-ordered amplitudes, the total 2m-point tree amplitudes satisfy the DDM form of color decomposition as well as the KLT relation.
Generalized linear model for estimation of missing daily rainfall data
NASA Astrophysics Data System (ADS)
Rahman, Nurul Aishah; Deni, Sayang Mohd; Ramli, Norazan Mohamed
2017-04-01
The analysis of rainfall data with no missingness is vital in various applications, including climatological, hydrological and meteorological studies. The issue of missing data is a serious concern since it could introduce bias and lead to misleading conclusions. In this study, five imputation methods, namely simple arithmetic averaging, the normal ratio method, inverse distance weighting, correlation coefficient weighting and a geographical coordinate method, were used to estimate the missing data. However, these imputation methods ignore the seasonality in the rainfall dataset, which could otherwise yield more reliable estimates. This study therefore aims to estimate the missing values in daily rainfall data by using a generalized linear model with gamma and Fourier series as the link function and smoothing technique, respectively. Forty years of daily rainfall data, for the period from 1975 until 2014, from seven stations in the Kelantan region were selected for the analysis. The findings indicate that the imputation methods provide more accurate estimates, based on the least mean absolute error, root mean squared error and coefficient of variation of the root mean squared error, when seasonality in the dataset is considered.
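Among the neighbour-based estimators compared in such studies, inverse distance weighting has a particularly compact form: the missing value is a distance-weighted average of the neighbouring stations. A minimal sketch (coordinates, values and the distance power are illustrative assumptions, not the paper's data):

```python
def idw_impute(target_xy, neighbours, power=2.0):
    """Estimate a missing rainfall value at target_xy from neighbouring
    stations, each given as ((x, y), observed_value). Weights are inverse
    distances raised to `power` (2.0 is a common but tunable choice)."""
    num = den = 0.0
    for (x, y), value in neighbours:
        d2 = (x - target_xy[0]) ** 2 + (y - target_xy[1]) ** 2
        if d2 == 0:
            return value            # co-located station: use it directly
        w = 1.0 / d2 ** (power / 2.0)
        num += w * value
        den += w
    return num / den
```

Equidistant stations simply average; closer stations dominate as the power grows. The seasonal GLM of the paper is a separate, model-based refinement not shown here.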
Markov Boundary Discovery with Ridge Regularized Linear Models
Visweswaran, Shyam
2016-01-01
Ridge regularized linear models (RRLMs), such as ridge regression and the SVM, are a popular group of methods that are used in conjunction with coefficient hypothesis testing to discover explanatory variables with a significant multivariate association to a response. However, many investigators are reluctant to draw causal interpretations of the selected variables due to the incomplete knowledge of the capabilities of RRLMs in causal inference. Under reasonable assumptions, we show that a modified form of RRLMs can get “very close” to identifying a subset of the Markov boundary by providing a worst-case bound on the space of possible solutions. The results hold for any convex loss, even when the underlying functional relationship is nonlinear, and the solution is not unique. Our approach combines ideas in Markov boundary and sufficient dimension reduction theory. Experimental results show that the modified RRLMs are competitive against state-of-the-art algorithms in discovering part of the Markov boundary from gene expression data. PMID:27170915
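The ridge step underlying RRLMs has a closed form, β = (XᵀX + λI)⁻¹Xᵀy. A minimal sketch of that step only; the Markov-boundary discovery machinery and hypothesis testing of the paper are not reproduced, and the data below are illustrative:

```python
import numpy as np

def ridge_coefficients(X, y, lam=1.0):
    """Closed-form ridge estimate beta = (X'X + lam*I)^(-1) X'y.
    Solving the regularized normal equations directly; lam > 0 keeps
    the system well-conditioned even for collinear columns."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
```

As λ → 0 this approaches ordinary least squares; increasing λ shrinks the coefficient vector toward zero, which is the property the paper's hypothesis tests are built on.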
Investigating follow-up outcome change using hierarchical linear modeling.
Ogrodniczuk, J S; Piper, W E; Joyce, A S
2001-03-01
Individual change in outcome during a one-year follow-up period for 98 patients who received either interpretive or supportive psychotherapy was examined using hierarchical linear modeling (HLM). This followed a previous study that had investigated average (treatment condition) change during follow-up using traditional methods of data analysis (repeated measures ANOVA, chi-square tests). We also investigated whether two patient personality characteristics-quality of object relations (QOR) and psychological mindedness (PM)-predicted individual change. HLM procedures yielded findings that were not detected using traditional methods of data analysis. New findings indicated that the rate of individual change in outcome during follow-up varied significantly among the patients. QOR was directly related to favorable individual change for supportive therapy patients, but not for patients who received interpretive therapy. The findings have implications for determining which patients will show long-term benefit following short-term supportive therapy and how to enhance it. The study also found significant associations between QOR and final outcome level.
Wu, Z; Zhang, Y
2008-01-01
The double digestion problem for DNA restriction mapping has been proved to be NP-complete and becomes intractable when the number of DNA fragments grows large. Several approaches to the problem have been tested and proved to be effective only for small problems. In this paper, we formulate the problem as a mixed-integer linear program (MIP) by following (Waterman, 1995) in a slightly different form. With this formulation and using state-of-the-art integer programming techniques, we can solve randomly generated problems whose search space sizes are many orders of magnitude larger than previously reported testing sizes.
Accurate bolus arrival time estimation using piecewise linear model fitting
NASA Astrophysics Data System (ADS)
Abdou, Elhassan; de Mey, Johan; De Ridder, Mark; Vandemeulebroucke, Jef
2017-02-01
Dynamic contrast-enhanced computed tomography (DCE-CT) is an emerging radiological technique, which consists of acquiring a rapid sequence of CT images shortly after the injection of an intravenous contrast agent. The passage of the contrast agent in a tissue results in a varying CT intensity over time, recorded in time-attenuation curves (TACs), which can be related to the contrast supplied to that tissue via the supplying artery to estimate the local perfusion and permeability characteristics. The time delay between the arrival of the contrast bolus in the feeding artery and the tissue of interest, called the bolus arrival time (BAT), needs to be determined accurately to enable reliable perfusion analysis. Its automated identification is, however, highly sensitive to noise. We propose an accurate and efficient method for estimating the BAT from DCE-CT images. The method relies on a piecewise linear TAC model with four segments and suitable parameter constraints for limiting the range of possible values. The model is fitted to the acquired TACs in a multiresolution fashion using an iterative optimization approach. The performance of the method was evaluated on simulated and real perfusion data of lung and rectum tumours. In both cases, the method was found to be stable, leading to average accuracies on the order of the temporal resolution of the dynamic sequence. For reasonable levels of noise, the results were found to be comparable to those obtained using a previously proposed method, employing a full search algorithm, but requiring an order of magnitude more computation time.
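The idea of locating the BAT at a breakpoint of a piecewise linear TAC model can be illustrated with a reduced two-segment version: a flat baseline followed by a linear upslope, fitted by least squares at every candidate breakpoint. This is a simplified sketch; the paper's actual model has four segments, parameter constraints and multiresolution fitting.

```python
def estimate_bat(times, values):
    """Estimate bolus arrival time as the breakpoint of the best-fitting
    two-segment model f(t) = b + s * max(t - t0, 0): flat baseline b,
    then upslope s starting at t0. Grid search over breakpoints; for
    each t0 the (b, s) fit is a 2x2 linear least-squares problem."""
    best = (float("inf"), times[0])
    n = len(times)
    for i in range(1, n - 1):
        t0 = times[i]
        g = [max(t - t0, 0.0) for t in times]     # hinge regressor
        sg = sum(g)
        sgg = sum(gi * gi for gi in g)
        sy = sum(values)
        sgy = sum(gi * v for gi, v in zip(g, values))
        det = n * sgg - sg * sg
        if det == 0:
            continue
        b = (sgg * sy - sg * sgy) / det
        s = (n * sgy - sg * sy) / det
        sse = sum((b + s * gi - v) ** 2 for gi, v in zip(g, values))
        if sse < best[0]:
            best = (sse, t0)
    return best[1]
```

On a noiseless ramp starting at t = 4 the routine recovers the breakpoint exactly; on noisy TACs the reported accuracy would of course depend on noise level and temporal resolution.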
Linear Inverse Modeling and Scaling Analysis of Drainage Inventories.
NASA Astrophysics Data System (ADS)
O'Malley, C.; White, N. J.
2016-12-01
It is widely accepted that the stream power law can be used to describe the evolution of longitudinal river profiles. Over the last 5 years, this phenomenological law has been used to develop non-linear and linear inversion algorithms that enable uplift rate histories to be calculated by minimizing the misfit between observed and calculated river profiles. Substantial, continent-wide inventories of river profiles have been successfully inverted to yield uplift as a function of time and space. Erosional parameters can be determined by independent geological calibration. Our results help to illuminate empirical scaling laws that are well known to the geomorphological community. Here we present an analysis of river profiles from Asia. The timing and magnitude of uplift events across Asia, including the Himalayas and Tibet, have long been debated. River profile analyses have played an important role in clarifying the timing of uplift events. However, no attempt has yet been made to invert a comprehensive database of river profiles from the entire region. Asian rivers contain information which allows us to investigate putative uplift events quantitatively and to determine a cumulative uplift history for Asia. Long wavelength shapes of river profiles are governed by regional uplift and moderated by erosional processes. These processes are parameterised using the stream power law in the form of an advective-diffusive equation. Our non-negative, least-squares inversion scheme was applied to an inventory of 3722 Asian river profiles. We calibrate the key erosional parameters by predicting solid sedimentary flux for a set of Asian rivers and by comparing the flux predictions against published depositional histories for major river deltas. The resultant cumulative uplift history is compared with a range of published geological constraints for uplift and palaeoelevation. We have found good agreement for many regions across Asia. Surprisingly, single values of erosional
ERIC Educational Resources Information Center
Battauz, Michela; Bellio, Ruggero
2011-01-01
This paper proposes a structural analysis for generalized linear models when some explanatory variables are measured with error and the measurement error variance is a function of the true variables. The focus is on latent variables investigated on the basis of questionnaires and estimated using item response theory models. Latent variable…
ERIC Educational Resources Information Center
Li, Fuzhong; Duncan, Terry E.; Harmer, Peter; Acock, Alan; Stoolmiller, Mike
1998-01-01
Discusses the utility of multilevel confirmatory factor analysis and hierarchical linear modeling methods in testing measurement models in which the underlying attribute may vary as a function of levels of observation. A real dataset is used to illustrate the two approaches and their comparability. (SLD)
Modelling and Inverse-Modelling: Experiences with O.D.E. Linear Systems in Engineering Courses
ERIC Educational Resources Information Center
Martinez-Luaces, Victor
2009-01-01
In engineering careers courses, differential equations are widely used to solve problems concerned with modelling. In particular, ordinary differential equations (O.D.E.) linear systems appear regularly in Chemical Engineering, Food Technology Engineering and Environmental Engineering courses, due to the usefulness in modelling chemical kinetics,…
A log-linear multidimensional Rasch model for capture-recapture.
Pelle, E; Hessen, D J; van der Heijden, P G M
2016-02-20
In this paper, a log-linear multidimensional Rasch model is proposed for capture-recapture analysis of registration data. In the model, heterogeneity of capture probabilities is taken into account, and registrations are viewed as dichotomously scored indicators of one or more latent variables that can account for correlations among registrations. It is shown how the probability of a generic capture profile is expressed under the log-linear multidimensional Rasch model and how the parameters of the traditional log-linear model are derived from those of the log-linear multidimensional Rasch model. Finally, an application of the model to neural tube defects data is presented.
Generalized Degrees of Freedom and Adaptive Model Selection in Linear Mixed-Effects Models.
Zhang, Bo; Shen, Xiaotong; Mumford, Sunni L
2012-03-01
Linear mixed-effects models involve fixed effects, random effects and covariance structure, which require model selection to simplify a model and to enhance its interpretability and predictability. In this article, we develop, in the context of linear mixed-effects models, the generalized degrees of freedom and an adaptive model selection procedure defined by a data-driven model complexity penalty. Numerically, the procedure performs well against its competitors not only in selecting fixed effects but in selecting random effects and covariance structure as well. Theoretically, asymptotic optimality of the proposed methodology is established over a class of information criteria. The proposed methodology is applied to the BioCycle study, to determine predictors of hormone levels among premenopausal women and to assess variation in hormone levels both between and within women across the menstrual cycle.
Wen, Xiaoquan
2015-10-01
We consider the problems of hypothesis testing and model comparison under a flexible Bayesian linear regression model whose formulation is closely connected with the linear mixed effect model and the parametric models for Single Nucleotide Polymorphism (SNP) set analysis in genetic association studies. We derive a class of analytic approximate Bayes factors and illustrate their connections with a variety of frequentist test statistics, including the Wald statistic and the variance component score statistic. Taking advantage of Bayesian model averaging and hierarchical modeling, we demonstrate some distinct advantages and flexibilities in the approaches utilizing the derived Bayes factors in the context of genetic association studies. We demonstrate our proposed methods using real or simulated numerical examples in applications of single SNP association testing, multi-locus fine-mapping and SNP set association testing.
Log-normal frailty models fitted as Poisson generalized linear mixed models.
Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver
2016-12-01
The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: a frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters.
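The "explode" step, splitting each survival record at the piece boundaries and attaching a log-exposure offset for the Poisson fit, can be sketched as follows. This is a generic illustration of the standard piecewise-exponential data expansion, not the %PCFrailty macro itself (which does this in SAS):

```python
import math

def explode_survival(records, cuts):
    """Expand (time, event) survival records into piecewise-exponential
    rows: one row per subject per baseline-hazard piece entered, with an
    event indicator and a log-exposure offset for Poisson regression.
    `cuts` are the interior piece boundaries of the baseline hazard."""
    rows = []
    bounds = list(cuts) + [float("inf")]
    for subject, (time, event) in enumerate(records):
        start = 0.0
        for piece, end in enumerate(bounds):
            exposure = min(time, end) - start    # time spent in this piece
            died_here = event and time <= end    # event falls in this piece
            rows.append({"subject": subject, "piece": piece,
                         "event": int(died_here),
                         "log_exposure": math.log(exposure)})
            if time <= end:
                break                            # subject leaves risk set
            start = end
    return rows
```

The resulting rows would then be passed to any GLMM routine as a Poisson response with `log_exposure` as offset and a per-subject (or per-cluster) random intercept playing the role of the log-normal frailty.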
Mixed-Integer Nonconvex Quadratic Optimization Relaxations and Performance Analysis
2015-09-14
problem as a cardinality constrained quadratic program and study its computational complexity. Furthermore, we develop novel semi-definite relaxation (SDR)...each application scenario, we first characterize the computational complexity of the joint optimization problem, and then propose novel semi-definite...cardinality constrained quadratic programs (QP) and the low rank matrix completion problems. The project addresses a fundamental question regarding the
Mixed Integer Programming and Heuristic Scheduling for Space Communication Networks
NASA Technical Reports Server (NTRS)
Lee, Charles H.; Cheung, Kar-Ming
2012-01-01
In this paper, we propose to solve the constrained optimization problem in two phases. The first phase uses heuristic methods such as ant colony optimization, particle swarm optimization, and genetic algorithms to seek a near-optimal solution among a list of feasible initial populations. The final optimal solution is then found by using the solution of the first phase as the initial condition for the SQP algorithm. We demonstrate the above problem formulation and optimization schemes with a large-scale network that includes the DSN ground stations and a number of spacecraft of deep space missions.
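The two-phase structure can be sketched with simple stand-ins: random sampling for the phase-1 global heuristic (in place of ant colony/PSO/GA) and a shrinking coordinate search for the phase-2 local refinement (in place of SQP). Both substitutions are assumptions made purely for a compact illustration.

```python
import random

def two_phase_minimize(f, bounds, n_samples=2000, seed=0):
    """Two-phase scheme: a global heuristic pass over the box constraints
    seeds a local refinement pass started from the phase-1 incumbent."""
    rng = random.Random(seed)
    # Phase 1: heuristic global search (random sampling stand-in).
    best = min(
        ([rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_samples)),
        key=f,
    )
    # Phase 2: local refinement (shrinking coordinate search stand-in).
    step = [(hi - lo) / 10.0 for lo, hi in bounds]
    for _ in range(60):
        improved = False
        for i, (lo, hi) in enumerate(bounds):
            for delta in (-step[i], step[i]):
                trial = list(best)
                trial[i] = min(max(trial[i] + delta, lo), hi)
                if f(trial) < f(best):
                    best, improved = trial, True
        if not improved:
            step = [s / 2.0 for s in step]       # tighten the search
    return best
```

The point of the split is the same as in the paper: the cheap global pass avoids handing the local solver a poor starting point, while the local pass polishes the incumbent to the nearest optimum.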
Mixed-Integer Nonconvex Quadratic Optimization Relaxations and Performance Analysis
2016-10-11
constrained quadratic programs, and the matrix completion problems with non-convex regularity. The project addresses a fundamental question of how to...efficiently solve these problems, such as to find a provably high-quality approximate solution or to quickly find a local solution with provable structure...applications in optimal and dynamic resource management, cardinality constrained quadratic programs, and the matrix completion problems with non
NASA Astrophysics Data System (ADS)
Collier, W.; Milian Sanz, J.
2016-09-01
The length and flexibility of wind turbine blades are increasing over time. Typically, the dynamic response of the blades is analysed using linear models of blade deflection, enhanced by various ad hoc non-linear correction models. For blades undergoing large deflections, the small-deflection assumption inherent to linear models becomes less valid. It has previously been demonstrated that linear and non-linear blade models can show significantly different blade responses, particularly for blade torsional deflection, leading to load prediction differences. There is a need to evaluate how load predictions from these two approaches compare to measurement data from the field. In this paper, time domain simulations in turbulent wind are carried out using the aero-elastic code Bladed with linear and non-linear blade deflection models. The turbine blade load and deflection simulation results are compared to measurement data from an onshore prototype of the GE 6MW Haliade turbine, which features 73.5 m long LM blades. Both linear and non-linear blade models show a good match to the measured turbine loads and blade deflections. Only the blade loads differ significantly between the two models, with other turbine loads not strongly affected. The non-linear blade model gives a better match to the measured blade root flapwise damage equivalent load, suggesting that the flapwise dynamic behaviour is better captured by the non-linear blade model. Conversely, the linear blade model shows a better match to measurements in some areas such as blade edgewise damage equivalent load.
Sun Wei; Huang, Guo H.; Lv Ying; Li Gongchen
2012-06-15
Highlights: • Inexact piecewise-linearization-based fuzzy flexible programming is proposed. • It is the first application to waste management under multiple complexities. • It tackles nonlinear economies-of-scale effects in interval-parameter constraints. • It estimates costs more accurately than the linear-regression-based model. • Uncertainties are decreased and more satisfactory interval solutions are obtained. - Abstract: To tackle nonlinear economies-of-scale (EOS) effects in interval-parameter constraints for a representative waste management problem, an inexact piecewise-linearization-based fuzzy flexible programming (IPFP) model is developed. In IPFP, interval parameters for waste amounts and transportation/operation costs can be quantified; aspiration levels for net system costs, as well as tolerance intervals for both capacities of waste treatment facilities and waste generation rates can be reflected; and the nonlinear EOS effects transformed from objective function to constraints can be approximated. An interactive algorithm is proposed for solving the IPFP model, which in nature is an interval-parameter mixed-integer quadratically constrained programming model. To demonstrate the IPFP's advantages, two alternative models are developed to compare their performances. One is a conventional linear-regression-based inexact fuzzy programming model (IPFP2) and the other is an IPFP model with all right-hand sides of fuzzy constraints being the corresponding interval numbers (IPFP3). The comparison results between IPFP and IPFP2 indicate that the optimized waste amounts would have similar patterns in both models. However, when dealing with EOS effects in constraints, the IPFP2 may underestimate the net system costs while the IPFP can estimate the costs more accurately. The comparison results between IPFP and IPFP3 indicate
Modeling Herriott cells using the linear canonical transform.
Dahlen, Dar; Wilcox, Russell; Leemans, Wim
2017-01-10
We demonstrate a new way to analyze stable, multipass optical cavities (Herriott cells), using the linear canonical transform formalism, showing that re-entrant designs reproduce an arbitrary input field at the output, resulting in useful symmetries. We use this analysis to predict the stability of cavities used in interferometric delay lines for temporal pulse addition.
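The re-entrant property can be illustrated in the simpler ABCD ray-matrix picture, a standard special case of the linear canonical transform: per bounce the ray rotates by an angle θ with cos θ = 1 − d/(2f), so when Nθ is a multiple of 2π the pattern closes and the input ray is reproduced. The values below (mirror radius R = 1, separation d = R/2, giving θ = 60° and closure after 6 bounces) are illustrative, not a design from the paper.

```python
def bounce_matrix(d, f):
    """ABCD matrix for one free propagation of length d followed by a
    mirror of focal length f (= R/2): product of [[1, d], [0, 1]] and
    [[1, 0], [-1/f, 1]]."""
    return [[1.0, d], [-1.0 / f, 1.0 - d / f]]

def propagate(ray, m, n):
    """Apply the ABCD matrix m to ray = (height, slope), n times."""
    y, u = ray
    for _ in range(n):
        y, u = m[0][0] * y + m[0][1] * u, m[1][0] * y + m[1][1] * u
    return y, u
```

For the re-entrant geometry the six-bounce matrix is the identity, so any input (height, slope) pair returns to itself; the paper's LCT formalism extends the same statement from rays to arbitrary input fields.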
Analysis of Covariance with Linear Regression Error Model on Antenna Control Unit Tracking
2015-10-20
412TW-PA-15238, Daniel T. Laird, 20 OCT 15 – 23 OCT 15. Analysis of Covariance with Linear Regression Error Model on Antenna Control Unit Tracking. ...supplement technical expertise, rather than rely solely on expertise, which is subjective. In this paper we apply linear regression modeling and
NASA Astrophysics Data System (ADS)
Beardsell, Alec; Collier, William; Han, Tao
2016-09-01
There is a trend in the wind industry towards ever larger and more flexible turbine blades. Blade tip deflections in modern blades now commonly exceed 10% of blade length. Historically, the dynamic response of wind turbine blades has been analysed using linear models of blade deflection which include the assumption of small deflections. For modern flexible blades, this assumption is becoming less valid. In order to continue to simulate dynamic turbine performance accurately, routine use of non-linear models of blade deflection may be required. This can be achieved by representing the blade as a connected series of individual flexible linear bodies - referred to in this paper as the multi-part approach. In this paper, Bladed is used to compare load predictions using single-part and multi-part blade models for several turbines. The study examines the impact on fatigue and extreme loads and blade deflection through reduced sets of load calculations based on IEC 61400-1 ed. 3. Damage equivalent load changes of up to 16% and extreme load changes of up to 29% are observed at some turbine load locations. It is found that there is no general pattern in the loading differences observed between single-part and multi-part blade models. Rather, changes in fatigue and extreme loads with a multi-part blade model depend on the characteristics of the individual turbine and blade. Key underlying causes of damage equivalent load change are identified as differences in edgewise-torsional coupling between the multi-part and single-part models, and increased edgewise rotor mode damping in the multi-part model. Similarly, a causal link is identified between torsional blade dynamics and changes in ultimate load results.
Linear programming model to develop geodiversity map using utility theory
NASA Astrophysics Data System (ADS)
Sepehr, Adel
2015-04-01
In this article, the classification and mapping of geodiversity were accomplished using a quantitative methodology based on linear programming, the central idea being that geosites and geomorphosites, as the main indicators of geodiversity, can be evaluated by utility theory. A linear programming method was applied to geodiversity mapping over Khorasan-Razavi province, located in northeastern Iran. The main criteria for distinguishing geodiversity potential in the study area were rock type (lithology), fault position (tectonic processes), karst areas (dynamic processes), the frequency of Aeolian landforms, and surface river forms. These parameters were investigated using thematic maps, including geology, topography and geomorphology maps at scales of 1:100,000, 1:50,000 and 1:250,000, imagery data from SPOT and ETM+ (Landsat 7), and direct field operations. The geological thematic layer was simplified from the original map using a practical lithologic criterion based on a primary genetic classification of rocks into metamorphic, igneous and sedimentary types. The geomorphology map was produced using a 30-m DEM extracted from ASTER data, geology maps and Google Earth images. The geology map shows tectonic status, while the geomorphology map indicates dynamic processes and landforms (karst, Aeolian and river). Then, following utility theory, we proposed a linear program to classify the degree of geodiversity in the study area based on geology and morphology parameters. The methodology consisted of a linear function maximizing geodiversity subject to constraints in the form of linear equations. The results indicated three classes of geodiversity potential: low, medium and high. Geodiversity potential was highest in the karstic areas and Aeolian landscape. The utility theory used in the research also decreased the uncertainty of the evaluations.
ATOPS B-737 inner-loop control system linear model construction and verification
NASA Technical Reports Server (NTRS)
Broussard, J. R.
1983-01-01
Nonlinear models and block diagrams of an inner-loop control system for the ATOPS B-737 Research Aircraft are presented. Continuous time linear model representations of the nonlinear inner-loop control systems are derived. Closed-loop aircraft simulations comparing nonlinear and linear dynamic responses to step inputs are used to verify the inner-loop control system models.
The Simplest Complete Model of Choice Response Time: Linear Ballistic Accumulation
ERIC Educational Resources Information Center
Brown, Scott D.; Heathcote, Andrew
2008-01-01
We propose a linear ballistic accumulator (LBA) model of decision making and reaction time. The LBA is simpler than other models of choice response time, with independent accumulators that race towards a common response threshold. Activity in the accumulators increases in a linear and deterministic manner. The simplicity of the model allows…
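The race described in this abstract is simple enough to simulate directly. Below is a minimal sketch of one LBA trial; the threshold, start-point range, drift means and trial-to-trial drift spread are illustrative values, not parameters from the paper:

```python
import random

def lba_trial(drifts, b=1.0, A=0.5, sd=0.3):
    """Simulate one Linear Ballistic Accumulator trial.

    Each accumulator starts at a uniform point in [0, A] and rises
    linearly and deterministically at a drift rate drawn (per trial)
    from a normal distribution; the first accumulator to reach the
    common threshold b determines the response and its time is the RT.
    """
    times = []
    for d in drifts:
        k = random.uniform(0, A)       # random start point
        v = random.gauss(d, sd)        # trial-to-trial drift variability
        while v <= 0:                  # resample rare non-positive drifts
            v = random.gauss(d, sd)
        times.append((b - k) / v)      # linear rise: time to threshold
    rt = min(times)
    return times.index(rt), rt         # (winning accumulator, RT)

random.seed(1)
choice, rt = lba_trial([1.2, 0.8])
```

Because accumulation between start point and threshold is deterministic, each trial reduces to drawing a start point and a drift and computing a crossing time, which is what makes the model analytically tractable.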
Power and Bias in Hierarchical Linear Growth Models: More Measurements of Fewer People
ERIC Educational Resources Information Center
Haardoerfer, Regine
2010-01-01
Hierarchical Linear Modeling (HLM) sample size recommendations are mostly made with traditional group-design research in mind, as HLM has been used almost exclusively in group-design studies. Single-case research can benefit from utilizing hierarchical linear growth modeling, but sample size recommendations for growth modeling with HLM are scarce…
Kizilkaya, Kadir; Tempelman, Robert J
2005-01-01
We propose a general Bayesian approach to heteroskedastic error modeling for generalized linear mixed models (GLMM) in which linked functions of conditional means and residual variances are specified as separate linear combinations of fixed and random effects. We focus on the linear mixed model (LMM) analysis of birth weight (BW) and the cumulative probit mixed model (CPMM) analysis of calving ease (CE). The deviance information criterion (DIC) was demonstrated to be useful in correctly choosing between homoskedastic and heteroskedastic error GLMM for both traits when data was generated according to a mixed model specification for both location parameters and residual variances. Heteroskedastic error LMM and CPMM were fitted, respectively, to BW and CE data on 8847 Italian Piemontese first parity dams in which residual variances were modeled as functions of fixed calf sex and random herd effects. The posterior mean residual variance for male calves was over 40% greater than that for female calves for both traits. Also, the posterior means of the standard deviation of the herd-specific variance ratios (relative to a unitary baseline) were estimated to be 0.60 ± 0.09 for BW and 0.74 ± 0.14 for CE. For both traits, the heteroskedastic error LMM and CPMM were chosen over their homoskedastic error counterparts based on DIC values. PMID:15588567
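The sex-specific residual variance reported above can be mimicked with a quick simulation. The means and standard deviations below are invented, chosen only so that the male-group residual variance comes out roughly 40% larger, as in the birth-weight result:

```python
import random
from statistics import pvariance

random.seed(7)
# Heteroskedastic errors by a fixed effect (calf sex): same model form
# for both groups, but a different residual spread per group.
male = [random.gauss(40.0, 4.8) for _ in range(4000)]    # sd 4.8
female = [random.gauss(38.0, 4.0) for _ in range(4000)]  # sd 4.0

# With these inputs the ratio should land near (4.8 / 4.0)**2 = 1.44,
# i.e. ~44% higher residual variance for males.
var_ratio = pvariance(male) / pvariance(female)
```

A homoskedastic model would force a single residual variance on both groups; criteria such as DIC can then detect that the heteroskedastic specification fits data like these better.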
Modeling and Vibration Suppression for Fast Moving Linear Robots
NASA Astrophysics Data System (ADS)
Gattringer, H.; Kilian, F. J.; Höbarth, W.; Bremer, H.
2010-09-01
This paper deals with vibration suppression for elastic linear robots consisting of elastic beams, bearings and motor gear units. It is of vital importance to use a structured method for deriving the equations of motion for this nonlinear multi body system. The Projection Equation in subsystem form, a synthetical method for calculating the dynamical equations of motion in combination with the Ritz approximation technique, leads to highly nonlinear ordinary differential equations which can be integrated numerically. The control scheme is based on a feedforward part and a feedback loop. A Taylor expansion up to first order leading to a linear time variant system delivers the feedforward torques and a precalculation of the elastic endeffector deflections which can be compensated by a correction of the desired trajectory. Simulation and experimental results are presented.
Linear relaxation in large two-dimensional Ising models
NASA Astrophysics Data System (ADS)
Lin, Y.; Wang, F.
2016-02-01
Critical dynamics in two-dimensional Ising lattices of sizes up to 2048 × 2048 are simulated on field-programmable-gate-array-based computing devices. Linear relaxation times are measured from extremely long Monte Carlo simulations. The longest simulation comprises 7.1 × 10^16 spin updates, which would take over 37 years to simulate on a general-purpose computer. The linear relaxation time of the Ising lattices is found to follow the dynamic scaling law for correlation lengths as long as 2048. The dynamic exponent z of the system is found to be 2.179(12), which is consistent with previous studies of Ising lattices with shorter correlation lengths. It is also found that Monte Carlo simulations of critical dynamics in Ising lattices larger than 512 × 512 are very sensitive to statistical correlations between pseudorandom numbers, making it even more difficult to study such large systems.
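The dynamic scaling law behind these measurements, relaxation time growing as tau ~ L**z at criticality, makes the cost of larger lattices concrete. A small sketch using the exponent z = 2.179 reported in the abstract (the prefactor is arbitrary, so only ratios between sizes are meaningful):

```python
# Dynamic scaling at criticality: linear relaxation time tau ~ L**z.
# z = 2.179 is the exponent reported for the 2D Ising model; the
# prefactor is not specified here, so we only compute ratios.
Z = 2.179

def relaxation_ratio(L_small, L_large, z=Z):
    """Factor by which tau grows when the lattice size goes L_small -> L_large."""
    return (L_large / L_small) ** z

# Doubling the linear size multiplies the relaxation time by 2**z,
# i.e. roughly a factor of 4.5 per doubling.
growth = relaxation_ratio(1024, 2048)
```

This compounding, together with the L**2 growth in the number of spins per sweep, is why the 2048 × 2048 run required special-purpose FPGA hardware.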
AN ADA LINEAR ALGEBRA PACKAGE MODELED AFTER HAL/S
NASA Technical Reports Server (NTRS)
Klumpp, A. R.
1994-01-01
This package extends the Ada programming language to include linear algebra capabilities similar to those of the HAL/S programming language. The package is designed for avionics applications such as Space Station flight software. In addition to the HAL/S built-in functions, the package incorporates the quaternion functions used in the Shuttle and Galileo projects, and routines from LINPACK that solve systems of equations involving general square matrices. Language conventions in this package follow those of HAL/S to the maximum extent practical and minimize the effort required for writing new avionics software and translating existing software into Ada. Valid numeric types in this package include scalar, vector, matrix, and quaternion declarations. (Quaternions are four-component vectors used in representing motion between two coordinate frames.) Single precision and double precision floating point arithmetic is available in addition to the standard double precision integer manipulation. Infix operators are used instead of function calls to define dot products, cross products, quaternion products, and mixed scalar-vector, scalar-matrix, and vector-matrix products. The package contains two generic programs: one for floating point, and one for integer. The actual component type is passed as a formal parameter to the generic linear algebra package. The procedures for solving systems of linear equations defined by general matrices include GEFA, GECO, GESL, and GIDI. The HAL/S functions include ABVAL, UNIT, TRACE, DET, INVERSE, TRANSPOSE, GET, PUT, FETCH, PLACE, and IDENTITY. This package is written in Ada (Version 1.2) for batch execution and is machine independent. The linear algebra software depends on nothing outside the Ada language except for a call to a square root function for floating point scalars (such as SQRT in the DEC VAX MATHLIB library). This program was developed in 1989, and is a copyrighted work with all copyright vested in NASA.
Results and Comparison from the SAM Linear Fresnel Technology Performance Model: Preprint
Wagner, M. J.
2012-04-01
This paper presents the new Linear Fresnel technology performance model in NREL's System Advisor Model. The model predicts the financial and technical performance of direct-steam-generation Linear Fresnel power plants, and can be used to analyze a range of system configurations. This paper presents a brief discussion of the model formulation and motivation, and provides extensive discussion of the model performance and financial results. The Linear Fresnel technology is also compared to other concentrating solar power technologies in both qualitative and quantitative measures. The Linear Fresnel model - developed in conjunction with the Electric Power Research Institute - provides users with the ability to model a variety of solar field layouts, fossil backup configurations, thermal receiver designs, and steam generation conditions. This flexibility aims to encompass current market solutions for the DSG Linear Fresnel technology, which is seeing increasing exposure in fossil plant augmentation and stand-alone power generation applications.
Wu, Tsan-Pei; Wang, Xiao-Qun; Guo, Guang-Yu; Anders, Frithjof; Chung, Chung-Hou
2016-05-05
The quantum criticality of the two-lead two-channel pseudogap Anderson impurity model is studied. Based on the non-crossing approximation (NCA) and numerical renormalization group (NRG) approaches, we calculate both the linear and nonlinear conductance of the model at finite temperatures with a voltage bias and a power-law vanishing conduction electron density of states, ρc(ω) ∝ |ω − μF|^r (0 < r < 1), near the Fermi energy μF. At a fixed lead-impurity hybridization, a quantum phase transition from the two-channel Kondo (2CK) to the local moment (LM) phase is observed with increasing r from r = 0 to r = rc < 1. Surprisingly, in the 2CK phase, power-law scalings different from the well-known [Formula: see text] or [Formula: see text] forms are found. Moreover, novel power-law scalings in the conductances at the 2CK-LM quantum critical point are identified. Clear distinctions are found in the critical exponents between the linear and nonlinear conductance at criticality. The implications of these two distinct quantum critical properties for non-equilibrium quantum criticality in general are discussed.
Predicting recycling behaviour: Comparison of a linear regression model and a fuzzy logic model.
Vesely, Stepan; Klöckner, Christian A; Dohnal, Mirko
2016-03-01
In this paper we demonstrate that fuzzy logic can provide a better tool for predicting recycling behaviour than the customarily used linear regression. To show this, we take a set of empirical data on recycling behaviour (N=664), which we randomly divide into two halves. The first half is used to estimate a linear regression model of recycling behaviour, and to develop a fuzzy logic model of recycling behaviour. As the first comparison, the fit of both models to the data included in estimation of the models (N=332) is evaluated. As the second comparison, predictive accuracy of both models for "new" cases (hold-out data not included in building the models, N=332) is assessed. In both cases, the fuzzy logic model significantly outperforms the regression model in terms of fit. To conclude, when accurate predictions of recycling and possibly other environmental behaviours are needed, fuzzy logic modelling seems to be a promising technique. Copyright © 2015 Elsevier Ltd. All rights reserved.
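The split-half protocol used in this study (estimate on one random half, evaluate fit on that half, then check predictive accuracy on the held-out half) can be sketched generically. The fuzzy logic model itself is not reproduced here; the sketch uses synthetic data and plain least squares only to illustrate the comparison procedure, and all numbers are made up:

```python
import random

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - slope * mx, slope

def mse(xs, ys, a, b):
    """Mean squared prediction error of the fitted line."""
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / len(xs)

random.seed(0)
# Synthetic data: y = 2 + 0.5*x plus noise, standing in for behaviour scores.
data = [(x, 2.0 + 0.5 * x + random.gauss(0, 0.2))
        for x in (random.uniform(0, 10) for _ in range(200))]
random.shuffle(data)
train, test = data[:100], data[100:]      # random halves, as in the study

a, b = fit_line(*zip(*train))             # estimate on the first half
train_err = mse(*zip(*train), a, b)       # fit to the estimation data
hold_out_err = mse(*zip(*test), a, b)     # predictive accuracy on new cases
```

The hold-out error is the fair basis for comparing two model families, since in-sample fit rewards whichever model is more flexible.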
Mixed models, linear dependency, and identification in age-period-cohort models.
O'Brien, Robert M
2017-07-20
This paper examines the identification problem in age-period-cohort models that use either linear or categorically coded ages, periods, and cohorts or combinations of these parameterizations. These models are not identified using the traditional fixed effect regression model approach because of a linear dependency between the ages, periods, and cohorts. However, these models can be identified if the researcher introduces a single just identifying constraint on the model coefficients. The problem with such constraints is that the results can differ substantially depending on the constraint chosen. Somewhat surprisingly, age-period-cohort models that specify one or more of ages and/or periods and/or cohorts as random effects are identified. This is the case without introducing an additional constraint. I label this identification as statistical model identification and show how statistical model identification comes about in mixed models and why which effects are treated as fixed and which are treated as random can substantially change the estimates of the age, period, and cohort effects. Copyright © 2017 John Wiley & Sons, Ltd.
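The linear dependency driving the identification problem is easy to exhibit: cohort = period − age, so a fixed-effects design matrix containing all three linear terms loses a rank. A minimal sketch (the rank routine is generic Gaussian elimination, not code from the paper):

```python
# Cohort is determined by age and period (cohort = period - age), so the
# columns [intercept, age, period, cohort] are exactly collinear.
grid = [(a, p, p - a) for a in range(4) for p in range(4)]
X = [[1.0, a, p, c] for a, p, c in grid]

def matrix_rank(M, tol=1e-9):
    """Matrix rank via Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    rank, n_rows, n_cols = 0, len(M), len(M[0])
    for col in range(n_cols):
        pivot = max(range(rank, n_rows), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) < tol:
            continue                      # column is dependent: no pivot
        M[rank], M[pivot] = M[pivot], M[rank]
        for r in range(n_rows):
            if r != rank:
                f = M[r][col] / M[rank][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[rank])]
        rank += 1
    return rank

# 4 columns but rank 3: exactly one just-identifying constraint is needed.
deficiency = len(X[0]) - matrix_rank(X)
```

Any single linear constraint on the coefficients restores a unique solution, which is why the estimates hinge on which constraint the researcher picks.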
Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William
2016-01-01
Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, and accounts for the intrinsic complexity of the data. We start with standard cubic splines regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compared cubic regression splines with linear piecewise splines, with varying numbers and positions of knots. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 (p < 0.001) when using a linear mixed-effect model with random slopes and a first-order continuous autoregressive error term. There was substantial heterogeneity in both the intercept (p < 0.001) and slopes (p < 0.001) of the individual growth trajectories. We also identified important serial correlation within the structure of the data (ρ = 0.66; 95 % CI 0.64 to 0.68; p < 0.001), which we modeled with a first-order continuous autoregressive error term as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth
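The large drop in unexplained variance reported above comes from letting each child have their own intercept (and slope). A toy sketch of the mechanism, with entirely made-up numbers and the true slope treated as known for simplicity:

```python
import random
from statistics import mean, pvariance

random.seed(42)
# Hypothetical cohort: each child has an individual baseline height
# (a random intercept), a shared growth slope, and small noise.
children = []
for _ in range(50):
    intercept = random.gauss(50, 3)        # between-child heterogeneity
    obs = [(t, intercept + 6 * t + random.gauss(0, 0.5))
           for t in range(5)]              # five yearly measurements
    children.append(obs)

SLOPE = 6.0                                # assume the slope is known

# Pooled OLS ignores who the child is: the between-child spread of
# intercepts ends up in the residuals, inflating unexplained variance.
pooled_resid = [y - SLOPE * t for c in children for t, y in c]
pooled_var = pvariance(pooled_resid)

# Random-intercept view: remove each child's own mean level first,
# leaving only the measurement noise in the residuals.
within_resid = []
for c in children:
    r = [y - SLOPE * t for t, y in c]
    m = mean(r)
    within_resid.extend(x - m for x in r)
within_var = pvariance(within_resid)
```

The same logic, with splines for nonlinearity and an autoregressive error term for serial correlation, underlies the 7.34 to 0.81 variance reduction in the cohort analysis.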
Optimal Scaling of Interaction Effects in Generalized Linear Models
ERIC Educational Resources Information Center
van Rosmalen, Joost; Koning, Alex J.; Groenen, Patrick J. F.
2009-01-01
Multiplicative interaction models, such as Goodman's (1981) RC(M) association models, can be a useful tool for analyzing the content of interaction effects. However, most models for interaction effects are suitable only for data sets with two or three predictor variables. Here, we discuss an optimal scaling model for analyzing the content of…
Modeling thermal sensation in a Mediterranean climate-a comparison of linear and ordinal models.
Pantavou, Katerina; Lykoudis, Spyridon
2014-08-01
A simple thermo-physiological model of outdoor thermal sensation, adjusted with psychological factors, is developed with the aim of predicting thermal sensation in Mediterranean climates. Microclimatic measurements, together with interviews on personal and psychological conditions, were carried out in a square, a street canyon and a coastal location of the greater urban area of Athens, Greece. Multiple linear and ordinal regression were applied to estimate thermal sensation using either all recorded parameters or specific, empirically selected subsets, producing so-called extensive and empirical models, respectively. Meteorological, thermo-physiological and overall models, the last also considering psychological factors, were developed. Predictions improved when personal and psychological factors were taken into account, as compared to meteorological models. The model based on ordinal regression reproduced extreme values of the thermal sensation vote more adequately than the linear regression one, while the empirical model produced satisfactory results relative to the extensive model. The effects of adaptation and expectation on the thermal sensation vote were introduced in the models by means of exposure time, season, and preferences related to air temperature and irradiation. The assessment of thermal sensation could be a useful criterion in decision making regarding public health, the planning of outdoor spaces, and tourism.
Linear moose model with pairs of degenerate gauge boson triplets
NASA Astrophysics Data System (ADS)
Casalbuoni, Roberto; Coradeschi, Francesco; de Curtis, Stefania; Dominici, Daniele
2008-05-01
The possibility of a strongly interacting electroweak symmetry breaking sector, as opposed to the weakly interacting light Higgs of the standard model, is not yet ruled out by experiments. In this paper we make an extensive study of a deconstructed model (or “moose” model) providing an effective description of such a strong symmetry breaking sector, and show its compatibility with experimental data for a wide portion of the model parameter space. The model is a direct generalization of the previously proposed D-BESS model.
Frequency Response of Synthetic Vocal Fold Models with Linear and Nonlinear Material Properties
ERIC Educational Resources Information Center
Shaw, Stephanie M.; Thomson, Scott L.; Dromey, Christopher; Smith, Simeon
2012-01-01
Purpose: The purpose of this study was to create synthetic vocal fold models with nonlinear stress-strain properties and to investigate the effect of linear versus nonlinear material properties on fundamental frequency (F[subscript 0]) during anterior-posterior stretching. Method: Three materially linear and 3 materially nonlinear models were…
Item Purification in Differential Item Functioning Using Generalized Linear Mixed Models
ERIC Educational Resources Information Center
Liu, Qian
2011-01-01
For this dissertation, four item purification procedures were implemented onto the generalized linear mixed model for differential item functioning (DIF) analysis, and the performance of these item purification procedures was investigated through a series of simulations. Among the four procedures, forward and generalized linear mixed model (GLMM)…
ERIC Educational Resources Information Center
Preacher, Kristopher J.; Curran, Patrick J.; Bauer, Daniel J.
2006-01-01
Simple slopes, regions of significance, and confidence bands are commonly used to evaluate interactions in multiple linear regression (MLR) models, and the use of these techniques has recently been extended to multilevel or hierarchical linear modeling (HLM) and latent curve analysis (LCA). However, conducting these tests and plotting the…
Model reference adaptive control for linear time varying and nonlinear systems
NASA Technical Reports Server (NTRS)
Abida, L.; Kaufman, H.
1982-01-01
Model reference adaptive control is applied to linear time varying systems and to nonlinear systems amenable to virtual linearization. Asymptotic stability is guaranteed even if the perfect model following conditions do not hold, provided that some sufficient conditions are satisfied. Simulations show the scheme to be capable of effectively controlling certain nonlinear systems.
ROMS Tangent Linear and Adjoint Models: Testing and Applications
2001-09-30
long-term scientific goal is to model and predict the mesoscale circulation and the ecosystem response to physical forcing in the various regions of the world ocean through ROMS primitive equation modeling/assimilation.
ROMS Tangent Linear and Adjoint Models: Testing and Applications
2002-09-30
long-term scientific goal is to model and predict the mesoscale circulation and the ecosystem response to physical forcing in the various regions of the world ocean through ROMS primitive equation modeling/assimilation.
The linear-quadratic model is inappropriate to model high dose per fraction effects in radiosurgery.
Kirkpatrick, John P; Meyer, Jeffrey J; Marks, Lawrence B
2008-10-01
The linear-quadratic (LQ) model is widely used to model the effect of total dose and dose per fraction in conventionally fractionated radiotherapy. Much of the data used to generate the model are obtained in vitro at doses well below those used in radiosurgery. Clinically, the LQ model often underestimates tumor control observed at radiosurgical doses. The underlying mechanisms implied by the LQ model do not reflect the vascular and stromal damage produced at the high doses per fraction encountered in radiosurgery and ignore the impact of radioresistant subpopulations of cells. The appropriate modeling of both tumor control and normal tissue toxicity in radiosurgery requires the application of emerging understanding of molecular-, cellular-, and tissue-level effects of high-dose/fraction-ionizing radiation and the role of cancer stem cells.
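For reference, the LQ surviving fraction per fraction of dose d is S = exp(-(alpha*d + beta*d^2)). A sketch with illustrative (not fitted) alpha and beta values, comparing conventional fractionation against a single radiosurgical dose; the point of the abstract is precisely that extrapolating this curve to the single-shot regime is questionable:

```python
import math

def lq_survival(dose, alpha=0.3, beta=0.03):
    """Surviving fraction under the linear-quadratic model for one
    fraction of size `dose`: S = exp(-(alpha*d + beta*d**2)).
    alpha (Gy^-1) and beta (Gy^-2) here are illustrative values only."""
    return math.exp(-(alpha * dose + beta * dose ** 2))

# Same 20 Gy total dose delivered two ways:
fractionated = lq_survival(2.0) ** 10   # 10 fractions of 2 Gy
single_shot = lq_survival(20.0)         # one radiosurgical 20 Gy fraction
```

The quadratic term makes the single large fraction vastly more cell-killing on paper; in practice, vascular and stromal damage and radioresistant subpopulations, none of which the model represents, dominate at such doses.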
Development of a Linear Stirling System Model with Varying Heat Inputs
NASA Technical Reports Server (NTRS)
Regan, Timothy F.; Lewandowski, Edward J.
2007-01-01
The linear model of the Stirling system developed by NASA Glenn Research Center (GRC) has been extended to include a user-specified heat input. Previously developed linear models were limited to the Stirling convertor and electrical load. They represented the thermodynamic cycle with pressure factors that remained constant. The numerical values of the pressure factors were generated by linearizing GRC's nonlinear System Dynamic Model (SDM) of the convertor at a chosen operating point. The pressure factors were fixed for that operating point, thus, the model lost accuracy if a transition to a different operating point were simulated. Although the previous linear model was used in developing controllers that manipulated current, voltage, and piston position, it could not be used in the development of control algorithms that regulated hot-end temperature. This basic model was extended to include the thermal dynamics associated with a hot-end temperature that varies over time in response to external changes as well as to changes in the Stirling cycle. The linear model described herein includes not only dynamics of the piston, displacer, gas, and electrical circuit, but also the transient effects of the heater head thermal inertia. The linear version algebraically couples two separate linear dynamic models, one model of the Stirling convertor and one model of the thermal system, through the pressure factors. The thermal system model includes heat flow of heat transfer fluid, insulation loss, and temperature drops from the heat source to the Stirling convertor expansion space. The linear model was compared to a nonlinear model, and performance was very similar. The resulting linear model can be implemented in a variety of computing environments, and is suitable for analysis with classical and state space controls analysis techniques.
NASA Astrophysics Data System (ADS)
Zattoni, Elena
2017-01-01
This paper investigates the problem of structural model matching by output feedback in linear impulsive systems with control feedthrough. Namely, given a linear impulsive plant, possibly featuring an algebraic link from the control input to the output, and given a linear impulsive model, the problem consists in finding a linear impulsive regulator that achieves exact matching between the respective forced responses of the linear impulsive plant and of the linear impulsive model, for all the admissible input functions and all the admissible sequences of jump times, by means of a dynamic feedback of the plant output. The problem solvability is characterized by a necessary and sufficient condition. The regulator synthesis is outlined through the proof of sufficiency, which is constructive.
Analysis of operating principles with S-system models.
Lee, Yun; Chen, Po-Wei; Voit, Eberhard O
2011-05-01
Operating principles address general questions regarding the response dynamics of biological systems as we observe or hypothesize them, in comparison to a priori equally valid alternatives. In analogy to design principles, the question arises: Why are some operating strategies encountered more frequently than others and in what sense might they be superior? It is at this point impossible to study operating principles in complete generality, but the work here discusses the important situation where a biological system must shift operation from its normal steady state to a new steady state. This situation is quite common and includes many stress responses. We present two distinct methods for determining different solutions to this task of achieving a new target steady state. Both methods utilize the property of S-system models within Biochemical Systems Theory (BST) that steady states can be explicitly represented as systems of linear algebraic equations. The first method uses matrix inversion, a pseudo-inverse, or regression to characterize the entire admissible solution space. Operations on the basis of the solution space permit modest alterations of the transients toward the target steady state. The second method uses standard or mixed integer linear programming to determine admissible solutions that satisfy criteria of functional effectiveness, which are specified beforehand. As an illustration, we use both methods to characterize alternative response patterns of yeast subjected to heat stress, and compare them with observations from the literature.
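The key property exploited by both methods, that S-system steady states are linear in the logarithms of the variables, can be made concrete on a toy two-pool system. All kinetic orders and rate constants below are invented for illustration:

```python
import math

# An S-system steady state satisfies, for each pool i,
#   alpha_i * prod_j X_j**g_ij  =  beta_i * prod_j X_j**h_ij,
# which in log-coordinates y_j = ln(X_j) becomes the linear system
#   sum_j (g_ij - h_ij) * y_j = ln(beta_i / alpha_i).
alpha = [1.0, 3.0]                 # production rate constants
beta = [2.0, 1.5]                  # degradation rate constants
g = [[0.5, 0.0], [0.3, 0.0]]       # production kinetic orders
h = [[0.0, 0.4], [0.0, 0.6]]       # degradation kinetic orders

A = [[g[i][j] - h[i][j] for j in range(2)] for i in range(2)]
b = [math.log(beta[i] / alpha[i]) for i in range(2)]

# Solve the 2x2 linear system A @ y = b by Cramer's rule.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
y1 = (b[0] * A[1][1] - A[0][1] * b[1]) / det
y2 = (A[0][0] * b[1] - b[0] * A[1][0]) / det
X1, X2 = math.exp(y1), math.exp(y2)   # steady-state concentrations
```

In higher dimensions the same linear system is handed to a pseudo-inverse or regression (method one) or embedded as constraints in a linear or mixed integer linear program (method two).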
Generalized linear mixed models can detect unimodal species-environment relationships.
Jamil, Tahira; Ter Braak, Cajo J F
2013-01-01
Niche theory predicts that species occurrence and abundance show non-linear, unimodal relationships with respect to environmental gradients. Unimodal models, such as the Gaussian (logistic) model, are however more difficult to fit to data than linear ones, particularly in a multi-species context in ordination, with trait-modulated response and when species phylogeny and species traits must be taken into account. Adding squared terms to a linear model is a possibility but gives uninterpretable parameters. This paper explains why and when generalized linear mixed models, even without squared terms, can effectively analyse unimodal data, and also presents a graphical tool and statistical test for unimodal response that can be applied while fitting just the generalized linear mixed model. The R code for this is supplied in Supplemental Information 1.
Indistinguishability and identifiability analysis of linear compartmental models.
Zhang, L Q; Collins, J C; King, P H
1991-02-01
Two compartmental model structures are said to be indistinguishable if they have the same input-output properties. In cases in which available a priori information is not sufficient to specify a unique compartmental model structure, indistinguishable model structures may have to be generated and their attributes examined for relevance. An algorithm is developed that, for a given compartmental model, investigates the complete set of models with the same number of compartments and the same input-output structure as the original model, applies geometrical rules necessary for indistinguishable models, and tests models meeting the geometrical criteria for equality of transfer functions. Identifiability is also checked in the algorithm. The software consists of three programs. Program 1 determines the number of locally identifiable parameters. Program 2 applies several geometrical rules that eliminate many (generally most) of the candidate models. Program 3 checks the equality between system transfer functions of the original model and models being tested. Ranks of Jacobian matrices and submatrices and other criteria are used to check patterns of moment invariants and local identifiability. Structural controllability and structural observability are checked throughout the programs. The approach was successfully used to corroborate results from examples investigated by others.
A deterministic aggregate production planning model considering quality of products
NASA Astrophysics Data System (ADS)
Madadi, Najmeh; Yew Wong, Kuan
2013-06-01
Aggregate Production Planning (APP) is a medium-term planning activity concerned with the lowest-cost method of production planning to meet customers' requirements and to satisfy fluctuating demand over a planning time horizon. The APP problem has been studied widely since it was introduced and formulated in the 1950s. However, most studies in the APP area have concentrated on common objectives such as minimization of cost, fluctuation in the number of workers, and inventory level. In particular, maintaining quality at a desirable level as an objective while minimizing cost has not been considered in previous studies. In this study, an attempt has been made to develop a multi-objective mixed integer linear programming model that serves those companies aiming to incur the minimum level of operational cost while maintaining quality at an acceptable level. In order to obtain the solution to the multi-objective model, the Fuzzy Goal Programming approach and the max-min operator of Bellman-Zadeh were applied to the model. At the final step, IBM ILOG CPLEX Optimization Studio software was used to obtain the experimental results based on the data collected from an automotive parts manufacturing company. The results show that incorporating quality in the model imposes some costs; however, a trade-off should be made between the cost of producing higher-quality products and the cost the firm may incur due to customer dissatisfaction and lost sales.
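As a toy illustration of the kind of trade-off an APP model encodes, the following sketch minimizes production plus inventory holding cost over a short horizon. All numbers are hypothetical, and an exhaustive search stands in for a MILP solver such as CPLEX, which the study applies to a far richer multi-objective model:

```python
from itertools import product

# Toy aggregate production plan (hypothetical data): choose integer
# production quantities per period to meet demand at minimum production plus
# inventory holding cost.
demand = [3, 5, 4]            # units required in each period
c_prod, c_hold = 10, 2        # cost per unit produced / per unit held
cap = 6                       # production capacity per period

best_plan, best_cost = None, float("inf")
for plan in product(range(cap + 1), repeat=len(demand)):
    inv, cost, feasible = 0, 0, True
    for t, p in enumerate(plan):
        inv += p - demand[t]
        if inv < 0:                      # backorders not allowed
            feasible = False
            break
        cost += c_prod * p + c_hold * inv
    if feasible and cost < best_cost:
        best_plan, best_cost = plan, cost
```

Here producing exactly to demand, (3, 5, 4), is optimal at cost 120; lowering the capacity below the demand peak would force pre-building inventory and incur holding cost.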
Hierarchical Shrinkage Priors and Model Fitting for High-dimensional Generalized Linear Models
Yi, Nengjun; Ma, Shuangge
2013-01-01
Genetic and other scientific studies routinely generate very many predictor variables, which can be naturally grouped, with predictors in the same groups being highly correlated. It is desirable to incorporate the hierarchical structure of the predictor variables into generalized linear models for simultaneous variable selection and coefficient estimation. We propose two prior distributions: hierarchical Cauchy and double-exponential distributions, on coefficients in generalized linear models. The hierarchical priors include both variable-specific and group-specific tuning parameters, thereby not only adopting different shrinkage for different coefficients and different groups but also providing a way to pool the information within groups. We fit generalized linear models with the proposed hierarchical priors by incorporating flexible expectation-maximization (EM) algorithms into the standard iteratively weighted least squares as implemented in the general statistical package R. The methods are illustrated with data from an experiment to identify genetic polymorphisms for survival of mice following infection with Listeria monocytogenes. The performance of the proposed procedures is further assessed via simulation studies. The methods are implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). PMID:23192052
Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne
2012-01-01
In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models. PMID:23275882
Microgrid Reliability Modeling and Battery Scheduling Using Stochastic Linear Programming
Cardoso, Goncalo; Stadler, Michael; Siddiqui, Afzal; Marnay, Chris; DeForest, Nicholas; Barbosa-Povoa, Ana; Ferrao, Paulo
2013-05-23
This paper describes the introduction of stochastic linear programming into Operations DER-CAM, a tool used to obtain optimal operating schedules for a given microgrid under local economic and environmental conditions. This application follows previous work on optimal scheduling of a lithium-iron-phosphate battery given the output uncertainty of a 1 MW molten carbonate fuel cell. Both are in the Santa Rita Jail microgrid, located in Dublin, California. This fuel cell has proven unreliable, partially justifying the consideration of storage options. Several stochastic DER-CAM runs are executed to compare different scenarios to values obtained by a deterministic approach. Results indicate that using a stochastic approach provides a conservative yet more lucrative battery schedule. Given fuel cell outages, lower expected energy bills result, with potential savings exceeding 6%.
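The value of the stochastic formulation can be illustrated with a deliberately tiny two-stage example: a battery charge level must be chosen before knowing whether the fuel cell fails. The prices, demand, and failure probability below are invented for illustration, and enumeration stands in for the stochastic linear program:

```python
# Two-stage stochastic toy (all numbers hypothetical, far simpler than
# DER-CAM): choose how much to pre-charge a battery at off-peak prices
# before knowing whether the fuel cell will fail.
p_charge, p_outage = 0.10, 0.50      # $/kWh: off-peak price, price during outage
demand, p_fail = 8.0, 0.3            # kWh to cover if the fuel cell fails

def expected_cost(charge):
    recourse = p_outage * max(0.0, demand - charge)   # grid purchase on failure
    return p_charge * charge + p_fail * recourse

charges = [0.5 * k for k in range(21)]                # candidate levels, 0..10 kWh
best_charge = min(charges, key=expected_cost)
deterministic_charge = 0.0       # a plan that ignores outage risk charges nothing
```

The stochastic solution pre-charges conservatively and has a lower expected bill than the plan that ignores outages, mirroring the "conservative yet more lucrative" schedule reported in the abstract.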
Computational models of signalling networks for non-linear control.
Fuente, Luis A; Lones, Michael A; Turner, Alexander P; Stepney, Susan; Caves, Leo S; Tyrrell, Andy M
2013-05-01
Artificial signalling networks (ASNs) are a computational approach inspired by the signalling processes inside cells that decode outside environmental information. Using evolutionary algorithms to induce complex behaviours, we show how chaotic dynamics in a conservative dynamical system can be controlled. Such dynamics are of particular interest as they mimic the inherent complexity of non-linear physical systems in the real world. Considering the main biological interpretations of cellular signalling, in which complex behaviours and robust cellular responses emerge from the interaction of multiple pathways, we introduce two ASN representations: a stand-alone ASN and a coupled ASN. In particular we note how sophisticated cellular communication mechanisms can lead to effective controllers, where complicated problems can be divided into smaller and independent tasks.
Model Averaging Methods for Weight Trimming in Generalized Linear Regression Models.
Elliott, Michael R
2009-03-01
In sample surveys where units have unequal probabilities of inclusion, associations between the inclusion probability and the statistic of interest can induce bias in unweighted estimates. This is true even in regression models, where the estimates of the population slope may be biased if the underlying mean model is misspecified or the sampling is nonignorable. Weights equal to the inverse of the probability of inclusion are often used to counteract this bias. Highly disproportional sample designs have highly variable weights; weight trimming reduces large weights to a maximum value, reducing variability but introducing bias. Most standard approaches are ad hoc in that they do not use the data to optimize bias-variance trade-offs. This article uses Bayesian model averaging to create "data driven" weight trimming estimators. We extend previous results for linear regression models (Elliott 2008) to generalized linear regression models, developing robust models that approximate fully-weighted estimators when bias correction is of greatest importance, and approximate unweighted estimators when variance reduction is critical.
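The bias-variance trade-off behind weight trimming can be seen in a minimal sketch (a plain trimmed weighted mean, not the Bayesian model-averaging estimator developed in the article); the data and cap below are hypothetical:

```python
# Minimal sketch of survey weight trimming: weights above a cap are reduced
# to the cap, limiting the influence of extreme units. Data are hypothetical;
# one unit has an extreme inclusion weight.
def trimmed_weighted_mean(ys, ws, cap):
    tw = [min(w, cap) for w in ws]        # reduce weights above the cap
    return sum(w * y for w, y in zip(tw, ys)) / sum(tw)

ys = [1.0, 2.0, 3.0, 10.0]
ws = [1.0, 1.0, 1.0, 20.0]                # highly disproportional design
fully_weighted = trimmed_weighted_mean(ys, ws, cap=float("inf"))
trimmed = trimmed_weighted_mean(ys, ws, cap=4.0)
```

Trimming pulls the estimate toward the unweighted mean (4.0 here), reducing the variance contribution of the extreme unit at the price of some bias, which is exactly the trade-off the article's estimators adapt to the data.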
Fault Detection and Model Identification in Linear Dynamical Systems
2001-02-01
fault detection and isolation (FDI). One avenue of FDI is via the multi-model approach, in which the parameters of the nominal, unfailed model of the system are known, as well as the parameters of one or more fault models. The design goal is to obtain an indicator for when a fault has occurred, and, when more than one type is possible, which type of fault it is. A choice that must be made in the system design is how to model noise. One way is as a bounded energy signal. This approach places very few restrictions on the types of noisy systems which
Cozad, A.; Sahinidis, N.; Miller, D.
2011-01-01
Costly and/or insufficiently robust simulations or experiments can often pose difficulties when their use extends well beyond a single evaluation. This is the case with the numerous evaluations required for uncertainty quantification, when an algebraic model is needed for optimization, and in numerous other areas. To overcome these difficulties, we generate an accurate set of algebraic surrogate models of disaggregated process blocks of the experiment or simulation. We developed a method that uses derivative-based and derivative-free optimization alongside machine learning and statistical techniques to generate the set of surrogate models using data sampled from experiments or detailed simulations. Our method begins by building a low-complexity surrogate model for each block from an initial sample set. The model is built using a best subset technique that leverages a mixed-integer linear problem formulation to allow for very large initial basis sets. The models are then tested, exploited, and improved through the use of derivative-free solvers to adaptively sample new simulation or experimental points. The sets of surrogate models from each disaggregated process block are then combined with heat and mass balances around each disaggregated block to generate a full algebraic model of the process. The full model can be used for cheap and accurate evaluations of the original simulation or experiment or combined with design specifications and an objective for nonlinear optimization.
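The best-subset step can be illustrated with a toy version: enumerate subsets of a small candidate basis, fit each by least squares, and penalize complexity. The basis, data, and penalty below are assumptions for illustration; the method described above solves this selection via a mixed-integer formulation rather than by enumeration:

```python
import math
from itertools import combinations

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def fit(xs, ys, funcs):
    """Least-squares fit ys ~ sum_j c_j * f_j(x) via the normal equations."""
    n, m = len(funcs), len(xs)
    X = [[f(x) for f in funcs] for x in xs]
    A = [[sum(X[i][p] * X[i][q] for i in range(m)) for q in range(n)]
         for p in range(n)]
    b = [sum(X[i][p] * ys[i] for i in range(m)) for p in range(n)]
    c = solve(A, b)
    sse = sum((ys[i] - sum(cj * X[i][j] for j, cj in enumerate(c))) ** 2
              for i in range(m))
    return c, sse

# Hypothetical candidate basis and noise-free data from y = 2x + x^2.
basis = {"1": lambda x: 1.0, "x": lambda x: x,
         "x^2": lambda x: x * x, "sin(x)": math.sin}
xs = [0.1 * i for i in range(20)]
ys = [2.0 * x + x * x for x in xs]
lam = 0.01                                 # complexity penalty per basis term

# Score every subset: fit error plus a penalty on the number of terms.
best = min((fit(xs, ys, [basis[k] for k in sub])[1] + lam * len(sub), sub)
           for r in range(1, len(basis) + 1)
           for sub in combinations(sorted(basis), r))
```

The penalty makes the search prefer the smallest subset that fits well, which here recovers exactly the two terms used to generate the data.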
Numerical simulation of blood flow through a capillary using a non-linear viscoelastic model.
Shariatkhah, Amin; Norouzi, Mahmood; Nobari, Mohammad Reza Heyrani
2016-01-01
In this article, a periodic developing blood flow in a capillary is simulated using a non-linear viscoelastic model for the first time. Here, the Giesekus model is used as the constitutive equation, and based on the experimental data, the best values for the mobility factor and zero shear rate viscosity are derived. The numerical solution of the problem is obtained using the finite volume method. The algorithm of the solution is pressure implicit with splitting of operators (PISO). The simulations were carried out using the Giesekus, Oldroyd-B and Newtonian models, and the results indicate that the Giesekus model presents a more accurate solution for the stress and velocity fields than the Newtonian and Oldroyd-B models. The previous studies on this problem were restricted to the linear and quasi-linear viscoelastic models. It is shown that only non-linear viscoelastic models can accurately describe the experimental data of unsteady blood flow in capillaries.
Semi-physical neural modeling for linear signal restoration.
Bourgois, Laurent; Roussel, Gilles; Benjelloun, Mohammed
2013-02-01
This paper deals with the design methodology of an Inverse Neural Network (INN) model. The basic idea is to construct a semi-physical model combining two types of information: a priori knowledge of the deterministic rules that govern the studied system, and observations of the system's actual behaviour obtained from experimental data. This hybrid model is built on the mechanisms of a neuromimetic network whose structure is constrained by the discrete reverse-time state-space equations. In order to validate the approach, tests are performed on two dynamic models. The first is a dynamic system characterized by an Ordinary Differential Equation (ODE) of unspecified order r. The second concerns the mass balance equation for a dispersion phenomenon governed by a Partial Differential Equation (PDE) discretized on a basic mesh. The performance is analyzed numerically in terms of generalization, regularization and training effort.
Using multiple linear regression model to estimate thunderstorm activity
NASA Astrophysics Data System (ADS)
Suparta, W.; Putro, W. S.
2017-03-01
This paper aims to develop a numerical model, based on a nonlinear model, to estimate thunderstorm activity. Meteorological data such as Pressure (P), Temperature (T), Relative Humidity (H), cloud (C), Precipitable Water Vapor (PWV), and precipitation on a daily basis were used in the proposed method. The model was constructed with six configurations of input and one target output. The output tested in this work is the thunderstorm event, using one year of data. Results showed that the model estimates thunderstorm activities well, with the maximum epoch reaching 1000 iterations and a percent error below 50%. The model also found that thunderstorm activity in May and October is higher than in the other months due to the inter-monsoon season.
A Multiphase Non-Linear Mixed Effects Model: An Application to Spirometry after Lung Transplantation
Rajeswaran, Jeevanantham; Blackstone, Eugene H.
2014-01-01
In medical sciences, we often encounter longitudinal temporal relationships that are non-linear in nature. The influence of risk factors may also change across longitudinal follow-up. A system of multiphase non-linear mixed effects model is presented to model temporal patterns of longitudinal continuous measurements, with temporal decomposition to identify the phases and risk factors within each phase. Application of this model is illustrated using spirometry data after lung transplantation using readily available statistical software. This application illustrates the usefulness of our flexible model when dealing with complex non-linear patterns and time varying coefficients. PMID:24919830
Frequency response of synthetic vocal fold models with linear and nonlinear material properties.
Shaw, Stephanie M; Thomson, Scott L; Dromey, Christopher; Smith, Simeon
2012-10-01
The purpose of this study was to create synthetic vocal fold models with nonlinear stress-strain properties and to investigate the effect of linear versus nonlinear material properties on fundamental frequency (F0) during anterior-posterior stretching. Three materially linear and 3 materially nonlinear models were created and stretched up to 10 mm in 1-mm increments. Phonation onset pressure (Pon) and F0 at Pon were recorded for each length. Measurements were repeated as the models were relaxed in 1-mm increments back to their resting lengths, and tensile tests were conducted to determine the stress-strain responses of linear versus nonlinear models. Nonlinear models demonstrated a more substantial frequency response than did linear models and a more predictable pattern of F0 increase with respect to increasing length (although range was inconsistent across models). Pon generally increased with increasing vocal fold length for nonlinear models, whereas for linear models, Pon decreased with increasing length. Nonlinear synthetic models appear to more accurately represent the human vocal folds than do linear models, especially with respect to F0 response.
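A rough intuition for the observed F0 behaviour comes from the ideal-string formula F0 = sqrt(sigma/rho) / (2L): under an exponential (nonlinear) stress-strain law, stress grows much faster with elongation than under a linear law, producing a larger relative rise in F0. The material laws and parameter values below are hypothetical, not the models of the study:

```python
import math

# Toy comparison (not the study's models): fundamental frequency of an ideal
# string under a linear versus an exponential stress-strain law.
rho = 1040.0                         # tissue density, kg/m^3 (assumed)
L0 = 0.017                           # resting length, m (assumed)

def f0(stress, length):
    return math.sqrt(stress / rho) / (2.0 * length)

def linear_stress(strain, E=20000.0):
    return E * strain                          # sigma = E * eps

def nonlinear_stress(strain, A=1000.0, B=10.0):
    return A * (math.exp(B * strain) - 1.0)    # exponential stiffening

strains = [0.05 * k for k in range(1, 7)]      # 5% .. 30% elongation
f_lin = [f0(linear_stress(e), L0 * (1 + e)) for e in strains]
f_non = [f0(nonlinear_stress(e), L0 * (1 + e)) for e in strains]
```

With these invented constants the nonlinear law yields a much larger relative F0 rise over the same stretch, echoing the more substantial frequency response reported for the nonlinear models.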
Fitting host-parasitoid models with CV² > 1 using hierarchical generalized linear models.
Perry, J N; Noh, M S; Lee, Y; Alston, R D; Norowi, H M; Powell, W; Rennolls, K
2000-01-01
The powerful general Pacala-Hassell host-parasitoid model for a patchy environment, which allows host density-dependent heterogeneity (HDD) to be distinguished from between-patch, host density-independent heterogeneity (HDI), is reformulated within the class of the generalized linear model (GLM) family. This improves accessibility through the provision of general software within well-known statistical systems, and allows a rich variety of models to be formulated. Covariates such as age class, host density and abiotic factors may be included easily. For the case where there is no HDI, the formulation is a simple GLM. When there is HDI in addition to HDD, the formulation is a hierarchical generalized linear model. Two forms of HDI model are considered, both with between-patch variability: one has binomial variation within patches and one has extra-binomial, overdispersed variation within patches. Examples are given demonstrating parameter estimation with standard errors, and hypothesis testing. For one example given, the extra-binomial component of the HDI heterogeneity in parasitism is itself shown to be strongly density dependent. PMID:11416907
Cost decomposition of linear systems with application to model reduction
NASA Technical Reports Server (NTRS)
Skelton, R. E.
1980-01-01
A means is provided to assess the value or 'cost' of each component of a large scale system, when the total cost is a quadratic function. Such a 'cost decomposition' of the system has several important uses. When the components represent physical subsystems which can fail, the 'component cost' is useful in failure mode analysis. When the components represent mathematical equations which may be truncated, the 'component cost' becomes a criterion for model truncation. In this latter event, component costs provide a mechanism by which the specific control objectives dictate which components should be retained in the model reduction process. This information can be valuable in model reduction and decentralized control problems.
Fixed and random effects selection in linear and logistic models.
Kinney, Satkartar K; Dunson, David B
2007-09-01
We address the problem of selecting which variables should be included in the fixed and random components of logistic mixed effects models for correlated data. A fully Bayesian variable selection is implemented using a stochastic search Gibbs sampler to estimate the exact model-averaged posterior distribution. This approach automatically identifies subsets of predictors having nonzero fixed effect coefficients or nonzero random effects variance, while allowing uncertainty in the model selection process. Default priors are proposed for the variance components and an efficient parameter expansion Gibbs sampler is developed for posterior computation. The approach is illustrated using simulated data and an epidemiologic example.
Partially Linear Varying Coefficient Models Stratified by a Functional Covariate
Maity, Arnab; Huang, Jianhua Z.
2012-01-01
We consider the problem of estimation in semiparametric varying coefficient models where the covariate modifying the varying coefficients is functional and is modeled nonparametrically. We develop a kernel-based estimator of the nonparametric component and a profiling estimator of the parametric component of the model and derive their asymptotic properties. Specifically, we show the consistency of the nonparametric functional estimates and derive the asymptotic expansion of the estimates of the parametric component. We illustrate the performance of our methodology using a simulation study and a real data application. PMID:22904586
The puzzle of apparent linear lattice artifacts in the 2d non-linear σ-model and Symanzik's solution
NASA Astrophysics Data System (ADS)
Balog, Janos; Niedermayer, Ferenc; Weisz, Peter
2010-01-01
Lattice artifacts in the 2d O(n) non-linear σ-model are expected to be of the form O(a²), and hence it was (when first observed) disturbing that some quantities in the O(3) model with various actions show parametrically stronger cutoff dependence, apparently O(a), up to very large correlation lengths. In a previous letter Balog et al. (2009) [1] we described the solution to this puzzle. Based on the conventional framework of Symanzik's effective action, we showed that there are logarithmic corrections to the O(a²) artifacts which are especially large (ln³ a) for n=3 and that such artifacts are consistent with the data. In this paper we supply the technical details of this computation. Results of Monte Carlo simulations using various lattice actions for O(3) and O(4) are also presented.
Modeling of thermal storage systems in MILP distributed energy resource models
Steen, David; Stadler, Michael; Cardoso, Gonçalo; ...
2014-08-04
Thermal energy storage (TES) and distributed generation technologies, such as combined heat and power (CHP) or photovoltaics (PV), can be used to reduce energy costs and decrease CO2 emissions from buildings by shifting energy consumption to times with less emissions and/or lower energy prices. To determine the feasibility of investing in TES in combination with other distributed energy resources (DER), mixed integer linear programming (MILP) can be used. Such a MILP model is the well-established Distributed Energy Resources Customer Adoption Model (DER-CAM); however, it currently uses only a simplified TES model to guarantee linearity and short run-times. Loss calculations are based only on the energy contained in the storage. This paper presents a new DER-CAM TES model that allows improved tracking of losses based on ambient and storage temperatures, and compares results with the previous version. A multi-layer TES model is introduced that retains linearity and avoids creating an endogenous optimization problem. The improved model increases the accuracy of the estimated storage losses and enables use of heat pumps for low-temperature storage charging. Ultimately, results indicate that the previous model overestimates the attractiveness of TES investments for cases where investment in heat pumps is not possible, and underestimates it for some locations when heat pumps are allowed. Despite a variation in optimal technology selection between the two models, the objective function value stays quite stable, illustrating the complexity of optimal DER sizing problems in buildings and microgrids.
Fokker-Planck Modelling of PISCES Linear Divertor Simulator
NASA Astrophysics Data System (ADS)
Batishchev, O. V.; Krasheninnikov, S. I.; Schmitz, L.
1996-11-01
The gas target operating regime in the PISCES [1] linear divertor simulator is characterized by a relatively high plasma density, 2.5 × 10^19 m^-3, and low temperature, 8 eV, in the middle section of an ≈ 1 m long plasma column. Near the target, the plasma temperature and density as measured by Langmuir probes drop to 2 eV and 3.5 × 10^18 m^-3, respectively, as a result of electron energy loss due to dissociation, ionization, and radiation. Such a sharp gradient in the plasma parameters can enhance non-local effects. To study these, we performed kinetic simulations of the relaxation of the electron energy distribution function on the experimentally measured background plasma using the adaptive finite-volumes code ALLA [2]. We discuss the effects of the observed incompletely equilibrated electron distribution function on key plasma parameter measurements and plasma-neutral particle interactions. [1] L. Schmitz et al., Physics of Plasmas 2 (1995) 3081. [2] A.A. Batishcheva et al., Physics of Plasmas 3 (1996) 1634. *Under U.S. DoE Contracts No. DE-FG02-91-ER-54109 at MIT, DE-FG02-88-ER-53263 at Lodestar, and DE-FG03-95ER54301 at UCSD.
Modeling results for a linear simulator of a divertor
Hooper, E.B.; Brown, M.D.; Byers, J.A.; Casper, T.A.; Cohen, B.I.; Cohen, R.H.; Jackson, M.C.; Kaiser, T.B.; Molvik, A.W.; Nevins, W.M.; Nilson, D.G.; Pearlstein, L.D.; Rognlien, T.D.
1993-06-23
A divertor simulator, IDEAL, has been proposed by S. Cohen to study the difficult power-handling requirements of the tokamak program in general and the ITER program in particular. Projections of the power density in the ITER divertor reach approximately 1 GW/m^2 along the magnetic fieldlines and > 10 MW/m^2 on a surface inclined at a shallow angle to the fieldlines. These power densities are substantially greater than can be handled reliably on the surface, so new techniques are required to reduce the power density to a reasonable level. Although the divertor physics must be demonstrated in tokamaks, a linear device could contribute to the development because of its flexibility, the easy access to the plasma and to tested components, and long pulse operation (essentially cw). However, a decision to build a simulator requires not just the recognition of its programmatic value, but also confidence that it can meet the required parameters at an affordable cost. Accordingly, as reported here, it was decided to examine the physics of the proposed device, including kinetic effects resulting from the intense heating required to reach the plasma parameters, and to conduct an independent cost estimate. The detailed role of the simulator in a divertor program is not explored in this report.
On unified modeling, theory, and method for solving multi-scale global optimization problems
NASA Astrophysics Data System (ADS)
Gao, David Yang
2016-10-01
A unified model is proposed for general optimization problems in multi-scale complex systems. Based on this model and necessary assumptions in physics, the canonical duality theory is presented in a precise way to include traditional duality theories and popular methods as special applications. Two conjectures on NP-hardness are proposed, which should play important roles in correctly understanding and efficiently solving challenging real-world problems. Applications are illustrated for both nonconvex continuous optimization and mixed integer nonlinear programming.
NASA Technical Reports Server (NTRS)
1979-01-01
The computer program Linear SCIDNT, which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data, is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.
A rate insensitive linear viscoelastic model for soft tissues
Zhang, Wei; Chen, Henry Y.; Kassab, Ghassan S.
2012-01-01
It is well known that many biological soft tissues behave as viscoelastic materials with hysteresis curves being nearly independent of strain rate when loading frequency is varied over a large range. In this work, the rate insensitive feature of biological materials is taken into account by a generalized Maxwell model. To minimize the number of model parameters, it is assumed that the characteristic frequencies of Maxwell elements form a geometric series. As a result, the model is characterized by five material constants: μ0, τ, m, ρ and β, where μ0 is the relaxed elastic modulus, τ the characteristic relaxation time, m the number of Maxwell elements, ρ the gap between characteristic frequencies, and β = μ1/μ0 with μ1 being the elastic modulus of the Maxwell body that has relaxation time τ. The physical basis of the model is motivated by the microstructural architecture of typical soft tissues. The novel model shows excellent fit of relaxation data on the canine aorta and captures the salient features of vascular viscoelasticity with significantly fewer model parameters. PMID:17512585
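The five-parameter construction described above can be sketched directly. This is a minimal illustration under two stated assumptions that are not taken from the paper: every Maxwell element shares the same modulus μ1 = β·μ0, and element k has relaxation time τ/ρ^k, so the characteristic frequencies form a geometric series:

```python
import numpy as np

def relaxation_modulus(t, mu0, tau, m, rho, beta):
    """Relaxation modulus of a generalized Maxwell model whose
    characteristic frequencies form a geometric series with gap rho.

    Assumes (hypothetically) all m Maxwell elements share the modulus
    mu1 = beta * mu0, and element k relaxes with time tau / rho**k."""
    mu1 = beta * mu0
    taus = tau / rho ** np.arange(m)              # geometric time scales
    decay = np.exp(-np.outer(np.atleast_1d(t), 1.0 / taus))
    return mu0 + mu1 * decay.sum(axis=1)

# Instantaneous stiffness is mu0 + m*mu1; at long times it relaxes to mu0.
G = relaxation_modulus(t=[0.0, 1e6], mu0=1.0, tau=1.0, m=5, rho=10.0, beta=0.2)
```

Spreading the element time scales geometrically is what makes the hysteresis nearly rate-insensitive over a wide frequency band while keeping the parameter count at five.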
Shah, A A; Xing, W W; Triantafyllidis, V
2017-04-01
In this paper, we develop reduced-order models for dynamic, parameter-dependent, linear and nonlinear partial differential equations using proper orthogonal decomposition (POD). The main challenges are to accurately and efficiently approximate the POD bases for new parameter values and, in the case of nonlinear problems, to efficiently handle the nonlinear terms. We use a Bayesian nonlinear regression approach to learn the snapshots of the solutions and the nonlinearities for new parameter values. Computational efficiency is ensured by using manifold learning to perform the emulation in a low-dimensional space. The accuracy of the method is demonstrated on a linear and a nonlinear example, with comparisons with a global basis approach.
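The POD step itself can be sketched with a plain SVD of a snapshot matrix; the paper's Bayesian regression and manifold-learning emulation of the bases for new parameter values are not reproduced here, and the truncation rule below is a common convention rather than the authors' choice:

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Truncated POD basis for a snapshot matrix.

    snapshots: (n_dof, n_snapshots) array whose columns are solution
    states. Keeps the leading left singular vectors capturing the
    requested fraction of the snapshot energy (squared singular values)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r]

rng = np.random.default_rng(0)
# Synthetic snapshots lying exactly in a 3-dimensional subspace.
A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 20))
basis = pod_basis(A)
reduced = basis.T @ A        # project full states onto the reduced space
```

Projecting the governing equations onto such a basis is what turns the full-order PDE solve into a small dense system, which is where the computational savings of the reduced-order model come from.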
Linear summation of outputs in a balanced network model of motor cortex.
Capaday, Charles; van Vreeswijk, Carl
2015-01-01
Given the non-linearities of the neural circuitry's elements, we would expect cortical circuits to respond non-linearly when activated. Surprisingly, when two points in the motor cortex are activated simultaneously, the EMG responses are the linear sum of the responses evoked by each of the points activated separately. Additionally, the corticospinal transfer function is close to linear, implying that the synaptic interactions in motor cortex must be effectively linear. To account for this, here we develop a model of motor cortex composed of multiple interconnected points, each comprised of reciprocally connected excitatory and inhibitory neurons. We show how non-linearities in neuronal transfer functions are eschewed by strong synaptic interactions within each point. Consequently, the simultaneous activation of multiple points results in a linear summation of their respective outputs. We also consider the effects of reduction of inhibition at a cortical point when one or more surrounding points are active. The network response in this condition is linear over an approximately two- to three-fold decrease of inhibitory feedback strength. This result supports the idea that focal disinhibition allows linear coupling of motor cortical points to generate movement related muscle activation patterns; albeit with a limitation on gain control. The model also explains why neural activity does not spread as far out as the axonal connectivity allows, whilst also explaining why distant cortical points can be, nonetheless, functionally coupled by focal disinhibition. Finally, we discuss the advantages that linear interactions at the cortical level afford to motor command synthesis.
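The linear-summation property can be illustrated with a toy rate model: two reciprocally connected excitatory-inhibitory "points" with rectified-linear units, weakly coupled to each other. All weights and drives below are invented for illustration and are not fitted to the paper's model; the point is only that when strong recurrent coupling keeps the fixed point inside the linear (supra-threshold) region, responses to two inputs add:

```python
import numpy as np

def steady_rates(ext, W, n_iter=2000, dt=0.05):
    """Fixed point of the rectified-linear rate dynamics
    r' = -r + [W r + ext]_+, found by forward-Euler relaxation."""
    r = np.zeros(len(ext))
    for _ in range(n_iter):
        r += dt * (-r + np.maximum(W @ r + ext, 0.0))
    return r

# Two cortical "points", each an E/I pair (rows: E1, I1, E2, I2),
# with weak E-to-E coupling between the points.
W = np.array([[2.0, -2.5, 0.2, 0.0],
              [2.0, -2.0, 0.0, 0.0],
              [0.2,  0.0, 2.0, -2.5],
              [0.0,  0.0, 2.0, -2.0]])
e1 = np.array([1.0, 0.8, 0.0, 0.0])    # stimulate point 1 only
e2 = np.array([0.0, 0.0, 1.0, 0.8])    # stimulate point 2 only

r_both = steady_rates(e1 + e2, W)
r_sum = steady_rates(e1, W) + steady_rates(e2, W)   # linear summation
```

Because every unit stays above threshold at all three fixed points, the rectification never engages and the joint response equals the sum of the individual responses, mirroring the EMG observation the model is built to explain.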
Non-linear modelling and optimal control of a hydraulically actuated seismic isolator test rig
NASA Astrophysics Data System (ADS)
Pagano, Stefano; Russo, Riccardo; Strano, Salvatore; Terzo, Mario
2013-02-01
This paper investigates the modelling, parameter identification and control of a unidirectional hydraulically actuated seismic isolator test rig. The plant is characterized by non-linearities such as the valve dead zone and friction. A non-linear model is derived and then employed for parameter identification. The results concerning the model validation are illustrated and fully confirm the effectiveness of the proposed model. The testing procedure for the isolation systems is based on the definition of a target displacement time history of the sliding table; consequently, the precision of the table positioning is of primary importance. In order to minimize the test rig tracking error, a suitable control system has to be adopted. The system non-linearities severely limit the performance of classical linear control, so a non-linear controller is adopted instead. The test rig mathematical model is employed for a non-linear control design that minimizes the error between the target table position and the current one. The controller synthesis is carried out with no specimen taken into account. The proposed approach consists of a non-linear optimal control based on the state-dependent Riccati equation (SDRE). Numerical simulations have been performed in order to evaluate the soundness of the designed control with and without the specimen under test. The results confirm that the performance of the proposed non-linear controller is not invalidated by the presence of the specimen.
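The SDRE idea is to freeze a state-dependent factorization ẋ = A(x)x + Bu at the current state and solve a standard algebraic Riccati equation there, yielding a state-dependent feedback gain. A minimal sketch on a toy scalar plant (not the test-rig model; the plant and weights are invented for illustration):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def sdre_gain(x, A_of_x, B, Q, R):
    """One SDRE step: freeze A(x) at the current state, solve the
    algebraic Riccati equation, and return K(x) = R^-1 B' P(x),
    giving the feedback law u = -K(x) x."""
    A = A_of_x(x)
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Toy nonlinear plant xdot = -x^3 + u, factored as A(x) = [-x^2].
A_of_x = lambda x: np.array([[-float(x) ** 2]])
B = np.array([[1.0]])
Q = np.array([[1.0]])   # state weighting (tracking-error penalty)
R = np.array([[1.0]])   # control-effort weighting

K = sdre_gain(0.5, A_of_x, B, Q, R)   # gain recomputed at each new state
```

Recomputing the gain along the trajectory is what lets the controller accommodate non-linearities such as the dead zone and friction that defeat a single linear design.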
Direct-Steam Linear Fresnel Performance Model for NREL's System Advisor Model
Wagner, M. J.; Zhu, G.
2012-09-01
This paper presents the technical formulation and demonstrated model performance results of a new direct-steam-generation (DSG) model in NREL's System Advisor Model (SAM). The model predicts the annual electricity production of a wide range of system configurations within the DSG Linear Fresnel technology by modeling hourly performance of the plant in detail. The quasi-steady-state formulation allows users to investigate energy and mass flows, operating temperatures, and pressure drops for geometries and solar field configurations of interest. The model includes tools for heat loss calculation using either empirical polynomial heat loss curves as a function of steam temperature, ambient temperature, and wind velocity, or a detailed evacuated tube receiver heat loss model. Thermal losses are evaluated using a computationally efficient nodal approach, in which the solar field and headers are discretized into multiple nodes at which heat losses, thermal inertia, and steam conditions (including pressure, temperature, and enthalpy) are individually evaluated during each time step of the simulation. This paper discusses the mathematical formulation for the solar field model and describes how the solar field is integrated with the other subsystem models, including the power cycle and optional auxiliary fossil system. Model results are also presented to demonstrate plant behavior in the various operating modes.
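The nodal heat-loss evaluation with an empirical polynomial curve can be sketched as follows. The polynomial coefficients and the wind-speed scaling below are placeholders, not SAM's fitted correlation:

```python
def nodal_heat_loss(T_steam, T_amb, wind, coeffs=(0.0, 0.5, 0.002)):
    """Illustrative nodal heat-loss evaluation (arbitrary units).

    Each node's loss is a polynomial in the steam/ambient temperature
    difference, scaled by a linear wind-speed factor; losses are summed
    over all solar-field nodes for the time step."""
    total = 0.0
    for T in T_steam:                       # one entry per field node
        dT = T - T_amb
        poly = sum(c * dT ** k for k, c in enumerate(coeffs))
        total += poly * (1.0 + 0.05 * wind)
    return total

# Three nodes at increasing steam temperature along the collector row.
loss = nodal_heat_loss(T_steam=[350.0, 400.0, 450.0], T_amb=25.0, wind=2.0)
```

Evaluating each node separately is what lets the model resolve the strong temperature dependence of losses along the row without resorting to a full CFD treatment at every time step.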
Genetic demixing and evolution in linear stepping stone models
Korolev, K. S.; Avlund, Mikkel; Hallatschek, Oskar; Nelson, David R.
2010-01-01
Results for mutation, selection, genetic drift, and migration in a one-dimensional continuous population are reviewed and extended. The population is described by a continuous limit of the stepping stone model, which leads to the stochastic Fisher-Kolmogorov-Petrovsky-Piscounov equation with additional terms describing mutations. Although the stepping stone model was first proposed for population genetics, it is closely related to “voter models” of interest in nonequilibrium statistical mechanics. The stepping stone model can also be regarded as an approximation to the dynamics of a thin layer of actively growing pioneers at the frontier of a colony of micro-organisms undergoing a range expansion on a Petri dish. The population tends to segregate into monoallelic domains. This segregation slows down genetic drift and selection because these two evolutionary forces can only act at the boundaries between the domains; the effects of mutation, however, are not significantly affected by the segregation. Although fixation in the neutral well-mixed (or “zero-dimensional”) model occurs exponentially in time, it occurs only algebraically fast in the one-dimensional model. An unusual sublinear increase is also found in the variance of the spatially averaged allele frequency with time. If selection is weak, selective sweeps occur exponentially fast in both well-mixed and one-dimensional populations, but the time constants are different. The relatively unexplored problem of evolutionary dynamics at the edge of an expanding circular colony is studied as well. The use of observed patterns of genetic diversity for statistical inference is also reviewed, and the differences between the well-mixed and one-dimensional models are highlighted. Although the focus is on two alleles or variants, q-allele Potts-like models of gene segregation are considered as well. Most of the analytical results are checked with simulations and could be tested against recent spatial
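The segregation into monoallelic domains is easy to reproduce with a minimal neutral simulation. The voter-model update rule below is a standard simplification (one deme copies a random neighbour per step), not the paper's full model with mutation and selection:

```python
import random

def stepping_stone(L=100, steps=20000, seed=1):
    """Neutral one-dimensional voter/stepping-stone dynamics: at each
    update a random deme copies the allele of one of its two neighbours
    (periodic boundaries). Single-allele domains coarsen over time."""
    rng = random.Random(seed)
    alleles = [rng.randint(0, 1) for _ in range(L)]
    for _ in range(steps):
        i = rng.randrange(L)
        j = (i + rng.choice((-1, 1))) % L
        alleles[i] = alleles[j]
    # Count domain boundaries: sites whose right neighbour differs.
    return sum(alleles[i] != alleles[(i + 1) % L] for i in range(L))

boundaries = stepping_stone()
```

Starting from a random configuration with about L/2 boundaries, coarsening leaves far fewer domain walls, and since drift and selection act only at those walls, both slow down as the walls annihilate.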
Hossain, Ahmed; Beyene, Joseph
2014-01-01
This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on the GRAMMAR approach of Aulchenko et al. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant that is associated with blood pressure from chromosome 3 and simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate at identifying the known single-nucleotide polymorphism among the methods, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.
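The GRAMMAR-style decorrelation step can be sketched as whitening both phenotype and genotype by the phenotypic covariance implied by relatedness, then running an ordinary least-squares association test. This is a simplified illustration with invented data, not the workshop analysis; the real procedure estimates the covariance from a kinship matrix rather than assuming it:

```python
import numpy as np

def grammar_style_test(y, g, V):
    """Simplified GRAMMAR-flavoured association scan: whiten phenotype y
    and genotype g by the inverse Cholesky factor of the phenotypic
    covariance V (e.g. kinship-derived), then return the OLS effect-size
    estimate on the decorrelated data."""
    Linv = np.linalg.inv(np.linalg.cholesky(V))
    yw, gw = Linv @ y, Linv @ g
    gw = gw - gw.mean()                 # centre the genotype regressor
    return (gw @ yw) / (gw @ gw)        # OLS slope = effect estimate

rng = np.random.default_rng(3)
n = 200
g = rng.integers(0, 3, n).astype(float)   # SNP genotypes coded 0/1/2
V = np.eye(n)                             # identity = unrelated individuals
y = 0.5 * g + rng.standard_normal(n)      # simulated true effect of 0.5

beta = grammar_style_test(y, g, V)
```

With relatedness in V, the whitening absorbs the polygenic correlation structure so that the subsequent per-variant tests are simple and fast, which is the practical appeal of the GRAMMAR strategy for genome-wide scans.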