Sample records for linear models conception

  1. Linear separability in superordinate natural language concepts.

    PubMed

    Ruts, Wim; Storms, Gert; Hampton, James

    2004-01-01

    Two experiments are reported in which linear separability was investigated in superordinate natural language concept pairs (e.g., toiletry-sewing gear). Representations of the exemplars of semantically related concept pairs were derived in two to five dimensions using multidimensional scaling (MDS) of similarities based on possession of the concept features. Next, category membership, obtained from an exemplar generation study (in Experiment 1) and from a forced-choice classification task (in Experiment 2), was predicted from the coordinates of the MDS representation using log-linear analysis. The results showed that all natural kind concept pairs were perfectly linearly separable, whereas artifact concept pairs showed several violations. Clear linear separability of natural language concept pairs is in line with independent cue models. The violations in the artifact pairs, however, yield clear evidence against the independent cue models.
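    Linear separability of the kind tested here can also be checked computationally. The sketch below is illustrative only (a perceptron on made-up 2-D exemplar coordinates, not the authors' MDS/log-linear procedure); it relies on the perceptron convergence theorem, which guarantees a separating hyperplane is found exactly when one exists.

```python
import numpy as np

def is_linearly_separable(X, y, epochs=1000, lr=0.1):
    """Perceptron test: True if a separating hyperplane is found.

    X: (n, d) exemplar coordinates (e.g., MDS dimensions); y: labels in {-1, +1}.
    """
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:             # misclassified point
                w += lr * yi * xi
                errors += 1
        if errors == 0:                        # one full clean pass: separable
            return True
    return False

# Hypothetical concept pairs in 2-D: one separable, one XOR-like (not separable)
X1 = np.array([[0, 0], [0, 1], [2, 2], [2, 3]], float)
y1 = np.array([-1, -1, 1, 1])
X2 = np.array([[0, 0], [1, 1], [0, 1], [1, 0]], float)
y2 = np.array([-1, -1, 1, 1])
```

One caveat: failure to converge within the epoch budget only suggests, not proves, non-separability.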

  2. Generic Airplane Model Concept and Four Specific Models Developed for Use in Piloted Simulation Studies

    NASA Technical Reports Server (NTRS)

    Hoffler, Keith D.; Fears, Scott P.; Carzoo, Susan W.

    1997-01-01

    A generic airplane model concept was developed to allow configurations with various agility, performance, handling qualities, and pilot-vehicle interfaces to be generated rapidly for piloted simulation studies. The simple concept allows stick shaping and various stick command types or modes to drive an airplane with both linear and nonlinear components. Output from the stick shaping goes to a linear model or a series of linear models that can represent an entire flight envelope. The generic model also has provisions for control power limitations, a nonlinear feature. Therefore, departures from controlled flight are possible. Note that only loss of control is modeled; the generic airplane does not accurately model post-departure phenomena. The model concept is presented herein, along with four example airplanes. Agility was varied across the four example airplanes without altering specific excess energy or significantly altering handling qualities. A new feedback scheme to provide angle-of-attack cueing to the pilot, while using a pitch-rate command system, was implemented and tested.

  3. Free-piston engine linear generator for hybrid vehicles modeling study

    NASA Astrophysics Data System (ADS)

    Callahan, T. J.; Ingram, S. K.

    1995-05-01

    Development of a free-piston engine linear generator was investigated for use as an auxiliary power unit for a hybrid electric vehicle. The main focus of the program was to develop an efficient linear generator concept to convert the piston motion directly into electrical power. Computer modeling techniques were used to evaluate five different designs for linear generators. These designs included permanent magnet generators, reluctance generators, linear DC generators, and two- and three-coil induction generators. The efficiency of the linear generator was highly dependent on the design concept. The two-coil induction generator was determined to be the best design, with an efficiency of approximately 90 percent.

  4. Student Connections of Linear Algebra Concepts: An Analysis of Concept Maps

    ERIC Educational Resources Information Center

    Lapp, Douglas A.; Nyman, Melvin A.; Berry, John S.

    2010-01-01

    This article examines the connections of linear algebra concepts in a first course at the undergraduate level. The theoretical underpinnings of this study are grounded in the constructivist perspective (including social constructivism), Vernaud's theory of conceptual fields and Pirie and Kieren's model for the growth of mathematical understanding.…

  5. Advanced statistics: linear regression, part I: simple linear regression.

    PubMed

    Marill, Keith A

    2004-01-01

    Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
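    The least-squares line reviewed here has a simple closed form: the slope is the covariance of x and y divided by the variance of x, and the line passes through the point of means. A minimal sketch on hypothetical clinical-style data (invented numbers, not from the article):

```python
import numpy as np

def simple_linear_regression(x, y):
    """Closed-form least-squares fit of y = b0 + b1*x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    b1 = (xc * yc).sum() / (xc * xc).sum()   # slope: cov(x, y) / var(x)
    b0 = y.mean() - b1 * x.mean()            # intercept through the means
    return b0, b1

# Hypothetical predictor (e.g., dose) and outcome (e.g., response)
x = np.array([1, 2, 3, 4, 5], float)
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
b0, b1 = simple_linear_regression(x, y)      # b1 ≈ 1.99, b0 ≈ 0.05
```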

  6. On neural networks in identification and control of dynamic systems

    NASA Technical Reports Server (NTRS)

    Phan, Minh; Juang, Jer-Nan; Hyland, David C.

    1993-01-01

    This paper presents a discussion of the applicability of neural networks in the identification and control of dynamic systems. Emphasis is placed on understanding how the neural networks handle linear systems and how the new approach is related to conventional system identification and control methods. Extensions of the approach to nonlinear systems are then made. The paper explains the fundamental concepts of neural networks in their simplest terms. Among the topics discussed are feedforward and recurrent networks in relation to the standard state-space and observer models, linear and nonlinear auto-regressive models, linear predictors, one-step-ahead control, and model-reference adaptive control for linear and nonlinear systems. Numerical examples are presented to illustrate the application of these important concepts.

  7. Getting a Bead on It

    ERIC Educational Resources Information Center

    Ferrucci, Beverly J.; McDougall, Jennifer; Carter, Jack

    2009-01-01

    One challenge that middle school teachers commonly face is finding insightful, hands-on applications when teaching basic mathematical concepts. One concept that is a foundation of middle school mathematics is the notion of "linear functions." Although a variety of models can be used for linear equations, such as temperature conversions,…

  8. Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations.

    PubMed

    Shek, Daniel T L; Ma, Cecilia M S

    2011-01-05

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analysis package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user-friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.

  9. Longitudinal Data Analyses Using Linear Mixed Models in SPSS: Concepts, Procedures and Illustrations

    PubMed Central

    Shek, Daniel T. L.; Ma, Cecilia M. S.

    2011-01-01

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analysis package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user-friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented. PMID:21218263
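    The core motivation for LMMs — that repeated measures on the same person are correlated — is easy to demonstrate outside SPSS. A hedged NumPy sketch (simulated random-intercept data with invented parameters, not the P.A.T.H.S. dataset) estimating the intraclass correlation with the one-way ANOVA estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Six waves for 200 subjects: subject-specific random intercepts (between-person
# variance 4.0) plus wave-level noise (within-person variance 1.0) and a trend
n_subj, n_wave = 200, 6
u = rng.normal(0.0, 2.0, n_subj)                 # random intercepts
e = rng.normal(0.0, 1.0, (n_subj, n_wave))       # residuals
y = 10.0 + u[:, None] + 0.5 * np.arange(n_wave) + e

# One-way ANOVA estimator of the intraclass correlation (ICC):
yd = y - y.mean(axis=0)                          # remove the shared wave means
ms_between = n_wave * yd.mean(axis=1).var(ddof=1)
ms_within = ((yd - yd.mean(axis=1, keepdims=True)) ** 2).sum() / (n_subj * (n_wave - 1))
var_u = (ms_between - ms_within) / n_wave        # between-person variance
icc = var_u / (var_u + ms_within)                # true value here is 4/5 = 0.8
```

An ICC this far from zero is exactly the violation of independence that makes pooled GLM-style analyses inappropriate for such data.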

  10. How linear response shaped models of neural circuits and the quest for alternatives.

    PubMed

    Herfurth, Tim; Tchumatchenko, Tatjana

    2017-10-01

    In the past decades, many mathematical approaches to solving complex nonlinear systems in physics have been successfully applied to neuroscience. One of these tools is the concept of linear response functions. However, phenomena observed in the brain emerge from fundamentally nonlinear interactions and feedback loops rather than from a composition of linear filters. Here, we review the successes achieved by applying the linear response formalism to topics such as rhythm generation and synchrony, and by incorporating it into models that combine linear and nonlinear transformations. We also discuss the challenges encountered in linear response applications and argue that new theoretical concepts are needed to tackle the feedback loops and non-equilibrium dynamics that are experimentally observed in neural networks but lie outside the validity regime of the linear response formalism. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Mathematical Modelling in Engineering: A Proposal to Introduce Linear Algebra Concepts

    ERIC Educational Resources Information Center

    Cárcamo Bahamonde, Andrea; Gómez Urgelles, Joan; Fortuny Aymemí, Josep

    2016-01-01

    The modern dynamic world requires that basic science courses for engineering, including linear algebra, emphasise the development of mathematical abilities primarily associated with modelling and interpreting, which are not exclusively calculus abilities. Considering this, an instructional design was created based on mathematical modelling and…

  12. Introducing linear functions: an alternative statistical approach

    NASA Astrophysics Data System (ADS)

    Nolan, Caroline; Herbert, Sandra

    2015-12-01

    The introduction of linear functions is the turning point where many students decide if mathematics is useful or not. This means the role of parameters and variables in linear functions could be considered to be "threshold concepts". There is recognition that linear functions can be taught in context through the exploration of linear modelling examples, but this has its limitations. Currently, statistical data are easily attainable, and graphics or computer algebra system (CAS) calculators are common in many classrooms. The use of this technology provides ease of access to different representations of linear functions as well as the ability to fit a least-squares line to real-life data. This means these calculators could support a possible alternative approach to the introduction of linear functions. This study compares the results of an end-of-topic test for two classes of Australian middle secondary students at a regional school to determine if such an alternative approach is feasible. In this study, test questions were grouped by concept and subjected to a concept-by-concept analysis of the mean test results of the two classes. This analysis revealed that the students following the alternative approach demonstrated greater competence with non-standard questions.

  13. Mathematical Modelling and the Learning Trajectory: Tools to Support the Teaching of Linear Algebra

    ERIC Educational Resources Information Center

    Cárcamo Bahamonde, Andrea Dorila; Fortuny Aymemí, Josep Maria; Gómez i Urgellés, Joan Vicenç

    2017-01-01

    In this article we present a didactic proposal for teaching linear algebra based on two compatible theoretical models: emergent models and mathematical modelling. This proposal begins with a problematic situation related to the creation and use of secure passwords, which leads students toward the construction of the concepts of spanning set and…

  14. Hierarchical Linear Modeling (HLM): An Introduction to Key Concepts within Cross-Sectional and Growth Modeling Frameworks. Technical Report #1308

    ERIC Educational Resources Information Center

    Anderson, Daniel

    2012-01-01

    This manuscript provides an overview of hierarchical linear modeling (HLM), as part of a series of papers covering topics relevant to consumers of educational research. HLM is tremendously flexible, allowing researchers to specify relations across multiple "levels" of the educational system (e.g., students, classrooms, schools, etc.).…

  15. The Planning Wheel: Value Added Performance.

    ERIC Educational Resources Information Center

    Murk, Peter J.; Walls, Jeffrey L.

    The "Planning Wheel" is an evolution of the original Systems Approach Model (SAM) that was introduced in 1986 by Murk and Galbraith. Unlike most current planning models, which are linear in design and concept, the Planning Wheel bridges the gap between linear and nonlinear processes. The "Program Planning Wheel" is designed to…

  16. Simulation of a turbofan engine for evaluation of multivariable optimal control concepts [computerized simulation]

    NASA Technical Reports Server (NTRS)

    Seldner, K.

    1976-01-01

    The development of control systems for jet engines requires a real-time computer simulation. The simulation provides an effective tool for evaluating control concepts and problem areas prior to actual engine testing. The development and use of a real-time simulation of the Pratt and Whitney F100-PW100 turbofan engine is described. The simulation was used in a multivariable optimal control research program using linear quadratic regulator (LQR) theory. The simulation is used to generate linear engine models at selected operating points and to evaluate the control algorithm. To reduce the complexity of the design, it is desirable to reduce the order of the linear model. A technique to reduce the order of the model is discussed. Selected results between high- and low-order models are compared. The LQR control algorithms can be programmed on a digital computer. This computer will control the engine simulation over the desired flight envelope.
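    The LQR design step mentioned here can be sketched in a few lines for a toy system. The discrete-time gain comes from iterating the Riccati equation to a fixed point (an invented two-state unstable model standing in for a linearized engine model, not the F100):

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain K (u = -K x) for x[k+1] = A x[k] + B u[k],
    minimizing the sum of x'Qx + u'Ru, via fixed-point Riccati iteration."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)     # Riccati recursion
    return K

# Toy unstable linear model (illustrative only)
A = np.array([[1.10, 0.10],
              [0.00, 1.05]])
B = np.array([[0.0],
              [0.1]])
K = dlqr(A, B, Q=np.eye(2), R=np.array([[1.0]]))
closed_loop_eigs = np.linalg.eigvals(A - B @ K)   # all inside the unit circle
```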

  17. Granger-causality maps of diffusion processes.

    PubMed

    Wahl, Benjamin; Feudel, Ulrike; Hlinka, Jaroslav; Wächter, Matthias; Peinke, Joachim; Freund, Jan A

    2016-02-01

    Granger causality is a statistical concept devised to reconstruct and quantify predictive information flow between stochastic processes. Although the general concept can be formulated model-free, it is often considered in the framework of linear stochastic processes. Here we show how local linear model descriptions can be employed to extend Granger causality into the realm of nonlinear systems. This novel treatment results in maps that resolve Granger causality in regions of state space. Through examples we provide a proof of concept and illustrate the utility of these maps. Moreover, by integration we convert the local Granger causality into a global measure that yields a consistent picture for a global Ornstein-Uhlenbeck process. Finally, we recover invariance transformations known from the theory of autoregressive processes.
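    Setting the local linear extension aside, the basic linear Granger computation is short: regress the target on its own past, then add the other process's past and compare residual variances. A toy NumPy simulation (assumed coefficients, invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# x drives y with a one-step lag; x itself is white noise
T = 2000
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

def resid_var(target, regressors):
    """Residual variance of an ordinary least-squares fit."""
    X = np.column_stack(regressors)
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return (target - X @ beta).var()

# Granger index: log ratio of restricted to full residual variance
gc_xy = np.log(resid_var(y[1:], [y[:-1]]) / resid_var(y[1:], [y[:-1], x[:-1]]))
gc_yx = np.log(resid_var(x[1:], [x[:-1]]) / resid_var(x[1:], [x[:-1], y[:-1]]))
# gc_xy is large, gc_yx is near zero: information flows from x to y only
```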

  18. Regression Is a Univariate General Linear Model Subsuming Other Parametric Methods as Special Cases.

    ERIC Educational Resources Information Center

    Vidal, Sherry

    Although the concept of the general linear model (GLM) has existed since the 1960s, other univariate analyses such as the t-test and the analysis of variance models have remained popular. The GLM produces an equation that minimizes the mean differences of independent variables as they are related to a dependent variable. From a computer printout…

  19. Advanced statistics: linear regression, part II: multiple linear regression.

    PubMed

    Marill, Keith A

    2004-01-01

    The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
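    A minimal NumPy sketch of two of the concepts discussed above — coefficient estimation with several predictors, and multicollinearity — on invented data. The condition number of X'X serves here as a simple collinearity diagnostic (the article itself takes a primarily graphic approach):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical outcome driven by two independent predictors plus noise
n = 500
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 3.0 * x2 + 0.5 * rng.normal(size=n)

# Least-squares fit with an intercept column
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # beta ≈ [1.0, 2.0, -3.0]

# Multicollinearity: a near-copy of x1 makes X'X nearly singular
x3 = x1 + 0.01 * rng.normal(size=n)
X_bad = np.column_stack([np.ones(n), x1, x3])
cond_ok = np.linalg.cond(X.T @ X)
cond_bad = np.linalg.cond(X_bad.T @ X_bad)       # orders of magnitude larger
```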

  20. The role of building models in the evaluation of heat-related risks

    NASA Astrophysics Data System (ADS)

    Buchin, Oliver; Jänicke, Britta; Meier, Fred; Scherer, Dieter; Ziegler, Felix

    2016-04-01

    Hazard-risk relationships in epidemiological studies are generally based on the outdoor climate, despite the fact that most of humans' lifetime is spent indoors. By coupling indoor and outdoor climates with a building model, the risk concept developed can still be based on the outdoor conditions but also includes exposure to the indoor climate. The influence of non-linear building physics and the impact of air conditioning on heat-related risks can be assessed in a plausible manner using this risk concept. For proof of concept, the proposed risk concept is compared to a traditional risk analysis. As an example, daily and city-wide mortality data of the age group 65 and older in Berlin, Germany, for the years 2001-2010 are used. Four building models with differing complexity are applied in a time-series regression analysis. This study shows that indoor hazard better explains the variability in the risk data compared to outdoor hazard, depending on the kind of building model. Simplified parameter models include the main non-linear effects and are proposed for the time-series analysis. The concept shows that the definitions of heat events, lag days, and acclimatization in a traditional hazard-risk relationship are influenced by the characteristics of the prevailing building stock.

  1. Nonlinearity measure and internal model control based linearization in anti-windup design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perev, Kamen

    2013-12-18

    This paper considers the problem of internal model control based linearization in anti-windup design. The nonlinearity measure concept is used for quantifying the control system degree of nonlinearity. The linearizing effect of a modified internal model control structure is presented by comparing the nonlinearity measures of the open-loop and closed-loop systems. It is shown that the linearization properties are improved by increasing the control system local feedback gain. However, it is emphasized that at the same time the stability of the system deteriorates. The conflicting goals of stability and linearization are resolved by solving the design problem in different frequency ranges.

  2. Tail mean and related robust solution concepts

    NASA Astrophysics Data System (ADS)

    Ogryczak, Włodzimierz

    2014-01-01

    Robust optimisation might be viewed as a multicriteria optimisation problem where objectives correspond to the scenarios, although their probabilities are unknown or imprecise. The simplest robust solution concept represents a conservative approach focused on optimising the worst-case scenario results. A softer concept allows one to optimise the tail mean, thus combining performances under multiple worst scenarios. We show that when considering robust models that allow the probabilities to vary only within given intervals, the tail mean represents the robust solution only for upper-bounded probabilities. For arbitrary intervals of probabilities the corresponding robust solution may be expressed by the optimisation of appropriately combined mean and tail mean criteria, thus remaining easily implementable with auxiliary linear inequalities. Moreover, we use the tail mean concept to develop linear-programming-implementable robust solution concepts related to risk-averse optimisation criteria.
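    For a finite scenario set the tail mean criterion itself is a one-liner. A simplified sketch (whole scenarios only — the exact definition splits a fractional worst scenario, and optimising it over decisions would add the auxiliary linear inequalities the abstract mentions):

```python
import numpy as np

def tail_mean(costs, beta):
    """Mean of the worst ceil(beta * n) scenario costs (higher = worse).

    beta -> 0 approaches the pure worst case; beta = 1 gives the plain mean.
    Simplified to whole scenarios for illustration.
    """
    c = np.sort(np.asarray(costs, float))[::-1]      # worst scenarios first
    k = max(1, int(np.ceil(beta * len(c))))
    return c[:k].mean()

costs = np.array([3.0, 7.0, 5.0, 9.0, 4.0])
worst_case = tail_mean(costs, 0.2)    # 9.0: the conservative worst-case criterion
soft = tail_mean(costs, 0.4)          # 8.0: mean of the two worst scenarios
plain = tail_mean(costs, 1.0)         # 5.6: the ordinary mean
```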

  3. Enriching student concept images: Teaching and learning fractions through a multiple-embodiment approach

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaofen; Clements, M. A. (Ken); Ellerton, Nerida F.

    2015-06-01

    This study investigated how fifth-grade children's concept images of unit fractions changed as a result of their participation in an instructional intervention based on multiple embodiments of fraction concepts. The participants' concept images were examined through pre- and post-teaching written questions and pre- and post-teaching one-to-one verbal interview questions. Results showed that at the pre-teaching stage, the students' concept images of unit fractions were very narrow and mainly linked to area models. However, after the instructional intervention, the fifth graders were able to select and apply a variety of models in response to unit fraction tasks, and their concept images of unit fractions were enriched and linked to capacity, perimeter, linear, and discrete models, as well as to area models. Their performance on tests had improved, and their conceptual understanding of unit fractions had developed.

  4. Modern control concepts in hydrology. [parameter identification in adaptive stochastic control approach

    NASA Technical Reports Server (NTRS)

    Duong, N.; Winn, C. B.; Johnson, G. R.

    1975-01-01

    Two approaches to an identification problem in hydrology are presented, based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part in the inputs, and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time-invariant or time-dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and confirm the results from two previous studies; the first using numerical integration of the model equation along with a trial-and-error procedure, and the second using a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are imbedded in noise.
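    The "sequential and adaptive" identification described here can be illustrated with a recursive least-squares estimator on a linear-in-parameters toy model (invented coefficients; the Prasad rainfall-runoff model itself is nonlinear and requires the more elaborate schemes the paper develops):

```python
import numpy as np

rng = np.random.default_rng(4)

# Unknown parameters of a linear-in-parameters model y = a*u1 + b*u2 + noise
a_true, b_true = 0.7, -1.3

theta = np.zeros(2)          # running parameter estimate
P = 1e3 * np.eye(2)          # estimate covariance (large = uninformed prior)
for _ in range(500):
    phi = rng.normal(size=2)                       # noisy inputs at this step
    yobs = a_true * phi[0] + b_true * phi[1] + 0.1 * rng.normal()
    K = P @ phi / (1.0 + phi @ P @ phi)            # recursive least-squares gain
    theta = theta + K * (yobs - phi @ theta)       # correct by prediction error
    P = P - np.outer(K, phi) @ P                   # shrink the covariance
# theta ≈ [0.7, -1.3] after a few hundred observations
```

The estimator updates one observation at a time and never stores past data, which is the sense in which such schemes are sequential and adaptive.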

  5. Modern control concepts in hydrology

    NASA Technical Reports Server (NTRS)

    Duong, N.; Johnson, G. R.; Winn, C. B.

    1974-01-01

    Two approaches to an identification problem in hydrology are presented based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part in the inputs, and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time-invariant or time-dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and conform with results from two previous studies: the first using numerical integration of the model equation along with a trial-and-error procedure, and the second using a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are imbedded in noise.

  6. Concepts in solid tumor evolution.

    PubMed

    Sidow, Arend; Spies, Noah

    2015-04-01

    Evolutionary mechanisms in cancer progression give tumors their individuality. Cancer evolution is different from organismal evolution, however, and we discuss where concepts from evolutionary genetics are useful or limited in facilitating an understanding of cancer. Based on these concepts we construct and apply the simplest plausible model of tumor growth and progression. Simulations using this simple model illustrate the importance of stochastic events early in tumorigenesis, highlight the dominance of exponential growth over linear growth and differentiation, and explain the clonal substructure of tumors. Copyright © 2015 Elsevier Ltd. All rights reserved.
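    The dominance of exponential over linear growth highlighted above can be reproduced with an intentionally crude branching sketch (invented parameters, no cell death, far simpler than the authors' model):

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_growth(p_divide, generations):
    """Each cell divides with probability p_divide per generation; the
    expected population grows exponentially at rate (1 + p_divide)."""
    n, history = 1, [1]
    for _ in range(generations):
        n += rng.binomial(n, p_divide)   # stochastic division events
        history.append(n)
    return history

exp_growth = simulate_growth(0.5, 20)
linear_growth = [1 + 2 * t for t in range(21)]   # e.g., growth confined to a surface
# The stochastic exponential trajectory dwarfs the linear one within ~20 steps
```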

  7. Examining the Big-Fish-Little-Pond Effect on Students' Self-Concept of Learning Science in Taiwan Based on the TIMSS Databases

    ERIC Educational Resources Information Center

    Liou, Pey-Yan

    2014-01-01

    The purpose of this study is to examine the relationship between student self-concept and achievement in science in Taiwan based on the big-fish-little-pond effect (BFLPE) model using the Trends in International Mathematics and Science Study (TIMSS) 2003 and 2007 databases. Hierarchical linear modeling was used to examine the effects of the…

  8. How Small Is a Billionth?

    ERIC Educational Resources Information Center

    Gough, John

    2007-01-01

    Children's natural curiosity about numbers, big and small can lead to exploring place-value ideas. But how can these abstract concepts be experienced more concretely? This article presents some practical approaches for conceptualising very small numbers using linear models, area models, volume models, and diagrams.

  9. Fuzzy logic of Aristotelian forms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perlovsky, L.I.

    1996-12-31

    Model-based approaches to pattern recognition and machine vision have been proposed to overcome the exorbitant training requirements of earlier computational paradigms. However, uncertainties in data were found to lead to a combinatorial explosion of the computational complexity. This issue is related here to the roles of a priori knowledge vs. adaptive learning. What is the a-priori knowledge representation that supports learning? I introduce Modeling Field Theory (MFT), a model-based neural network whose adaptive learning is based on a priori models. These models combine deterministic, fuzzy, and statistical aspects to account for a priori knowledge, its fuzzy nature, and data uncertainties. In the process of learning, a priori fuzzy concepts converge to crisp or probabilistic concepts. The MFT is a convergent dynamical system of only linear computational complexity. Fuzzy logic turns out to be essential for reducing the combinatorial complexity to a linear one. I will discuss the relationship of the new computational paradigm to two theories due to Aristotle: the theory of Forms and logic. While the theory of Forms argued that the mind cannot be based on ready-made a priori concepts, Aristotelian logic operated with just such concepts. I discuss an interpretation of MFT suggesting that its fuzzy logic, combining a-priority and adaptivity, implements the Aristotelian theory of Forms (a theory of mind). Thus, 2300 years after Aristotle, a logic is developed suitable for his theory of mind.

  10. Looking for Connections between Linear and Exponential Functions

    ERIC Educational Resources Information Center

    Lo, Jane-Jane; Kratky, James L.

    2012-01-01

    Students frequently have difficulty determining whether a given real-life situation is best modeled as a linear relationship or as an exponential relationship. One root of such difficulty is the lack of deep understanding of the very concept of "rate of change." The authors will provide a lesson that allows students to reveal their misconceptions…

  11. Tried and True: Springing into Linear Models

    ERIC Educational Resources Information Center

    Darling, Gerald

    2012-01-01

    In eighth grade, students usually learn about forces in science class and linear relationships in math class, crucial topics that form the foundation for further study in science and engineering. An activity that links these two fundamental concepts involves measuring the distance a spring stretches as a function of how much weight is suspended…

  12. Compensatory Reading among ESL Learners: A Reading Strategy Heuristic

    ERIC Educational Resources Information Center

    Ismail, Shaik Abdul Malik Mohamed; Petras, Yusof Ede; Mohamed, Abdul Rashid; Eng, Lin Siew

    2015-01-01

    This paper aims to gain insight into the relationship between two different concepts of reading comprehension, namely, the linear model of comprehension and the interactive compensatory theory. Drawing on both of the above concepts, a heuristic was constructed about three different reading strategies determined by the specific ways the literal,…

  13. Young Children's Psychological Selves: Convergence with Maternal Reports of Child Personality

    ERIC Educational Resources Information Center

    Brown, Geoffrey L.; Mangelsdorf, Sarah C.; Agathen, Jean M.; Ho, Moon-Ho

    2008-01-01

    The present research examined five-year-old children's psychological self-concepts. Non-linear factor analysis was used to model the latent structure of the children's self-view questionnaire (CSVQ; Eder, 1990), a measure of children's self-concepts. The coherence and reliability of the emerging factor structure indicated that young children are…

  14. Linear thermal circulator based on Coriolis forces.

    PubMed

    Li, Huanan; Kottos, Tsampikos

    2015-02-01

    We show that the presence of a Coriolis force in a rotating linear lattice imposes nonreciprocal propagation of the phononic heat carriers. Using this effect, we propose the concept of a Coriolis linear thermal circulator, which can control the circulation of a heat current. A simple model of three coupled harmonic masses on a rotating platform permits us to demonstrate giant circulating rectification effects for moderate values of the angular velocity of the platform.

  15. Emergent Modelling: From Traditional Indonesian Games to a Standard Unit of Measurement

    ERIC Educational Resources Information Center

    Wijaya, Ariyadi; Doorman, L. Michiel; Keijzer, Ronald

    2011-01-01

    In this paper, we describe the way in which traditional Indonesian games can support the learning of linear measurement. Previous research has revealed that young children tend to perform measurement as an instrumental procedure. This tendency may be due to the way in which linear measurement has been taught as an isolated concept, which is…

  16. Getting off the Straight and Narrow: Exploiting Non-Linear, Interactive Narrative Structures in Digital Stories for Language Teaching

    ERIC Educational Resources Information Center

    Prosser, Andrew

    2014-01-01

    Digital storytelling is already used extensively in language education. Web documentaries, particularly in terms of design and narrative structure, provide an extension of the digital storytelling concept, specifically in terms of increased interactivity. Using a model of interactive, non-linear storytelling, originally derived from computer game…

  17. Derivation of the linear-logistic model and Cox's proportional hazard model from a canonical system description.

    PubMed

    Voit, E O; Knapp, R G

    1997-08-15

    The linear-logistic regression model and Cox's proportional hazard model are widely used in epidemiology. Their successful application leaves no doubt that they are accurate reflections of observed disease processes and their associated risks or incidence rates. In spite of their prominence, it is not a priori evident why these models work. This article presents a derivation of the two models from the framework of canonical modeling. It begins with a general description of the dynamics between risk sources and disease development, formulates this description in the canonical representation of an S-system, and shows how the linear-logistic model and Cox's proportional hazard model follow naturally from this representation. The article interprets the model parameters in terms of epidemiological concepts as well as in terms of general systems theory and explains the assumptions and limitations generally accepted in the application of these epidemiological models.
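
    For reference, the two models derived in the paper have the familiar forms below (standard textbook notation; the paper arrives at them from an S-system representation rather than postulating them directly):

```latex
% Linear-logistic regression: log-odds of disease linear in the risk factors x_i.
\[
  \log\frac{p(\mathbf{x})}{1 - p(\mathbf{x})}
    = \beta_0 + \sum_{i=1}^{n} \beta_i x_i
  \qquad\text{(linear-logistic model)}
\]
% Cox model: baseline hazard h_0(t) scaled by a log-linear covariate term.
\[
  h(t \mid \mathbf{x}) = h_0(t)\,
    \exp\!\Big(\sum_{i=1}^{n} \beta_i x_i\Big)
  \qquad\text{(Cox proportional hazards)}
\]
```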

  18. Supporting second grade lower secondary school students’ understanding of linear equation system in two variables using ethnomathematics

    NASA Astrophysics Data System (ADS)

    Nursyahidah, F.; Saputro, B. A.; Rubowo, M. R.

    2018-03-01

    The aim of this research is to investigate students’ understanding of systems of linear equations in two variables using ethnomathematics, and to develop a learning trajectory for this topic for second-grade lower secondary school students. The research followed the design-research methodology, which consists of three phases: preliminary design, teaching experiment, and retrospective analysis. The subjects were 28 second-grade students of Sekolah Menengah Pertama (SMP) 37 Semarang. The results show that students’ understanding of systems of linear equations in two variables can be stimulated by using ethnomathematics, with the buying-and-selling tradition of the Peterongan traditional market in Central Java as a context. The strategies and models the students applied, together with their discussions of the results, show how students’ own constructions and contributions helped them understand the concept. The students’ activities produced a learning trajectory toward the learning goal, each step of which plays an important role in moving understanding of the concept from an informal to a formal level. The resulting ethnomathematics-based learning trajectory consists of watching a video of buying-and-selling activity in the Peterongan traditional market to construct a linear equation in two variables, determining the solution of a linear equation in two variables, constructing a model of a system of linear equations in two variables from a contextual problem, and solving a contextual problem related to systems of linear equations in two variables.

  19. Equivalent uniform dose concept evaluated by theoretical dose volume histograms for thoracic irradiation.

    PubMed

    Dumas, J L; Lorchel, F; Perrot, Y; Aletti, P; Noel, A; Wolf, D; Courvoisier, P; Bosset, J F

    2007-03-01

    The goal of our study was to quantify the limits of the EUD models for use in score functions in inverse planning software and for clinical application. We focused on oesophagus cancer irradiation. Our evaluation was based on theoretical dose volume histograms (DVH), which we analyzed using volumetric and linear quadratic EUD models, average and maximum dose concepts, the linear quadratic model, and the differential area between each DVH. We evaluated our models using theoretical and more complex DVHs for the regions of interest. We studied three types of DVH for the target volume: the first followed the ICRU dose homogeneity recommendations; the second was built from the same requirements with the same average dose built in for all cases; the third was truncated by a small dose hole. We also built theoretical DVHs for the organs at risk, in order to evaluate the limits of, and the ways to use, both EUD(1) and EUD/LQ models, comparing them to the traditional ways of scoring a treatment plan. For each volume of interest we built theoretical treatment plans with differences in the fractionation. We concluded that both volumetric and linear quadratic EUDs should be used. Volumetric EUD(1) takes into account neither hot-cold spot compensation nor the differences in fractionation, but it is more sensitive to an increase of the irradiated volume. With linear quadratic EUD/LQ, a volumetric analysis of the effect of fractionation variation can be performed.
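
    The EUD quantities discussed above are commonly written in the generalized (Niemierko) form below; setting a = 1 reduces EUD to the mean dose, i.e. the volumetric EUD(1). The notation is assumed here, as the abstract does not spell out its formulas:

```latex
% Generalized EUD for a DVH with fractional volumes v_i receiving dose D_i;
% a is a tissue-specific parameter.
\[
  \mathrm{EUD} = \Big( \sum_i v_i D_i^{\,a} \Big)^{1/a},
  \qquad
  \mathrm{EUD}(1) = \sum_i v_i D_i \quad\text{(mean dose)}.
\]
```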

  20. Uniform circular motion concept attainment through circle share learning model using real media

    NASA Astrophysics Data System (ADS)

    Ponimin; Suparmi; Sarwanto; Sunarno, W.

    2017-01-01

    Uniform circular motion is an important concept with many applications in everyday life. Students’ understanding of uniform circular motion is often not optimal because teaching is not carried out in accordance with the characteristics of the concept; improving learning outcomes requires teaching that matches those characteristics. The purpose of the study is to determine the effect of real media and the circle-share model on understanding of the uniform circular motion concept. The real media, consisting of a toy car, a round table, and a spring balance, were used to visualize the concept. The circle-share model is a learning model based on sequential, programmed discussion: each group evaluates the worksheets of another group in a circular arrangement, so the first group evaluates the worksheets of the second group, the second group those of the third group, and the last group those of the first group. Assessment of learning outcomes included experiment worksheets and a student post-test. Data analysis yielded several findings. First, students can correctly explain uniform circular motion as motion whose angular velocity and speed are constant. Second, students can correctly distinguish angular velocity from linear velocity. Third, students can explain the direction of the linear velocity vector and of the centripetal force vector. Fourth, students can explain the influence of mass, radius, and velocity on the centripetal force. Fifth, students can explain the principle of coupled wheels. Sixth, teaching with the circle-share model can increase student activity, experimental results, and the efficiency of discussion time.

  1. Stability margin of linear systems with parameters described by fuzzy numbers.

    PubMed

    Husek, Petr

    2011-10-01

    This paper deals with linear systems whose uncertain parameters are described by fuzzy numbers. The problem of determining the stability margin of such systems, with linear affine dependence of the coefficients of the characteristic polynomial on the system parameters, is studied. The fuzzy numbers describing the system parameters are allowed to have arbitrary nonsymmetric membership functions. An elegant solution, graphical in nature, based on a generalization of the Tsypkin-Polyak plot is presented. The advantage of the presented approach over the classical robust concept is demonstrated on a control of the Fiat Dedra engine model and a control of the quarter car suspension model.

  2. Development of a hydrological model for simulation of runoff from catchments unbounded by ridge lines

    NASA Astrophysics Data System (ADS)

    Vema, Vamsikrishna; Sudheer, K. P.; Chaubey, I.

    2017-08-01

    Watershed hydrological models are effective tools for simulating the hydrological processes in a watershed. Although there is a plethora of hydrological models, none of them can be directly applied to make water conservation decisions in irregularly bounded areas that do not conform to topographically defined ridge lines. This study proposes a novel hydrological model that can be directly applied to any catchment, with or without ridge line boundaries. The model is based on the water balance concept, together with a linear function concept to approximate the cross-boundary flow from upstream areas into the administrative catchment under consideration. The developed model is tested in two watersheds, the Riesel Experimental Watershed and a sub-basin of the Cedar Creek Watershed in Texas, USA. Hypothetical administrative catchments that did not conform to the location of ridge lines were considered for verifying the efficacy of the model for hydrologic simulations. The linear function concept used to account for the cross-boundary flow was based on the hypothesis that the flow coming from outside the boundary into the administrative area is proportional to the flow generated in the boundary grid cell. The model performance was satisfactory, with an NSE and r2 of ≥0.80 and a PBIAS of <25 in all cases. The simulated hydrographs for the administrative catchments were in good agreement with the observed hydrographs, indicating a satisfactory performance of the model in administratively bounded areas.
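
    As a toy illustration of the linear-function idea described above (all function names and coefficient values here are hypothetical, not taken from the paper's model), the cross-boundary inflow can be treated as proportional to the flow generated within the boundary grid cell:

```python
# Hypothetical sketch of a linear cross-boundary-flow term; the real model
# uses a full water-balance scheme, not this one-line runoff coefficient.

def boundary_cell_outflow(rainfall_mm, runoff_coeff=0.4):
    # Flow generated inside a boundary grid cell (illustrative water balance).
    return runoff_coeff * rainfall_mm

def cross_boundary_inflow(rainfall_mm, k=0.6, runoff_coeff=0.4):
    # Linear-function hypothesis: inflow crossing the administrative boundary
    # is proportional (constant k) to the boundary cell's own generated flow.
    return k * boundary_cell_outflow(rainfall_mm, runoff_coeff)

# Total flow entering the administrative catchment at this boundary cell.
total = boundary_cell_outflow(10.0) + cross_boundary_inflow(10.0)
print(total)  # 4.0 + 2.4 = 6.4
```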

  3. On the concept of sloped motion for free-floating wave energy converters.

    PubMed

    Payne, Grégory S; Pascal, Rémy; Vaillant, Guillaume

    2015-10-08

    A free-floating wave energy converter (WEC) concept whose power take-off (PTO) system reacts against water inertia is investigated herein. The main focus is the impact of inclining the PTO direction on the system performance. The study is based on a numerical model whose formulation is first derived in detail. Hydrodynamics coefficients are obtained using the linear boundary element method package WAMIT. Verification of the model is provided prior to its use for a PTO parametric study and a multi-objective optimization based on a multi-linear regression method. It is found that inclining the direction of the PTO at around 50° to the vertical is highly beneficial for the WEC performance in that it provides a high capture width ratio over a broad region of the wave period range.

  4. On the concept of sloped motion for free-floating wave energy converters

    PubMed Central

    Payne, Grégory S.; Pascal, Rémy; Vaillant, Guillaume

    2015-01-01

    A free-floating wave energy converter (WEC) concept whose power take-off (PTO) system reacts against water inertia is investigated herein. The main focus is the impact of inclining the PTO direction on the system performance. The study is based on a numerical model whose formulation is first derived in detail. Hydrodynamics coefficients are obtained using the linear boundary element method package WAMIT. Verification of the model is provided prior to its use for a PTO parametric study and a multi-objective optimization based on a multi-linear regression method. It is found that inclining the direction of the PTO at around 50° to the vertical is highly beneficial for the WEC performance in that it provides a high capture width ratio over a broad region of the wave period range. PMID:26543397

  5. A systems concept of the vestibular organs

    NASA Technical Reports Server (NTRS)

    Mayne, R.

    1974-01-01

    A comprehensive model of vestibular organ function is presented. The model is based on an analogy with the inertial guidance systems used in navigation. Three distinct operations are investigated: angular motion sensing, linear motion sensing, and computation. These operations correspond to the semicircular canals, the otoliths, and central processing respectively. It is especially important for both an inertial guidance system and the vestibular organs to distinguish between attitude with respect to the vertical on the one hand, and linear velocity and displacement on the other. The model is applied to various experimental situations and found to be corroborated by them.

  6. From SED HI concept to Pleiades FM detection unit measurements

    NASA Astrophysics Data System (ADS)

    Renard, Christophe; Dantes, Didier; Neveu, Claude; Lamard, Jean-Luc; Oudinot, Matthieu; Materne, Alex

    2017-11-01

    The first flight model of the PLEIADES high-resolution instrument, under Thales Alenia Space development on behalf of CNES, is currently in the integration and test phases. Based on the SED HI detection unit concept, the PLEIADES detection unit has been fully qualified before integration at the telescope level. The main radiometric performances have been measured on the engineering and first flight models, and this paper presents the results obtained on both models. After a recall of the SED HI concept and the design and performances of the main elements (charge-coupled detectors, focal plane and video processing unit), the detection unit radiometric performances are presented and compared to the instrument specifications for the panchromatic and multispectral bands. The performances treated are the following: video signal characteristics; dark signal level and dark signal non-uniformity; photo-response non-uniformity; non-linearity and differential non-linearity; temporal and spatial noises with regard to system definitions. The PLEIADES detection unit allows tuning of different functions: reference and sampling time positioning, anti-blooming level, gain value, and TDI line number. These parameters are presented with their associated optimisation criteria used to achieve the system radiometric performances, and their sensitivities on radiometric performances. All the measurements performed by Thales Alenia Space on the PLEIADES detection units demonstrate the high potential of the SED HI concept for Earth high-resolution observation systems, allowing optimised performances at instrument and satellite levels.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koch, Volker

    These lectures are an attempt at a pedagogical introduction to the elementary concepts of chiral symmetry in nuclear physics. We will also discuss some effective chiral models such as the linear and nonlinear sigma model, as well as the essential ideas of chiral perturbation theory. We will present some applications to the physics of ultrarelativistic heavy ion collisions.

  8. A First Step towards Variational Methods in Engineering

    ERIC Educational Resources Information Center

    Periago, Francisco

    2003-01-01

    In this paper, a didactical proposal is presented to introduce the variational methods for solving boundary value problems to engineering students. Starting from a couple of simple models arising in linear elasticity and heat diffusion, the concept of weak solution for these models is motivated and the existence, uniqueness and continuous…

  9. Koopman Operator Framework for Time Series Modeling and Analysis

    NASA Astrophysics Data System (ADS)

    Surana, Amit

    2018-01-01

    We propose an interdisciplinary framework for time series classification, forecasting, and anomaly detection by combining concepts from Koopman operator theory, machine learning, and linear systems and control theory. At the core of this framework is nonlinear dynamic generative modeling of time series using the Koopman operator which is an infinite-dimensional but linear operator. Rather than working with the underlying nonlinear model, we propose two simpler linear representations or model forms based on Koopman spectral properties. We show that these model forms are invariants of the generative model and can be readily identified directly from data using techniques for computing Koopman spectral properties without requiring the explicit knowledge of the generative model. We also introduce different notions of distances on the space of such model forms which is essential for model comparison/clustering. We employ the space of Koopman model forms equipped with distance in conjunction with classical machine learning techniques to develop a framework for automatic feature generation for time series classification. The forecasting/anomaly detection framework is based on using Koopman model forms along with classical linear systems and control approaches. We demonstrate the proposed framework for human activity classification, and for time series forecasting/anomaly detection in power grid application.
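
    A minimal sketch of the idea of identifying Koopman spectral properties directly from data, here via exact Dynamic Mode Decomposition (DMD) on a known linear system. This is an illustration of the general principle only, not the paper's framework:

```python
import numpy as np

# Generate snapshots from a known linear generative model x_{k+1} = A x_k:
# a damped rotation with eigenvalues 0.95 * exp(+/- 0.1i).
rng = np.random.default_rng(0)
theta = 0.1
A_true = 0.95 * np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])
x = rng.standard_normal(2)
snapshots = [x]
for _ in range(50):
    x = A_true @ x
    snapshots.append(x)
X = np.column_stack(snapshots[:-1])  # states at time k
Y = np.column_stack(snapshots[1:])   # states at time k+1

# Exact DMD: least-squares fit A ~ Y X^+ ; for a linear generative model its
# eigenvalues recover the Koopman eigenvalues without knowing A_true.
A_dmd = Y @ np.linalg.pinv(X)
eigvals = np.linalg.eigvals(A_dmd)
print(np.sort_complex(eigvals))
```

Note that only the snapshot data enter the identification step, mirroring the paper's point that the model forms can be estimated without explicit knowledge of the generative model.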

  10. Equilibrium control of nonlinear verticum-type systems, applied to integrated pest control.

    PubMed

    Molnár, S; Gámez, M; López, I; Cabello, T

    2013-08-01

    Linear verticum-type control and observation systems have been introduced for modelling certain industrial systems consisting of subsystems vertically connected by certain state variables. Recently the concept of verticum-type observation systems and the corresponding observability condition have been extended by the authors to the nonlinear case. In the present paper the general concept of a nonlinear verticum-type control system is introduced, and a sufficient condition for local controllability to equilibrium is obtained. In addition to the usual linearization, the basic idea is a decomposition of the control of the whole system into the control of the subsystems. Starting from the integrated pest control model of Rafikov and Limeira (2012) and Rafikov et al. (2012), a nonlinear verticum-type model has been set up and an equilibrium control is obtained. Furthermore, a corresponding bioeconomical problem is solved by minimizing the total cost of integrated pest control (combining chemical control with a biological one). Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  11. A study of attitude control concepts for precision-pointing non-rigid spacecraft

    NASA Technical Reports Server (NTRS)

    Likins, P. W.

    1975-01-01

    Attitude control concepts for use onboard structurally nonrigid spacecraft that must be pointed with great precision are examined. The task of determining the eigenproperties of a system of linear time-invariant equations (in terms of hybrid coordinates) representing the attitude motion of a flexible spacecraft is discussed. Literal characteristics are developed for the associated eigenvalues and eigenvectors of the system. A method is presented for determining the poles and zeros of the transfer function describing the attitude dynamics of a flexible spacecraft characterized by hybrid coordinate equations. Alterations are made to linear regulator and observer theory to accommodate modeling errors. The results show that a model error vector, which evolves from an error system, can be added to a reduced system model, estimated by an observer, and used by the control law to render the system less sensitive to uncertain magnitudes and phase relations of truncated modes and external disturbance effects. A hybrid coordinate formulation using assumed mode shapes, rather than the usual finite element approach, is also presented.

  12. A Textbook for a First Course in Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Zingg, D. W.; Pulliam, T. H.; Nixon, David (Technical Monitor)

    1999-01-01

    This paper describes and discusses the textbook, Fundamentals of Computational Fluid Dynamics by Lomax, Pulliam, and Zingg, which is intended for a graduate level first course in computational fluid dynamics. This textbook emphasizes fundamental concepts in developing, analyzing, and understanding numerical methods for the partial differential equations governing the physics of fluid flow. Its underlying philosophy is that the theory of linear algebra and the attendant eigenanalysis of linear systems provides a mathematical framework to describe and unify most numerical methods in common use in the field of fluid dynamics. Two linear model equations, the linear convection and diffusion equations, are used to illustrate concepts throughout. Emphasis is on the semi-discrete approach, in which the governing partial differential equations (PDE's) are reduced to systems of ordinary differential equations (ODE's) through a discretization of the spatial derivatives. The ordinary differential equations are then reduced to ordinary difference equations (OΔE's) using a time-marching method. This methodology, using the progression from PDE's through ODE's to OΔE's, together with the use of the eigensystems of tridiagonal matrices and the theory of OΔE's, gives the book its distinctiveness and provides a sound basis for a deep understanding of fundamental concepts in computational fluid dynamics.
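
    The semi-discrete approach can be illustrated on the linear convection equation: discretizing u_t + a u_x = 0 in space with periodic centered differences leaves a system of ODEs, du/dt = A u, whose eigenvalues are purely imaginary. This is a sketch in the spirit of the book's eigenanalysis, not an excerpt from it:

```python
import numpy as np

# Semi-discretization of u_t + a u_x = 0 on a periodic grid with
# second-order centered differences: du/dt = A u.
M, a = 16, 1.0
dx = 1.0 / M

# Circulant difference matrix: (B u)_j = u_{j+1} - u_{j-1}.
B = np.zeros((M, M))
for j in range(M):
    B[j, (j + 1) % M] = 1.0
    B[j, (j - 1) % M] = -1.0
A = -a / (2 * dx) * B  # semi-discrete operator

# B is antisymmetric, so the eigenvalues of A are purely imaginary:
# the semi-discrete system is non-dissipative (neutrally stable).
lam = np.linalg.eigvals(A)
print(np.max(np.abs(lam.real)))
```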

  13. Aerodynamic mathematical modeling - basic concepts

    NASA Technical Reports Server (NTRS)

    Tobak, M.; Schiff, L. B.

    1981-01-01

    The mathematical modeling of the aerodynamic response of an aircraft to arbitrary maneuvers is reviewed. Bryan's original formulation, linear aerodynamic indicial functions, and superposition are considered. These concepts are extended into the nonlinear regime. The nonlinear generalization yields a form for the aerodynamic response that can be built up from the responses to a limited number of well defined characteristic motions, reproducible in principle either in wind tunnel experiments or flow field computations. A further generalization leads to a form accommodating the discontinuous and double valued behavior characteristics of hysteresis in the steady state aerodynamic response.

  14. Estimation of hysteretic damping of structures by stochastic subspace identification

    NASA Astrophysics Data System (ADS)

    Bajrić, Anela; Høgsberg, Jan

    2018-05-01

    Output-only system identification techniques can estimate modal parameters of structures represented by linear time-invariant systems; however, their extension to structures exhibiting non-linear behavior has not received much attention. This paper presents an output-only system identification method suitable for the random response of dynamic systems with hysteretic damping. The method applies the concept of Stochastic Subspace Identification (SSI) to estimate the model parameters of a dynamic system with hysteretic damping. The restoring force is represented by the Bouc-Wen model, for which an equivalent linear relaxation model is derived. Hysteretic properties can be encountered in engineering structures exposed to severe cyclic environmental loads, as well as in vibration mitigation devices such as Magneto-Rheological (MR) dampers. The identification technique incorporates the equivalent linear damper model in the estimation procedure. Synthetic data, representing the random vibrations of systems with hysteresis, are used to validate the system parameters estimated by the presented method at low and high levels of excitation amplitude.
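
    For reference, a common form of the Bouc-Wen hysteretic restoring-force model mentioned above is given below (standard notation, which may differ from the paper's; z is the hysteretic state variable):

```latex
% Equation of motion with a Bouc-Wen restoring force split into an elastic
% part (weight alpha) and a hysteretic part (weight 1 - alpha):
\[
  m\ddot{x} + c\dot{x} + \alpha k x + (1-\alpha) k z = f(t),
\]
% Evolution of the hysteretic variable z, shaped by A, beta, gamma, n:
\[
  \dot{z} = A\dot{x} - \beta\,|\dot{x}|\,|z|^{\,n-1} z - \gamma\,\dot{x}\,|z|^{\,n}.
\]
```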

  15. Optical telescope refocussing mechanism concept design on remote sensing satellite

    NASA Astrophysics Data System (ADS)

    Kuo, Jen-Chueh; Ling, Jer

    2017-09-01

    The optical telescope system of a remote sensing satellite must be precisely aligned to obtain high quality images during its mission life. In practice, because the telescope mirrors can become misaligned due to launch loads, thermal distortion of supporting structures, or hygroscopic distortion in some composite materials, the optical telescope system is often equipped with a refocussing mechanism to re-align the optical elements when their positions drift out of range during image acquisition. This paper introduces the development process of the satellite refocussing mechanism function model and the engineering models. The design concept of the refocussing mechanism can be applied to either a Cassegrain type or a Korsch type telescope; in this paper the refocussing mechanism is located at the rear of the secondary mirror. The refocussing mechanism acts on the secondary mirror because its misalignment degrades MTF more than that of the other optical elements. Two types of refocussing mechanism model are introduced: a linear type model and a rotation type model. The linear refocussing mechanism function model is composed of a ceramic piezoelectric linear step motor, an optical rule, and a controller; the secondary mirror is designed to be precisely moved in the telescope despace direction through the refocussing mechanism. The rotation refocussing mechanism function model is assembled with two ceramic piezoelectric rotational motors around two orthogonal directions in order to adjust the secondary mirror attitude in tilt and yaw angles. From the validation test results, the linear type refocussing mechanism function model can adjust the secondary mirror position with a minimum resolution of 500 nm under closed-loop control. For the rotation type model, the attitude angle of the secondary mirror can be adjusted with a minimum resolution of 6 arcsec and an angular velocity of 5°/s.

  16. A Characterization of a Unified Notion of Mathematical Function: The Case of High School Function and Linear Transformation

    ERIC Educational Resources Information Center

    Zandieh, Michelle; Ellis, Jessica; Rasmussen, Chris

    2017-01-01

    As part of a larger study of student understanding of concepts in linear algebra, we interviewed 10 university linear algebra students as to their conceptions of functions from high school algebra and linear transformation from their study of linear algebra. An overarching goal of this study was to examine how linear algebra students see linear…

  17. The Determinants of Child Health in Pakistan: An Economic Analysis

    ERIC Educational Resources Information Center

    Shehzad, Shafqat

    2006-01-01

    This paper estimates linear structural models using LISREL and employs MIMIC models to find out factors determining child health in Pakistan. A distinction has been made in permanent and transitory health states that lend support to Grossman's (1972) stock and flow concepts of health. The paper addresses the issue of health unobservability and…

  18. Forecasting Enrollments with Fuzzy Time Series.

    ERIC Educational Resources Information Center

    Song, Qiang; Chissom, Brad S.

    The concept of fuzzy time series is introduced and used to forecast the enrollment of a university. Fuzzy time series, an aspect of fuzzy set theory, forecasts enrollment using a first-order time-invariant model. To evaluate the model, the conventional linear regression technique is applied and the predicted values obtained are compared to the…

  19. Pointillist, Cyclical, and Overlapping: Multidimensional Facets of Time in Online Learning

    ERIC Educational Resources Information Center

    Ihanainen, Pekka; Moravec, John W.

    2011-01-01

    A linear, sequential time conception based on in-person meetings and pedagogical activities is not enough for those who practice and hope to enhance contemporary education, particularly where online interactions are concerned. In this article, we propose a new model for understanding time in pedagogical contexts. Conceptual parts of the model will…

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmidt, T.; Zimoch, D.

    The operation of an APPLE II based undulator beamline with all its polarization states (linear horizontal and vertical, circular and elliptical, and continuous variation of the linear vector) requires an effective description allowing an automated calculation of gap and shift parameter as function of energy and operation mode. The extension of the linear polarization range from 0 to 180 deg. requires 4 shiftable magnet arrays, permitting use of the APU (adjustable phase undulator) concept. Studies for a pure fixed gap APPLE II for the SLS revealed surprising symmetries between circular and linear polarization modes allowing for simplified operation. A semi-analytical model covering all types of APPLE II and its implementation will be presented.

  1. About APPLE II Operation

    NASA Astrophysics Data System (ADS)

    Schmidt, T.; Zimoch, D.

    2007-01-01

    The operation of an APPLE II based undulator beamline with all its polarization states (linear horizontal and vertical, circular and elliptical, and continuous variation of the linear vector) requires an effective description allowing an automated calculation of gap and shift parameter as function of energy and operation mode. The extension of the linear polarization range from 0 to 180° requires 4 shiftable magnet arrays, permitting use of the APU (adjustable phase undulator) concept. Studies for a pure fixed gap APPLE II for the SLS revealed surprising symmetries between circular and linear polarization modes allowing for simplified operation. A semi-analytical model covering all types of APPLE II and its implementation will be presented.

  2. Travel Demand Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Southworth, Frank; Garrow, Dr. Laurie

    This chapter describes the principal types of both passenger and freight demand models in use today, providing a brief history of model development supported by references to a number of popular texts on the subject, and directing the reader to papers covering some of the more recent technical developments in the area. Over the past half century a variety of methods have been used to estimate and forecast travel demands, drawing concepts from economic/utility maximization theory, transportation system optimization, and spatial interaction theory, and using and often combining solution techniques as varied as Box-Jenkins methods, non-linear multivariate regression, non-linear mathematical programming, and agent-based microsimulation.

  3. Estimating effects of limiting factors with regression quantiles

    USGS Publications Warehouse

    Cade, B.S.; Terrell, J.W.; Schroeder, R.L.

    1999-01-01

    In a recent Concepts paper in Ecology, Thomson et al. emphasized that assumptions of conventional correlation and regression analyses fundamentally conflict with the ecological concept of limiting factors, and they called for new statistical procedures to address this problem. The analytical issue is that unmeasured factors may be the active limiting constraint and may induce a pattern of unequal variation in the biological response variable through an interaction with the measured factors. Consequently, changes near the maxima, rather than at the center of response distributions, are better estimates of the effects expected when the observed factor is the active limiting constraint. Regression quantiles provide estimates for linear models fit to any part of a response distribution, including near the upper bounds, and require minimal assumptions about the form of the error distribution. Regression quantiles extend the concept of one-sample quantiles to the linear model by solving an optimization problem of minimizing an asymmetric function of absolute errors. Rank-score tests for regression quantiles provide tests of hypotheses and confidence intervals for parameters in linear models with heteroscedastic errors, conditions likely to occur in models of limiting ecological relations. We used selected regression quantiles (e.g., 5th, 10th, ..., 95th) and confidence intervals to test hypotheses that parameters equal zero for estimated changes in average annual acorn biomass due to forest canopy cover of oak (Quercus spp.) and oak species diversity. Regression quantiles also were used to estimate changes in glacier lily (Erythronium grandiflorum) seedling numbers as a function of lily flower numbers, rockiness, and pocket gopher (Thomomys talpoides fossor) activity, data that motivated the query by Thomson et al. for new statistical procedures. 
Both example applications showed that effects of limiting factors estimated by changes in some upper regression quantile (e.g., 90-95th) were greater than if effects were estimated by changes in the means from standard linear model procedures. Estimating a range of regression quantiles (e.g., 5-95th) provides a comprehensive description of biological response patterns for exploratory and inferential analyses in observational studies of limiting factors, especially when sampling large spatial and temporal scales.
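    The asymmetric-loss optimization behind regression quantiles can be illustrated with a short sketch (a minimal, hypothetical Python example; the grid search over a constant stands in for fitting a full linear model):

```python
import numpy as np

def pinball_loss(theta, y, tau):
    """Asymmetric absolute error whose minimizer is the tau-th quantile."""
    r = y - theta
    return np.sum(np.where(r >= 0, tau * r, (tau - 1.0) * r))

# Minimizing the loss over a constant recovers the ordinary sample quantile.
y = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
grid = np.linspace(y.min(), y.max(), 20001)

best_50 = grid[np.argmin([pinball_loss(t, y, 0.50) for t in grid])]
best_90 = grid[np.argmin([pinball_loss(t, y, 0.90) for t in grid])]
# best_50 is the sample median; best_90 sits near the upper bound of the data
```

Raising tau pushes the estimate toward the upper bound of the response distribution, which is exactly why upper regression quantiles track limiting-factor effects that the mean misses.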

  4. Linear system identification via backward-time observer models

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Phan, Minh

    1993-01-01

    This paper presents an algorithm to identify a state-space model of a linear system using a backward-time approach. The procedure consists of three basic steps. First, the Markov parameters of a backward-time observer are computed from experimental input-output data. Second, the backward-time observer Markov parameters are decomposed to obtain the backward-time system Markov parameters (backward-time pulse response samples), from which a backward-time state-space model is realized using the Eigensystem Realization Algorithm. Third, the obtained backward-time state-space model is converted to the usual forward-time representation. Stochastic properties of this approach are discussed. Experimental results are given to illustrate when and to what extent this concept works.
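    The realization step can be sketched with a generic single-input single-output Eigensystem Realization Algorithm (an illustrative forward-time sketch assuming noise-free Markov parameters; the paper applies the same realization idea to backward-time Markov parameters):

```python
import numpy as np

def era(markov, n):
    """Minimal SISO ERA: realize (A, B, C) of order n from pulse-response
    samples markov[k] = C A^k B."""
    m = len(markov) // 2
    H0 = np.array([[markov[i + j] for j in range(m)] for i in range(m)])
    H1 = np.array([[markov[i + j + 1] for j in range(m)] for i in range(m)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :n], s[:n], Vt[:n, :]   # truncate to model order n
    A = np.diag(s**-0.5) @ U.T @ H1 @ Vt.T @ np.diag(s**-0.5)
    B = (np.diag(np.sqrt(s)) @ Vt)[:, :1]
    C = (U @ np.diag(np.sqrt(s)))[:1, :]
    return A, B, C

# Pulse response of a second-order system with poles at 0.9 and 0.5
h = [0.9**k + 0.5**k for k in range(12)]
A, B, C = era(h, 2)
poles = np.sort(np.linalg.eigvals(A).real)
```

The realized triplet reproduces the original Markov parameters, and the eigenvalues of A recover the system poles.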

  5. Using Example Generation to Explore Students' Understanding of the Concepts of Linear Dependence/Independence in Linear Algebra

    ERIC Educational Resources Information Center

    Aydin, Sinan

    2014-01-01

    Linear algebra is a basic mathematical subject taught in mathematics and science departments of universities. The teaching and learning of this course has always been difficult. This study aims to contribute to the research in linear algebra education, focusing on the concepts of linear dependence and independence. This was done by introducing…

  6. Comparison of Aircraft Models and Integration Schemes for Interval Management in the TRACON

    NASA Technical Reports Server (NTRS)

    Neogi, Natasha; Hagen, George E.; Herencia-Zapana, Heber

    2012-01-01

    Reusable models of common elements for communication, computation, decision and control in air traffic management are necessary in order to enable simulation, analysis and assurance of emergent properties, such as safety and stability, for a given operational concept. Uncertainties due to faults, such as dropped messages, along with non-linearities and sensor noise are an integral part of these models, and impact emergent system behavior. Flight control algorithms designed using a linearized version of the flight mechanics will exhibit error due to model uncertainty, and may not be stable outside a neighborhood of the given point of linearization. Moreover, the communication mechanism by which the sensed state of an aircraft is fed back to a flight control system (such as an ADS-B message) impacts the overall system behavior, due both to sensor noise and to dropped messages (vacant samples). Additionally, simulation of the flight controller system can exhibit further numerical instability, due to the selection of the integration scheme and approximations made in the flight dynamics. We examine the theoretical and numerical stability of a speed controller under the Euler and Runge-Kutta schemes of integration, for the Maintain phase of a Mid-Term (2035-2045) Interval Management (IM) Operational Concept for descent and landing operations. We model uncertainties in communication due to missed ADS-B messages by vacant samples in the integration schemes, and compare the emergent behavior of the system, in terms of stability, via the boundedness of the final system state. Any bound on the errors incurred by these uncertainties will play an essential part in a composable assurance argument required for real-time, flight-deck guidance and control systems.
Thus, we believe that the creation of reusable models, which possess property guarantees, such as safety and stability, is an innovative and essential requirement to assessing the emergent properties of novel airspace concepts of operation.
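    The integration-scheme comparison can be illustrated on the scalar test equation dx/dt = -lam*x (a hypothetical stand-in for the controller dynamics, not the paper's IM model): explicit Euler is stable only for step sizes h < 2/lam, while classical fourth-order Runge-Kutta remains stable out to roughly h < 2.785/lam.

```python
lam, h, steps = 1.0, 2.5, 60   # h chosen beyond Euler's stability limit 2/lam

def euler(x):
    # explicit Euler: amplification factor 1 - lam*h = -1.5, |.| > 1 diverges
    for _ in range(steps):
        x = x + h * (-lam * x)
    return x

def rk4(x):
    # classical RK4: amplification factor ~0.648 for lam*h = 2.5, so it decays
    f = lambda v: -lam * v
    for _ in range(steps):
        k1 = f(x)
        k2 = f(x + h / 2 * k1)
        k3 = f(x + h / 2 * k2)
        k4 = f(x + h * k3)
        x = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x
```

With the same step size, the Euler iterate blows up while the RK4 iterate decays toward zero, matching the boundedness criterion used in the abstract.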

  7. Nonlinear isochrones in murine left ventricular pressure-volume loops: how well does the time-varying elastance concept hold?

    PubMed

    Claessens, T E; Georgakopoulos, D; Afanasyeva, M; Vermeersch, S J; Millar, H D; Stergiopulos, N; Westerhof, N; Verdonck, P R; Segers, P

    2006-04-01

    The linear time-varying elastance theory is frequently used to describe the change in ventricular stiffness during the cardiac cycle. The concept assumes that all isochrones (i.e., curves that connect pressure-volume data occurring at the same time) are linear and have a common volume intercept. Of specific interest is the steepest isochrone, the end-systolic pressure-volume relationship (ESPVR), of which the slope serves as an index for cardiac contractile function. Pressure-volume measurements, achieved with a combined pressure-conductance catheter in the left ventricle of 13 open-chest anesthetized mice, showed a marked curvilinearity of the isochrones. We therefore analyzed the shape of the isochrones by using six regression algorithms (two linear, two quadratic, and two logarithmic, each with a fixed or time-varying intercept) and discussed the consequences for the elastance concept. Our main observations were 1) the volume intercept varies considerably with time; 2) isochrones are equally well described by using quadratic or logarithmic regression; 3) linear regression with a fixed intercept shows poor correlation (R(2) < 0.75) during isovolumic relaxation and early filling; and 4) logarithmic regression is superior in estimating the fixed volume intercept of the ESPVR. In conclusion, the linear time-varying elastance fails to provide a sufficiently robust model to account for changes in pressure and volume during the cardiac cycle in the mouse ventricle. A new framework accounting for the nonlinear shape of the isochrones needs to be developed.

  8. Coupled modelling of groundwater flow-heat transport for assessing river-aquifer interactions

    NASA Astrophysics Data System (ADS)

    Engeler, I.; Hendricks Franssen, H. J.; Müller, R.; Stauffer, F.

    2010-05-01

    A three-dimensional finite element model for coupled variably saturated groundwater flow and heat transport was developed for the aquifer below the city of Zurich. The piezometric heads in the aquifer are strongly influenced by the river Limmat. In the model region, the river Limmat loses water to the aquifer. The river-aquifer interaction was modelled with the standard linear leakage concept. Coupling was implemented by considering the temperature dependence of the hydraulic conductivity and of the leakage coefficient (via water viscosity), as well as density-dependent transport. Calibration was performed for isothermal conditions by inverse modelling using the pilot point method. Independent model testing was carried out by residuals analysis, with the help of the available dense monitoring network for piezometric heads and groundwater temperature. The comparison of model results and measurements showed high accuracy for temperature, except for the southern part of the model area, where substantial geological heterogeneity is expected, which could not be reproduced by the model. The comparison of simulated and measured heads showed that, especially in the vicinity of the river Limmat, model results were improved by a temperature-dependent leakage coefficient. Residuals were reduced by up to 30% compared to isothermal leakage coefficients. This holds particularly for regions where the river stage is considerably above the groundwater level. Furthermore, additional analysis confirmed prior findings that seepage rates during flood events cannot be reproduced with the implemented linear leakage concept. Infiltration during flood events is larger than expected, which can potentially be attributed to additional infiltration areas. 
It is concluded that the temperature-dependent leakage concept significantly improves the model results for this study area, and we expect this to be the case for other areas as well.

  9. Linear Mixed Models: GUM and Beyond

    NASA Astrophysics Data System (ADS)

    Arendacká, Barbora; Täubner, Angelika; Eichstädt, Sascha; Bruns, Thomas; Elster, Clemens

    2014-04-01

    In Annex H.5, the Guide to the Expression of Uncertainty in Measurement (GUM) [1] recognizes the necessity of analyzing certain types of experiments by applying random effects ANOVA models. These belong to the more general family of linear mixed models on which we focus in the current paper. Extending the short introduction provided by the GUM, our aim is to show that the more general linear mixed models cover a wider range of situations occurring in practice and can be beneficial when employed in the data analysis of long-term repeated experiments. Namely, we point out their potential as an aid in establishing an uncertainty budget and as a means of gaining more insight into the measurement process. We also comment on computational issues and, to make the explanations less abstract, illustrate all the concepts with the help of a measurement campaign conducted in order to challenge the uncertainty budget in the calibration of accelerometers.
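    The balanced one-way random effects ANOVA model mentioned in Annex H.5 can be sketched with the classical moment estimators (a minimal example with made-up numbers, not the paper's accelerometer data):

```python
import numpy as np

# Balanced one-way random effects model: y_ij = mu + a_i + e_ij,
# e.g. k calibration runs on different days with n observations each.
groups = np.array([[1.0, 2.0, 3.0],
                   [5.0, 6.0, 7.0],
                   [9.0, 10.0, 11.0]])
k, n = groups.shape

grand = groups.mean()
means = groups.mean(axis=1)
msw = np.sum((groups - means[:, None]) ** 2) / (k * (n - 1))   # within groups
msb = n * np.sum((means - grand) ** 2) / (k - 1)               # between groups

sigma2_within = msw                  # repeatability variance component
sigma2_between = (msb - msw) / n     # day-to-day (random effect) component
```

Splitting the observed spread into these two components is precisely the extra insight into the measurement process, and the input to the uncertainty budget, that the abstract refers to.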

  10. Avalanches, loading and finite size effects in 2D amorphous plasticity: results from a finite element model

    NASA Astrophysics Data System (ADS)

    Sandfeld, Stefan; Budrikis, Zoe; Zapperi, Stefano; Fernandez Castellanos, David

    2015-02-01

    Crystalline plasticity is strongly interlinked with dislocation mechanics and nowadays is relatively well understood. Concepts and physical models of plastic deformation in amorphous materials on the other hand—where the concept of linear lattice defects is not applicable—still are lagging behind. We introduce an eigenstrain-based finite element lattice model for simulations of shear band formation and strain avalanches. Our model allows us to study the influence of surfaces and finite size effects on the statistics of avalanches. We find that even with relatively complex loading conditions and open boundary conditions, critical exponents describing avalanche statistics are unchanged, which validates the use of simpler scalar lattice-based models to study these phenomena.

  11. From elementary flux modes to elementary flux vectors: Metabolic pathway analysis with arbitrary linear flux constraints.

    PubMed

    Klamt, Steffen; Regensburger, Georg; Gerstl, Matthias P; Jungreuthmayer, Christian; Schuster, Stefan; Mahadevan, Radhakrishnan; Zanghellini, Jürgen; Müller, Stefan

    2017-04-01

    Elementary flux modes (EFMs) emerged as a formal concept to describe metabolic pathways and have become an established tool for constraint-based modeling and metabolic network analysis. EFMs are characteristic (support-minimal) vectors of the flux cone that contains all feasible steady-state flux vectors of a given metabolic network. EFMs account for (homogeneous) linear constraints arising from reaction irreversibilities and the assumption of steady state; however, other (inhomogeneous) linear constraints, such as minimal and maximal reaction rates frequently used by other constraint-based techniques (such as flux balance analysis [FBA]), cannot be directly integrated. These additional constraints further restrict the space of feasible flux vectors and turn the flux cone into a general flux polyhedron in which the concept of EFMs is not directly applicable anymore. For this reason, there has been a conceptual gap between EFM-based (pathway) analysis methods and linear optimization (FBA) techniques, as they operate on different geometric objects. One approach to overcome these limitations was proposed ten years ago and is based on the concept of elementary flux vectors (EFVs). Only recently has the community started to recognize the potential of EFVs for metabolic network analysis. In fact, EFVs exactly represent the conceptual development required to generalize the idea of EFMs from flux cones to flux polyhedra. This work aims to present a concise theoretical and practical introduction to EFVs that is accessible to a broad audience. We highlight the close relationship between EFMs and EFVs and demonstrate that almost all applications of EFMs (in flux cones) are possible for EFVs (in flux polyhedra) as well. In fact, certain properties can only be studied with EFVs. Thus, we conclude that EFVs provide a powerful and unifying framework for constraint-based modeling of metabolic networks.

  12. From elementary flux modes to elementary flux vectors: Metabolic pathway analysis with arbitrary linear flux constraints

    PubMed Central

    Klamt, Steffen; Gerstl, Matthias P.; Jungreuthmayer, Christian; Mahadevan, Radhakrishnan; Müller, Stefan

    2017-01-01

    Elementary flux modes (EFMs) emerged as a formal concept to describe metabolic pathways and have become an established tool for constraint-based modeling and metabolic network analysis. EFMs are characteristic (support-minimal) vectors of the flux cone that contains all feasible steady-state flux vectors of a given metabolic network. EFMs account for (homogeneous) linear constraints arising from reaction irreversibilities and the assumption of steady state; however, other (inhomogeneous) linear constraints, such as minimal and maximal reaction rates frequently used by other constraint-based techniques (such as flux balance analysis [FBA]), cannot be directly integrated. These additional constraints further restrict the space of feasible flux vectors and turn the flux cone into a general flux polyhedron in which the concept of EFMs is not directly applicable anymore. For this reason, there has been a conceptual gap between EFM-based (pathway) analysis methods and linear optimization (FBA) techniques, as they operate on different geometric objects. One approach to overcome these limitations was proposed ten years ago and is based on the concept of elementary flux vectors (EFVs). Only recently has the community started to recognize the potential of EFVs for metabolic network analysis. In fact, EFVs exactly represent the conceptual development required to generalize the idea of EFMs from flux cones to flux polyhedra. This work aims to present a concise theoretical and practical introduction to EFVs that is accessible to a broad audience. We highlight the close relationship between EFMs and EFVs and demonstrate that almost all applications of EFMs (in flux cones) are possible for EFVs (in flux polyhedra) as well. In fact, certain properties can only be studied with EFVs. Thus, we conclude that EFVs provide a powerful and unifying framework for constraint-based modeling of metabolic networks. PMID:28406903

  13. Commentary on the statistical properties of noise and its implication on general linear models in functional near-infrared spectroscopy.

    PubMed

    Huppert, Theodore J

    2016-01-01

    Functional near-infrared spectroscopy (fNIRS) is a noninvasive neuroimaging technique that uses low levels of light to measure changes in cerebral blood oxygenation levels. In the majority of NIRS functional brain studies, analysis of this data is based on a statistical comparison of hemodynamic levels between a baseline and task or between multiple task conditions by means of a linear regression model: the so-called general linear model. Although these methods are similar to their implementation in other fields, particularly for functional magnetic resonance imaging, the specific application of these methods in fNIRS research differs in several key ways related to the sources of noise and artifacts unique to fNIRS. In this brief communication, we discuss the application of linear regression models in fNIRS and the modifications needed to generalize these models in order to deal with structured (colored) noise due to systemic physiology and noise heteroscedasticity due to motion artifacts. The objective of this work is to present an overview of these noise properties in the context of the linear model as it applies to fNIRS data. This work is aimed at explaining these mathematical issues to the general fNIRS experimental researcher but is not intended to be a complete mathematical treatment of these concepts.
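    A common way to handle structured (colored) noise in a linear regression model is prewhitening: filter both the design matrix and the data with the inverse noise model, then apply ordinary least squares. A minimal AR(1) sketch (a generic illustration, not the specific fNIRS pipeline discussed in the paper):

```python
import numpy as np

def ar1_prewhiten(X, y, rho):
    """Filter design matrix and data so AR(1) noise with coefficient rho
    becomes approximately white, then solve by ordinary least squares."""
    Xf = X.astype(float).copy()
    yf = y.astype(float).copy()
    Xf[1:] -= rho * X[:-1]
    yf[1:] -= rho * y[:-1]
    beta, *_ = np.linalg.lstsq(Xf, yf, rcond=None)
    return beta

# Sanity check: with rho = 0 the filter is the identity, so the estimate
# reduces to plain OLS on the unfiltered data.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), np.arange(50.0)])
y = X @ np.array([1.0, 0.5]) + rng.normal(0, 0.1, 50)
b_white = ar1_prewhiten(X, y, 0.0)
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
```

In practice rho would itself be estimated from the residuals (or the filter generalized to higher-order AR models), which is the kind of modification the commentary discusses.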

  14. Simulation of Nonisothermal Consolidation of Saturated Soils Based on a Thermodynamic Model

    PubMed Central

    Cheng, Xiaohui

    2013-01-01

    Based on nonequilibrium thermodynamics, a thermo-hydro-mechanical coupling model for saturated soils is established, including a constitutive model without such concepts as yield surface and flow rule. An elastic potential energy density function is defined to derive a hyperelastic relation among the effective stress, the elastic strain, and the dry density. The classical linear non-equilibrium thermodynamic theory is employed to quantitatively describe unrecoverable energy processes, such as the development of nonelastic deformation in materials, through the concepts of dissipative force and dissipative flow. In particular, the granular fluctuation, which represents the kinetic energy fluctuation and elastic potential energy fluctuation at the particulate scale caused by the irregular mutual movement between particles, is introduced in the model and described by the concept of granular entropy. Using this model, the nonisothermal consolidation of saturated clays under cyclic thermal loadings is simulated to validate the model. The results show that the nonisothermal consolidation is heavily OCR dependent and unrecoverable. PMID:23983623

  15. Simulation of nonisothermal consolidation of saturated soils based on a thermodynamic model.

    PubMed

    Zhang, Zhichao; Cheng, Xiaohui

    2013-01-01

    Based on nonequilibrium thermodynamics, a thermo-hydro-mechanical coupling model for saturated soils is established, including a constitutive model without such concepts as yield surface and flow rule. An elastic potential energy density function is defined to derive a hyperelastic relation among the effective stress, the elastic strain, and the dry density. The classical linear non-equilibrium thermodynamic theory is employed to quantitatively describe unrecoverable energy processes, such as the development of nonelastic deformation in materials, through the concepts of dissipative force and dissipative flow. In particular, the granular fluctuation, which represents the kinetic energy fluctuation and elastic potential energy fluctuation at the particulate scale caused by the irregular mutual movement between particles, is introduced in the model and described by the concept of granular entropy. Using this model, the nonisothermal consolidation of saturated clays under cyclic thermal loadings is simulated to validate the model. The results show that the nonisothermal consolidation is heavily OCR dependent and unrecoverable.

  16. The Role of Proof in Comprehending and Teaching Elementary Linear Algebra.

    ERIC Educational Resources Information Center

    Uhlig, Frank

    2002-01-01

    Describes how elementary linear algebra can be taught successfully while introducing students to the concept and practice of mathematical proof. Suggests exploring the concept of solvability of linear systems first via the row echelon form (REF). (Author/KHR)
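    The solvability question that the row echelon form makes visible can also be stated numerically via ranks (a small illustrative sketch; `solvable` is a hypothetical helper, and the rank comparison is what row reduction to REF reveals by hand):

```python
import numpy as np

def solvable(A, b):
    """Rouche-Capelli check: Ax = b is consistent iff appending b to A
    does not raise the rank (i.e. b lies in the column space of A)."""
    Ab = np.column_stack([A, b])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(Ab)

A = np.array([[1.0, 2.0], [2.0, 4.0]])   # rank-deficient coefficient matrix
b_in = np.array([3.0, 6.0])              # lies in the column space of A
b_out = np.array([3.0, 7.0])             # does not
```

Working the same examples by hand in REF, and then proving the rank criterion in general, is one route from computation to the practice of proof that the article advocates.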

  17. Estimating a graphical intra-class correlation coefficient (GICC) using multivariate probit-linear mixed models.

    PubMed

    Yue, Chen; Chen, Shaojie; Sair, Haris I; Airan, Raag; Caffo, Brian S

    2015-09-01

    Data reproducibility is a critical issue in all scientific experiments. In this manuscript, the problem of quantifying the reproducibility of graphical measurements is considered. The image intra-class correlation coefficient (I2C2) is generalized and the graphical intra-class correlation coefficient (GICC) is proposed for this purpose. The concept of the GICC is based on multivariate probit-linear mixed effect models. A Markov chain Monte Carlo EM (MCMC-EM) algorithm is used for estimating the GICC. Simulation results with varied settings are demonstrated and our method is applied to the KIRBY21 test-retest dataset.

  18. Modeling Reader and Text Interactions during Narrative Comprehension: A Test of the Lexical Quality Hypothesis

    ERIC Educational Resources Information Center

    Hamilton, Stephen T.; Freed, Erin M.; Long, Debra L.

    2013-01-01

    The goal of this study was to examine predictions derived from the Lexical Quality Hypothesis regarding relations among word decoding, working-memory capacity, and the ability to integrate new concepts into a developing discourse representation. Hierarchical Linear Modeling was used to quantify the effects of three text properties (length,…

  19. Pathways to Co-Impact: Action Research and Community Organising

    ERIC Educational Resources Information Center

    Banks, Sarah; Herrington, Tracey; Carter, Kath

    2017-01-01

    This article introduces the concept of "co-impact" to characterise the complex and dynamic process of social and economic change generated by participatory action research (PAR). It argues that dominant models of research impact tend to see it as a linear process, based on a donor-recipient model, occurring at the end of a project…

  20. Vertical Distribution of Radiation Stress for Non-linear Shoaling Waves

    NASA Astrophysics Data System (ADS)

    Webb, B. M.; Slinn, D. N.

    2004-12-01

    The flux of momentum directed shoreward by an incident wave field, commonly referred to as the radiation stress, plays a significant role in nearshore circulation and, therefore, has a profound impact on the transport of pollutants, biota, and sediment in nearshore systems. Having received much attention since the seminal work of Longuet-Higgins and Stewart in the early 1960s, the radiation stress concept continues to be refined, and evidence of its utility is widespread in the literature pertaining to coastal and ocean science. A number of investigations, both numerical and analytical in nature, have used the concept of the radiation stress to derive appropriate forcing mechanisms that initiate cross-shore and longshore circulation, but typically in a depth-averaged sense due to a lack of information concerning the vertical distribution of the wave stresses. While depth-averaged nearshore circulation models are still widely used today, advancements in technology have permitted the adaptation of three-dimensional (3D) modeling techniques to study flow properties of complex nearshore circulation systems. It has been shown that the resulting circulation in these 3D models is very sensitive to the vertical distribution of the nearshore forcing, which has often been implemented as either a depth-uniform or depth-linear distribution. Recently, analytical expressions describing the vertical structure of radiation stress components have appeared in the literature (see Mellor, 2003; Xia et al., 2004) but do not fully describe the magnitude and structure in the region bound by the trough and crest of non-linear, propagating waves. Utilizing a three-dimensional, non-linear, numerical model that resolves the time-dependent free surface, we present mean flow properties resulting from a simulation of Visser's (1984, 1991) laboratory experiment on uniform longshore currents. 
More specifically, we provide information regarding the vertical distribution of radiation stress components (Sxx and Sxy) resulting from obliquely incident, non-linear shoaling waves. Vertical profiles of the radiation stress components predicted by the numerical model are compared with published analytical solutions, expressions given by linear theory, and observations from an investigation employing second-order cnoidal wave theory.
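    For reference, the depth-integrated linear-theory value that such vertical profiles are compared against follows Longuet-Higgins and Stewart: for normally incident waves, Sxx = E(2n - 1/2), where n is the ratio of group to phase speed. A short sketch with illustrative values (not data from the study):

```python
import numpy as np

def sxx_linear(E, k, h):
    """Depth-integrated onshore radiation stress from linear wave theory:
    Sxx = E * (2n - 1/2), with n = 0.5 * (1 + 2kh / sinh(2kh)),
    for wave energy E, wavenumber k, and water depth h."""
    n = 0.5 * (1.0 + 2.0 * k * h / np.sinh(2.0 * k * h))
    return E * (2.0 * n - 0.5)

deep = sxx_linear(1.0, 1.0, 50.0)       # kh >> 1: n -> 1/2, so Sxx -> E/2
shallow = sxx_linear(1.0, 0.001, 0.1)   # kh << 1: n -> 1,   so Sxx -> 3E/2
```

The deep- and shallow-water limits bracket the shoaling behavior; the study's contribution is the vertical distribution of this depth-integrated quantity, which linear theory alone does not resolve between trough and crest.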

  1. Detector Outline Document for the Fourth Concept Detector ("4th") at the International Linear Collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbareschi, Daniele; et al.

    We describe a general purpose detector ("Fourth Concept") at the International Linear Collider (ILC) that can measure with high precision all the fundamental fermions and bosons of the standard model, and thereby access all known physics processes. The 4th concept consists of four basic subsystems: a pixel vertex detector for high precision vertex definitions, impact parameter tagging and near-beam occupancy reduction; a Time Projection Chamber for robust pattern recognition augmented with three high-precision pad rows for precision momentum measurement; a high precision multiple-readout fiber calorimeter, complemented with an EM dual-readout crystal calorimeter, for the energy measurement of hadrons, jets, electrons, photons, missing momentum, and the tagging of muons; and an iron-free dual-solenoid muon system for the inverse direction bending of muons in a gas volume to achieve high acceptance and good muon momentum resolution. The pixel vertex chamber, TPC and calorimeter are inside the solenoidal magnetic field. All four subsystems separately achieve the important scientific goal of being 2-to-10 times better than the already excellent LEP detectors, ALEPH, DELPHI, L3 and OPAL. All four basic subsystems contribute to the identification of standard model partons, some in unique ways, such that consequent physics studies are cogent. As an integrated detector concept, we achieve comprehensive physics capabilities that put all conceivable physics at the ILC within reach.

  2. Exact solution of a linear molecular motor model driven by two-step fluctuations and subject to protein friction.

    PubMed

    Fogedby, Hans C; Metzler, Ralf; Svane, Axel

    2004-08-01

    We investigate by analytical means the stochastic equations of motion of a linear molecular motor model based on the concept of protein friction. Solving the coupled Langevin equations originally proposed by Mogilner et al. [Phys. Lett. A 237, 297 (1998)], and averaging over both the two-step internal conformational fluctuations and the thermal noise, we present explicit, analytical expressions for the average motion and the velocity-force relationship. Our results allow for a direct interpretation of details of this motor model which are not readily accessible from numerical solutions. In particular, we find that the model is able to predict physiologically reasonable values for the load-free motor velocity and the motor mobility.

  3. Quantification of spore resistance for assessment and optimization of heating processes: a never-ending story.

    PubMed

    Mafart, P; Leguérinel, I; Couvert, O; Coroller, L

    2010-08-01

    The assessment and optimization of food heating processes require knowledge of the thermal resistance of target spores. Although the concept of spore resistance may seem simple, the establishment of a reliable quantification system for characterizing the heat resistance of spores has proven far more complex than imagined by early researchers. This paper points out the main difficulties encountered by reviewing the historical works on the subject. During an early period, the concept of individual spore resistance had not yet been considered and the resistance of a strain of spore-forming bacterium was related to a global population regarded as alive or dead. A second period was opened by the introduction of the well-known D parameter (decimal reduction time) associated with the previously introduced z-concept. The present period has introduced three new sources of complexity: consideration of non log-linear survival curves, consideration of environmental factors other than temperature, and awareness of the variability of resistance parameters. The occurrence of non log-linear survival curves makes spore resistance dependent on heating time. Consequently, spore resistance characterisation requires at least two parameters. While early resistance models took only heating temperature into account, new models consider other environmental factors such as pH and water activity ("horizontal extension"). Similarly the new generation of models also considers certain environmental factors of the recovery medium for quantifying "apparent heat resistance" ("vertical extension"). Because the conventional F-value is no longer additive in cases of non log-linear survival curves, the decimal reduction ratio should be preferred for assessing the efficiency of a heating process. Copyright 2010 Elsevier Ltd. All rights reserved.
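    The D- and z-value bookkeeping described above amounts to two simple log-linear relations; a minimal sketch (illustrative parameter values, not data from the review):

```python
import math

def survivors(n0, t, d_value):
    """Log-linear survival: each D minutes of heating cuts the count tenfold."""
    return n0 * 10 ** (-t / d_value)

def d_at_temperature(d_ref, t_ref, temp, z_value):
    """z-concept: D changes tenfold for every z degrees of temperature shift."""
    return d_ref * 10 ** ((t_ref - temp) / z_value)

n = survivors(1e6, 5.0, 1.0)                          # 5 decimal reductions
d_cooler = d_at_temperature(1.0, 121.1, 111.1, 10.0)  # 10 C below reference
```

The review's point is that both relations break down in practice: with non log-linear survival curves, no single D suffices, and the environmental factors of the heating and recovery media shift the apparent parameters.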

  4. A comparison of two adaptive multivariate analysis methods (PLSR and ANN) for winter wheat yield forecasting using Landsat-8 OLI images

    NASA Astrophysics Data System (ADS)

    Chen, Pengfei; Jing, Qi

    2017-02-01

    An assumption that the non-linear method is more reasonable than the linear method when canopy reflectance is used to establish a yield prediction model was proposed and tested in this study. For this purpose, partial least squares regression (PLSR) and artificial neural networks (ANN), representing linear and non-linear analysis methods respectively, were applied and compared for wheat yield prediction. Multi-period Landsat-8 OLI images were collected at two different wheat growth stages, and a field campaign was conducted to obtain grain yields at selected sampling sites in 2014. The field data were divided into a calibration database and a testing database. Using the calibration data, a cross-validation concept was introduced for the PLSR and ANN model construction to prevent over-fitting. All models were tested using the test data. The ANN yield-prediction model produced R2, RMSE and RMSE% values of 0.61, 979 kg ha-1, and 10.38%, respectively, in the testing phase, performing better than the PLSR yield-prediction model, which produced R2, RMSE, and RMSE% values of 0.39, 1211 kg ha-1, and 12.84%, respectively. The non-linear method was therefore suggested as the better method for yield prediction.

  5. The Learning Reconstruction of Particle System and Linear Momentum Conservation in Introductory Physics Course

    NASA Astrophysics Data System (ADS)

    Karim, S.; Saepuzaman, D.; Sriyansyah, S. P.

    2016-08-01

    This study was initiated by the low achievement of prospective teachers in understanding concepts in an introductory physics course. A problem was identified: students fail to develop the thinking skills required for building physics concepts. This study therefore reconstructs the learning process with an emphasis on physics concept building, yielding physics lesson plans for the concepts of particle systems and linear momentum conservation. A descriptive analysis method was used to investigate the learning reconstruction process carried out by the students. In this process, the students’ conceptual understanding was evaluated using essay tests on the concepts of particle systems and linear momentum conservation. The result shows that the learning reconstruction successfully supported the students’ understanding of the physics concepts.

  6. Graphical Tools for Linear Structural Equation Modeling

    DTIC Science & Technology

    2014-06-01

    Kenny and Milan (2011) write, “Identification is perhaps the most difficult concept for SEM researchers to understand. We have seen SEM...model to using typical SEM software to determine model identifiability. Kenny and Milan (2011) list the following drawbacks: (i) If poor starting...the well known recursive and null rules (Bollen, 1989) and the regression rule (Kenny and Milan, 2011). A Simple Criterion for Identifying Individual...

  7. Bayesian Travel Time Inversion adopting Gaussian Process Regression

    NASA Astrophysics Data System (ADS)

    Mauerberger, S.; Holschneider, M.

    2017-12-01

    A major application in seismology is the determination of seismic velocity models. Travel time measurements put an integral constraint on the velocity between source and receiver. We provide insight into travel time inversion from a correlation-based Bayesian point of view. To that end, the concept of Gaussian process regression is adopted to estimate a velocity model. The non-linear travel time integral is approximated by a first-order Taylor expansion. A heuristic covariance describes correlations among the observations and the a priori model. This approach enables us to assess a proxy of the Bayesian posterior distribution at ordinary computational cost; no multi-dimensional numerical integration or excessive sampling is necessary. Instead of stacking the data, we suggest progressively building up the posterior distribution: incorporating only a single piece of evidence at a time accounts for the deficit of linearization. As a result, the most probable model is given by the posterior mean, whereas uncertainties are described by the posterior covariance. As a proof of concept, a purely one-dimensional synthetic model is addressed, with a single source accompanied by multiple receivers on top of a model comprising a discontinuity. We consider travel times of both phases, the direct and the reflected wave, corrupted by noise. Left and right of the interface are assumed independent, with the squared exponential kernel serving as covariance.
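
    The Gaussian process regression step at the core of this approach can be sketched compactly. The following is a minimal, hypothetical 1-D example with a squared exponential kernel; the observation points, values, and noise level are invented for illustration and do not come from the study:

```python
import numpy as np

def sq_exp_kernel(xa, xb, sigma=1.0, ell=1.0):
    """Squared-exponential covariance between two sets of 1-D points."""
    d = xa[:, None] - xb[None, :]
    return sigma**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(x_obs, y_obs, x_new, noise=1e-2, sigma=1.0, ell=1.0):
    """Posterior mean and covariance of a zero-mean GP given noisy observations."""
    K = sq_exp_kernel(x_obs, x_obs, sigma, ell) + noise**2 * np.eye(len(x_obs))
    Ks = sq_exp_kernel(x_new, x_obs, sigma, ell)
    Kss = sq_exp_kernel(x_new, x_new, sigma, ell)
    alpha = np.linalg.solve(K, y_obs)
    mean = Ks @ alpha                                  # posterior mean at x_new
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)          # posterior covariance
    return mean, cov

# Hypothetical observations along a 1-D profile (here a smooth test function)
x_obs = np.array([0.0, 1.0, 2.0, 3.0])
y_obs = np.sin(x_obs)
mean, cov = gp_posterior(x_obs, y_obs, np.array([1.5]))
```

    The posterior mean interpolates the observations while the posterior covariance quantifies the remaining uncertainty, shrinking near the data points.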

  8. Sensitivity method for integrated structure/active control law design

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1987-01-01

    The development is described of an integrated structure/active control law design methodology for aeroelastic aircraft applications. A short motivating introduction to aeroservoelasticity is given, along with the need for integrated structures/controls design algorithms. Three alternative approaches to the development of an integrated design method are briefly discussed with regard to complexity, coordination and tradeoff strategies, and the nature of the resulting solutions. This leads to the formulation of the proposed approach, which is based on the concepts of sensitivity of optimum solutions and multi-level decompositions. The concept of sensitivity of optimum is explained in more detail and compared with traditional sensitivity concepts of classical control theory. The analytical sensitivity expressions for the solution of the linear quadratic Gaussian (LQG) control problem are summarized in terms of the linear regulator solution and the Kalman filter solution. Numerical results for a state space aeroelastic model of the DAST ARW-II vehicle are given, showing the changes in aircraft responses to variations of a structural parameter, in this case the first wing bending natural frequency.

  9. Advancing Blade Concept (ABC) Technology Demonstrator

    DTIC Science & Technology

    1981-04-01

    (No abstract available; the indexed full-text fragments describe Phase 0 tests at a simulated 40-knot full-scale speed on the Princeton dynamic model track (Reference 7), Phase II model tests of controls used both to control the rotor laterally and longitudinally and to control the thrust sharing between the rotors (Figure 28), and control system linearity and hysteresis tests conducted after rigging to the required values.)

  10. Power supply and pulsing strategies for the future linear colliders

    NASA Astrophysics Data System (ADS)

    Brogna, A. S.; Göttlicher, P.; Weber, M.

    2012-02-01

    The concept of the power delivery systems of the future linear colliders exploits the pulsed bunch structure of the beam in order to minimize the average current in the cables and the electronics and thus to reduce the material budget and heat dissipation. Although modern integrated circuit technologies are already available to design a low-power system, the concepts on how to pulse the front-end electronics and further reduce the power are not yet well understood. We propose a possible implementation of a power pulsing system based on a DC/DC converter and we choose the Analog Hadron Calorimeter as a specific example. The model features large switching currents of electronic modules in short time intervals to stimulate the inductive components along the cables and interconnections.

  11. Quasi-integrability in the modified defocusing non-linear Schrödinger model and dark solitons

    NASA Astrophysics Data System (ADS)

    Blas, H.; Zambrano, M.

    2016-03-01

    The concept of quasi-integrability has been examined in the context of deformations of the defocusing non-linear Schrödinger model (NLS). Our results show that the quasi-integrability concept, recently discussed in the context of deformations of the sine-Gordon, Bullough-Dodd and focusing NLS models, holds for the modified defocusing NLS model with dark soliton solutions, and that it exhibits the new feature of an infinite sequence of alternating conserved and asymptotically conserved charges. For the special case of two dark soliton solutions, where the field components are eigenstates of a space-reflection symmetry, the first four charges and the sequence of even-order charges are exactly conserved in the scattering process of the solitons. Such results are obtained through analytical and numerical methods, and employ adaptations of algebraic techniques used in integrable field theories. We perform extensive numerical simulations and consider the scattering of dark solitons for the cubic-quintic NLS model and for a saturable-type potential, with a deformation parameter ε and I = |ψ|² (the explicit forms of both potentials are given in the full text). The issues of the renormalization of the charges and anomalies, and their (quasi-)conservation laws, are properly addressed. The saturable NLS supports elastic scattering of two-soliton solutions for a wide range of values of {η, ε, q}. Our results may find potential applications in several areas of non-linear science, such as Bose-Einstein condensation.

  12. Assessment of Student Memo Assignments in Management Science

    ERIC Educational Resources Information Center

    Williams, Julie Ann Stuart; Stanny, Claudia J.; Reid, Randall C.; Hill, Christopher J.; Rosa, Katie Martin

    2015-01-01

    Frequently in Management Science courses, instructors focus primarily on teaching students the mathematics of linear programming models. However, the ability to discuss mathematical expressions in business terms is an important professional skill. The authors present an analysis of student abilities to discuss management science concepts through…

  13. Parameterized LMI Based Diagonal Dominance Compensator Study for Polynomial Linear Parameter Varying System

    NASA Astrophysics Data System (ADS)

    Han, Xiaobao; Li, Huacong; Jia, Qiusheng

    2017-12-01

    For the dynamic decoupling of a polynomial linear parameter varying (PLPV) system, a robust diagonal dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMI) using the concept of a parameterized Lyapunov function (PLF). To solve the PLMI-constrained optimization problem, the pre-compensator design problem is reduced to a normal convex optimization problem with normal linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter-scheduling pre-compensator is achieved which satisfies both robust performance and decoupling performance. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation on a turbofan engine PLPV model.

  14. Software requirements specification for the GIS-T/ISTEA pooled fund study phase C linear referencing engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amai, W.; Espinoza, J. Jr.; Fletcher, D.R.

    1997-06-01

    This Software Requirements Specification (SRS) describes the features to be provided by the software for the GIS-T/ISTEA Pooled Fund Study Phase C Linear Referencing Engine project. This document conforms to the recommendations of IEEE Standard 830-1984, IEEE Guide to Software Requirements Specification (Institute of Electrical and Electronics Engineers, Inc., 1984). The software specified in this SRS is a proof-of-concept implementation of the Linear Referencing Engine as described in the GIS-T/ISTEA Pooled Fund Study Phase B Summary, specifically Sheet 13 of the Phase B object model. The software allows an operator to convert between two linear referencing methods and a datum network.

  15. Shadows constructing a relationship between light and color pigments by physical and mathematical perspectives

    NASA Astrophysics Data System (ADS)

    Yurumezoglu, Kemal; Karabey, Burak; Yigit Koyunkaya, Melike

    2017-03-01

    Full shadows, partial shadows and multilayer shadows are explained based on the phenomenon of the linear dispersion of light. This paper focuses on advancing the understanding of shadows from physical and mathematical perspectives. A significant relationship between light and color pigments is demonstrated with the help of the concept of sets. This integration of physical and mathematical reasoning not only provides an operational approach to the concept of shadows, but also yields a model that can be used in science, technology, engineering and mathematics (STEM) curricula by providing a concrete, physical example for the abstract concept of the empty set.

  16. Second cancer risk after 3D-CRT, IMRT and VMAT for breast cancer.

    PubMed

    Abo-Madyan, Yasser; Aziz, Muhammad Hammad; Aly, Moamen M O M; Schneider, Frank; Sperk, Elena; Clausen, Sven; Giordano, Frank A; Herskind, Carsten; Steil, Volker; Wenz, Frederik; Glatting, Gerhard

    2014-03-01

    Second cancer risk after breast conserving therapy is becoming more important due to improved long-term survival rates. In this study, we estimate the risks of developing a solid second cancer after radiotherapy of breast cancer using the concept of organ equivalent dose (OED). Computed tomography scans of 10 representative breast cancer patients were selected for this study. Three-dimensional conformal radiotherapy (3D-CRT), tangential intensity modulated radiotherapy (t-IMRT), multibeam intensity modulated radiotherapy (m-IMRT), and volumetric modulated arc therapy (VMAT) were planned to deliver a total dose of 50 Gy in 2 Gy fractions. Differential dose volume histograms (dDVHs) were created and the OEDs calculated. Second cancer risks of ipsilateral lung, contralateral lung and contralateral breast cancer were estimated using linear, linear-exponential and plateau models for second cancer risk. Compared to 3D-CRT, cumulative excess absolute risks (EAR) for t-IMRT, m-IMRT and VMAT were increased by 2 ± 15%, 131 ± 85%, 123 ± 66% for the linear-exponential risk model, 9 ± 22%, 82 ± 96%, 71 ± 82% for the linear model and 3 ± 14%, 123 ± 78%, 113 ± 61% for the plateau model, respectively. Second cancer risk after 3D-CRT or t-IMRT is lower than for m-IMRT or VMAT by about 34% for the linear model and 50% for the linear-exponential and plateau models, respectively. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
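
    The OED concept weights each bin of the differential DVH by a dose-response model. A minimal sketch of the three named risk models (linear, linear-exponential, plateau), using an invented three-bin dDVH and assumed model parameters rather than the study's organ-specific values:

```python
import math

def oed_linear(dvh):
    """dvh: list of (fractional_volume, dose_Gy) bins from a differential DVH."""
    return sum(v * d for v, d in dvh)

def oed_linear_exponential(dvh, alpha):
    """Risk rises with dose but is attenuated by cell kill at high dose."""
    return sum(v * d * math.exp(-alpha * d) for v, d in dvh)

def oed_plateau(dvh, delta):
    """Risk saturates (plateaus) at high dose."""
    return sum(v * (1.0 - math.exp(-delta * d)) / delta for v, d in dvh)

# Hypothetical 3-bin differential DVH (volume fractions sum to 1)
dvh = [(0.5, 1.0), (0.3, 5.0), (0.2, 20.0)]
lin = oed_linear(dvh)
linexp = oed_linear_exponential(dvh, alpha=0.085)  # assumed organ parameter
plat = oed_plateau(dvh, delta=0.139)               # assumed organ parameter
```

    For the same dose distribution, the linear-exponential and plateau models discount the high-dose bins relative to the linear model, which is why the three models rank the treatment techniques differently.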

  17. Chaotic examination

    NASA Astrophysics Data System (ADS)

    Bildirici, Melike; Sonustun, Fulya Ozaksoy; Sonustun, Bahri

    2018-01-01

    In the context of chaos theory, concepts such as complexity, determinism, quantum mechanics, relativity, multiple equilibria, instability, nonlinearity, heterogeneous agents, and irregularity have been widely questioned in economics. It has been noticed that linear models are insufficient for analyzing the unpredictable, irregular and noncyclical oscillations of economies, and for predicting bubbles, financial crises, and business cycles in financial markets. Therefore, economists attach great importance to using appropriate tools for modelling the non-linear dynamical structures and chaotic behaviors of economies, especially in macroeconomics and financial economics. In this paper, we aim to model the chaotic structure of exchange rates (USD-TL and EUR-TL). To determine non-linear patterns in the selected time series, daily returns of the exchange rates were tested with the BDS test over the period from January 01, 2002 to May 11, 2017, which covers the era after the 2001 financial crisis. After specifying the non-linear structure of the selected time series, the chaotic characteristics for the selected time period were examined via Lyapunov exponents. The findings verify the existence of a chaotic structure in the exchange rate returns over the analyzed time period.
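
    The largest Lyapunov exponent, the diagnostic used above, can be estimated for a known one-dimensional map by averaging the log of the local stretching rate along an orbit. A minimal sketch for the logistic map, illustrating the method rather than the exchange rate analysis:

```python
import math

def logistic_lyapunov(r, x0=0.4, n_transient=200, n_iter=5000):
    """Estimate the largest Lyapunov exponent of x -> r*x*(1-x)
    by averaging log|f'(x)| = log|r*(1-2x)| along the orbit."""
    x = x0
    for _ in range(n_transient):       # discard transient behavior
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_iter):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n_iter

lam = logistic_lyapunov(4.0)   # fully chaotic regime
```

    For r = 4 the exact value is ln 2 ≈ 0.693; a positive exponent signals sensitive dependence on initial conditions, whereas stable regimes (e.g. r = 2.5) give a negative exponent.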

  18. Predicting the stage shift as a result of breast cancer screening in low- and middle-income countries: a proof of concept.

    PubMed

    Zelle, Sten G; Baltussen, Rob; Otten, Johannes D M; Heijnsdijk, Eveline A M; van Schoor, Guido; Broeders, Mireille J M

    2015-03-01

    To provide proof of concept for a simple model to estimate the stage shift as a result of breast cancer screening in low- and middle-income countries (LMICs). Stage shift is an essential early detection indicator and an important proxy for the performance and possible further impact of screening programmes. Our model could help LMICs to choose appropriate control strategies. We assessed our model concept in three steps. First, we calculated the proportional performance rates (i.e. index number Z) based on 16 screening rounds of the Nijmegen Screening Program (384,884 screened women). Second, we used linear regression to assess the association between Z and the amount of stage shift observed in the programme. Third, we hypothesized how Z could be used to estimate the stage shift as a result of breast cancer screening in LMICs. Stage shifts can be estimated from the proportional performance rates (Zs) using linear regression. Zs calculated for each screening round are highly associated with the observed stage shifts in the Nijmegen Screening Program (Pearson's R: 0.798, R square: 0.637). Our model can predict the stage shifts in the Nijmegen Screening Program and could be applied to settings with different characteristics, although it should not be used straightforwardly to estimate the impact on mortality. Further research should investigate the extrapolation of our model to other settings. As stage shift is an essential screening performance indicator, our model could provide important information on the performance of breast cancer screening programmes that LMICs consider implementing. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
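
    The core of the model, regressing observed stage shift on the proportional performance rate Z by ordinary least squares, can be sketched as follows. The (Z, shift) pairs below are invented for illustration and are not the Nijmegen data:

```python
def linear_fit(xs, ys):
    """Ordinary least squares y = a + b*x, plus Pearson correlation r."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    b = sxy / sxx                   # slope
    a = my - b * mx                 # intercept
    r = sxy / (sxx * syy) ** 0.5    # Pearson correlation
    return a, b, r

# Hypothetical (Z, observed stage shift) pairs, one per screening round
z = [0.8, 1.0, 1.2, 1.5, 1.7]
shift = [0.10, 0.14, 0.15, 0.21, 0.22]
a, b, r = linear_fit(z, shift)
predicted = a + b * 1.3   # estimated stage shift for a new setting with Z = 1.3
```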

  19. Application of linear logic to simulation

    NASA Astrophysics Data System (ADS)

    Clarke, Thomas L.

    1998-08-01

    Linear logic, since its introduction by Girard in 1987, has proven expressive and powerful. It has provided natural encodings of Turing machines, Petri nets and other computational models, and it is also capable of naturally modeling resource-dependent aspects of reasoning. The distinguishing characteristic of linear logic is that it accounts for resources: two instances of the same variable are treated differently from a single instance. Linear logic thus must obey a form of the linear superposition principle. A proposition can be reasoned with only once, unless a special operator is applied. Informally, linear logic distinguishes two kinds of conjunction and two kinds of disjunction, and it also introduces a modal storage operator that explicitly marks propositions that can be reused. This paper discusses the application of linear logic to simulation. A wide variety of logics have been developed; in addition to classical logic, there are fuzzy logics, affine logics, quantum logics, etc., and all of these have found application in simulations of one sort or another. The special characteristics of linear logic and its benefits for simulation are discussed. Of particular interest is a connection that can be made between linear logic and simulated dynamics by using the concepts of Lie algebras and Lie groups. Lie groups provide the connection between the exponential modal storage operators of linear logic and the eigenfunctions of dynamic differential operators. Particularly suggestive are possible relations between complexity results for linear logic and non-computability results for dynamical systems.

  20. A Method for Using Adjacency Matrices to Analyze the Connections Students Make within and between Concepts: The Case of Linear Algebra

    ERIC Educational Resources Information Center

    Selinski, Natalie E.; Rasmussen, Chris; Wawro, Megan; Zandieh, Michelle

    2014-01-01

    The central goals of most introductory linear algebra courses are to develop students' proficiency with matrix techniques, to promote their understanding of key concepts, and to increase their ability to make connections between concepts. In this article, we present an innovative method using adjacency matrices to analyze students' interpretation…
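
    The adjacency matrix method rests on a standard observation: if A records the direct connections students make between concepts, Boolean powers of A reveal indirect, mediated connections. A minimal sketch with four hypothetical linear algebra concepts (the matrix is invented, not the authors' data):

```python
def mat_bool_mult(a, b):
    """Boolean matrix product: result[i][j] is True if some k links i to k and k to j."""
    n = len(a)
    return [[any(a[i][k] and b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Hypothetical adjacency matrix over four concepts:
# 0: span, 1: linear independence, 2: basis, 3: dimension.
# A[i][j] = True if a student connected concept i directly to concept j.
A = [
    [False, True,  False, False],
    [False, False, True,  False],
    [False, False, False, True ],
    [False, False, False, False],
]
two_step = mat_bool_mult(A, A)   # connections mediated by one intermediate concept
```

    Here `two_step[0][2]` is True because span is connected to basis via linear independence; higher powers would expose longer chains of reasoning.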

  1. An algorithm for control system design via parameter optimization. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Sinha, P. K.

    1972-01-01

    An algorithm for design via parameter optimization has been developed for linear time-invariant control systems based on the model reference adaptive control concept. A cost functional is defined to evaluate the system response relative to nominal; in general it involves the error between the system and nominal responses, its derivatives, and the control signals. A program for the practical implementation of this algorithm has been developed, with the computational scheme for the evaluation of the performance index based on Lyapunov's theorem for the stability of linear time-invariant systems.

  2. GARLIC: GAmma Reconstruction at a LInear Collider experiment

    NASA Astrophysics Data System (ADS)

    Jeans, D.; Brient, J.-C.; Reinhard, M.

    2012-06-01

    The precise measurement of hadronic jet energy is crucial to maximise the physics reach of a future Linear Collider. An important ingredient required to achieve this is the efficient identification of photons within hadronic showers. One configuration of the ILD detector concept employs a highly granular silicon-tungsten sampling calorimeter to identify and measure photons, and the GARLIC algorithm described in this paper has been developed to identify photons in such a calorimeter. We describe the algorithm and characterise its performance using events fully simulated in a model of the ILD detector.

  3. Detectors for Linear Colliders: Detector design for a Future Electron-Positron Collider (4/4)

    ScienceCinema

    Thomson, Mark

    2018-05-21

    In this lecture I will discuss the issues related to the overall design and optimization of a detector for ILC and CLIC energies. I will concentrate on the two main detector concepts being developed in the context of the ILC, where there has been much recent progress in developing realistic detector models and in understanding the physics performance of the overall detector concept. In addition, I will discuss how the differences in the detector requirements for the ILC and CLIC impact the overall detector design.

  4. A degree of controllability definition - Fundamental concepts and application to modal systems

    NASA Technical Reports Server (NTRS)

    Viswanathan, C. N.; Longman, R. W.; Likins, P. W.

    1984-01-01

    Starting from basic physical considerations, this paper develops a concept of the degree of controllability of a control system, and then develops numerical methods to generate approximate values of the degree of controllability for any linear time-invariant system. In many problems, such as the control of future, very large, flexible spacecraft and certain chemical process control problems, the question of how to choose the number and locations of the control system actuators is an important one. The results obtained here offer the control system designer a tool which allows him to rank the effectiveness of alternative actuator distributions, and hence to choose the actuator locations on a rational basis. The degree of controllability is shown to take a particularly simple form when the dynamic equations of a satellite are in second-order modal form. The degree of controllability concept has still other fundamental uses - it allows one to study the system structural relations between the various inputs and outputs of a linear system, which has applications to decoupling and model reduction.
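
    One common way to turn controllability into a single number, in the spirit of a degree of controllability, is the smallest eigenvalue of the controllability Gramian. This is a standard proxy, not necessarily the exact measure developed in the paper; the two-state system below is invented for illustration:

```python
import numpy as np

def controllability_gramian(A, B, n_terms=50):
    """Finite-horizon discrete-time controllability Gramian
    W = sum_k A^k B B^T (A^T)^k. Its smallest eigenvalue indicates how
    hard the least-controllable direction of the state space is to reach."""
    n = A.shape[0]
    W = np.zeros((n, n))
    Ak = np.eye(n)
    for _ in range(n_terms):
        W += Ak @ B @ B.T @ Ak.T
        Ak = Ak @ A
    return W

# Hypothetical stable two-state system with two candidate actuator placements
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B1 = np.array([[1.0], [0.0]])   # actuator excites the first state only
B2 = np.array([[1.0], [1.0]])   # actuator influences both states
w1 = np.linalg.eigvalsh(controllability_gramian(A, B1)).min()
w2 = np.linalg.eigvalsh(controllability_gramian(A, B2)).min()
```

    A placement that leaves a mode unexcited yields a singular Gramian (w1 ≈ 0), whereas one reaching both states yields a strictly positive minimum eigenvalue, so alternative actuator distributions can be ranked on a rational basis.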

  5. A geomorphic process law for detachment-limited hillslopes

    NASA Astrophysics Data System (ADS)

    Turowski, Jens

    2015-04-01

    Geomorphic process laws are used to assess the shape evolution of structures at the Earth's surface over geological time scales, and are routinely used in landscape evolution models. There are currently two available concepts on which process laws for hillslope evolution rely. In the transport-limited concept, the evolution of a hillslope is described by a linear or a non-linear diffusion equation. In contrast, in the threshold slope concept, the hillslope is assumed to collapse to a slope equal to the internal friction angle of the material when the load due to the relief exceeds the material strength. Many mountains feature bedrock slopes, especially in the high mountains, where material transport along the slope is limited by the erosion of the material from the bedrock. Here, I suggest a process law for detachment-limited or threshold-dominated hillslopes, in which the erosion rate is a function of the applied stress minus the surface stress due to structural loading. The process law leads to the prediction of an equilibrium form that compares well to the shape of many mountain domes.

  6. Time Advice and Learning Questions in Computer Simulations

    ERIC Educational Resources Information Center

    Rey, Gunter Daniel

    2011-01-01

    Students (N = 101) used an introductory text and a computer simulation to learn fundamental concepts about statistical analyses (e.g., analysis of variance, regression analysis and General Linear Model). Each learner was randomly assigned to one cell of a 2 (with or without time advice) x 3 (with learning questions and corrective feedback, with…

  7. Factor Scores, Structure and Communality Coefficients: A Primer

    ERIC Educational Resources Information Center

    Odum, Mary

    2011-01-01

    (Purpose) The purpose of this paper is to present an easy-to-understand primer on three important concepts of factor analysis: Factor scores, structure coefficients, and communality coefficients. Given that statistical analyses are a part of a global general linear model (GLM), and utilize weights as an integral part of analyses (Thompson, 2006;…

  8. Bicycles, Birds, Bats and Balloons: New Applications for Algebra Classes.

    ERIC Educational Resources Information Center

    Yoshiwara, Bruce; Yoshiwara, Kathy

    This collection of activities is intended to enhance the teaching of college algebra through the use of modeling. The problems use real data and involve the representation and interpretation of the data. The concepts addressed include rates of change, linear and quadratic regression, and functions. The collection consists of eight problems, four…

  9. Modelling Problem-Solving Situations into Number Theory Tasks: The Route towards Generalisation

    ERIC Educational Resources Information Center

    Papadopoulos, Ioannis; Iatridou, Maria

    2010-01-01

    This paper examines the way two 10th graders cope with a non-standard generalisation problem that involves elementary concepts of number theory (more specifically linear Diophantine equations) in the geometrical context of a rectangle's area. Emphasis is given on how the students' past experience of problem solving (expressed through interplay…

  10. Preliminary Analysis of an Oscillating Surge Wave Energy Converter with Controlled Geometry: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tom, Nathan; Lawson, Michael; Yu, Yi-Hsiang

    The aim of this paper is to present a novel wave energy converter device concept that is being developed at the National Renewable Energy Laboratory. The proposed concept combines an oscillating surge wave energy converter with active control surfaces. These active control surfaces allow the device geometry to be altered, which leads to changes in the hydrodynamic properties. The device geometry will be controlled on a sea state time scale and combined with wave-to-wave power-take-off control to maximize power capture, increase capacity factor, and reduce design loads. The paper begins with a traditional linear frequency domain analysis of the device performance. Performance sensitivity to foil pitch angle, the number of activated foils, and foil cross section geometry is presented to illustrate the current design decisions; however, it is understood from previous studies that modeling of current oscillating wave energy converter designs requires the consideration of nonlinear hydrodynamics and viscous drag forces. In response, a nonlinear model is presented that highlights the shortcomings of the linear frequency domain analysis and increases the precision in predicted performance.

  11. Interrelation of creep and relaxation: a modeling approach for ligaments.

    PubMed

    Lakes, R S; Vanderby, R

    1999-12-01

    Experimental data (Thornton et al., 1997) show that relaxation proceeds more rapidly (a greater slope on a log-log scale) than creep in ligament, a fact not explained by linear viscoelasticity. An interrelation between creep and relaxation is therefore developed for ligaments based on a single-integral nonlinear superposition model. This interrelation differs from the convolution relation obtained by Laplace transforms for linear materials. We demonstrate via continuum concepts of nonlinear viscoelasticity that such a difference in rate between creep and relaxation phenomenologically occurs when the nonlinearity is of a strain-stiffening type, i.e., the stress-strain curve is concave up, as observed in ligament. We also show that it is inconsistent to assume a Fung-type constitutive law (Fung, 1972) for both creep and relaxation. Using the published data of Thornton et al. (1997), the nonlinear interrelation developed herein predicts creep behavior from relaxation data well (R ≥ 0.998). Although data are limited and the causal mechanisms associated with viscoelastic tissue behavior are complex, the continuum concepts demonstrated here appear capable of interrelating creep and relaxation with fidelity.

  12. A nonlinear model for gas chromatograph systems

    NASA Technical Reports Server (NTRS)

    Feinberg, M. P.

    1975-01-01

    Fundamental engineering design techniques and concepts were studied for the optimization of a gas chromatograph-mass spectrometer chemical analysis system suitable for use on an unmanned Martian roving vehicle. Previously developed mathematical models of the gas chromatograph are found to be inadequate for predicting peak heights and spreading under some experimental conditions and for some chemical systems. A modification to the existing equilibrium adsorption model is required: the Langmuir isotherm replaces the linear isotherm. The numerical technique of Crank-Nicolson was studied for use with the linear isotherm to determine the utility of the method. Modifications are made to the method to eliminate unnecessary calculations, resulting in an overall reduction of the computation time of about 42 percent. The Langmuir isotherm is then considered, which takes into account the composition-dependent effects on the thermodynamic parameter, mRo.
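
    The difference between the two adsorption models is easy to state in code: the linear isotherm grows without bound with concentration, while the Langmuir isotherm saturates at a monolayer capacity. A minimal sketch with invented parameter values:

```python
def linear_isotherm(c, k):
    """Adsorbed amount proportional to gas-phase concentration."""
    return k * c

def langmuir_isotherm(c, q_max, b):
    """Adsorbed amount saturates at capacity q_max as concentration grows."""
    return q_max * b * c / (1.0 + b * c)

# Hypothetical parameters chosen so the low-concentration slopes match
# (the Langmuir initial slope is q_max * b).
k = 2.0
q_max, b = 4.0, 0.5
low = (linear_isotherm(0.01, k), langmuir_isotherm(0.01, q_max, b))
high = (linear_isotherm(100.0, k), langmuir_isotherm(100.0, q_max, b))
```

    The two models agree at low concentration, which is why the linear model works for dilute samples but fails to predict peak heights once the column loading approaches saturation.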

  13. ISPAN (Interactive Stiffened Panel Analysis): A tool for quick concept evaluation and design trade studies

    NASA Technical Reports Server (NTRS)

    Hairr, John W.; Dorris, William J.; Ingram, J. Edward; Shah, Bharat M.

    1993-01-01

    Interactive Stiffened Panel Analysis (ISPAN) modules, written in FORTRAN, were developed to provide an easy-to-use tool for creating finite element models of composite material stiffened panels. The modules allow the user to interactively construct, solve and post-process finite element models of four general types of structural panel configurations using only the panel dimensions and properties as input data. Linear, buckling and post-buckling solution capability is provided. This interactive input allows rapid model generation and solution by users without finite element expertise. The results of a parametric study of a blade-stiffened panel are presented to demonstrate the usefulness of the ISPAN modules. In addition, a non-linear analysis of a test panel was conducted and the results compared to measured data and a previous correlation analysis.

  14. Modelling Dominance Hierarchies Under Winner and Loser Effects.

    PubMed

    Kura, Klodeta; Broom, Mark; Kandler, Anne

    2015-06-01

    Animals that live in groups commonly form themselves into dominance hierarchies which are used to allocate important resources such as access to mating opportunities and food. In this paper, we develop a model of dominance hierarchy formation based upon the concept of winner and loser effects using a simulation-based model and consider the linearity of our hierarchy using existing and new statistical measures. Two models are analysed: when each individual in a group does not know the real ability of their opponents to win a fight and when they can estimate their opponents' ability every time they fight. This estimation may be accurate or fall within an error bound. For both models, we investigate if we can achieve hierarchy linearity, and if so, when it is established. We are particularly interested in the question of how many fights are necessary to establish a dominance hierarchy.
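
    The winner and loser effect mechanism can be sketched as a simple simulation: each individual carries a resource-holding potential (RHP) that is multiplied up after a win and down after a loss, and linearity is checked by counting intransitive triads in the resulting dominance relations. This is an illustrative toy version under assumed update rules, not the authors' model:

```python
import random

def simulate_hierarchy(n=6, rounds=2000, effect=0.1, seed=1):
    """Pairwise contests with winner and loser effects: each win raises an
    individual's resource-holding potential (RHP), each loss lowers it."""
    rng = random.Random(seed)
    rhp = [1.0] * n
    wins = [[0] * n for _ in range(n)]   # wins[i][j]: times i beat j
    for _ in range(rounds):
        i, j = rng.sample(range(n), 2)
        p_i = rhp[i] / (rhp[i] + rhp[j])  # win probability from relative RHP
        winner, loser = (i, j) if rng.random() < p_i else (j, i)
        wins[winner][loser] += 1
        rhp[winner] *= 1.0 + effect       # winner effect
        rhp[loser] *= 1.0 - effect        # loser effect
    return wins, rhp

def linearity_violations(wins):
    """Count intransitive triads (i beats j, j beats k, k beats i) among
    dominance relations decided by majority of wins; 0 means a linear order."""
    n = len(wins)
    dom = [[wins[i][j] > wins[j][i] for j in range(n)] for i in range(n)]
    triads = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if dom[i][j] and dom[j][k] and dom[k][i]:
                    triads += 1
    return triads // 3   # each cycle is counted three times

wins, rhp = simulate_hierarchy()
cycles = linearity_violations(wins)
```

    With strong winner and loser effects, the RHP values diverge quickly and the triad count tends toward zero, i.e. a near-linear hierarchy; weakening `effect` makes intransitive triads more likely.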

  15. Non-linear controls influence functions in an aircraft dynamics simulator

    NASA Technical Reports Server (NTRS)

    Guerreiro, Nelson M.; Hubbard, James E., Jr.; Motter, Mark A.

    2006-01-01

    In the development and testing of novel structural and controls concepts, such as morphing aircraft wings, appropriate models are needed for proper system characterization. In most instances, available system models do not provide the required additional degrees of freedom for morphing structures but may be modified to some extent to achieve a compatible system. The objective of this study is to apply wind tunnel data collected for an Unmanned Air Vehicle (UAV) that implements trailing edge morphing to create a non-linear dynamics simulator, using well-defined rigid-body equations of motion, in which the aircraft stability derivatives change with control deflection. An analysis of this wind tunnel data, using data extraction algorithms, was performed to determine the reference aerodynamic force and moment coefficients for the aircraft. Further, non-linear influence functions were obtained for each of the aircraft's control surfaces, including the sixteen trailing edge flap segments. These non-linear controls influence functions are applied to the aircraft dynamics to produce deflection-dependent aircraft stability derivatives in a non-linear dynamics simulator. Time-domain analysis of the aircraft motion, trajectory, and state histories can be performed using these non-linear dynamics and may be visualized using a 3-dimensional aircraft model. Linear system models can be extracted to facilitate frequency-domain analysis of the system and for control law development. The results of this study are useful in similar projects where trailing edge morphing is employed and will be instrumental in the University of Maryland's continuing study of active wing load control.

  16. Nonlinear wave chaos: statistics of second harmonic fields.

    PubMed

    Zhou, Min; Ott, Edward; Antonsen, Thomas M; Anlage, Steven M

    2017-10-01

    Concepts from the field of wave chaos have been shown to successfully predict the statistical properties of linear electromagnetic fields in electrically large enclosures. The Random Coupling Model (RCM) describes these properties by incorporating both universal features described by Random Matrix Theory and the system-specific features of particular system realizations. In an effort to extend this approach to the nonlinear domain, we add an active nonlinear frequency-doubling circuit to an otherwise linear wave chaotic system, and we measure the statistical properties of the resulting second harmonic fields. We develop an RCM-based model of this system as two linear chaotic cavities coupled by means of a nonlinear transfer function. The harmonic field strengths are predicted to be the product of two statistical quantities and the nonlinearity characteristics. Statistical results from measurement-based calculation, RCM-based simulation, and direct experimental measurements are compared and show good agreement over many decades of power.

  17. Some Issues about the Introduction of First Concepts in Linear Algebra during Tutorial Sessions at the Beginning of University

    ERIC Educational Resources Information Center

    Grenier-Boley, Nicolas

    2014-01-01

    Certain mathematical concepts were not introduced to solve a specific open problem but rather to solve different problems with the same tools in an economic formal way or to unify several approaches: such concepts, as some of those of linear algebra, are presumably difficult to introduce to students as they are potentially interwoven with many…

  18. Currency arbitrage detection using a binary integer programming model

    NASA Astrophysics Data System (ADS)

    Soon, Wanmei; Ye, Heng-Qing

    2011-04-01

    In this article, we examine the use of a new binary integer programming (BIP) model to detect arbitrage opportunities in currency exchanges. This model showcases an excellent application of mathematics to the real world. The concepts involved are easily accessible to undergraduate students with basic knowledge in Operations Research. Through this work, students can learn to link several types of basic optimization models, namely linear programming, integer programming and network models, and apply the well-known sensitivity analysis procedure to accommodate realistic changes in the exchange rates. Beginning with a BIP model, we discuss how it can be reduced to an equivalent but considerably simpler model, where an efficient algorithm can be applied to find the arbitrages and incorporate the sensitivity analysis procedure. A simple comparison is then made with a different arbitrage detection model. This exercise helps students learn to apply basic Operations Research concepts to a practical real-life example, and provides insights into the processes involved in Operations Research model formulations.
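
A minimal sketch of the classic graph-based baseline for this problem (not the paper's BIP formulation): giving each exchange an edge weight of -log(rate), an arbitrage cycle (rate product greater than 1) becomes a negative-weight cycle, which Bellman-Ford detects. The currencies and rates below are made up for illustration.

```python
import math

def find_arbitrage(edges):
    # edges: dict {(src, dst): rate}. An arbitrage exists iff some cycle has
    # a rate product > 1, i.e. a negative cycle under weights -log(rate).
    nodes = {c for pair in edges for c in pair}
    w = {pair: -math.log(rate) for pair, rate in edges.items()}
    dist = {c: 0.0 for c in nodes}  # zero everywhere: detects cycles anywhere
    for _ in range(len(nodes) - 1):
        for (u, v), weight in w.items():
            if dist[u] + weight < dist[v]:
                dist[v] = dist[u] + weight
    # One extra pass: any further relaxation implies a negative cycle.
    return any(dist[u] + weight < dist[v] - 1e-12
               for (u, v), weight in w.items())

# Rate triple with product 0.9 * 0.9 * 1.3 = 1.053 > 1: arbitrage exists.
rates = {("USD", "EUR"): 0.9, ("EUR", "GBP"): 0.9, ("GBP", "USD"): 1.3}
```

The BIP model of the article and its sensitivity analysis go further than this detection step, but the log transform is the same link between products of rates and linear objective terms.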

  19. SEACAS Theory Manuals: Part 1. Problem Formulation in Nonlinear Solid Mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Attaway, S.W.; Laursen, T.A.; Zadoks, R.I.

    1998-08-01

    This report gives an introduction to the basic concepts and principles involved in the formulation of nonlinear problems in solid mechanics. By way of motivation, the discussion begins with a survey of some of the important sources of nonlinearity in solid mechanics applications, using wherever possible simple one dimensional idealizations to demonstrate the physical concepts. This discussion is then generalized by presenting generic statements of initial/boundary value problems in solid mechanics, using linear elasticity as a template and encompassing such ideas as strong and weak forms of boundary value problems, boundary and initial conditions, and dynamic and quasistatic idealizations. The notational framework used for the linearized problem is then extended to account for finite deformation of possibly inelastic solids, providing the context for the descriptions of nonlinear continuum mechanics, constitutive modeling, and finite element technology given in three companion reports.

  20. A general U-block model-based design procedure for nonlinear polynomial control systems

    NASA Astrophysics Data System (ADS)

    Zhu, Q. M.; Zhao, D. Y.; Zhang, Jianhua

    2016-10-01

    The proposition of the U-model concept (in terms of 'providing concise and applicable solutions for complex problems') and a corresponding basic U-control design algorithm originated in the first author's PhD thesis. The term U-model appeared (not rigorously defined) for the first time in another journal paper by the first author, which established a framework for using linear polynomial control system design approaches to design nonlinear polynomial control systems (in brief, linear polynomial approaches → nonlinear polynomial plants). This paper represents the next milestone: using linear state-space approaches to design nonlinear polynomial control systems (in brief, linear state-space approaches → nonlinear polynomial plants). The overall aim of the study is to establish a framework, defined as the U-block model, which provides a generic prototype for using linear state-space-based approaches to design control systems for smooth nonlinear plants/processes described by polynomial models. To analyse feasibility and effectiveness, sliding mode control design is selected as an exemplary case study. Numerical simulation studies provide a user-friendly step-by-step procedure for readers/users interested in their ad hoc applications. In formality, this is the first paper to present the U-model-oriented control system design in a formal way and to study the associated properties and theorems. The previous publications, in the main, have been algorithm-based studies and simulation demonstrations. In some sense, this paper can be treated as a landmark for U-model-based research, moving from the intuitive/heuristic stage to rigorous/formal/comprehensive studies.

  1. Advanced Nacelle Acoustic Lining Concepts Development

    NASA Technical Reports Server (NTRS)

    Bielak, G.; Gallman, J.; Kunze, R.; Murray, P.; Premo, J.; Kosanchick, M.; Hersh, A.; Celano, J.; Walker, B.; Yu, J.; hide

    2002-01-01

    The work reported in this document consisted of six distinct liner technology development subtasks: 1) Analysis of Model Scale ADP Fan Duct Lining Data (Boeing): An evaluation of an AST Milestone experiment to demonstrate 1995 liner technology superiority relative to that of 1992 was performed on 1:5.9 scale model fan rig (Advanced Ducted Propeller) test data acquired in the NASA Glenn 9 x 15 foot wind tunnel. The goal of 50% improvement was deemed satisfied. 2) Bias Flow Liner Investigation (Boeing, VCES): The ability to control liner impedance by low velocity bias flow through liner was demonstrated. An impedance prediction model to include bias flow was developed. 3) Grazing Flow Impedance Testing (Boeing): Grazing flow impedance tests were conducted for comparison with results achieved at four different laboratories. 4) Micro-Perforate Acoustic Liner Technology (BFG, HAE, NG): Proof of concept testing of a "linear liner." 5) Extended Reaction Liners (Boeing, NG): Bandwidth improvements for non-locally reacting liner were investigated with porous honeycomb core test liners. 6) Development of a Hybrid Active/Passive Lining Concept (HAE): Synergism between active and passive attenuation of noise radiated by a model inlet was demonstrated.

  2. Design and analysis of an unconventional permanent magnet linear machine for energy harvesting

    NASA Astrophysics Data System (ADS)

    Zeng, Peng

    This Ph.D. dissertation proposes an unconventional high-power-density linear electromagnetic kinetic energy harvester, and a high-performance two-stage interface power electronics to maintain maximum power extraction from the energy source and charge the Li-ion battery load with constant current. The proposed machine architecture is composed of a double-sided flat-type silicon steel stator with winding slots, a permanent magnet mover, coil windings, a linear motion guide and an adjustable spring bearing. The unconventional aspect of the design is that the NdFeB magnet bars in the mover are placed with their magnetic fields in the horizontal direction instead of the vertical direction, with like magnetic poles facing each other. The derived magnetic equivalent circuit model proves the average air-gap flux density of the novel topology is as high as 0.73 T, a 17.7% improvement over that of the conventional topology at the given geometric dimensions of the proof-of-concept machine. Subsequently, improved output voltage and power are achieved. The dynamic model of the linear generator is also developed, and analytical equations for the maximum output power are derived for driving vibrations with amplitude equal to, smaller than, and larger than the relative displacement between the mover and the stator of the machine, respectively. Furthermore, a finite element analysis (FEA) model has been simulated to confirm the derived analytical results and the improved power generation capability. Also, an optimization framework is explored to extend the design to multi-degree-of-freedom (n-DOF) vibration-based linear energy harvesting devices. Moreover, a boost-buck cascaded switch-mode converter with a current controller is designed to extract the maximum power from the harvester and charge the Li-ion battery with trickle current. Meanwhile, a maximum power point tracking (MPPT) algorithm is proposed and optimized for low-frequency driving vibrations. 
Finally, a proof-of-concept unconventional permanent magnet (PM) linear generator is prototyped and tested to verify the simulation results of the FEA model. For coil windings of 33, 66 and 165 turns, the machine delivers output power of 65.6 mW, 189.1 mW, and 497.7 mW respectively, with a maximum power density of 2.486 mW/cm3.

  3. A methodology for design of a linear referencing system for surface transportation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vonderohe, A.; Hepworth, T.

    1997-06-01

    The transportation community has recently placed significant emphasis on development of data models, procedural standards, and policies for management of linearly-referenced data. There is an Intelligent Transportation Systems initiative underway to create a spatial datum for location referencing in one, two, and three dimensions. Most recently, a call was made for development of a unified linear reference system to support public, private, and military surface transportation needs. A methodology for design of the linear referencing system was developed from geodetic engineering principles and techniques used for designing geodetic control networks. The method is founded upon the law of propagation of random error and the statistical analysis of systems of redundant measurements, used to produce best estimates for unknown parameters. A complete mathematical development is provided. Example adjustments of linear distance measurement systems are included. The classical orders of design are discussed with regard to the linear referencing system. A simple design example is provided. A linear referencing system designed and analyzed with this method will not only be assured of meeting the accuracy requirements of users, it will have the potential for supporting delivery of error estimates along with the results of spatial analytical queries. Modeling considerations, alternative measurement methods, implementation strategies, maintenance issues, and further research needs are discussed. Recommendations are made for further advancement of the unified linear referencing system concept.

  4. Development of Ensemble Model Based Water Demand Forecasting Model

    NASA Astrophysics Data System (ADS)

    Kwon, Hyun-Han; So, Byung-Jin; Kim, Seong-Hyeon; Kim, Byung-Seop

    2014-05-01

    The Smart Water Grid (SWG) concept has emerged globally over the last decade and has gained significant recognition in South Korea. In particular, there has been growing interest in water demand forecasting and optimal pump operation, which has led to various studies regarding energy saving and improvement of water supply reliability. Existing water demand forecasting models fall into two groups in terms of how they model and predict time-series behaviour. One considers embedded patterns such as seasonality, periodicity and trends, and the other is the autoregressive model using short-memory Markovian processes (Emmanuel et al., 2012). The main disadvantage of the above models is that predictability of water demand at the sub-daily scale is limited because the system is nonlinear. In this regard, this study aims to develop a nonlinear ensemble model for hourly water demand forecasting which allows us to estimate uncertainties across different model classes. The proposed model consists of two parts. One is a multi-model scheme based on a combination of independent prediction models. The other is a cross-validation scheme, the Bagging approach introduced by Breiman (1996), used to derive weighting factors for the individual models. The individual forecasting models used in this study are linear regression, polynomial regression, multivariate adaptive regression splines (MARS), and support vector machines (SVM). The concepts are demonstrated through application to data observed at water plants at several locations in South Korea. Keywords: water demand, non-linear model, the ensemble forecasting model, uncertainty. Acknowledgements This subject is supported by Korea Ministry of Environment as "Projects for Developing Eco-Innovation Technologies (GT-11-G-02-001-6)"
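
A minimal sketch of the multi-model combination step, under the simplifying assumption that each model's weight is inversely proportional to its cross-validation error (the paper's Bagging scheme derives weights differently); all forecasts and error values below are invented for illustration:

```python
def ensemble_weights(validation_errors):
    # Weight each model inversely to its cross-validation error: a simple
    # stand-in for Bagging-derived weighting factors.
    inv = [1.0 / e for e in validation_errors]
    total = sum(inv)
    return [v / total for v in inv]

def ensemble_forecast(predictions, weights):
    # Weighted combination of the individual models' forecasts.
    return sum(p * w for p, w in zip(predictions, weights))

# Hypothetical hourly demand forecasts (m^3/h) from the four individual
# models (linear regression, polynomial regression, MARS, SVM), together
# with made-up validation errors:
preds = [102.0, 98.0, 105.0, 100.0]
errors = [4.0, 2.0, 8.0, 2.0]
w = ensemble_weights(errors)
forecast = ensemble_forecast(preds, w)
```

The combined forecast always lies within the range of the individual forecasts, and models with smaller validation error contribute more.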

  5. The Effect of Using Concept Maps in Elementary Linear Algebra Course on Students’ Learning

    NASA Astrophysics Data System (ADS)

    Syarifuddin, H.

    2018-04-01

    This paper presents the results of a classroom action research study conducted in the Elementary Linear Algebra course at Universitas Negeri Padang. The research focused on the effect of using concept maps in the course on students’ learning. Data in this study were collected through classroom observation, students’ reflective journals and concept maps created by the students. The results showed that the use of concept maps in the Elementary Linear Algebra course had a positive effect on students’ learning.

  6. Heuristics for Understanding the Concepts of Interaction, Polynomial Trend, and the General Linear Model.

    ERIC Educational Resources Information Center

    Thompson, Bruce

    The relationship between analysis of variance (ANOVA) methods and their analogs (analysis of covariance and multiple analyses of variance and covariance--collectively referred to as OVA methods) and the more general analytic case is explored. A small heuristic data set is used, with a hypothetical sample of 20 subjects, randomly assigned to five…

  7. Instructional Advice, Time Advice and Learning Questions in Computer Simulations

    ERIC Educational Resources Information Center

    Rey, Gunter Daniel

    2010-01-01

    Undergraduate students (N = 97) used an introductory text and a computer simulation to learn fundamental concepts about statistical analyses (e.g., analysis of variance, regression analysis and General Linear Model). Each learner was randomly assigned to one cell of a 2 (with or without instructional advice) x 2 (with or without time advice) x 2…

  8. Direction-Dependence Analysis: A Confirmatory Approach for Testing Directional Theories

    ERIC Educational Resources Information Center

    Wiedermann, Wolfgang; von Eye, Alexander

    2015-01-01

    The concept of direction dependence has attracted growing attention due to its potential to help decide which of two competing linear regression models (X → Y or Y → X) is more likely to reflect the correct causal flow. Several tests have been proposed to evaluate hypotheses compatible with direction dependence. In this issue, Thoemmes (2015)…

  9. Teaching Tool for a Control Systems Laboratory Using a Quadrotor as a Plant in MATLAB

    ERIC Educational Resources Information Center

    Khan, Subhan; Jaffery, Mujtaba Hussain; Hanif, Athar; Asif, Muhammad Rizwan

    2017-01-01

    This paper presents a MATLAB-based application to teach the guidance, navigation, and control concepts of a quadrotor to undergraduate students, using a graphical user interface (GUI) and 3-D animations. The Simulink quadrotor model is controlled by a proportional integral derivative controller and a linear quadratic regulator controller. The GUI…

  10. HLM in Cluster-Randomised Trials--Measuring Efficacy across Diverse Populations of Learners

    ERIC Educational Resources Information Center

    Hegedus, Stephen; Tapper, John; Dalton, Sara; Sloane, Finbarr

    2013-01-01

    We describe the application of Hierarchical Linear Modelling (HLM) in a cluster-randomised study to examine learning algebraic concepts and procedures in an innovative, technology-rich environment in the US. HLM is applied to measure the impact of such treatment on learning and on contextual variables. We provide a detailed description of such…

  11. Hybrid Deterministic Views about Genes in Biology Textbooks: A Key Problem in Genetics Teaching

    ERIC Educational Resources Information Center

    dos Santos, Vanessa Carvalho; Joaquim, Leyla Mariane; El-Hani, Charbel Nino

    2012-01-01

    A major source of difficulties in promoting students' understanding of genetics lies in the presentation of gene concepts and models in an inconsistent and largely ahistorical manner, merely amalgamated in hybrid views, as if they constituted linear developments, instead of being built for different purposes and employed in specific contexts. In…

  12. Radiation signatures in childhood thyroid cancers after the Chernobyl accident: Possible roles of radiation in carcinogenesis

    PubMed Central

    Suzuki, Keiji; Mitsutake, Norisato; Saenko, Vladimir; Yamashita, Shunichi

    2015-01-01

    After the Tokyo Electric Power Company Fukushima Daiichi nuclear power plant accident, cancer risk from low-dose radiation exposure has been of deep concern. The linear no-threshold model is applied for the purpose of radiation protection, but it is a model based on the concept that ionizing radiation induces stochastic oncogenic alterations in the target cells. As elucidation of the mechanism of radiation-induced carcinogenesis is indispensable to justify this concept, we overview studies aimed at determining the molecular changes associated with thyroid cancers among children affected by the Chernobyl nuclear accident. We intend to discuss whether any radiation signatures are associated with radiation-induced childhood thyroid cancers. PMID:25483826

  13. Silicon photonics plasma-modulators with advanced transmission line design.

    PubMed

    Merget, Florian; Azadeh, Saeed Sharif; Mueller, Juliana; Shen, Bin; Nezhad, Maziar P; Hauck, Johannes; Witzens, Jeremy

    2013-08-26

    We have investigated two novel concepts for the design of transmission lines in travelling wave Mach-Zehnder interferometer based Silicon Photonics depletion modulators overcoming the analog bandwidth limitations arising from cross-talk between signal lines in push-pull modulators and reducing the linear losses of the transmission lines. We experimentally validate the concepts and demonstrate an E/O -3 dBe bandwidth of 16 GHz with a 4V drive voltage (in dual drive configuration) and 8.8 dB on-chip insertion losses. Significant bandwidth improvements result from suppression of cross-talk. An additional bandwidth enhancement of ~11% results from a reduction of resistive transmission line losses. Frequency dependent loss models for loaded transmission lines and E/O bandwidth modeling are fully verified.

  14. Comparison-based optical study on a point-line-coupling-focus system with linear Fresnel heliostats.

    PubMed

    Dai, Yanjun; Li, Xian; Zhou, Lingyu; Ma, Xuan; Wang, Ruzhu

    2016-05-16

    Combining the concept of a beam-down solar tower with linear Fresnel heliostats into a point-line-coupling-focus (PLCF) system is one of the feasible choices and has great potential for reducing spot size and improving optical efficiency. The optical characteristics of a PLCF system with a hyperboloid reflector are introduced and investigated theoretically. Taking into account solar position and optical surface errors, a Monte Carlo ray-tracing (MCRT) analysis model for the PLCF system is developed and applied in a comparison-based study of the optical performance of the PLCF system versus the conventional beam-down solar tower system with flat and spherical heliostats. An optimal square facet for the linear Fresnel heliostat is also proposed for matching with the 3D-CPC receiver.

  15. Dynamic modelling and simulation of linear Fresnel solar field model based on molten salt heat transfer fluid

    NASA Astrophysics Data System (ADS)

    Hakkarainen, Elina; Tähtinen, Matti

    2016-05-01

    Demonstrations of direct steam generation (DSG) in linear Fresnel collectors (LFC) have given promising results related to higher steam parameters compared to the current state-of-the-art parabolic trough collector (PTC) technology using oil as heat transfer fluid (HTF). However, DSG technology lacks a feasible solution for a long-term thermal energy storage (TES) system. This option is important for CSP technology in order to offer dispatchable power. Recently, molten salts have been proposed for use as HTF and directly as storage medium in both line-focusing solar fields, offering storage capacity of several hours. This direct molten salt (DMS) storage concept has already gained operational experience in a solar tower power plant, and it is in the demonstration phase for both LFC and PTC systems. Dynamic simulation programs offer a valuable tool for the design and optimization of solar power plants. In this work, the APROS dynamic simulation program is used to model a DMS linear Fresnel solar field with a two-tank TES system, and example simulation results are presented in order to verify the functionality of the model and the capability of APROS for CSP modelling and simulation.

  16. A two-agent model applied to the biological control of the sugarcane borer (Diatraea saccharalis) by the egg parasitoid Trichogramma galloi and the larvae parasitoid Cotesia flavipes.

    PubMed

    Molnár, Sándor; López, Inmaculada; Gámez, Manuel; Garay, József

    2016-03-01

    The paper is aimed at a methodological development in biological pest control. The considered one-pest, two-agent system is modelled as a verticum-type system. Originally, linear verticum-type systems were introduced by one of the authors for modelling certain industrial systems. These systems are hierarchically composed of linear subsystems such that a part of the state variables of each subsystem affects the dynamics of the next subsystem. Recently, verticum-type system models have been applied to population ecology as well, which required the extension of the concept of a verticum-type system to the nonlinear case. In the present paper the general concepts and techniques of nonlinear verticum-type control systems are used to obtain biological control strategies in a two-agent system. For the illustration of this verticum-type control, these tools of mathematical systems theory are applied to a dynamic model of interactions between the egg and larvae populations of the sugarcane borer (Diatraea saccharalis) and its parasitoids: the egg parasitoid Trichogramma galloi and the larvae parasitoid Cotesia flavipes. In this application a key role is played by the concept of controllability, which means that it is possible to steer the system to an equilibrium in a given time. In addition to a usual linearization, the basic idea is a decomposition of the control of the whole system into the control of the subsystems, making use of the verticum structure of the population system. The main aim of this study is to show several advantages of the verticum (or decomposition) approach over the classical control-theoretic model (without decomposition). For example, in the case of verticum control the pest larval density decreases below the critical threshold value much more quickly than without decomposition. Furthermore, it is also shown that the verticum approach may be better even in terms of cost effectiveness. 
The presented optimal control methodology also turned out to be an efficient tool for the "in silico" analysis of the cost-effectiveness of different biocontrol strategies, e.g. by answering the question how far it is cost-effective to speed up the reduction of the pest larvae density, or along which trajectory this reduction should be carried out. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  17. Linear optics measurements and corrections using an AC dipole in RHIC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, G.; Bai, M.; Yang, L.

    2010-05-23

    We report recent experimental results on linear optics measurements and corrections using an AC dipole. In the RHIC 2009 run, the concept of the SVD correction algorithm was tested at injection energy, both for identifying artificial gradient errors and for correcting them using the trim quadrupoles. The measured phase beatings were reduced by 30% and 40% respectively in two dedicated experiments. In the RHIC 2010 run, the AC dipole was used to measure β* and the chromatic β function. For the 0.65 m β* lattice, we observed a factor of 3 discrepancy between the model and the measured chromatic β function in the Yellow ring.

  18. Nonlinear discrete-time multirate adaptive control of non-linear vibrations of smart beams

    NASA Astrophysics Data System (ADS)

    Georgiou, Georgios; Foutsitzi, Georgia A.; Stavroulakis, Georgios E.

    2018-06-01

    The nonlinear adaptive digital control of a smart piezoelectric beam is considered. It is shown that, in a sampled-data context, a multirate control strategy provides an appropriate framework for achieving vibration regulation while ensuring the stability of the whole control system. Under parametric uncertainties in the model parameters (damping ratios, frequencies, levels of nonlinearity and cross-coupling, control input parameters), the scheme is completed with an adaptation law deduced from hyperstability concepts. This results in the asymptotic satisfaction of the control objectives at the sampling instants. Simulation results are presented.

  19. Recent developments in learning control and system identification for robots and structures

    NASA Technical Reports Server (NTRS)

    Phan, M.; Juang, J.-N.; Longman, R. W.

    1990-01-01

    This paper reviews recent results in learning control and learning system identification, with particular emphasis on discrete-time formulations and their relation to adaptive theory. Related continuous-time results are also discussed. Among the topics presented are proportional, derivative, and integral learning controllers, and the time-domain formulation of discrete learning algorithms. Newly developed techniques are described, including the concept of the repetition domain, the repetition-domain formulation of learning control by linear feedback, model reference learning control, and indirect learning control with parameter estimation, as well as related basic concepts and recursive and non-recursive methods for learning identification.
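
The repetition-domain idea of learning control by linear feedback can be sketched on a toy plant: over repeated trials of the same task, the input is updated as u_{k+1}(t) = u_k(t) + L * e_k(t), where e_k is the previous trial's tracking error. The plant, reference trajectory, and gain below are assumptions for illustration only.

```python
def learning_control_demo(n_trials=20, gain=0.5):
    # Iterative (repetition-domain) learning control on an assumed discrete
    # plant y(t) = 0.8*u(t) + 0.1*u(t-1): each trial repeats the same task,
    # and the input profile is corrected by linear feedback of the previous
    # trial's tracking error.
    reference = [1.0, 2.0, 3.0, 2.0, 1.0]
    T = len(reference)
    u = [0.0] * T
    max_errors = []
    for _ in range(n_trials):
        y = [0.8 * u[t] + (0.1 * u[t - 1] if t > 0 else 0.0)
             for t in range(T)]
        e = [r - yt for r, yt in zip(reference, y)]
        max_errors.append(max(abs(v) for v in e))
        u = [ut + gain * et for ut, et in zip(u, e)]  # u_{k+1} = u_k + L*e_k
    return max_errors

errs = learning_control_demo()
```

With this gain the trial-to-trial error map is a contraction (|1 - 0.5 * 0.8| = 0.6 < 1), so the tracking error shrinks geometrically over repetitions.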

  20. Kelvin-Voigt model of wave propagation in fragmented geomaterials with impact damping

    NASA Astrophysics Data System (ADS)

    Khudyakov, Maxim; Pasternak, Elena; Dyskin, Arcady

    2017-04-01

    When a wave propagates through real materials, energy dissipation occurs. The effect of energy loss in homogeneous materials can be accounted for by using simple viscous models. However, a reliable model representing the effect in fragmented geomaterials has not yet been established. The main reason is the mechanism by which vibrations are transmitted between the elements (fragments) of these materials. It is hypothesised that the fragments strike against each other in the process of oscillation, and the impacts lead to the energy loss. We assume that the energy loss is well represented by the restitution coefficient. The principal element of this concept is the interaction of two adjacent blocks. We model it by a simple linear oscillator (a mass on an elastic spring) with an additional condition: each time the system travels through the neutral point, where the displacement is equal to zero, the velocity is reduced by multiplying it by the restitution coefficient, which characterises an impact of the fragments. This additional condition renders the system non-linear. We show that the behaviour of such a model, averaged over times much larger than the system period, can approximately be represented by a conventional linear oscillator with linear damping characterised by a damping coefficient expressible through the restitution coefficient. On this basis, wave propagation at times considerably greater than the resonance period of oscillations of the neighbouring blocks can be modelled using the Kelvin-Voigt model. The wave velocities and the dispersion relations are obtained.
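
The impact-damping mechanism can be checked numerically with a minimal sketch (illustrative parameters, not from the paper): each half-period contains one passage through the neutral point, so the turning-point amplitude should shrink by roughly the factor r per half cycle, i.e. the envelope decays like that of a linearly damped oscillator.

```python
import math

def simulate_impact_damped(r=0.9, omega=2.0 * math.pi, x0=1.0,
                           dt=1e-4, t_end=5.0):
    # Mass-on-a-spring oscillator with impact damping: semi-implicit Euler
    # integration of x'' = -omega^2 * x, and each time the mass crosses the
    # neutral point x = 0 the velocity is multiplied by the restitution
    # coefficient r. Turning-point amplitudes are recorded so the decay of
    # the envelope can be inspected.
    x, v = x0, 0.0
    prev_v = v
    peaks = []
    for _ in range(int(t_end / dt)):
        v += -omega ** 2 * x * dt
        x_new = x + v * dt
        if x * x_new < 0.0:          # crossed the neutral point: an impact
            v *= r
        if prev_v * v < 0.0:         # velocity sign change: a turning point
            peaks.append(abs(x))
        prev_v = v
        x = x_new
    return peaks

peaks = simulate_impact_damped()
```

The near-constant ratio between successive peaks (about r) is what allows the averaged behaviour to be matched to an equivalent viscous damping coefficient.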

  1. Estimating kinetic mechanisms with prior knowledge I: Linear parameter constraints.

    PubMed

    Salari, Autoosa; Navarro, Marco A; Milescu, Mirela; Milescu, Lorin S

    2018-02-05

    To understand how ion channels and other proteins function at the molecular and cellular levels, one must decrypt their kinetic mechanisms. Sophisticated algorithms have been developed that can be used to extract kinetic parameters from a variety of experimental data types. However, formulating models that not only explain new data, but are also consistent with existing knowledge, remains a challenge. Here, we present a two-part study describing a mathematical and computational formalism that can be used to enforce prior knowledge into the model using constraints. In this first part, we focus on constraints that enforce explicit linear relationships involving rate constants or other model parameters. We develop a simple, linear algebra-based transformation that can be applied to enforce many types of model properties and assumptions, such as microscopic reversibility, allosteric gating, and equality and inequality parameter relationships. This transformation converts the set of linearly interdependent model parameters into a reduced set of independent parameters, which can be passed to an automated search engine for model optimization. In the companion article, we introduce a complementary method that can be used to enforce arbitrary parameter relationships and any constraints that quantify the behavior of the model under certain conditions. The procedures described in this study can, in principle, be coupled to any of the existing methods for solving molecular kinetics for ion channels or other proteins. These concepts can be used not only to enforce existing knowledge but also to formulate and test new hypotheses. © 2018 Salari et al.
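
As a concrete instance of such a linear constraint (a sketch, not the authors' implementation): microscopic reversibility on a three-state cycle forces the product of clockwise rate constants to equal the product of counter-clockwise ones, which is a linear relationship among the log-rates. One log-rate becomes dependent, leaving a reduced set of five free parameters for the search engine; the rate values below are arbitrary.

```python
import math

# Microscopic reversibility on a 3-state cycle (states 1-2-3-1) requires
#   k12 * k23 * k31 == k21 * k32 * k13,
# which is linear in the logs of the rate constants:
#   log k31 = log k21 + log k32 + log k13 - log k12 - log k23.

def expand_rates(free_log_rates):
    # free_log_rates: dict of log-rates for the five independent parameters
    # (k12, k23, k21, k32, k13); the dependent k31 is reconstructed so the
    # constraint holds by construction during optimization.
    lr = dict(free_log_rates)
    lr["k31"] = (lr["k21"] + lr["k32"] + lr["k13"]
                 - lr["k12"] - lr["k23"])
    return {name: math.exp(v) for name, v in lr.items()}

free = {name: math.log(v) for name, v in
        {"k12": 100.0, "k23": 50.0, "k21": 20.0,
         "k32": 10.0, "k13": 5.0}.items()}
k = expand_rates(free)
```

Any values of the five free parameters produce a rate set that satisfies detailed balance around the loop, which is exactly the role of the paper's linear-algebra transformation to independent parameters.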

  2. Motivating the Concept of Eigenvectors via Cryptography

    ERIC Educational Resources Information Center

    Siap, Irfan

    2008-01-01

    New methods of teaching linear algebra in the undergraduate curriculum have attracted much interest lately. Most of this work is focused on evaluating and discussing the integration of special computer software into the Linear Algebra curriculum. In this article, I discuss my approach on introducing the concept of eigenvectors and eigenvalues,…

  3. Using Technology to Facilitate Reasoning: Lifting the Fog from Linear Algebra

    ERIC Educational Resources Information Center

    Berry, John S.; Lapp, Douglas A.; Nyman, Melvin A.

    2008-01-01

    This article discusses student difficulties in grasping concepts from linear algebra. Using an example from an interview with a student, we propose changes that might positively impact student understanding of concepts within a problem-solving context. In particular, we illustrate barriers to student understanding and suggest technological…

  4. Proceedings of the International Cryocooler Conference (7th) Held in Santa Fe, New Mexico on 17-19 November 1992. Part 2,

    DTIC Science & Technology

    1993-04-01

    presentations. The topics included Cryocooler Testing and Modeling, Space and Long Life Applications, Stirling Cryocoolers, Pulse Tube Refrigerators, Novel Concepts and Component Development, Low Temperature Regenerator Development, and J-T and ... Equation (12), derived in the present study, can also be used to develop a linear network model of Stirling or pulse-tube cryocoolers ...

  5. Seldon v.3.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, Nina; Ko, Teresa; Shneider, Max

    Seldon is an agent-based social simulation framework that uniquely integrates concepts from a variety of different research areas including psychology, social science, and agent-based modeling. Development has been taking place for a number of years, previously focusing on gang and terrorist recruitment. The toolkit consists of simple agents (individuals) and abstract agents (groups of individuals representing social/institutional concepts) that interact according to exchangeable rule sets (i.e. linear attraction, linear reinforcement). Each agent has a set of customizable attributes that get modified during the interactions. Interactions create relationships between agents, and each agent has a maximum amount of relationship energy that it can expend. As relationships evolve, they form multiple levels of social networks (i.e. acquaintances, friends, cliques) that in turn drive future interactions. Agents can also interact randomly if they are not connected through a network, mimicking the chance interactions that real people have in everyday life. We are currently integrating Seldon with the cognitive framework (also developed at Sandia). Each individual agent has a lightweight cognitive model that is created automatically from textual sources. Cognitive information is exchanged during interactions, and can also be injected into a running simulation. The entire framework has been parallelized to allow for larger simulations in an HPC environment. We have also added more detail to the agents themselves (a "Big Five" personality model) and their interactions (an enhanced relationship model) for a more realistic representation.

  6. Equicontrollability and the model following problem

    NASA Technical Reports Server (NTRS)

    Curran, R. T.

    1971-01-01

    Equicontrollability and its application to the linear time-invariant model-following problem are discussed. The problem is presented in the form of two systems, the plant and the model. The requirement is to find a controller to apply to the plant so that the resultant compensated plant behaves, in an input-output sense, the same as the model. All systems are assumed to be linear and time-invariant. The basic approach is to find suitable equicontrollable realizations of the plant and model and to utilize feedback so as to produce a controller of minimal state dimension. The concept of equicontrollability is a generalization of control canonical (phase variable) form applied to multivariable systems. It allows one to visualize clearly the effects of feedback and to pinpoint the parameters of a multivariable system which are invariant under feedback. The basic contributions are the development of equicontrollable form; solution of the model-following problem in an entirely algorithmic way, suitable for computer programming; and resolution of questions on system decoupling.

  7. Using state variables to model the response of tumour cells to radiation and heat: a novel multi-hit-repair approach.

    PubMed

    Scheidegger, Stephan; Fuchs, Hans U; Zaugg, Kathrin; Bodis, Stephan; Füchslin, Rudolf M

    2013-01-01

    In order to overcome the limitations of the linear-quadratic model and include synergistic effects of heat and radiation, a novel radiobiological model is proposed. The model is based on a chain of cell populations which are characterized by the number of radiation-induced damages (hits). Cells can shift downward along the chain by collecting hits and upward by a repair process. The repair process is governed by a repair probability which depends upon state variables used for a simplistic description of the impact of heat and radiation upon repair proteins. Based on the parameters used, populations with up to 4-5 hits are relevant for the calculation of survival. The model intuitively describes the mathematical behaviour of apoptotic and nonapoptotic cell death. Linear-quadratic-linear behaviour of the logarithmic cell survival, fractionation, and (with one exception) the dose rate dependencies are described correctly. The model covers the time gap dependence of the synergistic cell killing due to combined application of heat and radiation, but further validation of the proposed approach based on experimental data is needed. However, the model offers a workbench for testing different biological concepts of damage induction, repair, and statistical approaches for calculating the variables of state.
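    The chain-of-populations bookkeeping can be sketched as a simple rate-equation integration. The rates, time step, and lethal-hit threshold below are illustrative placeholders, not the authors' calibrated model:

```python
import numpy as np

# Schematic multi-hit-repair chain: n[k] is the fraction of cells
# carrying k hits. Damage moves cells down the chain at rate `hit`,
# repair moves them back at rate `rep`; reaching k_max is lethal
# (absorbing, no repair). All values are illustrative.
k_max, hit, rep = 5, 0.8, 0.5
n = np.zeros(k_max + 1)
n[0] = 1.0                         # start with all cells undamaged
dt, t_end = 0.001, 2.0

for _ in range(int(t_end / dt)):
    dn = np.zeros_like(n)
    damage = hit * n[:-1]          # transitions k -> k+1
    repair = rep * n[1:-1]         # repair for k = 1..k_max-1 only
    dn[:-1] -= damage
    dn[1:] += damage
    dn[1:-1] -= repair
    dn[:-2] += repair
    n += dt * dn                   # forward Euler step

surviving = 1.0 - n[-1]            # cells that never reached the lethal state
print(n.round(4), round(surviving, 4))
```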

  8. Incorporation of SemiSpan SuperSonic Transport (S4T) Aeroservoelastic Models into SAREC-ASV Simulation

    NASA Technical Reports Server (NTRS)

    Christhilf, David M.; Pototzky, Anthony S.; Stevens, William L.

    2010-01-01

    The Simulink-based Simulation Architecture for Evaluating Controls for Aerospace Vehicles (SAREC-ASV) was modified to incorporate linear models representing aeroservoelastic characteristics of the SemiSpan SuperSonic Transport (S4T) wind-tunnel model. The S4T planform is for a Technology Concept Aircraft (TCA) design from the 1990s. The model has three control surfaces and is instrumented with accelerometers and strain gauges. Control laws developed for wind-tunnel testing for Ride Quality Enhancement, Gust Load Alleviation, and Flutter Suppression System functions were implemented in the simulation. The simulation models open- and closed-loop response to turbulence and to control excitation. It provides time histories for closed-loop stable conditions above the open-loop flutter boundary. The simulation is useful for assessing the potential impact of closed-loop control rate and position saturation. It also provides a means to assess fidelity of system identification procedures by providing time histories for a known plant model, with and without unmeasured turbulence as a disturbance. Sets of linear models representing different Mach number and dynamic pressure conditions were implemented as MATLAB Linear Time Invariant (LTI) objects. Configuration changes were implemented by selecting which LTI object to use in a Simulink template block. A limited comparison of simulation versus wind-tunnel results is shown.

  9. Identification of Synchronous Machine Stability Parameters: An On-Line Time-Domain Approach.

    NASA Astrophysics Data System (ADS)

    Le, Loc Xuan

    1987-09-01

    A time-domain modeling approach is described which enables the stability-study parameters of the synchronous machine to be determined directly from input-output data measured at the terminals of the machine operating under normal conditions. The transient responses due to system perturbations are used to identify the parameters of the equivalent circuit models. The described models are verified by comparing their responses with the machine responses generated from the transient stability models of a small three-generator multi-bus power system and of a single-machine infinite-bus power network. The least-squares method is used for the solution of the model parameters. As a precaution against ill-conditioned problems, the singular value decomposition (SVD) is employed for its inherent numerical stability. In order to identify the equivalent-circuit parameters uniquely, the solution of a linear optimization problem with non-linear constraints is required. Here, the SVD appears to offer a simple solution to this otherwise difficult problem. Furthermore, the SVD yields solutions with small bias and, therefore, physically meaningful parameters even in the presence of noise in the data. The question of whether a more advanced model of the synchronous machine, describing subtransient and even sub-subtransient behavior, is needed is addressed through the concept of the condition number, which provides a quantitative measure for determining whether such an advanced model is indeed necessary. Finally, the recursive SVD algorithm is described for real-time parameter identification and tracking of slowly time-varying parameters. The algorithm is applied to identify the dynamic equivalent power system model.
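    The SVD-based least-squares step described above can be illustrated in a few lines; the synthetic data and parameter values here are stand-ins for the machine measurements:

```python
import numpy as np

# Minimal sketch: SVD-based least-squares identification of a linear
# model y = X @ p from noisy measurements, with the condition number
# used to judge whether the parameterization is well-posed.
rng = np.random.default_rng(0)
p_true = np.array([1.5, -0.4, 0.25])
X = rng.normal(size=(200, 3))                  # regressor matrix
y = X @ p_true + 0.01 * rng.normal(size=200)   # noisy observations

U, s, Vt = np.linalg.svd(X, full_matrices=False)
cond = s[0] / s[-1]                    # condition number of the problem
p_hat = Vt.T @ ((U.T @ y) / s)         # pseudoinverse (min-norm LS) solution

print(round(cond, 2), p_hat.round(3))
```

A large condition number would signal that some parameter combination is poorly constrained by the data, which is exactly the diagnostic role the abstract assigns to it.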

  10. Nutrient profiling can help identify foods of good nutritional quality for their price: a validation study with linear programming.

    PubMed

    Maillot, Matthieu; Ferguson, Elaine L; Drewnowski, Adam; Darmon, Nicole

    2008-06-01

    Nutrient profiling systems rank foods based on their nutrient content and may help identify foods of good nutritional quality for their price. This hypothesis was tested using diet modeling with linear programming. Analyses were undertaken using food intake data from the nationally representative French INCA (enquête Individuelle et Nationale sur les Consommations Alimentaires) survey and its associated food composition and price database. For each food, a nutrient profile score was defined as the ratio between the previously published nutrient density score (NDS) and the limited nutrient score (LIM); a nutritional-quality-for-price indicator was developed and calculated from the relationship between its NDS:LIM and energy cost (in euro/100 kcal). We developed linear programming models to design diets that fulfilled increasing levels of nutritional constraints at a minimal cost. The median NDS:LIM values of foods selected in modeled diets increased as the levels of nutritional constraints increased (P = 0.005). In addition, the proportion of foods with a good nutritional-quality-for-price indicator was higher (P < 0.0001) among foods selected (81%) than among foods not selected (39%) in modeled diets. This agreement between the linear programming and the nutrient profiling approaches indicates that nutrient profiling can help identify foods of good nutritional quality for their price. Linear programming is a useful tool for testing nutrient profiling systems and validating the concept of nutrient profiling.
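    The diet-optimization step can be sketched with a toy linear program; the costs, nutrient contents, and requirements below are invented for illustration and are not from the INCA database:

```python
import numpy as np
from scipy.optimize import linprog

# Toy diet model in the spirit of the study: choose food quantities
# minimizing cost while meeting nutrient lower bounds.
cost = np.array([0.30, 0.80, 0.50])          # euro per 100 g of 3 foods
# rows: protein, fiber (per 100 g); columns: foods (illustrative values)
nutrients = np.array([[8.0, 2.0, 5.0],
                      [1.0, 6.0, 2.0]])
needs = np.array([50.0, 25.0])               # daily requirements

res = linprog(cost,
              A_ub=-nutrients, b_ub=-needs,  # encode nutrients @ x >= needs
              bounds=[(0, None)] * 3, method="highs")
print(res.x.round(2), round(res.fun, 2))     # quantities and minimal cost
```

Raising the `needs` vector mimics the study's "increasing levels of nutritional constraints" and shifts the optimum toward foods with better nutrient content per euro.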

  11. A rod type linear ultrasonic motor utilizing longitudinal traveling waves: proof of concept

    NASA Astrophysics Data System (ADS)

    Wang, Liang; Wielert, Tim; Twiefel, Jens; Jin, Jiamei; Wallaschek, Jörg

    2017-08-01

    This paper proposes a non-resonant linear ultrasonic motor utilizing longitudinal traveling waves. The longitudinal traveling waves in the rod-type stator are generated by inducing longitudinal vibrations at one end of the waveguide and eliminating reflections at the opposite end with a passive damper. Due to the Poisson effect, the stator surface points move on elliptic trajectories, and the slider is driven forward by friction. In contrast to many other flexural traveling wave linear ultrasonic motors, the driving direction of the proposed motor is identical to the wave propagation direction. The feasibility of the motor concept is demonstrated theoretically and experimentally. First, the design and operating principle of the motor are presented in detail. Then, the stator is modeled using the transfer matrix method and verified by experimental studies. In addition, experimental parameter studies are carried out to identify the motor characteristics. Finally, the performance of the proposed motor is investigated. Overall, the results indicate very dynamic drive characteristics. The motor prototype achieves a maximum mean velocity of 115 mm s⁻¹ and a maximum load of 0.25 N. The start-up and shutdown times from maximum speed are below 5 ms.
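    The transfer matrix method used to model the stator can be sketched for plain longitudinal rod vibration; the material values are generic aluminium-like placeholders, not the paper's stator data. A basic sanity check is that chaining two segment matrices equals the matrix of the combined length:

```python
import numpy as np

# Transfer matrix for longitudinal vibration of a uniform rod segment.
# The state vector is [axial displacement u, axial force N]; T(L) maps
# the state at one end of a segment of length L to the other end.
E, A, rho = 7.0e10, 1.0e-4, 2700.0      # aluminium-like rod (illustrative)
c = np.sqrt(E / rho)                    # longitudinal wave speed

def T(L, omega):
    k = omega / c                       # wavenumber at frequency omega
    return np.array([[np.cos(k * L), np.sin(k * L) / (E * A * k)],
                     [-E * A * k * np.sin(k * L), np.cos(k * L)]])

omega = 2 * np.pi * 20e3                # a 20 kHz drive frequency
# chaining two segments equals one segment of the combined length
print(np.allclose(T(0.1, omega) @ T(0.05, omega), T(0.15, omega)))
```

Real stator models chain such matrices across segments of differing cross-section (and the piezoelectric elements), then impose boundary conditions to find the response.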

  12. Correlation and simple linear regression.

    PubMed

    Zou, Kelly H; Tuncali, Kemal; Silverman, Stuart G

    2003-06-01

    In this tutorial article, the concepts of correlation and regression are reviewed and demonstrated. The authors review and compare two correlation coefficients, the Pearson correlation coefficient and the Spearman rho, for measuring linear and nonlinear relationships between two continuous variables. In the case of measuring the linear relationship between a predictor and an outcome variable, simple linear regression analysis is conducted. These statistical concepts are illustrated by using a data set from published literature to assess a computed tomography-guided interventional technique. These statistical methods are important for exploring the relationships between variables and can be applied to many radiologic studies.
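    A minimal sketch of the comparison between the two coefficients, using synthetic data in place of the CT-guided intervention data set:

```python
import numpy as np
from scipy import stats

# A monotone but nonlinear relationship keeps Spearman's rho high
# while Pearson's r, which measures linear association, understates it.
rng = np.random.default_rng(1)
x = rng.uniform(0, 3, 100)
y = np.exp(x) + rng.normal(0, 0.5, 100)   # monotone, nonlinear

r, _ = stats.pearsonr(x, y)
rho, _ = stats.spearmanr(x, y)
fit = stats.linregress(x, y)              # simple linear regression
print(round(r, 2), round(rho, 2), round(fit.slope, 2))
```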

  13. Employment of Gibbs-Donnan-based concepts for interpretation of the properties of linear polyelectrolyte solutions

    USGS Publications Warehouse

    Marinsky, J.A.; Reddy, M.M.

    1991-01-01

    Earlier research has shown that the acid dissociation and metal ion complexation equilibria of linear, weak-acid polyelectrolytes and their cross-linked gel analogues are similarly sensitive to the counterion concentration levels of their solutions. Gibbs-Donnan-based concepts, applicable to the gel, are equally applicable to the linear polyelectrolyte for the accommodation of this sensitivity to ionic strength. This result is presumed to indicate that the linear polyelectrolyte in solution develops counterion-concentrating regions that closely resemble the gel phase of their analogues. Advantage has been taken of this description of linear polyelectrolytes to estimate the solvent uptake by these regions. © 1991 American Chemical Society.

  14. A continuous damage model based on stepwise-stress creep rupture tests

    NASA Technical Reports Server (NTRS)

    Robinson, D. N.

    1985-01-01

    A creep damage accumulation model is presented that makes use of the Kachanov damage rate concept with a provision accounting for damage that results from a variable stress history. This is accomplished through the introduction of an additional term in the Kachanov rate equation that is linear in the stress rate. Specification of the material functions and parameters in the model requires two types of tests constituting a data base: (1) standard constant-stress creep rupture tests, and (2) a sequence of two-step creep rupture tests.
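    One possible reading of a Kachanov-type rate with an added stress-rate term can be sketched as follows; the coefficients and the exact functional form are illustrative, not the paper's calibrated model:

```python
import numpy as np

# Illustrative Kachanov-type damage evolution with an extra term linear
# in the stress rate (schematic form, not the paper's model):
#   d(omega)/dt = A * (sigma / (1 - omega))**n + B * d(sigma)/dt
A_, B_, n_ = 1e-4, 2e-3, 3.0      # made-up material parameters
dt = 0.01

def evolve(stress_history):
    """Forward-Euler integration of damage omega over a stress history."""
    omega = 0.0
    prev = stress_history[0]
    for sigma in stress_history:
        omega += dt * A_ * (sigma / (1.0 - omega))**n_ + B_ * (sigma - prev)
        prev = sigma
        if omega >= 1.0:          # omega = 1 corresponds to rupture
            break
    return omega

# two-step creep test: low stress followed by a step up to high stress
steps = np.concatenate([np.full(500, 1.0), np.full(500, 2.0)])
print(round(evolve(steps), 4))
```

The stress-rate term makes the stepwise history accumulate more damage than a constant-stress history of the same duration, which is the behavior the two-step tests are designed to calibrate.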

  15. Hybrid Airships: Intratheater Operations Cost-Benefit Analysis

    DTIC Science & Technology

    2011-06-01

    airships (HA) in an intratheater humanitarian assistance scenario. A linear programming model was used to study various mixes of hybrid airships ... (i.e., 200 tons) must be moved as quickly as possible (or during sealift transit), then HA operating costs of $3,000 per hour or less also make them a ...

  16. Large-scale geomorphology: Classical concepts reconciled and integrated with contemporary ideas via a surface processes model

    NASA Astrophysics Data System (ADS)

    Kooi, Henk; Beaumont, Christopher

    1996-02-01

    Linear systems analysis is used to investigate the response of a surface processes model (SPM) to tectonic forcing. The SPM calculates subcontinental scale denudational landscape evolution on geological timescales (1 to hundreds of million years) as the result of simultaneous hillslope transport, modeled by diffusion, and fluvial transport, modeled by advection and reaction. The tectonically forced SPM accommodates the large-scale behavior envisaged in classical and contemporary conceptual geomorphic models and provides a framework for their integration and unification. The following three model scales are considered: micro-, meso-, and macroscale. The concepts of dynamic equilibrium and grade are quantified at the microscale for segments of uniform gradient subject to tectonic uplift. At the larger meso- and macroscales (which represent individual interfluves and landscapes including a number of drainage basins, respectively) the system response to tectonic forcing is linear for uplift geometries that are symmetric with respect to baselevel and which impose a fully integrated drainage to baselevel. For these linear models the response time and the transfer function as a function of scale characterize the model behavior. Numerical experiments show that the styles of landscape evolution depend critically on the timescales of the tectonic processes in relation to the response time of the landscape. When tectonic timescales are much longer than the landscape response time, the resulting dynamic equilibrium landscapes correspond to those envisaged by Hack (1960). When tectonic timescales are of the same order as the landscape response time and when tectonic variations take the form of pulses (much shorter than the response time), evolving landscapes conform to the Penck type (1972) and to the Davis (1889, 1899) and King (1953, 1962) type frameworks, respectively. 
The behavior of the SPM highlights the importance of phase shifts or delays of the landform response and sediment yield in relation to the tectonic forcing. Finally, nonlinear behavior resulting from more general uplift geometries is discussed. A number of model experiments illustrate the importance of "fundamental form," which is an expression of the conformity of antecedent topography with the current tectonic regime. Lack of conformity leads to models that exhibit internal thresholds and a complex response.

  17. Using Cognitive Tutor Software in Learning Linear Algebra Word Concept

    ERIC Educational Resources Information Center

    Yang, Kai-Ju

    2015-01-01

    This paper reports on a study of twelve 10th grade students using Cognitive Tutor, a math software program, to learn linear algebra word concept. The study's purpose was to examine whether students' mathematics performance as it is related to using Cognitive Tutor provided evidence to support Koedlinger's (2002) four instructional principles used…

  18. Definitions Are Important: The Case of Linear Algebra

    ERIC Educational Resources Information Center

    Berman, Abraham; Shvartsman, Ludmila

    2016-01-01

    In this paper we describe an experiment in a linear algebra course. The aim of the experiment was to promote the students' understanding of the studied concepts focusing on their definitions. It seems to be a given that students should understand concepts' definitions before working substantially with them. Unfortunately, in many cases they do…

  19. Teaching the "Diagonalization Concept" in Linear Algebra with Technology: A Case Study at Galatasaray University

    ERIC Educational Resources Information Center

    Yildiz Ulus, Aysegul

    2013-01-01

    This paper examines experimental and algorithmic contributions of advanced calculators (graphing and computer algebra system, CAS) in teaching the concept of "diagonalization," one of the key topics in Linear Algebra courses taught at the undergraduate level. Specifically, the proposed hypothesis of this study is to assess the effective…

  20. Teaching the Concept of Breakdown Point in Simple Linear Regression.

    ERIC Educational Resources Information Center

    Chan, Wai-Sum

    2001-01-01

    Most introductory textbooks on simple linear regression analysis mention the fact that extreme data points have a great influence on ordinary least-squares regression estimation; however, not many textbooks provide a rigorous mathematical explanation of this phenomenon. Suggests a way to fill this gap by teaching students the concept of breakdown…

  1. The Programming Language Python In Earth System Simulations

    NASA Astrophysics Data System (ADS)

    Gross, L.; Imranullah, A.; Mora, P.; Saez, E.; Smillie, J.; Wang, C.

    2004-12-01

    Mathematical models in earth sciences are based on the solution of systems of coupled, non-linear, time-dependent partial differential equations (PDEs). The spatial and temporal scales vary from planetary scale and millions of years for convection problems to 100 km and 10 years for fault system simulations. Various techniques are in use to deal with the time dependency (e.g. Crank-Nicolson), with the non-linearity (e.g. Newton-Raphson) and with weakly coupled equations (e.g. non-linear Gauss-Seidel). Besides these high-level solution algorithms, discretization methods (e.g. the finite element method (FEM), the boundary element method (BEM)) are used to deal with spatial derivatives. Typically, large-scale, three-dimensional meshes are required to resolve geometrical complexity (e.g. in the case of fault systems) or features in the solution (e.g. in mantle convection simulations). The modelling environment escript allows the rapid implementation of new physics as required for the development of simulation codes in earth sciences. Its main objective is to provide a programming language in which the user can define new models and rapidly develop high-level solution algorithms. The current implementation is linked with the finite element package finley as a PDE solver. However, the design is open and other discretization technologies such as finite differences and boundary element methods could be included. escript is implemented as an extension of the interactive programming environment python (see www.python.org). Key concepts introduced are Data objects, which hold values on nodes or elements of the finite element mesh, and linearPDE objects, which define linear partial differential equations to be solved by the underlying discretization technology. In this paper we show the basic concepts of escript and how escript is used to implement a simulation code for interacting fault systems.
We will show some results of large-scale, parallel simulations on an SGI Altix system. Acknowledgements: Project work is supported by Australian Commonwealth Government through the Australian Computational Earth Systems Simulator Major National Research Facility, Queensland State Government Smart State Research Facility Fund, The University of Queensland and SGI.

  2. On supporting students' understanding of solving linear equation by using flowchart

    NASA Astrophysics Data System (ADS)

    Toyib, Muhamad; Kusmayadi, Tri Atmojo; Riyadi

    2017-05-01

    The aim of this study was to support 7th graders in gradually understanding the concepts and procedures of solving linear equations. Thirty-two 7th graders of a Junior High School in Surakarta, Indonesia were involved in this study. Design research was used as the research approach to achieve the aim. A set of learning activities on solving linear equations with one unknown was designed based on the Realistic Mathematics Education (RME) approach. The activities started with playing LEGO to find a linear equation, then solving the equation by using a flowchart. The results indicate that, with realistic problems, playing LEGO could stimulate students to construct a linear equation. Furthermore, the flowchart was used to encourage students' reasoning and understanding of the concepts and procedures for solving a linear equation with one unknown.
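    The flowchart procedure for a one-unknown linear equation can be sketched as code, with each box of the flowchart emitting one line of working (a hypothetical illustration, not the study's materials):

```python
from fractions import Fraction

def solve_linear(a, b, c):
    """Follow the flowchart for a*x + b = c with one unknown:
    subtract b from both sides, then divide both sides by a."""
    if a == 0:
        raise ValueError("not a linear equation in x")
    steps = [f"{a}x + {b} = {c}"]
    rhs = Fraction(c - b)          # subtract b from both sides
    steps.append(f"{a}x = {rhs}")
    x = rhs / a                    # divide both sides by a
    steps.append(f"x = {x}")
    return x, steps

x, steps = solve_linear(3, 4, 19)
print("\n".join(steps))            # 3x + 4 = 19, then 3x = 15, then x = 5
```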

  3. Statistical Modeling of Fire Occurrence Using Data from the Tōhoku, Japan Earthquake and Tsunami.

    PubMed

    Anderson, Dana; Davidson, Rachel A; Himoto, Keisuke; Scawthorn, Charles

    2016-02-01

    In this article, we develop statistical models to predict the number and geographic distribution of fires caused by earthquake ground motion and tsunami inundation in Japan. Using new, uniquely large, and consistent data sets from the 2011 Tōhoku earthquake and tsunami, we fitted three types of models: generalized linear models (GLMs), generalized additive models (GAMs), and boosted regression trees (BRTs). This is the first time the latter two have been used in this application. A simple conceptual framework guided identification of candidate covariates. Models were then compared based on their out-of-sample predictive power, goodness of fit to the data, ease of implementation, and relative importance of the framework concepts. For the ground motion data set, we recommend a Poisson GAM; for the tsunami data set, a negative binomial (NB) GLM or NB GAM. The best models generate out-of-sample predictions of the total number of ignitions in the region that are within one or two ignitions of the observed total. Prefecture-level prediction errors average approximately three. All models demonstrate predictive power far superior to that of four models from the literature that were also tested. A nonlinear relationship is apparent between ignitions and ground motion, so for GLMs, which assume a linear response-covariate relationship, instrumental intensity was the preferred ground motion covariate because it captures part of that nonlinearity. Measures of commercial exposure were preferred over measures of residential exposure for both ground motion and tsunami ignition models. This may vary in other regions, but nevertheless highlights the value of testing alternative measures for each concept. Models with the best predictive power included two or three covariates. © 2015 Society for Risk Analysis.
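    The simplest of the three model classes, a Poisson GLM with a log link, can be sketched via iteratively reweighted least squares on synthetic count data (not the Tōhoku data):

```python
import numpy as np

# Minimal Poisson GLM (log link) fitted by IRLS; the data are synthetic
# stand-ins for ignition counts versus a single shaking covariate.
rng = np.random.default_rng(2)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + covariate
beta_true = np.array([0.5, 0.8])
y = rng.poisson(np.exp(X @ beta_true))                 # simulated counts

beta = np.zeros(2)
for _ in range(25):                       # IRLS iterations
    mu = np.exp(X @ beta)                 # current mean under the log link
    W = mu                                # Poisson working weights
    z = X @ beta + (y - mu) / mu          # working response
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

print(beta.round(2))
```

GAMs and BRTs relax the linear predictor assumption, which is why the abstract notes their advantage when the ignition-shaking relationship is nonlinear.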

  4. Linear test bed. Volume 1: Test bed no. 1. [aerospike test bed with segmented combustor

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The objective of the Linear Test Bed program was to design, fabricate, and evaluate by test an advanced aerospike test bed which employed the segmented combustor concept. The system is designated as a linear aerospike system and consists of a thrust chamber assembly, a power package, and a thrust frame. It was designed as an experimental system to demonstrate the feasibility of the linear aerospike-segmented combustor concept. The overall dimensions are 120 inches long by 120 inches wide by 96 inches in height. The propellants are liquid oxygen/liquid hydrogen. The system was designed to operate at 1200-psia chamber pressure and a mixture ratio of 5.5. At the design conditions, the sea level thrust is 200,000 pounds. The complete program, including concept selection, design, fabrication, component tests, system tests, supporting analysis, and posttest hardware inspection, is described.

  5. Causal discovery and inference: concepts and recent methodological advances.

    PubMed

    Spirtes, Peter; Zhang, Kun

    This paper aims to give a broad coverage of central concepts and principles involved in automated causal inference and emerging approaches to causal discovery from i.i.d. data and from time series. After reviewing concepts including manipulations, causal models, sample predictive modeling, causal predictive modeling, and structural equation models, we present the constraint-based approach to causal discovery, which relies on the conditional independence relationships in the data, and discuss the assumptions underlying its validity. We then focus on causal discovery based on structural equation models, in which a key issue is the identifiability of the causal structure implied by appropriately defined structural equation models: in the two-variable case, under what conditions (and why) is the causal direction between the two variables identifiable? We show that the independence between the error term and causes, together with appropriate structural constraints on the structural equation, makes it possible. Next, we report some recent advances in causal discovery from time series. Assuming that the causal relations are linear with non-Gaussian noise, we mention two problems which are traditionally difficult to solve, namely causal discovery from subsampled data and that in the presence of confounding time series. Finally, we list a number of open questions in the field of causal discovery and inference.
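    The two-variable identifiability argument can be illustrated numerically: with non-Gaussian noise, the regression residual is independent of the regressor only in the true causal direction, and a higher-order moment can expose the wrong one. The dependence score below is a crude stand-in for a proper independence test, not a method from the paper:

```python
import numpy as np

# Linear model with uniform (non-Gaussian) noise: x -> y.
rng = np.random.default_rng(3)
n = 20000
x = rng.uniform(-1, 1, n)
y = 2.0 * x + rng.uniform(-0.5, 0.5, n)

def dep(u, v):
    """Crude dependence score between regressor u and the residual of
    regressing v on u, using a third-order moment that plain
    correlation (zero by construction) cannot see."""
    slope, intercept = np.polyfit(u, v, 1)
    r = v - (slope * u + intercept)
    return abs(np.corrcoef(u**3, r)[0, 1])

# the residual looks independent only in the true direction x -> y
print(dep(x, y) < dep(y, x))
```

With Gaussian noise both directions would score near zero, which is exactly why non-Gaussianity buys identifiability.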

  6. On some approaches to model reversible magnetization processes

    NASA Astrophysics Data System (ADS)

    Chwastek, K.; Baghel, A. P. S.; Sai Ram, B.; Borowik, B.; Daniel, L.; Kulkarni, S. V.

    2018-04-01

    This paper focuses on the problem of how reversible magnetization processes are taken into account in contemporary descriptions of hysteresis curves. For comparison, three versions of the phenomenological T(x) model based on hyperbolic tangent mapping are considered. Two of them are based on summing the output of the hysteresis operator with a linear or nonlinear mapping. The third description is inspired by the concept of the product Preisach model. Total susceptibility is modulated with a magnetization-dependent function. The models are verified using measurement data for grain-oriented electrical steel. The proposed third description represents minor loops most accurately.
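    The first description mentioned, summing a hysteresis operator with a reversible mapping, can be sketched with a tanh branch model; the parameters are illustrative, not fitted to grain-oriented steel:

```python
import numpy as np

# Schematic tanh-based hysteresis: a shifted tanh operator (irreversible
# part) plus a linear reversible susceptibility term. All parameters
# are illustrative placeholders.
Ms, a, Hc, chi_rev = 1.2, 120.0, 40.0, 1e-3

def branch(H, direction):
    """Ascending (direction=+1) or descending (-1) branch of the loop."""
    return Ms * np.tanh((H - direction * Hc) / a) + chi_rev * H

H = np.linspace(-300, 300, 601)
M_up, M_down = branch(H, +1), branch(H, -1)
remanence = branch(0.0, -1)          # M at H = 0 on the descending branch
print(round(float(remanence), 3))
```

The product-model variant discussed third would instead scale the susceptibility by a magnetization-dependent factor rather than add a separate reversible term.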

  7. Random mechanics: Nonlinear vibrations, turbulences, seisms, swells, fatigue

    NASA Astrophysics Data System (ADS)

    Kree, P.; Soize, C.

    The random modeling of physical phenomena, together with probabilistic methods for the numerical calculation of random mechanical forces, are analytically explored. Attention is given to theoretical examinations such as probabilistic concepts, linear filtering techniques, and trajectory statistics. Applications of the methods to structures experiencing atmospheric turbulence, the quantification of turbulence, and the dynamic responses of the structures are considered. A probabilistic approach is taken to study the effects of earthquakes on structures and to the forces exerted by ocean waves on marine structures. Theoretical analyses by means of vector spaces and stochastic modeling are reviewed, as are Markovian formulations of Gaussian processes and the definition of stochastic differential equations. Finally, random vibrations with a variable number of links and linear oscillators undergoing the square of Gaussian processes are investigated.

  8. Constraining DALECv2 using multiple data streams and ecological constraints: analysis and application

    DOE PAGES

    Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy

    2017-07-10

    We use a variational method to assimilate multiple data streams into the terrestrial ecosystem carbon cycle model DALECv2 (Data Assimilation Linked Ecosystem Carbon). Ecological and dynamical constraints have recently been introduced to constrain unresolved components of this otherwise ill-posed problem. We recast these constraints as a multivariate Gaussian distribution to incorporate them into the variational framework and we demonstrate their advantage through a linear analysis. By using an adjoint method we study a linear approximation of the inverse problem: firstly we perform a sensitivity analysis of the different outputs under consideration, and secondly we use the concept of resolution matrices to diagnose the nature of the ill-posedness and evaluate regularisation strategies. We then study the non-linear problem with an application to real data. Finally, we propose a modification to the model: introducing a spin-up period provides us with a built-in formulation of some ecological constraints which facilitates the variational approach.
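    The resolution-matrix diagnostic can be sketched for a linearized problem d = G m with Tikhonov regularization; G here is a random stand-in containing two nearly dependent parameters, to show how poor resolution appears on the diagonal:

```python
import numpy as np

# Resolution matrix for a regularized linear inverse problem d = G m:
# R maps true parameters to recovered ones; diagonal entries near 1
# indicate well-resolved parameters, small entries flag ill-posedness.
rng = np.random.default_rng(4)
G = rng.normal(size=(30, 5))
G[:, 4] = G[:, 3] + 1e-3 * rng.normal(size=30)  # two nearly dependent params

lam = 1.0                                        # Tikhonov weight
Ginv = np.linalg.solve(G.T @ G + lam * np.eye(5), G.T)  # regularized inverse
R = Ginv @ G                                     # resolution matrix
print(np.diag(R).round(2))   # values near 1 = well resolved
```

The two collinear columns share their resolution between them, so their diagonal entries drop well below those of the independent parameters.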

  9. Constraining DALECv2 using multiple data streams and ecological constraints: analysis and application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy

    We use a variational method to assimilate multiple data streams into the terrestrial ecosystem carbon cycle model DALECv2 (Data Assimilation Linked Ecosystem Carbon). Ecological and dynamical constraints have recently been introduced to constrain unresolved components of this otherwise ill-posed problem. We recast these constraints as a multivariate Gaussian distribution to incorporate them into the variational framework and we demonstrate their advantage through a linear analysis. By using an adjoint method we study a linear approximation of the inverse problem: firstly we perform a sensitivity analysis of the different outputs under consideration, and secondly we use the concept of resolution matrices to diagnose the nature of the ill-posedness and evaluate regularisation strategies. We then study the non-linear problem with an application to real data. Finally, we propose a modification to the model: introducing a spin-up period provides us with a built-in formulation of some ecological constraints which facilitates the variational approach.

  10. A two cable, six link boom crane for lunar construction

    NASA Technical Reports Server (NTRS)

    Taylor, Robert M.; Mikulas, Martin M., Jr.; Hedgepeth, John M.

    1993-01-01

    This paper presents the conceptual design and analysis of a modified crane boom and cable suspension which provide control over all six degrees of freedom of a payload. Two cables pass around pulleys to form six links between the payload and boom. A linearization of the pulley mechanics was derived to create finite element models of the system. The models were experimentally verified and used to explore variations of the suspension geometry. Several crane concepts which use the suspension are discussed and illustrated.

  11. Robust stability of second-order systems

    NASA Technical Reports Server (NTRS)

    Chuang, C.-H.

    1993-01-01

    A feedback linearization technique is used in conjunction with passivity concepts to design robust controllers for space robots. It is assumed that bounded modeling uncertainties exist in the inertia matrix and the vector representing the coriolis, centripetal, and friction forces. Under these assumptions, the controller guarantees asymptotic tracking of the joint variables. A Lagrangian approach is used to develop a dynamic model for space robots. Closed-loop simulation results are illustrated for a simple case of a single link planar manipulator with freely floating base.

  12. On the analytical modeling of the nonlinear vibrations of pretensioned space structures

    NASA Technical Reports Server (NTRS)

    Housner, J. M.; Belvin, W. K.

    1983-01-01

    Pretensioned structures are receiving considerable attention as candidate large space structures; a typical example is a hoop-column antenna. The large number of preloaded members requires efficient analytical methods for concept validation and design. Validation through analysis is especially important since ground testing may be limited by gravity effects and structural size. The objective of the present investigation is to examine the analytical modeling of pretensioned members undergoing nonlinear vibrations. Two approximate nonlinear analyses are developed to model general structural arrangements which include beam-columns and pretensioned cables attached to a common nucleus, such as may occur at a joint of a pretensioned structure. Attention is given to structures undergoing nonlinear steady-state oscillations due to sinusoidal excitation forces. Three analyses (linear, quasi-linear, and nonlinear) are conducted and applied to study the response of a relatively simple cable-stiffened structure.

  13. How are the Concepts and Theories of Acid Base Reactions Presented? Chemistry in Textbooks and as Presented by Teachers

    NASA Astrophysics Data System (ADS)

    Furió-Más, Carlos; Calatayud, María Luisa; Guisasola, Jenaro; Furió-Gómez, Cristina

    2005-09-01

    This paper investigates the views of science and scientific activity that can be found in chemistry textbooks and heard from teachers when acid-base reactions are introduced to grade 12 and university chemistry students. First, the main macroscopic and microscopic conceptual models are developed. Second, we attempt to show how the views of science present in textbooks and held by chemistry teachers contribute to an impoverished image of chemistry. A varied design was elaborated to analyse some epistemological deficiencies in the teaching of acid-base reactions: textbooks were analysed and teachers were interviewed. The results obtained show that the teaching process does not emphasize the macroscopic presentation of acids and bases. Macroscopic and microscopic conceptual models involved in the explanation of acid-base processes are mixed in textbooks and by teachers. Furthermore, the non-problematic introduction of concepts, such as the hydrolysis concept, and a linear, cumulative view of acid-base theories (Arrhenius and Brønsted) were detected.

  14. Biomedical Mathematics, Unit I: Measurement, Linear Functions and Dimensional Algebra. Student Text. Revised Version, 1975.

    ERIC Educational Resources Information Center

    Biomedical Interdisciplinary Curriculum Project, Berkeley, CA.

    This text presents lessons relating specific mathematical concepts to the ideas, skills, and tasks pertinent to the health care field. Among other concepts covered are linear functions, vectors, trigonometry, and statistics. Many of the lessons use data acquired during science experiments as the basis for exercises in mathematics. Lessons present…

  15. Acoustic emission linear pulse holography

    DOEpatents

    Collins, H.D.; Busse, L.J.; Lemon, D.K.

    1983-10-25

    This device relates to the concept of and means for performing Acoustic Emission Linear Pulse Holography, which combines the advantages of linear holographic imaging and Acoustic Emission into a single non-destructive inspection system. This unique system produces a chronological, linear holographic image of a flaw by utilizing the acoustic energy emitted during crack growth. The innovation is the concept of utilizing the crack-generated acoustic emission energy to generate a chronological series of images of a growing crack by applying linear, pulse holographic processing to the acoustic emission data. The process is implemented by placing on a structure an array of piezoelectric sensors (typically 16 or 32 of them) near the defect location. A reference sensor is placed between the defect and the array.

  16. Dose Titration Algorithm Tuning (DTAT) should supersede 'the' Maximum Tolerated Dose (MTD) in oncology dose-finding trials.

    PubMed

    Norris, David C

    2017-01-01

    Background. Absent adaptive, individualized dose-finding in early-phase oncology trials, subsequent 'confirmatory' Phase III trials risk suboptimal dosing, with resulting loss of statistical power and reduced probability of technical success for the investigational therapy. While progress has been made toward explicitly adaptive dose-finding and quantitative modeling of dose-response relationships, most such work continues to be organized around a concept of 'the' maximum tolerated dose (MTD). The purpose of this paper is to demonstrate concretely how the aim of early-phase trials might be conceived, not as 'dose-finding', but as dose titration algorithm (DTA)-finding. Methods. A Phase I dosing study is simulated, for a notional cytotoxic chemotherapy drug, with neutropenia constituting the critical dose-limiting toxicity. The drug's population pharmacokinetics and myelosuppression dynamics are simulated using published parameter estimates for docetaxel. The amenability of this model to linearization is explored empirically. The properties of a simple DTA targeting a neutrophil nadir of 500 cells/mm^3 using a Newton-Raphson heuristic are explored through simulation in 25 simulated study subjects. Results. Individual-level myelosuppression dynamics in the simulation model approximately linearize under simple transformations of neutrophil concentration and drug dose. The simulated dose titration exhibits largely satisfactory convergence, with great variance in individualized optimal dosing. Some titration courses exhibit overshooting. Conclusions. The large inter-individual variability in simulated optimal dosing underscores the need to replace 'the' MTD with an individualized concept of MTD_i. To illustrate this principle, the simplest possible DTA capable of realizing such a concept is demonstrated. Qualitative phenomena observed in this demonstration support discussion of the notion of tuning such algorithms. Although here illustrated specifically in relation to cytotoxic chemotherapy, the DTAT principle appears similarly applicable to Phase I studies of cancer immunotherapy and molecularly targeted agents.
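The DTA idea can be sketched in a few lines. The toy below assumes a hypothetical subject whose nadir is exactly log-linear in log-dose (illustrative coefficients, not the published docetaxel PK/PD estimates) and titrates toward the 500 cells/mm^3 target with secant (quasi-Newton) steps:

```python
import math

TARGET = 500.0   # target neutrophil nadir, cells/mm^3

def nadir(dose, a, b):
    """Hidden per-subject response: log(nadir) = a - b*log(dose). Hypothetical."""
    return math.exp(a - b * math.log(dose))

def titrate(a, b, d0=50.0, d1=75.0, n_cycles=6):
    """Secant (quasi-Newton) steps on log(dose) toward log(TARGET)."""
    doses = [d0, d1]
    for _ in range(n_cycles):
        x0, x1 = math.log(doses[-2]), math.log(doses[-1])
        f0 = math.log(nadir(doses[-2], a, b) / TARGET)
        f1 = math.log(nadir(doses[-1], a, b) / TARGET)
        if abs(f1 - f0) < 1e-12:          # converged; avoid a zero denominator
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        doses.append(math.exp(x2))
    return doses

course = titrate(a=9.0, b=0.5)            # one hypothetical subject
print(round(course[-1], 1), round(nadir(course[-1], 9.0, 0.5), 1))
```

Because each subject gets its own (a, b), each converges to its own MTD_i, which is the paper's central point.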

  17. Unveiling Galaxy Bias via the Halo Model, KiDS and GAMA

    NASA Astrophysics Data System (ADS)

    Dvornik, Andrej; Hoekstra, Henk; Kuijken, Konrad; Schneider, Peter; Amon, Alexandra; Nakajima, Reiko; Viola, Massimo; Choi, Ami; Erben, Thomas; Farrow, Daniel J.; Heymans, Catherine; Hildebrandt, Hendrik; Sifón, Cristóbal; Wang, Lingyu

    2018-06-01

    We measure the projected galaxy clustering and galaxy-galaxy lensing signals using the Galaxy And Mass Assembly (GAMA) survey and Kilo-Degree Survey (KiDS) to study galaxy bias. We use the concept of non-linear and stochastic galaxy biasing in the framework of halo occupation statistics to constrain the parameters of the halo occupation statistics and to unveil the origin of galaxy biasing. The bias function Γgm(rp), where rp is the projected comoving separation, is evaluated using the analytical halo model from which the scale dependence of Γgm(rp), and the origin of the non-linearity and stochasticity in halo occupation models can be inferred. Our observations unveil the physical reason for the non-linearity and stochasticity, further explored using hydrodynamical simulations, with the stochasticity mostly originating from the non-Poissonian behaviour of satellite galaxies in the dark matter haloes and their spatial distribution, which does not follow the spatial distribution of dark matter in the halo. The observed non-linearity is mostly due to the presence of the central galaxies, as was noted from previous theoretical work on the same topic. We also see that overall, more massive galaxies reveal a stronger scale dependence, and out to a larger radius. Our results show that a wealth of information about galaxy bias is hidden in halo occupation models. These models should therefore be used to determine the influence of galaxy bias in cosmological studies.

  18. A seal test facility for the measurement of isotropic and anisotropic linear rotordynamic characteristics

    NASA Technical Reports Server (NTRS)

    Adams, M. L.; Yang, T.; Pace, S. E.

    1989-01-01

    A new seal test facility for measuring high-pressure seal rotor-dynamic characteristics has recently been made operational at Case Western Reserve University (CWRU). This work is being sponsored by the Electric Power Research Institute (EPRI). The fundamental concept embodied in this test apparatus is a double-spool-shaft spindle which permits independent control over the spin speed and the frequency of an adjustable circular vibration orbit for both forward and backward whirl. Also, the static eccentricity between the rotating and non-rotating test seal parts is easily adjustable to desired values. By accurately measuring both dynamic radial displacement and dynamic radial force signals, over a wide range of circular orbit frequency, one is able to solve for the full linear-anisotropic model's 12 coefficients rather than the 6 coefficients of the more restrictive isotropic linear model. Of course, one may also impose the isotropic assumption in reducing test data, thereby providing a valid qualification of which seal configurations are well represented by the isotropic model and which are not. In fact, as argued in reference (1), the requirement for maintaining a symmetric total system mass matrix means that the resulting isotropic model needs 5 coefficients and the anisotropic model needs 11 coefficients.

  19. Predicting Urban Elementary Student Success and Passage on Ohio's High-Stakes Achievement Measures Using DIBELS Oral Reading Fluency and Informal Math Concepts and Applications: An Exploratory Study Employing Hierarchical Linear Modeling

    ERIC Educational Resources Information Center

    Merkle, Erich Robert

    2011-01-01

    Contemporary education is experiencing substantial reform across legislative, pedagogical, and assessment dimensions. The increase in school-based accountability systems has brought forth a culture where states, school districts, teachers, and individual students are required to demonstrate their efficacy towards improvement of the educational…

  20. Mass Fire Model Concept

    DTIC Science & Technology

    1981-05-31

    posium (International) on Combustion, Combustion Institute, p. 965, 1965. 14. Gostintsev, Yu.A. and L.A. Sukhanov, "Convective Column Above a Linear Fire in Homogeneous Isothermal Atmosphere," Combustion, Explosion, and Shock Waves, 13, p. 570, 1977. 15. Gostintsev, Yu.A., and L.A. Sukhanov ... Sukhanov, "Convective Column Above a Linear Fire in a Polytropic Atmosphere," Combustion, Explosion, and Shock Waves, 14, p. 271, 1978. 17

  1. Free (Reactionless) Torque Generation—Or Free Propulsion Concept

    NASA Astrophysics Data System (ADS)

    Djordjev, Bojidar

    2010-01-01

    The basic principle in Newtonian Mechanics is based upon equal and opposite forces. Placing the vectors of velocity, acceleration, force and momentum of interacting objects along a single line satisfies the claim that it is a linear or a 1-D concept. Classical Mechanics states that there are two main kinds of motion, linear and angular motion. Similarly placing the vectors of angular velocity, angular acceleration, torque and angular momentum along a line in the case of rotation in fact brings a plane 2-D interaction to the well known 1-D Newtonian concept. This adaptation transforms Classical Mechanics into a 1-D concept as well and presents a confirmation that the linear concept is the only possible one. The Laws of Conservation of Momentum and Angular Momentum are results of the 1-D concept. But the world contains 3 geometrical spatial dimensions. Within the 3-D world there can exist 1-D, 2-D and 3-D kinds of interaction. The question is how to believe that the 3-D world can really be composed of a 1-D interaction or interactions made equal to the 1-D concept only? Examine a gyroscope—the only mechanical device that is capable of performing 3-D behavior. The problem is that a gyroscope cannot perform three permanent and unidirectional torques that are fixed in space acting about perpendicular axes. This impossibility conforms to a 1-D concept. The idea is to find a solution that can be achieved for the 3-D concept.

  2. Improving Students’ Science Process Skills through Simple Computer Simulations on Linear Motion Conceptions

    NASA Astrophysics Data System (ADS)

    Siahaan, P.; Suryani, A.; Kaniawati, I.; Suhendi, E.; Samsudin, A.

    2017-02-01

    The purpose of this research is to identify the development of students' science process skills (SPS) on the linear motion concept by utilizing simple computer simulations. In order to simplify the learning process, the concept can be divided into three sub-concepts: 1) the definition of motion, 2) uniform linear motion and 3) uniformly accelerated motion. This research was administered via a pre-experimental method with a one-group pretest-posttest design. The respondents involved in this research were 23 seventh-grade students in one of the junior high schools in Bandung City. The improvement of students' science process skills is examined based on normalized gain analysis of pretest and posttest scores for all sub-concepts. The results of this research show that students' science process skills improved by 47% (moderate) on observation skill, 43% (moderate) on summarizing skill, 70% (high) on prediction skill, 44% (moderate) on communication skill and 49% (moderate) on classification skill. These results indicate that utilizing simple computer simulations in physics learning can improve overall science process skills at a moderate level.
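The per-skill percentages are normalized (Hake) gains. A minimal sketch of the computation, with illustrative scores rather than the study's raw data:

```python
# Normalized (Hake) gain g = (post - pre) / (max_score - pre); conventional
# thresholds: g >= 0.7 "high", 0.3 <= g < 0.7 "moderate". Scores below are
# illustrative, not the study's data.

def normalized_gain(pre, post, max_score=100.0):
    return (post - pre) / (max_score - pre)

def gain_level(g):
    if g >= 0.7:
        return "high"
    if g >= 0.3:
        return "moderate"
    return "low"

g = normalized_gain(pre=40.0, post=82.0)   # hypothetical prediction-skill scores
print(g, gain_level(g))
```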

  3. SAM Theory Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Rui

    The System Analysis Module (SAM) is an advanced and modern system analysis tool being developed at Argonne National Laboratory under the U.S. DOE Office of Nuclear Energy’s Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. SAM development aims for advances in physical modeling, numerical methods, and software engineering to enhance its user experience and usability for reactor transient analyses. To facilitate the code development, SAM utilizes an object-oriented application framework (MOOSE), and its underlying meshing and finite-element library (libMesh) and linear and non-linear solvers (PETSc), to leverage modern advanced software environments and numerical methods. SAM focuses on modeling advanced reactor concepts such as SFRs (sodium fast reactors), LFRs (lead-cooled fast reactors), and FHRs (fluoride-salt-cooled high temperature reactors) or MSRs (molten salt reactors). These advanced concepts are distinguished from light-water reactors in their use of single-phase, low-pressure, high-temperature, and low Prandtl number (sodium and lead) coolants. As a new code development, the initial effort has been focused on modeling and simulation capabilities of heat transfer and single-phase fluid dynamics responses in Sodium-cooled Fast Reactor (SFR) systems. The system-level simulation capabilities of fluid flow and heat transfer in general engineering systems and typical SFRs have been verified and validated. This document provides the theoretical and technical basis of the code to help users understand the underlying physical models (such as governing equations, closure models, and component models), system modeling approaches, numerical discretization and solution methods, and the overall capabilities in SAM. As the code is still under ongoing development, this SAM Theory Manual will be updated periodically to keep it consistent with the state of the development.

  4. The conditional resampling model STARS: weaknesses of the modeling concept and development

    NASA Astrophysics Data System (ADS)

    Menz, Christoph

    2016-04-01

    The Statistical Analogue Resampling Scheme (STARS) is based on a modeling concept of Werner and Gerstengarbe (1997). The model uses a conditional resampling technique to create a simulation time series from daily observations. Unlike other time series generators (such as stochastic weather generators) STARS only needs a linear regression specification of a single variable as the target condition for the resampling. Since its first implementation the algorithm was further extended in order to allow for a spatially distributed trend signal, to preserve the seasonal cycle and the autocorrelation of the observation time series (Orlovsky, 2007; Orlovsky et al., 2008). This evolved version was successfully used in several climate impact studies. However, a detailed evaluation of the simulations revealed two fundamental weaknesses of the utilized resampling technique. 1. The restriction of the resampling condition to a single variable can lead to a misinterpretation of the change signal of other variables when the model is applied to a multivariate time series (F. Wechsung and M. Wechsung, 2014). As one example, the short-term correlations between precipitation and temperature (cooling of the near-surface air layer after a rainfall event) can be misinterpreted as a climatic change signal in the simulation series. 2. The model restricts the linear regression specification to the annual mean time series, precluding the specification of seasonally varying trends. To overcome these fundamental weaknesses, a redevelopment of the whole algorithm was undertaken. The poster discusses the main weaknesses of the earlier model implementation and the methods applied to overcome these in the new version. Based on the new model, idealized simulations were conducted to illustrate the enhancement.
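A minimal sketch of the conditional-resampling idea (synthetic data; not the full Werner and Gerstengarbe algorithm): build a simulation series by drawing whole observed years whose annual mean lies closest to a prescribed linear trend:

```python
import random

# Conditional resampling in the spirit of STARS: the simulation series is
# assembled from resampled observed years, conditioned on a linear trend in
# the annual means. Data are synthetic; the real algorithm also preserves
# seasonality and autocorrelation.
random.seed(1)

obs_years = {y: [10.0 + random.gauss(0.0, 3.0) for _ in range(365)]
             for y in range(1961, 1991)}
obs_means = {y: sum(v) / len(v) for y, v in obs_years.items()}

def simulate(trend_per_year, n_years):
    """Resample observed years to track target means m_t = base + trend*t."""
    base = sum(obs_means.values()) / len(obs_means)
    series = []
    for t in range(n_years):
        target = base + trend_per_year * t
        best = min(obs_means, key=lambda y: abs(obs_means[y] - target))
        series.append(obs_years[best])     # copy a whole observed year
    return series

sim = simulate(trend_per_year=0.05, n_years=30)
print(len(sim), len(sim[0]))               # 30 simulated years of daily values
```

The first weakness noted in the abstract is visible even in this sketch: the condition acts on one variable only, so any covariate carried along with the resampled days inherits structure that was never part of the prescribed trend.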

  5. LMI Based Robust Blood Glucose Regulation in Type-1 Diabetes Patient with Daily Multi-meal Ingestion

    NASA Astrophysics Data System (ADS)

    Mandal, S.; Bhattacharjee, A.; Sutradhar, A.

    2014-04-01

    This paper illustrates the design of a robust output-feedback H∞ controller for the nonlinear glucose-insulin (GI) process in a type-1 diabetes patient to deliver insulin through an intravenous infusion device. The H∞ design specifications have been realized using the concept of linear matrix inequalities (LMIs), and the LMI approach has been used to quadratically stabilize the GI process via output-feedback H∞ control. The controller has been designed on the basis of a full 19th-order linearized state-space model generated from the modified Sorensen nonlinear model of the GI process. The resulting controller has been tested with the nonlinear patient model (the modified Sorensen model) in the presence of patient parameter variations and other uncertainty conditions. The performance of the controller was assessed in terms of its ability to track the normoglycemic set point of 81 mg/dl under a typical multi-meal disturbance throughout a day, and it yields robust performance and noise rejection.

  6. Copula-based model for rainfall and El-Niño in Banyuwangi, Indonesia

    NASA Astrophysics Data System (ADS)

    Caraka, R. E.; Supari; Tahmid, M.

    2018-04-01

    Modelling, describing and measuring the dependence structure between different random events is at the very heart of statistics, and a broad variety of dependence concepts has been developed in the past. Most often, practitioners rely only on the linear correlation to describe the degree of dependence between two or more variables, an approach that can lead to quite misleading conclusions, as this measure is only capable of capturing linear relationships. Copulas go beyond simple dependence measures and provide a sound framework for general dependence modelling. This paper introduces an application of copulas to estimate, understand, and interpret the dependence structure in a given set of El-Niño data for Banyuwangi, Indonesia. In a nutshell, we demonstrate the flexibility of Archimedean copulas in rainfall modelling and in capturing El Niño phenomena in Banyuwangi, East Java, Indonesia. It was also found that the SSTs of the nino3, nino4, and nino3.4 regions are the most appropriate ENSO indicators for identifying the relationship between El Niño and rainfall.
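A small sketch of why copulas capture dependence that linear correlation alone can miss: sampling from a Clayton copula (an Archimedean family; the parameter theta below is illustrative, not fitted to the Banyuwangi data) and checking that the empirical Kendall's tau approaches the theoretical theta/(theta + 2):

```python
import random

# Clayton copula sampling via the conditional-inverse method: given U = u
# and an independent uniform w, V = ((w^(-theta/(1+theta)) - 1) u^(-theta)
# + 1)^(-1/theta). For Clayton, Kendall's tau = theta / (theta + 2).
random.seed(7)

def clayton_sample(theta, n):
    out = []
    for _ in range(n):
        u = 1.0 - random.random()          # in (0, 1], avoids u == 0
        w = 1.0 - random.random()
        v = ((w ** (-theta / (1.0 + theta)) - 1.0) * u ** (-theta) + 1.0) ** (-1.0 / theta)
        out.append((u, v))
    return out

def kendall_tau(pairs):
    c, n = 0, len(pairs)
    for i in range(n):
        for j in range(i + 1, n):
            s = (pairs[i][0] - pairs[j][0]) * (pairs[i][1] - pairs[j][1])
            c += 1 if s > 0 else -1
    return 2.0 * c / (n * (n - 1))

theta = 2.0
tau = kendall_tau(clayton_sample(theta, 400))
print(round(tau, 2), "vs theoretical", theta / (theta + 2.0))
```

Unlike Pearson correlation, tau is invariant under monotone transforms of the margins, which is what lets the copula separate the dependence structure from the rainfall and SST marginal distributions.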

  7. Examining the influence of link function misspecification in conventional regression models for developing crash modification factors.

    PubMed

    Wu, Lingtao; Lord, Dominique

    2017-05-01

    This study further examined the use of regression models for developing crash modification factors (CMFs), specifically focusing on misspecification of the link function. The primary objectives were to validate the accuracy of CMFs derived from the commonly used regression models (i.e., generalized linear models or GLMs with additive linear link functions) when some of the variables have nonlinear relationships, and to quantify the amount of bias as a function of the nonlinearity. Using the concept of artificial realistic data, various linear and nonlinear crash modification functions (CM-Functions) were assumed for three variables. Crash counts were randomly generated based on these CM-Functions. CMFs were then derived from regression models for three different scenarios. The results were compared with the assumed true values. The main findings are summarized as follows: (1) when some variables have nonlinear relationships with crash risk, the CMFs for these variables derived from the commonly used GLMs are all biased, especially in areas away from the baseline conditions (e.g., boundary areas); (2) with increasing nonlinearity (i.e., as the nonlinear relationship becomes stronger), the bias becomes more significant; (3) the quality of CMFs for other variables having linear relationships can be influenced when mixed with those having nonlinear relationships, but the accuracy may still be acceptable; and (4) the misuse of the link function for one or more variables can also lead to biased estimates for other parameters. This study highlights the importance of the link function when using regression models for developing CMFs. Copyright © 2017 Elsevier Ltd. All rights reserved.
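The bias mechanism can be sketched numerically. The toy below (illustrative coefficients, noise-free data for clarity, and ordinary least squares standing in for the GLM fit) assumes a true CM-Function that is quadratic on the log scale and fits the misspecified linear log-link; the derived CMF is badly biased at intermediate values of the variable:

```python
import math

# True relationship: log(mu) quadratic in x; misspecified fit: log(mu)
# linear in x. The CMF relative to baseline x = 0 is exp(log_mu(x) -
# log_mu(0)). All coefficients are hypothetical.

def true_log_mu(x):
    return 1.0 - 0.10 * x + 0.004 * x * x   # nonlinear CM-Function (log scale)

xs = list(range(31))                         # variable range, baseline x = 0
ys = [true_log_mu(x) for x in xs]

# ordinary least squares for the misspecified linear log-link
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar
fitted = lambda x: intercept + slope * x

def cmf(x, log_mu):
    """Crash modification factor relative to the baseline x = 0."""
    return math.exp(log_mu(x) - log_mu(0))

for x in (5, 15, 25):
    print(x, round(cmf(x, true_log_mu), 3), round(cmf(x, fitted), 3))
```

At x = 15 the true CMF is well below 1 while the misspecified fit reports one well above 1: the linear link cannot even get the direction of the safety effect right where the curvature matters.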

  8. Assessment of Advanced Logistics Delivery System (ALDS) Launch Systems Concepts

    DTIC Science & Technology

    2004-10-01

    highest force vs. rotor weight required, allows much higher magnetic field generation than the linear induction or linear permanent magnet motors, and...provides the highest force vs. rotor weight required, allows much higher magnetic generation than the linear induction or linear permanent magnet motors, and

  9. Health monitoring system for transmission shafts based on adaptive parameter identification

    NASA Astrophysics Data System (ADS)

    Souflas, I.; Pezouvanis, A.; Ebrahimi, K. M.

    2018-05-01

    A health monitoring system for a transmission shaft is proposed. The solution is based on the real-time identification of the physical characteristics of the transmission shaft, i.e. the stiffness and damping coefficients, by using a physically oriented model and linear recursive identification. The efficacy of the suggested condition monitoring system is demonstrated on a prototype transient engine testing facility equipped with a transmission shaft capable of varying its physical properties. Simulation studies reveal that coupling shaft faults can be detected and isolated using the proposed condition monitoring system. In addition, the performance of various recursive identification algorithms is addressed. The results of this work suggest that the health status of engine dynamometer shafts can be monitored using a simple lumped-parameter shaft model and a linear recursive identification algorithm, which makes the concept practically viable.
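A minimal recursive-least-squares sketch of the identification idea (synthetic signals; the paper's physically oriented shaft model is richer): model the shaft torque as T = k·dtheta + c·domega and update the stiffness and damping estimates sample by sample:

```python
import math
import random

# Standard 2-parameter RLS: w holds the [k, c] estimates, P the parameter
# covariance. The "measured" torque is generated from hypothetical true
# values plus noise; a fault would appear as a drift in the estimates.
random.seed(3)
k_true, c_true = 1200.0, 8.0               # hypothetical stiffness / damping

w = [0.0, 0.0]                             # [k, c] estimates
P = [[1e6, 0.0], [0.0, 1e6]]               # large initial covariance

for t in range(500):
    dtheta = math.sin(0.5 * t)             # twist angle across the shaft
    domega = 0.5 * math.cos(0.5 * t)       # twist rate (its derivative)
    y = k_true * dtheta + c_true * domega + random.gauss(0.0, 0.5)
    phi = [dtheta, domega]
    Pphi = [P[0][0]*phi[0] + P[0][1]*phi[1], P[1][0]*phi[0] + P[1][1]*phi[1]]
    denom = 1.0 + phi[0]*Pphi[0] + phi[1]*Pphi[1]
    gain = [Pphi[0] / denom, Pphi[1] / denom]
    err = y - (w[0]*phi[0] + w[1]*phi[1])
    w = [w[0] + gain[0]*err, w[1] + gain[1]*err]
    P = [[P[0][0] - gain[0]*Pphi[0], P[0][1] - gain[0]*Pphi[1]],
         [P[1][0] - gain[1]*Pphi[0], P[1][1] - gain[1]*Pphi[1]]]

print(round(w[0], 1), round(w[1], 2))      # close to k_true and c_true
```

In a monitoring context one would add a forgetting factor so the estimates can track a slowly degrading shaft rather than average over its whole history.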

  10. Analysis Balance Parameter of Optimal Ramp metering

    NASA Astrophysics Data System (ADS)

    Li, Y.; Duan, N.; Yang, X.

    2018-05-01

    Ramp metering is a motorway control method that avoids the onset of congestion by limiting the access of ramp inflows onto the main carriageway of the motorway. The optimization model for ramp metering is developed based upon the cell transmission model (CTM). With the piecewise-linear structure of the CTM, the corresponding motorway traffic optimization problem can be formulated as a linear programming (LP) problem, which can be solved by established algorithms such as the simplex or interior-point methods for the global optimal solution. The commercial software CPLEX is adopted in this study to solve the LP problem within reasonable computational time. The concept is illustrated through a case study of the United Kingdom's M25 motorway. The optimal solution provides useful insights and guidance on how to manage motorway traffic in order to maximize the corresponding efficiency.
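A one-cell sketch of the CTM flow rule whose piecewise-linear min() terms are what make the metering problem an LP (illustrative parameters, not M25 calibration; a full formulation would optimize the metering rate with an LP solver):

```python
# Single-cell cell-transmission-model update: the inflow is the
# piecewise-linear receiving function min(demand, capacity, w*(jam - rho)),
# and the outflow is capped by a downstream bottleneck. All numbers
# (vehicles per step, vehicles per cell) are illustrative.

def step(density, mainline_demand, ramp_rate,
         cap=40.0, jam=180.0, w=0.5, out_cap=30.0):
    inflow = min(mainline_demand + ramp_rate, cap, w * (jam - density))
    outflow = min(density, out_cap)        # bottleneck discharge downstream
    return density + inflow - outflow

def run(ramp_rate, steps=60):
    density = 30.0                         # vehicles currently in the cell
    for _ in range(steps):
        density = step(density, mainline_demand=25.0, ramp_rate=ramp_rate)
    return density

print(round(run(ramp_rate=20.0), 1))  # unmetered: density climbs toward jam
print(round(run(ramp_rate=5.0), 1))   # metered: inflow matches discharge
```

Because every term in the dynamics is a minimum of affine functions, the optimal-metering problem over a horizon can be rewritten with auxiliary variables and linear inequalities, which is exactly the LP the abstract solves with CPLEX.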

  11. A technique for measuring vertically and horizontally polarized microwave brightness temperatures using electronic polarization-basis rotation

    NASA Technical Reports Server (NTRS)

    Gasiewski, Albin J.

    1992-01-01

    This technique for electronically rotating the polarization basis of an orthogonal-linear polarization radiometer is based on the measurement of the first three feedhorn Stokes parameters, along with the subsequent transformation of this measured Stokes vector into a rotated coordinate frame. The technique requires an accurate measurement of the cross-correlation between the two orthogonal feedhorn modes, for which an innovative polarized calibration load was developed. The experimental portion of this investigation consisted of a proof of concept demonstration of the technique of electronic polarization basis rotation (EPBR) using a ground based 90-GHz dual orthogonal-linear polarization radiometer. Practical calibration algorithms for ground-, aircraft-, and space-based instruments were identified and tested. The theoretical effort consisted of radiative transfer modeling using the planar-stratified numerical model described in Gasiewski and Staelin (1990).
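The basis-rotation step can be sketched directly from the standard Stokes rotation (the sign convention for the rotation direction is an assumption here): with the first three modified Stokes brightness temperatures Tv, Th, U measured in the feedhorn frame, the rotated-frame values follow in closed form:

```python
import math

# Rotate the polarization basis by phi: with I = Tv + Th and Q = Tv - Th,
# the classical Stokes rotation gives Q' = Q cos(2phi) + U sin(2phi) and
# U' = -Q sin(2phi) + U cos(2phi); Tv', Th' are recovered from I and Q'.
# Example brightness temperatures are illustrative.

def rotate_basis(Tv, Th, U, phi):
    c2, s2 = math.cos(2.0 * phi), math.sin(2.0 * phi)
    I, Q = Tv + Th, Tv - Th
    Qr = Q * c2 + U * s2
    Ur = -Q * s2 + U * c2
    return (I + Qr) / 2.0, (I - Qr) / 2.0, Ur

Tv, Th, U = 250.0, 220.0, 10.0
print(rotate_basis(Tv, Th, U, math.radians(45.0)))
```

Note that the total intensity Tv + Th is invariant under the rotation, and that recovering Tv' and Th' at an arbitrary phi genuinely requires the cross-correlation channel U, which is why the technique hinges on the polarized calibration load.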

  12. Radiation signatures in childhood thyroid cancers after the Chernobyl accident: possible roles of radiation in carcinogenesis.

    PubMed

    Suzuki, Keiji; Mitsutake, Norisato; Saenko, Vladimir; Yamashita, Shunichi

    2015-02-01

    After the Tokyo Electric Power Company Fukushima Daiichi nuclear power plant accident, cancer risk from low-dose radiation exposure has been of deep concern. The linear no-threshold model is applied for the purpose of radiation protection, but it is a model based on the concept that ionizing radiation induces stochastic oncogenic alterations in the target cells. As elucidation of the mechanism of radiation-induced carcinogenesis is indispensable to justify this concept, we overview studies aimed at determining the molecular changes associated with thyroid cancers among children affected by the Chernobyl nuclear accident. We intend to discuss whether any radiation signatures are associated with radiation-induced childhood thyroid cancers. © 2014 The Authors. Cancer Science published by Wiley Publishing Asia Pty Ltd on behalf of Japanese Cancer Association.

  13. Design and analysis of low-loss linear analog phase modulator for deep space spacecraft X-band transponder (DST) application

    NASA Technical Reports Server (NTRS)

    Mysoor, Narayan R.; Mueller, Robert O.

    1991-01-01

    This paper summarizes the design concepts, analyses, and development of an X-band transponder low-loss linear phase modulator for deep space spacecraft applications. A single-section breadboard circulator-coupled reflection phase modulator has been analyzed, fabricated, and evaluated. Two- and three-cascaded sections have been modeled and simulations performed to provide an X-band DST phase modulator with +/- 2.5 radians of peak phase deviation to accommodate down-link signal modulation with composite telemetry data and ranging, with a deviation linearity tolerance of +/- 8 percent and an insertion loss of less than 10 +/- 0.5 dB. A two-section phase modulator using constant-gamma hyperabrupt varactors and an efficient modulator driver circuit was breadboarded. The measured results satisfy the DST phase modulator requirements and show excellent agreement with the predicted results.

  14. A geometric approach to failure detection and identification in linear systems

    NASA Technical Reports Server (NTRS)

    Massoumnia, M. A.

    1986-01-01

    Using the concepts of (C,A)-invariant and unobservability (complementary observability) subspaces, a geometric formulation of the failure detection and identification filter problem is stated. Using these geometric concepts, it is shown that it is possible to design a causal linear time-invariant processor that can be used to detect and uniquely identify a component failure in a linear time-invariant system under either of two assumptions: (1) the components can fail simultaneously, or (2) the components can fail only one at a time. In addition, a geometric formulation of Beard's failure detection filter problem is stated. This new formulation completely clarifies the concepts of output separability and mutual detectability introduced by Beard and also exploits the dual relationship between a restricted version of the failure detection and identification problem and the control decoupling problem. Moreover, the frequency-domain interpretation of the results is used to relate the concept of failure-sensitive observers to the generalized parity relations introduced by Chow. This interpretation unifies the various failure detection and identification concepts and design procedures.

  15. High-Lift Flight Tunnel - Phase II Report. Phase 2 Report

    NASA Technical Reports Server (NTRS)

    Lofftus, David; Lund, Thomas; Rote, Donald; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    The High-Lift Flight Tunnel (HiLiFT) concept is a revolutionary approach to aerodynamic ground testing. This concept utilizes magnetic levitation and linear motors to propel an aerodynamic model through a tube containing a quiescent test medium. This medium (nitrogen) is cryogenic and pressurized to achieve full flight Reynolds numbers higher than those of any existing ground test facility world-wide for the range of 0.05 to 0.50 Mach. The results of the Phase II study provide excellent assurance that the HiLiFT concept will provide a valuable low-speed, high Reynolds number ground test facility. The design studies concluded that the HiLiFT facility is feasible to build and operate, and the analytical studies revealed no insurmountable difficulties to realizing a practical high Reynolds number ground test facility. It was determined that a national HiLiFT facility, including development, would cost approximately $400M and could be operational by 2013 if fully funded. Study participants included National Aeronautics and Space Administration Langley Research Center as the Program Manager and MSE Technology Applications, Inc., (MSE) of Butte, Montana as the prime contractor and study integrator. MSE's subcontractors included the University of Texas at Arlington for aerodynamic analyses and the Argonne National Laboratory for magnetic levitation and linear motor technology support.

  16. Investigating Students' Modes of Thinking in Linear Algebra: The Case of Linear Independence

    ERIC Educational Resources Information Center

    Çelik, Derya

    2015-01-01

    Linear algebra is one of the most challenging topics to learn and teach in many countries. To facilitate the teaching and learning of linear algebra, priority should be given to epistemologically analyze the concepts that the undergraduate students have difficulty in conceptualizing and to define their ways of reasoning in linear algebra. After…

  17. Power accounting of plasma discharges in the linear device Proto-MPEX

    NASA Astrophysics Data System (ADS)

    Showers, M.; Piotrowicz, P. A.; Beers, C. J.; Biewer, T. M.; Caneses, J.; Canik, J.; Caughman, J. B. O.; Donovan, D. C.; Goulding, R. H.; Lumsdaine, A.; Kafle, N.; Owen, L. W.; Rapp, J.; Ray, H.

    2018-06-01

    Plasma material interaction (PMI) studies are crucial to the successful development of future fusion reactors. The Prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a prototype design for MPEX, a steady-state linear device being developed to study PMI. The primary purpose of Proto-MPEX is developing the plasma heating source concepts for MPEX. A power accounting study of Proto-MPEX aims to identify machine operating parameters that could improve its performance, thereby increasing its PMI research capabilities and potentially impacting the MPEX design concept. To build a comprehensive power balance, an analysis of the helicon region has been performed, implementing a diagnostic suite and software modeling to identify mechanisms and locations of heat loss from the main plasma. Of the 106.3 kW of input power, up to 90.5% has been accounted for in the helicon region. When the analysis was extended to encompass the device out to its end plates, 49.2% of the input power was accounted for and verified diagnostically. Areas requiring further diagnostic analysis are identified, and the required improvements will be implemented in future work. The data acquisition and analysis processes will be streamlined to form a working model for future power balance studies of Proto-MPEX.

  18. Dynamic modelling and simulation of CSP plant based on supercritical carbon dioxide closed Brayton cycle

    NASA Astrophysics Data System (ADS)

    Hakkarainen, Elina; Sihvonen, Teemu; Lappalainen, Jari

    2017-06-01

    Supercritical carbon dioxide (sCO2) has recently gained a lot of interest as a working fluid in different power generation applications. For concentrated solar power (CSP) applications, sCO2 provides an especially interesting option if it can be used both as the heat transfer fluid (HTF) in the solar field and as the working fluid in the power conversion unit. This work presents the development of a dynamic model of a CSP plant concept in which sCO2 is used to extract the solar heat in a Linear Fresnel collector field and is directly applied as the working fluid in the recuperative Brayton cycle, both in a single flow loop. We consider the dynamic model capable of predicting the system behavior in typical operational transients in a physically plausible way. The novel concept was tested through simulation cases under different weather conditions. The results suggest that the concept can be successfully controlled and operated in the supercritical region to generate electric power during the daytime, and can perform start-up and shut-down procedures in order to stay overnight in sub-critical conditions. Besides normal daily operation, the control system was demonstrated to manage disturbances due to sudden irradiance changes.

  19. Hybrid-Wing-Body Vehicle Composite Fuselage Analysis and Case Study

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    2014-01-01

    Recent progress in the structural analysis of a Hybrid Wing-Body (HWB) fuselage concept is presented with the objective of structural weight reduction under a set of critical design loads. This pressurized efficient HWB fuselage design is presently being investigated by the NASA Environmentally Responsible Aviation (ERA) project in collaboration with the Boeing Company, Huntington Beach. The Pultruded Rod-Stiffened Efficient Unitized Structure (PRSEUS) composite concept, developed at the Boeing Company, is approximately modeled for an analytical study and finite element analysis. Stiffened-plate linear theories are employed for a parametric case study. Maximum deflection and stress levels are obtained with appropriate assumptions for a set of feasible stiffened panel configurations. An analytical parametric case study is presented to examine the effects of discrete stiffener spacing and skin thickness on structural weight, deflection, and stress. A finite-element model (FEM) of an integrated fuselage section with bulkhead is developed for an independent assessment. Stress analysis and scenario-based case studies are conducted for design improvement. The specific weight of the improved fuselage concept FEM is computed and compared with previous studies in order to assess the relative weight/strength advantages of this advanced composite airframe technology.

  20. Uncertainty of relative sensitivity factors in glow discharge mass spectrometry

    NASA Astrophysics Data System (ADS)

    Meija, Juris; Methven, Brad; Sturgeon, Ralph E.

    2017-10-01

    The concept of the relative sensitivity factors required for the correction of the measured ion beam ratios in pin-cell glow discharge mass spectrometry is examined in detail. We propose a data-driven model for predicting the relative response factors, which relies on a non-linear least squares adjustment and analyte/matrix interchangeability phenomena. The model provides a self-consistent set of response factors for any analyte/matrix combination of any element that appears as either an analyte or matrix in at least one known response factor.
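
    The data-driven idea — assign each element a single response factor and adjust those factors so that every known analyte/matrix RSF is reproduced as a ratio — can be sketched numerically. All element names and factor values below are hypothetical, and the ratio model is an illustrative guess rather than the paper's actual formulation:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical per-element response factors, used only to simulate "known" RSFs
true_r = {"Fe": 1.0, "Cu": 1.3, "Ni": 0.8, "Zn": 2.1}
elems = list(true_r)

# Known analyte/matrix pairs, with RSF(a/m) modeled as r_a / r_m
pairs = [("Cu", "Fe"), ("Ni", "Fe"), ("Zn", "Cu"), ("Ni", "Cu")]
rsf_obs = np.array([true_r[a] / true_r[m] for a, m in pairs])

def residuals(log_r):
    r = dict(zip(elems, np.exp(log_r)))
    pred = np.array([r[a] / r[m] for a, m in pairs])
    return np.log(pred) - np.log(rsf_obs)        # fit on a log scale

fit = least_squares(residuals, x0=np.zeros(len(elems)))
r_fit = dict(zip(elems, np.exp(fit.x)))

# The self-consistent factors now predict an RSF for a pair never measured
rsf_zn_ni = r_fit["Zn"] / r_fit["Ni"]            # should approach 2.1 / 0.8
```

    Because only ratios enter the model, the factors are determined up to a common scale; the predicted ratios are nonetheless unique whenever the graph of known pairs is connected.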

  1. Linear, multivariable robust control with a mu perspective

    NASA Technical Reports Server (NTRS)

    Packard, Andy; Doyle, John; Balas, Gary

    1993-01-01

    The structured singular value is a linear algebra tool developed to study a particular class of matrix perturbation problems arising in robust feedback control of multivariable systems. These perturbations are called linear fractional, and are a natural way to model many types of uncertainty in linear systems, including state-space parameter uncertainty, multiplicative and additive unmodeled dynamics uncertainty, and coprime factor and gap metric uncertainty. The structured singular value theory provides a natural extension of classical SISO robustness measures and concepts to MIMO systems. The structured singular value analysis, coupled with approximate synthesis methods, make it possible to study the tradeoff between performance and uncertainty that occurs in all feedback systems. In MIMO systems, the complexity of the spatial interactions in the loop gains make it difficult to heuristically quantify the tradeoffs that must occur. This paper examines the role played by the structured singular value (and its computable bounds) in answering these questions, as well as its role in the general robust, multivariable control analysis and design problem.
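
    A tiny numerical illustration of the bound structure (the matrix is invented, and the crude grid search merely stands in for a proper D-scale optimization): for a single full complex uncertainty block, mu equals the largest singular value, while for diagonally structured uncertainty it is upper-bounded by a scaled singular value.

```python
import numpy as np

M = np.array([[0.0, 10.0],
              [0.1, 0.0]])

# Full complex block: mu(M) is just the largest singular value
sigma_max = np.linalg.svd(M, compute_uv=False)[0]        # 10.0

# Diagonal structure: mu(M) <= min over D = diag(d, 1) of sigma_max(D M D^-1)
best = min(
    np.linalg.svd(np.diag([d, 1.0]) @ M @ np.diag([1.0 / d, 1.0]),
                  compute_uv=False)[0]
    for d in np.geomspace(1e-2, 1e2, 2001)
)
```

    Here the scaled bound drops from 10 to about 1, which matches the exact structured value sqrt(10 x 0.1) for this anti-diagonal example and shows how conservative the unstructured norm can be.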

  2. New numerical approximation of fractional derivative with non-local and non-singular kernel: Application to chaotic models

    NASA Astrophysics Data System (ADS)

    Toufik, Mekkaoui; Atangana, Abdon

    2017-10-01

    Recently, a new concept of fractional differentiation with a non-local and non-singular kernel was introduced in order to overcome the limitations of the conventional Riemann-Liouville and Caputo fractional derivatives. In this paper, a new numerical scheme has been developed for the newly established fractional differentiation. We present the error analysis in general. The new numerical scheme was applied to solve linear and non-linear fractional differential equations. In this method, no predictor-corrector is needed to obtain an efficient algorithm. The comparison of approximate and exact solutions leaves no doubt that the new numerical scheme is very efficient and converges toward the exact solution very rapidly.
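
    For contrast with the new kernel, the flavor of discretized fractional differentiation can be shown with the classical Grünwald-Letnikov scheme (this approximates the conventional Riemann-Liouville/Caputo derivative, not the non-singular-kernel operator the paper develops):

```python
import math

def gl_derivative(f, t, alpha, h=1e-3):
    """Grünwald-Letnikov approximation of the order-alpha derivative of f at t."""
    n = int(round(t / h))
    w, acc = 1.0, f(t)
    for k in range(1, n + 1):
        w *= 1.0 - (alpha + 1.0) / k     # recursive binomial weights (-1)^k C(alpha, k)
        acc += w * f(t - k * h)
    return acc / h**alpha

# Check against the exact half-derivative of f(t) = t:  t^{1/2} / Gamma(3/2)
approx = gl_derivative(lambda t: t, 1.0, 0.5)
exact = 1.0 / math.gamma(1.5)
```

    The scheme is only first-order accurate in h, which is one motivation for the higher-order constructions developed in papers such as this one.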

  3. Linear algebraic theory of partial coherence: discrete fields and measures of partial coherence.

    PubMed

    Ozaktas, Haldun M; Yüksel, Serdar; Kutay, M Alper

    2002-08-01

    A linear algebraic theory of partial coherence is presented that allows precise mathematical definitions of concepts such as coherence and incoherence. This not only provides new perspectives and insights but also allows us to employ the conceptual and algebraic tools of linear algebra in applications. We define several scalar measures of the degree of partial coherence of an optical field that are zero for full incoherence and unity for full coherence. The mathematical definitions are related to our physical understanding of the corresponding concepts by considering them in the context of Young's experiment.
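
    One way such a scalar measure can be constructed (this normalized purity of the mutual-intensity matrix is an illustrative choice in the same spirit, not necessarily one of the paper's definitions):

```python
import numpy as np

def degree_of_coherence(J):
    """Normalized purity of a mutual-intensity matrix J:
    0 for full incoherence (J proportional to I), 1 for full coherence (rank-1 J)."""
    N = J.shape[0]
    purity = np.trace(J @ J).real / np.trace(J).real ** 2
    return (N * purity - 1.0) / (N - 1.0)

v = np.array([1.0, 1.0j, 1.0])
coherent = np.outer(v, v.conj())      # rank-1 mutual intensity: fully coherent
incoherent = np.eye(3)                # equal uncorrelated modes: fully incoherent
```

    The measure depends only on the eigenvalue distribution of J, so it is invariant under unitary transformations of the field, as one would want from a coherence measure.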

  4. A flight-test methodology for identification of an aerodynamic model for a V/STOL aircraft

    NASA Technical Reports Server (NTRS)

    Bach, Ralph E., Jr.; Mcnally, B. David

    1988-01-01

    Described is a flight test methodology for developing a data base to be used to identify an aerodynamic model of a vertical and short takeoff and landing (V/STOL) fighter aircraft. The aircraft serves as a test bed at Ames for ongoing research in advanced V/STOL control and display concepts. The flight envelope to be modeled includes hover, transition to conventional flight and back to hover, STOL operation, and normal cruise. Although the aerodynamic model is highly nonlinear, it has been formulated to be linear in the parameters to be identified. Motivation for the flight test methodology advocated in this paper is based on the choice of a linear least-squares method for model identification. The paper covers elements of the methodology from maneuver design to the completed data base. Major emphasis is placed on the use of state estimation with tracking data to ensure consistency among maneuver variables prior to their entry into the data base. The design and processing of a typical maneuver is illustrated.
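
    Because the model is linear in the parameters, identification reduces to ordinary least squares over a regressor matrix built from the flight data. A minimal sketch with an invented lift-curve model (the coefficients and noise level are arbitrary, not the aircraft's):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = rng.uniform(-0.2, 0.3, 200)                  # angle-of-attack samples (rad)
theta_true = np.array([0.02, 5.5, -0.9])             # hypothetical model coefficients

# Regressors for CL = th0 + th1*alpha + th2*alpha^2, plus measurement noise
X = np.column_stack([np.ones_like(alpha), alpha, alpha**2])
CL = X @ theta_true + rng.normal(0.0, 1e-3, alpha.size)

theta_hat, *_ = np.linalg.lstsq(X, CL, rcond=None)   # identified parameters
```

    The quality of such estimates depends on how well the maneuvers excite the regressors, which is exactly why the paper emphasizes maneuver design and state-estimation consistency checks before data enter the data base.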

  5. Stationary Waves of the Ice Age Climate.

    NASA Astrophysics Data System (ADS)

    Cook, Kerry H.; Held, Isaac M.

    1988-08-01

    A linearized, steady state, primitive equation model is used to simulate the climatological zonal asymmetries (stationary eddies) in the wind and temperature fields of the 18 000 YBP climate during winter. We compare these results with the eddies simulated in the ice age experiments of Broccoli and Manabe, who used CLIMAP boundary conditions and reduced atmospheric CO2 in an atmospheric general circulation model (GCM) coupled with a static mixed layer ocean model. The agreement between the models is good, indicating that the linear model can be used to evaluate the relative influences of orography, diabatic heating, and transient eddy heat and momentum transports in generating stationary waves. We find that orographic forcing dominates in the ice age climate. The mechanical influence of the continental ice sheets on the atmosphere is responsible for most of the changes between the present day and ice age stationary eddies. This concept of the ice age climate is complicated by the sensitivity of the stationary eddies to the large increase in the magnitude of the zonal mean meridional temperature gradient simulated in the ice age GCM.

  6. A heat transfer model for a hot helium airship

    NASA Astrophysics Data System (ADS)

    Rapert, R. M.

    1987-06-01

    Basic heat transfer empirical and analytic equations are applied to a double envelope airship concept which uses heated Helium in the inner envelope to augment and control gross lift. The convective and conductive terms lead to a linear system of five equations for the concept airship, with the nonlinear radiation terms included by an iterative solution process. The graphed results from FORTRAN program solutions are presented for the variables of interest. These indicate that a simple use of airship engine exhaust heat gives more than a 30 percent increase in gross airship lift. Possibly more than 100 percent increase can be achieved if a 'stream injection' heating system, with associated design problems, is used.
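
    The solution pattern — treat the conductive and convective terms as a linear system and fold the nonlinear radiation terms in by iteration — can be sketched for a single node (all numbers are arbitrary placeholders, not the airship's):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temperature(Q=500.0, h=10.0, T_inf=300.0, eps=0.8, iters=200):
    """Energy balance h*(T - T_inf) + eps*SIGMA*(T^4 - T_inf^4) = Q, solved by
    evaluating the radiative term at the previous iterate (fixed-point iteration)."""
    T = T_inf
    for _ in range(iters):
        T = T_inf + (Q - eps * SIGMA * (T**4 - T_inf**4)) / h
    return T

T = equilibrium_temperature()
residual = 10.0 * (T - 300.0) + 0.8 * SIGMA * (T**4 - 300.0**4) - 500.0
```

    For the full airship model the same idea applies with a 5x5 linear system in place of the single scalar update.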

  7. The human body as field of conflict between discourses.

    PubMed

    Kimsma, Gerrit K; Leeuwen, Evert van

    2005-01-01

    The approach to AIDS as a disease and a threat for social discrimination is used as an example to illustrate a conceptual thesis. This thesis is a claim that concerns what we call a medical issue or not, what is medicalised or needs to be demedicalised. In the friction between medicalisation and demedicalisation as discursive strategies the latter approach can only be effected through the employment of discourses or discursive strategies other than medicine, such as those of the law and of economics. These discourses each realise different values, promote a different subject, and have a different concept of man. The concept of discourse is briefly outlined against concepts such as the linear growth concept of science and the growth model of science as changes in paradigm. The issue of testing for AIDS shows a conflict between the medical and the legal discourse and illustrates the title of our contribution: the human body as field of conflict between discourses.

  8. Introducing Linear Functions: An Alternative Statistical Approach

    ERIC Educational Resources Information Center

    Nolan, Caroline; Herbert, Sandra

    2015-01-01

    The introduction of linear functions is the turning point where many students decide if mathematics is useful or not. This means the role of parameters and variables in linear functions could be considered to be "threshold concepts". There is recognition that linear functions can be taught in context through the exploration of linear…

  9. Beam dynamics simulation of a double pass proton linear accelerator

    DOE PAGES

    Hwang, Kilean; Qiang, Ji

    2017-04-03

    A recirculating superconducting linear accelerator, which combines the advantages of both straight and circular accelerators, has been demonstrated with relativistic electron beams. The acceleration concept of a recirculating proton beam was recently proposed and is currently under study. In order to further support the concept, a beam dynamics study of a recirculating proton linear accelerator has to be carried out. In this paper, we study the feasibility of a two-pass recirculating proton linear accelerator through direct numerical beam dynamics design optimization and start-to-end simulation. This study shows that two-pass simultaneous focusing without particle losses is attainable, including fully 3D space-charge effects, through the entire accelerator system.

  10. Feedback-Equivalence of Nonlinear Systems with Applications to Power System Equations.

    NASA Astrophysics Data System (ADS)

    Marino, Riccardo

    The key concept of the dissertation is feedback equivalence among systems affine in control. Feedback equivalence to linear systems in Brunovsky canonical form and the construction of the corresponding feedback transformation are used to: (i) design a nonlinear regulator for a detailed nonlinear model of a synchronous generator connected to an infinite bus; (ii) establish which power system network structures enjoy the feedback linearizability property and design a stabilizing control law for these networks with a constraint on the control space which comes from the use of d.c. lines. It is also shown that the feedback linearizability property allows the use of state feedback to construct a linear controllable system with a positive definite linear Hamiltonian structure for the uncontrolled part if the state space is even-dimensional; a stabilizing control law is derived for such systems. The feedback linearizability property is characterized by the involutivity of certain nested distributions for strongly accessible analytic systems; if the system is defined on a manifold M diffeomorphic to Euclidean space, it is established that the set where the property holds is a submanifold open and dense in M. If an analytic output map is defined, a set of nested involutive distributions can always be defined, which allows the introduction of an observability property that is, in some sense, the dual concept to feedback linearizability: the goal is to investigate when a nonlinear system affine in control with an analytic output map is feedback equivalent to a linear controllable and observable system. Finally, a nested involutive structure of distributions is shown to guarantee the existence of a state feedback that takes a nonlinear system affine in control to a single-input one, both feedback equivalent to linear controllable systems, preserving one controlled vector field.
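
    The central construction — cancelling the nonlinearity of a control-affine system by state feedback so that the closed loop is a linear system in Brunovsky form — can be sketched on a toy pendulum (parameters and gains are invented for illustration):

```python
import math

def linearizing_control(x, v, m=1.0, l=1.0, b=0.1, g=9.81):
    """For theta'' = -(g/l)sin(theta) - (b/(m l^2))theta' + u/(m l^2),
    return u so that the closed loop becomes theta'' = v (a double integrator)."""
    theta, omega = x
    f = -(g / l) * math.sin(theta) - (b / (m * l * l)) * omega
    return m * l * l * (v - f)

# Closed-loop check: v = -4*theta - 4*omega places both poles at s = -2
x, dt = [1.0, 0.0], 1e-3
for _ in range(10_000):                      # simulate 10 s with explicit Euler
    theta, omega = x
    v = -4.0 * theta - 4.0 * omega
    u = linearizing_control(x, v)
    acc = -9.81 * math.sin(theta) - 0.1 * omega + u   # plant with m = l = 1
    x = [theta + dt * omega, omega + dt * acc]
```

    After the cancellation, any linear design tool (pole placement above, or the Hamiltonian-structure argument in the abstract) applies to the transformed system.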

  11. Panel methods: An introduction

    NASA Technical Reports Server (NTRS)

    Erickson, Larry L.

    1990-01-01

    Panel methods are numerical schemes for solving the Prandtl-Glauert equation for linear, inviscid, irrotational flow about aircraft flying at subsonic or supersonic speeds. The tools at the panel-method user's disposal are (1) surface panels of source-doublet-vorticity distributions that can represent nearly arbitrary geometry, and (2) extremely versatile boundary condition capabilities that can frequently be used for creative modeling. Panel-method capabilities and limitations, basic concepts common to all panel-method codes, different choices that were made in the implementation of these concepts into working computer programs, and various modeling techniques involving boundary conditions, jump properties, and trailing wakes are discussed. An approach for extending the method to nonlinear transonic flow is also presented. Three appendices supplement the main text. In appendix 1, additional detail is provided on how the basic concepts are implemented into a specific computer program (PANAIR). In appendix 2, it is shown how to evaluate analytically the fundamental surface integral that arises in the expressions for influence coefficients, and how to evaluate its jump property. In appendix 3, a simple example is used to illustrate the so-called finite part of the improper integrals.

  12. Primordial black holes in linear and non-linear regimes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allahyari, Alireza; Abolhasani, Ali Akbar; Firouzjaee, Javad T., E-mail: allahyari@physics.sharif.edu, E-mail: j.taghizadeh.f@ipm.ir

    We revisit the formation of primordial black holes (PBHs) in the radiation-dominated era for both linear and non-linear regimes, elaborating on the concept of an apparent horizon. Contrary to the expectation from vacuum models, we argue that in a cosmological setting a density fluctuation with a high density does not always collapse to a black hole. To this end, we first elaborate on the perturbation theory for spherically symmetric spacetimes in the linear regime. Thereby, we introduce two gauges. This allows us to introduce a well-defined gauge-invariant quantity for the expansion of null geodesics. Using this quantity, we argue that PBHs do not form in the linear regime irrespective of the density of the background. Finally, we consider the formation of PBHs in non-linear regimes, adopting the spherical collapse picture. In this picture, over-densities are modeled by closed FRW models in the radiation-dominated era. The difference of our approach is that we start by finding an exact solution for a closed radiation-dominated universe. This yields exact results for the turn-around time and radius. Importantly, we take the initial conditions from linear perturbation theory. Additionally, instead of using the uniform Hubble gauge condition, both density and velocity perturbations are admitted in this approach. Thereby, the matching condition imposes an important constraint on the initial velocity perturbations, δ^h_0 = −δ_0/2. This can be extended to higher orders. Using this constraint, we find that the apparent horizon of a PBH forms when δ > 3 at turn-around time. The corrections also appear from the third order. Moreover, a PBH forms when its apparent horizon is outside the sound horizon at the re-entry time. Applying this condition, we infer that the threshold value of the density perturbations at horizon re-entry should satisfy δ_th > 0.7.

  13. Design, synthesis and biological evaluation of (S)-valine thiazole-derived cyclic and non-cyclic peptidomimetic oligomers as modulators of human P-glycoprotein (ABCB1)

    PubMed Central

    Singh, Satyakam; Prasad, Nagarajan Rajendra; Kapoor, Khyati; Chufan, Eduardo E.; Patel, Bhargav A.; Ambudkar, Suresh V.; Talele, Tanaji T.

    2014-01-01

    Multidrug resistance (MDR) caused by the ATP-binding cassette (ABC) transporter P-glycoprotein (P-gp), through extrusion of anticancer drugs from cells, is a major cause of failure of cancer chemotherapy. Previously, selenazole-containing cyclic peptides were reported as P-gp inhibitors, and these were also used for co-crystallization with mouse P-gp, which has 87% homology to human P-gp. It has been reported that human P-gp can simultaneously accommodate two to three moderately sized molecules at the drug binding pocket. Our in silico analysis, based on a homology model of human P-gp, spurred our efforts to investigate the optimal size of (S)-valine-derived thiazole units that can be accommodated at the drug-binding pocket. Towards this goal, we synthesized linear and cyclic derivatives of (S)-valine-derived thiazole units of varying lengths to investigate the optimal size, lipophilicity, and structural form (linear or cyclic) of valine-derived thiazole peptides that can be accommodated in the P-gp binding pocket and affect its activity, previously an unexplored concept. Among these oligomers, the lipophilic linear (13) and cyclic trimer (17) derivatives of QZ59S-SSS were found to be the most potent, and equally potent, inhibitors of human P-gp (IC50 = 1.5 μM). With the cyclic trimer and linear trimer being equipotent, future studies can focus on non-cyclic counterparts of cyclic peptides maintaining the linear trimer length. A binding model of the linear trimer (13) within the drug-binding site on the homology model of human P-gp represents an opportunity for future optimization, specifically replacing the valine and thiazole groups in the non-cyclic form. PMID:24288265

  14. Design, synthesis, and biological evaluation of (S)-valine thiazole-derived cyclic and noncyclic peptidomimetic oligomers as modulators of human P-glycoprotein (ABCB1).

    PubMed

    Singh, Satyakam; Prasad, Nagarajan Rajendra; Kapoor, Khyati; Chufan, Eduardo E; Patel, Bhargav A; Ambudkar, Suresh V; Talele, Tanaji T

    2014-01-03

    Multidrug resistance caused by ATP binding cassette transporter P-glycoprotein (P-gp) through extrusion of anticancer drugs from the cells is a major cause of failure in cancer chemotherapy. Previously, selenazole-containing cyclic peptides were reported as P-gp inhibitors and were also used for co-crystallization with mouse P-gp, which has 87% homology to human P-gp. It has been reported that human P-gp can simultaneously accommodate two to three moderately sized molecules at the drug binding pocket. Our in silico analysis, based on the homology model of human P-gp, spurred our efforts to investigate the optimal size of (S)-valine-derived thiazole units that can be accommodated at the drug binding pocket. Towards this goal, we synthesized varying lengths of linear and cyclic derivatives of (S)-valine-derived thiazole units to investigate the optimal size, lipophilicity, and structural form (linear or cyclic) of valine-derived thiazole peptides that can be accommodated in the P-gp binding pocket and affect its activity, previously an unexplored concept. Among these oligomers, lipophilic linear (13) and cyclic trimer (17) derivatives of QZ59S-SSS were found to be the most and equally potent inhibitors of human P-gp (IC50 = 1.5 μM). As the cyclic trimer and linear trimer compounds are equipotent, future studies should focus on noncyclic counterparts of cyclic peptides maintaining linear trimer length. A binding model of the linear trimer 13 within the drug binding site on the homology model of human P-gp represents an opportunity for future optimization, specifically replacing valine and thiazole groups in the noncyclic form. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Developing a comprehensive framework of community integration for people with acquired brain injury: a conceptual analysis.

    PubMed

    Shaikh, Nusratnaaz M; Kersten, Paula; Siegert, Richard J; Theadom, Alice

    2018-03-06

    Despite increasing emphasis on the importance of community integration as an outcome for acquired brain injury (ABI), there is still no consensus on the definition of community integration. The aim of this study was to complete a concept analysis of community integration in people with ABI. The method of concept clarification was used to guide concept analysis of community integration based on a literature review. Articles were included if they explored community integration in people with ABI. Data extraction was performed by the initial coding of (1) the definition of community integration used in the articles, (2) attributes of community integration recognized in the articles' findings, and (3) the process of community integration. This information was synthesized to develop a model of community integration. Thirty-three articles were identified that met the inclusion criteria. The construct of community integration was found to be a non-linear process reflecting recovery over time, sequential goals, and transitions. Community integration was found to encompass six components: independence, sense of belonging, adjustment, having a place to live, involvement in a meaningful occupational activity, and being socially connected into the community. Antecedents to community integration included individual, injury-related, environmental, and societal factors. The findings of this concept analysis suggest that the concept of community integration is more diverse than previously recognized. New measures and rehabilitation plans capturing all attributes of community integration are needed in clinical practice. Implications for rehabilitation: Understanding the perceptions and lived experiences of people with acquired brain injury through this analysis provides a basis to ensure rehabilitation meets patients' needs.
This model highlights the need for clinicians to be aware of and assess the role of antecedents as well as the attributes of community integration itself, to ensure all aspects are addressed in a manner that will enhance recovery and improve the level of integration into the community. The finding that community integration is a non-linear process also highlights the need for rehabilitation professionals to review and revise plans over time in response to a person's changing circumstances and recovery journey. This analysis provides the groundwork for an operational model of community integration and for the development of a measure of community integration that assesses all six attributes revealed in this review, including those not recognized in previous frameworks.

  16. Linear signatures in nonlinear gyrokinetics: interpreting turbulence with pseudospectra

    DOE PAGES

    Hatch, D. R.; Jenko, F.; Navarro, A. Banon; ...

    2016-07-26

    A notable feature of plasma turbulence is its propensity to retain features of the underlying linear eigenmodes in a strongly turbulent state—a property that can be exploited to predict various aspects of the turbulence using only linear information. In this context, this work examines gradient-driven gyrokinetic plasma turbulence through three lenses—linear eigenvalue spectra, pseudospectra, and singular value decomposition (SVD). We study a reduced gyrokinetic model whose linear eigenvalue spectra include ion temperature gradient driven modes, stable drift waves, and kinetic modes representing Landau damping. The goal is to characterize in which ways, if any, these familiar ingredients are manifest in the nonlinear turbulent state. This pursuit is aided by the use of pseudospectra, which provide a more nuanced view of the linear operator by characterizing its response to perturbations. We introduce a new technique whereby the nonlinearly evolved phase space structures extracted with SVD are linked to the linear operator using concepts motivated by pseudospectra. Using this technique, we identify nonlinear structures that have connections to not only the most unstable eigenmode but also subdominant modes that are nonlinearly excited. The general picture that emerges is a system in which signatures of the linear physics persist in the turbulence, albeit in ways that cannot be fully explained by the linear eigenvalue approach; a non-modal treatment is necessary to understand key features of the turbulence.
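
    The SVD step — extracting dominant phase-space or spatial structures from a matrix of snapshots — can be sketched with synthetic data (the planted mode and noise level are arbitrary, not taken from the gyrokinetic model):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 400)
x = np.linspace(0.0, 1.0, 64)
mode = np.sin(2.0 * np.pi * x)                 # planted spatial structure

# Snapshot matrix: one row per time, coherent mode buried in noise
data = np.outer(np.cos(3.0 * t), mode) + 0.05 * rng.normal(size=(400, 64))

U, s, Vt = np.linalg.svd(data, full_matrices=False)
dominant = Vt[0]                               # leading right-singular vector
overlap = abs(dominant @ mode) / np.linalg.norm(mode)
```

    The leading singular vector recovers the planted structure almost exactly; the paper's contribution is to link such nonlinearly extracted structures back to the linear operator via pseudospectral concepts.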

  17. Helicon plasma ion temperature measurements and observed ion cyclotron heating in proto-MPEX

    NASA Astrophysics Data System (ADS)

    Beers, C. J.; Goulding, R. H.; Isler, R. C.; Martin, E. H.; Biewer, T. M.; Caneses, J. F.; Caughman, J. B. O.; Kafle, N.; Rapp, J.

    2018-01-01

    The Prototype-Material Plasma Exposure eXperiment (Proto-MPEX) linear plasma device is a test bed for exploring and developing plasma source concepts to be employed in the future steady-state linear device Material Plasma Exposure eXperiment (MPEX) that will study plasma-material interactions for the nuclear fusion program. The concept foresees using a helicon plasma source supplemented with electron and ion heating systems to reach necessary plasma conditions. In this paper, we discuss ion temperature measurements obtained from Doppler broadening of spectral lines from argon ion test particles. Plasmas produced with helicon heating alone have average ion temperatures downstream of the Helicon antenna in the range of 3 ± 1 eV; ion temperature increases to 10 ± 3 eV are observed with the addition of ion cyclotron heating (ICH). The temperatures are higher at the edge than the center of the plasma either with or without ICH. This type of profile is observed with electrons as well. A one-dimensional RF antenna model is used to show where heating of the plasma is expected.

  18. A model-free characterization of recurrences in stationary time series

    NASA Astrophysics Data System (ADS)

    Chicheportiche, Rémy; Chakraborti, Anirban

    2017-05-01

    The study of recurrences in earthquakes, climate, financial time series, etc. is crucial to better forecast disasters and limit their consequences. Most previous phenomenological studies of recurrences have involved only a long-ranged autocorrelation function and ignored the multi-scaling properties induced by potential higher-order dependencies. We argue that copulas are a natural model-free framework for studying non-linear dependencies in time series and related concepts like recurrences. We find that (i) non-linear dependencies do impact both the statistics and dynamics of recurrence times, and (ii) the scaling arguments for the unconditional distribution may not be applicable. Hence, fitting and/or simulating the intertemporal distribution of recurrence intervals is very much system specific and cannot actually benefit from universal features, in contrast to previous claims. This has important implications in epilepsy prognosis and financial risk management applications.
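The "model-free" character of copula-based analysis comes from rank invariance: any statistic computed from the empirical copula is unchanged by strictly increasing transformations of the margins. A small sketch with synthetic data (illustrative only, not the authors' recurrence analysis):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: the Pearson correlation of the ranks.
    It depends only on the empirical copula of (x, y), so it is invariant
    under strictly increasing transformations of either margin."""
    rx = np.argsort(np.argsort(x))   # ranks of x (no ties for continuous data)
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = x + 0.5 * rng.normal(size=500)          # dependent pair

rho = spearman_rho(x, y)
rho_transformed = spearman_rho(np.exp(x), y**3)  # monotone-deformed margins
print(rho, rho_transformed)                 # identical: only the ranks matter
```

Because exp and the cube are strictly increasing, the ranks (and hence the copula) are untouched, which is exactly why such measures capture dependence without committing to a marginal model.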

  19. Multiresource analysis and information system concepts for incorporating LANDSAT and GIS technology into large area forest surveys. [South Carolina

    NASA Technical Reports Server (NTRS)

    Langley, P. G.

    1981-01-01

    A method of relating different classifications at each stage of a multistage, multiresource inventory using remotely sensed imagery is discussed. A class transformation matrix, allowing the conversion of a set of proportions at one stage to a set of proportions at the subsequent stage through use of a linear model, is described. The technique was tested by applying it to Kershaw County, South Carolina. Unsupervised LANDSAT spectral classifications were correlated with interpretations of land use aerial photography, the correlations were employed to estimate land use classifications using the linear model, and the land use proportions were used to stratify current annual increment (CAI) field plot data to obtain a total CAI for the county. The estimate differed by 1% from the published figure for land use. Potential sediment loss and a variety of land use classifications were also obtained.
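The class transformation step described above amounts to multiplying a vector of stage-one proportions by a column-stochastic matrix. A hypothetical three-class sketch (the class names and numbers are illustrative, not taken from the study):

```python
import numpy as np

# Hypothetical 3-class example: each column of T gives, for one spectral
# class, the proportions of the land-use classes (rows) it maps to, as
# estimated from coincident photo interpretation.  T is column-stochastic,
# so valid proportions map to valid proportions.
T = np.array([[0.8, 0.1, 0.0],    # forest
              [0.2, 0.7, 0.3],    # agriculture
              [0.0, 0.2, 0.7]])   # urban/other

p_spectral = np.array([0.5, 0.3, 0.2])   # stage-one (spectral) proportions
p_landuse = T @ p_spectral               # linear model: next-stage proportions

print(p_landuse)          # e.g. [0.43 0.37 0.20]
print(p_landuse.sum())    # 1.0: total area is conserved
```

Column-stochasticity (each column sums to 1) is what guarantees the output proportions again sum to one, so no area is created or lost in the conversion.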

  20. Information Processing Capacity of Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Dambre, Joni; Verstraeten, David; Schrauwen, Benjamin; Massar, Serge

    2012-07-01

    Many dynamical systems, both natural and artificial, are stimulated by time dependent external signals, somehow processing the information contained therein. We demonstrate how to quantify the different modes in which information can be processed by such systems and combine them to define the computational capacity of a dynamical system. This is bounded by the number of linearly independent state variables of the dynamical system, equaling it if the system obeys the fading memory condition. It can be interpreted as the total number of linearly independent functions of its stimuli the system can compute. Our theory combines concepts from machine learning (reservoir computing), system modeling, stochastic processes, and functional analysis. We illustrate our theory by numerical simulations for the logistic map, a recurrent neural network, and a two-dimensional reaction diffusion system, uncovering universal trade-offs between the non-linearity of the computation and the system's short-term memory.
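The capacity bound mentioned in the abstract can be probed numerically: for a linear fading-memory system, summing the squared correlations between delayed inputs and their best linear reconstructions from the state stays at or below the number of state variables. A rough sketch using a random linear reservoir (an illustration of the bound, not one of the authors' simulated systems):

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 20, 5000
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9: fading memory
w_in = rng.normal(size=N)

u = rng.uniform(-1.0, 1.0, size=T)   # i.i.d. input signal
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = W @ x + w_in * u[t]          # linear driven dynamical system
    X[t] = x

washout = 100                        # discard the initial transient

def capacity(delay):
    """Squared correlation between the delayed input u(t - delay) and its
    best linear reconstruction from the state: the capacity for that target."""
    y = u[washout - delay:T - delay]
    Z = X[washout:]
    w, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return np.corrcoef(Z @ w, y)[0, 1] ** 2

total = sum(capacity(k) for k in range(2 * N))
print(total)                         # close to the bound N = 20
```

Summing over more delays than state variables does not raise the total beyond N (up to finite-sample estimation noise), which is the trade-off the paper formalizes between memory and nonlinearity.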

  1. Information Processing Capacity of Dynamical Systems

    PubMed Central

    Dambre, Joni; Verstraeten, David; Schrauwen, Benjamin; Massar, Serge

    2012-01-01

    Many dynamical systems, both natural and artificial, are stimulated by time dependent external signals, somehow processing the information contained therein. We demonstrate how to quantify the different modes in which information can be processed by such systems and combine them to define the computational capacity of a dynamical system. This is bounded by the number of linearly independent state variables of the dynamical system, equaling it if the system obeys the fading memory condition. It can be interpreted as the total number of linearly independent functions of its stimuli the system can compute. Our theory combines concepts from machine learning (reservoir computing), system modeling, stochastic processes, and functional analysis. We illustrate our theory by numerical simulations for the logistic map, a recurrent neural network, and a two-dimensional reaction diffusion system, uncovering universal trade-offs between the non-linearity of the computation and the system's short-term memory. PMID:22816038

  2. Entanglement evaluation with atomic Fisher information

    NASA Astrophysics Data System (ADS)

    Obada, A.-S. F.; Abdel-Khalek, S.

    2010-02-01

    In this paper, the concept of atomic Fisher information (AFI) is introduced and its marginal distributions are defined. This quantity is used as a parameter of entanglement and compared with the linear entropy and the atomic Wehrl entropy of the two-level atom. The evolution of the atomic Fisher information and the atomic Wehrl entropy for the pure-state (dissipation-free) case of the Jaynes-Cummings model is analyzed. We demonstrate the connections between these measures.

  3. Reconfiguration Schemes for Fault-Tolerant Processor Arrays

    DTIC Science & Technology

    1992-10-15

    The notion of linear schedule is easily related to similar models and concepts used in [1]-[13] and several other works. Computations are indexed by a partially ordered subset of a multidimensional integer lattice (called the index set); the points of this lattice correspond to (i.e., are the indices of) computations. Data dependencies are represented as vectors that connect points of the lattice, and the total time of all computations of the algorithm is to be minimized.

  4. Blocky inversion of multichannel elastic impedance for elastic parameters

    NASA Astrophysics Data System (ADS)

    Mozayan, Davoud Karami; Gholami, Ali; Siahkoohi, Hamid Reza

    2018-04-01

    Petrophysical description of reservoirs requires proper knowledge of elastic parameters like P- and S-wave velocities (Vp and Vs) and density (ρ), which can be retrieved from pre-stack seismic data using the concept of elastic impedance (EI). We propose an inversion algorithm which recovers elastic parameters from pre-stack seismic data in two sequential steps. In the first step, using the multichannel blind seismic inversion method (exploited recently for recovering acoustic impedance from post-stack seismic data), high-resolution blocky EI models are obtained directly from partial angle-stacks. Using an efficient total-variation (TV) regularization, each angle-stack is inverted independently in a multichannel form without prior knowledge of the corresponding wavelet. The second step involves inversion of the resulting EI models for elastic parameters. Mathematically, under some assumptions, the EIs are linearly described by the elastic parameters in the logarithm domain. Thus a linear weighted least-squares inversion is employed to perform this step. The accuracy of the concept of elastic impedance in predicting reflection coefficients at low and high angles of incidence is compared with that of the exact Zoeppritz elastic impedance, and the role of low-frequency content in the problem is discussed. The performance of the proposed inversion method is tested using synthetic 2D data sets obtained from the Marmousi model and also 2D field data sets. The results confirm the efficiency and accuracy of the proposed method for inversion of pre-stack seismic data.
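The log-domain linearity exploited in the second step can be illustrated with a Connolly-style elastic impedance, assuming the form ln EI(θ) = (1 + tan²θ) ln Vp − 8K sin²θ ln Vs + (1 − 4K sin²θ) ln ρ with fixed K = (Vs/Vp)²; whether this exact parameterization matches the paper's is an assumption. A noise-free sketch:

```python
import numpy as np

K = 0.25                                  # assumed fixed (Vs/Vp)^2
angles = np.radians([10.0, 20.0, 30.0])   # three partial angle-stacks

def design_matrix(angles, K):
    """Rows map [ln Vp, ln Vs, ln rho] to ln EI(theta) for each angle."""
    s2 = np.sin(angles) ** 2
    t2 = np.tan(angles) ** 2
    return np.column_stack([1 + t2, -8 * K * s2, 1 - 4 * K * s2])

# Synthetic "true" parameters (hypothetical sandstone-like values, SI units)
vp, vs, rho = 3000.0, 1500.0, 2400.0
m_true = np.log([vp, vs, rho])

A = design_matrix(angles, K)
ln_ei = A @ m_true                        # forward-model the log impedances

# Because the model is linear in log space, a least-squares solve
# recovers the elastic parameters from the EI values exactly.
m_est, *_ = np.linalg.lstsq(A, ln_ei, rcond=None)
print(np.exp(m_est))                      # recovers [3000, 1500, 2400]
```

With noisy EI values the same solve becomes the weighted least-squares step the abstract describes, with weights reflecting the per-angle noise levels.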

  5. Sub-optimal control of fuzzy linear dynamical systems under granular differentiability concept.

    PubMed

    Mazandarani, Mehran; Pariz, Naser

    2018-05-01

    This paper deals with sub-optimal control of a fuzzy linear dynamical system. The aim is to keep the state variables of the fuzzy linear dynamical system close to zero in an optimal manner. In the fuzzy dynamical system, the fuzzy derivative is considered as the granular derivative, and all the coefficients and initial conditions can be uncertain. The criterion for assessing optimality is regarded as a granular integral whose integrand is a quadratic function of the state variables and control inputs. Using the relative-distance-measure (RDM) fuzzy interval arithmetic and the calculus of variations, the optimal control law is presented as fuzzy state-variable feedback. Since the optimal feedback gains are obtained as fuzzy functions, they need to be defuzzified; this results in the sub-optimal control law. This paper also sheds light on the restrictions imposed by approaches that are based on fuzzy standard interval arithmetic (FSIA) and use strongly generalized Hukuhara and generalized Hukuhara differentiability concepts for obtaining the optimal control law. The notion of granular eigenvalues is also defined. Using an RLC circuit mathematical model, it is shown that, due to their unnatural behavior in modeling the phenomenon, the FSIA-based approaches may obtain eigenvalue sets that differ from the inherent eigenvalue set of the fuzzy dynamical system. This is, however, not the case with the approach proposed in this study. The notions of granular controllability and granular stabilizability of the fuzzy linear dynamical system are also presented in this paper. Moreover, a sub-optimal control for regulating a Boeing 747 in the longitudinal direction with uncertain initial conditions and parameters is obtained. In addition, an uncertain suspension system of one of the four wheels of a bus is regulated using the sub-optimal control introduced in this paper. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Computing anticipatory systems with incursion and hyperincursion

    NASA Astrophysics Data System (ADS)

    Dubois, Daniel M.

    1998-07-01

    An anticipatory system is a system which contains a model of itself and/or of its environment in view of computing its present state as a function of the prediction of the model. With the concepts of incursion and hyperincursion, anticipatory discrete systems can be modelled, simulated and controlled. By definition an incursion, an inclusive or implicit recursion, can be written as: x(t+1)=F[…,x(t-1),x(t),x(t+1),…], where the value of a variable x(t+1) at time t+1 is a function of this variable at past, present and future times. This is an extension of recursion. Hyperincursion is an incursion with multiple solutions. For example, chaos in the Pearl-Verhulst map model: x(t+1)=a.x(t).[1-x(t)] is controlled by the following anticipatory incursive model: x(t+1)=a.x(t).[1-x(t+1)], which corresponds to the differential anticipatory equation: dx(t)/dt=a.x(t).[1-x(t+1)]-x(t). The main part of this paper deals with the discretisation of differential equation systems of linear and non-linear oscillators. The non-linear oscillator is based on the Lotka-Volterra equations model. The discretisation is made by incursion. The incursive discrete equation system gives the same stability condition as the original differential equations, without numerical instabilities. The linearisation of the incursive discrete non-linear Lotka-Volterra equation system gives rise to the classical harmonic oscillator. The incursive discretisation of the linear oscillator is similar to defining backward and forward discrete derivatives. A generalized complex derivative is then considered and applied to the harmonic oscillator. Non-locality seems to be a property of anticipatory systems. With some mathematical assumptions, the Schrödinger quantum equation is derived for a particle in a uniform potential. Finally a hyperincursive system is given in the case of a neural stack memory.
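The incursive control of the logistic map described above is easy to reproduce: solving x(t+1) = a·x(t)·[1 − x(t+1)] for x(t+1) gives an explicit update whose fixed point (a − 1)/a is stable even at a = 4, where the classical recursion is chaotic. A minimal sketch:

```python
# Incursive control of the chaotic Pearl-Verhulst (logistic) map, after
# Dubois: replacing x(t) by x(t+1) inside the bracket,
#     x(t+1) = a*x(t)*(1 - x(t+1)),
# is implicit but solvable in closed form: x(t+1) = a*x(t) / (1 + a*x(t)).

def logistic(x, a):
    return a * x * (1.0 - x)         # classical recursion: chaotic at a = 4

def incursive(x, a):
    return a * x / (1.0 + a * x)     # incursive form, solved for x(t+1)

a, x_rec, x_inc = 4.0, 0.2, 0.2
for _ in range(100):
    x_rec = logistic(x_rec, a)
    x_inc = incursive(x_inc, a)

print(x_inc)   # converges to the fixed point (a - 1)/a = 0.75
print(x_rec)   # still wandering in (0, 1)
```

The stability is visible from the derivative of the incursive update, a/(1 + a·x)², which equals 1/a = 0.25 at the fixed point, so iterates contract toward it rather than diverging.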

  7. A control-theory model for human decision-making

    NASA Technical Reports Server (NTRS)

    Levison, W. H.; Tanner, R. B.

    1971-01-01

    A model for human decision making is an adaptation of an optimal control model for pilot/vehicle systems. The models for decision and control both contain concepts of time delay, observation noise, optimal prediction, and optimal estimation. The decision making model was intended for situations in which the human bases his decision on his estimate of the state of a linear plant. Experiments are described for the following task situations: (a) single decision tasks, (b) two-decision tasks, and (c) simultaneous manual control and decision making. Using fixed values for model parameters, single-task and two-task decision performance can be predicted to within an accuracy of 10 percent. Agreement is less good for the simultaneous decision and control situation.

  8. A fundamental study revisited: Quantitative evidence for territory quality in oystercatchers (Haematopus ostralegus) using GPS data loggers.

    PubMed

    Schwemmer, Philipp; Weiel, Stefan; Garthe, Stefan

    2017-01-01

    A fundamental study by Ens et al. (1992, Journal of Animal Ecology, 61, 703) developed the concept of two different nest-territory qualities in Eurasian oystercatchers (Haematopus ostralegus, L.), resulting in different reproductive successes. "Resident" oystercatchers use breeding territories close to the high-tide line and occupy adjacent foraging territories on mudflats. "Leapfrog" oystercatchers breed further away from their foraging territories. In accordance with this concept, we hypothesized that both foraging trip duration and trip distance from the high-tide line to the foraging territory would be linearly related to the distance between the nest site and the high-tide line. We also expected tidal stage and time of day to affect this relationship. The former study used visual observations of marked oystercatchers, which could not be permanently tracked. This concept can now be tested using miniaturized GPS devices able to record data at high temporal and spatial resolutions. Twenty-nine oystercatchers from two study sites were equipped with GPS devices during the incubation periods (however, not during chick rearing) over 3 years, providing data for 548 foraging trips. Trip distances from the high-tide line were related to the distance between the nest and the high-tide line. Tidal stage and time of day were included in a mixed model. Foraging trip distance, but not duration (which was likely more affected by intake rate), increased with increasing distance between the nest and the high-tide line. There was a site-specific effect of tidal stage on both trip parameters. Foraging trip duration, but not distance, was significantly longer during the hours of darkness. Our findings support and additionally quantify the previously developed concept. Furthermore, rather than separating breeding territory quality into two discrete classes, this classification should be extended by the linear relationship between nest site and foraging location. Finally, oystercatchers' foraging territories overlapped strongly in areas of high food abundance.

  9. Assessment of variation in the alberta context tool: the contribution of unit level contextual factors and specialty in Canadian pediatric acute care settings

    PubMed Central

    2011-01-01

    Background There are few validated measures of organizational context and none that we located are parsimonious and address modifiable characteristics of context. The Alberta Context Tool (ACT) was developed to meet this need. The instrument assesses 8 dimensions of context, which comprise 10 concepts. The purpose of this paper is to report evidence to further the validity argument for ACT. The specific objectives of this paper are to: (1) examine the extent to which the 10 ACT concepts discriminate between patient care units and (2) identify variables that significantly contribute to between-unit variation for each of the 10 concepts. Methods 859 professional nurses (844 valid responses) working in medical, surgical and critical care units of 8 Canadian pediatric hospitals completed the ACT. A random intercept, fixed effects hierarchical linear modeling (HLM) strategy was used to quantify and explain variance in the 10 ACT concepts to establish the ACT's ability to discriminate between units. We ran 40 models (a series of 4 models for each of the 10 concepts) in which we systematically assessed the unique contribution (i.e., error variance reduction) of different variables to between-unit variation. First, we constructed a null model in which we quantified the variance overall, in each of the concepts. Then we controlled for the contribution of individual level variables (Model 1). In Model 2, we assessed the contribution of practice specialty (medical, surgical, critical care) to variation since it was central to construction of the sampling frame for the study. Finally, we assessed the contribution of additional unit level variables (Model 3). Results The null model (unadjusted baseline HLM model) established that there was significant variation between units in each of the 10 ACT concepts (i.e., discrimination between units). When we controlled for individual characteristics, significant variation in the 10 concepts remained. 
    Assessment of the contribution of specialty to between-unit variation enabled us to explain more variance (1.19% to 16.73%) in 6 of the 10 ACT concepts. Finally, when we assessed the unique contribution of the unit level variables available to us, we were able to explain additional variance (15.91% to 73.25%) in 7 of the 10 ACT concepts. Conclusion The findings reported here represent the third published argument for validity of the ACT and add to the evidence supporting its use to discriminate patient care units by all 10 contextual factors. We found evidence of relationships between a variety of individual and unit-level variables that explained much of this between-unit variation for each of the 10 ACT concepts. Future research will include examination of the relationships between the ACT's contextual factors and research utilization by nurses and ultimately the relationships between context, research utilization, and outcomes for patients. PMID:21970404
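The null-model step described above can be caricatured with a simple variance decomposition: the share of total variance attributable to between-unit differences (the intraclass correlation) is what establishes that a concept discriminates between units. A toy sketch with simulated data (a minimal moment-based illustration, not the ACT analysis or a full HLM fit):

```python
import numpy as np

rng = np.random.default_rng(2)
n_units, n_per = 8, 100
unit_effect = rng.normal(0.0, 1.0, size=n_units)    # between-unit sd = 1
scores = unit_effect[:, None] + rng.normal(0.0, 2.0, size=(n_units, n_per))

# Naive moment estimates of the two variance components.
grand = scores.mean()
between = np.mean((scores.mean(axis=1) - grand) ** 2)
within = np.mean((scores - scores.mean(axis=1, keepdims=True)) ** 2)

icc = between / (between + within)   # intraclass correlation
print(icc)                           # near 1/(1 + 4) for sd ratio 1:2
```

An ICC meaningfully above zero is the "significant between-unit variation" of the null model; the subsequent HLM models then ask how much of that between-unit share individual and unit-level covariates can absorb.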

  10. New concept for in-line OLED manufacturing

    NASA Astrophysics Data System (ADS)

    Hoffmann, U.; Landgraf, H.; Campo, M.; Keller, S.; Koening, M.

    2011-03-01

    A new concept for a vertical in-line deposition machine for large area white OLED production has been developed. The concept targets manufacturing on large substrates (>= Gen 4, 750 x 920 mm2) using linear deposition sources, achieving a total material utilization of >= 50% and a takt time down to 80 seconds. The continuously improved linear evaporation sources for the organic material achieve thickness uniformity on Gen 4 substrates of better than +/- 3% and stable dynamic deposition rates down to less than 0.1 nm·m/min and up to more than 100 nm·m/min. For lithium fluoride, but also for other high evaporation temperature materials like magnesium or silver, a linear source with uniformity better than +/- 3% has been developed. For aluminum we integrated a vertically oriented point source using wire feed to achieve high (> 150 nm·m/min) and stable deposition rates. The machine concept includes a new vertical vacuum handling and alignment system for Gen 4 shadow masks. A complete alignment cycle for the mask can be done in less than one minute, achieving alignment accuracy in the range of several tens of μm.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, Kilean; Qiang, Ji

    A recirculating superconducting linear accelerator with the advantages of both straight and circular accelerators has been demonstrated with relativistic electron beams. The acceleration concept of a recirculating proton beam was recently proposed and is currently under study. In order to further support the concept, a beam dynamics study on a recirculating proton linear accelerator has to be carried out. In this paper, we study the feasibility of a two-pass recirculating proton linear accelerator through direct numerical beam dynamics design optimization and start-to-end simulation. This study shows that two-pass simultaneous focusing without particle losses is attainable, including fully 3D space-charge effects, through the entire accelerator system.

  12. Azimuth cut-off model for significant wave height investigation along coastal water of Kuala Terengganu, Malaysia

    NASA Astrophysics Data System (ADS)

    Marghany, Maged; Ibrahim, Zelina; Van Genderen, Johan

    2002-11-01

    The present work operationalizes the azimuth cut-off concept in the study of significant wave height. Three ERS-1 images have been used along the coastal waters of Terengganu, Malaysia. The quasi-linear transform was applied to map the SAR wave spectra into real ocean wave spectra. The azimuth cut-off was then used to model the significant wave height. The results show that the azimuth cut-off varied between the different periods of the ERS-1 images, because the azimuth cut-off is a function of wind speed and significant wave height. Interestingly, the significant wave height modeled from the azimuth cut-off agrees well with ground wave conditions. It can be concluded that ERS-1 can be used as a monitoring tool for detecting significant wave height variation, and that the azimuth cut-off can be used to model the significant wave height. This means that the quasi-linear transform could be a good approach to studying significant wave height variation during different seasons.

  13. From Reactor to Rheology in LDPE Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Read, Daniel J.; Das, Chinmay; Auhl, Dietmar

    2008-07-07

    In recent years the association between molecular structure and linear rheology has been established and well understood through the tube concept and its extensions for well-characterized materials (e.g. McLeish, Adv. Phys. 2002). However, for industrial branched polymeric materials at processing conditions this piece of information is missing. A large number of phenomenological models have been developed to describe the nonlinear response of polymers, but none of these models takes into account the underlying molecular structure, leading to a fitting procedure with arbitrary fitting parameters. The goal of applied molecular rheology is a predictive scheme that runs in its entirety from the molecular structure from the reactor to the non-linear rheology of the resin. In our approach, we use a model for the industrial reactor to explicitly generate the molecular structure ensemble of LDPEs (Tobita, J. Polym. Sci. B 2001), which are consistent with the analytical information. We calculate the linear rheology of the LDPE ensemble with the use of a tube model for branched polymers (Das et al., J. Rheol. 2006). We then separate the contribution of the stress decay into a large number of pompom modes (McLeish et al., J. Rheol. 1998 and Inkson et al., J. Rheol. 1999) with the stretch time and the priority variables corresponding to the actual ensemble of molecules involved. This multimode pompom model allows us to predict the nonlinear properties without any fitting parameter. We present and analyze our results in comparison with experimental data on industrial materials.

  14. Effectiveness of concept mapping and traditional linear nursing care plans on critical thinking skills in clinical pediatric nursing course.

    PubMed

    Aein, Fereshteh; Aliakbari, Fatemeh

    2017-01-01

    A concept map is a useful cognitive tool for enhancing a student's critical thinking (CT) by encouraging students to process information deeply for understanding. However, the evidence regarding its effectiveness on nursing students' CT is contradictory. This paper compares the effectiveness of concept mapping and traditional linear nursing care planning on students' CT. An experimental design was used to examine the CT of 60 baccalaureate students who participated in a pediatric clinical nursing course at the Shahrekord University of Medical Sciences, Shahrekord, Iran in 2013. Participants were randomly divided into six equal groups of 10 students each, of which three groups were the control group and the others were the experimental group. The control group completed nine traditional linear nursing care plans, whereas the experimental group completed nine concept maps during the course. Both groups showed significant improvement in the overall score and all subscales of the California CT skill test from pretest to posttest (P < 0.001), but a t-test demonstrated that the improvement in students' CT skills in the experimental group was significantly greater than in the control group after the program (P < 0.001). Our findings support that concept mapping can be used as a clinical teaching-learning activity to promote CT in nursing students.

  15. Effectiveness of concept mapping and traditional linear nursing care plans on critical thinking skills in clinical pediatric nursing course

    PubMed Central

    Aein, Fereshteh; Aliakbari, Fatemeh

    2017-01-01

    Introduction: A concept map is a useful cognitive tool for enhancing a student's critical thinking (CT) by encouraging students to process information deeply for understanding. However, the evidence regarding its effectiveness on nursing students' CT is contradictory. This paper compares the effectiveness of concept mapping and traditional linear nursing care planning on students' CT. Methods: An experimental design was used to examine the CT of 60 baccalaureate students who participated in a pediatric clinical nursing course at the Shahrekord University of Medical Sciences, Shahrekord, Iran in 2013. Results: Participants were randomly divided into six equal groups of 10 students each, of which three groups were the control group and the others were the experimental group. The control group completed nine traditional linear nursing care plans, whereas the experimental group completed nine concept maps during the course. Both groups showed significant improvement in the overall score and all subscales of the California CT skill test from pretest to posttest (P < 0.001), but a t-test demonstrated that the improvement in students' CT skills in the experimental group was significantly greater than in the control group after the program (P < 0.001). Conclusions: Our findings support that concept mapping can be used as a clinical teaching-learning activity to promote CT in nursing students. PMID:28546978

  16. Trends in Timing of Pregnancy Awareness Among US Women.

    PubMed

    Branum, Amy M; Ahrens, Katherine A

    2017-04-01

    Objectives Early pregnancy detection is important for improving pregnancy outcomes as the first trimester is a critical window of fetal development; however, there has been no description of trends in timing of pregnancy awareness among US women. Methods We examined data from the 1995, 2002, 2006-2010 and 2011-2013 National Survey of Family Growth on self-reported timing of pregnancy awareness among women aged 15-44 years who reported at least one pregnancy in the 4 or 5 years prior to interview that did not result in induced abortion or adoption (n = 17,406). We examined the associations between maternal characteristics and late pregnancy awareness (≥7 weeks' gestation) using adjusted prevalence ratios from logistic regression models. Gestational age at time of pregnancy awareness (continuous) was regressed over year of pregnancy conception (1990-2012) in a linear model. Results Among all pregnancies reported, gestational age at time of pregnancy awareness was 5.5 weeks (standard error = 0.04) and the prevalence of late pregnancy awareness was 23% (standard error = 1%). Late pregnancy awareness decreased with maternal age, was more prevalent among non-Hispanic black and Hispanic women compared to non-Hispanic white women, and for unintended pregnancies versus those that were intended (p < 0.01). Mean time of pregnancy awareness did not change linearly over a 23-year time period after adjustment for maternal age at the time of conception (p < 0.16). Conclusions for Practice On average, timing of pregnancy awareness did not change linearly during 1990-2012 among US women and occurs later among certain groups of women who are at higher risk of adverse birth outcomes.

  17. Trends in Timing of Pregnancy Awareness Among US Women

    PubMed Central

    2017-01-01

    Objectives Early pregnancy detection is important for improving pregnancy outcomes as the first trimester is a critical window of fetal development; however, there has been no description of trends in timing of pregnancy awareness among US women. Methods We examined data from the 1995, 2002, 2006–2010 and 2011–2013 National Survey of Family Growth on self-reported timing of pregnancy awareness among women aged 15–44 years who reported at least one pregnancy in the 4 or 5 years prior to interview that did not result in induced abortion or adoption (n = 17,406). We examined the associations between maternal characteristics and late pregnancy awareness (≥7 weeks’ gestation) using adjusted prevalence ratios from logistic regression models. Gestational age at time of pregnancy awareness (continuous) was regressed over year of pregnancy conception (1990–2012) in a linear model. Results Among all pregnancies reported, gestational age at time of pregnancy awareness was 5.5 weeks (standard error = 0.04) and the prevalence of late pregnancy awareness was 23% (standard error = 1%). Late pregnancy awareness decreased with maternal age, was more prevalent among non-Hispanic black and Hispanic women compared to non-Hispanic white women, and for unintended pregnancies versus those that were intended (p < 0.01). Mean time of pregnancy awareness did not change linearly over a 23-year time period after adjustment for maternal age at the time of conception (p < 0.16). Conclusions for Practice On average, timing of pregnancy awareness did not change linearly during 1990–2012 among US women and occurs later among certain groups of women who are at higher risk of adverse birth outcomes. PMID:27449777

  18. The conceptual basis of mathematics in cardiology III: linear systems theory and integral transforms.

    PubMed

    Bates, Jason H T; Sobel, Burton E

    2003-05-01

    This is the third in a series of four articles developed for the readers of Coronary Artery Disease. Without language ideas cannot be articulated. What may not be so immediately obvious is that they cannot be formulated either. One of the essential languages of cardiology is mathematics. Unfortunately, medical education does not emphasize, and in fact, often neglects empowering physicians to think mathematically. Reference to statistics, conditional probability, multicompartmental modeling, algebra, calculus and transforms is common but often without provision of genuine conceptual understanding. At the University of Vermont College of Medicine, Professor Bates developed a course designed to address these deficiencies. The course covered mathematical principles pertinent to clinical cardiovascular and pulmonary medicine and research. It focused on fundamental concepts to facilitate formulation and grasp of ideas.This series of four articles was developed to make the material available for a wider audience. The articles will be published sequentially in Coronary Artery Disease. Beginning with fundamental axioms and basic algebraic manipulations they address algebra, function and graph theory, real and complex numbers, calculus and differential equations, mathematical modeling, linear system theory and integral transforms and statistical theory. The principles and concepts they address provide the foundation needed for in-depth study of any of these topics. Perhaps of even more importance, they should empower cardiologists and cardiovascular researchers to utilize the language of mathematics in assessing the phenomena of immediate pertinence to diagnosis, pathophysiology and therapeutics. The presentations are interposed with queries (by Coronary Artery Disease abbreviated as CAD) simulating the nature of interactions that occurred during the course itself. 
Each article concludes with one or more examples illustrating application of the concepts covered to cardiovascular medicine and biology.
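The linear-systems idea at the heart of the later articles can be made concrete with a short sketch: for a linear time-invariant system, the output is the convolution of the input with the system's impulse response. The decaying-exponential impulse response below is a generic stand-in (e.g., a first-order washout compartment), not an example taken from the series.

```python
import math

def convolve(h, u):
    """Discrete convolution y[n] = sum_k h[k] * u[n - k]."""
    y = [0.0] * (len(h) + len(u) - 1)
    for n in range(len(y)):
        for k in range(len(h)):
            if 0 <= n - k < len(u):
                y[n] += h[k] * u[n - k]
    return y

# First-order exponential decay as the impulse response.
h = [math.exp(-0.5 * n) for n in range(10)]

# A unit impulse as input reproduces the impulse response itself.
u = [1.0] + [0.0] * 9
y = convolve(h, u)
print([round(v, 4) for v in y[:3]])   # [1.0, 0.6065, 0.3679]
```

Replacing the impulse input with an arbitrary drive signal gives the system's full response, which is the operational content of the convolution theorem that integral transforms exploit.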

  19. A Simple Piece of Apparatus to Aid the Understanding of the Relationship between Angular Velocity and Linear Velocity

    ERIC Educational Resources Information Center

    Unsal, Yasin

    2011-01-01

    One of the subjects that is confusing and difficult for students to fully comprehend is the concept of angular velocity and linear velocity. It is the relationship between linear and angular velocity that students find difficult; most students understand linear motion in isolation. In this article, we detail the design, construction and…
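The relationship the apparatus targets is v = ωr: a point at radius r on a body rotating with angular velocity ω moves with a linear speed proportional to both. A minimal numerical sketch (the values are illustrative, not taken from the apparatus described):

```python
import math

omega = 2 * math.pi          # one revolution per second, in rad/s
for r in (0.1, 0.2, 0.3):    # radii in metres
    v = omega * r            # linear speed v = omega * r, in m/s
    print(f"r = {r:.1f} m  ->  v = {v:.3f} m/s")
```

Points farther from the axis share the same angular velocity but move faster linearly, which is exactly the distinction students find difficult.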

  20. An Inquiry-Oriented Approach to Span and Linear Independence: The Case of the Magic Carpet Ride Sequence

    ERIC Educational Resources Information Center

    Wawro, Megan; Rasmussen, Chris; Zandieh, Michelle; Sweeney, George Franklin; Larson, Christine

    2012-01-01

    In this paper we present an innovative instructional sequence for an introductory linear algebra course that supports students' reinvention of the concepts of span, linear dependence, and linear independence. Referred to as the Magic Carpet Ride sequence, the problems begin with an imaginary scenario that allows students to build rich imagery and…

  1. Alternatives for jet engine control

    NASA Technical Reports Server (NTRS)

    Sain, M. K.; Yurkovich, S.; Hill, J. P.; Kingler, T. A.

    1983-01-01

The following are discussed: the development of tensor-type models for a digital simulation of the quiet, clean, safe engine (QCSE) gas turbine engine; the extension, to nonlinear multivariate control system design, of the concepts of total synthesis that trace their roots to certain early investigations under this grant; the role of series descriptions as they relate to questions of scheduling in the control of gas turbine engines; the development of computer-aided design software for tensor modeling calculations; further enhancement of the linear total synthesis software mentioned above; and the calculation of the first known examples using tensors for nonlinear feedback control.

  2. The dynamics and control of large flexible space structures, 2. Part A: Shape and orientation control using point actuators

    NASA Technical Reports Server (NTRS)

    Bainum, P. M.; Reddy, A. S. S. R.

    1979-01-01

The equations of planar motion for a flexible beam in orbit, including the effects of gravity-gradient torques and control torques from point actuators located along the beam, were developed. Two classes of theorems are applied to the linearized form of these equations to establish necessary conditions for controllability for preselected actuator configurations. The feedback gains are selected: (1) based on the decoupling of the original coordinates and to obtain proper damping, and (2) by applying the linear regulator problem to the individual modal coordinates separately. The linear control laws obtained using both techniques were evaluated by numerical integration of the nonlinear system equations. Numerical examples considering pitch and various numbers of modes with different combinations of actuator numbers and locations are presented. The independent modal control concept used earlier with a discretized model of the thin beam in orbit was reviewed for the case where the number of actuators is less than the number of modes. Results indicate that although the system is controllable, it is not stable about the nominal (local vertical) orientation when the control is based on modal decoupling. An alternate control law not based on modal decoupling ensures stability of all the modes.

  3. A thermo-elastoplastic model for soft rocks considering structure

    NASA Astrophysics Data System (ADS)

    He, Zuoyue; Zhang, Sheng; Teng, Jidong; Xiong, Yonglin

    2017-11-01

In the fields of geological disposal of nuclear waste, geothermal energy and deep mining, the effects of temperature on the mechanical behavior of soft rocks cannot be neglected. Experimental data in the literature also show that the structure of soft rocks cannot be ignored. Based on the superloading yield surface and the concept of temperature-induced equivalent stress, a thermo-elastoplastic model for soft rocks that considers structure is proposed. Compared with the superloading yield surface, only one parameter is added: the linear thermal expansion coefficient. The predicted results and the comparisons with experimental data in the literature show that the proposed model is capable of describing both heat increase and heat decrease of soft rocks. A stronger initial structure leads to a greater strength of the soft rocks. Heat increase and heat decrease can be converted into each other through a change of the initial structure of soft rocks. Furthermore, regardless of heat increase or heat decrease, a larger linear thermal expansion coefficient or a higher temperature always leads to a more rapid degradation of the structure. The degradation trend is more pronounced when a large linear thermal expansion coefficient and a high temperature are combined. Lastly, compared with heat decrease, the structure degrades more easily in the case of heat increase.

  4. Teaching Linear Algebra: Must the Fog Always Roll In?

    ERIC Educational Resources Information Center

    Carlson, David

    1993-01-01

    Proposes methods to teach the more difficult concepts of linear algebra. Examines features of the Linear Algebra Curriculum Study Group Core Syllabus, and presents problems from the core syllabus that utilize the mathematical process skills of making conjectures, proving the results, and communicating the results to colleagues. Presents five…

  5. Transit-time and age distributions for nonlinear time-dependent compartmental systems.

    PubMed

    Metzler, Holger; Müller, Markus; Sierra, Carlos A

    2018-02-06

Many processes in nature are modeled using compartmental systems (reservoir/pool/box systems). Usually, they are expressed as a set of first-order differential equations describing the transfer of matter across a network of compartments. The concepts of age of matter in compartments and the time required for particles to transit the system are important diagnostics of these models with applications to a wide range of scientific questions. Until now, explicit formulas for transit-time and age distributions of nonlinear time-dependent compartmental systems were not available. We compute densities for these types of systems under the assumption of well-mixed compartments. Assuming that a solution of the nonlinear system is available at least numerically, we show how to construct a linear time-dependent system with the same solution trajectory. We demonstrate how to exploit this solution to compute transit-time and age distributions as functions of given start values and initial age distributions. Furthermore, we derive equations for the time evolution of quantiles and moments of the age distributions. Our results generalize available density formulas for the linear time-independent case and mean-age formulas for the linear time-dependent case. As an example, we apply our formulas to a nonlinear and a linear version of a simple global carbon cycle model driven by a time-dependent input signal which represents fossil fuel additions. We derive time-dependent age distributions for all compartments and calculate the time it takes to remove fossil carbon in a business-as-usual scenario.
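For the linear, time-invariant special case that the abstract generalizes from, the mean transit time has a simple closed form: total steady-state stock divided by input flux. A sketch for a hypothetical two-pool serial chain (the rates and input below are invented for illustration, not from the paper's carbon-cycle example):

```python
# Hypothetical two-pool serial chain (rates and input invented for
# illustration):  dx1/dt = I - k1*x1,   dx2/dt = k1*x1 - k2*x2.
# In the linear, time-invariant case the mean transit time equals
# total steady-state stock divided by the input flux.

I, k1, k2 = 2.0, 0.5, 0.25   # input flux and turnover rates
x1 = x2 = 0.0
dt = 0.001
for _ in range(100_000):     # forward-Euler integration to steady state
    dx1 = I - k1 * x1
    dx2 = k1 * x1 - k2 * x2
    x1 += dt * dx1
    x2 += dt * dx2

mean_transit_time = (x1 + x2) / I
print(round(mean_transit_time, 3))   # 6.0 = 1/k1 + 1/k2
```

The simulation settles to x1 = I/k1 and x2 = I/k2, so the ratio recovers 1/k1 + 1/k2, the sum of the pools' turnover times; the paper's contribution is extending such diagnostics to nonlinear, time-dependent systems where no such closed form exists.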

  6. Dose Titration Algorithm Tuning (DTAT) should supersede ‘the’ Maximum Tolerated Dose (MTD) in oncology dose-finding trials

    PubMed Central

    Norris, David C.

    2017-01-01

Background. Absent adaptive, individualized dose-finding in early-phase oncology trials, subsequent ‘confirmatory’ Phase III trials risk suboptimal dosing, with resulting loss of statistical power and reduced probability of technical success for the investigational therapy. While progress has been made toward explicitly adaptive dose-finding and quantitative modeling of dose-response relationships, most such work continues to be organized around a concept of ‘the’ maximum tolerated dose (MTD). The purpose of this paper is to demonstrate concretely how the aim of early-phase trials might be conceived, not as ‘dose-finding’, but as dose titration algorithm (DTA)-finding. Methods. A Phase I dosing study is simulated, for a notional cytotoxic chemotherapy drug, with neutropenia constituting the critical dose-limiting toxicity. The drug’s population pharmacokinetics and myelosuppression dynamics are simulated using published parameter estimates for docetaxel. The amenability of this model to linearization is explored empirically. The properties of a simple DTA targeting a neutrophil nadir of 500 cells/mm^3 using a Newton-Raphson heuristic are explored through simulation in 25 simulated study subjects. Results. Individual-level myelosuppression dynamics in the simulation model approximately linearize under simple transformations of neutrophil concentration and drug dose. The simulated dose titration exhibits largely satisfactory convergence, with great variance in individualized optimal dosing. Some titration courses exhibit overshooting. Conclusions. The large inter-individual variability in simulated optimal dosing underscores the need to replace ‘the’ MTD with an individualized concept of MTD_i. To illustrate this principle, the simplest possible DTA capable of realizing such a concept is demonstrated. Qualitative phenomena observed in this demonstration support discussion of the notion of tuning such algorithms.
Although here illustrated specifically in relation to cytotoxic chemotherapy, the DTAT principle appears similarly applicable to Phase I studies of cancer immunotherapy and molecularly targeted agents. PMID:28663782
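The Newton-Raphson titration heuristic named above can be sketched generically. The Hill-type nadir model, its parameters, and the starting dose below are invented assumptions for illustration; the paper's simulation uses published docetaxel pharmacokinetic and myelosuppression parameters instead.

```python
import math

# Invented Hill-type model linking dose d to neutrophil nadir; the
# model form and every parameter value are illustrative assumptions,
# not the paper's docetaxel pharmacokinetic/myelosuppression model.
N0, d50, h = 3000.0, 80.0, 2.0   # baseline count, ED50, Hill slope
target = 500.0                   # target nadir, cells per mm^3

def nadir(d):
    return N0 / (1.0 + (d / d50) ** h)

def dnadir(d):                   # analytic derivative of nadir(d)
    u = (d / d50) ** h
    return -N0 * h * u / (d * (1.0 + u) ** 2)

d = 40.0                         # cautious starting dose
for _ in range(8):               # one Newton-Raphson step per cycle
    d -= (nadir(d) - target) / dnadir(d)
print(round(d, 2))               # converges to 80*sqrt(5) ~ 178.89
```

From a cautious starting dose, the updates converge on the individualized dose whose predicted nadir equals the target; tuning the algorithm (step damping, starting dose, stopping rule) is the DTAT question the paper raises.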

  7. Link between the dielectric properties of mesomorphic and biological materials

    NASA Astrophysics Data System (ADS)

    Szwajczak, Elzbieta; Szymanski, Aleksander B.

    2002-06-01

An application of liquid crystalline materials as model materials for use in dielectric spectroscopy of artificial biological materials and tissues is discussed. It is shown that the application of standard electrochemical concepts may break down in the case of liquid crystalline materials as well as biological materials. The presence of space-charge regions and electrical nonlinearities of the sample may suggest special possibilities for applying time-domain techniques.

  8. Single point dilution method for the quantitative analysis of antibodies to the gag24 protein of HIV-1.

    PubMed

    Palenzuela, D O; Benítez, J; Rivero, J; Serrano, R; Ganzó, O

    1997-10-13

In the present work, a concept proposed in 1992 by Dopotka and Giesendorf was applied to the quantitative analysis of antibodies to the p24 protein of HIV-1 in infected asymptomatic individuals and AIDS patients. Two approaches were analyzed: a linear model, OD = b0 + b1·log(titer), and a nonlinear model, log(titer) = α·OD^β, similar to the Dopotka-Giesendorf model. Both proposed models adequately fit the dependence between optical density values at a single point dilution and titers obtained by the end point dilution method (EPDM). Nevertheless, the nonlinear model better fits the experimental data, according to residuals analysis. Classical EPDM was compared with the new single point dilution method (SPDM) using both models. The best correlation between titers calculated using the two models and titers obtained by EPDM was achieved with the nonlinear model. The correlation coefficients for the nonlinear and linear models were r = 0.85 and r = 0.77, respectively. A new correction factor introduced into the nonlinear model reduced the day-to-day variation of titer values. In general, SPDM saves time and reagents and is more precise and sensitive to changes in antibody levels, and therefore has a higher resolution than EPDM.
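Both model forms can be fitted by ordinary least squares once the nonlinear one is linearised by taking logarithms twice. The sketch below does this on synthetic data; the "true" parameter values, titer series and noise level are invented for illustration, not the study's sera.

```python
import math, random

def ols(xs, ys):
    """Intercept and slope of a simple least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return my - slope * mx, slope

random.seed(1)
titers = [100, 200, 400, 800, 1600, 3200]
alpha, beta = 2.0, 1.5   # invented "true" parameters for the sketch
# Generate noisy ODs consistent with log(titer) = alpha * OD**beta.
ods = [(math.log(t) / alpha) ** (1 / beta) + random.gauss(0, 0.02)
       for t in titers]

# Linear model: OD = b0 + b1 * log(titer).
b0, b1 = ols([math.log(t) for t in titers], ods)

# Nonlinear model, linearised: log(log(titer)) = log(alpha) + beta*log(OD).
la, bhat = ols([math.log(o) for o in ods],
               [math.log(math.log(t)) for t in titers])
print(round(math.exp(la), 2), round(bhat, 2))   # roughly recovers 2.0, 1.5
```

With clean calibration data the double-log fit recovers α and β closely, which is what lets a single-dilution OD reading stand in for a full end-point dilution series.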

  9. The threshold vs LNT showdown: Dose rate findings exposed flaws in the LNT model part 1. The Russell-Muller debate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calabrese, Edward J., E-mail: edwardc@schoolph.uma

This paper assesses the discovery of the dose-rate effect in radiation genetics and how it challenged fundamental tenets of the linear non-threshold (LNT) dose-response model, including the assumptions that all mutational damage is cumulative and irreversible and that the dose response is linear at low doses. Newly uncovered historical information also describes how a key 1964 report by the International Commission for Radiological Protection (ICRP) addressed the effects of dose rate in the assessment of genetic risk. This unique story involves assessments by two leading radiation geneticists, Hermann J. Muller and William L. Russell, who independently argued that the report's Genetic Summary Section on dose rate was incorrect while simultaneously offering vastly different views as to what the report's summary should have contained. This paper reveals occurrences of scientific disagreements, how conflicts were resolved, which view(s) prevailed and why. During this process the Nobel Laureate, Muller, provided incorrect information to the ICRP in what appears to have been an attempt to manipulate the decision-making process and to prevent the dose-rate concept from being adopted into risk assessment practices. Highlights: • The discovery of the radiation dose-rate effect challenged the scientific basis of LNT. • The dose-rate effect occurred in males and females. • The dose-rate concept supported a threshold dose-response for radiation.

  10. Kernel-imbedded Gaussian processes for disease classification using microarray gene expression data

    PubMed Central

    Zhao, Xin; Cheung, Leo Wang-Kit

    2007-01-01

    Background Designing appropriate machine learning methods for identifying genes that have a significant discriminating power for disease outcomes has become more and more important for our understanding of diseases at genomic level. Although many machine learning methods have been developed and applied to the area of microarray gene expression data analysis, the majority of them are based on linear models, which however are not necessarily appropriate for the underlying connection between the target disease and its associated explanatory genes. Linear model based methods usually also bring in false positive significant features more easily. Furthermore, linear model based algorithms often involve calculating the inverse of a matrix that is possibly singular when the number of potentially important genes is relatively large. This leads to problems of numerical instability. To overcome these limitations, a few non-linear methods have recently been introduced to the area. Many of the existing non-linear methods have a couple of critical problems, the model selection problem and the model parameter tuning problem, that remain unsolved or even untouched. In general, a unified framework that allows model parameters of both linear and non-linear models to be easily tuned is always preferred in real-world applications. Kernel-induced learning methods form a class of approaches that show promising potentials to achieve this goal. Results A hierarchical statistical model named kernel-imbedded Gaussian process (KIGP) is developed under a unified Bayesian framework for binary disease classification problems using microarray gene expression data. In particular, based on a probit regression setting, an adaptive algorithm with a cascading structure is designed to find the appropriate kernel, to discover the potentially significant genes, and to make the optimal class prediction accordingly. A Gibbs sampler is built as the core of the algorithm to make Bayesian inferences. 
Simulation studies showed that, even without any knowledge of the underlying generative model, the KIGP performed very close to the theoretical Bayesian bound, not only in the case with a linear Bayesian classifier but also in the case with a very non-linear Bayesian classifier. This sheds light on its broader usability for microarray data analysis problems, especially those for which linear methods work awkwardly. The KIGP was also applied to four published microarray datasets, and the results showed that the KIGP performed better than, or at least as well as, the referenced state-of-the-art methods in all of these cases. Conclusion Mathematically built on the kernel-induced feature space concept under a Bayesian framework, the KIGP method presented in this paper provides a unified machine learning approach to explore both the linear and the possibly non-linear underlying relationship between the target features of a given binary disease classification problem and the related explanatory gene expression data. More importantly, it incorporates the model parameter tuning into the framework. The model selection problem is addressed in the form of selecting a proper kernel type. The KIGP method also gives Bayesian probabilistic predictions for disease classification. These properties and features are beneficial to most real-world applications. The algorithm is naturally robust in numerical computation. The simulation studies and the published data studies demonstrated that the proposed KIGP performs satisfactorily and consistently. PMID:17328811

  11. Design and analysis of a low-loss linear analog phase modulator for deep space spacecraft X-band transponder applications

    NASA Technical Reports Server (NTRS)

    Mysoor, N. R.; Mueller, R. O.

    1991-01-01

This article summarizes the design concepts, analyses, and development of an X-band (8145 MHz) transponder low-loss linear phase modulator for deep space spacecraft applications. A single-section breadboard circulator-coupled reflection phase modulator has been analyzed, fabricated, and evaluated. A linear phase deviation of 92 deg with a linearity tolerance of +/- 8 percent was measured for this modulator from 8257 MHz to 8634 MHz over the temperature range -20 to 75 °C. The measured insertion loss and the static delay variation with temperature were 2 +/- 0.3 dB and 0.16 psec/°C, respectively. Based on this design, cascaded sections have been modeled, and simulations were performed to provide an X-band deep space transponder (DST) phase modulator with +/- 2.5 radians (+/- 143 deg) of peak phase deviation to accommodate downlink signal modulation with composite telemetry data and ranging, with a deviation linearity tolerance of +/- 8 percent and insertion loss of less than 10 +/- 0.5 dB. A two-section phase modulator using constant-gamma hyperabrupt varactors and an efficient modulator driver circuit was breadboarded. The measured results satisfy the DST phase-modulator requirements and show excellent agreement with the predicted results.

  12. Guided Discovery, Visualization, and Technology Applied to the New Curriculum for Secondary Mathematics.

    ERIC Educational Resources Information Center

    Smith, Karan B.

    1996-01-01

    Presents activities which highlight major concepts of linear programming. Demonstrates how technology allows students to solve linear programming problems using exploration prior to learning algorithmic methods. (DDR)
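The exploration-before-algorithms approach the activities take can be mirrored in code: for a two-variable linear program, a student can enumerate the corner points of the feasible region and evaluate the objective at each, since an LP optimum lies at a vertex. The tiny LP below is invented for illustration, not taken from the article.

```python
# Tiny invented LP for exploration: maximise 3x + 2y subject to
#   x + y <= 4,   x + 3y <= 6,   x >= 0,   y >= 0.
# Each constraint is stored as (a, b, c), meaning a*x + b*y <= c.
constraints = [
    (1, 1, 4),
    (1, 3, 6),
    (-1, 0, 0),   # x >= 0
    (0, -1, 0),   # y >= 0
]

def feasible(x, y, eps=1e-9):
    return all(a * x + b * y <= c + eps for a, b, c in constraints)

# Candidate vertices: intersections of each pair of constraint lines.
vertices = []
for i in range(len(constraints)):
    for j in range(i + 1, len(constraints)):
        a1, b1, c1 = constraints[i]
        a2, b2, c2 = constraints[j]
        det = a1 * b2 - a2 * b1
        if abs(det) > 1e-12:
            x = (c1 * b2 - c2 * b1) / det
            y = (a1 * c2 - a2 * c1) / det
            if feasible(x, y):
                vertices.append((x + 0.0, y + 0.0))  # normalise -0.0

best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
print(best, 3 * best[0] + 2 * best[1])   # (4.0, 0.0) 12.0
```

Plotting the candidate vertices alongside the constraint lines is the graphical exploration the article advocates before introducing the simplex method.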

  13. Chiropractic biophysics technique: a linear algebra approach to posture in chiropractic.

    PubMed

    Harrison, D D; Janik, T J; Harrison, G R; Troyanovich, S; Harrison, D E; Harrison, S O

    1996-10-01

    This paper discusses linear algebra as applied to human posture in chiropractic, specifically chiropractic biophysics technique (CBP). Rotations, reflections and translations are geometric functions studied in vector spaces in linear algebra. These mathematical functions are termed rigid body transformations and are applied to segmental spinal movement in the literature. Review of the literature indicates that these linear algebra concepts have been used to describe vertebral motion. However, these rigid body movers are presented here as applying to the global postural movements of the head, thoracic cage and pelvis. The unique inverse functions of rotations, reflections and translations provide a theoretical basis for making postural corrections in neutral static resting posture. Chiropractic biophysics technique (CBP) uses these concepts in examination procedures, manual spinal manipulation, instrument assisted spinal manipulation, postural exercises, extension traction and clinical outcome measures.
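The rigid-body machinery described above can be sketched in 2-D homogeneous coordinates, where a rotation-plus-translation and its unique inverse are just matrix products. The angle, offsets and test point below are illustrative values, not clinical posture data.

```python
import math

def matmul(A, B):
    """3x3 matrix product for homogeneous-coordinate transforms."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def translation(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def apply(M, p):
    x, y = p
    return [M[0][0] * x + M[0][1] * y + M[0][2],
            M[1][0] * x + M[1][1] * y + M[1][2]]

# Displacement: rotate by 10 degrees, then translate by (2, -1).
theta, tx, ty = math.radians(10), 2.0, -1.0
T = matmul(translation(tx, ty), rotation(theta))

# Unique inverse: undo the steps in reverse order with negated parameters.
T_inv = matmul(rotation(-theta), translation(-tx, -ty))

p = (1.0, 3.0)
q = apply(T, p)       # displaced point
r = apply(T_inv, q)   # restored point
print([round(v, 6) for v in r])   # [1.0, 3.0]
```

The existence of this exact inverse for every rotation, reflection and translation is what gives the "postural correction" idea its theoretical footing.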

  14. [Medical expert assessment, objectivity and justice in disability pension cases].

    PubMed

    Solli, Hans Magnus

    2003-08-14

The formal principle of justice is often interpreted as the requirement of objectivity when a person's situation is to be evaluated in the light of social justice. The aim of this article is to analyse whether or not the formal principle of justice is fulfilled by the ontological and the epistemological concepts of objectivity when disability claims are evaluated medically in relation to the Norwegian legislation on disability benefits. The material is legal and medical texts about medical disability evaluation. The method is text analysis. The main result is that the concept of ontological objectivity functions as the criterion of objectivity when the causal relations between sickness, impairment and disability are explained. This criterion is, however, problematic because it is based on the assumption of a linear causal model of these relations, which precludes the explanation of many cases of disability. The ontological concept of objectivity is not a necessary condition for impartiality and formal justice in relation to the causal relation between sickness and disability; in some situations it is a sufficient condition. The epistemological concept of objectivity is a sufficient condition, but it is not a necessary condition. Some cases must be reviewed on a discretionary basis.

  15. Theoretical foundations of apparent-damping phenomena and nearly irreversible energy exchange in linear conservative systems.

    PubMed

    Carcaterra, A; Akay, A

    2007-04-01

    This paper discusses a class of unexpected irreversible phenomena that can develop in linear conservative systems and provides a theoretical foundation that explains the underlying principles. Recent studies have shown that energy can be introduced to a linear system with near irreversibility, or energy within a system can migrate to a subsystem nearly irreversibly, even in the absence of dissipation, provided that the system has a particular natural frequency distribution. The present work introduces a general theory that provides a mathematical foundation and a physical explanation for the near irreversibility phenomena observed and reported in previous publications. Inspired by the properties of probability distribution functions, the general formulation developed here is based on particular properties of harmonic series, which form the common basis of linear dynamic system models. The results demonstrate the existence of a special class of linear nondissipative dynamic systems that exhibit nearly irreversible energy exchange and possess a decaying impulse response. In addition to uncovering a new class of dynamic system properties, the results have far-reaching implications in engineering applications where classical vibration damping or absorption techniques may not be effective. Furthermore, the results also support the notion of nearly irreversible energy transfer in conservative linear systems, which until now has been a concept associated exclusively with nonlinear systems.
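A toy numerical illustration of the phenomenon (not the authors' formulation): superposing many undamped modes whose frequencies are spread over a band yields a summed response that decays as the modes drift out of phase, i.e. apparent damping with no dissipation anywhere. The mode count and frequency band below are arbitrary choices.

```python
import math

# 200 undamped modes with frequencies spread uniformly over [1.0, 1.5]
# rad/s; their normalised superposition plays the role of an impulse
# response. (Band and mode count are arbitrary illustrative choices.)
N = 200
omegas = [1.0 + 0.5 * k / (N - 1) for k in range(N)]

def response(t):
    return sum(math.cos(w * t) for w in omegas) / N

early = abs(response(0.0))   # all modes in phase: amplitude 1.0
late = max(abs(response(t / 10)) for t in range(400, 1000))
print(early, round(late, 3))
```

Each mode conserves its own energy, yet the collective response over t = 40-100 s stays far below its initial value because the modes have dephased; with finitely many modes the energy eventually returns, but only after a recurrence time that grows with the number of modes, which is the sense in which the exchange is "nearly" irreversible.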

  16. Design tradeoffs for a Multispectral Linear Array (MLA) instrument

    NASA Technical Reports Server (NTRS)

    Mika, A. M.

    1982-01-01

The heart of the multispectral linear array (MLA) design problem is to develop an instrument concept that concurrently provides a wide field of view with high resolution, spectral separation with precise band-to-band registration, and excellent radiometric accuracy. Often, these requirements have conflicting design implications that can only be resolved by careful tradeoffs considering performance, cost, fabrication feasibility and development risk. The key design tradeoffs for an MLA instrument are addressed, and elements of a baseline instrument concept are presented.

  17. Those Do What? Connecting Eigenvectors and Eigenvalues to the Rest of Linear Algebra: Using Visual Enhancements to Help Students Connect Eigenvectors to the Rest of Linear Algebra

    ERIC Educational Resources Information Center

    Nyman, Melvin A.; Lapp, Douglas A.; St. John, Dennis; Berry, John S.

    2010-01-01

    This paper discusses student difficulties in grasping concepts from Linear Algebra--in particular, the connection of eigenvalues and eigenvectors to other important topics in linear algebra. Based on our prior observations from student interviews, we propose technology-enhanced instructional approaches that might positively impact student…

  18. Brain-heart linear and nonlinear dynamics during visual emotional elicitation in healthy subjects.

    PubMed

    Valenza, G; Greco, A; Gentili, C; Lanata, A; Toschi, N; Barbieri, R; Sebastiani, L; Menicucci, D; Gemignani, A; Scilingo, E P

    2016-08-01

This study investigates brain-heart dynamics during visual emotional elicitation in healthy subjects through linear and nonlinear coupling measures of the EEG spectrogram and instantaneous heart rate estimates. To this end, affective pictures including different combinations of arousal and valence levels, gathered from the International Affective Picture System, were administered to twenty-two healthy subjects. Time-varying maps of cortical activation were obtained through EEG spectral analysis, whereas the associated instantaneous heartbeat dynamics was estimated using inhomogeneous point-process linear models. Brain-heart linear and nonlinear coupling was estimated through the Maximal Information Coefficient (MIC), considering EEG time-varying spectra and point-process estimates defined in the time and frequency domains. As a proof of concept, we here show preliminary results for EEG oscillations in the θ band (4-8 Hz), which is known in the literature to be involved in emotional processes. MIC highlighted significant arousal-dependent changes, mediated by the prefrontal cortex interplay and especially occurring at intermediate arousing levels. Furthermore, lower and higher arousing elicitations were associated with non-significant brain-heart coupling changes in response to pleasant/unpleasant elicitations.

  19. An analysis of a large dataset on immigrant integration in Spain. The Statistical Mechanics perspective on Social Action

    NASA Astrophysics Data System (ADS)

    Barra, Adriano; Contucci, Pierluigi; Sandell, Rickard; Vernia, Cecilia

    2014-02-01

How does immigrant integration in a country change with immigration density? Guided by a statistical mechanics perspective, we propose a novel approach to this problem. The analysis focuses on classical integration quantifiers such as the percentage of jobs (temporary and permanent) given to immigrants, mixed marriages, and newborns with parents of mixed origin. We find that the average values of different quantifiers may exhibit either linear or non-linear growth with immigrant density, and we suggest that social action, a concept identified by Max Weber, causes the observed non-linearity. Using the statistical mechanics notion of interaction to quantitatively emulate social action, a unified mathematical model for integration is proposed and shown to explain both growth behaviors observed. The linear theory, by contrast, ignoring the possibility of interaction effects, would underestimate the quantifiers by up to 30% when immigrant densities are low, and overestimate them by as much when densities are high. The capacity to quantitatively isolate different types of integration mechanisms makes our framework a suitable tool in the quest for more efficient integration policies.

  20. Is the linear modeling technique good enough for optimal form design? A comparison of quantitative analysis models.

    PubMed

    Lin, Yang-Cheng; Yeh, Chung-Hsing; Wang, Chen-Cheng; Wei, Chun-Chun

    2012-01-01

    How to design highly reputable and hot-selling products is an essential issue in product design. Whether consumers choose a product depends largely on their perception of the product image. A consumer-oriented design approach presented in this paper helps product designers incorporate consumers' perceptions of product forms in the design process. The consumer-oriented design approach uses quantification theory type I, grey prediction (the linear modeling technique), and neural networks (the nonlinear modeling technique) to determine the optimal form combination of product design for matching a given product image. An experimental study based on the concept of Kansei Engineering is conducted to collect numerical data for examining the relationship between consumers' perception of product image and product form elements of personal digital assistants (PDAs). The result of performance comparison shows that the QTTI model is good enough to help product designers determine the optimal form combination of product design. Although the PDA form design is used as a case study, the approach is applicable to other consumer products with various design elements and product images. The approach provides an effective mechanism for facilitating the consumer-oriented product design process.

  2. Using a matrix-analytical approach to synthesizing evidence solved incompatibility problem in the hierarchy of evidence.

    PubMed

    Walach, Harald; Loef, Martin

    2015-11-01

    The hierarchy of evidence presupposes linearity and additivity of effects, as well as commutativity of knowledge structures. It thereby implicitly assumes a classical theoretical model. This is an argumentative article that uses theoretical analysis based on pertinent literature and known facts to examine the standard view of methodology. We show that the assumptions of the hierarchical model are wrong. The knowledge structures gained by various types of studies are not sequentially indifferent, that is, do not commute. External validity and internal validity are at least partially incompatible concepts. Therefore, one needs a different theoretical structure, typical of quantum-type theories, to model this situation. The consequence of this situation is that the implicit assumptions of the hierarchical model are wrong, if generalized to the concept of evidence in total. The problem can be solved by using a matrix-analytical approach to synthesizing evidence. Here, research methods that produce different types of evidence that complement each other are synthesized to yield the full knowledge. We show by an example how this might work. We conclude that the hierarchical model should be complemented by a broader reasoning in methodology. Copyright © 2015 Elsevier Inc. All rights reserved.
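The claim that knowledge structures "do not commute" borrows the defining algebraic feature of quantum-type theories: products of operators depend on order. A generic 2×2 matrix example makes the point concrete (the shear matrices below are a textbook illustration, not anything from the article):

```python
def matmul2(A, B):
    """2x2 matrix product."""
    return [[A[0][0] * B[0][0] + A[0][1] * B[1][0],
             A[0][0] * B[0][1] + A[0][1] * B[1][1]],
            [A[1][0] * B[0][0] + A[1][1] * B[1][0],
             A[1][0] * B[0][1] + A[1][1] * B[1][1]]]

A = [[1, 1], [0, 1]]   # a shear
B = [[1, 0], [1, 1]]   # the transposed shear
print(matmul2(A, B))   # [[2, 1], [1, 1]]
print(matmul2(B, A))   # [[1, 1], [1, 2]]  -- order matters: AB != BA
```

In the authors' analogy, running study type A before study type B need not yield the same knowledge state as the reverse order, just as AB differs from BA here.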

  3. An experimental study of the validity of the heat-field concept for sonic-boom alleviation

    NASA Technical Reports Server (NTRS)

    Swigart, R. J.

    1974-01-01

    An experimental program was carried out in the NASA-Langley 4 ft x 4 ft supersonic pressure tunnel to investigate the validity of the heat-field concept for sonic boom alleviation. The concept involves heating the flow about a supersonic aircraft in such a manner as to obtain an increase in effective aircraft length and yield an effective aircraft shape that will result in a shock-free pressure signature on the ground. First, a basic body-of-revolution representing an SST configuration with its lift equivalence in volume was tested to provide a baseline pressure signature. Second, a model having a 5/2-power area distribution which, according to theory, should yield a linear pressure rise with no front shock wave was tested. Third, the concept of providing the 5/2-power area distribution by using an off-axis slender fin below the basic body was investigated. Then a substantial portion (approximately 40 percent) of the solid fin was replaced by a heat field generated by passing heated nitrogen through the rear of the fin.

  4. Development of a Low Inductance Linear Alternator for Stirling Power Convertors

    NASA Technical Reports Server (NTRS)

    Geng, Steven M.; Schifer, Nicholas A.

    2017-01-01

    The free-piston Stirling power convertor is a promising technology for high-efficiency heat-to-electricity power conversion in space. Stirling power convertors typically utilize linear alternators for converting mechanical motion into electricity. The linear alternator is one of the heaviest components of modern Stirling power convertors. In addition, state-of-the-art Stirling linear alternators usually require the use of tuning capacitors or active power factor correction controllers to maximize convertor output power. The linear alternator to be discussed in this paper eliminates the need for tuning capacitors and delivers electrical power output in which current is inherently in phase with voltage. No power factor correction is needed. In addition, the linear alternator concept requires very little iron, so core loss has been virtually eliminated. This concept is a unique moving coil design where the magnetic flux path is defined by the magnets themselves. This paper presents computational predictions for two different low-inductance alternator configurations, and compares the predictions with experimental data for the configuration that has been built and is currently being tested.

  5. Development of a Low-Inductance Linear Alternator for Stirling Power Convertors

    NASA Technical Reports Server (NTRS)

    Geng, Steven M.; Schifer, Nicholas A.

    2017-01-01

    The free-piston Stirling power convertor is a promising technology for high-efficiency heat-to-electricity power conversion in space. Stirling power convertors typically utilize linear alternators for converting mechanical motion into electricity. The linear alternator is one of the heaviest components of modern Stirling power convertors. In addition, state-of-the-art Stirling linear alternators usually require the use of tuning capacitors or active power factor correction controllers to maximize convertor output power. The linear alternator to be discussed in this paper eliminates the need for tuning capacitors and delivers electrical power output in which current is inherently in phase with voltage. No power factor correction is needed. In addition, the linear alternator concept requires very little iron, so core loss has been virtually eliminated. This concept is a unique moving coil design where the magnetic flux path is defined by the magnets themselves. This paper presents computational predictions for two different low inductance alternator configurations. Additionally, one of the configurations was built and tested at GRC, and the experimental data is compared with the predictions.

  6. Structural stability of nonlinear population dynamics.

    PubMed

    Cenci, Simone; Saavedra, Serguei

    2018-01-01

    In population dynamics, the concept of structural stability has been used to quantify the tolerance of a system to environmental perturbations. Yet, measuring the structural stability of nonlinear dynamical systems remains a challenging task. For the classic Lotka-Volterra dynamics, the linearity of the functional response has made it possible to measure the conditions compatible with a structurally stable system. However, the functional response of biological communities is not always well approximated by deterministic linear functions. Thus, it is unclear to what extent this linear approach can be generalized to other population dynamics models. Here, we show that the same approach used to investigate the classic Lotka-Volterra dynamics, called the structural approach, can be applied to a much larger class of nonlinear models. This class covers a large number of nonlinear functional responses that have been intensively investigated both theoretically and experimentally. We also investigate the applicability of the structural approach to stochastic dynamical systems, and we provide a measure of structural stability for finite populations. Overall, we show that the structural approach can provide reliable and tractable information about the qualitative behavior of many nonlinear dynamical systems.

  7. Structured penalties for functional linear models-partially empirical eigenvectors for regression.

    PubMed

    Randolph, Timothy W; Harezlak, Jaroslaw; Feng, Ziding

    2012-01-01

    One of the challenges with functional data is incorporating geometric structure, or local correlation, into the analysis. This structure is inherent in the output from an increasing number of biomedical technologies, and a functional linear model is often used to estimate the relationship between the predictor functions and scalar responses. Common approaches to the problem of estimating a coefficient function typically involve two stages: regularization and estimation. Regularization is usually done via dimension reduction, projecting onto a predefined span of basis functions or a reduced set of eigenvectors (principal components). In contrast, we present a unified approach that directly incorporates geometric structure into the estimation process by exploiting the joint eigenproperties of the predictors and a linear penalty operator. In this sense, the components in the regression are 'partially empirical' and the framework is provided by the generalized singular value decomposition (GSVD). The form of the penalized estimation is not new, but the GSVD clarifies the process and informs the choice of penalty by making explicit the joint influence of the penalty and predictors on the bias, variance and performance of the estimated coefficient function. Laboratory spectroscopy data and simulations are used to illustrate the concepts.
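    The penalized estimator discussed here has the familiar generalized-ridge form: minimize ||y − Xβ||² + λ||Lβ||², where L is the linear penalty operator encoding the geometric structure. A minimal numerical sketch with toy data, using a second-difference matrix as a stand-in smoothness penalty (the paper's GSVD machinery and its spectroscopy data are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: n predictor curves observed at p points, smooth coefficient function.
n, p = 40, 60
t = np.linspace(0, 1, p)
beta_true = np.sin(2 * np.pi * t)       # smooth "true" coefficient function
X = rng.normal(size=(n, p))             # predictor curves (white-noise toy data)
y = X @ beta_true / p + rng.normal(scale=0.01, size=n)

# Second-difference penalty operator L encodes local smoothness structure.
L = np.diff(np.eye(p), n=2, axis=0)     # shape (p-2, p)

# Penalized least squares: (X'X + lam * L'L) beta = X'y.
lam = 1e-2
beta_hat = np.linalg.solve(X.T @ X + lam * (L.T @ L), X.T @ y)
```

    Note that with n < p the unpenalized problem is ill-posed; the penalty term is what makes the normal equations solvable, which is exactly the regularization role the abstract describes.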

  8. Structural stability of nonlinear population dynamics

    NASA Astrophysics Data System (ADS)

    Cenci, Simone; Saavedra, Serguei

    2018-01-01

    In population dynamics, the concept of structural stability has been used to quantify the tolerance of a system to environmental perturbations. Yet, measuring the structural stability of nonlinear dynamical systems remains a challenging task. For the classic Lotka-Volterra dynamics, the linearity of the functional response has made it possible to measure the conditions compatible with a structurally stable system. However, the functional response of biological communities is not always well approximated by deterministic linear functions. Thus, it is unclear to what extent this linear approach can be generalized to other population dynamics models. Here, we show that the same approach used to investigate the classic Lotka-Volterra dynamics, called the structural approach, can be applied to a much larger class of nonlinear models. This class covers a large number of nonlinear functional responses that have been intensively investigated both theoretically and experimentally. We also investigate the applicability of the structural approach to stochastic dynamical systems, and we provide a measure of structural stability for finite populations. Overall, we show that the structural approach can provide reliable and tractable information about the qualitative behavior of many nonlinear dynamical systems.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lue Xing; Sun Kun; Wang Pan

    In the framework of Bell-polynomial manipulations, three single-field bilinearizable equations are investigated: the (1+1)-dimensional shallow water wave model, the Boiti-Leon-Manna-Pempinelli model, and the (2+1)-dimensional Sawada-Kotera model. Based on the concept of scale invariance, a direct and unifying Bell-polynomial scheme is employed to obtain the Baecklund transformations and Lax pairs associated with these three soliton equations. Note that the Bell-polynomial expressions and Bell-polynomial-typed Baecklund transformations for the three soliton equations can, respectively, be cast into the bilinear equations and bilinear Baecklund transformations with symbolic computation. Consequently, it is also shown that the Bell-polynomial-typed Baecklund transformations can be linearized into the corresponding Lax pairs.

  10. A unified view of convective transports by stratocumulus clouds, shallow cumulus clouds, and deep convection

    NASA Technical Reports Server (NTRS)

    Randall, David A.

    1990-01-01

    A bulk planetary boundary layer (PBL) model was developed with a simple internal vertical structure and a simple second-order closure, designed for use as a PBL parameterization in a large-scale model. The model allows the mean fields to vary with height within the PBL, and so must address the vertical profiles of the turbulent fluxes, going beyond the usual mixed-layer assumption that the fluxes of conservative variables are linear with height. This is accomplished using the same convective mass flux approach that has also been used in cumulus parameterizations. The purpose is to show that such a mass flux model can include, in a single framework, the compensating subsidence concept, downgradient mixing, and well-mixed layers.

  11. UArizona at the CLEF eRisk 2017 Pilot Task: Linear and Recurrent Models for Early Depression Detection

    PubMed Central

    Sadeque, Farig; Xu, Dongfang; Bethard, Steven

    2017-01-01

    The 2017 CLEF eRisk pilot task focuses on automatically detecting depression as early as possible from a user's posts to Reddit. In this paper we present the techniques employed for the University of Arizona team's participation in this early risk detection shared task. We leveraged external information beyond the small training set, including a preexisting depression lexicon and concepts from the Unified Medical Language System as features. For prediction, we used both sequential (recurrent neural network) and non-sequential (support vector machine) models. Our models perform decently on the test data, and the recurrent neural models perform better than the non-sequential support vector machines when using the same feature sets. PMID:29075167
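    The lexicon-based features mentioned above amount to counting how much of a user's text falls in a curated word list. A minimal sketch of one such feature — the lexicon entries here are illustrative placeholders, not the actual resource the team used:

```python
# Illustrative mini-lexicon (NOT the team's actual depression lexicon).
DEPRESSION_LEXICON = {"hopeless", "tired", "alone", "worthless", "empty"}

def lexicon_feature(posts):
    """Fraction of tokens across all posts that appear in the lexicon."""
    tokens = [w.lower().strip(".,!?") for post in posts for w in post.split()]
    if not tokens:
        return 0.0
    hits = sum(t in DEPRESSION_LEXICON for t in tokens)
    return hits / len(tokens)

score = lexicon_feature(["I feel so tired and alone.", "Everything seems empty"])
# 3 lexicon hits out of 9 tokens -> score = 1/3
```

    A feature like this would then be concatenated with UMLS-concept indicators and fed to either the SVM or the recurrent model.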

  12. Design and fabrication of a hybrid maglev model employing PML and SML

    NASA Astrophysics Data System (ADS)

    Sun, R. X.; Zheng, J.; Zhan, L. J.; Huang, S. Y.; Li, H. T.; Deng, Z. G.

    2017-10-01

    A hybrid maglev model combining permanent magnet levitation (PML) and superconducting magnetic levitation (SML) was designed and fabricated to explore a heavy-load levitation system advancing in passive stability and simple structure. In this system, the PML was designed to levitate the load, and the SML was introduced to guarantee the stability. In order to realize different working gaps of the two maglev components, linear bearings were applied to connect the PML layer (for load) and the SML layer (for stability) of the hybrid maglev model. Experimental results indicate that the hybrid maglev model possesses excellent advantages of heavy-load ability and passive stability at the same time. This work presents a possible way to realize a heavy-load passive maglev concept.

  13. Experimental and numerical simulation of passive decay heat removal by sump cooling after core melt down

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knebel, J.U.; Kuhn, D.; Mueller, U.

    1997-12-01

    This article presents the basic physical phenomena and scaling criteria of passive decay heat removal from a large coolant pool by single-phase and two-phase natural circulation. The physical significance of the dimensionless similarity groups derived is evaluated. The above results are applied to the SUCO program that is performed at the Forschungszentrum Karlsruhe. The SUCO program is a three-step series of scaled model experiments investigating the possibility of a sump cooling concept for future light water reactors. The sump cooling concept is based on passive safety features within the containment. The work is supported by the German utilities and Siemens AG. The article gives results of temperature and velocity measurements in the 1:20 linearly scaled SUCOS-2D test facility. The experiments are backed up by numerical calculations using the commercial software package Fluent. Finally, using the similarity analysis from above, the experimental results of the model geometry are scaled up to the conditions in the prototype, allowing a first statement with regard to the feasibility of the sump cooling concept. 11 refs., 9 figs., 3 tabs.

  14. Preliminary Assessment of Optimal Longitudinal-Mode Control for Drag Reduction through Distributed Aeroelastic Shaping

    NASA Technical Reports Server (NTRS)

    Ippolito, Corey; Nguyen, Nhan; Lohn, Jason; Dolan, John

    2014-01-01

    The emergence of advanced lightweight materials is resulting in a new generation of lighter, flexible, more-efficient airframes that are enabling concepts for active aeroelastic wing-shape control to achieve greater flight efficiency and increased safety margins. These elastically shaped aircraft concepts require non-traditional methods for large-scale multi-objective flight control that simultaneously seek to gain aerodynamic efficiency in terms of drag reduction while performing traditional command-tracking tasks as part of a complete guidance and navigation solution. This paper presents results from a preliminary study of a notional multi-objective control law for an aeroelastic flexible-wing aircraft controlled through distributed continuous leading- and trailing-edge control surface actuators. This preliminary study develops and analyzes a multi-objective control law derived from optimal linear quadratic methods on a longitudinal vehicle dynamics model with coupled aeroelastic dynamics. The controller tracks the commanded angle of attack while minimizing drag and controlling wing twist and bend. This paper presents an overview of the elastic aircraft concept, outlines the coupled vehicle model, presents the preliminary control law formulation and implementation, presents results from simulation, provides analysis, and concludes by identifying possible future areas for research.

  15. Alignment of the Stanford Linear Collider Arcs: Concepts and results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pitthan, R.; Bell, B.; Friedsam, H.

    1987-02-01

    The alignment of the Arcs for the Stanford Linear Collider at SLAC has posed problems in accelerator survey and alignment not encountered before. These problems come less from the tight tolerances of 0.1 mm, although reaching such a tight, statistically defined accuracy in a controlled manner is difficult enough, but from the absence of a common reference plane for the Arcs. Traditional circular accelerators, including HERA and LEP, have been designed in one plane referenced to local gravity. For the SLC Arcs no such single plane exists. Methods and concepts developed to solve these and other problems, connected with the unique design of the SLC, range from the first use of satellites for accelerator alignment, use of electronic laser theodolites for placement of components, computer control of the manual adjustment process, complete automation of the data flow incorporating the most advanced concepts of geodesy, and strict separation of survey and alignment, to linear principal component analysis for the final statistical smoothing of the mechanical components.

  16. Critical load: a novel approach to determining a sustainable intensity during resistance exercise.

    PubMed

    Arakelian, Vivian M; Mendes, Renata G; Trimer, Renata; Rossi Caruso, Flavia C; de Sousa, Nuno M; Borges, Vanessa C; do Valle Gomes Gatto, Camila; Baldissera, Vilmar; Arena, Ross; Borghi-Silva, Audrey

    2017-05-01

    A hyperbolic function, as well as a linear relationship between power output and time to exhaustion (Tlim), has been consistently observed during dynamic non-resistive exercise. However, little is known about the application of this concept to resistance exercise (RE), where the analogous threshold could be defined as the critical load (CL). This study aimed to verify the existence of a CL during dynamic RE and to determine the number of workbouts necessary for optimal modeling to achieve it. Fifteen healthy men (23±2.5 yrs) completed a 1-repetition-maximum (1RM) test on a leg press and either 3 (60%, 75%, and 90% of 1RM) or 4 (adding 30% of 1RM) workbouts to obtain the CL from hyperbolic and linear regression models relating Tlim to the load performed. Blood lactate and leg fatigue were also measured. A CL was obtained during RE: the 3-workbout protocol estimated it at 53% of 1RM, and the 4-workbout protocol at 38%. Based on coefficients of determination, however, the 3-workbout protocol provided a better fit than the 4-workbout protocol (R2>0.95 vs. >0.77). Moreover, all intensities increased blood lactate and leg fatigue; however, when corrected by Tlim, both were significantly lower at the CL. It was possible to determine the CL during dynamic lower-limb RE, and 3 exhaustive workbouts can be used to better estimate it, constituting a new way of determining this threshold during dynamic RE while reducing the physically demanding nature of the protocol. These findings may have important applications for functional performance evaluation and the prescription of RE programs.
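    The linear form of the load-duration relationship regresses load on 1/Tlim; the intercept is the critical load and the slope is the curvature constant (often written W'). A minimal sketch with invented workbout data, not the study's measurements:

```python
import numpy as np

# Hypothetical workbout data: load (% of 1RM) and time to exhaustion (s).
load = np.array([60.0, 75.0, 90.0])
tlim = np.array([180.0, 90.0, 45.0])

# Linear model: load = CL + W' * (1/Tlim); the intercept is the critical load.
A = np.column_stack([np.ones_like(load), 1.0 / tlim])
(cl, w_prime), *_ = np.linalg.lstsq(A, load, rcond=None)
```

    With these invented numbers the intercept lands near 52% of 1RM, in the same ballpark as the 53% the study reports for its 3-workbout protocol.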

  17. The knowledge-value chain: A conceptual framework for knowledge translation in health.

    PubMed

    Landry, Réjean; Amara, Nabil; Pablos-Mendes, Ariel; Shademani, Ramesh; Gold, Irving

    2006-08-01

    This article briefly discusses knowledge translation and lists the problems associated with it. Then it uses knowledge-management literature to develop and propose a knowledge-value chain framework in order to provide an integrated conceptual model of knowledge management and application in public health organizations. The knowledge-value chain is a non-linear concept and is based on the management of five dyadic capabilities: mapping and acquisition, creation and destruction, integration and sharing/transfer, replication and protection, and performance and innovation.

  18. Multi-stage rescheduling of generation, load shedding and short-term transmission capacity for emergency state control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krogh, B.; Chow, J.H.; Javid, H.S.

    1983-05-01

    A multi-stage formulation of the problem of scheduling generation, load shedding and short term transmission capacity for the alleviation of a viability emergency is presented. The formulation includes generation rate of change constraints, a linear network solution, and a model of the short term thermal overload capacity of transmission lines. The concept of rotating transmission line overloads for emergency state control is developed. The ideas are illustrated by a numerical example.

  19. Automating spectral unmixing of AVIRIS data using convex geometry concepts

    NASA Technical Reports Server (NTRS)

    Boardman, Joseph W.

    1993-01-01

    Spectral mixture analysis, or unmixing, has proven to be a useful tool in the semi-quantitative interpretation of AVIRIS data. Using a linear mixing model and a set of hypothesized endmember spectra, unmixing seeks to estimate the fractional abundance patterns of the various materials occurring within the imaged area. However, the validity and accuracy of the unmixing rest heavily on the 'user-supplied' set of endmember spectra. Current methods for endmember determination are the weak link in the unmixing chain.
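    Under the linear mixing model, a pixel spectrum is an abundance-weighted sum of endmember spectra, so once endmembers are fixed, unmixing reduces to a least-squares problem. A toy sketch with invented endmember spectra (3 materials, 4 bands):

```python
import numpy as np

# Hypothetical endmember spectra: rows are materials, columns are bands.
E = np.array([[0.10, 0.30, 0.50, 0.70],
              [0.80, 0.60, 0.40, 0.20],
              [0.05, 0.60, 0.05, 0.60]]).T   # transpose -> bands x endmembers

# Linear mixing model: pixel = E @ fractional_abundances.
fractions_true = np.array([0.5, 0.3, 0.2])
pixel = E @ fractions_true

# Unconstrained least-squares unmixing (sum-to-one and nonnegativity would
# normally be checked or enforced afterwards).
f_hat, *_ = np.linalg.lstsq(E, pixel, rcond=None)
```

    The fragility the abstract points out lives in E: with a poor endmember set, the same least-squares step returns fractional abundances that fit the pixel numerically but mean little physically.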

  20. The knowledge-value chain: A conceptual framework for knowledge translation in health.

    PubMed Central

    Landry, Réjean; Amara, Nabil; Pablos-Mendes, Ariel; Shademani, Ramesh; Gold, Irving

    2006-01-01

    This article briefly discusses knowledge translation and lists the problems associated with it. Then it uses knowledge-management literature to develop and propose a knowledge-value chain framework in order to provide an integrated conceptual model of knowledge management and application in public health organizations. The knowledge-value chain is a non-linear concept and is based on the management of five dyadic capabilities: mapping and acquisition, creation and destruction, integration and sharing/transfer, replication and protection, and performance and innovation. PMID:16917645

  1. RF pulse shape control in the compact linear collider test facility

    NASA Astrophysics Data System (ADS)

    Kononenko, Oleksiy; Corsini, Roberto

    2018-07-01

    The Compact Linear Collider (CLIC) is a study for an electron-positron machine aiming at accelerating and colliding particles at the next energy frontier. The CLIC concept is based on a novel two-beam acceleration scheme, where a high-current, low-energy drive beam generates RF in a series of power extraction and transfer structures that accelerate the low-current main beam. To compensate for the transient beam loading and meet the energy-spread specification for the main linac, the RF pulse shape must be carefully optimized. This was recently modelled by varying the drive-beam phase switch times in the sub-harmonic buncher so that, when combined, the drive-beam modulation translates into the required voltage modulation of the accelerating pulse. In this paper, control over the RF pulse shape via the phase switches, which is crucial for the success of the developed compensation model, is studied. The results of an experimental verification of this control method are presented, and good agreement with the numerical predictions is demonstrated. Implications for the CLIC beam-loading compensation model are also discussed.

  2. Design considerations and analysis planning of a phase 2a proof of concept study in rheumatoid arthritis in the presence of possible non-monotonicity.

    PubMed

    Liu, Feng; Walters, Stephen J; Julious, Steven A

    2017-10-02

    It is important to quantify the dose response for a drug in phase 2a clinical trials so that optimal doses can be selected for subsequent late-phase trials. In a phase 2a clinical trial of a new lead drug being developed for the treatment of rheumatoid arthritis (RA), a U-shaped dose-response curve was observed. In light of this result, further research was undertaken to design an efficient phase 2a proof of concept (PoC) trial for a follow-on compound using the lessons learnt from the lead compound. The planned analysis for the phase 2a trial of GSK123456 was a Bayesian Emax model, which assumes the dose-response relationship follows a monotonic sigmoid "S"-shaped curve. This model was found to be suboptimal for the U-shaped dose response observed in the data from this trial, and alternative approaches needed to be considered for the next compound, for which a normal dynamic linear model (NDLM) is proposed. This paper compares the statistical properties of the Bayesian Emax and NDLM models; both are evaluated by simulation in the context of an adaptive phase 2a PoC design under a variety of assumed dose-response curves: linear, Emax, U-shaped, and flat. It is shown that the NDLM method is flexible and can handle a wide variety of dose responses, including monotonic and non-monotonic relationships. In comparison to the NDLM, the Emax model excelled, with a higher probability of selecting the ED90 and a smaller average sample size, when the true dose response followed an Emax-like curve. In addition, the type I error, the probability of incorrectly concluding a drug may work when it does not, is inflated with the Bayesian NDLM model in all scenarios, which would represent a development risk to a pharmaceutical company. The bias, which is the difference between the effect estimated by the Emax or NDLM model and the simulated value, is comparable if the true dose response follows a placebo-like curve, an Emax-like curve, or a log-linear curve, under fixed-allocation, non-adaptive, half-adaptive, and adaptive scenarios. The bias, though, is significantly increased for the Emax model if the true dose response follows a U-shaped curve. In most cases the Bayesian Emax model works effectively and efficiently, with low bias and a good probability of success for a monotonic dose response. However, if there is a belief that the dose response could be non-monotonic, then the NDLM is the superior model for assessing the dose response.
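    The monotonicity assumption at issue is visible in the Emax model's functional form: the response rises hyperbolically with dose toward a plateau, so it can never bend back down the way a U-shaped response does. A sketch of the curve with illustrative parameter values (not trial estimates):

```python
import numpy as np

def emax_model(dose, e0, emax, ed50):
    """Hyperbolic Emax dose-response curve: monotonically increasing in dose."""
    return e0 + emax * dose / (ed50 + dose)

# Illustrative parameters: placebo response e0 = 1.0, maximum effect
# emax = 10.0, half-maximal dose ed50 = 20.0.
doses = np.array([0.0, 5.0, 20.0, 80.0])
response = emax_model(doses, e0=1.0, emax=10.0, ed50=20.0)
# response at dose 0 is e0; at dose == ed50 it is e0 + emax/2
```

    An NDLM, by contrast, places a smooth dynamic prior on per-dose effects rather than a parametric curve, which is what lets it follow a non-monotonic shape.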

  3. An experiential mind-body approach to the management of medically unexplained symptoms.

    PubMed

    Bakal, D; Steiert, M; Coll, P; Schaefer, J

    2006-01-01

    This article outlines an experiential mind-body framework for understanding and treating patients with medically unexplained symptoms. The model relies on somatic awareness, a normal part of consciousness, to resolve the mind-body dualism inherent in conventional multidisciplinary approaches. Somatic awareness represents a guiding healing heuristic which allows for a linear treatment application of the biopsychosocial model. The heuristic acknowledges the validity of the patient's physical symptoms and identifies psychological and social factors needed for the healing process. Somatic awareness is used to direct changes in coping styles, illness beliefs, medication dependence and personal dynamics that are necessary to achieve symptom control. The mind-body concept is consistent with and supported by neurobiological models which draw on central nervous system mechanisms to explain medically unexplained symptoms. The concept is also supported by a recent hypothesis concerning the role peripheral connective tissue may play in influencing illness and well-being. Finally, somatic awareness is described as having potential to enhance understanding and conscious use of inner healing mechanisms at the basis of the placebo effect.

  4. Mathematical modeling of aeroelastic systems

    NASA Astrophysics Data System (ADS)

    Velmisov, Petr A.; Ankilov, Andrey V.; Semenova, Elizaveta P.

    2017-12-01

    In the paper, the stability of the elastic elements of a class of structures interacting with a gas or liquid flow is investigated. The definition of the stability of an elastic body corresponds to the Lyapunov concept of stability of dynamical systems. As examples, mathematical models of flow channels (models of vibration devices) in subsonic flow and mathematical models of a protective surface in supersonic flow are considered. The models are described by coupled systems of partial differential equations. An analytic investigation of stability is carried out on the basis of the construction of Lyapunov-type functionals; a numerical investigation is carried out on the basis of the Galerkin method. Various models of the gas-liquid medium (compressible, incompressible) and various models of the deformable body (linearly elastic and nonlinearly elastic) are considered.

  5. Free electron lasers driven by linear induction accelerators: High power radiation sources

    NASA Technical Reports Server (NTRS)

    Orzechowski, T. J.

    1989-01-01

    The technology of Free Electron Lasers (FELs) and linear induction accelerators (LIAs) is addressed by outlining the following topics: fundamentals of FELs; basic concepts of linear induction accelerators; the Electron Laser Facility (a microwave FEL); PALADIN (an infrared FEL); magnetic switching; IMP; and future directions (relativistic klystrons). This presentation is represented by viewgraphs only.

  6. Student Learning of Basis, Span and Linear Independence in Linear Algebra

    ERIC Educational Resources Information Center

    Stewart, Sepideh; Thomas, Michael O. J.

    2010-01-01

    One of the earlier, more challenging concepts in linear algebra at university is that of basis. Students are often taught procedurally how to find a basis for a subspace using matrix manipulation, but may struggle with understanding the construct of basis, making further progress harder. We believe one reason for this is because students have…

  7. Magnetic Flux Distribution of Linear Machines with Novel Three-Dimensional Hybrid Magnet Arrays

    PubMed Central

    Yao, Nan; Yan, Liang; Wang, Tianyi; Wang, Shaoping

    2017-01-01

    The objective of this paper is to propose a novel tubular linear machine with hybrid permanent magnet arrays and multiple movers, which could be employed for either actuation or sensing technology. The hybrid magnet array produces flux distribution on both sides of the windings, and thus helps to increase the signal strength in the windings. The multiple movers are important for aerospace technology, because they can improve the system's redundancy and reliability. The proposed design concept is presented, and the governing equations are obtained based on the source-free property and Maxwell's equations. The magnetic field distribution in the linear machine is thus analytically formulated by using Bessel functions and a harmonic expansion of the magnetization vector. Numerical simulation is then conducted to validate the analytical solutions of the magnetic flux field. It is proved that the analytical model agrees well with the numerical results. Therefore, it can be utilized for the formulation of signal or force output subsequently, depending on its particular implementation. PMID:29156577

  8. Magnetic Flux Distribution of Linear Machines with Novel Three-Dimensional Hybrid Magnet Arrays.

    PubMed

    Yao, Nan; Yan, Liang; Wang, Tianyi; Wang, Shaoping

    2017-11-18

    The objective of this paper is to propose a novel tubular linear machine with hybrid permanent magnet arrays and multiple movers, which could be employed for either actuation or sensing technology. The hybrid magnet array produces flux distribution on both sides of the windings, and thus helps to increase the signal strength in the windings. The multiple movers are important for aerospace technology, because they can improve the system's redundancy and reliability. The proposed design concept is presented, and the governing equations are obtained based on the source-free property and Maxwell's equations. The magnetic field distribution in the linear machine is thus analytically formulated by using Bessel functions and a harmonic expansion of the magnetization vector. Numerical simulation is then conducted to validate the analytical solutions of the magnetic flux field. It is proved that the analytical model agrees well with the numerical results. Therefore, it can be utilized for the formulation of signal or force output subsequently, depending on its particular implementation.

  9. A simulation model for the determination of tabarru' rate in a family takaful

    NASA Astrophysics Data System (ADS)

    Ismail, Hamizun bin

    2014-06-01

The concept of tabarru' that is incorporated in family takaful serves to eliminate the element of uncertainty in the contract, as a participant agrees to relinquish a certain portion of his contribution as a donation. The most important feature of family takaful is that it does not guarantee a definite return on a participant's contribution, unlike its conventional counterpart, where a premium is paid in return for a guaranteed amount of insurance benefit. In other words, the investment return on the funds contributed by the participants is based on actual investment experience. The objective of this study is to set up a framework for the determination of the tabarru' rate by simulation. The model is based on a binomial death process. Specifically, a linear tabarru' rate and a flat tabarru' rate are introduced. The results of the simulation trials show that the linear assumption on the tabarru' rate has an advantage over its flat counterpart as far as the risk of the investment accumulation at maturity is concerned.
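
As an illustration only, a simulation framework of the kind described might be sketched as follows: deaths follow a binomial process each year, a flat or linearly increasing tabarru' rate is deducted from contributions, and the remainder is accumulated to maturity. All parameters (cohort size, mortality, rates, growth) are invented, and the risk-fund side of the contract is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(42)

N0 = 1000        # hypothetical number of participants
T = 10           # term of the takaful plan in years
q = 0.002        # assumed annual mortality probability
c = 100.0        # annual contribution per participant

flat_rates = np.full(T, 0.05)               # flat tabarru' rate
linear_rates = np.linspace(0.02, 0.08, T)   # linear tabarru' rate, same mean

def maturity_accumulation(rates, trials=2000, growth=1.03):
    """Simulate the investment accumulation per surviving participant
    at maturity, with deaths drawn from a binomial process each year."""
    acc = np.zeros(trials)
    for k in range(trials):
        alive, savings = N0, 0.0
        for t in range(T):
            # The contribution net of the tabarru' donation is invested.
            savings = growth * savings + alive * c * (1.0 - rates[t])
            alive -= rng.binomial(alive, q)
        acc[k] = savings / alive
    return acc

flat_acc = maturity_accumulation(flat_rates)
lin_acc = maturity_accumulation(linear_rates)
print(flat_acc.mean(), lin_acc.mean())
```

Comparing the spread of `flat_acc` and `lin_acc` across trials gives one way to compare the maturity-accumulation risk of the two rate assumptions.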

  10. A fibre-coupled UHV-compatible variable angle reflection-absorption UV/visible spectrometer

    NASA Astrophysics Data System (ADS)

    Stubbing, J. W.; Salter, T. L.; Brown, W. A.; Taj, S.; McCoustra, M. R. S.

    2018-05-01

We present a novel UV/visible reflection-absorption spectrometer for determining the refractive index, n, and thickness, d, of ice films. Knowledge of the refractive index of these films is of particular relevance to the astrochemical community, where it can be used to model radiative transfer and spectra of various regions of space. In order to make these models more accurate, values of n need to be recorded under astronomically relevant conditions, that is, under ultra-high vacuum (UHV) and cryogenic cooling. Several design considerations were taken into account to allow UHV compatibility combined with ease of use. The key design feature is a stainless steel rhombus coupled to an external linear drive (z-shift), allowing the variable reflection geometry that our analysis requires. Test data for amorphous benzene ice are presented as a proof of concept; the film thickness, d, was found to vary linearly with surface exposure, and a value for n of 1.43 ± 0.07 was determined.

  11. PowerPoint and Concept Maps: A Great Double Act

    ERIC Educational Resources Information Center

    Simon, Jon

    2015-01-01

    This article explores how concept maps can provide a useful addition to PowerPoint slides to convey interconnections of knowledge and help students see how knowledge is often non-linear. While most accounting educators are familiar with PowerPoint, they are likely to be less familiar with concept maps and this article shows how the tool can be…

  12. Elementary properties of triangle in normed spaces

    NASA Astrophysics Data System (ADS)

    Triana, Deri; Yunus, Mahmud

    2018-03-01

Based on concepts of trigonometry in the plane, in this paper we generalize those concepts to normed spaces. While the orthogonality concept between two vectors is already well known, we are interested in developing elementary properties of the triangle, especially the properties of its angles. We propose a non-linear (Wilson) functional to define an angle and explore its properties.

  13. Computational model for simulation of sequences of helicity and angular momentum transfer in turbid tissue-like scattering medium (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Doronin, Alexander; Meglinski, Igor

    2017-02-01

This report considers the development of a unified Monte Carlo (MC) -based computational model for simulating the propagation of Laguerre-Gaussian (LG) beams in turbid tissue-like scattering media. With the primary goal of proving the concept of using complex light for tissue diagnosis, we explore the propagation of LG beams in comparison with Gaussian beams for both linear and circular polarization. MC simulations of radially and azimuthally polarized LG beams in turbid media have been performed; classic phenomena such as preservation of the orbital angular momentum, optical memory and helicity flip are observed, and a detailed comparison is presented and discussed.

  14. Capacity planning for waste management systems: an interval fuzzy robust dynamic programming approach.

    PubMed

    Nie, Xianghui; Huang, Guo H; Li, Yongping

    2009-11-01

    This study integrates the concepts of interval numbers and fuzzy sets into optimization analysis by dynamic programming as a means of accounting for system uncertainty. The developed interval fuzzy robust dynamic programming (IFRDP) model improves upon previous interval dynamic programming methods. It allows highly uncertain information to be effectively communicated into the optimization process through introducing the concept of fuzzy boundary interval and providing an interval-parameter fuzzy robust programming method for an embedded linear programming problem. Consequently, robustness of the optimization process and solution can be enhanced. The modeling approach is applied to a hypothetical problem for the planning of waste-flow allocation and treatment/disposal facility expansion within a municipal solid waste (MSW) management system. Interval solutions for capacity expansion of waste management facilities and relevant waste-flow allocation are generated and interpreted to provide useful decision alternatives. The results indicate that robust and useful solutions can be obtained, and the proposed IFRDP approach is applicable to practical problems that are associated with highly complex and uncertain information.

  15. Improving sub-grid scale accuracy of boundary features in regional finite-difference models

    USGS Publications Warehouse

    Panday, Sorab; Langevin, Christian D.

    2012-01-01

As an alternative to grid refinement, the concept of a ghost node, which was developed for nested grid applications, has been extended towards improving sub-grid scale accuracy of flow to conduits, wells, rivers or other boundary features that interact with a finite-difference groundwater flow model. The formulation is presented for correcting the regular finite-difference groundwater flow equations for confined and unconfined cases, with or without Newton-Raphson linearization of the nonlinearities, to include the Ghost Node Correction (GNC) for location displacement. The correction may be applied on the right-hand side vector for a symmetric finite-difference Picard implementation, or on the left-hand side matrix for an implicit but asymmetric implementation. The finite-difference matrix connectivity structure may be maintained for an implicit implementation by only selecting contributing nodes that are a part of the finite-difference connectivity. Proof-of-concept example problems are provided to demonstrate the improved accuracy that may be achieved through sub-grid scale corrections using the GNC schemes.

  16. Concept and design of super junction devices

    NASA Astrophysics Data System (ADS)

    Zhang, Bo; Zhang, Wentong; Qiao, Ming; Zhan, Zhenya; Li, Zhaoji

    2018-02-01

The super junction (SJ) has been recognized as the "milestone" of the power MOSFET, and it is the most important innovation concept for the voltage-sustaining layer (VSL). The basic structure of the SJ is a typical junction-type VSL (J-VSL) with periodic N and P regions, whereas the conventional VSL is a typical resistance-type VSL (R-VSL) with only an N or P region. The change from the R-VSL to the J-VSL is a qualitative change of the VSL, introducing bulk depletion to increase the doping concentration and optimize the bulk electric field of the SJ. This paper first summarizes the development of the SJ, and then the optimization theory of the SJ is discussed for both vertical and lateral devices, including the non-full depletion mode, the minimum specific on-resistance optimization method and the equivalent substrate model. The SJ concept breaks the conventional "silicon limit" relationship of R_on ∝ V_B^2.5, showing a quasi-linear relationship of R_on ∝ V_B^1.03.

  17. Knowing Inquiry as Practice and Theory: Developing a Pedagogical Framework with Elementary School Teachers

    NASA Astrophysics Data System (ADS)

    Poon, Chew-Leng; Lee, Yew-Jin; Tan, Aik-Ling; Lim, Shirley S. L.

    2012-04-01

    In this paper, we characterize the inquiry practices of four elementary school teachers by means of a pedagogical framework. Our study revealed core components of inquiry found in theoretically-driven models as well as practices that were regarded as integral to the success of day-to-day science teaching in Singapore. This approach towards describing actual science inquiry practices—a surprisingly neglected area—uncovered nuances in teacher instructions that can impact inquiry-based lessons as well as contribute to a practice-oriented perspective of science teaching. In particular, we found that these teachers attached importance to (a) preparing students for investigations, both cognitively and procedurally; (b) iterating pedagogical components where helping students understand and construct concepts did not follow a planned linear path but involved continuous monitoring of learning; and (c) synthesizing concepts in a consolidation phase. Our findings underscore the dialectical relationship between practice-oriented knowledge and theoretical conceptions of teaching/learning thereby helping educators better appreciate how teachers adapt inquiry science for different contexts.

  18. [Evaluation of a Self-Help Supported Counseling Concept for Children and Adolescents with Disproportional Short Stature].

    PubMed

    Rohenkohl, A C; Sommer, R; Kahrs, S; Bullinger, M; Klingebiel, K-H; Quitmann, J H

    2016-01-01

    Disproportionate short stature may impair the quality of life (QoL) of patients and their families. This study aimed to evaluate a self-help supported counseling concept to increase the QoL of the participants. QoL data from 58 children/adolescents (8-17 years) with a diagnosis of achondroplasia was collected at 2 measurement points during one year using the the QoLISSY questionnaire (self-/parental report). Differences before and after participation vs. non-participation in the intervention were evaluated using a linear mixed model. The longitudinal results show a greater increase of QoL in the active intervention group compared to a passive control group (p=0,005). The increase in the self-reported QoL of affected patients was significantly higher than for the parent-report (p=0,048). The study shows that patients with achondroplasia benefit from a self-help supported counseling concept. However, this should be tested in a randomized trial. © Georg Thieme Verlag KG Stuttgart · New York.

  19. Protoplanetary disc `isochrones' and the evolution of discs in the Ṁ-Md plane

    NASA Astrophysics Data System (ADS)

    Lodato, Giuseppe; Scardoni, Chiara E.; Manara, Carlo F.; Testi, Leonardo

    2017-12-01

In this paper, we compare simple viscous diffusion models for disc evolution with the results of recent surveys of the properties of young protoplanetary discs. We introduce the useful concept of 'disc isochrones' in the accretion rate-disc mass plane and explore a set of Monte Carlo realizations of disc initial conditions. We find that such simple viscous models can provide a remarkable agreement with the available data in the Lupus star forming region, with the key requirement that the average viscous evolutionary time-scale of the discs is comparable to the cluster age. Our models naturally produce a correlation between mass accretion rate and disc mass that is shallower than linear, contrary to previous results and in agreement with observations. We also predict that a linear correlation, with a tighter scatter, should be found for more evolved disc populations. Finally, we find that such viscous models can reproduce the observations in the Lupus region only under the assumption that the efficiency of angular momentum transport is a growing function of radius, thus putting interesting constraints on the nature of the microscopic processes that lead to disc accretion.

  20. A new modal superposition method for nonlinear vibration analysis of structures using hybrid mode shapes

    NASA Astrophysics Data System (ADS)

    Ferhatoglu, Erhan; Cigeroglu, Ender; Özgüven, H. Nevzat

    2018-07-01

In this paper, a new modal superposition method based on a hybrid mode shape concept is developed for the determination of the steady-state vibration response of nonlinear structures. The method is developed specifically for systems having nonlinearities where the stiffness of the system may take different limiting values. The stiffness variation of these nonlinear systems enables one to define different linear systems corresponding to each value of the limiting equivalent stiffness. Moreover, the response of the nonlinear system is bounded by the confinement of these linear systems. In this study, a modal superposition method utilizing novel hybrid mode shapes, which are defined as linear combinations of the modal vectors of the limiting linear systems, is proposed to determine the periodic response of nonlinear systems. In this method the response of the nonlinear system is written in terms of hybrid modes instead of the modes of the underlying linear system. This decreases the number of modes that must be retained for an accurate solution, which in turn reduces the number of nonlinear equations to be solved and directly curtails the computational time for response calculation. In the solution, the equations of motion are converted to a set of nonlinear algebraic equations by using the describing function approach, and the numerical solution is obtained by using Newton's method with arc-length continuation. The method developed is applied to two different systems: a lumped parameter model and a finite element model. Several case studies are performed, and the accuracy and computational efficiency of the proposed modal superposition method with hybrid mode shapes are compared with those of the classical modal superposition method, which utilizes the mode shapes of the underlying linear system.
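
As a rough illustration of the hybrid-mode idea, the sketch below builds the two limiting linear systems of a 3-DOF chain whose end spring switches between a soft and a stiff limiting stiffness, and forms hybrid mode shapes as a linear combination of the limiting systems' modal vectors. The matrices, stiffness values and the mixing weight are all hypothetical; in the actual method the combination coefficients come from the nonlinear solution.

```python
import numpy as np

# 3-DOF spring-mass chain with a nonlinear end spring whose stiffness
# varies between two limiting values (hypothetical numbers).
def stiffness(k_end):
    return np.array([[ 2.0, -1.0,  0.0],
                     [-1.0,  2.0, -1.0],
                     [ 0.0, -1.0,  1.0 + k_end]])

def modes(K):
    # Unit masses (M = I), so the generalized eigenproblem reduces to
    # an ordinary symmetric eigenproblem.
    w2, phi = np.linalg.eigh(K)
    return np.sqrt(w2), phi

w_lo, phi_lo = modes(stiffness(0.0))   # limiting system: soft end spring
w_hi, phi_hi = modes(stiffness(5.0))   # limiting system: stiff end spring

# A hybrid mode shape is a linear combination of the limiting systems'
# modal vectors; alpha is chosen arbitrarily here for illustration.
alpha = 0.3
phi_hybrid = alpha * phi_lo + (1.0 - alpha) * phi_hi
print(phi_hybrid.round(3))
```

Writing the nonlinear response in such hybrid modes, rather than in the modes of a single underlying linear system, is what lets the method retain fewer modes for the same accuracy.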

  1. Low-Loss Materials for Josephson Qubits

    DTIC Science & Technology

    2014-10-09

quantum circuit. It also intuitively explains how for a linear circuit the standard results for electrical circuits are obtained, justifying the use of... linear concepts for a weakly non-linear device such as the transmon. It has also become common to use a double-sided noise spectrum to represent...loss tangent of large area pad junction. (c) Effective linearized circuit for the double junction, which makes up the admittance $Y$. $L_j$ is the

  2. A study on nonlinear estimation of submaximal effort tolerance based on the generalized MET concept and the 6MWT in pulmonary rehabilitation

    PubMed Central

    Szczegielniak, Jan; Łuniewski, Jacek; Stanisławski, Rafał; Bogacz, Katarzyna; Krajczy, Marcin; Rydel, Marek

    2018-01-01

Background The six-minute walk test (6MWT) is considered a simple and inexpensive tool for the assessment of functional tolerance of submaximal effort. The aims of this work were 1) to provide background on the nonlinear nature of the energy expenditure process due to physical activity, 2) to compare the results/scores of the submaximal treadmill exercise test with those of the 6MWT in pulmonary patients and 3) to develop nonlinear mathematical models relating the two. Methods The study group included patients with COPD. All patients were subjected to a submaximal exercise test and a 6MWT. To develop an optimal mathematical solution and compare the results of the exercise test and the 6MWT, least squares and genetic algorithms were employed to estimate the parameters of polynomial expansion and piecewise linear models. Results Mathematical analysis enabled the construction of nonlinear models for estimating the MET result of the submaximal exercise test based on average walk velocity (or distance) in the 6MWT. Conclusions Submaximal effort tolerance in COPD patients can be effectively estimated from new, rehabilitation-oriented, nonlinear models based on the generalized MET concept and the 6MWT. PMID:29425213
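
As a rough sketch of the polynomial-expansion branch of such a model, the example below fits a second-order polynomial by least squares to a handful of velocity/MET pairs. The numbers are synthetic, invented for illustration, and are not the study's data.

```python
import numpy as np

# Hypothetical paired observations: average 6MWT walk velocity (km/h)
# and submaximal treadmill test result in METs (synthetic data only).
v = np.array([2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0])
met = np.array([2.1, 2.6, 3.3, 4.2, 5.3, 6.7, 8.4])

# Second-order polynomial expansion fitted by ordinary least squares.
coeffs = np.polyfit(v, met, deg=2)
met_hat = np.polyval(coeffs, v)
rmse = np.sqrt(np.mean((met - met_hat) ** 2))
print(coeffs, rmse)
```

A piecewise linear model, the study's other branch, would replace the polynomial basis with segment-wise linear terms whose breakpoints are estimated, e.g. by a genetic algorithm.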

  3. A morphological perceptron with gradient-based learning for Brazilian stock market forecasting.

    PubMed

    Araújo, Ricardo de A

    2012-04-01

Several linear and non-linear techniques have been proposed to solve the stock market forecasting problem. However, a limitation arises in all of these techniques, known as the random walk dilemma (RWD). In this scenario, forecasts generated by arbitrary models have a characteristic one-step-ahead delay with respect to the time series values, so that there is a time phase distortion in the reconstruction of stock market phenomena. In this paper, we propose a suitable model inspired by concepts from mathematical morphology (MM) and lattice theory (LT). This model is generically called the increasing morphological perceptron (IMP). We also present a gradient steepest descent method to design the proposed IMP, based on ideas from the back-propagation (BP) algorithm and using a systematic approach to overcome the problem of the non-differentiability of morphological operations. Into the learning process we have included a procedure to overcome the RWD: an automatic correction step geared toward eliminating the time phase distortions that occur in stock market phenomena. Furthermore, an experimental analysis is conducted with the IMP using four complex non-linear time series forecasting problems from the Brazilian stock market. Additionally, two natural phenomena time series are used to assess the forecasting performance of the proposed IMP on non-financial time series. Finally, the obtained results are discussed and compared to results found using models recently proposed in the literature. Copyright © 2011 Elsevier Ltd. All rights reserved.
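
A minimal sketch of the max-plus/min-plus operations underlying a morphological perceptron node, assuming a common dilation/erosion formulation with a hypothetical mixing weight `lam` (the paper's IMP design and its gradient-based training are more elaborate):

```python
import numpy as np

def dilation_node(x, w):
    """Morphological 'dilation' node: max-plus product of input and
    weights, the basic operation of a morphological perceptron."""
    return np.max(x + w)

def erosion_node(x, m):
    """Dual 'erosion' node: min-plus product of input and weights."""
    return np.min(x + m)

def imp_output(x, w, m, lam=0.5):
    # Convex mix of dilation and erosion; lam is a hypothetical
    # mixing weight chosen here only for illustration.
    return lam * dilation_node(x, w) + (1.0 - lam) * erosion_node(x, m)

x = np.array([1.0, 2.0, 3.0])
w = np.array([0.5, 0.0, -0.5])
m = np.array([-0.5, 0.0, 0.5])
print(imp_output(x, w, m))  # → 1.5
```

Because max and min are non-differentiable, a gradient-based scheme must propagate (sub)gradients through only the extremal entry; handling this systematically is exactly the non-differentiability problem the abstract refers to.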

  4. Modeling the Inactivation of Intestinal Pathogenic Escherichia coli O157:H7 and Uropathogenic E. coli in Ground Chicken by High Pressure Processing and Thymol

    PubMed Central

    Chien, Shih-Yung; Sheen, Shiowshuh; Sommers, Christopher H.; Sheen, Lee-Yan

    2016-01-01

Disease-causing Escherichia coli commonly found in meat and poultry include intestinal pathogenic E. coli (iPEC) as well as extraintestinal types such as Uropathogenic E. coli (UPEC). In this study we compared the resistance of iPEC (O157:H7) to that of UPEC in chicken meat under High Pressure Processing (HPP) with and without thymol essential oil as a sensitizer (the hurdle concept). UPEC was found to be slightly more resistant than E. coli O157:H7 (iPEC O157:H7) at 450 and 500 MPa. A central composite experimental design was used to evaluate the effect of pressure (300–400 MPa), thymol concentration (100–200 ppm), and pressure-holding time (10–20 min) on the inactivation of iPEC O157:H7 and UPEC in ground chicken. The hurdle approach reduced the high pressure levels and thymol doses imposed on the food matrices and potentially decreased food quality damage after treatment. Quadratic equations were developed to predict the impact (lethality) on iPEC O157:H7 (R2 = 0.94) and UPEC (R2 = 0.98), as well as dimensionless non-linear models [Pr > F (<0.0001)]. Both the linear and non-linear models were validated with data obtained from separate experimental points. All models predict the inactivation/lethality within the same order of accuracy; however, the dimensionless non-linear models showed potential for application with parameters outside the central composite design ranges. The results provide useful information on how both iPEC O157:H7 and UPEC may survive HPP in the presence or absence of thymol. The models may further assist regulatory agencies and the food industry in assessing the potential risk of iPEC O157:H7 and UPEC in ground chicken. PMID:27379050
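
The response-surface machinery behind such a central composite design can be sketched as an ordinary least-squares fit of a full quadratic model. Everything below, apart from the factor ranges quoted in the abstract, uses synthetic data and invented coefficients, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic factor settings: pressure P (MPa), thymol C (ppm),
# holding time t (min), drawn within the abstract's ranges.
P = rng.uniform(300, 400, 30)
C = rng.uniform(100, 200, 30)
t = rng.uniform(10, 20, 30)

# Hypothetical "true" quadratic lethality surface used to fabricate data.
y = (-20 + 0.05 * P + 0.01 * C + 0.1 * t
     + 1e-5 * P * C - 4e-5 * P ** 2 + rng.normal(0, 0.05, 30))

# Full quadratic design matrix: intercept, linear, interaction, square terms.
X = np.column_stack([np.ones_like(P), P, C, t,
                     P * C, P * t, C * t,
                     P ** 2, C ** 2, t ** 2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
r2 = 1 - np.sum((y - X @ beta) ** 2) / np.sum((y - y.mean()) ** 2)
print(r2)
```

The fitted `beta` plays the role of the quadratic equations' coefficients; validation points outside the design matrix (as in the study) would be predicted with `X_new @ beta`.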

  5. Citygml and the Streets of New York - a Proposal for Detailed Street Space Modelling

    NASA Astrophysics Data System (ADS)

    Beil, C.; Kolbe, T. H.

    2017-10-01

    Three-dimensional semantic city models are increasingly used for the analysis of large urban areas. Until now the focus has mostly been on buildings. Nonetheless many applications could also benefit from detailed models of public street space for further analysis. However, there are only few guidelines for representing roads within city models. Therefore, related standards dealing with street modelling are examined and discussed. Nearly all street representations are based on linear abstractions. However, there are many use cases that require or would benefit from the detailed geometrical and semantic representation of street space. A variety of potential applications for detailed street space models are presented. Subsequently, based on related standards as well as on user requirements, a concept for a CityGML-compliant representation of street space in multiple levels of detail is developed. In the course of this process, the CityGML Transportation model of the currently valid OGC standard CityGML2.0 is examined to discover possibilities for further developments. Moreover, a number of improvements are presented. Finally, based on open data sources, the proposed concept is implemented within a semantic 3D city model of New York City generating a detailed 3D street space model for the entire city. As a result, 11 thematic classes, such as roadbeds, sidewalks or traffic islands are generated and enriched with a large number of thematic attributes.

  6. Robust hopping based on virtual pendulum posture control.

    PubMed

    Sharbafi, Maziar A; Maufroy, Christophe; Ahmadabadi, Majid Nili; Yazdanpanah, Mohammad J; Seyfarth, Andre

    2013-09-01

A new control approach to achieve robust hopping against perturbations in the sagittal plane is presented in this paper. In perturbed hopping, vertical body alignment plays a significant role in stability. Our approach is based on the recently proposed virtual pendulum concept, which arises from experimental findings in human and animal locomotion. In this concept, the ground reaction forces are pointed to a virtual support point, named the virtual pivot point (VPP), during motion. This concept is employed in designing the controller to balance the trunk during the stance phase. New strategies for leg angle and length adjustment, besides the virtual pendulum posture control, are proposed as a unified controller. This method is investigated by applying it to an extension of the spring loaded inverted pendulum (SLIP) model: trunk, leg mass and damping are added to the SLIP model in order to make it more realistic. The stability is analyzed by Poincaré map analysis. With a fixed VPP position, stability, disturbance rejection and moderate robustness are achieved, but with a low convergence speed. To improve the performance and attain higher robustness, an event-based control of the VPP position is introduced, using feedback of the system states at apexes. A discrete linear quadratic regulator is used to design the feedback controller. Considerable enhancements with respect to stability, convergence speed and robustness against perturbations and parameter changes are achieved.

  7. Techniques for Single System Integration of Elastic Simulation Features

    NASA Astrophysics Data System (ADS)

    Mitchell, Nathan M.

    Techniques for simulating the behavior of elastic objects have matured considerably over the last several decades, tackling diverse problems from non-linear models for incompressibility to accurate self-collisions. Alongside these contributions, advances in parallel hardware design and algorithms have made simulation more efficient and affordable than ever before. However, prior research often has had to commit to design choices that compromise certain simulation features to better optimize others, resulting in a fragmented landscape of solutions. For complex, real-world tasks, such as virtual surgery, a holistic approach is desirable, where complex behavior, performance, and ease of modeling are supported equally. This dissertation caters to this goal in the form of several interconnected threads of investigation, each of which contributes a piece of an unified solution. First, it will be demonstrated how various non-linear materials can be combined with lattice deformers to yield simulations with behavioral richness and a high potential for parallelism. This potential will be exploited to show how a hybrid solver approach based on large macroblocks can accelerate the convergence of these deformers. Further extensions of the lattice concept with non-manifold topology will allow for efficient processing of self-collisions and topology change. Finally, these concepts will be explored in the context of a case study on virtual plastic surgery, demonstrating a real-world problem space where these ideas can be combined to build an expressive authoring tool, allowing surgeons to record procedures digitally for future reference or education.

  8. Constructive Learning in Undergraduate Linear Algebra

    ERIC Educational Resources Information Center

    Chandler, Farrah Jackson; Taylor, Dewey T.

    2008-01-01

In this article we describe a project that we used in our undergraduate linear algebra courses to help our students successfully master fundamental concepts and definitions and generate interest in the course. We describe our philosophy and discuss the project's overall success.

  9. Multilayer neural networks for reduced-rank approximation.

    PubMed

    Diamantaras, K I; Kung, S Y

    1994-01-01

This paper is developed in two parts. First, the authors formulate the solution to the general reduced-rank linear approximation problem, relaxing the invertibility assumption of the input autocorrelation matrix used by previous authors. The authors' treatment unifies linear regression, Wiener filtering, full rank approximation, auto-association networks, SVD and principal component analysis (PCA) as special cases. Their analysis also shows that two-layer linear neural networks with a reduced number of hidden units, trained with the least-squares error criterion, produce weights that correspond to the generalized singular value decomposition of the input-teacher cross-correlation matrix and the input data matrix. As a corollary, the linear two-layer backpropagation model with a reduced hidden layer extracts an arbitrary linear combination of the generalized singular vector components. Second, the authors investigate artificial neural network models for the solution of the related generalized eigenvalue problem. By introducing and utilizing the extended concept of deflation (originally proposed for the standard eigenvalue problem), the authors find that a sequential version of linear BP can extract the exact generalized eigenvector components. The advantage of this approach is that it is easier to update the model structure by adding one more unit, or pruning one or more units, when the application requires it. An alternative approach for extracting the exact components is to use a set of lateral connections among the hidden units, trained in such a way as to enforce orthogonality among the upper- and lower-layer weights. The authors call this the lateral orthogonalization network (LON) and show via theoretical analysis, and verify via simulation, that the network extracts the desired components. The advantage of the LON-based model is that it can be applied in a parallel fashion so that the components are extracted concurrently. Finally, the authors show the application of their results to the solution of the identification problem of systems whose excitation has a non-invertible autocorrelation matrix. Previous identification methods usually rely on the invertibility assumption of the input autocorrelation, and therefore cannot be applied to this case.
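
The reduced-rank linear approximation discussed above can be illustrated with a closed-form computation: fit a full-rank least-squares map and truncate the SVD of the fitted values, which is the reduced-rank solution a bottlenecked two-layer linear network trained under the least-squares criterion converges to. The data below are synthetic and the dimensions arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic input-teacher pair: targets are a rank-deficient linear
# map of the inputs plus a little noise (all numbers hypothetical).
n, p, q, r = 500, 8, 6, 2
X = rng.normal(size=(n, p))
W_true = rng.normal(size=(q, r)) @ rng.normal(size=(r, p))
Y = X @ W_true.T + 0.01 * rng.normal(size=(n, q))

# Full-rank least-squares map, then rank-r truncation via SVD of the
# fitted values (Eckart-Young gives the best rank-r approximation).
W_ls, *_ = np.linalg.lstsq(X, Y, rcond=None)       # shape (p, q)
U, s, Vt = np.linalg.svd(X @ W_ls, full_matrices=False)
Y_r = U[:, :r] * s[:r] @ Vt[:r]                    # rank-r fitted targets
print(np.linalg.norm(Y - Y_r) / np.linalg.norm(Y))
```

With an invertible input autocorrelation this matches the classical result; the paper's contribution is precisely relaxing that invertibility assumption, which `lstsq`'s pseudoinverse solution also tolerates.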

  10. Constructive Development of the Solutions of Linear Equations in Introductory Ordinary Differential Equations

    ERIC Educational Resources Information Center

    Mallet, D. G.; McCue, S. W.

    2009-01-01

    The solution of linear ordinary differential equations (ODEs) is commonly taught in first-year undergraduate mathematics classrooms, but the understanding of the concept of a solution is not always grasped by students until much later. Recognizing what it is to be a solution of a linear ODE and how to postulate such solutions, without resorting to…

  11. The Storm and Stress (or Calm) of Early Adolescent Self-Concepts: Within- and Between-Person Variability

    PubMed Central

    Molloy, Lauren E.; Ram, Nilam; Gest, Scott D.

    2014-01-01

    This study uses intraindividual variability and change methods to test theoretical accounts of self-concept and its change across time and context, and the developmental implications of this variability. The five-year longitudinal study of 541 youth in a rural Pennsylvania community from 3rd through 7th grade included twice-yearly assessments of self-concept (academic and social), corresponding external evaluations of competence (e.g., teacher-rated academic skills, peer-nominated “likeability”), and multiple measures of youths' overall adjustment. Multiphase growth models replicate previous research, suggesting significant decline in academic self-concept during middle school, but modest growth in social self-concept from 3rd through 7th grade. Next, a new contribution is made to the literature by quantifying the amount of within-person variability (i.e., “lability”) around these linear self-concept trajectories as a between-person characteristic. Self-concept lability was found to associate with a general profile of poorer competence and adjustment, and to predict poorer academic and social competence at the end of 7th grade above and beyond level of self-concept. Finally, there was substantial evidence that wave-to-wave changes in youths' self-concepts correspond to teacher and peer evaluations of youths' competence, that attention to peer feedback may be particularly strong during middle school, and that these relations may be moderated by between-person indicators of youths' general adjustment. Overall, findings highlight the utility of methods sensitive to within-person variation for clarifying the dynamics of youths' self-system development. PMID:21928883

  12. Deep Drawing Simulations With Different Polycrystalline Models

    NASA Astrophysics Data System (ADS)

    Duchêne, Laurent; de Montleau, Pierre; Bouvier, Salima; Habraken, Anne Marie

    2004-06-01

    The goal of this research is to study the anisotropic material behavior during forming processes, represented by both complex yield loci and kinematic-isotropic hardening models. A first part of this paper describes the main concepts of the `Stress-strain interpolation' model that has been implemented in the non-linear finite element code Lagamine. This model consists of a local description of the yield locus based on the texture of the material through the full constraints Taylor's model. The texture evolution due to plastic deformations is computed throughout the FEM simulations. This `local yield locus' approach was initially linked to the classical isotropic Swift hardening law. Recently, a more complex hardening model was implemented: the physically-based microstructural model of Teodosiu. It takes into account intergranular heterogeneity due to the evolution of dislocation structures, that affects isotropic and kinematic hardening. The influence of the hardening model is compared to the influence of the texture evolution thanks to deep drawing simulations.

  13. SCS-CN based time-distributed sediment yield model

    NASA Astrophysics Data System (ADS)

    Tyagi, J. V.; Mishra, S. K.; Singh, Ranvir; Singh, V. P.

    2008-05-01

    A sediment yield model is developed to estimate the temporal rates of sediment yield from rainfall events on natural watersheds. The model utilizes the SCS-CN based infiltration model for computation of rainfall-excess rate, and the SCS-CN-inspired proportionality concept for computation of sediment-excess. For computation of sedimentographs, the sediment-excess is routed to the watershed outlet using a single linear reservoir technique. Analytical development of the model shows that the ratio of the potential maximum erosion (A) to the potential maximum retention (S) of the SCS-CN method is constant for a watershed. The model is calibrated and validated on a number of events using the data of seven watersheds from India and the USA. Representative values of the A/S ratio computed for the watersheds from calibration are used for the validation of the model. The encouraging results of the proposed simple four-parameter model exhibit its potential for field application.
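    The single-linear-reservoir routing step can be sketched as below. This is a generic illustration of the technique (explicit recursion for storage S = kQ), not the authors' calibrated model; the function name and parameter values are arbitrary.

```python
import numpy as np

def linear_reservoir_route(excess, k, dt=1.0):
    """Route a sediment-excess series through a single linear reservoir
    (storage S = k * Q, so k * dQ/dt = I - Q), using explicit Euler.
    Requires dt < k for stability."""
    c = dt / k
    q, out = 0.0, []
    for i in excess:
        q += c * (i - q)  # reservoir relaxes toward current inflow
        out.append(q)
    return np.array(out)

# A 10-unit pulse of sediment excess is attenuated and spread in time.
sedimentograph = linear_reservoir_route([10.0] + [0.0] * 20, k=4.0)
```

    The routed peak is lower than the input pulse and the total volume is conserved in the limit of a long recession, which is the behaviour the sedimentograph computation relies on.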

  14. Into the development of a model to assess beam shaping and polarization control effects on laser cutting

    NASA Astrophysics Data System (ADS)

    Rodrigues, Gonçalo C.; Duflou, Joost R.

    2018-02-01

    This paper offers an in-depth look into beam shaping and polarization control as two of the most promising techniques for improving industrial laser cutting of metal sheets. An assessment model is developed for the study of such effects. It is built upon several modifications to models as available in literature in order to evaluate the potential of a wide range of considered concepts. This includes different kinds of beam shaping (achieved by extra-cavity optical elements or asymmetric diode stacking) and polarization control techniques (linear, cross, radial, azimuthal). A fully mathematical description and solution procedure are provided. Three case studies for direct diode lasers follow, containing both experimental data and parametric studies. In the first case study, linear polarization is analyzed for any given angle between the cutting direction and the electrical field. In the second case several polarization strategies are compared for similar cut conditions, evaluating, for example, the minimum number of spatial divisions of a segmented polarized laser beam to achieve a target performance. A novel strategy, based on a 12-division linear-to-radial polarization converter with an axis misalignment and capable of improving cutting efficiency by more than 60%, is proposed. The last case study reveals different insights into beam shaping techniques, with an example of a beam shape optimization path for a 30% improvement in cutting efficiency. The proposed techniques are not limited to this type of laser source, nor is the model limited to these specific case studies. Limitations of the model and opportunities are further discussed.

  15. Multidisciplinary Approach to Aerospike Nozzle Design

    NASA Technical Reports Server (NTRS)

    Korte, J. J.; Salas, A. O.; Dunn, H. J.; Alexandrov, N. M.; Follett, W. W.; Orient, G. E.; Hadid, A. H.

    1997-01-01

    A model of a linear aerospike rocket nozzle that consists of coupled aerodynamic and structural analyses has been developed. A nonlinear computational fluid dynamics code is used to calculate the aerodynamic thrust, and a three-dimensional finite-element model is used to determine the structural response and weight. The model will be used to demonstrate multidisciplinary design optimization (MDO) capabilities for relevant engine concepts, assess performance of various MDO approaches, and provide a guide for future application development. In this study, the MDO problem is formulated using the multidisciplinary feasible (MDF) strategy. The results for the MDF formulation are presented with comparisons against separately optimized aerodynamic and structural designs. Significant improvements are demonstrated by using a multidisciplinary approach in comparison with the single-discipline design strategy.

  16. Statistical mechanics of competitive resource allocation using agent-based models

    NASA Astrophysics Data System (ADS)

    Chakraborti, Anirban; Challet, Damien; Chatterjee, Arnab; Marsili, Matteo; Zhang, Yi-Cheng; Chakrabarti, Bikas K.

    2015-01-01

    Demand outstrips available resources in most situations, which gives rise to competition, interaction and learning. In this article, we review a broad spectrum of multi-agent models of competition (El Farol Bar problem, Minority Game, Kolkata Paise Restaurant problem, Stable marriage problem, Parking space problem and others) and the methods used to understand them analytically. We emphasize the power of concepts and tools from statistical mechanics to understand and explain collective phenomena such as phase transitions and long memory, and the mapping between agent heterogeneity and physical disorder. As these methods can be applied to any large-scale model of competitive resource allocation made up of heterogeneous adaptive agents with non-linear interactions, they provide a prospective unifying paradigm for many scientific disciplines.
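    As a concrete illustration of the class of models reviewed, here is a minimal Minority Game in plain Python: an odd number of agents, each holding a couple of fixed random lookup-table strategies over the recent win history, scored virtually and played greedily. All sizes and the scoring rule are the standard textbook choices, not anything specific to this review.

```python
import random

def minority_game(n_agents=101, memory=3, n_strategies=2, steps=200, seed=0):
    """Agents repeatedly choose side 0 or 1; those on the minority side
    win. Each agent plays the strategy with the best virtual score."""
    rng = random.Random(seed)
    n_hist = 2 ** memory
    strategies = [[[rng.choice((0, 1)) for _ in range(n_hist)]
                   for _ in range(n_strategies)] for _ in range(n_agents)]
    scores = [[0] * n_strategies for _ in range(n_agents)]
    history = rng.randrange(n_hist)
    attendance = []  # number of agents choosing side 1 at each step
    for _ in range(steps):
        choices = []
        for a in range(n_agents):
            best = max(range(n_strategies), key=lambda s: scores[a][s])
            choices.append(strategies[a][best][history])
        ones = sum(choices)
        minority = 1 if ones < n_agents - ones else 0
        for a in range(n_agents):          # virtual scoring of all strategies
            for s in range(n_strategies):
                if strategies[a][s][history] == minority:
                    scores[a][s] += 1
        history = ((history << 1) | minority) % n_hist
        attendance.append(ones)
    return attendance

attendance = minority_game()
```

    The fluctuations of the attendance series around half the population are the quantity whose dependence on agent heterogeneity the statistical-mechanics analysis explains.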

  17. Range bagging: a new method for ecological niche modelling from presence-only data

    PubMed Central

    Drake, John M.

    2015-01-01

    The ecological niche is the set of environments in which a population of a species can persist without introduction of individuals from other locations. A good mathematical or computational representation of the niche is a prerequisite to addressing many questions in ecology, biogeography, evolutionary biology and conservation. A particularly challenging question for ecological niche modelling is the problem of presence-only modelling. That is, can an ecological niche be identified from records drawn only from the set of niche environments without records from non-niche environments for comparison? Here, I introduce a new method for ecological niche modelling from presence-only data called range bagging. Range bagging draws on the concept of a species' environmental range, but was inspired by the empirical performance of ensemble learning algorithms in other areas of ecological research. This paper extends the concept of environmental range to multiple dimensions and shows that range bagging is computationally feasible even when the number of environmental dimensions is large. The target of the range bagging base learner is an environmental tolerance of the species in a projection of its niche and is therefore an ecologically interpretable property of a species' biological requirements. The computational complexity of range bagging is linear in the number of examples, which compares favourably with the main alternative, Qhull. In conclusion, range bagging appears to be a reasonable choice for niche modelling in applications in which a presence-only method is desired and may provide a solution to problems in other disciplines where one-class classification is required, such as outlier detection and concept learning. PMID:25948612
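    A toy version of the range-bagging idea, bag axis-aligned ranges fitted to random subsamples in random low-dimensional projections, then let them vote, can be written in a few lines. This is a sketch of the concept only, not Drake's reference implementation; the function names and parameter values are invented.

```python
import random

def fit_range_bags(X, n_bags=50, dims_per_bag=2, frac=0.5, seed=0):
    """Each base learner stores the min/max range of a random subsample
    along a few randomly chosen environmental dimensions."""
    rng = random.Random(seed)
    d = len(X[0])
    bags = []
    for _ in range(n_bags):
        dims = rng.sample(range(d), min(dims_per_bag, d))
        sub = rng.sample(X, max(2, int(frac * len(X))))
        lo = [min(x[j] for x in sub) for j in dims]
        hi = [max(x[j] for x in sub) for j in dims]
        bags.append((dims, lo, hi))
    return bags

def niche_score(bags, x):
    """Fraction of base ranges that contain the candidate environment."""
    inside = sum(all(l <= x[j] <= h for j, l, h in zip(dims, lo, hi))
                 for dims, lo, hi in bags)
    return inside / len(bags)

rng = random.Random(1)
presences = [(rng.uniform(0, 1), rng.uniform(0, 1)) for _ in range(100)]
bags = fit_range_bags(presences)
inside_score = niche_score(bags, (0.5, 0.5))     # typical environment
outside_score = niche_score(bags, (5.0, 5.0))    # far outside all records
```

    Note that fitting touches only presence records, which is the point of the presence-only setting, and the work per example is constant, consistent with the linear complexity claimed in the abstract.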

  18. Range bagging: a new method for ecological niche modelling from presence-only data.

    PubMed

    Drake, John M

    2015-06-06

    The ecological niche is the set of environments in which a population of a species can persist without introduction of individuals from other locations. A good mathematical or computational representation of the niche is a prerequisite to addressing many questions in ecology, biogeography, evolutionary biology and conservation. A particularly challenging question for ecological niche modelling is the problem of presence-only modelling. That is, can an ecological niche be identified from records drawn only from the set of niche environments without records from non-niche environments for comparison? Here, I introduce a new method for ecological niche modelling from presence-only data called range bagging. Range bagging draws on the concept of a species' environmental range, but was inspired by the empirical performance of ensemble learning algorithms in other areas of ecological research. This paper extends the concept of environmental range to multiple dimensions and shows that range bagging is computationally feasible even when the number of environmental dimensions is large. The target of the range bagging base learner is an environmental tolerance of the species in a projection of its niche and is therefore an ecologically interpretable property of a species' biological requirements. The computational complexity of range bagging is linear in the number of examples, which compares favourably with the main alternative, Qhull. In conclusion, range bagging appears to be a reasonable choice for niche modelling in applications in which a presence-only method is desired and may provide a solution to problems in other disciplines where one-class classification is required, such as outlier detection and concept learning.

  19. Teaching undergraduate biomechanics with Just-in-Time Teaching.

    PubMed

    Riskowski, Jody L

    2015-06-01

    Biomechanics education is a vital component of kinesiology, sports medicine, and physical education, as well as of many biomedical engineering and bioengineering undergraduate programmes. Little research exists regarding effective teaching strategies for biomechanics. However, prior work suggests that student learning in undergraduate physics courses has been aided by Just-in-Time Teaching (JiTT). As physics understanding plays a role in biomechanics understanding, the purpose of this study was to evaluate the use of a JiTT framework in an undergraduate biomechanics course. This two-year action-based research study evaluated three JiTT frameworks: (1) no JiTT; (2) mathematics-based JiTT; and (3) concept-based JiTT. A pre- and post-course assessment of student learning used the biomechanics concept inventory and a biomechanics concept map. A general linear model assessed differences between the course assessments by JiTT framework in order to evaluate learning and teaching effectiveness. The results indicated significantly higher learning gains and better conceptual understanding in a concept-based JiTT course, relative to a mathematics-based JiTT or no JiTT course structure. These results suggest that a course structure involving concept-based questions using a JiTT strategy may be an effective method for engaging undergraduate students and promoting learning in biomechanics courses.
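    In its simplest balanced one-factor form, the general-linear-model comparison of the three course structures reduces to a one-way ANOVA F statistic. The sketch below uses invented learning-gain scores purely to show the computation; it is not the study's data or its full model.

```python
import statistics as st

def one_way_f(groups):
    """One-way general-linear-model F statistic: between-group mean
    square over within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (st.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - st.mean(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical learning gains per student under each framework.
no_jitt   = [2, 3, 2, 4, 3]
math_jitt = [3, 4, 3, 5, 4]
concept   = [6, 7, 6, 8, 7]
f_stat = one_way_f([no_jitt, math_jitt, concept])
```

    A large F, as here, corresponds to the significant between-framework difference the study reports for the concept-based JiTT condition.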

  20. Does chaos theory have major implications for philosophy of medicine?

    PubMed

    Holm, S

    2002-12-01

    In the literature it is sometimes claimed that chaos theory, non-linear dynamics, and the theory of fractals have major implications for philosophy of medicine, especially for our analysis of the concept of disease and the concept of causation. This paper gives a brief introduction to the concepts underlying chaos theory and non-linear dynamics. It is then shown that chaos theory has only very minimal implications for the analysis of the concept of disease and the concept of causation, mainly because the mathematics of chaotic processes entail that these processes are fully deterministic. The practical unpredictability of chaotic processes, caused by their extreme sensitivity to initial conditions, may raise practical problems in diagnosis, prognosis, and treatment, but it raises no major theoretical problems. The relation between chaos theory and the problem of free will is discussed, and it is shown that chaos theory may remove the problem of predictability of decisions, but does not solve the problem of free will. Chaos theory may thus be very important for our understanding of physiological processes, and specific disease entities, without having any major implications for philosophy of medicine.

  1. LPV Modeling of a Flexible Wing Aircraft Using Modal Alignment and Adaptive Gridding Methods

    NASA Technical Reports Server (NTRS)

    Al-Jiboory, Ali Khudhair; Zhu, Guoming; Swei, Sean Shan-Min; Su, Weihua; Nguyen, Nhan T.

    2017-01-01

    One of the earliest approaches in gain-scheduling control is the gridding based approach, in which a set of local linear time-invariant models are obtained at various gridded points corresponding to the varying parameters within the flight envelope. In order to ensure smooth and effective Linear Parameter-Varying control, aligning all the flexible modes within each local model and maintaining a small number of representative local models over the gridded parameter space are crucial. In addition, since the flexible structural models tend to have large dimensions, a tractable model reduction process is necessary. In this paper, the notions of the s-shifted H2- and H-infinity-norms are introduced and used as metrics to measure the model mismatch. A new modal alignment algorithm is developed which utilizes the defined metric for aligning all the local models over the entire gridded parameter space. Furthermore, an Adaptive Grid Step Size Determination algorithm is developed to minimize the number of local models required to represent the gridded parameter space. For model reduction, we propose to utilize the concept of Composite Modal Cost Analysis, through which the collective contribution of each flexible mode is computed and ranked. Therefore, a reduced-order model is constructed by retaining only those modes with significant contribution. The NASA Generic Transport Model operating at various flight speeds is studied for verification purpose, and the analysis and simulation results demonstrate the effectiveness of the proposed modeling approach.
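    The adaptive grid-step idea can be sketched as a recursive bisection of the scheduling-parameter range that stops once neighbouring local models agree within a tolerance. Here the Frobenius norm of the A-matrix difference is an assumed stand-in for the paper's s-shifted H2/H-infinity mismatch metrics, and the toy scheduling-dependent model is invented.

```python
import numpy as np

def adaptive_grid(param_lo, param_hi, model_at, tol, max_depth=12):
    """Bisect the parameter interval until the mismatch between the
    local models at neighbouring grid points is at most `tol`."""
    def mismatch(p, q):
        return float(np.linalg.norm(model_at(p) - model_at(q)))
    def refine(lo, hi, depth):
        if depth == 0 or mismatch(lo, hi) <= tol:
            return [lo]
        mid = 0.5 * (lo + hi)
        return refine(lo, mid, depth - 1) + refine(mid, hi, depth - 1)
    return refine(param_lo, param_hi, max_depth) + [param_hi]

# Toy local model: dynamics change faster at the high end of the
# scheduling parameter, so the grid comes out denser there.
A = lambda v: np.array([[0.0, 1.0], [-v ** 2, -0.1 * v]])
grid = adaptive_grid(0.1, 2.0, A, tol=0.5)
```

    The non-uniform spacing is the point: a fixed-step grid fine enough for the fast-varying end would waste local models where the dynamics barely change.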

  2. The effect of time-dependent macromolecular crowding on the kinetics of protein aggregation: a simple model for the onset of age-related neurodegenerative disease

    NASA Astrophysics Data System (ADS)

    Minton, Allen

    2014-08-01

    A linear increase in the concentration of "inert" macromolecules with time is incorporated into simple excluded volume models for protein condensation or fibrillation. Such models predict a long latent period during which no significant amount of protein aggregates, followed by a steep increase in the total amount of aggregate. The elapsed time at which these models predict half-conversion of model protein to aggregate varies by less than a factor of two when the intrinsic rate constant for condensation or fibril growth of the protein is varied over many orders of magnitude. It is suggested that this concept can explain why the symptoms of neurodegenerative diseases associated with the aggregation of very different proteins and peptides appear at approximately the same advanced age in humans.
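    The insensitivity of the half-conversion time to the intrinsic rate constant can be reproduced with a toy calculation: if the effective rate is the intrinsic rate amplified exponentially by a linearly growing crowder concentration, the half-time depends only logarithmically on the intrinsic rate. The functional form and all constants below are assumptions for illustration, not the paper's model.

```python
import math

def half_conversion_time(k_intrinsic, a=0.5, rate=1.0, dt=0.01):
    """Integrate dA/dt = k_eff * (1 - A) with an excluded-volume
    amplification k_eff = k_intrinsic * exp(a * c), where the crowder
    concentration grows linearly in time, c(t) = rate * t. Returns the
    time at which half the protein has aggregated."""
    A, t = 0.0, 0.0
    while A < 0.5:
        k_eff = k_intrinsic * math.exp(a * rate * t)
        A += k_eff * (1.0 - A) * dt
        t += dt
    return t

t_fast = half_conversion_time(1e-3)
t_slow = half_conversion_time(1e-6)  # intrinsic rate 1000x smaller
```

    Despite a thousandfold change in the intrinsic rate, the half-conversion times differ only by a small factor, mirroring the latent-period behaviour described in the abstract.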

  3. Modeling Limited Foresight in Water Management Systems

    NASA Astrophysics Data System (ADS)

    Howitt, R.

    2005-12-01

    The inability to forecast future water supplies means that their management inevitably occurs under situations of limited foresight. Three modeling problems arise, first what type of objective function is a manager with limited foresight optimizing? Second how can we measure these objectives? Third can objective functions that incorporate uncertainty be integrated within the structure of optimizing water management models? The paper reviews the concepts of relative risk aversion and intertemporal substitution that underlie stochastic dynamic preference functions. Some initial results from the estimation of such functions for four different dam operations in northern California are presented and discussed. It appears that the path of previous water decisions and states influences the decision-makers willingness to trade off water supplies between periods. A compromise modeling approach that incorporates carry-over value functions under limited foresight within a broader net work optimal water management model is developed. The approach uses annual carry-over value functions derived from small dimension stochastic dynamic programs embedded within a larger dimension water allocation network. The disaggregation of the carry-over value functions to the broader network is extended using the space rule concept. Initial results suggest that the solution of such annual nonlinear network optimizations is comparable to, or faster than, the solution of linear network problems over long time series.

  4. Alternating evolutionary pressure in a genetic algorithm facilitates protein model selection

    PubMed Central

    Offman, Marc N; Tournier, Alexander L; Bates, Paul A

    2008-01-01

    Background Automatic protein modelling pipelines are becoming ever more accurate; this has come hand in hand with an increasingly complicated interplay between all components involved. Nevertheless, there are still potential improvements to be made in template selection, refinement and protein model selection. Results In the context of an automatic modelling pipeline, we analysed each step separately, revealing several non-intuitive trends and explored a new strategy for protein conformation sampling using Genetic Algorithms (GA). We apply the concept of alternating evolutionary pressure (AEP), i.e. intermediate rounds within the GA runs where unrestrained, linear growth of the model populations is allowed. Conclusion This approach improves the overall performance of the GA by allowing models to overcome local energy barriers. AEP enabled the selection of the best models in 40% of all targets; compared to 25% for a normal GA. PMID:18673557

  5. Chromosome structures: reduction of certain problems with unequal gene content and gene paralogs to integer linear programming.

    PubMed

    Lyubetsky, Vassily; Gershgorin, Roman; Gorbunov, Konstantin

    2017-12-06

    Chromosome structure is a very limited model of the genome including the information about its chromosomes such as their linear or circular organization, the order of genes on them, and the DNA strand encoding a gene. Gene lengths, nucleotide composition, and intergenic regions are ignored. Although highly incomplete, such structure can be used in many cases, e.g., to reconstruct phylogeny and evolutionary events, to identify gene synteny, regulatory elements and promoters (considering highly conserved elements), etc. Three problems are considered; all assume unequal gene content and the presence of gene paralogs. The distance problem is to determine the minimum number of operations required to transform one chromosome structure into another and the corresponding transformation itself including the identification of paralogs in two structures. We use the DCJ model which is one of the most studied combinatorial rearrangement models. Double-, sesqui-, and single-operations as well as deletion and insertion of a chromosome region are considered in the model; the single ones comprise cut and join. In the reconstruction problem, a phylogenetic tree with chromosome structures in the leaves is given. It is necessary to assign the structures to inner nodes of the tree to minimize the sum of distances between terminal structures of each edge and to identify the mutual paralogs in a fairly large set of structures. A linear algorithm is known for the distance problem without paralogs, while the presence of paralogs makes it NP-hard. If paralogs are allowed but the insertion and deletion operations are missing (and special constraints are imposed), the reduction of the distance problem to integer linear programming is known. Apparently, the reconstruction problem is NP-hard even in the absence of paralogs. The problem of contigs is to find the optimal arrangements for each given set of contigs, which also includes the mutual identification of paralogs. 
We proved that these problems can be reduced to integer linear programming formulations, which allows each of them to be solved as a very special case of the integer linear programming tool. The results were tested on synthetic and biological samples. Three well-known problems were thus reduced to a very special case of integer linear programming, which constitutes a new method for their solution. Integer linear programming is clearly among the main computational methods and, as generally accepted, is fast on average; in particular, computation systems specifically targeted at it are available. The challenges are to reduce the size of the corresponding integer linear programming formulations and to incorporate a more detailed biological concept into our model of the reconstruction.
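    To make the flavour of such reductions concrete, here is a toy 0/1 program for the paralog-matching subproblem, solved by brute force over the binary variables purely to illustrate the formulation. The cost matrix is invented; a real instance of the paper's reductions would go to an ILP solver, not an enumeration.

```python
from itertools import product

def brute_force_ilp(costs):
    """Toy paralog identification as a 0/1 integer program:
    x[i][j] = 1 iff gene copy i in structure A is matched to copy j in
    structure B; each row and column is used at most once, all copies on
    the smaller side must be matched, and total mismatch cost is
    minimized. Enumerates all binary assignments (feasible only for
    tiny instances)."""
    n, m = len(costs), len(costs[0])
    best = (float("inf"), None)
    for bits in product((0, 1), repeat=n * m):
        x = [bits[i * m:(i + 1) * m] for i in range(n)]
        if any(sum(r) > 1 for r in x):                            # row constraint
            continue
        if any(sum(x[i][j] for i in range(n)) > 1 for j in range(m)):
            continue
        if sum(map(sum, x)) < min(n, m):                          # match all copies
            continue
        cost = sum(costs[i][j] for i in range(n) for j in range(m) if x[i][j])
        if cost < best[0]:
            best = (cost, x)
    return best

cost, assignment = brute_force_ilp([[0, 2, 3],
                                    [2, 0, 1]])
```

    The same constraints and objective carry over verbatim to an ILP model; only the search is replaced by the solver.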

  6. A Novel Blast-mitigation Concept for Light Tactical Vehicles

    DTIC Science & Technology

    2013-01-01

    analysis which utilizes the mass and energy (but not linear momentum) conservation equations is provided. It should be noted that the identical final... results could be obtained using an analogous analysis which combines the mass and the linear momentum conservation equations. For a calorically... governing mass, linear momentum and energy conservation and heat conduction equations are solved within ABAQUS/Explicit with a second-order accurate

  7. Individual and Collective Analyses of the Genesis of Student Reasoning Regarding the Invertible Matrix Theorem in Linear Algebra

    ERIC Educational Resources Information Center

    Wawro, Megan Jean

    2011-01-01

    In this study, I considered the development of mathematical meaning related to the Invertible Matrix Theorem (IMT) for both a classroom community and an individual student over time. In this particular linear algebra course, the IMT was a core theorem in that it connected many concepts fundamental to linear algebra through the notion of…

  8. Improved Linear-Ion-Trap Frequency Standard

    NASA Technical Reports Server (NTRS)

    Prestage, John D.

    1995-01-01

    Improved design concept for linear-ion-trap (LIT) frequency-standard apparatus proposed. Apparatus contains lengthened linear ion trap, and ions processed alternately in two regions: ions prepared in upper region of trap, then transported to lower region for exposure to microwave radiation, then returned to upper region for optical interrogation. Improved design intended to increase long-term frequency stability of apparatus while reducing size, mass, and cost.

  9. The Shock and Vibration Digest. Volume 18, Number 12

    DTIC Science & Technology

    1986-12-01

    practical methods for fracture mechanics analysis. Linear elastic methods can yield useful results. Elastic-plastic methods are becoming useful with... geometry factors. Fracture mechanics analysis based on linear elastic concepts developed in the 1960s has become established during the last decade as... 2) is slightly conservative [2,3]. Materials that can be treated with linear elastic fracture mechanics usually belong in this category. No

  10. An analogue conceptual rainfall-runoff model for educational purposes

    NASA Astrophysics Data System (ADS)

    Herrnegger, Mathew; Riedl, Michael; Schulz, Karsten

    2016-04-01

    Conceptual rainfall-runoff models, in which runoff processes are modelled with a series of connected linear and non-linear reservoirs, remain widely applied tools in science and practice. Additionally, the concept is appreciated in teaching due to its relative simplicity in explaining and exploring hydrological processes of catchments. However, when a series of reservoirs is used, the model system becomes highly parametrized and complex, and the model results become harder to trace for an audience not accustomed to numerical modelling. Since the simulations are normally performed by code that is not visible, the results are also not easily comprehensible. This contribution therefore presents a liquid analogue model, in which a conceptual rainfall-runoff model is reproduced by a physical model. It consists of different acrylic glass containers representing different storage components within a catchment, e.g. soil water or groundwater storage. The containers are equipped and connected with pipes, in which water movement represents different flow processes, e.g. surface runoff, percolation or base flow. Water from a storage container is pumped to the upper part of the model and represents effective rainfall input. The water then flows by gravity through the different pipes and storages. Valves are used for controlling the flows within the analogue model, comparable to the parameterization procedure in numerical models. Additionally, an inexpensive microcontroller-based board and sensors are used to measure storage water levels, with online visualization of the states as time series data, building a bridge between the analogue and digital world. The ability to physically witness the different flows and water levels in the storages makes the analogue model attractive to the audience.
Hands-on experiments can be performed with students, in which different scenarios or catchment types can be simulated, not only with the analogue but also in parallel with the digital model, thereby connecting real-world with science. The effects of different parameterization setups, which is important not only in hydrological sciences, can be shown in a tangible way. The use of the analogue model in the context of "children meet University" events seems an attractive approach to show a younger audience the basic ideas of catchment modelling concepts, which would otherwise not be possible.
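    The digital counterpart that runs alongside the analogue model can be as small as two linear reservoirs in series. The sketch below is illustrative only: the parameter names stand in for the valve settings of the physical model and their values are invented.

```python
def run_catchment(rain, k_soil=2.0, k_gw=10.0, perc_frac=0.4):
    """Two linear reservoirs in series: soil storage drains quickly as
    surface runoff and percolates a fraction of its outflow into a
    slower groundwater storage that releases base flow."""
    soil = gw = 0.0
    runoff = []
    for p in rain:
        soil += p
        q_soil = soil / k_soil      # fast drainage from soil storage
        perc = perc_frac * q_soil   # valve splitting flow to groundwater
        gw += perc
        q_base = gw / k_gw          # slow base-flow release
        gw -= q_base
        soil -= q_soil
        runoff.append((q_soil - perc) + q_base)
    return runoff

# A single rain pulse produces a peaked, receding hydrograph.
hydrograph = run_catchment([10, 0, 0, 0, 0, 0, 0, 0])
```

    Changing `k_soil`, `k_gw` or `perc_frac` plays the role of turning the valves, so the same scenario can be run on the acrylic model and on the code side by side.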

  11. Assessing the relative potency of (S)- and (R)-warfarin with a new PK-PD model, in relation to VKORC1 genotypes.

    PubMed

    Ferrari, Myriam; Pengo, Vittorio; Barolo, Massimiliano; Bezzo, Fabrizio; Padrini, Roberto

    2017-06-01

    The purpose of this study is to develop a new pharmacokinetic-pharmacodynamic (PK-PD) model to characterise the contribution of (S)- and (R)-warfarin to the anticoagulant effect on patients in treatment with rac-warfarin. Fifty-seven patients starting warfarin (W) therapy were studied, from the first dose and during chronic treatment at INR stabilization. Plasma concentrations of (S)- and (R)-W and INRs were measured 12, 36 and 60 h after the first dose and at steady state 12-14 h after dosing. Patients were also genotyped for the G>A VKORC1 polymorphism. The PK-PD model assumed a linear relationship between W enantiomer concentration and INR and included a scaling factor k to account for a different potency of (R)-W. Two parallel compartment chains with different transit times (MTT1 and MTT2) were used to model the delay in the W effect. PD parameters were estimated with the maximum likelihood approach. The model satisfactorily described the mean time-course of INR, both after the initial dose and during long-term treatment. (R)-W contributed to the rac-W anticoagulant effect with a potency of about 27% that of (S)-W. This effect was independent of VKORC1 genotype. As expected, the slope of the PK/PD linear correlation increased stepwise from GG to GA and from GA to AA VKORC1 genotype (0.71, 0.90 and 1.49, respectively). Our PK-PD linear model can quantify the partial pharmacodynamic activity of (R)-W in patients contemporaneously exposed to therapeutic (S)-W plasma levels. This concept may be useful in improving the performance of future algorithms aiming at identifying the most appropriate W maintenance dose.
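    The two-pathway delay structure can be sketched as a pair of transit-compartment chains feeding a linear INR equation. The chain length, mean transit time, slope and baseline below are invented for illustration; only the relative potency k of about 0.27 is taken from the abstract.

```python
def transit_chain(conc_series, n=3, mtt=24.0, dt=1.0):
    """Pass a plasma-concentration series through n transit compartments
    with mean transit time mtt (h) to delay the effect signal."""
    ktr = n / mtt
    comp = [0.0] * n
    out = []
    for c in conc_series:
        inflow = c
        for i in range(n):
            comp[i] += dt * ktr * (inflow - comp[i])
            inflow = comp[i]
        out.append(comp[-1])
    return out

def inr(cs_delayed, cr_delayed, baseline=1.0, slope=0.9, k=0.27):
    """Linear PD model: INR rises linearly with the delayed (S)-warfarin
    signal plus k times the delayed (R)-warfarin signal."""
    return [baseline + slope * (s + k * r)
            for s, r in zip(cs_delayed, cr_delayed)]

cs = [1.0] * 72   # constant (S)-warfarin exposure, arbitrary units
cr = [1.5] * 72   # constant (R)-warfarin exposure
inr_t = inr(transit_chain(cs), transit_chain(cr))
```

    The transit chains reproduce the characteristic lag between dosing and anticoagulant response, while the linear PD step is what lets the (R)-enantiomer's partial potency be read off as a single scaling factor.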

  12. The geometry of periodic knots, polycatenanes and weaving from a chemical perspective: a library for reticular chemistry.

    PubMed

    Liu, Yuzhong; O'Keeffe, Michael; Treacy, Michael M J; Yaghi, Omar M

    2018-05-04

    The geometry of simple knots and catenanes is described using the concept of linear line segments (sticks) joined at corners. This is extended to include woven linear threads as members of the extended family of knots. The concept of transitivity that can be used as a measure of regularity is explained. Then a review is given of the simplest, most 'regular' 2- and 3-periodic patterns of polycatenanes and weavings. Occurrences in crystal structures are noted but most structures are believed to be new and ripe targets for designed synthesis.

  13. Control of the low-load region in partially premixed combustion

    NASA Astrophysics Data System (ADS)

    Ingesson, Gabriel; Yin, Lianhao; Johansson, Rolf; Tunestal, Per

    2016-09-01

    Partially premixed combustion (PPC) is a low temperature, direct-injection combustion concept that has shown to give promising emission levels and efficiencies over a wide operating range. In this concept, high EGR ratios, high octane-number fuels and early injection timings are used to slow down the auto-ignition reactions and to enhance fuel-air mixing before the start of combustion. A drawback with this concept is the combustion stability in the low-load region, where a high octane-number fuel might cause misfire and low combustion efficiency. This paper investigates the problem of low-load PPC controller design for increased engine efficiency. First, low-load PPC data, obtained from a multi-cylinder heavy-duty engine, is presented. The data shows that combustion efficiency could be increased by using a pilot injection and that there is a non-linearity in the relation between injection and combustion timing. Furthermore, intake conditions should be set in order to avoid operating points with unfavourable global equivalence ratio and in-cylinder temperature combinations. Model predictive control simulations were used together with a calibrated engine model to find a gas-system controller that fulfilled this task. The findings are then summarized in a suggested engine controller design. Finally, an experimental performance evaluation of the suggested controller is presented.

  14. Development of a new linearly variable edge filter (LVEF)-based compact slit-less mini-spectrometer

    NASA Astrophysics Data System (ADS)

    Mahmoud, Khaled; Park, Seongchong; Lee, Dong-Hoon

    2018-02-01

    This paper presents the development of a compact charge-coupled detector (CCD) spectrometer. We describe the design, concept and characterization of a VNIR linear variable edge filter (LVEF)-based mini-spectrometer. The new instrument has been realized for operation in the 300 nm to 850 nm wavelength range. The instrument consists of a linear variable edge filter in front of a CCD array. Small size, light weight and low cost could be achieved using linearly variable filters, with no need for the moving parts used for wavelength selection in commercially available spectrometers. This overview discusses the characteristics of the main components and the main concept, along with its advantages and limitations. Experimental characteristics of the LVEFs are described. The mathematical approach to obtain the position-dependent slit function of the presented prototype spectrometer, and its numerical de-convolution solution for spectrum reconstruction, is described. The performance of our prototype instrument is demonstrated by measuring the spectrum of a reference light source.

  15. An Approach to Study Elastic Vibrations of Fractal Cylinders

    NASA Astrophysics Data System (ADS)

    Steinberg, Lev; Zepeda, Mario

    2016-11-01

    This paper presents our study of the dynamics of fractal solids. Concepts of fractal continuum and time have been used in definitions of fractal body deformation and motion, the formulation of conservation of mass, balance of momentum, and constitutive relationships. A linearized model, which was written in terms of fractal time and spatial derivatives, has been employed to study the elastic vibrations of fractal circular cylinders. Fractal differential equations of torsional, longitudinal and transverse fractal wave equations have been obtained, and solution properties such as size and time dependence have been revealed.

  16. Analysis and testing of a space crane articulating joint testbed

    NASA Technical Reports Server (NTRS)

    Sutter, Thomas R.; Wu, K. Chauncey

    1992-01-01

    The topics are presented in viewgraph form and include: space crane concept with mobile base; mechanical versus structural articulating joint; articulating joint test bed and reference truss; static and dynamic characterization completed for space crane reference truss configuration; improved linear actuators reduce articulating joint test bed backlash; 1-DOF space crane slew maneuver; boom 2 tip transient response finite element dynamic model; boom 2 tip transient response shear-corrected component modes torque driver profile; peak root member force vs. slew time torque driver profile; and open loop control of space crane motion.

  17. Fertility time trends in dairy herds in northern Portugal.

    PubMed

    Rocha, A; Martins, A; Carvalheira, J

    2010-10-01

    The economics of dairy production are in great part dictated by the reproductive efficiency of the herds. Many studies have reported a widespread decrease in the fertility of dairy cows. In a previous work (Rocha et al. 2001), we found a very poor oestrus detection rate (38%) and, consequently, delayed calving to 1st AI and calving to conception intervals. However, a good conception rate at 1st AI was noted (51%), resulting in a low number of inseminations per pregnancy (IAP) (1.4). Here, results are reported from a subsequent fertility time-trend assessment study carried out in the same region for cows born from 1992 to 2002. Statistical linear models were used to analyse the data, and linear contrasts of least-squares means were estimated from each model. The number of observations per studied index varied from 12,130 (culling rate) to 57,589 (non-return rate). Mean age at first calving was 28.9 ± 0.14 months, without (p > 0.05) variation over time. There was a small but significant (p < 0.05) deterioration of all other parameters. Non-return rate at 90 days and calving rate at 1st AI decreased 0.3% per trimester, with a consequent increase of 0.04 inseminations per parturition. Oestrus detection rate decreased 0.13% per year, the calving to 1st AI and calving to conception intervals increased 0.17 and 0.07 days/year respectively, and the intercalving interval increased 1.7 days per year. Of 12,130 cows calving, only 1,816 had a 4th lactation (85% culling/losses). These data were not meant to establish the causes of the decreased fertility over time, but an increase in milk production from 6537 kg to 8590 kg (305 days) from 1996 to 2002 is probably one factor to take into consideration. Specific measures to revert or slow down this trend of decreasing fertility are warranted. Available strategies are discussed. © 2009 Blackwell Verlag GmbH.
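
    The trend estimation reported above can be sketched with ordinary least squares. The data below are simulated purely for illustration; the -0.3%-per-trimester slope mirrors the reported trend, not the actual herd records:

```python
import numpy as np

# Simulated illustration (not the study's data): non-return rate at 90 days
# drifting down by 0.3 percentage points per trimester, plus noise.
rng = np.random.default_rng(0)
trimester = np.arange(40)                                  # cohort index
nr90 = 62.0 - 0.3 * trimester + rng.normal(0, 0.5, 40)     # NR90 (%)

# Least-squares line; the slope estimates the change per trimester.
slope, intercept = np.polyfit(trimester, nr90, 1)
print(round(slope, 2))   # close to the simulated -0.3 %/trimester
```

    The fitted slope recovers the simulated per-trimester decline; the study's contrasts of least-squares means play the analogous role after adjusting for herd and parity effects.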

  18. A Partially-Stirred Batch Reactor Model for Under-Ventilated Fire Dynamics

    NASA Astrophysics Data System (ADS)

    McDermott, Randall; Weinschenk, Craig

    2013-11-01

    A simple discrete quadrature method is developed for closure of the mean chemical source term in large-eddy simulations (LES) and implemented in the publicly available fire model, Fire Dynamics Simulator (FDS). The method is cast as a partially-stirred batch reactor model for each computational cell. The model has three distinct components: (1) a subgrid mixing environment, (2) a mixing model, and (3) a set of chemical rate laws. The subgrid probability density function (PDF) is described by a linear combination of Dirac delta functions with quadrature weights set to satisfy simple integral constraints for the computational cell. It is shown that under certain limiting assumptions, the present method reduces to the eddy dissipation concept (EDC). The model is used to predict carbon monoxide concentrations in direct numerical simulation (DNS) of a methane slot burner and in LES of an under-ventilated compartment fire.
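
    As a rough, simplified illustration of the quadrature idea (ours, not the FDS implementation), a subgrid PDF made of two Dirac deltas can satisfy the normalization and cell-mean constraints in closed form:

```python
# Two-delta subgrid PDF sketch (our simplification): place delta functions
# at an unmixed state Z = 0 and a mixed state Z = z_mixed, and choose the
# quadrature weights from the integral constraints
#   w_unmixed + w_mixed = 1      (normalization)
#   w_mixed * z_mixed = z_mean   (cell mean of the mixture fraction)
def two_delta_weights(z_mean, z_mixed):
    w_mixed = z_mean / z_mixed
    return 1.0 - w_mixed, w_mixed

w0, w1 = two_delta_weights(z_mean=0.1, z_mixed=0.4)
print(w0, w1)
```

    The actual model uses such weighted delta states per computational cell, with chemistry advanced in the reacting (mixed) state.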

  19. Linearization of the longitudinal phase space without higher harmonic field

    NASA Astrophysics Data System (ADS)

    Zeitler, Benno; Floettmann, Klaus; Grüner, Florian

    2015-12-01

    Accelerator applications like free-electron lasers, time-resolved electron diffraction, and advanced accelerator concepts like plasma acceleration desire bunches of ever shorter longitudinal extent. However, apart from space charge repulsion, the internal bunch structure and its development along the beam line can limit the achievable compression due to nonlinear phase space correlations. In order to improve such a limited longitudinal focus, a correction by properly linearizing the phase space is required. At large-scale facilities like FLASH at DESY or the European XFEL, a higher harmonic cavity is installed for this purpose. In this paper, another method is described and evaluated: expanding the beam after the electron source enables a higher order correction of the longitudinal focus by a subsequent accelerating cavity which is operated at the same frequency as the electron gun. The elaboration of this idea presented here is based on a ballistic bunching scheme, but can be extended to bunch compression based on magnetic chicanes. The core of this article is an analytic model describing this approach, which is verified by simulations, predicting possible bunch lengths below 1 fs at low bunch charge. Minimizing the energy spread down to σ_E/E < 10^-5 while keeping the bunch long is another interesting possibility, which finds applications, e.g., in time-resolved transmission electron microscopy concepts.

  20. Combined solvent- and non-uniform temperature-programmed gradient liquid chromatography. I - A theoretical investigation.

    PubMed

    Gritti, Fabrice

    2016-11-18

    A new class of gradient liquid chromatography (GLC) is proposed and its performance is analyzed from a theoretical viewpoint. During the course of such gradients, both the solvent strength and the column temperature are simultaneously changed in time and space. The solvent and temperature gradients propagate along the chromatographic column at their own, independent linear velocities. This class of gradient is called combined solvent- and temperature-programmed gradient liquid chromatography (CST-GLC). The general expressions for the retention time, retention factor, and temporal peak width of the analytes at elution in CST-GLC are derived for linear solvent strength (LSS) retention models, modified van't Hoff retention behavior, linear and non-distorted solvent gradients, and linear temperature gradients. Under these conditions, the theory predicts that CST-GLC is equivalent to a unique and apparent dynamic solvent gradient. The apparent solvent gradient steepness is the sum of the solvent and temperature steepnesses. The apparent solvent linear velocity is the reciprocal of the steepness-averaged sum of the reciprocals of the actual solvent and temperature linear velocities. The advantage of CST-GLC over conventional GLC is demonstrated for the resolution of protein digests (peptide mapping) when applying smooth, retained, and linear acetonitrile gradients in combination with a linear temperature gradient (from 20°C to 90°C) using 300 μm × 150 mm capillary columns packed with sub-2 μm particles. The benefit of CST-GLC is demonstrated when the temperature gradient propagates at the same velocity as the chromatographic speed. The experimental proof-of-concept for the realization of temperature ramps propagating at a finite and constant linear velocity is also briefly described. Copyright © 2016 Elsevier B.V. All rights reserved.
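
    The two combination rules stated in the abstract can be checked numerically. The symbols and values below are ours, chosen only for illustration (b = gradient steepness, u = gradient propagation velocity):

```python
# Sketch of the CST-GLC combination rules as stated above (our notation):
#   b_app = b_s + b_T
#   1/u_app = (b_s/b_app)/u_s + (b_T/b_app)/u_T
def apparent_gradient(b_s, u_s, b_T, u_T):
    b_app = b_s + b_T                        # steepnesses add
    u_app = b_app / (b_s / u_s + b_T / u_T)  # steepness-weighted reciprocal mean
    return b_app, u_app

b_app, u_app = apparent_gradient(b_s=0.02, u_s=0.10, b_T=0.01, u_T=0.05)
print(b_app, u_app)
```

    Note that when the two gradients travel at the same velocity, the apparent velocity reduces to that common value, consistent with the equivalence to a single dynamic solvent gradient.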

  1. Conical scan impact study. Volume 2: Small local user data processing facility. [multispectral band scanner design alternatives for earth resources data

    NASA Technical Reports Server (NTRS)

    Ebert, D. H.; Chase, P. E.; Dye, J.; Fahline, W. C.; Johnson, R. H.

    1973-01-01

    The impact of a conical scan versus a linear scan multispectral scanner (MSS) instrument on a small local-user data processing facility was studied. User data requirements were examined to determine the unique system requirements for a low-cost ground system (LCGS) compatible with the Earth Observatory Satellite (EOS) system. Candidate concepts were defined for the LCGS and preliminary designs were developed for selected concepts. The impact of a conical scan MSS versus a linear scan MSS was evaluated for the selected concepts. It was concluded that there are valid user requirements for the LCGS and, given these requirements, the impact of the conical scanner is minimal, although some new hardware development for the LCGS is necessary to handle conical scan data.

  2. A geometric nonlinear degenerated shell element using a mixed formulation with independently assumed strain fields. Final Report; Ph.D. Thesis, 1989

    NASA Technical Reports Server (NTRS)

    Graf, Wiley E.

    1991-01-01

    A mixed formulation is chosen to overcome deficiencies of the standard displacement-based shell model. Element development is traced from the incremental variational principle on through to the final set of equilibrium equations. Particular attention is paid to developing specific guidelines for selecting the optimal set of strain parameters. A discussion of constraint index concepts and their predictive capability related to locking is included. Performance characteristics of the elements are assessed in a wide variety of linear and nonlinear plate/shell problems. Despite limiting the study to geometric nonlinear analysis, a substantial amount of additional insight concerning the finite element modeling of thin plate/shell structures is provided. For example, in nonlinear analysis, given the same mesh and load step size, mixed elements converge in fewer iterations than equivalent displacement-based models. It is also demonstrated that, in mixed formulations, lower order elements are preferred. Additionally, meshes used to obtain accurate linear solutions do not necessarily converge to the correct nonlinear solution. Finally, a new form of locking was identified associated with employing elements designed for biaxial bending in uniaxial bending applications.

  3. Application and flight test of linearizing transformations using measurement feedback to the nonlinear control problem

    NASA Technical Reports Server (NTRS)

    Antoniewicz, Robert F.; Duke, Eugene L.; Menon, P. K. A.

    1991-01-01

    The design of nonlinear controllers has relied on the use of detailed aerodynamic and engine models that must be associated with the control law in the flight system implementation. Many of these controllers were applied to vehicle flight path control problems and have attempted to combine both inner- and outer-loop control functions in a single controller. An approach to the nonlinear trajectory control problem is presented. This approach uses linearizing transformations with measurement feedback to eliminate the need for detailed aircraft models in outer-loop control applications. By applying this approach and separating the inner-loop and outer-loop functions, two things were achieved: (1) the need for incorporating detailed aerodynamic models in the controller is obviated; and (2) the controller is more easily incorporated into existing aircraft flight control systems. An implementation of the controller is discussed, and this controller is tested on a six-degree-of-freedom F-15 simulation and in flight on an F-15 aircraft. Simulation data are presented which validate this approach over a large portion of the F-15 flight envelope. Proof of this concept is provided by flight-test data that closely match simulation results.

  4. Engineering online and in-person social networks to sustain physical activity: application of a conceptual model.

    PubMed

    Rovniak, Liza S; Sallis, James F; Kraschnewski, Jennifer L; Sciamanna, Christopher N; Kiser, Elizabeth J; Ray, Chester A; Chinchilli, Vernon M; Ding, Ding; Matthews, Stephen A; Bopp, Melissa; George, Daniel R; Hovell, Melbourne F

    2013-08-14

    High rates of physical inactivity compromise the health status of populations globally. Social networks have been shown to influence physical activity (PA), but little is known about how best to engineer social networks to sustain PA. To improve procedures for building networks that shape PA as a normative behavior, there is a need for more specific hypotheses about how social variables influence PA. There is also a need to integrate concepts from network science with ecological concepts that often guide the design of in-person and electronically-mediated interventions. Therefore, this paper: (1) proposes a conceptual model that integrates principles from network science and ecology across in-person and electronically-mediated intervention modes; and (2) illustrates the application of this model to the design and evaluation of a social network intervention for PA. A conceptual model for engineering social networks was developed based on a scoping literature review of modifiable social influences on PA. The model guided the design of a cluster randomized controlled trial in which 308 sedentary adults were randomly assigned to three groups: WalkLink+: prompted and provided feedback on participants' online and in-person social-network interactions to expand networks for PA, plus provided evidence-based online walking program and weekly walking tips; WalkLink: evidence-based online walking program and weekly tips only; Minimal Treatment Control: weekly tips only. The effects of these treatment conditions were assessed at baseline, post-program, and 6-month follow-up. The primary outcome was accelerometer-measured PA. Secondary outcomes included objectively-measured aerobic fitness, body mass index, waist circumference, blood pressure, and neighborhood walkability; and self-reported measures of the physical environment, social network environment, and social network interactions. 
The differential effects of the three treatment conditions on primary and secondary outcomes will be analyzed using general linear modeling (GLM), or generalized linear modeling if the assumptions for GLM cannot be met. Results will contribute to greater understanding of how to conceptualize and implement social networks to support long-term PA. Establishing social networks for PA across multiple life settings could contribute to cultural norms that sustain active living. ClinicalTrials.gov NCT01142804.

  5. Design Principles as a Guide for Constraint Based and Dynamic Modeling: Towards an Integrative Workflow.

    PubMed

    Sehr, Christiana; Kremling, Andreas; Marin-Sanguino, Alberto

    2015-10-16

    During the last 10 years, systems biology has matured from a fuzzy concept combining omics, mathematical modeling and computers into a scientific field in its own right. In spite of its incredible potential, the multilevel complexity of its objects of study makes it very difficult to establish a reliable connection between data and models. The great number of degrees of freedom often results in situations where many different models can explain/fit all available datasets. This has resulted in a shift of paradigm from the initially dominant, maybe naive, idea of inferring the system out of a number of datasets to the application of different techniques that reduce the degrees of freedom before any data set is analyzed. There is a wide variety of techniques available, each of which can contribute a piece of the puzzle and include different kinds of experimental information. But the challenge that remains is their meaningful integration. Here we show some theoretical results that enable some of the main modeling approaches to be applied sequentially in a complementary manner, and how this workflow can benefit from evolutionary reasoning to keep the complexity of the problem in check. As a proof of concept, we show how the synergies between these modeling techniques can provide insight into some well-studied problems: ammonia assimilation in bacteria and an unbranched linear pathway with end-product inhibition.

  6. Identification of aerodynamic models for maneuvering aircraft

    NASA Technical Reports Server (NTRS)

    Lan, C. Edward; Hu, C. C.

    1992-01-01

    A Fourier analysis method was developed to analyze harmonic forced-oscillation data at high angles of attack as functions of the angle of attack and its time rate of change. The resulting aerodynamic responses at different frequencies are used to build up the aerodynamic models involving time integrals of the indicial type. An efficient numerical method was also developed to evaluate these time integrals for arbitrary motions based on a concept of equivalent harmonic motion. The method was verified by first using results from two-dimensional and three-dimensional linear theories. The developed models for C_L, C_D, and C_M based on high-alpha data for a 70 deg delta wing in harmonic motions showed accurate results in reproducing hysteresis. The aerodynamic models are further verified by comparing with test data using ramp-type motions.
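
    The Fourier projection underlying such an analysis can be sketched on a synthetic response (the coefficients below are ours); discrete orthogonality over a full period recovers the components in phase with the angle of attack and with its rate:

```python
import numpy as np

# Synthetic forced-oscillation record: alpha oscillates harmonically and the
# lift response has known in-phase (0.08) and out-of-phase (0.03) parts.
t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
cl = 0.1 + 0.08 * np.sin(t) + 0.03 * np.cos(t)   # "measured" C_L

# First-harmonic Fourier projections recover the two components.
a1 = 2 * np.mean(cl * np.sin(t))   # in phase with alpha
b1 = 2 * np.mean(cl * np.cos(t))   # in phase with alpha-rate
print(round(a1, 3), round(b1, 3))
```

    Repeating the projection at several oscillation frequencies gives the frequency-dependent responses from which indicial-type models are built.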

  7. Discovering Linear Equations in Explicit Tables

    ERIC Educational Resources Information Center

    Burton, Lauren

    2017-01-01

    When teaching algebra concepts to middle school students, the author often hears questions that echo her own past confusion as a young student learning to write linear equations using data tables that show only input and output values. Students, expected to synthesize the relationship between these values in symbolic representation, grow…

  8. The Transformation App Redux: The Notion of Linearity

    ERIC Educational Resources Information Center

    Domenick, Anthony

    2015-01-01

    The notion of linearity is perhaps the most fundamental idea in algebraic thinking. It sets the transition to functions and culminates with the instantaneous rate of change in calculus. Despite its simplicity, this concept poses complexities to a considerable number of first semester college algebra students. The purpose of this observational…

  9. Effects of pathogen-specific clinical mastitis on probability of conception in Holstein dairy cows.

    PubMed

    Hertl, J A; Schukken, Y H; Welcome, F L; Tauer, L W; Gröhn, Y T

    2014-11-01

    The objective of this study was to estimate the effects of pathogen-specific clinical mastitis (CM), occurring in different weekly intervals before or after artificial insemination (AI), on the probability of conception in Holstein cows. Clinical mastitis occurring in weekly intervals from 6 wk before until 6 wk after AI was modeled. The first 4 AI in a cow's lactation were included. The following categories of pathogens were studied: Streptococcus spp. (comprising Streptococcus dysgalactiae, Streptococcus uberis, and other Streptococcus spp.); Staphylococcus aureus; coagulase-negative staphylococci (CNS); Escherichia coli; Klebsiella spp.; cases with CM signs but no bacterial growth (above the level that can be detected from our microbiological procedures) observed in the culture sample and cases with contamination (≥ 3 pathogens in the sample); and other pathogens [including Citrobacter, yeasts, Trueperella pyogenes, gram-negative bacilli (i.e., gram-negative organisms other than E. coli, Klebsiella spp., Enterobacter, and Citrobacter), Corynebacterium bovis, Corynebacterium spp., Pasteurella, Enterococcus, Pseudomonas, Mycoplasma, Prototheca, and others]. Other factors included in the model were parity (1, 2, 3, 4 and higher), season of AI (winter, spring, summer, autumn), day in lactation of first AI, farm, and other non-CM diseases (retained placenta, metritis, ketosis, displaced abomasum). Data from 90,271 AI in 39,361 lactations in 20,328 cows collected from 2003/2004 to 2011 from 5 New York State dairy farms were analyzed in a generalized linear mixed model with a Poisson distribution. The largest reductions in probability of conception were associated with CM occurring in the week before AI or in the 2 wk following AI. Escherichia coli and Klebsiella spp. had the greatest adverse effects on probability of conception. The probability of conception for a cow with any combination of characteristics may be calculated based on the parameter estimates. 
These findings may be helpful to farmers in assessing reproduction in their dairy cows for more effective cow management. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
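
    The final calculation mentioned above can be illustrated as follows. The coefficients are hypothetical placeholders, not the paper's estimates; with a log link, the predicted probability is the exponential of the summed linear predictor:

```python
import math

# Hypothetical coefficients (illustration only, not the fitted values).
coef = {
    "intercept": -1.20,
    "parity_2": -0.05,
    "season_summer": -0.10,
    "cm_ecoli_week_before_ai": -0.45,   # E. coli CM in the week before AI
}

# Log-link (Poisson) model: predicted probability = exp(linear predictor).
eta = sum(coef.values())
p_conception = math.exp(eta)
print(round(p_conception, 3))
```

    Substituting the published parameter estimates for a cow's actual covariate pattern yields her predicted probability of conception in the same way.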

  10. A fractal model of effective stress of porous media and the analysis of influence factors

    NASA Astrophysics Data System (ADS)

    Li, Wei; Zhao, Huan; Li, Siqi; Sun, Wenfeng; Wang, Lei; Li, Bing

    2018-03-01

    The basic concept of effective stress describes the characteristics of fluid and solid interaction in porous media. In this paper, based on the theory of fractal geometry, a fractal model was built to analyze the relationship between the microstructure and the effective stress of porous media. From the microscopic point of view, the influence of effective stress on pore structure of porous media was demonstrated. Theoretical analysis and experimental results show that: (i) the fractal model of effective stress can be used to describe the relationship between effective stress and the microstructure of porous media; (ii) a linear increase in the effective stress leads to exponential increases in fractal dimension, porosity and pore number of the porous media, and causes a decreasing trend in the average pore radius.

  11. Ensemble survival tree models to reveal pairwise interactions of variables with time-to-events outcomes in low-dimensional setting

    PubMed Central

    Dazard, Jean-Eudes; Ishwaran, Hemant; Mehlotra, Rajeev; Weinberg, Aaron; Zimmerman, Peter

    2018-01-01

    Unraveling interactions among variables such as genetic, clinical, demographic and environmental factors is essential to understand the development of common and complex diseases. To increase the power to detect such variable interactions associated with clinical time-to-events outcomes, we borrowed established concepts from random survival forest (RSF) models. We introduce a novel RSF-based pairwise interaction estimator and derive a randomization method with bootstrap confidence intervals for inferring interaction significance. Using various linear and nonlinear time-to-events survival models in simulation studies, we first show the efficiency of our approach: true pairwise interaction-effects between variables are uncovered, while they may not be accompanied by their corresponding main-effects, and may not be detected by standard semi-parametric regression modeling and test statistics used in survival analysis. Moreover, using a RSF-based cross-validation scheme for generating prediction estimators, we show that informative predictors may be inferred. We applied our approach to an HIV cohort study recording key host gene polymorphisms and their association with HIV change of tropism or AIDS progression. Altogether, this shows how linear or nonlinear pairwise statistical interactions of variables may be efficiently detected with a predictive value in observational studies with time-to-event outcomes. PMID:29453930
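
    The randomization-with-bootstrap idea can be illustrated generically. This is not the authors' RSF-based estimator; a simple cell-mean interaction statistic on simulated binary factors stands in for it:

```python
import numpy as np

# Simulated data with a true pairwise interaction of size 2 (illustration
# only; this simple cell-mean statistic is not the authors' RSF estimator).
rng = np.random.default_rng(1)
n = 500
x1 = rng.integers(0, 2, n)
x2 = rng.integers(0, 2, n)
y = x1 + x2 + 2.0 * (x1 * x2) + rng.normal(0, 1, n)

def interaction_stat(yy, a, b):
    m = lambda u, v: yy[(a == u) & (b == v)].mean()   # cell means
    return (m(1, 1) - m(1, 0)) - (m(0, 1) - m(0, 0))

# Bootstrap confidence interval for the interaction statistic.
boot = []
for _ in range(500):
    i = rng.integers(0, n, n)                 # resample rows with replacement
    boot.append(interaction_stat(y[i], x1[i], x2[i]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(lo > 0)   # interval excluding zero flags a significant interaction
```

    The authors' method applies the same inferential pattern, but with an RSF-derived pairwise interaction estimator suited to censored time-to-event outcomes.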

  12. Self-reported well-being score modelling and prediction: Proof-of-concept of an approach based on linear dynamic systems.

    PubMed

    Xinyang Li; Poli, Riccardo; Valenza, Gaetano; Scilingo, Enzo Pasquale; Citi, Luca

    2017-07-01

    Assessment and recognition of perceived well-being has wide applications in the development of assistive healthcare systems for people with physical and mental disorders. In practical data collection, these systems need to be minimally intrusive and respect users' autonomy and willingness as much as possible. As a result, self-reported data are not necessarily available at all times. Conventional classifiers, which usually require feature vectors of a fixed dimension, are not well suited for this problem. To address the issue of non-uniformly sampled measurements, in this study we propose a method for the modelling and prediction of self-reported well-being scores based on a linear dynamic system. Within the model, we formulate different features as observations, making predictions even in the presence of inconsistent and irregular data. We evaluate the proposed method with synthetic data, as well as real data from two patients diagnosed with cancer. In the latter, self-reported scores from three well-being-related scales were collected over a period of approximately 60 days. Prompted each day, the patients had the choice whether to respond or not. Results show that the proposed model is able to track and predict the patients' perceived well-being dynamics despite the irregularly sampled data.
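
    A scalar stand-in for this approach (a toy model under our own assumptions, not the authors' exact system): a Kalman filter over a linear dynamic system handles missing self-reports by simply skipping the measurement update:

```python
# Toy linear dynamic system (assumptions ours): latent well-being x follows
# x_t = a*x_{t-1} + process noise, observed as y_t = x_t + measurement noise
# on days the person responds. Missing days (None) get the predict step only.
def kalman_track(ys, a=0.95, q=0.1, r=0.5, x0=0.0, p0=1.0):
    x, p, est = x0, p0, []
    for y in ys:
        x, p = a * x, a * a * p + q          # time update (predict)
        if y is not None:                    # measurement update if observed
            k = p / (p + r)
            x, p = x + k * (y - x), (1 - k) * p
        est.append(x)
    return est

scores = [5.0, None, 4.5, None, None, 4.0]   # irregular self-reports
print([round(v, 2) for v in kalman_track(scores)])
```

    The state estimate keeps evolving through unanswered days, which is what makes the approach robust to irregular sampling.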

  13. Ensemble survival tree models to reveal pairwise interactions of variables with time-to-events outcomes in low-dimensional setting.

    PubMed

    Dazard, Jean-Eudes; Ishwaran, Hemant; Mehlotra, Rajeev; Weinberg, Aaron; Zimmerman, Peter

    2018-02-17

    Unraveling interactions among variables such as genetic, clinical, demographic and environmental factors is essential to understand the development of common and complex diseases. To increase the power to detect such variable interactions associated with clinical time-to-events outcomes, we borrowed established concepts from random survival forest (RSF) models. We introduce a novel RSF-based pairwise interaction estimator and derive a randomization method with bootstrap confidence intervals for inferring interaction significance. Using various linear and nonlinear time-to-events survival models in simulation studies, we first show the efficiency of our approach: true pairwise interaction-effects between variables are uncovered, while they may not be accompanied by their corresponding main-effects, and may not be detected by standard semi-parametric regression modeling and test statistics used in survival analysis. Moreover, using a RSF-based cross-validation scheme for generating prediction estimators, we show that informative predictors may be inferred. We applied our approach to an HIV cohort study recording key host gene polymorphisms and their association with HIV change of tropism or AIDS progression. Altogether, this shows how linear or nonlinear pairwise statistical interactions of variables may be efficiently detected with a predictive value in observational studies with time-to-event outcomes.

  14. The UMR Conception Cycle of Vocational School Students in Solving Linear Equation

    ERIC Educational Resources Information Center

    Li, Shao-Ying; Leon, Shian

    2013-01-01

    The authors designed instruments based on theories and the literature. Data were collected throughout remedial teaching processes and through interviews with vocational school students. Using the SOLO (structure of the observed learning outcome) taxonomy, the authors constructed the UMR (unistructural-multistructural-relational sequence) conception cycle of the formative and…

  15. Inertial Sensor Assisted Acquisition, Tracking, and Pointing for High Data Rate Free Space Optical Communications

    NASA Technical Reports Server (NTRS)

    Lee, Shinhak; Ortiz, Gerry G.

    2003-01-01

    We discuss use of inertial sensors to facilitate deep space optical communications. Implementation of this concept requires accurate and wide bandwidth inertial sensors. In this presentation, the principal concept and algorithm using linear accelerometers will be given along with the simulation and experimental results.

  16. Tell a Piecewise Story

    ERIC Educational Resources Information Center

    Sinclair, Nathalie; Armstrong, Alayne

    2011-01-01

    Piecewise linear functions and story graphs are concepts usually associated with algebra, but in the authors' classroom, they found success teaching this topic in a distinctly geometrical manner. The focus of the approach was less on learning geometric concepts and more on using spatial and kinetic reasoning. It not only supports the learning of…

  17. Linear fixed-field multipass arcs for recirculating linear accelerators

    DOE PAGES

    Morozov, V. S.; Bogacz, S. A.; Roblin, Y. R.; ...

    2012-06-14

    Recirculating Linear Accelerators (RLA's) provide a compact and efficient way of accelerating particle beams to medium and high energies by reusing the same linac for multiple passes. In the conventional scheme, after each pass, the different energy beams coming out of the linac are separated and directed into appropriate arcs for recirculation, with each pass requiring a separate fixed-energy arc. In this paper we present a concept of an RLA return arc based on linear combined-function magnets, in which two and potentially more consecutive passes with very different energies are transported through the same string of magnets. By adjusting the dipole and quadrupole components of the constituting linear combined-function magnets, the arc is designed to be achromatic and to have zero initial and final reference orbit offsets for all transported beam energies. We demonstrate the concept by developing a design for a droplet-shaped return arc for a dog-bone RLA capable of transporting two beam passes with momenta different by a factor of two. Finally, we present the results of tracking simulations of the two passes and lay out the path to end-to-end design and simulation of a complete dog-bone RLA.

  18. A MEMS Micro-Translation Stage with Long Linear Translation

    NASA Technical Reports Server (NTRS)

    Ferguson, Cynthia K.; English, J. M.; Nordin, G. P.; Ashley, P. R.; Abushagur, M. A. G.

    2004-01-01

    A MEMS Micro-Translation Stage (MTS) actuator concept has been developed that is capable of traveling long distances while maintaining the low power, low voltage, and accuracy required by many applications, including optical coupling. The MTS uses capacitive electrostatic forces in a linear motor application, with stationary stators arranged linearly on both sides of a channel and matching rotors on a moveable shuttle. This creates a force that allows the shuttle to be pulled along the channel. It is designed to carry 100 micron-sized elements on the top surface, and can travel back and forth in the channel, either in a stepping fashion allowing many interim stops, or at constant adjustable speeds for a controlled scanning motion. The MTS travel range is limited only by the size of the fabrication wafer. Analytical modeling and simulations were performed based on the fabrication process to assure that the stresses, friction, and electrostatic forces were acceptable for successful operation of this device. The translation forces were analyzed to be near 0.5 μN, with a 300 μm stop-to-stop time of 11.8 ms.
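
    The electrostatic drive principle can be put on a back-of-envelope footing with the standard comb-drive force formula; the geometry and voltage below are our own assumptions, not the paper's design values:

```python
# Back-of-envelope sketch (our numbers, not the paper's design): lateral
# electrostatic force of a comb-drive style linear motor,
#   F = n * eps0 * t * V^2 / g
# for n finger pairs of thickness t, gap g, at drive voltage V.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def comb_force(n, t, g, v):
    return n * EPS0 * t * v ** 2 / g

f = comb_force(n=100, t=50e-6, g=2e-6, v=30)   # hypothetical geometry
print(f"{f * 1e6:.2f} uN")
```

    Forces of this order (micronewtons) are consistent with the sub-μN to μN scale the abstract reports for the MTS.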

  19. Signal Prediction With Input Identification

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Chen, Ya-Chin

    1999-01-01

    A novel coding technique is presented for signal prediction with applications including speech coding, system identification, and estimation of input excitation. The approach is based on the blind equalization method for speech signal processing in conjunction with the geometric subspace projection theory to formulate the basic prediction equation. The speech-coding problem is often divided into two parts, a linear prediction model and excitation input. The parameter coefficients of the linear predictor and the input excitation are solved simultaneously and recursively by a conventional recursive least-squares algorithm. The excitation input is computed by coding all possible outcomes into a binary codebook. The coefficients of the linear predictor and excitation, and the index of the codebook can then be used to represent the signal. In addition, a variable-frame concept is proposed to block the same excitation signal in sequence in order to reduce the storage size and increase the transmission rate. The results of this work can be easily extended to the problem of disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. Simulations are included to demonstrate the proposed method.
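
    The recursive least-squares (RLS) update at the core of such a coder can be sketched generically (our version; the codebook search for the excitation is omitted). On a noiseless sinusoid it recovers the exact AR(2) predictor coefficients:

```python
import numpy as np

def rls(signal, order=2, lam=0.99, delta=100.0):
    """Recursive least squares for linear prediction coefficients."""
    w = np.zeros(order)
    P = delta * np.eye(order)                 # inverse correlation estimate
    for n in range(order, len(signal)):
        x = signal[n - order:n][::-1]         # most recent sample first
        e = signal[n] - w @ x                 # a-priori prediction error
        g = P @ x / (lam + x @ P @ x)         # gain vector
        w = w + g * e                         # coefficient update
        P = (P - np.outer(g, x @ P)) / lam    # covariance update
    return w

t = np.arange(200)
s = np.sin(0.3 * t)             # test signal; satisfies an exact AR(2) relation
w = rls(s)
print(np.round(w, 3))           # near [2*cos(0.3), -1]
```

    In the full coder, this coefficient recursion runs jointly with the excitation estimate, solved simultaneously sample by sample.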

  20. Exploring Divisibility and Summability of 'Photon' Wave Packets in Nonlinear Optical Phenomena

    NASA Technical Reports Server (NTRS)

    Prasad, Narasimha; Roychoudhuri, Chandrasekhar

    2009-01-01

    Formulations for second and higher harmonic frequency up- and down-conversions, as well as multi-photon processes, directly assume summability and divisibility of photons. Quantum mechanical (QM) interpretations are completely congruent with these assumptions. However, for linear optical phenomena (interference, diffraction, refraction, material dispersion, spectral dispersion, etc.), we have a profound dichotomy. Most optical engineers innovate and analyze all optical instruments by propagating pure classical electromagnetic (EM) fields using Maxwell's equations, and give only lip service to the concept of "indivisible light quanta". Further, irrespective of the linearity or nonlinearity of the phenomena, the final results are always registered through some photo-electric or photo-chemical effect. This is mathematically well modeled by a quadratic action (energy absorption) relation. Since QM does not preclude divisibility or summability of photons in nonlinear and multi-photon effects, it cannot have any foundational reason against these same possibilities in linear optical phenomena. It implies that we must carefully revisit the fundamental roots behind all light-matter interaction processes and understand the common origin of the "graininess" and "discreteness" of light energy.

  1. MMIC linear-phase and digital modulators for deep space spacecraft X-band transponder applications

    NASA Technical Reports Server (NTRS)

    Mysoor, Narayan R.; Ali, Fazal

    1991-01-01

    The design concepts, analyses, and development of GaAs monolithic microwave integrated circuit (MMIC) linear-phase and digital modulators for the next generation of space-borne communications systems are summarized. The design approach uses a compact lumped element quadrature hybrid and Metal Semiconductor Field Effect Transistors (MESFET)-varactors to provide low loss and well-controlled phase performance for deep space transponder (DST) applications. The measured results of the MESFET-diode show a capacitance range of 2:1 under reverse bias, and a Q of 38 at 10 GHz. Three cascaded sections of hybrid-coupled reflection phase shifters were modeled and simulations performed to provide an X-band (8415 +/- 50 MHz) DST phase modulator with +/- 2.5 radians of peak phase deviation. The modulator will accommodate downlink signal modulation with composite telemetry and ranging data, with a deviation linearity tolerance of +/- 8 percent and insertion loss of less than 8 +/- 0.5 dB. The MMIC digital modulator is designed to provide greater than 10 Mb/s of bi-phase modulation at X-band.

  2. Dependence and independence of survival parameters on linear energy transfer in cells and tissues

    PubMed Central

    Ando, Koichi; Goodhead, Dudley T.

    2016-01-01

    Carbon-ion radiotherapy has been used to treat more than 9000 cancer patients in the world since 1994. Spreading of the Bragg peak is necessary for carbon-ion radiotherapy, and is designed based on the linear-quadratic model that is commonly used for photon therapy. Our recent analysis using in vitro cell kills and in vivo mouse tissue reactions indicates that radiation quality affects mainly the alpha terms, but much less the beta terms, which raises the question of whether this is true in other biological systems. Survival parameters alpha and beta for 45 in vitro mammalian cell lines were obtained by colony formation after irradiation with carbon ions, fast neutrons and X-rays. Relationships between survival parameters and linear energy transfer (LET) below 100 keV/μm were obtained for 4 mammalian cell lines. Mouse skin reaction and tumor growth delay were measured after fractionated irradiation. The Fe-plot provided survival parameters of the tissue reactions. A clear separation between X-rays and high-LET radiation was observed for alpha values, but not for beta values. Alpha values increased with increasing LET in all cells and tissues studied, while beta did not show a systematic change. We have found a puzzle or contradiction in common interpretations of the linear-quadratic model that causes us to question whether the model is appropriate for interpreting the biological effectiveness of high-LET radiation up to 500 keV/μm, probably because of inconsistency in the concept of damage interaction. A repair saturation model proposed here was good enough to fit cell-kill efficiency by radiation over a wide range of LET. A model incorporating damage complexity and repair saturation would be suitable for heavy-ion radiotherapy. PMID:27380803
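    The linear-quadratic model referred to above expresses the surviving fraction after a dose D as S(D) = exp(-(αD + βD²)). A minimal sketch (the α and β values below are hypothetical, chosen only to illustrate the reported pattern of high-LET radiation raising α while leaving β nearly unchanged):

```python
import math

def lq_survival(dose, alpha, beta):
    """Linear-quadratic model: surviving fraction after dose D (Gy)."""
    return math.exp(-(alpha * dose + beta * dose ** 2))

# Hypothetical parameters: high-LET radiation mainly raises alpha,
# leaving beta nearly unchanged (the pattern reported in the abstract).
xray = lq_survival(2.0, alpha=0.15, beta=0.05)      # photon-like
carbon = lq_survival(2.0, alpha=0.60, beta=0.05)    # high-LET-like
```

    With the same β, the larger α term alone drives the lower surviving fraction for the high-LET case.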

  3. A hazard rate analysis of fertility using duration data from Malaysia.

    PubMed

    Chang, C

    1988-01-01

    Data from the Malaysia Fertility and Family Planning Survey (MFLS) of 1974 were used to investigate the effects of biological and socioeconomic variables on fertility based on the hazard rate model. Another study objective was to investigate the robustness of the findings of Trussell et al. (1985) by comparing the findings of this study with theirs. The hazard rate of conception for the jth fecundable spell of the ith woman, hij, is determined by duration dependence, tij, measured by the waiting time to conception; unmeasured heterogeneity, HETi; time-invariant variables, Yi (race, cohort, education, age at marriage); and time-varying variables, Xij (age, parity, opportunity cost, income, child mortality, child sex composition). In this study, all the time-varying variables were constant over a spell. An asymptotic χ² test for the equality of constant hazard rates across birth orders, allowing time-invariant variables and heterogeneity, showed the importance of time-varying variables and duration dependence. Under the assumption of fixed-effects heterogeneity and the Weibull distribution for the duration of waiting time to conception, the empirical results revealed a negative parity effect, a negative impact from male children, and a positive effect from child mortality on the hazard rate of conception. The estimates of step functions for the hazard rate of conception showed parity-dependent fertility control, evidence of heterogeneity, and the possibility of nonmonotonic duration dependence. In a hazard rate model with piecewise-linear-segment duration dependence, the socioeconomic variables such as cohort, child mortality, income, and race had significant effects, after controlling for the length of the preceding birth. The duration dependence was consistent with the common finding, i.e., first increasing and then decreasing at a slow rate. The effects of education and opportunity cost on fertility were insignificant.
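    The Weibull assumption for the waiting time to conception implies a hazard rate h(t) = (k/λ)(t/λ)^(k-1), which rises with duration when the shape parameter k exceeds 1 and falls when k is below 1. A minimal sketch with hypothetical parameters (not values estimated in the study):

```python
def weibull_hazard(t, shape, scale):
    """Weibull hazard rate h(t) = (k/lambda) * (t/lambda)**(k-1)."""
    return (shape / scale) * (t / scale) ** (shape - 1)

# shape > 1: the hazard of conception rises with waiting time;
# shape < 1 would make it fall (monotonic duration dependence only,
# which is why the study also fits piecewise step functions).
h_early = weibull_hazard(2.0, shape=1.5, scale=10.0)
h_late = weibull_hazard(8.0, shape=1.5, scale=10.0)
```

    A single Weibull cannot produce the nonmonotonic (first increasing, then decreasing) duration dependence the study reports, which motivates the piecewise-linear-segment specification.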

  4. In vivo relationship between pelvis motion and deep fascia displacement of the medial gastrocnemius: anatomical and functional implications.

    PubMed

    Cruz-Montecinos, Carlos; González Blanche, Alberto; López Sánchez, David; Cerda, Mauricio; Sanzana-Cuche, Rodolfo; Cuesta-Vargas, Antonio

    2015-11-01

    Different authors have modelled myofascial tissue connectivity over a distance using cadaveric models, but in vivo models are scarce. The aim of this study was to evaluate the relationship between pelvic motion and deep fascia displacement in the medial gastrocnemius (MG). Deep fascia displacement of the MG was evaluated through automatic tracking with ultrasound. Angular variation of the pelvis was determined by 2D kinematic analysis. The average maximum fascia displacement and pelvic motion were 1.501 ± 0.78 mm and 6.55 ± 2.47°, respectively. The result of a simple linear regression between fascia displacement and pelvic motion for three task executions by 17 individuals was r = 0.791 (P < 0.001). Moreover, hamstring flexibility was related to a lower anterior tilt of the pelvis (r = 0.544, P < 0.024) and a lower deep fascia displacement of the MG (r = 0.449, P < 0.042). These results support the concept of myofascial tissue connectivity over a distance in an in vivo model, reinforce the functional concept of force transmission through synergistic muscle groups, and grant new perspectives for the role of fasciae in restricting movement in remote zones. © 2015 Anatomical Society.

  5. In vivo relationship between pelvis motion and deep fascia displacement of the medial gastrocnemius: anatomical and functional implications

    PubMed Central

    Cruz-Montecinos, Carlos; González Blanche, Alberto; López Sánchez, David; Cerda, Mauricio; Sanzana-Cuche, Rodolfo; Cuesta-Vargas, Antonio

    2015-01-01

    Different authors have modelled myofascial tissue connectivity over a distance using cadaveric models, but in vivo models are scarce. The aim of this study was to evaluate the relationship between pelvic motion and deep fascia displacement in the medial gastrocnemius (MG). Deep fascia displacement of the MG was evaluated through automatic tracking with ultrasound. Angular variation of the pelvis was determined by 2D kinematic analysis. The average maximum fascia displacement and pelvic motion were 1.501 ± 0.78 mm and 6.55 ± 2.47°, respectively. The result of a simple linear regression between fascia displacement and pelvic motion for three task executions by 17 individuals was r = 0.791 (P < 0.001). Moreover, hamstring flexibility was related to a lower anterior tilt of the pelvis (r = 0.544, P < 0.024) and a lower deep fascia displacement of the MG (r = 0.449, P < 0.042). These results support the concept of myofascial tissue connectivity over a distance in an in vivo model, reinforce the functional concept of force transmission through synergistic muscle groups, and grant new perspectives for the role of fasciae in restricting movement in remote zones. PMID:26467242
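    The reported r values come from simple linear regression; the Pearson correlation coefficient behind them can be computed directly (a sketch on synthetic data, not the study's measurements):

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Synthetic illustration only: a roughly linear relation between
# pelvic motion (degrees) and fascia displacement (mm) yields r near 1.
pelvis = [4.0, 5.0, 6.5, 7.0, 8.5, 9.0]
fascia = [0.9, 1.1, 1.4, 1.5, 1.9, 2.0]
r = pearson_r(pelvis, fascia)
```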

  6. Selectivity in analytical chemistry: two interpretations for univariate methods.

    PubMed

    Dorkó, Zsanett; Verbić, Tatjana; Horvai, George

    2015-01-01

    Selectivity is extremely important in analytical chemistry, but its definition is elusive despite continued efforts by professional organizations and individual scientists. This paper shows that the existing selectivity concepts for univariate analytical methods broadly fall into two classes: selectivity concepts based on measurement error and concepts based on response surfaces (the response surface being the 3D plot of the univariate signal as a function of analyte and interferent concentration). The strengths and weaknesses of the different definitions are analyzed and contradictions between them unveiled. The error-based selectivity is very general and very safe, but its application to a range of samples (as opposed to a single sample) requires knowledge of some constraint on the possible sample compositions. The selectivity concepts based on the response surface are easily applied to linear response surfaces but may lead to difficulties and counterintuitive results when applied to nonlinear response surfaces. A particular advantage of this class of selectivity is that with linear response surfaces it can provide a concentration-independent measure of selectivity. In contrast, the error-based selectivity concept allows only a yes/no-type decision about selectivity. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Impedance-based overcharging and gassing model for VRLA/AGM batteries

    NASA Astrophysics Data System (ADS)

    Thele, M.; Karden, E.; Surewaard, E.; Sauer, D. U.

    This paper presents for the first time an impedance-based non-linear model for lead-acid batteries that is applicable in all operational modes. An overcharging model describes the accumulation and depletion of the dissolved Pb2+ ions. This physical model has been added to the previously presented model to expand the model validity. To properly represent the charge acceptance during dynamic operation, a concept of "hardening crystals" has been introduced in the model. Moreover, a detailed gassing and oxygen recombination model has been integrated. A realistic simulation of the overcharging behavior is now possible. The mathematical description is given in the paper. Simplifications are introduced that allow for an efficient implementation and for model parameterization in the time domain. A comparison between experimental data and simulation results demonstrates the achieved accuracy. The model enhancement is of major importance for analyzing charging strategies, especially in partial-cycling operation with limited charging time, e.g. in electrically assisted or hybrid cars and autonomous power supply systems.

  8. GPS Auto-Navigation Design for Unmanned Air Vehicles

    NASA Technical Reports Server (NTRS)

    Nilsson, Caroline C. A.; Heinzen, Stearns N.; Hall, Charles E., Jr.; Chokani, Ndaona

    2003-01-01

    A GPS auto-navigation system is designed for Unmanned Air Vehicles. The objective is to enable the air vehicle to be used as a test-bed for novel flow control concepts. The navigation system uses pre-programmed GPS waypoints. The actual GPS position, heading, and velocity are collected by the flight computer, a PC104 system running Real-Time Linux, and compared with the desired waypoint. The navigator then determines the necessity of a heading correction and outputs the correction in the form of a commanded bank angle, for a level coordinated turn, to the controller system. This controller system consists of five controllers (pitch-rate PID, yaw damper, bank-angle PID, velocity hold, and altitude hold) designed for a closed-loop non-linear aircraft model with linear aerodynamic coefficients. The ability and accuracy of using GPS data is validated by a GPS flight. The autopilots are also validated in flight, and these flight validations show that the autopilots function as designed. The aircraft model, generated in MATLAB Simulink, is also enhanced by the flight data to accurately represent the actual aircraft.
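    A discrete PID controller of the kind used for the bank-angle and pitch-rate loops can be sketched as follows (the gains, time step, and first-order plant below are hypothetical illustrations, not the UAV's actual values):

```python
class PID:
    """Minimal discrete PID controller (a sketch with hypothetical gains)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                     # integral term
        derivative = (error - self.prev_error) / self.dt     # derivative term
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a crude first-order bank-angle plant toward a 20-degree command.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
angle = 0.0
for _ in range(5000):                 # 50 s of simulated time
    u = pid.update(20.0, angle)
    angle += (u - angle) * 0.01       # hypothetical first-order plant
```

    The integral term removes the steady-state error that a pure proportional loop would leave on this plant.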

  9. Macro-spin modeling and experimental study of spin-orbit torque biased magnetic sensors

    NASA Astrophysics Data System (ADS)

    Xu, Yanjun; Yang, Yumeng; Luo, Ziyan; Xu, Baoxi; Wu, Yihong

    2017-11-01

    We report a systematic study of spin-orbit torque biased magnetic sensors based on NiFe/Pt bilayers through both macro-spin modeling and experiments. The simulation results show that it is possible to achieve a linear sensor with a dynamic range of 0.1-10 Oe, power consumption of 1 μW to 1 mW, and sensitivity of 0.1-0.5 Ω/Oe. These characteristics can be controlled by varying the sensor dimensions and the current density in the Pt layer. The latter is in the range of 1 × 10⁵-10⁷ A/cm². Experimental results of fabricated sensors with selected sizes agree well with the simulation results. For a Wheatstone bridge sensor comprising four sensing elements, a sensitivity up to 0.548 Ω/Oe, linearity error below 6%, and detectivity of about 2.8 nT/√Hz were obtained. The simple structure and ultrathin thickness greatly facilitate the integration of these sensors for on-chip applications. As a proof-of-concept experiment, we demonstrate its application in detection of current flowing in an on-chip Cu wire.

  10. Design of Helical Capacitance Sensor for Holdup Measurement in Two-Phase Stratified Flow: A Sinusoidal Function Approach

    PubMed Central

    Lim, Lam Ghai; Pao, William K. S.; Hamid, Nor Hisham; Tang, Tong Boon

    2016-01-01

    A 360° twisted helical capacitance sensor was developed for holdup measurement in horizontal two-phase stratified flow. Instead of suppressing nonlinear response, the sensor was optimized in such a way that a ‘sine-like’ function was displayed on top of the linear function. This design concept was implemented and verified in both software and hardware. A good agreement was achieved between the finite element model of the proposed design and the approximation model (pure sinusoidal function), with a maximum difference of ±1.2%. In addition, the design parameters of the sensor were analysed and investigated. It was found that the error in symmetry of the sinusoidal function could be minimized by adjusting the pitch of the helix. Experiments on air-water and oil-water stratified flows were carried out, validating the sinusoidal relationship with a maximum difference of ±1.2% and ±1.3%, respectively, for the range of water holdup from 0.15 to 0.85. The proposed design concept therefore may pose a promising alternative for the optimization of capacitance sensor design. PMID:27384567
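    The approximation model described above, a small sinusoid superposed on the ideal linear response, can be sketched as follows (the functional form and amplitude are assumptions for illustration, not the paper's fitted model):

```python
import math

def normalized_capacitance(holdup, amplitude=0.05):
    """Assumed 'sine-like on top of linear' response:
    C_norm(h) = h + A * sin(2*pi*h), for water holdup h in [0, 1]."""
    return holdup + amplitude * math.sin(2 * math.pi * holdup)

# The deviation from the ideal linear response is bounded by the amplitude A.
deviation = max(
    abs(normalized_capacitance(h / 100.0) - h / 100.0) for h in range(101)
)
```

    Because the sinusoid is a known, invertible perturbation, the holdup can still be recovered from the measured capacitance without forcing the sensor geometry to be strictly linear.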

  11. Status on Iterative Transform Phase Retrieval Applied to the GBT Data

    NASA Technical Reports Server (NTRS)

    Dean, Bruce; Aronstein, David; Smith, Scott; Shiri, Ron; Hollis, Jan M.; Lyons, Richard; Prestage, Richard; Hunter, Todd; Ghigo, Frank; Nikolic, Bojan

    2007-01-01

    This slide presentation reviews the use of iterative transform phase retrieval in the analysis of Green Bank Telescope (GBT) data. It reviews the NASA projects that have used phase retrieval, and the testbed for the algorithm to be used for the James Webb Space Telescope. It shows a comparison of phase retrieval with an interferometer, and reviews the two approaches used for phase retrieval: iterative transform (ITA) and parametric (non-linear least-squares model fitting). The concept of ITA phase retrieval is reviewed, along with its application to radio antennas. The presentation also examines the National Radio Astronomy Observatory (NRAO) data from the GBT, and the Fourier model that NRAO uses to analyze the data. The challenge for ITA phase retrieval is reviewed, and the coherent approximation for incoherent data is shown; the approximation is valid for a large tilt. There is also a review of a proof-of-concept phase retrieval simulation using the input wavefront, and an initial estimate of the sampling parameters from the focused GBT data.

  12. Modeling Nonlinear Acoustic Standing Waves in Resonators: Theory and Experiments

    NASA Technical Reports Server (NTRS)

    Raman, Ganesh; Li, Xiaofan; Finkbeiner, Joshua

    2004-01-01

    The overall goal of the cooperative research with NASA Glenn is to fundamentally understand, computationally model, and experimentally validate non-linear acoustic waves in enclosures, with the ultimate goal of developing a non-contact acoustic seal. The longer-term goal is to transition the Glenn acoustic seal innovation to a prototype sealing device. Lucas and coworkers are credited with pioneering work in Resonant Macrosonic Synthesis (RMS). Several patents and publications have successfully illustrated the concept of RMS. To utilize this concept in a practical application, one needs an understanding of the details of the phenomenon and a predictive tool that can examine the waveforms produced within resonators of complex shapes. With appropriately shaped resonators one can produce un-shocked waveforms of high amplitude that result in very high pressures in certain regions. Our goal is to control the waveforms and exploit the high pressures to produce an acoustic seal. Note that shock formation critically limits peak-to-peak pressure amplitudes and also causes excessive energy dissipation; proper shaping of the resonator is thus critical to the use of this innovation.

  13. On the distinguishability of HRF models in fMRI.

    PubMed

    Rosa, Paulo N; Figueiredo, Patricia; Silvestre, Carlos J

    2015-01-01

    Modeling the Hemodynamic Response Function (HRF) is a critical step in fMRI studies of brain activity, and it is often desirable to estimate HRF parameters with physiological interpretability. A biophysically informed model of the HRF can be described by a non-linear time-invariant dynamic system. However, the identification of this dynamic system may leave much uncertainty on the exact values of the parameters. Moreover, the high noise levels in the data may hinder the model estimation task. In this context, the estimation of the HRF may be seen as a problem of model falsification or invalidation, where we are interested in distinguishing among a set of eligible models of dynamic systems. Here, we propose a systematic tool to determine the distinguishability among a set of physiologically plausible HRF models. The concept of absolutely input-distinguishable systems is introduced and applied to a biophysically informed HRF model, by exploiting the structure of the underlying non-linear dynamic system. A strategy to model uncertainty in the input time-delay and magnitude is developed and its impact on the distinguishability of two physiologically plausible HRF models is assessed, in terms of the maximum noise amplitude above which it is not possible to guarantee the falsification of one model in relation to another. Finally, a methodology is proposed for the choice of the input sequence, or experimental paradigm, that maximizes the distinguishability of the HRF models under investigation. The proposed approach may be used to evaluate the performance of HRF model estimation techniques from fMRI data.

  14. Toward intelligent information system

    NASA Astrophysics Data System (ADS)

    Onodera, Natsuo

    "Hypertext" denotes a concept for a novel computer-assisted tool for the storage and retrieval of text information based on human association. The structure of knowledge in our idea processing is generally complex and networked, but traditional paper documents merely express it in essentially linear and sequential forms. However, recent advances in workstation technology have allowed us to easily process electronic documents containing non-linear structures such as references or hierarchies. This paper describes the concept, history, and basic organization of hypertext, and shows the outline and features of the main existing hypertext systems. In particular, use of a hypertext database is illustrated by the example of Intermedia, developed at Brown University.

  15. A combined QSAR and partial order ranking approach to risk assessment.

    PubMed

    Carlsen, L

    2006-04-01

    QSAR generated data appear as an attractive alternative to experimental data as foreseen in the proposed new chemicals legislation REACH. A preliminary risk assessment for the aquatic environment can be based on few factors, i.e. the octanol-water partition coefficient (Kow), the vapour pressure (VP) and the potential biodegradability of the compound in combination with the predicted no-effect concentration (PNEC) and the actual tonnage in which the substance is produced. Application of partial order ranking, allowing simultaneous inclusion of several parameters leads to a mutual prioritisation of the investigated substances, the prioritisation possibly being further analysed through the concept of linear extensions and average ranks. The ranking uses endpoint values (log Kow and log VP) derived from strictly linear 'noise-deficient' QSAR models as input parameters. Biodegradation estimates were adopted from the BioWin module of the EPI Suite. The population growth impairment of Tetrahymena pyriformis was used as a surrogate for fish lethality.

  16. An experimental and analytical investigation of stall effects on flap-lag stability in forward flight

    NASA Technical Reports Server (NTRS)

    Nagabhushanam, J.; Gaonkar, Gopal H.; Mcnulty, Michael J.

    1987-01-01

    Experiments have been performed with a 1.62 m diameter hingeless rotor in a wind tunnel to investigate flap-lag stability of isolated rotors in forward flight. The three-bladed rotor model closely approaches the simple theoretical concept of a hingeless rotor as a set of rigid, articulated flap-lag blades with offset and spring restrained flap and lag hinges. Lag regressing mode stability data was obtained for advance ratios as high as 0.55 for various combinations of collective pitch and shaft angle. The prediction includes quasi-steady stall effects on rotor trim and Floquet stability analyses. Correlation between data and prediction is presented and is compared with that of an earlier study based on a linear theory without stall effects. While the results with stall effects show marked differences from the linear theory results, the stall theory still falls short of adequate agreement with the experimental data.

  17. Three-Dimensional Simulation of Liquid Drop Dynamics Within Unsaturated Vertical Hele-Shaw Cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hai Huang; Paul Meakin

    A three-dimensional, multiphase fluid flow model with volume-of-fluid interface tracking was developed and applied to study the multiphase dynamics of moving liquid drops of different sizes within vertical Hele-Shaw cells. The simulated moving velocities are significantly different from those obtained from a first-order analytical approximation based on simple force-balance concepts. The simulation results also indicate that the moving drops can exhibit a variety of shapes and that the transition among these different shapes is largely determined by the moving velocities. More importantly, there is a transition from a linear moving regime at small capillary numbers, in which the capillary number scales linearly with the Bond number, to a nonlinear moving regime at large capillary numbers, in which the moving drop releases a train of droplets from its trailing edge. The train of droplets forms a variety of patterns at different moving velocities.

  18. Thermoelectric efficiency of nanoscale devices in the linear regime

    NASA Astrophysics Data System (ADS)

    Bevilacqua, G.; Grosso, G.; Menichetti, G.; Pastori Parravicini, G.

    2016-12-01

    We study quantum transport through two-terminal nanoscale devices in contact with two particle reservoirs at different temperatures and chemical potentials. We discuss the general expressions controlling the electric charge current, heat currents, and the efficiency of energy conversion in steady conditions in the linear regime. With focus on the parameter domain where the electron system acts as a power generator, we elaborate workable expressions for the optimal efficiency and thermoelectric parameters of nanoscale devices. The general concepts are set at work in the paradigmatic cases of Lorentzian resonances and antiresonances, and the encompassing Fano transmission function: the treatments are fully analytic, in terms of trigamma functions and Bernoulli numbers. From the general curves reported here describing transport through the above model transmission functions, useful guidelines for optimal efficiency and thermopower can be inferred for engineering nanoscale devices in energy regions where they show similar transmission functions.
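    In the linear regime, the maximum efficiency of a thermoelectric generator is conventionally written in terms of the figure of merit ZT and the Carnot limit (a standard textbook relation, not the paper's transmission-function-specific result; the temperatures and ZT values below are hypothetical):

```python
import math

def max_efficiency(zt, t_hot, t_cold):
    """Maximum thermoelectric generator efficiency in the linear regime:
    eta_max = eta_C * (sqrt(1+ZT) - 1) / (sqrt(1+ZT) + T_c/T_h)."""
    eta_carnot = 1.0 - t_cold / t_hot      # Carnot limit
    m = math.sqrt(1.0 + zt)
    return eta_carnot * (m - 1.0) / (m + t_cold / t_hot)

eta1 = max_efficiency(1.0, t_hot=400.0, t_cold=300.0)
eta2 = max_efficiency(2.0, t_hot=400.0, t_cold=300.0)
```

    Efficiency grows monotonically with ZT and approaches the Carnot limit only as ZT tends to infinity.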

  19. Calculation of open and closed system elastic coefficients for multicomponent solids

    NASA Astrophysics Data System (ADS)

    Mishin, Y.

    2015-06-01

    Thermodynamic equilibrium in multicomponent solids subject to mechanical stresses is a complex nonlinear problem whose exact solution requires extensive computations. A few decades ago, Larché and Cahn proposed a linearized solution of the mechanochemical equilibrium problem by introducing the concept of open system elastic coefficients [Acta Metall. 21, 1051 (1973), 10.1016/0001-6160(73)90021-7]. Using the Ni-Al solid solution as a model system, we demonstrate that open system elastic coefficients can be readily computed by semigrand canonical Monte Carlo simulations in conjunction with the shape fluctuation approach. Such coefficients can be derived from a single simulation run, together with other thermodynamic properties needed for prediction of compositional fields in solid solutions containing defects. The proposed calculation approach enables streamlined solutions of mechanochemical equilibrium problems in complex alloys. Second order corrections to the linear theory are extended to multicomponent systems.

  20. Analytical and Experimental Characterization of a Linear-Array Thermopile Scanning Radiometer for Geo-Synchronous Earth Radiation Budget Applications

    NASA Technical Reports Server (NTRS)

    Sorensen, Ira J.

    1998-01-01

    The Thermal Radiation Group, a laboratory in the department of Mechanical Engineering at Virginia Polytechnic Institute and State University, is currently working towards the development of a new technology for cavity-based radiometers. The radiometer consists of a 256-element linear-array thermopile detector mounted on the wall of a mirrored wedge-shaped cavity. The objective of this research is to provide analytical and experimental characterization of the proposed radiometer. A dynamic end-to-end opto-electrothermal model is developed to simulate the performance of the radiometer. Experimental results for prototype thermopile detectors are included. Also presented is the concept of the discrete Green's function to characterize the optical scattering of radiant energy in the cavity, along with a data-processing algorithm to correct for the scattering. Finally, a parametric study of the sensitivity of the discrete Green's function to uncertainties in the surface properties of the cavity is presented.

  1. Parallel Implicit Runge-Kutta Methods Applied to Coupled Orbit/Attitude Propagation

    NASA Astrophysics Data System (ADS)

    Hatten, Noble; Russell, Ryan P.

    2017-12-01

    A variable-step Gauss-Legendre implicit Runge-Kutta (GLIRK) propagator is applied to coupled orbit/attitude propagation. Concepts previously shown to improve efficiency in 3DOF propagation are modified and extended to the 6DOF problem, including the use of variable-fidelity dynamics models. The impact of computing the stage dynamics of a single step in parallel is examined using up to 23 threads and 22 associated GLIRK stages; one thread is reserved for an extra dynamics function evaluation used in the estimation of the local truncation error. Efficiency is found to peak for typical examples when using approximately 8 to 12 stages for both serial and parallel implementations. Accuracy and efficiency compare favorably to explicit Runge-Kutta and linear-multistep solvers for representative scenarios. However, linear-multistep methods are found to be more efficient for some applications, particularly in a serial computing environment, or when parallelism can be applied across multiple trajectories.

  2. Development and Preliminary Testing of a High Precision Long Stroke Slit Change Mechanism for the SPICE Instrument

    NASA Technical Reports Server (NTRS)

    Paciotti, Gabriel; Humphries, Martin; Rottmeier, Fabrice; Blecha, Luc

    2014-01-01

    In the frame of ESA's Solar Orbiter scientific mission, Almatech has been selected to design, develop and test the Slit Change Mechanism of the SPICE (SPectral Imaging of the Coronal Environment) instrument. In order to guarantee optical cleanliness levels while fulfilling stringent positioning accuracy and repeatability requirements for slit positioning in the optical path of the instrument, a linear guiding system based on a double flexible blade arrangement has been selected. The four different slits to be used for the SPICE instrument resulted in a total stroke of 16.5 mm in this linear slit changer arrangement. The combination of long stroke and high-precision positioning requirements has been identified as the main design challenge to be validated through breadboard model testing. This paper presents the development of SPICE's Slit Change Mechanism (SCM) and the two-step validation tests successfully performed on breadboard models of its flexible blade support system. The validation test results have demonstrated the full adequacy of the flexible blade guiding system implemented in SPICE's Slit Change Mechanism in a stand-alone configuration. Further breadboard test results, studying the influence of the compliant connection to the SCM linear actuator on an enhanced flexible guiding system design, have shown significant enhancements in the positioning accuracy and repeatability of the selected flexible guiding system. Preliminary evaluation of the linear actuator design, including detailed tolerance analyses, has shown the suitability of this satellite roller screw based mechanism for the actuation of the tested flexible guiding system and compliant connection.
The presented development and preliminary testing of the high-precision long-stroke Slit Change Mechanism for the SPICE Instrument are considered fully successful such that future tests considering the full Slit Change Mechanism can be performed, with the gained confidence, directly on a Qualification Model. The selected linear Slit Change Mechanism design concept, consisting of a flexible guiding system driven by a hermetically sealed linear drive mechanism, is considered validated for the specific application of the SPICE instrument, with great potential for other special applications where contamination and high precision positioning are dominant design drivers.

  3. The Meaning of Recovery from Co-Occurring Disorder: Views from Consumers and Staff Members Living and Working in Housing First Programming

    PubMed Central

    Rollins, Angela L.

    2015-01-01

    The current study seeks to understand the concept of recovery from the perspectives of consumers and staff living and working in a supportive housing model designed to serve those with co-occurring disorder. Interview and focus group data were collected from consumers and staff from four housing programs. Data analyzed using an approach that combined case study and grounded theory methodologies demonstrate that: consumers’ and staff members’ views of recovery were highly compatible and resistant to abstinence-based definitions of recovery; recovery is personal; stability is a foundation for recovery; recovery is a process; and the recovery process is not linear. These themes are more consistent with mental health-focused conceptions of recovery than those traditionally used within the substance abuse field, and they help demonstrate how recovery can be influenced by the organization of services in which consumers are embedded. PMID:26388709

  4. Fast Formal Analysis of Requirements via "Topoi Diagrams"

    NASA Technical Reports Server (NTRS)

    Menzies, Tim; Powell, John; Houle, Michael E.; Kelly, John C. (Technical Monitor)

    2001-01-01

    Early testing of requirements can decrease the cost of removing errors in software projects. However, unless done carefully, that testing process can significantly add to the cost of requirements analysis. We show here that requirements expressed as topoi diagrams can be built and tested cheaply using our SP2 algorithm: the formal temporal properties of a large class of topoi can be proven very quickly, in time nearly linear in the number of nodes and edges in the diagram. There are two limitations to our approach. Firstly, topoi diagrams cannot express certain complex concepts such as iteration and subroutine calls. Hence, our approach is more useful for requirements engineering than for traditional model-checking domains. Secondly, our approach is better for exploring the temporal occurrence of properties than the temporal ordering of properties. Within these restrictions, we can express a useful range of concepts currently seen in requirements engineering, and a wide range of interesting temporal properties.

  5. The electromigration force in metallic bulk

    NASA Astrophysics Data System (ADS)

    Lodder, A.; Dekker, J. P.

    1998-01-01

    The voltage induced driving force on a migrating atom in a metallic system is discussed in the perspective of the Hellmann-Feynman force concept, local screening concepts and the linear-response approach. Since the force operator is well defined in quantum mechanics it appears to be only confusing to refer to the Hellmann-Feynman theorem in the context of electromigration. Local screening concepts are shown to be mainly of historical value. The physics involved is completely represented in ab initio local density treatments of dilute alloys and the implementation does not require additional precautions about screening, being typical for jellium treatments. The linear-response approach is shown to be a reliable guide in deciding about the two contributions to the driving force, the direct force and the wind force. Results are given for the wind valence for electromigration in a number of FCC and BCC metals, calculated using an ab initio KKR-Green's function description of a dilute alloy.

  6. Energetics of slope flows: linear and weakly nonlinear solutions of the extended Prandtl model

    NASA Astrophysics Data System (ADS)

    Güttler, Ivan; Marinović, Ivana; Večenaj, Željko; Grisogono, Branko

    2016-07-01

    The Prandtl model succinctly combines the 1D stationary boundary-layer dynamics and thermodynamics of simple anabatic and katabatic flows over uniformly inclined surfaces. It assumes a balance between the along-the-slope buoyancy component and adiabatic warming/cooling, and the turbulent mixing of momentum and heat. In this study, the energetics of the Prandtl model is addressed in terms of the total energy (TE) concept. Furthermore, since the authors recently developed a weakly nonlinear version of the Prandtl model, the TE approach is also exercised on this extended model version, which includes an additional nonlinear term in the thermodynamic equation. Hence, the interplay among diffusion, dissipation and temperature-wind interaction of the mean slope flow is further explored. The TE of the nonlinear Prandtl model is assessed in an ensemble of solutions where the Prandtl number, the slope angle and the nonlinearity parameter are perturbed. It is shown that nonlinear effects have the lowest impact on variability in the ensemble of solutions of the weakly nonlinear Prandtl model when compared to the other two governing parameters. The general behavior of the nonlinear solution is similar to the linear solution, except that the maximum of the along-the-slope wind speed in the nonlinear solution is reduced for larger slopes. Also, the dominance of potential energy (PE) near the sloped surface, and the elevated maximum of kinetic energy (KE) in the linear and nonlinear energetics of the extended Prandtl model, are found in the PASTEX-94 measurements. The corresponding level where KE > PE most likely marks the bottom of the sublayer subject to shear-driven instabilities. Finally, possible limitations of the weakly nonlinear solutions of the extended Prandtl model are raised. In linear solutions, the local TE storage term is zero, reflecting the stationarity of solutions by definition.
However, in nonlinear solutions, the diffusion, dissipation and interaction terms (where the height of the maximum interaction is proportional to the height of the low-level jet by the factor ≈4/9) do not balance and the local storage of TE attains non-zero values. In order to examine the issue of non-stationarity, the inclusion of velocity-pressure covariance in the momentum equation is suggested for future development of the extended Prandtl model.
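For reference, the classical linear Prandtl solution that the weakly nonlinear model extends can be written as follows; the notation here (C the surface potential-temperature anomaly, N the buoyancy frequency, α the slope angle, K_m and K_h the eddy diffusivities of momentum and heat, Pr = K_m/K_h, n the slope-normal coordinate) follows standard treatments of the model rather than this abstract:

```latex
\theta(n) = C\,e^{-n/l}\cos\!\left(\frac{n}{l}\right), \qquad
u(n) = -\,C\,\mathrm{Pr}^{-1/2}\sqrt{\frac{g}{\gamma\,\theta_0}}\;
       e^{-n/l}\sin\!\left(\frac{n}{l}\right), \qquad
l = \left(\frac{4\,K_m K_h}{N^2\sin^2\alpha}\right)^{1/4}
```

The decaying oscillation in u is the low-level jet whose height enters the interaction-term scaling quoted above.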

  7. Characteristic operator functions for quantum input-plant-output models and coherent control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gough, John E.

    We introduce the characteristic operator as the generalization of the usual concept of a transfer function of linear input-plant-output systems to arbitrary quantum nonlinear Markovian input-output models. This is intended as a tool in the characterization of quantum feedback control systems that fits in with the general theory of networks. The definition exploits the linearity of noise differentials in both the plant Heisenberg equations of motion and the differential form of the input-output relations. Mathematically, the characteristic operator is a matrix of dimension equal to the number of outputs times the number of inputs (which must coincide), but with entries that are operators of the plant system. In this sense, the characteristic operator retains details of the effective plant dynamical structure and is an essentially quantum object. We illustrate the relevance of this definition to model reduction and simplification by showing that the convergence of the characteristic operator in adiabatic elimination limit models requires the same conditions and assumptions appearing in the work on limit quantum stochastic differential theorems of Bouten and Silberfarb [Commun. Math. Phys. 283, 491-505 (2008)]. This approach also shows in a natural way that the limit coefficients of the quantum stochastic differential equations in adiabatic elimination problems arise algebraically as Schur complements, and amounts to a model reduction where the fast degrees of freedom are decoupled from the slow ones and eliminated.

  8. Linear Response Laws and Causality in Electrodynamics

    ERIC Educational Resources Information Center

    Yuffa, Alex J.; Scales, John A.

    2012-01-01

    Linear response laws and causality (the effect cannot precede the cause) are of fundamental importance in physics. In the context of classical electrodynamics, students often have a difficult time grasping these concepts because the physics is obscured by the intermingling of the time and frequency domains. In this paper, we analyse the linear…

  9. Subspace in Linear Algebra: Investigating Students' Concept Images and Interactions with the Formal Definition

    ERIC Educational Resources Information Center

    Wawro, Megan; Sweeney, George F.; Rabin, Jeffrey M.

    2011-01-01

    This paper reports on a study investigating students' ways of conceptualizing key ideas in linear algebra, with the particular results presented here focusing on student interactions with the notion of subspace. In interviews conducted with eight undergraduates, we found students' initial descriptions of subspace often varied substantially from…

  10. Mat-Rix-Toe: Improving Writing through a Game-Based Project in Linear Algebra

    ERIC Educational Resources Information Center

    Graham-Squire, Adam; Farnell, Elin; Stockton, Julianna Connelly

    2014-01-01

    The Mat-Rix-Toe project utilizes a matrix-based game to deepen students' understanding of linear algebra concepts and strengthen students' ability to express themselves mathematically. The project was administered in three classes using slightly different approaches, each of which included some editing component to encourage the…

  11. An Application of the Vandermonde Determinant

    ERIC Educational Resources Information Center

    Xu, Junqin; Zhao, Likuan

    2006-01-01

    The eigenvalue is an important concept in Linear Algebra. It is well known that the eigenvectors corresponding to different eigenvalues of a square matrix are linearly independent. In most of the existing textbooks, this result is proven using mathematical induction. In this note, a new proof using the Vandermonde determinant is given. It is shown that this…
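The abstract is truncated, but the Vandermonde argument it refers to typically runs as follows (a sketch in notation chosen here): suppose a dependence \(\sum_{i=1}^{m} c_i v_i = 0\) among eigenvectors \(v_i\) of \(A\) with distinct eigenvalues \(\lambda_i\). Applying powers of \(A\) gives

```latex
\sum_{i=1}^{m} c_i \lambda_i^{k}\, v_i = 0, \qquad k = 0, 1, \dots, m-1.
```

The coefficient matrix \((\lambda_i^{k})\) is a Vandermonde matrix with distinct nodes, hence invertible, so each \(c_i v_i = 0\); since eigenvectors are nonzero, every \(c_i = 0\), proving linear independence without induction.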

  12. Construction and reconstruction concept in mathematics instruction

    NASA Astrophysics Data System (ADS)

    Mumu, Jeinne; Charitas Indra Prahmana, Rully; Tanujaya, Benidiktus

    2017-12-01

    The purpose of this paper is to describe two learning activities undertaken by lecturers so that students can understand a mathematical concept. The mathematical concept studied in this research is the Vector Space in Linear Algebra instruction. Classroom Action Research was used as the research method, with pre-service mathematics teachers at the University of Papua as the research subjects. The student participants were divided into two parallel classes: 24 students in a regular class and 18 students in a remedial class. Both approaches, concept construction and concept reconstruction, were implemented in both classes. The results show that concept construction was effective only in the regular class; in the remedial class, learning with the concept construction approach was not able to increase students' understanding of the concept taught. Students in the remedial class developed understanding of the concept only through the concept reconstruction approach.

  13. Discrete dynamic modeling of cellular signaling networks.

    PubMed

    Albert, Réka; Wang, Rui-Sheng

    2009-01-01

    Understanding signal transduction in cellular systems is a central issue in systems biology. Numerous experiments from different laboratories generate an abundance of individual components and causal interactions mediating environmental and developmental signals. However, for many signal transduction systems there is insufficient information on the overall structure and the molecular mechanisms involved in the signaling network. Moreover, lack of kinetic and temporal information makes it difficult to construct quantitative models of signal transduction pathways. Discrete dynamic modeling, combined with network analysis, provides an effective way to integrate fragmentary knowledge of regulatory interactions into a predictive mathematical model which is able to describe the time evolution of the system without the requirement for kinetic parameters. This chapter introduces the fundamental concepts of discrete dynamic modeling, particularly focusing on Boolean dynamic models. We describe this method step-by-step in the context of cellular signaling networks. Several variants of Boolean dynamic models including threshold Boolean networks and piecewise linear systems are also covered, followed by two examples of successful application of discrete dynamic modeling in cell biology.
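As a concrete illustration of the Boolean approach described above, here is a minimal synchronous Boolean model of a hypothetical four-node signaling motif; the nodes and update rules are invented for illustration, not taken from the chapter, and no kinetic parameters are required.

```python
# Synchronous Boolean dynamic model of a tiny hypothetical signaling motif:
# a ligand activates a receptor, which activates a kinase; the kinase
# induces its own inhibitor (negative feedback).
rules = {
    "receptor":  lambda s: s["ligand"],
    "kinase":    lambda s: s["receptor"] and not s["inhibitor"],
    "inhibitor": lambda s: s["kinase"],          # negative feedback
    "ligand":    lambda s: s["ligand"],          # external input, held fixed
}

def step(state):
    """One synchronous update: every node reads the previous state."""
    return {node: bool(f(state)) for node, f in rules.items()}

state = {"ligand": True, "receptor": False, "kinase": False, "inhibitor": False}
trajectory = [state]
for _ in range(6):
    state = step(state)
    trajectory.append(state)

print([s["kinase"] for s in trajectory])
```

With the ligand held on, the kinase/inhibitor negative feedback settles into a period-4 oscillation of the kinase; such cyclic attractors are exactly the qualitative behaviors these models are used to uncover.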

  14. Working the System

    ERIC Educational Resources Information Center

    Berks, Darla R.; Vlasnik, Amber N.

    2014-01-01

    Unfortunately, many students learn about the concept of systems of linear equations in a procedural way. The lessons are taught as three discrete methods. Connections between the methods, in many cases, are not made. As a result, the students' overall understanding of the concept is very limited. By the time the teacher reaches the end of the…

  15. Experienced and Novice Teachers' Concepts of Spatial Scale

    ERIC Educational Resources Information Center

    Jones, M. Gail; Tretter, Thomas; Taylor, Amy; Oppewal, Tom

    2008-01-01

    Scale is one of the thematic threads that runs through nearly all of the sciences and is considered one of the major prevailing ideas of science. This study explored novice and experienced teachers' concepts of spatial scale with a focus on linear sizes from very small (nanoscale) to very large (cosmic scale). Novice teachers included…

  16. Design and simulation of a descent controller for strategic four-dimensional aircraft navigation. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Lax, F. M.

    1975-01-01

    A time-controlled navigation system applicable to the descent phase of flight for airline transport aircraft was developed and simulated. The design incorporates the linear discrete-time sampled-data version of the linearized continuous-time system describing the aircraft's aerodynamics. Using optimal linear quadratic control techniques, an optimal deterministic control regulator which is implementable on an airborne computer is designed. The navigation controller assists the pilot in complying with assigned times of arrival along a four-dimensional flight path in the presence of wind disturbances. The strategic air traffic control concept is also described, followed by the design of a strategic control descent path. A strategy for determining possible times of arrival at specified waypoints along the descent path and for generating the corresponding route-time profiles that are within the performance capabilities of the aircraft is presented. Using a mathematical model of the Boeing 707-320B aircraft along with a Boeing 707 cockpit simulator interfaced with an Adage AGT-30 digital computer, a real-time simulation of the complete aircraft aerodynamics was achieved. The strategic four-dimensional navigation controller for longitudinal dynamics was tested on the nonlinear aircraft model in the presence of 15, 30, and 45 knot head-winds. The results indicate that the controller preserved the desired accuracy and precision of a time-controlled aircraft navigation system.

  17. On the usefulness of 'what' and 'where' pathways in vision.

    PubMed

    de Haan, Edward H F; Cowey, Alan

    2011-10-01

    The primate visual brain is classically portrayed as a large number of separate 'maps', each dedicated to the processing of specific visual cues, such as colour, motion or faces and their many features. In order to understand this fractionated architecture, the concept of cortical 'pathways' or 'streams' was introduced. In the currently prevailing view, the different maps are organised hierarchically into two major pathways, one involved in recognition and memory (the ventral stream or 'what' pathway) and the other in the programming of action (the dorsal stream or 'where' pathway). In this review, we question this heuristically influential but potentially misleading linear hierarchical pathway model and argue instead for a 'patchwork' or network model. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. Simulation of Foam Divot Weight on External Tank Utilizing Least Squares and Neural Network Methods

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Coroneos, Rula M.

    2007-01-01

    Simulation of divot weight in the insulating foam, associated with the external tank of the U.S. space shuttle, has been evaluated using least squares and neural network concepts. The simulation required models, based on fundamental considerations, that can be used to predict under what conditions voids form, the size of the voids, and subsequent divot ejection mechanisms. The quadratic neural networks were found to be satisfactory for the simulation of foam divot weight in various tests associated with the external tank. Both the linear least squares method and the nonlinear neural network predicted identical results.

  19. Simultaneously driven linear and nonlinear spatial encoding fields in MRI.

    PubMed

    Gallichan, Daniel; Cocosco, Chris A; Dewdney, Andrew; Schultz, Gerrit; Welz, Anna; Hennig, Jürgen; Zaitsev, Maxim

    2011-03-01

    Spatial encoding in MRI is conventionally achieved by the application of switchable linear encoding fields. The general concept of the recently introduced PatLoc (Parallel Imaging Technique using Localized Gradients) encoding is to use nonlinear fields to achieve spatial encoding. Relaxing the requirement that the encoding fields must be linear may lead to improved gradient performance or reduced peripheral nerve stimulation. In this work, a custom-built insert coil capable of generating two independent quadratic encoding fields was driven with high-performance amplifiers within a clinical MR system. In combination with the three linear encoding fields, the combined hardware is capable of independently manipulating five spatial encoding fields. With the linear z-gradient used for slice-selection, there remain four separate channels to encode a 2D-image. To compare trajectories of such multidimensional encoding, the concept of a local k-space is developed. Through simulations, reconstructions using six gradient-encoding strategies were compared, including Cartesian encoding separately or simultaneously on both PatLoc and linear gradients as well as two versions of a radial-based in/out trajectory. Corresponding experiments confirmed that such multidimensional encoding is practically achievable and demonstrated that the new radial-based trajectory offers the PatLoc property of variable spatial resolution while maintaining finite resolution across the entire field-of-view. Copyright © 2010 Wiley-Liss, Inc.

  20. MATLAB as an incentive for student learning of skills

    NASA Astrophysics Data System (ADS)

    Bank, C. G.; Ghent, R. R.

    2016-12-01

    Our course "Computational Geology" takes a holistic approach to student learning by using MATLAB as a focal point to increase students' computing, quantitative reasoning, data analysis, report writing, and teamwork skills. The course, taught since 2007 with recent enrollments around 35 and aimed at 2nd- to 3rd-year students, is required for the Geology and Earth and Environmental Systems major programs, and can be chosen as an elective in our other programs, including Geophysics. The course is divided into five projects: Pacific plate velocity from the Hawaiian hotspot track, predicting CO2 concentration in the atmosphere, volume of Earth's oceans and sea-level rise, comparing wind directions for Vancouver and Squamish, and groundwater flow. Each project is based on real data, focuses on a mathematical concept (linear interpolation, gradients, descriptive statistics, differential equations) and highlights a programming task (arrays, functions, text file input/output, curve fitting). Working in teams of three, students need to develop a conceptual model to explain the data, and write MATLAB code to visualize the data and match it to their conceptual model. The programming is guided, and students work individually on different aspects (for example: reading the data, fitting a function, unit conversion) which they need to put together to solve the problem. They then synthesize their thought process in a paper. Anecdotal evidence shows that students continue using MATLAB in other courses.

  1. Black-box modeling to estimate tissue temperature during radiofrequency catheter cardiac ablation: Feasibility study on an agar phantom model.

    PubMed

    Blasco-Gimenez, Ramón; Lequerica, Juan L; Herrero, Maria; Hornero, Fernando; Berjano, Enrique J

    2010-04-01

    The aim of this work was to study linear deterministic models to predict tissue temperature during radiofrequency cardiac ablation (RFCA) by measuring magnitudes such as electrode temperature, power and impedance between the active and dispersive electrodes. The concept involves autoregressive models with exogenous input (ARX), a particular case of the autoregressive moving average model with exogenous input (ARMAX). The values of the model parameters were determined from a least-squares fit of experimental data. The data were obtained from radiofrequency ablations conducted on agar models with different contact pressure conditions between electrode and agar (0 and 20 g) and different flow rates around the electrode (1, 1.5 and 2 L min(-1)). Half of all the ablations were chosen randomly to be used for identification (i.e. determination of model parameters) and the other half were used for model validation. The results suggest that (1) a linear model can be developed to predict tissue temperature at a depth of 4.5 mm during RF cardiac ablation by using the variables applied power, impedance and electrode temperature; (2) the best model provides a reasonably accurate estimate of tissue temperature with a 60% probability of achieving average errors smaller than 5 degrees C; (3) substantial errors (larger than 15 degrees C) were found only in 6.6% of cases and were associated with abnormal experiments (e.g. those involving displacement of the ablation electrode); and (4) the impact of measuring impedance on the overall estimate is negligible (around 1 degree C).
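The ARX idea can be sketched as follows: stack past outputs and exogenous inputs as regressors and solve a least-squares problem for the coefficients. The first-order structure, coefficient values, and synthetic data below are illustrative assumptions; the paper identifies its own model orders from agar-phantom ablation measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "tissue temperature" driven by applied power (first-order ARX;
# the true coefficients below are arbitrary, not taken from the study).
a_true, b_true = 0.9, 0.4
P = rng.uniform(0.0, 1.0, 200)                  # exogenous input: RF power
T = np.zeros(201)
for k in range(200):
    T[k + 1] = a_true * T[k] + b_true * P[k] + 0.01 * rng.standard_normal()

# ARX identification: T(k+1) = a*T(k) + b*P(k); solve min ||Phi x - y||_2.
Phi = np.column_stack([T[:-1], P])              # regressors [T(k), P(k)]
y = T[1:]                                       # target T(k+1)
(a_hat, b_hat), *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(a_hat, b_hat)                             # close to 0.9 and 0.4
```

`np.linalg.lstsq` solves the overdetermined system in one shot; with the low noise level used here the estimates recover the true coefficients to within a few per cent, mirroring the identification/validation split used in the study.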

  2. Extended probit mortality model for zooplankton against transient change of PCO(2).

    PubMed

    Sato, Toru; Watanabe, Yuji; Toyota, Koji; Ishizaka, Joji

    2005-09-01

    The direct injection of CO(2) into the deep ocean is a promising way to mitigate global warming. One of the uncertainties in this method, however, is its impact on marine organisms in the near field. Since the CO(2) concentration that organisms experience in the ocean changes with time, a biological impact model must be developed for organisms subjected to unsteady changes in CO(2) concentration. In general, the LC(50) concept is widely applied in testing a toxic agent for acute mortality. Here, we regard the probit-transformed mortality as a linear function not only of the CO(2) concentration but also of the exposure time. A simple mathematical transformation of this function gives a damage-accumulation mortality model for zooplankton. In this article, the model is validated by mortality tests of Metamphiascopsis hirsutus against transient changes of CO(2) concentration.
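The probit formulation described above can be sketched as follows for constant exposure; the link function is the standard normal CDF, and the coefficient values are illustrative assumptions, not fitted to the paper's Metamphiascopsis hirsutus data.

```python
from math import erf, sqrt

def Phi(x):
    """Standard normal CDF; Phi inverts the probit transform."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Illustrative coefficients only -- the paper fits its own values;
# units of concentration and time here are nominal.
b0, b_conc, b_time = -5.0, 0.8, 0.05

def probit_mortality(conc, hours):
    """Mortality under constant exposure: probit(m) = b0 + b1*C + b2*t."""
    return Phi(b0 + b_conc * conc + b_time * hours)

# Mortality rises with both concentration and exposure time:
lo = probit_mortality(2.0, 24.0)    # milder, shorter exposure
hi = probit_mortality(4.0, 48.0)    # stronger, longer exposure
print(lo, hi)
```

Because the probit is linear in both C and t, integrating the same linear form over a time-varying C(t) yields the damage-accumulation model the abstract refers to for transient exposures.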

  3. Hierarchical Boltzmann simulations and model error estimation

    NASA Astrophysics Data System (ADS)

    Torrilhon, Manuel; Sarna, Neeraj

    2017-08-01

    A hierarchical simulation approach for Boltzmann's equation should provide a single numerical framework in which a coarse representation can be used to compute gas flows as accurately and efficiently as in computational fluid dynamics, while subsequent refinement allows the result to be successively improved toward the full Boltzmann solution. We use Hermite discretization, or moment equations, for the steady linearized Boltzmann equation as a proof of concept for such a framework. All representations of the hierarchy are rotationally invariant, and the numerical method is formulated on fully unstructured triangular and quadrilateral meshes using an implicit discontinuous Galerkin formulation. We demonstrate the performance of the numerical method on model problems, which in particular highlights the relevance of the stability of boundary conditions on curved domains. The hierarchical nature of the method also allows model error estimates to be obtained by comparing subsequent representations. We present various model errors for a flow through a curved channel with obstacles.

  4. Nonlinear Dynamic Modeling and Controls Development for Supersonic Propulsion System Research

    NASA Technical Reports Server (NTRS)

    Connolly, Joseph W.; Kopasakis, George; Paxson, Daniel E.; Stuber, Eric; Woolwine, Kyle

    2012-01-01

    This paper covers the propulsion system component modeling and controls development of an integrated nonlinear dynamic simulation for an inlet and engine that can be used for an overall vehicle (APSE) model. The focus here is on developing a methodology for the propulsion model integration, which allows for controls design that prevents inlet instabilities and minimizes the thrust oscillation experienced by the vehicle. Limiting thrust oscillations will be critical to avoid exciting vehicle aeroelastic modes. Model development includes both inlet normal shock position control and engine rotor speed control for a potential supersonic commercial transport. A loop shaping control design process is used that has previously been developed for the engine and verified on linear models, while a simpler approach is used for the inlet control design. Verification of the modeling approach is conducted by simulating a two-dimensional bifurcated inlet and a representative J-85 jet engine previously used in a NASA supersonics project. Preliminary results are presented for the current supersonics project concept variable cycle turbofan engine design.

  5. A biomechanical testing system to determine micromotion between hip implant and femur accounting for deformation of the hip implant: Assessment of the influence of rigid body assumptions on micromotions measurements.

    PubMed

    Leuridan, Steven; Goossens, Quentin; Roosen, Jorg; Pastrav, Leonard; Denis, Kathleen; Mulier, Michiel; Desmet, Wim; Vander Sloten, Jos

    2017-02-01

    Accurate pre-clinical evaluation of the initial stability of new cementless hip stems using in vitro micromotion measurements is an important step in the design process to assess a new stem's potential. Several measuring systems, linear variable displacement transducer (LVDT)-based and others, require assuming the bone or implant to be rigid in order to obtain micromotion values or to calculate derived quantities such as relative implant tilting. An alternative LVDT-based measuring system not requiring a rigid body assumption was developed in this study. The system combined the advantages of local unidirectional and frame-and-bracket micromotion measuring concepts. The influence of adopting a rigid body assumption, and the possible errors it would introduce, were quantified. Furthermore, as the system allowed emulating local unidirectional and frame-and-bracket systems, the influence of adopting rigid body assumptions was also analyzed for both concepts. Synthetic and embalmed bone models were tested in combination with primary and revision implants. Single-legged stance phase loading was applied to the implant-bone constructs. Adopting a rigid body assumption resulted in an overestimation of mediolateral micromotion of up to 49.7 μm at the more distal measuring locations. The maximal average relative rotational motion was overestimated by 0.12° around the anteroposterior axis. Frontal and sagittal tilting calculations based on a unidirectional measuring concept underestimated the true tilting by an order of magnitude. Non-rigid behavior is a factor that should not be dismissed in micromotion stability evaluations of primary and revision femoral implants. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. A Framework for Linear and Non-Linear Registration of Diffusion-Weighted MRIs Using Angular Interpolation

    PubMed Central

    Duarte-Carvajalino, Julio M.; Sapiro, Guillermo; Harel, Noam; Lenglet, Christophe

    2013-01-01

    Registration of diffusion-weighted magnetic resonance images (DW-MRIs) is a key step for population studies, or construction of brain atlases, among other important tasks. Given the high dimensionality of the data, registration is usually performed by relying on scalar representative images, such as the fractional anisotropy (FA) and non-diffusion-weighted (b0) images, thereby ignoring much of the directional information conveyed by DW-MR datasets itself. Alternatively, model-based registration algorithms have been proposed to exploit information on the preferred fiber orientation(s) at each voxel. Models such as the diffusion tensor or orientation distribution function (ODF) have been used for this purpose. Tensor-based registration methods rely on a model that does not completely capture the information contained in DW-MRIs, and largely depends on the accurate estimation of tensors. ODF-based approaches are more recent and computationally challenging, but also better describe complex fiber configurations thereby potentially improving the accuracy of DW-MRI registration. A new algorithm based on angular interpolation of the diffusion-weighted volumes was proposed for affine registration, and does not rely on any specific local diffusion model. In this work, we first extensively compare the performance of registration algorithms based on (i) angular interpolation, (ii) non-diffusion-weighted scalar volume (b0), and (iii) diffusion tensor image (DTI). Moreover, we generalize the concept of angular interpolation (AI) to non-linear image registration, and implement it in the FMRIB Software Library (FSL). We demonstrate that AI registration of DW-MRIs is a powerful alternative to volume and tensor-based approaches. In particular, we show that AI improves the registration accuracy in many cases over existing state-of-the-art algorithms, while providing registered raw DW-MRI data, which can be used for any subsequent analysis. PMID:23596381

  7. Effect of Intensity-Modulated Pelvic Radiotherapy on Second Cancer Risk in the Postoperative Treatment of Endometrial and Cervical Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zwahlen, Daniel R.; Department of Radiation Oncology, University Hospital Zurich, Zurich; Ruben, Jeremy D.

    2009-06-01

    Purpose: To estimate and compare intensity-modulated radiotherapy (IMRT) with three-dimensional conformal radiotherapy (3DCRT) in terms of second cancer risk (SCR) for postoperative treatment of endometrial and cervical cancer. Methods and Materials: To estimate SCR, the organ equivalent dose concept with a linear-exponential, a plateau, and a linear dose-response model was applied to dose distributions calculated in a planning computed tomography scan of a 68-year-old woman. Three plans were computed: four-field 18-MV 3DCRT and nine-field IMRT with 6- and 18-MV photons. SCR was estimated as a function of target dose (50.4 Gy/28 fractions) in organs of interest according to the International Commission on Radiological Protection. Results: Cumulative SCR relative to 3DCRT was +6% (3% for a plateau model, -4% for a linear model) for 6-MV IMRT and +26% (25%, 4%) for the 18-MV IMRT plan. For an organ within the primary beam, SCR was +12% (0%, -12%) for 6-MV and +5% (-2%, -7%) for 18-MV IMRT. 18-MV IMRT increased SCR 6-7 times for organs away from the primary beam relative to 3DCRT and 6-MV IMRT. Skin SCR increased by 22-37% for 6-MV and 50-69% for 18-MV IMRT inasmuch as a larger volume of skin was exposed. Conclusion: Cancer risk after IMRT for cervical and endometrial cancer is dependent on treatment energy. 6-MV pelvic IMRT represents a safe alternative with respect to SCR relative to 3DCRT, independently of the dose-response model. 18-MV IMRT produces secondary neutrons that modestly increase the SCR.
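The organ equivalent dose (OED) concept referenced here averages the voxel doses \(D_i\) (voxel volumes \(V_i\), organ volume \(V\)) under each dose-response model; the forms below are the standard ones from Schneider and co-workers' OED literature, which this abstract appears to follow, not formulas stated in the abstract itself:

```latex
\mathrm{OED}_{\text{linear}} = \frac{1}{V}\sum_i V_i D_i, \qquad
\mathrm{OED}_{\text{lin-exp}} = \frac{1}{V}\sum_i V_i D_i\, e^{-\alpha D_i}, \qquad
\mathrm{OED}_{\text{plateau}} = \frac{1}{V}\sum_i V_i\,\frac{1 - e^{-\delta D_i}}{\delta}
```

The linear model weights all doses equally, while the linear-exponential and plateau models discount high-dose voxels to account for cell killing, which is why the three models rank the plans differently in the results above.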

  8. A Framework for Linear and Non-Linear Registration of Diffusion-Weighted MRIs Using Angular Interpolation.

    PubMed

    Duarte-Carvajalino, Julio M; Sapiro, Guillermo; Harel, Noam; Lenglet, Christophe

    2013-01-01

    Registration of diffusion-weighted magnetic resonance images (DW-MRIs) is a key step for population studies, or construction of brain atlases, among other important tasks. Given the high dimensionality of the data, registration is usually performed by relying on scalar representative images, such as the fractional anisotropy (FA) and non-diffusion-weighted (b0) images, thereby ignoring much of the directional information conveyed by the DW-MR datasets themselves. Alternatively, model-based registration algorithms have been proposed to exploit information on the preferred fiber orientation(s) at each voxel. Models such as the diffusion tensor or orientation distribution function (ODF) have been used for this purpose. Tensor-based registration methods rely on a model that does not completely capture the information contained in DW-MRIs, and largely depend on the accurate estimation of tensors. ODF-based approaches are more recent and computationally challenging, but also better describe complex fiber configurations, thereby potentially improving the accuracy of DW-MRI registration. A new algorithm based on angular interpolation of the diffusion-weighted volumes was proposed for affine registration, and does not rely on any specific local diffusion model. In this work, we first extensively compare the performance of registration algorithms based on (i) angular interpolation, (ii) non-diffusion-weighted scalar volume (b0), and (iii) diffusion tensor image (DTI). Moreover, we generalize the concept of angular interpolation (AI) to non-linear image registration, and implement it in the FMRIB Software Library (FSL). We demonstrate that AI registration of DW-MRIs is a powerful alternative to volume and tensor-based approaches. In particular, we show that AI improves the registration accuracy in many cases over existing state-of-the-art algorithms, while providing registered raw DW-MRI data, which can be used for any subsequent analysis.

  9. Smith predictor based-sliding mode controller for integrating processes with elevated deadtime.

    PubMed

    Camacho, Oscar; De la Cruz, Francisco

    2004-04-01

    An approach to control integrating processes with elevated deadtime using a Smith predictor sliding mode controller is presented. A PID sliding surface and an integrating first-order plus deadtime model have been used to synthesize the controller. Since the performance of existing controllers with a Smith predictor decreases in the presence of modeling errors, this paper presents a simple approach to combining the Smith predictor with the sliding mode concept, which is a proven, simple, and robust procedure. The proposed scheme has a set of tuning equations as a function of the characteristic parameters of the model. For implementation of our proposed approach, computer-based industrial controllers that execute PID algorithms can be used. The performance and robustness of the proposed controller are compared with the Matausek-Micić scheme for linear systems using simulations.
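The Smith predictor structure described above can be sketched in a toy discrete simulation; a plain PI law stands in here for the paper's PID-surface sliding mode controller, and the integrating-plus-deadtime plant, gains, and deadtime are all invented for the sketch.

```python
# Toy discrete illustration of the Smith predictor structure: the feedback
# signal is the undelayed internal model plus the mismatch between the real
# plant and the delayed model. A plain PI law stands in for the paper's
# sliding-mode controller; plant, gains and deadtime are invented.
dt, K, delay = 0.1, 1.0, 20          # step size, plant gain, deadtime in steps
setpoint, kp, ki = 1.0, 2.0, 0.5
y = ym = integ = 0.0                 # plant, internal model, integrator states
u_hist = [0.0] * delay               # control history (realizes the deadtime)
ym_hist = [0.0] * delay              # delayed copy of the internal model
ys = []
for _ in range(600):
    # Smith predictor feedback: undelayed model + (plant - delayed model)
    fb = ym + (y - ym_hist[0])
    e = setpoint - fb
    integ += e * dt
    u = kp * e + ki * integ
    y += K * u_hist.pop(0) * dt      # integrating plant sees delayed input
    u_hist.append(u)
    ym_hist.pop(0)
    ym_hist.append(ym)
    ym += K * u * dt                 # undelayed internal model
    ys.append(y)
# With a perfect model the deadtime is removed from the loop and y settles
# at the setpoint; model mismatch is what motivates the sliding-mode design.
```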

  10. Linear wide angle sun sensor for spinning satellites

    NASA Astrophysics Data System (ADS)

    Philip, M. P.; Kalakrishnan, B.; Jain, Y. K.

    1983-08-01

    A concept is developed which overcomes the defects of the nonlinearity of response and limitation in range exhibited by the V-slit, N-slit, and crossed-slit sun sensors normally used for sun elevation angle measurements on spinning spacecraft. Two versions of sensors based on this concept, which give a linear output and have a range of nearly ±90 deg of elevation angle, are examined. Results are presented for the application of the twin-slit version of the sun sensor in the three Indian satellites Rohini, Apple, and Bhaskara II, where it was successfully used for spin rate control and spin axis orientation corrections as well as for sun elevation angle and spin period measurements.

  11. FAST Modularization Framework for Wind Turbine Simulation: Full-System Linearization

    DOE PAGES

    Jonkman, Jason M.; Jonkman, Bonnie J.

    2016-10-03

    The wind engineering community relies on multiphysics engineering software to run nonlinear time-domain simulations, e.g., for design-standards-based loads analysis. Although most physics involved in wind energy are nonlinear, linearization of the underlying nonlinear system equations is often advantageous to understand the system response and to exploit well-established methods and tools for analyzing linear systems. This paper presents the development and verification of the new linearization functionality of the open-source engineering tool FAST v8 for land-based wind turbines, as well as the concepts and mathematical background needed to understand and apply it correctly.
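The idea of linearizing nonlinear system equations about an operating point can be illustrated generically; the central-difference Jacobian below and the toy damped-pendulum model are illustrative stand-ins, not FAST v8's actual linearization routine.

```python
import numpy as np

# Generic numerical linearization of x_dot = f(x, u) about an operating
# point: A = df/dx, B = df/du via central differences. A toy damped
# pendulum stands in for the wind turbine model.
def linearize(f, x0, u0, eps=1e-6):
    n, m = len(x0), len(u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

# Pendulum: theta_dot = omega, omega_dot = -sin(theta) - 0.1*omega + u
f = lambda x, u: np.array([x[1], -np.sin(x[0]) - 0.1 * x[1] + u[0]])
A, B = linearize(f, np.array([0.0, 0.0]), np.array([0.0]))
# About the downward equilibrium, A ~ [[0, 1], [-1, -0.1]], B ~ [[0], [1]],
# a linear state-space model usable with standard linear-systems tools.
```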

  12. Linear systems on balancing chemical reaction problem

    NASA Astrophysics Data System (ADS)

    Kafi, R. A.; Abdillah, B.

    2018-01-01

    The concept of linear systems appears in a variety of applications. This paper presents a small sample of the wide variety of real-world problems that arise in the study of linear systems. We show that the problem of balancing chemical reactions can be described by homogeneous linear systems. The solution of the systems is obtained by performing elementary row operations. The obtained solution gives the coefficients of the balanced chemical reaction. In addition, we present a computational calculation to show that mathematical software such as Matlab can be used to simplify solving the systems, instead of manually using row operations.
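The homogeneous-system formulation above can be sketched in a few lines; here the null space is found with NumPy's SVD rather than the paper's manual row operations or Matlab, and the reaction (H2 + O2 → H2O) is a standard textbook example, not one from the paper.

```python
import numpy as np

# Balancing a reaction as a homogeneous linear system A x = 0.
# Toy reaction (not from the paper): H2 + O2 -> H2O.
# Rows = elements (H, O); columns = species (H2, O2, H2O), with the
# product column negated so element conservation reads A x = 0.
A = np.array([
    [2.0, 0.0, -2.0],   # hydrogen balance
    [0.0, 2.0, -1.0],   # oxygen balance
])

# The 1-D null space of A is spanned by the last right-singular vector.
_, _, vt = np.linalg.svd(A)
null_vec = vt[-1]

# Rescale to the smallest positive integer coefficients.
coeffs = null_vec / np.min(np.abs(null_vec[null_vec != 0]))
coeffs = np.round(coeffs).astype(int)
if coeffs[0] < 0:
    coeffs = -coeffs
# coeffs now reads: 2 H2 + 1 O2 -> 2 H2O
```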

  13. FAST modularization framework for wind turbine simulation: full-system linearization

    NASA Astrophysics Data System (ADS)

    Jonkman, J. M.; Jonkman, B. J.

    2016-09-01

    The wind engineering community relies on multiphysics engineering software to run nonlinear time-domain simulations, e.g., for design-standards-based loads analysis. Although most physics involved in wind energy are nonlinear, linearization of the underlying nonlinear system equations is often advantageous to understand the system response and to exploit well-established methods and tools for analyzing linear systems. This paper presents the development and verification of the new linearization functionality of the open-source engineering tool FAST v8 for land-based wind turbines, as well as the concepts and mathematical background needed to understand and apply it correctly.

  14. LCFIPlus: A framework for jet analysis in linear collider studies

    NASA Astrophysics Data System (ADS)

    Suehara, Taikan; Tanabe, Tomohiko

    2016-02-01

    We report on the progress in flavor identification tools developed for a future e+e- linear collider such as the International Linear Collider (ILC) and Compact Linear Collider (CLIC). Building on the work carried out by the LCFIVertex collaboration, we employ new strategies in vertex finding and jet finding, and introduce new discriminating variables for jet flavor identification. We present the performance of the new algorithms in the conditions simulated using a detector concept designed for the ILC. The algorithms have been successfully used in ILC physics simulation studies, such as those presented in the ILC Technical Design Report.

  15. Relationship between mathematical abstraction in learning parallel coordinates concept and performance in learning analytic geometry of pre-service mathematics teachers: an investigation

    NASA Astrophysics Data System (ADS)

    Nurhasanah, F.; Kusumah, Y. S.; Sabandar, J.; Suryadi, D.

    2018-05-01

    As one of the non-conventional mathematics concepts, Parallel Coordinates has the potential to be learned by pre-service mathematics teachers in order to give them experience in constructing richer schemes and performing the abstraction process. Unfortunately, research related to this issue is still limited. This study addresses the research question “to what extent can the abstraction process of pre-service mathematics teachers in learning the concept of Parallel Coordinates indicate their performance in learning Analytic Geometry”. This is a case study that is part of a larger study examining the mathematical abstraction of pre-service mathematics teachers in learning a non-conventional mathematics concept. Descriptive statistics are used in this study to analyze the scores from three different tests: Cartesian Coordinate, Parallel Coordinates, and Analytic Geometry. The participants in this study consist of 45 pre-service mathematics teachers. The results show that there is a linear association between the scores on Cartesian Coordinate and Parallel Coordinates. Higher levels of the abstraction process in learning Parallel Coordinates are also linearly associated with higher student achievement in Analytic Geometry. These results indicate that the concept of Parallel Coordinates plays a significant role for pre-service mathematics teachers in learning Analytic Geometry.

  16. A Comparison Study between a Traditional and Experimental Program.

    ERIC Educational Resources Information Center

    Dogan, Hamide

    This paper is part of a dissertation defended in January 2001 as part of the author's Ph.D. requirement. The study investigated the effects of the use of Mathematica, a computer algebra system, in learning basic linear algebra concepts. It was done by means of comparing two first-year linear algebra classes, one traditional and one Mathematica…

  17. A Practical Approach to Inquiry-Based Learning in Linear Algebra

    ERIC Educational Resources Information Center

    Chang, J.-M.

    2011-01-01

    Linear algebra has become one of the most useful fields of mathematics over the last decade, yet students still have trouble seeing the connection between some of the abstract concepts and real-world applications. In this article, we propose the use of thought-provoking questions in lesson designs to allow two-way communication between instructors…

  18. Aircraft automatic-flight-control system with inversion of the model in the feed-forward path using a Newton-Raphson technique for the inversion

    NASA Technical Reports Server (NTRS)

    Smith, G. A.; Meyer, G.; Nordstrom, M.

    1986-01-01

    A new automatic flight control system concept suitable for aircraft with highly nonlinear aerodynamic and propulsion characteristics, which must operate over a wide flight envelope, was investigated. This exact model follower inverts a complete nonlinear model of the aircraft as part of the feed-forward path. The inversion is accomplished by a Newton-Raphson trim of the model at each digital computer cycle time of 0.05 seconds. The combination of the inverse model and the actual aircraft in the feed-forward path allows the translational and rotational regulators in the feedback path to be easily designed by linear methods. An explanation of the model inversion procedure is presented. An extensive set of simulation data for essentially the full flight envelope of a vertical attitude takeoff and landing (VATOL) aircraft is presented. These data demonstrate the successful, smooth, and precise control that can be achieved with this concept. The trajectory includes conventional flight from 200 to 900 ft/sec with path accelerations and decelerations, altitude changes of over 6000 ft, and 2g and 3g turns. Vertical attitude maneuvering as a tail sitter along all axes is demonstrated. A transition trajectory from 200 ft/sec in conventional flight to stationary hover in the vertical attitude includes satisfactory operation through lift-curve slope reversal as attitude goes from horizontal to vertical at constant altitude. A vertical attitude takeoff from stationary hover to conventional flight is also demonstrated.
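The Newton-Raphson inversion can be illustrated on a scalar toy function; a cubic stands in for the full nonlinear aircraft model, and the generic iteration below is a sketch of the idea, not NASA's implementation.

```python
# Generic Newton-Raphson "trim": invert y = f(u) for the input u that
# produces a commanded output. A scalar cubic stands in for the full
# nonlinear aircraft model inverted in the feed-forward path.
def newton_trim(f, dfdu, target, u0, tol=1e-9, max_iter=50):
    u = u0
    for _ in range(max_iter):
        err = f(u) - target
        if abs(err) < tol:
            break
        u -= err / dfdu(u)   # Newton step driving f(u) toward target
    return u

# Command the output 8 from f(u) = u**3; the trim converges to u = 2.
u_trim = newton_trim(lambda u: u**3, lambda u: 3 * u**2, 8.0, u0=1.5)
```

In the flight-control context, one such trim is solved every 0.05-second computer cycle so the inverse model tracks the commanded trajectory.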

  19. Lessons learned from the SLC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phinney, N.

    The SLAC Linear Collider (SLC) is the first example of an entirely new type of lepton collider. Many years of effort were required to develop the understanding and techniques needed to approach design luminosity. This paper discusses some of the key issues and problems encountered in producing a working linear collider. These include the polarized source, techniques for emittance preservation, extensive feedback systems, and refinements in beam optimization in the final focus. The SLC experience has been invaluable for testing concepts and developing designs for a future linear collider.

  20. Development of bioelectrical impedance-derived indices of fat and fat-free mass for assessment of nutritional status in childhood.

    PubMed

    Wright, C M; Sherriff, A; Ward, S C G; McColl, J H; Reilly, J J; Ness, A R

    2008-02-01

    (1) To develop a method of manipulating bioelectrical impedance (BIA) that gives indices of lean and fat adjusted for body size, using a large normative cohort of children. (2) To assess the discriminant validity of the method in a group of children likely to have abnormal body composition. Two prospective cohort studies. Normative data: Avon Longitudinal Study of Parents and Children (ALSPAC), a population-based cohort; proof of concept study: tertiary feeding clinic and special needs schools. Normative data: 7576 children measured at ages between 7.25 and 8.25 years (mean 7.5, s.d. 0.2); proof of concept study: 29 children with either major neurodisability or receiving artificial feeding, or both, mean age 7.6 (s.d. 2) years. Leg-to-leg (ZT) and arm-to-leg (ZB) BIA, weight and height. Total body water (TBW) was estimated from the resistance index (RI = height²/Z), and fat-free mass was linearly related to TBW. Fat mass was obtained by subtracting fat-free mass from total weight. Fat-free mass was log-transformed and the reciprocal transform was taken for fat mass to satisfy parametric model assumptions. Lean and fat mass were then adjusted for height and age using multiple linear regression models. The resulting standardized residuals gave the lean index and fat index, respectively. In the normative cohort, the lean index was higher and the fat index lower in boys. The lean index rose steeply to the middle of the normal range of body mass index (BMI) and then slowly for higher BMI values, whereas the fat index rose linearly through and above the normal range. In the proof of concept study, the children as a group had low lean indices (mean (s.d.) -1.5 (1.7)) with average fat indices (+0.21 (2.0)) despite relatively low BMI standard deviation scores (-0.60 (2.3)), but for any given BMI, individual children had extremely wide ranges of fat indices. The lean index proved more stable and repeatable than BMI. This clinical method of handling BIA reveals important variations in nutritional status that would not be detected using anthropometry alone. BIA used in this way would allow more accurate assessment of energy sufficiency in children with neurodisability and may provide a more valid identification of children at risk of underweight or obesity in field and clinical settings.
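The residual-adjustment step, regressing the transformed mass on height and age and taking standardized residuals as the index, can be sketched with simulated data; the regression form and every number below are assumptions for illustration, not the ALSPAC fit.

```python
import numpy as np

# Sketch of the index construction: adjust log(fat-free mass) for height
# and age by multiple linear regression, then take standardized residuals
# as the "lean index". All data and coefficients here are simulated.
rng = np.random.default_rng(0)
n = 500
height = rng.normal(1.25, 0.06, n)                 # m, invented distribution
age = rng.uniform(7.25, 8.25, n)                   # years
log_lean = 2.0 * np.log(height) + 0.05 * age + rng.normal(0.0, 0.05, n)

X = np.column_stack([np.ones(n), np.log(height), age])  # intercept + covariates
beta, *_ = np.linalg.lstsq(X, log_lean, rcond=None)
resid = log_lean - X @ beta                        # size-adjusted lean mass
lean_index = (resid - resid.mean()) / resid.std()  # standardized residuals
```

By construction the index has mean 0 and s.d. 1 in the reference cohort, so an individual child's value reads directly as a z-score relative to peers of the same height and age.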

  1. Indoor radio channel modeling and mitigation of fading effects using linear and circular polarized antennas in combination for smart home system at 868 MHz

    NASA Astrophysics Data System (ADS)

    Wunderlich, S.; Welpot, M.; Gaspard, I.

    2014-11-01

    The markets for smart home products and services are expected to grow over the next years, driven by the increasing demands of homeowners regarding energy monitoring, management, environmental controls and security. Many of these new systems will be installed in existing homes and offices and will therefore use radio-based systems to reduce cost. A drawback of radio-based systems in indoor environments is fading effects, which lead to a high variance of the received signal strength and thereby to a difficult predictability of the encountered path loss of the various communication links. For that reason it is necessary to derive a statistical path loss model which can be used to plan a reliable and cost-effective radio network. This paper presents the results of a measurement campaign, which was performed in six buildings to deduce realistic radio channel models for a high variety of indoor radio propagation scenarios in the short range devices (SRD) band at 868 MHz. Furthermore, a potential concept to reduce the variance of the received signal strength, using a circular polarized (CP) patch antenna in combination with a linear polarized antenna in a one-to-one communication link, is presented.
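A statistical path loss model of the kind derived from such campaigns is commonly written as log-distance path loss with log-normal shadowing; the sketch below uses that standard form with invented parameter values, not the measured 868 MHz coefficients.

```python
import math
import random

# Log-distance path loss with log-normal shadowing, the standard
# statistical form fitted in indoor measurement campaigns. All parameter
# values (reference loss, exponent, shadowing sigma) are placeholders.
def path_loss_db(d_m, pl0_db=30.0, d0_m=1.0, n_exp=3.0, sigma_db=6.0, rng=None):
    shadowing = rng.gauss(0.0, sigma_db) if rng else 0.0
    return pl0_db + 10.0 * n_exp * math.log10(d_m / d0_m) + shadowing

# Deterministic part: doubling the distance adds 10*n*log10(2) ~ 9 dB at n = 3;
# the shadowing term models the fading-induced variance the paper targets.
delta_db = path_loss_db(20.0) - path_loss_db(10.0)
```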

  2. Multiple correlation analyses of metabolic and endocrine profiles with fertility in primiparous and multiparous cows.

    PubMed

    Wathes, D C; Bourne, N; Cheng, Z; Mann, G E; Taylor, V J; Coffey, M P

    2007-03-01

    Results from 4 studies were combined (representing a total of 500 lactations) to investigate the relationships between metabolic parameters and fertility in dairy cows. Information was collected on blood metabolic traits and body condition score at 1 to 2 wk prepartum and at 2, 4, and 7 wk postpartum. Fertility traits were days to commencement of luteal activity, days to first service, days to conception, and failure to conceive. Primiparous and multiparous cows were considered separately. Initial linear regression analyses were used to determine relationships among fertility, metabolic, and endocrine traits at each time point. All metabolic and endocrine traits significantly related to fertility were included in stepwise multiple regression analyses alone (model 1), including peak milk yield and interval to commencement of luteal activity (model 2), and with the further addition of dietary group (model 3). In multiparous cows, extended calving to conception intervals were associated prepartum with greater concentrations of leptin and lesser concentrations of nonesterified fatty acids and urea, and postpartum with reduced insulin-like growth factor-I at 2 wk, greater urea at 7 wk, and greater peak milk yield. In primiparous cows, extended calving to conception intervals were associated with more body condition and more urea prepartum, elevated urea postpartum, and more body condition loss by 7 wk. In conclusion, some metabolic measurements were associated with poorer fertility outcomes. Relationships between fertility and metabolic and endocrine traits varied both according to the lactation number of the cow and with the time relative to calving.

  3. Theory of chromatic noise masking applied to testing linearity of S-cone detection mechanisms.

    PubMed

    Giulianini, Franco; Eskew, Rhea T

    2007-09-01

    A method for testing the linearity of cone combination of chromatic detection mechanisms is applied to S-cone detection. This approach uses the concept of mechanism noise, the noise as seen by a postreceptoral neural mechanism, to represent the effects of superposing chromatic noise components in elevating thresholds and leads to a parameter-free prediction for a linear mechanism. The method also provides a test for the presence of multiple linear detectors and off-axis looking. No evidence for multiple linear mechanisms was found when using either S-cone increment or decrement tests. The results for both S-cone test polarities demonstrate that these mechanisms combine their cone inputs nonlinearly.

  4. GPU-accelerated element-free reverse-time migration with Gauss points partition

    NASA Astrophysics Data System (ADS)

    Zhou, Zhen; Jia, Xiaofeng; Qiang, Xiaodong

    2018-06-01

    An element-free method (EFM) has been demonstrated successfully in elasticity, heat conduction and fatigue crack growth problems. We present the theory of the EFM and its numerical applications in seismic modelling and reverse time migration (RTM). Compared with the finite difference method and the finite element method, the EFM has unique advantages: (1) independence of grids in computation and (2) lower expense and more flexibility (because only the information on the nodes and the boundary of the concerned area is required). However, in the EFM, due to improper computation and storage of some large sparse matrices, such as the mass matrix and the stiffness matrix, the method is difficult to apply to seismic modelling and RTM for a large velocity model. To solve the problem of storage and computational efficiency, we propose a concept of Gauss points partition and utilise the graphics processing unit to improve the computational efficiency. We employ the compressed sparse row format to compress the intermediate large sparse matrices and simplify the operations by solving the linear equations with the CULA solver. To improve the computational efficiency further, we introduce the concept of the lumped mass matrix. Numerical experiments indicate that the proposed method is accurate and more efficient than the regular EFM.
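The compressed sparse row (CSR) storage mentioned above can be illustrated with a toy matrix; the three-array layout and the matrix-vector product below are the standard CSR scheme, not the paper's GPU implementation.

```python
import numpy as np

# Minimal illustration of the compressed sparse row (CSR) layout used to
# compress large sparse matrices (toy matrix, not the paper's mass or
# stiffness data). CSR stores only nonzeros plus row pointers.
dense = np.array([
    [4.0, 0.0, 0.0],
    [0.0, 5.0, 2.0],
    [0.0, 0.0, 3.0],
])

data, indices, indptr = [], [], [0]
for row in dense:
    nz = np.nonzero(row)[0]
    data.extend(row[nz])      # nonzero values, row by row
    indices.extend(nz)        # their column indices
    indptr.append(len(data))  # where each row's entries end

# Matrix-vector product y = A @ x using only the three CSR arrays.
def csr_matvec(data, indices, indptr, x):
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

y_result = csr_matvec(data, indices, indptr, np.array([1.0, 1.0, 1.0]))
```

For a matrix with mostly zero entries, the three arrays are far smaller than the dense array, which is what makes large-model EFM matrices tractable in GPU memory.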

  5. Ground signature extrapolation of three-dimensional near-field CFD predictions for several HSCT configurations

    NASA Technical Reports Server (NTRS)

    Siclari, M. J.

    1992-01-01

    A CFD analysis of the near-field sonic boom environment of several low-boom High-Speed Civil Transport (HSCT) concepts is presented. The CFD method utilizes a multi-block Euler marching code within the context of an innovative mesh topology that allows for the resolution of shock waves several body lengths from the aircraft. Three-dimensional pressure footprints at one body length below three different low-boom aircraft concepts are presented. Models of two concepts designed by NASA to cruise at Mach 2 and Mach 3 were built and tested in the wind tunnel. The third concept was designed by Boeing to cruise at Mach 1.7. Centerline and sideline samples of these footprints are then extrapolated to the ground using a linear waveform parameter method to estimate the ground signatures, or sonic boom ground overpressure levels. The Mach 2 concept achieved its centerline design signature but indicated higher sideline booms due to the outboard wing crank of the configuration. Nacelles are also included on two of NASA's low-boom concepts. Computations are carried out for both flow-through nacelles and nacelles with engine exhaust simulation. The flow-through nacelles, with the assumption of zero spillage and zero inlet lip radius, showed very little effect on the sonic boom signatures. On the other hand, it was shown that the engine exhaust plumes can have an effect on the levels of overpressure reaching the ground depending on the engine operating conditions. The results of this study indicate that engine integration into a low-boom design should be given some attention.

  6. The coefficient of determination R2 and intra-class correlation coefficient from generalized linear mixed-effects models revisited and expanded.

    PubMed

    Nakagawa, Shinichi; Johnson, Paul C D; Schielzeth, Holger

    2017-09-01

    The coefficient of determination R2 quantifies the proportion of variance explained by a statistical model and is an important summary statistic of biological interest. However, estimating R2 for generalized linear mixed models (GLMMs) remains challenging. We have previously introduced a version of R2 that we called [Formula: see text] for Poisson and binomial GLMMs, but not for other distributional families. Similarly, we earlier discussed how to estimate intra-class correlation coefficients (ICCs) using Poisson and binomial GLMMs. In this paper, we generalize our methods to all other non-Gaussian distributions, in particular to the negative binomial and gamma distributions that are commonly used for modelling biological data. While expanding our approach, we highlight two useful concepts for biologists, Jensen's inequality and the delta method, both of which help us in understanding the properties of GLMMs. Jensen's inequality has important implications for the biologically meaningful interpretation of GLMMs, whereas the delta method allows a general derivation of the variance associated with non-Gaussian distributions. We also discuss some special considerations for binomial GLMMs with binary or proportion data. We illustrate the implementation of our extension with worked examples from the field of ecology and evolution in the R environment. However, our method can be used across disciplines and regardless of statistical environment. © 2017 The Author(s).

  7. A new method of estimating thermal performance of embryonic development rate yields accurate prediction of embryonic age in wild reptile nests.

    PubMed

    Rollinson, Njal; Holt, Sarah M; Massey, Melanie D; Holt, Richard C; Nancekivell, E Graham; Brooks, Ronald J

    2018-05-01

    Temperature has a strong effect on ectotherm development rate. It is therefore possible to construct predictive models of development that rely solely on temperature, which have applications in a range of biological fields. Here, we leverage a reference series of development stages for embryos of the turtle Chelydra serpentina, which was described at a constant temperature of 20 °C. The reference series acts to map each distinct developmental stage onto embryonic age (in days) at 20 °C. By extension, an embryo taken from any given incubation environment, once staged, can be assigned an equivalent age at 20 °C. We call this concept "Equivalent Development", as it maps the development stage of an embryo incubated at a given temperature to its equivalent age at a reference temperature. In the laboratory, we used the concept of Equivalent Development to estimate the development rate of embryos of C. serpentina across a series of constant temperatures. Using these estimates of development rate, we created a thermal performance curve measured in units of Equivalent Development (TPC_ED). We then used the TPC_ED to predict the developmental stage of embryos in several natural turtle nests across six years. We found that 85% of the variation in development stage in natural nests could be explained. Further, we compared the predictive accuracy of the model based on the TPC_ED to that of a degree-day model, where development is assumed to be linearly related to temperature and the amount of accumulated heat is summed over time. Information theory suggested that the model based on the TPC_ED better describes variation in developmental stage in wild nests than the degree-day model. We suggest the concept of Equivalent Development has several strengths and can be broadly applied.
In particular, studies on temperature-dependent sex determination may be facilitated by the concept of Equivalent Development, as development age maps directly onto the developmental series of the organism, allowing critical periods of sex determination to be delineated without invasive sampling, even under fluctuating temperature. Copyright © 2018 Elsevier Ltd. All rights reserved.
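The degree-day baseline the authors compare against accumulates heat above a development threshold, assuming development is linear in temperature; the threshold and daily temperature record in the sketch below are invented for illustration.

```python
# Degree-day development model: sum daily heat above a development
# threshold, assuming development rate is linear in temperature.
# Threshold and the daily nest-temperature record are invented.
def degree_days(daily_mean_temps_c, threshold_c=10.0):
    return sum(max(t - threshold_c, 0.0) for t in daily_mean_temps_c)

nest_temps = [18.0, 22.0, 9.0, 25.0]   # hypothetical nest record, deg C
accumulated = degree_days(nest_temps)  # 8 + 12 + 0 + 15 = 35 degree-days
```

The linearity assumption is exactly what a thermal performance curve relaxes: real development rate flattens and declines near thermal limits rather than rising indefinitely with temperature.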

  8. Noninvasive Classification of Hepatic Fibrosis Based on Texture Parameters From Double Contrast-Enhanced Magnetic Resonance Images

    PubMed Central

    Bahl, Gautam; Cruite, Irene; Wolfson, Tanya; Gamst, Anthony C.; Collins, Julie M.; Chavez, Alyssa D.; Barakat, Fatma; Hassanein, Tarek; Sirlin, Claude B.

    2016-01-01

    Purpose: To demonstrate a proof of concept that quantitative texture feature analysis of double contrast-enhanced magnetic resonance imaging (MRI) can classify fibrosis noninvasively, using histology as a reference standard. Materials and Methods: A Health Insurance Portability and Accountability Act (HIPAA)-compliant Institutional Review Board (IRB)-approved retrospective study of 68 patients with diffuse liver disease was performed at a tertiary liver center. All patients underwent double contrast-enhanced MRI, with histopathology-based staging of fibrosis obtained within 12 months of imaging. The MaZda software program was used to compute 279 texture parameters for each image. A statistical regularization technique, generalized linear model (GLM)-path, was used to develop a model based on texture features for dichotomous classification of fibrosis category (F ≤2 vs. F ≥3) of the 68 patients, with histology as the reference standard. The model's performance was assessed and cross-validated. There was no additional validation performed on an independent cohort. Results: Cross-validated sensitivity, specificity, and total accuracy of the texture feature model in classifying fibrosis were 91.9%, 83.9%, and 88.2%, respectively. Conclusion: This study shows proof of concept that accurate, noninvasive classification of liver fibrosis is possible by applying quantitative texture analysis to double contrast-enhanced MRI. Further studies are needed in independent cohorts of subjects. PMID:22851409

  9. Temporal diagnostic analysis of the SWAT model to detect dominant periods of poor model performance

    NASA Astrophysics Data System (ADS)

    Guse, Björn; Reusser, Dominik E.; Fohrer, Nicola

    2013-04-01

    Hydrological models generally include thresholds and non-linearities, such as snow-rain-temperature thresholds, non-linear reservoirs, infiltration thresholds and the like. When relating observed variables to modelling results, formal methods often calculate performance metrics over long periods, reporting model performance with only a few numbers. Such approaches are not well suited to comparing dominating processes between reality and model and to better understanding when thresholds and non-linearities are driving model results. We present a combination of two temporally resolved model diagnostic tools to answer when a model is performing (not so) well and what the dominant processes are during these periods. We look at the temporal dynamics of parameter sensitivities and model performance to answer this question. For this, the eco-hydrological SWAT model is applied in the Treene lowland catchment in Northern Germany. As a first step, temporal dynamics of parameter sensitivities are analyzed using the Fourier Amplitude Sensitivity Test (FAST). The sensitivities of the eight model parameters investigated show strong temporal variations. High sensitivities were detected most of the time for two groundwater parameters (GW_DELAY, ALPHA_BF) and one evaporation parameter (ESCO). The periods of high parameter sensitivity can be related to different phases of the hydrograph, with dominance of the groundwater parameters in the recession phases and of ESCO in baseflow and resaturation periods. Surface runoff parameters show high sensitivities in phases of a precipitation event in combination with high soil water contents. The dominant parameters indicate the controlling processes during a given period for the hydrological catchment. The second step included the temporal analysis of model performance. For each time step, model performance was characterized with a "finger print" consisting of a large set of performance measures.
These finger prints were clustered into four reoccurring patterns of typical model performance, which can be related to different phases of the hydrograph. Overall, the baseflow cluster has the lowest performance. By combining the periods of poor model performance with the dominant model components during these phases, the groundwater module was identified as the model part with the highest potential for model improvement. The detection of dominant processes in periods of poor model performance enhances the understanding of the SWAT model. Based on this, concepts for improving the SWAT model structure for application in German lowland catchments are derived.

  10. Linear Cowden nevus: a new distinct epidermal nevus.

    PubMed

    Happle, Rudolf

    2007-01-01

    Within the group of epidermal nevi, a so far nameless disorder is described under the term "linear Cowden nevus". This non-organoid epidermal nevus is caused by loss of heterozygosity, occurring at an early developmental stage in an embryo with a germline PTEN mutation, giving rise to Cowden disease. Hence, linear Cowden nevus can be categorized as a characteristic feature of type 2 segmental Cowden disease. Until now, several authors had mistaken this epidermal nevus as a manifestation of Proteus syndrome. The concept of linear Cowden nevus implies that Proteus syndrome is by no means caused by PTEN mutations. As a clinical difference, linear Cowden nevus is markedly papillomatous and thick, whereas linear Proteus nevus tends to be rather flat. Moreover, the spectrum of possibly associated cutaneous or extracutaneous anomalies differs in the two types of nevi. In conclusion, linear Cowden nevus, that may also be called "linear PTEN nevus", represents a distinct clinicogenetic entity.

  11. The effect of fertility treatment on adverse perinatal outcomes in women aged at least 40 years.

    PubMed

    Harlev, Avi; Walfisch, Asnat; Oran, Eynan; Har-Vardi, Iris; Friger, Michael; Lunenfeld, Eitan; Levitas, Eliahu

    2018-01-01

    To compare perinatal outcomes between spontaneous conception and assisted reproductive technologies (ART) among patients of advanced maternal age. The present retrospective study included data from singleton pregnancies of women aged at least 40 years who delivered between January 1, 1991, and December 31, 2013, at Soroka University Medical Center, Beer Sheva, Israel. Demographic, obstetric, and perinatal data were compared between pregnancies conceived with ART (in vitro fertilization [IVF] or ovulation induction) and those conceived spontaneously. Multiple regression models were used to define independent predictors of adverse outcomes. A total of 8244 singleton pregnancies were included; 229 (2.8%) were conceived by IVF, 86 (1.0%) by ovulation induction, and 7929 (96.2%) spontaneously. Preterm delivery (P<0.001), fetal growth restriction (FGR) (P<0.001), and cesarean delivery (P<0.001) demonstrated linear associations with the conception mode; the highest rates for each were observed for IVF, with decreasing rates for ovulation induction and spontaneous conception. The incidence of gestational diabetes and hypertensive disorders was highest among pregnancies following ART. No association was observed between conception mode and perinatal mortality. Multivariate logistic regression demonstrated that IVF was independently associated with increased odds of preterm delivery (P<0.001) and FGR (P=0.027) compared with spontaneous conception. Among patients of advanced maternal age, ART were independently associated with increased FGR and preterm delivery rates compared with spontaneous pregnancies; perinatal mortality was comparable. © 2017 International Federation of Gynecology and Obstetrics.

  12. Climatic vs. tectonic control on glacial relief

    NASA Astrophysics Data System (ADS)

    Prasicek, Günther; Herman, Frederic; Robl, Jörg

    2017-04-01

    The limiting effect of a climatically-induced glacial buzz-saw on the height of mountain ranges has been extensively discussed in the geosciences. The buzz-saw concept assumes that climate alone controls the amount of topography present above the equilibrium line altitude (ELA), while the rock uplift rate plays no relevant role. This view is supported by analyses of hypsometric patterns in orogens worldwide. Furthermore, numerical landscape evolution models show that glacial erosion modifies the hypsometry and reduces the overall relief of mountain landscapes. However, such models often do not incorporate tectonic uplift and can only simulate glacial erosion over a limited amount of time, typically one or several glacial cycles. Constraints on glacial end-member landscapes from analytical, time-independent models are widely lacking. Here we present a steady-state solution for a glacier equilibrium profile in an active orogen, modified from the mathematical conception presented by Headley et al. (2012). Our approach combines a glacial erosion law with the shallow ice approximation, specifically the formulations of ice sliding and deformation velocities and ice flux, to calculate ice surface and bed topography from prescribed specific mass balance and rock uplift rate. This solution allows the application of both linear and non-linear erosion laws and can be iteratively fitted to a predefined gradient of specific mass balance with elevation. We tested the influence of climate (fixed rock uplift rate, different ELAs) and tectonic forcing (fixed ELA, different rock uplift rates) on steady-state relief. Our results show that, similar to fluvial orogens, both climate and rock uplift rate exert a strong influence on glacial relief and that the relation between rock uplift rate and relief is governed by the glacial erosion law. This finding can provide an explanation for the presence of high relief at high latitudes. Headley, R.M., Roe, G., Hallet, B., 2012. 
Glacier longitudinal profiles in regions of active uplift. Earth and Planetary Science Letters, 317-318, 354-362.

  13. An emergentist vs a linear approach to social change processes: a gender look in contemporary India between modernity and Hindu tradition.

    PubMed

    Condorelli, Rosalia

    2015-01-01

    Using Census of India data from 1901 to 2011 and national and international reports on women's condition in India, beginning with sex ratio trends according to regional distribution up to female infanticides, sex-selective abortions and dowry deaths, this study examines the sociological aspects of the gender imbalance in modern contemporary India. The persistence of gender inequality in India shows that new values and structures do not necessarily lead to the disappearance of older forms; they can co-exist with mutual adaptations and reinforcements. Data analysis suggests that these unexpected combinations are not comprehensible in light of a linear concept of social change, which is founded, in turn, on a concept of social systems as linear interaction systems that respond to environmental perturbations through proportional cause-and-effect relationships. From this perspective, behavioral attitudes and interaction relationships should become proportionally less regulated by traditional values and practices as exposure to modernizing influences increases, and progressive decreases should be found in rates of social indicators of gender inequality such as dowry deaths (with the inverse found in sex ratio trends). However, the data do not confirm these trends. This finding points to a new theoretical and methodological approach to the study of social systems, namely the conception of social systems as complex adaptive systems and the consequent emergentist, nonlinear conception of social change processes. Within the framework of the emergentist theory of social change it is possible to understand the lasting strength of the patriarchal tradition and its problematic consequences in modern contemporary India.

  14. Supervised linear dimensionality reduction with robust margins for object recognition

    NASA Astrophysics Data System (ADS)

    Dornaika, F.; Assoum, A.

    2013-01-01

    Linear Dimensionality Reduction (LDR) techniques have become increasingly important in computer vision and pattern recognition because they permit a relatively simple mapping of data onto a lower-dimensional subspace, leading to simple and computationally efficient classification strategies. Recently, many linear discriminant methods have been developed in order to reduce the dimensionality of visual data and to enhance the discrimination between different groups or classes. Many existing linear embedding techniques rely on the use of local margins to achieve good discrimination performance. However, dealing with outliers and within-class diversity has not been addressed by margin-based embedding methods. In this paper, we explore the use of different margin-based linear embedding methods. More precisely, we propose to use the concepts of Median miss and Median hit for building robust margin-based criteria. Based on such margins, we seek the projection directions (linear embedding) such that the sum of local margins is maximized. Our proposed approach has been applied to the problem of appearance-based face recognition. Experiments performed on four public face databases show that the proposed approach can give better generalization performance than the classic Average Neighborhood Margin Maximization (ANMM). Moreover, thanks to the use of robust margins, the proposed method degrades gracefully when label outliers contaminate the training data set. In particular, we show that the concept of Median hit was crucial for obtaining robust performance in the presence of outliers.
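The Median hit / Median miss idea can be sketched in a few lines: for each sample, take the median distance to same-class samples (the hit) and to other-class samples (the miss), and use their difference as a robust margin. The toy data, dimensions, and planted label outlier below are illustrative assumptions, not the paper's face-image features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated toy classes with one mislabelled point (an outlier).
X = np.vstack([rng.normal(0.0, 0.5, size=(20, 3)),
               rng.normal(3.0, 0.5, size=(20, 3))])
y = np.array([0] * 20 + [1] * 20)
y[0] = 1  # label outlier: a class-0 point tagged as class 1

def median_margin(X, y, i):
    """Robust margin of sample i: Median miss minus Median hit.

    Medians of the hit/miss distances are insensitive to a few label
    outliers, unlike average-based margins such as ANMM's.
    """
    d = np.linalg.norm(X - X[i], axis=1)
    hit = np.median(d[(y == y[i]) & (np.arange(len(y)) != i)])
    miss = np.median(d[y != y[i]])
    return miss - hit

margins = np.array([median_margin(X, y, i) for i in range(len(y))])
# Well-labelled points get positive margins; the outlier does not, so a
# criterion summing such margins is barely perturbed by it.
print(round(float(margins[0]), 2), round(float(margins[1]), 2))
```
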

  15. On the derivation of linear irreversible thermodynamics for classical fluids

    PubMed Central

    Theodosopulu, M.; Grecos, A.; Prigogine, I.

    1978-01-01

    We consider the microscopic derivation of the linearized hydrodynamic equations for an arbitrary simple fluid. Our discussion is based on the concept of hydrodynamical modes, and use is made of the ideas and methods of the theory of subdynamics. We also show that this analysis leads to the Gibbs relation for the entropy of the system. PMID:16592516

  16. Development as a Complex Process of Change: Conception and Analysis of Projects, Programs and Policies

    ERIC Educational Resources Information Center

    Nordtveit, Bjorn Harald

    2010-01-01

    Development is often understood as a linear process of change towards Western modernity, a vision that is challenged by this paper, arguing that development efforts should rather be connected to the local stakeholders' sense of their own development. Further, the paper contends that Complexity Theory is more effective than a linear theory of…

  17. Design sensitivity analysis of nonlinear structural response

    NASA Technical Reports Server (NTRS)

    Cardoso, J. B.; Arora, J. S.

    1987-01-01

    A unified theory of design sensitivity analysis of linear and nonlinear structures for shape, nonshape and material selection problems is described. The concepts of reference volume and adjoint structure are used to develop the unified viewpoint. A general formula for design sensitivity analysis is derived. Simple analytical linear and nonlinear examples are used to interpret the various terms of the formula and to demonstrate its use.

  18. Universal Linear Motor Driven Leg Press Dynamometer and Concept of Serial Stretch Loading.

    PubMed

    Hamar, Dušan

    2015-08-24

    This paper deals with the background and principles of a universal linear-motor-driven leg press dynamometer and the concept of serial stretch loading. The device is based on two computer-controlled linear motors mounted on horizontal rails. As the motors can maintain either a constant resistance force at a selected position or a constant velocity in both directions, the system allows simulation of any mode of muscle contraction. In addition, it can also generate defined serial stretch stimuli in the form of repeated force peaks. This is achieved by short segments of reversed velocity (in the concentric phase) or acceleration (in the eccentric phase). Such stimuli, generated at a rate of 10 Hz, have proven to be a more efficient means of improving the rate of force development. This capability not only affects performance in many sports, but also plays a substantial role in the prevention of falls and their consequences. A universal linear-motor-driven, computer-controlled dynamometer with the unique capability of generating serial stretch stimuli appears to be an efficient and useful tool for enhancing the effects of strength training on neuromuscular function, not only in athletes but also in the senior population and in rehabilitation patients.
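The serial stretch stimulus described above can be sketched as a force-time profile: a constant base resistance with brief force peaks repeated at 10 Hz. All numbers below (base force, peak amplitude, pulse width, loop rate) are invented for illustration, not specifications of the device:

```python
import numpy as np

# Sketch of a "serial stretch" resistance profile: constant base force
# plus short half-sine force peaks at 10 Hz, as would result from brief
# velocity reversals of the linear motors. All constants are assumptions.
fs = 1000                       # control-loop rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)   # one second of a contraction phase
base_force = 400.0              # N, constant resistance (assumed)
peak = 150.0                    # N, amplitude of each stretch stimulus
stim_rate = 10                  # Hz, rate of the serial stretch stimuli
width = 0.02                    # s, duration of each force peak

force = np.full_like(t, base_force)
for k in range(stim_rate):                    # one peak every 100 ms
    onset = k / stim_rate
    mask = (t >= onset) & (t < onset + width)
    # half-sine pulse superimposed on the base force
    force[mask] += peak * np.sin(np.pi * (t[mask] - onset) / width)

print(round(float(force.max()), 1), round(float(force.min()), 1))
```
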

  19. Visual scan-path analysis with feature space transient fixation moments

    NASA Astrophysics Data System (ADS)

    Dempere-Marco, Laura; Hu, Xiao-Peng; Yang, Guang-Zhong

    2003-05-01

    The study of eye movements provides useful insight into the cognitive processes underlying visual search tasks. The analysis of the dynamics of eye movements has often been approached from a purely spatial perspective. In many cases, however, it may not be possible to define meaningful or consistent dynamics without considering the features underlying the scan paths. In this paper, the feature space is defined through the concept of visual similarity and non-linear low-dimensional embedding, which maps the image space into a low-dimensional feature manifold that preserves the intrinsic similarity of image patterns. This has enabled the definition of perceptually meaningful features without the use of domain-specific knowledge. Based on this, the paper introduces a new concept called Feature Space Transient Fixation Moments (TFM) and uses it to tackle the problem of feature-space representation of visual search. We demonstrate the practical value of this concept for characterizing the dynamics of eye movements in goal-directed visual search tasks. We also illustrate how this model can be used to elucidate the fundamental steps involved in skilled search tasks through the evolution of transient fixation moments.

  20. The Concept of Leadership Programmed

    DTIC Science & Technology

    1973-01-01

    ...instruction media. The concept of leadership was programmed in a constructed-response, linear format containing 129 frames. This self-instructional...just removed from his teens. In the combat setting the young rifle platoon leader will be exposed for the first time to many strenuous situations and...

  1. Proceedings of the International Cryocooler Conference (7th) Held in Santa Fe, New Mexico on 17-19 November 1992. Part 4

    DTIC Science & Technology

    1993-04-01

    ...and Long Life Applications, Stirling Cryocoolers, Pulse Tube Refrigerators, Novel Concepts and Component Development, Low Temperature Regenerator Development, and J-T and...LINEARIZED PULSE TUBE CRYOCOOLER THEORY...

  2. Backward Transfer: An Investigation of the Influence of Quadratic Functions Instruction on Students' Prior Ways of Reasoning about Linear Functions

    ERIC Educational Resources Information Center

    Hohensee, Charles

    2014-01-01

    The transfer of learning has been the subject of much scientific inquiry in the social sciences. However, mathematics education research has given little attention to a subclass called backward transfer, which is when learning about new concepts influences learners' ways of reasoning about previously encountered concepts. This study examined when…

  3. Effects of Prior Knowledge and Concept-Map Structure on Disorientation, Cognitive Load, and Learning

    ERIC Educational Resources Information Center

    Amadieu, Franck; van Gog, Tamara; Paas, Fred; Tricot, Andre; Marine, Claudette

    2009-01-01

    This study explored the effects of prior knowledge (high vs. low; HPK and LPK) and concept-map structure (hierarchical vs. network; HS and NS) on disorientation, cognitive load, and learning from non-linear documents on "the infection process of a retrograde virus (HIV)". Participants in the study were 24 adults. Overall subjective ratings of…

  4. An Entertaining Method of Teaching Concepts of Linear Light Propagation, Reflection and Refraction Using a Simple Optical Mechanism

    ERIC Educational Resources Information Center

    Yurumezoglu, K.

    2009-01-01

    An activity has been designed for the purpose of teaching how light is dispersed in a straight line and about the interaction between matter and light as well as the related concepts of shadows, partial shadows, reflection, refraction, primary colours and complementary (secondary) colours, and differentiating the relationship between colours, all…

  5. Development of Thermally Actuated, High-Temperature Composite Morphing Concepts

    DTIC Science & Technology

    2016-05-11

    Thermally Actuated, High-Temperature Composite Morphing Concepts. Contract number EOARD 14-0063; grant number FA9550-14-1-0063. ...mismatched thermal expansion coefficients. However, current bimorphs are generally limited to benign temperatures and linear temperature-displacement...temperature morphing structures. Successful application of this work may yield morphing hot structures in extreme environments. A particularly appealing...

  6. Development of Thermally Actuated, High Temperature Composite Morphing Concepts

    DTIC Science & Technology

    2016-03-31

    Thermally Actuated, High-Temperature Composite Morphing Concepts. Contract number EOARD 14-0063; grant number FA9550-14-1-0063. ...mismatched thermal expansion coefficients. However, current bimorphs are generally limited to benign temperatures and linear temperature-displacement...temperature morphing structures. Successful application of this work may yield morphing hot structures in extreme environments. A particularly appealing...

  7. A KLM-circuit model of a multi-layer transducer for acoustic bladder volume measurements.

    PubMed

    Merks, E J W; Borsboom, J M G; Bom, N; van der Steen, A F W; de Jong, N

    2006-12-22

    In a preceding study, a new technique to non-invasively measure bladder volume on the basis of non-linear wave propagation was validated. It was shown that the harmonic level generated at the posterior bladder wall increases for larger bladder volumes. A dedicated transducer is needed to further verify and implement this approach. This transducer must be capable both of transmitting high-pressure waves at the fundamental frequency and of receiving up to the third harmonic. For this purpose, a multi-layer transducer was constructed using a single-element PZT transducer for transmission and a PVDF top layer for reception. To determine the feasibility of the multi-layer concept for bladder volume measurements, and to ensure optimal performance, an equivalent mathematical model on the basis of KLM-circuit modeling was generated. This model was obtained in two steps. First, the PZT transducer was modeled without the PVDF layer attached, by matching the model to the measured electrical input impedance; it was validated using pulse-echo measurements. Second, the model was extended with the PVDF layer. The total model was validated by treating the PVDF layer as a hydrophone on the PZT transducer surface and comparing the measured and simulated PVDF responses to a wave transmitted by the PZT transducer. The obtained results indicate that a valid model of the multi-layer transducer was constructed. The model showed the feasibility of the multi-layer concept for bladder volume measurements. It also allowed further optimization with respect to electrical matching and transmit waveform. Additionally, the model demonstrated the effect of mechanical loading of the PVDF layer on the PZT transducer.

  8. The importance of Thermo-Hydro-Mechanical couplings and microstructure to strain localization in 3D continua with application to seismic faults. Part I: Theory and linear stability analysis

    NASA Astrophysics Data System (ADS)

    Rattez, Hadrien; Stefanou, Ioannis; Sulem, Jean

    2018-06-01

    A Thermo-Hydro-Mechanical (THM) model for Cosserat continua is developed to explore the influence of frictional heating and thermal pore fluid pressurization on the strain localization phenomenon. A general framework is presented for conducting a bifurcation analysis of elasto-plastic Cosserat continua with THM couplings and predicting the onset of instability. The presence of internal lengths in Cosserat continua makes it possible to estimate the thickness of the localization zone. This is done by performing a linear stability analysis of the system and looking for the selected wavelength corresponding to the instability mode with the fastest finite growth coefficient. These concepts are applied to the study of fault zones under fast shearing. To do so, we consider a model of a sheared saturated infinite granular layer. The influence of THM couplings on the bifurcation state and the shear band width is investigated. Taking representative parameters for a centroidal fault gouge, the evolution of the thickness of the localized zone under continuous shear is studied. Furthermore, the effect of grain crushing inside the shear band is explored by varying the internal length of the constitutive law.

  9. Genomic prediction based on data from three layer lines using non-linear regression models.

    PubMed

    Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L

    2014-11-06

    Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best a similar accuracy as linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. 
This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional occurrence of large negative accuracies when the evaluated line was not included in the training dataset. Furthermore, when using a multi-line training dataset, non-linear models provided information on the genotype data that was complementary to the linear models, which indicates that the underlying data distributions of the three studied lines were indeed heterogeneous.
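The linear-versus-kernel comparison can be sketched with ridge regression against RBF kernel ridge regression, scoring by the correlation between observed and predicted values as in the study. The synthetic "marker" data, the trait model, and the hyperparameters (lam, gamma) are all illustrative assumptions, not the layer-line data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for genotype data: 80 "markers", a mildly non-linear
# trait. Sizes and the trait model are invented for illustration.
n, p = 200, 80
X = rng.normal(size=(n, p))
beta = rng.normal(size=p) / np.sqrt(p)
signal = X @ beta
y = signal + 0.5 * np.sin(3 * signal) + 0.1 * rng.normal(size=n)
Xtr, Xte, ytr, yte = X[:150], X[150:], y[:150], y[150:]

def ridge_predict(Xtr, ytr, Xte, lam=1.0):
    """Linear ridge regression: a simple stand-in for a linear model."""
    w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(Xtr.shape[1]), Xtr.T @ ytr)
    return Xte @ w

def rbf(A, B, gamma=0.01):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def kernel_ridge_predict(Xtr, ytr, Xte, lam=1.0):
    """Non-linear prediction with an RBF kernel (kernel ridge regression)."""
    alpha = np.linalg.solve(rbf(Xtr, Xtr) + lam * np.eye(len(ytr)), ytr)
    return rbf(Xte, Xtr) @ alpha

# "Accuracy" as in the study: correlation of observed and predicted values.
results = {"linear": np.corrcoef(yte, ridge_predict(Xtr, ytr, Xte))[0, 1],
           "rbf": np.corrcoef(yte, kernel_ridge_predict(Xtr, ytr, Xte))[0, 1]}
print({k: round(float(v), 3) for k, v in results.items()})
```
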

  10. Optical solitons and modulation instability analysis with (3 + 1)-dimensional nonlinear Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Inc, Mustafa; Aliyu, Aliyu Isa; Yusuf, Abdullahi; Baleanu, Dumitru

    2017-12-01

    This paper addresses the (3 + 1)-dimensional nonlinear Schrödinger equation (NLSE) that serves as a model for the propagation of optical solitons through nonlinear optical fibers. Two integration schemes are employed to study the equation: the complex envelope function ansatz and the solitary wave ansatz with Jacobi elliptic function methods. With these, we present exact dark, bright and combined dark-bright optical solitons of the model. The intensity as well as the nonlinear phase shift of the solitons are reported. The modulation instability (MI) aspects are discussed using the concept of linear stability analysis, and the MI gain is obtained. Numerical simulations of the obtained results are presented, with figures showing the physical meaning of the solutions.
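The linear-stability route to an MI gain can be illustrated on the standard (1 + 1)-dimensional focusing NLSE, a textbook special case rather than the (3 + 1)-dimensional computation of the paper. Perturbing the plane-wave solution and linearizing:

```latex
% Focusing NLSE, i u_z + \tfrac{1}{2} u_{tt} + |u|^2 u = 0, with a
% perturbed plane wave of power P_0:
u(z,t) = \left[\sqrt{P_0} + a(z,t)\right] e^{i P_0 z}, \qquad |a| \ll \sqrt{P_0}.
% Linearizing in a and substituting a \propto e^{i(K z - \Omega t)} gives
% the dispersion relation
K^2 = \frac{\Omega^2}{2}\left(\frac{\Omega^2}{2} - 2 P_0\right),
% so K is imaginary (instability) for \Omega^2 < 4 P_0, with power gain
g(\Omega) = 2\,\lvert \operatorname{Im} K \rvert
          = \lvert \Omega \rvert \sqrt{4 P_0 - \Omega^2},
\qquad g_{\max} = 2 P_0 \ \text{at} \ \Omega^2 = 2 P_0.
```

The same procedure (perturb, linearize, look for exponentially growing Fourier modes) underlies the MI analysis of the higher-dimensional equation.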

  11. Dynamic analysis of the American Maglev system. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seda-Sanabria, Y.; Ray, J.C.

    1996-06-01

    Understanding the dynamic interaction between a magnetically levitated (Maglev) vehicle and its supporting guideway is essential in the evaluation of the performance of such a system. This coupling, known as vehicle/guideway interaction (VGI), has a significant effect on system parameters such as the required magnetic suspension forces and gaps, vehicular ride quality, and guideway deflections and stresses. This report presents the VGI analyses conducted on an actual Maglev system concept definition (SCD), the American Maglev SCD, using a linear-elastic finite-element (FE) model. Particular interest was focused on comparing the ride quality of the vehicle, using two different suspension systems, and their effect on the guideway structure. The procedure and necessary assumptions in the modeling are discussed.

  12. Fault-tolerant nonlinear adaptive flight control using sliding mode online learning.

    PubMed

    Krüger, Thomas; Schnetter, Philipp; Placzek, Robin; Vörsmann, Peter

    2012-08-01

    An expanded nonlinear model inversion flight control strategy using sliding mode online learning for neural networks is presented. The proposed control strategy is implemented for a small unmanned aircraft system (UAS). This class of aircraft is very susceptible to nonlinear effects such as atmospheric turbulence, model uncertainties and, of course, system failures. These systems therefore make a sensible testbed for evaluating fault-tolerant, adaptive flight control strategies. Within this work the concept of feedback linearization is combined with feedforward neural networks to compensate for inversion errors and other nonlinear effects. Backpropagation-based adaptation laws of the network weights are used for online training. Within these adaptation laws the standard gradient descent backpropagation algorithm is augmented with the concept of sliding mode control (SMC). Implemented as a learning algorithm, this nonlinear control strategy treats the neural network as a controlled system and allows a stable, dynamic calculation of the learning rates. While considering the system's stability, this robust online learning method therefore offers a higher speed of convergence, especially in the presence of external disturbances. The SMC-based flight controller is tested and compared with the standard gradient descent backpropagation algorithm in the presence of system failures. Copyright © 2012 Elsevier Ltd. All rights reserved.
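The core idea of computing the learning rate dynamically from the current data, rather than fixing it, can be shown with a one-neuron online sketch. This is a normalized-gradient illustration in the spirit of "training as a control problem", not the authors' exact SMC adaptation law; the plant and all constants are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# One-neuron online learning: the instantaneous output error e plays the
# role of the sliding variable, and the step size is recomputed from the
# current input each step instead of being a fixed constant.
w_true = np.array([1.5, -2.0, 0.5])   # unknown "plant" parameters (invented)
w = np.zeros(3)

errors = []
for step in range(200):
    x = rng.normal(size=3)
    y = w_true @ x + 0.01 * rng.normal()   # noisy plant output
    e = w @ x - y                          # sliding variable s = e
    eta = 1.0 / (x @ x + 1e-8)             # data-dependent learning rate:
                                           # this step drives e toward 0
    w -= eta * e * x                       # gradient of e**2 / 2 is e * x
    errors.append(abs(e))

print(np.round(w, 2))
```

The data-dependent rate keeps each update contractive regardless of the input magnitude, which is the practical benefit the SMC-derived laws aim for.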

  13. Compensation of significant parametric uncertainties using sliding mode online learning

    NASA Astrophysics Data System (ADS)

    Schnetter, Philipp; Kruger, Thomas

    An augmented nonlinear inverse dynamics (NID) flight control strategy using sliding mode online learning for a small unmanned aircraft system (UAS) is presented. Because parameter identification for this class of aircraft is often not valid throughout the complete flight envelope, aerodynamic parameters used for model-based control strategies may show significant deviations. For the concept of feedback linearization this leads to inversion errors which, in combination with the distinctive susceptibility of small UAS to atmospheric turbulence, pose a demanding control task for these systems. In this work an adaptive flight control strategy using feedforward neural networks to counteract such nonlinear effects is augmented with the concept of sliding mode control (SMC). SMC learning is derived from variable structure theory; it considers a neural network and its training as a control problem. It is shown that through the dynamic calculation of the learning rates, stability can be guaranteed and robustness against external disturbances and system failures thereby increased. With the resulting higher speed of convergence, a wide range of simultaneously occurring disturbances can be compensated. The SMC-based flight controller is tested and compared to the standard gradient descent (GD) backpropagation algorithm under the influence of significant model uncertainties and system failures.

  14. Knowledge and attitudes of mental health professionals in Ireland to the concept of recovery in mental health: a questionnaire survey.

    PubMed

    Cleary, A; Dowling, M

    2009-08-01

    Recovery is the model of care presently advocated for mental health services internationally. The aim of this study was to examine the knowledge and attitudes of mental health professionals to the concept of recovery in mental health. A descriptive survey approach was adopted, and 153 health care professionals (nurses, doctors, social workers, occupational therapists and psychologists) completed an adapted version of the Recovery Knowledge Inventory. The respondents indicated their positive approach to the adoption of recovery as an approach to care in the delivery of mental health services. However, respondents were less comfortable in encouraging healthy risk taking with service users. This finding is important because therapeutic risk taking and hope are essential aspects in the creation of a care environment that promotes recovery. Respondents were also less familiar with the non-linearity of the recovery process and placed greater emphasis on symptom management and compliance with treatment. Multidisciplinary mental health care teams need to examine their attitudes and approach to a recovery model of care. The challenge for the present and into the future is to strive to equip professionals with the necessary skills in the form of information and training.

  15. Investigation of PVdF active diaphragms for synthetic jets

    NASA Astrophysics Data System (ADS)

    Bailo, Kelly C.; Brei, Diann E.; Calkins, Frederick T.

    2000-06-01

    Current research has shown that aircraft can gain significant aerodynamic performance benefits by employing active flow control (AFC). One of the enabling technologies of AFC is the synthetic jet. Synthetic jets, also known as zero-net-mass-flux actuators, act as bi-directional pumps injecting high-momentum air into the local aerodynamic flow. Previous work has concentrated on high-frequency synthetic jets based on piezoelectric active diaphragms such as Thunder actuators. Low-frequency synthetic jets present a unique challenge, requiring large displacements that current technology has difficulty meeting. Boeing is investigating novel shaped low-frequency synthetic jets that can modify the flow over fixed aircraft wings. This paper presents an initial study of two promising active diaphragm concepts: a crescent shape and an opposing-bender shape. These active diaphragms were numerically modeled using the general-purpose finite element code ABAQUS. Using the ABAQUS results, the dynamic volume change within each jet was calculated and incorporated into an analytical linear Bernoulli model to predict the velocities and pressures at the nozzle. Simulations were performed to determine trends to assist in the selection of prototype configurations. Prototypes of both diaphragm concepts were constructed from polyvinylidene fluoride and experimentally tested at Boeing with promising results.

  16. Towards Validation of an Adaptive Flight Control Simulation Using Statistical Emulation

    NASA Technical Reports Server (NTRS)

    He, Yuning; Lee, Herbert K. H.; Davies, Misty D.

    2012-01-01

    Traditional validation of flight control systems is based primarily upon empirical testing. Empirical testing is sufficient for simple systems in which (a) the behavior is approximately linear and (b) humans are in the loop and responsible for off-nominal flight regimes. A different possible concept of operation is to use adaptive flight control systems with online learning neural networks (OLNNs) in combination with a human pilot for off-nominal flight behavior (such as when a plane has been damaged). Validating these systems is difficult because the controller is changing during the flight in a nonlinear way, and because the pilot and the control system have the potential to co-adapt in adverse ways; traditional empirical methods are unlikely to provide any guarantees in this case. Additionally, the time it takes to find unsafe regions within the flight envelope using empirical testing means that the time between adaptive controller design iterations is large. This paper describes a new concept for validating adaptive control systems using methods based on Bayesian statistics. This validation framework allows the analyst to build nonlinear models with modal behavior, and to obtain an uncertainty estimate for the difference between the behaviors of the model and the system under test.
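A minimal sketch of the statistical-emulation idea: fit a Gaussian-process surrogate to a handful of "simulator" runs, then query its predictive mean and variance at untried inputs. The toy simulator, kernel length-scale, and grids below are assumptions; the paper's Bayesian validation framework is far richer:

```python
import numpy as np

def simulator(x):
    """Stand-in for an expensive simulation run (illustrative only)."""
    return np.sin(3 * x) + 0.5 * x

def rbf(a, b, ell=0.5):
    """Squared-exponential kernel with assumed length-scale ell."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

X = np.linspace(0.0, 2.0, 8)        # 8 training runs of the simulator
y = simulator(X)
Xs = np.linspace(0.0, 2.0, 50)      # query points for the emulator

K = rbf(X, X) + 1e-8 * np.eye(len(X))     # jitter for numerical stability
Ks = rbf(Xs, X)
mean = Ks @ np.linalg.solve(K, y)                            # posterior mean
var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)  # posterior var
# Large predictive variance flags inputs where the emulator is unsure --
# the regions an analyst would probe next with real simulator runs.
print(round(float(np.max(np.abs(mean - simulator(Xs)))), 4))
```
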

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calabrese, Edward J.

    This paper assesses historical reasons that may account for the marginalization of hormesis as a dose-response model in the biomedical sciences in general and toxicology in particular. The most significant and enduring explanatory factors are the early and close association of the concept of hormesis with the highly controversial medical practice of homeopathy, and the difficulty of assessing hormesis with the high-dose testing protocols that have dominated the discipline of toxicology, especially regulatory toxicology. The long-standing and intensely acrimonious conflict between homeopathy and 'traditional' medicine (allopathy) led to the exclusion of the hormesis concept from a vast array of medical- and public health-related activities including research, teaching, grant funding, publishing, professional society meetings, and regulatory initiatives of governmental agencies and their advisory bodies. Recent publications indicate that the hormetic dose-response is far more common and fundamental than the dose-response models [threshold/linear no threshold (LNT)] used in toxicology and risk assessment, and by governmental regulatory agencies in the establishment of exposure standards for workers and the general public. Acceptance of the possibility of hormesis has the potential to profoundly affect the practice of toxicology and risk assessment, especially with respect to carcinogen assessment.

  18. The modified unified interaction model: incorporation of dose-dependent localised recombination.

    PubMed

    Lavon, A; Eliyahu, I; Oster, L; Horowitz, Y S

    2015-02-01

    The unified interaction model (UNIM) was developed to simulate thermoluminescence (TL) linear/supralinear dose-response and the dependence of the supralinearity on ionisation density, i.e. particle type and energy. Before the development of the UNIM, this behaviour had eluded all types of TL modelling including conduction band/valence band (CB/VB) kinetic models. The dependence of the supralinearity on photon energy was explained in the UNIM as due to the increasing role of geminate (localised) recombination with decreasing photon/electron energy. Recently, the Ben Gurion University group has incorporated the concept of trapping centre/luminescent centre (TC/LC) spatially correlated complexes and localised/delocalised recombination into the CB/VB kinetic modelling of the LiF:Mg,Ti system. Track structure considerations are used to describe the relative population of the TC/LC complexes by an electron-hole pair or by an electron only as a function of both photon/electron energy and dose. The latter dependence was not included in the original UNIM formulation, a significant over-simplification that is herein corrected. The modified version, the M-UNIM, is then applied to the simulation of the linear/supralinear dose-response characteristics of composite peak 5 in the TL glow curve of LiF:Mg,Ti at two representative average photon/electron energies of 500 and 8 keV. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  19. Engineering online and in-person social networks to sustain physical activity: application of a conceptual model

    PubMed Central

    2013-01-01

    Background High rates of physical inactivity compromise the health status of populations globally. Social networks have been shown to influence physical activity (PA), but little is known about how best to engineer social networks to sustain PA. To improve procedures for building networks that shape PA as a normative behavior, there is a need for more specific hypotheses about how social variables influence PA. There is also a need to integrate concepts from network science with ecological concepts that often guide the design of in-person and electronically-mediated interventions. Therefore, this paper: (1) proposes a conceptual model that integrates principles from network science and ecology across in-person and electronically-mediated intervention modes; and (2) illustrates the application of this model to the design and evaluation of a social network intervention for PA. Methods/Design A conceptual model for engineering social networks was developed based on a scoping literature review of modifiable social influences on PA. The model guided the design of a cluster randomized controlled trial in which 308 sedentary adults were randomly assigned to three groups: WalkLink+: prompted and provided feedback on participants’ online and in-person social-network interactions to expand networks for PA, plus provided evidence-based online walking program and weekly walking tips; WalkLink: evidence-based online walking program and weekly tips only; Minimal Treatment Control: weekly tips only. The effects of these treatment conditions were assessed at baseline, post-program, and 6-month follow-up. The primary outcome was accelerometer-measured PA. Secondary outcomes included objectively-measured aerobic fitness, body mass index, waist circumference, blood pressure, and neighborhood walkability; and self-reported measures of the physical environment, social network environment, and social network interactions. 
The differential effects of the three treatment conditions on primary and secondary outcomes will be analyzed using general linear modeling (GLM), or generalized linear modeling if the assumptions for GLM cannot be met. Discussion Results will contribute to greater understanding of how to conceptualize and implement social networks to support long-term PA. Establishing social networks for PA across multiple life settings could contribute to cultural norms that sustain active living. Trial registration ClinicalTrials.gov NCT01142804 PMID:23945138

  20. A Feeder-Bus Dispatch Planning Model for Emergency Evacuation in Urban Rail Transit Corridors

    PubMed Central

    Wang, Yun; Yan, Xuedong; Zhou, Yu; Zhang, Wenyi

    2016-01-01

    The mobility of modern metropolises relies strongly on urban rail transit (URT) systems, and such heavy dependence means that even minor service interruptions can make the URT systems unsustainable. This study aims at optimally dispatching ground feeder-buses to coordinate with the urban rails' operation, eliminating the effect of unexpected service interruptions in URT corridors. A feeder-bus dispatch planning model is proposed for the collaborative optimization of URT and feeder-bus cooperation under emergency situations, minimizing the total evacuation cost of the feeder-buses. To solve the model, the concept of a dummy feeder-bus system is proposed to transform the non-linear model into a traditional integer linear programming (ILP) model, i.e., the traditional transportation problem. A case study of Line #2 of the Nanjing URT in China is adopted to illustrate the model application and sensitivity analyses of the key variables. The modeling results show that as the evacuation time window increases, the total evacuation cost as well as the number of dispatched feeder-buses decrease, and the dispatched feeder-buses need to operate more times along the feeder-bus line. The number of dispatched feeder-buses does not change appreciably with increases in parking spot capacity and time window, indicating that simply increasing the parking spot capacity would cause substantial waste of the emergency bus fleet. When unbalanced evacuation demand exists between stations, more feeder-buses are needed. The method of this study will contribute to improving transportation emergency management and resource allocation for URT systems. PMID:27676179
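The dummy feeder-bus device recasts dispatch as the classical balanced transportation problem, which is directly solvable as a linear program. A minimal sketch on a toy instance (the depots, stations, supplies, and costs below are invented for illustration, not from the Nanjing case study):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical evacuation instance: 2 bus depots, 3 interrupted stations
supply = [20, 30]            # bus trips available at each depot
demand = [10, 25, 15]        # bus trips required at each station (balanced: 50 = 50)
cost = np.array([[4.0, 6.0, 9.0],    # unit dispatch cost, depot i -> station j
                 [5.0, 3.0, 7.0]])

m, n = cost.shape
A_eq, b_eq = [], []
for i in range(m):                   # each depot ships exactly its supply
    row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1.0
    A_eq.append(row); b_eq.append(supply[i])
for j in range(n):                   # each station receives exactly its demand
    row = np.zeros(m * n); row[j::n] = 1.0
    A_eq.append(row); b_eq.append(demand[j])

res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None))
print(res.fun)   # minimum total evacuation cost for this toy instance
```

Because the transportation problem's constraint matrix is totally unimodular, the LP relaxation already yields integer dispatch counts, which is what makes the ILP reformulation in the abstract tractable.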

  1. A Thermodynamic Approach to Soil-Plant-Atmosphere Modeling: From Metabolic Biochemical Processes to Water-Carbon-Nitrogen Balance

    NASA Astrophysics Data System (ADS)

    Clavijo, H. W.

    2016-12-01

    Modeling the soil-plant-atmosphere continuum has been a central part of understanding the interrelationships among biogeochemical and hydrological processes. The theory behind coupling Land Surface Models (LSMs) and Dynamic Global Vegetation Models (DGVMs) is based on physical and physiological processes connected mainly by input-output interactions. This modeling framework could be improved by applying a non-equilibrium thermodynamic basis that encompasses the majority of biophysical processes in a standard fashion. This study presents an alternative model for the plant-water-atmosphere system based on energy-mass thermodynamics. The system of dynamic equations derived is based on the total entropy, the total energy balance for the plant, the biomass dynamics at the metabolic level, and the water-carbon-nitrogen fluxes and balances. One advantage of this formulation is the capability to describe the adaptation and evolution of plant dynamics as a bio-system coupled to the environment. Second, it opens a window for applications under specific conditions from the individual plant scale, to the watershed scale, to the global scale. Third, it enhances the possibility of analyzing anthropogenic impacts on the system, benefiting from the mathematical formulation and its non-linearity. This non-linear model formulation is analyzed under the concepts of qualitative system dynamics theory, for different state-space phase portraits. The attractors and sources are identified along with their stability analysis. Possible bifurcations are explored and reported. Simulations of the system dynamics under different conditions are presented. These results show strong consistency and applicability, validating the use of non-equilibrium thermodynamic theory.

  2. Hydrographic Basins Analysis Using Digital Terrain Modelling

    NASA Astrophysics Data System (ADS)

    Mihaela, Pişleagă; -Minda Codruţa, Bădăluţă; Gabriel, Eleş; Daniela, Popescu

    2017-10-01

    The paper emphasises the link between digital terrain modelling and studies of hydrographic basins, concerning the analysis of hydrological processes. Given the evolution of computing techniques and software, digital terrain modelling has become increasingly widespread and has established itself as a basic concept in many areas, owing to its many advantages. At present, most digital terrain models are derived from three alternative sources: ground surveys, photogrammetric data capture, or digitized cartographic sources. A wide range of features may be extracted from digital terrain models, such as surfaces, specific points and landmarks, and linear features, but also areal features such as drainage basins, hills, or hydrological basins. The paper highlights how to use appropriate software for the preparation of a digital terrain model, a model which is subsequently used to study hydrographic basins according to various geomorphological parameters. As a final goal, it shows the link between digital terrain modelling and the study of hydrographic basins, which can be used to optimize the correlation between the digital terrain model and hydrological processes in order to obtain results as close as possible to real field processes.

  3. Full-field fan-beam x-ray fluorescence computed tomography system design with linear-array detectors and pinhole collimation: a rapid Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Zhang, Siyuan; Li, Liang; Li, Ruizhe; Chen, Zhiqiang

    2017-11-01

    We present the design concept and initial simulations for a polychromatic full-field fan-beam x-ray fluorescence computed tomography (XFCT) device with pinhole collimators and linear-array photon counting detectors. The phantom is irradiated by a fan-beam polychromatic x-ray source filtered by copper. Fluorescent photons are stimulated and then collected by two linear-array photon counting detectors with pinhole collimators. The Compton scatter correction and the attenuation correction are applied in the data processing, and the maximum-likelihood expectation maximization algorithm is applied for the image reconstruction of XFCT. The physical modeling of the XFCT imaging system was described, and a set of rapid Monte Carlo simulations was carried out to examine the feasibility and sensitivity of the XFCT system. Different concentrations of gadolinium (Gd) and gold (Au) solutions were used as contrast agents in simulations. Results show that 0.04% of Gd and 0.065% of Au can be well reconstructed with the full scan time set at 6 min. Compared with using the XFCT system with a pencil-beam source or a single-pixel detector, using a full-field fan-beam XFCT device with linear-array detectors results in significant scanning time reduction and may satisfy requirements of rapid imaging, such as in vivo imaging experiments.
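The maximum-likelihood expectation maximization (MLEM) algorithm named in the abstract has a compact multiplicative update. A toy sketch on a hypothetical 3x3 system matrix (the forward model and concentrations are invented; a real XFCT system matrix encodes pinhole geometry, attenuation, and scatter corrections):

```python
import numpy as np

# Hypothetical forward model: y = A @ x (detector counts from voxel emissions)
A = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.7, 0.1],
              [0.1, 0.2, 0.7]])
x_true = np.array([5.0, 1.0, 3.0])   # "true" contrast-agent concentrations
y = A @ x_true                        # noise-free measurements

sens = A.sum(axis=0)                  # sensitivity term A^T 1
x = np.ones(3)                        # uniform, strictly positive initial estimate
for _ in range(500):                  # MLEM multiplicative update:
    x *= (A.T @ (y / (A @ x))) / sens # x <- x * A^T(y / Ax) / (A^T 1)

print(x)   # approaches x_true for consistent, noise-free data
```

The multiplicative form automatically preserves non-negativity of the reconstructed concentrations, one reason MLEM is the standard choice for low-count emission data such as fluorescence photons.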

  4. Thermodynamics of nuclear track chemical etching

    NASA Astrophysics Data System (ADS)

    Rana, Mukhtar Ahmed

    2018-05-01

    This brief paper presents new and useful scientific information on nuclear track chemical etching. Nuclear track etching is described here using basic concepts of thermodynamics. Enthalpy, entropy, and free energy parameters are considered for nuclear track etching. The free energy of etching is determined using etching experiments of fission fragment tracks in CR-39. The relationship between the free energy and the etching temperature is explored and found to be approximately linear; this relationship is discussed. A simple enthalpy-entropy model of chemical etching is presented. The experimental and computational results presented here are of fundamental interest in nuclear track detection methodology.

  5. Materials and construction techniques for cryogenic wind tunnel facilities for instruction/research use

    NASA Technical Reports Server (NTRS)

    Morse, S. F.; Roper, A. T.

    1975-01-01

    The results of the cryogenic wind tunnel program conducted at NASA Langley Research Center are presented to provide a starting point for the design of an instructional/research wind tunnel facility. The advantages of the cryogenic concept are discussed, and operating envelopes for a representative facility are presented to indicate the range and mode of operation. Special attention is given to the design, construction and materials problems peculiar to cryogenic wind tunnels. The control system for operation of a cryogenic tunnel is considered, and a portion of a linearized mathematical model is developed for determining the tunnel dynamic characteristics.

  6. A parallel algorithm for switch-level timing simulation on a hypercube multiprocessor

    NASA Technical Reports Server (NTRS)

    Rao, Hariprasad Nannapaneni

    1989-01-01

    The parallel approach to speeding up simulation is studied, specifically the simulation of digital LSI MOS circuitry on the Intel iPSC/2 hypercube. The simulation algorithm is based on RSIM, an event driven switch-level simulator that incorporates a linear transistor model for simulating digital MOS circuits. Parallel processing techniques based on the concepts of Virtual Time and rollback are utilized so that portions of the circuit may be simulated on separate processors, in parallel for as large an increase in speed as possible. A partitioning algorithm is also developed in order to subdivide the circuit for parallel processing.

  7. An introduction to analyzing dichotomous outcomes in a longitudinal setting: a NIDRR traumatic brain injury model systems communication.

    PubMed

    Pretz, Christopher R; Ketchum, Jessica M; Cuthbert, Jeffery P

    2014-01-01

    An untapped wealth of temporal information is captured within the Traumatic Brain Injury Model Systems National Database. Utilization of appropriate longitudinal analyses can provide an avenue toward unlocking the value of this information. This article highlights 2 statistical methods for assessing change over time when examination of noncontinuous outcomes is of interest; the focus here is on dichotomous responses. Specifically, the intent of this article is to familiarize the rehabilitation community with the application of generalized estimating equations and generalized linear mixed models as used in longitudinal studies. An introduction to each method is provided, and similarities and differences between the 2 are discussed. In addition, to reinforce the ideas and concepts embodied in each approach, we illustrate each method using examples based on data from the Rocky Mountain Regional Brain Injury System.
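For a dichotomous outcome, a generalized estimating equation with an independence working correlation reduces, for the mean model, to an ordinary logistic fit by iteratively reweighted least squares. A minimal IRLS sketch on synthetic data (the dataset and coefficients are invented; this is the fixed-effects core only, without the working-correlation or random-effects machinery of full GEE/GLMM):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: one binary outcome per observation
n = 2000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])         # intercept + covariate
beta_true = np.array([-1.0, 2.0])            # hypothetical "true" effects
p = 1.0 / (1.0 + np.exp(-X @ beta_true))
y = (rng.random(n) < p).astype(float)

beta = np.zeros(2)
for _ in range(25):                          # IRLS / Fisher scoring for logit link
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    W = mu * (1.0 - mu)                      # binomial variance function
    z = X @ beta + (y - mu) / W              # working response
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

print(beta)   # estimates approach beta_true as n grows
```

Full GEE additionally stacks these estimating equations by subject and plugs in a working correlation matrix across each subject's repeated measurements; GLMMs instead add subject-level random effects inside the linear predictor.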

  8. An Affect-Centered Model of the Psyche and its Consequences for a New Understanding of Nonlinear Psychodynamics

    NASA Astrophysics Data System (ADS)

    Ciompi, Luc

    At variance with a purely cognitivistic approach, an affect-centered model of mental functioning called `fractal affect-logic' is presented on the basis of current emotional-psychological and neurobiological research. Functionally integrated feeling-thinking-behaving programs generated by action appear in this model as the basic `building blocks' of the psyche. Affects are understood as the essential source of energy that mobilises and organises both linear and nonlinear affective-cognitive dynamics, under the influence of appropriate control parameters and order parameters. Global patterns of affective-cognitive functioning form dissipative structures in the sense of Prigogine, with affect-specific attractors and repulsors, bifurcations, high sensitivity for initial conditions and a fractal overall structure that may be represented in a complex potential landscape of variable configuration. This concept opens new possibilities of understanding normal and pathological psychodynamics and sociodynamics, with numerous practical and theoretical implications.

  9. LMI-based stability analysis of fuzzy-model-based control systems using approximated polynomial membership functions.

    PubMed

    Narimani, Mohammand; Lam, H K; Dilmaghani, R; Wolfe, Charles

    2011-06-01

    Relaxed linear-matrix-inequality-based stability conditions for fuzzy-model-based control systems with imperfect premise matching are proposed. First, the derivative of the Lyapunov function, containing the product terms of the fuzzy model and fuzzy controller membership functions, is derived. Then, in the partitioned operating domain of the membership functions, the relations between the state variables and the mentioned product terms are represented by approximated polynomials in each subregion. Next, the stability conditions containing the information of all subsystems and the approximated polynomials are derived. In addition, the concept of the S-procedure is utilized to release the conservativeness caused by considering the whole operating region for approximated polynomials. It is shown that the well-known stability conditions can be special cases of the proposed stability conditions. Simulation examples are given to illustrate the validity of the proposed approach.
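At the core of such LMI stability conditions is a Lyapunov inequality. A minimal numeric sketch of that building block for a single toy linear subsystem (not the fuzzy model of the paper, and solved directly rather than by an LMI solver): find P satisfying A^T P + P A = -Q and check P is positive definite.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable subsystem matrix (eigenvalues -1 and -2)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
Q = np.eye(2)

# Solve the Lyapunov equation A^T P + P A = -Q;
# a positive definite P certifies asymptotic stability
P = solve_continuous_lyapunov(A.T, -Q)

lmi = A.T @ P + P @ A                 # should equal -Q, i.e. negative definite
print(np.linalg.eigvalsh(P))          # all positive -> P is positive definite
print(np.linalg.eigvalsh(lmi))        # all negative -> Lyapunov inequality holds
```

In the fuzzy-model-based setting, one such inequality must hold simultaneously for every subsystem/controller pairing, which is why the conditions are posed as a family of LMIs and handed to a convex solver rather than solved one equation at a time.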

  10. Identification and stochastic control of helicopter dynamic modes

    NASA Technical Reports Server (NTRS)

    Molusis, J. A.; Bar-Shalom, Y.

    1983-01-01

    A general treatment of parameter identification and stochastic control for use on helicopter dynamic systems is presented. Rotor dynamic models, including specific applications to rotor blade flapping and the helicopter ground resonance problem, are emphasized. Dynamic systems which are governed by periodic coefficients, as well as constant coefficient models, are addressed. The dynamic systems are modeled by linear state variable equations which are used in the identification and stochastic control formulation. The pure identification problem as well as the stochastic control problem, which includes combined identification and control for dynamic systems, is addressed. The stochastic control problem includes the effect of parameter uncertainty on the solution and the concept of learning, and how this is affected by the control's dual effect. The identification formulation requires algorithms suitable for on-line use, and thus recursive identification algorithms are considered. The applications presented use the recursive extended Kalman filter for parameter identification, which has excellent convergence for systems without process noise.

  11. Social modernization and the increase in the divorce rate.

    PubMed

    Esser, H

    1993-03-01

    The author develops a micro-model of marital interactions that is used to analyze factors affecting the divorce rate in modern industrialized societies. The core of the model is the concept of production of marital gain and mutual control of this production. "The increase of divorce rates, then, is explained by a steady decrease of institutional and social embeddedness, which helps to solve this kind of an 'assurance game.' The shape of the individual risk is explained by the typical form of change of the 'production functions' of marriages within the first period of adaptation. The inconsistent results concerning women's labor market participation in linear regression models are explained as a consequence of the (theoretical and statistical) 'interaction' of decreases in embeddedness and increases in external alternatives for women." Comments are included by Karl-Dieter Opp (pp. 278-82) and Ulrich Witt (pp. 283-5). excerpt

  12. Power function decay of hydraulic conductivity for a TOPMODEL-based infiltration routine

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Endreny, Theodore A.; Hassett, James M.

    2006-11-01

    TOPMODEL rainfall-runoff hydrologic concepts are based on soil saturation processes, where soil controls on hydrograph recession have been represented by linear, exponential, and power function decay with soil depth. Although these decay formulations have been incorporated into baseflow decay and topographic index computations, only the linear and exponential forms have been incorporated into infiltration subroutines. This study develops a power function formulation of the Green and Ampt infiltration equation for the case where the power n = 1 and 2. This new function was created to represent field measurements in the New York City, USA, Ward Pound Ridge drinking water supply area, and provide support for similar sites reported by other researchers. Derivation of the power-function-based Green and Ampt model begins with the Green and Ampt formulation used by Beven in deriving an exponential decay model. Differences between the linear, exponential, and power function infiltration scenarios are sensitive to the relative difference between rainfall rates and hydraulic conductivity. Using a low-frequency 30 min design storm with 4.8 cm h-1 rain, the n = 2 power function formulation allows for a faster decay of infiltration and more rapid generation of runoff. Infiltration excess runoff is rare in most forested watersheds, and advantages of the power function infiltration routine may primarily include replication of field-observed processes in urbanized areas and numerical consistency with power function decay of baseflow and topographic index distributions. Equation development is presented within a TOPMODEL-based Ward Pound Ridge rainfall-runoff simulation.
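The effect of a power-function conductivity decay on Green and Ampt infiltration can be sketched numerically. All soil parameters and the decay form below are hypothetical; the paper's analytic n = 1, 2 derivations are not reproduced, only the qualitative mechanism (lower K at depth, less infiltration, more runoff):

```python
import numpy as np

# Hypothetical soil parameters (not from the Ward Pound Ridge site)
K0 = 1.0          # surface hydraulic conductivity, cm/h
psi = 10.0        # wetting-front suction head, cm
d_theta = 0.3     # soil moisture deficit
m = 20.0          # assumed decay length scale, cm
n = 2             # power-function decay exponent

def infiltrate(decay, t_end=0.5, dt=1e-4, F0=1e-3):
    """Euler integration of Green-Ampt capacity f = K(z) * (1 + psi*d_theta/F)."""
    F = F0                                  # cumulative infiltration, cm
    for _ in range(int(t_end / dt)):
        z = F / d_theta                     # sharp wetting-front depth
        K = K0 * decay(z)                   # conductivity at the front
        F += K * (1.0 + psi * d_theta / F) * dt
    return F

F_const = infiltrate(lambda z: 1.0)                    # uniform K baseline
F_power = infiltrate(lambda z: (1.0 + z / m) ** -n)    # power-function decay
print(F_const, F_power)   # decaying K admits less water, implying more runoff
```

This mirrors the abstract's finding that the n = 2 formulation produces a faster decay of infiltration capacity and earlier runoff generation than the uniform-K case.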

  13. ACCELERATED FITTING OF STELLAR SPECTRA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ting, Yuan-Sen; Conroy, Charlie; Rix, Hans-Walter

    2016-07-20

    Stellar spectra are often modeled and fitted by interpolating within a rectilinear grid of synthetic spectra to derive the stars’ labels: stellar parameters and elemental abundances. However, the number of synthetic spectra needed for a rectilinear grid grows exponentially with the label space dimensions, precluding the simultaneous and self-consistent fitting of more than a few elemental abundances. Shortcuts such as fitting subsets of labels separately can introduce unknown systematics and do not produce correct error covariances in the derived labels. In this paper we present a new approach—Convex Hull Adaptive Tessellation (chat)—which includes several new ideas for inexpensively generating a sufficient stellar synthetic library, using linear algebra and the concept of an adaptive, data-driven grid. A convex hull approximates the region where the data lie in the label space. A variety of tests with mock data sets demonstrate that chat can reduce the number of required synthetic model calculations by three orders of magnitude in an eight-dimensional label space. The reduction will be even larger for higher dimensional label spaces. In chat the computational effort increases only linearly with the number of labels that are fit simultaneously. Around each of these grid points in the label space an approximate synthetic spectrum can be generated through linear expansion using a set of “gradient spectra” that represent flux derivatives at every wavelength point with respect to all labels. These techniques provide new opportunities to fit the full stellar spectra from large surveys with 15–30 labels simultaneously.
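The "gradient spectra" idea, approximating a synthetic spectrum near a grid point by a first-order expansion in the labels, can be sketched with a toy spectrum model (the spectral function and label values here are invented, not from any real synthetic library):

```python
import numpy as np

wave = np.linspace(0.0, 1.0, 200)          # toy wavelength grid

def spectrum(labels):
    """Toy 'synthetic spectrum': a smooth flux vector depending on two labels."""
    a, b = labels
    return np.exp(-a * wave) * (1.0 + b * np.sin(4.0 * np.pi * wave))

anchor = np.array([1.0, 0.2])              # a grid point in label space
f0 = spectrum(anchor)

# Gradient spectra: flux derivative w.r.t. each label, by finite differences
eps = 1e-5
grad = np.array([(spectrum(anchor + eps * e) - f0) / eps
                 for e in np.eye(2)])

target = anchor + np.array([0.05, -0.02])  # nearby point in label space
approx = f0 + grad.T @ (target - anchor)   # first-order linear expansion
err = np.max(np.abs(approx - spectrum(target)))
print(err)    # small near the anchor: one anchor spectrum plus one gradient
              # spectrum per label replaces a dense local grid
```

This is the source of the linear scaling claimed in the abstract: each added label costs one extra gradient spectrum rather than multiplying the grid size.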

  14. Identification of Linear and Nonlinear Aerodynamic Impulse Responses Using Digital Filter Techniques

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.

    1997-01-01

    This paper discusses the mathematical existence and the numerically-correct identification of linear and nonlinear aerodynamic impulse response functions. Differences between continuous-time and discrete-time system theories, which permit the identification and efficient use of these functions, will be detailed. Important input/output definitions and the concept of linear and nonlinear systems with memory will also be discussed. It will be shown that indicial (step or steady) responses (such as Wagner's function), forced harmonic responses (such as Theodorsen's function or those from doublet lattice theory), and responses to random inputs (such as gusts) can all be obtained from an aerodynamic impulse response function. This paper establishes the aerodynamic impulse response function as the most fundamental, and, therefore, the most computationally efficient, aerodynamic function that can be extracted from any given discrete-time, aerodynamic system. The results presented in this paper help to unify the understanding of classical two-dimensional continuous-time theories with modern three-dimensional, discrete-time theories. First, the method is applied to the nonlinear viscous Burger's equation as an example. Next the method is applied to a three-dimensional aeroelastic model using the CAP-TSD (Computational Aeroelasticity Program - Transonic Small Disturbance) code and then to a two-dimensional model using the CFL3D Navier-Stokes code. Comparisons of accuracy and computational cost savings are presented. Because of its mathematical generality, an important attribute of this methodology is that it is applicable to a wide range of nonlinear, discrete-time problems.
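The abstract's central claim, that step and other responses follow from a discrete-time system's impulse response, can be illustrated with a toy first-order digital filter (not an aerodynamic model; the system below is invented for illustration):

```python
import numpy as np

# Toy discrete-time LTI system: y[k] = a*y[k-1] + b*u[k]
a, b = 0.9, 0.1
N = 100

def simulate(u):
    y = np.zeros(N)
    for k in range(N):
        y[k] = a * (y[k - 1] if k else 0.0) + b * u[k]
    return y

# Identify the impulse response by exciting the system with a unit pulse
impulse = np.zeros(N); impulse[0] = 1.0
h = simulate(impulse)

# The step (indicial) response is the running sum of the impulse response,
# a special case of convolving h with the input
step_from_h = np.cumsum(h)
step_direct = simulate(np.ones(N))
print(np.max(np.abs(step_from_h - step_direct)))   # agreement to float precision
```

The same convolution recovers the response to any input (harmonic forcing, gusts), which is why the impulse response is the most fundamental function extractable from a discrete-time system.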

  16. Analysis of the conceptions and expectations of students in the courses of pedagogy, administration and human resources about the discipline of science, technology and society

    NASA Astrophysics Data System (ADS)

    de Souza, Alexandre; de Oliveira Neves, Jobert; Ferreira, Orlando Rodrigues; Lúcia Costa Amaral, Carmem; Delourdes Maciel, Maria; Voelzke, Marcos Rincon; Nascimento, Rômulo Pereira

    2012-10-01

    Although included in education curricula since the 1960s, the focus on Science, Technology and Society (STS) has been poorly implemented even today. With STS set as a goal to be achieved at all levels of education in Brazil by 2014, specific actions must be undertaken to put into practice what has stalled over the years in education. From the joint efforts of teachers and students of the Masters in Teaching Science and Mathematics at the Universidade Cruzeiro do Sul comes the challenge of providing a specific discipline dealing with the concepts of STS, offered as a special elective, initially for students of Pedagogy and later, owing to the interest of some students, for the Administration and Human Resources courses of this institution. A survey of the prior conceptions about the STS triad held by students enrolled in the Special Elective Discipline Science, Technology and Society (CTS DOP) revealed great unfamiliarity with the theme. The reports reveal student conceptions that approach the linear model of development. As for expectations about the discipline, the desire to expand knowledge for possible applications in personal and professional life stands out. This research aims to evaluate the current course while identifying ways to improve and strengthen the STS movement in education.

  17. Design and system integration of the superconducting wiggler magnets for the Compact Linear Collider damping rings

    NASA Astrophysics Data System (ADS)

    Schoerling, Daniel; Antoniou, Fanouria; Bernhard, Axel; Bragin, Alexey; Karppinen, Mikko; Maccaferri, Remo; Mezentsev, Nikolay; Papaphilippou, Yannis; Peiffer, Peter; Rossmanith, Robert; Rumolo, Giovanni; Russenschuck, Stephan; Vobly, Pavel; Zolotarev, Konstantin

    2012-04-01

    To achieve high luminosity at the collision point of the Compact Linear Collider (CLIC), the normalized horizontal and vertical emittances of the electron and positron beams must be reduced to 500 and 4 nm before the beams enter the 1.5 TeV linear accelerators. An effective way to accomplish ultralow emittances with only small effects on the electron polarization is using damping rings operating at 2.86 GeV equipped with superconducting wiggler magnets. This paper describes a technical design concept for the CLIC damping wigglers.

  18. Linear aerospike engine. [for reusable single-stage-to-orbit vehicle

    NASA Technical Reports Server (NTRS)

    Kirby, F. M.; Martinez, A.

    1977-01-01

    A description is presented of a dual-fuel modular split-combustor linear aerospike engine concept. The considered engine represents an approach to an integrated engine for a reusable single-stage-to-orbit (SSTO) vehicle. The engine burns two fuels (hydrogen and a hydrocarbon) with oxygen in separate combustors. Combustion gases expand on a linear aerospike nozzle. An engine preliminary design is discussed. Attention is given to the evaluation process for selecting the optimum number of modules or divisions of the engine, aspects of cooling and power cycle balance, and details of engine operation.

  19. Optimal exponential synchronization of general chaotic delayed neural networks: an LMI approach.

    PubMed

    Liu, Meiqin

    2009-09-01

    This paper investigates the optimal exponential synchronization problem of general chaotic neural networks with or without time delays by virtue of Lyapunov-Krasovskii stability theory and the linear matrix inequality (LMI) technique. This general model, which is the interconnection of a linear delayed dynamic system and a bounded static nonlinear operator, covers several well-known neural networks, such as Hopfield neural networks, cellular neural networks (CNNs), bidirectional associative memory (BAM) networks, and recurrent multilayer perceptrons (RMLPs) with or without delays. Using the drive-response concept, time-delay feedback controllers are designed to synchronize two identical chaotic neural networks as quickly as possible. The control design equations are shown to be a generalized eigenvalue problem (GEVP) which can be easily solved by various convex optimization algorithms to determine the optimal control law and the optimal exponential synchronization rate. Detailed comparisons with existing results are made and numerical simulations are carried out to demonstrate the effectiveness of the established synchronization laws.
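    The drive-response idea behind these synchronization laws can be illustrated without any LMI machinery. The sketch below is ours, not the paper's controller: it couples two identical chaotic Lorenz systems (standing in for a generic chaotic network) through a delay-free full-state error feedback u = -k*e, and the synchronization error decays; the gain k and all numerical values are illustrative assumptions.

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """Classic Lorenz system, standing in for a generic chaotic drive system."""
    x, y, z = s
    return np.array([sigma*(y - x), x*(rho - z) - y, x*y - beta*z])

dt, steps, k = 1e-3, 30000, 30.0     # step size, horizon, feedback gain (all illustrative)
drive = np.array([1.0, 1.0, 1.0])    # drive system state
resp = np.array([-5.0, 7.0, 20.0])   # response system, different initial condition
for _ in range(steps):
    err = resp - drive               # synchronization error e
    drive = drive + dt*lorenz(drive)
    resp = resp + dt*(lorenz(resp) - k*err)   # identical dynamics plus feedback u = -k e
print(np.linalg.norm(resp - drive))  # error has decayed essentially to zero
```

    With k large enough to dominate the Jacobian of the chaotic flow, the error dynamics contract everywhere, which is the intuition the LMI conditions make rigorous.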

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karpius, Peter Joseph; Myers, Steven Charles

    This presentation is a part of the DHS LSS spectroscopy course and provides an overview of the following concepts: detector system components, intrinsic and absolute efficiency, resolution and linearity, and operational issues and limits.

  1. Analysis of a flare-director concept for an externally blown flap STOL aircraft

    NASA Technical Reports Server (NTRS)

    Middleton, D. B.

    1974-01-01

    A flare-director concept involving a thrust-required flare-guidance equation was developed and tested on a moving-base simulator. The equation gives a signal to command thrust as a linear function of the errors between the variables thrust, altitude, and altitude rate and corresponding values on a desired reference flare trajectory. During the simulator landing tests this signal drove either the horizontal command bar of the aircraft's flight director or a thrust-command dot on a head-up virtual-image display of a flare director. It was also used as the input to a simple autoflare system. An externally blown flap STOL (short take-off and landing) aircraft (with considerable stability and control augmentation) was modeled for the landing tests. The pilots considered the flare director a valuable guide for executing a proper flare-thrust program under instrument-landing conditions, but were reluctant to make any use of the head-up display when they were performing the landings visually.
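    As a rough illustration of such a guidance law (the gains and reference values below are hypothetical placeholders, not those of the study), the thrust command can be written as a linear combination of the thrust, altitude, and altitude-rate errors from the reference flare trajectory:

```python
def thrust_command(T, h, h_dot, T_ref, h_ref, h_dot_ref,
                   k_T=0.5, k_h=0.8, k_hdot=1.2):
    """Thrust command as a linear function of the errors between the current
    thrust/altitude/altitude-rate and their values on a desired reference
    flare trajectory.  The gains k_T, k_h, k_hdot are illustrative only."""
    return (T_ref + k_T*(T_ref - T) + k_h*(h_ref - h)
            + k_hdot*(h_dot_ref - h_dot))

# On the reference trajectory the errors vanish and the command is simply T_ref:
print(thrust_command(T=0.3, h=15.0, h_dot=-3.0,
                     T_ref=0.3, h_ref=15.0, h_dot_ref=-3.0))   # 0.3
```

    In the simulator, a signal of this form drove the flight-director command bar, the head-up thrust-command dot, or the autoflare input.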

  2. Photothermal nanodrugs: potential of TNF-gold nanospheres for cancer theranostics

    PubMed Central

    Shao, Jingwei; Griffin, Robert J.; Galanzha, Ekaterina I.; Kim, Jin-Woo; Koonce, Nathan; Webber, Jessica; Mustafa, Thikra; Biris, Alexandru S.; Nedosekin, Dmitry A.; Zharov, Vladimir P.

    2013-01-01

    Nanotechnology has been extensively explored for drug delivery. Here, we introduce the concept of a nanodrug based on the synergy of photothermally activated physical and biological effects in nanoparticle-drug conjugates. To prove this concept, we utilized tumor necrosis factor-alpha coated gold nanospheres (Au-TNF) heated by laser pulses. To enhance photothermal efficiency in the near-infrared window of tissue transparency, we explored slightly ellipsoidal nanoparticles, their clustering, and laser-induced nonlinear dynamic phenomena leading to amplification and spectral sharpening of photothermal and photoacoustic resonances red-shifted relative to the linear plasmonic resonances. Using a murine carcinoma model, we demonstrated higher therapy efficacy of Au-TNF conjugates compared to laser and Au-TNF alone or laser with TNF-free gold nanospheres. The photothermal activation of low-toxicity Au-TNF conjugates, which are in phase II trials in humans, with a laser approved for medical applications opens new avenues in the development of clinically relevant nanodrugs with synergistic antitumor theranostic action. PMID:23443065

  3. A multidimensional model of the effect of gravity on the spatial orientation of the monkey

    NASA Technical Reports Server (NTRS)

    Merfeld, D. M.; Young, L. R.; Oman, C. M.; Shelhamer, M. J.

    1993-01-01

    A "sensory conflict" model of spatial orientation was developed. This mathematical model was based on concepts derived from observer theory, optimal observer theory, and the mathematical properties of coordinate rotations. The primary hypothesis is that the central nervous system of the squirrel monkey incorporates information about body dynamics and sensory dynamics to develop an internal model. The output of this central model (expected sensory afference) is compared to the actual sensory afference, with the difference defined as "sensory conflict." The sensory conflict information is, in turn, used to drive central estimates of angular velocity ("velocity storage"), gravity ("gravity storage"), and linear acceleration ("acceleration storage") toward more accurate values. The model successfully predicts "velocity storage" during rotation about an earth-vertical axis. The model also successfully predicts that the time constant of the horizontal vestibulo-ocular reflex is reduced and that the axis of eye rotation shifts toward alignment with gravity following postrotatory tilt. Finally, the model predicts the bias, modulation, and decay components that have been observed during off-vertical axis rotations (OVAR).

  4. Polish Teachers' Conceptions of and Approaches to the Teaching of Linear Equations to Grade Six Students: An Exploratory Case Study

    ERIC Educational Resources Information Center

    Marschall, Gosia; Andrews, Paul

    2015-01-01

    In this article we present an exploratory case study of six Polish teachers' perspectives on the teaching of linear equations to grade six students. Data, which derived from semi-structured interviews, were analysed against an extant framework and yielded a number of commonly held beliefs about what teachers aimed to achieve and how they would…

  5. Lyapunov stability and its application to systems of ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Kennedy, E. W.

    1979-01-01

    An outline and a brief introduction to some of the concepts and implications of Lyapunov stability theory are presented. Various aspects of the theory are illustrated by the inclusion of eight examples, including the Cartesian coordinate equations of the two-body problem, linear and nonlinear (Van der Pol's equation) oscillatory systems, and the linearized Kustaanheimo-Stiefel element equations for the unperturbed two-body problem.
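    For the linear systems mentioned above, Lyapunov's direct method reduces to solving the matrix Lyapunov equation. A small numerical sketch (the example matrix is ours, not from the report):

```python
import numpy as np

# Lyapunov's direct method for a linear system xdot = A x:
# solve A^T P + P A = -Q; if P is symmetric positive definite,
# V(x) = x^T P x proves asymptotic stability.  (Example matrix is ours.)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1, -2 (stable)
Q = np.eye(2)
n = A.shape[0]
# Vectorize (row-major): M @ vec(P) = vec(A^T P + P A)
M = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
P = np.linalg.solve(M, (-Q).flatten()).reshape(n, n)
print(P)                      # [[1.25, 0.25], [0.25, 0.25]]
print(np.linalg.eigvalsh(P))  # both eigenvalues positive -> stable
```

    For the nonlinear examples (such as Van der Pol's equation), the same machinery applies to the linearization about an equilibrium, which is how local stability conclusions are drawn.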

  6. Non-Linear Editing for the Smaller College-Level Production Program, Rev. 2.0.

    ERIC Educational Resources Information Center

    Tetzlaff, David

    This paper focuses on a specific topic and contention: Non-linear editing earns its place in a liberal arts setting because it is a superior tool to teach the concepts of how moving picture discourse is constructed through editing. The paper first points out that most students at small liberal arts colleges are not going to wind up working…

  7. On the four-dimensional holoraumy of the 4D, 𝒩 = 1 complex linear supermultiplet

    NASA Astrophysics Data System (ADS)

    Caldwell, Wesley; Diaz, Alejandro N.; Friend, Isaac; Gates, S. James; Harmalkar, Siddhartha; Lambert-Brown, Tamar; Lay, Daniel; Martirosova, Karina; Meszaros, Victor A.; Omokanwaye, Mayowa; Rudman, Shaina; Shin, Daeljuck; Vershov, Anthony

    2018-04-01

    We present arguments to support the existence of weight spaces for supersymmetric field theories and identify the calculations of information about supermultiplets to define such spaces via the concept of “holoraumy.” For the first time, this is extended to the complex linear superfield by a calculation of the commutator of supercovariant derivatives on all of its component fields.

  8. A system for aerodynamic design and analysis of supersonic aircraft. Part 4: Test cases

    NASA Technical Reports Server (NTRS)

    Middleton, W. D.; Lundry, J. L.

    1980-01-01

    An integrated system of computer programs was developed for the design and analysis of supersonic configurations. The system uses linearized theory methods for the calculation of surface pressures and supersonic area rule concepts in combination with linearized theory for calculation of aerodynamic force coefficients. Interactive graphics are optional at the user's request. Representative test cases and associated program output are presented.

  9. A Position Tracking System Using MARG Sensors

    DTIC Science & Technology

    2007-12-01

    ...assumed to be zero. It was further assumed that the drift was linear. Thus, the linear drift was removed from the computed velocity to achieve more... gait cycle was able to be analyzed. One of these concepts is the theory of an American prosthesis by A. A. Mark, in which he divided the gait in

  10. A 400 KHz line rate 2048 pixel modular SWIR linear array for earth observation applications

    NASA Astrophysics Data System (ADS)

    Anchlia, Ankur; Vinella, Rosa M.; Wouters, Kristof; Gielen, Daphne; Hooylaerts, Peter; Deroo, Pieter; Ruythooren, Wouter; van der Zanden, Koen; Vermeiren, Jan; Merken, Patrick

    2015-10-01

    In this paper, we report on a family of linear imaging FPAs sensitive in the 0.9-1.7 um band, developed for high speed applications such as LIDAR, wavelength references and OCT analyzers, and also for earth observation applications. Fast linear FPAs can also be used in a wide variety of terrestrial applications, including high speed sorting, electro- and photoluminescence, and medical applications. The arrays are based on a modular ROIC design concept: modules of 512 pixels are stitched during fabrication to achieve 512, 1024 and 2048 pixel arrays. In principle, this concept can be extended to any multiple of 512 pixels, the limiting factors being the pixel yield of long InGaAs arrays and the CTE differences in the hybrid setup. Each 512-pixel module has its own on-chip digital sequencer, analog readout chain and 4 output buffers. This modular concept enables a long linear array to run at a high line rate of 400 KHz irrespective of the array length, which limits the line rate in a traditional linear array. The pixel has a pitch of 12.5 um. The detector frontend is based on a CTIA (Capacitor Trans-impedance Amplifier) with 5 selectable integration capacitors, giving a full well from 62 × 10³ e⁻ (gain0) to 40 × 10⁶ e⁻ (gain4). An auto-zero circuit limits the detector bias non-uniformity to 5-10 mV across broad intensity levels, limiting the input-referred dark signal noise to 20 e⁻ rms for Tint = 3 ms at room temperature. An on-chip CDS that follows the CTIA facilitates removal of reset/kTC noise, CTIA offsets and most of the 1/f noise. The measured noise of the ROIC is 35 e⁻ rms in gain0. At a master clock rate of 60 MHz and a minimum integration time of 1.4 us, the FPAs reach the highest line rate of 400 KHz.

  11. Experimental aerodynamic and static elastic deformation characterization of low aspect ratio flexible fixed wings applied to micro aerial vehicles

    NASA Astrophysics Data System (ADS)

    Albertani, Roberto

    The concept of micro aerial vehicles (MAVs) is for a small, inexpensive and sometimes expendable platform, flying by remote pilot, in the field or autonomously. Because they must be flown either by relatively inexperienced pilots or by autonomous control, the need for very reliable and benign flying characteristics drives the design guidelines. A class of vehicles designed by the University of Florida adopts a flexible-wing concept, featuring a carbon fiber skeleton and a thin extensible latex membrane skin. Another typical feature of MAVs is a wingspan to propeller diameter ratio of two or less, generating a substantial influence on the vehicle aerodynamics. The main objectives of this research are to elucidate and document the static elastic flow-structure interactions in terms of measurements of the aerodynamic coefficients and wings' deformation as well as to substantiate the proposed inferences regarding the influence of the wings' structural flexibility on their performance; furthermore, the research will provide experimental data to support the validation of CFD and FEA numerical models. A unique facility was developed at the University of Florida to implement a combination of a low speed wind tunnel and a visual image correlation system. The models tested in the wind tunnel were fabricated at the University MAV lab and consisted of a series of ten models with an identical geometry but differing in levels of structural flexibility and deformation characteristics. Results in terms of full-field displacements and aerodynamic coefficients from wind tunnel tests for various wind velocities and angles of attack are presented to demonstrate the deformation of the wing under steady aerodynamic load. The steady state effects of the propeller slipstream on the flexible wing's shape and its performance are also investigated.
Analytical models of the aerodynamic and propulsion characteristics are proposed based on a multi dimensional linear regression analysis of non-linear functions. Conclusions are presented regarding the effects of the wing flexibility on some of the aerodynamic characteristics, including the effects of the propeller on the vehicle characteristics. Recommendations for future work will conclude this work.
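    Multidimensional linear regression of nonlinear functions simply means solving an ordinary least-squares problem whose regressors are nonlinear transforms of the flight variables. The sketch below uses invented synthetic data and an invented basis (intercept, alpha, alpha squared, airspeed), purely to illustrate the technique:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = rng.uniform(0.0, 0.4, 200)    # angle of attack [rad] (synthetic)
V = rng.uniform(8.0, 15.0, 200)       # airspeed [m/s] (synthetic)
# "True" nonlinear relation used only to generate the synthetic data
CL = 0.1 + 4.5*alpha - 6.0*alpha**2 + 0.02*V + rng.normal(0, 1e-3, 200)
# Linear regression on nonlinear regressors: X @ b ~= CL
X = np.column_stack([np.ones_like(alpha), alpha, alpha**2, V])
b, *_ = np.linalg.lstsq(X, CL, rcond=None)
print(b)   # coefficients close to [0.1, 4.5, -6.0, 0.02]
```

    The model is linear in the unknown coefficients even though it is nonlinear in the physical variables, which is what makes the fit a single least-squares solve.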

  12. A flexible Bayesian assessment for the expected impact of data on prediction confidence for optimal sampling designs

    NASA Astrophysics Data System (ADS)

    Leube, Philipp; Geiges, Andreas; Nowak, Wolfgang

    2010-05-01

    Incorporating hydrogeological data, such as head and tracer data, into stochastic models of subsurface flow and transport helps to reduce prediction uncertainty. Considering the limited financial resources available for the data acquisition campaign, information needs towards the prediction goal should be satisfied in an efficient and task-specific manner. For finding the best one among a set of design candidates, an objective function is commonly evaluated, which measures the expected impact of data on prediction confidence, prior to their collection. An appropriate approach to this task should be stochastically rigorous, master non-linear dependencies between data, parameters and model predictions, and allow for a wide variety of different data types. Existing methods fail to fulfill all these requirements simultaneously. For this reason, we introduce a new method, denoted as CLUE (Cross-bred Likelihood Uncertainty Estimator), that derives the essential distributions and measures of data utility within a generalized, flexible and accurate framework. The method makes use of Bayesian GLUE (Generalized Likelihood Uncertainty Estimator) and extends it to an optimal design method by marginalizing over the yet unknown data values. Operating in a purely Bayesian Monte-Carlo framework, CLUE is a strictly formal information processing scheme free of linearizations. It provides full flexibility associated with the type of measurements (linear, non-linear, direct, indirect) and accounts for almost arbitrary sources of uncertainty (e.g. heterogeneity, geostatistical assumptions, boundary conditions, model concepts) via stochastic simulation and Bayesian model averaging. This helps to minimize the strength and impact of possible subjective prior assumptions that would be hard to defend prior to data collection. Our study focuses on evaluating two different uncertainty measures: (i) the expected conditional variance and (ii) the expected relative entropy of a given prediction goal. 
    The applicability and advantages are shown in a synthetic example. To this end, we consider a contaminant source posing a threat to a drinking water well in an aquifer. Furthermore, we assume uncertainty in geostatistical parameters, boundary conditions and hydraulic gradient. The two measures evaluate the sensitivity of (1) general prediction confidence and (2) the exceedance probability of a legal regulatory threshold value to the sampling locations.
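    The core preposterior computation can be sketched for a toy problem (ours, far simpler than CLUE): draw a prior ensemble, marginalize over the still-unknown data values, and average the GLUE-style conditional variance of the prediction. A more informative design (smaller measurement error) then yields a smaller expected conditional variance.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20000
theta = rng.normal(0.0, 1.0, N)     # prior ensemble; prediction goal is theta itself

def expected_cond_var(noise_sd, n_data=200):
    """Monte-Carlo preposterior analysis: average the conditional (posterior)
    variance of the prediction over data values that are themselves still
    unknown, i.e. marginalize over the data.  All numbers are illustrative."""
    ecv = 0.0
    for _ in range(n_data):
        y = rng.normal(0.0, 1.0) + rng.normal(0.0, noise_sd)  # hypothetical datum
        w = np.exp(-0.5*((y - theta)/noise_sd)**2)            # GLUE-style weights
        w /= w.sum()
        mean = np.sum(w*theta)
        ecv += np.sum(w*(theta - mean)**2)
    return ecv / n_data

ecv_precise = expected_cond_var(0.3)   # accurate sensor: strong variance reduction
ecv_noisy = expected_cond_var(3.0)     # poor sensor: little reduction from prior var 1
print(ecv_precise, ecv_noisy)
```

    Ranking candidate designs by such an expected-utility measure, before any data are collected, is exactly the optimal-design step the abstract describes.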

  13. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one-intermediate-state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
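    The role of the median function in coding a monotonicity constraint can be illustrated with a generic minmod-type limiter (a simplified sketch, not the paper's exact constraint): median(0, left difference, right difference) returns the smaller one-sided difference where the data are monotone and zero at extrema, so a piecewise linear reconstruction introduces no new oscillations.

```python
import numpy as np

def median3(a, b, c):
    """Median of three numbers; median3(0, dl, dr) equals the minmod limiter."""
    return sorted((a, b, c))[1]

def limited_slopes(u):
    """Monotonicity-preserving cell slopes for a piecewise-linear (MUSCL-type)
    reconstruction, coded with the median function.  Generic sketch only."""
    s = np.zeros_like(u)
    for i in range(1, len(u) - 1):
        dl, dr = u[i] - u[i-1], u[i+1] - u[i]
        s[i] = median3(0.0, dl, dr)   # zero slope at extrema -> no new oscillations
    return s

u = np.array([0.0, 1.0, 3.0, 4.0, 3.0])
print(limited_slopes(u))   # [0. 1. 1. 0. 0.]
```

    The last interior cell sits at a local maximum, so its slope is limited to zero; the monotone interior cells take the smaller of their one-sided differences.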

  14. Dual-energy X-ray analysis using synchrotron computed tomography at 35 and 60 keV for the estimation of photon interaction coefficients describing attenuation and energy absorption.

    PubMed

    Midgley, Stewart; Schleich, Nanette

    2015-05-01

    A novel method for dual-energy X-ray analysis (DEXA) is tested using measurements of the X-ray linear attenuation coefficient μ. The key is a mathematical model that describes elemental cross sections using a polynomial in atomic number. The model is combined with the mixture rule to describe μ for materials, using the same polynomial coefficients. Materials are characterized by their electron density Ne and statistical moments Rk describing their distribution of elements, analogous to the concept of effective atomic number. In an experiment with materials of known density and composition, measurements of μ are written as a system of linear simultaneous equations, which is solved for the polynomial coefficients. DEXA itself involves computed tomography (CT) scans at two energies to provide a system of non-linear simultaneous equations that are solved for Ne and the fourth statistical moment R4. Results are presented for phantoms containing dilute salt solutions and for a biological specimen. The experiment identifies 1% systematic errors in the CT measurements, arising from third-harmonic radiation, and 20-30% noise, which is reduced to 3-5% by pre-processing with the median filter and careful choice of reconstruction parameters. DEXA accuracy is quantified for the phantom as the mean absolute differences for Ne and R4: 0.8% and 1.0% for soft tissue and 1.2% and 0.8% for bone-like samples, respectively. The DEXA results for the biological specimen are combined with model coefficients obtained from the tabulations to predict μ and the mass energy absorption coefficient at energies of 10 keV to 20 MeV.
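    The calibration step, in which measurements are written as a system of linear simultaneous equations and solved for the polynomial coefficients, can be sketched generically. The polynomial form and all numbers below are invented for illustration and do not reproduce the paper's cross-section model:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic calibration: suppose elemental cross sections follow a polynomial
# in atomic number, sigma(Z) = c0 + c1*Z + c2*Z**2 + c3*Z**3 (coefficients invented).
c_true = np.array([0.2, 0.05, 4e-3, 1e-4])
Z = np.arange(1, 21)                         # elements Z = 1..20
A = np.vander(Z, 4, increasing=True)         # rows [1, Z, Z^2, Z^3]
sigma = A @ c_true + rng.normal(0, 1e-3, Z.size)   # "measured" values with noise
# Solve the overdetermined linear system for the polynomial coefficients
c_fit, *_ = np.linalg.lstsq(A, sigma, rcond=None)
print(c_fit)   # coefficients close to c_true
```

    In the paper the same least-squares idea is applied to measured attenuation coefficients of materials of known density and composition, via the mixture rule; the DEXA step then inverts the calibrated model for Ne and R4.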

  15. Plateletpheresis efficiency and mathematical correction of software-derived platelet yield prediction: A linear regression and ROC modeling approach.

    PubMed

    Jaime-Pérez, José Carlos; Jiménez-Castillo, Raúl Alberto; Vázquez-Hernández, Karina Elizabeth; Salazar-Riojas, Rosario; Méndez-Ramírez, Nereida; Gómez-Almaguer, David

    2017-10-01

    Advances in automated cell separators have improved the efficiency of plateletpheresis and the possibility of obtaining double products (DP). We assessed cell processor accuracy of predicted platelet (PLT) yields with the goal of a better prediction of DP collections. This retrospective proof-of-concept study included 302 plateletpheresis procedures performed on a Trima Accel v6.0 at the apheresis unit of a hematology department. Donor variables, software-predicted yield and actual PLT yield were statistically evaluated. Software prediction was optimized by linear regression analysis and its optimal cut-off to obtain a DP assessed by receiver operating characteristic (ROC) curve modeling. Three hundred and two plateletpheresis procedures were performed; on 271 (89.7%) occasions donors were men and on 31 (10.3%) women. Pre-donation PLT count had the best direct correlation with actual PLT yield (r = 0.486, P < .001). Mean software-derived values differed significantly from actual PLT yield, 4.72 × 10¹¹ vs. 6.12 × 10¹¹, respectively (P < .001). The following equation was developed to adjust these values: actual PLT yield = 0.221 + (1.254 × theoretical platelet yield). The ROC curve model showed an optimal apheresis device software prediction cut-off of 4.65 × 10¹¹ to obtain a DP, with a sensitivity of 82.2%, specificity of 93.3%, and an area under the curve (AUC) of 0.909. Trima Accel v6.0 software consistently underestimated PLT yields. A simple correction derived from linear regression analysis accurately corrected this underestimation and ROC analysis identified a precise cut-off to reliably predict a DP. © 2016 Wiley Periodicals, Inc.
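    The reported correction and cut-off are simple enough to apply directly; the sketch below just encodes the published regression equation and ROC threshold (yields in units of ×10¹¹ platelets):

```python
def corrected_yield(predicted):
    """Correct the software's predicted platelet yield (x10^11) using the
    study's linear regression: actual = 0.221 + 1.254 * predicted."""
    return 0.221 + 1.254 * predicted

def is_double_product(predicted, cutoff=4.65):
    """ROC-derived cut-off (x10^11) on the software prediction for a DP."""
    return predicted >= cutoff

print(corrected_yield(4.72))   # ~6.14, close to the observed mean of 6.12
```

    Applying the correction to the mean software prediction recovers roughly the observed mean actual yield, which is the study's point.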

  16. LINEAR - DERIVATION AND DEFINITION OF A LINEAR AIRCRAFT MODEL

    NASA Technical Reports Server (NTRS)

    Duke, E. L.

    1994-01-01

    The Derivation and Definition of a Linear Model program, LINEAR, provides the user with a powerful and flexible tool for the linearization of aircraft aerodynamic models. LINEAR was developed to provide a standard, documented, and verified tool to derive linear models for aircraft stability analysis and control law design. Linear system models define the aircraft system in the neighborhood of an analysis point and are determined by the linearization of the nonlinear equations defining vehicle dynamics and sensors. LINEAR numerically determines a linear system model using nonlinear equations of motion and a user supplied linear or nonlinear aerodynamic model. The nonlinear equations of motion used are six-degree-of-freedom equations with stationary atmosphere and flat, nonrotating earth assumptions. LINEAR is capable of extracting both linearized engine effects, such as net thrust, torque, and gyroscopic effects and including these effects in the linear system model. The point at which this linear model is defined is determined either by completely specifying the state and control variables, or by specifying an analysis point on a trajectory and directing the program to determine the control variables and the remaining state variables. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to provide easy selection of state, control, and observation variables to be used in a particular model. Thus, the order of the system model is completely under user control. Further, the program provides the flexibility of allowing alternate formulations of both the state and observation equations. Data describing the aircraft and the test case is input to the program through a terminal or formatted data files. All data can be modified interactively from case to case. 
The aerodynamic model can be defined in two ways: a set of nondimensional stability and control derivatives for the flight point of interest, or a full non-linear aerodynamic model as used in simulations. LINEAR is written in FORTRAN and has been implemented on a DEC VAX computer operating under VMS with a virtual memory requirement of approximately 296K of 8 bit bytes. Both an interactive and batch version are included. LINEAR was developed in 1988.
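    The central operation of a tool like LINEAR, numerical linearization of nonlinear equations of motion about an analysis point, can be sketched with central finite differences. The toy pendulum dynamics below are ours, not the program's six-degree-of-freedom model:

```python
import numpy as np

def f(x, u):
    """Toy nonlinear dynamics (a damped pendulum with torque input), standing
    in for the vehicle equations of motion."""
    theta, omega = x
    return np.array([omega, -9.81*np.sin(theta) - 0.1*omega + u[0]])

def linearize(f, x0, u0, eps=1e-6):
    """Central-difference linearization: xdot ~= f(x0,u0) + A (x-x0) + B (u-u0)."""
    n, m = len(x0), len(u0)
    A = np.zeros((n, n)); B = np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2*eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2*eps)
    return A, B

A, B = linearize(f, np.array([0.0, 0.0]), np.array([0.0]))
print(A)   # ~ [[0, 1], [-9.81, -0.1]]
print(B)   # ~ [[0], [1]]
```

    LINEAR performs the analogous computation on the full nonlinear equations of motion and aerodynamic model, producing the state and observation matrices of the linear system model.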

  17. Modeling soybean canopy resistance from micrometeorological and plant variables for estimating evapotranspiration using one-step Penman-Monteith approach

    NASA Astrophysics Data System (ADS)

    Irmak, Suat; Mutiibwa, Denis; Payero, Jose; Marek, Thomas; Porter, Dana

    2013-12-01

    Canopy resistance (rc) is one of the most important variables in evapotranspiration, agronomy, hydrology and climate change studies that link vegetation response to changing environmental and climatic variables. This study investigates the concept of generalized nonlinear/linear modeling approach of rc from micrometeorological and plant variables for soybean [Glycine max (L.) Merr.] canopy at different climatic zones in Nebraska, USA (Clay Center, Geneva, Holdrege and North Platte). Eight models estimating rc as a function of different combination of micrometeorological and plant variables are presented. The models integrated the linear and non-linear effects of regulating variables (net radiation, Rn; relative humidity, RH; wind speed, U3; air temperature, Ta; vapor pressure deficit, VPD; leaf area index, LAI; aerodynamic resistance, ra; and solar zenith angle, Za) to predict hourly rc. The most complex rc model has all regulating variables and the simplest model has only Rn, Ta and RH. The rc models were developed at Clay Center in the growing season of 2007 and applied to other independent sites and years. The predicted rc for the growing seasons at four locations were then used to estimate actual crop evapotranspiration (ETc) as a one-step process using the Penman-Monteith model and compared to the measured data at all locations. The models were able to account for 66-93% of the variability in measured hourly ETc across locations. Models without LAI generally underperformed and underestimated due to overestimation of rc, especially during full canopy cover stage. Using vapor pressure deficit or relative humidity in the models had similar effect on estimating rc. The root squared error (RSE) between measured and estimated ETc was about 0.07 mm h-1 for most of the models at Clay Center, Geneva and Holdrege. At North Platte, RSE was above 0.10 mm h-1. 
The results at different sites and different growing seasons demonstrate the robustness and consistency of the models in estimating soybean rc, which is encouraging towards the general application of one-step estimation of soybean canopy ETc in practice using the Penman-Monteith model and could aid in enhancing the utilization of the approach by irrigation and water management community.
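    Once rc is predicted, the one-step Penman-Monteith estimate of latent heat flux is a single formula. A minimal sketch using standard constant approximations (the rc models themselves are not reproduced here, and all input values are illustrative):

```python
import numpy as np

def penman_monteith_le(Rn, G, Ta, VPD, ra, rc):
    """One-step Penman-Monteith latent heat flux (W m-2).
    Rn, G in W m-2; Ta in deg C; VPD in kPa; ra, rc in s m-1.
    Constants are standard approximations at near-surface conditions."""
    rho_cp = 1.2 * 1.013e3                 # air density * specific heat (J m-3 K-1)
    gamma = 0.066                          # psychrometric constant (kPa K-1)
    # Slope of the saturation vapour pressure curve (kPa K-1), Tetens-based
    es = 0.6108 * np.exp(17.27*Ta / (Ta + 237.3))
    delta = 4098.0 * es / (Ta + 237.3)**2
    return (delta*(Rn - G) + rho_cp*VPD/ra) / (delta + gamma*(1.0 + rc/ra))

le_open = penman_monteith_le(500, 50, 25, 1.5, 30, 50)     # open stomata
le_closed = penman_monteith_le(500, 50, 25, 1.5, 30, 400)  # closed stomata
print(le_open, le_closed)   # higher canopy resistance -> lower latent heat flux
```

    This dependence is why an overestimated rc directly underestimates ETc, as noted for the models without LAI.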

  18. The conceptual basis of mathematics in cardiology IV: statistics and model fitting.

    PubMed

    Bates, Jason H T; Sobel, Burton E

    2003-06-01

    This is the fourth in a series of four articles developed for the readers of Coronary Artery Disease. Without language, ideas cannot be articulated. What may not be so immediately obvious is that they cannot be formulated either. One of the essential languages of cardiology is mathematics. Unfortunately, medical education does not emphasize, and in fact often neglects, empowering physicians to think mathematically. Reference to statistics, conditional probability, multicompartmental modeling, algebra, calculus and transforms is common but often without provision of genuine conceptual understanding. At the University of Vermont College of Medicine, Professor Bates developed a course designed to address these deficiencies. The course covered mathematical principles pertinent to clinical cardiovascular and pulmonary medicine and research. It focused on fundamental concepts to facilitate formulation and grasp of ideas. This series of four articles was developed to make the material available for a wider audience. The articles will be published sequentially in Coronary Artery Disease. Beginning with fundamental axioms and basic algebraic manipulations, they address algebra, function and graph theory, real and complex numbers, calculus and differential equations, mathematical modeling, linear system theory and integral transforms, and statistical theory. The principles and concepts they address provide the foundation needed for in-depth study of any of these topics. Perhaps of even more importance, they should empower cardiologists and cardiovascular researchers to utilize the language of mathematics in assessing the phenomena of immediate pertinence to diagnosis, pathophysiology and therapeutics. The presentations are interposed with queries (by Coronary Artery Disease, abbreviated as CAD) simulating the nature of interactions that occurred during the course itself. 
Each article concludes with one or more examples illustrating application of the concepts covered to cardiovascular medicine and biology.

  19. Dual Incorporation of the in vitro Data (IC50) and in vivo (Cmax) Data for the Prediction of Area Under the Curve (AUC) for Statins using Regression Models Developed for Either Pravastatin or Simvastatin.

    PubMed

    Srinivas, N R

    2016-08-01

    Linear regression models utilizing a single time point (Cmax) have been reported for pravastatin and simvastatin. A new model was developed for the prediction of the AUC of statins that utilized the slopes of the above 2 models, with pharmacokinetic (Cmax) and pharmacodynamic (IC50 value) components for the statins. The prediction of AUCs for various statins (pravastatin, atorvastatin, simvastatin and rosuvastatin) was carried out using the newly developed dual pharmacokinetic and pharmacodynamic model. Generally, the AUC predictions were contained within a 0.5- to 2-fold difference of the observed AUC, suggesting the utility of the new models. The root mean square error of the predictions was <45% for the 2 models. On the basis of the present work, it is feasible to utilize both pharmacokinetic (Cmax) and pharmacodynamic (IC50) data to effectively predict the AUC for statins. Such a new concept as described in this work may have utility in both the drug discovery and development stages. © Georg Thieme Verlag KG Stuttgart · New York.

  20. Spectral Behavior of a Linearized Land-Atmosphere Model: Applications to Hydrometeorology

    NASA Astrophysics Data System (ADS)

    Gentine, P.; Entekhabi, D.; Polcher, J.

    2008-12-01

    The present study develops an improved version of the linearized land-atmosphere model first introduced by Lettau (1951). This model is used to investigate the spectral response of land-surface variables to a daily forcing of incoming radiation at the land surface. An analytical solution of the problem is found in the form of temporal Fourier series and gives the atmospheric boundary-layer and soil profiles of the state variables (potential temperature, specific humidity, sensible and latent heat fluxes). Moreover, the spectral dependency of surface variables is expressed as a function of land-surface parameters (friction velocity, vegetation height, aerodynamic resistance, stomatal conductance). This approach has several advantages. First, the model requires little data to run and performs well: only time series of incoming radiation at the land surface and of mean specific humidity and temperature at any given height are required. Since these inputs are widely available over the globe, the model can easily be run and tested under various conditions. The model will also help in analysing the diurnal shape and frequency dependency of surface variables and soil-ABL profiles. In particular, a strong emphasis is placed on the explanation and prediction of the diurnal shapes of the Evaporative Fraction (EF) and Bowen ratio. EF is shown to remain a diurnal constant under restrictive conditions: fair and dry weather, with strong solar radiation and no clouds. Moreover, the EF pseudo-constant value is found and given as a function of surface parameters, such as aerodynamic resistance and stomatal conductance. The application of the model to the design of remote-sensing tools, according to the temporal resolution of the sensor, will also be discussed. Finally, possible extensions and improvements of the model will be discussed.
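    The temporal Fourier-series machinery underlying such a model can be sketched for a synthetic diurnal forcing: decompose the signal into its mean and a few diurnal harmonics and reconstruct it. All numbers below are illustrative, not from the study:

```python
import numpy as np

# Diurnal forcing represented as a truncated temporal Fourier series:
# a synthetic half-sine "incoming radiation" over 24 h (illustrative only).
t = np.linspace(0.0, 24.0, 240, endpoint=False)                # hours
forcing = np.maximum(0.0, 800.0*np.sin(np.pi*(t - 6.0)/12.0))  # daytime only
coeffs = np.fft.rfft(forcing) / forcing.size                   # c0 is the mean
# Reconstruct from the mean plus the first three diurnal harmonics
recon = np.full_like(t, coeffs[0].real)
for k in range(1, 4):
    recon += 2.0*np.abs(coeffs[k])*np.cos(2*np.pi*k*t/24.0 + np.angle(coeffs[k]))
print(np.max(np.abs(recon - forcing)))   # small truncation error
```

    In the analytical model the same handful of harmonics carries the diurnal response of each state variable, which is what makes a closed-form spectral solution tractable.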
