Class and Home Problems: Humidification, a True "Home" Problem for the Chemical Engineer
ERIC Educational Resources Information Center
Condoret, Jean-Stephane
2012-01-01
The problem of maintaining hygrothermal comfort in a house is addressed using the chemical engineer's toolbox. A simple dynamic model proved to give a good description of the humidification of the house in winter, using a domestic humidifier. Parameters of the model were identified from a simple experiment. Surprising results, especially…
A simple technique to increase profits in wood products marketing
George B. Harpole
1971-01-01
Mathematical models can be used to quickly solve some simple day-to-day marketing problems. This note explains how a sawmill production manager, who has an essentially fixed-capacity mill, can solve several optimization problems using pencil and paper, a forecast of market prices, and a simple algorithm. One such problem is to maximize profits in an operating period...
HIA, the next step: Defining models and roles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Putters, Kim
If HIA is to be an effective instrument for optimising health interests in the policy making process it has to recognise the different contexts in which policy is made and the relevance of both technical rationality and political rationality. Policy making may adopt a rational perspective in which there is a systematic and orderly progression from problem formulation to solution or a network perspective in which there are multiple interdependencies, extensive negotiation and compromise, and the steps from problem to solution are not followed sequentially or in any particular order. Policy problems may be simple with clear causal pathways and responsibilities or complex with unclear causal pathways and disputed responsibilities. Network analysis is required to show which stakeholders are involved, their support for health issues and the degree of consensus. From this analysis three models of HIA emerge. The first is the phases model, which is fitted to simple problems and a rational perspective of policymaking. This model involves following structured steps. The second model is the rounds (Echternach) model, which is fitted to complex problems and a network perspective of policymaking. This model is dynamic and concentrates on network solutions, taking these steps in no particular order. The final model is the 'garbage can' model, fitted to contexts which combine simple and complex problems. In this model HIA functions as a problem solver and signpost, keeping all possible solutions and stakeholders in play and allowing solutions to emerge over time. HIA models should be the beginning rather than the conclusion of discussion between the worlds of HIA and policymaking.
Inverse Modelling Problems in Linear Algebra Undergraduate Courses
ERIC Educational Resources Information Center
Martinez-Luaces, Victor E.
2013-01-01
This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…
Using Algorithms in Solving Synapse Transmission Problems.
ERIC Educational Resources Information Center
Stencel, John E.
1992-01-01
Explains how a simple three-step algorithm can aid college students in solving synapse transmission problems. Reports that not all of the students completely understood the algorithm. However, many learn a simple working model of synaptic transmission and understand quantitatively why an impulse will pass across a synapse. Students also see…
ERIC Educational Resources Information Center
Wood, Gordon W.
1975-01-01
Describes exercises using simple ball and stick models which students with no chemistry background can solve in the context of the original discovery. Examples include the tartaric acid and benzene problems. (GS)
Clairvoyant fusion: a new methodology for designing robust detection algorithms
NASA Astrophysics Data System (ADS)
Schaum, Alan
2016-10-01
Many realistic detection problems cannot be solved with simple statistical tests for known alternative probability models. Uncontrollable environmental conditions, imperfect sensors, and other uncertainties transform simple detection problems with likelihood ratio solutions into composite hypothesis (CH) testing problems. Recently many multi- and hyperspectral sensing CH problems have been addressed with a new approach. Clairvoyant fusion (CF) integrates the optimal detectors ("clairvoyants") associated with every unspecified value of the parameters appearing in a detection model. For problems with discrete parameter values, logical rules emerge for combining the decisions of the associated clairvoyants. For many problems with continuous parameters, analytic methods of CF have been found that produce closed-form solutions, or approximations for intractable problems. Here the principles of CF are reviewed and mathematical insights are described that have proven useful in the derivation of solutions. It is also shown how a second-stage fusion procedure can be used to create theoretically superior detection algorithms for all discrete parameter problems.
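As a toy illustration of the fusion idea (our construction, not the paper's algorithms): for a signal of known magnitude but unknown sign in Gaussian noise, the two clairvoyants are the detectors matched to +MU and -MU, and OR-fusing their decisions yields the familiar |x| > t test. A minimal Python sketch, with all constants assumed:

```python
import numpy as np

# Toy clairvoyant-fusion example: detect a signal with known magnitude MU but
# unknown sign in unit-variance Gaussian noise. The two "clairvoyants" are the
# optimal detectors for the two parameter values; OR fusion gives |x| > t.
rng = np.random.default_rng(0)
MU, N = 1.5, 100_000
t = 2.0  # common decision threshold (illustrative)

noise = rng.normal(size=N)                                    # H0 samples
signal = rng.choice([-MU, MU], size=N) + rng.normal(size=N)   # H1, random sign

def fused_detector(x, t):
    # clairvoyant for +MU fires when x > t; clairvoyant for -MU when x < -t
    return (x > t) | (x < -t)                                 # OR fusion == |x| > t

pfa = fused_detector(noise, t).mean()
pd = fused_detector(signal, t).mean()
print(f"P_FA ~= {pfa:.4f}, P_D ~= {pd:.4f}")
```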
Symmetry Breaking, Unification, and Theories Beyond the Standard Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nomura, Yasunori
2009-07-31
A model was constructed in which the supersymmetric fine-tuning problem is solved without extending the Higgs sector at the weak scale. We have demonstrated that the model can avoid all the phenomenological constraints, while avoiding excessive fine-tuning. We have also studied implications of the model on dark matter physics and collider physics. I have proposed an extremely simple construction for models of gauge mediation. We found that the {mu} problem can be simply and elegantly solved in a class of models where the Higgs fields couple directly to the supersymmetry breaking sector. We proposed a new way of addressing the flavor problem of supersymmetric theories. We have proposed a new framework of constructing theories of grand unification. We constructed a simple and elegant model of dark matter which explains the excess flux of electrons/positrons. We constructed a model of dark energy in which evolving quintessence-type dark energy is naturally obtained. We studied whether we can find evidence of the multiverse.
GIS-BASED HYDROLOGIC MODELING: THE AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT TOOL
Planning and assessment in land and water resource management are evolving from simple, local scale problems toward complex, spatially explicit regional ones. Such problems have to be addressed with distributed models that can compute runoff and erosion at different spatial a...
Equilibria of perceptrons for simple contingency problems.
Dawson, Michael R W; Dupuis, Brian
2012-08-01
The contingency between cues and outcomes is fundamentally important to theories of causal reasoning and to theories of associative learning. Researchers have computed the equilibria of Rescorla-Wagner models for a variety of contingency problems, and have used these equilibria to identify situations in which the Rescorla-Wagner model is consistent, or inconsistent, with normative models of contingency. Mathematical analyses that directly compare artificial neural networks to contingency theory have not been performed, because of the assumed equivalence between the Rescorla-Wagner learning rule and the delta rule training of artificial neural networks. However, recent results indicate that this equivalence is not as straightforward as typically assumed, suggesting a strong need for mathematical accounts of how networks deal with contingency problems. One such analysis is presented here, where it is proven that the structure of the equilibrium for a simple network trained on a basic contingency problem is quite different from the structure of the equilibrium for a Rescorla-Wagner model faced with the same problem. However, these structural differences lead to functionally equivalent behavior. The implications of this result for the relationships between associative learning, contingency theory, and connectionism are discussed.
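A minimal sketch of the learning rule at issue (learning rate, base rates, and outcome probabilities are our assumptions): for a linear output unit, the Rescorla-Wagner update and the delta rule coincide, which is the assumed equivalence the paper questions for perceptrons with nonlinear activations. Simulating a basic 2x2 contingency:

```python
import numpy as np

# Rescorla-Wagner learning of a contingency in which a target cue and an
# always-present context compete for associative strength (toy constants).
rng = np.random.default_rng(1)
p_o_given_c, p_o_given_not_c = 0.8, 0.4
alpha, n_trials = 0.05, 20_000

V = np.zeros(2)  # V[0]: context (always present), V[1]: target cue
for _ in range(n_trials):
    cue = rng.random() < 0.5
    x = np.array([1.0, 1.0 if cue else 0.0])           # cues present this trial
    p_outcome = p_o_given_c if cue else p_o_given_not_c
    lam = 1.0 if rng.random() < p_outcome else 0.0     # outcome on this trial
    V += alpha * x * (lam - x @ V)                     # RW / delta-rule update

dP = p_o_given_c - p_o_given_not_c
print(f"V_context={V[0]:.3f}, V_cue={V[1]:.3f}, deltaP={dP:.2f}")
```

For a perceptron the output passes through a nonlinear activation before the error is computed, which is where, per the abstract, the equilibrium structure diverges from the RW equilibrium while the behavior can remain functionally equivalent.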
The Diffusion Simulator - Teaching Geomorphic and Geologic Problems Visually.
ERIC Educational Resources Information Center
Gilbert, R.
1979-01-01
Describes a simple hydraulic simulator based on more complex models long used by engineers to develop approximate solutions. It allows students to visualize non-steady transfer, to apply a model to solve a problem, and to compare experimentally simulated information with calculated values. (Author/MA)
Resource-Competing Oscillator Network as a Model of Amoeba-Based Neurocomputer
NASA Astrophysics Data System (ADS)
Aono, Masashi; Hirata, Yoshito; Hara, Masahiko; Aihara, Kazuyuki
An amoeboid organism, Physarum, exhibits rich spatiotemporal oscillatory behavior and various computational capabilities. Previously, the authors created a recurrent neurocomputer incorporating the amoeba as a computing substrate to solve optimization problems. In this paper, considering the amoeba to be a network of oscillators coupled such that they compete for constant amounts of resources, we present a model of the amoeba-based neurocomputer. The model generates a number of oscillation modes and produces not only simple behavior to stabilize a single mode but also complex behavior to spontaneously switch among different modes, which reproduces well the experimentally observed behavior of the amoeba. To explore the significance of the complex behavior, we set up a test problem for comparing the computational performances of the oscillation modes. The problem is a kind of optimization problem of how to allocate a limited amount of resource to oscillators such that conflicts among them can be minimized. We show that the complex behavior enables the system to attain a wider variety of solutions to the problem and produces better performances compared with the simple behavior.
Building Mathematical Models of Simple Harmonic and Damped Motion.
ERIC Educational Resources Information Center
Edwards, Thomas
1995-01-01
By developing a sequence of mathematical models of harmonic motion, shows that mathematical models are not right or wrong, but instead are better or poorer representations of the problem situation. (MKR)
"Compacted" procedures for adults' simple addition: A review and critique of the evidence.
Chen, Yalin; Campbell, Jamie I D
2018-04-01
We review recent empirical findings and arguments proffered as evidence that educated adults solve elementary addition problems (3 + 2, 4 + 1) using so-called compacted procedures (e.g., unconscious, automatic counting), a conclusion that could have significant pedagogical implications. We begin with the large-sample experiment reported by Uittenhove, Thevenot and Barrouillet (2016, Cognition, 146, 289-303), which tested 90 adults on the 81 single-digit addition problems from 1 + 1 to 9 + 9. They identified the 12 very-small addition problems with different operands both ≤ 4 (e.g., 4 + 3) as a distinct subgroup of problems solved by unconscious, automatic counting: These items yielded a near-perfectly linear increase in answer response time (RT) yoked to the sum of the operands. Using the data reported in the article, however, we show that there are clear violations of the sum-counting model's predictions among the very-small addition problems, and that there is no real RT boundary associated with addends ≤ 4. Furthermore, we show that a well-known associative retrieval model of addition facts, the network interference theory (Campbell, 1995), predicts the results observed for these problems with high precision. We also review the other types of evidence adduced for the compacted procedure theory of simple addition and conclude that these findings are unconvincing in their own right and only distantly consistent with automatic counting. We conclude that the cumulative evidence for fast compacted procedures for adults' simple addition does not justify revision of the long-standing assumption that direct memory retrieval is ultimately the most efficient process of simple addition for nonzero problems, let alone sufficient to recommend significant changes to basic addition pedagogy.
Investigating decoherence in a simple system
NASA Technical Reports Server (NTRS)
Albrecht, Andreas
1991-01-01
The results of some simple calculations designed to study quantum decoherence are presented. The physics of quantum decoherence is briefly reviewed, and a very simple 'toy' model is analyzed. Exact solutions are found using numerical techniques. The type of incoherence exhibited by the model can be changed by varying a coupling strength. The author explains why the conventional approach to studying decoherence by checking the diagonality of the density matrix is not always adequate. Two other approaches, the decoherence functional and the Schmidt paths approach, are applied to the toy model and contrasted to each other. Possible problems with each are discussed.
Operator priming and generalization of practice in adults' simple arithmetic.
Chen, Yalin; Campbell, Jamie I D
2016-04-01
There is a renewed debate about whether educated adults solve simple addition problems (e.g., 2 + 3) by direct fact retrieval or by fast, automatic counting-based procedures. Recent research testing adults' simple addition and multiplication showed that a 150-ms preview of the operator (+ or ×) facilitated addition, but not multiplication, suggesting that a general addition procedure was primed by the + sign. In Experiment 1 (n = 36), we applied this operator-priming paradigm to rule-based problems (0 + N = N, 1 × N = N, 0 × N = 0) and 1 + N problems with N ranging from 0 to 9. For the rule-based problems, we found both operator-preview facilitation and generalization of practice (e.g., practicing 0 + 3 sped up unpracticed 0 + 8), the latter being a signature of procedure use; however, we also found operator-preview facilitation for 1 + N in the absence of generalization, which implies the 1 + N problems were solved by fact retrieval but nonetheless were facilitated by an operator preview. Thus, the operator preview effect does not discriminate procedure use from fact retrieval. Experiment 2 (n = 36) investigated whether a population with advanced mathematical training-engineering and computer science students-would show generalization of practice for nonrule-based simple addition problems (e.g., 1 + 4, 4 + 7). The 0 + N problems again presented generalization, whereas no nonzero problem type did; but all nonzero problems sped up when the identical problems were retested, as predicted by item-specific fact retrieval. The results pose a strong challenge to the generality of the proposal that skilled adults' simple addition is based on fast procedural algorithms, and instead support a fact-retrieval model of fast addition performance. (c) 2016 APA, all rights reserved.
Simple mental addition in children with and without mild mental retardation.
Janssen, R; De Boeck, P; Viaene, M; Vallaeys, L
1999-11-01
The speeded performance on simple mental addition problems of 6- and 7-year-old children with and without mild mental retardation is modeled from a person perspective and an item perspective. On the person side, it was found that a single cognitive dimension spanned the performance differences between the two ability groups. However, a discontinuity, or "jump," was observed in the performance of the normal ability group on the easier items. On the item side, the addition problems were almost perfectly ordered in difficulty according to their problem size. Differences in difficulty were explained by factors related to the difficulty of executing nonretrieval strategies. All findings were interpreted within the framework of Siegler's (e.g., R. S. Siegler & C. Shipley, 1995) model of children's strategy choices in arithmetic. Models from item response theory were used to test the hypotheses. Copyright 1999 Academic Press.
Vaeth, Michael; Skovlund, Eva
2004-06-15
For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
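A rough sketch of the recipe described above, under normal-approximation assumptions of our own choosing (the authors' exact formulas may differ): translate the slope into a log-odds difference of the slope times twice the covariate SD, hold the overall event rate fixed, and compute standard two-sample power for proportions:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def logistic_power(beta, sd_x, p_bar, n_per_group, alpha=0.05):
    """Approximate power for slope `beta` in logistic regression via the
    equivalent two-sample problem: log-odds differing by beta * 2 * sd_x,
    overall event rate held at p_bar (normal-approximation sketch)."""
    d = beta * 2.0 * sd_x
    expit = lambda z: 1.0 / (1.0 + np.exp(-z))
    # find baseline log-odds c so the two groups average to p_bar
    c = brentq(lambda c: 0.5 * (expit(c) + expit(c + d)) - p_bar, -20, 20)
    p1, p2 = expit(c), expit(c + d)
    se0 = np.sqrt(2 * p_bar * (1 - p_bar) / n_per_group)    # SE under H0
    se1 = np.sqrt((p1*(1-p1) + p2*(1-p2)) / n_per_group)    # SE under H1
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.cdf((abs(p2 - p1) - z_crit * se0) / se1)

print(f"power ~= {logistic_power(beta=0.5, sd_x=1.0, p_bar=0.3, n_per_group=150):.3f}")
```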
Getting to the Bottom of a Ladder Problem
ERIC Educational Resources Information Center
McCartney, Mark
2002-01-01
In this paper, the author introduces a simple problem relating to a pair of ladders. A mathematical model of the problem produces an equation which can be solved in a number of ways using mathematics appropriate to "A" level students or first year undergraduates. The author concludes that the ladder problem can be used in class to develop and…
This presentation presented information on entrainment models. Entrainment models use entrainment hypotheses to express the continuity equation. The advantage is that plume boundaries are known. A major disadvantage is that the problems that can be solved are rather simple. The ...
NASA Astrophysics Data System (ADS)
Auger-Méthé, Marie; Field, Chris; Albertsen, Christoffer M.; Derocher, Andrew E.; Lewis, Mark A.; Jonsen, Ian D.; Mills Flemming, Joanna
2016-05-01
State-space models (SSMs) are increasingly used in ecology to model time-series such as animal movement paths and population dynamics. This type of hierarchical model is often structured to account for two levels of variability: biological stochasticity and measurement error. SSMs are flexible. They can model linear and nonlinear processes using a variety of statistical distributions. Recent ecological SSMs are often complex, with a large number of parameters to estimate. Through a simulation study, we show that even simple linear Gaussian SSMs can suffer from parameter- and state-estimation problems. We demonstrate that these problems occur primarily when measurement error is larger than biological stochasticity, the condition that often drives ecologists to use SSMs. Using an animal movement example, we show how these estimation problems can affect ecological inference. Biased parameter estimates of an SSM describing the movement of polar bears (Ursus maritimus) result in overestimating their energy expenditure. We suggest potential solutions, but show that it often remains difficult to estimate parameters. While SSMs are powerful tools, they can give misleading results and we urge ecologists to assess whether the parameters can be estimated accurately before drawing ecological conclusions from their results.
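A minimal sketch of the kind of linear Gaussian SSM at issue (our toy values, chosen so measurement error exceeds biological stochasticity): simulate, then maximize the Kalman-filter likelihood and compare estimates to truth:

```python
import numpy as np
from scipy.optimize import minimize

# Linear Gaussian SSM (illustrative values, not the paper's):
#   state: x_t = phi * x_{t-1} + w_t,  w_t ~ N(0, q)  (biological stochasticity)
#   obs:   y_t = x_t + v_t,            v_t ~ N(0, r)  (measurement error)
rng = np.random.default_rng(2)
phi_true, q_true, r_true, T = 0.8, 0.1, 1.0, 200     # note r >> q
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi_true * x[t-1] + rng.normal(scale=np.sqrt(q_true))
y = x + rng.normal(scale=np.sqrt(r_true), size=T)

def neg_loglik(params):
    phi, log_q, log_r = params
    q, r = np.exp(log_q), np.exp(log_r)
    m, P, ll = 0.0, 1.0, 0.0
    for t in range(T):
        m_pred, P_pred = phi * m, phi**2 * P + q      # Kalman predict
        S = P_pred + r                                # innovation variance
        ll += -0.5 * (np.log(2*np.pi*S) + (y[t] - m_pred)**2 / S)
        K = P_pred / S                                # Kalman update
        m = m_pred + K * (y[t] - m_pred)
        P = (1 - K) * P_pred
    return -ll

fit = minimize(neg_loglik, x0=[0.5, np.log(0.5), np.log(0.5)], method="Nelder-Mead")
phi_hat, q_hat, r_hat = fit.x[0], np.exp(fit.x[1]), np.exp(fit.x[2])
print(f"phi={phi_hat:.2f} (true {phi_true}), q={q_hat:.2f} (true {q_true}), "
      f"r={r_hat:.2f} (true {r_true})")
```

Re-running with different seeds shows how unstable the variance estimates can be in exactly the r >> q regime the abstract highlights.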
Examination of multi-model ensemble seasonal prediction methods using a simple climate system
NASA Astrophysics Data System (ADS)
Kang, In-Sik; Yoo, Jin Ho
2006-02-01
A simple climate model was designed as a proxy for the real climate system, and a number of prediction models were generated by slightly perturbing the physical parameters of the simple model. A set of long (240 years) historical hindcast predictions was performed with various prediction models, which are used to examine various issues of multi-model ensemble seasonal prediction, such as the best ways of blending multi-models and the selection of models. Based on these results, we suggest a feasible way of maximizing the benefit of using multiple models in seasonal prediction. In particular, three types of multi-model ensemble prediction systems, i.e., the simple composite, superensemble, and the composite after statistically correcting individual predictions (corrected composite), are examined and compared to each other. The superensemble has more of an overfitting problem than the others, especially for the case of small training samples and/or weak external forcing, and the corrected composite produces the best prediction skill among the multi-model systems.
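The three combination schemes can be sketched on toy hindcasts (data and names are ours; note that in-sample skill will flatter the superensemble, and its overfitting problem only shows on held-out years):

```python
import numpy as np

# Three multi-model combination schemes on synthetic hindcasts.
# rows = years, cols = models; each model = truth + its own bias + noise.
rng = np.random.default_rng(3)
n_train, n_models = 40, 5
truth = rng.normal(size=n_train)
preds = (truth[:, None] + rng.normal(0.5, 1.0, n_models)
         + rng.normal(size=(n_train, n_models)))

# 1) simple composite: plain multi-model mean
simple = preds.mean(axis=1)

# 2) superensemble: multiple linear regression of truth on all models
X = np.column_stack([np.ones(n_train), preds])
coef, *_ = np.linalg.lstsq(X, truth, rcond=None)
superens = X @ coef

# 3) corrected composite: bias/amplitude-correct each model, then average
corrected = np.empty_like(preds)
for j in range(n_models):
    a = np.cov(truth, preds[:, j])[0, 1] / preds[:, j].var()
    b = truth.mean() - a * preds[:, j].mean()
    corrected[:, j] = a * preds[:, j] + b
corr_comp = corrected.mean(axis=1)

for name, p in [("simple", simple), ("superensemble", superens),
                ("corrected", corr_comp)]:
    print(name, f"RMSE={np.sqrt(((p - truth)**2).mean()):.3f}")
```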
Four simple ocean carbon models
NASA Technical Reports Server (NTRS)
Moore, Berrien, III
1992-01-01
This paper briefly reviews the key processes that determine oceanic CO2 uptake and sets this description within the context of four simple ocean carbon models. These models capture, in varying degrees, these key processes and establish a clear foundation for more realistic models that incorporate more directly the underlying physics and biology of the ocean rather than relying on simple parametric schemes. The purpose of this paper is more pedagogical than purely scientific. The problems encountered by current attempts to understand the global carbon cycle not only require our efforts but set a demand for a new generation of scientist, and it is hoped that this paper and the text in which it appears will help in this development.
The Krylov accelerated SIMPLE(R) method for flow problems in industrial furnaces
NASA Astrophysics Data System (ADS)
Vuik, C.; Saghir, A.; Boerstoel, G. P.
2000-08-01
Numerical modeling of the melting and combustion process is an important tool in gaining understanding of the physical and chemical phenomena that occur in a gas- or oil-fired glass-melting furnace. The incompressible Navier-Stokes equations are used to model the gas flow in the furnace. The discrete Navier-Stokes equations are solved by the SIMPLE(R) pressure-correction method. In these applications, many SIMPLE(R) iterations are necessary to obtain an accurate solution. In this paper, Krylov accelerated versions are proposed: GCR-SIMPLE(R). The properties of these methods are investigated for a simple two-dimensional flow. Thereafter, the efficiencies of the methods are compared for three-dimensional flows in industrial glass-melting furnaces.
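SIMPLE(R) itself needs a full CFD discretization, but the acceleration principle, wrapping a Krylov method around a basic stationary iteration, can be sketched on a generic linear system (our toy, not the furnace model): Jacobi iteration versus GMRES with the same Jacobi preconditioner:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, LinearOperator

# Stationary iteration vs. Krylov acceleration on a 1D Poisson-like system.
n = 200
A = diags([-1.0, 2.1, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()
b = np.ones(n)
Dinv = 1.0 / A.diagonal()

x = np.zeros(n)
for k in range(1, 2001):                         # plain Jacobi sweeps
    x = x + Dinv * (b - A @ x)
    if np.linalg.norm(b - A @ x) < 1e-8 * np.linalg.norm(b):
        break
print("Jacobi iterations:", k)

count = {"it": 0}
cb = lambda rk: count.__setitem__("it", count["it"] + 1)
M = LinearOperator((n, n), matvec=lambda v: Dinv * v, dtype=float)
x_kry, info = gmres(A, b, M=M, callback=cb)      # same preconditioner
print("Krylov (GMRES) iterations:", count["it"], "converged:", info == 0)
```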
The Crisis Prevention Analysis Model.
ERIC Educational Resources Information Center
Hoverland, Hal; And Others
1986-01-01
The Crisis Prevention Analysis model offers a framework for simple, straightforward self-appraisal by college administrators of problems in the following areas: fiscal, faculty and staff, support functions, and goals and attitudes areas. (MSE)
Inverse models: A necessary next step in ground-water modeling
Poeter, E.P.; Hill, M.C.
1997-01-01
Inverse models using, for example, nonlinear least-squares regression, provide capabilities that help modelers take full advantage of the insight available from ground-water models. However, lack of information about the requirements and benefits of inverse models is an obstacle to their widespread use. This paper presents a simple ground-water flow problem to illustrate the requirements and benefits of the nonlinear least-squares regression method of inverse modeling and discusses how these attributes apply to field problems. The benefits of inverse modeling include: (1) expedited determination of best fit parameter values; (2) quantification of the (a) quality of calibration, (b) data shortcomings and needs, and (c) confidence limits on parameter estimates and predictions; and (3) identification of issues that are easily overlooked during nonautomated calibration.
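A hedged sketch of the workflow (the toy aquifer model, parameter values, and observation points are ours): calibrate parameters by nonlinear least squares and read off linearized confidence limits, i.e., benefits (1) and (2c) above:

```python
import numpy as np
from scipy.optimize import least_squares

# Estimate transmissivity T and recharge R of a 1D steady-state aquifer,
# h(x) = h0 + (R/(2T)) * x * (2L - x), from noisy head observations.
rng = np.random.default_rng(4)
h0, L = 10.0, 1000.0
T_true, R_true = 50.0, 1e-3
xs = np.linspace(100, 900, 9)
heads = lambda p, x: h0 + (p[1] / (2 * p[0])) * x * (2 * L - x)
obs = heads([T_true, R_true], xs) + rng.normal(scale=0.05, size=xs.size)

res = least_squares(lambda p: heads(p, xs) - obs, x0=[10.0, 5e-4])

# linearized 95% confidence limits from the Jacobian at the optimum
J = res.jac
s2 = (res.fun @ res.fun) / (xs.size - 2)        # residual variance
cov = s2 * np.linalg.inv(J.T @ J)
se = np.sqrt(np.diag(cov))
for name, val, e in zip(["T", "R"], res.x, se):
    print(f"{name} = {val:.4g} +/- {1.96 * e:.2g}")
```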
Steady flow model user's guide
NASA Astrophysics Data System (ADS)
Doughty, C.; Hellstrom, G.; Tsang, C. F.; Claesson, J.
1984-07-01
Sophisticated numerical models that solve the coupled mass and energy transport equations for nonisothermal fluid flow in a porous medium were used to match analytical results and field data for aquifer thermal energy storage (ATES) systems. As an alternative for the ATES problem, the Steady Flow Model (SFM), a simplified but fast numerical model, was developed. A steady, purely radial flow field is prescribed in the aquifer and incorporated into the heat transport equation, which is then solved numerically. While the radial flow assumption limits the range of ATES systems that can be studied using the SFM, it greatly simplifies use of this code. The preparation of input is quite simple compared to that for a sophisticated coupled mass and energy model, and the cost of running the SFM is far cheaper. The simple flow field allows use of a special calculational mesh that eliminates the numerical dispersion usually associated with the numerical solution of convection problems. The problem is defined, the algorithms used to solve it are outlined, and the input and output for the SFM are described.
Statistical methodologies for the control of dynamic remapping
NASA Technical Reports Server (NTRS)
Saltz, J. H.; Nicol, D. M.
1986-01-01
Following an initial mapping of a problem onto a multiprocessor machine or computer network, system performance often deteriorates with time. In order to maintain high performance, it may be necessary to remap the problem. The decision to remap must take into account measurements of performance deterioration, the cost of remapping, and the estimated benefits achieved by remapping. We examine the tradeoff between the costs and the benefits of remapping two qualitatively different kinds of problems. One problem assumes that performance deteriorates gradually, the other assumes that performance deteriorates suddenly. We consider a variety of policies for governing when to remap. In order to evaluate these policies, statistical models of problem behaviors are developed. Simulation results are presented which compare simple policies with computationally expensive optimal decision policies; these results demonstrate that for each problem type, the proposed simple policies are effective and robust.
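A toy simulation of the cost/benefit trade-off for the gradual-deterioration case (all constants are illustrative; the threshold rule below is a simple policy in the paper's spirit, not its exact formulation):

```python
import numpy as np

# Per-step work grows as load imbalance drifts upward; remapping restores
# balance but costs REMAP_COST steps of work.
rng = np.random.default_rng(5)
STEPS, REMAP_COST, BASE = 2000, 25.0, 1.0

def run(threshold=None):
    total, penalty = 0.0, 0.0
    for _ in range(STEPS):
        penalty += abs(rng.normal(0.0, 0.02))   # gradual deterioration
        if threshold is not None and penalty > threshold:
            total += REMAP_COST                 # pay to rebalance
            penalty = 0.0
        total += BASE + penalty                 # cost of this step
    return total

print("never remap :", round(run(None)))
for thr in (0.25, 0.5, 1.0, 2.0):
    print(f"threshold {thr:4}:", round(run(thr)))
```

Sweeping the threshold exposes the trade-off: remap too eagerly and the remapping cost dominates; too lazily and the accumulated imbalance does.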
Diffusion of a new intermediate product in a simple 'classical-Schumpeterian' model.
Haas, David
2018-05-01
This paper deals with the problem of new intermediate products within a simple model, where production is circular and goods enter into the production of other goods. It studies the process by which the new good is absorbed into the economy and the structural transformation that goes with it. By means of a long-period method the forces of structural transformation are examined, in particular the shift of existing means of production towards the innovation and the mechanism of differential growth in terms of alternative techniques and their associated systems of production. We treat two important Schumpeterian topics: the question of technological unemployment and the problem of 'forced saving' and the related problem of an involuntary reduction of real consumption per capita. It is shown that both phenomena are potential by-products of the transformation process.
Genetic Algorithms and Local Search
NASA Technical Reports Server (NTRS)
Whitley, Darrell
1996-01-01
The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
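A minimal memetic sketch of the hybrid idea (test function, operators, and rates are our choices): a plain genetic algorithm whose offspring are improved by a short hill-climbing local search:

```python
import numpy as np

# Hybrid (memetic) GA: genetic search plus local improvement of offspring.
rng = np.random.default_rng(6)
f = lambda x: np.sum(x**2 - 10 * np.cos(2 * np.pi * x)) + 10 * x.size  # Rastrigin

def local_search(x, step=0.05, iters=20):
    for _ in range(iters):                      # simple stochastic hill climber
        cand = x + rng.normal(0, step, x.size)
        if f(cand) < f(x):
            x = cand
    return x

POP, DIM, GENS = 30, 5, 60
pop = rng.uniform(-5.12, 5.12, (POP, DIM))
for _ in range(GENS):
    fit = np.array([f(ind) for ind in pop])
    parents = pop[np.argsort(fit)[:POP // 2]]   # truncation selection
    kids = []
    for _ in range(POP - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        child = np.where(rng.random(DIM) < 0.5, a, b)               # crossover
        child += rng.normal(0, 0.1, DIM) * (rng.random(DIM) < 0.2)  # mutation
        kids.append(local_search(child))        # the "hybrid" step
    pop = np.vstack([parents, kids])
print("best f =", float(min(f(ind) for ind in pop)))
```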
Population and Pollution in the United States
ERIC Educational Resources Information Center
Ridker, Ronald G.
1972-01-01
Analyzes a simple model relating environmental pollution to population and per capita income and concludes that no single cause is sufficient to explain.... environmental problems, and that there is little about the pollution problems.... of the next 50 years that is inevitable." (Author/AL)
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.
1998-01-01
Robust control system analysis and design is based on an uncertainty description, called a linear fractional transformation (LFT), which separates the uncertain (or varying) part of the system from the nominal system. These models are also useful in the design of gain-scheduled control systems based on Linear Parameter Varying (LPV) methods. Low-order LFT models are difficult to form for problems involving nonlinear parameter variations. This paper presents a numerical computational method for constructing an LFT model for a given LPV model. The method is developed for multivariate polynomial problems, and uses simple matrix computations to obtain an exact low-order LFT representation of the given LPV system without the use of model reduction. Although the method is developed for multivariate polynomial problems, multivariate rational problems can also be solved using this method by reformulating the rational problem into a polynomial form.
Evaluation of Genetic Algorithm Concepts using Model Problems. Part 1; Single-Objective Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic-algorithm-based optimization approach is described and evaluated using a simple hill-climbing model problem. The model problem utilized herein allows for the broad specification of a large number of search spaces including spaces with an arbitrary number of genes or decision variables and an arbitrary number of hills or modes. In the present study, only single objective problems are considered. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all problems attempted. The most difficult problems - those with large hyper-volumes and multi-mode search spaces containing a large number of genes - require a large number of function evaluations for GA convergence, but they always converge.
A Model for General Parenting Skill is Too Simple: Mediational Models Work Better.
ERIC Educational Resources Information Center
Patterson, G. R.; Yoerger, K.
A study was designed to determine whether mediational models of parenting patterns account for significantly more variance in academic achievement than more general models. Two general models and two mediational models were considered. The first model identified five skills: (1) discipline; (2) monitoring; (3) family problem solving; (4) positive…
NASA Astrophysics Data System (ADS)
Rahimi, Zaher; Sumelka, Wojciech; Yang, Xiao-Jun
2017-11-01
The application of fractional calculus in fractional models (FMs) makes them more flexible than integer models, inasmuch as they can include both integer and non-integer operators. In other words, FMs let us use more of the potential of mathematics in modeling physical phenomena, since the use of both integer and fractional operators allows a better modeling of problems, which makes FMs more flexible and powerful. In the present work, a new fractional nonlocal model has been proposed, which has a simple form and can be used in different problems due to the simple form of its numerical solutions. The model is then used in the governing equations of motion of the Timoshenko beam theory (TBT) and the Euler-Bernoulli beam theory (EBT). Next, free vibration of Timoshenko and Euler-Bernoulli simply-supported (S-S) beams has been investigated. The Galerkin weighted residual method has been used to solve the non-linear governing equations.
DIVERSE MODELS FOR SOLVING CONTRASTING OUTFALL PROBLEMS
Mixing zone initial dilution and far-field models are useful for assuring that water quality criteria will be met when specific outfall discharge criteria are applied. Presented here is a selective review of mixing zone initial dilution models and relatively simple far-field tran...
Luo, Haoxiang; Mittal, Rajat; Zheng, Xudong; Bielamowicz, Steven A.; Walsh, Raymond J.; Hahn, James K.
2008-01-01
A new numerical approach for modeling a class of flow–structure interaction problems typically encountered in biological systems is presented. In this approach, a previously developed, sharp-interface, immersed-boundary method for incompressible flows is used to model the fluid flow and a new, sharp-interface Cartesian grid, immersed boundary method is devised to solve the equations of linear viscoelasticity that governs the solid. The two solvers are coupled to model flow–structure interaction. This coupled solver has the advantage of simple grid generation and efficient computation on simple, single-block structured grids. The accuracy of the solid-mechanics solver is examined by applying it to a canonical problem. The solution methodology is then applied to the problem of laryngeal aerodynamics and vocal fold vibration during human phonation. This includes a three-dimensional eigen analysis for a multi-layered vocal fold prototype as well as two-dimensional, flow-induced vocal fold vibration in a modeled larynx. Several salient features of the aerodynamics as well as vocal-fold dynamics are presented. PMID:19936017
NASA Technical Reports Server (NTRS)
Baldwin, B. S.; Maccormack, R. W.; Deiwert, G. S.
1975-01-01
The time-splitting explicit numerical method of MacCormack is applied to separated turbulent boundary layer flow problems. Modifications of this basic method are developed to counter difficulties associated with complicated geometry and severe numerical resolution requirements of turbulence model equations. The accuracy of solutions is investigated by comparison with exact solutions for several simple cases. Procedures are developed for modifying the basic method to improve the accuracy. Numerical solutions of high-Reynolds-number separated flows over an airfoil and shock-separated flows over a flat plate are obtained. A simple mixing length model of turbulence is used for the transonic flow past an airfoil. A nonorthogonal mesh of arbitrary configuration facilitates the description of the flow field. For the simpler geometry associated with the flat plate, a rectangular mesh is used, and solutions are obtained based on a two-equation differential model of turbulence.
Ancient Paradoxes Can Extend Mathematical Thinking
ERIC Educational Resources Information Center
Czocher, Jennifer A.; Moss, Diana L.
2017-01-01
This article presents the Snail problem, a relatively simple challenge about motion that offers engaging extensions involving the notion of infinity. It encourages students in grades 5-9 to connect mathematics learning to logic, history, and philosophy through analyzing the problem, making sense of quantitative relationships, and modeling with…
Kopyt, Paweł; Celuch, Małgorzata
2007-01-01
A practical implementation of a hybrid simulation system capable of modeling coupled electromagnetic-thermodynamic problems typical in microwave heating is described. The paper presents two approaches to modeling such problems. Both are based on an FDTD-based commercial electromagnetic solver coupled to an external thermodynamic analysis tool required for calculations of heat diffusion. The first approach utilizes a simple FDTD-based thermal solver while in the second it is replaced by a universal commercial CFD solver. The accuracy of the two modeling systems is verified against the original experimental data as well as the measurement results available in literature.
Simulated Three-Point Problems.
ERIC Educational Resources Information Center
Leyden, Michael B.
1979-01-01
The concept of sloping bedrock strata is portrayed by simple construction of a cardboard model. By use of wires and graph paper, students simulate the drilling of wells and use standard mathematical operations to determine strike and dip of the model stratum. (RE)
Linear complementarity formulation for 3D frictional sliding problems
Kaven, Joern; Hickman, Stephen H.; Davatzes, Nicholas C.; Mutlu, Ovunc
2012-01-01
Frictional sliding on quasi-statically deforming faults and fractures can be modeled efficiently using a linear complementarity formulation. We review the formulation in two dimensions and expand the formulation to three-dimensional problems including problems of orthotropic friction. This formulation accurately reproduces analytical solutions to static Coulomb friction sliding problems. The formulation accounts for opening displacements that can occur near regions of non-planarity even under large confining pressures. Such problems are difficult to solve owing to the coupling of relative displacements and tractions; thus, many geomechanical problems tend to neglect these effects. Simple test cases highlight the importance of including friction and allowing for opening when solving quasi-static fault mechanics models. These results also underscore the importance of considering the effects of non-planarity in modeling processes associated with crustal faulting.
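One standard computational route for such complementarity formulations is a projected Gauss-Seidel sweep on the LCP; the sketch below (toy matrix and vector, not the paper's elastic system) solves w = Mz + q with w ≥ 0, z ≥ 0, and z·w = 0:

```python
import numpy as np

# Projected Gauss-Seidel for the linear complementarity problem (LCP):
#   w = M z + q,  w >= 0,  z >= 0,  z.w = 0
def pgs(M, q, iters=500, tol=1e-10):
    z = np.zeros(q.size)
    for _ in range(iters):
        z_old = z.copy()
        for i in range(q.size):
            r = q[i] + M[i] @ z - M[i, i] * z[i]
            z[i] = max(0.0, -r / M[i, i])       # project onto z_i >= 0
        if np.max(np.abs(z - z_old)) < tol:
            break
    return z

M = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])                 # symmetric positive definite
q = np.array([-1.0, -2.0, 0.5])
z = pgs(M, q)
w = M @ z + q
print("z =", z.round(6), " w =", w.round(6), " z.w =", float(z @ w))
```

In a frictional-contact setting the z variables play the role of constrained slips or openings and w the corresponding tractions; the complementarity condition is what lets elements either stick or slide/open.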
Data-driven indexing mechanism for the recognition of polyhedral objects
NASA Astrophysics Data System (ADS)
McLean, Stewart; Horan, Peter; Caelli, Terry M.
1992-02-01
This paper is concerned with the problem of searching large model databases. To date, most object recognition systems have concentrated on the problem of matching using simple searching algorithms. This is quite acceptable when the number of object models is small. However, in the future, general purpose computer vision systems will be required to recognize hundreds or perhaps thousands of objects and, in such circumstances, efficient searching algorithms will be needed. The problem of searching a large model database is one which must be addressed if future computer vision systems are to be at all effective. In this paper we present a method we call data-driven feature-indexed hypothesis generation as one solution to the problem of searching large model databases.
Problem-Solving in the Pre-Clinical Curriculum: The Uses of Computer Simulations.
ERIC Educational Resources Information Center
Michael, Joel A.; Rovick, Allen A.
1986-01-01
Promotes the use of computer-based simulations in the pre-clinical medical curriculum as a means of providing students with opportunities for problem solving. Describes simple simulations of skeletal muscle loads, complex simulations of major organ systems and comprehensive simulation models of the entire human body. (TW)
Simple predictive model for Early Childhood Caries of Chilean children.
Fierro Monti, Claudia; Pérez Flores, M; Brunotto, M
2014-01-01
Early Childhood Caries (ECC), in both industrialized and developing countries, is the most prevalent chronic disease in childhood, and it is still a public health problem, affecting mainly populations considered vulnerable, despite being preventable. The purpose of this study was to obtain a simple predictive model based on risk factors for improving public health strategies for ECC prevention in 3-5 year-old children. Clinical, environmental and psycho-socio-cultural data of children (n=250) aged 3-5 years, of both genders, from the Health Centers, were recorded in a Clinical History and Behavioral Survey. 24% of children presented behavioral problems (bizarre behavior being the main feature observed). The variables associated with dmf ≥ 4 were: difficult child temperament (OR=2.43 [1.34, 4.40]) and home stress (OR=3.14 [1.54, 6.41]). It was observed that the model for male gender has higher accuracy for ECC (AUC=78%, p-value=0.000) than the others. Based on the results, we proposed a model where oral hygiene, sugar intake, male gender, and difficult temperament are the main factors for predicting ECC. This model could be a promising tool for cost-effective early childhood caries control.
Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.
2005-01-01
This paper is the second of a set of two papers in which we study the inverse refraction problem. The first paper, "Types of Geophysical Nonuniqueness through Minimization," studies and classifies the types of nonuniqueness that exist when solving inverse problems depending on the participation of a priori information required to obtain reliable solutions of inverse geophysical problems. In view of the classification developed, in this paper we study the type of nonuniqueness associated with the inverse refraction problem. An approach for obtaining a realistic solution to the inverse refraction problem is offered in a third paper that is in preparation. The nonuniqueness of the inverse refraction problem is examined by using a simple three-layer model. Like many other inverse geophysical problems, the inverse refraction problem does not have a unique solution. Conventionally, nonuniqueness is considered to be a result of insufficient data and/or error in the data, for any fixed number of model parameters. This study illustrates that even for overdetermined and error free data, nonlinear inverse refraction problems exhibit exact-data nonuniqueness, which further complicates the problem of nonuniqueness. By evaluating the nonuniqueness of the inverse refraction problem, this paper targets the improvement of refraction inversion algorithms, and as a result, the achievement of more realistic solutions. The nonuniqueness of the inverse refraction problem is examined initially by using a simple three-layer model. The observations and conclusions of the three-layer model nonuniqueness study are used to evaluate the nonuniqueness of more complicated n-layer models and multi-parameter cell models such as in refraction tomography. For any fixed number of model parameters, the inverse refraction problem exhibits continuous ranges of exact-data nonuniqueness. Such an unfavorable type of nonuniqueness can be uniquely solved only by providing abundant a priori information. Insufficient a priori information during the inversion is the reason why refraction methods often may not produce desired results or even fail. This work also demonstrates that the application of the smoothing constraints, typical when solving ill-posed inverse problems, has a dual and contradictory role when applied to the ill-posed inverse problem of refraction travel times. This observation indicates that smoothing constraints may play such a two-fold role when applied to other inverse problems. Other factors that contribute to inverse-refraction-problem nonuniqueness are also considered, including indeterminacy, statistical data-error distribution, numerical error and instability, finite data, and model parameters. © Birkhäuser Verlag, Basel, 2005.
Using Synchronous Boolean Networks to Model Several Phenomena of Collective Behavior
Kochemazov, Stepan; Semenov, Alexander
2014-01-01
In this paper, we propose an approach for modeling and analysis of a number of phenomena of collective behavior. By collectives we mean multi-agent systems that transition from one state to another at discrete moments of time. The behavior of a member of a collective (agent) is called conforming if the opinion of this agent at current time moment conforms to the opinion of some other agents at the previous time moment. We presume that at each moment of time every agent makes a decision by choosing from the set {0, 1} (where 1-decision corresponds to action and 0-decision corresponds to inaction). In our approach we model collective behavior with synchronous Boolean networks. We presume that in a network there can be agents that act at every moment of time. Such agents are called instigators. Also there can be agents that never act. Such agents are called loyalists. Agents that are neither instigators nor loyalists are called simple agents. We study two combinatorial problems. The first problem is to find a disposition of instigators that in several time moments transforms a network from a state where the majority of simple agents are inactive to a state with the majority of active agents. The second problem is to find a disposition of loyalists that returns the network to a state with the majority of inactive agents. Similar problems are studied for networks in which simple agents demonstrate the contrary to conforming behavior that we call anticonforming. We obtained several theoretical results regarding the behavior of collectives of agents with conforming or anticonforming behavior. In computational experiments we solved the described problems for randomly generated networks with several hundred vertices. We reduced corresponding combinatorial problems to the Boolean satisfiability problem (SAT) and used modern SAT solvers to solve the instances obtained. PMID:25526612
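A toy synchronous Boolean network with conforming simple agents plus instigator and loyalist dispositions can be simulated directly (sizes, wiring, and the majority rule are our assumptions; the paper's analysis reduces such questions to SAT rather than simulating):

```python
import numpy as np

# Synchronous Boolean network: instigators always 1, loyalists always 0,
# simple agents copy the majority of their K inputs from the previous step.
rng = np.random.default_rng(7)
N, K = 60, 5                                    # agents, inputs per agent
wiring = np.array([rng.choice(N, K, replace=False) for _ in range(N)])
instigators = set(range(3))                     # one chosen disposition
loyalists = set(range(3, 6))

def step(state):
    new = np.empty_like(state)
    for i in range(N):
        if i in instigators:
            new[i] = 1                          # always act
        elif i in loyalists:
            new[i] = 0                          # never act
        else:                                   # conforming: follow majority
            new[i] = int(state[wiring[i]].sum() * 2 > K)
    return new

state = np.zeros(N, dtype=int)
state[list(instigators)] = 1
for t in range(30):
    state = step(state)
print("active fraction after 30 steps:", round(state.mean(), 2))
```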
Rocket Engine Oscillation Diagnostics
NASA Technical Reports Server (NTRS)
Nesman, Tom; Turner, James E. (Technical Monitor)
2002-01-01
Rocket engine oscillation data can reveal many physical phenomena ranging from unsteady flow and acoustics to rotordynamics and structural dynamics. Because of this, engine diagnostics based on oscillation data should employ both signal analysis and physical modeling. This paper describes an approach to rocket engine oscillation diagnostics, types of problems encountered, and example problems solved. Determination of design guidelines and environments (or loads) from oscillating phenomena is required during initial stages of rocket engine design, while the additional tasks of health monitoring, incipient failure detection, and anomaly diagnostics occur during engine development and operation. Oscillations in rocket engines are typically related to flow driven acoustics, flow excited structures, or rotational forces. Additional sources of oscillatory energy are combustion and cavitation. Included in the example problems is a sampling of signal analysis tools employed in diagnostics. The rocket engine hardware includes combustion devices, valves, turbopumps, and ducts. Simple models of an oscillating fluid system or structure can be constructed to estimate pertinent dynamic parameters governing the unsteady behavior of engine systems or components. In the example problems it is shown that simple physical modeling when combined with signal analysis can be successfully employed to diagnose complex rocket engine oscillatory phenomena.
Variable Step-Size Selection Methods for Implicit Integration Schemes
2005-10-01
for ρ_k numerically. 4 Examples: In this section we explore this variable step-size selection method for two problems, the Lotka-Volterra model and the Kepler problem. 4.1 The Lotka-Volterra Model: For this example we consider the Lotka-Volterra model of a simple predator-prey system from ... problems. Consider this variation to the Lotka-Volterra problem: (du/dt, dv/dt) = (u²v(v − 2), v²u(1 − u)) = f(u, v); t ∈ [0, 50]
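A hedged sketch of an implicit scheme with variable step size for the system quoted above (backward Euler with step-doubling error control; the initial condition, tolerance, and growth/shrink factors are our choices, not the report's method):

```python
import numpy as np

# Backward Euler with crude step-doubling error control for the
# Lotka-Volterra variation: du/dt = u^2 v (v-2), dv/dt = v^2 u (1-u).
def f(y):
    u, v = y
    return np.array([u**2 * v * (v - 2.0), v**2 * u * (1.0 - u)])

def J(y):                                        # analytic Jacobian of f
    u, v = y
    return np.array([[2*u*v*(v - 2.0), u**2 * (2*v - 2.0)],
                     [v**2 * (1 - 2*u), 2*u*v*(1.0 - u)]])

def be_step(y, h, newton_iters=8):
    z = y + h * f(y)                             # explicit predictor
    for _ in range(newton_iters):                # Newton on z - y - h f(z) = 0
        g = z - y - h * f(z)
        z = z - np.linalg.solve(np.eye(2) - h * J(z), g)
    return z

t, y, h, TOL = 0.0, np.array([1.5, 1.0]), 0.01, 1e-6
while t < 50.0:
    h = min(h, 50.0 - t)
    full = be_step(y, h)
    half = be_step(be_step(y, h / 2), h / 2)
    err = np.max(np.abs(full - half))            # step-doubling error estimate
    if err < TOL:
        t, y = t + h, half                       # accept the more accurate value
        h = min(h * 1.5, 0.5)                    # grow step, with a cap
    else:
        h *= 0.5                                 # reject, shrink step
print("y(50) ~=", y.round(4), "final h =", h)
```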
Correlation of spacecraft thermal mathematical models to reference data
NASA Astrophysics Data System (ADS)
Torralbo, Ignacio; Perez-Grande, Isabel; Sanz-Andres, Angel; Piqueras, Javier
2018-03-01
Model-to-test correlation is a frequent problem in spacecraft thermal control design. The idea is to determine the values of the parameters of the thermal mathematical model (TMM) that allow reaching a good fit between the TMM results and test data, in order to reduce the uncertainty of the mathematical model. Quite often this task is performed manually, mainly because good engineering knowledge and experience are needed to reach a successful compromise, but the use of a mathematical tool could facilitate this work. The correlation process can be considered as the minimization of the error of the model results with regard to the reference data. In this paper, a simple method is presented, suitable for solving the TMM-to-test correlation problem, using a Jacobian matrix formulation and the Moore-Penrose pseudo-inverse, generalized to include several load cases. In addition, in simple cases this method allows analytical solutions to be obtained, which helps to analyze some problems that appear when the Jacobian matrix is singular. To show the implementation of the method, two problems have been considered, one more academic, and the other one the TMM of an electronic box of the PHI instrument of the ESA Solar Orbiter mission, to be flown in 2019. The use of singular value decomposition of the Jacobian matrix to analyze and reduce these models is also shown.
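The correlation step can be sketched in a few lines (the toy two-parameter thermal model and target temperatures are ours, standing in for a real TMM): iterate p ← p + J⁺(T_test − T_model) with a finite-difference Jacobian and the Moore-Penrose pseudo-inverse:

```python
import numpy as np

# Gauss-Newton style correlation of model parameters to test temperatures
# via the Moore-Penrose pseudo-inverse of the Jacobian dT/dp.
def model_temps(p):
    # p = [conductive coupling, radiative factor]; a stand-in thermal model
    g, r = p
    return np.array([300 + 20/g, 290 + 15/g - 5*r, 280 + 8*r])

p = np.array([1.0, 1.0])                        # initial TMM parameters
T_test = np.array([318.0, 301.0, 286.5])        # "test" reference data

for _ in range(10):
    T_model = model_temps(p)
    eps = 1e-6                                  # finite-difference Jacobian
    Jac = np.column_stack([
        (model_temps(p + eps * np.eye(2)[j]) - T_model) / eps for j in range(2)])
    p = p + np.linalg.pinv(Jac) @ (T_test - T_model)

print("correlated parameters:", p.round(4))
print("residuals:", (T_test - model_temps(p)).round(3))
```

With more temperatures than parameters, as here, the pseudo-inverse step is a least-squares fit; a singular Jacobian (the case the paper analyzes) leaves some parameter combinations undetermined.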
Teaching New Keynesian Open Economy Macroeconomics at the Intermediate Level
ERIC Educational Resources Information Center
Bofinger, Peter; Mayer, Eric; Wollmershauser, Timo
2009-01-01
For the open economy, the workhorse model in intermediate textbooks still is the Mundell-Fleming model, which basically extends the investment and savings, liquidity preference and money supply (IS-LM) model to open economy problems. The authors present a simple New Keynesian model of the open economy that introduces open economy considerations…
Portfolio optimization using fuzzy linear programming
NASA Astrophysics Data System (ADS)
Pandit, Purnima K.
2013-09-01
Portfolio Optimization (PO) is a problem in finance, in which an investor tries to maximize return and minimize risk by carefully choosing different assets. Expected return and risk are the most important parameters with regard to optimal portfolios. In its simple form, PO can be modeled as a quadratic programming problem, which can be put into an equivalent linear form. PO problems with fuzzy parameters can be solved as multi-objective fuzzy linear programming problems. In this paper we give the solution to such problems with an illustrative example.
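The linear form can be sketched with a toy instance (returns and exposure caps are illustrative; the fuzzy multi-objective version layers membership functions on top of an LP like this):

```python
import numpy as np
from scipy.optimize import linprog

# Minimal linear-programming portfolio: maximize expected return subject to
# full investment and per-asset exposure caps (no short sales).
mu = np.array([0.08, 0.12, 0.10, 0.05])         # expected returns, 4 assets
res = linprog(c=-mu,                            # linprog minimizes, so negate
              A_eq=[[1, 1, 1, 1]], b_eq=[1.0],  # weights sum to 1
              bounds=[(0.0, 0.4)] * 4)          # 0 <= w_i <= 0.4
print("weights:", res.x.round(3),
      " expected return:", round(float(-res.fun), 4))
```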
Teaching Mathematical Modelling: Demonstrating Enrichment and Elaboration
ERIC Educational Resources Information Center
Warwick, Jon
2015-01-01
This paper uses a series of models to illustrate one of the fundamental processes of model building--that of enrichment and elaboration. The paper describes how a problem context is given which allows a series of models to be developed from a simple initial model using a queuing theory framework. The process encourages students to think about the…
A Simple Interactive Introduction to Teaching Genetic Engineering
ERIC Educational Resources Information Center
Child, Paula
2013-01-01
In the UK, at key stage 4, students aged 14-15 studying GCSE Core Science or Unit 1 of the GCSE Biology course are required to be able to describe the process of genetic engineering to produce bacteria that can produce insulin. The simple interactive introduction described in this article allows students to consider the problem, devise a model and…
ERIC Educational Resources Information Center
Sparks, Richard L.; Luebbers, Julie
2018-01-01
Conventional wisdom suggests that students classified as learning disabled will exhibit difficulties with foreign language (FL) learning, but evidence has not supported a relationship between FL learning problems and learning disabilities. The simple view of reading model posits that reading comprehension is the product of word decoding and…
The Development from Effortful to Automatic Processing in Mathematical Cognition.
ERIC Educational Resources Information Center
Kaye, Daniel B.; And Others
This investigation capitalizes upon the information processing models that depend upon measurement of the latency of response to a mathematical problem and the decomposition of reaction time (RT). Simple two-term addition problems were presented with possible solutions for true-false verification, and accuracy and RT were recorded. Total…
Forest management and economics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buongiorno, J.; Gilless, J.K.
1987-01-01
This volume provides a survey of quantitative methods, guiding the reader through formulation and analysis of models that address forest management problems. The authors use simple mathematics, graphics, and short computer programs to explain each method. Emphasizing applications, they discuss linear, integer, dynamic, and goal programming; simulation; network modeling; and econometrics, as these relate to problems of determining economic harvest schedules in even-aged and uneven-aged forests, the evaluation of forest policies, multiple-objective decision making, and more.
Prediction of aircraft handling qualities using analytical models of the human pilot
NASA Technical Reports Server (NTRS)
Hess, R. A.
1982-01-01
The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot-induced oscillations (PIO) is formulated. Finally, a model-based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.
Qualitative methods in quantum theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Migdal, A.B.
The author feels that the solution of most problems in theoretical physics begins with the application of qualitative methods - dimensional estimates and estimates made from simple models, the investigation of limiting cases, the use of the analytic properties of physical quantities, etc. This book proceeds in this spirit, rather than in a formal, mathematical way with no traces of the sweat involved in the original work left to show. The chapters are entitled Dimensional and model approximations, Various types of perturbation theory, The quasi-classical approximation, Analytic properties of physical quantities, Methods in the many-body problem, and Qualitative methods in quantum field theory. Each chapter begins with a detailed introduction, in which the physical meaning of the results obtained in that chapter is explained in a simple way. 61 figures. (RWR)
a Proposed Benchmark Problem for Scatter Calculations in Radiographic Modelling
NASA Astrophysics Data System (ADS)
Jaenisch, G.-R.; Bellon, C.; Schumm, A.; Tabary, J.; Duvauchelle, Ph.
2009-03-01
Code validation is a permanent concern in computer modelling, and has been addressed repeatedly in eddy current and ultrasonic modelling. A good benchmark problem is sufficiently simple to be taken into account by various codes without strong requirements on geometry representation capabilities, focuses on few or even a single aspect of the problem at hand to facilitate interpretation and to prevent compound errors from compensating for one another, yields a quantitative result, and is experimentally accessible. In this paper we attempt to address code validation for one aspect of radiographic modelling: the prediction of scattered radiation. Many NDT applications cannot neglect scattered radiation, and the scatter calculation is thus important to faithfully simulate the inspection situation. Our benchmark problem covers the wall thickness range of 10 to 50 mm for single wall inspections, with energies ranging from 100 to 500 keV in the first stage, and up to 1 MeV with wall thicknesses up to 70 mm in the extended stage. A simple plate geometry is sufficient for this purpose, and the scatter data are compared on a photon level, without a film model, which allows for comparisons with reference codes like MCNP. We compare results of three Monte Carlo codes (McRay, Sindbad and Moderato) as well as an analytical first-order scattering code (VXI), and confront them with results obtained with MCNP. The comparison with an analytical scatter model provides insights into the application domain where this kind of approach can successfully replace Monte Carlo calculations.
The continuum fusion theory of signal detection applied to a bi-modal fusion problem
NASA Astrophysics Data System (ADS)
Schaum, A.
2011-05-01
A new formalism has been developed that produces detection algorithms for model-based problems in which one or more parameter values are unknown. Continuum Fusion can be used to generate different flavors of algorithm for any composite hypothesis testing problem. The methodology is defined by a fusion logic that can be translated into max/min conditions. Here it is applied to a simple sensor fusion model, but one for which the generalized likelihood ratio (GLR) test is intractable. By contrast, a fusion-based response to the same problem can be devised that is solvable in closed form and represents a good approximation to the GLR test.
Easy way to determine quantitative spatial resolution distribution for a general inverse problem
NASA Astrophysics Data System (ADS)
An, M.; Feng, M.
2013-12-01
Computing the spatial resolution of a solution is nontrivial and often more difficult than solving the inverse problem itself. Most geophysical studies, except for tomographic studies, neglect the calculation of a practical spatial resolution. In seismic tomography studies, a qualitative resolution length can be indicated via visual inspection of the restoration of a synthetic structure (e.g., checkerboard tests). An effective strategy for obtaining a quantitative resolution length is to calculate Backus-Gilbert resolution kernels (also referred to as a resolution matrix) by matrix operations. However, not all resolution matrices can provide resolution length information, and the computation of the resolution matrix is often difficult for very large inverse problems. A new class of resolution matrices, called statistical resolution matrices (An, 2012, GJI), can be directly determined via a simple one-parameter nonlinear inversion performed on a limited number of pairs of random synthetic models and their inverse solutions. The procedure is restricted to the forward/inversion processes used in the real inverse problem and is independent of the degree of inverse skill used in the solution inversion. Spatial resolution lengths can be given directly during the inversion. Tests on 1D/2D/3D model inversions demonstrate that this simple method is valid at least for general linear inverse problems.
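The one-parameter idea in this abstract can be sketched in a few lines: generate random synthetic models, push each through the same forward/inversion machinery used in the real problem, and fit a single smoothing width that best maps inputs to recovered outputs. The sketch below is illustrative only; `forward_and_invert` is a hypothetical stand-in for the user's own forward/inversion pipeline, and the Gaussian kernel is an assumption.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def resolution_length(forward_and_invert, n, n_pairs=20):
    """Fit a single Gaussian smoothing width L such that G(L) @ m_random
    best matches the inverse solution recovered from m_random."""
    x = np.arange(n, dtype=float)
    pairs = [(m, forward_and_invert(m))
             for m in (np.random.randn(n) for _ in range(n_pairs))]

    def misfit(L):
        G = np.exp(-0.5 * ((x[:, None] - x[None, :]) / L) ** 2)
        G /= G.sum(axis=1, keepdims=True)   # row-normalised smoother
        return sum(np.sum((G @ m - m_rec) ** 2) for m, m_rec in pairs)

    res = minimize_scalar(misfit, bounds=(0.1, n / 2.0), method='bounded')
    return res.x                            # resolution length in grid units
```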
Learned navigation in unknown terrains: A retraction method
NASA Technical Reports Server (NTRS)
Rao, Nageswara S. V.; Stoltzfus, N.; Iyengar, S. Sitharama
1989-01-01
The problem of learned navigation of a circular robot R, of radius δ ≥ 0, through a terrain whose model is not a priori known is considered. Two-dimensional finite-sized terrains populated by an unknown (but finite) number of simple polygonal obstacles are considered. The number and locations of the vertices of each obstacle are unknown to R. R is equipped with a sensor system that detects all vertices and edges that are visible from its present location. In this context two problems are covered. In the visit problem, the robot is required to visit a sequence of destination points, and in the terrain model acquisition problem, the robot is required to acquire the complete model of the terrain. An algorithmic framework is presented for solving these two problems using a retraction of the free space onto the Voronoi diagram of the terrain. Algorithms are then presented to solve the visit problem and the terrain model acquisition problem.
ERIC Educational Resources Information Center
Toumasis, Charalampos
2004-01-01
Emphasis on problem solving and mathematical modeling has gained considerable attention in the last few years. Connecting mathematics to other subjects and to the real world outside the classroom has received increased attention in mathematics programs. This article describes an application of simple differential equations in the field of…
Saa, Pedro A.; Nielsen, Lars K.
2016-01-01
Motivation: Computation of steady-state flux solutions in large metabolic models is routinely performed using flux balance analysis based on a simple LP (Linear Programming) formulation. A minimal requirement for thermodynamic feasibility of the flux solution is the absence of internal loops, which are enforced using ‘loopless constraints’. The resulting loopless flux problem is a substantially harder MILP (Mixed Integer Linear Programming) problem, which is computationally expensive for large metabolic models. Results: We developed a pre-processing algorithm that significantly reduces the size of the original loopless problem into an easier and equivalent MILP problem. The pre-processing step employs a fast matrix sparsification algorithm—Fast-SNP (Fast sparse null-space pursuit)—inspired by recent results on sparse null-space pursuit (SNP). By finding a reduced feasible ‘loop-law’ matrix subject to known directionalities, Fast-SNP considerably improves the computational efficiency in several metabolic models running different loopless optimization problems. Furthermore, analysis of the topology encoded in the reduced loop matrix enabled identification of key directional constraints for the potential permanent elimination of infeasible loops in the underlying model. Overall, Fast-SNP is an effective and simple algorithm for efficient formulation of loop-law constraints, making loopless flux optimization feasible and numerically tractable at large scale. Availability and Implementation: Source code for MATLAB including examples is freely available for download at http://www.aibn.uq.edu.au/cssb-resources under Software. Optimization uses Gurobi, CPLEX or GLPK (the latter is included with the algorithm). Contact: lars.nielsen@uq.edu.au Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27559155
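For readers unfamiliar with the loop-law matrix, the underlying object is easy to construct for small models: internal thermodynamic loops are null-space vectors of the stoichiometric matrix restricted to internal (non-exchange) reactions, and Fast-SNP's contribution is finding a sparse, reduced basis for this space. A minimal illustration (not the Fast-SNP algorithm itself) using a plain SVD null space:

```python
import numpy as np

def internal_loop_basis(S, internal, tol=1e-10):
    """Null-space basis of the stoichiometric matrix restricted to internal
    reactions; nonzero columns correspond to candidate thermodynamic loops."""
    S_int = S[:, internal]
    _, s, Vt = np.linalg.svd(S_int)
    rank = int((s > tol).sum())
    return Vt[rank:].T                  # each column is an internal cycle

# Toy example: 3 metabolites, 4 reactions; reactions 0-2 form a cycle
S = np.array([[ 1, -1,  0, 1],
              [-1,  0,  1, 0],
              [ 0,  1, -1, 0]])
print(internal_loop_basis(S, internal=[0, 1, 2]))   # ~ (1, 1, 1) direction
```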
Analysis of aircraft longitudinal handling qualities
NASA Technical Reports Server (NTRS)
Hess, R. A.
1981-01-01
The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot induced oscillations (PIO) is formulated. Finally, a model-based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.
An analytical approach for predicting pilot induced oscillations
NASA Technical Reports Server (NTRS)
Hess, R. A.
1981-01-01
The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot induced oscillations (PIO) is formulated. Finally, a model-based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.
Regularization techniques for backward--in--time evolutionary PDE problems
NASA Astrophysics Data System (ADS)
Gustafsson, Jonathan; Protas, Bartosz
2007-11-01
Backward-in-time evolutionary PDE problems have applications in the recently proposed retrograde data assimilation. We consider the terminal value problem for the Kuramoto-Sivashinsky equation (KSE) in a 1D periodic domain as our model system. The KSE, proposed as a model for interfacial and combustion phenomena, is also often adopted as a toy model for hydrodynamic turbulence because of its multiscale and chaotic dynamics. Backward-in-time problems are typical examples of ill-posed problems, in which disturbances are amplified exponentially during the backward march. Regularization is required to solve such problems efficiently, and we consider approaches in which the original ill-posed problem is approximated with a less ill-posed problem obtained by adding a regularization term to the original equation. While such techniques are relatively well understood for linear problems, they are less understood in the present nonlinear setting. We consider regularization terms with fixed magnitudes and also explore a novel approach in which these magnitudes are adapted dynamically using simple concepts from control theory.
Intelligent Control of Flexible-Joint Robotic Manipulators
NASA Technical Reports Server (NTRS)
Colbaugh, R.; Gallegos, G.
1997-01-01
This paper considers the trajectory tracking problem for uncertain rigid-link, flexible-joint manipulators, and presents a new intelligent controller as a solution to this problem. The proposed control strategy is simple and computationally efficient, requires little information concerning either the manipulator or actuator/transmission models, and ensures uniform boundedness of all signals and arbitrarily accurate task-space trajectory tracking.
A note on the modelling of circular smallholder migration.
Bigsten, A
1988-01-01
"It is argued that circular migration [in Africa] should be seen as an optimization problem, where the household allocates its labour resources across activities, including work which requires migration, so as to maximize the joint family utility function. The migration problem is illustrated in a simple diagram, which makes it possible to analyse economic aspects of migration." excerpt
Solving quantum optimal control problems using Clebsch variables and Lin constraints
NASA Astrophysics Data System (ADS)
Delgado-Téllez, M.; Ibort, A.; Rodríguez de la Peña, T.
2018-01-01
Clebsch variables (and Lin constraints) are applied to the study of a class of optimal control problems for affine-controlled quantum systems. The optimal control problem will be modelled with controls defined on an auxiliary space where the dynamical group of the system acts freely. The reciprocity between the two theories (the classical theory defined by the objective functional and the quantum system) is established by using a suitable version of Lagrange's multipliers theorem and a geometrical interpretation of the constraints of the system as defining a subspace of horizontal curves in an associated bundle. It is shown how the solutions of the variational problem defined by the objective functional determine solutions of the quantum problem. Then a new way of obtaining explicit solutions for a family of optimal control problems for affine-controlled quantum systems (finite or infinite dimensional) is obtained. One of its main advantages is that the use of Clebsch variables allows such solutions to be computed from solutions of invariant problems that can often be computed explicitly. This procedure can be presented as an algorithm that can be applied to a large class of systems. Finally, some simple examples, spin control, a simple quantum Hamiltonian with an ‘Elroy beanie’ type classical model and a controlled one-dimensional quantum harmonic oscillator, illustrating the main features of the theory, will be discussed.
NASA Astrophysics Data System (ADS)
Kammerdiner, Alla; Xanthopoulos, Petros; Pardalos, Panos M.
2007-11-01
In this chapter a potential problem with applying Granger causality based on simple vector autoregressive (VAR) modeling to EEG data is investigated. Although some initial studies tested whether the data support the stationarity assumption of VAR, the stability of the estimated model has rarely (if ever) been verified. In fact, in cases where the stability condition is violated the process may exhibit random-walk-like behavior or even be explosive. The problem is illustrated by an example.
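The stability check argued for here is simple to perform once the VAR coefficients are estimated: all eigenvalues of the companion matrix must lie strictly inside the unit circle. A minimal sketch with hypothetical coefficients:

```python
import numpy as np

def var_is_stable(coef_matrices):
    """Stability condition of a VAR(p) model given lag matrices [A1..Ap]:
    all eigenvalues of the companion matrix must have modulus < 1."""
    k, p = coef_matrices[0].shape[0], len(coef_matrices)
    companion = np.zeros((k * p, k * p))
    companion[:k, :] = np.hstack(coef_matrices)
    companion[k:, :-k] = np.eye(k * (p - 1))    # shift block
    rho = np.abs(np.linalg.eigvals(companion)).max()
    return rho < 1.0, rho

# A near-unit-root VAR(1) that turns out to be (slightly) explosive
A1 = np.array([[0.99, 0.02],
               [0.01, 0.99]])
print(var_is_stable([A1]))   # (False, ~1.004): random-walk-like/explosive
```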
Applying Graph Theory to Problems in Air Traffic Management
NASA Technical Reports Server (NTRS)
Farrahi, Amir Hossein; Goldberg, Alan; Bagasol, Leonard Neil; Jung, Jaewoo
2017-01-01
Graph theory is used to investigate three different problems arising in air traffic management. First, using a polynomial reduction from a graph partitioning problem, it is shown that both the airspace sectorization problem and its incremental counterpart, the sector combination problem, are NP-hard, in general, under several simple workload models. Second, using a polynomial time reduction from maximum independent set in graphs, it is shown that for any fixed ε > 0, the problem of finding a solution to the minimum delay scheduling problem in traffic flow management that is guaranteed to be within n^(1-ε) of the optimal, where n is the number of aircraft in the problem instance, is NP-hard. Finally, a problem arising in precision arrival scheduling is formulated and solved using graph reachability. These results demonstrate that graph theory provides a powerful framework for modeling, reasoning about, and devising algorithmic solutions to diverse problems arising in air traffic management.
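The first two results are worst-case hardness statements, while the third rests on ordinary graph reachability, which is linear-time. A generic sketch (the paper's actual graph construction is not reproduced here; the adjacency structure below is hypothetical):

```python
from collections import deque

def reachable(adj, source):
    """Breadth-first reachability: the set of nodes reachable from `source`
    in a directed graph given as an adjacency dict."""
    seen, queue = {source}, deque([source])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

# Toy precedence graph: can aircraft state 'a' lead to scheduled slot 'd'?
adj = {'a': ['b'], 'b': ['c', 'd'], 'c': [], 'd': []}
print('d' in reachable(adj, 'a'))   # True
```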
The Free Energy in the Derrida-Retaux Recursive Model
NASA Astrophysics Data System (ADS)
Hu, Yueyun; Shi, Zhan
2018-05-01
We are interested in a simple max-type recursive model studied by Derrida and Retaux (J Stat Phys 156:268-290, 2014) in the context of a physics problem, and find a wide range for the exponent in the free energy in the nearly supercritical regime.
Nick-free formation of reciprocal heteroduplexes: a simple solution to the topological problem.
Wilson, J H
1979-01-01
Because the individual strands of DNA are intertwined, formation of heteroduplex structures between duplexes--as in presumed recombination intermediates--presents a topological puzzle, known as the winding problem. Previous approaches to this problem have assumed that single-strand breaks are required to permit formation of fully coiled heteroduplexes. This paper describes a simple, nick-free solution to the winding problem that satisfies all topological constraints. Homologous duplexes associated by their minor-groove surfaces can switch strand pairing to form reciprocal heteroduplexes that coil together into a compact, four-stranded helix throughout the region of pairing. Model building shows that this fused heteroduplex structure is plausible, being composed entirely of right-handed primary helices with Watson-Crick base pairing throughout. Its simplicity of formation, structural symmetry, and high degree of specificity are suggestive of a natural mechanism for alignment by base pairing between intact homologous duplexes. Implications for genetic recombination are discussed. PMID:291028
Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.
2014-12-01
Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that the method of evaluating the geometric mean suffers from a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used, in which multiple MCMC runs are conducted with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric mean method. This is also demonstrated for a case of groundwater modeling with consideration of four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general, and can be used for a wide range of environmental problems for model uncertainty quantification.
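A minimal sketch of the thermodynamic (power-posterior) idea for a toy model, assuming a one-parameter normal likelihood with known variance and a wide normal prior; the temperature ladder and Metropolis settings are illustrative, not the authors':

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(0.5, 1.0, size=50)             # synthetic data, sigma = 1 known

def log_like(theta):
    return -0.5 * np.sum((y - theta) ** 2) - 0.5 * y.size * np.log(2 * np.pi)

def log_prior(theta):                          # N(0, 10^2) prior
    return -0.5 * (theta / 10.0) ** 2 - np.log(10.0 * np.sqrt(2 * np.pi))

def mean_loglike_at(t, n_steps=20000, step=0.5):
    """Metropolis sampling of the heated posterior p_t ~ L^t * prior;
    returns the post-burn-in average of log L, i.e. E_t[log L]."""
    theta = 0.0
    lp = t * log_like(theta) + log_prior(theta)
    trace = np.empty(n_steps)
    for i in range(n_steps):
        prop = theta + step * rng.normal()
        lp_prop = t * log_like(prop) + log_prior(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        trace[i] = log_like(theta)
    return trace[n_steps // 2:].mean()

# log marginal likelihood = integral over t in [0, 1] of E_t[log L]
ts = np.linspace(0.0, 1.0, 11) ** 5            # cluster temperatures near 0
means = np.array([mean_loglike_at(t) for t in ts])
log_ml = np.sum(np.diff(ts) * (means[:-1] + means[1:]) / 2)   # trapezoid rule
print('log marginal likelihood ~', log_ml)
```

When the heating coefficient t is zero the chain explores the prior, and when t is one it is the conventional posterior run, matching the description in the abstract.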
Born-Oppenheimer approximation for a singular system
NASA Astrophysics Data System (ADS)
Akbas, Haci; Turgut, O. Teoman
2018-01-01
We discuss a simple singular system in one dimension, two heavy particles interacting with a light particle via an attractive contact interaction and not interacting among themselves. It is natural to apply the Born-Oppenheimer approximation to this problem. We present a detailed discussion of this approach; the advantage of this simple model is that one can estimate the error terms self-consistently. Moreover, a Fock space approach to this problem is presented where an expansion can be proposed to get higher order corrections. A slight modification of the same problem in which the light particle is relativistic is discussed in a later section by neglecting pair creation processes. Here, the second quantized description is more challenging, but with some care, one can recover the first order expression exactly.
Berger, Lawrence M; Bruch, Sarah K; Johnson, Elizabeth I; James, Sigrid; Rubin, David
2009-01-01
This study used data on 2,453 children aged 4-17 from the National Survey of Child and Adolescent Well-Being and 5 analytic methods that adjust for selection factors to estimate the impact of out-of-home placement on children's cognitive skills and behavior problems. Methods included ordinary least squares (OLS) regressions and residualized change, simple change, difference-in-difference, and fixed effects models. Models were estimated using the full sample and a matched sample generated by propensity scoring. Although results from the unmatched OLS and residualized change models suggested that out-of-home placement is associated with increased child behavior problems, estimates from models that more rigorously adjust for selection bias indicated that placement has little effect on children's cognitive skills or behavior problems.
When push comes to shove: Exclusion processes with nonlocal consequences
NASA Astrophysics Data System (ADS)
Almet, Axel A.; Pan, Michael; Hughes, Barry D.; Landman, Kerry A.
2015-11-01
Stochastic agent-based models are useful for modelling collective movement of biological cells. Lattice-based random walk models of interacting agents where each site can be occupied by at most one agent are called simple exclusion processes. An alternative motility mechanism to simple exclusion is formulated, in which agents are granted more freedom to move under the compromise that interactions are no longer necessarily local. This mechanism is termed shoving. A nonlinear diffusion equation is derived for a single population of shoving agents using mean-field continuum approximations. A continuum model is also derived for a multispecies problem with interacting subpopulations, which either obey the shoving rules or the simple exclusion rules. Numerical solutions of the derived partial differential equations compare well with averaged simulation results for both the single species and multispecies processes in two dimensions, while some issues arise in one dimension for the multispecies case.
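For context, the baseline simple exclusion process that the shoving mechanism relaxes can be simulated in a few lines; the lattice size, initial condition, and sweep count below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

def exclusion_sweep(occ):
    """One sweep of a 1-D simple exclusion process: each agent, in random
    order, attempts a hop left or right; the move is aborted if the target
    site is occupied (at most one agent per site)."""
    L = len(occ)
    for i in rng.permutation(np.flatnonzero(occ)):
        j = (i + rng.choice([-1, 1])) % L      # periodic boundary
        if not occ[j]:
            occ[i], occ[j] = False, True
    return occ

occ = np.zeros(100, dtype=bool)
occ[40:60] = True                              # initial block of 20 agents
for _ in range(500):
    occ = exclusion_sweep(occ)
print(occ.sum(), 'agents after 500 sweeps')    # agent number is conserved
```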
The `Miracle' of Applicability? The Curious Case of the Simple Harmonic Oscillator
NASA Astrophysics Data System (ADS)
Bangu, Sorin; Moir, Robert H. C.
2018-05-01
The paper discusses to what extent the conceptual issues involved in solving the simple harmonic oscillator model fit Wigner's famous point that the applicability of mathematics borders on the miraculous. We argue that although there is ultimately nothing mysterious here, as is to be expected, a careful demonstration that this is so involves unexpected difficulties. Consequently, through the lens of this simple case we derive some insight into what is responsible for the appearance of mystery in more sophisticated examples of the Wigner problem.
Evaluation of Genetic Algorithm Concepts Using Model Problems. Part 2; Multi-Objective Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of simple model problems. Several new features including a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto-optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all optimization problems attempted. The binning algorithm generally provides Pareto front quality enhancements and moderate convergence efficiency improvements for most of the model problems. The gene-space transformation procedure provides a large convergence efficiency enhancement for problems with non-convoluted Pareto fronts and a degradation in efficiency for problems with convoluted Pareto fronts. The most difficult problems (multi-mode search spaces with a large number of genes and convoluted Pareto fronts) require a large number of function evaluations for GA convergence, but always converge.
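The notion of Pareto optimality that the GA targets is easy to state in code: a candidate survives only if no other candidate is at least as good in every objective and strictly better in one. A small sketch for minimization (not the paper's binning or gene-space transformation machinery):

```python
import numpy as np

def pareto_front(F):
    """Indices of the non-dominated rows of F, where F[i, j] is objective j
    of candidate i and all objectives are minimized."""
    n = F.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated.any():
            keep[i] = False
    return np.flatnonzero(keep)

F = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0], [3.0, 3.0]])
print(pareto_front(F))   # [0 1 2]; the point [3, 3] is dominated by [2, 2]
```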
Statistical mechanics of simple models of protein folding and design.
Pande, V S; Grosberg, A Y; Tanaka, T
1997-01-01
It is now believed that the primary equilibrium aspects of simple models of protein folding are understood theoretically. However, current theories often resort to rather heavy mathematics to overcome some technical difficulties inherent in the problem or start from a phenomenological model. To this end, we take a new approach in this pedagogical review of the statistical mechanics of protein folding. The benefit of our approach is a drastic mathematical simplification of the theory, without resort to any new approximations or phenomenological prescriptions. Indeed, the results we obtain agree precisely with previous calculations. Because of this simplification, we are able to present here a thorough and self-contained treatment of the problem. Topics discussed include the statistical mechanics of the random energy model (REM), tests of the validity of REM as a model for heteropolymer freezing, the freezing transition of random sequences, the phase diagram of designed ("minimally frustrated") sequences, and the degree to which errors in the interactions employed in simulations of either folding or design can still lead to correct folding behavior. PMID:9414231
Data and methodological problems in establishing state gasoline-conservation targets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greene, D.L.; Walton, G.H.
The Emergency Energy Conservation Act of 1979 gives the President the authority to set gasoline-conservation targets for states in the event of a supply shortage. This paper examines data and methodological problems associated with setting state gasoline-conservation targets. The target-setting method currently used is examined and found to have some flaws. Ways of correcting these deficiencies through the use of Box-Jenkins time-series analysis are investigated. A successful estimation of Box-Jenkins models for all states included the estimation of the magnitude of the supply shortages of 1979 in each state and a preliminary estimation of state short-run price elasticities, which were found to vary about a median value of -0.16. The time-series models identified were very simple in structure and lent support to the simple consumption growth model assumed by the current target method. The authors conclude that the flaws in the current method can be remedied either by replacing the current procedures with time-series models or by using the models in conjunction with minor modifications of the current method.
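The shortfall-estimation logic described (fit a simple time-series model to pre-shortage consumption, then compare forecasts with observed consumption) can be illustrated with an AR(1)-plus-constant model; the data below are invented and the model is far simpler than a full Box-Jenkins identification:

```python
import numpy as np

def ar1_forecast_shortfall(pre, actual_during):
    """Fit consumption with an AR(1)-plus-constant model on pre-shortage
    data, forecast through the shortage months, and estimate the shortfall
    as forecast minus observed consumption."""
    y0, y1 = pre[:-1], pre[1:]
    X = np.column_stack([np.ones_like(y0), y0])
    (c, phi), *_ = np.linalg.lstsq(X, y1, rcond=None)
    forecast, last = [], pre[-1]
    for _ in range(len(actual_during)):
        last = c + phi * last
        forecast.append(last)
    return np.asarray(forecast) - np.asarray(actual_during)

pre = np.array([100, 102, 101, 104, 106, 105, 108, 110.0])   # invented data
during = np.array([99, 97, 98.0])
print(ar1_forecast_shortfall(pre, during))    # estimated monthly shortfalls
```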
A First Step towards Variational Methods in Engineering
ERIC Educational Resources Information Center
Periago, Francisco
2003-01-01
In this paper, a didactical proposal is presented to introduce the variational methods for solving boundary value problems to engineering students. Starting from a couple of simple models arising in linear elasticity and heat diffusion, the concept of weak solution for these models is motivated and the existence, uniqueness and continuous…
Oil-Price Shocks: Beyond Standard Aggregate Demand/Aggregate Supply Analysis.
ERIC Educational Resources Information Center
Elwood, S. Kirk
2001-01-01
Explores the problems of portraying oil-price shocks using the aggregate demand/aggregate supply model. Presents a simple modification of the model that differentiates between production and absorption of goods, which enables it to better reflect the effects of oil-price shocks on open economies. (RLH)
Solving Rational Expectations Models Using Excel
ERIC Educational Resources Information Center
Strulik, Holger
2004-01-01
Simple problems of discrete-time optimal control can be solved using a standard spreadsheet software. The employed-solution method of backward iteration is intuitively understandable, does not require any programming skills, and is easy to implement so that it is suitable for classroom exercises with rational-expectations models. The author…
A note on boundary-layer pumping
NASA Astrophysics Data System (ADS)
Smith, S. H.
1981-05-01
The simple model of strong blowing across an impulsively started rotating disc is considered. The model shows features present in the two basic problems of spin-up in a circular cylinder and the flow between counter-rotating discs. The role of boundary layer pumping appears to be crucial in both situations.
Sequential Inverse Problems Bayesian Principles and the Logistic Map Example
NASA Astrophysics Data System (ADS)
Duan, Lian; Farmer, Chris L.; Moroz, Irene M.
2010-09-01
Bayesian statistics provides a general framework for solving inverse problems, but is not without interpretation and implementation problems. This paper discusses difficulties arising from the fact that forward models are always in error to some extent. Using a simple example based on the one-dimensional logistic map, we argue that, when implementation problems are minimal, the Bayesian framework is quite adequate. In this paper the Bayesian Filter is shown to be able to recover excellent state estimates in the perfect model scenario (PMS) and to distinguish the PMS from the imperfect model scenario (IMS). Through a quantitative comparison of the way in which the observations are assimilated in both the PMS and the IMS scenarios, we suggest that one can, sometimes, measure the degree of imperfection.
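A bootstrap particle filter is one concrete way to implement a Bayesian filter for the logistic map in the perfect model scenario; the parameter value, noise level, and particle count here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
a, sigma_obs = 4.0, 0.05
f = lambda x: a * x * (1.0 - x)                # logistic map forward model

x_true = [0.3]                                 # simulate truth + noisy obs
for _ in range(49):
    x_true.append(f(x_true[-1]))
obs = np.array(x_true) + sigma_obs * rng.normal(size=50)

N = 2000                                       # bootstrap particle filter
particles = rng.uniform(0.0, 1.0, N)
est = []
for yk in obs:
    w = np.exp(-0.5 * ((yk - particles) / sigma_obs) ** 2)   # likelihood
    w /= w.sum()
    est.append(np.sum(w * particles))                        # posterior mean
    particles = f(particles[rng.choice(N, N, p=w)])          # resample + move
    # small jitter avoids degeneracy under the deterministic map
    particles = np.clip(particles + 1e-3 * rng.normal(size=N), 0.0, 1.0)
print('final estimate vs truth:', est[-1], x_true[-1])
```

Running the same filter with a perturbed map parameter (the imperfect model scenario) degrades how the observations are assimilated, which is the contrast the paper quantifies.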
Thermo-elasto-viscoplastic analysis of problems in extension and shear
NASA Technical Reports Server (NTRS)
Riff, R.; Simitses, G. J.
1987-01-01
The problems of extension and shear behavior of structural elements made of carbon steel and subjected to large thermomechanical loads are investigated. The analysis is based on nonlinear geometric and constitutive relations, and is expressed in a rate form. The material constitutive equations are capable of reproducing all nonisothermal, elasto-viscoplastic characteristics. The results of the test problems show that: (1) the formulation can accommodate very large strains and rotations; (2) the model incorporates the simplification associated with rate-insensitive elastic response without losing the ability to model a rate-temperature dependent yield strength and plasticity; and (3) the formulation does not display oscillatory behavior in the stresses for the simple shear problem.
A simple model for the falling cat problem
NASA Astrophysics Data System (ADS)
Essén, Hanno; Nordmark, Arne
2018-05-01
We introduce a specific four-particle, four degree-of-freedom model and calculate the rotation that can be achieved by purely internal torques and forces, keeping the total angular momentum zero. We argue that the model qualitatively explains much of the ability of a cat to land on its feet even though released from rest upside down.
Selected bibliography on the modeling and control of plant processes
NASA Technical Reports Server (NTRS)
Viswanathan, M. M.; Julich, P. M.
1972-01-01
A bibliography of information pertinent to the problem of simulating plants is presented. Detailed simulations of constituent pieces are necessary to justify simple models which may be used for analysis. Thus, this area of study is necessary to support the Earth Resources Program. The report sums up the present state of the problem of simulating vegetation. This area holds the hope of major benefits to mankind through understanding the ecology of a region and in improving agricultural yield.
NASA Astrophysics Data System (ADS)
Skinner, Brian
2016-09-01
Same-sex sexual behaviour is ubiquitous in the animal kingdom, but its adaptive origins remain a prominent puzzle. Here, I suggest the possibility that same-sex sexual behaviour arises as a consequence of the competition between an evolutionary drive for a wide diversity in traits, which improves the adaptability of a population, and a drive for sexual dichotomization of traits, which promotes opposite-sex attraction and increases the rate of reproduction. This trade-off is explored via a simple mathematical `toy model'. The model exhibits a number of interesting features and suggests a simple mathematical form for describing the sexual orientation continuum.
Greenhouse effect: temperature of a metal sphere surrounded by a glass shell and heated by sunlight
NASA Astrophysics Data System (ADS)
Nguyen, Phuc H.; Matzner, Richard A.
2012-01-01
We study the greenhouse effect on a model satellite consisting of a tungsten sphere surrounded by a thin spherical, concentric glass shell, with a small gap between the sphere and the shell. The system sits in vacuum and is heated by sunlight incident along the z-axis. This development is a generalization of the simple treatment of the greenhouse effect given by Kittel and Kroemer (1980 Thermal Physics (San Francisco: Freeman)) and can serve as a very simple model demonstrating the much more complex Earth greenhouse effect. Solution of the model problem provides an excellent pedagogical tool at the Junior/Senior undergraduate level.
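The steady-state version of this model reduces to two radiation balances and reproduces the classic result that a single IR-opaque shell raises the inner temperature by a factor of 2^(1/4). A minimal sketch under idealized assumptions (shell perfectly transparent to sunlight, black in the infrared, gap negligible so both surfaces share one area):

```python
import numpy as np

sigma = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0                # solar constant at 1 AU, W m^-2
r = 0.1                   # sphere radius, m (hypothetical)

P_abs = S * np.pi * r**2                   # absorbed sunlight (absorptivity ~ 1)

# Bare sphere: P_abs = sigma * 4 pi r^2 * T^4
T_bare = (P_abs / (sigma * 4 * np.pi * r**2)) ** 0.25

# With a thin IR-opaque shell of nearly the same radius:
#   shell balance:  P_abs = sigma * A * T_shell^4           (radiates outward)
#   sphere balance: P_abs + sigma * A * T_shell^4 = sigma * A * T_sphere^4
T_shell = T_bare
T_sphere = 2 ** 0.25 * T_bare
print(T_bare, T_shell, T_sphere)           # ~278 K, ~278 K, ~331 K
```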
Finite element modeling and analysis of tires
NASA Technical Reports Server (NTRS)
Noor, A. K.; Andersen, C. M.
1983-01-01
Predicting the response of tires under various loading conditions using finite element technology is addressed. Some of the recent advances in finite element technology which have high potential for application to tire modeling problems are reviewed. The analysis and modeling needs for tires are identified. Reduction methods for large-scale nonlinear analysis, with particular emphasis on treatment of combined loads, displacement-dependent and nonconservative loadings; development of simple and efficient mixed finite element models for shell analysis, identification of equivalent mixed and purely displacement models, and determination of the advantages of using mixed models; and effective computational models for large-rotation nonlinear problems, based on a total Lagrangian description of the deformation are included.
Xiaoqiu Zuo; Urs Buehlmann; R. Edward Thomas
2004-01-01
Solving the least-cost lumber grade mix problem allows dimension mills to minimize the cost of dimension part production. This problem, due to its economic importance, has attracted much attention from researchers and industry in the past. Most solutions used linear programming models and assumed that a simple linear relationship existed between lumber grade mix and...
Spectral methods for partial differential equations
NASA Technical Reports Server (NTRS)
Hussaini, M. Y.; Streett, C. L.; Zang, T. A.
1983-01-01
Origins of spectral methods, especially their relation to the Method of Weighted Residuals, are surveyed. Basic Fourier, Chebyshev, and Legendre spectral concepts are reviewed, and demonstrated through application to simple model problems. Both collocation and tau methods are considered. These techniques are then applied to a number of difficult, nonlinear problems of hyperbolic, parabolic, elliptic, and mixed type. Fluid dynamical applications are emphasized.
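As a flavor of the collocation approach applied to a simple model problem, the sketch below builds the standard Chebyshev differentiation matrix and solves u'' = -pi^2 sin(pi x) with homogeneous Dirichlet data; the construction follows the well-known 'cheb' recipe and is not specific to this survey:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and Gauss-Lobatto points x."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))        # negative row-sum trick for diagonal
    return D, x

# Model problem: u'' = -pi^2 sin(pi x), u(-1) = u(1) = 0 (collocation)
N = 24
D, x = cheb(N)
D2 = D @ D
u = np.zeros(N + 1)
u[1:-1] = np.linalg.solve(D2[1:-1, 1:-1], -np.pi**2 * np.sin(np.pi * x[1:-1]))
print('max error:', np.abs(u - np.sin(np.pi * x)).max())   # spectral accuracy
```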
ERIC Educational Resources Information Center
Houston, Donald
1998-01-01
Discusses methodology to examine the problem of spatial mismatch of jobs, showing how the simple accessibility measures used by Daniel Immergluck (1998) are poor reflections of the availability of jobs to an individual and explaining why a gravity model is a favorable alternative. Also discusses the unsuitability of aggregate data for testing the…
Recent applications of spectral methods in fluid dynamics
NASA Technical Reports Server (NTRS)
Zang, T. A.; Hussaini, M. Y.
1985-01-01
Origins of spectral methods, especially their relation to the method of weighted residuals, are surveyed. Basic Fourier and Chebyshev spectral concepts are reviewed and demonstrated through application to simple model problems. Both collocation and tau methods are considered. These techniques are then applied to a number of difficult, nonlinear problems of hyperbolic, parabolic, elliptic, and mixed type. Fluid dynamical applications are emphasized.
Exacerbating the Cosmological Constant Problem with Interacting Dark Energy Models.
Marsh, M C David
2017-01-06
Future cosmological surveys will probe the expansion history of the Universe and constrain phenomenological models of dark energy. Such models do not address the fine-tuning problem of the vacuum energy, i.e., the cosmological constant problem (CCP), but can make it spectacularly worse. We show that this is the case for "interacting dark energy" models in which the masses of the dark matter states depend on the dark energy sector. If realized in nature, these models have far-reaching implications for proposed solutions to the CCP that require the number of vacua to exceed the fine-tuning of the vacuum energy density. We show that current estimates of the number of flux vacua in string theory, N_{vac}∼O(10^{272 000}), are far too small to realize certain simple models of interacting dark energy and solve the cosmological constant problem anthropically. These models admit distinctive observational signatures that can be targeted by future gamma-ray observatories, hence making it possible to observationally rule out the anthropic solution to the cosmological constant problem in theories with a finite number of vacua.
A bottom-up approach to the strong CP problem
NASA Astrophysics Data System (ADS)
Diaz-Cruz, J. L.; Hollik, W. G.; Saldana-Salazar, U. J.
2018-05-01
The strong CP problem is one of many puzzles in the theoretical description of elementary particle physics that still lacks an explanation. While top-down solutions to that problem usually comprise new symmetries or fields or both, we want to present a rather bottom-up perspective. The main problem seems to be how to achieve small CP violation in the strong interactions despite the large CP violation in weak interactions. In this paper, we show that with minimal assumptions on the structure of mass (Yukawa) matrices, they do not contribute to the strong CP problem and thus we can provide a pathway to a solution of the strong CP problem within the structures of the Standard Model and no extension at the electroweak scale is needed. However, to address the flavor puzzle, models based on minimal SU(3) flavor groups leading to the proposed flavor matrices are favored. Though we refrain from an explicit UV completion of the Standard Model, we provide a simple requirement for such models not to show a strong CP problem by construction.
A gunner model for an AAA tracking task with interrupted observations
NASA Technical Reports Server (NTRS)
Yu, C. F.; Wei, K. C.; Vikmanis, M.
1982-01-01
The problem of modeling a trained human operator's tracking performance in an anti-aircraft system under various display blanking conditions is discussed. The input to the gunner is the observable tracking error subjected to repeated interruptions (blanking). A simple and effective gunner model was developed. The effect of blanking on the gunner's tracking performance is approached via modeling the observer and controller gains.
Finding the strong CP problem at the LHC
NASA Astrophysics Data System (ADS)
D'Agnolo, Raffaele Tito; Hook, Anson
2016-11-01
We show that a class of parity based solutions to the strong CP problem predicts new colored particles with mass at the TeV scale, due to constraints from Planck suppressed operators. The new particles are copies of the Standard Model quarks and leptons. The new quarks can be produced at the LHC and are either collider stable or decay into Standard Model quarks through a Higgs, a W or a Z boson. We discuss some simple but generic predictions of the models for the LHC and find signatures not related to the traditional solutions of the hierarchy problem. We thus provide alternative motivation for new physics searches at the weak scale. We also briefly discuss the cosmological history of these models and how to obtain successful baryogenesis.
A simple homogeneous model for regular and irregular metallic wire media samples
NASA Astrophysics Data System (ADS)
Kosulnikov, S. Y.; Mirmoosa, M. S.; Simovski, C. R.
2018-02-01
To simplify the solution of electromagnetic problems with wire media samples, it is reasonable to treat them as samples of a homogeneous material without spatial dispersion. Accounting for spatial dispersion implies additional boundary conditions and makes the solution of boundary problems difficult, especially if the sample is not an infinitely extended layer. Moreover, for a novel type of wire media - arrays of randomly tilted wires - a spatially dispersive model has not been developed. Here, we introduce a simplistic heuristic model of wire media samples shaped as bricks. Our model covers WM of both regularly and irregularly stretched wires.
Generative Models in Deep Learning: Constraints for Galaxy Evolution
NASA Astrophysics Data System (ADS)
Turp, Maximilian Dennis; Schawinski, Kevin; Zhang, Ce; Weigel, Anna K.
2018-01-01
New techniques are essential to make advances in the field of galaxy evolution. Recent developments in the field of artificial intelligence and machine learning have proven that these tools can be applied to problems far more complex than simple image recognition. We use these purely data-driven approaches to investigate the process of star formation quenching. We show that Variational Autoencoders provide a powerful method to forward model the process of galaxy quenching. Our results imply that simple changes in specific star formation rate and bulge-to-disk ratio cannot fully describe the properties of the quenched population.
Goal programming for land use planning.
Enoch F. Bell
1976-01-01
A simple transformation of the linear programing model used in land use planning to a goal programing model allows the multiple goals implied by multiple use management to be explicitly recognized. This report outlines the procedure for accomplishing the transformation and discusses problems with use of goal programing. Of particular concern are the expert opinions...
A Computer Model of Simple Forms of Learning.
ERIC Educational Resources Information Center
Jones, Thomas L.
A basic unsolved problem in science is that of understanding learning, the process by which people and machines use their experience in a situation to guide future action in similar situations. The ideas of Piaget, Pavlov, Hull, and other learning theorists, as well as previous heuristic programing models of human intelligence, stimulated this…
NASA Astrophysics Data System (ADS)
Beh, Kian Lim
2000-10-01
This study was designed to explore the effect of a typical traditional method of instruction in physics on the formation of useful mental models among college students for problem-solving using simple electric circuits as a context. The study was also aimed at providing a comprehensive description of the understanding regarding electric circuits among novices and experts. In order to achieve these objectives, the following two research approaches were employed: (1) A student survey to collect data from 268 physics students; and (2) An interview protocol to collect data from 23 physics students and 24 experts (including 10 electrical engineering graduates, 4 practicing electrical engineers, 2 secondary school physics teachers, 8 physics lecturers, and 4 electrical engineers). Among the major findings are: (1) Most students do not possess accurate models of simple electric circuits as presented implicitly in physics textbooks; (2) Most students display good procedural understanding for solving simple problems concerning electric circuits but have no in-depth conceptual understanding in terms of practical knowledge of current, voltage, resistance, and circuit connections; (3) Most students encounter difficulty in discerning parallel connections that are drawn in a non-conventional format; (4) After a year of college physics, students show significant improvement in areas, including practical knowledge of current and voltage, ability to compute effective resistance and capacitance, ability to identify circuit connections, and ability to solve problems; however, no significant improvement was found in practical knowledge of resistance and ability to connect circuits; and (5) The differences and similarities between the physics students and the experts include: (a) Novices perceive parallel circuits more in terms of 'branch', 'current', and 'resistors with the same resistance' while experts perceive parallel circuits more in terms of 'node', 'voltage', and 'less resistance'; and (b) Both novices and experts use phrases such as 'side-by-side' and 'one on top of the other' in describing parallel circuits, which emphasize the geometry of the standard circuit drawing when describing parallel resistors.
Taboo Search: An Approach to the Multiple Minima Problem
NASA Astrophysics Data System (ADS)
Cvijovic, Djurdje; Klinowski, Jacek
1995-02-01
Described here is a method, based on Glover's taboo search for discrete functions, of solving the multiple minima problem for continuous functions. As demonstrated by model calculations, the algorithm avoids entrapment in local minima and continues the search to give a near-optimal final solution. Unlike other methods of global optimization, this procedure is generally applicable, easy to implement, derivative-free, and conceptually simple.
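A minimal continuous taboo search sketch in the spirit described (a taboo list over coarsely discretized cells to discourage revisiting): the neighbourhood size, cell width, and tenure are arbitrary choices, and the Rastrigin test function is only for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def taboo_search(f, x0, n_iter=2000, step=0.5, cell=0.25, tenure=50):
    """Neighbourhood search with a taboo list of recently visited cells
    (a coarse discretization of the space) to discourage cycling back
    into local minima."""
    x = np.asarray(x0, dtype=float)
    best, fbest = x.copy(), f(x)
    taboo = []
    for _ in range(n_iter):
        cands = [x + step * rng.normal(size=x.size) for _ in range(20)]
        cands = [c for c in cands if tuple(np.round(c / cell)) not in taboo]
        if not cands:
            continue                          # all neighbours taboo; retry
        x = min(cands, key=f)                 # best admissible neighbour
        taboo.append(tuple(np.round(x / cell)))
        taboo = taboo[-tenure:]               # fixed taboo tenure
        if f(x) < fbest:
            best, fbest = x.copy(), f(x)
    return best, fbest

rastrigin = lambda v: 10 * v.size + np.sum(v**2 - 10 * np.cos(2 * np.pi * v))
print(taboo_search(rastrigin, [3.0, -2.0]))
```

Note that, unlike the move itself, the acceptance of a worse admissible neighbour is what lets the search escape local minima, as the abstract emphasizes.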
NASA Technical Reports Server (NTRS)
Sayood, K.; Chen, Y. C.; Wang, X.
1992-01-01
During this reporting period we have worked on three somewhat different problems. These are modeling of video traffic in packet networks, low rate video compression, and the development of a lossy + lossless image compression algorithm, which might have some application in browsing algorithms. The lossy + lossless scheme is an extension of work previously done under this grant. It provides a simple technique for incorporating browsing capability. The low rate coding scheme is also a simple variation on the standard discrete cosine transform (DCT) coding approach. In spite of its simplicity, the approach provides surprisingly high quality reconstructions. The modeling approach is borrowed from the speech recognition literature, and seems to be promising in that it provides a simple way of obtaining an idea about the second order behavior of a particular coding scheme. Details about these are presented.
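The DCT-based low-rate idea can be caricatured in a few lines: transform 8x8 blocks and keep only the largest coefficients (a crude stand-in for quantization and entropy coding). This is a toy, not the report's scheme:

```python
import numpy as np
from scipy.fft import dctn, idctn

def block_dct_code(img, block=8, keep=10):
    """Toy DCT transform coder: for each block keep only the `keep`
    largest-magnitude coefficients and reconstruct."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            C = dctn(img[i:i+block, j:j+block], norm='ortho')
            thresh = np.sort(np.abs(C).ravel())[-keep]
            C[np.abs(C) < thresh] = 0.0       # discard small coefficients
            out[i:i+block, j:j+block] = idctn(C, norm='ortho')
    return out

img = np.add.outer(np.arange(64.0), np.arange(64.0))   # smooth test image
rec = block_dct_code(img)
print('RMS reconstruction error:', np.sqrt(np.mean((img - rec) ** 2)))
```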
Castorina, P; Delsanto, P P; Guiot, C
2006-05-12
A classification in universality classes of broad categories of phenomenologies, belonging to physics and other disciplines, may be very useful for a cross fertilization among them and for the purpose of pattern recognition and interpretation of experimental data. We present here a simple scheme for the classification of nonlinear growth problems. The success of the scheme in predicting and characterizing the well known Gompertz, West, and logistic models, suggests to us the study of a hitherto unexplored class of nonlinear growth problems.
Inverse kinematics problem in robotics using neural networks
NASA Technical Reports Server (NTRS)
Choi, Benjamin B.; Lawrence, Charles
1992-01-01
In this paper, multilayer feedforward networks are applied to the robot inverse kinematics problem. The networks are trained with end-effector position and joint angles. After training, performance is measured by having the network generate joint angles for arbitrary end-effector trajectories. A 3-degree-of-freedom (DOF) spatial manipulator is used for the study. It is found that neural networks provide a simple and effective way both to model the manipulator inverse kinematics and to circumvent the problems associated with algorithmic solution methods.
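A minimal reconstruction of the experiment's shape, assuming a 3-link planar arm (the paper's manipulator is spatial) and using scikit-learn's MLPRegressor as the feedforward network; link lengths and training ranges are invented:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
L1, L2, L3 = 1.0, 0.8, 0.5                    # hypothetical link lengths

def forward(q):
    """Forward kinematics of a 3-link planar arm: joint angles -> (x, y)."""
    a1 = q[:, 0]
    a2 = a1 + q[:, 1]
    a3 = a2 + q[:, 2]
    return np.column_stack([L1*np.cos(a1) + L2*np.cos(a2) + L3*np.cos(a3),
                            L1*np.sin(a1) + L2*np.sin(a2) + L3*np.sin(a3)])

q_train = rng.uniform(0.0, np.pi / 2, size=(5000, 3))   # one solution branch
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
net.fit(forward(q_train), q_train)            # learn position -> joint angles

p_test = forward(rng.uniform(0.0, np.pi / 2, size=(200, 3)))
q_hat = net.predict(p_test)
err = np.linalg.norm(forward(q_hat) - p_test, axis=1)
print('mean end-effector position error:', err.mean())
```

Restricting training angles to one branch sidesteps the multivaluedness of inverse kinematics, which is one of the algorithmic difficulties the abstract alludes to.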
NASA Technical Reports Server (NTRS)
Childs, D. W.; Moyer, D. S.
1984-01-01
Attention is given to rotor dynamic problems that have been encountered and eliminated in the course of Space Shuttle Main Engine (SSME) development, as well as continuing, subsynchronous problems which are being encountered in the development of a 109-percent power level engine. The basic model for the SSME's High Pressure Oxygen Turbopump (HPOTP) encompasses a structural dynamic model for the rotor and housing, and component models for the liquid and gas seals, turbine clearance excitation forces, and impeller diffuser forces. Linear model results are used to examine the synchronous response and stability characteristics of the HPOTP, with attention to bearing load and stability problems associated with the second critical speed. Differences between linear and nonlinear model results are discussed and explained in terms of simple models. Simulation results indicate that while synchronous bearing loads can be reduced, subsynchronous motion is not eliminated by seal modifications.
The overconstraint of response time models: rethinking the scaling problem.
Donkin, Chris; Brown, Scott D; Heathcote, Andrew
2009-12-01
Theories of choice response time (RT) provide insight into the psychological underpinnings of simple decisions. Evidence accumulation (or sequential sampling) models are the most successful theories of choice RT. These models all have the same "scaling" property--that a subset of their parameters can be multiplied by the same amount without changing their predictions. This property means that a single parameter must be fixed to allow the estimation of the remaining parameters. In the present article, we show that the traditional solution to this problem has overconstrained these models, unnecessarily restricting their ability to account for data and making implicit--and therefore unexamined--psychological assumptions. We show that versions of these models that address the scaling problem in a minimal way can provide a better description of data than can their overconstrained counterparts, even when increased model complexity is taken into account.
Using model order tests to determine sensory inputs in a motion study
NASA Technical Reports Server (NTRS)
Repperger, D. W.; Junker, A. M.
1977-01-01
In the study of motion effects on tracking performance, a problem of interest is the determination of what sensory inputs a human uses in controlling his tracking task. In the approach presented here a simple canonical model (PID, or proportional-integral-derivative, structure) is used to model the human's input-output time series. A study of significant changes in the reduction of the output error loss functional is conducted as different permutations of parameters are considered. Since this canonical model includes parameters which are related to inputs to the human (such as the error signal, its derivatives, and its integral), the study of model order is equivalent to the study of which sensory inputs are being used by the tracker. The parameters that have the greatest effect on reducing the loss function are thereby identified. In this manner the identification procedure converts the problem of testing for model order into the problem of determining sensory inputs.
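The model-order test reduces to comparing residual loss as PID-structure regressors are added; a toy version with synthetic data (sampling rate and signals invented):

```python
import numpy as np

def loss_by_model_order(e, u, dt=0.01):
    """Fit the operator output u as a linear combination of the error signal,
    its integral, and its derivative (a PID-structure canonical model) and
    report the residual loss for growing regressor subsets. Large drops in
    loss suggest the corresponding sensory input is being used."""
    regs = {'P': e,
            'I': np.cumsum(e) * dt,
            'D': np.gradient(e, dt)}
    results = {}
    for subset in [('P',), ('P', 'I'), ('P', 'D'), ('P', 'I', 'D')]:
        X = np.column_stack([regs[k] for k in subset])
        coef, *_ = np.linalg.lstsq(X, u, rcond=None)
        results[subset] = np.sum((u - X @ coef) ** 2)
    return results

t = np.arange(0, 10, 0.01)
e = np.sin(t) + 0.1 * np.random.default_rng(5).normal(size=t.size)
u = 2.0 * e + 0.5 * np.gradient(e, 0.01)      # synthetic P + D operator
for subset, loss in loss_by_model_order(e, u).items():
    print(subset, round(loss, 2))             # loss drops sharply once D added
```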
A solution to the surface intersection problem. [Boolean functions in geometric modeling
NASA Technical Reports Server (NTRS)
Timer, H. G.
1977-01-01
An application-independent geometric model within a data base framework should support the use of Boolean operators which allow the user to construct a complex model by appropriately combining a series of simple models. The use of these operators leads to the concept of implicitly and explicitly defined surfaces. With an explicitly defined model, the surface area may be computed by simply summing the surface areas of the bounding surfaces. For an implicitly defined model, the surface area computation must deal with active and inactive regions. Because the surface intersection problem involves four unknowns and its solution is a space curve, the parametric coordinates of each surface must be determined as a function of the arc length. Various subproblems involved in the general intersection problem are discussed, and the mathematical basis for their solution is presented along with a program written in FORTRAN IV for implementation on the IBM 370 TSO system.
A position-aware linear solid constitutive model for peridynamics
Mitchell, John A.; Silling, Stewart A.; Littlewood, David J.
2015-11-06
A position-aware linear solid (PALS) peridynamic constitutive model is proposed for isotropic elastic solids. The PALS model addresses problems that arise, in ordinary peridynamic material models such as the linear peridynamic solid (LPS), due to incomplete neighborhoods near the surface of a body. We improved model behavior in the vicinity of free surfaces through the application of two influence functions that correspond, respectively, to the volumetric and deviatoric parts of the deformation. Furthermore, the model is position-aware in that the influence functions vary over the body and reflect the proximity of each material point to free surfaces. Demonstration calculations on simple benchmark problems show a sharp reduction in error relative to the LPS model.
Simple linear and multivariate regression models.
Rodríguez del Águila, M M; Benítez-Parejo, N
2011-01-01
In biomedical research it is common to find problems in which we wish to relate a response variable to one or more variables capable of describing the behaviour of the former variable by means of mathematical models. Regression techniques are used to this effect, in which an equation is determined relating the two variables. While such equations can have different forms, linear equations are the most widely used form and are easy to interpret. The present article describes simple and multiple linear regression models, how they are calculated, and how their applicability assumptions are checked. Illustrative examples are provided, based on the use of the freely accessible R program. Copyright © 2011 SEICAP. Published by Elsevier Espana. All rights reserved.
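For readers without R at hand, the same simple linear model can be fitted from first principles; this minimal Python sketch (synthetic data, our own variable names) mirrors the article's workflow of estimating coefficients and then inspecting residuals to check the model's assumptions.

import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 10.0, 50)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, 50)       # y = b0 + b1*x + noise

X = np.column_stack([np.ones_like(x), x])          # design matrix with intercept
beta = np.linalg.lstsq(X, y, rcond=None)[0]        # ordinary least squares
resid = y - X @ beta
r2 = 1.0 - resid.var() / y.var()
print(f"intercept={beta[0]:.3f}  slope={beta[1]:.3f}  R^2={r2:.3f}")
# Assumption checks would plot resid against fitted values (linearity,
# constant variance) and a normal Q-Q plot of resid, as the article does in R.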
Identifiability in N-mixture models: a large-scale screening test with bird data.
Kéry, Marc
2018-02-01
Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models, or the use of external information via informative priors or penalized likelihoods may help. © 2017 by the Ecological Society of America.
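The model class under test is easy to state: at each site, latent abundance N is Poisson and each repeat count is Binomial(N, p). A minimal single-site likelihood is sketched below in Python (our own toy data and truncation limit, not the paper's screening code); a flat or ridge-like likelihood surface around the optimum is one symptom of the identifiability problems discussed.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, poisson

def nmix_negloglik(params, counts, n_max=200):
    # Binomial N-mixture for one site: N ~ Poisson(lam), count_j ~ Bin(N, p).
    # The latent N is summed out up to the truncation point n_max.
    lam = np.exp(params[0])                        # log link keeps lam > 0
    p = 1.0 / (1.0 + np.exp(-params[1]))           # logit link keeps 0 < p < 1
    N = np.arange(n_max + 1)
    like = poisson.pmf(N, lam)
    for c in counts:
        like = like * binom.pmf(c, N, p)           # product over repeat visits
    return -np.log(like.sum())

counts = np.array([3, 5, 2, 4])                    # toy repeat counts, one site
fit = minimize(nmix_negloglik, x0=[np.log(5.0), 0.0], args=(counts,))
print("lambda =", np.exp(fit.x[0]), " p =", 1.0 / (1.0 + np.exp(-fit.x[1])))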
String Fragmentation Model in Space Radiation Problems
NASA Technical Reports Server (NTRS)
Tang, Alfred; Johnson, Eloise (Editor); Norbury, John W.; Tripathi, R. K.
2002-01-01
String fragmentation models such as the Lund Model fit experimental particle production cross sections very well in the high-energy limit. This paper gives an introduction to the massless relativistic string in the Lund Model and shows how it can be modified, with a simple assumption, to produce formulas for meson production cross sections for space radiation research. The results of the string model are compared with inclusive pion production data from proton-proton collision experiments.
A practical model for pressure probe system response estimation (with review of existing models)
NASA Astrophysics Data System (ADS)
Hall, B. F.; Povey, T.
2018-04-01
The accurate estimation of the unsteady response (bandwidth) of pneumatic pressure probe systems (probe, line and transducer volume) is a common practical problem encountered in the design of aerodynamic experiments. Understanding the bandwidth of the probe system is necessary to capture unsteady flow features accurately. Where traversing probes are used, the desired traverse speed and spatial gradients in the flow dictate the minimum probe system bandwidth required to resolve the flow. Existing approaches for bandwidth estimation are either complex or inaccurate in implementation, so probes are often designed based on experience. Where probe system bandwidth is characterized, it is often done experimentally, requiring careful experimental set-up and analysis. There is a need for a relatively simple but accurate model for estimation of probe system bandwidth. A new model is presented for the accurate estimation of pressure probe bandwidth for simple probes commonly used in wind tunnel environments; experimental validation is provided. An additional, simple graphical method for air is included for convenience.
NASA Astrophysics Data System (ADS)
Belloul, M.; Engl, W.; Colin, A.; Panizza, P.; Ajdari, A.
2009-05-01
By studying the distribution of monodisperse droplets at a simple T junction, we show that the traffic of discrete fluid systems in microfluidic networks results from two competing mechanisms whose significance is driven by confinement. Traffic is dominated by collisions occurring at the junction for small droplets and by collective hydrodynamic feedback for large ones. For each mechanism, we present simple models in terms of the pertinent dimensionless parameters of the problem.
Defining Simple nD Operations Based on Prismatic nD Objects
NASA Astrophysics Data System (ADS)
Arroyo Ohori, K.; Ledoux, H.; Stoter, J.
2016-10-01
An alternative to the traditional approaches that model 2D/3D space, time, scale and other parametrisable characteristics separately in GIS lies in the higher-dimensional modelling of geographic information, in which a chosen set of non-spatial characteristics, e.g. time and scale, are modelled as extra geometric dimensions perpendicular to the spatial ones, thus creating a higher-dimensional model. While higher-dimensional models are undoubtedly powerful, they are also hard to create and manipulate due to our lack of an intuitive understanding of dimensions higher than three. As a solution to this problem, this paper proposes a methodology that makes nD object generation easier by splitting the creation and manipulation process into three steps: (i) constructing simple nD objects based on nD prismatic polytopes (analogous to prisms in 3D), (ii) defining simple modification operations at the vertex level, and (iii) applying simple postprocessing to fix errors introduced in the model. As a use case, we show how two sets of operations can be defined and implemented in a dimension-independent manner using this methodology: the most common transformations (i.e. translation, scaling and rotation) and the collapse of objects. The nD objects generated in this manner can then be used as a basis for an nD GIS.
NASA Astrophysics Data System (ADS)
Lee, Dong-Sup; Cho, Dae-Seung; Kim, Kookhyun; Jeon, Jae-Jin; Jung, Woo-Jin; Kang, Myeng-Hwan; Kim, Jae-Ho
2015-01-01
Independent Component Analysis (ICA), one of the blind source separation methods, can be applied to extract unknown source signals from received signals alone. This is accomplished by finding statistical independence of signal mixtures, and it has been successfully applied in myriad fields such as medical science and image processing. Nevertheless, inherent problems have been reported when using this technique: instability and invalid ordering of the separated signals, particularly when a conventional ICA technique is used for vibratory source signal identification in complex structures. In this study, a simple iterative variant of the conventional ICA is proposed to mitigate these problems. To extract more stable source signals in a valid order, the proposed method iteratively reorders the extracted mixing matrix and reconstructs the finally converged source signals by referring to the magnitudes of the correlation coefficients between the intermediately separated signals and signals measured on or near the sources. In order to review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses have been carried out for a virtual response model and a 30 m class submarine model. Moreover, in order to investigate the applicability of the proposed method to a real problem involving a complex structure, an experiment has been carried out on a scaled submarine mockup. The results show that the proposed method can resolve the inherent problems of the conventional ICA technique.
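The reordering step can be pictured with off-the-shelf tools. The sketch below (Python/scikit-learn; a greedy stand-in for the paper's iterative algorithm, with all names ours) unmixes the sensor signals and then orders and sign-corrects the separated components by their correlation with signals measured on or near the sources.

import numpy as np
from sklearn.decomposition import FastICA

def separate_and_reorder(X, refs):
    # X: samples x sensors; refs: samples x n_sources (near-source signals).
    ica = FastICA(n_components=refs.shape[1], random_state=0)
    S = ica.fit_transform(X)                       # separated signals
    out = np.empty_like(refs, dtype=float)
    for k in range(refs.shape[1]):
        c = np.array([np.corrcoef(S[:, j], refs[:, k])[0, 1]
                      for j in range(S.shape[1])])
        j = int(np.argmax(np.abs(c)))              # best-matching component
        out[:, k] = np.sign(c[j]) * S[:, j]        # fix order and sign
    return out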
ERIC Educational Resources Information Center
Gibbons, Robert D.; And Others
In the process of developing a conditionally-dependent item response theory (IRT) model, the problem arose of modeling an underlying multivariate normal (MVN) response process with general correlation among the items. Without the assumption of conditional independence, for which the underlying MVN cdf takes on comparatively simple forms and can be…
Computing relative plate velocities: a primer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bevis, M.
1987-08-01
Standard models of present-day plate motions are framed in terms of rates and poles of rotation, in accordance with the well-known theorem due to Euler. This article shows how computation of relative plate velocities from such models can be viewed as a simple problem in spherical trigonometry. A FORTRAN subroutine is provided to perform the necessary computations.
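In modern terms the computation is a single cross product, v = omega x r. The Python sketch below (our units and example pole, not the article's FORTRAN subroutine) returns the velocity of a surface point given an Euler pole and rotation rate.

import numpy as np

def plate_velocity(pole_lat, pole_lon, rate_deg_per_myr, lat, lon):
    # Velocity of a surface point from an Euler pole: v = omega x r.
    # Returns a Cartesian vector in km/Myr, numerically equal to mm/yr.
    R = 6371.0                                     # Earth radius, km
    def unit(latd, lond):
        la, lo = np.radians([latd, lond])
        return np.array([np.cos(la) * np.cos(lo),
                         np.cos(la) * np.sin(lo),
                         np.sin(la)])
    omega = np.radians(rate_deg_per_myr) * unit(pole_lat, pole_lon)  # rad/Myr
    r = R * unit(lat, lon)                         # position vector, km
    return np.cross(omega, r)

# Example: a pole at (62N, 90W) rotating 0.25 deg/Myr, point on the equator.
v = plate_velocity(62.0, -90.0, 0.25, 0.0, 0.0)
print(np.linalg.norm(v), "mm/yr")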
Corrected goodness-of-fit test in covariance structure analysis.
Hayakawa, Kazuhiko
2018-05-17
Many previous studies report simulation evidence that the goodness-of-fit test in covariance structure analysis or structural equation modeling suffers from the overrejection problem when the number of manifest variables is large compared with the sample size. In this study, we demonstrate that one of the tests considered in Browne (1974) can address this long-standing problem. We also propose a simple modification of Satorra and Bentler's mean and variance adjusted test for non-normal data. A Monte Carlo simulation is carried out to investigate the performance of the corrected tests in the context of a confirmatory factor model, a panel autoregressive model, and a cross-lagged panel (panel vector autoregressive) model. The simulation results reveal that the corrected tests overcome the overrejection problem and outperform existing tests in most cases. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Tracking trade transactions in water resource systems: A node-arc optimization formulation
NASA Astrophysics Data System (ADS)
Erfani, Tohid; Huskova, Ivana; Harou, Julien J.
2013-05-01
We formulate and apply a multicommodity network flow node-arc optimization model capable of tracking trade transactions in complex water resource systems. The model uses a simple node-to-node network connectivity matrix and does not require preprocessing of all possible flow paths in the network. We compare the proposed node-arc formulation with an existing arc-path (flow path) formulation and explain the advantages and difficulties of both approaches. We verify the proposed formulation on a hypothetical water distribution network. Results indicate the arc-path model solves the problem with fewer constraints, but the proposed formulation allows using a simple network connectivity matrix, which simplifies modeling large or complex networks. The proposed algorithm allows converting existing node-arc hydroeconomic models that broadly represent water trading into ones that also track individual supplier-receiver relationships (trade transactions).
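A minimal sketch of the node-arc idea in Python (a toy single-commodity network, not the authors' model or data): the incidence matrix is built directly from node-to-node connectivity, and flows come from a linear program with mass-balance equalities. Tracking individual supplier-receiver transactions would replicate this structure per commodity.

import numpy as np
from scipy.optimize import linprog

# Toy network (assumed): node 0 supplies 10 units demanded at node 3.
arcs = [(0, 1), (0, 2), (1, 3), (2, 3), (1, 2)]    # from connectivity matrix
cost = np.array([1.0, 2.0, 2.0, 1.0, 0.5])
n_nodes = 4
A = np.zeros((n_nodes, len(arcs)))                 # node-arc incidence matrix
for j, (i, k) in enumerate(arcs):
    A[i, j] = 1.0                                  # arc leaves node i
    A[k, j] = -1.0                                 # arc enters node k
b = np.array([10.0, 0.0, 0.0, -10.0])              # net supply at each node
res = linprog(cost, A_eq=A, b_eq=b, bounds=[(0, 8)] * len(arcs))
print(res.x)                                       # arc flows; no path enumeration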
Li, Yanhui; Guo, Hao; Wang, Lin; Fu, Jing
2013-01-01
Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of logistics systems for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business, and much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms a GA on computing time, optimal solution, and computing stability. The proposed model is very useful for helping managers make the right decisions in an e-supply-chain environment.
Sign problem and Monte Carlo calculations beyond Lefschetz thimbles
Alexandru, Andrei; Basar, Gokce; Bedaque, Paulo F.; ...
2016-05-10
We point out that Monte Carlo simulations of theories with severe sign problems can be profitably performed over manifolds in complex space different from the one with fixed imaginary part of the action (“Lefschetz thimble”). We describe a family of such manifolds that interpolate between the tangent space at one critical point (where the sign problem is milder compared to the real plane but in some cases still severe) and the union of relevant thimbles (where the sign problem is mild but a multimodal distribution function complicates the Monte Carlo sampling). We exemplify this approach using a simple 0+1 dimensional fermion model previously used in sign problem studies and show that it can solve the model for some parameter values where a solution using Lefschetz thimbles was elusive.
Theory and applications of a deterministic approximation to the coalescent model
Jewett, Ethan M.; Rosenberg, Noah A.
2014-01-01
Under the coalescent model, the random number n_t of lineages ancestral to a sample is nearly deterministic as a function of time when n_t is moderate to large in value, and it is well approximated by its expectation E[n_t]. In turn, this expectation is well approximated by simple deterministic functions that are easy to compute. Such deterministic functions have been applied to estimate allele age, effective population size, and genetic diversity, and they have been used to study properties of models of infectious disease dynamics. Although a number of simple approximations of E[n_t] have been derived and applied to problems of population-genetic inference, the theoretical accuracy of the formulas and the inferences obtained using these approximations is not known, and the range of problems to which they can be applied is not well understood. Here, we demonstrate general procedures by which the approximation n_t ≈ E[n_t] can be used to reduce the computational complexity of coalescent formulas, and we show that the resulting approximations converge to their true values under simple assumptions. Such approximations provide alternatives to exact formulas that are computationally intractable or numerically unstable when the number of sampled lineages is moderate or large. We also extend an existing class of approximations of E[n_t] to the case of multiple populations of time-varying size with migration among them. Our results facilitate the use of the deterministic approximation n_t ≈ E[n_t] for deriving functionally simple, computationally efficient, and numerically stable approximations of coalescent formulas under complicated demographic scenarios. PMID:24412419
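For constant population size the deterministic approximation has a convenient closed form. The sketch below (Python; the time-scaling convention is our assumption and should be checked against the application) compares the closed-form solution of dn/dt = -n(n-1)/2 with direct numerical integration.

import numpy as np
from scipy.integrate import solve_ivp

def n_deterministic(t, n0):
    # Closed-form solution of dn/dt = -n(n-1)/2 (time in coalescent units):
    # n(t) = 1 / (1 - (1 - 1/n0) * exp(-t/2)); n -> 1 as t -> infinity.
    return 1.0 / (1.0 - (1.0 - 1.0 / n0) * np.exp(-t / 2.0))

sol = solve_ivp(lambda t, n: -0.5 * n * (n - 1.0), (0.0, 5.0), [100.0],
                rtol=1e-8, dense_output=True)
for t in (0.1, 1.0, 5.0):
    print(t, n_deterministic(t, 100.0), sol.sol(t)[0])  # should agree closely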
Constraint Programming to Solve Maximal Density Still Life
NASA Astrophysics Data System (ADS)
Chu, Geoffrey; Petrie, Karen Elizabeth; Yorke-Smith, Neil
The Maximum Density Still Life problem fills a finite Game of Life board with a stable pattern of cells that has as many live cells as possible. Although simple to state, this problem is computationally challenging for any but the smallest sizes of board. It is especially difficult to prove that the maximum number of live cells has been found. Various approaches have been employed; the most successful are based on Constraint Programming (CP). We describe the Maximum Density Still Life problem, introduce the concept of constraint programming, give an overview of how the problem can be modelled and solved with CP, and report on the best-known results for the problem.
Parameter identification in ODE models with oscillatory dynamics: a Fourier regularization approach
NASA Astrophysics Data System (ADS)
Chiara D'Autilia, Maria; Sgura, Ivonne; Bozzini, Benedetto
2017-12-01
In this paper we consider a parameter identification problem (PIP) for data oscillating in time, that can be described in terms of the dynamics of some ordinary differential equation (ODE) model, resulting in an optimization problem constrained by the ODEs. In problems with this type of data structure, simple application of the direct method of control theory (discretize-then-optimize) yields a least-squares cost function exhibiting multiple ‘low’ minima. Since in this situation any optimization algorithm is liable to fail in the approximation of a good solution, here we propose a Fourier regularization approach that is able to identify an iso-frequency manifold S of codimension one in the parameter space …
Neural network for solving convex quadratic bilevel programming problems.
He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie
2014-03-01
In this paper, using the idea of successive approximation, we propose a neural network to solve convex quadratic bilevel programming problems (CQBPPs), modeled by a nonautonomous differential inclusion. Unlike the existing neural network for CQBPPs, the model has the smallest number of state variables and a simple structure. Based on the theory of nonsmooth analysis, differential inclusions and the Lyapunov-like method, the equilibrium point sequence of the proposed neural network can approximately converge to an optimal solution of the CQBPP under certain conditions. Finally, simulation results on two numerical examples and the portfolio selection problem show the effectiveness and performance of the proposed neural network. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Froggatt, C. D.
2003-01-01
The quark-lepton mass problem and the ideas of mass protection are reviewed. The hierarchy problem and suggestions for its resolution, including Little Higgs models, are discussed. The Multiple Point Principle (MPP) is introduced and used within the Standard Model (SM) to predict the top quark and Higgs particle masses. Mass matrix ansätze are considered; in particular we discuss the lightest family mass generation model, in which all the quark mixing angles are successfully expressed in terms of simple expressions involving quark mass ratios. It is argued that an underlying chiral flavour symmetry is responsible for the hierarchical texture of the fermion mass matrices. The phenomenology of neutrino mass matrices is briefly discussed.
A recurrent neural network for solving bilevel linear programming problem.
He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie; Huang, Junjian
2014-04-01
In this brief, based on the method of penalty functions, a recurrent neural network (NN) modeled by means of a differential inclusion is proposed for solving the bilevel linear programming problem (BLPP). Compared with existing NNs for the BLPP, the model has the smallest number of state variables and a simple structure. Using nonsmooth analysis, the theory of differential inclusions, and the Lyapunov-like method, the equilibrium point sequence of the proposed NN can approximately converge to an optimal solution of the BLPP under certain conditions. Finally, numerical simulations of a supply chain distribution model show the excellent performance of the proposed recurrent NN.
Tour of a Simple Trigonometry Problem
ERIC Educational Resources Information Center
Poon, Kin-Keung
2012-01-01
This article focuses on a simple trigonometric problem that generates a strange phenomenon when different methods are applied to tackling it. A series of problem-solving activities are discussed, so that students can be alerted that the precision of diagrams is important when solving geometric problems. In addition, the problem-solving plan was…
Spinnato, J; Roubaud, M-C; Burle, B; Torrésani, B
2015-06-01
The main goal of this work is to develop a model for multisensor signals, such as magnetoencephalography or electroencephalography (EEG) signals, that accounts for inter-trial variability and is suitable for the corresponding binary classification problems. An important constraint is that the model be simple enough to handle small and unbalanced datasets, as often encountered in BCI-type experiments. The method involves the linear mixed effects statistical model, the wavelet transform, and spatial filtering, and aims at the characterization of localized discriminant features in multisensor signals. After discrete wavelet transform and spatial filtering, a projection onto the relevant wavelet and spatial channel subspaces is used for dimension reduction. The projected signals are then decomposed as the sum of a signal of interest (i.e., discriminant) and background noise, using a very simple Gaussian linear mixed model. Thanks to the simplicity of the model, the corresponding parameter estimation problem is simplified. Robust estimates of class-covariance matrices are obtained from small sample sizes and an effective Bayes plug-in classifier is derived. The approach is applied to the detection of error potentials in multichannel EEG data in a very unbalanced situation (detection of rare events). Classification results prove the relevance of the proposed approach in such a context. The combination of the linear mixed model, wavelet transform and spatial filtering for EEG classification is, to the best of our knowledge, an original approach, which is proven to be effective. This paper improves upon earlier results on similar problems, and the three main ingredients all play an important role.
Goychuk, I
2001-08-01
Stochastic resonance in a simple model of information transfer is studied for sensory neurons and ensembles of ion channels. An exact expression for the information gain is obtained for the Poisson process with the signal-modulated spiking rate. This result allows one to generalize the conventional stochastic resonance (SR) problem (with periodic input signal) to the arbitrary signals of finite duration (nonstationary SR). Moreover, in the case of a periodic signal, the rate of information gain is compared with the conventional signal-to-noise ratio. The paper establishes the general nonequivalence between both measures notwithstanding their apparent similarity in the limit of weak signals.
NASA Astrophysics Data System (ADS)
Bisegna, Paolo; Caselli, Federica
2008-06-01
This paper presents a simple analytical expression for the effective complex conductivity of a periodic hexagonal arrangement of conductive circular cylinders embedded in a conductive matrix, with interfaces exhibiting a capacitive impedance. This composite material may be regarded as an idealized model of a biological tissue comprising tubular cells, such as skeletal muscle. The asymptotic homogenization method is adopted, and the corresponding local problem is solved by resorting to Weierstrass elliptic functions. The effectiveness of the present analytical result is proved by convergence analysis and comparison with finite-element solutions and existing models.
Break-up of Gondwana and opening of the South Atlantic: Review of existing plate tectonic models
Ghidella, M.E.; Lawver, L.A.; Gahagan, L.M.
2007-01-01
… each model. We also plot reconstructions at four selected epochs for all models using the same projection and scale to facilitate comparison. The diverse simplifying assumptions that need to be made in every case regarding plate fragmentation to account for the numerous syn-rift basins and periods of stretching are strong indicators that rigid plate tectonics is too simple a model for the present problem.
Simple Test Functions in Meshless Local Petrov-Galerkin Methods
NASA Technical Reports Server (NTRS)
Raju, Ivatury S.
2016-01-01
Two meshless local Petrov-Galerkin (MLPG) methods based on two different trial functions but the same simple linear test function were developed for beam and column problems. These methods used generalized moving least squares (GMLS) and radial basis (RB) interpolation functions as trial functions. The two methods were tested on various patch test problems, and both passed the patch tests successfully. The methods were then applied to various beam vibration problems and problems involving Euler and Beck's columns, and both yielded accurate solutions for all problems studied. The simple linear test function offers considerable savings in computing effort, as the domain integrals involved in the weak form are avoided. Both methods based on this simple linear test function produced accurate results for frequencies and buckling loads. Of the two, the method with radial basis trial functions is particularly attractive, as it is simple, accurate, and robust.
NASA Astrophysics Data System (ADS)
Mahmood, Ehab A.; Rana, Sohel; Hussin, Abdul Ghapor; Midi, Habshah
2016-06-01
The circular regression model may contain one or more data points that appear peculiar or inconsistent with the main part of the model. This may occur due to recording errors, sudden short events, sampling under abnormal conditions, etc. The existence of these "outliers" in the data set causes problems for research results and conclusions. Therefore, we should identify them before applying statistical analysis. In this article, we propose a statistic to identify outliers in both the response and explanatory variables of the simple circular regression model. The proposed statistic is the robust circular distance RCDxy, and it is justified by three robustness measures: the proportion of detected outliers and the masking and swamping rates.
Modelling of a Solar Thermal Power Plant for Benchmarking Blackbox Optimization Solvers
NASA Astrophysics Data System (ADS)
Lemyre Garneau, Mathieu
A new family of problems is provided to serve as a benchmark for blackbox optimization solvers. The problems are single- or bi-objective and vary in complexity in terms of the number of variables used (from 5 to 29), the type of variables (integer, real, category), the number of constraints (from 5 to 17) and their types (binary or continuous). In order to provide problems exhibiting dynamics that reflect real engineering challenges, they are extracted from an original numerical model of a concentrated solar power (CSP) plant with molten salt thermal storage. The model simulates the performance of the power plant through a high-level model of each of its main components, namely a heliostat field, a central cavity receiver, a molten salt heat storage, a steam generator and an idealized power block. The heliostat field layout is determined through a simple automatic strategy that finds the best individual positions on the field by considering their respective cosine efficiency, atmospheric scattering and spillage losses as a function of the design parameters. A Monte Carlo integral method is used to evaluate the heliostat field's optical performance throughout the day so that shadowing effects between heliostats are considered, and the results of this evaluation provide the inputs to simulate the levels and temperatures of the thermal storage. The molten salt storage inventory is used to transfer thermal energy to the power block, which simulates a simple Rankine cycle with a single steam turbine. Auxiliary models are used to provide additional optimization constraints on the investment cost, parasitic losses or component failure. The results of preliminary optimizations performed with the NOMAD software using default settings are provided to show the validity of the problems.
Role of gravity in preparative electrophoresis
NASA Technical Reports Server (NTRS)
Bier, M.
1975-01-01
The fundamental formulas of electrophoresis are derived microscopically and applied to the problem of isotachophoresis. A simple physical model of the isotachophoresis front is proposed. The front motion and structure are studied in the simplified case without convection, diffusion and non-electric external forces.
The influence of wind-tunnel walls on discrete frequency noise
NASA Technical Reports Server (NTRS)
Mosher, M.
1984-01-01
This paper describes an analytical model that can be used to examine the effects of wind-tunnel walls on discrete frequency noise. First, a complete physical model of an acoustic source in a wind tunnel is described, and a simplified version is then developed. This simplified model retains the important physical processes involved, yet it is more amenable to analysis. Second, the simplified physical model is formulated as a mathematical problem. An inhomogeneous partial differential equation with mixed boundary conditions is set up and then transformed into an integral equation. The integral equation has been solved with a panel program on a computer. Preliminary results from a simple model problem will be shown and compared with the approximate analytic solution.
Integrated Reconfigurable Intelligent Systems (IRIS) for Complex Naval Systems
2011-02-23
[Report front matter: table-of-contents fragments covering the general model setup, co-simulation principles, and a double-pendulum example problem used to evaluate the co-simulation approach.]
NASA Astrophysics Data System (ADS)
Guha, Anirban
2017-11-01
Theoretical studies on linear shear instabilities, as well as different kinds of wave interactions, often use simple velocity and/or density profiles (e.g. constant, piecewise) to obtain good qualitative and quantitative predictions of the initial disturbances. Moreover, such simple profiles provide a minimal model for a mechanistic understanding of shear instabilities. Here we have extended this minimal paradigm into the nonlinear domain using the vortex method. Making use of the unsteady Bernoulli equation in the presence of linear shear, and extending the Birkhoff-Rott equation to multiple interfaces, we have numerically simulated the interaction between multiple fully nonlinear waves. This methodology is quite general and has allowed us to simulate diverse problems that can be reduced to a minimal system of interacting waves, e.g. spilling and plunging breakers, stratified shear instabilities (Holmboe, Taylor-Caulfield, stratified Rayleigh), jet flows, and even wave-topography interaction problems like Bragg resonance. We found that the minimal models capture key nonlinear features (e.g. wave-breaking features like cusp formation and roll-up) which are observed in experiments and/or extensive simulations with smooth, realistic profiles.
Deconvolution of mixing time series on a graph
Blocker, Alexander W.; Airoldi, Edoardo M.
2013-01-01
In many applications we are interested in making inference on latent time series from indirect measurements, which are often low-dimensional projections resulting from mixing or aggregation. Positron emission tomography, super-resolution, and network traffic monitoring are some examples. Inference in such settings requires solving a sequence of ill-posed inverse problems, y_t = A x_t, where the projection mechanism provides information on A. We consider problems in which A specifies mixing on a graph of time series that are bursty and sparse. We develop a multilevel state-space model for mixing time series and an efficient approach to inference. A simple model is used to calibrate regularization parameters that lead to efficient inference in the multilevel state-space model. We apply this method to the problem of estimating point-to-point traffic flows on a network from aggregate measurements. Our solution outperforms existing methods for this problem, and our two-stage approach suggests an efficient inference strategy for multilevel models of multivariate time series. PMID:25309135
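As a bare-bones stand-in for this kind of inference (the paper's multilevel state-space model with sparsity is considerably richer), a Tikhonov-regularized least-squares inversion of y_t = A x_t can be written in a few lines of Python; all data here are synthetic.

import numpy as np

def ridge_deconvolve(A, y, lam):
    # Solve min ||A x - y||^2 + lam ||x||^2; the closed form
    # x = (A^T A + lam I)^(-1) A^T y regularizes the ill-posed inversion.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

rng = np.random.default_rng(2)
A = rng.random((5, 12))                    # wide: aggregation loses information
x_true = np.zeros(12)
x_true[[2, 7]] = [3.0, 1.5]                # bursty, sparse latent flows
y = A @ x_true + 0.01 * rng.standard_normal(5)
print(ridge_deconvolve(A, y, lam=0.1).round(2))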
Cerebellum-inspired neural network solution of the inverse kinematics problem.
Asadi-Eydivand, Mitra; Ebadzadeh, Mohammad Mehdi; Solati-Hashjin, Mehran; Darlot, Christian; Abu Osman, Noor Azuan
2015-12-01
The demand today for more complex robots that have manipulators with higher degrees of freedom is increasing because of technological advances. Obtaining the precise movement for a desired trajectory or a sequence of arm positions requires computation of the inverse kinematic (IK) function, which is a major problem in robotics. The solution of the IK problem leads robots to the precise position and orientation of their end-effector. We developed a bioinspired solution, comparable with cerebellar anatomy and function, to solve the said problem. The proposed model is stable under all conditions merely by parameter determination, in contrast to recursive model-based solutions, which remain stable only under certain conditions. We modified the proposed model for a simple two-segmented arm to prove the feasibility of the model under a basic condition. A fuzzy neural network was used, through its learning method, to compute the parameters of the system. Simulation results show the practical feasibility and efficiency of the proposed model in robotics. The main advantage of the proposed model is its generalizability and potential use in any robot.
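For the two-segment test arm, the IK function that the network must learn has a textbook closed form, sketched below in Python (planar arm, elbow-down branch; our own toy dimensions, useful for generating training and validation pairs).

import numpy as np

def two_link_ik(x, y, l1, l2):
    # Closed-form inverse kinematics of a planar two-segment arm:
    # returns joint angles (q1, q2) reaching end-effector point (x, y).
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2.0 * l1 * l2)
    if abs(c2) > 1.0:
        raise ValueError("target out of reach")
    q2 = np.arccos(c2)                              # elbow-down branch
    q1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
    return q1, q2

q1, q2 = two_link_ik(1.2, 0.5, 1.0, 1.0)
print(np.degrees([q1, q2]))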
Did the ever dead outnumber the living and when? A birth-and-death approach
NASA Astrophysics Data System (ADS)
Avan, Jean; Grosjean, Nicolas; Huillet, Thierry
2015-02-01
This paper is an attempt to formalize analytically the question raised in 'World Population Explained: Do Dead People Outnumber Living, Or Vice Versa?' Huffington Post, Howard (2012). We start by developing simple deterministic Malthusian growth models of the problem (with birth and death rates either constant or time-dependent) before moving on to both linear birth-and-death Markov chain models and age-structured models.
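In the simplest constant-rate Malthusian case the comparison can be made explicit (a minimal sketch under our assumptions: per-capita birth rate b and death rate d, b > d for the limit, and those dead before time 0 ignored):

\[
L(t) = L_0\, e^{(b-d)t}, \qquad
D(t) = \int_0^t d\, L(s)\, ds = \frac{d\, L_0}{b-d}\left(e^{(b-d)t}-1\right)
\quad (b \neq d),
\]
\[
\frac{D(t)}{L(t)} \;\longrightarrow\; \frac{d}{b-d} \quad (t \to \infty),
\qquad \text{so the ever dead eventually outnumber the living iff } b < 2d .
\]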
Mathematics applied to the climate system: outstanding challenges and recent progress
Williams, Paul D.; Cullen, Michael J. P.; Davey, Michael K.; Huthnance, John M.
2013-01-01
The societal need for reliable climate predictions and a proper assessment of their uncertainties is pressing. Uncertainties arise not only from initial conditions and forcing scenarios, but also from model formulation. Here, we identify and document three broad classes of problems, each representing what we regard to be an outstanding challenge in the area of mathematics applied to the climate system. First, there is the problem of the development and evaluation of simple physically based models of the global climate. Second, there is the problem of the development and evaluation of the components of complex models such as general circulation models. Third, there is the problem of the development and evaluation of appropriate statistical frameworks. We discuss these problems in turn, emphasizing the recent progress made by the papers presented in this Theme Issue. Many pressing challenges in climate science require closer collaboration between climate scientists, mathematicians and statisticians. We hope the papers contained in this Theme Issue will act as inspiration for such collaborations and for setting future research directions. PMID:23588054
An Inexpensive 2-D and 3-D Model of the Sarcomere as a Teaching Aid
ERIC Educational Resources Information Center
Rios, Vitor Passos; Bonfim, Vanessa Maria Gomes
2013-01-01
To address a common problem of teaching the sliding filament theory (that is, students have difficulty in visualizing how the component proteins of the sarcomere differ, how they organize themselves into a single working unit, and how they function in relation to each other), we have devised a simple model, with inexpensive materials, to be built…
NASA Technical Reports Server (NTRS)
Phillips, D. T.; Manseur, B.; Foster, J. W.
1982-01-01
Alternate definitions of system failure create complex analysis for which analytic solutions are available only for simple, special cases. The GRASP methodology is a computer simulation approach for solving all classes of problems in which both failure and repair events are modeled according to the probability laws of the individual components of the system.
Preliminary model for high-power-waveguide arcing and arc protection
NASA Technical Reports Server (NTRS)
Yen, H. C.
1978-01-01
The arc protection subsystems that are implemented in the DSN high power transmitters are discussed. The status of present knowledge about waveguide arcs is reviewed in terms of a simple engineering model. A fairly general arc detection scheme is also discussed. Areas where further studies are needed are pointed out along with proposed approaches to the solutions of these problems.
Bayesian model checking: A comparison of tests
NASA Astrophysics Data System (ADS)
Lucy, L. B.
2018-06-01
Two procedures for checking Bayesian models are compared using a simple test problem based on the local Hubble expansion. Over four orders of magnitude, p-values derived from a global goodness-of-fit criterion for posterior probability density functions agree closely with posterior predictive p-values. The former can therefore serve as an effective proxy for the difficult-to-calculate posterior predictive p-values.
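The posterior predictive side of the comparison is straightforward to simulate. The Python sketch below (a toy Gaussian-mean model with known sigma and a flat prior; nothing here is the paper's Hubble-expansion setup) draws parameters from the posterior, simulates replicate data, and reports the tail probability of a chi-squared discrepancy.

import numpy as np

def posterior_predictive_pvalue(y, sigma, n_draws=4000, seed=0):
    # Toy model: y_i ~ N(mu, sigma^2), flat prior on mu, so the posterior of
    # mu is N(mean(y), sigma^2/n). Discrepancy T = sum((y - mu)^2) / sigma^2.
    rng = np.random.default_rng(seed)
    n = len(y)
    mus = rng.normal(y.mean(), sigma / np.sqrt(n), n_draws)
    exceed = 0
    for mu in mus:
        y_rep = rng.normal(mu, sigma, n)           # replicate data set
        t_rep = np.sum((y_rep - mu) ** 2) / sigma**2
        t_obs = np.sum((y - mu) ** 2) / sigma**2
        exceed += t_rep >= t_obs
    return exceed / n_draws                        # ~0 or ~1 flags misfit

y = np.random.default_rng(1).normal(0.0, 2.0, 25)  # overdispersed vs sigma = 1
print(posterior_predictive_pvalue(y, sigma=1.0))   # small p: model misfit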
A Coupling Strategy of FEM and BEM for the Solution of a 3D Industrial Crack Problem
NASA Astrophysics Data System (ADS)
Kouitat Njiwa, Richard; Taha Niane, Ngadia; Frey, Jeremy; Schwartz, Martin; Bristiel, Philippe
2015-03-01
Analyzing crack stability in an industrial context is challenging due to the geometry of the structure. The finite element method is effective for defect-free problems. The boundary element method is effective for problems in simple geometries with singularities. We present a strategy that takes advantage of both approaches. Within the iterative solution procedure, the FEM solves a defect-free problem over the structure while the BEM solves the crack problem over a fictitious domain with simple geometry. The effectiveness of the approach is demonstrated on some simple examples which allow comparison with literature results and on an industrial problem.
Benchmark problems for numerical implementations of phase field models
Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; ...
2016-10-01
Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
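To give a flavor of the first benchmark's physics, here is a deliberately minimal 1D spinodal-decomposition (Cahn-Hilliard) integration in Python; the parameters, discretization and explicit Euler stepping are our simplifications for brevity, exactly the kind of numerical choice (time stepping in particular) that the benchmark problems are meant to probe.

import numpy as np

# Minimal 1D Cahn-Hilliard solver: c_t = M * lap(mu),
# mu = df/dc - kappa * lap(c), with double-well f = (c^2 - 1)^2 / 4.
n, dx, dt, M, kappa = 128, 1.0, 0.01, 1.0, 1.0
rng = np.random.default_rng(3)
c = 0.05 * rng.standard_normal(n)                 # near-critical quench

def lap(u):                                        # periodic Laplacian
    return (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2

for _ in range(20000):
    mu = c**3 - c - kappa * lap(c)                 # chemical potential
    c = c + dt * M * lap(mu)                       # explicit Euler (small dt!)

print("phase fractions:", (c > 0).mean(), (c < 0).mean())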
A Flipped Pedagogy for Expert Problem Solving
NASA Astrophysics Data System (ADS)
Pritchard, David
The internet provides free learning opportunities for declarative (Wikipedia, YouTube) and procedural (Khan Academy, MOOCs) knowledge, challenging colleges to provide learning at a higher cognitive level. Our "Modeling Applied to Problem Solving" pedagogy for Newtonian Mechanics imparts strategic knowledge - how to systematically determine which concepts to apply and why. Declarative and procedural knowledge is learned online before class via an e-text, checkpoint questions, and homework on edX.org (see http://relate.mit.edu/physicscourse); it is organized into five Core Models. Instructors then coach students on simple "touchstone problems", novel exercises, and multi-concept problems - meanwhile exercising three of the four C's: communication, collaboration, and critical thinking/problem solving. Students showed 1.2 standard deviations of improvement on the MIT final exam after three weeks of instruction, a significant positive shift in 7 of the 9 categories in the CLASS, and their grades improved by 0.5 standard deviation in their following physics course (Electricity and Magnetism).
Simple models for estimating local removals of timber in the northeast
David N. Larsen; David A. Gansner
1975-01-01
Provides a practical method of estimating subregional removals of timber and demonstrates its application to a typical problem. Stepwise multiple regression analysis is used to develop equations for estimating removals of softwood, hardwood, and all timber from selected characteristics of socioeconomic structure.
Nonlinear Modeling by Assembling Piecewise Linear Models
NASA Technical Reports Server (NTRS)
Yao, Weigang; Liou, Meng-Sing
2013-01-01
To preserve nonlinearity of a full order system over a parameter range of interest, we propose a simple modeling approach that assembles a set of piecewise local solutions, including the first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model is robust and accurate for inputs considerably different from the base trajectory in form and magnitude. This modeling preserves the nonlinearity of the problems considered in a rather simple and accurate manner.
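The assembly step can be illustrated on a scalar toy function (Python; the RBF width and sampling are our arbitrary choices, far simpler than the aerodynamic application): each sampled state contributes a first-order Taylor model, and Gaussian radial basis weights blend them into a global nonlinear surrogate.

import numpy as np

# Local linear models f(x) ~ f(x_i) + f'(x_i)*(x - x_i), blended by RBFs.
f, df = np.sin, np.cos
centers = np.linspace(0.0, 2.0 * np.pi, 8)          # sampling states
eps = 1.0                                            # RBF width (tuning choice)

def surrogate(x):
    w = np.exp(-(eps * (x - centers)) ** 2)          # radial basis weights
    w = w / w.sum()                                  # normalize the blend
    local = f(centers) + df(centers) * (x - centers) # first-order Taylor terms
    return np.sum(w * local)

xs = np.linspace(0.0, 2.0 * np.pi, 5)
print([round(surrogate(x) - np.sin(x), 4) for x in xs])  # small blending error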
NASA Astrophysics Data System (ADS)
Fang, Shin-Yi; Smith, Garrett; Tabor, Whitney
2018-04-01
This paper analyses a three-layer connectionist network that solves a translation-invariance problem, offering a novel explanation for transposed letter effects in word reading. Analysis of the hidden unit encodings provides insight into two central issues in cognitive science: (1) What is the novelty of claims of "modality-specific" encodings? and (2) How can a learning system establish a complex internal structure needed to solve a problem? Although these topics (embodied cognition and learnability) are often treated separately, we find a close relationship between them: modality-specific features help the network discover an abstract encoding by causing it to break the initial symmetries of the hidden units in an effective way. While this neural model is extremely simple compared to the human brain, our results suggest that neural networks need not be black boxes and that carefully examining their encoding behaviours may reveal how they differ from classical ideas about the mind-world relationship.
Scherzinger, William M.
2016-05-01
The numerical integration of constitutive models in computational solid mechanics codes allows for the solution of boundary value problems involving complex material behavior. Metal plasticity models, in particular, have been instrumental in the development of these codes. Most plasticity models implemented in computational codes use an isotropic von Mises yield surface. The von Mises, or J_2, yield surface admits a simple predictor-corrector algorithm - the radial return algorithm - for integrating the model.
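For reference, a textbook version of that radial return update (J_2 plasticity with linear isotropic hardening; a generic Python sketch, not code from this record) looks as follows: an elastic trial stress is computed, and if the trial von Mises stress exceeds yield, the deviator is scaled radially back to the updated yield surface.

import numpy as np

def radial_return(stress, dstrain, G, K, sigma_y, H):
    # One predictor-corrector update for J2 plasticity with linear isotropic
    # hardening H. stress and dstrain are 3x3 tensors; returns updated stress.
    I = np.eye(3)
    de_vol = np.trace(dstrain)
    de_dev = dstrain - de_vol / 3.0 * I
    trial = stress + 2.0 * G * de_dev + K * de_vol * I       # elastic predictor
    s = trial - np.trace(trial) / 3.0 * I                     # trial deviator
    seq = np.sqrt(1.5 * np.sum(s * s))                        # von Mises stress
    if seq <= sigma_y:
        return trial                                          # elastic step
    dgamma = (seq - sigma_y) / (3.0 * G + H)                  # plastic multiplier
    s_new = s * (1.0 - 3.0 * G * dgamma / seq)                # radial scaling
    return s_new + np.trace(trial) / 3.0 * I                  # add back pressure

stress = np.zeros((3, 3))
dstrain = np.diag([0.002, -0.001, -0.001])
print(radial_return(stress, dstrain, G=80e3, K=160e3, sigma_y=250.0, H=1e3))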
NASA Technical Reports Server (NTRS)
Voorhies, Coerte V.
1993-01-01
The problem of estimating a steady fluid velocity field near the top of Earth's core which induces the secular variation (SV) indicated by models of the observed geomagnetic field is examined in the source-free mantle/frozen-flux core (SFM/FFC) approximation. This inverse problem is non-linear because solutions of the forward problem are deterministically chaotic. The SFM/FFC approximation is inexact, and neither the models nor the observations they represent are either complete or perfect. A method is developed for solving the non-linear inverse motional induction problem posed by the hypothesis of (piecewise, statistically) steady core surface flow and the supposition of a complete initial geomagnetic condition. The method features iterative solution of the weighted, linearized least-squares problem and admits optional biases favoring surficially geostrophic flow and/or spatially simple flow. Two types of weights are advanced: radial field weights for fitting the evolution of the broad-scale portion of the radial field component near Earth's surface implied by the models, and generalized weights for fitting the evolution of the broad-scale portion of the scalar potential specified by the models.
Numerical Modeling in Geodynamics: Success, Failure and Perspective
NASA Astrophysics Data System (ADS)
Ismail-Zadeh, A.
2005-12-01
A real success in numerical modeling of the dynamics of the Earth can be achieved only by multidisciplinary research teams of experts in geodynamics, applied and pure mathematics, and computer science. Success in numerical modeling rests on the following basic, but simple, rules.
(i) People need simplicity most, but they understand intricacies best (B. Pasternak, writer). Start from a simple numerical model, which describes basic physical laws by a set of mathematical equations, and only then move to a complex model. Never start from a complex model, because you cannot understand the contribution of each term of the equations to the modeled geophysical phenomenon.
(ii) Study the numerical methods behind your computer code. Otherwise it becomes difficult to distinguish true from erroneous solutions to the geodynamic problem, especially when the problem is complex.
(iii) Test your model against analytical and asymptotic solutions and simple 2D and 3D model examples. Develop benchmark analyses of different numerical codes and compare numerical results with laboratory experiments. Remember that the numerical tool you employ is not perfect, and there are small bugs in every computer code; testing is therefore the most important part of your numerical modeling.
(iv) Prove (if possible) or learn relevant statements concerning the existence, uniqueness and stability of the solution to the mathematical and discrete problems. Otherwise you can end up solving an improperly-posed problem, and the results of the modeling will be far from the true solution of your model problem.
(v) Try to analyze numerical models of a geological phenomenon using as few tuning variables as possible. Even two tuning variables give enough possibilities to constrain your model well with respect to observations. Data fitting is sometimes quite attractive and can take you far from the principal aim of your numerical modeling: to understand geophysical phenomena.
(vi) If the number of tuning model variables is greater than two, test carefully the effect of each variable on the modeled phenomenon. Remember: With four exponents I can fit an elephant (E. Fermi, physicist).
(vii) Make your numerical model as accurate as possible, but never make great accuracy the aim in itself: Undue precision of computations is the first symptom of mathematical illiteracy (N. Krylov, mathematician).
How complex should a numerical model be? A model which images every detail of reality is as useful as a map of scale 1:1 (J. Robinson, economist). This message is important for geoscientists who study numerical models of complex geodynamical processes. I believe that geoscientists will never create a model of the real Earth's dynamics, but we should try to model the dynamics in such a way as to simulate the basic geophysical processes and phenomena. Does a particular model have predictive power? Each numerical model has predictive power, otherwise the model is useless. The predictability of the model varies with its complexity. Remember that a solution to the numerical model is an approximate solution to the equations, which have been chosen in the belief that they describe the dynamic processes of the Earth. Hence a numerical model predicts the dynamics of the Earth only as well as the mathematical equations describe this dynamics. What methodological advances are still needed for testable geodynamic modeling? Inverse (time-reverse) numerical modeling and data assimilation are new methodologies in geodynamics.
Inverse modeling makes it possible to test geodynamic models forward in time by using initial conditions restored from present-day observations instead of unknown initial conditions.
Graph cuts for curvature based image denoising.
Bae, Egil; Shi, Juan; Tai, Xue-Cheng
2011-05-01
Minimization of total variation (TV) is a well-known method for image denoising. Recently, the relationship between TV minimization problems and binary MRF models has been much explored. This has resulted in some very efficient combinatorial optimization algorithms for the TV minimization problem in the discrete setting via graph cuts. To overcome limitations of the relatively simple TV model, such as staircasing effects, variational models based upon higher order derivatives have been proposed. Euler's elastica model is one such higher order model of central importance; it minimizes the curvature of all level lines in the image. Traditional numerical methods for minimizing the energy in such higher order models are complicated and computationally expensive. In this paper, we present an efficient minimization algorithm based upon graph cuts for minimizing the energy in Euler's elastica model, by simplifying the problem to that of solving a sequence of easily graph-representable problems. This sequence has connections to the gradient flow of the energy function and converges to a minimum point. The numerical experiments show that our new approach is more effective at maintaining smooth visual results while preserving sharp features better than TV models.
Pham-The, Hai; Casañola-Martin, Gerardo; Garrigues, Teresa; Bermejo, Marival; González-Álvarez, Isabel; Nguyen-Hai, Nam; Cabrera-Pérez, Miguel Ángel; Le-Thi-Thu, Huong
2016-02-01
In many absorption, distribution, metabolism, and excretion (ADME) modeling problems, imbalanced data can negatively affect the classification performance of machine learning algorithms. Solutions for handling imbalanced datasets have been proposed, but their application to ADME modeling tasks is underexplored. In this paper, various strategies, including cost-sensitive learning and resampling methods, were studied to tackle the moderate imbalance problem of a large Caco-2 cell permeability database. Simple physicochemical molecular descriptors were utilized for data modeling. Support vector machine classifiers were constructed and compared using multiple comparison tests. Results showed that the models developed on the basis of resampling strategies displayed better performance than the cost-sensitive classification models, especially in the case of oversampled data, where misclassification rates for the minority class were 0.11 and 0.14 for the training and test sets, respectively. A consensus model with an enhanced applicability domain was subsequently constructed and showed improved performance. This model was used to predict a set of randomly selected high-permeability reference drugs according to the biopharmaceutics classification system. Overall, this study provides a comparison of numerous rebalancing strategies and displays the effectiveness of oversampling methods for dealing with imbalanced permeability data problems.
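A minimal version of the oversampling strategy is easy to express with scikit-learn (synthetic features below, not the Caco-2 descriptors; the paper also compares more elaborate resampling and cost-sensitive schemes): duplicate minority-class rows until the classes balance, then train the SVM on the rebalanced training set only.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def random_oversample(X, y, seed=0):
    # Duplicate randomly chosen minority-class rows until classes balance.
    # Apply only to the training split, after splitting, to avoid leakage.
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[np.argmin(counts)]
    idx = np.flatnonzero(y == minority)
    extra = rng.choice(idx, counts.max() - counts.min(), replace=True)
    return np.vstack([X, X[extra]]), np.concatenate([y, y[extra]])

rng = np.random.default_rng(4)
X = rng.standard_normal((300, 5))
y = (rng.random(300) < 0.2).astype(int)            # ~20% minority class
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_bal, y_bal = random_oversample(X_tr, y_tr)
clf = SVC(kernel="rbf").fit(X_bal, y_bal)
print("test accuracy:", clf.score(X_te, y_te))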
Inversion of Attributes and Full Waveforms of Ground Penetrating Radar Data Using PEST
NASA Astrophysics Data System (ADS)
Jazayeri, S.; Kruse, S.; Esmaeili, S.
2015-12-01
We seek to establish a method, based on freely available software, for inverting GPR signals for the underlying physical properties (electrical permittivity, magnetic permeability, target geometries). Such a procedure should be useful for classroom instruction and for analyzing surface GPR surveys over simple targets. We explore the applicability of the PEST parameter estimation software package for GPR inversion (www.pesthomepage.org). PEST is designed to invert data sets with large numbers of parameters and offers a variety of inversion methods. Although primarily used in hydrogeology, the code has been applied to a wide variety of physical problems. The PEST code requires forward model input; forward modeling of the GPR signal is done with the GPRMax package (www.gprmax.com). The problem of extracting the physical characteristics of a subsurface anomaly from GPR data is highly nonlinear. For synthetic models of simple targets in homogeneous backgrounds, we find PEST's nonlinear Gauss-Marquardt-Levenberg algorithm is preferred. This method requires an initial model, for which the weighted differences between model-generated data and those of the "true" synthetic model (the objective function) are calculated. To do this, the Jacobian matrix and the derivatives of the observation data with respect to the model parameters are computed using a finite-difference method. Next, an iterative process of building new models by updating the initial values starts, in order to minimize the objective function. Another measure of the goodness of the final accepted model is the correlation coefficient, which is calculated based on the method of Cooley and Naff. An accepted final model satisfies both of these conditions. Models to date show that the physical properties of simple isolated targets against homogeneous backgrounds can be obtained from multiple traces from common-offset surface surveys. Ongoing work examines the inversion capabilities with more complex target geometries and heterogeneous soils.
Adaptive reduction of constitutive model-form error using a posteriori error estimation techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bishop, Joseph E.; Brown, Judith Alice
2018-06-15
In engineering practice, models are typically kept as simple as possible for ease of setup and use, computational efficiency, maintenance, and overall reduced complexity to achieve robustness. In solid mechanics, a simple and efficient constitutive model may be favored over one that is more predictive, but is difficult to parameterize, is computationally expensive, or is simply not available within a simulation tool. In order to quantify the modeling error due to the choice of a relatively simple and less predictive constitutive model, we adopt the use of a posteriori model-form error-estimation techniques. Based on local error indicators in the energy norm, an algorithm is developed for reducing the modeling error by spatially adapting the material parameters in the simpler constitutive model. The resulting material parameters are not material properties per se, but depend on the given boundary-value problem. As a first step to the more general nonlinear case, we focus here on linear elasticity in which the “complex” constitutive model is general anisotropic elasticity and the chosen simpler model is isotropic elasticity. The algorithm for adaptive error reduction is demonstrated using two examples: (1) a transversely-isotropic plate with hole subjected to tension, and (2) a transversely-isotropic tube with two side holes subjected to torsion.
Recognizing simple polyhedron from a perspective drawing
NASA Astrophysics Data System (ADS)
Zhang, Guimei; Chu, Jun; Miao, Jun
2009-10-01
Existing methods cannot recognize simple polyhedra. In this paper, three problems are researched. First, a method for recognizing triangles and quadrilaterals is introduced based on geometry and angle constraints. Then an Attribute Relation Graph (ARG) is employed to describe the simple polyhedron and its line drawing. Last, a new method is presented to recognize a simple polyhedron from a line drawing. The method filters the candidate database before matching the line drawing to a model, which greatly improves recognition efficiency. Geometrical and topological characteristics are introduced to describe each node of the ARG, so the algorithm can not only recognize polyhedra of different shapes but also distinguish between polyhedra of the same shape but different sizes and proportions. Computer simulations provide a preliminary demonstration of the method's effectiveness.
Emotional autonomy and problem behavior among Chinese adolescents.
Chou, Kee-Lee
2003-12-01
The author examined the association between emotional autonomy and problem behavior among Chinese adolescents living in Hong Kong. The respondents were 512 adolescents, 16 to 18 years of age, who were interviewed for a cross-sectional study. Three dimensions of emotional autonomy (individuation, nondependency on parents, and de-idealization of parents) were significantly and positively correlated with the amount of problem behavior the participants engaged in during the past 6 months. Using a multiple linear regression model, the author found that problem behavior was associated with only one aspect of emotional autonomy: individuation. Results indicated that the relationship between problem behavior and the three aspects of emotional autonomy is similar in individualistic and collectivistic societies.
McLachlan, G J; Bean, R W; Jones, L Ben-Tovim
2006-07-01
An important problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. We provide a straightforward and easily implemented method for estimating the posterior probability that an individual gene is null. The problem can be expressed in a two-component mixture framework, using an empirical Bayes approach. Current implementations of this approach either are limited by the minimal assumptions they make or, under more specific assumptions, are computationally intensive. By converting the value of the test statistic used to test the significance of each gene to a z-score, we propose a simple two-component normal mixture that adequately models the distribution of this score. The usefulness of our approach is demonstrated on three real datasets.
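Once the test statistics are converted to z-scores, both the mixture fit and the posterior null probabilities are short computations. A minimal sketch, assuming the null component is the theoretical N(0,1) and fitting the non-null component by EM (illustrative, not the authors' exact fitting procedure):

```python
import numpy as np
from scipy.stats import norm

def fit_two_component(z, n_iter=100):
    """EM fit of f(z) = p0*N(0,1) + (1-p0)*N(mu1, sig1); returns the
    posterior probability tau0 that each gene is null."""
    p0, mu1, sig1 = 0.9, np.mean(z), np.std(z)
    for _ in range(n_iter):
        f0 = p0 * norm.pdf(z)                   # null density, fixed N(0,1)
        f1 = (1 - p0) * norm.pdf(z, mu1, sig1)  # non-null density
        tau0 = f0 / (f0 + f1)                   # E-step: posterior null prob
        w = 1 - tau0                            # M-step: update components
        p0 = tau0.mean()
        mu1 = np.sum(w * z) / w.sum()
        sig1 = np.sqrt(np.sum(w * (z - mu1) ** 2) / w.sum())
    return tau0, p0, mu1, sig1
```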
Ayvaz, M Tamer
2010-09-20
This study proposes a linked simulation-optimization model for solving unknown groundwater pollution source identification problems. In the proposed model, the MODFLOW and MT3DMS packages are used to simulate the flow and transport processes in the groundwater system. These models are then integrated with an optimization model based on the heuristic harmony search (HS) algorithm. In the proposed simulation-optimization model, the locations and release histories of the pollution sources are treated as the explicit decision variables and determined through the optimization model. An implicit solution procedure is also proposed to determine the optimum number of pollution sources, which is an advantage of this model. The performance of the proposed model is evaluated on two hypothetical examples with simple and complex aquifer geometries, measurement error conditions, and different HS solution parameter sets. Results indicated that the proposed simulation-optimization model is effective and may be used to solve inverse pollution source identification problems.
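The harmony search layer of such a linked model can be sketched generically. In the sketch below, `cost` stands in for the misfit between simulated and observed concentrations returned by a flow and transport run; the HS control parameters are illustrative, not the paper's calibrated values:

```python
import numpy as np

def harmony_search(cost, lb, ub, hms=20, hmcr=0.9, par=0.3,
                   bw=0.05, n_iter=5000, seed=0):
    """Minimize cost(x) over the box [lb, ub] with basic harmony search."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    hm = rng.uniform(lb, ub, size=(hms, lb.size))   # harmony memory
    fit = np.array([cost(x) for x in hm])
    for _ in range(n_iter):
        new = np.empty_like(lb)
        for j in range(lb.size):
            if rng.random() < hmcr:                 # memory consideration
                new[j] = hm[rng.integers(hms), j]
                if rng.random() < par:              # pitch adjustment
                    new[j] += bw * (ub[j] - lb[j]) * rng.uniform(-1, 1)
            else:                                   # random selection
                new[j] = rng.uniform(lb[j], ub[j])
        new = np.clip(new, lb, ub)
        f_new = cost(new)
        worst = np.argmax(fit)
        if f_new < fit[worst]:                      # replace worst harmony
            hm[worst], fit[worst] = new, f_new
    best = np.argmin(fit)
    return hm[best], fit[best]
```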
Aerodynamics of an airfoil with a jet issuing from its surface
NASA Technical Reports Server (NTRS)
Tavella, D. A.; Karamcheti, K.
1982-01-01
A simple, two-dimensional, incompressible, and inviscid model for the problem posed by a two-dimensional wing with a jet issuing from its lower surface is considered, and a parametric analysis is carried out to observe how the aerodynamic characteristics depend on the different parameters. The mathematical problem constitutes a boundary value problem where the position of part of the boundary is not known a priori. A nonlinear optimization approach was used to solve the problem, and the analysis reveals interesting characteristics that may help to better understand the physics involved in more complex situations in connection with high-lift systems.
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.
2004-01-01
Differential Evolution (DE) is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. The DE algorithm has recently been extended to multiobjective optimization problems by using a Pareto-based approach. In this paper, a Pareto DE algorithm is applied to multiobjective aerodynamic shape optimization problems that are characterized by computationally expensive objective function evaluations. To reduce the computational expense, the algorithm is coupled with generalized response surface meta-models based on artificial neural networks. Results are presented for some test optimization problems from the literature to demonstrate the capabilities of the method.
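For reference, the single-objective DE/rand/1/bin scheme that underlies the Pareto extension fits in a few lines. A minimal sketch with a generic objective and illustrative control parameters:

```python
import numpy as np

def de_rand_1_bin(cost, lb, ub, pop_size=30, F=0.8, CR=0.9,
                  n_gen=200, seed=0):
    """Classic DE/rand/1/bin minimizing cost(x) over the box [lb, ub]."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    pop = rng.uniform(lb, ub, size=(pop_size, lb.size))
    fit = np.array([cost(x) for x in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            a, b, c = rng.choice([k for k in range(pop_size) if k != i],
                                 3, replace=False)
            mutant = pop[a] + F * (pop[b] - pop[c])   # differential mutation
            cross = rng.random(lb.size) < CR          # binomial crossover
            cross[rng.integers(lb.size)] = True       # force one gene across
            trial = np.clip(np.where(cross, mutant, pop[i]), lb, ub)
            f_trial = cost(trial)
            if f_trial <= fit[i]:                     # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = np.argmin(fit)
    return pop[best], fit[best]
```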
Additive schemes for certain operator-differential equations
NASA Astrophysics Data System (ADS)
Vabishchevich, P. N.
2010-12-01
Unconditionally stable finite difference schemes for the time approximation of first-order operator-differential systems with self-adjoint operators are constructed. Such systems arise in many applied problems, for example, in connection with nonstationary problems for the system of Stokes (Navier-Stokes) equations. Stability conditions in the corresponding Hilbert spaces for two-level weighted operator-difference schemes are obtained. Additive (splitting) schemes are proposed that involve the solution of simple problems at each time step. The results are used to construct splitting schemes with respect to spatial variables for nonstationary Navier-Stokes equations for incompressible fluid. The capabilities of additive schemes are illustrated using a two-dimensional model problem as an example.
Improving Memory for Optimization and Learning in Dynamic Environments
2011-07-01
algorithm uses simple, incremental clustering to separate solutions into memory entries. The cluster centers are used as the models in the memory. This is...entire days of traffic with realistic traffic demands and turning ratios on a 32-intersection network modeled on downtown Pittsburgh, Pennsylvania...early/tardy problem. Management Science, 35(2):177–191, 1989. [78] Daniel Parrott and Xiaodong Li. A particle swarm model for tracking multiple peaks in
Figure-Ground Segmentation Using Factor Graphs
Shen, Huiying; Coughlan, James; Ivanchenko, Volodymyr
2009-01-01
Foreground-background segmentation has recently been applied [26,12] to the detection and segmentation of specific objects or structures of interest from the background as an efficient alternative to techniques such as deformable templates [27]. We introduce a graphical model (i.e. Markov random field)-based formulation of structure-specific figure-ground segmentation based on simple geometric features extracted from an image, such as local configurations of linear features, that are characteristic of the desired figure structure. Our formulation is novel in that it is based on factor graphs, which are graphical models that encode interactions among arbitrary numbers of random variables. The ability of factor graphs to express interactions higher than pairwise order (the highest order encountered in most graphical models used in computer vision) is useful for modeling a variety of pattern recognition problems. In particular, we show how this property makes factor graphs a natural framework for performing grouping and segmentation, and demonstrate that the factor graph framework emerges naturally from a simple maximum entropy model of figure-ground segmentation. We cast our approach in a learning framework, in which the contributions of multiple grouping cues are learned from training data, and apply our framework to the problem of finding printed text in natural scenes. Experimental results are described, including a performance analysis that demonstrates the feasibility of the approach. PMID:20160994
Scalability problems of simple genetic algorithms.
Thierens, D
1999-01-01
Scalable evolutionary computation has become an intensively studied research topic in recent years. The issue of scalability is predominant in any field of algorithmic design, but it became particularly relevant for the design of competent genetic algorithms once the scalability problems of simple genetic algorithms were understood. Here we present some of the work that has aided in getting a clear insight into the scalability problems of simple genetic algorithms. In particular, we discuss the important issue of building block mixing. We show how the need for mixing places a boundary in the GA parameter space that, together with the boundary from the schema theorem, delimits the region where the GA converges reliably to the optimum in problems of bounded difficulty. This region shrinks rapidly with increasing problem size unless the building blocks are tightly linked in the problem coding structure. In addition, we look at how straightforward extensions of the simple genetic algorithm (namely elitism, niching, and restricted mating) do not significantly improve its scalability.
Low-Velocity Impact Response of Sandwich Beams with Functionally Graded Core
NASA Technical Reports Server (NTRS)
Apetre, N. A.; Sankar, B. V.; Ambur, D. R.
2006-01-01
The problem of low-speed impact of a one-dimensional sandwich panel by a rigid cylindrical projectile is considered. The core of the sandwich panel is functionally graded such that its density, and hence its stiffness, vary through the thickness. The problem combines a static contact problem with the dynamic response of the sandwich panel, obtained via a simple nonlinear spring-mass model (a quasi-static approximation). The variation of the core Young's modulus is represented by a polynomial in the thickness coordinate, but the Poisson's ratio is kept constant. The two-dimensional elasticity equations for the plane sandwich structure are solved using a combination of Fourier series and the Galerkin method. The contact problem is solved using the assumed contact stress distribution method. For the impact problem we used a simple dynamic model based on the quasi-static behavior of the panel: the sandwich beam was modeled as a combination of two springs, a linear spring to account for the global deflection and a nonlinear spring to represent the local indentation effects. Results indicate that the contact stiffness of the beam with graded core increases, causing the contact stresses and other stress components in the vicinity of contact to increase. However, the values of maximum strains corresponding to the maximum impact load are reduced considerably due to grading of the core properties. For a better comparison, the thickness of the functionally graded cores was chosen such that the flexural stiffness was equal to that of a beam with a homogeneous core. The results indicate that functionally graded cores can be used effectively to mitigate or completely prevent impact damage in sandwich composites.
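The two-spring idealization lends itself to a compact simulation. A minimal sketch, assuming a linear global stiffness, a Hertz-type local contact law F = k_c * delta**1.5, springs acting in series, and illustrative parameter values (not the paper's):

```python
import numpy as np

def contact_force(x, k_g, k_c):
    """Force in the series combination of a linear spring (global
    deflection) and a nonlinear spring (local indentation): solve
    x = F/k_g + (F/k_c)**(2/3) for F by bisection."""
    if x <= 0.0:
        return 0.0
    lo, hi = 0.0, k_g * x            # F is bounded by the pure-linear case
    for _ in range(60):
        F = 0.5 * (lo + hi)
        if F / k_g + (F / k_c) ** (2.0 / 3.0) > x:
            hi = F
        else:
            lo = F
    return F

def impact(m=0.1, v0=3.0, k_g=1e5, k_c=1e8, dt=1e-7, t_end=2e-3):
    """Integrate m*x'' = -F(x) for the projectile until rebound and
    return the peak impact force."""
    x, v, t, F_hist = 0.0, v0, 0.0, []
    while t < t_end and (x > 0.0 or v > 0.0):
        F = contact_force(x, k_g, k_c)
        v += -F / m * dt             # semi-implicit Euler step
        x += v * dt
        F_hist.append(F)
        t += dt
    return max(F_hist)
```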
Edge Detection Based On the Characteristic of Primary Visual Cortex Cells
NASA Astrophysics Data System (ADS)
Zhu, M. M.; Xu, Y. L.; Ma, H. Q.
2018-01-01
To address the difficulty of balancing edge-detection accuracy against noise robustness, and drawing on the dynamic and static perception of primary visual cortex (V1) cells, a V1 cell model is established to perform edge detection. A spatiotemporal filter is adopted to simulate the receptive field of V1 simple cells, and the model V1 cell is obtained by integrating the responses of simple cells through half-wave rectification and normalization. Natural image edges are then detected using the static perception of the V1 cells. Simulation results show that the V1 model can basically fit the biological data and has biological generality. Moreover, compared with other edge-detection operators, the proposed model is more effective and more robust.
Crises and Collective Socio-Economic Phenomena: Simple Models and Challenges
NASA Astrophysics Data System (ADS)
Bouchaud, Jean-Philippe
2013-05-01
Financial and economic history is strewn with bubbles and crashes, booms and busts, crises and upheavals of all sorts. Understanding the origin of these events is arguably one of the most important problems in economic theory. In this paper, we review recent efforts to include heterogeneities and interactions in models of decision. We argue that the so-called Random Field Ising model (RFIM) provides a unifying framework to account for many collective socio-economic phenomena that lead to sudden ruptures and crises. We discuss different models that can capture potentially destabilizing self-referential feedback loops, induced either by herding, i.e. reference to peers, or trending, i.e. reference to the past, and that account for some of the phenomenology missing in the standard models. We discuss some empirically testable predictions of these models, for example robust signatures of RFIM-like herding effects, or the logarithmic decay of spatial correlations of voting patterns. One of the most striking results, inspired by statistical physics methods, is that Adam Smith's invisible hand can fail badly at solving simple coordination problems. We also insist on the issue of time-scales, which can be extremely long in some cases and prevent socially optimal equilibria from being reached. As a theoretical challenge, the study of decision rules that violate so-called "detailed balance" is needed to decide whether conclusions based on current models (which all assume detailed balance) are indeed robust and generic.
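The herding mechanism in an RFIM-type decision model is easy to demonstrate numerically. A minimal sketch under illustrative parameters: each agent's binary choice combines a private preference, a public incentive F, and imitation of the average choice; sweeping F up and then down traces a hysteresis loop once the imitation strength exceeds a critical value:

```python
import numpy as np

rng = np.random.default_rng(1)
N, J = 10_000, 1.2               # agents; imitation strength (large J: hysteresis)
f = rng.normal(0.0, 1.0, N)      # heterogeneous private preferences (random fields)

def equilibrium(F, S):
    """Iterate best responses S_i = sign(f_i + F + J*mean(S)) to a fixed point."""
    for _ in range(200):
        S_new = np.sign(f + F + J * S.mean())
        if np.array_equal(S_new, S):
            break
        S = S_new
    return S

# Sweep the public incentive up and down: the average opinion jumps
# discontinuously and the two branches do not coincide.
S = -np.ones(N)
for F in np.concatenate([np.linspace(-2, 2, 41), np.linspace(2, -2, 41)]):
    S = equilibrium(F, S)
    print(f"F = {F:+.1f}  mean opinion = {S.mean():+.3f}")
```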
Robot Control Based On Spatial-Operator Algebra
NASA Technical Reports Server (NTRS)
Rodriguez, Guillermo; Kreutz, Kenneth K.; Jain, Abhinandan
1992-01-01
Method for mathematical modeling and control of robotic manipulators based on spatial-operator algebra providing concise representation and simple, high-level theoretical framework for solution of kinematical and dynamical problems involving complicated temporal and spatial relationships. Recursive algorithms derived immediately from abstract spatial-operator expressions by inspection. Transition from abstract formulation through abstract solution to detailed implementation of specific algorithms to compute solution greatly simplified. Complicated dynamical problems like two cooperating robot arms solved more easily.
Comparison of different approaches of modelling in a masonry building
NASA Astrophysics Data System (ADS)
Saba, M.; Meloni, D.
2017-12-01
The present work models a simple masonry building using two different modelling methods in order to assess their validity for evaluating static stresses. Two of the most widely used commercial software packages for this kind of problem were chosen: 3Muri of S.T.A. Data S.r.l. and Sismicad12 of Concrete S.r.l. While the 3Muri software adopts the Frame by Macro Elements (FME) method, which should be more schematic and more efficient, the Sismicad12 software uses the Finite Element Method (FEM), which guarantees accurate results at a greater computational burden. Remarkable differences in the static stresses between the two approaches were found for such a simple structure, and an interesting comparison and analysis of the reasons is proposed.
Bayesian models based on test statistics for multiple hypothesis testing problems.
Ji, Yuan; Lu, Yiling; Mills, Gordon B
2008-04-01
We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check whether our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. Finally, we apply the proposed methodology to an siRNA screening and a gene expression experiment.
NASA Astrophysics Data System (ADS)
Milani, G.; Bertolesi, E.
2017-07-01
A simple quasi-analytical holonomic homogenization approach for the non-linear analysis of in-plane loaded masonry walls is presented. The elementary cell (REV) is discretized with 24 triangular elastic constant-stress elements (bricks) and non-linear interfaces (mortar). A holonomic behavior with softening is assumed for the mortar. It is shown how the mechanical problem in the unit cell is characterized by very few displacement variables and how the homogenized stress-strain behavior can be evaluated semi-analytically.
Teaching Mendelian Genetics with the Computer.
ERIC Educational Resources Information Center
Small, James W., Jr.
Students in general undergraduate courses in both biology and genetics seem to have great difficulty mastering the basic concepts of Mendelian Genetics and solving even simple problems. In an attempt to correct this situation, students in both courses at Rollins College were introduced to three simulation models of the genetics of the fruit…
Language Management in the Czech Republic
ERIC Educational Resources Information Center
Neustupny, J. V.; Nekvapil, Jiri
2003-01-01
This monograph, based on the Language Management model, provides information on both the "simple" (discourse-based) and "organised" modes of attention to language problems in the Czech Republic. This includes but is not limited to the language policy of the State. This approach does not satisfy itself with discussing problems…
Mineral lineation produced by 3-D rotation of rigid inclusions in confined viscous simple shear
NASA Astrophysics Data System (ADS)
Marques, Fernando O.
2016-08-01
The solid-state flow of rocks commonly produces a parallel arrangement of elongate minerals with their longest axes coincident with the direction of flow: a mineral lineation. However, this does not conform to Jeffery's theory of the rotation of rigid ellipsoidal inclusions (REIs) in viscous simple shear, because rigid inclusions rotate continuously with applied shear. In 2-dimensional (2-D) flow, the REI's greatest axis (e1) is already in the shear direction; therefore, the problem is to find mechanisms that can prevent the rotation of the REI about one axis, the vorticity axis. In 3-D flow, the problem is to find a mechanism that can make e1 rotate towards the shear direction, and so generate a mineral lineation by rigid rotation about two axes. 3-D analogue and numerical modelling was used to test the effects of confinement on REI rotation and, for narrow channels (shear zone thickness over inclusion's least axis, Wr < 2), the results show that: (1) the rotational behaviour deviates greatly from Jeffery's model; (2) inclusions with aspect ratio Ar (greatest over least principal axis, e1/e3) > 1 can rotate backwards from an initial orientation with e1 parallel to the shear plane, in great contrast to Jeffery's model; (3) back rotation is limited because inclusions reach a stable equilibrium orientation; (4) most importantly, and in contrast to Jeffery's model and the 2-D simulations, in 3-D the confined REI gradually rotated about an axis orthogonal to the shear plane towards an orientation with e1 parallel to the shear direction, thus producing a lineation parallel to the shear direction. The modelling results lead to the conclusion that confined simple shear can be responsible for the mineral alignment (lineation) observed in ductile shear zones.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lai, Canhai; Xu, Zhijie; Pan, Wenxiao
2016-01-01
To quantify the predictive confidence of a solid sorbent-based carbon capture design, a hierarchical validation methodology, consisting of basic unit problems with increasing physical complexity coupled with filtered model-based geometric upscaling, has been developed and implemented. This paper describes the computational fluid dynamics (CFD) multi-phase reactive flow simulations and the associated data flows among different unit problems performed within the said hierarchical validation approach. The bench-top experiments used in this calibration and validation effort were carefully designed to follow the desired simple-to-complex unit problem hierarchy, with corresponding data acquisition to support model parameter calibration at each unit problem level. A Bayesian calibration procedure is employed, and the posterior model parameter distributions obtained at one unit-problem level are used as prior distributions for the same parameters in the next-tier simulations. Overall, the results have demonstrated that the multiphase reactive flow models within MFIX can be used to capture the bed pressure, temperature, CO2 capture capacity, and kinetics with quantitative accuracy. The CFD modeling methodology and associated uncertainty quantification techniques presented herein offer a solid framework for estimating the predictive confidence in the virtual scale-up of a larger carbon capture device.
Two-way ANOVA Problems with Simple Numbers.
ERIC Educational Resources Information Center
Read, K. L. Q.; Shihab, L. H.
1998-01-01
Describes how to construct simple numerical examples in two-way ANOVAs, specifically randomized blocks, balanced two-way layouts, and Latin squares. Indicates that working through simple numerical problems is helpful to students meeting a technique for the first time and should be followed by computer-based analysis of larger, real datasets when…
Protein Structure Prediction with Evolutionary Algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, W.E.; Krasnogor, N.; Pelta, D.A.
1999-02-08
Evolutionary algorithms have been successfully applied to a variety of molecular structure prediction problems. In this paper we reconsider the design of genetic algorithms that have been applied to a simple protein structure prediction problem. Our analysis considers the impact of several algorithmic factors for this problem: the conformational representation, the energy formulation, and the way in which infeasible conformations are penalized. Further, we empirically evaluate the impact of these factors on a small set of polymer sequences. Our analysis leads to specific recommendations for both GAs and other heuristic methods for solving PSP on the HP model.
Discrete is it enough? The revival of Piola-Hencky keynotes to analyze three-dimensional Elastica
NASA Astrophysics Data System (ADS)
Turco, Emilio
2018-04-01
Complex problems such as those concerning the mechanics of materials can be confronted only by considering numerical simulations. Analytical methods are useful for building guidelines or reference solutions but, for general cases of technical interest, problems have to be solved numerically, especially in the case of large displacements and deformations. Continuous models probably arose to produce inspiring examples and stemmed from homogenization techniques. These techniques allowed for the solution of some paradigmatic examples but, in general, always require a discretization method for solving problems dictated by the applications. Therefore, and also taking into account that computing power is nowadays cheap and widely available, the question arises: why not use a discrete model for 3D beams directly? In other words, it could be interesting to formulate a discrete model without using an intermediate continuum one, as the latter has to be discretized in any case. These simple considerations immediately evoke some very basic models developed many years ago, when computing power was practically nonexistent but the problem of finding simple solutions to the beam deformation problem was already an emerging one. Actually, in recent years, the keynotes of Hencky and Piola have attracted renewed attention [see, one for all, the work (Turco et al. in Zeitschrift für Angewandte Mathematik und Physik 67(4):1-28, 2016)]: generalizing their results, in the present paper a novel directly discrete three-dimensional beam model is presented and discussed in the framework of geometrically nonlinear analysis. Using a stepwise algorithm based essentially on Newton's method to compute the extrapolations and on Riks' arc-length method to perform the corrections, we obtained numerical simulations showing the computational effectiveness of the presented model; indeed, it offers a convenient balance between accuracy and computational cost.
Mathematical Metaphors: Problem Reformulation and Analysis Strategies
NASA Technical Reports Server (NTRS)
Thompson, David E.
2005-01-01
This paper addresses the critical need for the development of intelligent or assisting software tools for the scientist who is working in the initial problem formulation and mathematical model representation stage of research. In particular, examples of that representation in fluid dynamics and instability theory are discussed. The creation of a mathematical model that is ready for application of certain solution strategies requires extensive symbolic manipulation of the original mathematical model. These manipulations can be as simple as term reordering or as complicated as discovery of various symmetry groups embodied in the equations, whereby Backlund-type transformations create new determining equations and integrability conditions or create differential Grobner bases that are then solved in place of the original nonlinear PDEs. Several examples are presented of the kinds of problem formulations and transforms that can be frequently encountered in model representation for fluids problems. The capability of intelligently automating these types of transforms, available prior to actual mathematical solution, is advocated. Physical meaning and assumption-understanding can then be propagated through the mathematical transformations, allowing for explicit strategy development.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jing Yanfei, E-mail: yanfeijing@uestc.edu.c; Huang Tingzhu, E-mail: tzhuang@uestc.edu.c; Duan Yong, E-mail: duanyong@yahoo.c
This study is mainly focused on iterative solutions, with simple diagonal preconditioning, to two complex-valued nonsymmetric systems of linear equations arising from a computational chemistry model problem proposed by Sherry Li of NERSC. Numerical experiments show the feasibility of iterative methods to some extent when applied to the problems and reveal the competitiveness of our recently proposed Lanczos biconjugate A-orthonormalization methods with other classic and popular iterative methods. In addition, the experimental results indicate that application-specific preconditioners may be required for accelerating convergence.
Convection Regularization of High Wavenumbers in Turbulence ANS Shocks
2011-07-31
dynamics of particles that adhere to one another upon collision and has been studied as a simple cosmological model for describing the nonlinear formation of...solution we mean a solution to the Cauchy problem in the following sense. Definition 5.1. A function u : R × [0, T] → R^N is a weak solution of the...step 2 the limit function in the α → 0 limit is shown to satisfy the definition of a weak solution for the Cauchy problem. Without loss of generality
On the modelling of shallow turbidity flows
NASA Astrophysics Data System (ADS)
Liapidevskii, Valery Yu.; Dutykh, Denys; Gisclon, Marguerite
2018-03-01
In this study we investigate shallow turbidity density currents and underflows from a mechanical point of view. We propose a simple hyperbolic model for such flows. On one hand, our model is based on very basic conservation principles. On the other hand, the turbulent nature of the flow is also taken into account through the energy dissipation mechanism. Moreover, mixing with the pure water along with sediment entrainment and deposition processes are considered, which makes the problem dynamically interesting. One of the main advantages of our model is that it requires the specification of only two modeling parameters: the rate of turbulent dissipation and the rate of pure water entrainment. Consequently, the resulting model turns out to be very simple and self-consistent. The model is validated against several experimental data sets, and several special classes of solutions (such as travelling, self-similar and steady) are constructed. Unsteady simulations show that some special solutions are realized as asymptotic long-time states of dynamic trajectories.
Development of indirect EFBEM for radiating noise analysis including underwater problems
NASA Astrophysics Data System (ADS)
Kwon, Hyun-Wung; Hong, Suk-Yoon; Song, Jee-Hun
2013-09-01
For the analysis of radiating noise problems in medium-to-high frequency ranges, the Energy Flow Boundary Element Method (EFBEM) was developed. EFBEM is an analysis technique that applies the Boundary Element Method (BEM) to Energy Flow Analysis (EFA). Fundamental solutions representing the spherical wave property for radiating noise problems in the open field, and accounting for the free-surface effect underwater, are developed. A directivity factor is also developed to express the wave's directivity patterns in medium-to-high frequency ranges. Indirect EFBEM, using fundamental solutions and fictitious sources, was successfully applied to open-field and underwater noise problems. Through numerical applications, the acoustic energy density distributions due to vibration of a simple plate model and a sphere model were compared with those of a commercial code, and the comparison showed good agreement in the level and pattern of the energy density distributions.
Optimizing Integrated Terminal Airspace Operations Under Uncertainty
NASA Technical Reports Server (NTRS)
Bosson, Christabelle; Xue, Min; Zelinski, Shannon
2014-01-01
In the terminal airspace, integrated departures and arrivals have the potential to increase operations efficiency. Recent research has developed genetic-algorithm-based schedulers for integrated arrival and departure operations under uncertainty. This paper presents an alternate method using a machine job-shop scheduling formulation to model the integrated airspace operations. A multistage stochastic programming approach is chosen to formulate the problem, and candidate solutions are obtained by solving sample average approximation problems with finite sample size. Because approximate solutions are computed, the proposed algorithm incorporates the computation of statistical bounds to estimate the optimality of the candidate solutions. A proof-of-concept study is conducted on a baseline implementation of a simple problem considering a fleet mix of 14 aircraft evolving in a model of the Los Angeles terminal airspace. A more thorough statistical analysis is also performed to evaluate the impact of the number of scenarios considered in the sampled problem. To handle extensive sampling computations, a multithreading technique is introduced.
Norman, Laura M.
2007-01-01
Ecological considerations need to be interwoven with economic policy and planning along the United States-Mexican border. Non-point source pollution can have significant implications for the availability of potable water and the continued health of borderland ecosystems in arid lands. However, environmental assessments in this region present a host of unique issues and problems. A common obstacle to the solution of these problems is the integration of data with different resolutions, naming conventions, and quality to create a consistent database across the binational study area. This report presents a simple modeling approach to predict non-point source pollution that can be used for border watersheds. The modeling approach links a hillslope-scale erosion-prediction model and a spatially derived sediment-delivery model within a geographic information system to estimate erosion, sediment yield, and sediment deposition across the Ambos Nogales watershed in Sonora, Mexico, and Arizona. This paper discusses the procedures used for creating a watershed database to apply the models and presents an example of the modeling approach applied to a conservation-planning problem.
Remote sensing of solar radiation absorbed and reflected by vegetated land surfaces
NASA Technical Reports Server (NTRS)
Myneni, Ranga B.; Asrar, Ghassem; Tanre, Didier; Choudhury, Bhaskar J.
1992-01-01
1D and 3D radiative-transfer models have been used to investigate the problem of remotely sensed determination of the solar radiation absorbed and reflected by vegetated land surfaces. Calculations were conducted for various illumination conditions to determine surface albedo, soil- and canopy-absorbed photosynthetically active and nonactive radiation, and the normalized difference vegetation index. Simple predictive models are developed on the basis of the relationships among these parameters.
Charles H. Luce; David G. Tarboton; Erkan Istanbulluoglu; Robert T. Pack
2005-01-01
Rhodes [2005] brings up some excellent points in his comments on the work of Istanbulluoglu et al. [2004]. We appreciate the opportunity to respond because it is likely that other readers will also wonder how they can apply the relatively simple analysis to important policy questions. Models necessarily reduce the complexity of the problem to make it tractable and...
No Generalization of Practice for Nonzero Simple Addition
ERIC Educational Resources Information Center
Campbell, Jamie I. D.; Beech, Leah C.
2014-01-01
Several types of converging evidence have suggested recently that skilled adults solve very simple addition problems (e.g., 2 + 1, 4 + 2) using a fast, unconscious counting algorithm. These results stand in opposition to the long-held assumption in the cognitive arithmetic literature that such simple addition problems normally are solved by fact…
Analysis of whisker-toughened CMC structural components using an interactive reliability model
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Palko, Joseph L.
1992-01-01
Realizing wider utilization of ceramic matrix composites (CMC) requires the development of advanced structural analysis technologies. This article focuses on the use of interactive reliability models to predict component probability of failure. The deterministic Willam-Warnke failure criterion serves as the theoretical basis for the reliability model presented here. The model has been implemented into a test-bed software program. This computer program has been coupled to a general-purpose finite element program. A simple structural problem is presented to illustrate the reliability model and the computer algorithm.
Dynamic Constraint Satisfaction with Reasonable Global Constraints
NASA Technical Reports Server (NTRS)
Frank, Jeremy
2003-01-01
Previously studied theoretical frameworks for dynamic constraint satisfaction problems (DCSPs) employ a small set of primitive operators to modify a problem instance. They do not address the desire to model problems using sophisticated global constraints, and do not address efficiency questions related to incremental constraint enforcement. In this paper, we extend a DCSP framework to incorporate global constraints with flexible scope. A simple approach to incremental propagation after scope modification can be inefficient under some circumstances. We characterize the cases when this inefficiency can occur, and discuss two ways to alleviate this problem: adding rejection variables to the scope of flexible constraints, and adding new features to constraints that permit increased control over incremental propagation.
NASA Technical Reports Server (NTRS)
Rabitz, Herschel
1987-01-01
The use of parametric and functional gradient sensitivity analysis techniques is considered for models described by partial differential equations. By interchanging appropriate dependent and independent variables, questions of inverse sensitivity may be addressed to gain insight into the inversion of observational data for parameter and function identification in mathematical models. It may be argued that the presence of a subset of dominantly strongly coupled dependent variables will result in the overall system sensitivity behavior collapsing into a simple set of scaling and self-similarity relations among elements of the entire matrix of sensitivity coefficients. These general tools are generic in nature, but herein their application to problems arising in selected areas of physics and chemistry is presented.
Winnerless competition principle and prediction of the transient dynamics in a Lotka-Volterra model
NASA Astrophysics Data System (ADS)
Afraimovich, Valentin; Tristan, Irma; Huerta, Ramon; Rabinovich, Mikhail I.
2008-12-01
Predicting the evolution of multispecies ecological systems is an intriguing problem. A sufficiently complex model with the necessary predicting power requires solutions that are structurally stable. Small variations of the system parameters should not qualitatively perturb its solutions. When one is interested in just asymptotic results of evolution (as time goes to infinity), then the problem has a straightforward mathematical image involving simple attractors (fixed points or limit cycles) of a dynamical system. However, for an accurate prediction of evolution, the analysis of transient solutions is critical. In this paper, in the framework of the traditional Lotka-Volterra model (generalized in some sense), we show that the transient solution representing multispecies sequential competition can be reproducible and predictable with high probability.
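A minimal sketch of the kind of generalized Lotka-Volterra system in which such sequential competition can be observed; the asymmetric competition matrix below is illustrative, not the paper's calibrated one:

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 5
rng = np.random.default_rng(2)
sigma = np.ones(N)                      # growth rates
rho = rng.uniform(0.6, 1.8, (N, N))     # asymmetric competition matrix
np.fill_diagonal(rho, 1.0)              # self-limitation

def glv(t, x):
    """Generalized Lotka-Volterra: dx_i/dt = x_i * (sigma_i - (rho @ x)_i)."""
    return x * (sigma - rho @ x)

x0 = rng.uniform(0.01, 0.1, N)          # small initial populations
sol = solve_ivp(glv, (0.0, 200.0), x0, dense_output=True, rtol=1e-8)
# The transient visits a sequence of saddle points: one species dominates,
# then gives way to the next (a heteroclinic, winnerless-competition path).
print(sol.y[:, -1])
```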
Essa, Khalid S
2014-01-01
A new fast least-squares method is developed to estimate the shape factor (q-parameter) of a buried structure using normalized residual anomalies obtained from gravity data. The problem of shape factor estimation is transformed into a problem of finding a solution of a non-linear equation of the form f(q) = 0 by defining the anomaly value at the origin and at different points on the profile (N-value). Procedures are also formulated to estimate the depth (z-parameter) and the amplitude coefficient (A-parameter) of the buried structure. The method is simple and rapid for estimating parameters that produced gravity anomalies. This technique is used for a class of geometrically simple anomalous bodies, including the semi-infinite vertical cylinder, the infinitely long horizontal cylinder, and the sphere. The technique is tested and verified on theoretical models with and without random errors. It is also successfully applied to real data sets from Senegal and India, and the inverted-parameters are in good agreement with the known actual values.
Implicitly solving phase appearance and disappearance problems using two-fluid six-equation model
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2016-01-25
The phase appearance and disappearance issue presents serious numerical challenges in two-phase flow simulations using the two-fluid six-equation model. Numerical challenges arise from the singular equation system when one phase is absent, as well as from the discontinuity in the solution space when one phase appears or disappears. In this work, a high-resolution spatial discretization scheme on staggered grids and fully implicit methods were applied to the simulation of two-phase flow problems using the two-fluid six-equation model. A Jacobian-free Newton-Krylov (JFNK) method was used to solve the discretized nonlinear problem. An improved numerical treatment was proposed and proved to be effective in handling the numerical challenges. The treatment scheme is conceptually simple, easy to implement, and does not require explicit truncations on solutions, which is essential to conserve mass and energy. Various types of phase appearance and disappearance problems relevant to thermal-hydraulics analysis have been investigated, including a sedimentation problem, an oscillating manometer problem, a non-condensable gas injection problem, a single-phase flow with heat addition problem, and a subcooled flow boiling problem. Successful simulations of these problems demonstrate the capability and robustness of the proposed numerical methods and numerical treatments. As a result, the volume fraction of the absent phase can be calculated effectively as zero.
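The central trick in JFNK is that the Krylov solver never needs the Jacobian explicitly, only Jacobian-vector products, each of which one extra residual evaluation approximates. A minimal sketch with a generic residual function (not the two-fluid six-equation model):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk_step(F, u):
    """One Newton step solved matrix-free: J(u) du = -F(u), with J*v
    approximated by a finite difference of the residual F."""
    r = F(u)
    eps = np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(u))

    def jv(v):
        nv = np.linalg.norm(v)
        if nv == 0.0:
            return np.zeros_like(v)
        return (F(u + (eps / nv) * v) - r) / (eps / nv)

    J = LinearOperator((u.size, u.size), matvec=jv)
    du, info = gmres(J, -r)
    return u + du

# Example: solve x**2 - 2 = 0 component-wise.
F = lambda u: u**2 - 2.0
u = np.full(4, 1.0)
for _ in range(8):
    u = jfnk_step(F, u)
print(u)   # converges to sqrt(2) in every component
```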
Flowfield computation of entry vehicles
NASA Technical Reports Server (NTRS)
Prabhu, Dinesh K.
1990-01-01
The equations governing the multidimensional flow of a reacting mixture of thermally perfect gases were derived. The modeling procedures for the various terms of the conservation laws are discussed. A numerical algorithm, based on the finite-volume approach, was developed to solve these conservation equations. The advantages and disadvantages of the present numerical scheme are discussed from the point of view of accuracy, computer time, and memory requirements. A simple one-dimensional model problem was solved to prove the feasibility and accuracy of the algorithm. A computer code implementing the above algorithm was developed and is presently being applied to simple geometries and conditions. Once the code is completely debugged and validated, it will be used to compute the complete unsteady flow field around the Aeroassist Flight Experiment (AFE) body.
Damage and strength of composite materials: Trends, predictions, and challenges
NASA Technical Reports Server (NTRS)
Obrien, T. Kevin
1994-01-01
Research on damage mechanisms and ultimate strength of composite materials relevant to scaling issues will be addressed in this viewgraph presentation. The use of fracture mechanics and Weibull statistics to predict scaling effects for the onset of isolated damage mechanisms will be highlighted. The ability of simple fracture mechanics models to predict trends that are useful in parametric or preliminary design studies will be reviewed. The limitations of these simple models for complex loading conditions will also be noted. The difficulty in developing the generic criteria for the growth of these mechanisms needed in progressive damage models to predict strength will be addressed. A specific example of a problem where failure is a direct consequence of progressive delamination will be explored. A damage threshold/fail-safety concept for addressing composite damage tolerance will be discussed.
Character expansion methods for matrix models of dually weighted graphs
NASA Astrophysics Data System (ADS)
Kazakov, Vladimir A.; Staudacher, Matthias; Wynter, Thomas
1996-04-01
We consider generalized one-matrix models in which external fields allow control over the coordination numbers on both the original and dual lattices. We rederive in a simple fashion a character expansion formula for these models originally due to Itzykson and Di Francesco, and then demonstrate how to take the large N limit of this expansion. The relationship to the usual matrix model resolvent is elucidated. Our methods give as a by-product an extremely simple derivation of the Migdal integral equation describing the large N limit of the Itzykson-Zuber formula. We illustrate and check our methods by analysing a number of models solvable by traditional means. We then proceed to solve a new model: a sum over planar graphs possessing even coordination numbers on both the original and the dual lattice. We conclude by formulating the equations for the case of arbitrary sets of even, self-dual coupling constants. This opens the way for studying the deep problems of phase transitions from random to flat lattices.
Teufel, Christoph; Fletcher, Paul C
2016-10-01
Computational models have become an integral part of basic neuroscience and have facilitated some of the major advances in the field. More recently, such models have also been applied to the understanding of disruptions in brain function. In this review, using examples and a simple analogy, we discuss the potential for computational models to inform our understanding of brain function and dysfunction. We argue that they may provide, in unprecedented detail, an understanding of the neurobiological and mental basis of brain disorders, and that such insights will be key to progress in diagnosis and treatment. However, there are also potential problems attending this approach. We highlight these and identify simple principles that should always govern the use of computational models in clinical neuroscience, noting especially the importance of a clear specification of a model's purpose and of the mapping between mathematical concepts and reality.
How Fast Can You Go on a Bicycle?
ERIC Educational Resources Information Center
Dunning, R. B.
2009-01-01
The bicycle provides a context-rich problem accessible to students in a first-year physics course, encircling several core physics principles such as conservation of total energy and angular momentum, dissipative forces, and vectors. In this article, I develop a simple numerical model that can be used by any first-year physics student to…
Developmental Dissociation in the Neural Responses to Simple Multiplication and Subtraction Problems
ERIC Educational Resources Information Center
Prado, Jérôme; Mutreja, Rachna; Booth, James R.
2014-01-01
Mastering single-digit arithmetic during school years is commonly thought to depend upon an increasing reliance on verbally memorized facts. An alternative model, however, posits that fluency in single-digit arithmetic might also be achieved via the increasing use of efficient calculation procedures. To test between these hypotheses, we used a…
ERIC Educational Resources Information Center
Abramovich, Sergei; Pieper, Anne
1996-01-01
Describes the use of manipulatives for solving simple combinatorial problems which can lead to the discovery of recurrence relations for permutations and combinations. Numerical evidence and visual imagery generated by a computer spreadsheet through modeling these relations can enable students to experience the ease and power of combinatorial…
ERIC Educational Resources Information Center
Shieh, Gwowen
2006-01-01
This paper considers the problem of analysis of correlation coefficients from a multivariate normal population. A unified theorem is derived for the regression model with normally distributed explanatory variables and the general results are employed to provide useful expressions for the distributions of simple, multiple, and partial-multiple…
Systems Engineering of Education V: Quantitative Concepts for Education Systems.
ERIC Educational Resources Information Center
Silvern, Leonard C.
The fifth (of 14) volume of the Education and Training Consultant's (ETC) series on systems engineering of education is designed for readers who have completed others in the series. It reviews arithmetic and algebraic procedures and applies these to simple education and training systems. Flowchart models of example problems are developed and…
Tour of a simple trigonometry problem
NASA Astrophysics Data System (ADS)
Poon, Kin-Keung
2012-06-01
This article focuses on a simple trigonometric problem that generates a strange phenomenon when different methods are applied to tackle it. A series of problem-solving activities are discussed, so that students can be alerted that the precision of diagrams is important when solving geometric problems. In addition, the problem-solving plan was implemented in a high school, and the results indicated that students are relatively weak in problem-solving abilities but understand and appreciate the thinking process in the different stages and steps of the activities.
Adaptive Neural Networks for Automatic Negotiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sakas, D. P.; Vlachos, D. S.; Simos, T. E.
The use of fuzzy logic and fuzzy neural networks has been found effective for modelling the uncertain relations between the parameters of a negotiation procedure. The problem with these configurations is that they are static; that is, any new knowledge from theory or experiment leads to the construction of entirely new models. To overcome this difficulty, in this work we apply an adaptive neural topology to model the negotiation process. Finally, a simple simulation is carried out in order to test the new method.
Multiple-generator errors are unavoidable under model misspecification.
Jewett, D L; Zhang, Z
1995-08-01
Model misspecification poses a major problem for dipole source localization (DSL) because it causes insidious multiple-generator errors (MulGenErrs) to occur in the fitted dipole parameters. This paper describes how and why this occurs, based upon simple algebraic considerations. MulGenErrs must occur, to some degree, in any DSL analysis of real data because there is model misspecification and mathematically the equations used for the simultaneously active generators must be of a different form than the equations for each generator active alone.
A simple model of hysteresis behavior using spreadsheet analysis
NASA Astrophysics Data System (ADS)
Ehrmann, A.; Blachowicz, T.
2015-01-01
Hysteresis loops occur in many scientific and technical problems, especially as the field-dependent magnetization of ferromagnetic materials, but also as stress-strain curves of materials measured by tensile tests including thermal effects, liquid-solid phase transitions, in cell biology, or in economics. While several mathematical models exist which aim to calculate hysteresis energies and other parameters, here we offer a simple model for a general hysteretic system, showing different hysteresis loops depending on the defined parameters. The calculation, which is based on basic spreadsheet analysis plus an easy macro code, can be used by students to understand how these systems work and how the parameters influence the reaction of the system to an external field. Importantly, in the step-by-step mode, each change of the system state, compared to the last step, becomes visible. The simple program can be developed further through several changes and additions, enabling the building of a tool capable of answering real physical questions in the broad field of magnetism as well as in other scientific areas in which similar hysteresis loops occur.
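The essence of such a step-by-step macro can be reproduced in a few lines. A minimal sketch using a play (backlash) operator followed by saturation; the threshold and saturation parameters are illustrative, not taken from the paper:

```python
import numpy as np

def hysteresis_loop(h, r=0.4, m_sat=1.0, a=0.5):
    """Play operator with threshold r, then tanh saturation: at each
    step the internal state y moves only when the input escapes the
    band [y - r, y + r], which opens up a hysteresis loop."""
    y, m = 0.0, []
    for u in h:
        y = min(max(y, u - r), u + r)   # update state only outside the band
        m.append(m_sat * np.tanh(y / a))
    return np.array(m)

# Drive the system with a cyclic field; the up and down branches differ.
h = np.concatenate([np.linspace(-2, 2, 200), np.linspace(2, -2, 200)])
m = hysteresis_loop(h)
```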
Misconceptions of Mexican Teachers in the Solution of Simple Pendulum
ERIC Educational Resources Information Center
Garcia Trujillo, Luis Antonio; Ramirez Díaz, Mario H.; Rodriguez Castillo, Mario
2013-01-01
Solving for the position of a simple pendulum at any time is apparently one of the simplest and most basic problems in high school and college physics courses. However, because of this apparent simplicity, teachers and physics texts often assume that the solution is immediate, without pausing to reflect on the problem formulation or verifying…
Distribution of model uncertainty across multiple data streams
NASA Astrophysics Data System (ADS)
Wutzler, Thomas
2014-05-01
When confronting biogeochemical models with a diversity of observational data streams, we are faced with the problem of weighting the data streams. Without weighting, or with multiple blocked cost functions, model uncertainty is allocated to the sparse data streams, and possible bias in processes that are strongly constrained is exported to processes that are constrained only by sparse data streams. In this study we propose an approach that aims at making model uncertainty a factor of observation uncertainty that is constant across all data streams. Further, we propose an implementation based on Markov chain Monte Carlo sampling combined with simulated annealing that is able to determine this variance factor. The method is exemplified both with very simple models and artificial data, and with an inversion of the DALEC ecosystem carbon model against multiple observations of Howland forest. We argue that the presented approach can help, and perhaps resolve, the problem of bias export to sparse data streams.
A charge-based model of Junction Barrier Schottky rectifiers
NASA Astrophysics Data System (ADS)
Latorre-Rey, Alvaro D.; Mudholkar, Mihir; Quddus, Mohammed T.; Salih, Ali
2018-06-01
A new charge-based model of the electric field distribution for Junction Barrier Schottky (JBS) diodes is presented, based on the description of the charge-sharing effect between the vertical Schottky junction and the lateral pn-junctions that constitute the active cell of the device. In our model, the inherently 2-D problem is transformed into a simple but accurate 1-D problem with a closed analytical solution that captures the reshaping and reduction of the electric field profile responsible for the improved electrical performance of these devices, while preserving physically meaningful expressions that depend on relevant device parameters. The model is validated by comparing calculated electric field profiles with drift-diffusion simulations of a JBS device, showing good agreement. Although available fully 2-D models provide higher accuracy, they lack physical insight, which makes the proposed model a useful tool for device design.
NASA Technical Reports Server (NTRS)
Bremner, Paul G.; Vazquez, Gabriel; Christiano, Daniel J.; Trout, Dawn H.
2016-01-01
Prediction of the maximum expected electromagnetic pick-up of conductors inside a realistic shielding enclosure is an important canonical problem for system-level EMC design of spacecraft, launch vehicles, aircraft and automobiles. This paper introduces a simple statistical power balance model for prediction of the maximum expected current in a wire conductor inside an aperture enclosure. It calculates both the statistical mean and variance of the immission from the physical design parameters of the problem. Familiar probability density functions can then be used to predict the maximum expected immission for design purposes. The statistical power balance model requires minimal EMC design information and solves orders of magnitude faster than existing numerical models, making it ultimately viable for scaled-up, full system-level modeling. Both experimental test results and full-wave simulation results are used to validate the foundational model.
Troutman, Brent M.
1982-01-01
Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas, illustrates the problems of model input errors.
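The inflation and bias effect described here is, for the linear case, the classical errors-in-variables (attenuation) phenomenon, which is easy to demonstrate with synthetic data. A minimal sketch (the coefficients and noise levels are arbitrary illustrations, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
rain_true = rng.gamma(shape=2.0, scale=10.0, size=n)    # true rainfall input
runoff = 0.6 * rain_true + rng.normal(0.0, 2.0, n)      # linear "true" response
rain_meas = rain_true + rng.normal(0.0, 8.0, n)         # erroneous measurement

slope_true = np.polyfit(rain_true, runoff, 1)[0]   # ~0.60
slope_meas = np.polyfit(rain_meas, runoff, 1)[0]   # biased toward zero (~0.45)
```

The least-squares slope fitted against the noisy input is systematically attenuated, which is exactly the parameter-estimation bias the abstract warns about.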
NASA Astrophysics Data System (ADS)
Watkins, N. W.; Rosenberg, S.; Sanchez, R.; Chapman, S. C.; Credgington, D.
2008-12-01
Since the 1960s Mandelbrot has advocated the use of fractals for the description of the non-Euclidean geometry of many aspects of nature. In particular he proposed two kinds of model to capture persistence in time (his Joseph effect, common in hydrology and with fractional Brownian motion as the prototype) and/or proneness to heavy-tailed jumps (the Noah effect, typical of economic indices, for which he proposed Lévy flights as an exemplar). Both effects are now well demonstrated in space plasmas, notably in the turbulent solar wind. Models have, however, typically emphasised one of the Noah and Joseph parameters (the Lévy exponent μ and the temporal exponent β, respectively) at the other's expense. I will describe recent work in which we studied a simple self-affine stable model, linear fractional stable motion (LFSM), which unifies both effects, and present a recently derived diffusion equation for LFSM. This replaces the second-order spatial derivative in the equation of fBm with a fractional derivative of order μ, but retains a diffusion coefficient with a power-law time dependence rather than a fractional derivative in time. I will also show work in progress using an LFSM model and simple analytic scaling arguments to study the problem of the area between an LFSM curve and a threshold. This problem relates to the burst size measure introduced by Takalo and Consolini into solar-terrestrial physics and further studied by Freeman et al [PRE, 2000] on solar wind Poynting flux near L1. We test how expressions derived by other authors generalise to the non-Gaussian, constant-threshold problem. Ongoing work on extension of these LFSM results to multifractals will also be discussed.
Konovalov, Arkady; Krajbich, Ian
2016-01-01
Organisms appear to learn and make decisions using different strategies known as model-free and model-based learning; the former is mere reinforcement of previously rewarded actions, and the latter is a forward-looking strategy that involves evaluation of action-state transition probabilities. Prior work has used neural data to argue that both model-based and model-free learners implement a value comparison process at trial onset, but model-based learners assign more weight to forward-looking computations. Here, using eye-tracking, we report evidence for a different interpretation of prior results: model-based subjects make their choices prior to trial onset. In contrast, model-free subjects tend to ignore model-based aspects of the task and instead seem to treat the decision problem as a simple comparison process between two differentially valued items, consistent with previous work on sequential-sampling models of decision making. These findings illustrate a problem with assuming that experimental subjects make their decisions at the same prescribed time. PMID:27511383
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dyachkov, Sergey, E-mail: serj.dyachkov@gmail.com; Moscow Institute of Physics and Technology, 9 Institutskiy per., Dolgoprudny, Moscow Region 141700; Levashov, Pavel, E-mail: pasha@ihed.ras.ru
We determine the region of applicability of the finite-temperature Thomas-Fermi model and its thermal part with respect to quantum and exchange corrections. Very high accuracy of the computations has been achieved by using a special approach to the solution of the boundary problem and to the numerical integration. We show that the thermal part of the model can be applied at lower temperatures than the full model. We also offer simple approximations of the boundaries of validity for practical applications.
Computer modeling of inversion layer MOS solar cells and arrays
NASA Technical Reports Server (NTRS)
Ho, Fat Duen
1991-01-01
A two-dimensional numerical model of the inversion layer metal insulator semiconductor (IL/MIS) solar cell is proposed using the finite element method. The two-dimensional current flow in the device is taken into account in this model. The electrostatic potential distribution, the electron concentration distribution, and the hole concentration distribution for different terminal voltages are simulated. Results of simple calculations are presented. The existing problems for this model are addressed, and future work is proposed. The MIS structures are studied and some of the results are reported.
Nonlinear field equations for aligning self-propelled rods.
Peshkov, Anton; Aranson, Igor S; Bertin, Eric; Chaté, Hugues; Ginelli, Francesco
2012-12-28
We derive a set of minimal and well-behaved nonlinear field equations describing the collective properties of self-propelled rods from a simple microscopic starting point, the Vicsek model with nematic alignment. Analysis of their linear and nonlinear dynamics shows good agreement with the original microscopic model. In particular, we derive an explicit expression for density-segregated, banded solutions, allowing us to develop a more complete analytic picture of the problem at the nonlinear level.
Computer simulation of stochastic processes through model-sampling (Monte Carlo) techniques.
Sheppard, C W.
1969-03-01
A simple Monte Carlo simulation program is outlined which can be used for the investigation of random-walk problems, for example in diffusion, or the movement of tracers in the blood circulation. The results given by the simulation are compared with those predicted by well-established theory, and it is shown how the model can be expanded to deal with drift, and with reflexion from or adsorption at a boundary.
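A model-sampling program of the kind outlined here is only a few lines in a modern language. A minimal sketch of a one-dimensional walk with drift and an optional reflecting or absorbing boundary (all names and parameters are illustrative, not from the original program):

```python
import random

def random_walk(steps, p_right=0.5, barrier=None, reflect=True):
    """1-D random walk; drift enters through p_right != 0.5.
    An optional barrier either reflects or absorbs the walker."""
    x = 0
    for _ in range(steps):
        x += 1 if random.random() < p_right else -1
        if barrier is not None and x >= barrier:
            if reflect:
                x = 2 * barrier - x   # bounce back off the boundary
            else:
                return x              # absorbed at the boundary
    return x

# Compare the empirical mean displacement with the theoretical drift
# steps * (2 * p_right - 1), as the abstract does against theory.
walks = [random_walk(1000, p_right=0.55) for _ in range(2000)]
print(sum(walks) / len(walks))   # should be close to 100
```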
An analytic solution of the stochastic storage problem applicable to soil water
Milly, P.C.D.
1993-01-01
The accumulation of soil water during rainfall events and the subsequent depletion of soil water by evaporation between storms can be described, to first order, by simple accounting models. When the alternating supplies (precipitation) and demands (potential evaporation) are viewed as random variables, it follows that soil-water storage, evaporation, and runoff are also random variables. If the forcing (supply and demand) processes are stationary for a sufficiently long period of time, an asymptotic regime should eventually be reached where the probability distribution functions of storage, evaporation, and runoff are stationary and uniquely determined by the distribution functions of the forcing. Under the assumptions that the potential evaporation rate is constant, storm arrivals are Poisson-distributed, rainfall is instantaneous, and storm depth follows an exponential distribution, it is possible to derive the asymptotic distributions of storage, evaporation, and runoff analytically for a simple balance model. A particular result is that the fraction of rainfall converted to runoff is given by (1 − 1/R)/(exp[α(1 − 1/R)] − 1/R), in which R is the ratio of mean potential evaporation to mean rainfall and α is the ratio of soil water-holding capacity to mean storm depth. The problem considered here is analogous to the well-known problem of storage in a reservoir behind a dam, for which the present work offers a new solution for reservoirs of finite capacity. A simple application of the results of this analysis suggests that random, intraseasonal fluctuations of precipitation cannot by themselves explain the observed dependence of the annual water balance on annual totals of precipitation and potential evaporation.
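The closed-form runoff fraction is easy to evaluate numerically. A small sketch (variable names are ours; note the expression is indeterminate at R = 1, where the limit must be taken):

```python
import math

def runoff_fraction(R, alpha):
    """Asymptotic fraction of rainfall converted to runoff.
    R     : mean potential evaporation / mean rainfall (R != 1)
    alpha : soil water-holding capacity / mean storm depth"""
    rinv = 1.0 / R
    return (1.0 - rinv) / (math.exp(alpha * (1.0 - rinv)) - rinv)

# Humid example: evaporative demand is 80% of rainfall, and the soil
# holds about three mean storm depths of water.
print(runoff_fraction(R=0.8, alpha=3.0))   # roughly a third of rainfall
```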
NASA Astrophysics Data System (ADS)
Halbrügge, Marc
2010-12-01
This paper describes the creation of a cognitive model submitted to the 'Dynamic Stocks and Flows' (DSF) modeling challenge. This challenge aims at comparing computational cognitive models of human behavior during an open-ended control task. Participants in the modeling competition were provided with a simulation environment and training data for benchmarking their models, while the actual specification of the competition task was withheld. To meet this challenge, the cognitive model described here was designed and optimized for generalizability. Only two simple assumptions about human problem solving were used to explain the empirical findings of the training data. In-depth analysis of the data set prior to the development of the model led to the dismissal of correlations or other parametric statistics as goodness-of-fit indicators. A new statistical measurement based on rank orders and sequence matching techniques is proposed instead. This measurement, when applied to the human sample, also identifies clusters of subjects that use different strategies for the task. The acceptability of the fits achieved by the model is verified using permutation tests.
New scheduling rules for a dynamic flexible flow line problem with sequence-dependent setup times
NASA Astrophysics Data System (ADS)
Kia, Hamidreza; Ghodsypour, Seyed Hassan; Davoudpour, Hamid
2017-09-01
In the literature, multi-objective dynamic scheduling problems and simple priority rules are widely studied. While such rules are not efficient enough, owing to their simplicity and lack of general insight, composite dispatching rules perform very well because they result from experiments. In this paper, a dynamic flexible flow line problem with sequence-dependent setup times is studied. The objectives of the problem are minimization of mean flow time and mean tardiness. A 0-1 mixed integer model of the problem is formulated. Since the problem is NP-hard, four new composite dispatching rules are proposed to solve it by applying a genetic programming framework and choosing proper operators. Furthermore, a discrete-event simulation model is built to examine the performance of the scheduling rules, considering the four new heuristic rules and six heuristic rules adapted from the literature. The experimental results show that the composite dispatching rules produced by genetic programming outperform the others in minimizing mean flow time and mean tardiness.
Simple arithmetic: not so simple for highly math anxious individuals.
Chang, Hyesang; Sprute, Lisa; Maloney, Erin A; Beilock, Sian L; Berman, Marc G
2017-12-01
Fluency with simple arithmetic, typically achieved in early elementary school, is thought to be one of the building blocks of mathematical competence. Behavioral studies with adults indicate that math anxiety (feelings of tension or apprehension about math) is associated with poor performance on cognitively demanding math problems. However, it remains unclear whether there are fundamental differences in how high and low math anxious individuals approach overlearned simple arithmetic problems that are less reliant on cognitive control. The current study used functional magnetic resonance imaging to examine the neural correlates of simple arithmetic performance across high and low math anxious individuals. We implemented a partial least squares analysis, a data-driven, multivariate analysis method to measure distributed patterns of whole-brain activity associated with performance. Despite overall high simple arithmetic performance across high and low math anxious individuals, performance was differentially dependent on the fronto-parietal attentional network as a function of math anxiety. Specifically, low- compared to high-math anxious individuals perform better when they activate this network less, a potential indication of more automatic problem-solving. These findings suggest that low and high math anxious individuals approach even the most fundamental math problems differently. © The Author (2017). Published by Oxford University Press.
A system for routing arbitrary directed graphs on SIMD architectures
NASA Technical Reports Server (NTRS)
Tomboulian, Sherryl
1987-01-01
There are many problems which can be described in terms of directed graphs that contain a large number of vertices where simple computations occur using data from connecting vertices. A method is given for parallelizing such problems on an SIMD machine model that is bit-serial and uses only nearest-neighbor connections for communication. Each vertex of the graph is assigned to a processor in the machine. Algorithms are given to implement movement of data along the arcs of the graph. This architecture and these algorithms define a system that is relatively simple to build and can do graph processing. All arcs can be traversed in parallel in time O(T), where T is empirically proportional to the diameter of the interconnection network times the average degree of the graph. Modifying or adding a new arc takes the same time as a parallel traversal.
Asymptotic Linear Spectral Statistics for Spiked Hermitian Random Matrices
NASA Astrophysics Data System (ADS)
Passemier, Damien; McKay, Matthew R.; Chen, Yang
2015-07-01
Using the Coulomb Fluid method, this paper derives central limit theorems (CLTs) for linear spectral statistics of three "spiked" Hermitian random matrix ensembles. These include Johnstone's spiked model (i.e., central Wishart with spiked correlation), non-central Wishart with rank-one non-centrality, and a related class of non-central matrices. For a generic linear statistic, we derive simple and explicit CLT expressions as the matrix dimensions grow large. For all three ensembles under consideration, we find that the primary effect of the spike is to introduce a correction term to the asymptotic mean of the linear spectral statistic, which we characterize with simple formulas. The utility of our proposed framework is demonstrated through application to three different linear statistics problems: the classical likelihood ratio test for a population covariance, the capacity analysis of multi-antenna wireless communication systems with a line-of-sight transmission path, and a classical multiple sample significance testing problem.
Numerical simulation of damage evolution for ductile materials and mechanical properties study
NASA Astrophysics Data System (ADS)
El Amri, A.; Hanafi, I.; Haddou, M. E. Y.; Khamlichi, A.
2015-12-01
This paper presents results of numerical modelling of ductile fracture and failure of elements made of 5182H111 aluminium alloy subjected to dynamic traction. The analysis was performed using the Johnson-Cook model in ABAQUS. The difficulty in predicting ductile fracture arises mainly because there is a tremendous span of length scales from the structural problem to the micro-mechanics problem governing the material separation process. This study used the experimental results to calibrate a simple crack propagation criterion for shell elements of the kind often used in practical analyses. The performance of the proposed model is in general good, and it is believed that the presented results and the experimental-numerical calibration procedure can be of use in practical finite-element simulations.
Capacity-constrained traffic assignment in networks with residual queues
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lam, W.H.K.; Zhang, Y.
2000-04-01
This paper proposes a capacity-constrained traffic assignment model for strategic transport planning in which the steady-state user equilibrium principle is extended to road networks with residual queues. Therefore, the road-exit capacity and the queuing effects can be incorporated into the strategic transport model for traffic forecasting. The proposed model is applicable to congested networks, particularly when traffic demand exceeds the capacity of the network during the peak period. An efficient solution method is proposed for solving the steady-state traffic assignment problem with residual queues. Then a simple numerical example is employed to demonstrate the application of the proposed model and solution method, while an example of a medium-sized arterial highway network in Sioux Falls, South Dakota, is used to test the applicability of the proposed solution to real problems.
Random functions via Dyson Brownian Motion: progress and problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Gaoyuan; Battefeld, Thorsten
2016-09-05
We develop a computationally efficient extension of the Dyson Brownian Motion (DBM) algorithm to generate random functions in C^2 locally. We further explain that random functions generated via DBM show an unstable growth as the traversed distance increases. This feature restricts the use of such functions considerably if they are to be used to model globally defined ones. The latter is the case if one uses random functions to model landscapes in string theory. We provide a concrete example, based on a simple axionic potential often used in cosmology, to highlight this problem, and also offer an ad hoc modification of DBM that suppresses this growth to some degree.
The ANMLite Language and Logic for Specifying Planning Problems
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Siminiceanu, Radu I.; Munoz, Cesar A.
2007-01-01
We present the basic concepts of the ANMLite planning language. We discuss various aspects of specifying a plan in terms of constraints and checking the existence of a solution with the help of a model checker. The constructs of the ANMLite language have been kept as simple as possible in order to reduce complexity and simplify the verification problem. We illustrate the language with a specification of the space shuttle crew activity model that was constructed under the Spacecraft Autonomy for Vehicles and Habitats (SAVH) project. The main purpose of this study was to explore the implications of choosing a robust logic behind the specification of constraints, rather than simply proposing a new planning language.
A collision probability analysis of the double-heterogeneity problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hebert, A.
1993-10-01
A practical collision probability model is presented for the description of geometries with many levels of heterogeneity. Regular regions of the macrogeometry are assumed to contain a stochastic mixture of spherical grains or cylindrical tubes. Simple expressions for the collision probabilities in the global geometry are obtained as a function of the collision probabilities in the macro- and microgeometries. This model was successfully implemented in the collision probability kernel of the APOLLO-1, APOLLO-2, and DRAGON lattice codes for the description of a broad range of reactor physics problems. Resonance self-shielding and depletion calculations in the microgeometries are possible because each microregion is explicitly represented.
Kinematic analysis of asymmetric folds in competent layers using mathematical modelling
NASA Astrophysics Data System (ADS)
Aller, J.; Bobillo-Ares, N. C.; Bastida, F.; Lisle, R. J.; Menéndez, C. O.
2010-08-01
Mathematical 2D modelling of asymmetric folds is carried out by applying a combination of different kinematic folding mechanisms: tangential longitudinal strain, flexural flow and homogeneous deformation. The main source of fold asymmetry is found to be the superimposition of a general homogeneous deformation on buckle folds, which typically produces a migration of the hinge point. Forward modelling is performed mathematically using the software 'FoldModeler', by the superimposition of simple shear, or a combination of simple shear and irrotational strain, on initial buckle folds. The resulting folds are Ramsay class 1C folds, comparable to those formed by symmetric flattening, but with different limb lengths and layer-thickness asymmetry. Inverse modelling is done by fitting the natural fold to a computer-simulated fold. A problem in this modelling is the search for the most appropriate homogeneous deformation to superimpose on the initial fold. A comparative analysis of irrotational and rotational deformations is made in order to find the deformation which best simulates the shapes and attitudes of natural folds. Modelling of recumbent folds suggests that optimal conditions for their development are: a) buckling in a simple shear regime with a sub-horizontal shear direction and layering gently dipping towards this direction; b) kinematic amplification due to superimposition of a combination of simple shear and irrotational strain, with a sub-vertical maximum shortening direction for the latter component. The modelling shows that the amount of homogeneous strain necessary for the development of recumbent folds is much less when an irrotational strain component is superimposed at this stage than when the superimposed strain is only simple shear. In nature, the amount of the irrotational strain component probably increases during the development of the fold as a consequence of the increasing influence of gravity due to the tectonic superimposition of rocks.
NASA Technical Reports Server (NTRS)
Gupta, Hoshin V.; Kling, Harald; Yilmaz, Koray K.; Martinez-Baquero, Guillermo F.
2009-01-01
The mean squared error (MSE) and the related normalization, the Nash-Sutcliffe efficiency (NSE), are the two criteria most widely used for calibration and evaluation of hydrological models against observed data. Here, we present a diagnostically interesting decomposition of NSE (and hence MSE), which facilitates analysis of the relative importance of its different components in the context of hydrological modelling, and show how model calibration problems can arise from interactions among these components. The analysis is illustrated by calibrating a simple conceptual precipitation-runoff model to daily data for a number of Austrian basins having a broad range of hydro-meteorological characteristics. Evaluation of the results clearly demonstrates the problems that can be associated with any calibration based on the NSE (or MSE) criterion. While we propose and test an alternative criterion that can help to reduce model calibration problems, the primary purpose of this study is not to present an improved measure of model performance. Instead, we seek to show that there are systematic problems inherent in any optimization based on formulations related to the MSE. The analysis and results have implications for the manner in which we calibrate and evaluate environmental models; we discuss these and suggest possible ways forward that may move us towards an improved and diagnostically meaningful approach to model performance evaluation and identification.
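The decomposition presented in this work is commonly cited as NSE = 2αr − α² − βn², with r the linear correlation between simulation and observation, α the ratio of simulated to observed standard deviations, and βn the mean bias normalized by the observed standard deviation. Assuming that form, a minimal diagnostic sketch (function and variable names are ours):

```python
import numpy as np

def nse_components(sim, obs):
    """Return NSE and the components of the decomposition
    NSE = 2*alpha*r - alpha**2 - beta_n**2."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]                 # timing/shape
    alpha = sim.std() / obs.std()                   # relative variability
    beta_n = (sim.mean() - obs.mean()) / obs.std()  # normalized bias
    return 2 * alpha * r - alpha**2 - beta_n**2, r, alpha, beta_n
```

The decomposition makes the calibration pathology visible: for fixed r, NSE is maximized at α = r, so whenever the correlation is imperfect the optimizer rewards an underestimated flow variability.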
On-line Model Structure Selection for Estimation of Plasma Boundary in a Tokamak
NASA Astrophysics Data System (ADS)
Škvára, Vít; Šmídl, Václav; Urban, Jakub
2015-11-01
Control of the plasma field in a tokamak requires reliable estimation of the plasma boundary. The plasma boundary is given by a complex mathematical model, and the only available measurements are responses of induction coils around the plasma. For the purpose of boundary estimation, the model can be reduced to a simple linear regression with potentially infinitely many elements. The number of elements must be selected manually, and this choice significantly influences the resulting shape. In this paper, we investigate the use of formal model structure estimation techniques for the problem. Specifically, we formulate a sparse least squares estimator using the automatic relevance principle. The resulting algorithm is a repetitive evaluation of the least squares problem which could be computed in real time. Performance of the resulting algorithm is illustrated on simulated data and evaluated against a more detailed and computationally costly model, FREEBIE.
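A sparse least-squares estimator with automatic relevance determination can be sketched as an iterated ridge regression; the following is a generic EM-style ARD update, not necessarily the exact algorithm of the paper (the names and the fixed noise precision are our simplifications):

```python
import numpy as np

def ard_least_squares(A, y, n_iter=50, alpha0=1e-6, beta=1.0):
    """Sparse least squares via automatic relevance determination.
    Each coefficient k gets its own prior precision alpha[k]; the EM
    update drives alpha -> inf for irrelevant columns of A, pruning them."""
    n, k = A.shape
    alpha = np.full(k, alpha0)
    for _ in range(n_iter):
        S = np.linalg.inv(beta * A.T @ A + np.diag(alpha))  # posterior covariance
        m = beta * S @ A.T @ y                              # posterior mean
        alpha = 1.0 / (m**2 + np.diag(S))                   # EM re-estimation
    return m, alpha
```

Because each iteration is a single regularized least-squares solve, the repetitive evaluation the abstract mentions is plausibly real-time for a moderate number of basis elements.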
Mathematical modeling of spinning elastic bodies for modal analysis.
NASA Technical Reports Server (NTRS)
Likins, P. W.; Barbera, F. J.; Baddeley, V.
1973-01-01
The problem of modal analysis of an elastic appendage on a rotating base is examined to establish the relative advantages of various mathematical models of elastic structures and to extract general inferences concerning the magnitude and character of the influence of spin on the natural frequencies and mode shapes of rotating structures. In realization of the first objective, it is concluded that except for a small class of very special cases the elastic continuum model is devoid of useful results, while for constant nominal spin rate the distributed-mass finite-element model is quite generally tractable, since in the latter case the governing equations are always linear, constant-coefficient, ordinary differential equations. Although with both of these alternatives the details of the formulation generally obscure the essence of the problem and permit very little engineering insight to be gained without extensive computation, this difficulty is not encountered when dealing with simple concentrated mass models.
Optimization of Regional Geodynamic Models for Mantle Dynamics
NASA Astrophysics Data System (ADS)
Knepley, M.; Isaac, T.; Jadamec, M. A.
2016-12-01
The SubductionGenerator program is used to construct high resolution, 3D regional thermal structures for mantle convection simulations using a variety of data sources, including sea floor ages and geographically referenced 3D slab locations based on seismic observations. The initial bulk temperature field is constructed using a half-space cooling model or plate cooling model, and related smoothing functions based on a diffusion length-scale analysis. In this work, we seek to improve the 3D thermal model and test different model geometries and dynamically driven flow fields using constraints from observed seismic velocities and plate motions. Through a formal adjoint analysis, we construct the primal-dual version of the multi-objective PDE-constrained optimization problem for the plate motions and seismic misfit. We have efficient, scalable preconditioners for both the forward and adjoint problems based upon a block preconditioning strategy, and a simple gradient update is used to improve the control residual. The full optimal control problem is formulated on a nested hierarchy of grids, allowing a nonlinear multigrid method to accelerate the solution.
Pore-scale modeling of moving contact line problems in immiscible two-phase flow
NASA Astrophysics Data System (ADS)
Kucala, Alec; Noble, David; Martinez, Mario
2016-11-01
Accurate modeling of moving contact line (MCL) problems is imperative in predicting capillary pressure vs. saturation curves, permeability, and preferential flow paths for a variety of applications, including geological carbon storage (GCS) and enhanced oil recovery (EOR). Here, we present a model for the moving contact line using pore-scale computational fluid dynamics (CFD) which solves the full, time-dependent Navier-Stokes equations using the Galerkin finite-element method. The MCL is modeled as a surface traction force proportional to the surface tension, dependent on the static properties of the immiscible fluid/solid system. We present a variety of verification test cases for simple two- and three-dimensional geometries to validate the current model, including threshold pressure predictions in flows through pore-throats for a variety of wetting angles. Simulations involving more complex geometries are also presented to be used in future simulations for GCS and EOR problems. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, Philip LaRoche
At the end of his life, Stephen Jay Kline, longtime professor of mechanical engineering at Stanford University, completed a book on how to address complex systems. The title of the book is 'Conceptual Foundations of Multi-Disciplinary Thinking' (1995), but the topic of the book is systems. Kline first establishes certain limits that are characteristic of our conscious minds. Kline then establishes a complexity measure for systems and uses that complexity measure to develop a hierarchy of systems. Kline then argues that our minds, due to their characteristic limitations, are unable to model the complex systems in that hierarchy. Computers are of no help to us here. Our attempts at modeling these complex systems are based on the way we successfully model some simple systems, in particular, 'inert, naturally-occurring' objects and processes, such as what is the focus of physics. But complex systems overwhelm such attempts. As a result, the best we can do in working with these complex systems is to use a heuristic, what Kline calls the 'Guideline for Complex Systems.' Kline documents the problems that have developed due to 'oversimple' system models and from the inappropriate application of a system model from one domain to another. One prominent such problem is the Procrustean attempt to make the disciplines that deal with complex systems be 'physics-like.' Physics deals with simple systems, not complex ones, using Kline's complexity measure. The models that physics has developed are inappropriate for complex systems. Kline documents a number of the wasteful and dangerous fallacies of this type.
Rethinking Use of the OML Model in Electric Sail Development
NASA Technical Reports Server (NTRS)
Stone, Nobie H.
2016-01-01
In 1924, Irving Langmuir and H. M. Mott-Smith published a theoretical model for the complex plasma sheath phenomenon in which they identified some very special cases that greatly simplify the sheath and allow a closed solution to the problem. The most widely used application is the electrostatic, or "Langmuir," probe in laboratory plasma. Although the Langmuir probe is physically simple (a biased wire), the theory describing its functional behavior and its current-voltage characteristic is extremely complex, and, accordingly, a number of assumptions and approximations are used in the Langmuir-Mott-Smith (LMS) model. These simplifications correspondingly place limits on the model's range of application. Adapting the LMS model to real-life conditions is the subject of numerous papers and dissertations. The Orbit-Motion-Limited (OML) model that is widely used today is one of these adaptations, offering a convenient means of calculating sheath effects. Since the Langmuir probe is a simple biased wire immersed in plasma, it is particularly tempting to use the OML equation to calculate the characteristics of the long, highly biased wires of an Electric Sail in the solar wind plasma. However, in order to arrive at the OML equation, a number of additional simplifying assumptions and approximations (beyond those made by Langmuir and Mott-Smith) are necessary. The OML equation is a good approximation when all conditions are met, but it would appear that the Electric Sail problem lies outside its limits of applicability.
It looks easy! Heuristics for combinatorial optimization problems.
Chronicle, Edward P; MacGregor, James N; Ormerod, Thomas C; Burr, Alistair
2006-04-01
Human performance on instances of computationally intractable optimization problems, such as the travelling salesperson problem (TSP), can be excellent. We have proposed a boundary-following heuristic to account for this finding. We report three experiments with TSPs where the capacity to employ this heuristic was varied. In Experiment 1, participants free to use the heuristic produced solutions significantly closer to optimal than did those prevented from doing so. Experiments 2 and 3 together replicated this finding in larger problems and demonstrated that a potential confound had no effect. In all three experiments, performance was closely matched by a boundary-following model. The results implicate global rather than purely local processes. Humans may have access to simple, perceptually based, heuristics that are suited to some combinatorial optimization tasks.
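A boundary-following strategy is closely related to the classical convex-hull insertion heuristic for the TSP, which is simple to implement. A sketch using scipy (this is a generic hull-based heuristic in the same spirit, not the authors' exact model of human solution order):

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_insertion_tour(pts):
    """Start from the convex-hull tour (the 'boundary'), then splice in
    each remaining city where it increases tour length the least."""
    pts = np.asarray(pts, float)
    tour = list(ConvexHull(pts).vertices)
    rest = set(range(len(pts))) - set(tour)
    dist = lambda i, j: float(np.hypot(*(pts[i] - pts[j])))
    while rest:
        best = None
        for c in rest:
            for k in range(len(tour)):
                i, j = tour[k], tour[(k + 1) % len(tour)]
                cost = dist(i, c) + dist(c, j) - dist(i, j)
                if best is None or cost < best[0]:
                    best = (cost, c, k + 1)
        _, c, pos = best
        tour.insert(pos, c)
        rest.remove(c)
    return tour
```

Heuristics of this family exploit the fact that, in an optimal Euclidean tour, the hull cities are always visited in hull order, which matches the global (boundary) rather than purely local processing the abstract implicates.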
A simple model of the effect of ocean ventilation on ocean heat uptake
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nadiga, Balasubramanya T.; Urban, Nathan Mark
Presentation includes slides on: Earth System Models vs. Simple Climate Models; A Popular SCM: Energy Balance Model of Anomalies; On calibrating against one ESM experiment, the SCM correctly captures that ESM's surface warming response with other forcings; Multi-Model Analysis: Multiple ESMs, Single SCM; Posterior Distributions of ECS; However In Excess of 90% of TOA Energy Imbalance is Sequestered in the World Oceans; Heat Storage in the Two Layer Model; Including TOA Rad. Imbalance and Ocean Heat in Calibration Improves Repr., but Significant Errors Persist; Improved Vertical Resolution Does Not Fix Problem; A Series of Expts. Confirms That Anomaly-Diffusing Models Cannot Properly Represent Ocean Heat Uptake; Physics of the Thermocline; Outcropping Isopycnals and Horizontally-Averaged Layers; Local interactions between outcropping isopycnals leads to non-local interactions between horizontally-averaged layers; Both Surface Warming and Ocean Heat are Well Represented With Just 4 Layers; A Series of Expts. Confirms That When Non-Local Interactions are Allowed, the SCMs Can Represent Both Surface Warming and Ocean Heat Uptake; and Summary and Conclusions.
The 1/N Expansion of Tensor Models Beyond Perturbation Theory
NASA Astrophysics Data System (ADS)
Gurau, Razvan
2014-09-01
We analyze in full mathematical rigor the most general quartically perturbed invariant probability measure for a random tensor. Using a version of the Loop Vertex Expansion (which we call the mixed expansion) we show that the cumulants can be written as explicit series in 1/N plus bounded rest terms. The mixed expansion recasts the problem of determining the subleading corrections in 1/N into a simple combinatorial problem of counting trees decorated by a finite number of loop edges. As an aside, we use the mixed expansion to show that the (divergent) perturbative expansion of the tensor models is Borel summable and to prove that the cumulants respect a uniform scaling bound. In particular the quartically perturbed measures fall, in the N → ∞ limit, into the universality class of Gaussian tensor models.
Word of Mouth : An Agent-based Approach to Predictability of Stock Prices
NASA Astrophysics Data System (ADS)
Shimokawa, Tetsuya; Misawa, Tadanobu; Watanabe, Kyoko
This paper addresses how communication processes among investors affect stock price formation, especially the emerging predictability of stock prices, in financial markets. An agent-based model, called the word-of-mouth model, is introduced for analyzing the problem. This model provides a simple, but sufficiently versatile, description of the informational diffusion process and lucidly explains the predictability of small-sized stocks, a stylized fact in financial markets that is difficult to account for with traditional models. Our model also provides a rigorous examination of the under-reaction hypothesis to informational shocks.
The Pendulum: A Paradigm for the Linear Oscillator
ERIC Educational Resources Information Center
Newburgh, Ronald
2004-01-01
The simple pendulum is a model for the linear oscillator. The usual mathematical treatment of the problem begins with a differential equation that one solves with the techniques of the differential calculus, a formal process that tends to obscure the physics. In this paper we begin with a kinematic description of the motion obtained by experiment…
ERIC Educational Resources Information Center
Wholeben, Brent E.
This volume is an exposition of a mathematical modeling technique for use in the evaluation and solution of complex educational problems at all levels. It explores in detail the application of simple algebraic techniques to such issues as program reduction, fiscal rollbacks, and computer curriculum planning. Part I ("Introduction to the…
Symmetries of relativistic world lines
NASA Astrophysics Data System (ADS)
Koch, Benjamin; Muñoz, Enrique; Reyes, Ignacio A.
2017-10-01
Symmetries are essential for a consistent formulation of many quantum systems. In this paper we discuss a fundamental symmetry, which is present for any Lagrangian term that involves ẋ². As a basic model that incorporates the fundamental symmetries of quantum gravity and string theory, we consider the Lagrangian action of the relativistic point particle. A path integral quantization for this seemingly simple system has long presented notorious problems. Here we show that those problems are overcome by taking into account the additional symmetry, leading directly to the exact Klein-Gordon propagator.
1986-03-31
Martins, J.A.C. and Campos, L.T. [1986], "Existence and Local Uniqueness of Solutions to Contact Problems in Elasticity with Nonlinear Friction…" …noisy and troublesome vibrations. If the sound generated by the friction-induced oscillations of violin strings may be the delight of all music lovers… formulation. See Oden and Martins [1985] and Rabier, Martins, Oden and Campos [1986]. It is now simple to show that, for…
Tracking and Control of a Neutral Particle Beam Using Multiple Model Adaptive Meer Filter.
1987-12-01
…method incorporated by Zicker in 1983 [32]. Once the beam estimation problem had been solved, the problem of beam control was examined. Zicker conducted a… filter. Then, the methods applied by Meer, and later Zicker, to reduce the computational load of a simple Meer filter, will be presented. 2.5.1 Basic… number of possible methods to prune the hypothesis tree and chose the "Best Half Method" as the most viable [21]. Zicker [32] applied the work of Weiss
Space-Time Error Representation and Estimation in Navier-Stokes Calculations
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2006-01-01
The mathematical framework for a posteriori error estimation of functionals elucidated by Eriksson et al. [7] and Becker and Rannacher [3] is revisited in a space-time context. Using these theories, a hierarchy of exact and approximate error representation formulas is presented for use in error estimation and mesh adaptivity. Numerical space-time results for simple model problems as well as compressible Navier-Stokes flow at Re = 300 over a 2D circular cylinder are then presented to demonstrate elements of the error representation theory for time-dependent problems.
Necessary optimality conditions for infinite dimensional state constrained control problems
NASA Astrophysics Data System (ADS)
Frankowska, H.; Marchini, E. M.; Mazzola, M.
2018-06-01
This paper is concerned with first order necessary optimality conditions for state constrained control problems in separable Banach spaces. Assuming inward pointing conditions on the constraint, we give a simple proof of the Pontryagin maximum principle, relying on infinite dimensional neighboring feasible trajectories theorems proved in [20]. Further, we provide sufficient conditions guaranteeing normality of the maximum principle. We work in the abstract semigroup setting, but nevertheless we apply our results to several concrete models involving controlled PDEs. Pointwise state constraints (such as positivity of the solutions) are allowed.
Inverse kinematics of a dual linear actuator pitch/roll heliostat
NASA Astrophysics Data System (ADS)
Freeman, Joshua; Shankar, Balakrishnan; Sundaram, Ganesh
2017-06-01
This work presents a simple, computationally efficient inverse kinematics solution for a pitch/roll heliostat using two linear actuators. The heliostat design and kinematics have been developed, modeled and tested using computer simulation software. A physical heliostat prototype was fabricated to validate the theoretical computations and data. Pitch/roll heliostats have numerous advantages including reduced cost potential and reduced space requirements, with a primary disadvantage being the significantly more complicated kinematics, which are solved here. Novel methods are applied to simplify the inverse kinematics problem which could be applied to other similar problems.
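The core of any heliostat inverse-kinematics solution is that the mirror normal must bisect the sun and target directions; pitch and roll then follow from the chosen axis convention. A sketch under one such convention (ours, not necessarily the paper's; mapping the angles to linear-actuator extensions would additionally require the linkage geometry, omitted here):

```python
import numpy as np

def mirror_pitch_roll(sun_vec, target_vec):
    """Pitch/roll of the mirror normal, assuming the convention
    n = (cos(pitch)*sin(roll), sin(pitch), cos(pitch)*cos(roll))."""
    s = np.asarray(sun_vec, float)
    t = np.asarray(target_vec, float)
    n = s / np.linalg.norm(s) + t / np.linalg.norm(t)
    n /= np.linalg.norm(n)            # unit bisector = required mirror normal
    pitch = np.arcsin(n[1])
    roll = np.arctan2(n[0], n[2])
    return pitch, roll

# Sun 45 degrees up in the x-z plane, target along +z: the mirror
# rolls to half the sun angle (22.5 degrees), with zero pitch.
print(mirror_pitch_roll([1.0, 0.0, 1.0], [0.0, 0.0, 1.0]))
```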
When water meets behavioral economics (or: it is not all about money!)
NASA Astrophysics Data System (ADS)
Escriva-Bou, A.
2014-12-01
Water engineers do not like people; we are better with numbers, equations and models where people's behavior is only a variable, usually constant, or at best a probabilistic approximation. On the other hand, most economic studies relate to people's behavior, and when economists develop engineering-based models, engineers usually think that econometric approaches are too simple to represent the complex systems engineers like to work with. Beyond these simple-minded cliches, there is a lot of ground to explore at the intersection of the two disciplines. Despite the development of infrastructure cost-benefit analysis since Dupuit's work, and the more recent growth of hydroeconomic modeling, we are still missing many potential synergies between behavioral economics and water infrastructure design and management. To give a simple example: urban water infrastructure is designed around water peaks, so reservoirs, pump stations and pipes have to be sized to serve these peaks; water-related energy assessment studies have shown that a lot of energy is used for every drop of water used in our houses, farms and industries, and energy peaks are even larger than water peaks, creating expensive problems for energy supply; and all this energy consumption means greenhouse gas emissions. We can therefore agree that reducing water peaks might create substantial benefits, but would water customers change their behavior? What incentives do they need? Is it only about money, or could better information do the job? Beyond this example there are many other promising economic topics that could help with our daily water problems, such as: game-theoretic approaches to understanding decisions; science-based agent models that help us understand the performance of a system as the sum of agents' actions and interactions; or the analysis of institution-driven management to avoid the tragedy of the commons that depletes groundwater resources globally. And there is no need to remind ourselves that all resource scarcity problems will intensify with population growth, so the sooner we start working on them, the better.
Greedy algorithms and Zipf laws
NASA Astrophysics Data System (ADS)
Moran, José; Bouchaud, Jean-Philippe
2018-04-01
We consider a simple model of firm/city/etc growth based on a multi-item criterion: whenever entity B fares better than entity A on a subset of M items out of K, the agent originally in A moves to B. We solve the model analytically in the cases K = 1 and K → ∞. The resulting stationary distribution of sizes is generically a Zipf law provided M > K/2. When M < K/2, no selection occurs and the size distribution remains thin-tailed. In the special case M = K, one needs to regularize the problem by introducing a small 'default' probability ϕ. We find that the stationary distribution has a power-law tail that becomes a Zipf law as ϕ → 0. The approach to the stationary state can also be characterized, with strong similarities to a simple 'aging' model considered by Barrat and Mézard.
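The move rule itself is one line to implement. A loose toy interpretation in Python, assuming each entity carries K fixed random item scores (the paper's exact dynamics, in particular how item scores are drawn and how the default probability ϕ enters, may differ):

```python
import numpy as np

def simulate(n_agents=10_000, n_entities=100, K=5, M=3,
             steps=200_000, seed=1):
    """Agents compare their current entity A with a random candidate B
    and move whenever B beats A on at least M of the K items."""
    rng = np.random.default_rng(seed)
    scores = rng.random((n_entities, K))             # fixed item scores
    where = rng.integers(n_entities, size=n_agents)  # agent locations
    for _ in range(steps):
        i = rng.integers(n_agents)
        a, b = where[i], rng.integers(n_entities)
        if np.count_nonzero(scores[b] > scores[a]) >= M:
            where[i] = b
    return np.bincount(where, minlength=n_entities)  # entity sizes
```

With M > K/2 the occupancy concentrates on a few dominant entities, producing a heavy-tailed rank-size plot; with M below K/2 moves are essentially unselective and the sizes stay thin-tailed.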
Cohen, Timothy; Craig, Nathaniel; Knapen, Simon
2016-03-15
We propose a simple model of split supersymmetry from gauge mediation. This model features gauginos that are parametrically a loop factor lighter than scalars, accommodates a Higgs boson mass of 125 GeV, and incorporates a simple solution to the μ-Bμ problem. The gaugino mass suppression can be understood as resulting from collective symmetry breaking. Imposing collider bounds on μ and requiring viable electroweak symmetry breaking implies small A-terms and small tan β; the stop mass ranges from 10^5 to 10^8 GeV. In contrast with models of anomaly plus gravity mediation (which also predict a one-loop suppression for gaugino masses), our gauge-mediated scenario predicts aligned squark masses and a gravitino LSP. Gluinos, electroweakinos and Higgsinos can be accessible at the LHC and/or future colliders for a wide region of the allowed parameter space.
Quantitative Modeling of Earth Surface Processes
NASA Astrophysics Data System (ADS)
Pelletier, Jon D.
This textbook describes some of the most effective and straightforward quantitative techniques for modeling Earth surface processes. By emphasizing a core set of equations and solution techniques, the book presents state-of-the-art models currently employed in Earth surface process research, as well as a set of simple but practical research tools. Detailed case studies demonstrate application of the methods to a wide variety of processes including hillslope, fluvial, aeolian, glacial, tectonic, and climatic systems. Exercises at the end of each chapter begin with simple calculations and then progress to more sophisticated problems that require computer programming. All the necessary computer codes are available online at www.cambridge.org/9780521855976. Assuming some knowledge of calculus and basic programming experience, this quantitative textbook is designed for advanced geomorphology courses and as a reference book for professional researchers in Earth and planetary science looking for a quantitative approach to Earth surface processes.
Influence of the Mesh Geometry Evolution on Gearbox Dynamics during Its Maintenance
NASA Astrophysics Data System (ADS)
Dąbrowski, Z.; Dziurdź, J.; Klekot, G.
2017-12-01
Toothed gears are essential elements of power transmission systems. They are applied both in stationary devices and in the drive systems of road vehicles, ships and other craft, as well as airplanes and helicopters. One of the problems related to toothed gear usage is determining their technical state and its evolution. Assuming that the vibration and noise generated by cooperating toothed wheels can be attributed to the gear slippage velocity, a simple model of the cooperation of rolling wheels with skew teeth is proposed for analysing the influence of mesh evolution on gear dynamics. In addition, an example is presented of using an ordinary coherence function to investigate evolutionary mesh changes related to effects that cannot be described by the simple kinematic model.
Kim, Y S; Balland, V; Limoges, B; Costentin, C
2017-07-21
Cyclic voltammetry is a particularly useful tool for characterizing charge accumulation in conductive materials. A simple model is presented to evaluate proton transport effects on charge storage in conductive materials associated with a redox process coupled to proton insertion into the bulk material from an aqueous buffered solution, a situation frequently encountered in metal oxide materials. The interplay between proton transport inside and outside the material is described through a dimensionless formulation of the problem that defines the minimum number of parameters governing the cyclic voltammetry response, together with a simple description of the system geometry. This approach is illustrated by an analysis of proton insertion in a mesoporous TiO2 film.
Simple diffusion can support the pitchfork, the flip bifurcations, and the chaos
NASA Astrophysics Data System (ADS)
Meng, Lili; Li, Xinfu; Zhang, Guang
2017-12-01
In this paper, a discrete rational fraction population model with Dirichlet boundary conditions is considered. Using the discrete maximum principle and the sub- and super-solution method, necessary and sufficient conditions for the existence and uniqueness of positive steady-state solutions are obtained. In addition, the dynamical behavior of a special two-patch metapopulation model is investigated by using the bifurcation method, the center manifold theory, bifurcation diagrams and the largest Lyapunov exponent. The results show that pitchfork and flip bifurcations, as well as chaos, occur. Clearly, these phenomena are caused by simple diffusion alone. A theoretical analysis of the chaos would be important but, unfortunately, no such results are available yet; some open problems are therefore posed.
A simple level set method for solving Stefan problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, S.; Merriman, B.; Osher, S.
1997-07-15
Discussed in this paper are an implicit finite difference scheme for solving a heat equation and a simple level set method for capturing the interface between solid and liquid phases, which are used together to solve Stefan problems.
Kilic, Mustafa Sabri; Bazant, Martin Z; Ajdari, Armand
2007-02-01
In situations involving large potentials or surface charges, the Poisson-Boltzmann (PB) equation has shortcomings because it neglects ion-ion interactions and steric effects. This has been widely recognized by the electrochemistry community, leading to the development of various alternative models resulting in different sets of "modified PB equations," which have had at least qualitative success in predicting equilibrium ion distributions. On the other hand, the literature is scarce in terms of descriptions of concentration dynamics in these regimes. Here, adapting strategies developed to modify the PB equation, we propose a simple modification of the widely used Poisson-Nernst-Planck (PNP) equations for ionic transport, which at least qualitatively accounts for steric effects. We analyze numerical solutions of these modified PNP equations on the model problem of the charging of a simple electrolyte cell, and compare the outcome to that of the standard PNP equations. Finally, we repeat the asymptotic analysis of Bazant, Thornton, and Ajdari [Phys. Rev. E 70, 021506 (2004)] for this new system of equations to further document the interest and limits of validity of the simpler equivalent electrical circuit models introduced in Part I [Kilic, Bazant, and Ajdari, Phys. Rev. E 75, 021502 (2007)] for such problems.
A Transportation Model for a Space Colonization and Manufacturing System: A Q-GERT Simulation.
1982-12-01
…24. Heppenheimer, Thomas A. Colonies in Space. Harrisburg, Pa.… Colonel Thomas D. Clark. Captain John D. Rask, my co-worker on that project, and I developed a simple model for the transportation system during this… K. O'Neill and Thomas A. Heppenheimer. (An example of a Delphi for a space problem is given in Ref 8.) Some of the parameters needing… better, or
NASA Technical Reports Server (NTRS)
Englert, G. W.
1971-01-01
A model of the random walk is formulated to allow a simple computing procedure to replace the difficult problem of solving the Fokker-Planck equation. The step sizes and the probabilities of taking steps in the various directions are expressed in terms of Fokker-Planck coefficients. Application is made to many-particle systems with Coulomb interactions. The relaxation of a highly peaked velocity distribution of particles to equilibrium conditions is illustrated.
Simple Scaling of Multi-Stream Jet Plumes for Aeroacoustic Modeling
NASA Technical Reports Server (NTRS)
Bridges, James
2016-01-01
When creating simplified, semi-empirical models for the noise of simple single-stream jets near surfaces it has proven useful to be able to generalize the geometry of the jet plume. Having a model that collapses the mean and turbulent velocity fields for a range of flows allows the problem to become one of relating the normalized jet field and the surface. However, most jet flows of practical interest involve jets of two or more coannular flows for which standard models for the plume geometry do not exist. The present paper describes one attempt to relate the mean and turbulent velocity fields of multi-stream jets to that of an equivalent single-stream jet. The normalization of single-stream jets is briefly reviewed, from the functional form of the flow model to the results of the modeling. Next, PIV data from a number of multi-stream jets is analyzed in a similar fashion. The results of several single-stream approximations of the multi-stream jet plume are demonstrated, with a best approximation determined and the shortcomings of the model highlighted.
Simple Scaling of Multi-Stream Jet Plumes for Aeroacoustic Modeling
NASA Technical Reports Server (NTRS)
Bridges, James
2015-01-01
When creating simplified, semi-empirical models for the noise of simple single-stream jets near surfaces it has proven useful to be able to generalize the geometry of the jet plume. Having a model that collapses the mean and turbulent velocity fields for a range of flows allows the problem to become one of relating the normalized jet field and the surface. However, most jet flows of practical interest involve jets of two or more co-annular flows for which standard models for the plume geometry do not exist. The present paper describes one attempt to relate the mean and turbulent velocity fields of multi-stream jets to that of an equivalent single-stream jet. The normalization of single-stream jets is briefly reviewed, from the functional form of the flow model to the results of the modeling. Next, PIV (Particle Image Velocimetry) data from a number of multi-stream jets is analyzed in a similar fashion. The results of several single-stream approximations of the multi-stream jet plume are demonstrated, with a 'best' approximation determined and the shortcomings of the model highlighted.
Randomized shortest-path problems: two related models.
Saerens, Marco; Achbany, Youssef; Fouss, François; Yen, Luh
2009-08-01
This letter addresses the problem of designing the transition probabilities of a finite Markov chain (the policy) in order to minimize the expected cost for reaching a destination node from a source node while maintaining a fixed level of entropy spread throughout the network (the exploration). It is motivated by the following scenario. Suppose you have to route agents through a network in some optimal way, for instance, by minimizing the total travel cost-nothing particular up to now-you could use a standard shortest-path algorithm. Suppose, however, that you want to avoid pure deterministic routing policies in order, for instance, to allow some continual exploration of the network, avoid congestion, or avoid complete predictability of your routing strategy. In other words, you want to introduce some randomness or unpredictability in the routing policy (i.e., the routing policy is randomized). This problem, which will be called the randomized shortest-path problem (RSP), is investigated in this work. The global level of randomness of the routing policy is quantified by the expected Shannon entropy spread throughout the network and is provided a priori by the designer. Then, necessary conditions to compute the optimal randomized policy-minimizing the expected routing cost-are derived. Iterating these necessary conditions, reminiscent of Bellman's value iteration equations, allows computing an optimal policy, that is, a set of transition probabilities in each node. Interestingly and surprisingly enough, this first model, while formulated in a totally different framework, is equivalent to Akamatsu's model ( 1996 ), appearing in transportation science, for a special choice of the entropy constraint. We therefore revisit Akamatsu's model by recasting it into a sum-over-paths statistical physics formalism allowing easy derivation of all the quantities of interest in an elegant, unified way. For instance, it is shown that the unique optimal policy can be obtained by solving a simple linear system of equations. This second model is therefore more convincing because of its computational efficiency and soundness. Finally, simulation results obtained on simple, illustrative examples show that the models behave as expected.
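A sketch in this spirit (an entropy-regularized "soft" Bellman recursion on a small assumed graph; not the authors' exact sum-over-paths formulation, but it shows how a temperature-like parameter theta interpolates between random and deterministic shortest-path routing):

    import numpy as np

    # Entropy-regularized routing sketch: a "soft" Bellman recursion.
    # Large theta recovers deterministic shortest paths; small theta spreads
    # flow (and entropy) over many paths. Toy graph and costs are assumed.

    costs = {                       # directed graph: node -> {successor: cost}
        "s": {"a": 1.0, "b": 2.0},
        "a": {"b": 0.5, "t": 2.5},
        "b": {"t": 1.0},
        "t": {},
    }
    theta = 2.0
    V = {n: (0.0 if n == "t" else 50.0) for n in costs}   # destination has V = 0

    for _ in range(100):            # iterate the soft recursion to a fixed point
        for n in costs:
            if not costs[n]:
                continue
            z = sum(np.exp(-theta * (c + V[m])) for m, c in costs[n].items())
            V[n] = -np.log(z) / theta

    for n in costs:                 # randomized policy: softmax over successors
        if costs[n]:
            probs = {m: float(np.exp(-theta * (c + V[m] - V[n])))
                     for m, c in costs[n].items()}
            print(n, "->", {m: round(p, 3) for m, p in probs.items()})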
An Introduction to Magnetospheric Physics by Means of Simple Models
NASA Technical Reports Server (NTRS)
Stern, D. P.
1981-01-01
The large scale structure and behavior of the Earth's magnetosphere is discussed. The model is suitable for inclusion in courses on space physics, plasmas, astrophysics or the Earth's environment, as well as for self-study. Nine quantitative problems, dealing with properties of linear superpositions of a dipole and a constant field are presented. Topics covered include: open and closed models of the magnetosphere; field line motion; the role of magnetic merging (reconnection); magnetospheric convection; and the origin of the magnetopause, polar cusps, and high latitude lobes.
A Bayesian network model for predicting pregnancy after in vitro fertilization.
Corani, G; Magli, C; Giusti, A; Gianaroli, L; Gambardella, L M
2013-11-01
We present a Bayesian network model for predicting the outcome of in vitro fertilization (IVF). The problem is characterized by a particular missingness process; we propose a simple but effective averaging approach which improves parameter estimates compared to the traditional MAP estimation. We present results with generated data and the analysis of a real data set. Moreover, we assess by means of a simulation study the effectiveness of the model in supporting the selection of the embryos to be transferred. © 2013 Elsevier Ltd. All rights reserved.
Alternating Direction Implicit (ADI) schemes for a PDE-based image osmosis model
NASA Astrophysics Data System (ADS)
Calatroni, L.; Estatico, C.; Garibaldi, N.; Parisotto, S.
2017-10-01
We consider Alternating Direction Implicit (ADI) splitting schemes to compute efficiently the numerical solution of the PDE osmosis model considered by Weickert et al. in [10] for several imaging applications. The discretised scheme is shown to preserve analogous properties to the continuous model. The dimensional splitting strategy translates numerically into the solution of simple tridiagonal systems, for which standard matrix factorisation techniques can be used to improve upon the performance of classical implicit methods, even for large time steps. Applications to the shadow removal problem are presented.
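For illustration, the kind of tridiagonal solve at the heart of such dimensional splitting can be sketched for a generic 2D diffusion step (a Peaceman-Rachford stand-in, not Weickert et al.'s osmosis operator):

    import numpy as np

    # Sketch of an ADI (Peaceman-Rachford) step for u_t = u_xx + u_yy.
    # Each half step is implicit in one direction only, so each row/column
    # requires just a tridiagonal solve (Thomas algorithm).

    def thomas(a, b, c, d):
        """Solve a tridiagonal system; a: sub-, b: main-, c: super-diagonal."""
        n = len(d)
        cp, dp = np.empty(n), np.empty(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    def adi_step(u, r):
        """One ADI time step, mesh ratio r = dt / (2 h^2), u = 0 on the border."""
        n = u.shape[0]
        a = np.full(n, -r); b = np.full(n, 1 + 2 * r); c = np.full(n, -r)
        b[0] = b[-1] = 1.0; c[0] = 0.0; a[-1] = 0.0   # pin Dirichlet boundaries
        half = u.copy()
        for j in range(1, n - 1):          # implicit in x, explicit in y
            rhs = u[:, j] + r * (u[:, j + 1] - 2.0 * u[:, j] + u[:, j - 1])
            rhs[0] = rhs[-1] = 0.0
            half[:, j] = thomas(a, b, c, rhs)
        out = half.copy()
        for i in range(1, n - 1):          # implicit in y, explicit in x
            rhs = half[i, :] + r * (half[i + 1, :] - 2.0 * half[i, :] + half[i - 1, :])
            rhs[0] = rhs[-1] = 0.0
            out[i, :] = thomas(a, b, c, rhs)
        return out

    u = np.zeros((65, 65)); u[32, 32] = 1.0    # point source, zero boundaries
    for _ in range(50):
        u = adi_step(u, r=0.5)                  # stable even for large steps
    print("remaining mass:", u.sum(), " peak:", u.max())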
Overview and extensions of a system for routing directed graphs on SIMD architectures
NASA Technical Reports Server (NTRS)
Tomboulian, Sherryl
1988-01-01
Many problems can be described in terms of directed graphs that contain a large number of vertices where simple computations occur using data from adjacent vertices. A method is given for parallelizing such problems on an SIMD machine model that uses only nearest neighbor connections for communication, and has no facility for local indirect addressing. Each vertex of the graph will be assigned to a processor in the machine. Rules for a labeling are introduced that support the use of a simple algorithm for movement of data along the edges of the graph. Additional algorithms are defined for addition and deletion of edges. Modifying or adding a new edge takes the same time as parallel traversal. This combination of architecture and algorithms defines a system that is relatively simple to build and can do fast graph processing. All edges can be traversed in parallel in time O(T), where T is empirically proportional to the average path length in the embedding times the average degree of the graph. Additionally, researchers present an extension to the above method which allows for enhanced performance by allowing some broadcasting capabilities.
Fundamental studies of structure borne noise for advanced turboprop applications
NASA Technical Reports Server (NTRS)
Eversman, W.; Koval, L. R.
1985-01-01
The transmission of sound generated by wing-mounted, advanced turboprop engines into the cabin interior via structural paths is considered. The structural model employed is a beam representation of the wing box carried into the fuselage via a representative frame type of carry through structure. The structure for the cabin cavity is a stiffened shell of rectangular or cylindrical geometry. The structure is modelled using a finite element formulation and the acoustic cavity is modelled using an analytical representation appropriate for the geometry. The structural and acoustic models are coupled by the use of hard wall cavity modes for the interior and vacuum structural modes for the shell. The coupling is accomplished using a combination of analytical and finite element models. The advantage is the substantial reduction in dimensionality achieved by modelling the interior analytically. The mathematical model for the interior noise problem is demonstrated with a simple plate/cavity system which has all of the features of the fuselage interior noise problem.
Polymer Fluid Dynamics: Continuum and Molecular Approaches.
Bird, R B; Giacomin, A J
2016-06-07
To solve problems in polymer fluid dynamics, one needs the equations of continuity, motion, and energy. The last two equations contain the stress tensor and the heat-flux vector for the material. There are two ways to formulate the stress tensor: (a) One can write a continuum expression for the stress tensor in terms of kinematic tensors, or (b) one can select a molecular model that represents the polymer molecule and then develop an expression for the stress tensor from kinetic theory. The advantage of the kinetic theory approach is that one gets information about the relation between the molecular structure of the polymers and the rheological properties. We restrict the discussion primarily to the simplest stress tensor expressions or constitutive equations containing from two to four adjustable parameters, although we do indicate how these formulations may be extended to give more complicated expressions. We also explore how these simplest expressions are recovered as special cases of a more general framework, the Oldroyd 8-constant model. Studying the simplest models allows us to discover which types of empiricisms or molecular models seem to be worth investigating further. We also explore equivalences between continuum and molecular approaches. We restrict the discussion to several types of simple flows, such as shearing flows and extensional flows, which are of greatest importance in industrial operations. Furthermore, if these simple flows cannot be well described by continuum or molecular models, then it is not necessary to lavish time and energy to apply them to more complex flow problems.
An assessment on convective and radiative heat transfer modelling in tubular solid oxide fuel cells
NASA Astrophysics Data System (ADS)
Sánchez, D.; Muñoz, A.; Sánchez, T.
Four models of convective and radiative heat transfer inside tubular solid oxide fuel cells are presented in this paper, all of them applicable to multidimensional simulations. The work is aimed at assessing whether it is necessary to use a very detailed and complicated model to simulate heat transfer inside this kind of device and, for those cases when simple models can be used, the errors are estimated and compared to those of the more complex models. For the convective heat transfer, two models are presented. One of them accounts for the variation of the film coefficient as a function of local temperature and composition. This model gives a local value for the heat transfer coefficients and establishes the thermal entry length. The second model employs an average value of the transfer coefficient, which is applied to the whole length of the duct being studied. It is concluded that, unless there is a need to calculate local temperatures, a simple model can be used to evaluate the global performance of the cell with satisfactory accuracy. For the radiation heat transfer, two models are likewise presented. One of them considers radial radiation exclusively and, thus, radiative exchange between adjacent cells is neglected. The second model accounts for radiation in all directions but substantially increases the complexity of the problem. For this case, it is concluded that deviations between the two models are higher than for convection. In fact, using a simple model can lead to a non-negligible underestimation of the temperature of the cell.
On Matrices, Automata, and Double Counting
NASA Astrophysics Data System (ADS)
Beldiceanu, Nicolas; Carlsson, Mats; Flener, Pierre; Pearson, Justin
Matrix models are ubiquitous for constraint problems. Many such problems have a matrix of variables M, with the same constraint defined by a finite-state automaton A on each row of M and a global cardinality constraint gcc on each column of M. We give two methods for deriving, by double counting, necessary conditions on the cardinality variables of the gcc constraints from the automaton A. The first method yields linear necessary conditions and simple arithmetic constraints. The second method introduces the cardinality automaton, which abstracts the overall behaviour of all the row automata and can be encoded by a set of linear constraints. We evaluate the impact of our methods on a large set of nurse rostering problem instances.
Automatic Control via Thermostats of a Hyperbolic Stefan Problem with Memory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colli, P.; Grasselli, M.; Sprekels, J.
1999-03-15
A hyperbolic Stefan problem based on the linearized Gurtin-Pipkin heat conduction law is considered. The temperature and free boundary are controlled by a thermostat acting on the boundary. This feedback control is based on temperature measurements performed by real thermal sensors located within the domain containing the two-phase system and/or at its boundary. Three different types of thermostats are analyzed: simple switch, relay switch, and a Preisach hysteresis operator. The resulting models lead to integrodifferential hyperbolic Stefan problems with nonlinear and nonlocal boundary conditions. Existence results are proved in all the cases. Uniqueness is also shown, except in the situation corresponding to the ideal switch.
REVIEWS OF TOPICAL PROBLEMS: Axisymmetric stationary flows in compact astrophysical objects
NASA Astrophysics Data System (ADS)
Beskin, Vasilii S.
1997-07-01
A review is presented of the analytical results available for a large class of axisymmetric stationary flows in the vicinity of compact astrophysical objects. The determination of the two-dimensional structure of the poloidal magnetic field (hydrodynamic flow field) faces severe difficulties, due to the complexity of the trans-field equation for stationary axisymmetric flows. However, an approach exists which enables direct problems to be solved even within the balance law framework. This possibility arises when an exact solution to the equation is available and flows close to it are investigated. As a result, with the use of simple model problems, the basic features of supersonic flows past real compact objects are determined.
Finite horizon optimum control with and without a scrap value
NASA Astrophysics Data System (ADS)
Neck, R.; Blueschke-Nikolaeva, V.; Blueschke, D.
2017-06-01
In this paper, we study the effects of scrap values on the solutions of optimal control problems with finite time horizon. We show how to include a scrap value, either for the state variables or for the state and the control variables, in the OPTCON2 algorithm for the optimal control of dynamic economic systems. We ask whether the introduction of a scrap value can serve as a substitute for an infinite horizon in economic policy optimization problems where the latter option is not available. Using a simple numerical macroeconomic model, we demonstrate that the introduction of a scrap value cannot induce control policies which can be expected for problems with an infinite time horizon.
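To make the role of a scrap value concrete, here is a minimal sketch (a scalar linear-quadratic control problem, not the OPTCON2 algorithm itself) in which the terminal weight S_T plays the part of a scrap value on the state:

    import numpy as np

    # Minimal sketch: scalar finite-horizon LQR, where the terminal weight S_T
    # acts as a "scrap value" on the final state. Illustration only; not the
    # OPTCON2 algorithm or a macroeconomic model.

    A, B, Q, R, T = 1.02, 1.0, 1.0, 4.0, 10   # assumed system, costs, horizon

    def feedback_gains(S_T):
        """Backward Riccati recursion; returns the time-varying gains K_t."""
        S, gains = S_T, []
        for _ in range(T):
            K = (B * S * A) / (R + B * S * B)   # gain at this stage
            S = Q + A * S * A - A * S * B * K   # Riccati update
            gains.append(K)
        return gains[::-1]                      # reorder to forward time

    for S_T in (0.0, 100.0):                    # without / with a large scrap value
        K = feedback_gains(S_T)
        x = 1.0
        for t in range(T):
            x = A * x + B * (-K[t] * x)         # apply the optimal policy
        print(f"scrap value S_T = {S_T:5.1f}:  final state = {x: .4f}")

A large terminal weight pushes the final state toward zero, mimicking the disciplining effect that an infinite horizon would otherwise provide.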
Brief introductory guide to agent-based modeling and an illustration from urban health research.
Auchincloss, Amy H; Garcia, Leandro Martin Totaro
2015-11-01
There is growing interest among urban health researchers in addressing complex problems using conceptual and computation models from the field of complex systems. Agent-based modeling (ABM) is one computational modeling tool that has received a lot of interest. However, many researchers remain unfamiliar with developing and carrying out an ABM, hindering the understanding and application of it. This paper first presents a brief introductory guide to carrying out a simple agent-based model. Then, the method is illustrated by discussing a previously developed agent-based model, which explored inequalities in diet in the context of urban residential segregation.
Brief introductory guide to agent-based modeling and an illustration from urban health research
Auchincloss, Amy H.; Garcia, Leandro Martin Totaro
2017-01-01
There is growing interest among urban health researchers in addressing complex problems using conceptual and computation models from the field of complex systems. Agent-based modeling (ABM) is one computational modeling tool that has received a lot of interest. However, many researchers remain unfamiliar with developing and carrying out an ABM, hindering the understanding and application of it. This paper first presents a brief introductory guide to carrying out a simple agent-based model. Then, the method is illustrated by discussing a previously developed agent-based model, which explored inequalities in diet in the context of urban residential segregation. PMID:26648364
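A minimal skeleton of the kind of model discussed (hypothetical states and rules chosen purely for illustration, not the authors' segregation/diet model):

    import random

    # Minimal agent-based model skeleton: agents on a torus grid adopt a binary
    # "behavior" with probability depending on the share of neighbors exhibiting
    # it. Shows the basic ABM loop: initialize -> step agents -> observe outcomes.
    # All rules and numbers are hypothetical.

    random.seed(1)
    N, STEPS, SOCIAL_WEIGHT = 20, 50, 0.8
    grid = [[random.random() < 0.2 for _ in range(N)] for _ in range(N)]

    def neighbor_share(g, i, j):
        """Fraction of the 8 surrounding cells whose agent is active."""
        cells = [g[(i + di) % N][(j + dj) % N]
                 for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
        return sum(cells) / 8.0

    for _ in range(STEPS):
        new = [row[:] for row in grid]          # synchronous update
        for i in range(N):
            for j in range(N):
                p = 0.1 + SOCIAL_WEIGHT * neighbor_share(grid, i, j)
                new[i][j] = random.random() < p
        grid = new

    prevalence = sum(sum(row) for row in grid) / (N * N)
    print(f"prevalence of behavior after {STEPS} steps: {prevalence:.2f}")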
NASA Astrophysics Data System (ADS)
Johnson, Susan K.; Stewart, Jim
2002-07-01
In this paper we describe the model-revising problem-solving strategies of two groups of students (one successful, one unsuccessful) as they worked (in a genetics course we developed) to revise Mendel's simple dominance model to explain the inheritance of a trait expressed in any of four variations. The two groups described in this paper were chosen with the intent that the strategies that they employed be used to inform the design of model-based instruction. Differences were found in the groups' abilities to recognize anomalous data, use existing models as templates for revisions, and assess revised models.
The 2014 Sandia Verification and Validation Challenge: Problem statement
Hu, Kenneth; Orient, George
2016-01-18
This paper presents a case study in utilizing information from experiments, models, and verification and validation (V&V) to support a decision. It consists of a simple system with data and models provided, plus a safety requirement to assess. The goal is to pose a problem that is flexible enough to allow challengers to demonstrate a variety of approaches, but constrained enough to focus attention on a theme. This was accomplished by providing a good deal of background information in addition to the data, models, and code, but directing the participants' activities with specific deliverables. In this challenge, the theme is how to gather and present evidence about the quality of model predictions, in order to support a decision. This case study formed the basis of the 2014 Sandia V&V Challenge Workshop and this resulting special edition of the ASME Journal of Verification, Validation, and Uncertainty Quantification.
Finite element solution of optimal control problems with inequality constraints
NASA Technical Reports Server (NTRS)
Bless, Robert R.; Hodges, Dewey H.
1990-01-01
A finite-element method based on a weak Hamiltonian form of the necessary conditions is summarized for optimal control problems. Very crude shape functions (so simple that element numerical quadrature is not necessary) can be used to develop an efficient procedure for obtaining candidate solutions (i.e., those which satisfy all the necessary conditions) even for highly nonlinear problems. An extension of the formulation allowing for discontinuities in the states and derivatives of the states is given. A theory that includes control inequality constraints is fully developed. An advanced launch vehicle (ALV) model is presented. The model involves staging and control constraints, thus demonstrating the full power of the weak formulation to date. Numerical results are presented along with total elapsed computer time required to obtain the results. The speed and accuracy in obtaining the results make this method a strong candidate for a real-time guidance algorithm.
Driving Parameters for Distributed and Centralized Air Transportation Architectures
NASA Technical Reports Server (NTRS)
Feron, Eric
2001-01-01
This report considers the problem of intersecting aircraft flows under decentralized conflict avoidance rules. Using an Eulerian standpoint (aircraft flow through a fixed control volume), new air traffic control models and scenarios are defined that enable the study of long-term airspace stability problems. Considering a class of two intersecting aircraft flows, it is shown that airspace stability, defined both in terms of safety and performance, is preserved under decentralized conflict resolution algorithms. Performance bounds are derived for the aircraft flow problem under different maneuver models. Besides analytical approaches, numerical examples are presented to test the theoretical results, as well as to generate some insight about the structure of the traffic flow after resolution. Considering more than two intersecting aircraft flows, simulations indicate that flow stability may not be guaranteed under simple conflict avoidance rules. Finally, a comparison is made with centralized strategies to conflict resolution.
The contact sport of rough surfaces
NASA Astrophysics Data System (ADS)
Carpick, Robert W.
2018-01-01
Describing the way two surfaces touch and make contact may seem simple, but it is not. Fully describing the elastic deformation of ideally smooth contacting bodies, under even low applied pressure, involves second-order partial differential equations and fourth-rank elastic constant tensors. For more realistic rough surfaces, the problem becomes a multiscale exercise in surface-height statistics, even before including complex phenomena such as adhesion, plasticity, and fracture. A recent research competition, the “Contact Mechanics Challenge” (1), was designed to test various approximate methods for solving this problem. A hypothetical rough surface was generated, and the community was invited to model contact with this surface with competing theories for the calculation of properties, including contact area and pressure. A supercomputer-generated numerical solution was kept secret until competition entries were received. The comparison of results (2) provides insights into the relative merits of competing models and even experimental approaches to the problem.
NASA Astrophysics Data System (ADS)
Sang, Nguyen Anh; Thu Thuy, Do Thi; Loan, Nguyen Thi Ha; Lan, Nguyen Tri; Viet, Nguyen Ai
2017-06-01
Using the simple deformed three-level model (D3L model) proposed in our early work, we study the entanglement problem of composite bosons. Considering that the first three energy levels are known, we can obtain two energy separations and define the level deformation parameter δ. Using the connection between the q-deformed harmonic oscillator and a Morse-like anharmonic potential, the deformation parameter q can also be derived explicitly. In analogy with Einstein's theory of special relativity, we introduce observer effects: an outside observer (looking from outside the system under study) and an inside observer (looking from inside it). Corresponding to these observers, the outside entanglement entropy and the inside entanglement entropy are defined. As in the case of the Foucault pendulum in the problem of the Earth's rotation, our deformed-energy-level investigation might be useful in predicting environmental effects outside a confined box.
Controlling Reflections from Mesh Refinement Interfaces in Numerical Relativity
NASA Technical Reports Server (NTRS)
Baker, John G.; Van Meter, James R.
2005-01-01
A leading approach to improving the accuracy on numerical relativity simulations of black hole systems is through fixed or adaptive mesh refinement techniques. We describe a generic numerical error which manifests as slowly converging, artificial reflections from refinement boundaries in a broad class of mesh-refinement implementations, potentially limiting the effectiveness of mesh- refinement techniques for some numerical relativity applications. We elucidate this numerical effect by presenting a model problem which exhibits the phenomenon, but which is simple enough that its numerical error can be understood analytically. Our analysis shows that the effect is caused by variations in finite differencing error generated across low and high resolution regions, and that its slow convergence is caused by the presence of dramatic speed differences among propagation modes typical of 3+1 relativity. Lastly, we resolve the problem, presenting a class of finite-differencing stencil modifications which eliminate this pathology in both our model problem and in numerical relativity examples.
An interval model updating strategy using interval response surface models
NASA Astrophysics Data System (ADS)
Fang, Sheng-En; Zhang, Qiu-Hu; Ren, Wei-Xin
2015-08-01
Stochastic model updating provides an effective way of handling uncertainties existing in real-world structures. In general, probabilistic theories, fuzzy mathematics or interval analyses are involved in the solution of inverse problems. However, in practice, probability distributions or membership functions of structural parameters are often unavailable due to insufficient information about a structure. In this situation an interval model updating procedure shows its superiority in the aspect of problem simplification, since only the upper and lower bounds of parameters and responses are sought. To this end, this study develops a new concept of interval response surface models for the purpose of efficiently implementing the interval model updating procedure. The frequent interval overestimation due to the use of interval arithmetic can be maximally avoided, leading to accurate estimation of parameter intervals. Meanwhile, the establishment of an interval inverse problem is highly simplified, accompanied by a saving of computational costs. By this means a relatively simple and cost-efficient interval updating process can be achieved. Lastly, the feasibility and reliability of the developed method have been verified against a numerical mass-spring system and also against a set of experimentally tested steel plates.
NASA Astrophysics Data System (ADS)
Munahefi, D. N.; Waluya, S. B.; Rochmad
2018-03-01
The purpose of this research was to identify the effectiveness of the Problem Based Learning (PBL) model based on Self Regulated Learning (SRL) on mathematical creative thinking ability, and to analyze high school students' mathematical creative thinking in solving mathematical problems. The population of this study was students of grade X SMA N 3 Klaten. The research method used was sequential explanatory. In the quantitative stage, two classes were selected with a simple random sampling technique: the experimental class was taught with the SRL-based PBL model and the control class with an expository model. Sample selection in the qualitative stage used a non-probability sampling technique in which 3 students were selected from each of the high, medium, and low academic levels. The SRL-based PBL model was effective for students' mathematical creative thinking ability. Students of low academic level achieved the fluency and flexibility aspects. Students of medium academic level achieved the fluency and flexibility aspects well, but their originality was not yet well developed. Students of high academic level could also reach the originality aspect.
Dark matter stability and one-loop neutrino mass generation based on Peccei-Quinn symmetry
NASA Astrophysics Data System (ADS)
Suematsu, Daijiro
2018-01-01
We propose a model which is a simple extension of the KSVZ invisible axion model with an inert doublet scalar. Peccei-Quinn symmetry forbids tree-level neutrino mass generation and its remnant Z_2 symmetry guarantees dark matter stability. The neutrino masses are generated by one-loop effects as a result of the breaking of Peccei-Quinn symmetry through a nonrenormalizable interaction. Although the low energy effective model coincides with an original scotogenic model which contains right-handed neutrinos with large masses, it is free from the strong CP problem.
Shift scheduling model considering workload and worker’s preference for security department
NASA Astrophysics Data System (ADS)
Herawati, A.; Yuniartha, D. R.; Purnama, I. L. I.; Dewi, LT
2018-04-01
A security department operates for 24 hours and applies shift scheduling to organize its workers, as is common in the hotel industry. This research develops a shift scheduling model that considers workers' physical workload, measured with the rating of perceived exertion (RPE) Borg scale, and workers' preferences, to accommodate schedule flexibility. The mathematical model is formulated as an integer linear program and yields optimal solutions for simple problem instances. The resulting shift schedule distributes shift allocations equally among workers to balance physical workload, and gives workers flexibility in arranging their working hours.
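A minimal sketch of such a formulation (using the open-source PuLP modeler; the workers, preference penalties, and coverage numbers below are invented for illustration, not the paper's model):

    import pulp

    # Minimal shift-assignment ILP sketch: binary x[w][d][s] = 1 if worker w
    # takes shift s on day d. One worker per shift, at most one shift per
    # worker per day, equal shift counts as a crude workload balance, and the
    # objective minimizes total preference penalties. Illustrative data only.

    workers, days, shifts = ["W1", "W2", "W3"], range(3), ["morning", "night"]
    penalty = {w: {s: 1 if s == "night" else 0 for s in shifts} for w in workers}
    penalty["W2"]["night"] = 3                      # W2 strongly prefers mornings

    prob = pulp.LpProblem("shift_scheduling", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (workers, days, shifts), cat="Binary")

    prob += pulp.lpSum(penalty[w][s] * x[w][d][s]
                       for w in workers for d in days for s in shifts)

    for d in days:
        for s in shifts:                            # every shift is covered
            prob += pulp.lpSum(x[w][d][s] for w in workers) == 1
    for w in workers:
        for d in days:                              # at most one shift per day
            prob += pulp.lpSum(x[w][d][s] for s in shifts) <= 1
        # workload balance: everyone works exactly 2 of the 6 shift slots
        prob += pulp.lpSum(x[w][d][s] for d in days for s in shifts) == 2

    prob.solve(pulp.PULP_CBC_CMD(msg=0))
    for w in workers:
        plan = [(d, s) for d in days for s in shifts if x[w][d][s].value() > 0.5]
        print(w, plan)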
Method for the simulation of blood platelet shape and its evolution during activation
Muliukov, Artem R.; Litvinenko, Alena L.; Nekrasov, Vyacheslav M.; Chernyshev, Andrei V.; Maltsev, Valeri P.
2018-01-01
We present a simple physically based quantitative model of blood platelet shape and its evolution during agonist-induced activation. The model is based on the consideration of two major cytoskeletal elements: the marginal band of microtubules and the submembrane cortex. Mathematically, we consider the problem of minimization of surface area constrained to confine the marginal band and a certain cellular volume. For resting platelets, the marginal band appears as a peripheral ring, allowing for the analytical solution of the minimization problem. Upon activation, the marginal band coils out of plane and forms 3D convoluted structure. We show that its shape is well approximated by an overcurved circle, a mathematical concept of closed curve with constant excessive curvature. Possible mechanisms leading to such marginal band coiling are discussed, resulting in simple parametric expression for the marginal band shape during platelet activation. The excessive curvature of marginal band is a convenient state variable which tracks the progress of activation. The cell surface is determined using numerical optimization. The shapes are strictly mathematically defined by only three parameters and show good agreement with literature data. They can be utilized in simulation of platelets interaction with different physical fields, e.g. for the description of hydrodynamic and mechanical properties of platelets, leading to better understanding of platelets margination and adhesion and thrombus formation in blood flow. It would also facilitate precise characterization of platelets in clinical diagnosis, where a novel optical model is needed for the correct solution of inverse light-scattering problem. PMID:29518073
Cavagnaro, Daniel R; Myung, Jay I; Pitt, Mark A; Kujala, Janne V
2010-04-01
Discriminating among competing statistical models is a pressing issue for many experimentalists in the field of cognitive science. Resolving this issue begins with designing maximally informative experiments. To this end, the problem to be solved in adaptive design optimization is identifying experimental designs under which one can infer the underlying model in the fewest possible steps. When the models under consideration are nonlinear, as is often the case in cognitive science, this problem can be impossible to solve analytically without simplifying assumptions. However, as we show in this letter, a full solution can be found numerically with the help of a Bayesian computational trick derived from the statistics literature, which recasts the problem as a probability density simulation in which the optimal design is the mode of the density. We use a utility function based on mutual information and give three intuitive interpretations of the utility function in terms of Bayesian posterior estimates. As a proof of concept, we offer a simple example application to an experiment on memory retention.
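As a toy version of that idea (only two candidate retention models with fixed parameter values, so the mutual-information utility can be computed exactly rather than by density simulation):

    import numpy as np

    # Toy adaptive-design-optimization sketch: pick the retention interval t
    # maximizing the mutual information between the model indicator and one
    # binary recall outcome. Two assumed memory-retention models with fixed
    # parameters; a full treatment would also average over parameter priors.

    def p_power(t):        return 0.9 * (t + 1.0) ** -0.4    # power-law retention
    def p_exponential(t):  return 0.9 * np.exp(-0.2 * t)     # exponential retention

    def entropy(p):
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

    prior = np.array([0.5, 0.5])              # uniform prior over the two models
    designs = np.linspace(0.0, 20.0, 81)      # candidate retention intervals t

    def utility(t):
        probs = np.array([p_power(t), p_exponential(t)])
        p_marginal = prior @ probs
        # mutual information I(M; Y) = H(Y) - E_M[ H(Y | M) ]
        return entropy(p_marginal) - prior @ entropy(probs)

    best = max(designs, key=utility)
    print(f"most informative retention interval: t = {best:.2f} "
          f"(utility = {utility(best):.4f} bits)")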
Modeling the densification of metal matrix composite monotape
NASA Technical Reports Server (NTRS)
Elzey, D. M.; Wadley, H. N. G.
1993-01-01
We present a first model that enables prediction of the density (and its time evolution) of a monotape lay-up subjected to a hot isostatic or vacuum hot pressing consolidation cycle. Our approach is to break down the complicated (and probabilistic) consolidation problem into simple, analyzable parts and to combine them in a way that correctly represents the statistical aspects of the problem, the change in the problem's interior geometry, and the evolving contributions of the different deformation mechanisms. The model gives two types of output. One is in the form of maps showing the relative density dependence upon pressure, temperature, and time for step function temperature and pressure cycles. They are useful for quickly determining the best place to begin developing an optimized process. The second gives the evolution of density over time for any (arbitrary) applied temperature and pressure cycle. This has promise for refining process cycles and possibly for process control. Examples of the model's application are given for Ti3Al + Nb, gamma TiAl, Ti6Al4V, and pure aluminum.
Selection of Two-Phase Flow Patterns at a Simple Junction in Microfluidic Devices
NASA Astrophysics Data System (ADS)
Engl, W.; Ohata, K.; Guillot, P.; Colin, A.; Panizza, P.
2006-04-01
We study the behavior of a confined stream made of two immiscible fluids when it reaches a T junction. Two flow patterns are witnessed: the stream is either directed in only one sidearm, yielding a preferential flow pathway for the dispersed phase, or splits between both. We show that the selection of these patterns is not triggered by the shape of the junction nor by capillary effects, but results from confinement. It can be anticipated in terms of the hydrodynamic properties of the flow. A simple model yielding universal behavior in terms of the relevant adimensional parameters of the problem is presented and discussed.
Robust PD Sway Control of a Lifted Load for a Crane Using a Genetic Algorithm
NASA Astrophysics Data System (ADS)
Kawada, Kazuo; Sogo, Hiroyuki; Yamamoto, Toru; Mada, Yasuhiro
PID control schemes continue to be widely used in most industrial control systems, mainly because PID controllers have simple control structures and are simple to maintain and tune. However, it is difficult to find a set of suitable control parameters for time-varying and/or nonlinear systems. For such problems, robust controllers have been proposed. Although it is important to choose a suitable nominal model when designing a robust controller, this is not usually easy. In this paper, a new robust PD controller design scheme is proposed which utilizes a genetic algorithm.
The nature of the colloidal 'glass' transition.
Dawson, Kenneth A; Lawlor, A; DeGregorio, Paolo; McCullagh, Gavin D; Zaccarelli, Emanuela; Foffi, Giuseppe; Tartaglia, Piero
2003-01-01
The dynamically arrested state of matter is discussed in the context of athermal systems, such as the hard sphere colloidal arrest. We believe that the singular dynamical behaviour near arrest, expressed for example in how the diffusion constant vanishes, may be 'universal', in a sense to be discussed in the paper. Based on this we argue the merits of studying the problem with simple lattice models. This, by analogy with the critical point of the Ising model, should lead us to clarify the questions, and begin the program of establishing the degree of universality to be expected. We deal only with 'ideal' athermal dynamical arrest transitions, such as those found for hard sphere systems. However, it is argued that dynamically available volume (DAV) is the relevant order parameter of the transition, and that universal mechanisms may be well expressed in terms of DAV. For simple lattice models we give examples of simple laws that emerge near the dynamical arrest, emphasising the idea of a near-ideal gas of 'holes', interacting to give the power law diffusion constant scaling near the arrest. We also seek to open the discussion of the possibility of an underlying weak coupling theory of the dynamical arrest transition, based on DAV.
A review on simple assembly line balancing type-e problem
NASA Astrophysics Data System (ADS)
Jusop, M.; Rashid, M. F. F. Ab
2015-12-01
Simple assembly line balancing (SALB) is the attempt to assign tasks to the various workstations along the line so that the precedence relations are satisfied and some performance measure is optimised. Advanced algorithmic approaches are necessary to solve large-scale problems, as SALB is NP-hard. Only a few studies focus on simple assembly line balancing of the Type-E problem (SALB-E), since it is a general and complex problem. SALB-E is the SALB variant that considers the number of workstations and the cycle time simultaneously for the purpose of maximising line efficiency. This paper reviews previous work on optimising the SALB-E problem, including the Genetic Algorithm approaches that have been used. From the review it was found that none of the existing works consider resource constraints in the SALB-E problem, especially machine and tool constraints. Research on SALB-E will contribute to the improvement of productivity in real industrial applications.
The Motion of a Leaking Oscillator: A Study for the Physics Class
ERIC Educational Resources Information Center
Rodrigues, Hilário; Panza, Nelson; Portes, Dirceu; Soares, Alexandre
2014-01-01
This paper is essentially about the general form of Newton's second law for variable mass problems. We develop a model for describing the motion of the one-dimensional oscillator with a variable mass within the framework of classroom physics. We present a simple numerical procedure for the solution of the equation of motion of the system to…
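A sketch of such a numerical procedure (semi-implicit Euler integration, under the common textbook assumption that the leaking mass leaves with the oscillator's instantaneous velocity, so no thrust term appears; all parameter values are invented):

    # Leaking harmonic oscillator sketch: the mass m(t) drains linearly while
    # the leaked mass departs with the oscillator's own velocity, so Newton's
    # second law reduces to m(t) dv/dt = -k x. Semi-implicit Euler integration.

    k, m0, leak_rate, dt = 4.0, 2.0, 0.05, 1e-3   # assumed parameters
    x, v = 1.0, 0.0

    for step in range(20001):
        t = step * dt
        m = max(m0 - leak_rate * t, 0.2)          # stop draining at a residual mass
        v += (-k * x / m) * dt                    # update velocity first (stability)
        x += v * dt
        if step % 4000 == 0:
            print(f"t = {t:5.1f}  m = {m:.2f}  x = {x:+.3f}")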
Development of a new flux splitting scheme
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Steffen, Christopher J., Jr.
1991-01-01
The use of a new splitting scheme, the advection upstream splitting method, for model aerodynamic problems where Van Leer and Roe schemes had failed previously is discussed. The present scheme is based on splitting in which the convective and pressure terms are separated and treated differently depending on the underlying physical conditions. The present method is found to be both simple and accurate.
Development of a new flux splitting scheme
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Steffen, Christopher J., Jr.
1991-01-01
The successful use of a novel splitting scheme, the advection upstream splitting method, for model aerodynamic problems where Van Leer and Roe schemes had failed previously is discussed. The present scheme is based on splitting in which the convective and pressure terms are separated and treated differently depending on the underlying physical conditions. The present method is found to be both simple and accurate.
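For a single interface of the 1D Euler equations, the splitting can be sketched as follows (a reading of the Liou-Steffen construction with the standard subsonic polynomial splittings; treat it as illustrative rather than as the authors' verified code):

    import numpy as np

    GAMMA = 1.4

    # Sketch of the AUSM interface flux for the 1D Euler equations: the
    # convective part is advected by a split interface Mach number, while
    # the pressure term is split separately, as in Liou & Steffen.

    def split_mach(M, sign):
        """Van Leer-type Mach splitting: M+ (sign=+1) or M- (sign=-1)."""
        if abs(M) >= 1.0:
            return 0.5 * (M + sign * abs(M))
        return sign * 0.25 * (M + sign) ** 2

    def split_pressure(M, p, sign):
        """Pressure splitting p+ (sign=+1) or p- (sign=-1)."""
        if abs(M) >= 1.0:
            return p * 0.5 * (1.0 + sign * np.sign(M))
        return p * 0.25 * (M + sign) ** 2 * (2.0 - sign * M)

    def ausm_flux(rhoL, uL, pL, rhoR, uR, pR):
        aL, aR = np.sqrt(GAMMA * pL / rhoL), np.sqrt(GAMMA * pR / rhoR)
        HL = GAMMA / (GAMMA - 1.0) * pL / rhoL + 0.5 * uL**2   # total enthalpy
        HR = GAMMA / (GAMMA - 1.0) * pR / rhoR + 0.5 * uR**2
        m_half = split_mach(uL / aL, +1.0) + split_mach(uR / aR, -1.0)
        phiL = np.array([rhoL * aL, rhoL * aL * uL, rhoL * aL * HL])
        phiR = np.array([rhoR * aR, rhoR * aR * uR, rhoR * aR * HR])
        conv = m_half * (phiL if m_half >= 0.0 else phiR)      # upwinded advection
        p_half = (split_pressure(uL / aL, pL, +1.0)
                  + split_pressure(uR / aR, pR, -1.0))
        return conv + np.array([0.0, p_half, 0.0])

    # Sod shock-tube initial states as a quick sanity check:
    print(ausm_flux(1.0, 0.0, 1.0, 0.125, 0.0, 0.1))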
Application of the urban mixing-depth concept to air pollution problems
Peter W. Summers
1977-01-01
A simple urban mixing-depth model is used to develop an indicator of downtown pollution concentrations based on emission strength, rural temperature lapse rate, wind speed, city heat input, and city size. It is shown that the mean annual downtown suspended particulate levels in Canadian cities are proportional to the fifth root of the population. The implications of...
NASA Astrophysics Data System (ADS)
Bin Hassan, M. F.; Bonello, P.
2017-05-01
Recently-proposed techniques for the simultaneous solution of foil-air bearing (FAB) rotor dynamic problems have been limited to a simple bump foil model in which the individual bumps were modelled as independent spring-damper (ISD) subsystems. The present paper addresses this limitation by introducing a modal model of the bump foil structure into the simultaneous solution scheme. The dynamics of the corrugated bump foil structure are first studied using the finite element (FE) technique. This study is experimentally validated using a purpose-made corrugated foil structure. Based on the findings of this study, it is proposed that the dynamics of the full foil structure, including bump interaction and foil inertia, can be represented by a modal model comprising a limited number of modes. This full foil structure modal model (FFSMM) is then adapted into the rotordynamic FAB problem solution scheme, instead of the ISD model. Preliminary results using the FFSMM under static and unbalance excitation conditions are proven to be reliable by comparison against the corresponding ISD foil model results and by cross-correlating different methods for computing the deflection of the full foil structure. The rotor-bearing model is also validated against experimental and theoretical results in the literature.
Ostrom, Elinor; Janssen, Marco A.; Anderies, John M.
2007-01-01
In the context of governance of human–environment interactions, a panacea refers to a blueprint for a single type of governance system (e.g., government ownership, privatization, community property) that is applied to all environmental problems. The aim of this special feature is to provide theoretical analysis and empirical evidence to caution against the tendency, when confronted with pervasive uncertainty, to believe that scholars can generate simple models of linked social–ecological systems and deduce general solutions to the overuse of resources. Practitioners and scholars who fall into panacea traps falsely assume that all problems of resource governance can be represented by a small set of simple models, because they falsely perceive that the preferences and perceptions of most resource users are the same. Readers of this special feature will become acquainted with many cases in which panaceas fail. The articles provide an excellent overview of why they fail. Furthermore, the articles in this special feature address how scholars and public officials can increase the prospects for future sustainable resource use by facilitating a diagnostic approach in selecting appropriate starting points for governance and monitoring, as well as by learning from the outcomes of new policies and adapting in light of effective feedback. PMID:17881583
A methodology for the assessment of manned flight simulator fidelity
NASA Technical Reports Server (NTRS)
Hess, Ronald A.; Malsbury, Terry N.
1989-01-01
A relatively simple analytical methodology for assessing the fidelity of manned flight simulators for specific vehicles and tasks is offered. The methodology is based upon an application of a structural model of the human pilot, including motion cue effects. In particular, predicted pilot/vehicle dynamic characteristics are obtained with and without simulator limitations. A procedure for selecting model parameters can be implemented, given a probable pilot control strategy. In analyzing a pair of piloting tasks for which flight and simulation data are available, the methodology correctly predicted the existence of simulator fidelity problems. The methodology permitted the analytical evaluation of a change in simulator characteristics and indicated that a major source of the fidelity problems was a visual time delay in the simulation.
High school students' understanding and problem solving in population genetics
NASA Astrophysics Data System (ADS)
Soderberg, Patti D.
This study is an investigation of student understanding of population genetics and how students developed, used and revised conceptual models to solve problems. The students in this study participated in three rounds of problem solving. The first round involved the use of a population genetics model to predict the number of carriers in a population. The second round required them to revise their model of simple dominance population genetics to make inferences about populations containing three phenotype variations. The third round of problem solving required the students to revise their model of population genetics to explain anomalous data where the proportions of males and females with a trait varied significantly. As the students solved problems, they were involved in basic scientific processes as they observed population phenomena, constructed explanatory models to explain the data they observed, and attempted to persuade their peers as to the adequacy of their models. In this study, the students produced new knowledge about the genetics of a trait in a population through the revision and use of explanatory population genetics models, using reasoning that was similar to what scientists do. The students learned, used and revised a model of Hardy-Weinberg equilibrium to generate and test hypotheses about the genetics of phenotypes given only population data. Students were also interviewed prior to and following instruction. This study suggests that a commonly held intuitive belief about the predominance of a dominant variation in populations is resistant to change despite instruction, and interferes with a student's ability to understand Hardy-Weinberg equilibrium and microevolution.
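The first-round calculation is a standard Hardy-Weinberg exercise; a minimal sketch (assuming a recessive trait in a population at equilibrium, with an invented phenotype frequency):

    import math

    # Hardy-Weinberg sketch: estimate the carrier frequency of a recessive
    # trait from the observed frequency of affected individuals, assuming
    # the population is at equilibrium.

    affected = 1.0 / 2500.0      # assumed frequency of the recessive phenotype
    q = math.sqrt(affected)      # recessive allele frequency, since q^2 = affected
    p = 1.0 - q                  # dominant allele frequency
    carriers = 2.0 * p * q       # heterozygote (carrier) frequency

    print(f"q = {q:.4f}, carriers = {carriers:.4f} (~1 in {1 / carriers:.0f})")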
What to Do When K-Means Clustering Fails: A Simple yet Principled Alternative Algorithm.
Raykov, Yordan P; Boukouvalas, Alexis; Baig, Fahd; Little, Max A
The K-means algorithm is one of the most popular clustering algorithms in current use as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions, whilst remaining almost as fast and simple. This novel algorithm which we call MAP-DP (maximum a-posteriori Dirichlet process mixtures), is statistically rigorous as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a-priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example, binary, count or ordinal data. Also, it can efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means with MAP-DP convergence typically achieved in the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism.
What to Do When K-Means Clustering Fails: A Simple yet Principled Alternative Algorithm
Baig, Fahd; Little, Max A.
2016-01-01
The K-means algorithm is one of the most popular clustering algorithms in current use as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions, whilst remaining almost as fast and simple. This novel algorithm which we call MAP-DP (maximum a-posteriori Dirichlet process mixtures), is statistically rigorous as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a-priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example, binary, count or ordinal data. Also, it can efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means with MAP-DP convergence typically achieved in the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism. PMID:27669525
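A heavily simplified sketch of the flavor of such an algorithm (spherical Gaussian clusters with a fixed variance, a one-pass assignment rule, and an assumed flat new-cluster cost standing in for the prior predictive; the real MAP-DP uses proper Bayesian predictive densities and iterates to convergence):

    import numpy as np

    # Dirichlet-process-style clustering sketch: each point joins the
    # "cheapest" existing cluster (distance penalty minus a log-size bonus)
    # or opens a new one at a cost controlled by the concentration alpha.
    # Unlike K-means, the number of clusters is not fixed in advance.

    rng = np.random.default_rng(3)
    X = np.vstack([rng.normal(c, 0.3, size=(60, 2))
                   for c in ((0, 0), (3, 3), (0, 4))])
    rng.shuffle(X)

    sigma2, alpha = 0.3**2, 1.0
    centers, counts, members = [], [], []

    for x in X:
        costs = [np.sum((x - c) ** 2) / (2 * sigma2) - np.log(n)
                 for c, n in zip(centers, counts)]
        costs.append(-np.log(alpha) + 8.0)   # assumed flat cost of a new cluster
        k = int(np.argmin(costs))
        if k == len(centers):                # open a new cluster at this point
            centers.append(x.copy()); counts.append(1); members.append([x])
        else:                                # join cluster k, update its mean
            members[k].append(x); counts[k] += 1
            centers[k] = np.mean(members[k], axis=0)

    print("clusters found:", len(centers), " sizes:", counts)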
NASA Astrophysics Data System (ADS)
Lonsdale, R. D.; Webster, R.
This paper demonstrates the application of a simple finite volume approach to a finite element mesh, combining the economy of the former with the geometrical flexibility of the latter. The procedure is used to model three-dimensional flows on a mesh of linear eight-node bricks (hexahedra). Simulations are performed for a wide range of flow problems, some on meshes in excess of 94,000 nodes. The resulting computer code, ASTEC, which incorporates these procedures, is described.
Occupancy models to study wildlife
Bailey, Larissa; Adams, Michael John
2005-01-01
Many wildlife studies seek to understand changes or differences in the proportion of sites occupied by a species of interest. These studies are hampered by imperfect detection of these species, which can result in some sites appearing to be unoccupied that are actually occupied. Occupancy models solve this problem and produce unbiased estimates of occupancy and related parameters. Required data (detection/non-detection information) are relatively simple and inexpensive to collect. Software is available free of charge to aid investigators in occupancy estimation.
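A sketch of the core likelihood (a single-season model with constant occupancy psi and detection probability p, fit by maximum likelihood to made-up detection counts):

    import numpy as np
    from scipy.optimize import minimize

    # Single-season occupancy sketch: each of the sites is surveyed K times and
    # y[i] detections are recorded. Sites with no detections may be unoccupied,
    # or occupied but never detected; the likelihood accounts for both.
    # (Binomial coefficients are omitted: they do not depend on psi or p.)

    y = np.array([3, 0, 1, 0, 0, 2, 0, 1, 0, 0, 4, 0])   # made-up detection counts
    K = 5                                                 # surveys per site

    def neg_log_lik(params):
        psi, p = 1 / (1 + np.exp(-params))                # logit -> probability
        lik = np.where(
            y > 0,
            psi * p**y * (1 - p) ** (K - y),              # detected at least once
            psi * (1 - p) ** K + (1 - psi),               # never detected
        )
        return -np.sum(np.log(lik))

    fit = minimize(neg_log_lik, x0=np.zeros(2), method="Nelder-Mead")
    psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))
    naive = np.mean(y > 0)
    print(f"naive occupancy = {naive:.2f}, "
          f"estimated psi = {psi_hat:.2f}, p = {p_hat:.2f}")

The estimated psi exceeds the naive proportion of sites with detections, which is exactly the bias correction these models provide.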
Route Prediction on Tracking Data to Location-Based Services
NASA Astrophysics Data System (ADS)
Petróczi, Attila István; Gáspár-Papanek, Csaba
Wireless networks have become so widespread, it is beneficial to determine the ability of cellular networks for localization. This property enables the development of location-based services, providing useful information. These services can be improved by route prediction under the condition of using simple algorithms, because of the limited capabilities of mobile stations. This study gives alternative solutions for this problem of route prediction based on a specific graph model. Our models provide the opportunity to reach our destinations with less effort.
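One simple approach of this kind (a first-order Markov model over cell transitions, trained on assumed tracking sequences; deliberately lightweight, in keeping with the limited capabilities of mobile stations):

    from collections import defaultdict

    # First-order Markov sketch for route prediction: count observed
    # transitions between cells, then predict the most likely next cell.

    tracks = [                      # assumed historical cell-ID sequences
        ["A", "B", "C", "D"],
        ["A", "B", "C", "E"],
        ["A", "B", "C", "D"],
        ["F", "B", "C", "D"],
    ]

    counts = defaultdict(lambda: defaultdict(int))
    for track in tracks:
        for here, nxt in zip(track, track[1:]):
            counts[here][nxt] += 1

    def predict_next(cell):
        """Most likely next cell, with its empirical probability."""
        options = counts[cell]
        if not options:
            return None, 0.0
        total = sum(options.values())
        best = max(options, key=options.get)
        return best, options[best] / total

    print(predict_next("C"))   # -> ('D', 0.75)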
Geary, D C; Frensch, P A; Wiley, J G
1993-06-01
Thirty-six younger adults (10 male, 26 female; ages 18 to 38 years) and 36 older adults (14 male, 22 female; ages 61 to 80 years) completed simple and complex paper-and-pencil subtraction tests and solved a series of simple and complex computer-presented subtraction problems. For the computer task, strategies and solution times were recorded on a trial-by-trial basis. Older Ss used a developmentally more mature mix of problem-solving strategies to solve both simple and complex subtraction problems. Analyses of component scores derived from the solution times suggest that the older Ss are slower at number encoding and number production but faster at executing the borrow procedure. In contrast, groups did not appear to differ in the speed of subtraction fact retrieval. Results from a computational simulation are consistent with the interpretation that older adults' advantage for strategy choices and for the speed of executing the borrow procedure might result from more practice solving subtraction problems.
Prediction on carbon dioxide emissions based on fuzzy rules
NASA Astrophysics Data System (ADS)
Pauzi, Herrini; Abdullah, Lazim
2014-06-01
There are several ways to predict air quality, varying from simple regression to models based on artificial intelligence. Most of the conventional methods are not sufficiently able to provide good forecasting performance due to problems with non-linearity, uncertainty and complexity of the data. Artificial intelligence techniques have been used successfully in modeling air quality in order to cope with these problems. This paper describes a fuzzy inference system (FIS) to predict CO2 emissions in Malaysia. Furthermore, an adaptive neuro-fuzzy inference system (ANFIS) is used to compare the prediction performance. Data on five variables: energy use, gross domestic product per capita, population density, combustible renewables and waste, and CO2 intensity are employed in this comparative study. The results from the two proposed models are compared, and it is clearly shown that ANFIS outperforms FIS in CO2 prediction.
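To show the mechanics of such a system, here is a minimal zero-order Sugeno-style FIS (two inputs, triangular memberships, and entirely invented rules and numbers; a real system like the paper's uses five variables and tuned rule bases):

    # Minimal zero-order Sugeno-style fuzzy inference sketch. Each rule fires
    # with the min of its membership degrees; the output is the
    # firing-strength-weighted average of the rule consequents.
    # Inputs are assumed normalized to [0, 1]; all rules are invented.

    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def low(x):  return tri(x, -0.5, 0.0, 0.6)
    def high(x): return tri(x, 0.4, 1.0, 1.5)

    RULES = [  # (membership for energy use, membership for GDP, CO2 consequent)
        (low,  low,  2.0),    # low energy use  & low GDP  -> low emissions
        (high, low,  5.0),
        (low,  high, 4.0),
        (high, high, 9.0),    # high energy use & high GDP -> high emissions
    ]

    def predict_co2(energy, gdp):
        weights = [min(f(energy), g(gdp)) for f, g, _ in RULES]
        if sum(weights) == 0.0:
            return None
        return sum(w * out for w, (_, _, out) in zip(weights, RULES)) / sum(weights)

    print(predict_co2(0.8, 0.3))   # dominated by the "high energy, low GDP" rule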
Decomposition of the compound Atwood machine
NASA Astrophysics Data System (ADS)
Lopes Coelho, R.
2017-11-01
Non-standard solving strategies for the compound Atwood machine problem have been proposed. The present strategy is based on a very simple idea. Taking an Atwood machine and replacing one of its bodies by another Atwood machine, we have a compound machine. As this operation can be repeated, we can construct any compound Atwood machine. This rule of construction is transferred to a mathematical model, whereby the equations of motion are obtained. The only difference between the machine and its model is that instead of pulleys and bodies, we have reference frames that move solidarily with these objects. This model provides us with the accelerations in the non-inertial frames of the bodies, which we will use to obtain the equations of motion. This approach to the problem will be justified by the Lagrange method and exemplified by machines with six and eight bodies.
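The recursive construction lends itself to a short computation. A sketch (massless pulleys and strings assumed; the two facts used are that a sub-machine pulls on its support like an effective mass 4 mL mR / (mL + mR), and that in the frame of a support accelerating downward at a_pt the local gravity is g - a_pt):

    # Recursive compound Atwood machine sketch. A machine is either a plain
    # mass or a pair (left, right) of sub-machines hanging over a pulley.
    # Within a pulley frame with local gravity g_loc, the relative
    # acceleration of the two sides is a_rel = (mL - mR) / (mL + mR) * g_loc.

    G = 9.81

    def m_eff(machine):
        if isinstance(machine, (int, float)):
            return float(machine)
        mL, mR = m_eff(machine[0]), m_eff(machine[1])
        return 4.0 * mL * mR / (mL + mR)

    def accelerations(machine, a_pt=0.0, out=None, path="root"):
        """Absolute downward accelerations of every point mass in the machine."""
        if out is None:
            out = {}
        if isinstance(machine, (int, float)):
            out[f"{path} (m={machine})"] = a_pt   # a mass moves with its string end
            return out
        mL, mR = m_eff(machine[0]), m_eff(machine[1])
        g_loc = G - a_pt                          # effective gravity at this pulley
        a_rel = (mL - mR) / (mL + mR) * g_loc     # heavier side falls, in this frame
        accelerations(machine[0], a_pt + a_rel, out, path + "/L")
        accelerations(machine[1], a_pt - a_rel, out, path + "/R")
        return out

    # Mass 3 balanced against a sub-machine carrying masses 1 and 2:
    for name, a in accelerations((3.0, (1.0, 2.0))).items():
        print(f"{name}: a = {a:+.3f} m/s^2 downward")

For this example the recursion reproduces the textbook answers g/17, -7g/17 and 5g/17 (about +0.58, -4.04 and +2.89 m/s^2) for the three bodies.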
What's Next: Recruitment of a Grounded Predictive Body Model for Planning a Robot's Actions.
Schilling, Malte; Cruse, Holk
2012-01-01
Even comparatively simple, reactive systems are able to control complex motor tasks, such as hexapod walking on unpredictable substrate. The capability of such a controller can be improved by introducing internal models of the body and of parts of the environment. Such internal models can be applied as inverse models, as forward models or to solve the problem of sensor fusion. Usually, separate models are used for these functions. Furthermore, separate models are used to solve different tasks. Here we concentrate on internal models of the body as the brain considers its own body the most important part of the world. The model proposed is formed by a recurrent neural network with the property of pattern completion. The model shows a hierarchical structure but nonetheless comprises a holistic system. One and the same model can be used as a forward model, as an inverse model, for sensor fusion, and, with a simple expansion, as a model to internally simulate (new) behaviors to be used for prediction. The model embraces the geometrical constraints of a complex body with many redundant degrees of freedom, and allows finding geometrically possible solutions. To control behavior such as walking, climbing, or reaching, this body model is complemented by a number of simple reactive procedures together forming a procedural memory. In this article, we illustrate the functioning of this network. To this end we present examples for solutions of the forward function and the inverse function, and explain how the complete network might be used for predictive purposes. The model is assumed to be "innate," so learning the parameters of the model is not (yet) considered.
Pattern recognition analysis and classification modeling of selenium-producing areas
Naftz, D.L.
1996-01-01
Established chemometric and geochemical techniques were applied to water quality data from 23 National Irrigation Water Quality Program (NIWQP) study areas in the Western United States, to identify common geochemical processes responsible for mobilization of selenium and to develop a classification model that uses major-ion concentrations to identify areas containing elevated selenium concentrations in water that could pose a hazard to waterfowl. Pattern recognition modeling of the simple-salt data computed with the SNORM geochemical program indicates three principal components that explain 95% of the total variance. A three-dimensional plot of PC 1, 2 and 3 scores shows three distinct clusters that correspond to distinct hydrochemical facies denoted as facies 1, 2 and 3. Facies 1 samples are distinguished by the absence of the CaCO3 simple salt and elevated concentrations of NaCl, CaSO4, MgSO4 and Na2SO4 simple salts relative to water samples in facies 2 and 3. Water samples in facies 2 are distinguished from facies 1 by the absence of the MgSO4 simple salt and the presence of the CaCO3 simple salt. Water samples in facies 3 are similar to samples in facies 2, with the absence of both MgSO4 and CaSO4 simple salts. Water samples in facies 1 have the largest selenium concentration (10 μg l-1), compared to a median concentration of 2.0 μg l-1 and less than 1.0 μg l-1 for samples in facies 2 and 3. A classification model using the soft independent modeling by class analogy (SIMCA) algorithm was constructed with data from the NIWQP study areas. The classification model was successful in identifying water samples with selenium concentrations hazardous to some species of waterfowl from a test data set comprising 2,060 water samples from throughout Utah and Wyoming. Application of chemometric and geochemical techniques during data synthesis and analysis of multivariate environmental databases from other national-scale environmental programs such as the NIWQP could also provide useful insights for addressing 'real world' environmental problems.
Modal cost analysis for simple continua
NASA Technical Reports Server (NTRS)
Hu, A.; Skelton, R. E.; Yang, T. Y.
1988-01-01
The most popular finite element codes are based upon appealing theories of convergence of modal frequencies. For example, the popularity of cubic elements for beam-like structures is due to the rapid convergence of modal frequencies and stiffness properties. However, for those problems in which the primary consideration is the accuracy of response of the structure at specified locations, it is more important to obtain accuracy in the modal costs than in the modal frequencies. The modal cost represents the contribution of a mode in the norm of the response vector. This paper provides a complete modal cost analysis for simple continua such as beam-like structures. Upper bounds are developed for mode truncation errors in the model reduction process and modal cost analysis dictates which modes to retain in order to reduce the model for control design purposes.
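As a rough illustration of ranking modes by modal cost rather than by frequency: for a lightly damped mode with modal input gain b_i, output gain c_i, damping ratio ζ_i and frequency ω_i driven by white noise, the steady-state contribution to the output variance scales as c_i²b_i²/(4ζ_iω_i³), a standard small-damping result (the constant depends on the noise convention). A minimal sketch with invented modal data:

```python
import numpy as np

# invented modal data for a beam-like structure
omega = np.array([10.0, 63.0, 176.0, 345.0])   # modal frequencies (rad/s)
zeta = np.full(4, 0.01)                        # damping ratios
b = np.array([1.0, 0.8, 0.6, 0.5])             # modal input gains
c = np.array([1.0, 0.05, 0.9, 0.3])            # modal output gains

cost = (c * b) ** 2 / (4 * zeta * omega ** 3)  # white-noise output-variance share
rank = np.argsort(cost)[::-1]
print("retain modes in this order:", rank + 1)
print("normalized modal costs    :", np.round(cost[rank] / cost.sum(), 5))
```

With these numbers mode 3 outranks mode 2 despite its higher frequency, which is exactly the situation where frequency-based truncation misleads.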
Zipper model for the melting of thin films
NASA Astrophysics Data System (ADS)
Abdullah, Mikrajuddin; Khairunnisa, Shafira; Akbar, Fathan
2016-01-01
We propose an alternative model to Lindemann's criterion for melting that explains the melting of thin films on the basis of a molecular zipper-like mechanism. Using this model, a unique criterion for melting is obtained. We compared the results of the proposed model with experimental melting points and heats of fusion for many materials and obtained interesting results. A notable point reported here is how complex physics problems can sometimes be modeled with simple everyday objects that seem to have no connection to them. This kind of approach is sometimes very important in physics education and should be taught to undergraduate and graduate students.
Recent Developments on the Turbulence Modeling Resource Website (Invited)
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.
2015-01-01
The NASA Langley Turbulence Model Resource (TMR) website has been active for over five years. Its main goal of providing a one-stop, easily accessible internet site for up-to-date information on Reynolds-averaged Navier-Stokes turbulence models remains unchanged. In particular, the site strives to provide an easy way for users to verify their own implementations of widely-used turbulence models, and to compare the results from different models for a variety of simple unit problems covering a range of flow physics. Some new features have been recently added to the website. This paper documents the site's features, including recent developments, future plans, and open questions.
Image-Based Models for Specularity Propagation in Diminished Reality.
Said, Souheil Hadj; Tamaazousti, Mohamed; Bartoli, Adrien
2018-07-01
The aim of Diminished Reality (DR) is to remove a target object in a live video stream seamlessly. In our approach, the area of the target object is replaced with new texture that blends with the rest of the image. The result is then propagated to the next frames of the video. One of the important stages of this technique is to update the target region with respect to the illumination change. This is a complex and recurrent problem when the viewpoint changes. We show that the state-of-the-art in DR fails in solving this problem, even under simple scenarios. We then use local illumination models to address this problem. According to these models, the variation in illumination only affects the specular component of the image. In the context of DR, the problem is therefore solved by propagating the specularities in the target area. We list a set of structural properties of specularities which we incorporate in two new models for specularity propagation. Our first model includes the same property as the previous approaches, which is the smoothness of illumination variation, but has a different estimation method based on the Thin-Plate Spline. Our second model incorporates more properties of the specularity's shape on planar surfaces. Experimental results on synthetic and real data show that our strategy substantially improves the rendering quality compared to the state-of-the-art in DR.
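The Thin-Plate Spline estimation in the first model can be illustrated with generic scattered-data interpolation: sample specular intensities around the target region and let a TPS fill in a smooth field inside it. This sketch uses SciPy's RBFInterpolator and an invented Gaussian highlight; it illustrates TPS fitting only, not the authors' full pipeline:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(11)
pts = rng.uniform(0, 1, size=(200, 2))                 # normalized pixel coords
# invented specular highlight sampled outside the target region
spec = np.exp(-30 * ((pts[:, 0] - 0.6) ** 2 + (pts[:, 1] - 0.4) ** 2))
tps = RBFInterpolator(pts, spec, kernel='thin_plate_spline', smoothing=1e-6)

gx, gy = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
grid = np.column_stack([gx.ravel(), gy.ravel()])
field = tps(grid).reshape(64, 64)                      # propagated specular layer
print("peak of reconstructed specular field: %.3f" % field.max())
```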
NASA Astrophysics Data System (ADS)
Łatas, Waldemar
2018-01-01
The problem of vibrations of a beam with an attached system of translational and rotational dynamic mass dampers, subjected to random excitations with peaked power spectral densities, is presented in this paper. The Euler-Bernoulli beam model is applied, and the equation of motion is solved using the Galerkin method and the Laplace transform in time. The resulting transfer functions allow the power spectral densities of the beam deflection and other dependent variables to be determined. Numerical examples present simple optimization problems for the mass-damper parameters with local and global objective functions.
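The core of such a frequency-domain analysis is the relation S_y(ω) = |H(ω)|² S_F(ω) between excitation and response power spectral densities. A single-mode sketch with invented modal parameters and a peaked excitation PSD (not the paper's beam-plus-dampers transfer function):

```python
import numpy as np

wn, zeta = 10.0, 0.02                        # one beam mode: frequency, damping
w = np.linspace(0.1, 30, 2000)
H = 1.0 / (wn**2 - w**2 + 2j * zeta * wn * w)   # mass-normalized receptance
S_F = 1.0 / ((w - 9.0) ** 2 + 0.25)             # invented peaked excitation PSD
S_y = np.abs(H) ** 2 * S_F                      # deflection PSD
print("deflection PSD peaks near w =", w[np.argmax(S_y)])
```

Integrating S_y over frequency gives the mean-square deflection, which is the kind of quantity the damper-parameter optimization would minimize.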
van den Aarssen, Laura G; Bringmann, Torsten; Pfrommer, Christoph
2012-12-07
The cold dark matter paradigm describes the large-scale structure of the Universe remarkably well. However, there exists some tension with the observed abundances and internal density structures of both field dwarf galaxies and galactic satellites. Here, we demonstrate that a simple class of dark matter models may offer a viable solution to all of these problems simultaneously. Their key phenomenological properties are velocity-dependent self-interactions mediated by a light vector messenger and thermal production with much later kinetic decoupling than in the standard case.
Baryshnikov, F F
1995-10-20
The influence on laser power beaming to satellites of the angular aberration of radiation that results from the difference between the speed of a geostationary satellite and the speed of the Earth's surface is considered. Angular aberration makes it impossible to direct the energy at the satellite directly, so an additional beam rotation is necessary. Because the Earth's rotation may cause poor phase restoration, we face a serious problem: how to transfer incoherent radiation to remote satellites. In the framework of the Kolmogorov turbulence model, simple conditions for energy transfer are derived and discussed.
NASA Astrophysics Data System (ADS)
Regel, L. L.; Vedernikov, A. A.; Queeckers, P.; Legros, J.-C.
1991-12-01
The problem of separating crystals from their feeding solutions and conserving them at the end of crystallization under microgravity is investigated. The goal is to propose an efficient and simple system. The method has to be applicable to automatic separation on board a spacecraft, without using a centrifuge. The injection of an immiscible and inert liquid into the cell is proposed to solve the problem. The results of numerical modeling, ground simulation tests, and experiments under short durations of weightlessness (using aircraft parabolic flights) are described.
Unification of the complex Langevin method and the Lefschetz thimble method
NASA Astrophysics Data System (ADS)
Nishimura, Jun; Shimasaki, Shinji
2018-03-01
Recently there has been remarkable progress in solving the sign problem, which occurs in investigating statistical systems with a complex weight. The two promising methods, the complex Langevin method and the Lefschetz thimble method, share the idea of complexifying the dynamical variables, but their relationship has not been clear. Here we propose a unified formulation, in which the sign problem is taken care of by both the Langevin dynamics and the holomorphic gradient flow. We apply our formulation to a simple model in three different ways and show that one of them interpolates the two methods by changing the flow time.
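For a feel of one of the two ingredients, here is a toy complex Langevin run for the Gaussian partition function Z = integral of exp(-sigma x^2 / 2) with complex sigma, where the exact moment <x^2> = 1/sigma is known; step size and run lengths are invented. This is only the Langevin half of the story, without the holomorphic gradient flow of the proposed unification:

```python
import numpy as np

sigma = 1.0 + 1.0j                  # complex action parameter; exact <x^2> = 1/sigma
dt, n_therm, n_meas = 1e-3, 10_000, 400_000
rng = np.random.default_rng(0)

z, acc = 0.0 + 0.0j, 0.0 + 0.0j
for step in range(n_therm + n_meas):
    # complexified Langevin update: holomorphic drift -dS/dz plus real noise
    z += -sigma * z * dt + np.sqrt(2 * dt) * rng.standard_normal()
    if step >= n_therm:
        acc += z * z
print("CL estimate of <x^2>:", acc / n_meas)
print("exact value 1/sigma :", 1 / sigma)
```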
A discontinuous Galerkin method for two-dimensional PDE models of Asian options
NASA Astrophysics Data System (ADS)
Hozman, J.; Tichý, T.; Cvejnová, D.
2016-06-01
In our previous research we focused on the problem of plain vanilla option valuation using a discontinuous Galerkin method for the numerical PDE solution. Here we extend a simple one-dimensional problem to a two-dimensional one and design a scheme for the valuation of Asian options, i.e., options with a payoff depending on the average of prices collected over a prespecified horizon. The algorithm is based on an approach combining the advantages of finite element methods with piecewise polynomial, generally discontinuous, approximations. Finally, an illustrative example using DAX option market data is provided.
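For readers unfamiliar with the payoff, a quick Monte Carlo check (entirely separate from the paper's discontinuous Galerkin scheme) prices a fixed-strike arithmetic Asian call under geometric Brownian motion; all contract and market parameters below are invented:

```python
import numpy as np

S0, K, r, vol, T, n_avg = 100.0, 100.0, 0.01, 0.2, 1.0, 50   # hypothetical contract
n_paths = 200_000
rng = np.random.default_rng(1)

dt = T / n_avg
# GBM paths sampled at the averaging dates
z = rng.standard_normal((n_paths, n_avg))
log_steps = (r - 0.5 * vol**2) * dt + vol * np.sqrt(dt) * z
paths = S0 * np.exp(np.cumsum(log_steps, axis=1))
avg_price = paths.mean(axis=1)            # arithmetic average defining the payoff
payoff = np.maximum(avg_price - K, 0.0)   # fixed-strike Asian call
price = np.exp(-r * T) * payoff.mean()
print(f"MC estimate of Asian call price: {price:.3f}")
```

Averaging damps the terminal-price volatility, which is why Asian calls are cheaper than their vanilla counterparts with the same strike.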
Continuation of periodic orbits in symmetric Hamiltonian and conservative systems
NASA Astrophysics Data System (ADS)
Galan-Vioque, J.; Almaraz, F. J. M.; Macías, E. F.
2014-12-01
We present and review results on the continuation and bifurcation of periodic solutions in conservative, reversible and Hamiltonian systems in the presence of symmetries. In particular, we show how two-point boundary value problem continuation software can be used to compute families of periodic solutions of symmetric Hamiltonian systems. The technique is introduced with a very simple model example (the mathematical pendulum), justified with a theoretical continuation result, and then applied to two non-trivial examples: the non-integrable spring pendulum and the continuation of the figure-eight solution of the three-body problem.
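A naive stand-in for the continuation idea, using simple shooting instead of the boundary value problem software the authors employ: the pendulum's periodic orbits form a one-parameter family in the amplitude, and stepping along that parameter while recomputing the period is a (crude) continuation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Mathematical pendulum theta'' = -sin(theta). Starting at rest at amplitude
# theta0, the first downward zero crossing of theta marks a quarter period.
def quarter_period(theta0):
    event = lambda t, y: y[0]
    event.direction = -1
    sol = solve_ivp(lambda t, y: [y[1], -np.sin(y[0])], (0, 50),
                    [theta0, 0.0], events=event, rtol=1e-10, atol=1e-10)
    return sol.t_events[0][0]

for theta0 in np.linspace(0.1, 3.0, 8):      # continuation parameter
    print(f"amplitude {theta0:4.2f}  period {4 * quarter_period(theta0):7.4f}")
```

The period grows from 2π toward infinity as the amplitude approaches π, reproducing the well-known family.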
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graham, Peter W.; Ismail, Ahmed; Saraswat, Prashant
We present a simple solution to the little hierarchy problem in the minimal supersymmetric standard model: a vectorlike fourth generation. With O(1) Yukawa couplings for the new quarks, the Higgs mass can naturally be above 114 GeV. Unlike a chiral fourth generation, a vectorlike generation can solve the little hierarchy problem while remaining consistent with precision electroweak and direct production constraints, and maintaining the success of the grand unified framework. The new quarks are predicted to lie between approximately 300 and 600 GeV and will thus be discovered or ruled out at the LHC. This scenario suggests exploration of several novel collider signatures.
Actuator Placement Via Genetic Algorithm for Aircraft Morphing
NASA Technical Reports Server (NTRS)
Crossley, William A.; Cook, Andrea M.
2001-01-01
This research continued work that began under the support of NASA Grant NAG1-2119. The focus of this effort was to continue investigations of Genetic Algorithm (GA) approaches that could be used to solve an actuator placement problem by treating this as a discrete optimization problem. In these efforts, the actuators are assumed to be "smart" devices that change the aerodynamic shape of an aircraft wing to alter the flow past the wing and, as a result, provide aerodynamic moments that could provide flight control. The earlier work investigated issues with the problem statement, developed the appropriate actuator modeling, recognized the importance of symmetry for this problem, modified the aerodynamic analysis routine for more efficient use with the genetic algorithm, and began a problem size study to measure the impact of increasing problem complexity. The research discussed in this final summary further investigated the problem statement to provide a "combined moment" problem statement to simultaneously address roll, pitch and yaw. Investigations of problem size using this new problem statement provided insight into the performance of the GA as the number of possible actuator locations increased. Where previous investigations utilized a simple wing model to develop the GA approach for actuator placement, this research culminated with application of the GA approach to a high-altitude unmanned aerial vehicle concept, demonstrating that the approach is valid for an aircraft configuration.
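A stripped-down version of the discrete optimization being described: a binary chromosome marks which candidate locations carry an actuator, and the GA searches for a subset whose combined (roll, pitch, yaw) moments match a target while penalizing actuator count. The moment table, GA settings and penalty weight below are all invented, not the grant's aerodynamic model:

```python
import numpy as np
rng = np.random.default_rng(2)

n_loc, pop_size, n_gen = 40, 60, 100
moments = rng.normal(size=(n_loc, 3))      # invented per-location (roll, pitch, yaw)
target = np.array([1.5, -0.8, 0.4])        # invented combined-moment target

def fitness(chrom):
    # penalize both moment error and number of actuators switched on
    return -(np.linalg.norm(chrom @ moments - target) + 0.01 * chrom.sum())

pop = rng.integers(0, 2, size=(pop_size, n_loc))
for gen in range(n_gen):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]   # truncation selection
    cut = rng.integers(1, n_loc, size=pop_size // 2)           # one-point crossover
    kids = np.array([np.concatenate((parents[i % len(parents)][:c],
                                     parents[(i + 1) % len(parents)][c:]))
                     for i, c in enumerate(cut)])
    flip = rng.random(kids.shape) < 0.01                       # bit-flip mutation
    kids = np.where(flip, 1 - kids, kids)
    pop = np.vstack((parents, kids))

best = pop[np.argmax([fitness(c) for c in pop])]
print("actuators on:", np.flatnonzero(best), " fitness:", fitness(best))
```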
Motion and force control of multiple robotic manipulators
NASA Technical Reports Server (NTRS)
Wen, John T.; Kreutz-Delgado, Kenneth
1992-01-01
This paper addresses the motion and force control problem of multiple robot arms manipulating a cooperatively held object. A general control paradigm is introduced which decouples the motion and force control problems. For motion control, different control strategies are constructed based on the variables used as the control input in the controller design. There are three natural choices: acceleration of a generalized coordinate, arm tip force vectors, and the joint torques. The first two choices require full model information but produce simple models for the control design problem. The last choice results in a class of relatively model-independent control laws by exploiting the Hamiltonian structure of the open loop system. The motion control only determines the joint torque to within a manifold, due to the multiple-arm kinematic constraint. To resolve the nonuniqueness of the joint torques, two methods are introduced. If the arm and object models are available, an optimization can be performed to best allocate the desired end-effector control force to the joint actuators. The other possibility is to control the internal force about some set point. It is shown that effective force regulation can be achieved even if little model information is available.
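The allocation idea in the model-based branch can be sketched with a least-squares allocation: the minimum-norm solution of the wrench equation picks one point on the solution manifold, and the nullspace of the grasp map parameterizes the internal (squeeze) forces mentioned above. The grasp map below is an invented placeholder, not a kinematic model from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
G = rng.normal(size=(6, 12))            # invented grasp map: tip forces -> object wrench
w_des = np.array([0.0, 0.0, 9.81, 0.0, 0.0, 0.0])   # desired net wrench on the object

f = np.linalg.pinv(G) @ w_des           # minimum-norm tip-force allocation
N = np.eye(12) - np.linalg.pinv(G) @ G  # projector onto internal-force directions
print("wrench residual :", np.linalg.norm(G @ f - w_des))
print("nullspace rank  :", np.linalg.matrix_rank(N))
```

Adding any vector from the range of N changes the internal squeeze without changing the net wrench, which is what the set-point internal-force controller exploits.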
Modeling the plant-soil interaction in presence of heavy metal pollution and acidity variations.
Guala, Sebastián; Vega, Flora A; Covelo, Emma F
2013-01-01
We introduce a modification to a mathematical interaction model, developed to describe metal uptake by plants and the effects on their growth, that also considers the effects of variations of acidity in soil. The model relates the dynamics of the uptake of metals from soil to plants, and also variations of uptake according to the acidity level. Two types of relationships are considered: total and available metal content. We make simple mathematical assumptions in order to obtain expressions that are as simple as possible, with the aim of being easily tested in experimental problems. This work introduces modifications to two versions of the model: on the one hand, the expression of the relationship between the metal in soil and the concentration of the metal in plants and, on the other hand, the relationship between the metal in the soil and the total amount of the metal in plants. The subtle difference between the two versions is fundamental when considering the tolerance and accumulation capacity of pollutants in the biomass from the soil.
Some anticipated contributions to core fluid dynamics from the GRM
NASA Technical Reports Server (NTRS)
Vanvorhies, C.
1985-01-01
It is broadly maintained that the secular variation (SV) of the large scale geomagnetic field contains information on the fluid dynamics of Earth's electrically conducting outer core. The electromagnetic theory appropriate to a simple Earth model has recently been combined with reduced geomagnetic data in order to extract some of this information and ascertain its significance. The simple Earth model consists of a rigid, electrically insulating mantle surrounding a spherical, inviscid, and perfectly conducting liquid outer core. This model was tested against seismology by using truncated spherical harmonic models of the observed geomagnetic field to locate Earth's core-mantle boundary, CMB. Further electromagnetic theory has been developed and applied to the problem of estimating the horizontal fluid motion just beneath CMB. Of particular geophysical interest are the hypotheses that these motions: (1) include appreciable surface divergence indicative of vertical motion at depth, and (2) are steady for time intervals of a decade or more. In addition to the extended testing of the basic Earth model, the proposed GRM provides a unique opportunity to test these dynamical hypotheses.
Simple adaptive control for quadcopters with saturated actuators
NASA Astrophysics Data System (ADS)
Borisov, Oleg I.; Bobtsov, Alexey A.; Pyrkin, Anton A.; Gromov, Vladislav S.
2017-01-01
The stabilization problem for quadcopters with saturated actuators is considered. A simple adaptive output control approach is proposed. The "consecutive compensator" control law is augmented with an auxiliary integral loop and an anti-windup scheme. The efficiency of the obtained regulator was confirmed by simulation of the quadcopter control problem.
A Simple Label Switching Algorithm for Semisupervised Structural SVMs.
Balamurugan, P; Shevade, Shirish; Sundararajan, S
2015-10-01
In structured output learning, obtaining labeled data for real-world applications is usually costly, while unlabeled examples are available in abundance. Semisupervised structured classification deals with a small number of labeled examples and a large number of unlabeled structured data. In this work, we consider semisupervised structural support vector machines with domain constraints. The optimization problem, which in general is not convex, contains the loss terms associated with the labeled and unlabeled examples, along with the domain constraints. We propose a simple optimization approach that alternates between solving a supervised learning problem and a constraint matching problem. Solving the constraint matching problem is difficult for structured prediction, and we propose an efficient and effective label switching method to solve it. The alternating optimization is carried out within a deterministic annealing framework, which helps in effective constraint matching and avoiding poor local minima, which are not very useful. The algorithm is simple and easy to implement. Further, it is suitable for any structured output learning problem where exact inference is available. Experiments on benchmark sequence labeling data sets and a natural language parsing data set show that the proposed approach, though simple, achieves comparable generalization performance.
On a modification method of Lefschetz thimbles
NASA Astrophysics Data System (ADS)
Tsutsui, Shoichiro; Doi, Takahiro M.
2018-03-01
QCD at finite density is not yet well understood, as standard Monte Carlo simulation suffers from the sign problem there. In order to overcome the sign problem, the Lefschetz thimble method has been explored. Basically, the original sign problem can be less severe in a complexified theory due to the constancy of the imaginary part of the action on each thimble. However, global phase factors assigned to each thimble still remain. Their interference is not negligible in a situation where a large number of thimbles contribute to the partition function, and this could also lead to a sign problem. In this study, we propose a method to resolve this problem by modifying the structure of the Lefschetz thimbles such that only a single thimble is relevant to the partition function. It can be shown that observables measured in the original and modified theories are connected by a simple identity. We exemplify that our method works well in a toy model.
SEACAS Theory Manuals: Part 1. Problem Formulation in Nonlinear Solid Mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Attaway, S.W.; Laursen, T.A.; Zadoks, R.I.
1998-08-01
This report gives an introduction to the basic concepts and principles involved in the formulation of nonlinear problems in solid mechanics. By way of motivation, the discussion begins with a survey of some of the important sources of nonlinearity in solid mechanics applications, using wherever possible simple one-dimensional idealizations to demonstrate the physical concepts. This discussion is then generalized by presenting generic statements of initial/boundary value problems in solid mechanics, using linear elasticity as a template and encompassing such ideas as strong and weak forms of boundary value problems, boundary and initial conditions, and dynamic and quasistatic idealizations. The notational framework used for the linearized problem is then extended to account for finite deformation of possibly inelastic solids, providing the context for the descriptions of nonlinear continuum mechanics, constitutive modeling, and finite element technology given in three companion reports.
Coevolving memetic algorithms: a review and progress report.
Smith, Jim E
2007-02-01
Coevolving memetic algorithms are a family of metaheuristic search algorithms in which a rule-based representation of local search (LS) is coadapted alongside candidate solutions within a hybrid evolutionary system. Simple versions of these systems have been shown to outperform other nonadaptive memetic and evolutionary algorithms on a range of problems. This paper presents a rationale for such systems and places them in the context of other recent work on adaptive memetic algorithms. It then proposes a general structure within which a population of LS algorithms can be evolved in tandem with the solutions to which they are applied. Previous research started with a simple self-adaptive system before moving on to more complex models. Results showed that the algorithm was able to discover and exploit certain forms of structure and regularities within the problems. This "metalearning" of problem features provided a means of creating highly scalable algorithms. This work is briefly reviewed to highlight some of the important findings and behaviors exhibited. Based on this analysis, new results are then presented from systems with more flexible representations, which, again, show significant improvements. Finally, the current state of, and future directions for, research in this area is discussed.
Non-ambiguous recovery of Biot poroelastic parameters of cellular panels using ultrasonic waves
NASA Astrophysics Data System (ADS)
Ogam, Erick; Fellah, Z. E. A.; Sebaa, Naima; Groby, J.-P.
2011-03-01
The inverse problem of the recovery of the poroelastic parameters of open-cell soft plastic foam panels is solved by employing transmitted ultrasonic waves (USW) and the Biot-Johnson-Koplik-Champoux-Allard (BJKCA) model. It is shown, by constructing the objective functional given by the total square of the difference between predictions from the BJKCA interaction model and experimental data obtained with transmitted USW, that the inverse problem is ill-posed, since the functional exhibits several local minima and maxima. In order to solve this problem, which is beyond the capability of most off-the-shelf iterative nonlinear least squares optimization algorithms (such as the Levenberg-Marquardt or Nelder-Mead simplex methods), simple strategies are developed. The recovered acoustic parameters are compared with those obtained using simpler interaction models and a method employing the asymptotic phase velocity of the transmitted USW. The retrieved elastic moduli are validated by solving an inverse vibration spectroscopy problem with data obtained from beam-like specimens cut from the panels, using an equivalent solid elastodynamic model as estimator. The phase velocities are reconstructed using computed and measured resonance frequencies and a time-frequency decomposition of transient waves induced in the beam specimen. These confirm that the elastic parameters recovered using vibration are valid over the frequency range of study.
Xu, Lei; Jeavons, Peter
2015-11-01
Leader election in anonymous rings and complete networks is a very practical problem in distributed computing. Previous algorithms for this problem are generally designed for a classical message passing model where complex messages are exchanged. However, the need to send and receive complex messages makes such algorithms less practical for some real applications. We present some simple synchronous algorithms for distributed leader election in anonymous rings and complete networks that are inspired by the development of the neural system of the fruit fly. Our leader election algorithms all assume that only one-bit messages are broadcast by nodes in the network and processors are only able to distinguish between silence and the arrival of one or more messages. These restrictions allow implementations to use a simpler message-passing architecture. Even with these harsh restrictions our algorithms are shown to achieve good time and message complexity both analytically and experimentally.
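A toy version of the one-bit idea (inspired by the fly-based algorithms described, though not a transcription of them): each node broadcasts a single bit with some probability, hears only "silence" or "someone spoke", and a node that speaks alone becomes leader.

```python
import random

def elect_leader(n, p=0.1, max_rounds=10_000, seed=0):
    # each round, every node broadcasts one bit with probability p; nodes can
    # only distinguish silence from 'one or more messages'. We assume a
    # speaker can detect whether anyone else also spoke in the same round;
    # a node that speaks alone becomes the leader.
    rng = random.Random(seed)
    for rnd in range(1, max_rounds + 1):
        speakers = [i for i in range(n) if rng.random() < p]
        if len(speakers) == 1:
            return speakers[0], rnd
    return None, max_rounds

leader, rounds = elect_leader(50)
print(f"node {leader} elected after {rounds} rounds")
```

With n nodes the single-speaker probability per round is n p (1 - p)^(n - 1), so p near 1/n minimizes the expected number of rounds.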
Risk-Based Sampling: I Don't Want to Weight in Vain.
Powell, Mark R
2015-12-01
Recently, there has been considerable interest in developing risk-based sampling for food safety and animal and plant health for efficient allocation of inspection and surveillance resources. The problem of risk-based sampling allocation presents a challenge similar to financial portfolio analysis. Markowitz (1952) laid the foundation for modern portfolio theory based on mean-variance optimization. However, a persistent challenge in implementing portfolio optimization is the problem of estimation error, leading to false "optimal" portfolios and unstable asset weights. In some cases, portfolio diversification based on simple heuristics (e.g., equal allocation) has better out-of-sample performance than complex portfolio optimization methods due to estimation uncertainty. Even for portfolios with a modest number of assets, the estimation window required for true optimization may imply an implausibly long stationary period. The implications for risk-based sampling are illustrated by a simple simulation model of lot inspection for a small, heterogeneous group of producers. © 2015 Society for Risk Analysis.
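The estimation-error point can be reproduced in a few lines: when producers are truly identical, any structure a short-window mean-variance optimizer finds is noise, and its weights swing wildly while equal allocation is exactly right. All numbers below are invented:

```python
import numpy as np
rng = np.random.default_rng(4)

n_assets, window, trials = 5, 24, 1000
spreads = []
for _ in range(trials):
    x = rng.normal(0.0, 1.0, size=(window, n_assets))   # identical true assets
    mu, cov = x.mean(axis=0), np.cov(x.T)
    w = np.linalg.solve(cov, mu)                        # mean-variance direction
    w /= np.abs(w).sum()                                # normalize gross exposure
    spreads.append(w.std())
print("mean spread of 'optimal' weights:", np.mean(spreads))
print("spread of equal 1/N weights    : 0.0 (and exactly right here)")
```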
NASA Technical Reports Server (NTRS)
Puri, Ishwar K.
2004-01-01
Our goal has been to investigate the influence of both dilution and radiation on the extinction process of nonpremixed flames at low strain rates. Simulations have been performed by using a counterflow code and three radiation models have been included in it, namely, the optically thin, the narrowband, and discrete ordinate models. The counterflow flame code OPPDIFF was modified to account for heat transfer losses by radiation from the hot gases. The discrete ordinate method (DOM) approximation was first suggested by Chandrasekhar for solving problems in interstellar atmospheres. Carlson and Lathrop developed the method for solving multi-dimensional problem in neutron transport. Only recently has the method received attention in the field of heat transfer. Due to the applicability of the discrete ordinate method for thermal radiation problems involving flames, the narrowband code RADCAL was modified to calculate the radiative properties of the gases. A non-premixed counterflow flame was simulated with the discrete ordinate method for radiative emissions. In comparison with two other models, it was found that the heat losses were comparable with the optically thin and simple narrowband model. The optically thin model had the highest heat losses followed by the DOM model and the narrow-band model.
Random close packing of polydisperse jammed emulsions
NASA Astrophysics Data System (ADS)
Brujic, Jasna
2010-03-01
Packing problems are everywhere, ranging from oil extraction through porous rocks to grain storage in silos and the compaction of pharmaceutical powders into tablets. At a given density, particulate systems pack into a mechanically stable and amorphous jammed state. Theoretical frameworks have proposed a connection between this jammed state and the glass transition, a thermodynamics of jamming, as well as geometric modeling of random packings. Nevertheless, a simple underlying mechanism for the random assembly of athermal particles, analogous to crystalline ordering, remains unknown. Here we use 3D measurements of polydisperse packings of emulsion droplets to build a simple statistical model in which the complexity of the global packing is distilled into a local stochastic process. From the perspective of a single particle the packing problem is reduced to the random formation of nearest neighbors, followed by a choice of contacts among them. The two key parameters in the model, the available space around a particle and the ratio of contacts to neighbors, are directly obtained from experiments. Remarkably, we demonstrate that this ``granocentric'' view captures the properties of the polydisperse emulsion packing, ranging from the microscopic distributions of nearest neighbors and contacts to local density fluctuations and all the way to the global packing density. Further applications to monodisperse and bidisperse systems quantitatively agree with previously measured trends in global density. This model therefore reveals a general principle of organization for random packing and lays the foundations for a theory of jammed matter.
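A cartoon of the granocentric process (invented parameter values, not the measured ones): neighbors of random relative size fill a solid-angle budget around a central particle, and each neighbor is promoted to a contact with fixed probability.

```python
import numpy as np
rng = np.random.default_rng(5)

omega_budget = 0.8 * 4 * np.pi   # invented available solid angle per particle
p_contact = 0.4                  # invented contact-per-neighbor probability

def one_particle():
    used, neighbors = 0.0, 0
    while used < omega_budget:
        u = rng.uniform(0.7, 1.3)                    # neighbor/center size ratio
        # solid angle subtended by a touching sphere of relative radius u
        used += 2 * np.pi * (1 - np.sqrt(1 - (u / (1 + u)) ** 2))
        neighbors += 1
    return neighbors, rng.binomial(neighbors, p_contact)

data = np.array([one_particle() for _ in range(10_000)])
print("mean neighbors %.2f, mean contacts %.2f" % tuple(data.mean(axis=0)))
```

The point of the local view is that distributions like these, generated particle by particle, already determine global quantities such as the packing density.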
NASA Astrophysics Data System (ADS)
Rabbani, Masoud; Montazeri, Mona; Farrokhi-Asl, Hamed; Rafiei, Hamed
2016-12-01
Mixed-model assembly lines are increasingly accepted in many industrial environments to meet the growing trend of greater product variability, diversification of customer demands, and shorter life cycles. In this research, a new mathematical model is presented that simultaneously considers the balancing of a mixed-model U-line and human-related issues. The objective function consists of two separate components. The first part of the objective function is related to the balancing problem; its objectives are minimizing the cycle time, minimizing the number of workstations, and maximizing the line efficiency. The second part is related to human issues and consists of hiring cost, firing cost, training cost, and salary. To solve the presented model, two well-known multi-objective evolutionary algorithms, namely the non-dominated sorting genetic algorithm and multi-objective particle swarm optimization, have been used. A simple solution representation is provided in this paper to encode the solutions. Finally, the computational results are compared and analyzed.
Analysis of an operator-differential model for magnetostrictive energy harvesting
NASA Astrophysics Data System (ADS)
Davino, D.; Krejčí, P.; Pimenov, A.; Rachinskii, D.; Visone, C.
2016-10-01
We present a model of, and analysis of an optimization problem for, a magnetostrictive harvesting device which converts the mechanical energy of a repetitive process, such as vibrations of the smart material, into electrical energy that is then supplied to an electric load. The model combines a lumped differential equation for a simple electronic circuit with an operator model for the complex constitutive law of the magnetostrictive material. The operator, based on the formalism of the phenomenological Preisach model, describes nonlinear saturation effects and hysteresis losses typical of magnetostrictive materials in a thermodynamically consistent fashion. We prove well-posedness of the full operator-differential system and establish global asymptotic stability of the periodic regime under periodic mechanical forcing that represents mechanical vibrations due to varying environmental conditions. Then we show the existence of an optimal solution for the problem of maximization of the output power with respect to a set of controllable parameters (for the periodically forced system). Analytical results are illustrated with numerical examples of an optimal solution.
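The Preisach construction at the heart of the constitutive operator can be sketched as a weighted sum of relay hysterons; the thresholds and weights here are illustrative, not fitted to a magnetostrictive material:

```python
import numpy as np
rng = np.random.default_rng(6)

# relays (hysterons) with random up/down thresholds and equal weights
n = 500
beta = rng.uniform(-1.0, 1.0, n)            # down-switch thresholds
alpha = beta + rng.uniform(0.0, 1.0, n)     # up-switch thresholds (alpha >= beta)
w = np.full(n, 1.0 / n)
state = -np.ones(n)                         # every relay starts 'down'

def preisach(u):
    state[u >= alpha] = 1.0                 # switch relays up
    state[u <= beta] = -1.0                 # switch relays down
    return w @ state

up = [preisach(u) for u in np.linspace(-2, 2, 50)]    # ascending branch
down = [preisach(u) for u in np.linspace(2, -2, 50)]  # descending branch
print("hysteresis gap near u = 0: %.3f" % (down[24] - up[24]))
```

Because each relay's switching dissipates energy, the enclosed loop area directly represents the hysteresis losses the paper accounts for.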
Dynamic model of open shell structures buried in poroelastic soils
NASA Astrophysics Data System (ADS)
Bordón, J. D. R.; Aznárez, J. J.; Maeso, O.
2017-08-01
This paper is concerned with a three-dimensional time harmonic model of open shell structures buried in poroelastic soils. It combines the dual boundary element method (DBEM) for treating the soil and shell finite elements for modelling the structure, leading to a simple and efficient representation of buried open shell structures. A new fully regularised hypersingular boundary integral equation (HBIE) has been developed to this aim, which is then used to build the pair of dual BIEs necessary to formulate the DBEM for Biot poroelasticity. The new regularised HBIE is validated against a problem with analytical solution. The model is used in a wave diffraction problem in order to show its effectiveness. It offers excellent agreement for length to thickness ratios greater than 10, and relatively coarse meshes. The model is also applied to the calculation of impedances of bucket foundations. It is found that all impedances except the torsional one depend considerably on hydraulic conductivity within the typical frequency range of interest of offshore wind turbines.
The Effect of Laziness in Group Chase and Escape
NASA Astrophysics Data System (ADS)
Masuko, Makoto; Hiraoka, Takayuki; Ito, Nobuyasu; Shimada, Takashi
2017-08-01
The effect of laziness in the group chase and escape problem is studied using a simple model. Laziness is introduced in the form of random walks in two ways: uniformly and in a "division of labor" way. It is shown that while the former is always ineffective, the latter can improve the efficiency of catching through the formation of a pincer-attack configuration by diligent and lazy chasers.
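A minimal lattice toy in this spirit (invented rules and parameters; it does not reproduce the pincer mechanism, which needs the lazy chasers to hem targets in): lazy chasers random-walk while diligent chasers step greedily toward the nearest surviving target.

```python
import numpy as np
rng = np.random.default_rng(7)

def run(n_lazy, L=50, n_chasers=20, n_targets=10, steps=3000):
    # lazy chasers random-walk; diligent chasers step toward the nearest
    # surviving target (distances ignore the periodic wrap for simplicity);
    # targets are static for brevity; a target sharing a site is caught
    ch = rng.integers(0, L, size=(n_chasers, 2))
    tg = list(rng.integers(0, L, size=(n_targets, 2)))
    for t in range(steps):
        if not tg:
            break
        for i in range(n_chasers):
            if i < n_lazy:
                step = rng.integers(-1, 2, size=2)
            else:
                d = [np.abs(ch[i] - x).sum() for x in tg]
                step = np.sign(tg[int(np.argmin(d))] - ch[i])
            ch[i] = (ch[i] + step) % L
        tg = [x for x in tg if not any((c == x).all() for c in ch)]
    return t + 1, n_targets - len(tg)   # steps used, targets caught

print("all diligent   (steps, caught):", run(0))
print("5 lazy chasers (steps, caught):", run(5))
```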
Update on matter radii of 17-24O
NASA Astrophysics Data System (ADS)
Fortune, H. T.
2018-05-01
The appearance of new theoretical papers concerning matter radii of neutron-rich oxygen nuclei has prompted a return to this problem. New results provide no better agreement with experimental values than did previous calculations with a simple model. I maintain that there is no reason to adjust the 22O core in the 24O nucleus, and the case of 24O should be reexamined experimentally.
NASA Technical Reports Server (NTRS)
Lebedeff, S. A.; Hameed, S.
1975-01-01
The problem investigated can be solved exactly in a simple manner if the equations are written in terms of a similarity variable. The exact solution is used to explore two questions of interest in the modelling of urban air pollution, taking into account the distribution of surface concentration downwind of an area source and the distribution of concentration with height.
ATAC Autocuer Modeling Analysis.
1981-01-01
The analysis of the simple rectangular segmentation (1) is based on detection and estimation theory (2). This approach uses the concept of maximum ... continuous waveforms. In order to develop the principles of maximum likelihood, it is convenient to develop the principles for the "classical ... the concept of maximum likelihood is significant in that it provides the optimum performance of the detection/estimation problem. With a knowledge of
ERIC Educational Resources Information Center
Kallunki, Veera
2013-01-01
Pupils' qualitative understanding of DC-circuit phenomena is reported to be weak. In numerous research reports lists of problems in understanding the functioning of simple DC-circuits have been presented. So-called mental model surveys have uncovered difficulties in different age groups, and in different phases of instruction. In this study, the…
Capillarity Guided Patterning of Microliquids.
Kang, Myeongwoo; Park, Woohyun; Na, Sangcheol; Paik, Sang-Min; Lee, Hyunjae; Park, Jae Woo; Kim, Ho-Young; Jeon, Noo Li
2015-06-01
Soft lithography and other techniques have been developed to investigate biological and chemical phenomena as an alternative to photolithography-based patterning methods that have compatibility problems. Here, a simple approach for nonlithographic patterning of liquids and gels inside microchannels is described. Using a design that incorporates strategically placed microstructures inside the channel, microliquids or gels can be spontaneously trapped and patterned when the channel is drained. The ability to form microscale patterns inside microfluidic channels using simple fluid drain motion offers many advantages. This method is geometrically analyzed based on hydrodynamics and verified with simulation and experiments. Various materials (i.e., water, hydrogels, and other liquids) are successfully patterned with complex shapes that are isolated from each other. Multiple cell types are patterned within the gels. Capillarity guided patterning (CGP) is fast, simple, and robust. It is not limited by pattern shape, size, cell type, and material. In a simple three-step process, a 3D cancer model that mimics cell-cell and cell-extracellular matrix interactions is engineered. The simplicity and robustness of the CGP will be attractive for developing novel in vitro models of organ-on-a-chip and other biological experimental platforms amenable to long-term observation of dynamic events using advanced imaging and analytical techniques. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
ALC: automated reduction of rule-based models
Koschorreck, Markus; Gilles, Ernst Dieter
2008-01-01
Background Combinatorial complexity is a challenging problem for the modeling of cellular signal transduction since the association of a few proteins can give rise to an enormous amount of feasible protein complexes. The layer-based approach is an approximative, but accurate method for the mathematical modeling of signaling systems with inherent combinatorial complexity. The number of variables in the simulation equations is highly reduced and the resulting dynamic models show a pronounced modularity. Layer-based modeling allows for the modeling of systems not accessible previously. Results ALC (Automated Layer Construction) is a computer program that highly simplifies the building of reduced modular models, according to the layer-based approach. The model is defined using a simple but powerful rule-based syntax that supports the concepts of modularity and macrostates. ALC performs consistency checks on the model definition and provides the model output in different formats (C MEX, MATLAB, Mathematica and SBML) as ready-to-run simulation files. ALC also provides additional documentation files that simplify the publication or presentation of the models. The tool can be used offline or via a form on the ALC website. Conclusion ALC allows for a simple rule-based generation of layer-based reduced models. The model files are given in different formats as ready-to-run simulation files. PMID:18973705
The measurement of linear frequency drift in oscillators
NASA Astrophysics Data System (ADS)
Barnes, J. A.
1985-04-01
A linear drift in frequency is an important element in most stochastic models of oscillator performance. Quartz crystal oscillators often have drifts in excess of a part in ten to the tenth power per day. Even commercial cesium beam devices often show drifts of a few parts in ten to the thirteenth per year. There are many ways to estimate the drift rates from data samples (e.g., regress the phase on a quadratic; regress the frequency on a linear; compute the simple mean of the first difference of frequency; use Kalman filters with a drift term as one element in the state vector; and others). Although most of these estimators are unbiased, they vary in efficiency (i.e., confidence intervals). Further, the estimation of confidence intervals using the standard analysis of variance (typically associated with the specific estimating technique) can give amazingly optimistic results. The source of these problems is not an error in, say, the regressions techniques, but rather the problems arise from correlations within the residuals. That is, the oscillator model is often not consistent with constraints on the analysis technique or, in other words, some specific analysis techniques are often inappropriate for the task at hand. The appropriateness of a specific analysis technique is critically dependent on the oscillator model and can often be checked with a simple whiteness test on the residuals.
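Three of the listed estimators are easy to compare in simulation. With white frequency noise plus a linear drift, all three are unbiased but their scatter differs by orders of magnitude (rerun with different seeds to see it); the noise level, sample count and drift rate below are invented:

```python
import numpy as np
rng = np.random.default_rng(8)

n, tau, true_drift = 1000, 1.0, 1e-3
t = np.arange(n) * tau
y = true_drift * t + rng.normal(0.0, 0.5, n)   # frequency samples, white FM noise
x = np.cumsum(y) * tau                         # phase (time error)

d_freq = np.polyfit(t, y, 1)[0]                # regress frequency on a line
d_phase = 2 * np.polyfit(t, x, 2)[0]           # regress phase on a quadratic
d_diff = np.mean(np.diff(y)) / tau             # mean first difference of frequency
                                               # (collapses to the endpoints!)
print(f"linear-freq {d_freq:.2e}  quad-phase {d_phase:.2e}  "
      f"first-diff {d_diff:.2e}  true {true_drift:.2e}")
```

Under other noise types (flicker or random-walk FM) the residuals are correlated, which is exactly when the standard analysis-of-variance confidence intervals become misleadingly optimistic.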
Ash, A; Schwartz, M; Payne, S M; Restuccia, J D
1990-11-01
Medical record review is increasing in importance as the need to identify and monitor utilization and quality of care problems grows. To conserve resources, reviews are usually performed on a subset of cases. If judgment is used to identify subgroups for review, this raises the following questions: How should subgroups be determined, particularly since the locus of problems can change over time? What standard of comparison should be used in interpreting rates of problems found in subgroups? How can population problem rates be estimated from observed subgroup rates? How can one avoid the bias that arises because reviewers know that selected cases are suspected of having problems? How can changes in problem rates over time be interpreted when evaluating intervention programs? Simple random sampling, an alternative to subgroup review, overcomes the problems implied by these questions but is inefficient. The Self-Adapting Focused Review System (SAFRS), introduced and described here, provides an adaptive approach to record selection that is based upon model-weighted probability sampling. It retains the desirable inferential properties of random sampling while allowing reviews to be concentrated on cases currently thought most likely to be problematic. Model development and evaluation are illustrated using hospital data to predict inappropriate admissions.
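The inferential bookkeeping behind model-weighted probability sampling can be illustrated with a Horvitz-Thompson estimate: cases are drawn with inclusion probabilities proportional to an (imperfect) priority score, and dividing each sampled outcome by its inclusion probability recovers an unbiased population problem rate. All rates and scores below are invented:

```python
import numpy as np
rng = np.random.default_rng(9)

N, n = 5000, 200                                  # population, target sample size
p_problem = rng.uniform(0.02, 0.4, N)             # true case-level problem risk
problems = rng.random(N) < p_problem              # realized problems
score = np.clip(p_problem + rng.normal(0, 0.05, N), 0.01, None)  # imperfect model

pi = np.clip(n * score / score.sum(), 0.0, 1.0)   # inclusion probabilities
sample = rng.random(N) < pi                       # Poisson sampling
ht_rate = (problems[sample] / pi[sample]).sum() / N   # Horvitz-Thompson estimate
print("true rate %.3f   HT estimate %.3f   sampled %d cases"
      % (problems.mean(), ht_rate, sample.sum()))
```

Because the weights undo the oversampling of high-score cases, the review can focus where problems are expected without biasing the population estimate.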
GRIPs (Group Investigation Problems) for Introductory Physics
NASA Astrophysics Data System (ADS)
Moore, Thomas A.
2006-12-01
GRIPs lie somewhere between homework problems and simple labs: they are open-ended questions that require a mixture of problem-solving skills and hands-on experimentation to solve practical puzzles involving simple physical objects. In this talk, I will describe three GRIPs that I developed for a first-semester introductory calculus-based physics course based on the "Six Ideas That Shaped Physics" text. I will discuss the design of the three GRIPs we used this past fall, our experience in working with students on these problems, and students' response as reported on course evaluations.
Stability and Interaction of Coherent Structure in Supersonic Reactive Wakes
NASA Technical Reports Server (NTRS)
Menon, Suresh
1983-01-01
A theoretical formulation and analysis is presented for a study of the stability and interaction of coherent structure in reacting free shear layers. The physical problem under investigation is a premixed hydrogen-oxygen reacting shear layer in the wake of a thin flat plate. The coherent structure is modeled as a periodic disturbance and its stability is determined by the application of linearized hydrodynamic stability theory, which results in a generalized eigenvalue problem for reactive flows. A detailed stability analysis of the reactive wake for neutral, symmetrical and antisymmetrical disturbances is presented. The reactive stability criteria are shown to be quite different from the classical non-reactive ones. The interaction between the mean flow, coherent structure and fine-scale turbulence is theoretically formulated using the von Kármán integral technique. Both time-averaging and conditional phase averaging are necessary to separate the three types of motion. The resulting integro-differential equations can then be solved subject to initial conditions with appropriate shape functions. In the laminar flow transition region of interest, the spatial interaction between the mean motion and coherent structure is calculated for both non-reactive and reactive conditions and compared with experimental data wherever available. The fine-scale turbulent motion is determined by the application of integral analysis to the fluctuation equations. Since at present this turbulence model is still untested, turbulence is modeled in the interaction problem by a simple algebraic eddy viscosity model. The applicability of the integral turbulence model formulated here is studied parametrically by integrating these equations for the simple case of self-similar mean motion with assumed shape functions. The effect of the motion of the coherent structure is studied and very good agreement is obtained with previous experimental and theoretical works for non-reactive flow. For the reactive case, lack of experimental data made direct comparison difficult. It was determined that the growth rate of the disturbance amplitude is lower in the reactive case. The results indicate that the reactive flow stability is in qualitative agreement with experimental observation.
Aragón, Alfredo S; Kalberg, Wendy O; Buckley, David; Barela-Scott, Lindsey M; Tabachnick, Barbara G; May, Philip A
2008-12-01
Although a large body of literature exists on cognitive functioning in alcohol-exposed children, it is unclear if there is a signature neuropsychological profile in children with Fetal Alcohol Spectrum Disorders (FASD). This study assesses cognitive functioning in children with FASD from several American Indian reservations in the Northern Plains States, and it applies a hierarchical model of simple versus complex information processing to further examine cognitive function. We hypothesized that complex tests would discriminate between children with FASD and culturally similar controls, while children with FASD would perform similar to controls on relatively simple tests. Our sample includes 32 control children and 24 children with a form of FASD [fetal alcohol syndrome (FAS) = 10, partial fetal alcohol syndrome (PFAS) = 14]. The test battery measures general cognitive ability, verbal fluency, executive functioning, memory, and fine-motor skills. Many of the neuropsychological tests produced results consistent with a hierarchical model of simple versus complex processing. The complexity of the tests was determined "a priori" based on the number of cognitive processes involved in them. Multidimensional scaling was used to statistically analyze the accuracy of classifying the neurocognitive tests into a simple versus complex dichotomy. Hierarchical logistic regression models were then used to define the contribution made by complex versus simple tests in predicting the significant differences between children with FASD and controls. Complex test items discriminated better than simple test items. The tests that conformed well to the model were the Verbal Fluency, Progressive Planning Test (PPT), the Lhermitte memory tasks, and the Grooved Pegboard Test (GPT). The FASD-grouped children, when compared with controls, demonstrated impaired performance on letter fluency, while their performance was similar on category fluency. On the more complex PPT trials (problems 5 to 8), as well as the Lhermitte logical tasks, the FASD group performed the worst. The differential performance between children with FASD and controls was evident across various neuropsychological measures. The children with FASD performed significantly more poorly on the complex tasks than did the controls. The identification of a neurobehavioral profile in children with prenatal alcohol exposure will help clinicians identify and diagnose children with FASD.
NASA Astrophysics Data System (ADS)
BOERTJENS, G. J.; VAN HORSSEN, W. T.
2000-08-01
In this paper an initial-boundary value problem for the vertical displacement of a weakly non-linear elastic beam with a harmonic excitation in the horizontal direction at the ends of the beam is studied. The initial-boundary value problem can be regarded as a simple model describing oscillations of flexible structures like suspension bridges or iced overhead transmission lines. Using a two-time-scales perturbation method, an approximation of the solution of the initial-boundary value problem is constructed. Interactions between different oscillation modes of the beam are studied. It is shown that for certain external excitations, depending on the phase of an oscillation mode, the amplitude of specific oscillation modes changes.
Assimilation of satellite color observations in a coupled ocean GCM-ecosystem model
NASA Technical Reports Server (NTRS)
Sarmiento, Jorge L.
1992-01-01
Monthly average coastal zone color scanner (CZCS) estimates of chlorophyll concentration were assimilated into an ocean general circulation model (GCM) containing a simple model of the pelagic ecosystem. The assimilation was performed in the simplest possible manner, to allow an assessment of whether there were major problems with the ecosystem model or with the assimilation procedure. The current ecosystem model performed well in some regions, but in others failed to assimilate chlorophyll estimates without disrupting important ecosystem properties. This experiment gave insight into those properties of the ecosystem model that must be changed for data assimilation to be generally successful, while raising other important issues about the assimilation procedure.
Hypo-Elastic Model for Lung Parenchyma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freed, Alan D.; Einstein, Daniel R.
2012-03-01
A simple elastic isotropic constitutive model for the spongy tissue in lung is derived from the theory of hypoelasticity. The model is shown to exhibit a pressure-dependent behavior that has been interpreted by some as indicating extensional anisotropy. In contrast, we show that this behavior arises naturally from an analysis of isotropic hypoelastic invariants, and is a likely result of non-linearity, not anisotropy. The response of the model is determined analytically for several boundary value problems used for material characterization. These responses give insight into both the material behavior and admissible bounds on parameters. The model is characterized against published experimental data for dog lung. Future work includes non-elastic model behavior.
Quantum teleportation of nonclassical wave packets: An effective multimode theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benichi, Hugo; Takeda, Shuntaro; Lee, Noriyuki
2011-07-15
We develop a simple and efficient theoretical model to understand the quantum properties of broadband continuous variable quantum teleportation. We show that, if stated properly, the problem of multimode teleportation can be simplified to teleportation of a single effective mode that describes the input state temporal characteristic. Using that model, we show how the finite bandwidth of squeezing and external noise in the classical channel affect the output teleported quantum field. We choose an approach that is especially relevant for the case of non-Gaussian nonclassical quantum states and we finally back-test our model with recent experimental results.
Optical depth in particle-laden turbulent flows
NASA Astrophysics Data System (ADS)
Frankel, A.; Iaccarino, G.; Mani, A.
2017-11-01
Turbulent clustering of particles causes an increase in the radiation transmission through gas-particle mixtures. Attempts to capture the ensemble-averaged transmission lead to a closure problem called the turbulence-radiation interaction. A simple closure model based on the particle radial distribution function is proposed to capture the effect of turbulent fluctuations in the concentration on radiation intensity. The model is validated against a set of particle-resolved ray tracing experiments through particle fields from direct numerical simulations of particle-laden turbulence. The form of the closure model is generalizable to arbitrary stochastic media with known two-point correlation functions.
On Global Optimal Sailplane Flight Strategy
NASA Technical Reports Server (NTRS)
Sander, G. J.; Litt, F. X.
1979-01-01
The derivation and interpretation of the necessary conditions that a sailplane cross-country flight has to satisfy to achieve the maximum global flight speed is considered. Simple rules are obtained for two specific meteorological models. The first one uses concentrated lifts of various strengths at unequal distances. The second one takes into account finite, nonuniform space amplitudes for the lifts and allows, therefore, for dolphin-style flight. In both models, altitude constraints consisting of upper and lower limits are shown to be essential to model realistic problems. Numerical examples illustrate the difference with existing techniques based on local optimality conditions.
Effects of Land Use on Lake Nutrients: The Importance of Scale, Hydrologic Connectivity, and Region
Soranno, Patricia A.; Cheruvelil, Kendra Spence; Wagner, Tyler; Webster, Katherine E.; Bremigan, Mary Tate
2015-01-01
Catchment land uses, particularly agriculture and urban uses, have long been recognized as major drivers of nutrient concentrations in surface waters. However, few simple models have been developed that relate the amount of catchment land use to downstream freshwater nutrients. Nor are existing models applicable to large numbers of freshwaters across broad spatial extents such as regions or continents. This research aims to increase model performance by exploring three factors that affect the relationship between land use and downstream nutrients in freshwater: the spatial extent for measuring land use, hydrologic connectivity, and the regional differences in both the amount of nutrients and effects of land use on them. We quantified the effects of these three factors that relate land use to lake total phosphorus (TP) and total nitrogen (TN) in 346 north temperate lakes in 7 regions in Michigan, USA. We used a linear mixed modeling framework to examine the importance of spatial extent, lake hydrologic class, and region on models with individual lake nutrients as the response variable, and individual land use types as the predictor variables. Our modeling approach was chosen to avoid problems of multi-collinearity among predictor variables and a lack of independence of lakes within regions, both of which are common problems in broad-scale analyses of freshwaters. We found that all three factors influence land use-lake nutrient relationships. The strongest evidence was for the effect of lake hydrologic connectivity, followed by region, and finally, the spatial extent of land use measurements. Incorporating these three factors into relatively simple models of land use effects on lake nutrients should help to improve predictions and understanding of land use-lake nutrient interactions at broad scales. PMID:26267813
Advanced data assimilation in strongly nonlinear dynamical systems
NASA Technical Reports Server (NTRS)
Miller, Robert N.; Ghil, Michael; Gauthiez, Francois
1994-01-01
Advanced data assimilation methods are applied to simple but highly nonlinear problems. The dynamical systems studied here are the stochastically forced double well and the Lorenz model. In both systems, linear approximation of the dynamics about the critical points near which regime transitions occur is not always sufficient to track their occurrence or nonoccurrence. Straightforward application of the extended Kalman filter yields mixed results. The ability of the extended Kalman filter to track transitions of the double-well system from one stable critical point to the other depends on the frequency and accuracy of the observations relative to the mean-square amplitude of the stochastic forcing. The ability of the filter to track the chaotic trajectories of the Lorenz model is limited to short times, as is the ability of strong-constraint variational methods. Examples are given to illustrate the difficulties involved, and qualitative explanations for these difficulties are provided. Three generalizations of the extended Kalman filter are described. The first is based on inspection of the innovation sequence, that is, the successive differences between observations and forecasts; it works very well for the double-well problem. The second, an extension to fourth-order moments, yields excellent results for the Lorenz model but will be unwieldy when applied to models with high-dimensional state spaces. A third, more practical method--based on an empirical statistical model derived from a Monte Carlo simulation--is formulated, and shown to work very well. Weak-constraint methods can be made to perform satisfactorily in the context of these simple models, but such methods do not seem to generalize easily to practical models of the atmosphere and ocean. In particular, it is shown that the equations derived in the weak variational formulation are difficult to solve conveniently for large systems.
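A compact version of the kind of experiment described, an extended Kalman filter tracking the Lorenz model from noisy observations of one component, is sketched below; step sizes, noise levels and the observation interval are invented. Widening obs_every or inflating R reproduces the loss of track discussed above.

```python
import numpy as np
rng = np.random.default_rng(10)

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
dt, n_steps, obs_every = 0.01, 2000, 25
H = np.array([[1.0, 0.0, 0.0]])        # observe x only
R = np.array([[2.0]])                  # observation noise variance
Q = 0.05 * np.eye(3)                   # model-error covariance rate

def f(s):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def jac(s):
    x, y, z = s
    return np.array([[-sigma, sigma, 0.0],
                     [rho - z, -1.0, -x],
                     [y, x, -beta]])

truth = np.array([1.0, 1.0, 25.0])
est, P = truth + rng.normal(0, 1, 3), np.eye(3)
for k in range(n_steps):
    truth = truth + dt * f(truth)            # true trajectory (Euler step)
    F = np.eye(3) + dt * jac(est)            # tangent linear model
    est = est + dt * f(est)                  # forecast
    P = F @ P @ F.T + Q * dt
    if k % obs_every == 0:                   # analysis step
        obs = H @ truth + rng.normal(0, np.sqrt(R[0, 0]), 1)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        est = est + K @ (obs - H @ est)
        P = (np.eye(3) - K @ H) @ P
print("final state error:", np.linalg.norm(est - truth))
```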
A Bayesian Approach to Model Selection in Hierarchical Mixtures-of-Experts Architectures.
Tanner, Martin A.; Peng, Fengchun; Jacobs, Robert A.
1997-03-01
There does not exist a statistical model that shows good performance on all tasks. Consequently, the model selection problem is unavoidable; investigators must decide which model is best at summarizing the data for each task of interest. This article presents an approach to the model selection problem in hierarchical mixtures-of-experts architectures. These architectures combine aspects of generalized linear models with those of finite mixture models in order to perform tasks via a recursive "divide-and-conquer" strategy. Markov chain Monte Carlo methodology is used to estimate the distribution of the architectures' parameters. One part of our approach to model selection attempts to estimate the worth of each component of an architecture so that relatively unused components can be pruned from the architecture's structure. A second part of this approach uses a Bayesian hypothesis testing procedure in order to differentiate inputs that carry useful information from nuisance inputs. Simulation results suggest that the approach presented here adheres to the dictum of Occam's razor; simple architectures that are adequate for summarizing the data are favored over more complex structures. Copyright 1997 Elsevier Science Ltd. All Rights Reserved.
Accounting for nitrogen fixation in simple models of lake nitrogen loading/export.
Ruan, Xiaodan; Schellenger, Frank; Hellweger, Ferdi L
2014-05-20
Coastal eutrophication, an important global environmental problem, is primarily caused by excess nitrogen and management efforts consequently focus on lowering watershed N export (e.g., by reducing fertilizer use). Simple quantitative models are needed to evaluate alternative scenarios at the watershed scale. Existing models generally assume that, for a specific lake/reservoir, a constant fraction of N loading is exported downstream. However, N fixation by cyanobacteria may increase when the N loading is reduced, which may change the (effective) fraction of N exported. Here we present a model that incorporates this process. The model (Fixation and Export of Nitrogen from Lakes, FENL) is based on a steady-state mass balance with loading, output, loss/retention, and N fixation, where the amount fixed is a function of the N/P ratio of the loading (i.e., when N/P is less than a threshold value, N is fixed). Three approaches are used to parametrize and evaluate the model, including microcosm lab experiments, lake field observations/budgets and lake ecosystem model applications. Our results suggest that N export will not be reduced proportionally with N loading, which needs to be considered when evaluating management scenarios.
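A back-of-envelope sketch of a FENL-style steady-state budget: when the incoming N:P ratio falls below a threshold, N fixation is assumed to make up the deficit, so export falls less than proportionally with N loading. The functional form and parameter values here are assumptions for illustration, not the paper's calibration.

```python
def n_export(n_load, p_load, np_threshold=16.0, retention=0.5):
    """N exported downstream under a simple loading/fixation/retention balance."""
    deficit = max(0.0, np_threshold * p_load - n_load)   # N fixed by cyanobacteria
    return (n_load + deficit) * (1.0 - retention)        # export after loss/retention

base = n_export(1600.0, 100.0)
cut  = n_export(800.0, 100.0)    # halve N loading at fixed P
print(f"halving N load cuts export by {100 * (1 - cut / base):.0f}%, not 50%")
```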
NASA Astrophysics Data System (ADS)
Colombant, Denis; Manheimer, Wallace; Busquet, Michel
2004-11-01
A simple steady-state model using flux-limiters by Day et al. [1] showed that temperature profiles could formally be double-valued. Stability of temperature profiles in laser-driven temperature fronts using delocalization models was also discussed by Prasad and Kershaw [2]. We have observed steepening of the front and flattening of the maximum temperature in laser-driven implosions [3]. Following the simple model first proposed in [1], we solve a two-boundary value steady-state heat flow problem for various non-local heat transport models. For the more complicated models [4,5], we obtain the steady-state solution as the asymptotic limit of the time-dependent solution. Solutions will be shown and compared for these various models. [1] M. Day, B. Merriman, F. Najmabadi and R. W. Conn, Contrib. Plasma Phys. 36, 419 (1996); [2] M. K. Prasad and D. S. Kershaw, Phys. Fluids B3, 3087 (1991); [3] D. Colombant, W. Manheimer and M. Busquet, Bull. Amer. Phys. Soc. 48, 326 (2003); [4] E. M. Epperlein and R. W. Short, Phys. Fluids B3, 3092 (1991); [5] W. Manheimer and D. Colombant, Phys. Plasmas 11, 260 (2004).
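A sketch of a two-point steady-state heat-flow problem with a simple harmonic flux limiter, solved by shooting on the constant flux q. The Spitzer-Harm-like conductivity κ₀T^2.5 and the free-streaming form f·T^1.5 are assumed illustrative forms, not the cited models.

```python
import numpy as np

kappa0, f_lim, T0, TL, L = 1.0, 0.15, 4.0, 1.0, 1.0   # illustrative units

def shoot(q, n=2000):
    """Integrate dT/dx across [0, L] for a trial constant flux q > 0."""
    T, dx = T0, L / n
    for _ in range(n):
        q_fs = f_lim * T**1.5                 # free-streaming limit (assumed form)
        if q >= q_fs:                          # flux cannot exceed the limit
            return -np.inf
        q_diff = q * q_fs / (q_fs - q)         # invert 1/q = 1/q_diff + 1/q_fs
        T += -q_diff / (kappa0 * T**2.5) * dx  # diffusive gradient
        if T <= 0:
            return -np.inf
    return T

# Bisection on q: a larger flux gives a steeper drop, hence a smaller T(L).
lo, hi = 0.0, f_lim * TL**1.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if shoot(mid) > TL else (lo, mid)
print(f"steady-state flux q = {0.5 * (lo + hi):.4f}")
```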
Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.
Chatzis, Sotirios P; Andreou, Andreas S
2015-11-01
Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics, and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made Bayesian approaches appear unattractive, and thus underdeveloped, in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using publicly available benchmark data sets.
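For orientation, the classical maximum-likelihood Poisson regression that such Bayesian max-margin models compete against can be fit in a few lines; this sketch is a baseline only and does not implement the paper's model. The metric columns are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200
X = np.column_stack([rng.normal(size=n),     # e.g. scaled cyclomatic complexity
                     rng.normal(size=n)])    # e.g. scaled lines of code
y = rng.poisson(np.exp(0.3 + 0.8 * X[:, 0] + 0.2 * X[:, 1]))   # defect counts

model = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson())
fit = model.fit()
print(fit.params)                            # recovered coefficients near [0.3, 0.8, 0.2]
print(fit.predict(sm.add_constant(X))[:5])   # expected defect counts
```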
NASA Technical Reports Server (NTRS)
Baskharone, Erian A.
1993-01-01
This report describes the computational steps involved in executing a finite-element-based perturbation model for computing the rotordynamic coefficients of a shrouded pump impeller or a simple seal. These arise from the fluid/rotor interaction in the clearance gap. In addition to the sample cases, the computational procedure also applies to a separate category of problems referred to as the 'seal-like' category. The problem, in this case, concerns a shrouded impeller, with the exception that the secondary, or leakage, passage is totally isolated from the primary-flow passage. The difference between this and the pump problem is that the former is analytically of the simple 'seal-like' configuration, with two (inlet and exit) flow-permeable stations, while the latter constitutes a double-entry/double-discharge flow problem. In all cases, the flow domain is the rotor clearance gap, and the rotor excitation takes the form of a cylindrical whirl around the housing centerline, as for a smooth annular seal. In its centered operation mode, the rotor is assumed to give rise to an axisymmetric flow field in the clearance gap. As a result, problems involving longitudinal or helical grooves, in the rotor or housing surfaces, go beyond the code capabilities. Setting aside the pre- and post-processing phases, the bulk of the computational procedure consists of two main steps. The first produces the axisymmetric 'zeroth-order' flow solution in the given flow domain. A detailed description of this problem, including the flow-governing equations, turbulence closure, boundary conditions, and the finite-element formulation, was given by Baskharone and Hensel. The second main step implements the perturbation model, with the input being the centered-rotor 'zeroth-order' flow solution and a prescribed whirl frequency ratio (whirl frequency divided by the impeller speed). The computational domain, in the latter case, is treated as three-dimensional, with the number of computational planes in the circumferential direction specified a priori. The reader is reminded that the deformations in the finite elements are all infinitesimally small because the rotor eccentricity itself is a virtual displacement. This explains why we have generically termed the perturbation model the 'virtually' deformable finite-element category. The primary outcome of implementing the perturbation model is the tangential and radial components, F_theta^* and F_r^*, of the fluid-exerted force on the rotor surface due to the whirling motion. Repetitive execution of the perturbation model subprogram over a sufficient range of whirl frequency ratios, and subsequent interpolation of these fluid forces using the least-squares method, finally enable the user to compute the impeller rotordynamic coefficients of the fluid/rotor interaction: the direct and cross-coupled stiffness, damping, and inertia effects.
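The final least-squares step can be sketched as a polynomial fit of the forces against the whirl frequency ratio. One common convention for circular whirl is assumed below (sign conventions vary across references), and the sampled forces are stand-ins rather than actual perturbation-model output.

```python
# Assumed convention:  F_r(f) = M f^2 + c f - K ,  F_theta(f) = m f^2 - C f + k
import numpy as np

f = np.linspace(-0.5, 1.5, 9)                  # whirl frequency ratios
F_r     = 0.8 * f**2 + 0.3 * f - 2.0           # stand-in radial force samples
F_theta = 0.05 * f**2 - 1.1 * f + 0.6          # stand-in tangential force samples

a2, a1, a0 = np.polyfit(f, F_r, 2)             # quadratic least-squares fits
b2, b1, b0 = np.polyfit(f, F_theta, 2)
M, c, K = a2, a1, -a0                          # direct inertia, cross damping, stiffness
m, C, k = b2, -b1, b0                          # cross inertia, direct damping, cross stiffness
print(f"M={M:.2f} c={c:.2f} K={K:.2f}  m={m:.2f} C={C:.2f} k={k:.2f}")
```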
Vacancies in epitaxial graphene
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davydov, S. Yu., E-mail: Sergei-Davydov@mail.ru
The coherent-potential method is used to consider the problem of the influence of a finite concentration of randomly arranged vacancies on the density of states of epitaxial graphene. To describe the density of states of the substrate, simple models (the Anderson model, Haldane-Anderson model, and parabolic model) are used. The electronic spectrum of free single-sheet graphene is considered in the low-energy approximation. Charge transfer in the graphene-substrate system is discussed. It is shown that, in all cases, the density of states of epitaxial graphene decreases proportionally to the vacancy concentration. At the same time, the average charge transferred from graphene to the substrate increases.
An optimal control model approach to the design of compensators for simulator delay
NASA Technical Reports Server (NTRS)
Baron, S.; Lancraft, R.; Caglayan, A.
1982-01-01
The effects of display delay on pilot performance and workload and of the design of the filters to ameliorate these effects were investigated. The optimal control model for pilot/vehicle analysis was used both to determine the potential delay effects and to design the compensators. The model was applied to a simple roll tracking task and to a complex hover task. The results confirm that even small delays can degrade performance and impose a workload penalty. A time-domain compensator designed by using the optimal control model directly appears capable of providing extensive compensation for these effects even in multi-input, multi-output problems.
Coherent states, quantum gravity, and the Born-Oppenheimer approximation. I. General considerations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stottmeister, Alexander, E-mail: alexander.stottmeister@gravity.fau.de; Thiemann, Thomas, E-mail: thomas.thiemann@gravity.fau.de
2016-06-15
This article, as the first of three, aims at establishing the (time-dependent) Born-Oppenheimer approximation, in the sense of space adiabatic perturbation theory, for quantum systems constructed by techniques of the loop quantum gravity framework, especially the canonical formulation of the latter. The analysis presented here fits into a rather general framework and offers a solution to the problem of applying the usual Born-Oppenheimer ansatz for molecular (or structurally analogous) systems to more general quantum systems (e.g., spin-orbit models) by means of space adiabatic perturbation theory. The proposed solution is applied to a simple, finite dimensional model of interacting spin systems, which serves as a non-trivial, minimal model of the aforesaid problem. Furthermore, it is explained how the content of this article and its companion affect the possible extraction of quantum field theory on curved spacetime from loop quantum gravity (including matter fields).
A computer program to trace seismic ray distribution in complex two-dimensional geological models
Yacoub, Nazieh K.; Scott, James H.
1970-01-01
A computer program has been developed to trace seismic rays and their amplitudes and energies through complex two-dimensional geological models, for which boundaries between elastic units are defined by a series of digitized X-, Y-coordinate values. Input data for the program include problem identification, control parameters, model coordinates and elastic parameters for the elastic units. The program evaluates the partitioning of ray amplitude and energy at elastic boundaries, and computes the total travel time, total travel distance and other parameters for rays arriving at the earth's surface. Instructions are given for punching program control cards and data cards, and for arranging input card decks. An example of printer output for a simple problem is presented. The program is written in the FORTRAN IV language. The listing of the program is shown in the Appendix, with an example output from a CDC-6600 computer.
The Hubbard Dimer: A Complete DFT Solution to a Many-Body Problem
NASA Astrophysics Data System (ADS)
Smith, Justin; Carrascal, Diego; Ferrer, Jaime; Burke, Kieron
2015-03-01
In this work we explain the relationship between density functional theory and strongly correlated models using the simplest possible example, the two-site asymmetric Hubbard model. We discuss the connection between the lattice and real space, and how this is a simple model for stretched H2. We can solve this elementary example analytically, and with that we can illuminate the underlying logic and aims of DFT. While the many-body solution is analytic, the density functional is given only implicitly. We overcome this difficulty by creating a highly accurate parameterization of the exact functional. We use this parameterization to perform benchmark calculations of the correlation kinetic energy, the adiabatic connection, etc. We also test Hartree-Fock and the Bethe Ansatz Local Density Approximation, and we discuss and illustrate the derivative discontinuity in the exchange-correlation energy and the infamous gap problem in DFT. DGE-1321846, DE-FG02-08ER46496.
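The analytic solubility noted above is easy to verify numerically: the two-electron singlet sector of the asymmetric dimer is a 3x3 matrix (double occupation of site 1, of site 2, and the covalent singlet). The U, t, and asymmetry values below are illustrative.

```python
import numpy as np

def dimer_ground_state(U=4.0, t=1.0, dv=2.0):
    """On-site potentials -dv/2 and +dv/2; returns (E0, site occupations)."""
    s2t = np.sqrt(2.0) * t                       # singlet-doublon hopping element
    H = np.array([[U - dv, 0.0,    -s2t],
                  [0.0,    U + dv, -s2t],
                  [-s2t,  -s2t,     0.0]])
    E, V = np.linalg.eigh(H)
    c = V[:, 0]                                  # ground-state amplitudes
    n1 = 2 * c[0]**2 + c[2]**2                   # site-1 occupation
    n2 = 2 * c[1]**2 + c[2]**2
    return E[0], n1, n2

E0, n1, n2 = dimer_ground_state()
print(f"E0 = {E0:.4f}, n1 = {n1:.3f}, n2 = {n2:.3f} (n1 + n2 = 2)")
```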
Large-eddy simulation of a boundary layer with concave streamwise curvature
NASA Technical Reports Server (NTRS)
Lund, Thomas S.
1994-01-01
Turbulence modeling continues to be one of the most difficult problems in fluid mechanics. Existing prediction methods are well developed for certain classes of simple equilibrium flows, but are still not entirely satisfactory for a large category of complex non-equilibrium flows found in engineering practice. Direct and large-eddy simulation (LES) approaches have long been believed to have great potential for the accurate prediction of difficult turbulent flows, but the associated computational cost has been prohibitive for practical problems. This remains true for direct simulation but is no longer clear for large-eddy simulation. Advances in computer hardware, numerical methods, and subgrid-scale modeling have made it possible to conduct LES for flows of practical interest at Reynolds numbers in the range of laboratory experiments. The objective of this work is to apply LES and the dynamic subgrid-scale model to the flow of a boundary layer over a concave surface.
Empty tracks optimization based on Z-Map model
NASA Astrophysics Data System (ADS)
Liu, Le; Yan, Guangrong; Wang, Zaijun; Zang, Genao
2017-12-01
For parts with many features, machining involves numerous empty (non-cutting) tool tracks; if these tracks are not optimized, machining efficiency suffers seriously. In this paper, the characteristics of the empty tracks are studied in detail. Combining with an existing optimization algorithm, a new track optimization method based on the Z-Map model is proposed. In this method, the tool tracks are divided into unit processing segments, and Z-Map model simulation is used to analyze the order constraints between the unit segments. The empty-stroke optimization problem is thereby transformed into a TSP with sequential constraints, which is then solved with a genetic algorithm. This optimization method can handle both simple and complex structural parts, effectively planning the empty tracks and greatly improving processing efficiency.
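A greedy baseline for the same sequentially constrained ordering problem (the paper uses a genetic algorithm instead): at each step, pick the nearest segment whose predecessors are all finished. Segments, coordinates, and precedences below are made up.

```python
import numpy as np

coords = {"A": (0, 0), "B": (5, 1), "C": (1, 4), "D": (6, 5), "E": (2, 8)}
preds = {"A": set(), "B": set(), "C": {"A"}, "D": {"B", "C"}, "E": {"C"}}

def dist(p, q):
    return float(np.hypot(p[0] - q[0], p[1] - q[1]))

order, done, pos = [], set(), (0, 0)            # start at the machine origin
while len(order) < len(coords):
    # segments whose order constraints are satisfied
    ready = [s for s in coords if s not in done and preds[s] <= done]
    nxt = min(ready, key=lambda s: dist(pos, coords[s]))  # shortest empty track
    order.append(nxt); done.add(nxt); pos = coords[nxt]
print("machining order:", " -> ".join(order))
```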
Seasonally forced disease dynamics explored as switching between attractors
NASA Astrophysics Data System (ADS)
Keeling, Matt J.; Rohani, Pejman; Grenfell, Bryan T.
2001-01-01
Biological phenomena offer a rich diversity of problems that can be understood using mathematical techniques. Three key features common to many biological systems are temporal forcing, stochasticity and nonlinearity. Here, using simple disease models compared to data, we examine how these three factors interact to produce a range of complicated dynamics. The study of disease dynamics has been amongst the most theoretically developed areas of mathematical biology; simple models have been highly successful in explaining the dynamics of a wide variety of diseases. Models of childhood diseases incorporate seasonal variation in contact rates due to the increased mixing during school terms compared to school holidays. This 'binary' nature of the seasonal forcing results in dynamics that can be explained as switching between two nonlinear spiral sinks. Finally, we consider the stability of the attractors to understand the interaction between the deterministic dynamics and demographic and environmental stochasticity. Throughout, attention is focused on the behaviour of measles, whooping cough and rubella.
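A term-time forced SIR sketch of the binary forcing described above: the contact rate switches between two values for school terms and holidays. Parameter values and term dates are illustrative, not fitted to measles data.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta0, b1 = 500.0, 0.25                 # mean contact rate and forcing amplitude (per year)
gamma, mu = 365.0 / 13.0, 1.0 / 50.0    # recovery and birth/death rates (per year)

def term(t):
    """+1 during school terms, -1 during holidays (crude holiday calendar)."""
    d = (t % 1.0) * 365.0
    holiday = d < 7 or 100 < d < 115 or 200 < d < 250 or 300 < d < 308
    return -1.0 if holiday else 1.0

def sir(t, y):
    s, i = y
    beta = beta0 * (1.0 + b1 * term(t))  # binary-switched contact rate
    return [mu - beta * s * i - mu * s, beta * s * i - (gamma + mu) * i]

sol = solve_ivp(sir, (0.0, 20.0), [0.06, 1e-4], max_step=1.0 / 365.0)
print("final (S, I):", sol.y[:, -1])
```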
Universal resilience patterns in cascading load model: More capacity is not always better
NASA Astrophysics Data System (ADS)
Wang, Jianwei; Wang, Xue; Cai, Lin; Ni, Chengzhang; Xie, Wei; Xu, Bo
We study the problem of universal resilience patterns in complex networks against cascading failures. We revise the classical betweenness method and overcome its limitation in quantifying the load in the cascading model. Considering that the load generated by all nodes should equal the load transported by all edges in the whole network, we propose a new method to quantify the load on an edge and construct a simple cascading model. By attacking the edge with the highest load, we show that, if the flow between two nodes is transported along the shortest paths between them, the resilience of some networks against cascading failures actually decreases as the capacity of every edge is enhanced, i.e., more capacity is not always better. We also observe an abnormal fluctuation of the additional load that exceeds the capacity of each edge. Using a simple graph, we analyze the propagation of cascading failures step by step, and give a reasonable explanation of the abnormal fluctuation of the cascading dynamics.
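A sketch of the classic capacity-based cascade (with the classical betweenness load that the paper revises, not the authors' new load measure): each edge gets capacity (1 + α) times its initial load, the most loaded edge is attacked, and overloaded edges are removed until the cascade stops. The network and α values are illustrative.

```python
import networkx as nx

def norm(e):
    return tuple(sorted(e))

def surviving_fraction(G, alpha=0.2):
    load = {norm(e): l for e, l in nx.edge_betweenness_centrality(G).items()}
    cap = {e: (1 + alpha) * l for e, l in load.items()}   # tolerance parameter alpha
    H = G.copy()
    H.remove_edge(*max(load, key=load.get))               # attack the most loaded edge
    while True:
        load = {norm(e): l for e, l in nx.edge_betweenness_centrality(H).items()}
        over = [e for e, l in load.items() if l > cap[e]] # redistributed load overloads edges
        if not over:
            return H.number_of_edges() / len(cap)
        H.remove_edges_from(over)

G = nx.barabasi_albert_graph(200, 2, seed=3)
for a in (0.1, 0.3, 0.6):
    print(f"alpha={a:.1f}: surviving edge fraction {surviving_fraction(G, a):.2f}")
```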
MPPhys—A many-particle simulation package for computational physics education
NASA Astrophysics Data System (ADS)
Müller, Thomas
2014-03-01
In a first course on classical mechanics, elementary physical processes like elastic two-body collisions, the mass-spring model, or the gravitational two-body problem are discussed in detail. The continuation to many-body systems, however, is deferred to graduate courses, although the underlying equations of motion are essentially the same and although there is strong motivation, for high-school students in particular, because of the use of particle systems in computer games. The missing link between the simple and the more complex problem is a basic introduction to solving the equations of motion numerically, which can be illustrated by means of the Euler method. The many-particle physics simulation package MPPhys offers a platform to experiment with simple particle simulations. The aim is to give a basic idea of how to implement many-particle simulations and how simulation and visualization can be combined for interactive visual explorations. Catalogue identifier: AERR_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERR_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 111327 No. of bytes in distributed program, including test data, etc.: 608411 Distribution format: tar.gz Programming language: C++, OpenGL, GLSL, OpenCL. Computer: Linux and Windows platforms with OpenGL support. Operating system: Linux and Windows. RAM: Source code 4.5 MB, complete package 242 MB Classification: 14, 16.9. External routines: OpenGL, OpenCL Nature of problem: Integrate N-body simulations, mass-spring models Solution method: Numerical integration of N-body simulations, 3D rendering via OpenGL. Running time: Problem dependent
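The "missing link" step mentioned above, explicit Euler integration of the gravitational two-body problem, fits in a dozen lines (GM set to 1; a symplectic integrator would conserve energy far better, this is the pedagogical baseline, not MPPhys code):

```python
import numpy as np

dt, n_steps = 1e-3, 20000
r = np.array([1.0, 0.0])          # relative position
v = np.array([0.0, 1.0])          # circular-orbit speed for GM = 1

for _ in range(n_steps):
    a = -r / np.linalg.norm(r)**3  # Newtonian gravity, GM = 1
    r = r + v * dt                 # explicit Euler update
    v = v + a * dt

# The radius drifts away from 1: the characteristic Euler-method energy error.
print("radius after 20 time units:", np.linalg.norm(r))
```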
A Simple Acronym for Doing Calculus: CAL
ERIC Educational Resources Information Center
Hathaway, Richard J.
2008-01-01
An acronym is presented that provides students a potentially useful, unifying view of the major topics covered in an elementary calculus sequence. The acronym (CAL) is based on viewing the calculus procedure for solving a calculus problem P* in three steps: (1) recognizing that the problem cannot be solved using simple (non-calculus) techniques;…
Using Probabilistic Information in Solving Resource Allocation Problems for a Decentralized Firm
1978-09-01
deterministic equivalent form of HIQ's problem (5) by an approach similar to the one used in stochastic programming with simple recourse. See Ziemba [38] or, in... (1964). [38] Ziemba, W.T., "Stochastic Programs with Simple Recourse," Technical Report 72-15, Stanford University, Department of Operations Research
NASA Astrophysics Data System (ADS)
Robbin, J. M.
2007-07-01
The hallmark of a good book of problems is that it allows you to become acquainted with an unfamiliar topic quickly and efficiently. The Quantum Mechanics Solver fits this description admirably. The book contains 27 problems based mainly on recent experimental developments, including neutrino oscillations, tests of Bell's inequality, Bose-Einstein condensates, and laser cooling and trapping of atoms, to name a few. Unlike many collections, in which problems are designed around a particular mathematical method, here each problem is devoted to a small group of phenomena or experiments. Most problems contain experimental data from the literature, and readers are asked to estimate parameters from the data, or compare theory to experiment, or both. Standard techniques (e.g., degenerate perturbation theory, addition of angular momentum, asymptotics of special functions) are introduced only as they are needed. The style is closer to a non-specialist seminar than an undergraduate lecture. The physical models are kept simple; the emphasis is on cultivating conceptual and qualitative understanding (although in many of the problems, the simple models fit the data quite well). Some less familiar theoretical techniques are introduced, e.g. a variational method for lower (not upper) bounds on ground-state energies for many-body systems with two-body interactions, which is then used to derive a surprisingly accurate relation between baryon and meson masses. The exposition is succinct but clear; the solutions can be read as worked examples if you don't want to do the problems yourself. Many problems have additional discussion on limitations and extensions of the theory, or further applications outside physics (e.g., the accuracy of GPS positioning in connection with atomic clocks; proton and ion tumor therapies in connection with the Bethe-Bloch formula for charged particles in solids). The problems use mainly non-relativistic quantum mechanics and are organised into three sections: Elementary Particles, Nuclei and Atoms; Quantum Entanglement and Measurement; and Complex Systems. The coverage is not comprehensive; there is little on scattering theory, for example, and some areas of recent interest, such as topological aspects of quantum mechanics and semiclassics, are not included. The problems are based on examination questions given at the École Polytechnique in the last 15 years. The book is accessible to undergraduates, but working physicists should find it a delight.
Science modelling in pre-calculus: how to make mathematics problems contextually meaningful
NASA Astrophysics Data System (ADS)
Sokolowski, Andrzej; Yalvac, Bugrahan; Loving, Cathleen
2011-04-01
'Use of mathematical representations to model and interpret physical phenomena and solve problems is one of the major teaching objectives in high school math curriculum' (National Council of Teachers of Mathematics (NCTM), Principles and Standards for School Mathematics, NCTM, Reston, VA, 2000). Commonly used pre-calculus textbooks provide a wide range of application problems. However, these problems focus students' attention on evaluating or solving pre-arranged formulas for given values. The role of scientific content is reduced to provide a background for these problems instead of being sources of data gathering for inducing mathematical tools. Students are neither required to construct mathematical models based on the contexts nor are they asked to validate or discuss the limitations of applied formulas. Using these contexts, the instructor may think that he/she is teaching problem solving, where in reality he/she is teaching algorithms of the mathematical operations (G. Kulm (ed.), New directions for mathematics assessment, in Assessing Higher Order Thinking in Mathematics, Erlbaum, Hillsdale, NJ, 1994, pp. 221-240). Without a thorough representation of the physical phenomena and the mathematical modelling processes undertaken, problem solving unintentionally appears as simple algorithmic operations. In this article, we deconstruct the representations of mathematics problems from selected pre-calculus textbooks and explicate their limitations. We argue that the structure and content of those problems limits students' coherent understanding of mathematical modelling, and this could result in weak student problem-solving skills. Simultaneously, we explore the ways to enhance representations of those mathematical problems, which we have characterized as lacking a meaningful physical context and limiting coherent student understanding. In light of our discussion, we recommend an alternative to strengthen the process of teaching mathematical modelling - utilization of computer-based science simulations. Although there are several exceptional computer-based science simulations designed for mathematics classes (see, e.g. Kinetic Book (http://www.kineticbooks.com/) or Gizmos (http://www.explorelearning.com/)), we concentrate mainly on the PhET Interactive Simulations developed at the University of Colorado at Boulder (http://phet.colorado.edu/) in generating our argument that computer simulations more accurately represent the contextual characteristics of scientific phenomena than their textual descriptions.
Multiscale Modeling of Mesoscale and Interfacial Phenomena
NASA Astrophysics Data System (ADS)
Petsev, Nikolai Dimitrov
With rapidly emerging technologies that feature interfaces modified at the nanoscale, traditional macroscopic models are pushed to their limits to explain phenomena where molecular processes can play a key role. Often, such problems appear to defy explanation when treated with coarse-grained continuum models alone, yet remain prohibitively expensive from a molecular simulation perspective. A prominent example is surface nanobubbles: nanoscopic gaseous domains typically found on hydrophobic surfaces that have puzzled researchers for over two decades due to their unusually long lifetimes. We show how an entirely macroscopic, non-equilibrium model explains many of their anomalous properties, including their stability and abnormally small gas-side contact angles. From this purely transport perspective, we investigate how factors such as temperature and saturation affect nanobubbles, providing numerous experimentally testable predictions. However, recent work also emphasizes the relevance of molecular-scale phenomena that cannot be described in terms of bulk phases or pristine interfaces. This is true for nanobubbles as well, whose nanoscale heights may require molecular detail to capture the relevant physics, in particular near the bubble three-phase contact line. Therefore, there is a clear need for general ways to link molecular granularity and behavior with large-scale continuum models in the treatment of many interfacial problems. In light of this, we have developed a general set of simulation strategies that couple mesoscale particle-based continuum models to molecular regions simulated through conventional molecular dynamics (MD). In addition, we derived a transport model for binary mixtures that opens the possibility for a wide range of applications in biological and drug delivery problems, and is readily reconciled with our hybrid MD-continuum techniques. Approaches that couple multiple length scales for fluid mixtures are largely absent in the literature, and we provide a novel and general framework for multiscale modeling of systems featuring one or more dissolved species. This makes it possible to retain molecular detail for parts of the problem that require it while using a simple, continuum description for parts where high detail is unnecessary, reducing the number of degrees of freedom (i.e. number of particles) dramatically. This opens the possibility for modeling ion transport in biological processes and biomolecule assembly in ionic solution, as well as electrokinetic phenomena at interfaces such as corrosion. The number of particles in the system is further reduced through an integrated boundary approach, which we apply to colloidal suspensions. In this thesis, we describe this general framework for multiscale modeling single- and multicomponent systems, provide several simple equilibrium and non-equilibrium case studies, and discuss future applications.
NASA Astrophysics Data System (ADS)
Bhattacharjya, Rajib Kumar
2018-05-01
The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using an inverse optimization technique. This is a two-stage optimization problem: the infiltration parameters are obtained in the first stage, and the unit hydrograph ordinates are estimated in the second. In order to combine this two-stage method into a single-stage one, a modified penalty parameter approach is proposed for converting the constrained optimization problem to an unconstrained one. The proposed approach is designed in such a way that the model initially obtains the infiltration parameters and then searches for the optimal unit hydrograph ordinates. The optimization model is solved using genetic algorithms. A reduction factor is used in the penalty parameter approach so that the obtained optimal infiltration parameters are not destroyed during the subsequent generations of the genetic algorithm required for searching the optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated using two example problems. The evaluation shows that the model is superior, simple in concept, and has potential for field application.
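A sketch of the single-stage idea: infiltration (a simple phi-index stands in for the paper's infiltration model) and unit-hydrograph ordinates are sought together, with constraint violations added as a penalty. scipy's differential_evolution stands in for the genetic algorithm, and all data are synthetic assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

rain = np.array([2.0, 6.0, 4.0, 1.0])                    # rainfall per interval (cm)
true_u = np.array([0.1, 0.4, 0.3, 0.15, 0.05])           # UH ordinates (sum = 1)
obs = np.convolve(np.maximum(rain - 1.0, 0.0), true_u)   # synthetic runoff, phi = 1

def objective(params, w=100.0):
    phi, u = params[0], np.asarray(params[1:])
    pred = np.convolve(np.maximum(rain - phi, 0.0), u)   # UH convolution model
    sse = np.sum((obs - pred) ** 2)
    # penalty terms: negative ordinates and unit-volume violation
    penalty = w * (np.sum(np.minimum(u, 0.0) ** 2) + (u.sum() - 1.0) ** 2)
    return sse + penalty

bounds = [(0.0, 3.0)] + [(0.0, 1.0)] * 5
res = differential_evolution(objective, bounds, seed=4, tol=1e-10)
print("phi =", round(res.x[0], 2), " UH =", np.round(res.x[1:], 2))
```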
Low frequency full waveform seismic inversion within a tree based Bayesian framework
NASA Astrophysics Data System (ADS)
Ray, Anandaroop; Kaplan, Sam; Washbourne, John; Albertin, Uwe
2018-01-01
Limited illumination, insufficient offset, noisy data and poor starting models can pose challenges for seismic full waveform inversion. We present an application of a tree based Bayesian inversion scheme which attempts to mitigate these problems by accounting for data uncertainty while using a mildly informative prior about subsurface structure. We sample the resulting posterior model distribution of compressional velocity using a trans-dimensional (trans-D) or Reversible Jump Markov chain Monte Carlo method in the wavelet transform domain of velocity. This allows us to attain rapid convergence to a stationary distribution of posterior models while requiring a limited number of wavelet coefficients to define a sampled model. Two synthetic, low frequency, noisy data examples are provided. The first example is a simple reflection + transmission inverse problem, and the second uses a scaled version of the Marmousi velocity model, dominated by reflections. Both examples are initially started from a semi-infinite half-space with incorrect background velocity. We find that the trans-D tree based approach together with parallel tempering for navigating rugged likelihood (i.e. misfit) topography provides a promising, easily generalized method for solving large-scale geophysical inverse problems which are difficult to optimize, but where the true model contains a hierarchy of features at multiple scales.
Rouleau, Pascal; Guertin, Pierre A
2013-01-01
Most animal models of contused, compressed or transected spinal cord injury (SCI) require a laminectomy to be performed. However, despite the advantages and disadvantages associated with each of these models, the laminectomy itself is generally associated with significant problems including longer surgery and anaesthesia (and related post-operative complications), neuropathic pain, spinal instabilities, deformities, lordosis, and biomechanical problems. This review provides an overview of findings, obtained mainly from our laboratory, associated with the development and characterization of a novel murine model of spinal cord transection that does not require a laminectomy. A number of studies successfully conducted with this model provided strong evidence that it constitutes a simple, reliable and reproducible transection model of complete paraplegia which is particularly useful for studies on large cohorts of wild-type or mutant animals - e.g., drug-screening studies in vivo or studies aimed at characterizing neuronal and non-neuronal adaptive changes post-trauma. It is also highly suitable for studies aimed at identifying and developing new pharmacological treatments against aging-associated comorbid problems and specific SCI-related dysfunctions (e.g., stereotyped motor behaviours such as locomotion, sexual response, defecation and micturition) largely related to 'command centers' located in lumbosacral areas of the spinal cord.
Fractional cable model for signal conduction in spiny neuronal dendrites
NASA Astrophysics Data System (ADS)
Vitali, Silvia; Mainardi, Francesco
2017-06-01
The cable model is widely used in several fields of science to describe the propagation of signals. A relevant medical and biological example is the anomalous subdiffusion in spiny neuronal dendrites observed in several studies of the last decade. Anomalous subdiffusion can be modelled in several ways by introducing a fractional component into the classical cable model. The Cauchy problem associated with these kinds of models has been investigated by many authors, but to our knowledge an explicit solution for the signalling problem has not yet been published. Here we propose how this solution can be derived by applying the generalized convolution theorem (known as the Efros theorem) for Laplace transforms. The fractional cable model considered in this paper is defined by replacing the first-order time derivative with a fractional derivative of order α ∈ (0, 1) of Caputo type. The signalling problem is solved for any input function applied to the accessible end of a semi-infinite cable that satisfies the requirements of the Efros theorem. The solutions corresponding to the simple cases of impulsive and step inputs are explicitly calculated in integral form containing Wright functions. Thanks to the variability of the parameter α, the corresponding solutions are expected to adapt to the qualitative behaviour of the membrane potential observed in experiments better than in the standard case α = 1.
Secular Orbit Evolution in Systems with a Strong External Perturber—A Simple and Accurate Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrade-Ines, Eduardo; Eggl, Siegfried, E-mail: eandrade.ines@gmail.com, E-mail: siegfried.eggl@jpl.nasa.gov
We present a semi-analytical correction to the seminal solution for the secular motion of a planet's orbit under the gravitational influence of an external perturber derived by Heppenheimer. A comparison between analytical predictions and numerical simulations allows us to determine corrective factors for the secular frequency and forced eccentricity in the coplanar restricted three-body problem. The correction is given in the form of a polynomial function of the system's parameters that can be applied to first-order forced eccentricity and secular frequency estimates. The resulting secular equations are simple, straightforward to use, and improve the fidelity of Heppenheimer's solution well beyond higher-order models. The quality and convergence of the corrected secular equations are tested for a wide range of parameters, and the limits of applicability are given.
A remote sensing based vegetation classification logic for global land cover analysis
Running, Steven W.; Loveland, Thomas R.; Pierce, Lars L.; Nemani, R.R.; Hunt, E. Raymond
1995-01-01
This article proposes a simple new logic for classifying global vegetation. The critical features of this classification are that 1) it is based on simple, observable, unambiguous characteristics of vegetation structure that are important to ecosystem biogeochemistry and can be measured in the field for validation, 2) the structural characteristics are remotely sensible so that repeatable and efficient global reclassifications of existing vegetation will be possible, and 3) the defined vegetation classes directly translate into the biophysical parameters of interest by global climate and biogeochemical models. A first test of this logic for the continental United States is presented based on an existing 1 km AVHRR normalized difference vegetation index database. Procedures for solving critical remote sensing problems needed to implement the classification are discussed. Also, some inferences from this classification to advanced vegetation biophysical variables such as specific leaf area and photosynthetic capacity useful to global biogeochemical modeling are suggested.
From the Nano- to the Macroscale - Bridging Scales for the Moving Contact Line Problem
NASA Astrophysics Data System (ADS)
Nold, Andreas; Sibley, David; Goddard, Benjamin; Kalliadasis, Serafim; Complex Multiscale Systems Team
2016-11-01
The moving contact line problem remains an unsolved fundamental problem in fluid mechanics. At the heart of the problem is its multiscale nature: a nanoscale region close to the solid boundary where the continuum hypothesis breaks down, must be resolved before effective macroscale parameters such as contact line friction and slip can be obtained. To capture nanoscale properties very close to the contact line and to establish a link to the macroscale behaviour, we employ classical density-functional theory (DFT), in combination with extended Navier-Stokes-like equations. Using simple models for viscosity and slip at the wall, we compare our computations with the Molecular Kinetic Theory, by extracting the contact line friction, depending on the imposed temperature of the fluid. A key fluid property captured by DFT is the fluid layering at the wall-fluid interface, which has a large effect on the shearing properties of a fluid. To capture this crucial property, we propose an anisotropic model for the viscosity, which also allows us to scrutinize the effect of fluid layering on contact line friction.
A novel neural network for variational inequalities with linear and nonlinear constraints.
Gao, Xing-Bao; Liao, Li-Zhi; Qi, Liqun
2005-11-01
Variational inequality is a uniform approach for many important optimization and equilibrium problems. Based on the sufficient and necessary conditions of the solution, this paper presents a novel neural network model for solving variational inequalities with linear and nonlinear constraints. Three sufficient conditions are provided to ensure that the proposed network with an asymmetric mapping is stable in the sense of Lyapunov and converges to an exact solution of the original problem. Meanwhile, the proposed network with a gradient mapping is also proved to be stable in the sense of Lyapunov and to have a finite-time convergence under some mild condition by using a new energy function. Compared with the existing neural networks, the new model can be applied to solve some nonmonotone problems, has no adjustable parameter, and has lower complexity. Thus, the structure of the proposed network is very simple. Since the proposed network can be used to solve a broad class of optimization problems, it has great application potential. The validity and transient behavior of the proposed neural network are demonstrated by several numerical examples.
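For orientation, the textbook projection dynamics for a variational inequality VI(F, Ω), dx/dt = P_Ω(x - F(x)) - x, can be simulated directly; this sketch uses that classical model with box constraints, not necessarily the exact network proposed in the paper. The mapping and constraint set are illustrative.

```python
import numpy as np

A = np.array([[3.0, 1.0], [-1.0, 2.0]])      # asymmetric (nonsymmetric) mapping
b = np.array([-4.0, 1.0])
F = lambda x: A @ x + b
lo_b, hi_b = np.zeros(2), 2.0 * np.ones(2)   # Omega = [0, 2]^2

proj = lambda x: np.clip(x, lo_b, hi_b)      # projection onto the box
x, dt = np.array([2.0, 0.0]), 1e-2
for _ in range(5000):                        # explicit Euler simulation of the network
    x = x + dt * (proj(x - F(x)) - x)

print("equilibrium x* =", x.round(4),
      " VI residual:", np.linalg.norm(x - proj(x - F(x))))
```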
A deterministic Lagrangian particle separation-based method for advective-diffusion problems
NASA Astrophysics Data System (ADS)
Wong, Ken T. M.; Lee, Joseph H. W.; Choi, K. W.
2008-12-01
A simple and robust Lagrangian particle scheme is proposed to solve the advective-diffusion transport problem. The scheme is based on relative diffusion concepts and simulates diffusion by regulating particle separation. This new approach generates a deterministic result and requires far fewer particles than the random walk method. For the advection process, particles are simply moved according to their velocity. The general scheme is mass conservative and is free from numerical diffusion. It can be applied to a wide variety of advective-diffusion problems, but is particularly suited to ecological and water quality modelling when the definition of particle attributes (e.g., cell status for modelling algal blooms or red tides) is a necessity. The basic derivation, numerical stability and practical implementation of the NEighborhood Separation Technique (NEST) are presented. The accuracy of the method is demonstrated through a series of test cases which embrace realistic features of coastal environmental transport problems. Two field application examples, on the tidal flushing of a fish farm and the dynamics of vertically migrating marine algae, are also presented.
NASA Astrophysics Data System (ADS)
Tanaka, H. L.
2003-06-01
In this study, a numerical simulation of the Arctic Oscillation (AO) is conducted using a simple barotropic model that considers the barotropic-baroclinic interactions as the external forcing. The model is referred to as a barotropic S model since the external forcing is obtained statistically from the long-term historical data, solving an inverse problem. The barotropic S model has been integrated for 51 years under a perpetual January condition, and the dominant empirical orthogonal function (EOF) modes in the model have been analyzed. The results are compared with the EOF analysis of the barotropic component of the real atmosphere based on the daily NCEP-NCAR reanalysis for the 50 years from 1950 to 1999. According to the results, the first EOF of the model atmosphere appears as the AO, similar to the observations. The annular structure of the AO and the two centers of action over the Pacific and Atlantic are simulated nicely by the barotropic S model. The atmospheric low-frequency variabilities have therefore been captured satisfactorily even by the simple barotropic model. The EOF analysis is further applied to the external forcing of the barotropic S model. The structure of the dominant forcing shows the characteristics of synoptic-scale disturbances of zonal wavenumber 6 along the Pacific storm track. The forcing is induced by the barotropic-baroclinic interactions associated with baroclinic instability. The result suggests that the AO can be understood as the natural variability of the barotropic component of the atmosphere induced by the inherent barotropic dynamics, forced by the barotropic-baroclinic interactions. The fluctuating upscale energy cascade from planetary waves and synoptic disturbances to the zonal motion plays the key role in the excitation of the AO.
Xu, Yun; Muhamadali, Howbeer; Sayqal, Ali; Dixon, Neil; Goodacre, Royston
2016-10-28
Partial least squares (PLS) is one of the most commonly used supervised modelling approaches for analysing multivariate metabolomics data. PLS is typically employed as either a regression model (PLS-R) or a classification model (PLS-DA). However, in metabolomics studies it is common to investigate multiple, potentially interacting, factors simultaneously following a specific experimental design. Such data often cannot be considered as a "pure" regression or a classification problem. Nevertheless, these data have often still been treated as a regression or classification problem and this could lead to ambiguous results. In this study, we investigated the feasibility of designing a hybrid target matrix Y that better reflects the experimental design than simple regression or binary class membership coding commonly used in PLS modelling. The new design of Y coding was based on the same principle used by structural modelling in machine learning techniques. Two real metabolomics datasets were used as examples to illustrate how the new Y coding can improve the interpretability of the PLS model compared to classic regression/classification coding.
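A sketch of the idea of coding the experimental design directly into the PLS target matrix Y, rather than a single regression vector or one binary class column. The factors (strain, dose), the data, and the exact coding are synthetic placeholders, not the paper's datasets.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
n_per, strains, doses = 6, [0, 1, 2], [0.0, 0.5, 1.0]   # 3x3 factorial design
X, Y = [], []
for s in strains:
    for d in doses:
        spectra = rng.normal(size=(n_per, 50))
        spectra[:, s] += 2.0             # strain shifts one 'metabolite'
        spectra[:, 10] += 3.0 * d        # dose scales another
        X.append(spectra)
        y = np.zeros(4)
        y[s] = 1.0                       # dummy-coded categorical factor
        y[3] = d                         # numeric factor in the same Y
        Y.append(np.tile(y, (n_per, 1)))

pls = PLSRegression(n_components=4).fit(np.vstack(X), np.vstack(Y))
print("R^2 on the hybrid Y:", round(pls.score(np.vstack(X), np.vstack(Y)), 3))
```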
Arheiam, Arheiam; Brown, Stephen L; Higham, Susan M; Albadri, Sondos; Harris, Rebecca V
2016-12-01
Diet diaries are recommended for dentists to monitor children's sugar consumption. Diaries provide multifaceted dietary information, but patients respond better to simpler advice. We explore how dentists integrate information from diet diaries to deliver usable advice to patients. As part of a questionnaire study of general dental practitioners (GDPs) in Northwest England, we asked dentists to specify the advice they would give a hypothetical patient based upon a diet diary case vignette. A sequential mixed-methods approach was used for data analysis: an initial inductive content analysis (ICA) to develop a coding system capturing the complexity of dietary assessment and delivered advice. Using these codes, a quantitative analysis was conducted to examine correspondences between identified dietary problems and advice given. From these correspondences, we inferred how dentists reduced problems to give simple advice. A total of 229 dentists' responses were analysed. ICA on 40 questionnaires identified two distinct approaches to developing diet advice: a summative approach (summary of issues into an all-encompassing message) and a selective approach (selection of a main message). In the quantitative analysis of all responses, raw frequencies indicated that dentists saw more problems than they advised on and provided highly specific advice on a restricted number of problems (e.g. not eating sugars before bedtime, 50.7%, or harmful items, 42.4%, rather than simply reducing the amount of sugar, 9.2%). Binary logistic regression models indicated that dentists provided specific advice tailored to the key problems they identified. Dentists provided specific recommendations to address what they felt were key problems, whilst not intervening to address other problems that they may have felt less pressing. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Use of paired simple and complex models to reduce predictive bias and quantify uncertainty
NASA Astrophysics Data System (ADS)
Doherty, John; Christensen, Steen
2011-12-01
Modern environmental management and decision-making are based on the use of increasingly complex numerical models. Such models have the advantage of allowing representation of complex processes and heterogeneous system property distributions inasmuch as these are understood at any particular study site. The latter are often represented stochastically, this reflecting knowledge of the character of system heterogeneity at the same time as it reflects a lack of knowledge of its spatial details. Unfortunately, however, complex models are often difficult to calibrate because of their long run times and sometimes questionable numerical stability. Analysis of predictive uncertainty is also a difficult undertaking when using models such as these. Such analysis must reflect a lack of knowledge of spatial hydraulic property details. At the same time, it must be subject to constraints on the spatial variability of these details born of the necessity for model outputs to replicate observations of historical system behavior. In contrast, the rapid run times and general numerical reliability of simple models often facilitate good calibration and ready implementation of sophisticated methods of calibration-constrained uncertainty analysis. Unfortunately, however, many system and process details on which uncertainty may depend are, by design, omitted from simple models. This can lead to underestimation of the uncertainty associated with many predictions of management interest. The present paper proposes a methodology that attempts to overcome the problems associated with complex models on the one hand and simple models on the other hand, while allowing access to the benefits each of them offers. It provides a theoretical analysis of the simplification process from a subspace point of view, this yielding insights into the costs of model simplification, and into how some of these costs may be reduced. It then describes a methodology for paired model usage through which predictive bias of a simplified model can be detected and corrected, and postcalibration predictive uncertainty can be quantified. The methodology is demonstrated using a synthetic example based on groundwater modeling environments commonly encountered in northern Europe and North America.
Reinforcement Learning in a Nonstationary Environment: The El Farol Problem
NASA Technical Reports Server (NTRS)
Bell, Ann Maria
1999-01-01
This paper examines the performance of simple learning rules in a complex adaptive system based on a coordination problem modeled on the El Farol problem. The key features of the El Farol problem are that it typically involves a medium number of agents and that agents' pay-off functions have a discontinuous response to increased congestion. First we consider a single adaptive agent facing a stationary environment. We demonstrate that the simple learning rules proposed by Roth and Erev can be extremely sensitive to small changes in the initial conditions and that events early in a simulation can affect the performance of the rule over a relatively long time horizon. In contrast, a reinforcement learning rule based on standard practice in the computer science literature converges rapidly and robustly. The situation is reversed when multiple adaptive agents interact: the RE algorithms often converge rapidly to a stable average aggregate attendance despite the slow and erratic behavior of individual learners, while the CS-based learners frequently over-attend in the early and intermediate terms. The symmetric mixed-strategy equilibrium is unstable: all three learning rules ultimately tend towards pure strategies or stabilize in the medium term at non-equilibrium probabilities of attendance. The brittleness of the algorithms in different contexts emphasizes the importance of thorough and thoughtful examination of simulation-based results.
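A Roth-Erev style reinforcement sketch for the El Farol setting: each agent keeps propensities for {stay, go}, chooses proportionally to them, and reinforces the chosen action with its payoff. Payoff values and the capacity are illustrative, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(6)
n_agents, capacity, rounds = 100, 60, 500
q = np.ones((n_agents, 2))                    # propensities: [stay, go]

attendance = []
for _ in range(rounds):
    p_go = q[:, 1] / q.sum(axis=1)            # choice probability from propensities
    go = rng.random(n_agents) < p_go
    n_go = int(go.sum())
    attendance.append(n_go)
    payoff_go = 1.0 if n_go <= capacity else 0.1   # discontinuous congestion payoff
    q[go, 1] += payoff_go                     # reinforce the chosen action
    q[~go, 0] += 0.5                          # mild payoff for staying home
print("mean attendance, last 100 rounds:", np.mean(attendance[-100:]))
```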
Concurrent airline fleet allocation and aircraft design with profit modeling for multiple airlines
NASA Astrophysics Data System (ADS)
Govindaraju, Parithi
A "System of Systems" (SoS) approach is particularly beneficial in analyzing complex large scale systems comprised of numerous independent systems -- each capable of independent operations in their own right -- that when brought in conjunction offer capabilities and performance beyond the constituents of the individual systems. The variable resource allocation problem is a type of SoS problem, which includes the allocation of "yet-to-be-designed" systems in addition to existing resources and systems. The methodology presented here expands upon earlier work that demonstrated a decomposition approach that sought to simultaneously design a new aircraft and allocate this new aircraft along with existing aircraft in an effort to meet passenger demand at minimum fleet level operating cost for a single airline. The result of this describes important characteristics of the new aircraft. The ticket price model developed and implemented here enables analysis of the system using profit maximization studies instead of cost minimization. A multiobjective problem formulation has been implemented to determine characteristics of a new aircraft that maximizes the profit of multiple airlines to recognize the fact that aircraft manufacturers sell their aircraft to multiple customers and seldom design aircraft customized to a single airline's operations. The route network characteristics of two simple airlines serve as the example problem for the initial studies. The resulting problem formulation is a mixed-integer nonlinear programming problem, which is typically difficult to solve. A sequential decomposition strategy is applied as a solution methodology by segregating the allocation (integer programming) and aircraft design (non-linear programming) subspaces. After solving a simple problem considering two airlines, the decomposition approach is then applied to two larger airline route networks representing actual airline operations in the year 2005. The decomposition strategy serves as a promising technique for future detailed analyses. Results from the profit maximization studies favor a smaller aircraft in terms of passenger capacity due to its higher yield generation capability on shorter routes while results from the cost minimization studies favor a larger aircraft due to its lower direct operating cost per seat mile.
Levels of detail analysis of microwave scattering from human head models for brain stroke detection
2017-01-01
In this paper, we have presented a microwave scattering analysis from multiple human head models. This study incorporates different levels of detail in the human head models and their effect on the microwave scattering phenomenon. Two levels of detail are taken into account: (i) a simplified ellipse-shaped head model and (ii) an anatomically realistic head model, both implemented using 2-D geometry. In addition, the heterogenic and frequency-dispersive behavior of the brain tissues has also been incorporated in our head models. It is identified during this study that the microwave scattering phenomenon changes significantly once the complexity of the head model is increased by incorporating more details using a magnetic resonance imaging database. It is also found that the microwave scattering results match in both types of head model (i.e., geometrically simple and anatomically realistic) once the measurements are made in the structurally simplified regions. However, the results diverge considerably in the complex areas of the brain due to the arbitrarily shaped interfaces of tissue layers in the anatomically realistic head model. After incorporating the various levels of detail, the solution of the subject microwave scattering problem and the measurement of transmitted and backscattered signals were obtained using the finite element method. A mesh convergence analysis was also performed to achieve error-free results with a minimum number of mesh elements and fewer degrees of freedom, in a fast computational time. The results were promising and the E-field values converged for both the simple and the complex geometrical models. However, the E-field difference between the two types of head model at the same reference point differed considerably in magnitude: at a complex location, a difference of 0.04236 V/m was measured, compared to 0.00197 V/m at a simple location. This study also provides a comparison between direct and iterative solvers for the subject microwave scattering problem, with respect to computational time and memory requirements. It is seen from this study that microwave imaging may effectively be utilized for the detection, localization and differentiation of different types of brain stroke. The simulation results verified that microwave imaging can be efficiently exploited to study the significant contrast between electric field values of normal and abnormal brain tissues for the investigation of brain anomalies. In the end, a specific absorption rate analysis was carried out to compare the effects of microwave signals on the different types of head model using a factor of safety for brain tissues. After careful study of various inversion methods in practice for microwave head imaging, it is also suggested that the contrast source inversion method may be more suitable and computationally efficient for such problems. PMID:29177115
Excess Claims and Data Trimming in the Context of Credibility Rating Procedures
1981-11-01
Trimming in the Context of Credibility Rating Procedures, by Hans Bühlmann, Alois Gisler, William S. Jewell. 1. Motivation: In Ratemaking and in Experience ... work on the ETH computer. 2. The Basic Model: Throughout the paper we work with the most simple model in the credibility ... additional structure are summed up by stating that the density $f_\theta(x)$ has the following form: $f_\theta(x) = (1-r)\,p_0(x/\theta) + r\,p_\theta(x)$. 3. The Basic Problem: As
GFSSP Training Course Lectures
NASA Technical Reports Server (NTRS)
Majumdar, Alok K.
2008-01-01
GFSSP has been extended to model conjugate heat transfer. Fluid-Solid Network Elements include: a) Fluid Nodes and Flow Branches; b) Solid Nodes and Ambient Nodes; c) Conductors connecting Fluid-Solid, Solid-Solid and Solid-Ambient Nodes. Heat Conduction Equations are solved simultaneously with Fluid Conservation Equations for Mass, Momentum and Energy and the Equation of State. The extended code was verified by comparison with the analytical solution of a simple conduction-convection problem. The code was applied to model: a) Pressurization of a Cryogenic Tank; b) Freezing and Thawing of Metal; c) Chilldown of a Cryogenic Transfer Line; d) Boil-off from a Cryogenic Tank.
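A minimal standalone sketch of the kind of conduction-convection verification case mentioned (this is not GFSSP itself; the geometry and property values below are assumed): a fluid stream exchanging heat with a wall at fixed temperature, marched node by node and compared with the analytical exponential solution.

```python
import numpy as np

mdot, cp = 0.05, 4186.0            # mass flow (kg/s), specific heat (J/(kg K)) -- assumed
h, P, L, N = 500.0, 0.1, 2.0, 50   # film coeff. (W/(m^2 K)), wetted perimeter (m), length (m), nodes
T_in, T_wall = 350.0, 300.0        # inlet and wall temperatures (K)

dx = L / N
T = np.empty(N + 1)
T[0] = T_in
for i in range(N):
    # Energy balance on one fluid node: mdot*cp*dT = h*P*dx*(T_wall - T)
    T[i + 1] = T[i] + h * P * dx * (T_wall - T[i]) / (mdot * cp)

x = np.linspace(0.0, L, N + 1)
T_exact = T_wall + (T_in - T_wall) * np.exp(-h * P * x / (mdot * cp))
print("max abs error [K]:", np.abs(T - T_exact).max())
```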
Multi-hole pressure probes for wind tunnel experiments and air data systems
NASA Astrophysics Data System (ADS)
Shevchenko, A. M.; Shmakov, A. S.
2017-10-01
The problems of developing a multi-hole pressure system to measure flow angularity, Mach number and dynamic head for wind tunnel experiments or air data systems are discussed. A simple analytical model with separation of variables is derived for the multi-hole spherical pressure probe. The proposed model is uniform for small subsonic and supersonic speeds. An error analysis was performed; the error functions obtained allow estimation of the influence of the Mach number, the pitch angle and the location of the pressure ports on the uncertainty in the determined flow parameters.
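The abstract does not reproduce its analytical model; for orientation, the classical incompressible potential-flow pressure distribution on a sphere, on which such probe calibrations are often built, is

```latex
C_p(\theta) = 1 - \tfrac{9}{4}\sin^{2}\theta,
\qquad
p_i = p_\infty + q_\infty\, C_p(\theta_i),
```

so each port pressure p_i ties the flow angles and the dynamic head q_∞ to the known port location θ_i; compressibility then enters through Mach-number-dependent corrections.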
Using NASTRAN to model missile inertia loads
NASA Technical Reports Server (NTRS)
Marvin, R.; Porter, C.
1985-01-01
An important use of NASTRAN is in the area of structural loads analysis of weapon systems carried aboard aircraft. The program is used to predict bending moments and shears in missile bodies subjected to aircraft-induced accelerations. The missile, launcher and aircraft wing are idealized using rod and beam type elements for solution economy. Using the inertia relief capability of NASTRAN, the model is subjected to various acceleration combinations. It proved difficult to model the launcher sway braces and hooks, which transmit compression-only and tension-only forces, respectively. A simple, iterative process was developed to overcome this modeling difficulty. A proposed code modification would help model compression-only or tension-only contact problems.
Numerical Solution of the Extended Nernst-Planck Model.
Samson; Marchand
1999-07-01
The main features of a numerical model aimed at predicting the drift of ions in an electrolytic solution under a chemical potential gradient are presented. The mechanisms of ionic diffusion are described by solving the extended Nernst-Planck system of equations. The electrical coupling between the various ionic fluxes is accounted for by the Poisson equation. Furthermore, chemical activity effects are considered in the model. The whole system of nonlinear equations is solved using the finite-element method. Results yielded by the model for simple test cases are compared to those obtained using an analytical solution. Applications of the model to more complex problems are also presented and discussed. Copyright 1999 Academic Press.
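The governing system the abstract refers to is, in its standard form (our notation),

```latex
\mathbf{J}_i = -D_i\!\left(\nabla c_i
  + \frac{z_i F}{RT}\, c_i \,\nabla \psi
  + c_i \,\nabla \ln \gamma_i\right),
\qquad
\frac{\partial c_i}{\partial t} = -\nabla\!\cdot\!\mathbf{J}_i,
\qquad
\nabla^2 \psi = -\frac{F}{\varepsilon}\sum_i z_i c_i,
```

where the activity-coefficient term ∇ln γ_i is the "extended" part of the Nernst-Planck flux and the Poisson equation supplies the electrical coupling between the ionic fluxes.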
Unsteady hovering wake parameters identified from dynamic model tests, part 1
NASA Technical Reports Server (NTRS)
Hohenemser, K. H.; Crews, S. T.
1977-01-01
The development of a 4-bladed model rotor is reported that can be excited with a simple eccentric mechanism in progressing and regressing modes with either harmonic or transient inputs. Parameter identification methods were applied to the problem of extracting parameters for linear perturbation models, including rotor dynamic inflow effects, from the measured blade flapping responses to transient pitch stirring excitations. These perturbation models were then used to predict blade flapping response to other pitch stirring transient inputs, and rotor wake and blade flapping responses to harmonic inputs. The viability and utility of using parameter identification methods for extracting the perturbation models from transients are demonstrated through these combined analytical and experimental studies.
Review of Thawing Time Prediction Models Depending on Process Conditions and Product Characteristics
Kluza, Franciszek; Spiess, Walter E. L.; Kozłowicz, Katarzyna
2016-01-01
Determining the thawing times of frozen foods is a challenging problem, as the thermophysical properties of the product change during thawing. A number of calculation models and solutions have been developed, ranging from relatively simple analytical equations based on a number of assumptions to a group of empirical approaches that sometimes require complex calculations. In this paper analytical, empirical and graphical models are presented and critically reviewed. The conditions of solution, limitations and possible applications of the models are discussed. The graphical and semi-graphical models are derived from numerical methods. Using the numerical methods is not always possible, as running the calculations takes time and the specialized software and equipment can be expensive. For these reasons, the application of analytical-empirical models is more useful for engineering. It is demonstrated that there is no simple, accurate and feasible analytical method for thawing time prediction. Consequently, simplified methods are needed for thawing time estimation of agricultural and food products. The review reveals the need for further improvement of the existing solutions, or the development of new ones, that will enable accurate determination of thawing time within a wide range of practical heat transfer conditions during processing. PMID:27904387
Formal language constrained path problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barrett, C.; Jacob, R.; Marathe, M.
1997-07-08
In many path finding problems arising in practice, certain patterns of edge/vertex labels in the labeled graph being traversed are allowed/preferred, while others are disallowed. Motivated by such applications as intermodal transportation planning, the authors investigate the complexity of finding feasible paths in a labeled network, where the mode choice for each traveler is specified by a formal language. The main contributions of this paper include the following: (1) the authors show that the problem of finding a shortest path between a source and destination for a traveler whose mode choice is specified as a context free language is solvable efficiently in polynomial time; when the mode choice is specified as a regular language they provide algorithms with improved space and time bounds; (2) in contrast, they show that the problem of finding simple paths between a source and a given destination is NP-hard, even when restricted to very simple regular expressions and/or very simple graphs; (3) for the class of treewidth bounded graphs, they show that (i) the problem of finding a regular language constrained simple path between source and a destination is solvable in polynomial time and (ii) the extension to finding context free language constrained simple paths is NP-complete. Several extensions of these results are presented in the context of finding shortest paths with additional constraints. These results significantly extend the results in [MW95]. As a corollary of the results, they obtain a polynomial time algorithm for the BEST k-SIMILAR PATH problem studied in [SJB97]. The previous best algorithm was given by [SJB97] and takes exponential time in the worst case.
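For the regular-language case, the standard construction is a shortest-path search on the product of the graph and the automaton; here is a minimal sketch (our own toy example, not the authors' code):

```python
import heapq

def constrained_shortest_path(graph, dfa, source, target):
    """Shortest path whose sequence of edge labels is accepted by a DFA.

    graph: {u: [(v, label, weight), ...]}
    dfa:   (start_state, accepting_states, delta) with delta[(state, label)] -> state
    Runs Dijkstra on the product of the graph and the automaton.
    """
    start, accept, delta = dfa
    dist = {(source, start): 0.0}
    pq = [(0.0, source, start)]
    while pq:
        d, u, q = heapq.heappop(pq)
        if d > dist.get((u, q), float("inf")):
            continue
        if u == target and q in accept:
            return d
        for v, label, w in graph.get(u, []):
            q2 = delta.get((q, label))
            if q2 is None:
                continue  # transition not allowed by the language
            nd = d + w
            if nd < dist.get((v, q2), float("inf")):
                dist[(v, q2)] = nd
                heapq.heappush(pq, (nd, v, q2))
    return None  # no feasible path

# Toy mode-choice language: trips must start with a "car" leg (car (car|rail)*).
g = {"s": [("a", "car", 2.0)], "a": [("t", "rail", 1.0), ("t", "car", 5.0)]}
dfa = ("q0", {"q1"}, {("q0", "car"): "q1", ("q1", "rail"): "q1", ("q1", "car"): "q1"})
print(constrained_shortest_path(g, dfa, "s", "t"))  # 3.0
```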
Collins, Anne G. E.; Frank, Michael J.
2012-01-01
Instrumental learning involves corticostriatal circuitry and the dopaminergic system. This system is typically modeled in the reinforcement learning (RL) framework by incrementally accumulating reward values of states and actions. However, human learning also implicates prefrontal cortical mechanisms involved in higher level cognitive functions. The interaction of these systems remains poorly understood, and models of human behavior often ignore working memory (WM) and therefore incorrectly assign behavioral variance to the RL system. Here we designed a task that highlights the profound entanglement of these two processes, even in simple learning problems. By systematically varying the size of the learning problem and delay between stimulus repetitions, we separately extracted WM-specific effects of load and delay on learning. We propose a new computational model that accounts for the dynamic integration of RL and WM processes observed in subjects' behavior. Incorporating capacity-limited WM into the model allowed us to capture behavioral variance that could not be captured in a pure RL framework even if we (implausibly) allowed separate RL systems for each set size. The WM component also allowed for a more reasonable estimation of a single RL process. Finally, we report effects of two genetic polymorphisms having relative specificity for prefrontal and basal ganglia functions. Whereas the COMT gene coding for catechol-O-methyl transferase selectively influenced model estimates of WM capacity, the GPR6 gene coding for G-protein-coupled receptor 6 influenced the RL learning rate. Thus, this study allowed us to specify distinct influences of the high-level and low-level cognitive functions on instrumental learning, beyond the possibilities offered by simple RL models. PMID:22487033
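A toy sketch of the model class described (the parameter names and simplifications below are ours, not the authors' code): choice probabilities mix a capacity-limited working-memory policy with an incremental RL policy, and the WM weight shrinks as the set size exceeds capacity.

```python
import numpy as np

def softmax(q, beta=8.0):
    e = np.exp(beta * (q - q.max()))
    return e / e.sum()

class RLWM:
    """Illustrative RL+WM mixture, not the published model."""
    def __init__(self, n_stimuli, n_actions, alpha=0.1, capacity=3.0):
        self.Q = np.ones((n_stimuli, n_actions)) / n_actions   # incremental RL values
        self.WM = np.ones((n_stimuli, n_actions)) / n_actions  # one-shot WM store
        self.alpha, self.capacity, self.set_size = alpha, capacity, n_stimuli

    def act_probs(self, stim):
        w = min(1.0, self.capacity / self.set_size)  # WM reliability falls with load
        return w * self.WM[stim] + (1.0 - w) * softmax(self.Q[stim])

    def update(self, stim, action, reward):
        # Slow RL update; WM remembers the last rewarded action exactly.
        self.Q[stim, action] += self.alpha * (reward - self.Q[stim, action])
        if reward > 0:
            self.WM[stim] = 0.0
            self.WM[stim, action] = 1.0
```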
A finite-element model for moving contact line problems in immiscible two-phase flow
NASA Astrophysics Data System (ADS)
Kucala, Alec
2017-11-01
Accurate modeling of moving contact line (MCL) problems is imperative in predicting capillary pressure vs. saturation curves, permeability, and preferential flow paths for a variety of applications, including geological carbon storage (GCS) and enhanced oil recovery (EOR). The macroscale movement of the contact line depends on the molecular interactions occurring at the three-phase interface; however, most MCL problems require resolution at the meso- and macro-scale. A phenomenological model must therefore be developed to account for the microscale interactions, as resolving both the macro- and micro-scale would render most problems computationally intractable. Here, a model for the moving contact line is presented as a weak forcing term in the Navier-Stokes equations, applied directly at the location of the three-phase interface point. The moving interface is tracked with the level set method and discretized using the conformal decomposition finite element method (CDFEM), allowing the surface tension and the wetting model to be computed at the exact interface location. A variety of verification test cases for simple two- and three-dimensional geometries are presented to validate the current MCL model, which can exhibit grid independence when a proper scaling for the slip length is chosen. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525.
Heuristics and Biases in Military Decision Making
2010-10-01
rationality and is based on a linear, step-based model that generates a specific course of action and is useful for the examination of problems that ... exhibit stability and are underpinned by assumptions of "technical rationality." The Army values MDMP as the sanctioned approach for solving ... theory), which sought to describe human behavior as a rational maximization of cost-benefit decisions, Kahneman and Tversky provided a simple
ERIC Educational Resources Information Center
DeVane, Benjamin
2017-01-01
In this review article, I argue that games are complementary, not self-supporting, learning tools for democratic education because they can: (a) offer "simplified, but often not simple, outlines" (later called "models") of complex social systems that generate further inquiry; (b) provide "practice spaces" for…
How to Detect the Location and Time of a Covert Chemical Attack: A Bayesian Approach
2009-12-01
Inverse Problems, Design and Optimization Symposium 2004, Rio de Janeiro, Brazil. Chan, R., and Yee, E. (1997). A simple model for the probability ... sensor interpretation applications and has been successfully applied, for example, to estimate the source strength of pollutant releases in multi ... coagulation, and second-order pollutant diffusion in sorption-desorption, are not linear. Furthermore, wide uncertainty bounds exist for several of
Bistability, non-ergodicity, and inhibition in pairwise maximum-entropy models
Rostami, Vahid; Porta Mana, PierGianLuca; Grün, Sonja; Helias, Moritz
2017-01-01
Pairwise maximum-entropy models have been used in neuroscience to predict the activity of neuronal populations, given only the time-averaged correlations of the neuron activities. This paper provides evidence that the pairwise model, applied to experimental recordings, would produce a bimodal distribution for the population-averaged activity, and for some population sizes the second mode would peak at high activities, that experimentally would be equivalent to 90% of the neuron population active within time-windows of few milliseconds. Several problems are connected with this bimodality: 1. The presence of the high-activity mode is unrealistic in view of observed neuronal activity and on neurobiological grounds. 2. Boltzmann learning becomes non-ergodic, hence the pairwise maximum-entropy distribution cannot be found: in fact, Boltzmann learning would produce an incorrect distribution; similarly, common variants of mean-field approximations also produce an incorrect distribution. 3. The Glauber dynamics associated with the model is unrealistically bistable and cannot be used to generate realistic surrogate data. This bimodality problem is first demonstrated for an experimental dataset from 159 neurons in the motor cortex of macaque monkey. Evidence is then provided that this problem affects typical neural recordings of population sizes of a couple of hundreds or more neurons. The cause of the bimodality problem is identified as the inability of standard maximum-entropy distributions with a uniform reference measure to model neuronal inhibition. To eliminate this problem a modified maximum-entropy model is presented, which reflects a basic effect of inhibition in the form of a simple but non-uniform reference measure. This model does not lead to unrealistic bimodalities, can be found with Boltzmann learning, and has an associated Glauber dynamics which incorporates a minimal asymmetric inhibition. PMID:28968396
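In the standard pairwise form, the model referred to is (schematically, with binary activities s_i in {0,1}):

```latex
P(\mathbf{s}) = \frac{g(\mathbf{s})}{Z}\,
\exp\!\Big(\sum_i h_i s_i + \tfrac{1}{2}\sum_{i \neq j} J_{ij}\, s_i s_j\Big),
```

with a uniform reference measure g ≡ 1. The modification proposed in the paper replaces g by a simple non-uniform reference measure that suppresses high population activity Σ_i s_i, mimicking inhibition; the exact functional form of g is a detail of the paper and is not reproduced here.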
Atmospheric Tracer Inverse Modeling Using Markov Chain Monte Carlo (MCMC)
NASA Astrophysics Data System (ADS)
Kasibhatla, P.
2004-12-01
In recent years, there has been an increasing emphasis on the use of Bayesian statistical estimation techniques to characterize the temporal and spatial variability of atmospheric trace gas sources and sinks. The applications have been varied in terms of the particular species of interest, as well as in terms of the spatial and temporal resolution of the estimated fluxes. However, one common characteristic has been the use of relatively simple statistical models for describing the measurement and chemical transport model error statistics and prior source statistics. For example, multivariate normal probability distribution functions (pdfs) are commonly used to model these quantities, and inverse source estimates are derived for fixed values of the pdf parameters. While the advantage of this approach is that closed-form analytical solutions for the a posteriori pdfs of interest are available, it is worth exploring Bayesian analysis approaches which allow for a more general treatment of error and prior source statistics. Here, we present an application of the Markov Chain Monte Carlo (MCMC) methodology to an atmospheric tracer inversion problem to demonstrate how more general statistical models for errors can be incorporated into the analysis in a relatively straightforward manner. The MCMC approach to Bayesian analysis, which has found wide application in a variety of fields, is a statistical simulation approach that involves computing moments of interest of the a posteriori pdf by efficiently sampling this pdf. The specific inverse problem that we focus on is the annual mean CO2 source/sink estimation problem considered by the TransCom3 project. TransCom3 was a collaborative effort involving various modeling groups that followed a common modeling and analysis protocol. As such, this problem provides a convenient case study to demonstrate the applicability of the MCMC methodology to atmospheric tracer source/sink estimation problems.
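A minimal sketch of the approach on a synthetic linear tracer inversion (all data and dimensions below are invented for illustration; real applications replace A with a transport model response matrix):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inversion: obs = A @ sources + noise (all synthetic).
A = rng.random((20, 3))                 # assumed-known transport response matrix
s_true = np.array([2.0, -1.0, 0.5])
y = A @ s_true + 0.1 * rng.standard_normal(20)

def log_post(s, sigma=0.1, prior_sd=5.0):
    # Gaussian likelihood + Gaussian prior; any non-Gaussian choice works too,
    # which is the point of using MCMC instead of the analytic solution.
    return (-0.5 * np.sum((y - A @ s) ** 2) / sigma**2
            - 0.5 * np.sum(s**2) / prior_sd**2)

s, samples = np.zeros(3), []
for it in range(20000):
    prop = s + 0.05 * rng.standard_normal(3)          # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(s):
        s = prop                                      # Metropolis accept
    if it > 5000:                                     # discard burn-in
        samples.append(s.copy())
print("posterior mean:", np.mean(samples, axis=0))
```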
Hadjicharalambous, Myrianthi; Lee, Jack; Smith, Nicolas P.; Nordsletten, David A.
2014-01-01
The Lagrange Multiplier (LM) and penalty methods are commonly used to enforce incompressibility and compressibility in models of cardiac mechanics. In this paper we show how both formulations may be equivalently thought of as a weakly penalized system derived from the statically condensed Perturbed Lagrangian formulation, which may be directly discretized maintaining the simplicity of penalty formulations with the convergence characteristics of LM techniques. A modified Shamanskii–Newton–Raphson scheme is introduced to enhance the nonlinear convergence of the weakly penalized system and, exploiting its equivalence, modifications are developed for the penalty form. Focusing on accuracy, we proceed to study the convergence behavior of these approaches using different interpolation schemes for both a simple test problem and more complex models of cardiac mechanics. Our results illustrate the well-known influence of locking phenomena on the penalty approach (particularly for lower order schemes) and its effect on accuracy for whole-cycle mechanics. Additionally, we verify that direct discretization of the weakly penalized form produces similar convergence behavior to mixed formulations while avoiding the use of an additional variable. Combining a simple structure which allows the solution of computationally challenging problems with good convergence characteristics, the weakly penalized form provides an accurate and efficient alternative to incompressibility and compressibility in cardiac mechanics. PMID:25187672
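The Perturbed Lagrangian functional underlying both limits can be written schematically (our notation) as

```latex
\Pi(\mathbf{u}, p) = \int_{\Omega} \Big[\, W(\mathbf{u}) + p\,(J - 1) - \frac{p^{2}}{2\kappa} \,\Big]\,\mathrm{d}\Omega,
```

where stationarity with respect to p gives p = κ(J − 1); eliminating p statically yields the penalty form, while κ → ∞ recovers the Lagrange-multiplier (fully incompressible) formulation. The weakly penalized method discretizes the condensed form directly.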
Turning Points of the Spherical Pendulum and the Golden Ratio
ERIC Educational Resources Information Center
Essen, Hanno; Apazidis, Nicholas
2009-01-01
We study the turning point problem of a spherical pendulum. The special cases of the simple pendulum and the conical pendulum are noted. For simple initial conditions the solution to this problem involves the golden ratio, also called the golden section, or the golden number. This number often appears in mathematics where you least expect it. To…
ERIC Educational Resources Information Center
Entrikin, Jerry; Griffiths, David
1983-01-01
The main problem in constructing functioning electric motors from simple parts is the mounting of the axle (which is too flimsy to maintain good electrical contacts or too tight, imposing excessive friction at the supports). This problem is solved by using a pencil sharpened at both ends as the axle. (JN)
Special Relativity as a Simple Geometry Problem
ERIC Educational Resources Information Center
de Abreu, Rodrigo; Guerra, Vasco
2009-01-01
The null result of the Michelson-Morley experiment and the constancy of the one-way speed of light in the "rest system" are used to formulate a simple problem, to be solved by elementary geometry techniques using a pair of compasses and non-graduated rulers. The solution consists of a drawing allowing a direct visualization of all the fundamental…
NASA Astrophysics Data System (ADS)
Perez, R. J.; Shevalier, M.; Hutcheon, I.
2004-05-01
Gas solubility is of considerable interest, not only for the theoretical understanding of vapor-liquid equilibria, but also due to extensive applications in combined geochemical, engineering, and environmental problems, such as greenhouse gas sequestration. Reliable models for gas solubility calculations in salt waters and hydrocarbons are also valuable when evaluating fluid inclusions saturated with gas components. We have modeled the solubility of methane, ethane, hydrogen, carbon dioxide, hydrogen sulfide, and five other gases in a water-brine-hydrocarbon system by solving a nonlinear system of equations composed of modified Henry's Law constants (HLC) and gas fugacities, assuming binary mixtures. The HLCs are functions of pressure, temperature, brine salinity, and hydrocarbon density. Experimental data on vapor pressures and mutual solubilities of binary mixtures provide the basis for the calibration of the proposed model. It is demonstrated that, by using the Setchenow equation, only a relatively simple modification of the pure water model is required to assess the solubility of gases in brine solutions. Henry's Law constants for gases in hydrocarbons are derived using regular solution theory and Ostwald coefficients available from the literature. We present a set of two-parameter polynomial expressions, which allow simple computation and formulation of the model. Our calculations show that solubility predictions using modified HLCs are acceptable from 0 to 250 °C, 1 to 150 bar, salinities up to 5 molar, and gas concentrations up to 4 molar. Our model is currently being used in the IEA Weyburn CO2 monitoring and storage project.
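The Setchenow correction mentioned above takes the standard form (our notation; the paper's fitted coefficients are not reproduced here):

```latex
\log_{10}\!\left(\frac{S_0}{S}\right) = k_s\, C_s,
```

where S_0 and S are the gas solubilities in pure water and in brine, C_s is the molar salt concentration, and k_s is the salt- and gas-specific Setchenow coefficient, so only the pure-water Henry's law constant needs modification to cover brines.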
NASA Technical Reports Server (NTRS)
Smalley, Kurt B.; Tinker, Michael L.; Fischer, Richard T.
2001-01-01
This paper provides an introduction and set of guidelines for a methodology for NASTRAN eigenvalue modeling of thin film inflatable structures. It is hoped that this paper will spare the reader the problems the authors were confronted with during their investigation, by presenting not only an introduction and verification of the methodology, but also a discussion of the problems that this methodology can entail. Our goal in this investigation was to verify the basic methodology through the creation and correlation of a simple model. An overview of thin film structures, their history, and their applications is given. Previous modeling work is then briefly discussed. An introduction is then given to the method of modeling. The specific mechanics of the method are then discussed in parallel with a basic discussion of NASTRAN's implementation of these mechanics. The problems encountered with the method are then given, along with suggestions for their workarounds. The methodology is verified through the correlation between an analytical model and modal test results of a thin film strut. Recommendations are given for the needed advancement of our understanding of this method and our ability to accurately model thin film structures. Finally, conclusions are drawn regarding the usefulness of the methodology.
Ingram, David; Engelhardt, Christoph; Farron, Alain; Terrier, Alexandre; Müllhaupt, Philippe
2016-01-01
Modelling the shoulder's musculature is challenging given its mechanical and geometric complexity. The use of the ideal fibre model to represent a muscle's line of action cannot always faithfully represent the mechanical effect of each muscle, leading to considerable differences between model-estimated and in vivo measured muscle activity. While the musculo-tendon force coordination problem has been extensively analysed in terms of the cost function, only a few works have investigated the existence and sensitivity of solutions to fibre topology. The goal of this paper is to present an analysis of the solution set using the concepts of torque-feasible space (TFS) and wrench-feasible space (WFS) from cable-driven robotics. A shoulder model is presented and a simple musculo-tendon force coordination problem is defined. The ideal fibre model for representing muscles is reviewed and the TFS and WFS are defined, leading to necessary and sufficient conditions for the existence of a solution. The shoulder model's TFS is analysed to explain the lack of anterior deltoid (DLTa) activity. Based on the analysis, a modification of the model's muscle fibre geometry is proposed. The performance with and without the modification is assessed by solving the musculo-tendon force coordination problem for quasi-static abduction in the scapular plane. After the proposed modification, the DLTa reaches 20% of activation.
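In cable-driven-robotics terms, the torque-feasible space referred to is, schematically (our notation), the set of joint torques reachable with non-negative, bounded fibre tensions:

```latex
\mathcal{T}(\mathbf{q}) = \left\{\, W(\mathbf{q})\,\mathbf{f} \;:\; \mathbf{0} \le \mathbf{f} \le \mathbf{f}_{\max} \,\right\},
```

where W(q) is the matrix of muscle moment arms at posture q and f is the vector of fibre tensions. A musculo-tendon force coordination problem has a solution exactly when the required torque lies in T(q), which is how a systematic lack of activity in a muscle such as DLTa can be diagnosed geometrically.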
Gog, Simon; Bader, Martin
2008-10-01
The problem of sorting signed permutations by reversals is a well-studied problem in computational biology. The first polynomial time algorithm was presented by Hannenhalli and Pevzner in 1995. The algorithm was improved several times, and nowadays the most efficient algorithm has a subquadratic running time. Simple permutations played an important role in the development of these algorithms. Although the latest result of Tannier et al. does not require simple permutations, the preliminary version of their algorithm as well as the first polynomial time algorithm of Hannenhalli and Pevzner use the structure of simple permutations. More precisely, the latter algorithms require a precomputation that transforms a permutation into an equivalent simple permutation. To the best of our knowledge, all published algorithms for this transformation have at least a quadratic running time. For further investigations on genome rearrangement problems, the existence of a fast algorithm for the transformation could be crucial. Another important task is the back transformation: given a sorting sequence of the simple permutation, transform it into a sorting sequence of the original permutation. Again, the naive approach results in an algorithm with quadratic running time. In this paper, we present a linear time algorithm for transforming a permutation into an equivalent simple permutation, and an O(n log n) algorithm for the back transformation of the sorting sequence.
Age of first arrest varies by gambling status in a cohort of young adults
Martins, Silvia S.; Lee, Grace P.; Santaella, Julian; Liu, Weiwei; Ialongo, Nicholas S.; Storr, Carla L.
2015-01-01
Background and objectives: To describe the association between social and problem gambling and first criminal arrest by age 23 in a cohort of urban, mainly African-American youth. Methods: Data for this study were derived from several annual interviews completed by a community sample of 617 participants from late adolescence until age 23. Information on gambling status, engagement in deviant behaviors, illegal drug use, and arrest history was collected through yearly interviews. Analysis was carried out using Nelson-Aalen cumulative hazard models and simple and adjusted Cox proportional hazards models. Results: More problem gamblers had been arrested before age 23 than social gamblers and non-gamblers: 65% of problem gamblers were arrested before age 23, compared to 38% of social gamblers and 24% of non-gamblers. Social gambling was significantly associated with the hazard of first arrest by age 23 only in the unadjusted model (HR: 1.6, p<.001), not after adjustment for covariates (HR: 1.1, p=0.47). Problem gambling was significantly associated with the hazard of first arrest by age 23 in both the unadjusted (HR: 3.6, p<.001) and adjusted models (HR: 1.6, p=0.05). Conclusions and scientific significance: Problem gambling was significantly associated with an earlier age of first arrest. Dilution effects after adjustment for several deviant behaviors and illegal drug use by age 17 suggest that youth exposed to certain common factors may engage in multiple risky behaviors, including problem gambling. Studies are needed to investigate the developmental pathways that lead to these combined behaviors among youth. PMID:24628694
Levy, David M; Peart, Sandra J
2008-06-01
We wish to deal with investigator bias in a statistical context. We sketch how a textbook solution to the problem of "outliers," while avoiding one sort of investigator bias, creates the temptation for another sort. We write down a model of the approbation-seeking statistician who is tempted by sympathy for the client to violate disciplinary standards. We give a simple account of one context in which we might expect investigator bias to flourish. Finally, we offer tentative suggestions for dealing with the problem of investigator bias that follow from our account. As we have given a very sparse and stylized account of investigator bias, we ask what might be done to overcome this limitation.
Crash test for the Copenhagen problem.
Nagler, Jan
2004-06-01
The Copenhagen problem is a simple model in celestial mechanics. It serves to investigate the behavior of a small body under the gravitational influence of two equally heavy primary bodies. We present a partition of orbits into classes of various kinds of regular motion, chaotic motion, escape and crash. Collisions of the small body onto one of the primaries turn out to be unexpectedly frequent, and their probability displays a scale-free dependence on the size of the primaries. The analysis reveals a high degree of complexity so that long term prediction may become a formidable task. Moreover, we link the results to chaotic scattering theory and the theory of leaking Hamiltonian systems.
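A minimal sketch of the setup (not the authors' code): the planar circular restricted three-body problem in the rotating frame with equal primary masses (μ = 1/2), integrated until the test body hits a primary of assumed finite radius.

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 0.5        # Copenhagen problem: equally heavy primaries
R_BODY = 1e-3   # assumed finite primary radius used for crash detection

def rhs(t, s):
    # Nondimensional rotating-frame equations; primaries sit at (-MU,0), (1-MU,0).
    x, y, vx, vy = s
    r1 = np.hypot(x + MU, y) ** 3
    r2 = np.hypot(x - 1 + MU, y) ** 3
    ax = x + 2 * vy - (1 - MU) * (x + MU) / r1 - MU * (x - 1 + MU) / r2
    ay = y - 2 * vx - (1 - MU) * y / r1 - MU * y / r2
    return [vx, vy, ax, ay]

def crash(t, s):
    # Event fires when the small body reaches the surface of either primary.
    x, y = s[0], s[1]
    return min(np.hypot(x + MU, y), np.hypot(x - 1 + MU, y)) - R_BODY
crash.terminal = True

sol = solve_ivp(rhs, (0.0, 100.0), [0.3, 0.0, 0.0, 1.2], events=crash, rtol=1e-9)
print("crashed" if sol.t_events[0].size else "survived", "at t =", sol.t[-1])
```

Classifying many such initial conditions into regular, chaotic, escape, and crash orbits is what produces the partition described in the abstract.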
NASA Technical Reports Server (NTRS)
Gadi, Jagannath; Yalamanchili, Raj; Shahid, Mohammad
1995-01-01
The need for high efficiency components has grown significantly due to the expanding role of fiber optic communications in various applications. Integrated optics is in a state of metamorphosis, and many problems await solutions. One of the main problems is the lack of a simple and efficient method of coupling single-mode fibers to thin-film devices for integrated optics. In this paper, optical coupling between a single-mode fiber and a uniform and tapered thin-film waveguide is theoretically modeled and analyzed. A novel tapered structure presented in this paper is shown to produce a perfect match for power transfer.
On estimation of secret message length in LSB steganography in spatial domain
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav
2004-06-01
In this paper, we present a new method for estimating the secret message length of bit-streams embedded using Least Significant Bit (LSB) embedding at random pixel positions. We introduce the concept of a weighted stego image and then formulate the problem of determining the unknown message length as a simple optimization problem. The methodology is further refined to obtain more stable and accurate results for a wide spectrum of natural images. One of the advantages of the new method is its modular structure and a clean mathematical derivation that enables elegant estimator accuracy analysis using statistical image models.
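Schematically, following the abstract's description (our notation, not the paper's exact formulation): the weighted stego image blends the stego image s with the image s̄ obtained by flipping every LSB,

```latex
\mathbf{j}(\lambda) = (1-\lambda)\,\mathbf{s} + \lambda\,\bar{\mathbf{s}},
\qquad
\hat{\lambda} = \arg\min_{\lambda}\;\big\|\, \mathbf{j}(\lambda) - \hat{\mathbf{c}} \,\big\|^{2},
```

where ĉ is a local estimate of the cover image (e.g., from neighborhood averaging). Because the objective is quadratic in λ, the minimizer is available in closed form, and the relative message length follows from λ̂.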
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tokola, Ryan A; Mikkilineni, Aravind K; Boehnen, Chris Bensing
Despite being increasingly easy to acquire, 3D data is rarely used for face-based biometrics applications beyond identification. Recent work in image-based demographic biometrics has enjoyed much success, but these approaches suffer from the well-known limitations of 2D representations, particularly variations in illumination, texture, and pose, as well as a fundamental inability to describe 3D shape. This paper shows that simple 3D shape features in a face-based coordinate system are capable of representing many biometric attributes without problem-specific models or specialized domain knowledge. The same feature vector achieves impressive results for problems as diverse as age estimation, gender classification, and race classification.
Improving Predictions of Multiple Binary Models in ILP
2014-01-01
Despite the success of ILP systems in learning first-order rules from small numbers of examples and complexly structured data in various domains, they struggle to deal with multiclass problems. In most cases they boil a multiclass problem down into multiple black-box binary problems, following the one-versus-one or one-versus-rest binarisation techniques, and learn a theory for each one. When evaluating the learned theories of multiple-class problems, particularly in the one-versus-rest paradigm, there is a bias caused by the default rule toward the negative classes, leading to unrealistically high performance, besides the lack of prediction integrity between the theories. Here we discuss the problem of using the one-versus-rest binarisation technique when evaluating multiclass data and propose several methods to remedy this problem. We also illustrate the methods and highlight their link to binary trees and Formal Concept Analysis (FCA). Our methods allow learning of a simple, consistent, and reliable multiclass theory by combining the rules of the multiple one-versus-rest theories into one rule-list or rule-set theory. Empirical evaluation over a number of data sets shows that our proposed methods produce coherent and accurate rule models from the rules learned by the ILP system Aleph. PMID:24696657
NASA Astrophysics Data System (ADS)
Hasanah, N.; Hayashi, Y.; Hirashima, T.
2017-02-01
Arithmetic word problems remain one of the most difficult areas of teaching mathematics. Learning by problem posing has been suggested as an effective way to improve students' understanding. However, the practice in the usual classroom is difficult due to the extra time needed for assessing and giving feedback on students' posed problems. To address this issue, we have developed a tablet PC software named Monsakun for learning by posing arithmetic word problems based on the Triplet Structure Model. It uses the mechanism of sentence integration, an efficient implementation of problem posing that enables agent-assessment of posed problems. The learning environment has been used in actual Japanese elementary school classrooms and its effectiveness has been confirmed in previous research. In this study, ten Indonesian elementary school students living in Japan participated in a learning session of problem posing using Monsakun in the Indonesian language. We analyzed their learning activities and show that the students were able to interact with the structure of simple word problems using this learning environment. The results of the data analysis and a questionnaire suggest that the use of Monsakun provides a way of creating an interactive and fun environment for learning by problem posing for Indonesian elementary school students.
Efficient Workflows for Curation of Heterogeneous Data Supporting Modeling of U-Nb Alloy Aging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, Logan Timothy; Hackenberg, Robert Errol
These are slides from a presentation summarizing a graduate research associate's summer project. The following topics are covered in these slides: Data Challenges in Materials, Aging in U-Nb Alloys, Building an Aging Model, Different Phase Trans. in U-Nb, the Challenge, Storing Materials Data, Example Data Source, Organizing Data: What is a Schema?, What does a "XML Schema" look like?, Our Data Schema: Nice and Simple, Storing Data: Materials Data Curation System (MDCS), Problem with MDCS: Slow Data Entry, Getting Literature into MDCS, Staging Data in Excel Document, Final Result: MDCS Records, Analyzing Image Data, Process for Making TTT Diagram, Bottleneck Number 1: Image Analysis, Fitting a TTP Boundary, Fitting a TTP Curve: Comparable Results, How Does it Compare to Our Data?, Image Analysis Workflow, Curating Hardness Records, Hardness Data: Two Key Decisions, Before Peak Age? - Automation, Interactive Viz, Which Transformation?, Microstructure-Informed Model, Tracking the Entire Process, General Problem with Property Models, Pinyon: Toolkit for Managing Model Creation, Tracking Individual Decisions, Jupyter: Docs and Code in One File, Hardness Analysis Workflow, Workflow for Aging Models, and conclusions.
Local rules simulation of the kinetics of virus capsid self-assembly.
Schwartz, R; Shor, P W; Prevelige, P E; Berger, B
1998-12-01
A computer model is described for studying the kinetics of the self-assembly of icosahedral viral capsids. Solution of this problem is crucial to an understanding of the viral life cycle, which currently cannot be adequately addressed through laboratory techniques. The abstract simulation model employed is based on the local rules theory of Berger et al. (Proc. Natl. Acad. Sci. USA, 91:7732-7736). It is shown that the principle of local rules, generalized with a model of kinetics and other extensions, can be used to simulate complicated problems in self-assembly. This approach allows for a computationally tractable, molecular dynamics-like simulation of coat protein interactions while retaining many relevant features of capsid self-assembly. Three simple simulation experiments are presented to illustrate the use of this model. These show the dependence of growth and malformation rates on the energetics of binding interactions, the tolerance of errors in binding positions, and the concentration of subunits in the examples. These experiments demonstrate a tradeoff within the model between growth rate and fidelity of assembly for the three parameters. A detailed discussion of the computational model is also provided.
De Clercq, Etienne
2008-09-01
It is widely accepted that the development of electronic patient records, or even of a common electronic patient record, is one possible way to improve cooperation and data communication between nurses and physicians. Yet, little has been done so far to develop a common conceptual model for both medical and nursing patient records, which is a first challenge that should be met to set up a common electronic patient record. In this paper, we describe a problem-oriented conceptual model and we show how it may suit both nursing and medical perspectives in a hospital setting. We started from existing nursing theory and from an initial model previously set up for primary care. In a hospital pilot site, a multi-disciplinary team refined this model using one large and complex clinical case (retrospective study) and nine ongoing cases (prospective study). An internal validation was performed through hospital-wide multi-professional interviews and through discussions around a graphical user interface prototype. To assess the consistency of the model, a computer engineer specified it. Finally, a Belgian expert working group performed an external assessment of the model. As a basis for a common patient record we propose a simple problem-oriented conceptual model with two levels of meta-information. The model is mapped with current nursing theories and it includes the following concepts: "health care element", "health approach", "health agent", "contact", "subcontact" and "service". These concepts, their interrelationships and some practical rules for using the model are illustrated in this paper. Our results are compatible with ongoing standardization work at the Belgian and European levels. Our conceptual model is potentially a foundation for a multi-professional electronic patient record that is problem-oriented and therefore patient-centred.
High-frequency CAD-based scattering model: SERMAT
NASA Astrophysics Data System (ADS)
Goupil, D.; Boutillier, M.
1991-09-01
Specifications for an industrial radar cross section (RCS) calculation code are given: it must be able to exchange data with many computer aided design (CAD) systems, it must be fast, and it must have powerful graphic tools. Classical physical optics (PO) and equivalent currents (EC) techniques have long proven their efficiency on simple objects. Difficult geometric problems occur when objects with very complex shapes have to be computed; only a specific geometric code can solve these problems. We have established that, once these problems have been solved: (1) PO and EC give good results on complex objects of large size compared to the wavelength; and (2) the implementation of these techniques in a software package (SERMAT) allows fast and sufficiently precise RCS calculations that meet industry requirements in the domain of stealth.
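For reference, the physical-optics approximation referred to induces the surface current given below (a standard result, not specific to SERMAT), together with the usual far-field RCS definition:

```latex
\mathbf{J}_{\mathrm{PO}} =
\begin{cases}
2\,\hat{\mathbf{n}} \times \mathbf{H}_{\mathrm{inc}} & \text{on lit surfaces},\\[2pt]
\mathbf{0} & \text{in shadow},
\end{cases}
\qquad
\sigma = \lim_{r \to \infty} 4\pi r^{2}\,\frac{|\mathbf{E}_{s}|^{2}}{|\mathbf{E}_{i}|^{2}},
```

so the geometric difficulty the abstract mentions is precisely deciding lit versus shadowed regions (with edges handled by the equivalent currents) on a complex CAD shape.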
ParaExp Using Leapfrog as Integrator for High-Frequency Electromagnetic Simulations
NASA Astrophysics Data System (ADS)
Merkel, M.; Niyonzima, I.; Schöps, S.
2017-12-01
Recently, ParaExp was proposed for the time integration of linear hyperbolic problems. It splits the time interval of interest into subintervals and computes the solution on each subinterval in parallel. The overall solution is decomposed into a particular solution defined on each subinterval with zero initial conditions and a homogeneous solution propagated by the matrix exponential applied to the initial conditions. The efficiency of the method depends on fast approximations of this matrix exponential based on recent results from numerical linear algebra. This paper deals with the application of ParaExp in combination with Leapfrog to electromagnetic wave problems in time domain. Numerical tests are carried out for a simple toy problem and a realistic spiral inductor model discretized by the Finite Integration Technique.
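The decomposition the method relies on can be summarized as follows: on each subinterval [t_k, t_{k+1}] of a linear problem u' = Au + g(t), the solution is split as

```latex
\mathbf{u}(t) = \mathbf{w}_k(t) + e^{(t - t_k)A}\,\mathbf{u}(t_k),
\qquad
\mathbf{w}_k' = A\,\mathbf{w}_k + \mathbf{g}(t),\quad \mathbf{w}_k(t_k) = \mathbf{0},
```

so the particular solutions w_k can be computed in parallel (here with Leapfrog), while the homogeneous parts are propagated by fast approximations of the matrix exponential applied to the initial conditions.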
Singular perturbation analysis of AOTV-related trajectory optimization problems
NASA Technical Reports Server (NTRS)
Calise, Anthony J.; Bae, Gyoung H.
1990-01-01
The problem of real time guidance and optimal control of Aeroassisted Orbit Transfer Vehicles (AOTV's) was addressed using singular perturbation theory as an underlying method of analysis. Trajectories were optimized with the objective of minimum energy expenditure in the atmospheric phase of the maneuver. Two major problem areas were addressed: optimal reentry, and synergetic plane change with aeroglide. For the reentry problem, several reduced order models were analyzed with the objective of optimal changes in heading with minimum energy loss. It was demonstrated that a further model order reduction to a single state model is possible through the application of singular perturbation theory. The optimal solution for the reduced problem defines an optimal altitude profile dependent on the current energy level of the vehicle. A separate boundary layer analysis is used to account for altitude and flight path angle dynamics, and to obtain lift and bank angle control solutions. By considering alternative approximations to solve the boundary layer problem, three guidance laws were derived, each having an analytic feedback form. The guidance laws were evaluated using a Maneuvering Reentry Research Vehicle model and all three laws were found to be near optimal. For the problem of synergetic plane change with aeroglide, a difficult terminal boundary layer control problem arises which to date is found to be analytically intractable. Thus a predictive/corrective solution was developed to satisfy the terminal constraints on altitude and flight path angle. A composite guidance solution was obtained by combining the optimal reentry solution with the predictive/corrective guidance method. Numerical comparisons with the corresponding optimal trajectory solutions show that the resulting performance is very close to optimal. An attempt was made to obtain numerically optimized trajectories for the case where heating rate is constrained. A first order state variable inequality constraint was imposed on the full order AOTV point mass equations of motion, using a simple aerodynamic heating rate model.
SNMP-SI: A Network Management Tool Based on Slow Intelligence System Approach
NASA Astrophysics Data System (ADS)
Colace, Francesco; de Santo, Massimo; Ferrandino, Salvatore
The last decade has witnessed an intense spread of computer networks, further accelerated by the introduction of wireless networks. This growth has been accompanied by a significant increase in network management problems. Especially in small companies, where no personnel are assigned to these tasks, the management of such networks is often complex, and malfunctions can have significant impacts on business. A possible solution is the adoption of the Simple Network Management Protocol (SNMP), a standard protocol used to exchange network management information. It is part of the Transmission Control Protocol/Internet Protocol (TCP/IP) protocol suite. SNMP provides a tool for network administrators to manage network performance, find and solve network problems, and plan for network growth. SNMP has a big disadvantage, however: its simple design means that the information it deals with is neither detailed nor well organized enough to deal with expanding modern networking requirements. Over the past years much effort has been devoted to overcoming the limitations of SNMP, and new frameworks have been developed; a promising approach involves the use of ontologies. This is the starting point of this paper, where a novel approach to network management based on the use of Slow Intelligence System methodologies and ontology-based techniques is proposed. A Slow Intelligence System is a general-purpose system characterized by being able to improve its performance over time through a process involving enumeration, propagation, adaptation, elimination and concentration. The proposed approach therefore aims to develop a system able to acquire, according to the SNMP standard, information from the various hosts in the managed networks and apply solutions in order to solve problems. To check the feasibility of this model, first experimental results in a real scenario are shown.
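As a concrete illustration of the kind of SNMP polling such a system performs, here is a minimal query using the third-party pysnmp library (one common Python SNMP implementation, not the tool described in the paper; the target address and community string are placeholders):

```python
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

# Fetch sysDescr.0 (a standard MIB-II object) from a managed host.
error_indication, error_status, error_index, var_binds = next(
    getCmd(SnmpEngine(),
           CommunityData('public', mpModel=0),        # SNMPv1 community string
           UdpTransportTarget(('192.0.2.1', 161)),    # placeholder address
           ContextData(),
           ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0))))

if error_indication:
    print(error_indication)
else:
    for name, value in var_binds:
        print(f"{name} = {value}")
```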
A sequential linear optimization approach for controller design
NASA Technical Reports Server (NTRS)
Horta, L. G.; Juang, J.-N.; Junkins, J. L.
1985-01-01
A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive-structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.
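The linearization step can be summarized as follows (our notation): with first-order eigenvalue sensitivities, a damping requirement becomes a linear inequality in the parameter change Δp,

```latex
\lambda_i(\mathbf{p} + \Delta\mathbf{p}) \approx \lambda_i(\mathbf{p}) + \sum_j \frac{\partial \lambda_i}{\partial p_j}\,\Delta p_j,
\qquad
\operatorname{Re}\lambda_i(\mathbf{p} + \Delta\mathbf{p}) \le -\sigma_i,
```

which, together with move limits on Δp, defines a linear program solved at each step of the continuation procedure.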
Tensor-GMRES method for large sparse systems of nonlinear equations
NASA Technical Reports Server (NTRS)
Feng, Dan; Pulliam, Thomas H.
1994-01-01
This paper introduces a tensor-Krylov method, the tensor-GMRES method, for large sparse systems of nonlinear equations. This method is a coupling of tensor model formation and solution techniques for nonlinear equations with Krylov subspace projection techniques for unsymmetric systems of linear equations. Traditional tensor methods for nonlinear equations are based on a quadratic model of the nonlinear function, a standard linear model augmented by a simple second order term. These methods are shown to be significantly more efficient than standard methods both on nonsingular problems and on problems where the Jacobian matrix at the solution is singular. A major disadvantage of the traditional tensor methods is that the solution of the tensor model requires the factorization of the Jacobian matrix, which may not be suitable for problems where the Jacobian matrix is large and has a 'bad' sparsity structure for an efficient factorization. We overcome this difficulty by forming and solving the tensor model using an extension of a Newton-GMRES scheme. Like traditional tensor methods, we show that the new tensor method has significant computational advantages over the analogous Newton counterpart. Consistent with Krylov subspace based methods, the new tensor method does not depend on the factorization of the Jacobian matrix. As a matter of fact, the Jacobian matrix is never needed explicitly.
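In the usual tensor-method form, the quadratic model referred to augments the linear (Newton) model with a simple rank-one second-order term:

```latex
M_T(\mathbf{x}_c + \mathbf{d}) = F(\mathbf{x}_c) + J(\mathbf{x}_c)\,\mathbf{d} + \tfrac{1}{2}\,\mathbf{a}\,(\mathbf{s}^{\top}\mathbf{d})^{2},
```

where s is a past step direction and a is chosen so that the model interpolates function information from a previous iterate; the tensor-GMRES method solves this model with Krylov projections instead of a factorization of J.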
NASA Astrophysics Data System (ADS)
Valdes-Parada, F. J.; Ostvar, S.; Wood, B. D.; Miller, C. T.
2017-12-01
Modeling of hierarchical systems such as porous media can be performed by different approaches that bridge microscale physics to the macroscale. Among the several alternatives available in the literature, the thermodynamically constrained averaging theory (TCAT) has emerged as a robust modeling approach that provides macroscale models that are consistent across scales. For specific closure relation forms, TCAT models are expressed in terms of parameters that depend upon the physical system under study. These parameters are usually obtained from inverse modeling based upon either experimental data or direct numerical simulation at the pore scale. Other upscaling approaches, such as the method of volume averaging, involve an a priori scheme for parameter estimation for certain microscale and transport conditions. In this work, we show how such a predictive scheme can be implemented in TCAT by studying the simple problem of single-phase passive diffusion in rigid and homogeneous porous media. The components of the effective diffusivity tensor are predicted for several porous media by solving ancillary boundary-value problems in periodic unit cells. The results are validated through a comparison with data from direct numerical simulation. This extension of TCAT constitutes a useful advance for certain classes of problems amenable to this estimation approach.
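For passive diffusion, the a priori scheme amounts to the classical unit-cell closure problem, shown here in its method-of-volume-averaging form (which the TCAT calculation parallels; our notation):

```latex
\nabla^{2}\mathbf{b} = \mathbf{0} \;\text{in the fluid},
\qquad
-\hat{\mathbf{n}}\cdot\nabla\mathbf{b} = \hat{\mathbf{n}} \;\text{on the solid surface},
\qquad
\mathbf{D}_{\mathrm{eff}} = D\left(\epsilon\,\mathbf{I} + \frac{1}{V}\int_{A_{fs}} \hat{\mathbf{n}}\,\mathbf{b}\;\mathrm{d}A\right),
```

with b periodic on the unit cell; solving this boundary-value problem once per microstructure yields the effective diffusivity tensor without any inverse fitting.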
Research on Fault Rate Prediction Method of T/R Component
NASA Astrophysics Data System (ADS)
Hou, Xiaodong; Yang, Jiangping; Bi, Zengjun; Zhang, Yu
2017-07-01
The T/R component is an important part of a large phased array radar antenna; because of their large numbers and high fault rate, fault prediction for T/R components is of significant importance. Aiming at the problems of the traditional grey model GM(1,1) in practical operation, this paper establishes a discrete grey model based on the original one, introduces an optimization factor to optimize the background value, and adds a linear-regression term to the prediction model, yielding an improved linear-regression discrete grey model. Finally, an example is simulated and compared with other models. The results show that the proposed method has higher accuracy, is simple to solve, and has a wider scope of application.
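For orientation, a minimal implementation of the classic GM(1,1) baseline that the paper improves upon (the series below is made up for illustration):

```python
import numpy as np

def gm11_predict(x0, n_ahead=3):
    """Classic GM(1,1) grey forecast. x0: 1-D positive series.
    Returns the fitted series extended by n_ahead steps."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                       # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])            # mean (background) values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # grey parameters
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # whitened-equation solution
    return np.diff(x1_hat, prepend=0.0)                # inverse AGO

print(gm11_predict([2.87, 3.28, 3.34, 3.62, 3.86], n_ahead=2))
```

The discrete grey model replaces the whitened differential equation with an exact recurrence, and the paper's improvements (optimized background value, added linear term) modify the construction of z1 and x1_hat above.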
NASA Astrophysics Data System (ADS)
Prathap Reddy, K.
2016-11-01
An ‘electrostatic bathtub potential’ is defined, and analytical expressions for the time period and amplitude of charged particles in this potential are obtained and compared with simulations. These kinds of potentials are encountered in linear electrostatic ion traps, where the potential along the axis looks like a bathtub. Ion traps are used in basic physics research and mass spectrometry to store ions; the stored ions execute oscillatory motion within the confined volume of the trap. Usually these traps are designed and studied using ion optical software, but in this work the bathtub potential is reproduced by making two simple modifications to the harmonic oscillator potential. The addition of a linear k_1|x| term makes the simple harmonic potential curve steeper, with a sharper turn at the origin, while the introduction of a finite-length zero-potential region at the centre reproduces the flat region of the bathtub curve. This whole exercise of modelling a practical experimental situation in terms of a well-known simple physics problem may generate interest among readers.
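Combining the two modifications described, the resulting piecewise potential can be written as (the symbols a, k_1, k below are our notation, not the article's):

```latex
V(x) =
\begin{cases}
0, & |x| \le a,\\[2pt]
k_{1}\,(|x| - a) + \tfrac{1}{2}\,k\,(|x| - a)^{2}, & |x| > a,
\end{cases}
```

so a particle of energy E crosses the flat central region of length 2a at constant speed v_0 = sqrt(2E/m) and turns around in the steepened harmonic walls, which is what produces the bathtub-shaped axial potential of a linear electrostatic ion trap.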
Learning Orthographic Structure With Sequential Generative Neural Networks.
Testolin, Alberto; Stoianov, Ivilin; Sperduti, Alessandro; Zorzi, Marco
2016-04-01
Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though appealing from a neurobiological standpoint, this approach is typically not pursued in connectionist modeling. Here, we investigated a sequential version of the restricted Boltzmann machine (RBM), a stochastic recurrent neural network that extracts high-order structure from sensory data through unsupervised generative learning and can encode contextual information in the form of internal, distributed representations. We assessed whether this type of network can extract the orthographic structure of English monosyllables by learning a generative model of the letter sequences forming a word training corpus. We show that the network learned an accurate probabilistic model of English graphotactics, which can be used to make predictions about the letter following a given context as well as to autonomously generate high-quality pseudowords. The model was compared to an extended version of simple recurrent networks, augmented with a stochastic process that allows autonomous generation of sequences, and to non-connectionist probabilistic models (n-grams and hidden Markov models). We conclude that sequential RBMs and stochastic simple recurrent networks are promising candidates for modeling cognition in the temporal domain. Copyright © 2015 Cognitive Science Society, Inc.
On the representability problem and the physical meaning of coarse-grained models
NASA Astrophysics Data System (ADS)
Wagner, Jacob W.; Dama, James F.; Durumeric, Aleksander E. P.; Voth, Gregory A.
2016-07-01
In coarse-grained (CG) models where certain fine-grained (FG, i.e., atomistic resolution) observables are not directly represented, one can nonetheless identify indirect CG observables that capture the FG observable's dependence on CG coordinates. Often, in these cases it appears that a CG observable can be defined by analogy to an all-atom or FG observable, but the similarity is misleading and significantly undermines the interpretation of both bottom-up and top-down CG models. Such problems emerge especially clearly in the framework of systematic bottom-up CG modeling, where a direct and transparent correspondence between FG and CG variables establishes precise conditions for consistency between CG observables and underlying FG models. Here we present and investigate these representability challenges and illustrate them via the bottom-up conceptual framework for several simple analytically tractable polymer models. The examples give special focus to the observables of configurational internal energy, entropy, and pressure, which have been at the root of controversy in the CG literature, and also discuss observables that would seem to be entirely missing in the CG representation but can nonetheless be correlated with CG behavior. Though we investigate these problems in the framework of systematic coarse-graining, the lessons apply to top-down CG modeling as well, with crucial implications for simulation at constant pressure and surface tension and for the interpretations of structural and thermodynamic correlations in comparison with experiment.
A Bayesian approach to truncated data sets: An application to Malmquist bias in Supernova Cosmology
NASA Astrophysics Data System (ADS)
March, Marisa Cristina
2018-01-01
A problem commonly encountered in statistical analysis of data is that of truncated data sets. A truncated data set is one in which a number of data points are completely missing from a sample; this is in contrast to a censored sample, in which partial information is missing from some data points. In astrophysics this problem is commonly seen in a magnitude-limited survey, where the survey is incomplete at fainter magnitudes, that is, certain faint objects are simply not observed. The effect of this `missing data' is manifested as Malmquist bias and can result in biased parameter inference if it is not accounted for. In Frequentist methodologies the Malmquist bias is often corrected for by analysing many simulations and computing the appropriate correction factors; one problem with this methodology is that the corrections are model dependent. In this poster we derive a Bayesian methodology for accounting for truncated data sets in problems of parameter inference and model selection. We first show the methodology for a simple Gaussian linear model and then go on to show the method for accounting for a truncated data set in the case of cosmological parameter inference with a magnitude-limited supernova Ia survey.
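The key step in the simple Gaussian case is to renormalize each point's likelihood by the selection probability. The sketch below demonstrates this on a toy truncated sample; the limit, noise level, and grid are illustrative assumptions, not the poster's analysis.

```python
import numpy as np
from scipy.stats import norm

# Toy truncated-Gaussian inference: data observed only below a known limit,
# so each point's likelihood is divided by Phi((lim - mu) / sigma).
rng = np.random.default_rng(1)
mu_true, sigma, lim = 0.0, 1.0, 0.5
x = rng.normal(mu_true, sigma, 5000)
x = x[x < lim]                       # magnitude-limited "survey"

mu_grid = np.linspace(-1, 1, 401)
# Naive log-likelihood (ignores truncation) vs truncation-corrected one.
logL_naive = np.array([norm.logpdf(x, m, sigma).sum() for m in mu_grid])
logL_trunc = np.array([(norm.logpdf(x, m, sigma)
                        - norm.logcdf((lim - m) / sigma)).sum() for m in mu_grid])

print("naive MLE:", mu_grid[logL_naive.argmax()])      # biased low (Malmquist-like)
print("corrected MLE:", mu_grid[logL_trunc.argmax()])  # close to mu_true
```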
ERIC Educational Resources Information Center
DeCesare, Tony
2012-01-01
With only some fear of oversimplification, the fundamental differences between Walter Lippmann and John Dewey that are of concern here can be introduced by giving attention to Lippmann's deceptively simple formulation of a central problem in democratic theory: "The environment is complex. Man's political capacity is simple. Can a bridge be built…
Computational models of the Posner simple and choice reaction time tasks
Feher da Silva, Carolina; Baldo, Marcus V. C.
2015-01-01
The landmark experiments by Posner in the late 1970s have shown that reaction time (RT) is faster when the stimulus appears in an expected location, as indicated by a cue; since then, the so-called Posner task has been considered a “gold standard” test of spatial attention. It is thus fundamental to understand the neural mechanisms involved in performing it. To this end, we have developed a Bayesian detection system and small integrate-and-fire neural networks, which modeled sensory and motor circuits, respectively, and optimized them to perform the Posner task under different cue type proportions and noise levels. In doing so, main findings of experimental research on RT were replicated: the relative frequency effect, suboptimal RTs and significant error rates due to noise and invalid cues, slower RT for choice RT tasks than for simple RT tasks, fastest RTs for valid cues and slowest RTs for invalid cues. Analysis of the optimized systems revealed that the employed mechanisms were consistent with related findings in neurophysiology. Our models predict that (1) the results of a Posner task may be affected by the relative frequency of valid and neutral trials, (2) in simple RT tasks, inputs from multiple locations are added together to compose a stronger signal, and (3) the cue affects motor circuits more strongly in choice RT tasks than in simple RT tasks. In discussing the computational demands of the Posner task, attention has often been described as a filter that protects the nervous system, whose capacity is limited, from information overload. Our models, however, reveal that the main problems that must be overcome to perform the Posner task effectively are distinguishing signal from external noise and selecting the appropriate response in the presence of internal noise. PMID:26190997
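A leaky integrate-and-fire unit is the basic ingredient of such motor-circuit models. The sketch below treats the cue as a simple input bias and reads out "reaction time" as the first threshold crossing; the parameters and the cue-as-bias assumption are illustrative, not the authors' optimized networks.

```python
import numpy as np

# Minimal leaky integrate-and-fire sketch (illustrative parameters).
def first_spike_time(signal, cue_bias=0.0, noise=0.3, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    tau, v_th, v = 0.02, 1.0, 0.0         # membrane time constant, threshold
    for step in range(5000):
        I = signal + cue_bias + noise * rng.standard_normal()
        v += dt * (-v + I) / tau          # leaky integration of the noisy input
        if v >= v_th:
            return step * dt              # "reaction time" = first spike
    return np.nan

# A valid cue (positive bias) shortens RT on average; an invalid cue
# (negative bias) lengthens it, as in the replicated Posner effects.
print(first_spike_time(1.2, cue_bias=+0.2), first_spike_time(1.2, cue_bias=-0.2))
```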
NASA Technical Reports Server (NTRS)
Ghil, M.
1980-01-01
A unified theoretical approach to both the four-dimensional assimilation of asynoptic data and the initialization problem is attempted. This approach relies on the derivation of certain relationships between geopotential tendencies and tendencies of the horizontal velocity field in primitive-equation models of atmospheric flow. The approach is worked out and analyzed in detail for some simple barotropic models. Certain independent results of numerical experiments for the time-continuous assimilation of real asynoptic meteorological data into a complex, baroclinic weather prediction model are discussed in the context of the present approach. Tentative inferences are drawn for practical assimilation procedures.
On the joint inversion of geophysical data for models of the coupled core-mantle system
NASA Technical Reports Server (NTRS)
Voorhies, Coerte V.
1991-01-01
Joint inversion of magnetic, earth rotation, geoid, and seismic data for a unified model of the coupled core-mantle system is proposed and shown to be possible. A sample objective function is offered and simplified by targeting results from independent inversions and summary travel time residuals instead of original observations. These data are parameterized in terms of a very simple, closed model of the topographically coupled core-mantle system. Minimization of the simplified objective function leads to a nonlinear inverse problem; an iterative method for solution is presented. Parameterization and method are emphasized; numerical results are not presented.
NASA Astrophysics Data System (ADS)
Putterman, E.; Raz, O.
2008-11-01
We present a simple two-dimensional model of a "cat"—a body with zero angular momentum that can rotate itself with no external forces. The model is used to explain the nature of a gauge theory and to illustrate the importance of noncommutative operators. We compare the free-space cat in Newtonian mechanics and the same problem in Aristotelian mechanics at low Reynolds numbers (with the velocity proportional to the force rather than to the acceleration). This example shows the analogy between (angular) momentum in Newtonian mechanics and (torque) force in Aristotelian mechanics. We discuss a topological invariant common to the model in free space and at low Reynolds number.
Deflation of the cosmological constant associated with inflation and dark energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geng, Chao-Qiang; Lee, Chung-Chi, E-mail: geng@phys.nthu.edu.tw, E-mail: chungchi@mx.nthu.edu.tw
2016-06-01
In order to solve the fine-tuning problem of the cosmological constant, we propose a simple model with the vacuum energy non-minimally coupled to the inflaton field. In this model, the vacuum energy decays to the inflaton during pre-inflation and inflation eras, so that the cosmological constant effectively deflates from the Planck mass scale to a much smaller one after inflation and plays the role of dark energy in the late-time of the universe. We show that our deflationary scenario is applicable to arbitrary slow-roll inflation models. We also take two specific inflation potentials to illustrate our results.
NASA Astrophysics Data System (ADS)
Salomone, Horacio D.; Olivieri, Néstor A.; Véliz, Maximiliano E.; Raviola, Lisandro A.
2018-05-01
In the context of fluid mechanics courses, it is customary to consider the problem of a sphere falling under the action of gravity inside a viscous fluid. Under suitable assumptions, this phenomenon can be modelled using Stokes’ law and is routinely reproduced in teaching laboratories to determine terminal velocities and fluid viscosities. In many cases, however, the measured physical quantities show important deviations with respect to the predictions deduced from the simple Stokes’ model, and the causes of these apparent ‘anomalies’ (for example, whether the flow is laminar or turbulent) are seldom discussed in the classroom. On the other hand, there are various variable-mass problems that students tackle during elementary mechanics courses and which are discussed in many textbooks. In this work, we combine both kinds of problems and analyse—both theoretically and experimentally—the evolution of a system composed of a sphere pulled by a chain of variable length inside a tube filled with water. We investigate the effects of different forces acting on the system such as weight, buoyancy, viscous friction and drag force. By means of a sequence of mathematical models of increasing complexity, we obtain a progressive fit that accounts for the experimental data. The contrast between the various models exposes the strengths and weaknesses of each one. The proposed experiment can be useful for integrating concepts of elementary mechanics and fluids, and is suitable as laboratory practice, stressing the importance of the experimental validation of theoretical models and showing the model-building processes in a didactic framework.
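The first rung of such a model hierarchy is easy to integrate numerically. The sketch below simulates a small sphere sinking under weight, buoyancy, and Stokes drag, and compares the result with the analytic Stokes terminal velocity; all parameter values are illustrative, and the variable-mass chain of the full problem is omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sphere sinking in water under weight, buoyancy, and Stokes drag
# (illustrative values: a 0.1 mm steel sphere, where Stokes flow is plausible).
rho_s, rho_f, R, mu, g = 7800.0, 1000.0, 1e-4, 1e-3, 9.81
V = 4.0 / 3.0 * np.pi * R**3
m = rho_s * V

def rhs(t, y):
    x, v = y
    F = (rho_s - rho_f) * V * g - 6 * np.pi * mu * R * v   # net downward force
    return [v, F / m]

sol = solve_ivp(rhs, (0.0, 0.2), [0.0, 0.0])
v_stokes = 2 * (rho_s - rho_f) * g * R**2 / (9 * mu)       # Stokes terminal velocity
print(sol.y[1, -1], v_stokes)                              # should nearly agree
```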
van Baarlen, Peter; van Belkum, Alex; Thomma, Bart P H J
2007-02-01
Relatively simple eukaryotic model organisms such as the genetic model weed plant Arabidopsis thaliana possess an innate immune system that shares important similarities with its mammalian counterpart. In fact, some human pathogens infect Arabidopsis and cause overt disease with human symptomology. In such cases, decisive elements of the plant's immune system are likely to be targeted by the same microbial factors that are necessary for causing disease in humans. These similarities can be exploited to identify elementary microbial pathogenicity factors and their corresponding targets in a green host. This circumvents important cost aspects that often frustrate studies in humans or animal models and, in addition, results in facile ethical clearance.
Scalar-tensor extension of the ΛCDM model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Algoner, W.C.; Velten, H.E.S.; Zimdahl, W., E-mail: w.algoner@cosmo-ufes.org, E-mail: velten@pq.cnpq.br, E-mail: winfried.zimdahl@pq.cnpq.br
2016-11-01
We construct a cosmological scalar-tensor-theory model in which the Brans-Dicke type scalar Φ enters the effective (Jordan-frame) Hubble rate as a simple modification of the Hubble rate of the ΛCDM model. This allows us to quantify differences between the background dynamics of scalar-tensor theories and general relativity (GR) in a transparent and observationally testable manner in terms of one single parameter. Problems of the mapping of the scalar-field degrees of freedom on an effective fluid description in a GR context are discussed. Data from supernovae, the differential age of old galaxies, and baryon acoustic oscillations are shown to strongly limit potential deviations from the standard model.
NASA Astrophysics Data System (ADS)
Holgate, J. T.; Coppins, M.
2018-04-01
Plasma-surface interactions are ubiquitous in the field of plasma science and technology. Much of the physics of these interactions can be captured with a simple model comprising a cold ion fluid and electrons which satisfy the Boltzmann relation. However, this model permits analytical solutions in a very limited number of cases. This paper presents a versatile and robust numerical implementation of the model for arbitrary surface geometries in Cartesian and axisymmetric cylindrical coordinates. Specific examples of surfaces with sinusoidal corrugations, trenches, and hemi-ellipsoidal protrusions verify this numerical implementation. The application of the code to problems involving plasma-liquid interactions, plasma etching, and electron emission from the surface is discussed.
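The one-dimensional planar version of this model (cold ion fluid plus Boltzmann electrons) can be integrated in a few lines, which conveys why the general-geometry case needs a dedicated code. The sketch below works in standard normalized units (potential in Tₑ/e, length in Debye lengths); the Mach number and seed perturbation are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# 1D planar sheath: cold ion fluid + Boltzmann electrons, normalized units.
M = 1.2   # ion Mach number at the sheath edge (>= 1, Bohm criterion)

def rhs(x, y):
    phi, E = y                            # potential (<= 0 into the sheath) and field
    n_i = M / np.sqrt(M**2 - 2 * phi)     # ion continuity + energy conservation
    n_e = np.exp(phi)                     # Boltzmann electrons
    return [-E, n_i - n_e]                # dphi/dx = -E; Poisson: dE/dx = n_i - n_e

# Seed with a small perturbation at the sheath edge and integrate wall-ward.
sol = solve_ivp(rhs, (0.0, 30.0), [-1e-3, 1e-3], max_step=0.1)
print("potential drop after 30 Debye lengths:", sol.y[0, -1])
```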
Heideman, Paul D.; Flores, K. Adryan; Sevier, Lu M.; Trouton, Kelsey E.
2017-01-01
Drawing by learners can be an effective way to develop memory and generate visual models for higher-order skills in biology, but students are often reluctant to adopt drawing as a study method. We designed a nonclassroom intervention that instructed introductory biology college students in a drawing method, minute sketches in folded lists (MSFL), and allowed them to self-assess their recall and problem solving, first in a simple recall task involving non-European alphabets and later using unfamiliar biology content. In two preliminary ex situ experiments, students had greater recall on the simple learning task, non-European alphabets with associated phonetic sounds, using MSFL in comparison with a preferred method, visual review (VR). In the intervention, students studying using MSFL and VR had ∼50–80% greater recall of content studied with MSFL and, in a subset of trials, better performance on problem-solving tasks on biology content. Eight months after beginning the intervention, participants had shifted self-reported use of drawing from 2% to 20% of study time. For a small subset of participants, MSFL had become a preferred study method, and 70% of participants reported continued use of MSFL. This brief, low-cost intervention resulted in enduring changes in study behavior. PMID:28495932
Arithmetic on Your Phone: A Large Scale Investigation of Simple Additions and Multiplications.
Zimmerman, Federico; Shalom, Diego; Gonzalez, Pablo A; Garrido, Juan Manuel; Alvarez Heduan, Facundo; Dehaene, Stanislas; Sigman, Mariano; Rieznik, Andres
2016-01-01
We present the results of a gamified mobile device arithmetic application which allowed us to collect a vast amount of data on simple arithmetic operations. Our results confirm and replicate, on a large sample, six of the main principles derived in a long tradition of investigation: size effect, tie effect, size-tie interaction effect, five-effect, RTs and error rates correlation effect, and most common error effect. Our dataset allowed us to perform a robust analysis of order effects for each individual problem, for which there is controversy both in experimental findings and in the predictions of theoretical models. For addition problems, the order effect was dominated by a max-then-min structure (e.g., 7+4 is easier than 4+7). This result is predicted by models in which additions are performed as a translation starting from the first addend, with a distance given by the second addend. In multiplication, we observed a dominance of two effects: (1) a max-then-min pattern that can be accounted for by the fact that it is easier to perform fewer additions of the largest number (e.g., 8x3 is easier to compute as 8+8+8 than as 3+3+…+3) and (2) a phonological effect by which problems for which there is a rhyme (e.g., "seis por cuatro es veinticuatro") are performed faster. Above and beyond these results, our study bears an important practical conclusion, as proof of concept: participants can be motivated to perform substantial arithmetic training simply by presenting it in a gamified format.
A simple nonlinear model for the return to isotropy in turbulence
NASA Technical Reports Server (NTRS)
Sarkar, Sutanu; Speziale, Charles G.
1990-01-01
A quadratic nonlinear generalization of the linear Rotta model for the slow pressure-strain correlation of turbulence is developed. The model is shown to satisfy realizability and to give rise to no stable nontrivial equilibrium solutions for the anisotropy tensor in the case of vanishing mean velocity gradients. The absence of stable nontrivial equilibrium solutions is a necessary condition to ensure that the model predicts a return to isotropy for all relaxational turbulent flows. Both the phase space dynamics and the temporal behavior of the model are examined and compared against experimental data for the return to isotropy problem. It is demonstrated that the quadratic model successfully captures the experimental trends which clearly exhibit nonlinear behavior. Direct comparisons are also made with the predictions of the Rotta model and the Lumley model.
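The structure of such a model is a coupled ODE system for the anisotropy tensor. The sketch below integrates a Rotta-like linear relaxation plus a quadratic correction of the type described; the coefficients are illustrative stand-ins, not the paper's calibrated values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Return-to-isotropy sketch for the anisotropy tensor b (no mean gradients),
# in units of eddy-turnover time. C1, C2 are illustrative coefficients.
C1, C2 = 3.4, 1.8

def rhs(tau, b_flat):
    b = b_flat.reshape(3, 3)
    bb = b @ b
    II = np.trace(bb)                                # second invariant (trace of b^2)
    # Linear (Rotta-like) relaxation plus a trace-free quadratic correction.
    db = -(C1 / 2 - 1) * b + (C2 / 2) * (bb - II * np.eye(3) / 3)
    return db.ravel()

b0 = np.diag([0.3, -0.1, -0.2])             # trace-free initial anisotropy
sol = solve_ivp(rhs, (0.0, 5.0), b0.ravel())
print(sol.y[:, -1].reshape(3, 3).round(4))  # relaxes toward zero (isotropy)
```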
Application of simple adaptive control to water hydraulic servo cylinder system
NASA Astrophysics Data System (ADS)
Ito, Kazuhisa; Yamada, Tsuyoshi; Ikeo, Shigeru; Takahashi, Koji
2012-09-01
Although conventional model reference adaptive control (MRAC) achieves good tracking performance for cylinder control, the controller structure is much more complicated and has less robustness to disturbance in real applications. This paper discusses the use of simple adaptive control (SAC) for positioning a water hydraulic servo cylinder system. Compared with MRAC, SAC has a simpler and lower order structure, i.e., higher feasibility. The control performance of SAC is examined and evaluated on a water hydraulic servo cylinder system. With the recent increased concerns over global environmental problems, the water hydraulic technique using pure tap water as a pressure medium has become a new drive source comparable to electric, oil hydraulic, and pneumatic drive systems. This technique is also preferred because of its high power density, high safety against fire hazards in production plants, and easy availability. However, the main problems for precise control in a water hydraulic system are steady state errors and overshoot due to its large friction torque and considerable leakage flow. MRAC has been already applied to compensate for these effects, and better control performances have been obtained. However, there have been no reports on the application of SAC for water hydraulics. To make clear the merits of SAC, the tracking control performance and robustness are discussed based on experimental results. SAC is confirmed to give better tracking performance compared with PI control, and a control precision comparable to MRAC (within 10 μm of the reference position) and higher robustness to parameter change, despite the simple controller. The research results ensure a wider application of simple adaptive control in real mechanical systems.
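The appeal of SAC is its low-order structure: a handful of adaptive output-feedback gains rather than a full plant parameterization. The sketch below shows that structure on a first-order plant standing in for the servo cylinder; the plant, reference model, and adaptation rates are all illustrative assumptions, not the paper's identified system.

```python
import numpy as np

# Minimal simple-adaptive-control sketch (illustrative first-order plant).
dt, T = 1e-3, 5.0
a_p, b_p = 2.0, 1.5          # plant:  y'  = -a_p*y  + b_p*u
a_m, b_m = 5.0, 5.0          # model:  ym' = -a_m*ym + b_m*r
gam = np.array([50.0, 10.0, 10.0])   # adaptation rates

y = ym = 0.0
K = np.zeros(3)              # adaptive gains on [tracking error, ym, r]
for k in range(int(T / dt)):
    r = 1.0 if (k * dt) % 2 < 1 else 0.0     # square-wave position reference
    e = ym - y                               # output tracking error
    z = np.array([e, ym, r])
    K += dt * gam * e * z                    # simple integral adaptation law
    u = K @ z                                # low-order SAC control signal
    y += dt * (-a_p * y + b_p * u)           # Euler step of plant
    ym += dt * (-a_m * ym + b_m * r)         # Euler step of reference model
print(f"final tracking error: {ym - y:.4f}")
```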
Language functions in preterm-born children: a systematic review and meta-analysis.
van Noort-van der Spek, Inge L; Franken, Marie-Christine J P; Weisglas-Kuperus, Nynke
2012-04-01
Preterm-born children (<37 weeks' gestation) have higher rates of language function problems compared with term-born children. It is unknown whether these problems decrease, deteriorate, or remain stable over time. The goal of this research was to determine the developmental course of language functions in preterm-born children from 3 to 12 years of age. Computerized databases Embase, PubMed, Web of Knowledge, and PsycInfo were searched for studies published between January 1995 and March 2011 reporting language functions in preterm-born children. Outcome measures were simple language function assessed by using the Peabody Picture Vocabulary Test and complex language function assessed by using the Clinical Evaluation of Language Fundamentals. Pooled effect sizes (in terms of Cohen's d) and 95% confidence intervals (CI) for simple and complex language functions were calculated by using random-effects models. Meta-regression was conducted with mean difference of effect size as the outcome variable and assessment age as the explanatory variable. Preterm-born children scored significantly lower compared with term-born children on simple (d = -0.45 [95% CI: -0.59 to -0.30]; P < .001) and on complex (d = -0.62 [95% CI: -0.82 to -0.43]; P < .001) language function tests, even in the absence of major disabilities and independent of socioeconomic status. For complex language function (but not for simple language function), group differences between preterm- and term-born children increased significantly from 3 to 12 years of age (slope = -0.05; P = .03). While growing up, preterm-born children have increasing difficulties with complex language function.
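For readers unfamiliar with random-effects pooling, the sketch below shows one standard estimator (DerSimonian-Laird) on invented per-study effect sizes; it illustrates the kind of calculation behind the pooled d values above, not the authors' actual data or software.

```python
import numpy as np

# DerSimonian-Laird random-effects pooling (illustrative study-level inputs).
d = np.array([-0.30, -0.55, -0.45, -0.70, -0.40])   # assumed study effect sizes
v = np.array([0.02, 0.03, 0.015, 0.04, 0.025])      # assumed sampling variances

w = 1 / v                                   # fixed-effect weights
d_fe = np.sum(w * d) / np.sum(w)
Q = np.sum(w * (d - d_fe) ** 2)             # heterogeneity statistic
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (len(d) - 1)) / c)     # between-study variance estimate

w_re = 1 / (v + tau2)                       # random-effects weights
d_re = np.sum(w_re * d) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
print(f"pooled d = {d_re:.2f} (95% CI {d_re - 1.96*se:.2f} to {d_re + 1.96*se:.2f})")
```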
A survey of methods of feasible directions for the solution of optimal control problems
NASA Technical Reports Server (NTRS)
Polak, E.
1972-01-01
Three methods of feasible directions for optimal control are reviewed. These methods are an extension of the Frank-Wolfe method, a dual method devised by Pironneau and Polak, and a Zoutendijk method. The categories of continuous optimal control problems considered are: (1) fixed-time problems with fixed initial state, free terminal state, and simple constraints on the control; (2) fixed-time problems with inequality constraints on both the initial and the terminal state and no control constraints; (3) free-time problems with inequality constraints on the initial and terminal states and simple constraints on the control; and (4) fixed-time problems with inequality state-space constraints and constraints on the control. The nonlinear programming algorithms are derived for each of the methods in its associated category.
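The basic Frank-Wolfe iteration that the first method extends is short enough to show directly. The sketch below applies it to a box-constrained quadratic, a finite-dimensional stand-in for "simple constraints on the control"; the problem data are illustrative.

```python
import numpy as np

# Frank-Wolfe on a box-constrained quadratic (illustrative problem data).
rng = np.random.default_rng(0)
Q = rng.standard_normal((5, 5)); Q = Q.T @ Q + np.eye(5)   # SPD Hessian
c = rng.standard_normal(5)
lo, hi = -np.ones(5), np.ones(5)

f_grad = lambda u: Q @ u + c
u = np.zeros(5)
for k in range(200):
    g = f_grad(u)
    # Linear subproblem over the box: minimize g @ s -> pick the cheapest vertex.
    s = np.where(g > 0, lo, hi)
    step = 2.0 / (k + 2)                  # classic diminishing step size
    u = u + step * (s - u)                # move toward the feasible direction
print(u.round(3))
```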
Hetherington, James P J; Warner, Anne; Seymour, Robert M
2006-04-22
Systems Biology requires that biological modelling is scaled up from small components to system level. This can produce exceedingly complex models, which obscure understanding rather than facilitate it. The successful use of highly simplified models would resolve many of the current problems faced in Systems Biology. This paper questions whether the conclusions of simple mathematical models of biological systems are trustworthy. The simplification of a specific model of calcium oscillations in hepatocytes is examined in detail, and the conclusions drawn from this scrutiny generalized. We formalize our choice of simplification approach through the use of functional 'building blocks'. A collection of models is constructed, each a progressively more simplified version of a well-understood model. The limiting model is a piecewise linear model that can be solved analytically. We find that, as expected, in many cases the simpler models produce incorrect results. However, when we make a sensitivity analysis, examining which aspects of the behaviour of the system are controlled by which parameters, the conclusions of the simple model often agree with those of the richer model. The hypothesis that the simplified model retains no information about the real sensitivities of the unsimplified model can be very strongly ruled out by treating the simplification process as a pseudo-random perturbation on the true sensitivity data. We conclude that sensitivity analysis is, therefore, of great importance to the analysis of simple mathematical models in biology. Our comparisons reveal which results of the sensitivity analysis regarding calcium oscillations in hepatocytes are robust to the simplifications necessarily involved in mathematical modelling. For example, we find that if a treatment is observed to strongly decrease the period of the oscillations while increasing the proportion of the cycle during which cellular calcium concentrations are rising, without affecting the inter-spike or maximum calcium concentrations, then it is likely that the treatment is acting on the plasma membrane calcium pump.
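The sensitivity comparison at the heart of this argument is mechanically simple: compute normalized sensitivities of an output to each parameter in both the rich and the simplified model and see where they agree. The sketch below does this by finite differences on two stand-in functions; these are not the calcium-oscillation models themselves.

```python
import numpy as np

# Normalized sensitivities S_p = d(log Q)/d(log p) by finite differences.
def sensitivities(model, params, h=1e-4):
    q0 = model(params)
    out = {}
    for name, p in params.items():
        bumped = dict(params, **{name: p * (1 + h)})   # relative parameter bump
        out[name] = (model(bumped) - q0) / (q0 * h)
    return out

rich   = lambda p: p["a"] ** 2 / (p["b"] + 0.5 * p["c"])
simple = lambda p: p["a"] ** 2 / p["b"]               # drops the c-pathway
params = {"a": 1.0, "b": 2.0, "c": 0.3}
print(sensitivities(rich, params))
print(sensitivities(simple, params))   # a, b sensitivities agree; c is lost
```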
Bayesian Estimation and Inference Using Stochastic Electronics
Thakur, Chetan Singh; Afshar, Saeed; Wang, Runchun M.; Hamilton, Tara J.; Tapson, Jonathan; van Schaik, André
2016-01-01
In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using a single set of building blocks with the ability to perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement, and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the external distractor (noise) probability interfering with the observations. In the second implementation, referred to as the Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes on the order of tens of nanometers due to their low noise margin, the effect of high-energy cosmic rays and the low supply voltage. In our framework, the flipping of random individual bits would not affect the system performance because information is encoded in a bit stream. PMID:27047326
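A software analogue of the Bayesian recursion that BEAST solves in hardware is a discrete Bayes filter over an HMM. The sketch below tracks a 1D target with a random-walk transition model and noisy position sensors; the grid size and noise levels are illustrative assumptions.

```python
import numpy as np

# Discrete Bayes filter over a 1D HMM (illustrative grid and noise levels).
N = 20                                     # discrete positions
T = np.zeros((N, N))                       # transition model: lazy random walk
for i in range(N):
    for j in (i - 1, i, i + 1):
        if 0 <= j < N:
            T[i, j] = 1.0
T /= T.sum(axis=1, keepdims=True)

def obs_lik(z):                            # sensor: Gaussian-ish around reading z
    x = np.arange(N)
    return np.exp(-0.5 * ((x - z) / 2.0) ** 2)

rng = np.random.default_rng(3)
belief, target = np.full(N, 1.0 / N), 10
for t in range(25):
    target = int(np.clip(target + rng.integers(-1, 2), 0, N - 1))
    z = int(np.clip(target + rng.normal(0, 2), 0, N - 1))
    belief = T.T @ belief                  # predict step
    belief *= obs_lik(z)                   # update with the noisy observation
    belief /= belief.sum()
print("true:", target, "MAP estimate:", int(belief.argmax()))
```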
Uniting Mandelbrot’s Noah and Joseph Effects in Toy Models of Natural Hazard Time Series
NASA Astrophysics Data System (ADS)
Credgington, D.; Watkins, N. W.; Chapman, S. C.; Rosenberg, S. J.; Sanchez, R.
2009-12-01
The forecasting of extreme events is a highly topical, cross-disciplinary problem. One aspect which is potentially tractable even when the events themselves are stochastic is the probability of a “burst” of a given size and duration, defined as the area between a time series and a constant threshold. Many natural time series depart from the simplest, Brownian, case, and in the 1960s Mandelbrot developed the use of fractals to describe these departures. In particular he proposed two kinds of fractal model to capture the way in which natural data is often persistent in time (his “Joseph effect”, common in hydrology and exemplified by fractional Brownian motion) and/or prone to heavy-tailed jumps (the “Noah effect”, typical of economic index time series, for which he gave Levy flights as an exemplar). Much of the earlier modelling, however, has emphasised one of the Noah and Joseph parameters (the tail exponent mu, and one derived from the temporal behaviour such as the power spectral exponent beta) at the other's expense. I will describe work [1] in which we applied a simple self-affine stable model, linear fractional stable motion (LFSM), which unifies both effects to better describe natural data, in this case from space physics. I will show how we have resolved some contradictions seen in earlier work, where purely Joseph or Noah descriptions had been sought. I will also show recent work [2] using numerical simulations of LFSM and simple analytic scaling arguments to study the problem of the area between a fractional Levy model time series and a threshold. [1] Watkins et al., Space Science Reviews [2005]. [2] Watkins et al., Physical Review E [2009].
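The burst-size measurement itself is straightforward to sketch. The toy below uses an ordinary Levy flight, so it contains only the "Noah" ingredient; full LFSM would add the "Joseph" memory, which this sketch omits. The stability index and threshold are illustrative.

```python
import numpy as np
from scipy.stats import levy_stable

# Burst sizes (area above a threshold) for a heavy-tailed Levy flight.
rng = np.random.default_rng(7)
steps = levy_stable.rvs(alpha=1.5, beta=0.0, size=10000, random_state=rng)
y = np.cumsum(steps)                # Levy flight: no temporal memory
thresh = np.quantile(y, 0.9)

bursts, area = [], 0.0
for v in y:
    if v > thresh:
        area += v - thresh          # accumulate area while above threshold
    elif area > 0:
        bursts.append(area)         # burst ended: record its size
        area = 0.0
print(len(bursts), max(bursts))
```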
Unique Results and Lessons Learned from the TSS Missions
NASA Technical Reports Server (NTRS)
Stone, Nobie H.
2016-01-01
In 1924, Irving Langmuir and H. M. Mott-Smith published a theoretical model for the complex plasma sheath phenomenon in which they identified some very special cases which greatly simplified the sheath and allowed a closed solution to the problem. The most widely used application is for an electrostatic, or "Langmuir," probe in laboratory plasma. Although the Langmuir probe is physically simple (a biased wire), the theory describing its functional behavior and its current-voltage characteristic is extremely complex and, accordingly, a number of assumptions and approximations are used in the LMS model. These simplifications, correspondingly, place limits on the model's range of application. Adapting the LMS model to real-life conditions is the subject of numerous papers and dissertations. The Orbit-Motion-Limited (OML) model that is widely used today is one of these adaptations and a convenient means of calculating sheath effects. The OML equation for electron current collection by a positively biased body is simply I ≈ A · j_e0 · (2/√π) · φ^(1/2), where A is the area of the body and φ is the electric potential on the body with respect to the plasma (normalized by the electron temperature). Since the Langmuir probe is a simple biased wire immersed in plasma, it is particularly tempting to use the OML equation in calculating the characteristics of the long, highly biased wires of an Electric Sail in the solar wind plasma. However, in order to arrive at the OML equation, a number of additional simplifying assumptions and approximations (beyond those made by Langmuir and Mott-Smith) are necessary. The OML equation is a good approximation when all conditions are met, but it would appear that the Electric Sail problem lies outside of the limits of applicability.
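For a sense of scale, the OML estimate quoted above can be evaluated numerically for a tether-like wire; all plasma and geometry values below are illustrative assumptions, not measurements from the TSS missions.

```python
import numpy as np

# Numerical reading of the OML estimate I = A * j_e0 * (2/sqrt(pi)) * sqrt(phi)
# for an illustrative thin wire in a solar-wind-like plasma.
e, me, kB = 1.602e-19, 9.109e-31, 1.381e-23
n, Te = 5e6, 1.2e5             # electron density [m^-3], temperature [K] (assumed)
r, L, V = 1e-5, 100.0, 1000.0  # wire radius, length [m], bias [V] (assumed)

j_e0 = e * n * np.sqrt(kB * Te / (2 * np.pi * me))  # random electron flux
A = 2 * np.pi * r * L                               # wire surface area
phi = e * V / (kB * Te)                             # normalized potential (>> 1 here)
I = A * j_e0 * (2 / np.sqrt(np.pi)) * np.sqrt(phi)
print(f"I = {I:.2e} A")
```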
The propagation of sound in narrow street canyons
NASA Astrophysics Data System (ADS)
Iu, K. K.; Li, K. M.
2002-08-01
This paper addresses an important problem of predicting sound propagation in narrow street canyons with width less than 10 m, which are commonly found in a built-up urban district. Major noise sources are, for example, air conditioners installed on building facades and powered mechanical equipment for repair and construction work. Interference effects due to multiple reflections from building facades and ground surfaces are important contributions in these complex environments. Although the studies of sound transmission in urban areas can be traced back to as early as the 1960s, the resulting mathematical and numerical models are still unable to predict sound fields accurately in city streets. This is understandable because sound propagation in city streets involves many intriguing phenomena such as reflections and scattering at the building facades, diffusion effects due to recessions and protrusions of building surfaces, geometric spreading, and atmospheric absorption. This paper describes the development of a numerical model for the prediction of sound fields in city streets. To simplify the problem, a typical city street is represented by two parallel reflecting walls and a flat impedance ground. The numerical model is based on a simple ray theory that takes account of multiple reflections from the building facades. The sound fields due to the point source and its images are summed coherently such that mutual interference effects between contributing rays can be included in the analysis. Indoor experiments are conducted in an anechoic chamber. Experimental data are compared with theoretical predictions to establish the validity and usefulness of this simple model. Outdoor experimental measurements have also been conducted to further validate the model. copyright 2002 Acoustical Society of America.
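The core of such a ray model is a coherent sum over image sources generated by the two facades. The sketch below evaluates that sum for a 2D cross-section with a single reflection coefficient per bounce; the geometry, frequency, and reflection coefficient are illustrative, and the ground reflection and atmospheric absorption of the full model are omitted.

```python
import numpy as np

# Coherent image-source sum between two parallel reflecting walls.
f, c0, R, w = 500.0, 343.0, 0.9, 6.0       # Hz, m/s, facade reflection, street width
k = 2 * np.pi * f / c0
xs, ys = 2.0, 0.0                          # source: across-street position, origin
xr, yr = 4.0, 30.0                         # receiver: down the street

p = 0j
for n in range(-30, 31):
    # Images across walls at x = 0 and x = w, with their facade-bounce counts.
    for x_img, m in ((2*n*w + xs, abs(2*n)), (2*n*w - xs, abs(2*n - 1))):
        r = np.hypot(x_img - xr, ys - yr)
        p += (R ** m) * np.exp(1j * k * r) / r   # each ray: m bounces, phase k*r
level = 20 * np.log10(abs(p) * np.hypot(xs - xr, ys - yr))
print("level re free field [dB]:", round(level, 2))
```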
Martin, Julien; Chamaille-Jammes, Simon; Nichols, James D.; Fritz, Herve; Hines, James E.; Fonnesbeck, Christopher J.; MacKenzie, Darryl I.; Bailey, Larissa L.
2010-01-01
The recent development of statistical models such as dynamic site occupancy models provides the opportunity to address fairly complex management and conservation problems with relatively simple models. However, surprisingly few empirical studies have simultaneously modeled habitat suitability and occupancy status of organisms over large landscapes for management purposes. Joint modeling of these components is particularly important in the context of management of wild populations, as it provides a more coherent framework to investigate the population dynamics of organisms in space and time for the application of management decision tools. We applied such an approach to the study of water hole use by African elephants in Hwange National Park, Zimbabwe. Here we show how such methodology may be implemented and derive estimates of annual transition probabilities among three dry-season states for water holes: (1) unsuitable state (dry water holes with no elephants); (2) suitable state (water hole with water) with low abundance of elephants; and (3) suitable state with high abundance of elephants. We found that annual rainfall and the number of neighboring water holes influenced the transition probabilities among these three states. Because of an increase in elephant densities in the park during the study period, we also found that transition probabilities from low abundance to high abundance states increased over time. The application of the joint habitat–occupancy models provides a coherent framework to examine how habitat suitability and factors that affect habitat suitability influence the distribution and abundance of organisms. We discuss how these simple models can further be used to apply structured decision-making tools in order to derive decisions that are optimal relative to specified management objectives. The modeling framework presented in this paper should be applicable to a wide range of existing data sets and should help to address important ecological, conservation, and management problems that deal with occupancy, relative abundance, and habitat suitability.
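The bookkeeping behind such transition estimates is simple to sketch: tabulate year-to-year moves among the three dry-season states and normalize. The toy histories below are invented for illustration, and the sketch ignores imperfect detection, which the dynamic occupancy models above handle explicitly.

```python
import numpy as np

# States per water hole: 0 = dry, 1 = water + few elephants, 2 = water + many.
# Rows are water holes, columns are successive dry seasons (invented data).
histories = np.array([[1, 2, 2, 0, 1],
                      [0, 1, 1, 2, 2],
                      [1, 1, 0, 0, 1],
                      [2, 2, 1, 2, 2]])

counts = np.zeros((3, 3))
for row in histories:
    for a, b in zip(row[:-1], row[1:]):
        counts[a, b] += 1                   # tally each annual transition
P = counts / counts.sum(axis=1, keepdims=True)
print(P.round(2))   # row i: estimated probability of moving from state i
```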
Light Higgsino and gluino in R-invariant direct Gauge mediation
NASA Astrophysics Data System (ADS)
Nagai, Ryo; Yokozaki, Norimi
2018-03-01
We provide a simple solution to the μ-Bμ problem in the "R-invariant direct gauge mediation model". With this solution, the Higgsino and gluino are predicted to be light, of O(100) GeV and O(1) TeV, respectively. Such a gluino and Higgsino are accessible at the LHC and future collider experiments. Moreover, dangerous dimension-five operators inducing rapid proton decay are naturally suppressed by the R-symmetry.
1981-03-01
is required. This may be fabricated as an optical rotator, a further lenticular waveplate or a simple glass lens. Since one now has a pair of lenses it...may not be a problem as the mathematical model does not take into account the astigmatic behaviour of both rod and waveplate. Since the possibility
J.H. Gove; D.Y. Hollinger; D.Y. Hollinger
2006-01-01
A dual unscented Kalman filter (UKF) was used to assimilate net CO2 exchange (NEE) data measured over a spruce-hemlock forest at the Howland AmeriFlux site in Maine, USA, into a simple physiological model for the purpose of filling gaps in an eddy flux time series. In addition to filling gaps in the measurement record, the UKF approach provides continuous estimates of...