Simple systems that exhibit self-directed replication
NASA Technical Reports Server (NTRS)
Reggia, James A.; Armentrout, Steven L.; Chou, Hui-Hsien; Peng, Yun
1993-01-01
Biological experience and intuition suggest that self-replication is an inherently complex phenomenon, and early cellular automata models supported that conception. More recently, simpler computational models of self-directed replication called sheathed loops have been developed. It is shown here that 'unsheathing' these structures and altering certain assumptions about the symmetry of their components leads to a family of nontrivial self-replicating structures, some substantially smaller and simpler than those previously reported. The dependence of replication time and transition function complexity on initial structure size, cell state symmetry, and neighborhood is examined. These results support the view that self-replication is not an inherently complex phenomenon but rather an emergent property arising from local interactions in systems that can be much simpler than is generally believed.
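The flavor of this result can be demonstrated with an even simpler linear rule (this is Fredkin's classic parity construction, not the unsheathed loops of the paper): a one-dimensional automaton in which each cell becomes the XOR of its two neighbours replicates any finite seed pattern after 2^n steps.

```python
# Fredkin's parity rule: each cell becomes the XOR of its two neighbours.
# Because the rule is linear over GF(2), after 2^n steps the tape holds two
# exact copies of any seed pattern: a classic illustration that
# self-replication can emerge from a very simple local transition function.

def step(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

def run(seed, steps, width=32):
    cells = [0] * width
    off = width // 2
    for j, v in enumerate(seed):
        cells[off + j] = v
    for _ in range(steps):
        cells = step(cells)
    return cells

seed = [1, 1, 0, 1]
after = run(seed, 4)          # 4 = 2^2 steps
copies = [i for i in range(len(after) - len(seed) + 1)
          if after[i:i + len(seed)] == seed and any(seed)]
print(copies)   # two disjoint copies of the seed appear
```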
Mechatronics by Analogy and Application to Legged Locomotion
NASA Astrophysics Data System (ADS)
Ragusila, Victor
A new design methodology for mechatronic systems, dubbed Mechatronics by Analogy (MbA), is introduced and applied to designing a leg mechanism. The new methodology argues that by establishing a similarity relation between a complex system and a number of simpler models, it is possible to design the former using the analysis and synthesis means developed for the latter. The methodology provides a framework for concurrent engineering of complex systems while maintaining the transparency of the system behaviour through making formal analogies between the system and those with more tractable dynamics. The application of the MbA methodology to the design of a monopod robot leg, called the Linkage Leg, is also studied. A series of simulations show that the dynamic behaviour of the Linkage Leg is similar to that of a combination of a double pendulum and a spring-loaded inverted pendulum, based on which the system kinematic, dynamic, and control parameters can be designed concurrently. The first stage of Mechatronics by Analogy is a method of extracting significant features of system dynamics through simpler models. The goal is to determine a set of simpler mechanisms with similar dynamic behaviour to that of the original system in various phases of its motion. A modular bond-graph representation of the system is determined, and subsequently simplified using two simplification algorithms. The first algorithm determines the relevant dynamic elements of the system for each phase of motion, and the second algorithm finds the simple mechanism described by the remaining dynamic elements. In addition to greatly simplifying the controller for the system, using simpler mechanisms with similar behaviour provides a greater insight into the dynamics of the system. This is seen in the second stage of the new methodology, which concurrently optimizes the simpler mechanisms together with a control system based on their dynamics.
Once the optimal configuration of the simpler system is determined, the original mechanism is optimized such that its dynamic behaviour is analogous. It is shown that, if this analogy is achieved, the control system designed for the simpler mechanisms can be directly implemented on the more complex system, and their dynamic behaviours are close enough for the system performance to be effectively the same. Finally, it is shown that, for the employed objective of fast legged locomotion, the proposed methodology achieves a better design than Reduction-by-Feedback, a competing methodology that uses control layers to simplify the dynamics of the system.
Heuristics for the Hodgkin-Huxley system.
Hoppensteadt, Frank
2013-09-01
Hodgkin and Huxley (HH) discovered that voltages control ionic currents in nerve membranes. This led them to describe electrical activity in a neuronal membrane patch in terms of an electronic circuit whose characteristics were determined using empirical data. Due to the complexity of this model, a variety of heuristics, including relaxation oscillator circuits and integrate-and-fire models, have been used to investigate activity in neurons, and these simpler models have been successful in suggesting experiments and explaining observations. Connections between most of the simpler models had not been made clear until recently. Shown here are connections between these heuristics and the full HH model. In particular, we study a new model (Type III circuit): it includes the van der Pol-based models; it can be approximated by a simple integrate-and-fire model; and it creates voltages and currents that correspond, respectively, to the h and V components of the HH system. Copyright © 2012 Elsevier Inc. All rights reserved.
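The integrate-and-fire heuristic mentioned above can be sketched in a few lines; the parameter values below are illustrative, not fitted to the HH system:

```python
# Minimal leaky integrate-and-fire neuron, a standard heuristic for the HH
# system: the membrane voltage integrates the input current and is reset to
# rest whenever it crosses a firing threshold. All parameters illustrative.

def lif(I=1.5, tau=10.0, v_rest=0.0, v_thresh=1.0, dt=0.1, t_max=200.0):
    v, spikes = v_rest, []
    t = 0.0
    while t < t_max:
        v += dt / tau * (-(v - v_rest) + I)    # leaky integration
        if v >= v_thresh:
            spikes.append(t)                   # record spike time
            v = v_rest                         # reset after firing
        t += dt
    return spikes

spikes = lif()
print(len(spikes), "spikes; firing is regular for constant suprathreshold input")
```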
Monotone Boolean approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hulme, B.L.
1982-12-01
This report presents a theory of approximation of arbitrary Boolean functions by simpler, monotone functions. Monotone increasing functions can be expressed without the use of complements. Nonconstant monotone increasing functions are important in their own right since they model a special class of systems known as coherent systems. It is shown here that when Boolean expressions for noncoherent systems become too large to treat exactly, then monotone approximations are easily defined. The algorithms proposed here not only provide simpler formulas but also produce best possible upper and lower monotone bounds for any Boolean function. This theory has practical application for the analysis of noncoherent fault trees and event tree sequences.
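The bounds described here have a simple closed form that is easy to check by brute force for small n: the greatest monotone lower bound is g(x) = min{f(y) : y >= x} and the least monotone upper bound is h(x) = max{f(y) : y <= x}, with the order taken componentwise. A minimal sketch (the XOR example is ours, not from the report):

```python
from itertools import product

# Best-possible monotone bounds for an arbitrary Boolean function f on n
# variables, by brute force over the 2^n lattice (fine for small n).

def leq(a, b):
    return all(ai <= bi for ai, bi in zip(a, b))

def monotone_bounds(f, n):
    pts = list(product((0, 1), repeat=n))
    lower = {x: min(f(y) for y in pts if leq(x, y)) for x in pts}  # greatest monotone <= f
    upper = {x: max(f(y) for y in pts if leq(y, x)) for x in pts}  # least monotone >= f
    return lower, upper

# Example: a noncoherent function, f = x1 XOR x2 (its formula needs complements).
f = lambda x: x[0] ^ x[1]
lower, upper = monotone_bounds(f, 2)
for x in product((0, 1), repeat=2):
    assert lower[x] <= f(x) <= upper[x]    # the bounds sandwich f
print({x: (lower[x], upper[x]) for x in sorted(lower)})
```

For XOR the greatest monotone lower bound collapses to the constant 0 and the least monotone upper bound is x1 OR x2.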
Assessing alternative measures of wealth in health research.
Cubbin, Catherine; Pollack, Craig; Flaherty, Brian; Hayward, Mark; Sania, Ayesha; Vallone, Donna; Braveman, Paula
2011-05-01
We assessed whether it would be feasible to replace the standard measure of net worth with simpler measures of wealth in population-based studies examining associations between wealth and health. We used data from the 2004 Survey of Consumer Finances (respondents aged 25-64 years) and the 2004 Health and Retirement Survey (respondents aged 50 years or older) to construct logistic regression models relating wealth to health status and smoking. For our wealth measure, we used the standard measure of net worth as well as 9 simpler measures of wealth, and we compared results among the 10 models. In both data sets and for both health indicators, models using simpler wealth measures generated conclusions about the association between wealth and health that were similar to the conclusions generated by models using net worth. The magnitude and significance of the odds ratios were similar for the covariates in multivariate models, and the model-fit statistics for models using these simpler measures were similar to those for models using net worth. Our findings suggest that simpler measures of wealth may be acceptable in population-based studies of health.
Stupid Tutoring Systems, Intelligent Humans
ERIC Educational Resources Information Center
Baker, Ryan S.
2016-01-01
The initial vision for intelligent tutoring systems involved powerful, multi-faceted systems that would leverage rich models of students and pedagogies to create complex learning interactions. But the intelligent tutoring systems used at scale today are much simpler. In this article, I present hypotheses on the factors underlying this development,…
Getting SaaS-y. Why the Sisters of Mercy Health System opted for on-demand portfolio management.
Carter, Jay
2011-03-01
Sisters of Mercy Health System chose the SaaS model as a simpler way to plan, execute, and monitor strategic business initiatives. It also provided something that was easy to use and offered quick time to value.
Feng, Jin-Mei; Sun, Jun; Xin, De-Dong; Wen, Jian-Fan
2012-01-01
5S rRNA is a highly conserved ribosomal component. Eukaryotic 5S rRNA and its associated proteins (the 5S rRNA system) have become very well understood. Giardia lamblia was thought by some researchers to be the most primitive extant eukaryote, while others considered it a highly evolved parasite. Previous reports have indicated that some aspects of its 5S rRNA system are simpler than those of common eukaryotes. We here explore whether this is true of its entire system, and whether this simplicity is a primitive or parasitic feature. By collecting and confirming pre-existing data and identifying new data, we obtained almost complete datasets of the system for three isolates of G. lamblia, two other parasitic excavates (Trichomonas vaginalis, Trypanosoma cruzi), and one free-living species (Naegleria gruberi). After comprehensively comparing each aspect of the system among these excavates and also with those of archaea and common eukaryotes, we found all three Giardia isolates to harbor the same simplified 5S rRNA system, which is not only much simpler than that of common eukaryotes but also the simplest among these excavates, and is surprisingly very similar to that of archaea. We also found that, among these excavates, the system in parasitic species is not necessarily simpler than that in free-living species; conversely, the system of the free-living species is even simpler in some respects than those of the parasitic ones. The simplicity of the Giardia 5S rRNA system should be considered a primitive rather than a parasitically-degenerated feature. Therefore, the Giardia 5S rRNA system might be a primitive system intermediate between the archaeal form and the common eukaryotic model system, and it may reflect the evolutionary history of the eukaryotic 5S rRNA system from the archaeal form. Our results also imply that G. lamblia might be a primitive eukaryote with secondary parasitically-degenerated features.
NASA Astrophysics Data System (ADS)
Małoszewski, P.; Zuber, A.
1982-06-01
Three new lumped-parameter models have been developed for the interpretation of environmental radioisotope data in groundwater systems. Two of these models combine other simpler models, i.e. the piston flow model is combined either with the exponential model (exponential distribution of transit times) or with the linear model (linear distribution of transit times). The third model is based on a new solution to the dispersion equation which represents real systems more adequately than the conventional solution generally applied so far. The applicability of the models was tested by the reinterpretation of several known case studies (Modry Dul, Cheju Island, Rasche Spring and Grafendorf). It has been shown that two of these models, i.e. the exponential-piston flow model and the dispersive model, give better fits than other simpler models. Thus, the obtained values of turnover times are more reliable, whereas the additional fitting parameter gives some information about the structure of the system. In the examples considered, in spite of a lower number of fitting parameters, the new models gave practically the same fit as the multiparameter finite-state mixing-cell models. It has been shown that in the case of a constant tracer input, prior physical knowledge of the groundwater system is indispensable for determining the turnover time. The piston flow model commonly used for age determinations by the 14C method is an approximation applicable only in cases of low dispersion. In some cases the stable-isotope method aids in the interpretation of systems containing mixed waters of different ages. However, when the 14C method is used for mixed-water systems, a serious mistake may arise from neglecting the different bicarbonate contents of the particular water components.
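The exponential-piston flow idea can be sketched as a convolution of the tracer input with a delayed-exponential transit-time distribution; the parameter values below are illustrative and not taken from the case studies:

```python
import math

# Exponential-piston flow sketch: transit times are a piston-flow delay tau_p
# followed by an exponentially distributed residence time with mean tau_e, so
# g(t) = exp(-(t - tau_p)/tau_e)/tau_e for t >= tau_p, and the output tracer
# concentration is the convolution of the input history with g.
# All parameter values here are illustrative.

def epm_response(c_in, dt, tau_p, tau_e):
    n = len(c_in)
    g = [0.0 if t < tau_p else math.exp(-(t - tau_p) / tau_e) / tau_e
         for t in (k * dt for k in range(n))]
    # discrete convolution: c_out[k] = sum_j c_in[k-j] * g[j] * dt
    return [sum(c_in[k - j] * g[j] * dt for j in range(k + 1)) for k in range(n)]

dt, n = 0.25, 400
tau_p, tau_e = 2.0, 8.0                             # years; mean transit time = tau_p + tau_e
pulse = [1.0 if k == 0 else 0.0 for k in range(n)]  # unit tracer pulse at t = 0
out = epm_response([p / dt for p in pulse], dt, tau_p, tau_e)
mean_t = sum(k * dt * c * dt for k, c in enumerate(out))
print(round(mean_t, 2))   # close to the mean transit time tau_p + tau_e = 10
```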
Quantitative Diagnosis of Continuous-Valued, Steady-State Systems
NASA Technical Reports Server (NTRS)
Rouquette, N.
1995-01-01
Quantitative diagnosis involves numerically estimating the values of unobservable parameters that best explain the observed parameter values. We consider quantitative diagnosis for continuous, lumped-parameter, steady-state physical systems because such models are easy to construct and the diagnosis problem is considerably simpler than that for corresponding dynamic models. To further tackle the difficulties of numerically inverting a simulation model to compute a diagnosis, we propose to decompose a physical system model in terms of feedback loops. This decomposition reduces the dimension of the problem and consequently decreases the diagnosis search space. We illustrate this approach on a model of a thermal control system studied in earlier research.
Interactive, process-oriented climate modeling with CLIMLAB
NASA Astrophysics Data System (ADS)
Rose, B. E. J.
2016-12-01
Global climate is a complex emergent property of the rich interactions between simpler components of the climate system. We build scientific understanding of this system by breaking it down into component process models (e.g. radiation, large-scale dynamics, boundary layer turbulence), understanding each component, and putting them back together. Hands-on experience and freedom to tinker with climate models (whether simple or complex) is invaluable for building physical understanding. CLIMLAB is an open-ended software engine for interactive, process-oriented climate modeling. With CLIMLAB you can interactively mix and match model components, or combine simpler process models together into a more comprehensive model. It was created primarily to support classroom activities, using hands-on modeling to teach fundamentals of climate science at both undergraduate and graduate levels. CLIMLAB is written in Python and ties in with the rich ecosystem of open-source scientific Python tools for numerics and graphics. The Jupyter Notebook format provides an elegant medium for distributing interactive example code. I will give an overview of the current capabilities of CLIMLAB, the curriculum we have developed thus far, and plans for the future. Using CLIMLAB requires some basic Python coding skills. We consider this an educational asset, as we are targeting upper-level undergraduates and Python is an increasingly important language in STEM fields.
CLIMLAB: a Python-based software toolkit for interactive, process-oriented climate modeling
NASA Astrophysics Data System (ADS)
Rose, B. E. J.
2015-12-01
Global climate is a complex emergent property of the rich interactions between simpler components of the climate system. We build scientific understanding of this system by breaking it down into component process models (e.g. radiation, large-scale dynamics, boundary layer turbulence), understanding each component, and putting them back together. Hands-on experience and freedom to tinker with climate models (whether simple or complex) is invaluable for building physical understanding. CLIMLAB is an open-ended software engine for interactive, process-oriented climate modeling. With CLIMLAB you can interactively mix and match model components, or combine simpler process models together into a more comprehensive model. It was created primarily to support classroom activities, using hands-on modeling to teach fundamentals of climate science at both undergraduate and graduate levels. CLIMLAB is written in Python and ties in with the rich ecosystem of open-source scientific Python tools for numerics and graphics. The IPython notebook format provides an elegant medium for distributing interactive example code. I will give an overview of the current capabilities of CLIMLAB, the curriculum we have developed thus far, and plans for the future. Using CLIMLAB requires some basic Python coding skills. We consider this an educational asset, as we are targeting upper-level undergraduates and Python is an increasingly important language in STEM fields. However, CLIMLAB is well suited to be deployed as a computational back-end for a graphical gaming environment based on earth-system modeling.
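The kind of simple process model such a toolkit composes can be illustrated with a zero-dimensional energy balance model; note that this sketch is plain Python, not the actual CLIMLAB API:

```python
# A minimal zero-dimensional energy balance model, the kind of simple process
# model a toolkit like CLIMLAB lets students compose (plain Python here, not
# the CLIMLAB API): absorbed shortwave in, linearized longwave out.

def ebm_equilibrium(S0=1365.0, albedo=0.3, A=210.0, B=2.0, T0=0.0,
                    dt=0.5, steps=2000, C=10.0):
    """March C*dT/dt = ASR - OLR to equilibrium.
    ASR = S0/4*(1 - albedo); OLR = A + B*T (T in deg C, fluxes in W/m^2).
    A and B are an illustrative linearization of outgoing longwave radiation."""
    T = T0
    asr = S0 / 4 * (1 - albedo)
    for _ in range(steps):
        T += dt * (asr - (A + B * T)) / C
    return T

T_eq = ebm_equilibrium()
print(round(T_eq, 1))   # the analytic equilibrium is (S0/4*(1-albedo) - A) / B
```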
AFC-Enabled Simplified High-Lift System Integration Study
NASA Technical Reports Server (NTRS)
Hartwich, Peter M.; Dickey, Eric D.; Sclafani, Anthony J.; Camacho, Peter; Gonzales, Antonio B.; Lawson, Edward L.; Mairs, Ron Y.; Shmilovich, Arvin
2014-01-01
The primary objective of this trade study report is to explore the potential of using Active Flow Control (AFC) for achieving lighter and mechanically simpler high-lift systems for transonic commercial transport aircraft. This assessment was conducted in four steps. First, based on the Common Research Model (CRM) outer mold line (OML) definition, two high-lift concepts were developed. One concept, representative of current production-type commercial transonic transports, features leading edge slats and slotted trailing edge flaps with Fowler motion. The other CRM-based design relies on drooped leading edges and simply hinged trailing edge flaps for high-lift generation. The relative high-lift performance of these two high-lift CRM variants is established using Computational Fluid Dynamics (CFD) solutions to the Reynolds-Averaged Navier-Stokes (RANS) equations for steady flow. These CFD assessments identify the high-lift performance that needs to be recovered through AFC to have the CRM variant with the lighter and mechanically simpler high-lift system match the performance of the conventional high-lift system. Conceptual design integration studies for the AFC-enhanced high-lift systems were conducted with a NASA Environmentally Responsible Aviation (ERA) reference configuration, the so-called ERA-0003 concept. These design trades identify AFC performance targets that need to be met to produce economically feasible ERA-0003-like concepts with lighter and mechanically simpler high-lift designs that match the performance of conventional high-lift systems. Finally, technical challenges associated with the application of AFC-enabled high-lift systems to modern transonic commercial transports are identified for future technology maturation efforts.
Hot cheese: a processed Swiss cheese model.
Li, Y; Thimbleby, H
2014-01-01
James Reason's classic Swiss cheese model is a vivid and memorable way to visualise how patient harm happens only when all system defences fail. Although Reason's model has been criticised for its simplicity and static portrait of complex systems, its use has been growing, largely because of the direct clarity of its simple and memorable metaphor. A more general, more flexible and equally memorable model of accident causation in complex systems is needed. We present the hot cheese model, which is more realistic, particularly in portraying defence layers as dynamic and active - more defences may cause more hazards. The hot cheese model, being more flexible, encourages deeper discussion of incidents than the simpler Swiss cheese model permits.
Modeling of Biometric Identification System Using the Colored Petri Nets
NASA Astrophysics Data System (ADS)
Petrosyan, G. R.; Ter-Vardanyan, L. A.; Gaboutchian, A. V.
2015-05-01
In this paper we present a model of a biometric identification system transformed into Petri Nets. Petri Nets, as a graphical and mathematical tool, provide a uniform environment for modelling, formal analysis, and design of discrete event systems. The main objective of this paper is to introduce the fundamental concepts of Petri Nets to researchers and practitioners who work on the modelling and analysis of biometric identification systems, as well as to those who may potentially become involved in these areas. In addition, the paper introduces high-level Petri Nets, namely Colored Petri Nets (CPN). The Colored Petri Net model presented here describes the identification process much more simply.
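The basic Petri net firing rule is easy to state in code. The toy "matching" net below is our own illustrative stand-in for one step of an identification workflow, not the CPN model from the paper:

```python
# Tiny Petri net interpreter: a net is (places holding token counts,
# transitions with weighted input/output arcs); a transition is enabled when
# every input place holds enough tokens, and firing moves tokens.

def enabled(marking, t):
    return all(marking[p] >= w for p, w in t["in"].items())

def fire(marking, t):
    assert enabled(marking, t)
    m = dict(marking)
    for p, w in t["in"].items():
        m[p] -= w
    for p, w in t["out"].items():
        m[p] = m.get(p, 0) + w
    return m

# Illustrative net: matching a captured sample against a template database.
marking = {"sample_captured": 1, "template_db_ready": 1, "matched": 0}
match = {"in": {"sample_captured": 1, "template_db_ready": 1},
         "out": {"matched": 1, "template_db_ready": 1}}

assert enabled(marking, match)
marking = fire(marking, match)
print(marking)   # the database token is returned; a 'matched' token appears
```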
A computational approach to climate science education with CLIMLAB
NASA Astrophysics Data System (ADS)
Rose, B. E. J.
2017-12-01
CLIMLAB is a Python-based software toolkit for interactive, process-oriented climate modeling for use in education and research. It is motivated by the need for simpler tools and more reproducible workflows with which to "fill in the gaps" between blackboard-level theory and the results of comprehensive climate models. With CLIMLAB you can interactively mix and match physical model components, or combine simpler process models together into a more comprehensive model. I use CLIMLAB in the classroom to put models in the hands of students (undergraduate and graduate), and emphasize a hierarchical, process-oriented approach to understanding the key emergent properties of the climate system. CLIMLAB is equally a tool for climate research, where the same needs exist for more robust, process-based understanding and reproducible computational results. I will give an overview of CLIMLAB and an update on recent developments, including:
- a full-featured, well-documented, interactive implementation of a widely-used radiation model (RRTM)
- packaging with conda-forge for compiler-free (and hassle-free!) installation on Mac, Windows and Linux
- interfacing with xarray for i/o and graphics with gridded model data
- a rich and growing collection of examples and self-computing lecture notes in Jupyter notebook format
Quality Schools, Quality Outcomes
ERIC Educational Resources Information Center
Australian Government Department of Education and Training, 2016
2016-01-01
A strong level of funding is important for Australia's school system. The Government has further committed to a new, simpler and fairer funding model that distributes this funding on the basis of need. However, while funding is important, evidence shows that what you do with that funding matters more. Despite significant funding growth in the past…
Chaining for Flexible and High-Performance Key-Value Systems
2012-09-01
store that is fault tolerant, achieves high performance and availability, and offers strong data consistency? We present a new replication protocol... effective high-performance data access and analytics, many sites use simpler data model "NoSQL" systems. These systems store and retrieve data only by... DRAM, Flash, and disk-based storage; can act as an unreliable cache or a durable store; and can offer strong or weak data consistency. The value of
A discrete control model of PLANT
NASA Technical Reports Server (NTRS)
Mitchell, C. M.
1985-01-01
A model of the PLANT system using the discrete control modeling techniques developed by Miller is described. Discrete control models attempt to represent in a mathematical form how a human operator might decompose a complex system into simpler parts and how the control actions and system configuration are coordinated so that acceptable overall system performance is achieved. Basic questions include knowledge representation, information flow, and decision making in complex systems. The structure of the model is a general hierarchical/heterarchical scheme which structurally accounts for coordination and dynamic focus of attention. Mathematically, the discrete control model is defined in terms of a network of finite state systems. Specifically, the discrete control model accounts for how specific control actions are selected from information about the controlled system, the environment, and the context of the situation. The objective is to provide a plausible and empirically testable accounting and, if possible, explanation of control behavior.
The Krylov accelerated SIMPLE(R) method for flow problems in industrial furnaces
NASA Astrophysics Data System (ADS)
Vuik, C.; Saghir, A.; Boerstoel, G. P.
2000-08-01
Numerical modeling of the melting and combustion process is an important tool in gaining understanding of the physical and chemical phenomena that occur in a gas- or oil-fired glass-melting furnace. The incompressible Navier-Stokes equations are used to model the gas flow in the furnace. The discrete Navier-Stokes equations are solved by the SIMPLE(R) pressure-correction method. In these applications, many SIMPLE(R) iterations are necessary to obtain an accurate solution. In this paper, Krylov accelerated versions are proposed: GCR-SIMPLE(R). The properties of these methods are investigated for a simple two-dimensional flow. Thereafter, the efficiencies of the methods are compared for three-dimensional flows in industrial glass-melting furnaces.
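The benefit of Krylov acceleration can be illustrated on a toy symmetric positive definite system (a 1D Laplacian standing in for a pressure-correction matrix; the paper's GCR method targets nonsymmetric systems, so plain conjugate gradients is used here purely as an illustration):

```python
# Conjugate gradients on a small SPD system (a 1D Laplacian with Dirichlet
# ends, a toy stand-in for a pressure-correction matrix). A Krylov method
# reaches a tight tolerance in few sweeps; in exact arithmetic CG needs at
# most n iterations on an n-by-n SPD system.

def matvec(x):
    n = len(x)
    return [2 * x[i] - (x[i - 1] if i else 0) - (x[i + 1] if i < n - 1 else 0)
            for i in range(n)]

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

def cg(b, tol=1e-10, max_it=500):
    x = [0.0] * len(b)
    r = b[:]                        # residual for the initial guess x0 = 0
    p = r[:]
    rs = dot(r, r)
    for it in range(1, max_it + 1):
        Ap = matvec(p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new ** 0.5 < tol:
            return x, it
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x, max_it

n = 20
b = [1.0] * n
x, iters = cg(b)
residual = [bi - ai for bi, ai in zip(b, matvec(x))]
assert max(abs(ri) for ri in residual) < 1e-8
print("CG iterations:", iters)
```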
Measurement system and model for simultaneously measuring 6DOF geometric errors.
Zhao, Yuqiong; Zhang, Bin; Feng, Qibo
2017-09-04
A measurement system to simultaneously measure six degree-of-freedom (6DOF) geometric errors is proposed. The measurement method is based on a combination of mono-frequency laser interferometry and laser fiber collimation. A simpler and more integrated optical configuration is designed. To compensate for the measurement errors introduced by error crosstalk, element fabrication error, laser beam drift, and nonparallelism of the two measurement beams, a unified measurement model, which can improve the measurement accuracy, is deduced and established using the ray-tracing method. A numerical simulation using the optical design software Zemax is conducted, and the results verify the correctness of the model. Several experiments are performed to demonstrate the feasibility and effectiveness of the proposed system and measurement model.
Variational Integrators for Interconnected Lagrange-Dirac Systems
NASA Astrophysics Data System (ADS)
Parks, Helen; Leok, Melvin
2017-10-01
Interconnected systems are an important class of mathematical models, as they allow for the construction of complex, hierarchical, multiphysics, and multiscale models by the interconnection of simpler subsystems. Lagrange-Dirac mechanical systems provide a broad category of mathematical models that are closed under interconnection, and in this paper, we develop a framework for the interconnection of discrete Lagrange-Dirac mechanical systems, with a view toward constructing geometric structure-preserving discretizations of interconnected systems. This work builds on previous work on the interconnection of continuous Lagrange-Dirac systems (Jacobs and Yoshimura in J Geom Mech 6(1):67-98, 2014) and discrete Dirac variational integrators (Leok and Ohsawa in Found Comput Math 11(5):529-562, 2011). We test our results by simulating some of the continuous examples given in Jacobs and Yoshimura (2014).
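A minimal example of a variational integrator is Störmer-Verlet, which is derived from a discrete Lagrangian and therefore preserves the symplectic structure, showing bounded energy error rather than secular drift. This generic pendulum sketch is ours, not one of the interconnected Lagrange-Dirac examples from the paper:

```python
import math

# Stoermer-Verlet for a pendulum (H = p^2/2 + 1 - cos q): the simplest
# variational integrator. Being derived from a discrete Lagrangian, it is
# symplectic, so the energy error stays bounded over long runs.

def verlet_pendulum(q0=1.0, p0=0.0, dt=0.05, steps=4000):
    q, p = q0, p0
    energies = []
    for _ in range(steps):
        p -= 0.5 * dt * math.sin(q)   # half kick  (force = -dV/dq = -sin q)
        q += dt * p                   # drift
        p -= 0.5 * dt * math.sin(q)   # half kick
        energies.append(0.5 * p * p + (1 - math.cos(q)))
    return energies

E = verlet_pendulum()
drift = max(E) - min(E)
print(f"energy stays within a band of {drift:.2e}; no secular drift")
```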
Finite-dimensional modeling of network-induced delays for real-time control systems
NASA Technical Reports Server (NTRS)
Ray, Asok; Halevi, Yoram
1988-01-01
In integrated control systems (ICS), a feedback loop is closed by the common communication channel, which multiplexes digital data from the sensor to the controller and from the controller to the actuator along with the data traffic from other control loops and management functions. Due to asynchronous time-division multiplexing in the network access protocols, time-varying delays are introduced in the control loop, which degrade the system dynamic performance and are a potential source of instability. The delayed control system is represented by a finite-dimensional, time-varying, discrete-time model which is less complex than the existing continuous-time models for time-varying delays; this approach allows for simpler schemes for analysis and simulation of the ICS.
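The effect of such time-varying delays can be sketched with a scalar discrete-time model in which the control input arrives d_k samples late; all values below are illustrative, not from the paper:

```python
import random

# Sketch of a discrete-time model with a network-induced, time-varying delay:
# the scalar plant x[k+1] = a*x[k] + b*u[k - d_k], where the delay d_k (in
# samples) varies randomly with network traffic. Values are illustrative;
# with |a| + b*k_gain < 1 the loop stays stable for any delay pattern.

def simulate(a=0.7, b=0.1, k_gain=2.0, steps=60, max_delay=3, seed=1):
    random.seed(seed)
    x = 1.0
    u_hist = [0.0] * (max_delay + 1)      # most recent control value first
    xs = []
    for _ in range(steps):
        u = -k_gain * x                   # proportional control toward zero
        u_hist = [u] + u_hist[:-1]
        d = random.randint(0, max_delay)  # time-varying network delay
        x = a * x + b * u_hist[d]         # the plant sees a stale input
        xs.append(x)
    return xs

xs = simulate()
print(round(xs[-1], 4))   # the state still decays despite the varying delay
```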
Cook, Heather; Brennan, Kathleen; Azziz, Ricardo
2011-01-01
Objective: To determine whether assessing the extent of terminal hair growth in a subset of the traditional 9 areas included in the modified Ferriman-Gallwey (mFG) score can serve as a simpler predictor of total body hirsutism when compared to the full scoring system, and to determine if this new model can accurately distinguish hirsute from non-hirsute women. Design: Cross-sectional analysis. Setting: Two tertiary care academic referral centers. Patients: 1951 patients presenting for symptoms of androgen excess. Interventions: History and physical examination, including the mFG score. Main Outcome Measures: Total body hirsutism. Results: A regression model using all nine body areas indicated that the combination of upper abdomen, lower abdomen, and chin was the best predictor of the full mFG score. Using this subset of three body areas is accurate in distinguishing true hirsute from non-hirsute women when defining true hirsutism as mFG>7. Conclusion: Scoring terminal hair growth only on the chin and abdomen can serve as a simple yet reliable predictor of total body hirsutism when compared to full body scoring using the traditional mFG system. PMID:21924716
Modeling human target acquisition in ground-to-air weapon systems
NASA Technical Reports Server (NTRS)
Phatak, A. V.; Mohr, R. L.; Vikmanis, M.; Wei, K. C.
1982-01-01
The problems associated with formulating and validating mathematical models for describing and predicting human target acquisition response are considered. In particular, the extension of the human observer model to include the acquisition phase as well as the tracking segment is presented. Relationship of the Observer model structure to the more complex Standard Optimal Control model formulation and to the simpler Transfer Function/Noise representation is discussed. Problems pertinent to structural identifiability and the form of the parameterization are elucidated. A systematic approach toward the identification of the observer acquisition model parameters from ensemble tracking error data is presented.
Human sleep and circadian rhythms: a simple model based on two coupled oscillators.
Strogatz, S H
1987-01-01
We propose a model of the human circadian system. The sleep-wake and body temperature rhythms are assumed to be driven by a pair of coupled nonlinear oscillators described by phase variables alone. The novel aspect of the model is that its equations may be solved analytically. Computer simulations are used to test the model against sleep-wake data pooled from 15 studies of subjects living for weeks in unscheduled, time-free environments. On these tests the model performs about as well as the existing models, although its mathematical structure is far simpler.
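The phase-only formulation can be sketched with two Euler-integrated phase oscillators; the coupling form and parameter values below are generic illustrations, not the model's fitted values:

```python
import math

# Two coupled phase oscillators in the spirit of the model: two rhythms with
# slightly different intrinsic frequencies that phase-lock when the coupling
# is strong enough.
#   d(theta1)/dt = w1 + K*sin(theta2 - theta1)
#   d(theta2)/dt = w2 + K*sin(theta1 - theta2)

def simulate(w1=1.0, w2=1.1, K=0.2, dt=0.01, steps=20000):
    th1 = th2 = 0.0
    for _ in range(steps):
        d = th2 - th1
        th1 += dt * (w1 + K * math.sin(d))
        th2 += dt * (w2 - K * math.sin(d))
    return th2 - th1

phase_gap = simulate()
# With |w2 - w1| = 0.1 <= 2K = 0.4 the oscillators phase-lock, and the
# stationary phase difference satisfies sin(gap) = (w2 - w1) / (2K) = 0.25.
print(round(math.sin(phase_gap), 3))
```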
Common sense reasoning about petroleum flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, S.
1981-02-01
This paper describes an expert system for understanding and reasoning in a petroleum resources domain. A basic model is implemented in FRL (Frame Representation Language). Expertise is encoded as rule frames. The model consists of a set of episodic contexts which are sequentially generated over time. Reasoning occurs in separate reasoning contexts consisting of a buffer frame and packets of rules; these function similarly to small production systems. Reasoning is linked to the model through an interface of sentinels (instance-driven demons) which notice anomalous conditions. Heuristics and metaknowledge are used through the creation of further reasoning contexts which overlay the simpler ones.
Heat transfer correlations for multilayer insulation systems
NASA Astrophysics Data System (ADS)
Krishnaprakas, C. K.; Badari Narayana, K.; Dutta, Pradip
2000-01-01
Multilayer insulation (MLI) blankets are extensively used in spacecraft as lightweight thermal protection systems. Heat transfer analysis of MLI is sometimes too complex to use in practical design applications. Hence, for practical engineering design purposes, it is necessary to have simpler procedures to evaluate the heat transfer rate through MLI. In this paper, four different empirical models for heat transfer are evaluated by fitting against experimentally observed heat flux through MLI blankets of various configurations, and the results are discussed.
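To illustrate the fitting step described above, the sketch below fits a two-term empirical correlation (a solid-conduction term plus a radiation term) to heat-flux data by least squares. The functional form and all numbers are assumptions for demonstration, not one of the paper's four models.

```python
import numpy as np

# Synthetic "measured" heat flux generated from known coefficients.
Th = np.array([250.0, 280.0, 300.0, 320.0, 350.0])   # hot boundary temps, K
Tc = np.array([100.0, 110.0, 120.0, 130.0, 140.0])   # cold boundary temps, K
C1_true, C2_true = 5e-2, 1e-10                        # hypothetical coefficients
q_measured = C1_true * (Th - Tc) + C2_true * (Th**4 - Tc**4)

# The assumed correlation q = C1*(Th - Tc) + C2*(Th^4 - Tc^4) is linear in its
# coefficients, so ordinary least squares recovers them directly.
A = np.column_stack([Th - Tc, Th**4 - Tc**4])
(C1, C2), *_ = np.linalg.lstsq(A, q_measured, rcond=None)
```

With real (noisy) data the same fit would be repeated for each candidate correlation and the residuals compared, which is the essence of the evaluation the abstract describes.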
NASREN: Standard reference model for telerobot control
NASA Technical Reports Server (NTRS)
Albus, J. S.; Lumia, R.; Mccain, H.
1987-01-01
A hierarchical architecture is described which supports space station telerobots in a variety of modes. The system is divided into three hierarchies: task decomposition, world model, and sensory processing. Goals at each level of the task decomposition hierarchy are divided both spatially and temporally into simpler commands for the next lower level. This decomposition is repeated until, at the lowest level, the drive signals to the robot actuators are generated. To accomplish its goals, task decomposition modules must often use information stored in the world model. The purpose of the sensory system is to update the world model as rapidly as possible to keep the model in registration with the physical world. The architecture of the entire control system hierarchy is described, along with how it can be applied to space telerobot applications.
Numerical treatment of free surface problems in ferrohydrodynamics
NASA Astrophysics Data System (ADS)
Lavrova, O.; Matthies, G.; Mitkova, T.; Polevikov, V.; Tobiska, L.
2006-09-01
The numerical treatment of free surface problems in ferrohydrodynamics is considered. Starting from the general model, special attention is paid to field-surface and flow-surface interactions. Since in some situations these feedback interactions can be partly or even fully neglected, simpler models can be derived. The application of such models to the numerical simulation of dissipative systems, rotary shaft seals, equilibrium shapes of ferrofluid drops, and pattern formation in the normal-field instability of ferrofluid layers is given. Our numerical strategy is able to recover solitary surface patterns which were discovered recently in experiments.
ModelPlex: Verified Runtime Validation of Verified Cyber-Physical System Models
2014-07-01
Nondeterministic choice (〈∪〉), deterministic assignment (〈:=〉) and logical connectives (∧r etc.) replace current facts with simpler ones or branch. By sequent proof rule ∃r, this existentially quantified variable is instantiated with an arbitrary term θ, which is often a new logical variable that is implicitly existentially quantified [27]. Weakening (Wr) removes facts that are no longer necessary. [Rule schemas for 〈∗〉 and ∃r truncated in this excerpt.]
On Roles of Models in Information Systems
NASA Astrophysics Data System (ADS)
Sølvberg, Arne
The increasing penetration of computers into all aspects of human activity makes it desirable that the interplay among software, data and the domains where computers are applied is made more transparent. An approach to this end is to explicitly relate the modeling concepts of the domains, e.g., natural science, technology and business, to the modeling concepts of software and data. This may make it simpler to build comprehensible integrated models of the interactions between computers and non-computers, e.g., interaction among computers, people, physical processes, biological processes, and administrative processes. This chapter contains an analysis of various facets of the modeling environment for information systems engineering. The lack of satisfactory conceptual modeling tools seems to be central to the unsatisfactory state-of-the-art in establishing information systems. The chapter contains a proposal for defining a concept of information that is relevant to information systems engineering.
NASA Astrophysics Data System (ADS)
Mannattil, Manu; Pandey, Ambrish; Verma, Mahendra K.; Chakraborty, Sagar
2017-12-01
Constructing simpler models, either stochastic or deterministic, for exploring the phenomenon of flow reversals in fluid systems is in vogue across disciplines. Using direct numerical simulations and nonlinear time series analysis, we illustrate that the basic nature of flow reversals in convecting fluids can depend on the dimensionless parameters describing the system. Specifically, we find evidence of low-dimensional behavior in flow reversals occurring at zero Prandtl number, whereas we fail to find such signatures for reversals at infinite Prandtl number. Thus, even in a single system, as one varies the system parameters, one can encounter reversals that are fundamentally different in nature. Consequently, we conclude that a single general low-dimensional deterministic model cannot faithfully characterize flow reversals for every set of parameter values.
NASA Astrophysics Data System (ADS)
Al-Rabadi, Anas N.
2009-10-01
This research introduces a new method of intelligent control for the Buck converter using a newly developed small-signal model of the pulse width modulation (PWM) switch. The new method uses a supervised neural network to estimate certain parameters of the transformed system matrix [Ã]. Then, a numerical algorithm used in robust control, the linear matrix inequality (LMI) optimization technique, is used to determine the permutation matrix [P] so that a complete system transformation {[B˜], [C˜], [Ẽ]} is possible. The transformed model is then reduced using the method of singular perturbation, and state feedback control is applied to enhance system performance. The experimental results show that the new control methodology simplifies the model of the Buck converter and thus uses a simpler controller that produces the desired system response for performance enhancement.
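The singular-perturbation reduction step mentioned above can be sketched generically (this is the textbook two-time-scale construction, not the paper's Buck converter model). For x' = A11 x + A12 z, eps·z' = A21 x + A22 z, letting eps → 0 gives the quasi-steady fast state z = -inv(A22) A21 x, so the slow subsystem is x' = (A11 - A12 inv(A22) A21) x.

```python
import numpy as np

# Toy two-time-scale system (all matrices invented for illustration):
A11 = np.array([[-1.0]])   # slow-state dynamics
A12 = np.array([[2.0]])    # fast-to-slow coupling
A21 = np.array([[1.0]])    # slow-to-fast coupling
A22 = np.array([[-4.0]])   # fast-state dynamics (must be invertible/stable)

# Reduced-order slow model obtained by eliminating the fast state:
A_slow = A11 - A12 @ np.linalg.inv(A22) @ A21
```

A state-feedback gain designed against `A_slow` then controls the dominant dynamics with a lower-order controller, which is the simplification the abstract claims.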
Systems Biology Perspectives on Minimal and Simpler Cells
Xavier, Joana C.; Patil, Kiran Raosaheb
2014-01-01
SUMMARY The concept of the minimal cell has fascinated scientists for a long time, from both fundamental and applied points of view. This broad concept encompasses extreme reductions of genomes, the last universal common ancestor (LUCA), the creation of semiartificial cells, and the design of protocells and chassis cells. Here we review these different areas of research and identify common and complementary aspects of each one. We focus on systems biology, a discipline that is greatly facilitating the classical top-down and bottom-up approaches toward minimal cells. In addition, we also review the so-called middle-out approach and its contributions to the field with mathematical and computational models. Owing to the advances in genomics technologies, much of the work in this area has been centered on minimal genomes, or rather minimal gene sets, required to sustain life. Nevertheless, a fundamental expansion has been taking place in the last few years wherein the minimal gene set is viewed as a backbone of a more complex system. Complementing genomics, progress is being made in understanding the system-wide properties at the levels of the transcriptome, proteome, and metabolome. Network modeling approaches are enabling the integration of these different omics data sets toward an understanding of the complex molecular pathways connecting genotype to phenotype. We review key concepts central to the mapping and modeling of this complexity, which is at the heart of research on minimal cells. Finally, we discuss the distinction between minimizing the number of cellular components and minimizing cellular complexity, toward an improved understanding and utilization of minimal and simpler cells. PMID:25184563
Sears, Clinton; Andersson, Zach; Cann, Meredith
2016-01-01
ABSTRACT Background: Supporting the diverse needs of people living with HIV (PLHIV) can help reduce the individual and structural barriers they face in adhering to antiretroviral treatment (ART). The Livelihoods and Food Security Technical Assistance II (LIFT) project sought to improve adherence in Malawi by establishing 2 referral systems linking community-based economic strengthening and livelihoods services to clinical health facilities. One referral system in Balaka district, started in October 2013, connected clients to more than 20 types of services while the other simplified approach in Kasungu and Lilongwe districts, started in July 2014, connected PLHIV attending HIV and nutrition support facilities directly to community savings groups. Methods: From June to July 2015, LIFT visited referral sites in Balaka, Kasungu, and Lilongwe districts to collect qualitative data on referral utility, the perceived association of referrals with client and household health and vulnerability, and the added value of the referral system as perceived by network member providers. We interviewed a random sample of 152 adult clients (60 from Balaka, 57 from Kasungu, and 35 from Lilongwe) who had completed their referral. We also conducted 2 focus group discussions per district with network providers. Findings: Clients in all 3 districts indicated their ability to save money had improved after receiving a referral, although the percentage was higher among clients in the simplified Kasungu and Lilongwe model than the more complex Balaka model (85.6% vs. 56.0%, respectively). Nearly 70% of all clients interviewed had HIV infection; 72.7% of PLHIV in Balaka and 95.7% of PLHIV in Kasungu and Lilongwe credited referrals for helping them stay on their ART. After the referral, 76.0% of clients in Balaka and 92.3% of clients in Kasungu and Lilongwe indicated they would be willing to spend their savings on health costs. 
The more diverse referral network and use of an mHealth app to manage data in Balaka hindered provider uptake of the system, while the simpler system in Kasungu and Lilongwe, which included only 2 referral options and use of a paper-based referral tool, seemed simpler for the providers to manage. Conclusions: Participation in the referral systems was perceived positively by clients and providers in both models, but more so in Kasungu and Lilongwe where the referral process was simpler. Future referral networks should consider limiting the number of service options included in the network and simplify referral tools to the extent possible to facilitate uptake among network providers. PMID:28031300
Expanded Processing Techniques for EMI Systems
2012-07-01
possible to perform better target detection using physics-based algorithms and the entire data set, rather than simulating a simpler data set and mapping...
Figure 4.25: Plots of simulated MetalMapper data for two oblate spheroidal targets
A Reference Architecture for Space Information Management
NASA Technical Reports Server (NTRS)
Mattmann, Chris A.; Crichton, Daniel J.; Hughes, J. Steven; Ramirez, Paul M.; Berrios, Daniel C.
2006-01-01
We describe a reference architecture for space information management systems that elegantly overcomes the rigid design of common information systems in many domains. The reference architecture consists of a set of flexible, reusable, independent models and software components that function in unison, but remain separately managed entities. The main guiding principle of the reference architecture is to separate the various models of information (e.g., data, metadata, etc.) from implemented system code, allowing each to evolve independently. System modularity, systems interoperability, and dynamic evolution of information system components are the primary benefits of the design of the architecture. The architecture requires the use of information models that are substantially more advanced than those used by the vast majority of information systems. These models are more expressive and can be more easily modularized, distributed and maintained than simpler models, e.g., configuration files and data dictionaries. Our current work focuses on formalizing the architecture within a CCSDS Green Book and evaluating the architecture within the context of the C3I initiative.
The Epistemic Representation of Information Flow Security in Probabilistic Systems
1995-06-01
The new characterization also means that our security criterion is expressible in a simpler logic and model. 1 Introduction: Multilevel security is... [excerpt truncated] ...number generator) during its execution. Such probabilistic choices are useful in a multilevel security context. [Footnote: Supported by grants HKUST 608/94E from...]
NASA Astrophysics Data System (ADS)
Shen, C.; Fang, K.
2017-12-01
Deep Learning (DL) methods have made revolutionary strides in recent years. A core value proposition of DL is that abstract notions and patterns can be extracted purely from data, without the need for domain expertise. Process-based models (PBM), on the other hand, can be regarded as repositories of human knowledge or hypotheses about how systems function. Here, through computational examples, we argue that there is merit in integrating PBMs with DL due to the imbalance and lack of data in many situations, especially in hydrology. We trained a deep-in-time neural network, the Long Short-Term Memory (LSTM), to learn soil moisture dynamics from Soil Moisture Active Passive (SMAP) Level 3 product. We show that when PBM solutions are integrated into LSTM, the network is able to better generalize across regions. LSTM is able to better utilize PBM solutions than simpler statistical methods. Our results suggest PBMs have generalization value which should be carefully assessed and utilized. We also emphasize that when properly regularized, the deep network is robust and is of superior testing performance compared to simpler methods.
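To make the "deep-in-time" network concrete, here is a minimal LSTM cell forward pass in NumPy. This is a generic sketch with random weights; the authors' actual model, trained on SMAP soil moisture with PBM-derived inputs, is of course far larger and trained rather than random.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One time step of an LSTM cell; the four gates are stacked in z."""
    z = W @ x + U @ h + b
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c = f * c + i * g        # cell state carries long-term memory
    h = o * np.tanh(c)       # hidden state is the per-step output
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid, T = 3, 8, 20    # e.g. a few forcing inputs, small hidden state
W = rng.normal(0, 0.1, (4 * n_hid, n_in))
U = rng.normal(0, 0.1, (4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(T):                       # unroll over a short time series
    h, c = lstm_step(rng.normal(size=n_in), h, c, W, U, b)
```

Integrating PBM solutions, as the abstract describes, would amount to appending process-model outputs to the input vector `x` at each step.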
Towards Run-time Assurance of Advanced Propulsion Algorithms
NASA Technical Reports Server (NTRS)
Wong, Edmond; Schierman, John D.; Schlapkohl, Thomas; Chicatelli, Amy
2014-01-01
This paper covers the motivation and rationale for investigating the application of run-time assurance methods as a potential means of providing safety assurance for advanced propulsion control systems. Certification is becoming increasingly infeasible for such systems using current verification practices. Run-time assurance systems hold the promise of certifying these advanced systems by continuously monitoring the state of the feedback system during operation and reverting to a simpler, certified system if anomalous behavior is detected. The discussion will also cover initial efforts underway to apply a run-time assurance framework to NASA's model-based engine control approach. Preliminary experimental results are presented and discussed.
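The monitor-and-revert pattern described above can be sketched in a few lines. The control laws, the envelope check, and the numeric bound are all illustrative placeholders, not NASA's framework.

```python
# Run-time assurance sketch: a monitor checks the advanced controller's
# command against a safety envelope and reverts to a certified backup law
# when the envelope is violated.  All names and bounds are invented.
def run_time_assure(advanced, backup, state, limit=1.0):
    u = advanced(state)
    if abs(u) > limit:                 # anomalous command detected
        return backup(state), "backup"
    return u, "advanced"

aggressive = lambda x: 3.0 * x         # advanced, unverified control law
certified = lambda x: 0.5 * x          # simpler, certified fallback

u1, mode1 = run_time_assure(aggressive, certified, state=0.2)
u2, mode2 = run_time_assure(aggressive, certified, state=0.6)
```

In a real system the monitor would check the plant state against a verified safety envelope rather than the raw command magnitude, but the switching structure is the same.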
NASA Astrophysics Data System (ADS)
Kwiatkowski, L.; Yool, A.; Allen, J. I.; Anderson, T. R.; Barciela, R.; Buitenhuis, E. T.; Butenschön, M.; Enright, C.; Halloran, P. R.; Le Quéré, C.; de Mora, L.; Racault, M.-F.; Sinha, B.; Totterdell, I. J.; Cox, P. M.
2014-07-01
Ocean biogeochemistry (OBGC) models span a wide range of complexities from highly simplified, nutrient-restoring schemes, through nutrient-phytoplankton-zooplankton-detritus (NPZD) models that crudely represent the marine biota, through to models that represent a broader trophic structure by grouping organisms as plankton functional types (PFT) based on their biogeochemical role (Dynamic Green Ocean Models; DGOM) and ecosystem models which group organisms by ecological function and trait. OBGC models are now integral components of Earth System Models (ESMs), but they compete for computing resources with higher resolution dynamical setups and with other components such as atmospheric chemistry and terrestrial vegetation schemes. As such, the choice of OBGC in ESMs needs to balance model complexity and realism alongside relative computing cost. Here, we present an inter-comparison of six OBGC models that were candidates for implementation within the next UK Earth System Model (UKESM1). The models cover a large range of biological complexity (from 7 to 57 tracers) but all include representations of at least the nitrogen, carbon, alkalinity and oxygen cycles. Each OBGC model was coupled to the Nucleus for the European Modelling of the Ocean (NEMO) ocean general circulation model (GCM), and results from physically identical hindcast simulations were compared. Model skill was evaluated for biogeochemical metrics of global-scale bulk properties using conventional statistical techniques. The computing cost of each model was also measured in standardised tests run at two resource levels. No model is shown to consistently outperform or underperform all other models across all metrics. Nonetheless, the simpler models that are easier to tune are broadly closer to observations across a number of fields, and thus offer a high-efficiency option for ESMs that prioritise high resolution climate dynamics. 
However, simpler models provide limited insight into more complex marine biogeochemical processes and ecosystem pathways, and a parallel approach of low resolution climate dynamics and high complexity biogeochemistry is desirable in order to provide additional insights into biogeochemistry-climate interactions.
NASA Astrophysics Data System (ADS)
Kwiatkowski, L.; Yool, A.; Allen, J. I.; Anderson, T. R.; Barciela, R.; Buitenhuis, E. T.; Butenschön, M.; Enright, C.; Halloran, P. R.; Le Quéré, C.; de Mora, L.; Racault, M.-F.; Sinha, B.; Totterdell, I. J.; Cox, P. M.
2014-12-01
Ocean biogeochemistry (OBGC) models span a wide variety of complexities, including highly simplified nutrient-restoring schemes, nutrient-phytoplankton-zooplankton-detritus (NPZD) models that crudely represent the marine biota, models that represent a broader trophic structure by grouping organisms as plankton functional types (PFTs) based on their biogeochemical role (dynamic green ocean models) and ecosystem models that group organisms by ecological function and trait. OBGC models are now integral components of Earth system models (ESMs), but they compete for computing resources with higher resolution dynamical setups and with other components such as atmospheric chemistry and terrestrial vegetation schemes. As such, the choice of OBGC in ESMs needs to balance model complexity and realism alongside relative computing cost. Here we present an intercomparison of six OBGC models that were candidates for implementation within the next UK Earth system model (UKESM1). The models cover a large range of biological complexity (from 7 to 57 tracers) but all include representations of at least the nitrogen, carbon, alkalinity and oxygen cycles. Each OBGC model was coupled to the ocean general circulation model Nucleus for European Modelling of the Ocean (NEMO) and results from physically identical hindcast simulations were compared. Model skill was evaluated for biogeochemical metrics of global-scale bulk properties using conventional statistical techniques. The computing cost of each model was also measured in standardised tests run at two resource levels. No model is shown to consistently outperform all other models across all metrics. Nonetheless, the simpler models are broadly closer to observations across a number of fields and thus offer a high-efficiency option for ESMs that prioritise high-resolution climate dynamics. 
However, simpler models provide limited insight into more complex marine biogeochemical processes and ecosystem pathways, and a parallel approach of low-resolution climate dynamics and high-complexity biogeochemistry is desirable in order to provide additional insights into biogeochemistry-climate interactions.
Numerical modeling of reflux solar receivers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, R.E. Jr.
1993-05-01
Using reflux solar receivers to collect solar energy for dish-Stirling electric power generation systems is presently being investigated by several organizations, including Sandia National Laboratories, Albuquerque, N. Mex. In support of this program, Sandia has developed two numerical models describing the thermal performance of pool-boiler and heat-pipe reflux receivers. Both models are applicable to axisymmetric geometries and they both consider the radiative and convective energy transfer within the receiver cavity, the conductive and convective energy transfer from the receiver housing, and the energy transfer to the receiver working fluid. The primary difference between the models is the level of detail in modeling the heat conduction through the receiver walls. The more detailed model uses a two-dimensional finite control volume method, whereas the simpler model uses a one-dimensional thermal resistance approach. The numerical modeling concepts presented are applicable to conventional tube-type solar receivers, as well as to reflux receivers. Good agreement between the two models is demonstrated by comparing the predicted and measured performance of a pool-boiler reflux receiver being tested at Sandia. For design operating conditions, the receiver thermal efficiencies agree within 1 percent and the average receiver cavity temperature within 1.3 percent. The thermal efficiency and receiver temperatures predicted by the simpler thermal resistance model agree well with experimental data from on-sun tests of the Sandia reflux pool-boiler receiver. An analysis of these comparisons identifies several plausible explanations for the differences between the predicted results and the experimental data.
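The one-dimensional thermal resistance approach used by the simpler model can be sketched as a series resistance network through a receiver wall. The geometry, coefficients, and temperatures below are invented for illustration, not Sandia's receiver data.

```python
# 1-D thermal-resistance sketch: cavity-side convection, wall conduction,
# and working-fluid-side convection in series.  Numbers are illustrative.
h_cav, k_wall, h_fluid = 50.0, 20.0, 500.0   # W/m^2-K, W/m-K, W/m^2-K
t_wall, area = 0.005, 1.0                    # wall thickness (m), area (m^2)

R_total = 1.0 / (h_cav * area) + t_wall / (k_wall * area) + 1.0 / (h_fluid * area)

T_cavity, T_fluid = 1100.0, 900.0            # K
q = (T_cavity - T_fluid) / R_total          # heat rate through the wall, W
```

The more detailed model replaces the single conduction resistance with a two-dimensional finite-control-volume solution of the wall temperature field.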
Billing code algorithms to identify cases of peripheral artery disease from administrative data
Fan, Jin; Arruda-Olson, Adelaide M; Leibson, Cynthia L; Smith, Carin; Liu, Guanghui; Bailey, Kent R; Kullo, Iftikhar J
2013-01-01
Objective To construct and validate billing code algorithms for identifying patients with peripheral arterial disease (PAD). Methods We extracted all encounters and line item details including PAD-related billing codes at Mayo Clinic Rochester, Minnesota, between July 1, 1997 and June 30, 2008; 22 712 patients evaluated in the vascular laboratory were divided into training and validation sets. Multiple logistic regression analysis was used to create an integer code score from the training dataset, and this was tested in the validation set. We applied a model-based code algorithm to patients evaluated in the vascular laboratory and compared this with a simpler algorithm (presence of at least one of the ICD-9 PAD codes 440.20–440.29). We also applied both algorithms to a community-based sample (n=4420), followed by a manual review. Results The logistic regression model performed well in both training and validation datasets (c statistic=0.91). In patients evaluated in the vascular laboratory, the model-based code algorithm provided better negative predictive value. The simpler algorithm was reasonably accurate for identification of PAD status, with lesser sensitivity and greater specificity. In the community-based sample, the sensitivity (38.7% vs 68.0%) of the simpler algorithm was much lower, whereas the specificity (92.0% vs 87.6%) was higher than the model-based algorithm. Conclusions A model-based billing code algorithm had reasonable accuracy in identifying PAD cases from the community, and in patients referred to the non-invasive vascular laboratory. The simpler algorithm had reasonable accuracy for identification of PAD in patients referred to the vascular laboratory but was significantly less sensitive in a community-based sample. PMID:24166724
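The two algorithm styles compared above can be sketched as follows. The log-odds weights, point conversion, threshold, and code list are hypothetical placeholders; the study's fitted coefficients are not reproduced here.

```python
# Hypothetical per-code log-odds weights from a fitted logistic model:
weights = {"440.20": 1.8, "440.21": 2.4, "443.9": 0.9, "other": -0.3}

# Convert coefficients to integer points (here, units of 0.3 log-odds):
points = {code: round(w / 0.3) for code, w in weights.items()}

def code_score(patient_codes):
    """Model-based integer code score summed over a patient's billing codes."""
    return sum(points.get(c, 0) for c in patient_codes)

def model_based_pad(patient_codes, threshold=8):
    return code_score(patient_codes) >= threshold

def simple_pad(patient_codes):
    """The simpler comparator: any ICD-9 code in 440.20-440.29 present."""
    return any(c.startswith("440.2") for c in patient_codes)
```

Rounding coefficients to integer points is a standard way to turn a regression into a bedside or administrative score; the threshold would be chosen from the training set's ROC curve.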
An electrostatically and a magnetically confined electron gun lens system
NASA Technical Reports Server (NTRS)
Bernius, Mark T.; Man, Kin F.; Chutjian, Ara
1988-01-01
Focal properties, electron trajectory calculations, and geometries are given for two electron 'gun' lens systems that have a variety of applications in, for example, electron-neutral and electron-ion scattering experiments. One nine-lens system utilizes only electrostatic confinement and is capable of focusing electrons onto a fixed target with extremely small divergence angles, over a range of final energies 1-790 eV. The second gun lens system is a simpler three-lens system suitable for use in a uniform, solenoidal magnetic field. While the focusing properties of such a magnetically confined lens system are simpler to deal with, the system does illustrate features of electron extraction and Brillouin flow that have not been suitably emphasized in the literature.
NASA Astrophysics Data System (ADS)
Li, Yutong; Wang, Yuxin; Duffy, Alex H. B.
2014-11-01
Computer-based conceptual design for routine design has made great strides, yet non-routine design has received little attention and remains poorly automated. Since the function-behavior-structure (FBS) model is widely used for modeling the conceptual design process, a computer-based creativity-enhanced conceptual design model (CECD) for non-routine design of mechanical systems is presented. In the model, the leaf functions in the FBS model are decomposed into and represented with fine-grain basic operation actions (BOA), and the corresponding BOA set in the function domain is then constructed. By choosing building blocks from the database and expressing their multiple functions with BOAs, the BOA set in the structure domain is formed. Through rule-based dynamic partition of the BOA set in the function domain, many variants of regenerated functional schemes are generated. To enhance the capability to introduce new design variables into the conceptual design process and to uncover more innovative physical structure schemes, an indirect function-structure matching strategy based on reconstructing combined structure schemes is adopted. By adjusting the tightness of the partition rules and the granularity of the divided BOA subsets, and by making full use of the main and secondary functions of each basic structure when reconstructing the physical structures, new design variables and variants are introduced into the reconstruction process, and a large number of simpler physical structure schemes that organically accomplish the overall function are obtained. The creativity-enhanced conceptual design model thus has a strong capability to introduce new design variables in the function domain and to find simpler physical structures that accomplish the overall function, and it can therefore be used to solve non-routine conceptual design problems.
Climate Model Ensemble Methodology: Rationale and Challenges
NASA Astrophysics Data System (ADS)
Vezer, M. A.; Myrvold, W.
2012-12-01
A tractable model of the Earth's atmosphere, or, indeed, any large, complex system, is inevitably unrealistic in a variety of ways. This will have an effect on the model's output. Nonetheless, we want to be able to rely on certain features of the model's output in studies aiming to detect, attribute, and project climate change. For this, we need assurance that these features reflect the target system, and are not artifacts of the unrealistic assumptions that go into the model. One technique for overcoming these limitations is to study ensembles of models which employ different simplifying assumptions and different methods of modelling. One then either takes as reliable certain outputs on which models in the ensemble agree, or takes the average of these outputs as the best estimate. Since the Intergovernmental Panel on Climate Change's Fourth Assessment Report (IPCC AR4) modellers have aimed to improve ensemble analysis by developing techniques to account for dependencies among models, and to ascribe unequal weights to models according to their performance. The goal of this paper is to present as clearly and cogently as possible the rationale for climate model ensemble methodology, the motivation of modellers to account for model dependencies, and their efforts to ascribe unequal weights to models. The method of our analysis is as follows. We will consider a simpler, well-understood case of taking the mean of a number of measurements of some quantity. Contrary to what is sometimes said, it is not a requirement of this practice that the errors of the component measurements be independent; one must, however, compensate for any lack of independence. We will also extend the usual accounts to include cases of unknown systematic error. We draw parallels between this simpler illustration and the more complex example of climate model ensembles, detailing how ensembles can provide more useful information than any of their constituent models. 
This account emphasizes the epistemic importance of considering degrees of model dependence, and the practice of ascribing unequal weights to models of unequal skill.
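The simpler illustration invoked above, the mean of correlated measurements, has a compact closed form worth making explicit: with equal variances s² and pairwise correlation ρ, Var(mean) = (s²/n)(1 + (n-1)ρ), so correlated measurements buy less precision than independent ones. A short check of this, with invented numbers:

```python
import numpy as np

def var_of_mean(cov):
    """Variance of the equal-weights mean given the error covariance matrix."""
    n = cov.shape[0]
    w = np.full(n, 1.0 / n)
    return w @ cov @ w

# Five measurements, unit variance, pairwise correlation 0.5 (illustrative):
n, s2, rho = 5, 1.0, 0.5
cov = s2 * (np.full((n, n), rho) + (1 - rho) * np.eye(n))

v = var_of_mean(cov)
expected = s2 / n * (1 + (n - 1) * rho)
```

This is the sense in which one must "compensate for any lack of independence": the covariance matrix, not just the individual variances, determines how much an ensemble mean can be trusted.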
Systems biology perspectives on minimal and simpler cells.
Xavier, Joana C; Patil, Kiran Raosaheb; Rocha, Isabel
2014-09-01
The concept of the minimal cell has fascinated scientists for a long time, from both fundamental and applied points of view. This broad concept encompasses extreme reductions of genomes, the last universal common ancestor (LUCA), the creation of semiartificial cells, and the design of protocells and chassis cells. Here we review these different areas of research and identify common and complementary aspects of each one. We focus on systems biology, a discipline that is greatly facilitating the classical top-down and bottom-up approaches toward minimal cells. In addition, we also review the so-called middle-out approach and its contributions to the field with mathematical and computational models. Owing to the advances in genomics technologies, much of the work in this area has been centered on minimal genomes, or rather minimal gene sets, required to sustain life. Nevertheless, a fundamental expansion has been taking place in the last few years wherein the minimal gene set is viewed as a backbone of a more complex system. Complementing genomics, progress is being made in understanding the system-wide properties at the levels of the transcriptome, proteome, and metabolome. Network modeling approaches are enabling the integration of these different omics data sets toward an understanding of the complex molecular pathways connecting genotype to phenotype. We review key concepts central to the mapping and modeling of this complexity, which is at the heart of research on minimal cells. Finally, we discuss the distinction between minimizing the number of cellular components and minimizing cellular complexity, toward an improved understanding and utilization of minimal and simpler cells. Copyright © 2014, American Society for Microbiology. All Rights Reserved.
Reduction of a linear complex model for respiratory system during Airflow Interruption.
Jablonski, Ireneusz; Mroczka, Janusz
2010-01-01
The paper presents a methodology for reducing a complex model to a simpler version - an identifiable inverse model. Its main tool is a numerical procedure of sensitivity analysis (structural and parametric) applied to the forward linear equivalent designed for the conditions of the interrupter experiment. The final result - a reduced analog for the interrupter technique - is especially noteworthy, as it fills a major gap in occlusional measurements, which typically use simple one- or two-element physical representations. The proposed reduced electrical circuit, a structural combination of resistive, inertial and elastic properties, can be regarded as a candidate for reliable reconstruction and quantification (in the time and frequency domains) of the dynamic behavior of the respiratory system in response to a quasi-step excitation by valve closure.
A high accuracy magnetic heading system composed of fluxgate magnetometers and a microcomputer
NASA Astrophysics Data System (ADS)
Liu, Sheng-Wu; Zhang, Zhao-Nian; Hung, James C.
The authors present a magnetic heading system consisting of two fluxgate magnetometers and a single-chip microcomputer. The system, when compared to gyro compasses, is smaller in size, lighter in weight, simpler in construction, quicker in reaction time, free from drift, and more reliable. Using a microcomputer in the system, heading error due to compass deviation, sensor offsets, scale factor uncertainty, and sensor tilts can be compensated with the help of an error model. The laboratory test of a typical system showed that the accuracy of the system was improved from more than 8 deg error without error compensation to less than 0.3 deg error with compensation.
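The heading computation and error compensation described in this abstract can be sketched in a few lines. This is a minimal illustration only, assuming a level-mounted two-axis sensor; the offset and scale parameters stand in for the paper's error model (compass deviation and tilt compensation are omitted), and the sign convention is an assumption:

```python
import math

def compensated_heading(bx, by, offset_x=0.0, offset_y=0.0,
                        scale_x=1.0, scale_y=1.0):
    """Magnetic heading (degrees) from two horizontal fluxgate axes.

    offset_* and scale_* are illustrative stand-ins for the error-model
    terms the paper compensates (sensor offsets, scale-factor
    uncertainty); real values would come from calibration.
    """
    x = (bx - offset_x) * scale_x
    y = (by - offset_y) * scale_y
    # atan2(-y, x): heading increases clockwise from magnetic north
    # (sign convention assumed, not taken from the paper).
    return math.degrees(math.atan2(-y, x)) % 360.0
```

For example, a field reading aligned with the sensor's x axis yields a heading of 0 degrees, while a purely negative-y reading yields 90 degrees.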
Real-time advanced spinal surgery via visible patient model and augmented reality system.
Wu, Jing-Ren; Wang, Min-Liang; Liu, Kai-Che; Hu, Ming-Hsien; Lee, Pei-Yuan
2014-03-01
This paper presents an advanced augmented reality system for spinal surgery assistance, and develops entry-point guidance prior to vertebroplasty spinal surgery. Based on image-based marker detection and tracking, the proposed camera-projector system superimposes pre-operative 3-D images onto patients. The patients' preoperative 3-D image model is registered by projecting it onto the patient such that the synthetic 3-D model merges with the real patient image, enabling the surgeon to see through the patients' anatomy. The proposed method is much simpler than heavy and computationally challenging navigation systems, and also reduces radiation exposure. The system is experimentally tested on a preoperative 3D model, dummy patient model and animal cadaver model. The feasibility and accuracy of the proposed system is verified on three patients undergoing spinal surgery in the operating theater. The results of these clinical trials are extremely promising, with surgeons reporting favorably on the reduced time of finding a suitable entry point and reduced radiation dose to patients. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Alternative to Ritt's pseudodivision for finding the input-output equations of multi-output models.
Meshkat, Nicolette; Anderson, Chris; DiStefano, Joseph J
2012-09-01
Differential algebra approaches to structural identifiability analysis of a dynamic system model in many instances heavily depend upon Ritt's pseudodivision at an early step in analysis. The pseudodivision algorithm is used to find the characteristic set, of which a subset, the input-output equations, is used for identifiability analysis. A simpler algorithm is proposed for this step, using Gröbner Bases, along with a proof of the method that includes a reduced upper bound on derivative requirements. Efficacy of the new algorithm is illustrated with several biosystem model examples. Copyright © 2012 Elsevier Inc. All rights reserved.
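As a rough illustration of the kind of elimination step a Gröbner-basis approach performs (a generic sketch, not the authors' algorithm), a lexicographic-order Gröbner basis can eliminate a variable from a toy polynomial system using sympy; the polynomials below are invented for illustration:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Toy system standing in for the state-elimination step: with lex order
# x > y, the basis contains a polynomial in y alone, analogous to an
# input-output equation free of unobserved states.
G = sp.groebner([x**2 + y**2 - 1, x - y], x, y, order='lex')

# The last basis element involves only y.
print(G.exprs[-1])
```

Here substituting x = y into the first polynomial gives 2y² − 1, which is exactly the eliminated relation the basis exposes.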
The Effect of Sensor Performance on Safe Minefield Transit
2002-12-01
the results of the simpler model are not good approximations of the results obtained with the more complex model, suggesting that even greater complexity in maneuver modeling may be desirable for some purposes.
On the limitations of General Circulation Climate Models
NASA Technical Reports Server (NTRS)
Stone, Peter H.; Risbey, James S.
1990-01-01
General Circulation Models (GCMs) by definition calculate large-scale dynamical and thermodynamical processes and their associated feedbacks from first principles. This aspect of GCMs is widely believed to give them an advantage in simulating global scale climate changes as compared to simpler models which do not calculate the large-scale processes from first principles. However, it is pointed out that the meridional transports of heat simulated by GCMs used in climate change experiments differ from observational analyses and from other GCMs by as much as a factor of two. It is also demonstrated that GCM simulations of the large-scale transports of heat are sensitive to the (uncertain) subgrid-scale parameterizations. This raises the question of whether current GCMs are in fact superior to simpler models for simulating temperature changes associated with global scale climate change.
NASA Astrophysics Data System (ADS)
Broadbent, A. M.; Georgescu, M.; Krayenhoff, E. S.; Sailor, D.
2017-12-01
Utility-scale solar power plants are a rapidly growing component of the solar energy sector. Utility-scale photovoltaic (PV) solar power generation in the United States has increased by 867% since 2012 (EIA, 2016). This expansion is likely to continue as the cost of PV technologies decreases. While most agree that solar power can decrease greenhouse gas emissions, the biophysical effects of PV systems on surface energy balance (SEB), and implications for surface climate, are not well understood. To our knowledge, there has never been a detailed observational study of SEB at a utility-scale solar array. This study presents data from an eddy covariance observational tower, temporarily placed above a utility-scale PV array in Southern Arizona. Comparison of the PV SEB with a reference (unmodified) site shows that solar panels can alter the SEB and near-surface climate. SEB observations are used to develop and validate a new and more complete SEB PV model. In addition, the PV model is compared to simpler PV modelling methods. The simpler PV models produce results that differ from those of our newly developed model and cannot capture the more complex processes that influence PV SEB. Finally, hypothetical scenarios of PV expansion across the continental United States (CONUS) were developed using various spatial mapping criteria. CONUS simulations of PV expansion reveal regional variability in the biophysical effects of PV expansion. The study presents the first rigorous and validated simulations of the biophysical effects of utility-scale PV arrays.
Control algorithms and applications of the wavefront sensorless adaptive optics
NASA Astrophysics Data System (ADS)
Ma, Liang; Wang, Bin; Zhou, Yuanshen; Yang, Huizhen
2017-10-01
Compared with the conventional adaptive optics (AO) system, the wavefront sensorless (WFSless) AO system does not need to measure and reconstruct the wavefront. It is simpler than conventional AO in system architecture and can be applied under complex conditions. Based on an analysis of the principle and system model of the WFSless AO system, wavefront correction methods for WFSless AO systems were divided into two categories: model-free and model-based control algorithms. A WFSless AO system based on model-free control algorithms commonly treats the performance metric as a function of the control parameters and then uses a control algorithm to improve that metric. The model-based control algorithms include modal control algorithms, nonlinear control algorithms and control algorithms based on geometrical optics. Following a brief description of these typical control algorithms, hybrid methods combining model-free with model-based control algorithms are summarized. Additionally, the characteristics of the various control algorithms are compared and analyzed. We also discuss the extensive applications of WFSless AO systems in free-space optical communication (FSO), retinal imaging in the human eye, confocal microscopy, coherent beam combination (CBC) techniques and extended objects.
NASA Astrophysics Data System (ADS)
Hajigeorgiou, Photos G.
2016-12-01
An analytical model for the diatomic potential energy function that was recently tested as a universal function (Hajigeorgiou, 2010) has been further modified and tested as a suitable model for direct-potential-fit analysis. Applications are presented for the ground electronic states of three diatomic molecules: oxygen, carbon monoxide, and hydrogen fluoride. The adjustable parameters of the extended Lennard-Jones potential model are determined through nonlinear regression by fits to calculated rovibrational energy term values or experimental spectroscopic line positions. The model is shown to lead to reliable, compact and simple representations for the potential energy functions of these systems and could therefore be classified as a suitable and attractive model for direct-potential-fit analysis.
All-atom ensemble modeling to analyze small angle X-ray scattering of glycosylated proteins
Guttman, Miklos; Weinkam, Patrick; Sali, Andrej; Lee, Kelly K.
2013-01-01
The flexible and heterogeneous nature of carbohydrate chains often renders glycoproteins refractory to traditional structure determination methods. Small-angle X-ray scattering (SAXS) can be a useful tool for obtaining structural information on these systems. All-atom modeling of glycoproteins with flexible glycan chains was applied to interpret solution SAXS data for a set of glycoproteins. For simpler systems (a single glycan, with a well-defined protein structure), all-atom modeling generates models in excellent agreement with the scattering pattern and reveals the approximate spatial occupancy of the glycan chain in solution. For more complex systems (several glycan chains, or unknown protein substructure), the approach can still provide insightful models, though the orientations of glycans become poorly determined. Ab initio shape reconstructions appear to capture the global morphology of glycoproteins, but in most cases offer little information about glycan spatial occupancy. The all-atom modeling methodology is available as a webserver at http://modbase.compbio.ucsf.edu/allosmod-foxs. PMID:23473666
Zhou, Kun; Gao, Chun-Fang; Zhao, Yun-Peng; Liu, Hai-Lin; Zheng, Rui-Dan; Xian, Jian-Chun; Xu, Hong-Tao; Mao, Yi-Min; Zeng, Min-De; Lu, Lun-Gen
2010-09-01
In recent years, a great interest has been dedicated to the development of noninvasive predictive models to substitute liver biopsy for fibrosis assessment and follow-up. Our aim was to provide a simpler model consisting of routine laboratory markers for predicting liver fibrosis in patients chronically infected with hepatitis B virus (HBV) in order to optimize their clinical management. Liver fibrosis was staged in 386 chronic HBV carriers who underwent liver biopsy and routine laboratory testing. Correlations between routine laboratory markers and fibrosis stage were statistically assessed. After logistic regression analysis, a novel predictive model was constructed. This S index was validated in an independent cohort of 146 chronic HBV carriers in comparison to the SLFG model, Fibrometer, Hepascore, Hui model, Forns score and APRI using receiver operating characteristic (ROC) curves. The diagnostic values of each marker panels were better than single routine laboratory markers. The S index consisting of gamma-glutamyltransferase (GGT), platelets (PLT) and albumin (ALB) (S index = 1000 × GGT / (PLT × ALB²)) had a higher diagnostic accuracy in predicting degree of fibrosis than any other mathematical model tested. The areas under the ROC curves (AUROC) were 0.812 and 0.890 for predicting significant fibrosis and cirrhosis in the validation cohort, respectively. The S index, a simpler mathematical model consisting of routine laboratory markers predicts significant fibrosis and cirrhosis in patients with chronic HBV infection with a high degree of accuracy, potentially decreasing the need for liver biopsy.
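The S index described in this abstract is a direct arithmetic formula and can be computed as below. The units are assumptions (conventional laboratory units: GGT in U/L, platelets in 10⁹/L, albumin in g/dL), since the abstract does not state them, and the diagnostic cutoffs are not reproduced here:

```python
def s_index(ggt, plt, alb):
    """S index = 1000 * GGT / (PLT * ALB^2), as given in the abstract.

    Units assumed for illustration: ggt in U/L, plt in 10^9/L,
    alb in g/dL; the abstract does not specify them.
    """
    return 1000.0 * ggt / (plt * alb ** 2)

# Example with plausible laboratory values (illustrative only):
print(s_index(50.0, 200.0, 4.0))  # 1000*50 / (200*16) = 15.625
```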
NASA Astrophysics Data System (ADS)
Cisneros, Rafael; Gao, Rui; Ortega, Romeo; Husain, Iqbal
2016-10-01
The present paper proposes a maximum power extraction control for a wind system consisting of a turbine, a permanent magnet synchronous generator, a rectifier, a load and one constant voltage source, which is used to form the DC bus. We propose a linear PI controller, based on passivity, whose stability is guaranteed under practically reasonable assumptions. PI structures are widely accepted in practice as they are easier to tune and simpler than other existing model-based methods. Real switching based simulations have been performed to assess the performance of the proposed controller.
Simulation of a navigator algorithm for a low-cost GPS receiver
NASA Technical Reports Server (NTRS)
Hodge, W. F.
1980-01-01
The analytical structure of an existing navigator algorithm for a low cost global positioning system receiver is described in detail to facilitate its implementation on in-house digital computers and real-time simulators. The material presented includes a simulation of GPS pseudorange measurements, based on a two-body representation of the NAVSTAR spacecraft orbits, and a four component model of the receiver bias errors. A simpler test for loss of pseudorange measurements due to spacecraft shielding is also noted.
NASA Technical Reports Server (NTRS)
Rubesin, M. W.; Rose, W. C.
1973-01-01
The time-dependent, turbulent mean-flow, Reynolds stress, and heat flux equations in mass-averaged dependent variables are presented. These equations are given in conservative form for both generalized orthogonal and axisymmetric coordinates. For the case of small viscosity and thermal conductivity fluctuations, these equations are considerably simpler than the general Reynolds system of dependent variables for a compressible fluid and permit a more direct extension of low speed turbulence modeling to computer codes describing high speed turbulence fields.
A LEAST ABSOLUTE SHRINKAGE AND SELECTION OPERATOR (LASSO) FOR NONLINEAR SYSTEM IDENTIFICATION
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.; Lofberg, Johan; Brenner, Martin J.
2006-01-01
Identification of parametric nonlinear models involves estimating unknown parameters and detecting the underlying structure. Structure computation is concerned with selecting a subset of parameters to give a parsimonious description of the system, which may afford greater insight into the functionality of the system or a simpler controller design. In this study, a least absolute shrinkage and selection operator (LASSO) technique is investigated for computing efficient model descriptions of nonlinear systems. The LASSO minimises the residual sum of squares by the addition of an ℓ1 penalty term on the parameter vector of the traditional ℓ2 minimisation problem. Its use for structure detection is a natural extension of this constrained minimisation approach to pseudolinear regression problems which produces some model parameters that are exactly zero and, therefore, yields a parsimonious system description. The performance of this LASSO structure detection method was evaluated by using it to estimate the structure of a nonlinear polynomial model. Applicability of the method to more complex systems such as those encountered in aerospace applications was shown by identifying a parsimonious system description of the F/A-18 Active Aeroelastic Wing using flight test data.
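The structure-detection property described here can be seen in a minimal coordinate-descent LASSO: the soft-thresholding update drives irrelevant coefficients exactly to zero. This is a generic sketch of the technique, not the authors' implementation or the F/A-18 application; the toy data are invented:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Minimise 0.5*||y - X w||^2 + lam*||w||_1 by coordinate descent.

    Soft-thresholding zeroes small coefficients exactly, which is what
    makes the l1 penalty useful for structure detection.
    """
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Residual with coordinate j's contribution added back.
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r
            # Soft-threshold update for coordinate j.
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w

# Toy regression: only the first two of five regressors matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.01 * rng.normal(size=100)
w = lasso_cd(X, y, lam=5.0)
print(np.round(w, 3))  # the three irrelevant coefficients shrink to (near) zero
```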
Water balance models in one-month-ahead streamflow forecasting
Alley, William M.
1985-01-01
Techniques are tested that incorporate information from water balance models in making 1-month-ahead streamflow forecasts in New Jersey. The results are compared to those based on simple autoregressive time series models. The relative performance of the models is dependent on the month of the year in question. The water balance models are most useful for forecasts of April and May flows. For the stations in northern New Jersey, the April and May forecasts were made in order of decreasing reliability using the water-balance-based approaches, using the historical monthly means, and using simple autoregressive models. The water balance models were useful to a lesser extent for forecasts during the fall months. For the rest of the year the improvements in forecasts over those obtained using the simpler autoregressive models were either very small or the simpler models provided better forecasts. When using the water balance models, monthly corrections for bias are found to improve minimum mean-square-error forecasts as well as to improve estimates of the forecast conditional distributions.
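The simple autoregressive benchmark mentioned in this abstract can be sketched as a one-step-ahead AR(1) forecast. The fitting method and the series below are illustrative assumptions, not the study's models or data:

```python
import numpy as np

def ar1_forecast(flows):
    """One-step-ahead forecast from an AR(1) model fitted by least squares.

    flows: sequence of monthly streamflow values. The forecast is
    mean + phi * (last value - mean), with phi the lag-1 regression slope.
    """
    x = np.asarray(flows, dtype=float)
    mean = x.mean()
    d = x - mean
    phi = (d[:-1] @ d[1:]) / (d[:-1] @ d[:-1])  # lag-1 slope estimate
    return mean + phi * (x[-1] - mean)

# A mildly persistent toy series: the forecast sits between the series
# mean and the most recent observation.
series = [10, 12, 11, 13, 12, 14, 13, 15]
print(round(float(ar1_forecast(series)), 2))
```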
Cooperation of catalysts and templates
NASA Technical Reports Server (NTRS)
White, D. H.; Kanavarioti, A.; Nibley, C. W.; Macklin, J. W.
1986-01-01
In order to understand how self-reproducing molecules could have originated on the primitive Earth or extraterrestrial bodies, it would be useful to find laboratory models of simple molecules which are able to carry out processes of catalysis and templating. Furthermore, it may be anticipated that systems in which several components are acting cooperatively to catalyze each other's synthesis will have different behavior with respect to natural selection than those of purely replicating systems. As the major focus of this work, laboratory models are devised to study the influence of short peptide catalysts on template reactions which produce oligonucleotides or additional peptides. Such catalysts could have been the earliest protoenzymes of selective advantage produced by replicating oligonucleotides. Since this is a complex problem, simpler systems are also studied which embody only one aspect at a time, such as peptide formation with and without a template, peptide catalysis of nontemplated peptide synthesis, and model reactions for replication of the type pioneered by Orgel.
High-Fidelity Dynamic Modeling of Spacecraft in the Continuum--Rarefied Transition Regime
NASA Astrophysics Data System (ADS)
Turansky, Craig P.
The state of the art of spacecraft rarefied aerodynamics seldom accounts for detailed rigid-body dynamics. In part because of computational constraints, simpler models based upon the ballistic and drag coefficients are employed. Of particular interest is the continuum-rarefied transition regime of Earth's thermosphere where gas dynamic simulation is difficult yet wherein many spacecraft operate. The feasibility of increasing the fidelity of modeling spacecraft dynamics is explored by coupling rarefied aerodynamics with rigid-body dynamics modeling similar to that traditionally used for aircraft in atmospheric flight. Presented is a framework of analysis and guiding principles which capitalize on the availability of increasing computational methods and resources. Aerodynamic force inputs for modeling spacecraft in two dimensions in a rarefied flow are provided by analytical equations in the free-molecular regime, and the direct simulation Monte Carlo method in the transition regime. The application of the direct simulation Monte Carlo method to this class of problems is examined in detail with a new code specifically designed for engineering-level rarefied aerodynamic analysis. Time-accurate simulations of two distinct geometries in low thermospheric flight and atmospheric entry are performed, demonstrating non-linear dynamics that cannot be predicted using simpler approaches. The results of this straightforward approach to the aero-orbital coupled-field problem highlight the possibilities for future improvements in drag prediction, control system design, and atmospheric science. Furthermore, a number of challenges for future work are identified in the hope of stimulating the development of a new subfield of spacecraft dynamics.
Venous thromboembolism prevention guidelines for medical inpatients: mind the (implementation) gap.
Maynard, Greg; Jenkins, Ian H; Merli, Geno J
2013-10-01
Hospital-associated nonsurgical venous thromboembolism (VTE) is an important problem addressed by new guidelines from the American College of Physicians (ACP) and American College of Chest Physicians (AT9). Narrative review and critique. Both guidelines discount asymptomatic VTE outcomes and caution against overprophylaxis, but have different methodologies and estimates of risk/benefit. Guideline complexity and lack of consensus on VTE risk assessment contribute to an implementation gap. Methods to estimate prophylaxis benefit have significant limitations because major trials included mostly screening-detected events. AT9 relies on a single Italian cohort study to conclude that those with a Padua score ≥4 have a very high VTE risk, whereas patients with a score <4 (60% of patients) have a very small risk. However, the cohort population has less comorbidity than US inpatients, and over 1% of patients with a score of 3 suffered pulmonary emboli. The ACP guideline does not endorse any risk-assessment model. AT9 includes the Padua model and Caprini point-based system for nonsurgical inpatients and surgical inpatients, respectively, but there is no evidence they are more effective than simpler risk-assessment models. New VTE prevention guidelines provide varied guidance on important issues including risk assessment. If Padua is used, a threshold of 3, as well as 4, should be considered. Simpler VTE risk-assessment models may be superior to complicated point-based models in environments without sophisticated clinical decision support. © 2013 Society of Hospital Medicine.
NASA Astrophysics Data System (ADS)
Daniel, M.; Lemonsu, Aude; Déqué, M.; Somot, S.; Alias, A.; Masson, V.
2018-06-01
Most climate models do not explicitly model urban areas and at best describe them as rock covers. Nonetheless, the very high resolutions now reached by regional climate models may justify and require a more realistic parameterization of surface exchanges between the urban canopy and the atmosphere. To quantify the potential impact of urbanization on the regional climate, and to evaluate the benefits of a detailed urban canopy model compared with a simpler approach, a sensitivity study was carried out over France at a 12-km horizontal resolution with the ALADIN-Climate regional model for the 1980-2009 time period. Different descriptions of land use and urban modeling were compared, corresponding to an explicit modeling of cities with the urban canopy model TEB, a conventional and simpler approach representing urban areas as rocks, and a vegetated experiment for which cities are replaced by natural covers. A general evaluation of ALADIN-Climate was first conducted, which showed an overestimation of the incoming solar radiation but satisfying results in terms of precipitation and near-surface temperatures. The sensitivity analysis then highlighted that urban areas had a significant impact on modeled near-surface temperature. A further analysis of a few large French cities indicated that over the 30 years of simulation they all induced a warming effect at both daytime and nighttime, with values up to + 1.5 °C for the city of Paris. The urban model also led to a regional warming extending beyond the boundaries of the urban areas. Finally, the comparison to temperature observations available for the Paris area highlighted that the detailed urban canopy model improved the modeling of the urban heat island compared with a simpler approach.
Temperature modelling and prediction for activated sludge systems.
Lippi, S; Rosso, D; Lubello, C; Canziani, R; Stenstrom, M K
2009-01-01
Temperature is an important factor affecting biomass activity, which is critical to maintaining efficient biological wastewater treatment, as well as physicochemical properties of the mixed liquor such as dissolved oxygen saturation and settling velocity. Controlling temperature is not normally possible for treatment systems, but incorporating factors impacting temperature in the design process, such as the aeration system, surface-to-volume ratio, and tank geometry, can reduce the range of temperature extremes and improve overall process performance. Determining how much these design or upgrade options affect the tank temperature requires a temperature model that can be used with existing design methodologies. This paper presents a new steady-state temperature model developed by incorporating the best aspects of previously published models, introducing new functions for selected heat-exchange paths, and improving the method for predicting the effects of covering aeration tanks. Numerical improvements with embedded reference data provide a simpler formulation, faster execution, and easier sensitivity analyses using an ordinary spreadsheet. The paper presents several cases to validate the model.
Study of ICRF wave propagation and plasma coupling efficiency in a linear magnetic mirror device
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng, S.Y.
1991-07-01
Ion Cyclotron Range of Frequency (ICRF) wave propagation in an inhomogeneous axial magnetic field in a cylindrical plasma-vacuum system has historically been inadequately modelled. Previous works either sacrifice the cylindrical geometry in favor of a simpler slab geometry, concentrate on the resonance region, use a single mode to represent the entire field structure, or examine only radial propagation. This thesis performs both analytical and computational studies to model the ICRF wave-plasma coupling and propagation problem, and experimental analysis is also conducted to compare experimental results with theoretical predictions. The theoretical studies simulate the propagation of ICRF waves in an axially inhomogeneous magnetic field and in cylindrical geometry. Two theoretical analyses are undertaken - an analytical study and a computational study. The analytical study treats the inhomogeneous magnetic field by transforming the (r,z) coordinates into another coordinate system ({rho},{xi}) that allows the solution of the fields with much simpler boundaries. The plasma fields are then Fourier transformed into two coupled convolution-integral equations, which are then differenced and solved for both the perpendicular mode number {alpha} and the complete EM fields. The computational study involves a multiple-eigenmode computational analysis of the fields that exist within the plasma-vacuum system. The inhomogeneous axial field is treated by dividing the geometry into a series of transverse axial slices and using a constant dielectric tensor in each individual slice. The slices are then connected by longitudinal boundary conditions.
Reducing Cascading Failure Risk by Increasing Infrastructure Network Interdependence.
Korkali, Mert; Veneman, Jason G; Tivnan, Brian F; Bagrow, James P; Hines, Paul D H
2017-03-20
Increased interconnection between critical infrastructure networks, such as electric power and communications systems, has important implications for infrastructure reliability and security. Others have shown that increased coupling between networks that are vulnerable to internetwork cascading failures can increase vulnerability. However, the mechanisms of cascading in these models differ from those in real systems and such models disregard new functions enabled by coupling, such as intelligent control during a cascade. This paper compares the robustness of simple topological network models to models that more accurately reflect the dynamics of cascading in a particular case of coupled infrastructures. First, we compare a topological contagion model to a power grid model. Second, we compare a percolation model of internetwork cascading to three models of interdependent power-communication systems. In both comparisons, the more detailed models suggest substantially different conclusions, relative to the simpler topological models. In all but the most extreme case, our model of a "smart" power network coupled to a communication system suggests that increased power-communication coupling decreases vulnerability, in contrast to the percolation model. Together, these results suggest that robustness can be enhanced by interconnecting networks with complementary capabilities if modes of internetwork failure propagation are constrained.
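A topological contagion model of the kind this abstract contrasts with detailed grid dynamics can be sketched in a few lines. The graph, threshold, and update rule below are illustrative assumptions, not the paper's models:

```python
def cascade(adj, seed_failures, threshold=0.5):
    """Topological contagion: a node fails once the fraction of its failed
    neighbours reaches `threshold`. A deliberately simple model of the
    kind the abstract argues can mislead relative to detailed dynamics.
    """
    failed = set(seed_failures)
    changed = True
    while changed:
        changed = False
        for node, nbrs in adj.items():
            if node in failed or not nbrs:
                continue
            if sum(n in failed for n in nbrs) / len(nbrs) >= threshold:
                failed.add(node)
                changed = True
    return failed

# Small ring of 6 nodes; two adjacent seed failures propagate around
# the whole ring because each healthy node soon sees half its
# neighbours failed.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(sorted(cascade(ring, {0, 1})))  # all six nodes fail
```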
Preparation of name and address data for record linkage using hidden Markov models
Churches, Tim; Christen, Peter; Lim, Kim; Zhu, Justin Xi
2002-01-01
Background Record linkage refers to the process of joining records that relate to the same entity or event in one or more data collections. In the absence of a shared, unique key, record linkage involves the comparison of ensembles of partially-identifying, non-unique data items between pairs of records. Data items with variable formats, such as names and addresses, need to be transformed and normalised in order to validly carry out these comparisons. Traditionally, deterministic rule-based data processing systems have been used to carry out this pre-processing, which is commonly referred to as "standardisation". This paper describes an alternative approach to standardisation, using a combination of lexicon-based tokenisation and probabilistic hidden Markov models (HMMs). Methods HMMs were trained to standardise typical Australian name and address data drawn from a range of health data collections. The accuracy of the results was compared to that produced by rule-based systems. Results Training of HMMs was found to be quick and did not require any specialised skills. For addresses, HMMs produced equal or better standardisation accuracy than a widely-used rule-based system. However, accuracy was worse when used with simpler name data. Possible reasons for this poorer performance are discussed. Conclusion Lexicon-based tokenisation and HMMs provide a viable and effort-effective alternative to rule-based systems for pre-processing more complex variably formatted data such as addresses. Further work is required to improve the performance of this approach with simpler data such as names. Software which implements the methods described in this paper is freely available under an open source license for other researchers to use and improve. PMID:12482326
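The HMM standardisation step described above can be sketched with a tiny Viterbi decoder: tokens from the lexicon-based tokenisation are assigned their most likely sequence of hidden segment labels. The states, observations, and probabilities below are illustrative toys, not trained values from the paper:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for an observation sequence,
    computed in log-space to avoid numerical underflow."""
    V = [{s: (math.log(start_p[s]) + math.log(emit_p[s].get(obs[0], 1e-12)), [s])
          for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, path = max(
                (V[t - 1][prev][0] + math.log(trans_p[prev].get(s, 1e-12))
                 + math.log(emit_p[s].get(obs[t], 1e-12)), V[t - 1][prev][1])
                for prev in states)
            V[t][s] = (prob, path + [s])
    return max(V[-1].values())[1]

# Hypothetical address-tagging HMM (toy probabilities, invented labels).
states = ["number", "street", "type"]
start = {"number": 0.8, "street": 0.15, "type": 0.05}
trans = {"number": {"street": 0.9, "type": 0.1},
         "street": {"street": 0.3, "type": 0.7},
         "type":   {"type": 1.0}}
emit = {"number": {"42": 0.9},
        "street": {"wattle": 0.5, "42": 0.01},
        "type":   {"st": 0.8}}
print(viterbi(["42", "wattle", "st"], states, start, trans, emit))
# -> ['number', 'street', 'type']
```

In a real system the transition and emission tables would be learned from the kind of training data the paper describes, rather than set by hand.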
Appleton, D J; Rand, J S; Sunvold, G D
2005-06-01
The objective of this study was to compare simpler indices of insulin sensitivity with the minimal model-derived insulin sensitivity index to identify a simple and reliable alternative method for assessing insulin sensitivity in cats. In addition, we aimed to determine whether this simpler measure or measures showed consistency of association across differing body weights and glucose tolerance levels. Data from glucose tolerance and insulin sensitivity tests performed in 32 cats with varying body weights (underweight to obese), including seven cats with impaired glucose tolerance, were used to assess the relationship between Bergman's minimal model-derived insulin sensitivity index (S(I)), and various simpler measures of insulin sensitivity. The most useful overall predictors of insulin sensitivity were basal plasma insulin concentrations and the homeostasis model assessment (HOMA), which is the product of basal glucose and insulin concentrations divided by 22.5. It is concluded that measurement of plasma insulin concentrations in cats with food withheld for 24 h, in conjunction with HOMA, could be used in clinical research projects and by practicing veterinarians to screen for reduced insulin sensitivity in cats. Such cats may be at increased risk of developing impaired glucose tolerance and type 2 diabetes mellitus. Early detection of these cats would enable preventative intervention programs such as weight reduction, increased physical activity and dietary modifications to be instigated.
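The HOMA computation described above is simple enough to sketch directly. A minimal version, assuming glucose in mmol/L and insulin in µU/mL (the units under which the 22.5 constant is conventionally used):

```python
def homa(basal_glucose: float, basal_insulin: float) -> float:
    """Homeostasis model assessment (HOMA): the product of basal glucose
    and insulin concentrations divided by 22.5, as described above.
    Assumes glucose in mmol/L and insulin in microU/mL."""
    if basal_glucose <= 0 or basal_insulin <= 0:
        raise ValueError("concentrations must be positive")
    return basal_glucose * basal_insulin / 22.5

# Example: glucose 5.0 mmol/L, insulin 9.0 microU/mL -> HOMA of 2.0
print(homa(5.0, 9.0))
```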
NASA Astrophysics Data System (ADS)
Ramirez, Andres; Rahnemoonfar, Maryam
2017-04-01
A hyperspectral image provides a data-rich, multidimensional representation consisting of hundreds of spectral bands. Analyzing the spectral and spatial information of such an image with linear and non-linear algorithms results in high computational times. To overcome this problem, this research presents a system using a MapReduce-Graphics Processing Unit (GPU) model that can help analyze a hyperspectral image through the use of parallel hardware and a parallel programming model that is simpler to handle than other low-level parallel programming models. Additionally, Hadoop was used as an open-source implementation of the MapReduce parallel programming model. This research compared classification accuracy and timing results between the Hadoop and GPU system and the following test cases: a combined CPU and GPU case, a CPU-only case, and a case in which no dimensionality reduction was applied.
An experimental investigation of the flow physics of high-lift systems
NASA Technical Reports Server (NTRS)
Thomas, Flint O.; Nelson, R. C.
1995-01-01
This progress report, a series of viewgraphs, outlines experiments on the flow physics of confluent boundary layers for high-lift systems. The objective is to design high-lift systems with improved C(sub Lmax) for landing approach and improved take-off L/D while simultaneously reducing acquisition and maintenance costs; in effect, to achieve improved performance with simpler designs. The research objectives include: establishing the role of confluent boundary layer flow physics in high-lift production; contrasting confluent boundary layer structure for optimum and non-optimum C(sub L) cases; forming a high-quality, detailed archival data base for CFD/modeling; and examining the role of relaminarization and streamline curvature.
Influence of polysaccharides on wine protein aggregation.
Jaeckels, Nadine; Meier, Miriam; Dietrich, Helmut; Will, Frank; Decker, Heinz; Fronk, Petra
2016-06-01
Polysaccharides are the major high-molecular weight components of wines. In contrast, proteins occur only in small amounts in wine, but contribute to haze formation. The detailed mechanism of aggregation of these proteins, especially in combination with other wine components, remains unclear. This study demonstrates the different aggregation behavior between a buffer and a model wine system by dynamic light scattering. Arabinogalactan-protein, for example, shows an increased aggregation in the model wine system, while in the buffer system a reducing effect is observed. Thus, we could show the importance to examine the behavior of wine additives under conditions close to reality, instead of simpler buffer systems. Additional experiments on melting points of wine proteins reveal that only some isoforms of thaumatin-like proteins and chitinases are involved in haze formation. We can confirm interactions between polysaccharides and proteins, but none of these polysaccharides is able to prevent haze in wine. Copyright © 2016. Published by Elsevier Ltd.
Nonlinear Dynamic Modeling and Controls Development for Supersonic Propulsion System Research
NASA Technical Reports Server (NTRS)
Connolly, Joseph W.; Kopasakis, George; Paxson, Daniel E.; Stuber, Eric; Woolwine, Kyle
2012-01-01
This paper covers the propulsion system component modeling and controls development of an integrated nonlinear dynamic simulation for an inlet and engine that can be used for an overall vehicle (APSE) model. The focus here is on developing a methodology for the propulsion model integration, which allows for controls design that prevents inlet instabilities and minimizes the thrust oscillation experienced by the vehicle. Limiting thrust oscillations will be critical to avoid exciting vehicle aeroelastic modes. Model development includes both inlet normal shock position control and engine rotor speed control for a potential supersonic commercial transport. A loop shaping control design process is used that has previously been developed for the engine and verified on linear models, while a simpler approach is used for the inlet control design. Verification of the modeling approach is conducted by simulating a two-dimensional bifurcated inlet and a representative J-85 jet engine previously used in a NASA supersonics project. Preliminary results are presented for the current supersonics project concept variable cycle turbofan engine design.
Holomorphic solutions of the susy Grassmannian σ-model and gauge invariance
NASA Astrophysics Data System (ADS)
Hussin, V.; Lafrance, M.; Yurduşen, İ.; Zakrzewski, W. J.
2018-05-01
We study the gauge invariance of the supersymmetric Grassmannian sigma model. It is richer than its purely bosonic submodel, and we show how to use it to reduce some constant curvature holomorphic solutions of the model to simpler expressions.
NASA Astrophysics Data System (ADS)
Lawrence, G.; Barnard, C.; Viswanathan, V.
1986-11-01
Historically, wave optics computer codes have been paraxial in nature. Folded systems could be modeled by "unfolding" the optical system. Calculation of optical aberrations is, in general, left for the analyst to do with off-line codes. While such paraxial codes were adequate for the simpler systems being studied 10 years ago, current problems such as phased arrays, ring resonators, coupled resonators, and grazing incidence optics require a major advance in analytical capability. This paper describes extension of the physical optics codes GLAD and GLAD V to include a global coordinate system and exact ray aberration calculations. The global coordinate system allows components to be positioned and rotated arbitrarily. Exact aberrations are calculated for components in aligned or misaligned configurations by using ray tracing to compute optical path differences and diffraction propagation. Optical path lengths between components and beam rotations in complex mirror systems are calculated accurately so that coherent interactions in phased arrays and coupled devices may be treated correctly.
Regier, Michael D; Moodie, Erica E M
2016-05-01
We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.
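As a minimal illustration of the EM machinery being partitioned here (the classic algorithm, not the authors' extension), a two-component Gaussian-mixture EM alternates a responsibility (E) step with a weighted-mean (M) step:

```python
import math

def em_two_means(data, mu0, mu1, iters=50, sigma=1.0):
    """Toy EM for a 1-D mixture of two unit-variance Gaussians with equal
    weights; only the two means are estimated. E-step: responsibilities;
    M-step: responsibility-weighted means."""
    for _ in range(iters):
        # E-step: probability that each point belongs to component 0.
        resp = []
        for x in data:
            p0 = math.exp(-0.5 * ((x - mu0) / sigma) ** 2)
            p1 = math.exp(-0.5 * ((x - mu1) / sigma) ** 2)
            resp.append(p0 / (p0 + p1))
        # M-step: update each mean with its responsibility-weighted average.
        w0 = sum(resp)
        mu0 = sum(r * x for r, x in zip(resp, data)) / w0
        mu1 = sum((1 - r) * x for r, x in zip(resp, data)) / (len(data) - w0)
    return mu0, mu1

data = [-2.1, -1.9, -2.0, 1.9, 2.1, 2.0]
print(em_two_means(data, -1.0, 1.0))  # converges near (-2.0, 2.0)
```

The paper's proposal essentially chains several such self-contained EM loops, each simpler than the joint problem.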
NASA Astrophysics Data System (ADS)
Holzapfel, Wilfried B.
2018-06-01
Thermodynamic modeling of fluids (liquids and gases) uses mostly series expansions which diverge at low temperatures and do not fit to the behavior of metastable quenched fluids (amorphous, glass like solids). These divergences are removed in the present approach by the use of reasonable forms for the "cold" potential energy and for the thermal pressure of the fluid system. Both terms are related to the potential energy and to the thermal pressure of the crystalline phase in a coherent way, which leads to simpler and non diverging series expansions for the thermal pressure and thermal energy of the fluid system. Data for solid and fluid argon are used to illustrate the potential of the present approach.
Spontaneous emergence of milling (vortex state) in a Vicsek-like model
NASA Astrophysics Data System (ADS)
Costanzo, A.; Hemelrijk, C. K.
2018-04-01
Collective motion is of interest to laymen and scientists in different fields. In groups of animals, many patterns of collective motion arise, such as polarized schools and mills (i.e. circular motion). Collective motion can be generated in computational models of different degrees of complexity. In these models, moving individuals coordinate with others nearby. In the more complex models, individuals attract each other, align their headings, and avoid collisions. Simpler models may include only one or two of these types of interactions. The collective pattern that interests us here is milling, which is observed in many animal species. It has been reproduced in the more complex models, but not in simpler models that are based only on alignment, such as the well-known Vicsek model. Our aim is to provide insight into the minimal conditions required for milling by making minimal modifications to the Vicsek model. Our results show that milling occurs when both the field of view and the maximal angular velocity are decreased. Remarkably, apart from milling, our minimal model also exhibits many of the other patterns of collective motion observed in animal groups.
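A minimal numpy sketch of the kind of modification described: a standard Vicsek alignment update restricted to neighbours inside a field of view, with the turn per step clipped to a maximum angular velocity. All parameter values are illustrative, not those of the paper:

```python
import numpy as np

def vicsek_step(pos, theta, L=10.0, r=1.0, v0=0.03, eta=0.1,
                fov=np.pi / 2, max_turn=np.pi / 16, rng=None):
    """One update of a Vicsek-like model with a limited field of view and
    a capped angular velocity (the two modifications named above)."""
    if rng is None:
        rng = np.random.default_rng(0)
    new_theta = np.empty(len(pos))
    for i in range(len(pos)):
        d = pos - pos[i]
        d -= L * np.round(d / L)                      # periodic boundaries
        dist = np.hypot(d[:, 0], d[:, 1])
        bearing = np.arctan2(d[:, 1], d[:, 0]) - theta[i]
        bearing = (bearing + np.pi) % (2 * np.pi) - np.pi
        # Neighbours within radius r AND inside the field of view (self kept).
        visible = (dist < r) & ((np.abs(bearing) < fov / 2) | (dist == 0))
        avg = np.arctan2(np.sin(theta[visible]).mean(),
                         np.cos(theta[visible]).mean())
        turn = (avg - theta[i] + np.pi) % (2 * np.pi) - np.pi
        turn += eta * rng.uniform(-np.pi, np.pi)      # angular noise
        new_theta[i] = theta[i] + np.clip(turn, -max_turn, max_turn)
    vel = v0 * np.column_stack([np.cos(new_theta), np.sin(new_theta)])
    return (pos + vel) % L, new_theta
```

Iterating this update over many particles and steps is what would reveal whether milling emerges for a given (fov, max_turn) pair.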
Modeling and measurement of fault-tolerant multiprocessors
NASA Technical Reports Server (NTRS)
Shin, K. G.; Woodbury, M. H.; Lee, Y. H.
1985-01-01
The workload effects on computer performance are addressed first for a highly reliable unibus multiprocessor used in real-time control. As an approach to studying these effects, a modified Stochastic Petri Net (SPN) is used to describe the synchronous operation of the multiprocessor system. From this model the vital components affecting performance can be determined. However, because of the complexity in solving the modified SPN, a simpler model, i.e., a closed priority queuing network, is constructed that represents the same critical aspects. The use of this model for a specific application requires the partitioning of the workload into job classes. It is shown that the steady state solution of the queuing model directly produces useful results. The use of this model in evaluating an existing system, the Fault Tolerant Multiprocessor (FTMP) at the NASA AIRLAB, is outlined with some experimental results. Also addressed is the technique of measuring fault latency, an important microscopic system parameter. Most related works have assumed no or a negligible fault latency and then performed approximate analyses. To eliminate this deficiency, a new methodology for indirectly measuring fault latency is presented.
FINITE ELEMENT MODEL FOR TIDAL AND RESIDUAL CIRCULATION.
Walters, Roy A.
1986-01-01
Harmonic decomposition is applied to the shallow water equations, thereby creating a system of equations for the amplitude of the various tidal constituents and for the residual motions. The resulting equations are elliptic in nature, are well posed and in practice are shown to be numerically well-behaved. There are a number of strategies for choosing elements: the two extremes are to use a few high-order elements with continuous derivatives, or to use a large number of simpler linear elements. In this paper simple linear elements are used and prove effective.
Analysis of pressure spectra measurements in a ducted combustion system. Ph.D. Thesis - Toledo Univ.
NASA Technical Reports Server (NTRS)
Miles, J. H.
1980-01-01
Combustion noise propagation in an operating ducted liquid fuel combustion system is studied in relation to the development of combustion noise prediction and suppression techniques. The presence of combustor emissions in the duct is proposed as the primary mechanism producing the attenuation and dispersion of combustion noise propagating in an operating liquid fuel combustion system. First, a complex mathematical model for calculating attenuation and dispersion taking into account mass transfer, heat transfer, and viscosity effects due to the presence of liquid fuel droplets or solid soot particles is discussed. Next, a simpler single parameter model for calculating pressure auto-spectra and cross-spectra which takes into account dispersion and attenuation due to heat transfer between solid soot particles and air is developed. Then, auto-spectra and cross-spectra obtained from internal pressure measurements in a combustion system consisting of a J-47 combustor can, a spool piece, and a long duct are presented. Last, analytical results obtained with the single parameter model are compared with the experimental measurements. The single parameter model results are shown to be in excellent agreement with the measurements.
Analysis of pressure spectra measurements in a ducted combustion system
NASA Astrophysics Data System (ADS)
Miles, J. H.
1980-11-01
Combustion noise propagation in an operating ducted liquid fuel combustion system is studied in relation to the development of combustion noise prediction and suppression techniques. The presence of combustor emissions in the duct is proposed as the primary mechanism producing the attenuation and dispersion of combustion noise propagating in an operating liquid fuel combustion system. First, a complex mathematical model for calculating attenuation and dispersion taking into account mass transfer, heat transfer, and viscosity effects due to the presence of liquid fuel droplets or solid soot particles is discussed. Next, a simpler single parameter model for calculating pressure auto-spectra and cross-spectra which takes into account dispersion and attenuation due to heat transfer between solid soot particles and air is developed. Then, auto-spectra and cross-spectra obtained from internal pressure measurements in a combustion system consisting of a J-47 combustor can, a spool piece, and a long duct are presented. Last, analytical results obtained with the single parameter model are compared with the experimental measurements. The single parameter model results are shown to be in excellent agreement with the measurements.
Planner-Based Control of Advanced Life Support Systems
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Kortenkamp, David; Fry, Chuck; Bell, Scott
2005-01-01
The paper describes an approach to the integration of qualitative and quantitative modeling techniques for advanced life support (ALS) systems. Developing reliable control strategies that scale up to fully integrated life support systems requires augmenting quantitative models and control algorithms with the abstractions provided by qualitative, symbolic models and their associated high-level control strategies. This will allow for effective management of the combinatorics due to the integration of a large number of ALS subsystems. By focusing control actions at different levels of detail and reactivity we can use faster: simpler responses at the lowest level and predictive but complex responses at the higher levels of abstraction. In particular, methods from model-based planning and scheduling can provide effective resource management over long time periods. We describe reference implementation of an advanced control system using the IDEA control architecture developed at NASA Ames Research Center. IDEA uses planning/scheduling as the sole reasoning method for predictive and reactive closed loop control. We describe preliminary experiments in planner-based control of ALS carried out on an integrated ALS simulation developed at NASA Johnson Space Center.
Towards a climate-dependent paradigm of ammonia emission and deposition
Existing descriptions of bi-directional ammonia (NH3) land–atmosphere exchange incorporate temperature and moisture controls, and are beginning to be used in regional chemical transport models. However, such models have typically applied simpler emission factors to upscale ...
Comment on "Continuum Lowering and Fermi-Surface Rising in Strongly Coupled and Degenerate Plasmas"
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iglesias, C. A.; Sterne, P. A.
2018-03-16
In a recent Letter, Hu [1] reported photon absorption cross sections in strongly coupled, degenerate plasmas from quantum molecular dynamics (QMD). The Letter claims that the K-edge shifts as a function of plasma density computed with simple ionization potential depression (IPD) models are in violent disagreement with the QMD results. The QMD calculations displayed an increase in K-edge shift with increasing density while the simpler models yielded a decrease. Here, this Comment shows that the claimed large errors reported by Hu for the widely used Stewart-Pyatt (SP) model [2] stem from an invalid comparison of disparate physical quantities, and the disagreement is largely resolved by including well-known corrections for degenerate systems.
NASA Astrophysics Data System (ADS)
Faribault, Alexandre; Tschirhart, Hugo; Muller, Nicolas
2016-05-01
In this work we present a determinant expression for the domain-wall boundary condition partition function of rational (XXX) Richardson-Gaudin models which, in addition to N-1 spins 1/2, contains one arbitrarily large spin S. The proposed determinant representation is written in terms of a set of variables which, from previous work, are known to define eigenstates of the quantum integrable models belonging to this class as solutions to quadratic Bethe equations. Such a determinant can be useful numerically since systems of quadratic equations are much simpler to solve than the usual highly nonlinear Bethe equations. It can therefore offer significant gains in stability and computation speed.
Optical solitons in nematic liquid crystals: model with saturation effects
NASA Astrophysics Data System (ADS)
Borgna, Juan Pablo; Panayotaros, Panayotis; Rial, Diego; de la Vega, Constanza Sánchez F.
2018-04-01
We study a 2D system that couples a Schrödinger evolution equation to a nonlinear elliptic equation and models the propagation of a laser beam in a nematic liquid crystal. The nonlinear elliptic equation describes the response of the director angle to the laser beam electric field. We obtain results on well-posedness and solitary wave solutions of this system, generalizing results for a well-studied simpler system with a linear elliptic equation for the director field. The analysis of the nonlinear elliptic problem shows the existence of an isolated global branch of solutions with director angles that remain bounded for arbitrary electric field. The results on the director equation are also used to show local and global existence, as well as decay for initial conditions with sufficiently small L2-norm. For sufficiently large L2-norm we show the existence of energy minimizing optical solitons with radial, positive and monotone profiles.
A Toy Model of Electrodynamics in (1 + 1) Dimensions
ERIC Educational Resources Information Center
Boozer, A. D.
2007-01-01
A model is presented that describes a scalar field interacting with a point particle in (1+1) dimensions. The model exhibits many of the same phenomena that appear in classical electrodynamics, such as radiation and radiation damping, yet has a much simpler mathematical structure. By studying these phenomena in a highly simplified model, the…
The Simplest Complete Model of Choice Response Time: Linear Ballistic Accumulation
ERIC Educational Resources Information Center
Brown, Scott D.; Heathcote, Andrew
2008-01-01
We propose a linear ballistic accumulator (LBA) model of decision making and reaction time. The LBA is simpler than other models of choice response time, with independent accumulators that race towards a common response threshold. Activity in the accumulators increases in a linear and deterministic manner. The simplicity of the model allows…
On macromolecular refinement at subatomic resolution with interatomic scatterers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Afonine, Pavel V., E-mail: pafonine@lbl.gov; Grosse-Kunstleve, Ralf W.; Adams, Paul D.
2007-11-01
Modelling deformation electron density using interatomic scatterers is simpler than multipolar methods, produces comparable results at subatomic resolution and can easily be applied to macromolecules. A study of the accurate electron-density distribution in molecular crystals at subatomic resolution (better than ∼1.0 Å) requires more detailed models than those based on independent spherical atoms. A tool that is conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8–1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark data sets gave results that were comparable in quality with the results of multipolar refinement and superior to those for conventional models. Applications to several data sets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.
On supermatrix models, Poisson geometry, and noncommutative supersymmetric gauge theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klimčík, Ctirad
2015-12-15
We construct a new supermatrix model which represents a manifestly supersymmetric noncommutative regularisation of the UOSp(2|1) supersymmetric Schwinger model on the supersphere. Our construction is much simpler than those already existing in the literature and it was found by using Poisson geometry in a substantial way.
Balancing the stochastic description of uncertainties as a function of hydrologic model complexity
NASA Astrophysics Data System (ADS)
Del Giudice, D.; Reichert, P.; Albert, C.; Kalcic, M.; Logsdon Muenich, R.; Scavia, D.; Bosch, N. S.; Michalak, A. M.
2016-12-01
Uncertainty analysis is becoming an important component of forecasting water and pollutant fluxes in urban and rural environments. Properly accounting for errors in the modeling process can help to robustly assess the uncertainties associated with the inputs (e.g. precipitation) and outputs (e.g. runoff) of hydrological models. In recent years we have investigated several Bayesian methods to infer the parameters of a mechanistic hydrological model along with those of the stochastic error component. The latter describes the uncertainties of model outputs and possibly inputs. We have adapted our framework to a variety of applications, ranging from predicting floods in small stormwater systems to nutrient loads in large agricultural watersheds. Given practical constraints, we discuss how in general the number of quantities to infer probabilistically varies inversely with the complexity of the mechanistic model. Most often, when evaluating a hydrological model of intermediate complexity, we can infer the parameters of the model as well as of the output error model. Describing the output errors as a first order autoregressive process can realistically capture the "downstream" effect of inaccurate inputs and structure. With simpler runoff models we can additionally quantify input uncertainty by using a stochastic rainfall process. For complex hydrologic transport models, instead, we show that keeping model parameters fixed and just estimating time-dependent output uncertainties could be a viable option. The common goal across all these applications is to create time-dependent prediction intervals which are both reliable (cover the nominal amount of validation data) and precise (are as narrow as possible). In conclusion, we recommend focusing both on the choice of the hydrological model and of the probabilistic error description. 
The latter can include output uncertainty only, if the model is computationally-expensive, or, with simpler models, it can separately account for different sources of errors like in the inputs and the structure of the model.
Applying the compound Poisson process model to the reporting of injury-related mortality rates.
Kegler, Scott R
2007-02-16
Injury-related mortality rate estimates are often analyzed under the assumption that case counts follow a Poisson distribution. Certain types of injury incidents occasionally involve multiple fatalities, however, resulting in dependencies between cases that are not reflected in the simple Poisson model and which can affect even basic statistical analyses. This paper explores the compound Poisson process model as an alternative, emphasizing adjustments to some commonly used interval estimators for population-based rates and rate ratios. The adjusted estimators involve relatively simple closed-form computations, which in the absence of multiple-case incidents reduce to familiar estimators based on the simpler Poisson model. Summary data from the National Violent Death Reporting System are referenced in several examples demonstrating application of the proposed methodology.
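A sketch of the kind of closed-form adjustment involved (an illustration under stated assumptions, not the paper's exact estimator): under a compound Poisson process the variance of the total case count is driven by the sum of squared per-incident counts, which reduces to the familiar Poisson variance when every incident involves exactly one case:

```python
import math

def rate_with_ci(cases_per_incident, population, z=1.96):
    """Mortality rate and normal-approximation confidence interval when
    incidents are Poisson and incident i contributes y_i cases: the variance
    of the total count is estimated by sum(y_i**2) rather than by the total
    count alone. With y_i = 1 for all i this reduces to the usual
    Poisson-based interval."""
    total = sum(cases_per_incident)
    var_total = sum(y * y for y in cases_per_incident)
    rate = total / population
    half = z * math.sqrt(var_total) / population
    return rate, (rate - half, rate + half)

# Four single-fatality incidents vs. one four-fatality incident: the rate is
# identical, but the interval is wider when fatalities cluster in incidents.
print(rate_with_ci([1, 1, 1, 1], 100_000))
print(rate_with_ci([4], 100_000))
```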
Study on the threshold of a stochastic SIR epidemic model and its extensions
NASA Astrophysics Data System (ADS)
Zhao, Dianli
2016-09-01
This paper provides a simple but effective method for estimating the threshold of a class of stochastic epidemic models by use of the nonnegative semimartingale convergence theorem. Firstly, the threshold R0SIR is obtained for the stochastic SIR model with a saturated incidence rate; whether its value is below or above 1 completely determines whether the disease goes extinct or prevails, for any intensity of the white noise. Besides, when R0SIR > 1, the system is proved to be convergent in time mean. Then, the thresholds of the stochastic SIVS models with or without a saturated incidence rate are also established by the same method. Compared with the previously known literature, the related results are improved, and the method is simpler than before.
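The deterministic skeleton of such a model makes the threshold idea concrete. A rough Euler-integration sketch with saturated incidence βSI/(1+aI); note the stochastic threshold in the paper additionally involves the white-noise intensity, which is omitted here:

```python
def peak_infected(beta, gamma, a=0.1, s0=0.99, i0=0.01, dt=0.01, steps=20000):
    """Euler integration of the deterministic SIR skeleton with saturated
    incidence beta*S*I/(1 + a*I); returns the peak infected fraction.
    The deterministic threshold is R0 = beta/gamma."""
    s, i, peak = s0, i0, i0
    for _ in range(steps):
        inc = beta * s * i / (1 + a * i)
        s += dt * (-inc)
        i += dt * (inc - gamma * i)
        peak = max(peak, i)
    return peak

print(peak_infected(0.5, 1.0))  # R0 = 0.5 < 1: no outbreak, I only decays
print(peak_infected(3.0, 1.0))  # R0 = 3 > 1: a substantial epidemic peak
```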
NASA Astrophysics Data System (ADS)
Hopp, L.; Ivanov, V. Y.
2010-12-01
There is still a debate in rainfall-runoff modeling over the advantage of using three-dimensional models based on partial differential equations describing variably saturated flow vs. models with simpler infiltration and flow routing algorithms. Fully explicit 3D models are computationally demanding but allow the representation of spatially complex domains, heterogeneous soils, conditions of ponded infiltration, and solute transport, among others. Models with simpler infiltration and flow routing algorithms provide faster run times and are likely to be more versatile in the treatment of extreme conditions such as soil drying but suffer from underlying assumptions and ad-hoc parameterizations. In this numerical study, we explore the question of whether these two model strategies are competing approaches or if they complement each other. As a 3D physics-based model we use HYDRUS-3D, a finite element model that numerically solves the Richards equation for variably-saturated water flow. As an example of a simpler model, we use tRIBS+VEGGIE that solves the 1D Richards equation for vertical flow and applies Dupuit-Forchheimer approximation for saturated lateral exchange and gravity-driven flow for unsaturated lateral exchange. The flow can be routed using either the D-8 (steepest descent) or D-infinity flow routing algorithms. We study lateral subsurface stormflow and moisture dynamics at the hillslope-scale, using a zero-order basin topography, as a function of storm size, antecedent moisture conditions and slope angle. The domain and soil characteristics are representative of a forested hillslope with conductive soils in a humid environment, where the major runoff generating process is lateral subsurface stormflow. We compare spatially integrated lateral subsurface flow at the downslope boundary as well as spatial patterns of soil moisture. 
We illustrate situations where both model approaches perform equally well and identify conditions under which the application of a fully-explicit 3D model may be required for a realistic description of the hydrologic response.
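The D-8 scheme mentioned above is simple enough to state in a few lines: each cell drains to the neighbor with the steepest downhill drop, with diagonal distances weighted by sqrt(2). A minimal sketch on a toy elevation grid (values illustrative):

```python
import math

def d8_flow_directions(dem):
    """Assign each cell the offset (dr, dc) of its steepest-descent
    neighbor among the 8 surrounding cells; None marks pits/flats."""
    rows, cols = len(dem), len(dem[0])
    dirs = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            best, best_drop = None, 0.0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == 0 and dc == 0:
                        continue
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        dist = math.hypot(dr, dc)   # diagonal = sqrt(2)
                        drop = (dem[r][c] - dem[nr][nc]) / dist
                        if drop > best_drop:
                            best, best_drop = (dr, dc), drop
            dirs[r][c] = best
    return dirs

# A tilted 3x3 surface draining toward the lower-right corner:
dem = [[3.0, 2.0, 1.0],
       [2.0, 1.0, 0.5],
       [1.0, 0.5, 0.0]]
dirs = d8_flow_directions(dem)
```

The lower-right cell has no lower neighbor and is flagged as a pit, which is exactly the kind of cell that routing algorithms must treat specially.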
Learn the Lagrangian: A Vector-Valued RKHS Approach to Identifying Lagrangian Systems.
Cheng, Ching-An; Huang, Han-Pang
2016-12-01
We study the modeling of Lagrangian systems with multiple degrees of freedom. Based on system dynamics, canonical parametric models require ad hoc derivations and sometimes simplification for a computable solution; on the other hand, due to the lack of prior knowledge of the system's structure, modern nonparametric models in machine learning face the curse of dimensionality, especially in learning large systems. In this paper, we bridge this gap by unifying the theories of Lagrangian systems and vector-valued reproducing kernel Hilbert spaces. We reformulate Lagrangian systems with kernels that embed the governing Euler-Lagrange equation-the Lagrangian kernels-and show that these kernels span a subspace capturing the Lagrangian's projection as inverse dynamics. By this property, our model uses only inputs and outputs as in machine learning and inherits the structured form as in system dynamics, thereby removing the need for mundane derivations for new systems as well as the generalization problem of learning from scratch. In effect, it learns the system's Lagrangian, a simpler task than directly learning the dynamics. To demonstrate, we applied the proposed kernel to identify the robot inverse dynamics in simulations and experiments. Our results present a competitive novel approach to identifying Lagrangian systems, despite using only inputs and outputs.
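The flavor of the approach — learning inverse dynamics purely from input-output pairs with a kernel machine — can be sketched with plain kernel ridge regression on a one-link pendulum. This uses a generic RBF kernel rather than the paper's structured Lagrangian kernels; the pendulum parameters, kernel width, and ridge value are illustrative assumptions:

```python
import math, random

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][-1] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def rbf(u, v, gamma=2.0):
    """Generic RBF kernel (a stand-in for the structured Lagrangian kernel)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

def pendulum_torque(q, qdd, m=1.0, L=1.0, g=9.81):
    """Ground-truth inverse dynamics of a frictionless one-link pendulum."""
    return m * L * L * qdd + m * g * L * math.sin(q)

# Training data: inputs (q, qdd) and output torques -- I/O pairs only.
random.seed(0)
X = [(random.uniform(-1, 1), random.uniform(-2, 2)) for _ in range(40)]
y = [pendulum_torque(q, qdd) for q, qdd in X]

# Kernel ridge regression: solve (K + lam*I) alpha = y.
lam = 1e-4
K = [[rbf(a, b) + (lam if i == j else 0.0) for j, b in enumerate(X)]
     for i, a in enumerate(X)]
alpha = gauss_solve(K, y)

def predict(u):
    return sum(c * rbf(u, xi) for c, xi in zip(alpha, X))
```

The learned predictor reproduces the training torques closely and interpolates smoothly between them, without ever being given the equations of motion.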
Machine learning approaches to the social determinants of health in the health and retirement study.
Seligman, Benjamin; Tuljapurkar, Shripad; Rehkopf, David
2018-04-01
Social and economic factors are important predictors of health and of recognized importance for health systems. However, machine learning, used elsewhere in the biomedical literature, has not been extensively applied to study relationships between society and health. We investigate how machine learning may add to our understanding of social determinants of health using data from the Health and Retirement Study. A linear regression of age and gender, and a parsimonious theory-based regression additionally incorporating income, wealth, and education, were used to predict systolic blood pressure, body mass index, waist circumference, and telomere length. Prediction, fit, and interpretability were compared across four machine learning methods: linear regression, penalized regressions, random forests, and neural networks. All models had poor out-of-sample prediction. Most machine learning models performed similarly to the simpler models. However, neural networks greatly outperformed the three other methods. Neural networks also had good fit to the data (R2 between 0.4 and 0.6, versus <0.3 for all others). Across machine learning models, nine variables were frequently selected or highly weighted as predictors: dental visits, current smoking, self-rated health, serial-seven subtractions, probability of receiving an inheritance, probability of leaving an inheritance of at least $10,000, number of children ever born, African-American race, and gender. Some of the machine learning methods do not improve prediction or fit beyond simpler models; however, neural networks performed well. The predictors identified across models suggest underlying social factors that are important predictors of biological indicators of chronic disease, and that the non-linear and interactive relationships between variables fundamental to the neural network approach may be important to consider.
NASA Astrophysics Data System (ADS)
Matsumoto, Jun; Okaya, Shunichi; Igoh, Hiroshi; Kawaguchi, Junichiro
2017-04-01
A new propellant feed system referred to as a self-pressurized feed system is proposed for liquid rocket engines. The self-pressurized feed system is a type of gas-pressure feed system; however, the pressurization source is retained in the liquid state to reduce tank volume. The liquid pressurization source is heated and gasified using heat exchange from the hot propellant using a regenerative cooling strategy. The liquid pressurization source is raised to critical pressure by a pressure booster referred to as a charger in order to avoid boiling and improve the heat exchange efficiency. The charger is driven by a part of the generated pressurization gas using a closed-loop self-pressurized feed system. The purpose of this study is to propose a propellant feed system that is lighter and simpler than traditional gas pressure feed systems. The proposed system can be applied to all liquid rocket engines that use the regenerative cooling strategy. The concept and mathematical models of the self-pressurized feed system are presented first. Experiment results for verification are then shown and compared with the mathematical models.
Manifold Coal-Slurry Transport System
NASA Technical Reports Server (NTRS)
Liddle, S. G.; Estus, J. M.; Lavin, M. L.
1986-01-01
Feeding several slurry pipes into main pipeline reduces congestion in coal mines. System based on manifold concept: feeder pipelines from each working entry joined to main pipeline that carries coal slurry out of panel and onto surface. Manifold concept makes coal-slurry haulage much simpler than existing slurry systems.
Aeroelastic Model Structure Computation for Envelope Expansion
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.
2007-01-01
Structure detection is a procedure for selecting a subset of candidate terms, from a full model description, that best describes the observed output. This is a necessary procedure to compute an efficient system description which may afford greater insight into the functionality of the system or a simpler controller design. Structure computation as a tool for black-box modelling may be of critical importance in the development of robust, parsimonious models for the flight-test community. Moreover, this approach may lead to efficient strategies for rapid envelope expansion which may save significant development time and costs. In this study, a least absolute shrinkage and selection operator (LASSO) technique is investigated for computing efficient model descriptions of nonlinear aeroelastic systems. The LASSO minimises the residual sum of squares by the addition of an l(sub 1) penalty term on the parameter vector of the traditional l(sub 2) minimisation problem. Its use for structure detection is a natural extension of this constrained minimisation approach to pseudolinear regression problems which produces some model parameters that are exactly zero and, therefore, yields a parsimonious system description. Applicability of this technique for model structure computation for the F/A-18 Active Aeroelastic Wing using flight test data is shown for several flight conditions (Mach numbers) by identifying a parsimonious system description with a high percent fit for cross-validated data.
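The mechanism by which the LASSO zeroes parameters is easiest to see in its coordinate-descent form, where each update is a soft-thresholding step: coefficients whose correlation with the residual falls below the penalty are set exactly to zero, which is what yields the parsimonious structure. A minimal sketch (data and penalty illustrative):

```python
def soft_threshold(z, t):
    """Proximal operator of the l1 penalty."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_cd(X, y, lam, iters=200):
    """Coordinate-descent LASSO:
    minimize 0.5*||y - X w||^2 + lam*||w||_1."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # correlation of column j with the partial residual
            r_j = sum(X[i][j] * (y[i] - sum(X[i][k] * w[k]
                      for k in range(p) if k != j)) for i in range(n))
            z_j = sum(X[i][j] ** 2 for i in range(n))
            w[j] = soft_threshold(r_j, lam) / z_j
    return w

# y depends only on the first regressor; the second is noise.
X = [[1.0, 0.00], [2.0, 0.10], [3.0, -0.10], [4.0, 0.05], [5.0, 0.00]]
y = [2.0 * row[0] for row in X]
w = lasso_cd(X, y, lam=0.5)
```

The irrelevant coefficient comes out exactly zero rather than merely small, which is the property that distinguishes structure detection from ordinary shrinkage.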
Ward identities and combinatorics of rainbow tensor models
NASA Astrophysics Data System (ADS)
Itoyama, H.; Mironov, A.; Morozov, A.
2017-06-01
We discuss the notion of renormalization group (RG) completion of non-Gaussian Lagrangians and its treatment within the framework of Bogoliubov-Zimmermann theory in application to the matrix and tensor models. With the example of the simplest non-trivial RGB tensor theory (Aristotelian rainbow), we introduce a few methods, which allow one to connect calculations in the tensor models to those in the matrix models. As a byproduct, we obtain some new factorization formulas and sum rules for the Gaussian correlators in the Hermitian and complex matrix theories, square and rectangular. These sum rules describe correlators as solutions to finite linear systems, which are much simpler than the bilinear Hirota equations and the infinite Virasoro recursion. Search for such relations can be a way to solving the tensor models, where an explicit integrability is still obscure.
Studies of the effects of curvature on dilution jet mixing
NASA Technical Reports Server (NTRS)
Holdeman, James D.; Srinivasan, Ram; Reynolds, Robert S.; White, Craig D.
1992-01-01
An analytical program was conducted using both three-dimensional numerical and empirical models to investigate the effects of transition liner curvature on the mixing of jets injected into a confined crossflow. The numerical code is of the TEACH type with hybrid numerics; it uses the power-law and SIMPLER algorithms, an orthogonal curvilinear coordinate system, and an algebraic Reynolds stress turbulence model. From the results of the numerical calculations, an existing empirical model for the temperature field downstream of single and multiple rows of jets injected into a straight rectangular duct was extended to model the effects of curvature. Temperature distributions, calculated with both the numerical and empirical models, are presented to show the effects of radius of curvature and inner and outer wall injection for single and opposed rows of cool dilution jets injected into a hot mainstream flow.
Applicability of Similarity Principles to Structural Models
NASA Technical Reports Server (NTRS)
Goodier, J N; Thomson, W T
1944-01-01
A systematic account is given in part I of the use of dimensional analysis in constructing similarity conditions for models and structures. The analysis covers large deflections, buckling, plastic behavior, and materials with nonlinear stress-strain characteristics, as well as the simpler structural problems. (author)
Global horizontal irradiance clear sky models : implementation and analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Joshua S.; Hansen, Clifford W.; Reno, Matthew J.
2012-03-01
Clear sky models estimate the terrestrial solar radiation under a cloudless sky as a function of the solar elevation angle, site altitude, aerosol concentration, water vapor, and various atmospheric conditions. This report provides an overview of a number of global horizontal irradiance (GHI) clear sky models from very simple to complex. Validation of clear-sky models requires comparison of model results to measured irradiance during clear-sky periods. To facilitate validation, we present a new algorithm for automatically identifying clear-sky periods in a time series of GHI measurements. We evaluate the performance of selected clear-sky models using measured data from 30 different sites, totaling about 300 site-years of data. We analyze the variation of these errors across time and location. In terms of error averaged over all locations and times, we found that complex models that correctly account for all the atmospheric parameters are slightly more accurate than other models, but, primarily at low elevations, comparable accuracy can be obtained from some simpler models. However, simpler models often exhibit errors that vary with time of day and season, whereas the errors for complex models vary less over time.
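At the simple end of the spectrum such surveys cover are models that depend on solar geometry alone. A sketch of one such model, the Haurwitz formulation, with its coefficients as commonly quoted (1098 W/m^2 and 0.057); treat the constants here as assumptions rather than authoritative values:

```python
import math

def haurwitz_ghi(zenith_deg):
    """Haurwitz-style clear-sky model: GHI (W/m^2) as a function of
    solar zenith angle only -- no aerosol, water-vapor, or altitude
    inputs, which is precisely what makes it 'very simple'.
    Coefficients (1098, 0.057) as commonly quoted."""
    cz = math.cos(math.radians(zenith_deg))
    if cz <= 0.0:
        return 0.0  # sun at or below the horizon
    return 1098.0 * cz * math.exp(-0.057 / cz)
```

At solar noon overhead (zenith 0) this gives roughly 1040 W/m^2, and it falls off smoothly toward zero at the horizon; more complex models modulate this basic shape with atmospheric inputs.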
Potential formulation of sleep dynamics
NASA Astrophysics Data System (ADS)
Phillips, A. J. K.; Robinson, P. A.
2009-02-01
A physiologically based model of the mechanisms that control the human sleep-wake cycle is formulated in terms of an equivalent nonconservative mechanical potential. The potential is analytically simplified and reduced to a quartic two-well potential, matching the bifurcation structure of the original model. This yields a dynamics-based model that is analytically simpler and has fewer parameters than the original model, allowing easier fitting to experimental data. This model is first demonstrated to semiquantitatively match the dynamics of the physiologically based model from which it is derived, and is then fitted directly to a set of experimentally derived criteria. These criteria place rigorous constraints on the parameter values, and within these constraints the model is shown to reproduce normal sleep-wake dynamics and recovery from sleep deprivation. Furthermore, this approach enables insights into the dynamics by direct analogies to phenomena in well studied mechanical systems. These include the relation between friction in the mechanical system and the timecourse of neurotransmitter action, and the possible relation between stochastic resonance and napping behavior. The model derived here also serves as a platform for future investigations of sleep-wake phenomena from a dynamical perspective.
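The reduced quartic two-well potential lends itself to a few-line numerical experiment: a damped particle released near the unstable hilltop settles into one of the two wells, mirroring the switch into a stable wake or sleep state. A generic sketch (potential normalized to V(x) = x^4/4 - x^2/2, friction value illustrative):

```python
def simulate_double_well(x0, v0, friction=0.5, dt=0.01, steps=5000):
    """Damped motion in the quartic two-well potential
    V(x) = x^4/4 - x^2/2, whose minima at x = +-1 play the role of
    the two stable states; semi-implicit Euler integration."""
    x, v = x0, v0
    for _ in range(steps):
        force = -(x ** 3 - x)            # -dV/dx
        v += (force - friction * v) * dt
        x += v * dt
    return x
```

Which well the particle ends up in is decided by the sign of the initial displacement, and the friction term sets how quickly the transient rings down — the mechanical analogue of the neurotransmitter timecourse discussed in the abstract.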
Design study of general aviation collision avoidance system
NASA Technical Reports Server (NTRS)
Bates, M. R.; Moore, L. D.; Scott, W. V.
1972-01-01
The selection and design of a time/frequency collision avoidance system for use in general aviation aircraft is discussed. The modifications to airline transport collision avoidance equipment which were made to produce the simpler general aviation system are described. The threat determination capabilities and operating principles of the general aviation system are illustrated.
The Purpose of Analytical Models from the Perspective of a Data Provider.
ERIC Educational Resources Information Center
Sheehan, Bernard S.
The purpose of analytical models is to reduce complex institutional management problems and situations to simpler proportions and compressed time frames so that human skills of decision makers can be brought to bear most effectively. Also, modeling cultivates the art of management by forcing explicit and analytical consideration of important…
Autonomous Guidance of Agile Small-scale Rotorcraft
NASA Technical Reports Server (NTRS)
Mettler, Bernard; Feron, Eric
2004-01-01
This report describes a guidance system for agile vehicles based on a hybrid closed-loop model of the vehicle dynamics. The hybrid model represents the vehicle dynamics through a combination of linear-time-invariant control modes and pre-programmed, finite-duration maneuvers. This particular hybrid structure can be realized through a control system that combines trim controllers and a maneuvering control logic. The former enable precise trajectory tracking, and the latter enables trajectories at the edge of the vehicle capabilities. The closed-loop model is much simpler than the full vehicle equations of motion, yet it can capture a broad range of dynamic behaviors. It also supports a consistent link between the physical layer and the decision-making layer. The trajectory generation was formulated as an optimization problem using mixed-integer-linear-programming. The optimization is solved in a receding horizon fashion. Several techniques to improve the computational tractability were investigated. Simulation experiments using the NASA Ames R-50 model show that this approach fully exploits the vehicle's agility.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kistler, B.L.
DELSOL3 is a revised and updated version of the DELSOL2 computer program (SAND81-8237) for calculating collector field performance and layout and optimal system design for solar thermal central receiver plants. The code consists of a detailed model of the optical performance, a simpler model of the non-optical performance, an algorithm for field layout, and a searching algorithm to find the best system design based on energy cost. The latter two features are coupled to a cost model of central receiver components and an economic model for calculating energy costs. The code can handle flat, focused and/or canted heliostats, and external cylindrical, multi-aperture cavity, and flat plate receivers. The program optimizes the tower height, receiver size, field layout, heliostat spacings, and tower position at user specified power levels subject to flux limits on the receiver and land constraints for field layout. DELSOL3 maintains the advantages of speed and accuracy which are characteristics of DELSOL2.
Schryver, Jack; Nutaro, James; Shankar, Mallikarjun
2015-10-30
An agent-based simulation model hierarchy emulating disease states and behaviors critical to the progression of type 2 diabetes was designed and implemented in the DEVS framework. The models are translations of basic elements of an established system dynamics model of diabetes. This model hierarchy, which mimics diabetes progression over an aggregated U.S. population, was dis-aggregated and reconstructed bottom-up at the individual (agent) level. Four levels of model complexity were defined in order to systematically evaluate which parameters are needed to mimic outputs of the system dynamics model. Moreover, the four estimated models attempted to replicate stock counts representing disease states in the system dynamics model, while estimating impacts of an elderliness factor, an obesity factor, and health-related behavioral parameters. Health-related behavior was modeled as a simple realization of the Theory of Planned Behavior, a joint function of individual attitude and diffusion of social norms that spread over each agent's social network. Although the most complex agent-based simulation model contained 31 adjustable parameters, all models were considerably less complex than the system dynamics model, which required numerous time series inputs to make its predictions. All three elaborations of the baseline model provided significantly improved fits to the output of the system dynamics model. The performances of the baseline agent-based model and its extensions illustrate a promising approach to translating complex system dynamics models into agent-based alternatives that are both conceptually simpler and capable of capturing the main effects of complex local agent-agent interactions.
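The behavioral core described above — adoption driven jointly by an agent's own attitude and its neighbors' norms — can be sketched as a toy agent-based loop on a ring network. All weights, thresholds, network structure, and parameter values here are illustrative assumptions, not the estimated model:

```python
import random

def simulate_diffusion(n=100, w=0.5, threshold=0.5, rounds=30, seed=42):
    """Toy Theory-of-Planned-Behavior diffusion on a ring network:
    an agent adopts a healthy behavior when a weighted sum of its own
    attitude and the adoption rate among its two neighbors crosses a
    threshold; adoption is absorbing (once adopted, always adopted)."""
    rng = random.Random(seed)
    attitude = [rng.random() for _ in range(n)]
    adopted = [a > 0.9 for a in attitude]        # a few early adopters
    for _ in range(rounds):
        new = adopted[:]
        for i in range(n):
            norm = (adopted[i - 1] + adopted[(i + 1) % n]) / 2.0
            if w * attitude[i] + (1.0 - w) * norm >= threshold:
                new[i] = True
        adopted = new
    return sum(adopted)
```

Because adoption is absorbing, the adopted count is monotone over rounds; how far it spreads depends on the interplay of attitudes and the social-norm weight, which is the "main effect of local agent-agent interactions" the abstract refers to.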
The application of CFD to the modelling of fires in complex geometries
NASA Astrophysics Data System (ADS)
Burns, A. D.; Clarke, D. S.; Guilbert, P.; Jones, I. P.; Simcox, S.; Wilkes, N. S.
The application of Computational Fluid Dynamics (CFD) to industrial safety is a challenging activity. In particular it involves the interaction of several different physical processes, including turbulence, combustion, radiation, buoyancy, compressible flow and shock waves in complex three-dimensional geometries. In addition, there may be multi-phase effects arising, for example, from sprinkler systems for extinguishing fires. The FLOW3D software (1-3) from Computational Fluid Dynamics Services (CFDS) is in widespread use in industrial safety problems, both within AEA Technology, and also by CFDS's commercial customers, for example references (4-13). This paper discusses some other applications of FLOW3D to safety problems. These applications illustrate the coupling of the gas flows with radiation models and combustion models, particularly for complex geometries where simpler radiation models are not applicable.
Mean field games with congestion
NASA Astrophysics Data System (ADS)
Achdou, Yves; Porretta, Alessio
2018-03-01
We consider a class of systems of time dependent partial differential equations which arise in mean field type models with congestion. The systems couple a backward viscous Hamilton-Jacobi equation and a forward Kolmogorov equation, both posed in $(0,T)\times (\mathbb{R}^N /\mathbb{Z}^N)$. Because of congestion and by contrast with simpler cases, the system can never be seen as the optimality conditions of an optimal control problem driven by a partial differential equation. The Hamiltonian vanishes as the density tends to $+\infty$ and may not even be defined in the regions where the density is zero. After giving a suitable definition of weak solutions, we prove existence and uniqueness results under rather general assumptions. No restriction is made on the horizon $T$.
Light scattering by marine algae: two-layer spherical and nonspherical models
NASA Astrophysics Data System (ADS)
Quirantes, Arturo; Bernard, Stewart
2004-11-01
Light scattering properties of algae-like particles are modeled using the T-matrix for coated scatterers. Two basic geometries have been considered: off-centered coated spheres and centered spheroids. Extinction, scattering and absorption efficiencies, plus scattering in the backward plane, are compared to simpler models such as the homogeneous (Mie) and coated (Aden-Kerker) models. The anomalous diffraction approximation (ADA), of widespread use in the oceanographic light-scattering community, has also been used as a first approximation, for both homogeneous and coated spheres. T-matrix calculations show that some light scattering values, such as extinction and scattering efficiencies, have little dependence on particle shape, thus reinforcing the view that simpler (Mie, Aden-Kerker) models can be applied to infer refractive index (RI) data from absorption curves. The backscattering efficiency, on the other hand, is quite sensitive to shape. This calls into question the use of light scattering techniques where the phase function plays a pivotal role, and can help explain the discrepancy between theoretical and experimental values of the backscattering coefficient observed in oceanic studies.
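For a homogeneous sphere with refractive index near 1, the anomalous diffraction approximation mentioned above reduces to van de Hulst's closed-form extinction efficiency, which is why it is popular as a first approximation:

```python
import math

def ada_qext(x, m):
    """van de Hulst anomalous-diffraction approximation for the
    extinction efficiency of a homogeneous sphere with size parameter
    x = 2*pi*r/lambda and real relative refractive index m near 1:
    Q_ext = 2 - (4/rho) sin(rho) + (4/rho^2)(1 - cos(rho)),
    with phase-shift parameter rho = 2*x*(m - 1)."""
    rho = 2.0 * x * (m - 1.0)
    if rho == 0.0:
        return 0.0
    return (2.0 - (4.0 / rho) * math.sin(rho)
            + (4.0 / rho ** 2) * (1.0 - math.cos(rho)))
```

The curve oscillates (the "interference structure"), peaking above 3 near rho of about 4 and approaching the geometric-optics limit of 2 for large rho — behavior Mie theory reproduces in detail and the ADA captures cheaply.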
A comparative study of four major approaches to predicting ATES performance
NASA Astrophysics Data System (ADS)
Doughty, C.; Buscheck, T. A.; Bodvarsson, G. S.; Tsang, C. F.
1982-09-01
The International Energy Agency test problem involving Aquifer Thermal Energy Storage was solved using four approaches: the numerical model PF (formerly CCC), the simpler numerical model SFM, and two graphical characterization schemes. Each of the four techniques is discussed, along with its advantages and disadvantages.
ERIC Educational Resources Information Center
Fan, Yi; Lance, Charles E.
2017-01-01
The correlated trait-correlated method (CTCM) model for the analysis of multitrait-multimethod (MTMM) data is known to suffer convergence and admissibility (C&A) problems. We describe a little known and seldom applied reparameterized version of this model (CTCM-R) based on Rindskopf's reparameterization of the simpler confirmatory factor…
A Toy Model of Quantum Electrodynamics in (1 + 1) Dimensions
ERIC Educational Resources Information Center
Boozer, A. D.
2008-01-01
We present a toy model of quantum electrodynamics (QED) in (1 + 1) dimensions. The QED model is much simpler than QED in (3 + 1) dimensions but exhibits many of the same physical phenomena, and serves as a pedagogical introduction to both QED and quantum field theory in general. We show how the QED model can be derived by quantizing a toy model of…
NASA Technical Reports Server (NTRS)
Franklin, James A.
1997-01-01
This report describes revisions to a simulation model that was developed for use in piloted evaluations of takeoff, transition, hover, and landing characteristics of an advanced short takeoff and vertical landing lift fan fighter aircraft. These revisions have been made to the flight/propulsion control system, head-up display, and propulsion system to reflect recent flight and simulation experience with short takeoff and vertical landing operations. They include nonlinear inverse control laws in all axes (eliminating earlier versions with state rate feedback), throttle scaling laws for flightpath and thrust command, control selector commands apportioned based on relative effectiveness of the individual controls, lateral guidance algorithms that provide more flexibility for terminal area operations, and a simpler representation of the propulsion system. The model includes modes tailored to the phases of the aircraft's operation, with several response types which are coupled to the aircraft's aerodynamic and propulsion system effectors through a control selector tailored to the propulsion system. Head-up display modes for approach and hover are integrated with the corresponding control modes. Propulsion system components modeled include a remote lift fan and a lift-cruise engine. Their static performance and dynamic responses are represented by the model. A separate report describes the subsonic, power-off aerodynamics and jet induced aerodynamics in hover and forward flight, including ground effects.
Emerald: an object-based language for distributed programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchinson, N.C.
1987-01-01
Distributed systems have become more common; however, constructing distributed applications remains a very difficult task. Numerous operating systems and programming languages have been proposed that attempt to simplify the programming of distributed applications. Here a programming language called Emerald is presented that simplifies distributed programming by extending the concepts of object-based languages to the distributed environment. Emerald supports a single model of computation: the object. Emerald objects include private entities such as integers and Booleans, as well as shared, distributed entities such as compilers, directories, and entire file systems. Emerald objects may move between machines in the system, but object invocation is location independent. The uniform semantic model used for describing all Emerald objects makes the construction of distributed applications in Emerald much simpler than in systems where the differences in implementation between local and remote entities are visible in the language semantics. Emerald incorporates a type system that deals only with the specification of objects, ignoring differences in implementation. Thus, two different implementations of the same abstraction may be freely mixed.
Computerized power supply analysis: State equation generation and terminal models
NASA Technical Reports Server (NTRS)
Garrett, S. J.
1978-01-01
To aid engineers who design power supply systems, two analysis tools that can be used with the state equation analysis package were developed. These tools include integration routines that start with the description of a power supply in state equation form and yield analytical results. The first tool uses a computer program that works with the SUPER SCEPTRE circuit analysis program and prints the state equations for an electrical network. The state equations developed automatically by the computer program are used to develop an algorithm for reducing the number of state variables required to describe an electrical network. In this way a second tool is obtained in which the order of the network is reduced and a simpler terminal model is obtained.
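The kind of state-equation description such tools emit can be illustrated with a two-state network. A sketch that forward-Euler integrates x' = Ax for a series RLC loop, with component values normalized to R = L = C = 1 (illustrative, not from the report):

```python
def simulate_state_equations(A, x0, dt=1e-3, steps=1000):
    """Forward-Euler integration of the state equations x' = A x."""
    x = x0[:]
    for _ in range(steps):
        dx = [sum(A[i][j] * x[j] for j in range(len(x)))
              for i in range(len(x))]
        x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
    return x

# Series RLC loop, states [capacitor voltage, inductor current],
# with R = L = C = 1:  vC' = iL,  iL' = -vC - R*iL.
A = [[0.0, 1.0],
     [-1.0, -1.0]]
final = simulate_state_equations(A, [1.0, 0.0])
```

Order reduction of the kind the second tool performs amounts to finding a smaller A (fewer state variables) whose terminal response approximates this one.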
Option volatility and the acceleration Lagrangian
NASA Astrophysics Data System (ADS)
Baaquie, Belal E.; Cao, Yang
2014-01-01
This paper develops a volatility formula for an option on an asset from an acceleration Lagrangian model, and the formula is calibrated with market data. The Black-Scholes model is a simpler case that has a velocity-dependent Lagrangian. The acceleration Lagrangian is defined, and the classical solution of the system in Euclidean time is solved by choosing proper boundary conditions. The conditional probability distribution of the final position given the initial position is obtained from the transition amplitude. The volatility is the standard deviation of the conditional probability distribution. Using the conditional probability and the path integral method, the martingale condition is applied, and one of the parameters in the Lagrangian is fixed. The call option price is obtained using the conditional probability and the path integral method.
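For reference, the simpler velocity-Lagrangian case has the familiar closed-form Black-Scholes call price, which is the natural benchmark for any such calibration:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price: the closed-form solution of
    the velocity-dependent (Brownian) Lagrangian case."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)
```

The acceleration Lagrangian generalizes the transition amplitude behind this formula; calibrating it against market prices asks how much the extra (acceleration) term improves on this baseline.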
Koopman Operator Framework for Time Series Modeling and Analysis
NASA Astrophysics Data System (ADS)
Surana, Amit
2018-01-01
We propose an interdisciplinary framework for time series classification, forecasting, and anomaly detection by combining concepts from Koopman operator theory, machine learning, and linear systems and control theory. At the core of this framework is nonlinear dynamic generative modeling of time series using the Koopman operator which is an infinite-dimensional but linear operator. Rather than working with the underlying nonlinear model, we propose two simpler linear representations or model forms based on Koopman spectral properties. We show that these model forms are invariants of the generative model and can be readily identified directly from data using techniques for computing Koopman spectral properties without requiring the explicit knowledge of the generative model. We also introduce different notions of distances on the space of such model forms which is essential for model comparison/clustering. We employ the space of Koopman model forms equipped with distance in conjunction with classical machine learning techniques to develop a framework for automatic feature generation for time series classification. The forecasting/anomaly detection framework is based on using Koopman model forms along with classical linear systems and control approaches. We demonstrate the proposed framework for human activity classification, and for time series forecasting/anomaly detection in power grid application.
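The idea of identifying a linear model form directly from data, without explicit knowledge of the generative model, can be shown in its simplest one-dimensional form: a least-squares fit of x[t+1] = a*x[t]. This is a toy 1-D analogue of the regression step in data-driven Koopman methods, not the paper's framework:

```python
def estimate_linear_operator(series):
    """Least-squares fit of the scalar linear model x[t+1] = a * x[t]
    from a time series -- the 1-D analogue of identifying a linear
    (Koopman-style) model form directly from data."""
    num = sum(x1 * x0 for x0, x1 in zip(series, series[1:]))
    den = sum(x0 * x0 for x0 in series[:-1])
    return num / den

# A decaying series generated by x[t+1] = 0.9 * x[t]:
series = [0.9 ** t for t in range(50)]
a_hat = estimate_linear_operator(series)
```

In the full framework the same least-squares idea is applied to a dictionary of nonlinear observables, whose fitted linear dynamics approximate the (infinite-dimensional) Koopman operator and whose spectral properties supply the features used for classification and forecasting.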
Lyu, Zhe; Whitman, William B
2017-01-01
Current evolutionary models suggest that Eukaryotes originated from within Archaea instead of being a sister lineage. To test this model of ancient evolution, we review recent studies and compare the three major information processing subsystems of replication, transcription and translation in the Archaea and Eukaryotes. Our hypothesis is that if the Eukaryotes arose within the archaeal radiation, their information processing systems will appear to be of a kind with the archaeal systems and not wholly original. Within the Eukaryotes, the mammalian or human systems are emphasized because of their importance in understanding health. Biochemical as well as genetic studies provide strong evidence for the functional similarity of archaeal homologs to the mammalian information processing system and their dissimilarity to the bacterial systems. In many independent instances, a simple archaeal system is functionally equivalent to more elaborate eukaryotic homologs, suggesting that evolution of complexity is likely a central feature of the eukaryotic information processing system. Because fewer components are often involved, biochemical characterizations of the archaeal systems are often easier to interpret. Similarly, the archaeal cell provides a genetically and metabolically simpler background, enabling convenient studies on the complex information processing system. Therefore, Archaea could serve as a parsimonious and tractable host for studying human diseases that arise in the information processing systems.
A simple computational algorithm of model-based choice preference.
Toyama, Asako; Katahira, Kentaro; Ohira, Hideki
2017-08-01
A broadly used computational framework posits that two learning systems operate in parallel during the learning of choice preferences: the model-free and model-based reinforcement-learning systems. In this study, we examined another possibility, in which model-free learning is the basic system and model-based information is its modulator. Accordingly, we proposed several modified versions of a temporal-difference learning model to explain the choice-learning process. Using the two-stage decision task developed by Daw, Gershman, Seymour, Dayan, and Dolan (2011), we compared their original computational model, which assumes a parallel learning process, and our proposed models, which assume a sequential learning process. Choice data from 23 participants showed a better fit with the proposed models. More specifically, the proposed eligibility adjustment model, which assumes that the environmental model can weight the degree of the eligibility trace, can explain choices better under both model-free and model-based control and has a simpler computational algorithm than the original model. In addition, the forgetting learning model and its variation, which assume changes in the values of unchosen actions, substantially improved the fits to the data. Overall, we show that a hybrid computational model best fits the data. The parameters used in this model succeed in capturing individual tendencies with respect to both model use in learning and exploration behavior. This computational model provides novel insights into learning with interacting model-free and model-based components.
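The temporal-difference machinery underlying such models can be illustrated with a plain TD(λ) update, in which the eligibility trace distributes credit to recently visited states. In the paper's eligibility adjustment model the trace weighting would itself be modulated by model-based information; here λ is a fixed illustrative parameter and the three-state episode is invented.

```python
import numpy as np

def td_lambda_update(V, states, rewards, alpha=0.1, gamma=1.0, lam=0.9):
    """One episode of TD(lambda) with accumulating eligibility traces.
    In the eligibility-adjustment idea, lam would be weighted by
    model-based information; here it is a fixed illustrative constant."""
    e = np.zeros_like(V)
    for t in range(len(states) - 1):
        s, s_next = states[t], states[t + 1]
        delta = rewards[t] + gamma * V[s_next] - V[s]   # prediction error
        e[s] += 1.0                 # accumulate trace for the visited state
        V += alpha * delta * e      # credit all recently visited states
        e *= gamma * lam            # decay traces
    return V

V = np.zeros(3)
# Episode: state 0 -> 1 -> 2, reward 1 on the final transition.
V = td_lambda_update(V, states=[0, 1, 2], rewards=[0.0, 1.0])
print(V)   # state 1 is updated directly; state 0 via the decayed trace
```

With λ = 0 only the immediately preceding state would be updated; the trace is what lets earlier choices share the credit.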
The kinetic stabilizer: a route to simpler tandem mirror systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Post, R F
2001-02-02
As we enter the new millennium there is a growing urgency to address the issue of finding long-range solutions to the world's energy needs. Fusion offers such a solution, provided economically viable means can be found to extract useful energy from fusion reactions. While the magnetic confinement approach to fusion has a long and productive history, to date the mainline approaches to magnetic confinement, namely closed systems such as the tokamak, appear to many as being too large and complex to be acceptable economically, despite the impressive progress that has been made toward the achievement of fusion-relevant confinement parameters. Thus there is a growing feeling that it is imperative to search for new and simpler approaches to magnetic fusion, ones that might lead to smaller and more economically attractive fusion power plants.
Enhanced polyhydroxyalkanoate production from organic wastes via process control.
Vargas, Alejandro; Montaño, Liliana; Amaya, Rodolfo
2014-03-01
This work explores the use of a model-based control scheme to enhance the productivity of polyhydroxyalkanoate (PHA) production in a mixed-culture two-stage system fed with synthetic wastewater. The controller supplies pulses of substrate while regulating the dissolved oxygen (DO) concentration and uses the data to fit a dynamic mathematical model, which in turn is used to predict the time until the next pulse addition. Experiments in a bench-scale system first determined the optimal DO set-point and initial substrate concentration. Then the proposed feedback control strategy was compared with a simpler empirical algorithm. The results show that a substrate conversion rate of 1.370±0.598 mg PHA/mg COD/d was achieved. The proposed strategy can also indicate when to stop the accumulation of PHA upon saturation, which occurred with a PHA content of 71.0±7.2 wt.%. Copyright © 2014 Elsevier Ltd. All rights reserved.
Application of powder densification models to the consolidation processing of composites
NASA Technical Reports Server (NTRS)
Wadley, H. N. G.; Elzey, D. M.
1991-01-01
Unidirectional fiber reinforced metal matrix composite tapes (containing a single layer of parallel fibers) can now be produced by plasma deposition. These tapes can be stacked and subjected to a thermomechanical treatment that results in a fully dense near net shape component. The mechanisms by which this consolidation step occurs are explored, and models to predict the effect of different thermomechanical conditions (during consolidation) upon the kinetics of densification are developed. The approach is based upon a methodology developed by Ashby and others for the simpler problem of hot isostatic pressing (HIP) of spherical powders. The complex problem is divided into six much simpler subproblems, and their predicted contributions to densification are then summed. The initial problem decomposition is to treat the two extreme geometries encountered (contact deformation occurring between foils and shrinkage of isolated, internal pores). Deformation of these two geometries is modelled for plastic, power law creep and diffusional flow. The results are reported in the form of a densification map.
Autoclass: An automatic classification system
NASA Technical Reports Server (NTRS)
Stutz, John; Cheeseman, Peter; Hanson, Robin
1991-01-01
The task of inferring a set of classes and class descriptions most likely to explain a given data set can be placed on a firm theoretical foundation using Bayesian statistics. Within this framework, and using various mathematical and algorithmic approximations, the AutoClass System searches for the most probable classifications, automatically choosing the number of classes and complexity of class descriptions. A simpler version of AutoClass has been applied to many large real data sets, has discovered new independently-verified phenomena, and has been released as a robust software package. Recent extensions allow attributes to be selectively correlated within particular classes, and allow classes to inherit, or share, model parameters through a class hierarchy. The mathematical foundations of AutoClass are summarized.
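The core iteration behind this kind of Bayesian classification can be sketched with expectation-maximization for a simple mixture model. AutoClass's full Bayesian search additionally chooses the number of classes and handles richer attribute models; the sketch below assumes two one-dimensional Gaussian classes and invented data, purely to show the class-membership/parameter loop.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated clusters; EM should recover their centers.
x = np.concatenate([rng.normal(0.0, 0.5, 200), rng.normal(5.0, 0.5, 200)])

mu, sigma, pi = np.array([1.0, 4.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])
for _ in range(50):
    # E-step: responsibility of each class for each point (unnormalized
    # Gaussian density times class prior, then normalized per point).
    d = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma * pi
    r = d / d.sum(axis=1, keepdims=True)
    # M-step: re-estimate class parameters from the responsibilities.
    n = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / n
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    pi = n / n.sum()
print(np.sort(mu))   # close to the true centers 0 and 5
```

A fully Bayesian treatment would put priors on mu, sigma, and pi and compare marginal likelihoods across class counts; the E/M structure stays recognizable.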
Using input command pre-shaping to suppress multiple mode vibration
NASA Technical Reports Server (NTRS)
Hyde, James M.; Seering, Warren P.
1990-01-01
Spacecraft, space-borne robotic systems, and manufacturing equipment often utilize lightweight materials and configurations that give rise to vibration problems. Prior research has led to the development of input command pre-shapers that can significantly reduce residual vibration. These shapers exhibit marked insensitivity to errors in natural frequency estimates and can be combined to minimize vibration at more than one frequency. This paper presents a method for the development of multiple mode input shapers which are simpler to implement than previous designs and produce smaller system response delays. The new technique involves the solution of a group of simultaneous non-linear impulse constraint equations. The resulting shapers were tested on a model of MACE, an MIT/NASA experimental flexible structure.
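A common baseline for multiple-mode shaping, against which the paper's simultaneous-constraint designs are an improvement, is to convolve single-mode zero-vibration (ZV) shapers. The sketch below uses illustrative mode frequencies and damping ratios, not values from the MACE structure.

```python
import numpy as np

def zv_shaper(wn, zeta):
    """Two-impulse zero-vibration (ZV) shaper for one mode with natural
    frequency wn (rad/s) and damping ratio zeta: impulse times and
    amplitudes that cancel the mode's residual vibration."""
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))
    times = np.array([0.0, np.pi / (wn * np.sqrt(1.0 - zeta**2))])
    amps = np.array([1.0, K]) / (1.0 + K)
    return times, amps

# Multi-mode shaping by convolving the per-mode shapers (4 impulses total).
t1, a1 = zv_shaper(wn=2 * np.pi * 1.0, zeta=0.01)   # 1 Hz mode
t2, a2 = zv_shaper(wn=2 * np.pi * 3.0, zeta=0.02)   # 3 Hz mode
times = (t1[:, None] + t2).ravel()
amps = (a1[:, None] * a2).ravel()
print(amps.sum())   # impulse amplitudes sum to 1, preserving the setpoint
```

Solving the impulse constraint equations simultaneously, as the paper does, yields shapers with fewer impulses or shorter duration than this convolution, which is exactly the claimed reduction in response delay.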
NASA Technical Reports Server (NTRS)
Haimes, Robert; Follen, Gregory J.
1998-01-01
CAPRI is a CAD-vendor neutral application programming interface designed for the construction of analysis and design systems. By allowing access to the geometry from within all modules (grid generators, solvers and post-processors) such tasks as meshing on the actual surfaces, node enrichment by solvers and defining which mesh faces are boundaries (for the solver and visualization system) become simpler. The overall reliance on file 'standards' is minimized. This 'Geometry Centric' approach makes multi-physics (multi-disciplinary) analysis codes much easier to build. By using the shared (coupled) surface as the foundation, CAPRI provides a single call to interpolate grid-node based data from the surface discretization in one volume to another. Finally, design systems are possible where the results can be brought back into the CAD system (and therefore manufactured) because all geometry construction and modification are performed using the CAD system's geometry kernel.
Aeroelastic Model Structure Computation for Envelope Expansion
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.
2007-01-01
Structure detection is a procedure for selecting a subset of candidate terms, from a full model description, that best describes the observed output. This is a necessary procedure to compute an efficient system description which may afford greater insight into the functionality of the system or a simpler controller design. Structure computation as a tool for black-box modeling may be of critical importance in the development of robust, parsimonious models for the flight-test community. Moreover, this approach may lead to efficient strategies for rapid envelope expansion that may save significant development time and costs. In this study, a least absolute shrinkage and selection operator (LASSO) technique is investigated for computing efficient model descriptions of non-linear aeroelastic systems. The LASSO minimises the residual sum of squares with the addition of an l1 penalty term on the parameter vector of the traditional l2 minimisation problem. Its use for structure detection is a natural extension of this constrained minimisation approach to pseudo-linear regression problems which produces some model parameters that are exactly zero and, therefore, yields a parsimonious system description. Applicability of this technique for model structure computation for the F/A-18 (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) Active Aeroelastic Wing project using flight test data is shown for several flight conditions (Mach numbers) by identifying a parsimonious system description with a high percent fit for cross-validated data.
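A minimal numerical sketch of the LASSO idea, using iterative soft-thresholding on invented regression data (not flight-test data): the l1 penalty drives the coefficients of irrelevant candidate terms to exactly zero, which is what makes the fit usable for structure detection.

```python
import numpy as np

def lasso_ista(X, y, lam=0.1, step=None, iters=2000):
    """LASSO via iterative soft-thresholding (ISTA): minimizes
    0.5 * ||y - X b||^2 + lam * ||b||_1 and returns a sparse b."""
    if step is None:
        step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        g = X.T @ (X @ b - y)                    # gradient of the smooth part
        z = b - step * g
        b = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrink
    return b

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))                    # candidate regressor terms
b_true = np.array([2.0, 0.0, 0.0, -1.5, 0.0])    # sparse true "structure"
y = X @ b_true + 0.01 * rng.normal(size=100)
b = lasso_ista(X, y, lam=1.0)
print(np.round(b, 2))   # nonzero only where b_true is nonzero
```

The exact zeros come from the soft-threshold step, not from rounding; an l2 (ridge) penalty would shrink all coefficients without ever zeroing any.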
Photometric functions for photoclinometry and other applications
McEwen, A.S.
1991-01-01
Least-squares fits to the brightness profiles across a disk, or "limb darkening", described by Hapke's photometric function are found for the simpler Minnaert and lunar-Lambert functions. The simpler functions are needed to reduce the number of unknown parameters in photoclinometry, especially to distinguish the brightness variations of the surface materials from those due to the resolved topography. The limb darkening varies with the Hapke parameters for macroscopic roughness (θ̄), the single-scattering albedo (w), and the asymmetry factor of the particle phase function (g). Both of the simpler functions generally provide good matches to the limb darkening described by Hapke's function, but the lunar-Lambert function is superior when viewing angles are high and when θ̄ is less than 30°. Although a nonunique solution for the Minnaert function at high phase angles has been described for smooth surfaces, the discrepancy decreases with increasing θ̄ and virtually disappears when θ̄ reaches 30° to 40°. The variation in limb darkening with w and g, pronounced for smooth surfaces, is reduced or eliminated when the Hapke parameters are in the range typical of most planetary surfaces; this result simplifies the problem of photoclinometry across terrains with variable surface materials. The Minnaert or lunar-Lambert fits to published Hapke models will give photoclinometric solutions that are very similar (<1° slope discrepancy) to the Hapke-function solutions for nearly all of the bodies and terrains thus far modeled by Hapke's function. © 1991.
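The two simpler functions, in the forms commonly used in this literature, are easy to state in code. The sketch below evaluates both at an illustrative geometry; the parameter values are assumptions for demonstration, not fitted values from the paper.

```python
import numpy as np

def minnaert(mu0, mu, B, k):
    """Minnaert function: B * mu0**k * mu**(k-1), where mu0 and mu are the
    cosines of the incidence and emission angles; k = 1 is Lambertian."""
    return B * mu0**k * mu**(k - 1)

def lunar_lambert(mu0, mu, A, L):
    """Lunar-Lambert function: a weighted mix of the Lommel-Seeliger
    (lunar) term and the Lambert term; L = 0 is purely Lambertian,
    L = 1 purely lunar."""
    return A * ((1.0 - L) * mu0 + 2.0 * L * mu0 / (mu0 + mu))

i, e = np.radians(30.0), np.radians(0.0)        # incidence, emission angles
mu0, mu = np.cos(i), np.cos(e)
print(minnaert(mu0, mu, B=1.0, k=1.0))          # Lambertian limit: cos(i)
print(lunar_lambert(mu0, mu, A=1.0, L=0.0))     # same Lambertian limit
```

Photoclinometry then inverts these expressions: given observed brightness and an assumed (B, k) or (A, L), solve for the local surface tilt that produces the required mu0.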
The interplay between obesity and cancer: a fly view
2016-01-01
Accumulating epidemiological evidence indicates a strong clinical association between obesity and an increased risk of cancer. The global pandemic of obesity indicates a public health trend towards a substantial increase in cancer incidence and mortality. However, the mechanisms that link obesity to cancer remain incompletely understood. The fruit fly Drosophila melanogaster has been increasingly used to model an expanding spectrum of human diseases. Fly models provide a genetically simpler system that is ideal for use as a first step towards dissecting disease interactions. Recently, the combining of fly models of diet-induced obesity with models of cancer has provided a novel model system in which to study the biological mechanisms that underlie the connections between obesity and cancer. In this Review, I summarize recent advances, made using Drosophila, in our understanding of the interplay between diet, obesity, insulin resistance and cancer. I also discuss how the biological mechanisms and therapeutic targets that have been identified in fly studies could be utilized to develop preventative interventions and treatment strategies for obesity-associated cancers. PMID:27604692
Strategies to intervene on causal systems are adaptively selected.
Coenen, Anna; Rehder, Bob; Gureckis, Todd M
2015-06-01
How do people choose interventions to learn about causal systems? Here, we considered two possibilities. First, we test an information sampling model, information gain, which values interventions that can discriminate between a learner's hypotheses (i.e. possible causal structures). We compare this discriminatory model to a positive testing strategy that instead aims to confirm individual hypotheses. Experiment 1 shows that individual behavior is described best by a mixture of these two alternatives. In Experiment 2 we find that people are able to adaptively alter their behavior and adopt the discriminatory model more often after experiencing that the confirmatory strategy leads to a subjective performance decrement. In Experiment 3, time pressure leads to the opposite effect of inducing a change towards the simpler positive testing strategy. These findings suggest that there is no single strategy that describes how intervention decisions are made. Instead, people select strategies in an adaptive fashion that trades off their expected performance and cognitive effort. Copyright © 2015 Elsevier Inc. All rights reserved.
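The information-gain strategy can be sketched directly: score an intervention by the expected reduction in entropy over the hypothesis set. The two-hypothesis example below is invented for illustration; it contrasts a discriminating intervention with an uninformative one, which is the distinction the experiments turn on.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits, ignoring zero-probability outcomes."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def expected_information_gain(prior, likelihoods):
    """Expected entropy reduction over hypotheses (causal structures) from
    one intervention. likelihoods[h][o] = P(outcome o | hypothesis h)."""
    prior = np.asarray(prior, dtype=float)
    L = np.asarray(likelihoods, dtype=float)
    p_outcome = prior @ L                        # marginal over outcomes
    gain = entropy(prior)
    for o, po in enumerate(p_outcome):
        if po > 0:
            posterior = prior * L[:, o] / po     # Bayes rule
            gain -= po * entropy(posterior)
    return gain

# Two hypotheses; intervention A discriminates them, intervention B does not.
prior = [0.5, 0.5]
gain_A = expected_information_gain(prior, [[0.9, 0.1], [0.1, 0.9]])
gain_B = expected_information_gain(prior, [[0.5, 0.5], [0.5, 0.5]])
print(gain_A > gain_B)   # the discriminating intervention is preferred
```

A positive-testing learner would instead pick the intervention whose predicted outcome most strongly confirms a single favored hypothesis, regardless of how well it separates the alternatives.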
Aeroelastic analysis of wind energy conversion systems
NASA Technical Reports Server (NTRS)
Dugundji, J.
1978-01-01
An aeroelastic investigation of horizontal axis wind turbines is described. The study is divided into two simpler areas: (1) the aeroelastic stability of a single blade on a rigid tower, and (2) the mechanical vibrations of the rotor system on a flexible tower. Some resulting instabilities and forced vibration behavior are described.
Kollikkathara, Naushad; Feng, Huan; Yu, Danlin
2010-11-01
As planning for sustainable municipal solid waste management has to address several inter-connected issues such as landfill capacity, environmental impacts and financial expenditure, it becomes increasingly necessary to understand the dynamic nature of their interactions. A system dynamics approach designed here attempts to address some of these issues by fitting a model framework for the Newark urban region in the US and running a forecast simulation. The dynamic system developed in this study incorporates some of the complexity of the waste generation and management process, which is achieved through a combination of simpler sub-processes that are linked together to form a whole. The impact of decision options on the generation of waste in the city, on the remaining landfill capacity of the state, and on the economic cost or benefit actualized by different waste processing options is explored through this approach, providing valuable insights into the urban waste-management process. Copyright © 2010 Elsevier Ltd. All rights reserved.
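The stock-and-flow style of such a system dynamics model can be sketched with a simple forward simulation of linked sub-processes. All rates, capacities, and the growth feedback below are invented placeholders, not values from the Newark study.

```python
# Minimal stock-and-flow sketch: generation, diversion, and landfilling
# sub-processes linked into one loop, integrated year by year.
years = 20
population = 280_000.0           # illustrative, not the study's data
waste_per_capita = 0.5           # tonnes/person/year (placeholder)
recycling_rate = 0.30            # fraction diverted from landfill
landfill_capacity = 3_000_000.0  # tonnes remaining (placeholder)

remaining = landfill_capacity
history = []
for _ in range(years):
    generated = population * waste_per_capita
    landfilled = generated * (1.0 - recycling_rate)
    remaining -= landfilled                  # landfill stock depletes
    population *= 1.01                       # feedback: growth drives waste
    history.append(remaining)
print(history[-1] > 0)   # does capacity survive the simulated horizon?
```

Real system dynamics tools add delays, nonlinear feedbacks, and cost accounting on top of exactly this kind of loop, which is what lets decision options be compared.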
Artificial photosynthesis: biomimetic approaches to solar energy conversion and storage.
Kalyanasundaram, K; Graetzel, M
2010-06-01
Using the sun as the energy source, natural photosynthesis carries out a number of useful reactions such as oxidation of water to molecular oxygen and fixation of CO(2) in the form of sugars. These are achieved through a series of light-induced multi-electron-transfer reactions involving chlorophylls in a special arrangement and several other species including specific enzymes. Artificial photosynthesis attempts to reconstruct these key processes in simpler model systems such that solar energy and abundant natural resources can be used to generate high-energy fuels and restrict the amount of CO(2) in the atmosphere. Details of a few model catalytic systems that lead to clean cleavage of water into H(2) and O(2), photoelectrochemical solar cells for the direct conversion of sunlight to electricity, solar cells for total decomposition of water, and catalytic systems for fixation of CO(2) to fuels such as methanol and methane are reviewed here. Copyright 2010 Elsevier Ltd. All rights reserved.
On fluttering modes for aircraft wing model in subsonic air flow.
Shubov, Marianna A
2014-12-08
The paper deals with unstable aeroelastic modes for aircraft wing model in subsonic, incompressible, inviscid air flow. In recent author's papers asymptotic, spectral and stability analysis of the model has been carried out. The model is governed by a system of two coupled integrodifferential equations and a two-parameter family of boundary conditions modelling action of self-straining actuators. The Laplace transform of the solution is given in terms of the 'generalized resolvent operator', which is a meromorphic operator-valued function of the spectral parameter λ, whose poles are called the aeroelastic modes. The residues at these poles are constructed from the corresponding mode shapes. The spectral characteristics of the model are asymptotically close to the ones of a simpler system, which is called the reduced model. For the reduced model, the following result is shown: for each value of subsonic speed, there exists a radius such that all aeroelastic modes located outside the circle of this radius centred at zero are stable. Unstable modes, whose number is always finite, can occur only inside this 'circle of instability'. Explicit estimate of the 'instability radius' in terms of model parameters is given.
A Brief Review of Elasticity and Viscoelasticity
2010-05-27
through electromagnetic or acoustic means. Creating a model that accurately describes these Rayleigh waves is key to modeling and understanding the...technology to be feasible, a mathematical model that describes the propagation of the acoustic wave from the stenosis to the chest wall will be necessary...viscoelastic model is simpler to use than poroelastic models but yields similar results for a wide range of soils and dynamic loadings. In addition
Adaptive reduction of constitutive model-form error using a posteriori error estimation techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bishop, Joseph E.; Brown, Judith Alice
In engineering practice, models are typically kept as simple as possible for ease of setup and use, computational efficiency, maintenance, and overall reduced complexity to achieve robustness. In solid mechanics, a simple and efficient constitutive model may be favored over one that is more predictive, but is difficult to parameterize, is computationally expensive, or is simply not available within a simulation tool. In order to quantify the modeling error due to the choice of a relatively simple and less predictive constitutive model, we adopt the use of a posteriori model-form error-estimation techniques. Based on local error indicators in the energy norm, an algorithm is developed for reducing the modeling error by spatially adapting the material parameters in the simpler constitutive model. The resulting material parameters are not material properties per se, but depend on the given boundary-value problem. As a first step to the more general nonlinear case, we focus here on linear elasticity in which the “complex” constitutive model is general anisotropic elasticity and the chosen simpler model is isotropic elasticity. As a result, the algorithm for adaptive error reduction is demonstrated using two examples: (1) a transversely-isotropic plate with a hole subjected to tension, and (2) a transversely-isotropic tube with two side holes subjected to torsion.
Preliminary report on electromagnetic model studies
Frischknecht, F.C.; Mangan, G.B.
1960-01-01
More than 70 response curves for various models have been obtained using the slingram and turam electromagnetic methods. Results show that for the slingram method, horizontal co-planar coils are usually more sensitive than vertical co-axial or vertical co-planar coils. The shape of the anomaly usually is simpler for the vertical coils.
ERIC Educational Resources Information Center
Rea, Shane L.; Graham, Brett H.; Nakamaru-Ogiso, Eiko; Kar, Adwitiya; Falk, Marni J.
2010-01-01
The extensive conservation of mitochondrial structure, composition, and function across evolution offers a unique opportunity to expand our understanding of human mitochondrial biology and disease. By investigating the biology of much simpler model organisms, it is often possible to answer questions that are unreachable at the clinical level.…
Vindbjerg, Erik; Carlsson, Jessica; Mortensen, Erik Lykke; Elklit, Ask; Makransky, Guido
2016-09-05
Refugees are known to have high rates of post-traumatic stress disorder (PTSD). Although recent years have seen an increase in the number of refugees from Arabic-speaking countries in the Middle East, no study so far has validated the construct of PTSD in an Arabic-speaking sample of refugees. Responses to the Harvard Trauma Questionnaire (HTQ) were obtained from 409 Arabic-speaking refugees diagnosed with PTSD and undergoing treatment in Denmark. Confirmatory factor analysis was used to test and compare five alternative models. All four- and five-factor models provided sufficient fit indices. However, a combination of excessively small clusters and a case of mistranslation in the official Arabic translation of the HTQ rendered the results of two of the models inadmissible. A post hoc analysis revealed that a simpler factor structure is supported once local dependence is addressed. Overall, the construct of PTSD is supported in this sample of Arabic-speaking refugees. Apart from pursuing maximum fit, future studies may wish to test simpler, potentially more stable models, which allow a more informative analysis of individual items.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marquette, Ian, E-mail: i.marquette@uq.edu.au; Quesne, Christiane, E-mail: cquesne@ulb.ac.be
2015-06-15
We extend the construction of 2D superintegrable Hamiltonians with separation of variables in spherical coordinates using combinations of shift, ladder, and supercharge operators to models involving rational extensions of the two-parameter Lissajous systems on the sphere. These new families of superintegrable systems with integrals of arbitrary order are connected with Jacobi exceptional orthogonal polynomials of type I (or II) and supersymmetric quantum mechanics. Moreover, we present an algebraic derivation of the degenerate energy spectrum for the one- and two-parameter Lissajous systems and the rationally extended models. These results are based on finitely generated polynomial algebras, Casimir operators, realizations as deformed oscillator algebras, and finite-dimensional unitary representations. Such results have only been established so far for 2D superintegrable systems separable in Cartesian coordinates, which are related to a class of polynomial algebras that display a simpler structure. We also point out how the structure function of these deformed oscillator algebras is directly related with the generalized Heisenberg algebras spanned by the nonpolynomial integrals.
Bifurcation analysis and phase diagram of a spin-string model with buckled states.
Ruiz-Garcia, M; Bonilla, L L; Prados, A
2017-12-01
We analyze a one-dimensional spin-string model, in which string oscillators are linearly coupled to their two nearest neighbors and to Ising spins representing internal degrees of freedom. String-spin coupling induces a long-range ferromagnetic interaction among spins that competes with a spin-spin antiferromagnetic coupling. As a consequence, the complex phase diagram of the system exhibits different flat rippled and buckled states, with first or second order transition lines between states. This complexity translates to the two-dimensional version of the model, whose numerical solution has been recently used to explain qualitatively the rippled to buckled transition observed in scanning tunneling microscopy experiments with suspended graphene sheets. Here we describe in detail the phase diagram of the simpler one-dimensional model and phase stability using bifurcation theory. This gives additional insight into the physical mechanisms underlying the different phases and the behavior observed in experiments.
Motion and force control for multiple cooperative manipulators
NASA Technical Reports Server (NTRS)
Wen, John T.; Kreutz, Kenneth
1989-01-01
The motion and force control of multiple robot arms manipulating a commonly held object is addressed. A general control paradigm that decouples the motion and force control problems is introduced. For motion control, there are three natural choices: (1) joint torques, (2) arm-tip force vectors, and (3) the acceleration of a generalized coordinate. Choice (1) allows a class of relatively model-independent control laws by exploiting the Hamiltonian structure of the open-loop system; (2) and (3) require the full model information but produce simpler problems. To resolve the nonuniqueness of the joint torques, two methods are introduced. If the arm and object models are available, the allocation of the desired end-effector control force to the joint actuators can be optimized; otherwise the internal force can be controlled about some set point. It is shown that effective force regulation can be achieved even if little model information is available.
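The resolution of the joint-torque nonuniqueness by optimized force allocation can be sketched with a minimum-norm pseudoinverse solution plus a nullspace (internal squeeze) component. The planar two-arm grasp below is a hypothetical illustration of the idea, not the paper's formulation.

```python
import numpy as np

# Hypothetical two-arm planar grasp: each arm applies a 2D tip force and
# G maps the stacked tip forces [f1; f2] to the net force on the object.
G = np.hstack([np.eye(2), np.eye(2)])          # net force = f1 + f2
F_des = np.array([1.0, 0.0])                   # desired net force

f_min = np.linalg.pinv(G) @ F_des              # minimum-norm allocation
# The nullspace of G carries the internal (squeeze) force: it changes how
# hard the arms press on the object without changing the net force, so it
# can be regulated to a set point independently of the motion control.
squeeze = np.array([1.0, 0.0, -1.0, 0.0])      # equal and opposite tip forces
f = f_min + 0.5 * squeeze
print(G @ f)   # still equals F_des despite the added internal force
```

Joint torques would then follow from each arm's Jacobian transpose applied to its share of f; weighting the pseudoinverse trades off actuator effort across the arms.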
Sanz, Luis; Alonso, Juan Antonio
2017-12-01
In this work we develop approximate aggregation techniques in the context of slow-fast linear population models governed by stochastic differential equations and apply the results to the treatment of populations with spatial heterogeneity. Approximate aggregation techniques allow one to transform a complex system, involving many coupled variables and processes acting on different time scales, into a simpler reduced model with a smaller number of 'global' variables, in such a way that the dynamics of the former can be approximated by that of the latter. In our model we contemplate a linear fast deterministic process together with a linear slow process in which the parameters are affected by additive noise, and give conditions for the solutions corresponding to positive initial conditions to remain positive for all times. By letting the fast process reach equilibrium we build a reduced system with a smaller number of variables, and provide results relating the asymptotic behaviour of the first- and second-order moments of the population vector for the original and the reduced system. The general technique is illustrated by analysing a multiregional stochastic system in which dispersal is deterministic and the growth rate of the populations in each patch is affected by additive noise.
A stochastic approach to uncertainty in the equations of MHD kinematics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillips, Edward G., E-mail: egphillips@math.umd.edu; Elman, Howard C., E-mail: elman@cs.umd.edu
2015-03-01
The magnetohydrodynamic (MHD) kinematics model describes the electromagnetic behavior of an electrically conducting fluid when its hydrodynamic properties are assumed to be known. In particular, the MHD kinematics equations can be used to simulate the magnetic field induced by a given velocity field. While prescribing the velocity field leads to a simpler model than the fully coupled MHD system, this may introduce some epistemic uncertainty into the model. If the velocity of a physical system is not known with certainty, the magnetic field obtained from the model may not be reflective of the magnetic field seen in experiments. Additionally, uncertainty in physical parameters such as the magnetic resistivity may affect the reliability of predictions obtained from this model. By modeling the velocity and the resistivity as random variables in the MHD kinematics model, we seek to quantify the effects of uncertainty in these fields on the induced magnetic field. We develop stochastic expressions for these quantities and investigate their impact within a finite element discretization of the kinematics equations. We obtain mean and variance data through Monte Carlo simulation for several test problems. Toward this end, we develop and test an efficient block preconditioner for the linear systems arising from the discretized equations.
Application of dynamic recurrent neural networks in nonlinear system identification
NASA Astrophysics Data System (ADS)
Du, Yun; Wu, Xueli; Sun, Huiqin; Zhang, Suying; Tian, Qiang
2006-11-01
An adaptive identification method using a simple dynamic recurrent neural network (SRNN) for nonlinear dynamic systems is presented in this paper. The method builds on the idea that using the inner-state feedback of a dynamic network to describe the nonlinear kinetic characteristics of a system reflects its dynamic behavior more directly. It derives the recursive prediction error (RPE) learning algorithm for the SRNN and improves the algorithm by adopting a topological structure with no weight values on the recursion layer. The simulation results indicate that this kind of neural network can be used in real-time control because of its fewer weight values, simpler learning algorithm, higher identification speed, and higher model precision. It avoids the intricate training algorithms and slow convergence caused by the complicated topological structure of the usual dynamic recurrent neural networks.
Analytical methods to predict liquid congealing in ram air heat exchangers during cold operation
NASA Astrophysics Data System (ADS)
Coleman, Kenneth; Kosson, Robert
1989-07-01
Ram air heat exchangers used to cool liquids such as lube oils or ethylene glycol/water solutions can be subject to congealing in very cold ambient conditions, resulting in a loss of cooling capability. Two-dimensional, transient analytical models have been developed to explore this phenomenon for both continuous and staggered fin cores. Staggered-fin predictions are compared to flight test data from the E-2C Allison T56 engine lube oil system during winter conditions. For simpler calculations, a viscosity-ratio correction was introduced and found to provide reasonable cold-ambient performance predictions for the staggered fin core using a one-dimensional approach.
Finite element analysis of electromagnetic propagation in an absorbing wave guide
NASA Technical Reports Server (NTRS)
Baumeister, Kenneth J.
1986-01-01
Wave guides play a significant role in microwave space communication systems. The attenuation per unit length of the guide depends on its construction and design frequency range. A finite element Galerkin formulation has been developed to study TM electromagnetic propagation in complex two-dimensional absorbing wave guides. The analysis models the electromagnetic absorptive characteristics of a general wave guide which could be used to determine wall losses or simulate resistive terminations fitted into the ends of a guide. It is believed that the general conclusions drawn by using this simpler two-dimensional geometry will be fundamentally the same for other geometries.
Theoretical model for frequency locking a diode laser with a Faraday cell
NASA Technical Reports Server (NTRS)
Wanninger, P.; Shay, T. M.
1992-01-01
A new method was developed for frequency locking diode lasers, called the Faraday anomalous dispersion optical transmitter (FADOT) laser locking, which is much simpler than other known locking schemes. The FADOT laser locking method uses commercial laser diodes with no antireflection coatings, an atomic Faraday cell with a single polarizer, and an output coupler to form a compound cavity. The FADOT method is vibration insensitive and exhibits minimal thermal expansion effects. The system has a frequency pull-in range of 443.2 GHz (9 Å). The method has potential applications in optical communication, remote sensing, and the pumping of laser-excited optical filters.
Modified-Signed-Digit Optical Computing Using Fan-Out
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang; Zhou, Shaomin; Yeh, Pochi
1996-01-01
An experimental optical computing system containing optical fan-out elements implements modified-signed-digit (MSD) arithmetic and logic. In comparison with previous optical implementations of MSD arithmetic, this one is characterized by larger throughput, greater flexibility, and simpler optics.
Equation-based languages – A new paradigm for building energy modeling, simulation and optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wetter, Michael; Bonvini, Marco; Nouidui, Thierry S.
2016-04-01
Most of the state-of-the-art building simulation programs implement models in imperative programming languages. This complicates modeling and excludes the use of certain efficient methods for simulation and optimization. In contrast, equation-based modeling languages declare relations among variables, thereby allowing the use of computer algebra to enable much simpler schematic modeling and to generate efficient code for simulation and optimization. We contrast the two approaches in this paper and explain how such manipulations support new use cases. In the first of two examples, we couple models of the electrical grid, multiple buildings, HVAC systems, and controllers to test a controller that adjusts building room temperatures and PV inverter reactive power to maintain power quality. In the second example, we contrast the computing time for solving an optimal control problem for a room-level model predictive controller with and without symbolic manipulations. As a result, exploiting the equation-based language led to a 2,200-fold faster solution.
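The imperative-versus-declarative contrast can be illustrated in miniature. In the imperative style, the programmer fixes causality (inputs compute outputs); in an equation-based language such as Modelica, only the relation 0 = f(...) is declared and the tool assigns causality. The toy Newton solver below stands in for that machinery, and the heat-balance numbers are invented.

```python
# Imperative style: causality is fixed by the programmer.
def room_temp_imperative(Q, UA, T_out):
    return T_out + Q / UA              # T_room solved for explicitly

# Equation-based style: declare a residual 0 = f(x); a generic solver
# (here a toy Newton iteration) decides how to compute the unknown.
def solve(residual, x0, tol=1e-10):
    x = x0
    for _ in range(100):
        f = residual(x)
        if abs(f) < tol:
            break
        h = 1e-6 * (abs(x) + 1.0)
        df = (residual(x + h) - f) / h  # forward-difference Jacobian
        x -= f / df
    return x

Q, UA, T_out = 500.0, 25.0, 10.0       # heat gain [W], loss coeff, outdoor temp
# The same relation, written as 0 = UA*(T_room - T_out) - Q:
T_room = solve(lambda T: UA * (T - T_out) - Q, x0=20.0)
# Because only the relation is declared, the *same* equation can be
# solved for a different unknown, e.g. the UA needed to hold 22 degC:
UA_needed = solve(lambda u: u * (22.0 - T_out) - Q, x0=10.0)
```

The second call is the point: the imperative function can only ever compute `T_room`, while the declared relation can be re-solved for any of its variables.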
Theoretical models for duct acoustic propagation and radiation
NASA Technical Reports Server (NTRS)
Eversman, Walter
1991-01-01
The development of computational methods in acoustics has led to the introduction of analysis and design procedures which model the turbofan inlet as a coupled system, simultaneously modeling propagation and radiation in the presence of realistic internal and external flows. Such models are generally large, require substantial computer speed and capacity, and can be expected to be used in the final design stages, with the simpler models being used in the early design iterations. Emphasis is given to practical modeling methods that have been applied to the acoustical design problem in turbofan engines. The mathematical model is established and the simplest case of propagation in a duct with hard walls is solved to introduce concepts and terminologies. An extensive overview is given of methods for the calculation of attenuation in uniform ducts with uniform flow and with shear flow. Subsequent sections deal with numerical techniques which provide an integrated representation of duct propagation and near- and far-field radiation for realistic geometries and flight conditions.
Tufto, Jarle
2010-01-01
Domesticated species frequently spread their genes into populations of wild relatives through interbreeding. The domestication process often involves artificial selection for economically desirable traits. This can lead to an indirect response in unknown correlated traits and a reduction in the fitness of domesticated individuals in the wild. Previous models for the effect of gene flow from domesticated species to wild relatives have assumed that evolution occurs in one dimension. Here, I develop a quantitative genetic model for the balance between migration and multivariate stabilizing selection. Different forms of correlational selection consistent with a given observed ratio between the average fitness of domesticated and wild individuals offset the phenotypic means at migration-selection balance away from predictions based on simpler one-dimensional models. For almost all parameter values, correlational selection leads to a reduction in the migration load. For ridge selection, this reduction arises because the effective distance the immigrants deviate from the local optimum is reduced. For realistic parameter values, however, the effect of correlational selection on the load is small, suggesting that simpler one-dimensional models may still be adequate for predicting mean population fitness and viability.
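The one-dimensional baseline the abstract compares against can be written as a two-line recursion: stabilizing selection pulls the wild population's trait mean toward the local optimum while gene flow pulls it toward the domesticated mean. All parameter values below are illustrative, not taken from the paper.

```python
# One-dimensional migration-selection balance (the simpler baseline model;
# all parameter values are hypothetical).
m = 0.05      # gene-flow rate from domesticates per generation
s = 0.1       # strength of stabilizing selection
Vg = 1.0      # additive genetic variance
z_dom = 4.0   # trait mean of immigrating domesticated individuals

def next_mean(z):
    z_sel = z * (1.0 - s * Vg)              # selection toward optimum 0
    return (1.0 - m) * z_sel + m * z_dom    # then gene flow

z = 0.0
for _ in range(500):
    z = next_mean(z)

# Closed-form equilibrium of this linear recursion:
z_star = m * z_dom / (1.0 - (1.0 - m) * (1.0 - s * Vg))
# Migration load grows with the squared displacement from the optimum:
load = 0.5 * s * z_star ** 2
```

The iterated mean converges geometrically to the closed-form balance point, the displacement that the multivariate model then shifts away from.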
Critical speeds and forced response solutions for active magnetic bearing turbomachinery, part 2
NASA Technical Reports Server (NTRS)
Rawal, D.; Keesee, J.; Kirk, R. Gordon
1991-01-01
The need for better performance of turbomachinery with active magnetic bearings has necessitated a study of such systems for accurate prediction of their vibrational characteristics. A modification of existing transfer matrix methods for rotor analysis is presented to predict the response of rotor systems with active magnetic bearings. The position of the magnetic bearing sensors is taken into account and the effect of changing sensor position on the vibrational characteristics of the rotor system is studied. The modified algorithm is validated using a simpler Jeffcott model described previously. The effect of changing from a rotating unbalance excitation to a constant excitation in a single plane is also studied. A typical eight stage centrifugal compressor rotor is analyzed using the modified transfer matrix code. The results for a two mass Jeffcott model were presented previously. The results obtained by running this model with the transfer matrix method were compared with the results of the Jeffcott analysis for the purposes of verification. Also included are plots of amplitude versus frequency for the eight stage centrifugal compressor rotor. These plots demonstrate the significant influence that sensor location has on the amplitude and critical frequencies of the rotor system.
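The transfer-matrix idea scales down to a hand-checkable chain: propagate a state vector (displacement, internal force) across field matrices (flexible segments) and point matrices (inertias), then scan frequency for a zero boundary residual. The spring-mass chain below is a one-dimensional toy analogue of the Myklestad/Prohl sweep; the rotor code uses larger state vectors, and the stiffness and mass values here are arbitrary.

```python
import math

k1 = k2 = 1.0e4   # segment stiffnesses [N/m], hypothetical
m1 = m2 = 1.0     # point masses [kg]

def boundary_residual(w2):
    # State [u, N] at the fixed wall: zero displacement, unit force.
    u, N = 0.0, 1.0
    for k, m in ((k1, m1), (k2, m2)):
        u = u + N / k          # field matrix: flexible segment stretches
        N = N - w2 * m * u     # point matrix: inertia load of the mass
    return N                   # a free end requires zero internal force

def find_root(lo, hi, tol=1e-10):
    # Bisection on the boundary residual as a function of omega^2.
    flo = boundary_residual(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * boundary_residual(mid) <= 0.0:
            hi = mid
        else:
            lo, flo = mid, boundary_residual(mid)
    return 0.5 * (lo + hi)

# First natural frequency; the analytic value for this chain is
# omega^2 = (3 - sqrt(5))/2 * k/m.
w2_first = find_root(1.0, 8000.0)
w_first = math.sqrt(w2_first)
```

The same march-and-scan structure, with 4x4 matrices and bearing/sensor terms inserted at the appropriate stations, is what the modified rotor code performs.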
BRST Quantization of the Proca Model Based on the BFT and the BFV Formalism
NASA Astrophysics Data System (ADS)
Kim, Yong-Wan; Park, Mu-In; Park, Young-Jai; Yoon, Sean J.
The BRST quantization of the Abelian Proca model is performed using the Batalin-Fradkin-Tyutin (BFT) and the Batalin-Fradkin-Vilkovisky (BFV) formalisms. First, the BFT Hamiltonian method is applied in order to systematically convert the second-class constraint system of the model into an effectively first-class one by introducing new fields. In finding the involutive Hamiltonian we adopt a new approach that is simpler than the usual one. We also show that in our model the Dirac brackets of the phase-space variables in the original second-class constraint system are exactly the same as the Poisson brackets of the corresponding modified fields in the extended phase space, owing to the linear character of the constraints, as in the Dirac or Faddeev-Jackiw formalisms. Then, following the BFV formalism, we show that the resulting Lagrangian, which preserves BRST symmetry under the standard local gauge-fixing procedure, naturally includes the Stückelberg scalar related to the explicit breaking of the gauge symmetry by the mass term. We also analyze a nonstandard nonlocal gauge-fixing procedure.
Disease spreading in real-life networks
NASA Astrophysics Data System (ADS)
Gallos, Lazaros; Argyrakis, Panos
2002-08-01
In recent years the scientific community has shown a vivid interest in the network structure and dynamics of real-life organized systems. Many such systems, covering an extremely wide range of applications, have recently been shown to exhibit scale-free character in their connectivity distribution, meaning that they obey a power law. Modeling of epidemics on lattices and small-world networks suffers from the presence of a critical infection threshold, above which the entire population is infected. For scale-free networks, the original assumption was that the formation of a giant cluster would lead to an epidemic spreading in the same way as in simpler networks. Here we show that modeling epidemics on a scale-free network can greatly improve the predictions of the rate and efficiency of spreading, as compared to lattice models and small-world networks. We also show that the dynamics of a disease are greatly influenced by the underlying population structure. The exact same model can describe a plethora of networks, such as social networks, virus spreading on the Web, rumor spreading, signal transmission, etc.
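A standard mean-field result for uncorrelated networks (not taken from this paper) makes the contrast with lattices concrete: the epidemic threshold is lambda_c = <k>/<k^2>, and for a scale-free degree distribution P(k) ~ k^(-gamma) with gamma <= 3 the second moment grows with the degree cutoff, so the threshold drifts toward zero as the network grows.

```python
# Epidemic threshold of an SIS-type process on an uncorrelated network:
# lambda_c = <k> / <k^2>, evaluated for a truncated power-law degree
# distribution P(k) ~ k**(-gamma), k = kmin..kmax.
def threshold(gamma, kmax, kmin=2):
    ks = range(kmin, kmax + 1)
    norm = sum(k ** -gamma for k in ks)
    k1 = sum(k ** (1 - gamma) for k in ks) / norm    # <k>
    k2 = sum(k ** (2 - gamma) for k in ks) / norm    # <k^2>
    return k1 / k2

t_small = threshold(2.5, kmax=100)     # modest degree cutoff
t_large = threshold(2.5, kmax=10000)   # larger network, larger hubs
# For gamma = 2.5 the threshold keeps shrinking as the cutoff grows,
# unlike the finite threshold seen on lattices and small-world networks.
```

For gamma > 3 the same computation gives a cutoff-independent threshold, which is why the scale-free case behaves so differently from the simpler network models mentioned in the abstract.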
Colorful Revision: Color-Coded Comments Connected to Instruction
ERIC Educational Resources Information Center
Mack, Nancy
2013-01-01
Many teachers have responded favorably to experimenting with digital feedback on students' writing. Students much preferred a simpler system of highlighting and commenting in color. After experimenting, the author found that this color-coded system was more effective for her students and less time-consuming for her. Of course, any system…
Foam flow and liquid films motion: role of the surfactants properties
NASA Astrophysics Data System (ADS)
Cantat, Isabelle
2011-11-01
Liquid foams absorb energy much more efficiently than any of their constituents taken separately. However, the local process at the origin of the energy dissipation is not yet entirely elucidated, and several models may apply, which makes local studies of simpler systems worthwhile. We investigate the motion through a wet tube of transverse soap films, or lamellae, combining local thickness and velocity measurements in the wetting film. For foaming solutions with a high dilatational surface modulus, we reveal a zone several centimeters in length, the dynamic wetting film, which is significantly influenced by a moving lamella. The dependence of this influence length on lamella velocity and wetting-film thickness provides an accurate discrimination among several possible surfactant models. In collaboration with B. Dollet.
Transportation finance : Kentucky's structure and national trends
DOT National Transportation Integrated Search
2002-05-01
Studies of state Road Fund tax structures, like studies of state General Funds, tend to focus on comparing a state's current tax structure with those of surrounding states and identifying possible tax changes that may make a tax system simpler, more equitable, more...
Rebreathed air as a reference for breath-alcohol testers
DOT National Transportation Integrated Search
1975-01-01
A technique has been devised for a reference measurement of the performance of breath-alcohol measuring instruments directly from the respiratory system. It is shown that this technique is superior to, and simpler than, comparison measurements based on bl...
Modeling of a complex, polar system with a modified Soave-Redlich-Kwong equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sturnfield, E.A.; Matherne, J.L.
1988-01-01
It is computationally feasible to use a simple equation of state (like Redlich-Kwong) to calculate liquid fugacity, but the simpler equations work well only for moderately non-ideal systems. More complex equations (like Gmehling-Liu-Prausnitz) predict system behavior more accurately but are much more complicated to use and can require fitting many parameters to data. This paper illustrates success in using a modified Redlich-Kwong equation to model a complex system including water, hydrogen, sub- and supercritical ammonia, and amines. The binary interaction parameter (k_ij) of the Soave-Redlich-Kwong equation has been modified to be both asymmetric and temperature dependent. Further, the a_i constant was determined by fitting vapor pressure data. Predicted model results are compared to literature (example 1) or plant data (examples 2-4) for four systems: 1. The ammonia-water binary over a wide range of pressure and temperature, including ammonia above its critical point. 2. A multicomponent vapor-liquid equilibrium flash tank and condenser containing hydrogen, ammonia, water, and other heavier compounds. 3. A multicomponent vapor-liquid equilibrium flash tank containing water, heavier amines, and the amine salts. 4. A liquid-liquid-vapor equilibrium decanter system containing water, ammonia, and an organic chloride.
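A sketch of the kind of modification described: standard SRK pure-component parameters combined with a binary interaction parameter k_ij that is asymmetric (k_01 != k_10) and temperature dependent. The SRK forms are the textbook ones, the k_ij coefficients are invented, and the critical constants are approximate literature values for ammonia and water.

```python
import math

R = 8.314  # J/(mol K)

def srk_ab(Tc, Pc, omega, T):
    # Standard SRK pure-component parameters with Soave's alpha function.
    a_c = 0.42748 * R ** 2 * Tc ** 2 / Pc
    b = 0.08664 * R * Tc / Pc
    m = 0.480 + 1.574 * omega - 0.176 * omega ** 2
    alpha = (1.0 + m * (1.0 - math.sqrt(T / Tc))) ** 2
    return a_c * alpha, b

def k_ij(i, j, T):
    # Asymmetric, temperature-dependent binary interaction parameter,
    # the style of modification the abstract describes (numbers invented).
    if i == j:
        return 0.0
    base = {(0, 1): 0.20, (1, 0): 0.05}   # asymmetric: k01 != k10
    return base[(i, j)] - 1.0e-4 * (T - 300.0)

def srk_pressure(T, V, x, crit):
    # P = RT/(V - b) - a(T)/(V(V + b)) with van der Waals mixing rules.
    ab = [srk_ab(Tc, Pc, w, T) for (Tc, Pc, w) in crit]
    b_mix = sum(xi * p[1] for xi, p in zip(x, ab))
    a_mix = sum(x[i] * x[j] * math.sqrt(ab[i][0] * ab[j][0])
                * (1.0 - k_ij(i, j, T))
                for i in range(len(x)) for j in range(len(x)))
    return R * T / (V - b_mix) - a_mix / (V * (V + b_mix))

# Approximate critical constants: (Tc [K], Pc [Pa], acentric factor).
crit = [(405.4, 11.33e6, 0.253),   # ammonia
        (647.1, 22.06e6, 0.344)]   # water
P = srk_pressure(400.0, 3.0e-3, [0.5, 0.5], crit)  # equimolar vapor, 400 K
```

At this low density the computed pressure sits a few percent below the ideal-gas value, as the attraction term should make it.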
Faraday anomalous dispersion optical tuners
NASA Technical Reports Server (NTRS)
Wanninger, P.; Valdez, E. C.; Shay, T. M.
1992-01-01
Common methods for frequency stabilizing diode laser systems employ gratings, etalons, optical electric double feedback, atomic resonance, and a Faraday cell with a low magnetic field. Our method, Faraday anomalous dispersion optical transmitter (FADOT) laser locking, is much simpler than other schemes. The FADOT uses commercial laser diodes with no antireflection coatings, an atomic Faraday cell with a single polarizer, and an output coupler to form a compound cavity. This method is vibration insensitive, thermal expansion effects are minimal, and the system has a frequency pull-in range of 443.2 GHz (9 Å). Our technique is based on the Faraday anomalous dispersion optical filter. This method has potential applications in optical communication, remote sensing, and the pumping of laser-excited optical filters. We present the first theoretical model for the FADOT and compare its calculations to our experimental results.
Qualitative and numerical investigations of the impact of a novel pathogen on a seabird colony
NASA Astrophysics Data System (ADS)
O'Regan, S. M.; Kelly, T. C.; Korobeinikov, A.; O'Callaghan, M. J. A.; Pokrovskii, A. V.
2008-11-01
Understanding the dynamics of novel pathogens in dense populations is crucial to public and veterinary health as well as wildlife ecology. Seabirds live in crowded colonies numbering several thousands of individuals. The long-term dynamics of avian influenza H5N1 virus in a seabird colony with no existing herd immunity are investigated using sophisticated mathematical techniques. The key characteristics of seabird population biology and the H5N1 virus are incorporated into a Susceptible-Exposed-Infected-Recovered (SEIR) model. Using the theory of integral manifolds, the SEIR model is reduced to a simpler system of two differential equations depending on the infected and recovered populations only, termed the IR model. The results of numerical experiments indicate that the IR model and the SEIR model are in close agreement. Using Lyapunov's direct method, the equilibria of the SEIR and the IR models are proven to be globally asymptotically stable in the positive quadrant.
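A generic SEIR integrator (not the paper's exact seabird parameterization; all rates below are hypothetical) illustrates the endemic equilibrium whose global stability the abstract establishes: for R0 > 1 the susceptible fraction settles at 1/R0.

```python
# Generic SEIR model with births and deaths at rate MU; all rates are
# hypothetical, chosen only to give R0 > 1.
BETA, SIGMA, GAMMA, MU = 0.5, 0.2, 0.1, 0.02

def seir_step(s, e, i, r, dt):
    n = s + e + i + r
    ds = MU * n - BETA * s * i / n - MU * s   # births replace all deaths
    de = BETA * s * i / n - (SIGMA + MU) * e  # incubation at rate SIGMA
    di = SIGMA * e - (GAMMA + MU) * i         # recovery at rate GAMMA
    dr = GAMMA * i - MU * r
    return s + dt * ds, e + dt * de, i + dt * di, r + dt * dr

state = (999.0, 0.0, 1.0, 0.0)      # one infected bird in a colony of 1000
for _ in range(200000):              # forward Euler to t = 10000
    state = seir_step(*state, dt=0.05)
s, e, i, r = state

# Basic reproduction number; the endemic susceptible fraction is 1/R0:
R0 = (BETA / (GAMMA + MU)) * (SIGMA / (SIGMA + MU))
```

The damped oscillations settle onto the endemic state, consistent with the global asymptotic stability the SEIR and reduced IR analyses prove.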
A new MRI land surface model HAL
NASA Astrophysics Data System (ADS)
Hosaka, M.
2011-12-01
A land surface model, HAL, has been newly developed for MRI-ESM1 and is used for the CMIP simulations. In the current version, HAL consists of three submodels: SiByl (vegetation), SNOWA (snow), and SOILA (soil). It also contains a land coupler, LCUP, which connects the submodels with an atmospheric model. The vegetation submodel SiByl includes surface vegetation processes similar to JMA/SiB (Sato et al. 1987; Hirai et al. 2007); it has two vegetation layers (canopy and grass) and calculates heat, moisture, and momentum fluxes between the land surface and the atmosphere. The snow submodel SNOWA can have any number of snow layers, with a maximum of 8 in the CMIP5 experiments. The temperature, snow water equivalent, density, grain size, and aerosol deposition content of each layer are predicted. The snow properties, including grain size, evolve through snow metamorphism processes (Niwano et al., 2011), and the snow albedo is diagnosed from the aerosol mixing ratio, the snow properties, and the temperature (Aoki et al., 2011). The soil submodel SOILA can also have any number of soil layers, with 14 in the CMIP5 experiments. The temperature of each layer is predicted by solving heat conduction equations, and the soil moisture by solving the Darcy equation, in which hydraulic conductivity depends on the soil moisture. The land coupler LCUP is designed to support complicated configurations of the submodels: HAL can include competing submodels (precise, detailed ones alongside simpler ones), which can run in the same simulation. LCUP thus enables a two-step model validation, in which we first compare the results of the detailed submodels directly with in-situ observations, and then compare them with those of the simpler submodels. When the detailed submodels perform well, they can serve as reference models for improving the simpler ones.
Modeling, simulation, and analysis of optical remote sensing systems
NASA Technical Reports Server (NTRS)
Kerekes, John Paul; Landgrebe, David A.
1989-01-01
Remote sensing of the Earth's resources from space-based sensors has evolved in the past 20 years from a scientific experiment to a commonly used technological tool. The scientific applications and engineering aspects of remote sensing systems have been studied extensively. However, most of these studies have been aimed at understanding individual aspects of the remote sensing process, while relatively few have studied their interrelations. A motivation for studying these interrelationships has arisen with the advent of highly sophisticated configurable sensors as part of the Earth Observing System (EOS) proposed by NASA for the 1990s. Two approaches to investigating remote sensing systems are developed. In one approach, detailed models of the scene, the sensor, and the processing aspects of the system are implemented in a discrete simulation. This approach is useful in creating simulated images with desired characteristics for use in sensor or processing algorithm development. A less complete, but computationally simpler, method based on a parametric model of the system is also developed. In this analytical model the various informational classes are parameterized by their spectral mean vector and covariance matrix. These class statistics are modified by models for the atmosphere, the sensor, and processing algorithms, and an estimate is made of the resulting classification accuracy among the informational classes. Application of these models is made to the study of the proposed High Resolution Imaging Spectrometer (HRIS). The interrelationships among observational conditions, sensor effects, and processing choices are investigated with several interesting results.
Comparison of wavefront sensor models for simulation of adaptive optics.
Wu, Zhiwen; Enmark, Anita; Owner-Petersen, Mette; Andersen, Torben
2009-10-26
The new generation of extremely large telescopes will have adaptive optics (AO). Due to the complexity and cost of such systems, it is important to simulate their performance before construction. Most systems planned will have Shack-Hartmann wavefront sensors, and different mathematical models are available for simulating such sensors. The choice of wavefront sensor model strongly influences computation time and simulation accuracy. We have studied the influence of three wavefront sensor models on performance calculations for a generic AO system designed for K-band operation of a 42 m telescope: both the subaperture size and the actuator pitch were matched to a fixed value of r0 in the K band. The performance of this AO system has been investigated both for reduced wavelengths and for reduced r0 in the K band. We find that under certain conditions, such as investigating the limiting guide star magnitude at large Strehl ratios, a full model based on Fraunhofer propagation to the subimages is significantly more accurate; it does, however, require long computation times. The shortcomings of simpler models, based either on direct use of the average wavefront tilt over the subapertures for actuator control or on using the average tilt to move a precalculated point spread function in the subimages, are most pronounced in studies of system limitations to operating-parameter variations. In the long run, efficient parallelization techniques may be developed to overcome the problem.
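The simplest of the compared models, direct use of the average wavefront tilt over each subaperture, is easy to sketch (geometry and grid sizes here are arbitrary, and diffraction in the subimages is ignored by construction):

```python
# Geometric-optics wavefront sensor model: a subaperture reports the
# average wavefront gradient over its area (no Fraunhofer propagation).
def avg_tilt(wavefront, n):
    # wavefront: callable W(x, y) on the unit-square subaperture,
    # sampled on an n x n grid; gradients by central differences.
    h, eps = 1.0 / n, 1e-6
    gx = gy = 0.0
    for ii in range(n):
        for jj in range(n):
            x, y = (ii + 0.5) * h, (jj + 0.5) * h
            gx += (wavefront(x + eps, y) - wavefront(x - eps, y)) / (2 * eps)
            gy += (wavefront(x, y + eps) - wavefront(x, y - eps)) / (2 * eps)
    return gx / n ** 2, gy / n ** 2

# For a pure tilt W = a*x + b*y this model is exact:
a, b = 0.3, -0.7
gx, gy = avg_tilt(lambda x, y: a * x + b * y, n=8)
```

The model's accuracy degrades precisely where the abstract says the simple models fall short: when the subimage structure (speckle, truncation, low guide-star flux) no longer reduces to a clean centroid shift.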
Rethinking the logistic approach for population dynamics of mutualistic interactions.
García-Algarra, Javier; Galeano, Javier; Pastor, Juan Manuel; Iriondo, José María; Ramasco, José J
2014-12-21
Mutualistic communities have an internal structure that makes them resilient to external perturbations. Recent research has focused on their stability and on the topology of the relations between the different organisms to explain the system's robustness. Much less attention has been devoted to analyzing the system dynamics. The main population models in use are modifications of the r-K formulation of the logistic equation, with additional terms to account for the benefits produced by the interspecific interactions. These models have shortcomings, as the r-K formulation diverges under some conditions. In this work, we introduce a model for population dynamics under mutualism that preserves the original logistic formulation. It is mathematically simpler than the widely used type II models, although it shows similar complexity in terms of fixed points and stability of the dynamics. We perform an analytical stability analysis and numerical simulations to study the model behavior in general interaction scenarios, including tests of the resilience of its dynamics under external perturbations. Despite its simplicity, our results indicate that the model dynamics show an important richness that can be used to gain further insight into the dynamics of mutualistic communities. Copyright © 2014 Elsevier Ltd. All rights reserved.
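One way to preserve the logistic form, in the spirit of the abstract (the coefficients below are illustrative, not the paper's): let the interspecific benefit raise the effective growth rate and add a matching term to the self-limitation, so strong mutualism saturates instead of diverging as the bare r-K formulation can.

```python
# Logistic-form mutualism for two symmetric species; coefficients are
# illustrative.  Per-capita growth rate (r + b*N_partner) is paired with
# self-limitation (a + c*b*N_partner), keeping carrying capacity finite.
r, a, b, c = -0.1, 1e-4, 1e-3, 1e-4   # obligate mutualists: r < 0

def step(n1, n2, dt=0.01):
    g1 = (r + b * n2) * n1 - (a + c * b * n2) * n1 * n1
    g2 = (r + b * n1) * n2 - (a + c * b * n1) * n2 * n2
    return n1 + dt * g1, n2 + dt * g2

# Start above the low-density extinction threshold that r < 0 creates.
# The symmetric equilibria solve 1e-7*N^2 - 9e-4*N + 0.1 = 0, giving
# N ~ 112.5 (unstable threshold) and N ~ 8887.5 (stable).
n1 = n2 = 300.0
for _ in range(200000):                # forward Euler to t = 2000
    n1, n2 = step(n1, n2)
# Populations settle on the finite upper equilibrium instead of diverging.
```

With this structure the benefit term can be made arbitrarily strong without the trajectories blowing up, which is the qualitative point of keeping the logistic form.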
Hydrograph separation for karst watersheds using a two-domain rainfall-discharge model
Long, Andrew J.
2009-01-01
Highly parameterized, physically based models may be no more effective at simulating the relations between rainfall and outflow from karst watersheds than are simpler models. Here an antecedent rainfall and convolution model was used to separate a karst watershed hydrograph into two outflow components: one originating from focused recharge in conduits and one originating from slow flow in a porous annex system. In convolution, parameters of a complex system are lumped together in the impulse-response function (IRF), which describes the response of the system to an impulse of effective precipitation. Two parametric functions in superposition approximate the two-domain IRF. The outflow hydrograph can be separated into flow components by forward modeling with isolated IRF components, which provides an objective criterion for separation. As an example, the model was applied to a karst watershed in the Madison aquifer, South Dakota, USA. Simulation results indicate that this watershed is characterized by a flashy response to storms, with a peak response time of 1 day, but that 89% of the flow results from the slow-flow domain, with a peak response time of more than 1 year. This long response time may be the result of perched areas that store water above the main water table. Simulation results indicated that some aspects of the system are stationary but that nonlinearities also exist.
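The separation logic can be sketched with two exponential kernels standing in for the paper's parametric IRFs (the exact functional forms and fitted parameters are not reproduced here; the time constants echo the reported 1-day and >1-year peaks, and the weights the 89% slow-flow share):

```python
import math

def kernel(tau, length, weight):
    # Discrete exponential impulse-response function, total mass ~ weight.
    return [weight * (1.0 - math.exp(-1.0 / tau)) * math.exp(-t / tau)
            for t in range(length)]

def convolve(rain, h):
    # Discrete causal convolution of effective rainfall with an IRF.
    out = [0.0] * len(rain)
    for t in range(len(rain)):
        for k in range(t + 1):
            out[t] += h[k] * rain[t - k]
    return out

rain = [0.0] * 400
rain[5], rain[50] = 10.0, 20.0          # two storm pulses, arbitrary units

h_fast = kernel(tau=1.0, length=400, weight=0.11)    # conduit (quick) flow
h_slow = kernel(tau=365.0, length=400, weight=0.89)  # annex (slow) flow

q_fast = convolve(rain, h_fast)
q_slow = convolve(rain, h_slow)
q_total = [u + v for u, v in zip(q_fast, q_slow)]
# Linearity is what makes the separation objective: convolving with each
# isolated kernel yields one component, and the components sum to the total.
```

Forward modeling with each isolated kernel is exactly the separation criterion the abstract describes; superposition guarantees the two pieces reassemble into the full hydrograph.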
Experimental study and modelling of selenite sorption onto illite and smectite clays.
Missana, T; Alonso, U; García-Gutiérrez, M
2009-06-15
This study provides a large set of experimental selenite sorption data for pure smectite and illite. Similar sorption behavior existed in both clays: linear within a large range of the Se concentrations investigated (from 1×10^-10 to 1×10^-3 M); and independent of ionic strength. Selenite sorption was also analysed in the illite/smectite system with the clays mixed in two different proportions, as follows: (a) 30% illite-70% smectite and (b) 43% illite-57% smectite. The objective of the study was to provide the simplest model possible to fit the experimental data, a model also capable of describing selenite sorption in binary illite/smectite clay systems. Selenite sorption data, separately obtained in the single mineral systems, were modeled using both a one- and a two-site non-electrostatic model that took into account the formation of two complexes at the edge sites of the clay. Although the use of a two-site model slightly improved the fit of data at a pH below 4, the simpler one-site model reproduced satisfactorily all the sorption data from pH 3 to 8. The complexation constants obtained by fitting sorption data of the individual minerals were incorporated into a model to predict the adsorption of selenium in the illite/smectite mixtures; the model's predictions were consistent with the experimental adsorption data.
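When sorption is linear, the binary-mixture prediction reduces to component additivity: the mixture distribution coefficient is the mass-fraction-weighted sum of the single-mineral values. The Kd numbers below are invented; only the arithmetic is the point.

```python
# Mass-weighted component additivity for a linear (Kd) sorption model.
kd_illite, kd_smectite = 4.2, 1.1      # mL/g, hypothetical single-clay values

def kd_mixture(f_illite):
    # Kd of a binary clay mixture from its illite mass fraction.
    return f_illite * kd_illite + (1.0 - f_illite) * kd_smectite

kd_30_70 = kd_mixture(0.30)            # 30% illite / 70% smectite
kd_43_57 = kd_mixture(0.43)            # 43% illite / 57% smectite

def sorbed_fraction(kd, solid_g, volume_mL):
    # Batch experiment: fraction of Se sorbed at solid/liquid ratio rho.
    rho = solid_g / volume_mL
    return kd * rho / (1.0 + kd * rho)
```

The paper works at the level of surface-complexation constants rather than raw Kd values, but the additivity step from single minerals to the mixture has this same weighted-sum structure.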
Adaptation of a general circulation model to ocean dynamics
NASA Technical Reports Server (NTRS)
Turner, R. E.; Rees, T. H.; Woodbury, G. E.
1976-01-01
A primitive-variable general circulation model of the ocean was formulated in which fast external gravity waves are suppressed with rigid-lid surface constraint pressures, which also provide a means for simulating the effects of large-scale free-surface topography. The surface pressure method is simpler to apply than the conventional stream function models, and the resulting model can be applied to both global ocean and limited region situations. Strengths and weaknesses of the model are also presented.
NASA Astrophysics Data System (ADS)
Ram Prabhakar, J.; Ragavan, K.
2013-07-01
This article proposes a new power-management-based current control strategy for an integrated wind-solar-hydro system equipped with a battery storage mechanism. In this control technique, the load current is estimated indirectly through an energy balance model, DC-link voltage control, and droop control. The system features a simpler energy management strategy and requires only a few power electronic converters, thereby minimizing system cost. The generation-demand (G-D) management diagram is formulated based on stochastic weather conditions and demand, which helps narrow the gap between the two. The features of the management strategy deploying the energy balance model include (1) regulating the DC-link voltage within specified tolerances, (2) isolated operation without relying on an external electric power transmission network, (3) indirect current control of the hydro-turbine-driven induction generator, and (4) seamless transition between grid-connected and off-grid operation modes. Furthermore, structuring the hybrid system with an appropriate selection of control variables enables power sharing among the energy conversion systems and the battery storage mechanism. By addressing these intricacies, it is viable to regulate the frequency and voltage of the remote network at the load end. The performance of the proposed composite scheme is demonstrated through time-domain simulation in the MATLAB/Simulink environment.
Adjoint-based optimization of PDEs in moving domains
NASA Astrophysics Data System (ADS)
Protas, Bartosz; Liao, Wenyuan
2008-02-01
In this investigation we address the problem of adjoint-based optimization of PDE systems in moving domains. As an example we consider the one-dimensional heat equation with prescribed boundary temperatures and heat fluxes. We discuss two methods of deriving an adjoint system necessary to obtain a gradient of a cost functional. In the first approach we derive the adjoint system after mapping the problem to a fixed domain, whereas in the second approach we derive the adjoint directly in the moving domain by employing methods of the noncylindrical calculus. We show that the operations of transforming the system from a variable to a fixed domain and deriving the adjoint do not commute and that, while the gradient information contained in both systems is the same, the second approach results in an adjoint problem with a simpler structure which is therefore easier to implement numerically. This approach is then used to solve a moving boundary optimization problem for our model system.
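The adjoint-gradient machinery the abstract refers to can be demonstrated on a fixed-domain, discretized analogue. This is a minimal sketch under stated assumptions: explicit Euler in time, homogeneous Dirichlet boundaries, and a terminal-mismatch cost with the initial condition as control; the moving-boundary aspect of the paper is deliberately omitted.

```python
import numpy as np

nx, nt = 20, 50
dx, dt, kappa = 1.0 / (nx + 1), 1e-4, 1.0

# Dirichlet Laplacian (tridiagonal) and the explicit-Euler propagator A
L = (np.diag(-2.0 * np.ones(nx)) + np.diag(np.ones(nx - 1), 1)
     + np.diag(np.ones(nx - 1), -1)) / dx**2
A = np.eye(nx) + dt * kappa * L  # stable: dt*kappa/dx^2 ~ 0.044

rng = np.random.default_rng(0)
u0 = rng.standard_normal(nx)      # control: initial temperature
target = np.zeros(nx)

def forward(u0):
    u = u0.copy()
    for _ in range(nt):
        u = A @ u
    return u

def cost(u0):
    r = forward(u0) - target
    return 0.5 * r @ r

# Adjoint sweep: propagate the terminal residual backwards with A^T,
# giving the full gradient dJ/du0 in one extra solve.
lam = forward(u0) - target
for _ in range(nt):
    lam = A.T @ lam
grad_adjoint = lam

# Finite-difference check on one component of the gradient
eps, e0 = 1e-6, np.eye(nx)[0]
fd = (cost(u0 + eps * e0) - cost(u0 - eps * e0)) / (2 * eps)
print(abs(fd - grad_adjoint[0]))
```

Because the cost is quadratic in the control, the central difference matches the adjoint gradient to roundoff. In the moving-domain setting studied in the paper, the adjoint system additionally has to account for the domain motion, which is exactly where the two derivation routes discussed above differ.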
NASA Technical Reports Server (NTRS)
Powell, John D.; Owens, David; Menzies, Tim
2004-01-01
Testing large systems, such as the one on board a NASA robotic remote explorer (RRE) vehicle, is fundamentally a search problem: exhaustively exploring the global state space representing all possible behaviors remains intractable, even after many decades of work. Randomized algorithms have been known to outperform their deterministic counterparts for search problems representing a wide range of applications. In the case study presented here, the LURCH randomized algorithm proved adequate to the task of testing a NASA RRE vehicle. LURCH found all the errors found by an earlier analysis with a more complete method (SPIN). Our empirical results show that LURCH can scale to much larger models than standard model checkers like SMV and SPIN. Further, the LURCH analysis was simpler than the SPIN analysis. The simplicity and scalability of LURCH are two compelling reasons for experimenting further with this tool.
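The flavor of randomized state-space search can be shown on a toy transition system. Everything below is invented for illustration; it is not the LURCH algorithm or the RRE model, just a random walk that stumbles onto a designated error state without enumerating the space.

```python
# Toy randomized search: states 0..999 with two successors each, and an
# "error" state hidden at 737. A random walk usually finds it long before
# an exhaustive enumeration of all paths would finish.
import random

def successors(state):
    return [(state * 3 + 1) % 1000, (state + 17) % 1000]

ERROR = 737

def random_search(start, max_steps, seed):
    random.seed(seed)
    state, visited = start, set()
    for _ in range(max_steps):
        visited.add(state)
        if state == ERROR:
            return True, len(visited)
        state = random.choice(successors(state))
    return False, len(visited)

found, explored = random_search(start=0, max_steps=100000, seed=1)
print(found, explored)
```

The walk gives no completeness guarantee, which is the trade-off the abstract describes: in exchange, it scales to spaces far beyond what exhaustive checkers can enumerate.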
The matrix effect in secondary ion mass spectrometry
NASA Astrophysics Data System (ADS)
Seah, M. P.; Shard, A. G.
2018-05-01
Matrix effects in the secondary ion mass spectrometry (SIMS) of selected elemental systems have been analyzed to investigate the applicability of a mathematical description of the matrix effect, called here the charge transfer (CT) model. This model was originally derived for proton exchange and organic positive secondary ions, to characterise the enhancement or suppression of intensities in organic binary systems. The systems considered in this paper specifically exclude protons, which enables an assessment of whether the model also applies for electrons. The model's present importance lies in organic systems, but here we analyse simpler inorganic systems. Matrix effects in elemental systems cannot involve proton transfer if no protons are present but may be caused by electron transfer, so electron transfer may also be involved in the matrix effects for organic systems. There are general similarities in both the magnitudes of the ion intensities and the matrix effects for positive and negative secondary ions in both types of system, so the CT model may be more widely applicable. Published SIMS analyses of binary elemental mixtures are analyzed. The data of Kim et al. for the Pt/Co system characterise such a system with good precision, and give evidence for the applicability of the CT model, with electron, rather than proton, transfer as the matrix enhancing and suppressing mechanism. The published data of Prudon et al. for the important Si/Ge system provide further evidence of the effects for both positive and negative secondary ions and allow rudimentary rules to be developed for the enhancing and suppressing species.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saurav, Kumar; Chandan, Vikas
District-heating-and-cooling (DHC) systems are a proven energy solution that has been deployed for many years in a growing number of urban areas worldwide. They comprise a variety of technologies that seek to develop synergies between the production and supply of heat, cooling, domestic hot water and electricity. Although the benefits of DHC systems are significant and have been widely acclaimed, the full potential of modern DHC systems remains largely untapped. There are several opportunities for the development of energy-efficient DHC systems, which will enable the effective exploitation of alternative renewable resources, waste-heat recovery, etc., in order to increase the overall efficiency and facilitate the transition towards the next generation of DHC systems. This motivated the need for modelling these complex systems. Large-scale modelling of DHC networks is challenging, as they have several components, such as buildings, pipes, valves and the heating source, interacting with each other. In this paper, we focus on building modelling. In particular, we present a gray-box methodology for thermal modelling of buildings. Gray-box modelling is a hybrid of data-driven and physics-based models in which the coefficients of the equations from physics-based models are learned from data. This approach allows us to capture the dynamics of the buildings more effectively than a pure data-driven approach. Additionally, it results in simpler models than pure physics-based models. We first develop the individual components of the building, such as temperature evolution and the flow controller. These individual models are then integrated into the complete gray-box model for the building. The model is validated using data collected from one of the buildings at Luleå, a city on the coast of northern Sweden.
Biological soil crusts (biocrusts) as a model system in community, landscape and ecosystem ecology
Bowker, Matthew A.; Maestre, Fernando T.; Eldridge, David; Belnap, Jayne; Castillo-Monroy, Andrea; Escolar, Cristina; Soliveres, Santiago
2014-01-01
Model systems have had a profound influence on the development of ecological theory and general principles. Compared to alternatives, the most effective models share some combination of the following characteristics: simpler, smaller, faster, general, idiosyncratic or manipulable. We argue that biological soil crusts (biocrusts) have unique combinations of these features that should be more widely exploited in community, landscape and ecosystem ecology. In community ecology, biocrusts are elucidating the importance of biodiversity and spatial pattern for maintaining ecosystem multifunctionality due to their manipulability in experiments. Due to idiosyncrasies in their modes of facilitation and competition, biocrusts have led to new models on the interplay between environmental stress and biotic interactions and on the maintenance of biodiversity by competitive processes. Biocrusts are perhaps one of the best examples of micro-landscapes—real landscapes that are small in size. Although they exhibit varying patch heterogeneity, aggregation, connectivity and fragmentation, like macro-landscapes, they are also compatible with well-replicated experiments (unlike macro-landscapes). In ecosystem ecology, a number of studies are imposing small-scale, low cost manipulations of global change or state factors in biocrust micro-landscapes. The versatility of biocrusts to inform such disparate lines of inquiry suggests that they are an especially useful model system that can enable researchers to see ecological principles more clearly and quickly.
Simplified subsurface modelling: data assimilation and violated model assumptions
NASA Astrophysics Data System (ADS)
Erdal, Daniel; Lange, Natascha; Neuweiler, Insa
2017-04-01
Integrated models are gaining more and more attention in hydrological modelling as they can better represent the interaction between different compartments. Naturally, these models come along with larger numbers of unknowns and requirements on computational resources compared to stand-alone models. If large model domains are to be represented, e.g. at catchment scale, the resolution of the numerical grid needs to be reduced or the model itself needs to be simplified. Both approaches lead to a reduced ability to reproduce the processes present. This lack of model accuracy may be compensated by using data assimilation methods. In these methods observations are used to update the model states, and optionally model parameters as well, in order to reduce the model error induced by the imposed simplifications. What is unclear is whether these methods combined with strongly simplified models result in completely data-driven models or if they can even be used to make adequate predictions of the model state for times when no observations are available. In the current work we consider the combined groundwater and unsaturated zone, which can be modelled in a physically consistent way using 3D models solving the Richards equation. For use in simple predictions, however, simpler approaches may be considered. The question investigated here is whether a simpler model, in which the groundwater is modelled as a horizontal 2D model and the unsaturated zone as a few sparse 1D columns, can be used within an Ensemble Kalman filter to give predictions of groundwater levels and unsaturated fluxes. This is tested under conditions where the feedback between the two model compartments is large (e.g. a shallow groundwater table) and the simplification assumptions are clearly violated.
Such a case may be a steep hill-slope or pumping wells, creating lateral fluxes in the unsaturated zone, or strong heterogeneous structures creating unaccounted flows in both the saturated and unsaturated compartments. Under such circumstances, direct modelling using a simplified model will not provide good results. However, a more data driven (e.g. grey box) approach, driven by the filter, may still provide an improved understanding of the system. Comparisons between full 3D simulations and simplified filter driven models will be shown and the resulting benefits and drawbacks will be discussed.
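The mechanism by which the filter compensates for model simplification error can be illustrated with a scalar ensemble Kalman filter update using perturbed observations. This is a minimal sketch with invented numbers; the actual application updates full state vectors of heads and fluxes.

```python
# Scalar EnKF update of a groundwater head: the ensemble spread sets the
# Kalman gain, and the observation pulls the (biased) forecast toward it.
import numpy as np

rng = np.random.default_rng(42)
ensemble = rng.normal(loc=10.0, scale=1.0, size=500)  # forecast heads [m]
obs, obs_var = 11.0, 0.25                             # observed head [m]

var_f = ensemble.var(ddof=1)
gain = var_f / (var_f + obs_var)                      # Kalman gain

# Perturbed-observation update (classic stochastic EnKF)
perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=ensemble.size)
analysis = ensemble + gain * (perturbed - ensemble)

print(ensemble.mean(), analysis.mean(), gain)
```

The analysis mean lands between the forecast mean and the observation, weighted by their variances, and the analysis spread shrinks; this is the state correction that stands in for the physics lost to simplification.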
Structure and Optical Bandgap Relationship of π-Conjugated Systems
Botelho, André Leitão; Shin, Yongwoo; Liu, Jiakai; Lin, Xi
2014-01-01
In bulk heterojunction photovoltaic systems, both the open-circuit voltage and the short-circuit current, and hence the power conversion efficiency, depend on the optical bandgap of the electron-donor material. While first-principles methods are computationally intensive, simpler model Hamiltonian approaches typically suffer from one or more flaws: inability to optimize the geometries for their own input; absence of general, transferable parameters; and poor performance for non-planar systems. We introduce a set of new and revised parameters for the adapted Su-Schrieffer-Heeger (aSSH) Hamiltonian, which is capable of optimizing geometries, along with rules for applying them to any π-conjugated system containing C, N, O, or S, including non-planar systems. The predicted optical bandgaps show excellent agreement with UV-vis spectroscopy data points from the literature, with a coefficient of determination, a mean error of −0.05 eV, and a mean absolute deviation of 0.16 eV. We use the model to gain insights from PEDOT, fused thiophene polymers, poly-isothianaphthene, copolymers, and pentacene as sources of design rules in the search for low-bandgap materials. Using the model as an in-silico design tool, a copolymer of benzodithiophenes and a small-molecule derivative of pentacene are proposed as optimal donor materials for organic photovoltaics. PMID:24497944
NASA Astrophysics Data System (ADS)
Tsionas, Mike G.; Michaelides, Panayotis G.
2017-09-01
We use a novel Bayesian inference procedure for the Lyapunov exponent in the dynamical system of returns and their unobserved volatility. In this dynamical system, computation of the largest Lyapunov exponent by traditional methods is impossible, as the stochastic nature has to be taken into account explicitly due to the unobserved volatility. We apply the new techniques to daily stock return data for a group of six countries, namely the USA, UK, Switzerland, Netherlands, Germany and France, from 2003 to 2014, by means of Sequential Monte Carlo for Bayesian inference. The evidence indicates that there is indeed noisy chaos both before and after the recent financial crisis. However, when a much simpler model is examined, in which the interaction between returns and volatility is not considered jointly, the hypothesis of chaotic dynamics does not receive much support from the data ("neglected chaos").
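To make the quantity concrete, here is the classical (non-Bayesian) estimator of the largest Lyapunov exponent on a deterministic toy system, the logistic map. This is purely illustrative: the paper's contribution is performing this inference when volatility is unobserved and the dynamics are stochastic, which this sketch does not attempt.

```python
# Largest Lyapunov exponent of the logistic map x -> r*x*(1-x) by averaging
# log|f'(x)| = log|r*(1 - 2x)| along a long orbit. For r = 4 the exact
# value is ln 2 ~ 0.693 (fully developed chaos).
import math

def lyapunov_logistic(r, x0=0.3, n=100000, burn=1000):
    x, acc = x0, 0.0
    for i in range(n + burn):
        x = r * x * (1.0 - x)
        if i >= burn:
            acc += math.log(abs(r * (1.0 - 2.0 * x)))
    return acc / n

lam = lyapunov_logistic(4.0)
print(lam)  # close to ln 2
```

A positive value signals chaos; the "neglected chaos" finding above is precisely that this sign can flip when the return-volatility interaction is modelled too simply.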
Slash pine plantation site index curves for the West Gulf
Stanley J. Zarnoch; D.P. Feduccia
1984-01-01
New slash pine (Pinus elliottii var. elliottii Engelm) plantation site index curves have been developed for the West Gulf. The guide curve is mathematically simpler than other available models, tracks the data well, and is more biologically reasonable outside the range of data.
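The guide-curve construction behind such site index systems can be sketched generically. The Chapman-Richards form and all coefficients below are invented for illustration; they are not the authors' fitted West Gulf equation.

```python
# Anamorphic site index sketch: height follows a guide curve, and the site
# index rescales it so every curve passes through the observed height at
# the index (base) age. Coefficients b1..b3 are illustrative only.
import math

def guide_height(age, b1=80.0, b2=0.06, b3=1.3):
    """Guide-curve height (ft) at a given age (yr), Chapman-Richards form."""
    return b1 * (1.0 - math.exp(-b2 * age)) ** b3

def site_index(height, age, base_age=25.0):
    """Anamorphic site index: scale the guide curve through the observation."""
    return height * guide_height(base_age) / guide_height(age)

# A 40 ft stand at age 15, projected to a base age of 25 years
si = site_index(height=40.0, age=15.0)
print(si)
```

By construction, a stand measured exactly at the base age has a site index equal to its height, and younger stands project upward along the guide curve, which is the "biologically reasonable" extrapolation behavior the abstract emphasizes.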
Optimal synchronization of Kuramoto oscillators: A dimensional reduction approach
NASA Astrophysics Data System (ADS)
Pinto, Rafael S.; Saa, Alberto
2015-12-01
A recently proposed dimensional reduction approach for studying synchronization in the Kuramoto model is employed to build optimal network topologies that favor or suppress synchronization. The approach is based on the introduction of a collective coordinate for the time evolution of the phase-locked oscillators, in the spirit of the Ott-Antonsen ansatz. We show that the optimal synchronization of a Kuramoto network demands the maximization of the quadratic form ω^T L ω, where ω stands for the vector of the natural frequencies of the oscillators and L for the network Laplacian matrix. Many recently obtained numerical results can be reobtained analytically, and in a simpler way, from our maximization condition. A computationally efficient hill-climb rewiring algorithm is proposed to generate networks with optimal synchronization properties. Our approach can be easily adapted to the case of Kuramoto models with both attractive and repulsive interactions, and again many recent numerical results can be rederived in a simpler and clearer analytical manner.
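The objective is easy to compute directly: for an unweighted graph, ω^T L ω equals the sum of (ω_i − ω_j)² over edges, which is why connecting oscillators with dissimilar natural frequencies favors synchronization. The rewiring step below is a bare-bones greedy sketch of the hill-climb idea; it ignores constraints (such as keeping the graph connected) that a practical implementation would enforce.

```python
# Maximizing w^T L w by greedy edge rewiring (illustrative sketch).
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

def objective(omega, edges):
    return omega @ laplacian(len(omega), edges) @ omega

def hill_climb(omega, edges, steps, seed):
    """Replace a random edge with a random new one if w^T L w increases."""
    rng = np.random.default_rng(seed)
    edges = list(edges)
    n, best = len(omega), objective(omega, edges)
    for _ in range(steps):
        k = rng.integers(len(edges))
        i, j = rng.integers(n), rng.integers(n)
        if i == j or (i, j) in edges or (j, i) in edges:
            continue
        trial = edges[:k] + edges[k + 1:] + [(i, j)]
        val = objective(omega, trial)
        if val > best:
            edges, best = trial, val
    return edges, best

omega = np.array([-1.0, -0.5, 0.5, 1.0])
edges0 = [(0, 1), (1, 2), (2, 3)]  # a path graph: w^T L w = 1.5
edges1, best = hill_climb(omega, edges0, steps=200, seed=0)
print(objective(omega, edges0), best)
```

On this four-node example the climb quickly rewires the path toward edges joining the most dissimilar frequencies, monotonically increasing the objective.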
NASA Astrophysics Data System (ADS)
Nguyen, Tien Long; Sansour, Carlo; Hjiaj, Mohammed
2017-05-01
In this paper, an energy-momentum method for geometrically exact Timoshenko-type beams is proposed. The classical time integration schemes in dynamics are known to exhibit instability in the non-linear regime. The so-called Timoshenko-type beam, with its use of a rotational degree of freedom, leads to simpler strain relations and simpler expressions for the inertial terms than the well-known Bernoulli-type model. The treatment of the Bernoulli model has been recently addressed by the authors. In the present work, we extend our approach of using the strain rates to define the strain fields to in-plane geometrically exact Timoshenko-type beams. The large rotational degrees of freedom are computed exactly. The well-known enhanced strain method is used to avoid locking phenomena. Conservation of energy, momentum and angular momentum is proved formally and numerically. The excellent performance of the formulation is demonstrated through a range of examples.
Kreienkamp, Amelia B.; Liu, Lucy Y.; Minkara, Mona S.; Knepley, Matthew G.; Bardhan, Jaydeep P.; Radhakrishnan, Mala L.
2013-01-01
We analyze and suggest improvements to a recently developed approximate continuum-electrostatic model for proteins. The model, called BIBEE/I (boundary-integral based electrostatics estimation with interpolation), was able to estimate electrostatic solvation free energies to within a mean unsigned error of 4% on a test set of more than 600 proteins—a significant improvement over previous BIBEE models. In this work, we tested the BIBEE/I model for its capability to predict residue-by-residue interactions in protein–protein binding, using the widely studied model system of trypsin and bovine pancreatic trypsin inhibitor (BPTI). Finding that the BIBEE/I model performs surprisingly less well in this task than simpler BIBEE models, we seek to explain this behavior in terms of the models’ differing spectral approximations of the exact boundary-integral operator. Calculations of analytically solvable systems (spheres and tri-axial ellipsoids) suggest two possibilities for improvement. The first is a modified BIBEE/I approach that captures the asymptotic eigenvalue limit correctly, and the second involves the dipole and quadrupole modes for ellipsoidal approximations of protein geometries. Our analysis suggests that fast, rigorous approximate models derived from reduced-basis approximation of boundary-integral equations might reach unprecedented accuracy, if the dipole and quadrupole modes can be captured quickly for general shapes. PMID:24466561
NASA Technical Reports Server (NTRS)
Feher, Kamilo
1993-01-01
The performance and implementation complexity of coherent and of noncoherent QPSK and GMSK modulation/demodulation techniques in a complex mobile satellite systems environment, including large Doppler shift, delay spread, and low C/I, are compared. We demonstrate that for large f(sub d)T(sub b) products, where f(sub d) is the Doppler shift and T(sub b) is the bit duration, noncoherent (discriminator detector or differential demodulation) systems have a lower BER floor than their coherent counterparts. For significant delay spreads, e.g., tau(sub rms) greater than 0.4 T(sub b), and low C/I, coherent systems outperform noncoherent systems. However, the synchronization time of coherent systems is longer than that of noncoherent systems. Spectral efficiency, overall capacity, and related hardware complexity issues of these systems are also analyzed. We demonstrate that coherent systems have a simpler overall architecture (IF filter implementation cost versus carrier recovery) and are more robust in an RF frequency drift environment. Additionally, the prediction tools, computer simulations, and analyses of coherent systems are simpler. The threshold or capture effect in a low C/I interference environment is critical for noncoherent discriminator-based systems. We conclude with a comparison of hardware architectures of coherent and of noncoherent systems, including recent trends in commercial VLSI technology and direct baseband-to-RF transmit, RF-to-baseband (0-IF) receiver implementation strategies.
Min, Hua; Zheng, Ling; Perl, Yehoshua; Halper, Michael; De Coronado, Sherri; Ochs, Christopher
2017-05-18
Ontologies are knowledge structures that lend support to many health-information systems. A study is carried out to assess the quality of ontological concepts based on a measure of their complexity. The results show a relation between complexity of concepts and error rates of concepts. A measure of lateral complexity defined as the number of exhibited role types is used to distinguish between more complex and simpler concepts. Using a framework called an area taxonomy, a kind of abstraction network that summarizes the structural organization of an ontology, concepts are divided into two groups along these lines. Various concepts from each group are then subjected to a two-phase QA analysis to uncover and verify errors and inconsistencies in their modeling. A hierarchy of the National Cancer Institute thesaurus (NCIt) is used as our test-bed. A hypothesis pertaining to the expected error rates of the complex and simple concepts is tested. Our study was done on the NCIt's Biological Process hierarchy. Various errors, including missing roles, incorrect role targets, and incorrectly assigned roles, were discovered and verified in the two phases of our QA analysis. The overall findings confirmed our hypothesis by showing a statistically significant difference between the amounts of errors exhibited by more laterally complex concepts vis-à-vis simpler concepts. QA is an essential part of any ontology's maintenance regimen. In this paper, we reported on the results of a QA study targeting two groups of ontology concepts distinguished by their level of complexity, defined in terms of the number of exhibited role types. The study was carried out on a major component of an important ontology, the NCIt. The findings suggest that more complex concepts tend to have a higher error rate than simpler concepts. These findings can be utilized to guide ongoing efforts in ontology QA.
Secure Reliable Processing Systems
1984-02-21
be attainable in principle, the more difficult goal is to meet all of the above while still maintaining good performance within the framework of a well...managing the network, the user sees a conceptually simpler storage facility, composed merely of files, without machine boundaries, replicated copies
Generation of infectious recombinant Adeno-associated virus in Saccharomyces cerevisiae.
Barajas, Daniel; Aponte-Ubillus, Juan Jose; Akeefe, Hassibullah; Cinek, Tomas; Peltier, Joseph; Gold, Daniel
2017-01-01
The yeast Saccharomyces cerevisiae has been successfully employed to establish model systems for a number of viruses. Such model systems are powerful tools for studying virus biology, and in particular for the identification and characterization of host factors playing a role in the viral infection cycle. Adeno-associated viruses (AAV) are heavily studied due to their use as gene delivery vectors. AAV relies on other helper viruses for successful replication and on host factors for several aspects of the viral life cycle. However, the roles of host and helper viral factors are only partially known. Production of recombinant AAV (rAAV) vectors for gene delivery applications depends on knowledge of AAV biology, and the limited understanding of host and helper viral factors may be precluding efficient production, particularly in heterologous systems. Model systems in simpler eukaryotes like the yeast S. cerevisiae would be useful tools to identify and study the role of host factors in AAV biology. Here we show that expression of the AAV2 viral proteins VP1, VP2, VP3, AAP, Rep78 and Rep52, together with an ITR-flanked DNA, in yeast leads to capsid formation, DNA replication and encapsidation, resulting in the formation of infectious particles. Many of the AAV characteristics observed in yeast resemble those in other systems, making it a suitable model system. Future findings in the yeast system could be translatable to other AAV host systems and aid in the more efficient production of rAAV vectors.
Methods for improving simulations of biological systems: systemic computation and fractal proteins
Bentley, Peter J.
2009-01-01
Modelling and simulation are becoming essential for new fields such as synthetic biology. Perhaps the most important aspect of modelling is to follow a clear design methodology that will help to highlight unwanted deficiencies. The use of tools designed to aid the modelling process can be of benefit in many situations. In this paper, the modelling approach called systemic computation (SC) is introduced. SC is an interaction-based language, which enables individual-based expression and modelling of biological systems, and the interactions between them. SC permits a precise description of a hypothetical mechanism to be written using an intuitive graph-based or a calculus-based notation. The same description can then be directly run as a simulation, merging the hypothetical mechanism and the simulation into the same entity. However, even when using well-designed modelling tools to produce good models, the best model is not always the most accurate one. Frequently, computational constraints or lack of data make it infeasible to model an aspect of biology. Simplification may provide one way forward, but with inevitable consequences of decreased accuracy. Instead of attempting to replace an element with a simpler approximation, it is sometimes possible to substitute the element with a different but functionally similar component. In the second part of this paper, this modelling approach is described and its advantages are summarized using an exemplar: the fractal protein model. Finally, the paper ends with a discussion of good biological modelling practice by presenting lessons learned from the use of SC and the fractal protein model. PMID:19324681
NASA Astrophysics Data System (ADS)
Bonakdari, Hossein; Zaji, Amir Hossein
2018-03-01
In many hydraulic structures, side weirs play a critical role. Accurately predicting the discharge coefficient is one of the most important stages in the side weir design process. In the present paper, a new, highly efficient side weir is investigated. To simulate the discharge coefficient of these side weirs, three novel soft computing methods are used. The process involves modeling the discharge coefficient with the hybrid Adaptive Neuro-Fuzzy Inference System (ANFIS) and three optimization algorithms, namely Differential Evolution (ANFIS-DE), Genetic Algorithm (ANFIS-GA) and Particle Swarm Optimization (ANFIS-PSO). In addition, sensitivity analysis is performed to find the most efficient input variables for modeling the discharge coefficient of these types of side weirs. According to the results, the ANFIS method has higher performance when using simpler input variables. In addition, ANFIS-DE, with an RMSE of 0.077, outperforms the ANFIS-GA and ANFIS-PSO methods, with RMSEs of 0.079 and 0.096, respectively.
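The model ranking above rests on the RMSE criterion. The sketch below shows only that comparison step on made-up predictions, not ANFIS itself or its optimizers; the observed values and model outputs are invented.

```python
# RMSE computation and model ranking (illustrative data only).
import math

def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

observed = [0.52, 0.55, 0.58, 0.61]          # hypothetical discharge coefficients
models = {
    "ANFIS-DE":  [0.53, 0.54, 0.59, 0.60],
    "ANFIS-GA":  [0.50, 0.57, 0.55, 0.63],
    "ANFIS-PSO": [0.48, 0.59, 0.54, 0.65],
}
scores = {name: rmse(observed, pred) for name, pred in models.items()}
best = min(scores, key=scores.get)
print(best, scores[best])
```

With these invented numbers the ranking happens to mirror the paper's (DE best, then GA, then PSO), but the point is only the mechanics of the criterion.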
Influence of rubbing on rotor dynamics, part 1
NASA Technical Reports Server (NTRS)
Muszynska, Agnes; Bently, Donald E.; Franklin, Wesley D.; Hayashida, Robert D.; Kingsley, Lori M.; Curry, Arthur E.
1989-01-01
The results of analytical and experimental research on rotor-to-stationary element rubbing in rotating machines are presented. A characterization of the physical phenomena associated with rubbing, as well as a literature survey on the subject of rub, is given. The experimental results were obtained from two rubbing rotor rigs: one which dynamically simulates the space shuttle main engine high pressure fuel turbopump (HPFTP), and a second, much simpler, two-mode rotor rig designed for more generic studies of rotor-to-stator rubbing. Two areas were studied: generic rotor-to-stator rub-related dynamic phenomena affecting rotating machine behavior, and applications to the space shuttle HPFTP. An outline of the application of dynamic stiffness methodology to the identification of rotor/bearing system modal parameters is given. The mathematical model of the rotor/bearing/seal system under rub conditions is given. A computer program was developed to calculate rotor responses. Comparison of the computed results with experimental results demonstrates the adequacy of the model.
NASA-IGES Translator and Viewer
NASA Technical Reports Server (NTRS)
Chou, Jin J.; Logan, Michael A.
1995-01-01
NASA-IGES Translator (NIGEStranslator) is a batch program that translates a general IGES (Initial Graphics Exchange Specification) file to a NASA-IGES-Nurbs-Only (NINO) file. IGES is the most popular geometry exchange standard among Computer Aided Geometric Design (CAD) systems. NINO format is a subset of IGES, implementing the simple and yet the most popular NURBS (Non-Uniform Rational B-Splines) representation. NIGEStranslator converts a complex IGES file to the simpler NINO file to simplify the tasks of CFD grid generation for models in CAD format. The NASA-IGES Viewer (NIGESview) is an Open-Inventor-based, highly interactive viewer/editor for NINO files. Geometry in the IGES files can be viewed, copied, transformed, deleted, and inquired. Users can use NIGEStranslator to translate IGES files from CAD systems to NINO files. The geometry then can be examined with NIGESview. Extraneous geometries can be interactively removed, and the cleaned model can be written to an IGES file, ready to be used in grid generation.
Bayesian classification theory
NASA Technical Reports Server (NTRS)
Hanson, Robin; Stutz, John; Cheeseman, Peter
1991-01-01
The task of inferring a set of classes and class descriptions most likely to explain a given data set can be placed on a firm theoretical foundation using Bayesian statistics. Within this framework and using various mathematical and algorithmic approximations, the AutoClass system searches for the most probable classifications, automatically choosing the number of classes and complexity of class descriptions. A simpler version of AutoClass has been applied to many large real data sets, has discovered new independently-verified phenomena, and has been released as a robust software package. Recent extensions allow attributes to be selectively correlated within particular classes, and allow classes to inherit or share model parameters through a class hierarchy. We summarize the mathematical foundations of AutoClass.
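The Bayesian assignment step at the core of such classification systems can be sketched in one dimension. This is a minimal illustration of posterior class membership under invented Gaussian classes, not the AutoClass search over numbers of classes or its approximations.

```python
# Posterior class probabilities for one data point under a two-class
# Gaussian mixture (priors, means and widths are invented).
import math

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior(x, classes):
    """classes: list of (prior, mu, sigma); returns normalized posteriors."""
    weights = [p * gaussian(x, mu, s) for p, mu, s in classes]
    total = sum(weights)
    return [w / total for w in weights]

classes = [(0.5, 0.0, 1.0), (0.5, 4.0, 1.0)]
post = posterior(2.5, classes)
print(post)
```

A full system iterates between such soft assignments and re-estimating the class parameters, and scores candidate class counts by their posterior probability rather than fixing them in advance.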
Theoretical study of a molecular turbine.
Perez-Carrasco, R; Sancho, J M
2013-10-01
We present an analytic and stochastic simulation study of a molecular engine working as a turbine driven by a flux of particles. We focus on the physical observables of velocity, flux, power, and efficiency. The control parameters are the external conservative force and the particle densities. We revise a simpler previous study by using a more realistic model containing multiple equidistant vanes, complemented by stochastic simulations of the particles and the turbine. Here we show that the effects of thermal fluctuations on the flux and the efficiency of these nanometric devices are relevant at the working scale of the system. The stochastic simulations of the Brownian motion of the particles and turbine support the simplified analytical calculations performed.
Thermal and structural analysis of the GOES scan mirror's on orbit performance
NASA Technical Reports Server (NTRS)
Zurmehly, G. E.; Hookman, R. A.
1991-01-01
The on-orbit performance of the GOES satellite's scan mirror has been predicted by means of thermal, structural, and optical models. A simpler-than-conventional thermal model was used to reduce the time required to obtain orbital predictions, and the structural model was used to predict on-earth gravity sag and on-orbit distortions. The transfer of data from the thermal model to the structural model was automated for a given set of thermal nodes and structural grids.
Structure Computation of Quiet Spike[Trademark] Flight-Test Data During Envelope Expansion
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.
2008-01-01
System identification or mathematical modeling is used in the aerospace community for development of simulation models for robust control law design. These models are often described as linear time-invariant processes. Nevertheless, it is well known that the underlying process is often nonlinear. The reason for using a linear approach has been due to the lack of a proper set of tools for the identification of nonlinear systems. Over the past several decades, the controls and biomedical communities have made great advances in developing tools for the identification of nonlinear systems. These approaches are robust and readily applicable to aerospace systems. In this paper, we show the application of one such nonlinear system identification technique, structure detection, for the analysis of F-15B Quiet Spike(TradeMark) aeroservoelastic flight-test data. Structure detection is concerned with the selection of a subset of candidate terms that best describe the observed output. This is a necessary procedure to compute an efficient system description that may afford greater insight into the functionality of the system or a simpler controller design. Structure computation as a tool for black-box modeling may be of critical importance for the development of robust parsimonious models for the flight-test community. Moreover, this approach may lead to efficient strategies for rapid envelope expansion, which may save significant development time and costs. The objectives of this study are to demonstrate via analysis of F-15B Quiet Spike aeroservoelastic flight-test data for several flight conditions that 1) linear models are inefficient for modeling aeroservoelastic data, 2) nonlinear identification provides a parsimonious model description while providing a high percent fit for cross-validated data, and 3) the model structure and parameters vary as the flight condition is altered.
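As an illustration of the term-selection idea behind structure detection (a greedy forward-selection sketch over a hypothetical dictionary of candidate regressors, not the specific algorithm used in the paper):

```python
import numpy as np

# Greedy forward selection of candidate terms by residual reduction, a common
# approach to structure detection in nonlinear system identification.
# The data, candidate dictionary, and 5% stopping threshold are assumptions.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 2.0 * x + 0.5 * x**2 + 0.01 * rng.standard_normal(200)

# Candidate term dictionary: names -> regressor columns.
candidates = {"1": np.ones_like(x), "x": x, "x^2": x**2, "x^3": x**3}

selected, residual = [], y.copy()
for _ in range(len(candidates)):
    # Pick the unselected term that most reduces the residual sum of squares.
    best, best_rss = None, np.sum(residual**2)
    for name in candidates:
        if name in selected:
            continue
        cols = np.column_stack([candidates[n] for n in selected + [name]])
        coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
        rss = np.sum((y - cols @ coef) ** 2)
        if rss < best_rss:
            best, best_rss = name, rss
    # Stop when the best candidate improves the fit by less than 5%.
    if best is None or best_rss > 0.95 * np.sum(residual**2):
        break
    selected.append(best)
    cols = np.column_stack([candidates[n] for n in selected])
    coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
    residual = y - cols @ coef

print(selected)
```

With the data generated above, the selection recovers the two terms actually present in the system (x and x^2) and rejects the spurious candidates.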
Application of the PJ and NPS evaporation duct models over the South China Sea (SCS) in winter
Yang, Shaobo; Li, Xingfei; Wu, Chao; He, Xin; Zhong, Ying
2017-01-01
The detection of duct height has a significant effect on marine radar and wireless apparatus applications. This paper presents two models to verify the suitability of evaporation duct models in the SCS in winter. A meteorological gradient instrument for measuring evaporation ducts was fabricated using hydrological and meteorological sensors at different heights, and an experiment on the adaptive characteristics of evaporation duct models was carried out over the SCS. The heights of the evaporation ducts were estimated by means of log-linear fits and the Paulus-Jeske (PJ) and Naval Postgraduate School (NPS) models. The results showed that the NPS model offered significant advantages in stability compared with the PJ model. According to the collected data, the NPS model gave a mean deviation (MD) of -1.7 m from the true value, with a Standard Deviation (STD) of the MD of 0.8 m. The NPS model may be more suitable for estimating the evaporation duct height in the SCS in winter due to its simpler system characteristics compared with meteorological gradient instruments. PMID:28273113
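The two reported statistics (MD and STD of the deviation) can be made concrete with a minimal sketch; the duct-height values below are invented placeholders, not the measured SCS data:

```python
import numpy as np

# Hypothetical model estimates vs. reference duct heights (m); made-up numbers.
estimated = np.array([12.1, 10.4, 14.8, 11.2, 13.0])
reference = np.array([13.5, 12.3, 16.2, 13.2, 14.9])

deviation = estimated - reference
md = deviation.mean()         # mean deviation (bias of the model)
std = deviation.std(ddof=1)   # spread of the deviation about the bias
print(md, std)
```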
A model linking clinical workforce skill mix planning to health and health care dynamics.
Masnick, Keith; McDonnell, Geoff
2010-04-30
In an attempt to devise a simpler computable tool to assist workforce planners in determining what might be an appropriate mix of health service skills, our discussion led us to consider the implications of skill mixing and workforce composition beyond the 'stock and flow' approach of much workforce planning activity. Taking a dynamic systems approach, we were able to address the interactions, delays and feedbacks that influence the balance between the major components of health and health care. We linked clinical workforce requirements to clinical workforce workload, taking into account the requisite facilities, technologies, other material resources and their funding to support clinical care microsystems; gave recognition to productivity and quality issues; took cognisance of policies, governance and power concerns in the establishment and operation of the health care system; and, going back to the individual, gave due attention to personal behaviour and biology within the socio-political family environment. We have produced the broad endogenous systems model of health and health care which will enable human resource planners to operate within real world variables. We are now considering the development of simple, computable national versions of this model.
NASA Astrophysics Data System (ADS)
Bush, Drew; Sieber, Renee; Seiler, Gale; Chandler, Mark
2018-04-01
This study with 79 students in Montreal, Quebec, compared the educational use of a National Aeronautics and Space Administration (NASA) global climate model (GCM) to climate education technologies developed for classroom use that included simpler interfaces and processes. The goal was to show how differing climate education technologies succeed and fail at getting students to evolve in their understanding of anthropogenic global climate change (AGCC). Many available climate education technologies aim to convey key AGCC concepts or Earth systems processes; the educational GCM used here aims to teach students the methods and processes of global climate modeling. We hypothesized that challenges to learning about AGCC make authentic technology-enabled inquiry important in developing accurate understandings of not just the issue but how scientists research it. The goal was to determine if student learning trajectories differed between the comparison and treatment groups based on whether each climate education technology allowed authentic scientific research. We trace learning trajectories using pre/post exams, practice quizzes, and written student reflections. To examine the reasons for differing learning trajectories, we discuss student pre/post questionnaires, student exit interviews, and 535 min of recorded classroom video. Students who worked with a GCM demonstrated learning trajectories with larger gains, higher levels of engagement, and a better idea of how climate scientists conduct research. Students who worked with simpler climate education technologies scored lower in the course because of lower levels of engagement with inquiry processes that were perceived to not actually resemble the work of climate scientists.
Relating dynamics of model unentangled, crystallizable polymeric liquids to their local structure
NASA Astrophysics Data System (ADS)
Nguyen, Hong T.; Hoy, Robert S.
We study the liquid-state dynamics of a recently developed, crystallizable bead-spring polymer model. The model possesses a single ground state (NCP, wherein monomers close-pack and chains are nematically aligned) for all finite bending stiffnesses kb, but the solid morphologies formed under cooling vary strongly with kb, ranging from NCP to amorphous. We find that systems with kb producing amorphous order are good glass-formers exhibiting the classic Vogel-Fulcher slowdown with decreasing temperature T. In contrast, systems with kb producing crystalline solids exhibit simpler dynamics when kb is small. Larger kb produce more complex dynamics, but these are associated with the existence of an intermediate nematic liquid rather than glassy slowdown. We relate these differences to local, cluster-level structure measured via TCC analyses. Formation propensities and lifetimes of various clusters (associated with amorphous or crystalline order) vary strongly with kb and T. We relate these differences to those measured by the self-intermediate scattering function and other macroscopic measures of dynamics. Our results should aid in understanding the competition between crystallization and glass-formation in synthetic polymers.
Gurarie, David; King, Charles H.
2014-01-01
Mathematical modeling is widely used for predictive analysis of control options for infectious agents. Challenging problems arise for modeling host-parasite systems having complex life-cycles and transmission environments. Macroparasites, like Schistosoma, inhabit highly fragmented habitats that shape their reproductive success and distribution. Overdispersion and mating success are important factors to consider in modeling control options for such systems. Simpler models based on mean worm burden (MWB) formulations do not take these into account and overestimate transmission. Proposed MWB revisions have employed prescribed distributions and mating factor corrections to derive modified MWB models that have qualitatively different equilibria, including ‘breakpoints’ below which the parasite goes to extinction, suggesting the possibility of elimination via long-term mass-treatment control. Despite common use, no one has attempted to validate the scope and hypotheses underlying such MWB approaches. We conducted a systematic analysis of both the classical MWB and more recent “stratified worm burden” (SWB) modeling that accounts for mating and reproductive hurdles (Allee effect). Our analysis reveals some similarities, including breakpoints, between MWB and SWB, but also significant differences between the two types of model. We show the classic MWB has inherent inconsistencies, and propose SWB as a reliable alternative for projection of long-term control outcomes. PMID:25549362
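A hedged toy sketch of the 'breakpoint' behavior described above (a single cubic ODE with an Allee effect, not the MWB or SWB models themselves; all parameter values are illustrative):

```python
# Toy Allee-effect dynamics: dM/dt = r*M*(M/A - 1)*(1 - M/K).
# Below the breakpoint A the burden M decays to extinction; above it,
# M settles at the endemic level K. Forward Euler integration.
def simulate(m0, r=0.5, A=2.0, K=10.0, dt=0.01, steps=10000):
    m = m0
    for _ in range(steps):
        m += dt * r * m * (m / A - 1.0) * (1.0 - m / K)
    return m

low = simulate(1.0)    # starts below the breakpoint -> extinction
high = simulate(3.0)   # starts above the breakpoint -> endemic level
print(low, high)
```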
Irregular behavior in an excitatory-inhibitory neuronal network
NASA Astrophysics Data System (ADS)
Park, Choongseok; Terman, David
2010-06-01
Excitatory-inhibitory networks arise in many regions throughout the central nervous system and display complex spatiotemporal firing patterns. These neuronal activity patterns (of individual neurons and/or the whole network) are closely related to the functional status of the system and differ between normal and pathological states. For example, neurons within the basal ganglia, a group of subcortical nuclei that are responsible for the generation of movement, display a variety of dynamic behaviors such as correlated oscillatory activity and irregular, uncorrelated spiking. Neither the origins of these firing patterns nor the mechanisms that underlie the patterns are well understood. We consider a biophysical model of an excitatory-inhibitory network in the basal ganglia and explore how specific biophysical properties of the network contribute to the generation of irregular spiking. We use geometric dynamical systems and singular perturbation methods to systematically reduce the model to a simpler set of equations, which is suitable for analysis. The results specify the dependence on the strengths of synaptic connections and the intrinsic firing properties of the cells in the irregular regime when applied to the subthalamopallidal network of the basal ganglia.
Simple estimate of critical volume
NASA Technical Reports Server (NTRS)
Fedors, R. F.
1980-01-01
Method for estimating critical molar volume of materials is faster and simpler than previous procedures. Formula sums no more than 18 different contributions from components of the chemical structure of the material, and is as accurate (within 3 percent) as older, more complicated models. Method should expedite many thermodynamic design calculations.
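A minimal sketch of a group-contribution sum of this kind; the group names and increment values below are made up for illustration, not Fedors' published constants:

```python
# Hypothetical group increments (cm^3/mol); illustrative values only.
increments = {"CH3": 33.5, "CH2": 16.1, "OH": 10.0}

def critical_volume(groups):
    """Sum the contribution of each structural-group occurrence."""
    return sum(increments[name] * count for name, count in groups.items())

# e.g. a hypothetical molecule with 2 CH3, 3 CH2 and 1 OH group:
vc = critical_volume({"CH3": 2, "CH2": 3, "OH": 1})
print(vc)
```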
Food-web complexity, meta-community complexity and community stability.
Mougi, A; Kondoh, M
2016-04-13
What allows interacting, diverse species to coexist in nature has been a central question in ecology, ever since the theoretical prediction that a complex community should be inherently unstable. Although the role of spatiality in species coexistence has been recognized, its application to more complex systems has been less explored. Here, using a meta-community model of food web, we show that meta-community complexity, measured by the number of local food webs and their connectedness, elicits a self-regulating, negative-feedback mechanism and thus stabilizes food-web dynamics. Moreover, the presence of meta-community complexity can give rise to a positive food-web complexity-stability effect. Spatiality may play a more important role in stabilizing dynamics of complex, real food webs than expected from ecological theory based on the models of simpler food webs.
Majorana fermion surface code for universal quantum computation
Vijay, Sagar; Hsieh, Timothy H.; Fu, Liang
2015-12-10
In this study, we introduce an exactly solvable model of interacting Majorana fermions realizing Z 2 topological order with a Z 2 fermion parity grading and lattice symmetries permuting the three fundamental anyon types. We propose a concrete physical realization by utilizing quantum phase slips in an array of Josephson-coupled mesoscopic topological superconductors, which can be implemented in a wide range of solid-state systems, including topological insulators, nanowires, or two-dimensional electron gases, proximitized by s-wave superconductors. Our model finds a natural application as a Majorana fermion surface code for universal quantum computation, with a single-step stabilizer measurement requiring no physical ancilla qubits, increased error tolerance, and simpler logical gates than a surface code with bosonic physical qubits. We thoroughly discuss protocols for stabilizer measurements, encoding and manipulating logical qubits, and gate implementations.
Task analysis of autonomous on-road driving
NASA Astrophysics Data System (ADS)
Barbera, Anthony J.; Horst, John A.; Schlenoff, Craig I.; Aha, David W.
2004-12-01
The Real-time Control System (RCS) Methodology has evolved over a number of years as a technique to capture task knowledge and organize it into a framework conducive to implementation in computer control systems. The fundamental premise of this methodology is that the present state of the task activities sets the context that identifies the requirements for all of the support processing. In particular, the task context at any time determines what is to be sensed in the world, what world model states are to be evaluated, which situations are to be analyzed, what plans should be invoked, and which behavior generation knowledge is to be accessed. This methodology concentrates on the task behaviors explored through scenario examples to define a task decomposition tree that clearly represents the branching of tasks into layers of simpler and simpler subtask activities. There is a named branching condition/situation identified for every fork of this task tree. These become the input conditions of the if-then rules of the knowledge set that define how the task is to respond to input state changes. Detailed analysis of each branching condition/situation is used to identify antecedent world states and these, in turn, are further analyzed to identify all of the entities, objects, and attributes that have to be sensed to determine if any of these world states exist. This paper explores the use of this 4D/RCS methodology in some detail for the particular task of autonomous on-road driving. This work was funded under the Defense Advanced Research Projects Agency (DARPA) Mobile Autonomous Robot Software (MARS) effort (Doug Gage, Program Manager).
Principal process analysis of biological models.
Casagranda, Stefano; Touzeau, Suzanne; Ropers, Delphine; Gouzé, Jean-Luc
2018-06-14
Understanding the dynamical behaviour of biological systems is challenged by their large number of components and interactions. While efforts have been made in this direction to reduce model complexity, they often prove insufficient to grasp which and when model processes play a crucial role. Answering these questions is fundamental to unravel the functioning of living organisms. We design a method for dealing with model complexity, based on the analysis of dynamical models by means of Principal Process Analysis. We apply the method to a well-known model of circadian rhythms in mammals. The knowledge of the system trajectories allows us to decompose the system dynamics into processes that are active or inactive with respect to a certain threshold value. Process activities are graphically represented by Boolean and Dynamical Process Maps. We detect model processes that are always inactive, or inactive on some time interval. Eliminating these processes reduces the complex dynamics of the original model to the much simpler dynamics of the core processes, in a succession of sub-models that are easier to analyse. We quantify by means of global relative errors the extent to which the simplified models reproduce the main features of the original system dynamics and apply global sensitivity analysis to test the influence of model parameters on the errors. The results obtained prove the robustness of the method. The analysis of the sub-model dynamics allows us to identify the source of circadian oscillations. We find that the negative feedback loop involving proteins PER, CRY, CLOCK-BMAL1 is the main oscillator, in agreement with previous modelling and experimental studies. In conclusion, Principal Process Analysis is a simple-to-use method, which constitutes an additional and useful tool for analysing the complex dynamical behaviour of biological systems.
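The active/inactive decomposition can be sketched on a toy system; the two-process ODE and threshold value below are illustrative assumptions, not the circadian model analysed in the paper:

```python
import numpy as np

# Toy two-process model dx/dt = k1*exp(-t) - k2*x. A process is declared
# inactive at time t when its magnitude falls below a fraction delta of the
# dominant process magnitude at that time.
k1, k2, delta = 1.0, 0.5, 0.1
t = np.linspace(0.0, 10.0, 201)

# Forward Euler integration to obtain the trajectory x(t).
x = np.zeros_like(t)
for i in range(1, len(t)):
    dt = t[i] - t[i - 1]
    x[i] = x[i - 1] + dt * (k1 * np.exp(-t[i - 1]) - k2 * x[i - 1])

# Process magnitudes along the trajectory, and the Boolean activity map.
processes = np.vstack([k1 * np.exp(-t), k2 * x])
active = np.abs(processes) >= delta * np.abs(processes).max(axis=0)

# Early on production dominates; late, it has switched off (inactive).
print(active[:, 0], active[:, -1])
```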
Nitrogen cycling models and their application to forest harvesting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, D.W.; Dale, V.H.
1986-01-01
The characterization of forest nitrogen- (N-) cycling processes by several N-cycling models (FORCYTE, NITCOMP, FORTNITE, and LINKAGES) is briefly reviewed and evaluated against current knowledge of N cycling in forests. Some important processes (e.g., translocation within trees, N dynamics in decaying leaf litter) appear to be well characterized, whereas others (e.g., N mineralization from soil organic matter, N fixation, N dynamics in decaying wood, nitrification, and nitrate leaching) are poorly characterized, primarily because of a lack of knowledge rather than an oversight by model developers. It is remarkable how well the forest models do work in the absence of data on some key processes. For those systems in which the poorly understood processes could cause major changes in N availability or productivity, the accuracy of model predictions should be examined. However, the development of N-cycling models represents a major step beyond the much simpler, classic conceptual models of forest nutrient cycling developed by early investigators. The new generation of computer models will surely improve as research reveals how key nutrient-cycling processes operate.
NASA Astrophysics Data System (ADS)
Meyer, Rena; Engesgaard, Peter; Høyer, Anne-Sophie; Jørgensen, Flemming; Vignoli, Giulio; Sonnenborg, Torben O.
2018-07-01
Low-lying coastal regions are often highly populated, constitute sensitive habitats and are at the same time exposed to challenging hydrological environments due to surface flooding from storm events and saltwater intrusion, which both may affect drinking water supply from shallow and deeper aquifers. Near the Wadden Sea at the border of Southern Denmark and Northern Germany, the hydraulic system (connecting groundwater, river water, and the sea) was altered over centuries (until the 19th century) by e.g. the construction of dikes and drains to prevent flooding and allow agricultural use. Today, massive saltwater intrusions extend up to 20 km inland. In order to understand the regional flow, a methodological approach was developed that combined: (1) a highly-resolved voxel geological model, (2) a ∼1 million node groundwater model with 46 hydrofacies coupled to rivers, drains and the sea, (3) Tikhonov regularization calibration using hydraulic heads and average stream discharges as targets and (4) parameter uncertainty analysis. It is relatively new to use voxel models for constructing geological models that often have been simplified to stacked, pseudo-3D layer geology. The study is therefore one of the first to combine a voxel geological model with state-of-the-art flow calibration techniques. The results show that voxel geological modelling, where lithofacies information are transferred to each volumetric element, is a useful method to preserve 3D geological heterogeneity on a local scale, which is important when distinct geological features such as buried valleys are abundant. Furthermore, it is demonstrated that simpler geological models and simpler calibration methods do not perform as well. The proposed approach is applicable to many other systems, because it combines advanced and flexible geological modelling and flow calibration techniques. 
This has led to new insights in the regional flow patterns and especially about water cycling in the marsh area near the coast based on the ability to define six predictive scenarios from the linear analysis of parameter uncertainty. The results show that the coastal system near the Danish-German border is mainly controlled by flow in the two aquifers separated by a thick clay layer, and several deep high-permeable buried valleys that connect the sea with the interior and the two aquifers. The drained marsh area acts like a huge regional sink limiting submarine groundwater discharge. With respect to water balance, the greatest sensitivity to parameter uncertainty was observed in the drained marsh area, where some scenarios showed increased flow of sea water into the interior and increased drainage. We speculate that the massive salt water intrusion may be caused by a combination of the preferential pathways provided by the buried valleys, the marsh drainage and relatively high hydraulic conductivities in the two main aquifers as described by one of the scenarios. This is currently under investigation by using a salt water transport model.
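Step (3) of the methodology above, Tikhonov-regularized calibration, can be sketched in its generic least-squares form; the matrix, targets, and regularization weight below are stand-ins, not the actual groundwater model quantities:

```python
import numpy as np

# Tikhonov regularization: minimize ||A x - b||^2 + lam^2 ||x||^2 by
# stacking lam*I rows onto the least-squares system. A plays the role of
# a sensitivity matrix, b the calibration targets (both synthetic here).
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 5))
x_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
b = A @ x_true + 0.05 * rng.standard_normal(30)

lam = 0.1
A_aug = np.vstack([A, lam * np.eye(5)])
b_aug = np.concatenate([b, np.zeros(5)])
x_reg, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
print(x_reg)
```

The augmented-system solution is algebraically identical to solving the regularized normal equations (A^T A + lam^2 I) x = A^T b.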
Membrane evaporator/sublimator investigation
NASA Technical Reports Server (NTRS)
Elam, J.; Ruder, J.; Strumpf, H.
1974-01-01
Data are presented on a new evaporator/sublimator concept using a hollow fiber membrane unit with a high permeability to liquid water. The aim of the program was to obtain a more reliable, lighter, and simpler Extra Vehicular Life Support System (EVLSS) cooling concept than the one currently in use.
NASA Astrophysics Data System (ADS)
Liu, W. Y.; Xu, H. K.; Su, F. F.; Li, Z. Y.; Tian, Ye; Han, Siyuan; Zhao, S. P.
2018-03-01
Superconducting quantum multilevel systems coupled to resonators have recently been considered in some applications such as microwave lasing and high-fidelity quantum logical gates. In this work, using an rf-SQUID type phase qudit coupled to a microwave coplanar waveguide resonator, we study both theoretically and experimentally the energy spectrum of the system when the qudit level spacings are varied around the resonator frequency by changing the magnetic flux applied to the qudit loop. We show that the experimental result can be well described by a theoretical model that extends from the usual two-level Jaynes-Cummings system to the present four-level system. It is also shown that due to the small anharmonicity of the phase device a simplified model capturing the leading state interactions fits the experimental spectra very well. Furthermore we use the Lindblad master equation containing various relaxation and dephasing processes to calculate the level populations in the simpler qutrit-resonator system, which allows a clear understanding of the dynamics of the system under the microwave drive. Our results help to better understand and perform the experiments of coupled multilevel and resonator systems and can be applied in the case of transmon or Xmon qudits having similar anharmonicity to the present phase device.
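The two-level (Jaynes-Cummings) limit mentioned above can be sketched numerically: a qubit coupled to a single resonator mode in a truncated Fock space, whose first excited doublet splits by 2g on resonance (vacuum Rabi splitting). The parameter values are arbitrary illustration choices, not the experimental device parameters:

```python
import numpy as np

N = 10                      # Fock-space truncation
wc = wa = 1.0               # resonator and qubit frequencies (resonant case)
g = 0.1                     # coupling strength

a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # photon annihilation operator
n_op = a.T @ a                               # photon number operator
sz = np.diag([1.0, -1.0])                    # |e> = (1,0), |g> = (0,1)
sp = np.array([[0.0, 1.0], [0.0, 0.0]])      # qubit raising operator
sm = sp.T

# Jaynes-Cummings Hamiltonian (rotating-wave approximation).
H = (wc * np.kron(np.eye(2), n_op)
     + 0.5 * wa * np.kron(sz, np.eye(N))
     + g * (np.kron(sp, a) + np.kron(sm, a.T)))

E = np.sort(np.linalg.eigvalsh(H))
splitting = E[2] - E[1]     # vacuum Rabi splitting of the first doublet
print(splitting)
```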
Velocity-image model for online signature verification.
Khan, Mohammad A U; Niazi, Muhammad Khalid Khan; Khan, Muhammad Aurangzeb
2006-11-01
In general, online signature capturing devices provide outputs in the form of shape and velocity signals. In the past, strokes have been extracted by tracking velocity signal minima. However, the resulting strokes are large and complicated in shape, which makes the subsequent job of generating a discriminative template difficult. We propose a new stroke-based algorithm that splits the velocity signal into various bands. Based on these bands, strokes are extracted which are smaller and simpler in nature. Training of our proposed system revealed that the low- and high-velocity bands of the signal are unstable, whereas the medium-velocity band can be used for discrimination purposes. Euclidean distances of strokes extracted on the basis of the medium-velocity band are used for verification. The experiments conducted show an improvement in the discriminative capability of the proposed stroke-based system.
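A toy sketch of the band-splitting idea: keep contiguous runs of samples whose velocity magnitude falls in a medium band, treat each run as a stroke, and compare strokes by Euclidean distance after resampling. The thresholds and signal are arbitrary assumptions, not the trained band boundaries from the paper:

```python
import numpy as np

def medium_band_strokes(v, lo=0.2, hi=0.8, min_len=3):
    """Extract contiguous runs of medium-magnitude velocity as strokes."""
    mask = (np.abs(v) >= lo) & (np.abs(v) <= hi)
    strokes, start = [], None
    for i, m in enumerate(mask):
        if m and start is None:
            start = i
        elif not m and start is not None:
            if i - start >= min_len:
                strokes.append(v[start:i])
            start = None
    if start is not None and len(v) - start >= min_len:
        strokes.append(v[start:])
    return strokes

def stroke_distance(s1, s2, k=8):
    """Resample both strokes to k points, then take the Euclidean distance."""
    r1 = np.interp(np.linspace(0, 1, k), np.linspace(0, 1, len(s1)), s1)
    r2 = np.interp(np.linspace(0, 1, k), np.linspace(0, 1, len(s2)), s2)
    return float(np.linalg.norm(r1 - r2))

t = np.linspace(0, 2 * np.pi, 100)
v = 0.5 * np.sin(t)                 # synthetic velocity trace
strokes = medium_band_strokes(v)
print(len(strokes), stroke_distance(strokes[0], strokes[1]))
```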
NASA Astrophysics Data System (ADS)
Sarojkumar, K.; Krishna, S.
2016-08-01
Online dynamic security assessment (DSA) is a computationally intensive task. In order to reduce the amount of computation, screening of contingencies is performed. Screening involves analyzing the contingencies with the system described by a simpler model so that the computation requirement is reduced. Screening identifies those contingencies which are sure not to cause instability and hence can be eliminated from further scrutiny. The numerical method and the step size used for screening should be chosen with a compromise between speed and accuracy. This paper proposes the use of an energy function as a measure of error in the numerical solution used for screening contingencies. The proposed measure of error can be used to determine the most accurate numerical method satisfying the time constraint of online DSA. Case studies on a 17-generator system are reported.
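The idea of using an energy function as an error measure can be sketched on a simpler conservative system, an undamped pendulum standing in for a swing equation: the exact dynamics conserve E = v^2/2 + (1 - cos x), so the energy drift of a numerical solution measures its error. None of this reproduces the 17-generator study:

```python
import numpy as np

def energy(x, v):
    return 0.5 * v * v + (1.0 - np.cos(x))

def accel(x):
    return -np.sin(x)

def euler_drift(x, v, dt, steps):
    e0 = energy(x, v)
    for _ in range(steps):
        # explicit Euler: both updates use the old state
        x, v = x + dt * v, v + dt * accel(x)
    return abs(energy(x, v) - e0)

def rk4_drift(x, v, dt, steps):
    e0 = energy(x, v)
    for _ in range(steps):
        k1 = np.array([v, accel(x)])
        k2 = np.array([v + 0.5 * dt * k1[1], accel(x + 0.5 * dt * k1[0])])
        k3 = np.array([v + 0.5 * dt * k2[1], accel(x + 0.5 * dt * k2[0])])
        k4 = np.array([v + dt * k3[1], accel(x + dt * k3[0])])
        x += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        v += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    return abs(energy(x, v) - e0)

# Same step size and horizon: the energy drift ranks the methods' accuracy.
e_euler = euler_drift(1.0, 0.0, 0.01, 2000)
e_rk4 = rk4_drift(1.0, 0.0, 0.01, 2000)
print(e_euler, e_rk4)
```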
Geochemistry of the Birch Creek Drainage Basin, Idaho
Swanson, Shawn A.; Rosentreter, Jeffrey J.; Bartholomay, Roy C.; Knobel, LeRoy L.
2003-01-01
The U.S. Geological Survey and Idaho State University, in cooperation with the U.S. Department of Energy, are conducting studies to describe the chemical character of ground water that moves as underflow from drainage basins into the eastern Snake River Plain aquifer (ESRPA) system at and near the Idaho National Engineering and Environmental Laboratory (INEEL) and the effects of these recharge waters on the geochemistry of the ESRPA system. Each of these recharge waters has a hydrochemical character related to geochemical processes, especially water-rock interactions, that occur during migration to the ESRPA. Results of these studies will benefit ongoing and planned geochemical modeling of the ESRPA at the INEEL by providing model input on the hydrochemical character of water from each drainage basin. During 2000, water samples were collected from five wells and one surface-water site in the Birch Creek drainage basin and analyzed for selected inorganic constituents, nutrients, dissolved organic carbon, tritium, measurements of gross alpha and beta radioactivity, and stable isotopes. Four duplicate samples also were collected for quality assurance. Results, which include analyses of samples previously collected from four other sites in the basin, show that most water from the Birch Creek drainage basin has a calcium-magnesium bicarbonate character. The Birch Creek Valley can be divided roughly into three hydrologic areas. In the northern part, ground water is forced to the surface by a basalt barrier and the sampling sites were either surface water or shallow wells. Water chemistry in this area was characterized by simple evaporation models, simple calcite-carbon dioxide models, or complex models involving carbonate and silicate minerals. The central part of the valley is filled by sedimentary material and the sampling sites were wells that are deeper than those in the northern part.
Water chemistry in this area was characterized by simple calcite-dolomite-carbon dioxide models. In the southern part, ground water enters the ESRPA. In this area, the sampling sites were wells with depths and water levels much deeper than those in the northern and central parts of the valley. The calcium and carbon water chemistry in this area was characterized by a simple calcite-carbon dioxide model, but complex calcite-silicate models more accurately accounted for mass transfer in these areas. Throughout the geochemical system, calcite precipitated if it was an active phase in the models. Carbon dioxide either precipitated (outgassed) or dissolved depending on the partial pressure of carbon dioxide in water from the modeled sites. Dolomite was an active phase only in models from the central part of the system. Generally the entire geochemical system could be modeled with either evaporative models, carbonate models, or carbonate-silicate models. In both of the latter types of models, a significant amount of calcite precipitated relative to the mass transfer to and from the other active phases. The amount of calcite precipitated in the more complex models was consistent with the amount of calcite precipitated in the simpler models. This consistency suggests that, although the simpler models can predict calcium and carbon concentrations in Birch Creek Valley ground and surface water, silicate-mineral-based models are required to account for the other constituents. The amount of mass transfer to and from the silicate mineral phases was generally small compared with that in the carbonate phases. It appears that the water chemistry of well USGS 126B represents the chemistry of water recharging the ESRPA by means of underflow from the Birch Creek Valley.
A Pareto-optimal moving average multigene genetic programming model for daily streamflow prediction
NASA Astrophysics Data System (ADS)
Danandeh Mehr, Ali; Kahya, Ercan
2017-06-01
Genetic programming (GP) is able to systematically explore alternative model structures of different accuracy and complexity from observed input and output data. The effectiveness of GP in hydrological system identification has been recognized in recent studies. However, selecting a parsimonious (accurate and simple) model from such alternatives still remains a question. This paper proposes a Pareto-optimal moving average multigene genetic programming (MA-MGGP) approach to develop a parsimonious model for single-station streamflow prediction. The three main components of the approach that take us from observed data to a validated model are: (1) data pre-processing, (2) system identification and (3) system simplification. The data pre-processing ingredient uses a simple moving average filter to diminish the lagged prediction effect of stand-alone data-driven models. The multigene ingredient of the model tends to identify the underlying nonlinear system with expressions simpler than classical monolithic GP and, eventually, the simplification component exploits a Pareto front plot to select a parsimonious model through an interactive complexity-efficiency trade-off. The approach was tested using the daily streamflow records from a station on Senoz Stream, Turkey. Compared with the efficiency results of stand-alone GP, MGGP, and conventional multiple linear regression prediction models as benchmarks, the proposed Pareto-optimal MA-MGGP model put forward a parsimonious solution, which is of noteworthy practical importance. In addition, the approach allows the user to bring human insight into the problem, examining evolved models and picking the best-performing programs out for further analysis.
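The moving-average pre-processing ingredient can be sketched as a simple window filter applied to the streamflow series before modeling; the window length and flow values below are illustrative assumptions:

```python
import numpy as np

def moving_average(q, window=3):
    """Smooth a flow series with a uniform moving-average filter."""
    kernel = np.ones(window) / window
    # 'valid' keeps only positions where the window fully overlaps the data
    return np.convolve(q, kernel, mode="valid")

flow = np.array([5.0, 8.0, 6.0, 9.0, 12.0, 7.0])   # made-up daily flows
print(moving_average(flow))
```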
A Flexible framework for forward and inverse modeling of stormwater control measures
NASA Astrophysics Data System (ADS)
Aflaki, S.; Massoudieh, A.
2016-12-01
Models that allow for design considerations of green infrastructure (GI) practices to control stormwater runoff and associated contaminants have received considerable attention in recent years. While popular, GI models are generally relatively simplistic. However, GI model predictions are being relied upon by many municipalities and State/Local agencies to make decisions about grey vs. green infrastructure improvement planning. Adding complexity to GI modeling frameworks may preclude their use in simpler urban planning situations. Therefore, the goal here was to develop a sophisticated, yet flexible tool that could be used by design engineers and researchers to capture and explore the effect of design factors and properties of the media used on the performance of GI systems at a relatively small scale. We deemed it essential to have a flexible GI modeling tool that is capable of accurately simulating GI system components and specific biophysical processes affecting contaminants, such as reactions and particle-associated transport, while maintaining a high degree of flexibility to account for the myriad of GI alternatives. The mathematical framework for a stand-alone GI performance assessment tool has been developed and will be demonstrated. The process-based model framework developed here can be used to model a diverse range of GI practices such as green roofs, retention ponds, bioretention, infiltration trenches, permeable pavement and other custom-designed combinatory systems. Four demonstration applications covering a diverse range of systems will be presented: evaluating the hydraulic performance of a complex bioretention system, hydraulic analysis of a porous pavement system, colloid-facilitated transport, reactive transport and groundwater recharge underneath an infiltration pond, and reactive transport and bed-sediment interactions in a wetland system.
Experiences of building a medical data acquisition system based on two-level modeling.
Li, Bei; Li, Jianbin; Lan, Xiaoyun; An, Ying; Gao, Wuqiang; Jiang, Yuqiao
2018-04-01
Compared to traditional software development strategies, the two-level modeling approach is more flexible and better suited to building an information system in the medical domain. However, two-level modeling standards such as openEHR appear complex to medical professionals. This study aims to investigate, implement, and improve the two-level modeling approach, and discusses the experience of building a unified data acquisition system for four affiliated university hospitals based on this approach. After the investigation, we simplified the archetype modeling approach and developed a medical data acquisition system in which medical experts can define the metadata for their own specialties using a visual, easy-to-use tool. The medical data acquisition system for multiple centers, clinical specialties, and diseases has been developed, and integrates the functions of metadata modeling, form design, and data acquisition. To date, 93,353 data items and 6,017 categories for 285 specific diseases have been created by medical experts, and over 25,000 patients' information has been collected. OpenEHR is an advanced two-level modeling method for medical data, but its core idea of separating domain knowledge from technical concerns is not easy to realize. Moreover, it is difficult to reach agreement on archetype definitions. Therefore, we adopted simpler metadata modeling and employed What-You-See-Is-What-You-Get (WYSIWYG) tools to further improve the usability of the system. Compared with archetype definition, our approach lowers the difficulty of modeling. Nevertheless, to build such a system, every participant should have some knowledge of both the medical and information technology domains, as such interdisciplinary expertise is essential. Copyright © 2018 Elsevier B.V. All rights reserved.
Two improved coherent optical feedback systems for optical information processing
NASA Technical Reports Server (NTRS)
Lee, S. H.; Bartholomew, B.; Cederquist, J.
1976-01-01
Coherent optical feedback systems are Fabry-Perot interferometers modified to perform optical information processing. Two new systems based on plane parallel and confocal Fabry-Perot interferometers are introduced. The plane parallel system can be used for contrast control, intensity level selection, and image thresholding. The confocal system can be used for image restoration and solving partial differential equations. These devices are simpler and less expensive than previous systems. Experimental results are presented to demonstrate their potential for optical information processing.
Discretization chaos - Feedback control and transition to chaos
NASA Technical Reports Server (NTRS)
Grantham, Walter J.; Athalye, Amit M.
1990-01-01
Problems in the design of feedback controllers for chaotic dynamical systems are considered theoretically, focusing on two cases where chaos arises only when a nonchaotic continuous-time system is discretized into simpler discrete-time systems (exponential discretization and pseudo-Euler integration applied to Lotka-Volterra competition and prey-predator systems). Numerical simulation results are presented in extensive graphs and discussed in detail. It is concluded that care must be taken in applying standard dynamical-systems methods to control systems that may be discontinuous or nondifferentiable.
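The phenomenon this abstract describes can be illustrated with a minimal sketch. The example below uses forward-Euler integration of the one-dimensional logistic ODE, a simpler relative of the Lotka-Volterra systems studied in the report (the specific systems and discretizations here are illustrative, not the report's): the continuous system converges monotonically to its equilibrium, yet the discrete map obtained for a large time step is chaotic.

```python
import numpy as np

def euler_logistic(r, h, x0, n):
    # Forward-Euler discretization of the logistic ODE x' = r*x*(1 - x).
    # The continuous system is never chaotic; the discrete map
    # x_{k+1} = x_k + h*r*x_k*(1 - x_k) is conjugate to the logistic map
    # with parameter 1 + h*r, and so becomes chaotic for large h*r.
    xs = [x0]
    for _ in range(n):
        x = xs[-1]
        xs.append(x + h * r * x * (1.0 - x))
    return np.array(xs)

smooth = euler_logistic(r=1.0, h=0.1, x0=0.1, n=500)  # converges to the equilibrium x = 1
wild = euler_logistic(r=1.0, h=2.7, x0=0.1, n=500)    # bounded but erratic: discretization chaos
```

With h*r = 2.7 the map corresponds to a logistic map at parameter 3.7, well inside the chaotic regime, even though the underlying flow is as tame as dynamical systems get.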
Graphical User Interface for the NASA FLOPS Aircraft Performance and Sizing Code
NASA Technical Reports Server (NTRS)
Lavelle, Thomas M.; Curlett, Brian P.
1994-01-01
XFLOPS is an X-Windows/Motif graphical user interface for the aircraft performance and sizing code FLOPS. This new interface simplifies entering data and analyzing results, thereby reducing analysis time and errors. Data entry is simpler because input windows are used for each of the FLOPS namelists. These windows contain fields to input the variable's values along with help information describing the variable's function. Analyzing results is simpler because output data are displayed rapidly. This is accomplished in two ways. First, because the output file has been indexed, users can view particular sections with the click of a mouse button. Second, because menu picks have been created, users can plot engine and aircraft performance data. In addition, XFLOPS has a built-in help system and complete on-line documentation for FLOPS.
Mass balance modelling of contaminants in river basins: a flexible matrix approach.
Warren, Christopher; Mackay, Don; Whelan, Mick; Fox, Kay
2005-12-01
A novel and flexible approach is described for simulating the behaviour of chemicals in river basins. A number (n) of river reaches are defined and their connectivity is described by entries in an n x n matrix. Changes in segmentation can be readily accommodated by altering the matrix entries, without the need for model revision. Two models are described. The simpler QMX-R model only considers advection and an overall loss due to the combined processes of volatilization, net transfer to sediment and degradation. The rate constant for the overall loss is derived from fugacity calculations for a single segment system. The more rigorous QMX-F model performs fugacity calculations for each segment and explicitly includes the processes of advection, evaporation, water-sediment exchange and degradation in both water and sediment. In this way chemical exposure in all compartments (including equilibrium concentrations in biota) can be estimated. Both models are designed to serve as intermediate-complexity exposure assessment tools for river basins with relatively low data requirements. By considering the spatially explicit nature of emission sources and the changes in concentration which occur with transport in the channel system, the approach offers significant advantages over simple one-segment simulations while being more readily applicable than more sophisticated, highly segmented, GIS-based models.
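The connectivity-matrix bookkeeping described above can be sketched in a few lines. This is an illustrative reconstruction under stated assumptions, not the authors' QMX-R code: the variable names and the exact mass-balance form (advective inflow from upstream reaches plus emission, balanced against outflow and a first-order overall loss) are assumptions.

```python
import numpy as np

def steady_state_conc(A, Q, V, k, E):
    """Steady-state concentrations for n linked river reaches (QMX-R-style sketch).

    A : (n, n) connectivity matrix, A[i, j] = 1 if reach j discharges into reach i
    Q : (n,) volumetric outflow of each reach [m3/s]
    V : (n,) reach volume [m3]
    k : overall first-order loss rate [1/s] (volatilization + sedimentation + degradation)
    E : (n,) direct emission rate into each reach [g/s]

    Mass balance for reach i:
        sum_j A[i, j] * Q[j] * c[j] + E[i] = (Q[i] + k * V[i]) * c[i]
    """
    M = np.diag(Q + k * V) - A * Q   # A * Q scales column j by the inflow Q[j]
    return np.linalg.solve(M, E)
```

Changing the segmentation then only means editing the entries of A, which is the flexibility the abstract emphasizes. For a two-reach chain with unit flows, emissions of 2 and 3 g/s, and negligible in-reach losses (V = 0), the solver returns concentrations of 2 and 5 g/m3.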
A flexible framework for process-based hydraulic and water ...
Background: Models that allow for design considerations of green infrastructure (GI) practices to control stormwater runoff and associated contaminants have received considerable attention in recent years. While popular, GI models are generally relatively simplistic. However, GI model predictions are being relied upon by many municipalities and State/Local agencies to make decisions about grey vs. green infrastructure improvement planning. Adding complexity to GI modeling frameworks may preclude their use in simpler urban planning situations. Therefore, the goal here was to develop a sophisticated, yet flexible tool that could be used by design engineers and researchers to capture and explore the effect of design factors and properties of the media used on the performance of GI systems at a relatively small scale. We deemed it essential to have a flexible GI modeling tool that is capable of accurately simulating GI system components and the specific biophysical processes affecting contaminants, such as reactions and particle-associated transport, while maintaining a high degree of flexibility to account for the myriad of GI alternatives. The mathematical framework for a stand-alone GI performance assessment tool has been developed and will be demonstrated. Framework Features: The process-based model framework developed here can be used to model a diverse range of GI practices such as green roof, retention pond, bioretention, infiltration trench, permeable pavement and
Roehl, Edwin A.; Conrads, Paul
2010-01-01
This is the second of two papers that describe how data mining can aid natural-resource managers with the difficult problem of controlling the interactions between hydrologic and man-made systems. Data mining is a new science that assists scientists in converting large databases into knowledge, and is uniquely able to leverage the large amounts of real-time, multivariate data now being collected for hydrologic systems. Part 1 gives a high-level overview of data mining, and describes several applications that have addressed major water resource issues in South Carolina. This Part 2 paper describes how various data mining methods are integrated to produce predictive models for controlling surface- and groundwater hydraulics and quality. The methods include: - signal processing to remove noise and decompose complex signals into simpler components; - time series clustering that optimally groups hundreds of signals into "classes" that behave similarly for data reduction and (or) divide-and-conquer problem solving; - classification which optimally matches new data to behavioral classes; - artificial neural networks which optimally fit multivariate data to create predictive models; - model response surface visualization that greatly aids in understanding data and physical processes; and, - decision support systems that integrate data, models, and graphics into a single package that is easy to use.
Shape Transformations of Epithelial Shells
Misra, Mahim; Audoly, Basile; Kevrekidis, Ioannis G.; Shvartsman, Stanislav Y.
2016-01-01
Regulated deformations of epithelial sheets are frequently foreshadowed by patterning of their mechanical properties. The connection between patterns of cell properties and the emerging tissue deformations is studied in multiple experimental systems, but the general principles remain poorly understood. For instance, it is in general unclear what determines the direction in which the patterned sheet is going to bend and whether the resulting shape transformation will be discontinuous or smooth. Here these questions are explored computationally, using vertex models of epithelial shells assembled from prismlike cells. In response to rings and patches of apical cell contractility, model epithelia smoothly deform into invaginated or evaginated shapes similar to those observed in embryos and tissue organoids. Most of the observed effects can be captured by a simpler model with polygonal cells, modified to include the effects of the apicobasal polarity and natural curvature of epithelia. Our models can be readily extended to include the effects of multiple constraints and used to describe a wide range of morphogenetic processes. PMID:27074691
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jackson, Brian K.; Lewis, Nikole K.; Showman, Adam P.
2012-06-01
We present a new model for Ellipsoidal Variations Induced by a Low-Mass Companion, the EVIL-MC model. We employ several approximations appropriate for planetary systems to substantially increase the computational efficiency of our model relative to more general ellipsoidal variation models and improve upon the accuracy of simpler models. This new approach gives us a unique ability to rapidly and accurately determine planetary system parameters. We use the EVIL-MC model to analyze Kepler Quarter 0-2 (Q0-2) observations of the HAT-P-7 system, an F-type star orbited by a roughly Jupiter-mass companion. Our analysis corroborates previous estimates of the planet-star mass ratio q = (1.10 ± 0.06) × 10^-3, and we have revised the planet's dayside brightness temperature to 2680 (+10/-20) K. We also find a large difference between the day- and nightside planetary flux, with little nightside emission. Preliminary dynamical+radiative modeling of the atmosphere indicates that this result is qualitatively consistent with high altitude absorption of stellar heating. Similar analyses of Kepler and CoRoT photometry of other planets using EVIL-MC will play a key role in providing constraints on the properties of many extrasolar systems, especially given the limited resources for follow-up and characterization of these systems. However, as we highlight, there are important degeneracies between the contributions from ellipsoidal variations and planetary emission and reflection. Consequently, for many of the hottest and brightest Kepler and CoRoT planets, accurate estimates of the planetary emission and reflection, diagnostic of atmospheric heat budgets, will require accurate modeling of the photometric contribution from the stellar ellipsoidal variation.
NASA Astrophysics Data System (ADS)
Ernawati; Carnia, E.; Supriatna, A. K.
2018-03-01
Eigenvalues and eigenvectors in max-plus algebra have the same important role as eigenvalues and eigenvectors in conventional algebra. In max-plus algebra, eigenvalues and eigenvectors are useful for understanding the dynamics of a system, as in train scheduling, production-system scheduling, and the scheduling of learning activities in moving classes. In the translation of proteins, in which the ribosome moves uni-directionally along the mRNA strand to recruit the amino acids that make up the protein, eigenvalues and eigenvectors are used to calculate protein production rates and the density of ribosomes on the mRNA. It is therefore important to examine the eigenvalues and eigenvectors of the protein translation process. In this paper an eigenvector formula is given for ribosome dynamics during mRNA translation by using the Kleene star algorithm; the resulting eigenvector formula is simpler and easier to apply to the system than that introduced elsewhere. This paper also discusses the properties of the matrix B_λ^(⊗n) of the model. Among the important properties, the matrix always has the same elements in the first column for n = 1, 2, … if the eigenvalue is the initiation time, λ = τ_in, and that column is the eigenvector of the model corresponding to λ.
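As a sketch of the max-plus machinery this abstract relies on (not the paper's Kleene-star eigenvector formula itself), the fragment below implements the max-plus matrix product and computes the eigenvalue of an irreducible max-plus matrix via Karp's standard maximum-cycle-mean algorithm; the edge-weight convention A[i, j] = weight of the transition j → i is an assumption.

```python
import numpy as np

NEG = -np.inf  # the max-plus "zero" (absent edge)

def mp_mul(A, B):
    # Max-plus matrix product: (A ⊗ B)[i, j] = max_k (A[i, k] + B[k, j]).
    C = np.full((A.shape[0], B.shape[1]), NEG)
    for i in range(A.shape[0]):
        for j in range(B.shape[1]):
            C[i, j] = np.max(A[i, :] + B[:, j])
    return C

def mp_eigenvalue(A):
    # Karp's algorithm: for an irreducible (strongly connected) max-plus
    # matrix, the unique eigenvalue equals the maximum mean weight over
    # all cycles of the precedence graph.
    n = A.shape[0]
    D = np.full((n + 1, n), NEG)   # D[k, v] = best weight of a length-k walk to v
    D[0, 0] = 0.0                  # start walks at node 0 (any node works if irreducible)
    for k in range(1, n + 1):
        for v in range(n):
            D[k, v] = np.max(A[v, :] + D[k - 1, :])
    best = NEG
    for v in range(n):
        if D[n, v] == NEG:
            continue
        vals = [(D[n, v] - D[k, v]) / (n - k) for k in range(n) if D[k, v] > NEG]
        if vals:
            best = max(best, min(vals))
    return best
```

For a two-state system with transition times 1 and 2, the only cycle has mean (1 + 2) / 2, so the eigenvalue, interpretable as the asymptotic cycle time of the schedule, is 1.5.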
Comparison of frequency-domain and time-domain rotorcraft vibration control methods
NASA Technical Reports Server (NTRS)
Gupta, N. K.
1984-01-01
Active control of rotor-induced vibration in rotorcraft has received significant attention recently. Two classes of techniques have been proposed. The more developed approach works with harmonic analysis of measured time histories and is called the frequency-domain approach. The more recent approach computes the control input directly using the measured time history data and is called the time-domain approach. The report summarizes the results of a theoretical investigation to compare the two approaches. Five specific areas were addressed: (1) techniques to derive models needed for control design (system identification methods), (2) robustness with respect to errors, (3) transient response, (4) susceptibility to noise, and (5) implementation difficulties. The system identification methods are more difficult for the time-domain models. The time-domain approach is more robust (e.g., has higher gain and phase margins) than the frequency-domain approach. It might thus be possible to avoid doing real-time system identification in the time-domain approach by storing models at a number of flight conditions. The most significant error source is the variation in open-loop vibrations caused by pilot inputs, maneuvers or gusts. The implementation requirements are similar except that the time-domain approach can be much simpler to implement if real-time system identification were not necessary.
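The frequency-domain approach summarized above is conventionally built around a transfer ("T") matrix relating control harmonics to measured vibration harmonics. The sketch below is a generic least-squares update of that kind, not the report's specific controller; the linear model z = z0 + T u and the unit-gain default are assumptions.

```python
import numpy as np

def hhc_update(T, z, u, gain=1.0):
    # One iteration of classical frequency-domain higher-harmonic control.
    # z holds the measured vibration harmonics, assumed linear in the control
    # harmonics u (z = z0 + T u); the minimum-variance update steps through
    # the pseudo-inverse of the T-matrix.
    return u - gain * np.linalg.pinv(T) @ z
```

When the identified T-matrix is exact, this drives the vibration harmonics to zero in a single step; with an uncertain T, a reduced gain trades convergence speed for robustness, which is the trade-off at the heart of the report's comparison.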
Mikhalevich, Irina
2017-01-01
Behavioural flexibility is often treated as the gold standard of evidence for more sophisticated or complex forms of animal cognition, such as planning, metacognition and mindreading. However, the evidential link between behavioural flexibility and complex cognition has not been explicitly or systematically defended. Such a defence is particularly pressing because observed flexible behaviours can frequently be explained by putatively simpler cognitive mechanisms. This leaves complex cognition hypotheses open to ‘deflationary’ challenges that are accorded greater evidential weight precisely because they offer putatively simpler explanations of equal explanatory power. This paper challenges the blanket preference for simpler explanations, and shows that once this preference is dispensed with, and the full spectrum of evidence—including evolutionary, ecological and phylogenetic data—is accorded its proper weight, an argument in support of the prevailing assumption that behavioural flexibility can serve as evidence for complex cognitive mechanisms may begin to take shape. An adaptive model of cognitive-behavioural evolution is proposed, according to which the existence of convergent trait–environment clusters in phylogenetically disparate lineages may serve as evidence for the same trait–environment clusters in other lineages. This, in turn, could permit inferences of cognitive complexity in cases of experimental underdetermination, thereby placing the common view that behavioural flexibility can serve as evidence for complex cognition on firmer grounds. PMID:28479981
ERIC Educational Resources Information Center
Armoni, Michal; Gal-Ezer, Judith
2005-01-01
When dealing with a complex problem, solving it by reduction to simpler problems, or problems for which the solution is already known, is a common method in mathematics and other scientific disciplines, as in computer science and, specifically, in the field of computability. However, when teaching computational models (as part of computability)…
Comment on ``Spectroscopy of samarium isotopes in the sdg interacting boson model''
NASA Astrophysics Data System (ADS)
Kuyucak, Serdar; Lac, Vi-Sieu
1993-04-01
We point out that the data used in the sdg boson model calculations by Devi and Kota [Phys. Rev. C 45, 2238 (1992)] can be equally well described by the much simpler sd boson model. We present additional data for the Sm isotopes which cannot be explained in the sd model and hence may justify such an extension to the sdg bosons. We also comment on the form of the Hamiltonian and the transition operators used in this paper.
Sleep Supports Inhibitory Operant Conditioning Memory in "Aplysia"
ERIC Educational Resources Information Center
Vorster, Albrecht P. A.; Born, Jan
2017-01-01
Sleep supports memory consolidation as shown in mammals and invertebrates such as bees and "Drosophila." Here, we show that sleep's memory function is preserved in "Aplysia californica" with an even simpler nervous system. Animals performed on an inhibitory conditioning task ("learning that a food is inedible") three…
A novel unsplit perfectly matched layer for the second-order acoustic wave equation.
Ma, Youneng; Yu, Jinhua; Wang, Yuanyuan
2014-08-01
When solving acoustic field equations by using numerical approximation techniques, absorbing boundary conditions (ABCs) are widely used to truncate the simulation to a finite space. The perfectly matched layer (PML) technique has exhibited excellent absorbing efficiency as an ABC for the acoustic wave equation formulated as a first-order system. However, as the PML was originally designed for the first-order equation system, it cannot be applied to the second-order equation system directly. In this article, we aim to extend the unsplit PML to the second-order equation system. We developed an efficient unsplit implementation of PML for the second-order acoustic wave equation based on an auxiliary-differential-equation (ADE) scheme. The proposed method facilitates the use of PML in simulations based on second-order equations. Compared with existing PMLs, it has a simpler implementation and requires less extra storage. Numerical results from finite-difference time-domain models are provided to illustrate the validity of the approach. Copyright © 2014 Elsevier B.V. All rights reserved.
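A full unsplit ADE-PML needs auxiliary differential equations and is too long to sketch here, but the role of an absorbing boundary for the second-order wave equation can be illustrated with a simpler graded damping ("sponge") layer. This is explicitly not the paper's PML (a sponge is not reflectionless), and all grid and damping parameters are illustrative.

```python
import numpy as np

def wave_1d_sponge(nx=400, nt=1200, c=1.0, dx=1.0, cfl=0.5, layer=60):
    # 1-D second-order wave equation u_tt = c^2 u_xx, leapfrog in time,
    # with a quadratically graded damping layer of `layer` cells at each end.
    dt = cfl * dx / c
    sigma = np.zeros(nx)
    ramp = (np.arange(layer) / layer) ** 2       # 0 at the interior, ~1 at the edge
    sigma[:layer] = ramp[::-1] * 0.5 / dt
    sigma[-layer:] = ramp * 0.5 / dt
    u_prev = np.exp(-0.01 * (np.arange(nx) - nx // 2) ** 2)  # Gaussian pulse, zero velocity
    u = u_prev.copy()
    a = (c * dt / dx) ** 2
    for _ in range(nt):
        lap = np.zeros(nx)
        lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]
        # damping enters as sigma * u_t, discretized with a centered difference
        u_next = (2 * u - (1 - sigma * dt / 2) * u_prev + a * lap) / (1 + sigma * dt / 2)
        u_prev, u = u, u_next
    return u

final = wave_1d_sponge()  # by now the pulse has entered the layers and been absorbed
```

The ADE-PML of the paper improves on this in exactly the way the abstract claims: it absorbs without the grading-dependent reflections a plain sponge leaves behind, while remaining unsplit.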
Tank Remote Repair System Conceptual Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kriikku, E.
2002-12-06
This document describes two conceptual designs for a Tank Remote Repair System to perform leak site repairs of double shell waste tank walls (Types I, II, III, and IIIA) from the annulus space. The first concept uses a magnetic wall crawler and an epoxy patch system and the second concept uses a magnetic wall crawler and a magnetic patch system. The recommended concept uses the magnetic patch system, since it is simpler to deliver, easier to apply, and has a higher probability of stopping an active leak.
NASA Astrophysics Data System (ADS)
Marshall, R. A.
Descriptions are given of the following railgun systems: the fast-capacitor/inductor system, the slow-capacitor/inductance system, the homopolar generator/inductor system, the distributed energy store system, and the flux compressor powered railgun. Attention is also given to the inverse railgun flux compressor, where the piston is driven between the rails along which the inverted chevron armature slides. The development of the railgun since World War II is surveyed. It is noted that the most pressing need at present is for cheaper, simpler energy stores to couple to the railguns.
NASA Tech Briefs, November 2007
NASA Technical Reports Server (NTRS)
2007-01-01
Topics include: Wireless Measurement of Contact and Motion Between Contact Surfaces; Wireless Measurement of Rotation and Displacement Rate; Portable Microleak-Detection System; Free-to-Roll Testing of Airplane Models in Wind Tunnels; Cryogenic Shrouds for Testing Thermal-Insulation Panels; Optoelectronic System Measures Distances to Multiple Targets; Tachometers Derived From a Brushless DC Motor; Algorithm-Based Fault Tolerance for Numerical Subroutines; Computational Support for Technology- Investment Decisions; DSN Resource Scheduling; Distributed Operations Planning; Phase-Oriented Gear Systems; Freeze Tape Casting of Functionally Graded Porous Ceramics; Electrophoretic Deposition on Porous Non- Conductors; Two Devices for Removing Sludge From Bioreactor Wastewater; Portable Unit for Metabolic Analysis; Flash Diffusivity Technique Applied to Individual Fibers; System for Thermal Imaging of Hot Moving Objects; Large Solar-Rejection Filter; Improved Readout Scheme for SQUID-Based Thermometry; Error Rates and Channel Capacities in Multipulse PPM; Two Mathematical Models of Nonlinear Vibrations; Simpler Adaptive Selection of Golomb Power-of- Two Codes; VCO PLL Frequency Synthesizers for Spacecraft Transponders; Wide Tuning Capability for Spacecraft Transponders; Adaptive Deadband Synchronization for a Spacecraft Formation; Analysis of Performance of Stereoscopic-Vision Software; Estimating the Inertia Matrix of a Spacecraft; Spatial Coverage Planning for Exploration Robots; and Increasing the Life of a Xenon-Ion Spacecraft Thruster.
Modeling the surface tension of complex, reactive organic-inorganic mixtures
NASA Astrophysics Data System (ADS)
Schwier, A. N.; Viglione, G. A.; Li, Z.; McNeill, V. Faye
2013-11-01
Atmospheric aerosols can contain thousands of organic compounds which impact aerosol surface tension, affecting aerosol properties such as heterogeneous reactivity, ice nucleation, and cloud droplet formation. We present new experimental data for the surface tension of complex, reactive organic-inorganic aqueous mixtures mimicking tropospheric aerosols. Each solution contained 2-6 organic compounds, including methylglyoxal, glyoxal, formaldehyde, acetaldehyde, oxalic acid, succinic acid, leucine, alanine, glycine, and serine, with and without ammonium sulfate. We test two semi-empirical surface tension models and find that most reactive, complex, aqueous organic mixtures which do not contain salt are well described by a weighted Szyszkowski-Langmuir (S-L) model which was first presented by Henning et al. (2005). Two approaches for modeling the effects of salt were tested: (1) the Tuckermann approach (an extension of the Henning model with an additional explicit salt term), and (2) a new implicit method proposed here which employs experimental surface tension data obtained for each organic species in the presence of salt used with the Henning model. We recommend the use of method (2) for surface tension modeling of aerosol systems because the Henning model (using data obtained from organic-inorganic systems) and Tuckermann approach provide similar modeling results and goodness-of-fit (χ2) values, yet the Henning model is a simpler and more physical approach to modeling the effects of salt, requiring fewer empirically determined parameters.
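For reference, the single-solute Szyszkowski-Langmuir isotherm and one plausible mole-fraction-weighted mixture rule can be sketched as below. The exact weighting used by Henning et al. (2005) may differ, so treat the mixture rule, parameter names, and example values as assumptions for illustration only.

```python
import numpy as np

def szyszkowski(sigma_w, a, b, C):
    # Single-solute Szyszkowski-Langmuir isotherm:
    #   sigma = sigma_w - a * ln(1 + b * C)
    # sigma_w is the solvent (water) surface tension; a, b are fit parameters.
    return sigma_w - a * np.log(1.0 + b * C)

def weighted_sl_mixture(sigma_w, a, b, C):
    # Mole-fraction-weighted S-L form for a multicomponent organic mixture:
    # each compound i contributes its S-L depression, evaluated at the total
    # solute concentration, weighted by its solute mole fraction x_i.
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    C = np.asarray(C, dtype=float)
    x = C / C.sum()          # solute mole fractions
    Ctot = C.sum()
    return sigma_w - np.sum(x * a * np.log(1.0 + b * Ctot))
```

For a single solute the weighted form collapses to the plain Szyszkowski equation, which is a useful consistency check when fitting the per-compound parameters.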
Optimal harvesting for a predator-prey agent-based model using difference equations.
Oremland, Matthew; Laubenbacher, Reinhard
2015-03-01
In this paper, a method known as Pareto optimization is applied in the solution of a multi-objective optimization problem. The system in question is an agent-based model (ABM) wherein global dynamics emerge from local interactions. A system of discrete mathematical equations is formulated in order to capture the dynamics of the ABM; while the original model is built up analytically from the rules of the model, the paper shows how minor changes to the ABM rule set can have a substantial effect on model dynamics. To address this issue, we introduce parameters into the equation model that track such changes. The equation model is amenable to mathematical theory—we show how stability analysis can be performed and validated using ABM data. We then reduce the equation model to a simpler version and implement changes to allow controls from the ABM to be tested using the equations. Cohen's weighted κ is proposed as a measure of similarity between the equation model and the ABM, particularly with respect to the optimization problem. The reduced equation model is used to solve a multi-objective optimization problem via a technique known as Pareto optimization, a heuristic evolutionary algorithm. Results show that the equation model is a good fit for ABM data; Pareto optimization provides a suite of solutions to the multi-objective optimization problem that can be implemented directly in the ABM.
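Pareto optimization, as used here, rests on non-dominated filtering of candidate solutions. The sketch below shows that core operation on hypothetical objective values (harvest yield negated so both objectives are minimized); it is a generic illustration, not the paper's evolutionary algorithm.

```python
def pareto_front(points):
    # Non-dominated filtering for a multi-objective minimization problem:
    # q dominates p if q <= p in every objective and q < p in at least one.
    def dominated(p):
        return any(
            all(q[i] <= p[i] for i in range(len(p))) and
            any(q[i] < p[i] for i in range(len(p)))
            for q in points
        )
    return [p for p in points if not dominated(p)]

# hypothetical (negated yield, extinction risk) pairs for five harvest policies
solutions = [(-10, 0.30), (-8, 0.10), (-12, 0.50), (-8, 0.40), (-9, 0.10)]
front = pareto_front(solutions)
```

The surviving points form the trade-off curve the paper hands back to the decision maker: no remaining policy can be improved in one objective without worsening the other.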
Strategic directions for agent-based modeling: avoiding the YAAWN syndrome
O’Sullivan, David; Evans, Tom; Manson, Steven; Metcalf, Sara; Ligmann-Zielinska, Arika; Bone, Chris
2015-01-01
In this short communication, we examine how agent-based modeling has become common in land change science and is increasingly used to develop case studies for particular times and places. There is a danger that the research community is missing a prime opportunity to learn broader lessons from the use of agent-based modeling (ABM), or at the very least not sharing these lessons more widely. How do we find an appropriate balance between empirically rich, realistic models and simpler theoretically grounded models? What are appropriate and effective approaches to model evaluation in light of uncertainties not only in model parameters but also in model structure? How can we best explore hybrid model structures that enable us to better understand the dynamics of the systems under study, recognizing that no single approach is best suited to this task? Under what circumstances – in terms of model complexity, model evaluation, and model structure – can ABMs be used most effectively to lead to new insight for stakeholders? We explore these questions in the hope of helping the growing community of land change scientists using models in their research to move from ‘yet another model’ to doing better science with models. PMID:27158257
Photogrammetric Modeling and Image-Based Rendering for Rapid Virtual Environment Creation
2004-12-01
area and different methods have been proposed. Pertinent methods include: Camera Calibration, Structure from Motion, Stereo Correspondence, and Image-Based Rendering. 1.1.1 Camera Calibration: Determining the 3D structure of a model from multiple views becomes simpler if the intrinsic (or internal) ... can introduce significant nonlinearities into the image. We have found that camera calibration is a straightforward process which can simplify the
USDA-ARS?s Scientific Manuscript database
Adding plant diversity to forage systems may help growers deal with increasing fertilizer costs and a more variable climate. Maintaining highly diverse forage mixtures in forage-livestock production is difficult and may warrant a closer reexamination of simpler grass-legume mixtures to achieve simi...
Monitoring the Productivity of Coastal Systems Using pH: When Simpler is Better
The impact of nutrient inputs to the eutrophication of coastal ecosystems has been one of the great themes of coastal ecology. There have been countless studies devoted to quantifying how human sources of nutrients, in particular nitrogen (N), affect coastal water bodies. These s...
Analytical methods for the development of Reynolds stress closures in turbulence
NASA Technical Reports Server (NTRS)
Speziale, Charles G.
1990-01-01
Analytical methods for the development of Reynolds stress models in turbulence are reviewed in detail. Zero, one and two equation models are discussed along with second-order closures. A strong case is made for the superior predictive capabilities of second-order closure models in comparison to the simpler models. The central points are illustrated by examples from both homogeneous and inhomogeneous turbulence. A discussion of the author's views concerning the progress made in Reynolds stress modeling is also provided along with a brief history of the subject.
NASA Astrophysics Data System (ADS)
Yang, Huizhen; Ma, Liang; Wang, Bin
2018-01-01
In contrast to the conventional adaptive optics (AO) system, the wavefront sensorless (WFSless) AO system doesn't need a WFS to measure the wavefront aberrations. It is simpler than conventional AO in system architecture and can be applied under complex conditions. The model-based WFSless system has great potential for real-time correction applications because of its fast convergence. The control algorithm of the model-based WFSless system is based on an important theoretical result: the linear relation between the Mean-Square Gradient (MSG) magnitude of the wavefront aberration and the second moment of the masked intensity distribution in the focal plane (also called the Masked Detector Signal, MDS). The linear dependence between MSG and MDS for point-source imaging with a CCD sensor is discussed from theory and simulation in this paper. The theoretical relationship between MSG and MDS is given based on our previous work. To verify the linear relation for the point source, we set up an imaging model under atmospheric turbulence. Additionally, the value of the MDS will deviate from theory because of detector noise, and this deviation will in turn affect the correction. The theoretical results under noise are obtained through derivation, and the linear relation between MSG and MDS under noise is then examined through the imaging model. Results show that the linear relation between MSG and MDS is maintained well under noise, which provides theoretical support for applications of the model-based WFSless system.
The sensitivity of ecosystem service models to choices of input data and spatial resolution
Bagstad, Kenneth J.; Cohen, Erika; Ancona, Zachary H.; McNulty, Steven; Sun, Ge
2018-01-01
Although ecosystem service (ES) modeling has progressed rapidly in the last 10–15 years, comparative studies on data and model selection effects have become more common only recently. Such studies have drawn mixed conclusions about whether different data and model choices yield divergent results. In this study, we compared the results of different models to address these questions at national, provincial, and subwatershed scales in Rwanda. We compared results for carbon, water, and sediment as modeled using InVEST and WaSSI using (1) land cover data at 30 and 300 m resolution and (2) three different input land cover datasets. WaSSI and simpler InVEST models (carbon storage and annual water yield) were relatively insensitive to the choice of spatial resolution, but more complex InVEST models (seasonal water yield and sediment regulation) produced large differences when applied at differing resolution. Six out of nine ES metrics (InVEST annual and seasonal water yield and WaSSI) gave similar predictions for at least two different input land cover datasets. Despite differences in mean values when using different data sources and resolution, we found significant and highly correlated results when using Spearman's rank correlation, indicating consistent spatial patterns of high and low values. Our results confirm and extend conclusions of past studies, showing that in certain cases (e.g., simpler models and national-scale analyses), results can be robust to data and modeling choices. For more complex models, those with different output metrics, and subnational to site-based analyses in heterogeneous environments, data and model choices may strongly influence study findings.
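The spatial-pattern check described above rests on Spearman's rank correlation: two model outputs can differ in mean value yet agree on which locations score high and low. A minimal sketch (hypothetical subwatershed scores, no tie handling):

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    rx = np.argsort(np.argsort(x))  # ranks (no tie handling; illustration only)
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

# Two hypothetical models score the same subwatersheds on different scales:
# model B is a monotone transform of model A, so means differ but ranks agree.
model_a = np.array([0.2, 1.5, 0.9, 3.1, 2.4, 0.1])
model_b = 100.0 * model_a**2 + 7.0
print(spearman(model_a, model_b))  # -> 1.0 (identical spatial ranking)
```

A Spearman coefficient near 1 with very different means is exactly the situation the study reports: consistent spatial patterns despite input-driven differences in absolute values.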
On the heat capacity of elements in WMD regime
NASA Astrophysics Data System (ADS)
Hamel, Sebatien
2014-03-01
Once thought to get simpler with increasing pressure, elemental systems have been discovered to exhibit complex structures and multiple phases at high pressure. For carbon, QMD/PIMC simulations have been performed and the results are guiding alternative modelling methodologies for constructing a carbon equation-of-state covering the warm dense matter regime. One of the main results of our new QMD/PIMC carbon equation of state is that the decay of the ion-thermal specific heat with temperature is much faster than previously expected. An important question is whether this behavior is found only in carbon or in other elements as well. In this presentation, based on QMD calculations for several elements, we explore trends in the transition from the condensed matter to the warm dense matter regime.
Fast calculation of the line-spread-function by transversal directions decoupling
NASA Astrophysics Data System (ADS)
Parravicini, Jacopo; Tartara, Luca; Hasani, Elton; Tomaselli, Alessandra
2016-07-01
We propose a simplified method to calculate the optical spread function of a paradigmatic system constituted by a pupil-lens with a line-shaped illumination (‘line-spread-function’). Our approach is based on decoupling the two transversal directions of the beam and treating the propagation by means of the Fourier optics formalism. This requires simpler calculations with respect to the more usual Bessel-function-based method. The model is discussed and compared with standard calculation methods by carrying out computer simulations. The proposed approach is found to be much faster than the Bessel-function-based one (CPU time ≲ 5% of the standard method), while the results of the two methods present a very good mutual agreement.
NASA Astrophysics Data System (ADS)
Ji, Yunguang; Xu, Yangyang; Li, Hongtao; Oklejas, Michael; Xue, Shuqi
2018-01-01
A new type of hydraulic turbocharger energy recovery system was designed and applied, for the first time in China, in the propylene-carbonate decarbonisation process of a 100k-ton ammonia synthesis system. Compared with existing energy recovery devices, the hydraulic turbocharger energy recovery system runs more smoothly and has a lower failure rate, a longer service life, and greater overall benefits, owing to its unique structure, simpler adjustment process, and better adaptability to fluid fluctuations.
Entropy-Based Financial Asset Pricing
Ormos, Mihály; Zibriczky, Dávid
2014-01-01
We investigate entropy as a financial risk measure. Entropy explains the equity premium of securities and portfolios in a simpler way and, at the same time, with higher explanatory power than the beta parameter of the capital asset pricing model. For asset pricing we define the continuous entropy as an alternative measure of risk. Our results show that entropy decreases as a function of the number of securities in a portfolio, in a similar way to the standard deviation, and that efficient portfolios are situated on a hyperbola in the expected return-entropy plane. For the empirical investigation we use daily returns of 150 randomly selected securities over a period of 27 years. Our regression results show that entropy has higher explanatory power for the expected return than the capital asset pricing model beta. Furthermore, we show the time-varying behavior of the beta along with entropy. PMID:25545668
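The diversification effect the abstract describes can be sketched with a histogram estimate of differential entropy: as more securities enter an equally weighted portfolio, the return distribution narrows and its entropy falls, mirroring the standard deviation. A toy illustration with purely idiosyncratic synthetic returns (not the study's data or estimator):

```python
import numpy as np

rng = np.random.default_rng(42)

def diff_entropy(samples, bins=60):
    """Histogram estimate of differential entropy H = -integral f ln f (nats)."""
    density, edges = np.histogram(samples, bins=bins, density=True)
    widths = np.diff(edges)
    mask = density > 0
    return -np.sum(density[mask] * widths[mask] * np.log(density[mask]))

# Synthetic daily returns for 50 securities over ~27 years of trading days
T, n_sec = 6750, 50
returns = rng.normal(0.0004, 0.02, size=(T, n_sec))  # idiosyncratic only (toy)

h_single = diff_entropy(returns[:, 0])             # one security
h_portfolio = diff_entropy(returns.mean(axis=1))   # equally weighted portfolio

print(f"H(single)={h_single:.3f}  H(portfolio of {n_sec})={h_portfolio:.3f}")
```

For a Gaussian, H = 0.5 ln(2πeσ²), so averaging n independent securities shrinks σ by √n and lowers the entropy by ln(√n) nats, which the estimate should reproduce approximately.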
An analytical approach to customer requirement information processing
NASA Astrophysics Data System (ADS)
Zhou, Zude; Xiao, Zheng; Liu, Quan; Ai, Qingsong
2013-11-01
'Customer requirements' (CRs) management is a key component of customer relationship management (CRM). By processing customer-focused information, CRs management plays an important role in enterprise systems (ESs). Although the two main CRs analysis methods, quality function deployment (QFD) and the Kano model, have been applied in many fields by many enterprises over the past several decades, limitations such as complex processes and operations make them unsuitable for online businesses among small- and medium-sized enterprises (SMEs). Currently, most SMEs do not have the resources to implement QFD or the Kano model. In this article, we propose a method named customer requirement information (CRI), which provides a simpler and easier way for SMEs to run CRs analysis. The proposed method analyses CRs from the perspective of information and applies mathematical methods to the analysis process. A detailed description of CRI's acquisition, classification and processing is provided.
OAI and NASA's Scientific and Technical Information
NASA Technical Reports Server (NTRS)
Nelson, Michael L.; Rocker, JoAnne; Harrison, Terry L.
2002-01-01
The Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) is an evolving protocol and philosophy regarding interoperability for digital libraries (DLs). Previously, "distributed searching" models were popular for DL interoperability. However, experience has shown distributed searching systems across large numbers of DLs to be difficult to maintain in an Internet environment. The OAI-PMH is a move away from distributed searching, focusing on the arguably simpler model of "metadata harvesting". We detail NASA's involvement in defining and testing the OAI-PMH and experience to date with adapting existing NASA distributed searching DLs (such as the NASA Technical Report Server) to use the OAI-PMH and metadata harvesting. We discuss some of the entirely new DL projects that the OAI-PMH has made possible, such as the Technical Report Interchange project. We explain the strategic importance of the OAI-PMH to the mission of NASA's Scientific and Technical Information Program.
Macrostructure from Microstructure: Generating Whole Systems from Ego Networks
Smith, Jeffrey A.
2014-01-01
This paper presents a new simulation method to make global network inference from sampled data. The proposed simulation method takes sampled ego network data and uses Exponential Random Graph Models (ERGM) to reconstruct the features of the true, unknown network. After describing the method, the paper presents two validity checks of the approach: the first uses the 20 largest Add Health networks while the second uses the Sociology Coauthorship network in the 1990s. For each test, I take random ego network samples from the known networks and use my method to make global network inference. I find that my method successfully reproduces the properties of the networks, such as distance and main component size. The results also suggest that simpler, baseline models provide considerably worse estimates for most network properties. I end the paper by discussing the bounds/limitations of ego network sampling. I also discuss possible extensions to the proposed approach. PMID:25339783
Air Traffic Control Improvement Using Prioritized CSMA
NASA Technical Reports Server (NTRS)
Robinson, Daryl C.
2001-01-01
Simulations, using version 7 of the industry-standard network simulation software OPNET, are presented of two applications of the Aeronautical Telecommunications Network (ATN), Controller Pilot Data Link Communications (CPDLC) and Automatic Dependent Surveillance-Broadcast mode (ADS-B), over VHF Data Link mode 2 (VDL-2). Communication is modeled for air traffic between just three cities. All aircraft are assumed to have the same equipage. The simulation involves Air Traffic Control (ATC) ground stations and 105 aircraft taking off, flying realistic free-flight trajectories, and landing in a 24-hr period. All communication is modeled as unreliable. Collisionless, prioritized carrier sense multiple access (CSMA) is successfully tested. The statistics presented include latency, queue length, and packet loss. This research suggests that a communications system simpler than the currently accepted standard may not only suffice but also surpass the standard's performance at a lower deployment cost.
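The effect of prioritized, collision-free channel access can be illustrated with a slotted toy simulation (an idealization, not the OPNET/VDL-2 model): whenever the channel is free, the highest-priority backlogged class transmits, so high-priority traffic sees lower latency. All arrival rates and class labels below are hypothetical:

```python
import random
from collections import deque

random.seed(1)
SLOTS = 20000
ARRIVAL = 0.18  # per-class packet arrival probability per slot (assumed)

# Two priority classes: 0 = high (e.g. safety-critical ATC), 1 = low
queues = [deque(), deque()]
delays = [[], []]

for t in range(SLOTS):
    # New arrivals join their class queue, stamped with the arrival slot
    for cls in (0, 1):
        if random.random() < ARRIVAL:
            queues[cls].append(t)
    # Collisionless prioritized access: the highest-priority backlogged
    # class wins the channel this slot; lower priority defers.
    for cls in (0, 1):
        if queues[cls]:
            delays[cls].append(t - queues[cls].popleft())
            break

mean_hi = sum(delays[0]) / len(delays[0])
mean_lo = sum(delays[1]) / len(delays[1])
print(f"mean delay  high={mean_hi:.2f} slots  low={mean_lo:.2f} slots")
```

With total offered load well under one packet per slot the system is stable, and the high-priority class should show a markedly smaller mean queueing delay than the low-priority class.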
NASA Astrophysics Data System (ADS)
Selker, Ted
1983-05-01
Lens focusing using a hardware model of a retina (a Reticon RL256 light-sensitive array) with a low-cost processor (an 8085 with 512 bytes of ROM and 512 bytes of RAM) was built. This system was developed and tested on a variety of visual stimuli to demonstrate that: (a) an algorithm which moves a lens to maximize the sum of the differences in light level on adjacent light sensors converges to best focus in all but contrived situations; this is a simpler algorithm than any previously suggested. (b) It is feasible to use unmodified video sensor arrays with inexpensive processors to aid video camera use; in the future, software could be developed to extend the processor's usefulness, possibly to track an actor by panning and zooming to give a camera operator increased ease of framing. (c) Lateral inhibition is an adequate basis for determining best focus, which supports a simple, anatomically motivated model of how our brain focuses our eyes.
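The focus metric in (a) can be sketched in a few lines: defocus is modeled here as Gaussian blur whose width grows with distance from a hypothetical best lens position, and hill climbing on the sum of adjacent-sensor differences recovers that position. The scene, blur model, and lens track are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# A sharp 1-D "scene"; defocus is modeled as Gaussian blur whose width
# grows with distance from the (hypothetical) best lens position.
scene = rng.random(300)
BEST = 25  # true best-focus position on a 0..50 lens track

def sensor_image(pos):
    sigma = 0.25 * abs(pos - BEST) + 0.05
    t = np.arange(-20, 21)
    kernel = np.exp(-t**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    return np.convolve(scene, kernel, mode="same")

def focus_metric(pos):
    # Sum of the differences in light level on adjacent sensors
    return np.sum(np.abs(np.diff(sensor_image(pos))))

# Hill climbing: step in the direction that increases the metric
pos = 0
while True:
    here = focus_metric(pos)
    if pos < 50 and focus_metric(pos + 1) > here:
        pos += 1
    elif pos > 0 and focus_metric(pos - 1) > here:
        pos -= 1
    else:
        break
print(f"converged to lens position {pos}")
```

Because Gaussian blurs compose (a wider blur is a narrower blur plus extra smoothing), the metric decreases monotonically away from best focus, so the climb lands at the sharpest position without any scene-specific tuning.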
Simple, Flexible, Trigonometric Taper Equations
Charles E. Thomas; Bernard R. Parresol
1991-01-01
There have been numerous approaches to modeling stem form in recent decades. The majority have concentrated on the simpler coniferous bole form and have become increasingly complex mathematical expressions. Use of trigonometric equations provides a simple expression of taper that is flexible enough to fit both coniferous and hardwood bole forms. As an illustration, we...
Who Needs Lewis Structures to Get VSEPR Geometries?
ERIC Educational Resources Information Center
Lindmark, Alan F.
2010-01-01
Teaching the VSEPR (valence shell electron-pair repulsion) model can be a tedious process. Traditionally, Lewis structures are drawn and the number of "electron clouds" (groups) around the central atom are counted and related to the standard VSEPR table of possible geometries. A simpler method to deduce the VSEPR structure without first drawing…
2014-01-01
... and 50 kT, to within 30% of the first-principles code (MCNP) for complicated cities and 10% for simpler cities. Subject terms: radiation transport; use of MCNP for dose calculations; MCNP open-field absorbed dose calculations; the MCNP urban model.
78 FR 16808 - Connect America Fund; High-Cost Universal Service Support
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-19
... to use one regression to generate a single cap on total loop costs for each study area. A single cap.... * * * A preferable, and simpler, approach would be to develop one conditional quantile model for aggregate.... Total universal service support for such carriers was approaching $2 billion annually--more than 40...
Implicit Learning of Recursive Context-Free Grammars
Rohrmeier, Martin; Fu, Qiufang; Dienes, Zoltan
2012-01-01
Context-free grammars are fundamental for the description of linguistic syntax. However, most artificial grammar learning experiments have explored learning of simpler finite-state grammars, while studies exploring context-free grammars have not assessed awareness and implicitness. This paper explores the implicit learning of context-free grammars employing features of hierarchical organization, recursive embedding and long-distance dependencies. The grammars also featured the distinction between left- and right-branching structures, as well as between centre- and tail-embedding, both distinctions found in natural languages. People acquired unconscious knowledge of relations between grammatical classes even for dependencies over long distances, in ways that went beyond learning simpler relations (e.g. n-grams) between individual words. The structural distinctions drawn from linguistics also proved important as performance was greater for tail-embedding than centre-embedding structures. The results suggest the plausibility of implicit learning of complex context-free structures, which model some features of natural languages. They support the relevance of artificial grammar learning for probing mechanisms of language learning and challenge existing theories and computational models of implicit learning. PMID:23094021
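The structural distinction the abstract draws can be made concrete: centre-embedding produces the AⁿBⁿ pattern, which no finite-state grammar generates, whereas tail-embedding yields (AB)ⁿ, which a finite-state grammar handles. A minimal generator (category labels "a"/"b" are arbitrary placeholders, not the study's stimuli):

```python
def centre_embed(n, pair=("a", "b")):
    """A^n B^n: centre-embedding, the classic non-finite-state pattern."""
    if n == 0:
        return ""
    return pair[0] + centre_embed(n - 1, pair) + pair[1]

def tail_embed(n, pair=("a", "b")):
    """(A B)^n: tail-embedding/right-branching, expressible finite-state."""
    return pair[0] + pair[1] + (tail_embed(n - 1, pair) if n > 1 else "")

print(centre_embed(3))  # -> aaabbb
print(tail_embed(3))    # -> ababab
```

In centre-embedded strings each early "a" must be matched by a late "b" across the whole intervening span, which is exactly the long-distance dependency that makes such stimuli a stronger test of implicit learning than n-gram statistics.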
Managing and capturing the physics of robotic systems
NASA Astrophysics Data System (ADS)
Werfel, Justin
Algorithmic and other theoretical analyses of robotic systems often use a discretized or otherwise idealized framework, while the real world is continuous-valued and noisy. This disconnect can make theoretical work sometimes problematic to apply successfully to real-world systems. One approach to bridging the separation can be to design hardware to take advantage of simple physical effects mechanically, in order to guide elements into a desired set of discrete attracting states. As a result, the system behavior can effectively approximate a discretized formalism, so that proofs based on an idealization remain directly relevant, while control can be made simpler. It is important to note, conversely, that such an approach does not make a physical instantiation unnecessary nor a purely theoretical treatment sufficient. Experiments with hardware in practice always reveal physical effects not originally accounted for in simulation or analytic modeling, which lead to unanticipated results and require nontrivial modifications to control algorithms in order to achieve desired outcomes. I will discuss these points in the context of swarm robotic systems recently developed at the Self-Organizing Systems Research Group at Harvard.
Using GPS, GIS, and Accelerometer Data to Predict Transportation Modes.
Brondeel, Ruben; Pannier, Bruno; Chaix, Basile
2015-12-01
Active transportation is a substantial source of physical activity, which has a positive influence on many health outcomes. A survey of transportation modes for each trip is challenging, time-consuming, and requires substantial financial investments. This study proposes a passive collection method and the prediction of modes at the trip level using random forests. The RECORD GPS study collected real-life trip data from 236 participants over 7 d, including the transportation mode, global positioning system, geographical information systems, and accelerometer data. A prediction model of transportation modes was constructed using the random forests method. Finally, we investigated the performance of models on the basis of a limited number of participants/trips to predict transportation modes for a large number of trips. The full model had a correct prediction rate of 90%. A simpler model of global positioning system explanatory variables combined with geographical information systems variables performed nearly as well. Relatively good predictions could be made using a model based on the 991 trips of the first 30 participants. This study uses real-life data from a large sample set to test a method for predicting transportation modes at the trip level, thereby providing a useful complement to time unit-level prediction methods. By enabling predictions on the basis of a limited number of observations, this method may decrease the workload for participants/researchers and provide relevant trip-level data to investigate relations between transportation and health.
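The prediction pipeline can be sketched with scikit-learn's random forests on synthetic trip-level features; the feature set, mode labels, and speed distributions below are hypothetical stand-ins for the RECORD GPS study's variables:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_trips(n, speed_mean, speed_sd, label):
    """Synthetic trip features: mean speed, max speed, heading-change rate."""
    mean_speed = rng.normal(speed_mean, speed_sd, n).clip(min=0.5)
    max_speed = mean_speed * rng.uniform(1.2, 2.5, n)
    heading = rng.exponential(1.0 / (1 + speed_mean), n)
    return np.column_stack([mean_speed, max_speed, heading]), np.full(n, label)

# 0=walk, 1=bike, 2=car (speeds in km/h; toy distributions)
parts = [make_trips(400, 5, 1.2, 0), make_trips(400, 15, 3.0, 1),
         make_trips(400, 40, 10.0, 2)]
X = np.vstack([p[0] for p in parts])
y = np.concatenate([p[1] for p in parts])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"trip-level mode prediction accuracy: {acc:.2f}")
```

With well-separated speed profiles the forest should classify most held-out trips correctly, which is the same trip-level framing the study uses, just with fabricated data.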
Actuators of 3-element unimorph deformable mirror
NASA Astrophysics Data System (ADS)
Fu, Tianyang; Ning, Yu; Du, Shaojun
2016-10-01
Wavefront aberrations arise in optical systems because of atmospheric disturbance, device displacement, and a variety of thermal effects; they corrupt the information carried by the transmitted beam and reduce its energy. Deformable mirrors (DMs) are designed to correct these wavefront aberrations. Bimorph DMs have become increasingly popular in adaptive optics (AO) systems thanks to their simple structure, low cost, and flexible design compared with traditional discrete-actuator DMs. Defocus accounts for a large proportion of all wavefront aberrations, with a simpler surface shape and larger amplitude than other terms, so correcting it effectively is very useful for beam control and aberration adjustment in AO systems. In this study, we aim to correct the 3rd and 10th Zernike modes. We analyze the surface distributions of these defocus aberrations; design a 3-element-actuator unimorph DM model and study its structure and deformation principle theoretically; build finite-element models with different electrode configurations and ring diameters; use the COMSOL finite-element software to analyze and compare the effects of electrode configuration and mounting mode on the DM's deformation capacity; and compare how well the DM models fit the 3rd and 10th Zernike modes. We select the inhomogeneous electrode distribution model, which gives better results, and obtain the influence function of each electrode and the voltage-PV relationship of the model. This unimorph DM is suitable for AO systems in which defocus is the dominant aberration.
Simulation model of a gear synchronisation unit for application in a real-time HiL environment
NASA Astrophysics Data System (ADS)
Kirchner, Markus; Eberhard, Peter
2017-05-01
Gear shifting simulations using the multibody system approach and the finite-element method are standard in the development of transmissions. However, the corresponding models are typically large due to the complex geometries and numerous contacts, which causes long calculation times. The present work sets itself apart from these detailed shifting simulations by proposing a much simpler but powerful synchronisation model which can be computed in real time while still being more realistic than a pure rigid multibody model. The model is therefore even used as part of a Hardware-in-the-Loop (HiL) test rig. The proposed real-time capable synchronisation model combines the rigid multibody system approach with a multiscale simulation approach. The multibody system approach is suitable for describing the large motions. The multiscale simulation approach also uses the finite-element method, which is suitable for analysing the contact processes. An efficient contact search for the claws of a car transmission synchronisation unit is described in detail, which shortens the required calculation time of the model considerably. To further shorten the calculation time, the use of a complex pre-synchronisation model with a nonlinear contour is presented. The model has to provide realistic results at the time-step size of the HiL test rig. To meet this specification, a particularly adapted multirate method for the synchronisation model is shown. Measured test-rig results for the real-time capable synchronisation model are checked for plausibility. The simulation model is then also used in the HiL test rig for a transmission control unit.
Evolution and Extinction Dynamics in Rugged Fitness Landscapes
NASA Astrophysics Data System (ADS)
Sibani, Paolo; Brandt, Michael; Alstrøm, Preben
After an introductory section summarizing the paleontological data and some of their theoretical descriptions, we describe the "reset" model and its (in part analytically soluble) mean field version, which have been briefly introduced in Letters.1,2 Macroevolution is considered as a problem of stochastic dynamics in a system with many competing agents. Evolutionary events (speciations and extinctions) are triggered by fitness records found by random exploration of the agents' fitness landscapes. As a consequence, the average fitness in the system increases logarithmically with time, while the rate of extinction steadily decreases. This non-stationary dynamics is studied by numerical simulations and, in a simpler mean field version, analytically. We also consider the effect of externally added "mass" extinctions. The predictions for various quantities of paleontological interest (life-time distribution, distribution of event sizes and behavior of the rate of extinction) are robust and in good agreement with available data.
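Two signatures the abstract cites, a logarithmically growing best fitness and a steadily decreasing rate of record-triggered events, follow from the textbook statistics of records in i.i.d. draws. A minimal sketch of that record process (not the reset model itself):

```python
import random
import math

random.seed(3)

def count_records(n):
    """Number of running-maximum records among n i.i.d. uniform draws."""
    best, records = -1.0, 0
    for _ in range(n):
        x = random.random()
        if x > best:
            best, records = x, records + 1
    return records

n, runs = 10_000, 300
mean_records = sum(count_records(n) for _ in range(runs)) / runs
# Expected number of records is the harmonic number H_n ~ ln n + 0.5772
print(f"mean records: {mean_records:.2f}  (H_n ~ {math.log(n) + 0.5772:.2f})")
```

The k-th draw is a record with probability 1/k, so the expected record count up to step n is the harmonic number H_n, i.e. roughly ln n, and the instantaneous record (extinction) rate falls off like 1/t.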
Multistability in Chua's circuit with two stable node-foci
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bao, B. C.; Wang, N.; Xu, Q.
2016-04-15
Using only a one-stage op-amp-based negative impedance converter realization, a simplified Chua's diode with positive outer-segment slope is introduced, based on which an improved Chua's circuit realization with a simpler circuit structure is designed. The improved Chua's circuit has an identical mathematical model but a completely different nonlinearity from the classical Chua's circuit, from which multiple attractors, including coexisting point attractors, a limit cycle, a double-scroll chaotic attractor, and coexisting chaotic spiral attractors, are numerically simulated and experimentally captured. Furthermore, with dimensionless Chua's equations, the dynamical properties of the Chua's system are studied, including equilibrium and stability, phase portraits, bifurcation diagrams, the Lyapunov exponent spectrum, and the attraction basin. The results indicate that the system has two symmetric stable nonzero node-foci over global adjustable parameter regions and exhibits the unusual and striking dynamical behavior of multiple attractors with multistability.
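As a baseline for the dynamics discussed above, the dimensionless Chua equations can be integrated directly. The sketch below uses the classical piecewise-linear diode with standard double-scroll parameter values, which are assumptions here; the paper's improved circuit uses a different nonlinearity:

```python
import numpy as np

# Dimensionless Chua equations with the classical piecewise-linear diode
alpha, beta = 9.0, 100.0 / 7.0   # standard double-scroll parameters (assumed)
m0, m1 = -8.0 / 7.0, -5.0 / 7.0  # inner and outer diode slopes

def f(x):  # piecewise-linear Chua diode characteristic
    return m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))

def deriv(s):
    x, y, z = s
    return np.array([alpha * (y - x - f(x)), x - y + z, -beta * y])

def rk4_step(s, dt):
    k1 = deriv(s)
    k2 = deriv(s + 0.5 * dt * k1)
    k3 = deriv(s + 0.5 * dt * k2)
    k4 = deriv(s + dt * k3)
    return s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

s = np.array([0.1, 0.0, 0.0])
traj = np.empty((40_000, 3))
for i in range(traj.shape[0]):
    s = rk4_step(s, 0.005)
    traj[i] = s

print("x range:", traj[:, 0].min(), traj[:, 0].max())
```

Multistability would be probed by sweeping initial conditions and parameters and recording which attractor each trajectory settles on; this sketch only verifies a bounded, non-trivial trajectory for one initial condition.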
Toric Calabi-Yau threefolds as quantum integrable systems. R-matrix and RTT relations
NASA Astrophysics Data System (ADS)
Awata, Hidetoshi; Kanno, Hiroaki; Mironov, Andrei; Morozov, Alexei; Morozov, Andrey; Ohkubo, Yusuke; Zenkevich, Yegor
2016-10-01
The R-matrix is explicitly constructed for the simplest representations of the Ding-Iohara-Miki algebra. The calculation is straightforward and significantly simpler than the one through the universal R-matrix used for a similar calculation in the Yangian case by A. Smirnov, although less general. We investigate the interplay between the R-matrix structure and the structure of DIM algebra intertwiners, i.e. of refined topological vertices, and show that the R-matrix is diagonalized by the action of the spectral duality belonging to the SL(2, ℤ) group of DIM algebra automorphisms. We also construct the T-operators satisfying the RTT relations with the R-matrix from refined amplitudes on the resolved conifold. We thus show that topological string theories on toric Calabi-Yau threefolds can be naturally interpreted as lattice integrable models. Integrals of motion for these systems are related to the q-deformation of the reflection matrices of the Liouville/Toda theories.
A rodent model for the study of invariant visual object recognition
Zoccolan, Davide; Oertelt, Nadja; DiCarlo, James J.; Cox, David D.
2009-01-01
The human visual system is able to recognize objects despite tremendous variation in their appearance on the retina resulting from variation in view, size, lighting, etc. This ability—known as “invariant” object recognition—is central to visual perception, yet its computational underpinnings are poorly understood. Traditionally, nonhuman primates have been the animal model-of-choice for investigating the neuronal substrates of invariant recognition, because their visual systems closely mirror our own. Meanwhile, simpler and more accessible animal models such as rodents have been largely overlooked as possible models of higher-level visual functions, because their brains are often assumed to lack advanced visual processing machinery. As a result, little is known about rodents' ability to process complex visual stimuli in the face of real-world image variation. In the present work, we show that rats possess more advanced visual abilities than previously appreciated. Specifically, we trained pigmented rats to perform a visual task that required them to recognize objects despite substantial variation in their appearance, due to changes in size, view, and lighting. Critically, rats were able to spontaneously generalize to previously unseen transformations of learned objects. These results provide the first systematic evidence for invariant object recognition in rats and argue for an increased focus on rodents as models for studying high-level visual processing. PMID:19429704
Group 3 Unmanned Aircraft Systems Maintenance Challenges Within The Naval Aviation Enterprise
2017-12-01
cross winds. We again went through the mishap processes and reviewed training and maintenance records. A couple months later, there was a third crash...gas turbine engines powering aircraft with humans on board (DON, 2017). Group 3 unmanned aircraft utilize a sealed fuel system. The tank is filled...aircraft do not use gas turbine engines. They use either rotary Wankel or piston-driven engines with much simpler fuel delivery systems such as carburetors
Piezoelectric-hydraulic pump based band brake actuation system for automotive transmission control
NASA Astrophysics Data System (ADS)
Kim, Gi-Woo; Wang, K. W.
2007-04-01
The actuation system of friction elements (such as band brakes) is essential for high quality operations in modern automotive automatic transmissions (in short, ATs). The current band brake actuation system consists of several hydraulic components, including the oil pump, the regulating valve and the control valves. In general, it has been recognized that the current AT band brake actuation system has many limitations. For example, the oil pump and valve body are relatively heavy and complex. Also, the oil pumps induce inherently large drag torque, which affects fuel economy. This research aims to overcome these problems of the current system by exploring the utilization of a hybrid-type piezo-hydraulic pump device for AT band brake control. This new actuating system integrates a piezo-hydraulic pump at the input of the band brake. Compared with the current systems, this new actuator features a much simpler structure, smaller size, and lower weight. This paper describes the development, design and fabrication of the new stand-alone prototype actuator for AT band brake control. An analytical model is developed and validated using experimental data. Performance tests on the hardware and system simulations utilizing the validated model are performed to characterize the new prototype actuator. It is predicted that with increasing accumulator pressure and driving frequency, the proposed prototype actuating system will satisfy the band brake requirement for AT shift control.
Wu, Fei; Sioshansi, Ramteen
2017-05-25
Electric vehicles (EVs) hold promise to improve the energy efficiency and environmental impacts of transportation. However, widespread EV use can impose significant stress on electricity-distribution systems due to their added charging loads. This paper proposes a centralized EV charging-control model, which schedules the charging of EVs that have flexibility. This flexibility stems from EVs that are parked at the charging station for a longer duration of time than is needed to fully recharge the battery. The model is formulated as a two-stage stochastic optimization problem. The model captures the use of distributed energy resources and uncertainties around EV arrival times and charging demands upon arrival, non-EV loads on the distribution system, energy prices, and availability of energy from the distributed energy resources. We use a Monte Carlo-based sample-average approximation technique and an L-shaped method to solve the resulting optimization problem efficiently. We also apply a sequential sampling technique to dynamically determine the optimal size of the randomly sampled scenario tree to give a solution with a desired quality at minimal computational cost. Here, we demonstrate the use of our model on a Central-Ohio-based case study. We show the benefits of the model in reducing charging costs, negative impacts on the distribution system, and unserved EV-charging demand compared to simpler heuristics. Lastly, we also conduct sensitivity analyses, to show how the model performs and the resulting costs and load profiles when the design of the station or EV-usage parameters are changed.
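The flexibility the model exploits can be shown in a deterministic toy comparison (the paper's formulation is a two-stage stochastic program, which this sketch does not reproduce): an EV parked longer than its charging time can shift energy to the cheapest hours instead of charging on arrival. Prices, window, and ratings below are all hypothetical:

```python
# Hypothetical hourly prices over an 8-hour parking window ($/kWh)
prices = [0.30, 0.28, 0.25, 0.12, 0.10, 0.11, 0.15, 0.22]
arrive, depart = 0, 8    # parked for all 8 hours
energy_needed = 20.0     # kWh
max_rate = 7.0           # kW, so roughly 3 hours of charging are required

def cost(schedule):
    return sum(p * e for p, e in zip(prices, schedule))

# Immediate ("plug-and-charge") heuristic: full rate from arrival
immediate, remaining = [], energy_needed
for h in range(arrive, depart):
    e = min(max_rate, remaining)
    immediate.append(e)
    remaining -= e

# Flexible schedule: fill the cheapest hours in the parking window first
flexible = [0.0] * (depart - arrive)
remaining = energy_needed
for h in sorted(range(arrive, depart), key=lambda h: prices[h]):
    e = min(max_rate, remaining)
    flexible[h] = e
    remaining -= e

print(f"immediate ${cost(immediate):.2f}  scheduled ${cost(flexible):.2f}")
```

Both schedules deliver the same energy, but the cheapest-hours-first schedule pays the off-peak price, which is the single-vehicle intuition behind the paper's station-level optimization.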
Rotationally Actuated Prosthetic Hand
NASA Technical Reports Server (NTRS)
Norton, William E.; Belcher, Jewell G., Jr.; Carden, James R.; Vest, Thomas W.
1991-01-01
Prosthetic hand attached to end of remaining part of forearm and to upper arm just above elbow. Pincerlike fingers pushed apart to a degree depending on rotation of forearm. Simpler in design, simpler to operate, weighs less, and takes up less space.
Programmable logic construction kits for hyper-real-time neuronal modeling.
Guerrero-Rivera, Ruben; Morrison, Abigail; Diesmann, Markus; Pearce, Tim C
2006-11-01
Programmable logic designs are presented that achieve exact integration of leaky integrate-and-fire soma and dynamical synapse neuronal models and incorporate spike-time dependent plasticity and axonal delays. Highly accurate numerical performance has been achieved by modifying simpler forward-Euler-based circuitry requiring minimal circuit allocation, which, as we show, behaves equivalently to exact integration. These designs have been implemented and simulated at the behavioral and physical device levels, demonstrating close agreement with both numerical and analytical results. By exploiting finely grained parallelism and single clock cycle numerical iteration, these designs achieve simulation speeds at least five orders of magnitude faster than the nervous system, termed here hyper-real-time operation, when deployed on commercially available field-programmable gate array (FPGA) devices. Taken together, our designs form a programmable logic construction kit of commonly used neuronal model elements that supports the building of large and complex architectures of spiking neuron networks for real-time neuromorphic implementation, neurophysiological interfacing, or efficient parameter space investigations.
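The "exact integration" these designs implement comes down to one observation: with the input held constant over a timestep, the leaky integrate-and-fire membrane equation has a closed-form exponential update, so a single multiply-add per step reproduces the analytic solution, while forward Euler only approximates it. A sketch with hypothetical parameter values (subthreshold dynamics only, no spike/reset):

```python
import math

tau, dt = 20.0, 1.0   # membrane time constant and step (ms; assumed values)
I = 1.5               # constant effective drive, in voltage units (assumed)

# Exact (exponential) update: V(t+dt) = I + (V(t) - I) * exp(-dt/tau)
decay = math.exp(-dt / tau)

def exact_step(v):
    return I + (v - I) * decay

def euler_step(v):
    # Forward Euler on dV/dt = (-V + I) / tau
    return v + dt * (-v + I) / tau

v_exact = v_euler = 0.0
for _ in range(100):
    v_exact = exact_step(v_exact)
    v_euler = euler_step(v_euler)

v_true = I * (1.0 - math.exp(-100 * dt / tau))  # analytic solution, V(0)=0
print(f"exact={v_exact:.12f}  euler={v_euler:.12f}  analytic={v_true:.12f}")
```

The exact update matches the analytic trajectory to rounding error at any step size, whereas Euler's error grows with dt; precomputing `decay` once is why the exact scheme costs no more circuitry than the Euler scheme it replaces.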
NASA Astrophysics Data System (ADS)
DiPirro, M.; Fantano, L.; Canavan, E.; Leisawitz, D.; Carter, R.; Florez, A.; Amatucci, E.
2017-09-01
The Origins Space Telescope (OST) concept is one of four NASA Science Mission Directorate, Astrophysics Division, observatory concepts being studied for launch in the mid-2030s. OST's wavelength coverage will span the mid-infrared to the sub-millimeter, 6-600 microns. To enable observations at the zodiacal background limit, the telescope must be cooled to about 4 K. Combined with the telescope size (currently the primary is 9 m in diameter), this appears to be a daunting task. However, simple calculations and thermal modeling have shown that the required cooling power is met by several currently developed cryocoolers. Further, the telescope thermal architecture is greatly simplified, allowing simpler models, more thermal margin, and higher confidence in the final performance values than previous cold observatories. We will describe design principles to simplify modeling and verification. We will argue that the OST architecture and design principles lower its integration and test time and reduce its ultimate cost.
Oscillations in a simple climate-vegetation model
NASA Astrophysics Data System (ADS)
Rombouts, J.; Ghil, M.
2015-05-01
We formulate and analyze a simple dynamical systems model for climate-vegetation interaction. The planet we consider consists of a large ocean and a land surface on which vegetation can grow. The temperature affects vegetation growth on land and the amount of sea ice on the ocean. Conversely, vegetation and sea ice change the albedo of the planet, which in turn changes its energy balance and hence the temperature evolution. Our highly idealized, conceptual model is governed by two nonlinear, coupled ordinary differential equations, one for global temperature, the other for vegetation cover. The model exhibits either bistability between a vegetated and a desert state or oscillatory behavior. The oscillations arise through a Hopf bifurcation off the vegetated state, when the death rate of vegetation is low enough. These oscillations are anharmonic and exhibit a sawtooth shape that is characteristic of relaxation oscillations, as well as suggestive of the sharp deglaciations of the Quaternary. Our model's behavior can be compared, on the one hand, with the bistability of even simpler, Daisyworld-style climate-vegetation models. On the other hand, it can be integrated into the hierarchy of models trying to simulate and explain oscillatory behavior in the climate system. Rigorous mathematical results are obtained that link the nature of the feedbacks with the nature and the stability of the solutions. The relevance of model results to climate variability on various timescales is discussed.
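A model of this class fits in a few lines of code. The growth law, albedo values, and energy-balance coefficients below are invented placeholders, not the calibration of Rombouts and Ghil; the sketch only shows how two coupled ODEs for global temperature and vegetation cover are stepped forward together.

```python
def simulate(t_end=200.0, dt=0.01, death=0.1):
    """Euler-integrate a toy coupled climate-vegetation system.
    T: global temperature (K); A: vegetation cover fraction in [0, 1].
    All parameter values are illustrative, not the paper's."""
    T, A = 290.0, 0.5
    for _ in range(int(t_end / dt)):
        # bell-shaped vegetation growth rate, peaked at an optimal temperature
        growth = max(0.0, 1.0 - 0.005 * (T - 295.0) ** 2)
        # vegetation darkens the surface, lowering planetary albedo
        albedo = 0.1 * A + 0.4 * (1.0 - A)
        # energy balance: absorbed shortwave minus linearized outgoing longwave
        dT = (340.0 * (1.0 - albedo) - (200.0 + 2.0 * (T - 273.0))) / 10.0
        # logistic growth of vegetation minus a constant death rate
        dA = A * (growth * (1.0 - A) - death)
        T += dt * dT
        A += dt * dA
        A = min(max(A, 0.0), 1.0)
    return T, A
```

Depending on the death rate, systems of this form settle into a vegetated state, a desert state, or oscillate between them, which is the bifurcation structure the paper analyzes rigorously.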
Real-Time Optimization in Complex Stochastic Environment
2015-06-24
simpler ones, thus addressing scalability and the limited resources of networked wireless devices. This, however, comes at the expense of increased...
Test Protocol for Room-to-Room Distribution of Outside Air by Residential Ventilation Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barley, C. D.; Anderson, R.; Hendron, B.
2007-12-01
This test and analysis protocol has been developed as a practical approach for measuring outside air distribution in homes. It has been used successfully in field tests and has led to significant insights on ventilation design issues. Performance advantages of more sophisticated ventilation systems over simpler, less-costly designs have been verified, and specific problems, such as airflow short-circuiting, have been identified.
Fatigue crack growth in fiber reinforced plastics
NASA Technical Reports Server (NTRS)
Mandell, J. F.
1979-01-01
Fatigue crack growth in fiber composites occurs by such complex modes as to frustrate efforts at developing comprehensive theories and models. Under certain loading conditions and with certain types of reinforcement, simpler modes of fatigue crack growth are observed. These modes are more amenable to modeling efforts, and the fatigue crack growth rate can be predicted in some cases. Thus, a formula for prediction of ligamented mode fatigue crack growth rate is available.
Matlab-Excel Interface for OpenDSS
DOE Office of Scientific and Technical Information (OSTI.GOV)
The software allows users of the OpenDSS grid modeling software to access their load flow models using a GUI interface developed in MATLAB. The circuit definitions are entered into a Microsoft Excel spreadsheet which makes circuit creation and editing a much simpler process than the basic text-based editors used in the native OpenDSS interface. Plot tools have been developed which can be accessed through a MATLAB GUI once the desired parameters have been simulated.
Probabilistic Modeling and Simulation of Metal Fatigue Life Prediction
2002-09-01
distribution demonstrate the central limit theorem? Obviously not! This is much the same as materials testing. If only NBA basketball stars are ... near the exit of an NBA locker room, there would obviously be some pseudo-normal distribution with a very small standard deviation. The mean ... completed, the investigators must understand how the midgets and the NBA stars will affect the total solution. D. IT IS MUCH SIMPLER TO MODEL THE ...
NASA Astrophysics Data System (ADS)
Wlodarczyk, Jakub; Kierdaszuk, Borys
2005-08-01
Decays of tyrosine fluorescence in protein-ligand complexes are described by a model of a continuous distribution of fluorescence lifetimes. The resulting analytical power-like decay function provides good fits to highly complex fluorescence kinetics. Moreover, it is a manifestation of the so-called Tsallis q-exponential function, which is suitable for describing systems with long-range interactions, memory effects, and fluctuations of the characteristic fluorescence lifetime. The proposed decay functions were applied to the analysis of fluorescence decays of tyrosine in a protein, i.e. the enzyme purine nucleoside phosphorylase from E. coli (the product of the deoD gene), free in aqueous solution and in a complex with formycin A (an inhibitor) and orthophosphate (a co-substrate). The power-like function provides new information about enzyme-ligand complex formation based on the physically justified heterogeneity parameter directly related to the lifetime distribution. A measure of the heterogeneity parameter in the enzyme systems is provided by the variance of the fluorescence lifetime distribution. The possible number of deactivation channels and the excited-state mean lifetime can be easily derived without a priori knowledge of the complexity of the studied system. Moreover, the proposed model is simpler than the traditional multi-exponential one and better describes the heterogeneous nature of the studied systems.
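The q-exponential ("power-like") decay law has a compact closed form. The sketch below uses the conventional Tsallis parameterization, which we assume matches the paper's up to notation; for q > 1 it is a power law, and as q approaches 1 it reduces to the ordinary single-exponential decay.

```python
import math

def q_exponential_decay(t, tau, q, i0=1.0):
    """Tsallis q-exponential fluorescence decay I(t) = i0 * e_q(-t/tau),
    where e_q(x) = [1 + (1-q) x]^(1/(1-q)). For q > 1 this decays as a
    power law with exponent -1/(q-1); q -> 1 recovers exp(-t/tau)."""
    if abs(q - 1.0) < 1e-12:
        return i0 * math.exp(-t / tau)
    base = 1.0 + (q - 1.0) * t / tau
    if base <= 0.0:
        # for q < 1 the decay reaches exactly zero at a finite cutoff time
        return 0.0
    return i0 * base ** (-1.0 / (q - 1.0))
```

The heterogeneity parameter q thus interpolates continuously between a single-lifetime decay and the long-tailed kinetics of a broad lifetime distribution.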
Reynolds stress closure modeling in wall-bounded flows
NASA Technical Reports Server (NTRS)
Durbin, Paul A.
1993-01-01
This report describes two projects. Firstly, a Reynolds stress closure for near-wall turbulence is described. It was motivated by the simpler k-epsilon-(v-bar(exp 2)) model described in last year's annual research brief. Direct Numerical Simulation of three-dimensional channel flow shows a curious decrease of the turbulent kinetic energy. The second topic of this report is a model which reproduces this effect. That model is described and used to discuss the relevance of the three dimensional channel flow simulation to swept wing boundary layers.
Comment on "Spectroscopy of samarium isotopes in the sdg interacting boson model"
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuyucak, S.; Lac, V.
We point out that the data used in the sdg boson model calculations by Devi and Kota [Phys. Rev. C 45, 2238 (1992)] can be equally well described by the much simpler sd boson model. We present additional data for the Sm isotopes which cannot be explained in the sd model and hence may justify such an extension to the sdg bosons. We also comment on the form of the Hamiltonian and the transition operators used in this paper.
NASA Astrophysics Data System (ADS)
Jackson-Blake, L. A.; Sample, J. E.; Wade, A. J.; Helliwell, R. C.; Skeffington, R. A.
2017-07-01
Catchment-scale water quality models are increasingly popular tools for exploring the potential effects of land management, land use change and climate change on water quality. However, the dynamic, catchment-scale nutrient models in common usage are complex, with many uncertain parameters requiring calibration, limiting their usability and robustness. A key question is whether this complexity is justified. To explore this, we developed a parsimonious phosphorus model, SimplyP, incorporating a rainfall-runoff model and a biogeochemical model able to simulate daily streamflow, suspended sediment, and particulate and dissolved phosphorus dynamics. The model's complexity was compared to one popular nutrient model, INCA-P, and the performance of the two models was compared in a small rural catchment in northeast Scotland. For three land use classes, fewer than six SimplyP parameters must be determined through calibration; the rest may be based on measurements, while INCA-P has around 40 unmeasurable parameters. Despite substantially simpler process representation, SimplyP performed comparably to INCA-P in both calibration and validation and produced similar long-term projections in response to changes in land management. Results support the hypothesis that INCA-P is overly complex for the study catchment. We hope our findings will help prompt wider model comparison exercises, as well as debate among the water quality modeling community as to whether today's models are fit for purpose. Simpler models such as SimplyP have the potential to be useful management and research tools, building blocks for future model development (prototype code is freely available), or benchmarks against which more complex models could be evaluated.
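The flavor of such a parsimonious component can be seen in a linear-reservoir runoff store, the kind of building block models in this class rest on. The structure and time constant below are generic illustrations, not SimplyP's actual equations.

```python
def linear_reservoir(precip, time_constant=5.0, dt=1.0, s0=0.0):
    """Daily linear-reservoir runoff: outflow is storage divided by a
    single time constant, so one parameter controls the whole recession.
    Returns the daily flows and the final storage."""
    s, flows = s0, []
    for p in precip:
        q = s / time_constant          # outflow proportional to storage
        s += dt * (p - q)              # water balance update
        flows.append(q)
    return flows, s
```

Because the store conserves mass exactly (input equals output plus storage change), such components need little calibration, which is the design philosophy the paper advocates.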
NASA Astrophysics Data System (ADS)
Pocebneva, Irina; Belousov, Vadim; Fateeva, Irina
2018-03-01
This article provides a methodical description of resource-time analysis for a wide range of requirements imposed on resource consumption processes in scheduling tasks during the construction of high-rise buildings and facilities. The core of the proposed approach is the determination of resource models. The elements of those models are generalized network models, the number of which can be too large to analyze each element individually. Therefore, the problem is to approximate the original resource model by simpler time models whose number is not very large.
NASA Astrophysics Data System (ADS)
Jöckel, P.; Sander, R.; Kerkweg, A.; Tost, H.; Lelieveld, J.
2005-02-01
The development of a comprehensive Earth System Model (ESM) to study the interactions between chemical, physical, and biological processes requires coupling of the different domains (land, ocean, atmosphere, ...). One strategy is to link existing domain-specific models with a universal coupler, i.e. an independent standalone program organizing the communication between other programs. In many cases, however, a much simpler approach is more feasible. We have developed the Modular Earth Submodel System (MESSy). It comprises (1) a modular interface structure to connect submodels to a base model, (2) an extendable set of such submodels for miscellaneous processes, and (3) a coding standard. MESSy is therefore not a coupler in the classical sense, but exchanges data between a base model and several submodels within one comprehensive executable. The internal complexity of the submodels is controllable in a transparent and user-friendly way. This provides remarkable new possibilities to study feedback mechanisms (by two-way coupling). Note that the MESSy and the coupler approach can be combined. For instance, an atmospheric model implemented according to the MESSy standard could easily be coupled to an ocean model by means of an external coupler. The vision is to ultimately form a comprehensive ESM which includes a large set of submodels, and a base model which contains only a central clock and runtime control. This can be reached stepwise, since each process can be included independently. Starting from an existing model, process submodels can be reimplemented according to the MESSy standard. This procedure guarantees the availability of a state-of-the-art model for scientific applications at any time of the development. In principle, MESSy can be implemented into any kind of model, either global or regional. So far, the MESSy concept has been applied to the general circulation model ECHAM5 and a number of process boxmodels.
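The base-model/submodel split can be caricatured in a few lines. The class names, the shared-state dictionary, and the two toy submodels below are invented for illustration and bear no relation to the actual (Fortran) MESSy interface; they only show the idea of one executable in which a minimal base model drives process submodels that exchange data, allowing two-way coupling.

```python
class BaseModel:
    """Minimal base model: only a clock loop and runtime control."""
    def __init__(self):
        self.state = {}        # data exchanged between submodels
        self.submodels = []

    def register(self, submodel):
        self.submodels.append(submodel)

    def run(self, n_steps):
        for _ in range(n_steps):
            for sm in self.submodels:   # each process acts on shared state
                sm.step(self.state)

class Radiation:
    def step(self, state):
        # toy process: slowly warms the shared temperature field
        state["temperature"] = state.get("temperature", 288.0) + 0.01

class Chemistry:
    def step(self, state):
        # two-way coupling: reads the temperature set by another submodel
        t = state.get("temperature", 288.0)
        state["rate"] = 1e-3 * (t / 288.0)
```

Each submodel can be added or swapped independently, which is the stepwise development path the abstract describes.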
Analytic Solutions of the Vector Burgers Equation
NASA Technical Reports Server (NTRS)
Nerney, Steven; Schmahl, Edward J.; Musielak, Z. E.
1996-01-01
The well-known analytical solution of Burgers' equation is extended to curvilinear coordinate systems in three dimensions by a method that is much simpler and more suitable to practical applications than that previously used. The results obtained are applied to incompressible flow with cylindrical symmetry, and also to the decay of an initially linearly increasing wind.
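The classical analytical solution being extended rests on the Cole-Hopf substitution: assuming the velocity field is irrotational (the gradient of a potential), it linearizes the vector Burgers equation into the heat equation. Schematically:

```latex
% Cole-Hopf substitution (sketch): for irrotational u, the nonlinear
% vector Burgers equation reduces to the linear heat equation for phi.
\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u}\cdot\nabla)\,\mathbf{u}
  = \nu\,\nabla^{2}\mathbf{u},
\qquad
\mathbf{u} = -\,2\nu\,\frac{\nabla\phi}{\phi}
\quad\Longrightarrow\quad
\frac{\partial \phi}{\partial t} = \nu\,\nabla^{2}\phi .
```

Solving the heat equation in cylindrical or spherical coordinates then generates the curvilinear Burgers solutions applied here to axisymmetric flow and the decaying wind.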
Kraft pulp bleaching and delignification by dikaryons and monokaryons of trametes versicolor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Addleman, K.; Archibald, F.
1993-01-01
To reduce the levels of chlorinated lignin residues in effluents from the pulp and paper industry, interest has focused on the white rot basidiomycete fungi. The kraft process, the most common commercial delignification method, produces a dark pulp which is bleached by use of chlorine, chlorine dioxide, and caustic extraction. A dikaryon of Trametes (Coriolus) versicolor has been shown to bleach and delignify kraft pulp, offering a possible alternative to chlorine. A monokaryon strain, if comparable in effect to the dikaryon, would be a much simpler system for the study of mechanisms and for genetic manipulation. The researchers compared strains of both and conclude that the following characteristics justify replacing the parent dikaryon with monokaryon 52J in future work on biobleaching and biological delignification: (1) reduced biomass and slower growth rate; (2) no dark pigment production; (3) superior biological bleaching ability; (4) a simpler system for genetic manipulation and biochemical analysis. The involvement of MnP, but not LP, in pulp bleaching and delignification is strongly suggested. 40 refs., 3 figs., 4 tabs.
Protein (multi-)location prediction: utilizing interdependencies via a generative model
Shatkay, Hagit
2015-01-01
Motivation: Proteins are responsible for a multitude of vital tasks in all living organisms. Given that a protein’s function and role are strongly related to its subcellular location, protein location prediction is an important research area. While proteins move from one location to another and can localize to multiple locations, most existing location prediction systems assign only a single location per protein. A few recent systems attempt to predict multiple locations for proteins, however, their performance leaves much room for improvement. Moreover, such systems do not capture dependencies among locations and usually consider locations as independent. We hypothesize that a multi-location predictor that captures location inter-dependencies can improve location predictions for proteins. Results: We introduce a probabilistic generative model for protein localization, and develop a system based on it—which we call MDLoc—that utilizes inter-dependencies among locations to predict multiple locations for proteins. The model captures location inter-dependencies using Bayesian networks and represents dependency between features and locations using a mixture model. We use iterative processes for learning model parameters and for estimating protein locations. We evaluate our classifier MDLoc, on a dataset of single- and multi-localized proteins derived from the DBMLoc dataset, which is the most comprehensive protein multi-localization dataset currently available. Our results, obtained by using MDLoc, significantly improve upon results obtained by an initial simpler classifier, as well as on results reported by other top systems. Availability and implementation: MDLoc is available at: http://www.eecis.udel.edu/∼compbio/mdloc. Contact: shatkay@udel.edu. PMID:26072505
Wallace, Rodrick
2018-06-01
Cognition in living entities, and in their social groupings or institutional artifacts, is necessarily as complicated as their embedding environments, which, for humans, include a particularly rich cultural milieu. The asymptotic limit theorems of information and control theories permit construction of a new class of empirical 'regression-like' statistical models for cognitive developmental processes, their dynamics, and modes of dysfunction. Such models may, as have their simpler analogs, prove useful in the study and remediation of cognitive failure at and across the scales and levels of organization that constitute and drive the phenomena of life. These new models particularly focus on the roles of sociocultural environment and stress, in a large sense, both as trigger for the failure of the regulation of bio-cognition and as 'riverbanks' determining the channels of pathology, with implications across life-course developmental trajectories. We examine the effects of an embedding cultural milieu and its socioeconomic implementations using the 'lenses' of metabolic optimization, control system theory, and an extension of symmetry-breaking appropriate to information systems. A central implication is that most, if not all, human developmental disorders are fundamentally culture-bound syndromes. This has deep implications for both individual treatment and public health policy.
NASA Astrophysics Data System (ADS)
Sakellariou, J. S.; Fassois, S. D.
2017-01-01
The identification of a single global model for a stochastic dynamical system operating under various conditions is considered. Each operating condition is assumed to have a pseudo-static effect on the dynamics and be characterized by a single measurable scheduling variable. Identification is accomplished within a recently introduced Functionally Pooled (FP) framework, which offers a number of advantages over Linear Parameter Varying (LPV) identification techniques. The focus of the work is on the extension of the framework to include the important FP-ARMAX model case. Compared to their simpler FP-ARX counterparts, FP-ARMAX models are much more general and offer improved flexibility in describing various types of stochastic noise, but at the same time lead to a more complicated, non-quadratic, estimation problem. Prediction Error (PE), Maximum Likelihood (ML), and multi-stage estimation methods are postulated, and the PE estimator optimality, in terms of consistency and asymptotic efficiency, is analytically established. The postulated estimators are numerically assessed via Monte Carlo experiments, while the effectiveness of the approach and its superiority over its FP-ARX counterpart are demonstrated via an application case study pertaining to simulated railway vehicle suspension dynamics under various mass loading conditions.
Mode-Locking Behavior of Izhikevich Neuron Under Periodic External Forcing
NASA Astrophysics Data System (ADS)
Farokhniaee, Amirali; Large, Edward
2015-03-01
In this study we obtained the regions of existence of various mode-locked states on the period-strength plane, which are called Arnold tongues, for Izhikevich neurons. The study is based on the neuron model of Izhikevich (2003), which is the normal form of the Hodgkin-Huxley neuron. This model is much simpler, in terms of the dimension of the coupled non-linear differential equations, than other existing models, yet excellent for generating the complex spiking patterns observed in real neurons. Many neurons in the auditory system of the brain must encode amplitude variations of a periodic signal. Under periodic stimulation these neurons display rich dynamical states including mode-locking and chaotic responses. Periodic stimuli such as sinusoidal waves and amplitude-modulated (AM) sounds can lead to various forms of n : m mode-locked states, similar to the mode-locking phenomenon in a laser resonance cavity. Obtaining the Arnold tongues provides useful insight into the organization of mode-locking behavior of neurons under periodic forcing. Hence we can describe the construction of harmonic and sub-harmonic responses in the early processing stages of the auditory system, such as the auditory nerve and cochlear nucleus.
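The Izhikevich (2003) model itself is just two coupled equations with a reset rule. A minimal sketch follows, using Euler stepping and the 'regular spiking' parameter set from the original paper; the drive here is constant rather than the periodic forcing studied above, so it shows tonic firing rather than Arnold tongues.

```python
def izhikevich(i_ext, t_ms=1000.0, dt=0.25, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Simulate Izhikevich's two-variable neuron under constant input.

    dv/dt = 0.04 v^2 + 5 v + 140 - u + I,  du/dt = a (b v - u);
    when v reaches 30 mV, v is reset to c and u is incremented by d.
    Returns the spike times in ms. dt is assumed small enough for Euler."""
    v, u = -65.0, b * -65.0
    spikes = []
    for n in range(int(t_ms / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_ext)
        u += dt * a * (b * v - u)
        if v >= 30.0:              # spike detected: apply the reset rule
            spikes.append(n * dt)
            v, u = c, u + d
    return spikes
```

Sweeping a periodic input's period and strength, and classifying the ratio of spikes to forcing cycles, is then what maps out the n : m tongues.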
Regression-based adaptive sparse polynomial dimensional decomposition for sensitivity analysis
NASA Astrophysics Data System (ADS)
Tang, Kunkun; Congedo, Pietro; Abgrall, Remi
2014-11-01
Polynomial dimensional decomposition (PDD) is employed in this work for global sensitivity analysis and uncertainty quantification of stochastic systems subject to a large number of random input variables. Due to the intimate relation between PDD and Analysis-of-Variance, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices than polynomial chaos (PC). Unfortunately, the number of PDD terms grows exponentially with the size of the input random vector, which makes the computational cost of the standard method unaffordable for real engineering applications. To address this curse of dimensionality, this work proposes a variance-based adaptive strategy aiming to build a cheap meta-model by sparse PDD with coefficients computed by regression. During this adaptive procedure, the PDD representation contains only a few terms, so that the cost of repeatedly solving the linear system of the least-squares regression problem is negligible. The size of the final sparse-PDD representation is much smaller than the full PDD, since only significant terms are eventually retained. Consequently, far fewer calls to the deterministic model are required to compute the final PDD coefficients.
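The link between an ANOVA-style decomposition and first-order Sobol' indices can be demonstrated by brute force on a tensor grid; this is only the variance-decomposition definition at work, not the paper's regression-based sparse-PDD estimator, and it is limited to two independent Uniform(0, 1) inputs.

```python
def sobol_first_order(f, n=200):
    """First-order Sobol' indices of f(x1, x2) on [0,1]^2 by midpoint-grid
    ANOVA: S_i = Var(E[f | x_i]) / Var(f)."""
    xs = [(i + 0.5) / n for i in range(n)]
    vals = [[f(x1, x2) for x2 in xs] for x1 in xs]
    flat = [v for row in vals for v in row]
    mean = sum(flat) / len(flat)
    total_var = sum((v - mean) ** 2 for v in flat) / len(flat)
    # conditional means E[f | x1] and E[f | x2] along grid lines
    m1 = [sum(row) / n for row in vals]
    m2 = [sum(vals[i][j] for i in range(n)) / n for j in range(n)]
    s1 = sum((m - mean) ** 2 for m in m1) / n / total_var
    s2 = sum((m - mean) ** 2 for m in m2) / n / total_var
    return s1, s2
```

For the additive test function f = x1 + x2^2, the exact indices are (1/12)/(1/12 + 4/45) and (4/45)/(1/12 + 4/45), about 0.484 and 0.516, and the interaction term is zero.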
Hetherington, James P J; Warner, Anne; Seymour, Robert M
2006-04-22
Systems Biology requires that biological modelling is scaled up from small components to system level. This can produce exceedingly complex models, which obscure understanding rather than facilitate it. The successful use of highly simplified models would resolve many of the current problems faced in Systems Biology. This paper questions whether the conclusions of simple mathematical models of biological systems are trustworthy. The simplification of a specific model of calcium oscillations in hepatocytes is examined in detail, and the conclusions drawn from this scrutiny generalized. We formalize our choice of simplification approach through the use of functional 'building blocks'. A collection of models is constructed, each a progressively more simplified version of a well-understood model. The limiting model is a piecewise linear model that can be solved analytically. We find that, as expected, in many cases the simpler models produce incorrect results. However, when we make a sensitivity analysis, examining which aspects of the behaviour of the system are controlled by which parameters, the conclusions of the simple model often agree with those of the richer model. The hypothesis that the simplified model retains no information about the real sensitivities of the unsimplified model can be very strongly ruled out by treating the simplification process as a pseudo-random perturbation on the true sensitivity data. We conclude that sensitivity analysis is, therefore, of great importance to the analysis of simple mathematical models in biology. Our comparisons reveal which results of the sensitivity analysis regarding calcium oscillations in hepatocytes are robust to the simplifications necessarily involved in mathematical modelling. 
For example, we find that if a treatment is observed to strongly decrease the period of the oscillations while increasing the proportion of the cycle during which cellular calcium concentrations are rising, without affecting the inter-spike or maximum calcium concentrations, then it is likely that the treatment is acting on the plasma membrane calcium pump.
NASA Astrophysics Data System (ADS)
Fang, K.; Shen, C.; Kifer, D.; Yang, X.
2017-12-01
The Soil Moisture Active Passive (SMAP) mission has delivered high-quality and valuable sensing of surface soil moisture since 2015. However, its short time span, coarse resolution, and irregular revisit schedule have limited its use. Utilizing a state-of-the-art deep-in-time neural network, Long Short-Term Memory (LSTM), we created a system that predicts SMAP level-3 soil moisture data using climate forcing, model-simulated moisture, and static physical attributes as inputs. The system removes most of the bias with model simulations and also improves predicted moisture climatology, achieving a testing accuracy of 0.025 to 0.03 in most parts of Continental United States (CONUS). As the first application of LSTM in hydrology, we show that it is more robust than simpler methods in either temporal or spatial extrapolation tests. We also discuss roles of different predictors, the effectiveness of regularization algorithms and impacts of training strategies. With high fidelity to SMAP products, our data can aid various applications including data assimilation, weather forecasting, and soil moisture hindcasting.
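The memory mechanism that lets an LSTM capture long time lags (for example between rainfall forcing and the soil-moisture response) reduces to four gates and a cell state. Below is a scalar, single-unit sketch with hand-set weights; the actual system uses full vector-valued layers with learned weights and many inputs, so everything here is illustrative.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, w):
    """One step of a scalar LSTM cell. `w` maps a gate name to its
    (input weight, recurrent weight, bias) triple."""
    i = sigmoid(w["i"][0] * x + w["i"][1] * h + w["i"][2])    # input gate
    f = sigmoid(w["f"][0] * x + w["f"][1] * h + w["f"][2])    # forget gate
    o = sigmoid(w["o"][0] * x + w["o"][1] * h + w["o"][2])    # output gate
    g = math.tanh(w["g"][0] * x + w["g"][1] * h + w["g"][2])  # candidate
    c_new = f * c + i * g          # cell state carries long-term memory
    h_new = o * math.tanh(c_new)   # hidden state is the step's output
    return h_new, c_new
```

Because the cell state is updated multiplicatively by the forget gate rather than overwritten, gradients can persist across many time steps, which is what the "deep-in-time" description refers to.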
Bu, Xiangwei; Wu, Xiaoyan; Tian, Mingyan; Huang, Jiaqi; Zhang, Rui; Ma, Zhen
2015-09-01
In this paper, an adaptive neural controller is developed for a constrained flexible air-breathing hypersonic vehicle (FAHV) based on a high-order tracking differentiator (HTD). By utilizing a functional decomposition methodology, the dynamic model is reasonably decomposed into a velocity subsystem and an altitude subsystem. For the velocity subsystem, a dynamic-inversion-based neural controller is constructed. By introducing the HTD to adaptively estimate the newly defined states generated in the process of model transformation, a novel neural altitude controller that is much simpler than those derived from back-stepping is developed based on the normal output-feedback form instead of the strict-feedback formulation. Based on a minimal-learning-parameter scheme, only two neural networks with two adaptive parameters are needed for neural approximation. Especially, a novel auxiliary system is explored to deal with the problem of control input constraints. Finally, simulation results are presented to test the effectiveness of the proposed control strategy in the presence of system uncertainties and actuator constraints. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Performance of the NEXT Engineering Model Power Processing Unit
NASA Technical Reports Server (NTRS)
Pinero, Luis R.; Hopson, Mark; Todd, Philip C.; Wong, Brian
2007-01-01
NASA's Evolutionary Xenon Thruster (NEXT) project is developing an advanced ion propulsion system for future NASA solar system exploration missions. An engineering model (EM) power processing unit (PPU) for the NEXT project was designed and fabricated by L-3 Communications under contract with NASA Glenn Research Center (GRC). This modular PPU can process from 0.5 to 7.0 kW of output power for the NEXT ion thruster. Its design includes many significant improvements for better performance than the state-of-the-art PPU. The most significant difference is the beam supply, which comprises six modules and is capable of very efficient operation over a wide voltage range because of innovative features such as dual controls, module addressing, and a high-current mode. The low-voltage power supplies are based on elements of the previously validated NASA Solar Electric Propulsion Technology Application Readiness (NSTAR) PPU. The highly modular construction of the PPU resulted in improved manufacturability, simpler scalability, and lower cost. This paper describes the design of the EM PPU and the results of bench-top performance tests.
Equivalent circuit for the characterization of the resonance mode in piezoelectric systems
NASA Astrophysics Data System (ADS)
Fernández-Afonso, Y.; García-Zaldívar, O.; Calderón-Piñar, F.
2015-12-01
The impedance properties of polarized piezoelectric materials can be described by equivalent electric circuits. The classic circuit used in the literature to describe real systems is formed by one resistor (R), one inductance (L), and one capacitance (C) connected in series, with one capacitance (C0) connected in parallel with the former. Nevertheless, the equations that describe the resonance and anti-resonance frequencies depend in a complex manner on R, L, C, and C0. In this work, a simpler model is proposed, formed by one inductance (L) and one capacitance (C) in series; one capacitance (C0) in parallel; one resistor (RP) in parallel; and one resistor (RS) in series with the other components. Unlike the traditional circuit, the equivalent-circuit elements in the proposed model can be determined simply from the experimental values of the resonance frequency fr, the anti-resonance frequency fa, the impedance modulus at the resonance frequency |Zr|, the impedance modulus at the anti-resonance frequency |Za|, and the low-frequency capacitance C0, without fitting the experimental impedance data to the derived equation.
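For the lossless series branch (resistances neglected), the element values already follow from fr, fa, and C0 alone; a sketch using the standard Butterworth-Van Dyke relations (illustrative values, and omitting the paper's RP/RS determination, which needs |Zr| and |Za|):

```python
import math

def motional_elements(fr, fa, C0):
    """Series-branch L and C of the lossless Butterworth-Van Dyke circuit
    from resonance fr, anti-resonance fa, and clamped capacitance C0."""
    C = C0 * (fa**2 - fr**2) / fr**2        # motional capacitance
    L = 1.0 / ((2 * math.pi * fr)**2 * C)   # motional inductance
    return L, C

# Illustrative (not measured) values: fr = 2.00 MHz, fa = 2.05 MHz, C0 = 1 nF
L, C = motional_elements(2.00e6, 2.05e6, 1e-9)

# Consistency check: recompute both frequencies from the fitted elements
fr_check = 1.0 / (2 * math.pi * math.sqrt(L * C))
fa_check = fr_check * math.sqrt(1 + C / 1e-9)
```

The check works because the series branch resonates at fr = 1/(2π√(LC)), while C0 shifts the anti-resonance up by the factor √(1 + C/C0).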
Di Paola, Cono; P. Brodholt, John
2016-01-01
Knowledge of the melting properties of materials, especially at extreme pressure conditions, represents a long-standing scientific challenge. For instance, there is currently considerable uncertainty over the melting temperature of the high-pressure mantle mineral bridgmanite (MgSiO3-perovskite), with current estimates of the melting T at the base of the mantle ranging from 4800 K to 8000 K. The difficulty of experimentally measuring high-pressure melting temperatures has motivated the use of ab initio methods; however, melting is a complex multi-scale phenomenon and the timescale for melting can be prohibitively long. Here we show that a combination of empirical and ab initio molecular dynamics calculations can be used to successfully predict the melting point of multicomponent systems, such as MgSiO3 perovskite. We predict the correct low-pressure melting T, and at high pressure we show that the melting temperature is only 5000 K at 120 GPa, a value lower than nearly all previous estimates. In addition, we believe that this strategy is of general applicability and therefore suitable for any system under physical conditions where simpler models fail. PMID:27444854
Modelling and control of a microgrid including photovoltaic and wind generation
NASA Astrophysics Data System (ADS)
Hussain, Mohammed Touseef
Extensive increase of distributed generation (DG) penetration and the existence of multiple DG units at the distribution level have introduced the notion of the microgrid. This thesis develops a detailed nonlinear and small-signal dynamic model of a microgrid that includes PV, wind, and conventional small-scale generation along with their power-electronic interfaces and filters. The models developed evaluate the amount of generation mix from various DGs for satisfactory steady-state operation of the microgrid. In order to understand the interaction of the DGs with the microgrid system, two simpler configurations were considered initially. The first consists of a microalternator, PV, and their electronics; the second consists of a microalternator and a wind system, each connected to the power system grid. Nonlinear and linear state-space models of each microgrid are developed. Small-signal analysis showed that large participation of PV/wind can drive the microgrid to the brink of the unstable region without adequate control. Nonlinear simulations are carried out to verify the results obtained through small-signal analysis. The role of the extent of generation mix of a composite microgrid consisting of wind, PV, and conventional generation was investigated next. The findings from the smaller systems were verified through nonlinear and small-signal modeling. A central supervisory capacitor energy storage controller interfaced through a STATCOM was proposed to monitor and enhance microgrid operation. The potential of various control inputs to provide additional damping to the system was evaluated through decomposition techniques. The signals identified as having damping content were employed to design the supervisory control system. The controller gains were tuned through an optimal pole-placement technique. Simulation studies demonstrate that the STATCOM voltage phase angle and the PV inverter phase angle were the best inputs for enhanced stability boundaries.
Robust control of accelerators
NASA Astrophysics Data System (ADS)
Johnson, W. Joel D.; Abdallah, Chaouki T.
1991-07-01
The problem of controlling the variations in the rf power system can be effectively cast as an application of modern control theory. Two components of this theory are obtaining a model and a feedback structure. The model inaccuracies influence the choice of a particular controller structure. Because of the modelling uncertainty, one has to design either a variable, adaptive controller or a fixed, robust controller to achieve the desired objective. The adaptive control scheme usually results in very complex hardware and, therefore, is not pursued in this research. In contrast, the robust control method leads to simpler hardware. However, robust control requires a more accurate mathematical model of the physical process than is required by adaptive control. Our research at the Los Alamos National Laboratory (LANL) and the University of New Mexico (UNM) has led to the development and implementation of a new robust rf power feedback system. In this article, we report on our research progress. In section 1, the robust control problem for the rf power system and the philosophy adopted for the beginning phase of our research are presented. In section 2, the results of our proof-of-principle experiments are presented. In section 3, we describe the actual controller configuration that is used in LANL FEL physics experiments. The novelty of our approach is that the control hardware is implemented directly in rf, without demodulating, compensating, and then remodulating.
AgMIP: Next Generation Models and Assessments
NASA Astrophysics Data System (ADS)
Rosenzweig, C.
2014-12-01
Next steps in developing next-generation crop models fall into several categories: significant improvements in simulation of important crop processes and responses to stress; extension from simplified crop models to complex cropping systems models; and scaling up from site-based models to landscape, national, continental, and global scales. Crop processes that require major leaps in understanding and simulation in order to narrow uncertainties around how crops will respond to changing atmospheric conditions include genetics; carbon, temperature, water, and nitrogen; ozone; and nutrition. The field of crop modeling has been built on a single crop-by-crop approach. It is now time to create a new paradigm, moving from 'crop' to 'cropping system.' A first step is to set up the simulation technology so that modelers can rapidly incorporate multiple crops within fields, and multiple crops over time. Then the response of these more complex cropping systems can be tested under different sustainable intensification management strategies utilizing the updated simulation environments. Model improvements for diseases, pests, and weeds include developing process-based models for important diseases, frameworks for coupling air-borne diseases to crop models, gathering significantly more data on crop impacts, and enabling the evaluation of pest management strategies. Most smallholder farming in the world involves integrated crop-livestock systems that cannot be represented by crop modeling alone. Thus, next-generation cropping system models need to include key linkages to livestock. Livestock linkages to be incorporated include growth and productivity models for grasslands and rangelands as well as the usual annual crops. There are several approaches for scaling up, including use of gridded models and development of simpler quasi-empirical models for landscape-scale analysis. 
On the assessment side, AgMIP is leading a community process for coordinated contributions to IPCC AR6 that involves the key modeling groups from around the world including North America, Europe, South America, Sub-Saharan Africa, South Asia, East Asia, and Australia and Oceania. This community process will lead to mutually agreed protocols for coordinated global and regional assessments.
A Classroom Entry and Exit Game of Supply with Price-Taking Firms
ERIC Educational Resources Information Center
Cheung, Stephen L.
2005-01-01
The author describes a classroom game demonstrating the process of adjustment to long-run equilibrium in a market consisting of price-taking firms. This game unites and extends key insights from several simpler games in a framework more consistent with the standard textbook model of a competitive industry. Because firms have increasing marginal…
General Relativity in (1 + 1) Dimensions
ERIC Educational Resources Information Center
Boozer, A. D.
2008-01-01
We describe a theory of gravity in (1 + 1) dimensions that can be thought of as a toy model of general relativity. The theory should be a useful pedagogical tool, because it is mathematically much simpler than general relativity but shares much of the same conceptual structure; in particular, it gives a simple illustration of how gravity arises…
ERIC Educational Resources Information Center
Bush, Drew; Sieber, Renee; Seiler, Gale; Chandler, Mark
2018-01-01
This study with 79 students in Montreal, Quebec, compared the educational use of a National Aeronautics and Space Administration (NASA) global climate model (GCM) to climate education technologies developed for classroom use that included simpler interfaces and processes. The goal was to show how differing climate education technologies succeed…
Conditional Subspace Clustering of Skill Mastery: Identifying Skills that Separate Students
ERIC Educational Resources Information Center
Nugent, Rebecca; Ayers, Elizabeth; Dean, Nema
2009-01-01
In educational research, a fundamental goal is identifying which skills students have mastered, which skills they have not, and which skills they are in the process of mastering. As the number of examinees, items, and skills increases, the estimation of even simple cognitive diagnosis models becomes difficult. We adopt a faster, simpler approach:…
Data Synchronization Discrepancies in a Formation Flight Control System
NASA Technical Reports Server (NTRS)
Ryan, Jack; Hanson, Curtis E.; Norlin, Ken A.; Allen, Michael J.; Schkolnik, Gerard (Technical Monitor)
2001-01-01
Aircraft hardware-in-the-loop simulation is an invaluable tool to flight test engineers; it reveals design and implementation flaws while operating in a controlled environment. Engineers, however, must always be skeptical of the results and analyze them within their proper context. Engineers must carefully ascertain whether an anomaly that occurs in the simulation will also occur in flight. This report presents a chronology illustrating how misleading simulation timing problems led to the implementation of an overly complex position data synchronization guidance algorithm in place of a simpler one. The report illustrates problems caused by the complex algorithm and how the simpler algorithm was chosen in the end. Brief descriptions of the project objectives, approach, and simulation are presented. The misleading simulation results and the conclusions then drawn are presented. The complex and simple guidance algorithms are presented with flight data illustrating their relative success.
NASA Technical Reports Server (NTRS)
Thorpe, Douglas G.
1991-01-01
An operation and schedule enhancement is shown that replaces the four-body cluster (Space Shuttle Orbiter (SSO), external tank, and two solid rocket boosters) with a simpler two-body cluster (SSO and liquid rocket booster/external tank). At staging velocity, the booster unit (liquid-fueled booster engines and vehicle support structure) is jettisoned while the remaining SSO and supertank continue on to orbit. The simpler two-body cluster reduces the processing and stack time until SSO mate from 57 days (for the solid rocket booster) to 20 days (for the liquid rocket booster). The areas in which liquid booster systems are superior to solid rocket boosters are discussed. Alternative and future-generation vehicles are reviewed to reveal greater performance and operations enhancements with further modifications to the current methods of propulsion design philosophy, e.g., combined-cycle engines and concentric propellant tanks.
Persistence in a single species CSTR model with suspended flocs and wall attached biofilms.
Mašić, Alma; Eberl, Hermann J
2012-04-01
We consider a mathematical model for a bacterial population in a continuously stirred tank reactor (CSTR) with wall attachment. This is a modification of the Freter model, in which we model the sessile bacteria as a microbial biofilm. Our analysis indicates that the results of the algebraically simpler original Freter model largely carry over. In a computational simulation study, we find that the vast majority of bacteria in the reactor will eventually be sessile. However, we also find that suspended biomass is relatively more efficient in removing substrate from the reactor than biofilm bacteria.
Benefits of detailed models of muscle activation and mechanics
NASA Technical Reports Server (NTRS)
Lehman, S. L.; Stark, L.
1981-01-01
Recent biophysical and physiological studies identified some of the detailed mechanisms involved in excitation-contraction coupling, muscle contraction, and deactivation. Mathematical models incorporating these mechanisms allow independent estimates of key parameters, direct interplay between basic muscle research and the study of motor control, and realistic model behaviors, some of which are not accessible to previous, simpler, models. The existence of previously unmodeled behaviors has important implications for strategies of motor control and identification of neural signals. New developments in the analysis of differential equations make the more detailed models feasible for simulation in realistic experimental situations.
F-15B QuietSpike(TradeMark) Aeroservoelastic Flight Test Data Analysis
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.
2007-01-01
System identification or mathematical modelling is utilised in the aerospace community for the development of simulation models for robust control law design. These models are often described as linear, time-invariant processes and assumed to be uniform throughout the flight envelope. Nevertheless, it is well known that the underlying process is inherently nonlinear. The reason for utilising a linear approach has been due to the lack of a proper set of tools for the identification of nonlinear systems. Over the past several decades the controls and biomedical communities have made great advances in developing tools for the identification of nonlinear systems. These approaches are robust and readily applicable to aerospace systems. In this paper, we show the application of one such nonlinear system identification technique, structure detection, for the analysis of F-15B QuietSpike(TradeMark) aeroservoelastic flight test data. Structure detection is concerned with the selection of a subset of candidate terms that best describe the observed output. This is a necessary procedure to compute an efficient system description which may afford greater insight into the functionality of the system or a simpler controller design. Structure computation as a tool for black-box modelling may be of critical importance for the development of robust, parsimonious models for the flight-test community. Moreover, this approach may lead to efficient strategies for rapid envelope expansion which may save significant development time and costs. 
The objectives of this study are to demonstrate via analysis of F-15B QuietSpike(TradeMark) aeroservoelastic flight test data for several flight conditions (Mach number) that (i) linear models are inefficient for modelling aeroservoelastic data, (ii) nonlinear identification provides a parsimonious model description whilst providing a high percent fit for cross-validated data and (iii) the model structure and parameters vary as the flight condition is altered.
Target modelling for SAR image simulation
NASA Astrophysics Data System (ADS)
Willis, Chris J.
2014-10-01
This paper examines target models that might be used in simulations of Synthetic Aperture Radar imagery. We examine the basis for scattering phenomena in SAR, and briefly review the Swerling target model set, before considering extensions to this set discussed in the literature. Methods for simulating and extracting parameters for the extended Swerling models are presented. It is shown that in many cases the more elaborate extended Swerling models can be represented, to a high degree of fidelity, by simpler members of the model set. Further, it is shown that it is quite unlikely that these extended models would be selected when fitting models to typical data samples.
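The classic Swerling fluctuation laws referenced above have simple sampling forms; a sketch drawing radar cross-section (RCS) samples for the Swerling 1/2 and 3/4 cases (illustrative mean RCS, not the extended models discussed in the paper):

```python
import random

def swerling1_rcs(mean_rcs, n, rng):
    # Swerling 1/2: exponential (chi-square, 2 DOF) RCS fluctuation,
    # typical of many independent scatterers of similar size.
    return [rng.expovariate(1.0 / mean_rcs) for _ in range(n)]

def swerling3_rcs(mean_rcs, n, rng):
    # Swerling 3/4: chi-square with 4 DOF, i.e. gamma(shape=2, scale=mean/2),
    # typical of one dominant scatterer plus many small ones.
    return [rng.gammavariate(2.0, mean_rcs / 2.0) for _ in range(n)]

rng = random.Random(42)
s1 = swerling1_rcs(1.0, 20000, rng)
s3 = swerling3_rcs(1.0, 20000, rng)
m1 = sum(s1) / len(s1)   # both sample means should approach the mean RCS of 1.0
m3 = sum(s3) / len(s3)
```

Fitting then amounts to estimating the mean (and, for extended models, extra shape parameters) from such samples, which is where the paper finds the simpler members of the set often suffice.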
Optimal Data Transmission on MIMO OFDM Channels
2008-12-01
At the transmitter, it is far simpler to build such a system using an IDFT chip, generate the overall OFDM signal in baseband and digital format, and finally…
NASA Astrophysics Data System (ADS)
Almbladh, C.-O.; Morales, A. L.
1989-02-01
Auger CVV spectra of simple metals are generally believed to be well described by one-electron-like theories in the bulk which account for matrix elements and, in some cases, also static core-hole screening effects. We present here detailed calculations on Li, Be, Na, Mg, and Al using self-consistent bulk wave functions and proper matrix elements. The resulting spectra differ markedly from experiment and peak at too low energies. To explain this discrepancy we investigate effects of the surface and dynamical effects of the sudden disappearance of the core hole in the final state. To study core-hole effects we solve the Mahan-Nozières-De Dominicis (MND) model numerically over the entire band. The core-hole potential and other parameters in the MND model are determined by self-consistent calculations of the core-hole impurity. The results are compared with simpler approximations based on the final-state rule due to von Barth and Grossmann. To study surface and mean-free-path effects we perform slab calculations for Al but use a simpler infinite-barrier model in the remaining cases. The model reproduces the slab spectra for Al with very good accuracy. In all cases investigated, either the effects of the surface or the effects of the core hole give important modifications and a much improved agreement with experiment.
One-point fitting of the flux density produced by a heliostat
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collado, Francisco J.
Accurate and simple models for the flux density reflected by an isolated heliostat should be one of the basic tools for the design and optimization of solar power tower systems. In this work, the ability and the accuracy of the Universidad de Zaragoza (UNIZAR) and the DLR (HFCAL) flux density models to fit actual energetic spots are checked against heliostat energetic images measured at Plataforma Solar de Almeria (PSA). Both fully analytic models are able to acceptably fit the spot with only one-point fitting, i.e., the measured maximum flux. As a practical validation of this one-point fitting, the intercept percentage of the measured images, i.e., the percentage of the energetic spot sent by the heliostat that reaches the receiver surface, is compared with the intercept calculated through the UNIZAR and HFCAL models. As the main conclusions, the UNIZAR and HFCAL models could be quite appropriate tools for design and optimization, provided the energetic images from the heliostats to be used in the collector field are analyzed beforehand. Also note that the HFCAL model is much simpler and slightly more accurate than the UNIZAR model. (author)
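For a circularly symmetric Gaussian flux map in the spirit of HFCAL, one-point fitting amounts to recovering the spot width from the measured peak flux; a sketch with assumed function names and illustrative numbers (the published UNIZAR/HFCAL models include further factors, e.g. cosine and attenuation terms, not reproduced here):

```python
import math

def sigma_from_peak(P, F0):
    """One-point fit of a circular-Gaussian flux map: the measured peak
    flux F0 = P / (2*pi*sigma^2) fixes the spot width sigma."""
    return math.sqrt(P / (2 * math.pi * F0))

def intercept(sigma, r):
    # Fraction of the reflected power landing inside a circular
    # receiver of radius r, for a circular Gaussian spot.
    return 1.0 - math.exp(-r**2 / (2 * sigma**2))

# Illustrative numbers: a 50 kW spot with a measured peak of 60 kW/m^2
sigma = sigma_from_peak(50e3, 60e3)   # spot width in metres
frac = intercept(sigma, 1.5)          # intercept for a 1.5 m receiver radius
```

This mirrors the paper's validation logic: fit sigma from the single peak-flux measurement, then compare the predicted intercept against the measured energetic image.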
Lamb, Berton Lee; Burkardt, Nina
2008-01-01
When Linda Pilkey-Jarvis and Orrin Pilkey state in their article, "Useless Arithmetic," that "mathematical models are simplified, generalized representations of a process or system," they probably do not mean to imply that these models are simple. Rather, the models are simpler than nature, and that is the heart of the problem with predictive models. We have had a long professional association with the developers and users of one of these simplifications of nature in the form of a mathematical model known as the Physical Habitat Simulation (PHABSIM), which is part of the Instream Flow Incremental Methodology (IFIM). The IFIM is a suite of techniques, including PHABSIM, that allows the analyst to incorporate hydrology, hydraulics, habitat, water quality, stream temperature, and other variables into a tradeoff analysis that decision makers can use to design a flow regime to meet management objectives (Stalnaker et al. 1995). Although we are not the developers of the IFIM, we have worked with those who did design it, and we have tried to understand how the IFIM and PHABSIM are actually used in decision making (King, Burkardt, and Clark 2006; Lamb 1989).
Neural Modeling of Fuzzy Controllers for Maximum Power Point Tracking in Photovoltaic Energy Systems
NASA Astrophysics Data System (ADS)
Lopez-Guede, Jose Manuel; Ramos-Hernanz, Josean; Altın, Necmi; Ozdemir, Saban; Kurt, Erol; Azkune, Gorka
2018-06-01
One field in which electronic materials have an important role is energy generation, especially within the scope of photovoltaic energy. This paper deals with one of the most relevant enabling technologies within that scope, i.e., the algorithms for maximum power point tracking implemented in direct current to direct current (DC-DC) converters, and its modeling through artificial neural networks (ANNs). More specifically, as a proof of concept, we have addressed the problem of modeling a fuzzy logic controller that has shown its performance in previous works, and more specifically the dimensionless duty-cycle signal that controls a quadratic boost converter. We achieved a very accurate model, since the obtained mean squared error is 3.47 × 10⁻⁶, the maximum error is 16.32 × 10⁻³, and the regression coefficient R is 0.99992, all for the test dataset. This neural implementation has obvious advantages, such as higher fault tolerance and a simpler implementation, dispensing with all the complex elements needed to run a fuzzy controller (fuzzifier, defuzzifier, inference engine, and knowledge base) because, ultimately, ANNs are sums and products.
Violation of the 2nd Law of Thermodynamics in the Quantum Microworld
NASA Astrophysics Data System (ADS)
Čápek, V.; Frege, O.
2002-05-01
For an open quantum system recently reported to work as a perpetuum mobile of the second kind, the basic equations underlying the discussion of the physics behind the system's activity are rederived in an appreciably simpler manner. The equations become exact in one specific scaling limit corresponding to the physical regime where internal processes (relaxations) in the system are comparable to, or even slower than, the relaxation processes induced by the bath. In the high-temperature (i.e., classical) limit, the system ceases to work, i.e., the validity of the second law is reestablished.
NASA Technical Reports Server (NTRS)
2003-01-01
Topics covered include: Real-Time, High-Frequency QRS Electrocardiograph; Software for Improved Extraction of Data From Tape Storage; Radio System for Locating Emergency Workers; Software for Displaying High-Frequency Test Data; Capacitor-Chain Successive-Approximation ADC; Simpler Alternative to an Optimum FQPSK-B Viterbi Receiver; Multilayer Patch Antenna Surrounded by a Metallic Wall; Software To Secure Distributed Propulsion Simulations; Explicit Pore Pressure Material Model in Carbon-Cloth Phenolic; Meshed-Pumpkin Super-Pressure Balloon Design; Corrosion Inhibitors as Penetrant Dyes for Radiography; Transparent Metal-Salt-Filled Polymeric Radiation Shields; Lightweight Energy Absorbers for Blast Containers; Brush-Wheel Samplers for Planetary Exploration; Dry Process for Making Polyimide/Carbon-and-Boron-Fiber Tape; Relatively Inexpensive Rapid Prototyping of Small Parts; Magnetic Field Would Reduce Electron Backstreaming in Ion Thrusters; Alternative Electrochemical Systems for Ozonation of Water; Interferometer for Measuring Displacement to Within 20 pm; UV-Enhanced IR Raman System for Identifying Biohazards; Prognostics Methodology for Complex Systems; Algorithms for Haptic Rendering of 3D Objects; Modeling and Control of Aerothermoelastic Effects; Processing Digital Imagery to Enhance Perceptions of Realism; Analysis of Designs of Space Laboratories; Shields for Enhanced Protection Against High-Speed Debris; Study of Dislocation-Ordered In(x)Ga(1-x)As/GaAs Quantum Dots; and Tilt-Sensitivity Analysis for Space Telescopes.
Haeufle, D F B; Günther, M; Wunner, G; Schmitt, S
2014-01-01
In biomechanics and biorobotics, muscles are often associated with reduced movement-control effort and simplified control compared with technical actuators. This is based on evidence that the nonlinear muscle properties positively influence movement control. It remains open, however, how to quantify the simplicity aspect of control effort and compare it between systems. Physical measures, such as energy consumption, stability, or jerk, have already been applied to compare biological and technical systems. Here a physical measure of control effort based on information entropy is presented. The idea is that control is simpler if a specific movement is generated with less processed sensor information, depending on the control scheme and the physical properties of the systems being compared. By calculating the Shannon information entropy of all sensor signals required for control, an information cost function can be formulated that allows the comparison of models of biological and technical control systems. Applied, as an example, to (bio-)mechanical models of hopping, the method reveals that the information required for generating hopping with a muscle driven by a simple reflex control scheme is only I = 32 bits, versus I = 660 bits with a DC motor and a proportional-differential controller. This approach to quantifying control effort captures the simplicity of a control scheme and can be used to compare completely different actuators and control approaches.
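The information cost function can be sketched as the summed Shannon entropy of the quantized sensor signals a controller reads (illustrative toy signals, not the hopping models from the paper):

```python
import math
from collections import Counter

def shannon_entropy_bits(samples):
    """Shannon entropy (bits per sample) of a discrete (quantized) signal."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def information_cost(sensor_signals):
    # Total processed sensor information: sum of per-signal entropies.
    return sum(shannon_entropy_bits(s) for s in sensor_signals)

# A controller reading one coarse 2-level signal vs. one fine 8-level signal:
reflex = [[0, 1, 0, 1, 1, 0, 1, 0]]   # 2 equiprobable levels -> 1 bit/sample
motor  = [[0, 1, 2, 3, 4, 5, 6, 7]]   # 8 equiprobable levels -> 3 bits/sample
```

The coarser signal carries less entropy, so by this measure the controller that gets away with it is "simpler", which is the intuition behind the 32-bit reflex versus 660-bit motor comparison.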
Building mental models by dissecting physical models.
Srivastava, Anveshna
2016-01-01
When students build physical models from prefabricated components to learn about model systems, there is an implicit trade-off between the physical degrees of freedom in building the model and the intensity of instructor supervision needed. Models that are too flexible, permitting multiple possible constructions, require greater supervision to ensure focused learning; models that are too constrained require less supervision, but can be constructed mechanically, with little to no conceptual engagement. We propose "model dissection" as an alternative to "model building," whereby instructors could make efficient use of supervisory resources while simultaneously promoting focused learning. We report empirical results from a study conducted with biology undergraduate students, where we demonstrate that asking them to "dissect" out specific conceptual structures from an already-built 3D physical model leads to a greater improvement in performance than asking them to build the 3D model from simpler components. Using questionnaires to measure understanding both before and after model-based interventions for two cohorts of students, we find that both the "builders" and the "dissectors" improve in the post-test, but it is the latter group that shows statistically significant improvement. These results, in addition to the intrinsic time-efficiency of "model dissection," suggest that it could be a valuable pedagogical tool.
An implicit semianalytic numerical method for the solution of nonequilibrium chemistry problems
NASA Technical Reports Server (NTRS)
Graves, R. A., Jr.; Gnoffo, P. A.; Boughner, R. E.
1974-01-01
The governing equations form systems of first-order differential equations. They are solved by a simple and relatively accurate implicit semianalytic technique derived from a quadrature solution of the governing equation. This method is mathematically simpler than most implicit methods and has the exponential nature of the problem embedded in the solution.
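The "exponential nature embedded in the solution" can be illustrated on a linear model equation dy/dt = P − L·y, with the production P and loss L frozen over each step (a sketch of the general idea with assumed names, not the report's exact scheme):

```python
import math

def exp_step(y, P, L, h):
    """Semianalytic (exponential) update for dy/dt = P - L*y with P and L
    frozen over the step: exact for constant coefficients, and stable even
    when L*h is large (the stiff regime typical of chemical kinetics)."""
    ye = P / L                           # local equilibrium value P/L
    return ye + (y - ye) * math.exp(-L * h)

# Stiff test problem: dy/dt = 100 - 1000*y, y(0) = 0,
# exact solution y(t) = 0.1 * (1 - exp(-1000*t)).
y, h = 0.0, 1e-2                         # step much larger than the 1e-3 time scale
for _ in range(10):
    y = exp_step(y, 100.0, 1000.0, h)
exact = 0.1 * (1.0 - math.exp(-1000.0 * 0.1))
```

An explicit method would need steps below ~2/L here to stay stable, whereas the exponential update takes steps ten times the chemical time scale without difficulty.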
Concerning the Integral ∫ dx/[x^m (1+x)]
ERIC Educational Resources Information Center
Walters, William; Huber, Michael
2010-01-01
Consider the integral ∫ dx/[x^m (1+x)]. In the "CRC Standard Mathematical Tables," this integral can require repeated integral evaluations. Enter this integral into your favourite computer algebra system, and the results may be unrecognizable. In this article, we seek to provide a simpler evaluation for integrals of this form. We state up…
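One standard route to such a simpler evaluation (textbook partial fractions; the article's own approach may differ) is the reduction formula:

```latex
\frac{1}{x^{m}(1+x)} = \frac{1}{x^{m}} - \frac{1}{x^{m-1}(1+x)}
\quad\Longrightarrow\quad
\int \frac{dx}{x^{m}(1+x)} = -\frac{1}{(m-1)\,x^{m-1}} - \int \frac{dx}{x^{m-1}(1+x)}
```

Iterating lowers the exponent one step at a time until m = 1, where ∫ dx/[x(1+x)] = ln|x/(1+x)| + C.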
Solar Cooling for Buildings. Workshop Proceedings (Los Angeles, California, February 6-8, 1974).
ERIC Educational Resources Information Center
de Winter, Francis, Ed.
A consensus has developed among U.S. solar researchers that the solar-powered cooling of buildings is an important topic. Most solar heating systems are technically simpler, and more highly developed, than solar cooling devices are. The determination of the best design concept for any particular application is not a simple process. Significant…
Forest control and regulation ... a comparison of traditional methods and alternatives
LeRoy C. Hennes; Michael J. Irving; Daniel I. Navon
1971-01-01
Two traditional techniques of forest control and regulation, formulas and area-volume check, are compared to linear programming, as used in a new computerized planning system called the Timber Resource Allocation Method (Timber RAM). Inventory data from a National Forest in California illustrate how each technique is used. The traditional methods are simpler to apply and...
ERIC Educational Resources Information Center
Hoffman, LaVae M.
2009-01-01
Purpose: This research investigated the applicability of the index of narrative microstructure (INMIS; L. M. Justice et al., 2006) system for narratives that were elicited through a wordless picture book context. In addition, the viability of an alternative, simpler metric was explored. Method: Narrative transcripts using the "Frog, Where Are…
The Media Environment of the '90s: A Period of Danger for Newspaper Journalism?
ERIC Educational Resources Information Center
McManus, John
The news media of the 1990s will probably not use videotext systems or three-dimensional holograms to replace the newspaper. Instead, simpler combinations of news media that mimic the characteristics of print may replace advertising in newspapers, causing them to downgrade journalism or increase subscription cost, thereby decreasing circulation…
A Program Structure for Event-Based Speech Synthesis by Rules within a Flexible Segmental Framework.
ERIC Educational Resources Information Center
Hill, David R.
1978-01-01
A program structure based on recently developed techniques for operating system simulation has the required flexibility for use as a speech synthesis algorithm research framework. This program makes synthesis possible with less rigid time and frequency-component structure than simpler schemes. It also meets real-time operation and memory-size…
Stem cells in the Drosophila digestive system.
Zeng, Xiankun; Chauhan, Chhavi; Hou, Steven X
2013-01-01
Adult stem cells maintain tissue homeostasis by continuously replenishing damaged, aged and dead cells in any organism. Five types of region- and organ-specific multipotent adult stem cells have been identified in the Drosophila digestive system: intestinal stem cells (ISCs) in the posterior midgut; hindgut intestinal stem cells (HISCs) at the midgut/hindgut junction; renal and nephric stem cells (RNSCs) in the Malpighian Tubules; type I gastric stem cells (GaSCs) at the foregut/midgut junction; and type II gastric stem cells (GSSCs) at the middle of the midgut. Despite the fact that each type of stem cell is unique to a particular organ, they share common molecular markers and some regulatory signaling pathways. Due to its simpler tissue structure, ease of performing genetic analysis, and availability of abundant mutants, Drosophila serves as an elegant and powerful model system to study complex stem cell biology. The recent discoveries, particularly in the Drosophila ISC system, have greatly advanced our understanding of stem cell self-renewal, differentiation, and the roles stem cells play in tissue homeostasis/regeneration and adaptive tissue growth.
Ghosal, Sayan; Gannepalli, Anil; Salapaka, Murti
2017-08-11
In this article, we explore methods that enable estimation of material properties with dynamic mode atomic force microscopy suitable for soft matter investigation. The article presents the viewpoint of casting the system, comprising a flexure probe interacting with the sample, as an equivalent cantilever system and compares a steady-state-analysis-based method with a recursive estimation technique for determining the parameters of the equivalent cantilever system in real time. The steady-state analysis of the equivalent cantilever model, which has been implicitly assumed in studies on material property determination, is validated analytically and experimentally. We show that the steady-state technique yields results that quantitatively agree with the recursive method in the domain of its validity. The steady-state technique is considerably simpler to implement but slower than the recursive technique. The parameters of the equivalent system are utilized to interpret storage and dissipative properties of the sample. Finally, the article identifies key pitfalls that need to be avoided toward the quantitative estimation of material properties.
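The abstract contrasts a steady-state analysis with real-time recursive estimation. As an illustrative sketch only, a generic recursive least-squares update of the kind commonly used for such online parameter estimation (a textbook form, not the authors' specific formulation for the equivalent cantilever) looks like:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least-squares update for the linear model y = phi @ theta.

    theta : current parameter estimate
    P     : current estimate covariance matrix
    phi   : regressor vector for this sample
    y     : measured output for this sample
    lam   : forgetting factor (1.0 = ordinary RLS)
    """
    Pphi = P @ phi
    gain = Pphi / (lam + phi @ Pphi)       # Kalman-style gain
    theta = theta + gain * (y - phi @ theta)
    P = (P - np.outer(gain, Pphi)) / lam   # covariance downdate
    return theta, P
```

Each measurement refines the estimate in O(n^2) work, which is why the recursive route is faster per sample than re-running a steady-state fit.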
A Dissipative Systems Theory for FDTD With Application to Stability Analysis and Subgridding
NASA Astrophysics Data System (ADS)
Bekmambetova, Fadime; Zhang, Xinyue; Triverio, Piero
2017-02-01
This paper establishes a far-reaching connection between the Finite-Difference Time-Domain method (FDTD) and the theory of dissipative systems. The FDTD equations for a rectangular region are written as a dynamical system having the magnetic and electric fields on the boundary as inputs and outputs. Suitable expressions for the energy stored in the region and the energy absorbed from the boundaries are introduced, and used to show that the FDTD system is dissipative under a generalized Courant-Friedrichs-Lewy condition. Based on the concept of dissipation, a powerful theoretical framework to investigate the stability of FDTD methods is devised. The new method makes FDTD stability proofs simpler, more intuitive, and modular. Stability conditions can indeed be given on the individual components (e.g. boundary conditions, meshes, embedded models) instead of the whole coupled setup. As an example of application, we derive a new subgridding method with material traverse, arbitrary grid refinement, and guaranteed stability. The method is easy to implement and has a straightforward stability proof. Numerical results confirm its stability, low reflections, and ability to handle material traverse.
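For orientation, the classical (non-generalized) CFL limit for the standard 3-D Yee FDTD scheme can be sketched as follows; the paper's dissipation-based condition generalizes this bound to composite setups:

```python
import math

def cfl_max_timestep(dx, dy, dz, c=299792458.0):
    """Largest stable time step for the standard 3-D Yee FDTD scheme under
    the classical Courant-Friedrichs-Lewy condition:

        c * dt * sqrt(1/dx**2 + 1/dy**2 + 1/dz**2) <= 1
    """
    return 1.0 / (c * math.sqrt(1.0 / dx**2 + 1.0 / dy**2 + 1.0 / dz**2))
```

On a uniform cubic grid this reduces to dt <= dx / (c * sqrt(3)), strictly tighter than the 1-D limit dx / c, which is the kind of per-component bound the dissipation framework lets one reason about modularly.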
ERIC Educational Resources Information Center
Tarhini, Ali; Hassouna, Mohammad; Abbasi, Muhammad Sharif; Orozco, Jorge
2015-01-01
Simpler is better. There are a lot of "needs" in e-Learning, and there's often a limit to the time, talent, and money that can be thrown at them individually. Contemporary pedagogy in technology and engineering disciplines, within the higher education context, champion instructional designs that emphasize peer instruction and rich…
Rea, Shane L.; Graham, Brett H.; Nakamaru-Ogiso, Eiko; Kar, Adwitiya; Falk, Marni J.
2013-01-01
The extensive conservation of mitochondrial structure, composition, and function across evolution offers a unique opportunity to expand our understanding of human mitochondrial biology and disease. By investigating the biology of much simpler model organisms, it is often possible to answer questions that are unreachable at the clinical level. Here, we review the relative utility of four different model organisms, namely the bacteria Escherichia coli, the yeast Saccharomyces cerevisiae, the nematode Caenorhabditis elegans and the fruit fly Drosophila melanogaster, in studying the role of mitochondrial proteins relevant to human disease. E. coli are single cell, prokaryotic bacteria that have proven to be a useful model system in which to investigate mitochondrial respiratory chain protein structure and function. S. cerevisiae is a single-celled eukaryote that can grow equally well by mitochondrial-dependent respiration or by ethanol fermentation, a property that has proven to be a veritable boon for investigating mitochondrial functionality. C. elegans is a multi-cellular, microscopic worm that is organized into five major tissues and has proven to be a robust model animal for in vitro and in vivo studies of primary respiratory chain dysfunction and its potential therapies in humans. Studied for over a century, D. melanogaster is a classic metazoan model system offering an abundance of genetic tools and reagents that facilitates investigations of mitochondrial biology using both forward and reverse genetics. The respective strengths and limitations of each species relative to mitochondrial studies are explored. In addition, an overview is provided of major discoveries made in mitochondrial biology in each of these four model systems. PMID:20818735
Modeling the surface tension of complex, reactive organic-inorganic mixtures
NASA Astrophysics Data System (ADS)
Schwier, A. N.; Viglione, G. A.; Li, Z.; McNeill, V. F.
2013-01-01
Atmospheric aerosols can contain thousands of organic compounds which impact aerosol surface tension, affecting aerosol properties such as cloud condensation nuclei (CCN) ability. We present new experimental data for the surface tension of complex, reactive organic-inorganic aqueous mixtures mimicking tropospheric aerosols. Each solution contained 2-6 organic compounds, including methylglyoxal, glyoxal, formaldehyde, acetaldehyde, oxalic acid, succinic acid, leucine, alanine, glycine, and serine, with and without ammonium sulfate. We test two surface tension models and find that most reactive, complex, aqueous organic mixtures which do not contain salt are well-described by a weighted Szyszkowski-Langmuir (S-L) model first presented by Henning et al. (2005). Two approaches for modeling the effects of salt were tested: (1) the Tuckermann approach (an extension of the Henning model with an additional explicit salt term), and (2) a new implicit method proposed here which employs experimental surface tension data obtained for each organic species in the presence of salt used with the Henning model. We recommend the use of method (2) for surface tension modeling because the Henning model (using data obtained from organic-inorganic systems) and the Tuckermann approach provide similar modeling fits and goodness-of-fit (χ2) values, yet the Henning model is a simpler and more physical approach to modeling the effects of salt, requiring fewer empirically determined parameters.
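A minimal sketch of the Szyszkowski-Langmuir form and a concentration-weighted mixture variant in the spirit of Henning et al. (2005); the parameter values and the exact weighting used here are illustrative assumptions, not fitted values or the precise formulation from this study:

```python
import math

SIGMA_WATER = 72.0  # surface tension of pure water near room temperature, mN/m

def szyszkowski(C, a, b, sigma_w=SIGMA_WATER):
    """Szyszkowski-Langmuir surface tension of a single-solute solution.

    C : solute concentration; a, b : empirical S-L parameters
    (a lumps the temperature factor of the usual a*T form).
    """
    return sigma_w - a * math.log(1.0 + b * C)

def weighted_szyszkowski(concs, a_params, b_params, sigma_w=SIGMA_WATER):
    """Mixture surface tension, sketched under the assumption that each
    solute's depression term is weighted by its share of the total
    solute concentration (a hypothetical simple weighting)."""
    C_tot = sum(concs)
    if C_tot == 0.0:
        return sigma_w
    depression = sum((C / C_tot) * a * math.log(1.0 + b * C_tot)
                     for C, a, b in zip(concs, a_params, b_params))
    return sigma_w - depression
```

The mixture form reduces to the single-solute equation when only one organic is present, which is a useful sanity check on any weighted S-L implementation.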
Lift Recovery for AFC-Enabled High Lift System
NASA Technical Reports Server (NTRS)
Shmilovich, Arvin; Yadlin, Yoram; Dickey, Eric D.; Gissen, Abraham N.; Whalen, Edward A.
2017-01-01
This project is a continuation of the NASA AFC-Enabled Simplified High-Lift System Integration Study contract (NNL10AA05B) performed by Boeing under the Fixed Wing Project. This task is motivated by the simplified high-lift system, which is advantageous due to the simpler mechanical system, reduced actuation power and lower maintenance costs. Additionally, the removal of the flap track fairings associated with conventional high-lift systems renders a more efficient aerodynamic configuration. Potentially, these benefits translate to an approximately 2.25% net reduction in fuel burn for a twin-engine, long-range airplane.
Lattice Dynamics of Rare Gas Multilayers on the Ag(111) Surface. Theory and Experiment.
1985-08-01
Phonon spectra are generated from simpler models, such as a nearest-neighbor central-force model, and from the Lennard-Jones 6-12 potential for the rare gases. The force constant k0 is defined from the experimentally determined second derivative of the pair potential divided by the adsorbate mass; the Barker pair-potential value for k0 is about 50% larger than the Lennard-Jones value.
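The Lennard-Jones 6-12 potential referenced above has a standard closed form; a small sketch in reduced units (illustrative only, not the report's fitted rare-gas parameters):

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones 6-12 pair potential:

        V(r) = 4*epsilon*((sigma/r)**12 - (sigma/r)**6)

    The well minimum sits at r = 2**(1/6)*sigma with depth -epsilon; the
    curvature (second derivative) at that minimum is what fixes a force
    constant like k0, which is why different pair potentials (e.g. Barker
    vs. Lennard-Jones) give different k0 values.
    """
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)
```

Evaluating near r = 2**(1/6)*sigma confirms the minimum location and depth.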
1980-12-31
Reactions involving the Pt(0)-triphenylphosphine complexes Pt(PPh3)n, where n = 2, 3, 4, have been shown to have precise analogues on Pt surfaces. In the calculations, the triphenylphosphine (PPh3) group is modeled by the simpler but chemically similar phosphine (PH3) group, with the appropriate Pt-P bond distances. Interactions with supports (typically refractory oxides) are of sufficient magnitude to suggest significant chemical and electronic modification of the metal at the metal-support interface.
MODELING MICROBUBBLE DYNAMICS IN BIOMEDICAL APPLICATIONS*
CHAHINE, Georges L.; HSIAO, Chao-Tsung
2012-01-01
Controlling microbubble dynamics to produce desirable biomedical outcomes when and where necessary, and to avoid deleterious effects, requires advanced knowledge, which can be achieved only through a combination of experimental and numerical/analytical techniques. The present communication presents a multi-physics approach to study the dynamics combining viscous-inviscid effects, liquid and structure dynamics, and multi-bubble interaction. While complex numerical tools are developed and used, the study aims at identifying the key parameters influencing the dynamics, which need to be included in simpler models. PMID:22833696
Disruptive innovation for social change.
Christensen, Clayton M; Baumann, Heiner; Ruggles, Rudy; Sadtler, Thomas M
2006-12-01
Countries, organizations, and individuals around the globe spend aggressively to solve social problems, but these efforts often fail to deliver. Misdirected investment is the primary reason for that failure. Most of the money earmarked for social initiatives goes to organizations that are structured to support specific groups of recipients, often with sophisticated solutions. Such organizations rarely reach the broader populations that could be served by simpler alternatives. There is, however, an effective way to get to those underserved populations. The authors call it "catalytic innovation." Based on Clayton Christensen's disruptive-innovation model, catalytic innovations challenge organizational incumbents by offering simpler, good-enough solutions aimed at underserved groups. Unlike disruptive innovations, though, catalytic innovations are focused on creating social change. Catalytic innovators are defined by five distinct qualities. First, they create social change through scaling and replication. Second, they meet a need that is either overserved (that is, the existing solution is more complex than necessary for many people) or not served at all. Third, the products and services they offer are simpler and cheaper than alternatives, but recipients view them as good enough. Fourth, they bring in resources in ways that initially seem unattractive to incumbents. And fifth, they are often ignored, put down, or even encouraged by existing organizations, which don't see the catalytic innovators' solutions as viable. As the authors show through examples in health care, education, and economic development, both nonprofit and for-profit groups are finding ways to create catalytic innovation that drives social change.
Hybrid slab-microchannel gel electrophoresis system
Balch, Joseph W.; Carrano, Anthony V.; Davidson, James C.; Koo, Jackson C.
1998-01-01
A hybrid slab-microchannel gel electrophoresis system. The hybrid system permits the fabrication of isolated microchannels for biomolecule separations without imposing the constraint of a totally sealed system. The hybrid system is reusable and ultimately much simpler and less costly to manufacture than a closed channel plate system. The hybrid system incorporates a microslab portion of the separation medium above the microchannels, thus at least substantially reducing the possibility of non-uniform field distribution and breakdown due to uncontrollable leakage. A microslab of the sieving matrix is built into the system by using plastic spacer materials and is used to uniformly couple the top plate with the bottom microchannel plate.
Unexpected Results are Usually Wrong, but Often Interesting
NASA Astrophysics Data System (ADS)
Huber, M.
2014-12-01
In climate modeling, an unexpected result is usually wrong, arising from some sort of mistake. Despite the fact that we all bemoan uncertainty in climate, the field is underlain by a robust, successful body of theory and any properly conducted modeling experiment is posed and conducted within that context. Consequently, if results from a complex climate model disagree with theory or from expectations from simpler models, much skepticism is in order. But, this exposes the fundamental tension of using complex, sophisticated models. If simple models and theory were perfect there would be no reason for complex models--the entire point of sophisticated models is to see if unexpected phenomena arise as emergent properties of the system. In this talk, I will step through some paleoclimate examples, drawn from my own work, of unexpected results that emerge from complex climate models arising from mistakes of two kinds. The first kind of mistake, is what I call a 'smart mistake'; it is an intentional incorporation of assumptions, boundary conditions, or physics that is in violation of theoretical or observational constraints. The second mistake, a 'dumb mistake', is just that, an unintentional violation. Analysis of such mistaken simulations provides some potentially novel and certainly interesting insights into what is possible and right in paleoclimate modeling by forcing the reexamination of well-held assumptions and theories.
Cirrus Parcel Model Comparison Project. Phase 1
NASA Technical Reports Server (NTRS)
Lin, Ruei-Fong; Starr, David O'C.; DeMott, Paul J.; Cotton, Richard; Jensen, Eric; Sassen, Kenneth
2000-01-01
The Cirrus Parcel Model Comparison (CPMC) is a project of the GEWEX Cloud System Study Working Group on Cirrus Cloud Systems (GCSS WG2). The primary goal of this project is to identify cirrus model sensitivities to the state of our knowledge of nucleation and microphysics. Furthermore, the common ground of the findings may provide guidelines for models with simpler cirrus microphysics modules. We focus on the nucleation regimes of the warm (parcel starting at -40 C and 340 hPa) and cold (-60 C and 170 hPa) cases studied in the GCSS WG2 Idealized Cirrus Model Comparison Project. Nucleation and ice crystal growth were forced through an externally imposed rate of lift and consequent adiabatic cooling. The background haze particles are assumed to be lognormally-distributed H2SO4 particles. Only the homogeneous nucleation mode is allowed to form ice crystals in the HN-ONLY runs; all nucleation modes are switched on in the ALL-MODE runs. Participants were asked to run the HN-lambda-fixed runs by setting lambda = 2 (lambda is further discussed in section 2) or tailoring the nucleation rate calculation in agreement with lambda = 2 (exp 1). The depth of parcel lift (800 m) was set to assure that parcels underwent complete transition through the nucleation regime to a stage of approximate equilibrium between ice mass growth and vapor supplied by the specified updrafts.
Nakajima, Yujiro; Kadoya, Noriyuki; Kanai, Takayuki; Ito, Kengo; Sato, Kiyokazu; Dobashi, Suguru; Yamamoto, Takaya; Ishikawa, Yojiro; Matsushita, Haruo; Takeda, Ken; Jingu, Keiichi
2016-07-01
Irregular breathing can influence the outcome of 4D computed tomography imaging and cause artifacts. Visual biofeedback systems associated with a patient-specific guiding waveform are known to reduce respiratory irregularities. In Japan, abdomen and chest motion self-control devices (Abches) (representing simpler visual coaching techniques without a guiding waveform) are used instead; however, no studies have compared these two systems to date. Here, we evaluate the effectiveness of respiratory coaching in reducing respiratory irregularities by comparing two respiratory management systems. We collected data from 11 healthy volunteers. Bar and wave models were used as visual biofeedback systems. Abches consisted of a respiratory indicator indicating the end of each expiration and inspiration motion. Respiratory variations were quantified as the root mean squared error (RMSE) of displacement and period of breathing cycles. All coaching techniques improved respiratory variation compared with free-breathing. Displacement RMSEs were 1.43 ± 0.84, 1.22 ± 1.13, 1.21 ± 0.86 and 0.98 ± 0.47 mm for free-breathing, Abches, bar model and wave model, respectively. Period RMSEs were 0.48 ± 0.42, 0.33 ± 0.31, 0.23 ± 0.18 and 0.17 ± 0.05 s for free-breathing, Abches, bar model and wave model, respectively. The average reductions in displacement and period RMSE achieved with the wave model were 27% and 47%, respectively. For variation in both displacement and period, the wave model was superior to the other techniques. Our results showed that visual biofeedback combined with a wave model could potentially provide clinical benefits in respiratory management, although all techniques were able to reduce respiratory irregularities. © The Author 2016. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
NASA Astrophysics Data System (ADS)
Zheng, Yuan-Fang
A three-dimensional, five-link biped system is established. A Newton-Euler state-space formulation is employed to derive the equations of the system. The constraint forces involved in the equations can be eliminated by projection onto a smaller state-space system for deriving advanced control laws. A model-referenced adaptive control scheme is developed to control the system. Digital computer simulations of point-to-point movement are carried out to show that the model-referenced adaptive control increases the dynamic range and speeds up the response of the system in comparison with linear and nonlinear feedback control. Further, the implementation of the controller is simpler. Impact effects of biped contact with the environment are modeled and studied. The instant velocity change at the moment of impact is derived as a function of the biped state and contact speed. The effects of impact on the state, as well as on constraints, are studied for the biped landing on heels and toes simultaneously or on toes first. Rate and nonlinear position feedback are employed for stability of the biped after the impact. The complex structure of the foot is properly modeled. A spring and dashpot pair is suggested to represent the action of the plantar fascia during the impact. This action prevents the arch of the foot from collapsing. A mathematical model of the skeletal muscle is discussed. A direct relationship between the stimulus rate and the active state is established. A piecewise linear relation between the length of the contractile element and the isometric force is considered. Hill's characteristic equation is maintained for determining the actual output force at different shortening velocities. A physical threshold model is proposed for recruitment which encompasses the size principle, its manifestations, and exceptions to the size principle. Finally, the role of spindle feedback in the stability of the model is demonstrated by study of a pair of muscles.
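The instant velocity change at impact described above is commonly written, for a rigid and perfectly inelastic contact, as v_plus = v_minus - M^{-1} J^T (J M^{-1} J^T)^{-1} J v_minus, where M is the joint-space inertia matrix and J the contact Jacobian. A generic sketch of that standard rigid-impact map (not necessarily the thesis's exact derivation):

```python
import numpy as np

def impact_velocity_change(M, J, v_minus):
    """Instantaneous generalized-velocity change for a rigid, perfectly
    inelastic impact: the impulse zeroes the contact-point velocity J @ v.

        v_plus = v_minus - M^{-1} J^T (J M^{-1} J^T)^{-1} J v_minus
    """
    Minv_Jt = np.linalg.solve(M, J.T)                  # M^{-1} J^T
    impulse = np.linalg.solve(J @ Minv_Jt, J @ v_minus)
    return v_minus - Minv_Jt @ impulse
```

By construction the post-impact contact-point velocity J @ v_plus is zero, which is the defining property of the inelastic impact map.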
Web-services-based spatial decision support system to facilitate nuclear waste siting
NASA Astrophysics Data System (ADS)
Huang, L. Xinglai; Sheng, Grant
2006-10-01
The availability of spatial web services enables data sharing among managers, decision and policy makers and other stakeholders in much simpler ways than before and subsequently has created completely new opportunities in the process of spatial decision making. Though generally designed for a certain problem domain, web-services-based spatial decision support systems (WSDSS) can provide a flexible problem-solving environment to explore the decision problem, understand and refine problem definition, and generate and evaluate multiple alternatives for decision. This paper presents a new framework for the development of a web-services-based spatial decision support system. The WSDSS is comprised of distributed web services that either have their own functions or provide different geospatial data and may reside in different computers and locations. WSDSS includes six key components, namely: database management system, catalog, analysis functions and models, GIS viewers and editors, report generators, and graphical user interfaces. In this study, the architecture of a web-services-based spatial decision support system to facilitate nuclear waste siting is described as an example. The theoretical, conceptual and methodological challenges and issues associated with developing web services-based spatial decision support system are described.
SU-E-J-192: Comparative Effect of Different Respiratory Motion Management Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakajima, Y; Kadoya, N; Ito, K
Purpose: Irregular breathing can influence the outcome of four-dimensional computed tomography imaging by causing artifacts. Audio-visual biofeedback systems associated with a patient-specific guiding waveform are known to reduce respiratory irregularities. In Japan, abdomen and chest motion self-control devices (Abches), representing simpler visual coaching techniques without a guiding waveform, are used instead; however, no studies have compared these two systems to date. Here, we evaluate the effectiveness of respiratory coaching in reducing respiratory irregularities by comparing two respiratory management systems. Methods: We collected data from eleven healthy volunteers. Bar and wave models were used as audio-visual biofeedback systems. Abches consisted of a respiratory indicator indicating the end of each expiration and inspiration motion. Respiratory variations were quantified as the root mean squared error (RMSE) of displacement and period of breathing cycles. Results: All coaching techniques improved respiratory variation compared to free breathing. Displacement RMSEs were 1.43 ± 0.84, 1.22 ± 1.13, 1.21 ± 0.86, and 0.98 ± 0.47 mm for free breathing, Abches, bar model, and wave model, respectively. Free breathing and wave model differed significantly (p < 0.05). Period RMSEs were 0.48 ± 0.42, 0.33 ± 0.31, 0.23 ± 0.18, and 0.17 ± 0.05 s for free breathing, Abches, bar model, and wave model, respectively. Free breathing and all coaching techniques differed significantly (p < 0.05). For variation in both displacement and period, the wave model was superior to free breathing, the bar model, and Abches. The average reductions in displacement and period RMSE achieved with the wave model were 27% and 47%, respectively. Conclusion: We evaluated the efficacy of audio-visual biofeedback in reducing respiratory irregularity compared with Abches.
Our results showed that audio-visual biofeedback combined with a wave model can potentially provide clinical benefits in respiratory management, although all techniques could reduce respiratory irregularities.
Experimental non-classicality of an indivisible quantum system.
Lapkiewicz, Radek; Li, Peizhe; Schaeff, Christoph; Langford, Nathan K; Ramelow, Sven; Wieśniak, Marcin; Zeilinger, Anton
2011-06-22
In contrast to classical physics, quantum theory demands that not all properties can be simultaneously well defined; the Heisenberg uncertainty principle is a manifestation of this fact. Alternatives have been explored--notably theories relying on joint probability distributions or non-contextual hidden-variable models, in which the properties of a system are defined independently of their own measurement and any other measurements that are made. Various deep theoretical results imply that such theories are in conflict with quantum mechanics. Simpler cases demonstrating this conflict have been found and tested experimentally with pairs of quantum bits (qubits). Recently, an inequality satisfied by non-contextual hidden-variable models and violated by quantum mechanics for all states of two qubits was introduced and tested experimentally. A single three-state system (a qutrit) is the simplest system in which such a contradiction is possible; moreover, the contradiction cannot result from entanglement between subsystems, because such a three-state system is indivisible. Here we report an experiment with single photonic qutrits which provides evidence that no joint probability distribution describing the outcomes of all possible measurements--and, therefore, no non-contextual theory--can exist. Specifically, we observe a violation of the Bell-type inequality found by Klyachko, Can, Binicioğlu and Shumovsky. Our results illustrate a deep incompatibility between quantum mechanics and classical physics that cannot in any way result from entanglement.
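The Klyachko-Can-Binicioğlu-Shumovsky (KCBS) construction can be checked numerically: five qutrit measurement directions forming a pentagram, with adjacent directions orthogonal, yield a sum of correlations below the noncontextual bound of -3. A sketch using the standard textbook vectors and state (not the experimental settings of this work):

```python
import math
import numpy as np

def kcbs_quantum_value():
    """Quantum value of the KCBS sum S = sum_i <A_i A_(i+1)> for a qutrit.

    Uses the standard pentagram of five unit vectors (adjacent ones
    orthogonal, azimuths spaced by 4*pi/5), the observables
    A_i = 2|v_i><v_i| - I, and the state |psi> = (0, 0, 1).
    Noncontextual hidden-variable models require S >= -3; quantum
    mechanics reaches 5 - 4*sqrt(5) ~ -3.944.
    """
    c = math.cos(math.pi / 5.0)
    cos_theta = math.sqrt(c / (1.0 + c))     # polar angle of each vector
    sin_theta = math.sqrt(1.0 - cos_theta**2)
    vs = [np.array([sin_theta * math.cos(4.0 * math.pi * i / 5.0),
                    sin_theta * math.sin(4.0 * math.pi * i / 5.0),
                    cos_theta]) for i in range(5)]
    psi = np.array([0.0, 0.0, 1.0])
    A = [2.0 * np.outer(v, v) - np.eye(3) for v in vs]
    return sum(psi @ (A[i] @ A[(i + 1) % 5]) @ psi for i in range(5))
```

Adjacent observables commute (their projectors are orthogonal), so each pairwise correlation is well defined, yet the total violates the noncontextual bound.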
Goal Directed Model Inversion: A Study of Dynamic Behavior
NASA Technical Reports Server (NTRS)
Colombano, Silvano P.; Compton, Michael; Raghavan, Bharathi; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
Goal Directed Model Inversion (GDMI) is an algorithm designed to generalize supervised learning to the case where target outputs are not available to the learning system. The output of the learning system becomes the input to some external device or transformation, and only the output of this device or transformation can be compared to a desired target. The fundamental driving mechanism of GDMI is to learn from success. Given that a wrong outcome is achieved, one notes that the action that produced that outcome "would have been right if the outcome had been the desired one." The algorithm then proceeds as follows: (1) store the action that produced the wrong outcome as a "target"; (2) redefine the wrong outcome as a desired goal; (3) submit the new desired goal to the system; (4) compare the new action with the target action and modify the system by using a suitable algorithm for credit assignment (back-propagation in our example); (5) resubmit the original goal. Prior publications by our group in this area focused on demonstrating empirical results based on the inverse kinematic problem for a simulated robotic arm. In this paper we apply the inversion process to much simpler analytic functions in order to elucidate the dynamic behavior of the system and to determine the sensitivity of the learning process to various parameters. This understanding will be necessary for the acceptance of GDMI as a practical tool.
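Steps (1)-(5) can be sketched on a toy scalar problem, where the external transformation is f(a) = 2a and the learner is a single weight w (my illustrative reduction, not the paper's neural-network setup); learning from success drives the learner toward the inverse, w ~ 0.5:

```python
def gdmi_train(f, w=0.2, lr=0.1, goal=1.0, iters=2000):
    """Toy Goal Directed Model Inversion with a scalar linear learner a = w*g
    and an external transformation f known only through evaluation.

    Each round follows steps (1)-(5): act on the goal, observe the (wrong)
    outcome, treat the taken action as the correct action *for that
    outcome*, update the learner toward it (simple squared-error gradient
    step standing in for back-propagation), then resubmit the goal.
    """
    for _ in range(iters):
        action = w * goal              # act on the desired goal
        outcome = f(action)            # external device produces an outcome
        target_action = action         # (1) this action was "right" for...
        predicted = w * outcome        # (2)-(3) ...the outcome as new goal
        # (4) credit assignment: gradient of (predicted - target)^2 w.r.t. w
        w -= lr * (predicted - target_action) * outcome
        # (5) the loop resubmits the original goal
    return w
```

With f(a) = 2a the nonzero fixed point of the update is w = 0.5, i.e. the learner inverts the external transformation without ever seeing a target action directly.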
Finding idle machines in a workstation-based distributed system
NASA Technical Reports Server (NTRS)
Theimer, Marvin M.; Lantz, Keith A.
1989-01-01
The authors describe the design and performance of scheduling facilities for finding idle hosts in a workstation-based distributed system. They focus on the tradeoffs between centralized and decentralized architectures with respect to scalability, fault tolerance, and simplicity of design, as well as several implementation issues of interest when multicast communication is used. They conclude that the principal tradeoff between the two approaches is that a centralized architecture can be scaled to a significantly greater degree and can more easily monitor global system statistics, whereas a decentralized architecture is simpler to implement.
NASA Technical Reports Server (NTRS)
Aragone, C.
1993-01-01
We introduce a new set of squeezed states through the coupled two-mode squeezed operator. It is shown that their behavior is simpler than the correlated coherent states introduced by Dodonov, Kurmyshev, and Man'ko in order to quantum mechanically describe the Landau system, i.e., a planar charged particle in a uniform magnetic field. We compare results for both sets of squeezed states.
ERIC Educational Resources Information Center
Southworth, Glen
Reducing the costs of teaching by television through slow-scan methods is discussed. Conventional television is costly to use, largely because the wide-band communications circuits required are in limited supply. One technical answer is bandwidth compression to fit an image into less spectrum space. A simpler and far less costly answer is to…
NASA Astrophysics Data System (ADS)
Huyakorn, P. S.; Panday, S.; Wu, Y. S.
1994-06-01
A three-dimensional, three-phase numerical model is presented for simulating the movement of non-aqueous-phase liquids (NAPLs) through porous and fractured media. The model is designed for practical application to a wide variety of contamination and remediation scenarios involving light or dense NAPLs in heterogeneous subsurface systems. The model formulation is first derived for three-phase flow of water, NAPL, and air (or vapor) in porous media. The formulation is then extended to handle fractured systems using the dual-porosity and discrete-fracture modeling approaches. The model accommodates a wide variety of boundary conditions, including withdrawal and injection well conditions, which are treated rigorously using fully implicit schemes. The three-phase formulation collapses to simpler forms when air-phase dynamics are neglected, capillary effects are neglected, or two-phase air-liquid or liquid-liquid systems with one or two active phases are considered. A Galerkin procedure with upstream weighting of fluid mobilities, storage-matrix lumping, and fully implicit treatment of nonlinear coefficients and well conditions is used. A variety of nodal connectivity schemes leading to finite-difference, finite-element, and hybrid spatial approximations in three dimensions are incorporated in the formulation. Selection of primary variables and evaluation of the terms of the Jacobian matrix for the Newton-Raphson linearized equations is discussed. The various nodal lattice options, and their significance for computational time and memory requirements with regard to the block-Orthomin solution scheme, are noted. Aggressive time-stepping schemes and under-relaxation formulas implemented in the code further alleviate the computational burden.
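The Newton-Raphson linearization mentioned above can be sketched generically: linearize the residual, solve for the update, repeat until convergence (illustrative residual and Jacobian below, not the model's actual three-phase flow equations):

```python
import numpy as np

def newton_raphson(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Fully implicit Newton-Raphson iteration for R(x) = 0.

    At each step solve the linearized system J(x_k) dx = -R(x_k)
    and update x_(k+1) = x_k + dx until ||R|| drops below tol.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        x = x + np.linalg.solve(jacobian(x), -r)
    return x
```

As a usage example, solving the hypothetical system x^2 + y^2 = 2, x = y from the starting guess (1.5, 0.5) converges quadratically to (1, 1).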
Estimating the system price of redox flow batteries for grid storage
NASA Astrophysics Data System (ADS)
Ha, Seungbum; Gallagher, Kevin G.
2015-11-01
Low-cost energy storage systems are required to support extensive deployment of intermittent renewable energy on the electricity grid. Redox flow batteries have potential advantages to meet the stringent cost target for grid applications as compared to more traditional batteries based on an enclosed architecture. However, the manufacturing process, and therefore the potential high-volume production price, of redox flow batteries is largely unquantified. We present a comprehensive assessment of a prospective production process for an aqueous all-vanadium flow battery and a nonaqueous lithium polysulfide flow battery. The estimated investment and variable costs are translated to fixed expenses, profit, and warranty as a function of production volume. When compared to lithium-ion batteries, redox flow batteries are estimated to exhibit lower costs of manufacture, here calculated as the unit price less materials costs, owing to their simpler reactor (cell) design, lower required area, and thus simpler manufacturing process. Redox flow batteries are also projected to achieve the majority of manufacturing scale benefits at lower production volumes as compared to lithium-ion. However, this advantage is offset by the dramatically lower present production volume of flow batteries compared to competitive technologies such as lithium-ion.
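Scale effects of the kind discussed here are commonly captured with a power-law experience curve on top of a materials-cost floor. A sketch under that assumption; every number below is a hypothetical placeholder, not a value fitted in the paper:

```python
def unit_price(volume_mwh_per_year, c_materials=150.0, c_ref=250.0,
               v_ref=100.0, b=0.3):
    """Illustrative unit price ($/kWh): a materials floor plus a
    manufacturing contribution that falls with annual production volume
    along a power-law experience curve with exponent b. All parameter
    values are hypothetical."""
    scale = (volume_mwh_per_year / v_ref) ** (-b)
    return c_materials + c_ref * scale
```

The floor term captures why manufacturing-cost advantages shrink in relative importance once materials dominate the unit price.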
Chapter 11: Concentrating Solar Power
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turchi, Craig S; Stekli, J.; Bueno, P. C.
2017-01-02
This chapter summarizes the applications of the supercritical CO2 (sCO2) Brayton cycle in concentrating solar power (CSP) plants. The design and operation of CSP plants are reviewed to highlight the requirements for the power cycle and attributes that are advantageous for the solar-thermal application. The sCO2 Brayton cycle offers the potential of higher cycle efficiency versus superheated or supercritical steam cycles at temperatures relevant for CSP applications. In addition, Brayton cycle systems using sCO2 are anticipated to have smaller weight and volume, lower thermal mass, and less complex power blocks compared with Rankine cycles due to the higher density of the fluid and simpler cycle design. The simpler machinery and compact size of the sCO2 process may also reduce the installation, maintenance, and operation cost of the system. Power cycle capacities in the range of 10-150 MWe are anticipated for the CSP application. In this chapter, we explore sCO2 Brayton cycle configurations that have attributes that are desirable from the perspective of a CSP application, such as the ability to accommodate dry cooling and daily cycling, as well as integration with thermal energy storage.
Flatness-based model inverse for feed-forward braking control
NASA Astrophysics Data System (ADS)
de Vries, Edwin; Fehn, Achim; Rixen, Daniel
2010-12-01
For modern cars an increasing number of driver assistance systems have been developed. Some of these systems interfere/assist with the braking of a car. Here, a brake actuation algorithm for each individual wheel that can respond to both driver inputs and artificial vehicle deceleration set points is developed. The algorithm consists of a feed-forward control that ensures, within the modelled system plant, the optimal behaviour of the vehicle. For the quarter-car model with a LuGre tyre behavioural model, an inverse model can be derived using v_x as the 'flat output', that is, the input for the inverse model. A number of time derivatives of the flat output are required to calculate the model input, brake torque. Polynomial trajectory planning provides the needed time derivatives of the deceleration request. The transition time of the planning can be adjusted to meet actuator constraints. It is shown that the output of the trajectory planning would ripple and introduce a time delay when a gradual continuous increase of deceleration is requested by the driver. Derivative filters are then considered: the Bessel filter provides the best symmetry in its step response. A filter of the same order and with negative real poles is also used, exhibiting neither overshoot nor ringing. For these reasons, the 'real-poles' filter would be preferred over the Bessel filter. The half-car model can be used to predict the change in normal load on the front and rear axle due to the pitching of the vehicle. The anticipated dynamic variation of the wheel load can be included in the inverse model, even though it is based on a quarter-car. Brake force distribution proportional to normal load is established. It provides more natural and simpler equations than a fixed force ratio strategy.
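The trajectory-planning step can be sketched with a smooth polynomial transition that supplies the time derivatives the flatness-based inverse model needs. The quintic profile below is an assumption for illustration; the paper's actual polynomial order is not specified here:

```python
def quintic_transition(a_target, T, t):
    """Deceleration request rising smoothly from 0 to a_target over [0, T]
    along the quintic profile s(tau) = 10*tau^3 - 15*tau^4 + 6*tau^5.
    Returns (a, da/dt, d2a/dt2); both derivatives vanish at the endpoints,
    so the planned trajectory meets actuator rate constraints by choosing T."""
    tau = min(max(t / T, 0.0), 1.0)           # clamp to the transition window
    s   = 10*tau**3 - 15*tau**4 + 6*tau**5
    ds  = (30*tau**2 - 60*tau**3 + 30*tau**4) / T
    dds = (60*tau - 180*tau**2 + 120*tau**3) / T**2
    return a_target * s, a_target * ds, a_target * dds
```

Stretching T trades response time against peak derivative magnitude, which is exactly the actuator-constraint knob the abstract mentions.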
Modelling DNA origami self-assembly at the domain level.
Dannenberg, Frits; Dunn, Katherine E; Bath, Jonathan; Kwiatkowska, Marta; Turberfield, Andrew J; Ouldridge, Thomas E
2015-10-28
We present a modelling framework, and basic model parameterization, for the study of DNA origami folding at the level of DNA domains. Our approach is explicitly kinetic and does not assume a specific folding pathway. The binding of each staple is associated with a free-energy change that depends on staple sequence, the possibility of coaxial stacking with neighbouring domains, and the entropic cost of constraining the scaffold by inserting staple crossovers. A rigorous thermodynamic model is difficult to implement as a result of the complex, multiply connected geometry of the scaffold: we present a solution to this problem for planar origami. Coaxial stacking of helices and entropic terms, particularly when loop closure exponents are taken to be larger than those for ideal chains, introduce interactions between staples. These cooperative interactions lead to the prediction of sharp assembly transitions with notable hysteresis that are consistent with experimental observations. We show that the model reproduces the experimentally observed consequences of reducing staple concentration, accelerated cooling, and absent staples. We also present a simpler methodology that gives consistent results and can be used to study a wider range of systems including non-planar origami.
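The explicitly kinetic character of the framework can be illustrated with a toy kinetic Monte Carlo of a single staple domain; real staples in the model interact through the stacking and loop-entropy terms described above, which are omitted in this sketch:

```python
import math
import random

def staple_occupancy(dG_kT, k_on=1.0, n_steps=20000, seed=1):
    """Kinetic Monte Carlo of one staple domain binding and unbinding.
    Detailed balance fixes k_off = k_on*exp(dG_kT), where dG_kT is the
    binding free-energy change in kT units (negative = favourable).
    Returns the fraction of simulated time spent bound."""
    rng = random.Random(seed)
    k_off = k_on * math.exp(dG_kT)
    bound, t_bound, t_total = False, 0.0, 0.0
    for _ in range(n_steps):
        rate = k_off if bound else k_on
        dwell = -math.log(1.0 - rng.random()) / rate   # exponential dwell time
        if bound:
            t_bound += dwell
        t_total += dwell
        bound = not bound                               # each event toggles state
    return t_bound / t_total
```

The long-run occupancy converges to the thermodynamic value 1/(1 + exp(dG/kT)); cooperative staple-staple terms are what shift these effective free energies in the full model.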
Müllerová, Ludmila; Dubský, Pavel; Gaš, Bohuslav
2015-03-06
Interactions among analyte forms that undergo simultaneous dissociation/protonation and complexation with multiple selectors take the shape of a highly interconnected multi-equilibrium scheme. This makes it difficult to express the effective mobility of the analyte in these systems, which are often encountered in electrophoretic separations, unless a generalized model is introduced. In the first part of this series, we presented the theory of electromigration of a multivalent weakly acidic/basic/amphoteric analyte undergoing complexation with a mixture of an arbitrary number of selectors. In this work we demonstrate the validity of this concept experimentally. The theory leads to three useful perspectives, each of which is closely related to the one originally formulated for simpler systems. If pH, ionic strength (IS) and the selector mixture composition are all kept constant, the system is treated as if only a single analyte form interacted with a single selector. If the pH changes at constant IS and mixture composition, the already well-established models of a weakly acidic/basic analyte interacting with a single selector can be employed. Varying the mixture composition at constant IS and pH leads to a situation where virtually a single analyte form interacts with a mixture of selectors. We show how to switch between the three perspectives in practice and confirm that they can be employed interchangeably according to the specific needs by measurements performed in single- and dual-selector systems at a pH where the analyte is fully dissociated, partly dissociated or fully protonated. Weak monoprotic analyte (R-flurbiprofen) and two selectors (native β-cyclodextrin and monovalent positively charged 6-monodeoxy-6-monoamino-β-cyclodextrin) serve as a model system. Copyright © 2015 Elsevier B.V. All rights reserved.
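In the fixed-pH, fixed-IS perspective, the effective mobility reduces to the familiar single-form, multi-selector weighted average. A sketch under that assumption, with symbols (complexation constants K_i, selector concentrations c_i, complex mobilities mu_i) chosen for illustration:

```python
def effective_mobility(mu_free, selectors):
    """Effective mobility of a single analyte form complexing with a
    mixture of selectors. Each selector is (K, c, mu_complex); the result
    is the equilibrium-weighted average of free and complexed mobilities:
    (mu_free + sum K_i*c_i*mu_i) / (1 + sum K_i*c_i)."""
    num = mu_free + sum(K * c * mu for K, c, mu in selectors)
    den = 1.0 + sum(K * c for K, c, _ in selectors)
    return num / den
```

As K*c grows the mobility saturates at the complex mobility, which is the limiting behaviour the single-selector models also predict.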
Lascola, Robert; O'Rourke, Patrick E.; Kyser, Edward A.
2017-10-05
Here, we have developed a piecewise local (PL) partial least squares (PLS) analysis method for total plutonium measurements by absorption spectroscopy in nitric acid-based nuclear material processing streams. Instead of using a single PLS model that covers all expected solution conditions, the method selects one of several local models based on an assessment of solution absorbance, acidity, and Pu oxidation state distribution. The local models match the global model for accuracy against the calibration set, but were observed in several instances to be more robust to variations associated with measurements in the process. The improvements are attributed to the relative parsimony of the local models. Not all of the sources of spectral variation are uniformly present at each part of the calibration range. Thus, the global model is locally overfitting and susceptible to increased variance when presented with new samples. A second set of models quantifies the relative concentrations of Pu(III), (IV), and (VI). Standards containing a mixture of these species were not at equilibrium due to a disproportionation reaction. Therefore, a separate principal component analysis is used to estimate the concentrations of the individual oxidation states in these standards in the absence of independent confirmatory analysis. The PL analysis approach is generalizable to other systems where the analysis of chemically complicated systems can be aided by rational division of the overall range of solution conditions into simpler sub-regions.
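The model-selection step of a piecewise local approach can be sketched as a lookup over validity windows; the window boundaries and model names below are hypothetical, not the paper's calibrated sub-regions:

```python
def select_local_model(absorbance, acidity_M, models):
    """Return the first local model whose validity window covers the
    measured solution conditions, falling back to a global model.
    `models` is a list of (name, (abs_lo, abs_hi), (acid_lo, acid_hi));
    all window values here are illustrative."""
    for name, (a_lo, a_hi), (h_lo, h_hi) in models:
        if a_lo <= absorbance <= a_hi and h_lo <= acidity_M <= h_hi:
            return name
    return "global"
```

Each local model then only needs to explain the spectral variation actually present in its sub-region, which is the parsimony argument made above.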
A novel patient-specific model to compute coronary fractional flow reserve.
Kwon, Soon-Sung; Chung, Eui-Chul; Park, Jin-Seo; Kim, Gook-Tae; Kim, Jun-Woo; Kim, Keun-Hong; Shin, Eun-Seok; Shim, Eun Bo
2014-09-01
The fractional flow reserve (FFR) is a widely used clinical index to evaluate the functional severity of coronary stenosis. A computer simulation method based on patients' computed tomography (CT) data is a plausible non-invasive approach for computing the FFR. This method can provide a detailed solution for the stenosed coronary hemodynamics by coupling computational fluid dynamics (CFD) with the lumped parameter model (LPM) of the cardiovascular system. In this work, we have implemented a simple computational method to compute the FFR. As this method uses only coronary arteries for the CFD model and includes only the LPM of the coronary vascular system, it provides simpler boundary conditions for the coronary geometry and is computationally more efficient than existing approaches. To test the efficacy of this method, we simulated a three-dimensional straight vessel using CFD coupled with the LPM. The computed results were compared with those of the LPM. To validate this method in terms of clinically realistic geometry, a patient-specific model of stenosed coronary arteries was constructed from CT images, and the computed FFR was compared with clinically measured results. We evaluated the effect of a model aorta on the computed FFR and compared this with a model without the aorta. Computationally, the model without the aorta was more efficient than that with the aorta, reducing the CPU time required for computing a cardiac cycle to 43.4%. Copyright © 2014. Published by Elsevier Ltd.
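Once the coupled CFD-LPM solution provides the pressure traces, the clinical index itself is a simple ratio of mean pressures. A sketch; the sampled traces in the test are illustrative numbers, not patient data:

```python
def fractional_flow_reserve(p_distal, p_aortic):
    """FFR computed as mean hyperaemic pressure distal to the stenosis
    divided by mean aortic pressure. Inputs are pressure samples (mmHg)
    over one or more cardiac cycles; values near 1.0 indicate a
    functionally insignificant stenosis."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(p_distal) / mean(p_aortic)
```

In the simulation pipeline described above, p_distal comes from the 3D CFD domain and p_aortic from the lumped-parameter boundary model.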
Multiplex High-Throughput Targeted Proteomic Assay To Identify Induced Pluripotent Stem Cells.
Baud, Anna; Wessely, Frank; Mazzacuva, Francesca; McCormick, James; Camuzeaux, Stephane; Heywood, Wendy E; Little, Daniel; Vowles, Jane; Tuefferd, Marianne; Mosaku, Olukunbi; Lako, Majlinda; Armstrong, Lyle; Webber, Caleb; Cader, M Zameel; Peeters, Pieter; Gissen, Paul; Cowley, Sally A; Mills, Kevin
2017-02-21
Induced pluripotent stem cells have great potential as a human model system in regenerative medicine, disease modeling, and drug screening. However, their use in medical research is hampered by laborious reprogramming procedures that yield low numbers of induced pluripotent stem cells. For further applications in research, only the best, competent clones should be used. The standard assays for pluripotency are based on genomic approaches, which take up to 1 week to perform and incur significant cost. Therefore, there is a need for a rapid and cost-effective assay able to distinguish between pluripotent and nonpluripotent cells. Here, we describe a novel multiplexed, high-throughput, and sensitive peptide-based multiple reaction monitoring mass spectrometry assay, allowing for the identification and absolute quantitation of multiple core transcription factors and pluripotency markers. This assay provides simpler, high-throughput classification of cells as either pluripotent or nonpluripotent in a 7-minute analysis, while being more cost-effective than conventional genomic tests.
An integrated approach to rotorcraft human factors research
NASA Technical Reports Server (NTRS)
Hart, Sandra G.; Hartzell, E. James; Voorhees, James W.; Bucher, Nancy M.; Shively, R. Jay
1988-01-01
As the potential of civil and military helicopters has increased, more complex and demanding missions in increasingly hostile environments have been required. Users, designers, and manufacturers have an urgent need for information about human behavior and function to create systems that take advantage of human capabilities, without overloading them. Because there is a large gap between what is known about human behavior and the information needed to predict pilot workload and performance in the complex missions projected for pilots of advanced helicopters, Army and NASA scientists are actively engaged in Human Factors Research at Ames. The research ranges from laboratory experiments to computational modeling, simulation evaluation, and inflight testing. Information obtained in highly controlled but simpler environments generates predictions which can be tested in more realistic situations. These results are used, in turn, to refine theoretical models, provide the focus for subsequent research, and ensure operational relevance, while maintaining predictive advantages. The advantages and disadvantages of each type of research are described along with examples of experimental results.
Hughes, S; Woollard, A
2017-01-01
Runx genes have been identified in all metazoans and considerable conservation of function observed across a wide range of phyla. Thus, insight gained from studying simple model organisms is invaluable in understanding RUNX biology in higher animals. Consequently, this chapter will focus on the Runx genes in the diploblasts, which includes sea anemones and sponges, as well as the lower triploblasts, including the sea urchin, nematode, planaria and insect. Due to the high degree of functional redundancy amongst vertebrate Runx genes, simpler model organisms with a solo Runx gene, like C. elegans, are invaluable systems in which to probe the molecular basis of RUNX function within a whole organism. Additionally, comparative analyses of Runx sequence and function allows for the development of novel evolutionary insights. Strikingly, recent data has emerged that reveals the presence of a Runx gene in a protist, demonstrating even more widespread occurrence of Runx genes than was previously thought. This review will summarize recent progress in using invertebrate organisms to investigate RUNX function during development and regeneration, highlighting emerging unifying themes.
A symbiotic approach to fluid equations and non-linear flux-driven simulations of plasma dynamics
NASA Astrophysics Data System (ADS)
Halpern, Federico
2017-10-01
The fluid framework is ubiquitous in studies of plasma transport and stability. Typical forms of the fluid equations are motivated by analytical work dating back several decades, before computer simulations were indispensable, and can therefore be suboptimal for numerical computation. We demonstrate a new first-principles approach to obtaining manifestly consistent, skew-symmetric fluid models, ensuring internal consistency and conservation properties even in discrete form. Mass, kinetic, and internal energy become quadratic (and always positive) invariants of the system. The model lends itself to a robust, straightforward discretization scheme with inherent non-linear stability. A simpler, drift-ordered form of the equations is obtained, and first results of their numerical implementation as a binary framework for bulk-fluid global plasma simulations are demonstrated. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Fusion Energy Sciences, Theory Program, under Award No. DE-FG02-95ER54309.
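The discrete-conservation idea can be seen in one dimension: writing the Burgers advection term in skew-symmetric (split) form makes the discrete quadratic energy an exact invariant of the spatial operator. This is a generic illustration of the technique, not the paper's full drift-ordered system:

```python
def skew_symmetric_rhs(u, dx):
    """Skew-symmetric split form of the Burgers advection term,
    -(1/3)*(u*u_x + (u^2)_x), on a periodic grid with central
    differences. By construction sum(u[i]*rhs[i]) telescopes to zero,
    so the discrete energy sum(u**2)/2 is conserved by the spatial
    operator exactly (up to roundoff)."""
    n = len(u)
    rhs = []
    for i in range(n):
        up, um = u[(i + 1) % n], u[(i - 1) % n]
        ux = (up - um) / (2.0 * dx)                 # central difference of u
        u2x = (up * up - um * um) / (2.0 * dx)      # central difference of u^2
        rhs.append(-(u[i] * ux + u2x) / 3.0)
    return rhs
```

Because the invariant holds at the level of the discrete operator, no artificial dissipation is needed for non-linear stability, which is the property the abstract emphasizes.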
A heuristic simulation model of Lake Ontario circulation and mass balance transport
McKenna, J.E.; Chalupnicki, M.A.
2011-01-01
The redistribution of suspended organisms and materials by large-scale currents is part of natural ecological processes in large aquatic systems but can contribute to ecosystem disruption when exotic elements are introduced into the system. Toxic compounds and planktonic organisms spend various lengths of time in suspension before settling to the bottom or otherwise being removed. We constructed a simple physical simulation model, including the influence of major tributaries, to qualitatively examine circulation patterns in Lake Ontario. We used a simple mass balance approach to estimate the relative water input to and export from each of 10 depth regime-specific compartments (nearshore vs. offshore) comprising Lake Ontario. Despite its simplicity, our model produced circulation patterns similar to those reported by more complex studies in the literature. A three-gyre pattern, with the classic large counterclockwise central lake circulation, and a simpler two-gyre system were both observed. These qualitative simulations indicate little offshore transport along the south shore, except near the mouths of the Niagara River and Oswego River. Complex flow structure was evident, particularly near the Niagara River mouth and in offshore waters of the eastern basin. Average Lake Ontario residence time is 8 years, but the fastest model pathway indicated potential transport of plankton through the lake in as little as 60 days. This simulation illustrates potential invasion pathways and provides rough estimates of planktonic larval dispersal or chemical transport among nearshore and offshore areas of Lake Ontario. © 2011 Taylor & Francis.
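The bookkeeping behind such a compartment mass balance is simple: at steady state, every compartment's total inflow must equal its total outflow. A sketch with hypothetical compartment names and flows, not the paper's calibrated values:

```python
def mass_balance_residuals(inflows, outflows):
    """Per-compartment residual (total inflow minus total outflow) for a
    compartment model; at steady state every residual vanishes, which is
    the closure condition a mass-balance circulation model enforces.
    Flows are in arbitrary consistent units (e.g. km^3/yr)."""
    keys = set(inflows) | set(outflows)
    return {k: inflows.get(k, 0.0) - outflows.get(k, 0.0) for k in keys}
```

A nonzero residual flags either a missing exchange flow between compartments or an unaccounted tributary/outlet, which is how such a model is balanced during construction.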
ERIC Educational Resources Information Center
Burgess, Carol A.
Sixth grade students can use cinquain poems to explore language, learn grammar, and write creatively. Before learning about cinquains, students should be introduced to simpler poetic forms. To introduce cinquains, the teacher writes a simple example on the board and has the students informally figure out the parts of speech and grammatical…
Toward a Standardized ODH Analysis Technique
Degraff, Brian D.
2016-12-01
Standardization of ODH (oxygen deficiency hazard) analysis and mitigation policy represents an opportunity for the cryogenic community. There are several benefits for industry and government facilities in developing an applicable unified standard for ODH. The number of qualified reviewers would increase, and reviewing projects across different facilities would be simpler. It would also give the community the opportunity to broaden its expertise in modeling complicated flow geometries.
Path integration mediated systematic search: a Bayesian model.
Vickerstaff, Robert J; Merkle, Tobias
2012-08-21
The systematic search behaviour is a backup system that increases the chances of desert ants finding their nest entrance after foraging when the path integrator has failed to guide them home accurately enough. Here we present a mathematical model of the systematic search that is based on extensive behavioural studies in North African desert ants Cataglyphis fortis. First, a simple search heuristic utilising Bayesian inference and a probability density function is developed. This model, which optimises the short-term nest detection probability, is then compared to three simpler search heuristics and to recorded search patterns of Cataglyphis ants. To compare the different searches a method to quantify search efficiency is established as well as an estimate of the error rate in the ants' path integrator. We demonstrate that the Bayesian search heuristic is able to automatically adapt to increasing levels of positional uncertainty to produce broader search patterns, just as desert ants do, and that it outperforms the three other search heuristics tested. The searches produced by it are also arguably the most similar in appearance to the ant's searches. Copyright © 2012 Elsevier Ltd. All rights reserved.
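The core of such a Bayesian search heuristic is a posterior update over possible nest locations: an unsuccessful visit to a cell lowers that cell's probability, and the map is renormalized. A minimal grid sketch; the per-visit detection probability is an assumed parameter, not a measured ant quantity:

```python
def bayes_search_update(prior, visited, p_detect=0.8):
    """One Bayesian update for a grid search: each visited cell where the
    nest was not found has its probability scaled by (1 - p_detect),
    then the whole map is renormalized so probabilities sum to 1."""
    post = dict(prior)
    for cell in visited:
        post[cell] *= (1.0 - p_detect)
    z = sum(post.values())
    return {cell: p / z for cell, p in post.items()}
```

Iterating this update over successive unsuccessful visits flattens and broadens the posterior, mirroring how the model (and the ants) widen the search as positional uncertainty grows.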
NUMERICAL FLOW AND TRANSPORT SIMULATIONS SUPPORTING THE SALTSTONE FACILITY PERFORMANCE ASSESSMENT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, G.
2009-02-28
The Saltstone Disposal Facility Performance Assessment (PA) is being revised to incorporate requirements of Section 3116 of the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 (NDAA), and updated data and understanding of vault performance since the 1992 PA (Cook and Fowler 1992) and related Special Analyses. A hybrid approach was chosen for modeling contaminant transport from vaults and future disposal cells to exposure points. A higher resolution, largely deterministic, analysis is performed on a best-estimate Base Case scenario using the PORFLOW numerical analysis code. A few additional sensitivity cases are simulated to examine alternative scenarios and parameter settings. Stochastic analysis is performed on a simpler representation of the SDF system using the GoldSim code to estimate uncertainty and sensitivity about the Base Case. This report describes development of PORFLOW models supporting the SDF PA, and presents sample results to illustrate model behaviors and define impacts relative to key facility performance objectives. The SDF PA document, when issued, should be consulted for a comprehensive presentation of results.
Mode-locking behavior of Izhikevich neurons under periodic external forcing
NASA Astrophysics Data System (ADS)
Farokhniaee, AmirAli; Large, Edward W.
2017-06-01
Many neurons in the auditory system of the brain must encode periodic signals. These neurons under periodic stimulation display rich dynamical states including mode locking and chaotic responses. Periodic stimuli such as sinusoidal waves and amplitude modulated sounds can lead to various forms of n:m mode-locked states, in which a neuron fires n action potentials per m cycles of the stimulus. Here, we study mode-locking in the Izhikevich neurons, a reduced model of the Hodgkin-Huxley neurons. The Izhikevich model is much simpler in terms of the dimension of the coupled nonlinear differential equations compared with other existing models, but excellent for generating the complex spiking patterns observed in real neurons. We obtained the regions of existence of the various mode-locked states on the frequency-amplitude plane, called Arnold tongues, for the Izhikevich neurons. Arnold tongue analysis provides useful insight into the organization of mode-locking behavior of neurons under periodic forcing. We find these tongues for both class-1 and class-2 excitable neurons in both deterministic and noisy regimes.
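The Izhikevich model is simple enough that a periodic-forcing experiment fits in a few lines. A sketch using the standard regular-spiking parameter set; the drive amplitude and frequency are illustrative choices, not values from the paper:

```python
import math

def izhikevich_spikes(a=0.02, b=0.2, c=-65.0, d=8.0,
                      amp=10.0, freq_hz=10.0, t_max_ms=1000.0, dt=0.1):
    """Forward-Euler integration of the Izhikevich model
    v' = 0.04v^2 + 5v + 140 - u + I, u' = a(bv - u), with sinusoidal
    forcing I(t) = amp*sin(2*pi*f*t). On v >= 30 mV a spike is recorded
    and (v, u) reset to (c, u + d). Returns spike times in ms."""
    v, u = c, b * c
    t, spikes = 0.0, []
    while t < t_max_ms:
        current = amp * math.sin(2.0 * math.pi * freq_hz * t / 1000.0)
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + current)
        u += dt * a * (b * v - u)
        if v >= 30.0:              # spike detected: reset membrane
            spikes.append(t)
            v = c
            u += d
        t += dt
    return spikes
```

Counting spikes per m stimulus cycles over a grid of (freq_hz, amp) values yields the n:m ratios from which Arnold tongues are assembled.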
Liquid phase products and solid deposit formation from thermally stressed model jet fuels
NASA Technical Reports Server (NTRS)
Kim, W. S.; Bittker, D. A.
1984-01-01
The relationship between solid deposit formation and liquid degradation product concentration was studied for the high temperature (400 C) stressing of three hydrocarbon model fuels. A Jet Fuel Thermal Oxidation Tester was used to simulate actual engine fuel system conditions. The effects of fuel type, dissolved oxygen concentration, and hot surface contact time (reaction time) were studied. Effects of reaction time and removal of dissolved oxygen on deposit formation were found to be different for n-dodecane and for 2-ethylnaphthalene. When ten percent tetralin is added to n-dodecane to give a simpler model of an actual jet fuel, the tetralin inhibits both the deposit formation and the degradation of n-dodecane. For 2-ethylnaphthalene primary product analyses indicate a possible self-inhibition at long reaction times of the secondary reactions which form the deposit precursors. The mechanism of the primary breakdown of these fuels is suggested and the primary products which participate in these precursor-forming reactions are identified. Some implications of the results to the thermal degradation of real jet fuels are given.
NASA Astrophysics Data System (ADS)
Hou, X. Y.; Koh, C. G.; Kuang, K. S. C.; Lee, W. H.
2017-07-01
This paper investigates the capability of a novel piezoelectric sensor for low-frequency and low-amplitude vibration measurement. The proposed design effectively amplifies the input acceleration via two amplifying mechanisms and thus eliminates the use of the external charge amplifier or conditioning amplifier typically employed in measurement systems. The sensor is also self-powered, i.e. no external power unit is required. Consequently, wiring and electrical insulation for on-site measurement are considerably simpler. In addition, the design also greatly reduces the interference from rotational motion which often accompanies the translational acceleration to be measured. An analytical model is developed based on a set of piezoelectric constitutive equations and beam theory. A closed-form expression is derived to correlate sensor geometry and material properties with its dynamic performance. Experimental calibration is then carried out to validate the analytical model. After calibration, experiments are carried out to check the feasibility of the new sensor in structural vibration detection. From experimental results, it is concluded that the proposed sensor is suitable for measuring low-frequency and low-amplitude vibrations.
Surrogate oracles, generalized dependency and simpler models
NASA Technical Reports Server (NTRS)
Wilson, Larry
1990-01-01
Software reliability models require the sequence of interfailure times from the debugging process as input. It was previously illustrated that using data from replicated debugging could greatly improve reliability predictions. However, inexpensive replication of the debugging process requires the existence of a cheap, fast error detector. Laboratory experiments can be designed around a gold version which is used as an oracle or around an n-version error detector. Unfortunately, software developers cannot be expected to have an oracle or to bear the expense of n-versions. A generic technique is being investigated for approximating replicated data by using the partially debugged software as a difference detector. It is believed that the failure rate of each fault has significant dependence on the presence or absence of other faults. Thus, in order to discuss a failure rate for a known fault, the presence or absence of each of the other known faults needs to be specified. Simpler models which use shorter input sequences without sacrificing accuracy are also of interest. In fact, a possible gain in performance is conjectured. To investigate these propositions, NASA computers running LIC (RTI) versions are used to generate data. This data will be used to label the debugging graph associated with each version. These labeled graphs will be used to test the utility of a surrogate oracle, to analyze the dependent nature of fault failure rates and to explore the feasibility of reliability models which use the data of only the most recent failures.
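A standard reliability model that such interfailure-time sequences feed is Jelinski-Moranda, in which the i-th interfailure time is exponential with rate phi*(N - i + 1) for N initial faults. A grid-search maximum-likelihood fit is sketched below; this model choice is an illustration of the class of models discussed, not necessarily the one used in the study:

```python
import math

def jm_fit(times, n_max=200):
    """Fit the Jelinski-Moranda model to a list of interfailure times by
    maximum likelihood. For each candidate fault count N (grid-searched
    from len(times) to n_max), the per-fault rate phi has the closed-form
    MLE n / sum((N - i + 1) * t_i); the pair maximizing the exponential
    log-likelihood is returned as (N_hat, phi_hat)."""
    n = len(times)
    best = (None, None, float("-inf"))
    for N in range(n, n_max + 1):
        weighted = sum((N - i + 1) * t for i, t in enumerate(times, start=1))
        phi = n / weighted
        ll = sum(math.log(phi * (N - i + 1)) - phi * (N - i + 1) * t
                 for i, t in enumerate(times, start=1))
        if ll > best[2]:
            best = (N, phi, ll)
    return best[0], best[1]
```

Note that when the data show no reliability growth the MLE of N can run away to the grid boundary, one reason shorter input sequences and dependence between fault failure rates are worth investigating.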
A VME-based software trigger system using UNIX processors
NASA Astrophysics Data System (ADS)
Atmur, Robert; Connor, David F.; Molzon, William
1997-02-01
We have constructed a distributed computing platform with eight processors to assemble and filter data from digitization crates. The filtered data were transported to a tape-writing UNIX computer via ethernet. Each processor ran a UNIX operating system and was installed in its own VME crate. Each VME crate contained dual-port memories which interfaced with the digitizers. Using standard hardware and software (VME and UNIX) allows us to select from a wide variety of non-proprietary products and makes upgrades simpler, if they are necessary.
Steady-State Algorithmic Analysis of M/M/c Two-Priority Queues with Heterogeneous Rates.
1981-04-21
Algorithmic Analysis of M/M/c Two-Priority Queues with Heterogeneous Rates, by Douglas R. Miller. An algorithm for steady-state analysis of M/M/c nonpreemptive … practical algorithm for systems involving more than two priority classes. The preemptive case is simpler than the nonpreemptive case; an algorithm for it … priority nonpreemptive queueing system with arrival rates λ1 and λ2 and service rates μ1 and μ2. The state space can be described as follows. Let x_{i,j,k} be …
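The single-class M/M/c queue that the two-priority analysis generalizes has a simple birth-death steady state, which a few lines compute directly. A sketch (truncation level is an implementation choice, not part of the original algorithm):

```python
def mmc_steady_state(lam, mu, c, n_max=200):
    """Steady-state probabilities p[0..n_max] of an M/M/c queue via the
    birth-death balance recursion p[n] = p[n-1] * lam / (min(n, c) * mu),
    truncated at n_max and normalized. Requires lam < c*mu for the
    truncation to be a good approximation."""
    p = [1.0]
    for n in range(1, n_max + 1):
        p.append(p[-1] * lam / (min(n, c) * mu))
    z = sum(p)
    return [x / z for x in p]
```

The priority algorithms in the report work on a richer state space (counts per class plus the class in service), but reduce to this recursion when the classes are merged.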
Analysis of a New Rocket-Based Combined-Cycle Engine Concept at Low Speed
NASA Technical Reports Server (NTRS)
Yungster, S.; Trefny, C. J.
1999-01-01
An analysis of the Independent Ramjet Stream (IRS) cycle is presented. The IRS cycle is a variation of the conventional ejector-ramjet, and is used at low speed in a rocket-based combined-cycle (RBCC) propulsion system. In this new cycle, complete mixing between the rocket and ramjet streams is not required, and a single rocket chamber can be used without a long mixing duct. Furthermore, this concept allows flexibility in controlling the thermal choke process. The resulting propulsion system is intended to be simpler, more robust, and lighter than an ejector-ramjet. The performance characteristics of the IRS cycle are analyzed for a new single-stage-to-orbit (SSTO) launch vehicle concept, known as "Trailblazer." The study is based on a quasi-one-dimensional model of the rocket and air streams at speeds ranging from lift-off to Mach 3. The numerical formulation is described in detail. A performance comparison between the IRS and ejector-ramjet cycles is also presented.
Internal structure of vortices in a dipolar spinor Bose-Einstein condensate
NASA Astrophysics Data System (ADS)
Borgh, Magnus O.; Lovegrove, Justin; Ruostekoski, Janne
2017-04-01
We demonstrate how dipolar interactions (DI) can have pronounced effects on the structure of vortices in atomic spinor Bose-Einstein condensates and illustrate generic physical principles that apply across dipolar spinor systems. We then find and analyze the cores of singular non-Abelian vortices in a spin-3 ⁵²Cr condensate. Using a simpler spin-1 model system, we analyze the underlying dipolar physics and show how a dipolar healing length interacts with the hierarchy of healing lengths of the contact interaction and leads to simple criteria for the core structure: vortex core size is restricted to the shorter spin-dependent healing length when both interactions favor the ground-state spin condition, but can conversely be enlarged by DI when the interactions compete. We further demonstrate manifestations of spin ordering induced by the DI anisotropy, including DI-dependent angular momentum of nonsingular vortices, as a result of competition with adaptation to rotation, and potentially observable internal vortex-core spin textures. We acknowledge financial support from the EPSRC.
Sutherland, R J; Lehmann, H
2011-06-01
We discuss very recent experiments with rodents addressing the idea that long-term memories initially depending on the hippocampus, over a prolonged period, become independent of it. No unambiguous recent evidence exists to substantiate that this occurs. Most experiments find that recent and remote memories are equally affected by hippocampus damage. Nearly all experiments that report spared remote memories suffer from two problems: retrieval could be based upon substantial regions of spared hippocampus and recent memory is tested at intervals that are of the same order of magnitude as cellular consolidation. Accordingly, we point the way beyond systems consolidation theories, both the Standard Model of Consolidation and the Multiple Trace Theory, and propose a simpler multiple storage site hypothesis. On this view, with event reiterations, different memory representations are independently established in multiple networks. Many detailed memories always depend on the hippocampus; the others may be established and maintained independently. Copyright © 2011 Elsevier Ltd. All rights reserved.
VizieR Online Data Catalog: Bayesian method for detecting stellar flares (Pitkin+, 2014)
NASA Astrophysics Data System (ADS)
Pitkin, M.; Williams, D.; Fletcher, L.; Grant, S. D. T.
2015-05-01
We present a Bayesian-odds-ratio-based algorithm for detecting stellar flares in light-curve data. We assume flares are described by a model in which there is a rapid rise with a half-Gaussian profile, followed by an exponential decay. Our signal model also contains a polynomial background model required to fit underlying light-curve variations in the data, which could otherwise partially mimic a flare. We characterize the false alarm probability and efficiency of this method under the assumption that any unmodelled noise in the data is Gaussian, and compare it with a simpler thresholding method based on that used in Walkowicz et al. We find our method has a significant increase in detection efficiency for low signal-to-noise ratio (S/N) flares. For a conservative false alarm probability our method can detect 95 per cent of flares with S/N less than 20, as compared to S/N of 25 for the simpler method. We also test how well the assumption of Gaussian noise holds by applying the method to a selection of 'quiet' Kepler stars. As an example we have applied our method to a selection of stars in Kepler Quarter 1 data. The method finds 687 flaring stars with a total of 1873 flares after vetoes have been applied. For these flares we have made preliminary characterizations of their durations and S/N. (1 data file).
A Bayesian method for detecting stellar flares
NASA Astrophysics Data System (ADS)
Pitkin, M.; Williams, D.; Fletcher, L.; Grant, S. D. T.
2014-12-01
We present a Bayesian-odds-ratio-based algorithm for detecting stellar flares in light-curve data. We assume flares are described by a model in which there is a rapid rise with a half-Gaussian profile, followed by an exponential decay. Our signal model also contains a polynomial background model required to fit underlying light-curve variations in the data, which could otherwise partially mimic a flare. We characterize the false alarm probability and efficiency of this method under the assumption that any unmodelled noise in the data is Gaussian, and compare it with a simpler thresholding method based on that used in Walkowicz et al. We find our method has a significant increase in detection efficiency for low signal-to-noise ratio (S/N) flares. For a conservative false alarm probability our method can detect 95 per cent of flares with S/N less than 20, as compared to S/N of 25 for the simpler method. We also test how well the assumption of Gaussian noise holds by applying the method to a selection of 'quiet' Kepler stars. As an example we have applied our method to a selection of stars in Kepler Quarter 1 data. The method finds 687 flaring stars with a total of 1873 flares after vetoes have been applied. For these flares we have made preliminary characterizations of their durations and S/N.
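The flare signal model described in this abstract (a half-Gaussian rise to the peak, followed by an exponential decay) can be sketched directly. The function name and all parameter values below are illustrative, not taken from the paper.

```python
import numpy as np

def flare_profile(t, t0, amplitude, tau_rise, tau_decay):
    """Flare shape: half-Gaussian rise up to peak time t0,
    then exponential decay (continuous at t0)."""
    t = np.asarray(t, dtype=float)
    rise = amplitude * np.exp(-0.5 * ((t - t0) / tau_rise) ** 2)
    decay = amplitude * np.exp(-(t - t0) / tau_decay)
    return np.where(t <= t0, rise, decay)

# Evaluate on a toy light-curve time grid (illustrative values only).
t = np.linspace(0.0, 10.0, 501)
f = flare_profile(t, t0=3.0, amplitude=1.0, tau_rise=0.2, tau_decay=1.5)
```

In the paper's full model this shape sits on top of a polynomial background; here only the flare component is shown.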
Making Interoperability Easier with the NASA Metadata Management Tool
NASA Astrophysics Data System (ADS)
Shum, D.; Reese, M.; Pilone, D.; Mitchell, A. E.
2016-12-01
ISO 19115 has enabled interoperability amongst tools, yet many users find it hard to build ISO metadata for their collections because it can be large and overly flexible for their needs. The Metadata Management Tool (MMT), part of NASA's Earth Observing System Data and Information System (EOSDIS), offers users a modern, easy-to-use, browser-based tool to develop ISO-compliant metadata. Through a simplified UI experience, metadata curators can create and edit collections without any understanding of the complex ISO 19115 format, while still generating compliant metadata. The MMT is also able to assess the completeness of collection-level metadata by evaluating it against a variety of metadata standards. The tool provides users with clear guidance on how to change their metadata to improve its quality and compliance. It is based on NASA's Unified Metadata Model for Collections (UMM-C), a simpler metadata model that can be cleanly mapped to ISO 19115. This allows metadata authors and curators to meet ISO compliance requirements faster and more accurately. The MMT and UMM-C have been developed in an agile fashion, with recurring end-user tests and reviews to continually refine the tool, the model, and the ISO mappings. This process allows for continual improvement and evolution to meet the community's needs.
Preliminary SP-100/Stirling Heat Exchanger Designs
NASA Astrophysics Data System (ADS)
Schmitz, Paul; Tower, Leonard; Dawson, Ronald; Blue, Brian; Dunn, Pat
1994-07-01
Analytic modeling of several heat exchanger concepts to couple the SP-100 nuclear reactor primary lithium loop and the Space Stirling Power Convertor (SSPC) was performed. Four 25 kWe SSPC's are used to produce the required 100 kW of electrical power. This design work focused on the interface between a single SSPC and the primary lithium loop. Manifolding to separate and collect the four channel flow was not modeled. This work modeled two separate types of heat exchanger interfaces (conductive coupling and radiative coupling) to explore their relative advantages and disadvantages. The minimum mass design of the conductively coupled concepts was 18 kg or 0.73 kg/kWe for a single 25 kWe convertor. The minimum mass radiatively coupled concept was 41 kg or 1.64 kg/kWe. The direct conduction heat exchanger provides a lighter weight system because of its ability to operate the Stirling convertor evaporator at higher heat fluxes than those attainable by the radiatively coupled systems. Additionally the conductively coupled concepts had relatively small volumes and provide potentially simpler assembly. Their disadvantages were the tight tolerances and material joining problems associated with this refractory to superalloy interface. The advantages of the radiatively coupled designs were the minimal material interface problems.
Preliminary SP-100/Stirling heat exchanger designs
NASA Astrophysics Data System (ADS)
Schmitz, Paul; Tower, Leonard; Dawson, Ronald; Blue, Brian; Dunn, Pat
1993-12-01
Analytic modeling of several heat exchanger concepts to couple the SP-100 nuclear reactor primary lithium loop and the Space Stirling Power Convertor (SSPC) was performed. Four 25 kWe SSPC's are used to produce the required 100 kW of electrical power. This design work focused on the interface between a single SSPC and the primary lithium loop. Manifolding to separate and collect the four channel flow was not modeled. This work modeled two separate types of heat exchanger interfaces (conductive coupling and radiative coupling) to explore their relative advantages and disadvantages. The minimum mass design of the conductively coupled concepts was 18 kg or 0.73 kg/kWe for a single 25 kWe convertor. The minimum mass radiatively coupled concept was 41 kg or 1.64 kg/kWe. The direct conduction heat exchanger provides a lighter weight system because of its ability to operate the Stirling convertor evaporator at higher heat fluxes than those attainable by the radiatively coupled systems. Additionally the conductively coupled concepts had relatively small volumes and provide potentially simpler assembly. Their disadvantages were the tight tolerances and material joining problems associated with this refractory to superalloy interface. The advantages of the radiatively coupled designs were the minimal material interface problems.
Investigations of the structure and electromagnetic interactions of few body systems
NASA Astrophysics Data System (ADS)
Harper, E. P.; Lehman, D. R.; Prats, F.
The structure and electromagnetic interactions of few-body systems were investigated. The structural properties of the very light nuclei are examined by developing theoretical models that begin from the basic interactions between the constituents and that are solved exactly (numerically), i.e., full three- or four-body dynamics. Such models are then used in an attempt to understand the details of the strong and electromagnetic interactions of the few-nucleon nuclei after the basic underlying reaction mechanisms are understood with simpler models. Topics included: (1) set up the equations for the low-energy photodisintegration of ³He and ³H including final-state interactions and the E1 plus E2 operators; (2) develop a unified picture of the p + d → ³He + γ, p + d → ³He + π⁰, and p + d → ³H + π⁺ reactions at intermediate energies; (3) calculate the elastic and inelastic (1⁺ → 0⁺) form factors for ⁶Li with three-body (αNN) wave functions; (4) calculate static properties (RMS radius, magnetic moment, and quadrupole moment) of ⁶Li with three-body wave functions; and (5) develop the theory for the coincidence reactions ⁶Li(p,2p)nα, ⁶Li(e,e′p)nα, and ⁶Li(e,e′d)α.
Analysis of Phase-Type Stochastic Petri Nets With Discrete and Continuous Timing
NASA Technical Reports Server (NTRS)
Jones, Robert L.; Goode, Plesent W. (Technical Monitor)
2000-01-01
The Petri net formalism is useful in studying many discrete-state, discrete-event systems exhibiting concurrency, synchronization, and other complex behavior. As a bipartite graph, the net can conveniently capture salient aspects of the system. As a mathematical tool, the net can specify an analyzable state space. Indeed, one can reason about certain qualitative properties (from state occupancies) and how they arise (the sequence of events leading there). By introducing deterministic or random delays, the model is forced to sojourn in states some amount of time, giving rise to an underlying stochastic process, one that can be specified compactly and that provides quantitative, probabilistic measures. We formalize a new non-Markovian extension to the Petri net that captures both discrete and continuous timing in the same model. The approach affords efficient stationary analysis in most cases and efficient transient analysis under certain restrictions. Moreover, this new formalism has an added benefit in modeling fidelity, stemming from the simultaneous capture of discrete- and continuous-time events (as opposed to capturing only one and approximating the other). We show how the underlying stochastic process, which is non-Markovian, can be resolved into simpler Markovian problems that enjoy efficient solutions. Solution algorithms are provided that can be easily programmed.
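For readers unfamiliar with the base formalism, the untimed Petri net firing rule (before any stochastic timing is attached) can be sketched in a few lines. The net, place names, and token counts below are illustrative, not from the paper.

```python
def enabled(marking, pre):
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking[p] >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume tokens from inputs, produce on outputs."""
    assert enabled(marking, pre)
    new = dict(marking)
    for p, n in pre.items():
        new[p] -= n
    for p, n in post.items():
        new[p] = new.get(p, 0) + n
    return new

# Tiny producer/consumer net: transition t moves a token from 'buffer' to 'done'.
marking = {"buffer": 2, "done": 0}
t_pre, t_post = {"buffer": 1}, {"done": 1}
marking = fire(marking, t_pre, t_post)  # -> {'buffer': 1, 'done': 1}
```

Stochastic Petri nets such as those in the paper attach delay distributions to transitions; the underlying firing rule itself is unchanged.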
Preliminary SP-100/Stirling heat exchanger designs
NASA Technical Reports Server (NTRS)
Schmitz, Paul; Tower, Leonard; Dawson, Ronald; Blue, Brian; Dunn, Pat
1993-01-01
Analytic modeling of several heat exchanger concepts to couple the SP-100 nuclear reactor primary lithium loop and the Space Stirling Power Convertor (SSPC) was performed. Four 25 kWe SSPC's are used to produce the required 100 kW of electrical power. This design work focused on the interface between a single SSPC and the primary lithium loop. Manifolding to separate and collect the four channel flow was not modeled. This work modeled two separate types of heat exchanger interfaces (conductive coupling and radiative coupling) to explore their relative advantages and disadvantages. The minimum mass design of the conductively coupled concepts was 18 kg or 0.73 kg/kWe for a single 25 kWe convertor. The minimum mass radiatively coupled concept was 41 kg or 1.64 kg/kWe. The direct conduction heat exchanger provides a lighter weight system because of its ability to operate the Stirling convertor evaporator at higher heat fluxes than those attainable by the radiatively coupled systems. Additionally the conductively coupled concepts had relatively small volumes and provide potentially simpler assembly. Their disadvantages were the tight tolerances and material joining problems associated with this refractory to superalloy interface. The advantages of the radiatively coupled designs were the minimal material interface problems.
NASA Astrophysics Data System (ADS)
Hashemian, Behrooz; Millán, Daniel; Arroyo, Marino
2013-12-01
Collective variables (CVs) are low-dimensional representations of the state of a complex system, which help us rationalize molecular conformations and sample free energy landscapes with molecular dynamics simulations. Given their importance, there is need for systematic methods that effectively identify CVs for complex systems. In recent years, nonlinear manifold learning has shown its ability to automatically characterize molecular collective behavior. Unfortunately, these methods fail to provide a differentiable function mapping high-dimensional configurations to their low-dimensional representation, as required in enhanced sampling methods. We introduce a methodology that, starting from an ensemble representative of molecular flexibility, builds smooth and nonlinear data-driven collective variables (SandCV) from the output of nonlinear manifold learning algorithms. We demonstrate the method with a standard benchmark molecule, alanine dipeptide, and show how it can be non-intrusively combined with off-the-shelf enhanced sampling methods, here the adaptive biasing force method. We illustrate how enhanced sampling simulations with SandCV can explore regions that were poorly sampled in the original molecular ensemble. We further explore the transferability of SandCV from a simpler system, alanine dipeptide in vacuum, to a more complex system, alanine dipeptide in explicit water.
Hashemian, Behrooz; Millán, Daniel; Arroyo, Marino
2013-12-07
Collective variables (CVs) are low-dimensional representations of the state of a complex system, which help us rationalize molecular conformations and sample free energy landscapes with molecular dynamics simulations. Given their importance, there is need for systematic methods that effectively identify CVs for complex systems. In recent years, nonlinear manifold learning has shown its ability to automatically characterize molecular collective behavior. Unfortunately, these methods fail to provide a differentiable function mapping high-dimensional configurations to their low-dimensional representation, as required in enhanced sampling methods. We introduce a methodology that, starting from an ensemble representative of molecular flexibility, builds smooth and nonlinear data-driven collective variables (SandCV) from the output of nonlinear manifold learning algorithms. We demonstrate the method with a standard benchmark molecule, alanine dipeptide, and show how it can be non-intrusively combined with off-the-shelf enhanced sampling methods, here the adaptive biasing force method. We illustrate how enhanced sampling simulations with SandCV can explore regions that were poorly sampled in the original molecular ensemble. We further explore the transferability of SandCV from a simpler system, alanine dipeptide in vacuum, to a more complex system, alanine dipeptide in explicit water.
Complex and unexpected dynamics in simple genetic regulatory networks
NASA Astrophysics Data System (ADS)
Borg, Yanika; Ullner, Ekkehard; Alagha, Afnan; Alsaedi, Ahmed; Nesbeth, Darren; Zaikin, Alexey
2014-03-01
One aim of synthetic biology is to construct increasingly complex genetic networks from interconnected simpler ones to address challenges in medicine and biotechnology. However, as systems increase in size and complexity, emergent properties lead to unexpected and complex dynamics due to nonlinear and nonequilibrium properties from component interactions. We focus on four different studies of biological systems which exhibit complex and unexpected dynamics. Using simple synthetic genetic networks, small and large populations of phase-coupled quorum sensing repressilators, Goodwin oscillators, and bistable switches, we review how coupled and stochastic components can result in clustering, chaos, noise-induced coherence and speed-dependent decision making. A system of repressilators exhibits oscillations, limit cycles, steady states or chaos depending on the nature and strength of the coupling mechanism. In large repressilator networks, rich dynamics can also be exhibited, such as clustering and chaos. In populations of Goodwin oscillators, noise can induce coherent oscillations. In bistable systems, the speed with which incoming external signals reach steady state can bias the network towards particular attractors. These studies showcase the range of dynamical behavior that simple synthetic genetic networks can exhibit. In addition, they demonstrate the ability of mathematical modeling to analyze nonlinearity and inhomogeneity within these systems.
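As a concrete instance of the simplest network class discussed (a single repressilator: three genes in a ring, each repressing the next), the standard dimensionless mRNA-protein ODEs can be integrated with a basic Euler scheme. The parameter values are commonly used illustrative ones, not taken from this paper.

```python
import numpy as np

def repressilator(t_end=50.0, dt=0.001, alpha=216.0, alpha0=0.216,
                  beta=5.0, n=2.0):
    """Euler integration of the dimensionless repressilator ODEs:
         dm_i/dt = -m_i + alpha/(1 + p_j^n) + alpha0   (protein j represses gene i)
         dp_i/dt = -beta*(p_i - m_i)
    """
    m = np.array([1.0, 2.0, 3.0])   # mRNA levels (arbitrary asymmetric start)
    p = np.array([1.0, 1.0, 1.0])   # protein levels
    traj = []
    for _ in range(int(round(t_end / dt))):
        rep = np.roll(p, 1)          # gene i is repressed by protein (i-1) mod 3
        dm = -m + alpha / (1.0 + rep ** n) + alpha0
        dp = -beta * (p - m)
        m = m + dt * dm
        p = p + dt * dp
        traj.append(p.copy())
    return np.array(traj)

traj = repressilator()
```

Depending on parameters, this single oscillator settles to a limit cycle or a steady state; the coupled and stochastic variants reviewed above build on networks of such units.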
Pupil engineering for a confocal reflectance line-scanning microscope
NASA Astrophysics Data System (ADS)
Patel, Yogesh G.; Rajadhyaksha, Milind; DiMarzio, Charles A.
2011-03-01
Confocal reflectance microscopy may enable screening and diagnosis of skin cancers noninvasively and in real time, as an adjunct to biopsy and pathology. Current confocal point-scanning systems are large, complex, and expensive. A confocal line-scanning microscope, utilizing a linear array detector, can be simpler, smaller, and less expensive, and may accelerate the translation of confocal microscopy into clinical and surgical dermatology. A line scanner may be implemented with a divided pupil, half used for transmission and half for detection, or with a full pupil using a beamsplitter. The premise is that a confocal line-scanner with either a divided pupil or a full pupil will provide high resolution and optical sectioning competitive with that of the standard confocal point-scanner. We have developed a confocal line-scanner that combines both divided-pupil and full-pupil configurations. This combined-pupil prototype is being evaluated to determine the advantages and limitations of each configuration for imaging skin, and to compare its performance to that of commercially available standard confocal point-scanning microscopes. With the combined configuration, experimental evaluation of line spread functions (LSFs), contrast, signal-to-noise ratio, and imaging performance is in progress under identical optical and skin conditions. Experimental comparisons between divided-pupil and full-pupil LSFs will be used to determine imaging performance. Both results will be compared to theoretical calculations using our previously reported Fourier analysis model and to the confocal point spread function (PSF). These results may lead to a simpler class of confocal reflectance scanning microscopes for clinical and surgical dermatology.
A Numerical Study of the Effects of Curvature and Convergence on Dilution Jet Mixing
NASA Technical Reports Server (NTRS)
Holdeman, J. D.; Reynolds, R.; White, C.
1987-01-01
An analytical program was conducted to assemble and assess a three-dimensional turbulent viscous flow computer code capable of analyzing the flow field in the transition liners of small gas turbine engines. This code is of the TEACH type with hybrid numerics, and uses the power law and SIMPLER algorithms, an orthogonal curvilinear coordinate system, and an algebraic Reynolds stress turbulence model. The assessments performed in this study, consistent with results in the literature, showed that in its present form this code is capable of predicting trends and qualitative results. The assembled code was used to perform a numerical experiment to investigate the effects of curvature and convergence in the transition liner on the mixing of single and opposed rows of cool dilution jets injected into a hot mainstream flow.
A numerical study of the effects of curvature and convergence on dilution jet mixing
NASA Technical Reports Server (NTRS)
Holdeman, J. D.; Reynolds, R.; White, C.
1987-01-01
An analytical program was conducted to assemble and assess a three-dimensional turbulent viscous flow computer code capable of analyzing the flow field in the transition liners of small gas turbine engines. This code is of the TEACH type with hybrid numerics, and uses the power law and SIMPLER algorithms, an orthogonal curvilinear coordinate system, and an algebraic Reynolds stress turbulence model. The assessments performed in this study, consistent with results in the literature, showed that in its present form this code is capable of predicting trends and qualitative results. The assembled code was used to perform a numerical experiment to investigate the effects of curvature and convergence in the transition liner on the mixing of single and opposed rows of cool dilution jets injected into a hot mainstream flow.
Making More-Complex Molecules Using Superthermal Atom/Molecule Collisions
NASA Technical Reports Server (NTRS)
Shortt, Brian; Chutjian, Ara; Orient, Otto
2008-01-01
A method of making more-complex molecules from simpler ones has emerged as a by-product of an experimental study in outer-space atom/surface collision physics. The subject of the study was the formation of CO2 molecules as a result of impingement of O atoms at controlled kinetic energies upon cold surfaces onto which CO molecules had been adsorbed. In this study, the O/CO system served as a laboratory model, not only for the formation of CO2 but also for the formation of other compounds through impingement of rapidly moving atoms upon molecules adsorbed on such cold interstellar surfaces as those of dust grains or comets. By contributing to the formation of increasingly complex molecules, including organic ones, this study and related other studies may eventually contribute to understanding of the origins of life.
NASA Astrophysics Data System (ADS)
Zhou, Cong; Chase, J. Geoffrey; Rodgers, Geoffrey W.; Xu, Chao
2017-02-01
The model-free hysteresis loop analysis (HLA) method for structural health monitoring (SHM) has significant advantages over traditional model-based SHM methods, which require a suitable baseline model to represent the actual system response. This paper provides a unique validation against both an experimental reinforced concrete (RC) building and a calibrated numerical model to delineate the capability of the model-free HLA method and the adaptive least mean squares (LMS) model-based method in detecting, localizing and quantifying damage that may not be visible or observable in the overall structural response. Results clearly show the model-free HLA method is capable of adapting to changes in how structures transfer load or demand across structural elements over time and multiple events of different size. However, the adaptive LMS model-based method presented a picture of lesser damage spread more widely over time and stories when the baseline model is not well defined. Finally, the two algorithms are tested on a typical steel structure with simpler hysteretic behaviour to quantify the impact of model mismatch between the baseline model used for identification and the actual response. The overall results highlight the need for model-based methods to have an appropriate model that can capture the observed response in order to yield accurate results, even in small events where the structure remains linear.
A review of surrogate models and their application to groundwater modeling
NASA Astrophysics Data System (ADS)
Asher, M. J.; Croke, B. F. W.; Jakeman, A. J.; Peeters, L. J. M.
2015-08-01
The spatially and temporally variable parameters and inputs to complex groundwater models typically result in long runtimes which hinder comprehensive calibration, sensitivity, and uncertainty analysis. Surrogate modeling aims to provide a simpler, and hence faster, model which emulates the specified output of a more complex model as a function of its inputs and parameters. In this review paper, we summarize surrogate modeling techniques in three categories: data-driven, projection, and hierarchical-based approaches. Data-driven surrogates approximate a groundwater model through an empirical model that captures the input-output mapping of the original model. Projection-based models reduce the dimensionality of the parameter space by projecting the governing equations onto a basis of orthonormal vectors. In hierarchical or multifidelity methods the surrogate is created by simplifying the representation of the physical system, such as by ignoring certain processes, or reducing the numerical resolution. In discussing the application to groundwater modeling of these methods, we note several imbalances in the existing literature: a large body of work on data-driven approaches seemingly ignores major drawbacks to the methods; only a fraction of the literature focuses on creating surrogates to reproduce outputs of fully distributed groundwater models, despite these being ubiquitous in practice; and a number of the more advanced surrogate modeling methods are yet to be fully applied in a groundwater modeling context.
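A data-driven surrogate in the sense used above can be illustrated in a few lines: run the "complex" model at a handful of parameter values, fit a cheap emulator to the input-output pairs, and evaluate the emulator thereafter. The stand-in model and the polynomial emulator below are illustrative choices, not taken from the review.

```python
import numpy as np

# Hypothetical stand-in for an expensive model: a scalar response
# of one parameter k (e.g. a calibration target).
def complex_model(k):
    return np.exp(-k) * np.sin(3.0 * k)

# Data-driven surrogate: fit a cheap polynomial emulator to a small
# set of model runs that sample the input-output mapping.
k_train = np.linspace(0.0, 2.0, 15)
y_train = complex_model(k_train)
surrogate = np.poly1d(np.polyfit(k_train, y_train, deg=8))

# The surrogate is then evaluated in place of the complex model,
# e.g. inside calibration or Monte-Carlo loops.
k_test = np.linspace(0.0, 2.0, 101)
max_err = np.max(np.abs(surrogate(k_test) - complex_model(k_test)))
```

The drawback flagged in the review applies here too: the emulator is only trustworthy inside the sampled parameter range.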
Introduction to biological complexity as a missing link in drug discovery.
Gintant, Gary A; George, Christopher H
2018-06-06
Despite a burgeoning knowledge of the intricacies and mechanisms responsible for human disease, technological advances in medicinal chemistry, and more efficient assays used for drug screening, it remains difficult to discover novel and effective pharmacologic therapies. Areas covered: By reference to the primary literature and concepts emerging from academic and industrial drug screening landscapes, the authors propose that this disconnect arises from the inability to scale and integrate responses from simpler model systems to outcomes from more complex and human-based biological systems. Expert opinion: Further collaborative efforts combining target-based and phenotypic-based screening along with systems-based pharmacology and informatics will be necessary to harness the technological breakthroughs of today to derive the novel drug candidates of tomorrow. New questions must be asked of enabling technologies, while recognizing inherent limitations, in a way that moves drug development forward. Attempts to integrate mechanistic and observational information acquired across multiple scales frequently expose the gap between our knowledge and our understanding as the level of complexity increases. We hope that the thoughts and actionable items highlighted will help to inform the directed evolution of the drug discovery process.
Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling
NASA Astrophysics Data System (ADS)
Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.
2017-12-01
Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. We also seek to identify data types that best help reduce this uncertainty. For this investigation, we conduct a modelling study of the Steinlach River meander in Southwest Germany, an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as `virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (Figure). Then, we conduct Monte-Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that uncertainty in HETT is relatively small for early times and increases with transit times; uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution; introducing more data to a poor model structure may reduce predictive variance, but does not reduce predictive bias; and hydraulic head observations alone cannot constrain the uncertainty of HETT, whereas an estimate of hyporheic exchange flux proves more effective at reducing it. Figure: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model (`virtual reality') is then developed based on that conceptual model.
This complex model then serves as the basis for comparing simpler model structures. Through this approach, predictive uncertainty can be quantified relative to a known reference solution.
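The comparison against the 'virtual reality' rests on Monte Carlo estimation of predictive uncertainty. The sketch below illustrates only the idea: a hypothetical transit-time response (travel time inversely proportional to effective conductivity, not the HydroGeoSphere model) is sampled with a wide hydrofacies-scale spread and a narrow local-scale spread, and the spread of predicted transit times is summarized.

```python
import random
import statistics

def transit_time(k_facies, k_local):
    """Hypothetical transit-time response: travel time scales inversely
    with the effective conductivity (a stand-in for the full 3D model)."""
    return 100.0 / (k_facies * k_local)

def monte_carlo(n=5000, seed=42):
    rng = random.Random(seed)
    times = []
    for _ in range(n):
        k_facies = 10 ** rng.uniform(-1.0, 1.0)   # wide facies-scale spread
        k_local = 10 ** rng.uniform(-0.1, 0.1)    # narrow local-scale spread
        times.append(transit_time(k_facies, k_local))
    return statistics.median(times), statistics.quantiles(times, n=100)

median_t, pct = monte_carlo()
print(f"median: {median_t:.1f}, 5th-95th percentile: {pct[4]:.1f}..{pct[94]:.1f}")
```

With these illustrative spreads, nearly all of the predictive spread comes from the facies-scale factor, mirroring the finding above that local-scale heterogeneity contributes comparatively little.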
Karnon, Jonathan; Haji Ali Afzali, Hossein
2014-06-01
Modelling in economic evaluation is an unavoidable fact of life. Cohort-based state transition models are most common, though discrete event simulation (DES) is increasingly being used to implement more complex model structures. The benefits of DES relate to the greater flexibility around the implementation and population of complex models, which may provide more accurate or valid estimates of the incremental costs and benefits of alternative health technologies. The costs of DES relate to the time and expertise required to implement and review complex models, when perhaps a simpler model would suffice. The costs are not borne solely by the analyst, but also by reviewers. In particular, modelled economic evaluations are often submitted to support reimbursement decisions for new technologies, for which detailed model reviews are generally undertaken on behalf of the funding body. This paper reports the results from a review of published DES-based economic evaluations. Factors underlying the use of DES were defined, and the characteristics of applied models were considered, to inform options for assessing the potential benefits of DES in relation to each factor. Four broad factors underlying the use of DES were identified: baseline heterogeneity, continuous disease markers, time-varying event rates, and the influence of prior events on subsequent event rates. If relevant individual-level data are available, representation of the four factors is likely to improve model validity, and it is possible to assess the importance of their representation in individual cases. A thorough model performance evaluation is required to overcome the costs of DES from the users' perspective, but few of the reviewed DES models reported such a process. More generally, further direct, empirical comparisons of complex models with simpler models would better inform the benefits of DES to implement more complex models, and the circumstances in which such benefits are most likely.
Drinking Water Microbiome as a Screening Tool for ...
Many water utilities in the US using chloramine as disinfectant treatment in their distribution systems have experienced nitrification episodes, which detrimentally impact the water quality. A chloraminated drinking water distribution system (DWDS) simulator was operated through four successive operational schemes, including two stable events (SS) and an episode of nitrification (SF), followed by a ‘chlorine burn’ (SR) by switching disinfectant from chloramine to free chlorine. The current research investigated the viability of biological signatures as potential indicators of operational failure and predictors of nitrification in DWDS. For this purpose, we examined the bulk water (BW) bacterial microbiome of a chloraminated DWDS simulator operated through successive operational schemes, including an episode of nitrification. BW data was chosen because sampling of BW in a DWDS by water utility operators is relatively simpler and easier than collecting biofilm samples from underground pipes. The methodology applied a supervised classification machine learning approach (naïve Bayes algorithm) for developing predictive models for nitrification. Classification models were trained with biological datasets (Operational Taxonomic Unit [OTU] and genus-level taxonomic groups) generated using next generation high-throughput technology, and divided into two groups (i.e. binary) of positives and negatives (Failure and Stable, respectively). We also invest
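The supervised classification step can be sketched with a minimal hand-rolled Gaussian naïve Bayes classifier. The two "relative abundance" features and the training values below are hypothetical, standing in for the OTU or genus-level abundance features used in the study.

```python
import math

class TinyGaussianNB:
    """Minimal Gaussian naive Bayes, sketching the supervised
    classification described above (scikit-learn-style usage)."""

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.priors, self.stats = {}, {}
        for c in self.classes:
            rows = [x for x, label in zip(X, y) if label == c]
            self.priors[c] = len(rows) / len(X)
            params = []
            for col in zip(*rows):
                mu = sum(col) / len(col)
                var = max(1e-6, sum((v - mu) ** 2 for v in col) / len(col))
                params.append((mu, var))
            self.stats[c] = params
        return self

    def predict(self, x):
        def log_posterior(c):
            lp = math.log(self.priors[c])
            for v, (mu, var) in zip(x, self.stats[c]):
                lp += -0.5 * math.log(2 * math.pi * var) - (v - mu) ** 2 / (2 * var)
            return lp
        return max(self.classes, key=log_posterior)

# Hypothetical relative abundances of two nitrifier-associated taxa
# under stable operation ("Stable") and nitrification ("Failure").
X = [[0.01, 0.02], [0.02, 0.01], [0.30, 0.25], [0.28, 0.31]]
y = ["Stable", "Stable", "Failure", "Failure"]
clf = TinyGaussianNB().fit(X, y)
print(clf.predict([0.29, 0.27]))
```

A new bulk-water sample with elevated nitrifier abundance is classified as "Failure", which is the kind of early-warning signal the study aims to extract from the microbiome.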
Spatially explicit modeling in ecology: A review
DeAngelis, Donald L.; Yurek, Simeon
2017-01-01
The use of spatially explicit models (SEMs) in ecology has grown enormously in the past two decades. One major advancement has been that fine-scale details of landscapes, and of spatially dependent biological processes, such as dispersal and invasion, can now be simulated with great precision, due to improvements in computer technology. Many areas of modeling have shifted toward a focus on capturing these fine-scale details, to improve mechanistic understanding of ecosystems. However, spatially implicit models (SIMs) have played a dominant role in ecology, and arguments have been made that SIMs, which account for the effects of space without specifying spatial positions, have an advantage of being simpler and more broadly applicable, perhaps contributing more to understanding. We address this debate by comparing SEMs and SIMs in examples from the past few decades of modeling research. We argue that, although SIMs have been the dominant approach in the incorporation of space in theoretical ecology, SEMs have unique advantages for addressing pragmatic questions concerning species populations or communities in specific places, because local conditions, such as spatial heterogeneities, organism behaviors, and other contingencies, produce dynamics and patterns that usually cannot be incorporated into simpler SIMs. SEMs are also able to describe mechanisms at the local scale that can create amplifying positive feedbacks at that scale, creating emergent patterns at larger scales, and therefore are important to basic ecological theory. We review the use of SEMs at the level of populations, interacting populations, food webs, and ecosystems and argue that SEMs are not only essential in pragmatic issues, but must play a role in the understanding of causal relationships on landscapes.
Cho, Sun-Joo; Goodwin, Amanda P
2016-04-01
When word learning is supported by instruction in experimental studies for adolescents, word knowledge outcomes tend to be collected from complex data structures, such as multiple aspects of word knowledge, multilevel reader data, multilevel item data, longitudinal designs, and multiple groups. This study illustrates how generalized linear mixed models can be used to measure and explain word learning for data having such complexity. Results from this application provide deeper understanding of word knowledge than could be attained from simpler models and show that word knowledge is multidimensional and depends on word characteristics and instructional contexts.
Dynamics of Large Systems of Nonlinearly Evolving Units
NASA Astrophysics Data System (ADS)
Lu, Zhixin
The dynamics of large systems of many nonlinearly evolving units is a general research area that has great importance for many areas in science and technology, including biology, computation by artificial neural networks, statistical mechanics, flocking in animal groups, the dynamics of coupled neurons in the brain, and many others. While universal principles and techniques are largely lacking in this broad area of research, there is still one particular phenomenon that seems to be broadly applicable. In particular, this is the idea of emergence, by which is meant macroscopic behaviors that "emerge" from a large system of many "smaller or simpler entities such that...large entities" [i.e., macroscopic behaviors] arise which "exhibit properties the smaller/simpler entities do not exhibit." In this thesis we investigate mechanisms and manifestations of emergence in four dynamical systems consisting of many nonlinearly evolving units. These four systems are as follows. (a) We first study the motion of a large ensemble of many noninteracting particles in a slowly changing Hamiltonian system that undergoes a separatrix crossing. In such systems, we find that separatrix crossing induces a counterintuitive effect. Specifically, numerical simulation of two sets of densely sprinkled initial conditions on two energy curves appears to suggest that the two energy curves, one originally enclosing the other, seemingly interchange their positions. This, however, is topologically forbidden. We resolve this paradox by introducing a numerical simulation method we call "robust" and studying its consequences. (b) We next study the collective dynamics of oscillatory pacemaker neurons in the suprachiasmatic nucleus (SCN), which, through synchrony, govern the circadian rhythm of mammals. We start from a high-dimensional description of the many coupled oscillatory neuronal units within the SCN. This description is based on a forced Kuramoto model.
We then reduce the system dimensionality by using the Ott-Antonsen Ansatz and obtain a low-dimensional macroscopic description. Using this reduced macroscopic system, we explain the east-west asymmetry of jet-lag recovery and discuss the consequences of our findings. (c) Thirdly, we study neuron firing in integrate-and-fire neural networks. We build a discrete-state/discrete-time model with both excitatory and inhibitory neurons and find a phase transition between avalanching dynamics and ceaseless firing dynamics. Power-law firing avalanche size/duration distributions are observed at critical parameter values. Furthermore, in this critical regime we find the same power-law exponents as those observed from experiments and previous, more restricted, simulation studies. We also employ a mean-field method and show that inhibitory neurons in this system promote robustness of the criticality (i.e., an enhanced range of system parameters where power-law avalanche statistics applies). (d) Lastly, we study the dynamics of "reservoir computing networks" (RCNs), which are a recurrent neural network (RNN) scheme for machine learning. The advantage of RCNs over traditional RNNs is that the training is done only on the output layer, usually via a simple least-squares method. We show that RCNs are very effective for inferring unmeasured state variables of dynamical systems whose system state is only partially measured. Using the examples of the Lorenz system and the Rössler system, we demonstrate the potential of an RCN to perform as a universal model-free "observer".
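The reservoir computing scheme in (d), where only the output layer is trained by least squares, can be sketched on a toy observer task. The reservoir size, spectral radius, input scaling, and the sine/cosine "system" below are illustrative choices, not those used in the thesis: we observe u(t) = sin(t) and train a linear readout of the reservoir states to infer the unmeasured variable v(t) = cos(t).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "partially measured" system: observe u(t) = sin(t), infer the
# hidden variable v(t) = cos(t) from the reservoir's internal states.
t = np.linspace(0.0, 400.0, 4000)
u, v = np.sin(t), np.cos(t)

N = 200
W_res = rng.normal(0.0, 1.0, (N, N))
W_res *= 0.9 / np.abs(np.linalg.eigvals(W_res)).max()  # spectral radius 0.9
W_in = rng.uniform(-0.5, 0.5, N)

# Drive the fixed random reservoir; only the readout is ever trained.
r = np.zeros(N)
states = []
for uk in u:
    r = np.tanh(W_res @ r + W_in * uk)
    states.append(r.copy())
R = np.array(states)

warm = 500                                   # discard the transient
W_out, *_ = np.linalg.lstsq(R[warm:], v[warm:], rcond=None)
pred = R @ W_out
err = float(np.sqrt(np.mean((pred[warm:] - v[warm:]) ** 2)))
print(f"RMS error inferring the hidden variable: {err:.3f}")
```

Because the input and reservoir weights are fixed, training reduces to one least-squares solve, which is the computational advantage the text attributes to RCNs.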
NASA Technical Reports Server (NTRS)
Senocak, I.; Ackerman, A. S.; Kirkpatrick, M. P.; Stevens, D. E.; Mansour, N. N.
2004-01-01
Large-eddy simulation (LES) is a widely used technique in atmospheric modeling research. In LES, large, unsteady, three-dimensional structures are resolved, and small structures that are not resolved on the computational grid are modeled. A filtering operation is applied to distinguish between resolved and unresolved scales. We present two near-surface models that have found use in atmospheric modeling. We also suggest a simpler eddy viscosity model that adopts Prandtl's mixing length model (Prandtl 1925) in the vicinity of the surface and blends with the dynamic Smagorinsky model (Germano et al., 1991) away from the surface. We evaluate the performance of these surface models by simulating a neutrally stratified atmospheric boundary layer.
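A blended eddy viscosity of the kind described can be sketched as follows. This toy uses a static (not dynamic) Smagorinsky form away from the surface, and the tanh blending function, blending height, and all parameter values are illustrative assumptions rather than the paper's actual formulation.

```python
import math

KAPPA = 0.4   # von Karman constant
CS = 0.17     # Smagorinsky constant (a typical static value)

def blended_eddy_viscosity(z, delta, strain, dudz, z_blend=50.0):
    """Prandtl mixing-length viscosity near the surface, blended into a
    Smagorinsky-type viscosity aloft. The tanh blending function and the
    z_blend height scale are illustrative assumptions."""
    nu_prandtl = (KAPPA * z) ** 2 * abs(dudz)      # mixing length l = kappa*z
    nu_smag = (CS * delta) ** 2 * abs(strain)      # filter width delta
    w = 0.5 * (1.0 + math.tanh((z - z_blend) / (0.25 * z_blend)))
    return (1.0 - w) * nu_prandtl + w * nu_smag    # w: 0 at wall, 1 aloft

near = blended_eddy_viscosity(z=2.0, delta=20.0, strain=0.05, dudz=0.5)
aloft = blended_eddy_viscosity(z=200.0, delta=20.0, strain=0.05, dudz=0.01)
print(f"nu_t near surface: {near:.3f}, aloft: {aloft:.3f}")
```

Near the wall the mixing-length term dominates, while far from the wall the result collapses onto the Smagorinsky value, which is the qualitative behavior the blended model is meant to provide.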
Verification of Orthogrid Finite Element Modeling Techniques
NASA Technical Reports Server (NTRS)
Steeve, B. E.
1996-01-01
The stress analysis of orthogrid structures, specifically with I-beam sections, is regularly performed using finite elements. Various modeling techniques are often used to simplify the modeling process but still adequately capture the actual hardware behavior. The accuracy of such 'short cuts' is sometimes in question. This report compares three modeling techniques to actual test results from a loaded orthogrid panel. The finite element models include a beam, a shell, and a mixed beam and shell element model. Results show that the shell element model performs the best, but that the simpler beam and mixed beam and shell element models provide reasonable to conservative results for a stress analysis. When deflection and stiffness are critical, it is important to capture the effect of the orthogrid nodes in the model.
Asymptotic Stability of Interconnected Passive Non-Linear Systems
NASA Technical Reports Server (NTRS)
Isidori, A.; Joshi, S. M.; Kelkar, A. G.
1999-01-01
This paper addresses the problem of stabilization of a class of internally passive non-linear time-invariant dynamic systems. A class of non-linear marginally strictly passive (MSP) systems is defined, which is less restrictive than input-strictly passive systems. It is shown that the interconnection of a non-linear passive system and a non-linear MSP system is globally asymptotically stable. The result generalizes and weakens the conditions of the passivity theorem, which requires one of the systems to be input-strictly passive. In the case of linear time-invariant systems, it is shown that the MSP property is equivalent to the marginally strictly positive real (MSPR) property, which is much simpler to check.
Hybrid slab-microchannel gel electrophoresis system
Balch, J.W.; Carrano, A.V.; Davidson, J.C.; Koo, J.C.
1998-05-05
A hybrid slab-microchannel gel electrophoresis system is described. The hybrid system permits the fabrication of isolated microchannels for biomolecule separations without imposing the constraint of a totally sealed system. The hybrid system is reusable and ultimately much simpler and less costly to manufacture than a closed channel plate system. The hybrid system incorporates a microslab portion of the separation medium above the microchannels, thus at least substantially reducing the possibility of non-uniform field distribution and breakdown due to uncontrollable leakage. A microslab of the sieving matrix is built into the system by using plastic spacer materials and is used to uniformly couple the top plate with the bottom microchannel plate. 4 figs.
Combining joint models for biomedical event extraction
2012-01-01
Background We explore techniques for performing model combination between the UMass and Stanford biomedical event extraction systems. Both sub-components address event extraction as a structured prediction problem, and use dual decomposition (UMass) and parsing algorithms (Stanford) to find the best scoring event structure. Our primary focus is on stacking where the predictions from the Stanford system are used as features in the UMass system. For comparison, we look at simpler model combination techniques such as intersection and union which require only the outputs from each system and combine them directly. Results First, we find that stacking substantially improves performance while intersection and union provide no significant benefits. Second, we investigate the graph properties of event structures and their impact on the combination of our systems. Finally, we trace the origins of events proposed by the stacked model to determine the role each system plays in different components of the output. We learn that, while stacking can propose novel event structures not seen in either base model, these events have extremely low precision. Removing these novel events improves our already state-of-the-art F1 to 56.6% on the test set of Genia (Task 1). Overall, the combined system formed via stacking ("FAUST") performed well in the BioNLP 2011 shared task. The FAUST system obtained 1st place in three out of four tasks: 1st place in Genia Task 1 (56.0% F1) and Task 2 (53.9%), 2nd place in the Epigenetics and Post-translational Modifications track (35.0%), and 1st place in the Infectious Diseases track (55.6%). Conclusion We present a state-of-the-art event extraction system that relies on the strengths of structured prediction and model combination through stacking. Akin to results on other tasks, stacking outperforms intersection and union and leads to very strong results. 
The utility of model combination hinges on complementary views of the data, and we show that our sub-systems capture different graph properties of event structures. Finally, by removing low precision novel events, we show that performance from stacking can be further improved. PMID:22759463
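The difference between the combination strategies above can be sketched with toy event sets. The events, base score, and stacking weight below are hypothetical; real stacking learns the weight of the "proposed by the other system" feature during training rather than fixing it by hand.

```python
# Each system proposes a set of (event trigger, argument) pairs; the
# event names and scores below are hypothetical, for illustration only.
stanford = {("Phosphorylation", "TRAF2"), ("Binding", "CD40")}
umass = {("Phosphorylation", "TRAF2"), ("Regulation", "TP53")}

# Output-level combination needs only the two prediction sets.
intersection = stanford & umass
union = stanford | umass

# Stacking sketch: the UMass scorer gets an extra "also proposed by
# Stanford" feature; here its weight is fixed, whereas real stacking
# learns it from data.
def stacked_score(event, base_score=0.5, stanford_weight=0.8):
    return base_score + (stanford_weight if event in stanford else 0.0)

kept = {e for e in umass if stacked_score(e) > 1.0}
print(sorted(kept))
```

Intersection and union act only on the final outputs, while stacking lets one system's evidence rescore the other's candidates, which is why it can improve precision where the simple set operations cannot.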
Drosophila as an In Vivo Model for Human Neurodegenerative Disease
McGurk, Leeanne; Berson, Amit; Bonini, Nancy M.
2015-01-01
With the increase in the ageing population, neurodegenerative disease is devastating to families and poses a huge burden on society. The brain and spinal cord are extraordinarily complex: they consist of a highly organized network of neuronal and support cells that communicate in a highly specialized manner. One approach to tackling problems of such complexity is to address the scientific questions in simpler, yet analogous, systems. The fruit fly, Drosophila melanogaster, has been proven tremendously valuable as a model organism, enabling many major discoveries in neuroscientific disease research. The plethora of genetic tools available in Drosophila allows for exquisite targeted manipulation of the genome. Due to its relatively short lifespan, complex questions of brain function can be addressed more rapidly than in other model organisms, such as the mouse. Here we discuss features of the fly as a model for human neurodegenerative disease. There are many distinct fly models for a range of neurodegenerative diseases; we focus on select studies from models of polyglutamine disease and amyotrophic lateral sclerosis that illustrate the type and range of insights that can be gleaned. In discussion of these models, we underscore strengths of the fly in providing understanding into mechanisms and pathways, as a foundation for translational and therapeutic research. PMID:26447127
Development of a finite element model of the middle ear.
Williams, K R; Blayney, A W; Rice, H J
1996-01-01
A representative finite element model of the healthy ear is developed, commencing with a description of the decoupled isotropic tympanic membrane. This model was shown to vibrate in a manner similar to that found both numerically (1, 2) and experimentally (8). The introduction of a fibre system into the membrane matrix significantly altered the modes of vibration. The first mode remains a piston-like movement, as for the isotropic membrane. However, higher modes show a simpler vibration pattern, similar to the second mode but with a varying axis of movement and lower amplitudes. The introduction of a malleus and incus does not change the natural frequencies or mode shapes of the membrane for certain support conditions. When constraints are imposed along the ossicular chain by simulation of a cochlear impedance term, significantly altered modes can occur. More recently, a revised model of the ear has been developed by the inclusion of the outer ear canal. This discretisation uses geometries extracted from a Nuclear Magnetic Resonance scan of a healthy subject and a crude inner ear model with stiffness parameters ultimately fixed through a parameter-tuning process. The subsequently tuned model showed behaviour consistent with previous findings and should provide a good basis for subsequent modelling of diseased ears and assessment of the performance of middle ear prostheses.
Data assimilation experiments using the diffusive back and forth nudging for the NEMO ocean model
NASA Astrophysics Data System (ADS)
Ruggiero, G. A.; Ourmières, Y.; Cosme, E.; Blum, J.; Auroux, D.; Verron, J.
2014-07-01
The Diffusive Back and Forth Nudging (DBFN) is an easy-to-implement iterative data assimilation method based on the well-known Nudging method. It consists in a sequence of forward and backward model integrations, within a given time window, both of them using a feedback term to the observations. In the DBFN, therefore, the asymptotic behavior of Nudging is translated into an infinite number of iterations within a bounded time domain. In this method, the backward integration is carried out with what is called the backward model, which is basically the forward model with the sign of the time step reversed. To maintain numerical stability, the diffusion terms also have their sign reversed, giving a diffusive character to the algorithm. In this article the performance of the DBFN in controlling a primitive equation ocean model is investigated. In this kind of model, non-resolved scales are modeled by diffusion operators which dissipate energy that cascades from large to small scales. Thus, in this article the DBFN approximations and their consequences for the data assimilation system set-up are analyzed. Our main result is that the DBFN may provide results which are comparable to those produced by a 4Dvar implementation, with a much simpler implementation and a shorter CPU time for convergence.
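The forward/backward structure of the DBFN can be sketched on a scalar toy problem whose only dynamics is a diffusive damping term. The gain, step size, window length, and noise-free observations below are illustrative assumptions; the article applies the method to a primitive equation ocean model, not to this toy.

```python
import math

# Toy scalar "model" whose only dynamics is a diffusive damping term,
# dx/dt = -a*x. Gain K, step dt, and window length are illustrative.
a, dt, n, K = 0.5, 0.01, 200, 50.0
truth0 = 2.0
obs = [truth0 * math.exp(-a * k * dt) for k in range(n + 1)]  # exact observations

x0 = -1.0                      # deliberately wrong initial state
for _ in range(10):            # back-and-forth iterations
    x = x0
    # Forward pass: model plus feedback (nudging) toward the observations.
    for k in range(n):
        x += dt * (-a * x + K * (obs[k] - x))
    # Backward pass: the time step's sign is reversed, and reversing the
    # diffusive term's sign too turns the unstable anti-diffusive backward
    # model into another damped, nudged integration over reversed indices.
    for k in range(n, 0, -1):
        x += dt * (-a * x + K * (obs[k] - x))
    x0 = x                     # updated estimate of the initial state

print(f"recovered initial state: {x0:.2f} (truth {truth0})")
```

Each forward sweep pulls the trajectory onto the observations and each backward sweep carries that information to the start of the window, so the wrong initial guess is corrected without any adjoint model, which is the method's practical appeal relative to 4Dvar.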
Comparative effectiveness of the SNaP™ Wound Care System.
Hutton, David W; Sheehan, Peter
2011-04-01
Diabetic lower extremity wounds cause substantial burden to healthcare systems, costing tens of thousands of dollars per episode. Negative pressure wound therapy (NPWT) devices have been shown to be cost-effective at treating these wounds, but the traditional devices use bulky electrical pumps that require a durable medical equipment rental-based procurement process. The Spiracur SNaP™ Wound Care System is an ultraportable NPWT system that does not use an electric pump and is fully disposable. It has superior healing compared to standard of care with modern dressings and comparable healing to traditional NPWT devices while giving patients greater mobility and giving clinicians a simpler procurement process. We used a mathematical model to analyse the costs of the SNaP™ system and compare them to standard of care and electrically powered NPWT devices. When compared to standard of care, the SNaP™ system saves over $9000 per wound treated and more than doubles the number of patients healed. The SNaP system has similar healing time to powered NPWT devices, but saves $2300 in Medicare payments or $2800 for private payers per wound treated. Our analysis shows that the SNaP™ system could save substantial treatment costs in addition to allowing patients greater freedom and mobility.
Barlow, P.M.
1994-01-01
Steady-state, two- and three-dimensional, ground-water flow models coupled with a particle-tracking program were evaluated to determine their effectiveness in delineating contributing areas of existing and hypothetical public-supply wells pumping from two contrasting stratified-drift aquifers of Cape Cod, Mass. Several of the contributing areas delineated by use of the three-dimensional models do not conform to the simple ellipsoidal shapes that are typically delineated by use of two-dimensional analytical and numerical modeling techniques, include discontinuous areas of the water table, and do not surround the wells. Because two-dimensional areal models do not account for vertical flow, they cannot adequately represent many of the hydrogeologic and well-design variables that were shown to complicate the delineation of contributing areas in these flow systems, including the presence of discrete lenses of low hydraulic conductivity, large ratios of horizontal to vertical hydraulic conductivity, shallow streams, partially penetrating supply wells, and low pumping rates (less than 0.1 million gallons per day). Nevertheless, contributing areas delineated for two wells in the simpler of the two flow systems--a thin (less than 100 feet), single-layer, uniform aquifer with near-ideal boundary conditions--were not significantly different for the two- or three-dimensional models of the natural system, for a pumping rate of 0.5 million gallons per day. Use of particle tracking helped identify the source of water to simulated wells, which included precipitation recharge, wastewater return flow, and pond water. Pond water and wastewater return flow accounted for as much as 73 and 40 percent, respectively, of the water captured by simulated wells.
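Particle tracking for delineating a contributing area can be sketched in the classic analytic setting of uniform regional flow plus a single extraction well. All parameters below are illustrative, and this 2D analytic velocity field is far simpler than the study's numerical flow models; the point is only how forward-tracked particles sort starting locations into captured and not captured.

```python
import math

# Uniform regional flow in +x plus one pumping well at the origin: the
# classic analytic capture-zone setting (all parameters illustrative).
U = 1.0          # regional specific discharge
Q = 2.0          # well extraction rate per unit aquifer thickness

def velocity(x, y):
    """Superpose uniform flow and a point sink at the origin."""
    r2 = x * x + y * y
    return (U - Q * x / (2 * math.pi * r2), -Q * y / (2 * math.pi * r2))

def reaches_well(x, y, dt=0.002, steps=50000):
    """Forward-Euler particle tracking; True if the particle is captured."""
    for _ in range(steps):
        vx, vy = velocity(x, y)
        x, y = x + vx * dt, y + vy * dt
        if x * x + y * y < 0.01:   # entered the well's capture cell
            return True
    return False

# The analytic capture-zone half-width far upstream is Q / (2 * U) = 1.
print(reaches_well(-5.0, 0.1))   # starts inside the capture zone
print(reaches_well(-5.0, 2.0))   # starts outside
```

The two tracked particles bracket the analytic dividing streamline; a numerical delineation repeats this test for many starting cells to map out the full contributing area.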
NASA Technical Reports Server (NTRS)
Wolf, M.
1982-01-01
The historical progression of efficiency improvements, cost reductions, and performance improvements in modules and photovoltaic systems is described. The potential for future improvements in photovoltaic device efficiencies and cost reductions continues as device concepts, designs, processes, and automated production capabilities mature. Additional step-function improvements can be made as today's simpler devices are replaced by more sophisticated devices.
NASA Astrophysics Data System (ADS)
Rosolem, R.; Rahman, M.; Kollet, S. J.; Wagener, T.
2017-12-01
Understanding the impacts of land cover and climate changes on terrestrial hydrometeorology is important across a range of spatial and temporal scales. Earth System Models (ESMs) provide a robust platform for evaluating these impacts. However, current ESMs generally lack the representation of key hydrological processes (e.g., preferential water flow and direct interactions with aquifers). The typical "free drainage" conceptualization of land models can misrepresent the magnitude of those interactions, consequently affecting the exchange of energy and water at the surface as well as estimates of groundwater recharge. Recent studies show the benefits of explicitly simulating the interactions between subsurface and surface processes in similar models. However, such parameterizations are often computationally demanding, resulting in limited application for large/global-scale studies. Here, we take a different approach in developing a novel parameterization for groundwater dynamics. Instead of directly adding another complex process to an established land model, we examine a set of comprehensive experimental scenarios using a very robust and established three-dimensional hydrological model to develop a simpler parameterization that represents the aquifer to land surface interactions. The main goal of our developed parameterization is to simultaneously maximize the computational gain (i.e., "efficiency") while minimizing simulation errors in comparison to the full 3D model (i.e., "robustness") to allow for easy implementation in ESMs globally. Our study focuses primarily on understanding the dynamics of both groundwater recharge and discharge. Preliminary results show that our proposed approach significantly reduces the computational demand while deviations from the full 3D model remain small for these processes.
Ligmann-Zielinska, Arika; Kramer, Daniel B; Spence Cheruvelil, Kendra; Soranno, Patricia A
2014-01-01
Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.
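The variance decomposition step can be sketched with a brute-force estimate of first-order sensitivity indices. The three-input linear "ABM response" below is hypothetical, and production analyses use more efficient quasi-random (e.g., Sobol') estimators rather than this nested sampling.

```python
import random
import statistics

random.seed(1)
u = random.random

def model(x1, x2, x3):
    """Hypothetical ABM response in which the first input dominates."""
    return 5.0 * x1 + 0.5 * x2 + 0.1 * x3

# Total output variance from plain Monte Carlo sampling of all inputs.
v_total = statistics.variance(model(u(), u(), u()) for _ in range(20000))

def first_order_index(which, n_outer=500, n_inner=100):
    """Brute-force first-order index: Var over x_i of E[y | x_i], / Var[y]."""
    cond_means = []
    for _ in range(n_outer):
        fixed = u()
        ys = []
        for _ in range(n_inner):
            xs = [u(), u(), u()]
            xs[which] = fixed          # freeze the input under study
            ys.append(model(*xs))
        cond_means.append(statistics.mean(ys))
    return statistics.variance(cond_means) / v_total

s = [first_order_index(i) for i in range(3)]
print("first-order indices:", [round(v, 2) for v in s])
```

Inputs whose indices are near zero can be dropped or fixed, which is exactly the route to the simpler, reduced-input ABM versions described above.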
Integrating 3D geological information with a national physically-based hydrological modelling system
NASA Astrophysics Data System (ADS)
Lewis, Elizabeth; Parkin, Geoff; Kessler, Holger; Whiteman, Mark
2016-04-01
Robust numerical models are an essential tool for informing flood and water management and policy around the world. Physically-based hydrological models have traditionally not been used for such applications due to prohibitively large data, time and computational resource requirements. Given recent advances in computing power and data availability, a robust, physically-based hydrological modelling system for Great Britain using the SHETRAN model and national datasets has been created. Such a model has several advantages over less complex systems. Firstly, compared with conceptual models, a national physically-based model is more readily applicable to ungauged catchments, in which hydrological predictions are also required. Secondly, the results of a physically-based system may be more robust under changing conditions such as climate and land cover, as physical processes and relationships are explicitly accounted for. Finally, a fully integrated surface and subsurface model such as SHETRAN offers a wider range of applications compared with simpler schemes, such as assessments of groundwater resources, sediment and nutrient transport and flooding from multiple sources. As such, SHETRAN provides a robust means of simulating numerous terrestrial system processes which will add physical realism when coupled to the JULES land surface model. 306 catchments spanning Great Britain have been modelled using this system. The standard configuration of this system performs satisfactorily (NSE > 0.5) for 72% of catchments and well (NSE > 0.7) for 48%. Many of the remaining 28% of catchments that performed relatively poorly (NSE < 0.5) are located in the chalk in the south east of England. As such, the British Geological Survey 3D geology model for Great Britain (GB3D) has been incorporated, for the first time in any hydrological model, to pave the way for improvements to be made to simulations of catchments with important groundwater regimes. 
This coupling has involved development of software to allow for easy incorporation of geological information into SHETRAN for any model setup. The addition of more realistic subsurface representation following this approach is shown to greatly improve model performance in areas dominated by groundwater processes. The resulting modelling system has great potential to be used as a resource at national, regional and local scales in an array of different applications, including climate change impact assessments, land cover change studies and integrated assessments of groundwater and surface water resources.
Tawhai, Merryn H; Bates, Jason H T
2011-05-01
Multi-scale modeling of biological systems has recently become fashionable due to the growing power of digital computers as well as to the growing realization that integrative systems behavior is as important to life as is the genome. While it is true that the behavior of a living organism must ultimately be traceable to all its components and their myriad interactions, attempting to codify this in its entirety in a model misses the insights gained from understanding how collections of system components at one level of scale conspire to produce qualitatively different behavior at higher levels. The essence of multi-scale modeling thus lies not in the inclusion of every conceivable biological detail, but rather in the judicious selection of emergent phenomena appropriate to the level of scale being modeled. These principles are exemplified in recent computational models of the lung. Airways responsiveness, for example, is an organ-level manifestation of events that begin at the molecular level within airway smooth muscle cells, yet it is not necessary to invoke all these molecular events to accurately describe the contraction dynamics of a cell, nor is it necessary to invoke all phenomena observable at the level of the cell to account for the changes in overall lung function that occur following methacholine challenge. Similarly, the regulation of pulmonary vascular tone has complex origins within the individual smooth muscle cells that line the blood vessels but, again, many of the fine details of cell behavior average out at the level of the organ to produce an effect on pulmonary vascular pressure that can be described in much simpler terms. The art of multi-scale lung modeling thus reduces not to being limitlessly inclusive, but rather to knowing what biological details to leave out.
A computationally tractable version of the collective model
NASA Astrophysics Data System (ADS)
Rowe, D. J.
2004-05-01
A computationally tractable version of the Bohr-Mottelson collective model is presented which makes it possible to diagonalize realistic collective models and obtain convergent results in relatively small appropriately chosen subspaces of the collective model Hilbert space. Special features of the proposed model are that it makes use of the beta wave functions given analytically by the softened-beta version of the Wilets-Jean model, proposed by Elliott et al., and a simple algorithm for computing SO(5)⊃SO(3) spherical harmonics. The latter has much in common with the methods of Chacon, Moshinsky, and Sharp but is conceptually and computationally simpler. Results are presented for collective models ranging from the spherical vibrator to the Wilets-Jean and axially symmetric rotor-vibrator models.
Computer-automated opponent for manned air-to-air combat simulations
NASA Technical Reports Server (NTRS)
Hankins, W. W., III
1979-01-01
Two versions of a real-time digital-computer program that operates a fighter airplane interactively against a human pilot in simulated air combat were evaluated. They function by replacing one of two pilots in the Langley differential maneuvering simulator. Both versions make maneuvering decisions from identical information and logic; they differ essentially in the aerodynamic models that they control. One is very complete, but the other is much simpler, primarily characterizing the airplane's performance (lift, drag, and thrust). Both models competed extremely well against highly trained U.S. fighter pilots.
Numerical study of combustion processes in afterburners
NASA Technical Reports Server (NTRS)
Zhou, Xiaoqing; Zhang, Xiaochun
1986-01-01
Mathematical models and numerical methods are presented for computer modeling of aeroengine afterburners. A computer code GEMCHIP is described briefly. The algorithms SIMPLER, for gas flow predictions, and DROPLET, for droplet flow calculations, are incorporated in this code. The block correction technique is adopted to facilitate convergence. The method of handling irregular shapes of combustors and flameholders is described. The predicted results for a low-bypass-ratio turbofan afterburner in the cases of gaseous combustion and multiphase spray combustion are provided and analyzed, and engineering guides for afterburner optimization are presented.
Mathematical Model for a Simplified Calculation of the Input Momentum Coefficient for AFC Purposes
NASA Astrophysics Data System (ADS)
Hirsch, Damian; Gharib, Morteza
2016-11-01
Active Flow Control (AFC) is an emerging technology which aims at enhancing the aerodynamic performance of flight vehicles (e.g., to save fuel). A viable AFC system must consider the limited resources available on a plane for attaining performance goals. A higher performance goal (e.g., airplane incremental lift) demands a higher input fluidic requirement (e.g., mass flow rate). Therefore, the key requirement for a successful and practical design is to minimize power input while maximizing performance to achieve design targets. One of the most used design parameters is the input momentum coefficient Cμ. The difficulty associated with Cμ lies in obtaining the parameters for its calculation. In the literature two main approaches can be found, both of which have their own disadvantages (assumptions, difficult measurements). A new, much simpler calculation approach will be presented that is based on a mathematical model that can be applied to most jet designs (e.g., steady or sweeping jets). The model-incorporated assumptions will be justified theoretically as well as experimentally. Furthermore, the model's capabilities are exploited to give new insight into the AFC technology and its physical limitations. Supported by Boeing.
Study on a novel laser target detection system based on software radio technique
NASA Astrophysics Data System (ADS)
Song, Song; Deng, Jia-hao; Wang, Xue-tian; Gao, Zhen; Sun, Ji; Sun, Zhi-hui
2008-12-01
This paper presents the application of software radio techniques to a laser target detection system with pseudo-random code modulation. Based on the theory of software radio, the basic framework of the system, the hardware platform, and the implementation of the software system are detailed. Also, the block diagram of the system, the DSP circuit, the block diagram of the pseudo-random code generator, and the software flow diagram of signal processing are designed. Experimental results have shown that the application of software radio techniques provides a novel method to realize the modularization, miniaturization, and intelligence of the laser target detection system, and that the upgrade and improvement of the system will become simpler, more convenient, and cheaper.
Cooling and Trapping of Neutral Atoms
2009-04-30
Schrödinger equation in which the absence of the rotating wave approximation accounts for the two frequencies [18]. This result can be described in...depict this energy conservation process is the Jaynes-Cummings view, where the light field can be described as a number state. Then it becomes clear...of the problem under consideration. Find a suitable approximation for the normal modes; the simpler, the better. Decide how to model the light
Cognitive Complexity, Attitudinal Affect, and Dispersion in Affect Ratings for Products.
Durand, Richard M
1979-04-01
The purpose of this study was to examine the relationships between cognitive complexity, attitudinal affect, and dispersion of affect scores (N = 102 male business administration undergraduates). Models of automobiles and toothpaste brands were the content domains studied. Analysis using Pearson product-moment correlation supported the hypothesis that cognitively complex Ss had a lower level of affect and greater dispersion of affect scores than did cognitively simpler Ss.
Crustal deformation in Great California Earthquake cycles
NASA Technical Reports Server (NTRS)
Li, Victor C.; Rice, James R.
1987-01-01
A model in which coupling is described approximately through a generalized Elsasser model is proposed for computation of the periodic crustal deformation associated with repeated strike-slip earthquakes. The model is found to provide a more realistic physical description of tectonic loading than do simpler kinematic models. Parameters are chosen to model the 1857 and 1906 San Andreas ruptures, and predictions are found to be consistent with data on variations of contemporary surface strain and displacement rates as a function of distance from the 1857 and 1906 rupture traces. Results indicate that the asthenosphere appropriate to describe crustal deformation on the earthquake cycle time scale lies in the lower crust and perhaps the crust-mantle transition zone.
NASA Technical Reports Server (NTRS)
Stieglitz, Marc; Ducharne, Agnes; Koster, Randy; Suarez, Max; Busalacchi, Antonio J. (Technical Monitor)
2000-01-01
The three-layer snow model is coupled to the global catchment-based Land Surface Model (LSM) of the NASA Seasonal to Interannual Prediction Project (NSIPP) project, and the combined models are used to simulate the growth and ablation of snow cover over the North American continent for the period 1987-1988. The various snow processes included in the three-layer model, such as snow melting and re-freezing, dynamic changes in snow density, and snow insulating properties, are shown (through a comparison with the corresponding simulation using a much simpler snow model) to lead to an improved simulation of ground thermodynamics on the continental scale.
Ochoa-Gondar, O; Vila-Corcoles, A; Rodriguez-Blanco, T; Hospital, I; Salsench, E; Ansa, X; Saun, N
2014-04-01
This study compares the ability of two simpler severity rules (classical CRB65 vs. proposed CORB75) in predicting short-term mortality in elderly patients with community-acquired pneumonia (CAP). A population-based study was undertaken involving 610 patients ≥ 65 years old with radiographically confirmed CAP diagnosed between 2008 and 2011 in Tarragona, Spain (350 cases in the derivation cohort, 260 cases in the validation cohort). Severity rules were calculated at the time of diagnosis, and 30-day mortality was considered as the dependent variable. The area under the receiver operating characteristic curves (AUC) was used to compare the discriminative power of the severity rules. Eighty deaths (46 in the derivation and 34 in the validation cohorts) were observed, which gives a mortality rate of 13.1 % (15.6 % for hospitalized and 3.3 % for outpatient cases). After multivariable analyses, besides CRB (confusion, respiration rate ≥ 30/min, systolic blood pressure <90 mmHg or diastolic ≤ 60 mmHg), peripheral oxygen saturation (≤ 90 %) and age ≥ 75 years appeared to be associated with increasing 30-day mortality in the derivation cohort. The model showed adequate calibration for the derivation and validation cohorts. A modified CORB75 scoring system (similar to the classical CRB65, but adding oxygen saturation and increasing the age to 75 years) was constructed. The AUC statistics for predicting mortality in the derivation and validation cohorts were 0.79 and 0.82, respectively. In the derivation cohort, a CORB75 score ≥ 2 showed 78.3 % sensitivity and 65.5 % specificity for mortality (in the validation cohort, these were 82.4 and 71.7 %, respectively). The proposed CORB75 scoring system has good discriminative power in predicting short-term mortality among elderly people with CAP, which supports its use for severity assessment of these patients in primary care.
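The scoring and discrimination analysis can be illustrated with a short sketch: a CORB75-style score built from the abstract's criteria, and an AUC computed via the Mann-Whitney rank identity. The patient values used below are hypothetical, and this is not the study's code:

```python
import numpy as np

def corb75_score(confusion, resp_rate, sys_bp, dia_bp, spo2, age):
    # One point per criterion, following the abstract: confusion;
    # respiration rate >= 30/min; systolic BP < 90 or diastolic <= 60 mmHg;
    # O2 saturation <= 90%; age >= 75 years.
    return (int(bool(confusion))
            + int(resp_rate >= 30)
            + int(sys_bp < 90 or dia_bp <= 60)
            + int(spo2 <= 90)
            + int(age >= 75))

def auc(scores, died):
    # Area under the ROC curve via the Mann-Whitney rank identity:
    # P(score_case > score_control) + 0.5 * P(tie).
    scores = np.asarray(scores, float)
    died = np.asarray(died, bool)
    pos, neg = scores[died], scores[~died]
    gt = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return float(gt + 0.5 * ties)
```

As an example, a hypothetical 80-year-old with a respiration rate of 32/min, blood pressure 85/70 mmHg, and SpO2 of 88% scores 4, above the score >= 2 threshold the authors evaluate.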
Multi-model inference for incorporating trophic and climate uncertainty into stock assessments
NASA Astrophysics Data System (ADS)
Ianelli, James; Holsman, Kirstin K.; Punt, André E.; Aydin, Kerim
2016-12-01
Ecosystem-based fisheries management (EBFM) approaches allow a broader and more extensive consideration of objectives than is typically possible with conventional single-species approaches. Ecosystem linkages may include trophic interactions and climate change effects on productivity for the relevant species within the system. Presently, models are evolving to include a comprehensive set of fishery and ecosystem information to address these broader management considerations. The increased scope of EBFM approaches is accompanied by a greater number of plausible models to describe the systems. This can lead to harvest recommendations and biological reference points that differ considerably among models. Model selection for projections (and specific catch recommendations) often occurs through a process that tends to adopt familiar, often simpler, models without considering those that incorporate more complex ecosystem information. Multi-model inference offers a framework that resolves this dilemma by providing a means of including information from alternative, often divergent models to inform biological reference points and possible catch consequences. We apply an example of this approach to data for three species of groundfish in the Bering Sea: walleye pollock, Pacific cod, and arrowtooth flounder using three models: 1) an age-structured "conventional" single-species model, 2) an age-structured single-species model with temperature-specific weight at age, and 3) a temperature-specific multi-species stock assessment model. The latter two approaches also include consideration of alternative future climate scenarios, adding another dimension to evaluate model projection uncertainty. We show how Bayesian model-averaging methods can be used to incorporate such trophic and climate information to broaden single-species stock assessments by using an EBFM approach that may better characterize uncertainty.
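The Bayesian model-averaging step can be sketched as follows. The reference-point values and log marginal likelihoods below are purely illustrative stand-ins for the three assessment models, not the paper's numbers:

```python
import numpy as np

def model_average(estimates, log_marginals):
    # Posterior model weights from log marginal likelihoods, assuming
    # equal prior model probabilities, then a weighted point estimate.
    lm = np.asarray(log_marginals, float)
    w = np.exp(lm - lm.max())                    # stabilize the exponent
    w /= w.sum()
    return w, float(np.dot(w, estimates))

# Illustrative reference points (e.g. a biomass target) from the three
# model types and made-up log marginal likelihoods.
b_ref = [300.0, 280.0, 350.0]
log_z = [-1050.2, -1049.1, -1049.6]
weights, b_avg = model_average(b_ref, log_z)
```

The averaged reference point then reflects all three models in proportion to their support, rather than committing to the single most familiar one.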
Selection of Worst-Case Pesticide Leaching Scenarios for Pesticide Registration
NASA Astrophysics Data System (ADS)
Vereecken, H.; Tiktak, A.; Boesten, J.; Vanderborght, J.
2010-12-01
The use of pesticides, fertilizers and manure in intensive agriculture may have a negative impact on the quality of ground- and surface water resources. Legislative action has been undertaken in many countries to protect surface and groundwater resources from contamination by surface applied agrochemicals. Of particular concern are pesticides. The registration procedure plays an important role in the regulation of pesticide use in the European Union. In order to register a certain pesticide use, the notifier needs to prove that the use does not entail a risk of groundwater contamination. Therefore, leaching concentrations of the pesticide need to be assessed using model simulations for so-called worst-case scenarios. In the current procedure, a worst-case scenario represents a parameterized pesticide fate model for a certain soil and a certain time series of weather conditions that tries to represent all relevant processes such as transient water flow, root water uptake, pesticide transport, sorption, decay and volatilisation as accurately as possible. Since this model has been parameterized for only one soil and weather time series, it is uncertain whether it represents a worst-case condition for a certain pesticide use. We discuss an alternative approach that uses a simpler model that requires less detailed information about the soil and weather conditions but still represents the effect of soil and climate on pesticide leaching using information that is available for the entire European Union. A comparison between the two approaches demonstrates that the higher precision that the detailed model provides for the prediction of pesticide leaching at a certain site is counteracted by its smaller accuracy in representing a worst-case condition. The simpler model predicts leaching concentrations less precisely at a certain site but has a complete coverage of the area, so that it selects a worst-case condition more accurately.
Heat Pipe Vapor Dynamics. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Issacci, Farrokh
1990-01-01
The dynamic behavior of the vapor flow in heat pipes is investigated at startup and during operational transients. The vapor is modeled as two-dimensional, compressible viscous flow in an enclosure with inflow and outflow boundary conditions. For steady-state and operating transients, the SIMPLER method is used. In this method a control volume approach is employed on a staggered grid which makes the scheme very stable. It is shown that for relatively low input heat fluxes the compressibility of the vapor flow is low and the SIMPLER scheme is suitable for the study of transient vapor dynamics. When the input heat flux is high or the process under a startup operation starts at very low pressures and temperatures, the vapor is highly compressible and a shock wave is created in the evaporator. It is shown that for a wide range of input heat fluxes, the standard methods, including the SIMPLER scheme, are not suitable. A nonlinear filtering technique, along with the centered difference scheme, are then used for shock capturing as well as for the solution of the cell Reynolds-number problem. For high heat flux, the startup transient phase involves multiple shock reflections in the evaporator region. Each shock reflection causes a significant increase in the local pressure and a large pressure drop along the heat pipe. Furthermore, shock reflections cause flow reversal in the evaporation region and flow circulations in the adiabatic region. The maximum and maximum-averaged pressure drops in different sections of the heat pipe oscillate periodically with time because of multiple shock reflections. The pressure drop converges to a constant value at steady state. However, it is significantly higher than its steady-state value at the initiation of the startup transient. The time for the vapor core to reach steady-state condition depends on the input heat flux, the heat pipe geometry, the working fluid, and the condenser conditions. 
However, the vapor transient time for an Na-filled heat pipe is on the order of seconds. Depending on the time constant of the overall system, the vapor transient time may be very short. Therefore, the vapor core may be assumed to be quasi-steady in the transient analysis of heat pipe operation.
A framework for qualitative reasoning about solid objects
NASA Technical Reports Server (NTRS)
Davis, E.
1987-01-01
Predicting the behavior of a qualitatively described system of solid objects requires a combination of geometrical, temporal, and physical reasoning. Methods based upon formulating and solving differential equations are not adequate for robust prediction, since the behavior of a system over extended time may be much simpler than its behavior over local time. A first-order logic, in which one can state simple physical problems and derive their solution deductively, without recourse to solving the differential equations, is discussed. This logic is substantially more expressive and powerful than any previous AI representational system in this domain.
Was there a universal tRNA before specialized tRNAs came into existence?
NASA Technical Reports Server (NTRS)
Lacey, James C., Jr.; Staves, Mark P.
1990-01-01
It is generally true that evolving systems begin simply and become more complex in the evolutionary process. For those who try to understand the origin of a biochemical system, what is required is the development of an idea as to what simpler system preceded the present one. A hypothesis is presented that a universal tRNA molecule, capable of reading many codons, may have preceded the appearance of individual tRNAs. Evidence seems to suggest that this molecule may have been derived from a common ancestor of the contemporary 5S rRNAs and tRNAs.
Yobbi, D.K.
2000-01-01
A nonlinear least-squares regression technique for estimation of ground-water flow model parameters was applied to an existing model of the regional aquifer system underlying west-central Florida. The regression technique minimizes the differences between measured and simulated water levels. Regression statistics, including parameter sensitivities and correlations, were calculated for reported parameter values in the existing model. Optimal parameter values for selected hydrologic variables of interest are estimated by nonlinear regression. Optimal estimates of parameter values range from about 140 times greater than to about 0.01 times the reported values. Independently estimating all parameters by nonlinear regression was impossible, given the existing zonation structure and number of observations, because of parameter insensitivity and correlation. Although the model yields parameter values similar to those estimated by other methods and reproduces the measured water levels reasonably accurately, a simpler parameter structure should be considered. Some possible ways of improving model calibration are to: (1) modify the defined parameter-zonation structure by omitting and/or combining parameters to be estimated; (2) carefully eliminate observation data based on evidence that they are likely to be biased; (3) collect additional water-level data; (4) assign values to insensitive parameters; and (5) estimate the most sensitive parameters first, then, using the optimized values for these parameters, estimate the entire data set.
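The regression idea (minimizing measured-minus-simulated water-level residuals over the model parameters) can be sketched with a toy head model standing in for the regional flow model. The exponential "head" function, starting values, and function names are illustrative assumptions; a damped Gauss-Newton (Levenberg-Marquardt-style) iteration stands in for the study's regression code:

```python
import numpy as np

# Toy stand-in for the ground-water model: simulated heads decay with
# distance at rates set by the two parameters being estimated.
def simulated_heads(params, x):
    a, b = params
    return a * np.exp(-b * x)

def fit_parameters(h_obs, x, p0, n_iter=100, lam=1e-3):
    # Damped Gauss-Newton minimization of measured-minus-simulated
    # head residuals.
    p = np.asarray(p0, float)
    sse = lambda q: float(np.sum((h_obs - simulated_heads(q, x)) ** 2))
    for _ in range(n_iter):
        a, b = p
        r = h_obs - simulated_heads(p, x)
        J = np.column_stack([np.exp(-b * x),            # d(head)/da
                             -a * x * np.exp(-b * x)])  # d(head)/db
        step = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
        if sse(p + step) < sse(p):
            p, lam = p + step, lam * 0.5                # accept, relax damping
        else:
            lam *= 10.0                                 # reject, damp harder
    return p

x_obs = np.linspace(0.0, 10.0, 20)
h_obs = simulated_heads((50.0, 0.3), x_obs)             # noise-free sketch data
p_hat = fit_parameters(h_obs, x_obs, p0=[30.0, 0.1])
```

The parameter sensitivities and correlations the study reports correspond to the Jacobian products `J.T @ J` evaluated at the optimum.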
Optimal hydraulic design of new-type shaft tubular pumping system
NASA Astrophysics Data System (ADS)
Zhu, H. G.; Zhang, R. T.; Zhou, J. R.
2012-11-01
Based on the characteristics of large flow rate, low head, short annual operation time and high reliability of city flood-control pumping stations, a new-type shaft tubular pumping system featuring a shaft suction box and a siphon-type discharge passage with a vacuum breaker as the cutoff device was put forward, which possesses such advantages as simpler structure, reliable cutoff and higher energy performance. According to the design parameters of a city flood-control pumping station, a numerical computation model was set up including the shaft-type suction box, siphon-type discharge passage, pump impeller and guide vanes. Using the commercial CFD software Fluent, the RNG k-ε turbulence model was adopted to close the three-dimensional time-averaged incompressible N-S equations. After completing the optimal hydraulic design of the shaft-type suction box, and keeping the parameters of total length, maximum width and outlet section unchanged, siphon-type discharge passages with three hump locations and three hump heights were designed, and numerical analyses of the nine hydraulic design schemes of the pumping system were carried out. The computational results show that changing the hump location and hump height directly affects the internal flow patterns of the discharge passages and the hydraulic performance of the system, and when the hump is located 3.66D from the inlet section and the hump height is about 0.65D (D is the diameter of the pump impeller), the new-type shaft tubular pumping system achieves better energy performance. A pumping system model test of the optimal design scheme was carried out. The result shows that the highest pumping system efficiency reaches 75.96%, and at the design head of 1.15 m the flow rate and system efficiency were 0.304 m³/s and 63.10%, respectively. Thus, the validity of the optimal design method was verified by the model test, and a solid foundation was laid for the application and extension of the new-type shaft tubular pumping system.
ASSESSING THE INFLUENCE OF THE SOLAR ORBIT ON TERRESTRIAL BIODIVERSITY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, F.; Bailer-Jones, C. A. L.
The terrestrial record shows a significant variation in the extinction and origination rates of species during the past half-billion years. Numerous studies have claimed an association between this variation and the motion of the Sun around the Galaxy, invoking the modulation of cosmic rays, gamma rays, and comet impact frequency as a cause of this biodiversity variation. However, some of these studies exhibit methodological problems, or were based on coarse assumptions (such as a strict periodicity of the solar orbit). Here we investigate this link in more detail, using a model of the Galaxy to reconstruct the solar orbit and thus to derive a predictive model of the temporal variation of the extinction rate due to astronomical mechanisms. We compare these predictions as well as those of various reference models with paleontological data. Our approach involves Bayesian model comparison, which takes into account the uncertainties in the paleontological data as well as the distribution of solar orbits consistent with the uncertainties in the astronomical data. We find that various versions of the orbital model are not favored beyond simpler reference models. In particular, the distribution of mass extinction events can be explained just as well by a uniform random distribution as by any other model tested. Although our negative results on the orbital model are robust to changes in the Galaxy model, the Sun's coordinates, and the errors in the data, we also find that it would be very difficult to positively identify the orbital model even if it were the true one. (In contrast, we do find evidence against simpler periodic models.) Thus, while we cannot rule out there being some connection between solar motion and biodiversity variations on the Earth, we conclude that it is difficult to give convincing positive conclusions of such a connection using current data.
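The Bayesian model comparison can be illustrated in miniature: compute the marginal likelihood (evidence) of a set of extinction-event times under a uniform-rate reference model and under a sinusoidally modulated periodic model with its phase marginalized numerically. The event times, time span, and period below are synthetic stand-ins for the paleontological data, and the rate model is a coarse simplification of the paper's:

```python
import numpy as np

def log_evidence_uniform(times, span):
    # Marginal likelihood when events are i.i.d. uniform on [0, span];
    # there are no free parameters to marginalize.
    return -len(times) * np.log(span)

def log_evidence_periodic(times, span, period, n_phase=256):
    # Evidence for a sinusoidally modulated event rate, marginalizing the
    # unknown phase numerically under a flat prior. The density
    # (1 + cos(2*pi*t/period - phi)) / span integrates to 1 over [0, span]
    # when span is an integer number of periods, as chosen below.
    t = np.asarray(times, float)
    phases = np.linspace(0.0, 2.0 * np.pi, n_phase, endpoint=False)
    dens = (1.0 + np.cos(2.0 * np.pi * t[:, None] / period - phases)) / span
    loglik = np.log(np.clip(dens, 1e-300, None)).sum(axis=0)
    m = loglik.max()
    return float(m + np.log(np.mean(np.exp(loglik - m))))

# Synthetic record: 200 events uniform over 500 Myr, tested against a
# 62.5 Myr period (8 full cycles in the span).
t = np.random.default_rng(1).uniform(0.0, 500.0, 200)
ln_z_uniform = log_evidence_uniform(t, 500.0)
ln_z_periodic = log_evidence_periodic(t, 500.0, 62.5)
```

For uniform synthetic data the evidence favors the uniform reference model, the same direction of result the study reports for the real record.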
Graff, Mario; Poli, Riccardo; Flores, Juan J
2013-01-01
Modeling the behavior of algorithms is the realm of evolutionary algorithm theory. From a practitioner's point of view, theory must provide some guidelines regarding which algorithm/parameters to use in order to solve a particular problem. Unfortunately, most theoretical models of evolutionary algorithms are difficult to apply to realistic situations. However, in recent work (Graff and Poli, 2008, 2010), where we developed a method to practically estimate the performance of evolutionary program-induction algorithms (EPAs), we started addressing this issue. The method was quite general; however, it suffered from some limitations: it required the identification of a set of reference problems, it required hand picking a distance measure in each particular domain, and the resulting models were opaque, typically being linear combinations of 100 features or more. In this paper, we propose a significant improvement of this technique that overcomes the three limitations of our previous method. We achieve this through the use of a novel set of features for assessing problem difficulty for EPAs which are very general, essentially based on the notion of finite difference. To show the capabilities of our technique and to compare it with our previous performance models, we create models for the same two important classes of problems (symbolic regression on rational functions and Boolean function induction) used in our previous work. We model a variety of EPAs. The comparison showed that for the majority of the algorithms and problem classes, the new method produced much simpler and more accurate models than before. To further illustrate the practicality of the technique and its generality (beyond EPAs), we have also used it to predict the performance of both autoregressive models and EPAs on the problem of wind speed forecasting, obtaining simpler and more accurate models that outperform in all cases our previous performance models.
An Observer's View of the ORAC System at UKIRT
NASA Astrophysics Data System (ADS)
Wright, G. S.; Bridger, A. B.; Pickup, D. A.; Tan, M.; Folger, M.; Economou, F.; Adamson, A. J.; Currie, M. J.; Rees, N. P.; Purves, M.; Kackley, R. D.
The Observatory Reduction and Acquisition Control system (ORAC) was commissioned with its first instrument at the UK Infrared Telescope (UKIRT) in October 1999, and with all of the other UKIRT instrumentation this year. ORAC's advance preparation Observing Tool makes it simpler to prepare and carry out observations. Its Observing Manager gives observers excellent feedback on their observing as it goes along, reducing wasted time. The ORAC pipelined Data Reduction system produces near-publication quality reduced data at the telescope. ORAC is now in use for all observing at UKIRT, including flexibly scheduled nights and service observing. This paper provides an observer's perspective of the system and its performance.
About the mechanism of ERP-system pilot test
NASA Astrophysics Data System (ADS)
Mitkov, V. V.; Zimin, V. V.
2018-05-01
In this paper, the mathematical problem of defining the scope of an ERP-system pilot test is stated as a quadratic programming task. The solution procedure uses the method of network programming, based on a structurally similar network representation of the criterion and constraints, which reduces the original problem to a sequence of simpler evaluation tasks. The evaluation tasks are solved by the method of dichotomous programming.
Handling Quality Requirements for Advanced Aircraft Design: Longitudinal Mode
1979-08-01
phases of air-to-air combat, for example). This is far simpler than the general problem of control law definition. However, the results of such...unlimited. AIR FORCE FLIGHT DYNAMICS LABORATORY, AIR FORCE WRIGHT AERONAUTICAL LABORATORIES, AIR FORCE SYSTEMS COMMAND, WRIGHT-PATTERSON AIR FORCE BASE...not necessarily shared by the Air Force. Brian W. Van Vliet, Project Engineer; Ronald O. Anderson, Chief, Control Dynamics Branch, Flight Control Division
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnich, Glenn; Troessaert, Cedric
2009-04-15
In the reduced phase space of electromagnetism, the generator of duality rotations in the usual Poisson bracket is shown to generate Maxwell's equations in a second, much simpler Poisson bracket. This gives rise to a hierarchy of bi-Hamiltonian evolution equations in the standard way. The result can be extended to linearized Yang-Mills theory, linearized gravity, and massless higher spin gauge fields.
Searching for simplicity in the analysis of neurons and behavior
Stephens, Greg J.; Osborne, Leslie C.; Bialek, William
2011-01-01
What fascinates us about animal behavior is its richness and complexity, but understanding behavior and its neural basis requires a simpler description. Traditionally, simplification has been imposed by training animals to engage in a limited set of behaviors, by hand scoring behaviors into discrete classes, or by limiting the sensory experience of the organism. An alternative is to ask whether we can search through the dynamics of natural behaviors to find explicit evidence that these behaviors are simpler than they might have been. We review two mathematical approaches to simplification, dimensionality reduction and the maximum entropy method, and we draw on examples from different levels of biological organization, from the crawling behavior of Caenorhabditis elegans to the control of smooth pursuit eye movements in primates, and from the coding of natural scenes by networks of neurons in the retina to the rules of English spelling. In each case, we argue that the explicit search for simplicity uncovers new and unexpected features of the biological system and that the evidence for simplification gives us a language with which to phrase new questions for the next generation of experiments. The fact that similar mathematical structures succeed in taming the complexity of very different biological systems hints that there is something more general to be discovered. PMID:21383186
USDA-ARS?s Scientific Manuscript database
As baits, fermented food products are generally attractive to many types of insects, making it difficult to sort through nontarget insects to monitor a pest species of interest. We test the hypothesis that a chemically simpler and more defined attractant developed for a target insect is more specifi...
NASA Astrophysics Data System (ADS)
Qi, Di
Turbulent dynamical systems are ubiquitous in science and engineering. Uncertainty quantification (UQ) in turbulent dynamical systems is a grand challenge where the goal is to obtain statistical estimates for key physical quantities. In the development of a proper UQ scheme for systems characterized by both a high-dimensional phase space and a large number of instabilities, significant model errors compared with the true natural signal are always unavoidable due to both the imperfect understanding of the underlying physical processes and the limited computational resources available. One central issue in contemporary research is the development of a systematic methodology for reduced order models that can recover the crucial features both with model fidelity in statistical equilibrium and with model sensitivity in response to perturbations. In the first part, we discuss a general mathematical framework to construct statistically accurate reduced-order models that have skill in capturing the statistical variability in the principal directions of a general class of complex systems with quadratic nonlinearity. A systematic hierarchy of simple statistical closure schemes, which are built through new global statistical energy conservation principles combined with statistical equilibrium fidelity, are designed and tested for UQ of these problems. Second, the capacity of imperfect low-order stochastic approximations to model extreme events in a passive scalar field advected by turbulent flows is investigated. The effects in complicated flow systems are considered including strong nonlinear and non-Gaussian interactions, and much simpler and cheaper imperfect models with model error are constructed to capture the crucial statistical features in the stationary tracer field. Several mathematical ideas are introduced to improve the prediction skill of the imperfect reduced-order models. 
Most importantly, empirical information theory and statistical linear response theory are applied in the training phase for calibrating model errors to achieve optimal imperfect model parameters; and total statistical energy dynamics are introduced to improve the model sensitivity in the prediction phase especially when strong external perturbations are exerted. The validity of reduced-order models for predicting statistical responses and intermittency is demonstrated on a series of instructive models with increasing complexity, including the stochastic triad model, the Lorenz '96 model, and models for barotropic and baroclinic turbulence. The skillful low-order modeling methods developed here should also be useful for other applications such as efficient algorithms for data assimilation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ogawa, T.
The exact equivalence between a bad-cavity laser with modulated inversion and a nonlinear oscillator in a Toda potential driven by an external modulation is presented. The dynamical properties of the laser system are investigated in detail by analyzing a Toda oscillator system. The temporal characteristics of the bad-cavity laser under strong modulation are analyzed extensively by numerically investigating the simpler Toda system as a function of two control parameters: the dc component of the population inversion and the modulation amplitude. The system exhibits two kinds of optical chaos: One is the quasiperiodic chaos in the region of the intermediate modulation amplitude and the other is the intermittent kicked chaos in the region of strong modulation and large dc component of the pumping. The former is well described by a one-dimensional discrete map with a singular invariant probability measure. There are two types of onset of the chaos: quasiperiodic instability (continuous path to chaos) and catastrophic crisis (discontinuous path). The period-doubling cascade of bifurcation is also observed. The simple discrete model of the Toda system is presented to obtain analytically the one-dimensional map function and to understand the effect of the asymmetric potential curvature on yielding chaos.
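The driven Toda oscillator described above is straightforward to explore numerically. The sketch below integrates a damped Toda oscillator with a dc offset and sinusoidal modulation using a classic fourth-order Runge-Kutta step; the specific equation of motion, parameter names, and values are illustrative assumptions, not the paper's exact model.

```python
import math

def toda_force(x):
    # Derivative of the (illustrative) Toda potential V(x) = exp(x) - x - 1
    return math.exp(x) - 1.0

def simulate(dc=0.1, amp=0.5, omega=1.0, damping=0.05, dt=0.01, steps=20000):
    """Integrate x'' = -damping*x' - toda_force(x) + dc + amp*cos(omega*t)
    with classic RK4. The two control parameters mirror those discussed in
    the abstract: the dc offset and the modulation amplitude."""
    def deriv(t, x, v):
        return v, -damping * v - toda_force(x) + dc + amp * math.cos(omega * t)

    t, x, v, xs = 0.0, 0.0, 0.0, []
    for _ in range(steps):
        k1x, k1v = deriv(t, x, v)
        k2x, k2v = deriv(t + dt / 2, x + dt / 2 * k1x, v + dt / 2 * k1v)
        k3x, k3v = deriv(t + dt / 2, x + dt / 2 * k2x, v + dt / 2 * k2v)
        k4x, k4v = deriv(t + dt, x + dt * k3x, v + dt * k3v)
        x += dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
        xs.append(x)
    return xs
```

Sweeping `dc` and `amp` over a grid and inspecting the resulting trajectories is the numerical experiment the abstract describes.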
Neophytou, Andreas M; Picciotto, Sally; Brown, Daniel M; Gallagher, Lisa E; Checkoway, Harvey; Eisen, Ellen A; Costello, Sadie
2018-02-13
Prolonged exposures can have complex relationships with health outcomes, as timing, duration, and intensity of exposure are all potentially relevant. Summary measures such as cumulative exposure or average intensity of exposure may not fully capture these relationships. We applied penalized and unpenalized distributed lag non-linear models (DLNMs) with flexible exposure-response and lag-response functions in order to examine the association between crystalline silica exposure and mortality from lung cancer and non-malignant respiratory disease in a cohort study of 2,342 California diatomaceous earth workers, followed 1942-2011. We also assessed associations using simple measures of cumulative exposure assuming linear exposure-response and constant lag-response. Measures of association from DLNMs were generally higher than from simpler models. Rate ratios from penalized DLNMs corresponding to average daily exposures of 0.4 mg/m³ during lag years 31-50 prior to the age of observed cases were 1.47 (95% confidence interval (CI): 0.92, 2.35) for lung cancer and 1.80 (95% CI: 1.14, 2.85) for non-malignant respiratory disease. Rate ratios from the simpler models for the same exposure scenario were 1.15 (95% CI: 0.89, 1.48) and 1.23 (95% CI: 1.03, 1.46), respectively. Longitudinal cohort studies of prolonged exposures and chronic health outcomes should explore methods allowing for flexibility and non-linearities in the exposure-lag-response. © The Author(s) 2018. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
Polarimetry Of Planetary Atmospheres: From The Solar System Gas Giants To Extrasolar Planets
NASA Astrophysics Data System (ADS)
Buenzli, Esther; Bazzon, A.; Schmid, H. M.
2011-09-01
The polarization of light reflected from a planet provides unique information on the atmosphere structure and scattering properties of particles in the upper atmosphere. The solar system planets show a large variety of atmospheric polarization properties, from the thick, highly polarizing haze on Titan and the poles of Jupiter, Rayleigh scattering by molecules on Uranus and Neptune, to clouds in the equatorial region of Jupiter or on Venus. Polarimetry is also a promising differential technique to search for and characterize extra-solar planets, e.g. with the future VLT planet finder instrument SPHERE. For the preparation of the SPHERE planet search program we have made a suite of polarimetric observations and models for the solar system gas giants. The phase angles for the outer planets are small for Earth bound observations and the integrated polarization is essentially zero due to the symmetric backscattering situation. However, a second order scattering effect produces a measurable limb polarization for resolved planetary disks. We have made a detailed model for the spectropolarimetric signal of the limb polarization of Uranus between 520 and 935 nm to derive scattering properties of haze and cloud particles and to predict the polarization signal from an extra-solar point of view. We are also investigating imaging polarimetry of the thick haze layers on Titan and the poles of Jupiter. Additionally, we have calculated a large grid of intensity and polarization phase curves for simpler atmosphere models of extrasolar planets.
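For context, the simplest member of the family of polarization phase curves mentioned above is the single-scattering Rayleigh law, P(α) = sin²α / (1 + cos²α), which vanishes at backscattering (consistent with the near-zero integrated polarization at small phase angles noted in the abstract) and peaks at a 90° phase angle. A minimal sketch; real atmosphere models require multiple scattering and particle properties on top of this:

```python
import math

def rayleigh_polarization(phase_deg):
    """Degree of linear polarization for single Rayleigh scattering,
    P = sin^2(a) / (1 + cos^2(a)), as a function of phase angle in degrees.
    This is the textbook single-scattering curve, not the paper's model."""
    a = math.radians(phase_deg)
    return math.sin(a) ** 2 / (1.0 + math.cos(a) ** 2)
```

Evaluating this over 0-180° reproduces the familiar curve: zero at 0° and 180°, maximum (fully polarized) at 90°.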
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Shekhar; Koganti, S.B.
2008-07-01
Acetohydroxamic acid (AHA) is a novel complexant for recycle of nuclear-fuel materials. It can be used in ordinary centrifugal extractors, eliminating the need for electro-redox equipment or complex maintenance requirements in a remotely maintained hot cell. In this work, the effect of AHA on Pu(IV) distribution ratios in the 30% TBP system was quantified, modeled, and integrated in the SIMPSEX code. Two sets of batch experiments involving macro Pu concentrations (conducted at IGCAR) and one high-Pu flowsheet (literature) were simulated for AHA-based U-Pu separation. Based on the simulation and validation results, AHA-based next-generation reprocessing flowsheets are proposed for co-processing-based FBR and thermal-fuel reprocessing, as well as the evaporator-less macro-level Pu concentration process required for MOX fuel fabrication. Utilization of AHA results in significant simplification in plant design and simpler technology implementations with significant cost savings. (authors)
Structure at every scale: A semantic network account of the similarities between unrelated concepts.
De Deyne, Simon; Navarro, Daniel J; Perfors, Amy; Storms, Gert
2016-09-01
Similarity plays an important role in organizing the semantic system. However, given that similarity cannot be defined on purely logical grounds, it is important to understand how people perceive similarities between different entities. Despite this, the vast majority of studies focus on measuring similarity between very closely related items; little is known about concepts that are only very weakly related. In this article, we present 4 experiments showing that there are reliable and systematic patterns in how people evaluate the similarities between very dissimilar entities. We present a semantic network account of these similarities, showing that a spreading activation mechanism defined over a word association network naturally makes correct predictions about weak similarities, whereas simpler models based on direct neighbors between word pairs, derived using the same network, cannot. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
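A minimal sketch of the spreading-activation idea: activation spreads from a seed word over a weighted association network and is accumulated with a per-step decay, so two words with no direct link can still acquire overlapping activation profiles. The toy graph, weights, and decay value below are invented for illustration; the authors' actual network and mechanism are far larger and more detailed.

```python
def spread(graph, seed, decay=0.5, steps=3):
    """Spreading activation from a seed word over a weighted association
    graph ({word: {neighbour: weight}}), accumulating decayed activation."""
    act = {seed: 1.0}
    total = dict(act)
    for step in range(1, steps + 1):
        nxt = {}
        for word, a in act.items():
            for nb, w in graph.get(word, {}).items():
                nxt[nb] = nxt.get(nb, 0.0) + a * w
        act = nxt
        for word, a in act.items():
            total[word] = total.get(word, 0.0) + (decay ** step) * a
    return total

def cosine(u, v):
    # Cosine similarity between two sparse activation profiles.
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0.0) * v.get(k, 0.0) for k in keys)
    nu = sum(x * x for x in u.values()) ** 0.5
    nv = sum(x * x for x in v.values()) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical toy association network: "teacher" and "library" share no
# direct edge, yet multi-step spreading gives them a nonzero similarity.
graph = {
    "teacher": {"school": 0.6, "book": 0.4},
    "library": {"book": 0.7, "school": 0.3},
    "school": {"book": 0.5, "teacher": 0.5},
    "book": {"library": 0.5, "school": 0.5},
}
sim = cosine(spread(graph, "teacher"), spread(graph, "library"))
```

A direct-neighbor model would score this pair at zero, which is the contrast the abstract draws.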
Advancing Technology for Starlight Suppression via an External Occulter
NASA Technical Reports Server (NTRS)
Kasdin, N. J.; Spergel, D. N.; Vanderbei, R. J.; Lisman, D.; Shaklan, S.; Thomson, M.; Walkemeyer, P.; Bach, V.; Oakes, E.; Cady, E.;
2011-01-01
External occulters provide the starlight suppression needed for detecting and characterizing exoplanets with a much simpler telescope and instrument than is required for an equivalently performing coronagraph. In this paper we describe progress on our Technology Development for Exoplanet Missions project to design, manufacture, and measure a prototype occulter petal. We focus on the key requirement of manufacturing a precision petal while controlling its shape within precise tolerances. The required tolerances are established by modeling the effect that various mechanical and thermal errors have on scatter in the telescope image plane and by suballocating the allowable contrast degradation between these error sources. We discuss the deployable starshade design, representative error budget, thermal analysis, and prototype manufacturing. We also present our metrology system and methodology for verifying that the petal shape meets the contrast requirement. Finally, we summarize the progress to date in building the prototype petal.
Interaction in Balanced Cross Nested Designs
NASA Astrophysics Data System (ADS)
Ramos, Paulo; Mexia, João T.; Carvalho, Francisco; Covas, Ricardo
2011-09-01
Commutative Jordan algebras (CJA) are used in the study of mixed models obtained, through crossing and nesting, from simpler ones. In the study of cross-nested models, the interaction between nested factors has been systematically discarded. However, this can constitute an artificial simplification of the models. We point out that, when two crossed factors interact, the interaction is symmetric, with both factors playing equivalent roles in it, while when two nested factors interact, the interaction is determined by the nesting factor. These interactions will be called interactions with nesting. In this work we present a coherent formulation of the algebraic structure of such models, enabling the choice of families of interactions between crossed and nested factors using binary operations on CJA.
The algorithmic anatomy of model-based evaluation
Daw, Nathaniel D.; Dayan, Peter
2014-01-01
Despite many debates in the first half of the twentieth century, it is now largely a truism that humans and other animals build models of their environments and use them for prediction and control. However, model-based (MB) reasoning presents severe computational challenges. Alternative, computationally simpler, model-free (MF) schemes have been suggested in the reinforcement learning literature, and have afforded influential accounts of behavioural and neural data. Here, we study the realization of MB calculations, and the ways that this might be woven together with MF values and evaluation methods. There are as yet mostly only hints in the literature as to the resulting tapestry, so we offer more preview than review. PMID:25267820
Theoretical and Experimental Analysis of an Evolutionary Social-Learning Game
2012-01-13
Nettle outlines the circumstances in which verbal communication is evolutionarily adaptive, and why few species have developed the ability to use...language despite its apparent advantages [28]. Nettle uses a significantly simpler model than the Cultaptation game, but provides insight that may be useful...provided by Kearns et al. was designed as an online algorithm, so it only returns the near-optimal action for the state at the root of the search tree
Simulating Complex Satellites and a Space-Based Surveillance Sensor Simulation
2009-09-01
high-resolution imagery (Fig. 1). Thus other means for characterizing satellites will need to be developed. Research into non-resolvable space object...computing power and time. The second way, which we are using here, is to create simpler models of satellite bodies and use albedo-area calculations...their position, movement, size, and physical features. However, there are many satellites in orbit that are simply too small or too far away to resolve by
Alexiadis, Orestis; Daoulas, Kostas Ch; Mavrantzas, Vlasis G
2008-01-31
A new Monte Carlo algorithm is presented for the simulation of atomistically detailed alkanethiol self-assembled monolayers (R-SH) on a Au(111) surface. Built on a set of simple as well as more complex (sometimes nonphysical) moves, the new algorithm is capable of efficiently driving all alkanethiol molecules to the Au(111) surface, thereby leading to full surface coverage, irrespective of the initial setup of the system. This circumvents a significant limitation of previous methods, in which the simulations typically started from optimally packed structures on the substrate close to thermal equilibrium. Further, by considering an extended ensemble of configurations, each of which corresponds to a different value of the sulfur-sulfur repulsive core potential, sigma_ss, and by allowing configurations to swap between systems characterized by different sigma_ss values, the new algorithm can adequately simulate model R-SH/Au(111) systems for values of sigma_ss ranging from 4.25 Å, corresponding to the Hautman-Klein molecular model (J. Chem. Phys. 1989, 91, 4994; 1990, 93, 7483), to 4.97 Å, corresponding to the Siepmann-McDonald model (Langmuir 1993, 9, 2351), and practically any chain length. Detailed results are presented quantifying the efficiency and robustness of the new method. Representative simulation data for the dependence of the structural and conformational properties of the formed monolayer on the details of the employed molecular model are reported and discussed; an investigation of the variation of molecular organization and ordering on the Au(111) substrate for three CH3-(CH2)n-SH/Au(111) systems with n = 9, 15, and 21 is also included.
Transient high frequency signal estimation: A model-based processing approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnes, F.L.
1985-03-22
By utilizing the superposition property of linear systems, a method of estimating the incident signal from reflective nondispersive data is developed. One of the basic merits of this approach is that the reflections are removed by direct application of a Wiener-type estimation algorithm after the appropriate input is synthesized. The structure of the nondispersive signal model is well documented, and thus its credence is established. The model is stated, and more effort is devoted to practical methods of estimating the model parameters. Though a general approach was developed for obtaining the reflection weights, a simpler approach was employed here, since a fairly good reflection model is available. The technique essentially consists of calculating ratios of the autocorrelation function at lag zero and at the lag where the incident signal and first reflection coincide. We initially performed our processing procedure on a measurement of a single signal. Multiple application of the processing procedure was required when we applied the reflection removal technique to a measurement containing information from the interaction of two physical phenomena. All processing was performed using SIG, an interactive signal processing package. One of the many consequences of using SIG was that repetitive operations were, for the most part, automated. A custom menu was designed to perform the deconvolution process.
NASA Technical Reports Server (NTRS)
Zimanyi, L.; Lanyi, J. K.
1993-01-01
The bacteriorhodopsin photocycle contains more than five spectrally distinct intermediates, and the complexity of their interconversions has precluded a rigorous solution of the kinetics. A representation of the photocycle of mutated D96N bacteriorhodopsin near neutral pH was given earlier (Varo, G., and J. K. Lanyi. 1991. Biochemistry. 30:5008-5015) as BR --hv--> K <==> L <==> M1 --> M2 --> BR. Here we have reduced a set of time-resolved difference spectra for this simpler system to three base spectra, each assumed to consist of an unknown mixture of the pure K, L, and M difference spectra represented by a 3 x 3 matrix of concentration values between 0 and 1. After generating all allowed sets of spectra for K, L, and M (i.e., M1 + M2) at a 1:50 resolution of the matrix elements, invalid solutions were eliminated progressively in a search based on what is expected, empirically and from the theory of polyene excited states, for rhodopsin spectra. Significantly, the average matrix values changed little after the first and simplest of the search criteria that disallowed negative absorptions and more than one maximum for the M intermediate. We conclude from the statistics that during the search the solutions strongly converged into a narrow region of the multidimensional space of the concentration matrix. The data at three temperatures between 5 and 25 degrees C yielded a single set of spectra for K, L, and M; their fits are consistent with the earlier derived photocycle model for the D96N protein.
Testing the uniqueness of mass models using gravitational lensing
NASA Astrophysics Data System (ADS)
Walls, Levi; Williams, Liliya L. R.
2018-06-01
The positions of images produced by the gravitational lensing of background sources provide insight into lens-galaxy mass distributions. Simple elliptical mass density profiles do not agree well with observations of the population of known quads. It has been shown that the most promising way to reconcile this discrepancy is via perturbations away from purely elliptical mass profiles, by assuming two superimposed, somewhat misaligned mass distributions: one is dark matter (DM), the other is a stellar distribution. In this work, we investigate whether mass modelling of individual lenses can reveal if the lenses have this type of complex structure or simpler elliptical structure. In other words, we test mass model uniqueness, or how well an extended source lensed by a non-trivial mass distribution can be modelled by a simple elliptical mass profile. We used the publicly available lensing software Lensmodel to generate and numerically model gravitational lenses and “observed” image positions. We then compared “observed” and modelled image positions via the root mean square (RMS) of their difference. We report that, in most cases, the RMS is ≤ 0.05″ when averaged over an extended source. Thus, we show it is possible to fit a smooth mass model to a system that contains a stellar component with varying levels of misalignment with a DM component, and hence mass modelling cannot differentiate between simple elliptical and more complex lenses.
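The figure of merit used in this comparison is straightforward. A minimal sketch (the positions below are invented for illustration), averaging squared offsets between "observed" and modelled image positions in arcseconds:

```python
def rms_offset(observed, modelled):
    """Root-mean-square separation between observed and modelled image
    positions, given as lists of (x, y) pairs in arcseconds."""
    n = len(observed)
    return (sum((ox - mx) ** 2 + (oy - my) ** 2
                for (ox, oy), (mx, my) in zip(observed, modelled)) / n) ** 0.5
```

A model is then judged acceptable when this RMS, averaged over the images of an extended source, stays below the ~0.05″ level quoted above.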
Feedback linearization of singularly perturbed systems based on canonical similarity transformations
NASA Astrophysics Data System (ADS)
Kabanov, A. A.
2018-05-01
This paper discusses the problem of feedback linearization of a singularly perturbed system in a state-dependent coefficient form. The result is based on the introduction of a canonical similarity transformation. The transformation matrix is constructed from separate blocks for the fast and slow parts of the original singularly perturbed system. The transformed singularly perturbed system has a linear canonical form that significantly simplifies the control design problem. The proposed similarity transformation allows linearization of the system without introducing a virtual output (as the normal-form method requires), and the transition from the phase coordinates of the transformed system back to the state variables of the original system is simpler. The application of the proposed approach is illustrated through an example.
Method for simulating discontinuous physical systems
Baty, Roy S.; Vaughn, Mark R.
2001-01-01
The mathematical foundations of conventional numerical simulation of physical systems provide no consistent description of the behavior of such systems when subjected to discontinuous physical influences. As a result, the numerical simulation of such problems requires ad hoc encoding of specific experimental results in order to address the behavior of such discontinuous physical systems. In the present invention, these foundations are replaced by a new combination of generalized function theory and nonstandard analysis. The result is a class of new approaches to the numerical simulation of physical systems which allows the accurate and well-behaved simulation of discontinuous and other difficult physical systems, as well as simpler physical systems. Applications of this new class of numerical simulation techniques to process control, robotics, and apparatus design are outlined.
Consequences of stage-structured predators: cannibalism, behavioral effects, and trophic cascades.
Rudolf, Volker H W
2007-12-01
Cannibalistic and asymmetrical behavioral interactions between stages are common within stage-structured predator populations. Such direct interactions between predator stages can result in density- and trait-mediated indirect interactions between a predator and its prey. A set of structured predator-prey models is used to explore how such indirect interactions affect the dynamics and structure of communities. Analyses of the separate and combined effects of stage-structured cannibalism and behavior-mediated avoidance of cannibals under different ecological scenarios show that both cannibalism and behavioral avoidance of cannibalism can result in short- and long-term positive indirect connections between predator stages and the prey, including "apparent mutualism." These positive interactions alter the strength of trophic cascades such that the system's dynamics are determined by the interaction between bottom-up and top-down effects. Contrary to the expectation of simpler models, enrichment increases both predator and prey abundance in systems with cannibalism or behavioral avoidance of cannibalism. The effect of behavioral avoidance of cannibalism, however, depends on how strongly it affects the maturation rate of the predator. Behavioral interactions between predator stages reduce the short-term positive effect of cannibalism on the prey density, but can enhance its positive long-term effects. Both interaction types reduce the destabilizing effect of enrichment. These results suggest that inconsistencies between data and simple models can be resolved by accounting for stage-structured interactions within and among species.
NASA Astrophysics Data System (ADS)
Sandfeld, Stefan; Budrikis, Zoe; Zapperi, Stefano; Fernandez Castellanos, David
2015-02-01
Crystalline plasticity is strongly interlinked with dislocation mechanics and nowadays is relatively well understood. Concepts and physical models of plastic deformation in amorphous materials on the other hand—where the concept of linear lattice defects is not applicable—still are lagging behind. We introduce an eigenstrain-based finite element lattice model for simulations of shear band formation and strain avalanches. Our model allows us to study the influence of surfaces and finite size effects on the statistics of avalanches. We find that even with relatively complex loading conditions and open boundary conditions, critical exponents describing avalanche statistics are unchanged, which validates the use of simpler scalar lattice-based models to study these phenomena.
High-fidelity meshes from tissue samples for diffusion MRI simulations.
Panagiotaki, Eleftheria; Hall, Matt G; Zhang, Hui; Siow, Bernard; Lythgoe, Mark F; Alexander, Daniel C
2010-01-01
This paper presents a method for constructing detailed geometric models of tissue microstructure for synthesizing realistic diffusion MRI data. We construct three-dimensional mesh models from confocal microscopy image stacks using the marching cubes algorithm. Random-walk simulations within the resulting meshes provide synthetic diffusion MRI measurements. Experiments optimise simulation parameters and complexity of the meshes to achieve accuracy and reproducibility while minimizing computation time. Finally we assess the quality of the synthesized data from the mesh models by comparison with scanner data as well as synthetic data from simple geometric models and simplified meshes that vary only in two dimensions. The results support the extra complexity of the three-dimensional mesh compared to simpler models although sensitivity to the mesh resolution is quite robust.
Commande de vol non lineaire d'un drone a voilure fixe par la methode du backstepping
NASA Astrophysics Data System (ADS)
Finoki, Edouard
This thesis describes the design of a non-linear controller for a UAV using the backstepping method. The aircraft is a fixed-wing UAV, the NexSTAR ARF from Hobbico. The aim is to find the expressions for the aileron, elevator, and rudder deflections needed to command the flight path angle, the heading angle, and the sideslip angle. Controlling the flight path angle allows steady, climbing, or descending flight; controlling the heading angle allows choosing the heading; and annulling the sideslip angle allows efficient flight. A good control technique has to ensure the stability of the system and provide optimal performance. Backstepping interlaces the choice of a Lyapunov function with the design of the feedback control. This control technique works with the true non-linear model without any approximation. The procedure is to transform intermediate state variables into virtual inputs that control other state variables. Advantages of this technique are its recursiveness, its minimal control effort, and its cascaded structure, which allows dividing a high-order system into several simpler lower-order systems. To design this non-linear controller, a non-linear model of the UAV was used. The equations of motion are very accurate, and the aerodynamic coefficients result from interpolations between several variables essential in flight. The controller has been implemented in Matlab/Simulink and FlightGear.
COSP - A computer model of cyclic oxidation
NASA Technical Reports Server (NTRS)
Lowell, Carl E.; Barrett, Charles A.; Palmer, Raymond W.; Auping, Judith V.; Probst, Hubert B.
1991-01-01
A computer model useful in predicting the cyclic oxidation behavior of alloys is presented. The model considers the oxygen uptake due to scale formation during the heating cycle and the loss of oxide due to spalling during the cooling cycle. The balance between scale formation and scale loss is modeled and used to predict weight change and metal loss kinetics. A simple uniform spalling model is compared to a more complex random spall site model. In nearly all cases, the simpler uniform spall model gave predictions as accurate as the more complex model. The model has been applied to several nickel-base alloys which, depending upon composition, form Al2O3 or Cr2O3 during oxidation. The model has been validated by several experimental approaches. Versions of the model that run on a personal computer are available.
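The heating/cooling bookkeeping described above can be sketched as a toy uniform-spall loop: parabolic oxide growth while hot, then a fixed fraction of the scale lost on each cooldown. The parameter names, values, and the specific bookkeeping are illustrative assumptions, not COSP's actual formulation.

```python
def cyclic_oxidation(kp=0.01, spall_frac=0.05, cycles=100):
    """Toy uniform-spall cyclic oxidation model in the spirit of COSP.

    Each cycle grows oxide parabolically during heating (thickness^2 += kp)
    and spalls a fixed fraction of the scale during cooling. Returns a list
    of (retained oxide thickness, cumulative spalled oxide) per cycle."""
    oxide, spalled, history = 0.0, 0.0, []
    for _ in range(cycles):
        oxide = (oxide ** 2 + kp) ** 0.5   # parabolic scale growth while hot
        lost = spall_frac * oxide          # uniform fraction lost on cooling
        oxide -= lost
        spalled += lost
        history.append((oxide, spalled))
    return history
```

Retained oxide approaches a steady-state thickness where growth and spalling balance, while cumulative spall (and hence metal loss) keeps accumulating — the qualitative behavior the model is designed to predict.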
Stein, George Juraj; Múcka, Peter; Chmúrny, Rudolf; Hinz, Barbara; Blüthner, Ralph
2007-01-01
For modelling purposes and for the evaluation of driver's seat performance in the vertical direction, various mechano-mathematical models of the seated human body have been developed and standardized by the ISO. No such models exist hitherto for a human body sitting upright on a cushioned seat, as used in industrial environments, where fore-and-aft vibrations play an important role. The interaction with the steering wheel has to be taken into consideration, as well as the position of the upper torso with respect to the cushioned seat back, as observed in real driving conditions. This complex problem has to be simplified first to arrive at manageable simpler models which still reflect the main features of the problem. In a laboratory study, accelerations and forces in the x-direction were measured at the seat base during whole-body vibration in the fore-and-aft direction (random signal in the frequency range between 0.3 and 30 Hz; vibration magnitudes 0.28, 0.96, and 2.03 m/s² unweighted rms). Thirteen male subjects with body masses between 62.2 and 103.6 kg were chosen for the tests. They sat on a cushioned driver seat with hands on a support and backrest contact in the lumbar region only. Based on these laboratory measurements, a linear model of the system of seated human body and cushioned seat in the fore-and-aft direction has been developed. The model accounts for the reaction from the steering wheel. Model parameters have been identified for each subject from the measured apparent mass values (modulus and phase). The developed model structure and the averaged parameters can be used for further biodynamical research in this field.
NASA Astrophysics Data System (ADS)
Ma, Fei; Su, Jing; Yao, Bing
2018-05-01
The problem of determining and calculating the number of spanning trees of any finite graph (model) is a great challenge, and has been studied in various fields, such as discrete applied mathematics, theoretical computer science, physics, and chemistry. In this paper, motivated by the fact that many real-life systems and artificial networks are built from all kinds of functions and combinations of simpler and smaller elements (components), we first discuss some helpful network operations, including a link operation and a merge operation, for designing more realistic and complicated network models. Secondly, we present a method for computing the total number of spanning trees. As an accessible example, we apply this method to spaces of trees and cycles, respectively, and our results suggest that it is indeed well suited to such models. To demonstrate wider practical applications and potential theoretical significance, we apply the enumeration method to some existing scale-free network models. On the other hand, we construct a class of new models displaying the scale-free feature, that is, following the power law P(k) ~ k^(-γ), where γ is the degree exponent. Based on detailed calculation, the degree exponent γ of our deterministic scale-free models satisfies γ > 3. In the rest of our discussion, we not only calculate analytically the average path length, which indicates that our models have the small-world property prevalent in many complex systems, but also derive the number of spanning trees by means of the recursive method described in this paper, which shows that our method is convenient for studying these models.
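For context, the baseline way to count spanning trees of any finite graph is Kirchhoff's matrix-tree theorem (any cofactor of the graph Laplacian). The sketch below shows that standard approach, not the paper's recursive method for its specific operation-built models.

```python
import numpy as np

def spanning_trees(edges, n):
    """Count spanning trees of an undirected graph on n vertices via the
    matrix-tree theorem: delete one row/column of the Laplacian and take
    the determinant."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1
        L[v, v] += 1
        L[u, v] -= 1
        L[v, u] -= 1
    return round(np.linalg.det(L[1:, 1:]))

# Complete graph K4: Cayley's formula gives 4^(4-2) = 16 spanning trees.
k4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
print(spanning_trees(k4, 4))  # 16
```

The determinant route costs O(n^3), which is why closed-form recursive methods like the paper's are attractive for large, structured (e.g. scale-free) model families.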
Do, Thanh Nhut; Gelin, Maxim F; Tan, Howe-Siang
2017-10-14
We derive general expressions that incorporate finite pulse envelope effects into a coherent two-dimensional optical spectroscopy (2DOS) technique. These expressions are simpler and less computationally intensive than the conventional triple-integral calculations needed to simulate 2DOS spectra. The simplified expressions, involving multiplications of arbitrary pulse spectra with the 2D spectral response function, are shown to be exactly equal to the conventional triple-integral calculations of 2DOS spectra if the 2D spectral response functions do not vary with population time. With minor modifications, they are also accurate for 2D spectral response functions with quantum beats and exponential decay during population time. These conditions cover a broad range of experimental 2DOS spectra. For certain analytically defined pulse spectra, we also derive expressions for 2D spectra with arbitrary population-time-dependent 2DOS spectral response functions. Having simpler and more efficient methods to calculate experimentally relevant 2DOS spectra with finite pulse effects considered will be important in the simulation and understanding of the complex systems routinely studied using 2DOS.
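The central simplification can be illustrated schematically: when the 2D response does not vary with population time, the finite-pulse spectrum reduces to the impulsive spectrum windowed by the pulse spectra along the excitation and detection axes. The sketch below is a toy illustration of that multiplication, not the paper's derivation; the Lorentzian response and Gaussian pulse spectra are assumed forms.

```python
import numpy as np

# Frequency grids for the excitation (w1) and detection (w3) axes.
w = np.linspace(-5, 5, 201)
w1, w3 = np.meshgrid(w, w, indexing="ij")

# Assumed impulsive 2D response: a single Lorentzian peak at the origin.
R = 1.0 / ((1 + w1**2) * (1 + w3**2))

def pulse_spectrum(freq, bandwidth=2.0):
    """Assumed Gaussian laser pulse spectrum."""
    return np.exp(-(freq / bandwidth) ** 2)

# Finite-pulse spectrum as a simple pointwise product with the pulse
# spectra, replacing the triple integral over pulse envelopes.
S = pulse_spectrum(w1) * pulse_spectrum(w3) * R
```

The product leaves the peak centre untouched but attenuates the wings, mimicking the spectral filtering that finite-bandwidth pulses impose on measured 2D line shapes.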
NASA Astrophysics Data System (ADS)
West, J. B.; Ehleringer, J. R.; Cerling, T.
2006-12-01
Understanding how the biosphere responds to change is at the heart of biogeochemistry, ecology, and other Earth sciences. The dramatic increase in human population and technological capacity over the past 200 years or so has resulted in numerous, simultaneous changes to biosphere structure and function. This, in turn, has led to increased urgency in the scientific community to understand how systems have already responded to these changes, and how they might do so in the future. Since all biospheric processes exhibit some patchiness or pattern over space as well as time, we believe that understanding the dynamic interactions between natural systems and human technological manipulations can be improved if these systems are studied in an explicitly spatial context. We present here results of some of our efforts to model the spatial variation in the stable isotope ratios (δ2H and δ18O) of plants over large spatial extents, and how these spatial model predictions compare to spatially explicit data. Stable isotopes trace and record ecological processes and, as such, if modeled correctly over Earth's surface, allow us insights into changes in biosphere states and processes across spatial scales. The data-model comparisons show good agreement, in spite of the remaining uncertainties (e.g., plant source water isotopic composition). For example, inter-annual changes in climate are recorded in wine stable isotope ratios. Also, a much simpler model of leaf water enrichment, driven with spatially continuous global rasters of precipitation and climate normals, largely agrees with complex GCM modeling that includes leaf water δ18O. Our results suggest that modeling plant stable isotope ratios across large spatial extents may be done with reasonable accuracy, including over time. These spatial maps, or isoscapes, can now be utilized to help understand spatially distributed data, as well as to help guide future studies designed to understand ecological change across landscapes.
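Simpler leaf water enrichment models of the kind mentioned above are typically Craig-Gordon-type steady-state expressions. The sketch below shows one common simplified form; it is a generic illustration, not the authors' model, and the parameter values are representative placeholders.

```python
def leaf_water_enrichment(ea_over_ei, d_v=-10.0, eps_eq=9.3, eps_k=28.0):
    """Craig-Gordon-type steady-state 18O enrichment of leaf water above
    source water, in permil (all parameter values are assumed/illustrative).

    ea_over_ei : ambient/leaf-internal vapour pressure ratio (~ rel. humidity)
    d_v        : vapour enrichment relative to source water, permil
    eps_eq     : equilibrium liquid-vapour fractionation (~25 C), permil
    eps_k      : kinetic fractionation during stomatal diffusion, permil
    """
    return eps_eq + eps_k + (d_v - eps_k) * ea_over_ei

# Drier air (lower ea/ei) gives stronger evaporative enrichment.
dry, humid = leaf_water_enrichment(0.4), leaf_water_enrichment(0.9)
```

Driving such an expression with gridded humidity and precipitation-isotope rasters is what makes spatially continuous isoscape predictions tractable without a full GCM.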
Cognitive and neural foundations of discrete sequence skill: a TMS study.
Ruitenberg, Marit F L; Verwey, Willem B; Schutter, Dennis J L G; Abrahamse, Elger L
2014-04-01
Executing discrete movement sequences typically involves a shift with practice from a relatively slow, stimulus-based mode to a fast mode in which performance is based on retrieving and executing entire motor chunks. The dual processor model explains the performance of (skilled) discrete key-press sequences in terms of an interplay between a cognitive processor and a motor system. In the present study, we tested and confirmed the core assumptions of this model at the behavioral level. In addition, we explored the involvement of the pre-supplementary motor area (pre-SMA) in discrete sequence skill by applying inhibitory 20 min 1-Hz off-line repetitive transcranial magnetic stimulation (rTMS). Based on previous work, we predicted pre-SMA involvement in the selection/initiation of motor chunks, and this was confirmed by our results. The pre-SMA was further observed to be more involved in more complex than in simpler sequences, while no evidence was found for pre-SMA involvement in direct stimulus-response translations or associative learning processes. In conclusion, support is provided for the dual processor model, and for pre-SMA involvement in the initiation of motor chunks.
NASA Technical Reports Server (NTRS)
Vincent, R. K.
1974-01-01
Four independent investigations are reported; in general, these are concerned with improving and utilizing the correlation between the physical properties of natural materials as evidenced in laboratory spectra and spectral data collected by multispectral scanners. In one investigation, two theoretical models were devised that permit the calculation of emittance spectra for rock and mineral surfaces of various particle sizes. The simpler of the two models can be used to qualitatively predict the effect of texture on the spectral emittance of rocks and minerals; it is also potentially useful as an aid in identifying natural atmospheric aerosol constituents. The second investigation determined, via an infrared ratio imaging technique, the best pair of infrared filters for silicate rock-type discrimination. In a third investigation, laboratory spectra of natural materials were compressed into 11-digit ratio codes for use in feature selection, in searches for false-alarm candidates, and eventually as training sets in completely automatic data processors. In the fourth investigation, general outlines of a ratio preprocessor and an automatic recognition map processor are developed for on-board data processing in the space shuttle era.
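The value of ratio preprocessing can be seen in a toy example: dividing one spectral band by another cancels per-pixel illumination and texture variation while preserving the band-to-band contrast that discriminates materials. The two "filter" bands below are hypothetical, not the filter pair selected in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-pixel shading factor (topography, texture) that multiplies both bands.
illumination = rng.uniform(0.5, 1.5, size=(4, 4))

# Assumed two-band reflectance signature of a single material.
reflectance_a, reflectance_b = 0.30, 0.45

band_a = illumination * reflectance_a   # radiance in hypothetical band A
band_b = illumination * reflectance_b   # radiance in hypothetical band B

# The ratio image: illumination cancels, leaving a constant that
# characterizes the material rather than the lighting.
ratio = band_a / band_b
```

Quantizing such ratios across many band pairs is, in spirit, how compact ratio codes for automatic classification can be built.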
EIT image reconstruction with four dimensional regularization.
Dai, Tao; Soleimani, Manuchehr; Adler, Andy
2008-09-01
Electrical impedance tomography (EIT) reconstructs internal impedance images of the body from electrical measurements on the body surface. The temporal resolution of EIT data can be very high, although the spatial resolution of the images is relatively low. Most EIT reconstruction algorithms calculate images from data frames independently, although the data are actually highly correlated, especially in high-speed EIT systems. This paper proposes a 4-D image reconstruction approach for functional EIT. The new approach directly uses prior models of the temporal correlations among images and the 3-D spatial correlations among image elements. A fast algorithm is also developed to reconstruct the regularized images. Image reconstruction is posed in terms of an augmented image and measurement vector, which are concatenated from a specific number of previous and future frames. The reconstruction is then based on an augmented regularization matrix which reflects the a priori constraints on temporal and 3-D spatial correlations of image elements. A temporal factor reflecting the relative strength of the image correlation is objectively calculated from measurement data. Results show that image reconstruction models which account for inter-element correlations, in both space and time, show improved resolution and noise performance in comparison with simpler image models.
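The augmented formulation can be sketched as a block-structured regularized least-squares problem: stack the frames into one vector and add a temporal difference penalty that couples them. This is a schematic sketch only; the Jacobian, priors, and data below are random stand-ins, not a real EIT forward model or the paper's regularization matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
n_el, n_meas, n_frames = 16, 40, 3          # toy problem sizes (assumed)

J = rng.standard_normal((n_meas, n_el))      # per-frame Jacobian (stand-in)
R = np.eye(n_el)                             # spatial prior (identity here)

# Augmented Jacobian: block-diagonal, one block per frame.
J_aug = np.kron(np.eye(n_frames), J)

# Spatial penalty per frame, plus a first-difference temporal penalty
# coupling adjacent frames.
D = np.diff(np.eye(n_frames), axis=0)        # (n_frames-1) x n_frames
R_space = np.kron(np.eye(n_frames), R)
R_time = np.kron(D, np.eye(n_el))

lam, mu = 1e-2, 1.0                          # spatial/temporal weights (assumed)
y = rng.standard_normal(n_meas * n_frames)   # stacked measurement vector

# Solve the augmented Tikhonov normal equations in one shot.
A = J_aug.T @ J_aug + lam**2 * R_space.T @ R_space + mu**2 * R_time.T @ R_time
x_aug = np.linalg.solve(A, J_aug.T @ y)
x_frames = x_aug.reshape(n_frames, n_el)     # one image per frame
```

Raising the temporal weight `mu` pulls adjacent frames toward each other, which is how inter-frame correlation trades noise suppression against temporal smoothing.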
Eye evolution at high resolution: the neuron as a unit of homology.
Erclik, Ted; Hartenstein, Volker; McInnes, Roderick R; Lipshitz, Howard D
2009-08-01
Based on differences in morphology, photoreceptor-type usage and lens composition it has been proposed that complex eyes have evolved independently many times. The remarkable observation that different eye types rely on a conserved network of genes (including Pax6/eyeless) for their formation has led to the revised proposal that disparate complex eye types have evolved from a shared and simpler prototype. Did this ancestral eye already contain the neural circuitry required for image processing? And what were the evolutionary events that led to the formation of complex visual systems, such as those found in vertebrates and insects? The recent identification of unexpected cell-type homologies between neurons in the vertebrate and Drosophila visual systems has led to two proposed models for the evolution of complex visual systems from a simple prototype. The first, as an extension of the finding that the neurons of the vertebrate retina share homologies with both insect (rhabdomeric) and vertebrate (ciliary) photoreceptor cell types, suggests that the vertebrate retina is a composite structure, made up of neurons that have evolved from two spatially separate ancestral photoreceptor populations. The second model, based largely on the conserved role for the Vsx homeobox genes in photoreceptor-target neuron development, suggests that the last common ancestor of vertebrates and flies already possessed a relatively sophisticated visual system that contained a mixture of rhabdomeric and ciliary photoreceptors as well as their first- and second-order target neurons. The vertebrate retina and fly visual system would have subsequently evolved by elaborating on this ancestral neural circuit. Here we present evidence for these two cell-type homology-based models and discuss their implications.
Reusable Software Component Retrieval via Normalized Algebraic Specifications
1991-12-01
outputs. In fact, this method of query is simpler for matching since it relieves the system from the burden of generating a test set [Eich91].

[Eich91] Eichmann, David A., "Selecting Reusable Components Using Algebraic Specifications", Proceedings of the Second International..., September 1991.
Differentially Constrained Motion Planning with State Lattice Motion Primitives
2012-02-01
datapoint distribution in such histograms to a scalar may be used. One example is Kullback-Leibler divergence; an even simpler method is a sum of ... the Coupled Layer Architecture for Robotic Autonomy (CLARAty) system at the Jet Propulsion Laboratory. This allowed us to test the application of ... good fit to extend the tree or the graph towards a random sample. However, by virtue of the regular structure of the state samples, lattice
Improved Electro-Optical Switches
NASA Technical Reports Server (NTRS)
Nelson, Bruce N.; Cooper, Ronald F.
1994-01-01
Improved single-pole, double-throw electro-optical switches that operate with switching times of less than a microsecond were developed for applications such as optical communication systems and networks of optical sensors. They contain no moving parts. In comparison with some prior electro-optical switches, these are simpler and operate with smaller optical losses. The beam of light is switched from one output path to the other by applying, to an electro-optical crystal, a voltage that causes the polarization of the beam to change from vertical to horizontal.