Designing Class Methods from Dataflow Diagrams
NASA Astrophysics Data System (ADS)
Shoval, Peretz; Kabeli-Shani, Judith
A method for designing the class methods of an information system is described. The method is part of FOOM - Functional and Object-Oriented Methodology. In the analysis phase of FOOM, two models defining the users' requirements are created: a conceptual data model - an initial class diagram; and a functional model - hierarchical OO-DFDs (object-oriented dataflow diagrams). Based on these models, a well-defined process of methods design is applied. First, the OO-DFDs are converted into transactions, i.e., system processes that support a user task. The components and the process logic of each transaction are described in detail, using pseudocode. Then, each transaction is decomposed, according to well-defined rules, into class methods of various types: basic methods, application-specific methods, and main transaction (control) methods. Each method is attached to an appropriate class; messages between methods express the process logic of each transaction. The methods are defined using pseudocode or message charts.
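As a hedged illustration of the decomposition described above, the Python sketch below (all class, method, and data names are invented for illustration; FOOM itself prescribes pseudocode or message charts) shows a main transaction (control) method whose message sequence drives basic and application-specific methods attached to two classes.

```python
# Hypothetical sketch of FOOM-style method decomposition; names are illustrative.

class Customer:
    """Class owning customer data; holds its basic (elementary) methods."""
    def __init__(self, records):
        self.records = records

    def get(self, cid):                    # basic method: elementary read
        return self.records[cid]

class Order:
    def __init__(self):
        self.placed = []

    def add(self, record):                 # basic method: elementary create
        self.placed.append(record)

    def discount_for(self, customer):      # application-specific method
        return 0.1 if customer["loyal"] else 0.0

def place_order(customers, orders, cid, item):
    """Main transaction (control) method: the message sequence below
    expresses the process logic recovered from the OO-DFD transaction."""
    customer = customers.get(cid)
    orders.add({"customer": cid, "item": item,
                "discount": orders.discount_for(customer)})

customers = Customer({1: {"loyal": True}})
orders = Order()
place_order(customers, orders, cid=1, item="widget")
print(orders.placed)
```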
Reference coordinate systems: An update. Supplement 11
NASA Technical Reports Server (NTRS)
Mueller, Ivan I.
1988-01-01
A common requirement for all geodetic investigations is a well-defined coordinate system attached to the earth in some prescribed way, as well as a well-defined inertial coordinate system in which the motions of the terrestrial frame can be monitored. The paper deals with the problems encountered when establishing such coordinate systems and the transformations between them. In addition, problems related to the modeling of the deformable earth are discussed. This paper is an updated version of the earlier work, Reference Coordinate Systems for Earth Dynamics: A Preview, by the author.
Conceptual Model of Quantities, Units, Dimensions, and Values
NASA Technical Reports Server (NTRS)
Rouquette, Nicolas F.; DeKoenig, Hans-Peter; Burkhart, Roger; Espinoza, Huascar
2011-01-01
JPL collaborated with experts from industry and other organizations to develop a conceptual model of quantities, units, dimensions, and values based on the current work of the ISO 80000 committee revising the International System of Units & Quantities based on the International Vocabulary of Metrology (VIM). By providing support for ISO 80000 in SysML via the International Vocabulary of Metrology (VIM), this conceptual model provides, for the first time, a standards-based approach for integrating issues of unit coherence and dimensional analysis into the practice of systems engineering with SysML-based tools. This conceptual model provides support for two kinds of analyses specified in the International Vocabulary of Metrology (VIM): coherence of units as well as of systems of units, and dimensional analysis of systems of quantities. To provide a solid and stable foundation, the model for defining quantities, units, dimensions, and values in SysML is explicitly based on the concepts defined in VIM. At the same time, the model library is designed in such a way that extensions to the ISQ (International System of Quantities) and SI Units (Système International d'Unités) can be represented, as well as any alternative systems of quantities and units. The model library can be used to support SysML user models in various ways. A simple approach is to define and document libraries of reusable systems of units and quantities for reuse across multiple projects, and to link units and quantity kinds from these libraries to Unit and QuantityKind stereotypes defined in SysML user models.
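The following Python sketch is a minimal, non-normative illustration of the core idea (it is not the SysML/VIM model library itself; the names and the tuple encoding are assumptions): representing each unit's dimension as an exponent vector over the ISQ base quantities makes dimensional analysis and unit-compatibility checks mechanical.

```python
# Minimal sketch: units carry dimension vectors over the ISQ base quantities,
# so coherence and dimensional analysis reduce to vector arithmetic.
from dataclasses import dataclass

BASE = ("L", "M", "T", "I", "Theta", "N", "J")  # ISQ base dimensions

@dataclass(frozen=True)
class Unit:
    name: str
    dim: tuple            # exponents over BASE
    factor: float = 1.0   # scale relative to the coherent SI unit

    def __mul__(self, other):
        # multiplying units adds dimension exponents and multiplies factors
        return Unit(f"{self.name}*{other.name}",
                    tuple(a + b for a, b in zip(self.dim, other.dim)),
                    self.factor * other.factor)

metre  = Unit("m", (1, 0, 0, 0, 0, 0, 0))
second = Unit("s", (0, 0, 1, 0, 0, 0, 0))
newton = Unit("N", (1, 1, -2, 0, 0, 0, 0))

def same_kind(u, v):
    """Dimensional analysis: quantities are compatible iff dimensions match."""
    return u.dim == v.dim

print(same_kind(newton, metre))   # False: cannot equate force with length
print((metre * second).dim)       # derived dimension L*T
```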
NASA Astrophysics Data System (ADS)
Noffke, Benjamin W.
Carbon materials have the potential to replace some precious metals in renewable energy applications. These materials are particularly attractive because of the elemental abundance and relatively low nuclear mass of carbon, implying economically feasible and lightweight materials. Targeted design of carbon materials is hindered by the lack of the fundamental understanding required to tailor their properties for the desired application. Moreover, most available synthetic methods for creating carbon materials involve harsh conditions that limit control of the resulting structure. Without a well-defined structure, the system is too complex and fundamental studies cannot be definitive. This work seeks to gain fundamental understanding through the development and application of efficient computational models for these systems, in conjunction with experiments performed on soluble, well-defined graphene nanostructures prepared by our group using a bottom-up synthetic approach. Theory is used to determine mechanistic details for well-defined carbon systems in applications of catalysis and electrochemical transformations. The resulting computational models explain previous observations of carbon materials well and provide suggestions for future directions. However, as the size of the nanostructures grows, the computational cost can become prohibitive. To reduce the computational scaling of quantum chemical calculations, a new fragmentation scheme has been developed that addresses the challenges of fragmenting conjugated molecules. By selecting fragments that retain important structural characteristics of graphene, a more efficient method is achieved. The new method paves the way for an automated, systematic fragmentation scheme for graphene molecules.
1981-03-01
…tifiability is imposed; and the system designer now has a tool to evaluate how well the model describes the system. The algorithm is verified by checking its… In analyzing a system, the design engineer uses a mathematical model. The model, by its very definition, represents the system. It… number of G (see Eq (23)) can give the designer a good indication of just how well the model defined by Eqs (1) through (3) describes the system…
Interactive computer aided technology, evolution in the design/manufacturing process
NASA Technical Reports Server (NTRS)
English, C. H.
1975-01-01
A powerful computer-operated three-dimensional graphic system and associated auxiliary computer equipment used in advanced design, production design, and manufacturing is described. This system has made these activities more productive than older and more conventional methods of designing and building aerospace vehicles. With this graphic system, designers are able to define parts using a wide variety of geometric entities, and to define parts as fully surfaced three-dimensional models as well as "wire-frame" models. Once a part is geometrically defined, the designer is able to take section cuts of the surfaced model and automatically determine all of the section properties of the planar cut, light-pen detect all of the surface patches, and automatically determine the volume and weight of the part. Further, designs are defined mathematically at a degree of accuracy never before achievable.
Modelling safety of multistate systems with ageing components
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kołowrocki, Krzysztof; Soszyńska-Budny, Joanna
An innovative approach to safety analysis of multistate ageing systems is presented. Basic notions of ageing multistate systems safety analysis are introduced. The system components and the system multistate safety functions are defined. The mean values and variances of the multistate systems' lifetimes in the safety state subsets and the mean values of their lifetimes in the particular safety states are defined. The multistate system risk function and the moment of exceeding the critical safety state by the system are introduced. Applications of the proposed multistate system safety models to the evaluation and prediction of the safety characteristics of the consecutive “m out of n: F” system are presented as well.
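A minimal sketch of the lifetime quantities defined above, assuming (purely for illustration) exponential safety functions s(t, u) for the safety state subsets; the rates and the numerical integration recipe are assumptions, not taken from the paper.

```python
# Illustrative sketch: mean lifetime mu(u) and variance of the lifetime in
# safety state subset {u, ..., z}, assuming s(t, u) = exp(-lambda_u * t).
import numpy as np

rates = {1: 0.10, 2: 0.25, 3: 0.50}   # hypothetical lambda_u per subset

def safety(t, u):
    return np.exp(-rates[u] * t)

t = np.linspace(0, 200, 20001)
for u in rates:
    mean_life = np.trapz(safety(t, u), t)        # mu(u) = integral of s(t, u)
    second    = np.trapz(2 * t * safety(t, u), t)  # E[T^2] for this subset
    var       = second - mean_life**2
    print(f"subset {u}: mean {mean_life:.2f}, variance {var:.2f}")
```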
Making the Invisible Visible: A Model for Delivery Systems in Adult Education
ERIC Educational Resources Information Center
Alex, Jennifer L.; Miller, Elizabeth A.; Platt, R. Eric; Rachal, John R.; Gammill, Deidra M.
2007-01-01
Delivery systems are not well defined in adult education. Therefore, this article reviews the multiple components that overlap to affect the adult learner and uses them to create a model for a comprehensive delivery system in adult education, with these individual components as sub-systems that are interrelated and interlocked. These components…
1988 Revisions to the 1978 National Fire-Danger Rating System
Robert E. Burgan
1988-01-01
The 1978 National Fire-Danger Rating System does not work well in the humid environment of the Eastern United States. System modifications to correct problems and their operational impact on System users are described. A new set of 20 fuel models is defined and compared graphically with the 1978 fuel models. Technical documentation of System changes is provided.
Arthur, J.K.; Taylor, R.E.
1986-01-01
As part of the Gulf Coast Regional Aquifer System Analysis (GC RASA) study, data from 184 geophysical well logs were used to define the geohydrologic framework of the Mississippi embayment aquifer system in Mississippi for flow model simulation. Five major aquifers of Eocene and Paleocene age were defined within this aquifer system in Mississippi. A computer data storage system was established to assimilate the information obtained from the geophysical logs. Computer programs were developed to manipulate the data to construct geologic sections and structure maps. Data from the storage system will be input to a five-layer, three-dimensional, finite-difference digital computer model that is used to simulate the flow dynamics in the five major aquifers of the Mississippi embayment aquifer system.
User Modeling in Adaptive Hypermedia Educational Systems
ERIC Educational Resources Information Center
Martins, Antonio Constantino; Faria, Luiz; Vaz de Carvalho, Carlos; Carrapatoso, Eurico
2008-01-01
This document is a survey in the research area of User Modeling (UM) for the specific field of Adaptive Learning. The aims of this document are: to define what a User Model is; to present existing and well-known User Models; to analyze the existing standards related to UM; and to compare existing systems. In the scientific area of User Modeling…
Goal Structuring Notation in a Radiation Hardening Assurance Case for COTS-Based Spacecraft
NASA Technical Reports Server (NTRS)
Witulski, A.; Austin, R.; Evans, J.; Mahadevan, N.; Karsai, G.; Sierawski, B.; LaBel, K.; Reed, R.
2016-01-01
The attached presentation summarizes how mission assurance is supported by model-based representations of spacecraft systems that can define sub-system functionality and interfacing as well as reliability parameters, and details a new paradigm for assurance: a model-centric rather than document-centric process.
Play-fairway analysis for geothermal exploration: Examples from the Great Basin, western USA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siler, Drew L; Faulds, James E
2013-10-27
Elevated permeability within fault systems provides pathways for circulation of geothermal fluids. Future geothermal development depends on precise and accurate location of such fluid flow pathways in order to both accurately assess geothermal resource potential and increase drilling success rates. The collocation of geologic characteristics that promote permeability in a given geothermal system define the geothermal ‘fairway’, the location(s) where upflow zones are probable and where exploration efforts including drilling should be focused. We define the geothermal fairway as the collocation of 1) fault zones that are ideally oriented for slip or dilation under ambient stress conditions, 2) areas with a high spatial density of fault intersections, and 3) lithologies capable of supporting dense interconnected fracture networks. Areas in which these characteristics are concomitant with both elevated temperature and fluids are probable upflow zones where economic-scale, sustainable temperatures and flow rates are most likely to occur. Employing a variety of surface and subsurface data sets, we test this ‘play-fairway’ exploration methodology on two Great Basin geothermal systems, the actively producing Brady’s geothermal system and a ‘greenfield’ geothermal prospect at Astor Pass, NV. These analyses, based on 3D structural and stratigraphic framework models, reveal subsurface characteristics about each system, well beyond the scope of standard exploration methods. At Brady’s, the geothermal fairways we define correlate well with successful production wells and pinpoint several drilling targets for maintaining or expanding production in the field. In addition, hot-dry wells within the Brady’s geothermal field lie outside our defined geothermal fairways. At Astor Pass, our play-fairway analysis provides for a data-based conceptual model of fluid flow within the geothermal system and indicates several targets for exploration drilling.
ERIC Educational Resources Information Center
Longenecker, Herbert E., Jr.; Yarbrough, David M.; Feinstein, David L.
2010-01-01
IS2002 has become a well defined standard for information systems curricula. The Data Management Association (DAMA 2006) curriculum framework defines a body of knowledge that points to a skill set that can enhance IS2002. While data management professionals are highly skilled individuals requiring as much as a decade of relevant experience before…
Precision agricultural systems: a model of integrative science and technology
USDA-ARS?s Scientific Manuscript database
In the world of science research, long gone are the days when investigations are done in isolation. More often than not, science funding starts with one or more well-defined challenges or problems, judged by society as high-priority and needing immediate attention. As such, problems are not defined...
Nonlinear stability in reaction-diffusion systems via optimal Lyapunov functions
NASA Astrophysics Data System (ADS)
Lombardo, S.; Mulone, G.; Trovato, M.
2008-06-01
We define optimal Lyapunov functions to study nonlinear stability of constant solutions to reaction-diffusion systems. A computable and finite radius of attraction for the initial data is obtained. Applications are given to the well-known Brusselator model and a three-species model for the spatial spread of rabies among foxes.
Equilibrator: Modeling Chemical Equilibria with Excel
ERIC Educational Resources Information Center
Vander Griend, Douglas A.
2011-01-01
Equilibrator is a Microsoft Excel program for learning about chemical equilibria through modeling, similar in function to EQS4WIN, which is no longer supported and does not work well with newer Windows operating systems. Similar to EQS4WIN, Equilibrator allows the user to define a system with temperature, initial moles, and then either total…
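As a rough illustration of what such an equilibrium-modeling exercise computes (a sketch, not Equilibrator's actual spreadsheet logic; the reaction and equilibrium constant are hypothetical), the following Python snippet solves for the equilibrium extent of a single reaction from initial moles and an assumed constant.

```python
# Solve the equilibrium extent x of A + B <-> C from initial moles and an
# assumed concentration-based constant K (all values hypothetical).
from scipy.optimize import brentq

n_A0, n_B0, n_C0 = 1.0, 1.0, 0.0   # initial moles
K = 50.0                            # assumed equilibrium constant
V = 1.0                             # litres

def residual(x):                    # x = extent of reaction in moles
    a, b, c = (n_A0 - x) / V, (n_B0 - x) / V, (n_C0 + x) / V
    return c - K * a * b            # K = [C]/([A][B]) rearranged to zero form

x_eq = brentq(residual, 0.0, min(n_A0, n_B0) - 1e-9)
print(f"equilibrium extent: {x_eq:.4f} mol")
```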
Models for discrete-time self-similar vector processes with application to network traffic
NASA Astrophysics Data System (ADS)
Lee, Seungsin; Rao, Raghuveer M.; Narasimha, Rajesh
2003-07-01
The paper defines self-similarity for vector processes by employing the discrete-time continuous-dilation operation which has successfully been used previously by the authors to define 1-D discrete-time stochastic self-similar processes. To define self-similarity of vector processes, it is required to consider the cross-correlation functions between different 1-D processes as well as the autocorrelation function of each constituent 1-D process in it. System models to synthesize self-similar vector processes are constructed based on the definition. With these systems, it is possible to generate self-similar vector processes from white noise inputs. An important aspect of the proposed models is that they can be used to synthesize various types of self-similar vector processes by choosing proper parameters. Additionally, the paper presents evidence of vector self-similarity in two-channel wireless LAN data and applies the aforementioned systems to simulate the corresponding network traffic traces.
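A hedged sketch of the general synthesis idea, not the authors' specific system models: passing white noise through a power-law spectral filter yields an approximately self-similar scalar series; for vector processes one would also shape the cross-spectra between channels. The exponent and series length below are arbitrary.

```python
# Shape white noise with H(f) ~ |f|^(-beta/2) to obtain an approximately
# self-similar (1/f-type) discrete-time series.
import numpy as np

rng = np.random.default_rng(0)
n, beta = 4096, 0.8                      # beta relates to the Hurst exponent

white = rng.standard_normal(n)
spectrum = np.fft.rfft(white)
freqs = np.fft.rfftfreq(n)
freqs[0] = freqs[1]                      # avoid division by zero at DC
shaped = spectrum * freqs ** (-beta / 2.0)
series = np.fft.irfft(shaped, n)

print(series[:5])
```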
A New Method for Conceptual Modelling of Information Systems
NASA Astrophysics Data System (ADS)
Gustas, Remigijus; Gustiene, Prima
Service architecture is not necessarily bound to the technical aspects of information system development. It can be defined by using conceptual models that are independent of any implementation technology. Unfortunately, the conventional information system analysis and design methods cover just a part of required modelling notations for engineering of service architectures. They do not provide effective support to maintain semantic integrity between business processes and data. Service orientation is a paradigm that can be applied for conceptual modelling of information systems. The concept of service is rather well understood in different domains. It can be applied equally well for conceptualization of organizational and technical information system components. This chapter concentrates on analysis of the differences between service-oriented modelling and object-oriented modelling. Service-oriented method is used for semantic integration of information system static and dynamic aspects.
Delin, G.N.; Almendinger, James Edward
1991-01-01
Hydrogeologic mapping and numerical modeling were used to delineate zones of contribution to wells, defined as all parts of a ground-water-flow system that could supply water to a well. The zones of contribution delineated by use of numerical modeling have similar orientation (parallel to regional flow directions) but significantly different areas than the zones of contribution delineated by use of hydrogeologic mapping. Differences in computed areas of recharge are attributed to the capability of the numerical model to more accurately represent (1) the three-dimensional flow system, (2) hydrologic boundaries like streams, (3) variable recharge, and (4) the influence of nearby pumped wells, compared to the analytical models.
Delin, G.N.; Almendinger, James Edward
1993-01-01
Hydrogeologic mapping and numerical modeling were used to delineate zones of contribution to wells, defined as all parts of a ground-water-flow system that could supply water to a well. The zones of contribution delineated by use of numerical modeling have similar orientation (parallel to regional flow directions) but significantly different areas than the zones of contribution delineated by use of hydrogeologic mapping. Differences in computed areas of recharge are attributed to the capability of the numerical model to more accurately represent (1) the three-dimensional flow system, (2) hydrologic boundaries such as streams, (3) variable recharge, and (4) the influence of nearby pumped wells, compared to the analytical models.
Identification of propulsion systems
NASA Technical Reports Server (NTRS)
Merrill, Walter; Guo, Ten-Huei; Duyar, Ahmet
1991-01-01
This paper presents a tutorial on the use of model identification techniques for the identification of propulsion system models. These models are important for control design, simulation, parameter estimation, and fault detection. Propulsion system identification is defined in the context of the classical description of identification as a four step process that is unique because of special considerations of data and error sources. Propulsion system models are described along with the dependence of system operation on the environment. Propulsion system simulation approaches are discussed as well as approaches to propulsion system identification with examples for both air breathing and rocket systems.
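As a generic illustration of one identification step (ordinary least-squares fitting of a first-order ARX model, a textbook recipe rather than NASA's specific four-step procedure), the sketch below recovers model parameters from simulated input/output data.

```python
# Least-squares ARX identification sketch: estimate a, b in
# y[k] = a*y[k-1] + b*u[k-1] from noisy input/output records.
import numpy as np

rng = np.random.default_rng(1)
N = 500
u = rng.standard_normal(N)               # hypothetical actuator input
y = np.zeros(N)
for k in range(1, N):                    # "true" system generating the data
    y[k] = 0.9 * y[k - 1] + 0.5 * u[k - 1] + 0.01 * rng.standard_normal()

Phi = np.column_stack([y[:-1], u[:-1]])  # regressor matrix
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print("estimated (a, b):", theta)        # should be close to (0.9, 0.5)
```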
NASA Astrophysics Data System (ADS)
Najafi, M. N.; Dashti-Naserabadi, H.
2018-03-01
In many situations we are interested in the propagation of energy in some portions of a three-dimensional system with dilute long-range links. In this paper, a sandpile model is defined on the three-dimensional small-world network with real dissipative boundaries and the energy propagation is studied in three dimensions as well as in two-dimensional cross-sections. Two types of cross-sections are defined in the system, one in the bulk and another on the system boundary. The motivation is to make clear how the statistics of the avalanches in the bulk cross-section tend to the statistics of the dissipative avalanches, defined on the boundaries, as the concentration of long-range links (α) increases. This trend is numerically shown to be a power law in a manner described in the paper. Two regimes of α are considered in this work. For sufficiently small values of α the dominant behavior of the system is just like that of the regular BTW model, whereas for intermediate values the behavior is nontrivial, with some exponents that are reported in the paper. It is shown that the spatial extent up to which the statistics is similar to the regular BTW model scales with α just like the dissipative BTW model with dissipation factor (mass in the corresponding ghost model) m² ∼ α, for the three-dimensional system as well as its two-dimensional cross-sections.
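For orientation, here is a minimal 2D BTW sandpile sketch with open, dissipative boundaries; the paper's model additionally places dilute long-range links with concentration α on a 3D lattice, which this illustration omits.

```python
# Minimal 2D BTW sandpile: drive with single grains, relax unstable sites,
# record avalanche sizes. Grains toppled across the edge are dissipated.
import numpy as np

rng = np.random.default_rng(2)
L, z_c = 32, 4                           # lattice size, toppling threshold
grid = np.zeros((L, L), dtype=int)

def relax(grid):
    size = 0
    while True:
        unstable = np.argwhere(grid >= z_c)
        if len(unstable) == 0:
            return size
        for i, j in unstable:
            grid[i, j] -= 4
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < L and 0 <= nj < L:   # edge grains dissipate
                    grid[ni, nj] += 1

sizes = []
for _ in range(5000):
    i, j = rng.integers(L, size=2)
    grid[i, j] += 1                      # drive: drop one grain
    sizes.append(relax(grid))

print("mean avalanche size:", np.mean(sizes))
```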
NASA Technical Reports Server (NTRS)
Kurien, J.; Nayak, P.; Williams, B.; Koga, Dennis (Technical Monitor)
1998-01-01
MPL is the language with which a modeler describes a system to be diagnosed or controlled by Livingstone. MPL is used to specify what the components of the system are, how they are interconnected, and how they behave both nominally and when failed. Component behavioral models used by Livingstone are described by a set of propositional well-formed formulas (wffs). An understanding of well-formed formulas, of primitive component types specified through defcomponent, and of device structure specified by defmodule is essential to an understanding of MPL. This document describes:
• well-formed formula (wff): the basis for describing the behavior of a component in a system
• defvalues: specifies the domain (legal values) of a variable
• defcomponent: defines the modes, behaviors and mode transitions for primitive components
• defmodule: defines composite devices, consisting of interconnected components
• defrelation: a macro mechanism for expanding a complex wff according to the value of an argument
• forall: an iteration construct used to expand a wff or relation on a set of arguments
• defsymbol-expansion: a mechanism for naming a collection of symbols (e.g., the names of all valves in the system)
Agent-based models in translational systems biology
An, Gary; Mi, Qi; Dutta-Moscato, Joyeeta; Vodovotz, Yoram
2013-01-01
Effective translational methodologies for knowledge representation are needed in order to make strides against the constellation of diseases that affect the world today. These diseases are defined by their mechanistic complexity, redundancy, and nonlinearity. Translational systems biology aims to harness the power of computational simulation to streamline drug/device design, simulate clinical trials, and eventually to predict the effects of drugs on individuals. The ability of agent-based modeling to encompass multiple scales of biological process as well as spatial considerations, coupled with an intuitive modeling paradigm, suggests that this modeling framework is well suited for translational systems biology. This review describes agent-based modeling and gives examples of its translational applications in the context of acute inflammation and wound healing. PMID:20835989
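A toy sketch in the spirit of the agent-based approach described above (not any of the reviewed models; the grid, rules, and parameters are invented): simple local rules for mobile "immune cell" agents produce system-level clearance of damaged sites.

```python
# Toy agent-based model: agents random-walk on a periodic grid and repair
# any damaged site they occupy; global behavior emerges from local rules.
import numpy as np

rng = np.random.default_rng(3)
L, n_agents, steps = 20, 30, 200
damage = rng.random((L, L)) < 0.2        # initial damaged sites
agents = rng.integers(L, size=(n_agents, 2))

for _ in range(steps):
    moves = rng.integers(-1, 2, size=(n_agents, 2))
    agents = (agents + moves) % L        # random walk with wraparound
    for x, y in agents:
        damage[x, y] = False             # agent repairs the site it occupies

print("damaged sites remaining:", int(damage.sum()))
```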
A new method for qualitative simulation of water resources systems: 1. Theory
NASA Astrophysics Data System (ADS)
Camara, A. S.; Pinheiro, M.; Antunes, M. P.; Seixas, M. J.
1987-11-01
A new dynamic modeling methodology, SLIN (Simulação Linguistica), allowing for the analysis of systems defined by linguistic variables, is presented. SLIN applies a set of logical rules avoiding fuzzy theoretic concepts. To make the transition from qualitative to quantitative modes, logical rules are used as well. Extensions of the methodology to simulation-optimization applications and multiexpert system modeling are also discussed.
Modelling of Biometric Identification System with Given Parameters Using Colored Petri Nets
NASA Astrophysics Data System (ADS)
Petrosyan, G.; Ter-Vardanyan, L.; Gaboutchian, A.
2017-05-01
Biometric identification systems with given parameters are modelled using Colored Petri Nets, a modelling language developed for systems in which communication, synchronization and distributed resources play an important role. Colored Petri Nets combine the strengths of classical Petri Nets with the power of a high-level programming language. Colored Petri Nets have both formal and intuitive graphical presentations. A graphical CPN model consists of a set of interacting modules which include a network of places, transitions and arcs. The mathematical representation has a well-defined syntax and semantics and defines the system's behavioural properties. One of the best-known features used in biometrics is the human fingerprint pattern. During the last decade other human features have become of interest, such as iris-based or face recognition. The objective of this paper is to introduce the fundamental concepts of Petri Nets in relation to tooth shape analysis. Biometric identification systems function in two phases: the data enrollment phase and the identification phase. During the data enrollment phase, images of teeth are added to the database. This record contains enrollment data as a noisy version of the biometric data corresponding to the individual. During the identification phase, an unknown individual is observed again and compared to the enrollment data in the database, and the system then identifies the individual. The purpose of modeling a biometric identification system by means of Petri Nets is to reveal the following aspects of the functioning model: the efficiency of the model, the behavior of the model, mistakes and accidents in the model, and the feasibility of simplifying the model or substituting its separate components with more effective components without interfering with system functioning. The results of modeling and evaluating the biometric identification system are presented and discussed.
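As a concrete, simplified illustration of the token-game semantics underlying such a model (an ordinary place/transition net without the colour sets of full CPN; place and transition names are invented), the sketch below walks one marking through the enrollment and identification phases.

```python
# Minimal place/transition net: a marking (tokens per place) and transitions
# with pre/post multisets; firing moves tokens through the two phases.
places = {"image_captured": 1, "enrolled": 0, "probe": 1, "identified": 0}

transitions = {
    "enroll":   ({"image_captured": 1}, {"enrolled": 1}),
    "identify": ({"probe": 1, "enrolled": 1}, {"identified": 1, "enrolled": 1}),
}

def enabled(t):
    pre, _ = transitions[t]
    return all(places[p] >= n for p, n in pre.items())

def fire(t):
    pre, post = transitions[t]
    for p, n in pre.items():
        places[p] -= n
    for p, n in post.items():
        places[p] += n

for t in ("enroll", "identify"):
    if enabled(t):
        fire(t)
print(places)   # token game: enrollment then identification
```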
An Evaporative Cooling Model for Teaching Applied Psychrometrics
ERIC Educational Resources Information Center
Johnson, Donald M.
2004-01-01
Evaporative cooling systems are commonly used in controlled environment plant and animal production. These cooling systems operate based on well defined psychrometric principles. However, students often experience considerable difficulty in learning these principles when they are taught in an abstract, verbal manner. This article describes an…
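One such principle admits a compact worked example: a direct evaporative cooler moves the dry-bulb temperature toward the wet-bulb temperature in proportion to its saturation effectiveness. The sketch below applies this standard relation; the numbers are illustrative.

```python
# Direct evaporative cooling outlet temperature from saturation effectiveness.
def evap_cooler_outlet(t_db, t_wb, effectiveness):
    """T_out = T_db - eff * (T_db - T_wb); temperatures in deg C, eff in [0, 1]."""
    return t_db - effectiveness * (t_db - t_wb)

print(evap_cooler_outlet(t_db=35.0, t_wb=22.0, effectiveness=0.8))  # 24.6 C
```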
Guo, Xiufang; Das, Mainak; Rumsey, John; Gonzalez, Mercedes; Stancescu, Maria; Hickman, James
2010-12-01
To date, the coculture of motoneurons (MNs) and skeletal muscle in a defined in vitro system has only been described in one study and that was between rat MNs and rat skeletal muscle. No in vitro studies have demonstrated human MN to rat muscle synapse formation, although numerous studies have attempted to implant human stem cells into rat models to determine if they could be of therapeutic use in disease or spinal injury models, although with little evidence of neuromuscular junction (NMJ) formation. In this report, MNs differentiated from human spinal cord stem cells, together with rat skeletal myotubes, were used to build a coculture system to demonstrate that NMJ formation between human MNs and rat skeletal muscles is possible. The culture was characterized by morphology, immunocytochemistry, and electrophysiology, while NMJ formation was demonstrated by immunocytochemistry and videography. This defined system provides a highly controlled reproducible model for studying the formation, regulation, maintenance, and repair of NMJs. The in vitro coculture system developed here will be an important model system to study NMJ development, the physiological and functional mechanism of synaptic transmission, and NMJ- or synapse-related disorders such as amyotrophic lateral sclerosis, as well as for drug screening and therapy design.
Formation and Human Risk of Carcinogenic Heterocyclic Amines Formed from Natural Precursors in Meat
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knize, M G; Felton, J S
2004-11-22
A group of heterocyclic amines that are mutagens and rodent carcinogens form when meat is cooked to medium and well-done states. The precursors of these compounds are natural meat components: creatinine, amino acids and sugars. Defined model systems of dry-heated precursors mimic the amounts and proportions of heterocyclic amines found in meat. Results from model systems and cooking experiments suggest ways to reduce their formation and, thus, to reduce human intake. Human cancer epidemiology studies related to consumption of well-done meat products are listed and compared.
A Network Based Theory of Health Systems and Cycles of Well-being
Rhodes, Michael Grant
2013-01-01
There are two dominant approaches to describe and understand the anatomy of complete health and well-being systems internationally. Yet neither approach has been able to either predict or explain occasional but dramatic crises in health and well-being systems around the world, in developed, emerging market, and developing country contexts. As the impacts of such events can be measured not simply in terms of their social and economic consequences but also as public health crises, there is a clear need to look for and formulate an alternative approach. This paper examines multi-disciplinary theoretical evidence to suggest that health systems exhibit natural and observable systemic and long-cycle characteristics that can be modelled. A health and well-being system model of two slowly evolving anthropological network sub-systems is defined. The first network sub-system consists of organised professional networks of exclusive suppliers of health and well-being services. The second network sub-system consists of communities organising themselves to resource those exclusive services. Together these two network sub-systems interact to form the specific (sovereign) health and well-being systems we know today. But the core of a truly ‘complex adaptive system’ can also be identified, and a simplified two sub-system model of recurring Lotka-Volterra predator-prey cycles is specified. The implications of such an adaptive and evolving model of system anatomy for effective public health, social security insurance and well-being systems governance could be considerable. PMID:24596831
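A minimal numerical sketch of the recurring predator-prey cycles invoked above, using the classical Lotka-Volterra equations; the parameter values and the mapping of the two health-system sub-systems onto prey and predator are purely illustrative.

```python
# Classical Lotka-Volterra cycles: the two populations oscillate rather than
# settle, mirroring the recurring-cycle claim; parameters are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(t, y, a=1.0, b=0.5, c=0.5, d=0.8):
    x, z = y                       # two interacting sub-system sizes
    return [a * x - b * x * z,     # dx/dt: growth minus predation
            c * x * z - d * z]     # dz/dt: conversion minus decay

sol = solve_ivp(lotka_volterra, (0, 50), [2.0, 1.0], dense_output=True)
t = np.linspace(0, 50, 5)
print(sol.sol(t).T)                # sampled populations along the cycle
```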
Spatiotemporal patterns in reaction-diffusion system and in a vibrated granular bed
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swinney, H.L.; Lee, K.J.; McCormick, W.D.
Experiments on a quasi-two-dimensional reaction-diffusion system reveal transitions from a uniform state to stationary hexagonal, striped, and rhombic spatial patterns. For other reactor conditions lamellae and self-replicating spot patterns are observed. These patterns form in continuously fed thin gel reactors that can be maintained indefinitely in well-defined nonequilibrium states. Reaction-diffusion models with two chemical species yield patterns similar to those observed in the experiments. Pattern formation is also being examined in vertically oscillated thin granular layers (typically 3-30 particle diameters deep). For small acceleration amplitudes, a granular layer is flat, but above a well-defined critical acceleration amplitude, spatial patterns spontaneously form. Disordered time-dependent granular patterns are observed as well as regular patterns of squares, stripes, and hexagons. A one-dimensional model consisting of a completely inelastic ball colliding with a sinusoidally oscillating platform provides a semi-quantitative description of most of the observed bifurcations between the different spatiotemporal regimes.
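The one-dimensional model mentioned in the last sentence can be sketched directly: a completely inelastic ball leaves the plate when the plate's downward acceleration exceeds g, flies ballistically, and sticks on landing. The fixed-time-step event detection and all parameter values below are simplifying assumptions for illustration.

```python
# Completely inelastic ball on a sinusoidally oscillating plate y_p = A sin(wt).
# Takeoff when plate acceleration < -g, i.e. A*w^2*sin(wt) > g; then free flight.
import numpy as np

g, A, w = 9.81, 0.001, 2 * np.pi * 30        # amplitude [m], frequency [rad/s]
plate = lambda t: A * np.sin(w * t)
dt, t = 1e-5, 0.0
flights = []

for _ in range(20):                          # simulate 20 flights
    # advance while stuck to the plate, until the takeoff condition holds
    while not (A * w * w * np.sin(w * t) > g):
        t += dt
    t0, y0, v0 = t, plate(t), A * w * np.cos(w * t)
    t += dt
    # free flight: step until the ball meets the plate again (inelastic landing)
    while y0 + v0 * (t - t0) - 0.5 * g * (t - t0) ** 2 > plate(t):
        t += dt
    flights.append(t - t0)

print("flight times [s]:", np.round(flights[:5], 4))
```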
A model of cloud application assignments in software-defined storages
NASA Astrophysics Data System (ADS)
Bolodurina, Irina P.; Parfenov, Denis I.; Polezhaev, Petr N.; E Shukhman, Alexander
2017-01-01
The aim of this study is to analyze the structure and mechanisms of interaction of typical cloud applications and to suggest approaches to optimizing their placement in storage systems. In this paper, we describe a generalized model of cloud applications including three basic layers: a model of the application, a model of the service, and a model of the resource. The distinctive feature of the suggested model is that it analyzes cloud resources from the user's point of view and from the point of view of the software-defined infrastructure of the virtual data center (DC). The innovative character of this model lies in describing, at the same time, the application data placements as well as the state of the virtual environment, taking into account the network topology. The model of software-defined storage has been developed as a submodel within the resource model. This model allows implementing an algorithm for controlling cloud application assignments in software-defined storages. Experiments showed that this algorithm decreases cloud application response time and increases the performance of user request processing. The use of software-defined data storages allows a decrease in the number of physical storage devices, which demonstrates the efficiency of our algorithm.
Audiomagnetotelluric investigation of Snake Valley, eastern Nevada and western Utah
McPhee, Darcy K.; Pari, Keith; Baird, Frank
2009-01-01
As support for an exploratory well-drilling and hydraulic-testing program, AMT data were collected using a Geometrics Stratagem EH4 system along four profiles that extend roughly east-west from the southern Snake Range into Snake Valley. The profiles range from 3 to 5 kilometers in length, and station spacing was 200 to 400 meters. Two-dimensional inverse models were computed using the data from the transverse-electric (TE), transverse-magnetic (TM), and combined (TE+TM) mode using a conjugate gradient, finite-difference method. Interpretation of the 2-D AMT models defines several faults, some of which may influence ground-water flow in the basins, and identifies underlying Paleozoic carbonate and clastic rocks and the thickness of basin-fill sediments. These AMT data and models, coupled with the geologic mapping and other surface geophysical methods, form the basis for identifying potential well sites and defining the subsurface structures and stratigraphy within Snake Valley.
Lemke, Heinz U; Berliner, Leonard
2011-05-01
Appropriate use of information and communication technology (ICT) and mechatronic (MT) systems is viewed by many experts as a means to improve workflow and quality of care in the operating room (OR). This will require a suitable information technology (IT) infrastructure, as well as communication and interface standards, such as specialized extensions of DICOM, to allow data interchange between surgical system components in the OR. A design of such an infrastructure, sometimes referred to as surgical PACS, but better defined as a Therapy Imaging and Model Management System (TIMMS), will be introduced in this article. A TIMMS should support the essential functions that enable and advance image guided therapy, and in the future, a more comprehensive form of patient-model guided therapy. Within this concept, the "image-centric world view" of the classical PACS technology is complemented by an IT "model-centric world view". Such a view is founded in the special patient modelling needs of an increasing number of modern surgical interventions as compared to the imaging intensive working mode of diagnostic radiology, for which PACS was originally conceptualised and developed. The modelling aspects refer to both patient information and workflow modelling. Standards for creating and integrating information about patients, equipment, and procedures are vitally needed when planning for an efficient OR. The DICOM Working Group 24 (WG-24) has been established to develop DICOM objects and services related to image and model guided surgery. To determine these standards, it is important to define step-by-step surgical workflow practices and create interventional workflow models per procedures or per variable cases. As the boundaries between radiation therapy, surgery and interventional radiology are becoming less well-defined, precise patient models will become the greatest common denominator for all therapeutic disciplines. In addition to imaging, the focus of WG-24 is to serve the therapeutic disciplines by enabling modelling technology to be based on standards.
Zolkind, Paul; Przybylski, Dariusz; Marjanovic, Nemanja; Nguyen, Lan; Lin, Tianxiang; Johanns, Tanner; Alexandrov, Anton; Zhou, Liye; Allen, Clint T.; Miceli, Alexander P.; Schreiber, Robert D.; Artyomov, Maxim; Dunn, Gavin P.; Uppaluri, Ravindra
2018-01-01
Head and neck squamous cell carcinomas (HNSCC) are an ideal immunotherapy target due to their high mutation burden and frequent infiltration with lymphocytes. Preclinical models to investigate targeted and combination therapies as well as defining biomarkers to guide treatment represent an important need in the field. Immunogenomics approaches have illuminated the role of mutation-derived tumor neoantigens as potential biomarkers of response to checkpoint blockade as well as representing therapeutic vaccines. Here, we aimed to define a platform for checkpoint and other immunotherapy studies using syngeneic HNSCC cell line models (MOC2 and MOC22), and evaluated the association between mutation burden, predicted neoantigen landscape, infiltrating T cell populations and responsiveness of tumors to anti-PD1 therapy. We defined dramatic hematopoietic cell transcriptomic alterations in the MOC22 anti-PD1 responsive model in both tumor and draining lymph nodes. Using a cancer immunogenomics pipeline and validation with ELISPOT and tetramer analysis, we identified the H-2Kb-restricted ICAM1P315L (mICAM1) as a neoantigen in MOC22. Finally, we demonstrated that mICAM1 vaccination was able to protect against MOC22 tumor development defining mICAM1 as a bona fide neoantigen. Together these data define a pre-clinical HNSCC model system that provides a foundation for future investigations into combination and novel therapeutics. PMID:29423108
Information Systems Curricula: A Fifty Year Journey
ERIC Educational Resources Information Center
Longenecker, Herbert E., Jr.; Feinstein, David; Clark, Jon D.
2013-01-01
This article presents the results of research to explore the nature of changes in skills over a fifty year period spanning the life of Information Systems model curricula. Work begun in 1999 was expanded both backwards in time, as well as forwards to 2012 to define skills relevant to Information Systems curricula. The work in 1999 was based on job…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elia, Valerio; Gnoni, Maria Grazia, E-mail: mariagrazia.gnoni@unisalento.it; Tornese, Fabiana
Highlights:
• Pay-As-You-Throw (PAYT) schemes are becoming widespread in several countries.
• Economic, organizational and technological issues have to be integrated in an efficient PAYT model design.
• Efficiency refers to a PAYT system which supports high citizen participation rates as well as economic sustainability.
• Different steps and constraints have to be evaluated, from collection services to type technologies.
• A holistic approach is discussed to support PAYT systems diffusion.

Abstract: Pay-As-You-Throw (PAYT) strategies are becoming widely applied in solid waste management systems; the main purpose is to support a more sustainable management of waste flows from economic, environmental and social points of view. Adopting PAYT charging models increases the complexity level of the waste management service, as new organizational issues have to be evaluated compared to flat charging models. In addition, innovative technological solutions could also be adopted to increase the overall efficiency of the service. Unit pricing, user identification and waste measurement represent the three most important processes to be defined in a PAYT system. The paper proposes a holistic framework to support an effective design and management process. The framework defines the most critical processes and effective organizational and technological solutions for supporting waste managers as well as researchers.
Automated extraction of knowledge for model-based diagnostics
NASA Technical Reports Server (NTRS)
Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.
1990-01-01
The concept of accessing computer aided design (CAD) databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques, as well as an internal database of component descriptions, to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in formats required by various model-based reasoning tools.
Singer, Jefferson A; Blagov, Pavel; Berry, Meredith; Oost, Kathryn M
2013-12-01
An integrative model of narrative identity builds on a dual memory system that draws on episodic memory and a long-term self to generate autobiographical memories. Autobiographical memories related to critical goals in a lifetime period lead to life-story memories, which in turn become self-defining memories when linked to an individual's enduring concerns. Self-defining memories that share repetitive emotion-outcome sequences yield narrative scripts, abstracted templates that filter cognitive-affective processing. The life story is the individual's overarching narrative that provides unity and purpose over the life course. Healthy narrative identity combines memory specificity with adaptive meaning-making to achieve insight and well-being, as demonstrated through a literature review of personality and clinical research, as well as new findings from our own research program. A clinical case study drawing on this narrative identity model is also presented with implications for treatment and research.
Flight Dynamic Model Exchange using XML
NASA Technical Reports Server (NTRS)
Jackson, E. Bruce; Hildreth, Bruce L.
2002-01-01
The AIAA Modeling and Simulation Technical Committee has worked for several years to develop a standard by which the information needed to develop physics-based models of aircraft can be specified. The purpose of this standard is to provide a well-defined set of information, definitions, data tables and axis systems so that cooperating organizations can transfer a model from one simulation facility to another with maximum efficiency. This paper proposes using an application of the eXtensible Markup Language (XML) to implement the AIAA simulation standard. The motivation and justification for using a standard such as XML is discussed. Necessary data elements to be supported are outlined. An example of an aerodynamic model as an XML file is given. This example includes definition of independent and dependent variables for function tables, definition of key variables used to define the model, and axis systems used. The final steps necessary for implementation of the standard are presented. Software to take an XML-defined model and import/export it to/from a given simulation facility is discussed, but not demonstrated. That would be the next step in final implementation of standards for physics-based aircraft dynamic models.
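As a purely hypothetical illustration of the approach (the element and attribute names below are invented, not the AIAA standard's actual schema), a function table with its independent variable and axis-system metadata can be serialized and re-parsed with a few lines of Python.

```python
# Hypothetical XML encoding of an aerodynamic function table; the schema
# shown here is illustrative only.
import xml.etree.ElementTree as ET

model = ET.Element("aeroModel", name="exampleAircraft", axisSystem="bodyFixed")
ET.SubElement(model, "independentVar", name="alpha", units="deg")
table = ET.SubElement(model, "functionTable", dependentVar="CL")
table.text = "0.0 0.2 0.4 0.6"           # CL at tabulated alpha breakpoints

xml_text = ET.tostring(model, encoding="unicode")
print(xml_text)

# A receiving facility would parse the same file back into its own tables:
parsed = ET.fromstring(xml_text)
print(parsed.find("functionTable").text.split())
```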
He, Temple; Habib, Salman
2013-09-01
Simple dynamical systems--with a small number of degrees of freedom--can behave in a complex manner due to the presence of chaos. Such systems are most often (idealized) limiting cases of more realistic situations. Isolating a small number of dynamical degrees of freedom in a realistically coupled system generically yields reduced equations with terms that can have a stochastic interpretation. In situations where both noise and chaos can potentially exist, it is not immediately obvious how Lyapunov exponents, key to characterizing chaos, should be properly defined. In this paper, we show how to do this in a class of well-defined noise-driven dynamical systems, derived from an underlying Hamiltonian model.
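A common recipe consistent with this setting (a sketch, not the paper's derivation) estimates a Lyapunov exponent for a noise-driven map by following two nearby trajectories driven by the same noise realization and renormalizing their separation; the noisy logistic map is illustrative only.

```python
# Two-trajectory Lyapunov estimate for a noise-driven logistic map; both
# trajectories receive the SAME noise so only the dynamics separates them.
import numpy as np

rng = np.random.default_rng(4)
r, eps, d0, steps = 3.9, 1e-4, 1e-8, 10000
x, y, lyap_sum = 0.3, 0.3 + d0, 0.0

for _ in range(steps):
    xi = eps * rng.standard_normal()     # common noise realization
    x = np.clip(r * x * (1 - x) + xi, 0.0, 1.0)
    y = np.clip(r * y * (1 - y) + xi, 0.0, 1.0)
    d = abs(y - x) or d0                 # guard against exactly zero separation
    lyap_sum += np.log(d / d0)
    y = x + d0 * np.sign((y - x) or 1.0)  # renormalize separation to d0

print("Lyapunov exponent estimate:", lyap_sum / steps)
```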
Gas network model allows full reservoir coupling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Methnani, M.M.
The gas-network flow model (Gasnet), developed for and added to an existing Qatar General Petroleum Corp. (QGPC) in-house reservoir simulator, allows improved modeling of the interaction among the reservoir, wells, and pipeline networks. Gasnet is a three-phase model that is modified to handle gas-condensate systems. The numerical solution is based on a control volume scheme that uses the concept of cells and junctions, whereby pressure and phase densities are defined in cells, while phase flows are defined at junction links. The model features common numerical equations for the reservoir, well, and pipeline components and an efficient state-variable solution method in which all primary variables, including phase flows, are solved directly. Both steady-state and transient flow events can be simulated with the same tool. Three test cases show how the model runs. One case simulates flow redistribution in a simple two-branch gas network. The second simulates a horizontal gas well in a waterflooded gas reservoir. The third involves an export gas pipeline coupled to a producing reservoir.
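A toy steady-state sketch of the cells-and-junctions idea, with pressure defined in cells and flows at junction links; the Weymouth-type flow relation and all numbers are assumptions for illustration, not Gasnet's actual formulation.

```python
# Two-branch network: solve the junction-cell pressure so that the branch
# flows (defined at junction links) balance a fixed inlet flow.
from scipy.optimize import brentq
import math

C1, C2 = 2.0e-3, 1.0e-3       # branch conductances (hypothetical)
p_out1, p_out2 = 50.0, 55.0   # fixed outlet pressures [bar]
q_in = 0.15                   # fixed inlet flow to distribute between branches

def branch_flow(C, p_up, p_dn):
    # Weymouth-type relation: q = C * sqrt(p_up^2 - p_dn^2), illustrative only
    return C * math.sqrt(max(p_up**2 - p_dn**2, 0.0))

def imbalance(p_node):        # mass balance at the junction cell
    return branch_flow(C1, p_node, p_out1) + branch_flow(C2, p_node, p_out2) - q_in

p_node = brentq(imbalance, 55.0, 200.0)
print(f"junction pressure {p_node:.2f} bar, "
      f"branch flows {branch_flow(C1, p_node, p_out1):.3f} / "
      f"{branch_flow(C2, p_node, p_out2):.3f}")
```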
Semantic interoperability--HL7 Version 3 compared to advanced architecture standards.
Blobel, B G M E; Engel, K; Pharow, P
2006-01-01
To meet the challenge for high quality and efficient care, highly specialized and distributed healthcare establishments have to communicate and co-operate in a semantically interoperable way. Information and communication technology must be open, flexible, scalable, knowledge-based and service-oriented as well as secure and safe. For enabling semantic interoperability, a unified process for defining and implementing the architecture, i.e. structure and functions of the cooperating systems' components, as well as the approach for knowledge representation, i.e. the used information and its interpretation, algorithms, etc. have to be defined in a harmonized way. Deploying the Generic Component Model, systems and their components, underlying concepts and applied constraints must be formally modeled, strictly separating platform-independent from platform-specific models. As HL7 Version 3 claims to represent the most successful standard for semantic interoperability, HL7 has been analyzed regarding the requirements for model-driven, service-oriented design of semantic interoperable information systems, thereby moving from a communication to an architecture paradigm. The approach is compared with advanced architectural approaches for information systems such as OMG's CORBA 3 or EHR systems such as GEHR/openEHR and CEN EN 13606 Electronic Health Record Communication. HL7 Version 3 is maturing towards an architectural approach for semantic interoperability. Despite current differences, there is a close collaboration between the teams involved guaranteeing a convergence between competing approaches.
Development of guidelines for the definition of the relevant information content in data classes
NASA Technical Reports Server (NTRS)
Schmitt, E.
1973-01-01
The problem of experiment design is defined as an information system consisting of an information source, measurement unit, environmental disturbances, data handling and storage, and the mathematical analysis and usage of data. Based on today's concept of effective computability, general guidelines for the definition of the relevant information content in data classes are derived. The lack of a universally applicable information theory and corresponding mathematical or system structure restricts the solvable problem classes to a small set. It is expected that a new relativity theory of information, generally described by a universal algebra of relations, will lead to new mathematical models and system structures capable of modeling any well-defined practical problem isomorphic to an equivalence relation at any corresponding level of abstractness.
Defining and reconstructing clinical processes based on IHE and BPMN 2.0.
Strasser, Melanie; Pfeifer, Franz; Helm, Emmanuel; Schuler, Andreas; Altmann, Josef
2011-01-01
This paper describes the current status and results of our process management system for defining and reconstructing clinical care processes, which makes it possible to compare, analyze and evaluate clinical processes and to identify high-cost tasks or stays. The system is founded on IHE, which guarantees standardized interfaces and interoperability between clinical information systems. At the heart of the system is BPMN, a modeling notation and specification language which allows the definition and execution of clinical processes. The system provides functionality to define clinical core processes independent of the healthcare information system and to execute the processes in a workflow engine. Furthermore, the reconstruction of clinical processes is done by evaluating an IHE audit log database, which records patient movements within a healthcare facility. The main goal of the system is to assist hospital operators and clinical process managers in detecting discrepancies between defined and actual clinical processes and in identifying the main causes of high medical costs. Beyond that, the system can potentially contribute to reconstructing and improving clinical processes and enhancing cost control and patient care quality.
Nonlinear optics of astaxanthin thin films
NASA Astrophysics Data System (ADS)
Esser, A.; Fisch, Herbert; Haas, Karl-Heinz; Haedicke, E.; Paust, J.; Schrof, Wolfgang; Ticktin, Anton
1993-02-01
Carotenoids exhibit large nonlinear optical properties due to their extended π-electron system. Compared to other polyenes, which show a broad distribution of conjugation lengths, carotenoids exhibit a well-defined molecular structure, i.e. a well-defined conjugation length. Therefore carotenoid molecules can serve as model compounds to study the relationship between structure and nonlinear optical properties. In this paper the synthesis of four astaxanthins with C-numbers ranging from 30 to 60, their preparation into thin films, wavelength-dispersive third harmonic generation (THG) measurements and some molecular modelling calculations are presented. Resonant χ(3) values reach 1.2×10⁻¹⁰ esu for C60 astaxanthin. In the nonresonant regime a figure of merit χ(3)/α of several 10⁻¹³ esu·cm is demonstrated.
Information Sharing for Computing Trust Metrics on COTS Electronic Components
2008-09-01
…development of a system. There are many well-known SDLC models, the most popular of which are: Waterfall, V-shaped, Spiral, and Agile. … the SDLC or applied to the software and hardware distribution chain. A. JØSANG'S MODEL DEFINED. Jøsang expresses "opinions" mathematically as: …
An expert system for municipal solid waste management simulation analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsieh, M.C.; Chang, N.B.
1996-12-31
Optimization techniques have usually been used to model complicated metropolitan solid waste management systems to search for the best dynamic combination of waste recycling, facility siting, and system operation, where sophisticated and well-defined interrelationships are required in the modeling process. This paper instead applies Concurrent Object-Oriented Simulation (COOS), a new simulation software construction method, to bridge the gap between the physical system and its computer representation. A case study of the Kaohsiung solid waste management system in Taiwan illustrates the analytical methodology of COOS and its implementation in the creation of an expert system.
Fate of classical solitons in one-dimensional quantum systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pustilnik, M.; Matveev, K. A.
We study one-dimensional quantum systems near the classical limit described by the Korteweg-de Vries (KdV) equation. The excitations near this limit are the well-known solitons and phonons. The classical description breaks down at long wavelengths, where quantum effects become dominant. Focusing on the spectra of the elementary excitations, we describe analytically the entire classical-to-quantum crossover. We show that the ultimate quantum fate of the classical KdV excitations is to become fermionic quasiparticles and quasiholes. We discuss in detail two exactly solvable models exhibiting such crossover, the Lieb-Liniger model of bosons with weak contact repulsion and the quantum Toda model, and argue that the results obtained for these models are universally applicable to all quantum one-dimensional systems with a well-defined classical limit described by the KdV equation.
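For reference, the KdV equation and its one-soliton solution in one standard normalization (conventions differ between references, and the paper's may too):

```latex
% KdV equation and its one-soliton solution (a standard normalization).
\begin{align}
  u_t + 6\,u\,u_x + u_{xxx} &= 0, \\
  u(x,t) &= \frac{c}{2}\,
    \operatorname{sech}^2\!\left(\frac{\sqrt{c}}{2}\,(x - c t - x_0)\right),
\end{align}
% where c > 0 is the soliton speed and x_0 its initial position.
```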
An Ontology for Identifying Cyber Intrusion Induced Faults in Process Control Systems
NASA Astrophysics Data System (ADS)
Hieb, Jeffrey; Graham, James; Guan, Jian
This paper presents an ontological framework that permits formal representations of process control systems, including elements of the process being controlled and the control system itself. A fault diagnosis algorithm based on the ontological model is also presented. The algorithm can identify traditional process elements as well as control system elements (e.g., IP network and SCADA protocol) as fault sources. When these elements are identified as a likely fault source, the possibility exists that the process fault is induced by a cyber intrusion. A laboratory-scale distillation column is used to illustrate the model and the algorithm. Coupled with a well-defined statistical process model, this fault diagnosis approach provides cyber security enhanced fault diagnosis information to plant operators and can help identify that a cyber attack is underway before a major process failure is experienced.
From guideline modeling to guideline execution: defining guideline-based decision-support services.
Tu, S. W.; Musen, M. A.
2000-01-01
We describe our task-based approach to defining the guideline-based decision-support services that the EON system provides. We categorize uses of guidelines in patient-specific decision support into a set of generic tasks--making of decisions, specification of work to be performed, interpretation of data, setting of goals, and issuance of alerts and reminders--that can be solved using various techniques. Our model includes constructs required for representing the knowledge used by these techniques. These constructs form a toolkit from which developers can select modeling solutions for guideline tasks. Based on the tasks and the guideline model, we define a guideline-execution architecture and a model of interactions between a decision-support server and clients that invoke services provided by the server. These services use generic interfaces derived from guideline tasks and their associated modeling constructs. We describe two implementations of these decision-support services and discuss how this work can be generalized. We argue that a well-defined specification of guideline-based decision-support services will facilitate the sharing of tools that implement computable clinical guidelines. PMID:11080007
Topological magnetoelectric pump in three dimensions
NASA Astrophysics Data System (ADS)
Fukui, Takahiro; Fujiwara, Takanori
2017-11-01
We study the topological pump for a lattice fermion model mainly in three spatial dimensions. We first calculate the U(1) current density for the Dirac model defined in continuous space-time to review the known results as well as to introduce some technical details convenient for the calculations of the lattice model. We next investigate the U(1) current density for a lattice fermion model, a variant of the Wilson-Dirac model. The model we introduce is defined on a lattice in space but in continuous time, which is suited to the study of the topological pump. For such a model, we derive the conserved U(1) current density and calculate it directly for the (1+1)-dimensional system as well as the (3+1)-dimensional system in the limit of small lattice constant. We find that the current includes a nontrivial lattice effect characterized by the Chern number, and therefore the pumped particle number is quantized for a topological reason. Finally, we study the topological temporal pump in 3+1 dimensions by numerical calculations. We discuss the relationship between the second Chern number and the first Chern number, the bulk-edge correspondence, and the generalized Streda formula which enables us to compute the second Chern number using the spectral asymmetry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koch, G.S. Jr.; Howarth, R.J.; Schuenemeyer, J.H.
1981-02-01
We have developed a procedure that can help quadrangle evaluators to systematically summarize and use hydrogeochemical and stream sediment reconnaissance (HSSR) and occurrence data. Although we have not provided an independent estimate of uranium endowment, we have devised a methodology that will provide this independent estimate when additional calibration is done by enlarging the study area. Our statistical model for evaluation (system EVAL) ranks uranium endowment for each quadrangle. Because using this model requires experience in geology, statistics, and data analysis, we have also devised a simplified model, presented in the package SURE, a System for Uranium Resource Evaluation. We have developed and tested these models for the four quadrangles in southern Colorado that comprise the study area; to investigate their generality, the models should be applied to other quadrangles. Once they are calibrated with accepted uranium endowments for several well-known quadrangles, the models can be used to give independent estimates for less-known quadrangles. The point-oriented models structure the objective comparison of the quadrangles on the basis of: (1) anomalies (a) derived from stream sediments, (b) derived from waters (stream, well, pond, etc.); (2) geology (a) source rocks, as defined by the evaluator, (b) host rocks, as defined by the evaluator; and (3) aerial radiometric anomalies.
Intelligent Entity Behavior Within Synthetic Environments. Chapter 3
NASA Technical Reports Server (NTRS)
Kruk, R. V.; Howells, P. B.; Siksik, D. N.
2007-01-01
This paper describes some elements in the development of realistic performance and behavior in the synthetic entities (players) which support Modeling and Simulation (M&S) applications, particularly military training. Modern human-in-the-loop (virtual) training systems incorporate sophisticated synthetic environments, which provide: 1. The operational environment, including, for example, terrain databases; 2. Physical entity parameters which define performance in engineered systems, such as aircraft aerodynamics; 3. Platform/system characteristics such as acoustic, IR and radar signatures; 4. Behavioral entity parameters which define interactive performance, including knowledge/reasoning about terrain and tactics; and, 5. Doctrine, which combines knowledge and tactics into behavior rule sets. The resolution and fidelity of these model/database elements can vary substantially, but as synthetic environments are designed to be composable, attributes may easily be added (e.g., adding a new radar to an aircraft) or enhanced (e.g., amending or replacing missile seeker head/Electronic Counter Measures (ECM) models to improve the realism of their interaction). To a human in the loop with synthetic entities, their observed veridicality is assessed via engagement responses (e.g., effect of countermeasures upon a closing missile), as seen on systems displays, and visual (image) behavior. The realism of visual models in a simulation (level of detail as well as motion fidelity) remains a challenge in realistic articulation of elements such as vehicle antennae and turrets, or, with human figures, posture, joint articulation, and response to uneven ground. Currently the adequacy of visual representation is more dependent upon the quality and resolution of the physical models driving those entities than graphics processing power per se. Synthetic entities in M&S applications traditionally have represented engineered systems (e.g., aircraft) with human-in-the-loop performance characteristics (e.g., visual acuity) included in the system behavioral specification. As well, performance-affecting human parameters such as experience level, fatigue and stress are coming into wider use (via AI approaches) to incorporate more uncertainty as to response type as well as performance (e.g., where an opposing entity might go and what it might do, as well as how well it might perform).
DAWN (Design Assistant Workstation) for advanced physical-chemical life support systems
NASA Technical Reports Server (NTRS)
Rudokas, Mary R.; Cantwell, Elizabeth R.; Robinson, Peter I.; Shenk, Timothy W.
1989-01-01
This paper reports the results of a project supported by the National Aeronautics and Space Administration, Office of Aeronautics and Space Technology (NASA-OAST) under the Advanced Life Support Development Program. It is an initial attempt to integrate artificial intelligence techniques (via expert systems) with conventional quantitative modeling tools for advanced physical-chemical life support systems. The addition of artificial intelligence techniques will assist the designer in the definition and simulation of loosely/well-defined life support processes/problems as well as assist in the capture of design knowledge, both quantitative and qualitative. Expert system and conventional modeling tools are integrated to provide a design workstation that assists the engineer/scientist in creating, evaluating, documenting and optimizing physical-chemical life support systems for short-term and extended duration missions.
NASA Technical Reports Server (NTRS)
Hornberger, G. M.; Rastetter, E. B.
1982-01-01
A literature review of the use of sensitivity analyses in modelling nonlinear, ill-defined systems, such as ecological interactions is presented. Discussions of previous work, and a proposed scheme for generalized sensitivity analysis applicable to ill-defined systems are included. This scheme considers classes of mathematical models, problem-defining behavior, analysis procedures (especially the use of Monte-Carlo methods), sensitivity ranking of parameters, and extension to control system design.
Trust-based information system architecture for personal wellness.
Ruotsalainen, Pekka; Nykänen, Pirkko; Seppälä, Antto; Blobel, Bernd
2014-01-01
Modern eHealth, ubiquitous health and personal wellness systems take place in an unsecure and ubiquitous information space where no predefined trust occurs. This paper presents a novel information model and an architecture for trust-based privacy management of personal health and wellness information in a ubiquitous environment. The architecture enables a person to calculate a dynamic and context-aware trust value for each service provider, and to use it to design personal privacy policies for trustworthy use of health and wellness services. For trust calculation a novel set of measurable, context-aware and health-information-sensitive attributes is developed. The architecture enables a person to manage his or her privacy in a ubiquitous environment by formulating context-aware and service-provider-specific policies. Focus groups and information modelling were used for developing a wellness information model. A system analysis method based on sequential steps, which combines the analysis of privacy and trust concerns with the selection of trust and privacy services, was used for development of the information system architecture. Its services (e.g. trust calculation, decision support, policy management and policy binding services) and the developed attributes enable a person to define situation-aware policies that regulate the way his or her wellness and health information is processed.
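As a concrete illustration of the trust-calculation idea, the sketch below aggregates context-aware attribute scores into a single trust value per service provider; the attribute names, weighting scheme, and threshold are assumptions for illustration, not the paper's actual attribute set.

```python
# Illustrative sketch only: attribute names, weights, and the aggregation
# formula are assumptions, not the architecture's actual definitions.
from dataclasses import dataclass

@dataclass
class TrustAttribute:
    name: str       # e.g. "data_minimization", "regulatory_compliance"
    score: float    # normalized observation in [0, 1]
    weight: float   # context-dependent importance in [0, 1]

def trust_value(attributes: list[TrustAttribute]) -> float:
    """Weighted aggregate trust in [0, 1] for one service provider."""
    total_weight = sum(a.weight for a in attributes)
    if total_weight == 0:
        return 0.0
    return sum(a.score * a.weight for a in attributes) / total_weight

# A personal privacy policy could then be bound to a trust threshold:
provider_trust = trust_value([
    TrustAttribute("data_minimization", 0.8, 1.0),
    TrustAttribute("regulatory_compliance", 0.9, 0.7),
])
allow_sensitive_data = provider_trust >= 0.75  # threshold is illustrative
```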
Pool, D.R.; Blasch, Kyle W.; Callegary, James B.; Leake, Stanley A.; Graser, Leslie F.
2011-01-01
A numerical flow model (MODFLOW) of the groundwater flow system in the primary aquifers in northern Arizona was developed to simulate interactions between the aquifers, perennial streams, and springs for predevelopment and transient conditions during 1910 through 2005. Simulated aquifers include the Redwall-Muav, Coconino, and basin-fill aquifers. Perennial stream reaches and springs that derive base flow from the aquifers were simulated, including the Colorado River, Little Colorado River, Salt River, Verde River, and perennial reaches of tributary streams. Simulated major springs include Blue Spring, Del Rio Springs, Havasu Springs, Verde River headwater springs, several springs that discharge adjacent to major Verde River tributaries, and many springs that discharge to the Colorado River. Estimates of aquifer hydraulic properties and groundwater budgets were developed from published reports and groundwater-flow models. Spatial extents of aquifers and confining units were developed from geologic data, geophysical models, a groundwater-flow model for the Prescott Active Management Area, drill logs, geologic logs, and geophysical logs. Spatial and temporal distributions of natural recharge were developed by using a water-balance model that estimates recharge from direct infiltration. Additional natural recharge from ephemeral channel infiltration was simulated in alluvial basins. Recharge at wastewater treatment facilities and incidental recharge at agricultural fields and golf courses were also simulated. Estimates of predevelopment rates of groundwater discharge to streams, springs, and evapotranspiration by phreatophytes were derived from previous reports and from streamflow records at gages. Annual estimates of groundwater withdrawals for agriculture, municipal, industrial, and domestic uses were developed from several sources, including reported withdrawals for nonexempt wells, estimated crop requirements for agricultural wells, and estimated per capita water use for exempt wells. Accuracy of the simulated groundwater-flow system was evaluated by using observational control from water levels in wells, estimates of base flow from streamflow records, and estimates of spring discharge. Major results from the simulations include the importance of variations in recharge rates throughout the study area and of recharge along ephemeral and losing stream reaches in alluvial basins. Insights about the groundwater-flow systems in individual basins include the hydrologic influence of geologic structures in some areas and the finding that stream-aquifer interactions along the lower part of the Little Colorado River are an effective control on water-level distributions throughout the Little Colorado River Plateau basin. Better information on several aspects of the groundwater flow system is needed to reduce uncertainty of the simulated system. Many areas lack documentation of the response of the groundwater system to changes in withdrawals and recharge. Data needed to define groundwater flow between vertically adjacent water-bearing units are lacking in many areas. Distributions of recharge along losing stream reaches are poorly defined. Extents of aquifers and alluvial lithologies are poorly defined in parts of the Big Chino and Verde Valley sub-basins. Aquifer storage properties are poorly defined throughout most of the study area. Little data exist to define the hydrologic importance of geologic structures such as faults and fractures.
Discharge of regional groundwater flow to the Verde River is difficult to identify in the Verde Valley sub-basin because of unknown contributions from deep percolation of excess surface water irrigation.
NASA Astrophysics Data System (ADS)
Pasqualini, D.; Witkowski, M.
2005-12-01
The Critical Infrastructure Protection / Decision Support System (CIP/DSS) project, supported by the Science and Technology Office, has been developing a risk-informed Decision Support System that provides insights for making critical infrastructure protection decisions. The system considers seventeen different Department of Homeland Security defined Critical Infrastructures (potable water system, telecommunications, public health, economics, etc.) and their primary interdependencies. These infrastructures have been modeled in one model called the CIP/DSS Metropolitan Model. The modeling approach used is system dynamics: it combines control theory and nonlinear dynamics, with a model defined by a set of coupled differential equations that seek to explain how the structure of a given system determines its behavior. In this poster we present a system dynamics model for one of the seventeen critical infrastructures, a generic metropolitan potable water system (MPWS). The goals are threefold: 1) to gain a better understanding of the MPWS infrastructure; 2) to identify improvements that would help protect the MPWS; and 3) to understand the consequences, interdependencies, and impacts when perturbations occur to the system. The model represents raw water sources, the metropolitan water treatment process, storage of treated water, damage and repair to the MPWS, distribution of water, and end-user demand, but does not explicitly represent the detailed network topology of an actual MPWS. The MPWS model is dependent upon inputs from the metropolitan population, energy, telecommunication, public health, and transportation models as well as the national water and transportation models. We present modeling results and sensitivity analysis indicating critical choke points and negative and positive feedback loops in the system. A general scenario is also analyzed in which the potable water system responds to a generic disruption.
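The flavor of such a stock-and-flow formulation can be shown in a few lines; the sketch below is a toy single-stock model with an assumed disruption window and a demand-curtailment feedback, far simpler than the actual MPWS model.

```python
# Toy single-stock system dynamics sketch; all rates, the disruption
# window, and the feedback form are assumptions for illustration only.
def simulate(days=60.0, dt=0.1):
    storage = 100.0          # treated-water stock (arbitrary units)
    treatment_rate = 10.0    # nominal inflow from treatment plants
    base_demand = 10.0       # nominal end-user demand
    t, history = 0.0, []
    while t < days:
        outage = 0.4 if 20.0 <= t < 30.0 else 0.0   # generic disruption
        inflow = treatment_rate * (1.0 - outage)
        # Negative feedback loop: demand is curtailed as storage runs low.
        outflow = base_demand * min(1.0, storage / 50.0)
        storage += (inflow - outflow) * dt           # Euler integration
        history.append((t, storage))
        t += dt
    return history

print(simulate()[-1])   # final (time, storage) after recovery
```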
The University Münster Model Surgery System for Orthognathic Surgery. Part II -- KD-MMS.
Ehmer, Ulrike; Joos, Ulrich; Ziebura, Thomas; Flieger, Stefanie; Wiechmann, Dirk
2013-01-04
Model surgery is an integral part of the planning procedure in orthognathic surgery. Most concepts comprise cutting the dental cast off its socket. The standardized spacer plates of the KD-MMS provide for a non-destructive, reversible and reproducible means of maxillary and/or mandibular plaster cast separation. In the course of development of the system various articulator types were evaluated with regard to their capability to provide a means of realizing the concepts of the KD-MMS. Special attention was dedicated to the ability to perform three-dimensional displacements without cutting of plaster casts. Various utilities were developed to facilitate maxillary displacement in accordance with the planning. Objectives of this development included the ability to implement the values established in the course of two-dimensional cephalometric planning. The KD-MMS system comprises a set of hardware components as well as a defined procedure. Essential hardware components are red spacer and blue mounting plates. The blue mounting plates replace the standard yellow SAM mounting elements. The red spacers provide for a defined leeway of 8 mm for three-dimensional movements. The non-destructive approach of the KD-MMS makes it possible to conduct different model surgeries with the same plaster casts as well as to restore the initial, pre-surgical situation at any time. Thereby, surgical protocol generation and gnathologic splint construction are facilitated. The KD-MMS hardware components in conjunction with the defined procedures are capable of increasing efficiency and accuracy of model surgery and splint construction. In cases where different surgical approaches need to be evaluated in the course of model surgery, a significant reduction of chair time may be achieved.
Geothermal reservoir simulation of hot sedimentary aquifer system using FEFLOW®
NASA Astrophysics Data System (ADS)
Nur Hidayat, Hardi; Gala Permana, Maximillian
2017-12-01
The study presents the simulation of a hot sedimentary aquifer for geothermal utilization. A hot sedimentary aquifer (HSA) is a conduction-dominated hydrothermal play type utilizing a deep aquifer, which is heated by near-normal heat flow. One example of an HSA is the Bavarian Molasse Basin in southern Germany. This system typically uses doublet wells: an injection and a production well. The simulation was run for 3650 days of simulation time. The technical feasibility and performance are analysed with regard to the energy extracted by this concept. Several parameters are compared to determine the model performance. Parameters such as reservoir characteristics, temperature information and well information are defined. Several assumptions are also made to simplify the simulation process. The main results of the simulation are the heat period budget, or total extracted heat energy, and the heat rate budget, or heat production rate. A qualitative sensitivity analysis is conducted by varying five parameters, to each of which lower- and higher-value scenarios are assigned.
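The reported heat budgets follow the usual doublet bookkeeping; the relation below is the standard form (symbols are generic, and FEFLOW's internal budget terms are not reproduced here).

```latex
% Heat production rate and total extracted heat for a doublet system
% (standard relation; symbols generic):
\dot{Q} = \dot{m}\, c_p \left( T_{\mathrm{prod}} - T_{\mathrm{inj}} \right),
\qquad
E_{\mathrm{total}} = \int_0^{t_{\mathrm{sim}}} \dot{Q}\, dt ,
```

with the production mass flow rate, the fluid heat capacity, and the 3650-day simulation time; the heat rate budget corresponds to the rate term and the heat period budget to the time integral.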
Autonomous navigation of structured city roads
NASA Astrophysics Data System (ADS)
Aubert, Didier; Kluge, Karl C.; Thorpe, Chuck E.
1991-03-01
Autonomous road following is a domain which spans a range of complexity from poorly defined, unmarked dirt roads to well-defined, well-marked, highly structured highways. The YARF system (for Yet Another Road Follower) is designed to operate in the middle of this range of complexity, driving on urban streets. Our research program has focused on the use of feature- and situation-specific segmentation techniques driven by an explicit model of the appearance and geometry of the road features in the environment. We report results in robust detection of white and yellow painted stripes, fitting a road model to detected feature locations to determine vehicle position and local road geometry, and automatic location of road features in an initial image. We also describe our planned extensions to include intersection navigation.
A graph grammar approach to artificial life.
Kniemeyer, Ole; Buck-Sorlin, Gerhard H; Kurth, Winfried
2004-01-01
We present the high-level language of relational growth grammars (RGGs) as a formalism designed for the specification of ALife models. RGGs can be seen as an extension of the well-known parametric Lindenmayer systems and contain rule-based, procedural, and object-oriented features. They are defined as rewriting systems operating on graphs whose edges come from a set of user-defined relations and whose nodes can be associated with objects. We demonstrate their ability to represent genes, regulatory networks of metabolites, and morphologically structured organisms, as well as developmental aspects of these entities, in a common formal framework. Mutation, crossing over, selection, and the dynamics of a network of gene regulation can all be represented with simple graph rewriting rules. This is demonstrated in some detail on the classical example of Dawkins' biomorphs and the ABC model of flower morphogenesis; other applications are briefly sketched. An interactive program was implemented, enabling the execution of the formalism and the visualization of the results.
Bartholomay, Roy C.; Twining, Brian V.
2010-01-01
From 2005 to 2008, the U.S. Geological Survey's Idaho National Laboratory (INL) Project Office, in cooperation with the U.S. Department of Energy, collected water-quality samples from multiple water-bearing zones in the eastern Snake River Plain aquifer. Water samples were collected from six monitoring wells completed in about 350-700 feet of the upper part of the aquifer, and the samples were analyzed for major ions, selected trace elements, nutrients, selected radiochemical constituents, and selected stable isotopes. Each well was equipped with a multilevel monitoring system containing four to seven sampling ports that were each isolated by permanent packer systems. The sampling ports were installed in aquifer zones that were highly transmissive and that represented the water chemistry of the top four to five model layers of a steady-state and transient groundwater-flow model. The model's water chemistry and particle-tracking simulations are being used to better define movement of wastewater constituents in the aquifer. The results of the water chemistry analyses indicated that, in each of four separate wells, one zone of water differed markedly from the other zones in the well. In four wells, one zone to as many as five zones contained radiochemical constituents that originated from wastewater disposal at selected laboratory facilities. The multilevel sampling systems are defining the vertical distribution of wastewater constituents in the eastern Snake River Plain aquifer, and the concentrations of wastewater constituents in deeper zones in wells Middle 2051, USGS 132, and USGS 103 support the concept of groundwater flow deepening in the southwestern part of the INL.
Hucka, Michael; Bergmann, Frank T.; Dräger, Andreas; Hoops, Stefan; Keating, Sarah M.; Le Novère, Nicolas; Myers, Chris J.; Olivier, Brett G.; Sahle, Sven; Schaff, James C.; Smith, Lucian P.; Waltemath, Dagmar; Wilkinson, Darren J.
2017-01-01
Summary Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 5 of SBML Level 2. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/. PMID:26528569
Hucka, Michael; Bergmann, Frank T; Dräger, Andreas; Hoops, Stefan; Keating, Sarah M; Le Novère, Nicolas; Myers, Chris J; Olivier, Brett G; Sahle, Sven; Schaff, James C; Smith, Lucian P; Waltemath, Dagmar; Wilkinson, Darren J
2015-09-04
Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 5 of SBML Level 2. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org.
Hucka, Michael; Bergmann, Frank T; Dräger, Andreas; Hoops, Stefan; Keating, Sarah M; Le Novère, Nicolas; Myers, Chris J; Olivier, Brett G; Sahle, Sven; Schaff, James C; Smith, Lucian P; Waltemath, Dagmar; Wilkinson, Darren J
2015-06-01
Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 5 of SBML Level 2. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/.
World Energy Projection System Plus Model Documentation: Commercial Module
2016-01-01
The Commercial Model of the World Energy Projection System Plus (WEPS+) is an energy demand modeling system of the world commercial end-use sector at a regional level. This report describes the version of the Commercial Model that was used to produce the commercial sector projections published in the International Energy Outlook 2016 (IEO2016). The Commercial Model is one of 13 components of the WEPS+ system. WEPS+ is a modular system, consisting of a number of separate energy models that communicate and work with each other through an integrated system model. The model components are each developed independently, but are designed with well-defined protocols for system communication and interactivity. The WEPS+ modeling system uses a shared database (the "restart" file) that allows all the models to communicate with each other when they are run in sequence over a number of iterations. The overall WEPS+ system uses an iterative solution technique that forces convergence of consumption and supply pressures to solve for an equilibrium price.
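The restart-file iteration can be pictured as a fixed-point loop over the component models; the sketch below uses toy demand and supply modules and an invented tolerance, purely to illustrate the convergence mechanism.

```python
# Sketch of a shared "restart"-style iteration; module behavior and the
# convergence tolerance are invented for illustration.
def run_cycle(modules, restart, max_iters=50, tol=1e-6):
    """Run modules in sequence until shared variables stop changing."""
    for _ in range(max_iters):
        previous = dict(restart)
        for module in modules:          # each module reads/writes restart
            module(restart)
        drift = max(abs(restart[k] - previous[k]) for k in restart)
        if drift < tol:                 # consumption/supply equilibrium
            return restart
    raise RuntimeError("no convergence within max_iters")

def commercial(r):  # toy demand response to price
    r["demand"] = 100.0 / r["price"]

def supply(r):      # toy price response to demand
    r["price"] = 0.5 + r["demand"] / 200.0

print(run_cycle([commercial, supply], {"price": 1.0, "demand": 80.0}))
```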
Managing Variation in Services in a Software Product Line Context
2010-05-01
Feature-Oriented Domain Analysis (FODA) Feasibility Study (CMU/SEI-90-TR-021, ADA235785). Software Engineering Institute, Carnegie Mellon University, 1990. ...the systems in the product line, and a plan for building the systems. Product line scope and product line analysis define the boundaries and ...systems, as well as expected ways in which they may vary. Product line analysis applies established modeling techniques to engineer the common and
First-Order Open-Universe POMDPs: Formulation and Algorithms
2013-12-25
A full DTBLOG model for the airport domain example is presented in the appendix. Let M(S, A) be a DTBLOG model defined using the sets S and A of... respectively. If M satisfies BLOG's requirements for well-defined probabilistic models then it constitutes a well-defined OUPOMDP model with respect to S and A. Actions on... a first-order, open-universe language to describe POMDPs and identify non-trivial representational issues in describing an agent's
NASA Astrophysics Data System (ADS)
Glavev, Victor
2016-12-01
The types of software applications used by public administrations can be divided into three main groups: document management systems, record management systems and business process systems. Each one of them generates outputs that can be used as input data to the others. This is the main reason that exchange of data between these three groups is required, following well-defined models. There are also many other reasons that will be discussed in the paper. Interoperability is a key aspect when those models are implemented, especially when there are different manufacturers of systems in the area of software applications used by public authorities. The report includes examples of implementation of models for exchange of data between software systems deployed in one of the biggest administrations in Bulgaria.
NASA Technical Reports Server (NTRS)
Adams, Douglas S.; Wu, Shih-Chin
2006-01-01
The MARSIS antenna booms are constructed using lenticular hinges between straight boom segments in a novel design which allows the booms to be extremely lightweight while retaining a high stiffness and well defined structural properties once they are deployed. Lenticular hinges are elegant in form but are complicated to model as they deploy dynamically and require highly specialized nonlinear techniques founded on carefully measured mechanical properties. Results from component level testing were incorporated into a highly specialized ADAMS model which employed an automated damping algorithm to account for the discontinuous boom lengths formed during the deployment. Additional models with more limited capabilities were also developed in both DADS and ABAQUS to verify the ADAMS model computations and to help better define the numerical behavior of the models at the component and system levels. A careful comparison is made between the ADAMS and DADS models in a series of progressive steps in order to verify their numerical results. Different trade studies considered in the model development are outlined to demonstrate a suitable level of model fidelity. Some model sensitivities to various parameters are explored using subscale and full system models. Finally, some full system DADS models are exercised to illustrate the limitations of traditional modeling techniques for variable geometry systems which were overcome in the ADAMS model.
Simulation supported scenario analysis for water resources planning: a case study in northern Italy
NASA Astrophysics Data System (ADS)
Facchi, A.; Gandolfi, C.; Ortuani, B.; Maggi, D.
2003-04-01
The work presents the results of a comprehensive modelling study of surface and groundwater systems, including the interaction between irrigation and groundwater resources, for the Muzza-Bassa Lodigiana irrigation district, located in the southern part of the densely settled Lombardia plain (northern Italy). The area, of approximately 700 km2, has been selected because: a) it is representative of agricultural and irrigation practices in a wide portion of the plain of Lombardia; b) it has well-defined hydrogeological borders, represented by the Adda, Po, and Lambro rivers (respectively east, south and west) and by the Muzza canal (north). The objective of the study is to assess the impact of land use and irrigation water availability on the distribution of crop water consumption in space and time, as well as on the groundwater resources in this wide portion of the Lombardia plain. To achieve this goal, a number of realistic management scenarios, currently under discussion with the regional water authority, have been taken into account. A standard 'base case' has been defined to allow comparative analysis of the results of different scenarios. To carry out the research, an integrated, distributed, catchment-scale simulation package, already developed and applied to the study area, has been used. The simulation system is based on the integration of two hydrological models - a conceptual vadose zone model and the groundwater model MODFLOW. An interface performs the explicit coupling in space and time between the two models. A GIS manages all the information relevant to the study area, as well as all the input, the spatially distributed parameters and the output of the system. The simulation package has been verified for the years 1999-2000 using land use derived from remote-sensed images, reported water availability for irrigation, observed water stage in rivers as well as groundwater level in the alluvial aquifer system.
Fractional discrete-time consensus models for single- and double-summator dynamics
NASA Astrophysics Data System (ADS)
Wyrwas, Małgorzata; Mozyrska, Dorota; Girejko, Ewa
2018-04-01
The leader-following consensus problem of fractional-order multi-agent discrete-time systems is considered. In these systems, interactions between opinions are defined as in the Krause and Cucker-Smale models, but memory is included by taking the fractional-order discrete-time operator on the left-hand side of the nonlinear systems. In this paper, we investigate fractional-order models of opinions for single- and double-summator discrete-time dynamics by analytical methods as well as by computer simulations. The necessary and sufficient conditions for the leader-following consensus are formulated by proposing a consensus control law for tracking the virtual leader.
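For readers unfamiliar with this operator class, a Grünwald-Letnikov-type fractional difference is typically what appears on the left-hand side; the form below is illustrative, and the paper's exact definition may differ in convention.

```latex
% Gr\"unwald--Letnikov-type fractional-order difference (illustrative):
(\Delta^{\alpha} x)(n) = \sum_{j=0}^{n} (-1)^{j} \binom{\alpha}{j}\, x(n-j),
\qquad 0 < \alpha \le 1 ,
```

so each agent's update depends on its whole history, with weights that decay as the lag grows; this is how memory enters the consensus dynamics.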
A Family of Well-Clear Boundary Models for the Integration of UAS in the NAS
NASA Technical Reports Server (NTRS)
Munoz, Cesar A.; Narkawicz, Anthony; Chamberlain, James; Consiglio, Maria; Upchurch, Jason
2014-01-01
The FAA-sponsored Sense and Avoid Workshop for Unmanned Aircraft Systems (UAS) defines the concept of sense and avoid for remote pilots as "the capability of a UAS to remain well clear from and avoid collisions with other airborne traffic." Hence, a rigorous definition of well clear is fundamental to any separation assurance concept for the integration of UAS into civil airspace. This paper presents a family of well-clear boundary models based on the TCAS II Resolution Advisory logic. For these models, algorithms that predict well-clear violations along aircraft current trajectories are provided. These algorithms are analogous to conflict detection algorithms but instead of predicting loss of separation, they predict whether well-clear violations will occur during a given lookahead time interval. Analytical techniques are used to study the properties and relationships satisfied by the models.
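To make the notion concrete, a well-clear violation predicate in this family typically combines a horizontal distance/time check with a vertical separation check; the sketch below uses TCAS-like threshold names with assumed values and a simplified time variable, not the paper's exact definitions.

```python
# Simplified well-clear violation predicate in the spirit of TCAS-RA-style
# volumes; threshold values and the simple time-to-approach variable are
# assumptions, not the paper's definitions.
import math

def well_clear_violation(rx, ry, vx, vy, dz,
                         DMOD=1.2, TAU=35.0, ZTHR=450.0):
    """rx, ry: relative horizontal position (nmi); vx, vy: relative
    horizontal velocity (nmi/s, consistent with TAU in s); dz: relative
    altitude (ft)."""
    r = math.hypot(rx, ry)
    rdot = (rx * vx + ry * vy) / r if r > 0.0 else 0.0
    # Converging traffic: simple range-over-closure time estimate.
    tau = -r / rdot if rdot < 0.0 else math.inf
    horizontal = r <= DMOD or tau <= TAU
    vertical = abs(dz) <= ZTHR
    return horizontal and vertical

# Example: co-altitude traffic 5 nmi ahead, closing at 0.2 nmi/s.
print(well_clear_violation(5.0, 0.0, -0.2, 0.0, 0.0))  # True (tau = 25 s)
```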
Generic Sensor Failure Modeling for Cooperative Systems.
Jäger, Georg; Zug, Sebastian; Casimiro, António
2018-03-20
The advent of cooperative systems entails a dynamic composition of their components. As this contrasts with current, statically composed systems, new approaches for maintaining their safety are required. In that endeavor, we propose an integration step that evaluates the failure model of shared information in relation to an application's fault tolerance and thereby promises maintainability of such a system's safety. However, it also poses new requirements on failure models, which are not fulfilled by state-of-the-art approaches. Consequently, this work presents a mathematically defined generic failure model as well as a processing chain for automatically extracting such failure models from empirical data. By examining data from a Sharp GP2D12 distance sensor, we show that the generic failure model not only fulfills the predefined requirements, but also models failure characteristics appropriately when compared to traditional techniques.
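A minimal version of such a processing chain can be sketched as follows, assuming reference (ground-truth) distances are available; the binned mean/deviation summary is an illustrative choice, not necessarily the generic model's actual mathematical form.

```python
# Illustrative processing-chain sketch: summarize a distance sensor's
# error behavior per range bin from empirical data.
import numpy as np

def extract_failure_model(true_dist, measured, n_bins=10):
    """Return per-bin (lo, hi, mean error, error std) over the range."""
    true_dist = np.asarray(true_dist, dtype=float)
    errors = np.asarray(measured, dtype=float) - true_dist
    edges = np.linspace(true_dist.min(), true_dist.max(), n_bins + 1)
    model = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (true_dist >= lo) & (true_dist < hi)
        if mask.any():
            model.append((lo, hi, errors[mask].mean(), errors[mask].std()))
    return model  # an application compares these bounds to its tolerance
```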
Surface-directed capillary system; theory, experiments and applications.
Bouaidat, Salim; Hansen, Ole; Bruus, Henrik; Berendsen, Christian; Bau-Madsen, Niels Kristian; Thomsen, Peter; Wolff, Anders; Jonsmann, Jacques
2005-08-01
We present a capillary flow system for liquid transport in microsystems. Our simple microfluidic system consists of two planar parallel surfaces, separated by spacers. One of the surfaces is entirely hydrophobic, the other mainly hydrophobic, but with hydrophilic pathways defined on it by photolithographic means. By controlling the wetting properties of the surfaces in this manner, the liquid can be confined to certain areas defined by the hydrophilic pathways. This technique eliminates the need for alignment of the two surfaces. Patterned plasma-polymerized hexafluoropropene constitutes the hydrophobic areas, whereas the untreated glass surface constitutes the hydrophilic pathways. We developed a theoretical model of the capillary flow and obtained analytical solutions which are in good agreement with the experimental results. The capillarity-driven microflow system was also used to pattern and immobilize biological material on planar substrates: well-defined 200 µm wide strips of human cells (HeLa) and fluorescently labelled proteins (fluorescein isothiocyanate-labelled bovine serum albumin, i.e., FITC-BSA) were fabricated using the capillary flow system presented here.
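For context, analytical solutions of this type generalize the classical Lucas-Washburn result; for a parallel-plate channel of gap h with uniform wettability it reads as below (the paper's mixed-wettability model will differ, e.g., in the prefactor).

```latex
% Classical Lucas--Washburn filling length between parallel plates
% (illustrative reference form; uniform contact angle assumed):
\ell(t) = \sqrt{\frac{\gamma\, h \cos\theta}{3\mu}\, t},
```

with surface tension, contact angle, and viscosity as parameters; the meniscus position grows as the square root of time because the viscous drag of the already-filled length increases as the liquid advances.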
Topological invariant and cotranslational symmetry in strongly interacting multi-magnon systems
NASA Astrophysics Data System (ADS)
Qin, Xizhou; Mei, Feng; Ke, Yongguan; Zhang, Li; Lee, Chaohong
2018-01-01
It is still an outstanding challenge to characterize and understand the topological features of strongly interacting states such as bound states in interacting quantum systems. Here, by introducing a cotranslational symmetry in an interacting multi-particle quantum system, we systematically develop a method to define a Chern invariant, which is a generalization of the well-known Thouless-Kohmoto-Nightingale-den Nijs invariant, for identifying strongly interacting topological states. As an example, we study the topological multi-magnon states in a generalized Heisenberg XXZ model, which can be realized by the currently available experimental techniques of cold atoms (Aidelsburger et al 2013 Phys. Rev. Lett. 111, 185301; Miyake et al 2013 Phys. Rev. Lett. 111, 185302). Through calculating the two-magnon excitation spectrum and the defined Chern number, we explore the emergence of topological edge bound states and give their topological phase diagram. We also analytically derive an effective single-particle Hofstadter superlattice model for a better understanding of the topological bound states. Our results not only provide a new approach to defining a topological invariant for interacting multi-particle systems, but also give insights into the characterization and understanding of strongly interacting topological states.
The Systems Biology Markup Language (SBML): Language Specification for Level 3 Version 1 Core
Hucka, Michael; Bergmann, Frank T.; Hoops, Stefan; Keating, Sarah M.; Sahle, Sven; Schaff, James C.; Smith, Lucian P.; Wilkinson, Darren J.
2017-01-01
Summary Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 1 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/. PMID:26528564
The Systems Biology Markup Language (SBML): Language Specification for Level 3 Version 1 Core.
Hucka, Michael; Bergmann, Frank T; Hoops, Stefan; Keating, Sarah M; Sahle, Sven; Schaff, James C; Smith, Lucian P; Wilkinson, Darren J
2015-09-04
Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 1 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/.
The Systems Biology Markup Language (SBML): Language Specification for Level 3 Version 1 Core.
Hucka, Michael; Bergmann, Frank T; Hoops, Stefan; Keating, Sarah M; Sahle, Sven; Schaff, James C; Smith, Lucian P; Wilkinson, Darren J
2015-06-01
Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 1 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/.
Ductile film delamination from compliant substrates using hard overlayers
Cordill, M.J.; Marx, V.M.; Kirchlechner, C.
2014-01-01
Flexible electronic devices call for copper and gold metal films to adhere well to polymer substrates. Measuring the interfacial adhesion of these material systems is often challenging, requiring the formulation of different techniques and models. Presented here is a strategy to induce well defined areas of delamination to measure the adhesion of copper films on polyimide substrates. The technique utilizes a stressed overlayer and tensile straining to cause buckle formation. The described method allows one to examine the effects of thin adhesion layers used to improve the adhesion of flexible systems. PMID:25641995
Ductile film delamination from compliant substrates using hard overlayers.
Cordill, M J; Marx, V M; Kirchlechner, C
2014-11-28
Flexible electronic devices call for copper and gold metal films to adhere well to polymer substrates. Measuring the interfacial adhesion of these material systems is often challenging, requiring the formulation of different techniques and models. Presented here is a strategy to induce well defined areas of delamination to measure the adhesion of copper films on polyimide substrates. The technique utilizes a stressed overlayer and tensile straining to cause buckle formation. The described method allows one to examine the effects of thin adhesion layers used to improve the adhesion of flexible systems.
Ong, M L; Ng, E Y K
2005-12-01
In the lower brain, body temperature is continually being regulated almost flawlessly despite huge fluctuations in ambient and physiological conditions that constantly threaten the well-being of the body. The underlying control problem defining thermal homeostasis is one of great complexity: many systems and sub-systems are involved in temperature regulation, and physiological processes are intrinsically complex and intertwined. Thus the defining control system has to take into account the complications of nonlinearities, system uncertainties, delayed feedback loops as well as internal and external disturbances. In this paper, we propose a self-tuning adaptive thermal controller based upon Hebbian feedback covariance learning, where the system is regulated continually to best suit its environment. This hypothesis is supported in part by postulations of the presence of adaptive optimization behavior in biological systems of certain organisms which face limited resources vital for survival. We demonstrate the use of Hebbian feedback covariance learning as a possible self-adaptive controller in body temperature regulation. The model postulates an important role of Hebbian covariance adaptation as a means of reinforcement learning in the thermal controller. The passive system is based on a simplified 2-node core and shell representation of the body, where global responses are captured. Model predictions are consistent with observed thermoregulatory responses to conditions of exercise and rest, and heat and cold stress. An important implication of the model is that optimal physiological behaviors arising from self-tuning adaptive regulation in the thermal controller may be responsible for the departure from homeostasis in abnormal states, e.g., fever. This was previously unexplained using the conventional "set-point" control theory.
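The core learning rule can be stated compactly; the form below is the generic Hebbian covariance update (symbols illustrative, not the paper's exact parameterization).

```latex
% Generic Hebbian covariance learning rule (illustrative form):
\Delta w_i = \eta \left( x_i - \langle x_i \rangle \right)
                  \left( y - \langle y \rangle \right),
```

where the inputs are afferent thermal signals, the output is the effector drive, and a learning rate scales the update: a weight is reinforced when its input co-varies with the output, which is the feedback-covariance mechanism the controller exploits.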
Annotti, Lee A; Teglasi, Hedwig
2017-01-01
Real-world contexts differ in the clarity of expectations for desired responses, as do assessment procedures, ranging along a continuum from maximal conditions that provide well-defined expectations to typical conditions that provide ill-defined expectations. Executive functions guide effective social interactions, but relations between them have not been studied with measures that are matched in the clarity of response expectations. In predicting teacher-rated social competence (SC) from kindergarteners' performance on tasks of executive functions (EFs), we found better model-data fit indexes when both measures were similar in the clarity of response expectations for the child. The maximal EF measure, the Developmental Neuropsychological Assessment, presents well-defined response expectations, and the typical EF measure, 5 scales from the Thematic Apperception Test (TAT), presents ill-defined response expectations (i.e., Abstraction, Perceptual Integration, Cognitive-Experiential Integration, and Associative Thinking). To assess SC under maximal and typical conditions, we used 2 teacher-rated questionnaires, with items, respectively, that emphasize well-defined and ill-defined expectations: the Behavior Rating Inventory: Behavioral Regulation Index and the Social Skills Improvement System: Social Competence Scale. Findings suggest that matching clarity of expectations improves generalization across measures and highlight the usefulness of the TAT to measure EF.
Sharif Razavian, Reza; Mehrabi, Naser; McPhee, John
2015-01-01
This paper presents a new model-based method to define muscle synergies. Unlike the conventional factorization approach, which extracts synergies from electromyographic data, the proposed method employs a biomechanical model and formally defines the synergies as the solution of an optimal control problem. As a result, the number of required synergies is directly related to the dimensions of the operational space. The estimated synergies are posture-dependent, which correlate well with the results of standard factorization methods. Two examples are used to showcase this method: a two-dimensional forearm model, and a three-dimensional driver arm model. It has been shown here that the synergies need to be task-specific (i.e., they are defined for the specific operational spaces: the elbow angle and the steering wheel angle in the two systems). This functional definition of synergies results in a low-dimensional control space, in which every force in the operational space is accurately created by a unique combination of synergies. As such, there is no need for extra criteria (e.g., minimizing effort) in the process of motion control. This approach is motivated by the need for fast and bio-plausible feedback control of musculoskeletal systems, and can have important implications in engineering, motor control, and biomechanics. PMID:26500530
Postural effects on intracranial pressure: modeling and clinical evaluation.
Qvarlander, Sara; Sundström, Nina; Malm, Jan; Eklund, Anders
2013-11-01
The physiological effect of posture on intracranial pressure (ICP) is not well described. This study defined and evaluated three mathematical models describing the postural effects on ICP, designed to predict ICP at different head-up tilt angles from the supine ICP value. Model I was based on a hydrostatic indifference point for the cerebrospinal fluid (CSF) system, i.e., the existence of a point in the system where pressure is independent of body position. Models II and III were based on Davson's equation for CSF absorption, which relates ICP to venous pressure, and postulated that gravitational effects within the venous system are transferred to the CSF system. Model II assumed a fully communicating venous system, and model III assumed that collapse of the jugular veins at higher tilt angles creates two separate hydrostatic compartments. Evaluation of the models was based on ICP measurements at seven tilt angles (0-71°) in 27 normal pressure hydrocephalus patients. ICP decreased with tilt angle (ANOVA: P < 0.01). The reduction was well predicted by model III (ANOVA lack-of-fit: P = 0.65), which showed excellent fit against measured ICP. Neither model I nor II adequately described the reduction in ICP (ANOVA lack-of-fit: P < 0.01). Postural changes in ICP could not be predicted based on the currently accepted theory of a hydrostatic indifference point for the CSF system, but a new model combining Davson's equation for CSF absorption and hydrostatic gradients in a collapsible venous system performed well and can be useful in future research on gravity and CSF physiology.
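Davson's equation, on which models II and III build, has a standard form; the tilt-dependent venous term below is an illustrative sketch of how a hydrostatic gradient enters, not the paper's exact parameterization.

```latex
% Davson's equation for CSF absorption, plus an illustrative hydrostatic
% venous term for head-up tilt at angle \alpha:
\mathrm{ICP} = R_{\mathrm{out}}\, I_{\mathrm{f}} + P_{\mathrm{v}},
\qquad
P_{\mathrm{v}}(\alpha) = P_{\mathrm{v}}(0) - \rho g L \sin\alpha ,
```

with the CSF outflow resistance, the CSF formation rate, the downstream venous pressure, and an effective hydrostatic distance as parameters; model III additionally lets the jugular veins collapse above some tilt angle, splitting the venous path into two hydrostatic compartments.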
Multiple tipping points and optimal repairing in interacting networks
Majdandzic, Antonio; Braunstein, Lidia A.; Curme, Chester; Vodenska, Irena; Levy-Carciente, Sary; Eugene Stanley, H.; Havlin, Shlomo
2016-01-01
Systems composed of many interacting dynamical networks—such as the human body with its biological networks or the global economic network consisting of regional clusters—often exhibit complicated collective dynamics. Three fundamental processes that are typically present are failure, damage spread and recovery. Here we develop a model for such systems and find a very rich phase diagram that becomes increasingly more complex as the number of interacting networks increases. In the simplest example of two interacting networks we find two critical points, four triple points, ten allowed transitions and two 'forbidden' transitions, as well as complex hysteresis loops. Remarkably, we find that triple points play the dominant role in constructing the optimal repairing strategy in damaged interacting systems. To test our model, we analyse an example of real interacting financial networks and find evidence of rapid dynamical transitions between well-defined states, in agreement with the predictions of our model. PMID:26926803
Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew
2016-07-01
Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
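Schematically, the optimization step can be written as a least-squares fit of the model-derived densities to the data-derived ones; in the sketch below the PDE solver is passed in as a callable, and all names are placeholders rather than the authors' code.

```python
# Sketch of the cost-function idea: tune Markov rates so the model's
# probability density functions match data-derived ones. The PDE solve
# is stubbed out as a user-supplied callable.
import numpy as np
from scipy.optimize import minimize

def cost(rates, rho_data, solve_pde_densities):
    """L2 mismatch between model and data state densities."""
    rho_model = solve_pde_densities(rates)   # deterministic PDE solution
    return float(np.sum((rho_model - rho_data) ** 2))

def fit_rates(rho_data, solve_pde_densities, initial_rates):
    result = minimize(cost, initial_rates,
                      args=(rho_data, solve_pde_densities),
                      method="Nelder-Mead")
    return result.x  # flat cost directions signal non-identifiable rates
```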
Variations in AmLi source spectra and their estimation utilizing the 5 Ring Multiplicity Counter
Weinmann-Smith, Robert; Beddingfield, David H.; Enqvist, Andreas; ...
2017-02-28
Active-mode assay systems are widely used for the safeguards of uranium items to verify compliance with the Non-Proliferation Treaty. Systems such as the Active-Well Coincidence Counter (AWCC) and the Uranium Neutron Coincidence Collar (UNCL) use americium-lithium (AmLi) neutron sources to induce fissions which are measured to determine the sample mass. These systems have historically relied on calibrations derived from well-defined standards. Recently, restricted access to standards or more difficult measurements have resulted in a reliance on modeling and simulation for the calibration of systems, which introduces potential simulation biases. Furthermore, the AmLi source energy spectra commonly used in the safeguards community do not accurately represent measurement results, and the spectrum uncertainty can represent a large contribution to the total modeling uncertainty in active-mode systems.
NASA Astrophysics Data System (ADS)
Del Pino, S.; Labourasse, E.; Morel, G.
2018-06-01
We present a multidimensional asymptotic preserving scheme for the approximation of a mixture of compressible flows. Fluids are modelled by two Euler systems of equations coupled with a friction term. The asymptotic preserving property is mandatory for this kind of model, to derive a scheme that behaves well in all regimes (i.e. whatever the friction parameter value is). The method we propose is defined in ALE coordinates, using a Lagrange plus remap approach. This imposes a multidimensional definition and analysis of the scheme.
Quantification of correlations in quantum many-particle systems.
Byczuk, Krzysztof; Kuneš, Jan; Hofstetter, Walter; Vollhardt, Dieter
2012-02-24
We introduce a well-defined and unbiased measure of the strength of correlations in quantum many-particle systems which is based on the relative von Neumann entropy computed from the density operator of correlated and uncorrelated states. The usefulness of this general concept is demonstrated by quantifying correlations of interacting electrons in the Hubbard model and in a series of transition-metal oxides using dynamical mean-field theory.
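The measure itself is compact; writing the uncorrelated reference state's density operator explicitly, the standard relative-entropy definition is:

```latex
% Relative von Neumann entropy as a correlation-strength measure,
% with \hat\rho_0 the density operator of the uncorrelated state:
S(\hat\rho \,\|\, \hat\rho_0)
  = \mathrm{Tr}\left[ \hat\rho \left( \ln \hat\rho - \ln \hat\rho_0 \right) \right] \geq 0 ,
```

which vanishes exactly when the two states coincide, so larger values signal stronger correlations relative to the uncorrelated reference.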
MALDI-TOF MS of Trichoderma: A model system for the identification of microfungi
USDA-ARS?s Scientific Manuscript database
This investigation aimed to assess whether MALDI-TOF MS analysis of proteomics could be applied to the study of Trichoderma, a fungal genus selected because it includes many species and is phylogenetically well defined. We also investigated whether MALDI-TOF MS analysis of proteomics would reveal ap...
Experimental test of an online ion-optics optimizer
NASA Astrophysics Data System (ADS)
Amthor, A. M.; Schillaci, Z. M.; Morrissey, D. J.; Portillo, M.; Schwarz, S.; Steiner, M.; Sumithrarachchi, Ch.
2018-07-01
A technique has been developed and tested to automatically adjust multiple electrostatic or magnetic multipoles on an ion optical beam line - according to a defined optimization algorithm - until an optimal tune is found. This approach simplifies the process of determining high-performance optical tunes, satisfying a given set of optical properties, for an ion optical system. The optimization approach is based on the particle swarm method and is entirely model independent, thus the success of the optimization does not depend on the accuracy of an extant ion optical model of the system to be optimized. Initial test runs of a first order optimization of a low-energy (<60 keV) all-electrostatic beamline at the NSCL show reliable convergence of nine quadrupole degrees of freedom to well-performing tunes within a reasonable number of trial solutions, roughly 500, with full beam optimization run times of roughly two hours. Improved tunes were found both for quasi-local optimizations and for quasi-global optimizations, indicating a good ability of the optimizer to find a solution with or without a well defined set of initial multipole settings.
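A model-independent optimizer of this kind is straightforward to sketch; the code below is a minimal particle swarm over the multipole settings, with hyperparameters and the objective invented for illustration (about 20 particles x 25 iterations, i.e. roughly the 500 trial tunes mentioned above).

```python
# Minimal particle swarm over N multipole settings; measure_beam_quality
# is a stand-in for scoring one trial tune on the machine (lower is
# better), and all hyperparameters are illustrative.
import numpy as np

def particle_swarm(measure_beam_quality, bounds, n_particles=20, iters=25,
                   w=0.7, c1=1.5, c2=1.5, seed=0):
    """bounds: array of shape (n_dims, 2) with [low, high] per setting."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (n_particles, len(lo)))   # trial settings
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([measure_beam_quality(p) for p in x])
    g = pbest[np.argmin(pbest_f)]                     # global best
    for _ in range(iters):                            # each eval = one tune
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([measure_beam_quality(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()
```

Because every candidate is scored by measuring the real beam, no ion optical model of the line enters the loop, which is what makes the approach model independent.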
Solid state SPS microwave generation and transmission study. Volume 2, phase 2: Appendices
NASA Technical Reports Server (NTRS)
Maynard, O. E.
1980-01-01
The solid state sandwich concept for SPS was further defined. The design effort concentrated on the spacetenna, but did include some system analysis for parametric comparison reasons. Basic solid state microwave devices were defined and modeled. An initial conceptual subsystem and system design was performed, as well as sidelobe control and system selection. The selected system concept and parametric solid state microwave power transmission system data were assessed relevant to the SPS concept. Although device efficiency was not a goal, the sensitivities of the design to this efficiency were treated parametrically. Sidelobe control consisted of various single-step tapers, multistep tapers and Gaussian tapers. A hybrid concept using tubes and solid state was evaluated. Thermal analyses are included with emphasis on sensitivities to waste heat radiator form factor, emissivity, absorptivity, amplifier efficiency, material and junction temperature.
NASA Technical Reports Server (NTRS)
Tilmes, Curt
2014-01-01
The Global Change Information System (GCIS) provides a framework for the formal representation of structured metadata about data and information about global change. The pilot deployment of the system supports the National Climate Assessment (NCA), a major report of the U.S. Global Change Research Program (USGCRP). A consumer of that report can use the system to browse and explore its supporting information. Additionally, by capturing that information in a structured data model and presenting it in standard formats through well defined open interfaces, including query interfaces suitable for data mining and linking with other databases, the information becomes valuable for other analytic uses as well.
Generic Sensor Failure Modeling for Cooperative Systems
Jäger, Georg; Zug, Sebastian
2018-01-01
The advent of cooperative systems entails a dynamic composition of their components. As this contrasts with current, statically composed systems, new approaches for maintaining their safety are required. In that endeavor, we propose an integration step that evaluates the failure model of shared information in relation to an application's fault tolerance and thereby promises maintainability of such systems' safety. However, it also poses new requirements on failure models, which are not fulfilled by state-of-the-art approaches. Consequently, this work presents a mathematically defined generic failure model as well as a processing chain for automatically extracting such failure models from empirical data. By examining data from a Sharp GP2D12 distance sensor, we show that the generic failure model not only fulfills the predefined requirements, but also models failure characteristics appropriately when compared to traditional techniques. PMID:29558435
Design of a Model Execution Framework: Repetitive Object-Oriented Simulation Environment (ROSE)
NASA Technical Reports Server (NTRS)
Gray, Justin S.; Briggs, Jeffery L.
2008-01-01
The ROSE framework was designed to facilitate complex system analyses. It completely divorces the model execution process from the model itself. By doing so, ROSE frees the modeler to develop a library of standard modeling processes, such as Design of Experiments, optimizers, parameter studies, and sensitivity studies, which can then be applied to any of their available models. The ROSE framework accomplishes this by means of a well defined API and object structure. Both the API and object structure are presented here with enough detail to implement ROSE in any object-oriented language or modeling tool.
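A minimal sketch of the design idea - the execution process knows nothing about the model beyond a common run interface - using hypothetical class and method names (ROSE's actual API is not reproduced here):

```python
from abc import ABC, abstractmethod
from itertools import product

class Model(ABC):
    """Minimal model interface: accept inputs, execute, return outputs."""
    @abstractmethod
    def run(self, inputs: dict) -> dict: ...

class ParameterStudy:
    """A reusable execution process, independent of any particular model."""
    def __init__(self, sweeps: dict):
        self.sweeps = sweeps  # e.g. {"mach": [0.6, 0.8], "alt": [0, 10000]}

    def execute(self, model: Model):
        names = list(self.sweeps)
        for values in product(*self.sweeps.values()):
            point = dict(zip(names, values))
            yield point, model.run(point)

class ToyEngineModel(Model):
    def run(self, inputs):
        # placeholder physics: thrust falls with altitude, rises with Mach
        return {"thrust": 1000 * (1 + inputs["mach"]) * (1 - inputs["alt"] / 50000)}

# The same ParameterStudy object could drive any other Model subclass.
for point, result in ParameterStudy({"mach": [0.6, 0.8], "alt": [0, 10000]}).execute(ToyEngineModel()):
    print(point, result)
```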
Launch and Landing Effects Ground Operations (LLEGO) Model
NASA Technical Reports Server (NTRS)
2008-01-01
LLEGO is a model for understanding recurring launch and landing operations costs at Kennedy Space Center for human space flight. Launch and landing operations are often referred to as ground processing, or ground operations. Currently, this function is specific to the ground operations for the Space Shuttle Space Transportation System within the Space Shuttle Program. The Constellation system to follow the Space Shuttle consists of the crewed Orion spacecraft atop an Ares I launch vehicle and the uncrewed Ares V cargo launch vehicle. The Constellation flight and ground systems build upon many elements of the existing Shuttle flight and ground hardware, as well as upon existing organizations and processes. In turn, the LLEGO model builds upon past ground operations research, modeling, data, and experience in estimating for future programs. Rather than simply providing estimates, the LLEGO model's main purpose is to improve expense estimates by relating the complex relationships among functions (ground operations contractor, subcontractors, civil service technical, center management, operations, etc.) to tangible drivers. Drivers include flight system complexity and reliability, as well as operations and supply chain management processes and technology. Together these factors define the operability and potential improvements for any future system, from the most direct to the least direct expenses.
Wave-Sediment Interaction in Muddy Environments: A Field Experiment
2007-01-01
…in Years 1 and 2 (2007-2008) and a data analysis and modeling effort in Year 3 (2009). … 2. "A System for Monitoring Wave-Sediment Interaction in…" … project was to conduct a pilot field experiment to test instrumentation and data analysis procedures for the major field experiment effort scheduled in… (Chou et al., 1993; Foda et al., 1993). With the exception of liquefaction processes, these models assume a single, well-defined mud phase…
An industrial information integration approach to in-orbit spacecraft
NASA Astrophysics Data System (ADS)
Du, Xiaoning; Wang, Hong; Du, Yuhao; Xu, Li Da; Chaudhry, Sohail; Bi, Zhuming; Guo, Rong; Huang, Yongxuan; Li, Jisheng
2017-01-01
To operate an in-orbit spacecraft, the spacecraft status has to be monitored autonomously by collecting and analysing real-time data and then detecting abnormalities and malfunctions of system components. To develop an information system for spacecraft state detection, we investigate the feasibility of using ontology-based artificial intelligence in the system development. We propose a new modelling technique based on the semantic web, agents, scenarios and ontologies. In the modelling, the subjects of the astronautics fields are classified, corresponding agents and scenarios are defined, and they are connected through the semantic web to analyse data and detect failures. We introduce the modelling methodologies and the resulting framework of the status-detection information system in this paper. We discuss the system components as well as their interactions in detail. The system has been prototyped and tested to illustrate its feasibility and effectiveness. The proposed modelling technique is generic and can be extended and applied to the development of other large-scale and complex information systems.
Preliminary evaluation of the hydrogeologic system in Owens Valley, California
Danskin, W.R.
1988-01-01
A preliminary, two-layer, steady-state, groundwater flow model was used to evaluate present data and hydrologic concepts of Owens Valley, California. Simulations of the groundwater system indicate that areas where water levels are most affected by changes in recharge and discharge are near toes of alluvial fans and along the edge of permeable volcanic deposits. Sensitivity analysis for each model parameter shows that steady state simulations are most sensitive to uncertainties in evapotranspiration rates. Tungsten Hills, Poverty Hills, and Alabama Hills were found to act as virtually impermeable barriers to groundwater flow. Accurate simulation of the groundwater system between Bishop and Lone Pine appears to be possible without simulating the groundwater system in Round Valley, near Owens Lake, or in aquifer materials more than 1,000 ft below land surface. Although vast amounts of geologic and hydrologic data have been collected for Owens Valley, many parts of the hydrogeologic system have not been defined with sufficient detail to answer present water management questions. Location and extent of geologic materials that impede the vertical movement of water are poorly documented. The likely range of aquifer characteristics, except vertical hydraulic conductivity, is well known, but spatial distribution of these characteristics is not well documented. A set of consistent water budgets is needed, including one for surface water, groundwater, and the entire valley. The largest component of previous water budgets (evapotranspiration) is largely unverified. More definitive estimates of local gains and losses for Owens River are needed. Although groundwater pumpage from each well is measured, the quantity of withdrawal from different zones of permeable material has not been defined. (USGS)
Xu, Haiyang; Wang, Ping
2016-01-01
In order to verify the real-time reliability of an unmanned aerial vehicle (UAV) flight control system and comply with airworthiness certification standards, we proposed a model-based integration framework for the modeling and verification of timing properties. Combining the advantages of MARTE, this framework uses class diagrams to create the static model of the software system and state charts to create the dynamic model. According to the defined transformation rules, the MARTE model can be transformed into a formal integrated model, and the different parts of the model can be verified using existing formal tools. For the real-time specifications of the software system, we also proposed a generating algorithm for temporal logic formulas, which can automatically extract real-time properties from time-sensitive live sequence charts (TLSC). Finally, we modeled the simplified flight control system of a UAV to check its real-time properties. The results showed that the framework can be used to create the system model, as well as to precisely analyze and verify the real-time reliability of the UAV flight control system. PMID:27918594
A low-cost three-dimensional laser surface scanning approach for defining body segment parameters.
Pandis, Petros; Bull, Anthony Mj
2017-11-01
Body segment parameters are used in many different applications in ergonomics as well as in dynamic modelling of the musculoskeletal system. Body segment parameters can be defined using different methods, including techniques that involve time-consuming manual measurements of the human body, used in conjunction with models or equations. In this study, a scanning technique for measuring subject-specific body segment parameters in an easy, fast, accurate and low-cost way was developed and validated. The scanner can obtain the body segment parameters in a single scanning operation, which takes between 8 and 10 s. The results obtained with the system show a standard deviation of 2.5% in volumetric measurements of the upper limb of a mannequin and 3.1% difference between scanning volume and actual volume. Finally, the maximum mean error for the moment of inertia by scanning a standard-sized homogeneous object was 2.2%. This study shows that a low-cost system can provide quick and accurate subject-specific body segment parameter estimates.
Testing for ontological errors in probabilistic forecasting models of natural systems
Marzocchi, Warner; Jordan, Thomas H.
2014-01-01
Probabilistic forecasting models describe the aleatory variability of natural systems as well as our epistemic uncertainty about how the systems work. Testing a model against observations exposes ontological errors in the representation of a system and its uncertainties. We clarify several conceptual issues regarding the testing of probabilistic forecasting models for ontological errors: the ambiguity of the aleatory/epistemic dichotomy, the quantification of uncertainties as degrees of belief, the interplay between Bayesian and frequentist methods, and the scientific pathway for capturing predictability. We show that testability of the ontological null hypothesis derives from an experimental concept, external to the model, that identifies collections of data, observed and not yet observed, that are judged to be exchangeable when conditioned on a set of explanatory variables. These conditional exchangeability judgments specify observations with well-defined frequencies. Any model predicting these behaviors can thus be tested for ontological error by frequentist methods; e.g., using P values. In the forecasting problem, prior predictive model checking, rather than posterior predictive checking, is desirable because it provides more severe tests. We illustrate experimental concepts using examples from probabilistic seismic hazard analysis. Severe testing of a model under an appropriate set of experimental concepts is the key to model validation, in which we seek to know whether a model replicates the data-generating process well enough to be sufficiently reliable for some useful purpose, such as long-term seismic forecasting. Pessimistic views of system predictability fail to recognize the power of this methodology in separating predictable behaviors from those that are not. PMID:25097265
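To make the frequentist testing step concrete, the toy sketch below computes a two-sided p-value for an observed event count against a forecast Poisson rate for an exchangeable collection of observations; the numbers and the Poisson choice are illustrative assumptions, not the paper's seismic-hazard examples.

```python
from scipy.stats import poisson

def ontological_pvalue(observed_count, forecast_rate):
    """Two-sided p-value for an observed event count against a model that
    forecasts a Poisson rate; a small p-value flags a possible ontological
    error in the forecasting model."""
    lo = poisson.cdf(observed_count, forecast_rate)       # P(X <= observed)
    hi = poisson.sf(observed_count - 1, forecast_rate)    # P(X >= observed)
    return min(1.0, 2 * min(lo, hi))

# e.g. 14 events observed where the hazard model forecast a rate of 7.2
print(ontological_pvalue(14, 7.2))
```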
Non-thermal transitions in a model inspired by moral decisions
NASA Astrophysics Data System (ADS)
Alamino, Roberto C.
2016-08-01
This work introduces a model in which agents of a network act upon one another according to three different kinds of moral decisions. These decisions are based on an increasing level of sophistication in the empathy capacity of the agent, a hierarchy which we name Piaget's ladder. The decision strategy of the agents is non-rational, in the sense that strategies are arbitrarily fixed, and the model presents quenched disorder given by the distribution of its defining parameters. An analytical solution for this model is obtained in the large-system limit, as well as a leading-order correction for finite-size systems, which shows that typical realisations of the model develop a phase structure with both continuous and discontinuous non-thermal transitions.
Remaining lifetime modeling using State-of-Health estimation
NASA Astrophysics Data System (ADS)
Beganovic, Nejra; Söffker, Dirk
2017-08-01
Technical systems and their components undergo gradual degradation over time. The continuous degradation occurring in a system is reflected in decreased reliability and unavoidably leads to system failure. Continuous evaluation of State-of-Health (SoH) is therefore essential, at minimum to guarantee the lifetime specified by the manufacturer or, better still, to extend it. A precondition for lifetime extension, however, is accurate estimation of SoH as well as estimation and prediction of the Remaining Useful Lifetime (RUL). For this purpose, lifetime models describing the relation between system/component degradation and consumed lifetime have to be established. In this contribution, modeling and the selection of suitable lifetime models from a database based on current SoH conditions are discussed. The main contribution of this paper is the development of new modeling strategies capable of describing complex relations between measurable system variables, the related system degradation, and RUL. Two approaches, with their accompanying advantages and disadvantages, are introduced and compared. Both approaches can model stochastic aging processes of a system by simultaneously adapting RUL models to the current SoH. The first approach requires a priori knowledge about aging processes in the system and accurate estimation of SoH. Here, SoH estimation is conditioned on tracking the actual damage accumulated in the system, so that particular model parameters are defined according to a priori assumptions about the system's aging. Prediction accuracy in this case depends strongly on accurate SoH estimation, but the approach offers a high number of degrees of freedom. The second approach does not require a priori knowledge about the system's aging, as particular model parameters are defined by a multi-objective optimization procedure. The prediction accuracy of this model does not depend strongly on the estimated SoH, and the model has fewer degrees of freedom. Both approaches rely on previously developed lifetime models, each corresponding to a predefined SoH. In the first approach, model selection is aided by a state-machine-based algorithm; in the second, model selection is conditioned on the exceedance of predefined thresholds. The approach is applied to data generated from tribological systems. By calculating the Root Squared Error (RSE), Mean Squared Error (MSE), and Absolute Error (ABE), the accuracy of the proposed models/approaches is discussed along with the related advantages and disadvantages. The approach is verified using cross-fold validation, exchanging training and test data. The newly introduced data-driven parametric models can be easily established, providing detailed information about the remaining useful (or consumed) lifetime, and are valid for systems with constant load but stochastically occurring damage.
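A minimal sketch of the second approach's model-selection step, with a hypothetical library of pre-fitted lifetime models and assumed SoH thresholds on a monitored damage feature:

```python
import numpy as np

# Hypothetical library of pre-fitted lifetime models, one per SoH band:
# each maps the accumulated damage feature to a predicted RUL (hours).
lifetime_models = {
    "healthy":  lambda x: 1000.0 - 0.5 * x,
    "degraded": lambda x: 600.0 - 0.8 * x,
    "critical": lambda x: 200.0 - 1.2 * x,
}
# SoH thresholds on the monitored damage feature (assumed values).
thresholds = [(300.0, "healthy"), (700.0, "degraded"), (np.inf, "critical")]

def predict_rul(damage_feature):
    """Select the lifetime model whose SoH band the current feature value
    falls in (threshold-exceedance selection), then evaluate it."""
    for upper, band in thresholds:
        if damage_feature < upper:
            return band, max(0.0, lifetime_models[band](damage_feature))

print(predict_rul(450.0))  # -> ('degraded', 240.0)
```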
NASA Astrophysics Data System (ADS)
Pershin, I. M.; Pervukhin, D. A.; Ilyushin, Y. V.; Afanaseva, O. V.
2017-10-01
The paper considers the important problem of designing distributed management systems for hydrolithosphere processes. The control actions on the hydrolithosphere processes under consideration are implemented by a set of extraction wells. The article presents a method for defining the approximation links that describe the dynamic characteristics of hydrolithosphere processes. The structure of the distributed regulators used in the management systems for these processes is presented. The paper analyses the results of the synthesis of the distributed management system and of modelling the closed-loop control system with respect to the parameters of the hydrolithosphere process.
Using Petri Net Tools to Study Properties and Dynamics of Biological Systems
Peleg, Mor; Rubin, Daniel; Altman, Russ B.
2005-01-01
Petri Nets (PNs) and their extensions are promising methods for modeling and simulating biological systems. We surveyed PN formalisms and tools and compared them based on their mathematical capabilities as well as by their appropriateness to represent typical biological processes. We measured the ability of these tools to model specific features of biological systems and answer a set of biological questions that we defined. We found that different tools are required to provide all capabilities that we assessed. We created software to translate a generic PN model into most of the formalisms and tools discussed. We have also made available three models and suggest that a library of such models would catalyze progress in qualitative modeling via PNs. Development and wide adoption of common formats would enable researchers to share models and use different tools to analyze them without the need to convert to proprietary formats. PMID:15561791
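For readers new to the formalism, a minimal place/transition net with token-based firing can be written in a few lines; the enzymatic-reaction example and token counts below are illustrative, not one of the paper's three models.

```python
class PetriNet:
    """Minimal place/transition net: tokens sit in places, and a transition
    fires when every input place holds enough tokens."""
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"{name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# Toy enzymatic reaction E + S -> ES -> E + P as a net (hypothetical counts).
net = PetriNet({"E": 1, "S": 3, "ES": 0, "P": 0})
net.add_transition("bind",    {"E": 1, "S": 1}, {"ES": 1})
net.add_transition("convert", {"ES": 1},        {"E": 1, "P": 1})
while net.enabled("bind"):
    net.fire("bind"); net.fire("convert")
print(net.marking)   # all substrate converted to product
```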
10 Steps to Building an Architecture for Space Surveillance Projects
NASA Astrophysics Data System (ADS)
Gyorko, E.; Barnhart, E.; Gans, H.
Space surveillance is an increasingly complex task, requiring the coordination of a multitude of organizations and systems, while dealing with competing capabilities, proprietary processes, differing standards, and compliance issues. In order to fully understand space surveillance operations, analysts and engineers need to analyze and break down their operations and systems using what are essentially enterprise architecture processes and techniques. These techniques can be daunting to the first-time architect. This paper provides a summary of simplified steps to analyze a space surveillance system at the enterprise level in order to determine capabilities, services, and systems. These steps form the core of an initial Model-Based Architecting process. For new systems, a well defined, or well architected, space surveillance enterprise leads to an easier transition from model-based architecture to model-based design and provides a greater likelihood that requirements are fulfilled the first time. Both new and existing systems benefit from being easier to manage, and can be sustained more easily using portfolio management techniques, based around capabilities documented in the model repository. The resulting enterprise model helps an architect avoid 1) costly, faulty portfolio decisions; 2) wasteful technology refresh efforts; 3) upgrade and transition nightmares; and 4) non-compliance with DoDAF directives. The Model-Based Architecting steps are based on a process that Harris Corporation has developed from practical experience architecting space surveillance systems and ground systems. Examples are drawn from current work on documenting space situational awareness enterprises. The process is centered on DoDAF 2 and its corresponding meta-model so that terminology is standardized and communicable across any disciplines that know DoDAF architecting, including acquisition, engineering and sustainment disciplines. Each step provides a guideline for the type of data to collect, and also the appropriate views to generate. The steps include 1) determining the context of the enterprise, including active elements and high level capabilities or goals; 2) determining the desired effects of the capabilities and mapping capabilities against the project plan; 3) determining operational performers and their inter-relationships; 4) building information and data dictionaries; 5) defining resources associated with capabilities; 6) determining the operational behavior necessary to achieve each capability; 7) analyzing existing or planned implementations to determine systems, services and software; 8) cross-referencing system behavior to operational behavior; 9) documenting system threads and functional implementations; and 10) creating any required textual documentation from the model.
Protocols for efficient simulations of long-time protein dynamics using coarse-grained CABS model.
Jamroz, Michal; Kolinski, Andrzej; Kmiecik, Sebastian
2014-01-01
Coarse-grained (CG) modeling is a well-acknowledged simulation approach for gaining insight into long-time-scale protein folding events at reasonable computational cost. Depending on the design of a CG model, the simulation protocols vary from highly case-specific - requiring user-defined assumptions about the folding scenario - to more sophisticated blind prediction methods for which only a protein sequence is required. Here we describe the framework protocol for simulations of the long-term dynamics of globular proteins using the CABS CG protein model and sequence data. The simulations can start from a random or a selected (e.g., native) structure. The described protocol has been validated using experimental data for protein folding model systems - the prediction results agreed well with the experimental results.
Biocharts: a visual formalism for complex biological systems
Kugler, Hillel; Larjo, Antti; Harel, David
2010-01-01
We address one of the central issues in devising languages, methods and tools for the modelling and analysis of complex biological systems, that of linking high-level (e.g. intercellular) information with lower-level (e.g. intracellular) information. Adequate ways of dealing with this issue are crucial for understanding biological networks and pathways, which typically contain huge amounts of data that continue to grow as our knowledge and understanding of a system increases. Trying to comprehend such data using the standard methods currently in use is often virtually impossible. We propose a two-tier compound visual language, which we call Biocharts, that is geared towards building fully executable models of biological systems. One of the main goals of our approach is to enable biologists to actively participate in the computational modelling effort, in a natural way. The high-level part of our language is a version of statecharts, which have been shown to be extremely successful in software and systems engineering. The statecharts can be combined with any appropriately well-defined language (preferably a diagrammatic one) for specifying the low-level dynamics of the pathways and networks. We illustrate the language and our general modelling approach using the well-studied process of bacterial chemotaxis. PMID:20022895
Improving care for patients on antiretroviral therapy through a gap analysis framework.
Massoud, M Rashad; Shakir, Fazila; Livesley, Nigel; Muhire, Martin; Nabwire, Juliana; Ottosson, Amanda; Jean-Baptiste, Rachel; Megere, Humphrey; Karamagi-Nkolo, Esther; Gaudreault, Suzanne; Marks, Pamela; Jennings, Larissa
2015-07-01
To improve quality of care by decreasing existing gaps in the areas of coverage, retention, and wellness of patients receiving HIV care and treatment. The antiretroviral therapy (ART) Framework utilizes improvement methods and the Chronic Care Model to address the coverage, retention, and wellness gaps in HIV care and treatment. This is a time-series study. The ART Framework was applied in five health centers in Buikwe District, Uganda. Quality improvement teams, consisting of healthcare workers and expert patients, were established in each of the five healthcare facilities. The intervention period was October 2010 to September 2012. It consisted of quality improvement teams analyzing their facility and systems of care from the perspective of the Chronic Care Model to identify areas of improvement. They implemented the ART Framework, collected data, and assessed outcomes, focused on self-management support for patients, to close the coverage, retention, and wellness gaps in HIV care and treatment. Coverage was defined as: every patient who needs ART in the catchment area receives it. Retention was defined as: every patient who receives ART stays on ART. Wellness was defined as: having a positive clinical, immunological, and/or virological response to treatment without intolerable or unmanageable side-effects. Results from Buikwe show that the gaps in coverage, retention, and wellness greatly decreased during the 2-year intervention period: the coverage gap fell from 44% to 19%, the retention gap from 49% to 24%, and the wellness gap from 53% to 14%. The ART Framework is an innovative and practical tool for HIV program managers to improve HIV care and treatment.
Principles for the dynamic maintenance of cortical polarity
Marco, Eugenio; Wedlich-Soldner, Roland; Li, Rong; Altschuler, Steven J.; Wu, Lani F.
2007-01-01
Diverse cell types require the ability to dynamically maintain polarized membrane protein distributions through balancing transport and diffusion. However, design principles underlying dynamically maintained cortical polarity are not well understood. Here we constructed a mathematical model for characterizing the morphology of dynamically polarized protein distributions. We developed analytical approaches for measuring all model parameters from single-cell experiments. We applied our methods to a well-characterized system for studying polarized membrane proteins: budding yeast cells expressing activated Cdc42. We found that balanced diffusion and colocalized transport to and from the plasma membrane were sufficient for accurately describing polarization morphologies. Surprisingly, the model predicts that polarized regions are defined with a precision that is nearly optimal for measured transport rates, and that polarity can be dynamically stabilized through positive feedback with directed transport. Our approach provides a step towards understanding how biological systems shape spatially precise, unambiguous cortical polarity domains using dynamic processes. PMID:17448998
Thermodynamically consistent model calibration in chemical kinetics
2011-01-01
Background The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function. Results We introduce a thermodynamically consistent model calibration (TCMC) method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints) into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from http://www.cis.jhu.edu/~goutsias/CSS lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database. Conclusions TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function as well as to estimate thermodynamically feasible values for the parameters of new models. Furthermore, TCMC can provide dimensionality reduction, better estimation performance, and lower computational complexity, and can help to alleviate the problem of data overfitting. PMID:21548948
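A minimal sketch of the idea behind TCMC - calibrating kinetic parameters as a constrained optimization in which a thermodynamic (Wegscheider-type) cycle condition is imposed as an equality constraint - using an invented three-reaction cycle and assumed target values rather than the EGF/ERK model:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 3-reaction cycle A<->B<->C<->A: thermodynamics requires the
# Wegscheider condition sum_i ln(kf_i / kb_i) = 0 around the cycle.
target = np.array([1.2, 0.8, 2.0, 1.5, 0.5, 1.1])  # unconstrained fit (kf1, kb1, ...)

def loss(logk):
    # stay as close as possible to the data-fitted values, in log space
    return np.sum((logk - np.log(target)) ** 2)

def wegscheider(logk):
    kf, kb = logk[0::2], logk[1::2]
    return np.sum(kf - kb)  # equals sum_i ln(kf_i / kb_i), must vanish

res = minimize(loss, x0=np.log(target),
               constraints=[{"type": "eq", "fun": wegscheider}])
print(np.exp(res.x))  # thermodynamically feasible rate constants
```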
Landslide risk mitigation by means of early warning systems
NASA Astrophysics Data System (ADS)
Calvello, Michele
2017-04-01
Among the many options available to mitigate landslide risk, early warning systems may be used where, in specific circumstances, the risk to life rises above tolerable levels. A coherent framework to classify and analyse landslide early warning systems (LEWS) is presented here. Once the objectives of an early warning strategy are defined, depending on the scale of analysis and the type of landslides to address, the process of designing and managing a LEWS should synergistically employ technical and social skills. A classification scheme for the main components of LEWSs is proposed for weather-induced landslides. The scheme is based on a clear distinction among: i) the landslide model, i.e. a functional relationship between weather characteristics and landslide events, considering the geotechnical, geomorphological and hydro-geological characterization of the area as well as an adequate monitoring strategy; ii) the warning model, i.e. the landslide model plus procedures to define the warning events and to issue the warnings; iii) the warning system, i.e. the warning model plus warning dissemination procedures, communication and education tools, strategies for community involvement and emergency plans. Each component of a LEWS is related to a number of actors involved in its deployment, operational activities and management. For instance, communication and education, community involvement and emergency plans are all significantly influenced by people's risk perception and by operational aspects that system managers need to address in cooperation with scientists.
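As a toy illustration of the distinction between the landslide model and the warning model, the sketch below pairs an assumed rainfall intensity-duration threshold curve (the landslide-model part) with exceedance-based warning levels (the warning-model part); all coefficients and level names are hypothetical.

```python
def warning_level(intensity_mm_h, duration_h,
                  levels=((1.0, "ordinary"), (2.0, "moderate"), (4.0, "high"))):
    """Toy warning model: compare rainfall against an assumed
    intensity-duration threshold curve I = a * D**(-b) and map the
    exceedance ratio to warning levels."""
    critical_intensity = 10.0 * duration_h ** -0.6   # landslide-model part
    ratio = intensity_mm_h / critical_intensity
    level = "none"
    for bound, name in levels:                       # warning-model part
        if ratio >= bound:
            level = name
    return level

print(warning_level(intensity_mm_h=8.0, duration_h=24.0))  # -> 'high'
```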
Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems.
Williams, Richard A; Timmis, Jon; Qwarnstrom, Eva E
2016-01-01
Computational modelling and simulation is increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. In order to ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model) that acts to accurately represent the biological system and to document the functional requirements for the resultant computational model. We present a domain model of the IL-1 stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that, in isolation, UML is not sufficient for the purpose of creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define the stochastic and dynamic nature of complex systems, is crucial for ensuring that conceptual models of complex dynamical biosystems, which are developed using UML, are fit for purpose and unambiguously define the functional requirements for the resultant computational model. PMID:27571414
Towards a Multiscale Approach to Cybersecurity Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, Emilie A.; Hui, Peter SY; Choudhury, Sutanay
2013-11-12
We propose a multiscale approach to modeling cyber networks, with the goal of capturing a view of the network and overall situational awareness with respect to a few key properties - connectivity, distance, and centrality - for a system under an active attack. We focus on theoretical and algorithmic foundations of multiscale graphs, coming from an algorithmic perspective, with the goal of modeling cyber system defense as a specific use case scenario. We first define a notion of multiscale graphs, in contrast with their well-studied single-scale counterparts. We develop multiscale analogs of paths and distance metrics. As a simple, motivating example of a common metric, we present a multiscale analog of the all-pairs shortest-path problem, along with a multiscale analog of a well-known algorithm which solves it. From a cyber defense perspective, this metric might be used to model the distance from an attacker's position in the network to a sensitive machine. In addition, we investigate probabilistic models of connectivity. These models exploit the hierarchy to quantify the likelihood that sensitive targets might be reachable from compromised nodes. We believe that our novel multiscale approach to modeling cyber-physical systems will advance several aspects of cyber defense, specifically allowing for a more efficient and agile approach to defending these systems.
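A toy two-scale illustration of the distance idea, assuming a hypothetical cluster labelling: coarse queries run on a supernode graph (e.g. attacker cluster to sensitive cluster), with exact fine-scale search used only for refinement. This is a sketch of the concept, not the authors' algorithm.

```python
import networkx as nx

# Fine-scale network; each node carries a cluster label (assumed hierarchy).
G = nx.Graph()
G.add_edges_from([("a1", "a2"), ("a2", "b1"), ("b1", "b2"), ("b2", "c1"), ("c1", "c2")])
cluster = {"a1": "A", "a2": "A", "b1": "B", "b2": "B", "c1": "C", "c2": "C"}

# Coarse graph: one supernode per cluster, an edge wherever a fine edge crosses.
C = nx.Graph()
for u, v in G.edges:
    if cluster[u] != cluster[v]:
        C.add_edge(cluster[u], cluster[v])

# Coarse query gives a fast low-resolution distance; the fine-scale search
# is run only when the exact hop count is needed.
print(nx.shortest_path_length(C, "A", "C"))    # coarse: 2 cluster hops
print(nx.shortest_path_length(G, "a1", "c2"))  # fine: 5 node hops
```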
Hierarchical resilience with lightweight threads.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wheeler, Kyle Bruce
2011-10-01
This paper proposes a methodology for providing robustness and resilience for a highly threaded distributed- and shared-memory environment based on well-defined inputs and outputs to lightweight tasks. These inputs and outputs form a failure 'barrier', allowing tasks to be restarted or duplicated as necessary. These barriers must be expanded based on task behavior, such as communication between tasks, but do not prohibit any given behavior. One of the trends in high-performance computing codes seems to be a trend toward self-contained functions that mimic functional programming. Software designers are trending toward a model of software design where their core functions are specified in side-effect-free or low-side-effect ways, wherein the inputs and outputs of the functions are well-defined. This provides the ability to copy the inputs to wherever they need to be - whether that's the other side of the PCI bus or the other side of the network - do work on that input using local memory, and then copy the outputs back (as needed). This design pattern is popular among new distributed threading environment designs. Such designs include the Barcelona STARS system, distributed OpenMP systems, the Habanero-C and Habanero-Java systems from Vivek Sarkar at Rice University, the HPX/ParalleX model from LSU, as well as our own Scalable Parallel Runtime effort (SPR) and the Trilinos stateless kernels. This design pattern is also shared by CUDA and several OpenMP extensions for GPU-type accelerators (e.g. the PGI OpenMP extensions).
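A minimal sketch of the input/output failure barrier around a side-effect-free task; the function names and retry policy are illustrative assumptions, and none of the cited runtimes' APIs are reproduced here.

```python
import copy

def run_with_barrier(task, inputs, retries=3):
    """Failure 'barrier' around a side-effect-free task: inputs are copied in
    and outputs copied out, so the task can be restarted (or duplicated on
    another node) transparently when an attempt fails."""
    for attempt in range(retries):
        try:
            return copy.deepcopy(task(copy.deepcopy(inputs)))
        except Exception:
            if attempt == retries - 1:
                raise  # all restarts exhausted; surface the failure

# Usage: the task touches only its copied inputs, so a retry is always safe.
result = run_with_barrier(lambda d: {"sum": sum(d["values"])}, {"values": [1, 2, 3]})
print(result)
```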
Lie algebraic similarity transformed Hamiltonians for lattice model systems
NASA Astrophysics Data System (ADS)
Wahlen-Strothman, Jacob M.; Jiménez-Hoyos, Carlos A.; Henderson, Thomas M.; Scuseria, Gustavo E.
2015-01-01
We present a class of Lie algebraic similarity transformations generated by exponentials of two-body on-site Hermitian operators whose Hausdorff series can be summed exactly without truncation. The correlators are defined over the entire lattice and include the Gutzwiller factor n_{i↑}n_{i↓}, and two-site products of density (n_{i↑} + n_{i↓}) and spin (n_{i↑} − n_{i↓}) operators. The resulting non-Hermitian many-body Hamiltonian can be solved in a biorthogonal mean-field approach with polynomial computational cost. The proposed similarity transformation generates locally weighted orbital transformations of the reference determinant. Although the energy of the model is unbounded, projective equations in the spirit of coupled cluster theory lead to well-defined solutions. The theory is tested on the one- and two-dimensional repulsive Hubbard model, where it yields accurate results for small and medium-sized interaction strengths.
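In schematic notation, the correlator, the similarity-transformed Hamiltonian, and the projective equations take the following form, where α, β_{ij}, and γ_{ij} stand for the variational parameters of the correlator (the precise parameterization in the paper may differ):

```latex
\hat J \;=\; \sum_i \alpha\, n_{i\uparrow} n_{i\downarrow}
\;+\; \sum_{i \neq j} \Big[ \beta_{ij}\,(n_{i\uparrow}+n_{i\downarrow})(n_{j\uparrow}+n_{j\downarrow})
\;+\; \gamma_{ij}\,(n_{i\uparrow}-n_{i\downarrow})(n_{j\uparrow}-n_{j\downarrow}) \Big],
\qquad
\bar H \;=\; e^{-\hat J}\,\hat H\,e^{\hat J}
\;=\; \hat H + [\hat H,\hat J] + \tfrac{1}{2!}\,[[\hat H,\hat J],\hat J] + \cdots,
\qquad
\langle \Phi_\mu |\, \bar H \,| \Phi \rangle \;=\; E\, \langle \Phi_\mu | \Phi \rangle .
```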
What does it mean to be musical?
Levitin, Daniel J
2012-02-23
Music can be seen as a model system for understanding gene × environment interactions and how these can influence neurocognitive development. The concept of musicality, however, is underspecified and not well understood. Here, I propose a framework for defining musicality to provide a foundation for studying the contributions of biological and environmental factors.
A Novel Bioreactor System for the Assessment of Endothelialization on Deformable Surfaces
Bachmann, Björn J.; Bernardi, Laura; Loosli, Christian; Marschewski, Julian; Perrini, Michela; Ehrbar, Martin; Ermanni, Paolo; Poulikakos, Dimos; Ferrari, Aldo; Mazza, Edoardo
2016-01-01
The generation of a living protective layer at the luminal surface of cardiovascular devices, composed of an autologous functional endothelium, represents the ideal solution to life-threatening, implant-related complications in cardiovascular patients. The initial evaluation of engineering strategies fostering endothelial cell adhesion and proliferation as well as the long-term tissue homeostasis requires in vitro testing in environmental model systems able to recapitulate the hemodynamic conditions experienced at the blood-to-device interface of implants as well as the substrate deformation. Here, we introduce the design and validation of a novel bioreactor system which enables the long-term conditioning of human endothelial cells interacting with artificial materials under dynamic combinations of flow-generated wall shear stress and wall deformation. The wall shear stress and wall deformation values obtained encompass both the physiological and supraphysiological range. They are determined through separate actuation systems which are controlled based on validated computational models. In addition, we demonstrate the good optical conductivity of the system permitting online monitoring of cell activities through live-cell imaging as well as standard biochemical post-processing. Altogether, the bioreactor system defines an unprecedented testing hub for potential strategies toward the endothelialization or re-endothelialization of target substrates. PMID:27941901
NASA Technical Reports Server (NTRS)
Smith, R. M.
1991-01-01
Numerous applications in the area of computer system analysis can be effectively studied with Markov reward models. These models describe the behavior of the system with a continuous-time Markov chain, where a reward rate is associated with each state. In a reliability/availability model, up states may have reward rate 1 and down states may have reward rate zero associated with them. In a queueing model, the number of jobs of a certain type in a given state may be the reward rate attached to that state. In a combined model of performance and reliability, the reward rate of a state may be the computational capacity, or a related performance measure. Expected steady-state reward rate and expected instantaneous reward rate are clearly useful measures of the Markov reward model. More generally, the distribution of accumulated reward or time-averaged reward over a finite time interval may be determined from the solution of the Markov reward model. This information is of great practical significance in situations where the workload can be well characterized (deterministically, or by continuous functions, e.g., distributions). The design process in the development of a computer system is an expensive and long-term endeavor. For aerospace applications the reliability of the computer system is essential, as is the ability to complete critical workloads in a well-defined real-time interval. Consequently, effective modeling of such systems must take into account both performance and reliability. This fact motivates our use of Markov reward models to aid in the development and evaluation of fault-tolerant computer systems.
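As a worked illustration of these measures, the sketch below computes the expected instantaneous reward (availability) and the expected accumulated reward over [0, t] for a two-state up/down model by integrating the Kolmogorov forward equations together with a reward accumulator; the rates and reward values are assumed.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-state availability model: state 0 = up (reward 1), state 1 = down (reward 0).
lam, mu = 0.01, 0.5            # failure and repair rates (assumed values)
Q = np.array([[-lam, lam],
              [mu, -mu]])      # CTMC generator matrix
r = np.array([1.0, 0.0])       # reward rate attached to each state

def rhs(t, y):
    # y[:2] is the state probability vector pi(t); y[2] accumulates pi(t).r
    pi = y[:2]
    return np.concatenate([pi @ Q, [pi @ r]])

sol = solve_ivp(rhs, (0.0, 100.0), np.array([1.0, 0.0, 0.0]), rtol=1e-8)
pi_T, accumulated = sol.y[:2, -1], sol.y[2, -1]
print("expected instantaneous reward:", pi_T @ r)   # availability at t = 100
print("expected accumulated reward:", accumulated)  # expected uptime over [0, 100]
```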
The current status of research on the structure of evaluative space
Norris, Catherine J.; Gollan, Jackie; Berntson, Gary G.; Cacioppo, John T.
2009-01-01
The structure of evaluative space shapes emotional life. Although behavior may be constrained to a single bipolar dimension, for example as defined by the opposing movements of approach and withdrawal, the mechanisms underlying the affect system must be capable of an astonishing range of emotional experience and expression. The model of evaluative space (ESM; J. T. Cacioppo, W. L. Gardner, & G. G. Berntson, 1997, 1999) proposes that behavioral predispositions are the ultimate output of the affect system, which is defined by operating characteristics that differ both for positivity and negativity, as well as across levels of the nervous system. In this article, we outline the current status of theory and research on the structure of evaluative space. First, we summarize the basic tenets of the model, as well as recent research supporting these ideas and counterarguments that have been raised by other theorists. To address these counterarguments, we discuss the postulates of affective oscillation and calibration, two mechanistic features of the affect system proposed to underlie the durability and adaptability of affect. We summarize empirical support for the functional consequences of the principles of affective oscillation and calibration, with a focus on how oscillation and the “stickiness” of affect can lead to the emergence of ambivalence, whereas affective calibration and the flexibility of the affect system produce asymmetries in affective processing (e.g., the negativity bias). Finally, we consider the clinical implications of disorder in the structure of evaluative space for the comprehension and treatment of depression and anxiety. PMID:20346389
A fuzzy logic expert system for evaluating policy progress towards sustainability goals.
Cisneros-Montemayor, Andrés M; Singh, Gerald G; Cheung, William W L
2017-12-16
Evaluating progress towards environmental sustainability goals can be difficult due to a lack of measurable benchmarks and insufficient or uncertain data. Marine settings are particularly challenging, as stakeholders and objectives tend to be less well defined and ecosystem components have high natural variability and are difficult to observe directly. Fuzzy logic expert systems are useful analytical frameworks to evaluate such systems, and we develop such a model here to formally evaluate progress towards sustainability targets based on diverse sets of indicators. Evaluation criteria include recent (since policy enactment) and historical (from earliest known state) change, type of indicators (state, benefit, pressure, response), time span and spatial scope, and the suitability of an indicator in reflecting progress toward a specific objective. A key aspect of the framework is that all assumptions are transparent and modifiable to fit different social and ecological contexts. We test the method by evaluating progress towards four Aichi Biodiversity Targets in Canadian oceans, including quantitative progress scores, information gaps, and the sensitivity of results to model and data assumptions. For Canadian marine systems, national protection plans and biodiversity awareness show good progress, but species and ecosystem states overall do not show strong improvement. Well-defined goals are vital for successful policy implementation, as ambiguity allows for conflicting potential indicators, which in natural systems increases uncertainty in progress evaluations. Importantly, our framework can be easily adapted to assess progress towards policy goals with different themes, globally or in specific regions.
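A minimal sketch of the fuzzy-logic flavor of such an evaluation - triangular memberships, min for AND, and a single illustrative rule - with indicator scores and category shapes that are assumptions, not the Aichi-target values used in the paper:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising on [a, b] and falling on [b, c]."""
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

# Hypothetical indicator scores in [0, 1]: recent change and indicator suitability.
recent_change, suitability = 0.55, 0.8

# Degrees of membership in the linguistic categories.
improving = tri(recent_change, 0.4, 1.0, 1.6)   # degree the trend is "improving"
suitable  = tri(suitability,  0.3, 1.0, 1.7)    # degree the indicator is "suitable"

# One rule: IF trend improving AND indicator suitable THEN progress is good.
# Fuzzy AND is min; with a single rule the score reduces to the firing strength.
progress_score = min(improving, suitable)
print(f"progress toward target: {progress_score:.2f}")
```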
Operational Space Weather Activities in the US
NASA Astrophysics Data System (ADS)
Berger, Thomas; Singer, Howard; Onsager, Terrance; Viereck, Rodney; Murtagh, William; Rutledge, Robert
2016-07-01
We review the current activities in the civil operational space weather forecasting enterprise of the United States. The NOAA/Space Weather Prediction Center is the nation's official source of space weather watches, warnings, and alerts, working with partners in the Air Force as well as international operational forecast services to provide predictions, data, and products on a large variety of space weather phenomena and impacts. In October 2015, the White House Office of Science and Technology Policy released the National Space Weather Strategy (NSWS) and associated Space Weather Action Plan (SWAP) that define how the nation will better forecast, mitigate, and respond to an extreme space weather event. The SWAP defines actions involving multiple federal agencies and mandates coordination and collaboration with academia, the private sector, and international bodies to, among other things, develop and sustain an operational space weather observing system; develop and deploy new models of space weather impacts to critical infrastructure systems; define new mechanisms for the transition of research models to operations and to ensure that the research community is supported for, and has access to, operational model upgrade paths; and to enhance fundamental understanding of space weather through support of research models and observations. The SWAP will guide significant aspects of space weather operational and research activities for the next decade, with opportunities to revisit the strategy in the coming years through the auspices of the National Science and Technology Council.
Junker, Astrid; Muraya, Moses M.; Weigelt-Fischer, Kathleen; Arana-Ceballos, Fernando; Klukas, Christian; Melchinger, Albrecht E.; Meyer, Rhonda C.; Riewe, David; Altmann, Thomas
2015-01-01
Detailed and standardized protocols for plant cultivation in environmentally controlled conditions are an essential prerequisite for conducting reproducible experiments with precisely defined treatments. Setting up appropriate and well defined experimental procedures is thus crucial for the generation of solid evidence and indispensable for successful plant research. Non-invasive and high throughput (HT) phenotyping technologies offer the opportunity to monitor and quantify the performance dynamics of several hundred plants at a time. Compared to small-scale plant cultivation, HT systems place much higher demands, from a conceptual and a logistic point of view, on experimental design, on the actual plant cultivation conditions, and on the image analysis and statistical methods for data evaluation. Furthermore, cultivation conditions need to be designed to elicit plant performance characteristics corresponding to those under natural conditions. This manuscript describes critical steps in the optimization of procedures for HT plant phenotyping systems. Starting with the model plant Arabidopsis, HT-compatible methods were tested and optimized with regard to growth substrate, soil coverage, watering regime, and experimental design (considering environmental inhomogeneities) in automated plant cultivation and imaging systems. As revealed by metabolite profiling, plant movement did not affect the plants' physiological status. Based on these results, procedures for maize HT cultivation and monitoring were established. Variation of maize vegetative growth in the HT phenotyping system matched well with that observed in the field. The presented results outline important issues to be considered in the design of HT phenotyping experiments for model and crop plants. They thereby provide guidelines for the setup of HT experimental procedures, which are required for the generation of reliable and reproducible data on phenotypic variation for a broad range of applications. PMID:25653655
NASA Astrophysics Data System (ADS)
Caton, R. G.; Colman, J. J.; Parris, R. T.; Nickish, L.; Bullock, G.
2017-12-01
The Air Force Research Laboratory, in collaboration with NorthWest Research Associates, is developing advanced software capabilities for high-fidelity simulations of high frequency (HF) sky wave propagation and performance analysis of HF systems. Based on the HiCIRF (High-frequency Channel Impulse Response Function) platform [Nickisch et al., doi:10.1029/2011RS004928], the new Air Force Coverage Analysis Program (AFCAP) provides the modular capabilities necessary for a comprehensive sensitivity study of the large number of variables that define simulations of HF propagation modes. In this paper, we report on an initial exercise of AFCAP to analyze the sensitivities of the tool to various environmental and geophysical parameters. Through examination of the channel scattering function and amplitude-range-Doppler output on two-way propagation paths with injected target signals, we compare simulated returns over a range of geophysical conditions as well as varying definitions for environmental noise, meteor clutter, and sea-state models for Bragg backscatter. We also investigate the impact of including clutter effects due to field-aligned backscatter from small-scale ionization structures at varied levels of severity as defined by the climatological WideBand Model (WBMOD). In the absence of additional user-provided information, AFCAP relies on the International Reference Ionosphere (IRI) model to define the ionospheric state for use in 2D ray tracing algorithms. Because the AFCAP architecture includes the option for insertion of a user-defined gridded ionospheric representation, we compare output from the tool using the IRI and ionospheric definitions from assimilative models such as GPSII (GPS Ionospheric Inversion).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cirilo Antonio, N.; Manojlovic, N.; Departamento de Matematica, FCT, Universidade do Algarve, Campus de Gambelas, 8005-139 Faro
The sl_2 Gaudin model with Jordanian twist is studied. This system can be obtained as the semiclassical limit of the XXX spin chain deformed by the Jordanian twist. The appropriate creation operators that yield the Bethe states of the Gaudin model, and consequently its spectrum, are defined. Their commutation relations with the generators of the corresponding loop algebra, as well as with the generating function of integrals of motion, are given. The inner products and norms of Bethe states and the relation to the solutions of the Knizhnik-Zamolodchikov equations are discussed.
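For orientation, the untwisted sl_2 Gaudin Hamiltonians, defined at inhomogeneity parameters z_a, take the standard mutually commuting form below; the Jordanian twist deforms these operators and the corresponding creation operators (details as in the paper):

```latex
H_a \;=\; \sum_{b \neq a}
\frac{\tfrac{1}{2}\big(S_a^{+} S_b^{-} + S_a^{-} S_b^{+}\big) + S_a^{z} S_b^{z}}{z_a - z_b},
\qquad [H_a, H_b] \;=\; 0 .
```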
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skrinak, V.M.
The Eastern Devonian Gas Shales Technology Review is a technology transfer vehicle designed to keep industry and research organizations aware of major happenings in the shales. Four issues were published, and the majority of the readership was found to be operators. Under the other major task in this project, areal and analytic analyses of the basin resulted in reducing the study area by 30% while defining a rectangular coordinate system for the basin. Shale-well cost and economic models were developed and validated, and a simplified flow model was prepared.
2008-03-01
…multiplicative corrections as well as space mapping transformations for models defined over a lower-dimensional space. A corrected surrogate model for the… correction functions used in [72]. If the low-fidelity model g(x̃) is defined over a lower-dimensional space then a space mapping transformation is… required. As defined in [21, 72], space mapping is a method of mapping between models of different dimensionality or fidelity. Let P denote the space…
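Reading through the elisions, one common form of such a correction combines a multiplicative response correction with a space mapping P into the low-fidelity model's lower-dimensional space; schematically (the notation is assumed, since the excerpt is truncated):

```latex
\tilde g(x) \;=\; \beta(x)\, g\big(P(x)\big),
\qquad P :\; \mathbb{R}^{n} \to \mathbb{R}^{m},\; m < n,
\qquad \beta(x_i) \;=\; \frac{f(x_i)}{g\big(P(x_i)\big)} ,
```

where f denotes the high-fidelity model and the correction β is interpolated between the sampled points x_i so that the corrected surrogate matches f there.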
Emerging Technologies to Create Inducible and Genetically Defined Porcine Cancer Models
Schook, Lawrence B.; Rund, Laurie; Begnini, Karine R.; Remião, Mariana H.; Seixas, Fabiana K.; Collares, Tiago
2016-01-01
There is an emerging need for new animal models that address unmet translational cancer research requirements. Transgenic porcine models provide an exceptional opportunity due to their genetic, anatomic, and physiological similarities with humans. Due to recent advances in the sequencing of domestic animal genomes and the development of new organism cloning technologies, it is now very feasible to utilize pigs as a malleable species, with anatomic and physiological features similar to humans, in which to develop cancer models. In this review, we discuss genetic modification technologies successfully used to produce porcine biomedical models, in particular the Cre-loxP system, as well as major advances in and perspectives on the CRISPR/Cas9 system. Recent advancements in porcine tumor modeling and genome editing will bring porcine models to the forefront of translational cancer research. PMID:26973698
A Layered Decision Model for Cost-Effective System Security
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, Huaqiang; Alves-Foss, James; Soule, Terry
System security involves decisions in at least three areas: identification of well-defined security policies, selection of cost-effective defence strategies, and implementation of real-time defence tactics. Although choices made in each of these areas affect the others, existing decision models typically handle these three decision areas in isolation. There is no comprehensive tool that can integrate them to provide a single efficient model for safeguarding a network. In addition, there is no clear way to determine which particular combinations of defence decisions result in cost-effective solutions. To address these problems, this paper introduces a Layered Decision Model (LDM) for use in deciding how to address defence decisions based on their cost-effectiveness. To validate the LDM and illustrate how it is used, we used simulation to test model rationality and applied the LDM to the design of system security for an e-commerce business case.
Crisis Management Systems: A Case Study for Aspect-Oriented Modeling
NASA Astrophysics Data System (ADS)
Kienzle, Jörg; Guelfi, Nicolas; Mustafiz, Sadaf
The intent of this document is to define a common case study for the aspect-oriented modeling research community. The domain of the case study is crisis management systems, i.e., systems that help in identifying, assessing, and handling a crisis situation by orchestrating the communication between all parties involved in handling the crisis, by allocating and managing resources, and by providing access to relevant crisis-related information to authorized users. This document contains informal requirements of crisis management systems (CMSs) in general, a feature model for a CMS product line, use case models for a car crash CMS (CCCMS), a domain model for the CCCMS, an informal physical architecture description of the CCCMS, as well as some design models of a possible object-oriented implementation of parts of the CCCMS backend. AOM researchers who want to demonstrate the power of their AOM approach or technique can hence apply the approach at the most appropriate level of abstraction.
Approximate reasoning using terminological models
NASA Technical Reports Server (NTRS)
Yen, John; Vaidya, Nitin
1992-01-01
Term Subsumption Systems (TSS) form a knowledge-representation scheme in AI that can express the defining characteristics of concepts through a formal language that has a well-defined semantics and incorporates a reasoning mechanism that can deduce whether one concept subsumes another. However, TSSs have very limited ability to deal with the issue of uncertainty in knowledge bases. The objective of this research is to address issues in combining approximate reasoning with term subsumption systems. To do this, we have extended an existing AI architecture (CLASP) that is built on top of a term subsumption system (LOOM). First, the assertional component of LOOM has been extended for asserting and representing uncertain propositions. Second, we have extended the pattern matcher of CLASP for plausible rule-based inferences. Third, an approximate reasoning model has been added to facilitate various kinds of approximate reasoning. Finally, the issue of inconsistency in truth values due to inheritance is addressed using justification of those values. This architecture enhances the reasoning capabilities of expert systems by providing support for reasoning under uncertainty using knowledge captured in TSS. Also, as definitional knowledge is explicit and separate from heuristic knowledge for plausible inferences, the maintainability of expert systems could be improved.
Elements Required for an Efficient NADP-Malic Enzyme Type C4 Photosynthesis
Wang, Yu; Long, Stephen P.; Zhu, Xin-Guang
2014-01-01
C4 photosynthesis has higher light, nitrogen, and water use efficiencies than C3 photosynthesis. Although the basic anatomical, cellular, and biochemical features of C4 photosynthesis are well understood, the quantitative significance of each element of C4 photosynthesis to the high photosynthetic efficiency is not well defined. Here, we addressed this question by developing and using a systems model of C4 photosynthesis, which includes not only the Calvin-Benson cycle, starch synthesis, sucrose synthesis, C4 shuttle, and CO2 leakage, but also photorespiration and metabolite transport between the bundle sheath cells and mesophyll cells. The model effectively simulated the CO2 uptake rates and the changes in metabolite concentrations under varied CO2 and light levels. Analyses show that triose phosphate transport and CO2 leakage can help maintain a high photosynthetic rate by balancing ATP and NADPH amounts in bundle sheath cells and mesophyll cells. Finally, we used the model to define the optimal enzyme properties and a blueprint for C4 engineering. As such, this model provides a theoretical framework for guiding C4 engineering and studying C4 photosynthesis in general. PMID:24521879
Physical properties of the benchmark models program supercritical wing
NASA Technical Reports Server (NTRS)
Dansberry, Bryan E.; Durham, Michael H.; Bennett, Robert M.; Turnock, David L.; Silva, Walter A.; Rivera, Jose A., Jr.
1993-01-01
The goal of the Benchmark Models Program is to provide data useful in the development and evaluation of aeroelastic computational fluid dynamics (CFD) codes. To that end, a series of three similar wing models are being flutter tested in the Langley Transonic Dynamics Tunnel. These models are designed to simultaneously acquire model response data and unsteady surface pressure data during wing flutter conditions. The supercritical wing is the second model of this series. It is a rigid semispan model with a rectangular planform and a NASA SC(2)-0414 supercritical airfoil shape. The supercritical wing model was flutter tested on a flexible mount, called the Pitch and Plunge Apparatus, that provides a well-defined, two-degree-of-freedom dynamic system. The supercritical wing model and associated flutter test apparatus are described, and experimentally determined wind-off structural dynamic characteristics of the combined rigid model and flexible mount system are included.
NASA Astrophysics Data System (ADS)
Chan, C. H.; Brown, G.; Rikvold, P. A.
2017-05-01
A generalized approach to Wang-Landau simulations, macroscopically constrained Wang-Landau, is proposed to simulate the density of states of a system with multiple macroscopic order parameters. The method breaks a multidimensional random-walk process in phase space into many separate, one-dimensional random-walk processes in well-defined subspaces. Each of these random walks is constrained to a different set of values of the macroscopic order parameters. When the multivariable density of states is obtained for one set of values of fieldlike model parameters, the density of states for any other values of these parameters can be obtained by a simple transformation of the total system energy. All thermodynamic quantities of the system can then be rapidly calculated at any point in the phase diagram. We demonstrate how to use the multivariable density of states to draw the phase diagram, as well as order-parameter probability distributions at specific phase points, for a model spin-crossover material: an antiferromagnetic Ising model with ferromagnetic long-range interactions. The fieldlike parameters in this model are an effective magnetic field and the strength of the long-range interaction.
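For orientation, a minimal unconstrained one-dimensional Wang-Landau sketch on a small Ising ring is shown below; the macroscopically constrained variant described above additionally fixes the order parameters during each walk. This is a generic illustration, not the authors' implementation.

```python
import numpy as np

# Generic (unconstrained) Wang-Landau estimate of ln g(E) for a 1D Ising ring.
rng = np.random.default_rng(1)
N = 10
spins = rng.choice([-1, 1], size=N)

def energy(s):
    return -int(np.sum(s * np.roll(s, 1)))       # J = 1, periodic chain

levels = np.arange(-N, N + 1, 4)                 # attainable energies: -N, -N+4, ..., N
idx = {E: i for i, E in enumerate(levels)}

ln_g = np.zeros(len(levels))                     # running estimate of ln g(E)
hist = np.zeros(len(levels))                     # visit histogram for flatness check
ln_f = 1.0                                       # modification factor, reduced in stages
E = energy(spins)

while ln_f > 1e-4:
    for _ in range(20000):
        k = rng.integers(N)
        dE = 2 * spins[k] * (spins[k - 1] + spins[(k + 1) % N])
        E_new = E + dE
        # Accept flip with probability min(1, g(E)/g(E_new)).
        if np.log(rng.random()) < ln_g[idx[E]] - ln_g[idx[E_new]]:
            spins[k] *= -1
            E = E_new
        ln_g[idx[E]] += ln_f
        hist[idx[E]] += 1
    if hist.min() > 0.8 * hist.mean():           # crude flatness criterion
        hist[:] = 0
        ln_f /= 2.0                              # proceed to a finer stage

print(dict(zip(levels, np.round(ln_g - ln_g[0], 2))))  # ln g(E) relative to ground state
```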
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jihua; Alonzo, Jose; Yu, Xiang
2013-09-24
Well-defined conjugated polymers in confined geometries are challenging to synthesize and characterize, yet they are potentially useful in a broad range of organic optoelectronic devices such as transistors, light emitting diodes, solar cells, sensors, and nanocircuits. We report a systematic study of optoelectrical properties, grafting density effects, and nanopatterning of a model, end-tethered conjugated polymer system. Specifically, poly(para-phenylene) (PPP) brushes of various grafting density are created in situ by aromatizing well-defined, end-tethered poly(1,3-cyclohexadiene) (PCHD) “precursor brushes”. Furthermore, this novel precursor brush approach provides a convenient way to make and systematically control the grafting density of high molecular weight conjugated polymer brushes that would otherwise be insoluble. Finally, this allows us to examine how grafting density impacts the effective conjugation length of the conjugated PPP brushes and to adapt the fabrication method to develop spatially patterned conjugated brush systems, which is important for practical applications of conjugated polymer brushes.
Defining a Model for Mitochondrial Function in mESC Differentiation
Differentiating embryonic stem cells (ESCs) undergo mitochondrial maturation leading to a switch from a system dependent upon glycolysis to a re...
Finite-time consensus for controlled dynamical systems in network
NASA Astrophysics Data System (ADS)
Zoghlami, Naim; Mlayeh, Rhouma; Beji, Lotfi; Abichou, Azgal
2018-04-01
The key challenges in networked dynamical systems are component heterogeneity, nonlinearity, and the high dimension of the state vector. In this paper, the emphasis is put on two classes of networked systems that include most controlled driftless systems as well as systems with drift. For each model structure, evolving in a network as a homogeneous or heterogeneous multi-system, protocols integrating sufficient conditions are derived that lead to finite-time consensus. For the networking topology, we make use of fixed directed and undirected graphs. To prove our approaches, finite-time stability theory and Lyapunov methods are considered. As illustrative examples, homogeneous multi-unicycle kinematics and homogeneous/heterogeneous multi-second-order dynamics in networks are studied.
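As a generic illustration of a finite-time consensus protocol (the classic fractional-power law for single integrators, not the protocols derived in this paper; the gain, exponent, and graph below are arbitrary choices):

```python
import numpy as np

# Finite-time consensus sketch for single integrators on an undirected 4-node ring.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

def u(x, k=1.0, alpha=0.5):
    # u_i = -k * sign(d_i) * |d_i|^alpha, with d = L x the Laplacian disagreement.
    d = A.sum(axis=1) * x - A @ x
    return -k * np.sign(d) * np.abs(d) ** alpha

x = np.array([3.0, -1.0, 0.5, 2.0])
dt = 1e-3
for step in range(20000):                     # forward-Euler integration
    x = x + dt * u(x)
    if np.ptp(x) < 1e-6:                      # spread of the states
        print(f"consensus reached at t = {step * dt:.3f} s, value = {x.mean():.4f}")
        break
else:
    print("final state:", x)
```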
SysML: A Language for Space System Engineering
NASA Astrophysics Data System (ADS)
Mazzini, S.; Strangapede, A.
2008-08-01
This paper presents the results of an ESA/ESTEC internal study, performed with the support of INTECS, about modeling languages to support Space System Engineering activities and processes, with special emphasis on system requirements identification and analysis. The study was focused on the assessment of dedicated UML profiles, their positioning alongside the system and software life cycles, and associated methodologies. Requirements for a Space System Requirements Language were identified considering the ECSS-E-10 and ECSS-E-40 processes. The study identified SysML as a very promising language, having as its theoretical background the reference system processes defined by ISO 15288, as well as industrial practices.
The commercial implications of the EELV program
NASA Astrophysics Data System (ADS)
Sasso, Steven E.
1998-01-01
There have been several studies over the past 15 years intended to define and develop a space launch system that would meet future needs of the United States Government (USG). While these past studies (Advanced Launch System, National Launch System, Spacelifter, etc.) yielded valuable data, none were carried to fruition. Overriding issues included high development cost, changing requirements, and uncertainty in the mission model, as well as a lack of clear direction for where the nation should be headed. In 1995, the Air Force embarked on the Evolved Expendable Launch Vehicle (EELV) program as a way of defining and developing the next-generation expendable launch system. This time, the ground rules for the effort were clearly defined: the program relied on evolving an existing system, rather than developing a high-technology solution, to reduce development cost, and the commercial market was factored in as a way of reducing cost to the USG. The EELV program is to enter the engineering and manufacturing development (EMD) phase by mid-1998, with first flight planned for early 2001. This paper describes the planned Lockheed Martin EELV program and its ability to utilize the commercial market to benefit the USG in its need to develop the next-generation expendable launch vehicle.
State Event Models for the Formal Analysis of Human-Machine Interactions
NASA Technical Reports Server (NTRS)
Combefis, Sebastien; Giannakopoulou, Dimitra; Pecheur, Charles
2014-01-01
The work described in this paper was motivated by our experience with applying a framework for formal analysis of human-machine interactions (HMI) to a realistic model of an autopilot. The framework is built around a formally defined conformance relation called "full-control" between an actual system and the mental model according to which the system is operated. Systems are well-designed if they can be described by relatively simple, full-control, mental models for their human operators. For this reason, our framework supports automated generation of minimal full-control mental models for HMI systems, where both the system and the mental models are described as labelled transition systems (LTS). The autopilot that we analysed has been developed in the NASA Ames HMI prototyping tool ADEPT. In this paper, we describe how we extended the models that our HMI analysis framework handles to allow adequate representation of ADEPT models. We then provide a property-preserving reduction from these extended models to LTSs, to enable application of our LTS-based formal analysis algorithms. Finally, we briefly discuss the analyses we were able to perform on the autopilot model with our extended framework.
SCA Waveform Development for Space Telemetry
NASA Technical Reports Server (NTRS)
Mortensen, Dale J.; Kifle, Multi; Hall, C. Steve; Quinn, Todd M.
2004-01-01
The NASA Glenn Research Center is investigating and developing suitable reconfigurable radio architectures for future NASA missions. This effort is examining software-based open architectures for space-based transceivers, as well as common hardware platform architectures. The Joint Tactical Radio System's (JTRS) Software Communications Architecture (SCA) is a candidate for the software approach, but may need modifications or adaptations for use in space. An in-house SCA-compliant waveform development effort focuses on increasing understanding of software-defined radio architectures and, more specifically, the JTRS SCA. Space requirements put a premium on size, mass, and power. This waveform development effort is key to evaluating tradeoffs with the SCA for space applications. Existing NASA telemetry links, as well as Space Exploration Initiative scenarios, are the basis for defining the waveform requirements. Modeling and simulations are being developed to determine the signal processing requirements associated with a waveform and its mission-specific computational burden. Implementation of the waveform on a laboratory software-defined radio platform is proceeding in an iterative fashion. Parallel top-down and bottom-up design approaches are employed.
Upton, J; Murphy, M; Shalloo, L; Groot Koerkamp, P W G; De Boer, I J M
2014-01-01
Our objective was to define and demonstrate a mechanistic model that enables dairy farmers to explore the impact of a technical or managerial innovation on electricity consumption, associated CO2 emissions, and electricity costs. We, therefore, (1) defined a model for electricity consumption on dairy farms (MECD) capable of simulating total electricity consumption, along with related CO2 emissions and electricity costs, on dairy farms on a monthly basis; (2) validated the MECD using 1 yr of empirical data from commercial spring-calving, grass-based dairy farms with 45, 88, and 195 milking cows; and (3) demonstrated the functionality of the model by applying 2 electricity tariffs to the electricity consumption data and examining the effect on total dairy farm electricity costs. The MECD was developed using a mechanistic modeling approach and required the key inputs of milk production, cow number, and details relating to the milk-cooling system, milking machine system, water-heating system, lighting systems, water pump systems, and the winter housing facilities, as well as details relating to the management of the farm (e.g., season of calving). Model validation showed an overall relative prediction error (RPE) of less than 10% for total electricity consumption. More than 87% of the mean square prediction error of total electricity consumption was accounted for by random variation. The RPE values of the milk-cooling systems, water-heating systems, and milking machine systems were less than 20%. The RPE values for automatic scraper systems, lighting systems, and water pump systems varied from 18 to 113%, indicating poor prediction for these metrics. However, automatic scrapers, lighting, and water pumps made up only 14% of total electricity consumption across all farms, reducing the overall impact of these poor predictions. Demonstration of the model showed that total farm electricity costs increased by between 29 and 38% when moving from a day-and-night tariff to a flat tariff. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
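For reference, the relative prediction error reported in such validation studies is conventionally computed as the root mean square prediction error expressed as a percentage of the observed mean (a standard definition assumed here, since the abstract does not spell out the formula):

\[
\mathrm{RPE}=\frac{100}{\bar y}\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i-\hat y_i\right)^2},
\]

where \(y_i\) and \(\hat y_i\) are the observed and predicted monthly consumptions and \(\bar y\) is the observed mean.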
Caenorhabditis elegans in regenerative medicine: a simple model for a complex discipline.
Aitlhadj, Layla; Stürzenbaum, Stephen R
2014-06-01
Stem cell research is a major focus of regenerative medicine, which amalgamates diverse disciplines ranging from developmental cell biology to chemical and genetic therapy. Although embryonic stem cells have provided the foundation of stem cell therapy, they offer an in vitro study system that might not provide the best insight into mechanisms and behaviour of cells within living organisms. Caenorhabditis elegans is a well-defined model organism with highly conserved cell development and signalling processes that specify cell fate. Its genetic amenability, coupled with its chemical screening applicability, makes the nematode well suited as an in vivo system in which regenerative therapy and stem cell processes can be explored. Here, we describe some of the major advances in stem cell research from the worm's perspective. Copyright © 2014 Elsevier Ltd. All rights reserved.
Blood-Siegfried, Jane
2015-01-01
Sudden infant death syndrome (SIDS) is still not well understood. It is defined as the sudden and unexpected death of an infant without a definitive cause. There are numerous hypotheses about the etiology of SIDS, but the exact cause or causes have never been pinpointed. Examination of theoretical pathologies might only be possible in animal models. Development of these models requires consideration of the environmental and/or developmental risk factors often associated with SIDS, as the models need to explain how the risk factors could contribute to the cause of death. Such models were initially developed in common laboratory animals - guinea pig, piglet, mouse, neonatal rabbit, and neonatal rat - to test various hypotheses to explain these infant deaths. Currently, a growing number of researchers are using genetically altered animals to examine specific areas of interest. This review describes the different systems and models developed to examine the diverse hypotheses for the cause of SIDS and their potential for defining a causal mechanism or mechanisms.
Pharmacokinetic Modeling of JP-8 Jet Fuel Components: II. A Conceptual Framework
2003-12-01
example, a single type of (simple) binary interaction between 300 components would require the specification of some 10^5 interaction coefficients. One...individual substances, via binary mechanisms, is enough to predict the interactions present in the mixture. Secondly, complex mixtures can often be...approximated as pseudo-binary systems, consisting of the compound of interest plus a single interacting complex vehicle with well-defined, composite
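The quoted order of magnitude is consistent with a simple pair count: for 300 components,

\[
\binom{300}{2}=\frac{300\times 299}{2}=44{,}850\approx 10^{4.7},
\]

so even a single type of binary interaction already calls for on the order of 10^5 coefficients.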
Modeling and Performance Estimation for Airborne Minefield Detection System
2008-05-01
Normalized Difference Vegetation Index (NDVI). NDVI is defined as: NDVI = (NIR – RED)/(NIR + RED)...to minimize the effect of variable irradiance levels. NDVI is always bounded between -1 and 1. A higher positive value of NDVI indicates the...lakes, and rivers) which has low reflectance in both NIR as well as visible bands, results in very low positive or slightly negative NDVI values
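A minimal sketch of this computation on NIR and red reflectance arrays (generic illustration; the toy reflectance values are assumptions):

```python
import numpy as np

# NDVI = (NIR - RED) / (NIR + RED), computed per pixel; values fall in [-1, 1].
def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    nir = nir.astype(float)
    red = red.astype(float)
    denom = np.where(nir + red == 0.0, 1.0, nir + red)   # guard zero denominators
    return (nir - red) / denom

# Toy 2x2 scene: vegetation reflects strongly in NIR; water poorly in both bands.
nir = np.array([[0.50, 0.45], [0.04, 0.30]])
red = np.array([[0.08, 0.10], [0.05, 0.28]])
print(ndvi(nir, red))   # vegetation pixels ~0.6-0.7; water pixel slightly negative
```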
Roadmap for cardiovascular circulation model.
Safaei, Soroush; Bradley, Christopher P; Suresh, Vinod; Mithraratne, Kumar; Muller, Alexandre; Ho, Harvey; Ladd, David; Hellevik, Leif R; Omholt, Stig W; Chase, J Geoffrey; Müller, Lucas O; Watanabe, Sansuke M; Blanco, Pablo J; de Bono, Bernard; Hunter, Peter J
2016-12-01
Computational models of many aspects of the mammalian cardiovascular circulation have been developed. Indeed, along with orthopaedics, this area of physiology is one that has attracted much interest from engineers, presumably because the equations governing blood flow in the vascular system are well understood and can be solved with well-established numerical techniques. Unfortunately, there have been only a few attempts to create a comprehensive public domain resource for cardiovascular researchers. In this paper we propose a roadmap for developing an open source cardiovascular circulation model. The model should be registered to the musculo-skeletal system. The computational infrastructure for the cardiovascular model should provide for near real-time computation of blood flow and pressure in all parts of the body. The model should deal with vascular beds in all tissues, and the computational infrastructure for the model should provide links into CellML models of cell function and tissue function. In this work we review the literature associated with 1D blood flow modelling in the cardiovascular system, discuss model encoding standards, software and a model repository. We then describe the coordinate systems used to define the vascular geometry, derive the equations and discuss the implementation of these coupled equations in the open source computational software OpenCMISS. Finally, some preliminary results are presented and plans outlined for the next steps in the development of the model, the computational software and the graphical user interface for accessing the model. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.
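For reference, the 1D blood flow formulation reviewed in this roadmap is conventionally written as coupled mass and momentum balances for cross-sectional area A(x,t), volumetric flow Q(x,t), and pressure p(x,t) (a standard textbook form, not quoted from the paper):

\[
\frac{\partial A}{\partial t}+\frac{\partial Q}{\partial x}=0,\qquad
\frac{\partial Q}{\partial t}+\frac{\partial}{\partial x}\!\left(\alpha\frac{Q^{2}}{A}\right)+\frac{A}{\rho}\frac{\partial p}{\partial x}=-K_R\frac{Q}{A},
\]

closed by a tube law such as \(p=p_{\mathrm{ext}}+\beta\left(\sqrt{A}-\sqrt{A_0}\right)\), with \(\alpha\) a momentum-flux correction factor and \(K_R\) a viscous friction coefficient.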
Wu, Zujian; Pang, Wei; Coghill, George M
Computational modelling of biochemical systems based on top-down and bottom-up approaches has been well studied over the last decade. In this research, after illustrating how to generate atomic components from a set of given reactants and two user-predefined component patterns, we propose an integrative top-down and bottom-up modelling approach for stepwise qualitative exploration of interactions among reactants in biochemical systems. An evolution strategy is applied in the top-down modelling approach to compose models, and simulated annealing is employed in the bottom-up modelling approach to explore potential interactions based on models constructed in the top-down modelling process. Both the top-down and bottom-up approaches support stepwise modular addition or subtraction for the model evolution. Experimental results indicate that our modelling approach can feasibly learn the relationships among biochemical reactants qualitatively. In addition, hidden reactants of the target biochemical system can be obtained by generating complex reactants in corresponding composed models. Moreover, qualitatively learned models with inferred reactants and alternative topologies can be used for further wet-lab experimental investigations by interested biologists, which may result in a better understanding of the system.
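As a reminder of the bottom-up search ingredient, a minimal simulated-annealing loop is sketched below; the toy objective and cooling schedule are placeholders, whereas the paper's moves add or subtract model components rather than perturbing a scalar.

```python
import numpy as np

# Minimal simulated-annealing sketch (generic; stands in for the paper's
# bottom-up exploration over candidate interaction topologies).
rng = np.random.default_rng(42)

def cost(x):                                  # hypothetical model-fit score to minimize
    return (x - 3.0) ** 2 + np.sin(5 * x)

x = 0.0
best = (x, cost(x))
T = 1.0
while T > 1e-3:
    x_new = x + rng.normal(0, 0.5)            # propose a modular change
    dE = cost(x_new) - cost(x)
    if dE < 0 or rng.random() < np.exp(-dE / T):
        x = x_new                             # accept improving or, sometimes, worse moves
        if cost(x) < best[1]:
            best = (x, cost(x))
    T *= 0.999                                # geometric cooling schedule
print(f"best x = {best[0]:.3f}, cost = {best[1]:.3f}")
```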
An access control model with high security for distributed workflow and real-time application
NASA Astrophysics Data System (ADS)
Han, Ruo-Fei; Wang, Hou-Xiang
2007-11-01
The traditional mandatory access control policy (MAC) is regarded as a policy with strict regulation and poor flexibility. The security policy of MAC is so strict that few information systems would adopt it at the cost of convenience, except in particular cases with high security requirements such as military or government applications. However, with the increasing requirement for flexibility, even some access control systems in military applications have switched to role-based access control (RBAC), which is well known as flexible. Though RBAC can meet the demands for flexibility, it is weak in dynamic authorization and consequently does not fit well in workflow management systems. The task-role-based access control (T-RBAC) model was introduced to solve this problem; it combines the advantages of RBAC and task-based access control (TBAC), which uses tasks to manage permissions dynamically. To satisfy the requirements of a system that is distributed, well defined with workflow processes, and critical with respect to timing accuracy, this paper analyzes the spirit of MAC and introduces it into an improved T&RBAC model based on T-RBAC. Finally, a conceptual task-role-based access control model with high security for distributed workflow and real-time application (A_T&RBAC) is built, and its performance is briefly analyzed.
An ontology for component-based models of water resource systems
NASA Astrophysics Data System (ADS)
Elag, Mostafa; Goodall, Jonathan L.
2013-08-01
Component-based modeling is an approach for simulating water resource systems where a model is composed of a set of components, each with a defined modeling objective, interlinked through data exchanges. Component-based modeling frameworks are used within the hydrologic, atmospheric, and earth surface dynamics modeling communities. While these efforts have been advancing, it has become clear that the water resources modeling community in particular, and arguably the larger earth science modeling community as well, faces a challenge of fully and precisely defining the metadata for model components. The lack of a unified framework for model component metadata limits interoperability between modeling communities and the reuse of models across modeling frameworks due to ambiguity about the model and its capabilities. To address this need, we propose an ontology for water resources model components that describes core concepts and relationships using the Web Ontology Language (OWL). The ontology that we present, which is termed the Water Resources Component (WRC) ontology, is meant to serve as a starting point that can be refined over time through engagement by the larger community until a robust knowledge framework for water resource model components is achieved. This paper presents the methodology used to arrive at the WRC ontology, the WRC ontology itself, and examples of how the ontology can aid in component-based water resources modeling by (i) assisting in identifying relevant models, (ii) encouraging proper model coupling, and (iii) facilitating interoperability across earth science modeling frameworks.
The Early History of Bioenergy
NASA Astrophysics Data System (ADS)
Radu, Popa
Energy is most commonly defined as the potential to do work. The maintenance of the living state requires a constant flow of energy through the system. The concept of energy is not easily implemented in computational models of life and is therefore often ignored in artificial life models. Some models even regard as irrelevant "the energetic problematic (dissipation, irreversibility, couplings, energy currencies) in the physical realization of a biological system" (Ruiz-Mirazo et al. 1998). Examples of such models are Rosen's (M,R)-system, Varela's autopoietic models, Kauffman's autocatalytic set, and Fontana's algorithmic chemistry (see Appendix A). However, many origin-of-life theories maintain the primordial importance of energy for early life. Although everyone accepts that energetic constraints are important when describing material-based living systems, a problem arises when we have to consider whether or not they affect the very logic of the organization (Morán et al. 1999). It is argued here that energy considerations are not only primordial, but intimately related to the essence of life as well.
Rotating full- and reduced-dimensional quantum chemical models of molecules
NASA Astrophysics Data System (ADS)
Fábri, Csaba; Mátyus, Edit; Császár, Attila G.
2011-02-01
A flexible protocol, applicable to semirigid as well as floppy polyatomic systems, is developed for the variational solution of the rotational-vibrational Schrödinger equation. The kinetic energy operator is expressed in terms of curvilinear coordinates, describing the internal motion, and rotational coordinates, characterizing the orientation of the frame fixed to the nonrigid body. Although the analytic form of the kinetic energy operator might be very complex, it does not need to be known a priori within this scheme as it is constructed automatically and numerically whenever needed. The internal coordinates can be chosen to best represent the system of interest and the body-fixed frame is not restricted to an embedding defined with respect to a single reference geometry. The features of the technique mentioned make it especially well suited to treat large-amplitude nuclear motions. Reduced-dimensional rovibrational models can be defined straightforwardly by introducing constraints on the generalized coordinates. In order to demonstrate the flexibility of the protocol and the associated computer code, the inversion-tunneling of the ammonia (14NH3) molecule is studied using one, two, three, four, and six active vibrational degrees of freedom, within both vibrational and rovibrational variational computations. For example, the one-dimensional inversion-tunneling model of ammonia is considered also for nonzero rotational angular momenta. It turns out to be difficult to significantly improve upon this simple model. Rotational-vibrational energy levels are presented for rotational angular momentum quantum numbers J = 0, 1, 2, 3, and 4.
Model-Based Anomaly Detection for a Transparent Optical Transmission System
NASA Astrophysics Data System (ADS)
Bengtsson, Thomas; Salamon, Todd; Ho, Tin Kam; White, Christopher A.
In this chapter, we present an approach for anomaly detection at the physical layer of networks where detailed knowledge about the devices and their operations is available. The approach combines physics-based process models with observational data models to characterize the uncertainties and derive the alarm decision rules. We formulate and apply three different methods based on this approach for a well-defined problem in optical network monitoring that features many typical challenges for this methodology. Specifically, we address the problem of monitoring optically transparent transmission systems that use dynamically controlled Raman amplification systems. We use models of amplifier physics together with statistical estimation to derive alarm decision rules and use these rules to automatically discriminate between measurement errors, anomalous losses, and pump failures. Our approach has led to an efficient tool for systematically detecting anomalies in the system behavior of a deployed network, where pro-active measures to address such anomalies are key to preventing unnecessary disturbances to the system's continuous operation.
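As a generic illustration of a residual-based alarm rule of the kind described (a sketch under assumed Gaussian noise; the toy physics model, noise level, and threshold are assumptions, not the chapter's estimators):

```python
import numpy as np

# Residual-based alarm sketch: a physics model predicts the expected measurement,
# and an alarm fires when the standardized residual exceeds a threshold chosen
# for a target false-alarm rate.
rng = np.random.default_rng(0)

def predicted_power(pump_w):          # hypothetical stand-in for amplifier physics
    return 0.8 * pump_w - 1.5         # [dB]

sigma = 0.2                            # measurement noise std [dB], assumed known
threshold = 3.0                        # ~0.3% false-alarm rate under Gaussian noise

pump = np.full(100, 10.0)
measured = predicted_power(pump) + rng.normal(0, sigma, 100)
measured[60:] -= 1.2                   # inject an anomalous loss at sample 60

z = (measured - predicted_power(pump)) / sigma
alarms = np.flatnonzero(np.abs(z) > threshold)
print("first alarm at sample:", alarms[0] if alarms.size else None)
```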
Computational Model of Population Dynamics Based on the Cell Cycle and Local Interactions
NASA Astrophysics Data System (ADS)
Oprisan, Sorinel Adrian; Oprisan, Ana
2005-03-01
Our study bridges cellular (mesoscopic) level interactions and global population (macroscopic) dynamics of carcinoma. The morphological differences and transitions between well-defined, smooth benign tumors and tentacular malignant tumors suggest a theoretical analysis of tumor invasion based on the development of mathematical models exhibiting bifurcations of spatial patterns in the density of tumor cells. Our computational model views the most representative and clinically relevant features of oncogenesis as a fight between two distinct sub-systems: the immune system of the host and the neoplastic system. We implemented the neoplastic sub-system using a three-stage cell cycle: active, dormant, and necrotic. The second sub-system consists of cytotoxic active (effector) cells - EC, with a very broad phenotype ranging from NK cells to CTL cells, macrophages, etc. Based on extensive numerical simulations, we correlated the fractal dimensions for carcinoma, which could be obtained from tumor imaging, with the malignant stage. Our computational model was also able to simulate the effects of surgical, chemotherapeutical, and radiotherapeutical treatments.
Surface science and model catalysis with ionic liquid-modified materials.
Steinrück, H-P; Libuda, J; Wasserscheid, P; Cremer, T; Kolbeck, C; Laurin, M; Maier, F; Sobota, M; Schulz, P S; Stark, M
2011-06-17
Materials making use of thin ionic liquid (IL) films as support-modifying functional layer open up a variety of new possibilities in heterogeneous catalysis, which range from the tailoring of gas-surface interactions to the immobilization of molecularly defined reactive sites. The present report reviews recent progress towards an understanding of "supported ionic liquid phase (SILP)" and "solid catalysts with ionic liquid layer (SCILL)" materials at the microscopic level, using a surface science and model catalysis type of approach. Thin film IL systems can be prepared not only ex-situ, but also in-situ under ultrahigh vacuum (UHV) conditions using atomically well-defined surfaces as substrates, for example by physical vapor deposition (PVD). Due to their low vapor pressure, these systems can be studied in UHV using the full spectrum of surface science techniques. We discuss general strategies and considerations of this approach and exemplify the information available from complementary methods, specifically photoelectron spectroscopy and surface vibrational spectroscopy. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.
2017-01-01
A color algebra refers to a system for computing sums and products of colors, analogous to additive and subtractive color mixtures. We would like it to match the well-defined algebra of spectral functions describing lights and surface reflectances, but an exact correspondence is impossible after the spectra have been projected to a three-dimensional color space, because of metamerism: physically different spectra can produce the same color sensation. Metameric spectra are interchangeable for the purposes of addition, but not multiplication, so any color algebra is necessarily an approximation to physical reality. Nevertheless, because the majority of naturally occurring spectra are well behaved (e.g., continuous and slowly varying), color algebras can be formulated that are largely accurate and agree well with human intuition. Here we explore the family of algebras that result from associating each color with a member of a three-dimensional manifold of spectra. This association can be used to construct a color product, defined as the color of the spectrum of the wavelength-wise product of the spectra associated with the two input colors. The choice of the spectral manifold determines the behavior of the resulting system, and certain special subspaces allow computational efficiencies. The resulting systems can be used to improve computer graphic rendering techniques, and to model various perceptual phenomena such as color constancy.
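A minimal sketch of the color-product construction (all sensitivity curves and the spectral manifold below are hypothetical Gaussians chosen for illustration, not the paper's chosen manifold):

```python
import numpy as np

# Color product via an assumed 3D manifold of spectra: map each color to its
# unique manifold spectrum, multiply wavelength-wise, then project back to color.
wl = np.linspace(400, 700, 301)                      # wavelength grid [nm]

def gauss(mu, sig):
    return np.exp(-0.5 * ((wl - mu) / sig) ** 2)

S = np.stack([gauss(450, 40), gauss(550, 40), gauss(620, 40)])   # sensor sensitivities
B = np.stack([gauss(460, 80), gauss(540, 80), gauss(610, 80)])   # spectral basis (manifold)

def to_color(spectrum):
    return S @ spectrum                               # project a spectrum to 3D color

M = S @ B.T                                           # 3x3: colors of the basis spectra

def to_spectrum(color):
    # Associate a color with the unique basis combination reproducing it.
    return np.linalg.solve(M, color) @ B

def color_product(c1, c2):
    # Color of the wavelength-wise product of the associated spectra.
    return to_color(to_spectrum(c1) * to_spectrum(c2))

c1, c2 = to_color(gauss(500, 60)), to_color(gauss(600, 60))
print(color_product(c1, c2))
```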
Telerobotic control of a mobile coordinated robotic server. M.S. Thesis Annual Technical Report
NASA Technical Reports Server (NTRS)
Lee, Gordon
1993-01-01
The annual report on telerobotic control of a mobile coordinated robotic server is presented. The goal of this effort is to develop advanced control methods for flexible space manipulator systems. As such, an adaptive fuzzy logic controller was developed in which model structure as well as parameter constraints are not required for compensation. The work builds upon previous work on fuzzy logic controllers. Fuzzy logic controllers have been growing in importance in the field of automatic feedback control. Hardware controllers using fuzzy logic have become available as an alternative to traditional PID controllers. Software has also been introduced to aid in the development of fuzzy logic rule-bases. The advantages of using fuzzy logic controllers include the ability to merge the experience and intuition of expert operators into the rule-base, and the fact that a model of the system is not required to construct the controller. A drawback of the classical fuzzy logic controller, however, is the many parameters that need to be tuned off-line prior to application in the closed loop. In this report, an adaptive fuzzy logic controller is developed requiring no system model or model structure. The rule-base is defined to approximate a state-feedback controller, while a second fuzzy logic algorithm varies, on-line, the parameters of the defining controller. Results indicate the approach is viable for on-line adaptive control of systems when the model is too complex or uncertain for application of other more classical control techniques.
Inspiration from heart development: Biomimetic development of functional human cardiac organoids.
Richards, Dylan J; Coyle, Robert C; Tan, Yu; Jia, Jia; Wong, Kerri; Toomer, Katelynn; Menick, Donald R; Mei, Ying
2017-10-01
Recent progress in human organoids has provided 3D tissue systems to model human development and diseases, as well as to develop cell delivery systems for regenerative therapies. While direct differentiation of human embryoid bodies holds great promise for cardiac organoid production, intramyocardial cell organization during heart development provides a biological foundation for fabricating human cardiac organoids with defined cell types. Inspired by the intramyocardial organization events in coronary vasculogenesis, in which a diverse, yet defined, mixture of cardiac cell types self-organizes into functional myocardium in the absence of blood flow, we have developed a defined method to produce scaffold-free human cardiac organoids that structurally and functionally resembled the lumenized vascular network in the developing myocardium, supported hiPSC-CM development, and possessed fundamental cardiac tissue-level functions. In particular, this development-driven strategy offers a robust, tunable system to examine the contributions of individual cell types, matrix materials, and additional factors for developmental insight, biomimetic matrix composition to advance biomaterial design, tissue/organ-level drug screening, and cell therapy for heart repair. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Basham, Bryan D.
1989-01-01
CONFIG is a modeling and simulation tool prototype for analyzing the normal and faulty qualitative behaviors of engineered systems. Qualitative modeling and discrete-event simulation have been adapted and integrated, to support early development, during system design, of software and procedures for management of failures, especially in diagnostic expert systems. Qualitative component models are defined in terms of normal and faulty modes and processes, which are defined by invocation statements and effect statements with time delays. System models are constructed graphically by using instances of components and relations from object-oriented hierarchical model libraries. Extension and reuse of CONFIG models and analysis capabilities in hybrid rule- and model-based expert fault-management support systems are discussed.
Overview of aerothermodynamic loads definition study
NASA Technical Reports Server (NTRS)
Gaugler, Raymond E.
1991-01-01
The objective of the Aerothermodynamic Loads Definition Study is to develop methods of accurately predicting the operating environment in advanced Earth-to-Orbit (ETO) propulsion systems, such as the Space Shuttle Main Engine (SSME) powerhead. Development of time averaged and time dependent three dimensional viscous computer codes as well as experimental verification and engine diagnostic testing are considered to be essential in achieving that objective. Time-averaged, nonsteady, and transient operating loads must all be well defined in order to accurately predict powerhead life. Described here is work in unsteady heat flow analysis, improved modeling of preburner flow, turbulence modeling for turbomachinery, computation of three dimensional flow with heat transfer, and unsteady viscous multi-blade row turbine analysis.
Iommarini, Luisa; Peralta, Susana; Torraco, Alessandra; Diaz, Francisca
2015-01-01
Mitochondrial disorders are defined as defects that affect the oxidative phosphorylation system (OXPHOS). They are characterized by a heterogeneous array of clinical presentations due in part to the wide variety of factors required for proper function of the components of the OXPHOS system. There is no cure for these disorders, owing to our poor knowledge of the pathogenic mechanisms of disease. To understand the mechanisms of human disease, numerous mouse models have been developed in recent years. Here we summarize the features of several mouse models of mitochondrial diseases directly related to factors affecting mtDNA maintenance, replication, transcription, and translation, as well as to other proteins involved in mitochondrial dynamics and quality control, which affect mitochondrial OXPHOS function without being intrinsic components of the system. We discuss how these models have contributed to our understanding of mitochondrial diseases and their pathogenic mechanisms. PMID:25640959
Performability evaluation of the SIFT computer
NASA Technical Reports Server (NTRS)
Meyer, J. F.; Furchtgott, D. G.; Wu, L. T.
1979-01-01
Performability modeling and evaluation techniques are applied to the SIFT computer as it might operate in the computational environment of an air transport mission. User-visible performance of the total system (SIFT plus its environment) is modeled as a random variable taking values in a set of levels of accomplishment. These levels are defined in terms of four attributes of total system behavior: safety, no change in mission profile, no operational penalties, and no economic loss. The base model is a stochastic process whose states describe the internal structure of SIFT as well as relevant conditions of the environment. Base model state trajectories are related to accomplishment levels via a capability function which is formulated in terms of a 3-level model hierarchy. Performability evaluation algorithms are then applied to determine the performability of the total system for various choices of computer and environment parameter values. Numerical results of those evaluations are presented and, in conclusion, some implications of this effort are discussed.
Mathematical model for lift/cruise fan V/STOL aircraft simulator programming data
NASA Technical Reports Server (NTRS)
Bland, M. P.; Fajfar, B.; Konsewicz, R. K.
1976-01-01
Simulation data are reported for the purpose of programming the flight simulator for advanced aircraft for tests of the lift/cruise fan V/STOL Research Technology Aircraft. These simulation tests are to provide insight into problem areas which are encountered in operational use of the aircraft. A mathematical model is defined in sufficient detail to represent all the necessary pertinent aircraft and system characteristics. The model includes the capability to simulate two basic versions of an aircraft propulsion system: (1) the gas coupled configuration which uses insulated air ducts to transmit power between gas generators and fans in the form of high energy engine exhaust and (2) the mechanically coupled power system which uses shafts, clutches, and gearboxes for power transmittal. Both configurations are modeled such that the simulation can include vertical as well as rolling takeoff and landing, hover, powered lift flight, aerodynamic flight, and the transition between powered lift and aerodynamic flight.
Software Systems for High-performance Quantum Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humble, Travis S; Britt, Keith A
Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems, as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present models for quantum programming and execution, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.
General Training System; GENTRAS. Final Report.
ERIC Educational Resources Information Center
International Business Machines Corp., Gaithersburg, MD. Federal Systems Div.
GENTRAS (General Training System) is a computer-based training model for the Marine Corps which makes use of a systems approach. The model defines the skill levels applicable for career growth and classifies and defines the training needed for this growth. It also provides a training cost subsystem which will provide a more efficient means of…
Development of a large scale Chimera grid system for the Space Shuttle Launch Vehicle
NASA Technical Reports Server (NTRS)
Pearce, Daniel G.; Stanley, Scott A.; Martin, Fred W., Jr.; Gomez, Ray J.; Le Beau, Gerald J.; Buning, Pieter G.; Chan, William M.; Chiu, Ing-Tsau; Wulf, Armin; Akdag, Vedat
1993-01-01
The application of CFD techniques to large problems has dictated the need for large team efforts. This paper offers an opportunity to examine the motivations, goals, needs, and problems, as well as the methods, tools, and constraints, that defined NASA's development of a 111 grid/16 million point grid system model for the Space Shuttle Launch Vehicle. The Chimera approach used for domain decomposition encouraged separation of the complex geometry into several major components, each of which was modeled by an autonomous team. ICEM-CFD, a CAD-based grid generation package, simplified the geometry and grid topology definition by providing mature CAD tools and patch-independent meshing. The resulting grid system has, on average, a four inch resolution along the surface.
Seiniger, Patrick; Bartels, Oliver; Pastor, Claus; Wisch, Marcus
2013-01-01
It is commonly agreed that active safety will have a significant impact on reducing accident figures for pedestrians and probably also bicyclists. However, the chances and limitations of active safety systems have so far been derived only from accident data and the current state of the art, using proprietary simulation models. The objective of this article is to investigate these chances and limitations by developing an open simulation model. This article introduces a simulation model incorporating accident kinematics, driving dynamics, driver reaction times, pedestrian dynamics, performance parameters of different autonomous emergency braking (AEB) generations, as well as legal and logical limitations. The level of detail in available pedestrian accident data is limited. Relevant variables, especially the timing of the pedestrian's appearance and the pedestrian's moving speed, are estimated using assumptions. The model uses the fact that a pedestrian and a vehicle in an accident must have been in the same spot at the same time, and defines the impact position as a relevant accident parameter, which is usually available from accident data. The calculations done within the model identify the time available for braking by an AEB system as well as the possible speed reduction for different accident scenarios and different system configurations. The simulation model identifies the lateral impact position of the pedestrian as a significant parameter for system performance, and the system layout is designed to brake when the accident becomes unavoidable by the vehicle driver. Scenarios with a pedestrian running from behind an obstruction are the most demanding and will very likely never be avoidable for all vehicle speeds due to physical limits. Scenarios with an unobstructed walking person will very likely be treatable over a wide speed range for next-generation AEB systems.
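To illustrate the kind of kinematic calculation involved, a back-of-envelope sketch follows; all parameter values (speed, deceleration, latency, triggering time-to-collision) are assumptions for illustration, not the article's calibrated figures.

```python
import numpy as np

# Back-of-envelope AEB speed-reduction sketch using generic braking kinematics.
v0 = 50 / 3.6          # initial vehicle speed [m/s] (50 km/h)
a = 9.0                # full braking deceleration [m/s^2]
t_sys = 0.3            # sensing + actuation latency [s]
ttc_brake = 0.8        # time-to-collision at which the accident becomes
                       # unavoidable by the driver and the AEB may fire [s]

# Distance to the impact point still available for braking after the latency.
d = max(0.0, (ttc_brake - t_sys) * v0)
# Impact speed from v^2 = v0^2 - 2 a d, clipped at a full stop.
v_impact = np.sqrt(max(0.0, v0**2 - 2 * a * d))
print(f"impact speed {v_impact * 3.6:.1f} km/h "
      f"(reduction {(v0 - v_impact) * 3.6:.1f} km/h)")
```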
A Model-Driven, Science Data Product Registration Service
NASA Astrophysics Data System (ADS)
Hardman, S.; Ramirez, P.; Hughes, J. S.; Joyner, R.; Cayanan, M.; Lee, H.; Crichton, D. J.
2011-12-01
The Planetary Data System (PDS) has undertaken an effort to overhaul the PDS data architecture (including the data model, data structures, data dictionary, etc.) and to deploy an upgraded software system (including data services, distributed data catalog, etc.) that fully embraces the PDS federation as an integrated system while taking advantage of modern innovations in information technology (including networking capabilities, processing speeds, and software breakthroughs). A core component of this new system is the Registry Service that will provide functionality for tracking, auditing, locating, and maintaining artifacts within the system. These artifacts can range from data files and label files, schemas, dictionary definitions for objects and elements, documents, services, etc. This service offers a single reference implementation of the registry capabilities detailed in the Consultative Committee for Space Data Systems (CCSDS) Registry Reference Model White Book. The CCSDS Reference Model in turn relies heavily on the Electronic Business using eXtensible Markup Language (ebXML) standards for registry services and the registry information model, managed by the OASIS consortium. Registries are pervasive components in most information systems. For example, data dictionaries, service registries, LDAP directory services, and even databases provide registry-like services. These all include an account of informational items that are used in large-scale information systems ranging from data values such as names and codes, to vocabularies, services and software components. The problem is that many of these registry-like services were designed with their own data models associated with the specific type of artifact they track. Additionally these services each have their own specific interface for interacting with the service. This Registry Service implements the data model specified in the ebXML Registry Information Model (RIM) specification that supports the various artifacts above as well as offering the flexibility to support customer-defined artifacts. Key features for the Registry Service include: - Model-based configuration specifying customer-defined artifact types, metadata attributes to capture for each artifact type, supported associations and classification schemes. - A REST-based external interface that is accessible via the Hypertext Transfer Protocol (HTTP). - Federation of Registry Service instances allowing associations between registered artifacts across registries as well as queries for artifacts across those same registries. A federation also enables features such as replication and synchronization if desired for a given deployment. In addition to its use as a core component of the PDS, the generic implementation of the Registry Service facilitates its applicability as a core component in any science data archive or science data system.
SimCheck: An Expressive Type System for Simulink
NASA Technical Reports Server (NTRS)
Roy, Pritam; Shankar, Natarajan
2010-01-01
MATLAB Simulink is a member of a class of visual languages that are used for modeling and simulating physical and cyber-physical systems. A Simulink model consists of blocks with input and output ports connected using links that carry signals. We extend the type system of Simulink with annotations and dimensions/units associated with ports and links. These types can capture invariants on signals as well as relations between signals. We define a type-checker that checks the well-formedness of Simulink blocks with respect to these type annotations. The type checker generates proof obligations that are solved by SRI's Yices solver for satisfiability modulo theories (SMT). This translation can be used to detect type errors, demonstrate counterexamples, generate test cases, or prove the absence of type errors. Our work is an initial step toward the symbolic analysis of MATLAB Simulink models.
3D thermography for improving temperature measurements in thermal vacuum testing
NASA Astrophysics Data System (ADS)
Robinson, D. W.; Simpson, R.; Parian, J. A.; Cozzani, A.; Casarosa, G.; Sablerolle, S.; Ertel, H.
2017-09-01
The application of thermography to thermal vacuum (TV) testing of spacecraft is becoming a vital additional tool for mapping structures during thermal cycles and thermal balance (TB) testing. Many of the customers at the European Space Agency (ESA) test centre, the European Space Research and Technology Centre (ESTEC), The Netherlands, now make use of a thermal camera during TB-TV campaigns. This complements the use of embedded thermocouples on the structure, providing the prospect of monitoring temperatures at high resolution and high frequency. For simple flat structures with a well-defined emissivity, it is possible to determine the surface temperatures with reasonable confidence. However, for most real spacecraft and sub-systems, the complexity of the structure's shape and its test environment creates inter-reflections from external structures. This, and the additional complication of angular and spectral variations of the spacecraft surface emissivity, makes the interpretation of the radiation detected by a thermal camera more difficult in terms of determining a validated temperature with high confidence and well-defined uncertainty. One solution to this problem is: to map the geometry of the test specimen and thermal test environment; to model the surface temperatures and emissivity variations of the structures and materials; and to use this model to correct the apparent temperatures recorded by the thermal camera. This approach has been used by a team from NPL (National Physical Laboratory), Psi-tran, and PhotoCore, working with ESA, to develop a 3D thermography system that provides a means to validate thermal camera temperatures, based on a combination of thermal imaging photogrammetry and ray-tracing scene modeling. The system has been tested at ESTEC in ambient conditions with a dummy spacecraft structure containing a representative set of surface temperatures, shapes, and spacecraft materials, and with hot external sources and a high-power lamp as a sun simulator. The results are presented here with estimated temperature measurement uncertainties and defined confidence levels according to the internationally accepted Guide to the Expression of Uncertainty in Measurement, as used in the ISO/IEC 17025 test and measurement standard. This work is understood to represent the first application of well-understood thermal imaging theory, commercial photogrammetry software, and open-source ray-tracing software (adapted to realize the Planck function for thermal wavebands and target emission), combined to produce a complete system for determining true surface temperatures for complex spacecraft-testing applications.
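The ray-tracing adaptation mentioned above evaluates the Planck function over the camera's thermal waveband. A minimal sketch of that evaluation (generic physics; the 8-14 micron band, emissivity, and temperatures are assumed values, not the project's configuration):

```python
import numpy as np

# Planck spectral radiance and its integral over an assumed LWIR band (8-14 um).
h = 6.626e-34   # Planck constant [J s]
c = 2.998e8     # speed of light [m/s]
kB = 1.381e-23  # Boltzmann constant [J/K]

def planck(lam, T):
    """Spectral radiance [W m^-2 sr^-1 m^-1] at wavelength lam [m], temperature T [K]."""
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

def band_radiance(T, lam_lo=8e-6, lam_hi=14e-6, n=2000):
    lam = np.linspace(lam_lo, lam_hi, n)
    return np.sum(planck(lam, T)) * (lam[1] - lam[0])   # rectangle-rule integral

# Apparent signal of a grey surface (emissivity eps) plus reflected background:
eps, T_surf, T_bg = 0.85, 300.0, 290.0
L_app = eps * band_radiance(T_surf) + (1 - eps) * band_radiance(T_bg)
print(f"in-band apparent radiance: {L_app:.2f} W m^-2 sr^-1")
```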
AgRISTARS. Supporting research: Algorithms for scene modelling
NASA Technical Reports Server (NTRS)
Rassbach, M. E. (Principal Investigator)
1982-01-01
The requirements for a comprehensive analysis of LANDSAT or other visual data scenes are defined. The development of a general model of a scene and a computer algorithm for finding the particular model for a given scene is discussed. The modelling system includes a boundary analysis subsystem, which detects all the boundaries and lines in the image and builds a boundary graph; a continuous variation analysis subsystem, which finds gradual variations not well approximated by a boundary structure; and a miscellaneous features analysis, which includes texture, line parallelism, etc. The noise reduction capabilities of this method and its use in image rectification and registration are discussed.
BioModels.net Web Services, a free and integrated toolkit for computational modelling software.
Li, Chen; Courtot, Mélanie; Le Novère, Nicolas; Laibe, Camille
2010-05-01
Exchanging and sharing scientific results are essential for researchers in the field of computational modelling. BioModels.net defines agreed-upon standards for model curation. A fundamental one, MIRIAM (Minimum Information Requested in the Annotation of Models), standardises the annotation and curation process of quantitative models in biology. To support this standard, MIRIAM Resources maintains a set of standard data types for annotating models, and provides services for manipulating these annotations. Furthermore, BioModels.net creates controlled vocabularies, such as SBO (Systems Biology Ontology), which strictly indexes, defines and links terms used in Systems Biology. Finally, BioModels Database provides a free, centralised, publicly accessible database for storing, searching and retrieving curated and annotated computational models. Each resource provides a web interface to submit, search, retrieve and display its data. In addition, the BioModels.net team provides a set of Web Services which allows the community to programmatically access the resources. A user is then able to perform remote queries, such as retrieving a model and resolving all its MIRIAM Annotations, as well as getting the details about the associated SBO terms. These web services use established standards. Communications rely on SOAP (Simple Object Access Protocol) messages and the available queries are described in a WSDL (Web Services Description Language) file. Several libraries are provided in order to simplify the development of client software. BioModels.net Web Services take researchers one step further towards simulating and understanding a biological system in its entirety, by allowing them to retrieve biological models from within their own tools, combine queries in workflows and analyse models efficiently.
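Because the services are exposed as plain SOAP/WSDL, they can be consumed from any standard SOAP toolkit. The Python sketch below uses the zeep library; the WSDL URL and the operation name retrieveModel are hypothetical placeholders, since the actual operations are defined in the published WSDL file.

    from zeep import Client

    # Hypothetical WSDL location; the real endpoint is published by BioModels.net
    WSDL_URL = "https://example.org/biomodels/services?wsdl"

    client = Client(WSDL_URL)

    # Operation names come from the WSDL; 'retrieveModel' is an assumed example
    model_sbml = client.service.retrieveModel("BIOMD0000000001")
    print(model_sbml[:200])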
A thermal control approach for a solar electric propulsion thrust subsystem
NASA Technical Reports Server (NTRS)
Maloy, J. E.; Oglebay, J. C.
1979-01-01
A thrust subsystem thermal control design is defined for a Solar Electric Propulsion System (SEPS) proposed for the comet Halley flyby/comet Tempel 2 rendezvous mission. A 114-node analytic model, developed and coded in the Systems Improved Numerical Differencing Analyzer (SINDA) program, was employed. A description of the resulting thrust subsystem thermal design is presented, as well as a description of the analytic model and comparisons of the predicted temperature profiles for various SEPS thermal configurations generated using this model. It was concluded that: (1) a BIMOD engine system thermal design can be autonomous; (2) an independent thrust subsystem thermal design is feasible; (3) the interface module electronics temperatures can be controlled by a passive radiator and supplementary heaters; (4) maintaining heat pipes above the freezing point would require an additional 322 watts of supplementary heating power when no thrusters are operating; (5) insulation is required around the power processors, between the interface module and the avionics module, and in those areas which may be subjected to solar heating; and (6) insulation behind the heat pipe radiators is not necessary.
NASA Astrophysics Data System (ADS)
Vasquez, D. A.; Swift, J. N.; Tan, S.; Darrah, T. H.
2013-12-01
The integration of precise geochemical analyses with quantitative engineering modeling into an interactive GIS system allows for a sophisticated and efficient method of reservoir engineering and characterization. Geographic Information Systems (GIS) are utilized as an advanced technique for oil field reservoir analysis by combining field engineering and geological/geochemical spatial datasets with the available systematic modeling and mapping methods, integrating the information into a spatially correlated first-hand approach to defining surface and subsurface characteristics. Three key methods of analysis include: 1) geostatistical modeling to create a static and volumetric 3-dimensional representation of the geological body, 2) numerical modeling to develop a dynamic and interactive 2-dimensional model of fluid flow across the reservoir, and 3) noble gas geochemistry to further define the physical conditions, components and history of the geologic system. Results thus far include using engineering algorithms for interpolating electrical well log properties across the field (spontaneous potential, resistivity), yielding a highly accurate and high-resolution 3D model of rock properties, and using numerical finite difference methods (Crank-Nicolson) to solve the equations describing the distribution of pressure across the field, yielding a 2D simulation model of fluid flow across the reservoir. Ongoing noble gas geochemistry work will also include determination of the source, thermal maturity and the extent/style of fluid migration (connectivity, continuity and directionality). Future work will include developing an inverse engineering algorithm to model permeability, porosity and water saturation. This combination of new and efficient technological and analytical capabilities is geared to provide a better understanding of the field geology and hydrocarbon dynamics, with applications to determining the presence of hydrocarbon pay zones (or other reserves) and improving oil field management (e.g. perforating, drilling, EOR and reserves estimation).
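As a sketch of the Crank-Nicolson approach mentioned above, the following applies the scheme to the one-dimensional linear pressure-diffusion equation p_t = alpha * p_xx; the grid size, time step, diffusivity, and boundary treatment are illustrative assumptions, not values from the study.

    import numpy as np

    def crank_nicolson_pressure(p0, alpha, dx, dt, n_steps):
        """March the 1D pressure-diffusion equation p_t = alpha * p_xx
        with the Crank-Nicolson scheme (fixed-pressure boundaries)."""
        n = len(p0)
        r = alpha * dt / (2.0 * dx**2)

        # Build implicit (A) and explicit (B) tridiagonal operators
        A = np.eye(n) * (1 + 2 * r)
        B = np.eye(n) * (1 - 2 * r)
        for i in range(n - 1):
            A[i, i + 1] = A[i + 1, i] = -r
            B[i, i + 1] = B[i + 1, i] = r

        # Dirichlet boundaries: keep end pressures fixed
        A[0, :] = 0; A[0, 0] = 1; B[0, :] = 0; B[0, 0] = 1
        A[-1, :] = 0; A[-1, -1] = 1; B[-1, :] = 0; B[-1, -1] = 1

        p = p0.copy()
        for _ in range(n_steps):
            p = np.linalg.solve(A, B @ p)
        return p

    # Illustrative run: overpressure pulse relaxing across a 100-cell grid
    p_init = np.full(100, 2.0e7)          # 20 MPa background
    p_init[45:55] = 3.0e7                 # local overpressure
    p_final = crank_nicolson_pressure(p_init, alpha=1.0e-2, dx=10.0,
                                      dt=3600.0, n_steps=24)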
Applying AI systems in the T and D arena. [Artificial Intelligence, Transmission and Distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venkata, S.S.; Liu, Chenching; Sumic, Z.
1993-04-01
The power engineering community has capitalized on various computer technologies since the early 1960s, with the most successful applications solving well-defined problems that are capable of being modeled. Although computing methods have made notable progress in the power engineering arena, there is still a class of problems that is not easy to define or formulate for conventional computerized methods. In addition to being difficult to express in closed mathematical form, these problems are often characterized by the absence of one or both of the following features: a predetermined decision path from the initial state to the goal (ill-structured problems); well-defined criteria for whether an obtained solution is acceptable (open-ended problems). Power engineers have been investigating the application of AI-based methodologies to power system problems. Most of the work in the past has been geared towards the development of expert systems as an operator's aid in energy control centers for bulk power transmission systems operating under abnormal conditions. Alarm processing, fault diagnosis, system restoration, and voltage/var control are a few key areas where significant research work has progressed to date. This research has yielded more than 100 prototype expert systems for power systems throughout the US, Japan, and Europe. The objectives of this article are to: expose engineers to the benefits of using AI methods for a host of transmission and distribution (T and D) problems that need immediate attention; identify problems that could be solved more effectively by applying AI approaches; and summarize recent developments and successful AI applications in T and D.
APGEN Scheduling: 15 Years of Experience in Planning Automation
NASA Technical Reports Server (NTRS)
Maldague, Pierre F.; Wissler, Steve; Lenda, Matthew; Finnerty, Daniel
2014-01-01
In this paper, we discuss the scheduling capability of APGEN (Activity Plan Generator), a multi-mission planning application that is part of the NASA AMMOS (Advanced Multi-Mission Operations System), and how APGEN scheduling evolved over its applications to specific space missions. Our analysis identifies two major reasons for the successful application of APGEN scheduling to real problems: an expressive DSL (Domain-Specific Language) for formulating scheduling algorithms, and a well-defined process for enlisting the help of auxiliary modeling tools in providing high-fidelity, system-level simulations of the combined spacecraft and ground support system.
Space station advanced automation
NASA Technical Reports Server (NTRS)
Woods, Donald
1990-01-01
In the development of a safe, productive and maintainable space station, Automation and Robotics (A and R) has been identified as an enabling technology which will allow efficient operation at a reasonable cost. The Space Station Freedom's (SSF) systems are very complex and interdependent. The use of Advanced Automation (AA) will help restructure and integrate system status so that station and ground personnel can operate more efficiently. Using AA technology to augment system management functions requires a development model consisting of well-defined phases: evaluation, development, integration, and maintenance. The evaluation phase will consider system management functions against traditional solutions, implementation techniques and requirements; the end result of this phase should be a well-developed concept along with a feasibility analysis. In the development phase the AA system will be developed in accordance with a traditional Life Cycle Model (LCM) modified for Knowledge Based System (KBS) applications. A way in which both knowledge bases and reasoning techniques can be reused to control costs is explained. During the integration phase the KBS software must be integrated with conventional software, and verified and validated. The Verification and Validation (V and V) techniques applicable to these KBS are based on the ideas of consistency, minimal competency, and graph theory. The maintenance phase will be aided by having well designed and documented KBS software.
Identification of the focal plane wavefront control system using E-M algorithm
NASA Astrophysics Data System (ADS)
Sun, He; Kasdin, N. Jeremy; Vanderbei, Robert
2017-09-01
In a typical focal plane wavefront control (FPWC) system, such as the adaptive optics system of NASA's WFIRST mission, the efficient controllers and estimators in use are usually model-based. As a result, the modeling accuracy of the system influences the ultimate performance of the control and estimation. Currently, a linear state space model is used, calculated from lab measurements using Fourier optics. Although the physical model is clearly defined, it is usually biased due to incorrect distance measurements, imperfect diagnoses of the optical aberrations, and our lack of knowledge of the deformable mirrors (actuator gains and influence functions). In this paper, we present a new approach for measuring/estimating the linear state space model of a FPWC system using the expectation-maximization (E-M) algorithm. Simulation and lab results from Princeton's High Contrast Imaging Lab (HCIL) show that the E-M algorithm handles both amplitude and phase errors well and accurately recovers the system. Using the recovered state space model, the controller creates dark holes more quickly. The final accuracy of the model depends on the amount of data used for learning.
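The identification step can be prototyped with off-the-shelf tools: the pykalman library implements expectation-maximization for linear-Gaussian state-space models. The sketch below is a generic illustration with assumed dimensions and matrices, not the HCIL code; in the FPWC setting the states would be linearized electric-field components and the observations camera intensities.

    import numpy as np
    from pykalman import KalmanFilter

    rng = np.random.default_rng(0)

    # Simulate data from a "true" 2-state linear system (stand-in for the
    # linearized focal-plane model; dimensions are illustrative)
    A_true = np.array([[0.99, 0.05], [0.0, 0.97]])
    C_true = np.array([[1.0, 0.3]])
    x = np.zeros(2)
    ys = []
    for _ in range(500):
        x = A_true @ x + rng.normal(scale=0.01, size=2)
        ys.append(C_true @ x + rng.normal(scale=0.02, size=1))
    ys = np.asarray(ys)

    # E-M: start from a deliberately biased model and let the data correct it
    kf = KalmanFilter(transition_matrices=np.eye(2) * 0.9,
                      observation_matrices=np.array([[1.0, 0.0]]))
    kf = kf.em(ys, n_iter=20,
               em_vars=['transition_matrices', 'observation_matrices',
                        'transition_covariance', 'observation_covariance'])

    # Should approach A_true (up to a similarity transform of the state)
    print(kf.transition_matrices)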
All-atom ensemble modeling to analyze small angle X-ray scattering of glycosylated proteins
Guttman, Miklos; Weinkam, Patrick; Sali, Andrej; Lee, Kelly K.
2013-01-01
The flexible and heterogeneous nature of carbohydrate chains often renders glycoproteins refractory to traditional structure determination methods. Small-angle X-ray scattering (SAXS) can be a useful tool for obtaining structural information on these systems. All-atom modeling of glycoproteins with flexible glycan chains was applied to interpret the solution SAXS data for a set of glycoproteins. For simpler systems (a single glycan, with a well-defined protein structure), all-atom modeling generates models in excellent agreement with the scattering pattern, and reveals the approximate spatial occupancy of the glycan chain in solution. For more complex systems (several glycan chains, or unknown protein substructure), the approach can still provide insightful models, though the orientations of glycans become poorly determined. Ab initio shape reconstructions appear to capture the global morphology of glycoproteins, but in most cases offer little information about glycan spatial occupancy. The all-atom modeling methodology is available as a webserver at http://modbase.compbio.ucsf.edu/allosmod-foxs. PMID:23473666
Outer satellite atmospheres: Their nature and planetary interactions
NASA Technical Reports Server (NTRS)
Smyth, W. H.; Combi, M. R.
1982-01-01
Significant progress is reported in early modeling analysis of observed sodium cloud images with our new model which includes the oscillating Io plasma torus ionization sink. Both the general w-D morphology of the region B cloud as well as the large spatial gradient seen between the region A and B clouds are found to be consistent with an isotropic flux of sodium atoms from Io. Model analysis of the spatially extended high velocity directional features provided substantial evidence for a magnetospheric wind driven gas escape mechanism from Io. In our efforts to define the source(s) of hydrogen atoms in the Saturn system, major steps were taken in order to understand the role of Titan. We have completed the comparison of the Voyager UVS data with previous Titan model results, as well as the update of the old model computer code to handle the spatially varying ionization sink for H atoms.
NASA Astrophysics Data System (ADS)
Rogers, Justin S.; Monismith, Stephen G.; Fringer, Oliver B.; Koweek, David A.; Dunbar, Robert B.
2017-02-01
We present a hydrodynamic analysis of an atoll system from modeling simulations using a coupled wave and three-dimensional hydrodynamic model (COAWST) applied to Palmyra Atoll in the Central Pacific. This is the first time the vortex force formalism has been applied in a highly frictional reef environment. The model results agree well with field observations, considering the model complexity in terms of bathymetry, bottom roughness, forcing (waves, wind, meteorological conditions, tides, regional boundary conditions), and open boundary conditions. At the atoll scale, strong regional flows create flow separation and a well-defined wake, similar to 2D flow past a cylinder. Circulation within the atoll is typically forced by waves and tides, with strong waves from the north driving flow from north to south across the atoll, and from east to west through the lagoon system. Bottom stress is significant for depths less than about 60 m and, in addition to the model bathymetry, is important for correct representation of flow in the model. Connectivity within the atoll system shows that the general trends follow the mean flow paths. However, some connectivity exists between all regions of the atoll system due to nonlinear processes such as eddies and tidal phasing. Moderate wave stress, short travel time (days since entering the reef system), and low temperature appear to be the most ideal conditions for high coral cover at this site.
NASA Astrophysics Data System (ADS)
Rogers, J.; Monismith, S. G.; Fringer, O. B.; Koweek, D.; Dunbar, R. B.
2016-12-01
We present a hydrodynamic analysis of an atoll system from modeling simulations using a coupled wave and three-dimensional hydrodynamic model (COAWST) applied to Palmyra Atoll in the Central Pacific. This is the first time the vortex force formalism has been applied in a highly frictional reef environment. The model results agree well with field observations, considering the model complexity in terms of bathymetry, bottom roughness, forcing (waves, wind, meteorological conditions, tides, regional boundary conditions), and open boundary conditions. At the atoll scale, strong regional flows create flow separation and a well-defined wake, similar to 2D flow past a cylinder. Circulation within the atoll is typically forced by waves and tides, with strong waves from the north driving flow from north to south across the atoll, and from east to west through the lagoon system. Bottom stress is significant for depths less than about 60 m and, in addition to the model bathymetry, is important for correct representation of flow in the model. Connectivity within the atoll system shows that the general trends follow the mean flow paths. However, some connectivity exists between all regions of the atoll system due to nonlinear processes such as eddies and tidal phasing. While high mean flow and travel time less than 20 hours appear to differentiate very productive coral regions, low temperature and moderate wave stress appear to be the most ideal conditions for high coral cover on Palmyra.
ERIC Educational Resources Information Center
Grimes, Matthew T.; Harley, Carolyn W.; Darby-King, Andrea; McLean, John H.
2012-01-01
Neonatal odor-preference memory in rat pups is a well-defined associative mammalian memory model dependent on cAMP. Previous work from this laboratory demonstrates three phases of neonatal odor-preference memory: short-term (translation-independent), intermediate-term (translation-dependent), and long-term (transcription- and…
Ohashi, Hidenori; Tamaki, Takanori; Yamaguchi, Takeo
2011-12-29
Molecular collisions, which are the microscopic origin of molecular diffusive motion, are affected by both the molecular surface area and the distance between molecules. Their product can be regarded as the free space around a penetrant molecule, defined as the "shell-like free volume", and can be taken as a characteristic of molecular collisions. On the basis of this notion, a new diffusion theory has been developed. The model can predict molecular diffusivity in polymeric systems using only well-defined single-component parameters: molecular volume, molecular surface area, free volume, and pre-exponential factors. In the physical picture of the model, the body that actually moves, and the neighboring molecules it collides with, are characterized by the volume and surface area of the penetrant molecular core. In the present study, a semiempirical quantum chemical calculation was used to calculate both of these parameters. The model and the newly developed parameters offer fairly good predictive ability. © 2011 American Chemical Society
Steady-state kinetic modeling constrains cellular resting states and dynamic behavior.
Purvis, Jeremy E; Radhakrishnan, Ravi; Diamond, Scott L
2009-03-01
A defining characteristic of living cells is the ability to respond dynamically to external stimuli while maintaining homeostasis under resting conditions. Capturing both of these features in a single kinetic model is difficult because the model must be able to reproduce both behaviors using the same set of molecular components. Here, we show how combining small, well-defined steady-state networks provides an efficient means of constructing large-scale kinetic models that exhibit realistic resting and dynamic behaviors. By requiring each kinetic module to be homeostatic (at steady state under resting conditions), the method proceeds by (i) computing steady-state solutions to a system of ordinary differential equations for each module, (ii) applying principal component analysis to each set of solutions to capture the steady-state solution space of each module network, and (iii) combining optimal search directions from all modules to form a global steady-state space that is searched for accurate simulation of the time-dependent behavior of the whole system upon perturbation. Importantly, this stepwise approach retains the nonlinear rate expressions that govern each reaction in the system and enforces constraints on the range of allowable concentration states for the full-scale model. These constraints not only reduce the computational cost of fitting experimental time-series data but can also provide insight into limitations on system concentrations and architecture. To demonstrate application of the method, we show how small kinetic perturbations in a modular model of platelet P2Y(1) signaling can cause widespread compensatory effects on cellular resting states.
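The first two steps of this procedure, solving a module to steady state and extracting principal directions from an ensemble of steady-state solutions, can be sketched generically as follows. The two-species module and rate-constant ranges are invented for illustration; they are not the platelet P2Y(1) network from the paper.

    import numpy as np
    from scipy.optimize import fsolve
    from sklearn.decomposition import PCA

    def module_rhs(c, k):
        """Toy 2-species kinetic module: A <-> B with nonlinear feedback."""
        a, b = c
        return np.array([k[0] - k[1] * a + k[2] * b * b,
                         k[1] * a - k[2] * b * b - k[3] * b])

    rng = np.random.default_rng(1)

    # Step (i): steady-state solutions over sampled rate constants
    steady_states = []
    for _ in range(200):
        k = rng.uniform(0.1, 2.0, size=4)
        ss = fsolve(module_rhs, x0=np.ones(2), args=(k,))
        if np.all(ss > 0) and np.allclose(module_rhs(ss, k), 0, atol=1e-8):
            steady_states.append(ss)

    # Step (ii): PCA captures the module's steady-state solution space
    pca = PCA(n_components=2).fit(np.asarray(steady_states))
    print(pca.explained_variance_ratio_)

    # Step (iii) in the paper combines such directions across modules into a
    # global steady-state space searched against time-series data (not shown).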
Controlling measurement-induced nonlocality in the Heisenberg XX model by three-spin interactions
NASA Astrophysics Data System (ADS)
Xie, Yu-Xia; Sun, Yu-Hang; Li, Zhao
2018-01-01
We investigate well-defined measures of measurement-induced nonlocality (MIN) for thermal states of the transverse-field XX model with three-spin interaction terms introduced. The results show that the MINs are very sensitive to the system parameters of the chain. The three-spin interactions can serve as flexible parameters for enhancing the MINs of the boundary spins, and the maximum enhancement achievable by varying the strengths of the three-spin interactions differs for chains with different numbers of spins.
Jafri, Salema; Ormiston, Mark L
2017-12-01
Systemic hypertension, preeclampsia, and pulmonary arterial hypertension (PAH) are diseases of high blood pressure in the systemic or pulmonary circulation. Beyond the well-defined contribution of more traditional pathophysiological mechanisms, such as changes in the renin-angiotensin-aldosterone system, to the development of these hypertensive disorders, there is substantial clinical evidence supporting an important role for inflammation and immunity in the pathogenesis of each of these three conditions. Over the last decade, work in small animal models, bearing targeted deficiencies in specific cytokines or immune cell subsets, has begun to clarify the immune-mediated mechanisms that drive changes in vascular structure and tone in hypertensive disease. By summarizing the clinical and experimental evidence supporting a contribution of the immune system to systemic hypertension, preeclampsia, and PAH, the current review highlights the cellular and molecular pathways that are common to all three hypertensive disorders. These mechanisms are centered on an imbalance in CD4+ helper T cell populations, defined by excessive Th17 responses and impaired Treg activity, as well as the excessive activation or impairment of additional immune cell types, including macrophages, dendritic cells, CD8+ T cells, B cells, and natural killer cells. The identification of common immune mechanisms in systemic hypertension, preeclampsia, and PAH raises the possibility of new therapeutic strategies that target the immune component of hypertension across multiple disorders. Copyright © 2017 the American Physiological Society.
Computer simulation of surface and film processes
NASA Technical Reports Server (NTRS)
Tiller, W. A.; Halicioglu, M. T.
1984-01-01
All the investigations performed employed, in one way or another, a computer simulation technique based on atomistic-level considerations. In general, three types of simulation methods were used for modeling systems of discrete particles that interact via well-defined potential functions: molecular dynamics (a general method for solving the classical equations of motion of a model system); Monte Carlo (the use of Markov-chain ensemble-averaging techniques to model equilibrium properties of a system); and molecular statics (which provides properties of a system at T = 0 K). The effects of three-body forces on the vibrational frequencies of triatomic clusters were investigated. The multilayer relaxation phenomena for low-index planes of an fcc crystal were also analyzed as a function of the three-body interactions. Various surface properties for the Si and SiC systems were calculated. Results obtained from static simulation calculations for slip formation were presented. The more elaborate molecular dynamics calculations on the propagation of cracks in two-dimensional systems were outlined.
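For flavor, the sketch below runs a Metropolis Monte Carlo (Markov-chain ensemble averaging) for a small cluster interacting through a Lennard-Jones pair potential; the potential, temperature, and move size are generic illustrative choices, not the Si/SiC potentials of the study (which also included three-body terms).

    import numpy as np

    rng = np.random.default_rng(42)

    def lj_energy(pos, eps=1.0, sigma=1.0):
        """Total Lennard-Jones energy of a configuration (reduced units)."""
        e = 0.0
        n = len(pos)
        for i in range(n):
            for j in range(i + 1, n):
                r = np.linalg.norm(pos[i] - pos[j])
                e += 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
        return e

    # Metropolis Monte Carlo: canonical-ensemble sampling at temperature kT
    kT = 0.2
    pos = rng.normal(scale=0.8, size=(4, 3))   # 4-atom cluster
    energy = lj_energy(pos)
    samples = []
    for step in range(20000):
        trial = pos.copy()
        trial[rng.integers(len(pos))] += rng.normal(scale=0.05, size=3)
        e_trial = lj_energy(trial)
        # Metropolis acceptance rule (downhill always, uphill with Boltzmann prob.)
        if e_trial < energy or rng.random() < np.exp(-(e_trial - energy) / kT):
            pos, energy = trial, e_trial
        if step > 5000:                        # discard burn-in
            samples.append(energy)

    print("mean energy:", np.mean(samples))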
Rieder, Florian; Kessler, Sean; Sans, Miquel
2012-01-01
Fibrosis is a serious condition complicating chronic inflammatory processes affecting the intestinal tract. Advances in this field that rely on human studies have been slow and seriously restricted by practical and logistic reasons. As a consequence, well-characterized animal models of intestinal fibrosis have emerged as logical and essential systems to better define and understand the pathophysiology of fibrosis. In point of fact, animal models allow the execution of mechanistic studies as well as the implementation of clinical trials with novel, pathophysiology-based therapeutic approaches. This review provides an overview of the currently available animal models of intestinal fibrosis, taking into consideration the methods of induction, key characteristics of each model, and underlying mechanisms. Currently available models will be classified into seven categories: spontaneous, gene-targeted, chemical-, immune-, bacteria-, and radiation-induced as well as postoperative fibrosis. Each model will be discussed in regard to its potential to create research opportunities to gain insights into the mechanisms of intestinal fibrosis and stricture formation and assist in the development of effective and specific antifibrotic therapies. PMID:22878121
Future Visions of the Brahmaputra - Establishing Hydrologic Baseline and Water Resources Context
NASA Astrophysics Data System (ADS)
Ray, P. A.; Yang, Y. E.; Wi, S.; Brown, C. M.
2013-12-01
The Brahmaputra River Basin (China-India-Bhutan-Bangladesh) is on the verge of a transition from a largely free flowing and highly variable river to a basin of rapid investment and infrastructure development. This work demonstrates a knowledge platform for the basin that compiles available data, and develops hydrologic and water resources system models of the basin. A Variable Infiltration Capacity (VIC) model of the Brahmaputra basin supplies hydrologic information of major tributaries to a water resources system model, which routes runoff generated via the VIC model through water infrastructure, and accounts for water withdrawals for agriculture, hydropower generation, municipal demand, return flows and others human activities. The system model also simulates agricultural production and the economic value of water in its various uses, including municipal, agricultural, and hydropower. Furthermore, the modeling framework incorporates plausible climate change scenarios based on the latest projections of changes to contributing glaciers (upstream), as well as changes to monsoon behavior (downstream). Water resources projects proposed in the Brahmaputra basin are evaluated based on their distribution of benefits and costs in the absence of well-defined water entitlements, and relative to a complex regional water-energy-food nexus. Results of this project will provide a basis for water sharing negotiation among the four countries and inform trans-national water-energy policy making.
Clinical Diabetes Centers of Excellence: A Model for Future Adult Diabetes Care.
Draznin, Boris; Kahn, Peter A; Wagner, Nicole; Hirsch, Irl B; Korytkowski, Mary; Harlan, David M; McDonnell, Marie E; Gabbay, Robert A
2018-03-01
Although diabetes research centers are well defined by the National Institutes of Health, there is no clear definition of a clinical Diabetes Center of Excellence (DCOE). There are multiple clinical diabetes centers across the United States, some established with philanthropic funding; however, it is not clear what defines a DCOE from a clinical perspective or what the future holds for these centers. In this Perspective we propose a framework to guide the advancement of DCOEs. With the shift toward value-based purchasing and reimbursement and away from fee for service, defining the procedures for broader implementation of DCOEs as a way to improve population health and the patient care experience (including quality and satisfaction) and reduce health care costs becomes critically important. It is prudent to implement new financial systems for compensating diabetes care that may not be provided by fiscally constrained private and academic medical centers. We envision that future clinical DCOEs would be composed of a well-defined infrastructure and six domains or pillars serving as the general guiding principles for developing expertise in diabetes care that can be readily demonstrated to stakeholders, including health care providers, patients, payers, and government agencies.
Golightly, Andrew; Wilkinson, Darren J.
2011-01-01
Computational systems biology is concerned with the development of detailed mechanistic models of biological processes. Such models are often stochastic and analytically intractable, containing uncertain parameters that must be estimated from time course data. In this article, we consider the task of inferring the parameters of a stochastic kinetic model defined as a Markov (jump) process. Inference for the parameters of complex nonlinear multivariate stochastic process models is a challenging problem, but we find here that algorithms based on particle Markov chain Monte Carlo turn out to be a very effective computationally intensive approach to the problem. Approximations to the inferential model based on stochastic differential equations (SDEs) are considered, as well as improvements to the inference scheme that exploit the SDE structure. We apply the methodology to a Lotka–Volterra system and a prokaryotic auto-regulatory network. PMID:23226583
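The Lotka-Volterra system treated in the paper, viewed as a Markov jump process, can be simulated exactly with Gillespie's stochastic simulation algorithm. The sketch below is a minimal forward simulator with illustrative rates; the paper's contribution, particle MCMC inference over the rate parameters given noisy observations, sits on top of simulators of this kind.

    import numpy as np

    rng = np.random.default_rng(7)

    def gillespie_lv(x0, y0, c, t_end):
        """Exact simulation of stochastic Lotka-Volterra:
        prey birth (c[0]), predation (c[1]), predator death (c[2])."""
        t, x, y = 0.0, x0, y0
        path = [(t, x, y)]
        while t < t_end:
            h = np.array([c[0] * x, c[1] * x * y, c[2] * y])  # hazards
            h0 = h.sum()
            if h0 == 0:
                break                       # extinction: no further events
            t += rng.exponential(1.0 / h0)  # time to next reaction
            r = rng.choice(3, p=h / h0)     # which reaction fires
            if r == 0:
                x += 1                      # prey birth
            elif r == 1:
                x -= 1
                y += 1                      # predation
            else:
                y -= 1                      # predator death
            path.append((t, x, y))
        return np.array(path)

    # Illustrative rates; inference would target c given noisy path observations
    path = gillespie_lv(x0=100, y0=100, c=(0.5, 0.0025, 0.3), t_end=30.0)
    print(path[-1])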
Methodology For Evaluation Of Technology Impacts In Space Electric Power Systems
NASA Technical Reports Server (NTRS)
Holda, Julie
2004-01-01
The Analysis and Management branch of the Power and Propulsion Office at NASA Glenn Research Center is responsible for performing complex analyses of the space power and in-space propulsion products developed by GRC. This work quantifies the benefits of the advanced technologies to support ongoing advocacy efforts. The Power and Propulsion Office is committed to understanding how advancements in space technologies could benefit future NASA missions. They support many diverse projects and missions throughout NASA as well as industry and academia. The area of work we are concentrating on is space technology investment strategies. Our goal is to develop a Monte Carlo-based tool to investigate technology impacts in space electric power systems. The framework is being developed at this stage, and will be used to set up a computer simulation of a space electric power system (EPS). The outcome is expected to be a probabilistic assessment of critical technologies and potential development issues. We are developing methods for integrating existing spreadsheet-based tools into the simulation tool, and work is being done on defining interface protocols to enable rapid integration of future tools. The first task was to evaluate Monte Carlo-based simulation programs for statistical modeling of the EPS model: we learned and evaluated Palisade's @RISK and RISKOptimizer software and assessed their capabilities for the EPS model, and also examined similar software packages available from other suppliers (JMP, SPSS, Crystal Ball, VenSim, Analytica). The second task was to develop the framework for the tool, in which we had to define technology characteristics using weighting factors and probability distributions, as well as define the simulation space and add hard and soft constraints to the model. The third task is to incorporate preliminary cost factors into the model. A final task is developing a cross-platform solution of this framework.
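A Monte Carlo treatment of the kind described, with technology characteristics expressed as probability distributions and propagated through a simple system model, can be sketched in a few lines of plain Python (here with numpy rather than the commercial packages named above). All distributions and requirements below are invented placeholders, not figures from the GRC EPS model.

    import numpy as np

    rng = np.random.default_rng(3)
    N = 100_000  # Monte Carlo trials

    # Assumed technology characteristics as triangular distributions
    # (pessimistic, most likely, optimistic)
    specific_power = rng.triangular(80.0, 110.0, 130.0, size=N)  # W/kg, array
    battery_wh_kg = rng.triangular(120.0, 160.0, 200.0, size=N)  # Wh/kg, storage

    power_req_w = 5000.0        # hypothetical mission power requirement
    eclipse_energy_wh = 2500.0  # hypothetical eclipse storage requirement

    # Simple system model: array mass + storage mass, with a hard mass constraint
    mass = power_req_w / specific_power + eclipse_energy_wh / battery_wh_kg
    feasible = mass < 75.0      # assumed mass allocation for the EPS

    print("mean EPS mass [kg]:", mass.mean().round(1))
    print("P(meets 75 kg allocation):", feasible.mean().round(3))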
NASA Astrophysics Data System (ADS)
Khan, Yaser; Brumer, Paul
2012-11-01
A Hamiltonian-based approach using spatially localized projection operators is introduced to give precise meaning to the chemically intuitive idea of the electronic energy of a quantum subsystem. This definition facilitates the study of electronic energy transfer in arbitrarily coupled quantum systems. In particular, the decomposition scheme can be applied to molecular components that are strongly interacting (with significant orbital overlap) as well as to isolated fragments. The result defines a consistent electronic energy at all internuclear distances, including the case of separated fragments, and reduces to the well-known Förster and Dexter results in their respective limits. Numerical calculations of coherent energy and charge transfer dynamics in simple model systems are presented, and the effect of collisionally induced decoherence is examined.
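Schematically, and as an assumption-laden reconstruction rather than the paper's exact notation, such a construction can be written with a projector onto basis functions localized on fragment A:

\[
\hat{P}_A=\sum_{\mu\in A}\lvert\phi_\mu\rangle\langle\phi_\mu\rvert,\qquad
\hat{H}_A=\tfrac{1}{2}\bigl(\hat{P}_A\hat{H}+\hat{H}\hat{P}_A\bigr),\qquad
E_A(t)=\operatorname{Tr}\bigl[\hat{\rho}(t)\,\hat{H}_A\bigr].
\]

With \(\hat{P}_A+\hat{P}_B=\hat{1}\), the symmetrized split gives \(E_A+E_B=\langle\hat{H}\rangle\) at every internuclear distance, which is the consistency property described above.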
Modelling and simulating a crisis management system: an organisational perspective
NASA Astrophysics Data System (ADS)
Chaawa, Mohamed; Thabet, Inès; Hanachi, Chihab; Ben Said, Lamjed
2017-04-01
Crises are complex situations due to the dynamism of the environment, its unpredictability and the complexity of the interactions among the several different and autonomous organisations involved. In such a context, establishing an organisational view as well as structuring organisations' communications and their functioning is a crucial requirement. In this article, we propose a multi-agent organisational model (OM) to abstract, simulate and analyse a crisis management system (CMS). The objective is to evaluate the CMS from an organisational view, to assess its strengths as well as its weaknesses, and to provide decision makers with recommendations for a more flexible and reactive CMS. The proposed OM is illustrated through a real case study: a snowstorm in a Tunisian region. More precisely, we make the following contributions: first, we provide an environmental model that identifies the concepts involved in the crisis; then, we define a role model that covers the involved actors; in addition, we specify the organisational structure and the interaction model that rule communications and structure the actors' functioning. These models, built following the GAIA methodology, abstract the CMS from an organisational perspective. Finally, we implemented a customisable multi-agent simulator based on the Janus platform to analyse the organisational model through several simulations.
Li, Jianyou; Tanaka, Hiroya
2018-01-01
Traditional splinting processes are skill-dependent and irreversible, and patient satisfaction during rehabilitation is invariably lowered by the heavy structure and poor ventilation of splints. To overcome these drawbacks, the use of 3D-printing technology has been proposed in recent years, and public awareness of it has increased. However, the application of 3D-printing technologies is limited by the low CAD proficiency of clinicians as well as unforeseen scan flaws within anatomic models. A programmable modeling tool has been employed to develop a semi-automatic design system for generating a printable splint model. The modeling process was divided into five stages, and the detailed steps involved in constructing the proposed system, as well as the automatic thickness calculation, the lattice structure, and the assembly method, are thoroughly described. The proposed approach allows clinicians to verify the state of the splint model at every stage, thereby facilitating adjustment of input content and/or other parameters to help solve possible modeling issues. A finite element analysis simulation was performed to evaluate the structural strength of generated models. A fit investigation was conducted with fabricated splints and volunteers to assess the wearing experience. Manual modeling steps involved in complex splint designs have been programmed into the proposed automatic system. Clinicians define the splinting region by drawing two curves, thereby obtaining the final model within minutes. The proposed system is capable of automatically patching up minor flaws within the limb model as well as calculating the thickness and lattice density of various splints. Large splints can be divided into three parts for simultaneous multiple printing. This study highlights the advantages, limitations, and possible strategies concerning the application of programmable modeling tools in clinical processes, thereby helping clinicians with lower CAD proficiency to become adept with the splint design process and improving the overall design efficiency of 3D-printed splints.
Two-tier tissue decomposition for histopathological image representation and classification.
Gultekin, Tunc; Koyuncu, Can Fahrettin; Sokmensuer, Cenk; Gunduz-Demir, Cigdem
2015-01-01
In digital pathology, devising effective image representations is crucial to designing robust automated diagnosis systems. To this end, many studies have proposed object-based representations, instead of directly using image pixels, since a histopathological image may contain a considerable amount of noise, typically at the pixel level. These previous studies mostly employ color information to define their objects, which approximately represent histological tissue components in an image, and then use the spatial distribution of these objects for image representation and classification. Thus, object definition has a direct effect on the way the image is represented, which in turn affects classification accuracy. In this paper, our aim is to design a classification system for histopathological images. Towards this end, we present a new model for effective representation of these images that will be used by the classification system. The contributions of this model are twofold. First, it introduces a new two-tier tissue decomposition method for defining a set of multityped objects in an image. Unlike previous studies, these objects are defined by combining texture, shape, and size information, and they may correspond to individual histological tissue components as well as local tissue subregions of different characteristics. Second, it defines a new metric, which we call dominant blob scale, to characterize the shape and size of an object with a single scalar value. Our experiments on colon tissue images reveal that this new object definition and characterization provides a distinguishing representation of normal and cancerous histopathological images, which is effective for obtaining more accurate classification results than its counterparts.
Assessing the performance of regional landslide early warning models: the EDuMaP method
NASA Astrophysics Data System (ADS)
Calvello, M.; Piciullo, L.
2016-01-01
A schematic of the components of regional early warning systems for rainfall-induced landslides is proposed, based on a clear distinction between warning models and warning systems. According to this framework, an early warning system comprises a warning model as well as a monitoring and warning strategy, a communication strategy and an emergency plan. The paper proposes the evaluation of regional landslide warning models by means of an original approach, called the "event, duration matrix, performance" (EDuMaP) method, comprising three successive steps: identification and analysis of the events, i.e., landslide events and warning events derived from available landslide and warning databases; definition and computation of a duration matrix, whose elements report the time associated with the occurrence of landslide events in relation to the occurrence of warning events, in their respective classes; and evaluation of the early warning model performance by means of performance criteria and indicators applied to the duration matrix. In the first step, the analyst identifies and classifies the landslide and warning events, according to their spatial and temporal characteristics, by means of a number of model parameters. In the second step, the analyst computes a time-based duration matrix with a number of rows and columns equal to the number of classes defined for the warning and landslide events, respectively. In the third step, the analyst computes a series of model performance indicators derived from a set of performance criteria, which need to be defined by considering, once again, the features of the warning model. The applicability, potential and limitations of the EDuMaP method are tested and discussed using real landslide and warning data from the municipal early warning system operating in Rio de Janeiro (Brazil).
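The central object of the method, the duration matrix, can be computed with a few lines of array code once events are classified. The sketch below is a simplified overlap-accumulation reading of the matrix; the class definitions, events, and time units are placeholders, and the actual EDuMaP parameterization is richer.

    import numpy as np

    def duration_matrix(warning_events, landslide_events, n_w, n_l):
        """Accumulate, for each (warning class, landslide class) pair, the
        total time during which warnings of that class were in effect while
        landslide events of that class occurred.
        Each event is a tuple (t_start, t_end, class_index)."""
        D = np.zeros((n_w, n_l))
        for w_start, w_end, wc in warning_events:
            for l_start, l_end, lc in landslide_events:
                overlap = min(w_end, l_end) - max(w_start, l_start)
                if overlap > 0:
                    D[wc, lc] += overlap
        return D

    # Placeholder data: times in hours, 3 warning classes, 2 landslide classes
    warnings = [(0, 12, 1), (12, 24, 2), (30, 40, 0)]
    landslides = [(10, 14, 1), (33, 34, 0)]
    D = duration_matrix(warnings, landslides, n_w=3, n_l=2)
    print(D)  # performance indicators are then computed from this matrix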
Virtual terrain: a security-based representation of a computer network
NASA Astrophysics Data System (ADS)
Holsopple, Jared; Yang, Shanchieh; Argauer, Brian
2008-03-01
Much research has been put forth towards the detection, correlation, and prediction of cyber attacks in recent years. As this research progresses, there is an increasing need for contextual information about a computer network to provide an accurate situational assessment. Typical approaches adopt contextual information as needed; yet such ad hoc efforts may lead to unnecessary or even conflicting features. The concept of virtual terrain is, therefore, developed and investigated in this work. Virtual terrain is a common representation of crucial information about network vulnerabilities, accessibilities, and criticalities. A virtual terrain model encompasses operating systems, firewall rules, running services, missions, user accounts, and network connectivity. It is defined as connected graphs with arc attributes defining dynamic relationships among vertices modeling network entities, such as services, users, and machines. The virtual terrain representation is designed to allow feasible development and maintenance of the model, as well as efficacy in terms of the use of the model. This paper describes the considerations in developing the virtual terrain schema, exemplary virtual terrain models, and algorithms utilizing the virtual terrain model for situation and threat assessment.
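A virtual terrain of this kind maps naturally onto an attributed directed graph. The sketch below uses the networkx library; the node and arc attributes are illustrative stand-ins for the schema described (machines, services, users, firewall-constrained reachability), not the paper's exact schema.

    import networkx as nx

    vt = nx.DiGraph()

    # Vertices model network entities with security-relevant attributes
    vt.add_node("web01", kind="machine", os="linux", criticality=0.8)
    vt.add_node("db01", kind="machine", os="linux", criticality=1.0)
    vt.add_node("httpd", kind="service", cve=["CVE-2006-3747"])  # example vuln
    vt.add_node("alice", kind="user", privilege="admin")

    # Arcs carry dynamic relationships (runs-on, can-login, firewall-allowed)
    vt.add_edge("httpd", "web01", relation="runs_on")
    vt.add_edge("alice", "web01", relation="can_login")
    vt.add_edge("web01", "db01", relation="allowed", ports=[5432])

    # Threat assessment queries reduce to graph traversals, e.g. everything
    # reachable from a compromised host:
    print(list(nx.descendants(vt, "web01")))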
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCarthy, J.M.; Arnett, R.C.; Neupauer, R.M.
This report documents a study conducted to develop a regional groundwater flow model for the Eastern Snake River Plain Aquifer in the area of the Idaho National Engineering Laboratory. The model was developed to support Waste Area Group 10, Operable Unit 10-04 groundwater flow and transport studies. The products of this study are this report and a set of computational tools designed to numerically model the regional groundwater flow in the Eastern Snake River Plain aquifer. The objective of developing the current model was to create a tool for defining the regional groundwater flow at the INEL. The model was developed to (a) support future transport modeling for WAG 10-04 by providing the regional groundwater flow information needed for the WAG 10-04 risk assessment, (b) define the regional groundwater flow setting for modeling groundwater contaminant transport at the scale of the individual WAGs, (c) provide a tool for improving the understanding of the groundwater flow system below the INEL, and (d) consolidate the existing regional groundwater modeling information into one usable model. The current model is appropriate for defining the regional flow setting for flow submodels as well as hypothesis testing to better understand the regional groundwater flow in the area of the INEL. The scale of the submodels must be chosen based on the accuracy required for the study.
A Multi-Scale Energy Food Systems Modeling Framework For Climate Adaptation
NASA Astrophysics Data System (ADS)
Siddiqui, S.; Bakker, C.; Zaitchik, B. F.; Hobbs, B. F.; Broaddus, E.; Neff, R.; Haskett, J.; Parker, C.
2016-12-01
Our goal is to understand coupled system dynamics across scales in a manner that allows us to quantify the sensitivity of critical human outcomes (nutritional satisfaction, household economic well-being) to development strategies and to climate or market induced shocks in sub-Saharan Africa. We adopt both bottom-up and top-down multi-scale modeling approaches focusing our efforts on food, energy, water (FEW) dynamics to define, parameterize, and evaluate modeled processes nationally as well as across climate zones and communities. Our framework comprises three complementary modeling techniques spanning local, sub-national and national scales to capture interdependencies between sectors, across time scales, and on multiple levels of geographic aggregation. At the center is a multi-player micro-economic (MME) partial equilibrium model for the production, consumption, storage, and transportation of food, energy, and fuels, which is the focus of this presentation. We show why such models can be very useful for linking and integrating across time and spatial scales, as well as a wide variety of models including an agent-based model applied to rural villages and larger population centers, an optimization-based electricity infrastructure model at a regional scale, and a computable general equilibrium model, which is applied to understand FEW resources and economic patterns at national scale. The MME is based on aggregating individual optimization problems for relevant players in an energy, electricity, or food market and captures important food supply chain components of trade and food distribution accounting for infrastructure and geography. Second, our model considers food access and utilization by modeling food waste and disaggregating consumption by income and age. Third, the model is set up to evaluate the effects of seasonality and system shocks on supply, demand, infrastructure, and transportation in both energy and food.
An examination of astrophysical habitats for targeted SETI
NASA Technical Reports Server (NTRS)
Doyle, Laurance R.; Mckay, Christopher P.; Reynolds, Ray T.; Whitmire, Daniel P.; Matese, John J.
1991-01-01
Planetary atmospheric radiative transfer models have recently given valuable insights into the definition of the solar system's ecoshell. In addition, however, results have indicated that constraints on solar evolution also need to be addressed, with even minor solar variations (mass loss, for example) having important consequences from an exobiological standpoint. Following the definition of the solar system's ecoshell evolution, the ecoshells around different stellar spectral types can then be modeled. In this study the astrophysical constraints on the definition of ecoshells and possible exobiological habitats include: (1) investigation of the evolution of the solar system's ecoshell under different initial solar/stellar model conditions, as indicated by both solar abundance considerations and planetary evidence; (2) an outline of the considerations necessary to define the ecoshells around the most abundant spectral-type stars, the K and M stars, looking at the effects on exobiological habitats of planetary rotational tidal locking and stellar flare/chromospheric-activity cycles, among other effects; and (3) a preliminary examination of the factors defining the expected ecoshells around binary stars, considering the effects of regular stellar eclipses and the expected shortening of the semi-major axis. These results can then be applied to the targeted microwave search for extraterrestrial intelligent signals by constraining the ecoshell space in the solar neighborhood.
NASA Astrophysics Data System (ADS)
Banerjee, Sourav; Liu, Lie; Liu, S. T.; Yuan, Fuh-Gwo; Beard, Shawn
2011-04-01
Materials State Awareness (MSA) goes beyond traditional NDE and SHM in its challenge to characterize the current state of material damage before the onset of macro-damage such as cracks. A highly reliable, minimally invasive system for MSA of aerospace structures, naval structures, and next-generation space systems is critically needed. Development of such a system will require a reliable SHM system that can detect the onset of damage well before a flaw grows to a critical size. Therefore, it is important to develop an integrated SHM system that not only detects macroscale damage in structures but also provides an early indication of flaw precursors and microdamage. The early warning for flaw precursors and their evolution provided by an SHM system can then be used to define remedial strategies before the structural damage leads to failure, significantly improving the safety and reliability of the structures. Thus, in this article a preliminary concept of developing the Hybrid Distributed Sensor Network Integrated with Self-learning Symbiotic Diagnostic Algorithms and Models, to accurately and reliably detect the precursors to damage occurring in a structure, is discussed. Experiments conducted in a laboratory environment show the potential of the proposed technique.
Perrin, James M; Romm, Diane; Bloom, Sheila R; Homer, Charles J; Kuhlthau, Karen A; Cooley, Carl; Duncan, Paula; Roberts, Richard; Sloyer, Phyllis; Wells, Nora; Newacheck, Paul
2007-10-01
To present a conceptual definition of a family-centered system of services for children and youth with special health care needs (CYSHCN). Previous work by the Maternal and Child Health Bureau to define CYSHCN has had widespread program effects. This article similarly seeks to provide a definition of a system of services. Comprehensive literature review of systems of services and consensus panel organized to review and refine the definition. Policy research group and advisors at multiple sites. Policy researchers, content experts on CYSHCN, family representatives, and state program directors. Definition of a system of services for CYSHCN. This article defines a system of services for CYSHCN as a family-centered network of community-based services designed to promote the healthy development and well-being of these children and their families. The definition can guide discussion among policy makers, practitioners, state programs, researchers, and families for implementing the "community-based systems of services" contained in Title V of the Social Security Act. Critical characteristics of a system include coordination of child and family services, effective communication among providers and the family, family partnership in care provision, and flexibility. This definition provides a conceptual model that can help measurement development and assessment of how well systems work and achieve their goals. Currently available performance objectives for the provision of care for CYSHCN and national surveys of child health could be modified to assess systems of services in general.
Exposure Related Dose Estimating Model
ERDEM is a physiologically based pharmacokinetic (PBPK) modeling system consisting of a general model and an associated front end. An actual model is defined when the user prepares an input command file. Such a command file defines the chemicals, compartments and processes that...
NASA Astrophysics Data System (ADS)
Bennett, A.; Nijssen, B.; Chegwidden, O.; Wood, A.; Clark, M. P.
2017-12-01
Model intercomparison experiments have been conducted to quantify the variability introduced during the model development process, but have had limited success in identifying the sources of this model variability. The Structure for Unifying Multiple Modeling Alternatives (SUMMA) has been developed as a framework which defines a general set of conservation equations for mass and energy as well as a common core of numerical solvers along with the ability to set options for choosing between different spatial discretizations and flux parameterizations. SUMMA can be thought of as a framework for implementing meta-models which allows for the investigation of the impacts of decisions made during the model development process. Through this flexibility we develop a hierarchy of definitions which allows for models to be compared to one another. This vocabulary allows us to define the notion of weak equivalence between model instantiations. Through this weak equivalence we develop the concept of model mimicry, which can be used to investigate the introduction of uncertainty and error during the modeling process as well as provide a framework for identifying modeling decisions which may complement or negate one another. We instantiate SUMMA instances that mimic the behaviors of the Variable Infiltration Capacity (VIC) model and the Precipitation Runoff Modeling System (PRMS) by choosing modeling decisions which are implemented in each model. We compare runs from these models and their corresponding mimics across the Columbia River Basin located in the Pacific Northwest of the United States and Canada. From these comparisons, we are able to determine the extent to which model implementation has an effect on the results, as well as determine the changes in sensitivity of parameters due to these implementation differences. By examining these changes in results and sensitivities we can attempt to postulate changes in the modeling decisions which may provide better estimation of state variables.
Culture & Cognition Laboratory
2011-05-01
life: Real-world social-interaction cooperative tasks are inherently unequal in difficulty. Re-scoring performance on unequal tasks in order to enable...real-world situations to which this model is intended to apply, it is possible for calls for help to not be heard, or for a potential help-provider to...not have clear, well-defined objectives. Since many complex real-world tasks are not well-defined, defining a realistic objective can be considered a
Computer-aided resource planning and scheduling for radiological services
NASA Astrophysics Data System (ADS)
Garcia, Hong-Mei C.; Yun, David Y.; Ge, Yiqun; Khan, Javed I.
1996-05-01
There exists tremendous opportunity in hospital-wide resource optimization based on system integration. This paper defines the resource planning and scheduling requirements integral to PACS, RIS and HIS integration. A multi-site case study was conducted to define the requirements. A well-tested planning and scheduling methodology, called the Constrained Resource Planning model, has been applied to the chosen problem of radiological service optimization. This investigation focuses on resource optimization issues for minimizing turnaround time to increase clinical efficiency and customer satisfaction, particularly in cases where multiple exams must be scheduled for a patient. How best to combine information system efficiency and human intelligence in improving radiological services is described. Finally, an architecture for interfacing a computer-aided resource planning and scheduling tool with the existing PACS, HIS and RIS implementation is presented.
Primordial Evolution in the Finitary Process Soup
NASA Astrophysics Data System (ADS)
Görnerup, Olof; Crutchfield, James P.
A general and basic model of primordial evolution—a soup of reacting finitary and discrete processes—is employed to identify and analyze fundamental mechanisms that generate and maintain complex structures in prebiotic systems. The processes—ɛ-machines as defined in computational mechanics—and their interaction networks both provide well defined notions of structure. This enables us to quantitatively demonstrate hierarchical self-organization in the soup in terms of complexity. We found that replicating processes evolve the strategy of successively building higher levels of organization by autocatalysis. Moreover, this is facilitated by local components that have low structural complexity, but high generality. In effect, the finitary process soup spontaneously evolves a selection pressure that favors such components. In light of the finitary process soup's generality, these results suggest a fundamental law of hierarchical systems: global complexity requires local simplicity.
Tarkan, Sureyya; Plaisant, Catherine; Shneiderman, Ben; Hettinger, A. Zachary
2011-01-01
Researchers have conducted numerous case studies reporting the details on how laboratory test results of patients were missed by the ordering medical providers. Given the importance of timely test results in an outpatient setting, there is limited discussion of electronic versions of test result management tools to help clinicians and medical staff with this complex process. This paper presents three ideas to reduce missed results with a system that facilitates tracking laboratory tests from order to completion as well as during follow-up: (1) define a workflow management model that clarifies responsible agents and associated time frame, (2) generate a user interface for tracking that could eventually be integrated into current electronic health record (EHR) systems, (3) help identify common problems in past orders through retrospective analyses. PMID:22195201
Tabatabaie, Seyed Mohammad Hossein; Bolte, John P; Murthy, Ganti S
2018-06-01
The goal of this study was to integrate a crop model, DNDC (DeNitrification-DeComposition), with life cycle assessment (LCA) and economic analysis models using a GIS-based integrated platform, ENVISION. The integrated model enables LCA practitioners to conduct integrated economic analysis and LCA on a regional scale while capturing the variability of soil emissions due to variation in regional factors during production of crops and biofuel feedstocks. In order to evaluate the integrated model, the corn-soybean cropping system in Eagle Creek Watershed, Indiana was studied, and the integrated model was used first to model the soil emissions and then to conduct the LCA as well as the economic analysis. The results showed that the variation in soil emissions due to variation in weather is high, causing some locations to be a carbon sink in some years and a source of CO2 in other years. In order to test the model under different scenarios, two tillage scenarios were defined: 1) conventional tillage (CT) and 2) no tillage (NT). The overall GHG emissions for the corn-soybean cropping system were simulated, and the results showed that the NT scenario resulted in lower soil GHG emissions than the CT scenario. Moreover, the global warming potential (GWP) of corn ethanol from well to pump varied between 57 and 92 g CO2-eq./MJ, with GWP under the NT system lower than that of the CT system. The cost break-even point was calculated as $3612.5/ha in the two-year corn-soybean cropping system, and the results showed that under low and medium prices for corn and soybean most of the farms did not meet the break-even point. Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, S.C.
1993-08-01
This report discusses a field demonstration of a methodology for characterizing an aquifer's geohydrology in the detail required to design an optimum network of wells and/or infiltration galleries for bioreclamation systems. The project work was conducted on a 1-hectare test site at Columbus AFB, Mississippi. The technical report is divided into two volumes. Volume I describes the test site and the well network, the assumptions, the application of equations that define groundwater flow to a well, the results of three large-scale aquifer tests, and the results of 160 single-pump tests. Volume II describes the borehole flowmeter tests, the tracer tests, the geological investigations, the geostatistical analysis, and the guidelines for using groundwater models to design bioreclamation systems. Keywords: site characterization, hydraulic conductivity, groundwater flow, geostatistics, geohydrology, monitoring wells.
A Source-Term Based Boundary Layer Bleed/Effusion Model for Passive Shock Control
NASA Technical Reports Server (NTRS)
Baurle, Robert A.; Norris, Andrew T.
2011-01-01
A modeling framework for boundary layer effusion has been developed based on the use of source (or sink) terms instead of the usual practice of specifying bleed directly as a boundary condition. This framework allows the surface boundary condition (i.e., isothermal wall, adiabatic wall, slip wall, etc.) to remain unaltered in the presence of bleed. The approach also readily permits the addition of empirical models for second-order effects that are not easily accounted for by simply defining effective transpiration values. Two effusion models formulated for supersonic flows have been implemented into this framework: the Doerffer/Bohning law and the Slater formulation. These models were applied to unit problems that contain key aspects of the flow physics applicable to bleed systems designed for hypersonic air-breathing propulsion systems. The ability of each model to predict bulk bleed properties was assessed, as well as the response of the boundary layer as it passes through and downstream of a porous bleed system. The model assessment was performed with and without the presence of shock waves. Three-dimensional CFD simulations that included the geometric details of the porous plate bleed systems were also carried out to supplement the experimental data and provide additional insights into the bleed flow physics. Overall, both bleed formulations fared well for the tests performed in this study; however, the sample of test problems considered in this effort was not large enough to permit a comprehensive validation of the models.
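As a concrete illustration of the source-term idea, the sketch below assembles the mass, momentum, and energy sinks that a bleed region would contribute to a wall-adjacent finite-volume cell. This is a minimal Python sketch under stated assumptions: the names (`porosity`, `q_sonic`) and the choked-flux scaling are illustrative placeholders, not the Doerffer/Bohning or Slater formulations described in the paper.

```python
import numpy as np

GAMMA = 1.4     # ratio of specific heats for air
R_GAS = 287.0   # gas constant, J/(kg K)

def bleed_sink(rho, u, v, T, area, porosity, q_sonic):
    """Mass, momentum, and energy sink rates for one wall-adjacent cell.

    Illustrative only: the bleed mass flux is taken as a fraction q_sonic
    (a user-supplied sonic flow coefficient) of the ideal choked mass flux
    through the porous area; the cited models are more involved.
    """
    a = np.sqrt(GAMMA * R_GAS * T)  # local speed of sound
    # Ideal choked (sonic) mass flux per unit area, used as a reference.
    flux_sonic = rho * a * (2.0 / (GAMMA + 1.0)) ** (
        (GAMMA + 1.0) / (2.0 * (GAMMA - 1.0)))
    mdot = q_sonic * porosity * area * flux_sonic  # bled mass per second
    # Removing mass also removes the momentum and energy it carries.
    e_tot = R_GAS * T / (GAMMA - 1.0) + 0.5 * (u * u + v * v)
    return mdot, mdot * u, mdot * v, mdot * e_tot
```

The returned rates would simply be subtracted from the cell's finite-volume residuals, leaving the surface boundary condition itself unaltered, which is the stated advantage of the framework.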
The hospital incident command system: modified model for hospitals in iran.
Djalali, Ahmadreza; Hosseinijenab, Vahid; Peyravi, Mahmoudreza; Nekoei-Moghadam, Mahmood; Hosseini, Bashir; Schoenthal, Lisa; Koenig, Kristi L
2015-03-27
Effectiveness of hospital management of disasters requires a well-defined and rehearsed system. The Hospital Incident Command System (HICS), as a standardized method for command and control, was established in Iranian hospitals, but it has performed only fairly during disaster exercises. This paper describes the process for, and modifications to, HICS undertaken to optimize disaster management in hospitals in Iran. In 2013, a group of 11 subject matter experts participated in an expert consensus modified Delphi process to develop modifications to the 2006 version of HICS. The following changes were recommended by the expert panel and subsequently implemented: 1) a Quality Control Officer was added to the Command group; 2) Security was defined as a new section; 3) the Infrastructure and Business Continuity Branches were moved from the Operations Section to the Logistics and Administration Sections, respectively; and 4) the Planning Section was merged into the Finance/Administration Section. The expert consensus group thus developed a modified HICS that is more feasible to implement given the managerial organization of hospitals in Iran. This new model may enhance hospital performance in managing disasters. Additional studies are needed to test the feasibility and efficacy of the modified HICS in Iran, both during simulations and actual disasters. This process may be a useful model for other countries desiring to improve disaster incident management systems for their hospitals.
Using SysML to model complex systems for security.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cano, Lester Arturo
2010-08-01
As security systems integrate more information technology, the design of these systems has tended to become more complex. Some of the most difficult issues in designing Complex Security Systems (CSS) are: capturing requirements; defining hardware interfaces; defining software interfaces; and integrating technologies such as radio systems, Voice over IP systems, and situational awareness systems.
Study on data model of large-scale urban and rural integrated cadastre
NASA Astrophysics Data System (ADS)
Peng, Liangyong; Huang, Quanyi; Gao, Dequan
2008-10-01
Urban and Rural Integrated Cadastre (URIC) has been the subject of great interest in modern cadastre management, and a rational data model is highly desirable for establishing a URIC information system. In this paper, the old cadastral management mode in China is first introduced, its limitations are analyzed, and the concept of URIC and its development course in China are described. Afterwards, based on the requirements of cadastre management in developed regions, the goal of URIC and two key ideas for realizing it are proposed. Then, the conceptual management mode is studied and a data model of URIC is designed. Finally, based on the raw data of a land use survey at a scale of 1:1000 and an urban conventional cadastral survey at a scale of 1:500 in Jiangyin city, a well-defined URIC information system was established according to the data model, and uniform management of land use rights and landownership in urban and rural areas was successfully realized, demonstrating the model's feasibility and practicability.
Mouse and Guinea Pig Models of Tuberculosis.
Orme, Ian M; Ordway, Diane J
2016-08-01
This article describes the nature of the host response to Mycobacterium tuberculosis in the mouse and guinea pig models of infection. It describes the great wealth of information obtained from the mouse model, reflecting the general availability of immunological reagents as well as of genetic manipulations of the mouse strains themselves. This has led to a good understanding of the nature of the T-cell response to the infection, and an appreciation of the complexity of the response involving multiple cytokine- and chemokine-mediated systems. As described here and elsewhere, we have a growing understanding of how multiple CD4-positive T-cell subsets are involved, including regulatory T cells and TH17 cells, as well as of the subsequent emergence of effector and central memory T-cell subsets. Although our understanding of the host response in the guinea pig model is less advanced, considerable strides have been made in the past decade in defining the basis of the immune response and in understanding the immunopathologic process. This model has long been the gold standard for vaccine testing, and it is more recently being revisited as a model for testing new drug regimens (bedaquiline being the latest example).
Defect measurement and analysis of JPL ground software: a case study
NASA Technical Reports Server (NTRS)
Powell, John D.; Spagnuolo, John N., Jr.
2004-01-01
Ground software systems at JPL must meet high assurance standards while remaining on schedule, due to the relatively immovable launch dates of the spacecraft such systems will control. Toward this end, the Software Quality Improvement (SQI) project's Measurement and Benchmarking (M&B) team is collecting and analyzing defect data from JPL ground system software projects to build software defect prediction models. The aim of these models is to improve predictability with regard to software quality activities. The predictive models will quantitatively define typical trends for JPL ground systems, as well as Critical Discriminators (CDs) that explain atypical deviations from the norm. CDs are software characteristics that can be estimated or foreseen early in a software project's planning. These CDs will thus assist in planning for the degree to which a project's software quality activities are likely to deviate from the JPL ground system norm, based on past experience across the laboratory.
Estimating fracture spacing from natural tracers in shale-gas production
NASA Astrophysics Data System (ADS)
Bauer, S. J.; McKenna, S. A.; Heath, J. E.; Gardner, P.
2012-12-01
Resource appraisal and long-term recovery potential of shale gas rely on the characteristics of the fracture networks created within the formation. Both well testing and analysis of micro-seismic data can provide information on fracture characteristics, but approaches that directly utilize observations of gas transport through the fractures are not well developed. We examine transport of natural tracers and analyze their breakthrough curves (BTCs) with a multi-rate mass transfer (MMT) model to elucidate fracture characteristics. The focus here is on numerical simulation studies to determine constraints on the ability to accurately estimate fracture network characteristics as a function of the diffusion coefficients of the natural tracers, the number and timing of observations, the flow rates from the well, and the noise in the observations. Traditional tracer testing approaches for dual-porosity systems analyze the BTC of an injected tracer to obtain fracture spacing, considering a single spacing value. An alternative is the MMT model, in which diffusive mass transfer occurs simultaneously over a range of matrix block sizes defined by a statistical distribution (e.g., log-normal, gamma, or power-law); the goal of the estimation is then to define the parameters of the fracture spacing distribution. The MMT model has not yet been applied to the analysis of in situ natural tracers. Natural tracers are omnipresent in the subsurface, potentially obviating the need for introduced tracers, and could be used to improve upon fracture characteristics estimated from pressure transient and decline curve production analysis. Results of this study provide guidance for data collection and analysis of natural tracers in fractured shale formations. Parameter estimation on simulated BTCs will provide guidance on the necessary timing of BTC sampling in field experiments. The MMT model can result in non-unique or nonphysical parameter estimates; we address this with Bayesian estimation approaches that define uncertainty in estimated parameters as a posterior probability distribution. We will also use Bayesian estimation to examine model identifiability (e.g., selecting between parametric distributions of fracture spacing) from various BTCs. Application of the MMT model to natural tracers and hydraulic fractures in shale will require extension of the model to account for partitioning of the tracers between multiple phases and for different mass transfer behavior in mixed gas-liquid (e.g., oil- or groundwater-rich) systems. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
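To make the MMT estimation problem concrete, here is a minimal Python sketch, under simplifying assumptions, of fitting the parameters of a log-normal fracture-spacing distribution to a breakthrough curve. The toy BTC is a superposition of first-order release terms rather than the full diffusive MMT solution, and all parameter names and values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def mmt_btc(t, mu, sigma, n_blocks=200, diff=1e-9):
    """Toy multi-rate breakthrough curve: blocks of log-normally
    distributed half-width a release tracer at rate alpha = diff/a**2."""
    z = np.linspace(-3, 3, n_blocks)
    a = np.exp(mu + sigma * z)          # matrix block sizes (meters)
    w = np.exp(-0.5 * z ** 2)           # log-normal weights on this grid
    w /= w.sum()
    alpha = diff / a ** 2               # first-order exchange rate per block
    # Superpose the release from every block-size class.
    return (w * alpha * np.exp(-np.outer(t, alpha))).sum(axis=1)

# Fit (mu, sigma) of the spacing distribution to a synthetic "observed" BTC.
rng = np.random.default_rng(1)
t_obs = np.linspace(1.0, 3e6, 400)      # observation times, seconds
c_obs = mmt_btc(t_obs, mu=-4.0, sigma=0.8) * (1 + 0.05 * rng.standard_normal(400))
popt, pcov = curve_fit(mmt_btc, t_obs, c_obs, p0=(-3.0, 1.0))
```

In the spirit of the study, one could repeat the fit for different noise levels and sampling schedules to see when the recovered (mu, sigma) become non-unique, which is exactly where the Bayesian posterior analysis becomes valuable.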
Summary of the white paper of DICOM WG24 'DICOM in Surgery'
NASA Astrophysics Data System (ADS)
Lemke, Heinz U.
2007-03-01
Standards for creating and integrating information about patients, equipment, and procedures are vitally needed when planning for an efficient Operating Room (OR). The DICOM Working Group 24 (WG24) has been established to develop DICOM objects and services related to Image Guided Surgery (IGS). To determine these standards, it is important to define day-to-day, step-by-step surgical workflow practices and create surgery workflow models per procedure or per variable case. A well-defined workflow and a high-fidelity patient model will be the base of activities for both radiation therapy and surgery. Considering the present and future requirements for surgical planning and intervention, such a patient model must be n-dimensional, where n may include the spatial and temporal dimensions as well as a number of functional variables. As the boundaries between radiation therapy, surgery and interventional radiology are becoming less well-defined, precise patient models will become the greatest common denominator for all therapeutic disciplines. In addition to imaging, the focus of WG24 should, therefore, also be to serve the therapeutic disciplines by enabling modelling technology to be based on standards.
Simulation-based instruction of technical skills
NASA Technical Reports Server (NTRS)
Towne, Douglas M.; Munro, Allen
1991-01-01
A rapid intelligent tutoring development system (RAPIDS) was developed to facilitate the production of interactive, real-time graphical device models for use in instructing the operation and maintenance of complex systems. The tools allowed subject matter experts to produce device models by creating instances of previously defined objects and positioning them in the emerging device model. These simulation authoring functions, as well as those associated with demonstrating procedures and functional effects on the completed model, required no previous programming experience or use of frame-based instructional languages. Three large simulations were developed in RAPIDS, each involving more than a dozen screen-sized sections. Seven small, single-view applications were developed to explore the range of applicability. Three workshops were conducted to train others in the use of the authoring tools. Participants learned to employ the authoring tools in three to four days and were able to produce small working device models on the fifth day.
DEAN: A program for dynamic engine analysis
NASA Technical Reports Server (NTRS)
Sadler, G. G.; Melcher, K. J.
1985-01-01
The Dynamic Engine Analysis program, DEAN, is a FORTRAN code implemented on the IBM/370 mainframe at NASA Lewis Research Center for digital simulation of turbofan engine dynamics. DEAN is an interactive program which allows the user to simulate engine subsystems as well as full engine systems with relative ease. The nonlinear first-order ordinary differential equations which define the engine model may be solved by one of four integration schemes: a second-order Runge-Kutta, a fourth-order Runge-Kutta, an Adams predictor-corrector, or Gear's method for stiff systems. The numerical data generated by the model equations are displayed at specified intervals, between which the user may choose to modify various parameters affecting the model equations and transient execution. Following the transient run, versatile graphics capabilities allow close examination of the data. DEAN's modeling procedure and capabilities are demonstrated by generating a model of a simple compressor rig.
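As an illustration of the kind of fixed-step integration scheme listed above, the following Python sketch implements one classical fourth-order Runge-Kutta step and applies it to a toy first-order spool-speed lag; it is a generic textbook scheme, not DEAN's FORTRAN implementation, and the lag model is invented for illustration.

```python
import numpy as np

def rk4_step(f, t, y, h):
    # Classical fourth-order Runge-Kutta step for y' = f(t, y).
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Toy engine-like state: first-order spool-speed lag dN/dt = (N_cmd - N)/tau.
tau, n_cmd = 0.8, 1.0
f = lambda t, n: (n_cmd - n) / tau
n, t, h = 0.0, 0.0, 0.01
for _ in range(500):      # integrate 5 s of transient
    n = rk4_step(f, t, n, h)
    t += h
```

Stiff systems, where widely separated time constants would force an explicit scheme like this one to take tiny steps, are the case Gear's implicit method is designed for.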
Cosmic infinity: a dynamical system approach
NASA Astrophysics Data System (ADS)
Bouhmadi-López, Mariam; Marto, João; Morais, João; Silva, César M.
2017-03-01
Dynamical system techniques are extremely useful to study cosmology. It turns out that in most cases we deal with finite isolated fixed points corresponding to a given cosmological epoch. However, it is equally important to analyse the asymptotic behaviour of the universe. In this paper, we show how this can be carried out for 3-form models. In fact, we show that there are fixed points at infinity, mainly by introducing appropriate compactifications and defining a new time variable that washes away any potential divergence of the system. The richness of 3-form models also allows us to identify normally hyperbolic non-isolated fixed points. We apply this analysis to three physically interesting situations: (i) a pre-inflationary era; (ii) an inflationary era; (iii) the late-time dark matter/dark energy epoch.
Beyond the rhetoric: what do we mean by a 'model of care'?
Davidson, Patricia; Halcomb, Elizabeth; Hickman, L; Phillips, J; Graham, B
2006-01-01
Contemporary health care systems are constantly challenged to revise traditional methods of health care delivery. These challenges are multifaceted and stem from: (1) novel pharmacological and non-pharmacological treatments; (2) changes in consumer demands and expectations; (3) fiscal and resource constraints; (4) changes in societal demographics, in particular the ageing of society; (5) an increasing burden of chronic disease; (6) documentation of limitations in traditional health care delivery; (7) increased emphasis on transparency, accountability, evidence-based practice (EBP) and clinical governance structures; and (8) the increasing cultural diversity of the community. These challenges provoke discussion of potential alternative models of care, with scant reference to defining what constitutes a model of care. This paper aims to define what is meant by the term 'model of care' and document the pragmatic systems and processes necessary to develop, plan, implement and evaluate novel models of care delivery. Searches of electronic databases, the reference lists of published materials, policy documents and the Internet were conducted using key words including 'model*', 'framework*', 'models, theoretical' and 'nursing models, theoretical'. The collated material was then analysed and synthesised into this review. This review determined that, in addition to key conceptual and theoretical perspectives, quality improvement theory (e.g. collaborative methodology), project management methods and change management theory inform both pragmatic and conceptual elements of a model of care. Crucial elements in changing health care delivery through the development of innovative models of care include the planning, development, implementation, evaluation and assessment of the sustainability of the new model. Regardless of whether change in health care delivery is attempted on a micro basis (e.g. ward level) or macro basis (e.g. national or state system), a well-planned, systematic process is essential in order to achieve sustainable, effective and efficient changes.
Temporal correlations in the Vicsek model with vectorial noise
NASA Astrophysics Data System (ADS)
Gulich, Damián; Baglietto, Gabriel; Rozenfeld, Alejandro F.
2018-07-01
We study the temporal correlations in the evolution of the order parameter ϕ(t) for the Vicsek model with vectorial noise by estimating its Hurst exponent H with detrended fluctuation analysis (DFA). We present results on this exponent as a function of the noise amplitude η introduced in the simulations, and compare with the well-known order-disorder phase transition over the same noise range. We find that, regardless of detrending degree, H spikes at the known coexistence noise for the phase transition, and that this is due to nonstationarities introduced by the transit of the system between two well-defined states with lower exponents. We statistically support this claim by successfully synthesizing equivalent cases derived from a transformed fractional Brownian motion (TfBm).
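For readers unfamiliar with DFA, the following Python sketch shows a standard implementation of the method used to estimate H from a series such as ϕ(t); it is a textbook version, not the authors' code, and the white-noise test series is only a sanity check.

```python
import numpy as np

def dfa_exponent(x, scales, order=1):
    """DFA scaling exponent of a 1-D series: integrate, detrend each
    window with a polynomial of the given order, and fit the log-log
    slope of the RMS fluctuation versus window size."""
    y = np.cumsum(x - np.mean(x))            # integrated profile
    fluct = []
    for n in scales:
        n_seg = len(y) // n
        segs = y[:n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        rms = [np.mean((s - np.polyval(np.polyfit(t, s, order), t)) ** 2)
               for s in segs]
        fluct.append(np.sqrt(np.mean(rms)))
    slope, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return slope

scales = np.unique(np.logspace(1, 3, 20).astype(int))
print(dfa_exponent(np.random.randn(10_000), scales))  # white noise -> ~0.5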
Kullback-Leibler divergence measure of intermittency: Application to turbulence
NASA Astrophysics Data System (ADS)
Granero-Belinchón, Carlos; Roux, Stéphane G.; Garnier, Nicolas B.
2018-01-01
For generic systems exhibiting power law behaviors, and hence multiscale dependencies, we propose a simple tool to analyze multifractality and intermittency, after noticing that these concepts are directly related to the deformation of a probability density function from Gaussian at large scales to non-Gaussian at smaller scales. Our framework is based on information theory and uses Shannon entropy and Kullback-Leibler divergence. We provide an extensive application to three-dimensional fully developed turbulence, seen here as a paradigmatic complex system where intermittency was historically defined and the concepts of scale invariance and multifractality were extensively studied and benchmarked. We compute our quantity on experimental Eulerian velocity measurements, as well as on synthetic processes and phenomenological models of fluid turbulence. Our approach is very general and does not require any underlying model of the system, although it can probe the relevance of such a model.
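A minimal sketch of the underlying idea, assuming a simple histogram-based estimator rather than the paper's entropy machinery: compute the Kullback-Leibler divergence between the empirical PDF of increments at a given scale and a Gaussian of matching mean and variance, and watch how it varies with scale. The "velocity" record below is a placeholder random walk, not turbulence data.

```python
import numpy as np

def kl_to_gaussian(x, bins=100):
    """D(p || g) between the empirical PDF of x and a Gaussian with the
    same mean and variance, via a histogram estimate (illustrative)."""
    hist, edges = np.histogram(x, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    width = edges[1] - edges[0]
    g = (np.exp(-0.5 * ((centers - x.mean()) / x.std()) ** 2)
         / (x.std() * np.sqrt(2.0 * np.pi)))
    mask = hist > 0
    return float(np.sum(hist[mask] * np.log(hist[mask] / g[mask]) * width))

# Intermittency signature: the divergence grows as the scale shrinks.
u = np.cumsum(np.random.randn(2 ** 16))   # placeholder "velocity" record
for ell in (4, 16, 64, 256):
    incr = u[ell:] - u[:-ell]
    print(ell, kl_to_gaussian(incr))
```

For a genuinely intermittent signal the divergence would increase systematically toward small scales, tracking the deformation of the increment PDF away from Gaussian; for the Gaussian placeholder above it stays near zero at all scales.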
NASA Astrophysics Data System (ADS)
Ladner, S. D.; Arnone, R.; Casey, B.; Weidemann, A.; Gray, D.; Shulman, I.; Mahoney, K.; Giddings, T.; Shirron, J.
2009-05-01
Current United States Navy Mine-Counter-Measure (MCM) operations primarily use electro-optical identification (EOID) sensors to identify underwater targets after detection via acoustic sensors. These EOID sensors, which are based on laser underwater imaging, by design work best in "clear" waters and are limited in coastal waters, especially those with strong optical layers. Optical properties, in particular scattering and absorption, play an important role in system performance. Surface optical properties from satellite alone are not adequate to determine how well a system will perform at depth, owing to the existence of optical layers. Characterizing the spatial and temporal variability of the coastal waters' three-dimensional (3-D) optical structure, along with the strength and location of subsurface optical layers, maximizes the chances of identifying underwater targets by enabling optimum sensor deployment. Advanced methods have been developed to fuse optical measurements from gliders, optical properties from a "surface" satellite snapshot, and 3-D ocean circulation models to extend the two-dimensional (2-D) surface satellite optical image into a 3-D optical volume with subsurface optical layers. Modifications were made to an EOID performance model to take as input a 3-D optical volume covering an entire region of interest and derive a system performance field. These enhancements extend the present capability, in which glider optics and EOID sensor models estimate the system's "image quality" but yield performance information only for a single glider profile location in a very large operational region. Finally, we define the uncertainty of the system performance by coupling the EOID performance model with the 3-D optical volume uncertainties. Knowing the ensemble spread of the EOID performance field provides a new and unique capability for tactical decision makers and Navy operations.
3D Model of the Tuscarora Geothermal Area
Faulds, James E.
2013-12-31
The Tuscarora geothermal system sits within a ~15 km wide left step in a major west-dipping range-bounding normal fault system. The step-over is defined by the Independence Mountains fault zone and the Bull Run Mountains fault zone, which overlap along strike. Strain is transferred between these major fault segments via an array of northerly striking normal faults with offsets of 10s to 100s of meters and strike lengths of less than 5 km. These faults within the step-over are one to two orders of magnitude smaller than the range-bounding fault zones between which they reside. Faults within the broad step define an anticlinal accommodation zone wherein east-dipping faults mainly occupy the western half of the accommodation zone and west-dipping faults lie in the eastern half. The 3D model of Tuscarora encompasses 70 small-offset normal faults that define the accommodation zone and a portion of the Independence Mountains fault zone, which dips beneath the geothermal field. The geothermal system resides in the axial part of the accommodation zone, straddling the two fault dip domains. The Tuscarora 3D geologic model consists of 10 stratigraphic units. Unconsolidated Quaternary alluvium has eroded down into the bedrock units; the youngest and stratigraphically highest bedrock units are middle Miocene rhyolite and dacite flows regionally correlated with the Jarbidge Rhyolite and modeled with a uniform cumulative thickness of ~350 m. Underlying these lava flows are Eocene volcanic rocks of the Big Cottonwood Canyon caldera. These units are modeled as intracaldera deposits, including domes, flows, and thick ash deposits that change in thickness and locally pinch out. The Paleozoic basement consists of metasedimentary and metavolcanic rocks, dominated by argillite, siltstone, limestone, quartzite, and metabasalt of the Schoonover and Snow Canyon Formations; the Paleozoic formations are lumped into a single basement unit in the model. Fault blocks in the eastern portion of the model are tilted 5-30 degrees toward the Independence Mountains fault zone, while fault blocks in the western portion are tilted toward steeply east-dipping normal faults. These opposing fault block dips define a shallow extensional anticline. Geothermal production is from 4 closely spaced wells that exploit a west-dipping, NNE-striking fault zone near the axial part of the accommodation zone.
Chang'E-3 data pre-processing system based on scientific workflow
NASA Astrophysics Data System (ADS)
tan, xu; liu, jianjun; wang, yuanyuan; yan, wei; zhang, xiaoxia; li, chunlai
2016-04-01
The Chang'E-3 (CE3) mission has obtained a huge amount of lunar scientific data. Data pre-processing is an important segment of the CE3 ground research and application system. With a dramatic increase in the demand for data research and application, a Chang'E-3 data pre-processing system (CEDPS) based on scientific workflow is proposed for the purpose of making scientists more flexible and productive by automating data-driven processing. The system should allow the planning, conduct and control of the data processing procedure, with the following capabilities: (1) describing a data processing task, including defining input/output data, the data relationships, the sequence of tasks, the communication between tasks, mathematical formulas, and the relationships between tasks and data; and (2) automatic execution of tasks. Accordingly, task description is the key to the system's flexibility. We design a workflow designer, a visual environment for capturing processes as workflows, and discuss its three-level model: (1) the data relationships are established through a product tree; (2) the process model is constructed as a directed acyclic graph (DAG), in which a set of workflow constructs, including Sequence, Loop, Merge and Fork, are composable with one another; and (3) to reduce the complexity of modeling mathematical formulas with a DAG, semantic modeling based on MathML is adopted. On top of that, we present how the CE3 data are processed with CEDPS.
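As an illustration of the DAG-based process model, the sketch below uses Python's standard `graphlib` to execute a hypothetical CE3 task graph in dependency order; the task names are invented for illustration and do not reflect CEDPS internals.

```python
from graphlib import TopologicalSorter

# Hypothetical CE3 pre-processing task graph: each task name maps to the
# set of tasks whose outputs it consumes (names are illustrative only).
tasks = {
    "raw_ingest":             set(),
    "radiometric_correction": {"raw_ingest"},
    "geometric_correction":   {"radiometric_correction"},
    "product_packaging":      {"geometric_correction", "raw_ingest"},
}

def run(name):
    print("running", name)   # stand-in for the real processing step

# Sequence, Fork, and Merge behavior all fall out of topological order.
for name in TopologicalSorter(tasks).static_order():
    run(name)
```

A production system would add Loop constructs and inter-task communication on top of this ordering, which is exactly where the workflow designer's higher-level constructs come in.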
Dense Tracking and Mapping with a Quadrocopter
NASA Astrophysics Data System (ADS)
Sturm, J.; Bylow, E.; Kerl, C.; Kahl, F.; Cremers, D.
2013-08-01
In this paper, we present an approach for acquiring textured 3D models of room-sized indoor spaces using a quadrocopter. Such room models are for example useful for architects and interior designers as well as for factory planners and construction managers. The model is internally represented by a signed distance function (SDF) and the SDF is used to directly track the camera with respect to the model. Our solution enables accurate position control of the quadrocopter, so that it can automatically follow a pre-defined flight pattern. Our system provides live feedback of the acquired 3D model to the user. The final model consisting of a textured 3D triangle mesh can be saved in several standard CAD file formats.
On the formalization of multi-scale and multi-science processes for integrative biology
Díaz-Zuccarini, Vanessa; Pichardo-Almarza, César
2011-01-01
The aim of this work is twofold: on the one hand, to introduce the general concept of 'Bond Graph' (BG) techniques applied in the context of multi-physics, multi-science and multi-scale processes; on the other hand, to highlight some of the most promising features of the BG methodology by comparing it with examples developed using well-established modelling techniques/software, which could suggest developments or refinements to current state-of-the-art tools by providing a consistent framework from a structural and energetic point of view. BG modelling has a natural place in these developments. BGs are inherently coherent, as the relationships defined between the 'elements' of the graph are strictly governed by causality rules and power (energy) conservation. BGs clearly show how power flows between the components of the systems they represent. The 'effort' and 'flow' variables enable bidirectional information flow in a BG model. When the power level of a system is low, BGs degenerate into signal flow graphs, in which information flow is mainly one-directional and power is minimal; that is, they find a natural limitation when dealing with populations of individuals or purely kinetic models, as the concept of energy conservation is no longer relevant in these systems. PMID:22670211
Fishing anti(lymph)angiogenic drugs with zebrafish.
García-Caballero, Melissa; Quesada, Ana R; Medina, Miguel A; Marí-Beffa, Manuel
2018-02-01
Zebrafish, an amenable small teleost fish with a complex mammal-like circulatory system, is being increasingly used for drug screening and toxicity studies. It combines the biological complexity of in vivo models with a higher-throughput screening capability compared with other available animal models. Externally growing, transparent embryos, displaying well-defined blood and lymphatic vessels, allow the inexpensive, rapid, and automatable evaluation of drug candidates that are able to inhibit neovascularisation. Here, we briefly review zebrafish as a model for the screening of anti(lymph)angiogenic drugs, with emphasis on the advantages and limitations of the different zebrafish-based in vivo assays. Copyright © 2017 Elsevier Ltd. All rights reserved.
Blood-Siegfried, Jane
2015-01-01
Sudden infant death syndrome (SIDS) is still not well understood. It is defined as the sudden and unexpected death of an infant without a definitive cause. There are numerous hypotheses about the etiology of SIDS, but the exact cause or causes have never been pinpointed. Examination of theoretical pathologies might only be possible in animal models. Development of these models requires consideration of the environmental and/or developmental risk factors often associated with SIDS, as the models need to explain how the risk factors could contribute to the cause of death. Models were initially developed in common laboratory animals (guinea pig, piglet, mouse, neonatal rabbit, and neonatal rat) to test various hypotheses explaining these infant deaths. Currently, growing numbers of researchers are using genetically altered animals to examine specific areas of interest. This review describes the different systems and models developed to examine the diverse hypotheses for the cause of SIDS and their potential for defining a causal mechanism or mechanisms. PMID:25870597
Grembowski, David; Schaefer, Judith; Johnson, Karin E; Fischer, Henry; Moore, Susan L; Tai-Seale, Ming; Ricciardi, Richard; Fraser, James R; Miller, Donald; LeRoy, Lisa
2014-03-01
Effective healthcare for people with multiple chronic conditions (MCC) is a US priority, but the inherent complexity makes both research and delivery of care particularly challenging. As part of AHRQ Multiple Chronic Conditions Research Network (MCCRN) efforts, the Network developed a conceptual model to guide research in this area, synthesizing methodological and topical issues relevant to MCC patient care into a framework that can improve the delivery of care and advance future research about caring for patients with MCC. The Network synthesized essential constructs for MCC research identified from roundtable discussion, input from expert advisors, and previously published models. The AHRQ MCCRN conceptual model defines complexity as the gap between patient needs and healthcare services, taking into account both the multiple considerations that affect the needs of MCC patients and the contextual factors that influence service delivery. The model reframes processes and outcomes to include not only clinical care quality and experience, but also patient health, well-being, and quality of life. The single-condition paradigm of treating needs one by one breaks down in this population, highlighting the need for care systems to address dynamic patient needs. Defining complexity in terms of the misalignment between patient needs and services offers new insights into how to research and develop solutions to patient care needs.
Pathologists' roles in clinical utilization management. A financing model for managed care.
Zhao, J J; Liberman, A
2000-03-01
In ancillary or laboratory utilization management, the roles of pathologists have not been explored fully in managed care systems. Two possible reasons may account for this: pathologists' potential contributions have not been defined clearly, and effective measurement of, and reasonable compensation for, the pathologist's contribution remain vague. The responsibilities of pathologists in clinical practice may include clinical pathology and laboratory services (which have long been well-defined and are compensated according to a resource-based relative value scale-based coding system), laboratory administration, clinical utilization management, and clinical research. Although laboratory administration services have been compensated with mechanisms such as a percentage of total service revenue or a fixed salary, the involvement of pathologists seems less today than in the past, owing to increased clinical workload and time constraints in an expanding managed care environment, especially in community hospital settings. The lack of financial incentives or appropriate compensation mechanisms for these services likely accounts for the current situation. Furthermore, the importance of pathologist-driven utilization management in laboratory services lacks recognition among hospital administrators, managed care executives, and pathologists themselves, despite its potential benefits for reducing cost and enhancing quality of care. We propose a financial compensation model for such services and summarize its advantages.
NASA Technical Reports Server (NTRS)
Palusinski, O. A.; Allgyer, T. T.; Mosher, R. A.; Bier, M.; Saville, D. A.
1981-01-01
A mathematical model of isoelectric focusing at the steady state has been developed for an M-component system of electrochemically defined ampholytes. The model is formulated from fundamental principles describing the components' chemical equilibria, mass transfer resulting from diffusion and electromigration, and electroneutrality. The model consists of ordinary differential equations coupled with a system of algebraic equations. The model is implemented on a digital computer using FORTRAN-based simulation software. Computer simulation data are presented for several two-component systems showing the effects of varying the isoelectric points and dissociation constants of the constituents.
Design and Application of an Ontology for Component-Based Modeling of Water Systems
NASA Astrophysics Data System (ADS)
Elag, M.; Goodall, J. L.
2012-12-01
Many Earth system modeling frameworks have adopted an approach of componentizing models so that a large model can be assembled by linking a set of smaller model components. These model components can then be more easily reused, extended, and maintained by a large group of model developers and end users. While there has been a notable increase in component-based model frameworks in the Earth sciences in recent years, there has been less work on creating framework-agnostic metadata and ontologies for model components. Well-defined model component metadata is needed, however, to facilitate sharing, reuse, and interoperability both within and across Earth system modeling frameworks. To address this need, we have designed an ontology for the water resources community, named the Water Resources Component (WRC) ontology, in order to advance the application of component-based modeling frameworks across water-related disciplines. Here we present the design of the WRC ontology and demonstrate its application for integration of model components used in watershed management. First, we show how the watershed modeling system Soil and Water Assessment Tool (SWAT) can be decomposed into a set of hydrological and ecological components that adopt the Open Modeling Interface (OpenMI) standard. Then we show how the components can be used to estimate nitrogen losses from land to surface water for the Baltimore Ecosystem Study area. Results of this work are (i) a demonstration of how the WRC ontology advances the conceptual integration between components of water-related disciplines by handling the semantic and syntactic heterogeneity present when describing components from different disciplines and (ii) an investigation of a methodology by which large models can be decomposed into a set of model components that can be well described by populating metadata according to the WRC ontology.
An agent-based computational model of the spread of tuberculosis
NASA Astrophysics Data System (ADS)
de Espíndola, Aquino L.; Bauch, Chris T.; Troca Cabella, Brenno C.; Souto Martinez, Alexandre
2011-05-01
In this work we propose an alternative model of the spread of tuberculosis (TB) and the emergence of drug resistance due to treatment with antibiotics. We implement the simulations using an agent-based computational approach in which the spatial structure is taken into account. The spread of tuberculosis occurs according to probabilities defined by the interactions among individuals. The model was validated by reproducing results already known from the literature, in which different treatment regimes yield the emergence of drug resistance. The different patterns of TB spread can be visualized at any time of the system's evolution. The implementation details as well as some results of this alternative approach are discussed.
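A minimal sketch of this kind of lattice agent-based model, with invented parameter values and a simple von Neumann neighborhood; it illustrates infection probabilities defined by local interactions and the emergence of a resistant strain under treatment, not the authors' exact rules.

```python
import numpy as np

rng = np.random.default_rng(0)
L, STEPS = 100, 200
P_INFECT, P_RECOVER, P_RESIST = 0.08, 0.01, 0.002  # illustrative values

# 0 = susceptible, 1 = infected (sensitive strain), 2 = infected (resistant)
grid = np.zeros((L, L), dtype=int)
grid[L // 2, L // 2] = 1            # seed one infectious agent

for _ in range(STEPS):
    infected = grid > 0
    # Count infected neighbors via periodic shifts (von Neumann neighborhood).
    n_inf = sum(np.roll(infected, s, axis=a) for a in (0, 1) for s in (-1, 1))
    # Each infected neighbor is an independent chance of transmission.
    catch = (grid == 0) & (rng.random((L, L)) < 1 - (1 - P_INFECT) ** n_inf)
    grid[catch] = 1
    # Treatment clears sensitive cases but occasionally selects resistance.
    sick = grid == 1
    grid[sick & (rng.random((L, L)) < P_RECOVER)] = 0
    sick = grid == 1
    grid[sick & (rng.random((L, L)) < P_RESIST)] = 2
```

Snapshotting `grid` at intermediate steps gives exactly the kind of spatial spread patterns the abstract describes visualizing over the course of the system's evolution.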
Motivation and Gender in the Japanese EFL Classroom
ERIC Educational Resources Information Center
Mori, Setsuko; Gobel, Peter
2006-01-01
In the field of SLA, there have been various attempts to define second language learning motivation and to discover relationships between motivation and gender. Using two well-known motivational models: Expectancy-value theory, and Gardner's Socio-educational model, the present study sought to (1) first define foreign language learning motivation…
Identifying Bearing Rotodynamic Coefficients Using an Extended Kalman Filter
NASA Technical Reports Server (NTRS)
Miller, Brad A.; Howard, Samuel A.
2008-01-01
An Extended Kalman Filter is developed to estimate the linearized direct and indirect stiffness and damping force coefficients for bearings in rotor dynamic applications from noisy measurements of the shaft displacement in response to imbalance and impact excitation. The bearing properties are modeled as stochastic random variables using a Gauss-Markov model. Noise terms are introduced into the system model to account for all of the estimation error, including modeling errors and uncertainties and the propagation of measurement errors into the parameter estimates. The system model contains two user-defined parameters that can be tuned to improve the filter's performance; these parameters correspond to the covariance of the system and measurement noise variables. The filter is also strongly influenced by the initial values of the states and the error covariance matrix. The filter is demonstrated using numerically simulated data for a rotor bearing system with two identical bearings, which reduces the number of unknown linear dynamic coefficients to eight. The filter estimates for the direct damping coefficients and all four stiffness coefficients correlated well with actual values, whereas the estimates for the cross-coupled damping coefficients were the least accurate.
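The sketch below shows the structure of such a filter on a deliberately simplified one-degree-of-freedom analogue, where only one stiffness and one damping coefficient are appended to the state vector; the paper's rotor system estimates eight coefficients, but the predict/update cycle and the role of the two user-defined noise covariances (`q` and `r` here) are the same in spirit.

```python
import numpy as np

def ekf_step(z, x, P, dt, m, q, r):
    """One EKF predict/update cycle. State x = [pos, vel, k, c], where the
    unknown stiffness k and damping c are modeled as (nearly) constant
    states; z is a noisy displacement measurement (simplified sketch)."""
    pos, vel, k, c = x
    # Predict: Euler discretization of m*pos'' + c*pos' + k*pos = 0.
    acc = -(k * pos + c * vel) / m
    x_pred = np.array([pos + dt * vel, vel + dt * acc, k, c])
    # Jacobian of the transition function w.r.t. the augmented state.
    F = np.array([[1.0, dt, 0.0, 0.0],
                  [-dt * k / m, 1 - dt * c / m, -dt * pos / m, -dt * vel / m],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    P_pred = F @ P @ F.T + q * np.eye(4)       # q: process-noise tuning knob
    # Update from the displacement measurement only.
    H = np.array([[1.0, 0.0, 0.0, 0.0]])
    S = H @ P_pred @ H.T + r                   # r: measurement-noise knob
    K = P_pred @ H.T / S                       # Kalman gain (scalar innovation)
    x_new = x_pred + (K * (z - x_pred[0])).ravel()
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.array([1e-4, 0.0, 2e5, 50.0]), np.eye(4) * 1e3
x, P = ekf_step(z=1.2e-4, x=x, P=P, dt=1e-4, m=10.0, q=1e-8, r=1e-10)
```

Tuning `q` and `r` trades responsiveness against noise rejection, which is the filter-tuning sensitivity the abstract highlights.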
Final Report Systems Level Analysis of the Function and Adaptive Responses of Methanogenic Consortia
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lovley, Derek R.
The purpose of this research was to determine whether the syntrophic microbial associations that are central to the functioning of methane-producing terrestrial wetlands can be predictively modeled with coupled multi-species genome-scale metabolic models. Such models are important because methane is an important greenhouse gas, and there is a need to predictively model how methane-producing microbial communities will respond to environmental perturbations such as global climate change. The research discovered that the most prodigious methane-producing microorganisms on earth participate in a previously unrecognized form of energy exchange. The methane producers Methanosaeta and Methanosarcina forge biological electrical connections with other microbes in order to obtain electrons to reduce carbon dioxide to methane. This direct interspecies electron transfer (DIET) was demonstrated in complex microbial communities as well as in defined co-cultures. For example, metatranscriptomic analysis of gene expression in both natural communities and defined co-cultures demonstrated that Methanosaeta species highly expressed genes for the enzymes for the reduction of carbon dioxide to methane. Furthermore, Methanosaeta's electron-donating partners highly expressed genes for the biological electrical connections known as microbial nanowires. A series of studies involving transcriptomics, genome resequencing, and analysis of the metabolism of a series of strains with targeted gene deletions further elucidated the mechanisms and energetics of DIET in methane-producing co-cultures, as well as in a co-culture of Geobacter metallireducens and Geobacter sulfurreducens, which provided a system for studying DIET with two genetically tractable partners. Genome-scale modeling of DIET in the G. metallireducens/G. sulfurreducens co-culture suggested that DIET provides more energy to the electron-donating partner than electron exchange via interspecies hydrogen transfer, but that the performance of DIET may be strongly influenced by environmental factors. These studies have significantly modified conceptual models for carbon and electron flow in methane-producing environments and have developed a computational framework for predictively modeling the influence of environmental perturbations on methane-producing microbial communities. The results have important implications for modeling the response of methane-producing microbial communities to climate change, as well as for the bioenergy strategy of converting wastes and biomass to methane.
Improved Cook-off Modeling of Multi-component Cast Explosives
NASA Astrophysics Data System (ADS)
Nichols, Albert
2017-06-01
In order to understand the hazards associated with energetic materials, it is important to understand their behavior in adverse thermal environments. These processes are relatively well understood for solid explosives; however, the same cannot be said for multi-component melt-cast explosives. Here we describe the continued development of ALE3D, a coupled thermal/chemical/mechanical code, to improve its description of fluid explosives. The improved physics models include: 1) a chemical-potential-driven species segregation model, which allows us to capture the complex flow fields associated with melting and decomposing Comp-B, where the denser RDX tends to settle and the decomposing gases rise; 2) an automatically scaled stream-wise diffusion model for thermal, species, and momentum diffusion, which adds sufficient numerical diffusion in the direction of flow to maintain numerical stability when the system is under-resolved, as occurs for large systems; and 3) a slurry viscosity model, required to properly define the flow characteristics of the multi-component fluidized system. These models will be demonstrated on a simple Comp-B system. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344.
A reference model for space data system interconnection services
NASA Astrophysics Data System (ADS)
Pietras, John; Theis, Gerhard
1993-03-01
The widespread adoption of standard packet-based data communication protocols and services for spaceflight missions provides the foundation for other standard space data handling services. These space data handling services can be defined as increasingly sophisticated processing of data or information received from lower-level services, using a layering approach made famous in the International Organization for Standardization (ISO) Open System Interconnection Reference Model (OSI-RM). The Space Data System Interconnection Reference Model (SDSI-RM) incorporates the conventions of the OSI-RM to provide a framework within which a complete set of space data handling services can be defined. The use of the SDSI-RM is illustrated through its application to data handling services and protocols that have been defined by, or are under consideration by, the Consultative Committee for Space Data Systems (CCSDS).
On scattered waves and lipid domains: detecting membrane rafts with X-rays and neutrons
Marquardt, Drew; Heberle, Frederick A.; Nickels, Jonathan D.; ...
2015-09-21
In order to understand the biological role of lipids in cell membranes, it is necessary to determine the mesoscopic structure of well-defined model membrane systems. Neutron and X-ray scattering are non-invasive, probe-free techniques that have been used extensively in such systems to probe length scales ranging from angstroms to microns, and dynamics occurring over picosecond to millisecond time scales. Recent developments in the area of phase-separated lipid systems mimicking membrane rafts are presented, and the underlying concepts of the different scattering techniques used to study them are discussed in detail.
NASA Astrophysics Data System (ADS)
Koda, Shin-ichi
2016-03-01
We theoretically investigate a possibility that the symmetry of the repetitively branched structure of light-harvesting dendrimers creates the energy gradient descending toward inner generations (layers of pigment molecules) of the dendrimers. In the first half of this paper, we define a model system using the Frenkel exciton Hamiltonian that focuses only on the topology of dendrimers and numerically show that excitation energy tends to gather at inner generations of the model system at a thermal equilibrium state. This indicates that an energy gradient is formed in the model system. In the last half, we attribute this result to the symmetry of the model system and propose two symmetry-origin mechanisms creating the energy gradient. The present analysis and proposition are based on the theory of the linear chain (LC) decomposition [S. Koda, J. Chem. Phys. 142, 204112 (2015)], which equivalently transforms the model system into a set of one-dimensional systems on the basis of the symmetry of dendrimers. In the picture of the LC decomposition, we find that energy gradient is formed both in each linear chain and among linear chains, and these two mechanisms explain the numerical results well.
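To illustrate the kind of calculation described, here is a Python sketch that builds a Frenkel exciton Hamiltonian on a dendrimer-like tree with uniform site energies and couplings, then computes the thermal-equilibrium exciton population gathered at each generation. The parameter values are toy assumptions, and the paper's actual analysis proceeds via the LC decomposition rather than brute-force diagonalization.

```python
import numpy as np
from itertools import count

def dendrimer_hamiltonian(generations, branching=3, eps=0.0, j=-0.1):
    """Single-exciton Frenkel Hamiltonian on a tree mimicking a dendrimer:
    uniform site energy eps (eV) and nearest-neighbor coupling j (eV)."""
    ids = count()
    root = next(ids)
    gen_of = {root: 0}          # maps site index -> generation number
    edges, frontier = [], [root]
    for g in range(1, generations + 1):
        nxt = []
        for parent in frontier:
            kids = branching if parent == root else branching - 1
            for _ in range(kids):
                node = next(ids)
                gen_of[node] = g
                edges.append((parent, node))
                nxt.append(node)
        frontier = nxt
    n = len(gen_of)
    h = eps * np.eye(n)
    for p, c in edges:
        h[p, c] = h[c, p] = j
    return h, gen_of

h, gen_of = dendrimer_hamiltonian(generations=4)
evals, evecs = np.linalg.eigh(h)
beta = 1.0 / 0.025                      # ~room temperature, energies in eV
w = np.exp(-beta * (evals - evals.min()))
w /= w.sum()                            # Boltzmann weights of eigenstates
site_pop = (evecs ** 2) @ w             # thermal population of each site
gen_pop = np.zeros(max(gen_of.values()) + 1)
for site, g in gen_of.items():
    gen_pop[g] += site_pop[site]        # population gathered per generation
print(gen_pop)
```

Comparing `gen_pop` across generations is one direct way to check numerically whether excitation tends to accumulate at the inner layers, which is the effect the abstract reports.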
FOSSIL2 energy policy model documentation: FOSSIL2 documentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
1980-10-01
This report discusses the structure, derivations, assumptions, and mathematical formulation of the FOSSIL2 model. Each major facet of the model (supply/demand interactions, industry financing, and production) has been designed to parallel closely the actual cause/effect relationships determining the behavior of the United States energy system. The data base for the FOSSIL2 program is large, as is appropriate for a system dynamics simulation model. When possible, all data were obtained from sources well known to experts in the energy field. Cost and resource estimates are based on DOE data whenever possible. This report presents the FOSSIL2 model at several levels. Volumes II and III of this report list the equations that comprise the FOSSIL2 model, along with variable definitions and a cross-reference list of the model variables. Volume II provides the model equations with each of their variables defined, while Volume III lists the equations, and a one-line definition for each equation, in a shorter, more readable format.
Agent-based modeling as a tool for program design and evaluation.
Lawlor, Jennifer A; McGirr, Sara
2017-12-01
Recently, systems thinking and systems science approaches have gained popularity in the field of evaluation; however, there has been relatively little exploration of how evaluators could use quantitative tools to assist in the implementation of systems approaches therein. The purpose of this paper is to explore potential uses of one such quantitative tool, agent-based modeling, in evaluation practice. To this end, we define agent-based modeling and offer potential uses for it in typical evaluation activities, including: engaging stakeholders, selecting an intervention, modeling program theory, setting performance targets, and interpreting evaluation results. We provide demonstrative examples from published agent-based modeling efforts both inside and outside the field of evaluation for each of the evaluative activities discussed. We further describe potential pitfalls of this tool and offer cautions for evaluators who may choose to implement it in their practice. Finally, the article concludes with a discussion of the future of agent-based modeling in evaluation practice and a call for more formal exploration of this tool as well as other approaches to simulation modeling in the field. Copyright © 2017 Elsevier Ltd. All rights reserved.
Information system life-cycle and documentation standards, volume 1
NASA Technical Reports Server (NTRS)
Callender, E. David; Steinbacher, Jody
1989-01-01
The Software Management and Assurance Program (SMAP) Information System Life-Cycle and Documentation Standards Document describes the Version 4 standard information system life-cycle in terms of processes, products, and reviews. The description of the products includes detailed documentation standards. The standards in this document set can be applied to the life-cycle, i.e., to each phase in the system's development, and to the documentation of all NASA information systems. This provides consistency across the agency as well as visibility into the completeness of the information recorded. An information system is software-intensive, but consists of any combination of software, hardware, and operational procedures required to process, store, or transmit data. This document defines a standard life-cycle model and content for associated documentation.
Local Difference Measures between Complex Networks for Dynamical System Model Evaluation
Lange, Stefan; Donges, Jonathan F.; Volkholz, Jan; Kurths, Jürgen
2015-01-01
A faithful modeling of real-world dynamical systems necessitates model evaluation. A recent promising methodological approach to this problem has been based on complex networks, which in turn have proven useful for the characterization of dynamical systems. In this context, we introduce three local network difference measures and demonstrate their capabilities in the field of climate modeling, where these measures facilitate a spatially explicit model evaluation. Building on a recent study by Feldhoff et al. [1], we comparatively analyze statistical and dynamical regional climate simulations of the South American monsoon system. Three types of climate networks representing different aspects of rainfall dynamics are constructed from the modeled precipitation space-time series. Specifically, we define simple graphs based on positive as well as negative rank correlations between rainfall anomaly time series at different locations, as well as graphs based on spatial synchronizations of extreme rain events. An evaluation against respective networks built from daily satellite data provided by the Tropical Rainfall Measuring Mission 3B42 V7 reveals far greater differences in model performance between network types for a fixed but arbitrary climate model than between climate models for a fixed but arbitrary network type. We identify two sources of uncertainty in this respect. Firstly, climate variability limits fidelity, particularly in the case of the extreme event network; and secondly, larger geographical link lengths render link misplacements more likely, most notably in the case of the anticorrelation network; both contributions are quantified using suitable ensembles of surrogate networks. Our model evaluation approach is applicable to any multidimensional dynamical system, and our simple graph difference measures in particular are highly versatile, as the graphs to be compared may be constructed in whatever way required. Generalizations to directed as well as edge- and node-weighted graphs are discussed. PMID:25856374
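A schematic Python sketch of the pipeline, under stated assumptions: rank-correlation climate networks are built by thresholding Spearman correlations between precipitation series, and a per-node link-difference score stands in for the paper's local difference measures (the threshold value and the score definition are illustrative, not the authors' exact choices).

```python
import numpy as np
from scipy.stats import spearmanr

def correlation_network(series, thresh=0.5, sign=+1):
    """Simple graph linking grid points whose rainfall-anomaly series have
    Spearman correlation above +thresh (sign=+1) or below -thresh (sign=-1).
    series has shape (time, nodes); the threshold is illustrative."""
    rho, _ = spearmanr(series, axis=0)
    adj = (sign * rho) > thresh
    np.fill_diagonal(adj, False)
    return adj

def local_link_difference(a, b):
    """Per-node fraction of links present in exactly one of two graphs --
    one plausible local difference score, in the spirit of the paper."""
    return (a ^ b).sum(axis=1) / np.maximum((a | b).sum(axis=1), 1)

t, n = 365, 50
model = np.random.rand(t, n)     # stand-in for modeled precipitation fields
obs = np.random.rand(t, n)       # stand-in for satellite observations
d = local_link_difference(correlation_network(model),
                          correlation_network(obs))
```

Mapping `d` back onto the grid locations is what makes the evaluation spatially explicit: each node carries its own model-versus-observation discrepancy.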
Fostering Recovery from Life-Transforming Mental Health Disorders: A Synthesis and Model
Green, Carla A.
2012-01-01
In the past, “recovery” from serious mental health problems has been variously defined and generally considered rare. Current evidence suggests that some form of recovery is both possible and common, yet we know little about the processes that differentiate those who recover from those who do not. This paper discusses approaches to defining recovery, proposes a model for fostering, understanding, and studying recovery, and suggests questions for clinicians, researchers, and policy makers. The proposed model is a synthesis of work from the field of mental health as well as from other disciplines. Environment, resources, and strains provide the backdrop for recovery; core recovery processes include development, learning, healing, and their primary behavioral manifestation, adaptation. Components facilitating recovery include sources of motivation (hope, optimism, and meaning), prerequisites for action (agency, control, and autonomy), and capacity (competence and dysfunction). Attending to these aspects of the recovery process could help shape clinical practice, and systems that provide and finance mental health care, in ways that promote recovery. PMID:23264751
Assessing the performance of regional landslide early warning models: the EDuMaP method
NASA Astrophysics Data System (ADS)
Calvello, M.; Piciullo, L.
2015-10-01
The paper proposes the evaluation of the technical performance of a regional landslide early warning system by means of an original approach, called the EDuMaP method, comprising three successive steps: identification and analysis of the Events (E), i.e. landslide events and warning events derived from available landslide and warning databases; definition and computation of a Duration Matrix (DuMa), whose elements report the time associated with the occurrence of landslide events in relation to the occurrence of warning events, in their respective classes; evaluation of the early warning model Performance (P) by means of performance criteria and indicators applied to the duration matrix. During the first step, the analyst takes into account the features of the warning model by means of ten input parameters, which are used to identify and classify landslide and warning events according to their spatial and temporal characteristics. In the second step, the analyst computes a time-based duration matrix having a number of rows and columns equal to the number of classes defined for the warning and landslide events, respectively. In the third step, the analyst computes a series of model performance indicators derived from a set of performance criteria, which need to be defined by considering, once again, the features of the warning model. The proposed method is based on a framework clearly distinguishing between local and regional landslide early warning systems as well as among correlation laws, warning models and warning systems. The applicability, potentialities and limitations of the EDuMaP method are tested and discussed using real landslides and warnings data from the municipal early warning system operating in Rio de Janeiro (Brazil).
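As a rough illustration of the second step, the sketch below accumulates a duration matrix from per-time-step class labels of warning and landslide events. The real method classifies events via ten input parameters; this simplified version, with hypothetical inputs, only conveys the bookkeeping.

    import numpy as np

    def duration_matrix(warning_class, landslide_class, n_w, n_l):
        # warning_class[t], landslide_class[t]: class labels active at
        # time step t (e.g., one step per day). Entry (i, j) accumulates
        # the time spent in warning class i while landslide class j occurred.
        D = np.zeros((n_w, n_l))
        for w, l in zip(warning_class, landslide_class):
            D[w, l] += 1  # one time step of co-occurrence
        return D

    # Toy usage: 3 warning classes x 2 landslide classes over ten days
    w = [0, 0, 1, 2, 2, 1, 0, 0, 1, 2]
    l = [0, 0, 0, 1, 1, 0, 0, 0, 1, 1]
    print(duration_matrix(w, l, n_w=3, n_l=2))

Performance indicators are then computed as functions of the entries of D, e.g., rewarding time spent in high warning classes during severe landslide activity.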
Design of a reliable and operational landslide early warning system at regional scale
NASA Astrophysics Data System (ADS)
Calvello, Michele; Piciullo, Luca; Gariano, Stefano Luigi; Melillo, Massimo; Brunetti, Maria Teresa; Peruccacci, Silvia; Guzzetti, Fausto
2017-04-01
Landslide early warning systems at regional scale are used to warn authorities, civil protection personnel and the population about the occurrence of rainfall-induced landslides over wide areas, typically through the prediction and measurement of meteorological variables. A warning model for these systems must include a regional correlation law and a decision algorithm. A regional correlation law can be defined as a functional relationship between rainfall and landslides; it is typically based on thresholds of rainfall indicators (e.g., cumulated rainfall, rainfall duration) related to different exceedance probabilities of landslide occurrence. A decision algorithm can be defined as a set of assumptions and procedures linking rainfall thresholds to warning levels. The design and the employment of an operational and reliable early warning system for rainfall-induced landslides at regional scale depend on the identification of a reliable correlation law as well as on the definition of a suitable decision algorithm. Herein, a five-step process chain addressing both issues and based on rainfall thresholds is proposed; the procedure is tested in a landslide-prone area of the Campania region in southern Italy. To this purpose, a database of 96 shallow landslides triggered by rainfall in the period 2003-2010 and rainfall data gathered from 58 rain gauges are used. First, a set of rainfall thresholds are defined applying a frequentist method to reconstructed rainfall conditions triggering landslides in the test area. In the second step, several thresholds at different exceedance probabilities are evaluated, and different percentile combinations are selected for the activation of three warning levels. Subsequently, within steps three and four, the issuing of warning levels is based on the comparison, over time and for each combination, between the measured rainfall and the pre-defined warning level thresholds. Finally, the optimal percentile combination to be employed in the regional early warning system is selected evaluating the model performance in terms of success and error indicators by means of the "event, duration matrix, performance" (EDuMaP) method.
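The decision-algorithm step can be pictured with a small sketch: cumulated rainfall is compared against a family of power-law threshold curves E = a * D**b (one curve per warning level, as is common for frequentist rainfall thresholds), and the highest exceeded curve sets the warning level. The (a, b) pairs below are hypothetical, not the Campania values.

    def warning_level(cumulated_mm, duration_h, thresholds):
        # thresholds: list of (a, b) pairs, one per warning level,
        # ordered from the lowest to the highest threshold curve.
        level = 0
        for lvl, (a, b) in enumerate(thresholds, start=1):
            if cumulated_mm >= a * duration_h ** b:
                level = lvl
        return level

    # Toy usage: three percentile curves mapped to warning levels 1-3
    curves = [(8.0, 0.25), (12.0, 0.25), (18.0, 0.25)]
    print(warning_level(cumulated_mm=20.0, duration_h=24, thresholds=curves))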
A Calculus for Boxes and Traits in a Java-Like Setting
NASA Astrophysics Data System (ADS)
Bettini, Lorenzo; Damiani, Ferruccio; de Luca, Marco; Geilmann, Kathrin; Schäfer, Jan
The box model is a component model for the object-oriented paradigm that defines components (the boxes) with clear encapsulation boundaries. Having well-defined boundaries is crucial in component-based software development, because it enables reasoning about the interference and interaction between a component and its context. In general, boxes contain several objects and inner boxes, some of which are local to the box and cannot be accessed from other boxes, while others are accessible by other boxes. A trait is a set of methods divorced from any class hierarchy. Traits can be composed together to form classes or other traits. We present a calculus for boxes and traits. Traits are units of fine-grained reuse, whereas boxes can be seen as units of coarse-grained reuse. The calculus is equipped with an ownership type system and allows us to combine coarse- and fine-grained reuse of code while maintaining encapsulation of components.
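Traits can be loosely illustrated with Python mixins: bundles of methods defined outside any fixed class hierarchy and composed into classes. The analogy is imperfect (Python mixins lack trait-style conflict resolution and the paper's ownership types), but it conveys the flavor of fine-grained reuse; all names are invented.

    class WalkTrait:
        # A 'trait': a set of methods divorced from any class hierarchy.
        def walk(self):
            return f"{self.name} walks"

    class SwimTrait:
        def swim(self):
            return f"{self.name} swims"

    # Compose the traits into a class; box-like coarse-grained structure
    # would additionally impose encapsulation boundaries around instances.
    class Duck(WalkTrait, SwimTrait):
        def __init__(self, name):
            self.name = name

    d = Duck("donald")
    print(d.walk(), "/", d.swim())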
Consistent integration of experimental and ab initio data into molecular and coarse-grained models
NASA Astrophysics Data System (ADS)
Vlcek, Lukas
As computer simulations are increasingly used to complement or replace experiments, highly accurate descriptions of physical systems at different time and length scales are required to achieve realistic predictions. The questions of how to objectively measure model quality in relation to reference experimental or ab initio data, and how to transition seamlessly between different levels of resolution are therefore of prime interest. To address these issues, we use the concept of statistical distance to define a measure of similarity between statistical mechanical systems, i.e., a model and its target, and show that its minimization leads to general convergence of the systems' measurable properties. Through systematic coarse-graining, we arrive at appropriate expressions for optimization loss functions consistently incorporating microscopic ab initio data as well as macroscopic experimental data. The design of coarse-grained and multiscale models is then based on factoring the model system partition function into terms describing the system at different resolution levels. The optimization algorithm takes advantage of thermodynamic perturbation expressions for fast exploration of the model parameter space, enabling us to scan millions of parameter combinations per hour on a single CPU. The robustness and generality of the new model optimization framework and its efficient implementation are illustrated on selected examples including aqueous solutions, magnetic systems, and metal alloys.
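One common formalization of statistical distance between two discrete distributions is the Bhattacharyya angle; whether this matches the author's exact loss function is an assumption, but it illustrates a similarity measure that vanishes only when model and target agree.

    import numpy as np

    def statistical_distance(p, q):
        # p, q: discrete probability distributions over the same outcomes.
        # s = arccos(sum_i sqrt(p_i q_i)) is zero iff p == q.
        p = np.asarray(p, dtype=float)
        q = np.asarray(q, dtype=float)
        bc = np.sum(np.sqrt(p * q))  # Bhattacharyya coefficient
        return np.arccos(np.clip(bc, 0.0, 1.0))

    # Toy usage: model vs. target outcome histograms
    target = [0.5, 0.3, 0.2]
    model = [0.45, 0.35, 0.2]
    print(statistical_distance(model, target))  # small but nonzero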
Halford, Keith J.
2006-01-01
MODOPTIM is a non-linear ground-water model calibration and management tool that simulates flow with MODFLOW-96 as a subroutine. A weighted sum-of-squares objective function defines optimal solutions for calibration and management problems. Water levels, discharges, water quality, subsidence, and pumping-lift costs are the five direct observation types that can be compared in MODOPTIM. Differences between direct observations of the same type can be compared to fit temporal changes and spatial gradients. Water levels in pumping wells, wellbore storage in the observation wells, and rotational translation of observation wells also can be compared. Negative and positive residuals can be weighted unequally so inequality constraints such as maximum chloride concentrations or minimum water levels can be incorporated in the objective function. Optimization parameters are defined with zones and parameter-weight matrices. Parameter change is estimated iteratively with a quasi-Newton algorithm and is constrained to a user-defined maximum parameter change per iteration. Parameters that are less sensitive than a user-defined threshold are not estimated. MODOPTIM facilitates testing more conceptual models by expediting calibration of each conceptual model. Examples of applying MODOPTIM to aquifer-test analysis, ground-water management, and parameter estimation problems are presented.
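The asymmetric weighting of residuals described above can be sketched as follows; the function name and inputs are illustrative, not MODOPTIM's actual interface.

    def weighted_sum_of_squares(residuals, w_pos, w_neg):
        # Weighted sum of squares with unequal weights for positive and
        # negative residuals, so one-sided constraints (e.g., a maximum
        # chloride concentration) can be folded into the objective.
        total = 0.0
        for r, wp, wn in zip(residuals, w_pos, w_neg):
            w = wp if r > 0 else wn
            total += w * r * r
        return total

    # Toy usage: penalize positive residuals 10x more than negative ones
    res = [0.4, -1.2, 0.1]  # simulated minus observed
    print(weighted_sum_of_squares(res, w_pos=[10, 10, 10], w_neg=[1, 1, 1]))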
NASA Astrophysics Data System (ADS)
Dontsov, Dmitry; Yushkova, Natalia
2017-01-01
The paper aims at detecting conceptual conflicts within architectural and urban construction activity (AUCA), defining their causes and substantiating ways to reduce the adverse effects they produce. Cause-and-effect analysis is used, along with evolutionary and comparative analyses; together they allow the laws governing the formation of an activity model in the modern environment to be defined and its elements to be ranked. The relevance of the paper rests on establishing the scientific and theoretical grounds for improving the methodology of AUCA by adapting it to the imperatives of state management. Systems analysis demonstrated the practicability of taking institutional-environment factors into account when reorganizing the AUCA model, which provides the fullest implementation of sustainable development principles. It is shown that territorial planning is not only the leading type of AUCA but also an integrator for the functioning structures of state management in the planning of social and economic development. The main result of the paper consists in identifying promising directions for the evolution of the modern methodology, whose growing interdisciplinary character leads to a qualitative renewal of territorial management principles.
A method for defining value in healthcare using cancer care as a model.
Feeley, Thomas W; Fly, Helen Shafer; Albright, Heidi; Walters, Ronald; Burke, Thomas W
2010-01-01
Value-based healthcare delivery is being discussed in a variety of healthcare forums. This concept is of great importance in the reform of the US healthcare delivery system. Defining and applying the principles of value-based competition in healthcare delivery models will permit future evaluation of various delivery applications. However, there are relatively few examples of how to apply these principles to an existing care delivery system. In this article, we describe an approach for assessing the value created when treating cancer patients in a multidisciplinary care setting within a comprehensive cancer center. We describe the analysis of a multidisciplinary care center that treats head and neck cancers, and we attempt to examine how this center integrates with Porter and Teisberg's (2006) concept of value-based competition based on the results analysis. Using the relationship between outcomes and costs as the definition of value, we developed a methodology to analyze proposed outcomes for a population of patients treated using a multidisciplinary approach, and we matched those outcomes to the costs of the care provided. We present this work as a model for defining value for a subset of patients undergoing active treatment. The method can be applied not only to head and neck treatments, but to other modalities as well. Public reporting of this type of data for a variety of conditions can lead to improved competition in the healthcare marketplace and, as a result, improve outcomes and decrease health expenditures.
A Model of Biological Attacks on a Realistic Population
NASA Astrophysics Data System (ADS)
Carley, Kathleen M.; Fridsma, Douglas; Casman, Elizabeth; Altman, Neal; Chen, Li-Chiou; Kaminsky, Boris; Nave, Demian; Yahja, Alex
The capability to assess the impacts of large-scale biological attacks and the efficacy of containment policies is critical and requires knowledge-intensive reasoning about social response and disease transmission within a complex social system. There is a close linkage among social networks, transportation networks, disease spread, and early detection. Spatial dimensions related to public gathering places such as hospitals, nursing homes, and restaurants can play a major role in epidemics [Klovdahl et al. 2001]. Like natural epidemics, bioterrorist attacks unfold within spatially defined, complex social systems, and the societal and networked response can have profound effects on their outcome. This paper focuses on bioterrorist attacks, but the model has been applied to emergent and familiar diseases as well.
NASA Technical Reports Server (NTRS)
Mobasseri, B. G.; Mcgillem, C. D.; Anuta, P. E. (Principal Investigator)
1978-01-01
The author has identified the following significant results. The probability of correct classification of various populations in the data was defined as the primary performance index. Because the multispectral data are multiclass in nature, a Bayes error estimation procedure depending on a set of class statistics alone was required. The classification error was expressed as an N-dimensional integral, where N is the dimensionality of the feature space. The multispectral scanner spatial model was represented by a linear shift-invariant multiple-port system in which the N spectral bands comprise the input processes. The scanner characteristic function, i.e., the relationship governing the transformation of the input spatial (and hence spectral) correlation matrices through the system, was developed.
Adaptive estimation of hand movement trajectory in an EEG based brain-computer interface system
NASA Astrophysics Data System (ADS)
Robinson, Neethu; Guan, Cuntai; Vinod, A. P.
2015-12-01
Objective. The various parameters that define a hand movement, such as its trajectory, speed, etc., are encoded in distinct brain activities. Decoding this information from neurophysiological recordings is a less explored area of brain-computer interface (BCI) research. Applying non-invasive recordings such as electroencephalography (EEG) for decoding makes the problem more challenging, as the encoding is assumed to be deep within the brain and not easily accessible by scalp recordings. Approach. EEG based BCI systems can be developed to identify the neural features underlying movement parameters that can be further utilized to provide a detailed and well-defined control command set to a BCI output device. Real-time continuous control is better suited for practical BCI systems, and can be achieved by continuous adaptive reconstruction of movement trajectory rather than discrete brain activity classifications. In this work, we adaptively reconstruct/estimate the parameters of two-dimensional hand movement trajectory, namely movement speed and position, from multi-channel EEG recordings. The data for analysis were collected in an experiment involving center-out right-hand movement tasks in four different directions at two different speeds in random order. We estimate movement trajectory using a Kalman filter that models the relation between brain activity and the recorded parameters based on a set of defined predictors. We propose a method to define these predictor variables, including spatially, spectrally and temporally localized neural information, and to select optimally informative variables. Main results. The proposed method yielded a correlation of 0.60 ± 0.07 between recorded and estimated data. Further, incorporating the proposed predictor subset selection, the correlation achieved is 0.57 ± 0.07 (p < 0.004), with a significant gain in stability of the system as well as a dramatic (76%) reduction in the number of predictors, saving computational time. Significance. The proposed system provides real-time movement control using EEG-BCI with control over movement speed and position. These results are higher and statistically significant compared to existing EEG-based techniques and thus demonstrate the applicability of the proposed method for efficient estimation of movement parameters and for continuous motor control.
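For readers unfamiliar with the estimator, a generic linear Kalman filter predict/update cycle is sketched below; the matrices and the scalar "EEG feature" stream are stand-ins, not the paper's fitted models.

    import numpy as np

    def kalman_step(x, P, z, F, H, Q, R):
        # One predict/update cycle: state x (e.g., hand position and speed),
        # covariance P, observation z derived from EEG predictors,
        # transition F, observation model H, process/observation noise Q, R.
        x = F @ x
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R              # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        return x, P

    # Toy usage: 2-D state (position, speed), one scalar observation per step
    F = np.array([[1.0, 1.0], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = 0.01 * np.eye(2)
    R = np.array([[0.5]])
    x, P = np.zeros(2), np.eye(2)
    for z in [0.2, 0.5, 0.9]:            # stand-ins for decoded EEG features
        x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
    print(x)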
Allie-Ebrahim, Tariq; Zhu, Qingyu; Bräuer, Pierre; Moggridge, Geoff D; D'Agostino, Carmine
2017-06-21
The Maxwell-Stefan model is a popular diffusion model originally developed to describe diffusion in gases, which can be considered thermodynamically ideal mixtures, although its application has been extended to diffusion in non-ideal liquid mixtures as well. A drawback of the model is that it requires the Maxwell-Stefan diffusion coefficients, which are not based on measurable quantities and have to be estimated. As a result, numerous estimation methods, such as the Darken model, have been proposed for these diffusion coefficients. However, the Darken model was derived, and is only well defined, for binary systems. This model has been extended to ternary systems according to two proposed forms, one by R. Krishna and J. M. van Baten, Ind. Eng. Chem. Res., 2005, 44, 6939-6947 and the other by X. Liu, T. J. H. Vlugt and A. Bardow, Ind. Eng. Chem. Res., 2011, 50, 10350-10358. In this paper, the two forms have been analysed against the ideal ternary system of methanol/butan-1-ol/propan-1-ol using experimental values of self-diffusion coefficients. In particular, using pulsed gradient stimulated echo nuclear magnetic resonance (PGSTE-NMR) we have measured the self-diffusion coefficients in various methanol/butan-1-ol/propan-1-ol mixtures. The experimental values of self-diffusion coefficients were then used as the input data required for the Darken model. The predictions of the two proposed multicomponent forms of this model were then compared to experimental values of mutual diffusion coefficients for the ideal alcohol ternary system. This experiment-based approach showed that Liu's model gives better predictions than that of Krishna and van Baten, although it was only accurate to within 26%. Nonetheless, the multicomponent Darken model in conjunction with self-diffusion measurements from PGSTE-NMR represents an attractive method for rapid estimation of mutual diffusion in multicomponent systems, especially when compared to exhaustive MD simulations.
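In the binary limit, the Darken estimate of the Maxwell-Stefan diffusivity from self-diffusion coefficients takes the well-known form D12 = x1*D2* + x2*D1*; the sketch below implements only this binary case, since the competing ternary extensions differ precisely in how they generalize it. The numbers are hypothetical.

    def darken_binary(x1, x2, d1_self, d2_self):
        # Binary Darken estimate: D12 = x1 * D2_self + x2 * D1_self.
        assert abs(x1 + x2 - 1.0) < 1e-9, "mole fractions must sum to 1"
        return x1 * d2_self + x2 * d1_self

    # Toy usage with hypothetical self-diffusivities in units of 1e-9 m^2/s
    print(darken_binary(0.3, 0.7, d1_self=2.2, d2_self=0.8))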
Tests of Theories of Crime in Female Prisoners.
Lindberg, Marc A; Fugett, April; Adkins, Ashtin; Cook, Kelsey
2017-02-01
Several general theories of crime were tested with path models on 293 female prisoners in a U.S. state prison. The theories tested included Social Bond and Control, Thrill/Risk Seeking, and a new attachment-based Developmental Dynamic Systems model. A large battery of different instruments, ranging from measures of risk taking, to a crime addiction scale, to Childhood Adverse Events, to attachments and clinical issues, was used. The older general theories of crime did not hold up well under the rigor of path modeling. The new dynamic systems model was supported: it incorporates adverse childhood events leading to (a) peer crime, (b) crime addiction, and (c) a measure derived from the Attachment and Clinical Issues Questionnaire (ACIQ) that takes individual differences in attachments and clinical issues into account. The results are discussed in terms of new approaches to the Research Domain Criteria (RDoC) and new approaches to intervention.
Use of fuzzy sets in modeling of GIS objects
NASA Astrophysics Data System (ADS)
Mironova, Yu N.
2018-05-01
The paper discusses modeling and methods of data visualization in geographic information systems. Information processing in geoinformatics is based on the use of models; geoinformation modeling is therefore a key link in the chain of geodata processing. Solving problems with geographic information systems often requires representing approximate or insufficiently reliable information about map features in the GIS database. Heterogeneous data of different origin and accuracy have some degree of uncertainty. In addition, not all information is accurate: already during the initial measurements, poorly defined terms and attributes (e.g., "soil, well-drained") are used. Methods are therefore needed for working with uncertain requirements, classes, and boundaries. The author proposes using fuzzy sets for spatial information. In terms of its characteristic function, a fuzzy set is a natural generalization of an ordinary set, obtained by rejecting the binary nature of this function and allowing it to take any value in the interval [0, 1].
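A standard way to encode such vague attributes is a trapezoidal membership function; the sketch below grades a hypothetical "drainage index" into the fuzzy set "well-drained soil". The breakpoints are invented for illustration.

    def trapezoidal_membership(x, a, b, c, d):
        # Membership rises on [a, b], is 1 on [b, c], falls on [c, d],
        # and is 0 outside [a, d].
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        if x < b:
            return (x - a) / (b - a)
        return (d - x) / (d - c)

    # Toy usage: hypothetical drainage index on a 0-100 scale
    for idx in (20, 45, 70, 95):
        print(idx, trapezoidal_membership(idx, 30, 50, 80, 90))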
Advanced and secure architectural EHR approaches.
Blobel, Bernd
2006-01-01
Electronic Health Records (EHRs) provided as a lifelong patient record advance towards core applications of distributed and co-operating health information systems and health networks. For meeting the challenge of scalable, flexible, portable, secure EHR systems, the underlying EHR architecture must be based on the component paradigm and model driven, separating platform-independent and platform-specific models. To allow manageable models, real systems must be decomposed and simplified. The resulting modelling approach has to follow the ISO Reference Model - Open Distributed Processing (RM-ODP). The ISO RM-ODP describes any system component from different perspectives. Platform-independent perspectives comprise the enterprise view (business processes, policies, scenarios, use cases), the information view (classes and associations) and the computational view (composition and decomposition), whereas platform-specific perspectives concern the engineering view (physical distribution and realisation) and the technology view (implementation details from protocols up to education and training) on system components. Those views have to be established for components reflecting aspects of all domains involved in healthcare environments, including administrative, legal, medical, technical, etc. Thus, security-related component models reflecting all the views mentioned have to be established to enable both application and communication security services as an integral part of the system's architecture. Besides decomposition and simplification of systems regarding the different viewpoints on their components, different levels of system granularity can be defined, hiding internals or focusing on properties of basic components to form a more complex structure. The resulting models describe both structure and behaviour of component-based systems. The described approach has been deployed in different projects defining EHR systems and their underlying architectural principles. In that context, the Australian GEHR project, the openEHR initiative, and the revision of CEN ENV 13606 "Electronic Health Record communication", all based on Archetypes, but also the HL7 version 3 activities are discussed in some detail. The latter include the HL7 RIM, the HL7 Development Framework, HL7's clinical document architecture (CDA), as well as the set of models from use cases, activity diagrams and sequence diagrams up to Domain Information Models (DMIMs), their building blocks, the Common Message Element Types (CMETs), and the constraining models down to their underlying concepts. The future-proof EHR architecture, as an open, user-centric, user-friendly, flexible, scalable, and portable core application in health information systems and health networks, has to follow advanced architectural paradigms.
Keeping track of worm trackers.
Husson, Steven J; Costa, Wagner Steuer; Schmitt, Cornelia; Gottschalk, Alexander
2013-02-22
C. elegans is used extensively as a model system in the neurosciences due to its well-defined nervous system. However, the seeming simplicity of this nervous system in anatomical structure and neuronal connectivity, at least compared to higher animals, underlies a rich diversity of behaviors. The usefulness of the worm in genome-wide mutagenesis or RNAi screens, where thousands of strains are assessed for phenotype, emphasizes the need for computational methods for automated parameterization of the generated behaviors. In addition, behaviors can be modulated by external cues such as temperature, O2 and CO2 concentrations, and mechanosensory and chemosensory inputs. Different machine vision tools have been developed to aid researchers in their efforts to inventory and characterize defined behavioral "outputs". Here we aim at providing an overview of different worm-tracking packages and video analysis tools designed to quantify different aspects of locomotion, such as the occurrence of directional changes (turns, omega bends), curvature of the sinusoidal shape (amplitude, body bend angles) and velocity (speed, backward or forward movement).
NASA Technical Reports Server (NTRS)
Hanley, G. M.
1980-01-01
Satellite configurations based on the Satellite Power System baseline requirements were analyzed and a preferred concept selected. A satellite construction base was defined, precursor operations incident to establishment of orbital support facilities were identified, and the satellite construction sequence and procedures were developed. Rectenna construction requirements were also addressed. Mass flow to orbit requirements were revised, and traffic models were established based on construction of 60 instead of 120 satellites. Analyses were conducted to determine satellite control, resources, manufacturing, and propellant requirements. The impact of the laser beam used for space-to-Earth power transmission upon the intervening atmosphere was examined, as well as the inverse effect. The significant space environments and their effects on spacecraft components were investigated to define the design and operational limits imposed by the environments on an orbit transfer vehicle. The results show that a LEO altitude of 300 nmi and a transfer orbit duration of 6 months are preferable.
NASA Technical Reports Server (NTRS)
Lightsey, W. D.; Alhorn, D. C.; Polites, M. E.
1992-01-01
An experiment designed to test the feasibility of using rotating unbalanced-mass (RUM) devices for line and raster scanning of gimbaled payloads, while expending very little power, is described. The experiment is configured for ground-based testing, but the scan concept is applicable to ground-based, balloon-borne, and space-based payloads, as well as free-flying spacecraft. The servos used in scanning are defined; the electronic hardware is specified; and a computer simulation model of the system is described. Simulation results are presented that predict system performance and verify the servo designs.
Sequence modelling and an extensible data model for genomic database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Peter Wei-Der
1992-01-01
The Human Genome Project (HGP) plans to sequence the human genome by the beginning of the next century. It will generate DNA sequences of more than 10 billion bases and complex marker sequences (maps) of more than 100 million markers. All of this information will be stored in database management systems (DBMSs). However, existing data models do not have the abstraction mechanism for modelling sequences, and existing DBMSs do not have operations for complex sequences. This work addresses the problem of sequence modelling in the context of the HGP and the more general problem of an extensible object data model that can incorporate the sequence model as well as existing and future data constructs and operators. First, we proposed a general sequence model that is application and implementation independent. This model is used to capture the sequence information found in the HGP at the conceptual level. In addition, abstract and biological sequence operators are defined for manipulating the modelled sequences. Second, we combined many features of semantic and object oriented data models into an extensible framework, which we called the "Extensible Object Model", to address the need for a modelling framework that can incorporate the sequence data model together with other types of data constructs and operators. This framework is based on the conceptual separation between constructors and constraints. We then used this modelling framework to integrate the constructs for the conceptual sequence model. The Extensible Object Model is also defined with a graphical representation, which is useful as a tool for database designers. Finally, we defined a query language to support this model and implemented the query processor to demonstrate the feasibility of the extensible framework and the usefulness of the conceptual sequence model.
Defining the end-point of mastication: A conceptual model.
Gray-Stuart, Eli M; Jones, Jim R; Bronlund, John E
2017-10-01
The great risks of swallowing are choking and aspiration of food into the lungs. Both are rare in normal functioning humans, which is remarkable given the diversity of foods and the estimated 10 million swallows performed in a lifetime. Nevertheless, it remains a major challenge to define the food properties that are necessary to ensure a safe swallow. Here, the mouth is viewed as a well-controlled processor where mechanical sensory assessment occurs throughout the occlusion-circulation cycle of mastication. Swallowing is a subsequent action. It is proposed here that, during mastication, temporal maps of interfacial property data are generated, which the central nervous system compares against a series of criteria in order to be sure that the bolus is safe to swallow. To determine these criteria, an engineering hazard analysis tool, alongside an understanding of fluid and particle mechanics, is used to deduce the mechanisms by which food may deposit or become stranded during swallowing. These mechanisms define the food properties that must be avoided. By inverting the thinking, from hazards to ensuring safety, six criteria arise which are necessary for a safe-to-swallow bolus. A new conceptual model is proposed to define when food is safe to swallow during mastication. This significantly advances earlier mouth models. The conceptual model proposed in this work provides a framework of decision-making to define when food is safe to swallow. This will be of interest to designers of dietary foods, foods for dysphagia sufferers and will aid the further development of mastication robots for preparation of artificial boluses for digestion research. It enables food designers to influence the swallow-point properties of their products. For example, a product may be designed to satisfy five of the criteria for a safe-to-swallow bolus, which means the sixth criterion and its attendant food properties define the swallow-point. Alongside other organoleptic factors, these properties define the end-point texture and enduring sensory perception of the food.
MCNP capabilities for nuclear well logging calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forster, R.A.; Little, R.C.; Briesmeister, J.F.
The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. This paper discusses how the general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo neutron photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particle or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data.
Dipole-dipole resistivity survey of a portion of the Coso Hot Springs KGRA, Inyo County, California
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, R.C.
1978-05-01
A detailed electrical resistivity survey of 54 line-km was completed at the Coso Hot Springs KGRA in September 1977. This survey has defined a bedrock resistivity low at least 4 sq mi (10 sq km) in extent associated with the geothermal system at Coso. The boundaries of this low are generally well defined to the north and west but not as well to the south, where an approximate southern limit has been determined. The bedrock resistivity low merges with an observed resistivity low over gravel fill east of Coso Hot Springs. A complex horizontal and vertical resistivity structure of the surveyed area has been defined which precludes the use of layered-earth or two-dimensional interpretive models for much of the surveyed area. In general the survey data indicate that a 10 to 20 ohm-meter zone extends from near surface to a depth greater than 750 meters within the geothermal system. This zone is bordered to the north and west by bedrock resistivities greater than 200 ohm-meters and to the south by bedrock resistivities greater than 50 ohm-meters. A combination of observed increases in: (1) fracture density (higher permeability), (2) alteration (high clay content), and (3) temperatures (higher dissolved solid content of ground water) within the bedrock low explain its presence.
On use of image quality metrics for perceptual blur modeling: image/video compression case
NASA Astrophysics Data System (ADS)
Cha, Jae H.; Olson, Jeffrey T.; Preece, Bradley L.; Espinola, Richard L.; Abbott, A. Lynn
2018-02-01
Linear system theory is employed to make target acquisition performance predictions for electro-optical/infrared imaging systems where the modulation transfer function (MTF) may be imposed by a nonlinear degradation process. Previous research relying on image quality metrics (IQM) methods, which heuristically estimate the perceived MTF, has supported the claim that an average perceived MTF can be used to model some types of degradation, such as image compression. Here, we discuss the validity of the IQM approach by mathematically analyzing the associated heuristics from the perspective of reliability, robustness, and tractability. Experiments with standard images compressed by x.264 encoding suggest that the compression degradation can be estimated by a perceived MTF within boundaries defined by well-behaved curves with marginal error. Our results confirm that the IQM linearizer methodology provides a credible tool for sensor performance modeling.
Formalization of the Access Control on ARM-Android Platform with the B Method
NASA Astrophysics Data System (ADS)
Ren, Lu; Wang, Wei; Zhu, Xiaodong; Man, Yujia; Yin, Qing
2018-01-01
ARM-Android is a widespread mobile platform whose multi-layer access control mechanisms are security-critical for the system. Many access control vulnerabilities still exist due to the coarse-grained policy and numerous engineering defects, and these have been widely studied. However, little research has focused on formalizing the mechanisms, including the Android permission framework, kernel process management and hardware isolation. This paper first develops a comprehensive formal access control model of the ARM-Android platform using the B method, from the Android middleware down to the hardware layer. All the model specifications are type checked and proved to be well-defined, with 75% of proof obligations demonstrated automatically. The results show that the proposed B model is feasible for specifying and verifying access control schemes in the ARM-Android system, and capable of implementing a practical control module.
NASA Glenn Wind Tunnel Model Systems Criteria
NASA Technical Reports Server (NTRS)
Soeder, Ronald H.; Roeder, James W.; Stark, David E.; Linne, Alan A.
2004-01-01
This report describes criteria for the design, analysis, quality assurance, and documentation of models that are to be tested in the wind tunnel facilities at the NASA Glenn Research Center. This report presents two methods for computing model allowable stresses on the basis of the yield stress or ultimate stress, and it defines project procedures to test models in the NASA Glenn aeropropulsion facilities. Both customer-furnished and in-house model systems are discussed. The functions of the facility personnel and customers are defined. The format for the pretest meetings, safety permit process, and model reviews are outlined. The format for the model systems report (a requirement for each model that is to be tested at NASA Glenn) is described, the engineers responsible for developing the model systems report are listed, and the timetable for its delivery to the project engineer is given.
Systemic risk in a unifying framework for cascading processes on networks
NASA Astrophysics Data System (ADS)
Lorenz, J.; Battiston, S.; Schweitzer, F.
2009-10-01
We introduce a general framework for models of cascade and contagion processes on networks, to identify their commonalities and differences. In particular, models of social and financial cascades, as well as the fiber bundle model, the voter model, and models of epidemic spreading are recovered as special cases. To unify their description, we define the net fragility of a node, which is the difference between its fragility and the threshold that determines its failure. Nodes fail if their net fragility grows above zero, and their failure increases the fragility of neighbouring nodes, thus possibly triggering a cascade. In this framework, we identify three classes depending on the way the fragility of a node is increased by the failure of a neighbour. At the microscopic level, we illustrate with specific examples how the failure spreading pattern varies with the node triggering the cascade, depending on its position in the network and its degree. At the macroscopic level, systemic risk is measured as the final fraction of failed nodes, X*, and for each of the three classes we derive a recursive equation to compute its value. The phase diagram of X* as a function of the initial conditions thus allows for a prediction of the systemic risk as well as a comparison of the three different model classes. We identify which model class leads to a first-order phase transition in systemic risk, i.e. situations where small changes in the initial conditions determine a global failure. Finally, we generalize our framework to encompass stochastic contagion models. This indicates the potential for further generalizations.
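A minimal simulation in the net-fragility picture, assuming the simplest "constant load" rule in which each failure adds a fixed amount to the fragility of its neighbours (the framework's three classes differ exactly in this rule):

    import networkx as nx

    def cascade(G, fragility, threshold, seed, delta=1.0):
        # A node fails once its net fragility (fragility - threshold)
        # grows above zero; each failure loads its neighbours by delta.
        # Returns the final fraction of failed nodes, X*.
        failed = {seed}
        frontier = [seed]
        while frontier:
            nxt = []
            for u in frontier:
                for v in G.neighbors(u):
                    if v in failed:
                        continue
                    fragility[v] += delta
                    if fragility[v] - threshold[v] > 0:
                        failed.add(v)
                        nxt.append(v)
            frontier = nxt
        return len(failed) / G.number_of_nodes()

    # Toy usage on a small random graph
    G = nx.erdos_renyi_graph(100, 0.05, seed=1)
    frag = {v: 0.0 for v in G}
    thr = {v: 1.5 for v in G}
    print(cascade(G, frag, thr, seed=0))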
Progress in tropical isotope dendroclimatology
NASA Astrophysics Data System (ADS)
Evans, M. N.; Schrag, D. P.; Poussart, P. F.; Anchukaitis, K. J.
2005-12-01
The terrestrial tropics remain an important gap in the growing high resolution proxy network used to characterize the mean state and variability of the hydrological cycle. Here we review early efforts to develop a new class of proxy paleorainfall/humidity indicators using intraseasonal to interannual-resolution stable isotope data from tropical trees. The approach invokes a recently published model of oxygen isotopic composition of alpha-cellulose, rapid methods for cellulose extraction from raw wood, and continuous flow isotope ratio mass spectrometry to develop proxy chronological, rainfall and growth rate estimates from tropical trees, even those lacking annual rings. Isotopically-derived age models may be confirmed for modern intervals using trees of known age, radiocarbon measurements, direct measurements of tree diameter, and time series replication. Studies are now underway at a number of laboratories on samples from Costa Rica, northwestern coastal Peru, Indonesia, Thailand, New Guinea, Paraguay, Brazil, India, and the South American Altiplano. Improved sample extraction chemistry and online pyrolysis techniques should increase sample throughput, precision, and time series replication. Statistical calibration together with simple forward modeling based on the well-observed modern period can provide for objective interpretation of the data. Ultimately, replicated data series with well-defined uncertainties can be entered into multiproxy efforts to define aspects of tropical hydrological variability associated with ENSO, the meridional overturning circulation, and the monsoon systems.
Cosmic infinity: a dynamical system approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bouhmadi-López, Mariam; Marto, João; Morais, João
2017-03-01
Dynamical system techniques are extremely useful to study cosmology. It turns out that in most cases we deal with finite isolated fixed points corresponding to a given cosmological epoch. However, it is equally important to analyse the asymptotic behaviour of the universe. In this paper, we show how this can be carried out for 3-form models. In fact, we show that there are fixed points at infinity, mainly by introducing appropriate compactifications and defining a new time variable that washes away any potential divergence of the system. The richness of 3-form models allows us as well to identify normally hyperbolic non-isolated fixed points. We apply this analysis to three physically interesting situations: (i) a pre-inflationary era; (ii) an inflationary era; (iii) the late-time dark matter/dark energy epoch.
NASA Technical Reports Server (NTRS)
Ghatas, Rania W.; Jack, Devin P.; Tsakpinis, Dimitrios; Vincent, Michael J.; Sturdy, James L.; Munoz, Cesar A.; Hoffler, Keith D.; Dutle, Aaron M.; Myer, Robert R.; Dehaven, Anna M.;
2017-01-01
As Unmanned Aircraft Systems (UAS) make their way to mainstream aviation operations within the National Airspace System (NAS), research efforts are underway to develop a safe and effective environment for their integration into the NAS. Detect and Avoid (DAA) systems are required to account for the lack of "eyes in the sky" due to having no human on-board the aircraft. The current NAS relies on pilot's vigilance and judgement to remain Well Clear (CFR 14 91.113) of other aircraft. RTCA SC-228 has defined DAA Well Clear (DAAWC) to provide a quantified Well Clear volume to allow systems to be designed and measured against. Extended research efforts have been conducted to understand and quantify system requirements needed to support a UAS pilot's ability to remain well clear of other aircraft. The efforts have included developing and testing sensor, algorithm, alerting, and display requirements. More recently, sensor uncertainty and uncertainty mitigation strategies have been evaluated. This paper discusses results and lessons learned from an End-to-End Verification and Validation (E2-V2) simulation study of a DAA system representative of RTCA SC-228's proposed Phase I DAA Minimum Operational Performance Standards (MOPS). NASA Langley Research Center (LaRC) was called upon to develop a system that evaluates a specific set of encounters, in a variety of geometries, with end-to-end DAA functionality including the use of sensor and tracker models, a sensor uncertainty mitigation model, DAA algorithmic guidance in both vertical and horizontal maneuvering, and a pilot model which maneuvers the ownship aircraft to remain well clear from intruder aircraft, having received collective input from the previous modules of the system. LaRC developed a functioning batch simulation and added a sensor/tracker model from the Federal Aviation Administration (FAA) William J. Hughes Technical Center, an in-house developed sensor uncertainty mitigation strategy, and implemented a pilot model similar to one from the Massachusetts Institute of Technology's Lincoln Laboratory (MIT/LL). The resulting simulation provides the following key parameters, among others, to evaluate the effectiveness of the MOPS DAA system: severity of loss of well clear (SLoWC), alert scoring, and number of increasing alerts (alert jitter). The technique, results, and lessons learned from a detailed examination of DAA system performance over specific test vectors and encounter cases during the simulation experiment will be presented in this paper.
The evolutionary language game: an orthogonal approach.
Lenaerts, Tom; Jansen, Bart; Tuyls, Karl; De Vylder, Bart
2005-08-21
Evolutionary game dynamics have been proposed as a mathematical framework for the cultural evolution of language, and more specifically the evolution of vocabulary. This article discusses a model whose underlying principles are mutually exclusive with those of some previously suggested models. The model describes how individuals in a population culturally acquire a vocabulary by actively participating in the acquisition process instead of passively observing, and communicate through peer-to-peer interactions instead of vertical parent-offspring relations. Concretely, a notion of social/cultural learning called the naming game is first abstracted using learning theory. This abstraction defines the required cultural transmission mechanism for an evolutionary process. Second, the derived transmission system is expressed in terms of the well-known selection-mutation model defined in the context of evolutionary dynamics. In this way, the analogy between social learning and evolution at the level of meaning-word associations is made explicit. Although only horizontal and oblique transmission structures will be considered, extensions to vertical structures over different genetic generations can easily be incorporated. We provide a number of simplified experiments to clarify our reasoning.
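The selection-mutation dynamics invoked here can be sketched as a replicator-mutator iteration; x holds the frequencies of competing vocabulary variants, f their communicative payoffs, and Q[j, i] the probability that a learner exposed to variant j acquires variant i. The specific numbers are invented for illustration.

    import numpy as np

    def replicator_mutator_step(x, f, Q, dt=0.01):
        # dx_i/dt = sum_j x_j f_j Q[j, i] - phi * x_i, with phi the mean fitness.
        phi = float(f @ x)
        dx = (x * f) @ Q - phi * x
        return x + dt * dx

    # Toy usage: two variants with near-faithful transmission
    x = np.array([0.6, 0.4])
    f = np.array([1.0, 1.2])
    Q = np.array([[0.95, 0.05], [0.05, 0.95]])
    for _ in range(1000):
        x = replicator_mutator_step(x, f, Q)
    print(x)  # drifts toward the fitter variant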
Transforming Collaborative Process Models into Interface Process Models by Applying an MDA Approach
NASA Astrophysics Data System (ADS)
Lazarte, Ivanna M.; Chiotti, Omar; Villarreal, Pablo D.
Collaborative business models among enterprises require defining collaborative business processes. Enterprises implement B2B collaborations to execute these processes. In B2B collaborations the integration and interoperability of processes and systems of the enterprises are required to support the execution of collaborative processes. From a collaborative process model, which describes the global view of the enterprise interactions, each enterprise must define the interface process that represents the role it performs in the collaborative process in order to implement the process in a Business Process Management System. Hence, in this work we propose a method for the automatic generation of the interface process model of each enterprise from a collaborative process model. This method is based on a Model-Driven Architecture to transform collaborative process models into interface process models. By applying this method, interface processes are guaranteed to be interoperable and defined according to a collaborative process.
Experimental non-classicality of an indivisible quantum system.
Lapkiewicz, Radek; Li, Peizhe; Schaeff, Christoph; Langford, Nathan K; Ramelow, Sven; Wieśniak, Marcin; Zeilinger, Anton
2011-06-22
In contrast to classical physics, quantum theory demands that not all properties can be simultaneously well defined; the Heisenberg uncertainty principle is a manifestation of this fact. Alternatives have been explored--notably theories relying on joint probability distributions or non-contextual hidden-variable models, in which the properties of a system are defined independently of their own measurement and any other measurements that are made. Various deep theoretical results imply that such theories are in conflict with quantum mechanics. Simpler cases demonstrating this conflict have been found and tested experimentally with pairs of quantum bits (qubits). Recently, an inequality satisfied by non-contextual hidden-variable models and violated by quantum mechanics for all states of two qubits was introduced and tested experimentally. A single three-state system (a qutrit) is the simplest system in which such a contradiction is possible; moreover, the contradiction cannot result from entanglement between subsystems, because such a three-state system is indivisible. Here we report an experiment with single photonic qutrits which provides evidence that no joint probability distribution describing the outcomes of all possible measurements--and, therefore, no non-contextual theory--can exist. Specifically, we observe a violation of the Bell-type inequality found by Klyachko, Can, Binicioğlu and Shumovsky. Our results illustrate a deep incompatibility between quantum mechanics and classical physics that cannot in any way result from entanglement.
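For reference, the Klyachko-Can-Binicioğlu-Shumovsky inequality for five cyclically compatible dichotomic observables A_i (outcomes +-1, indices taken modulo 5) can be written as below; to our knowledge the optimal qutrit value is 5 - 4*sqrt(5), though readers should consult the cited papers for the exact statement tested.

    % KCBS inequality obeyed by non-contextual hidden-variable models:
    \sum_{i=1}^{5} \langle A_i A_{i+1} \rangle \ge -3
    % Quantum mechanics on a qutrit can reach 5 - 4\sqrt{5} \approx -3.944,
    % violating the bound.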
The challenges of modelling antibody repertoire dynamics in HIV infection
Luo, Shishi; Perelson, Alan S.
2015-07-20
Antibody affinity maturation by somatic hypermutation of B-cell immunoglobulin variable region genes has been studied for decades in various model systems using well-defined antigens. While much is known about the molecular details of the process, our understanding of the selective forces that generate affinity maturation is less well developed, particularly in the case of a co-evolving pathogen such as HIV. Despite this gap in understanding, high-throughput antibody sequence data are increasingly being collected to investigate the evolutionary trajectories of antibody lineages in HIV-infected individuals. Here, we review what is known in controlled experimental systems about the mechanisms underlying antibody selection and compare this to the observed temporal patterns of antibody evolution in HIV infection. In addition, we describe how our current understanding of antibody selection mechanisms leaves questions about antibody dynamics in HIV infection unanswered. Without a mechanistic understanding of antibody selection in the context of a co-evolving viral population, modelling and analysis of antibody sequences in HIV-infected individuals will be limited in their interpretation and predictive ability.
Methodologies for Verification and Validation of Space Launch System (SLS) Structural Dynamic Models
NASA Technical Reports Server (NTRS)
Coppolino, Robert N.
2018-01-01
Responses to challenges associated with verification and validation (V&V) of Space Launch System (SLS) structural dynamics models are presented in this paper. Four methodologies addressing specific requirements for V&V are discussed. (1) Residual Mode Augmentation (RMA), which has gained acceptance by various principals in the NASA community, defines efficient and accurate FEM modal sensitivity models that are useful in test-analysis correlation and reconciliation and parametric uncertainty studies. (2) Modified Guyan Reduction (MGR) and Harmonic Reduction (HR, introduced in 1976), developed to remedy difficulties encountered with the widely used Classical Guyan Reduction (CGR) method, are presented. MGR and HR are particularly relevant for estimation of "body dominant" target modes of shell-type SLS assemblies that have numerous "body", "breathing" and local component constituents. Realities associated with configuration features and "imperfections" cause "body" and "breathing" mode characteristics to mix, resulting in a lack of clarity in the understanding and correlation of FEM- and test-derived modal data. (3) Mode Consolidation (MC) is a newly introduced procedure designed to effectively "de-feature" FEM and experimental modes of detailed structural shell assemblies for unambiguous estimation of "body" dominant target modes. Finally, (4) Experimental Mode Verification (EMV) is a procedure that addresses ambiguities associated with experimental modal analysis of complex structural systems. Specifically, EMV directly separates well-defined modal data from spurious and poorly excited modal data employing newly introduced graphical and coherence metrics.
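Classical Guyan Reduction itself is compact enough to sketch: partition the DOFs into retained masters and condensed slaves, and project with T = [I; -inv(K_ss) K_sm]. This is a generic sketch of standard CGR (not the MGR/HR variants) on an invented toy system.

    import numpy as np

    def guyan_reduction(K, M, masters):
        # Condense stiffness K and mass M onto the retained master DOFs.
        # Exact for static stiffness, approximate for mass - the error
        # that MGR and HR were developed to remedy.
        n = K.shape[0]
        slaves = [i for i in range(n) if i not in set(masters)]
        idx = list(masters) + slaves
        Kp = K[np.ix_(idx, idx)]
        Mp = M[np.ix_(idx, idx)]
        nm = len(masters)
        Kss = Kp[nm:, nm:]
        Ksm = Kp[nm:, :nm]
        T = np.vstack([np.eye(nm), -np.linalg.solve(Kss, Ksm)])
        return T.T @ Kp @ T, T.T @ Mp @ T

    # Toy usage: 3-DOF spring-mass chain reduced to its end DOFs
    K = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]])
    M = np.eye(3)
    Kr, Mr = guyan_reduction(K, M, masters=[0, 2])
    print(Kr)
    print(Mr)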
Quality Improvement on the Acute Inpatient Psychiatry Unit Using the Model for Improvement
Singh, Kuldeep; Sanderson, Joshua; Galarneau, David; Keister, Thomas; Hickman, Dean
2013-01-01
Background A need exists for constant evaluation and modification of processes within healthcare systems to achieve quality improvement. One common approach is the Model for Improvement that can be used to clearly define aims, measures, and changes that are then implemented through a plan-do-study-act (PDSA) cycle. This approach is a commonly used method for improving quality in a wide range of fields. The Model for Improvement allows for a systematic process that can be revised at set time intervals to achieve a desired result. Methods We used the Model for Improvement in an acute psychiatry unit (APU) to improve the screening incidence of abnormal involuntary movements in eligible patients—those starting or continuing on standing neuroleptics—with the Abnormal Involuntary Movement Scale (AIMS). Results After 8 weeks of using the Model for Improvement, both of the participating inpatient services in the APU showed substantial overall improvement in screening for abnormal involuntary movements using the AIMS. Conclusion Crucial aspects of a successful quality improvement initiative based on the Model for Improvement are well-defined goals, process measures, and structured PDSA cycles. Success also requires communication, organization, and participation of the entire team. PMID:24052768
Hollert, Henner; Crawford, Sarah E; Brack, Werner; Brinkmann, Markus; Fischer, Elske; Hartmann, Kai; Keiter, Steffen; Ottermanns, Richard; Ouellet, Jacob D; Rinke, Karsten; Rösch, Manfred; Roß-Nickoll, Martina; Schäffer, Andreas; Schüth, Christoph; Schulze, Tobias; Schwarz, Anja; Seiler, Thomas-Benjamin; Wessels, Martin; Hinderer, Matthias; Schwalb, Antje
2018-06-01
Lake ecosystems are sensitive recorders of environmental changes that provide continuous archives at annual to decadal resolution over thousands of years. The systematic investigation of land use changes and emission of pollutants archived in Holocene lake sediments as well as the reconstruction of contamination, background conditions, and sensitivity of lake systems offer an ideal opportunity to study environmental dynamics and consequences of anthropogenic impact that increasingly pose risks to human well-being. This paper discusses the use of sediment and other lines of evidence in providing a record of historical and current contamination in lake ecosystems. We present a novel approach to investigate impacts from human activities using chemical-analytical, bioanalytical, ecological, paleolimnological, paleoecotoxicological, archeological as well as modeling techniques. This multi-time slice weight-of-evidence (WOE) approach will generate knowledge on conditions prior to anthropogenic influence and provide knowledge to (i) create a better understanding of the effects of anthropogenic disturbances on biodiversity, (ii) assess water quality by using quantitative data on historical pollution and persistence of pollutants archived over thousands of years in sediments, and (iii) define environmental threshold values using modeling methods. This technique may be applied in order to gain insights into reference conditions of surface and ground waters in catchments with a long history of land use and human impact, which is still a major need that is currently not yet addressed within the context of the European Water Framework Directive.
A conceptual holding model for veterinary applications.
Ferrè, Nicola; Kuhn, Werner; Rumor, Massimo; Marangon, Stefano
2014-05-01
Spatial references are required when geographical information systems (GIS) are used for the collection, storage and management of data. In the veterinary domain, the spatial component of a holding (of animals) is usually defined by coordinates, and no other relevant information needs to be interpreted or used for manipulation of the data in the GIS environment provided. Users trying to integrate or reuse spatial data organised in such a way frequently face the problem of data incompatibility and inconsistency. The root of the problem lies in differences with respect to syntax as well as variations in the semantic, spatial and temporal representations of the geographic features. To overcome these problems and to facilitate the inter-operability of different GIS, spatial data must be defined according to a "schema" that includes the definition, acquisition, analysis, access, presentation and transfer of such data between different users and systems. We propose an application "schema" of holdings for GIS applications in the veterinary domain according to the European directive framework (Directive 2007/2/EC, INSPIRE). The conceptual model put forward has been developed at two specific levels to produce the essential and the abstract model, respectively. The former establishes the conceptual linkage of the system design to the real world, while the latter describes how the system or software works. The result is an application "schema" that formalises and unifies the information-theoretic foundations of how to spatially represent a holding in order to ensure straightforward information-sharing within the veterinary community.
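As an illustration only (none of these field names come from the paper), a holding record conforming to such a schema might be sketched as a typed structure:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Holding:
    """Hypothetical sketch of a spatially referenced holding record."""
    holding_id: str                    # unique identifier within the register
    lon: float                         # longitude of the reference point
    lat: float                         # latitude of the reference point
    crs: str = "EPSG:4326"             # explicit coordinate reference system
    species: Optional[str] = None      # animal species kept
    valid_from: Optional[date] = None  # temporal validity of the record
    valid_to: Optional[date] = None
```

Making the reference system and temporal validity explicit fields, rather than implicit conventions, is exactly the kind of syntactic and semantic agreement the proposed schema formalises.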
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berg, Larry K.; Allwine, K Jerry; Rutz, Frederick C.
2004-08-23
A new modeling system has been developed to provide a non-meteorologist with tools to predict air pollution transport in regions of complex terrain. This system couples the Penn State/NCAR Mesoscale Model 5 (MM5) with Earth Tech’s CALMET-CALPUFF system using a unique Graphical User Interface (GUI) developed at Pacific Northwest National Laboratory. This system is most useful in data-sparse regions, where there are limited observations to initialize the CALMET model. The user is able to define the domain of interest, provide details about the source term, and enter a surface weather observation through the GUI. The system then generates initial conditions and time-constant boundary conditions for use by MM5. MM5 is run and the results are piped to CALPUFF for the dispersion calculations. Contour plots of pollutant concentration are prepared for the user. The primary advantages of the system are the streamlined application of MM5 and CALMET, limited data requirements, and the ability to run the coupled system on a desktop or laptop computer. In comparison with data collected as part of a field campaign, the new modeling system shows promise that a full-physics mesoscale model can be used in an applied modeling system to effectively simulate locally thermally-driven winds with minimal observations as input. An unexpected outcome of this research was how well CALMET represented the locally thermally-driven flows.
NASA Astrophysics Data System (ADS)
Missiaen, Jean-Michel; Raharijaona, Jean-Joël; Delannay, Francis
2016-11-01
A model is developed to compute the capillary pressure for the migration of the liquid phase out of or into a uniform solid-liquid-vapor system. The capillary pressure is defined as the reduction of the overall interface energy per volume increment of the transferred fluid phase. The model takes into account the particle size of the solid particle aggregate, the packing configuration (coordination number, porosity), the volume fractions of the different phases, and the values of the interface energies in the system. The model is used for analyzing the stability of the composition profile during processing of W-Cu functionally graded materials combining a composition gradient with a particle size gradient. The migration pressure is computed with the model in two stages: (1) just after the melting of copper, i.e., when sintering and shape accommodation of the W particle aggregate can still be neglected, and (2) at high temperature, when the system is close to full density with equilibrium particle shape. The model predicts well the different stages of liquid-phase migration observed experimentally.
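In symbols (notation ours, following the verbal definition above), the defining relation can be written compactly as:

```latex
% Capillary (migration) pressure: the decrease of total interface energy
% E_int per unit volume V_f of transferred fluid.
P_c = -\frac{\mathrm{d}E_{\mathrm{int}}}{\mathrm{d}V_{\mathrm{f}}},
\qquad
E_{\mathrm{int}} = \sum_{ij} \gamma_{ij}\, A_{ij},
```

where the sum runs over the solid-liquid, solid-vapor and liquid-vapor interfaces with energies γij and areas Aij; the sign of P_c then determines whether liquid is drawn into or expelled from the particle aggregate.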
Studying the flow dynamics of a karst aquifer system with an equivalent porous medium model.
Abusaada, Muath; Sauter, Martin
2013-01-01
The modeling of groundwater flow in karst aquifers is a challenge due to the extreme heterogeneity of their hydraulic parameters and the duality of their discharge behavior, that is, rapid response of highly conductive karst conduits and delayed drainage of the low-permeability fractured matrix after recharge events. There are a number of different modeling approaches for the simulation of karst groundwater dynamics, applicable to different aquifer as well as modeling problem types, ranging from single continuum models to double continuum models to discrete and hybrid models. This study presents the application of an equivalent porous medium (EPM, single continuum) modeling approach to construct a steady-state numerical flow model for an important karst aquifer, the Western Mountain Aquifer Basin (WMAB), shared by Israel and the West Bank, using MODFLOW2000. The WMAB was chosen because it is a well-constrained catchment with well-defined recharge and discharge components and therefore allows control of the modeling approach, a very rare opportunity in karst aquifer modeling. The model demonstrates the applicability of equivalent porous medium models for the simulation of karst systems, despite their large contrast in hydraulic conductivities. As long as the simulated saturated volume is large enough to average out the local influence of karst conduits, and as long as transport velocities are not an issue, EPM models excellently simulate the observed head distribution. The model serves as a starting basis that will be used as a reference for developing a long-term dynamic model for the WMAB, from the pre-development period (i.e., the 1940s) up to the present.
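A minimal FloPy sketch of a single-continuum, steady-state MODFLOW model of the kind described follows; grid size, hydraulic conductivity and recharge are placeholders, not the WMAB calibration, and the classic MODFLOW-2005 interface stands in for MODFLOW2000.

```python
import flopy

# Equivalent-porous-medium model: one layer, uniform properties, steady state.
m = flopy.modflow.Modflow("epm_demo", exe_name="mf2005")
flopy.modflow.ModflowDis(m, nlay=1, nrow=50, ncol=50,
                         delr=1000.0, delc=1000.0, top=100.0, botm=0.0)
flopy.modflow.ModflowBas(m, ibound=1, strt=50.0)   # all cells active
flopy.modflow.ModflowLpf(m, hk=10.0)               # bulk conductivity, m/d
flopy.modflow.ModflowRch(m, rech=2e-4)             # diffuse recharge, m/d
flopy.modflow.ModflowPcg(m)                        # solver
flopy.modflow.ModflowOc(m)                         # output control
m.write_input()   # m.run_model() would then compute the head distribution
```

The EPM assumption enters only through the single bulk conductivity: conduits and matrix are averaged into one value, which is why the approach works for heads over large saturated volumes but not for transport velocities.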
Manilal-Reddy, P I; Al-Jumaily, A M
2009-01-01
Bubble CPAP, a continuous positive airway pressure with pressure oscillations superimposed on the mean airway pressure, is a modified form of traditional continuous positive airway pressure (CPAP) delivery in which pressure oscillations in addition to CPAP are administered to neonates with lung diseases. The mechanical effect of the pressure oscillations on lung performance is investigated by formulating mathematical models of a typical bubble CPAP device and a simple representation of a neonatal respiratory system. Preliminary results of the respiratory system's mechanical response suggest that bubble CPAP may improve lung performance by minimizing the respiratory system impedance and that the resonant frequency of the respiratory system may be a controlling factor. Additional steps in terms of clinical trials and a more complex respiratory system model are required to gain a deeper insight into the mechanical receptiveness of the respiratory system to pressure oscillations. However, the current results are promising in that they offer insight into the trends of variations that can be expected in future extended models as well as the model philosophies that need to be adopted to produce results that are compatible with experimental verification.
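The impedance-minimisation argument can be reproduced with a single-compartment resistance-inertance-compliance model; the parameter values below are assumptions for illustration, not the paper's.

```python
import numpy as np

# Single-compartment R-I-C model of a neonatal respiratory system:
# Z(w) = R + j(w*I - 1/(w*C)); |Z| is minimal at the resonant frequency.
R = 30.0    # resistance, cmH2O.s/L   (assumed)
I = 0.1     # inertance,  cmH2O.s^2/L (assumed)
C = 0.003   # compliance, L/cmH2O     (assumed)

f = np.linspace(1, 100, 1000)          # oscillation frequency, Hz
w = 2 * np.pi * f
Z = R + 1j * (w * I - 1.0 / (w * C))   # complex input impedance

f_res = f[np.argmin(np.abs(Z))]
print(f"resonant frequency ~ {f_res:.1f} Hz")   # ~9 Hz for these values
```

Driving the system near this resonance is where superimposed oscillations would encounter the least opposition, which is the intuition behind the controlling role of the resonant frequency suggested above.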
Digital data processing system dynamic loading analysis
NASA Technical Reports Server (NTRS)
Lagas, J. J.; Peterka, J. J.; Tucker, A. E.
1976-01-01
Simulation and analysis of the Space Shuttle Orbiter Digital Data Processing System (DDPS) are reported. The mated-flight and post-separation flight phases of the space shuttle's approach and landing test (ALT) configuration were modeled utilizing the Information Management System Interpretive Model (IMSIM) in a computerized simulation of the ALT hardware, software, and workload. System requirements simulated for the ALT configuration were defined. Sensitivity analyses determined areas of potential data flow problems in DDPS operation. Based on the defined system requirements and the sensitivity analyses, a test design is described for adapting, parameterizing, and executing the IMSIM. Varying load and stress conditions for the model execution are given. The analyses of the computer simulation runs were documented as results, conclusions, and recommendations for DDPS improvements.
Space shuttle orbiter digital data processing system timing sensitivity analysis OFT ascent phase
NASA Technical Reports Server (NTRS)
Lagas, J. J.; Peterka, J. J.; Becker, D. A.
1977-01-01
Dynamic loads were investigated to provide simulation and analysis of the space shuttle orbiter digital data processing system (DDPS). Segments of the orbital flight test (OFT) ascent configuration were modeled utilizing the information management system interpretive model (IMSIM) in a computerized simulation of the OFT hardware and software workload. System requirements for simulation of the OFT configuration were defined, and sensitivity analyses determined areas of potential data flow problems in DDPS operation. Based on the defined system requirements and these sensitivity analyses, a test design was developed for adapting, parameterizing, and executing IMSIM, using varying load and stress conditions for model execution. Analyses of the computer simulation runs are documented, including results, conclusions, and recommendations for DDPS improvements.
NASA Astrophysics Data System (ADS)
Lu, Meilian; Yang, Dong; Zhou, Xing
2013-03-01
Based on an analysis of the requirements for conversation history storage in the CPM (Converged IP Messaging) system, a multi-view storage model and access methods for conversation history are proposed. The storage model separates logical views from physical storage and divides the storage into a system-managed region and a user-managed region. It simultaneously supports conversation views, system pre-defined views and user-defined views of the storage. The rationality and feasibility of the multi-view presentation, the physical storage model and the access methods are validated through an implemented prototype, which shows that the proposal has good scalability and will help to optimize the physical data storage structure and improve storage performance.
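A toy sketch of the central idea, logical views decoupled from a single physical store, is shown below; all names are ours, not the CPM schema.

```python
class ConversationStore:
    """Toy multi-view store: one physical log, several logical views."""
    def __init__(self):
        self.messages = []        # physical storage, append-only
        self.views = {}           # view name -> membership predicate

    def define_view(self, name, predicate):
        self.views[name] = predicate          # system- or user-defined

    def read_view(self, name):
        return [m for m in self.messages if self.views[name](m)]

store = ConversationStore()
store.messages += [{"conv": "c1", "sender": "alice"},
                   {"conv": "c2", "sender": "bob"}]
store.define_view("conversation:c1", lambda m: m["conv"] == "c1")   # pre-defined
store.define_view("from-bob", lambda m: m["sender"] == "bob")       # user-defined
print(store.read_view("from-bob"))
```

Because views are definitions rather than copies, the physical layout can be reorganised for performance without touching any view a user has defined, which is the scalability property the prototype demonstrates.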
Lifecycle assessment of microalgae to biofuel: Comparison of thermochemical processing pathways
Bennion, Edward P.; Ginosar, Daniel M.; Moses, John; ...
2015-01-16
Microalgae are currently being investigated as a renewable transportation fuel feedstock based on various advantages: high annual yields, use of poor-quality land, no competition with food crops, and the potential for integration with various waste streams. This study focuses on directly assessing the impact of two different thermochemical conversion technologies on the microalgae-to-biofuel process through life cycle assessment. A “well to pump” (WTP) system boundary is defined and includes sub-process models of growth, dewatering, thermochemical bio-oil recovery, bio-oil stabilization, conversion to renewable diesel, and transport to the pump. Models were validated with experimental and literature data and are representative of an industrial-scale microalgae-to-biofuel process. Two different thermochemical bio-oil conversion systems are modeled and compared on a systems level: hydrothermal liquefaction (HTL) and pyrolysis. The environmental impacts of the two pathways were quantified on the metrics of net energy ratio (NER), defined here as energy consumed over energy produced, and greenhouse gas (GHG) emissions. Results for WTP biofuel production through the HTL pathway were a NER of 1.23 and GHG emissions of -11.4 g CO2-eq (MJ renewable diesel)-1. WTP biofuel production through the pyrolysis pathway results in a NER of 2.27 and GHG emissions of 210 g CO2-eq (MJ renewable diesel)-1. The large environmental impact associated with the pyrolysis pathway is attributed to feedstock drying requirements and combustion of co-products to improve system energetics. Discussion focuses on a detailed breakdown of the overall process energetics and GHGs, the impact of modeling at laboratory scale compared to industrial scale, the sensitivity of environmental impacts to engineering systems input parameters for future focused research and development, and a comparison of results to literature.
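The NER metric as defined here is straightforward to compute from an energy inventory; the inventory values below are placeholders, not the study's data.

```python
# Net energy ratio (NER) as defined in the abstract: energy consumed over
# energy produced, over the well-to-pump boundary.  Placeholder numbers.
inputs_MJ = {"growth": 0.45, "dewatering": 0.30, "conversion": 0.40}  # per MJ fuel
energy_consumed = sum(inputs_MJ.values())
energy_produced = 1.0    # 1 MJ of renewable diesel delivered at the pump

NER = energy_consumed / energy_produced
print(f"NER = {NER:.2f}  (values < 1 indicate a net energy gain)")
```

With this sign convention, the HTL pathway's NER of 1.23 still consumes more energy than it delivers, and the pyrolysis pathway's 2.27 nearly twice as much, which is why drying demand dominates the comparison.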
NASA Astrophysics Data System (ADS)
Stein, Olaf; Schultz, Martin G.; Rambadt, Michael; Saini, Rajveer; Hoffmann, Lars; Mallmann, Daniel
2017-04-01
Global model data of atmospheric composition produced by the Copernicus Atmosphere Monitoring Service (CAMS) have been collected since 2010 at FZ Jülich and serve as boundary conditions for use by Regional Air Quality (RAQ) modellers world-wide. RAQ models need time-resolved meteorological as well as chemical lateral boundary conditions for their individual model domains. While the meteorological data usually come from well-established global forecast systems, the chemical boundary conditions are not always well defined. In the past, many models used 'climatic' boundary conditions for the tracer concentrations, which can lead to significant concentration biases, particularly for tracers with longer lifetimes which can be transported over long distances (e.g. over the whole northern hemisphere) with the mean wind. The Copernicus approach utilizes extensive near-realtime data assimilation of atmospheric composition data observed from space, which gives additional reliability to the global modelling data and is well received by the RAQ communities. An existing Web Coverage Service (WCS) for sharing these individually tailored model results is currently being re-engineered to make use of a modern, scalable database technology in order to improve performance, enhance flexibility, and allow the operation of catalogue services. The new Jülich Atmospheric Data Distributions Server (JADDS) adheres to the Web Coverage Service WCS 2.0 standard as defined by the Open Geospatial Consortium (OGC). This enables user groups to flexibly define the datasets they need by selecting a subset of chemical species or restricting the geographical boundaries or the length of the time series. The data are made available in the form of different catalogues stored locally on our server. In addition, the Jülich OWS Interface (JOIN) provides interoperable web services allowing for easy download and visualization of datasets delivered from WCS servers via the internet. We will present the prototype JADDS server and address the major issues identified when relocating large four-dimensional datasets into a RASDAMAN raster array database. So far the RASDAMAN support for data available in netCDF format is limited with respect to metadata related to variables and axes. For community-wide accepted solutions, selected data coverages shall result in downloadable netCDF files including metadata complying with the netCDF CF Metadata Conventions standard (http://cfconventions.org/). This can be achieved by adding custom metadata elements for RASDAMAN bands (model levels) on data ingestion. Furthermore, an optimization strategy for ingestion of several TB of 4D model output data will be outlined.
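A GetCoverage request of the kind such a server accepts can be issued from Python. The endpoint URL, coverage identifier and time-axis name below are assumptions; only the key-value parameter names come from the OGC WCS 2.0.1 standard.

```python
import requests

ENDPOINT = "https://example.org/jadds/wcs"   # placeholder URL, not the real server
params = {
    "service": "WCS",
    "version": "2.0.1",
    "request": "GetCoverage",
    "coverageId": "CAMS_O3",                        # assumed coverage name
    "subset": ["Lat(35,60)", "Long(-15,35)",        # regional RAQ domain
               'ansi("2017-01-01","2017-01-02")'],  # assumed time-axis label
    "format": "application/netcdf",
}
r = requests.get(ENDPOINT, params=params, timeout=60)  # repeats the subset key
r.raise_for_status()
open("boundary_conditions.nc", "wb").write(r.content)
```

Subsetting on the server side is what spares each RAQ group from downloading the full four-dimensional global fields.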
NASA Astrophysics Data System (ADS)
Fan, Fan; Ma, Yong; Dai, Xiaobing; Mei, Xiaoguang
2018-04-01
Infrared image enhancement is an important and necessary task in infrared imaging systems. In this paper, by defining the contrast in terms of the area between adjacent non-zero histogram bins, a novel analytical model is proposed to enlarge these areas so that the contrast is increased. In addition, the analytical model is regularized by a penalty term based on saliency values, so that salient regions are enhanced as well. Thus, both the whole image and the salient regions can be enhanced, and rank consistency is preserved. Comparisons on 8-bit images show that the proposed method can enhance infrared images with more details.
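A simplified surrogate of this idea (not the authors' analytical model, and without the saliency term) re-spaces the occupied gray levels so that small histogram gaps are widened while their rank order, and hence rank consistency, is preserved.

```python
import numpy as np

def enhance(img8, gamma=0.5):
    """Re-space non-zero histogram bins of an 8-bit image (toy surrogate)."""
    hist = np.bincount(img8.ravel(), minlength=256)
    levels = np.flatnonzero(hist)             # occupied gray levels, ascending
    if levels.size < 2:
        return img8
    gaps = np.diff(levels).astype(float)      # areas between adjacent bins
    new_gaps = gaps ** gamma                  # boost small gaps, damp large ones
    new_levels = np.concatenate([[0.0], np.cumsum(new_gaps)])
    new_levels = 255.0 * new_levels / new_levels[-1]   # stretch to full range
    lut = np.zeros(256, dtype=np.uint8)
    lut[levels] = np.round(new_levels).astype(np.uint8)
    return lut[img8]                          # monotone LUT keeps rank order
```

Because the remapping is a monotone cumulative sum, no two gray levels ever swap order, which is the rank-consistency property the abstract emphasises.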
NASA Astrophysics Data System (ADS)
Suomalainen, Emilia; Erkman, Suren
Space life support systems can be regarded as miniature models of industrial systems found on Earth. The term "industrial" is employed here in a generic sense, referring to all human technological activities. The time scale as well as the physical scope of space life support systems is reduced compared to most terrestrial systems, and so, consequently, is their complexity. These systems can thus be used as a kind of "laboratory of sustainability" to examine concerns related to the environmental sustainability of industrial systems and in particular to their resource use. Two air revitalisation systems, ARES and BIORAT, were chosen as the test cases of our study. They represent, respectively, a physico-chemical and a biological life support system. In order to analyse the sustainability of these systems, we began by constructing a generic system representation applicable to both (and to others). The metabolism of the systems was analysed by performing Material Flow Analyses (MFA), a tool frequently employed on terrestrial systems in the field of industrial ecology. Afterwards, static simulation models were developed for both ARES and BIORAT, focusing, firstly, on the oxygen balances of the systems and, secondly, on the total mass balances. It was also necessary to define sustainability indicators adapted to space life support systems in order to evaluate and compare the performances of ARES and BIORAT. The defined indicators were partly inspired by concepts used in Material Flow Accounting and were divided into four broad categories: 1. recycling and material use efficiency, 2. autarky and coverage time, 3. resource use and waste creation, and 4. system mass and energy consumption. The preliminary results of our analyses show that the performance of BIORAT is superior to that of ARES in terms of the defined resource use indicators. BIORAT seems especially effective in reprocessing the carbon dioxide created by human metabolism. The performances of ARES and BIORAT are somewhat closer in terms of material use efficiency and resource intensity. However, the excellence of BIORAT in terms of resource use is countered by the fact that its energy consumption is greater than that of ARES by a factor of ten.
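Indicators of the first and third categories reduce to simple ratios over the mass flows; the flow values below are invented for illustration and are not the ARES or BIORAT balances.

```python
# Toy material flow balance for a closed-loop life support subsystem.
# Flows in kg per day; numbers are placeholders.
flows = {
    "fresh_input": 1.2,    # resources supplied from outside the loop
    "recycled":    4.8,    # mass reprocessed within the loop
    "waste":       0.3,    # mass lost or stored as waste
    "product":     5.7,    # useful output (e.g., revitalised air)
}

throughput = flows["recycled"] + flows["fresh_input"]
recycling_rate = flows["recycled"] / throughput          # category 1 indicator
resource_intensity = flows["fresh_input"] / flows["product"]  # category 3
print(f"recycling rate: {recycling_rate:.0%}, "
      f"resource intensity: {resource_intensity:.2f} kg input per kg product")
```

A higher recycling rate at a fixed product flow is precisely the regime where BIORAT outperforms ARES, while the energy cost of achieving it is what the fourth indicator category captures.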
The geometry of chaotic dynamics — a complex network perspective
NASA Astrophysics Data System (ADS)
Donner, R. V.; Heitzig, J.; Donges, J. F.; Zou, Y.; Marwan, N.; Kurths, J.
2011-12-01
Recently, several complex network approaches to time series analysis have been developed and applied to study a wide range of model systems as well as real-world data, e.g., geophysical or financial time series. Among these techniques, recurrence-based concepts, most prominently ɛ-recurrence networks, most faithfully represent the geometrical fine structure of the attractors underlying chaotic (and, less interestingly, non-chaotic) time series. In this paper we demonstrate that the well-known graph-theoretical properties local clustering coefficient and global (network) transitivity can meaningfully be exploited to define two new local and two new global measures of dimension in phase space: local upper and lower clustering dimension, as well as global upper and lower transitivity dimension. Rigorous analytical as well as numerical results for self-similar sets and simple chaotic model systems suggest that these measures are well-behaved in most non-pathological situations and that they can be estimated reasonably well using ɛ-recurrence networks constructed from relatively short time series. Moreover, we study the relationship between clustering and transitivity dimensions on the one hand, and traditional measures like pointwise dimension or local Lyapunov dimension on the other hand. We also provide further evidence that the local clustering coefficients, or equivalently the local clustering dimensions, are useful for identifying unstable periodic orbits and other dynamically invariant objects from time series. Our results demonstrate that ɛ-recurrence networks exhibit an important link between dynamical systems and graph theory.
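A minimal ε-recurrence network and the two graph quantities the dimensions build on can be computed in a few lines; the logistic-map series and the ε value are chosen arbitrarily for illustration.

```python
import numpy as np
import networkx as nx

# Generate a chaotic time series (logistic map at r = 4).
x = np.empty(2000)
x[0] = 0.4
for i in range(1, x.size):
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])

# Epsilon-recurrence network: states i, j linked if closer than eps in phase space.
eps = 0.02
d = np.abs(x[:, None] - x[None, :])
A = ((d < eps) & ~np.eye(x.size, dtype=bool)).astype(int)

G = nx.from_numpy_array(A)
print("global transitivity:", nx.transitivity(G))
print("mean local clustering:", np.mean(list(nx.clustering(G).values())))
```

The clustering and transitivity dimensions of the paper are then obtained from the scaling of these quantities with ε, rather than from their values at a single threshold.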
Consistent detection of global predicates
NASA Technical Reports Server (NTRS)
Cooper, Robert; Marzullo, Keith
1991-01-01
A fundamental problem in debugging and monitoring is detecting whether the state of a system satisfies some predicate. If the system is distributed, then the resulting uncertainty in the state of the system makes such detection, in general, ill-defined. Three algorithms are presented for detecting global predicates in a well-defined way. These algorithms do so by interpreting predicates with respect to the communication that has occurred in the system.
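A minimal sketch in the spirit of these algorithms, detecting Possibly(phi) over pairwise-concurrent local states ordered by vector clocks, is given below; the two-process run and the predicate are invented for illustration.

```python
from itertools import product

def concurrent(vc_a, vc_b):
    """States are mutually consistent iff neither happened before the other."""
    return not (all(x <= y for x, y in zip(vc_a, vc_b)) or
                all(y <= x for x, y in zip(vc_a, vc_b)))

# Hypothetical run: each local state is (local value, vector clock).
p0 = [(0, (1, 0)), (5, (2, 0)), (5, (3, 2))]
p1 = [(0, (0, 1)), (9, (1, 2))]

predicate = lambda x0, x1: x0 + x1 > 10    # global predicate on local values

# Possibly(phi): does phi hold on some consistent global state?
possibly = any(predicate(x0, x1)
               for (x0, v0), (x1, v1) in product(p0, p1)
               if concurrent(v0, v1))
print("possibly(phi) =", possibly)
```

Restricting evaluation to concurrent state pairs is what makes the answer well-defined despite the absence of a global clock; enumerating all consistent cuts is the price paid for that consistency.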
Kuo, Yin-Ming; Henry, Ryan A; Andrews, Andrew J
2016-01-01
Multiple substrate enzymes present a particular challenge when it comes to understanding their activity in a complex system. Although a single target may be easy to model, it does not always present an accurate representation of what that enzyme will do in the presence of multiple substrates simultaneously. Therefore, there is a need to find better ways both to study these enzymes in complicated systems and to accurately describe the interactions through kinetic parameters. This review looks at different methods for studying multiple substrate enzymes and explores options for how to most accurately describe an enzyme's activity within these multi-substrate systems. Identifying and defining this enzymatic activity should help clear the way to using in vitro systems to accurately predict the behavior of multi-substrate enzymes in vivo. This article is part of a Special Issue entitled: Physiological Enzymology and Protein Functions.
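One standard way to express such kinetic parameters is the competing-substrates Michaelis-Menten form, in which each substrate's rate is discounted by the occupancy of all the others; the substrate names and values below are illustrative, not from the review.

```python
# Competing-substrate Michaelis-Menten rates: substrates share one enzyme,
# so each velocity is divided by the summed fractional occupancy of all.
E = 0.01   # total enzyme concentration, uM (assumed)
substrates = {                 # name: (kcat 1/s, Km uM, concentration uM)
    "substrate_A": (0.50, 2.0, 10.0),
    "substrate_B": (0.20, 0.5, 10.0),
}

den = 1.0 + sum(S / Km for _, Km, S in substrates.values())
for name, (kcat, Km, S) in substrates.items():
    v = kcat * E * (S / Km) / den
    print(f"v({name}) = {v:.4f} uM/s")
```

Note that a substrate with a low Km can dominate the denominator and suppress turnover of its competitors even when its own kcat is modest, which is exactly the behavior single-substrate assays fail to reveal.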
Design, fabrication and test of a trace contaminant control system
NASA Technical Reports Server (NTRS)
1975-01-01
A trace contaminant control system was designed, fabricated, and evaluated to determine the suitability of the system concept for future manned spacecraft. Two different models were considered. The load model initially required by the contract was based on the Space Station Prototype (SSP) general specification SVSK HS4655, reflecting a change from the 9-man crew of the model developed in previous phases of this effort to a 6-man crew. Trade studies and a system preliminary design were accomplished based on this contaminant load, including computer analyses to define the optimum system configuration in terms of component arrangements, flow rates and component sizing. At the completion of the preliminary design effort a revised contaminant load model was developed for the SSP. Additional analyses were then conducted to define the impact of this new contaminant load model on the system configuration. A full-scale foam-core mock-up with the appropriate SSP system interfaces was also fabricated.
Elbahesh, Husni; Schughart, Klaus
2016-05-19
Influenza A viruses (IAV) are zoonotic pathogens that pose a major threat to human and animal health. Influenza virus disease severity is influenced by viral virulence factors as well as individual differences in host response. We analyzed gene expression changes in the blood of infected mice using a previously defined set of signature genes that was derived from changes in the blood transcriptome of IAV-infected human volunteers. We found that the human signature was reproduced well in the founder strains of the Collaborative Cross (CC) mice, thus demonstrating the relevance and importance of mouse experimental model systems for studying human influenza disease.
2010-01-01
Background Worksites are important locations for interventions to promote health. However, occupational programs with documented efficacy often are not used, and those being implemented have not been studied. The research in this report was funded through the American Reinvestment and Recovery Act Challenge Topic 'Pathways for Translational Research,' to define and prioritize determinants that enable and hinder translation of evidenced-based health interventions in well-defined settings. Methods The IGNITE (investigation to guide new insights for translational effectiveness) trial is a prospective cohort study of a worksite wellness and injury reduction program from adoption to final outcomes among 12 fire departments. It will employ a mixed methods strategy to define a translational model. We will assess decision to adopt, installation, use, and outcomes (reach, individual outcomes, and economic effects) using onsite measurements, surveys, focus groups, and key informant interviews. Quantitative data will be used to define the model and conduct mediation analysis of each translational phase. Qualitative data will expand on, challenge, and confirm survey findings and allow a more thorough understanding and convergent validity by overcoming biases in qualitative and quantitative methods used alone. Discussion Findings will inform worksite wellness in fire departments. The resultant prioritized influences and model of effective translation can be validated and manipulated in these and other settings to more efficiently move science to service. PMID:20932290
Fritz, M
1991-01-01
In order to define relationships between the vibration stress and the strain of the human hand-arm system, a biomechanical model was developed. The four masses of the model, representing the hand, the forearm and the upper arm, were connected by dampers and springs in two perpendicular directions. To simulate muscle activity, damped torsion springs were additionally included. The motions of the model were described by a differential matrix equation, which was solved by using a 'transfer matrix routine' as well as by numerical integration. Thus, functions with harmonic or transient time courses could be selected as an excitation. The simulated vibrations were compared with those of other hand-arm models. The forces and torques transmitted between the masses, and the energy dissipated by the dampers, were computed for several combinations of exciter frequencies and accelerations. The dependence of torques upon excitation agreed fairly well with the behaviour of the arm muscles under vibration as described by various investigators. At frequencies above 100 Hz the energy was dissipated mainly by the dampers between the masses nearest to the exciter. Transferred to the hand-arm system, this result indicates that at high frequencies energy is dissipated by the hand and its palmar tissues, which might be one cause of the incidence of vibration-induced white finger disease.
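A two-mass spring-damper chain driven by a harmonic exciter reproduces the flavour of such a model; the masses, stiffnesses and damping values below are invented, not the paper's parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-mass chain (hand, forearm) driven at the hand by a harmonic exciter.
m = np.array([0.5, 2.0])        # kg (assumed)
k = np.array([2.0e4, 5.0e3])    # N/m: hand-exciter, hand-forearm couplings
c = np.array([50.0, 100.0])     # N.s/m

def rhs(t, y, f=50.0, a=10.0):          # f: exciter frequency Hz, a: accel m/s^2
    x1, v1, x2, v2 = y
    w = 2 * np.pi * f
    xe = a / w**2 * np.sin(w * t)       # exciter displacement
    ve = a / w * np.cos(w * t)          # exciter velocity
    f1 = -k[0]*(x1 - xe) - c[0]*(v1 - ve) - k[1]*(x1 - x2) - c[1]*(v1 - v2)
    f2 = k[1]*(x1 - x2) + c[1]*(v1 - v2)
    return [v1, f1 / m[0], v2, f2 / m[1]]

sol = solve_ivp(rhs, (0.0, 0.5), [0, 0, 0, 0], max_step=1e-4)
print("peak hand displacement:", np.abs(sol.y[0]).max(), "m")
```

Sweeping the exciter frequency and integrating the damper powers c*(dv)^2 over time reproduces the qualitative finding above: at high frequencies the dissipation concentrates in the elements nearest the source.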
Power laws governing epidemics in isolated populations
NASA Astrophysics Data System (ADS)
Rhodes, C. J.; Anderson, R. M.
1996-06-01
Temporal changes in the incidence of measles virus infection within large urban communities in the developed world have been the focus of much discussion in the context of the identification and analysis of nonlinear and chaotic patterns in biological time series [1-11]. In contrast, the measles records for small isolated island populations are highly irregular, because of frequent fade-outs of infection [12-14], and traditional analysis [15] does not yield useful insight. Here we use measurements of the distribution of epidemic sizes and duration to show that regularities in the dynamics of such systems do become apparent. Specifically, these biological systems are characterized by well-defined power laws in a manner reminiscent of other nonlinear, spatially extended dynamical systems in the physical sciences [16-19]. We further show that the observed power-law exponents are well described by a simple lattice-based model which reflects the social interaction between individual hosts.
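The power-law exponent of an epidemic-size distribution can be estimated with the standard continuous maximum-likelihood formula; the data below are synthetic, drawn from a known power law so the estimator can be checked.

```python
import numpy as np

def powerlaw_alpha(sizes, smin=1.0):
    """Continuous MLE for the exponent of P(s) ~ s^(-alpha), s >= smin."""
    s = np.asarray([x for x in sizes if x >= smin], dtype=float)
    return 1.0 + s.size / np.sum(np.log(s / smin))

# Synthetic epidemic sizes from inverse-CDF sampling with alpha = 2.5.
rng = np.random.default_rng(0)
fake_sizes = (1.0 - rng.random(5000)) ** (-1.0 / 1.5)

print(f"estimated alpha = {powerlaw_alpha(fake_sizes):.2f}")   # ~2.5
```

Fitting the exponent this way, rather than by regression on a log-log histogram, avoids the binning bias that can distort comparisons between observed records and lattice-model predictions.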
NASA Technical Reports Server (NTRS)
Snyder, C. T.; Fry, E. B.; Drinkwater, F. J., III; Forrest, R. D.; Scott, B. C.; Benefield, T. D.
1972-01-01
A ground-based simulator investigation was conducted in preparation for, and correlation with, an in-flight simulator program. The objective of these studies was to define minimum acceptable levels of static longitudinal stability for landing approach following stability augmentation system failures. The airworthiness authorities are presently attempting to establish the requirements for civil transports with only the backup flight control system operating. Using a baseline configuration representative of a large delta-wing transport, 20 different configurations, many representing negative static margins, were assessed by three research test pilots in 33 hours of piloted operation. Verification of the baseline model to be used in the TIFS experiment was provided by computed and piloted comparisons with a well-validated reference airplane simulation. Pilot comments and ratings are included, as well as preliminary tracking performance and workload data.
2012-12-01
... system be implemented. In this study, we created a mathematical model to simulate accumulated savings under the proposed defined ... retirement system. ... lumbering recovery, it has reemerged as a potential austerity measure within the U.S. government. B. METHODOLOGY: We created a mathematical model of ...
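Since the report's specifics are elided above, the following generic accumulation recursion only illustrates the kind of model described; every parameter is a placeholder.

```python
# Generic savings accumulation: each year the balance grows at the assumed
# rate of return and receives the annual contribution.  All values invented.
def accumulated_savings(years, annual_contribution, annual_return):
    balance = 0.0
    for _ in range(years):
        balance = balance * (1.0 + annual_return) + annual_contribution
    return balance

print(f"${accumulated_savings(20, 5000.0, 0.05):,.0f} after 20 years")
```

Varying the contribution rate and return assumptions in such a recursion is the usual way to compare a proposed system against the incumbent one.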
NASA Astrophysics Data System (ADS)
Lee, En-Jui; Chen, Po
2017-04-01
More precise spatial descriptions of fault systems play an essential role in tectonic interpretations, deformation modeling, and seismic hazard assessments. Recently developed full-3D waveform tomography techniques provide high-resolution images and are able to image material property differences across faults, assisting the understanding of fault systems. In the updated seismic velocity model for Southern California, CVM-S4.26, many velocity gradients show consistency with surface geology and major faults defined in the Community Fault Model (CFM) (Plesch et al. 2007), which was constructed using various geological and geophysical observations. In addition to faults in the CFM, CVM-S4.26 reveals a velocity reversal mainly beneath the San Gabriel Mountain and Western Mojave Desert regions, which is correlated with the detachment structure that has also been found in other independent studies. The high-resolution tomographic images of CVM-S4.26 can assist the understanding of fault systems in Southern California and therefore benefit the development of fault models as well as other applications, such as seismic hazard analysis, tectonic reconstructions, and crustal deformation modeling.
A two-way interface between limited Systems Biology Markup Language and R.
Radivoyevitch, Tomas
2004-12-07
Systems Biology Markup Language (SBML) is gaining broad usage as a standard for representing dynamical systems as data structures. The open source statistical programming environment R is widely used by biostatisticians involved in microarray analyses. An interface between SBML and R does not exist, though one might be useful to R users interested in SBML, and SBML users interested in R. A model structure that parallels SBML to a limited degree is defined in R. An interface between this structure and SBML is provided through two function definitions: write.SBML() which maps this R model structure to SBML level 2, and read.SBML() which maps a limited range of SBML level 2 files back to R. A published model of purine metabolism is provided in this SBML-like format and used to test the interface. The model reproduces published time course responses before and after its mapping through SBML. List infrastructure preexisting in R makes it well-suited for manipulating SBML models. Further developments of this SBML-R interface seem to be warranted.
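For Python users, the same round-trip idea can be sketched with python-libsbml, a different binding from the R interface defined in the paper; the file name below is hypothetical.

```python
import libsbml  # pip install python-libsbml

doc = libsbml.readSBML("purine_metabolism.xml")   # hypothetical SBML level 2 file
if doc.getNumErrors() > 0:
    doc.printErrors()

model = doc.getModel()
# Walk species and reactions, the same slots the R list structure mirrors.
species = {s.getId(): s.getInitialConcentration()
           for s in (model.getSpecies(i) for i in range(model.getNumSpecies()))}
print(len(species), "species;", model.getNumReactions(), "reactions")
```

The attraction of a list-like in-memory structure, in R or Python alike, is that the model can be edited programmatically and written back out, which is what the write.SBML()/read.SBML() pair provides on the R side.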
D'Andrea's disease (angiomegaly): a currently well-defined nosological entity.
Taurone, S; Spoletini, M; Di Matteo, F M; Mele, R; Tromba, L; Grippaudo, F R; Minni, A; Artico, M
2017-01-01
In 1997 D'Andrea et al. described a new nosological entity, the characteristics of which consisted of lengthening, dilation and tortuosity of blood vessels, arteries or veins, less prominent, but also less circumscribed, than an aneurysm. This condition does not necessarily imply specific aneurysm formation, although aneurysms at multiple sites are a frequent observation. The term used by the authors for angiomegaly of the venous system was venomegaly, and the analogous condition of the arterial system was termed arteriomegaly. Although tortuosity and dilation of arteries and veins have been widely reported, suggesting a systemic disorder which affects the structural integrity of all vessels, most papers dealing with this intriguing condition did not describe any alterations in the components of vessel walls. In the present paper, the authors describe a well-defined condition, D'Andrea's disease (DD), analyzing its salient morphological and clinical features and clarifying this pathological condition as a distinct and now well-defined nosological entity.
Integrals of motion for one-dimensional Anderson localized systems
NASA Astrophysics Data System (ADS)
Modak, Ranjan; Mukerjee, Subroto; Yuzbashyan, Emil A.; Shastry, B. Sriram
2016-03-01
Anderson localization is known to be inevitable in one-dimension for generic disordered models. Since localization leads to Poissonian energy level statistics, we ask if localized systems possess ‘additional’ integrals of motion as well, so as to enhance the analogy with quantum integrable systems. We answer this in the affirmative in the present work. We construct a set of nontrivial integrals of motion for Anderson localized models, in terms of the original creation and annihilation operators. These are found as a power series in the hopping parameter. The recently found Type-1 Hamiltonians, which are known to be quantum integrable in a precise sense, motivate our construction. We note that these models can be viewed as disordered electron models with infinite-range hopping, where a similar series truncates at the linear order. We show that despite the infinite range hopping, all states but one are localized. We also study the conservation laws for the disorder free Aubry-Andre model, where the states are either localized or extended, depending on the strength of a coupling constant. We formulate a specific procedure for averaging over disorder, in order to examine the convergence of the power series. Using this procedure in the Aubry-Andre model, we show that integrals of motion given by our construction are well-defined in localized phase, but not so in the extended phase. Finally, we also obtain the integrals of motion for a model with interactions to lowest order in the interaction.
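The headline claim, that eigenmode projectors of the disordered Hamiltonian supply commuting integrals of motion, is easy to check numerically for the standard 1D Anderson model; the system size and disorder strength below are arbitrary.

```python
import numpy as np

# 1D Anderson model: nearest-neighbour hopping t plus random on-site disorder.
rng = np.random.default_rng(1)
L, t, W = 100, 1.0, 3.0
H = np.diag(rng.uniform(-W, W, L)) - t * (np.eye(L, k=1) + np.eye(L, k=-1))

eps, vecs = np.linalg.eigh(H)
alpha = vecs[:, 0]                      # one localized eigenmode
n_alpha = np.outer(alpha, alpha)        # projector: an integral of motion
comm = H @ n_alpha - n_alpha @ H
print("||[H, n_alpha]|| =", np.linalg.norm(comm))   # ~1e-14, i.e. conserved

# Localization check: inverse participation ratio is O(1) for localized states.
print("IPR of lowest state:", np.sum(alpha ** 4))
```

The nontrivial content of the paper is not this exact conservation but the expression of such conserved quantities as convergent power series in the hopping, written in the original creation and annihilation operators, and the breakdown of that convergence in the extended phase of the Aubry-Andre model.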
Space Generic Open Avionics Architecture (SGOAA) standard specification
NASA Technical Reports Server (NTRS)
Wray, Richard B.; Stovall, John R.
1994-01-01
This standard establishes the Space Generic Open Avionics Architecture (SGOAA). The SGOAA includes a generic functional model, processing structural model, and an architecture interface model. This standard defines the requirements for applying these models to the development of spacecraft core avionics systems. The purpose of this standard is to provide an umbrella set of requirements for applying the generic architecture models to the design of a specific avionics hardware/software processing system. This standard defines a generic set of system interface points to facilitate identification of critical services and interfaces. It establishes the requirement for applying appropriate low level detailed implementation standards to those interfaces points. The generic core avionics functions and processing structural models provided herein are robustly tailorable to specific system applications and provide a platform upon which the interface model is to be applied.
A Distributed Snow Evolution Modeling System (SnowModel)
NASA Astrophysics Data System (ADS)
Liston, G. E.; Elder, K.
2004-12-01
A spatially distributed snow-evolution modeling system (SnowModel) has been specifically designed to be applicable over a wide range of snow landscapes, climates, and conditions. To reach this goal, SnowModel is composed of four sub-models: MicroMet defines the meteorological forcing conditions, EnBal calculates surface energy exchanges, SnowMass simulates snow depth and water-equivalent evolution, and SnowTran-3D accounts for snow redistribution by wind. While other distributed snow models exist, SnowModel is unique in that it includes a well-tested blowing-snow sub-model (SnowTran-3D) for application in windy arctic, alpine, and prairie environments where snowdrifts are common. These environments comprise 68% of the seasonally snow-covered Northern Hemisphere land surface. SnowModel also accounts for snow processes occurring in forested environments (e.g., canopy interception related processes). SnowModel is designed to simulate snow-related physical processes occurring at spatial scales of 5-m and greater, and temporal scales of 1-hour and greater. These include: accumulation from precipitation; wind redistribution and sublimation; loading, unloading, and sublimation within forest canopies; snow-density evolution; and snowpack ripening and melt. To enhance its wide applicability, SnowModel includes the physical calculations required to simulate snow evolution within each of the global snow classes defined by Sturm et al. (1995), e.g., tundra, taiga, alpine, prairie, maritime, and ephemeral snow covers. The three, 25-km by 25-km, Cold Land Processes Experiment (CLPX) mesoscale study areas (MSAs: Fraser, North Park, and Rabbit Ears) are used as SnowModel simulation examples to highlight model strengths, weaknesses, and features in forested, semi-forested, alpine, and shrubland environments.
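A degree-day surrogate of the accumulation and melt bookkeeping is sketched below; it is far simpler than SnowModel's energy-balance physics, and the coefficients are illustrative.

```python
# Minimal daily snow water equivalent (SWE) update: snowfall accumulates when
# air temperature is below a threshold; melt follows a degree-day factor.
def swe_step(swe_mm, precip_mm, tair_c, ddf=3.0, t_snow=1.0):
    if tair_c <= t_snow:
        swe_mm += precip_mm                  # precipitation falls as snow
    melt = max(0.0, ddf * tair_c)            # degree-day melt, mm per day
    return max(0.0, swe_mm - melt)

swe = 0.0
for precip, temp in [(10, -5), (5, -2), (0, 3), (0, 6)]:   # (mm, deg C) per day
    swe = swe_step(swe, precip, temp)
    print(f"SWE = {swe:.1f} mm")
```

Replacing the degree-day melt with an explicit surface energy balance, and adding wind redistribution and canopy terms, is precisely what the EnBal, SnowTran-3D, and forest-process components of SnowModel contribute.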
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-06
... amending Section 11 to include a definition of "System routing table," defined as the proprietary process... Locked and Crossed Market Rules. \3\ Nasdaq has previously defined the term "System routing table" in... better value is the essence of a well-functioning competitive marketplace. The Exchange also believes...
The Role of Intelligent Agents in Advanced Information Systems
NASA Technical Reports Server (NTRS)
Kerschberg, Larry
1999-01-01
In this presentation we review the current ongoing research within George Mason University's (GMU) Center for Information Systems Integration and Evolution (CISE). We define characteristics of advanced information systems, discuss a family of agents for such systems, and show how GMU's domain modeling tools and techniques can be used to define a product-line architecture for configuring NASA missions. These concepts can be used to define advanced engineering environments such as those envisioned in NASA's new initiative for intelligent design and synthesis environments.
Using a System Model for Irrigation Management
NASA Astrophysics Data System (ADS)
de Souza, Leonardo; de Miranda, Eu; Sánchez-Román, Rodrigo; Orellana-González, Alba
2014-05-01
In Systems Thinking, the variables involved in any process have dynamic behavior, according to non-static relationships with the environment. This paper presents a system dynamics model developed to be used as an irrigation management tool. The model involves several parameters related to irrigation, such as soil characteristics, climate data and crop physiological parameters. The water available to plants in the soil is defined as a stock in the model, and this soil water content defines the right moment to irrigate and the water depth required to be applied. Crop water consumption reduces the soil water content; it is given by the potential evapotranspiration (ET), which acts as an outflow from the stock (soil water content). ET can be estimated by three methods: a) the FAO Penman-Monteith method (ETPM), b) the Hargreaves-Samani method (ETHS), based on air temperature data, and c) the Class A pan method (ETTCA). To validate the model, data from the States of Ceará and Minas Gerais, Brazil, were used, with bean as the crop. Keywords: System Dynamics, soil moisture content, agricultural water balance, irrigation scheduling.
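A minimal sketch of the stock-and-outflow logic, using the Hargreaves-Samani ET estimate, follows; the 0.0023 coefficient is the standard published form of that equation, while the soil and management numbers are examples only.

```python
import math

def et_hargreaves_samani(tmean, tmax, tmin, ra):
    """Reference ET in mm/day; ra is extraterrestrial radiation in mm/day."""
    return 0.0023 * ra * (tmean + 17.8) * math.sqrt(max(tmax - tmin, 0.0))

# Soil water stock (mm) drained by crop ET, refilled by irrigation whenever
# the stock falls below a management threshold.  Values are illustrative.
capacity, threshold, stock = 100.0, 70.0, 100.0
for day in range(10):
    stock -= et_hargreaves_samani(tmean=24, tmax=31, tmin=17, ra=15.0)
    if stock < threshold:
        print(f"day {day}: irrigate {capacity - stock:.1f} mm")
        stock = capacity
```

The stock crossing the threshold is the model's "right moment to irrigate", and the refill amount is the required water depth; swapping in the Penman-Monteith or Class A pan estimate changes only the outflow term.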
NASA Astrophysics Data System (ADS)
Girard, C.; Rinaudo, J. D.; Caballero, Y.; Pulido-Velazquez, M.
2012-04-01
This article presents a case study which illustrates how an integrated hydro-economic model can be applied to optimize a program of measures (PoM) at the river basin level. By allowing the integration of hydrological, environmental and economic aspects at a local scale, this model is useful for assisting water policy decision-making processes. The model identifies the least-cost PoM to satisfy the predicted 2030 urban and agricultural water demands while meeting the in-stream flow constraints. The PoM mainly consists of water saving and conservation measures at the different demands. It also includes measures mobilizing additional water resources from groundwater and inter-basin transfers, and improvements in reservoir operating rules. The flow constraints are defined to ensure a good status of the surface water bodies, as defined by the EU Water Framework Directive (WFD). The case study is conducted in the Orb river basin, a coastal basin in Southern France. It faces significant population growth, changes in agricultural patterns and limited water resources, and is classified as at risk of not meeting the good status by 2015. Urban demand is calculated by type of water user at the municipality level in 2006 and projected to 2030 with user-specific scenarios. Agricultural water demand is estimated at the irrigation district (canton) level in 2000 and projected to 2030 under three agricultural development scenarios. The total annual cost of each measure has been calculated taking into account operation and maintenance costs as well as investment costs. A first optimization model was developed in GAMS (General Algebraic Modeling System), applying mixed integer linear programming. The optimization is run to select the set of measures that minimizes the objective function, defined as the total cost of the applied measures, while meeting the demands and environmental constraints (minimum in-stream flows) for the 2030 time horizon. The first result is an optimized PoM for a drought year with a return period of five years, taken as a baseline scenario. A second step takes into account the impact of climate change on water demands and available resources. This allows decision makers to assess how the cost of the PoM evolves when the level of environmental constraints is tightened or relaxed, and so provides them with a valuable input to understand the opportunity costs and trade-offs when defining environmental objectives for the long term, including climate as a major factor of change. Finally, the model will be used on an extended hydrological time series to study the costs and impacts of the PoM on the allocation of water resources. This will also allow the investigation of the uncertainties and the effect of risk aversion of decision makers and users on the system management, as well as the influence of the perfect foresight assumption of deterministic optimization. ACKNOWLEDGEMENTS The study has been partially supported by the BRGM project Ouest-Hérault, the European Community 7th Framework Project GENESIS (n. 226536) on groundwater systems, and the Plan Nacional I+D+I 2008-2011 of the Spanish Ministry of Science and Innovation (sub-projects CGL2009-13238-C02-01 and CGL2009-13238-C02-02).
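The least-cost selection step can be prototyped as a tiny mixed-integer program, shown here with PuLP rather than GAMS; the costs, yields and supply gap are invented and are not the Orb basin data.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

# Toy programme of measures: each measure has an annual cost (EUR/yr) and a
# water yield (hm3/yr); pick the cheapest combination covering the shortfall.
measures = {
    "leak_reduction":  (2.0e6, 4.0),
    "drip_irrigation": (1.5e6, 3.0),
    "new_groundwater": (3.0e6, 5.0),
    "basin_transfer":  (4.5e6, 6.0),
}
gap = 8.0   # hm3/yr shortfall in the baseline drought year (assumed)

prob = LpProblem("program_of_measures", LpMinimize)
x = {m: LpVariable(m, cat=LpBinary) for m in measures}
prob += lpSum(cost * x[m] for m, (cost, _) in measures.items())          # objective
prob += lpSum(yld * x[m] for m, (_, yld) in measures.items()) >= gap     # coverage
prob.solve()
print([m for m in measures if x[m].value() == 1],
      "total cost:", value(prob.objective))
```

Re-solving while tightening the coverage constraint traces out exactly the cost curve the article uses to expose the opportunity cost of stricter environmental objectives.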
Giving wellness a spiritual workout. Two model programs emphasize the development of self-reliance.
Seaward, B L
1989-04-01
Many hospitals, corporations, and communities have developed wellness programs to help people maintain a healthy life-style. Today's wellness doctrine reflects modern medical thinking that, to achieve optimal human potential, a person must enjoy physical, mental, emotional, and spiritual well-being. Most wellness programs, however, concentrate on the physical; few take into account spiritual well-being. Wellness programs developed by the Boulder County YMCA, Longmont, CO, and the University of Maryland, College Park, were based on an interpretation of a model suggested by Carl Jung that defines spirituality as the development of self-reliance. According to Jung, the constituents of self-reliance include awareness, inner faith, self-worth, humility, patience, acceptance, and self-confidence. He suggested that the absence of any one of these could cause a breakdown in a person's belief system and lead to a spiritual crisis. These programs trained participants to recognize the importance of their spiritual well-being and to find practical ways to improve it. Key components were a stress management course; workshops in confidence building and values clarification; and classes, lectures, and workshops that emphasize the integration of spiritual and physical well-being.
A Structural Model Decomposition Framework for Hybrid Systems Diagnosis
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Bregon, Anibal; Roychoudhury, Indranil
2015-01-01
Nowadays, a large number of practical systems in aerospace and industrial environments are best represented as hybrid systems that consist of discrete modes of behavior, each defined by a set of continuous dynamics. These hybrid dynamics make the on-line fault diagnosis task very challenging. In this work, we present a new modeling and diagnosis framework for hybrid systems. Models are composed from sets of user-defined components using a compositional modeling approach. Submodels for residual generation are then generated for a given mode, and reconfigured efficiently when the mode changes. Efficient reconfiguration is established by exploiting causality information within the hybrid system models. The submodels can then be used for fault diagnosis based on residual generation and analysis. We demonstrate the efficient causality reassignment, submodel reconfiguration, and residual generation for fault diagnosis using an electrical circuit case study.
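A minimal residual-generation sketch for a two-mode system is given below; the submodels and threshold are invented for illustration and stand in for the structurally decomposed submodels the framework derives.

```python
# Each discrete mode has its own continuous submodel; the residual compares
# the measured output with the prediction of the currently active submodel.
models = {
    "mode_on":  lambda u: 2.0 * u + 1.0,   # submodel before the mode change
    "mode_off": lambda u: 0.5 * u,         # submodel after the mode change
}

def residual(mode, u, y_measured, tol=0.1):
    r = y_measured - models[mode](u)
    return r, abs(r) > tol                 # large residual -> fault candidate

print(residual("mode_on", u=1.0, y_measured=3.02))   # nominal behavior
print(residual("mode_on", u=1.0, y_measured=3.80))   # inconsistent -> fault
```

The contribution of the framework above is that, instead of keeping one monolithic model per mode, the submodels are reconfigured incrementally from causality information when the mode switches, which keeps online diagnosis tractable.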
Modeling of enterprise information systems implementation: a preliminary investigation
NASA Astrophysics Data System (ADS)
Yusuf, Yahaya Y.; Abthorpe, M. S.; Gunasekaran, Angappa; Al-Dabass, D.; Onuh, Spencer
2001-10-01
The business enterprise has never been in greater need of Agility and the current trend will continue unabated well into the future. It is now recognized that information system is both the foundation and a necessary condition for increased responsiveness. A successful implementation of Enterprise Resource Planning (ERP) can help a company to move towards delivering on its competitive objectives as it enables suppliers to reach out to customers beyond the borders of traditional market defined by geography. The cost of implementation, even when it is successful, could be significant. Bearing in mind the potential strategic benefits, it is important that the implementation project is managed effectively. To this end a project cost model against which to benchmark ongoing project expenditure versus activities completed has been proposed in this paper.
Integrate-and-fire models with an almost periodic input function
NASA Astrophysics Data System (ADS)
Kasprzak, Piotr; Nawrocki, Adam; Signerska-Rynkowska, Justyna
2018-02-01
We investigate leaky integrate-and-fire models (LIF models for short) driven by Stepanov and μ-almost periodic functions. Special attention is paid to the properties of the firing map and its displacement, which give information about the spiking behavior of the considered system. We provide conditions under which such maps are well-defined and are uniformly continuous. We show that the LIF models with Stepanov almost periodic inputs have uniformly almost periodic displacements. We also show that in the case of μ-almost periodic drives it may happen that the displacement map is uniformly continuous, but is not μ-almost periodic (and thus cannot be Stepanov or uniformly almost periodic). By allowing discontinuous inputs, we extend some previous results, showing, for example, that the firing rate for the LIF models with Stepanov almost periodic input exists and is unique. This is a starting point for the investigation of the dynamics of almost-periodically driven integrate-and-fire systems.
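To make the firing map concrete, here is a minimal numerical sketch for the standard LIF equation x' = -sigma*x + f(t) with threshold-and-reset, using an illustrative almost periodic input (a sum of two incommensurate sines plus a bias); the parameter values and input are assumptions for illustration, not taken from the paper.

```python
import numpy as np

# Minimal numerical sketch of an LIF firing map, assuming the standard
# model x' = -sigma*x + f(t) with reset to 0 after reaching threshold 1.
# The input below is an illustrative almost periodic drive, not the paper's.

SIGMA, THETA, DT = 1.0, 1.0, 1e-4

def f(t):
    return 2.0 + np.sin(t) + np.sin(np.sqrt(2.0) * t)

def firing_map(t0, t_max=200.0):
    """Return Phi(t0): the first spike time after a reset at t0."""
    x, t = 0.0, t0
    while t < t0 + t_max:
        x += DT * (-SIGMA * x + f(t))   # explicit Euler step
        t += DT
        if x >= THETA:
            return t
    return np.inf  # no spike found within the horizon

# Displacement map Phi(t) - t at a few reset times
for t0 in (0.0, 1.0, 2.0):
    print(t0, firing_map(t0) - t0)
```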
Development of a Coarse-grained Model of Polypeptoids for Studying Self-assembly in Solution
NASA Astrophysics Data System (ADS)
Du, Pu; Rick, Steven; Kumar, Revati
Polypeptoids, a class of highly tunable biomimetic analogues of peptides, are used as a prototypical model system to study self-assembly. The focus of this work is to glean insight into the effect of electrostatic and other non-covalent secondary interactions on the self-assembly of sequence-defined polypeptoids, with different charged and uncharged side groups, in solution, complementing experiments. Atomistic (AA) molecular dynamics simulation can provide a complete description of the self-assembly of polypeptoid systems. However, the long length and time scales needed for these processes require the development of a computationally cheaper alternative, namely coarse-grained (CG) models. A CG model for studying polypeptoid micellar interactions is being developed, parameterized against atomistic simulations, using a hybridized approach involving the OPLS-UA force field and the Stillinger-Weber (SW) potential form. The development of the model as well as results from simulations of self-assembly as a function of polypeptoid chemical structure and sequence will be presented.
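As an illustration of the SW potential form mentioned above, the sketch below implements the standard two-body SW term in reduced units; the constants are the classic silicon values, used only as placeholders, since a CG polypeptoid model would refit them against atomistic data.

```python
import numpy as np

# Sketch of the Stillinger-Weber two-body term used as a CG interaction
# form. Parameters are the classic silicon constants (placeholders only);
# a CG polypeptoid model would refit them to atomistic reference data.
A, B, p, q, a = 7.049556277, 0.6022245584, 4.0, 0.0, 1.8

def sw_pair(r, eps=1.0, sigma=1.0):
    """Two-body SW energy; smoothly vanishes at the cutoff r = a*sigma."""
    r = np.asarray(r, dtype=float)
    cut = a * sigma
    inside = r < cut
    phi = np.zeros_like(r)
    rr = r[inside]
    phi[inside] = (A * eps * (B * (sigma / rr) ** p - (sigma / rr) ** q)
                   * np.exp(sigma / (rr - cut)))
    return phi

print(sw_pair([0.9, 1.2, 1.9]))  # beyond the cutoff the energy is exactly 0
```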
On the use of Bayesian decision theory for issuing natural hazard warnings
NASA Astrophysics Data System (ADS)
Economou, T.; Stephenson, D. B.; Rougier, J. C.; Neal, R. A.; Mylne, K. R.
2016-10-01
Warnings for natural hazards improve societal resilience and are a good example of decision-making under uncertainty. A warning system is only useful if well defined and thus understood by stakeholders. However, most operational warning systems are heuristic: not formally or transparently defined. Bayesian decision theory provides a framework for issuing warnings under uncertainty but has not been fully exploited. Here, a decision theoretic framework is proposed for hazard warnings. The framework allows any number of warning levels and future states of nature, and a mathematical model for constructing the necessary loss functions for both generic and specific end-users is described. The approach is illustrated using one-day ahead warnings of daily severe precipitation over the UK, and compared to the current decision tool used by the UK Met Office. A probability model is proposed to predict precipitation, given ensemble forecast information, and loss functions are constructed for two generic stakeholders: an end-user and a forecaster. Results show that the Met Office tool issues fewer high-level warnings compared with our system for the generic end-user, suggesting the former may not be suitable for risk-averse end-users. In addition, raw ensemble forecasts are shown to be unreliable and result in higher losses from warnings.
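The core decision rule is compact: issue the warning level that minimizes expected loss under the forecast distribution. A minimal sketch follows; the loss matrix is an invented illustration, not the paper's fitted end-user loss function.

```python
import numpy as np

# Minimal sketch of the Bayes decision rule for warning levels.
# Rows = warning level, columns = severity state of nature; the
# numbers are illustrative, not the paper's elicited loss functions.
loss = np.array([
    [0.0, 1.0, 10.0],   # no warning: costly if severe weather occurs
    [0.2, 0.3,  4.0],   # low-level warning
    [1.0, 0.8,  0.5],   # high-level warning: costly false alarm
])

def best_warning(p_states):
    """p_states: forecast probabilities of each state of nature."""
    expected = loss @ np.asarray(p_states)   # expected loss per level
    return int(np.argmin(expected)), expected

level, exp_loss = best_warning([0.7, 0.25, 0.05])
print(level, exp_loss)
```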
Frame Shift/warp Compensation for the ARID Robot System
NASA Technical Reports Server (NTRS)
Latino, Carl D.
1991-01-01
The Automatic Radiator Inspection Device (ARID) is a system aimed at automating the tedious task of inspecting orbiter radiator panels. The ARID must be able to aim a camera accurately at the desired inspection points, of which there are on the order of 13,000. The ideal inspection points are known; however, the panel may be displaced due to inaccurate parking and warpage. A method of determining the mathematical description of a translated as well as a warped surface by accurate measurement of only a few points on this surface is developed here. The method uses a linear warp model whose effect is superimposed on the rigid-body translation. Because the angles involved are small, small-angle approximations are possible, which greatly reduces the computational complexity. Given an accurate linear warp model, all the desired translation and warp parameters can be obtained from the ideal locations of four fiducial points and the corresponding measurements of these points on the actual radiator surface. The method uses three of the fiducials to define a plane and the fourth to define the warp. Given this information, it is possible to determine a transformation that enables the ARID system to translate any desired inspection point on the ideal surface to its corresponding location on the actual surface.
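A minimal linear-algebra sketch of the four-fiducial idea follows; the bilinear x*y warp term and the fiducial coordinates are assumptions for illustration, since the abstract specifies only "a linear warp model".

```python
import numpy as np

# Illustrative sketch of the four-fiducial correction: three fiducial
# height errors define a tilted plane (small-angle rigid displacement),
# and the residual at the fourth fiducial sets one warp coefficient
# (here an assumed bilinear x*y term).
ideal = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)  # fiducial x, y
dz = np.array([0.002, 0.004, 0.001, 0.009])  # measured z errors at fiducials

# Plane z = a + b*x + c*y from the first three fiducials
M = np.column_stack([np.ones(3), ideal[:3]])
a, b, c = np.linalg.solve(M, dz[:3])

# Warp coefficient from the fourth fiducial's residual
x4, y4 = ideal[3]
w = (dz[3] - (a + b * x4 + c * y4)) / (x4 * y4)

def correct(x, y, z_ideal):
    """Map an ideal inspection point onto the measured surface."""
    return z_ideal + a + b * x + c * y + w * x * y

print(correct(0.5, 0.5, 0.0))
```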
Starn, J. Jeffrey; Stone, Janet Radway; Mullaney, John R.
2000-01-01
Contributing areas to public-supply wells at the Southbury Training School in Southbury, Connecticut, were mapped by simulating ground-water flow in stratified glacial deposits in the lower Transylvania Brook watershed. The simulation used nonlinear regression methods and informational statistics to estimate parameters of a ground-water flow model using drawdown data from an aquifer test. The goodness of fit of the model and the uncertainty associated with model predictions were statistically measured. A watershed-scale model, depicting large-scale ground-water flow in the Transylvania Brook watershed, was used to estimate the distribution of ground-water recharge. Estimates of recharge from 10 small basins in the watershed differed on the basis of the drainage characteristics of each basin. Small basins having well-defined stream channels contributed less ground-water recharge than basins having no defined channels because potential ground-water recharge was carried away in the stream channel. Estimates of ground-water recharge were used in an aquifer-scale parameter-estimation model. Seven variations of the ground-water-flow system were posed, each representing the ground-water-flow system in slightly different but realistic ways. The model that most closely reproduced measured hydraulic heads and flows with realistic parameter values was selected as the most representative of the ground-water-flow system and was used to delineate boundaries of the contributing areas. The model fit revealed no systematic model error, which indicates that the model is likely to represent the major characteristics of the actual system. Parameter values estimated during the simulation are as follows: horizontal hydraulic conductivity of coarse-grained deposits, 154 feet per day; vertical hydraulic conductivity of coarse-grained deposits, 0.83 feet per day; horizontal hydraulic conductivity of fine-grained deposits, 29 feet per day; specific yield, 0.007; specific storage, 1.6E-05. Average annual recharge was estimated using the watershed-scale model with no parameter estimation and was determined to be 24 inches per year in the valley areas and 9 inches per year in the upland areas. The parameter estimates produced in the model are similar to expected values, with two exceptions. The estimated specific yield of the stratified glacial deposits is lower than expected, which could be caused by the layered nature of the deposits. The recharge estimate produced by the model was also lower, about 32 percent of the average annual rate. This could be caused by the timing of the aquifer test with respect to the annual cycle of ground-water recharge, and by some of the expected recharge going to parts of the flow system that were not simulated. The data used in the calibration were collected during an aquifer test from October 30 to November 4, 1996. The model fit was very good, as indicated by the correlation coefficient (0.999) between the weighted simulated values and weighted observed values. The model also reproduced the general rise in ground-water levels caused by ground-water recharge and the cyclic fluctuations caused by pumping prior to the aquifer test. Contributing areas were delineated using a particle-tracking procedure. Hypothetical particles of water were introduced at each model cell in the top layer and were tracked to determine whether or not they reached the pumped well.
A deterministic contributing area was calculated using the calibrated model, and a probabilistic contributing area was calculated using a Monte Carlo approach along with the calibrated model. The Monte Carlo simulation was done, using the parameter variance/covariance matrix generated by the regression model, to estimate probabilities associated with the contributing area to the wells. The probabilities arise from uncertainty in the estimated parameter values, which in turn arises from the adequacy of the data available to comprehensively describe the ground-water-flow system.
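A conceptual sketch of that Monte Carlo step follows; `run_particle_tracking` is a hypothetical stand-in for the flow/particle-tracking pair, and the diagonal covariance is a placeholder for the regression's variance/covariance matrix.

```python
import numpy as np

# Conceptual sketch: draw parameter sets from the regression's
# variance/covariance matrix, rerun the flow/particle-tracking model
# for each draw, and accumulate per-cell capture frequencies.
rng = np.random.default_rng(0)

theta_hat = np.array([154.0, 0.83, 29.0, 0.007, 1.6e-5])  # estimates above
cov = np.diag((0.1 * theta_hat) ** 2)  # placeholder; the real matrix
                                       # comes from the regression model

def run_particle_tracking(theta, n_cells=1000):
    # Hypothetical stand-in: returns a boolean "captured" flag per cell.
    return rng.random(n_cells) < 0.2

n_draws, n_cells = 500, 1000
capture_count = np.zeros(n_cells)
for _ in range(n_draws):
    theta = rng.multivariate_normal(theta_hat, cov)
    capture_count += run_particle_tracking(theta)

probability = capture_count / n_draws  # per-cell contributing-area probability
```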
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rizzo, Davinia B.; Blackburn, Mark R.
2018-03-30
As systems become more complex, systems engineers rely on experts to inform decisions. There are few experts and limited data in many complex new technologies. This challenges systems engineers as they strive to plan activities such as qualification in an environment where technical constraints are coupled with the traditional cost, risk, and schedule constraints. Bayesian network (BN) models provide a framework to aid systems engineers in planning qualification efforts with complex constraints by harnessing expert knowledge and incorporating technical factors. By quantifying causal factors, a BN model can provide data about the risk of implementing a decision supplemented with information on driving factors. This allows a systems engineer to make informed decisions and examine "what-if" scenarios. This paper discusses a novel process developed to define a BN model structure based primarily on expert knowledge supplemented with extremely limited data (25 data sets or less). The model was developed to aid qualification decisions, specifically to predict the suitability of six degrees of freedom (6DOF) vibration testing for qualification. The process defined the model structure with expert knowledge in an unbiased manner. Finally, validation during the process execution and of the model provided evidence the process may be an effective tool in harnessing expert knowledge for a BN model.
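To illustrate the kind of inference such a BN supports, here is a tiny hand-enumerated sketch computing the marginal probability that 6DOF testing is suitable over binary nodes; the structure and probabilities are invented placeholders for what the paper elicits from experts.

```python
# Tiny hand-enumerated Bayesian network sketch in the spirit of the
# qualification model. Structure and numbers are invented illustrations;
# in the paper both come from elicited expert knowledge plus the data.
p_complex = 0.4                      # P(fixture dynamics are complex)
p_margin = {True: 0.5, False: 0.9}   # P(adequate margin | complexity)
p_suit = {(True, True): 0.8, (True, False): 0.2,
          (False, True): 0.95, (False, False): 0.4}  # P(suitable | c, m)

def p_suitable():
    """Marginalize over complexity c and margin m by enumeration."""
    total = 0.0
    for c in (True, False):
        pc = p_complex if c else 1 - p_complex
        for m in (True, False):
            pm = p_margin[c] if m else 1 - p_margin[c]
            total += pc * pm * p_suit[(c, m)]
    return total

print(p_suitable())
```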
Process for Managing and Customizing HPC Operating Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, David ML
2014-04-02
A process for maintaining a custom HPC operating system was developed at the Environmental Molecular Sciences Laboratory (EMSL) over the past ten years. This process is generic and flexible enough to manage continuous change, keeping systems updated while managing communication through well-defined pieces of software.
Identifying Bearing Rotordynamic Coefficients using an Extended Kalman Filter
NASA Technical Reports Server (NTRS)
Miller, Brad A.; Howard, Samuel A.
2008-01-01
An Extended Kalman Filter is developed to estimate the linearized direct and indirect stiffness and damping force coefficients for bearings in rotor-dynamic applications from noisy measurements of the shaft displacement in response to imbalance and impact excitation. The bearing properties are modeled as stochastic random variables using a Gauss-Markov model. Noise terms are introduced into the system model to account for all of the estimation error, including modeling errors and uncertainties and the propagation of measurement errors into the parameter estimates. The system model contains two user-defined parameters that can be tuned to improve the filter's performance; these parameters correspond to the covariance of the system and measurement noise variables. The filter is also strongly influenced by the initial values of the states and the error covariance matrix. The filter is demonstrated using numerically simulated data for a rotor-bearing system with two identical bearings, which reduces the number of unknown linear dynamic coefficients to eight. The filter estimates for the direct damping coefficients and all four stiffness coefficients correlated well with actual values, whereas the estimates for the cross-coupled damping coefficients were the least accurate.
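A generic sketch of the predict/update cycle described above follows; the state vector, models, and Jacobians are supplied by the caller, and Q, R correspond to the user-tuned system and measurement noise covariances mentioned in the abstract.

```python
import numpy as np

# Generic EKF predict/update sketch of the kind used to estimate the
# bearing coefficients: the state would stack shaft motion with the
# unknown stiffness/damping terms (modeled as Gauss-Markov).
def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One EKF cycle. f/h: state/measurement models; F/H: Jacobians at x."""
    # Predict
    x_pred = f(x)
    P_pred = F(x) @ P @ F(x).T + Q
    # Update with measurement z
    Hx = H(x_pred)
    S = Hx @ P_pred @ Hx.T + R                 # innovation covariance
    K = P_pred @ Hx.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ Hx) @ P_pred
    return x_new, P_new

# Trivial usage: scalar random-walk state observed directly
f = lambda x: x
F = lambda x: np.eye(1)
h = lambda x: x
H = lambda x: np.eye(1)
x, P = np.zeros(1), np.eye(1)
x, P = ekf_step(x, P, z=np.array([0.3]), f=f, F=F, h=h, H=H,
                Q=0.01 * np.eye(1), R=0.1 * np.eye(1))
print(x, P)
```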
META-GLARE: a shell for CIG systems.
Bottrighi, Alessio; Rubrichi, Stefania; Terenziani, Paolo
2015-01-01
In the last twenty years, many different approaches to dealing with Computer-Interpretable clinical Guidelines (CIGs) have been developed, each one proposing its own representation formalism (mostly based on the Task-Network Model) and execution engine. We propose META-GLARE, a shell for easily defining new CIG systems. Using META-GLARE, CIG system designers can easily define their own systems (basically by defining their representation language) with a minimal programming effort. META-GLARE is thus a flexible and powerful vehicle for research on CIGs, since it supports easy and fast prototyping of new CIG systems.
Li, Dan; Li, Feng; Guttipatti, Pavithran; Song, Yuanquan
2018-05-05
The regrowth capacity of damaged neurons governs neuroregeneration and functional recovery after nervous system trauma. Over the past few decades, various intrinsic and extrinsic inhibitory factors involved in the restriction of axon regeneration have been identified. However, simply removing these inhibitory cues is insufficient for successful regeneration, indicating the existence of additional regulatory machinery. Drosophila melanogaster, the fruit fly, shares evolutionarily conserved genes and signaling pathways with vertebrates, including humans. Combining the powerful genetic toolbox of flies with two-photon laser axotomy/dendriotomy, we describe here the Drosophila sensory neuron - dendritic arborization (da) neuron injury model as a platform for systematically screening for novel regeneration regulators. Briefly, this paradigm includes a) the preparation of larvae, b) lesion induction to dendrite(s) or axon(s) using a two-photon laser, c) live confocal imaging post-injury and d) data analysis. Our model enables highly reproducible injury of single labeled neurons, axons, and dendrites of well-defined neuronal subtypes, in both the peripheral and central nervous system.
Cis-dicarbonyl binding at cobalt and iron porphyrins with saddle-shape conformation.
Seufert, Knud; Bocquet, Marie-Laure; Auwärter, Willi; Weber-Bargioni, Alexander; Reichert, Joachim; Lorente, Nicolás; Barth, Johannes V
2011-02-01
Diatomic molecules attached to complexed iron or cobalt centres are important in many biological processes. In natural systems, metallotetrapyrrole units carry respiratory gases or provide sensing and catalytic functions. Conceiving synthetic model systems strongly helps to determine the pertinent chemical foundations for such processes, with recent work highlighting the importance of the prosthetic groups' conformational flexibility as an intricate variable affecting their functional properties. Here, we present simple model systems to investigate, at the single molecule level, the interaction of carbon monoxide with saddle-shaped iron- and cobalt-porphyrin conformers, which have been stabilized as two-dimensional arrays on well-defined surfaces. Using scanning tunnelling microscopy we identified a novel bonding scheme expressed in tilted monocarbonyl and cis-dicarbonyl configurations at the functional metal-macrocycle unit. Modelling with density functional theory revealed that the weakly bonded diatomic carbonyl adduct can effectively bridge specific pyrrole groups with the metal atom as a result of the pronounced saddle-shape conformation of the porphyrin cage.
Modeling biochemical transformation processes and information processing with Narrator.
Mandel, Johannes J; Fuss, Hendrik; Palfreyman, Niall M; Dubitzky, Werner
2007-03-27
Software tools that model and simulate the dynamics of biological processes and systems are becoming increasingly important. Some of these tools offer sophisticated graphical user interfaces (GUIs), which greatly enhance their acceptance by users. Such GUIs are based on symbolic or graphical notations used to describe, interact and communicate the developed models. Typically, these graphical notations are geared towards conventional biochemical pathway diagrams. They permit the user to represent the transport and transformation of chemical species and to define inhibitory and stimulatory dependencies. A critical weakness of existing tools is their lack of supporting an integrative representation of transport, transformation as well as biological information processing. Narrator is a software tool facilitating the development and simulation of biological systems as Co-dependence models. The Co-dependence Methodology complements the representation of species transport and transformation together with an explicit mechanism to express biological information processing. Thus, Co-dependence models explicitly capture, for instance, signal processing structures and the influence of exogenous factors or events affecting certain parts of a biological system or process. This combined set of features provides the system biologist with a powerful tool to describe and explore the dynamics of life phenomena. Narrator's GUI is based on an expressive graphical notation which forms an integral part of the Co-dependence Methodology. Behind the user-friendly GUI, Narrator hides a flexible feature which makes it relatively easy to map models defined via the graphical notation to mathematical formalisms and languages such as ordinary differential equations, the Systems Biology Markup Language or Gillespie's direct method. This powerful feature facilitates reuse, interoperability and conceptual model development. Narrator is a flexible and intuitive systems biology tool. It is specifically intended for users aiming to construct and simulate dynamic models of biology without recourse to extensive mathematical detail. Its design facilitates mappings to different formal languages and frameworks. The combined set of features makes Narrator unique among tools of its kind. Narrator is implemented as Java software program and available as open-source from http://www.narrator-tool.org.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rawnsley, K.; Swaby, P.
1996-08-01
It is increasingly acknowledged that in order to understand and forecast the behavior of fracture-influenced reservoirs we must attempt to reproduce the fracture system geometry and use this as a basis for fluid flow calculation. This article presents a recently developed fracture modelling prototype designed specifically for use in hydrocarbon reservoir environments. The prototype "FRAME" (FRActure Modelling Environment) aims to provide a tool which allows the generation of realistic 3D fracture systems within a reservoir model, constrained to the known geology of the reservoir by both mechanical and statistical considerations, and which can be used as a basis for fluid flow calculation. Two newly developed modelling techniques are used. The first is an interactive tool which allows complex fault surfaces and their associated deformations to be reproduced. The second is a "genetic" model which grows fracture patterns from seeds using conceptual models of fracture development. The user defines the mechanical input and can retrieve all the statistics of the growing fractures to allow comparison with assumed statistical distributions for the reservoir fractures. Input parameters include growth rate, fracture interaction characteristics, orientation maps and density maps. More traditional statistical stochastic fracture models are also incorporated. FRAME is designed to allow the geologist to input hard or soft data including seismically defined surfaces, well fractures, outcrop models, analogue or numerical mechanical models or geological "feeling". The geologist is not restricted to "a priori" models of fracture patterns that may not correspond to the data.
Enabling Rapid Naval Architecture Design Space Exploration
NASA Technical Reports Server (NTRS)
Mueller, Michael A.; Dufresne, Stephane; Balestrini-Robinson, Santiago; Mavris, Dimitri
2011-01-01
Well-accepted conceptual ship design tools can be used to explore a design space, but more precise results can be found using detailed models in full-featured computer-aided design programs. However, defining a detailed model can be a time-intensive task, and hence there is an incentive for time-sensitive projects to use conceptual design tools to explore the design space. In this project, the combination of advanced aerospace systems design methods and an accepted conceptual design tool facilitates the creation of a tool that enables the user not only to visualize ship geometry but also to determine design feasibility and estimate the performance of a design.
Bailey, Z.C.
1993-01-01
A comprehensive hydrologic investigation of the Jackson area in Madison County, Tennessee, was conducted to provide information for the development of a wellhead-protection program for two municipal well fields. The 136-square-mile study area is between the Middle Fork Forked Deer and South Fork Forked Deer Rivers and includes the city of Jackson. The formations that underlie and crop out in the study area, in descending order, are the Memphis Sand, Fort Pillow Sand, and Porters Creek Clay. The saturated thickness of the Memphis Sand ranges from 0 to 270 feet; the Fort Pillow Sand, from 0 to 180 feet. The Porters Creek Clay, which ranges from 130 to 320 feet thick, separates a deeper formation, the McNairy Sand, from the shallower units. Estimates by other investigators of hydraulic conductivity for the Memphis Sand range from 80 to 202 feet per day. Estimates of transmissivity of the Memphis Sand range from 2,700 to 33,000 feet squared per day. Estimates of hydraulic conductivity for the Fort Pillow Sand range from 68 to 167 feet per day, and estimates of transmissivity of that unit range from 6,700 to 10,050 feet squared per day. A finite-difference, ground-water flow model was calibrated to steady-state hydrologic conditions of April 1989, and was used to simulate hypothetical pumping plans for the North and South Well Fields. The aquifers were represented as three layers in the model to simulate the ground-water flow system. Layer 1 is the saturated part of the Memphis Sand; layer 2 is the upper half of the Fort Pillow Sand; and layer 3 is the lower half of the Fort Pillow Sand. The steady-state water budget of the simulated system showed that more than half of the inflow to the ground-water system is underflow from the model boundaries. Most of this inflow is discharged as seepage to the rivers and to pumping wells. Slightly less than half of the inflow is from areal recharge and recharge from streams. About 75 percent of the discharge from the system is into the streams, lakes, and out of the model area through a small quantity of ground-water underflow. The remaining 25 percent is discharge to pumping wells. The calibrated model was modified to simulate the effects on the ground-water system of three hypothetical pumping plans that increased pumping from the North Well Field to up to 20 million gallons per day, and from the South Well Field, to up to 15 million gallons per day. Maximum drawdown resulting from the 20 million-gallons-per-day rate of simulated pumping was 44.7 feet in a node containing a pumping well, and maximum drawdown over an extended area was about 38 feet. Up to 34 percent of ground-water seepage to streams in the calibrated model was intercepted by pumping in the simulations. A maximum of 9 percent more water was induced through model boundaries. A particle-tracking program, MODPATH, was used to delineate areas contributing water to the North and South Well Fields for the calibrated model and the three pumping simulations, and to estimate distances for different times-of-travel to the wells. The size of the area contributing water to the North Well Field, defined by the 5-year time-of-travel capture zone, is about 0.8 by 1.8 miles for the calibrated model and pumping plan 1. The size of the area for pumping plan 2 is 1.1 by 2.0 miles and, for pumping plan 3, 1.6 by 2.2 miles. The range of distance for 1-year time-of-travel to individual wells is 200 to 800 feet for the calibrated model and plan 1, and 350 to 950 feet for plans 2 and 3.
The size of the area contributing water to the South Well Field, defined by the 5-year time-of-travel capture zone, is about 0.8 by 1.4 miles for the calibrated model. The size of the area for pumping plans 1 and 3 is 1.6 by 2.2 miles and, for pumping plan 2, 1.1 by 1.7 miles. The range of distance for 1-year time-of-travel to individual wells is 120 to 530 feet for the calibrated model, 670 to 1,300 feet for pumping plans 1 and 3, and 260 to 850 feet for pumping plan 2.
A Collective Study on Modeling and Simulation of Resistive Random Access Memory
NASA Astrophysics Data System (ADS)
Panda, Debashis; Sahu, Paritosh Piyush; Tseng, Tseung Yuen
2018-01-01
In this work, we provide a comprehensive discussion of the various models proposed for the design and description of resistive random access memory (RRAM), which, being a nascent technology, is heavily reliant on accurate models to develop efficient working designs and standardize its implementation across devices. This review provides detailed information regarding the various physical methodologies considered for developing models for RRAM devices. It covers all the important models reported until now and elucidates their features and limitations. Various additional effects and anomalies arising from the memristive system have been addressed, and the solutions provided by the models to these problems have been shown as well. All the fundamental concepts of RRAM model development, such as device operation, switching dynamics, and current-voltage relationships, are covered in detail in this work. Popular models proposed by Chua, HP Labs, Yakopcic, TEAM, Stanford/ASU, Ielmini, Berco-Tseng, and many others have been compared and analyzed extensively on various parameters. The workings and implementations of window functions such as Joglekar, Biolek, and Prodromakis have been presented and compared as well. New well-defined modeling concepts have been discussed which increase the applicability and accuracy of the models. The use of these concepts brings forth several improvements in the existing models, which have been enumerated in this work. Following the template presented, highly accurate models can be developed, which will vastly help future model developers and the modeling community.
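For reference, the sketch below implements the three named window functions in the forms commonly quoted in the RRAM literature (x is the normalized state variable, i the current, and p, j fitting parameters); specific papers vary slightly in notation.

```python
import numpy as np

# The three window functions named above, in their commonly published
# forms. Biolek's window uses a step in the current to remove the
# boundary-lock problem at x = 0 and x = 1.
def joglekar(x, p=1):
    return 1 - (2 * x - 1) ** (2 * p)

def biolek(x, i, p=1):
    step = 1.0 if i <= 0 else 0.0   # stp(-i): 1 when the current is <= 0
    return 1 - (x - step) ** (2 * p)

def prodromakis(x, p=1, j=1.0):
    return j * (1 - ((x - 0.5) ** 2 + 0.75) ** p)

xs = np.linspace(0, 1, 5)
print(joglekar(xs), biolek(xs, i=+1), prodromakis(xs))
```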
Accounting for care: Healthcare Resource Groups for paediatric critical care.
Murphy, Janet; Morris, Kevin
2008-02-01
Healthcare Resource Groups are a way of grouping patients in relation to the amount of healthcare resources they consume. They are the basis for implementation of Payment by Results by the Department of Health in England. An expert working group was set up to define a dataset for paediatric critical care that would in turn support the derivation of Healthcare Resource Groups. Three relevant classification systems were identified and tested with data from ten PICUs, including data about diagnoses, number of organ systems supported, interventions and nursing activity. Each PICU provided detailed costing for the financial year 2005/2006. Eighty-three per cent of PICU costs were found to be related to staff costs, with the largest cost being nursing costs. The Nursing Activity Score system was found to be a poor predictor of staff resource use, as was the adult HRG model based on the number of organ systems supported. It was decided to develop the HRGs based on a 'levels of care' approach; 32 data items were defined to support HRG allocation. From October 2007, data have been collected daily to identify the HRGs for each PICU patient and are being used by the Department of Health to estimate reference costs for PICU services. The data can also be used to support improved audit of PICU activity nationally as well as comparison of workload across different units and modelling of staff requirements within a unit.
The Peace Mediator effect: Heterogeneous agents can foster consensus in continuous opinion models
NASA Astrophysics Data System (ADS)
Vilone, Daniele; Carletti, Timoteo; Bagnoli, Franco; Guazzini, Andrea
2016-11-01
Statistical mechanics has proven able to capture the fundamental rules underlying phenomena of social aggregation and opinion dynamics, well studied in disciplines like sociology and psychology. This approach is based on the underlying paradigm that the interesting dynamics of multi-agent systems emerge from the correct definition of a few parameters governing the evolution of each individual. In this context, we propose a particular model of opinion dynamics based on the psychological construct named "cognitive dissonance". Our system is made of interacting individuals, the agents, each bearing only two dynamical variables (respectively "opinion" and "affinity") self-consistently adjusted during time evolution. We also define two special classes of interacting entities, both acting for a peace mediation process but via different courses of action: "diplomats" and "auctoritates". The behavior of the system with and without peace mediators (PMs) is investigated and discussed with reference to the corresponding psychological and social implications.
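To show the two-variable idea concretely, here is a toy simulation sketch; the update rules (opinions drift toward affine neighbours, affinity reinforced by agreement) are an invented illustration of the coupling, not the authors' exact equations.

```python
import numpy as np

# Toy sketch of a coupled opinion/affinity update; rules and constants
# are illustrative stand-ins, not the paper's model.
rng = np.random.default_rng(1)
N = 50
opinion = rng.uniform(-1, 1, N)
affinity = rng.uniform(0, 1, (N, N))

ALPHA, BETA, TOL = 0.1, 0.05, 0.3

for _ in range(1000):
    i, j = rng.integers(N, size=2)
    if i == j:
        continue
    # Opinions converge in proportion to mutual affinity
    delta = opinion[j] - opinion[i]
    opinion[i] += ALPHA * affinity[i, j] * delta
    # Affinity is reinforced when opinions are already close
    agree = 1.0 if abs(delta) < TOL else -1.0
    affinity[i, j] = np.clip(affinity[i, j] + BETA * agree, 0, 1)

print(opinion.std())  # spread shrinks as consensus builds
```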
Framework for Architecture Trade Study Using MBSE and Performance Simulation
NASA Technical Reports Server (NTRS)
Ryan, Jessica; Sarkani, Shahram; Mazzuchim, Thomas
2012-01-01
Increasing complexity in modern systems as well as cost and schedule constraints require a new paradigm of systems engineering to fulfill stakeholder needs. Challenges facing efficient trade studies include poor tool interoperability, lack of simulation coordination (design parameters) and requirements flowdown. A recent trend toward Model Based System Engineering (MBSE) includes flexible architecture definition, program documentation, requirements traceability and systems engineering reuse. As a new domain, MBSE still lacks governing standards and commonly accepted frameworks. This paper proposes a framework for efficient architecture definition using MBSE in conjunction with domain-specific simulation to evaluate trade studies. A general framework is provided, followed by a specific example including a method for designing a trade study, defining candidate architectures, planning simulations to fulfill requirements, and finally a weighted decision analysis to optimize system objectives.
QMU as an approach to strengthening the predictive capabilities of complex models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gray, Genetha Anne.; Boggs, Paul T.; Grace, Matthew D.
2010-09-01
Complex systems are made up of multiple interdependent parts, and the behavior of the entire system cannot always be directly inferred from the behavior of the individual parts. They are nonlinear, and system responses are not necessarily additive. Examples of complex systems include energy, cyber and telecommunication infrastructures, human and animal social structures, and biological structures such as cells. To meet the goals of infrastructure development, maintenance, and protection for cyber-related complex systems, novel modeling and simulation technology is needed. Sandia has shown success using M&S in the nuclear weapons (NW) program. However, complex systems represent a significant challenge and a relative departure from the classical M&S exercises, and many of the scientific and mathematical M&S processes must be re-envisioned. Specifically, in the NW program, requirements and acceptable margins for performance, resilience, and security are well-defined and given quantitatively from the start. The Quantification of Margins and Uncertainties (QMU) process helps to assess whether or not these safety, reliability and performance requirements have been met after a system has been developed. In this sense, QMU is used as a sort of check that requirements have been met once the development process is completed. In contrast, performance requirements and margins may not have been defined a priori for many complex systems (i.e., the Internet, electrical distribution grids, etc.), particularly not in quantitative terms. This project addresses this fundamental difference by investigating the use of QMU at the start of the design process for complex systems. Three major tasks were completed. First, the characteristics of the cyber infrastructure problem were collected and considered in the context of QMU-based tools. Second, UQ methodologies for the quantification of model discrepancies were considered in the context of statistical models of cyber activity. Third, Bayesian methods for optimal testing in the QMU framework were developed. The completion of this project represents an increased understanding of how to apply and use the QMU process as a means for improving model predictions of the behavior of complex systems.
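The QMU bookkeeping at the heart of this is simple to state: a requirement is met with confidence when the margin is large relative to its uncertainty. A minimal sketch follows, with illustrative numbers and an assumed acceptance threshold.

```python
# Minimal QMU bookkeeping sketch: compute the margin-to-uncertainty
# (confidence) ratio. Numbers and the threshold are illustrative only.
def qmu_ratio(best_estimate, requirement, uncertainty):
    margin = requirement - best_estimate   # assumes "lower is better"
    return margin / uncertainty

cr = qmu_ratio(best_estimate=70.0, requirement=100.0, uncertainty=12.0)
print(cr, "OK" if cr > 2.0 else "needs work")
```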
Content-Addressable Memory Storage by Neural Networks: A General Model and Global Liapunov Method,
1988-03-01
Liapunov functions were also described for Volterra-Lotka systems whose off-diagonal terms are relatively small (Kilmer, 1972). The framework covers the masking field, bidirectional associative memory, Volterra-Lotka, Gilpin-Ayala, and Eigen-Schuster models; the Cohen-Grossberg model thus defines a general model for content-addressable memory storage by neural networks.
Intelligent Processing Equipment Research Supported by the National Science Foundation
NASA Technical Reports Server (NTRS)
Rao, Suren B.
1992-01-01
The research in progress on processes, workstations, and systems has the goal of developing a high level of understanding of the issues involved. This will enable the incorporation of a level of intelligence that will allow the creation of autonomous manufacturing systems that operate in an optimum manner, under a wide range of conditions. The emphasis of the research has been on the development of highly productive and flexible techniques to address current and future problems in manufacturing and processing. Several of these projects have resulted in well-defined and established models that can now be implemented in the application arena in the next few years.
Multi-RCM ensemble downscaling of global seasonal forecasts (MRED)
NASA Astrophysics Data System (ADS)
Arritt, R. W.
2008-12-01
The Multi-RCM Ensemble Downscaling (MRED) project was recently initiated to address the question: "Can regional climate models provide additional useful information from global seasonal forecasts?" MRED will use a suite of regional climate models to downscale seasonal forecasts produced by the new National Centers for Environmental Prediction (NCEP) Climate Forecast System (CFS) seasonal forecast system and the NASA GEOS5 system. The initial focus will be on wintertime forecasts in order to evaluate topographic forcing, snowmelt, and the potential usefulness of higher resolution, especially for near-surface fields influenced by high-resolution orography. Each regional model will cover the conterminous US (CONUS) at approximately 32 km resolution, and will perform an ensemble of 15 runs for each year 1982-2003 for the forecast period 1 December - 30 April. MRED will compare individual regional and global forecasts as well as ensemble mean precipitation and temperature forecasts, which are currently being used to drive macroscale land surface models (LSMs), together with wind, humidity, radiation, and turbulent heat fluxes, which are important for more advanced coupled macro-scale hydrologic models. Metrics of ensemble spread will also be evaluated. Extensive analysis will be performed to link improvements in downscaled forecast skill to regional forcings and physical mechanisms. Our overarching goal is to determine what additional skill can be provided by a community ensemble of high-resolution regional models, which we believe will eventually define a strategy for more skillful and useful regional seasonal climate forecasts.
A validation study of the simulation software gprMax by varying antenna stand-off height
NASA Astrophysics Data System (ADS)
Wilkinson, Josh; Davidson, Nigel
2018-04-01
The design and subsequent testing of suitable antennas and of complete ground-penetrating radar (GPR) systems can be both time consuming and expensive, with the need to understand the performance of a system in realistic environments of great importance to the end user. Through the use of suitably validated simulations, these costs could be significantly reduced, allowing an economical capability to be built which can accurately predict the performance of novel GPR antennas and existing commercial-off-the-shelf (COTS) systems in a user-defined environment. This paper focuses on a preliminary validation of the open source software gprMax, which features the ability to custom define antennas, targets, clutter objects and realistic heterogeneous soils. As an initial step in the assessment of the software, a comparison of the modelled response of targets buried in sand to experimental data has been undertaken, with the variation in response with antenna stand-off height investigated. This was conducted for both a simple bespoke bow-tie antenna design as well as for a Geophysical Survey Systems, Inc. (GSSI) commercial system, building upon previous work which explored the fidelity of gprMax in reproducing the S11 of simple antenna designs.
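A back-of-envelope model is useful when sanity-checking how stand-off height shifts the target response: the two-way travel time splits into an air leg and a soil leg slowed by sqrt(eps_r). The sketch below illustrates the trend only; the depth and permittivity values are assumed, and the actual waveform comes from the gprMax run.

```python
import numpy as np

# Two-way travel time to a buried target as stand-off height varies:
# an air leg (to the ground surface) plus a soil leg slowed by the
# relative permittivity. Values are illustrative assumptions.
C = 299_792_458.0  # speed of light in vacuum, m/s

def two_way_time(standoff_m, depth_m, eps_r):
    t_air = 2 * standoff_m / C
    t_soil = 2 * depth_m * np.sqrt(eps_r) / C
    return t_air + t_soil

for h in (0.0, 0.05, 0.10, 0.20):
    print(f"{h:.2f} m stand-off -> {two_way_time(h, 0.15, 4.0) * 1e9:.2f} ns")
```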
NASA Technical Reports Server (NTRS)
Charles, Steve; Williams, Roy
1989-01-01
Data describing the microsurgeon's hand dynamics were recorded and analyzed in order to provide an accurate model for the telemicrosurgery application of the Bimanual Telemicro-operation Test Bed. The model, in turn, will guide the development of algorithms for the control of robotic systems in bimanual telemicro-operation tasks. Measurements were made at the hand-tool interface and include position, acceleration, and force at the tool-finger interface. Position information was captured using an orthogonal pulsed magnetic field positioning system, resulting in measurements in all six degrees of freedom (DOF). Acceleration data at the hands was obtained using accelerometers positioned in a triaxial arrangement on the back of the hand, allowing measurements in all three Cartesian-coordinate axes. Force data was obtained by using miniature load cells positioned between the tool and the finger and included those forces experienced perpendicular to the tool shaft and those transferred from the tool-tissue site. Position data will provide a minimum/maximum reference frame for the robotic system's work space or envelope. Acceleration data will define the response times needed by the robotic system in order to emulate and subsequently outperform the human operator's tool movements. The force measurements will aid in designing a force-reflective, force-scaling system as well as defining the range of forces the robotic system will encounter. All analog data was acquired by a 16-channel analog-to-digital conversion system residing in an IBM PC/AT-compatible computer at the Center's laboratory. The same system was also used to analyze and present the data.
Bissacco, Alessandro; Chiuso, Alessandro; Soatto, Stefano
2007-11-01
We address the problem of performing decision tasks, and in particular classification and recognition, in the space of dynamical models in order to compare time series of data. Motivated by the application of recognition of human motion in image sequences, we consider a class of models that include linear dynamics, both stable and marginally stable (periodic), both minimum and non-minimum phase, driven by non-Gaussian processes. This requires extending existing learning and system identification algorithms to handle periodic modes and nonminimum phase behavior, while taking into account higher-order statistics of the data. Once a model is identified, we define a kernel-based cord distance between models that includes their dynamics, their initial conditions as well as input distribution. This is made possible by a novel kernel defined between two arbitrary (non-Gaussian) distributions, which is computed by efficiently solving an optimal transport problem. We validate our choice of models, inference algorithm, and distance on the tasks of human motion synthesis (sample paths of the learned models), and recognition (nearest-neighbor classification in the computed distance). However, our work can be applied more broadly where one needs to compare historical data while taking into account periodic trends, non-minimum phase behavior, and non-Gaussian input distributions.
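As a sketch of the kernel-between-distributions idea, the snippet below builds an exponential kernel from a 1-D optimal-transport (Wasserstein-1) cost; the exponential form and bandwidth are illustrative choices, not necessarily the paper's exact construction.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Sketch of a kernel between two (possibly non-Gaussian) input
# distributions built from an optimal-transport cost. For 1-D samples
# the transport cost is the Wasserstein-1 distance; the exponential
# form and the bandwidth LAM are assumed illustrative choices.
LAM = 1.0

def ot_kernel(samples_p, samples_q):
    w = wasserstein_distance(samples_p, samples_q)
    return np.exp(-LAM * w)

rng = np.random.default_rng(2)
p = rng.gamma(2.0, 1.0, 500)     # skewed, non-Gaussian inputs
q = rng.gamma(2.5, 1.0, 500)
print(ot_kernel(p, q))
```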
Biomass Scenario Model Documentation: Data and References
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Y.; Newes, E.; Bush, B.
2013-05-01
The Biomass Scenario Model (BSM) is a system dynamics model that represents the entire biomass-to-biofuels supply chain, from feedstock to fuel use. The BSM is a complex model that has been used for extensive analyses; the model and its results can be better understood if input data used for initialization and calibration are well-characterized. It has been carefully validated and calibrated against the available data, with data gaps filled in using expert opinion and internally consistent assumed values. Most of the main data sources that feed into the model are recognized as baseline values by the industry. This report documents data sources and references in Version 2 of the BSM (BSM2), which only contains the ethanol pathway, although subsequent versions of the BSM contain multiple conversion pathways. The BSM2 contains over 12,000 total input values, with 506 distinct variables. Many of the variables are opportunities for the user to define scenarios, while others are simply used to initialize a stock, such as the initial number of biorefineries. However, around 35% of the distinct variables are defined by external sources, such as models or reports. The focus of this report is to provide insight into which sources are most influential in each area of the supply chain.
Making Sense of Complexity with FRE, a Scientific Workflow System for Climate Modeling (Invited)
NASA Astrophysics Data System (ADS)
Langenhorst, A. R.; Balaji, V.; Yakovlev, A.
2010-12-01
A workflow is a description of a sequence of activities that is both precise and comprehensive. Capturing the workflow of climate experiments provides a record which can be queried or compared, and allows reproducibility of the experiments - sometimes even to the bit level of the model output. This reproducibility helps to verify the integrity of the output data, and enables easy perturbation experiments. GFDL's Flexible Modeling System Runtime Environment (FRE) is a production-level software project which defines and implements building blocks of the workflow as command line tools. The scientific, numerical and technical input needed to complete the workflow of an experiment is recorded in an experiment description file in XML format. Several key features add convenience and automation to the FRE workflow:
● Experiment inheritance makes it possible to define a new experiment with only a reference to the parent experiment and the parameters to override.
● Testing is a basic element of the FRE workflow: experiments define short test runs which are verified before the main experiment is run, and a set of standard experiments is verified with new code releases.
● FRE is flexible enough to support short runs with mere megabytes of data as well as high-resolution experiments that run on thousands of processors for months, producing terabytes of output data. Experiments run in segments of model time; after each segment, the state is saved and the model can be checkpointed at that level. Segment length is defined by the user, but the number of segments per system job is calculated to fit optimally within the batch scheduler requirements. FRE provides job control across multiple segments, and tools to monitor and alter the state of long-running experiments.
● Experiments are entered into a Curator Database, which stores queryable metadata about the experiment and its output.
● FRE includes a set of standardized post-processing functions as well as the ability to incorporate user-level functions. FRE post-processing can take us all the way to the preparation of graphical output for a scientific audience, and the publication of data on a public portal.
● Recent FRE development includes incorporating a distributed workflow to support remote computing.
PACES: A Model of Student Well-Being
ERIC Educational Resources Information Center
Nelson, Mark D.; Tarabochia, Dawn W.; Koltz, Rebecca L.
2015-01-01
School counselors design, deliver, and evaluate comprehensive, developmental school counseling programs that are focused on enhancing student development and success. A model of student well-being, known as PACES, is defined and described that consists of five distinct and interactive domains: physical, affective, cognitive, economic, and social.…
Process modeling and bottleneck mining in online peer-review systems.
Premchaiswadi, Wichian; Porouhan, Parham
2015-01-01
This paper is divided into three main parts. In the first part of the study, we captured, collected and formatted an event log describing the handling of reviews for proceedings of an international conference in Thailand. In the second part, we used several process mining techniques in order to discover process models and social, organizational, and hierarchical structures from the proceedings' event log. In the third part, we detected the deviations and bottlenecks of the peer review process by comparing the observed events (i.e., the authentic dataset) with a pre-defined model (i.e., the master map). Finally, we investigated the performance information as well as the total waiting time in order to improve the effectiveness and efficiency of the online submission and peer review system for prospective conferences and seminars. Consequently, the main goals of the study were as follows: (1) to convert the collected event log into the appropriate format supported by process mining analysis tools, (2) to discover process models and to construct social networks based on the collected event log, and (3) to find deviations, discrepancies and bottlenecks between the collected event log and the master pre-defined model. The results showed that although each paper was initially sent to three different reviewers, it was not always possible to make a decision after the first round of reviewing; therefore, additional reviewers were invited. In total, the accepted and rejected manuscripts were reviewed by an average of 3.9 and 3.2 expert reviewers, respectively. Moreover, obvious violations of the rules and regulations relating to careless or inappropriate peer review of a manuscript, committed by the editorial board and other staff, were identified. Nine blocks of activity in the authentic dataset were not completely compatible with the activities defined in the master model. Also, five of the activity traces were not correctly enabled, and seven activities were missed within the online submission system. Finally, in dealing with the feedback received from the first and third reviewers, the conference committee members and the organizers did not attend to those comments in a timely manner.
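The conformance-checking step can be illustrated with a minimal sketch that flags transitions absent from the master model; the activity names and allowed transitions below are invented stand-ins for the conference's pre-defined review map.

```python
# Minimal conformance sketch: flag events whose transition is not
# allowed by the master model. The transition set is a toy stand-in.
allowed = {
    ("submit", "assign_reviewers"),
    ("assign_reviewers", "review"),
    ("review", "decide"),
    ("review", "invite_extra_reviewer"),
    ("invite_extra_reviewer", "review"),
    ("decide", "notify"),
}

def deviations(trace):
    """Return the (from, to) steps in a case that violate the model."""
    return [(a, b) for a, b in zip(trace, trace[1:]) if (a, b) not in allowed]

case = ["submit", "assign_reviewers", "review", "notify"]  # skipped "decide"
print(deviations(case))   # -> [('review', 'notify')]
```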
Modeling functional neuroanatomy for an anatomy information system.
Niggemann, Jörg M; Gebert, Andreas; Schulz, Stefan
2008-01-01
Existing neuroanatomical ontologies, databases and information systems, such as the Foundational Model of Anatomy (FMA), represent outgoing connections from brain structures, but cannot represent the "internal wiring" of structures and as such, cannot distinguish between different independent connections from the same structure. Thus, a fundamental aspect of Neuroanatomy, the functional pathways and functional systems of the brain such as the pupillary light reflex system, is not adequately represented. This article identifies underlying anatomical objects which are the source of independent connections (collections of neurons) and uses these as basic building blocks to construct a model of functional neuroanatomy and its functional pathways. The basic representational elements of the model are unnamed groups of neurons or groups of neuron segments. These groups, their relations to each other, and the relations to the objects of macroscopic anatomy are defined. The resulting model can be incorporated into the FMA. The capabilities of the presented model are compared to the FMA and the Brain Architecture Management System (BAMS). Internal wiring as well as functional pathways can correctly be represented and tracked. This model bridges the gap between representations of single neurons and their parts on the one hand and representations of spatial brain structures and areas on the other hand. It is capable of drawing correct inferences on pathways in a nervous system. The object and relation definitions are related to the Open Biomedical Ontology effort and its relation ontology, so that this model can be further developed into an ontology of neuronal functional systems.
Performance and Architecture Lab Modeling Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-06-19
Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into subproblems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this link makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program behavior. The model -- an executable program -- is a hierarchical composition of annotation functions, synthesized functions, statistics for runtime values, and performance measurements.
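The notion of a generated model as an executable program whose parts mirror annotated code blocks can be sketched as follows. This only illustrates the idea of hierarchical composition; the function names and cost expressions are hypothetical, not Palm's actual interface:

```python
# Sketch: an executable performance model composed hierarchically, with each
# function standing in for one annotated block. All names and costs invented.

def model_inner_loop(n):
    # analytic cost of a hypothetical annotated inner block: c * n seconds
    return 2.0e-9 * n

def model_outer_phase(n, iters):
    # hierarchical composition: the parent block's cost is built from its children
    per_iter_overhead = 1.0e-7
    return iters * (per_iter_overhead + model_inner_loop(n))

def model_application(n, iters):
    # the root of the model corresponds to the whole annotated application
    return model_outer_phase(n, iters)

print("predicted runtime [s]:", model_application(n=10**6, iters=100))
```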
A Model of Workflow Composition for Emergency Management
NASA Astrophysics Data System (ADS)
Xin, Chen; Bin-ge, Cui; Feng, Zhang; Xue-hui, Xu; Shan-shan, Fu
Commonly used workflow technology is not flexible enough to deal with concurrent emergency situations. The paper proposes a novel model for defining emergency plans, in which workflow segments appear as a constituent part. A formal abstraction, which contains four operations, is defined to compose workflow segments under constraint rules. A software system for constructing and composing business process resources has been implemented and integrated into the Emergency Plan Management Application System.
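The abstract does not name the four composition operations; sequence, parallel (AND), choice (XOR) and iteration are a common minimal set and are assumed in the sketch below, along with all segment names:

```python
# Sketch of composing workflow segments into an emergency plan. The four
# operations (sequence, parallel, choice, loop) are assumed, not the paper's.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Segment:
    name: str
    op: str = "atomic"                 # "seq" | "and" | "xor" | "loop" | "atomic"
    children: List["Segment"] = field(default_factory=list)

def seq(*parts):    return Segment("seq", "seq", list(parts))
def par(*parts):    return Segment("and", "and", list(parts))
def choice(*parts): return Segment("xor", "xor", list(parts))
def loop(part):     return Segment("loop", "loop", [part])

# hypothetical plan: evacuate OR pump water, in parallel with alerting,
# then repeat damage assessment until the incident is closed
plan = seq(par(choice(Segment("evacuate"), Segment("pump_water")),
               Segment("alert_responders")),
           loop(Segment("assess_damage")))
print(plan)
```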
A parsimonious land data assimilation system for the SMAP/GPM satellite era
USDA-ARS's Scientific Manuscript database
Land data assimilation systems typically require complex parameterizations in order to: define required observation operators, quantify observing/forecasting errors and calibrate a land surface assimilation model. These parameters are commonly defined in an arbitrary manner and, if poorly specified,...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ernest A. Mancini
Characterization of stratigraphic sequences (T-R cycles or sequences) included outcrop studies, well log analysis and seismic reflection interpretation. These studies were performed by researchers at the University of Alabama, Wichita State University and McGill University. The outcrop, well log and seismic characterization studies were used to develop a depositional sequence model, a T-R cycle (sequence) model, and a sequence stratigraphy predictive model. The sequence stratigraphy predictive model developed in this study is based primarily on the modified T-R cycle (sequence) model. The T-R cycle (sequence) model, using transgressive and regressive systems tracts and aggrading, backstepping, and infilling intervals or sections, was found to be the most appropriate sequence stratigraphy model for the strata in the onshore interior salt basins of the Gulf of Mexico for improving the imaging, detection and delineation of petroleum stratigraphic traps and specific reservoir facies. The known petroleum reservoirs of the Mississippi Interior and North Louisiana Salt Basins were classified using T-R cycle (sequence) terminology. The transgressive backstepping reservoirs have been the most productive of oil, and the transgressive backstepping and regressive infilling reservoirs have been the most productive of gas. Exploration strategies were formulated using the sequence stratigraphy predictive model and the classification of the known petroleum reservoirs using T-R cycle (sequence) terminology. The well log signatures and seismic reflector patterns were determined to be distinctive for the aggrading, backstepping and infilling sections of the T-R cycle (sequence), and as such, well log and seismic data are useful for recognizing and defining potential reservoir facies. The use of the sequence stratigraphy predictive model, in combination with knowledge of how the distinctive characteristics of the T-R systems tracts and their subdivisions are expressed in well log patterns and seismic reflection configurations and terminations, improves the ability to identify and define the limits of potential stratigraphic traps, the stratigraphic component of combination stratigraphic and structural traps, and the associated continental, coastal plain and marine potential reservoir facies. The assessment of the underdeveloped and undiscovered reservoirs and resources in the Mississippi Interior and North Louisiana Salt Basins resulted in the confirmation of the Monroe Uplift as a feature characterized by a major regional unconformity, which serves as a combination stratigraphic and structural trap with a significant stratigraphic component, and in the characterization of a developing play in southwest Alabama, which involves a stratigraphic trap located updip near the pinchout of the potential reservoir facies. Potential undiscovered and underdeveloped reservoirs in the onshore interior salt basins are identified as Jurassic and Cretaceous aggrading continental and coastal, backstepping nearshore marine and marine shelf, and infilling fluvial, deltaic, coastal plain and marine shelf.
E-learning: Web-based education.
Sajeva, Marco
2006-12-01
This review introduces state-of-the-art Web-based education and shows how the e-learning model can be applied to an anaesthesia department using Open Source solutions, as well as to lifelong learning programs, as is happening in several European research projects. The definition of the term e-learning is still a work in progress, because technologies evolve every day and it is difficult to improve teaching methodologies or to adapt traditional methods to a new or already existing educational model. The European Community is funding several research projects to define the new common marketplace for tomorrow's educational system; this is leading to new frontiers, such as virtual Erasmus inter-exchange programs based on e-learning. The first step when adapting a course to e-learning is to re-define the educational/learning model adopted: cooperative learning and tutoring are the two key concepts. This means that traditional lecture notes, books and exercises are no longer effective; teaching files must use rich multimedia content and have to be developed using the new media. This can lead to several pitfalls that can be avoided with an accurate design phase.
Quantum physics with non-Hermitian operators
NASA Astrophysics Data System (ADS)
Bender, Carl; Fring, Andreas; Günther, Uwe; Jones, Hugh
2012-11-01
The main motivation behind the call for this special issue was to gather recent results, developments and open problems in quantum physics with non-Hermitian operators. There have been previous special issues in this journal [1, 2] and elsewhere on this subject. The intention of this issue is to reflect the current state of this rapidly-developing field. It has therefore been open to all contributions containing new results on non-Hermitian theories that are explicitly PT-symmetric and/or pseudo-Hermitian or quasi-Hermitian. In the last decade these types of systems have proved to be viable self-consistent physical theories with well defined unitary time-evolution and real spectra. As the large number of responses demonstrates, this is a rapidly evolving field of research. A consensus has been reached regarding most of the fundamental problems, and the general ideas and techniques are now readily being employed in many areas of physics. Nonetheless, this issue still contains some treatments of a more general nature regarding the spectral analysis of these models, in particular, the physics of the exceptional points, the breaking of the PT-symmetry, an interpretation of negative energies and the consistent implementation of the WKB analysis. This issue also contains a treatment of a scattering theory associated with these types of systems, weak measurements, coherent states, decoherence, unbounded metric operators and the inclusion of domain issues to obtain well defined self-adjoint theories. Contributions in the form of applications of the general ideas include: studies of classical shock-waves and tunnelling, supersymmetric models, spin chain models, models with ring structure, random matrix models, the Pauli equation, the nonlinear Schrödinger equation, quasi-exactly solvable models, integrable models such as the Calogero model, Bose-Einstein condensates, thermodynamics, nonlinear oligomers, quantum catastrophes, the Landau-Zener problem and pseudo-Fermions. Applications close to experimental realization are proposed in optics, including short light pulse models, waveguides and laser systems, and also in electronics. We hope that this issue will become a valuable reference and inspiration for the broader scientific community working in mathematical and theoretical physics. References [1] Fring A, Jones H F and Znojil M (ed) 2008 J. Phys. A: Math. Theor. 41 240301 [2] Geyer H, Heiss D and Znojil M (ed) 2006 J. Phys. A: Math. Gen. 39 9963
Adams, L Garry; Khare, Sangeeta; Lawhon, Sara D; Rossetti, Carlos A; Lewin, Harris A; Lipton, Mary S; Turse, Joshua E; Wylie, Dennis C; Bai, Yu; Drake, Kenneth L
2011-09-22
The aim of research on infectious diseases is their prevention, and brucellosis and salmonellosis as such are classic examples of worldwide zoonoses for application of a systems biology approach for enhanced rational vaccine development. When used optimally, vaccines prevent disease manifestations, reduce transmission of disease, decrease the need for pharmaceutical intervention, and improve the health and welfare of animals, as well as indirectly protecting against zoonotic diseases of people. Advances in the last decade or so using comprehensive systems biology approaches linking genomics, proteomics, bioinformatics, and biotechnology with immunology, pathogenesis and vaccine formulation and delivery are expected to enable enhanced approaches to vaccine development. The goal of this paper is to evaluate the role of computational systems biology analysis of host:pathogen interactions (the interactome) as a tool for enhanced rational design of vaccines. Systems biology is bringing a new, more robust approach to veterinary vaccine design based upon a deeper understanding of the host-pathogen interactions and its impact on the host's molecular network of the immune system. A computational systems biology method was utilized to create interactome models of the host responses to Brucella melitensis (BMEL), Mycobacterium avium paratuberculosis (MAP), Salmonella enterica Typhimurium (STM), and a Salmonella mutant (isogenic ΔsipA, sopABDE2) and linked to the basis for rational development of vaccines for brucellosis and salmonellosis as reviewed by Adams et al. and Ficht et al. [1,2]. A bovine ligated ileal loop biological model was established to capture the host gene expression response at multiple time points post infection. New methods based on Dynamic Bayesian Network (DBN) machine learning were employed to conduct a comparative pathogenicity analysis of 219 signaling and metabolic pathways and 1620 gene ontology (GO) categories that defined the host's biosignatures to each infectious condition. Through this DBN computational approach, the method identified significantly perturbed pathways and GO category groups of genes that define the pathogenicity signatures of the infectious agent. Our preliminary results provide deeper understanding of the overall complexity of host innate immune response as well as the identification of host gene perturbations that defines a unique host temporal biosignature response to each pathogen. The application of advanced computational methods for developing interactome models based on DBNs has proven to be instrumental in elucidating novel host responses and improved functional biological insight into the host defensive mechanisms. Evaluating the unique differences in pathway and GO perturbations across pathogen conditions allowed the identification of plausible host-pathogen interaction mechanisms. Accordingly, a systems biology approach to study molecular pathway gene expression profiles of host cellular responses to microbial pathogens holds great promise as a methodology to identify, model and predict the overall dynamics of the host-pathogen interactome. Thus, we propose that such an approach has immediate application to the rational design of brucellosis and salmonellosis vaccines. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Matha, Denis; Sandner, Frank; Schlipf, David
2014-12-01
Design verification of wind turbines is performed by simulation of design load cases (DLC) defined in the IEC 61400-1 and -3 standards or equivalent guidelines. Due to the resulting large number of necessary load simulations, a method is presented here to significantly reduce the computational effort for DLC simulations by introducing a reduced nonlinear model and simplified hydro- and aerodynamics. The advantage of the formulation is that the nonlinear ODE system contains only basic mathematical operations and no iterations or internal loops, which makes it very computationally efficient. Global turbine extreme and fatigue loads such as rotor thrust, tower base bending moment and mooring line tension, as well as platform motions, are outputs of the model. They can be used to identify critical and less critical load situations to be then analysed with a higher-fidelity tool, and so speed up the design process. Results from these reduced-model DLC simulations are presented and compared to higher-fidelity models. Results in the frequency and time domains, as well as extreme and fatigue load predictions, demonstrate that good agreement between the reduced and advanced models is achieved, making it possible to efficiently exclude less critical DLC simulations and to identify the most critical subset of cases for a given design. Additionally, the model is applicable to brute-force optimization of floater control system parameters.
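As a flavour of what such a reduced nonlinear model looks like, the sketch below integrates a toy two-state surge oscillator with quadratic hydrodynamic drag and a static thrust law, using only basic operations in the right-hand side. The structure and every parameter are invented for illustration and are far simpler than the paper's model:

```python
# Toy reduced model: 2-state surge dynamics with quadratic (Morison-like)
# drag and a static aerodynamic thrust curve. All values are illustrative.

import numpy as np
from scipy.integrate import solve_ivp

M, K, Cq = 1.2e7, 4.0e5, 8.0e5      # mass [kg], stiffness [N/m], quadratic drag

def thrust(v_rel):
    # toy static thrust law, 0.5 * rho * A * Ct * v^2
    return 0.5 * 1.225 * 1.2e4 * 0.8 * v_rel**2

def rhs(t, y, v_wind):
    x, xdot = y
    f = thrust(v_wind - xdot) - K * x - Cq * xdot * abs(xdot)
    return [xdot, f / M]

sol = solve_ivp(rhs, (0.0, 600.0), [0.0, 0.0], args=(12.0,), max_step=0.1)
print("max surge [m]:", sol.y[0].max())
```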
Skin temperature increase mediated by wearable, long duration, low-intensity therapeutic ultrasound
NASA Astrophysics Data System (ADS)
Langer, Matthew D.; Huang, Wenyi; Ghanem, Angi; Guo, Yuan; Lewis, George K.
2017-03-01
One of the safety concerns with the delivery of therapeutic ultrasound is overheating of the transducer-skin interface due to poor or improper coupling. The objective of this research was to define a model that could be used to calculate the heating in the skin caused by a novel, wearable, long-duration ultrasound device. This model was used to determine that the maximum heating in the skin remained below the minimum threshold necessary to cause thermal injury over multiple hours of use. In addition to this model data, a human clinical study used wire thermocouples on the skin surface to measure heating characteristics during treatment with the sustained ultrasound system. Parametric analysis of the model determined that the maximum temperature increase occurs at the surface of the skin and ranged from 40 to 41.8 °C when perfusion was taken into account. The clinical data agreed well with the model predictions. The average steady-state temperature observed across all 44 subjects was 40 °C. The maximum temperature observed was less than 44 °C, which is clinically safe for over 5 hours of human skin contact. The resultant clinical temperature data paired well with the model data, suggesting the model can be used for future transducer and ultrasound system design simulation. As a result, the device was validated for thermal safety for typical users and use conditions.
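The abstract does not spell out the model equations; a common choice for skin heating with perfusion is the Pennes bioheat equation. A minimal 1-D finite-difference sketch under that assumption, with generic tissue parameters rather than the paper's values:

```python
# 1-D explicit finite-difference sketch of the Pennes bioheat equation with a
# constant ultrasound-derived heat flux at the skin surface. All parameter
# values are generic tissue numbers chosen for illustration.

import numpy as np

k, rho, c = 0.5, 1050.0, 3600.0                    # tissue conductivity, density, heat capacity
wb, rho_b, cb, Ta = 1.0e-3, 1050.0, 3800.0, 37.0   # perfusion [1/s], blood props, arterial T
q_surf = 250.0                                     # heat flux at the skin surface [W/m^2]

nx, dx, dt = 100, 2.0e-4, 0.02                     # 2 cm of tissue, explicit time step
alpha = k / (rho * c)
assert alpha * dt / dx**2 < 0.5                    # explicit-scheme stability

T = np.full(nx, 37.0)
for _ in range(int(1800 / dt)):                    # 30 minutes of heating
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    T += dt * (alpha * lap + wb * rho_b * cb * (Ta - T) / (rho * c))
    T[0] = T[1] + q_surf * dx / k                  # prescribed flux at the surface
    T[-1] = 37.0                                   # deep tissue held at core temperature

print("surface temperature after 30 min [degC]:", round(T[0], 1))
```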
Models for the indices of thermal comfort
Adrian, Streinu-Cercel; Sergiu, Costoiu; Maria, Mârza; Anca, Streinu-Cercel; Monica, Mârza
2008-01-01
The current paper proposes the analysis and extended formulation required for establishing decisions in the management of the national medical system from the point of view of quality and efficiency, namely: conceiving models for the indices of thermal comfort, defining the predicted mean vote (on the thermal sensation scale) "PMV", defining the metabolism "M", modeling heat transfer between the human body and the environment, defining the predicted percentage of dissatisfied people "PPD", and defining all indices of thermal comfort. PMID:20108461
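Of the indices listed, PPD has a standard closed form as a function of PMV (the Fanger/ISO 7730 relation); a one-line implementation for reference, assuming that standard relation is the one intended:

```python
import math

def ppd(pmv: float) -> float:
    """Predicted Percentage of Dissatisfied, standard Fanger/ISO 7730 relation."""
    return 100.0 - 95.0 * math.exp(-0.03353 * pmv**4 - 0.2179 * pmv**2)

print(ppd(0.0))  # ~5% dissatisfied even at a neutral thermal sensation
print(ppd(1.0))  # ~26% at 'slightly warm'
```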
NASA Astrophysics Data System (ADS)
Valdes, Raymond
The characterization of thermal barrier coating (TBC) systems is increasingly important because they enable gas turbine engines to operate at high temperatures and efficiency. Phase of photothermal emission analysis (PopTea) has been developed to analyze the thermal behavior of the ceramic top-coat of TBCs, as a nondestructive and noncontact method for measuring thermal diffusivity and thermal conductivity. Most TBC applications are on actively cooled high-temperature turbine blades, which makes it difficult to precisely model heat transfer in the metallic subsystem. This reduces the ability of rote thermal modeling to reflect the actual physical conditions of the system and can lead to higher uncertainty in measured thermal properties. This dissertation investigates fundamental issues underpinning robust thermal property measurements that are adaptive to non-specific, complex, and evolving system characteristics using the PopTea method. A generic and adaptive subsystem PopTea thermal model was developed to account for complex geometry beyond a well-defined coating and substrate system. Without a priori knowledge of the subsystem characteristics, two different measurement techniques were implemented using the subsystem model. In the first technique, the properties of the subsystem were resolved as part of the PopTea parameter estimation algorithm; the second technique independently resolved the subsystem properties using a differential "bare" subsystem. The confidence in thermal properties measured using the generic subsystem model is similar to that from a standard PopTea measurement on a "well-defined" TBC system. Non-systematic bias-error on experimental observations in PopTea measurements due to generic thermal model discrepancies was also mitigated using a regression-based sensitivity analysis. The sensitivity analysis reported measurement uncertainty and was developed into a data reduction method to filter out these "erroneous" observations. It was found that the adverse impact of bias-error can be greatly reduced, leaving measurement observations with only random Gaussian noise in PopTea thermal property measurements. Quantifying the influence of the coating-substrate interface in PopTea measurements is important to resolving the thermal conductivity of the coating. However, the reduced significance of this interface in thicker coating systems can give rise to large uncertainties in thermal conductivity measurements. A first step towards improving PopTea measurements for such circumstances has been taken by implementing absolute temperature measurements using harmonically-sustained two-color pyrometry. Although the approach is promising, even small uncertainties in thermal emission observations were found to lead to significant noise in temperature measurements. However, PopTea analysis of bulk graphite samples was able to resolve their thermal conductivity to the expected literature values.
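The two-color pyrometry step mentioned at the end can be illustrated with the standard Wien-approximation ratio formula under a gray-body assumption; this is the textbook relation, not necessarily the dissertation's exact implementation:

```python
import math

C2 = 1.4388e-2  # second radiation constant [m*K]

def two_color_temperature(L1, L2, lam1, lam2, eps_ratio=1.0):
    """Temperature from the ratio of spectral radiances at two wavelengths,
    Wien approximation, gray body (eps_ratio = eps1/eps2 assumed ~1)."""
    num = C2 * (1.0 / lam2 - 1.0 / lam1)
    den = math.log((L1 / L2) * (lam1 / lam2) ** 5 / eps_ratio)
    return num / den

# sanity check: radiances generated from Wien's law at 1200 K are recovered
lam1, lam2, T = 0.9e-6, 1.05e-6, 1200.0
wien = lambda lam: lam ** -5 * math.exp(-C2 / (lam * T))
print(two_color_temperature(wien(lam1), wien(lam2), lam1, lam2))  # ~1200.0
```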
Relaxation limit of a compressible gas-liquid model with well-reservoir interaction
NASA Astrophysics Data System (ADS)
Solem, Susanne; Evje, Steinar
2017-02-01
This paper deals with the relaxation limit of a two-phase compressible gas-liquid model which contains a pressure-dependent well-reservoir interaction term of the form q (P_r - P) where q>0 is the rate of the pressure-dependent influx/efflux of gas, P is the (unknown) wellbore pressure, and P_r is the (known) surrounding reservoir pressure. The model can be used to study gas-kick flow scenarios relevant for various wellbore operations. One extreme case is when the wellbore pressure P is largely dictated by the surrounding reservoir pressure P_r. Formally, this model is obtained by deriving the limiting system as the relaxation parameter q in the full model tends to infinity. The main purpose of this work is to understand to what extent this case can be represented by a well-defined mathematical model for a fixed global time T>0. Well-posedness of the full model has been obtained in Evje (SIAM J Math Anal 45(2):518-546, 2013). However, as the estimates for the full model are dependent on the relaxation parameter q, new estimates must be obtained for the equilibrium model to ensure existence of solutions. By means of appropriate a priori assumptions and some restrictions on the model parameters, necessary estimates (low order and higher order) are obtained. These estimates that depend on the global time T together with smallness assumptions on the initial data are then used to obtain existence of solutions in suitable Sobolev spaces.
LINEAR - DERIVATION AND DEFINITION OF A LINEAR AIRCRAFT MODEL
NASA Technical Reports Server (NTRS)
Duke, E. L.
1994-01-01
The Derivation and Definition of a Linear Model program, LINEAR, provides the user with a powerful and flexible tool for the linearization of aircraft aerodynamic models. LINEAR was developed to provide a standard, documented, and verified tool to derive linear models for aircraft stability analysis and control law design. Linear system models define the aircraft system in the neighborhood of an analysis point and are determined by the linearization of the nonlinear equations defining vehicle dynamics and sensors. LINEAR numerically determines a linear system model using nonlinear equations of motion and a user supplied linear or nonlinear aerodynamic model. The nonlinear equations of motion used are six-degree-of-freedom equations with stationary atmosphere and flat, nonrotating earth assumptions. LINEAR is capable of extracting both linearized engine effects, such as net thrust, torque, and gyroscopic effects and including these effects in the linear system model. The point at which this linear model is defined is determined either by completely specifying the state and control variables, or by specifying an analysis point on a trajectory and directing the program to determine the control variables and the remaining state variables. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to provide easy selection of state, control, and observation variables to be used in a particular model. Thus, the order of the system model is completely under user control. Further, the program provides the flexibility of allowing alternate formulations of both the state and observation equations. Data describing the aircraft and the test case is input to the program through a terminal or formatted data files. All data can be modified interactively from case to case. The aerodynamic model can be defined in two ways: a set of nondimensional stability and control derivatives for the flight point of interest, or a full non-linear aerodynamic model as used in simulations. LINEAR is written in FORTRAN and has been implemented on a DEC VAX computer operating under VMS with a virtual memory requirement of approximately 296K of 8 bit bytes. Both an interactive and batch version are included. LINEAR was developed in 1988.
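The core operation, extracting a linear state-space model from nonlinear dynamics about an analysis point, can be sketched with central-difference Jacobians. This is an illustrative Python analogue of the idea, not the FORTRAN program itself, and the toy dynamics are invented:

```python
# Illustrative numerical linearization: A = df/dx, B = df/du about a trim
# point, for any nonlinear state derivative function xdot = f(x, u).

import numpy as np

def jacobians(f, x0, u0, eps=1e-6):
    """Central-difference A and B matrices of xdot = f(x, u) at (x0, u0)."""
    n, m = len(x0), len(u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

# toy example: pendulum-like pitch dynamics with one control input
f = lambda x, u: np.array([x[1], -4.0 * np.sin(x[0]) - 0.5 * x[1] + 2.0 * u[0]])
A, B = jacobians(f, np.zeros(2), np.zeros(1))
print(A)
print(B)
```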
Delineating parameter unidentifiabilities in complex models
NASA Astrophysics Data System (ADS)
Raman, Dhruva V.; Anderson, James; Papachristodoulou, Antonis
2017-03-01
Scientists use mathematical modeling as a tool for understanding and predicting the properties of complex physical systems. In highly parametrized models there often exist relationships between parameters over which model predictions are identical, or nearly identical. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, as well as the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast time-scale subsystems, as well as the regimes in parameter space over which such approximations are valid. We base our algorithm on a quantification of regional parametric sensitivity that we call 'multiscale sloppiness'. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher information matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even with simple (non)linear systems. Our algorithm can provide a tractable alternative. We finally apply our methods to a large-scale, benchmark systems biology model of nuclear factor (NF)-κB, uncovering unidentifiabilities.
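The local analysis that the authors show to be insufficient, yet which remains the usual starting point, is the eigen-spectrum of the Fisher information matrix J^T J built from the parameter-output Jacobian: a wide eigenvalue spread flags locally sloppy parameter combinations. The two-exponential toy model below is an assumption for illustration:

```python
# Local identifiability screen via the Fisher information J^T J, where
# J[i, k] = d y(t_i) / d theta_k. Tiny eigenvalues flag sloppy directions.
# Toy model: y(t) = a1*exp(-k1*t) + a2*exp(-k2*t) with nearly equal rates.

import numpy as np

def output(theta, t):
    a1, k1, a2, k2 = theta
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

def jacobian(theta, t, eps=1e-6):
    J = np.zeros((len(t), len(theta)))
    for k in range(len(theta)):
        d = np.zeros(len(theta)); d[k] = eps
        J[:, k] = (output(theta + d, t) - output(theta - d, t)) / (2 * eps)
    return J

t = np.linspace(0.0, 10.0, 50)
theta = np.array([1.0, 0.30, 1.0, 0.31])   # nearly degenerate decay rates
J = jacobian(theta, t)
print(np.linalg.eigvalsh(J.T @ J))         # wide spread => sloppy directions
```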
NASA Astrophysics Data System (ADS)
Bongartz, K.; Flügel, W. A.
2003-04-01
In the joint research project “Development of an integrated methodology for the sustainable management of river basins: The Saale River Basin example”, coordinated by the Centre for Environmental Research (UFZ), concepts and tools for the integrated management of large river basins are developed and applied to the Saale river basin. The ultimate objective of the project is to contribute to the holistic assessment and benchmarking approaches in water resource planning, as required by the European Water Framework Directive. The study presented here deals (1) with the development of a river basin information and modelling system, and (2) with the refinement of a regionalisation approach adapted for integrated basin modelling. The approach combines a user-friendly basin disaggregation method preserving the catchment’s physiographic heterogeneity with a process-oriented hydrological basin assessment for scale-bridging integrated modelling. The well-tested regional distribution concept of Response Units (RUs) will be enhanced by landscape metrics and decision support tools for objective, scale-independent and problem-oriented RU delineation to provide the spatial modelling entities for process-oriented and distributed simulation of vertical and lateral hydrological transport processes. On the basis of these RUs, suitable hydrological modelling approaches will be further developed, with particular attention to a more detailed simulation of the lateral surface and subsurface flows as well as the channel flow. This methodical enhancement of the well-recognised RU concept will be applied to the river basin of the Saale (catchment area: 23 179 km²) and validated by a nested catchment approach, which allows multi-response validation and estimation of the uncertainties of the modelling results. Integrated modelling of such a complex basin, strongly influenced by manifold human activities (reservoirs, agriculture, urban areas and industry), can only be achieved by coupling the various modelling approaches within a well-defined model framework system. The latter is interactively linked with a sophisticated geo-relational database (DB) serving all research teams involved in the project. This interactive linkage is a core element comprising an object-oriented, internet-based modelling framework system (MFS) for building interdisciplinary modelling applications and offering different analysis and visualisation tools.
Object-oriented analysis and design of a health care management information system.
Krol, M; Reich, D L
1999-04-01
We have created a prototype for a universal object-oriented model of a health care system compatible with the object-oriented approach used in version 3.0 of the HL7 standard for communication messages. A set of three models has been developed: (1) the Object Model describes the hierarchical structure of objects in a system--their identity, relationships, attributes, and operations; (2) the Dynamic Model represents the sequence of operations in time as a collection of state diagrams for object classes in the system; and (3) the Functional Model represents the transformation of data within a system by means of data flow diagrams. Within these models, we have defined major object classes of health care participants and their subclasses, associations, attributes and operators, states, and behavioral scenarios. We have also defined the major processes and subprocesses. The top-down design approach allows use, reuse, and cloning of standard components.
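A hypothetical fragment of such an object model, showing a participant superclass with subclasses and one operation; the classes, attributes and operations are illustrative, not the HL7 version 3.0 model:

```python
# Illustrative participant class hierarchy; names and attributes are invented.

class Participant:
    def __init__(self, participant_id, name):
        self.participant_id = participant_id
        self.name = name

class Patient(Participant):
    def __init__(self, participant_id, name):
        super().__init__(participant_id, name)
        self.medical_record = []          # attribute holding ordered events

class Provider(Participant):
    def order_test(self, patient, test):
        # an operation transforming system state, as in the Dynamic Model
        patient.medical_record.append({"test": test, "ordered_by": self.name})

doc = Provider("p01", "Dr. Reich")
pat = Patient("p02", "J. Doe")
doc.order_test(pat, "CBC")
print(pat.medical_record)
```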
Pulsar timing and general relativity
NASA Technical Reports Server (NTRS)
Backer, D. C.; Hellings, R. W.
1986-01-01
Techniques are described for accounting for relativistic effects in the analysis of pulsar signals. Design features of instrumentation used to achieve millisecond accuracy in the signal measurements are discussed. The accuracy of the data permits modeling the pulsar physical characteristics from the natural glitches in the emissions. Relativistic corrections are defined to adjust for the pulsar's motion in its own spacetime coordinate system relative to the terrestrial coordinate system, for the earth's motion, and for the gravitational potentials of solar system bodies. Modifications of the model to allow for a binary pulsar system are outlined, including treatment of the system as a point mass. Finally, a quadrupole model is presented for gravitational radiation and techniques are defined for using pulsars in the search for gravitational waves.
A heuristic mathematical model for the dynamics of sensory conflict and motion sickness
NASA Technical Reports Server (NTRS)
Oman, C. M.
1982-01-01
By consideration of the information processing task faced by the central nervous system in estimating body spatial orientation and in controlling active body movement using an internal model referenced control strategy, a mathematical model for sensory conflict generation is developed. The model postulates a major dynamic functional role for sensory conflict signals in movement control, as well as in sensory-motor adaptation. It accounts for the role of active movement in creating motion sickness symptoms in some experimental circumstances, and in alleviating them in others. The relationship between motion sickness produced by sensory rearrangement and that resulting from external motion disturbances is explicitly defined. A nonlinear conflict averaging model is proposed which describes dynamic aspects of experimentally observed subjective discomfort sensation, and suggests resulting behaviours. The model admits several possibilities for adaptive mechanisms which do not involve internal model updating. Further systematic efforts to experimentally refine and validate the model are indicated.
A Mitocentric View of Parkinson’s Disease
Haelterman, Nele A.; Yoon, Wan Hee; Sandoval, Hector; Jaiswal, Manish; Shulman, Joshua M.; Bellen, Hugo J.
2015-01-01
Parkinson’s disease (PD) is a common neurodegenerative disease, yet the underlying causative molecular mechanisms are ill defined. Numerous observations based on drug studies and mutations in genes that cause PD point to a complex set of rather subtle mitochondrial defects that may be causative. Indeed, intensive investigation of these genes in model organisms has revealed roles in the electron transport chain, mitochondrial protein homeostasis, mitophagy, and the fusion and fission of mitochondria. Here, we attempt to synthesize results from experimental studies in diverse systems to define the precise function of these PD genes, as well as their interplay with other genes that affect mitochondrial function. We propose that subtle mitochondrial defects in combination with other insults trigger the onset and progression of disease, in both familial and idiopathic PD. PMID:24821430
NASA Astrophysics Data System (ADS)
Bachmann, C. E.; Wiemer, S.; Woessner, J.; Hainzl, S.
2011-08-01
Geothermal energy is becoming an important clean energy source; however, the stimulation of a reservoir for an Enhanced Geothermal System (EGS) is associated with seismic risk due to induced seismicity. Seismicity occurring due to water injection at depth has to be well recorded and monitored. To mitigate the seismic risk of a damaging event, an appropriate alarm system needs to be in place for each individual experiment. In recent experiments, the so-called traffic-light alarm system, based on public response, local magnitude and peak ground velocity, was used. We aim to improve the pre-defined alarm system by introducing a probability-based approach; we retrospectively model the ongoing seismicity in real time with multiple statistical forecast models and then translate the forecast to seismic hazard in terms of probabilities of exceeding a ground motion intensity level. One class of models accounts for the water injection rate, the main parameter that can be controlled by the operators during an experiment. By translating the models into time-varying probabilities of exceeding various intensity levels, we provide tools which are well understood by the decision makers and can be used to determine non-exceedance thresholds during a reservoir stimulation; this, however, remains an entrepreneurial or political decision of the responsible project coordinators. We introduce forecast models based on the data set of an EGS experiment in the city of Basel. Between 2006 December 2 and 8, approximately 11 500 m3 of water was injected into a 5-km-deep well at high pressures. A six-sensor borehole array was installed by the company Geothermal Explorers Limited (GEL) at depths between 300 and 2700 m around the well to monitor the induced seismicity. The network recorded approximately 11 200 events during the injection phase, more than 3500 of which were located. With the traffic-light system, actions were implemented after an ML 2.7 event, the water injection was reduced and then stopped after another ML 2.5 event. A few hours later, an earthquake with ML 3.4, felt within the city, occurred, which led to bleed-off of the well. A risk study was later issued with the outcome that the experiment could not be resumed. We analyse the statistical features of the sequence and show that the sequence is well modelled with the Omori-Utsu law following the termination of water injection. Based on this model, the sequence will take 31+29/-14 years to reach the background level. We introduce statistical models based on Reasenberg and Jones and Epidemic Type Aftershock Sequence (ETAS) models, commonly used to model aftershock sequences. We compare and test different model setups to simulate the sequences, varying the number of fixed and free parameters. For one class of the ETAS models, we account for the flow rate at the injection borehole. We test the models against the observed data with standard likelihood tests and find the ETAS model accounting for the flow rate to perform best. Such a model may in future serve as a valuable tool for designing probabilistic alarm systems for EGS experiments.
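The Omori-Utsu decay used to model the post-shut-in sequence has the standard form n(t) = K/(c + t)^p, and the quoted duration estimate amounts to solving n(t) = r_bg for a background rate r_bg. A sketch with invented parameter values, not the Basel sequence's fitted ones:

```python
# Omori-Utsu aftershock decay n(t) = K / (c + t)**p and the time at which the
# rate falls to a background level. K, c, p and r_bg are illustrative values.

def omori_rate(t, K, c, p):
    return K / (c + t) ** p

def time_to_background(K, c, p, r_bg):
    # solve K / (c + t)**p = r_bg for t
    return (K / r_bg) ** (1.0 / p) - c

K, c, p = 200.0, 0.05, 1.1     # events/day scale, offset [days], decay exponent
r_bg = 0.01                    # background rate [events/day]
t = time_to_background(K, c, p, r_bg)
print(f"rate reaches background after ~{t / 365.25:.0f} years")  # ~22 years here
```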
Using A Model-Based Systems Engineering Approach For Exploration Medical System Development
NASA Technical Reports Server (NTRS)
Hanson, A.; Mindock, J.; McGuire, K.; Reilly, J.; Cerro, J.; Othon, W.; Rubin, D.; Urbina, M.; Canga, M.
2017-01-01
NASA's Human Research Program's Exploration Medical Capabilities (ExMC) element is defining the medical system needs for exploration class missions. ExMC's Systems Engineering (SE) team will play a critical role in successful design and implementation of the medical system into exploration vehicles. The team's mission is to "Define, develop, validate, and manage the technical system design needed to implement exploration medical capabilities for Mars and test the design in a progression of proving grounds." Development of the medical system is being conducted in parallel with exploration mission architecture and vehicle design development. Successful implementation of the medical system in this environment will require a robust systems engineering approach to enable technical communication across communities to create a common mental model of the emergent engineering and medical systems. Model-Based Systems Engineering (MBSE) improves shared understanding of system needs and constraints between stakeholders and offers a common language for analysis. The ExMC SE team is using MBSE techniques to define operational needs, decompose requirements and architecture, and identify medical capabilities needed to support human exploration. Systems Modeling Language (SysML) is the specific language the SE team is utilizing, within an MBSE approach, to model the medical system functional needs, requirements, and architecture. Modeling methods are being developed through the practice of MBSE within the team, and tools are being selected to support meta-data exchange as integration points to other system models are identified. Use of MBSE is supporting the development of relationships across disciplines and NASA Centers to build trust and enable teamwork, enhance visibility of team goals, foster a culture of unbiased learning and serving, and be responsive to customer needs. The MBSE approach to medical system design offers a paradigm shift toward greater integration between vehicle and the medical system and directly supports the transition of Earth-reliant ISS operations to the Earth-independent operations envisioned for Mars. Here, we describe the methods and approach to building this integrated model.
A simple model of hysteresis behavior using spreadsheet analysis
NASA Astrophysics Data System (ADS)
Ehrmann, A.; Blachowicz, T.
2015-01-01
Hysteresis loops occur in many scientific and technical problems, especially as the field-dependent magnetization of ferromagnetic materials, but also as stress-strain curves of materials measured by tensile tests including thermal effects, liquid-solid phase transitions, in cell biology, and in economics. While several mathematical models exist which aim to calculate hysteresis energies and other parameters, here we offer a simple model for a general hysteretic system, showing different hysteresis loops depending on the defined parameters. The calculation, which is based on basic spreadsheet analysis plus simple macro code, can be used by students to understand how these systems work and how the parameters influence the reactions of the system to an external field. Importantly, in the step-by-step mode, each change of the system state, compared to the last step, becomes visible. The simple program can be developed further through several changes and additions, enabling the building of a tool which is capable of answering real physical questions in the broad field of magnetism as well as in other scientific areas in which similar hysteresis loops occur.
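A minimal Python analogue of such a step-by-step calculation: a single bistable unit whose state switches when the external field crosses a coercive threshold, swept through one full field cycle. The threshold and sweep values are arbitrary illustrations, not the paper's spreadsheet:

```python
# Minimal hysteresis model: one bistable unit ("hysteron") that switches up at
# H >= Hc and down at H <= -Hc; otherwise the previous state persists, which
# is exactly what produces the loop.

import numpy as np

Hc = 0.4                                  # switching (coercive) field
H_up = np.linspace(-1.0, 1.0, 101)
H_cycle = np.concatenate([H_up, H_up[::-1]])

m, loop = -1.0, []
for H in H_cycle:
    if H >= Hc:
        m = 1.0                           # switch up
    elif H <= -Hc:
        m = -1.0                          # switch down
    loop.append((H, m))                   # state persists between thresholds

for H, m in loop[::20]:                   # coarse step-by-step trace
    print(f"H={H:+.2f}  M={m:+.0f}")
```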
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poppeliers, Christian
Matlab code for the inversion of frequency-domain, electrostatic geophysical data in terms of scalar scattering amplitudes in the subsurface. The data are assumed to be the difference between two measurements: electric field measurements prior to the injection of an electrically conductive proppant, and the electric field measurements after proppant injection. The proppant is injected into the subsurface via a well, and its purpose is to prop open fractures created by hydraulic fracturing. In both cases the illuminating electric field is assumed to be a vertically incident plane wave. The inversion strategy is to solve a linear system of equations, where each equation defines the amplitude of a candidate scattering volume. The model space is defined by M potential scattering locations, and the frequency-domain data (comprising k frequencies) are recorded on N receivers. The solution thus solves a kN x M system of linear equations for M scalar amplitudes within the user-defined solution space. Practical Application: Oilfield environments where observed electrostatic geophysical data can reasonably be assumed to be scattered by subsurface proppant volumes. No field validation examples have so far been provided.
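The kN x M solve at the heart of the inversion strategy can be shown generically with a least-squares sketch (in Python rather than Matlab). The sensitivity matrix here is a random placeholder; the real kernel comes from the electrostatic physics of the survey:

```python
# Generic form of the inversion: stack k frequencies x N receivers of
# differenced field data into a (k*N) vector d, build a sensitivity matrix G
# mapping M candidate scatterer amplitudes to data, and solve G a ~= d.

import numpy as np

k, N, M = 8, 24, 50
rng = np.random.default_rng(0)
G = rng.normal(size=(k * N, M))              # placeholder sensitivity kernel
a_true = np.zeros(M)
a_true[[7, 31]] = 1.0                        # two "proppant" scatterers
d = G @ a_true + 0.01 * rng.normal(size=k * N)

a_est, *_ = np.linalg.lstsq(G, d, rcond=None)
print(np.argsort(a_est)[-2:])                # indices of the strongest scatterers
```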
Tracing the Rationale Behind UML Model Change Through Argumentation
NASA Astrophysics Data System (ADS)
Jureta, Ivan J.; Faulkner, Stéphane
Neglecting traceability—i.e., the ability to describe and follow the life of a requirement—is known to entail misunderstanding and miscommunication, leading to the engineering of poor quality systems. Following the simple principles that (a) changes to UML model instances ought to be justified to the stakeholders, (b) justification should proceed in a structured manner to ensure rigor in discussions, critique, and revisions of model instances, and (c) the concept of argument instantiated in a justification process ought to be well defined and understood, the present paper introduces the UML Traceability through Argumentation Method (UML-TAM) to enable the traceability of design rationale in UML while allowing the appropriateness of model changes to be checked by analysis of the structure of the arguments provided to justify such changes.
Using VCL as an Aspect-Oriented Approach to Requirements Modelling
NASA Astrophysics Data System (ADS)
Amálio, Nuno; Kelsen, Pierre; Ma, Qin; Glodt, Christian
Software systems are becoming larger and more complex. By tackling the modularisation of crosscutting concerns, aspect orientation draws attention to modularity as a means to address the problems of scalability, complexity and evolution in software systems development. Aspect-oriented modelling (AOM) applies aspect-orientation to the construction of models. Most existing AOM approaches are designed without a formal semantics, and use multi-view partial descriptions of behaviour. This paper presents an AOM approach based on the Visual Contract Language (VCL): a visual language for abstract and precise modelling, designed with a formal semantics, and comprising a novel approach to visual behavioural modelling based on design by contract where behavioural descriptions are total. By applying VCL to a large case study of a car-crash crisis management system, the paper demonstrates how the modularity of VCL's constructs, at different levels of granularity, helps to tackle complexity. In particular, it shows how VCL's package construct and its associated composition mechanisms are key in supporting separation of concerns, coarse-grained problem decomposition and aspect-orientation. The case study's modelling solution has a clear and well-defined modular structure; the backbone of this structure is a collection of packages encapsulating local solutions to concerns.
Dissipation and Rheology of Sheared Soft-Core Frictionless Disks Below Jamming
NASA Astrophysics Data System (ADS)
Vågberg, Daniel; Olsson, Peter; Teitel, S.
2014-05-01
We use numerical simulations to investigate the effect that different models of energy dissipation have on the rheology of soft-core frictionless disks, below jamming in two dimensions. We find that it is not necessarily the mass of the particles that determines whether a system has Bagnoldian or Newtonian rheology, but rather the presence or absence of large connected clusters of particles. We demonstrate the key role that tangential dissipation plays in the formation of such clusters and in several models find a transition from Bagnoldian to Newtonian rheology as the packing fraction ϕ is varied. For each model, we show that appropriately scaled rheology curves approach a well defined limit as the mass of the particles decreases and collisions become strongly inelastic.
Modeling Impact of Urbanization in US Cities Using Simple Biosphere Model SiB2
NASA Technical Reports Server (NTRS)
Zhang, Ping; Bounoua, Lahouari; Thome, Kurtis; Wolfe, Robert
2016-01-01
We combine Landsat- and the Moderate Resolution Imaging Spectroradiometer (MODIS)-based products, as well as climate drivers from Phase 2 of the North American Land Data Assimilation System (NLDAS-2) in a Simple Biosphere land surface model (SiB2) to assess the impact of urbanization in continental USA (excluding Alaska and Hawaii). More than 300 cities and their surrounding suburban and rural areas are defined in this study to characterize the impact of urbanization on surface climate including surface energy, carbon budget, and water balance. These analyses reveal an uneven impact of urbanization across the continent that should inform upon policy options for improving urban growth including heat mitigation and energy use, carbon sequestration and flood prevention.
NASA Astrophysics Data System (ADS)
Ferrara, R.; Leonardi, G.; Jourdan, F.
2013-09-01
A numerical model to predict train-induced vibrations is presented. The dynamic computation considers the mutual interactions in the vehicle/track coupled system by means of a finite and discrete element method. Rail defects and the case of out-of-round wheels are considered. The dynamic interaction between the wheel-sets and the rail is accomplished by using the nonlinear Hertzian model with hysteresis damping. A sensitivity analysis is performed to identify the variables that most affect maintenance costs. The rail-sleeper contact is assumed to extend over an area-defined contact zone, rather than a single point, which better fits real case studies. Experimental validations show that the predictions fit the experimental data well.
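The wheel-rail force law mentioned, nonlinear Hertzian contact with hysteresis damping, is often written in a Hunt-Crossley-like form F = K δ^{3/2}(1 + c δ̇); the paper's exact damping term is not given, so that form and the constants below are assumptions:

```python
# Nonlinear Hertzian wheel-rail contact with hysteresis damping in an assumed
# Hunt-Crossley-like form. K is a typical wheel-rail Hertz constant; the
# damping coefficient c is illustrative.

def contact_force(delta, delta_dot, K=1.0e11, c=1.0e-4):
    """delta: compression [m] (<= 0 means loss of contact), delta_dot: [m/s]."""
    if delta <= 0.0:
        return 0.0                     # wheel lifts off: no tensile contact force
    F = K * delta ** 1.5 * (1.0 + c * delta_dot)
    return max(F, 0.0)                 # damping must not make the force tensile

print(contact_force(1e-4, 0.0))        # static Hertz force at 0.1 mm: ~100 kN
print(contact_force(1e-4, -0.5))       # unloading: hysteretic force reduction
```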
Detecting future performance of the reservoirs under the changing climate
NASA Astrophysics Data System (ADS)
Biglarbeigi, Pardis; Strong, W. Alan; Griffiths, Philip
2017-04-01
Climate change is expected to affect the hydrological cycle, resulting in changes in rainfall patterns and seasonal variations as well as flooding and drought. Changes in the hydrologic regime of rivers are another anticipated effect of climate change. This climatic variability puts pressure on renewable water resources, increasing them in some regions, decreasing them in others, and introducing high uncertainties in every region. As a result of the pressure of climate change on water resources, the operation of reservoirs and dams is expected to face uncertainties in different aspects, such as supplying water and controlling floods. In this study, we model two hypothetical dams on different streamflows, based on the water needs of 20,000 and 100,000 people. The UK, a country that has suffered several flooding events during the past years, and Iran, a country with severe water scarcity, are chosen as the nations under study. For this study, the hypothetical dam is modelled on three streamflows in each nation. The mass-balance model of the system is then optimised over 25 years of historical data, considering two objectives: (1) minimisation of the water deficit in different sectors (agricultural, domestic and industrial) and (2) minimisation of flooding around the reservoir catchment. The optimised policies are simulated in the model again under different climate change and demographic scenarios to obtain the Reliability, Resilience and Vulnerability (RRV) indices of the system. To this end, two different sets of scenarios are introduced: the first set comprises the scenarios introduced in the IPCC Special Report on Emission Scenarios (SRES); the second set is a Monte Carlo simulation of demographic and temperature scenarios. Demographic scenarios are defined as the UN's estimates of population based on age, sex, fertility, mortality and migration rates at 2-year intervals. Temperature scenarios, on the other hand, are defined based on the target of COP21, Paris, which proposed to keep "the global temperature increase well below 2 degrees Celsius, while urging efforts to limit the increase to 1.5 degrees", as well as temperatures above this limit to better address the effects of climate change. Numerical results of the proposed model are anticipated to represent the performance of the system up to the year 2100 through the RRV indices. RRV metrics are an effective means of quantitative estimation of climate change impacts on reservoir systems, used here to obtain potential policies for solving future water supply issues.
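The RRV indices follow the standard Hashimoto et al. (1982) definitions: reliability is the fraction of time the system is in a satisfactory state, resilience the probability that a failure step is followed by recovery, and vulnerability the mean severity of failures. A compact sketch over an illustrative deficit series (the threshold logic is generic, not the study's):

```python
# Reliability-Resilience-Vulnerability in the standard Hashimoto et al. (1982)
# sense, computed from a time series of supply deficits. Values illustrative.

import numpy as np

def rrv(deficit, tol=0.0):
    fail = deficit > tol                            # failure state per time step
    reliability = 1.0 - fail.mean()
    recoveries = np.sum(fail[:-1] & ~fail[1:])      # failure -> satisfactory
    resilience = recoveries / max(fail[:-1].sum(), 1)
    vulnerability = deficit[fail].mean() if fail.any() else 0.0
    return reliability, resilience, vulnerability

deficit = np.array([0, 0, 5, 8, 0, 0, 0, 3, 0, 0], dtype=float)
print(rrv(deficit))   # (0.7, 0.667, 5.33): 3 failure steps, 2 recoveries
```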
Barnard, Patrick L.; Erikson, Li H.; Elias, Edwin P.L.; Dartnell, Peter; Barnard, P.L.; Jaffee, B.E.; Schoellhamer, D.H.
2013-01-01
The morphology of ~ 45,000 bedforms from 13 multibeam bathymetry surveys was used as a proxy for identifying net bedload sediment transport directions and pathways throughout the San Francisco Bay estuary and adjacent outer coast. The spatially-averaged shape asymmetry of the bedforms reveals distinct pathways of ebb and flood transport. Additionally, the region-wide, ebb-oriented asymmetry of 5% suggests net seaward-directed transport within the estuarine-coastal system, with significant seaward asymmetry at the mouth of San Francisco Bay (11%), through the northern reaches of the Bay (7–8%), and among the largest bedforms (21% for λ > 50 m). This general indication for the net transport of sand to the open coast strongly suggests that anthropogenic removal of sediment from the estuary, particularly along clearly defined seaward transport pathways, will limit the supply of sand to chronically eroding, open-coast beaches. The bedform asymmetry measurements significantly agree (up to ~ 76%) with modeled annual residual transport directions derived from a hydrodynamically-calibrated numerical model, and the orientation of adjacent, flow-sculpted seafloor features such as mega-flute structures, providing a comprehensive validation of the technique. The methods described in this paper to determine well-defined, cross-validated sediment transport pathways can be applied to estuarine-coastal systems globally where bedforms are present. The results can inform and improve regional sediment management practices to more efficiently utilize often limited sediment resources and mitigate current and future sediment supply-related impacts.
Direct approach for the fluctuation-dissipation theorem under nonequilibrium steady-state conditions
NASA Astrophysics Data System (ADS)
Komori, Kentaro; Enomoto, Yutaro; Takeda, Hiroki; Michimura, Yuta; Somiya, Kentaro; Ando, Masaki; Ballmer, Stefan W.
2018-05-01
The test mass suspensions of cryogenic gravitational-wave detectors such as the KAGRA project are tasked with extracting the heat deposited on the optics. These suspensions have a nonuniform temperature, requiring the calculation of thermal noise in nonequilibrium conditions. While it is not possible to describe the whole suspension system with one temperature, the local temperature at every point in the system is still well defined. We therefore generalize the application of the fluctuation-dissipation theorem to mechanical systems, pioneered by Saulson and Levin, to nonequilibrium conditions in which a temperature can only be defined locally. The result is intuitive in the sense that the thermal noise in the observed degree of freedom is given by averaging the temperature field, weighted by the dissipation density associated with that particular degree of freedom. After proving this theorem, we apply the result to examples of increasing complexity: a simple spring, the bending of a pendulum suspension fiber, and a model of the KAGRA cryogenic suspension. We conclude by outlining the application to nonequilibrium thermoelastic noise.
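A compact way to state the central result (with Levin's normalization assumed here; the authors' exact convention may differ): for a sinusoidal generalized force F_0 cos(ωt) applied to the readout variable x, with local dissipated-power density w_diss(r),

$$S_x(\omega) \;=\; \frac{8 k_B}{\omega^2 F_0^2} \int_V T(\mathbf{r})\, w_{\mathrm{diss}}(\mathbf{r})\, \mathrm{d}V,$$

which reduces to the equilibrium Levin result $S_x(\omega) = 8 k_B T\, W_{\mathrm{diss}} / (\omega^2 F_0^2)$ when $T(\mathbf{r})$ is uniform.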
Hypergraph topological quantities for tagged social networks.
Zlatić, Vinko; Ghoshal, Gourab; Caldarelli, Guido
2009-09-01
Recent years have witnessed the emergence of a new class of social networks, which require us to move beyond previously employed representations of complex graph structures. A notable example is that of the folksonomy, an online process where users collaboratively employ tags to resources to impart structure to an otherwise undifferentiated database. In a recent paper, we proposed a mathematical model that represents these structures as tripartite hypergraphs and defined basic topological quantities of interest. In this paper, we extend our model by defining additional quantities such as edge distributions, vertex similarity and correlations as well as clustering. We then empirically measure these quantities on two real life folksonomies, the popular online photo sharing site Flickr and the bookmarking site CiteULike. We find that these systems share similar qualitative features with the majority of complex networks that have been previously studied. We propose that the quantities and methodology described here can be used as a standard tool in measuring the structure of tagged networks.
NASA Astrophysics Data System (ADS)
Selvy, Brian M.; Claver, Charles; Willman, Beth; Petravick, Don; Johnson, Margaret; Reil, Kevin; Marshall, Stuart; Thomas, Sandrine; Lotz, Paul; Schumacher, German; Lim, Kian-Tat; Jenness, Tim; Jacoby, Suzanne; Emmons, Ben; Axelrod, Tim
2016-08-01
We provide an overview of the Model Based Systems Engineering (MBSE) language, tool, and methodology being used in our development of the Operational Plan for Large Synoptic Survey Telescope (LSST) operations. LSST's Systems Engineering (SE) team is using a model-based approach to operational plan development to: 1) capture the top-down stakeholders' needs and functional allocations defining the scope, required tasks, and personnel needed for operations, and 2) capture the bottom-up operations and maintenance activities required to conduct the LSST survey across its distributed operations sites for the full ten-year survey duration. To accomplish these complementary goals and ensure that they produce self-consistent results, we have developed a holistic approach using the Sparx Enterprise Architect modeling tool and the Systems Modeling Language (SysML). This approach utilizes SysML Use Cases, Actors, associated relationships, and Activity Diagrams to document and refine all of the major operations and maintenance activities that will be required to successfully operate the observatory and meet stakeholder expectations. We have developed several customized extensions of the SysML language, including a custom stereotyped Use Case element with unique tagged values, as well as unique association connectors and Actor stereotypes. We demonstrate that this customized MBSE methodology enables us to define: 1) the roles each human Actor must take on to successfully carry out the activities associated with the Use Cases; 2) the skills each Actor must possess; 3) the functional allocation of all required stakeholder activities and Use Cases to the organizational entities tasked with carrying them out; and 4) the organizational structure required to successfully execute the operational survey. Our approach allows for continual refinement, using the systems engineering spiral method to expose finer levels of detail as necessary. For example, the bottom-up, Use Case-driven approach will be deployed in the future to develop the detailed work procedures required to successfully execute each operational activity.
Automatic programming of simulation models
NASA Technical Reports Server (NTRS)
Schroer, Bernard J.; Tseng, Fan T.; Zhang, Shou X.; Dwan, Wen S.
1990-01-01
The concepts of software engineering were used to improve the simulation modeling environment. Emphasis was placed on applying an element of rapid prototyping, or automatic programming, to assist the modeler in defining the problem specification. Once the problem specification has been defined, an automatic code generator is used to write the simulation code. Two domains were selected for evaluating the concepts of software engineering for discrete event simulation: a manufacturing domain and a spacecraft countdown network sequence. The specific tasks were to: (1) define the software requirements for a graphical user interface to the Automatic Manufacturing Programming System (AMPS); (2) develop a graphical user interface for AMPS; and (3) compare the AMPS graphical interface with the AMPS interactive user interface.
NASA Astrophysics Data System (ADS)
Collow, Thomas W.; Wang, Wanqiu; Kumar, Arun; Zhang, Jinlun
2017-09-01
The capability of a numerical model to simulate the statistical characteristics of the summer sea ice date of retreat (DOR) and the winter date of advance (DOA) is investigated using sea ice concentration output from the Climate Forecast System Version 2 model (CFSv2). Two model configurations are tested: the operational setting (CFSv2CFSR), which uses initial data from the Climate Forecast System Reanalysis, and a modified version (CFSv2PIOMp), which ingests sea ice thickness initialization data from the Pan-Arctic Ice Ocean Modeling and Assimilation System (PIOMAS) and includes physics modifications for a more realistic representation of heat fluxes at the sea ice top and bottom. First, a method to define DOR and DOA is presented. Then, DOR and DOA are determined from the model simulations and from observational sea ice concentration from the National Aeronautics and Space Administration (NASA). Means, trends, and detrended standard deviations of DOR and DOA are compared, along with DOR/DOA rates in the Arctic Ocean. It is found that the statistics are generally similar between the model and observations, although some regional biases exist. In addition, regions of new ice retreat in recent years are represented well in CFSv2PIOMp over the Arctic Ocean, in terms of both spatial extent and timing. Overall, CFSv2PIOMp shows a reduction in error throughout the Arctic. Based on these results, it is concluded that the model produces a reasonable representation of the climatology and variability statistics of DOR and DOA in most regions. This assessment serves as a prerequisite for future predictability experiments.
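The abstract mentions but does not reproduce the DOR/DOA definitions; below is a minimal sketch of one common convention, a 15% concentration threshold, which is an assumption here rather than the paper's actual method.

```python
import numpy as np

def retreat_advance_dates(sic, threshold=0.15):
    """Day-of-year of retreat (DOR) and advance (DOA) for one grid cell,
    given a daily sea ice concentration series for one melt-freeze year.

    Convention assumed here (the paper defines its own method): DOR is the
    first day concentration falls below the threshold, DOA the first later
    day it recovers above it.
    """
    sic = np.asarray(sic, dtype=float)
    below = np.flatnonzero(sic < threshold)
    if below.size == 0:
        return None, None                       # perennial ice: no retreat
    dor = int(below[0])
    above_later = np.flatnonzero(sic[dor:] >= threshold)
    doa = int(dor + above_later[0]) if above_later.size else None
    return dor, doa
```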
Analysis of DIRAC's behavior using model checking with process algebra
NASA Astrophysics Data System (ADS)
Remenska, Daniela; Templon, Jeff; Willemse, Tim; Bal, Henri; Verstoep, Kees; Fokkink, Wan; Charpentier, Philippe; Graciani Diaz, Ricardo; Lanciotti, Elisa; Roiser, Stefan; Ciba, Krzysztof
2012-12-01
DIRAC is the grid solution developed to support LHCb production activities as well as user data analysis. It consists of distributed services and agents delivering the workload to the grid resources. Services maintain database back-ends to store dynamic state information of entities such as jobs, queues, staging requests, etc. Agents use polling to check and possibly react to changes in the system state. Each agent's logic is relatively simple; the main complexity lies in their cooperation. Agents run concurrently, and collaborate using the databases as shared memory. The databases can be accessed directly by the agents if running locally, or through a DIRAC service interface if necessary. This shared-memory model causes entities to occasionally get into inconsistent states. Tracing and fixing such problems becomes formidable due to the inherent parallelism. We propose more rigorous methods to cope with this. Model checking is one such technique for analysis of an abstract model of a system. Unlike conventional testing, it allows full control over the execution of parallel processes, and supports exhaustive state-space exploration. We used the mCRL2 language and toolset to model the behavior of two related DIRAC subsystems: the workload and storage management systems. Based on process algebra, mCRL2 allows defining custom data types as well as functions over them. This makes it suitable for modeling the data manipulations made by DIRAC's agents. By visualizing the state space and replaying scenarios with the toolkit's simulator, we have detected race conditions and deadlocks in these systems, which, in several cases, were confirmed to occur in reality. Several properties of interest were formulated and verified with the tool. Our future direction is automating the translation from DIRAC to a formal model.
Defining the pharmaceutical system to support proactive drug safety.
Lewis, Vicki R; Hernandez, Angelica; Meadors, Margaret
2013-02-01
The military, aviation, nuclear, and transportation industries have transformed their safety records by using a systems approach to safety and risk mitigation. This article creates a preliminary model of the U.S. pharmaceutical system using available literature including academic publications, policies, and guidelines established by regulatory bodies and drug industry trade publications. Drawing from the current literature, the goals, roles, and individualized processes of pharmaceutical subsystems will be defined. Defining the pharmaceutical system provides a vehicle to assess and address known problems within the system, and provides a means to conduct proactive risk analyses, which would create significant pharmaceutical safety advancement.
NASA Astrophysics Data System (ADS)
Stanojević, A.; Marković, V. M.; Čupić, Ž.; Vukojević, V.; Kolar-Anić, L.
2017-12-01
A model was developed to study the effect of gradual dietary cholesterol intake on the dynamics of the HPA axis. Well-defined oscillatory dynamics of the vital neuroendocrine hypothalamic-pituitary-adrenal (HPA) axis have proven necessary for maintaining regular basal physiology and for formulating an appropriate stress response to various types of perturbations. Cholesterol, as the precursor of all steroid HPA axis hormones, can alter the dynamics of the HPA axis. To analyse its particular influence on HPA axis dynamics, we used a stoichiometric model of HPA axis activity and simulated cholesterol perturbations in the form of finite-duration pulses with asymmetrically distributed concentration profiles. Our numerical simulations showed a complex, nonlinear dependence between HPA axis responsiveness and the different forms of applied cholesterol concentration pulses, indicating the significance of kinetic modelling and dynamical systems theory for understanding large-scale self-regulatory and homeostatic processes within this neuroendocrine system.
NASA Technical Reports Server (NTRS)
1975-01-01
An introduction to the MAPSEP organization and a detailed analytical description of all models and algorithms are given. These include trajectory and error covariance propagation methods, orbit determination processes, thrust modeling, and trajectory correction (guidance) schemes. Earth orbital MAPSEP contains the capability of analyzing almost any currently projected low thrust mission from low earth orbit to super synchronous altitudes. Furthermore, MAPSEP is sufficiently flexible to incorporate extended dynamic models, alternate mission strategies, and almost any other system requirement imposed by the user. As in the interplanetary version, earth orbital MAPSEP represents a trade-off between precision modeling and computational speed consistent with defining necessary system requirements. It can be used in feasibility studies as well as in flight operational support. Pertinent operational constraints are available both implicitly and explicitly. However, the reader should be warned that because of program complexity, MAPSEP is only as good as the user and will quickly succumb to faulty user inputs.
A role for low-order system dynamics models in urban health policy making.
Newell, Barry; Siri, José
2016-10-01
Cities are complex adaptive systems whose responses to policy initiatives emerge from feedback interactions between their parts. Urban policy makers must routinely deal with both detail and dynamic complexity, coupled with high levels of diversity, uncertainty and contingency. In such circumstances, it is difficult to generate reliable predictions of health-policy outcomes. In this paper we explore the potential for low-order system dynamics (LOSD) models to make a contribution towards meeting this challenge. By definition, LOSD models have few state variables (≤5), illustrate the non-linear effects caused by feedback and accumulation, and focus on endogenous dynamics generated within well-defined boundaries. We suggest that experience with LOSD models can help practitioners to develop an understanding of basic principles of system dynamics, giving them the ability to 'see with new eyes'. Because efforts to build a set of LOSD models can help a transdisciplinary group to develop a shared, coherent view of the problems that they seek to tackle, such models can also become the foundations of 'powerful ideas'. Powerful ideas are conceptual metaphors that provide the members of a policy-making group with the a priori shared context required for effective communication, the co-production of knowledge, and the collaborative development of effective public health policies. Copyright © 2016 Elsevier Ltd. All rights reserved.
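As a concrete illustration of what a LOSD model looks like in practice, here is a two-stock sketch with feedback and accumulation; the equations and parameter values are illustrative inventions, not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# A two-stock system dynamics sketch: an urban population grows with a
# resource-dependent birth rate and depletes a renewable resource stock.
def lo_sd(t, y, r_birth=0.03, r_death=0.01, r_regen=0.05, use=0.0004, K=1.0):
    pop, res = y
    births = r_birth * pop * (res / K)       # feedback: resource limits growth
    deaths = r_death * pop
    regen = r_regen * res * (1 - res / K)    # logistic regeneration
    depletion = use * pop * res
    return [births - deaths, regen - depletion]

sol = solve_ivp(lo_sd, (0, 400), [10.0, 1.0])
print(sol.y[:, -1])   # long-run stocks; overshoot-and-decline dynamics emerge
```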
NASA Astrophysics Data System (ADS)
Hvilshøj, S.; Jensen, K. H.; Barlebo, H. C.; Madsen, B.
1999-08-01
Inverse numerical modeling was applied to analyze pumping tests of partially penetrating wells carried out in three wells established in an unconfined aquifer in Vejen, Denmark, where extensive field investigations had previously been carried out, including tracer tests, mini-slug tests, and other hydraulic tests. Drawdown data from multiple piezometers located at various horizontal and vertical distances from the pumping well were included in the optimization. Horizontal and vertical hydraulic conductivities, specific storage, and specific yield were estimated, assuming that the aquifer was either a homogeneous system with vertical anisotropy or composed of two or three layers of different hydraulic properties. In two out of three cases, a more accurate interpretation was obtained for a multi-layer model defined on the basis of lithostratigraphic information obtained from geological descriptions of sediment samples, gammalogs, and flow-meter tests. Analysis of the pumping tests resulted in values for horizontal hydraulic conductivities that are in good accordance with those obtained from slug tests and mini-slug tests. Besides the horizontal hydraulic conductivity, it is possible to determine the vertical hydraulic conductivity, specific yield, and specific storage based on a pumping test of a partially penetrating well. The study demonstrates that pumping tests of partially penetrating wells can be analyzed using inverse numerical models. The model used in the study was a finite-element flow model combined with a non-linear regression model. Such a model can accommodate more geological information and complex boundary conditions, and the parameter-estimation procedure can be formalized to obtain optimum estimates of hydraulic parameters and their standard deviations.
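The paper's model is a finite-element flow model combined with nonlinear regression; as a minimal illustration of the same inverse principle, the sketch below fits the simpler, fully penetrating Theis solution to synthetic drawdown data. All values are hypothetical.

```python
import numpy as np
from scipy.special import exp1
from scipy.optimize import curve_fit

def theis_drawdown(t, T, S, Q=0.01, r=10.0):
    """Theis drawdown s(t) [m] at radius r [m] for pumping rate Q [m3/s];
    T is transmissivity [m2/s], S is storativity [-]."""
    u = r**2 * S / (4 * T * t)
    return Q / (4 * np.pi * T) * exp1(u)

# Synthetic 'observed' drawdowns standing in for piezometer data.
t_obs = np.logspace(1, 5, 30)                       # seconds since pumping start
s_obs = theis_drawdown(t_obs, 5e-4, 2e-3) \
        + np.random.default_rng(1).normal(0, 1e-3, 30)

(T_hat, S_hat), cov = curve_fit(theis_drawdown, t_obs, s_obs,
                                p0=(1e-4, 1e-3), bounds=(1e-8, 1))
print(T_hat, S_hat, np.sqrt(np.diag(cov)))          # estimates + std deviations
```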
The use of algorithmic behavioural transfer functions in parametric EO system performance models
NASA Astrophysics Data System (ADS)
Hickman, Duncan L.; Smith, Moira I.
2015-10-01
The use of mathematical models to predict the overall performance of an electro-optic (EO) system is well established as a methodology and is used widely to support requirements definition and system design and to produce performance predictions. Traditionally these models have been based upon cascades of transfer functions derived from established physical theory, such as the calculation of signal levels from radiometry equations, as well as the use of statistical models. However, the performance of an EO system is increasingly dominated by the on-board processing of the image data, and this automated interpretation of image content is complex in nature and presents significant modelling challenges. Models and simulations of EO systems tend either to process image data as part of a performance simulation (image-flow) or to use a series of mathematical functions that attempt to define the overall system characteristics (parametric). The former approach is generally more accurate but statistically and theoretically weak in terms of specific operational scenarios, and is also time consuming. The latter approach is generally faster but is unable to provide accurate predictions of a system's performance under operational conditions. An alternative and novel architecture is presented in this paper which combines the processing-speed attributes of parametric models with the accuracy of image-flow representations in a statistically valid framework. An additional dimension needed to create an effective simulation is a robust software design whose architecture reflects the structure of the EO system and its interfaces. As such, the design of the simulator can be viewed as a software prototype of a new EO system or an abstraction of an existing design. This new approach has been used successfully to model a number of complex military systems and has been shown to combine improved performance estimation with speed of computation. The paper describes the approach and architecture in detail, and example results based on a practical application are then given which illustrate the performance benefits. Finally, conclusions are drawn and comments given regarding the benefits and uses of the new approach.
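To make the "cascade of transfer functions" idea concrete, here is a toy radiometric link budget of the parametric kind; every number below is a placeholder, not a value from the paper.

```python
import numpy as np

# Illustrative parametric cascade: scene radiance -> detected electrons -> SNR.
h, c = 6.626e-34, 2.998e8       # Planck constant [J s], speed of light [m/s]
L = 1.0                          # scene spectral radiance [W m^-2 sr^-1 um^-1]
lam, dlam = 4.0e-6, 0.5e-6       # band centre and width [m]
A = np.pi * 0.05**2              # aperture area [m^2]
omega = (20e-6)**2               # pixel IFOV solid angle [sr]
tau_opt, qe, t_int = 0.7, 0.6, 5e-3   # optics transmission, QE, integration [s]

power = L * (dlam * 1e6) * A * omega * tau_opt     # W reaching the pixel
electrons = power * t_int / (h * c / lam) * qe     # photoelectrons collected
snr = electrons / np.sqrt(electrons + 200**2)      # shot + 200 e- read noise
print(f"{electrons:.3e} e-, SNR = {snr:.1f}")
```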
Expanding the Space of Plausible Solutions in a Medical Tutoring System for Problem-Based Learning
ERIC Educational Resources Information Center
Kazi, Hameedullah; Haddawy, Peter; Suebnukarn, Siriwan
2009-01-01
In well-defined domains such as Physics, Mathematics, and Chemistry, solutions to a posed problem can objectively be classified as correct or incorrect. In ill-defined domains such as medicine, the classification of solutions to a patient problem as correct or incorrect is much more complex. Typical tutoring systems accept only a small set of…
The influence of pregnancy on systemic immunity.
Pazos, Michael; Sperling, Rhoda S; Moran, Thomas M; Kraus, Thomas A
2012-12-01
Adaptations in maternal systemic immunity are presumed to be responsible for observed alterations in disease susceptibility and severity as pregnancy progresses. Epidemiological evidence as well as animal studies have shown that influenza infections are more severe during the second and third trimesters of pregnancy, resulting in greater morbidity and mortality, although the reason for this is still unclear. Our laboratory has taken advantage of 20 years of experience studying the murine immune response to respiratory viruses to address questions of altered immunity during pregnancy. With clinical studies and unique animal model systems, we are working to define the mechanisms responsible for altered immune responses to influenza infection during pregnancy and what roles hormones such as estrogen or progesterone play in these alterations.
Programming a Detector Emulator on NI's FlexRIO Platform
NASA Astrophysics Data System (ADS)
Gervais, Michelle; Crawford, Christopher; Sprow, Aaron; Nab Collaboration
2017-09-01
Recently, digital detector emulators have been on the rise as a means to test data acquisition systems and analysis toolkits against a well-understood data set. National Instruments' PXIe-7962R FPGA module and Active Technologies' AT-1212 DAC module provide a customizable platform for analog output. Using a graphical programming language, we have developed a system capable of producing two time-correlated channels of analog output which sample unique amplitude spectra to mimic nuclear physics experiments. This system will be used to model the Nab experiment, in which a prompt beta-decay electron is followed by a slow proton according to a defined time distribution. We will present the results of our work and discuss further development potential. Supported by DOE under Contract DE-SC0008107.
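A sketch of the kind of event stream such an emulator must synthesize, with toy spectra and an assumed exponential electron-proton delay; none of this is the actual Nab physics model.

```python
import numpy as np

rng = np.random.default_rng(42)

def nab_like_events(n):
    """Generate toy (electron, proton) event pairs: each electron time is
    followed by a proton after a delay drawn from an assumed distribution.
    Spectra and delay model are illustrative placeholders only."""
    t_e = np.sort(rng.uniform(0.0, 1.0, n))       # electron hit times [s]
    delay = rng.exponential(20e-6, n)             # assumed proton delay model
    t_p = t_e + delay
    amp_e = rng.gamma(4.0, 50.0, n)               # toy beta energy spectrum
    amp_p = rng.normal(30.0, 3.0, n)              # toy proton pulse height
    return (t_e, amp_e), (t_p, amp_p)

channel_1, channel_2 = nab_like_events(1000)      # two correlated DAC channels
```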
Reflectivity of the atmosphere-inhomogeneous surfaces system Laboratory simulation
NASA Technical Reports Server (NTRS)
Mekler, Y.; Kaufman, Y. J.; Fraser, R. S.
1984-01-01
Theoretical two- and three-dimensional solutions of the radiative transfer equation have been applied to the earth-atmosphere system. Such solutions have not been verified experimentally. A laboratory experiment simulates such a system to test the theory. The atmosphere was simulated by latex spheres suspended in water and the ground by a nonuniform surface, half white and half black. A stable radiation source provided uniform illumination over the hydrosol. The upward radiance along a line orthogonal to the boundary of the two-halves field was recorded for different amounts of the hydrosol. The simulation is a well-defined radiative transfer experiment to test radiative transfer models involving nonuniform surfaces. Good agreement is obtained between the measured and theoretical results.
From the experience of development of composite materials with desired properties
NASA Astrophysics Data System (ADS)
Garkina, I. A.; Danilov, A. M.
2017-04-01
Drawing on experience in the development of composite materials with desired properties, an algorithm for the synthesis of construction materials is given, based on their representation as a complex system. The feasibility of creating the composite and of implementing the original technical task is determined at the cognitive modeling stage. On the basis of the cognitive map, hierarchical structures of quality criteria are defined; according to these, corresponding block diagrams of the system are specified for each large-scale level. Based on the solution of single-criterion optimization problems, and using the optimum values found, the multi-criteria task is formalized and solved (the optimum organization and properties of the system are determined). The emphasis is on methodological aspects of mathematical modeling (construction of generalized and partial models to optimize the properties and structure of materials, including those based on the concept of systemic homeostasis).
Analytical basis for planetary quarantine.
NASA Technical Reports Server (NTRS)
Schalkowsky, S.; Kline, R. C., Jr.
1971-01-01
The attempt is made to investigate quarantine constraints, and alternatives for meeting them, in sufficient detail for identifying those courses of action which compromise neither the quarantine nor the space mission objectives. Mathematical models pertinent to this goal are formulated at three distinct levels. The first level of mission constraint models pertains to the quarantine goals considered necessary by the international scientific community. The principal emphasis of modeling at this level is to quantify international considerations and to produce well-defined mission constraints. Such constraints must be translated into explicit implementation requirements by the operational agency of the launching nation. This produces the second level of implementation system modeling. However, because of the multitude of factors entering into the implementation models, it is convenient to consider these factors at the third level of implementation parameter models. These models are intentionally limited to the inclusion of only those factors which can be quantified realistically, either now or in the near future.
Endocannabinoid system in neurodegenerative disorders.
Basavarajappa, Balapal S; Shivakumar, Madhu; Joshi, Vikram; Subbanna, Shivakumar
2017-09-01
Most neurodegenerative disorders (NDDs) are characterized by cognitive impairment and other neurological defects. The definite causes of and pathways underlying the progression of these NDDs are not well defined. Several mechanisms have been proposed to contribute to the development of NDDs. These mechanisms may proceed concurrently or successively, and they differ among cell types at different developmental stages in distinct brain regions. The endocannabinoid system, which involves cannabinoid receptors type 1 (CB1R) and type 2 (CB2R), endogenous cannabinoids and the enzymes that catabolize these compounds, has been shown to contribute to the development of NDDs in several animal models and human studies. In this review, we discuss the functions of the endocannabinoid system in NDDs and discuss the therapeutic efficacy of targeting the endocannabinoid system to rescue NDDs. © 2017 International Society for Neurochemistry.
Theory of constraints for publicly funded health systems.
Sadat, Somayeh; Carter, Michael W; Golden, Brian
2013-03-01
Originally developed in the context of publicly traded for-profit companies, theory of constraints (TOC) improves system performance through leveraging the constraint(s). While the theory seems to be a natural fit for resource-constrained publicly funded health systems, there is a lack of literature addressing the modifications required to adopt TOC and define the goal and performance measures. This paper develops a system dynamics representation of the classical TOC's system-wide goal and performance measures for publicly traded for-profit companies, which forms the basis for developing a similar model for publicly funded health systems. The model is then expanded to include some of the factors that affect system performance, providing a framework to apply TOC's process of ongoing improvement in publicly funded health systems. Future research is required to more accurately define the factors affecting system performance and populate the model with evidence-based estimates for various parameters in order to use the model to guide TOC's process of ongoing improvement.
NASA Astrophysics Data System (ADS)
Martin, L.; Schatalov, M.; Hagner, M.; Goltz, U.; Maibaum, O.
Today's software for aerospace systems is typically very complex. This is due to the increasing number of features as well as the high demands for safety, reliability, and quality. This complexity also leads to significantly higher software development costs. To handle the software complexity, a structured development process is necessary. Additionally, compliance with relevant standards for quality assurance is a mandatory concern. To assure high software quality, techniques for verification are necessary. Besides traditional techniques like testing, automated verification techniques like model checking are becoming more popular. The latter examine the whole state space and, consequently, result in full test coverage. Nevertheless, despite the obvious advantages, this technique is rarely used for the development of aerospace systems. In this paper, we propose a tool-supported methodology for the development and formal verification of safety-critical software in the aerospace domain. The methodology relies on the V-Model and defines a comprehensive workflow for model-based software development as well as automated verification in compliance with the European standard series ECSS-E-ST-40C. Furthermore, our methodology supports the generation and deployment of code. For tool support we use the SCADE Suite (Esterel Technologies), an integrated design environment that covers all the requirements of our methodology. The SCADE Suite is well established in the avionics and defense, rail transportation, energy and heavy equipment industries. For evaluation purposes, we apply our approach to an up-to-date case study of the TET-1 satellite bus. In particular, the attitude and orbit control software is considered. The behavioral models for the subsystem are developed, formally verified, and optimized.
Invasion of cooperators in lattice populations: linear and non-linear public good games.
Vásárhelyi, Zsóka; Scheuring, István
2013-08-01
A generalized version of the N-person volunteer's dilemma (NVD) game has been suggested recently for illustrating the problem of N-person social dilemmas. Using standard replicator dynamics it can be shown that coexistence of cooperators and defectors is typical in this model. However, the question of how a rare mutant cooperator could invade a population of defectors is still open. Here we examined the dynamics of individual-based stochastic models of the NVD. We analyze the dynamics in well-mixed and viscous populations. We show in both cases that coexistence between cooperators and defectors is possible; moreover, spatial aggregation of types in viscous populations can easily lead to pure cooperation. Furthermore, we analyze the invasion of cooperators into populations consisting predominantly of defectors. In accordance with analytical results for deterministic systems, we found the invasion of cooperators successful in the well-mixed case only if their initial concentration was higher than a critical threshold defined by the replicator dynamics of the NVD. In the viscous case, however, not the initial concentration but the initial number determines the success of invasion. We show that even a single mutant cooperator can invade with a high probability, because the local density of aggregated cooperators exceeds the threshold defined by the game. Comparing the results to models using different benefit functions (linear or sigmoid), we show that the role of the benefit function is much more important in the well-mixed than in the viscous case. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
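To illustrate the invasion threshold in the well-mixed case, the sketch below locates the interior fixed points of the replicator dynamics for a threshold public-good game standing in for the generalized NVD; the payoff parameters are illustrative assumptions, not the paper's.

```python
import numpy as np
from scipy.stats import binom

def payoffs(x, N=5, k=2, b=3.0, c=1.0):
    """Expected payoffs in a threshold public-good game: benefit b is produced
    only if at least k of the N group members volunteer at cost c. This is a
    stand-in for the generalized NVD with a nonlinear benefit function."""
    w_c = b * binom.sf(k - 2, N - 1, x) - c   # focal cooperator needs k-1 others
    w_d = b * binom.sf(k - 1, N - 1, x)       # focal defector needs k others
    return w_c, w_d

def interior_fixed_points(grid=np.linspace(1e-3, 1 - 1e-3, 999)):
    """Find where the replicator flow x' = x(1-x)(w_c - w_d) changes sign."""
    diff = np.array([payoffs(x)[0] - payoffs(x)[1] for x in grid])
    return grid[np.flatnonzero(np.diff(np.sign(diff)))]

# Lower root: unstable invasion threshold; upper root: stable coexistence.
print(interior_fixed_points())
```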
Mohr, David C; Schueller, Stephen M; Montague, Enid; Burns, Michelle Nicole; Rashidi, Parisa
2014-06-05
A growing number of investigators have commented on the lack of models to inform the design of behavioral intervention technologies (BITs). BITs, which include a subset of mHealth and eHealth interventions, employ a broad range of technologies, such as mobile phones, the Web, and sensors, to support users in changing behaviors and cognitions related to health, mental health, and wellness. We propose a model that conceptually defines BITs, from the clinical aim to the technological delivery framework. The BIT model defines both the conceptual and technological architecture of a BIT. Conceptually, a BIT model should answer the questions why, what, how (conceptual and technical), and when. While BITs generally have a larger treatment goal, such goals generally consist of smaller intervention aims (the "why") such as promotion or reduction of specific behaviors, and behavior change strategies (the conceptual "how"), such as education, goal setting, and monitoring. Behavior change strategies are instantiated with specific intervention components or "elements" (the "what"). The characteristics of intervention elements may be further defined or modified (the technical "how") to meet the needs, capabilities, and preferences of a user. Finally, many BITs require specification of a workflow that defines when an intervention component will be delivered. The BIT model includes a technological framework (BIT-Tech) that can integrate and implement the intervention elements, characteristics, and workflow to deliver the entire BIT to users over time. This implementation may be either predefined or include adaptive systems that can tailor the intervention based on data from the user and the user's environment. The BIT model provides a step towards formalizing the translation of developer aims into intervention components, larger treatments, and methods of delivery in a manner that supports research and communication between investigators on how to design, develop, and deploy BITs.
Application-Defined Decentralized Access Control
Xu, Yuanzhong; Dunn, Alan M.; Hofmann, Owen S.; Lee, Michael Z.; Mehdi, Syed Akbar; Witchel, Emmett
2014-01-01
DCAC is a practical OS-level access control system that supports application-defined principals. It allows normal users to perform administrative operations within their privilege, enabling isolation and privilege separation for applications. It does not require centralized policy specification or management, giving applications freedom to manage their principals while the policies are still enforced by the OS. DCAC uses hierarchically-named attributes as a generic framework for user-defined policies such as groups defined by normal users. For both local and networked file systems, its execution time overhead is between 0%–9% on file system microbenchmarks, and under 1% on applications. This paper shows the design and implementation of DCAC, as well as several real-world use cases, including sandboxing applications, enforcing server applications’ security policies, supporting NFS, and authenticating user-defined sub-principals in SSH, all with minimal code changes. PMID:25426493
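A toy illustration of the hierarchical-attribute idea; the names and policy semantics below are invented for the example and are not DCAC's actual API or wire format.

```python
# Hierarchically-named attributes in the spirit of DCAC: an attribute
# implies authority over everything beneath it in the name hierarchy.
def implies(held: str, required: str) -> bool:
    """Holding '.u.alice' grants '.u.alice.photos' but not '.u.bob'."""
    return required == held or required.startswith(held + ".")

def may_access(process_attrs, object_acl) -> bool:
    """Grant access if any held attribute implies an ACL entry."""
    return any(implies(h, r) for h in process_attrs for r in object_acl)

assert may_access({".u.alice"}, {".u.alice.photos"})
assert not may_access({".u.alice.photos"}, {".u.alice"})  # no upward traversal
```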
Challenging Aerospace Problems for Intelligent Systems
NASA Technical Reports Server (NTRS)
Krishnakumar, Kalmanje; Kanashige, John; Satyadas, A.; Clancy, Daniel (Technical Monitor)
2002-01-01
In this paper we highlight four problem domains that are well suited and challenging for intelligent system technologies. The problems are defined and an outline of a probable approach is presented. No attempt is made to define the problems as test cases. In other words, no data or set of equations that a user can code and get results are provided. The main idea behind this paper is to motivate intelligent system researchers to examine problems that will elevate intelligent system technologies and applications to a higher level.
NASA Astrophysics Data System (ADS)
Cai, Tao; Guo, Songtao; Li, Yongzeng; Peng, Di; Zhao, Xiaofeng; Liu, Yingzheng
2018-04-01
The mechanoluminescent (ML) sensor is a newly developed non-invasive technique for stress/strain measurement. However, its application has mostly been restricted to qualitative measurement due to the lack of a well-defined relationship between ML intensity and stress. To achieve accurate stress measurement, an intensity ratio model was proposed in this study to establish a quantitative relationship between the stress condition and its ML intensity in elastic deformation. To verify the proposed model, experiments were carried out on an ML measurement system using resin samples mixed with the sensor material SrAl2O4:Eu2+, Dy3+. The ML intensity ratio was found to depend on the applied stress and strain rate, and the relationship acquired from the experimental results agreed well with the proposed model. The current study provides a physical explanation for the relationship between ML intensity and stress condition. The proposed model is applicable to various SrAl2O4:Eu2+, Dy3+-based ML measurements in elastic deformation, and can provide a useful reference for quantitative stress measurement using ML sensors in general.
Model of formation of droplets during electric arc surfacing of functional coatings
NASA Astrophysics Data System (ADS)
Sarychev, Vladimir D.; Granovskii, Alexei Yu; Nevskii, Sergey A.; Gromov, Victor E.
2016-01-01
A mathematical model was developed for the initial stage of formation of an electrode metal droplet in the process of arc welding. Its essence is that the presence of a temperature gradient in the boundary layer of the molten metal causes thermo-capillary instability, which leads to the formation of electrode metal droplets. A system of equations including the Navier-Stokes, heat conduction and Maxwell's equations was solved together with the boundary conditions for the electrode-plasma system. The dispersion equation for thermo-capillary waves in the linear approximation for a plane layer was derived and analyzed. The critical wavelengths at which thermo-capillary instability appears were found to lie in the nanometer range. The parameters at which fine-droplet transfer of the material takes place were theoretically determined.
A model for anomaly classification in intrusion detection systems
NASA Astrophysics Data System (ADS)
Ferreira, V. O.; Galhardi, V. V.; Gonçalves, L. B. L.; Silva, R. C.; Cansian, A. M.
2015-09-01
Intrusion Detection Systems (IDS) are traditionally divided into two types according to the detection methods they employ, namely (i) misuse detection and (ii) anomaly detection. Anomaly detection has been widely used, and its main advantage is the ability to detect new attacks. However, the analysis of the anomalies generated can become expensive, since they often carry no clear information about the malicious events they represent. In this context, this paper presents a model for automated classification of alerts generated by an anomaly-based IDS. The main goal is either to classify detected anomalies into well-defined attack taxonomies or to identify whether an alert is a false positive misclassified by the IDS. Some common attacks on computer networks were considered, and we achieved important results that can equip security analysts with better resources for their analyses.
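A hedged sketch of the classification idea on invented alert features; the paper's actual feature set and classifier are not specified in the abstract, so everything below is illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)
# Toy alert features: [packet rate, distinct destination ports, mean payload bytes]
scan = np.column_stack([rng.normal(50, 10, 80), rng.normal(200, 30, 80),
                        rng.normal(60, 10, 80)])
dos = np.column_stack([rng.normal(900, 80, 80), rng.normal(3, 1, 80),
                       rng.normal(1200, 100, 80)])
fp = np.column_stack([rng.normal(40, 10, 80), rng.normal(5, 2, 80),
                      rng.normal(500, 80, 80)])
X = np.vstack([scan, dos, fp])
y = np.repeat(["port-scan", "dos", "false-positive"], 80)

clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(clf.predict([[45, 180, 70]]))   # classified as a likely port scan
```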
Sire, Clément
2004-09-24
We study the autocorrelation function of a conserved spin system following a quench at the critical temperature. Defining the correlation length L(t) ~ t^{1/z}, we find that for times t' and t satisfying L(t')
Cannon, Robert C; Gleeson, Padraig; Crook, Sharon; Ganapathy, Gautham; Marin, Boris; Piasini, Eugenio; Silver, R Angus
2014-01-01
Computational models are increasingly important for studying complex neurophysiological systems. As scientific tools, it is essential that such models can be reproduced and critically evaluated by a range of scientists. However, published models are currently implemented using a diverse set of modeling approaches, simulation tools, and computer languages making them inaccessible and difficult to reproduce. Models also typically contain concepts that are tightly linked to domain-specific simulators, or depend on knowledge that is described exclusively in text-based documentation. To address these issues we have developed a compact, hierarchical, XML-based language called LEMS (Low Entropy Model Specification), that can define the structure and dynamics of a wide range of biological models in a fully machine readable format. We describe how LEMS underpins the latest version of NeuroML and show that this framework can define models of ion channels, synapses, neurons and networks. Unit handling, often a source of error when reusing models, is built into the core of the language by specifying physical quantities in models in terms of the base dimensions. We show how LEMS, together with the open source Java and Python based libraries we have developed, facilitates the generation of scripts for multiple neuronal simulators and provides a route for simulator free code generation. We establish that LEMS can be used to define models from systems biology and map them to neuroscience-domain specific simulators, enabling models to be shared between these traditionally separate disciplines. LEMS and NeuroML 2 provide a new, comprehensive framework for defining computational models of neuronal and other biological systems in a machine readable format, making them more reproducible and increasing the transparency and accessibility of their underlying structure and properties.
MBSE-Driven Visualization of Requirements Allocation and Traceability
NASA Technical Reports Server (NTRS)
Jackson, Maddalena; Wilkerson, Marcus
2016-01-01
In a Model Based Systems Engineering (MBSE) infusion effort, there is usually a concerted effort to define the information architecture, ontologies, and patterns that drive the construction and architecture of MBSE models, but less attention is given to the logical follow-on of that effort: how to practically leverage the resulting semantic richness of a well-formed, populated model to enable systems engineers to work more effectively, as MBSE promises. While ontologies and patterns are absolutely necessary, an MBSE effort must also design and provide practical demonstrations of value (through human-understandable representations of model data that address stakeholder concerns) or it will not succeed. This paper discusses the opportunities that exist for visualization in making the richness of a well-formed model accessible to stakeholders, specifically stakeholders who rely on the model for their day-to-day work. It discusses the value added by MBSE-driven visualizations in the context of a small case study of interactive visualizations created and used on NASA's proposed Europa Mission. The case study visualizations were created for the purpose of understanding and exploring targeted aspects of requirements flow and allocation, and of comparing the structure of that flow-down to a conceptual project decomposition. The work presented in this paper is an example of a product that leverages the richness and formalisms of our knowledge representation while also responding to the quality attributes SEs care about.
NASA Astrophysics Data System (ADS)
Roman, D. R.
2017-12-01
In 2022, the National Geodetic Survey will replace all three NAD 83 reference frames with four new terrestrial reference frames. Each frame will be named after a tectonic plate (North American, Pacific, Caribbean and Mariana), and each will be related to the IGS frame through three Euler pole parameters (EPPs). This talk will focus on practical application in the Caribbean region. A working group is being re-established for development of the North American frame and will likely also analyze the Pacific region. Both of these regions are adequately covered by existing CORS sites for modeling the EPPs. The Mariana region currently lacks sufficient coverage, but a separate project is underway to collect additional information to help define EPPs for that region at a later date. The Caribbean region has robust coverage through UNAVCO's COCONet and other data sets, but these require further analysis. This discussion will focus on a practical examination of Caribbean sites to establish candidates for determining the Caribbean frame EPPs, as well as an examination of any remaining velocities that might inform a model of the residual velocities within that frame (Intra-Frame Velocity Model). NGS has a vested interest in defining such a model to meet obligations to U.S. citizens in Puerto Rico and the U.S. Virgin Islands. Beyond this, NGS aims to collaborate with other countries in the region, through efforts with SIRGAS and UN-GGIM-Americas, on a more broadly accepted regional model that serves everyone's needs.
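The geometry behind the EPPs is a rigid rotation: the velocity of a point on the plate is the cross product of the rotation vector and the position vector. A sketch with an illustrative pole, which is not the NGS value:

```python
import numpy as np

def plate_velocity(lat_deg, lon_deg, pole_lat, pole_lon, rate_deg_per_myr):
    """Horizontal velocity (east, north) in mm/yr of a point rotating about
    an Euler pole on a spherical Earth. Geometry only; the pole below is a
    placeholder, not the 2022 EPP values."""
    R = 6371e3                                   # mean Earth radius [m]
    def unit(lat, lon):
        lat, lon = np.radians([lat, lon])
        return np.array([np.cos(lat) * np.cos(lon),
                         np.cos(lat) * np.sin(lon),
                         np.sin(lat)])
    omega = np.radians(rate_deg_per_myr) / 1e6 / 3.15576e7   # rad/s
    v = np.cross(omega * unit(pole_lat, pole_lon), R * unit(lat_deg, lon_deg))
    # Project ECEF velocity onto local east/north unit vectors.
    lat, lon = np.radians([lat_deg, lon_deg])
    east = np.array([-np.sin(lon), np.cos(lon), 0.0])
    north = np.array([-np.sin(lat) * np.cos(lon),
                      -np.sin(lat) * np.sin(lon), np.cos(lat)])
    to_mmyr = 1000 * 3.15576e7                   # m/s -> mm/yr
    return v @ east * to_mmyr, v @ north * to_mmyr

# Example: a site in Puerto Rico with a hypothetical Caribbean pole.
print(plate_velocity(18.2, -66.5, 36.0, -98.0, 0.26))
```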
NASA Astrophysics Data System (ADS)
Massoud, E. C.; Vrugt, J. A.
2015-12-01
Trees and forests play a key role in controlling the water and energy balance at the land-air surface. This study reports on the calibration of an integrated soil-tree-atmosphere continuum (STAC) model using Bayesian inference with the DREAM algorithm and temporal observations of soil moisture content, matric head, sap flux, and leaf water potential from the King's River Experimental Watershed (KREW) in the southern Sierra Nevada mountain range in California. Water flow through the coupled system is described using the Richards' equation with both the soil and tree modeled as a porous medium with nonlinear soil and tree water relationships. Most of the model parameters appear to be reasonably well defined by calibration against the observed data. The posterior mean simulation reproduces the observed soil and tree data quite accurately, but a systematic mismatch is observed between early afternoon measured and simulated sap fluxes. We will show how this points to a structural error in the STAC-model and suggest and test an alternative hypothesis for root water uptake that alleviates this problem.
A tissue-engineered subcutaneous pancreatic cancer model for antitumor drug evaluation.
He, Qingyi; Wang, Xiaohui; Zhang, Xing; Han, Huifang; Han, Baosan; Xu, Jianzhong; Tang, Kanglai; Fu, Zhiren; Yin, Hao
2013-01-01
The traditional xenograft subcutaneous pancreatic cancer model is notorious for its low incidence of tumor formation, inconsistent results for the chemotherapeutic effects of drug molecules of interest, and a poor predictive capability for the clinical efficacy of novel drugs. These drawbacks are attributed to a variety of factors, including inoculation of heterogeneous tumor cells from patients with different pathological histories, and use of poorly defined Matrigel®. In this study, we aimed to tissue-engineer a pancreatic cancer model that could readily cultivate a pancreatic tumor derived from highly homogenous CD24+CD44+ pancreatic cancer stem cells delivered by a well-defined electrospun scaffold of poly(glycolide-co-trimethylene carbonate) and gelatin. The scaffold supported in vitro tumorigenesis from CD24+CD44+ cancer stem cells for up to 7 days without inducing apoptosis. Moreover, CD24+CD44+ cancer stem cells delivered by the scaffold grew into a native-like mature pancreatic tumor within 8 weeks in vivo and exhibited accelerated tumorigenesis as well as a higher incidence of tumor formation than the traditional model. In the scaffold model, we discovered that oxaliplatin-gemcitabine (OXA-GEM), a chemotherapeutic regimen, induced tumor regression whereas gemcitabine alone only capped tumor growth. The mechanistic study attributed the superior antitumorigenic performance of OXA-GEM to its ability to induce apoptosis of CD24+CD44+ cancer stem cells. Compared with the traditional model, the scaffold model demonstrated a higher incidence of tumor formation and accelerated tumor growth. Use of a tiny population of highly homogenous CD24+CD44+ cancer stem cells delivered by a well-defined scaffold greatly reduces the variability associated with the traditional model, which uses a heterogeneous tumor cell population and poorly defined Matrigel. The scaffold model is a robust platform for investigating the antitumorigenesis mechanism of novel chemotherapeutic drugs with a special focus on cancer stem cells.
Animal models for osteoporosis
NASA Technical Reports Server (NTRS)
Turner, R. T.; Maran, A.; Lotinun, S.; Hefferan, T.; Evans, G. L.; Zhang, M.; Sibonga, J. D.
2001-01-01
Animal models will continue to be important tools in the quest to understand the contribution of specific genes to establishment of peak bone mass and optimal bone architecture, as well as the genetic basis for a predisposition toward accelerated bone loss in the presence of co-morbidity factors such as estrogen deficiency. Existing animal models will continue to be useful for modeling changes in bone metabolism and architecture induced by well-defined local and systemic factors. However, there is a critical unfulfilled need to develop and validate better animal models to allow fruitful investigation of the interaction of the multitude of factors which precipitate senile osteoporosis. Well-characterized and validated animal models that can be recommended for investigation of the etiology, prevention and treatment of several forms of osteoporosis have been listed in Table 1. Also listed are models which are provisionally recommended. These latter models have potential but are inadequately characterized, deviate significantly from the human response, require careful choice of strain or age, or are not practical for most investigators to adopt. It cannot be stressed strongly enough that the enormous potential of laboratory animals as models for osteoporosis can only be realized if great care is taken in the choice of an appropriate species, age, experimental design, and measurements. Poor choices will result in misinterpretation of results, which ultimately can bring harm to patients who suffer from osteoporosis by delaying advancement of knowledge.
Validation of newly designed regional earth system model (RegESM) for Mediterranean Basin
NASA Astrophysics Data System (ADS)
Turuncoglu, Ufuk Utku; Sannino, Gianmaria
2017-05-01
We present a validation analysis of a regional earth system modeling system (RegESM) for the Mediterranean Basin. The configuration of the modeling system used here includes two active components: a regional climate model (RegCM4) and an ocean modeling system (ROMS). To assess the performance of the coupled modeling system in representing the climate of the basin, the results of the coupled simulation (C50E) are compared to the results of a standalone atmospheric simulation (R50E) as well as to several observation datasets. Although there is a persistent cold bias in fall and winter, also seen in previous studies, the model reproduces the inter-annual variability and the seasonal cycles of sea surface temperature (SST) in generally good agreement with the available observations. The analysis of the near-surface wind distribution and the main circulation of the sea indicates that the coupled model can reproduce the main characteristics of the Mediterranean Sea surface and intermediate layer circulation, as well as the seasonal variability of wind speed and direction, when compared with the available observational datasets. The results also reveal that the simulated near-surface wind speed and direction perform poorly in the Gulf of Lion and surrounding regions, which also contributes to the large positive SST bias in the region, due to the insufficient horizontal resolution of the atmospheric component of the coupled modeling system. The simulated seasonal climatologies of the surface heat flux components are also consistent with the CORE.2 and NOCS datasets, albeit with overestimation of net long-wave radiation and latent heat flux (or evaporation, E), although a large observational uncertainty exists in these variables. The coupled model also tends to improve the latent heat flux by providing a better representation of the air-sea interaction as well as of the total heat flux budget over the sea. Both models are able to reproduce the temporal evolution of the inter-annual anomalies of surface air temperature and precipitation (P) over the defined sub-regions. The Mediterranean water budget estimates (E, P and E-P) also show that the coupled model has high skill in representing the water budget of the Mediterranean Sea. To conclude, the coupled model reproduces the climatological land surface fields and sea surface variables within the range of observational uncertainty, and allows studying air-sea interactions and the main regional climate characteristics of the basin.
Characterization of Strombolian events by using independent component analysis
NASA Astrophysics Data System (ADS)
Ciaramella, A.; de Lauro, E.; de Martino, S.; di Lieto, B.; Falanga, M.; Tagliaferri, R.
2004-10-01
We apply Independent Component Analysis (ICA) to seismic signals recorded at Stromboli volcano. First, we show how ICA works by considering synthetic signals generated by dynamical systems. We then show that Strombolian signals, both tremor and explosions, are similar in the time domain in the high-frequency band (>0.5 Hz). This similarity lends support to the organ-pipe model for the generation of the source of these events. Moreover, we are able to recognize in the tremor signals a low-frequency component (<0.5 Hz) with a well-defined peak at a period of 30 s.
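As a rough illustration of the separation step, the sketch below applies scikit-learn's FastICA to two synthetic sources mixed into three hypothetical "station" recordings; the signal shapes, frequencies, sampling rate, and mixing matrix are illustrative assumptions, not the study's actual data or preprocessing.

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    t = np.linspace(0, 60, 6000)                  # 60 s at ~100 Hz (assumed rate)
    s1 = np.sin(2 * np.pi * 1.5 * t)              # tremor-like component (>0.5 Hz)
    s2 = np.sin(2 * np.pi * t / 30.0)             # slow component (~30 s period)
    S = np.c_[s1, s2] + 0.05 * rng.standard_normal((t.size, 2))

    A = np.array([[1.0, 0.5], [0.7, 1.2], [0.4, 0.9]])  # hypothetical mixing matrix
    X = S @ A.T                                         # three mixed recordings

    ica = FastICA(n_components=2, random_state=0)
    S_est = ica.fit_transform(X)                  # recovered independent components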
Spectroradiometric calibration of the Thematic Mapper and Multispectral Scanner system
NASA Technical Reports Server (NTRS)
Palmer, J. M.; Slater, P. N. (Principal Investigator)
1985-01-01
The effects of the atmosphere on propagating radiation must be known in order to calibrate an in-orbit sensor using ground-based measurements. A set of model atmosphere parameters applicable to the White Sands (New Mexico) area is defined, with particular attention given to those parameters required as input to the Herman Code. The radial size distribution, refractive index, vertical distribution, and visibility of aerosols are discussed, as well as the molecular absorbers in the visible and near-IR wavelengths, which produce strong absorption lines. Solar irradiance is also considered.
1993-01-01
...by physical therapy or no treatment noted that many patients showed gradual recovery, especially if the insult resulted in only mild symptoms. [Report keywords: dysbarism, central nervous system, model, hyperbaric oxygen therapy.] The effect of clinical severity and time to recompression with oxygen on outcome from spinal cord DCS is not well defined in the diving...
Continuous theory of active matter systems with metric-free interactions.
Peshkov, Anton; Ngo, Sandrine; Bertin, Eric; Chaté, Hugues; Ginelli, Francesco
2012-08-31
We derive a hydrodynamic description of metric-free active matter: starting from self-propelled particles aligning with neighbors defined by "topological" rules rather than metric zones (a situation recently argued to be relevant for bird flocks, fish schools, and crowds), we use a kinetic approach to obtain well-controlled nonlinear field equations. We show that the density-independent collision rate per particle characteristic of topological interactions suppresses both the linear instability of the homogeneous ordered phase and the nonlinear density segregation generically present near threshold in metric models, in agreement with microscopic simulations.
Surface inspection system for carriage parts
NASA Astrophysics Data System (ADS)
Denkena, Berend; Acker, Wolfram
2006-04-01
Quality standards are very high in carriage manufacturing because the visual impression of quality is highly relevant to the customer's purchase decision. On carriage parts, even very small dents can become visible on the varnished and polished surface through observed reflections, and industry demands that these form errors be detected on the unvarnished part. To meet these requirements, a stripe projection system for the automatic recognition of waviness and form errors is introduced. It is based on a modified stripe projection method using a high-resolution line scan camera. Particular emphasis is put on achieving a short measuring time and a high depth resolution, aiming at reliable automatic recognition of dents and waviness of 10 μm on large curved surfaces of approximately 1 m width. The resulting point cloud must be filtered in order to detect dents, and a spatial filtering technique is used for this purpose. This works well on smoothly curved surfaces if the frequency parameters are well defined. On more complex parts such as mudguards, the method is limited by the fact that frequencies near the defined dent frequencies also occur within the surface itself. To allow analysis of complex parts, the system is currently being extended to include 3D CAD models in the inspection process. For smoothly curved surfaces, the measuring speed of the prototype is mainly limited by the amount of light produced by the stripe projector; for complex surfaces, it is limited by the time-consuming matching process. Current development therefore focuses on improving the measuring speed.
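The spatial filtering step could look something like the following difference-of-Gaussians band-pass sketch, which suppresses global curvature and fine noise so that dent-scale waviness stands out; the cutoff scales and detection threshold are illustrative assumptions, not the system's actual parameters.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def dent_map(height, sigma_noise=2.0, sigma_form=20.0):
        """Band-pass a height map: keep features between the two scales."""
        fine = gaussian_filter(height, sigma_noise)    # suppress measurement noise
        coarse = gaussian_filter(height, sigma_form)   # estimate the global form
        return fine - coarse                           # residual waviness and dents

    z = np.random.default_rng(1).normal(size=(512, 512))   # stand-in for a scan
    residual = dent_map(z)
    dents = np.abs(residual) > 5 * residual.std()          # crude detection rule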
Effects of flow on insulin fibril formation at an air/water interface
NASA Astrophysics Data System (ADS)
Posada, David; Heldt, Caryn; Sorci, Mirco; Belfort, Georges; Hirsa, Amir
2009-11-01
The amyloid fibril formation process, which is implicated in several diseases such as Alzheimer's and Huntington's, is characterized by the conversion of monomers to oligomers and then to fibrils. Besides well-studied factors such as pH, temperature and concentration, the kinetics of this process are significantly influenced by the presence of solid or fluid interfaces and by flow. By studying the nucleation and growth of a model system (insulin fibrils) in a well-defined flow field with an air/water interface, we can identify the flow conditions that impact protein aggregation kinetics both in the bulk solution and at the air/water interface. The present flow system (deep-channel surface viscometer) consists of an annular region bounded by stationary inner and outer cylinders, an air/water interface, and a floor driven at constant rotation. We show the effects of Reynolds number on the kinetics of the fibrillation process both in the bulk solution and at the air/water interface, as well as on the structure of the resultant amyloid aggregates.
NASA Technical Reports Server (NTRS)
Hanley, G. M.
1980-01-01
The latest technical and programmatic developments are considered, as well as expansions of the Rockwell SPS cost model covering each phase of the program through the year 2030. Comparative cost/economic analyses cover elements of the satellite, the construction system, space transportation vehicles and operations, and the ground receiving station. System plans define time-phased costs and planning requirements that support major milestones through the year 2000. A special analysis is included on the natural resources required to build the SPS reference configuration. An appendix contains the SPS Work Breakdown Structure and dictionary, along with detailed cost data sheets on each system and main element of the program. Over 200 line items address DDT&E, the theoretical first unit, investment cost per satellite, and operations charges for replacement capital and normal operations and maintenance costs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arion is a library and tool set that enables researchers to holistically define test system models. Defining a complex system for testing an algorithm or control requires expertise across multiple domains, and simulating such a system requires the integration of multiple simulators and test hardware, each with its own specification languages and concepts. This demands an extensive set of knowledge and capabilities, and Arion was developed to alleviate this challenge. Arion is a set of Java libraries that abstracts the concepts of the supported simulators into a cohesive model language, allowing users to build models to their needed level of fidelity and expertise. Arion is also a software tool that translates the user's model back into the specification languages of the simulators and test hardware needed for execution.
Meta II: Multi-Model Language Suite for Cyber Physical Systems
2013-03-01
The AVM (META) projects have developed tools for designing cyber-physical (CPS, or mechatronic) systems. These systems are increasingly complex, take much... Exemplified by modern amphibious and ground military... and parametric interface of Simulink models and defines associations with CyPhy components and component interfaces. 2. Embedded Systems Modeling
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-26
... some days and this does not appear to be an error in the modeling system". Commenter referenced... modeling with a readily available modeling system (since construction of a complete modeling system from... from WildEarth Guardians. Comment No. 1--The commenter stated that EPA inappropriately defined the term...
A simple parametric model observer for quality assurance in computer tomography
NASA Astrophysics Data System (ADS)
Anton, M.; Khanin, A.; Kretz, T.; Reginatto, M.; Elster, C.
2018-04-01
Model observers are mathematical classifiers used for the quality assessment of imaging systems such as computed tomography. The quality of the imaging system is quantified by means of the performance of a selected model observer. For binary classification tasks, the performance of the model observer is defined by the area under its ROC curve (AUC). Typically, the AUC is estimated by applying the model observer to a large set of training and test data. However, the recording of these large data sets is not always practical for routine quality assurance. In this paper we propose as an alternative a parametric model observer that is based on a simple phantom, and we provide a Bayesian estimation of its AUC. It is shown that a limited number of repeatedly recorded images (10-15) is already sufficient to obtain results suitable for the quality assessment of an imaging system. A MATLAB® function is provided for the calculation of the results. The performance of the proposed model observer is compared to that of the established channelized Hotelling observer and the nonprewhitening matched filter for simulated images as well as for images obtained from a low-contrast phantom on an x-ray tomography scanner. The results suggest that the proposed parametric model observer, along with its Bayesian treatment, can provide an efficient, practical alternative for the quality assessment of CT imaging systems.
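For intuition, the AUC of a model observer can be estimated from its decision scores on signal-present and signal-absent images via the Mann-Whitney statistic, as in the sketch below; this is a generic estimator, not the paper's Bayesian treatment, and the score distributions are made up.

    import numpy as np

    def auc(scores_present, scores_absent):
        """Mann-Whitney estimate of the area under the empirical ROC curve."""
        sp = np.asarray(scores_present)[:, None]
        sa = np.asarray(scores_absent)[None, :]
        return float((sp > sa).mean() + 0.5 * (sp == sa).mean())

    rng = np.random.default_rng(0)
    auc_hat = auc(rng.normal(1.0, 1.0, 200),   # hypothetical observer outputs
                  rng.normal(0.0, 1.0, 200))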
CCTV Coverage Index Based on Surveillance Resolution and Its Evaluation Using 3D Spatial Analysis
Choi, Kyoungah; Lee, Impyeong
2015-01-01
We propose a novel approach to evaluating how effectively a closed-circuit television (CCTV) system can monitor a targeted area. Given 3D models of the target area and the camera parameters of the CCTV system, the approach produces a surveillance coverage index, newly defined in this study as a quantitative measure of surveillance performance. This index indicates the proportion of the space monitored with sufficient resolution relative to the entire space of the target area. It is determined by computing the surveillance resolution at every position and orientation, which indicates how closely a specific object can be monitored by the CCTV system. We present a full mathematical derivation for the resolution, which depends on the location and orientation of the object as well as the geometric model of a camera. With the proposed approach, we quantitatively evaluated the surveillance coverage of a CCTV system in an underground parking area. The evaluation produced various quantitative analysis results, allowing us to examine the design of a CCTV system prior to its installation and to understand the surveillance capability of an existing CCTV system. PMID:26389909
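A toy version of the coverage-index computation is sketched below: sample positions in the target space, approximate how many pixels a reference object spans at each position with a simple pinhole model, and report the fraction meeting a required resolution. The pinhole approximation, camera parameters, and thresholds are all assumptions; the paper's derivation additionally accounts for object orientation and the full camera geometry.

    import numpy as np

    focal_px = 1200.0      # focal length in pixels (assumed camera model)
    obj_size = 0.5         # reference object size in metres (assumed)
    required_px = 40.0     # minimum pixels across the object (assumed)

    cam = np.array([0.0, 0.0, 3.0])                            # camera position
    rng = np.random.default_rng(2)
    pts = rng.uniform([-10, -10, 0], [10, 10, 2], (10000, 3))  # sampled space
    dist = np.linalg.norm(pts - cam, axis=1)
    res_px = focal_px * obj_size / dist                        # pixels on object
    coverage_index = float((res_px >= required_px).mean())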
NASA Astrophysics Data System (ADS)
Gülerce, Zeynep; Buğra Soyman, Kadir; Güner, Barış; Kaymakci, Nuretdin
2017-12-01
This contribution provides an updated planar seismic source characterization (SSC) model to be used in the probabilistic seismic hazard assessment (PSHA) for Istanbul. It defines planar rupture systems for the four main segments of the North Anatolian fault zone (NAFZ) that are critical for the PSHA of Istanbul: segments covering the rupture zones of the 1999 Kocaeli and Düzce earthquakes, central Marmara, and Ganos/Saros segments. In each rupture system, the source geometry is defined in terms of fault length, fault width, fault plane attitude, and segmentation points. Activity rates and the magnitude recurrence models for each rupture system are established by considering geological and geodetic constraints and are tested against the observed seismicity associated with the rupture system. Uncertainty in the SSC model parameters (e.g., b-value, maximum magnitude, slip rate, weights of the rupture scenarios) is considered, whereas the uncertainty in the fault geometry is not included in the logic tree. To acknowledge the effect on the hazard of earthquakes that are not associated with the defined rupture systems, a background zone is introduced and the seismicity rates in the background zone are calculated using a smoothed-seismicity approach. The state-of-the-art SSC model presented here is the first fully documented and ready-to-use fault-based SSC model developed for the PSHA of Istanbul.
Review of Soil Models and Their Implementation in Multibody System Algorithms
2012-02-01
models for use with ABAQUS. The constitutive models of the user-defined materials can be programmed in the user subroutine UMAT. Many user-defined... mechanical characteristics of mildly or moderately expansive unsaturated soils. As originally proposed by Alonso, utilizing a critical state framework... a review of some of these programs is presented. ABAQUS is a popular FE analysis program that contains a wide variety of material models and
Roadmap for cardiovascular circulation model
Bradley, Christopher P.; Suresh, Vinod; Mithraratne, Kumar; Muller, Alexandre; Ho, Harvey; Ladd, David; Hellevik, Leif R.; Omholt, Stig W.; Chase, J. Geoffrey; Müller, Lucas O.; Watanabe, Sansuke M.; Blanco, Pablo J.; de Bono, Bernard; Hunter, Peter J.
2016-01-01
Computational models of many aspects of the mammalian cardiovascular circulation have been developed. Indeed, along with orthopaedics, this area of physiology is one that has attracted much interest from engineers, presumably because the equations governing blood flow in the vascular system are well understood and can be solved with well‐established numerical techniques. Unfortunately, there have been only a few attempts to create a comprehensive public domain resource for cardiovascular researchers. In this paper we propose a roadmap for developing an open source cardiovascular circulation model. The model should be registered to the musculo‐skeletal system. The computational infrastructure for the cardiovascular model should provide for near real‐time computation of blood flow and pressure in all parts of the body. The model should deal with vascular beds in all tissues, and the computational infrastructure for the model should provide links into CellML models of cell function and tissue function. In this work we review the literature associated with 1D blood flow modelling in the cardiovascular system, discuss model encoding standards, software and a model repository. We then describe the coordinate systems used to define the vascular geometry, derive the equations and discuss the implementation of these coupled equations in the open source computational software OpenCMISS. Finally, some preliminary results are presented and plans outlined for the next steps in the development of the model, the computational software and the graphical user interface for accessing the model. PMID:27506597
Queuing Models of Tertiary Storage
NASA Technical Reports Server (NTRS)
Johnson, Theodore
1996-01-01
Large-scale scientific projects generate and use large amounts of data. For example, the NASA Earth Observing System Data and Information System (EOSDIS) project is expected to archive one petabyte per year of raw satellite data. This data is made automatically available for processing into higher-level data products and for dissemination to the scientific community. Such large volumes of data can only be stored in robotic storage libraries (RSLs) for near-line access. A characteristic of RSLs is the use of a robot arm that transfers media between a storage rack and the read/write drives, thus multiplying the capacity of the system. The performance of the RSLs can be a critical limiting factor for the performance of the archive system. However, the many interacting components of an RSL make a performance analysis difficult. In addition, different RSL components can have widely varying performance characteristics. This paper describes our work to develop performance models of an RSL in isolation. Next we show how the RSL model can be incorporated into a queuing network model. We use the models to make some example performance studies of archive systems. The models described in this paper, developed for the NASA EOSDIS project, are implemented in C with a well-defined interface. The source code, accompanying documentation, and sample Java applets are available at: http://www.cis.ufl.edu/ted/
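As a back-of-the-envelope illustration of this kind of model, the sketch below treats the robot arm as an M/M/1 server and the drive pool as an M/M/c queue and adds the mean stage delays; all rates and counts are illustrative assumptions, not parameters from the paper's RSL models.

    from math import factorial

    def mm1_wait(lam, mu):
        """Mean M/M/1 queueing delay (waiting time, excluding service)."""
        rho = lam / mu
        return rho / (mu - lam)

    def mmc_wait(lam, mu, c):
        """Mean M/M/c queueing delay via the Erlang C formula."""
        a, rho = lam / mu, lam / (c * mu)
        p0 = 1.0 / (sum(a**k / factorial(k) for k in range(c))
                    + a**c / (factorial(c) * (1 - rho)))
        erlang_c = a**c * p0 / (factorial(c) * (1 - rho))
        return erlang_c / (c * mu - lam)

    lam = 1 / 90.0                                  # one request per 90 s (assumed)
    robot = mm1_wait(lam, 1 / 12.0) + 12.0          # 12 s mean mount time (assumed)
    drives = mmc_wait(lam, 1 / 300.0, 4) + 300.0    # 4 drives, 300 s mean read
    print(f"rough mean archive response: {robot + drives:.0f} s")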
NASA Astrophysics Data System (ADS)
Smith, Katharine A.; Schlag, Zachary; North, Elizabeth W.
2018-07-01
Coupled three-dimensional circulation and biogeochemical models predict changes in water properties that can be used to define fish habitat, including physiologically important parameters such as temperature, salinity, and dissolved oxygen. However, methods for calculating the volume of habitat defined by the intersection of multiple water properties are not well established for coupled three-dimensional models. The objectives of this research were to examine multiple methods for calculating habitat volume from three-dimensional model predictions, select the most robust approach, and provide an example application of the technique. Three methods were assessed: the "Step", "Ruled Surface", and "Pentahedron" methods, the latter of which was developed as part of this research. Results indicate that the analytical Pentahedron method is exact, computationally efficient, and preserves continuity in water properties between adjacent grid cells. As an example application, the Pentahedron method was implemented within the Habitat Volume Model (HabVol) using output from a circulation model with an Arakawa C-grid and physiological tolerances of juvenile striped bass (Morone saxatilis). This application demonstrates that the analytical Pentahedron method can be successfully applied to calculate habitat volume using output from coupled three-dimensional circulation and biogeochemical models, and it indicates that the Pentahedron method has wide application to aquatic and marine systems for which these models exist and physiological tolerances of organisms are known.
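For contrast with the analytical Pentahedron method, the simplest "Step"-style calculation can be sketched as below: a cell contributes its full volume when all predicted properties lie within the organism's tolerances. The arrays, tolerances, and uniform cell volume are stand-in assumptions, not values from the study.

    import numpy as np

    rng = np.random.default_rng(3)
    temp = rng.uniform(5, 30, (20, 40, 40))       # degC, stand-in model output
    salt = rng.uniform(0, 35, temp.shape)         # salinity
    oxy = rng.uniform(0, 10, temp.shape)          # dissolved oxygen, mg/L
    cell_vol = np.full(temp.shape, 1.0e6)         # m^3 per cell (uniform here)

    # hypothetical physiological tolerances, not M. saxatilis values
    suitable = (temp > 10) & (temp < 27) & (salt < 25) & (oxy > 3)
    habitat_volume = cell_vol[suitable].sum()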
The Dripping Handrail Model: Transient Chaos in Accretion Systems
NASA Technical Reports Server (NTRS)
Young, Karl; Scargle, Jeffrey D.; Cuzzi, Jeffrey (Technical Monitor)
1995-01-01
We define and study a simple dynamical model for accretion systems, the "dripping handrail" (DHR). The time evolution of this spatially extended system is a mixture of periodic and apparently random (but actually deterministic) behavior. The nature of this mixture depends on the values of its physical parameters - the accretion rate, diffusion coefficient, and density threshold. The aperiodic component is a special kind of deterministic chaos called transient chaos. The model can simultaneously exhibit both the quasiperiodic oscillations (QPO) and very low frequency noise (VLFN) that characterize the power spectra of fluctuations of several classes of accretion systems in astronomy. For this reason, our model may be relevant to many such astrophysical systems, including binary stars with accretion onto a compact object - white dwarf, neutron star, or black hole - as well as active galactic nuclei. We describe the systematics of the DHR's temporal behavior, by exploring its physical parameter space using several diagnostics: power spectra, wavelet "scalegrams," and Lyapunov exponents. In addition, we note that for large accretion rates the DHR has periodic modes; the effective pulse shapes for these modes - evaluated by folding the time series at the known period - bear a resemblance to the similarly-determined shapes for some x-ray pulsars. The pulsing observed in some of these systems may be such periodic-mode accretion, and not due to pure rotation as in the standard pulsar model.
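A coupled-map-lattice reading of the dripping handrail can be sketched as follows: each cell on a ring accretes mass, diffuses with its neighbours, and "drips" when it crosses a density threshold. The update rule and parameter values here are illustrative assumptions and may differ in detail from the paper's definition.

    import numpy as np

    N, steps = 64, 2000
    accretion, diffusion, threshold = 0.005, 0.3, 1.0
    rho = np.random.default_rng(4).uniform(0, threshold, N)
    light_curve = np.empty(steps)

    for t in range(steps):
        # diffusive coupling with nearest neighbours on the ring
        rho = (1 - diffusion) * rho \
              + 0.5 * diffusion * (np.roll(rho, 1) + np.roll(rho, -1))
        rho += accretion                      # steady accretion onto each cell
        drips = rho >= threshold
        rho[drips] -= threshold               # a cell "drips", releasing its load
        light_curve[t] = drips.sum()          # crude proxy for emitted flux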
A general method for radio spectrum efficiency defining
NASA Astrophysics Data System (ADS)
Ramadanovic, Ljubomir M.
1986-08-01
A general method for defining radio spectrum efficiency is proposed. Although simple, it can be applied to various radio services. The concept of spectral elements as information carriers is introduced to enable the organization of larger spectral spaces (radio network models) characteristic of a particular radio network. The method is applied to several radio network models, concerning cellular radio telephone systems and digital radio relay systems, to verify its unified approach. All discussed radio services operate continuously.
Physical characteristics and evolutionary trends of continental rifts
NASA Technical Reports Server (NTRS)
Ramberg, I. B.; Morgan, P.
1984-01-01
Rifts may be defined as zones beneath which the entire lithosphere has ruptured in extension. They are widespread and occur in a variety of tectonic settings, and range up to 2,600 m.y. in age. The object of this review is to highlight characteristic features of modern and ancient rifts, to emphasize differences and similarities in order to help characterize evolutionary trends, to identify physical conditions favorable for initiation as well as termination of rifting, and to provide constraints for future modeling studies of rifting. Rifts are characterized on the basis of their structural, geomorphic, magmatic and geophysical features and the diverse character of these features and their evolutionary trends through time are discussed. Mechanisms of rifting are critically examined in terms of the physical characteristics and evolutionary trends of rifts, and it is concluded that while simple models can give valuable insight into specific processes of rifting, individual rifts can rarely, if ever, be characterized by well defined trends predicted by these models. More data are required to clearly define evolutionary trends, and the models require development to incorporate the effects of lithospheric heterogeneities and complex geologic histories.
On the pilot's behavior of detecting a system parameter change
NASA Technical Reports Server (NTRS)
Morizumi, N.; Kimura, H.
1986-01-01
The reaction of a human pilot, engaged in compensatory control, to a sudden change in the controlled element's characteristics is described. Taking the case where the change manifests itself as a variance change of the monitored signal, it is shown that the detection time, defined as the time elapsed until the pilot detects the change, is related to the monitored signal and its derivative. The detection behavior is then modeled by an optimal controller, an optimal estimator, and a variance-ratio test mechanism applied to the monitored signal and its derivative. Results of a digital simulation show that the pilot's detection behavior can be well represented by the proposed model.
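A minimal sketch of a variance-ratio detection rule of this kind follows: compare the variance of the monitored signal in a sliding window with a reference variance and flag a change when the ratio crosses a threshold. The window length and threshold are assumptions, not the paper's values, and the full model also uses the signal's derivative.

    import numpy as np

    def detect_variance_change(x, ref_var, window=50, ratio_threshold=2.0):
        """Return the first index at which the windowed variance ratio trips."""
        for i in range(window, len(x)):
            if np.var(x[i - window:i]) / ref_var > ratio_threshold:
                return i
        return None

    rng = np.random.default_rng(5)
    signal = np.r_[rng.normal(0, 1.0, 500),
                   rng.normal(0, 2.0, 500)]        # variance change at index 500
    print(detect_variance_change(signal, ref_var=1.0))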
Schmit, Alexandre; Salkin, Louis; Courbin, Laurent; Panizza, Pascal
2014-07-14
The combination of two drop makers, such as flow-focusing geometries or T-junctions, is commonly used in microfluidics to fabricate monodisperse double emulsions and novel fluid-based materials. Here we investigate the physics of the encapsulation of small droplets inside large drops that is at the core of such processes. The number of droplets per drop, studied over time for large sequences of consecutive drops, reveals that the dynamics of these systems are complex: we find a succession of well-defined elementary patterns and defects. We present a simple model based on a discrete approach that predicts the nature of these patterns and their non-trivial scheme of arrangement in a sequence as a function of the ratio of the two timescales of the problem, the production times of droplets and drops. Experiments validate our model, as the data agree very well with its predictions.
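One plausible reading of the counting argument behind such patterns is sketched below: if droplets are emitted every tau_d and collecting drops pass every tau_D, drop n captures a floor-difference count in the ratio r = tau_D / tau_d, producing periodic patterns for some ratios and near-periodic patterns with "defects" otherwise. This is offered as an assumption, not the paper's exact formulation.

    import math

    def counts_per_drop(r, n_drops):
        """Droplets captured by each of the first n_drops drops, r = tau_D/tau_d."""
        return [math.floor((n + 1) * r) - math.floor(n * r) for n in range(n_drops)]

    print(counts_per_drop(2.5, 10))   # alternating 2s and 3s
    print(counts_per_drop(3.0, 10))   # perfectly periodic: 3 droplets per drop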
Long-term predictions of minewater geothermal systems heat resources
NASA Astrophysics Data System (ADS)
Harcout-Menou, Virginie; De Ridder, Fjo; Laenen, Ben; Ferket, Helga
2014-05-01
Abandoned underground mines usually flood due to the natural rise of the water table. In most cases the process is relatively slow, giving the mine water time to equilibrate thermally with the surrounding rock massif. Typical mine water temperatures are too low for direct heating use, but are well suited to combination with heat pumps. For example, heat extracted from the mine can be used during winter for space heating, while the process can be reversed during summer to provide space cooling. Although not yet widespread, the use of low-temperature geothermal energy from abandoned mines has already been implemented in the Netherlands, Spain, the USA, Germany, and the UK. Reliable reservoir modelling is crucial to predict how geothermal minewater systems will react to predefined exploitation schemes and to define the energy potential and development strategy of large-scale geothermal cold/heat storage minewater systems. However, most numerical reservoir modelling software is developed for typical environments, such as porous media (among others, many codes developed for petroleum reservoirs or groundwater formations), and cannot be applied to mine systems. Indeed, mines are atypical environments that encompass different types of flow, namely porous media flow, fracture flow, and open pipe flow, usually described with different modelling codes. Ideally, 3D models accounting for the subsurface geometry, geology, hydrogeology, thermal aspects, and flooding history of the mine, as well as the long-term effects of heat extraction, should be used. A new modelling approach is proposed here to predict the long-term behaviour of minewater geothermal systems in a reactive and reliable manner. The simulation method integrates concepts for heat and mass transport through various media (e.g., back-filled areas, fractured rock, fault zones). As a base, the standard software EPANET2 (Rossman 1999; 2000) was used, with additional equations implemented to describe heat flow through the mine (both through open pipes and from the rock massif). Among others, parametric methods are used to bypass some shortcomings in the physical models used for the subsurface. The advantage is that the complete geometry of the mine workings can be integrated and that computing is fast enough to allow implementing and testing several scenarios (e.g., contributions from fault zones, different assumptions about the actual status of shafts, drifts, and mined-out areas) in an efficient way (Ferket et al., 2011). EPANET makes it possible to incorporate the full complexity of the subsurface mine structure. As a result, the flooded mine is treated as a network of pipes, each with a custom-defined diameter, length, and roughness.
Putting the Library at Students' Fingertips
ERIC Educational Resources Information Center
Foley, Marianne
2012-01-01
The absence of a well-defined space for library resources within course-management systems has been well documented in library literature. Academic libraries have sought to remedy this deficiency in numerous ways. This article describes how a "library nugget," or module, was added to the Buffalo State College course-management system,…
NASA Astrophysics Data System (ADS)
Marseille, Gert-Jan; Stoffelen, Ad; Barkmeijer, Jan
2008-03-01
Lacking an established methodology to test the potential impact of prospective extensions to the global observing system (GOS) in real atmospheric cases, we developed such a method, called the Sensitivity Observing System Experiment (SOSE). For example, since the GOS is non-uniform, it is of interest to investigate the benefit of complementary observing systems filling its gaps. In a SOSE, adjoint sensitivity structures are used to define a pseudo-true atmospheric state for the simulation of the prospective observing system. Next, the synthetic observations are used together with real observations from the existing GOS in a state-of-the-art Numerical Weather Prediction (NWP) model to assess the potential added value of the new observing system. Unlike full observing system simulation experiments (OSSEs), a SOSE can be applied to real extreme events that were badly forecast operationally, and it requires only the simulation of the new instrument. As such, SOSE is an effective tool, for example, for defining observation requirements for extensions to the GOS. These observation requirements may serve as input for the design of an operational network of prospective observing systems. In a companion paper we use SOSE to simulate potential future space-borne Doppler Wind Lidar (DWL) scenarios and assess their capability to sample meteorologically sensitive areas not well captured by the current GOS, in particular over the Northern Hemisphere oceans.
Spectral Entropies as Information-Theoretic Tools for Complex Network Comparison
NASA Astrophysics Data System (ADS)
De Domenico, Manlio; Biamonte, Jacob
2016-10-01
Any physical system can be viewed from the perspective that information is implicitly represented in its state. However, the quantification of this information when it comes to complex networks has remained largely elusive. In this work, we use techniques inspired by quantum statistical mechanics to define an entropy measure for complex networks and to develop a set of information-theoretic tools, based on network spectral properties, such as Rényi q entropy, generalized Kullback-Leibler and Jensen-Shannon divergences, the latter allowing us to define a natural distance measure between complex networks. First, we show that by minimizing the Kullback-Leibler divergence between an observed network and a parametric network model, inference of model parameter(s) by means of maximum-likelihood estimation can be achieved and model selection can be performed with appropriate information criteria. Second, we show that the information-theoretic metric quantifies the distance between pairs of networks and we can use it, for instance, to cluster the layers of a multilayer system. By applying this framework to networks corresponding to sites of the human microbiome, we perform hierarchical cluster analysis and recover with high accuracy existing community-based associations. Our results imply that spectral-based statistical inference in complex networks results in demonstrably superior performance as well as a conceptual backbone, filling a gap towards a network information theory.
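A minimal sketch of one such spectral entropy follows: build a density matrix from the graph Laplacian as rho = exp(-beta L)/Z and take its von Neumann entropy. The specific normalization and the choice of beta follow one common convention and are assumptions here, not necessarily the paper's exact definitions.

    import numpy as np
    import networkx as nx

    def spectral_entropy(G, beta=1.0):
        """Von Neumann entropy of rho = exp(-beta * L) / Z for graph G."""
        L = nx.laplacian_matrix(G).toarray().astype(float)
        w = np.linalg.eigvalsh(L)          # Laplacian spectrum
        p = np.exp(-beta * w)
        p /= p.sum()                       # eigenvalues of the density matrix
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    G1 = nx.erdos_renyi_graph(50, 0.1, seed=0)
    G2 = nx.barabasi_albert_graph(50, 3, seed=0)
    print(spectral_entropy(G1), spectral_entropy(G2))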
Mathematical Methods of System Analysis in Construction Materials
NASA Astrophysics Data System (ADS)
Garkina, Irina; Danilov, Alexander
2017-10-01
System attributes of construction materials are defined: the complexity of the object, the integrity of its set of elements, the existence of essential, stable relations between elements that define the integrative properties of the system, the existence of structure, etc. On the basis of cognitive modelling (intensive and extensive properties; operating parameters) of materials as complex systems, and of the construction of a cognitive map, a hierarchical modular structure of quality criteria is built. This effectively forms the basis of a specification for the development of a material (the required organization and properties). Proceeding from the modern paradigm (a model for the statement of problems and their solutions) of material development, levels and modules are specified in the structure of the material. Using the principles of system analysis, the technological process can then be considered as a complex system consisting of elements at the distinguished specification levels, from the atomic scale to individual processes. Each element of the system, depending on the operative objective, is considered as a separate system with more detailed levels of decomposition. Semantic and qualitative analyses of the object are carried out (the research objective, decomposition levels, separate elements, and the links between them are identified). The available knowledge is then formalized in the form of mathematical models (structural identification), and the relations between input and output parameters are determined (parametric identification). Hierarchical structures of quality criteria are constructed for each selected level, and the corresponding hierarchical structures of the system (the material) are built upon them. Regularities of structure formation and property development are considered at levels from the micro- to the macrostructure. The mathematical model of the material is represented as a set of models corresponding to particular criteria, by which separate modules and their levels (the mathematical description, a solution algorithm) are defined. Adequacy is established as the correspondence of modelling results to experimental data, determined by the level of understanding of the process and the validity of the accepted assumptions. The global criterion of material quality is considered as a set of particular criteria (properties). Synthesis of the material is carried out on the basis of single-criterion optimization for each of the chosen particular criteria, and these results are then used in multicriteria optimization. Methods for developing materials as single-purpose or multi-purpose (including contradictory) systems are indicated, and a scheme for the synthesis of composite materials as complex systems is developed. This system approach has been used effectively in the synthesis of composite materials with special properties.
Interactive Tooth Separation from Dental Model Using Segmentation Field
2016-01-01
Tooth segmentation on a dental model is an essential step in computer-aided design systems for orthodontic virtual treatment planning. However, fast and accurate identification of the cutting boundary that separates teeth from the dental model remains a challenge, owing to the varied geometrical shapes of teeth, complex tooth arrangements, differing dental model qualities, and varying degrees of crowding. Most previously presented segmentation approaches cannot balance fine segmentation results against simple operating procedures and low time consumption. In this article, we present a novel, effective, and efficient framework that achieves tooth segmentation based on a segmentation field, which is obtained by solving a linear system defined by a discrete Laplace-Beltrami operator with Dirichlet boundary conditions. A set of contour lines is sampled from the smooth scalar field, and candidate cutting boundaries can be detected in concave regions with large variations of field data. The sensitivity of the segmentation field to concave seams facilitates effective tooth partition and avoids the need for an appropriate curvature threshold value, which is unreliable in some cases. Our tooth segmentation algorithm is robust to dental models of low quality and effective on dental models with different levels of crowding. Experiments, including segmentation tests on dental models of varying complexity, tests on dental meshes with different modeling resolutions and surface noise, and a comparison between our method and the morphologic skeleton segmentation method, demonstrate the effectiveness of our method. PMID:27532266
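The core computation can be illustrated on a toy domain: solve a discrete Laplace problem with Dirichlet boundary conditions to obtain a smooth scalar field whose level sets are candidate cutting contours. Here a plain graph Laplacian on a grid stands in for the cotangent Laplace-Beltrami operator used on real dental meshes; the grid, boundary values, and uniform weights are assumptions.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 20                                            # toy n x n grid "mesh"
    N = n * n
    idx = lambda i, j: i * n + j
    rows, cols = [], []
    for i in range(n):
        for j in range(n):
            for di, dj in ((0, 1), (1, 0)):           # 4-neighbour connectivity
                if i + di < n and j + dj < n:
                    rows += [idx(i, j), idx(i + di, j + dj)]
                    cols += [idx(i + di, j + dj), idx(i, j)]
    W = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(N, N))
    L = (sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W).tocsr()

    field = np.zeros(N)
    dirichlet = {idx(0, j): 0.0 for j in range(n)}            # one side fixed to 0
    dirichlet.update({idx(n - 1, j): 1.0 for j in range(n)})  # other side to 1
    fixed = np.array(list(dirichlet))
    free = np.array([k for k in range(N) if k not in dirichlet])
    field[fixed] = list(dirichlet.values())
    rhs = -L[free][:, fixed] @ field[fixed]
    field[free] = spla.spsolve(L[free][:, free].tocsc(), rhs)
    # level sets of `field` are then sampled as candidate cutting boundaries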
Grinter, David C.; Senanayake, Sanjaya D.; Flege, Jan Ingo
2016-11-15
Ceria is an important material for chemical conversion processes in catalysis. Its intrinsic properties as a reducible oxide can be exploited to achieve catalytic selectivity and activity. However, numerous phenomenological characteristics of ceria remain unknown, and its active nature is only slowly being unraveled. Well-defined models of ceria(111) are an important way to systematically study these properties and take advantage of new in situ methods that require pristine materials allowing interrogation of the most fundamental traits of this material. The ceria-Ru(0001) model is now the most well-studied model surface, with numerous aspects of its preparation, atomic structure, and reactivity examined by several groups. The preparation of CeOx structures oriented with a (111) surface termination can be achieved through molecular beam deposition, facilitating the growth of well-defined nanostructures, microparticles, and films on the Ru(0001) surface. The growth mechanism exploits the epitaxial relationship between CeOx and Ru to form a carpet mode of well-oriented O-Ce-O layers. These models can be studied to unravel the atomic structure and the oxidation state (Ce4+ and Ce3+), as prepared and under redox conditions (reduction/oxidation) or during reaction with reactants (e.g., H2, methanol). Here, we present a discussion of the most recent observations pertaining to the growth mode, arrangement of atoms on the surface, characteristic chemical state, and redox chemistry of the CeOx-Ru surface. With insights from these studies, we propose new strategies to further unravel the chemistry of ceria.
Development of a Water Recovery System Resource Tracking Model
NASA Technical Reports Server (NTRS)
Chambliss, Joe; Stambaugh, Imelda; Sarguishm, Miriam; Shull, Sarah; Moore, Michael
2014-01-01
A simulation model has been developed to track water resources in an exploration vehicle using regenerative life support (RLS) systems. The model integrates the functions of all the vehicle components that affect the processing and recovery of water during simulated missions. The approach used in developing the model makes the resource tracking model (RTM) part of a complete vehicle simulation that can be used in real-time mission studies. Performance data for the various components in the RTM focus on water processing and have been defined based on the most recent information available for each component's technology. This paper describes the process of defining the RLS system to be modeled, how the modeling environment was selected, and how the model has been implemented. Results showing how the various RLS components exchange water are provided in a set of test cases.
The development of a classification system for maternity models of care.
Donnolley, Natasha; Butler-Henderson, Kerryn; Chapman, Michael; Sullivan, Elizabeth
2016-08-01
A lack of standard terminology or means to identify and define models of maternity care in Australia has prevented accurate evaluations of outcomes for mothers and babies in different models of maternity care. As part of the Commonwealth-funded National Maternity Data Development Project, a classification system was developed utilising a data set specification that defines characteristics of models of maternity care. The Maternity Care Classification System, or MaCCS, was developed using a participatory action research design that built upon the published and grey literature. The study identified the characteristics that differentiate models of care and classified models into eleven Major Model Categories. The MaCCS will enable individual health services, local health districts (networks), and jurisdictional and national health authorities to make better informed decisions for planning, policy development, and delivery of maternity services in Australia. © The Author(s) 2016.
An improved two-dimensional depth-integrated flow equation for rough-walled fractures
NASA Astrophysics Data System (ADS)
Mallikamas, Wasin; Rajaram, Harihar
2010-08-01
We present the development of an improved 2-D flow equation for rough-walled fractures. Our improved equation accounts for the influence of midsurface tortuosity and the fact that the aperture normal to the midsurface is in general smaller than the vertical aperture. It thus improves upon the well-known Reynolds equation that is widely used for modeling flow in fractures. Unlike the Reynolds equation, our approach begins from the lubrication approximation applied in an inclined local coordinate system tangential to the fracture midsurface. The local flow equation thus obtained is rigorously transformed to an arbitrary global Cartesian coordinate system, invoking the concepts of covariant and contravariant transformations for vectors defined on surfaces. Unlike previously proposed improvements to the Reynolds equation, our improved flow equation accounts for tortuosity both along and perpendicular to a flow path. Our approach also leads to a well-defined anisotropic local transmissivity tensor relating the representations of the flux and head gradient vectors in a global Cartesian coordinate system. We show that the principal components of the transmissivity tensor and the orientation of its principal axes depend on the directional local midsurface slopes. In rough-walled fractures, the orientations of the principal axes of the local transmissivity tensor will vary from point to point. The local transmissivity tensor also incorporates the influence of the local normal aperture, which is uniquely defined at each point in the fracture. Our improved flow equation is a rigorous statement of mass conservation in any global Cartesian coordinate system. We present three examples of simple geometries to compare our flow equation to analytical solutions obtained using the exact Stokes equations: an inclined parallel plate, and circumferential and axial flows in an incomplete annulus. The effective transmissivities predicted by our flow equation agree very well with values obtained using the exact Stokes equations in all these cases. We discuss potential limitations of our depth-integrated equation, which include the neglect of convergence/divergence and the inaccuracies implicit in any depth-averaging process near sharp corners where the wall and midsurface curvatures are large.
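For reference, the widely used Reynolds (local cubic law) equation that the improved model generalizes can be written as below, where a(x,y) is the local vertical aperture, h the hydraulic head, and mu the dynamic viscosity; this is standard background, not the paper's new equation.

    \nabla \cdot \left( \frac{a^{3}(x,y)}{12\,\mu}\, \nabla h \right) = 0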
Exobiology and the search for biological signatures on Mars
NASA Technical Reports Server (NTRS)
Mancinelli, Rocco L.; Schwartz, Deborah E.
1988-01-01
In preparation for a Mars Rover/Sample return mission, the mission goals and objectives must be identified. One of the most important objectives must address exobiology and the question of the possibility of the origin and evolution of life on Mars. In particular, key signatures or bio-markers of a possible extinct Martian biota must be defined. To that end geographic locations (sites) that are likely to contain traces of past life must also be identified. Sites and experiments are being defined in support of a Mars rover sample return mission. In addition, analyses based on computer models of abiotic processes of CO2 loss from Mars suggest that the CO2 from the atmosphere may have precipitated as carbonates and be buried within the Martian regolith. The carbon cycle of perennially frozen lakes in the dry valley of Antarctica are currently being investigated. These lakes were purported to be a model system for the ancient Martian lakes. By understanding the dynamic balance between the abiotic vs. biotic cycling of carbon within this system, information is gathered which will enable the interpretation of data obtained by a Mars rover with respect to possible carbonate deposits and the processing of carbon by biological systems. These ancient carbonate deposits, and other sedimentary units would contain traces of biological signatures that would hold the key to understanding the origin and evolution of life on Mars, as well as Earth.
Harnessing Big Data to Represent 30-meter Spatial Heterogeneity in Earth System Models
NASA Astrophysics Data System (ADS)
Chaney, N.; Shevliakova, E.; Malyshev, S.; Van Huijgevoort, M.; Milly, C.; Sulman, B. N.
2016-12-01
Terrestrial land surface processes play a critical role in the Earth system; they have a profound impact on the global climate, food and energy production, freshwater resources, and biodiversity. One of the most fascinating yet challenging aspects of characterizing terrestrial ecosystems is their field-scale (~30 m) spatial heterogeneity. It has been observed repeatedly that the water, energy, and biogeochemical cycles at multiple temporal and spatial scales have deep ties to an ecosystem's spatial structure. Current Earth system models largely disregard this important relationship, leading to an inadequate representation of ecosystem dynamics. In this presentation, we will show how existing global environmental datasets can be harnessed to explicitly represent field-scale spatial heterogeneity in Earth system models. For each macroscale grid cell, these environmental data are clustered according to their field-scale soil and topographic attributes to define unique sub-grid tiles. The state-of-the-art Geophysical Fluid Dynamics Laboratory (GFDL) land model is then used to simulate these tiles and their spatial interactions via the exchange of water, energy, and nutrients along explicit topographic gradients. Using historical simulations over the contiguous United States, we will show how a robust representation of field-scale spatial heterogeneity impacts modeled ecosystem dynamics, including the water, energy, and biogeochemical cycles as well as vegetation composition and distribution.
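The tiling idea could be prototyped along the lines below: cluster each grid cell's fine-scale soil and topographic attributes and use cluster membership to define sub-grid tiles with area weights. The choice of k-means, the feature set, and the tile count are assumptions; the GFDL implementation is not reproduced here.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(6)
    n_pixels = 30_000                              # ~30 m pixels in one grid cell
    features = np.c_[rng.normal(size=n_pixels),    # stand-in: elevation
                     rng.normal(size=n_pixels),    # stand-in: slope
                     rng.normal(size=n_pixels)]    # stand-in: soil texture index

    X = StandardScaler().fit_transform(features)
    tiles = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)
    weights = np.bincount(tiles) / n_pixels        # tile area fractions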
Signal velocity in oscillator arrays
NASA Astrophysics Data System (ADS)
Cantos, C. E.; Veerman, J. J. P.; Hammond, D. K.
2016-09-01
We investigate a system of coupled oscillators on the circle, which arises from a simple model for behavior of large numbers of autonomous vehicles where the acceleration of each vehicle depends on the relative positions and velocities between itself and a set of local neighbors. After describing necessary and sufficient conditions for asymptotic stability, we derive expressions for the phase velocity of propagation of disturbances in velocity through this system. We show that the high frequencies exhibit damping, which implies the existence of well-defined signal velocities c+ > 0 and c- < 0 such that low-frequency disturbances travel through the flock as f+(x - c+t) in the direction of increasing agent numbers and f-(x - c-t) in the other.
Assessment of geothermal resources at Newcastle, Utah
Blackett, Robert E.; Shubat, Michael A.; Chapman, David S.; Forster, Craig B.; Schlinger, Charles M.
1989-01-01
Integrated geology, geophysics, and geochemistry studies in the Newcastle area of southwest Utah are used to develop a conceptual geologic model of a blind, moderate-temperature hydrothermal system. Studies using 12 existing and 12 new thermal gradient test holes, in addition to geologic mapping, gravity surveys, and other investigations, have helped define the thermal regime. Preliminary results indicate that the up-flow region is located near the west-facing escarpment of an adjacent mountain range, probably related to the bounding range-front fault. Chemical geothermometers suggest equilibration temperatures ranging from 140°C to 170°C. The highest temperature recorded in the system is 130°C from an exploration well drilled by the Unocal Corporation.
Hydrogeologic Framework in Three Drainage Basins in the New Jersey Pinelands, 2004-06
Walker, Richard L.; Reilly, Pamela A.; Watson, Kara M.
2008-01-01
The U.S. Geological Survey, in cooperation with the New Jersey Pinelands Commission, began a multi-phase hydrologic investigation in 2004 to characterize the hydrologic system supporting the aquatic and wetland communities of the New Jersey Pinelands area (Pinelands). The Pinelands is an ecologically diverse area in the southern New Jersey Coastal Plain underlain by the Kirkwood-Cohansey aquifer system. The demand for ground water from this aquifer system is increasing as local development increases. To assess the effects of ground-water withdrawals on Pinelands stream and wetland water levels, three drainage basins were selected for detailed hydrologic assessments, including the Albertson Brook, McDonalds Branch and the Morses Mill Stream basins. Study areas were defined surrounding the three drainage basins to provide sub-regional hydrogeologic data for the ground-water flow modeling phase of this study. In the first phase of the hydrologic assessments, a database of hydrogeologic information and a hydrogeologic framework model for each of the three study areas were produced. These framework models, which illustrate typical hydrogeologic variations among different geographic subregions of the Pinelands, are the structural foundation for predictive ground-water flow models to be used in assessing the hydrologic effects of increased ground-water withdrawals. During 2004-05, a hydrogeologic database was compiled using existing and new geophysical and lithologic data including suites of geophysical logs collected at 7 locations during the drilling of 21 wells and one deep boring within the three study areas. In addition, 27 miles of ground-penetrating radar (GPR) surface geophysical data were collected and analyzed to determine the depth and extent of shallow clays in the general vicinity of the streams. On the basis of these data, the Kirkwood-Cohansey aquifer system was divided into 7 layers to construct a hydrogeologic framework model for each study area. These layers are defined by their predominant sediment textures as aquifers and leaky confining layers. The confining layer at the base of the Kirkwood-Cohansey aquifer system, depending on location, is defined as one of two distinct clays of the Kirkwood Formation. The framework models are described using hydrogeologic sections, maps of structure tops of layers, and thickness maps showing variations of sediment textures of the various model layers. The three framework models are similar in structure but unique to their respective study areas. The hydraulic conductivity of the Kirkwood-Cohansey aquifer system in the vicinity of the three study areas was determined from analysis of 16 slug tests and 136 well-performance tests. The mean values for hydraulic conductivity in the three study areas ranged from about 84 feet per day to 130 feet per day. With the exception of the basal confining layers, the variable and discontinuous nature of clay layers within the Kirkwood-Cohansey aquifer system was confirmed by the geophysical and lithologic records. Leaky confining layers and discontinuous clays are generally more common in the upper part of the aquifer system. Although the Kirkwood-Cohansey aquifer system generally has been considered a water-table aquifer in most areas, localized clays in the aquifer layers and the effectiveness of the leaky confining layers may act to impede the flow of ground water in varying amounts depending on the degree of confinement and the location, duration, and magnitude of the hydraulic stresses applied. 
Considerable variability exists in the different sediment textures. The extent to which this hydrogeologic variability can be characterized is constrained by the extent of the available data. Thus, the hydraulic properties of the modeled layers were estimated on the basis of available horizontal hydraulic conductivity data and the range of sediment textures estimated from geophysical and lithologic data.
Studying Si/SiGe disordered alloys within effective mass theory
NASA Astrophysics Data System (ADS)
Gamble, John; Montaño, Inès; Carroll, Malcolm S.; Muller, Richard P.
Si/SiGe is an attractive material system for electrostatically-defined quantum dot qubits due to its high-quality crystalline quantum well interface. Modeling the properties of single-electron quantum dots in this system is complicated by the presence of alloy disorder, which typically requires atomistic techniques in order to treat properly. Here, we use the NEMO-3D empirical tight binding code to calibrate a multi-valley effective mass theory (MVEMT) to properly handle alloy disorder. The resulting MVEMT simulations give good insight into the essential physics of alloy disorder, while being extremely computationally efficient and well-suited to determining statistical properties. Sandia is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the US Department of Energy's National Nuclear Security Administration under Contract No. DE-AC04-94AL85000.
Conceptual models of information processing
NASA Technical Reports Server (NTRS)
Stewart, L. J.
1983-01-01
The conceptual information processing issues are examined. Human information processing is defined as an active cognitive process that is analogous to a system. It is the flow and transformation of information within a human. The human is viewed as an active information seeker who is constantly receiving, processing, and acting upon the surrounding environmental stimuli. Human information processing models are conceptual representations of cognitive behaviors. Models of information processing are useful in representing the different theoretical positions and in attempting to define the limits and capabilities of human memory. It is concluded that an understanding of conceptual human information processing models and their applications to systems design leads to a better human factors approach.
Modeling Functional Neuroanatomy for an Anatomy Information System
Niggemann, Jörg M.; Gebert, Andreas; Schulz, Stefan
2008-01-01
Objective Existing neuroanatomical ontologies, databases and information systems, such as the Foundational Model of Anatomy (FMA), represent outgoing connections from brain structures, but cannot represent the “internal wiring” of structures and as such, cannot distinguish between different independent connections from the same structure. Thus, a fundamental aspect of Neuroanatomy, the functional pathways and functional systems of the brain such as the pupillary light reflex system, is not adequately represented. This article identifies underlying anatomical objects which are the source of independent connections (collections of neurons) and uses these as basic building blocks to construct a model of functional neuroanatomy and its functional pathways. Design The basic representational elements of the model are unnamed groups of neurons or groups of neuron segments. These groups, their relations to each other, and the relations to the objects of macroscopic anatomy are defined. The resulting model can be incorporated into the FMA. Measurements The capabilities of the presented model are compared to the FMA and the Brain Architecture Management System (BAMS). Results Internal wiring as well as functional pathways can correctly be represented and tracked. Conclusion This model bridges the gap between representations of single neurons and their parts on the one hand and representations of spatial brain structures and areas on the other hand. It is capable of drawing correct inferences on pathways in a nervous system. The object and relation definitions are related to the Open Biomedical Ontology effort and its relation ontology, so that this model can be further developed into an ontology of neuronal functional systems. PMID:18579841
A framework for evolutionary systems biology
Loewe, Laurence
2009-01-01
Background Many difficult problems in evolutionary genomics are related to mutations that have weak effects on fitness, as the consequences of mutations with large effects are often simple to predict. Current systems biology has accumulated much data on mutations with large effects and can predict the properties of knockout mutants in some systems. However experimental methods are too insensitive to observe small effects. Results Here I propose a novel framework that brings together evolutionary theory and current systems biology approaches in order to quantify small effects of mutations and their epistatic interactions in silico. Central to this approach is the definition of fitness correlates that can be computed in some current systems biology models employing the rigorous algorithms that are at the core of much work in computational systems biology. The framework exploits synergies between the realism of such models and the need to understand real systems in evolutionary theory. This framework can address many longstanding topics in evolutionary biology by defining various 'levels' of the adaptive landscape. Addressed topics include the distribution of mutational effects on fitness, as well as the nature of advantageous mutations, epistasis and robustness. Combining corresponding parameter estimates with population genetics models raises the possibility of testing evolutionary hypotheses at a new level of realism. Conclusion EvoSysBio is expected to lead to a more detailed understanding of the fundamental principles of life by combining knowledge about well-known biological systems from several disciplines. This will benefit both evolutionary theory and current systems biology. Understanding robustness by analysing distributions of mutational effects and epistasis is pivotal for drug design, cancer research, responsible genetic engineering in synthetic biology and many other practical applications. PMID:19239699
NASA Astrophysics Data System (ADS)
Timashev, S. F.
2000-02-01
A general phenomenological approach to the analysis of experimental temporal, spatial and energetic series is presented for extracting model-independent physical parameters ("passport data") that characterize and distinguish the evolution, as well as the spatial and energetic structure, of any open nonlinear dissipative system. The methodology rests on the postulate that the crucial information is contained in the sequences of irregularities of the measured dynamic variable (temporal, spatial or energetic). In accordance with this approach, the multi-parametric expressions for the power spectra of dynamic variables, as well as for structure functions of various orders, take the same form at every spatial-temporal-energetic level of the system under consideration; in effect, this introduces a new kind of self-similarity in nature. An algorithm has been developed for obtaining as many "passport data" as are necessary to characterize a dynamic system. Applications of this approach to the analysis of various experimental series (temporal, spatial, energetic) demonstrate its potential for defining adequate phenomenological parameters of different dynamic processes and structures.
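For illustration (not from the paper), the two basic quantities on which such an analysis is built, the power spectrum and the structure functions of various orders of a measured series, can be estimated as in the following sketch; the toy signal and all parameter choices are hypothetical.

```python
# Minimal sketch: periodogram power spectrum and difference ("structure")
# functions of order p for a measured time series.
import numpy as np

def power_spectrum(x, dt=1.0):
    """One-sided power spectral density estimate via the FFT periodogram."""
    x = np.asarray(x) - np.mean(x)
    n = len(x)
    spec = np.abs(np.fft.rfft(x)) ** 2 * dt / n
    freqs = np.fft.rfftfreq(n, d=dt)
    return freqs, spec

def structure_function(x, p, lags):
    """Structure function of order p: S_p(tau) = <|x(t + tau) - x(t)|^p>."""
    x = np.asarray(x)
    return np.array([np.mean(np.abs(x[lag:] - x[:-lag]) ** p) for lag in lags])

rng = np.random.default_rng(0)
signal = np.cumsum(rng.standard_normal(4096))   # toy random-walk series
freqs, spec = power_spectrum(signal)
s2 = structure_function(signal, p=2, lags=range(1, 100))
```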
Static shape control for flexible structures
NASA Technical Reports Server (NTRS)
Rodriguez, G.; Scheid, R. E., Jr.
1986-01-01
An integrated methodology is described for defining static shape control laws for large flexible structures. The techniques include modeling, identification and estimation of the control laws of distributed systems characterized by infinite-dimensional state and parameter spaces. The models are expressed as interconnected elliptic partial differential equations governing a range of static loads, with the capability of analyzing electromagnetic fields around antenna systems. A second-order analysis is carried out for statistical errors, and model parameters are determined by maximizing an appropriately defined likelihood functional that adjusts the model to observational data. The parameter estimates are derived from the conditional mean of the observational data, resulting in a least-squares superposition of shape functions obtained from the structural model.
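The closing step, a least-squares superposition of shape functions fitted to observational data, can be sketched as follows (an illustration, not the paper's estimator; the basis functions and data are hypothetical).

```python
# Minimal sketch: least-squares coefficients c minimizing ||A c - d||,
# where column j of A samples the j-th shape function at the measurement points.
import numpy as np

def fit_shape(basis, points, observations):
    """Superpose shape functions by solving the linear least-squares problem."""
    A = np.column_stack([phi(points) for phi in basis])
    coeffs, *_ = np.linalg.lstsq(A, observations, rcond=None)
    return coeffs

# Hypothetical shape functions for a 1-D structural element:
basis = [lambda s: np.ones_like(s), lambda s: s, lambda s: s**2]
pts = np.linspace(0.0, 1.0, 50)
true_shape = 0.1 + 0.5 * pts - 0.3 * pts**2
obs = true_shape + 0.01 * np.random.default_rng(1).standard_normal(pts.size)
print(fit_shape(basis, pts, obs))   # approximately [0.1, 0.5, -0.3]
```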
NASA Astrophysics Data System (ADS)
Syaina, L. P.; Majidi, M. A.
2018-04-01
The single impurity Anderson model describes a system of non-interacting conduction electrons coupled to a localized orbital with strongly interacting electrons at a particular site. The model has proven successful in explaining the metal-insulator transition through Anderson localization. Despite the well-understood behaviors of the model, little theoretical work has explored how its properties gradually evolve as functions of the hybridization parameter, interaction energy, impurity concentration, and temperature. Here, we propose a theoretical study of those aspects of the single impurity Anderson model using the distributional exact diagonalization method. We solve the model Hamiltonian by randomly generating sampling distributions of the conduction-electron energy levels, with various numbers of occupying electrons. The resulting eigenvalues and eigenstates are used to define the local single-particle Green function for each sampled energy distribution through the Lehmann representation. We then extract the corresponding self-energy of each distribution, average over all distributions, and construct the local Green function of the system to calculate the density of states. We repeat this procedure for various values of the controllable parameters and discuss our results in connection with the criteria for the occurrence of a metal-insulator transition in this system.
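The sampling-and-averaging loop can be illustrated with a heavily simplified sketch (an assumption for illustration, not the authors' code): tiny impurity-plus-bath clusters with randomly drawn bath levels are diagonalized exactly, each sample's self-energy is extracted via the Dyson equation, and the average self-energy enters the local Green function. All parameter values are hypothetical, and the particle-number constraint of the actual distributional exact diagonalization method is omitted.

```python
import numpy as np

def fermion_ops(n_modes):
    """Creation operators for n_modes fermionic modes (Jordan-Wigner mapping)."""
    I, Z = np.eye(2), np.diag([1.0, -1.0])
    raise_op = np.array([[0.0, 0.0], [1.0, 0.0]])
    ops = []
    for i in range(n_modes):
        mats = [Z] * i + [raise_op] + [I] * (n_modes - i - 1)
        op = mats[0]
        for m in mats[1:]:
            op = np.kron(op, m)
        ops.append(op)
    return ops

def cluster_green(eps_d, U, eps_bath, V, z):
    """T = 0 impurity Green function of one sampled cluster via full
    diagonalization and the Lehmann representation."""
    cd = fermion_ops(2 + 2 * len(eps_bath))   # modes 0,1: impurity up/down
    c = [op.T for op in cd]                   # real matrices: adjoint = transpose
    n = [cd[i] @ c[i] for i in range(len(cd))]
    H = eps_d * (n[0] + n[1]) + U * (n[0] @ n[1])
    for b, eb in enumerate(eps_bath):
        for s in (0, 1):                      # spin up/down bath modes
            ib = 2 + 2 * b + s
            H = H + eb * n[ib] + V * (cd[s] @ c[ib] + cd[ib] @ c[s])
    E, P = np.linalg.eigh(H)
    gs, E0 = P[:, 0], E[0]
    add = P.T @ (cd[0] @ gs)                  # <m| c+_up |ground state>
    rem = P.T @ (c[0] @ gs)                   # <m| c_up  |ground state>
    G = np.zeros_like(z)
    for Em, am, bm in zip(E, add, rem):
        G = G + am**2 / (z - (Em - E0)) + bm**2 / (z + (Em - E0))
    return G

def g0(eps_d, eps_bath, V, z):
    """Non-interacting Green function of the same sampled cluster."""
    return 1.0 / (z - eps_d - sum(V**2 / (z - eb) for eb in eps_bath))

rng = np.random.default_rng(0)
omega = np.linspace(-4.0, 4.0, 401)
z = omega + 0.1j                              # small Lorentzian broadening
eps_d, U, V, D, n_samples = -1.0, 2.0, 0.4, 2.0, 40
sigma = np.zeros_like(z)
for _ in range(n_samples):
    eps_bath = rng.uniform(-D, D, size=2)     # sampled bath levels, flat band
    sigma += 1.0 / g0(eps_d, eps_bath, V, z) \
           - 1.0 / cluster_green(eps_d, U, eps_bath, V, z)
sigma /= n_samples
delta = V**2 / (2 * D) * np.log((z + D) / (z - D))   # flat-band hybridization
dos = -(1.0 / (z - eps_d - delta - sigma)).imag / np.pi
```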
NASA Astrophysics Data System (ADS)
Nickel, D.; Barthel, R.; Schmid, C.; Braun, J.
2003-04-01
The research project GLOWA-Danube, financed by the German Federal Government, investigates long-term changes in the water cycle of the Upper Danube river basin in light of global climatic change. Its concrete aim is to build a fully integrated decision support tool that combines the competence of eleven different institutes in domains covering all major aspects governing the water cycle - from the formation of clouds to groundwater flow patterns to the behaviour of the water consumer. The research group "Water Supply" at the Institute of Hydraulic Engineering (IWS), Universitaet Stuttgart, has the central task of creating an agent-based model of the water supply sector. The Water Supply model will act as a link between the various physical models determining water quality and availability on the one hand and the actor models determining water demand on the other, which together form the tool DANUBIA. Ultimately, with the help of scenario testing, the Water Supply model will indicate the ability of the water supply system in the Upper Danube catchment to adapt to changing boundary conditions under different management approaches. The specific aim of the Water Supply model is to simulate not only the present-day system of water extraction, treatment and distribution but also its development and change over time. As most changes to the system are brought about by the decisions and behaviour of relevant actors in the field of water management (in response to political and economic boundary conditions, changes in water demand or water quality, advances in technology etc.), agent-based modelling was chosen, whereby an agent is understood as a computer system (in our case representing a human or group of humans) that is aware of its environment, has defined objectives and is able to act independently in order to meet those objectives; a minimal sketch of this notion follows below. While agent-based modelling has received much attention over the past decades, its use for water supply systems is very new. The initial step is the development of a conceptual water supply model (using JAVA), in which the model boundaries and area of expertise, as well as the parameters to be exchanged between the Water Supply model and other models, are defined. The data required to create a model for such a large area are not available from the authorities, common interest organisations or public statistics. In order to gain access to more specific information on individual water supply companies, the Water Supply group is currently carrying out a wide-ranging questionnaire survey addressed to all water supply companies in the GLOWA-Danube model area - well over 1000 in total in Bavaria, Baden-Wuerttemberg, Austria and Switzerland. The questionnaire covers two distinct fields, "economics and pricing" and "technical aspects", and aims at gathering information on the present-day situation of the water supply system, developments over the past 10 years and planned developments for the immediate future. Later, the focus will shift towards the stakeholders from the field of water resources management. A catalogue of decision-making rules will be prepared as a basis for discussion and will be debated with the relevant stakeholders.
These rules will provide the basis for decision-making algorithms that will allow model agents to respond to their environment, communicate with one another and behave in a goal-oriented manner to bring about change in the water supply system in response to changing climatic, water-quality, political and social boundary conditions, and changing demand.
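A minimal sketch of the agent notion described above (hypothetical; the DANUBIA model itself is implemented in Java, and all names, parameters and rules here are invented for illustration):

```python
# Hypothetical sketch of a water-supply agent: it perceives its environment,
# has a defined objective (meet demand sustainably) and acts independently.
from dataclasses import dataclass

@dataclass
class Environment:
    groundwater_level: float   # metres below reference surface
    demand: float              # m^3/day requested by consumers

class WaterSupplierAgent:
    """A water supply company whose objective is to meet demand while
    keeping groundwater extraction within a sustainable drawdown."""

    def __init__(self, capacity, sustainable_drawdown=30.0):
        self.capacity = capacity                        # m^3/day
        self.sustainable_drawdown = sustainable_drawdown

    def act(self, env: Environment) -> float:
        """Decide today's extraction from the perceived environment."""
        target = min(env.demand, self.capacity)
        if env.groundwater_level > self.sustainable_drawdown:
            target *= 0.8      # invented rule: throttle when levels drop
        return target

agent = WaterSupplierAgent(capacity=5000.0)
print(agent.act(Environment(groundwater_level=35.0, demand=4500.0)))  # 3600.0
```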
NASA Technical Reports Server (NTRS)
Watson, Michael D.; Kelley, Gary W.
2012-01-01
The System Operational Effectiveness (SOE) model defined by the Department of Defense (DoD) provides an exceptional framework for an affordable approach to the development and operation of space launch vehicles and their supporting infrastructure. The SOE model provides a focal point from which to direct and measure the technical effectiveness and process efficiency of space launch vehicles. Applying the SOE model to a space launch vehicle's development and operation leads to very specific approaches and measures that require consideration during the design phase. This paper provides a mapping of the SOE model to the development of space launch vehicles for human exploration by addressing its key points of measurement: System Performance, System Availability, Technical Effectiveness, Process Efficiency, System Effectiveness, Life Cycle Cost, and Affordable Operational Effectiveness. In addition, the application of the SOE model to the launch vehicle development process is defined, addressing the unique aspects of space launch vehicle production and operations in contrast to the traditional, broader SOE context that examines large quantities of fielded systems. The tailoring and application of the SOE model to space launch vehicles provides key insights into operational design drivers, capability phasing, and operational support systems.
Development of 3D Atlas of Metalworking Equipment
NASA Astrophysics Data System (ADS)
Yevgenyevna Maslennikova, Olga; Borisovna Nazarova, Olga; Aleksandrovna Chudinova, Yulia
2018-05-01
The paper addresses the problem of developing innovative educational systems capable of training personnel in complex and hazardous manufacturing industries (such as metallurgy) to control processes not only under normal conditions but also in emergency and pre-emergency situations. Such educational systems should turn the training of future and current engineers into a professional activity, modelling both the subject matter and the social content of their professional work. Key characteristics of a 3D atlas of equipment as an educational system are given, since it immerses trainees in a professional environment. Requirements for such systems are defined (functional, informational, software and technical). The stages of developing a 3D atlas of equipment as an automated system are presented, bringing into view a further problem: training IT specialists so that they are able to design, implement and deploy such systems.
A Conceptual Model of the Information Requirements of Nursing Organizations
Miller, Emmy
1989-01-01
Three related issues play a role in the identification of the information requirements of nursing organizations. These issues are the current state of computer systems in health care organizations, the lack of a well-defined data set for nursing, and the absence of models representing data and information relevant to clinical and administrative nursing practice. This paper will examine current methods of data collection, processing, and storage in clinical and administrative nursing practice for the purpose of identifying the information requirements of nursing organizations. To satisfy these information requirements, database technology can be used; however, a model for database design is needed that reflects the conceptual framework of nursing and the professional concerns of nurses. A conceptual model of the types of data necessary to produce the desired information will be presented and the relationships among data will be delineated.
The muscle spindle as a feedback element in muscle control
NASA Technical Reports Server (NTRS)
Andrews, L. T.; Iannone, A. M.; Ewing, D. J.
1973-01-01
The muscle spindle, the feedback element in the myotatic (stretch) reflex, is a major contributor to muscular control. Therefore, an accurate description of the behavior of the muscle spindle during active contraction of the muscle, as well as during passive stretch, is essential to the understanding of muscle control. Animal experiments were performed to obtain the data necessary to model the muscle spindle. Spectral density functions were used to identify a linear approximation of the two types of nerve endings of the spindle. A model reference adaptive control system was used on a hybrid computer to optimize the anatomically defined lumped-parameter estimate of the spindle. The derived nonlinear model accurately predicts the behavior of the muscle spindle both during active discharge and during its silent period. This model is used to determine the mechanism employed to control muscle movement.
Tercjak, A; Garcia, I; Mondragon, I
2008-07-09
Novel well-defined nanostructured thermosetting systems were prepared by modification of a diglycidyl ether of bisphenol-A epoxy resin (DGEBA) with 10 or 15 wt% amphiphilic poly(styrene-b-ethylene oxide) block copolymer (PSEO) and 30 or 40 wt% low-molecular-weight liquid crystal 4'-(hexyl)-4-biphenyl-carbonitrile (HBC), using m-xylylenediamine (MXDA) as a curing agent. The competition between the formation of well-defined nanostructures and the alignment of the liquid crystal phase in the resulting materials has been studied by atomic force microscopy (AFM) and electrostatic force microscopy (EFM). To our knowledge, this is the first time that the addition of an adequate amount (10 wt%) of a block copolymer to 40 wt% HBC-(DGEBA/MXDA) leads to a well-organized nanostructured thermosetting system (between a hexagonal and a worm-like ordered structure) that is also electro-responsive with high-rate contrast. This behavior was confirmed by EFM, by means of the response of the HBC liquid crystal phase to the voltage applied to the EFM tip. In contrast, although materials containing 15 wt% PSEO and 30 wt% HBC also form a well-defined nanostructured thermosetting system, they do not show such a high contrast between the uncharged and charged surface.
Controlling the physics and chemistry of binary and ternary praseodymium and cerium oxide systems.
Niu, Gang; Zoellner, Marvin Hartwig; Schroeder, Thomas; Schaefer, Andreas; Jhang, Jin-Hao; Zielasek, Volkmar; Bäumer, Marcus; Wilkens, Henrik; Wollschläger, Joachim; Olbrich, Reinhard; Lammers, Christian; Reichling, Michael
2015-10-14
Rare earth praseodymium and cerium oxides have attracted intense research interest in the last few decades, due to their intriguing chemical and physical characteristics. An understanding of the correlation between structure and properties, in particular the surface chemistry, is urgently required for their application in microelectronics, catalysis, optics and other fields. Such an understanding is, however, hampered by the complexity of rare earth oxide materials and experimental methods for their characterisation. Here, we report recent progress in studying high-quality, single crystalline, praseodymium and cerium oxide films as well as ternary alloys grown on Si(111) substrates. Using these well-defined systems and based on a systematic multi-technique surface science approach, the corresponding physical and chemical properties, such as the surface structure, the surface morphology, the bulk-surface interaction and the oxygen storage/release capability, are explored in detail. We show that specifically the crystalline structure and the oxygen stoichiometry of the oxide thin films can be well controlled by the film preparation method. This work leads to a comprehensive understanding of the properties of rare earth oxides and highlights the applications of these versatile materials. Furthermore, methanol adsorption studies are performed on binary and ternary rare earth oxide thin films, demonstrating the feasibility of employing such systems for model catalytic studies. Specifically for ceria systems, we find considerable stability against normal environmental conditions so that they can be considered as a "materials bridge" between surface science models and real catalysts.
Dynamics of protein-protein encounter: a Langevin equation approach with reaction patches.
Schluttig, Jakob; Alamanova, Denitsa; Helms, Volkhard; Schwarz, Ulrich S
2008-10-21
We study the formation of protein-protein encounter complexes with a Langevin equation approach that considers direct, steric, and thermal forces. As three model systems with distinctly different properties we consider the pairs barnase:barstar, cytochrome c:cytochrome c peroxidase, and p53:MDM2. In each case, proteins are modeled either as spherical particles, as dipolar spheres, or as a collection of several small beads with one dipole. Spherical reaction patches are placed on the model proteins according to the known experimental structures of the protein complexes. In the computer simulations, concentration is varied by changing the box size. Encounter is defined as overlap of the reaction patches, and the corresponding first passage times are recorded together with the number of unsuccessful contacts before encounter. We find that encounter frequency scales linearly with protein concentration, thus proving that our microscopic model results in a well-defined macroscopic encounter rate. The number of unsuccessful contacts before encounter decreases with increasing encounter rate and ranges from 20 to 9000. For all three models, encounter rates are obtained within one order of magnitude of the experimentally measured association rates. Electrostatic steering enhances association up to 50-fold. If diffusional encounter is dominant (p53:MDM2) or similarly important as electrostatic steering (barnase:barstar), then the encounter rate decreases with decreasing patch radius. More detailed modeling of protein shapes decreases encounter rates by 5%-95%. Our study shows how generic principles of protein-protein association are modulated by molecular features of the systems under consideration. Moreover, it allows us to assess different coarse-graining strategies for the future modeling of the dynamics of large protein complexes.
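The basic simulation loop can be sketched under strong simplifications (an illustration, not the authors' simulation: direct, steric and dipolar forces are omitted, leaving only free Brownian motion, and all parameter values are invented).

```python
# Simplified sketch: two spheres carrying one surface reaction patch each
# diffuse (translationally and rotationally) in a periodic box; an encounter
# is recorded when the patches overlap, giving one first passage time.
import numpy as np

rng = np.random.default_rng(2)
L, R, patch_r = 10.0, 2.0, 0.75      # box edge, protein radius, patch radius (nm)
D, Dr, dt = 0.1, 0.02, 0.05          # transl. (nm^2/ns), rot. (1/ns), step (ns)

def minimum_image(d):
    return d - L * np.round(d / L)

def first_passage_time(max_steps=200_000):
    x = rng.uniform(0, L, size=(2, 3))               # particle centres
    u = rng.standard_normal((2, 3))                  # patch directions
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    for step in range(max_steps):
        x = (x + np.sqrt(2 * D * dt) * rng.standard_normal((2, 3))) % L
        u += np.sqrt(2 * Dr * dt) * rng.standard_normal((2, 3))
        u /= np.linalg.norm(u, axis=1, keepdims=True)  # crude rotational kick
        sep = minimum_image((x[0] + R * u[0]) - (x[1] + R * u[1]))
        if np.linalg.norm(sep) < 2 * patch_r:        # encounter = patch overlap
            return step * dt
    return np.inf

times = [first_passage_time() for _ in range(5)]
print("mean first passage time (ns):", np.mean(times))
```

Shrinking the box edge L raises the effective concentration and should shorten the mean first passage time, mirroring the linear concentration scaling reported above.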
A computer program for simulating geohydrologic systems in three dimensions
Posson, D.R.; Hearne, G.A.; Tracy, J.V.; Frenzel, P.F.
1980-01-01
This document is directed toward individuals who wish to use a computer program to simulate ground-water flow in three dimensions. The strongly implicit procedure (SIP) numerical method is used to solve the set of simultaneous equations. New data-processing techniques and program input and output options are emphasized. The aquifer system to be modeled may be heterogeneous and anisotropic, and may include both artesian and water-table conditions. Systems that consist of well-defined alternating layers of highly permeable and poorly permeable material may be represented by a sequence of equations for two-dimensional flow in each of the highly permeable units. Boundaries where head or flux is user-specified may be irregularly shaped. The program also allows the user to represent streams as limited-source boundaries when the streamflow is small in relation to the hydraulic stress on the system. The data-processing techniques relating to 'cube' input and output, to swapping of layers, to restarting of simulations, to free-format NAMELIST input, to the details of each subroutine's logic, and to the overlay program structure are discussed. The program is capable of processing large models that might overflow computer memories with conventional programs. Detailed instructions for selecting program options, for initializing the data arrays, for defining 'cube' output lists and maps, and for plotting hydrographs of calculated and observed heads and/or drawdowns are provided. Output may be restricted to those nodes of particular interest, thereby reducing the volume of printout, which may be critical when working at remote terminals. 'Cube' input commands allow the modeler to set aquifer parameters and initialize the model with very few input records. Appendixes provide instructions to compile the program, definitions and cross-references for program variables, a summary of the FLECS structured FORTRAN programming language, listings of the FLECS and FORTRAN source code, and samples of input and output for example simulations. (USGS)
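To illustrate the kind of linear system such a program solves (a sketch, not the USGS code): steady-state heads on a finite-difference grid with fixed-head boundaries, reduced here to a single two-dimensional layer and solved with successive over-relaxation (SOR) standing in for the program's SIP solver. Grid, heads and parameters are hypothetical.

```python
# Sketch: steady ground-water flow in one homogeneous, isotropic layer,
# discretized with the 5-point finite-difference stencil.
import numpy as np

nx, ny = 40, 25
h = np.zeros((ny, nx))
h[:, 0], h[:, -1] = 100.0, 90.0                    # fixed heads (m), west/east
h[0, :] = h[-1, :] = np.linspace(100.0, 90.0, nx)  # fixed heads, north/south

def sor_sweep(h, omega=1.7):
    """One SOR sweep: relax each interior node toward its neighbour average."""
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            nbr = h[j, i - 1] + h[j, i + 1] + h[j - 1, i] + h[j + 1, i]
            h[j, i] += omega * (nbr / 4.0 - h[j, i])

for _ in range(300):
    sor_sweep(h)
print(h[ny // 2, ::8])   # heads fall smoothly from 100 m toward 90 m
```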