Sample records for discrete-feature model implementation

  1. Mathematical Model Taking into Account Nonlocal Effects of Plasmonic Structures on the Basis of the Discrete Source Method

    NASA Astrophysics Data System (ADS)

    Eremin, Yu. A.; Sveshnikov, A. G.

    2018-04-01

The discrete source method is used to develop and implement a mathematical model for solving the problem of the scattering of electromagnetic waves by a three-dimensional plasmonic scatterer with nonlocal effects taken into account. Numerical results demonstrate how the scattering properties of plasmonic particles, with allowance for nonlocal effects, depend on the direction and polarization of the incident wave.

  2. Low-power hardware implementation of movement decoding for brain computer interface with reduced-resolution discrete cosine transform.

    PubMed

    Minho Won; Albalawi, Hassan; Xin Li; Thomas, Donald E

    2014-01-01

This paper describes a low-power hardware implementation for movement decoding in a brain-computer interface. Our proposed hardware design is facilitated by two novel ideas: (i) an efficient feature extraction method based on a reduced-resolution discrete cosine transform (DCT), and (ii) a new dual look-up table hardware architecture that performs the discrete cosine transform without explicit multiplication. The proposed hardware implementation has been validated for movement decoding of an electrocorticography (ECoG) signal by using a Xilinx FPGA Zynq-7000 board. It achieves more than 56× energy reduction over a reference design using band-pass filters for feature extraction.
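
    A minimal Python sketch of the reduced-resolution DCT idea described above (the function name and the choice of eight coefficients are illustrative assumptions, not the authors' implementation): only the first few DCT-II coefficients of each signal window are computed and kept as features, which is what makes the extraction cheap enough for low-power hardware.

      import numpy as np

      def reduced_resolution_dct(window, num_coeffs):
          """Compute only the first `num_coeffs` DCT-II coefficients of a window.

          Truncating the transform at a few low-frequency coefficients is the
          "reduced resolution" idea; a hardware version could further replace
          the cosine multiplications with look-up tables.
          """
          n = len(window)
          k = np.arange(num_coeffs)[:, None]   # kept coefficient indices
          t = np.arange(n)[None, :]            # sample indices
          basis = np.cos(np.pi * (2 * t + 1) * k / (2 * n))
          return basis @ window

      features = reduced_resolution_dct(np.random.randn(128), num_coeffs=8)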

  3. Control of automated behavior: insights from the discrete sequence production task

    PubMed Central

    Abrahamse, Elger L.; Ruitenberg, Marit F. L.; de Kleine, Elian; Verwey, Willem B.

    2013-01-01

    Work with the discrete sequence production (DSP) task has provided a substantial literature on discrete sequencing skill over the last decades. The purpose of the current article is to provide a comprehensive overview of this literature and of the theoretical progress that it has prompted. We start with a description of the DSP task and the phenomena that are typically observed with it. Then we propose a cognitive model, the dual processor model (DPM), which explains performance of (skilled) discrete key-press sequences. Key features of this model are the distinction between a cognitive processor and a motor system (i.e., motor buffer and motor processor), the interplay between these two processing systems, and the possibility to execute familiar sequences in two different execution modes. We further discuss how this model relates to several related sequence skill research paradigms and models, and we outline outstanding questions for future research throughout the paper. We conclude by sketching a tentative neural implementation of the DPM. PMID:23515430

  4. Implementation Strategies for Large-Scale Transport Simulations Using Time Domain Particle Tracking

    NASA Astrophysics Data System (ADS)

    Painter, S.; Cvetkovic, V.; Mancillas, J.; Selroos, J.

    2008-12-01

Time domain particle tracking is an emerging alternative to the conventional random walk particle tracking algorithm. With time domain particle tracking, particles are moved from node to node on one-dimensional pathways defined by streamlines of the groundwater flow field or by discrete subsurface features. The time to complete each deterministic segment is sampled from residence time distributions that include the effects of advection, longitudinal dispersion, a variety of kinetically controlled retention (sorption) processes, linear transformation, and temporal changes in groundwater velocities and sorption parameters. The simulation results in a set of arrival times at a monitoring location that can be post-processed with a kernel method to construct mass discharge (breakthrough) versus time. Implementation strategies differ for discrete flow (fractured media) systems and continuous porous media systems. The implementation strategy also depends on the scale at which hydraulic property heterogeneity is represented in the supporting flow model. For flow models that explicitly represent discrete features (e.g., discrete fracture networks), the sampling of residence times along segments is conceptually straightforward. For continuous porous media, such sampling needs to be related to the Lagrangian velocity field. Analytical or semi-analytical methods may be used to approximate the Lagrangian segment velocity distributions in aquifers with low-to-moderate variability, thereby capturing transport effects of subgrid velocity variability. If variability in hydraulic properties is large, however, Lagrangian velocity distributions are difficult to characterize and numerical simulations are required; in particular, numerical simulations are likely to be required for estimating the velocity integral scale as a basis for advective segment distributions. Aquifers with evolving heterogeneity scales present additional challenges. Large-scale simulations of radionuclide transport at two potential repository sites for high-level radioactive waste are used to demonstrate the potential of the method. The simulations considered approximately 1000 source locations, multiple radionuclides with contrasting sorption properties, and abrupt changes in groundwater velocity associated with future glacial scenarios. Transport pathways linking the source locations to the accessible environment were extracted from discrete feature flow models that include detailed representations of the repository construction (tunnels, shafts, and emplacement boreholes) embedded in stochastically generated fracture networks. Acknowledgment: The authors are grateful to the SwRI Advisory Committee for Research, the Swedish Nuclear Fuel and Waste Management Company, and Posiva Oy for financial support.
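
    A schematic Python sketch of the time domain particle tracking loop described above (the segment means, the lognormal residence-time stand-in, and all names are illustrative assumptions; the physically derived distributions in the paper account for dispersion, sorption, and transient velocities):

      import numpy as np

      rng = np.random.default_rng(0)

      def arrival_times(segment_means, n_particles=1000, spread=0.3):
          """Sample particle arrival times along a fixed pathway.

          Each particle's travel time is the sum of independent per-segment
          residence times; a lognormal here stands in for the physically
          derived residence-time distributions.
          """
          times = np.zeros(n_particles)
          for mean in segment_means:
              mu = np.log(mean) - 0.5 * spread**2   # lognormal with the given mean
              times += rng.lognormal(mu, spread, n_particles)
          return times

      def breakthrough(times, t_grid, bandwidth):
          """Gaussian-kernel estimate of mass discharge versus time."""
          z = (t_grid[:, None] - times[None, :]) / bandwidth
          weights = np.exp(-0.5 * z**2)
          return weights.sum(axis=1) / (len(times) * bandwidth * np.sqrt(2 * np.pi))

      t = arrival_times([5.0, 12.0, 3.0])
      curve = breakthrough(t, np.linspace(0, 60, 200), bandwidth=1.0)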

  5. GDSCalc: A Web-Based Application for Evaluating Discrete Graph Dynamical Systems

    PubMed Central

    Elmeligy Abdelhamid, Sherif H.; Kuhlman, Chris J.; Marathe, Madhav V.; Mortveit, Henning S.; Ravi, S. S.

    2015-01-01

    Discrete dynamical systems are used to model various realistic systems in network science, from social unrest in human populations to regulation in biological networks. A common approach is to model the agents of a system as vertices of a graph, and the pairwise interactions between agents as edges. Agents are in one of a finite set of states at each discrete time step and are assigned functions that describe how their states change based on neighborhood relations. Full characterization of state transitions of one system can give insights into fundamental behaviors of other dynamical systems. In this paper, we describe a discrete graph dynamical systems (GDSs) application called GDSCalc for computing and characterizing system dynamics. It is an open access system that is used through a web interface. We provide an overview of GDS theory. This theory is the basis of the web application; i.e., an understanding of GDS provides an understanding of the software features, while abstracting away implementation details. We present a set of illustrative examples to demonstrate its use in education and research. Finally, we compare GDSCalc with other discrete dynamical system software tools. Our perspective is that no single software tool will perform all computations that may be required by all users; tools typically have particular features that are more suitable for some tasks. We situate GDSCalc within this space of software tools. PMID:26263006
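
    A minimal Python sketch of the synchronous graph dynamical system update that such tools evaluate (the adjacency structure, the threshold local function, and all names here are illustrative assumptions, not GDSCalc's interface):

      def gds_step(adjacency, state, local_fn):
          """One synchronous update of a graph dynamical system.

          Every vertex applies its local function to its own state and the
          states of its neighbors; all vertices update simultaneously.
          """
          return {v: local_fn(state[v], [state[u] for u in adjacency[v]])
                  for v in adjacency}

      # Example: threshold-2 dynamics on a 4-cycle (vertex states are 0/1).
      adjacency = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
      threshold = lambda s, nbrs: 1 if s + sum(nbrs) >= 2 else 0
      state = {0: 1, 1: 1, 2: 0, 3: 0}
      for _ in range(3):
          state = gds_step(adjacency, state, threshold)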

  6. GDSCalc: A Web-Based Application for Evaluating Discrete Graph Dynamical Systems.

    PubMed

    Elmeligy Abdelhamid, Sherif H; Kuhlman, Chris J; Marathe, Madhav V; Mortveit, Henning S; Ravi, S S

    2015-01-01

    Discrete dynamical systems are used to model various realistic systems in network science, from social unrest in human populations to regulation in biological networks. A common approach is to model the agents of a system as vertices of a graph, and the pairwise interactions between agents as edges. Agents are in one of a finite set of states at each discrete time step and are assigned functions that describe how their states change based on neighborhood relations. Full characterization of state transitions of one system can give insights into fundamental behaviors of other dynamical systems. In this paper, we describe a discrete graph dynamical systems (GDSs) application called GDSCalc for computing and characterizing system dynamics. It is an open access system that is used through a web interface. We provide an overview of GDS theory. This theory is the basis of the web application; i.e., an understanding of GDS provides an understanding of the software features, while abstracting away implementation details. We present a set of illustrative examples to demonstrate its use in education and research. Finally, we compare GDSCalc with other discrete dynamical system software tools. Our perspective is that no single software tool will perform all computations that may be required by all users; tools typically have particular features that are more suitable for some tasks. We situate GDSCalc within this space of software tools.

  7. LDRD final report:

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Brost, Randolph C.; McLendon, William Clarence

    2013-01-01

Modeling geospatial information with semantic graphs enables search for sites of interest based on relationships between features, without requiring strong a priori models of feature shape or other intrinsic properties. Geospatial semantic graphs can be constructed from raw sensor data with suitable preprocessing to obtain a discretized representation. This report describes initial work toward extending geospatial semantic graphs to include temporal information, and initial results applying semantic graph techniques to SAR image data. We describe an efficient graph structure that includes geospatial and temporal information, which is designed to support simultaneous spatial and temporal search queries. We also report a preliminary implementation of feature recognition, semantic graph modeling, and graph search based on input SAR data. The report concludes with lessons learned and suggestions for future improvements.

  8. Six-degree-of-freedom aircraft simulation with mixed-data structure using the applied dynamics simulation language, ADSIM

    NASA Technical Reports Server (NTRS)

    Savaglio, Clare

    1989-01-01

A realistic simulation of an aircraft in flight using the AD 100 digital computer is presented. The implementation of three model features is specifically discussed: (1) a large aerodynamic database (130,000 function values) which is evaluated using function interpolation to obtain the aerodynamic coefficients; (2) an option to trim the aircraft in longitudinal flight; and (3) a flight control system which includes a digital controller. Since the model includes a digital controller, the simulation implements not only continuous-time equations but also discrete-time equations; thus the model has a mixed-data structure.

  9. Peridynamics with LAMMPS : a user guide.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehoucq, Richard B.; Silling, Stewart Andrew; Plimpton, Steven James

    2008-01-01

Peridynamics is a nonlocal formulation of continuum mechanics. The discrete peridynamic model has the same computational structure as a molecular dynamics model. This document details the implementation of a discrete peridynamic model within the LAMMPS molecular dynamics code. It provides a brief overview of the peridynamic model of a continuum, then discusses how the peridynamic model is discretized, and overviews the LAMMPS implementation. A nontrivial example problem is also included.
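
    An illustrative 1D bond-based peridynamic force loop in Python, showing why the discrete model shares the computational structure of a short-range molecular dynamics force loop (a sketch under simplified assumptions, not the LAMMPS implementation):

      import numpy as np

      def peridynamic_forces(x, u, horizon, micromodulus):
          """Bond-based peridynamic internal force on a 1D chain of nodes.

          Every pair of nodes closer than `horizon` in the reference
          configuration exchanges a pairwise force proportional to the bond
          stretch, just as MD particles interact within a cutoff radius.
          """
          n = len(x)
          f = np.zeros(n)
          for i in range(n):
              for j in range(n):
                  if i == j:
                      continue
                  xi = x[j] - x[i]                  # reference bond vector
                  if abs(xi) > horizon:
                      continue                      # outside the horizon
                  eta = u[j] - u[i]                 # relative displacement
                  stretch = (abs(xi + eta) - abs(xi)) / abs(xi)
                  f[i] += micromodulus * stretch * np.sign(xi + eta)
          return f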

  10. A Generalization Strategy for Discrete Area Feature by Using Stroke Grouping and Polarization Transportation Selection

    NASA Astrophysics Data System (ADS)

    Wang, Xiao; Burghardt, Dirk

    2018-05-01

This paper presents a new strategy for the generalization of discrete area features that uses a stroke grouping method and polarization transportation selection. The strokes are derived from a refined proximity graph of the area features, with the refinement controlled by four constraints to meet different grouping requirements. Area features that belong to the same stroke are assigned to the same group. The stroke-based strategy decomposes the generalization process into two sub-processes according to whether or not the area features are related to strokes. Area features that belong to the same stroke normally present a linear-like pattern, and in order to preserve this kind of pattern, typification is chosen as the operator for the generalization work. The remaining area features, which are not related by strokes, are still distributed randomly and discretely, and selection is chosen for the generalization operation. For the purpose of retaining their original distribution characteristics, a Polarization Transportation (PT) method is introduced to implement the selection operation. Buildings and lakes are selected as representatives of artificial and natural area features, respectively, for the experiments. The generalized results indicate that by adopting the proposed strategy, the original distribution characteristics of the building and lake data can be preserved, and the visual perception is preserved as before.

  11. The effect of SUV discretization in quantitative FDG-PET Radiomics: the need for standardized methodology in tumor texture analysis

    NASA Astrophysics Data System (ADS)

    Leijenaar, Ralph T. H.; Nalbantov, Georgi; Carvalho, Sara; van Elmpt, Wouter J. C.; Troost, Esther G. C.; Boellaard, Ronald; Aerts, Hugo J. W. L.; Gillies, Robert J.; Lambin, Philippe

    2015-08-01

FDG-PET-derived textural features describing intra-tumor heterogeneity are increasingly investigated as imaging biomarkers. As part of the process of quantifying heterogeneity, image intensities (SUVs) are typically resampled into a reduced number of discrete bins. We focused on the implications of the manner in which this discretization is implemented. Two methods were evaluated: (1) RD, dividing the SUV range into D equally spaced bins, where the intensity resolution (i.e. bin size) varies per image; and (2) RB, maintaining a constant intensity resolution B. Clinical feasibility was assessed on 35 lung cancer patients, imaged before and in the second week of radiotherapy. Forty-four textural features were determined for different D and B for both imaging time points. Feature values depended on the intensity resolution, and of the two assessed methods, RB was shown to allow for a meaningful inter- and intra-patient comparison of feature values. Overall, patients ranked differently according to feature values (used as a surrogate for textural feature interpretation) between the two discretization methods. Our study shows that the manner of SUV discretization has a crucial effect on the resulting textural features and the interpretation thereof, emphasizing the importance of standardized methodology in tumor texture analysis.
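
    A Python sketch of the two discretization methods as described (the exact bin-edge conventions are an assumption; the paper may treat boundaries differently):

      import numpy as np

      def discretize_fixed_bin_number(suv, num_bins):
          """RD: divide each image's SUV range into `num_bins` equal bins,
          so the bin size (intensity resolution) varies from image to image."""
          edges = np.linspace(suv.min(), suv.max(), num_bins + 1)
          return np.digitize(suv, edges[1:-1]) + 1   # bins numbered 1..num_bins

      def discretize_fixed_bin_size(suv, bin_size):
          """RB: use a constant bin size, so a given bin index means the same
          SUV interval in every image, enabling inter-patient comparison."""
          return np.floor(suv / bin_size).astype(int) + 1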

  12. Adaptive Wavelet Modeling of Geophysical Data

    NASA Astrophysics Data System (ADS)

    Plattner, A.; Maurer, H.; Dahmen, W.; Vorloeper, J.

    2009-12-01

    Despite the ever-increasing power of modern computers, realistic modeling of complex three-dimensional Earth models is still a challenging task and requires substantial computing resources. The overwhelming majority of current geophysical modeling approaches includes either finite difference or non-adaptive finite element algorithms, and variants thereof. These numerical methods usually require the subsurface to be discretized with a fine mesh to accurately capture the behavior of the physical fields. However, this may result in excessive memory consumption and computing times. A common feature of most of these algorithms is that the modeled data discretizations are independent of the model complexity, which may be wasteful when there are only minor to moderate spatial variations in the subsurface parameters. Recent developments in the theory of adaptive numerical solvers have the potential to overcome this problem. Here, we consider an adaptive wavelet based approach that is applicable to a large scope of problems, also including nonlinear problems. To the best of our knowledge such algorithms have not yet been applied in geophysics. Adaptive wavelet algorithms offer several attractive features: (i) for a given subsurface model, they allow the forward modeling domain to be discretized with a quasi minimal number of degrees of freedom, (ii) sparsity of the associated system matrices is guaranteed, which makes the algorithm memory efficient, and (iii) the modeling accuracy scales linearly with computing time. We have implemented the adaptive wavelet algorithm for solving three-dimensional geoelectric problems. To test its performance, numerical experiments were conducted with a series of conductivity models exhibiting varying degrees of structural complexity. Results were compared with a non-adaptive finite element algorithm, which incorporates an unstructured mesh to best fit subsurface boundaries. Such algorithms represent the current state-of-the-art in geoelectrical modeling. An analysis of the numerical accuracy as a function of the number of degrees of freedom revealed that the adaptive wavelet algorithm outperforms the finite element solver for simple and moderately complex models, whereas the results become comparable for models with spatially highly variable electrical conductivities. The linear dependency of the modeling error and the computing time proved to be model-independent. This feature will allow very efficient computations using large-scale models as soon as our experimental code is optimized in terms of its implementation.

  13. An explicit dissipation-preserving method for Riesz space-fractional nonlinear wave equations in multiple dimensions

    NASA Astrophysics Data System (ADS)

    Macías-Díaz, J. E.

    2018-06-01

    In this work, we investigate numerically a model governed by a multidimensional nonlinear wave equation with damping and fractional diffusion. The governing partial differential equation considers the presence of Riesz space-fractional derivatives of orders in (1, 2], and homogeneous Dirichlet boundary data are imposed on a closed and bounded spatial domain. The model under investigation possesses an energy function which is preserved in the undamped regime. In the damped case, we establish the property of energy dissipation of the model using arguments from functional analysis. Motivated by these results, we propose an explicit finite-difference discretization of our fractional model based on the use of fractional centered differences. Associated to our discrete model, we also propose discretizations of the energy quantities. We establish that the discrete energy is conserved in the undamped regime, and that it dissipates in the damped scenario. Among the most important numerical features of our scheme, we show that the method has a consistency of second order, that it is stable and that it has a quadratic order of convergence. Some one- and two-dimensional simulations are shown in this work to illustrate the fact that the technique is capable of preserving the discrete energy in the undamped regime. For the sake of convenience, we provide a Matlab implementation of our method for the one-dimensional scenario.
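
    For reference, fractional centered differences use the coefficients g_k = (-1)^k Γ(α+1) / (Γ(α/2 - k + 1) Γ(α/2 + k + 1)) (Ortigueira's definition), so that the Riesz derivative of order α is approximated by -h^{-α} Σ_k g_k u(x - kh). A small Python sketch, assuming this standard form matches the paper's notation:

      import numpy as np
      from scipy.special import gamma

      def fractional_centered_coeffs(alpha, num_terms):
          """Coefficients g_k of the fractional centered difference for
          k = -num_terms..num_terms, used to approximate the Riesz
          derivative of order alpha in (1, 2].

          Stable for moderate stencil widths; very wide stencils would
          need log-gamma recurrences to avoid overflow.
          """
          k = np.arange(-num_terms, num_terms + 1)
          return ((-1.0) ** k) * gamma(alpha + 1) / (
              gamma(alpha / 2 - k + 1) * gamma(alpha / 2 + k + 1))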

  14. Exploring high dimensional data with Butterfly: a novel classification algorithm based on discrete dynamical systems.

    PubMed

    Geraci, Joseph; Dharsee, Moyez; Nuin, Paulo; Haslehurst, Alexandria; Koti, Madhuri; Feilotter, Harriet E; Evans, Ken

    2014-03-01

    We introduce a novel method for visualizing high dimensional data via a discrete dynamical system. This method provides a 2D representation of the relationship between subjects according to a set of variables without geometric projections, transformed axes or principal components. The algorithm exploits a memory-type mechanism inherent in a certain class of discrete dynamical systems collectively referred to as the chaos game that are closely related to iterative function systems. The goal of the algorithm was to create a human readable representation of high dimensional patient data that was capable of detecting unrevealed subclusters of patients from within anticipated classifications. This provides a mechanism to further pursue a more personalized exploration of pathology when used with medical data. For clustering and classification protocols, the dynamical system portion of the algorithm is designed to come after some feature selection filter and before some model evaluation (e.g. clustering accuracy) protocol. In the version given here, a univariate features selection step is performed (in practice more complex feature selection methods are used), a discrete dynamical system is driven by this reduced set of variables (which results in a set of 2D cluster models), these models are evaluated for their accuracy (according to a user-defined binary classification) and finally a visual representation of the top classification models are returned. Thus, in addition to the visualization component, this methodology can be used for both supervised and unsupervised machine learning as the top performing models are returned in the protocol we describe here. Butterfly, the algorithm we introduce and provide working code for, uses a discrete dynamical system to classify high dimensional data and provide a 2D representation of the relationship between subjects. We report results on three datasets (two in the article; one in the appendix) including a public lung cancer dataset that comes along with the included Butterfly R package. In the included R script, a univariate feature selection method is used for the dimension reduction step, but in the future we wish to use a more powerful multivariate feature reduction method based on neural networks (Kriesel, 2007). A script written in R (designed to run on R studio) accompanies this article that implements this algorithm and is available at http://butterflygeraci.codeplex.com/. For details on the R package or for help installing the software refer to the accompanying document, Supporting Material and Appendix.
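
    A minimal Python sketch of the chaos-game mechanism the Butterfly algorithm builds on (the corner assignment and symbol sequence are hypothetical; the released R package implements the full pipeline):

      import numpy as np

      def chaos_game_trace(symbols, corners):
          """Map a discrete symbol sequence to a 2D trace via the chaos game:
          repeatedly move halfway from the current point toward the corner
          assigned to the next symbol. Subjects with similar variable
          patterns land in nearby regions of the unit square.
          """
          point = np.array([0.5, 0.5])
          trace = []
          for s in symbols:
              point = (point + corners[s]) / 2.0
              trace.append(point.copy())
          return np.array(trace)

      # Hypothetical example: four discretized expression levels mapped to corners.
      corners = {0: np.array([0, 0]), 1: np.array([0, 1]),
                 2: np.array([1, 0]), 3: np.array([1, 1])}
      trace = chaos_game_trace([0, 2, 3, 1, 2, 2, 0], corners)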

  15. Domain decomposition methods in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Gropp, William D.; Keyes, David E.

    1991-01-01

    The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.

  16. A VHDL Core for Intrinsic Evolution of Discrete Time Filters with Signal Feedback

    NASA Technical Reports Server (NTRS)

    Gwaltney, David A.; Dutton, Kenneth

    2005-01-01

    The design of an Evolvable Machine VHDL Core is presented, representing a discrete-time processing structure capable of supporting control system applications. This VHDL Core is implemented in an FPGA and is interfaced with an evolutionary algorithm implemented in firmware on a Digital Signal Processor (DSP) to create an evolvable system platform. The salient features of this architecture are presented. The capability to implement IIR filter structures is presented along with the results of the intrinsic evolution of a filter. The robustness of the evolved filter design is tested and its unique characteristics are described.

  17. Implementation of Interaction Algorithm to Non-Matching Discrete Interfaces Between Structure and Fluid Mesh

    NASA Technical Reports Server (NTRS)

    Chen, Shu-Po

    1999-01-01

This paper presents software for solving non-conforming fluid-structure interfaces in aeroelastic simulation. It reviews the interpolation and integration algorithms and highlights the flexibility and user-friendly features that allow the user to select existing structure and fluid packages, such as NASTRAN and CFL3D, to perform the simulation. The presented software is validated by computing the High Speed Civil Transport model.

  18. Small-kernel, constrained least-squares restoration of sampled image data

    NASA Technical Reports Server (NTRS)

    Hazra, Rajeeb; Park, Stephen K.

    1992-01-01

    Following the work of Park (1989), who extended a derivation of the Wiener filter based on the incomplete discrete/discrete model to a more comprehensive end-to-end continuous/discrete/continuous model, it is shown that a derivation of the constrained least-squares (CLS) filter based on the discrete/discrete model can also be extended to this more comprehensive continuous/discrete/continuous model. This results in an improved CLS restoration filter, which can be efficiently implemented as a small-kernel convolution in the spatial domain.
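
    For orientation, a Python sketch of the classical discrete/discrete CLS restoration filter that this work generalizes (only the standard baseline is shown; the continuous/discrete/continuous derivation in the paper changes how the transfer functions are constructed):

      import numpy as np

      def cls_restore(observed, psf_otf, reg_otf, gamma):
          """Constrained least-squares restoration in the frequency domain:
          F = conj(H) * G / (|H|^2 + gamma * |C|^2), with H the blur transfer
          function and C the transform of a smoothness constraint (e.g. a
          discrete Laplacian kernel).
          """
          G = np.fft.fft2(observed)
          H, C = psf_otf, reg_otf
          F = np.conj(H) * G / (np.abs(H) ** 2 + gamma * np.abs(C) ** 2)
          return np.real(np.fft.ifft2(F))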

  19. Three-dimensional geoelectric modelling with optimal work/accuracy rate using an adaptive wavelet algorithm

    NASA Astrophysics Data System (ADS)

    Plattner, A.; Maurer, H. R.; Vorloeper, J.; Dahmen, W.

    2010-08-01

Despite the ever-increasing power of modern computers, realistic modelling of complex 3-D earth models is still a challenging task and requires substantial computing resources. The overwhelming majority of current geophysical modelling approaches includes either finite difference or non-adaptive finite element algorithms and variants thereof. These numerical methods usually require the subsurface to be discretized with a fine mesh to accurately capture the behaviour of the physical fields. However, this may result in excessive memory consumption and computing times. A common feature of most of these algorithms is that the modelled data discretizations are independent of the model complexity, which may be wasteful when there are only minor to moderate spatial variations in the subsurface parameters. Recent developments in the theory of adaptive numerical solvers have the potential to overcome this problem. Here, we consider an adaptive wavelet-based approach that is applicable to a large range of problems, also including nonlinear problems. In comparison with earlier applications of adaptive solvers to geophysical problems, we employ here a new adaptive scheme whose core ingredients arose from a rigorous analysis of the overall asymptotically optimal computational complexity, including, in particular, an optimal work/accuracy rate. Our adaptive wavelet algorithm offers several attractive features: (i) for a given subsurface model, it allows the forward modelling domain to be discretized with a quasi minimal number of degrees of freedom, (ii) sparsity of the associated system matrices is guaranteed, which makes the algorithm memory efficient and (iii) the modelling accuracy scales linearly with computing time. We have implemented the adaptive wavelet algorithm for solving 3-D geoelectric problems. To test its performance, numerical experiments were conducted with a series of conductivity models exhibiting varying degrees of structural complexity. Results were compared with a non-adaptive finite element algorithm, which incorporates an unstructured mesh that best fits subsurface boundaries. Such algorithms represent the current state-of-the-art in geoelectric modelling. An analysis of the numerical accuracy as a function of the number of degrees of freedom revealed that the adaptive wavelet algorithm outperforms the finite element solver for simple and moderately complex models, whereas the results become comparable for models with high spatial variability of electrical conductivities. The linear dependence of the modelling error and the computing time proved to be model-independent. This feature will allow very efficient computations using large-scale models as soon as our experimental code is optimized in terms of its implementation.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, Judith; Johnson, Timothy C.; Slater, Lee D.

There is an increasing need to characterize discrete fractures away from boreholes to better define fracture distributions and monitor solute transport. We performed a 3D evaluation of static and time-lapse cross-borehole electrical resistivity tomography (ERT) data sets from a limestone quarry in which flow and transport are controlled by a bedding-plane feature. Ten boreholes were discretized using an unstructured tetrahedral mesh, and 2D panel measurements were inverted for a 3D distribution of conductivity. We evaluated the benefits of 3D versus 2.5D inversion of ERT data in fractured rock while including the use of borehole regularization disconnects (BRDs) and borehole conductivity constraints. High-conductivity halos (inversion artifacts) surrounding boreholes were removed in static images when BRDs and borehole conductivity constraints were implemented. Furthermore, applying these constraints focused transient changes in conductivity resulting from solute transport on the bedding plane, providing a more physically reasonable model for conductivity changes associated with solute transport at this fractured rock site. Assuming bedding-plane continuity between fractures identified in borehole televiewer data, we discretized a planar region between six boreholes and applied a fracture regularization disconnect (FRD). Although the FRD appropriately focused conductivity changes on the bedding plane, the conductivity distribution within the discretized fracture was nonunique and dependent on the starting homogeneous model conductivity. Synthetic studies performed to better explain field observations showed that inaccurate electrode locations in boreholes resulted in low-conductivity halos surrounding borehole locations. These synthetic studies also showed that the recovery of the true conductivity within an FRD depended on the conductivity contrast between the host rock and fractures. Our findings revealed that the potential exists to improve imaging of fractured rock through 3D inversion and accurate modeling of boreholes. However, deregularization of localized features can result in significant electrical conductivity artifacts, especially when representing features with a high degree of spatial uncertainty.

  1. Discretization-dependent model for weakly connected excitable media

    NASA Astrophysics Data System (ADS)

    Arroyo, Pedro André; Alonso, Sergio; Weber dos Santos, Rodrigo

    2018-03-01

    Pattern formation has been widely observed in extended chemical and biological processes. Although the biochemical systems are highly heterogeneous, homogenized continuum approaches formed by partial differential equations have been employed frequently. Such approaches are usually justified by the difference of scales between the heterogeneities and the characteristic spatial size of the patterns. Under different conditions, for example, under weak coupling, discrete models are more adequate. However, discrete models may be less manageable, for instance, in terms of numerical implementation and mesh generation, than the associated continuum models. Here we study a model to approach discreteness which permits the computer implementation on general unstructured meshes. The model is cast as a partial differential equation but with a parameter that depends not only on heterogeneities sizes, as in the case of quasicontinuum models, but also on the discretization mesh. Therefore, we refer to it as a discretization-dependent model. We validate the approach in a generic excitable media that simulates three different phenomena: the propagation of action membrane potential in cardiac tissue, in myelinated axons of neurons, and concentration waves in chemical microemulsions.

  2. RINGMesh: A programming library for developing mesh-based geomodeling applications

    NASA Astrophysics Data System (ADS)

    Pellerin, Jeanne; Botella, Arnaud; Bonneau, François; Mazuyer, Antoine; Chauvin, Benjamin; Lévy, Bruno; Caumon, Guillaume

    2017-07-01

RINGMesh is a C++ open-source programming library for manipulating discretized geological models. It is designed to ease the development of applications and workflows that use discretized 3D models. It is neither a geomodeler nor a meshing software. RINGMesh implements functionalities to read discretized surface-based or volumetric structural models and to check their validity. The models can then be exported in various file formats. RINGMesh provides data structures to represent geological structural models, defined either by their discretized boundary surfaces and/or by discretized volumes. A programming interface allows the development of new geomodeling methods and the integration of external software. The goal of RINGMesh is to help researchers focus on the implementation of their specific method rather than on tedious tasks common to many applications. The documented code is open-source and distributed under the modified BSD license. It is available at https://www.ring-team.org/index.php/software/ringmesh.

  3. Integrable discrete PT symmetric model.

    PubMed

    Ablowitz, Mark J; Musslimani, Ziad H

    2014-09-01

    An exactly solvable discrete PT invariant nonlinear Schrödinger-like model is introduced. It is an integrable Hamiltonian system that exhibits a nontrivial nonlinear PT symmetry. A discrete one-soliton solution is constructed using a left-right Riemann-Hilbert formulation. It is shown that this pure soliton exhibits unique features such as power oscillations and singularity formation. The proposed model can be viewed as a discretization of a recently obtained integrable nonlocal nonlinear Schrödinger equation.
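
    For reference, the discrete PT-symmetric model introduced in this paper has, up to sign and scaling conventions that should be checked against the original, the form

      i \frac{dQ_n}{d\tau} = Q_{n+1} - 2Q_n + Q_{n-1} \pm Q_n Q_{-n}^{*}\left( Q_{n+1} + Q_{n-1} \right),

    where the nonlocal product Q_n Q_{-n}^{*} realizes the PT symmetry, replacing the local |Q_n|^2 nonlinearity of the Ablowitz-Ladik lattice.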

  4. Implementation of a Low-Thrust Trajectory Optimization Algorithm for Preliminary Design

    NASA Technical Reports Server (NTRS)

    Sims, Jon A.; Finlayson, Paul A.; Rinderle, Edward A.; Vavrina, Matthew A.; Kowalkowski, Theresa D.

    2006-01-01

    A tool developed for the preliminary design of low-thrust trajectories is described. The trajectory is discretized into segments and a nonlinear programming method is used for optimization. The tool is easy to use, has robust convergence, and can handle many intermediate encounters. In addition, the tool has a wide variety of features, including several options for objective function and different low-thrust propulsion models (e.g., solar electric propulsion, nuclear electric propulsion, and solar sail). High-thrust, impulsive trajectories can also be optimized.

  5. Defeaturing CAD models using a geometry-based size field and facet-based reduction operators.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quadros, William Roshan; Owen, Steven James

    2010-04-01

We propose a method to automatically defeature a CAD model by detecting irrelevant features using a geometry-based size field and a method to remove the irrelevant features via facet-based operations on a discrete representation. A discrete B-Rep model is first created by obtaining a faceted representation of the CAD entities. The candidate facet entities are then marked for reduction by using a geometry-based size field. This is accomplished by estimating local mesh sizes based on geometric criteria. If the field value at a facet entity goes below a user specified threshold value then it is identified as an irrelevant feature and is marked for reduction. The reduction of marked facet entities is primarily performed using an edge collapse operator. Care is taken to retain a valid geometry and topology of the discrete model throughout the procedure. The original model is not altered as the defeaturing is performed on a separate discrete model. Associativity between the entities of the discrete model and that of original CAD model is maintained in order to decode the attributes and boundary conditions applied on the original CAD entities onto the mesh via the entities of the discrete model. Example models are presented to illustrate the effectiveness of the proposed approach.

  6. Estimation of rates-across-sites distributions in phylogenetic substitution models.

    PubMed

    Susko, Edward; Field, Chris; Blouin, Christian; Roger, Andrew J

    2003-10-01

    Previous work has shown that it is often essential to account for the variation in rates at different sites in phylogenetic models in order to avoid phylogenetic artifacts such as long branch attraction. In most current models, the gamma distribution is used for the rates-across-sites distributions and is implemented as an equal-probability discrete gamma. In this article, we introduce discrete distribution estimates with large numbers of equally spaced rate categories allowing us to investigate the appropriateness of the gamma model. With large numbers of rate categories, these discrete estimates are flexible enough to approximate the shape of almost any distribution. Likelihood ratio statistical tests and a nonparametric bootstrap confidence-bound estimation procedure based on the discrete estimates are presented that can be used to test the fit of a parametric family. We applied the methodology to several different protein data sets, and found that although the gamma model often provides a good parametric model for this type of data, rate estimates from an equal-probability discrete gamma model with a small number of categories will tend to underestimate the largest rates. In cases when the gamma model assumption is in doubt, rate estimates coming from the discrete rate distribution estimate with a large number of rate categories provide a robust alternative to gamma estimates. An alternative implementation of the gamma distribution is proposed that, for equal numbers of rate categories, is computationally more efficient during optimization than the standard gamma implementation and can provide more accurate estimates of site rates.
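
    A short Python sketch of the standard equal-probability discrete gamma referred to above, using the common median-per-category approximation (an assumption; mean-per-category variants are also used), rescaled so the mean rate is 1:

      import numpy as np
      from scipy.stats import gamma as gamma_dist

      def discrete_gamma_rates(alpha, num_categories):
          """Equal-probability discrete gamma rates: each of the K categories
          has probability 1/K, and its rate is the median of the
          corresponding slice of a Gamma(alpha) distribution with mean 1,
          rescaled so the mean rate is exactly 1.
          """
          k = np.arange(num_categories)
          quantiles = (2 * k + 1) / (2 * num_categories)
          rates = gamma_dist.ppf(quantiles, a=alpha, scale=1.0 / alpha)
          return rates / rates.mean()

      print(discrete_gamma_rates(alpha=0.5, num_categories=4))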

  7. The Effects of Video Modeling with Voiceover Instruction on Accurate Implementation of Discrete-Trial Instruction

    ERIC Educational Resources Information Center

    Vladescu, Jason C.; Carroll, Regina; Paden, Amber; Kodak, Tiffany M.

    2012-01-01

    The present study replicates and extends previous research on the use of video modeling (VM) with voiceover instruction to train staff to implement discrete-trial instruction (DTI). After staff trainees reached the mastery criterion when teaching an adult confederate with VM, they taught a child with a developmental disability using DTI. The…

  8. Discrete Biogeography Based Optimization for Feature Selection in Molecular Signatures.

    PubMed

    Liu, Bo; Tian, Meihong; Zhang, Chunhua; Li, Xiangtao

    2015-04-01

Biomarker discovery from high-dimensional data is a complex task in the development of efficient cancer diagnoses and classification. However, these data are usually redundant and noisy, and only a subset of them present distinct profiles for different classes of samples. Thus, selecting highly discriminative genes from gene expression data has become increasingly interesting in the field of bioinformatics. In this paper, a discrete biogeography-based optimization is proposed to select a good subset of informative genes relevant to classification. In the proposed algorithm, the Fisher-Markov selector is first used to choose a fixed number of genes. Second, to make biogeography-based optimization suitable for the feature selection problem, a discrete migration model and a discrete mutation model are proposed to balance exploration and exploitation. The discrete biogeography-based optimization, which we call DBBO, is then obtained by integrating the discrete migration and mutation models. Finally, DBBO is used for feature selection, with three classifiers evaluated under 10-fold cross-validation. To show the effectiveness and efficiency of the algorithm, it is tested on four breast cancer dataset benchmarks. Compared with a genetic algorithm, particle swarm optimization, a differential evolution algorithm, and hybrid biogeography-based optimization, experimental results demonstrate that the proposed method is better than or at least comparable to previous methods from the literature in terms of the quality of the solutions obtained. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
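
    A generic Python sketch of what discrete migration and mutation over binary feature-selection vectors can look like (operator details here are illustrative assumptions, not the authors' exact DBBO operators):

      import numpy as np

      rng = np.random.default_rng(1)

      def discrete_migration(population, fitness, immigration_rates):
          """Illustrative discrete migration: each bit of a habitat may be
          replaced by the corresponding bit of a fitness-weighted donor."""
          prob = fitness / fitness.sum()              # emigration probabilities
          new_pop = population.copy()
          n_habitats, n_genes = population.shape
          for i in range(n_habitats):
              for j in range(n_genes):
                  if rng.random() < immigration_rates[i]:
                      donor = rng.choice(n_habitats, p=prob)
                      new_pop[i, j] = population[donor, j]
          return new_pop

      def discrete_mutation(population, rate):
          """Bit-flip mutation on binary feature-selection vectors."""
          flips = rng.random(population.shape) < rate
          return np.where(flips, 1 - population, population)

      pop = rng.integers(0, 2, size=(6, 20))      # 6 habitats, 20 candidate genes
      fit = pop.sum(axis=1).astype(float) + 1.0   # toy fitness, not a classifier score
      pop = discrete_mutation(discrete_migration(pop, fit, np.full(6, 0.2)), rate=0.02)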

  9. Reducing student stereotypy by improving teachers' implementation of discrete-trial teaching.

    PubMed

    Dib, Nancy; Sturmey, Peter

    2007-01-01

    Discrete-trial teaching is an instructional method commonly used to teach social and academic skills to children with an autism spectrum disorder. The purpose of the current study was to evaluate the indirect effects of discrete-trial teaching on 3 students' stereotypy. Instructions, feedback, modeling, and rehearsal were used to improve 3 teaching aides' implementation of discrete-trial teaching in a private school for children with autism. Improvements in accurate teaching were accompanied by systematic decreases in students' levels of stereotypy.

  10. An advanced environment for hybrid modeling of biological systems based on modelica.

    PubMed

    Pross, Sabrina; Bachmann, Bernhard

    2011-01-20

    Biological systems are often very complex so that an appropriate formalism is needed for modeling their behavior. Hybrid Petri Nets, consisting of time-discrete Petri Net elements as well as continuous ones, have proven to be ideal for this task. Therefore, a new Petri Net library was implemented based on the object-oriented modeling language Modelica which allows the modeling of discrete, stochastic and continuous Petri Net elements by differential, algebraic and discrete equations. An appropriate Modelica-tool performs the hybrid simulation with discrete events and the solution of continuous differential equations. A special sub-library contains so-called wrappers for specific reactions to simplify the modeling process. The Modelica-models can be connected to Simulink-models for parameter optimization, sensitivity analysis and stochastic simulation in Matlab. The present paper illustrates the implementation of the Petri Net component models, their usage within the modeling process and the coupling between the Modelica-tool Dymola and Matlab/Simulink. The application is demonstrated by modeling the metabolism of Chinese Hamster Ovary Cells.

  11. Designing perturbative metamaterials from discrete models.

    PubMed

    Matlack, Kathryn H; Serra-Garcia, Marc; Palermo, Antonio; Huber, Sebastian D; Daraio, Chiara

    2018-04-01

    Identifying material geometries that lead to metamaterials with desired functionalities presents a challenge for the field. Discrete, or reduced-order, models provide a concise description of complex phenomena, such as negative refraction, or topological surface states; therefore, the combination of geometric building blocks to replicate discrete models presenting the desired features represents a promising approach. However, there is no reliable way to solve such an inverse problem. Here, we introduce 'perturbative metamaterials', a class of metamaterials consisting of weakly interacting unit cells. The weak interaction allows us to associate each element of the discrete model with individual geometric features of the metamaterial, thereby enabling a systematic design process. We demonstrate our approach by designing two-dimensional elastic metamaterials that realize Veselago lenses, zero-dispersion bands and topological surface phonons. While our selected examples are within the mechanical domain, the same design principle can be applied to acoustic, thermal and photonic metamaterials composed of weakly interacting unit cells.

  12. Simulation of the space station information system in Ada

    NASA Technical Reports Server (NTRS)

    Spiegel, James R.

    1986-01-01

The Flexible Ada Simulation Tool (FAST) is a discrete event simulation language which is written in Ada. FAST has been used to simulate a number of options for ground data distribution of Space Station payload data. The fact that the Ada language is used for implementation has allowed a number of useful interactive features to be built into FAST and has facilitated quick enhancement of its capabilities to support new modeling requirements. General simulation concepts are discussed, along with how these concepts are implemented in FAST. The FAST design is discussed, and it is pointed out how the use of the Ada language enabled the development of some significant advantages over classical FORTRAN-based simulation languages. The advantages discussed are in the areas of efficiency, ease of debugging, and ease of integrating user code. The specific Ada language features which enable these advances are discussed.

  13. Laban Movement Analysis towards Behavior Patterns

    NASA Astrophysics Data System (ADS)

    Santos, Luís; Dias, Jorge

This work presents a study about the use of Laban Movement Analysis (LMA) as a robust tool to describe basic human behavior patterns, to be applied in human-machine interaction. LMA is a language used to describe and annotate dancing movements and is divided into components [1]: Body, Space, Shape and Effort. Although its general framework is widely used in physical and mental therapy [2], it has found little application in the engineering domain. Rett J. [3] proposed to implement LMA using Bayesian Networks. However, LMA component models have not yet been fully implemented. A study on how to approach behavior using LMA is presented. Behavior is a complex feature and movement chain, but we believe that most basic behavior primitives can be discretized into simple features. By correctly identifying Laban parameters and movements, the authors feel that good patterns can be found within a specific set of basic behavior semantics.

  14. A general gridding, discretization, and coarsening methodology for modeling flow in porous formations with discrete geological features

    NASA Astrophysics Data System (ADS)

    Karimi-Fard, M.; Durlofsky, L. J.

    2016-10-01

    A comprehensive framework for modeling flow in porous media containing thin, discrete features, which could be high-permeability fractures or low-permeability deformation bands, is presented. The key steps of the methodology are mesh generation, fine-grid discretization, upscaling, and coarse-grid discretization. Our specialized gridding technique combines a set of intersecting triangulated surfaces by constructing approximate intersections using existing edges. This procedure creates a conforming mesh of all surfaces, which defines the internal boundaries for the volumetric mesh. The flow equations are discretized on this conforming fine mesh using an optimized two-point flux finite-volume approximation. The resulting discrete model is represented by a list of control-volumes with associated positions and pore-volumes, and a list of cell-to-cell connections with associated transmissibilities. Coarse models are then constructed by the aggregation of fine-grid cells, and the transmissibilities between adjacent coarse cells are obtained using flow-based upscaling procedures. Through appropriate computation of fracture-matrix transmissibilities, a dual-continuum representation is obtained on the coarse scale in regions with connected fracture networks. The fine and coarse discrete models generated within the framework are compatible with any connectivity-based simulator. The applicability of the methodology is illustrated for several two- and three-dimensional examples. In particular, we consider gas production from naturally fractured low-permeability formations, and transport through complex fracture networks. In all cases, highly accurate solutions are obtained with significant model reduction.
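
    The fine and coarse models reduce to lists of cells and transmissibility-weighted connections; a minimal Python sketch of the standard two-point flux transmissibility between two adjacent control volumes (a generic finite-volume formula, with illustrative values for a matrix-fracture connection, not the authors' optimized discretization):

      def two_point_transmissibility(k1, k2, area, d1, d2):
          """Two-point flux approximation between two adjacent cells.

          Each half-transmissibility is permeability times interface area
          over the distance from the cell centroid to the interface; the
          cell-to-cell transmissibility is their harmonic combination, as in
          standard connectivity-based finite-volume flow simulators.
          """
          t1 = k1 * area / d1
          t2 = k2 * area / d2
          return t1 * t2 / (t1 + t2)

      # Example: a matrix cell (~1 mD, i.e. 1e-15 m^2) next to a thin fracture cell.
      T = two_point_transmissibility(1e-15, 1e-12, area=4.0, d1=0.5, d2=0.001)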

  15. Activity Diagrams for DEVS Models: A Case Study Modeling Health Care Behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ozmen, Ozgur; Nutaro, James J

Discrete Event Systems Specification (DEVS) is a widely used formalism for modeling and simulation of discrete and continuous systems. While DEVS provides a sound mathematical representation of discrete systems, its practical use can suffer when models become complex. Five main functions, which construct the core of atomic modules in DEVS, can realize the behaviors that modelers want to represent. The integration of these functions is handled by the simulation routine; however, modelers can implement each function in various ways. Therefore, there is a need for graphical representations of complex models to simplify their implementation and facilitate their reproduction. In this work, we illustrate the use of activity diagrams for this purpose in the context of a health care behavior model, which is developed with an agent-based modeling paradigm.

  16. The graphical brain: Belief propagation and active inference

    PubMed Central

    Friston, Karl J.; Parr, Thomas; de Vries, Bert

    2018-01-01

This paper considers functional integration in the brain from a computational perspective. We ask what sort of neuronal message passing is mandated by active inference—and what implications this has for context-sensitive connectivity at microscopic and macroscopic levels. In particular, we formulate neuronal processing as belief propagation under deep generative models. Crucially, these models can entertain both discrete and continuous states, leading to distinct schemes for belief updating that play out on the same (neuronal) architecture. Technically, we use Forney (normal) factor graphs to elucidate the requisite message passing in terms of its form and scheduling. To accommodate mixed generative models (of discrete and continuous states), one also has to consider link nodes or factors that enable discrete and continuous representations to talk to each other. When mapping the implicit computational architecture onto neuronal connectivity, several interesting features emerge. For example, Bayesian model averaging and comparison, which link discrete and continuous states, may be implemented in thalamocortical loops. These and other considerations speak to a computational connectome that is inherently state dependent and self-organizing in ways that yield to a principled (variational) account. We conclude with simulations of reading that illustrate the implicit neuronal message passing, with a special focus on how discrete (semantic) representations inform, and are informed by, continuous (visual) sampling of the sensorium. PMID:29417960

  17. Implementing ARFORGEN: Installation Capability and Feasibility Study of Meeting ARFORGEN Guidelines

    DTIC Science & Technology

    2007-07-26

aligning troop requirements with the Army’s new strategic mission, the force stabilization element of ARFORGEN was developed to raise the morale of...a discrete event simulation model developed for the project to mirror the reset process. The Unit Reset model is implemented in Java as a discrete...and transportation. Further, the typical installation support staff is manned by a Table of Distribution and Allowance (TDA) designed to

  18. The Knowledge Program: an Innovative, Comprehensive Electronic Data Capture System and Warehouse

    PubMed Central

    Katzan, Irene; Speck, Micheal; Dopler, Chris; Urchek, John; Bielawski, Kay; Dunphy, Cheryl; Jehi, Lara; Bae, Charles; Parchman, Alandra

    2011-01-01

Data contained in the electronic health record (EHR) present a tremendous opportunity to improve quality of care and enhance research capabilities. However, the EHR is not structured to provide data for such purposes: most clinical information is entered as free text and content varies substantially between providers. Discrete information on patients’ functional status is typically not collected. Data extraction tools are often unavailable. We have developed the Knowledge Program (KP), a comprehensive initiative to improve the collection of discrete clinical information into the EHR and the retrievability of data for use in research, quality, and patient care. A distinct feature of the KP is the systematic collection of patient-reported outcomes, which is captured discretely, allowing more refined analyses of care outcomes. The KP capitalizes on features of the Epic EHR and utilizes an external IT infrastructure distinct from Epic for enhanced functionality. Here, we describe the development and implementation of the KP. PMID:22195124

  19. Pricing index-based catastrophe bonds: Part 1: Formulation and discretization issues using a numerical PDE approach

    NASA Astrophysics Data System (ADS)

    Unger, André J. A.

    2010-02-01

    This work is the first installment in a two-part series, and focuses on the development of a numerical PDE approach to price components of a Bermudan-style callable catastrophe (CAT) bond. The bond is based on two underlying stochastic variables; the PCS index which posts quarterly estimates of industry-wide hurricane losses as well as a single-factor CIR interest rate model for the three-month LIBOR. The aggregate PCS index is analogous to losses claimed under traditional reinsurance in that it is used to specify a reinsurance layer. The proposed CAT bond model contains a Bermudan-style call feature designed to allow the reinsurer to minimize their interest rate risk exposure on making substantial fixed coupon payments using capital from the reinsurance premium. Numerical PDE methods are the fundamental strategy for pricing early-exercise constraints, such as the Bermudan-style call feature, into contingent claim models. Therefore, the objective and unique contribution of this first installment in the two-part series is to develop a formulation and discretization strategy for the proposed CAT bond model utilizing a numerical PDE approach. Object-oriented code design is fundamental to the numerical methods used to aggregate the PCS index, and implement the call feature. Therefore, object-oriented design issues that relate specifically to the development of a numerical PDE approach for the component of the proposed CAT bond model that depends on the PCS index and LIBOR are described here. Formulation, numerical methods and code design issues that relate to aggregating the PCS index and introducing the call option are the subject of the companion paper.

  20. An improved switching converter model using discrete and average techniques

    NASA Technical Reports Server (NTRS)

    Shortt, D. J.; Lee, F. C.

    1982-01-01

    The nonlinear modeling and analysis of dc-dc converters has been done by averaging and discrete-sampling techniques. The averaging technique is simple, but inaccurate as the modulation frequencies approach the theoretical limit of one-half the switching frequency. The discrete technique is accurate even at high frequencies, but is very complex and cumbersome. An improved model is developed by combining the aforementioned techniques. This new model is easy to implement in circuit and state variable forms and is accurate to the theoretical limit.
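
    A minimal Python sketch of the averaging half of the combined model: the ON and OFF state-space descriptions are weighted by the duty ratio (standard state-space averaging; the discrete-sampling correction that makes the paper's model accurate near half the switching frequency is not shown):

      import numpy as np

      def averaged_model(A_on, B_on, A_off, B_off, duty):
          """State-space averaged model of a two-state switching converter:
          x' = [d*A_on + (1-d)*A_off] x + [d*B_on + (1-d)*B_off] u,
          where d is the duty ratio. Simple, but its accuracy degrades as
          modulation frequencies approach half the switching frequency.
          """
          A = duty * A_on + (1 - duty) * A_off
          B = duty * B_on + (1 - duty) * B_off
          return A, B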

  1. An implementation of discrete electron transport models for gold in the Geant4 simulation toolkit

    NASA Astrophysics Data System (ADS)

    Sakata, D.; Incerti, S.; Bordage, M. C.; Lampe, N.; Okada, S.; Emfietzoglou, D.; Kyriakou, I.; Murakami, K.; Sasaki, T.; Tran, H.; Guatelli, S.; Ivantchenko, V. N.

    2016-12-01

    Gold nanoparticle (GNP) boosted radiation therapy can enhance the biological effectiveness of radiation treatments by increasing the quantity of direct and indirect radiation-induced cellular damage. As the physical effects of GNP boosted radiotherapy occur across energy scales that descend down to 10 eV, Monte Carlo simulations require discrete physics models down to these very low energies in order to avoid underestimating the absorbed dose and secondary particle generation. Discrete physics models for electron transport down to 10 eV have been implemented within the Geant4-DNA low energy extension of Geant4. Such models allow the investigation of GNP effects at the nanoscale. At low energies, the new models have better agreement with experimental data on the backscattering coefficient, and they show similar performance for transmission coefficient data as the Livermore and Penelope models already implemented in Geant4. These new models are applicable in simulations focused on estimating the relative biological effectiveness of radiation in GNP boosted radiotherapy applications with photon and electron radiation sources.

  2. Training shelter volunteers to teach dog compliance.

    PubMed

    Howard, Veronica J; DiGennaro Reed, Florence D

    2014-01-01

    This study examined the degree to which training procedures influenced the integrity of behaviorally based dog training implemented by volunteers of an animal shelter. Volunteers were taught to implement discrete-trial obedience training to teach 2 skills (sit and wait) to dogs. Procedural integrity during the baseline and written instructions conditions was low across all participants. Although performance increased with use of a video model, integrity did not reach criterion levels until performance feedback and modeling were provided. Moreover, the integrity of the discrete-trial training procedure was significantly and positively correlated with dog compliance to instructions for all dyads. Correct implementation and compliance were observed when participants were paired with a novel dog and trainer, respectively, although generalization of procedural integrity from the discrete-trial sit procedure to the discrete-trial wait procedure was not observed. Shelter consumers rated the behavior change in dogs and trainers as socially significant. Implications of these findings and future directions for research are discussed. © Society for the Experimental Analysis of Behavior.

  3. The Programming Language Python In Earth System Simulations

    NASA Astrophysics Data System (ADS)

    Gross, L.; Imranullah, A.; Mora, P.; Saez, E.; Smillie, J.; Wang, C.

    2004-12-01

    Mathematical models in the earth sciences are based on the solution of systems of coupled, non-linear, time-dependent partial differential equations (PDEs). The spatial and temporal scales vary from planetary scale and millions of years for convection problems to 100 km and 10 years for fault system simulations. Various techniques are in use to deal with the time dependency (e.g. Crank-Nicholson), with the non-linearity (e.g. Newton-Raphson) and with weakly coupled equations (e.g. non-linear Gauss-Seidel). Besides these high-level solution algorithms, discretization methods (e.g. the finite element method (FEM) and the boundary element method (BEM)) are used to deal with spatial derivatives. Typically, large-scale, three-dimensional meshes are required to resolve geometrical complexity (e.g. in the case of fault systems) or features in the solution (e.g. in mantle convection simulations). The modelling environment escript allows the rapid implementation of new physics as required for the development of simulation codes in the earth sciences. Its main objective is to provide a programming language in which the user can define new models and rapidly develop high-level solution algorithms. The current implementation is linked with the finite element package finley as a PDE solver. However, the design is open and other discretization technologies such as finite differences and boundary element methods could be included. escript is implemented as an extension of the interactive programming environment python (see www.python.org). Key concepts introduced are Data objects, which hold values on nodes or elements of the finite element mesh, and linearPDE objects, which define linear partial differential equations to be solved by the underlying discretization technology. In this paper we present the basic concepts of escript and show how it is used to implement a simulation code for interacting fault systems. We also show some results of large-scale, parallel simulations on an SGI Altix system. Acknowledgements: Project work is supported by the Australian Commonwealth Government through the Australian Computational Earth Systems Simulator Major National Research Facility, the Queensland State Government Smart State Research Facility Fund, The University of Queensland and SGI.
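
    A minimal sketch of the escript style described above, solving a Poisson-type problem with a LinearPDE object on a finley mesh. The names follow the interface as documented by the project (esys.escript, esys.finley); exact module paths and signatures may differ between versions, so treat this as an assumption-laden illustration rather than a verbatim escript program.

      from esys.escript import kronecker, whereZero, Lsup
      from esys.escript.linearPDEs import LinearPDE
      from esys.finley import Rectangle

      # Unit square meshed by the finley PDE solver (40x40 elements)
      domain = Rectangle(l0=1.0, l1=1.0, n0=40, n1=40)
      x = domain.getX()

      # -div(A*grad(u)) = Y with u = 0 on the x0 = 0 edge
      pde = LinearPDE(domain)
      pde.setValue(A=kronecker(domain), Y=1.0, q=whereZero(x[0]), r=0.0)
      u = pde.getSolution()
      print("sup norm of solution:", Lsup(u))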

  4. Video modeling to train staff to implement discrete-trial instruction.

    PubMed

    Catania, Cynthia N; Almeida, Daniel; Liu-Constant, Brian; DiGennaro Reed, Florence D

    2009-01-01

    Three new direct-service staff participated in a program that used a video model to train target skills needed to conduct a discrete-trial session. Percentage accuracy in completing a discrete-trial teaching session was evaluated using a multiple baseline design across participants. During baseline, performances ranged from a mean of 12% to 63% accuracy. During video modeling, there was an immediate increase in accuracy to a mean of 98%, 85%, and 94% for each participant. Performance during maintenance and generalization probes remained at high levels. Results suggest that video modeling can be an effective technique to train staff to conduct discrete-trial sessions.

  5. The effects of video modeling with voiceover instruction on accurate implementation of discrete-trial instruction

    PubMed Central

    Vladescu, Jason C; Carroll, Regina; Paden, Amber; Kodak, Tiffany M

    2012-01-01

    The present study replicates and extends previous research on the use of video modeling (VM) with voiceover instruction to train staff to implement discrete-trial instruction (DTI). After staff trainees reached the mastery criterion when teaching an adult confederate with VM, they taught a child with a developmental disability using DTI. The results showed that the staff trainees' accurate implementation of DTI remained high, and both child participants acquired new skills. These findings provide additional support that VM may be an effective method to train staff members to conduct DTI. PMID:22844149

  8. Discontinuous Galerkin algorithms for fully kinetic plasmas

    DOE PAGES

    Juno, J.; Hakim, A.; TenBarge, J.; ...

    2017-10-10

    Here, we present a new algorithm for the discretization of the non-relativistic Vlasov–Maxwell system of equations for the study of plasmas in the kinetic regime. Using the discontinuous Galerkin finite element method for the spatial discretization, we obtain a high order accurate solution for the plasma's distribution function. Time stepping for the distribution function is done explicitly with a third order strong-stability preserving Runge–Kutta method. Since the Vlasov equation in the Vlasov–Maxwell system is a high dimensional transport equation, up to six dimensions plus time, we take special care to note various features we have implemented to reduce the cost while maintaining the integrity of the solution, including the use of a reduced high-order basis set. A series of benchmarks, from simple wave and shock calculations, to a five dimensional turbulence simulation, are presented to verify the efficacy of our set of numerical methods, as well as demonstrate the power of the implemented features.

  9. Multisource Data Classification Using A Hybrid Semi-supervised Learning Scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vatsavai, Raju; Bhaduri, Budhendra L; Shekhar, Shashi

    2009-01-01

    In many practical situations thematic classes cannot be discriminated by spectral measurements alone. Often one needs additional features such as population density, road density, wetlands, elevation, soil types, etc., which are discrete attributes. On the other hand, remote sensing image features are continuous attributes. Finding a suitable statistical model and estimating its parameters is a challenging task in multisource (e.g., discrete and continuous attribute) data classification. In this paper we present a semi-supervised learning method that assumes the samples were generated by a mixture model, where each component could be either a continuous or discrete distribution. Overall classification accuracy of the proposed method is improved by 12% in our initial experiments.

  10. Phenotypic switching in bacteria

    NASA Astrophysics Data System (ADS)

    Merrin, Jack

    Living matter is a non-equilibrium system in which many components work in parallel to perpetuate themselves through a fluctuating environment. Physiological states or functionalities revealed by a particular environment are called phenotypes. Transitions between phenotypes may occur either spontaneously or via interaction with the environment. Even in the same environment, genetically identical bacteria can exhibit different phenotypes of a continuous or discrete nature. In this thesis, we pursued three lines of investigation into discrete phenotypic heterogeneity in bacterial populations: the quantitative characterization of the so-called bacterial persistence, a theoretical model of phenotypic switching based on those measurements, and the design of artificial genetic networks which implement this model. Persistence is the phenotype of a subpopulation of bacteria with a reduced sensitivity to antibiotics. We developed a microfluidic apparatus, which allowed us to monitor the growth rates of individual cells while applying repeated cycles of antibiotic treatments. We were able to identify distinct phenotypes (normal and persistent) and characterize the stochastic transitions between them. We also found that phenotypic heterogeneity was present prior to any environmental cue such as antibiotic exposure. Motivated by the experiments with persisters, we formulated a theoretical model describing the dynamic behavior of several discrete phenotypes in a periodically varying environment. This theoretical framework allowed us to quantitatively predict the fitness of dynamic populations and to compare survival strategies according to environmental time-symmetries. These calculations suggested that persistence is a strategy used by bacterial populations to adapt to fluctuating environments. Knowledge of the phenotypic transition rates for persistence may provide statistical information about the typical environments of bacteria. We also describe a design of artificial genetic networks that would implement a more general theoretical model of phenotypic switching. We will use a new cloning strategy in order to systematically assemble a large number of genetic features, such as site-specific recombination components from the R64 plasmid, which invert several coexisting DNA segments. The inversion of these segments would lead to discrete phenotypic transitions inside a living cell. These artificial phenotypic switches can be controlled precisely in experiments and may serve as a benchmark for their natural counterparts.

  11. Applying Multivariate Discrete Distributions to Genetically Informative Count Data.

    PubMed

    Kirkpatrick, Robert M; Neale, Michael C

    2016-03-01

    We present a novel method of conducting biometric analysis of twin data when the phenotypes are integer-valued counts, which often show an L-shaped distribution. Monte Carlo simulation is used to compare five likelihood-based approaches to modeling: our multivariate discrete method, when its distributional assumptions are correct, when they are incorrect, and three other methods in common use. With data simulated from a skewed discrete distribution, recovery of twin correlations and proportions of additive genetic and common environment variance was generally poor for the Normal, Lognormal and Ordinal models, but good for the two discrete models. Sex-separate applications to substance-use data from twins in the Minnesota Twin Family Study showed superior performance of the two discrete models. The new methods are implemented using R and OpenMx and are freely available.

  12. Robust feature detection and local classification for surfaces based on moment analysis.

    PubMed

    Clarenz, Ulrich; Rumpf, Martin; Telea, Alexandru

    2004-01-01

    The stable local classification of discrete surfaces with respect to features such as edges and corners or concave and convex regions, respectively, is quite difficult as well as indispensable for many surface processing applications. Usually, feature detection is done via a local curvature analysis. On large triangular and irregular grids, e.g., those generated via a marching cubes algorithm, such detectors are tedious to treat and a robust classification is hard to achieve. Here, a local classification method on surfaces is presented which avoids the evaluation of discretized curvature quantities. Moreover, it provides an indicator for the smoothness of a given discrete surface and comes with a built-in multiscale. The proposed classification tool is based on local zero and first moments on the discrete surface. The corresponding integral quantities are stable to compute and they give less noisy results compared to discrete curvature quantities. The stencil width for the integration of the moments turns out to be the scale parameter. Prospective surface processing applications are segmentation on surfaces, surface comparison and matching, and surface modeling. Here, a method for feature-preserving fairing of surfaces is discussed to underline the applicability of the presented approach.
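
    The core of the moment-based indicator can be sketched in a few lines: with uniform weights standing in for the surface integral over a stencil, the zeroth moment is the stencil "mass" and the first moment is its barycenter, whose offset from the centre vertex is small on flat regions and large near edges and corners. The mesh and neighborhood structure below are toy assumptions.

      import numpy as np

      def moment_offset(points, neighborhoods, idx):
          """Distance between a vertex and the barycenter (first moment over the
          zeroth moment) of its stencil: a smoothness indicator that needs no
          discrete curvature evaluation."""
          patch = points[neighborhoods[idx]]
          barycenter = patch.mean(axis=0)       # first moment / zeroth moment
          return np.linalg.norm(barycenter - points[idx])

      # A flat stencil versus a corner-like stencil around vertex 0
      flat   = np.array([[0, 0, 0], [1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0]], float)
      corner = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 1], [0.5, 0.5, 1]], float)
      nbrs = {0: [1, 2, 3, 4]}
      print(moment_offset(flat, nbrs, 0), moment_offset(corner, nbrs, 0))  # 0.0 vs ~1.0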

  13. Visual word ambiguity.

    PubMed

    van Gemert, Jan C; Veenman, Cor J; Smeulders, Arnold W M; Geusebroek, Jan-Mark

    2010-07-01

    This paper studies automatic image classification by modeling soft assignment in the popular codebook model. The codebook model describes an image as a bag of discrete visual words selected from a vocabulary, where the frequency distributions of visual words in an image allow classification. One inherent component of the codebook model is the assignment of discrete visual words to continuous image features. Despite the clear mismatch of this hard assignment with the nature of continuous features, the approach has been successfully applied for some years. In this paper, we investigate four types of soft assignment of visual words to image features. We demonstrate that explicitly modeling visual word assignment ambiguity improves classification performance compared to the hard assignment of the traditional codebook model. The traditional codebook model is compared against our method for five well-known data sets: 15 natural scenes, Caltech-101, Caltech-256, and Pascal VOC 2007/2008. We demonstrate that large codebook vocabulary sizes completely deteriorate the performance of the traditional model, whereas the proposed model performs consistently. Moreover, we show that our method profits in high-dimensional feature spaces and reaps higher benefits when increasing the number of image categories.
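
    The contrast between the traditional hard assignment and one of the soft variants studied (Gaussian kernel weighting) fits in a short numpy sketch; the bandwidth sigma and the random data are assumptions for illustration.

      import numpy as np

      def hard_assign(features, codebook):
          """Traditional codebook: each feature votes only for its nearest word."""
          d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
          hist = np.bincount(d.argmin(axis=1), minlength=len(codebook))
          return hist / hist.sum()

      def soft_assign(features, codebook, sigma=1.0):
          """Kernel codebook: each feature spreads Gaussian-weighted votes over
          all words, explicitly modeling visual word ambiguity."""
          d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
          w = np.exp(-d**2 / (2.0 * sigma**2))
          w /= w.sum(axis=1, keepdims=True)
          return w.sum(axis=0) / len(features)

      rng = np.random.default_rng(0)
      feats, words = rng.standard_normal((100, 8)), rng.standard_normal((20, 8))
      print(hard_assign(feats, words).round(2))
      print(soft_assign(feats, words, sigma=2.0).round(2))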

  14. Analysis hierarchical model for discrete event systems

    NASA Astrophysics Data System (ADS)

    Ciortea, E. M.

    2015-11-01

    This paper presents a hierarchical model based on discrete event networks for robotic systems. In the hierarchical approach, the Petri net is analysed as a network spanning from the highest conceptual level down to the lowest level of local control, and extended Petri nets are used for the modelling and control of complex robotic systems. Such a system is structured, controlled and analysed here using the Visual Object Net++ package, which is relatively simple and easy to use, and the results are shown as representations that are easy to interpret. The hierarchical structure of the robotic system is implemented and analysed on computers using specialized programs. Implementation of the hierarchical discrete event model as a real-time operating system on a computer network connected via a serial bus is possible, where each computer is dedicated to the local Petri model of a subsystem of the global robotic system. Since Petri models are simple to implement on general-purpose computers, the analysis, modelling and control of complex manufacturing systems can be achieved using Petri nets, and discrete event modelling is a pragmatic tool for industrial systems. To capture timing, the Petri model of the transport stream is divided into hierarchical levels whose sections are analysed successively. Simulation of the proposed robotic system using timed Petri nets offers the opportunity to observe the timing of robotic operations: from transport and transmission times measured on the spot, graphics are obtained showing the average time for transport activity for individual parameter sets of finished products.

  15. The application of neural networks to myoelectric signal analysis: a preliminary study.

    PubMed

    Kelly, M F; Parker, P A; Scott, R N

    1990-03-01

    Two neural network implementations are applied to myoelectric signal (MES) analysis tasks. The motivation behind this research is to explore more reliable methods of deriving control for multidegree of freedom arm prostheses. A discrete Hopfield network is used to calculate the time series parameters for a moving average MES model. It is demonstrated that the Hopfield network is capable of generating the same time series parameters as those produced by the conventional sequential least squares (SLS) algorithm. Furthermore, it can be extended to applications utilizing larger amounts of data, and possibly to higher order time series models, without significant degradation in computational efficiency. The second neural network implementation involves using a two-layer perceptron for classifying a single site MES based on two features, specifically the first time series parameter, and the signal power. Using these features, the perceptron is trained to distinguish between four separate arm functions. The two-dimensional decision boundaries used by the perceptron classifier are delineated. It is also demonstrated that the perceptron is able to rapidly compensate for variations when new data are incorporated into the training set. This adaptive quality suggests that perceptrons may provide a useful tool for future MES analysis.
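
    The two-feature, four-class classification stage can be sketched with any small feed-forward network; here scikit-learn's MLPClassifier stands in for the paper's two-layer perceptron, and the feature clusters (first AR parameter versus signal power) are synthetic stand-ins for real MES data.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      # Four hypothetical arm functions, each a cluster in (AR parameter, power)
      rng = np.random.default_rng(0)
      centers = np.array([[0.2, 1.0], [0.8, 1.0], [0.2, 3.0], [0.8, 3.0]])
      X = np.vstack([c + 0.05 * rng.standard_normal((50, 2)) for c in centers])
      y = np.repeat(np.arange(4), 50)

      clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                          random_state=0).fit(X, y)
      print("training accuracy:", clf.score(X, y))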

  16. Multilayer shallow water models with locally variable number of layers and semi-implicit time discretization

    NASA Astrophysics Data System (ADS)

    Bonaventura, Luca; Fernández-Nieto, Enrique D.; Garres-Díaz, José; Narbona-Reina, Gladys

    2018-07-01

    We propose an extension of the discretization approaches for multilayer shallow water models, aimed at making them more flexible and efficient for realistic applications to coastal flows. A novel discretization approach is proposed, in which the number of vertical layers and their distribution are allowed to change in different regions of the computational domain. Furthermore, semi-implicit schemes are employed for the time discretization, leading to a significant efficiency improvement for subcritical regimes. We show that, in the typical regimes in which the application of multilayer shallow water models is justified, the resulting discretization does not introduce any major spurious features and again allows a substantial reduction of the computational cost in areas with complex bathymetry. As an example of the potential of the proposed technique, an application to a sediment transport problem is presented, showing a remarkable improvement with respect to standard discretization approaches.

  17. Conservative, unconditionally stable discretization methods for Hamiltonian equations, applied to wave motion in lattice equations modeling protein molecules

    NASA Astrophysics Data System (ADS)

    LeMesurier, Brenton

    2012-01-01

    A new approach is described for generating exactly energy-momentum conserving time discretizations for a wide class of Hamiltonian systems of DEs with quadratic momenta, including mechanical systems with central forces; it is well-suited in particular to the large systems that arise in both spatial discretizations of nonlinear wave equations and lattice equations such as the Davydov System modeling energetic pulse propagation in protein molecules. The method is unconditionally stable, making it well-suited to equations of broadly “Discrete NLS form”, including many arising in nonlinear optics. Key features of the resulting discretizations are exact conservation of both the Hamiltonian and quadratic conserved quantities related to continuous linear symmetries, preservation of time reversal symmetry, unconditional stability, and respecting the linearity of certain terms. The last feature allows a simple, efficient iterative solution of the resulting nonlinear algebraic systems that retain unconditional stability, avoiding the need for full Newton-type solvers. One distinction from earlier work on conservative discretizations is a new and more straightforward nearly canonical procedure for constructing the discretizations, based on a “discrete gradient calculus with product rule” that mimics the essential properties of partial derivatives. This numerical method is then used to study the Davydov system, revealing that previously conjectured continuum limit approximations by NLS do not hold, but that sech-like pulses related to NLS solitons can nevertheless sometimes arise.
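
    The conservation mechanism can be seen in a scalar toy problem. For H = p^2/2 + V(q), replacing the force by the secant slope of V (a discrete gradient) makes the energy update telescope to zero exactly, independent of the step size; the quartic potential and fixed-point solver below are illustrative assumptions, not the paper's lattice systems.

      import numpy as np

      def V(q):  return 0.25 * q**4     # example potential (assumption)
      def dV(q): return q**3

      def discrete_gradient_step(q, p, dt, iters=30):
          """One energy-conserving step: the force is (V(q1)-V(q0))/(q1-q0),
          so H(q,p) is preserved exactly up to the solver tolerance."""
          qn, pn = q, p
          for _ in range(iters):                    # simple fixed-point iteration
              dq = qn - q
              force = (V(qn) - V(q)) / dq if abs(dq) > 1e-14 else dV(q)
              pn = p - dt * force
              qn = q + dt * 0.5 * (p + pn)
          return qn, pn

      q, p = 1.0, 0.0
      H0 = 0.5 * p**2 + V(q)
      for _ in range(1000):
          q, p = discrete_gradient_step(q, p, dt=0.1)
      print("energy drift after 1000 steps:", 0.5 * p**2 + V(q) - H0)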

  18. Complexity and chaos control in a discrete-time prey-predator model

    NASA Astrophysics Data System (ADS)

    Din, Qamar

    2017-08-01

    We investigate the complex behavior and chaos control in a discrete-time prey-predator model. Taking into account the Leslie-Gower prey-predator model, we propose a discrete-time prey-predator system with predator partially dependent on prey and investigate the boundedness, existence and uniqueness of positive equilibrium and bifurcation analysis of the system by using center manifold theorem and bifurcation theory. Various feedback control strategies are implemented for controlling the bifurcation and chaos in the system. Numerical simulations are provided to illustrate theoretical discussion.
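
    The flavor of the feedback strategies can be shown on a generic chaotic map; the sketch below is not the paper's Leslie-Gower system but the logistic map under simple linear feedback, which damps the chaos and steers the state to the otherwise unstable fixed point.

      import numpy as np

      r, gamma = 3.9, 0.7                 # chaotic regime; feedback strength (assumed)
      f = lambda x: r * x * (1.0 - x)
      x_star = 1.0 - 1.0 / r              # unstable fixed point of the free map

      x, free = 0.3, []
      for _ in range(300):
          free.append(x)
          x = f(x)                        # uncontrolled: chaotic wandering

      x, ctrl = 0.3, []
      for _ in range(300):
          ctrl.append(x)
          x = (1.0 - gamma) * f(x) + gamma * x   # linear feedback stabilizes x_star

      print("uncontrolled spread:", np.ptp(free[150:]))
      print("controlled state vs fixed point:", ctrl[-1], x_star)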

  19. The Holst spin foam model via cubulations

    NASA Astrophysics Data System (ADS)

    Baratin, Aristide; Flori, Cecilia; Thiemann, Thomas

    2012-10-01

    Spin foam models are an attempt at a covariant or path integral formulation of canonical loop quantum gravity. The construction of such models usually relies on the Plebanski formulation of general relativity as a constrained BF theory and is based on the discretization of the action on a simplicial triangulation, which may be viewed as an ultraviolet regulator. The triangulation dependence can be removed by means of group field theory techniques, which allows one to sum over all triangulations. The main tasks for these models are the correct quantum implementation of the Plebanski constraints, the existence of a semiclassical sector implementing additional ‘Regge-like’ constraints arising from simplicial triangulations and the definition of the physical inner product of loop quantum gravity via group field theory. Here we propose a new approach to tackle these issues stemming directly from the Holst action for general relativity, which is also a proper starting point for canonical loop quantum gravity. The discretization is performed by means of a ‘cubulation’ of the manifold rather than a triangulation. We give a direct interpretation of the resulting spin foam model as a generating functional for the n-point functions on the physical Hilbert space at finite regulator. This paper focuses on ideas and tasks to be performed before the model can be taken seriously. However, our analysis reveals some interesting features of this model: firstly, the structure of its amplitudes differs from the standard spin foam models. Secondly, the tetrad n-point functions admit a ‘Wick-like’ structure. Thirdly, the restriction to simple representations does not automatically occur—unless one makes use of the time gauge, just as in the classical theory.

  20. Improving Our Ability to Evaluate Underlying Mechanisms of Behavioral Onset and Other Event Occurrence Outcomes: A Discrete-Time Survival Mediation Model

    PubMed Central

    Fairchild, Amanda J.; Abara, Winston E.; Gottschall, Amanda C.; Tein, Jenn-Yun; Prinz, Ronald J.

    2015-01-01

    The purpose of this article is to introduce and describe a statistical model that researchers can use to evaluate underlying mechanisms of behavioral onset and other event occurrence outcomes. Specifically, the article develops a framework for estimating mediation effects with outcomes measured in discrete-time epochs by integrating the statistical mediation model with discrete-time survival analysis. The methodology has the potential to help strengthen health research by targeting prevention and intervention work more effectively as well as by improving our understanding of discretized periods of risk. The model is applied to an existing longitudinal data set to demonstrate its use, and programming code is provided to facilitate its implementation. PMID:24296470

  1. A Semiquantitative Framework for Gene Regulatory Networks: Increasing the Time and Quantitative Resolution of Boolean Networks

    PubMed Central

    Kerkhofs, Johan; Geris, Liesbet

    2015-01-01

    Boolean models have been instrumental in predicting general features of gene networks and have more recently also served as explorative tools in specific biological applications. In this study we introduce a basic quantitative and a limited time resolution to a discrete (Boolean) framework. Quantitative resolution is improved through the use of normalized variables in unison with an additive approach. Increased time resolution stems from the introduction of two distinct priority classes. Through the implementation of a previously published chondrocyte network and T helper cell network, we show that this addition of quantitative and time resolution broadens the scope of biological behaviour that can be captured by the models. Specifically, the quantitative resolution readily allows models to discern qualitative differences in dosage response to growth factors. The limited time resolution, in turn, can influence the reachability of attractors, delineating the likely long term system behaviour. Importantly, the information required for implementation of these features, such as the nature of an interaction, is typically obtainable from the literature. Nonetheless, a trade-off is always present between the additional computational cost of this approach and the likelihood of extending the model’s scope. Indeed, in some cases the inclusion of these features does not yield additional insight. This framework, incorporating increased and readily available time and semi-quantitative resolution, can help in substantiating the litmus test of dynamics for gene networks, firstly by excluding unlikely dynamics and secondly by refining falsifiable predictions on qualitative behaviour. PMID:26067297
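
    The two ingredients, normalized variables combined additively and two priority classes updated at different rates, can be sketched on a toy network; the three-node wiring, weights and update ratio below are hypothetical, not the chondrocyte or T helper networks from the paper.

      import numpy as np

      # Hypothetical 3-node network: W[i, j] is the signed influence of node j on i
      W = np.array([[ 0.0,  1.0, -0.8],
                    [ 0.9,  0.0,  0.0],
                    [ 0.0,  1.2,  0.0]])
      fast, slow = [0, 1], [2]            # two priority classes

      def step(x, nodes):
          """Additive update with normalized activities: each selected node is
          set to a squashed, weighted sum of its inputs, staying in [0, 1]."""
          x = x.copy()
          target = 1.0 / (1.0 + np.exp(-4.0 * (W @ x - 0.5)))
          x[nodes] = target[nodes]
          return x

      x = np.array([0.2, 0.9, 0.1])
      for t in range(30):
          x = step(x, fast)               # high-priority class updates every tick
          if t % 3 == 0:
              x = step(x, slow)           # low-priority class updates less often
      print("activities:", np.round(x, 3))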

  2. Simulation of Hydraulic and Natural Fracture Interaction Using a Coupled DFN-DEM Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, J.; Huang, H.; Deo, M.

    2016-03-01

    The presence of natural fractures will usually result in a complex fracture network due to the interactions between hydraulic and natural fractures. The reactivation of natural fractures can provide additional flow paths from the formation to the wellbore, which play a crucial role in improving hydrocarbon recovery in these ultra-low permeability reservoirs. Thus, an accurate description of the geometry of discrete fractures and bedding is highly desired for accurate flow and production predictions. Compared to conventional continuum models that implicitly represent the discrete features, Discrete Fracture Network (DFN) models can realistically model the connectivity of discontinuities at both reservoir scale and well scale. In this work, a new hybrid numerical model that couples Discrete Fracture Network (DFN) and Dual-Lattice Discrete Element Method (DL-DEM) is proposed to investigate the interaction between hydraulic fractures and natural fractures. Based on the proposed model, the effects of natural fracture orientation, density and injection properties on hydraulic-natural fracture interaction are investigated.

  4. By-Pass Diode Temperature Tests of a Solar Array Coupon under Space Thermal Environment Conditions

    NASA Technical Reports Server (NTRS)

    Wright, Kenneth H.; Schneider, Todd A.; Vaughn, Jason A.; Hoang, Bao; Wong, Frankie; Wu, Gordon

    2016-01-01

    By-pass diodes are a key design feature of solar arrays, and system design must be robust against local heating, especially with the implementation of larger solar cells. By-pass diode testing was performed to aid thermal model development for use in future array designs that utilize larger cell sizes, which result in higher string currents. Testing was performed on a 56-cell Advanced Triple Junction solar array coupon provided by SSL. Test conditions were vacuum with a cold array backside, using discrete by-pass diode current steps of 0.25 A ranging from 0 A to 2.0 A.

  5. A well-balanced finite volume scheme for the Euler equations with gravitation. The exact preservation of hydrostatic equilibrium with arbitrary entropy stratification

    NASA Astrophysics Data System (ADS)

    Käppeli, R.; Mishra, S.

    2016-03-01

    Context. Many problems in astrophysics feature flows which are close to hydrostatic equilibrium. However, standard numerical schemes for compressible hydrodynamics may be deficient in approximating this stationary state, where the pressure gradient is nearly balanced by gravitational forces. Aims: We aim to develop a second-order well-balanced scheme for the Euler equations. The scheme is designed to mimic a discrete version of the hydrostatic balance. It therefore can resolve a discrete hydrostatic equilibrium exactly (up to machine precision) and propagate perturbations, on top of this equilibrium, very accurately. Methods: A local second-order hydrostatic equilibrium preserving pressure reconstruction is developed. Combined with a standard central gravitational source term discretization and numerical fluxes that resolve stationary contact discontinuities exactly, the well-balanced property is achieved. Results: The resulting well-balanced scheme is robust and simple enough to be very easily implemented within any existing computer code that solves time explicitly or implicitly the compressible hydrodynamics equations. We demonstrate the performance of the well-balanced scheme for several astrophysically relevant applications: wave propagation in stellar atmospheres, a toy model for core-collapse supernovae, convection in carbon shell burning, and a realistic proto-neutron star.

  6. Relative Wave Energy based Adaptive Neuro-Fuzzy Inference System model for the Estimation of Depth of Anaesthesia.

    PubMed

    Benzy, V K; Jasmin, E A; Koshy, Rachel Cherian; Amal, Frank; Indiradevi, K P

    2018-01-01

    The advancement in medical research and intelligent modeling techniques has led to developments in anaesthesia management. The present study is targeted at estimating the depth of anaesthesia using cognitive signal processing and intelligent modeling techniques. The neurophysiological signal that reflects the cognitive state under anaesthetic drugs is the electroencephalogram signal. The information available in electroencephalogram signals during anaesthesia is drawn out by extracting relative wave energy features from the anaesthetic electroencephalogram signals. The discrete wavelet transform is used to decompose the electroencephalogram signals into four levels, and relative wave energy is then computed from the approximation and detail coefficients of the sub-band signals. Relative wave energy is extracted to find the degree of importance of different electroencephalogram frequency bands associated with the anaesthetic phases awake, induction, maintenance and recovery. The Kruskal-Wallis statistical test is applied to the relative wave energy features to check their capability to discriminate between awake, light anaesthesia, moderate anaesthesia and deep anaesthesia. A novel depth of anaesthesia index is generated by implementing an adaptive neuro-fuzzy inference system with a fuzzy c-means clustering algorithm, which uses the relative wave energy features as inputs. Finally, the generated depth of anaesthesia index is compared with a commercially available depth of anaesthesia monitor, the Bispectral index.
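
    The relative wave energy feature reduces to a few lines with a wavelet library: decompose each epoch to four levels, then report each sub-band's share of the total energy. The wavelet choice, sampling rate and synthetic epoch below are assumptions, since the abstract does not specify them.

      import numpy as np
      import pywt

      def relative_wave_energy(eeg, wavelet='db4', level=4):
          """Four-level DWT, then each band's fraction of total wavelet energy;
          returns shares for [cA4, cD4, cD3, cD2, cD1]."""
          coeffs = pywt.wavedec(eeg, wavelet, level=level)
          energies = np.array([np.sum(c**2) for c in coeffs])
          return energies / energies.sum()

      # Synthetic 1 s epoch at 256 Hz: a 10 Hz rhythm plus noise
      t = np.linspace(0.0, 1.0, 256, endpoint=False)
      rng = np.random.default_rng(0)
      eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
      print("relative wave energies:", np.round(relative_wave_energy(eeg), 3))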

  7. EIT forward problem parallel simulation environment with anisotropic tissue and realistic electrode models.

    PubMed

    De Marco, Tommaso; Ries, Florian; Guermandi, Marco; Guerrieri, Roberto

    2012-05-01

    Electrical impedance tomography (EIT) is an imaging technology based on impedance measurements. To retrieve meaningful insights from these measurements, EIT relies on detailed knowledge of the underlying electrical properties of the body. This is obtained from numerical models of current flows therein. The nonhomogeneous and anisotropic electric properties of human tissues make accurate modeling and simulation very challenging, leading to a tradeoff between physical accuracy and technical feasibility, which at present severely limits the capabilities of EIT. This work presents a complete algorithmic flow for an accurate EIT modeling environment featuring high anatomical fidelity with a spatial resolution equal to that provided by an MRI and a novel realistic complete electrode model implementation. At the same time, we demonstrate that current graphics processing unit (GPU)-based platforms provide enough computational power that a domain discretized with five million voxels can be numerically modeled in about 30 s.

  8. Hybrid Discrete-Continuous Markov Decision Processes

    NASA Technical Reports Server (NTRS)

    Feng, Zhengzhu; Dearden, Richard; Meuleau, Nicholas; Washington, Rich

    2003-01-01

    This paper proposes a Markov decision process (MDP) model that features both discrete and continuous state variables. We extend previous work by Boyan and Littman on the mono-dimensional time-dependent MDP to multiple dimensions. We present the principle of lazy discretization, and piecewise constant and linear approximations of the model. Having to deal with several continuous dimensions raises several new problems that require new solutions. In the (piecewise) linear case, we use techniques from partially observable MDPs (POMDPs) to represent value functions as sets of linear functions attached to different partitions of the state space.

  9. Dendritic Connectivity, Heterogeneity, and Scaling in Urban Stormwater Networks: Implications for Socio-Hydrology

    NASA Astrophysics Data System (ADS)

    Mejia, A.; Jovanovic, T.; Hale, R. L.; Gironas, J. A.

    2017-12-01

    Urban stormwater networks (USNs) are unique dendritic (tree-like) structures that combine both artificial (e.g., swales and pipes) and natural (e.g., streams and wetlands) components. They are central to stream ecosystem structure and function in urban watersheds. The emphasis of conventional stormwater management, however, has been on localized, temporal impacts (e.g., changes to hydrographs at discrete locations), and the performance of individual stormwater control measures. This is the case even though control measures are implemented to prevent impacts on the USN. We develop a modeling approach to retrospectively study hydrological fluxes and states in USNs and apply the model to an urban watershed in Scottsdale, Arizona, USA. Using outputs from the model, we analyze over space and time the network properties of dendritic connectivity, heterogeneity, and scaling. Results show that as the network grows over time due to increasing urbanization, it tends to become more homogeneous in terms of topological features but increasingly heterogeneous in terms of dynamic features. We further use the modeling results to address socio-hydrological implications for USNs. We find that the adoption over time of evolving management strategies (e.g., widespread implementation of vegetated swales and retention ponds versus pipes) may be locally beneficial to the USN but benefits may not propagate systematically through the network. The latter can be reinforced by sudden, perhaps unintended, changes to the overall dendritic connectivity.

  10. Synchronization of autonomous objects in discrete event simulation

    NASA Technical Reports Server (NTRS)

    Rogers, Ralph V.

    1990-01-01

    Autonomous objects in event-driven discrete event simulation offer the potential to combine the freedom of unrestricted movement and positional accuracy through Euclidean space of time-driven models with the computational efficiency of event-driven simulation. The principal challenge to autonomous object implementation is object synchronization. The concept of a spatial blackboard is offered as a potential methodology for synchronization. The issues facing implementation of a spatial blackboard are outlined and discussed.

  11. Fast mix table construction for material discretization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, S. R.

    2013-07-01

    An effective hybrid Monte Carlo-deterministic implementation typically requires the approximation of a continuous geometry description with a discretized piecewise-constant material field. The inherent geometry discretization error can be reduced somewhat by using material mixing, where multiple materials inside a discrete mesh voxel are homogenized. Material mixing requires the construction of a 'mix table,' which stores the volume fractions in every mixture so that multiple voxels with similar compositions can reference the same mixture. Mix table construction is a potentially expensive serial operation for large problems with many materials and voxels. We formulate an efficient algorithm to construct a sparse mix table in O(number of voxels × log number of mixtures) time. The new algorithm is implemented in ADVANTG and used to discretize continuous geometries onto a structured Cartesian grid. When applied to an end-of-life MCNP model of the High Flux Isotope Reactor with 270 distinct materials, the new method improves the material mixing time by a factor of 100 compared to a naive mix table implementation.
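
    The deduplication idea is easy to sketch: key each voxel by its rounded volume-fraction composition and reuse the mixture index when the key repeats. A hash map gives near-linear construction; the paper's O(number of voxels × log number of mixtures) bound corresponds to an ordered structure, so the sketch below is the same idea under a simpler container, with the tolerance value an assumption.

      def build_mix_table(voxel_fractions, tol=1e-6):
          """Map each voxel's {material_id: volume_fraction} dict to a shared
          mixture index; compositions equal within tol share one mixture."""
          mix_table = {}                      # composition key -> mixture index
          voxel_mix = []
          for fractions in voxel_fractions:
              key = tuple(sorted((m, round(f / tol) * tol)
                                 for m, f in fractions.items()))
              if key not in mix_table:
                  mix_table[key] = len(mix_table)
              voxel_mix.append(mix_table[key])
          return voxel_mix, list(mix_table)

      voxels = [{1: 1.0}, {1: 0.5, 2: 0.5}, {1: 0.5000001, 2: 0.4999999}, {2: 1.0}]
      ids, mixtures = build_mix_table(voxels)
      print(ids)   # [0, 1, 1, 2]: the two near-identical mixed voxels share a mixture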

  13. Radial artery pulse waveform analysis based on curve fitting using discrete Fourier series.

    PubMed

    Jiang, Zhixing; Zhang, David; Lu, Guangming

    2018-04-19

    Radial artery pulse diagnosis has long played an important role in traditional Chinese medicine (TCM). Because it is non-invasive and convenient, pulse diagnosis also has great significance for disease analysis in modern medicine. Practitioners sense the pulse waveforms at the patient's wrist and make diagnoses based on subjective personal experience. With research on pulse acquisition platforms and computerized analysis methods, the objective study of pulse diagnosis can help TCM keep up with the development of modern medicine. In this paper, we propose a new method to extract features from the pulse waveform based on the discrete Fourier series (DFS). It regards the waveform as a signal that consists of a series of sub-components represented by sine and cosine (SC) signals with different frequencies and amplitudes. After the pulse signals are collected and preprocessed, we fit the average waveform for each sample with a discrete Fourier series by least squares. The feature vector comprises the coefficients of the discrete Fourier series function. Compared with a fitting method using a Gaussian mixture function, the fitting errors of the proposed method are smaller, which indicates that our method represents the original signal better. The classification performance of the proposed feature is superior to other features extracted from the waveform, such as the auto-regression model and the Gaussian mixture model. The coefficients of the optimized DFS function used to fit the arterial pressure waveforms thus give better performance in modeling the waveforms and hold more potential information for distinguishing different psychological states. Copyright © 2018 Elsevier B.V. All rights reserved.
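
    The feature extraction amounts to a linear least-squares fit of a truncated Fourier series over one averaged pulse period, with the fitted coefficients as the feature vector. The harmonic count and the synthetic two-bump "pulse" below are assumptions for illustration.

      import numpy as np

      def dfs_features(waveform, n_harmonics=8):
          """Least-squares fit of a0 + sum_k (a_k cos(2*pi*k*t) + b_k sin(2*pi*k*t))
          over one period; returns the coefficients and the fitted curve."""
          n = len(waveform)
          t = np.arange(n) / n
          cols = [np.ones(n)]
          for k in range(1, n_harmonics + 1):
              cols += [np.cos(2 * np.pi * k * t), np.sin(2 * np.pi * k * t)]
          A = np.column_stack(cols)
          coef, *_ = np.linalg.lstsq(A, waveform, rcond=None)
          return coef, A @ coef

      t = np.arange(200) / 200.0
      pulse = np.exp(-((t - 0.3) / 0.08)**2) + 0.4 * np.exp(-((t - 0.6) / 0.15)**2)
      coef, fit = dfs_features(pulse)
      print("fit RMSE:", np.sqrt(np.mean((fit - pulse)**2)))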

  14. A Discrete Events Delay Differential System Model for Transmission of Vancomycin-Resistant Enterococcus (VRE) in Hospitals

    DTIC Science & Technology

    2010-09-19

    ... estimated directly from the surveillance data. Infection control measures were implemented in the form of health care worker hand hygiene before and after ... hospital infections, is used to motivate possibilities of modeling nosocomial infection dynamics. This is done in the context of hospital monitoring and ... model development. Key words: delay equations, discrete events, nosocomial infection dynamics, surveillance data, inverse problems, parameter ...

  15. Application of Exactly Linearized Error Transport Equations to AIAA CFD Prediction Workshops

    NASA Technical Reports Server (NTRS)

    Derlaga, Joseph M.; Park, Michael A.; Rallabhandi, Sriram

    2017-01-01

    The computational fluid dynamics (CFD) prediction workshops sponsored by the AIAA have created invaluable opportunities in which to discuss the predictive capabilities of CFD in areas in which it has struggled, e.g., cruise drag, high-lift, and sonic boom prediction. While there are many factors that contribute to disagreement between simulated and experimental results, such as modeling or discretization error, quantifying the errors contained in a simulation is important for those who make decisions based on the computational results. The linearized error transport equations (ETE) combined with a truncation error estimate is a method to quantify one source of errors. The ETE are implemented with a complex-step method to provide an exact linearization with minimal source code modifications to CFD and multidisciplinary analysis methods. The equivalency of adjoint and linearized ETE functional error correction is demonstrated. Uniformly refined grids from a series of AIAA prediction workshops demonstrate the utility of ETE for multidisciplinary analysis with a connection between estimated discretization error and (resolved or under-resolved) flow features.
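
    The complex-step trick that makes the exact linearization cheap is worth a two-line illustration: evaluating f at a tiny imaginary perturbation gives f'(x) = Im(f(x + ih))/h with no subtractive cancellation, so h can be taken absurdly small. The test function is the classic example from the complex-step literature, not from the paper.

      import numpy as np

      def complex_step_derivative(f, x, h=1e-30):
          """Derivative via one complex evaluation; accurate to machine precision
          because no difference of nearly equal numbers is formed."""
          return np.imag(f(x + 1j * h)) / h

      f = lambda x: np.exp(x) / np.sqrt(np.sin(x)**3 + np.cos(x)**3)
      print(complex_step_derivative(f, 1.5))   # matches the analytic value to ~1e-16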

  16. Direct Numerical Simulation of Turbulent Flow Over Complex Bathymetry

    NASA Astrophysics Data System (ADS)

    Yue, L.; Hsu, T. J.

    2017-12-01

    Direct numerical simulation (DNS) is regarded as a powerful tool for the investigation of turbulent flow featuring a wide range of temporal and spatial scales. With the application of coordinate transformation in a pseudo-spectral scheme, a parallelized numerical modeling system was created aimed at simulating flow over complex bathymetry with high numerical accuracy and efficiency. The transformed governing equations were integrated in time using a third-order low-storage Runge-Kutta method. For spatial discretization, the discrete Fourier expansion was adopted in the streamwise and spanwise directions, enforcing the periodic boundary condition in both directions. The Chebyshev expansion on Chebyshev-Gauss-Lobatto points was used in the wall-normal direction, assuming no-slip on the top and bottom walls. The diffusion terms were discretized with a Crank-Nicolson scheme, while the advection terms, dealiased with the 2/3 rule, were discretized with an Adams-Bashforth scheme. In the prediction step, the velocity is calculated in the physical domain by solving the resulting linear equation directly. However, the extra terms introduced by the coordinate transformation impose a strict limitation on the time step, and an iteration method was applied to overcome this restriction in the correction step for pressure by solving the Helmholtz equation. The numerical solver is written in the object-oriented C++ programming language utilizing the Armadillo linear algebra library for matrix computation. Several benchmarking cases in laminar and turbulent flow were carried out to verify/validate the numerical model and very good agreements are achieved. Ongoing work focuses on implementing sediment transport capability for multiple sediment classes and parameterizations for flocculation processes.
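
    The 2/3-rule dealiasing mentioned above is a simple spectral mask; the numpy sketch below (standing in for the paper's C++/Armadillo implementation) zeroes the top third of wavenumbers before quadratic products are formed, so aliases of the retained modes fall only on discarded modes.

      import numpy as np

      def dealias_mask(n):
          """Keep |k| < n/3 for a length-n FFT: products of two retained modes
          then cannot alias back onto retained wavenumbers."""
          k = np.fft.fftfreq(n, d=1.0 / n)    # integer-valued wavenumbers
          return np.abs(k) < n / 3.0

      u_hat = np.fft.fft(np.random.default_rng(0).standard_normal(64))
      u_hat *= dealias_mask(64)               # applied before forming advection terms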

  17. Comparison of Computer Based Instruction to Behavior Skills Training for Teaching Staff Implementation of Discrete-Trial Instruction with an Adult with Autism

    ERIC Educational Resources Information Center

    Nosik, Melissa R.; Williams, W. Larry; Garrido, Natalia; Lee, Sarah

    2013-01-01

    In the current study, behavior skills training (BST) is compared to a computer based training package for teaching discrete trial instruction to staff, teaching an adult with autism. The computer based training package consisted of instructions, video modeling and feedback. BST consisted of instructions, modeling, rehearsal and feedback. Following…

  18. System/observer/controller identification toolbox

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Horta, Lucas G.; Phan, Minh

    1992-01-01

    System identification is the process of constructing a mathematical model from input and output data for a system under testing, and characterizing the system uncertainties and measurement noises. The mathematical model structure can take various forms depending upon the intended use. The SYSTEM/OBSERVER/CONTROLLER IDENTIFICATION TOOLBOX (SOCIT) is a collection of functions, written in the MATLAB language and expressed in M-files, that implements a variety of modern system identification techniques. For an open loop system, the central features of SOCIT are functions for identification of a system model and its corresponding forward and backward observers directly from input and output data. The system and observers are represented by a discrete model. The identified model and observers may be used for controller design of linear systems as well as identification of modal parameters such as dampings, frequencies, and mode shapes. For a closed-loop system, an observer and its corresponding controller gain are identified directly from input and output data.

  19. Progressive Failure of a Unidirectional Fiber-Reinforced Composite Using the Method of Cells: Discretization Objective Computational Results

    NASA Technical Reports Server (NTRS)

    Pineda, Evan J.; Bednarcyk, Brett A.; Waas, Anthony M.; Arnold, Steven M.

    2012-01-01

    The smeared crack band theory is implemented within the generalized method of cells and high-fidelity generalized method of cells micromechanics models to capture progressive failure within the constituents of a composite material while retaining objectivity with respect to the size of the discretization elements used in the model. A repeating unit cell containing 13 randomly arranged fibers is modeled and subjected to a combination of transverse tension/compression and transverse shear loading. The implementation is verified against experimental data (where available) and an equivalent finite element model utilizing the same implementation of the crack band theory. To evaluate the performance of the crack band theory within a repeating unit cell that is more amenable to a multiscale implementation, a single fiber is modeled with the generalized method of cells and high-fidelity generalized method of cells using a relatively coarse subcell mesh, which is subjected to the same loading scenarios as the multiple-fiber repeating unit cell. The generalized method of cells and high-fidelity generalized method of cells models are validated against a very refined finite element model.

  20. Discrete-continuous variable structural synthesis using dual methods

    NASA Technical Reports Server (NTRS)

    Schmit, L. A.; Fleury, C.

    1980-01-01

    Approximation concepts and dual methods are extended to solve structural synthesis problems involving a mix of discrete and continuous sizing type of design variables. Pure discrete and pure continuous variable problems can be handled as special cases. The basic mathematical programming statement of the structural synthesis problem is converted into a sequence of explicit approximate primal problems of separable form. These problems are solved by constructing continuous explicit dual functions, which are maximized subject to simple nonnegativity constraints on the dual variables. A newly devised gradient projection type of algorithm called DUAL 1, which includes special features for handling dual function gradient discontinuities that arise from the discrete primal variables, is used to find the solution of each dual problem. Computational implementation is accomplished by incorporating the DUAL 1 algorithm into the ACCESS 3 program as a new optimizer option. The power of the method set forth is demonstrated by presenting numerical results for several example problems, including a pure discrete variable treatment of a metallic swept wing and a mixed discrete-continuous variable solution for a thin delta wing with fiber composite skins.

  1. Performance Characteristics of the Multi-Zone NAS Parallel Benchmarks

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang; VanderWijngaart, Rob F.

    2003-01-01

    We describe a new suite of computational benchmarks that models applications featuring multiple levels of parallelism. Such parallelism is often available in realistic flow computations on systems of grids, but had not previously been captured in benchmarks. The new suite, named NPB Multi-Zone, is extended from the NAS Parallel Benchmarks suite, and involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy provides relatively easily exploitable coarse-grain parallelism between meshes. Three reference implementations are available: one serial, one hybrid using the Message Passing Interface (MPI) and OpenMP, and another hybrid using a shared memory multi-level programming model (SMP+OpenMP). We examine the effectiveness of hybrid parallelization paradigms in these implementations on three different parallel computers. We also use an empirical formula to investigate the performance characteristics of the multi-zone benchmarks.

  2. On discrete control of nonlinear systems with applications to robotics

    NASA Technical Reports Server (NTRS)

    Eslami, Mansour

    1989-01-01

    Much progress has been reported in the areas of modeling and control of nonlinear dynamic systems in a continuous-time framework. From an implementation point of view, however, it is essential to study these nonlinear systems directly in a discrete setting that is amenable to interfacing with digital computers. But to develop discrete models and discrete controllers for a nonlinear system such as a robot is a nontrivial task. A robot is also inherently a variable-inertia dynamic system involving additional complications. Not only must the computer-oriented models of these systems satisfy the usual requirements for such models, but they must also be compatible with the inherent capabilities of computers and must preserve the fundamental physical characteristics of continuous-time systems such as the conservation of energy and/or momentum. Preliminary issues regarding discrete systems in general, and a discrete model of a typical industrial robot that is developed with full consideration of the principle of conservation of energy, are presented. Some research on the pertinent tactile information processing is reviewed. Finally, system control methods and how to integrate these issues to complete the task of discrete control of a robot manipulator are also reviewed.

  3. Biochemical Network Stochastic Simulator (BioNetS): software for stochastic modeling of biochemical networks.

    PubMed

    Adalsteinsson, David; McMillen, David; Elston, Timothy C

    2004-03-08

    Intrinsic fluctuations due to the stochastic nature of biochemical reactions can have large effects on the response of biochemical networks. This is particularly true for pathways that involve transcriptional regulation, where generally there are two copies of each gene and the number of messenger RNA (mRNA) molecules can be small. Therefore, there is a need for computational tools for developing and investigating stochastic models of biochemical networks. We have developed the software package Biochemical Network Stochastic Simulator (BioNetS) for efficiently and accurately simulating stochastic models of biochemical networks. BioNetS has a graphical user interface that allows models to be entered in a straightforward manner, and allows the user to specify the type of random variable (discrete or continuous) for each chemical species in the network. The discrete variables are simulated using an efficient implementation of the Gillespie algorithm. For the continuous random variables, BioNetS constructs and numerically solves the appropriate chemical Langevin equations. The software package has been developed to scale efficiently with network size, thereby allowing large systems to be studied. BioNetS runs as a BioSpice agent and can be downloaded from http://www.biospice.org. BioNetS can also be run as a stand-alone package; all required files are accessible from http://x.amath.unc.edu/BioNetS. We have developed BioNetS to be a reliable tool for studying the stochastic dynamics of large biochemical networks. Important features of BioNetS are its ability to handle hybrid models that consist of both continuous and discrete random variables and its ability to model cell growth and division. We have verified the accuracy and efficiency of the numerical methods by considering several test systems.
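    As an illustration of the discrete simulation path that BioNetS implements, here is a minimal sketch of Gillespie's direct method for a hypothetical birth-death gene expression model (the rate constants and stoichiometry are illustrative, not taken from BioNetS):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical birth-death model: mRNA produced at rate k, degraded at
# rate g * m. Propensities and stoichiometry define the reaction network.
k, g = 5.0, 0.5
def propensities(m):
    return np.array([k, g * m])
stoich = np.array([+1, -1])

def ssa(m0, t_end):
    t, m = 0.0, m0
    times, states = [t], [m]
    while t < t_end:
        a = propensities(m)
        a0 = a.sum()
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)        # time to the next reaction
        r = rng.choice(len(a), p=a / a0)      # which reaction fires
        m += stoich[r]
        times.append(t)
        states.append(m)
    return np.array(times), np.array(states)

times, states = ssa(m0=0, t_end=50.0)
print("final mRNA count:", states[-1])       # fluctuates around k/g = 10
```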

  4. Studies on thermokinetic of Chlorella pyrenoidosa devolatilization via different models.

    PubMed

    Chen, Zhihua; Lei, Jianshen; Li, Yunbei; Su, Xianfa; Hu, Zhiquan; Guo, Dabin

    2017-11-01

    The thermokinetics of Chlorella pyrenoidosa (CP) devolatilization were investigated using an iso-conversional model and several distributed activation energy models (DAEM). The iso-conversional analysis showed that CP devolatilization roughly follows a single step with mechanism function f(α) = (1-α)^3 and kinetic parameter pair E0 = 180.5 kJ/mol and A0 = 1.5×10^13 s^-1. The Logistic distribution was the most suitable activation energy distribution function for CP devolatilization. Although its reaction order n = 3.3 was consistent with the iso-conversional analysis, the Logistic DAEM could not resolve the weight loss features because it represents devolatilization as a single-step reaction. In contrast, the non-uniform activation energy distribution in the Miura-Maki DAEM and the non-uniform weight fraction distribution in the discrete DAEM do reflect the weight loss features, so both models could describe them.
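    A discrete DAEM of the kind compared in this study can be evaluated with a short numerical sketch. The activation-energy channels and weight fractions below are hypothetical, A0 matches the value quoted in the abstract, and each channel is assumed to follow first-order kinetics:

```python
import numpy as np

R = 8.314            # gas constant, J/(mol K)
beta = 10.0 / 60.0   # heating rate, K/s (10 K/min, hypothetical)
A0 = 1.5e13          # s^-1, pre-exponential factor from the abstract
# Hypothetical discrete activation-energy channels and weight fractions:
E = np.array([160e3, 180e3, 200e3])   # J/mol
w = np.array([0.3, 0.5, 0.2])         # sum to 1

T = np.linspace(400.0, 1100.0, 2000)  # temperature grid, K
alpha = np.zeros_like(T)
for Ej, wj in zip(E, w):
    karr = np.exp(-Ej / (R * T))      # Arrhenius factor along the ramp
    # Cumulative temperature integral of the Arrhenius factor (trapezoid rule):
    integral = np.concatenate(
        ([0.0], np.cumsum(0.5 * (karr[1:] + karr[:-1]) * np.diff(T))))
    # First-order conversion of this channel, weighted by its fraction:
    alpha += wj * (1.0 - np.exp(-(A0 / beta) * integral))

print("conversion at 800 K:", np.interp(800.0, T, alpha))
```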

  5. An improved switching converter model. Ph.D. Thesis. Final Report

    NASA Technical Reports Server (NTRS)

    Shortt, D. J.

    1982-01-01

    The nonlinear modeling and analysis of dc-dc converters in the continuous mode and discontinuous mode was done by averaging and discrete sampling techniques. A model was developed by combining these two techniques. This model, the discrete average model, accurately predicts the envelope of the output voltage and is easy to implement in circuit and state variable forms. The proposed model is shown to be dependent on the type of duty cycle control. The proper selection of the power stage model, between average and discrete average, is largely a function of the error processor in the feedback loop. The accuracy of the measurement data taken by a conventional technique is affected by the conditions at which the data is collected.

  6. Incorporating physically-based microstructures in materials modeling: Bridging phase field and crystal plasticity frameworks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Hojun; Abdeljawad, Fadi; Owen, Steven J.

    Here, the mechanical properties of materials systems are highly influenced by various features at the microstructural level. The ability to capture these heterogeneities and incorporate them into continuum-scale frameworks of the deformation behavior is considered a key step in the development of complex non-local models of failure. In this study, we present a modeling framework that incorporates physically-based realizations of polycrystalline aggregates from a phase field (PF) model into a crystal plasticity finite element (CP-FE) framework. Simulated annealing via the PF model yields ensembles of materials microstructures with various grain sizes and shapes. With the aid of a novel FE meshing technique, FE discretizations of these microstructures are generated, where several key features, such as conformity to interfaces and triple junction angles, are preserved. The discretizations are then used in the CP-FE framework to simulate the mechanical response of polycrystalline α-iron. It is shown that the conformal discretization across interfaces reduces artificial stress localization commonly observed in non-conformal FE discretizations. The work presented herein is a first step towards incorporating physically-based microstructures in lieu of the overly simplified representations that are commonly used. In broader terms, the proposed framework provides future avenues to explore bridging models of materials processes, e.g. additive manufacturing and microstructure evolution of multi-phase multi-component systems, into continuum-scale frameworks of the mechanical properties.

  7. Incorporating physically-based microstructures in materials modeling: Bridging phase field and crystal plasticity frameworks

    DOE PAGES

    Lim, Hojun; Abdeljawad, Fadi; Owen, Steven J.; ...

    2016-04-25

    Here, the mechanical properties of materials systems are highly influenced by various features at the microstructural level. The ability to capture these heterogeneities and incorporate them into continuum-scale frameworks of the deformation behavior is considered a key step in the development of complex non-local models of failure. In this study, we present a modeling framework that incorporates physically-based realizations of polycrystalline aggregates from a phase field (PF) model into a crystal plasticity finite element (CP-FE) framework. Simulated annealing via the PF model yields ensembles of materials microstructures with various grain sizes and shapes. With the aid of a novel FE meshing technique, FE discretizations of these microstructures are generated, where several key features, such as conformity to interfaces and triple junction angles, are preserved. The discretizations are then used in the CP-FE framework to simulate the mechanical response of polycrystalline α-iron. It is shown that the conformal discretization across interfaces reduces artificial stress localization commonly observed in non-conformal FE discretizations. The work presented herein is a first step towards incorporating physically-based microstructures in lieu of the overly simplified representations that are commonly used. In broader terms, the proposed framework provides future avenues to explore bridging models of materials processes, e.g. additive manufacturing and microstructure evolution of multi-phase multi-component systems, into continuum-scale frameworks of the mechanical properties.

  8. Retaining both discrete and smooth features in 1D and 2D NMR relaxation and diffusion experiments

    NASA Astrophysics Data System (ADS)

    Reci, A.; Sederman, A. J.; Gladden, L. F.

    2017-11-01

    A new method of regularization of 1D and 2D NMR relaxation and diffusion experiments is proposed and a robust algorithm for its implementation is introduced. The new form of regularization, termed Modified Total Generalized Variation (MTGV) regularization, offers a compromise between distinguishing discrete and smooth features in the reconstructed distributions. The method is compared to the conventional method of Tikhonov regularization and the recently proposed method of L1 regularization, when applied to simulated data of 1D spin-lattice relaxation, T1, 1D spin-spin relaxation, T2, and 2D T1-T2 NMR experiments. A range of simulated distributions composed of two lognormally distributed peaks were studied. The distributions differed in the variance of the peaks and were designed to investigate distributions containing only discrete features, only smooth features, or both in the same distribution. Three different signal-to-noise ratios were studied: 2000, 200 and 20. A new metric is proposed to compare the distributions reconstructed by the different regularization methods with the true distributions. The metric is designed to penalise reconstructed distributions which show artefact peaks. Based on this metric, MTGV regularization performs better than Tikhonov and L1 regularization in all cases except when the distribution is known to comprise only discrete peaks, in which case L1 regularization is slightly more accurate than MTGV regularization.
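    For orientation, the conventional Tikhonov baseline against which MTGV is compared can be written as a non-negative least-squares problem on an augmented system. The following sketch (simulated T2 decay data and an illustrative regularization weight) is a generic implementation of that baseline, not of MTGV itself:

```python
import numpy as np
from scipy.optimize import nnls

# Simulated CPMG-style decay from a bimodal T2 distribution, then inversion.
t = np.linspace(1e-3, 3.0, 200)           # echo times, s
T2 = np.logspace(-3, 1, 100)              # candidate T2 grid, s
K = np.exp(-t[:, None] / T2[None, :])     # Laplace kernel K[i,j] = exp(-t_i/T2_j)

# True distribution: two lognormal peaks (as in the simulated study).
true = np.exp(-0.5 * ((np.log(T2) - np.log(0.05)) / 0.2) ** 2) \
     + np.exp(-0.5 * ((np.log(T2) - np.log(1.0)) / 0.4) ** 2)
rng = np.random.default_rng(1)
data = K @ true + 0.01 * rng.standard_normal(len(t))

# Tikhonov with non-negativity: min ||K f - d||^2 + alpha ||f||^2, f >= 0,
# solved as NNLS on the augmented system [K; sqrt(alpha) I] f = [d; 0].
alpha = 0.1
A = np.vstack([K, np.sqrt(alpha) * np.eye(len(T2))])
b = np.concatenate([data, np.zeros(len(T2))])
f, _ = nnls(A, b)
print("recovered peak near T2 =", T2[np.argmax(f)], "s")
```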

  9. Stencil computations for PDE-based applications with examples from DUNE and hypre

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engwer, C.; Falgout, R. D.; Yang, U. M.

    Here, stencils are commonly used to implement efficient on-the-fly computations of linear operators arising from partial differential equations. At the same time, the term “stencil” is not fully defined and can be interpreted differently depending on the application domain and the background of the software developers. Common features in stencil codes are the preservation of the structure given by the discretization of the partial differential equation and the benefit of minimal data storage. We discuss stencil concepts of different complexity, show how they are used in modern software packages like hypre and DUNE, and discuss recent efforts to extend the software to enable stencil computations of more complex problems and methods such as inf-sup-stable Stokes discretizations and mixed finite element discretizations.

  10. Stencil computations for PDE-based applications with examples from DUNE and hypre

    DOE PAGES

    Engwer, C.; Falgout, R. D.; Yang, U. M.

    2017-02-24

    Here, stencils are commonly used to implement efficient on-the-fly computations of linear operators arising from partial differential equations. At the same time, the term “stencil” is not fully defined and can be interpreted differently depending on the application domain and the background of the software developers. Common features in stencil codes are the preservation of the structure given by the discretization of the partial differential equation and the benefit of minimal data storage. We discuss stencil concepts of different complexity, show how they are used in modern software packages like hypre and DUNE, and discuss recent efforts to extend the software to enable stencil computations of more complex problems and methods such as inf-sup-stable Stokes discretizations and mixed finite element discretizations.
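    The core idea named in this record, applying a discretized PDE operator on the fly through its stencil rather than through an assembled matrix, can be sketched in a few lines. Here is a minimal 5-point Laplacian example (grid size and test function are arbitrary):

```python
import numpy as np

def apply_laplacian(u, h):
    """Apply the standard 5-point Laplacian stencil to the interior of u,
    computing the operator on the fly with no assembled matrix stored."""
    v = np.zeros_like(u)
    v[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] +
                     u[1:-1, 2:] + u[1:-1, :-2] -
                     4.0 * u[1:-1, 1:-1]) / h**2
    return v

n = 64
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
u = np.sin(np.pi * x)[:, None] * np.sin(np.pi * x)[None, :]
lap = apply_laplacian(u, h)
# For this eigenfunction, -laplacian(u) ≈ 2*pi^2*u in the interior:
print(np.max(np.abs(lap[1:-1, 1:-1] + 2 * np.pi**2 * u[1:-1, 1:-1])))
```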

  11. Computationally efficient approach for solving time dependent diffusion equation with discrete temporal convolution applied to granular particles of battery electrodes

    NASA Astrophysics Data System (ADS)

    Senegačnik, Jure; Tavčar, Gregor; Katrašnik, Tomaž

    2015-03-01

    The paper presents a computationally efficient method for solving the time-dependent diffusion equation in a granule of a Li-ion battery's granular solid electrode. The method, called the Discrete Temporal Convolution (DTC) method, is based on a discrete temporal convolution of the analytical solution of the step-function boundary value problem. This approach enables modelling the concentration distribution in the granular particles for arbitrary time-dependent exchange fluxes that need not be known a priori. It is demonstrated in the paper that the proposed method features faster computational times than finite volume/difference methods and the Padé approximation at the same accuracy. It is also demonstrated that all three addressed methods feature higher accuracy than quasi-steady polynomial approaches when applied to simulate the current density variations typical of mobile/automotive applications. The proposed approach can thus be considered one of the key innovative methods enabling real-time capability of multi-particle electrochemical battery models featuring spatially and temporally resolved particle concentration profiles.
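    The superposition idea behind a discrete temporal convolution can be sketched generically: increments of the boundary flux are convolved with an analytical unit-step response. The single-exponential step response below is a hypothetical stand-in for the paper's analytical diffusion solution:

```python
import numpy as np

# Duhamel-style discrete temporal convolution: superpose analytical unit-step
# responses r(t) to obtain the response to an arbitrary flux history.
tau = 2.0
def step_response(t):
    # Hypothetical step response; the DTC method uses the analytical
    # series solution for diffusion in a spherical granule instead.
    return 1.0 - np.exp(-t / tau)

dt = 0.05
t = np.arange(0.0, 10.0 + dt, dt)
flux = np.sin(0.5 * t) ** 2          # arbitrary flux, not known a priori

c = np.zeros_like(t)
prev = 0.0
for k in range(len(t)):
    dF = flux[k] - prev              # increment of the boundary flux
    prev = flux[k]
    # Each increment launches a scaled copy of the step response:
    c[k:] += dF * step_response(t[k:] - t[k])

print("response at t = 10:", c[-1])
```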

  12. Generalized Processing Tree Models: Jointly Modeling Discrete and Continuous Variables.

    PubMed

    Heck, Daniel W; Erdfelder, Edgar; Kieslich, Pascal J

    2018-05-24

    Multinomial processing tree models assume that discrete cognitive states determine observed response frequencies. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. GPT models assume finite-mixture distributions, with weights determined by a processing tree structure, and continuous components modeled by parameterized distributions such as Gaussians with separate or shared parameters across states. We discuss identifiability, parameter estimation, model testing, a modeling syntax, and the improved precision of GPT estimates. Finally, a GPT version of the feature comparison model of semantic categorization is applied to computer-mouse trajectories.

  13. A developed nearly analytic discrete method for forward modeling in the frequency domain

    NASA Astrophysics Data System (ADS)

    Liu, Shaolin; Lang, Chao; Yang, Hui; Wang, Wenshuai

    2018-02-01

    High-efficiency forward modeling methods play a fundamental role in full waveform inversion (FWI). In this paper, the developed nearly analytic discrete (DNAD) method is proposed to accelerate frequency-domain forward modeling processes. We first derive the discretization of frequency-domain wave equations via numerical schemes based on the nearly analytic discrete (NAD) method to obtain a linear system. The coefficients of numerical stencils are optimized to make the linear system easier to solve and to minimize computing time. Wavefield simulation and numerical dispersion analysis are performed to compare the numerical behavior of DNAD method with that of the conventional NAD method. The results demonstrate the superiority of our proposed method. Finally, the DNAD method is implemented in frequency-domain FWI, and high-resolution inverse results are obtained.

  14. Validation of a DICE Simulation Against a Discrete Event Simulation Implemented Entirely in Code.

    PubMed

    Möller, Jörgen; Davis, Sarah; Stevenson, Matt; Caro, J Jaime

    2017-10-01

    Modeling is an essential tool for health technology assessment, and various techniques for conceptualizing and implementing such models have been described. Recently, a new method has been proposed, the discretely integrated condition event (DICE) simulation, which enables frequently employed approaches to be specified using a common, simple structure that can be entirely contained and executed within widely available spreadsheet software. To assess whether a DICE simulation provides equivalent results to an existing discrete event simulation, a comparison was undertaken. A model of osteoporosis and its management, programmed entirely in Visual Basic for Applications and made public by the National Institute for Health and Care Excellence (NICE) Decision Support Unit, was downloaded and used to guide construction of its DICE version in Microsoft Excel®. The DICE model was then run using the same inputs and settings, and the results were compared. The DICE version produced results that are nearly identical to the original ones, with differences that would not affect the decision direction of the incremental cost-effectiveness ratios (<1% discrepancy), despite the stochastic nature of the models. The main limitation of the simple DICE version is its slow execution speed. DICE simulation did not alter the results and, thus, should provide a valid way to design and implement decision-analytic models without requiring specialized software or custom programming. Additional efforts need to be made to speed up execution.

  15. Modelling Dowel Action of Discrete Reinforcing Bars in Cracked Concrete Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwan, A. K. H.; Ng, P. L.; Lam, J. Y. K.

    2010-05-21

    Dowel action is one of the component actions for shear force transfer in cracked reinforced concrete. In finite element analysis of concrete structures, the use of discrete representation of reinforcing bars is considered advantageous over the smeared representation due to the relative ease of modelling the bond-slip behaviour. However, there is very limited research on how to simulate the dowel action of discrete reinforcing bars. Herein, a numerical model for dowel action of discrete reinforcing bars crossing cracks in concrete is developed. The model features the derivation of the dowel stiffness matrix based on beam-on-elastic-foundation theory and the direct assemblage of the dowel stiffness into the concrete element stiffness matrices. The dowel action model is incorporated in a nonlinear finite element programme with secant stiffness formulation. Deep beams tested in the literature are analysed and it is found that the incorporation of the dowel action model improves the accuracy of the analysis.

  16. Mind the Gap: A Semicontinuum Model for Discrete Electrical Propagation in Cardiac Tissue.

    PubMed

    Costa, Caroline Mendonca; Silva, Pedro Andre Arroyo; dos Santos, Rodrigo Weber

    2016-04-01

    Electrical propagation in cardiac tissue is a discrete or discontinuous phenomenon that reflects the complexity of the anatomical structures and their organization in the heart, such as myocytes, gap junctions, microvessels, and extracellular matrix, just to name a few. Discrete models or microscopic and discontinuous models are, so far, the best options to accurately study how structural properties of cardiac tissue influence electrical propagation. These models are, however, inappropriate in the context of large scale simulations, which have been traditionally performed by the use of continuum and macroscopic models, such as the monodomain and the bidomain models. However, continuum models may fail to reproduce many important physiological and physiopathological aspects of cardiac electrophysiology, for instance, those related to slow conduction. In this study, we develop a new mathematical model that combines characteristics of both continuum and discrete models. The new model was evaluated in scenarios of low gap-junctional coupling, where slow conduction is observed, and was able to reproduce conduction block, increase of the maximum upstroke velocity and of the repolarization dispersion. None of these features can be captured by continuum models. In addition, the model overcomes a great disadvantage of discrete models, as it allows variation of the spatial resolution within a certain range.

  17. An Accurate Fire-Spread Algorithm in the Weather Research and Forecasting Model Using the Level-Set Method

    NASA Astrophysics Data System (ADS)

    Muñoz-Esparza, Domingo; Kosović, Branko; Jiménez, Pedro A.; Coen, Janice L.

    2018-04-01

    The level-set method is typically used to track and propagate the fire perimeter in wildland fire models. Herein, a high-order level-set method using a fifth-order WENO scheme for the discretization of spatial derivatives and third-order explicit Runge-Kutta temporal integration is implemented within the Weather Research and Forecasting model wildland fire physics package, WRF-Fire. The algorithm includes solution of an additional partial differential equation for level-set reinitialization. The accuracy of the fire-front shape and rate of spread in uncoupled simulations is systematically analyzed. It is demonstrated that the common implementation used by level-set-based wildfire models yields rate-of-spread errors in the range 10-35% for typical grid sizes (Δ = 12.5-100 m) and considerably underestimates fire area. Moreover, the amplitude of fire-front gradients in the presence of explicitly resolved turbulence features is systematically underestimated. In contrast, the new WRF-Fire algorithm results in rate-of-spread errors that are lower than 1% and that become nearly grid independent. Also, the underestimation of fire area at the sharp transition between the fire front and the lateral flanks is found to be reduced by a factor of ≈7. A hybrid-order level-set method with locally reduced artificial viscosity is proposed, which substantially alleviates the computational cost associated with high-order discretizations while preserving accuracy. Simulations of the Last Chance wildfire demonstrate additional benefits of high-order accurate level-set algorithms when dealing with complex fuel heterogeneities, enabling propagation across narrow fuel gaps and more accurate fire backing over the lee side of no-fuel clusters.
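    A bare-bones level-set propagation step of the kind underlying such fire-spread algorithms can be sketched as follows. For brevity this uses a first-order Godunov upwind discretization with forward Euler, whereas the paper advocates WENO5 with third-order Runge-Kutta; the spread rate and grid are hypothetical:

```python
import numpy as np

def grad_norm_upwind(phi, h):
    # Godunov upwind |grad(phi)| for outward normal motion (spread rate > 0).
    dxm = (phi - np.roll(phi, 1, axis=0)) / h
    dxp = (np.roll(phi, -1, axis=0) - phi) / h
    dym = (phi - np.roll(phi, 1, axis=1)) / h
    dyp = (np.roll(phi, -1, axis=1) - phi) / h
    return np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2 +
                   np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)

n, h = 128, 1.0
x = (np.arange(n) - n // 2) * h
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - 10.0   # signed distance to a circular fire front
S = 0.5                             # uniform rate of spread, hypothetical

dt = 0.4 * h / S                    # CFL-limited time step
for _ in range(100):
    phi = phi - dt * S * grad_norm_upwind(phi, h)   # phi_t + S |grad phi| = 0

burned_area = h * h * np.sum(phi < 0)
print("burned area:", burned_area,
      "expected ~", np.pi * (10 + S * 100 * dt)**2)
```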

  18. Advanced image fusion algorithms for Gamma Knife treatment planning. Evaluation and proposal for clinical use.

    PubMed

    Apostolou, N; Papazoglou, Th; Koutsouris, D

    2006-01-01

    Image fusion is a process of combining information from multiple sensors. It is a useful tool implemented in the treatment planning programme of Gamma Knife radiosurgery. In this paper we evaluate advanced image fusion algorithms for the Matlab platform and head images. We develop nine grayscale image fusion methods in Matlab: average, principal component analysis (PCA), and discrete wavelet transform (DWT) methods; Laplacian, filter-subtract-decimate (FSD), contrast, gradient, and morphological pyramid methods; and a shift-invariant discrete wavelet transform (SIDWT) method. We test these methods qualitatively and quantitatively. The quantitative criteria we use are the Root Mean Square Error (RMSE), the Mutual Information (MI), the Standard Deviation (STD), the Entropy (H), the Difference Entropy (DH) and the Cross Entropy (CEN). The qualitative criteria are natural appearance, brilliance contrast, presence of complementary features and enhancement of common features. Finally we make clinically useful suggestions.
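    As an example of one family of methods evaluated here, a single-level DWT fusion rule (mean of the approximation bands, maximum-magnitude selection of the detail bands) might look like the following sketch, using the PyWavelets package rather than Matlab; the images are random stand-ins for co-registered head slices:

```python
import numpy as np
import pywt

def dwt_fuse(img_a, img_b, wavelet="db2"):
    """Single-level DWT fusion: average the approximation bands, keep the
    detail coefficient with the larger magnitude at each location."""
    cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(img_a, wavelet)
    cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(img_b, wavelet)
    fuse = lambda u, v: np.where(np.abs(u) >= np.abs(v), u, v)
    cA = 0.5 * (cA_a + cA_b)
    details = (fuse(cH_a, cH_b), fuse(cV_a, cV_b), fuse(cD_a, cD_b))
    return pywt.idwt2((cA, details), wavelet)

rng = np.random.default_rng(2)
a = rng.random((128, 128))   # stand-in for, e.g., a CT slice
b = rng.random((128, 128))   # stand-in for, e.g., an MR slice
fused = dwt_fuse(a, b)
print(fused.shape)
```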

  19. Ensemble-type numerical uncertainty information from single model integrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rauser, Florian, E-mail: florian.rauser@mpimet.mpg.de; Marotzke, Jochem; Korn, Peter

    2015-07-01

    We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are of comparable size as those of a stochastic physics ensemble.

  20. Two-dimensional wavelet transform feature extraction for porous silicon chemical sensors.

    PubMed

    Murguía, José S; Vergara, Alexander; Vargas-Olmos, Cecilia; Wong, Travis J; Fonollosa, Jordi; Huerta, Ramón

    2013-06-27

    Designing reliable, fast-responding, highly sensitive, and low-power-consuming chemo-sensory systems has long been a major goal in chemo-sensing. This goal, however, presents a difficult challenge because sets of chemo-sensory detectors exhibiting all of these ideal conditions remain largely unrealizable to date. This paper presents a unique perspective on capturing more in-depth insights into the physicochemical interactions of two distinct, selectively chemically modified porous silicon (pSi) film-based optical gas sensors by implementing an innovative signal-processing methodology, namely the two-dimensional discrete wavelet transform. Specifically, the method consists of using the two-dimensional discrete wavelet transform as a feature extraction method to capture the non-stationary behavior of the bi-dimensional pSi rugate sensor response. Utilizing a comprehensive set of measurements collected from each of these optically based chemical sensors, we evaluate the significance of our approach on a complex, six-dimensional chemical analyte discrimination/quantification task. Because bi-dimensional effects naturally govern the optical sensor response to chemical analytes, our findings provide evidence that the proposed feature extraction strategy may be a valuable tool to deepen our understanding of the performance of optically based chemical sensors, as well as an important step toward their implementation in more realistic chemo-sensing applications.
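    A common way to turn a 2-D DWT into a fixed-length feature vector is to record normalized subband energies. The following sketch shows the idea; the wavelet choice, decomposition depth, and the random stand-in response surface are illustrative, not the authors' pipeline:

```python
import numpy as np
import pywt

def dwt2_energy_features(response, wavelet="db4", levels=3):
    """Feature vector of normalized subband energies from a 2-D DWT of a
    (time x wavelength) sensor response surface."""
    coeffs = pywt.wavedec2(response, wavelet, level=levels)
    feats = [np.sum(coeffs[0] ** 2)]          # approximation-band energy
    for (cH, cV, cD) in coeffs[1:]:           # detail bands per level
        feats += [np.sum(cH**2), np.sum(cV**2), np.sum(cD**2)]
    feats = np.array(feats)
    return feats / feats.sum()                # normalize for scale invariance

rng = np.random.default_rng(3)
surface = rng.random((64, 256))   # hypothetical pSi sensor response surface
print(dwt2_energy_features(surface))
```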

  1. A Legendre–Fourier spectral method with exact conservation laws for the Vlasov–Poisson system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manzini, Gianmarco; Delzanno, Gian Luca; Vencels, Juris

    In this study, we present the design and implementation of an L2-stable spectral method for the discretization of the Vlasov–Poisson model of a collisionless plasma in one space and velocity dimension. The velocity and space dependence of the Vlasov equation are resolved through a truncated spectral expansion based on Legendre and Fourier basis functions, respectively. The Poisson equation, which is coupled to the Vlasov equation, is also resolved through a Fourier expansion. The resulting system of ordinary differential equations is discretized by the implicit second-order accurate Crank–Nicolson time discretization. The non-linear dependence between the Vlasov and Poisson equations is iteratively solved at any time cycle by a Jacobian-Free Newton–Krylov method. In this work we analyze the structure of the main conservation laws of the resulting Legendre–Fourier model, e.g., mass, momentum, and energy, and prove that they are exactly satisfied in the semi-discrete and discrete setting. The L2-stability of the method is ensured by discretizing the boundary conditions of the distribution function at the boundaries of the velocity domain by a suitable penalty term. The impact of the penalty term on the conservation properties is investigated theoretically and numerically. An implementation of the penalty term that does not affect the conservation of mass, momentum and energy is also proposed and studied. A collisional term is introduced in the discrete model to control the filamentation effect, but does not affect the conservation properties of the system. Numerical results on a set of standard test problems illustrate the performance of the method.

  2. A Legendre–Fourier spectral method with exact conservation laws for the Vlasov–Poisson system

    DOE PAGES

    Manzini, Gianmarco; Delzanno, Gian Luca; Vencels, Juris; ...

    2016-04-22

    In this study, we present the design and implementation of an L2-stable spectral method for the discretization of the Vlasov–Poisson model of a collisionless plasma in one space and velocity dimension. The velocity and space dependence of the Vlasov equation are resolved through a truncated spectral expansion based on Legendre and Fourier basis functions, respectively. The Poisson equation, which is coupled to the Vlasov equation, is also resolved through a Fourier expansion. The resulting system of ordinary differential equations is discretized by the implicit second-order accurate Crank–Nicolson time discretization. The non-linear dependence between the Vlasov and Poisson equations is iteratively solved at any time cycle by a Jacobian-Free Newton–Krylov method. In this work we analyze the structure of the main conservation laws of the resulting Legendre–Fourier model, e.g., mass, momentum, and energy, and prove that they are exactly satisfied in the semi-discrete and discrete setting. The L2-stability of the method is ensured by discretizing the boundary conditions of the distribution function at the boundaries of the velocity domain by a suitable penalty term. The impact of the penalty term on the conservation properties is investigated theoretically and numerically. An implementation of the penalty term that does not affect the conservation of mass, momentum and energy is also proposed and studied. A collisional term is introduced in the discrete model to control the filamentation effect, but does not affect the conservation properties of the system. Numerical results on a set of standard test problems illustrate the performance of the method.

  3. ADAM: analysis of discrete models of biological systems using computer algebra.

    PubMed

    Hinkelmann, Franziska; Brandon, Madison; Guang, Bonny; McNeill, Rustin; Blekherman, Grigoriy; Veliz-Cuba, Alan; Laubenbacher, Reinhard

    2011-07-20

    Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics.
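    The exhaustive attractor computation that tools like ADAM accelerate algebraically can be stated very simply for a toy synchronous Boolean network (the update rules below are hypothetical). Brute-force enumeration is exponential in the number of variables, which is precisely why ADAM's polynomial-algebra methods matter for larger models:

```python
from itertools import product

# Exhaustive attractor search for a small synchronous Boolean network.
# ADAM instead converts such models to polynomial dynamical systems over F_2,
# where x and y -> x*y, x or y -> x+y+x*y, not x -> x+1.
def update(state):
    x, y, z = state          # hypothetical update rules
    return (y, x and z, not x)

def attractors(n=3):
    found = set()
    for start in product([False, True], repeat=n):
        seen, s = {}, start
        while s not in seen:            # iterate until the trajectory cycles
            seen[s] = len(seen)
            s = update(s)
        cycle_start = seen[s]
        cycle = [t for t, i in seen.items() if i >= cycle_start]
        found.add(tuple(sorted(cycle)))
    return found

for cyc in attractors():
    print("attractor:", cyc)
```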

  4. Reversibility and measurement in quantum computing

    NASA Astrophysics Data System (ADS)

    Leão, J. P.

    1998-03-01

    The relation between computation and measurement at a fundamental physical level is yet to be understood. Rolf Landauer was perhaps the first to stress the strong analogy between these two concepts. His early queries have regained pertinence with the recent efforts to develop realizable models of quantum computers. In this context the irreversibility of quantum measurement appears in conflict with the requirement of reversibility of the overall computation associated with the unitary dynamics of quantum evolution. The latter, in turn, is responsible for the features of superposition and entanglement that make some quantum algorithms superior to classical ones for the same task in speed and resource demand. In this article we advocate an approach to this question which relies on a model of computation designed to enforce the analogy between the two concepts instead of demarcating them, as has been the case so far. The model is introduced as a symmetrization of the classical Turing machine model and is then carried over to quantum mechanics, first as an abstract local interaction scheme (symbolic measurement) and finally in a nonlocal noninteractive implementation based on Aharonov-Bohm potentials and modular variables. It is suggested that this implementation leads to the most ubiquitous of quantum algorithms: the Discrete Fourier Transform.

  5. pyomo.dae: a modeling and automatic discretization framework for optimization with differential and algebraic equations

    DOE PAGES

    Nicholson, Bethany; Siirola, John D.; Watson, Jean-Paul; ...

    2017-12-20

    We describe pyomo.dae, an open source Python-based modeling framework that enables high-level abstract specification of optimization problems with differential and algebraic equations. The pyomo.dae framework is integrated with the Pyomo open source algebraic modeling language, and is available at http://www.pyomo.org. One key feature of pyomo.dae is that it does not restrict users to standard, predefined forms of differential equations, providing a high degree of modeling flexibility and the ability to express constraints that cannot be easily specified in other modeling frameworks. Other key features of pyomo.dae are the ability to specify optimization problems with high-order differential equations and partial differential equations, defined on restricted domain types, and the ability to automatically transform high-level abstract models into finite-dimensional algebraic problems that can be solved with off-the-shelf solvers. Moreover, pyomo.dae users can leverage existing capabilities of Pyomo to embed differential equation models within stochastic and integer programming models and mathematical programs with equilibrium constraint formulations. Collectively, these features enable the exploration of new modeling concepts, discretization schemes, and the benchmarking of state-of-the-art optimization solvers.
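    A minimal pyomo.dae usage sketch, discretizing dx/dt = -x on t in [0, 1] with backward finite differences, looks like the following (the ODE and discretization settings are illustrative; actually solving requires an installed NLP solver such as Ipopt):

```python
from pyomo.environ import (ConcreteModel, Var, Constraint, Objective,
                           TransformationFactory)
from pyomo.dae import ContinuousSet, DerivativeVar

m = ConcreteModel()
m.t = ContinuousSet(bounds=(0, 1))          # continuous time domain
m.x = Var(m.t, initialize=1.0)
m.dxdt = DerivativeVar(m.x, wrt=m.t)        # derivative of x w.r.t. t

def _ode(m, t):
    if t == m.t.first():
        return Constraint.Skip              # initial point fixed below
    return m.dxdt[t] == -m.x[t]
m.ode = Constraint(m.t, rule=_ode)
m.x[0].fix(1.0)
m.obj = Objective(expr=1)                   # feasibility-only toy problem

# Automatic transformation into a finite-dimensional algebraic problem:
TransformationFactory("dae.finite_difference").apply_to(
    m, nfe=20, scheme="BACKWARD")
# SolverFactory("ipopt").solve(m)           # then solve with an NLP solver
```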

  6. pyomo.dae: a modeling and automatic discretization framework for optimization with differential and algebraic equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicholson, Bethany; Siirola, John D.; Watson, Jean-Paul

    We describe pyomo.dae, an open source Python-based modeling framework that enables high-level abstract specification of optimization problems with differential and algebraic equations. The pyomo.dae framework is integrated with the Pyomo open source algebraic modeling language, and is available at http://www.pyomo.org. One key feature of pyomo.dae is that it does not restrict users to standard, predefined forms of differential equations, providing a high degree of modeling flexibility and the ability to express constraints that cannot be easily specified in other modeling frameworks. Other key features of pyomo.dae are the ability to specify optimization problems with high-order differential equations and partial differential equations, defined on restricted domain types, and the ability to automatically transform high-level abstract models into finite-dimensional algebraic problems that can be solved with off-the-shelf solvers. Moreover, pyomo.dae users can leverage existing capabilities of Pyomo to embed differential equation models within stochastic and integer programming models and mathematical programs with equilibrium constraint formulations. Collectively, these features enable the exploration of new modeling concepts, discretization schemes, and the benchmarking of state-of-the-art optimization solvers.

  7. Structure-Preserving Variational Multiscale Modeling of Turbulent Incompressible Flow with Subgrid Vortices

    NASA Astrophysics Data System (ADS)

    Evans, John; Coley, Christopher; Aronson, Ryan; Nelson, Corey

    2017-11-01

    In this talk, a large eddy simulation methodology for turbulent incompressible flow will be presented which combines the best features of divergence-conforming discretizations and the residual-based variational multiscale approach to large eddy simulation. In this method, the resolved motion is represented using a divergence-conforming discretization, that is, a discretization that preserves the incompressibility constraint in a pointwise manner, and the unresolved fluid motion is explicitly modeled by subgrid vortices that lie within individual grid cells. The evolution of the subgrid vortices is governed by dynamical model equations driven by the residual of the resolved motion. Consequently, the subgrid vortices appropriately vanish for laminar flow and fully resolved turbulent flow. As the resolved velocity field and subgrid vortices are both divergence-free, the methodology conserves mass in a pointwise sense and admits discrete balance laws for energy, enstrophy, and helicity. Numerical results demonstrate the methodology yields improved results versus state-of-the-art eddy viscosity models in the context of transitional, wall-bounded, and rotational flow when a divergence-conforming B-spline discretization is utilized to represent the resolved motion.

  8. Estimation of Dynamic Discrete Choice Models by Maximum Likelihood and the Simulated Method of Moments

    PubMed Central

    Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano

    2015-01-01

    We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926

  9. AFM feature definition for neural cells on nanofibrillar tissue scaffolds.

    PubMed

    Tiryaki, Volkan M; Khan, Adeel A; Ayres, Virginia M

    2012-01-01

    A diagnostic approach is developed and implemented that provides clear feature definition in atomic force microscopy (AFM) images of neural cells on nanofibrillar tissue scaffolds. Because the cellular edges and processes are on the same size scale as the background nanofibers, this imaging situation presents a feature definition problem. The diagnostic approach is based on analysis of discrete Fourier transforms of standard AFM section measurements. The diagnostic conclusion, that combining dynamic range enhancement with low-frequency component suppression enhances feature definition, is shown to be correct and to lead to clear-featured images that could change previously held assumptions about the cell-cell interactions present. Clear feature definition of cells on scaffolds extends the usefulness of AFM imaging for use in regenerative medicine.
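    The low-frequency suppression plus dynamic-range enhancement described here can be sketched on a synthetic AFM section; the cutoff index and the synthetic fiber/cell profile below are hypothetical:

```python
import numpy as np

def enhance_section(profile, n_suppress=4):
    """Suppress the lowest spatial-frequency components of an AFM section
    (the nanofiber background), then stretch the dynamic range to [0, 1]."""
    spec = np.fft.rfft(profile)
    spec[:n_suppress] = 0.0          # drop DC and low-frequency background
    out = np.fft.irfft(spec, n=len(profile))
    return (out - out.min()) / (out.max() - out.min())

x = np.linspace(0.0, 10.0, 512)
background = 50.0 * np.sin(0.3 * x)          # slowly varying fiber mat
cell_edge = 5.0 * (np.abs(x - 5.0) < 0.2)    # narrow cellular feature
enhanced = enhance_section(background + cell_edge)
print("peak location:", x[np.argmax(enhanced)])   # the cell edge stands out
```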

  10. Bayesian Estimation and Inference Using Stochastic Electronics

    PubMed Central

    Thakur, Chetan Singh; Afshar, Saeed; Wang, Runchun M.; Hamilton, Tara J.; Tapson, Jonathan; van Schaik, André

    2016-01-01

    In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using a single set of building blocks, with the ability to perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the external distractor (noise) probability interfering with the observations. In the second implementation, referred to as Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes on the order of tens of nanometers, due to the low noise margin, the effect of high-energy cosmic rays, and the low supply voltage. In our framework, the flipping of random individual bits does not affect the system performance because information is encoded in a bit stream. PMID:27047326

  11. Bayesian Estimation and Inference Using Stochastic Electronics.

    PubMed

    Thakur, Chetan Singh; Afshar, Saeed; Wang, Runchun M; Hamilton, Tara J; Tapson, Jonathan; van Schaik, André

    2016-01-01

    In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using a single set of building blocks, with the ability to perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the external distractor (noise) probability interfering with the observations. In the second implementation, referred to as Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes on the order of tens of nanometers, due to the low noise margin, the effect of high-energy cosmic rays, and the low supply voltage. In our framework, the flipping of random individual bits does not affect the system performance because information is encoded in a bit stream.
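    The BEAST-style tracking problem is, at its core, the discrete Bayesian recursion over an HMM. A plain software sketch of the predict-update loop that the stochastic hardware implements (grid size, noise levels, and both models are hypothetical) is:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50                                   # discretized positions

# Transition model: the target does a lazy random walk on a 1-D grid.
T = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i, i + 1):
        if 0 <= j < n:
            T[i, j] = 1.0
T /= T.sum(axis=1, keepdims=True)        # row-stochastic

def likelihood(z):
    # Observation model: noisy position sensor, Gaussian about the reading.
    return np.exp(-0.5 * ((np.arange(n) - z) / 2.0) ** 2)

belief = np.full(n, 1.0 / n)             # uniform prior
true_state = 25
for step in range(20):
    true_state = int(np.clip(true_state + rng.integers(-1, 2), 0, n - 1))
    z = true_state + rng.normal(0.0, 2.0)
    belief = T.T @ belief                # predict: propagate through transitions
    belief *= likelihood(z)              # update: weight by the observation
    belief /= belief.sum()               # normalize (Bayesian recursion)

print("true:", true_state, "MAP estimate:", int(np.argmax(belief)))
```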

  12. Modelling unsaturated/saturated flow in weathered profiles

    NASA Astrophysics Data System (ADS)

    Ireson, A. M.; Ali, M. A.; Van Der Kamp, G.

    2016-12-01

    Vertical weathering profiles are a common feature of many geological materials, where the fracture or macropore porosity decreases progressively below the ground surface. The weathered near-surface zone (WNSZ) has enhanced storage and permeability. When the water table is deep, the WNSZ can act to buffer recharge. When the water table is shallow, intersecting the WNSZ, transmissivity and lateral saturated flow increase with increasing water table elevation. Such a situation exists in the glacial-till-dominated landscapes of the Canadian prairies, effectively resulting in dynamic patterns of subsurface connectivity. Using dual-permeability hydraulic properties with vertically scaled macroporosity, we show how the WNSZ can be represented in models. The resulting model can be more parsimonious than an equivalent model with two or more discrete layers, and more physically realistic. We implement our model in PARFLOW-CLM and apply it to a field site in the Canadian prairies. We are able to convincingly simulate shallow groundwater dynamics and spatio-temporal patterns of groundwater connectivity.

  13. A new approach for developing adjoint models

    NASA Astrophysics Data System (ADS)

    Farrell, P. E.; Funke, S. W.

    2011-12-01

    Many data assimilation algorithms rely on the availability of gradients of misfit functionals, which can be efficiently computed with adjoint models. However, the development of an adjoint model for a complex geophysical code is generally very difficult. Algorithmic differentiation (AD, also called automatic differentiation) offers one strategy for simplifying this task: it takes the abstraction that a model is a sequence of primitive instructions, each of which may be differentiated in turn. While extremely successful, this low-level abstraction runs into time-consuming difficulties when applied to the whole codebase of a model, such as differentiating through linear solves, model I/O, calls to external libraries, language features that are unsupported by the AD tool, and the use of multiple programming languages. While these difficulties can be overcome, it requires a large amount of technical expertise and an intimate familiarity with both the AD tool and the model. An alternative to applying the AD tool to the whole codebase is to assemble the discrete adjoint equations and use these to compute the necessary gradients. With this approach, the AD tool must be applied to the nonlinear assembly operators, which are typically small, self-contained units of the codebase. The disadvantage of this approach is that the assembly of the discrete adjoint equations is still very difficult to perform correctly, especially for complex multiphysics models that perform temporal integration; as it stands, this approach is as difficult and time-consuming as applying AD to the whole model. In this work, we have developed a library which greatly simplifies and automates the alternate approach of assembling the discrete adjoint equations. We propose a complementary, higher-level abstraction to that of AD: that a model is a sequence of linear solves. The developer annotates model source code with library calls that build a 'tape' of the operators involved and their dependencies, and supplies callbacks to compute the action of these operators. The library, called libadjoint, is then capable of symbolically manipulating the forward annotation to automatically assemble the adjoint equations. Libadjoint is open source, and is explicitly designed to be bolted-on to an existing discrete model. It can be applied to any discretisation, steady or time-dependent problems, and both linear and nonlinear systems. Using libadjoint has several advantages. It requires the application of an AD tool only to small pieces of code, making the use of AD far more tractable. As libadjoint derives the adjoint equations, the expertise required to develop an adjoint model is greatly diminished. One major advantage of this approach is that the model developer is freed from implementing complex checkpointing strategies for the adjoint model: libadjoint has sufficient information about the forward model to re-play the entire forward solve when necessary, and thus the checkpointing algorithm can be implemented entirely within the library itself. Examples are shown using the Fluidity/ICOM framework, a complex ocean model under development at Imperial College London.

  14. Discrete rational and breather solution in the spatial discrete complex modified Korteweg-de Vries equation and continuous counterparts.

    PubMed

    Zhao, Hai-Qiong; Yu, Guo-Fu

    2017-04-01

    In this paper, a spatial discrete complex modified Korteweg-de Vries equation is investigated. The Lax pair, conservation laws, Darboux transformations, and breather and rational wave solutions to the semi-discrete system are presented. The distinguishing feature of the model is that the discrete rational solution can possess new W-shaped rational periodic-solitary waves that were not reported before. In addition, the first-order rogue waves reach peak amplitudes that are at least three times the background amplitude, whereas their continuous counterparts peak at exactly three times the constant background. Finally, the integrability of the discrete system, including the Lax pair, conservation laws, Darboux transformations, and explicit solutions, yields the counterparts of the continuous system in the continuum limit.

  15. Modeling of brittle-viscous flow using discrete particles

    NASA Astrophysics Data System (ADS)

    Thordén Haug, Øystein; Barabasch, Jessica; Virgo, Simon; Souche, Alban; Galland, Olivier; Mair, Karen; Abe, Steffen; Urai, Janos L.

    2017-04-01

    Many geological processes involve both viscous flow and brittle fractures, e.g. boudinage, folding and magmatic intrusions. Numerical modeling of such viscous-brittle materials poses challenges: one has to account for the discrete fracturing, the continuous viscous flow, the coupling between them, and the potential pressure dependence of the flow. The Discrete Element Method (DEM) is a numerical technique widely used for studying fracture of geomaterials. However, the implementation of viscous fluid flow in discrete element models is not trivial. In this study, we model quasi-viscous fluid flow behavior using the Esys-Particle software (Abe et al., 2004). We build on the methodology of Abe and Urai (2012), where a combination of elastic repulsion and dashpot interactions between the discrete particles is implemented. Several benchmarks are presented to illustrate the material properties. Here, we present extensive, systematic material tests to characterize the rheology of quasi-viscous DEM particle packings. We present two tests, a simple shear test and a channel flow test, both in 2D and 3D. In the simple shear tests, simulations were performed in a box, where the upper wall is moved with a constant velocity in the x-direction, causing shear deformation of the particle assemblage. Here, the boundary conditions are periodic on the sides, with constant forces on the upper and lower walls. In the channel flow tests, a piston pushes a sample through a channel by Poiseuille flow. For both setups, we present the resulting stress-strain relationships over a range of material parameters, confining stress and strain rate. Results show a power-law dependence between stress and strain rate, with a non-linear dependence on confining force. The material is strain-softening under some conditions. Additionally, volumetric strain can be dilatant or compactant, depending on porosity, confining pressure and strain rate. Constitutive relations are implemented in a way that limits the range of viscosities: for identical pressure and strain rate, an order of magnitude range in viscosity can be investigated. The extensive material testing indicates that DEM particles interacting by a combination of elastic repulsion and dashpots can be used to model viscous flows. This allows us to exploit the fracturing capabilities of the discrete element method and study systems that involve both viscous flow and brittle fracturing. However, the small viscosity range achievable using this approach does constrain the applicability to systems where larger viscosity ranges are required, such as folding of viscous layers of contrasting viscosities. References: Abe, S., Place, D., & Mora, P. (2004). A parallel implementation of the lattice solid model for the simulation of rock mechanics and earthquake dynamics. PAGEOPH, 161(11-12), 2265-2277. http://doi.org/10.1007/s00024-004-2562-x. Abe, S., and J. L. Urai (2012), Discrete element modeling of boudinage: Insights on rock rheology, matrix flow, and evolution of geometry, JGR, 117, B01407, doi:10.1029/2011JB00855

  16. Learning may need only a few bits of synaptic precision

    NASA Astrophysics Data System (ADS)

    Baldassi, Carlo; Gerace, Federica; Lucibello, Carlo; Saglietti, Luca; Zecchina, Riccardo

    2016-05-01

    Learning in neural networks poses peculiar challenges when using discretized rather than continuous synaptic states. The choice of discrete synapses is motivated by biological reasoning and experiments, and possibly by hardware implementation considerations as well. In this paper we extend a previous large-deviations analysis which unveiled the existence of peculiar dense regions in the space of synaptic states which account for the possibility of learning efficiently in networks with binary synapses. We extend the analysis to synapses with multiple states and generally more plausible biological features. The results clearly indicate that the overall qualitative picture is unchanged with respect to the binary case, and very robust to variation of the details of the model. We also provide quantitative results which suggest that the advantages of increasing the synaptic precision (i.e., the number of internal synaptic states) rapidly vanish after the first few bits, and therefore that, for practical applications, only a few bits may be needed for near-optimal performance, consistent with recent biological findings. Finally, we demonstrate how the theoretical analysis can be exploited to design efficient algorithmic search strategies.

  17. SPEEDES - A multiple-synchronization environment for parallel discrete-event simulation

    NASA Technical Reports Server (NTRS)

    Steinman, Jeff S.

    1992-01-01

    Synchronous Parallel Environment for Emulation and Discrete-Event Simulation (SPEEDES) is a unified parallel simulation environment. It supports multiple-synchronization protocols without requiring users to recompile their code. When a SPEEDES simulation runs on one node, all the extra parallel overhead is removed automatically at run time. When the same executable runs in parallel, the user preselects the synchronization algorithm from a list of options. SPEEDES currently runs on UNIX networks and on the California Institute of Technology/Jet Propulsion Laboratory Mark III Hypercube. SPEEDES also supports interactive simulations. Featured in the SPEEDES environment is a new parallel synchronization approach called Breathing Time Buckets. This algorithm uses some of the conservative techniques found in Time Bucket synchronization, along with the optimism that characterizes the Time Warp approach. A mathematical model derived from first principles predicts the performance of Breathing Time Buckets. Along with the Breathing Time Buckets algorithm, this paper discusses the rules for processing events in SPEEDES, describes the implementation of various other synchronization protocols supported by SPEEDES, describes some new ones for the future, discusses interactive simulations, and then gives some performance results.

  18. StochKit2: software for discrete stochastic simulation of biochemical systems with events.

    PubMed

    Sanft, Kevin R; Wu, Sheng; Roh, Min; Fu, Jin; Lim, Rone Kwei; Petzold, Linda R

    2011-09-01

    StochKit2 is the first major upgrade of the popular StochKit stochastic simulation software package. StochKit2 provides highly efficient implementations of several variants of Gillespie's stochastic simulation algorithm (SSA), and tau-leaping with automatic step size selection. StochKit2 features include automatic selection of the optimal SSA method based on model properties, event handling, and automatic parallelism on multicore architectures. The underlying structure of the code has been completely updated to provide a flexible framework for extending its functionality. StochKit2 runs on Linux/Unix, Mac OS X and Windows. It is freely available under GPL version 3 and can be downloaded from http://sourceforge.net/projects/stochkit/. petzold@engineering.ucsb.edu.
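
    For readers unfamiliar with the SSA variants that StochKit2 implements, a bare-bones sketch of Gillespie's direct method is shown below (Python; the reaction system and rate constants are illustrative, and this is not StochKit2 code):

```python
import numpy as np

def ssa_direct(x0, stoich, propensities, t_end, rng=np.random.default_rng(0)):
    """Gillespie's direct method: sample the next reaction time and index."""
    t, x = 0.0, np.array(x0, dtype=float)
    path = [(0.0, list(x0))]
    while t < t_end:
        a = np.array([f(x) for f in propensities])
        a0 = a.sum()
        if a0 == 0.0:
            break                        # no reaction can fire
        t += rng.exponential(1.0 / a0)   # time to next reaction
        j = rng.choice(len(a), p=a / a0) # which reaction fires
        x += stoich[j]
        path.append((t, list(x)))
    return path

# A <-> B with rates k1=1.0, k2=0.5 (illustrative)
stoich = [np.array([-1, +1]), np.array([+1, -1])]
props  = [lambda x: 1.0 * x[0], lambda x: 0.5 * x[1]]
print(ssa_direct([100, 0], stoich, props, t_end=5.0)[-1])
```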

  19. Nonlinear model-order reduction for compressible flow solvers using the Discrete Empirical Interpolation Method

    NASA Astrophysics Data System (ADS)

    Fosas de Pando, Miguel; Schmid, Peter J.; Sipp, Denis

    2016-11-01

    Nonlinear model reduction for large-scale flows is an essential component in many fluid applications such as flow control, optimization, parameter space exploration and statistical analysis. In this article, we generalize the POD-DEIM method, introduced by Chaturantabut & Sorensen [1], to address nonlocal nonlinearities in the equations without loss of performance or efficiency. The nonlinear terms are represented by nested DEIM-approximations using multiple expansion bases based on the Proper Orthogonal Decomposition. These extensions are imperative, for example, for applications of the POD-DEIM method to large-scale compressible flows. The efficient implementation of the presented model-reduction technique follows our earlier work [2] on linearized and adjoint analyses and takes advantage of the modular structure of our compressible flow solver. The efficacy of the nonlinear model-reduction technique is demonstrated for the flow around an airfoil and its acoustic footprint. We obtain an accurate and robust low-dimensional model that captures the main features of the full flow.
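
    The DEIM ingredient of the POD-DEIM method selects interpolation indices greedily from a basis of nonlinear-term snapshots; a minimal dense-algebra sketch (Python/NumPy, without the nesting described above) could read:

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM point selection for an (n x m) orthonormal basis U
    of nonlinear-term snapshots."""
    n, m = U.shape
    p = [int(np.argmax(np.abs(U[:, 0])))]          # first interpolation point
    for j in range(1, m):
        Uj, Pj = U[:, :j], np.array(p)
        # residual of the j-th basis vector w.r.t. the current interpolant
        c = np.linalg.solve(Uj[Pj, :], U[Pj, j])
        r = U[:, j] - Uj @ c
        p.append(int(np.argmax(np.abs(r))))        # next point: largest residual
    return np.array(p)

U, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((200, 8)))
print(deim_indices(U))
```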

  20. A Discrete Velocity Kinetic Model with Food Metric: Chemotaxis Traveling Waves.

    PubMed

    Choi, Sun-Ho; Kim, Yong-Jung

    2017-02-01

    We introduce a mesoscopic-scale chemotaxis model for traveling wave phenomena induced by a food metric. The organisms of this simplified kinetic model have two discrete velocity modes, [Formula: see text], and a constant tumbling rate. The main feature of the model is that the speed of organisms is constant, [Formula: see text], with respect to the food metric, not the Euclidean metric. The uniqueness and existence of the traveling wave solution of the model are obtained. Unlike the classical logarithmic model case, traveling waves exist under super-linear consumption rates, and infinite-population pulse-type traveling waves are obtained. Numerical simulations are also provided.

  1. Hybrid discrete/continuum algorithms for stochastic reaction networks

    DOE PAGES

    Safta, Cosmin; Sargsyan, Khachik; Debusschere, Bert; ...

    2014-10-22

    Direct solutions of the Chemical Master Equation (CME) governing Stochastic Reaction Networks (SRNs) are generally prohibitively expensive due to excessive numbers of possible discrete states in such systems. To enhance computational efficiency we develop a hybrid approach where the evolution of states with low molecule counts is treated with the discrete CME model while that of states with large molecule counts is modeled by the continuum Fokker-Planck equation. The Fokker-Planck equation is discretized using a 2nd order finite volume approach with appropriate treatment of flux components to avoid negative probability values. The numerical construction at the interface between the discrete and continuum regions implements the transfer of probability reaction by reaction according to the stoichiometry of the system. Finally, the performance of this novel hybrid approach is explored for a two-species circadian model, with computational efficiency gains of about one order of magnitude.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prasad, M.K.; Kershaw, D.S.; Shaw, M.J.

    The authors present detailed features of the ICF3D hydrodynamics code used for inertial fusion simulations. This code is intended to be a state-of-the-art upgrade of the well-known fluid code, LASNEX. ICF3D employs discontinuous finite elements on a discrete unstructured mesh consisting of a variety of 3D polyhedra including tetrahedra, prisms, and hexahedra. The authors discuss details of how the Roe-averaged second-order convection was applied on the discrete elements, and how the C++ coding interface has helped to simplify implementing the many physics and numerics modules within the code package. The authors emphasize the virtues of object-oriented design in large-scale projects such as ICF3D.

  3. Global exponential periodicity and stability of discrete-time complex-valued recurrent neural networks with time-delays.

    PubMed

    Hu, Jin; Wang, Jun

    2015-06-01

    In recent years, complex-valued recurrent neural networks have been developed and analysed in depth in view of their good modelling performance for applications involving complex-valued elements. In implementing continuous-time dynamical systems for simulation or computational purposes, it is quite necessary to utilize a discrete-time model which is an analogue of the continuous-time system. In this paper, we analyse a discrete-time complex-valued recurrent neural network model and obtain sufficient conditions for its global exponential periodicity and exponential stability. Simulation results of several numerical examples are delineated to illustrate the theoretical results, and an application to associative memory is also given. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Enhanced robust finite-time passivity for Markovian jumping discrete-time BAM neural networks with leakage delay.

    PubMed

    Sowmiya, C; Raja, R; Cao, Jinde; Rajchakit, G; Alsaedi, Ahmed

    2017-01-01

    This paper is concerned with the problem of enhanced results on robust finite-time passivity for uncertain discrete-time Markovian jumping BAM delayed neural networks with leakage delay. By employing a proper Lyapunov-Krasovskii functional candidate, the reciprocally convex combination method, and the linear matrix inequality technique, several sufficient conditions are derived for verifying the passivity of discrete-time BAM neural networks. An important feature presented in our paper is that we utilize the reciprocally convex combination lemma in the main section; the relevance of that lemma arises from the derivation of stability using Jensen's inequality. Further, the zero inequalities help to propose sufficient conditions for finite-time boundedness and passivity under uncertainties. Finally, the enhancement of the feasible region of the proposed criteria is shown via numerical examples with simulation to illustrate the applicability and usefulness of the proposed method.

  5. Metric integration architecture for product development

    NASA Astrophysics Data System (ADS)

    Sieger, David B.

    1997-06-01

    Present-day product development endeavors utilize the concurrent engineering philosophy as a logical means for incorporating a variety of viewpoints into the design of products. Since this approach provides no explicit procedural provisions, it is necessary to establish at least a mental coupling with a known design process model. The central feature of all such models is the management and transformation of information. While these models assist in structuring the design process, characterizing the basic flow of operations involved, they provide no guidance facilities. The significance of this feature, and the role it plays in the time required to develop products, is increasing in importance due to the inherent process dynamics, system/component complexities, and competitive forces. The methodology presented in this paper involves the use of a hierarchical system structure, the discrete event system specification (DEVS), and multidimensional state-variable-based metrics. This approach is unique in its capability to quantify designers' actions throughout product development, provide recommendations about subsequent activity selection, and coordinate distributed activities of designers and/or design teams across all design stages. Conceptual design tool implementation results are used to demonstrate the utility of this technique in improving the incremental decision making process.

  6. There's a Bug in Your Ear!: Using Technology to Increase the Accuracy of DTT Implementation

    ERIC Educational Resources Information Center

    McKinney, Tracy; Vasquez, Eleazar, III.

    2014-01-01

    Many professionals have successfully implemented discrete trial teaching in the past. However, there have not been extensive studies examining the accuracy of discrete trial teaching implementation. This study investigated the use of Bug in Ear feedback on the accuracy of discrete trial teaching implementation among two pre-service teachers…

  7. Increasing the value of geospatial informatics with open approaches for Big Data

    NASA Astrophysics Data System (ADS)

    Percivall, G.; Bermudez, L. E.

    2017-12-01

    Open approaches to big data provide geoscientists with new capabilities to address problems of unmatched size and complexity. Consensus approaches for Big Geo Data have been addressed in multiple international workshops and testbeds organized by the Open Geospatial Consortium (OGC) in the past year. Participants came from government (NASA, ESA, USGS, NOAA, DOE); research (ORNL, NCSA, IU, JPL, CRIM, RENCI); industry (ESRI, Digital Globe, IBM, rasdaman); standards (JTC 1/NIST); and open source software communities. Results from the workshops and testbeds are documented in Testbed reports and a White Paper published by the OGC. The White Paper identifies the following set of use cases: Collection and Ingest: Remote sensed data processing; Data stream processing Prepare and Structure: SQL and NoSQL databases; Data linking; Feature identification Analytics and Visualization: Spatial-temporal analytics; Machine Learning; Data Exploration Modeling and Prediction: Integrated environmental models; Urban 4D models. Open implementations were developed in the Arctic Spatial Data Pilot using Discrete Global Grid Systems (DGGS) and in Testbeds using WPS and ESGF to publish climate predictions. Further development activities to advance open implementations of Big Geo Data include the following: Open Cloud Computing: Avoid vendor lock-in through API interoperability and Application portability. Open Source Extensions: Implement geospatial data representations in projects from Apache, Location Tech, and OSGeo. Investigate parallelization strategies for N-Dimensional spatial data. Geospatial Data Representations: Schemas to improve processing and analysis using geospatial concepts: Features, Coverages, DGGS. Use geospatial encodings like NetCDF and GeoPackage. Big Linked Geodata: Use linked data methods scaled to big geodata. Analysis Ready Data: Support "Download as last resort" and "Analytics as a service". Promote elements common to "datacubes."

  8. Modelling the Preferences of Students for Alternative Assignment Designs Using the Discrete Choice Experiment Methodology

    ERIC Educational Resources Information Center

    Kennelly, Brendan; Flannery, Darragh; Considine, John; Doherty, Edel; Hynes, Stephen

    2014-01-01

    This paper outlines how a discrete choice experiment (DCE) can be used to learn more about how students are willing to trade off various features of assignments such as the nature and timing of feedback and the method used to submit assignments. A DCE identifies plausible levels of the key attributes of a good or service and then presents the…

  9. A 2D-3D strategy for resolving tsunami-generated debris flow in urban environments

    NASA Astrophysics Data System (ADS)

    Birjukovs Canelas, Ricardo; Conde, Daniel; Garcia-Feal, Orlando; João Telhado, Maria; Ferreira, Rui M. L.

    2017-04-01

    The incorporation of solids, either sediment from the natural environment or remains from buildings or infrastructures, is a relevant feature of tsunami run-up in urban environments, greatly increasing the destructive potential of tsunami propagation. Two-dimensional (2D) models have been used to assess the propagation of the bore, even in dense urban fronts. Computational advances are introduced in this work, namely a fully Lagrangian, 3D description of the fluid-solid flow, coupled with a high performance meshless implementation capable of dealing with large domains and fine discretizations. A Smoothed Particle Hydrodynamics (SPH) Navier-Stokes discretization and a Distributed Contact Discrete Element Method (DCDEM) description of solid-solid interactions provide a state-of-the-art fluid-solid flow description. Together with support for arbitrary geometries, centimetre-scale-resolution simulations of a city section in Lisbon downtown are presented. 2D results are used as boundary conditions for the 3D model, characterizing the incoming wave as it approaches the coast. It is shown that the incoming bore is able to mobilize and incorporate standing vehicles and other urban hardware. Such a fully featured simulation provides an explicit description of the interactions among fluid, floating debris (vehicles and urban furniture), the buildings and the pavement. The proposed model presents both an innovative research tool for the study of these flows and a powerful and robust approach to study, design and test mitigation solutions at the local scale. At the same time, due to the high time and space resolution of these methodologies, new questions are raised: scenario-building and initial configurations play a crucial role but they do not univocally determine the final configuration of the simulation, as the solution of the Navier-Stokes equations for high Reynolds numbers possesses a high number of degrees of freedom. This calls for conducting the simulations in a statistical framework, involving both initial conditions generation and interpretation of results, which is only attainable under very high standards of computational efficiency. This research was partially supported by Portuguese and European funds, within programs COMPETE2020 and PORL-FEDER, through project PTDC/ECM-HID/6387/2014 granted by the National Foundation for Science and Technology (FCT).

  10. Three-Class Mammogram Classification Based on Descriptive CNN Features

    PubMed Central

    Zhang, Qianni; Jadoon, Adeel

    2017-01-01

    In this paper, a novel classification technique for a large data set of mammograms using a deep learning method is proposed. The proposed model targets a three-class classification study (normal, malignant, and benign cases). In our model we have presented two methods, namely, convolutional neural network-discrete wavelet (CNN-DW) and convolutional neural network-curvelet transform (CNN-CT). An augmented data set is generated by using mammogram patches. To enhance the contrast of mammogram images, the data set is filtered by contrast limited adaptive histogram equalization (CLAHE). In the CNN-DW method, enhanced mammogram images are decomposed into their four subbands by means of the two-dimensional discrete wavelet transform (2D-DWT), while in the second method the discrete curvelet transform (DCT) is used. In both methods, dense scale-invariant features (DSIFT) for all subbands are extracted. An input data matrix containing these subband features of all the mammogram patches is created and processed as input to a convolutional neural network (CNN). A softmax layer and a support vector machine (SVM) layer are used to train the CNN for classification. The proposed methods have been compared with existing methods in terms of accuracy rate, error rate, and various validation assessment measures. CNN-DW and CNN-CT have achieved accuracy rates of 81.83% and 83.74%, respectively. Simulation results clearly validate the significance and impact of our proposed model as compared to other well-known existing techniques. PMID:28191461
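
    As a sketch of the decomposition step in the CNN-DW pipeline (a one-level 2D DWT into four subbands), the following Python snippet uses the PyWavelets package; the paper then computes DSIFT descriptors per subband, which is omitted here, and the wavelet choice is an illustrative assumption:

```python
import numpy as np
import pywt  # PyWavelets, assumed installed

def dwt_subband_features(patch):
    """One-level 2D DWT of a mammogram patch; return the four subbands
    (LL, LH, HL, HH) flattened into a single feature vector."""
    LL, (LH, HL, HH) = pywt.dwt2(patch, "db4")
    return np.concatenate([b.ravel() for b in (LL, LH, HL, HH)])

patch = np.random.default_rng(0).random((64, 64))  # stand-in for a real patch
print(dwt_subband_features(patch).shape)
```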

  11. Three-Class Mammogram Classification Based on Descriptive CNN Features.

    PubMed

    Jadoon, M Mohsin; Zhang, Qianni; Haq, Ihsan Ul; Butt, Sharjeel; Jadoon, Adeel

    2017-01-01

    In this paper, a novel classification technique for a large data set of mammograms using a deep learning method is proposed. The proposed model targets a three-class classification study (normal, malignant, and benign cases). In our model we have presented two methods, namely, convolutional neural network-discrete wavelet (CNN-DW) and convolutional neural network-curvelet transform (CNN-CT). An augmented data set is generated by using mammogram patches. To enhance the contrast of mammogram images, the data set is filtered by contrast limited adaptive histogram equalization (CLAHE). In the CNN-DW method, enhanced mammogram images are decomposed into their four subbands by means of the two-dimensional discrete wavelet transform (2D-DWT), while in the second method the discrete curvelet transform (DCT) is used. In both methods, dense scale-invariant features (DSIFT) for all subbands are extracted. An input data matrix containing these subband features of all the mammogram patches is created and processed as input to a convolutional neural network (CNN). A softmax layer and a support vector machine (SVM) layer are used to train the CNN for classification. The proposed methods have been compared with existing methods in terms of accuracy rate, error rate, and various validation assessment measures. CNN-DW and CNN-CT have achieved accuracy rates of 81.83% and 83.74%, respectively. Simulation results clearly validate the significance and impact of our proposed model as compared to other well-known existing techniques.

  12. Extended optical model for fission

    DOE PAGES

    Sin, M.; Capote, R.; Herman, M. W.; ...

    2016-03-07

    A comprehensive formalism to calculate fission cross sections based on the extension of the optical model for fission is presented. It can be used for description of nuclear reactions on actinides featuring multi-humped fission barriers with partial absorption in the wells and direct transmission through discrete and continuum fission channels. The formalism describes the gross fluctuations observed in the fission probability due to vibrational resonances, and can be easily implemented in existing statistical reaction model codes. The extended optical model for fission is applied for neutron induced fission cross-section calculations on 234,235,238U and 239Pu targets. A triple-humped fission barrier is used for 234,235U(n,f), while a double-humped fission barrier is used for 238U(n,f) and 239Pu(n,f) reactions as predicted by theoretical barrier calculations. The impact of partial damping of class-II/III states, and of direct transmission through discrete and continuum fission channels, is shown to be critical for a proper description of the measured fission cross sections for 234,235,238U(n,f) reactions. The 239Pu(n,f) reaction can be calculated in the complete damping approximation. Calculated cross sections for 235,238U(n,f) and 239Pu(n,f) reactions agree within 3% with the corresponding cross sections derived within the Neutron Standards least-squares fit of available experimental data. Lastly, the extended optical model for fission can be used for both theoretical fission studies and nuclear data evaluation.

  13. ADAPTIVE FINITE ELEMENT MODELING TECHNIQUES FOR THE POISSON-BOLTZMANN EQUATION

    PubMed Central

    HOLST, MICHAEL; MCCAMMON, JAMES ANDREW; YU, ZEYUN; ZHOU, YOUNGCHENG; ZHU, YUNRONG

    2011-01-01

    We consider the design of an effective and reliable adaptive finite element method (AFEM) for the nonlinear Poisson-Boltzmann equation (PBE). We first examine the two-term regularization technique for the continuous problem recently proposed by Chen, Holst, and Xu based on the removal of the singular electrostatic potential inside biomolecules; this technique made possible the development of the first complete solution and approximation theory for the Poisson-Boltzmann equation, the first provably convergent discretization, and also allowed for the development of a provably convergent AFEM. However, in practical implementation, this two-term regularization exhibits numerical instability. Therefore, we examine a variation of this regularization technique which can be shown to be less susceptible to such instability. We establish a priori estimates and other basic results for the continuous regularized problem, as well as for Galerkin finite element approximations. We show that the new approach produces regularized continuous and discrete problems with the same mathematical advantages of the original regularization. We then design an AFEM scheme for the new regularized problem, and show that the resulting AFEM scheme is accurate and reliable, by proving a contraction result for the error. This result, which is one of the first results of this type for nonlinear elliptic problems, is based on using continuous and discrete a priori L∞ estimates to establish quasi-orthogonality. To provide a high-quality geometric model as input to the AFEM algorithm, we also describe a class of feature-preserving adaptive mesh generation algorithms designed specifically for constructing meshes of biomolecular structures, based on the intrinsic local structure tensor of the molecular surface. All of the algorithms described in the article are implemented in the Finite Element Toolkit (FETK), developed and maintained at UCSD. The stability advantages of the new regularization scheme are demonstrated with FETK through comparisons with the original regularization approach for a model problem. The convergence and accuracy of the overall AFEM algorithm is also illustrated by numerical approximation of electrostatic solvation energy for an insulin protein. PMID:21949541

  14. Experimental confirmation of a PDE-based approach to design of feedback controls

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Smith, Ralph C.; Brown, D. E.; Silcox, R. J.; Metcalf, Vern L.

    1995-01-01

    Issues regarding the experimental implementation of partial differential equation based controllers are discussed in this work. While the motivating application involves the reduction of vibration levels for a circular plate through excitation of surface-mounted piezoceramic patches, the general techniques described here will extend to a variety of applications. The initial step is the development of a PDE model which accurately captures the physics of the underlying process. This model is then discretized to yield a vector-valued initial value problem. Optimal control theory is used to determine continuous-time voltages to the patches, and the approximations needed to facilitate discrete time implementation are addressed. Finally, experimental results demonstrating the control of both transient and steady state vibrations through these techniques are presented.

  15. ADAM: Analysis of Discrete Models of Biological Systems Using Computer Algebra

    PubMed Central

    2011-01-01

    Background Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. Results We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Conclusions Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics. PMID:21774817

  16. Accurate reaction-diffusion operator splitting on tetrahedral meshes for parallel stochastic molecular simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hepburn, I.; De Schutter, E., E-mail: erik@oist.jp; Theoretical Neurobiology & Neuroengineering, University of Antwerp, Antwerp 2610

    Spatial stochastic molecular simulations in biology are limited by the intense computation required to track molecules in space either in a discrete time or discrete space framework, which has led to the development of parallel methods that can take advantage of the power of modern supercomputers in recent years. We systematically test suggested components of stochastic reaction-diffusion operator splitting in the literature and discuss their effects on accuracy. We introduce an operator splitting implementation for irregular meshes that enhances accuracy with minimal performance cost. We test a range of models in small-scale MPI simulations, from simple diffusion models to realistic biological models, and find that multi-dimensional geometry partitioning is an important consideration for optimum performance. We demonstrate performance gains of 1-3 orders of magnitude in the parallel implementation, with peak performance strongly dependent on model specification.

  17. Robustness of Radiomic Features in [11C]Choline and [18F]FDG PET/CT Imaging of Nasopharyngeal Carcinoma: Impact of Segmentation and Discretization.

    PubMed

    Lu, Lijun; Lv, Wenbing; Jiang, Jun; Ma, Jianhua; Feng, Qianjin; Rahmim, Arman; Chen, Wufan

    2016-12-01

    Radiomic features are increasingly utilized to evaluate tumor heterogeneity in PET imaging and to enable enhanced prediction of therapy response and outcome. An important ingredient for success in translating radiomic features to clinical reality is to quantify and ascertain their robustness. In the present work, we studied the impact of segmentation and discretization on 88 radiomic features in 2-deoxy-2-[18F]fluoro-D-glucose ([18F]FDG) and [11C]methyl-choline ([11C]choline) positron emission tomography/X-ray computed tomography (PET/CT) imaging of nasopharyngeal carcinoma. Forty patients underwent [18F]FDG PET/CT scans. Of these, nine patients were imaged on a different day utilizing [11C]choline PET/CT. Tumors were delineated using reference manual segmentation by the consensus of three expert physicians; using 41, 50, and 70 % maximum standardized uptake value (SUVmax) thresholds with background correction, Nestle's method, and watershed and region growing methods; and then discretized with fixed bin sizes (0.05, 0.1, 0.2, 0.5, and 1) in units of SUV. A total of 88 features, including 21 first-order intensity features, 10 shape features, and 57 second- and higher-order textural features, were extracted from the tumors. The robustness of the features was evaluated via the intraclass correlation coefficient (ICC) for seven segmentation methods (involving all 88 features) and five discretization bin sizes (involving the 57 second- and higher-order features). Forty-four (50 %) and 55 (63 %) features showed ICC ≥0.8 with respect to segmentation as obtained from [18F]FDG and [11C]choline, respectively. Thirteen (23 %) and 12 (21 %) features showed ICC ≥0.8 with respect to discretization as obtained from [18F]FDG and [11C]choline, respectively. Six features obtained from both [18F]FDG and [11C]choline had ICC ≥0.8 for both segmentation and discretization, five of which were gray-level co-occurrence matrix (GLCM) features (SumEntropy, Entropy, DifEntropy, Homogeneity1, and Homogeneity2) and one of which was a neighborhood gray-tone difference matrix (NGTDM) feature (Coarseness). Discretization generated larger effects on features than segmentation for both tracers. Features extracted from [11C]choline were more robust than those from [18F]FDG with respect to segmentation. Discretization had very similar effects on features extracted from both tracers.

  18. Training Parent Implementation of Discrete-Trial Teaching: Effects on Generalization of Parent Teaching and Child Correct Responding

    ERIC Educational Resources Information Center

    Lafasakis, Michael; Sturmey, Peter

    2007-01-01

    Behavioral skills training was used to teach 3 parents to implement discrete-trial teaching with their children with developmental disabilities. Parents learned to implement discrete-trial training, their skills generalized to novel programs, and the children's correct responding increased, suggesting that behavioral skills training is an…

  19. Numerical implementation, verification and validation of two-phase flow four-equation drift flux model with Jacobian-free Newton–Krylov method

    DOE PAGES

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2016-08-24

    This study presents a numerical investigation of using the Jacobian-free Newton–Krylov (JFNK) method to solve the two-phase flow four-equation drift flux model with realistic constitutive correlations ('closure models'). The drift flux model is based on Ishii and his collaborators' work. Additional constitutive correlations for vertical channel flow, such as two-phase flow pressure drop, flow regime map, wall boiling and interfacial heat transfer models, were taken from the RELAP5-3D Code Manual and included to complete the model. The staggered grid finite volume method and the fully implicit backward Euler method were used for the spatial discretization and time integration schemes, respectively. The Jacobian-free Newton–Krylov method shows no difficulty in solving the two-phase flow drift flux model with a discrete flow regime map. In addition to the Jacobian-free approach, the preconditioning matrix is obtained by using the default finite differencing method provided in the PETSc package, and consequently the labor-intensive implementation of a complex analytical Jacobian matrix is avoided. Extensive and successful numerical verification and validation have been performed to prove the correct implementation of the models and methods. Code-to-code comparison with RELAP5-3D has further demonstrated the successful implementation of the drift flux model.
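
    As a minimal illustration of the Jacobian-free Newton–Krylov idea, the sketch below uses SciPy's newton_krylov solver rather than PETSc, and a toy backward-Euler residual stands in for the drift flux model; only the residual function is needed, no analytical Jacobian:

```python
import numpy as np
from scipy.optimize import newton_krylov  # Jacobian-free Newton-Krylov solver

def residual(u):
    """Backward-Euler residual for du/dt = 1 - u^2 over one implicit step."""
    dt, u_old = 0.1, 0.5 * np.ones(5)
    return (u - u_old) / dt + u**2 - 1.0

# Krylov iterations probe the Jacobian only through residual evaluations
u_new = newton_krylov(residual, np.ones(5), f_tol=1e-10)
print(u_new)   # each entry solves u^2 + 10u - 6 = 0, approx 0.568
```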

  20. Discrete Time Crystals: Rigidity, Criticality, and Realizations.

    PubMed

    Yao, N Y; Potter, A C; Potirniche, I-D; Vishwanath, A

    2017-01-20

    Despite being forbidden in equilibrium, spontaneous breaking of time translation symmetry can occur in periodically driven, Floquet systems with discrete time-translation symmetry. The period of the resulting discrete time crystal is quantized to an integer multiple of the drive period, arising from a combination of collective synchronization and many-body localization. Here, we consider a simple model for a one-dimensional discrete time crystal which explicitly reveals the rigidity of the emergent oscillations as the drive is varied. We numerically map out its phase diagram and compute the properties of the dynamical phase transition where the time crystal melts into a trivial Floquet insulator. Moreover, we demonstrate that the model can be realized with current experimental technologies and propose a blueprint based upon a one-dimensional chain of trapped ions. Using experimental parameters (featuring long-range interactions), we identify the phase boundaries of the ion time crystal and propose a measurable signature of the symmetry-breaking phase transition.

  1. CGBayesNets: Conditional Gaussian Bayesian Network Learning and Inference with Mixed Discrete and Continuous Data

    PubMed Central

    Weiss, Scott T.

    2014-01-01

    Bayesian Networks (BN) have been a popular predictive modeling formalism in bioinformatics, but their application in modern genomics has been slowed by an inability to cleanly handle domains with mixed discrete and continuous variables. Existing free BN software packages either discretize continuous variables, which can lead to information loss, or do not include inference routines, which makes prediction with the BN impossible. We present CGBayesNets, a BN package focused around prediction of a clinical phenotype from mixed discrete and continuous variables, which fills these gaps. CGBayesNets implements Bayesian likelihood and inference algorithms for the conditional Gaussian Bayesian network (CGBNs) formalism, one appropriate for predicting an outcome of interest from, e.g., multimodal genomic data. We provide four different network learning algorithms, each making a different tradeoff between computational cost and network likelihood. CGBayesNets provides a full suite of functions for model exploration and verification, including cross validation, bootstrapping, and AUC manipulation. We highlight several results obtained previously with CGBayesNets, including predictive models of wood properties from tree genomics, leukemia subtype classification from mixed genomic data, and robust prediction of intensive care unit mortality outcomes from metabolomic profiles. We also provide detailed example analysis on public metabolomic and gene expression datasets. CGBayesNets is implemented in MATLAB and available as MATLAB source code, under an Open Source license and anonymous download at http://www.cgbayesnets.com. PMID:24922310

  2. CGBayesNets: conditional Gaussian Bayesian network learning and inference with mixed discrete and continuous data.

    PubMed

    McGeachie, Michael J; Chang, Hsun-Hsien; Weiss, Scott T

    2014-06-01

    Bayesian Networks (BN) have been a popular predictive modeling formalism in bioinformatics, but their application in modern genomics has been slowed by an inability to cleanly handle domains with mixed discrete and continuous variables. Existing free BN software packages either discretize continuous variables, which can lead to information loss, or do not include inference routines, which makes prediction with the BN impossible. We present CGBayesNets, a BN package focused around prediction of a clinical phenotype from mixed discrete and continuous variables, which fills these gaps. CGBayesNets implements Bayesian likelihood and inference algorithms for the conditional Gaussian Bayesian network (CGBNs) formalism, one appropriate for predicting an outcome of interest from, e.g., multimodal genomic data. We provide four different network learning algorithms, each making a different tradeoff between computational cost and network likelihood. CGBayesNets provides a full suite of functions for model exploration and verification, including cross validation, bootstrapping, and AUC manipulation. We highlight several results obtained previously with CGBayesNets, including predictive models of wood properties from tree genomics, leukemia subtype classification from mixed genomic data, and robust prediction of intensive care unit mortality outcomes from metabolomic profiles. We also provide detailed example analysis on public metabolomic and gene expression datasets. CGBayesNets is implemented in MATLAB and available as MATLAB source code, under an Open Source license and anonymous download at http://www.cgbayesnets.com.

  3. An integrated logit model for contamination event detection in water distribution systems.

    PubMed

    Housh, Mashor; Ostfeld, Avi

    2015-05-15

    The problem of contamination event detection in water distribution systems has become one of the most challenging research topics in water distribution systems analysis. Current attempts at event detection utilize a variety of approaches including statistical, heuristic, machine learning, and optimization methods. Several existing event detection systems share a common feature in which alarms are obtained separately for each of the water quality indicators. Unifying those single alarms from different indicators is usually performed by means of simple heuristics. A salient feature of the approach developed here is the use of a statistically oriented model for discrete choice prediction, estimated using the maximum likelihood method, for integrating the single alarms. The discrete choice model is jointly calibrated with the other components of the event detection system framework on a training data set using genetic algorithms. The fusing of the individual indicator probabilities, which is left out of focus in many existing event detection system models, is confirmed to be a crucial part of the system, and modelling it with a discrete choice model improves performance. The developed methodology is tested on real water quality data, showing improved performance in decreasing the number of false positive alarms and in its ability to detect events with higher probabilities, compared to previous studies. Copyright © 2015 Elsevier Ltd. All rights reserved.
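
    A minimal sketch of the fusion idea, assuming synthetic data and using scikit-learn's maximum likelihood logistic regression as a stand-in for the paper's discrete choice model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Single-indicator alarm probabilities (e.g. chlorine, pH, turbidity, ...)
# for a set of time steps, plus ground-truth event labels (all illustrative).
rng = np.random.default_rng(0)
p_single = rng.random((500, 4))            # per-indicator event probabilities
y = (p_single.mean(axis=1) + 0.1 * rng.standard_normal(500) > 0.6).astype(int)

# A binary logit fuses the single alarms into one event probability,
# fitted by maximum likelihood; this is the role the discrete choice
# model plays inside the event detection system.
fusion = LogisticRegression().fit(p_single, y)
print(fusion.predict_proba(p_single[:3])[:, 1])
```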

  4. Hysteretic Models Considering Axial-Shear-Flexure Interaction

    NASA Astrophysics Data System (ADS)

    Ceresa, Paola; Negrisoli, Giorgio

    2017-10-01

    Most of the existing numerical models implemented in finite element (FE) software, at the current state of the art, are not capable of describing, with enough reliability, the interaction between axial, shear and flexural actions under cyclic loading (e.g. seismic actions), neglecting effects that are crucial for predicting the nature of the collapse of reinforced concrete (RC) structural elements. Only a few existing 3D volume models or fibre beam models can produce a reasonably accurate response, but they are still computationally inefficient for typical applications in earthquake engineering and are characterized by very complex formulations. Thus, discrete models with lumped plasticity hinges may be the preferred choice for modelling the hysteretic behaviour due to cyclic loading conditions, in particular with reference to implementation in a commercial software package. These considerations motivate this research work, focused on the development of a model for RC beam-column elements able to consider degradation effects and the interaction between the actions under cyclic loading conditions. In order to develop a model for a general 3D discrete hinge element able to take into account the axial-shear-flexural interaction, it is necessary to provide an implementation which involves a predictor-corrector iterative scheme. Furthermore, a reliable constitutive model based on damage plasticity theory is formulated and implemented for its numerical validation. The aim of this research work is to provide the formulation of a numerical model which will allow implementation within a FE software package for nonlinear cyclic analysis of RC structural members. The developed model accounts for stiffness degradation effects and stiffness recovery under loading reversal.

  5. 75 FR 15648 - Approval and Promulgation of Air Quality Implementation Plans; Texas; Revisions to the Discrete...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-30

    ... Promulgation of Air Quality Implementation Plans; Texas; Revisions to the Discrete Emission Credit Banking and... Air Quality Rules, Subchapter H--Emissions Banking and Trading, Division 4--Discrete Emission Credit Banking and Trading, referred to elsewhere in this notice as the Discrete Emission Reduction Credit (DERC...

  6. 75 FR 27644 - Approval and Promulgation of Air Quality Implementation Plans; Texas; Revisions to the Discrete...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-18

    ... Promulgation of Air Quality Implementation Plans; Texas; Revisions to the Discrete Emission Credit Banking and... Rules, Subchapter H--Emissions Banking and Trading, Division 4--Discrete Emission Credit Banking and Trading, referred to elsewhere in this notice as the Discrete Emission Reduction Credit (DERC) Program...

  7. Computer-Aided Diagnosis System for Alzheimer's Disease Using Different Discrete Transform Techniques.

    PubMed

    Dessouky, Mohamed M; Elrashidy, Mohamed A; Taha, Taha E; Abdelkader, Hatem M

    2016-05-01

    The different discrete transform techniques, such as the discrete cosine transform (DCT), discrete sine transform (DST), discrete wavelet transform (DWT), and mel-scale frequency cepstral coefficients (MFCCs), are powerful feature extraction techniques. This article presents a proposed computer-aided diagnosis (CAD) system for extracting the most effective and significant features of Alzheimer's disease (AD) using these different discrete transform techniques and the MFCC technique. A linear support vector machine has been used as a classifier in this article. Experimental results conclude that the proposed CAD system using the MFCC technique for AD recognition greatly improves system performance with a small number of significant extracted features, as compared with the CAD systems based on DCT, DST, DWT, and hybrid combinations of the different transform techniques. © The Author(s) 2015.
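
    A minimal sketch of the DCT branch of such a CAD pipeline (Python; random arrays stand in for brain images, and the size of the retained low-frequency block is an illustrative choice, not the paper's):

```python
import numpy as np
from scipy.fft import dctn
from sklearn.svm import LinearSVC

def dct_features(image, n=8):
    """2D DCT of an image; keep the n x n low-frequency block as features."""
    return dctn(image, norm="ortho")[:n, :n].ravel()

rng = np.random.default_rng(0)
X = np.array([dct_features(rng.random((64, 64))) for _ in range(40)])
y = rng.integers(0, 2, size=40)            # illustrative AD / control labels
clf = LinearSVC(dual=False).fit(X, y)      # linear SVM classifier, as in the paper
print(clf.score(X, y))
```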

  8. A three-step approach for the derivation and validation of high-performing predictive models using an operational dataset: congestive heart failure readmission case study.

    PubMed

    AbdelRahman, Samir E; Zhang, Mingyuan; Bray, Bruce E; Kawamoto, Kensaku

    2014-05-27

    The aim of this study was to propose an analytical approach to develop high-performing predictive models for congestive heart failure (CHF) readmission using an operational dataset with incomplete records and changing data over time. Our analytical approach involves three steps: pre-processing, systematic model development, and risk factor analysis. For pre-processing, variables that were absent in >50% of records were removed. Moreover, the dataset was divided into a validation dataset and derivation datasets which were separated into three temporal subsets based on changes to the data over time. For systematic model development, using the different temporal datasets and the remaining explanatory variables, the models were developed by combining the use of various (i) statistical analyses to explore the relationships between the validation and the derivation datasets; (ii) adjustment methods for handling missing values; (iii) classifiers; (iv) feature selection methods; and (v) discretization methods. We then selected the best derivation dataset and the models with the highest predictive performance. For risk factor analysis, factors in the highest-performing predictive models were analyzed and ranked using (i) statistical analyses of the best derivation dataset, (ii) feature rankers, and (iii) a newly developed algorithm to categorize risk factors as being strong, regular, or weak. The analysis dataset consisted of 2,787 CHF hospitalizations at University of Utah Health Care from January 2003 to June 2013. In this study, we used the complete-case analysis and mean-based imputation adjustment methods; the wrapper subset feature selection method; and four ranking strategies based on information gain, gain ratio, symmetrical uncertainty, and wrapper subset feature evaluators. The best-performing models resulted from the use of a complete-case analysis derivation dataset combined with the Class-Attribute Contingency Coefficient discretization method and a voting classifier which averaged the results of multi-nominal logistic regression and voting feature intervals classifiers. Of 42 final model risk factors, discharge disposition, discretized age, and indicators of anemia were the most significant. This model achieved a c-statistic of 86.8%. The proposed three-step analytical approach enhanced predictive model performance for CHF readmissions. It could potentially be leveraged to improve predictive model performance in other areas of clinical medicine.

  9. Surface-from-gradients without discrete integrability enforcement: A Gaussian kernel approach.

    PubMed

    Ng, Heung-Sun; Wu, Tai-Pang; Tang, Chi-Keung

    2010-11-01

    Representative surface reconstruction algorithms taking a gradient field as input enforce the integrability constraint in a discrete manner. While enforcing integrability allows the subsequent integration to produce surface heights, existing algorithms have one or more of the following disadvantages: they can only handle dense per-pixel gradient fields, smooth out sharp features in a partially integrable field, or produce severe surface distortion in the results. In this paper, we present a method which does not enforce discrete integrability and reconstructs a 3D continuous surface from a gradient or a height field, or a combination of both, which can be dense or sparse. The key to our approach is the use of kernel basis functions, which transfer the continuous surface reconstruction problem into a high-dimensional space, where a closed-form solution exists. By using the Gaussian kernel, we can derive a straightforward implementation which is able to produce results better than traditional techniques. In general, an important advantage of our kernel-based method is that it does not suffer from discretization and finite approximation, both of which lead to surface distortion, which is typical of the Fourier or wavelet bases widely adopted by previous representative approaches. We perform comparisons with classical and recent methods on benchmark as well as challenging data sets to demonstrate that our method produces accurate surface reconstruction that preserves salient and sharp features. The source code and executable of the system are available for downloading.
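
    A minimal sketch of the kernel idea for the height-field case (Python/NumPy; the kernel width, regularization, and omission of gradient-field terms are illustrative simplifications of the paper's formulation):

```python
import numpy as np

def gaussian_kernel_surface(pts, h, sigma=0.1, lam=1e-8):
    """Fit f(x) = sum_i w_i exp(-|x - x_i|^2 / (2 sigma^2)) to sparse heights."""
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))
    # closed-form solution of the (ridge-regularized) linear system
    w = np.linalg.solve(K + lam * np.eye(len(pts)), h)
    def f(q):
        d2q = ((q[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2q / (2 * sigma**2)) @ w
    return f

rng = np.random.default_rng(0)
pts = rng.random((50, 2))                      # sparse sample locations
h = np.sin(3 * pts[:, 0]) * np.cos(3 * pts[:, 1])
f = gaussian_kernel_surface(pts, h)
print(f(pts[:3]), h[:3])                       # reproduces the samples
```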

  10. The electrical behavior of GaAs-insulator interfaces - A discrete energy interface state model

    NASA Technical Reports Server (NTRS)

    Kazior, T. E.; Lagowski, J.; Gatos, H. C.

    1983-01-01

    The relationship between the electrical behavior of GaAs Metal Insulator Semiconductor (MIS) structures and the high density discrete energy interface states (0.7 and 0.9 eV below the conduction band) was investigated utilizing photo- and thermal emission from the interface states in conjunction with capacitance measurements. It was found that all essential features of the anomalous behavior of GaAs MIS structures, such as the frequency dispersion and the C-V hysteresis, can be explained on the basis of nonequilibrium charging and discharging of the high density discrete energy interface states.

  11. An Embedded 3D Fracture Modeling Approach for Simulating Fracture-Dominated Fluid Flow and Heat Transfer in Geothermal Reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnston, Henry; Wang, Cong; Winterfeld, Philip

    An efficient modeling approach is described for incorporating arbitrary 3D, discrete fractures, such as hydraulic fractures or faults, into modeling fracture-dominated fluid flow and heat transfer in fractured geothermal reservoirs. This technique allows 3D discrete fractures to be discretized independently from the surrounding rock volume and inserted explicitly into a primary fracture/matrix grid generated without including the 3D discrete fractures beforehand. An effective computational algorithm is developed to discretize these 3D discrete fractures and construct local connections between the 3D fractures and the fracture/matrix grid blocks representing the surrounding rock volume. The constructed gridding information on the 3D fractures is then added to the primary grid. This embedded fracture modeling approach can be directly implemented into a developed geothermal reservoir simulator via the integral finite difference (IFD) method or with TOUGH2 technology. This embedded fracture modeling approach is very promising and computationally efficient for handling realistic 3D discrete fractures with complicated geometries, connections, and spatial distributions. Compared with other fracture modeling approaches, it avoids cumbersome 3D unstructured, local refining procedures, and increases computational efficiency by simplifying the Jacobian matrix size and sparsity, while keeping sufficient accuracy. Several numerical simulations are presented to demonstrate the utility and robustness of the proposed technique. Our numerical experiments show that this approach captures all the key patterns of fluid flow and heat transfer dominated by fractures in these cases. Thus, this approach is readily applicable to the simulation of fractured geothermal reservoirs with both artificial and natural fractures.

  12. Design, implementation and application of distributed order PI control.

    PubMed

    Zhou, Fengyu; Zhao, Yang; Li, Yan; Chen, YangQuan

    2013-05-01

    In this paper, a series of distributed order PI controller design methods are derived and applied to the robust control of wheeled service robots, which can tolerate more structural and parametric uncertainties than the corresponding fractional order PI control. A practical discrete incremental distributed order PI control strategy is proposed based on the discretization method and frequency criteria, which can be commonly used in many fields of fractional order systems, control and signal processing. Besides, an auto-tuning strategy and the genetic algorithm are applied to the distributed order PI control as well. A number of experimental results are provided to show the advantages and distinguishing features of the discussed methods. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
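
    For reference, the ordinary discrete incremental PI law that the distributed order strategy generalizes can be sketched as follows (Python; the gains, time step, and first-order plant are illustrative):

```python
class IncrementalPI:
    """Discrete incremental PI: u[k] = u[k-1] + Kp*(e[k]-e[k-1]) + Ki*T*e[k]."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.u, self.e_prev = 0.0, 0.0

    def step(self, error):
        self.u += self.kp * (error - self.e_prev) + self.ki * self.dt * error
        self.e_prev = error
        return self.u

# drive a first-order plant x' = -x + u toward setpoint 1 (illustrative)
pid, x, dt = IncrementalPI(kp=2.0, ki=1.0, dt=0.01), 0.0, 0.01
for _ in range(1000):
    u = pid.step(1.0 - x)
    x += dt * (-x + u)     # forward-Euler plant update
print(round(x, 3))         # approaches 1.0
```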

  13. State-and-transition simulation models: a framework for forecasting landscape change

    USGS Publications Warehouse

    Daniel, Colin; Frid, Leonardo; Sleeter, Benjamin M.; Fortin, Marie-Josée

    2016-01-01

    Summary: A wide range of spatially explicit simulation models have been developed to forecast landscape dynamics, including models for projecting changes in both vegetation and land use. While these models have generally been developed as separate applications, each with a separate purpose and audience, they share many common features. We present a general framework, called a state-and-transition simulation model (STSM), which captures a number of these common features, accompanied by a software product, called ST-Sim, to build and run such models. The STSM method divides a landscape into a set of discrete spatial units and simulates the discrete state of each cell forward as a discrete-time inhomogeneous stochastic process. The method differs from a spatially interacting Markov chain in several important ways, including the ability to add discrete counters such as age and time-since-transition as state variables, to specify one-step transition rates as either probabilities or target areas, and to represent multiple types of transitions between pairs of states. We demonstrate the STSM method using a model of land-use/land-cover (LULC) change for the state of Hawai'i, USA. Processes represented in this example include expansion/contraction of agricultural lands, urbanization, wildfire, shrub encroachment into grassland and harvest of tree plantations; the model also projects shifts in moisture zones due to climate change. Key model output includes projections of the future spatial and temporal distribution of LULC classes and moisture zones across the landscape over the next 50 years. State-and-transition simulation models can be applied to a wide range of landscapes, including questions of both land-use change and vegetation dynamics. Because the method is inherently stochastic, it is well suited for characterizing uncertainty in model projections. When combined with the ST-Sim software, STSMs offer a simple yet powerful means for developing a wide range of models of landscape dynamics.
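
    A toy version of the STSM mechanics (per-cell stochastic state transitions plus a time-since-transition counter) can be sketched in a few lines; the states and probabilities below are illustrative, not taken from the Hawai'i model:

```python
import numpy as np

STATES = ["grass", "shrub", "forest"]
# annual transition probabilities between states (rows sum to <= 1;
# the remainder is "stay in place"), illustrative numbers only
P = {"grass":  {"shrub": 0.05},
     "shrub":  {"forest": 0.03, "grass": 0.01},   # fire knocks shrub back
     "forest": {"grass": 0.005}}                  # stand-replacing fire

def step(state, age, rng):
    """One annual step for one cell: sample a transition, else age in place."""
    u, cum = rng.random(), 0.0
    for dst, p in P[state].items():
        cum += p
        if u < cum:
            return dst, 0        # transition resets time-since-transition
    return state, age + 1

rng = np.random.default_rng(0)
cells = [("grass", 0)] * 1000    # the landscape: 1000 cells, all grass
for year in range(50):
    cells = [step(s, a, rng) for s, a in cells]
print({s: sum(1 for c in cells if c[0] == s) for s in STATES})
```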

  14. Discrete Time-Crystalline Order in Cavity and Circuit QED Systems

    NASA Astrophysics Data System (ADS)

    Gong, Zongping; Hamazaki, Ryusuke; Ueda, Masahito

    2018-01-01

    Discrete time crystals are a recently proposed and experimentally observed out-of-equilibrium dynamical phase of Floquet systems, where the stroboscopic dynamics of a local observable repeats itself at an integer multiple of the driving period. We address this issue in a driven-dissipative setup, focusing on the modulated open Dicke model, which can be implemented by cavity or circuit QED systems. In the thermodynamic limit, we employ semiclassical approaches and find rich dynamical phases on top of the discrete time-crystalline order. In a deep quantum regime with few qubits, we find clear signatures of a transient discrete time-crystalline behavior, which is absent in the isolated counterpart. We establish a phenomenology of dissipative discrete time crystals by generalizing the Landau theory of phase transitions to Floquet open systems.

  15. Computational prediction of the refinement of oxide agglomerates in a physical conditioning process for molten aluminium alloy

    NASA Astrophysics Data System (ADS)

    Tong, M.; Jagarlapudi, S. C.; Patel, J. B.; Stone, I. C.; Fan, Z.; Browne, D. J.

    2015-06-01

    Physically conditioning molten scrap aluminium alloys using high shear processing (HSP) was recently found to be a promising technology for purification of contaminated alloys. HSP refines the solid oxide agglomerates in molten alloys, so that they can act as sites for the nucleation of Fe-rich intermetallic phases which can subsequently be removed by the downstream de-drossing process. In this paper, a computational model for predicting the evolution of the size of oxide clusters during HSP is presented. We used CFD to predict the macroscopic flow features of the melt, and the resultant field predictions of temperature and melt shear rate were transferred to a population balance model (PBM) as its key inputs. The PBM is a macroscopic model that formulates the microscopic agglomeration and breakage of a population of a dispersed phase. Although it has been widely used to study conventional deoxidation of liquid metal, this is the first time that PBM has been used to simulate the melt conditioning process within a rotor/stator HSP device. We employed a method which discretizes the continuous profile of the size of the dispersed phase into a collection of discrete size bins, to solve the governing population balance equation for the size of agglomerates. A finite volume method was used to solve the continuity equation, the energy equation and the momentum equation. The overall computation was implemented mainly using the FLUENT module of ANSYS. The simulations showed that there is a relatively high melt shear rate between the stator and the sweeping tips of the rotor blades. This high shear rate leads directly to significant fragmentation of the initially large oxide aggregates. Because the process of agglomeration is significantly slower than the breakage processes at the beginning of HSP, the mean size of oxide clusters decreases very rapidly. As the process of agglomeration gradually balances the process of breakage, the mean size of oxide clusters converges to a steady value. The model enables formulation of the quantitative relationship between the macroscopic flow features of liquid metal and the change of size of dispersed oxide clusters during HSP. It predicted the variation in size of the dispersed phase with operational parameters (including the geometry and, particularly, the speed of the rotor), which is of direct use to experimentalists optimising the design of the HSP device and its implementation.

  16. Feature extraction using extrema sampling of discrete derivatives for spike sorting in implantable upper-limb neural prostheses.

    PubMed

    Zamani, Majid; Demosthenous, Andreas

    2014-07-01

    Next generation neural interfaces for upper-limb (and other) prostheses aim to develop implantable interfaces for one or more nerves, each interface having many neural signal channels that work reliably in the stump without harming the nerves. To achieve real-time multi-channel processing it is important to integrate spike sorting on-chip to overcome limitations in transmission bandwidth. This requires computationally efficient algorithms for feature extraction and clustering suitable for low-power hardware implementation. This paper describes a new feature extraction method for real-time spike sorting based on extrema analysis (namely positive peaks and negative peaks) of spike shapes and their discrete derivatives at different frequency bands. Employing simulation across different datasets, the accuracy and computational complexity of the proposed method are assessed and compared with other methods. The average classification accuracy of the proposed method in conjunction with online sorting (O-Sort) is 91.6%, outperforming all the other methods tested with the O-Sort clustering algorithm. The proposed method offers a better tradeoff between classification error and computational complexity, making it a particularly strong choice for on-chip spike sorting.
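
    The core of the feature extractor lends itself to a compact illustration. The sketch below forms a feature vector from the positive and negative peaks of a spike waveform and of its successive discrete derivatives; using repeated `np.diff` passes in place of the paper's band-specific derivatives, and the synthetic spike shape, are simplifying assumptions.

```python
import numpy as np

def extrema_features(spike, levels=3):
    """Feature vector from the positive and negative peaks of a spike
    waveform and of its successive discrete derivatives (a simplified
    stand-in for the paper's band-specific derivatives)."""
    feats, x = [], np.asarray(spike, dtype=float)
    for _ in range(levels):
        feats.extend([x.max(), x.min()])  # positive and negative extrema
        x = np.diff(x)                    # next discrete derivative
    return np.array(feats)

t = np.linspace(0.0, 1.0, 64)             # synthetic biphasic spike
spike = np.exp(-((t - 0.3) / 0.05) ** 2) - 0.5 * np.exp(-((t - 0.5) / 0.1) ** 2)
print(extrema_features(spike))            # 2 features per derivative level
```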

  17. Models for twistable elastic polymers in Brownian dynamics, and their implementation for LAMMPS.

    PubMed

    Brackley, C A; Morozov, A N; Marenduzzo, D

    2014-04-07

    An elastic rod model for semi-flexible polymers is presented. Theory for a continuum rod is reviewed, and it is shown that a popular discretised model used in numerical simulations gives the correct continuum limit. Correlation functions relating to both bending and twisting of the rod are derived for both continuous and discrete cases, and results are compared with numerical simulations. Finally, two possible implementations of the discretised model in the multi-purpose molecular dynamics software package LAMMPS are described.

  18. Gaussian quadrature and lattice discretization of the Fermi-Dirac distribution for graphene.

    PubMed

    Oettinger, D; Mendoza, M; Herrmann, H J

    2013-07-01

    We construct a lattice kinetic scheme to study electronic flow in graphene. For this purpose, we first derive a basis of orthogonal polynomials, using as the weight function the ultrarelativistic Fermi-Dirac distribution at rest. Later, we use these polynomials to expand the respective distribution in a moving frame, for both cases, undoped and doped graphene. In order to discretize the Boltzmann equation and make feasible the numerical implementation, we reduce the number of discrete points in momentum space to 18 by applying a Gaussian quadrature, finding that the family of representative wave (2+1)-vectors, which satisfies the quadrature, reconstructs a honeycomb lattice. The procedure and discrete model are validated by solving the Riemann problem, finding excellent agreement with other numerical models. In addition, we have extended the Riemann problem to the case of different dopings, finding that by increasing the chemical potential the electronic fluid behaves as if it increases its effective viscosity.

  19. Discrete Adjoint Sensitivity Analysis of Hybrid Dynamical Systems With Switching

    DOE PAGES

    Zhang, Hong; Abhyankar, Shrirang; Constantinescu, Emil; ...

    2017-01-24

    Sensitivity analysis is an important tool for describing power system dynamic behavior in response to parameter variations. It is a central component in preventive and corrective control applications. The existing approaches for sensitivity calculations, namely, finite-difference and forward sensitivity analysis, require a computational effort that increases linearly with the number of sensitivity parameters. In this paper, we investigate, implement, and test a discrete adjoint sensitivity approach whose computational effort is effectively independent of the number of sensitivity parameters. The proposed approach is highly efficient for calculating sensitivities of larger systems and is consistent, within machine precision, with the function whose sensitivity we are seeking. This is an essential feature for use in optimization applications. Moreover, our approach includes a consistent treatment of systems with switching, such as dc exciters, by deriving and implementing the adjoint jump conditions that arise from state-dependent and time-dependent switchings. The accuracy and the computational efficiency of the proposed approach are demonstrated in comparison with the forward sensitivity analysis approach. In conclusion, this paper focuses primarily on the power system dynamics, but the approach is general and can be applied to hybrid dynamical systems in a broader range of fields.
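
    As a concrete illustration of why the adjoint cost is independent of the number of parameters, the sketch below computes dJ/dp for an explicit-Euler discretization of a scalar ODE with a single backward sweep. The test problem f(x, p) = -p*x and the terminal cost J = x_N are illustrative assumptions, not the power-system model of the paper.

```python
import numpy as np

# Discrete adjoint for an explicit-Euler discretization of dx/dt = f(x, p)
# with terminal cost J = x_N: one forward sweep stores the states, one
# backward sweep yields dJ/dp, regardless of how many parameters p holds.

def f(x, p):      return -p * x
def df_dx(x, p):  return -p
def df_dp(x, p):  return -x

def forward(x0, p, h, N):
    xs = [x0]
    for _ in range(N):
        xs.append(xs[-1] + h * f(xs[-1], p))
    return xs

def adjoint_sensitivity(xs, p, h):
    lam, dJdp = 1.0, 0.0               # lambda_N = dJ/dx_N for J = x_N
    for x in reversed(xs[:-1]):        # visit x_{N-1} down to x_0
        dJdp += lam * h * df_dp(x, p)  # accumulate parameter sensitivity
        lam *= 1.0 + h * df_dx(x, p)   # propagate the adjoint backwards
    return dJdp

x0, p, h, N = 1.0, 0.5, 0.01, 100
xs = forward(x0, p, h, N)
print(adjoint_sensitivity(xs, p, h))               # adjoint result
print(x0 * N * (1 - h * p) ** (N - 1) * (-h))      # analytic check
```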

  20. Discrete Adjoint Sensitivity Analysis of Hybrid Dynamical Systems With Switching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hong; Abhyankar, Shrirang; Constantinescu, Emil

    Sensitivity analysis is an important tool for describing power system dynamic behavior in response to parameter variations. It is a central component in preventive and corrective control applications. The existing approaches for sensitivity calculations, namely, finite-difference and forward sensitivity analysis, require a computational effort that increases linearly with the number of sensitivity parameters. In this paper, we investigate, implement, and test a discrete adjoint sensitivity approach whose computational effort is effectively independent of the number of sensitivity parameters. The proposed approach is highly efficient for calculating sensitivities of larger systems and is consistent, within machine precision, with the function whose sensitivity we are seeking. This is an essential feature for use in optimization applications. Moreover, our approach includes a consistent treatment of systems with switching, such as dc exciters, by deriving and implementing the adjoint jump conditions that arise from state-dependent and time-dependent switchings. The accuracy and the computational efficiency of the proposed approach are demonstrated in comparison with the forward sensitivity analysis approach. In conclusion, this paper focuses primarily on the power system dynamics, but the approach is general and can be applied to hybrid dynamical systems in a broader range of fields.

  1. Coupling discrete and continuum concentration particle models for multiscale and hybrid molecular-continuum simulations

    NASA Astrophysics Data System (ADS)

    Petsev, Nikolai D.; Leal, L. Gary; Shell, M. Scott

    2017-12-01

    Hybrid molecular-continuum simulation techniques afford a number of advantages for problems in the rapidly burgeoning area of nanoscale engineering and technology, though they are typically quite complex to implement and limited to single-component fluid systems. We describe an approach for modeling multicomponent hydrodynamic problems spanning multiple length scales when using particle-based descriptions for both the finely resolved (e.g., molecular dynamics) and coarse-grained (e.g., continuum) subregions within an overall simulation domain. This technique is based on the multiscale methodology previously developed for mesoscale binary fluids [N. D. Petsev, L. G. Leal, and M. S. Shell, J. Chem. Phys. 144, 084115 (2016)], simulated using a particle-based continuum method known as smoothed dissipative particle dynamics. An important application of this approach is the ability to perform coupled molecular dynamics (MD) and continuum modeling of molecularly miscible binary mixtures. In order to validate this technique, we investigate multicomponent hybrid MD-continuum simulations at equilibrium, as well as non-equilibrium cases featuring concentration gradients.

  2. Mathematical algorithm development and parametric studies with the GEOFRAC three-dimensional stochastic model of natural rock fracture systems

    NASA Astrophysics Data System (ADS)

    Ivanova, Violeta M.; Sousa, Rita; Murrihy, Brian; Einstein, Herbert H.

    2014-06-01

    This paper presents results from research conducted at MIT during 2010-2012 on modeling of natural rock fracture systems with the GEOFRAC three-dimensional stochastic model. Following a background summary of discrete fracture network models and a brief introduction of GEOFRAC, the paper provides a thorough description of the newly developed mathematical and computer algorithms for fracture intensity, aperture, and intersection representation, which have been implemented in MATLAB. The new methods optimize, in particular, the representation of fracture intensity in terms of cumulative fracture area per unit volume, P32, via the Poisson-Voronoi Tessellation of planes into polygonal fracture shapes. In addition, fracture apertures now can be represented probabilistically or deterministically whereas the newly implemented intersection algorithms allow for computing discrete pathways of interconnected fractures. In conclusion, results from a statistical parametric study, which was conducted with the enhanced GEOFRAC model and the new MATLAB-based Monte Carlo simulation program FRACSIM, demonstrate how fracture intensity, size, and orientations influence fracture connectivity.
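
    The intensity measure P32 used above has a direct Monte Carlo reading: cumulative fracture area divided by sampled volume. The sketch below estimates P32 for randomly placed disk fractures in a unit cube; the disk shape, lognormal radii, and Poisson count are illustrative assumptions, whereas GEOFRAC itself derives polygonal fractures from a Poisson-Voronoi tessellation of planes.

```python
import numpy as np

# Monte Carlo estimate of P32 (cumulative fracture area per unit volume)
# for a Poisson number of disk-shaped fractures with lognormal radii in a
# unit cube. Truncation of disks at the cube faces is ignored here.
rng = np.random.default_rng(0)
volume = 1.0
n_fracs = rng.poisson(200)                     # random fracture count
radii = rng.lognormal(mean=-2.5, sigma=0.5, size=n_fracs)
p32 = np.sum(np.pi * radii ** 2) / volume      # total area / volume
print(f"{n_fracs} fractures, P32 ~ {p32:.3f} m^2/m^3")
```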

  3. Fourth-order partial differential equation noise removal on welding images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halim, Suhaila Abd; Ibrahim, Arsmah; Sulong, Tuan Nurul Norazura Tuan

    2015-10-22

    Partial differential equations (PDEs) have become an important topic in mathematics and are widely used in various fields; among other uses, they can be applied to image denoising in image analysis. In this paper, a fourth-order PDE is discussed and implemented as a denoising method on digital images. The fourth-order PDE is solved computationally using a finite difference approach and then applied to a set of digital radiographic images with welding defects. The performance of the discretized model is evaluated using the Peak Signal to Noise Ratio (PSNR). Simulations are carried out on the discretized model at different levels of Gaussian noise in order to obtain the maximum PSNR value. The convergence criterion chosen to determine the number of required iterations is based on the highest PSNR value. Results obtained show that the fourth-order PDE model produces promising results as an image denoising tool compared with the median filter.
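
    For readers wanting to experiment, the following is a minimal sketch of a fourth-order PDE denoiser in the spirit of the You-Kaveh model, u_t = -Lap(c(|Lap u|) Lap u), discretized with explicit finite differences, together with the PSNR metric used in the paper. The time step, conductance constant, iteration count, periodic boundary handling, and synthetic test image are illustrative assumptions.

```python
import numpy as np

def laplacian(u):
    """Five-point Laplacian with periodic boundaries (a simplification)."""
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)

def denoise(u, n_iter=50, dt=0.05, k=10.0):
    """Explicit time stepping of u_t = -Lap( c(|Lap u|) * Lap u )."""
    for _ in range(n_iter):
        lap = laplacian(u)
        c = 1.0 / (1.0 + (np.abs(lap) / k) ** 2)  # edge-preserving diffusivity
        u = u - dt * laplacian(c * lap)
    return u

def psnr(reference, image, peak=255.0):
    mse = np.mean((reference - image) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

clean = np.tile(np.linspace(0.0, 255.0, 64), (64, 1))  # synthetic ramp image
noisy = clean + np.random.default_rng(1).normal(0.0, 15.0, clean.shape)
print(psnr(clean, noisy), psnr(clean, denoise(noisy)))
```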

  4. Implementing system simulation of C3 systems using autonomous objects

    NASA Technical Reports Server (NTRS)

    Rogers, Ralph V.

    1987-01-01

    The basis of all conflict recognition in simulation is a common frame of reference. Synchronous discrete-event simulation relies on fixed points in time as the basic frame of reference. Asynchronous discrete-event simulation relies on fixed points in the model space as the basic frame of reference. Neither approach provides sufficient support for autonomous objects. The use of a spatial template as a frame of reference is proposed to address these insufficiencies. The concept of a spatial template is defined and an implementation approach offered. Discussed are the uses of this approach to analyze the integration of sensor data associated with Command, Control, and Communication systems.

  5. Input-output identification of controlled discrete manufacturing systems

    NASA Astrophysics Data System (ADS)

    Estrada-Vargas, Ana Paula; López-Mellado, Ernesto; Lesage, Jean-Jacques

    2014-03-01

    The automated construction of discrete event models from observations of external system's behaviour is addressed. This problem, often referred to as system identification, allows obtaining models of ill-known (or even unknown) systems. In this article, an identification method for discrete event systems (DESs) controlled by a programmable logic controller is presented. The method allows processing a large quantity of observed long sequences of input/output signals generated by the controller and yields an interpreted Petri net model describing the closed-loop behaviour of the automated DESs. The proposed technique allows the identification of actual complex systems because it is sufficiently efficient and well adapted to cope with both the technological characteristics of industrial controllers and data collection requirements. Based on polynomial-time algorithms, the method is implemented as an efficient software tool which constructs and draws the model automatically; an overview of this tool is given through a case study dealing with an automated manufacturing system.

  6. A strategic planning approach for operational-environmental tradeoff assessments in terminal areas

    NASA Astrophysics Data System (ADS)

    Jimenez, Hernando

    This thesis proposes the use of well-established statistical analysis techniques, leveraging recent developments in interactive data visualization capabilities, to quantitatively characterize the interactions, sensitivities, and tradeoffs prevalent in the complex behavior of airport operational and environmental performance. Within the strategic airport planning process, this approach is used in the assessment of airport performance under current/reference conditions, as well as in the evaluation of terminal area solutions under projected demand conditions. More specifically, customized designs of experiments are utilized to guide the intelligent selection and definition of modeling and simulation runs that will yield greater understanding, insight, and information about the inherent systemic complexity of a terminal area, with minimal computational expense. For the research documented in this thesis, a modeling and simulation environment was created featuring three primary components. First, a generator of schedules of operations, based primarily on previous work on aviation demand characterization, whereby growth factors and scheduling adjustment algorithms are applied on appropriate baseline schedules so as to generate notional operational sets representative of consistent future demand conditions. The second component pertains to the modeling and simulation of aircraft operations, defined by a schedule of operations, on the airport surface and within its terminal airspace. This component is a discrete event simulator for multiple queuing models that captures the operational architecture of the entire terminal area along with all the necessary operational logic pertaining to simulated Air Traffic Control (ATC) functions, rules, and standard practices. The third and final component comprises legacy aircraft performance, emissions and dispersion, and noise exposure modeling tools that use the simulation history of aircraft movements to generate estimates of fuel burn, emissions, and noise. The implementation of the proposed approach for the assessment of terminal area solutions incorporates the use of discrete response surface equations, and eliminates the use of quadratic terms that have no practical significance in this context. Rather, attention is entirely placed on the main effects of different terminal area solutions, namely additional airport infrastructure, operational improvements, and advanced aircraft concepts, modeled as discrete independent variables for the regression model. Results reveal that an additional runway and a new international terminal, as well as reduced aircraft separation, have a major effect on all operational metrics of interest. In particular, the additional runway has a dominant effect for departure delay metrics and gate hold periods, with moderate interactions with respect to separation reduction. On the other hand, operational metrics for arrivals are co-dependent on additional infrastructure and separation reduction, featuring marginal improvements whenever these two solutions are implemented in isolation, but featuring a dramatic compounding effect when implemented in combination. The magnitude of these main effects for departures and of the interaction between these solutions for arrivals is confirmed through appropriate statistical significance testing. Finally, the inclusion of advanced aircraft concepts is shown to be most beneficial for airborne arrival operations and to a lesser extent for arrival ground movements.
More specifically, advanced aircraft concepts were found to be primarily responsible for reductions in volatile organic compounds, unburned hydrocarbons, and particulate matter in this flight regime, but featured relevant interactions with separation reduction and additional airport infrastructure. To address the selection of scenarios for strategic airport planning, a technique for risk-based scenario construction, evaluation, and selection is proposed, incorporating n-dimensional dependence tree probability approximations into a morphological analysis approach. This approach to scenario construction and downselection is a distinct and novel contribution to the scenario planning field as it provides a mathematically and explicitly testable definition for an H parameter, contrasting with the qualitative alternatives in the current state of the art, which can be used in morphological analysis for scenario construction and downselection. By demonstrating that dependence tree probability product approximations are an adequate aggregation function, probability can be used for scenario construction and downselection without any mathematical or methodological restriction on the resolution of the probability scale or the number of morphological alternatives that have previously plagued probabilization and scenario downselection approaches. In addition, this approach requires expert input elicitation that is comparable to, or less than, current state-of-the-art practices. (Abstract shortened by UMI.)

  7. Hydraulic Fracturing and Production Optimization in Eagle Ford Shale Using Coupled Geomechanics and Fluid Flow Model

    NASA Astrophysics Data System (ADS)

    Suppachoknirun, Theerapat; Tutuncu, Azra N.

    2017-12-01

    With increasing production from shale gas and tight oil reservoirs, horizontal drilling and multistage hydraulic fracturing processes have become a routine procedure in unconventional field development efforts. Natural fractures play a critical role in hydraulic fracture growth, subsequently affecting stimulated reservoir volume and the production efficiency. Moreover, the existing fractures can also contribute to the pressure-dependent fluid leak-off during the operations. Hence, a reliable identification of the discrete fracture network covering the zone of interest, obtained prior to the hydraulic fracturing design, needs to be incorporated into the hydraulic fracturing and reservoir simulations for realistic representation of the in situ reservoir conditions. In this research study, an integrated 3-D fracture and fluid flow model has been developed using a new approach to simulate the fluid flow and deliver reliable production forecasting in naturally fractured and hydraulically stimulated tight reservoirs. The model was created with three key modules. A complex 3-D discrete fracture network model introduces realistic natural fracture geometry with the associated fractured reservoir characteristics. A hydraulic fracturing model is created utilizing the discrete fracture network for simulation of the hydraulic fracture and flow in the complex discrete fracture network. Finally, a reservoir model with the production grid system is used, allowing the user to efficiently perform the fluid flow simulation in tight formations with complex fracture networks. The complex discrete natural fracture model, the integrated discrete fracture model for the hydraulic fracturing, the fluid flow model, and the input dataset have been validated against microseismic fracture mapping and commingled production data obtained from a well pad with three horizontal production wells located in the Eagle Ford oil window in south Texas. Two other fracturing geometries were also evaluated to optimize the cumulative production of the well pad and of the three wells individually. Significant reduction in the production rate at early production times is anticipated in tight reservoirs regardless of the fracturing technique implemented. The simulations conducted using the alternating fracturing technique led to more oil production than when zipper fracturing was used, for a 20-year production period. Yet, due to the decline experienced, the differences in cumulative production become smaller; moreover, alternating fracturing is difficult to implement in practice, while field application of the zipper fracturing technique is more practical and widely used.

  8. Decoherence and discrete symmetries in deformed relativistic kinematics

    NASA Astrophysics Data System (ADS)

    Arzano, Michele

    2018-01-01

    Models of deformed Poincaré symmetries based on group valued momenta have long been studied as effective modifications of relativistic kinematics possibly capturing quantum gravity effects. In this contribution we show how they naturally lead to a generalized quantum time evolution of the type proposed to model fundamental decoherence for quantum systems in the presence of an evaporating black hole. The same structures which determine such generalized evolution also lead to a modification of the action of discrete symmetries and of the CPT operator. These features can in principle be used to put phenomenological constraints on models of deformed relativistic symmetries using precision measurements of neutral kaons.

  9. General method to find the attractors of discrete dynamic models of biological systems.

    PubMed

    Gan, Xiao; Albert, Réka

    2018-04-01

    Analyzing the long-term behaviors (attractors) of dynamic models of biological networks can provide valuable insight. We propose a general method that can find the attractors of multilevel discrete dynamical systems by extending a method that finds the attractors of a Boolean network model. The previous method is based on finding stable motifs, subgraphs whose nodes' states can stabilize on their own. We extend the framework from binary states to any finite discrete levels by creating a virtual node for each level of a multilevel node, and describing each virtual node with a quasi-Boolean function. We then create an expanded representation of the multilevel network, find multilevel stable motifs and oscillating motifs, and identify attractors by successive network reduction. In this way, we find both fixed point attractors and complex attractors. We implemented an algorithm, which we test and validate on representative synthetic networks and on published multilevel models of biological networks. Despite its primary motivation to analyze biological networks, our motif-based method is general and can be applied to any finite discrete dynamical system.
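
    To fix ideas, the sketch below finds all attractors of a tiny multilevel discrete dynamical system by exhaustively following synchronous-update trajectories until they cycle. This brute-force baseline only illustrates what an attractor is in this setting; the motif-based method of the paper is precisely what replaces such enumeration for large networks. The three-node update rules are an illustrative assumption.

```python
from itertools import product

# Brute-force attractor search for a small multilevel discrete dynamical
# system under synchronous update: node 1 and node 3 are binary, node 2
# takes three levels. Every state is iterated until its trajectory cycles.
LEVELS = (2, 3, 2)

def update(state):
    x, y, z = state
    return (int(y > 0), min(y + 1, 2) if z else max(y - 1, 0), x)

def attractors():
    found = set()
    for s in product(*(range(l) for l in LEVELS)):
        seen = {}
        while s not in seen:            # walk until a state repeats
            seen[s] = len(seen)
            s = update(s)
        start = seen[s]                 # first index of the cycle
        cycle = tuple(sorted(t for t, i in seen.items() if i >= start))
        found.add(cycle)                # canonical form deduplicates
    return found

for a in attractors():
    kind = "fixed point" if len(a) == 1 else f"cycle of length {len(a)}"
    print(kind, a)
```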

  10. General method to find the attractors of discrete dynamic models of biological systems

    NASA Astrophysics Data System (ADS)

    Gan, Xiao; Albert, Réka

    2018-04-01

    Analyzing the long-term behaviors (attractors) of dynamic models of biological networks can provide valuable insight. We propose a general method that can find the attractors of multilevel discrete dynamical systems by extending a method that finds the attractors of a Boolean network model. The previous method is based on finding stable motifs, subgraphs whose nodes' states can stabilize on their own. We extend the framework from binary states to any finite discrete levels by creating a virtual node for each level of a multilevel node, and describing each virtual node with a quasi-Boolean function. We then create an expanded representation of the multilevel network, find multilevel stable motifs and oscillating motifs, and identify attractors by successive network reduction. In this way, we find both fixed point attractors and complex attractors. We implemented an algorithm, which we test and validate on representative synthetic networks and on published multilevel models of biological networks. Despite its primary motivation to analyze biological networks, our motif-based method is general and can be applied to any finite discrete dynamical system.

  11. Estimating standard errors in feature network models.

    PubMed

    Frank, Laurence E; Heiser, Willem J

    2007-05-01

    Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.
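
    The regression view of the model suggests a simple empirical-standard-error recipe. The sketch below fits a positivity-restricted regression with non-negative least squares and obtains empirical standard errors by bootstrapping the observations; the design matrix, true parameters, and noise level are illustrative assumptions, and the paper's theoretical standard errors are derived differently.

```python
import numpy as np
from scipy.optimize import nnls

# Positivity-restricted regression of (dis)similarities on a known binary
# feature incidence matrix, with bootstrap standard errors for the
# non-negative parameter estimates.
rng = np.random.default_rng(2)
n_pairs, n_feats = 60, 5
X = rng.integers(0, 2, size=(n_pairs, n_feats)).astype(float)
beta_true = np.array([0.5, 1.0, 0.0, 2.0, 0.3])
y = X @ beta_true + rng.normal(0.0, 0.2, n_pairs)

beta_hat, _ = nnls(X, y)                 # constrained point estimate

boot = np.empty((500, n_feats))
for b in range(500):                     # resample observation pairs
    idx = rng.integers(0, n_pairs, n_pairs)
    boot[b], _ = nnls(X[idx], y[idx])

print("estimates:    ", beta_hat.round(2))
print("bootstrap SEs:", boot.std(axis=0).round(3))
```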

  12. Reliable gain-scheduled control of discrete-time systems and its application to CSTR model

    NASA Astrophysics Data System (ADS)

    Sakthivel, R.; Selvi, S.; Mathiyalagan, K.; Shi, Y.

    2016-10-01

    This paper is focused on reliable gain-scheduled controller design for a class of discrete-time systems with randomly occurring nonlinearities and actuator fault. Further, the nonlinearity in the system model is assumed to occur randomly according to a Bernoulli distribution with measurable time-varying probability in real time. The main purpose of this paper is to design a gain-scheduled controller by implementing a probability-dependent Lyapunov function and linear matrix inequality (LMI) approach such that the closed-loop discrete-time system is stochastically stable for all admissible randomly occurring nonlinearities. The existence conditions for the reliable controller are formulated in terms of LMI constraints. Finally, the proposed reliable gain-scheduled control scheme is applied to a continuously stirred tank reactor model to demonstrate the effectiveness and applicability of the proposed design technique.
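
    As a minimal illustration of the LMI machinery involved, the sketch below checks asymptotic stability of a discrete-time system x_{k+1} = A x_k by searching for P > 0 with A'PA - P < 0 using CVXPY. This is only the basic Lyapunov feasibility building block, not the paper's probability-dependent gain-scheduled synthesis; the matrix A is an illustrative assumption.

```python
import cvxpy as cp
import numpy as np

# Discrete-time Lyapunov LMI feasibility: the system x_{k+1} = A x_k is
# asymptotically stable iff some symmetric P > 0 satisfies A'PA - P < 0.
A = np.array([[0.5, 0.2],
              [0.0, 0.8]])

P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6                                  # enforce strict inequalities
constraints = [P >> eps * np.eye(2),
               A.T @ P @ A - P << -eps * np.eye(2)]

prob = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
prob.solve()
print(prob.status)
print(P.value)
```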

  13. A discrete mesoscopic particle model of the mechanics of a multi-constituent arterial wall.

    PubMed

    Witthoft, Alexandra; Yazdani, Alireza; Peng, Zhangli; Bellini, Chiara; Humphrey, Jay D; Karniadakis, George Em

    2016-01-01

    Blood vessels have unique properties that allow them to function together within a complex, self-regulating network. The contractile capacity of the wall combined with complex mechanical properties of the extracellular matrix enables vessels to adapt to changes in haemodynamic loading. Homogenized phenomenological and multi-constituent, structurally motivated continuum models have successfully captured these mechanical properties, but truly describing intricate microstructural details of the arterial wall may require a discrete framework. Such an approach would facilitate modelling interactions between or the separation of layers of the wall and would offer the advantage of seamless integration with discrete models of complex blood flow. We present a discrete particle model of a multi-constituent, nonlinearly elastic, anisotropic arterial wall, which we develop using the dissipative particle dynamics method. Mimicking basic features of the microstructure of the arterial wall, the model comprises an elastin matrix having isotropic nonlinear elastic properties plus anisotropic fibre reinforcement that represents the stiffer collagen fibres of the wall. These collagen fibres are distributed evenly and are oriented in four directions, symmetric to the vessel axis. Experimental results from biaxial mechanical tests of an artery are used for model validation, and a delamination test is simulated to demonstrate the new capabilities of the model. © 2016 The Author(s).

  14. Path integral measure and triangulation independence in discrete gravity

    NASA Astrophysics Data System (ADS)

    Dittrich, Bianca; Steinhaus, Sebastian

    2012-02-01

    A path integral measure for gravity should also preserve the fundamental symmetry of general relativity, which is diffeomorphism symmetry. In previous work, we argued that a successful implementation of this symmetry into discrete quantum gravity models would imply discretization independence. We therefore consider the requirement of triangulation independence for the measure in (linearized) Regge calculus, which is a discrete model for quantum gravity, appearing in the semi-classical limit of spin foam models. To this end we develop a technique to evaluate the linearized Regge action associated to Pachner moves in 3D and 4D and show that it has a simple, factorized structure. We succeed in finding a local measure for 3D (linearized) Regge calculus that leads to triangulation independence. This measure factor coincides with the asymptotics of the Ponzano Regge Model, a 3D spin foam model for gravity. We furthermore discuss to which extent one can find a triangulation independent measure for 4D Regge calculus and how such a measure would be related to a quantum model for 4D flat space. To this end, we also determine the dependence of classical Regge calculus on the choice of triangulation in 3D and 4D.

  15. Content Based Image Retrieval by Using Color Descriptor and Discrete Wavelet Transform.

    PubMed

    Ashraf, Rehan; Ahmed, Mudassar; Jabbar, Sohail; Khalid, Shehzad; Ahmad, Awais; Din, Sadia; Jeon, Gwangil

    2018-01-25

    Due to recent developments in technology, the complexity of multimedia content has increased significantly, and the retrieval of similar multimedia content remains an open research problem. Content-Based Image Retrieval (CBIR) provides a framework for image search in which low-level visual features are commonly used to retrieve images from an image database. The basic requirement in any image retrieval process is to rank the images by close visual similarity. Color, shape, and texture are examples of low-level image features, and features play a significant role in image processing. The representation of an image as a vector of such features is known as a feature vector, and feature extraction techniques are applied to obtain features useful for classifying and recognizing images. Because features define the behavior of an image, they determine storage requirements, classification efficiency, and computation time. In this paper, we discuss various types of features and feature extraction techniques, and explain in which scenarios a particular feature extraction technique performs better. The effectiveness of a CBIR approach depends fundamentally on feature extraction: in image processing tasks such as object recognition and image retrieval, the feature descriptor is among the most essential steps. The main idea of CBIR is to search a dataset for images related to a query image using distance metrics. The proposed method performs image retrieval based on YCbCr color combined with a Canny edge histogram and the discrete wavelet transform; the combination of the edge histogram and the discrete wavelet transform increases the performance of the image retrieval framework for content-based search. The performance of different wavelets is additionally compared to find the suitability of a specific wavelet function for image retrieval. The proposed algorithm is trained and tested on the Wang image database; for the retrieval step, an Artificial Neural Network (ANN) is used and applied to this standard dataset in the CBIR domain. The performance of the proposed descriptors is assessed by computing both precision and recall values and compared with other proposed methods, demonstrating that the efficiency and effectiveness of the proposed approach outperform existing research in terms of average precision and recall.
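
    A rough sketch of the kind of descriptor described above: a YCbCr color histogram, an edge summary from a Canny edge map, and first- and second-order statistics of a 2-D discrete wavelet transform. The bin counts, the 'db1' wavelet, the reduction of the edge histogram to a single edge-density value, and the feature layout are illustrative assumptions rather than the paper's exact descriptor.

```python
import cv2
import numpy as np
import pywt

def cbir_features(bgr_image):
    """Concatenated YCbCr histograms, edge density, and DWT statistics."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    color_hist = [np.histogram(ycrcb[..., c], bins=16, range=(0, 256))[0]
                  for c in range(3)]

    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    edge_density = np.array([np.count_nonzero(edges) / edges.size])

    cA, (cH, cV, cD) = pywt.dwt2(gray.astype(float), 'db1')
    dwt_stats = np.array([s for band in (cA, cH, cV, cD)
                          for s in (band.mean(), band.std())])

    return np.concatenate([*color_hist, edge_density, dwt_stats])

img = np.random.default_rng(6).integers(0, 256, (64, 64, 3), dtype=np.uint8)
print(cbir_features(img).shape)   # one fixed-length descriptor per image
```

    Descriptors like this one would then be compared with a distance metric, or fed to a classifier such as the ANN mentioned in the abstract.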

  16. A hybrid HDRF model of GOMS and SAIL: GOSAIL

    NASA Astrophysics Data System (ADS)

    Dou, B.; Wu, S.; Wen, J.

    2016-12-01

    Understanding surface reflectance anisotropy, which describes the property of the land surface to reflect solar radiation directionally, is key to interpreting land surface features from remotely sensed information. Most reflectance anisotropy models assume the natural surface is illuminated only by direct solar radiation, whereas diffuse skylight becomes dominant under overcast sky conditions and over highly rugged terrain. Correcting the effect of diffuse skylight on the reflectance anisotropy, to obtain the intrinsic directional reflectance of the land surface, is highly desirable for remote sensing applications. This paper develops a hybrid HDRF model of GOMS and SAIL, called GOSAIL, for discrete canopies. The accurate area proportions of the four scene components are calculated by the GOMS model, and the spectral signatures of the scene components are provided by the SAIL model. Both the single-scattering contribution and the multiple-scattering contributions within and between the canopy and background, under clear and diffuse illumination conditions, are considered in the GOSAIL model. The HDRF simulated by the 3-D Discrete Anisotropic Radiative Transfer (DART) model and the HDRF measurements over a 100 m × 100 m mature pine stand at Järvselja, Estonia, are used for validating and evaluating the performance of the proposed GOSAIL model. The comparison results indicate that the GOSAIL model can accurately reproduce the angular features of a discrete canopy under both clear and overcast atmospheric conditions. The GOSAIL model is promising for the retrieval of land surface biophysical parameters (e.g., albedo, leaf area index) over heterogeneous terrain.

  17. On time discretizations for the simulation of the batch settling-compression process in one dimension.

    PubMed

    Bürger, Raimund; Diehl, Stefan; Mejías, Camilo

    2016-01-01

    The main purpose of the recently introduced Bürger-Diehl simulation model for secondary settling tanks was to resolve spatial discretization problems when both hindered settling and the phenomena of compression and dispersion are included. Straightforward time integration unfortunately means long computational times. The next step in the development is to introduce and investigate time-integration methods for more efficient simulations, but where other aspects such as implementation complexity and robustness are equally considered. This is done for batch settling simulations. The key findings are partly a new time-discretization method and partly its comparison with other specially tailored and standard methods. Several advantages and disadvantages for each method are given. One conclusion is that the new linearly implicit method is easier to implement than another one (semi-implicit method), but less efficient based on two types of batch sedimentation tests.

  18. Sensory processing and world modeling for an active ranging device

    NASA Technical Reports Server (NTRS)

    Hong, Tsai-Hong; Wu, Angela Y.

    1991-01-01

    In this project, we studied world modeling and sensory processing for laser range data. World Model data representation and operation were defined. Sensory processing algorithms for point processing and linear feature detection were designed and implemented. The interface between world modeling and sensory processing in the Servo and Primitive levels was investigated and implemented. At the Primitive level, linear feature detectors for edges were also implemented, analyzed, and compared. Existing world model representations are surveyed. Also presented is the design and implementation of the Y-frame model, a hierarchical world model. The interfaces between the world model module and the sensory processing module are discussed, as well as the linear feature detectors that were designed and implemented.

  19. Fourth order discretization of anisotropic heat conduction operator

    NASA Astrophysics Data System (ADS)

    Krasheninnikova, Natalia; Chacon, Luis

    2008-11-01

    In magnetized plasmas, heat conduction plays an important role in such processes as energy confinement, turbulence, and a number of instabilities. As a consequence of the presence of a magnetic field, heat transport is strongly anisotropic, with energy flowing preferentially along the magnetic field direction. This in turn results in parallel and perpendicular heat conduction coefficients being separated by orders of magnitude. The computational difficulties in treating such heat conduction anisotropies are significant, as the perpendicular dynamics is numerically polluted by the parallel dynamics. In this work, we report on progress in implementing a fourth-order, conservative finite-volume discretization scheme for the anisotropic heat conduction operator into the extended MHD code PIXIE3D [1]. We will demonstrate its spatial discretization accuracy and its effectiveness with two physical applications of interest, both of which feature a strong sensitivity to the heat conduction anisotropy: the thermal instability and the neoclassical tearing mode. [1] L. Chacon, Phys. Plasmas 15, 056103 (2008)

  20. Study on a discrete-time dynamic control model to enhance nitrogen removal with fluctuation of influent in oxidation ditches.

    PubMed

    Liu, Yanchen; Shi, Hanchang; Shi, Huiming; Wang, Zhiqiang

    2010-10-01

    The aim of this study was to propose a new control model, implementable online on a Programmable Logic Controller (PLC), to enhance nitrogen removal against influent fluctuations in a Carrousel oxidation ditch. The discrete-time control model was established from a confirmation model of operational conditions, based on expert knowledge obtained by simulation using the Activated Sludge Model 2-D (ASM2-D) and Computational Fluid Dynamics (CFD), together with a discrete-time control law for switching between different operational stages. A full-scale example is provided to demonstrate the feasibility of the proposed operation and the procedure of the control design. The effluent quality was substantially improved, to the extent that it met the new wastewater discharge standards of NH3-N < 5 mg/L and TN < 15 mg/L enacted in China throughout a one-day period with fluctuating influent. Copyright © 2010 Elsevier Ltd. All rights reserved.

  1. Features in the primordial spectrum from WMAP: A wavelet analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shafieloo, Arman; Souradeep, Tarun; Manimaran, P.

    2007-06-15

    Precise measurements of the anisotropies in the cosmic microwave background enable an accurate study of the form of the primordial power spectrum for a given set of cosmological parameters. In a previous paper [A. Shafieloo and T. Souradeep, Phys. Rev. D 70, 043523 (2004)], we implemented an improved (error-sensitive) Richardson-Lucy deconvolution algorithm on the angular power spectrum measured from the first year of WMAP data to determine the primordial power spectrum, assuming a concordance cosmological model. This recovered spectrum has a likelihood far better than a scale-invariant or 'best fit' scale-free spectrum (Δln L ≈ 25 with respect to the Harrison-Zeldovich spectrum, and Δln L ≈ 11 with respect to the power-law spectrum with n_s = 0.95). In this paper we use the discrete wavelet transform (DWT) to decompose the local features of the recovered spectrum individually, to study their effect and significance on the recovered angular power spectrum and hence the likelihood. We show that, besides the infrared cutoff at the horizon scale, the associated features of the primordial power spectrum around the horizon have a significant effect on improving the likelihood. The strong features are localized at the horizon scale.

  2. Hidden Markov Model and Support Vector Machine based decoding of finger movements using Electrocorticography

    PubMed Central

    Wissel, Tobias; Pfeiffer, Tim; Frysch, Robert; Knight, Robert T.; Chang, Edward F.; Hinrichs, Hermann; Rieger, Jochem W.; Rose, Georg

    2013-01-01

    Objective: Support Vector Machines (SVM) have developed into a gold standard for accurate classification in Brain-Computer Interfaces (BCI). The choice of the most appropriate classifier for a particular application depends on several characteristics in addition to decoding accuracy. Here we investigate the implementation of Hidden Markov Models (HMM) for online BCIs and discuss strategies to improve their performance. Approach: We compare the SVM, serving as a reference, and HMMs for classifying discrete finger movements obtained from the electrocorticograms of four subjects performing a finger tapping experiment. The classifier decisions are based on a subset of low-frequency time-domain and high-gamma oscillation features. Main results: We show that differences in decoding performance between the two approaches stem from the way features are extracted and selected, and depend less on the classifier. An additional gain in HMM performance of up to 6% was obtained by introducing model constraints. Comparable accuracies of up to 90% were achieved with both SVM and HMM, with the high-gamma cortical response providing the most important decoding information for both techniques. Significance: We discuss technical HMM characteristics and adaptations in the context of the presented data as well as for general BCI applications. Our findings suggest that HMMs and their characteristics are promising for efficient online brain-computer interfaces. PMID:24045504

  3. When to be discrete: the importance of time formulation in understanding animal movement.

    PubMed

    McClintock, Brett T; Johnson, Devin S; Hooten, Mevin B; Ver Hoef, Jay M; Morales, Juan M

    2014-01-01

    Animal movement is essential to our understanding of population dynamics, animal behavior, and the impacts of global change. Coupled with high-resolution biotelemetry data, exciting new inferences about animal movement have been facilitated by various specifications of contemporary models. These approaches differ, but most share common themes. One key distinction is whether the underlying movement process is conceptualized in discrete or continuous time. This is perhaps the greatest source of confusion among practitioners, both in terms of implementation and biological interpretation. In general, animal movement occurs in continuous time but we observe it at fixed discrete-time intervals. Thus, continuous time is conceptually and theoretically appealing, but in practice it is perhaps more intuitive to interpret movement in discrete intervals. With an emphasis on state-space models, we explore the differences and similarities between continuous and discrete versions of mechanistic movement models, establish some common terminology, and indicate under which circumstances one form might be preferred over another. Counter to the overly simplistic view that discrete- and continuous-time conceptualizations are merely different means to the same end, we present novel mathematical results revealing hitherto unappreciated consequences of model formulation on inferences about animal movement. Notably, the speed and direction of movement are intrinsically linked in current continuous-time random walk formulations, and this can have important implications when interpreting animal behavior. We illustrate these concepts in the context of state-space models with multiple movement behavior states using northern fur seal (Callorhinus ursinus) biotelemetry data.

  4. When to be discrete: The importance of time formulation in understanding animal movement

    USGS Publications Warehouse

    McClintock, Brett T.; Johnson, Devin S.; Hooten, Mevin B.; Ver Hoef, Jay M.; Morales, Juan M.

    2014-01-01

    Animal movement is essential to our understanding of population dynamics, animal behavior, and the impacts of global change. Coupled with high-resolution biotelemetry data, exciting new inferences about animal movement have been facilitated by various specifications of contemporary models. These approaches differ, but most share common themes. One key distinction is whether the underlying movement process is conceptualized in discrete or continuous time. This is perhaps the greatest source of confusion among practitioners, both in terms of implementation and biological interpretation. In general, animal movement occurs in continuous time but we observe it at fixed discrete-time intervals. Thus, continuous time is conceptually and theoretically appealing, but in practice it is perhaps more intuitive to interpret movement in discrete intervals. With an emphasis on state-space models, we explore the differences and similarities between continuous and discrete versions of mechanistic movement models, establish some common terminology, and indicate under which circumstances one form might be preferred over another. Counter to the overly simplistic view that discrete- and continuous-time conceptualizations are merely different means to the same end, we present novel mathematical results revealing hitherto unappreciated consequences of model formulation on inferences about animal movement. Notably, the speed and direction of movement are intrinsically linked in current continuous-time random walk formulations, and this can have important implications when interpreting animal behavior. We illustrate these concepts in the context of state-space models with multiple movement behavior states using northern fur seal (Callorhinus ursinus) biotelemetry data.

  5. A discrete fibre dispersion method for excluding fibres under compression in the modelling of fibrous tissues.

    PubMed

    Li, Kewei; Ogden, Ray W; Holzapfel, Gerhard A

    2018-01-01

    Recently, micro-sphere-based methods derived from the angular integration approach have been used for excluding fibres under compression in the modelling of soft biological tissues. However, recent studies have revealed that many of the widely used numerical integration schemes over the unit sphere are inaccurate for large deformation problems even without excluding fibres under compression. Thus, in this study, we propose a discrete fibre dispersion model based on a systematic method for discretizing a unit hemisphere into a finite number of elementary areas, such as spherical triangles. Over each elementary area, we define a representative fibre direction and a discrete fibre density. Then, the strain energy of all the fibres distributed over each elementary area is approximated based on the deformation of the representative fibre direction weighted by the corresponding discrete fibre density. A summation of fibre contributions over all elementary areas then yields the resultant fibre strain energy. This treatment allows us to exclude fibres under compression in a discrete manner by evaluating the tension-compression status of the representative fibre directions only. We have implemented this model in a finite-element programme and illustrate it with three representative examples, including simple tension and simple shear of a unit cube, and non-homogeneous uniaxial extension of a rectangular strip. The results of all three examples are consistent and accurate compared with the previously developed continuous fibre dispersion model, and that is achieved with a substantial reduction of computational cost. © 2018 The Author(s).
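
    The tension-compression switch at the heart of the method can be sketched compactly. Below, the hemisphere of fibre directions is represented by a finite set of representative directions with weights (random equal-weight directions here, standing in for the paper's spherical-triangle areas and discrete fibre densities), and only fibres with squared stretch I4 > 1 contribute to the strain energy. The exponential fibre energy and the constants k1, k2 are common choices in this literature but are assumptions in this sketch.

```python
import numpy as np

# Discrete fibre dispersion: sum fibre strain energies over representative
# directions, excluding directions under compression (I4 <= 1).
rng = np.random.default_rng(3)
n_dirs = 200
v = rng.normal(size=(n_dirs, 3))
dirs = v / np.linalg.norm(v, axis=1, keepdims=True)   # unit directions
weights = np.full(n_dirs, 1.0 / n_dirs)               # equal-area stand-in

def fibre_energy(F, k1=1.0, k2=0.5):
    C = F.T @ F                                       # right Cauchy-Green
    I4 = np.einsum('ij,jk,ik->i', dirs, C, dirs)      # squared stretches
    tension = I4 > 1.0                                # exclude compression
    psi = k1 / (2.0 * k2) * (np.exp(k2 * (I4 - 1.0) ** 2) - 1.0)
    return np.sum(weights[tension] * psi[tension])

# Incompressible uniaxial stretch of 20% along the first axis.
F = np.diag([1.2, 1.0 / np.sqrt(1.2), 1.0 / np.sqrt(1.2)])
print(fibre_energy(F))
```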

  6. Smoothed Particle Hydrodynamics and its applications for multiphase flow and reactive transport in porous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tartakovsky, Alexandre M.; Trask, Nathaniel; Pan, K.

    2016-03-11

    Smoothed Particle Hydrodynamics (SPH) is a Lagrangian method based on a meshless discretization of partial differential equations. In this review, we present SPH discretization of the Navier-Stokes and Advection-Diffusion-Reaction equations, implementation of various boundary conditions, and time integration of the SPH equations, and we discuss applications of the SPH method for modeling pore-scale multiphase flows and reactive transport in porous and fractured media.
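
    The basic SPH building block, the kernel-weighted density summation rho_i = sum_j m_j W(x_i - x_j, h), can be sketched in a few lines. The 1-D setting, Gaussian kernel, and uniform particle arrangement are illustrative assumptions; production codes typically use compact-support kernels (e.g., cubic spline) and neighbor lists.

```python
import numpy as np

def W(r, h):
    """Normalized 1-D Gaussian smoothing kernel."""
    return np.exp(-(r / h) ** 2) / (h * np.sqrt(np.pi))

n, h = 100, 0.02
x = np.linspace(0.0, 1.0, n)          # particle positions on [0, 1]
m = np.full(n, 1.0 / n)               # equal masses, total mass 1

# SPH density summation: rho_i = sum_j m_j W(x_i - x_j, h)
rho = np.array([np.sum(m * W(x - xi, h)) for xi in x])
print(rho[n // 2])                    # ~1.0 away from the boundaries
```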

  7. Discrete- vs. Continuous-Time Modeling of Unequally Spaced Experience Sampling Method Data.

    PubMed

    de Haan-Rietdijk, Silvia; Voelkle, Manuel C; Keijsers, Loes; Hamaker, Ellen L

    2017-01-01

    The Experience Sampling Method is a common approach in psychological research for collecting intensive longitudinal data with high ecological validity. One characteristic of ESM data is that it is often unequally spaced, because the measurement intervals within a day are deliberately varied, and measurement continues over several days. This poses a problem for discrete-time (DT) modeling approaches, which are based on the assumption that all measurements are equally spaced. Nevertheless, DT approaches such as (vector) autoregressive modeling are often used to analyze ESM data, for instance in the context of affective dynamics research. There are equivalent continuous-time (CT) models, but they are more difficult to implement. In this paper we take a pragmatic approach and evaluate the practical relevance of the violated model assumption in DT AR(1) and VAR(1) models, for the N = 1 case. We use simulated data under an ESM measurement design to investigate the bias in the parameters of interest under four different model implementations, ranging from the true CT model that accounts for all the exact measurement times, to the crudest possible DT model implementation, where even the nighttime is treated as a regular interval. An analysis of empirical affect data illustrates how the differences between DT and CT modeling can play out in practice. We find that the size and the direction of the bias in DT (V)AR models for unequally spaced ESM data depend quite strongly on the true parameter in addition to data characteristics. Our recommendation is to use CT modeling whenever possible, especially now that new software implementations have become available.

  8. Discrete- vs. Continuous-Time Modeling of Unequally Spaced Experience Sampling Method Data

    PubMed Central

    de Haan-Rietdijk, Silvia; Voelkle, Manuel C.; Keijsers, Loes; Hamaker, Ellen L.

    2017-01-01

    The Experience Sampling Method is a common approach in psychological research for collecting intensive longitudinal data with high ecological validity. One characteristic of ESM data is that it is often unequally spaced, because the measurement intervals within a day are deliberately varied, and measurement continues over several days. This poses a problem for discrete-time (DT) modeling approaches, which are based on the assumption that all measurements are equally spaced. Nevertheless, DT approaches such as (vector) autoregressive modeling are often used to analyze ESM data, for instance in the context of affective dynamics research. There are equivalent continuous-time (CT) models, but they are more difficult to implement. In this paper we take a pragmatic approach and evaluate the practical relevance of the violated model assumption in DT AR(1) and VAR(1) models, for the N = 1 case. We use simulated data under an ESM measurement design to investigate the bias in the parameters of interest under four different model implementations, ranging from the true CT model that accounts for all the exact measurement times, to the crudest possible DT model implementation, where even the nighttime is treated as a regular interval. An analysis of empirical affect data illustrates how the differences between DT and CT modeling can play out in practice. We find that the size and the direction of the bias in DT (V)AR models for unequally spaced ESM data depend quite strongly on the true parameter in addition to data characteristics. Our recommendation is to use CT modeling whenever possible, especially now that new software implementations have become available. PMID:29104554
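
    The CT/DT correspondence discussed in the two records above can be made concrete in a few lines: under a continuous-time first-order (Ornstein-Uhlenbeck) model, the implied discrete-time AR(1) coefficient for an interval dt is phi(dt) = exp(-beta * dt), so unequally spaced measurements imply interval-specific coefficients that no single fitted DT coefficient can represent. The value of beta and the interval set are illustrative assumptions.

```python
import numpy as np

# Implied AR(1) coefficients for unequal ESM measurement intervals under
# a continuous-time first-order model with auto-effect -beta.
beta = 0.8
for dt in (0.5, 1.0, 2.0, 9.0):   # e.g. within-day gaps plus a night gap
    print(f"dt = {dt:4.1f} h  ->  implied AR(1) phi = {np.exp(-beta * dt):.3f}")
```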

  9. A case study on Discrete Wavelet Transform based Hurst exponent for epilepsy detection.

    PubMed

    Madan, Saiby; Srivastava, Kajri; Sharmila, A; Mahalakshmi, P

    2018-01-01

    Epileptic seizures are manifestations of epilepsy. Careful analysis of EEG records can provide valuable insight and improved understanding of the mechanisms causing epileptic disorders. The detection of epileptiform discharges in the EEG is an important component in the diagnosis of epilepsy. As EEG signals are non-stationary, conventional frequency- and time-domain analysis does not provide good accuracy. So, in this work an attempt has been made to detect epilepsy by implementing Hurst exponent (HE)-based discrete wavelet transform techniques for feature extraction from EEG data sets obtained during the ictal and pre-ictal stages of an affected person, and finally classifying the EEG signals using SVM and KNN classifiers. The highest accuracy of 99% is obtained using the SVM.
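
    A minimal sketch of a wavelet-based Hurst exponent estimate: for fractional Brownian motion, the variance of detail coefficients at level j scales as 2^(j(2H+1)), so H follows from the slope of log2(variance) against level. The 'db4' wavelet, decomposition depth, and the Brownian-motion test signal are illustrative assumptions; the paper pairs such HE features with SVM and KNN classifiers.

```python
import numpy as np
import pywt

def hurst_dwt(signal, wavelet='db4', max_level=6):
    """Estimate H from the wavelet-variance scaling across levels."""
    coeffs = pywt.wavedec(signal, wavelet, level=max_level)
    details = coeffs[1:][::-1]               # finest (level 1) first
    levels = np.arange(1, len(details) + 1)
    logvar = np.log2([np.var(d) for d in details])
    slope = np.polyfit(levels, logvar, 1)[0]  # slope ~ 2H + 1
    return (slope - 1.0) / 2.0

rng = np.random.default_rng(4)
bm = np.cumsum(rng.normal(size=4096))        # ordinary Brownian motion
print(hurst_dwt(bm))                         # expect roughly H ~ 0.5
```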

  10. Loop series for discrete statistical models on graphs

    NASA Astrophysics Data System (ADS)

    Chertkov, Michael; Chernyak, Vladimir Y.

    2006-06-01

    In this paper we present the derivation details, logic, and motivation for the three loop calculus introduced in Chertkov and Chernyak (2006 Phys. Rev. E 73 065102(R)). Generating functions for each of the three interrelated discrete statistical models are expressed in terms of a finite series. The first term in the series corresponds to the Bethe-Peierls belief-propagation (BP) contribution; the other terms are labelled by loops on the factor graph. All loop contributions are simple rational functions of spin correlation functions calculated within the BP approach. We discuss two alternative derivations of the loop series. One approach implements a set of local auxiliary integrations over continuous fields with the BP contribution corresponding to an integrand saddle-point value. The integrals are replaced by sums in the complementary approach, briefly explained in Chertkov and Chernyak (2006 Phys. Rev. E 73 065102(R)). Local gauge symmetry transformations that clarify an important invariant feature of the BP solution are revealed in both approaches. The individual terms change under the gauge transformation while the partition function remains invariant. The requirement for all individual terms to be nonzero only for closed loops in the factor graph (as opposed to paths with loose ends) is equivalent to fixing the first term in the series to be exactly equal to the BP contribution. Further applications of the loop calculus to problems in statistical physics, computer and information sciences are discussed.

  11. The Flow Dimension and Aquifer Heterogeneity: Field evidence and Numerical Analyses

    NASA Astrophysics Data System (ADS)

    Walker, D. D.; Cello, P. A.; Valocchi, A. J.; Roberts, R. M.; Loftis, B.

    2008-12-01

    The Generalized Radial Flow approach to hydraulic test interpretation infers the flow dimension to describe the geometry of the flow field during a hydraulic test. Noninteger values of the flow dimension are often inferred for tests in highly heterogeneous aquifers, yet subsequent modeling studies typically ignore the flow dimension. Monte Carlo analyses of detailed numerical models of aquifer tests examine the flow dimension for several stochastic models of heterogeneous transmissivity, T(x). These include multivariate lognormal fields, fractional Brownian motion, a site percolation network, and discrete linear features with lengths distributed as a power law. The behavior of the simulated flow dimensions is compared to the flow dimensions observed for multiple aquifer tests in a fractured dolomite aquifer in the Great Lakes region of North America. The combination of multiple hydraulic tests, observed fracture patterns, and the Monte Carlo results is used to screen models of heterogeneity and their parameters for subsequent groundwater flow modeling. The comparison shows that discrete linear features with lengths distributed as a power law appear to be the most consistent with observations of the flow dimension in fractured dolomite aquifers.

  12. Hybrid diagnostic system: beacon-based exception analysis for multimissions - Livingstone integration

    NASA Technical Reports Server (NTRS)

    Park, Han G.; Cannon, Howard; Bajwa, Anupa; Mackey, Ryan; James, Mark; Maul, William

    2004-01-01

    This paper describes the initial integration of a hybrid reasoning system utilizing a continuous domain feature-based detector, Beacon-based Exceptions Analysis for Multimissions (BEAM), and a discrete domain model-based reasoner, Livingstone.

  13. Offline Signature Verification Using the Discrete Radon Transform and a Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Coetzer, J.; Herbst, B. M.; du Preez, J. A.

    2004-12-01

    We developed a system that automatically authenticates offline handwritten signatures using the discrete Radon transform (DRT) and a hidden Markov model (HMM). Given the robustness of our algorithm and the fact that only global features are considered, satisfactory results are obtained. Using a database of 924 signatures from 22 writers, our system achieves an equal error rate (EER) of 18% when only high-quality forgeries (skilled forgeries) are considered and an EER of 4.5% in the case of only casual forgeries. These signatures were originally captured offline. Using another database of 4800 signatures from 51 writers, our system achieves an EER of 12.2% when only skilled forgeries are considered. These signatures were originally captured online and then digitally converted into static signature images. These results compare well with the results of other algorithms that consider only global features.
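
    The global feature extraction stage can be sketched with scikit-image's radon transform: each projection angle yields one observation vector, which the paper then feeds to an HMM (omitted here). The image and parameter choices below are hypothetical.

        import numpy as np
        from skimage.transform import radon

        def drt_features(image, n_angles=128, n_bins=32):
            """Discrete Radon transform of a signature image: one normalized,
            fixed-length projection vector per angle."""
            angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
            sinogram = radon(image.astype(float), theta=angles)  # (n_positions, n_angles)
            feats = []
            for col in sinogram.T:
                binned = np.interp(np.linspace(0, len(col) - 1, n_bins),
                                   np.arange(len(col)), col)     # resample to n_bins
                norm = np.linalg.norm(binned)
                feats.append(binned / norm if norm > 0 else binned)
            return np.array(feats)                               # (n_angles, n_bins)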

  14. Recognizing emotions from EEG subbands using wavelet analysis.

    PubMed

    Candra, Henry; Yuwono, Mitchell; Handojoseno, Ardi; Chai, Rifai; Su, Steven; Nguyen, Hung T

    2015-01-01

    Objectively recognizing emotions is a particularly important task to ensure that patients with emotional symptoms are given the appropriate treatments. The aim of this study was to develop an emotion recognition system using Electroencephalogram (EEG) signals to identify four emotions: happy, sad, angry, and relaxed. We approached this objective by first investigating the relevant EEG frequency band and then deciding on the appropriate feature extraction method. Two features were considered, namely: 1. wavelet energy, and 2. wavelet entropy. EEG channel reduction was then implemented to reduce the complexity of the features. The ground truth emotional states of each subject were inferred using Russell's circumplex model of emotion, that is, by mapping the subjectively reported degrees of valence (pleasure) and arousal to the appropriate emotions - for example, an emotion with high valence and high arousal is equivalent to a `happy' emotional state, while low valence and low arousal is equivalent to a `sad' emotional state. The Support Vector Machine (SVM) classifier was then used for mapping each feature vector into corresponding discrete emotions. The results presented in this study indicated that wavelet features extracted from the alpha, beta and gamma bands seem to provide the necessary information for describing the aforementioned emotions. Using the DEAP (Dataset for Emotion Analysis using electroencephalogram, Physiological and Video Signals), our proposed method achieved an average sensitivity and specificity of 77.4% ± 14.1% and 69.1% ± 12.8%, respectively.
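
    The two candidate features are straightforward to compute per DWT subband; a minimal sketch assuming PyWavelets, with `eeg_channel` a hypothetical 1-D signal:

        import numpy as np
        import pywt

        def wavelet_energy_entropy(signal, wavelet="db4", level=5):
            """Return (energy, entropy) for each DWT subband of one EEG channel."""
            feats = []
            for c in pywt.wavedec(signal, wavelet, level=level):
                e = c ** 2
                energy = e.sum()
                p = e / energy if energy > 0 else np.full_like(e, 1.0 / len(e))
                entropy = -np.sum(p * np.log2(p + 1e-12))  # Shannon entropy of subband
                feats.append((energy, entropy))
            return feats  # concatenate over channels to build the SVM feature vector

        features = wavelet_energy_entropy(eeg_channel)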

  15. BASIMO - Borehole Heat Exchanger Array Simulation and Optimization Tool

    NASA Astrophysics Data System (ADS)

    Schulte, Daniel O.; Bastian, Welsch; Wolfram, Rühaak; Kristian, Bär; Ingo, Sass

    2017-04-01

    Arrays of borehole heat exchangers are an increasingly popular source for renewable energy. Furthermore, they can serve as borehole thermal energy storage (BTES) systems for seasonally fluctuating heat sources like solar thermal energy or district heating grids. The high temperature level of these heat sources prohibits the use of the shallow subsurface for environmental reasons. Therefore, deeper reservoirs have to be accessed instead. The increased depth of the systems results in high investment costs and has hindered the implementation of this technology until now. Therefore, research on medium-deep BTES systems relies on numerical simulation models. Current simulation tools cannot - or only to some extent - describe key features like partly insulated boreholes unless they run fully discretized models of the borehole heat exchangers. However, fully discretized models often come at a high computational cost, especially for large arrays of borehole heat exchangers. We give an update on the development of BASIMO: a tool that uses one-dimensional thermal resistance and capacity models for the borehole heat exchangers coupled with a numerical finite element model for the subsurface heat transport in a dual-continuum approach. An unstructured tetrahedral mesh bypasses the limitations of structured grids for borehole path geometries, while the thermal resistance and capacity model is improved to account for borehole heat exchanger properties changing with depth. Thereby, partly insulated boreholes can be considered in the model. Furthermore, BASIMO can be used to improve the design of BTES systems: the tool allows for automated parameter variations and is readily coupled to other codes such as mathematical optimization algorithms. Optimization can be used to determine the required minimum system size or to increase the system performance.

  16. A multiple indicator solution approach to endogeneity in discrete-choice models for environmental valuation.

    PubMed

    Mariel, Petr; Hoyos, David; Artabe, Alaitz; Guevara, C Angelo

    2018-08-15

    Endogeneity is an often neglected issue in empirical applications of discrete choice modelling despite its severe consequences in terms of inconsistent parameter estimation and biased welfare measures. This article analyses the performance of the multiple indicator solution method to deal with endogeneity arising from omitted explanatory variables in discrete choice models for environmental valuation. We also propose and illustrate a factor analysis procedure for the selection of the indicators in practice. Additionally, the performance of this method is compared with the recently proposed hybrid choice modelling framework. In an empirical application we find that the multiple indicator solution method and the hybrid model approach provide similar results in terms of welfare estimates, although the multiple indicator solution method is more parsimonious and notably easier to implement. The empirical results open a path to explore the performance of this method when endogeneity is thought to have a different cause or under a different set of indicators. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. Tutorial in medical decision modeling incorporating waiting lines and queues using discrete event simulation.

    PubMed

    Jahn, Beate; Theurl, Engelbert; Siebert, Uwe; Pfeiffer, Karl-Peter

    2010-01-01

    In most decision-analytic models in health care, it is assumed that there is treatment without delay and availability of all required resources. Therefore, waiting times caused by limited resources and their impact on treatment effects and costs often remain unconsidered. Queuing theory enables mathematical analysis and the derivation of several performance measures of queuing systems. Nevertheless, an analytical approach with closed formulas is not always possible. Therefore, simulation techniques are used to evaluate systems that include queuing or waiting, for example, discrete event simulation. Including queuing in decision-analytic models requires a basic knowledge of queuing theory and of the underlying interrelationships. This tutorial introduces queuing theory. Analysts and decision-makers get an understanding of queue characteristics, modeling features, and their strengths. Conceptual issues are covered, but the emphasis is on practical issues such as modeling the arrival of patients. The treatment of coronary artery disease with percutaneous coronary intervention including stent placement serves as an illustrative queuing example. Discrete event simulation is applied to explicitly model resource capacities, to incorporate waiting lines and queues in the decision-analytic modeling example.
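
    A minimal simulation of the kind of queue the tutorial builds up to can be written in a few lines. The sketch below uses the Lindley-style recurrence for a single-server FIFO queue with Poisson arrivals and exponential service (a compact alternative to a full event calendar; the rates are hypothetical):

        import random

        def mm1_mean_wait(lam=0.8, mu=1.0, n_patients=100000, seed=1):
            """Mean waiting time before service in an M/M/1 queue."""
            random.seed(seed)
            t, server_free_at, waits = 0.0, 0.0, []
            for _ in range(n_patients):
                t += random.expovariate(lam)               # next patient arrives
                start = max(t, server_free_at)             # wait if the server is busy
                waits.append(start - t)
                server_free_at = start + random.expovariate(mu)  # service completes
            return sum(waits) / len(waits)

        # Queuing theory predicts Wq = lam / (mu * (mu - lam)) = 4.0 for these rates
        print(mm1_mean_wait())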

  18. Predicting Student Success: A Naïve Bayesian Application to Community College Data

    ERIC Educational Resources Information Center

    Ornelas, Fermin; Ordonez, Carlos

    2017-01-01

    This research focuses on developing and implementing a continuous Naïve Bayesian classifier for GEAR courses at Rio Salado Community College. Previous implementation efforts with a discrete version did not predict as well (70%) and had deployment issues. This predictive model achieves higher prediction accuracy, over 90%, for both at-risk and successful…
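
    The discrete-versus-continuous contrast the abstract mentions is easy to reproduce with scikit-learn's naïve Bayes variants; the student features and labels below are synthetic and hypothetical, not the Rio Salado data:

        import numpy as np
        from sklearn.naive_bayes import GaussianNB, CategoricalNB
        from sklearn.model_selection import cross_val_score

        # Hypothetical features: logins/week, assignment score, days inactive
        rng = np.random.default_rng(0)
        X = rng.normal(loc=[[10.0, 75.0, 2.0]], scale=[[3.0, 10.0, 1.5]], size=(500, 3))
        y = (X[:, 1] + rng.normal(0, 5, 500) > 72).astype(int)  # 1 = successful

        continuous = cross_val_score(GaussianNB(), X, y, cv=5).mean()
        X_binned = np.digitize(X, np.quantile(X, [0.25, 0.5, 0.75]))  # 4 categories
        discrete = cross_val_score(CategoricalNB(), X_binned, y, cv=5).mean()
        print(f"continuous NB: {continuous:.2f}, discretized NB: {discrete:.2f}")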

  19. A constrained Delaunay discretization method for adaptively meshing highly discontinuous geological media

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Ma, Guowei; Ren, Feng; Li, Tuo

    2017-12-01

    A constrained Delaunay discretization method is developed to generate high-quality doubly adaptive meshes of highly discontinuous geological media. Complex features such as three-dimensional discrete fracture networks (DFNs), tunnels, shafts, slopes, boreholes, water curtains, and drainage systems are taken into account in the mesh generation. The constrained Delaunay triangulation method is used to create adaptive triangular elements on planar fractures. Persson's algorithm (Persson, 2005), based on an analogy between triangular elements and spring networks, is enriched to automatically discretize a planar fracture into mesh points with varying density and a smooth quality gradient. The triangulated planar fractures are treated as planar straight-line graphs (PSLGs) to construct a piecewise-linear complex (PLC) for constrained Delaunay tetrahedralization. This guarantees the doubly adaptive characteristic of the resulting mesh: the mesh is adaptive not only along fractures but also in space. The quality of the elements is compared with results from an existing method. It is verified that the present method generates smoother elements and a better distribution of element aspect ratios. Two numerical simulations are implemented to demonstrate that the present method can be applied to various simulations of complex geological media that contain a large number of discontinuities.

  20. Discrete Regularization for Calibration of Geologic Facies Against Dynamic Flow Data

    NASA Astrophysics Data System (ADS)

    Khaninezhad, Mohammad-Reza; Golmohammadi, Azarang; Jafarpour, Behnam

    2018-04-01

    Subsurface flow model calibration involves many more unknowns than measurements, leading to ill-posed problems with nonunique solutions. To alleviate nonuniqueness, the problem is regularized by constraining the solution space using prior knowledge. In certain sedimentary environments, such as fluvial systems, the contrast in hydraulic properties of different facies types tends to dominate the flow and transport behavior, making the effect of within-facies heterogeneity less significant. Hence, flow model calibration in those formations reduces to delineating the spatial structure and connectivity of different lithofacies types and their boundaries. A major difficulty in calibrating such models is honoring the discrete, or piecewise constant, nature of facies distribution. The problem becomes more challenging when complex spatial connectivity patterns with higher-order statistics are involved. This paper introduces a novel formulation for calibration of complex geologic facies by imposing appropriate constraints to recover plausible solutions that honor the spatial connectivity and discreteness of facies models. To incorporate prior connectivity patterns, plausible geologic features are learned from available training models, e.g., by k-SVD sparse learning or traditional principal component analysis. Discrete regularization is introduced as a penalty function to impose solution discreteness while minimizing the mismatch between observed and predicted data. An efficient gradient-based alternating-directions algorithm is combined with variable splitting to minimize the resulting regularized nonlinear least-squares objective function. Numerical results show that imposing learned facies connectivity and discreteness as regularization functions leads to geologically consistent solutions that improve facies calibration quality.

  1. An implicit numerical model for multicomponent compressible two-phase flow in porous media

    NASA Astrophysics Data System (ADS)

    Zidane, Ali; Firoozabadi, Abbas

    2015-11-01

    We introduce a new implicit approach to model multicomponent compressible two-phase flow in porous media with species transfer between the phases. In the implicit discretization of the species transport equation in our formulation we calculate, for the first time, the derivative of the molar concentration of component i in phase α, c_{α,i}, with respect to the total molar concentration, c_i, under the conditions of constant volume V and temperature T. The species transport equation is discretized by the finite volume (FV) method. The fluxes are calculated based on powerful features of the mixed finite element (MFE) method, which provides the pressure at grid-cell interfaces in addition to the pressure at the grid-cell center. The efficiency of the proposed model is demonstrated by comparing our results with three existing implicit compositional models. Our algorithm has low numerical dispersion despite the fact that it is based on first-order spatial discretization. The proposed algorithm is very robust.

  2. The application of feature selection to the development of Gaussian process models for percutaneous absorption.

    PubMed

    Lam, Lun Tak; Sun, Yi; Davey, Neil; Adams, Rod; Prapopoulou, Maria; Brown, Marc B; Moss, Gary P

    2010-06-01

    The aim was to employ Gaussian processes to assess mathematically the nature of a skin permeability dataset and to employ these methods, particularly feature selection, to determine the key physicochemical descriptors which exert the most significant influence on percutaneous absorption, and to compare such models with established existing models. Gaussian processes, including automatic relevance detection (GPRARD) methods, were employed to develop models of percutaneous absorption that identified key physicochemical descriptors of percutaneous absorption. Using MatLab software, the statistical performance of these models was compared with single linear networks (SLN) and quantitative structure-permeability relationships (QSPRs). Feature selection methods were used to examine in more detail the physicochemical parameters used in this study. A range of statistical measures to determine model quality were used. The inherently nonlinear nature of the skin data set was confirmed. The Gaussian process regression (GPR) methods yielded predictive models that offered statistically significant improvements over SLN and QSPR models with regard to predictivity (where the rank order was: GPR > SLN > QSPR). Feature selection analysis determined that the best GPR models were those that contained log P, melting point and the number of hydrogen bond donor groups as significant descriptors. Further statistical analysis also found that great synergy existed between certain parameters. It suggested that a number of the descriptors employed were effectively interchangeable, thus questioning the use of models where discrete variables are output, usually in the form of an equation. The use of a nonlinear GPR method produced models with significantly improved predictivity, compared with SLN or QSPR models. Feature selection methods were able to provide important mechanistic information. However, it was also shown that significant synergy existed between certain parameters, and as such it was possible to interchange certain descriptors (i.e. molecular weight and melting point) without incurring a loss of model quality. Such synergy suggested that a model constructed from discrete terms in an equation may not be the most appropriate way of representing mechanistic understandings of skin absorption.
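
    Automatic relevance detection of this kind can be sketched with an anisotropic RBF kernel, where one learned length scale per descriptor flags (ir)relevance; this uses scikit-learn rather than the authors' MatLab setup, and the data are synthetic:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(0)
        X = rng.normal(size=(80, 3))      # e.g. log P, melting point, H-bond donors
        y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(0, 0.1, 80)  # feature 3 irrelevant

        kernel = RBF(length_scale=[1.0, 1.0, 1.0]) + WhiteKernel(noise_level=0.1)
        gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
        # ARD reading: the larger a fitted length scale, the less the corresponding
        # descriptor influences the prediction.
        print(gpr.kernel_.k1.length_scale)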

  3. A method of power analysis based on piecewise discrete Fourier transform

    NASA Astrophysics Data System (ADS)

    Xin, Miaomiao; Zhang, Yanchi; Xie, Da

    2018-04-01

    The paper analyzes existing feature extraction methods, focusing on the characteristics of the discrete Fourier transform and of piecewise aggregate approximation. Combining the advantages of the two methods, a new piecewise discrete Fourier transform is proposed, and the method is used to analyze the lighting power of a large customer. Time-series feature maps for four different cases are compared across the original data, the discrete Fourier transform, piecewise aggregate approximation, and the piecewise discrete Fourier transform. The new method reflects both the overall trend of the electricity consumption and its internal variations.
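
    The proposed combination can be prototyped directly: segment the load series as in piecewise aggregate approximation, then keep a few DFT coefficients per segment. Segment and coefficient counts below are hypothetical choices:

        import numpy as np

        def piecewise_dft(series, n_segments=8, n_coeffs=4):
            """Keep the leading DFT magnitudes of each segment, capturing local
            spectra as well as the overall trend."""
            segments = np.array_split(np.asarray(series, dtype=float), n_segments)
            feats = []
            for seg in segments:
                feats.extend(np.abs(np.fft.rfft(seg)[:n_coeffs]))  # low harmonics
            return np.array(feats)        # length = n_segments * n_coeffs

        # Example: a year of daily lighting load with a weekly cycle plus drift
        t = np.arange(365)
        load = 100 + 0.05 * t + 10 * np.sin(2 * np.pi * t / 7)
        print(piecewise_dft(load).shape)  # (32,)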

  4. Interesting examples of supervised continuous variable systems

    NASA Technical Reports Server (NTRS)

    Chase, Christopher; Serrano, Joe; Ramadge, Peter

    1990-01-01

    The authors analyze two simple deterministic flow models for multiple buffer servers, which are examples of the supervision of continuous variable systems by a discrete controller. These systems exhibit what may be regarded as the two extremes of complexity of closed-loop behavior: one is chaotic, the other eventually periodic. The first example exhibits chaotic behavior that can nonetheless be characterized statistically; its dual, the switched server system, exhibits very predictable behavior, which is modeled by a finite state automaton. This research has application to multimodal discrete-time systems where the controller can choose from a set of transition maps to implement.

  5. A mathematical approach for evaluating Markov models in continuous time without discrete-event simulation.

    PubMed

    van Rosmalen, Joost; Toy, Mehlika; O'Mahony, James F

    2013-08-01

    Markov models are a simple and powerful tool for analyzing the health and economic effects of health care interventions. These models are usually evaluated in discrete time using cohort analysis. The use of discrete time assumes that changes in health states occur only at the end of a cycle period. Discrete-time Markov models only approximate the process of disease progression, as clinical events typically occur in continuous time. The approximation can yield biased cost-effectiveness estimates for Markov models with long cycle periods and if no half-cycle correction is made. The purpose of this article is to present an overview of methods for evaluating Markov models in continuous time. These methods use mathematical results from stochastic process theory and control theory. The methods are illustrated using an applied example on the cost-effectiveness of antiviral therapy for chronic hepatitis B. The main result is a mathematical solution for the expected time spent in each state in a continuous-time Markov model. It is shown how this solution can account for age-dependent transition rates and discounting of costs and health effects, and how the concept of tunnel states can be used to account for transition rates that depend on the time spent in a state. The applied example shows that the continuous-time model yields more accurate results than the discrete-time model but does not require much computation time and is easily implemented. In conclusion, continuous-time Markov models are a feasible alternative to cohort analysis and can offer several theoretical and practical advantages.
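
    The key quantity, the expected time spent in each state over a horizon, has a closed matrix-exponential form, computable with the Van Loan block trick. A sketch with a hypothetical three-state generator (no discounting or age dependence):

        import numpy as np
        from scipy.linalg import expm

        # Generator of a 3-state illness-death model (rates per year, hypothetical)
        Q = np.array([[-0.20, 0.15, 0.05],   # healthy -> ill / dead
                      [0.00, -0.30, 0.30],   # ill -> dead
                      [0.00, 0.00, 0.00]])   # dead is absorbing
        T = 10.0                             # horizon, years
        p0 = np.array([1.0, 0.0, 0.0])       # everyone starts healthy

        # Van Loan trick: the top-right block of expm([[Q, I], [0, 0]] * T)
        # equals the integral of expm(Q*t) over [0, T].
        n = Q.shape[0]
        M = np.zeros((2 * n, 2 * n))
        M[:n, :n] = Q
        M[:n, n:] = np.eye(n)
        occupancy = p0 @ expm(M * T)[:n, n:]  # expected years spent in each state
        print(occupancy, occupancy.sum())     # occupancies sum to T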

  6. A multi-resolution approach to electromagnetic modelling

    NASA Astrophysics Data System (ADS)

    Cherevatova, M.; Egbert, G. D.; Smirnov, M. Yu

    2018-07-01

    We present a multi-resolution approach for 3-D magnetotelluric forward modelling. Our approach is motivated by the fact that fine-grid resolution is typically required at shallow levels to adequately represent near-surface inhomogeneities, topography and bathymetry, while a much coarser grid may be adequate at depth where the diffusively propagating electromagnetic fields are much smoother. With a conventional structured finite difference grid, the fine discretization required to adequately represent rapid variations near the surface is continued to all depths, resulting in higher computational costs. Increasing the computational efficiency of the forward modelling is especially important for solving regularized inversion problems. We implement a multi-resolution finite difference scheme that allows us to decrease the horizontal grid resolution with depth, as is done with vertical discretization. In our implementation, the multi-resolution grid is represented as a vertical stack of subgrids, with each subgrid being a standard Cartesian tensor-product staggered grid. Thus, our approach is similar to the octree discretization previously used for electromagnetic modelling, but simpler in that we allow refinement only with depth. The major difficulty arose in deriving the forward modelling operators on interfaces between adjacent subgrids. We considered three ways of handling the interface layers and suggest a preferable one, which yields accuracy similar to that of the staggered-grid solution while retaining the symmetry of the coefficient matrix. A comparison between multi-resolution and staggered solvers for various models shows that the multi-resolution approach improves computational efficiency without compromising the accuracy of the solution.

  7. Discrete space charge affected field emission: Flat and hemisphere emitters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jensen, Kevin L., E-mail: kevin.jensen@nrl.navy.mil; Shiffler, Donald A.; Tang, Wilkin

    Models of space-charge affected thermal-field emission from protrusions, able to incorporate the effects of both surface roughness and elongated field emitter structures in beam optics codes, are desirable but difficult. The models proposed here treat the meso-scale diode region separate from the micro-scale regions characteristic of the emission sites. The consequences of discrete emission events are given for both one-dimensional (sheets of charge) and three-dimensional (rings of charge) models: in the former, results converge to steady state conditions found by theory (e.g., Rokhlenko et al. [J. Appl. Phys. 107, 014904 (2010)]) but show oscillatory structure as they do. Surface roughness or geometric features are handled using a ring of charge model, from which the image charges are found and used to modify the apex field and emitted current. The roughness model is shown to have additional constraints related to the discrete nature of electron charge. The ability of a unit cell model to treat field emitter structures and incorporate surface roughness effects inside a beam optics code is assessed.

  8. Solving Rational Expectations Models Using Excel

    ERIC Educational Resources Information Center

    Strulik, Holger

    2004-01-01

    Simple problems of discrete-time optimal control can be solved using standard spreadsheet software. The employed solution method of backward iteration is intuitively understandable, does not require any programming skills, and is easy to implement, so that it is suitable for classroom exercises with rational-expectations models. The author…

  9. A minimum attention control law for ball catching.

    PubMed

    Jang, Cheongjae; Lee, Jee-eun; Lee, Sohee; Park, F C

    2015-10-06

    Digital implementations of control laws typically involve discretization with respect to both time and space, and a control law that can achieve a task at coarser levels of discretization can be said to require less control attention, and also reduced implementation costs. One means of quantitatively capturing the attention of a control law is to measure the rate of change of the control with respect to changes in state and time. In this paper we present an attention-minimizing control law for ball catching and other target tracking tasks based on Brockett's attention criterion. We first highlight the connections between this attention criterion and some well-known principles from human motor control. Under the assumption that the optimal control law is the sum of a linear time-varying feedback term and a time-varying feedforward term, we derive an LQR-based minimum attention tracking control law that is stable, and obtained efficiently via a finite-dimensional optimization over the symmetric positive-definite matrices. Taking ball catching as our primary task, we perform numerical experiments comparing the performance of the various control strategies examined in the paper. Consistent with prevailing theories about human ball catching, our results exhibit several familiar features, e.g., the transition from open-loop to closed-loop control during the catching movement, and improved robustness to spatiotemporal discretization. The presented control laws are applicable to more general tracking problems that are subject to limited communication resources.
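
    The LQR backbone of such a tracking law is compact to set up; the attention-minimizing refinement of the paper is not reproduced here, and the double-integrator dynamics and weights are hypothetical:

        import numpy as np
        from scipy.linalg import solve_discrete_are

        dt = 0.01
        A = np.array([[1.0, dt], [0.0, 1.0]])        # discretized double integrator
        B = np.array([[0.5 * dt ** 2], [dt]])
        Q = np.diag([100.0, 1.0])                    # penalize position error most
        R = np.array([[0.01]])

        P = solve_discrete_are(A, B, Q, R)           # steady-state Riccati solution
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # feedback gain
        # Tracking: u = -K (x - x_ref) + feedforward term (feedforward omitted here)
        print(K)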

  10. Discretely Integrated Condition Event (DICE) Simulation for Pharmacoeconomics.

    PubMed

    Caro, J Jaime

    2016-07-01

    Several decision-analytic modeling techniques are in use for pharmacoeconomic analyses. Discretely integrated condition event (DICE) simulation is proposed as a unifying approach that has been deliberately designed to meet the modeling requirements in a straightforward transparent way, without forcing assumptions (e.g., only one transition per cycle) or unnecessary complexity. At the core of DICE are conditions that represent aspects that persist over time. They have levels that can change and many may coexist. Events reflect instantaneous occurrences that may modify some conditions or the timing of other events. The conditions are discretely integrated with events by updating their levels at those times. Profiles of determinant values allow for differences among patients in the predictors of the disease course. Any number of valuations (e.g., utility, cost, willingness-to-pay) of conditions and events can be applied concurrently in a single run. A DICE model is conveniently specified in a series of tables that follow a consistent format and the simulation can be implemented fully in MS Excel, facilitating review and validation. DICE incorporates both state-transition (Markov) models and non-resource-constrained discrete event simulation in a single formulation; it can be executed as a cohort or a microsimulation; and deterministically or stochastically.
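
    Stripped to its core, a DICE run is an event loop that updates persistent condition levels at discrete event times; a toy sketch (condition names, event times and valuations all hypothetical, far simpler than a full DICE table specification):

        import heapq

        def dice_run(horizon=20.0):
            """Toy DICE-style simulation: conditions persist with levels; events
            fire at discrete times, update conditions and reschedule events."""
            conditions = {"alive": 1, "disease_level": 0.0, "cost": 0.0}
            events = [(2.0, "progression"), (5.0, "treatment"), (15.0, "death")]
            heapq.heapify(events)
            while events and conditions["alive"]:
                now, name = heapq.heappop(events)
                if now > horizon:
                    break
                if name == "progression":
                    conditions["disease_level"] += 1.0
                    heapq.heappush(events, (now + 3.0, "progression"))  # recurring
                elif name == "treatment":
                    conditions["disease_level"] = max(0.0, conditions["disease_level"] - 1.0)
                    conditions["cost"] += 1000.0   # valuation applied at the event
                elif name == "death":
                    conditions["alive"] = 0
            return conditions

        print(dice_run())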

  11. A preference-ordered discrete-gaming approach to air-combat analysis

    NASA Technical Reports Server (NTRS)

    Kelley, H. J.; Lefton, L.

    1978-01-01

    An approach to one-on-one air-combat analysis is described which employs discrete gaming of a parameterized model featuring choice between several closed-loop control policies. A preference-ordering formulation due to Falco is applied to rational choice between outcomes: win, loss, mutual capture, purposeful disengagement, draw. Approximate optimization is provided by an active-cell scheme similar to Falco's, obtained by a 'backing up' process similar to that of Kopp. The approach is designed primarily for short-duration duels between craft with large-envelope weaponry. Some illustrative computations are presented for an example modeled using constant-speed vehicles and a very rough estimation of energy shifts.

  12. Ultrasound waiting lists: rational queue or extended capacity?

    PubMed

    Brasted, Christopher

    2008-06-01

    The features and issues regarding clinical waiting lists in general, and general ultrasound waiting lists in particular, are reviewed, and operational aspects of providing a general ultrasound service are also discussed. A case study is presented describing a service improvement intervention in a UK NHS hospital's ultrasound department, which gives rise to requirements for a predictive planning model for an ultrasound waiting list. In the course of this, it becomes apparent that a booking system is a more appropriate way of describing the waiting list than a conventional queue. Distinctive features are identified from the literature and the case study as the basis for a predictive model, and a discrete event simulation model is presented which incorporates the distinctive features.

  13. Precise Modelling of Telluric Features in Astronomical Spectra

    NASA Astrophysics Data System (ADS)

    Seifahrt, A.; Käufl, H. U.; Zängl, G.; Bean, J.; Richter, M.; Siebenmorgen, R.

    2010-12-01

    Ground-based astronomical observations suffer from the disturbing effects of the Earth's atmosphere. Oxygen, water vapour and a number of atmospheric trace gases absorb and emit light at discrete frequencies, shaping observing bands in the near- and mid-infrared and leaving their fingerprints - telluric absorption and emission lines - in astronomical spectra. The standard approach of removing the absorption lines is to observe a telluric standard star: a time-consuming and often imperfect solution. Alternatively, the spectral features of the Earth's atmosphere can be modelled using a radiative transfer code, often delivering a satisfying solution that removes these features without additional observations. In addition the model also provides a precise wavelength solution and an instrumental profile.

  14. Discrete cloud structure on Neptune

    NASA Technical Reports Server (NTRS)

    Hammel, H. B.

    1989-01-01

    Recent CCD imaging data for the discrete cloud structure of Neptune show that while cloud features at CH4-band wavelengths are manifest in the southern hemisphere, they have not been encountered in the northern hemisphere since 1986. A literature search has shown that the reflected CH4-band light from the planet came from a single discrete feature at least twice in the last 10 years. Disk-integrated photometry derived from the imaging has demonstrated that a bright cloud feature was responsible for the observed 8900 A diurnal variation in 1986 and 1987.

  15. An effective biometric discretization approach to extract highly discriminative, informative, and privacy-protective binary representation

    NASA Astrophysics Data System (ADS)

    Lim, Meng-Hui; Teoh, Andrew Beng Jin

    2011-12-01

    Biometric discretization derives a binary string for each user based on an ordered set of biometric features. This representative string ought to be discriminative, informative, and privacy-protective when it is employed as a cryptographic key in various security applications upon error correction. However, it is commonly believed that satisfying the first two criteria simultaneously is not feasible, and that a tradeoff between them is inevitable. In this article, we propose an effective fixed bit allocation-based discretization approach which involves discriminative feature extraction, discriminative feature selection, unsupervised quantization (quantization that does not utilize class information), and linearly separable subcode (LSSC)-based encoding to fulfill all the ideal properties of a binary representation extracted for cryptographic applications. In addition, we examine a number of discriminative feature-selection measures for discretization and identify the proper way of setting an important feature-selection parameter. Encouraging experimental results vindicate the feasibility of our approach.

  16. A more accurate scheme for calculating Earth's skin temperature

    NASA Astrophysics Data System (ADS)

    Tsuang, Ben-Jei; Tu, Chia-Ying; Tsai, Jeng-Lin; Dracup, John A.; Arpe, Klaus; Meyers, Tilden

    2009-02-01

    The theoretical framework for the vertical discretization of a ground column for calculating Earth's skin temperature is presented. The suggested discretization is derived from an even heat-content discretization with the optimal effective thickness for layer-temperature simulation. For the same number of levels, the suggested discretization is more accurate in skin temperature as well as surface ground heat flux simulations than those used in some state-of-the-art models. A proposed scheme ("op(3,2,0)") can reduce the normalized root-mean-square error (or RMSE/STD ratio) of the calculated surface ground heat flux of a cropland site significantly to 2% (or 0.9 W m-2), from 11% (or 5 W m-2) by a 5-layer scheme used in ECMWF, from 19% (or 8 W m-2) by a 5-layer scheme used in ECHAM, and from 74% (or 32 W m-2) by a single-layer scheme used in the UCLA GCM. Better accuracy can be achieved by including more layers in the vertical discretization. Similar improvements are expected for other locations with different land types, since the numerical error is inherited into the models for all land types. The proposed scheme can be easily implemented into state-of-the-art climate models for the temperature simulation of snow, ice and soil.

  17. Comparison of computer based instruction to behavior skills training for teaching staff implementation of discrete-trial instruction with an adult with autism.

    PubMed

    Nosik, Melissa R; Williams, W Larry; Garrido, Natalia; Lee, Sarah

    2013-01-01

    In the current study, behavior skills training (BST) is compared to a computer-based training package for teaching discrete-trial instruction to staff teaching an adult with autism. The computer-based training package consisted of instructions, video modeling and feedback. BST consisted of instructions, modeling, rehearsal and feedback. Following training, participants were evaluated on their accuracy in completing the critical skills for running a discrete-trial program. Six participants completed training; three received behavior skills training and three received the computer-based training. Participants in the BST group performed better overall after training and during six-week probes than those in the computer-based training group. There were differences across both groups between research-assistant and natural-environment competency levels. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. Transport synthetic acceleration for long-characteristics assembly-level transport problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zika, M.R.; Adams, M.L.

    2000-02-01

    The authors apply the transport synthetic acceleration (TSA) scheme to the long-characteristics spatial discretization for the two-dimensional assembly-level transport problem. This synthetic method employs a simplified transport operator as its low-order approximation. Thus, in the acceleration step, the authors take advantage of features of the long-characteristics discretization that make it particularly well suited to assembly-level transport problems. The main contribution is to address difficulties unique to the long-characteristics discretization and produce a computationally efficient acceleration scheme. The combination of the long-characteristics discretization, opposing reflecting boundary conditions (which are present in assembly-level transport problems), and TSA presents several challenges. The authors devise methods for overcoming each of them in a computationally efficient way. Since the boundary angular data exist on different grids in the high- and low-order problems, they define restriction and prolongation operations specific to the method of long characteristics to map between the two grids. They implement the conjugate gradient (CG) method in the presence of opposing reflection boundary conditions to solve the TSA low-order equations. The CG iteration may be applied only to symmetric positive definite (SPD) matrices; they prove that the long-characteristics discretization yields an SPD matrix. They present results of the acceleration scheme on a simple test problem, a typical pressurized water reactor assembly, and a typical boiling water reactor assembly.
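
    The low-order solve hinges on the SPD property proved in the paper; the textbook conjugate gradient iteration it licenses looks as follows (a generic sketch, not the production TSA code):

        import numpy as np

        def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
            """Solve A x = b for a symmetric positive definite matrix A."""
            x = np.zeros_like(b)
            r = b - A @ x                    # residual
            p = r.copy()                     # search direction
            rs = r @ r
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rs / (p @ Ap)
                x = x + alpha * p
                r = r - alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p    # conjugate direction update
                rs = rs_new
            return x

        A = np.array([[4.0, 1.0], [1.0, 3.0]])  # small SPD example
        b = np.array([1.0, 2.0])
        print(conjugate_gradient(A, b))          # ~ [0.0909, 0.6364]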

  19. Parallel numerical modeling of hybrid-dimensional compositional non-isothermal Darcy flows in fractured porous media

    NASA Astrophysics Data System (ADS)

    Xing, F.; Masson, R.; Lopez, S.

    2017-09-01

    This paper introduces a new discrete fracture model accounting for non-isothermal compositional multiphase Darcy flows and complex networks of fractures with intersecting, immersed and non-immersed fractures. The so-called hybrid-dimensional model, using a 2D model in the fractures coupled with a 3D model in the matrix, is first derived rigorously starting from the equi-dimensional matrix-fracture model. Then, it is discretized using a fully implicit time integration combined with the Vertex Approximate Gradient (VAG) finite volume scheme, which is adapted to polyhedral meshes and anisotropic heterogeneous media. The fully coupled systems are assembled and solved in parallel using the Single Program Multiple Data (SPMD) paradigm with one layer of ghost cells. This strategy allows for a local assembly of the discrete systems. An efficient preconditioner is implemented to solve the linear systems at each time step and each Newton-type iteration of the simulation. The numerical efficiency of our approach is assessed on different meshes, fracture networks, and physical settings in terms of parallel scalability, nonlinear convergence and linear convergence.

  1. High Order Accurate Finite Difference Modeling of Seismo-Acoustic Wave Propagation in a Moving Atmosphere and a Heterogeneous Earth Model Coupled Across a Realistic Topography

    DOE PAGES

    Petersson, N. Anders; Sjogreen, Bjorn

    2017-04-18

    Here, we develop a numerical method for simultaneously simulating acoustic waves in a realistic moving atmosphere and seismic waves in a heterogeneous earth model, where the motions are coupled across a realistic topography. We model acoustic wave propagation by solving the linearized Euler equations of compressible fluid mechanics. The seismic waves are modeled by the elastic wave equation in a heterogeneous anisotropic material. The motion is coupled by imposing continuity of normal velocity and normal stresses across the topographic interface. Realistic topography is resolved on a curvilinear grid that follows the interface. The governing equations are discretized using high order accurate finite difference methods that satisfy the principle of summation by parts. We apply the energy method to derive the discrete interface conditions and to show that the coupled discretization is stable. The implementation is verified by numerical experiments, and we demonstrate a simulation of coupled wave propagation in a windy atmosphere and a realistic earth model with non-planar topography.

  2. More than a filter: Feature-based attention regulates the distribution of visual working memory resources.

    PubMed

    Dube, Blaire; Emrich, Stephen M; Al-Aidroos, Naseem

    2017-10-01

    Across 2 experiments we revisited the filter account of how feature-based attention regulates visual working memory (VWM). Originally drawing from discrete-capacity ("slot") models, the filter account proposes that attention operates like the "bouncer in the brain," preventing distracting information from being encoded so that VWM resources are reserved for relevant information. Given recent challenges to the assumptions of discrete-capacity models, we investigated whether feature-based attention plays a broader role in regulating memory. Both experiments used partial report tasks in which participants memorized the colors of circle and square stimuli, and we provided a feature-based goal by manipulating the likelihood that 1 shape would be probed over the other across a range of probabilities. By decomposing participants' responses using mixture and variable-precision models, we estimated the contributions of guesses, nontarget responses, and imprecise memory representations to their errors. Consistent with the filter account, participants were less likely to guess when the probed memory item matched the feature-based goal. Interestingly, this effect varied with goal strength, even across high probabilities where goal-matching information should always be prioritized, demonstrating strategic control over filter strength. Beyond this effect of attention on which stimuli were encoded, we also observed effects on how they were encoded: Estimates of both memory precision and nontarget errors varied continuously with feature-based attention. The results offer support for an extension to the filter account, where feature-based attention dynamically regulates the distribution of resources within working memory so that the most relevant items are encoded with the greatest precision. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. A new discrete dipole kernel for quantitative susceptibility mapping.

    PubMed

    Milovic, Carlos; Acosta-Cabronero, Julio; Pinto, José Miguel; Mattern, Hendrik; Andia, Marcelo; Uribe, Sergio; Tejos, Cristian

    2018-09-01

    Most approaches for quantitative susceptibility mapping (QSM) are based on a forward model approximation that employs a continuous Fourier transform operator to solve a differential equation system. Such a formulation, however, is prone to high-frequency aliasing. The aim of this study was to reduce such errors using an alternative dipole kernel formulation based on the discrete Fourier transform and discrete operators. The impact of such an approach on forward model calculation and susceptibility inversion was evaluated in contrast to the continuous formulation, both with synthetic phantoms and with in vivo MRI data. The discrete kernel demonstrated systematically better fits to analytic field solutions, and showed fewer over-oscillations and aliasing artifacts while preserving low- and medium-frequency responses relative to those obtained with the continuous kernel. In the context of QSM estimation, the use of the proposed discrete kernel resulted in error reduction and increased sharpness. This proof-of-concept study demonstrated that discretizing the dipole kernel is advantageous for QSM. The impact on small or narrow structures such as the venous vasculature might be particularly relevant to high-resolution QSM applications with ultra-high field MRI - a topic for future investigations. The proposed dipole kernel can be incorporated straightforwardly into existing QSM routines. Copyright © 2018 Elsevier Inc. All rights reserved.
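
    The continuous/discrete contrast can be sketched in k-space: the continuous kernel is D = 1/3 - kz^2/|k|^2, while one plausible discrete variant (not necessarily the exact kernel of the paper) replaces each k_i^2 with the Fourier symbol of a centered second difference:

        import numpy as np

        def dipole_kernels(shape, voxel=(1.0, 1.0, 1.0)):
            """Continuous vs. discrete-difference dipole kernels for QSM."""
            freqs = [np.fft.fftfreq(n, d=d) for n, d in zip(shape, voxel)]
            kx, ky, kz = np.meshgrid(*freqs, indexing="ij")
            k2 = kx ** 2 + ky ** 2 + kz ** 2
            with np.errstate(invalid="ignore", divide="ignore"):
                D_cont = 1.0 / 3.0 - np.where(k2 > 0, kz ** 2 / k2, 0.0)
            # Discrete variant: k_i^2 -> (2 - 2 cos(2 pi k_i d_i)) / d_i^2, the
            # Fourier symbol of the centered second-difference operator.
            gx, gy, gz = [(2 - 2 * np.cos(2 * np.pi * k * d)) / d ** 2
                          for k, d in zip((kx, ky, kz), voxel)]
            g2 = gx + gy + gz
            with np.errstate(invalid="ignore", divide="ignore"):
                D_disc = 1.0 / 3.0 - np.where(g2 > 0, gz / g2, 0.0)
            return D_cont, D_disc

        D_cont, D_disc = dipole_kernels((64, 64, 64))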

  4. Polynomial algebra of discrete models in systems biology.

    PubMed

    Veliz-Cuba, Alan; Jarrah, Abdul Salam; Laubenbacher, Reinhard

    2010-07-01

    An increasing number of discrete mathematical models are being published in Systems Biology, ranging from Boolean network models to logical models and Petri nets. They are used to model a variety of biochemical networks, such as metabolic networks, gene regulatory networks and signal transduction networks. There is increasing evidence that such models can capture key dynamic features of biological networks and can be used successfully for hypothesis generation. This article provides a unified framework that can aid the mathematical analysis of Boolean network models, logical models and Petri nets. They can be represented as polynomial dynamical systems, which allows the use of a variety of mathematical tools from computer algebra for their analysis. Algorithms are presented for the translation into polynomial dynamical systems. Examples are given of how polynomial algebra can be used for the model analysis.
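
    The translation is concrete: over F2, AND is multiplication, XOR is addition, NOT x is 1 + x, and x OR y is x + y + xy. A toy three-gene Boolean network (hypothetical) and a brute-force search for its fixed points:

        # Boolean network written as a polynomial dynamical system over F2:
        #   x1' = x1 AND x2  ->  f1 = x1 * x2
        #   x2' = x1 OR  x2  ->  f2 = x1 + x2 + x1 * x2
        #   x3' = x1 XOR x2  ->  f3 = x1 + x2
        def step(state):
            x1, x2, x3 = state
            return ((x1 * x2) % 2, (x1 + x2 + x1 * x2) % 2, (x1 + x2) % 2)

        # Enumerate the 2^3 states to find the fixed points of the dynamics
        for s in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]:
            if step(s) == s:
                print("fixed point:", s)   # (0, 0, 0) and (1, 1, 0)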

  5. A structure-preserving method for a class of nonlinear dissipative wave equations with Riesz space-fractional derivatives

    NASA Astrophysics Data System (ADS)

    Macías-Díaz, J. E.

    2017-12-01

    In this manuscript, we consider an initial-boundary-value problem governed by a (1 + 1)-dimensional hyperbolic partial differential equation with constant damping that generalizes many nonlinear wave equations from mathematical physics. The model considers the presence of a spatial Laplacian of fractional order which is defined in terms of Riesz fractional derivatives, as well as the inclusion of a generic continuously differentiable potential. It is known that the undamped regime has an associated positive energy functional, and we show here that it is preserved throughout time under suitable boundary conditions. To approximate the solutions of this model, we propose a finite-difference discretization based on fractional centered differences. Some discrete quantities are proposed in this work to estimate the energy functional, and we show that the numerical method is capable of conserving the discrete energy under the same boundary conditions for which the continuous model is conservative. Moreover, we establish suitable computational constraints under which the discrete energy of the system is positive. The method is second-order consistent, and is both stable and convergent. The numerical simulations shown here illustrate the most important features of our numerical methodology.
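
    The fractional centered differences at the heart of the scheme have explicit Gamma-function coefficients; the sketch below applies them to a grid function, following Ortigueira-type coefficients for 1 < alpha < 2 (a sketch under these assumptions, not the authors' full energy-conserving method):

        import numpy as np
        from scipy.special import gamma

        def riesz_derivative(u, alpha, h):
            """Fractional centered differences: approximate the Riesz derivative
            d^alpha u / d|x|^alpha by -h^(-alpha) * sum_k g_k u_{j-k}."""
            u = np.asarray(u, dtype=float)
            n = len(u)
            k = np.arange(-(n - 1), n)
            g = (-1.0) ** k * gamma(alpha + 1) / (
                gamma(alpha / 2 - k + 1) * gamma(alpha / 2 + k + 1))
            out = np.empty(n)
            for j in range(n):
                kk = j - np.arange(n)          # k values for which j - k hits the grid
                out[j] = -np.sum(g[kk + n - 1] * u)
            return out / h ** alpha
        # Sanity check: as alpha -> 2 the stencil tends to (u_{j+1} - 2u_j + u_{j-1}) / h^2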

  6. Discrete wavelength selection for the optical readout of a metamaterial biosensing system for glucose concentration estimation via a support vector regression model.

    PubMed

    Teutsch, T; Mesch, M; Giessen, H; Tarin, C

    2015-01-01

    In this contribution, a method is presented to select discrete wavelengths that allow an accurate estimation of the glucose concentration in a biosensing system based on metamaterials. The sensing concept is adapted to the particular application of ophthalmic glucose sensing by covering the metamaterial with a glucose-sensitive hydrogel, and the sensor readout is performed optically. Because a spectrometer is not suitable in a mobile context, a few discrete wavelengths must be selected to estimate the glucose concentration. The developed selection methods are based on nonlinear support vector regression (SVR) models. Two selection methods are compared, and it is shown that the wavelengths selected by a sequential forward feature selection algorithm yield improved estimation accuracy. The presented method can be easily applied to different metamaterial layouts and hydrogel configurations.
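
    Sequential forward selection around an SVR estimator is directly available in scikit-learn; the spectra matrix and glucose targets below are synthetic stand-ins for the hydrogel-metamaterial readout:

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.feature_selection import SequentialFeatureSelector
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 50))   # readout at 50 candidate wavelengths
        y = 3 * X[:, 10] - 2 * X[:, 37] + rng.normal(0, 0.1, 200)  # synthetic glucose

        sfs = SequentialFeatureSelector(SVR(kernel="rbf"), n_features_to_select=4,
                                        direction="forward", cv=5)
        sfs.fit(StandardScaler().fit_transform(X), y)
        print("selected wavelength indices:", np.flatnonzero(sfs.get_support()))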

  7. Stability Analysis of Algebraic Reconstruction for Immersed Boundary Methods with Application in Flow and Transport in Porous Media

    NASA Astrophysics Data System (ADS)

    Yousefzadeh, M.; Battiato, I.

    2017-12-01

    Flow and reactive transport problems in porous media often involve complex geometries with stationary or evolving boundaries due to absorption and dissolution processes. Grid-based methods (e.g. finite volume, finite element, etc.) are a vital tool for studying these problems, yet implementing them first requires deciding what type of grid to use. Among the possible answers, Cartesian grids are one of the most attractive options, as they possess a simple discretization stencil and are usually straightforward to generate at essentially no computational cost. The Immersed Boundary Method (IBM), a Cartesian-grid-based methodology, maintains most of the useful features of structured grids while exhibiting a high level of resilience in dealing with complex geometries. These features make it increasingly attractive for modeling transport in evolving porous media, as the cost of grid generation is greatly reduced. Yet stability issues and severe time-step restrictions due to explicit-time implementation, combined with limited studies on the implementation of Neumann (constant flux) and linear and non-linear Robin (e.g. reaction) boundary conditions (BCs), have significantly limited the applicability of IBMs to transport in porous media. We have developed an implicit IBM capable of handling all types of BCs and addressed several numerical issues, including unconditional stability criteria, compactness, and the reduction of spurious oscillations near the immersed boundary. We tested the method for several transport and flow scenarios, including dissolution processes in porous media, and demonstrate its capabilities. Successful validation against both experimental and numerical data has been carried out.

  8. Implementing the Standards. Teaching Discrete Mathematics in Grades 7-12.

    ERIC Educational Resources Information Center

    Hart, Eric W.; And Others

    1990-01-01

    Discrete mathematics is defined briefly. A course in discrete mathematics for high school students and the teaching of discrete mathematics in grades 7 and 8, including finite differences, recursion, and graph theory, are discussed. (CW)

  9. A discrete-time adaptive control scheme for robot manipulators

    NASA Technical Reports Server (NTRS)

    Tarokh, M.

    1990-01-01

    A discrete-time model reference adaptive control scheme is developed for trajectory tracking of robot manipulators. The scheme utilizes feedback, feedforward, and auxiliary signals, obtained from joint angle measurement through simple expressions. Hyperstability theory is utilized to derive the adaptation laws for the controller gain matrices. It is shown that trajectory tracking is achieved despite gross robot parameter variation and uncertainties. The method offers considerable design flexibility and enables the designer to improve the performance of the control system by adjusting free design parameters. The discrete-time adaptation algorithm is extremely simple and is therefore suitable for real-time implementation. Simulations and experimental results are given to demonstrate the performance of the scheme.

  10. 3D Discrete element approach to the problem on abutment pressure in a gently dipping coal seam

    NASA Astrophysics Data System (ADS)

    Klishin, S. V.; Revuzhenko, A. F.

    2017-09-01

    Using the discrete element method, the authors have carried out a 3D implementation of the problem of strength loss in the surrounding rock mass in the vicinity of a production heading and of abutment pressure in a gently dipping coal seam. The calculation of forces at the contacts between particles accounts for friction, rolling resistance and viscosity. Between the discrete particles modeling the coal seam, the surrounding rock mass and the broken rocks, an elastic connecting element is introduced to allow the simulation of coherent materials. The paper presents the kinematic patterns of rock mass deformation, the stresses in particles and the graph of the abutment pressure behavior in the coal seam.

  11. Coupling discrete and continuum concentration particle models for multiscale and hybrid molecular-continuum simulations

    DOE PAGES

    Petsev, Nikolai Dimitrov; Leal, L. Gary; Shell, M. Scott

    2017-12-21

    Hybrid molecular-continuum simulation techniques afford a number of advantages for problems in the rapidly burgeoning area of nanoscale engineering and technology, though they are typically quite complex to implement and limited to single-component fluid systems. We describe an approach for modeling multicomponent hydrodynamic problems spanning multiple length scales when using particle-based descriptions for both the finely-resolved (e.g. molecular dynamics) and coarse-grained (e.g. continuum) subregions within an overall simulation domain. This technique is based on the multiscale methodology previously developed for mesoscale binary fluids [N. D. Petsev, L. G. Leal, and M. S. Shell, J. Chem. Phys. 144, 84115 (2016)], simulated using a particle-based continuum method known as smoothed dissipative particle dynamics (SDPD). An important application of this approach is the ability to perform coupled molecular dynamics (MD) and continuum modeling of molecularly miscible binary mixtures. In order to validate this technique, we investigate multicomponent hybrid MD-continuum simulations at equilibrium, as well as non-equilibrium cases featuring concentration gradients.

  13. Orienting in Virtual Environments: How Are Surface Features and Environmental Geometry Weighted in an Orientation Task?

    ERIC Educational Resources Information Center

    Kelly, Debbie M.; Bischof, Walter F.

    2008-01-01

    We investigated how human adults orient in enclosed virtual environments, when discrete landmark information is not available and participants have to rely on geometric and featural information on the environmental surfaces. In contrast to earlier studies, where, for women, the featural information from discrete landmarks overshadowed the encoding…

  14. Optimal region of latching activity in an adaptive Potts model for networks of neurons

    NASA Astrophysics Data System (ADS)

    Abdollah-nia, Mohammad-Farshad; Saeedghalati, Mohammadkarim; Abbassian, Abdolhossein

    2012-02-01

    In statistical mechanics, the Potts model is a model for interacting spins with more than two discrete states. Neural networks which exhibit features of learning and associative memory can also be modeled by a system of Potts spins. A spontaneous behavior of hopping from one discrete attractor state to another (referred to as latching) has been proposed to be associated with higher cognitive functions. Here we propose a model in which both the stochastic dynamics of Potts models and an adaptive potential function are present. A latching dynamics is observed in a limited region of the noise (temperature)-adaptation parameter space. We hence suggest noise as a fundamental factor in such alternations, alongside adaptation. From a dynamical systems point of view, the noise-adaptation alternations may be the underlying mechanism for multi-stability in attractor-based models. An optimality criterion for realistic models is finally inferred.

  15. A Parallel Framework with Block Matrices of a Discrete Fourier Transform for Vector-Valued Discrete-Time Signals.

    PubMed

    Soto-Quiros, Pablo

    2015-01-01

    This paper presents a parallel implementation of a kind of discrete Fourier transform (DFT): the vector-valued DFT. The vector-valued DFT is a novel tool to analyze the spectra of vector-valued discrete-time signals. This parallel implementation is developed in terms of a mathematical framework with a set of block matrix operations. These block matrix operations contribute to the analysis, design, and implementation of parallel algorithms in multicore processors. In this work, an implementation and experimental investigation of the mathematical framework are performed using MATLAB with the Parallel Computing Toolbox. We found that there is an advantage to using multicore processors and a parallel computing environment to reduce the high execution time. Additionally, the speedup increases as the number of logical processors and the length of the signal increase.
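
    To make the block-matrix formulation concrete, here is a small numpy sketch (not the paper's MATLAB/Parallel Computing Toolbox code) showing that the vector-valued DFT of an (N, m) signal is the Kronecker product (F_N kron I_m) acting on the stacked signal, which also exposes the component-wise parallelism exploited on multicore processors.

```python
import numpy as np

def vector_valued_dft(x):
    """DFT of a vector-valued signal x with shape (N, m): each of the m
    components is transformed independently, which is exactly the block
    form (F_N kron I_m) applied to the stacked signal."""
    N = x.shape[0]
    n = np.arange(N)
    F = np.exp(-2j * np.pi * np.outer(n, n) / N)  # standard N x N DFT matrix
    return F @ x

# Consistency check against the explicit block-matrix (Kronecker) form
rng = np.random.default_rng(1)
x = rng.normal(size=(8, 3)) + 1j * rng.normal(size=(8, 3))
n = np.arange(8)
F = np.exp(-2j * np.pi * np.outer(n, n) / 8)
block = np.kron(F, np.eye(3))                     # (F_8 kron I_3)
assert np.allclose((block @ x.reshape(-1)).reshape(8, 3), vector_valued_dft(x))
# The m component transforms are independent, hence trivially parallel
print(vector_valued_dft(x).shape)
```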

  16. Multiscale modeling of dislocation-precipitate interactions in Fe: From molecular dynamics to discrete dislocations.

    PubMed

    Lehtinen, Arttu; Granberg, Fredric; Laurson, Lasse; Nordlund, Kai; Alava, Mikko J

    2016-01-01

    The stress-driven motion of dislocations in crystalline solids, and thus the ensuing plastic deformation process, is greatly influenced by the presence or absence of various pointlike defects such as precipitates or solute atoms. These defects act as obstacles for dislocation motion and hence affect the mechanical properties of the material. Here we combine molecular dynamics studies with three-dimensional discrete dislocation dynamics simulations in order to model the interaction between different kinds of precipitates and a 1/2〈111〉{110} edge dislocation in BCC iron. We have implemented immobile spherical precipitates into the ParaDis discrete dislocation dynamics code, with the dislocations interacting with the precipitates via a Gaussian potential, generating a normal force acting on the dislocation segments. The parameters used in the discrete dislocation dynamics simulations for the precipitate potential, the dislocation mobility, shear modulus, and dislocation core energy are obtained from molecular dynamics simulations. We compare the critical stresses needed to unpin the dislocation from the precipitate in molecular dynamics and discrete dislocation dynamics simulations in order to fit the two methods together, and discuss the variety of relevant pinning and depinning mechanisms.
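
    As an illustration of the obstacle model described above, the following sketch evaluates the force that a Gaussian precipitate potential exerts on dislocation segment positions. The functional form follows the abstract, but the prefactor, width, and units are placeholder assumptions rather than the MD-fitted ParaDis parameters.

```python
import numpy as np

def precipitate_force(seg_pos, prec_pos, U0=1.0, sigma=2.0):
    """Force on dislocation segment positions from an immobile spherical
    precipitate modeled as a Gaussian potential barrier
    U(r) = U0 * exp(-|r - r_p|^2 / (2 sigma^2)), so F = -grad U.
    U0 and sigma are illustrative placeholders; in the paper such
    parameters are fitted to molecular dynamics results."""
    d = seg_pos - prec_pos                       # shape (n_segments, 3)
    r2 = np.sum(d * d, axis=1, keepdims=True)
    # Repulsive force pointing away from the precipitate center
    return (U0 / sigma**2) * d * np.exp(-r2 / (2.0 * sigma**2))

segments = np.array([[0.0, 0.0, 0.0], [1.0, 0.5, 0.0], [4.0, 0.0, 0.0]])
print(precipitate_force(segments, prec_pos=np.array([1.0, 0.0, 0.0])))
```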

  17. Hamiltonian dynamics of a quantum of space: hidden symmetries and spectrum of the volume operator, and discrete orthogonal polynomials

    NASA Astrophysics Data System (ADS)

    Aquilanti, Vincenzo; Marinelli, Dimitri; Marzuoli, Annalisa

    2013-05-01

    The action of the quantum mechanical volume operator, introduced in connection with a symmetric representation of the three-body problem and recently recognized to play a fundamental role in discretized quantum gravity models, can be given as a second-order difference equation which, by a complex phase change, we turn into a discrete Schrödinger-like equation. The introduction of discrete potential-like functions reveals the surprising crucial role here of hidden symmetries, first discovered by Regge for the quantum mechanical 6j symbols; insight is provided into the underlying geometric features. The spectrum and wavefunctions of the volume operator are discussed from the viewpoint of the Hamiltonian evolution of an elementary ‘quantum of space’, and a transparent asymptotic picture of the semiclassical and classical regimes emerges. The definition of coordinates adapted to the Regge symmetry is exploited for the construction of a novel set of discrete orthogonal polynomials, characterizing the oscillatory components of torsion-like modes.
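
    The reduction of a second-order (three-term) difference equation to a symmetric tridiagonal eigenproblem, which underlies the discrete Schrödinger-like form mentioned above, can be sketched as follows. The diagonal and off-diagonal coefficients here are placeholders, not the actual volume-operator matrix elements.

```python
import numpy as np

# Generic sketch: a three-term difference equation
#   a_k f_{k-1} + b_k f_k + a_{k+1} f_{k+1} = E f_k
# is an eigenproblem for a symmetric tridiagonal matrix, i.e. a discrete
# Schrodinger-like equation with a "potential-like" diagonal.
n = 50
k = np.arange(n)
b = (k - n / 2) ** 2 / n               # placeholder potential-like diagonal
a = np.sqrt(k[1:] * (n - k[1:])) / n   # placeholder off-diagonal couplings

H = np.diag(b) + np.diag(a, 1) + np.diag(a, -1)
evals, evecs = np.linalg.eigh(H)
print("lowest eigenvalues:", np.round(evals[:4], 4))
```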

  18. MCNP (Monte Carlo Neutron Photon) capabilities for nuclear well logging calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forster, R.A.; Little, R.C.; Briesmeister, J.F.

    The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. The general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo Neutron Photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particles or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data. A rich collection of variance reduction features can greatly increase the efficiency of a calculation. MCNP is written in FORTRAN 77 and has been run on a variety of computer systems from scientific workstations to supercomputers. The next production version of MCNP will include features such as continuous-energy electron transport and a multitasking option. Areas of ongoing research of interest to the well logging community include angle biasing, adaptive Monte Carlo, improved discrete ordinates capabilities, and discrete ordinates/Monte Carlo hybrid development. Los Alamos has requested approval by the Department of Energy to create a Radiation Transport Computational Facility under their User Facility Program to increase external interactions with industry, universities, and other government organizations. 21 refs.

  19. SPARK: A Framework for Multi-Scale Agent-Based Biomedical Modeling.

    PubMed

    Solovyev, Alexey; Mikheev, Maxim; Zhou, Leming; Dutta-Moscato, Joyeeta; Ziraldo, Cordelia; An, Gary; Vodovotz, Yoram; Mi, Qi

    2010-01-01

    Multi-scale modeling of complex biological systems remains a central challenge in the systems biology community. A method of dynamic knowledge representation known as agent-based modeling enables the study of higher level behavior emerging from discrete events performed by individual components. With the advancement of computer technology, agent-based modeling has emerged as an innovative technique to model the complexities of systems biology. In this work, the authors describe SPARK (Simple Platform for Agent-based Representation of Knowledge), a framework for agent-based modeling specifically designed for systems-level biomedical model development. SPARK is a stand-alone application written in Java. It provides a user-friendly interface, and a simple programming language for developing Agent-Based Models (ABMs). SPARK has the following features specialized for modeling biomedical systems: 1) continuous space that can simulate real physical space; 2) flexible agent size and shape that can represent the relative proportions of various cell types; 3) multiple spaces that can concurrently simulate and visualize multiple scales in biomedical models; 4) a convenient graphical user interface. Existing ABMs of diabetic foot ulcers and acute inflammation were implemented in SPARK. Models of identical complexity were run in both NetLogo and SPARK; the SPARK-based models ran two to three times faster.

  20. Accurate complex scaling of three dimensional numerical potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cerioni, Alessandro; Genovese, Luigi; Duchemin, Ivan

    2013-05-28

    The complex scaling method, which consists in continuing spatial coordinates into the complex plane, is a well-established method that allows one to compute resonant eigenfunctions of the time-independent Schrödinger operator. Whenever it is desirable to apply the complex scaling to investigate resonances in physical systems defined on numerical discrete grids, the most direct approach relies on the application of a similarity transformation to the original, unscaled Hamiltonian. We show that such an approach can be conveniently implemented in the Daubechies wavelet basis set, featuring a very promising level of generality, high accuracy, and no need for artificial convergence parameters. Complex scaling of three-dimensional numerical potentials can be efficiently and accurately performed. By carrying out an illustrative resonant state computation in the case of a one-dimensional model potential, we then show that our wavelet-based approach may disclose exciting new opportunities in the field of computational non-Hermitian quantum mechanics.
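
    A minimal illustration of the similarity-transformation route described above, on a uniform finite-difference grid rather than the paper's Daubechies wavelet basis: the coordinate is rotated as x -> x e^{i theta} and the resulting non-Hermitian matrix is diagonalized. The barrier potential below is a toy chosen only for illustration.

```python
import numpy as np

def complex_scaled_spectrum(theta, n=400, L=20.0):
    """Eigenvalues of the complex-scaled 1D Hamiltonian
    H(theta) = -0.5 e^{-2 i theta} d^2/dx^2 + V(x e^{i theta})
    on a uniform finite-difference grid (Dirichlet boundaries).
    Resonances appear as complex eigenvalues whose positions are
    (nearly) independent of theta."""
    x = np.linspace(-L, L, n)
    h = x[1] - x[0]
    V = lambda z: (0.5 * z**2 - 0.8) * np.exp(-0.1 * z**2)  # toy barrier
    # Standard second-order finite-difference Laplacian
    D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
          + np.diag(np.ones(n - 1), -1)) / h**2
    H = -0.5 * np.exp(-2j * theta) * D2 + np.diag(V(x * np.exp(1j * theta)))
    return np.linalg.eigvals(H)

ev = complex_scaled_spectrum(theta=0.3)
print(ev[np.argsort(ev.real)][:5])  # low-lying part of the rotated spectrum
```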

  1. Hierarchical Task Network Prototyping In Unity3d

    DTIC Science & Technology

    2016-06-01

    … visually debug. Here we present a solution for prototyping HTNs by extending an existing commercial implementation of Behavior Trees within the Unity3D game engine prior to building the HTN in COMBATXXI. Existing HTNs were emulated within this framework. Keywords: HTN, dynamic behaviors, behavior prototyping, agent-based simulation, entity-level combat model, game engine, discrete event simulation, virtual …

  2. Network simulation using the simulation language for alternate modeling (SLAM 2)

    NASA Technical Reports Server (NTRS)

    Shen, S.; Morris, D. W.

    1983-01-01

    The simulation language for alternate modeling (SLAM 2) is a general purpose language that combines network, discrete event, and continuous modeling capabilities in a single language system. The efficacy of the system's network modeling is examined and discussed. Examples are given of the symbolism that is used, and an example problem and model are derived. The results are discussed in terms of the ease of programming, special features, and system limitations. The system offers many features which allow rapid model development and provides an informative standardized output. The system also has limitations which may cause undetected errors and misleading reports unless the user is aware of these programming characteristics.

  3. Electromagnetic scattering and radiation from microstrip patch antennas and spirals residing in a cavity

    NASA Technical Reports Server (NTRS)

    Volakis, J. L.; Gong, J.; Alexanian, A.; Woo, A.

    1992-01-01

    A new hybrid method is presented for the analysis of the scattering and radiation by conformal antennas and arrays comprised of circular or rectangular elements. In addition, calculations for cavity-backed spiral antennas are given. The method employs a finite element formulation within the cavity and the boundary integral (exact boundary condition) for terminating the mesh. By virtue of the finite element discretization, the method has no restrictions on the geometry and composition of the cavity or its termination. Furthermore, because of the convolutional nature of the boundary integral and the inherent sparseness of the finite element matrix, the storage requirement is kept very low at O(n). These unique features of the method have already been exploited in other scattering applications and have permitted the analysis of large-size structures with remarkable efficiency. In this report, we describe the method's formulation and implementation for circular and rectangular patch antennas in different superstrate and substrate configurations which may also include the presence of lumped loads and resistive sheets/cards. Also, various modeling approaches are investigated and implemented for characterizing a variety of feed structures to permit the computation of the input impedance and radiation pattern. Many computational examples for rectangular and circular patch configurations are presented which demonstrate the method's versatility, modeling capability and accuracy.

  4. Combined discrete particle and continuum model predicting solid-state fermentation in a drum fermentor.

    PubMed

    Schutyser, M A I; Briels, W J; Boom, R M; Rinzema, A

    2004-05-20

    The development of mathematical models facilitates industrial (large-scale) application of solid-state fermentation (SSF). In this study, a two-phase model of a drum fermentor is developed that consists of a discrete particle model (solid phase) and a continuum model (gas phase). The continuum model describes the distribution of air in the bed injected via an aeration pipe. The discrete particle model describes the solid phase. In previous work, mixing during SSF was predicted with the discrete particle model, although mixing simulations were not carried out in the current work. Heat and mass transfer between the two phases and biomass growth were implemented in the two-phase model. Validation experiments were conducted in a 28-dm3 drum fermentor. In this fermentor, sufficient aeration was provided to control the temperatures near the optimum value for growth during the first 45-50 hours. Several simulations were also conducted for different fermentor scales. Forced aeration via a single pipe in the drum fermentors did not provide homogeneous cooling in the substrate bed. Due to large temperature gradients, biomass yield decreased severely with increasing size of the fermentor. Improvement of air distribution would be required to avoid the need for frequent mixing events, during which growth is hampered. From these results, it was concluded that the two-phase model developed is a powerful tool to investigate design and scale-up of aerated (mixed) SSF fermentors. Copyright 2004 Wiley Periodicals, Inc.

  5. Development of a Finite-Difference Time Domain (FDTD) Model for Propagation of Transient Sounds in Very Shallow Water.

    PubMed

    Sprague, Mark W; Luczkovich, Joseph J

    2016-01-01

    This finite-difference time domain (FDTD) model for sound propagation in very shallow water uses pressure and velocity grids with both 3-dimensional Cartesian and 2-dimensional cylindrical implementations. Parameters, including water and sediment properties, can vary in each dimension. Steady-state and transient signals from discrete and distributed sources, such as the surface of a vibrating pile, can be used. The cylindrical implementation uses less computation but requires axial symmetry. The Cartesian implementation allows asymmetry. FDTD calculations compare well with those of a split-step parabolic equation. Applications include modeling the propagation of individual fish sounds, fish aggregation sounds, and distributed sources.
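
    A one-dimensional sketch of the staggered pressure/velocity leapfrog update that FDTD acoustic schemes of this kind are built on (the model in the abstract is 3D Cartesian and 2D cylindrical with spatially varying material properties; all values below are illustrative):

```python
import numpy as np

# Minimal 1D staggered-grid FDTD: pressure at integer points, particle
# velocity at half points, advanced in a leapfrog fashion.
nx, nt = 400, 800
c, rho = 1500.0, 1000.0          # water sound speed (m/s), density (kg/m^3)
dx = 0.05                        # grid spacing (m)
dt = 0.5 * dx / c                # CFL-stable time step

p = np.zeros(nx)                 # pressure grid
v = np.zeros(nx - 1)             # velocity grid (staggered)

for it in range(nt):
    v -= dt / (rho * dx) * (p[1:] - p[:-1])             # momentum equation
    p[1:-1] -= dt * rho * c**2 / dx * (v[1:] - v[:-1])  # continuity equation
    # Discrete transient source: a Gaussian pulse injected at one node
    p[nx // 4] += np.exp(-(((it * dt) - 0.004) / 0.001) ** 2)

print(f"peak |p| after {nt} steps: {np.abs(p).max():.3e}")
```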

  6. A multidimensional representation model of geographic features

    USGS Publications Warehouse

    Usery, E. Lynn; Timson, George; Coletti, Mark

    2016-01-28

    A multidimensional model of geographic features has been developed and implemented with data from The National Map of the U.S. Geological Survey. The model, programmed in C++ and implemented as a feature library, was tested with data from the National Hydrography Dataset demonstrating the capability to handle changes in feature attributes, such as increases in chlorine concentration in a stream, and feature geometry, such as the changing shoreline of barrier islands over time. Data can be entered directly, from a comma separated file, or features with attributes and relationships can be automatically populated in the model from data in the Spatial Data Transfer Standard format.

  7. Comparison of algorithms for solving the sign problem in the O(3) model in 1+1 dimensions at finite chemical potential

    NASA Astrophysics Data System (ADS)

    Katz, S. D.; Niedermayer, F.; Nógrádi, D.; Török, Cs.

    2017-03-01

    We study three possible ways to circumvent the sign problem in the O(3) nonlinear sigma model in 1+1 dimensions. We compare the results of the worm algorithm to complex Langevin and multiparameter reweighting. Using the worm algorithm, the thermodynamics of the model is investigated, and continuum results are shown for the pressure at different μ/T values in the range 0-4. By performing T = 0 simulations using the worm algorithm, the Silver Blaze phenomenon is reproduced. Regarding the complex Langevin, we test various implementations of discretizing the complex Langevin equation. We found that the exponentialized Euler discretization of the Langevin equation gives wrong results for the action and the density at low T/m. By performing a continuum extrapolation, we found that this discrepancy does not disappear and depends slightly on temperature. The discretization with spherical coordinates performs similarly at low μ/T but breaks down also at some higher temperatures at high μ/T. However, a third discretization that uses a constraining force to achieve the ϕ² = 1 condition gives correct results for the action but wrong results for the density at low μ/T.
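
    For readers unfamiliar with the method, the following toy shows an Euler-discretized complex Langevin update for a one-variable Gaussian action with complex coupling, where the exact answer is known. It illustrates only the kind of update rule being discretized, not the O(3) model or the exponentialized, spherical, or constrained variants compared in the paper.

```python
import numpy as np

# One-variable toy: Euler-discretized complex Langevin for the action
# S(x) = 0.5 * sigma * x^2 with complex sigma, where <x^2> = 1/sigma.
rng = np.random.default_rng(42)
sigma = 1.0 + 0.5j
eps = 1e-3                       # Langevin step size
n_steps, n_therm = 200_000, 5_000

z = 0.0 + 0.0j                   # complexified field variable
acc = []
for it in range(n_steps):
    drift = -sigma * z           # -dS/dz
    z += eps * drift + np.sqrt(2.0 * eps) * rng.normal()  # real noise
    if it >= n_therm:
        acc.append(z * z)

print("estimate:", np.mean(acc), "  exact:", 1.0 / sigma)
```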

  8. On detection and visualization techniques for cyber security situation awareness

    NASA Astrophysics Data System (ADS)

    Yu, Wei; Wei, Shixiao; Shen, Dan; Blowers, Misty; Blasch, Erik P.; Pham, Khanh D.; Chen, Genshe; Zhang, Hanlin; Lu, Chao

    2013-05-01

    Networking technologies are growing exponentially to meet worldwide communication requirements, and the rapid growth and pervasiveness of network communications pose serious security issues. In this paper, we aim to develop an integrated network defense system with situation awareness capabilities that presents useful information to human analysts. In particular, we implement a prototypical system that includes both distributed passive and active network sensors and traffic visualization features, such as 1D, 2D and 3D based network traffic displays. To effectively detect attacks, we also implement algorithms that transform real-world IP address data into images and study the patterns of attacks, using both a discrete wavelet transform (DWT)-based scheme and a statistics-based scheme to detect attacks. Through an extensive simulation study, our data validate the effectiveness of the implemented defense system.
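
    A toy version of the detection idea: one level of a Haar discrete wavelet transform exposes abrupt changes in a traffic trace, and a simple statistical threshold on the detail coefficients flags them. The trace, wavelet choice, and threshold rule are assumptions for illustration, not the implemented system.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform:
    returns (approximation, detail) coefficient arrays."""
    x = x[: len(x) // 2 * 2]            # ensure even length
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

# Toy traffic trace: smooth baseline plus an abrupt burst ("attack")
rng = np.random.default_rng(7)
t = np.arange(1024)
traffic = 100.0 + 10.0 * np.sin(t / 50.0) + rng.normal(0.0, 2.0, t.size)
traffic[601:621] += 80.0                # injected anomaly

a, d = haar_dwt(traffic)
score = np.abs(d)                       # detail coefficients respond to edges
threshold = score.mean() + 4.0 * score.std()   # simple statistical rule
alarms = np.nonzero(score > threshold)[0] * 2  # map back to sample indices
print("flagged sample indices:", alarms)       # edges of the injected burst
```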

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naughton, M.J.; Bourke, W.; Browning, G.L.

    The convergence of spectral model numerical solutions of the global shallow-water equations is examined as a function of the time step and the spectral truncation. The contributions to the errors due to the spatial and temporal discretizations are separately identified and compared. Numerical convergence experiments are performed with the inviscid equations from smooth (Rossby-Haurwitz wave) and observed (R45 atmospheric analysis) initial conditions, and also with the diffusive shallow-water equations. Results are compared with the forced inviscid shallow-water equations case studied by Browning et al. Reduction of the time discretization error by the removal of fast waves from the solution using initialization is shown. The effects of forcing and diffusion on the convergence are discussed. Time truncation errors are found to dominate when a feature is large scale and well resolved; spatial truncation errors dominate for small-scale features, and also for large-scale features after the small scales have affected them. Possible implications of these results for global atmospheric modeling are discussed. 31 refs., 14 figs., 4 tabs.

  10. Implementation of quantum and classical discrete fractional Fourier transforms.

    PubMed

    Weimann, Steffen; Perez-Leija, Armando; Lebugle, Maxime; Keil, Robert; Tichy, Malte; Gräfe, Markus; Heilmann, René; Nolte, Stefan; Moya-Cessa, Hector; Weihs, Gregor; Christodoulides, Demetrios N; Szameit, Alexander

    2016-03-23

    Fourier transforms, integer and fractional, are ubiquitous mathematical tools in basic and applied science. Certainly, since the ordinary Fourier transform is merely a particular case of a continuous set of fractional Fourier domains, every property and application of the ordinary Fourier transform becomes a special case of the fractional Fourier transform. Despite the great practical importance of the discrete Fourier transform, implementation of fractional orders of the corresponding discrete operation has been elusive. Here we report classical and quantum optical realizations of the discrete fractional Fourier transform. In the context of classical optics, we implement discrete fractional Fourier transforms of exemplary wave functions and experimentally demonstrate the shift theorem. Moreover, we apply this approach in the quantum realm to Fourier transform separable and path-entangled biphoton wave functions. The proposed approach is versatile and could find applications in various fields where Fourier transforms are essential tools.
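
    One standard way to realize a discrete fractional Fourier transform numerically is to raise the unitary DFT matrix to a fractional power through an eigendecomposition, sketched below. The canonical DFrFT additionally fixes the degenerate eigenspaces with Hermite-Gaussian-like eigenvectors, a step this sketch omits, so the resulting operator is one valid but non-unique choice.

```python
import numpy as np

def dfrft_matrix(N, a):
    """A discrete fractional Fourier transform of order a, obtained by
    raising the unitary DFT matrix to the power a via eigendecomposition
    (principal branch for the fractional eigenvalue powers)."""
    n = np.arange(N)
    F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)  # unitary DFT
    lam, V = np.linalg.eig(F)
    return V @ np.diag(lam ** a) @ np.linalg.inv(V)

N = 16
x = np.zeros(N)
x[3] = 1.0
F_half = dfrft_matrix(N, 0.5)
# Order additivity: two half-order transforms compose to the full DFT
assert np.allclose(F_half @ F_half, dfrft_matrix(N, 1.0), atol=1e-8)
print(np.round(F_half @ x, 3))
```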

  11. Efficient Construction of Discrete Adjoint Operators on Unstructured Grids by Using Complex Variables

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Kleb, William L.

    2005-01-01

    A methodology is developed and implemented to mitigate the lengthy software development cycle typically associated with constructing a discrete adjoint solver for aerodynamic simulations. The approach is based on a complex-variable formulation that enables straightforward differentiation of complicated real-valued functions. An automated scripting process is used to create the complex-variable form of the set of discrete equations. An efficient method for assembling the residual and cost function linearizations is developed. The accuracy of the implementation is verified through comparisons with a discrete direct method as well as a previously developed hand-coded discrete adjoint approach. Comparisons are also shown for a large-scale configuration to establish the computational efficiency of the present scheme. To ultimately demonstrate the power of the approach, the implementation is extended to high temperature gas flows in chemical nonequilibrium. Finally, several fruitful research and development avenues enabled by the current work are suggested.
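
    The core of the complex-variable formulation is the complex-step derivative, which the following sketch demonstrates on a classic test function. This shows the underlying differentiation trick only, not the adjoint solver's automated scripting machinery.

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-30):
    """Derivative of a real-valued function via the complex-variable
    (complex-step) formula f'(x) ~ Im f(x + ih) / h. There is no
    subtractive cancellation, so h can be tiny and the result is
    accurate to machine precision."""
    return np.imag(f(x + 1j * h)) / h

# Classic test function (Squire & Trapp): smooth but awkward to
# differentiate accurately with finite differences
f = lambda x: np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)
x0 = 1.5

print("complex step:      ", complex_step_derivative(f, x0))
hfd = 1e-6  # central finite difference, limited by cancellation
print("finite difference: ", (f(x0 + hfd) - f(x0 - hfd)) / (2.0 * hfd))
```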

  12. Efficient Construction of Discrete Adjoint Operators on Unstructured Grids Using Complex Variables

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Kleb, William L.

    2005-01-01

    A methodology is developed and implemented to mitigate the lengthy software development cycle typically associated with constructing a discrete adjoint solver for aerodynamic simulations. The approach is based on a complex-variable formulation that enables straightforward differentiation of complicated real-valued functions. An automated scripting process is used to create the complex-variable form of the set of discrete equations. An efficient method for assembling the residual and cost function linearizations is developed. The accuracy of the implementation is verified through comparisons with a discrete direct method as well as a previously developed hand-coded discrete adjoint approach. Comparisons are also shown for a large-scale configuration to establish the computational efficiency of the present scheme. To ultimately demonstrate the power of the approach, the implementation is extended to high temperature gas flows in chemical nonequilibrium. Finally, several fruitful research and development avenues enabled by the current work are suggested.

  13. Implementation of quantum and classical discrete fractional Fourier transforms

    PubMed Central

    Weimann, Steffen; Perez-Leija, Armando; Lebugle, Maxime; Keil, Robert; Tichy, Malte; Gräfe, Markus; Heilmann, René; Nolte, Stefan; Moya-Cessa, Hector; Weihs, Gregor; Christodoulides, Demetrios N.; Szameit, Alexander

    2016-01-01

    Fourier transforms, integer and fractional, are ubiquitous mathematical tools in basic and applied science. Certainly, since the ordinary Fourier transform is merely a particular case of a continuous set of fractional Fourier domains, every property and application of the ordinary Fourier transform becomes a special case of the fractional Fourier transform. Despite the great practical importance of the discrete Fourier transform, implementation of fractional orders of the corresponding discrete operation has been elusive. Here we report classical and quantum optical realizations of the discrete fractional Fourier transform. In the context of classical optics, we implement discrete fractional Fourier transforms of exemplary wave functions and experimentally demonstrate the shift theorem. Moreover, we apply this approach in the quantum realm to Fourier transform separable and path-entangled biphoton wave functions. The proposed approach is versatile and could find applications in various fields where Fourier transforms are essential tools. PMID:27006089

  14. Numerical Evaluation of P-Multigrid Method for the Solution of Discontinuous Galerkin Discretizations of Diffusive Equations

    NASA Technical Reports Server (NTRS)

    Atkins, H. L.; Helenbrook, B. T.

    2005-01-01

    This paper describes numerical experiments with P-multigrid to corroborate analysis, validate the present implementation, and examine issues that arise in implementations of the various combinations of relaxation schemes, discretizations, and P-multigrid methods. The two approaches to implementing P-multigrid presented here are equivalent for most high-order discretization methods, such as spectral element, SUPG, and discontinuous Galerkin applied to advection; however, it is discovered that the approach that mimics the common geometric multigrid implementation is less robust, and frequently unstable, when applied to discontinuous Galerkin discretizations of diffusion. Gauss-Seidel relaxation converges 40% faster than block Jacobi, as predicted by analysis; however, the implementation of Gauss-Seidel is considerably more expensive than one would expect because gradients in most neighboring elements must be updated. A compromise quasi-Gauss-Seidel relaxation method that evaluates the gradient in each element twice per iteration converges at rates similar to those predicted for true Gauss-Seidel.
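
    To make the relaxation comparison concrete, the sketch below contrasts Jacobi and Gauss-Seidel sweeps on a 1D Poisson model problem (not a discontinuous Galerkin discretization), where Gauss-Seidel's asymptotic convergence factor is the square of Jacobi's:

```python
import numpy as np

# Jacobi vs Gauss-Seidel on the 1D Poisson matrix
n = 32
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x_exact = np.linalg.solve(A, b)

def jacobi_sweep(x):
    return x + (b - A @ x) / np.diag(A)

def gauss_seidel_sweep(x):
    x = x.copy()
    for i in range(n):                  # sweep uses already-updated values
        x[i] += (b[i] - A[i] @ x) / A[i, i]
    return x

for name, sweep in [("Jacobi", jacobi_sweep), ("Gauss-Seidel", gauss_seidel_sweep)]:
    x = np.zeros(n)
    for _ in range(500):
        x = sweep(x)
    print(f"{name:12s} error after 500 sweeps: {np.linalg.norm(x - x_exact):.3e}")
```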

  15. Incorporation of Plasticity and Damage Into an Orthotropic Three-Dimensional Model with Tabulated Input Suitable for Use in Composite Impact Problems

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Hoffarth, Canio; Rajan, Subramaniam; Blackenhorn, Gunther

    2015-01-01

    The need for accurate material models to simulate the deformation, damage and failure of polymer matrix composites under impact conditions is becoming critical as these materials are gaining increased usage in the aerospace and automotive industries. While there are several composite material models currently available within commercial transient dynamic finite element codes, several features have been identified as being lacking in the currently available material models that could substantially enhance the predictive capability of the impact simulations. A specific desired feature pertains to the incorporation of both plasticity and damage within the material model. Another desired feature relates to using experimentally based tabulated stress-strain input to define the evolution of plasticity and damage as opposed to specifying discrete input properties (such as modulus and strength) and employing analytical functions to track the response of the material. To begin to address these needs, a combined plasticity and damage model suitable for use with both solid and shell elements is being developed for implementation within the commercial code LS-DYNA. The plasticity model is based on extending the Tsai-Wu composite failure model into a strain-hardening based orthotropic plasticity model with a non-associative flow rule. The evolution of the yield surface is determined based on tabulated stress-strain curves in the various normal and shear directions and is tracked using the effective plastic strain. The effective plastic strain is computed by using the non-associative flow rule in combination with appropriate numerical methods. To compute the evolution of damage, a strain equivalent semi-coupled formulation is used, in which a load in one direction results in a stiffness reduction in multiple coordinate directions. A specific laminated composite is examined to demonstrate the process of characterizing and analyzing the response of a composite using the developed model.

  16. Monte Carlo algorithms for Brownian phylogenetic models.

    PubMed

    Horvilleur, Benjamin; Lartillot, Nicolas

    2014-11-01

    Brownian models have been introduced in phylogenetics for describing variation in substitution rates through time, with applications to molecular dating or to the comparative analysis of variation in substitution patterns among lineages. Thus far, however, the Monte Carlo implementations of these models have relied on crude approximations, in which the Brownian process is sampled only at the internal nodes of the phylogeny or at the midpoints along each branch, and the unknown trajectory between these sampled points is summarized by simple branchwise average substitution rates. A more accurate Monte Carlo approach is introduced, explicitly sampling a fine-grained discretization of the trajectory of the (potentially multivariate) Brownian process along the phylogeny. Generic Monte Carlo resampling algorithms are proposed for updating the Brownian paths along and across branches. Specific computational strategies are developed for efficient integration of the finite-time substitution probabilities across branches induced by the Brownian trajectory. The mixing properties and the computational complexity of the resulting Markov chain Monte Carlo sampler scale reasonably with the discretization level, allowing practical applications with up to a few hundred discretization points along the entire depth of the tree. The method can be generalized to other Markovian stochastic processes, making it possible to implement a wide range of time-dependent substitution models with well-controlled computational precision. The program is freely available at www.phylobayes.org. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
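
    The fine-grained path sampling described above can be illustrated with a Brownian bridge along a single branch, conditioned on the values already sampled at its two end nodes. The rate interpretation and parameter values are illustrative assumptions, not the PhyloBayes implementation.

```python
import numpy as np

def sample_brownian_path(x_start, x_end, t_len, n_points, sigma2, rng):
    """Fine-grained Brownian bridge between the (already sampled) values
    at the two ends of a branch: the trajectory is conditioned on both
    endpoints. sigma2 is the Brownian variance per unit time."""
    t = np.linspace(0.0, t_len, n_points)
    # Unconditioned Brownian motion started at x_start
    dW = rng.normal(0.0, np.sqrt(sigma2 * np.diff(t)))
    W = x_start + np.concatenate([[0.0], np.cumsum(dW)])
    # Condition on hitting x_end at time t_len (standard bridge correction)
    return W + (t / t_len) * (x_end - W[-1])

rng = np.random.default_rng(3)
path = sample_brownian_path(x_start=0.0, x_end=1.0, t_len=2.0,
                            n_points=50, sigma2=0.25, rng=rng)
# E.g. a branchwise average substitution rate from the log-rate path
print("mean rate along branch:", np.exp(path).mean())
```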

  17. Hybrid simulation combining two space-time discretization of the discrete-velocity Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Horstmann, Jan Tobias; Le Garrec, Thomas; Mincu, Daniel-Ciprian; Lévêque, Emmanuel

    2017-11-01

    Despite the efficiency and low dissipation of the stream-collide scheme of the discrete-velocity Boltzmann equation, which is nowadays implemented in many lattice Boltzmann solvers, it has a major drawback relative to alternative discretization schemes such as finite-volume or finite-difference methods: it is limited to uniform Cartesian grids. In this paper, an algorithm is presented that combines the positive features of each scheme in a hybrid lattice Boltzmann method. In particular, the node-based streaming of the distribution functions is coupled with a second-order finite-volume discretization of the advection term of the Boltzmann equation under the Bhatnagar-Gross-Krook approximation. The algorithm is established on a multi-domain configuration, with the individual schemes being solved on separate sub-domains and connected by an overlapping interface of at least 2 grid cells. A critical parameter in the coupling is the CFL number equal to unity, which is imposed by the stream-collide algorithm. Nevertheless, a semi-implicit treatment of the collision term in the finite-volume formulation allows us to obtain a stable solution for this condition. The algorithm is validated in the scope of three different test cases on a 2D periodic mesh. It is shown that the accuracy of the combined discretization schemes agrees with the order of each separate scheme involved. The overall numerical error of the hybrid algorithm in the macroscopic quantities is contained between the errors of the two individual algorithms. Finally, we demonstrate how such a coupling can be used to adapt to anisotropic flows with some gradual mesh refinement in the FV domain.

  18. Design of Unstructured Adaptive (UA) NAS Parallel Benchmark Featuring Irregular, Dynamic Memory Accesses

    NASA Technical Reports Server (NTRS)

    Feng, Hui-Yu; VanderWijngaart, Rob; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    We describe the design of a new method for the measurement of the performance of modern computer systems when solving scientific problems featuring irregular, dynamic memory accesses. The method involves the solution of a stylized heat transfer problem on an unstructured, adaptive grid. A Spectral Element Method (SEM) with an adaptive, nonconforming mesh is selected to discretize the transport equation. The relatively high order of the SEM lowers the fraction of wall clock time spent on inter-processor communication, which eases the load balancing task and allows us to concentrate on the memory accesses. The benchmark is designed to be three-dimensional. Parallelization and load balance issues of a reference implementation will be described in detail in future reports.

  19. Comparison of spike-sorting algorithms for future hardware implementation.

    PubMed

    Gibson, Sarah; Judy, Jack W; Markovic, Dejan

    2008-01-01

    Applications such as brain-machine interfaces require hardware spike sorting in order to (1) obtain single-unit activity and (2) perform data reduction for wireless transmission of data. Such systems must be low-power, low-area, high-accuracy, automatic, and able to operate in real time. Several detection and feature extraction algorithms for spike sorting are described briefly and evaluated in terms of accuracy versus computational complexity. The nonlinear energy operator method is chosen as the optimal spike detection algorithm, being most robust over noise and relatively simple. The discrete derivatives method [1] is chosen as the optimal feature extraction method, maintaining high accuracy across SNRs with a complexity orders of magnitude less than that of traditional methods such as PCA.
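
    A compact sketch of the two algorithms singled out above: the nonlinear energy operator for spike detection and discrete-derivative differences for feature extraction. The threshold rule and the synthetic signal are toy assumptions for illustration.

```python
import numpy as np

def neo(x):
    """Nonlinear energy operator psi[n] = x[n]^2 - x[n-1] * x[n+1];
    emphasizes samples that are simultaneously high-amplitude and
    high-frequency, which suits spike detection in noise."""
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def discrete_derivative_features(spike, deltas=(1, 3, 7)):
    """Discrete-derivative features: differences x[n] - x[n-d] for a few
    lags d, a low-complexity alternative to PCA for feature extraction."""
    return np.concatenate([spike[d:] - spike[:-d] for d in deltas])

rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 3000)
trace[1500:1504] += np.array([4.0, 9.0, -6.0, 2.0])   # injected spike

psi = neo(trace)
thr = 15.0 * psi.mean()       # scaled-mean threshold (multiplier is a toy choice)
idx = np.nonzero(psi > thr)[0]
print("detected spike samples near:", idx)
print("feature vector length:", discrete_derivative_features(trace[1490:1520]).size)
```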

  20. Electro-mechanical dynamics of spiral waves in a discrete 2D model of human atrial tissue.

    PubMed

    Brocklehurst, Paul; Ni, Haibo; Zhang, Henggui; Ye, Jianqiao

    2017-01-01

    We investigate the effect of mechano-electrical feedback and atrial fibrillation induced electrical remodelling (AFER) of cellular ion channel properties on the dynamics of spiral waves in a discrete 2D model of human atrial tissue. The tissue electro-mechanics are modelled using the discrete element method (DEM). Millions of bonded DEM particles form a network of coupled atrial cells representing 2D cardiac tissue, allowing simulations of the dynamic behaviour of electrical excitation waves and mechanical contraction in the tissue. In the tissue model, each cell is modelled by nine particles, accounting for the features of individual cellular geometry; and discrete inter-cellular spatial arrangement of cells is also considered. The electro-mechanical model of a human atrial single-cell was constructed by strongly coupling the electrophysiological model of Colman et al. to the mechanical myofilament model of Rice et al., with parameters modified based on experimental data. A stretch-activated channel was incorporated into the model to simulate the mechano-electrical feedback. In order to investigate the effect of mechano-electrical feedback on the dynamics of spiral waves, simulations of spiral waves were conducted in both the electromechanical model and the electrical-only model in normal and AFER conditions, to allow direct comparison of the results between the models. Dynamics of spiral waves were characterized by tracing their tip trajectories, stability, excitation frequencies and meandering range of tip trajectories. It was shown that the developed DEM method provides a stable and efficient model of human atrial tissue with considerations of the intrinsically discrete and anisotropic properties of the atrial tissue, which are challenges to handle in traditional continuum mechanics models. This study provides mechanistic insights into the complex behaviours of spiral waves and the genesis of atrial fibrillation by showing an important role of the mechano-electrical feedback in facilitating and promoting atrial fibrillation.

  1. Electro-mechanical dynamics of spiral waves in a discrete 2D model of human atrial tissue

    PubMed Central

    Zhang, Henggui

    2017-01-01

    We investigate the effect of mechano-electrical feedback and atrial fibrillation induced electrical remodelling (AFER) of cellular ion channel properties on the dynamics of spiral waves in a discrete 2D model of human atrial tissue. The tissue electro-mechanics are modelled using the discrete element method (DEM). Millions of bonded DEM particles form a network of coupled atrial cells representing 2D cardiac tissue, allowing simulations of the dynamic behaviour of electrical excitation waves and mechanical contraction in the tissue. In the tissue model, each cell is modelled by nine particles, accounting for the features of individual cellular geometry; and discrete inter-cellular spatial arrangement of cells is also considered. The electro-mechanical model of a human atrial single-cell was constructed by strongly coupling the electrophysiological model of Colman et al. to the mechanical myofilament model of Rice et al., with parameters modified based on experimental data. A stretch-activated channel was incorporated into the model to simulate the mechano-electrical feedback. In order to investigate the effect of mechano-electrical feedback on the dynamics of spiral waves, simulations of spiral waves were conducted in both the electromechanical model and the electrical-only model in normal and AFER conditions, to allow direct comparison of the results between the models. Dynamics of spiral waves were characterized by tracing their tip trajectories, stability, excitation frequencies and meandering range of tip trajectories. It was shown that the developed DEM method provides a stable and efficient model of human atrial tissue with considerations of the intrinsically discrete and anisotropic properties of the atrial tissue, which are challenges to handle in traditional continuum mechanics models. This study provides mechanistic insights into the complex behaviours of spiral waves and the genesis of atrial fibrillation by showing an important role of the mechano-electrical feedback in facilitating and promoting atrial fibrillation. PMID:28510575

  2. Discrete-Element bonded-particle Sea Ice model DESIgn, version 1.3a - model description and implementation

    NASA Astrophysics Data System (ADS)

    Herman, Agnieszka

    2016-04-01

    This paper presents theoretical foundations, numerical implementation and examples of application of the two-dimensional Discrete-Element bonded-particle Sea Ice model - DESIgn. In the model, sea ice is represented as an assemblage of objects of two types: disk-shaped "grains" and semi-elastic bonds connecting them. Grains move on the sea surface under the influence of forces from the atmosphere and the ocean, as well as interactions with surrounding grains through direct contact (Hertzian contact mechanics) and/or through bonds. The model has an experimental option of taking into account quasi-three-dimensional effects related to the space- and time-varying curvature of the sea surface, thus enabling simulation of ice breaking due to stresses resulting from bending moments associated with surface waves. Examples of the model's application to simple sea ice deformation and breaking problems are presented, with an analysis of the influence of the basic model parameters ("microscopic" properties of grains and bonds) on the large-scale response of the modeled material. The model is written as a toolbox suitable for usage with the open-source numerical library LIGGGHTS. The code, together with full technical documentation and example input files, is freely available with this paper and on the Internet.
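
    The grain-grain interaction through direct contact can be sketched with a Hertzian-style repulsion proportional to a power of the overlap depth. The exponent and stiffness below are illustrative placeholders, not the exact expressions used in DESIgn or LIGGGHTS.

```python
import numpy as np

def hertzian_contact_force(x1, x2, r1, r2, E_eff=1e6):
    """Normal repulsive force on grain 1 from an overlapping disk-shaped
    grain 2, using a Hertzian-style law F ~ E_eff * delta^{3/2} with
    delta the overlap depth. E_eff lumps the elastic properties."""
    d = x2 - x1
    dist = np.linalg.norm(d)
    delta = (r1 + r2) - dist            # overlap depth
    if delta <= 0.0:
        return np.zeros(2)              # grains not in contact
    normal = d / dist                   # unit vector from grain 1 to grain 2
    return -E_eff * delta ** 1.5 * normal  # pushes grain 1 away from grain 2

f = hertzian_contact_force(np.array([0.0, 0.0]), np.array([1.8, 0.0]), 1.0, 1.0)
print(f)   # repulsive force on grain 1, directed along -x
```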

  3. Diffusion models of the flanker task: Discrete versus gradual attentional selection

    PubMed Central

    White, Corey N.; Ratcliff, Roger; Starns, Jeffrey S.

    2011-01-01

    The present study tested diffusion models of processing in the flanker task, in which participants identify a target that is flanked by items that indicate the same (congruent) or opposite response (incongruent). Single- and dual-process flanker models were implemented in a diffusion-model framework and tested against data from experiments that manipulated response bias, speed/accuracy tradeoffs, attentional focus, and stimulus configuration. There was strong mimicry among the models, and each captured the main trends in the data for the standard conditions. However, when more complex conditions were used, a single-process spotlight model captured qualitative and quantitative patterns that the dual-process models could not. Since the single-process model provided the best balance of fit quality and parsimony, the results indicate that processing in the simple versions of the flanker task is better described by gradual rather than discrete narrowing of attention. PMID:21964663

  4. A hybrid-system model of the coagulation cascade: simulation, sensitivity, and validation.

    PubMed

    Makin, Joseph G; Narayanan, Srini

    2013-10-01

    The process of human blood clotting involves a complex interaction of continuous-time/continuous-state processes and discrete-event/discrete-state phenomena, where the former comprise the various chemical rate equations and the latter comprise both threshold-limited behaviors and binary states (presence/absence of a chemical). Whereas previous blood-clotting models used only continuous dynamics and perforce addressed only portions of the coagulation cascade, we capture both continuous and discrete aspects by modeling it as a hybrid dynamical system. The model was implemented as a hybrid Petri net, a graphical modeling language that extends ordinary Petri nets to cover continuous quantities and continuous-time flows. The primary focus is simulation: (1) fidelity to the clinical data in terms of clotting-factor concentrations and elapsed time; (2) reproduction of known clotting pathologies; and (3) fine-grained predictions which may be used to refine clinical understanding of blood clotting. Next we examine sensitivity to rate-constant perturbation. Finally, we propose a method for titrating between reliance on the model and on prior clinical knowledge. For simplicity, we confine these last two analyses to a critical purely-continuous subsystem of the model.

  5. Influence of hydrodynamic thrust bearings on the nonlinear oscillations of high-speed rotors

    NASA Astrophysics Data System (ADS)

    Chatzisavvas, Ioannis; Boyaci, Aydin; Koutsovasilis, Panagiotis; Schweizer, Bernhard

    2016-10-01

    This paper investigates the effect of hydrodynamic thrust bearings on the nonlinear vibrations and the bifurcations occurring in rotor/bearing systems. In order to examine the influence of thrust bearings, run-up simulations may be carried out. To be able to perform such run-up calculations, a computationally efficient thrust bearing model is mandatory. Direct discretization of the Reynolds equation for thrust bearings by means of a Finite Element or Finite Difference approach entails rather large simulation times, since in every time-integration step a discretized model of the Reynolds equation has to be solved simultaneously with the rotor model. Implementation of such a coupled rotor/bearing model may be accomplished by a co-simulation approach. Such an approach prevents, however, a thorough analysis of the rotor/bearing system based on extensive parameter studies. A major point of this work is the derivation of a very time-efficient but rather precise model for transient simulations of rotors with hydrodynamic thrust bearings. The presented model makes use of a global Galerkin approach, where the pressure field is approximated by global trial functions. For the considered problem, an analytical evaluation of the relevant integrals is possible. As a consequence, the system of equations of the discretized bearing model is obtained symbolically. In combination with a proper decomposition of the governing system matrix, a numerically efficient implementation can be achieved. Using run-up simulations with the proposed model, the effect of thrust bearings on the bifurcation points as well as on the amplitudes and frequencies of the subsynchronous rotor oscillations is investigated. In particular, the influence of the magnitude of the axial force, the geometry of the thrust bearing, and the oil parameters is examined. It is shown that the thrust bearing exerts a large influence on the nonlinear rotor oscillations, especially those related to the conical mode of the rotor. A comparison between a full co-simulation approach and a reduced Galerkin implementation is carried out. It is shown that a speed-up of 10-15 times may be obtained with the Galerkin model compared to the co-simulation model at the same accuracy.

  6. Discrete subgroups of adolescents diagnosed with borderline personality disorder: a latent class analysis of personality features.

    PubMed

    Ramos, Vera; Canta, Guilherme; de Castro, Filipa; Leal, Isabel

    2014-08-01

    Research suggests that borderline personality disorder (BPD) can be diagnosed in adolescents and is marked by considerable heterogeneity. This study aimed to identify personality features characterizing adolescents with BPD and possible meaningful patterns of heterogeneity that could lead to personality subgroups. The authors analyzed data on 60 adolescents, ages 15 to 18 years, who met DSM criteria for a BPD diagnosis. The authors used latent class analysis (LCA) to identify subgroups based on the personality pattern scales from the Millon Adolescent Clinical Inventory (MACI). LCA indicated that the best-fitting solution was a two-class model, identifying two discrete subgroups of BPD adolescents that were described as internalizing and externalizing. The subgroups were then compared on clinical and sociodemographic variables, measures of personality dimensions, DSM BPD criteria, and perception of attachment styles. Adolescents with a BPD diagnosis constitute a heterogeneous group and vary meaningfully on personality features that can have clinical implications for treatment.

  7. The medical humanities and the perils of curricular integration.

    PubMed

    Chiavaroli, Neville; Ellwood, Constance

    2012-12-01

    The advent of integration as a feature of contemporary medical curricula can be seen as an advantage for the medical humanities in that it provides a clear implementation strategy for the inclusion of medical humanities content and/or perspectives, while also making its relevance to medical education more apparent. This paper discusses an example of integration of humanities content into a graduate medical course, raises questions about the desirability of an exclusively integrated approach, and argues for the value of retaining a discrete and coherent disciplinary presence for the medical humanities in medical curricula.

  8. Scattering engineering in continuously shaped metasurface: An approach for electromagnetic illusion

    PubMed Central

    Guo, Yinghui; Yan, Lianshan; Pan, Wei; Shao, Liyang

    2016-01-01

    The control of electromagnetic wave scattering is critical in wireless communications and stealth technology. Discrete metasurfaces not only increase design and fabrication complexity but also cause difficulties in obtaining simultaneous electric and optical functionality. On the other hand, discontinuous phase profiles fostered by discrete systems inevitably introduce phase noise into the scattering fields. Here we propose the principle of a scattering-harness mechanism by utilizing a continuous gradient phase stemming from the spin-orbit interaction via sinusoidal metallic strips. Furthermore, by adjusting the amplitude and period of the sinusoidal metallic strip, the scattering characteristics of the object underneath can be greatly changed, resulting in electromagnetic illusion. The proposal is validated by full-wave simulations and experimental characterization in the microwave band. Our approach, featuring a continuous phase profile, polarization-independent performance, and facile implementation, may find widespread applications in electromagnetic wave manipulation. PMID:27439474

  9. Scattering engineering in continuously shaped metasurface: An approach for electromagnetic illusion

    NASA Astrophysics Data System (ADS)

    Guo, Yinghui; Yan, Lianshan; Pan, Wei; Shao, Liyang

    2016-07-01

    The control of electromagnetic wave scattering is critical in wireless communications and stealth technology. Discrete metasurfaces not only increase design and fabrication complexity but also cause difficulties in obtaining simultaneous electric and optical functionality. On the other hand, discontinuous phase profiles fostered by discrete systems inevitably introduce phase noise into the scattering fields. Here we propose the principle of a scattering-harness mechanism by utilizing a continuous gradient phase stemming from the spin-orbit interaction via sinusoidal metallic strips. Furthermore, by adjusting the amplitude and period of the sinusoidal metallic strip, the scattering characteristics of the object underneath can be greatly changed, resulting in electromagnetic illusion. The proposal is validated by full-wave simulations and experimental characterization in the microwave band. Our approach, featuring a continuous phase profile, polarization-independent performance, and facile implementation, may find widespread applications in electromagnetic wave manipulation.

  10. Documentation for the MODFLOW 6 Groundwater Flow Model

    USGS Publications Warehouse

    Langevin, Christian D.; Hughes, Joseph D.; Banta, Edward R.; Niswonger, Richard G.; Panday, Sorab; Provost, Alden M.

    2017-08-10

    This report documents the Groundwater Flow (GWF) Model for a new version of MODFLOW called MODFLOW 6. The GWF Model for MODFLOW 6 is based on a generalized control-volume finite-difference approach in which a cell can be hydraulically connected to any number of surrounding cells. Users can define the model grid using one of three discretization packages, including (1) a structured discretization package for defining regular MODFLOW grids consisting of layers, rows, and columns, (2) a discretization by vertices package for defining layered unstructured grids consisting of layers and cells, and (3) a general unstructured discretization package for defining flexible grids comprised of cells and their connection properties. For layered grids, a new capability is available for removing thin cells and vertically connecting cells overlying and underlying the thin cells. For complex problems involving water-table conditions, an optional Newton-Raphson formulation, based on the formulations in MODFLOW-NWT and MODFLOW-USG, can be activated. Use of the Newton-Raphson formulation will often improve model convergence and allow solutions to be obtained for difficult problems that cannot be solved using the traditional wetting and drying approach. The GWF Model is divided into “packages,” as was done in previous MODFLOW versions. A package is the part of the model that deals with a single aspect of simulation. Packages included with the GWF Model include those related to internal calculations of groundwater flow (discretization, initial conditions, hydraulic conductance, and storage), stress packages (constant heads, wells, recharge, rivers, general head boundaries, drains, and evapotranspiration), and advanced stress packages (streamflow routing, lakes, multi-aquifer wells, and unsaturated zone flow). An additional package is also available for moving water available in one package into the individual features of the advanced stress packages. The GWF Model also has packages for obtaining and controlling output from the model. This report includes detailed explanations of physical and mathematical concepts on which the GWF Model and its packages are based. Like its predecessors, MODFLOW 6 is based on a highly modular structure; however, this structure has been extended into an object-oriented framework. The framework includes a robust and generalized numerical solution object, which can be used to solve many different types of models. The numerical solution object has several different matrix preconditioning options as well as several methods for solving the linear system of equations. In this new framework, the GWF Model itself is an object, as are each of the GWF Model packages. A benefit of the object-oriented structure is that multiple objects of the same type can be used in a single simulation. Thus, a single forward run with MODFLOW 6 may contain multiple GWF Models. GWF Models can be hydraulically connected using GWF-GWF Exchange objects. Connecting GWF models in different ways permits the user to utilize a local grid refinement strategy consisting of parent and child models or to couple adjacent GWF Models. An advantage of the approach implemented in MODFLOW 6 is that multiple models and their exchanges can be incorporated into a single numerical solution object. With this design, models can be tightly coupled at the matrix level.
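
    The control-volume finite-difference approach amounts to writing, for each cell, a flow balance over its connections, with intercell conductances formed from the two half-cell contributions. The harmonic-combination rule and the numbers below are a generic CVFD sketch, not MODFLOW 6 source code.

```python
import numpy as np

def intercell_conductance(k1, k2, d1, d2, area):
    """Conductance between two hydraulically connected cells in a
    control-volume finite-difference scheme: the series (harmonic)
    combination of the two half-cell conductances."""
    c1 = k1 * area / d1          # half-cell conductance on cell 1's side
    c2 = k2 * area / d2          # half-cell conductance on cell 2's side
    return c1 * c2 / (c1 + c2)

# Flow balance for one cell: sum_j C_ij * (h_j - h_i) + Q_i = 0
# Each neighbor: (K_neighbor, d_cell, d_neighbor, face area, h_neighbor)
neighbors = [
    (10.0, 50.0, 50.0, 100.0, 12.0),
    ( 2.0, 50.0, 50.0, 100.0, 10.5),
]
k_cell, h_cell, Q = 5.0, 11.0, 0.0
residual = Q + sum(
    intercell_conductance(k_cell, kn, dc, dn, area) * (hn - h_cell)
    for kn, dc, dn, area, hn in neighbors
)
print("net flow into the cell at the assumed heads:", residual)
```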

  11. A coherent discrete variable representation method on a sphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Hua -Gen

    Here, the coherent discrete variable representation (ZDVR) has been extended for constructing a multidimensional potential-optimized DVR basis on a sphere. In order to deal with the non-constant Jacobian in spherical angles, two direct product primitive basis methods are proposed so that the original ZDVR technique can be properly implemented. The method has been demonstrated by computing the lowest states of a two dimensional (2D) vibrational model. Results show that the extended ZDVR method gives accurate eigenvalues and exponential convergence with increasing ZDVR basis size.

  12. A coherent discrete variable representation method on a sphere

    DOE PAGES

    Yu, Hua -Gen

    2017-09-05

    Here, the coherent discrete variable representation (ZDVR) has been extended for constructing a multidimensional potential-optimized DVR basis on a sphere. In order to deal with the non-constant Jacobian in spherical angles, two direct product primitive basis methods are proposed so that the original ZDVR technique can be properly implemented. The method has been demonstrated by computing the lowest states of a two dimensional (2D) vibrational model. Results show that the extended ZDVR method gives accurate eigenvalues and exponential convergence with increasing ZDVR basis size.

  13. Information Requirements for Integrating Spatially Discrete, Feature-Based Earth Observations

    NASA Astrophysics Data System (ADS)

    Horsburgh, J. S.; Aufdenkampe, A. K.; Lehnert, K. A.; Mayorga, E.; Hsu, L.; Song, L.; Zaslavsky, I.; Valentine, D. L.

    2014-12-01

    Several cyberinfrastructures have emerged for sharing observational data collected at densely sampled and/or highly instrumented field sites. These include the CUAHSI Hydrologic Information System (HIS), the Critical Zone Observatory Integrated Data Management System (CZOData), the Integrated Earth Data Applications (IEDA) and EarthChem system, and the Integrated Ocean Observing System (IOOS). These systems rely on standard data encodings and, in some cases, standard semantics for classes of geoscience data. Their focus is on sharing data on the Internet via web services in domain specific encodings or markup languages. While they have made progress in making data available, it still takes investigators significant effort to discover and access datasets from multiple repositories because of inconsistencies in the way domain systems describe, encode, and share data. Yet, there are many scenarios that require efficient integration of these data types across different domains. For example, understanding a soil profile's geochemical response to extreme weather events requires integration of hydrologic and atmospheric time series with geochemical data from soil samples collected over various depth intervals from soil cores or pits at different positions on a landscape. Integrated access to and analysis of data for such studies are hindered because common characteristics of data, including time, location, provenance, methods, and units are described differently within different systems. Integration requires syntactic and semantic translations that can be manual, error-prone, and lossy. We report information requirements identified as part of our work to define an information model for a broad class of earth science data - i.e., spatially-discrete, feature-based earth observations resulting from in-situ sensors and environmental samples. We sought to answer the question: "What information must accompany observational data for them to be archivable and discoverable within a publication system as well as interpretable once retrieved from such a system for analysis and (re)use?" We also describe development of multiple functional schemas (i.e., physical implementations for data storage, transfer, and archival) for the information model that capture the requirements reported here.

  14. A multi-resolution approach to electromagnetic modeling.

    NASA Astrophysics Data System (ADS)

    Cherevatova, M.; Egbert, G. D.; Smirnov, M. Yu

    2018-04-01

    We present a multi-resolution approach for three-dimensional magnetotelluric forward modeling. Our approach is motivated by the fact that fine grid resolution is typically required at shallow levels to adequately represent near-surface inhomogeneities, topography, and bathymetry, while a much coarser grid may be adequate at depth, where the diffusively propagating electromagnetic fields are much smoother. This is especially true for forward modeling required in regularized inversion, where conductivity variations at depth are generally very smooth. With a conventional structured finite-difference grid, the fine discretization required to adequately represent rapid variations near the surface is carried down to all depths, resulting in higher computational costs. Increasing the computational efficiency of the forward modeling is especially important for solving regularized inversion problems. We implement a multi-resolution finite-difference scheme that allows us to decrease the horizontal grid resolution with depth, as is done with vertical discretization. In our implementation, the multi-resolution grid is represented as a vertical stack of sub-grids, with each sub-grid being a standard Cartesian tensor-product staggered grid. Thus, our approach is similar to the octree discretization previously used for electromagnetic modeling, but simpler in that we allow refinement only with depth. The major difficulty arose in deriving the forward modeling operators on interfaces between adjacent sub-grids. We considered three ways of handling the interface layers and suggest a preferable one, which yields accuracy similar to the staggered-grid solution while retaining the symmetry of the coefficient matrix. A comparison between multi-resolution and staggered solvers for various models shows that the multi-resolution approach improves computational efficiency without compromising the accuracy of the solution.
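
    As a rough illustration of the grid layout described above, the following Python sketch (names and parameters are assumptions, not the authors' code) builds a multi-resolution grid as a vertical stack of tensor-product sub-grids whose horizontal resolution is halved at chosen coarsening interfaces.

      import numpy as np

      def build_multires_grid(nx, ny, dz_layers, coarsen_after):
          """Stack of standard tensor-product sub-grids; horizontal resolution is
          halved each time a coarsening interface is crossed (coarsening only with depth)."""
          grids, level = [], 0
          for k, dz in enumerate(dz_layers):
              if k in coarsen_after:
                  level += 1                     # drop horizontal resolution by 2x
              f = 2 ** level
              grids.append({"nx": nx // f, "ny": ny // f, "dz": dz,
                            "sigma": np.ones((nx // f, ny // f))})  # conductivity cells
          return grids

      grid = build_multires_grid(nx=64, ny=64, dz_layers=[10, 20, 40, 80, 160],
                                 coarsen_after={2, 4})
      for g in grid:
          print(g["nx"], "x", g["ny"], "cells, dz =", g["dz"], "m")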

  15. Modeling and control of operator functional state in a unified framework of fuzzy inference petri nets.

    PubMed

    Zhang, Jian-Hua; Xia, Jia-Jun; Garibaldi, Jonathan M; Groumpos, Petros P; Wang, Ru-Bin

    2017-06-01

    In human-machine (HM) hybrid control systems, the human operator and machine cooperate to achieve the control objectives. To enhance overall HM system performance, the discrete manual control task-load of the operator must be dynamically allocated in accordance with the continuous-time fluctuation of the psychophysiological functional status of the operator, the so-called operator functional state (OFS). The behavior of the HM system is hybrid in nature due to the co-existence of a discrete task-load (control) variable and a continuous operator performance (system output) variable. The Petri net is an effective tool for modeling discrete event systems, but for hybrid systems that also involve continuous dynamics, the Petri net model generally has to be extended. Instead of using different tools to represent the continuous and discrete components of a hybrid system, this paper proposes a fuzzy inference Petri net (FIPN) method to represent the HM hybrid system, comprising a Mamdani-type fuzzy model of OFS and a logical switching controller in a unified framework, in which the task-load level is dynamically reallocated between the operator and machine based on the model-predicted OFS. Furthermore, the paper uses a multi-model approach to predict operator performance from three electroencephalographic (EEG) input variables (features) via the Wang-Mendel (WM) fuzzy modeling method. The membership function parameters of the fuzzy OFS model for each experimental participant were optimized using the artificial bee colony (ABC) evolutionary algorithm. Three performance indices, RMSE, MRE, and EPR, were computed to evaluate overall modeling accuracy. Experimental data from six participants were analyzed. The results show that the proposed method (FIPN with adaptive task allocation) yields a lower breakdown rate (from 14.8% to 3.27%) and higher human performance (from 90.30% to 91.99%). The simulation results of the FIPN-based adaptive HM (AHM) system on six experimental participants demonstrate that the FIPN framework provides an effective way to model and regulate/optimize the OFS in HM hybrid systems composed of a continuous-time OFS model and a discrete-event switching controller. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. EnvironmentalWaveletTool: Continuous and discrete wavelet analysis and filtering for environmental time series

    NASA Astrophysics Data System (ADS)

    Galiana-Merino, J. J.; Pla, C.; Fernandez-Cortes, A.; Cuezva, S.; Ortiz, J.; Benavente, D.

    2014-10-01

    A MATLAB-based computer code has been developed for the simultaneous wavelet analysis and filtering of several environmental time series, particularly focused on the analysis of cave monitoring data. The continuous wavelet transform, the discrete wavelet transform, and the discrete wavelet packet transform have been implemented to provide a fast and precise time-period examination of the time series at different period bands. Moreover, statistical methods to examine the relation between two signals have been included. Finally, entropy-of-curves and spline-based methods have also been developed for segmenting and modeling the analyzed time series. Together, these methods provide a user-friendly and fast program for environmental signal analysis, with useful, practical, and understandable results.
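
    The tool itself is MATLAB-based; purely as an analogous illustration, the following Python sketch (assuming the PyWavelets package) applies a discrete wavelet decomposition to a synthetic monitoring series and reconstructs only the slow band, the kind of period-band filtering described above.

      import numpy as np
      import pywt

      # Synthetic "cave monitoring" series: slow seasonal cycle + daily cycle + noise.
      t = np.arange(4096)
      x = (np.sin(2 * np.pi * t / 1024) + 0.3 * np.sin(2 * np.pi * t / 24)
           + 0.1 * np.random.randn(t.size))

      # Discrete wavelet decomposition, then reconstruct only the coarse band.
      coeffs = pywt.wavedec(x, "db4", level=6)
      coarse = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
      trend = pywt.waverec(coarse, "db4")
      print(trend.shape)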

  17. 3D modelling of non-linear visco-elasto-plastic crustal and lithospheric processes using LaMEM

    NASA Astrophysics Data System (ADS)

    Popov, Anton; Kaus, Boris

    2016-04-01

    LaMEM (Lithosphere and Mantle Evolution Model) is a three-dimensional thermo-mechanical numerical code to simulate crustal and lithospheric deformation. The code is based on a staggered finite difference (FDSTAG) discretization in space, which is a stable and very efficient technique to solve the (nearly) incompressible Stokes equations that does not suffer from spurious pressure modes or artificial compressibility (a typical feature of low-order finite element techniques). Higher-order finite element methods are more accurate than FDSTAG methods under idealized test cases where the jump in viscosity is exactly aligned with the boundaries of the elements. Yet geodynamically more realistic cases involve evolving subduction zones, nonlinear rheologies, or localized plastic shear bands. In these cases, the viscosity pattern evolves spontaneously during a simulation or even during nonlinear iterations, the advantages of higher-order methods disappear, and they all converge with approximately first-order accuracy, similar to that of FDSTAG [1]. Since FDSTAG methods have considerably fewer degrees of freedom than quadratic finite element methods, they require about an order of magnitude less memory for the same number of nodes in 3D, which also implies that every matrix-vector multiplication is significantly faster. LaMEM is built on top of the PETSc library and uses the particle-in-cell technique to track material properties and history variables, which makes it straightforward to incorporate effects like phase changes or chemistry. An internal free surface is present, together with (simple) erosion and sedimentation processes, and a number of methods are available to import complex geometries into the code (e.g., http://geomio.bitbucket.org). Customized Galerkin coupled geometric multigrid preconditioners are implemented, which results in good parallel scalability of the code (we have tested LaMEM on 458,752 cores [2]). A drawback of FDSTAG discretizations, however, is that the Jacobian, which is a key component for fast and robust convergence of Newton-Raphson nonlinear iterative solvers, is more difficult to implement than in FE codes and actually results in a larger stencil. Rather than forming it explicitly, we therefore developed a matrix-free analytical Jacobian implementation for the coupled sets of momentum, mass, and energy conservation equations, combined with visco-elasto-plastic rheologies. Tests show that for simple nonlinear viscous rheologies there is little advantage of the matrix-free (MF) approach over the standard matrix-free finite-difference (MFFD) PETSc approach, but iterations converge slightly faster if plasticity is present. Results also show that the Newton solver usually converges in a quadratic manner, even for pressure-dependent Drucker-Prager rheologies and without harmonic viscosity averaging of plastic and viscous rheologies. Yet, if the timestep is too large (and the model becomes effectively viscoplastic), or if the shear band pattern changes dramatically, stagnation of the iterations might occur. This can be remedied with an appropriate regularization, which we discuss. LaMEM is available as open source software. [1] Thielmann, M., May, D. A., and Kaus, B. J. P., 2014, Discretization errors in the hybrid finite element particle-in-cell method: Pure and Applied Geophysics, doi:10.1007/s00024-014-0808-9. [2] Kaus, B. J. P., Popov, A. A., Baumann, T. S., Püsök, A. E., Bauville, A., Fernandez, N., and Collignon, M., 2015, Forward and inverse modelling of lithospheric deformation on geological timescales: NIC Symposium 2016 Proceedings, NIC Series, Vol. 48.
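
    To make the matrix-free idea concrete, the following Python sketch illustrates the generic MFFD approximation mentioned above (not LaMEM's analytical Jacobian): a Jacobian-vector product is approximated by finite-differencing the nonlinear residual. The toy residual and all names are hypothetical.

      import numpy as np

      def jacobian_vector_product(F, x, v, eps=1e-7):
          """Matrix-free approximation J(x) @ v ~ (F(x + h*v) - F(x)) / h."""
          nv = np.linalg.norm(v)
          if nv == 0.0:
              return np.zeros_like(x)
          h = eps / nv
          return (F(x + h * v) - F(x)) / h

      # Toy nonlinear residual standing in for the discretized conservation equations.
      F = lambda u: np.array([u[0]**2 + u[1] - 3.0, u[0] + u[1]**3 - 5.0])
      x = np.array([1.0, 1.0])
      print(jacobian_vector_product(F, x, np.array([1.0, 0.0])))  # ~ first Jacobian column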

  18. A Fuzzy Expert System for Fault Management of Water Supply Recovery in the ALSS Project

    NASA Technical Reports Server (NTRS)

    Tohala, Vapsi J.

    1998-01-01

    Modeling with new software is a challenge. CONFIG is one such challenge: it is designed to work with many types of systems in which discrete and continuous processes occur. The CONFIG software was used to model the two subsystems of the Water Recovery System: ICB and TFB. The model currently works manually and only for water flows, with further implementation to be done in the future. Activities in the models still need to be implemented based on testing of the hardware for Phase III. More improvements to CONFIG are in progress to make it more user-friendly software.

  19. Using simulation modeling to improve patient flow at an outpatient orthopedic clinic.

    PubMed

    Rohleder, Thomas R; Lewkonia, Peter; Bischak, Diane P; Duffy, Paul; Hendijani, Rosa

    2011-06-01

    We report on the use of discrete event simulation modeling to support process improvements at an orthopedic outpatient clinic. The clinic was effective in treating patients, but waiting time and congestion in the clinic created patient dissatisfaction and staff morale issues. The modeling helped to identify improvement alternatives, including optimized staffing levels, better patient scheduling, and an emphasis on staff arriving promptly. Quantitative results from the modeling provided motivation to implement the improvements. Statistical analysis of data taken before and after the implementation indicates that waiting time measures were significantly improved and overall patient time in the clinic was reduced.
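
    A minimal sketch of this kind of clinic model, assuming Python with the SimPy discrete event simulation library (the paper does not state its tooling; all parameters are invented): patients arrive at random, queue for one of a few providers, and the mean waiting time is reported.

      import random
      import simpy

      def patient(env, clinic, service_mean, waits):
          arrive = env.now
          with clinic.request() as req:        # wait for a free provider
              yield req
              waits.append(env.now - arrive)
              yield env.timeout(random.expovariate(1.0 / service_mean))

      def arrivals(env, clinic, waits, interarrival_mean=6.0, service_mean=15.0):
          while True:
              yield env.timeout(random.expovariate(1.0 / interarrival_mean))
              env.process(patient(env, clinic, service_mean, waits))

      random.seed(1)
      waits = []
      env = simpy.Environment()
      clinic = simpy.Resource(env, capacity=3)   # e.g. three providers
      env.process(arrivals(env, clinic, waits))
      env.run(until=8 * 60)                      # one 8-hour clinic day, in minutes
      print(f"{len(waits)} patients seen, mean wait {sum(waits)/len(waits):.1f} min")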

  20. Hybrid multiscale modeling and prediction of cancer cell behavior

    PubMed Central

    Zangooei, Mohammad Hossein; Habibi, Jafar

    2017-01-01

    Background: Understanding cancer development across several spatial-temporal scales is of great practical significance for better understanding and treating cancers. It is difficult to tackle this challenge with purely biological means. Hybrid modeling techniques have therefore been proposed that combine the advantages of continuum and discrete methods to model multiscale problems. Methods: In light of these problems, we have proposed a new hybrid vascular model to facilitate the multiscale modeling and simulation of cancer development, built on agent-based, cellular automata, and machine learning methods. The purpose of this simulation is to create a dataset that can be used for the prediction of cell phenotypes. By using a proposed Q-learning method based on SVR-NSGA-II, the cells have the capability to predict their phenotypes autonomously, that is, to act on their own without external direction in response to the situations they encounter. Results: Computational simulations of the model were performed in order to analyze its performance. The most striking feature of our results is that each cell can select its phenotype at each time step according to its condition. We provide evidence that the prediction of cell phenotypes is reliable. Conclusion: Our proposed model, which we term a hybrid multiscale model of cancer cell behavior, has the potential to combine the best features of both continuum and discrete models. The in silico results indicate that the 3D model can represent key features of cancer growth, angiogenesis, and the related micro-environment, and show that the findings are in good agreement with biological tumor behavior. To the best of our knowledge, this paper presents the first hybrid vascular multiscale model of cancer cell behavior with the capability to predict cell phenotypes individually from a self-generated dataset. PMID:28846712

  1. Hybrid multiscale modeling and prediction of cancer cell behavior.

    PubMed

    Zangooei, Mohammad Hossein; Habibi, Jafar

    2017-01-01

    Understanding cancer development across several spatial-temporal scales is of great practical significance for better understanding and treating cancers. It is difficult to tackle this challenge with purely biological means. Hybrid modeling techniques have therefore been proposed that combine the advantages of continuum and discrete methods to model multiscale problems. In light of these problems, we have proposed a new hybrid vascular model to facilitate the multiscale modeling and simulation of cancer development, built on agent-based, cellular automata, and machine learning methods. The purpose of this simulation is to create a dataset that can be used for the prediction of cell phenotypes. By using a proposed Q-learning method based on SVR-NSGA-II, the cells have the capability to predict their phenotypes autonomously, that is, to act on their own without external direction in response to the situations they encounter. Computational simulations of the model were performed in order to analyze its performance. The most striking feature of our results is that each cell can select its phenotype at each time step according to its condition. We provide evidence that the prediction of cell phenotypes is reliable. Our proposed model, which we term a hybrid multiscale model of cancer cell behavior, has the potential to combine the best features of both continuum and discrete models. The in silico results indicate that the 3D model can represent key features of cancer growth, angiogenesis, and the related micro-environment, and show that the findings are in good agreement with biological tumor behavior. To the best of our knowledge, this paper presents the first hybrid vascular multiscale model of cancer cell behavior with the capability to predict cell phenotypes individually from a self-generated dataset.

  2. Modelling the impacts of new diagnostic tools for tuberculosis in developing countries to enhance policy decisions.

    PubMed

    Langley, Ivor; Doulla, Basra; Lin, Hsien-Ho; Millington, Kerry; Squire, Bertie

    2012-09-01

    The introduction and scale-up of new tools for the diagnosis of tuberculosis (TB) in developing countries has the potential to make a huge difference to the lives of millions of people living in poverty. To achieve this, policy makers need the information to make the right decisions about which new tools to implement and where in the diagnostic algorithm to apply them most effectively. These decisions are difficult, as the new tools are often expensive to implement and use, and the health system and patient impacts are uncertain, particularly in developing countries where there is a high burden of TB. The authors demonstrate that a discrete event simulation model could play a significant part in improving and informing these decisions. The feasibility of linking the discrete event simulation to a dynamic epidemiology model is also explored in order to take account of longer-term impacts on the incidence of TB. Results from two diagnostic districts in Tanzania are used to illustrate how the approach could be used to improve decisions.

  3. Numerical modeling of fluid flow in a fault zone: a case of study from Majella Mountain (Italy).

    NASA Astrophysics Data System (ADS)

    Romano, Valentina; Battaglia, Maurizio; Bigi, Sabina; De'Haven Hyman, Jeffrey; Valocchi, Albert J.

    2017-04-01

    The study of fluid flow in fractured rocks plays a key role in reservoir management, including CO2 sequestration and waste isolation. We present a numerical model of fluid flow in a fault zone, based on field data acquired at Majella Mountain in the Central Apennines (Italy). This fault zone is considered a good analogue because of the massive presence of fluid migration in the form of tar. Faults are mechanical features that cause permeability heterogeneities in the upper crust, so they strongly influence fluid flow. The distribution of the main components (core, damage zone) can lead the fault zone to act as a conduit, a barrier, or a combined conduit-barrier system. We integrated existing information and our own structural surveys of the area to better identify the major fault features (e.g., type of fractures, statistical properties, geometrical and petro-physical characteristics). In our model the damage zones of the fault are described as a discretely fractured medium, while the core of the fault is described as a porous medium. Our model utilizes the dfnWorks code, a parallelized computational suite developed at Los Alamos National Laboratory (LANL), that generates three-dimensional Discrete Fracture Networks (DFN) of the damage zones of the fault and characterizes their hydraulic parameters. The challenge of the study is the coupling between the discrete domain of the damage zones and the continuum domain of the core. The field investigations and the basic computational workflow are described, along with preliminary results of fluid flow simulation at the scale of the fault.

  4. Critical thresholds for eventual extinction in randomly disturbed population growth models.

    PubMed

    Peckham, Scott D; Waymire, Edward C; De Leenheer, Patrick

    2018-02-16

    This paper considers several single species growth models featuring a carrying capacity, which are subject to random disturbances that lead to instantaneous population reduction at the disturbance times. This is motivated in part by growing concerns about the impacts of climate change. Our main goal is to understand whether or not the species can persist in the long run. We consider the discrete-time stochastic process obtained by sampling the system immediately after the disturbances, and find various thresholds for several modes of convergence of this discrete process, including thresholds for the absence or existence of a positively supported invariant distribution. These thresholds are given explicitly in terms of the intensity and frequency of the disturbances on the one hand, and the population's growth characteristics on the other. We also perform a similar threshold analysis for the original continuous-time stochastic process, and obtain a formula that allows us to express the invariant distribution for this continuous-time process in terms of the invariant distribution of the discrete-time process, and vice versa. Examples illustrate that these distributions can differ, and this sends a cautionary message to practitioners who wish to parameterize these and related models using field data. Our analysis relies heavily on a particular feature shared by all the deterministic growth models considered here, namely that their solutions exhibit an exponentially weighted averaging property between a function of the initial condition, and the same function applied to the carrying capacity. This property is due to the fact that these systems can be transformed into affine systems.
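
    As a numerical illustration of the sampled process studied above (an assumed logistic growth model with Poisson-timed disturbances, not the authors' exact formulation), the following Python sketch simulates the discrete-time process obtained by sampling immediately after each disturbance.

      import numpy as np

      rng = np.random.default_rng(0)

      def post_disturbance_process(x0, r=1.0, K=1.0, rate=0.5, survival=0.6, n_events=10000):
          """Logistic growth between Poisson-timed disturbances; each disturbance
          instantly reduces the population to the fraction `survival` of its size.
          Returns the discrete process sampled just after each disturbance."""
          x, xs = x0, []
          for _ in range(n_events):
              tau = rng.exponential(1.0 / rate)               # time to next disturbance
              x = K * x / (x + (K - x) * np.exp(-r * tau))    # exact logistic solution
              x *= survival                                   # instantaneous reduction
              xs.append(x)
          return np.array(xs)

      xs = post_disturbance_process(0.5)
      print("long-run mean:", xs[1000:].mean())   # bounded away from 0 => persistence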

  5. BioNSi: A Discrete Biological Network Simulator Tool.

    PubMed

    Rubinstein, Amir; Bracha, Noga; Rudner, Liat; Zucker, Noga; Sloin, Hadas E; Chor, Benny

    2016-08-05

    Modeling and simulation of biological networks is an effective and widely used research methodology. The Biological Network Simulator (BioNSi) is a tool for modeling biological networks and simulating their discrete-time dynamics, implemented as a Cytoscape App. BioNSi includes a visual representation of the network that enables researchers to construct networks, set parameters, and observe network behavior under various conditions. To construct a network instance in BioNSi, only partial, qualitative biological data suffice. The tool is aimed at experimental biologists and requires no prior computational or mathematical expertise. BioNSi is freely available at http://bionsi.wix.com/bionsi , where a complete user guide and a step-by-step manual can also be found.
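
    BioNSi itself is a Cytoscape App; purely as an illustration of discrete-time network dynamics of this general flavor (the update rule and network below are invented, not BioNSi's actual semantics), consider the following Python sketch, in which each node holds an integer level and moves one step toward the summed influence of its regulators.

      EDGES = {("A", "B"): +1, ("B", "C"): +1, ("C", "A"): -1}   # signed regulation
      state = {"A": 2, "B": 0, "C": 0}                           # integer expression levels
      LEVELS = range(0, 4)

      def step(state):
          nxt = {}
          for node in state:
              influence = sum(w * state[src]
                              for (src, dst), w in EDGES.items() if dst == node)
              target = max(min(influence, max(LEVELS)), min(LEVELS))  # clamp to range
              nxt[node] = state[node] + (target > state[node]) - (target < state[node])
          return nxt

      for t in range(8):      # synchronous discrete-time updates
          print(t, state)
          state = step(state)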

  6. A computational approach to extinction events in chemical reaction networks with discrete state spaces.

    PubMed

    Johnston, Matthew D

    2017-12-01

    Recent work of Johnston et al. has produced sufficient conditions on the structure of a chemical reaction network which guarantee that the corresponding discrete state space system exhibits an extinction event. The conditions consist of a series of systems of equalities and inequalities on the edges of a modified reaction network called a domination-expanded reaction network. In this paper, we present a computational implementation of these conditions written in Python and apply the program on examples drawn from the biochemical literature. We also run the program on 458 models from the European Bioinformatics Institute's BioModels Database and report our results. Copyright © 2017 Elsevier Inc. All rights reserved.
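
    For a concrete feel of an extinction event in a discrete state space (a standard birth-death toy network, not one of the paper's examples and not its structural conditions), the following Python sketch runs a Gillespie simulation until the absorbing zero state is reached.

      import numpy as np

      rng = np.random.default_rng(1)

      def ssa_extinction_time(x0=20, birth=0.9, death=1.0, t_max=1e6):
          """Gillespie simulation of X -> 2X (rate birth*X) and X -> 0 (rate death*X).
          With birth < death the discrete process hits the absorbing state X = 0."""
          x, t = x0, 0.0
          while x > 0 and t < t_max:
              a1, a2 = birth * x, death * x
              t += rng.exponential(1.0 / (a1 + a2))          # time to next reaction
              x += 1 if rng.random() < a1 / (a1 + a2) else -1
          return t if x == 0 else np.inf

      times = [ssa_extinction_time() for _ in range(200)]
      print("mean extinction time:", np.mean(times))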

  7. When to use discrete event simulation (DES) for the economic evaluation of health technologies? A review and critique of the costs and benefits of DES.

    PubMed

    Karnon, Jonathan; Haji Ali Afzali, Hossein

    2014-06-01

    Modelling in economic evaluation is an unavoidable fact of life. Cohort-based state transition models are most common, though discrete event simulation (DES) is increasingly being used to implement more complex model structures. The benefits of DES relate to the greater flexibility around the implementation and population of complex models, which may provide more accurate or valid estimates of the incremental costs and benefits of alternative health technologies. The costs of DES relate to the time and expertise required to implement and review complex models, when perhaps a simpler model would suffice. The costs are not borne solely by the analyst, but also by reviewers. In particular, modelled economic evaluations are often submitted to support reimbursement decisions for new technologies, for which detailed model reviews are generally undertaken on behalf of the funding body. This paper reports the results from a review of published DES-based economic evaluations. Factors underlying the use of DES were defined, and the characteristics of applied models were considered, to inform options for assessing the potential benefits of DES in relation to each factor. Four broad factors underlying the use of DES were identified: baseline heterogeneity, continuous disease markers, time-varying event rates, and the influence of prior events on subsequent event rates. If relevant individual-level data are available, representation of the four factors is likely to improve model validity, and it is possible to assess the importance of their representation in individual cases. A thorough model performance evaluation is required to overcome the costs of DES from the users' perspective, but few of the reviewed DES models reported such a process. More generally, further direct, empirical comparisons of complex models with simpler models would better inform the benefits of DES for implementing more complex models, and the circumstances in which such benefits are most likely.

  8. ORACLS- OPTIMAL REGULATOR ALGORITHMS FOR THE CONTROL OF LINEAR SYSTEMS (CDC VERSION)

    NASA Technical Reports Server (NTRS)

    Armstrong, E. S.

    1994-01-01

    This control theory design package, called Optimal Regulator Algorithms for the Control of Linear Systems (ORACLS), was developed to aid in the design of controllers and optimal filters for systems which can be modeled by linear, time-invariant differential and difference equations. Optimal linear quadratic regulator theory, currently referred to as the Linear-Quadratic-Gaussian (LQG) problem, has become the most widely accepted method of determining optimal control policy. Within this theory, the infinite duration time-invariant problems, which lead to constant gain feedback control laws and constant Kalman-Bucy filter gains for reconstruction of the system state, exhibit high tractability and potential ease of implementation. A variety of new and efficient methods in the field of numerical linear algebra have been combined into the ORACLS program, which provides for the solution to time-invariant continuous or discrete LQG problems. The ORACLS package is particularly attractive to the control system designer because it provides a rigorous tool for dealing with multi-input and multi-output dynamic systems in both continuous and discrete form. The ORACLS programming system is a collection of subroutines which can be used to formulate, manipulate, and solve various LQG design problems. The ORACLS program is constructed in a manner which permits the user to maintain considerable flexibility at each operational state. This flexibility is accomplished by providing primary operations, analysis of linear time-invariant systems, and control synthesis based on LQG methodology. The input-output routines handle the reading and writing of numerical matrices, printing heading information, and accumulating output information. The basic vector-matrix operations include addition, subtraction, multiplication, equation, norm construction, tracing, transposition, scaling, juxtaposition, and construction of null and identity matrices. The analysis routines provide for the following computations: the eigenvalues and eigenvectors of real matrices; the relative stability of a given matrix; matrix factorization; the solution of linear constant coefficient vector-matrix algebraic equations; the controllability properties of a linear time-invariant system; the steady-state covariance matrix of an open-loop stable system forced by white noise; and the transient response of continuous linear time-invariant systems. The control law design routines of ORACLS implement some of the more common techniques of time-invariant LQG methodology. For the finite-duration optimal linear regulator problem with noise-free measurements, continuous dynamics, and integral performance index, a routine is provided which implements the negative exponential method for finding both the transient and steady-state solutions to the matrix Riccati equation. For the discrete version of this problem, the method of backwards differencing is applied to find the solutions to the discrete Riccati equation. A routine is also included to solve the steady-state Riccati equation by the Newton algorithms described by Klein, for continuous problems, and by Hewer, for discrete problems. Another routine calculates the prefilter gain to eliminate control state cross-product terms in the quadratic performance index and the weighting matrices for the sampled data optimal linear regulator problem. 
For cases with measurement noise, duality theory and optimal regulator algorithms are used to calculate solutions to the continuous and discrete Kalman-Bucy filter problems. Finally, routines are included to implement the continuous and discrete forms of the explicit (model-in-the-system) and implicit (model-in-the-performance-index) model following theory. These routines generate linear control laws which cause the output of a dynamic time-invariant system to track the output of a prescribed model. In order to apply ORACLS, the user must write an executive (driver) program which inputs the problem coefficients, formulates and selects the routines to be used to solve the problem, and specifies the desired output. There are three versions of ORACLS source code available for implementation: CDC, IBM, and DEC. The CDC version has been implemented on a CDC 6000 series computer with a central memory of approximately 13K (octal) of 60 bit words. The CDC version is written in FORTRAN IV, was developed in 1978, and last updated in 1989. The IBM version has been implemented on an IBM 370 series computer with a central memory requirement of approximately 300K of 8 bit bytes. The IBM version is written in FORTRAN IV and was generated in 1981. The DEC version has been implemented on a VAX series computer operating under VMS. The VAX version is written in FORTRAN 77 and was generated in 1986.
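
    As a compact illustration of the steady-state discrete Riccati solution mentioned above (a plain backward iteration written in Python for brevity, rather than ORACLS' FORTRAN routines; the example system is invented), consider:

      import numpy as np

      def dare_by_iteration(A, B, Q, R, iters=500):
          """Steady-state discrete Riccati solution by iterating the Riccati
          difference equation backwards in time until it converges."""
          P = Q.copy()
          for _ in range(iters):
              K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # feedback gain
              P = Q + A.T @ P @ (A - B @ K)
          return P, K

      A = np.array([[1.0, 1.0], [0.0, 1.0]])   # discrete-time double integrator
      B = np.array([[0.5], [1.0]])
      Q, R = np.eye(2), np.array([[1.0]])
      P, K = dare_by_iteration(A, B, Q, R)
      print("gain K =", K)                      # u = -K x stabilizes the system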

  9. ORACLS- OPTIMAL REGULATOR ALGORITHMS FOR THE CONTROL OF LINEAR SYSTEMS (DEC VAX VERSION)

    NASA Technical Reports Server (NTRS)

    Frisch, H.

    1994-01-01

    This control theory design package, called Optimal Regulator Algorithms for the Control of Linear Systems (ORACLS), was developed to aid in the design of controllers and optimal filters for systems which can be modeled by linear, time-invariant differential and difference equations. Optimal linear quadratic regulator theory, currently referred to as the Linear-Quadratic-Gaussian (LQG) problem, has become the most widely accepted method of determining optimal control policy. Within this theory, the infinite duration time-invariant problems, which lead to constant gain feedback control laws and constant Kalman-Bucy filter gains for reconstruction of the system state, exhibit high tractability and potential ease of implementation. A variety of new and efficient methods in the field of numerical linear algebra have been combined into the ORACLS program, which provides for the solution to time-invariant continuous or discrete LQG problems. The ORACLS package is particularly attractive to the control system designer because it provides a rigorous tool for dealing with multi-input and multi-output dynamic systems in both continuous and discrete form. The ORACLS programming system is a collection of subroutines which can be used to formulate, manipulate, and solve various LQG design problems. The ORACLS program is constructed in a manner which permits the user to maintain considerable flexibility at each operational state. This flexibility is accomplished by providing primary operations, analysis of linear time-invariant systems, and control synthesis based on LQG methodology. The input-output routines handle the reading and writing of numerical matrices, printing heading information, and accumulating output information. The basic vector-matrix operations include addition, subtraction, multiplication, equation, norm construction, tracing, transposition, scaling, juxtaposition, and construction of null and identity matrices. The analysis routines provide for the following computations: the eigenvalues and eigenvectors of real matrices; the relative stability of a given matrix; matrix factorization; the solution of linear constant coefficient vector-matrix algebraic equations; the controllability properties of a linear time-invariant system; the steady-state covariance matrix of an open-loop stable system forced by white noise; and the transient response of continuous linear time-invariant systems. The control law design routines of ORACLS implement some of the more common techniques of time-invariant LQG methodology. For the finite-duration optimal linear regulator problem with noise-free measurements, continuous dynamics, and integral performance index, a routine is provided which implements the negative exponential method for finding both the transient and steady-state solutions to the matrix Riccati equation. For the discrete version of this problem, the method of backwards differencing is applied to find the solutions to the discrete Riccati equation. A routine is also included to solve the steady-state Riccati equation by the Newton algorithms described by Klein, for continuous problems, and by Hewer, for discrete problems. Another routine calculates the prefilter gain to eliminate control state cross-product terms in the quadratic performance index and the weighting matrices for the sampled data optimal linear regulator problem. 
For cases with measurement noise, duality theory and optimal regulator algorithms are used to calculate solutions to the continuous and discrete Kalman-Bucy filter problems. Finally, routines are included to implement the continuous and discrete forms of the explicit (model-in-the-system) and implicit (model-in-the-performance-index) model following theory. These routines generate linear control laws which cause the output of a dynamic time-invariant system to track the output of a prescribed model. In order to apply ORACLS, the user must write an executive (driver) program which inputs the problem coefficients, formulates and selects the routines to be used to solve the problem, and specifies the desired output. There are three versions of ORACLS source code available for implementation: CDC, IBM, and DEC. The CDC version has been implemented on a CDC 6000 series computer with a central memory of approximately 13K (octal) of 60 bit words. The CDC version is written in FORTRAN IV, was developed in 1978, and last updated in 1986. The IBM version has been implemented on an IBM 370 series computer with a central memory requirement of approximately 300K of 8 bit bytes. The IBM version is written in FORTRAN IV and was generated in 1981. The DEC version has been implemented on a VAX series computer operating under VMS. The VAX version is written in FORTRAN 77 and was generated in 1986.

  10. The Livingstone Model of a Main Propulsion System

    NASA Technical Reports Server (NTRS)

    Bajwa, Anupa; Sweet, Adam; Korsmeyer, David (Technical Monitor)

    2003-01-01

    Livingstone is a discrete, propositional logic-based inference engine that has been used for diagnosis of physical systems. We present a component-based model of a Main Propulsion System (MPS) and describe how it is used with Livingstone (L2) in order to implement a diagnostic system for integrated vehicle health management (IVHM) for the Propulsion IVHM Technology Experiment (PITEX). We start by discussing the process of conceptualizing such a model. We describe graphical tools that facilitated the generation of the model. The model is composed of components (which map onto physical components), connections between components, and constraints. A component is specified by variables, with a set of discrete, qualitative values for each variable in its local nominal and failure modes. For each mode, the model specifies the component's behavior and transitions. We describe the MPS components' nominal and fault modes and the associated Livingstone variables and data structures. Given this model, and observed external commands and observations from the system, Livingstone tracks the state of the MPS over discrete time-steps by choosing trajectories that are consistent with observations. We briefly discuss how the compiled model fits into the overall PITEX architecture. Finally, we summarize our modeling experience, discuss advantages and disadvantages of our approach, and suggest enhancements to the modeling process.

  11. Compression simulations of plant tissue in 3D using a mass-spring system approach and discrete element method.

    PubMed

    Pieczywek, Piotr M; Zdunek, Artur

    2017-10-18

    A hybrid model based on a mass-spring system methodology coupled with the discrete element method (DEM) was implemented to simulate the deformation of cellular structures in 3D. Models of individual cells were constructed using particles which cover the surfaces of the cell walls and are interconnected in a triangular mesh network by viscoelastic springs. The spatial arrangement of the cells required to construct a virtual tissue was obtained using Poisson-disc sampling and Voronoi tessellation in 3D space. Three structural features were included in the model: the viscoelastic material of the cell walls, the linearly elastic interior of the cells (simulating compressible liquid), and a gas phase in the intercellular spaces. The response of the models to an external load was demonstrated during quasi-static compression simulations. The sensitivity of the model was investigated at fixed compression parameters with variable tissue porosity, cell size, and cell wall properties, such as thickness and Young's modulus, and a stiffness of the cell interior that simulated turgor pressure. The extent of the agreement between the simulation results and other published models is discussed. The model demonstrated the significant influence of tissue structure on micromechanical properties and allowed for the interpretation of the compression test results with respect to changes occurring in the structure of the virtual tissue. During compression, virtual structures composed of smaller cells produced higher reaction forces and were therefore stiffer than structures with large cells. An increase in the number of intercellular spaces (porosity) resulted in a decrease in reaction forces. The numerical model was capable of simulating the quasi-static compression experiment and reproducing the strain stiffening observed experimentally. Stress accumulation at the edges of the cell walls where three cells meet suggests that cell-to-cell debonding and crack propagation through the contact edge of neighboring cells is one of the most prevalent ways for tissue to rupture.
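
    A highly reduced sketch of the mass-spring ingredient of such a model (assuming Python; the parameters and the two-particle example are illustrative, not the paper's 3D tissue code): each spring carries an elastic term plus a dashpot term on the relative velocity, integrated explicitly.

      import numpy as np

      def step(pos, vel, edges, rest, k=100.0, c=2.0, m=1.0, dt=1e-3):
          """One semi-implicit Euler step of a viscoelastic mass-spring network."""
          force = np.zeros_like(pos)
          for (i, j), L0 in zip(edges, rest):
              d = pos[j] - pos[i]
              L = np.linalg.norm(d)
              n = d / L
              fs = k * (L - L0) * n                       # elastic spring force
              fd = c * np.dot(vel[j] - vel[i], n) * n     # viscous (dashpot) force
              force[i] += fs + fd
              force[j] -= fs + fd
          vel = vel + dt * force / m
          return pos + dt * vel, vel

      # Two particles joined by one spring, stretched 10% beyond rest length.
      pos = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0]])
      vel = np.zeros_like(pos)
      for _ in range(2000):
          pos, vel = step(pos, vel, edges=[(0, 1)], rest=[1.0])
      print("final separation:", np.linalg.norm(pos[1] - pos[0]))  # relaxes toward 1.0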

  12. Cell-to-Cell Communication Circuits: Quantitative Analysis of Synthetic Logic Gates

    PubMed Central

    Hoffman-Sommer, Marta; Supady, Adriana; Klipp, Edda

    2012-01-01

    One of the goals in the field of synthetic biology is the construction of cellular computation devices that could function in a manner similar to electronic circuits. To this end, attempts are made to create biological systems that function as logic gates. In this work we present a theoretical quantitative analysis of a synthetic cellular logic-gates system, which has been implemented in cells of the yeast Saccharomyces cerevisiae (Regot et al., 2011). It exploits endogenous MAP kinase signaling pathways. The novelty of the system lies in the compartmentalization of the circuit where all basic logic gates are implemented in independent single cells that can then be cultured together to perform complex logic functions. We have constructed kinetic models of the multicellular IDENTITY, NOT, OR, and IMPLIES logic gates, using both deterministic and stochastic frameworks. All necessary model parameters are taken from literature or estimated based on published kinetic data, in such a way that the resulting models correctly capture important dynamic features of the included mitogen-activated protein kinase pathways. We analyze the models in terms of parameter sensitivity and we discuss possible ways of optimizing the system, e.g., by tuning the culture density. We apply a stochastic modeling approach, which simulates the behavior of whole populations of cells and allows us to investigate the noise generated in the system; we find that the gene expression units are the major sources of noise. Finally, the model is used for the design of system modifications: we show how the current system could be transformed to operate on three discrete values. PMID:22934039

  13. Modelling machine ensembles with discrete event dynamical system theory

    NASA Technical Reports Server (NTRS)

    Hunter, Dan

    1990-01-01

    Discrete Event Dynamical System (DEDS) theory can be utilized as a control strategy for the future complex machine ensembles that will be required for in-space construction. The control strategy involves orchestrating a set of interactive submachines to perform a set of tasks under a given set of constraints such as minimum time, minimum energy, or maximum machine utilization. Machine ensembles can be hierarchically modeled as a global model that combines the operations of the individual submachines. These submachines are represented in the global model as local models. A local model, from the perspective of DEDS theory, is described by the following: a set of system and transition states, an event alphabet that portrays actions that take a submachine from one state to another, an initial system state, a partial function that maps the current state and event alphabet to the next state, and the time required for the event to occur. Each submachine in the machine ensemble is represented by a unique local model. The global model combines the local models such that the local models can operate in parallel under the additional logistic and physical constraints due to submachine interactions. The global model is constructed from the states, events, event functions, and timing requirements of the local models. Supervisory control can be implemented in the global model by various methods such as task scheduling (open-loop control) or implementing a feedback DEDS controller (closed-loop control).
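
    The tuple that defines a local model above translates almost directly into code. A minimal Python sketch (the machine, events, and durations are invented for illustration, not taken from the paper):

      class LocalModel:
          def __init__(self, states, events, init, delta, durations):
              self.states, self.events = states, events
              self.state, self.clock = init, 0.0
              self.delta = delta            # partial map: (state, event) -> next state
              self.durations = durations    # event -> time the event takes

          def fire(self, event):
              key = (self.state, event)
              if key not in self.delta:
                  raise ValueError(f"event {event!r} not enabled in state {self.state!r}")
              self.state = self.delta[key]
              self.clock += self.durations[event]

      robot = LocalModel(
          states={"idle", "moving", "welding"},
          events={"start", "arrive", "weld_done"},
          init="idle",
          delta={("idle", "start"): "moving",
                 ("moving", "arrive"): "welding",
                 ("welding", "weld_done"): "idle"},
          durations={"start": 0.0, "arrive": 12.0, "weld_done": 30.0})

      for e in ["start", "arrive", "weld_done"]:
          robot.fire(e)
      print(robot.state, robot.clock)   # back to 'idle' after 42.0 time units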

  14. Development of Distinctive Feature Theory.

    ERIC Educational Resources Information Center

    Meyer, Peggy L.

    Since the beginning of man's awareness of his language capabilities and language structure, he has assumed that speech is composed of discrete entities. The linguist attempts to establish a model of the workings of these distinctive sounds in a language. Utilizing an historical basis for discussion, this general survey of the distinctive feature…

  15. Wavelet transforms with discrete-time continuous-dilation wavelets

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Rao, Raghuveer M.

    1999-03-01

    Wavelet constructions and transforms have been confined principally to the continuous-time domain. Even the discrete wavelet transform implemented through multirate filter banks is based on continuous-time wavelet functions that provide orthogonal or biorthogonal decompositions. This paper provides a novel wavelet transform construction based on the definition of discrete-time wavelets that can undergo continuous parameter dilations. The result is a transformation that has the advantage of discrete-time or digital implementation while circumventing the problem of inadequate scaling resolution seen with conventional dyadic or M-channel constructions. Examples of constructing such wavelets are presented.

  16. The discrete Laplace exponential family and estimation of Y-STR haplotype frequencies.

    PubMed

    Andersen, Mikkel Meyer; Eriksen, Poul Svante; Morling, Niels

    2013-07-21

    Estimating haplotype frequencies is important in e.g. forensic genetics, where the frequencies are needed to calculate the likelihood ratio for the evidential weight of a DNA profile found at a crime scene. Estimation is naturally based on a population model, motivating the investigation of the Fisher-Wright model of evolution for haploid lineage DNA markers. An exponential family (a class of probability distributions that is well understood in probability theory such that inference is easily made by using existing software) called the 'discrete Laplace distribution' is described. We illustrate how well the discrete Laplace distribution approximates a more complicated distribution that arises by investigating the well-known population genetic Fisher-Wright model of evolution by a single-step mutation process. It was shown how the discrete Laplace distribution can be used to estimate haplotype frequencies for haploid lineage DNA markers (such as Y-chromosomal short tandem repeats), which in turn can be used to assess the evidential weight of a DNA profile found at a crime scene. This was done by making inference in a mixture of multivariate, marginally independent, discrete Laplace distributions using the EM algorithm to estimate the probabilities of membership of a set of unobserved subpopulations. The discrete Laplace distribution can be used to estimate haplotype frequencies with lower prediction error than other existing estimators. Furthermore, the calculations could be performed on a normal computer. This method was implemented in the freely available open source software R that is supported on Linux, MacOS and MS Windows. Copyright © 2013 Elsevier Ltd. All rights reserved.
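
    For concreteness, the discrete Laplace distribution has pmf P(X = k) = ((1 - p)/(1 + p)) p^|k| for integer k and 0 < p < 1. The following Python sketch (with illustrative parameter values, not estimates from the paper) evaluates a haplotype frequency as a product of independent discrete Laplace terms across loci, mirroring the marginal-independence assumption described above.

      import numpy as np

      def dlaplace_pmf(k, p):
          """Discrete Laplace pmf on the integers: P(X = k) = (1-p)/(1+p) * p**|k|."""
          return (1 - p) / (1 + p) * p ** np.abs(k)

      def haplotype_frequency(haplotype, center, p):
          """Frequency under marginal independence across loci: a product of one
          discrete Laplace per locus, centered on a 'central' haplotype."""
          return np.prod([dlaplace_pmf(h - c, p_i)
                          for h, c, p_i in zip(haplotype, center, p)])

      # Three illustrative loci; repeat numbers relative to a central haplotype.
      print(haplotype_frequency(haplotype=[14, 30, 21], center=[14, 29, 22],
                                p=[0.3, 0.35, 0.25]))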

  17. Numerical integration techniques for curved-element discretizations of molecule-solvent interfaces.

    PubMed

    Bardhan, Jaydeep P; Altman, Michael D; Willis, David J; Lippow, Shaun M; Tidor, Bruce; White, Jacob K

    2007-07-07

    Surface formulations of biophysical modeling problems offer attractive theoretical and computational properties. Numerical simulations based on these formulations usually begin with discretization of the surface under consideration; often, the surface is curved, possessing complicated structure and possibly singularities. Numerical simulations commonly are based on approximate, rather than exact, discretizations of these surfaces. To assess the strength of the dependence of simulation accuracy on the fidelity of surface representation, here methods were developed to model several important surface formulations using exact surface discretizations. Following and refining Zauhar's work [J. Comput.-Aided Mol. Des. 9, 149 (1995)], two classes of curved elements were defined that can exactly discretize the van der Waals, solvent-accessible, and solvent-excluded (molecular) surfaces. Numerical integration techniques are presented that can accurately evaluate nonsingular and singular integrals over these curved surfaces. After validating the exactness of the surface discretizations and demonstrating the correctness of the presented integration methods, a set of calculations are presented that compare the accuracy of approximate, planar-triangle-based discretizations and exact, curved-element-based simulations of surface-generalized-Born (sGB), surface-continuum van der Waals (scvdW), and boundary-element method (BEM) electrostatics problems. Results demonstrate that continuum electrostatic calculations with BEM using curved elements, piecewise-constant basis functions, and centroid collocation are nearly ten times more accurate than planar-triangle BEM for basis sets of comparable size. The sGB and scvdW calculations give exceptional accuracy even for the coarsest obtainable discretized surfaces. The extra accuracy is attributed to the exact representation of the solute-solvent interface; in contrast, commonly used planar-triangle discretizations can only offer improved approximations with increasing discretization and associated increases in computational resources. The results clearly demonstrate that the methods for approximate integration on an exact geometry are far more accurate than exact integration on an approximate geometry. A MATLAB implementation of the presented integration methods and sample data files containing curved-element discretizations of several small molecules are available online as supplemental material.

  18. Implementation of a Smeared Crack Band Model in a Micromechanics Framework

    NASA Technical Reports Server (NTRS)

    Pineda, Evan J.; Bednarcyk, Brett A.; Waas, Anthony M.; Arnold, Steven M.

    2012-01-01

    The smeared crack band theory is implemented within the generalized method of cells and high-fidelity generalized method of cells micromechanics models to capture progressive failure within the constituents of a composite material while retaining objectivity with respect to the size of the discretization elements used in the model. A repeating unit cell containing 13 randomly arranged fibers is modeled and subjected to a combination of transverse tension/compression and transverse shear loading. The implementation is verified against experimental data (where available) and an equivalent finite element model utilizing the same implementation of the crack band theory. To evaluate the performance of the crack band theory within a repeating unit cell that is more amenable to a multiscale implementation, a single fiber is modeled with the generalized method of cells and high-fidelity generalized method of cells using a relatively coarse subcell mesh, which is subjected to the same loading scenarios as the multiple-fiber repeating unit cell. The generalized method of cells and high-fidelity generalized method of cells models are validated against a very refined finite element model.

  19. Dirac Cellular Automaton from Split-step Quantum Walk

    PubMed Central

    Mallick, Arindam; Chandrashekar, C. M.

    2016-01-01

    Simulation of one quantum system by another has implications for the realization of quantum machines that can imitate any quantum system and solve problems that are not accessible to classical computers. One approach to engineering quantum simulations is to discretize the space-time degrees of freedom in quantum dynamics and define quantum cellular automata (QCA), a local unitary update rule on a lattice. Different models of QCA are constructed using sets of conditions which are not unique and are not always in an implementable configuration on other systems. The Dirac Cellular Automaton (DCA) is one such model, constructed for the Dirac Hamiltonian (DH) in free quantum field theory. Here, starting from a split-step discrete-time quantum walk (QW), which is uniquely defined for experimental implementation, we recover the DCA along with all the fine oscillations in position space and bridge the missing connection between DH, DCA, and QW. We present the contribution of the parameters responsible for the fine oscillations to the Zitterbewegung frequency and entanglement. The tuneability of the evolution parameters demonstrated in experimental implementations of the QW establishes it as an efficient tool to design quantum simulators and approach quantum field theory from the principles of quantum information theory. PMID:27184159
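
    A split-step discrete-time quantum walk is straightforward to simulate numerically. The following Python sketch (a standard construction with assumed parameter values, not the authors' code) alternates two coin rotations with direction-dependent half-shifts and checks that probability is conserved.

      import numpy as np

      def coin(theta):
          return np.array([[np.cos(theta), -np.sin(theta)],
                           [np.sin(theta),  np.cos(theta)]])

      def split_step(psi, theta1, theta2):
          """One split-step QW step: coin rotation, left-shift of the 'down'
          amplitude, second coin rotation, right-shift of the 'up' amplitude."""
          psi = psi @ coin(theta1).T
          psi[:, 1] = np.roll(psi[:, 1], -1)     # shift down component left
          psi = psi @ coin(theta2).T
          psi[:, 0] = np.roll(psi[:, 0], +1)     # shift up component right
          return psi

      n = 201
      psi = np.zeros((n, 2), dtype=complex)
      psi[n // 2] = [1 / np.sqrt(2), 1j / np.sqrt(2)]    # symmetric initial coin state
      for _ in range(80):
          psi = split_step(psi, theta1=0.0, theta2=np.pi / 4)
      prob = (np.abs(psi) ** 2).sum(axis=1)
      print("total probability:", prob.sum())            # unitary evolution: stays 1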

  20. Discretization and Preconditioning Algorithms for the Euler and Navier-Stokes Equations on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    Several stabilized discretization procedures for conservation law equations on triangulated domains will be considered. Specifically, numerical schemes based on upwind finite volume, fluctuation splitting, Galerkin least-squares, and space discontinuous Galerkin discretizations will be considered in detail. A standard energy analysis for several of these methods will be given via entropy symmetrization. Next, we will present some relatively new theoretical results concerning congruence relationships for left or right symmetrized equations. These results suggest new variants of existing FV, DG, GLS and FS methods which are computationally more efficient while retaining the pleasant theoretical properties achieved by entropy symmetrization. In addition, the task of Jacobian linearization of these schemes for use in Newton's method is greatly simplified owing to the exploitation of exact symmetries which exist in the system. These variants have been implemented in the "ELF" library, for which example calculations will be shown. The FV, FS and DG schemes also permit discrete maximum principle analysis and enforcement, which greatly adds to the robustness of the methods. Some prevalent limiting strategies will be reviewed. Next, we consider embedding these nonlinear space discretizations into exact and inexact Newton solvers which are preconditioned using a nonoverlapping (Schur complement) domain decomposition technique. Elements of nonoverlapping domain decomposition for elliptic problems will be reviewed, followed by the present extension to hyperbolic and elliptic-hyperbolic problems. Other issues of practical relevance such as the meshing of geometries, code implementation, turbulence modeling, global convergence, etc., will be addressed as needed.

  1. Coupled intertwiner dynamics: A toy model for coupling matter to spin foam models

    NASA Astrophysics Data System (ADS)

    Steinhaus, Sebastian

    2015-09-01

    The universal coupling of matter and gravity is one of the most important features of general relativity. In quantum gravity, in particular spin foams, matter couplings have been defined in the past, yet the mutual dynamics, in particular if matter and gravity are strongly coupled, are hardly explored, which is related to the definition of both matter and gravitational degrees of freedom on the discretization. However, extracting these mutual dynamics is crucial in testing the viability of the spin foam approach and also establishing connections to other discrete approaches such as lattice gauge theories. Therefore, we introduce a simple two-dimensional toy model for Yang-Mills coupled to spin foams, namely an Ising model coupled to so-called intertwiner models defined for SU(2)_k. The two systems are coupled by choosing the Ising coupling constant to depend on spin labels of the background, as these are interpreted as the edge lengths of the discretization. We coarse-grain this toy model via tensor network renormalization and uncover an interesting dynamics: the Ising phase transition temperature turns out to be sensitive to the background configurations and conversely, the Ising model can induce phase transitions in the background. Moreover, we observe a strong coupling of both systems if close to both phase transitions.
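
    The coupling mechanism is easy to illustrate in miniature. Below is a Python sketch of Metropolis dynamics for an Ising model whose bond couplings depend on a frozen background label per edge (random numbers here stand in for the background spin labels; the paper's actual coarse-graining by tensor network renormalization is not attempted).

      import numpy as np

      rng = np.random.default_rng(4)

      L, beta = 16, 0.5
      spins = rng.choice([-1, 1], size=(L, L))
      Jx = rng.uniform(0.8, 1.2, size=(L, L))   # coupling to the right neighbour
      Jy = rng.uniform(0.8, 1.2, size=(L, L))   # coupling to the lower neighbour

      def local_field(i, j):
          """Sum of background-weighted neighbour spins around site (i, j)."""
          return (Jx[i, j] * spins[i, (j + 1) % L]
                  + Jx[i, (j - 1) % L] * spins[i, (j - 1) % L]
                  + Jy[i, j] * spins[(i + 1) % L, j]
                  + Jy[(i - 1) % L, j] * spins[(i - 1) % L, j])

      for sweep in range(200):                   # Metropolis single-spin updates
          for _ in range(L * L):
              i, j = rng.integers(L), rng.integers(L)
              dE = 2 * spins[i, j] * local_field(i, j)
              if dE <= 0 or rng.random() < np.exp(-beta * dE):
                  spins[i, j] *= -1
      print("magnetization per site:", spins.mean())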

  2. Experimental quantum key distribution with source flaws

    NASA Astrophysics Data System (ADS)

    Xu, Feihu; Wei, Kejin; Sajeed, Shihan; Kaiser, Sarah; Sun, Shihai; Tang, Zhiyuan; Qian, Li; Makarov, Vadim; Lo, Hoi-Kwong

    2015-09-01

    Decoy-state quantum key distribution (QKD) is a standard technique in current quantum cryptographic implementations. Unfortunately, existing experiments have two important drawbacks: the state preparation is assumed to be perfect without errors and the employed security proofs do not fully consider the finite-key effects for general attacks. These two drawbacks mean that existing experiments are not guaranteed to be proven to be secure in practice. Here, we perform an experiment that shows secure QKD with imperfect state preparations over long distances and achieves rigorous finite-key security bounds for decoy-state QKD against coherent attacks in the universally composable framework. We quantify the source flaws experimentally and demonstrate a QKD implementation that is tolerant to channel loss despite the source flaws. Our implementation considers more real-world problems than most previous experiments, and our theory can be applied to general discrete-variable QKD systems. These features constitute a step towards secure QKD with imperfect devices.

  3. Using the Multilayer Free-Surface Flow Model to Solve Wave Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prokof’ev, V. A., E-mail: ProkofyevVA@vniig.ru

    2017-01-15

    A method is presented for changing over from a single-layer shallow-water model to a multilayer model with a hydrostatic pressure profile and, then, to a multilayer model with a nonhydrostatic pressure profile. The method does not require complex procedures for solving the discrete Poisson equation and features high computational efficiency. The results of validating the algorithm against experimental data that are critical with respect to the numerical dissipation of the scheme are presented. Examples are considered.

  4. Structure of random discrete spacetime

    NASA Technical Reports Server (NTRS)

    Brightwell, Graham; Gregory, Ruth

    1991-01-01

    The usual picture of spacetime consists of a continuous manifold, together with a metric of Lorentzian signature which imposes a causal structure on the spacetime. A model, first suggested by Bombelli et al., is considered in which spacetime consists of a discrete set of points taken at random from a manifold, with only the causal structure on this set remaining. This structure constitutes a partially ordered set (or poset). Working from the poset alone, it is shown how to construct a metric on the space which closely approximates the metric on the original spacetime manifold, how to define the effective dimension of the spacetime, and how such quantities may depend on the scale of measurement. Possible desirable features of the model are discussed.

  5. The structure of random discrete spacetime

    NASA Technical Reports Server (NTRS)

    Brightwell, Graham; Gregory, Ruth

    1990-01-01

    The usual picture of spacetime consists of a continuous manifold, together with a metric of Lorentzian signature which imposes a causal structure on the spacetime. A model, first suggested by Bombelli et al., is considered in which spacetime consists of a discrete set of points taken at random from a manifold, with only the causal structure on this set remaining. This structure constitutes a partially ordered set (or poset). Working from the poset alone, it is shown how to construct a metric on the space which closely approximates the metric on the original spacetime manifold, how to define the effective dimension of the spacetime, and how such quantities may depend on the scale of measurement. Possible desirable features of the model are discussed.

  6. Layout design-based research on optimization and assessment method for shipbuilding workshop

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Meng, Mei; Liu, Shuang

    2013-06-01

    This study examines a three-dimensional visualization program, with emphasis on improving genetic algorithms for optimizing the layout design of a standard, discrete shipbuilding workshop. Using a steel processing workshop as an example, the principle of minimum logistics cost is applied to obtain an idealized equipment layout and a mathematical model whose objective is to minimize the total travel distance between machines. An improved control operator raises the iterative efficiency of the genetic algorithm and yields the relevant parameters. The Computer Aided Tri-Dimensional Interface Application (CATIA) software is applied to establish the manufacturing resource base and a parametric model of the steel processing workshop. Based on the results of the optimized planar logistics, a visual parametric model of the workshop is constructed, and qualitative and quantitative adjustments are then applied to the model. A method for evaluating the resulting layout is subsequently established using the Analytic Hierarchy Process (AHP). The optimized discrete production workshop offers a practical point of reference for the optimization and layout of digitalized production workshops.

  7. Reversible Parallel Discrete-Event Execution of Large-scale Epidemic Outbreak Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S; Seal, Sudip K

    2010-01-01

    The spatial scale, runtime speed and behavioral detail of epidemic outbreak simulations together require the use of large-scale parallel processing. In this paper, an optimistic parallel discrete event execution of a reaction-diffusion simulation model of epidemic outbreaks is presented, with an implementation over the µsik simulator. Rollback support is achieved with the development of a novel reversible model that combines reverse computation with a small amount of incremental state saving. Parallel speedup and other runtime performance metrics of the simulation are tested on a small (8,192-core) Blue Gene/P system, while scalability is demonstrated on 65,536 cores of a large Cray XT5 system. Scenarios representing large population sizes (up to several hundred million individuals in the largest case) are exercised.
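    A minimal sketch of the reverse-computation idea (the interfaces and event content here are hypothetical, not the µsik API): each event carries a forward and a reverse method, with a one-item incremental save for the part that cannot be recomputed, in this case the RNG state.

```python
import random

class InfectionEvent:
    """Toy reversible event: forward() applies it, reverse() undoes it."""

    def __init__(self, region):
        self.region = region
        self.saved_rng_state = None            # incremental state saving

    def forward(self):
        self.saved_rng_state = random.getstate()
        self.new_cases = random.randint(1, 5)
        self.region["infected"] += self.new_cases   # invertible by subtraction

    def reverse(self):
        self.region["infected"] -= self.new_cases   # reverse computation
        random.setstate(self.saved_rng_state)       # restore the RNG stream

region = {"infected": 10}
ev = InfectionEvent(region)
ev.forward()
ev.reverse()                                    # rollback
assert region["infected"] == 10
```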

  8. Two-parameter double-oscillator model of Mathews-Lakshmanan type: Series solutions and supersymmetric partners

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schulze-Halberg, Axel, E-mail: axgeschu@iun.edu, E-mail: xbataxel@gmail.com; Wang, Jie, E-mail: wangjie@iun.edu

    2015-07-15

    We obtain series solutions, the discrete spectrum, and supersymmetric partners for a quantum double-oscillator system. Its potential features a superposition of the one-parameter Mathews-Lakshmanan interaction and a one-parameter harmonic or inverse harmonic oscillator contribution. Furthermore, our results are transferred to a generalized Pöschl-Teller model that is isospectral to the double-oscillator system.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altazi, B; Fernandez, D; Zhang, G

    Purpose: Site-specific investigations of the role of Radiomics in cancer diagnosis and therapy are needed. We report on the reproducibility of quantitative image features over different discrete voxel levels in PET/CT images of cervical cancer. Methods: Our dataset consisted of the pretreatment PET/CT scans from a cohort of 76 patients diagnosed with cervical cancer, FIGO stage IB-IVA, age range 31–76 years, treated with external beam radiation therapy to a dose range between 45–50.4 Gy (median dose: 45 Gy), concurrent cisplatin chemotherapy and MRI-based brachytherapy to a dose of 20–30 Gy (median total dose: 28 Gy). Two board-certified radiation oncologists delineated the Metabolic Tumor Volume (MTV) for each patient. Radiomics features were extracted based on 32, 64, 128 and 256 discretization levels (DL); the 64-level setting was chosen as the reference DL. Features were calculated based on Co-occurrence (COM), Gray Level Size Zone (GLSZM) and Run-Length (RLM) matrices. Mean Percentage Differences (Δ) of features between discrete levels were determined. Normality of the distribution of Δ was tested using the Kolmogorov–Smirnov test. The Bland–Altman test was used to investigate differences between feature values measured at different DL, and the mean, standard deviation and upper/lower limits for each pair of DL were calculated. Intraclass Correlation Coefficient (ICC) analysis was performed to examine the reliability of repeated measures within a test–retest format. Results: 3 global and 5 regional features out of 48 showed distributions not significantly different from normal. The reproducible features passed the normality test. Only 5 reproducible results were reliable, with ICC in the range 0.7–0.99. Conclusion: Most of the radiomics features tested showed sensitivity to the voxel discretization level between 32 and 256. Only 4 GLSZM, 3 COM and 1 RLM features were insensitive to the discrete levels examined.
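    The gray-level rebinning step can be sketched as follows. The discretization formula is a common convention, and the gamma-distributed SUV sample and the variance "feature" are stand-ins for illustration; real radiomics features are computed from the COM/GLSZM/RLM matrices.

```python
import numpy as np

def discretize(suv, levels):
    """Rebin ROI voxel intensities onto a fixed number of gray levels (1..levels)."""
    lo, hi = suv.min(), suv.max()
    bins = np.floor((suv - lo) / (hi - lo) * levels).astype(int)
    return np.clip(bins, 0, levels - 1) + 1

rng = np.random.default_rng(1)
roi = rng.gamma(2.0, 1.5, size=2000)          # synthetic stand-in for SUVs
ref = discretize(roi, 64).var()               # 64 levels as the reference DL

for dl in (32, 64, 128, 256):
    val = discretize(roi, dl).var()           # toy feature at this DL
    delta = 100.0 * (val - ref) / ref         # mean percentage difference
    print(dl, round(delta, 1))
```

    Even this toy feature changes sharply with the discretization level, which is the sensitivity the study quantifies for proper texture features.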

  10. Slip Continuity in Explicit Crystal Plasticity Simulations Using Nonlocal Continuum and Semi-discrete Approaches

    DTIC Science & Technology

    2013-01-01

    Indexed excerpts only: ... based micropolar single crystal plasticity ("... Comparison of Multi- and Single-Criterion Theories," J. Mech. Phys. Solids 2011, 59, 398–422) ... element boundaries in a multi-step constitutive evaluation (Becker, 2011); the results showed the desired effects of smoothing the deformation field ... The model was implemented in the large-scale parallel, explicit finite element code ALE3D (2012) ... crystal plasticity ...

  11. Collective coordinates theory for discrete soliton ratchets in the sine-Gordon model

    NASA Astrophysics Data System (ADS)

    Sánchez-Rey, Bernardo; Quintero, Niurka R.; Cuevas-Maraver, Jesús; Alejo, Miguel A.

    2014-10-01

    A collective coordinate theory is developed for soliton ratchets in the damped discrete sine-Gordon model driven by a biharmonic force. An ansatz with two collective coordinates, namely the center and the width of the soliton, is assumed as an approximated solution of the discrete nonlinear equation. The dynamical equations of these two collective coordinates, obtained by means of the generalized travelling wave method, explain the mechanism underlying the soliton ratchet and capture qualitatively all the main features of this phenomenon. The numerical simulation of these equations accounts for the existence of a nonzero depinning threshold, the nonsinusoidal behavior of the average velocity as a function of the relative phase between the harmonics of the driver, the nonmonotonic dependence of the average velocity on the damping, and the existence of nontransporting regimes beyond the depinning threshold. In particular, it provides a good description of the intriguing and complex pattern of subspaces corresponding to different dynamical regimes in parameter space.
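    A direct numerical sketch of the damped, biharmonically driven discrete sine-Gordon lattice is given below (parameter values are illustrative, not those of the paper, and the collective-coordinate equations themselves are not reproduced here):

```python
import numpy as np

N, dt, steps = 200, 0.01, 50000
alpha = 0.1                                   # damping (illustrative)
eps1, eps2, omega, theta = 0.2, 0.2, 0.1, np.pi / 2

n = np.arange(N)
u = 4.0 * np.arctan(np.exp(n - N / 2))        # kink initial condition
v = np.zeros(N)

def biharmonic(t):
    """Biharmonic driver: two harmonics with relative phase theta."""
    return eps1 * np.cos(omega * t) + eps2 * np.cos(2.0 * omega * t + theta)

for k in range(steps):
    lap = np.zeros(N)
    lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]    # free-end boundaries
    a = lap - np.sin(u) - alpha * v + biharmonic(k * dt)
    v += dt * a                                    # semi-implicit Euler step
    u += dt * v

rho = np.diff(u) / (2.0 * np.pi)                   # topological charge density
print("kink center:", float((n[:-1] * rho).sum() / rho.sum()))
```

    Tracking the kink center over many driving periods yields the average ratchet velocity whose dependence on theta and alpha the collective-coordinate theory explains.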

  12. Collective coordinates theory for discrete soliton ratchets in the sine-Gordon model.

    PubMed

    Sánchez-Rey, Bernardo; Quintero, Niurka R; Cuevas-Maraver, Jesús; Alejo, Miguel A

    2014-10-01

    A collective coordinate theory is developed for soliton ratchets in the damped discrete sine-Gordon model driven by a biharmonic force. An ansatz with two collective coordinates, namely the center and the width of the soliton, is assumed as an approximated solution of the discrete nonlinear equation. The dynamical equations of these two collective coordinates, obtained by means of the generalized travelling wave method, explain the mechanism underlying the soliton ratchet and capture qualitatively all the main features of this phenomenon. The numerical simulation of these equations accounts for the existence of a nonzero depinning threshold, the nonsinusoidal behavior of the average velocity as a function of the relative phase between the harmonics of the driver, the nonmonotonic dependence of the average velocity on the damping, and the existence of nontransporting regimes beyond the depinning threshold. In particular, it provides a good description of the intriguing and complex pattern of subspaces corresponding to different dynamical regimes in parameter space.

  13. ASIC implementation of recursive scaled discrete cosine transform algorithm

    NASA Astrophysics Data System (ADS)

    On, Bill N.; Narasimhan, Sam; Huang, Victor K.

    1994-05-01

    A program to implement the Recursive Scaled Discrete Cosine Transform (DCT) algorithm proposed by H. S. Hou has been undertaken at the Institute of Microelectronics. The design was implemented using a top-down methodology with VHDL (VHSIC Hardware Description Language) for chip modeling. Once the VHDL simulation was satisfactorily completed, the design was synthesized into gates using a synthesis tool. The architecture consists of two processing units together with a memory module for data storage and transpose. Each processing unit is composed of four pipelined stages, which allows the internal clock to run at one-eighth (1/8) the speed of the pixel clock. Each stage operates on eight pixels in parallel. As the data flow through each stage, various adders and multipliers transform them into the desired coefficients. The Scaled IDCT was implemented in a similar fashion, with the adders and multipliers rearranged to perform the inverse DCT algorithm. The chip has been verified using Field Programmable Gate Array devices and the design is operational. The combination of fewer required multiplications and a pipelined architecture gives Hou's Recursive Scaled DCT good potential for achieving high performance at low cost in a Very Large Scale Integration (VLSI) implementation.
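    The recursion underlying such algorithms can be checked numerically: the even-indexed coefficients of an N-point DCT-II equal the (N/2)-point DCT-II of the folded sequence x[n] + x[N-1-n] (the odd coefficients follow a similar recursion on differences, omitted here). This is a generic DCT-II identity, not Hou's full algorithm:

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(4)
x = rng.standard_normal(8)

full = dct(x, type=2)                  # 8-point unnormalized DCT-II
folded = x[:4] + x[::-1][:4]           # x[n] + x[N-1-n], n = 0..3
half = dct(folded, type=2)             # 4-point DCT-II of the folded data

print(np.allclose(full[::2], half))    # True: even coefficients recurse
```

    Folding the remaining per-coefficient constants into a later quantization stage is what makes the "scaled" variant cheap in hardware.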

  14. Meshfree and efficient modeling of swimming cells

    NASA Astrophysics Data System (ADS)

    Gallagher, Meurig T.; Smith, David J.

    2018-05-01

    Locomotion in Stokes flow is an intensively studied problem because it describes important biological phenomena such as the motility of many species' sperm, bacteria, algae, and protozoa. Numerical computations can be challenging, particularly in three dimensions, due to the presence of moving boundaries and complex geometries; methods which combine ease of implementation and computational efficiency are therefore needed. A recently proposed method to discretize the regularized Stokeslet boundary integral equation without the need for a connected mesh is applied to the inertialess locomotion problem in Stokes flow. The mathematical formulation and key aspects of the computational implementation in matlab® or GNU Octave are described, followed by numerical experiments with biflagellate algae and multiple uniflagellate sperm swimming between no-slip surfaces, for which both swimming trajectories and flow fields are calculated. These computational experiments required minutes of time on modest hardware; an extensible implementation is provided in a GitHub repository. The nearest-neighbor discretization dramatically improves convergence and robustness, a key challenge in extending the regularized Stokeslet method to complicated three-dimensional biological fluid problems.
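    The core kernel of the regularized Stokeslet method is compact; the sketch below uses the standard Cortez blob formula, with the flagellum geometry and force values chosen purely for illustration:

```python
import numpy as np

def regularized_stokeslet(x, x0, f, eps, mu=1.0):
    """Velocity at x induced by a regularized point force f at x0.

    Cortez blob kernel:
    u = [f*(r^2 + 2*eps^2) + (f.r)*r] / (8*pi*mu*(r^2 + eps^2)^(3/2)).
    """
    r = x - x0
    r2 = float(r @ r)
    d3 = (r2 + eps**2) ** 1.5
    return (f * (r2 + 2.0 * eps**2) + (f @ r) * r) / (8.0 * np.pi * mu * d3)

# Superpose force points along a model flagellum to evaluate the flow field:
points = [np.array([s, 0.0, 0.0]) for s in np.linspace(0.0, 1.0, 20)]
forces = [np.array([0.0, 0.0, -0.05])] * 20
u = sum(regularized_stokeslet(np.array([0.5, 0.5, 0.0]), p, f, eps=0.02)
        for p, f in zip(points, forces))
print(u)
```

    In the boundary integral method the forces themselves are unknowns, solved from a linear system built by evaluating this kernel between all pairs of surface points; the nearest-neighbor discretization mentioned above concerns how that quadrature is arranged.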

  15. Symbolic Processing Combined with Model-Based Reasoning

    NASA Technical Reports Server (NTRS)

    James, Mark

    2009-01-01

    A computer program for the detection of present and prediction of future discrete states of a complex, real-time engineering system utilizes a combination of symbolic processing and numerical model-based reasoning. One of the biggest weaknesses of a purely symbolic approach is that it enables prediction of only future discrete states while missing all unmodeled states or leading to incorrect identification of an unmodeled state as a modeled one. A purely numerical approach is based on a combination of statistical methods and mathematical models of the applicable physics and necessitates development of a complete model to the level of fidelity required for prediction. In addition, a purely numerical approach does not afford the ability to qualify its results without some form of symbolic processing. The present software implements numerical algorithms to detect unmodeled events and symbolic algorithms to predict expected behavior, correlate the expected behavior with the unmodeled events, and interpret the results in order to predict future discrete states. The approach embodied in this software differs from that of the BEAM methodology (aspects of which have been discussed in several prior NASA Tech Briefs articles), which provides for prediction of future measurements in the continuous-data domain.

  16. Hydro-mechanical model for wetting/drying and fracture development in geomaterials

    DOE PAGES

    Asahina, D.; Houseworth, J. E.; Birkholzer, J. T.; ...

    2013-12-28

    This study presents a modeling approach for studying hydro-mechanical coupled processes, including fracture development, within geological formations. This is accomplished through the novel linking of two codes: TOUGH2, which is a widely used simulator of subsurface multiphase flow based on the finite volume method; and an implementation of the Rigid-Body-Spring Network (RBSN) method, which provides a discrete (lattice) representation of material elasticity and fracture development. The modeling approach is facilitated by a Voronoi-based discretization technique, capable of representing discrete fracture networks. The TOUGH–RBSN simulator is intended to predict fracture evolution, as well as mass transport through permeable media, under dynamically changing hydrologic and mechanical conditions. Numerical results are compared with those of two independent studies involving hydro-mechanical coupling: (1) numerical modeling of swelling stress development in bentonite; and (2) experimental study of desiccation cracking in a mining waste. The comparisons show good agreement with respect to moisture content, stress development with changes in pore pressure, and time to crack initiation. Finally, the observed relationship between material thickness and crack patterns (e.g., mean spacing of cracks) is captured by the proposed modeling approach.

  17. Discrete-time stability of continuous-time controller designs for large space structures

    NASA Technical Reports Server (NTRS)

    Balas, M. J.

    1982-01-01

    In most of the stable control designs for flexible structures, continuous time is assumed. However, in view of the implementation of the controllers on on-line digital computers, the discrete-time stability of such controllers is an important consideration. In the case of direct velocity feedback (DVFB), involving negative feedback from collocated force actuators and velocity sensors, it is not immediately apparent how much delay due to digital implementation can be tolerated without loss of stability. The present investigation is concerned with such questions. The discrete-time stability of DVFB is studied when Euler's method is used to approximate the time derivative; the result indicates the acceptable time-step size for stable digital implementation of DVFB. A further result, derived in connection with the discrete-time stability of stable continuous-time systems, provides a general condition under which a digital implementation of such a system remains stable.
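    For a single damped mode the acceptable Euler step has a closed form: with continuous eigenvalue λ, forward Euler is stable iff |1 + hλ| < 1, i.e. h < -2 Re(λ)/|λ|². A quick check (the mode parameters are illustrative, not from the paper):

```python
import numpy as np

def max_euler_step(zeta, omega):
    """Largest stable forward-Euler step for a mode with damping ratio zeta
    and natural frequency omega.

    Continuous eigenvalue lam = -zeta*omega + 1j*omega*sqrt(1 - zeta^2);
    forward Euler is stable iff |1 + h*lam| < 1, i.e. h < -2*Re(lam)/|lam|^2.
    """
    lam = -zeta * omega + 1j * omega * np.sqrt(1.0 - zeta**2)
    return -2.0 * lam.real / abs(lam) ** 2

# A lightly damped 10 rad/s structural mode under direct velocity feedback:
print(max_euler_step(zeta=0.05, omega=10.0))   # ~0.01 s, i.e. ~100 Hz sampling
```

    The formula makes the qualitative point of the paper concrete: the lighter the damping supplied by the feedback, the smaller the time step a stable digital implementation can tolerate.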

  18. Periodic reference tracking control approach for smart material actuators with complex hysteretic characteristics

    NASA Astrophysics Data System (ADS)

    Sun, Zhiyong; Hao, Lina; Song, Bo; Yang, Ruiguo; Cao, Ruimin; Cheng, Yu

    2016-10-01

    Micro/nano positioning technologies have been attractive for decades owing to their various applications in both industrial and scientific fields. The actuators employed in these technologies are typically smart material actuators, which possess inherent hysteresis that may cause systems to behave unexpectedly. Periodic reference tracking capability is fundamental for apparatuses such as the scanning probe microscope, which employs smart material actuators to generate periodic scanning motion. However, a traditional controller such as the PID method cannot guarantee accurate fast periodic scanning motion. To tackle this problem, and to enable practical implementation in digital devices, this paper proposes a novel control method named the discrete extended unparallel Prandtl-Ishlinskii model based internal model (d-EUPI-IM) control approach. To tackle modeling uncertainties, the robust d-EUPI-IM control approach is investigated, and the associated sufficient stabilizing conditions are derived. The advantages of the proposed controller are: it is designed and represented in discrete form, and thus is practical for implementation in digital devices; the extended unparallel Prandtl-Ishlinskii model can precisely represent forward/inverse complex hysteretic characteristics, which reduces modeling uncertainties and benefits controller design; and the internal model principle based control module can be utilized as a natural oscillator for tackling the periodic reference tracking problem. The proposed controller was verified through comparative experiments on a piezoelectric actuator platform, and convincing results have been achieved.
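    The play-operator building block of the classical (not the extended unparallel) Prandtl-Ishlinskii model can be sketched as follows; the thresholds and weights are arbitrary illustrative values:

```python
import numpy as np

def play_operator(u, r, w0=0.0):
    """Discrete backlash (play) operator with threshold r:
    w[k] = min(max(w[k-1], u[k] - r), u[k] + r)."""
    w = np.empty_like(u)
    prev = w0
    for k, uk in enumerate(u):
        prev = min(max(prev, uk - r), uk + r)
        w[k] = prev
    return w

def prandtl_ishlinskii(u, thresholds, weights):
    """Classical discrete PI hysteresis model: weighted sum of play operators."""
    return sum(w * play_operator(u, r) for r, w in zip(thresholds, weights))

t = np.linspace(0, 4 * np.pi, 2000)
u = np.sin(t) * np.exp(-0.05 * t)                 # decaying input sweep
y = prandtl_ishlinskii(u, thresholds=[0.1, 0.3, 0.5], weights=[1.0, 0.6, 0.3])
```

    Plotting y against u traces the familiar nested hysteresis loops; the paper's extended unparallel variant generalizes this structure to asymmetric loops and provides an analytic inverse for the controller.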

  19. Coupled hydromechanical paleoclimate analyses of density-dependant groundwater flow in discretely fractured crystalline rock settings

    NASA Astrophysics Data System (ADS)

    Normani, S. D.; Sykes, J. F.; Jensen, M. R.

    2009-04-01

    A high-resolution sub-regional scale (84 km²) density-dependent, fracture zone network groundwater flow model with hydromechanical coupling and pseudo-permafrost was developed from a larger 5734 km² regional-scale groundwater flow model of a Canadian Shield setting in fractured crystalline rock. The objective of the work is to illustrate aspects of regional and sub-regional groundwater flow that are relevant to the long-term performance of a hypothetical nuclear fuel repository. The discrete fracture dual continuum numerical model FRAC3DVS-OPG was used for all simulations. A discrete fracture zone network model delineated from surface features was superimposed onto a 789,887-element flow-domain mesh. Orthogonal fracture faces (between adjacent finite element grid blocks) were used to best represent the irregular discrete fracture zone network. The crystalline rock between these structural discontinuities was assigned properties characteristic of those reported for the Canadian Shield at the Underground Research Laboratory at Pinawa, Manitoba. The interconnectivity of permeable fracture features provides pathways for comparatively rapid migration of average water particles and a corresponding reduction in residence times. The multiple 121,000-year North American continental-scale paleoclimate simulations are provided by W.R. Peltier using the University of Toronto Glacial Systems Model (UofT GSM). Values of ice sheet normal stress and proglacial lake depth from the UofT GSM are applied to the sub-regional model as surface boundary conditions, using a freshwater head equivalent to the normal stress imposed by the ice sheet at its base. Permafrost depth is applied as a permeability reduction to both three-dimensional grid blocks and fractures that lie within the time-varying permafrost zone. Two different paleoclimate simulations are applied to the sub-regional model to investigate the effect on the depth of glacial meltwater migration into the subsurface. In addition, different conceptualizations of fracture permeability with depth, and various hydromechanical loading efficiencies, are used to investigate glacial meltwater penetration. The importance of density-dependent flow, due to pore waters deep in the Canadian Shield with densities of up to 1200 kg/m³ and total dissolved solids concentrations in excess of 300 g/L, is also illustrated. Performance measures used in the assessment include the depth of glacial meltwater penetration, tracked using a tracer, and mean life expectancy. Consistent with the findings from isotope and geochemical assessments, the analyses support the conclusion that, for the discrete fracture zone and matrix properties simulated in this study, glacial meltwaters would not likely impact a deep geologic repository in a crystalline rock setting.

  20. Phenotypic factor analysis of psychopathology reveals a new body-related transdiagnostic factor.

    PubMed

    Pezzoli, Patrizia; Antfolk, Jan; Santtila, Pekka

    2017-01-01

    Comorbidity challenges the notion of mental disorders as discrete categories. An increasing body of literature shows that symptoms cut across traditional diagnostic boundaries and interact in shaping the latent structure of psychopathology. Using exploratory and confirmatory factor analysis, we reveal the latent sources of covariation among nine measures of psychopathological functioning in a population-based sample of 13,024 Finnish twins and their siblings. By implementing unidimensional, multidimensional, second-order, and bifactor models, we illustrate the relationships between observed variables, specific, and general latent factors. We also provide the first investigation to date of measurement invariance of the bifactor model of psychopathology across gender and age groups. Our main result is the identification of a distinct "Body" factor, alongside the previously identified Internalizing and Externalizing factors. We also report relevant cross-disorder associations, especially between body-related psychopathology and trait anger, as well as substantial sex and age differences in observed and latent means. The findings expand the meta-structure of psychopathology, with implications for empirical and clinical practice, and demonstrate shared mechanisms underlying attitudes towards nutrition, self-image, sexuality and anger, with gender- and age-specific features.

  1. Modelling fully-coupled Thermo-Hydro-Mechanical (THM) processes in fractured reservoirs using GOLEM: a massively parallel open-source simulator

    NASA Astrophysics Data System (ADS)

    Jacquey, Antoine; Cacace, Mauro

    2017-04-01

    Utilization of the underground for energy-related purposes has received increasing attention in recent decades, both as a source of carbon-free energy and for safe storage solutions. Understanding the key processes controlling fluid and heat flow around geological discontinuities such as faults and fractures, as well as their mechanical behaviour, is therefore essential for designing safe and sustainable reservoir operations. These processes occur in naturally complex geological settings comprising natural or engineered discrete heterogeneities such as faults and fractures; they span a relatively large spectrum of temporal and spatial scales and interact in a highly non-linear fashion. In this regard, numerical simulators have become necessary in geological studies to model coupled processes and complex geological geometries. In this study, we present a new simulator, GOLEM, which uses multiphysics coupling to characterize geological reservoirs; special attention is given to discrete geological features such as faults and fractures. GOLEM is based on the Multiphysics Object-Oriented Simulation Environment (MOOSE). The MOOSE framework provides a powerful and flexible platform for solving multiphysics problems implicitly and in a tightly coupled manner on unstructured meshes, which is of interest in the considered non-linear context. Governing equations in 3D for fluid flow, heat transfer (conductive and advective), saline transport, and deformation (elastic and plastic) have been implemented in the GOLEM application. Coupling between rock deformation and fluid and heat flow is treated using the theories of poroelasticity and thermoelasticity. Furthermore, treating material properties such as density and viscosity, and transport properties such as porosity, as dependent on the state variables (based on the International Association for the Properties of Water and Steam models) increases the coupling complexity of the problem. The GOLEM application therefore aims at integrating more of the physical processes observed in the field or in the laboratory, to simulate more realistic scenarios. The use of high-level nonlinear solver technology allows us to tackle these complex multiphysics problems in three dimensions. Basic concepts behind the GOLEM simulator are presented in this study, along with a few application examples that illustrate its main features.

  2. Parallel multiscale simulations of a brain aneurysm

    PubMed Central

    Grinberg, Leopold; Fedosov, Dmitry A.; Karniadakis, George Em

    2012-01-01

    Cardiovascular pathologies, such as a brain aneurysm, are affected by the global blood circulation as well as by the local microrheology. Hence, developing computational models for such cases requires the coupling of disparate spatial and temporal scales often governed by diverse mathematical descriptions, e.g., by partial differential equations (continuum) and ordinary differential equations for discrete particles (atomistic). However, interfacing atomistic-based with continuum-based domain discretizations is a challenging problem that requires both mathematical and computational advances. We present here a hybrid methodology that enabled us to perform the first multi-scale simulations of platelet depositions on the wall of a brain aneurysm. The large scale flow features in the intracranial network are accurately resolved by using the high-order spectral element Navier-Stokes solver NεκTαr. The blood rheology inside the aneurysm is modeled using a coarse-grained stochastic molecular dynamics approach (the dissipative particle dynamics method) implemented in the parallel code LAMMPS. The continuum and atomistic domains overlap with interface conditions provided by effective forces computed adaptively to ensure continuity of states across the interface boundary. A two-way interaction is allowed with the time-evolving boundary of the (deposited) platelet clusters tracked by an immersed boundary method. The corresponding heterogeneous solvers (NεκTαr and LAMMPS) are linked together by a computational multilevel message passing interface that facilitates modularity and high parallel efficiency. Results of multiscale simulations of clot formation inside the aneurysm in a patient-specific arterial tree are presented. We also discuss the computational challenges involved and present scalability results of our coupled solver on up to 300K computer processors. Validation of such coupled atomistic-continuum models is a main open issue that has to be addressed in future work. PMID:23734066

  3. Parallel multiscale simulations of a brain aneurysm.

    PubMed

    Grinberg, Leopold; Fedosov, Dmitry A; Karniadakis, George Em

    2013-07-01

    Cardiovascular pathologies, such as a brain aneurysm, are affected by the global blood circulation as well as by the local microrheology. Hence, developing computational models for such cases requires the coupling of disparate spatial and temporal scales often governed by diverse mathematical descriptions, e.g., by partial differential equations (continuum) and ordinary differential equations for discrete particles (atomistic). However, interfacing atomistic-based with continuum-based domain discretizations is a challenging problem that requires both mathematical and computational advances. We present here a hybrid methodology that enabled us to perform the first multi-scale simulations of platelet depositions on the wall of a brain aneurysm. The large scale flow features in the intracranial network are accurately resolved by using the high-order spectral element Navier-Stokes solver NεκTαr. The blood rheology inside the aneurysm is modeled using a coarse-grained stochastic molecular dynamics approach (the dissipative particle dynamics method) implemented in the parallel code LAMMPS. The continuum and atomistic domains overlap with interface conditions provided by effective forces computed adaptively to ensure continuity of states across the interface boundary. A two-way interaction is allowed with the time-evolving boundary of the (deposited) platelet clusters tracked by an immersed boundary method. The corresponding heterogeneous solvers (NεκTαr and LAMMPS) are linked together by a computational multilevel message passing interface that facilitates modularity and high parallel efficiency. Results of multiscale simulations of clot formation inside the aneurysm in a patient-specific arterial tree are presented. We also discuss the computational challenges involved and present scalability results of our coupled solver on up to 300K computer processors. Validation of such coupled atomistic-continuum models is a main open issue that has to be addressed in future work.

  4. Parallel multiscale simulations of a brain aneurysm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grinberg, Leopold; Fedosov, Dmitry A.; Karniadakis, George Em, E-mail: george_karniadakis@brown.edu

    2013-07-01

    Cardiovascular pathologies, such as a brain aneurysm, are affected by the global blood circulation as well as by the local microrheology. Hence, developing computational models for such cases requires the coupling of disparate spatial and temporal scales often governed by diverse mathematical descriptions, e.g., by partial differential equations (continuum) and ordinary differential equations for discrete particles (atomistic). However, interfacing atomistic-based with continuum-based domain discretizations is a challenging problem that requires both mathematical and computational advances. We present here a hybrid methodology that enabled us to perform the first multiscale simulations of platelet depositions on the wall of a brain aneurysm. The large scale flow features in the intracranial network are accurately resolved by using the high-order spectral element Navier–Stokes solver NεκTαr. The blood rheology inside the aneurysm is modeled using a coarse-grained stochastic molecular dynamics approach (the dissipative particle dynamics method) implemented in the parallel code LAMMPS. The continuum and atomistic domains overlap with interface conditions provided by effective forces computed adaptively to ensure continuity of states across the interface boundary. A two-way interaction is allowed with the time-evolving boundary of the (deposited) platelet clusters tracked by an immersed boundary method. The corresponding heterogeneous solvers (NεκTαr and LAMMPS) are linked together by a computational multilevel message passing interface that facilitates modularity and high parallel efficiency. Results of multiscale simulations of clot formation inside the aneurysm in a patient-specific arterial tree are presented. We also discuss the computational challenges involved and present scalability results of our coupled solver on up to 300K computer processors. Validation of such coupled atomistic-continuum models is a main open issue that has to be addressed in future work.

  5. A New Search Paradigm for Correlated Neutrino Emission from Discrete GRBs using Antarctic Cherenkov Telescopes in the Swift Era

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stamatikos, Michael; Band, David L.; JCA/UMBC, Baltimore, MD 21250

    2006-05-19

    We describe the theoretical modeling and analysis techniques associated with a preliminary search for correlated neutrino emission from GRB980703a, which triggered the Burst and Transient Source Experiment (BATSE GRB trigger 6891), using archived data from the Antarctic Muon and Neutrino Detector Array (AMANDA-B10). Under the assumption of associated hadronic acceleration, the expected observed neutrino energy flux is directly derived for four models, corrected for oscillations, by confronting the fireball phenomenology with the discrete set of observed electromagnetic parameters of GRB980703a gleaned from ground-based and satellite observations. Models 1 and 2, based upon spectral analysis featuring a prompt photon energy fit to the Band function, utilize an observed spectroscopic redshift, for isotropic and anisotropic emission geometry, respectively. Model 3 is based upon averaged burst parameters, assuming isotropic emission. Model 4, based upon a Band fit, features an estimated redshift from the lag-luminosity relation, with isotropic emission. Consistent with our AMANDA-II analysis of GRB030329, which resulted in a flux upper limit of ~0.150 GeV/cm²/s for model 1, we find differences in excess of an order of magnitude in the response of AMANDA-B10 among the various models for GRB980703a. Implications for future searches in the era of Swift and IceCube are discussed.

  6. Modeling and control of fuel cell based distributed generation systems

    NASA Astrophysics Data System (ADS)

    Jung, Jin Woo

    This dissertation presents circuit models and control algorithms of fuel cell based distributed generation systems (DGS) for two DGS topologies. In the first topology, each DGS unit utilizes a battery in parallel to the fuel cell in a standalone AC power plant and a grid-interconnection. In the second topology, a Z-source converter, which employs both the L and C passive components and shoot-through zero vectors instead of the conventional DC/DC boost power converter in order to step up the DC-link voltage, is adopted for a standalone AC power supply. In Topology 1, two applications are studied: standalone power generation (Single DGS Unit and Two DGS Units) and a grid-interconnection. First, a dynamic model of the fuel cell is given based on the electrochemical process. Second, two full-bridge DC/DC converters are adopted and their controllers are designed: a unidirectional full-bridge DC/DC boost converter for the fuel cell and a bidirectional full-bridge DC/DC buck/boost converter for the battery. Third, for a three-phase DC/AC inverter with or without a Delta/Y transformer, a discrete-time state space circuit model is given and two discrete-time feedback controllers are designed: a voltage controller in the outer loop and a current controller in the inner loop. Finally, for load sharing of two DGS units and power flow control of two DGS units or the DGS connected to the grid, real and reactive power controllers are proposed. In particular, for the grid-connected DGS application, the synchronization between an islanding mode and a mode paralleled to the grid is investigated, and two case studies are performed. To demonstrate the proposed circuit models and control strategies, simulation test-beds using Matlab/Simulink are constructed for each configuration of the fuel cell based DGS with a three-phase AC 120 V (L-N)/60 Hz/50 kVA rating, and various simulation results are presented. In Topology 2, the dissertation presents the system modeling, a modified space vector PWM implementation (MSVPWM), and the design of a closed-loop controller for the Z-source converter, which utilizes L and C components and shoot-through zero vectors for standalone AC power generation. The fuel cell system is modeled by an electrical R-C circuit in order to include the slow dynamics of the fuel cells, and the voltage-current characteristic of a cell is also considered. A discrete-time state space model is derived to implement digital control, and a space vector pulse-width modulation (SVPWM) technique is modified to realize the shoot-through zero vectors that boost the DC-link voltage. Also, three discrete-time feedback controllers are designed: a discrete-time optimal voltage controller, a discrete-time sliding mode current controller, and a discrete-time PI DC-link voltage controller. Furthermore, an asymptotic observer is used to reduce the number of sensors and enhance the reliability of the system. To demonstrate the analyzed circuit model and proposed control strategy, various simulation results using Matlab/Simulink are presented under both light/heavy loads and linear/nonlinear loads for a three-phase AC 208 V (L-L)/60 Hz/10 kVA system.
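    The discrete-time state-space models used for such controllers are typically obtained by zero-order-hold discretization of a continuous averaged model. A generic sketch follows; the LC filter values and sampling rate are illustrative, not those of the dissertation:

```python
import numpy as np
from scipy.signal import cont2discrete

# Continuous-time averaged model of an inverter LC output filter feeding a
# resistive load: states x = [inductor current, capacitor voltage].
L_f, C_f, R_load = 1e-3, 50e-6, 4.0
A = np.array([[0.0, -1.0 / L_f],
              [1.0 / C_f, -1.0 / (R_load * C_f)]])
B = np.array([[1.0 / L_f], [0.0]])      # input: inverter bridge voltage
C = np.array([[0.0, 1.0]])              # output: capacitor (load) voltage
D = np.array([[0.0]])

Ts = 1.0 / 10000.0                      # 10 kHz control rate (assumed)
Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), Ts, method="zoh")
print(np.linalg.eigvals(Ad))            # inside the unit circle: stable plant
```

    The outer voltage and inner current loops are then designed directly on (Ad, Bd), which is what makes the discrete-time stability guarantees meaningful for the digital implementation.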

  7. Analysis of the Source Physics Experiment SPE4 Prime Using State-Of Parallel Numerical Tools.

    NASA Astrophysics Data System (ADS)

    Vorobiev, O.; Ezzedine, S. M.; Antoun, T.; Glenn, L.

    2015-12-01

    This work describes a methodology used for large scale modeling of wave propagation from underground chemical explosions conducted at the Nevada National Security Site (NNSS) fractured granitic rock. We show that the discrete natures of rock masses as well as the spatial variability of the fabric of rock properties are very important to understand ground motions induced by underground explosions. In order to build a credible conceptual model of the subsurface we integrated the geological, geomechanical and geophysical characterizations conducted during recent test at the NNSS as well as historical data from the characterization during the underground nuclear test conducted at the NNSS. Because detailed site characterization is limited, expensive and, in some instances, impossible we have numerically investigated the effects of the characterization gaps on the overall response of the system. We performed several computational studies to identify the key important geologic features specific to fractured media mainly the joints characterized at the NNSS. We have also explored common key features to both geological environments such as saturation and topography and assess which characteristics affect the most the ground motion in the near-field and in the far-field. Stochastic representation of these features based on the field characterizations has been implemented into LLNL's Geodyn-L hydrocode. Simulations were used to guide site characterization efforts in order to provide the essential data to the modeling community. We validate our computational results by comparing the measured and computed ground motion at various ranges for the recently executed SPE4 prime experiment. We have also conducted a comparative study between SPE4 prime and previous experiments SPE1 and SPE3 to assess similarities and differences and draw conclusions on designing SPE5.

  8. The human dynamic clamp as a paradigm for social interaction.

    PubMed

    Dumas, Guillaume; de Guzman, Gonzalo C; Tognoli, Emmanuelle; Kelso, J A Scott

    2014-09-02

    Social neuroscience has called for new experimental paradigms aimed toward real-time interactions. A distinctive feature of interactions is mutual information exchange: one member of a pair changes in response to the other while simultaneously producing actions that alter the other. Combining mathematical and neurophysiological methods, we introduce a paradigm called the human dynamic clamp (HDC) to directly manipulate the interaction, or coupling, between a human and a surrogate constructed to behave like a human. Inspired by the dynamic clamp used so productively in cellular neuroscience, the HDC allows a person to interact in real time with a virtual partner that is itself driven by well-established models of coordination dynamics. People coordinate hand movements with the visually observed movements of a virtual hand, the parameters of which depend on input from the subject's own movements. We demonstrate that the HDC can be extended to cover a broad repertoire of human behavior, including rhythmic and discrete movements, adaptation to changes of pacing, and behavioral skill learning as specified by a virtual "teacher." We propose the HDC as a general paradigm, best implemented when empirically verified theoretical or mathematical models have been developed in a particular scientific field. The HDC paradigm is powerful because it provides an opportunity to explore parameter ranges and perturbations that are not easily accessible in ordinary human interactions. The HDC not only enables testing the veracity of theoretical models but also illuminates features that are not always apparent in real-time human social interactions and the brain correlates thereof.
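    The virtual partner in such setups is driven by well-established coordination models, the best known being the HKB relative-phase equation. A minimal integration sketch (parameter values are illustrative):

```python
import numpy as np

def hkb_trajectory(phi0, delta_omega, a, b, dt=0.01, steps=5000):
    """Integrate the HKB relative-phase dynamics
    dphi/dt = delta_omega - a*sin(phi) - 2*b*sin(2*phi)
    with forward Euler; phi is the relative phase between the two movements."""
    phi = np.empty(steps)
    phi[0] = phi0
    for k in range(1, steps):
        dphi = delta_omega - a * np.sin(phi[k - 1]) - 2.0 * b * np.sin(2.0 * phi[k - 1])
        phi[k] = phi[k - 1] + dt * dphi
    return phi

# With no frequency detuning, the in-phase pattern (phi ~ 0) attracts:
print(hkb_trajectory(phi0=0.3, delta_omega=0.0, a=1.0, b=0.5)[-1])
```

    In the HDC the human's measured movement enters such equations in real time, closing the loop between person and virtual partner.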

  9. Discrete choice modeling of season choice for Minnesota turkey hunters

    USGS Publications Warehouse

    Schroeder, Susan A.; Fulton, David C.; Cornicelli, Louis; Merchant, Steven S.

    2018-01-01

    Recreational turkey hunting exemplifies the interdisciplinary nature of modern wildlife management. Turkey populations in Minnesota have reached social or biological carrying capacities in many areas, and changes to turkey hunting regulations have been proposed by stakeholders and wildlife managers. This study employed discrete stated choice modeling to enhance understanding of turkey hunter preferences about regulatory alternatives. We distributed mail surveys to 2,500 resident turkey hunters. Results suggest that, compared to season structure and lotteries, additional permits and level of potential interference from other hunters most influenced hunter preferences for regulatory alternatives. Low hunter interference was preferred to moderate or high interference. A second permit issued only to unsuccessful hunters was preferred to no second permit or permits for all hunters. Results suggest that utility is not strictly defined by harvest or an individual's material gain but can involve preference for other outcomes that on the surface do not materially benefit an individual. Discrete stated choice modeling offers wildlife managers an effective way to assess constituent preferences related to new regulations before implementing them. 
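    A conditional (multinomial) logit sketch of how stated-choice part-worths turn into predicted shares; the attribute coding and utility weights below are invented for illustration, not the study's estimates:

```python
import numpy as np

# Assumed part-worths for (interference level, second-permit rule, season length):
beta = np.array([-0.8, 0.5, 0.1])

# Three hypothetical regulatory alternatives described by those attributes:
alternatives = np.array([
    [0.0, 1.0, 0.0],   # low interference, second permit if unsuccessful
    [1.0, 0.0, 0.0],   # moderate interference, no second permit
    [2.0, 1.0, 1.0],   # high interference, second permit, longer season
])

v = alternatives @ beta                 # systematic utilities
p = np.exp(v) / np.exp(v).sum()         # logit choice probabilities
print(p)                                # predicted choice shares
```

    Estimating beta from the mail-survey choices (e.g., by maximum likelihood) is what allows managers to forecast support for regulation packages before implementing them.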

  10. Probabilistic Guidance of Swarms using Sequential Convex Programming

    DTIC Science & Technology

    2014-01-01

    Indexed excerpts only: ... quadcopter fleet [24]. In this paper, sequential convex programming (SCP) [25] is implemented using model predictive control (MPC) to provide real-time ... in order to make Problem 1 convex. The details for convexifying this problem can be found in [26]. The main steps are discretizing the problem ...

  11. Lessons Learned from using a Livingstone Model to Diagnose a Main Propulsion System

    NASA Technical Reports Server (NTRS)

    Sweet, Adam; Bajwa, Anupa

    2003-01-01

    NASA researchers have demonstrated that qualitative, model-based reasoning can be used for fault detection in a Main Propulsion System (MPS), a complex, continuous system. At the heart of this diagnostic system is Livingstone, a discrete, propositional logic-based inference engine. Livingstone comprises a language for specifying a discrete model of the system and a set of algorithms that use the model to track the system's state. Livingstone uses the model to test assumptions about the state of a component - observations from the system are compared with values predicted by the model. The intent of this paper is to summarize some advantages of Livingstone seen through our modeling experience: for instance, flexibility in modeling, speed and maturity. We also describe some shortcomings we perceived in the implementation of Livingstone, such as modeling continuous dynamics and handling of transients. We list some upcoming enhancements to the next version of Livingstone that may resolve some of the current limitations.

  12. The exact fundamental solution for the Benes tracking problem

    NASA Astrophysics Data System (ADS)

    Balaji, Bhashyam

    2009-05-01

    The universal continuous-discrete tracking problem requires the solution of a Fokker-Planck-Kolmogorov forward equation (FPKfe) for an arbitrary initial condition. Using results from quantum mechanics, the exact fundamental solution of the FPKfe is derived for a state model of arbitrary dimension with Benes drift; it requires only the computation of elementary transcendental functions and standard linear algebra techniques, and no ordinary or partial differential equations need to be solved. The measurement process may be an arbitrary, discrete-time nonlinear stochastic process, and the time step size can be arbitrary. Numerical examples are included, demonstrating its utility in practical implementation.

  13. Modelling the structural controls of primary kaolinite formation

    NASA Astrophysics Data System (ADS)

    Tierney, R. L.; Glass, H. J.

    2016-09-01

    An abundance of kaolinite was formed within the St. Austell outcrop of the Cornubian batholith in Cornwall, southwest England, by the hydrous dissolution of feldspar crystals. The permeability of Cornish granites is low and alteration acts pervasively from discontinuity features, with montmorillonite recognised as an intermediate assemblage in partially kaolinised material. Structural features allowed fluids to channel through the impermeable granite and pervade deep into the rock; areas of high structural control are therefore hypothesised to coincide with areas of advanced alteration. As kaolinisation results in a loss of competence, we present a method that uses discontinuity orientations from nearby unaltered granites, together with the local tectonic history, to calculate strain rates and delineate a discrete fracture network. Simulation of the discrete fracture network is demonstrated through a case study at Higher Moor, where kaolinite is actively extracted from a pit. Reconciliation of fracture connectivity and permeability against measured subsurface data shows that higher values of the modelled properties match the advanced kaolinisation observed in the field. This suggests that the technique may be applicable across various industries and disciplines.

  14. Higher-order compositional modeling of three-phase flow in 3D fractured porous media based on cross-flow equilibrium

    NASA Astrophysics Data System (ADS)

    Moortgat, Joachim; Firoozabadi, Abbas

    2013-10-01

    Numerical simulation of multiphase compositional flow in fractured porous media, when all the species can transfer between the phases, is a real challenge. Despite the broad applications in hydrocarbon reservoir engineering and hydrology, a compositional numerical simulator for three-phase flow in fractured media has not appeared in the literature, to the best of our knowledge. In this work, we present a three-phase fully compositional simulator for fractured media, based on higher-order finite element methods. To achieve computational efficiency, we invoke the cross-flow equilibrium (CFE) concept between discrete fractures and a small neighborhood in the matrix blocks. We adopt the mixed hybrid finite element (MHFE) method to approximate convective Darcy fluxes and the pressure equation. This approach is the most natural choice for flow in fractured media. The mass balance equations are discretized by the discontinuous Galerkin (DG) method, which is perhaps the most efficient approach to capture physical discontinuities in phase properties at the matrix-fracture interfaces and at phase boundaries. In this work, we account for gravity and Fickian diffusion. The modeling of capillary effects is discussed in a separate paper. We present the mathematical framework, using the implicit-pressure-explicit-composition (IMPEC) scheme, which facilitates rigorous thermodynamic stability analyses and the computation of phase behavior effects to account for transfer of species between the phases. A deceptively simple CFL condition is implemented to improve numerical stability and accuracy. We provide six numerical examples at both small and larger scales and in two and three dimensions, to demonstrate powerful features of the formulation.

  15. Speech perception at the interface of neurobiology and linguistics.

    PubMed

    Poeppel, David; Idsardi, William J; van Wassenhove, Virginie

    2008-03-12

    Speech perception consists of a set of computations that take continuously varying acoustic waveforms as input and generate discrete representations that make contact with the lexical representations stored in long-term memory as output. Because the perceptual objects recognized by speech perception enter into subsequent linguistic computation, the format used for lexical representation and processing fundamentally constrains the speech perceptual processes. Consequently, theories of speech perception must, at some level, be tightly linked to theories of lexical representation. Minimally, speech perception must yield representations that smoothly and rapidly interface with stored lexical items. Adopting the perspective of Marr, we argue and provide neurobiological and psychophysical evidence for the following research programme. First, at the implementational level, speech perception is a multi-time resolution process, with perceptual analyses occurring concurrently on at least two time scales (approx. 20-80 ms, approx. 150-300 ms), commensurate with (sub)segmental and syllabic analyses, respectively. Second, at the algorithmic level, we suggest that perception proceeds on the basis of internal forward models, or uses an 'analysis-by-synthesis' approach. Third, at the computational level (in the sense of Marr), the theory of lexical representation that we adopt is principally informed by phonological research and assumes that words are represented in the mental lexicon in terms of sequences of discrete segments composed of distinctive features. One important goal of the research programme is to develop linking hypotheses between putative neurobiological primitives (e.g. temporal primitives) and those primitives derived from linguistic inquiry, to arrive ultimately at a biologically sensible and theoretically satisfying model of representation and computation in speech.

  16. GPU accelerated Discrete Element Method (DEM) molecular dynamics for conservative, faceted particle simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spellings, Matthew; Biointerfaces Institute, University of Michigan, 2800 Plymouth Rd., Ann Arbor, MI 48109; Marson, Ryan L.

    Faceted shapes, such as polyhedra, are commonly found in systems of nanoscale, colloidal, and granular particles. Many interesting physical phenomena, like crystal nucleation and growth, vacancy motion, and glassy dynamics are challenging to model in these systems because they require detailed dynamical information at the individual particle level. Within the granular materials community the Discrete Element Method has been used extensively to model systems of anisotropic particles under gravity, with friction. We provide an implementation of this method intended for simulation of hard, faceted nanoparticles, with a conservative Weeks–Chandler–Andersen (WCA) interparticle potential, coupled to a thermodynamic ensemble. This method is a natural extension of classical molecular dynamics and enables rigorous thermodynamic calculations for faceted particles.
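    The conservative WCA pair potential itself is compact: a Lennard-Jones interaction truncated at its minimum and shifted to zero there, so the force is purely repulsive and continuous at the cutoff. A minimal sketch:

```python
import numpy as np

def wca(r, epsilon=1.0, sigma=1.0):
    """Weeks-Chandler-Andersen potential: LJ cut at r_c = 2^(1/6)*sigma and
    shifted up by epsilon, giving zero energy and force beyond the cutoff."""
    r_c = 2.0 ** (1.0 / 6.0) * sigma
    if r >= r_c:
        return 0.0
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6) + epsilon

print(wca(1.0), wca(1.2))   # strongly repulsive inside, exactly zero outside
```

    In the DEM variant described above, such a potential acts between the geometric features (faces, edges, vertices) of the polyhedra rather than between point centers.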

  17. Fast method for reactor and feature scale coupling in ALD and CVD

    DOEpatents

    Yanguas-Gil, Angel; Elam, Jeffrey W.

    2017-08-08

    The transport and surface chemistry of certain deposition techniques are modeled. The methods model transport inside nanostructures as a single-particle discrete Markov chain process. This approach decouples the complexity of the surface chemistry from the transport model, thus allowing its application under general surface chemistry conditions, including atomic layer deposition (ALD) and chemical vapor deposition (CVD). The methods provide for the determination of statistical information about the trajectories of individual molecules, such as the average interaction time or the number of wall collisions for molecules entering the nanostructures, and for tracking the relative contributions to thin-film growth of different independent reaction pathways at each point of the feature.
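    A single-particle Markov chain of this kind can be sketched in a few lines; the trench geometry, sticking probability, and hop rule below are illustrative assumptions, not the patented method's parameterization:

```python
import random

def simulate_particle(depth, p_stick, rng):
    """One precursor molecule hopping between wall segments of a trench.

    At each wall collision the molecule reacts with probability p_stick;
    otherwise it re-emits diffusely and hops to an adjacent segment.  It
    leaves the feature if it hops back above the entrance.
    Returns (segment of reaction or None, number of wall collisions)."""
    pos, collisions = 0, 0
    while pos >= 0:
        collisions += 1
        if rng.random() < p_stick:
            return pos, collisions            # reacted at this depth
        pos += rng.choice((-1, 1))            # diffuse hop up or down
        pos = min(pos, depth - 1)             # reflect off the trench bottom
    return None, collisions                   # escaped back out

rng = random.Random(0)
hits = [simulate_particle(50, 0.01, rng)[0] for _ in range(20000)]
profile = [hits.count(k) for k in range(50)]  # deposition profile vs depth
```

    Averaging many such trajectories yields the interaction-time and collision statistics the abstract mentions, for any sticking model plugged into the reaction step.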

  18. Temporal BYY encoding, Markovian state spaces, and space dimension determination.

    PubMed

    Xu, Lei

    2004-09-01

    Complementing mainstream temporal coding approaches, this paper examines Markovian state space temporal models from the perspective of temporal Bayesian Ying-Yang (BYY) learning. It offers new insights and results not only for the discrete-state Hidden Markov model and its extensions but also for continuous-state linear state space models and their extensions, in particular a new learning mechanism that selects the state number or the dimension of the state space either automatically during adaptive learning or afterwards via model selection criteria derived from this mechanism. Experiments demonstrate how the proposed approach works.

  19. Compressive-sampling-based positioning in wireless body area networks.

    PubMed

    Banitalebi-Dehkordi, Mehdi; Abouei, Jamshid; Plataniotis, Konstantinos N

    2014-01-01

    Recent achievements in wireless technologies have opened up enormous opportunities for the implementation of ubiquitous health care systems that provide rich contextual information and warning mechanisms against abnormal conditions. This helps with the automatic and remote monitoring/tracking of patients in hospitals and facilitates the supervision of fragile, elderly people in their own domestic environment through automatic systems that handle remote drug delivery. This paper presents a new modeling and analysis framework for multipatient positioning in a wireless body area network (WBAN) that exploits the spatial sparsity of patients and a sparse fast Fourier transform (FFT)-based feature extraction mechanism for monitoring patients and reporting movement tracking to a central database server containing patient vital information. The main goal of this paper is to achieve a high degree of accuracy and resolution in patient localization with low computational complexity, using compressive sensing theory. We represent the patients' positions as a sparse vector obtained by discrete segmentation of the patient movement space on a circular grid. To estimate this vector, a compressive-sampling-based two-level FFT (CS-2FFT) feature vector is synthesized for each signal received from the biosensors embedded on the patient's body at each grid point. This feature extraction process benefits from combining both the short-time and long-time properties of the received signals. The robustness of the proposed CS-2FFT-based algorithm in terms of the average positioning error is numerically evaluated using realistic parameters from the IEEE 802.15.6-WBAN standard in the presence of additive white Gaussian noise. Due to the circular grid pattern and the CS-2FFT feature extraction method, the proposed scheme achieves a significant reduction in computational complexity while improving resolution and localization accuracy compared to some classical CS-based positioning algorithms.
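    The sparse-recovery step can be illustrated with a generic orthogonal matching pursuit on a position grid; the random Gaussian sensing matrix and grid size are placeholders, and the paper's CS-2FFT feature construction is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)
G, K, M = 128, 2, 24                    # grid cells, patients, measurements
x = np.zeros(G)
x[rng.choice(G, K, replace=False)] = 1.0        # sparse patient positions
A = rng.standard_normal((M, G)) / np.sqrt(M)    # random sensing matrix
y = A @ x                                       # compressive measurements

# Orthogonal matching pursuit: greedily pick cells, refit by least squares.
support, r = [], y.copy()
for _ in range(K):
    support.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef

print(sorted(support), np.flatnonzero(x))       # recovered vs true cells
```

    The point of compressive sampling here is exactly this asymmetry: far fewer measurements (M) than grid cells (G) suffice because only K cells are occupied.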

  20. A market systems analysis of the U.S. Sport Utility Vehicle market considering frontal crash safety technology and policy.

    PubMed

    Hoffenson, Steven; Frischknecht, Bart D; Papalambros, Panos Y

    2013-01-01

    Active safety features and adjustments to the New Car Assessment Program (NCAP) consumer-information crash tests have the potential to decrease the number of serious traffic injuries each year, according to previous studies. However, literature suggests that risk reductions, particularly in the automotive market, are often accompanied by adjusted consumer risk tolerance, and so these potential safety benefits may not be fully realized due to changes in consumer purchasing or driving behavior. This article approaches safety in the new vehicle market, particularly in the Sport Utility Vehicle and Crossover Utility Vehicle segments, from a market systems perspective. Crash statistics and simulations are used to predict the effects of design and policy changes on occupant crash safety, and discrete choice experiments are conducted to estimate the values consumers place on vehicle attributes. These models are combined in a market simulation that forecasts how consumers respond to the available vehicle alternatives, resulting in predictions of the market share of each vehicle and how the change in fleet mixture influences societal outcomes including injuries, fuel consumption, and firm profits. The model is tested for a scenario where active safety features are implemented across the new vehicle fleet and a scenario where the U.S. frontal NCAP test speed is modified. While results exhibit evidence of consumer risk adjustment, they support adding active safety features and lowering the NCAP frontal test speed, as these changes are predicted to improve the welfare of both firms and society.

  1. Trade-offs Between Command and Control Architectures and Force Capabilities Using Battlespace Awareness

    DTIC Science & Technology

    2014-06-01

    information superiority in network-centric warfare. A brief discussion of the implementation of battlespace awareness is given. ...developing the model used for this study. Lanchester Equations, System Dynamics models, Discrete Event Simulation, and Agent-based models (ABMs) were... popularity in the military modeling community in recent years due to their ability to effectively capture complex interactions in warfare scenarios with many

  2. Structured models of infectious disease: inference with discrete data

    PubMed Central

    Metcalf, C.J.E.; Lessler, J.; Klepac, P.; Morice, A.; Grenfell, B.T.; Bjørnstad, O.N.

    2014-01-01

    The use of structured population models can make substantial contributions to public health, particularly for infections where clinical outcomes vary with age. There are three theoretical challenges in implementing such analyses: i) developing an appropriate framework that models both demographic and epidemiological transitions; ii) parameterizing the framework, where parameters may draw on data ranging from the biological course of infection and basic patterns of human demography to specific characteristics of population growth and details of the vaccination regimes implemented; and iii) evaluating public health strategies in the face of changing human demography. We illustrate the general approach by developing a model of rubella in Costa Rica. The demographic profile of this infection is a crucial aspect of its public health impact, and we use a transient perturbation analysis to explore the impact of changing human demography on the immunization strategies implemented. PMID:22178687

  3. The isolation of spatial patterning modes in a mathematical model of juxtacrine cell signalling.

    PubMed

    O'Dea, R D; King, J R

    2013-06-01

    Juxtacrine signalling mechanisms are known to be crucial in tissue and organ development, leading to spatial patterns in gene expression. We investigate the patterning behaviour of a discrete model of juxtacrine cell signalling due to Owen & Sherratt (1998, Mathematical modelling of juxtacrine cell signalling. Math. Biosci., 153, 125-150) in which ligand molecules, unoccupied receptors and bound ligand-receptor complexes are modelled. Feedback between the ligand and receptor production and the level of bound receptors is incorporated. By isolating two parameters associated with the feedback strength and employing numerical simulation, linear stability and bifurcation analysis, the pattern-forming behaviour of the model is analysed under regimes corresponding to lateral inhibition and induction. Linear analysis of this model fails to capture the patterning behaviour exhibited in numerical simulations. Via bifurcation analysis, we show that since the majority of periodic patterns fold subcritically from the homogeneous steady state, a wide variety of stable patterns exists at a given parameter set, providing an explanation for this failure. The dominant pattern is isolated via numerical simulation. Additionally, by sampling patterns of non-integer wavelength on a discrete mesh, we highlight a disparity between the continuous and discrete representations of signalling mechanisms: in the continuous case, patterns of arbitrary wavelength are possible, while sampling such patterns on a discrete mesh leads to longer wavelength harmonics being selected where the wavelength is rational; in the irrational case, the resulting aperiodic patterns exhibit 'local periodicity', being constructed from distorted stable shorter wavelength patterns. This feature is consistent with experimentally observed patterns, which typically display approximate short-range periodicity with defects.

  4. Modeling and analysis of pinhole occulter experiment

    NASA Technical Reports Server (NTRS)

    Ring, J. R.

    1986-01-01

    The objectives were to improve pointing control system implementation by converting the dynamic compensator from a continuous-domain representation to a discrete one; to determine pointing stability sensitivities to sensor and actuator errors by adding sensor and actuator error models to TREETOPS and by developing an error budget for meeting pointing stability requirements; and to determine pointing performance for alternate mounting bases (the space station, for example).
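
    Converting a dynamic compensator from a continuous to a discrete representation is routinely done with a zero-order-hold discretization of the state-space matrices. A minimal SciPy sketch with placeholder matrices, not the actual pointing-control design:

```python
# Minimal sketch of the continuous-to-discrete conversion step: a state-space
# compensator is discretized with a zero-order hold at the controller sample
# time. The compensator matrices below are placeholders, not the real design.
import numpy as np
from scipy.signal import cont2discrete

A = np.array([[0.0, 1.0],
              [-4.0, -0.4]])          # illustrative 2nd-order compensator
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

dt = 0.02                             # 50 Hz controller update (assumed)
Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), dt, method="zoh")

# Discrete recursion executed by the flight computer each sample:
#   x[k+1] = Ad @ x[k] + Bd @ u[k];  y[k] = Cd @ x[k] + Dd @ u[k]
print(Ad, Bd, sep="\n")
```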

  5. Implementing ADM1 for plant-wide benchmark simulations in Matlab/Simulink.

    PubMed

    Rosen, C; Vrecko, D; Gernaey, K V; Pons, M N; Jeppsson, U

    2006-01-01

    The IWA Anaerobic Digestion Model No.1 (ADM1) was presented in 2002 and is expected to represent the state-of-the-art model within this field in the future. Due to its complexity the implementation of the model is not a simple task and several computational aspects need to be considered, in particular if the ADM1 is to be included in dynamic simulations of plant-wide or even integrated systems. In this paper, the experiences gained from a Matlab/Simulink implementation of ADM1 into the extended COST/IWA Benchmark Simulation Model (BSM2) are presented. Aspects related to system stiffness, model interfacing with the ASM family, mass balances, acid-base equilibrium and algebraic solvers for pH and other troublesome state variables, numerical solvers and simulation time are discussed. The main conclusion is that if implemented properly, the ADM1 will also produce high-quality results in dynamic plant-wide simulations including noise, discrete sub-systems, etc. without imposing any major restrictions due to extensive computational efforts.
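
    The system stiffness noted above is why ADM1 implementations rely on implicit integrators (or algebraic substitution of the fastest states). The toy sketch below illustrates only that numerical point — two states with widely separated time scales handled by a BDF method — and is not ADM1 itself:

```python
# Sketch of the stiffness issue only (not ADM1): a toy two-state system with
# widely separated rate constants, integrated with an implicit BDF method as
# is typical for stiff digester models. All rates are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    s, x = y                       # fast "substrate", slow "biomass" (toy)
    return [-1e4 * s + x,          # fast relaxation of s
            0.05 * x * s / (s + 1.0) - 0.01 * x]   # slow Monod-like growth

sol = solve_ivp(rhs, (0.0, 100.0), [1.0, 0.1], method="BDF", rtol=1e-8)
print(sol.t.size, "steps; final state:", sol.y[:, -1])
```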

  6. STEPS: efficient simulation of stochastic reaction-diffusion models in realistic morphologies.

    PubMed

    Hepburn, Iain; Chen, Weiliang; Wils, Stefan; De Schutter, Erik

    2012-05-10

    Models of cellular molecular systems are built from components such as biochemical reactions (including interactions between ligands and membrane-bound proteins), conformational changes and active and passive transport. A discrete, stochastic description of the kinetics is often essential to capture the behavior of the system accurately. Where spatial effects play a prominent role the complex morphology of cells may have to be represented, along with aspects such as chemical localization and diffusion. This high level of detail makes efficiency a particularly important consideration for software that is designed to simulate such systems. We describe STEPS, a stochastic reaction-diffusion simulator developed with an emphasis on simulating biochemical signaling pathways accurately and efficiently. STEPS supports all the above-mentioned features, and well-validated support for SBML allows many existing biochemical models to be imported reliably. Complex boundaries can be represented accurately in externally generated 3D tetrahedral meshes imported by STEPS. The powerful Python interface facilitates model construction and simulation control. STEPS implements the composition and rejection method, a variation of the Gillespie SSA, supporting diffusion between tetrahedral elements within an efficient search and update engine. Additional support for well-mixed conditions and for deterministic model solution is implemented. Solver accuracy is confirmed with an original and extensive validation set consisting of isolated reaction, diffusion and reaction-diffusion systems. Accuracy imposes upper and lower limits on tetrahedron sizes, which are described in detail. By comparing to Smoldyn, we show how the voxel-based approach in STEPS is often faster than particle-based methods, with increasing advantage in larger systems, and by comparing to MesoRD we show the efficiency of the STEPS implementation. STEPS simulates models of cellular reaction-diffusion systems with complex boundaries with high accuracy and high performance in C/C++, controlled by a powerful and user-friendly Python interface. STEPS is free for use and is available at http://steps.sourceforge.net/
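
    The composition and rejection method implemented in STEPS is a variant of Gillespie's stochastic simulation algorithm (SSA). For orientation, here is the conceptual core — a minimal direct-method SSA for a single reversible reaction, not the STEPS engine:

```python
# Minimal Gillespie direct-method SSA for a single reversible reaction
# A <-> B. STEPS uses a composition-and-rejection variant of the same idea,
# so this is only the conceptual core, not the STEPS implementation.
import numpy as np

rng = np.random.default_rng(1)
k_f, k_b = 0.5, 0.2                         # illustrative rate constants
a, b, t = 100, 0, 0.0

while t < 20.0:
    props = np.array([k_f * a, k_b * b])    # reaction propensities
    total = props.sum()
    if total == 0.0:
        break
    t += rng.exponential(1.0 / total)       # time to next reaction event
    if rng.random() < props[0] / total:     # choose which reaction fires
        a, b = a - 1, b + 1
    else:
        a, b = a + 1, b - 1

print(f"t={t:.2f}: A={a}, B={b}")
```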

  8. Flavored gauge mediation with discrete non-Abelian symmetries

    NASA Astrophysics Data System (ADS)

    Everett, Lisa L.; Garon, Todd S.

    2018-05-01

    We explore the model building and phenomenology of flavored gauge-mediation models of supersymmetry breaking in which the electroweak Higgs doublets and the SU(2) messenger doublets are connected by a discrete non-Abelian symmetry. The embedding of the Higgs and messenger fields into representations of this non-Abelian Higgs-messenger symmetry results in specific relations between the Standard Model Yukawa couplings and the messenger-matter Yukawa interactions. Taking the concrete example of an S3 Higgs-messenger symmetry, we demonstrate that, while the minimal implementation of this scenario suffers from a severe μ/Bμ problem that is well known from ordinary gauge mediation, expanding the Higgs-messenger field content allows μ and Bμ to be separately tuned, opening the possibility of phenomenologically viable models of the soft supersymmetry-breaking terms. We construct toy examples of this type that are consistent with the observed 125 GeV Higgs boson mass.

  9. SIERRA/Aero Theory Manual Version 4.46.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sierra Thermal/Fluid Team

    2017-09-01

    SIERRA/Aero is a two- and three-dimensional, node-centered, edge-based finite volume code that approximates the compressible Navier-Stokes equations on unstructured meshes. It is applicable to inviscid and high Reynolds number laminar and turbulent flows. Currently, two classes of turbulence models are provided: Reynolds Averaged Navier-Stokes (RANS) and hybrid methods such as Detached Eddy Simulation (DES). Large Eddy Simulation (LES) models are currently under development. The gas may be modeled either as ideal, or as a non-equilibrium, chemically reacting mixture of ideal gases. This document describes the mathematical models contained in the code, as well as certain implementation details. First, the governing equations are presented, followed by a description of the spatial discretization. Next, the time discretization is described, and finally the boundary conditions. Throughout the document, SIERRA/Aero is referred to simply as Aero for brevity.

  10. SIERRA/Aero Theory Manual Version 4.44

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sierra Thermal/Fluid Team

    2017-04-01

    SIERRA/Aero is a two- and three-dimensional, node-centered, edge-based finite volume code that approximates the compressible Navier-Stokes equations on unstructured meshes. It is applicable to inviscid and high Reynolds number laminar and turbulent flows. Currently, two classes of turbulence models are provided: Reynolds Averaged Navier-Stokes (RANS) and hybrid methods such as Detached Eddy Simulation (DES). Large Eddy Simulation (LES) models are currently under development. The gas may be modeled either as ideal, or as a non-equilibrium, chemically reacting mixture of ideal gases. This document describes the mathematical models contained in the code, as well as certain implementation details. First, the governing equations are presented, followed by a description of the spatial discretization. Next, the time discretization is described, and finally the boundary conditions. Throughout the document, SIERRA/Aero is referred to simply as Aero for brevity.

  11. A methodological approach for using high-level Petri Nets to model the immune system response.

    PubMed

    Pennisi, Marzio; Cavalieri, Salvatore; Motta, Santo; Pappalardo, Francesco

    2016-12-22

    Mathematical and computational models have proved to be very important support tools for understanding the immune system response against pathogens. Models and simulations have allowed researchers to study immune system behavior, to test biological hypotheses about diseases and infection dynamics, and to improve and optimize novel and existing drugs and vaccines. Continuous models, mainly based on differential equations, usually permit qualitative analysis of the system but lack descriptive detail; conversely, discrete models, such as agent-based models and cellular automata, can describe entity properties in detail at the cost of losing most qualitative analyses. Petri Nets (PN) are a graphical modeling tool developed to model concurrency and synchronization in distributed systems. Their use has grown steadily, thanks also to the introduction over the years of many features and extensions that led to the birth of "high-level" PN. We propose a novel methodological approach based on high-level PN, and in particular on Colored Petri Nets (CPN), that can be used to model the immune system response at the cellular scale. To demonstrate the potential of the approach we provide a simple model of the humoral immune response that is able to reproduce some of the most complex well-known features of the adaptive response, such as memory and specificity. The presented methodology combines the advantages of the two classical approaches based on continuous and discrete models, since it achieves a good level of granularity in describing cell behavior without losing the possibility of qualitative analysis. Furthermore, the methodology based on CPN allows the adoption of the same graphical modeling technique already well known to life scientists who use PN for modeling signaling pathways. Finally, such an approach may open the floodgates to the realization of multiscale models that integrate both signaling pathway (intracellular) models and cellular (population) models built upon the same technique and software.
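
    To make the formalism concrete, the sketch below plays the basic Petri-net token game: places hold tokens and a transition fires when every input place can supply the tokens it requires. It uses a plain PN with an invented two-transition humoral toy model, not the colored/high-level nets proposed in the paper:

```python
# Toy token-game illustration of the Petri-net formalism (plain PN, not the
# colored/high-level nets of the paper): places hold token counts and a
# transition fires when every input place can supply its required tokens.
import random

places = {"antigen": 5, "B_cell": 3, "plasma_cell": 0, "antibody": 0}
transitions = [
    # (inputs, outputs), each as {place: tokens}
    ({"antigen": 1, "B_cell": 1}, {"plasma_cell": 1}),        # activation
    ({"plasma_cell": 1}, {"plasma_cell": 1, "antibody": 2}),  # secretion
]

def enabled(t):
    return all(places[p] >= n for p, n in t[0].items())

random.seed(0)
for _ in range(20):
    ready = [t for t in transitions if enabled(t)]
    if not ready:
        break
    ins, outs = random.choice(ready)       # nondeterministic firing
    for p, n in ins.items():
        places[p] -= n
    for p, n in outs.items():
        places[p] += n

print(places)
```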

  12. Drainage area characterization for evaluating green infrastructure using the Storm Water Management Model

    NASA Astrophysics Data System (ADS)

    Lee, Joong Gwang; Nietch, Christopher T.; Panguluri, Srinivas

    2018-05-01

    Urban stormwater runoff quantity and quality are strongly dependent upon catchment properties. Models are used to simulate the runoff characteristics, but the output from a stormwater management model depends on how the catchment area is subdivided and represented as spatial elements. For green infrastructure modeling, we suggest a discretization method that distinguishes directly connected impervious area (DCIA) from the total impervious area (TIA). Pervious buffers, which receive runoff from upgradient impervious areas, should also be identified as a separate subset of the entire pervious area (PA). This separation provides an improved model representation of the runoff process. With these criteria in mind, an approach to spatial discretization for projects using the US Environmental Protection Agency's Storm Water Management Model (SWMM) is demonstrated for the Shayler Crossing watershed (SHC), a well-monitored, residential suburban area occupying 100 ha, east of Cincinnati, Ohio. The model relies on a highly resolved spatial database of urban land cover, stormwater drainage features, and topography. To verify the spatial discretization approach, a hypothetical analysis was conducted: six different representations of a common urbanscape that discharges runoff to a single storm inlet were evaluated with eight 24 h synthetic storms. This analysis allowed us to select a discretization scheme that balances complexity in model setup with presumed accuracy of the output with respect to the most complex discretization option considered. The balanced approach delineates directly and indirectly connected impervious areas (ICIA), buffering pervious area (BPA) receiving impervious runoff, and the other pervious area within a SWMM subcatchment. It performed well at the watershed scale with minimal calibration effort (Nash-Sutcliffe coefficient = 0.852; R2 = 0.871). The approach accommodates the distribution of runoff contributions from different spatial components and flow pathways that would impact green infrastructure performance. A SWMM model developed using the discretization approach is calibrated by adjusting parameters per land cover component instead of per subcatchment and can therefore be applied to relatively large watersheds if the land cover components are relatively homogeneous and/or categorized appropriately in the GIS that supports the model parameterization. Finally, with a few model adjustments, we show how the simulated stream hydrograph can be separated into the relative contributions from different land cover types and subsurface sources, adding insight into the potential effectiveness of planned green infrastructure scenarios at the watershed scale.

  13. Validation of a RANS transition model using a high-order weighted compact nonlinear scheme

    NASA Astrophysics Data System (ADS)

    Tu, GuoHua; Deng, XiaoGang; Mao, MeiLiang

    2013-04-01

    A modified transition model is given based on the shear stress transport (SST) turbulence model and an intermittency transport equation. The energy gradient term in the original model is replaced by the flow strain rate to save computational cost. The model employs only local variables, so it can be conveniently implemented in modern computational fluid dynamics codes. The fifth-order weighted compact nonlinear scheme and the fourth-order staggered scheme are applied to discretize the governing equations in order to minimize discretization errors, so as to avoid confusing numerical errors with transition model errors. The high-order package is compared with a second-order TVD method on simulating the transitional flow over a flat plate. Numerical results indicate that the high-order package gives better grid convergence than the second-order method. Validation of the transition model is performed for transitional flows ranging from low speed to hypersonic speed.

  14. sEMG feature evaluation for identification of elbow angle resolution in graded arm movement.

    PubMed

    Castro, Maria Claudia F; Colombini, Esther L; Aquino, Plinio T; Arjunan, Sridhar P; Kumar, Dinesh K

    2014-11-25

    Automatic and accurate identification of the elbow angle from surface electromyogram (sEMG) is essential for myoelectric controlled upper-limb exoskeleton systems. This requires appropriate selection of sEMG features and identification of the limitations of such a system. This study demonstrated that it is possible to identify three discrete positions of the elbow (full extension, right angle, and the mid-way point) with a window size of only 200 milliseconds. While most features were suitable for this purpose, Power Spectral Density Averages (PSD-Av) performed best. The system correctly classified the sEMG against the elbow angle in 100% of cases when only two discrete positions (full extension and elbow at right angle) were considered, while correct classification was 89% when there were three discrete positions. However, sEMG was unable to accurately determine the elbow position when five discrete angles were considered. It was also observed that there was no difference between the extension and flexion phases.
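
    A PSD-average feature of the kind reported to perform best can be sketched as band-wise means of the Welch power spectrum over a 200 ms window. The sampling rate, band edges, and surrogate signal below are illustrative assumptions, not the study's settings:

```python
# Hedged sketch of a PSD-average feature on a 200 ms sEMG window: estimate
# the Welch power spectrum and average it within fixed frequency bands. The
# sampling rate, band edges, and surrogate signal are illustrative only.
import numpy as np
from scipy.signal import welch

fs = 1000                                  # Hz, assumed sampling rate
window = np.random.default_rng(0).standard_normal(200)  # 200 ms surrogate

f, pxx = welch(window, fs=fs, nperseg=100)

bands = [(20, 60), (60, 150), (150, 300)]  # illustrative band edges (Hz)
feature = [pxx[(f >= lo) & (f < hi)].mean() for lo, hi in bands]
print("PSD-average feature vector:", np.round(feature, 6))
```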

  15. Using a new discretization approach to design a delayed LQG controller

    NASA Astrophysics Data System (ADS)

    Haraguchi, M.; Hu, H. Y.

    2008-07-01

    In general, discrete-time controls have become increasingly preferred in engineering because of their easy implementation and simple computations. However, the available discretization approaches for systems with time delays increase the system dimension and have a high computational cost. This paper presents an effective discretization approach for continuous-time systems with an input delay. The approach enables one to transform the input-delay system into a delay-free system while keeping the system dimension unchanged in the state transformation. To demonstrate an application of the approach, this paper presents the design of an LQ regulator for continuous-time systems with an input delay, together with a state observer based on a Kalman filter for estimating the full-state vector from measurements of the system. The case studies in the paper support the efficacy and efficiency of the proposed approach as applied to the vibration control of a three-story structure model with the actuator delay taken into account.

  16. Robustness of quantum key distribution with discrete and continuous variables to channel noise

    NASA Astrophysics Data System (ADS)

    Lasota, Mikołaj; Filip, Radim; Usenko, Vladyslav C.

    2017-06-01

    We study the robustness of quantum key distribution protocols using discrete or continuous variables to channel noise. We introduce a model of such noise based on coupling of the signal to a thermal reservoir, typical for continuous-variable quantum key distribution, and extend it to the discrete-variable case. We then compare the bounds on tolerable channel noise between these two kinds of protocols using the same noise parametrization, for implementations that are otherwise perfect. The results show that continuous-variable protocols can exhibit similar robustness to channel noise when the transmittance of the channel is relatively high. However, under strong loss, discrete-variable protocols are superior and can overcome even the infinite-squeezing continuous-variable protocol while using limited nonclassical resources. The single-photon production probability that a practical photon source would have to achieve in order to demonstrate such superiority is feasible thanks to the recent rapid development in this field.

  17. Escript: Open Source Environment For Solving Large-Scale Geophysical Joint Inversion Problems in Python

    NASA Astrophysics Data System (ADS)

    Gross, Lutz; Altinay, Cihan; Fenwick, Joel; Smith, Troy

    2014-05-01

    The program package escript has been designed for solving mathematical modeling problems using python, see Gross et al. (2013). Its development and maintenance has been funded by the Australian Commonwealth to provide open source software infrastructure for the Australian Earth Science community (recent funding by the Australian Geophysical Observing System EIF (AGOS) and the AuScope Collaborative Research Infrastructure Scheme (CRIS)). The key concepts of escript are based on the terminology of spatial functions and partial differential equations (PDEs) - an approach providing abstraction from the underlying spatial discretization method (i.e. the finite element method (FEM)). This feature presents a programming environment to the user which is easy to use even for complex models. Because implementations are independent of the underlying data structures, simulations are easily portable across desktop computers and scalable compute clusters without modifications to the program code. escript has been successfully applied in a variety of applications including modeling mantle convection, melting processes, volcanic flow, earthquakes, faulting, multi-phase flow, block caving and mineralization (see Poulet et al. 2013). The recent escript release (see Gross et al. (2013)) provides an open framework for solving joint inversion problems for geophysical data sets (potential field, seismic and electro-magnetic). The strategy is based on the idea of formulating the inversion problem as an optimization problem with PDE constraints, where the cost function is defined by the data defect and the regularization term for the rock properties, see Gross & Kemp (2013). This first-optimize-then-discretize approach avoids assembling the, in general, dense sensitivity matrix used in conventional approaches, where discrete programming techniques are applied to the discretized problem (first-discretize-then-optimize). In this paper we discuss the mathematical framework for inversion and appropriate solution schemes in escript. We also give a brief introduction to escript's open framework for defining and solving geophysical inversion problems. Finally we show some benchmark results to demonstrate the computational scalability of the inversion method across a large number of cores and compute nodes in a parallel computing environment. References: - L. Gross et al. (2013): Escript Solving Partial Differential Equations in Python Version 3.4, The University of Queensland, https://launchpad.net/escript-finley - L. Gross and C. Kemp (2013) Large Scale Joint Inversion of Geophysical Data using the Finite Element Method in escript. ASEG Extended Abstracts 2013, http://dx.doi.org/10.1071/ASEG2013ab306 - T. Poulet, L. Gross, D. Georgiev, J. Cleverley (2012): escript-RT: Reactive transport simulation in Python using escript, Computers & Geosciences, Volume 45, 168-176. http://dx.doi.org/10.1016/j.cageo.2011.11.005.

  18. Micromechanical Aspects of Hydraulic Fracturing Processes

    NASA Astrophysics Data System (ADS)

    Galindo-torres, S. A.; Behraftar, S.; Scheuermann, A.; Li, L.; Williams, D.

    2014-12-01

    A micromechanical model is developed to simulate the hydraulic fracturing process. The model comprises two key components. First, the solid matrix, assumed to be a rock mass with pre-fabricated cracks, is represented by an array of bonded particles simulated by the Discrete Element Model (DEM)[1]. The interaction is governed by the spheropolyhedra method, which was introduced by the authors previously and has been shown to realistically represent many of the features found in fracturing and comminution processes. The second component is the fluid, which is modelled by the Lattice Boltzmann Method (LBM); it was recently coupled with the spheropolyhedra by the authors and validated. An advantage of this coupled LBM-DEM model is the control over many of the parameters of the fracturing fluid, such as its viscosity and the injection rate. To the best of the authors' knowledge this is the first application of such a coupled scheme to the study of hydraulic fracturing[2]. In this first implementation, results are presented for a two-dimensional situation. Fig. 1 shows one snapshot of the coupled LBM-DEM simulation of hydraulic fracturing, in which the elements with broken bonds can be identified and the fracture geometry quantified. The simulation involves a variation of the underground stress, particularly the difference between the two principal components of the stress tensor, to explore the effect on the fracture path. A second study focuses on the fluid viscosity to examine the effect of the time scales of different injection plans on the fracture geometry. The developed tool and the presented results have important implications for future studies of the hydraulic fracturing process and technology. References: 1. Galindo-Torres, S.A., et al., Breaking processes in three-dimensional bonded granular materials with general shapes. Computer Physics Communications, 2012. 183(2): p. 266-277. 2. Galindo-Torres, S.A., A coupled Discrete Element Lattice Boltzmann Method for the simulation of fluid-solid interaction with particles of general shapes. Computer Methods in Applied Mechanics and Engineering, 2013. 265(0): p. 107-119.

  19. Modeling of Mutiscale Electromagnetic Magnetosphere-Ionosphere Interactions near Discrete Auroral Arcs Observed by the MICA Sounding Rocket

    NASA Astrophysics Data System (ADS)

    Streltsov, A. V.; Lynch, K. A.; Fernandes, P. A.; Miceli, R.; Hampton, D. L.; Michell, R. G.; Samara, M.

    2012-12-01

    The MICA (Magnetosphere-Ionosphere Coupling in the Alfvén Resonator) sounding rocket was launched from Poker Flat on February 19, 2012. The rocket was aimed into a system of discrete auroral arcs, and during its flight it detected small-scale electromagnetic disturbances with the characteristic features of dispersive Alfvén waves. We report results from numerical modeling of these observations. Our simulations are based on a two-fluid MHD model describing multi-scale interactions between magnetic field-aligned currents carried by shear Alfvén waves and the ionosphere. The results from our simulations suggest that the small-scale electromagnetic structures measured by MICA can indeed be interpreted as dispersive Alfvén waves generated by the active ionospheric response (ionospheric feedback instability) inside the large-scale downward magnetic field-aligned current interacting with the ionosphere.

  20. Spin foam models for quantum gravity

    NASA Astrophysics Data System (ADS)

    Perez, Alejandro

    The definition of a quantum theory of gravity is explored following Feynman's path-integral approach. The aim is to construct a well defined version of the Wheeler-Misner-Hawking ``sum over four geometries'' formulation of quantum general relativity (GR). This is done by exploiting the similarities between the formulation of GR in terms of tetrad-connection variables (Palatini formulation) and a simpler theory called BF theory. One can go from BF theory to GR by imposing certain constraints on the BF-theory configurations. BF theory contains only global degrees of freedom (topological theory) and it can be exactly quantized à la Feynman by introducing a discretization of the manifold. Using the path integral for BF theory we define a path integration for GR by imposing the BF-to-GR constraints on the BF measure. The infinite degrees of freedom of gravity are restored in the process, and the restriction to a single discretization introduces a cutoff in the summed-over configurations. In order to capture all the degrees of freedom, a sum over discretizations is implemented. Both the implementation of the BF-to-GR constraints and the sum over discretizations are obtained by means of the introduction of an auxiliary field theory (AFT). 4-geometries in the path integral for GR are given by the Feynman diagrams of the AFT, which is in this sense dual to GR. Feynman diagrams correspond to 2-complexes labeled by unitary irreducible representations of the internal gauge group (corresponding to tetrad rotations in the connection formulation of GR). A model for 4-dimensional Euclidean quantum gravity (QG) is defined which corresponds to a different normalization of the Barrett-Crane model. The model is perturbatively finite; divergences appearing in the Barrett-Crane model are cured by the new normalization. We extend our techniques to the Lorentzian sector, where we define two models for four-dimensional QG. The first one contains only time-like representations and is shown to be perturbatively finite. The second model contains both time-like and space-like representations. The spectrum of geometrical operators coincides with the prediction of the canonical approach of loop QG. At the moment, the convergence properties of the model are less understood and remain for future investigation.

  1. Disaster Response Modeling Through Discrete-Event Simulation

    NASA Technical Reports Server (NTRS)

    Wang, Jeffrey; Gilmer, Graham

    2012-01-01

    Organizations today are required to plan against a rapidly changing, high-cost environment. This is especially true for first responders to disasters and other incidents, where critical decisions must be made in a timely manner to save lives and resources. Discrete-event simulations enable organizations to make better decisions by visualizing complex processes and the impact of proposed changes before they are implemented. A discrete-event simulation using Simio software has been developed to effectively analyze and quantify the imagery capabilities of domestic aviation resources conducting relief missions. This approach has helped synthesize large amounts of data to better visualize process flows, manage resources, and pinpoint capability gaps and shortfalls in disaster response scenarios. Simulation outputs and results have supported decision makers in the understanding of high risk locations, key resource placement, and the effectiveness of proposed improvements.
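
    The study used the commercial Simio package; the same discrete-event mechanics can be sketched with the open-source SimPy library, where relief sorties queue for a shared imagery-processing resource. All names, durations, and capacities below are hypothetical:

```python
# Minimal discrete-event sketch in SimPy (the study itself used Simio):
# aircraft sorties queue for a shared imagery-processing station, exposing
# how a scarce resource becomes the bottleneck. Durations are hypothetical.
import random
import simpy

def sortie(env, name, station):
    yield env.timeout(random.expovariate(1 / 2.0))   # flight time (h)
    with station.request() as req:
        yield req                                    # wait for an analyst
        yield env.timeout(1.0)                       # image processing (h)
        print(f"{env.now:5.2f} h: {name} imagery processed")

random.seed(0)
env = simpy.Environment()
station = simpy.Resource(env, capacity=1)
for i in range(5):
    env.process(sortie(env, f"sortie-{i}", station))
env.run(until=24)
```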

  2. Elastic Model Transitions Using Quadratic Inequality Constrained Least Squares

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.

    2012-01-01

    A technique is presented for initializing multiple discrete finite element model (FEM) mode sets for certain types of flight dynamics formulations that rely on superposition of orthogonal modes for modeling the elastic response. Such approaches are commonly used for modeling launch vehicle dynamics, and challenges arise due to the rapidly time-varying nature of the rigid-body and elastic characteristics. By way of an energy argument, a quadratic inequality constrained least squares (LSQI) algorithm is employed to effect a smooth transition from one set of FEM eigenvectors to another, with no requirement that the models be of similar dimension or that the eigenvectors be correlated in any particular way. The physically unrealistic and controversial method of eigenvector interpolation is completely avoided, and the discrete solution approximates that of the continuously varying system. The real-time computational burden is shown to be negligible due to convenient features of the solution method. Simulation results are presented, and applications to staging and other discontinuous mass changes are discussed.
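
    A generic LSQI problem, min ||Ax - b|| subject to ||x|| <= alpha, can be solved by bisection on the Lagrange multiplier of the regularized normal equations, exploiting the fact that ||x(mu)|| decreases monotonically in mu. The sketch below shows only that generic solve; the paper's energy weighting and mode-set bookkeeping are omitted:

```python
# Generic sketch of a quadratic-inequality-constrained least-squares (LSQI)
# solve, min ||A x - b|| subject to ||x|| <= alpha, via bisection on the
# Lagrange multiplier mu of the Tikhonov-regularized normal equations.
import numpy as np

def lsqi(A, b, alpha, iters=60):
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    if np.linalg.norm(x) <= alpha:              # constraint inactive
        return x
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    lo, hi = 0.0, 1.0
    while np.linalg.norm(np.linalg.solve(AtA + hi * np.eye(n), Atb)) > alpha:
        hi *= 2.0                               # bracket the multiplier
    for _ in range(iters):                      # ||x(mu)|| decreases in mu
        mu = 0.5 * (lo + hi)
        x = np.linalg.solve(AtA + mu * np.eye(n), Atb)
        lo, hi = (mu, hi) if np.linalg.norm(x) > alpha else (lo, mu)
    return x

rng = np.random.default_rng(2)
A, b = rng.standard_normal((8, 4)), rng.standard_normal(8)
x = lsqi(A, b, alpha=0.5)
print(np.linalg.norm(x))                        # ~0.5 when constraint binds
```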

  3. Modeling molecular mechanisms in the axon

    NASA Astrophysics Data System (ADS)

    de Rooij, R.; Miller, K. E.; Kuhl, E.

    2017-03-01

    Axons are living systems that display highly dynamic changes in stiffness, viscosity, and internal stress. However, the mechanistic origin of these phenomenological properties remains elusive. Here we establish a computational mechanics model that interprets cellular-level characteristics as emergent properties from molecular-level events. We create an axon model of discrete microtubules, which are connected to neighboring microtubules via discrete crosslinking mechanisms that obey a set of simple rules. We explore two types of mechanisms: passive and active crosslinking. Our passive and active simulations suggest that the stiffness and viscosity of the axon increase linearly with the crosslink density, and that both are highly sensitive to the crosslink detachment and reattachment times. Our model explains how active crosslinking with dynein motors generates internal stresses and actively drives axon elongation. We anticipate that our model will allow us to probe a wide variety of molecular phenomena—both in isolation and in interaction—to explore emergent cellular-level features under physiological and pathological conditions.

  4. The use of simple reparameterizations to improve the efficiency of Markov chain Monte Carlo estimation for multilevel models with applications to discrete time survival models.

    PubMed

    Browne, William J; Steele, Fiona; Golalizadeh, Mousa; Green, Martin J

    2009-06-01

    We consider the application of Markov chain Monte Carlo (MCMC) estimation methods to random-effects models and in particular the family of discrete time survival models. Survival models can be used in many situations in the medical and social sciences and we illustrate their use through two examples that differ in terms of both substantive area and data structure. A multilevel discrete time survival analysis involves expanding the data set so that the model can be cast as a standard multilevel binary response model. For such models it has been shown that MCMC methods have advantages in terms of reducing estimate bias. However, the data expansion results in very large data sets for which MCMC estimation is often slow and can produce chains that exhibit poor mixing. Any way of improving the mixing will result in both speeding up the methods and more confidence in the estimates that are produced. The MCMC methodological literature is full of alternative algorithms designed to improve mixing of chains and we describe three reparameterization techniques that are easy to implement in available software. We consider two examples of multilevel survival analysis: incidence of mastitis in dairy cattle and contraceptive use dynamics in Indonesia. For each application we show where the reparameterization techniques can be used and assess their performance.
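
    The data expansion described above unrolls each subject's survival record into one row per discrete time period, with a binary outcome marking the event period, so that a standard (multilevel) binary-response model applies. A minimal pandas sketch with hypothetical column names:

```python
# Sketch of the person-period data expansion for discrete-time survival
# analysis: each subject contributes one row per period observed, and the
# binary outcome y is 1 only in the final period of subjects with an event.
# Column names are hypothetical, not those of the cited applications.
import pandas as pd

subjects = pd.DataFrame({
    "id": [1, 2, 3],
    "periods_observed": [3, 5, 2],    # discrete time at event/censoring
    "event": [1, 0, 1],               # 1 = event occurred, 0 = censored
})

rows = [
    {"id": s.id, "period": t, "y": int(s.event and t == s.periods_observed)}
    for s in subjects.itertuples()
    for t in range(1, s.periods_observed + 1)
]
person_period = pd.DataFrame(rows)
print(person_period)
# This table feeds a multilevel logistic regression with period effects.
```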

  5. A health economic model for the development and evaluation of innovations in aged care: an application to consumer-directed care-study protocol.

    PubMed

    Ratcliffe, Julie; Lancsar, Emily; Luszcz, Mary; Crotty, Maria; Gray, Len; Paterson, Jan; Cameron, Ian D

    2014-06-25

    Consumer-directed care is currently being embraced within Australia and internationally as a means of promoting autonomy and choice in the delivery of health and aged care services. Despite its wide proliferation little research has been conducted to date to assess the views and preferences of older people for consumer-directed care or to assess the costs and benefits of such an approach relative to existing models of service delivery. A comprehensive health economic model will be developed and applied to the evolution, implementation and evaluation of consumer-directed care in an Australian community aged care setting. A mixed methods approach comprising qualitative interviews and a discrete choice experiment will determine the attitudes and preferences of older people and their informal carers for consumer-directed care. The results of the qualitative interviews and the discrete choice experiment will inform the introduction of a new consumer-directed care innovation in service delivery. The cost-effectiveness of consumer-directed care will be evaluated by comparing incremental changes in resource use, costs and health and quality of life outcomes relative to traditional services. The discrete choice experiment will be repeated at the end of the implementation period to determine the extent to which attitudes and preferences change as a consequence of experience of consumer-directed care. The proposed framework will have wide applicability in the future development and economic evaluation of new innovations across the health and aged care sectors. The study is approved by Flinders University Social and Behavioural Research Ethics Committee (Project No. 6114/SBREC). Findings from the qualitative interviews, discrete choice experiments and the economic evaluation will be reported at a workshop of stakeholders to be held in 2015 and will be documented in reports and in peer reviewed journal articles. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  6. Data approximation using a blending type spline construction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dalmo, Rune; Bratlie, Jostein

    2014-11-18

    Generalized expo-rational B-splines (GERBS) is a blending-type spline construction where local functions at each knot are blended together by C^k-smooth basis functions. One way of approximating discrete regular data using GERBS is to partition the data set into subsets and fit a local function to each subset. Partitioning and fitting strategies can be devised such that important or interesting data points are interpolated in order to preserve certain features. We present a method for fitting discrete data using a tensor product GERBS construction. The method is based on detection of feature points using differential geometry. Derivatives, which are necessary for feature point detection and used to construct local surface patches, are approximated from the discrete data using finite differences.

  7. Numerical Integration Techniques for Curved-Element Discretizations of Molecule–Solvent Interfaces

    PubMed Central

    Bardhan, Jaydeep P.; Altman, Michael D.; Willis, David J.; Lippow, Shaun M.; Tidor, Bruce; White, Jacob K.

    2012-01-01

    Surface formulations of biophysical modeling problems offer attractive theoretical and computational properties. Numerical simulations based on these formulations usually begin with discretization of the surface under consideration; often, the surface is curved, possessing complicated structure and possibly singularities. Numerical simulations commonly are based on approximate, rather than exact, discretizations of these surfaces. To assess the strength of the dependence of simulation accuracy on the fidelity of surface representation, we have developed methods to model several important surface formulations using exact surface discretizations. Following and refining Zauhar’s work (J. Comp.-Aid. Mol. Des. 9:149-159, 1995), we define two classes of curved elements that can exactly discretize the van der Waals, solvent-accessible, and solvent-excluded (molecular) surfaces. We then present numerical integration techniques that can accurately evaluate nonsingular and singular integrals over these curved surfaces. After validating the exactness of the surface discretizations and demonstrating the correctness of the presented integration methods, we present a set of calculations that compare the accuracy of approximate, planar-triangle-based discretizations and exact, curved-element-based simulations of surface-generalized-Born (sGB), surface-continuum van der Waals (scvdW), and boundary-element method (BEM) electrostatics problems. Results demonstrate that continuum electrostatic calculations with BEM using curved elements, piecewise-constant basis functions, and centroid collocation are nearly ten times more accurate than planar-triangle BEM for basis sets of comparable size. The sGB and scvdW calculations give exceptional accuracy even for the coarsest obtainable discretized surfaces. The extra accuracy is attributed to the exact representation of the solute–solvent interface; in contrast, commonly used planar-triangle discretizations can only offer improved approximations with increasing discretization and associated increases in computational resources. The results clearly demonstrate that our methods for approximate integration on an exact geometry are far more accurate than exact integration on an approximate geometry. A MATLAB implementation of the presented integration methods and sample data files containing curved-element discretizations of several small molecules are available online at http://web.mit.edu/tidor. PMID:17627358

  8. Matter-wave solitons supported by quadrupole-quadrupole interactions and anisotropic discrete lattices

    NASA Astrophysics Data System (ADS)

    Zhong, Rong-Xuan; Huang, Nan; Li, Huang-Wu; He, He-Xiang; Lü, Jian-Tao; Huang, Chun-Qing; Chen, Zhao-Pin

    2018-04-01

    We numerically and analytically investigate the formation and features of two-dimensional discrete Bose-Einstein condensate solitons, formed by particles with quadrupole-quadrupole interactions trapped in tunable anisotropic discrete optical lattices. The square optical lattices in the model can be formed by two pairs of interfering plane waves with different intensities. The two hopping rates of the particles in the orthogonal directions are different, which gives rise to a linearly anisotropic system. We find that if all pairs of dipole and anti-dipole are perpendicular to the lattice plane, and the line connecting the dipole and anti-dipole composing each quadrupole is parallel to the horizontal direction, both the linear anisotropy and the nonlocal nonlinearity can strongly influence the formation of solitons. There exist three patterns of stable solitons: horizontally elongated quasi-one-dimensional discrete solitons, disk-shaped isotropic solitons, and vertically elongated quasi-continuous solitons. We systematically demonstrate the relationships of the chemical potential, size and shape of a soliton with its total norm and the vertical hopping rate, and analytically reveal the linear dispersion relation for quasi-one-dimensional discrete solitons.

  9. Nonlinear Light Dynamics in Multi-Core Structures

    DTIC Science & Technology

    2017-02-27

    be generated in continuous-discrete optical media such as multi-core optical fiber or waveguide arrays; localisation dynamics in a continuous... discrete nonlinear system. Detailed theoretical analysis is presented of the existence and stability of the discrete-continuous light bullets using a very...and pulse compression using wave collapse (self-focusing) energy localisation dynamics in a continuous-discrete nonlinear system, as implemented in a

  10. A minimally-resolved immersed boundary model for reaction-diffusion problems

    NASA Astrophysics Data System (ADS)

    Pal Singh Bhalla, Amneet; Griffith, Boyce E.; Patankar, Neelesh A.; Donev, Aleksandar

    2013-12-01

    We develop an immersed boundary approach to modeling reaction-diffusion processes in dispersions of reactive spherical particles, from the diffusion-limited to the reaction-limited setting. We represent each reactive particle with a minimally-resolved "blob" using many fewer degrees of freedom per particle than standard discretization approaches. More complicated or more highly resolved particle shapes can be built out of a collection of reactive blobs. We demonstrate numerically that the blob model can provide an accurate representation at low to moderate packing densities of the reactive particles, at a cost not much larger than solving a Poisson equation in the same domain. Unlike multipole expansion methods, our method does not require analytically computed Green's functions, but rather, computes regularized discrete Green's functions on the fly by using a standard grid-based discretization of the Poisson equation. This allows for great flexibility in implementing different boundary conditions, coupling to fluid flow or thermal transport, and the inclusion of other effects such as temporal evolution and even nonlinearities. We develop multigrid-based preconditioners for solving the linear systems that arise when using implicit temporal discretizations or studying steady states. In the diffusion-limited case the resulting linear system is a saddle-point problem, the efficient solution of which remains a challenge for suspensions of many particles. We validate our method by comparing to published results on reaction-diffusion in ordered and disordered suspensions of reactive spheres.

  11. On the nonexistence of degenerate phase-shift discrete solitons in a dNLS nonlocal lattice

    NASA Astrophysics Data System (ADS)

    Penati, T.; Sansottera, M.; Paleari, S.; Koukouloyannis, V.; Kevrekidis, P. G.

    2018-05-01

    We consider a one-dimensional discrete nonlinear Schrödinger (dNLS) model featuring interactions beyond nearest neighbors. We are interested in the existence (or nonexistence) of phase-shift discrete solitons, which correspond to four-site vortex solutions in the standard two-dimensional dNLS model (square lattice), of which this is a simpler variant. Due to the specific choice of lengths of the inter-site interactions, the vortex configurations considered present a degeneracy which causes the standard continuation techniques to be non-applicable. In the present one-dimensional case, the existence of a conserved quantity for the soliton profile (the so-called density current), together with a perturbative construction, leads to the nonexistence of any phase-shift discrete soliton which is at least C^2 with respect to the small coupling ɛ, in the limit of vanishing ɛ. If we assume the solution to be only C^0 in the same limit of ɛ, nonexistence is instead proved by studying the bifurcation equation of a Lyapunov-Schmidt reduction, expanded to suitably high orders. Specifically, we produce a nonexistence criterion whose efficiency we reveal in the cases of partial and full degeneracy of approximate solutions obtained via a leading order expansion.

  12. Bell-Curve Genetic Algorithm for Mixed Continuous and Discrete Optimization Problems

    NASA Technical Reports Server (NTRS)

    Kincaid, Rex K.; Griffith, Michelle; Sykes, Ruth; Sobieszczanski-Sobieski, Jaroslaw

    2002-01-01

    In this manuscript we have examined an extension of BCB that encompasses a mix of continuous and quasi-discrete, as well as truly-discrete, applications. We began by testing two refinements to the discrete version of BCB. The testing of midpoint versus fitness (Tables 1 and 2) proved inconclusive. The testing of discrete normal tails versus standard mutation was conclusive and demonstrated that the discrete normal tails are better. Next, we implemented these refinements in a combined continuous and discrete BCB and compared the performance of two discrete distance measures on the hub problem. Here we found that when "order does matter" it pays to take it into account.

  13. Balancing accuracy, efficiency, and flexibility in a radiative transfer parameterization for dynamical models

    NASA Astrophysics Data System (ADS)

    Pincus, R.; Mlawer, E. J.

    2017-12-01

    Radiation is a key process in numerical models of the atmosphere. The problem is well understood, and the parameterization of radiation has seen relatively few conceptual advances in the past 15 years. It is nonetheless often the single most expensive component of all physical parameterizations, despite being computed less frequently than other terms. This combination of cost and maturity suggests value in a single radiation parameterization that could be shared across models; devoting effort to a single parameterization might allow for fine-tuning for efficiency. The challenge lies in coupling this parameterization to many disparate representations of clouds and aerosols. This talk will describe RRTMGP, a new radiation parameterization that seeks to balance efficiency and flexibility. This balance is struck by isolating computational tasks in "kernels" that expose as much fine-grained parallelism as possible. These have simple interfaces and are interoperable across programming languages so that they might be replaced by alternative implementations in domain-specific languages. Coupling to the host model makes use of object-oriented features of Fortran 2003, minimizing branching within the kernels and the amount of data that must be transferred. We will show accuracy and efficiency results for a globally-representative set of atmospheric profiles using a relatively high-resolution spectral discretization.

  14. On an LAS-integrated soft PLC system based on WorldFIP fieldbus.

    PubMed

    Liang, Geng; Li, Zhijun; Li, Wen; Bai, Yan

    2012-01-01

    Communication efficiency is lowered and real-time performance is inadequate in discrete control based on traditional WorldFIP field intelligent nodes when the scale of field control is large. A soft PLC system based on the WorldFIP fieldbus was designed and implemented. The Link Activity Scheduler (LAS) was integrated into the system, and field intelligent I/O modules acted as networked basic nodes. Discrete control logic was implemented with the LAS-integrated soft PLC system. The proposed system was composed of a configuration and supervisory sub-system and running sub-systems. The configuration and supervisory sub-system was implemented with a personal computer or an industrial personal computer; the running sub-systems were designed and implemented on embedded hardware and software. Communication and scheduling in the running sub-system were implemented with one embedded sub-module; discrete control and system self-diagnosis were implemented with another. The structure of the proposed system is presented, and the methodology for the design of the sub-systems is expounded. Experiments were carried out to evaluate the performance of the proposed system in both discrete and process control by investigating the effect of network data transmission delay induced by the soft PLC in the WorldFIP network, and of CPU workload, on the resulting control performance. The experimental observations indicate that the proposed system is practically applicable. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.

  15. Fabry-Perot confocal resonator optical associative memory

    NASA Astrophysics Data System (ADS)

    Burns, Thomas J.; Rogers, Steven K.; Vogel, George A.

    1993-03-01

    A unique optical associative memory architecture is presented that combines the optical processing environment of a Fabry-Perot confocal resonator with the dynamic storage and recall properties of volume holograms. The confocal resonator reduces the size and complexity of previous associative memory architectures by folding a large number of discrete optical components into an integrated, compact optical processing environment. Experimental results demonstrate the system is capable of recalling a complete object from memory when presented with partial information about the object. A Fourier optics model of the system's operation shows it implements a spatially continuous version of a discrete, binary Hopfield neural network associative memory.
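
    The discrete, binary Hopfield memory that the resonator implements in spatially continuous form stores bipolar patterns in a Hebbian weight matrix and recalls them by iterated thresholding. A minimal sketch with two hand-made patterns:

```python
# Sketch of the discrete binary Hopfield associative memory that the optical
# resonator implements in spatially continuous form: Hebbian storage of
# bipolar patterns, then iterative recall from a partially corrupted cue.
import numpy as np

patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
n = patterns.shape[1]
W = sum(np.outer(p, p) for p in patterns) / n   # Hebbian outer-product rule
np.fill_diagonal(W, 0.0)                        # no self-coupling

cue = patterns[0].copy()
cue[:3] = 1                                     # corrupt part of the pattern
state = cue.astype(float)
for _ in range(10):                             # synchronous threshold updates
    state = np.where(W @ state >= 0, 1.0, -1.0)

print("recalled pattern matches stored:", np.array_equal(state, patterns[0]))
```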

  16. Application of Machine Learning to Rotorcraft Health Monitoring

    NASA Technical Reports Server (NTRS)

    Cody, Tyler; Dempsey, Paula J.

    2017-01-01

    Machine learning is a powerful tool for data exploration and model building with large data sets. This project aimed to use machine learning techniques to explore the inherent structure of data from rotorcraft gear tests and the relationships between features and damage states, and to build a system for predicting gear health for future rotorcraft transmission applications. Classical machine learning techniques are difficult, if not irresponsible, to apply to time series data because many assume independence between samples. To overcome this, Hidden Markov Models were used to create a binary classifier for identifying scuffing transitions, and Recurrent Neural Networks were used to leverage long-distance relationships in predicting discrete damage states. When combined in a workflow, where the binary classifier acted as a filter for the fatigue monitor, the system demonstrated accuracy in damage state prediction and scuffing identification. The time-dependent nature of the data restricted data exploration to collecting and analyzing data from the model selection process. The limited amount of available data yielded little useful information, and the division of training and testing sets tended to heavily influence the scores of the models across combinations of features and hyper-parameters. This work built a framework for tracking scuffing and fatigue on streaming data and demonstrates that machine learning has much to offer rotorcraft health monitoring by using Bayesian learning and deep learning methods to capture the time-dependent nature of the data. Suggested future work is to implement the framework developed in this project using a larger variety of data sets to test the generalization capabilities of the models and to allow for data exploration.

  17. Acoustic-Seismic Mixed Feature Extraction Based on Wavelet Transform for Vehicle Classification in Wireless Sensor Networks.

    PubMed

    Zhang, Heng; Pan, Zhongming; Zhang, Wenna

    2018-06-07

    An acoustic-seismic mixed feature extraction method based on the wavelet coefficient energy ratio (WCER) of the target signal is proposed in this study for classifying vehicle targets in wireless sensor networks. The signal was decomposed into a set of wavelet coefficients using the à trous algorithm, which is a concise method used to implement the wavelet transform of a discrete signal sequence. After the wavelet coefficients of the target acoustic and seismic signals were obtained, the energy ratio of each layer coefficient was calculated as the feature vector of the target signals. Subsequently, the acoustic and seismic features were merged into an acoustic-seismic mixed feature to improve the target classification accuracy after the acoustic and seismic WCER features of the target signal were simplified using the hierarchical clustering method. We selected the support vector machine method for classification and utilized the data acquired from a real-world experiment to validate the proposed method. The calculated results show that the WCER feature extraction method can effectively extract the target features from target signals. Feature simplification can reduce the time consumption of feature extraction and classification, with no effect on the target classification accuracy. The use of acoustic-seismic mixed features effectively improved target classification accuracy by approximately 12% compared with either acoustic signal or seismic signal alone.
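
    A minimal sketch of the WCER computation, assuming PyWavelets' swt (an à trous style undecimated transform) and synthetic signals; the wavelet and level choices are illustrative, and the resulting vectors would feed a support vector machine as in the paper.

    ```python
    # Minimal sketch of the wavelet coefficient energy ratio (WCER) idea using the
    # stationary wavelet transform from PyWavelets. Wavelet and level are illustrative.
    import numpy as np
    import pywt

    def wcer_features(signal, wavelet="db4", level=4):
        # pywt.swt implements the undecimated transform; the input length
        # must be divisible by 2**level, so truncate accordingly.
        n = (len(signal) // 2**level) * 2**level
        coeffs = pywt.swt(signal[:n], wavelet, level=level)  # list of (cA, cD) pairs
        energies = np.array([np.sum(cD**2) for _, cD in coeffs])
        return energies / energies.sum()  # energy ratio per decomposition layer

    # Acoustic and seismic feature vectors are concatenated into the mixed feature.
    rng = np.random.default_rng(1)
    acoustic, seismic = rng.normal(size=1024), rng.normal(size=1024)
    mixed = np.concatenate([wcer_features(acoustic), wcer_features(seismic)])
    print(mixed.round(3))
    ```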

  18. Spatial analysis of geologic and hydrologic features relating to sinkhole occurrence in Jefferson County, West Virginia

    USGS Publications Warehouse

    Doctor, Daniel H.; Doctor, Katarina Z.

    2012-01-01

    In this study the influence of geologic features related to sinkhole susceptibility was analyzed and the results were mapped for the region of Jefferson County, West Virginia. A model of sinkhole density was constructed using Geographically Weighted Regression (GWR) that estimated the relations among discrete geologic or hydrologic features and sinkhole density at each sinkhole location. Nine conditioning factors on sinkhole occurrence were considered as independent variables: distance to faults, fold axes, fracture traces oriented along bedrock strike, fracture traces oriented across bedrock strike, ponds, streams, springs, quarries, and interpolated depth to groundwater. GWR model parameter estimates for each variable were evaluated for significance, and the results were mapped. The results provide visual insight into the influence of these variables on localized sinkhole density, and can be used to provide an objective means of weighting conditioning factors in models of sinkhole susceptibility or hazard risk.
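
    The sketch below shows what such a GWR fit could look like with the mgwr package (PySAL); the coordinates, predictors and response are synthetic stand-ins for the sinkhole data, so this is a shape-of-the-workflow sketch rather than a reproduction of the study.

    ```python
    # Hedged sketch of a GWR fit in the spirit of the study, using the mgwr
    # package; variable names and the synthetic data are hypothetical.
    import numpy as np
    from mgwr.gwr import GWR
    from mgwr.sel_bw import Sel_BW

    rng = np.random.default_rng(2)
    n = 200
    coords = rng.uniform(0, 10, size=(n, 2))    # sinkhole locations
    X = rng.uniform(0, 5, size=(n, 3))          # e.g. distances to faults, ponds, springs
    y = 1.0 - 0.2 * X[:, [0]] + rng.normal(0, 0.1, (n, 1))  # local sinkhole density

    bw = Sel_BW(coords, y, X).search()          # bandwidth selection
    results = GWR(coords, y, X, bw).fit()
    # Local parameter estimates (one row per location) can then be screened for
    # significance and mapped, as in the study.
    print(results.params.shape)
    ```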

  19. TOUGH-RBSN simulator for hydraulic fracture propagation within fractured media: Model validations against laboratory experiments

    NASA Astrophysics Data System (ADS)

    Kim, Kunhwi; Rutqvist, Jonny; Nakagawa, Seiji; Birkholzer, Jens

    2017-11-01

    This paper presents coupled hydro-mechanical modeling of hydraulic fracturing processes in complex fractured media using a discrete fracture network (DFN) approach. The individual physical processes in the fracture propagation are represented by separate program modules: the TOUGH2 code for multiphase flow and mass transport based on the finite volume approach; and the rigid-body-spring network (RBSN) model for mechanical and fracture-damage behavior, which are coupled with each other. Fractures are modeled as discrete features, of which the hydrological properties are evaluated from the fracture deformation and aperture change. The verification of the TOUGH-RBSN code is performed against a 2D analytical model for single hydraulic fracture propagation. Subsequently, modeling capabilities for hydraulic fracturing are demonstrated through simulations of laboratory experiments conducted on rock-analogue (soda-lime glass) samples containing a designed network of pre-existing fractures. Sensitivity analyses are also conducted by changing the modeling parameters, such as viscosity of injected fluid, strength of pre-existing fractures, and confining stress conditions. The hydraulic fracturing characteristics attributed to the modeling parameters are investigated through comparisons of the simulation results.

  20. Kernel Method Based Human Model for Enhancing Interactive Evolutionary Optimization

    PubMed Central

    Zhao, Qiangfu; Liu, Yong

    2015-01-01

    A fitness landscape describes the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not provide sufficient or accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is very difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is one of the simplest perceptual paradigms, the human model is established on this principle. In the feature space, we design a linear classifier as a human model to capture user preference knowledge that cannot be represented linearly in the original discrete search space. The human model established in this way predicts the potential perceptual knowledge of the human user. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluation results with a pseudo-IEC user show that our proposed model and method can enhance IEC search significantly. PMID:25879050

  1. Using a simulation assistant in modeling manufacturing systems

    NASA Technical Reports Server (NTRS)

    Schroer, Bernard J.; Tseng, Fan T.; Zhang, S. X.; Wolfsberger, John W.

    1988-01-01

    Numerous simulation languages exist for modeling discrete event processes, and many have been ported to microcomputers. Graphics and animation capabilities have been added to many of these languages to help users build models and evaluate simulation results. Even with these languages and added features, the user is still burdened with learning the simulation language. Furthermore, the time to construct and then validate the simulation model is always greater than originally anticipated. One approach to minimizing this time requirement is to use predefined macros that describe common processes or operations in a system. The development of a simulation assistant for modeling discrete event manufacturing processes is presented. A simulation assistant is defined as an interactive intelligent software tool that assists the modeler in writing a simulation program by translating the modeler's symbolic description of the problem and then automatically generating the corresponding simulation code. The simulation assistant is discussed, with emphasis on an overview of the assistant, its elements, and the five manufacturing simulation generators. A typical manufacturing system is modeled using the simulation assistant, and the advantages and disadvantages are discussed.

  2. Improving the Design and Implementation of In-Service Professional Development in Early Childhood Intervention

    ERIC Educational Resources Information Center

    Dunst, Carl J.

    2015-01-01

    A model for designing and implementing evidence-based in-service professional development in early childhood intervention as well as the key features of the model are described. The key features include professional development specialist (PDS) description and demonstration of an intervention practice, active and authentic job-embedded…

  3. Mesoscopic electrohydrodynamic simulations of binary colloidal suspensions.

    PubMed

    Rivas, Nicolas; Frijters, Stefan; Pagonabarraga, Ignacio; Harting, Jens

    2018-04-14

    A model is presented for the solution of electrokinetic phenomena of colloidal suspensions in fluid mixtures. We solve the discrete Boltzmann equation with a Bhatnagar-Gross-Krook collision operator using the lattice Boltzmann method to simulate binary fluid flows. Solvent-solvent and solvent-solute interactions are implemented using a pseudopotential model. The Nernst-Planck equation, describing the kinetics of dissolved ion species, is solved using a finite difference discretization based on the link-flux method. The colloids are resolved on the lattice and coupled to the hydrodynamics and electrokinetics through appropriate boundary conditions. We present the first full integration of these three elements. The model is validated by comparing with known analytic solutions of ionic distributions at fluid interfaces, dielectric droplet deformations, and the electrophoretic mobility of colloidal suspensions. Its possibilities are explored by considering various physical systems, such as breakup of charged and neutral droplets and colloidal dynamics at either planar or spherical fluid interfaces.

  4. Mesoscopic electrohydrodynamic simulations of binary colloidal suspensions

    NASA Astrophysics Data System (ADS)

    Rivas, Nicolas; Frijters, Stefan; Pagonabarraga, Ignacio; Harting, Jens

    2018-04-01

    A model is presented for the solution of electrokinetic phenomena of colloidal suspensions in fluid mixtures. We solve the discrete Boltzmann equation with a Bhatnagar-Gross-Krook collision operator using the lattice Boltzmann method to simulate binary fluid flows. Solvent-solvent and solvent-solute interactions are implemented using a pseudopotential model. The Nernst-Planck equation, describing the kinetics of dissolved ion species, is solved using a finite difference discretization based on the link-flux method. The colloids are resolved on the lattice and coupled to the hydrodynamics and electrokinetics through appropriate boundary conditions. We present the first full integration of these three elements. The model is validated by comparing with known analytic solutions of ionic distributions at fluid interfaces, dielectric droplet deformations, and the electrophoretic mobility of colloidal suspensions. Its possibilities are explored by considering various physical systems, such as breakup of charged and neutral droplets and colloidal dynamics at either planar or spherical fluid interfaces.

  5. Quality Improvement With Discrete Event Simulation: A Primer for Radiologists.

    PubMed

    Booker, Michael T; O'Connell, Ryan J; Desai, Bhushan; Duddalwar, Vinay A

    2016-04-01

    The application of simulation software in health care has transformed quality and process improvement. Specifically, software based on discrete-event simulation (DES) has shown the ability to improve radiology workflows and systems. Nevertheless, despite the successful application of DES in the medical literature, the power and value of simulation remains underutilized. For this reason, the basics of DES modeling are introduced, with specific attention to medical imaging. In an effort to provide readers with the tools necessary to begin their own DES analyses, the practical steps of choosing a software package and building a basic radiology model are discussed. In addition, three radiology system examples are presented, with accompanying DES models that assist in analysis and decision making. Through these simulations, we provide readers with an understanding of the theory, requirements, and benefits of implementing DES in their own radiology practices. Copyright © 2016 American College of Radiology. All rights reserved.
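
    For a flavor of what such a model looks like in practice, here is a minimal discrete-event sketch of a single-scanner queue written with the SimPy package; the arrival and scan-time parameters are invented for illustration.

    ```python
    # A minimal discrete-event model of a single MRI scanner queue, sketched with
    # SimPy; arrival and service parameters are illustrative only.
    import random
    import simpy

    RATE_ARRIVE, SCAN_MIN = 1 / 12.0, 10.0   # ~one patient every 12 min; >=10 min scans
    waits = []

    def patient(env, scanner):
        arrived = env.now
        with scanner.request() as req:       # join the queue for the scanner
            yield req
            waits.append(env.now - arrived)  # record time spent waiting
            yield env.timeout(random.uniform(SCAN_MIN, SCAN_MIN + 15))

    def arrivals(env, scanner):
        while True:
            yield env.timeout(random.expovariate(RATE_ARRIVE))
            env.process(patient(env, scanner))

    random.seed(0)
    env = simpy.Environment()
    scanner = simpy.Resource(env, capacity=1)
    env.process(arrivals(env, scanner))
    env.run(until=8 * 60)                    # one 8-hour day, in minutes
    print(f"patients: {len(waits)}, mean wait: {sum(waits)/len(waits):.1f} min")
    ```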

  6. Improved detection of congestive heart failure via probabilistic symbolic pattern recognition and heart rate variability metrics.

    PubMed

    Mahajan, Ruhi; Viangteeravat, Teeradache; Akbilgic, Oguz

    2017-12-01

    A timely diagnosis of congestive heart failure (CHF) is crucial to evade a life-threatening event. This paper presents a novel probabilistic symbol pattern recognition (PSPR) approach to detect CHF in subjects from their cardiac interbeat (R-R) intervals. PSPR discretizes each continuous R-R interval time series by mapping them onto an eight-symbol alphabet and then models the pattern transition behavior in the symbolic representation of the series. The PSPR-based analysis of the discretized series from 107 subjects (69 normal and 38 CHF subjects) yielded discernible features to distinguish normal subjects and subjects with CHF. In addition to PSPR features, we also extracted features using the time-domain heart rate variability measures such as average and standard deviation of R-R intervals. An ensemble of bagged decision trees was used to classify two groups resulting in a five-fold cross-validation accuracy, specificity, and sensitivity of 98.1%, 100%, and 94.7%, respectively. However, a 20% holdout validation yielded an accuracy, specificity, and sensitivity of 99.5%, 100%, and 98.57%, respectively. Results from this study suggest that features obtained with the combination of PSPR and long-term heart rate variability measures can be used in developing automated CHF diagnosis tools. Copyright © 2017 Elsevier B.V. All rights reserved.
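
    A minimal sketch of the symbolization idea follows, assuming quantile binning into an eight-symbol alphabet and using the symbol-transition probabilities as a feature vector; the R-R series is synthetic, and the binning rule is an assumption rather than necessarily the paper's.

    ```python
    # Sketch of PSPR-style features: map R-R intervals onto an eight-symbol
    # alphabet by quantile binning, then use transition probabilities as features.
    import numpy as np

    def pspr_features(rr, n_symbols=8):
        edges = np.quantile(rr, np.linspace(0, 1, n_symbols + 1)[1:-1])
        symbols = np.digitize(rr, edges)                 # symbol values in 0..7
        T = np.zeros((n_symbols, n_symbols))
        for a, b in zip(symbols[:-1], symbols[1:]):      # count pattern transitions
            T[a, b] += 1
        T /= max(T.sum(), 1)                             # joint transition probabilities
        return T.ravel()                                 # 64-dimensional feature vector

    rng = np.random.default_rng(3)
    rr = rng.normal(0.8, 0.05, 1000)                     # synthetic R-R series (seconds)
    features = pspr_features(rr)
    # Combined with mean/std of the R-R series, such vectors would feed an
    # ensemble of bagged decision trees (e.g. sklearn.ensemble.BaggingClassifier).
    print(features.shape)
    ```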

  7. Numerical Evaluation of the "Dual-Kernel Counter-flow" Matric Convolution Integral that Arises in Discrete/Continuous (D/C) Control Theory

    NASA Technical Reports Server (NTRS)

    Nixon, Douglas D.

    2009-01-01

    Discrete/Continuous (D/C) control theory is a new generalized theory of discrete-time control that expands the concept of conventional (exact) discrete-time control to create a framework for the design and implementation of discrete-time control systems that include a continuous-time command function generator, so that actuator commands need not be constant between control decisions but can be more generally defined and implemented as functions that vary with time across the sample period. Because the plant/control system construct contains two linear subsystems arranged in tandem, a novel dual-kernel counter-flow convolution integral appears in the formulation. As part of the D/C system design and implementation process, numerical evaluation of that integral over the sample period is required. Three fundamentally different evaluation methods and associated algorithms are derived for the constant-coefficient case. Numerical results are matched against three available examples that have closed-form solutions.

  8. Effects of small particle numbers on long-term behaviour in discrete biochemical systems

    PubMed Central

    Ibrahim, Bashar; Dittrich, Peter

    2014-01-01

    Motivation: The functioning of many biological processes depends on the appearance of only a small number of a single molecular species. Additionally, the observation of molecular crowding leads to the insight that even a high number of copies of species do not guarantee their interaction. How single particles contribute to stabilizing biological systems is not well understood yet. Hence, we aim at determining the influence of single molecules on the long-term behaviour of biological systems, i.e. whether they can reach a steady state. Results: We provide theoretical considerations and a tool to analyse Systems Biology Markup Language models for the possibility to stabilize because of the described effects. The theory is an extension of chemical organization theory, which we called discrete chemical organization theory. Furthermore we scanned the BioModels Database for the occurrence of discrete chemical organizations. To exemplify our method, we describe an application to the Template model of the mitotic spindle assembly checkpoint mechanism. Availability and implementation: http://www.biosys.uni-jena.de/Services.html. Contact: bashar.ibrahim@uni-jena.de or dittrich@minet.uni-jena.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25161236

  9. Lagrangian numerical techniques for modelling multicomponent flow in the presence of large viscosity contrasts: Markers-in-bulk versus Markers-in-chain

    NASA Astrophysics Data System (ADS)

    Mulyukova, Elvira; Dabrowski, Marcin; Steinberger, Bernhard

    2015-04-01

    Many problems in geodynamic applications may be described as viscous flow of chemically heterogeneous materials. Examples include subduction of compositionally stratified lithospheric plates, folding of rheologically layered rocks, and thermochemical convection of the Earth's mantle. The associated time scales are significantly shorter than that of chemical diffusion, which justifies the commonly featured phenomena in geodynamic flow models termed contact discontinuities. These are spatially sharp interfaces separating regions of different material properties. Numerical modelling of advection of fields with sharp interfaces is challenging. Typical errors include numerical diffusion, which arises due to the repeated action of numerical interpolation. Mathematically, a material field can be represented by discrete indicator functions, whose values are interpreted as logical statements (e.g. whether or not the location is occupied by a given material). Interpolation of a discrete function boils down to determining where in the intermediate node-positions one material ends, and the other begins. The numerical diffusion error thus manifests itself as an erroneous location of the material-interface. Lagrangian advection-schemes are known to be less prone to numerical diffusion errors, compared to their Eulerian counterparts. The tracer-ratio method, where Lagrangian markers are used to discretize the bulk of materials filling the entire domain, is a popular example of such methods. The Stokes equation in this case is solved on a separate, static grid, and in order to do it - material properties must be interpolated from the markers to the grid. This involves the difficulty related to interpolation of discrete fields. The material distribution, and thus material-properties like viscosity and density, seen by the grid is polluted by the interpolation error, which enters the solution of the momentum equation. Errors due to the uncertainty of interface-location can be avoided when using interface tracking methods for advection. Marker-chain method is one such approach, where rather than discretizing the volume of each material, only their interface is discretized by a connected set of markers. Together with the boundary of the domain, the marker-chain constitutes closed polygon-boundaries which enclose the regions spanned by each material. Communicating material properties to the static grid can be done by determining which polygon each grid-node (or integration point) falls into, eliminating the need for interpolation. In our chosen implementation, an efficient parallelized algorithm for the point-in-polygon location is used, so this part of the code takes up only a small fraction of the CPU-time spent on each time step, and allows for spatial resolution of the compositional field beyond that which is practical with markers-in-bulk methods. An additional advantage of using marker-chains for material advection is that it offers a possibility to use some of its markers, or even edges, to generate a FEM grid. One can tailor a grid for obtaining a Stokes solution with optimal accuracy, while controlling the quality and size of its elements. Where geometry of the interface allows - element-edges may be aligned with it, which is known to significantly improve the quality of Stokes solution, compared to when the interface cuts through the elements (Moresi et al., 1996; Deubelbeiss and Kaus, 2008). In more geometrically complex interface-regions, the grid may simply be refined to reduce the error. 
As materials get deformed in the course of a simulation, the interface may get stretched and entangled. Addition of new markers along the chain may be required in order to properly resolve the increasingly complicated geometry. Conversely, some markers may be removed from regions where they get clustered. Such resampling of the interface requires additional computational effort (although small compared to other parts of the code), and introduces an error in the interface location (similar to numerical diffusion). Our implementation of this procedure, which utilizes an auxiliary high-resolution structured grid, allows a high degree of control over the magnitude of this error, although it cannot eliminate it completely. We will present our chosen numerical implementation of the markers-in-bulk and markers-in-chain methods outlined above, together with simulation results of especially designed benchmarks that demonstrate the relative successes and limitations of these methods.
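
    Since the material lookup described above reduces to point-in-polygon queries against the marker chain, a standard (serial) ray-casting test conveys the idea; this is a generic textbook routine, not the authors' parallelized implementation.

    ```python
    # Standard ray-casting point-in-polygon test: count crossings of a
    # horizontal ray from the query point against the polygon's edges.
    def point_in_polygon(px, py, polygon):
        """polygon: list of (x, y) vertices; returns True if (px, py) is inside."""
        inside = False
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            # Does the ray from (px, py) cross edge (x1, y1)-(x2, y2)?
            if (y1 > py) != (y2 > py):
                x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                if px < x_cross:
                    inside = not inside
        return inside

    marker_chain = [(0, 0), (4, 0), (4, 3), (0, 3)]   # closed material boundary
    print(point_in_polygon(2, 1, marker_chain))        # True: node takes this material
    print(point_in_polygon(5, 1, marker_chain))        # False
    ```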

  10. Nonstationary Dynamics Data Analysis with Wavelet-SVD Filtering

    NASA Technical Reports Server (NTRS)

    Brenner, Marty; Groutage, Dale; Bessette, Denis (Technical Monitor)

    2001-01-01

    Nonstationary time-frequency analysis is used for identification and classification of aeroelastic and aeroservoelastic dynamics. Time-frequency multiscale wavelet processing generates discrete energy density distributions. The distributions are processed using the singular value decomposition (SVD). Discrete density functions derived from the SVD generate moments that detect the principal features in the data. The SVD standard basis vectors are applied and then compared with a transformed-SVD, or TSVD, which reduces the number of features into more compact energy density concentrations. Finally, from the feature extraction, wavelet-based modal parameter estimation is applied.
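
    A rough sketch of the pipeline's first two steps, assuming PyWavelets for the scalogram and NumPy for the SVD; the chirp signal, wavelet and scales are illustrative.

    ```python
    # Sketch of the wavelet-SVD idea: build a time-frequency energy distribution
    # with a continuous wavelet transform, then take its singular values as
    # compact features of the principal structure in the data.
    import numpy as np
    import pywt

    rng = np.random.default_rng(4)
    t = np.linspace(0, 1, 1024)
    # Toy nonstationary signal: a chirp whose frequency grows with time, plus noise.
    signal = np.sin(2 * np.pi * (10 + 40 * t) * t) + 0.2 * rng.normal(size=t.size)

    scales = np.arange(1, 65)
    coeffs, _ = pywt.cwt(signal, scales, "morl")   # (n_scales, n_samples) scalogram
    energy = np.abs(coeffs) ** 2                   # discrete energy density

    # Singular values summarize the principal features of the distribution.
    s = np.linalg.svd(energy, compute_uv=False)
    features = s[:5] / s.sum()                     # normalized leading singular values
    print(features.round(4))
    ```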

  11. Discrete Mathematics across the Curriculum, K-12. 1991 Yearbook.

    ERIC Educational Resources Information Center

    Kenney, Margaret J., Ed.; Hirsch, Christian R., Ed.

    This yearbook provides the mathematics education community with specific perceptions about discrete mathematics concerning its importance, its composition at various grade levels, and ideas about how to teach it. Many practical suggestions with respect to the implementation of a discrete mathematics school program are included. A unifying thread…

  12. Evaluation of the Utility of a Discrete-Trial Functional Analysis in Early Intervention Classrooms

    ERIC Educational Resources Information Center

    Kodak, Tiffany; Fisher, Wayne W.; Paden, Amber; Dickes, Nitasha

    2013-01-01

    We evaluated a discrete-trial functional analysis implemented by regular classroom staff in a classroom setting. The results suggest that the discrete-trial functional analysis identified a social function for each participant and may require fewer staff than standard functional analysis procedures.

  13. 78 FR 12459 - State Implementation Plans: Response to Petition for Rulemaking; Findings of Substantial...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-22

    [Fragmented table-of-contents excerpt; recoverable headings: Director's Discretion Exemptions; Substantial Inadequacy of Improper Enforcement Discretion Provisions; Enforcement Discretion Provisions; Adequacy of Affirmative Defense Provisions; Affirmative Defense…; Affected States in EPA Region V: 1. Illinois, 2. Indiana, 3. Michigan, 4. Minnesota, 5. Ohio; Affected States in EPA…]

  14. Research on Signature Verification Method Based on Discrete Fréchet Distance

    NASA Astrophysics Data System (ADS)

    Fang, J. L.; Wu, W.

    2018-05-01

    This paper proposes a multi-feature signature template based on the discrete Fréchet distance, which overcomes the limitation of traditional signature authentication that relies on a single signature feature. It addresses the heavy computational workload of global feature template extraction in online handwritten signature authentication and the problem of unreasonable signature feature selection. In the experiments, the false acceptance rate (FAR) and false rejection rate (FRR) are measured and the average equal error rate (AEER) is computed. The feasibility of the combined template scheme is verified by comparing the average equal error rates of the combined template and the original template.
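
    For reference, the discrete Fréchet distance itself is computed by the well-known Eiter-Mannila dynamic program; below is a plain implementation on two toy pen traces (the traces are invented).

    ```python
    # Discrete Frechet distance between two sampled curves via the
    # Eiter-Mannila dynamic program.
    import numpy as np

    def discrete_frechet(P, Q):
        P, Q = np.asarray(P, float), np.asarray(Q, float)
        n, m = len(P), len(Q)
        ca = np.zeros((n, m))
        for i in range(n):
            for j in range(m):
                d = np.linalg.norm(P[i] - Q[j])
                if i == 0 and j == 0:
                    ca[i, j] = d
                else:
                    prev = []
                    if i > 0:
                        prev.append(ca[i - 1, j])
                    if j > 0:
                        prev.append(ca[i, j - 1])
                    if i > 0 and j > 0:
                        prev.append(ca[i - 1, j - 1])
                    ca[i, j] = max(min(prev), d)  # shortest leash over all couplings
        return ca[-1, -1]

    ref = [(0, 0), (1, 1), (2, 1), (3, 0)]           # enrolled signature trace
    probe = [(0, 0.1), (1, 1.2), (2, 0.9), (3, 0)]   # questioned signature trace
    print(round(discrete_frechet(ref, probe), 3))
    ```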

  15. Taylor O(h³) Discretization of ZNN Models for Dynamic Equality-Constrained Quadratic Programming With Application to Manipulators.

    PubMed

    Liao, Bolin; Zhang, Yunong; Jin, Long

    2016-02-01

    In this paper, a new Taylor-type numerical differentiation formula is first presented to discretize the continuous-time Zhang neural network (ZNN) and obtain higher computational accuracy. Based on the Taylor-type formula, two Taylor-type discrete-time ZNN models (termed Taylor-type discrete-time ZNNK and Taylor-type discrete-time ZNNU models) are then proposed and discussed for performing online dynamic equality-constrained quadratic programming. For comparison, Euler-type discrete-time ZNN models (called Euler-type discrete-time ZNNK and Euler-type discrete-time ZNNU models) and Newton iteration, with interesting links being found, are also presented. It is proved herein that the steady-state residual errors of the proposed Taylor-type discrete-time ZNN models, Euler-type discrete-time ZNN models, and Newton iteration have the patterns of O(h³), O(h²), and O(h), respectively, with h denoting the sampling gap. Numerical experiments, including application examples, are carried out, and the results further substantiate the theoretical findings and the efficacy of the Taylor-type discrete-time ZNN models. Finally, comparisons with the Taylor-type discrete-time derivative model and other Lagrange-type discrete-time ZNN models for dynamic equality-constrained quadratic programming substantiate the superiority of the proposed Taylor-type discrete-time ZNN models once again.
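
    The order notation can be made concrete with a generic experiment: the snippet below compares a first-order backward (Euler-type) difference with a second-order backward formula derived from Taylor expansion. It illustrates the O(h) versus O(h²) behavior only and is not the paper's specific ZNN discretization formula.

    ```python
    # Generic order-of-accuracy check for one-sided (Taylor-derived) difference
    # formulas, showing why a higher-order rule beats the Euler rule as h shrinks.
    import numpy as np

    f, df = np.sin, np.cos
    t = 1.0
    for h in (0.1, 0.05, 0.025):
        euler = (f(t) - f(t - h)) / h                         # O(h) backward difference
        taylor2 = (3*f(t) - 4*f(t - h) + f(t - 2*h)) / (2*h)  # O(h^2) backward difference
        print(f"h={h:<6} Euler err={abs(euler - df(t)):.2e}  "
              f"2nd-order err={abs(taylor2 - df(t)):.2e}")
    # Halving h roughly halves the Euler error but quarters the second-order error.
    ```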

  16. Massively parallel algorithms for real-time wavefront control of a dense adaptive optics system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fijany, A.; Milman, M.; Redding, D.

    1994-12-31

    In this paper massively parallel algorithms and architectures for real-time wavefront control of a dense adaptive optics system (SELENE) are presented. The authors have already shown that the computation of a near-optimal control algorithm for SELENE can be reduced to the solution of a discrete Poisson equation on a regular domain. Although this represents an optimal computation, due to the large size of the system and the high sampling-rate requirement, the implementation of this control algorithm poses a computationally challenging problem, since it demands a sustained computational throughput of the order of 10 GFlops. They develop a novel algorithm, designated the Fast Invariant Imbedding algorithm, which offers a massive degree of parallelism with simple communication and synchronization requirements. Due to these features, this algorithm is significantly more efficient than other Fast Poisson Solvers for implementation on massively parallel architectures. The authors also discuss two massively parallel, algorithmically specialized architectures for low-cost and optimal implementation of the Fast Invariant Imbedding algorithm.
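
    For context, a classical Fast Poisson Solver of the kind the new algorithm is compared against can be sketched via the type-I discrete sine transform, which diagonalizes the Dirichlet Laplacian; the grid size and test problem below are illustrative.

    ```python
    # Sketch of a fast solver for the discrete Poisson equation on a square grid
    # with zero Dirichlet boundaries, via the type-I discrete sine transform.
    import numpy as np
    from scipy.fft import dstn, idstn

    n = 127                         # interior grid points per side
    h = 1.0 / (n + 1)
    x = np.arange(1, n + 1) * h
    X, Y = np.meshgrid(x, x, indexing="ij")

    # Manufactured problem: u = sin(pi x) sin(pi y), so laplacian(u) = -2 pi^2 u.
    u_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
    f = -2 * np.pi**2 * u_exact

    # DST-I diagonalizes the 1-D second-difference operator; its eigenvalues are
    # (2 cos(k pi / (n+1)) - 2) / h^2 for k = 1..n.
    k = np.arange(1, n + 1)
    lam = (2 * np.cos(np.pi * k / (n + 1)) - 2) / h**2
    f_hat = dstn(f, type=1)
    u_hat = f_hat / (lam[:, None] + lam[None, :])
    u = idstn(u_hat, type=1)

    print("max error:", np.max(np.abs(u - u_exact)))  # second-order accurate in h
    ```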

  17. Large-eddy simulation of turbulent cavitating flow in a micro channel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Egerer, Christian P., E-mail: christian.egerer@aer.mw.tum.de; Hickel, Stefan; Schmidt, Steffen J.

    2014-08-15

    Large-eddy simulations (LES) of cavitating flow of a Diesel-fuel-like fluid in a generic throttle geometry are presented. Two-phase regions are modeled by a parameter-free thermodynamic equilibrium mixture model, and compressibility of the liquid and the liquid-vapor mixture is taken into account. The Adaptive Local Deconvolution Method (ALDM), adapted for cavitating flows, is employed for discretizing the convective terms of the Navier-Stokes equations for the homogeneous mixture. ALDM is a finite-volume-based implicit LES approach that merges physically motivated turbulence modeling and numerical discretization. Validation of the numerical method is performed for a cavitating turbulent mixing layer. Comparisons with experimental data of the throttle flow at two different operating conditions are presented. The LES with the employed cavitation modeling predicts relevant flow and cavitation features accurately within the uncertainty range of the experiment. The turbulence structure of the flow is further analyzed with an emphasis on the interaction between cavitation and coherent motion, and on the statistically averaged flow evolution.

  18. Fast and Accurate Multivariate Gaussian Modeling of Protein Families: Predicting Residue Contacts and Protein-Interaction Partners

    PubMed Central

    Feinauer, Christoph; Procaccini, Andrea; Zecchina, Riccardo; Weigt, Martin; Pagnani, Andrea

    2014-01-01

    In the course of evolution, proteins show a remarkable conservation of their three-dimensional structure and their biological function, leading to strong evolutionary constraints on the sequence variability between homologous proteins. Our method aims at extracting such constraints from rapidly accumulating sequence data, and thereby at inferring protein structure and function from sequence information alone. Recently, global statistical inference methods (e.g. direct-coupling analysis, sparse inverse covariance estimation) have achieved a breakthrough towards this aim, and their predictions have been successfully implemented into tertiary and quaternary protein structure prediction methods. However, due to the discrete nature of the underlying variable (amino-acids), exact inference requires exponential time in the protein length, and efficient approximations are needed for practical applicability. Here we propose a very efficient multivariate Gaussian modeling approach as a variant of direct-coupling analysis: the discrete amino-acid variables are replaced by continuous Gaussian random variables. The resulting statistical inference problem is efficiently and exactly solvable. We show that the quality of inference is comparable or superior to the one achieved by mean-field approximations to inference with discrete variables, as done by direct-coupling analysis. This is true for (i) the prediction of residue-residue contacts in proteins, and (ii) the identification of protein-protein interaction partners in bacterial signal transduction. An implementation of our multivariate Gaussian approach is available at the website http://areeweb.polito.it/ricerca/cmp/code. PMID:24663061
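
    A heavily simplified NumPy sketch of the Gaussian modeling step follows, with a toy random alignment, a plain ridge regularizer standing in for proper pseudocounts and sequence reweighting, and Frobenius norms of precision-matrix blocks as pair scores.

    ```python
    # Minimal sketch of the multivariate Gaussian treatment of an alignment:
    # one-hot encode sequences, regularize the empirical covariance, and score
    # residue pairs by the Frobenius norm of the corresponding precision blocks.
    import numpy as np

    def gaussian_dca_scores(msa, q=21, reg=0.1):
        n_seq, L = msa.shape
        X = np.zeros((n_seq, L * q))
        X[np.arange(n_seq)[:, None], np.arange(L) * q + msa] = 1.0  # one-hot encoding
        C = np.cov(X, rowvar=False) + reg * np.eye(L * q)           # regularized covariance
        J = np.linalg.inv(C)                                        # Gaussian couplings
        scores = np.zeros((L, L))
        for i in range(L):
            for j in range(i + 1, L):
                block = J[i*q:(i+1)*q, j*q:(j+1)*q]
                scores[i, j] = scores[j, i] = np.linalg.norm(block)  # Frobenius norm
        return scores

    msa = np.random.default_rng(5).integers(0, 21, size=(200, 12))   # toy alignment
    print(gaussian_dca_scores(msa).shape)
    ```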

  19. Towards a new multiscale air quality transport model using the fully unstructured anisotropic adaptive mesh technology of Fluidity (version 4.1.9)

    NASA Astrophysics Data System (ADS)

    Zheng, J.; Zhu, J.; Wang, Z.; Fang, F.; Pain, C. C.; Xiang, J.

    2015-10-01

    An integrated method combining advanced anisotropic hr-adaptive mesh and discretization numerical techniques has been applied, for the first time, to the modelling of multiscale advection-diffusion problems, based on a discontinuous Galerkin/control volume discretization on unstructured meshes. Compared with existing air quality models, which are typically based on static structured grids with local nesting, the anisotropic hr-adaptive model has the ability to adapt the mesh according to the evolving pollutant distribution and flow features. That is, the mesh resolution can be adjusted dynamically to simulate the pollutant transport process accurately and effectively. To illustrate the capability of the anisotropic adaptive unstructured mesh model, three benchmark numerical experiments have been set up for two-dimensional (2-D) advection phenomena. Comparisons have been made between the results obtained using uniform-resolution meshes and anisotropic adaptive-resolution meshes. Performance achieved in 3-D simulation of power plant plumes indicates that this new adaptive multiscale model has the potential to provide accurate air quality modelling solutions effectively.

  20. Finite Element Aircraft Simulation of Turbulence

    NASA Technical Reports Server (NTRS)

    McFarland, R. E.

    1997-01-01

    A turbulence model has been developed for real-time aircraft simulation that accommodates stochastic turbulence and distributed discrete gusts as a function of the terrain. This model is applicable to conventional aircraft, V/STOL aircraft, and disc rotor model helicopter simulations. Vehicle angular activity in response to turbulence is computed from geometrical and temporal relationships rather than by using the conventional continuum approximations that assume uniform gust immersion and low frequency responses. By using techniques similar to those recently developed for blade-element rotor models, the angular-rate filters of conventional turbulence models are not required. The model produces rotational rates as well as air mass translational velocities in response to both stochastic and deterministic disturbances, where the discrete gusts and turbulence magnitudes may be correlated with significant terrain features or ship models. Assuming isotropy, a two-dimensional vertical turbulence field is created. A novel Gaussian interpolation technique is used to distribute vertical turbulence on the wing span or lateral rotor disc, and this distribution is used to compute roll responses. Air mass velocities are applied at significant centers of pressure in the computation of the aircraft's pitch and roll responses.

  1. Noise in Neuronal and Electronic Circuits: A General Modeling Framework and Non-Monte Carlo Simulation Techniques.

    PubMed

    Kilinc, Deniz; Demir, Alper

    2017-08-01

    The brain is extremely energy efficient and remarkably robust in what it does despite the considerable variability and noise caused by the stochastic mechanisms in neurons and synapses. Computational modeling is a powerful tool that can help us gain insight into this important aspect of brain mechanism. A deep understanding and computational design tools can help develop robust neuromorphic electronic circuits and hybrid neuroelectronic systems. In this paper, we present a general modeling framework for biological neuronal circuits that systematically captures the nonstationary stochastic behavior of ion channels and synaptic processes. In this framework, fine-grained, discrete-state, continuous-time Markov chain models of both ion channels and synaptic processes are treated in a unified manner. Our modeling framework features a mechanism for the automatic generation of the corresponding coarse-grained, continuous-state, continuous-time stochastic differential equation models for neuronal variability and noise. Furthermore, we repurpose non-Monte Carlo noise analysis techniques, which were previously developed for analog electronic circuits, for the stochastic characterization of neuronal circuits both in time and frequency domain. We verify that the fast non-Monte Carlo analysis methods produce results with the same accuracy as computationally expensive Monte Carlo simulations. We have implemented the proposed techniques in a prototype simulator, where both biological neuronal and analog electronic circuits can be simulated together in a coupled manner.

  2. Heat transfer analysis of a lab scale solar receiver using the discrete ordinates model

    NASA Astrophysics Data System (ADS)

    Dordevich, Milorad C. W.

    This thesis documents the development, implementation and simulation outcomes of the Discrete Ordinates Radiation Model in ANSYS FLUENT simulating the radiative heat transfer occurring in the San Diego State University lab-scale Small Particle Heat Exchange Receiver. In tandem, it also serves to document how well the Discrete Ordinates Radiation Model results compared with those from the in-house developed Monte Carlo Ray Trace Method in a number of simplified geometries. The secondary goal of this study was the inclusion of new physics, specifically buoyancy. Implementation of an additional Monte Carlo Ray Trace Method software package known as VEGAS, which was specifically developed to model lab scale solar simulators and provide directional, flux and beam spread information for the aperture boundary condition, was also a goal of this study. Upon establishment of the model, test cases were run to understand the predictive capabilities of the model. It was shown that agreement within 15% was obtained against laboratory measurements made in the San Diego State University Combustion and Solar Energy Laboratory with the metrics of comparison being the thermal efficiency and outlet, wall and aperture quartz temperatures. Parametric testing additionally showed that the thermal efficiency of the system was very dependent on the mass flow rate and particle loading. It was also shown that the orientation of the small particle heat exchange receiver was important in attaining optimal efficiency due to the fact that buoyancy induced effects could not be neglected. The analyses presented in this work were all performed on the lab-scale small particle heat exchange receiver. The lab-scale small particle heat exchange receiver is 0.38 m in diameter by 0.51 m tall and operated with an input irradiation flux of 3 kWth and a nominal mass flow rate of 2 g/s with a suspended particle mass loading of 2 g/m3. Finally, based on acumen gained during the implementation and development of the model, a new and improved design was simulated to predict how the efficiency within the small particle heat exchange receiver could be improved through a few simple internal geometry design modifications. It was shown that the theoretical calculated efficiency of the small particle heat exchange receiver could be improved from 64% to 87% with adjustments to the internal geometry, mass flow rate, and mass loading.

  3. Impact of a Teacher-as-Coach Model: Improving Paraprofessionals' Fidelity of Implementation of Discrete Trial Training for Students with Moderate-to-Severe Developmental Disabilities

    ERIC Educational Resources Information Center

    Mason, Rose A.; Schnitz, Alana G.; Wills, Howard P.; Rosenbloom, Raia; Kamps, Debra M.; Bast, Darcey

    2017-01-01

    Ensuring educational progress for students with moderate-to-severe developmental disabilities requires exposure to well executed evidence-based practices. This necessitates that the special education workforce, including paraprofessionals, be well-trained. Yet evidence regarding effective training mechanisms for paraprofessionals is limited. A…

  4. Efficacy of Individualized Clinical Coaching in a Virtual Reality Classroom for Increasing Teachers' Fidelity of Implementation of Discrete Trial Teaching

    ERIC Educational Resources Information Center

    Garland, Krista Vince; Vasquez, Eleazar, III; Pearl, Cynthia

    2012-01-01

    Discrete-trials teaching (DTT) is an evidence-based practice used in educational programs for children with autism spectrum disorders (ASD). Although there is strong demand for preparing teachers to effectively implement DTT, there is a scarcity of published research on such studies. A multiple baseline across participants design was utilized to…

  5. S3D: An interactive surface grid generation tool

    NASA Technical Reports Server (NTRS)

    Luh, Raymond Ching-Chung; Pierce, Lawrence E.; Yip, David

    1992-01-01

    S3D, an interactive software tool for surface grid generation, is described. S3D provides the means with which a geometry definition based either on a discretized curve set or a rectangular set can be quickly processed towards the generation of a surface grid for computational fluid dynamics (CFD) applications. This is made possible as a result of implementing commonly encountered surface gridding tasks in an environment with a highly efficient and user friendly graphical interface. Some of the more advanced features of S3D include surface-surface intersections, optimized surface domain decomposition and recomposition, and automated propagation of edge distributions to surrounding grids.

  6. Infrared and visual image fusion method based on discrete cosine transform and local spatial frequency in discrete stationary wavelet transform domain

    NASA Astrophysics Data System (ADS)

    Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Lee, Shin-Jye; He, Kangjian

    2018-01-01

    To improve the performance of infrared and visual image fusion and provide better visual effects, this paper proposes a hybrid fusion method for infrared and visual images combining the discrete stationary wavelet transform (DSWT), the discrete cosine transform (DCT) and local spatial frequency (LSF). The proposed method has three key processing steps. First, DSWT is employed to decompose the important features of the source image into a series of sub-images with different levels and spatial frequencies. Second, DCT is used to separate the significant details of the sub-images according to the energy of different frequencies. Third, LSF is applied to enhance the regional features of the DCT coefficients, which is helpful for image feature extraction. Several frequently used image fusion methods and evaluation metrics are employed to evaluate the validity of the proposed method. The experiments indicate that the proposed method achieves a good fusion effect and is more efficient than other conventional image fusion methods.
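
    A simplified sketch of the decomposition-and-fusion skeleton, assuming PyWavelets' swt2/iswt2 and substituting a crude local-energy selection rule for the paper's DCT plus LSF processing; the images and parameters are synthetic.

    ```python
    # Simplified DSWT-based fusion: decompose both source images with the
    # stationary wavelet transform, keep the detail coefficients with higher
    # local energy, and reconstruct. Stand-in for the full DCT + LSF rule.
    import numpy as np
    import pywt
    from scipy.ndimage import uniform_filter

    def fuse(ir, vis, wavelet="db2", level=2):
        cs_a, cs_b = pywt.swt2(ir, wavelet, level), pywt.swt2(vis, wavelet, level)
        fused = []
        for (aA, aD), (bA, bD) in zip(cs_a, cs_b):
            approx = (aA + bA) / 2.0                        # average approximations
            details = []
            for da, db in zip(aD, bD):
                ea = uniform_filter(da**2, 5)               # local energy maps
                eb = uniform_filter(db**2, 5)
                details.append(np.where(ea >= eb, da, db))  # pick locally stronger detail
            fused.append((approx, tuple(details)))
        return pywt.iswt2(fused, wavelet)

    rng = np.random.default_rng(6)
    ir, vis = rng.random((128, 128)), rng.random((128, 128))
    print(fuse(ir, vis).shape)
    ```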

  7. The human dynamic clamp as a paradigm for social interaction

    PubMed Central

    Dumas, Guillaume; de Guzman, Gonzalo C.; Tognoli, Emmanuelle; Kelso, J. A. Scott

    2014-01-01

    Social neuroscience has called for new experimental paradigms aimed toward real-time interactions. A distinctive feature of interactions is mutual information exchange: one member of a pair changes in response to the other while simultaneously producing actions that alter the other. Combining mathematical and neurophysiological methods, we introduce a paradigm called the human dynamic clamp (HDC), to directly manipulate the interaction or coupling between a human and a surrogate constructed to behave like a human. Inspired by the dynamic clamp used so productively in cellular neuroscience, the HDC allows a person to interact in real time with a virtual partner itself driven by well-established models of coordination dynamics. People coordinate hand movements with the visually observed movements of a virtual hand, the parameters of which depend on input from the subject’s own movements. We demonstrate that HDC can be extended to cover a broad repertoire of human behavior, including rhythmic and discrete movements, adaptation to changes of pacing, and behavioral skill learning as specified by a virtual “teacher.” We propose HDC as a general paradigm, best implemented when empirically verified theoretical or mathematical models have been developed in a particular scientific field. The HDC paradigm is powerful because it provides an opportunity to explore parameter ranges and perturbations that are not easily accessible in ordinary human interactions. The HDC not only enables testing the veracity of theoretical models, it also illuminates features that are not always apparent in real-time human social interactions and the brain correlates thereof. PMID:25114256
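
    The coordination dynamics driving the virtual partner trace back to the Haken-Kelso-Bunz (HKB) model; a minimal integration of its relative-phase equation shows the in-phase and anti-phase attractors the paradigm exploits (the parameter values below are illustrative).

    ```python
    # Minimal sketch of the HKB relative-phase dynamics:
    #   d(phi)/dt = delta_omega - a sin(phi) - 2 b sin(2 phi)
    import numpy as np

    def simulate_hkb(phi0, delta_omega=0.0, a=1.0, b=0.5, dt=0.01, steps=2000):
        phi = phi0
        for _ in range(steps):
            phi += dt * (delta_omega - a * np.sin(phi) - 2 * b * np.sin(2 * phi))
        return phi

    # From different initial phases the system settles near in-phase (0) or
    # anti-phase (pi) coordination, the two patterns HDC experiments probe.
    for phi0 in (0.3, 2.8):
        print(f"phi0={phi0} -> phi={simulate_hkb(phi0) % (2 * np.pi):.2f}")
    ```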

  8. Model for Simulating a Spiral Software-Development Process

    NASA Technical Reports Server (NTRS)

    Mizell, Carolyn; Curley, Charles; Nayak, Umanath

    2010-01-01

    A discrete-event simulation model, and a computer program that implements the model, have been developed as means of analyzing a spiral software-development process. This model can be tailored to specific development environments for use by software project managers in making quantitative cases for deciding among different software-development processes, courses of action, and cost estimates. A spiral process can be contrasted with a waterfall process, which is a traditional process that consists of a sequence of activities that include analysis of requirements, design, coding, testing, and support. A spiral process is an iterative process that can be regarded as a repeating modified waterfall process. Each iteration includes assessment of risk, analysis of requirements, design, coding, testing, delivery, and evaluation. A key difference between a spiral and a waterfall process is that a spiral process can accommodate changes in requirements at each iteration, whereas in a waterfall process, requirements are considered to be fixed from the beginning and, therefore, a waterfall process is not flexible enough for some projects, especially those in which requirements are not known at the beginning or may change during development. For a given project, a spiral process may cost more and take more time than does a waterfall process, but may better satisfy a customer's expectations and needs. Models for simulating various waterfall processes have been developed previously, but until now, there have been no models for simulating spiral processes. The present spiral-process-simulating model and the software that implements it were developed by extending a discrete-event simulation process model of the IEEE 12207 Software Development Process, which was built using commercially available software known as the Process Analysis Tradeoff Tool (PATT). Typical inputs to PATT models include industry-average values of product size (expressed as number of lines of code), productivity (number of lines of code per hour), and number of defects per source line of code. The user provides the number of resources, the overall percent of effort that should be allocated to each process step, and the number of desired staff members for each step. The output of PATT includes the size of the product, a measure of effort, a measure of rework effort, the duration of the entire process, and the numbers of injected, detected, and corrected defects as well as a number of other interesting features. In the development of the present model, steps were added to the IEEE 12207 waterfall process, and this model and its implementing software were made to run repeatedly through the sequence of steps, each repetition representing an iteration in a spiral process. Because the IEEE 12207 model is founded on a waterfall paradigm, it enables direct comparison of spiral and waterfall processes. The model can be used throughout a software-development project to analyze the project as more information becomes available. For instance, data from early iterations can be used as inputs to the model, and the model can be used to estimate the time and cost of carrying the project to completion.

  9. Implementation of neuromorphic systems: from discrete components to analog VLSI chips (testing and communication issues).

    PubMed

    Dante, V; Del Giudice, P; Mattia, M

    2001-01-01

    We review a series of implementations of electronic devices aiming at imitating to some extent structure and function of simple neural systems, with particular emphasis on communication issues. We first provide a short overview of general features of such "neuromorphic" devices and the implications of setting up "tests" for them. We then review the developments directly related to our work at the Istituto Superiore di Sanità (ISS): a pilot electronic neural network implementing a simple classifier, autonomously developing internal representations of incoming stimuli; an output network, collecting information from the previous classifier and extracting the relevant part to be forwarded to the observer; an analog, VLSI (very large scale integration) neural chip implementing a recurrent network of spiking neurons and plastic synapses, and the test setup for it; a board designed to interface the standard PCI (peripheral component interconnect) bus of a PC with a special purpose, asynchronous bus for communication among neuromorphic chips; a short and preliminary account of an application-oriented device, taking advantage of the above communication infrastructure.

  10. Spectral method for a kinetic swarming model

    DOE PAGES

    Gamba, Irene M.; Haack, Jeffrey R.; Motsch, Sebastien

    2015-04-28

    Here we present the first numerical method for a kinetic description of the Vicsek swarming model. The kinetic model poses a unique challenge, as there is a distribution dependent collision invariant to satisfy when computing the interaction term. We use a spectral representation linked with a discrete constrained optimization to compute these interactions. To test the numerical scheme we investigate the kinetic model at different scales and compare the solution with the microscopic and macroscopic descriptions of the Vicsek model. Lastly, we observe that the kinetic model captures key features such as vortex formation and traveling waves.
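
    For orientation, the microscopic Vicsek model that the kinetic equation coarse-grains can be stated in a few lines: each particle adopts the average heading of its neighbours plus noise. The sketch below uses illustrative parameters and a simplified noise convention.

    ```python
    # Microscopic Vicsek model: align each particle's heading with the mean
    # heading of its neighbours, perturbed by angular noise.
    import numpy as np

    def vicsek_step(pos, theta, L=10.0, r=1.0, v=0.3, eta=0.4, rng=None):
        rng = rng or np.random.default_rng()
        # Pairwise displacements with periodic boundaries.
        d = pos[:, None, :] - pos[None, :, :]
        d -= L * np.round(d / L)
        neigh = (d**2).sum(-1) < r**2
        # Mean neighbour direction (each particle counts as its own neighbour).
        mean_angle = np.arctan2(neigh @ np.sin(theta), neigh @ np.cos(theta))
        theta = mean_angle + eta * rng.uniform(-np.pi, np.pi, len(theta))
        pos = (pos + v * np.c_[np.cos(theta), np.sin(theta)]) % L
        return pos, theta

    rng = np.random.default_rng(7)
    pos = rng.uniform(0, 10, (300, 2))
    theta = rng.uniform(-np.pi, np.pi, 300)
    for _ in range(200):
        pos, theta = vicsek_step(pos, theta, rng=rng)
    # Global alignment (polar order) grows as flocks form.
    print("order parameter:", np.hypot(np.cos(theta).mean(), np.sin(theta).mean()))
    ```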

  11. Energy-modeled flight in a wind field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feldman, M.A.; Cliff, E.M.

    Optimal shaping of aerospace trajectories has provided the motivation for much modern study of optimization theory and algorithms. Current industrial practice favors approaches where the continuous-time optimal control problem is transcribed to a finite-dimensional nonlinear programming problem (NLP) by a discretization process. Two such formulations are implemented in the POST and the OTIS codes. In the present paper we use a discretization that is specially adapted to the flight problem of interest. Among the unique aspects of the present discretization are: a least-squares formulation for certain kinematic constraints; the use of energy ideas to enforce Newton's Laws; and the inclusion of large-magnitude horizontal winds. In the next section we provide a description of the flight problem and its NLP representation. Following this we provide some details of the constraint formulation. Finally, we present an overview of the NLP problem.

  12. Event-driven contrastive divergence for spiking neuromorphic systems.

    PubMed

    Neftci, Emre; Das, Srinjoy; Pedroni, Bruno; Kreutz-Delgado, Kenneth; Cauwenberghs, Gert

    2013-01-01

    Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetics which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train a RBM constructed with Integrate & Fire (I&F) neurons, that is constrained by the limitations of existing and near future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike Time Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality.

  13. Event-driven contrastive divergence for spiking neuromorphic systems

    PubMed Central

    Neftci, Emre; Das, Srinjoy; Pedroni, Bruno; Kreutz-Delgado, Kenneth; Cauwenberghs, Gert

    2014-01-01

    Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetics which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train a RBM constructed with Integrate & Fire (I&F) neurons, that is constrained by the limitations of existing and near future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike Time Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality. PMID:24574952
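
    As a baseline for what the event-driven variant replaces, here is the standard discrete CD-1 update for a Bernoulli RBM in NumPy; the sizes, learning rate and stand-in data are illustrative.

    ```python
    # Baseline discrete CD-1 update for a Bernoulli RBM: one Gibbs step between
    # the positive (data) and negative (reconstruction) phases.
    import numpy as np

    rng = np.random.default_rng(8)
    n_vis, n_hid, lr = 784, 64, 0.05
    W = 0.01 * rng.normal(size=(n_vis, n_hid))
    a, b = np.zeros(n_vis), np.zeros(n_hid)          # visible / hidden biases
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    def cd1_update(v0):
        global W, a, b
        ph0 = sigmoid(v0 @ W + b)                    # positive phase
        h0 = (rng.random(n_hid) < ph0).astype(float)
        pv1 = sigmoid(h0 @ W.T + a)                  # one Gibbs reconstruction step
        v1 = (rng.random(n_vis) < pv1).astype(float)
        ph1 = sigmoid(v1 @ W + b)                    # negative phase
        W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
        a += lr * (v0 - v1)
        b += lr * (ph0 - ph1)

    v = (rng.random(n_vis) < 0.2).astype(float)      # stand-in for a binarized digit
    for _ in range(10):
        cd1_update(v)
    print("weight norm after updates:", np.linalg.norm(W).round(3))
    ```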

  14. Identification of cascade water tanks using a PWARX model

    NASA Astrophysics Data System (ADS)

    Mattsson, Per; Zachariah, Dave; Stoica, Petre

    2018-06-01

    In this paper we consider the identification of a discrete-time nonlinear dynamical model for a cascade water tank process. The proposed method starts with a nominal linear dynamical model of the system, and proceeds to model its prediction errors using a model that is piecewise affine in the data. As data is observed, the nominal model is refined into a piecewise ARX model which can capture a wide range of nonlinearities, such as the saturation in the cascade tanks. The proposed method uses a likelihood-based methodology which adaptively penalizes model complexity and directly leads to a computationally efficient implementation.
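
    A sketch of the nominal linear step, fitting an ARX model by least squares on simulated input-output data; the model orders and toy system are assumptions for illustration, and the paper's refinement into a piecewise-affine PWARX model is not shown.

    ```python
    # Least-squares ARX fit: y[t] ~ sum_i a_i y[t-i] + sum_j b_j u[t-j].
    import numpy as np

    def fit_arx(u, y, na=2, nb=2):
        """Returns stacked coefficients [a_1..a_na, b_1..b_nb]."""
        N = len(y)
        rows = range(max(na, nb), N)
        Phi = np.array([np.r_[y[t-na:t][::-1], u[t-nb:t][::-1]] for t in rows])
        theta, *_ = np.linalg.lstsq(Phi, y[max(na, nb):], rcond=None)
        return theta

    rng = np.random.default_rng(9)
    u = rng.normal(size=500)
    y = np.zeros(500)
    for t in range(2, 500):          # simulate a toy stable second-order system
        y[t] = 1.5*y[t-1] - 0.7*y[t-2] + 0.5*u[t-1] + 0.1*u[t-2] + 0.01*rng.normal()
    print(fit_arx(u, y).round(3))    # approximately [1.5, -0.7, 0.5, 0.1]
    ```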

  15. Automated surgical skill assessment in RMIS training.

    PubMed

    Zia, Aneeq; Essa, Irfan

    2018-05-01

    Manual feedback in basic robot-assisted minimally invasive surgery (RMIS) training can consume a significant amount of time from expert surgeons' schedules and is prone to subjectivity. In this paper, we explore the usage of different holistic features for automated skill assessment using only robot kinematic data and propose a weighted feature fusion technique for improving score prediction performance. Moreover, we also propose a method for generating 'task highlights' which can give surgeons more directed feedback regarding which segments had the most effect on the final skill score. We perform our experiments on the publicly available JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS) and evaluate four different types of holistic features from robot kinematic data: sequential motion texture (SMT), discrete Fourier transform (DFT), discrete cosine transform (DCT) and approximate entropy (ApEn). The features are then used for skill classification and exact skill score prediction. Along with using these features individually, we also evaluate the performance using our proposed weighted combination technique. The task highlights are produced using DCT features. Our results demonstrate that these holistic features outperform all previous Hidden Markov Model (HMM)-based state-of-the-art methods for skill classification on the JIGSAWS dataset. Also, our proposed feature fusion strategy significantly improves performance for skill score predictions, achieving up to 0.61 average Spearman correlation coefficient. Moreover, we provide an analysis of how the proposed task highlights can relate to different surgical gestures within a task. Holistic features capturing global information from robot kinematic data can successfully be used for evaluating surgeon skill in basic surgical tasks on the da Vinci robot. Using the framework presented can potentially allow for real-time score feedback in RMIS training and help surgical trainees have more focused training.
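
    Of the four features, approximate entropy is the easiest to state compactly; below is the standard Pincus formulation (with self-matches included), using illustrative m and r values and synthetic series rather than JIGSAWS kinematics.

    ```python
    # Approximate entropy (ApEn): regularity of a series via the log-likelihood
    # that templates of length m that match within r also match at length m+1.
    import numpy as np

    def approximate_entropy(x, m=2, r=None):
        x = np.asarray(x, float)
        r = 0.2 * x.std() if r is None else r
        def phi(m):
            emb = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
            # Chebyshev distances between all template pairs (self-match included).
            dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
            C = (dist <= r).mean(axis=1)
            return np.log(C).mean()
        return phi(m) - phi(m + 1)

    rng = np.random.default_rng(10)
    print("regular:", approximate_entropy(np.sin(0.5 * np.arange(300))).round(3))
    print("random :", approximate_entropy(rng.normal(size=300)).round(3))
    ```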

  16. Comparative study of large scale simulation of underground explosions in alluvium and in fractured granite using stochastic characterization

    NASA Astrophysics Data System (ADS)

    Vorobiev, O.; Ezzedine, S. M.; Antoun, T.; Glenn, L.

    2014-12-01

    This work describes a methodology used for large-scale modeling of wave propagation from underground explosions conducted at the Nevada Test Site (NTS) in two different geological settings: a fractured granitic rock mass and an alluvium deposit. We show that the discrete nature of rock masses as well as the spatial variability of the fabric of alluvium is very important for understanding ground motions induced by underground explosions. In order to build a credible conceptual model of the subsurface, we integrated the geological, geomechanical and geophysical characterizations conducted during recent tests at the NTS as well as historical data from the characterization campaigns for the underground nuclear tests conducted at the NTS. Because detailed site characterization is limited, expensive and, in some instances, impossible, we have numerically investigated the effects of the characterization gaps on the overall response of the system. We performed several computational studies to identify the key geologic features specific to fractured media (mainly the joints) and those specific to porous alluvium (mainly the spatial variability of geological alluvium facies, characterized by their variances and integral scales). We have also explored key features common to both geological environments, such as saturation and topography, and assessed which characteristics most affect the ground motion in the near field and in the far field. Stochastic representations of these features based on the field characterizations have been implemented in the Geodyn and GeodynL hydrocodes. Both codes were used to guide site characterization efforts in order to provide the essential data to the modeling community. We validate our computational results by comparing the measured and computed ground motion at various ranges. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  17. Detecting opinion spams through supervised boosting approach.

    PubMed

    Hazim, Mohamad; Anuar, Nor Badrul; Ab Razak, Mohd Faizal; Abdullah, Nor Aniza

    2018-01-01

    Product reviews are individuals' opinions, judgements or beliefs about a certain product or service provided by certain companies. Such reviews serve as guides for these companies to plan and monitor their business ventures, whether by increasing productivity or by enhancing product/service quality. Product reviews can also increase business profits by convincing future customers about the products in which they have an interest. In a mobile application marketplace such as the Google Play Store, reviews and star ratings are used as indicators of application quality. Among these reviews, hereafter also known as opinions, spam also exists and disrupts the balance of online business. Previous studies used time series and neural network approaches (which require a lot of computational power) to detect these opinion spams. However, detection performance can be restricted in terms of accuracy because such approaches focus only on basic, discrete, document-level features, which capture few statistical relationships. Aiming to improve the detection of opinion spam in the mobile application marketplace, this study proposes using statistical features that are modelled through supervised boosting approaches such as the Extreme Gradient Boost (XGBoost) and the Generalized Boosted Regression Model (GBM) to evaluate two multilingual datasets (i.e. English and Malay). The evaluation found that XGBoost is most suitable for detecting opinion spam in the English dataset, while the GBM Gaussian is most suitable for the Malay dataset. The comparative analysis also indicates that the implementation of the proposed statistical features achieved a detection accuracy rate of 87.43 per cent on the English dataset and 86.13 per cent on the Malay dataset.
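
    A minimal sketch of the boosting stage is shown below, assuming the xgboost package and synthetic stand-ins for the statistical review features; the actual feature definitions and the GBM Gaussian variant used for the Malay dataset are not reproduced here.

        import numpy as np
        from xgboost import XGBClassifier          # pip install xgboost
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score

        # Hypothetical statistical features per review (e.g., rating deviation,
        # review-length z-score, reviewer burstiness); label 1 = spam.
        rng = np.random.default_rng(3)
        X = rng.normal(size=(2000, 8))
        y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                            eval_metric='logloss')
        clf.fit(X_tr, y_tr)
        print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))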

  18. Architecture for Cognitive Networking within NASA's Future Space Communications Infrastructure

    NASA Technical Reports Server (NTRS)

    Clark, Gilbert J., III; Eddy, Wesley M.; Johnson, Sandra K.; Barnes, James; Brooks, David

    2016-01-01

    Future space mission concepts and designs pose many networking challenges for command, telemetry, and science data applications with diverse end-to-end data delivery needs. For future end-to-end architecture designs, a key challenge is meeting expected application quality of service requirements for multiple simultaneous mission data flows with options to use diverse onboard local data buses, commercial ground networks, and multiple satellite relay constellations in LEO, MEO, GEO, or even deep space relay links. Effectively utilizing a complex network topology requires orchestration and direction that spans the many discrete, individually addressable computer systems, causing them to act in concert to achieve the overall network goals. The system must be intelligent enough to not only function under nominal conditions, but also adapt to unexpected situations and reorganize itself to perform roles not originally intended for the system or explicitly programmed. This paper describes architecture features of cognitive networking within the future NASA space communications infrastructure, interoperating with legacy systems and infrastructure in the interim. The paper begins by discussing the need for increased automation, including inter-system collaboration. This discussion motivates the features of an architecture including cognitive networking for future missions and relays, interoperating with both existing endpoint-based networking models and emerging information-centric models. From this basis, we discuss progress on a proof-of-concept implementation of this architecture as a cognitive networking on-orbit application on the SCaN Testbed attached to the International Space Station.

  19. Quantum Walk Schemes for Universal Quantum Computation

    NASA Astrophysics Data System (ADS)

    Underwood, Michael S.

    Random walks are a powerful tool for the efficient implementation of algorithms in classical computation. Their quantum-mechanical analogues, called quantum walks, hold similar promise. Quantum walks provide a model of quantum computation that has recently been shown to be equivalent in power to the standard circuit model. As in the classical case, quantum walks take place on graphs and can undergo discrete or continuous evolution, though quantum evolution is unitary and therefore deterministic until a measurement is made. This thesis considers the usefulness of continuous-time quantum walks to quantum computation from the perspectives of both their fundamental power under various formulations, and their applicability in practical experiments. In one extant scheme, logical gates are effected by scattering processes. The results of an exhaustive search for single-qubit operations in this model are presented. It is shown that the number of distinct operations increases exponentially with the number of vertices in the scattering graph. A catalogue of all graphs on up to nine vertices that implement single-qubit unitaries at a specific set of momenta is included in an appendix. I develop a novel scheme for universal quantum computation called the discontinuous quantum walk, in which a continuous-time quantum walker takes discrete steps of evolution via perfect quantum state transfer through small 'widget' graphs. The discontinuous quantum-walk scheme requires an exponentially sized graph, as do prior discrete and continuous schemes. To eliminate the inefficient vertex resource requirement, a computation scheme based on multiple discontinuous walkers is presented. In this model, n interacting walkers inhabiting a graph with 2n vertices can implement an arbitrary quantum computation on an input of length n, an exponential savings over previous universal quantum walk schemes. This is the first quantum walk scheme that allows for the application of quantum error correction. The many-particle quantum walk can be viewed as a single quantum walk undergoing perfect state transfer on a larger weighted graph, obtained via equitable partitioning. I extend this formalism to non-simple graphs. Examples of the application of equitable partitioning to the analysis of quantum walks and many-particle quantum systems are discussed.
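
    The basic object in the continuous-time schemes discussed here is the walk unitary exp(-iAt) generated by a graph's adjacency matrix. A minimal NumPy/SciPy sketch on a cycle graph is given below; the graph, evolution time, and initial vertex are arbitrary choices for illustration.

        import numpy as np
        from scipy.linalg import expm

        # Continuous-time quantum walk: |psi(t)> = exp(-iAt)|psi(0)>, with the
        # adjacency matrix A of a cycle graph as the walk Hamiltonian.
        n = 8
        A = np.zeros((n, n))
        for j in range(n):
            A[j, (j + 1) % n] = A[(j + 1) % n, j] = 1.0

        psi0 = np.zeros(n, dtype=complex)
        psi0[0] = 1.0                      # walker starts at vertex 0

        U = expm(-1j * A * 2.0)            # unitary evolution for time t = 2
        psi = U @ psi0
        print(np.round(np.abs(psi) ** 2, 3))   # site occupation probabilities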

  20. Analyses of Cometary Silicate Crystals: DDA Spectral Modeling of Forsterite

    NASA Technical Reports Server (NTRS)

    Wooden, Diane

    2012-01-01

    Comets are the Solar System's deep freezers of gases, ices, and particulates that were present in the outer protoplanetary disk. Where comet nuclei accreted was so cold that CO ice (approximately 50 K) and other supervolatile ices like ethane (C2H6) were preserved. However, comets also accreted high temperature minerals: silicate crystals that either condensed (greater than or equal to 1400 K) or that were annealed from amorphous (glassy) silicates (greater than 850-1000 K). Given their rarity in the interstellar medium, cometary crystalline silicates are thought to be grains that formed in the inner disk and were then radially transported out to the cold and ice-rich regimes near Neptune. The questions that comets can potentially address are: How fast, how far, and over what duration were crystals that formed in the inner disk transported out to the comet-forming region(s)? In comets, the mass fractions of silicates that are crystalline, f_cryst, translate to benchmarks for protoplanetary disk radial transport models. The famous comet Hale-Bopp has crystalline fractions of over 55%. The values for cometary crystalline mass fractions, however, are derived assuming that the mineralogy assessed for the submicron to micron-sized portion of the size distribution represents the compositional makeup of all larger grains in the coma. Models for fitting cometary SEDs make this assumption because models can only fit the observed features with submicron to micron-sized discrete crystals. On the other hand, larger (0.1-100 micrometer radii) porous grains composed of amorphous silicates and amorphous carbon can be easily computed with mixed medium theory, wherein vacuum mixed into a spherical particle mimics a porous aggregate. If crystalline silicates are mixed in, the models completely fail to match the observations. Moreover, models for a size distribution of discrete crystalline forsterite grains commonly employ the CDE computational method for ellipsoidal platelets (c:a:b = 8.14 x 8.14 x 1 in shape with geometrical factors of x:y:z = 1:1:10, Fabian et al. 2001; Harker et al. 2007). Alternatively, models for forsterite employ statistical methods like the Distribution of Hollow Spheres (Min et al. 2008; Oliveira et al. 2011), Gaussian Random Spheres (GRS), or RGF (Gielen et al. 2008). Pancakes, hollow spheres, or GRS shapes similar to the wheat sheaf crystal habit (e.g., Volten et al. 2001; Veihelmann et al. 2006), however, do not have the sharp edges, flat faces, and vertices seen in images of cometary crystals in interplanetary dust particles (IDPs) or in Stardust samples. Cometary forsterite crystals often have an equant or tabular crystal habit (J. Bradley). To simulate cometary crystals, we have computed absorption efficiencies of forsterite using the Discrete Dipole Approximation (DDA) DDSCAT code on NAS supercomputers. We compute thermal models that employ a size distribution of discrete irregularly shaped forsterite crystals (nonspherical shapes with faces and vertices) to explore how crystal shape affects the shape and wavelength positions of the forsterite spectral features, and to explore whether cometary crystal shapes support either condensation or annealing scenarios (Lindsay et al. 2012a, b). We find that the forsterite crystal shapes that best fit comet Hale-Bopp are tetrahedra, bricks, or brick platelets, i.e., essentially equant or tabular (Lindsay et al. 2012a,b), commensurate with high temperature condensation experiments (Kobatake et al. 2008).
We also have computed porous aggregates with crystal monomers and find that the crystal resonances are amplified; i.e., the crystalline fraction is lower in the aggregate than is derived by fitting a linear mix of spectral features from discrete subcomponents, and the crystal resonances 'appear' to be from larger crystals (Wooden et al. 2012). These results may indicate that the crystalline mass fraction in comets with comae dominated by aggregates may be lower than that deduced by popular methods that only employ ensembles of discrete crystals.

  1. Visual Attention Modeling for Stereoscopic Video: A Benchmark and Computational Model.

    PubMed

    Fang, Yuming; Zhang, Chi; Li, Jing; Lei, Jianjun; Perreira Da Silva, Matthieu; Le Callet, Patrick

    2017-10-01

    In this paper, we investigate the visual attention modeling for stereoscopic video from the following two aspects. First, we build one large-scale eye tracking database as the benchmark of visual attention modeling for stereoscopic video. The database includes 47 video sequences and their corresponding eye fixation data. Second, we propose a novel computational model of visual attention for stereoscopic video based on Gestalt theory. In the proposed model, we extract the low-level features, including luminance, color, texture, and depth, from discrete cosine transform coefficients, which are used to calculate feature contrast for the spatial saliency computation. The temporal saliency is calculated by the motion contrast from the planar and depth motion features in the stereoscopic video sequences. The final saliency is estimated by fusing the spatial and temporal saliency with uncertainty weighting, which is estimated by the laws of proximity, continuity, and common fate in Gestalt theory. Experimental results show that the proposed method outperforms the state-of-the-art stereoscopic video saliency detection models on our built large-scale eye tracking database and one other database (DML-ITRACK-3D).
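
    A heavily simplified sketch of the DCT-coefficient feature-contrast idea is given below for a single luminance channel: each block's saliency is its mean feature distance to all other blocks. The block size, coefficient truncation, and plain mean (rather than the paper's Gestalt-weighted fusion of spatial, temporal and depth terms) are simplifying assumptions.

        import numpy as np
        from scipy.fft import dctn

        def block_dct_feature(img, i, j, b=8):
            """Low-frequency DCT coefficients of one b x b block as its feature vector."""
            block = img[i:i + b, j:j + b]
            return dctn(block, norm='ortho')[:3, :3].ravel()

        def spatial_saliency(img, b=8):
            """Feature contrast: each block's saliency is its mean feature
            distance to all other blocks (luminance channel only here)."""
            h, w = img.shape
            feats = np.array([block_dct_feature(img, i, j, b)
                              for i in range(0, h - b + 1, b)
                              for j in range(0, w - b + 1, b)])
            d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
            return d.mean(axis=1).reshape((h // b, w // b))

        img = np.random.default_rng(4).random((64, 64))
        print(spatial_saliency(img).shape)   # (8, 8) block-level saliency map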

  2. Discrete Wavelet Transform-Based Whole-Spectral and Subspectral Analysis for Improved Brain Tumor Clustering Using Single Voxel MR Spectroscopy.

    PubMed

    Yang, Guang; Nawaz, Tahir; Barrick, Thomas R; Howe, Franklyn A; Slabaugh, Greg

    2015-12-01

    Many approaches have been considered for automatic grading of brain tumors by means of pattern recognition with magnetic resonance spectroscopy (MRS). Providing an improved technique which can assist clinicians in accurately identifying brain tumor grades is our main objective. The proposed technique, which is based on the discrete wavelet transform (DWT) of whole-spectral or subspectral information of key metabolites, combined with unsupervised learning, inspects the separability of the extracted wavelet features from the MRS signal to aid the clustering. In total, we included 134 short echo time single voxel MRS spectra (SV MRS) in our study that cover normal controls, low grade and high grade tumors. The combination of DWT-based whole-spectral or subspectral analysis and unsupervised clustering achieved an overall clustering accuracy of 94.8% and a balanced error rate of 7.8%. To the best of our knowledge, it is the first study using DWT combined with unsupervised learning to cluster brain SV MRS. Instead of dimensionality reduction on SV MRS or feature selection using model fitting, our study provides an alternative method of extracting features to obtain promising clustering results.
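
    The feature-extraction-plus-clustering pipeline can be sketched as follows, assuming the PyWavelets and scikit-learn packages and random stand-ins for the 134 spectra; the wavelet family, decomposition level, and cluster count are illustrative choices rather than the study's tuned settings.

        import numpy as np
        import pywt                                   # pip install PyWavelets
        from sklearn.cluster import KMeans

        def dwt_features(spectrum, wavelet='db4', level=4):
            """Concatenate DWT coefficients of a 1-D MRS spectrum into a feature vector."""
            coeffs = pywt.wavedec(spectrum, wavelet, level=level)
            return np.concatenate(coeffs)

        # Hypothetical stand-in for 134 single-voxel MRS spectra of length 512.
        rng = np.random.default_rng(5)
        spectra = rng.normal(size=(134, 512))
        X = np.array([dwt_features(s) for s in spectra])

        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
        print(np.bincount(labels))   # cluster sizes: normal / low grade / high grade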

  3. Analysis of water control in an underground mine under strong karst media influence (Vazante mine, Brazil)

    NASA Astrophysics Data System (ADS)

    Ninanya, Hugo; Guiguer, Nilson; Vargas, Eurípedes A.; Nascimento, Gustavo; Araujo, Edmar; Cazarin, Caroline L.

    2018-05-01

    This work presents analysis of groundwater flow conditions and groundwater control measures for Vazante underground mine located in the state of Minas Gerais, Brazil. According to field observations, groundwater flow processes in this mine are highly influenced by the presence of karst features located in the near-surface terrain next to Santa Catarina River. The karstic features, such as caves, sinkholes, dolines and conduits, have direct contact with the aquifer and tend to increase water flow into the mine. These effects are more acute in areas under the influence of groundwater-level drawdown by pumping. Numerical analyses of this condition were carried out using the computer program FEFLOW. This program represents karstic features as one-dimensional discrete flow conduits inside a three-dimensional finite element structure representing the geologic medium following a combined discrete-continuum approach for representing the karst system. These features create preferential flow paths between the river and mine; their incorporation into the model is able to more realistically represent the hydrogeological environment of the mine surroundings. In order to mitigate the water-inflow problems, impermeabilization of the river through construction of a reinforced concrete channel was incorporated in the developed hydrogeological model. Different scenarios for channelization lengths for the most critical zones along the river were studied. Obtained results were able to compare effectiveness of different river channelization scenarios. It was also possible to determine whether the use of these impermeabilization measures would be able to reduce, in large part, the elevated costs of pumping inside the mine.

  4. Optimization of an electrokinetic mixer for microfluidic applications.

    PubMed

    Bockelmann, Hendryk; Heuveline, Vincent; Barz, Dominik P J

    2012-06-01

    This work is concerned with the investigation of the concentration fields in an electrokinetic micromixer and its optimization in order to achieve high mixing rates. The mixing concept is based on the combination of an alternating electrical excitation applied to a pressure-driven base flow in a meandering microchannel geometry. The electrical excitation induces a secondary electrokinetic velocity component, which results in a complex flow field within the meander bends. A mathematical model describing the physicochemical phenomena present within the micromixer is implemented in an in-house finite-element-method code. We first perform simulations comparable to experiments concerned with the investigation of the flow field in the bends. The comparison of the complex flow topology found in simulation and experiment reveals excellent agreement. Hence, the validated model and numerical schemes are employed for a numerical optimization of the micromixer performance. In detail, we optimize the secondary electrokinetic flow by finding the best electrical excitation parameters, i.e., frequency and amplitude, for a given waveform. Two optimized electrical excitations featuring a discrete and a continuous waveform are discussed with respect to characteristic time scales of our mixing problem. The results demonstrate that the micromixer is able to achieve high mixing degrees very rapidly.

  5. Optimization of an electrokinetic mixer for microfluidic applications

    PubMed Central

    Bockelmann, Hendryk; Heuveline, Vincent; Barz, Dominik P. J.

    2012-01-01

    This work is concerned with the investigation of the concentration fields in an electrokinetic micromixer and its optimization in order to achieve high mixing rates. The mixing concept is based on the combination of an alternating electrical excitation applied to a pressure-driven base flow in a meandering microchannel geometry. The electrical excitation induces a secondary electrokinetic velocity component, which results in a complex flow field within the meander bends. A mathematical model describing the physicochemical phenomena present within the micromixer is implemented in an in-house finite-element-method code. We first perform simulations comparable to experiments concerned with the investigation of the flow field in the bends. The comparison of the complex flow topology found in simulation and experiment reveals excellent agreement. Hence, the validated model and numerical schemes are employed for a numerical optimization of the micromixer performance. In detail, we optimize the secondary electrokinetic flow by finding the best electrical excitation parameters, i.e., frequency and amplitude, for a given waveform. Two optimized electrical excitations featuring a discrete and a continuous waveform are discussed with respect to characteristic time scales of our mixing problem. The results demonstrate that the micromixer is able to achieve high mixing degrees very rapidly. PMID:22712034

  6. A Discrete Model for Color Naming

    NASA Astrophysics Data System (ADS)

    Menegaz, G.; Le Troter, A.; Sequeira, J.; Boi, J. M.

    2006-12-01

    The ability to associate labels to colors is very natural for human beings. However, this apparently simple task hides very complex and still unsolved problems, spanning many different disciplines ranging from neurophysiology to psychology and imaging. In this paper, we propose a discrete model for computational color categorization and naming. Starting from the 424 color specimens of the OSA-UCS set, we propose a fuzzy partitioning of the color space. Each of the 11 basic color categories identified by Berlin and Kay is modeled as a fuzzy set whose membership function is implicitly defined by fitting the model to the results of an ad hoc psychophysical experiment (Experiment 1). Each OSA-UCS sample is represented by a feature vector whose components are the memberships to the different categories. The discrete model consists of a three-dimensional Delaunay triangulation of the CIELAB color space which associates each OSA-UCS sample to a vertex of a 3D tetrahedron. Linear interpolation is used to estimate the membership values of any other point in the color space. Model validation is performed both directly, through the comparison of the predicted membership values to the subjective counterparts, as evaluated via another psychophysical test (Experiment 2), and indirectly, through the investigation of its exploitability for image segmentation. The model has proved to be successful in both cases, providing an estimation of the membership values in good agreement with the subjective measures as well as a semantically meaningful color-based segmentation map.
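
    The interpolation step of the discrete model can be sketched with SciPy's Delaunay triangulation: locate the tetrahedron containing a query color and blend the vertex memberships with barycentric weights. The CIELAB coordinates and membership vectors below are random stand-ins for the OSA-UCS data.

        import numpy as np
        from scipy.spatial import Delaunay

        # Hypothetical stand-ins for the OSA-UCS samples: CIELAB coordinates
        # plus membership vectors over the 11 basic color categories.
        rng = np.random.default_rng(6)
        lab_points = rng.uniform([0, -80, -80], [100, 80, 80], size=(424, 3))
        memberships = rng.dirichlet(np.ones(11), size=424)

        tri = Delaunay(lab_points)

        def interpolate_membership(lab):
            """Linear (barycentric) interpolation of category memberships inside
            the tetrahedron containing the query color."""
            s = int(tri.find_simplex(lab[None, :])[0])
            if s < 0:
                return None                     # outside the convex hull
            T = tri.transform[s]
            bary = T[:3] @ (lab - T[3])
            w = np.append(bary, 1.0 - bary.sum())   # 4 barycentric weights
            return w @ memberships[tri.simplices[s]]

        print(interpolate_membership(np.array([50.0, 10.0, -20.0])))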

  7. Implementing complex innovations: factors influencing middle manager support.

    PubMed

    Chuang, Emmeline; Jason, Kendra; Morgan, Jennifer Craft

    2011-01-01

    Middle manager resistance is often described as a major challenge for upper-level administrators seeking to implement complex innovations such as evidence-based protocols or new skills training. However, factors influencing middle manager support for innovation implementation are currently understudied in the U.S. health care literature. This article examined the factors that influence middle managers' support for and participation in the implementation of work-based learning, a complex innovation adopted by health care organizations to improve the jobs, educational pathways, skills, and/or credentials of their frontline workers. We conducted semistructured interviews and focus groups with 92 middle managers in 17 health care organizations. Questions focused on understanding middle managers' support for work-based learning as a complex innovation, facilitators and barriers to the implementation process, and the systems changes needed to support the implementation of this innovation. Factors that emerged as influential to middle manager support were similar to those found in broader models of innovation implementation within the health care literature. However, our findings extend previous research by developing an understanding about how middle managers perceived these constructs and by identifying specific strategies for how to influence middle manager support for the innovation implementation process. These findings were generally consistent across different types of health care organizations. Study findings suggest that middle manager support was highest when managers felt the innovation fit their workplace needs and priorities and when they had more discretion and control over how it was implemented. Leaders seeking to implement innovations should consider the interplay between middle managers' control and discretion, their narrow focus on the performance of their own departments or units, and the dedication of staff and other resources for empowering their managers to implement these complex innovations.

  8. Geometrical aspects of patient-specific modelling of the intervertebral disc: collagen fibre orientation and residual stress distribution.

    PubMed

    Marini, Giacomo; Studer, Harald; Huber, Gerd; Püschel, Klaus; Ferguson, Stephen J

    2016-06-01

    Patient-specific modelling of the spine is a powerful tool to explore the prevention and the treatment of injuries and pathologies. Although several methods have been proposed for the discretization of the bony structures, the efficient representation of the intervertebral disc anisotropy remains a challenge, especially with complex geometries. Furthermore, the swelling of the disc's nucleus pulposus is normally added to the model after geometry definition, at the cost of altered material properties and an unrealistic description of the prestressed state. The aim of this study was to develop techniques which preserve the patient-specific geometry of the disc and allow the representation of the system's anisotropy and residual stresses, independent of the system discretization. Depending on the modelling features, the developed approaches resulted in a response of patient-specific models that was in good agreement with the physiological response observed in corresponding experiments. The proposed methods represent a first step towards the development of patient-specific models of the disc which respect both the geometry and the mechanical properties of the specific disc.

  9. Rarefied gas flow simulations using high-order gas-kinetic unified algorithms for Boltzmann model equations

    NASA Astrophysics Data System (ADS)

    Li, Zhi-Hui; Peng, Ao-Ping; Zhang, Han-Xin; Yang, Jaw-Yen

    2015-04-01

    This article reviews rarefied gas flow computations based on nonlinear model Boltzmann equations using deterministic high-order gas-kinetic unified algorithms (GKUA) in phase space. The nonlinear Boltzmann model equations considered include the BGK model, the Shakhov model, the Ellipsoidal Statistical model and the Morse model. Several high-order gas-kinetic unified algorithms, which combine the discrete velocity ordinate method in velocity space and the compact high-order finite-difference schemes in physical space, are developed. The parallel strategies implemented with the accompanying algorithms are of equal importance. Accurate computations of rarefied gas flow problems using various kinetic models over wide ranges of Mach numbers 1.2-20 and Knudsen numbers 0.0001-5 are reported. The effects of different high resolution schemes on the flow resolution under the same discrete velocity ordinate method are studied. A conservative discrete velocity ordinate method to ensure the kinetic compatibility condition is also implemented. The present algorithms are tested for the one-dimensional unsteady shock-tube problems with various Knudsen numbers, the steady normal shock wave structures for different Mach numbers, the two-dimensional flows past a circular cylinder and a NACA 0012 airfoil to verify the present methodology and to simulate gas transport phenomena covering various flow regimes. Illustrations of large scale parallel computations of three-dimensional hypersonic rarefied flows over the reusable sphere-cone satellite and the re-entry spacecraft using some of the largest computer systems available in China are also reported. The present computed results are compared with the theoretical prediction from gas dynamics, related DSMC results, slip N-S solutions and experimental data, and good agreement is found. The numerical experience indicates that although the direct model Boltzmann equation solver in phase space can be computationally expensive, nevertheless, the present GKUAs for kinetic model Boltzmann equations in conjunction with currently available high-performance parallel computer power can provide a vital engineering tool for analyzing rarefied gas flows covering the whole range of flow regimes in aerospace engineering applications.

  10. Zero-inflated Conway-Maxwell Poisson Distribution to Analyze Discrete Data.

    PubMed

    Sim, Shin Zhu; Gupta, Ramesh C; Ong, Seng Huat

    2018-01-09

    In this paper, we study the zero-inflated Conway-Maxwell Poisson (ZICMP) distribution and develop a regression model. Score and likelihood ratio tests are also implemented for testing the inflation/deflation parameter. Simulation studies are carried out to examine the performance of these tests. A data example is presented to illustrate the concepts. In this example, the proposed model is compared to the well-known zero-inflated Poisson (ZIP) and the zero-inflated generalized Poisson (ZIGP) regression models. It is shown that the fit by ZICMP is comparable to or better than these models.
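
    For concreteness, the zero-inflated CMP probability mass function underlying the regression model can be written down directly: P(Y=0) = pi + (1 - pi) p(0) and P(Y=y) = (1 - pi) p(y) for y > 0, where p is the CMP pmf with rate lambda and dispersion nu. A minimal sketch with a truncated normalizing constant follows; the parameter values are arbitrary.

        import numpy as np
        from math import lgamma, exp, log

        def cmp_logpmf(y, lam, nu, jmax=200):
            """Conway-Maxwell Poisson log-pmf; the normalizer Z is truncated at jmax terms."""
            logterms = [j * log(lam) - nu * lgamma(j + 1) for j in range(jmax)]
            logZ = np.logaddexp.reduce(logterms)
            return y * log(lam) - nu * lgamma(y + 1) - logZ

        def zicmp_pmf(y, pi, lam, nu):
            """Zero-inflated CMP: an extra point mass pi at zero."""
            p = (1 - pi) * exp(cmp_logpmf(y, lam, nu))
            return pi + p if y == 0 else p

        # nu < 1 gives overdispersion relative to Poisson; nu = 1 recovers Poisson.
        print(zicmp_pmf(0, pi=0.2, lam=3.0, nu=0.8))
        print(sum(zicmp_pmf(y, 0.2, 3.0, 0.8) for y in range(50)))  # ~1.0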

  11. Design and implementation of a combined influenza immunization and tuberculosis screening campaign with simulation modelling.

    PubMed

    Heim, Joseph A; Huang, Hao; Zabinsky, Zelda B; Dickerson, Jane; Wellner, Monica; Astion, Michael; Cruz, Doris; Vincent, Jeanne; Jack, Rhona

    2015-08-01

    To design and implement a concurrent campaign of influenza immunization and tuberculosis (TB) screening for health care workers (HCWs) that can reduce the number of clinic visits for each HCW. A discrete-event simulation model was developed to support resource allocation decisions in the planning and operations phases. The campaign was compressed to 100 days in 2010 and further compressed to 75 days in 2012 and 2013. With more than 5000 HCW arrivals in 2011, 2012 and 2013, the 14-day goal for TB results was achieved in each year, and the turnaround was reduced to about 4 days in 2012 and 2013. Implementing a concurrent campaign reduces the number of clinic visits, and compressing the campaign length allows earlier immunization. Simulation modelling can provide useful evaluations of different configurations. © 2015 John Wiley & Sons, Ltd.
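
    A toy version of such a discrete-event model fits in a short script: an event queue of arrivals and service completions for a combined flu-shot/TB-test visit staffed by a fixed number of nurses. The arrival rate, service time, and staffing level below are invented for illustration, not the campaign's calibrated inputs.

        import heapq, random

        random.seed(0)
        NURSES, SERVICE_MEAN, ARRIVAL_MEAN, N = 3, 6.0, 2.5, 200

        # Pre-generate Poisson arrivals (times in minutes).
        t, events = 0.0, []
        for i in range(N):
            t += random.expovariate(1 / ARRIVAL_MEAN)
            heapq.heappush(events, (t, i, 'arrive'))

        free, queue, waits = NURSES, [], []
        while events:
            now, who, kind = heapq.heappop(events)
            if kind == 'arrive':
                queue.append((now, who))
            else:                                    # a combined visit finished
                free += 1
            while free and queue:                    # start service for waiting HCWs
                arrived, w = queue.pop(0)
                waits.append(now - arrived)
                free -= 1
                heapq.heappush(events,
                               (now + random.expovariate(1 / SERVICE_MEAN), w, 'done'))

        print(f"mean wait: {sum(waits)/len(waits):.1f} min over {len(waits)} HCWs")

    Varying NURSES or the campaign length in such a model is what lets planners compare configurations before committing resources.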

  12. An analysis of numerical convergence in discrete velocity gas dynamics for internal flows

    NASA Astrophysics Data System (ADS)

    Sekaran, Aarthi; Varghese, Philip; Goldstein, David

    2018-07-01

    The Discrete Velocity Method (DVM) for solving the Boltzmann equation has significant advantages in the modeling of non-equilibrium and near-equilibrium flows as compared to other methods in terms of reduced statistical noise, faster solutions and the ability to handle transient flows. However, the DVM's performance for rarefied flow in complex, small-scale geometries, in microelectromechanical (MEMS) devices for instance, has yet to be studied in detail. The present study focuses on the performance of the DVM for locally large Knudsen number flows of argon around sharp corners and other sources of discontinuities in the distribution function. Our analysis details the nature of the solution for some benchmark cases and introduces the concept of solution convergence for the transport terms in the discrete velocity Boltzmann equation. The limiting effects of the velocity space discretization are also investigated and the constraints on obtaining a robust, consistent solution are derived. We propose techniques to maintain solution convergence and demonstrate the implementation of a specific strategy and its effect on the fidelity of the solution for some benchmark cases.
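
    The core discrete-velocity idea, replacing the continuous velocity space by a fixed grid of ordinates on which moments are taken by quadrature, can be illustrated with a 0-D BGK relaxation sketch. The grid, time step, and relaxation time are arbitrary, and the spatial transport terms that the convergence analysis above actually concerns are omitted.

        import numpy as np

        # Minimal discrete-velocity BGK relaxation (0-D in space): the
        # distribution f(v) on a fixed velocity grid relaxes toward a Maxwellian.
        v = np.linspace(-6, 6, 64)                    # discrete velocity ordinates
        dv = v[1] - v[0]
        f = np.exp(-(v - 1.5) ** 2) + 0.5 * np.exp(-(v + 2) ** 2)  # non-equilibrium start

        tau, dt = 0.5, 0.05
        for _ in range(200):
            n = np.sum(f) * dv                        # moments by quadrature
            u = np.sum(f * v) * dv / n
            T = np.sum(f * (v - u) ** 2) * dv / n
            fM = n / np.sqrt(2 * np.pi * T) * np.exp(-(v - u) ** 2 / (2 * T))
            f += dt / tau * (fM - f)                  # BGK collision step

        # Density is conserved up to quadrature/truncation error on the grid.
        print(round(np.sum(f) * dv, 6), round(u, 3))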

  13. Adjoint Algorithm for CAD-Based Shape Optimization Using a Cartesian Method

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2004-01-01

    Adjoint solutions of the governing flow equations are becoming increasingly important for the development of efficient analysis and optimization algorithms. A well-known use of the adjoint method is gradient-based shape optimization. Given an objective function that defines some measure of performance, such as the lift and drag functionals, its gradient is computed at a cost that is essentially independent of the number of design variables (geometric parameters that control the shape). More recently, emerging adjoint applications focus on the analysis problem, where the adjoint solution is used to drive mesh adaptation, as well as to provide estimates of functional error bounds and corrections. The attractive feature of this approach is that the mesh-adaptation procedure targets a specific functional, thereby localizing the mesh refinement and reducing computational cost. Our focus is on the development of adjoint-based optimization techniques for a Cartesian method with embedded boundaries. In contrast to implementations on structured and unstructured grids, Cartesian methods decouple the surface discretization from the volume mesh. This feature makes Cartesian methods well suited for the automated analysis of complex geometry problems, and consequently a promising approach to aerodynamic optimization. Melvin et al. developed an adjoint formulation for the TRANAIR code, which is based on the full-potential equation with viscous corrections. More recently, Dadone and Grossman presented an adjoint formulation for the Euler equations. In both approaches, a boundary condition is introduced to approximate the effects of the evolving surface shape that results in accurate gradient computation. Central to automated shape optimization algorithms is the issue of geometry modeling and control. The need to optimize complex, "real-life" geometry provides a strong incentive for the use of parametric-CAD systems within the optimization procedure. In previous work, we presented an effective optimization framework that incorporates a direct-CAD interface. In this work, we enhance the capabilities of this framework with efficient gradient computations using the discrete adjoint method. We present details of the adjoint numerical implementation, which reuses the domain decomposition, multigrid, and time-marching schemes of the flow solver. Furthermore, we explain and demonstrate the use of CAD in conjunction with the Cartesian adjoint approach. The final paper will contain a number of complex geometry, industrially relevant examples with many design variables to demonstrate the effectiveness of the adjoint method on Cartesian meshes.
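
    The complex-variable approach mentioned here evaluates derivatives of the discrete residual without subtractive cancellation, since for a real-analytic f one has f'(x) ≈ Im f(x + ih) / h. A minimal sketch on a scalar stand-in function follows; the residual component is invented for illustration.

        import numpy as np

        def complex_step_derivative(f, x, h=1e-30):
            """Complex-step derivative: accurate to machine precision with no
            subtractive cancellation, which is why it suits adjoint linearization."""
            return np.imag(f(x + 1j * h)) / h

        f = lambda x: np.exp(x) * np.sin(x)          # stand-in for a residual component
        x0 = 1.3
        exact = np.exp(x0) * (np.sin(x0) + np.cos(x0))
        print(complex_step_derivative(f, x0), exact)   # agree to ~16 digits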

  14. Patient flow improvement for an ophthalmic specialist outpatient clinic with aid of discrete event simulation and design of experiment.

    PubMed

    Pan, Chong; Zhang, Dali; Kon, Audrey Wan Mei; Wai, Charity Sue Lea; Ang, Woo Boon

    2015-06-01

    Continuous improvement in process efficiency for specialist outpatient clinic (SOC) systems is increasingly being demanded due to the growth of the patient population in Singapore. In this paper, we propose a discrete event simulation (DES) model to represent the patient and information flow in an ophthalmic SOC system in the Singapore National Eye Centre (SNEC). Different improvement strategies to reduce the turnaround time for patients in the SOC were proposed and evaluated with the aid of the DES model and the Design of Experiment (DOE). Two strategies for better patient appointment scheduling and one strategy for dilation-free examination are estimated to have a significant impact on turnaround time for patients. One of the improvement strategies has been implemented in the actual SOC system in the SNEC with promising improvement reported.

  15. A Comparison of Staff Training Methods for Effective Implementation of Discrete Trial Teaching for Learners with Developmental Disabilities

    ERIC Educational Resources Information Center

    Geiger, Kaneen Barbara

    2012-01-01

    Discrete trial teaching is an effective procedure for teaching a variety of skills to children with autism. However, it must be implemented with high integrity to produce optimal learning. Behavioral Skills Training (BST) is a staff training procedure that has been demonstrated to be effective. However, BST is time and labor intensive, and with…

  16. Quantum mechanical/molecular mechanical/continuum style solvation model: linear response theory, variational treatment, and nuclear gradients.

    PubMed

    Li, Hui

    2009-11-14

    Linear response and variational treatment are formulated for Hartree-Fock (HF) and Kohn-Sham density functional theory (DFT) methods and combined discrete-continuum solvation models that incorporate self-consistently induced dipoles and charges. Due to the variational treatment, analytic nuclear gradients can be evaluated efficiently for these discrete and continuum solvation models. The forces and torques on the induced point dipoles and point charges can be evaluated using simple electrostatic formulas as for permanent point dipoles and point charges, in accordance with the electrostatic nature of these methods. Implementation and tests using the effective fragment potential (EFP, a polarizable force field) method and the conductorlike polarizable continuum model (CPCM) show that the nuclear gradients are as accurate as those in the gas phase HF and DFT methods. Using B3LYP/EFP/CPCM and time-dependent-B3LYP/EFP/CPCM methods, acetone S(0)-->S(1) excitation in aqueous solution is studied. The results are close to those from full B3LYP/CPCM calculations.

  17. Mesostructural investigation of micron-sized glass particles during shear deformation - An experimental approach vs. DEM simulation

    NASA Astrophysics Data System (ADS)

    Torbahn, Lutz; Weuster, Alexander; Handl, Lisa; Schmidt, Volker; Kwade, Arno; Wolf, Dietrich E.

    2017-06-01

    The interdependency of structure and mechanical features of a cohesive powder packing is a current scientific focus and far from being well understood. Although the Discrete Element Method provides a well-applicable and widely used tool to model powder behavior, the non-trivial contact mechanics of micron-sized particles demand a sophisticated contact model. Here, a direct comparison between experiment and simulation on the particle level offers a proper approach for model validation. However, simulating a full-scale shear-tester experiment with micron-sized particles, and hence validating such a simulation, remains a challenge. We address this task by downscaling the experimental setup: a fully functional micro shear-tester was developed and implemented into an X-ray tomography device in order to visualize the sample on the bulk and particle level within small bulk volumes of the order of a few microliters under well-defined consolidation. Using spherical micron-sized particles (30 μm), shear tests with a particle number accessible to simulations can be performed. Moreover, particle-level analysis allows for a direct comparison of experimental and numerical results, e.g., regarding structural evolution. In this talk, we focus on density inhomogeneity and shear-induced heterogeneity during compaction and shear deformation.

  18. Verification of a non-hydrostatic dynamical core using horizontally spectral element vertically finite difference method: 2-D aspects

    NASA Astrophysics Data System (ADS)

    Choi, S.-J.; Giraldo, F. X.; Kim, J.; Shin, S.

    2014-06-01

    The non-hydrostatic (NH) compressible Euler equations of dry atmosphere are solved in a simplified two dimensional (2-D) slice framework employing a spectral element method (SEM) for the horizontal discretization and a finite difference method (FDM) for the vertical discretization. The SEM uses high-order nodal basis functions associated with Lagrange polynomials based on Gauss-Lobatto-Legendre (GLL) quadrature points. The FDM employs a third-order upwind biased scheme for the vertical flux terms and a centered finite difference scheme for the vertical derivative terms and quadrature. The Euler equations used here are in a flux form based on the hydrostatic pressure vertical coordinate, which are the same as those used in the Weather Research and Forecasting (WRF) model, but a hybrid sigma-pressure vertical coordinate is implemented in this model. We verified the model by conducting widely used standard benchmark tests: the inertia-gravity wave, rising thermal bubble, density current wave, and linear hydrostatic mountain wave. The results from those tests demonstrate that the horizontally spectral element vertically finite difference model is accurate and robust. By using the 2-D slice model, we effectively show that the combined spatial discretization method of the spectral element and finite difference method in the horizontal and vertical directions, respectively, offers a viable method for the development of a NH dynamical core.

  19. Convergence of Asymptotic Systems of Non-autonomous Neural Network Models with Infinite Distributed Delays

    NASA Astrophysics Data System (ADS)

    Oliveira, José J.

    2017-10-01

    In this paper, we investigate the global convergence of solutions of non-autonomous Hopfield neural network models with discrete time-varying delays, infinite distributed delays, and possible unbounded coefficient functions. Instead of using Lyapunov functionals, we explore intrinsic features between the non-autonomous systems and their asymptotic systems to ensure the boundedness and global convergence of the solutions of the studied models. Our results are new and complement known results in the literature. The theoretical analysis is illustrated with some examples and numerical simulations.

  20. A discrete mathematical model of the dynamic evolution of a transportation network

    NASA Astrophysics Data System (ADS)

    Malinetskii, G. G.; Stepantsov, M. E.

    2009-09-01

    A dynamic model of the evolution of a transportation network is proposed. The main feature of this model is that the evolution of the transportation network is not a process of centralized transportation optimization. Rather, its dynamic behavior is a result of the system self-organization that occurs in the course of the satisfaction of needs in goods transportation and the evolution of the infrastructure of the network nodes. Nonetheless, the possibility of soft control of the network evolution direction is taken into account.

  1. Box-Cox Mixed Logit Model for Travel Behavior Analysis

    NASA Astrophysics Data System (ADS)

    Orro, Alfonso; Novales, Margarita; Benitez, Francisco G.

    2010-09-01

    To represent the behavior of travelers deciding how to get to their destination, discrete choice models based on random utility theory have become one of the most widely used tools. The field in which these models were developed was halfway between econometrics and transport engineering, although the latter now constitutes one of their principal areas of application. In the transport field, they have mainly been applied to mode choice, but also to the selection of destination, route, and other important decisions such as vehicle ownership. In usual practice, the most frequently employed discrete choice models implement a fixed-coefficient utility function that is linear in the parameters. The principal aim of this paper is to present the viability of specifying utility functions with random coefficients that are nonlinear in the parameters, in applications of discrete choice models to transport. Nonlinear specifications in the parameters were present in discrete choice theory at its outset, although they have seldom been used in practice until recently. The specification of random coefficients, however, began with the probit and the hedonic models in the 1970s and, after a period of apparently little practical interest, has burgeoned into a field of intense activity in recent years with the new generation of mixed logit models. In this communication, we present a Box-Cox mixed logit model, original to the authors. It includes the estimation of the Box-Cox exponents in addition to the parameters of the random coefficient distribution. The probability of choosing an alternative is an integral, which we calculate by simulation. The model is estimated by maximizing the simulated log-likelihood of a sample of observed individual choices between alternatives. The differences between the predictions yielded by models that are inconsistent with real behavior have been studied with simulation experiments.
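
    A compact sketch of a simulated log-likelihood for a Box-Cox mixed logit is given below: a normally distributed coefficient is simulated by draws, the attribute enters through a Box-Cox transform whose exponent is estimated jointly, and the choice probability is the average of logit probabilities over draws. The data, starting values, and single-attribute utility are invented for illustration, not the authors' specification.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(7)

        def box_cox(x, lam):
            return np.log(x) if abs(lam) < 1e-8 else (x ** lam - 1) / lam

        # Hypothetical data: 300 binary mode choices with one positive
        # attribute (e.g., travel time) per alternative.
        x = rng.uniform(5, 60, size=(300, 2))
        choice = (rng.random(300) < 0.5).astype(int)      # placeholder choices
        draws = rng.standard_normal(200)                  # simulation draws for beta

        def neg_sim_loglik(theta):
            mu, sigma, lam = theta
            beta = mu + sigma * draws                     # random coefficient draws
            v = box_cox(x, lam)                           # (obs, alts) transformed attribute
            u = beta[:, None, None] * v[None, :, :]       # (draws, obs, alts) utilities
            u = u - u.max(axis=2, keepdims=True)          # numerical stabilization
            p = np.exp(u) / np.exp(u).sum(axis=2, keepdims=True)
            p_chosen = p[:, np.arange(300), choice].mean(axis=0)  # simulated probability
            return -np.sum(np.log(p_chosen + 1e-300))

        res = minimize(neg_sim_loglik, x0=[-0.1, 0.05, 0.5], method='Nelder-Mead')
        print(res.x)   # estimated (mu, sigma, lambda)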

  2. A crystal plasticity model for slip in hexagonal close packed metals based on discrete dislocation simulations

    NASA Astrophysics Data System (ADS)

    Messner, Mark C.; Rhee, Moono; Arsenlis, Athanasios; Barton, Nathan R.

    2017-06-01

    This work develops a method for calibrating a crystal plasticity model to the results of discrete dislocation (DD) simulations. The crystal model explicitly represents junction formation and annihilation mechanisms and applies these mechanisms to describe hardening in hexagonal close packed metals. The model treats these dislocation mechanisms separately from elastic interactions among populations of dislocations, which the model represents through a conventional strength-interaction matrix. This split between elastic interactions and junction formation mechanisms more accurately reproduces the DD data and results in a multi-scale model that better represents the lower scale physics. The fitting procedure employs concepts of machine learning—feature selection by regularized regression and cross-validation—to develop a robust, physically accurate crystal model. The work also presents a method for ensuring the final, calibrated crystal model respects the physical symmetries of the crystal system. Calibrating the crystal model requires fitting two linear operators: one describing elastic dislocation interactions and another describing junction formation and annihilation dislocation reactions. The structure of these operators in the final, calibrated model reflect the crystal symmetry and slip system geometry of the DD simulations.
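
    The regularized-regression-with-cross-validation step of the fitting procedure can be sketched with scikit-learn's LassoCV, using synthetic stand-ins for the candidate dislocation-reaction terms and the DD-derived responses; the real calibration also enforces crystal-symmetry constraints that this sketch omits.

        import numpy as np
        from sklearn.linear_model import LassoCV

        # Regularized regression with cross-validation, as used to select which
        # junction-formation terms the crystal model keeps.
        rng = np.random.default_rng(8)
        X = rng.normal(size=(400, 24))            # candidate dislocation-reaction terms
        true = np.zeros(24)
        true[[2, 7, 15]] = [1.5, -0.8, 0.6]       # only a few terms truly matter
        y = X @ true + 0.1 * rng.normal(size=400) # stand-in for DD hardening rates

        model = LassoCV(cv=5, random_state=0).fit(X, y)
        kept = np.flatnonzero(np.abs(model.coef_) > 1e-6)
        print("selected features:", kept, "alpha:", round(model.alpha_, 4))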

  3. Autonomous learning by simple dynamical systems with a discrete-time formulation

    NASA Astrophysics Data System (ADS)

    Bilen, Agustín M.; Kaluza, Pablo

    2017-05-01

    We present a discrete-time formulation for the autonomous learning conjecture. The main feature of this formulation is the possibility to apply the autonomous learning scheme to systems in which the errors with respect to target functions are not well-defined for all times. This restriction for the evaluation of functionality is a typical feature in systems that need a finite time interval to process a unit piece of information. We illustrate its application on an artificial neural network with feed-forward architecture for classification and a phase oscillator system with synchronization properties. The main characteristics of the discrete-time formulation are shown by constructing these systems with predefined functions.

  4. Quantum walks and wavepacket dynamics on a lattice with twisted photons.

    PubMed

    Cardano, Filippo; Massa, Francesco; Qassim, Hammam; Karimi, Ebrahim; Slussarenko, Sergei; Paparo, Domenico; de Lisio, Corrado; Sciarrino, Fabio; Santamato, Enrico; Boyd, Robert W; Marrucci, Lorenzo

    2015-03-01

    The "quantum walk" has emerged recently as a paradigmatic process for the dynamic simulation of complex quantum systems, entanglement production and quantum computation. Hitherto, photonic implementations of quantum walks have mainly been based on multipath interferometric schemes in real space. We report the experimental realization of a discrete quantum walk taking place in the orbital angular momentum space of light, both for a single photon and for two simultaneous photons. In contrast to previous implementations, the whole process develops in a single light beam, with no need of interferometers; it requires optical resources scaling linearly with the number of steps; and it allows flexible control of input and output superposition states. Exploiting the latter property, we explored the system band structure in momentum space and the associated spin-orbit topological features by simulating the quantum dynamics of Gaussian wavepackets. Our demonstration introduces a novel versatile photonic platform for quantum simulations.

  5. Quantum walks and wavepacket dynamics on a lattice with twisted photons

    PubMed Central

    Cardano, Filippo; Massa, Francesco; Qassim, Hammam; Karimi, Ebrahim; Slussarenko, Sergei; Paparo, Domenico; de Lisio, Corrado; Sciarrino, Fabio; Santamato, Enrico; Boyd, Robert W.; Marrucci, Lorenzo

    2015-01-01

    The “quantum walk” has emerged recently as a paradigmatic process for the dynamic simulation of complex quantum systems, entanglement production and quantum computation. Hitherto, photonic implementations of quantum walks have mainly been based on multipath interferometric schemes in real space. We report the experimental realization of a discrete quantum walk taking place in the orbital angular momentum space of light, both for a single photon and for two simultaneous photons. In contrast to previous implementations, the whole process develops in a single light beam, with no need of interferometers; it requires optical resources scaling linearly with the number of steps; and it allows flexible control of input and output superposition states. Exploiting the latter property, we explored the system band structure in momentum space and the associated spin-orbit topological features by simulating the quantum dynamics of Gaussian wavepackets. Our demonstration introduces a novel versatile photonic platform for quantum simulations. PMID:26601157

  6. 10 CFR 35.10 - Implementation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... radioactive material or discrete sources of radium-226 for which a specific medical use license is required by... accelerator-produced radioactive material or discrete sources of radium-226 for which a specific medical use...

  7. Integrative modeling and novel particle swarm-based optimal design of wind farms

    NASA Astrophysics Data System (ADS)

    Chowdhury, Souma

    To meet the energy needs of the future, while seeking to decrease our carbon footprint, a greater penetration of sustainable energy resources such as wind energy is necessary. However, a consistent growth of wind energy (especially in the wake of unfortunate policy changes and reported under-performance of existing projects) calls for a paradigm shift in wind power generation technologies. This dissertation develops a comprehensive methodology to explore, analyze and define the interactions between the key elements of wind farm development, and establish the foundation for designing high-performing wind farms. The primary contribution of this research is the effective quantification of the complex combined influence of wind turbine features, turbine placement, farm-land configuration, nameplate capacity, and wind resource variations on the energy output of the wind farm. A new Particle Swarm Optimization (PSO) algorithm, uniquely capable of preserving population diversity while addressing discrete variables, is also developed to provide powerful solutions towards optimizing wind farm configurations. In conventional wind farm design, the major elements that influence the farm performance are often addressed individually. The failure to fully capture the critical interactions among these factors introduces important inaccuracies in the projected farm performance and leads to suboptimal wind farm planning. In this dissertation, we develop the Unrestricted Wind Farm Layout Optimization (UWFLO) methodology to model and optimize the performance of wind farms. The UWFLO method obviates traditional assumptions regarding (i) turbine placement, (ii) turbine-wind flow interactions, (iii) variation of wind conditions, and (iv) types of turbines (single/multiple) to be installed. The allowance of multiple turbines, which demands complex modeling, is rare in the existing literature. The UWFLO method also significantly advances the state of the art in wind farm optimization by allowing simultaneous optimization of the type and the location of the turbines. Layout optimization (using UWFLO) of a hypothetical 25-turbine commercial-scale wind farm provides a remarkable 4.4% increase in capacity factor compared to a conventional array layout. A further 2% increase in capacity factor is accomplished when the types of turbines are also optimally selected. The scope of turbine selection and placement however depends on the land configuration and the nameplate capacity of the farm. Such dependencies are not clearly defined in the existing literature. We develop response surface-based models, which implicitly employ UWFLO, to quantify and analyze the roles of these other crucial design factors in optimal wind farm planning. The wind pattern at a site can vary significantly from year to year, which is not adequately captured by conventional wind distribution models. The resulting ill-predictability of the annual distribution of wind conditions introduces significant uncertainties in the estimated energy output of the wind farm. A new method is developed to characterize these wind resource uncertainties and model the propagation of these uncertainties into the estimated farm output. The overall wind pattern/regime also varies from one region to another, which demands turbines with capabilities uniquely suited for different wind regimes. 
Using the UWFLO method, we model the performance potential of currently available turbines for different wind regimes, and quantify their feature-based expected market suitability. Such models can initiate an understanding of the product variation that current turbine manufacturers should pursue, to adequately satisfy the needs of the naturally diverse wind energy market. The wind farm design problems formulated in this dissertation involve highly multimodal objective and constraint functions and a large number of continuous and discrete variables. An effective modification of the PSO algorithm is developed to address such challenging problems. Continuous search, as in conventional PSO, is implemented as the primary search strategy; discrete variables are then updated using a nearest-allowed-discrete-point criterion. Premature stagnation of particles due to loss of population diversity is one of the primary drawbacks of the basic PSO dynamics. A new measure of population diversity is formulated, which, unlike existing metrics, captures both the overall spread and the distribution of particles in the variable space. This diversity metric is then used to apply (i) an adaptive repulsion away from the best global solution in the case of continuous variables, and (ii) a stochastic update of the discrete variables. The new PSO algorithm provides competitive performance compared to a popular genetic algorithm, when applied to solve a comprehensive set of 98 mixed-integer nonlinear programming problems.
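
    The mixed-variable update described above, continuous PSO dynamics followed by a nearest-allowed-discrete-point projection, can be sketched as follows on a toy objective with one discrete dimension. The inertia and acceleration constants are generic textbook values, and the diversity-preserving repulsion and stochastic discrete update of the full algorithm are omitted.

        import numpy as np

        rng = np.random.default_rng(9)

        # Mixed-variable PSO sketch: continuous position updates, with a
        # designated discrete dimension snapped to the nearest allowed value.
        ALLOWED = np.array([0.0, 1.0, 2.0, 3.0])      # allowed values for dim 2
        D, SWARM, STEPS = 3, 20, 100

        def objective(p):                              # toy mixed-integer objective
            return (p[0] - 1.2) ** 2 + (p[1] + 0.7) ** 2 + (p[2] - 2.0) ** 2

        pos = rng.uniform(-3, 3, size=(SWARM, D))
        vel = np.zeros((SWARM, D))
        pos[:, 2] = ALLOWED[rng.integers(len(ALLOWED), size=SWARM)]
        pbest, pbest_f = pos.copy(), np.array([objective(p) for p in pos])
        gbest = pbest[pbest_f.argmin()].copy()

        for _ in range(STEPS):
            r1, r2 = rng.random((2, SWARM, D))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = pos + vel
            idx = np.abs(pos[:, 2:3] - ALLOWED[None, :]).argmin(axis=1)
            pos[:, 2] = ALLOWED[idx]                   # nearest-allowed-discrete-point
            f = np.array([objective(p) for p in pos])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = pos[improved], f[improved]
            gbest = pbest[pbest_f.argmin()].copy()

        print("best found:", gbest.round(3))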

  8. Parametric Deformation of Discrete Geometry for Aerodynamic Shape Design

    NASA Technical Reports Server (NTRS)

    Anderson, George R.; Aftosmis, Michael J.; Nemec, Marian

    2012-01-01

    We present a versatile discrete geometry manipulation platform for aerospace vehicle shape optimization. The platform is based on the geometry kernel of an open-source modeling tool called Blender and offers access to four parametric deformation techniques: lattice, cage-based, skeletal, and direct manipulation. Custom deformation methods are implemented as plugins, and the kernel is controlled through a scripting interface. Surface sensitivities are provided to support gradient-based optimization. The platform architecture allows the use of geometry pipelines, where multiple modelers are used in sequence, enabling manipulations that are difficult or impossible to achieve with a constructive modeler or deformer alone. We implement an intuitive custom deformation method in which a set of surface points serve as the design variables and user-specified constraints are intrinsically satisfied. We test our geometry platform on several design examples using an aerodynamic design framework based on Cartesian grids. We examine inverse airfoil design and shape matching and perform lift-constrained drag minimization on an airfoil with thickness constraints. A transport wing-fuselage integration problem demonstrates the approach in 3D. In a final example, our platform is pipelined with a constructive modeler to parabolically sweep a wingtip while applying a 1-G loading deformation across the wingspan. This work is an important first step towards the larger goal of leveraging the investment of the graphics industry to improve the state-of-the-art in aerospace geometry tools.

  9. Adjoint-Based Methodology for Time-Dependent Optimization

    NASA Technical Reports Server (NTRS)

    Yamaleev, N. K.; Diskin, B.; Nielsen, E. J.

    2008-01-01

    This paper presents a discrete adjoint method for a broad class of time-dependent optimization problems. The time-dependent adjoint equations are derived in terms of the discrete residual of an arbitrary finite volume scheme which approximates unsteady conservation law equations. Although only the 2-D unsteady Euler equations are considered in the present analysis, this time-dependent adjoint method is applicable to the 3-D unsteady Reynolds-averaged Navier-Stokes equations with minor modifications. The discrete adjoint operators involving the derivatives of the discrete residual and the cost functional with respect to the flow variables are computed using a complex-variable approach, which provides discrete consistency and drastically reduces the implementation and debugging cycle. The implementation of the time-dependent adjoint method is validated by comparing the sensitivity derivative with that obtained by forward mode differentiation. Our numerical results show that O(10) optimization iterations of the steepest descent method are needed to reduce the objective functional by 3-6 orders of magnitude for test problems considered.
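
    The complex-variable approach referred to is, in essence, the complex-step derivative approximation: a residual is evaluated at a complex-perturbed state and the derivative is read off the imaginary part, avoiding subtractive cancellation. A minimal sketch for a scalar function (the toy residual is an assumption for illustration):

        # Complex-step derivative: df/dx ~= Im(f(x + i*h)) / h, accurate even
        # for extremely small h because no subtraction of close numbers occurs.
        import numpy as np

        def complex_step_derivative(f, x, h=1e-30):
            return np.imag(f(x + 1j * h)) / h

        r = lambda x: x**3 * np.sin(x)                 # toy "residual"
        x0 = 1.3
        exact = 3 * x0**2 * np.sin(x0) + x0**3 * np.cos(x0)
        print(complex_step_derivative(r, x0), exact)   # agree to machine precision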

  10. Orienting in virtual environments: How are surface features and environmental geometry weighted in an orientation task?

    PubMed

    Kelly, Debbie M; Bischof, Walter F

    2008-10-01

    We investigated how human adults orient in enclosed virtual environments, when discrete landmark information is not available and participants have to rely on geometric and featural information on the environmental surfaces. In contrast to earlier studies, where, for women, the featural information from discrete landmarks overshadowed the encoding of the geometric information, Experiment 1 showed that when featural information is conjoined with the environmental surfaces, men and women encoded both types of information. Experiment 2 showed that, although both types of information are encoded, performance in locating a goal position is better if it is close to a geometrically or featurally distinct location. Furthermore, although features are relied upon more strongly than geometry, initial experience with an environment influences the relative weighting of featural and geometric cues. Taken together, these results show that human adults use a flexible strategy for encoding spatial information.

  11. Modeling a multivariable reactor and on-line model predictive control.

    PubMed

    Yu, D W; Yu, D L

    2005-10-01

    A nonlinear first principle model is developed for a laboratory-scaled multivariable chemical reactor rig in this paper and the on-line model predictive control (MPC) is implemented to the rig. The reactor has three variables-temperature, pH, and dissolved oxygen with nonlinear dynamics-and is therefore used as a pilot system for the biochemical industry. A nonlinear discrete-time model is derived for each of the three output variables and their model parameters are estimated from the real data using an adaptive optimization method. The developed model is used in a nonlinear MPC scheme. An accurate multistep-ahead prediction is obtained for MPC, where the extended Kalman filter is used to estimate system unknown states. The on-line control is implemented and a satisfactory tracking performance is achieved. The MPC is compared with three decentralized PID controllers and the advantage of the nonlinear MPC over the PID is clearly shown.

  12. Efficient modeling of vector hysteresis using a novel Hopfield neural network implementation of Stoner–Wohlfarth-like operators

    PubMed Central

    Adly, Amr A.; Abd-El-Hafiz, Salwa K.

    2012-01-01

    Incorporation of hysteresis models in electromagnetic analysis approaches is indispensable to accurate field computation in complex magnetic media. Throughout those computations, vector nature and computational efficiency of such models become especially crucial when sophisticated geometries requiring massive sub-region discretization are involved. Recently, an efficient vector Preisach-type hysteresis model constructed from only two scalar models having orthogonally coupled elementary operators has been proposed. This paper presents a novel Hopfield neural network approach for the implementation of Stoner–Wohlfarth-like operators that could lead to a significant enhancement in the computational efficiency of the aforementioned model. Advantages of this approach stem from the non-rectangular nature of these operators that substantially minimizes the number of operators needed to achieve an accurate vector hysteresis model. Details of the proposed approach, its identification and experimental testing are presented in the paper. PMID:25685446

  13. Prediction of Fracture Behavior in Rock and Rock-like Materials Using Discrete Element Models

    NASA Astrophysics Data System (ADS)

    Katsaga, T.; Young, P.

    2009-05-01

    The study of fracture initiation and propagation in heterogeneous materials such as rock and rock-like materials is of principal interest in the field of rock mechanics and rock engineering. It is crucial for failure prediction and safety assessment in civil and mining structures. Our work offers a practical approach to predicting fracture behaviour using discrete element models. In this approach, the microstructures of materials are represented through the combination of clusters of bonded particles with different inter-cluster particle and bond properties, and intra-cluster bond properties. The geometry of clusters is transferred from information available from thin sections, computed tomography (CT) images and other visual presentations of the modeled material using a customized AutoCAD built-in dialog-based Visual Basic Application. Exact microstructures of the tested sample, including fractures, faults, inclusions and void spaces, can be duplicated in the discrete element models. Although the microstructural fabrics of rocks and rock-like structures may have different scales, fracture formation and propagation through these materials are alike and follow similar mechanics. Synthetic material provides an excellent condition for validating the modelling approach, as fracture behaviours are known for the composite's well-defined properties. Calibration of the macro-properties of the matrix material and inclusions (aggregates) was followed by calibration of the overall mechanical material response by adjusting the interfacial properties. The discrete element model predicted fracture propagation features and paths similar to those of the real sample material. The paths of the fractures and the matrix-inclusion interaction were compared using computed tomography images. Initiation and fracture formation in the model and the real material were compared using Acoustic Emission data. Analysing the temporal and spatial evolution of AE events, collected during the sample testing, in relation to the CT images allows the precise reconstruction of the failure sequence. Our proposed modelling approach yields realistic fracture formation and growth predictions under different loading conditions.

  14. Automated Mounting Bias Calibration for Airborne LIDAR System

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Jiang, W.; Jiang, S.

    2012-07-01

    Mounting bias is the major error source in airborne LIDAR systems. In this paper, an automated calibration method for estimating LIDAR system mounting parameters is introduced. The LIDAR direct geo-referencing model is used to calculate systematic errors. Because LIDAR footprints are discretely sampled, truly corresponding laser points rarely exist across different strips, so the traditional corresponding-point methodology does not apply to LIDAR strip registration. We propose a Virtual Corresponding Point Model (VCPM) to resolve the correspondence problem among discrete laser points. Each VCPM contains a corresponding point and three real laser footprints, and two rules are defined to calculate tie point coordinates from the real laser footprints. The Scale Invariant Feature Transform (SIFT) is used to extract corresponding points in LIDAR strips, and the automatic workflow of LIDAR system calibration based on the VCPM is described in detail. Practical examples illustrate the feasibility and effectiveness of the proposed calibration method.

  15. Reply to Comment by Lu et al. on "An Efficient and Stable Hydrodynamic Model With Novel Source Term Discretization Schemes for Overland Flow and Flood Simulations"

    NASA Astrophysics Data System (ADS)

    Xia, Xilin; Liang, Qiuhua; Ming, Xiaodong; Hou, Jingming

    2018-01-01

    This document addresses the comments raised by Lu et al. (2017). Lu et al. (2017) proposed an alternative numerical treatment for implementing the fully implicit friction discretization in Xia et al. (2017). The method by Lu et al. (2017) is also effective, but not necessarily easier to implement or more efficient. The numerical wiggles observed by Lu et al. (2017) do not affect the overall solution accuracy of the surface reconstruction method (SRM). SRM introduces an antidiffusion effect, which may also lead to more accurate numerical predictions than hydrostatic reconstruction (HR) but may be the cause of the numerical wiggles. As suggested by Lu et al. (2017), HR may perform equally well if fine enough grids are used, which has been investigated and recognized in the literature. However, the use of refined meshes in simulations will inevitably increase computational cost and the grid sizes as suggested are too small for real-world applications.

  16. Progress with the COGENT Edge Kinetic Code: Implementing the Fokker-Planck Collision Operator

    DOE PAGES

    Dorf, M. A.; Cohen, R. H.; Dorr, M.; ...

    2014-06-20

    Here, COGENT is a continuum gyrokinetic code for edge plasma simulations being developed by the Edge Simulation Laboratory collaboration. The code is distinguished by application of a fourth-order finite-volume (conservative) discretization, and mapped multiblock grid technology to handle the geometric complexity of the tokamak edge. The distribution function F is discretized in v∥ – μ (parallel velocity – magnetic moment) velocity coordinates, and the code presently solves an axisymmetric full-f gyro-kinetic equation coupled to the long-wavelength limit of the gyro-Poisson equation. COGENT capabilities are extended by implementing the fully nonlinear Fokker-Planck operator to model Coulomb collisions in magnetized edge plasmas. The corresponding Rosenbluth potentials are computed by making use of a finite-difference scheme and multipole-expansion boundary conditions. Details of the numerical algorithms and results of the initial verification studies are discussed.

  17. Formation Flying Control Implementation in Highly Elliptical Orbits

    NASA Technical Reports Server (NTRS)

    Capo-Lugo, Pedro A.; Bainum, Peter M.

    2009-01-01

    The Tschauner-Hempel equations are widely used to correct the separation distance drifts between a pair of satellites within a constellation in highly elliptical orbits [1]. This set of equations was discretized in the true anomaly angle [1] for use in a digital steady-state hierarchical controller [2], which performed the drift correction between a pair of satellites within the constellation. The objective of discretization is to obtain a simple algorithm that can be implemented in the computer onboard the satellite. The main advantage of discrete systems is that the computational time can be reduced by selecting a suitable sampling interval. For this digital system, the amount of data depends on the sampling interval in the true anomaly angle [3]. The purpose of this paper is to implement the discrete Tschauner-Hempel equations and the steady-state hierarchical controller in the computer onboard the satellite. This set of equations is expressed in the true anomaly angle, for which a relation is formulated between the time and true anomaly domains.
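
    As a generic illustration of why discretization suits onboard implementation, the sketch below performs a zero-order-hold discretization of a linear system over a uniform step, yielding the simple recurrence x[k+1] = Ad x[k] + Bd u[k]; the paper instead samples in the true anomaly angle, which this toy example does not reproduce.

        # Zero-order-hold discretization via the augmented matrix exponential:
        # expm([[A, B], [0, 0]] * step) = [[Ad, Bd], [0, I]].
        import numpy as np
        from scipy.linalg import expm

        def discretize(A, B, step):
            n, m = B.shape
            M = np.zeros((n + m, n + m))
            M[:n, :n], M[:n, n:] = A, B
            Phi = expm(M * step)
            return Phi[:n, :n], Phi[:n, n:]

        # Double integrator as a stand-in for the relative-motion dynamics.
        A = np.array([[0.0, 1.0], [0.0, 0.0]])
        B = np.array([[0.0], [1.0]])
        Ad, Bd = discretize(A, B, step=0.1)
        print(Ad, Bd, sep="\n")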

  18. Novel Methods for Electromagnetic Simulation and Design

    DTIC Science & Technology

    2016-08-03

    ... discretization methods ("... expansion") which are high-order, efficient and easy to use on arbitrarily triangulated surfaces. The resulting discretized integral equations are compatible with fast multipole-accelerated solvers and will form the basis for high-fidelity ... created a user interface compatible with both low- and high-order discretizations, and implemented the generalized Debye approach of [4].

  19. 76 FR 44271 - Approval and Promulgation of Implementation Plans; Texas; Revisions to Permits by Rule and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-25

    ..., 1998, revision creates new section 116.116(f) allowing for the use of Discrete Emission Reduction... allows the use of Discrete Emission Reduction Credits (DERCs) to be used to exceed permit allowables and... credits (called discrete emission reduction credits, or DERCs, in the Texas program) by reducing its...

  20. 76 FR 67600 - Approval and Promulgation of Implementation Plans; Texas; Regulations for Control of Air...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-02

    ... July 22, 1998, revision allows for the use of Discrete Emission Reduction Credits (DERC) to exceed... creates a new section at 116.116(f) that allows Discrete Emission Reduction Credits (DERCs) to be used to...-term emission credits (called discrete emission reduction credits, or DERCs, in the Texas program) by...

  1. Evaluation of the utility of a discrete-trial functional analysis in early intervention classrooms.

    PubMed

    Kodak, Tiffany; Fisher, Wayne W; Paden, Amber; Dickes, Nitasha

    2013-01-01

    We evaluated a discrete-trial functional analysis implemented by regular classroom staff in a classroom setting. The results suggest that the discrete-trial functional analysis identified a social function for each participant and may require fewer staff than standard functional analysis procedures. © Society for the Experimental Analysis of Behavior.

  2. Conservative discretization of the Landau collision integral

    DOE PAGES

    Hirvijoki, E.; Adams, M. F.

    2017-03-28

    Here we describe a density-, momentum-, and energy-conserving discretization of the nonlinear Landau collision integral. The method is suitable for both the finite-element and discontinuous Galerkin methods and does not require structured meshes. The conservation laws for the discretization are proven algebraically and demonstrated numerically for an axially symmetric nonlinear relaxation problem using a finite-element implementation.

  3. Multiscale Concrete Modeling of Aging Degradation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammi, Yousseff; Gullett, Philipp; Horstemeyer, Mark F.

    In this work a numerical finite element framework is implemented to enable the integration of coupled multiscale and multiphysics transport processes. A User Element subroutine (UEL) in Abaqus is used to simultaneously solve stress equilibrium, heat conduction, and multiple diffusion equations for 2D and 3D linear and quadratic elements. Transport processes in concrete structures and their degradation mechanisms are presented along with the discretization of the governing equations. The multiphysics modeling framework is theoretically extended to linear elastic fracture mechanics (LEFM) by introducing the eXtended Finite Element Method (XFEM), based on the XFEM user element implementation of Giner et al. [2009]. A damage model that takes into account the damage contribution from the different degradation mechanisms is theoretically developed. The total contribution of damage is forwarded to a Multi-Stage Fatigue (MSF) model to enable the assessment of the fatigue life and the deterioration of reinforced concrete structures in a nuclear power plant. Finally, two examples are presented to illustrate the developed multiphysics user element implementation and the XFEM implementation of Giner et al. [2009].

  4. From the Boltzmann to the Lattice-Boltzmann Equation:. Beyond BGK Collision Models

    NASA Astrophysics Data System (ADS)

    Philippi, Paulo Cesar; Hegele, Luiz Adolfo; Surmas, Rodrigo; Siebert, Diogo Nardelli; Dos Santos, Luís Orlando Emerich

    In this work, we present a derivation of the lattice-Boltzmann equation directly from the linearized Boltzmann equation, combining the following main features: multiple relaxation times and thermodynamic consistency in the description of non-isothermal compressible flows. The method presented here is based on the discretization of increasing-order kinetic models of the Boltzmann equation. Following a Gross-Jackson procedure, the linearized collision term is developed in Hermite polynomial tensors and the resulting infinite series is diagonalized after a chosen integer N, establishing the order of approximation of the collision term. The velocity space is discretized in accordance with a quadrature method based on prescribed abscissas (Philippi et al., Phys. Rev. E 73, 056702, 2006). The problem of describing the energy transfer is discussed in relation to the order of approximation of a two-relaxation-times lattice Boltzmann model. The velocity-step, temperature-step and shock tube problems are investigated, adopting lattices with 37, 53 and 81 velocities.

  5. Ab initio folding of proteins using all-atom discrete molecular dynamics

    PubMed Central

    Ding, Feng; Tsao, Douglas; Nie, Huifen; Dokholyan, Nikolay V.

    2008-01-01

    Discrete molecular dynamics (DMD) is a rapid sampling method used in protein folding and aggregation studies. Until now, DMD was used to perform simulations of simplified protein models in conjunction with structure-based force fields. Here, we develop an all-atom protein model and a transferable force field featuring packing, solvation, and environment-dependent hydrogen bond interactions. Using the replica exchange method, we perform folding simulations of six small proteins (20–60 residues) with distinct native structures. In all cases, native or near-native states are reached in simulations. For three small proteins, multiple folding transitions are observed and the computationally-characterized thermodynamics are in quantitative agreement with experiments. The predictive power of all-atom DMD highlights the importance of environment-dependent hydrogen bond interactions in modeling protein folding. The developed approach can be used for accurate and rapid sampling of conformational spaces of proteins and protein-protein complexes, and applied to protein engineering and design of protein-protein interactions. PMID:18611374

  6. Efficacy Evaluation of Different Wavelet Feature Extraction Methods on Brain MRI Tumor Detection

    NASA Astrophysics Data System (ADS)

    Nabizadeh, Nooshin; John, Nigel; Kubat, Miroslav

    2014-03-01

    Automated Magnetic Resonance Imaging brain tumor detection and segmentation is a challenging task. Among the available methods, feature-based methods are dominant. While many feature extraction techniques have been employed, it is still not clear which feature extraction method should be preferred. To help improve the situation, we present the results of a study in which we evaluate the efficiency of different wavelet transform feature extraction methods in brain MRI abnormality detection. Applied to T1-weighted brain images, Discrete Wavelet Transform (DWT), Discrete Wavelet Packet Transform (DWPT), Dual Tree Complex Wavelet Transform (DTCWT), and Complex Morlet Wavelet Transform (CMWT) methods are used to construct the feature pool. Three classifiers, Support Vector Machine (SVM), K-Nearest Neighbor, and a Sparse Representation-Based Classifier, are applied and compared for classifying the selected features. The results show that DTCWT and CMWT features classified with SVM result in the highest classification accuracy, demonstrating that wavelet transform features are informative in this application.
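
    As an illustration of the DWT branch of such a feature pool, the sketch below computes simple subband-energy features with PyWavelets; the wavelet, decomposition level, and feature definition are illustrative assumptions, and the study's full pipeline (DWPT, DTCWT, CMWT, classifier comparison) is considerably richer.

        # Subband mean-magnitude features from a 2-D discrete wavelet transform.
        import numpy as np
        import pywt

        image = np.random.rand(128, 128)         # stand-in for a T1-weighted slice
        coeffs = pywt.wavedec2(image, "db4", level=3)

        features = [np.mean(np.abs(coeffs[0]))]  # approximation subband
        for (cH, cV, cD) in coeffs[1:]:          # detail subbands per level
            features += [np.mean(np.abs(c)) for c in (cH, cV, cD)]
        print(len(features), features[:4])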

  7. Invariant object recognition based on the generalized discrete radon transform

    NASA Astrophysics Data System (ADS)

    Easley, Glenn R.; Colonna, Flavia

    2004-04-01

    We introduce a method for classifying objects based on special cases of the generalized discrete Radon transform. We adjust the transform and the corresponding ridgelet transform by means of circular shifting and a singular value decomposition (SVD) to obtain a translation-, rotation- and scaling-invariant set of feature vectors. We then use a back-propagation neural network to classify the input feature vectors. We conclude with experimental results and compare these with other invariant recognition methods.
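
    A hedged sketch of the underlying idea follows: a rotation of the object circularly shifts the angle axis of its Radon-domain signature, so taking the FFT magnitude along that axis removes the shift, and an SVD then compresses the signature into a compact feature vector. The paper's generalized discrete Radon and ridgelet constructions differ in detail.

        # Shift-invariant Radon-domain features compressed by an SVD.
        import numpy as np
        from skimage.transform import radon

        image = np.zeros((64, 64))
        image[20:40, 25:35] = 1.0                       # simple test object

        theta = np.linspace(0.0, 180.0, 90, endpoint=False)
        sinogram = radon(image, theta=theta)            # columns indexed by angle

        # FFT magnitude along the angle axis is invariant to circular shifts.
        signature = np.abs(np.fft.fft(sinogram, axis=1))

        s = np.linalg.svd(signature, compute_uv=False)  # compact descriptor
        print(s[:10] / s[0])                            # scale-normalized features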

  8. Microlens array processor with programmable weight mask and direct optical input

    NASA Astrophysics Data System (ADS)

    Schmid, Volker R.; Lueder, Ernst H.; Bader, Gerhard; Maier, Gert; Siegordner, Jochen

    1999-03-01

    We present an optical feature extraction system with a microlens array processor. The system is suitable for online implementation of a variety of transforms such as the Walsh transform and DCT. Operating with incoherent light, our processor accepts direct optical input. Employing a sandwich-like architecture, we obtain a very compact design of the optical system. The key elements of the microlens array processor are a square array of 15 × 15 spherical microlenses on an acrylic substrate and a spatial light modulator as a transmissive mask. The light distribution behind the mask is imaged onto the pixels of a customized a-Si image sensor with adjustable gain. We obtain one output sample for each microlens image and its corresponding weight mask area as summation of the transmitted intensity within one sensor pixel. The resulting architecture is very compact and robust like a conventional camera lens while incorporating a high degree of parallelism. We successfully demonstrate a Walsh transform into the spatial frequency domain as well as the implementation of a discrete cosine transform with digitized gray values. We provide results showing the transformation performance for both synthetic image patterns and images of natural texture samples. The extracted frequency features are suitable for neural classification of the input image. Other transforms and correlations can be implemented in real-time, allowing adaptive optical signal processing.

  9. PTBS segmentation scheme for synthetic aperture radar

    NASA Astrophysics Data System (ADS)

    Friedland, Noah S.; Rothwell, Brian J.

    1995-07-01

    The Image Understanding Group at Martin Marietta Technologies in Denver, Colorado, has developed a model-based synthetic aperture radar (SAR) automatic target recognition (ATR) system using an integrated resource architecture (IRA). IRA, an adaptive Markov random field (MRF) environment, utilizes information from image, model, and neighborhood resources to create a discrete, 2D feature-based world description (FBWD). The IRA FBWD features are peak, target, background and shadow (PTBS). These features have been shown to be very useful for target discrimination. The FBWD is used to accrue evidence over a model hypothesis set. This paper presents the PTBS segmentation process utilizing two IRA resources. The image resource (IR) provides generic (the physics of image formation) and specific (the given image input) information. The neighborhood resource (NR) provides domain knowledge of localized FBWD site behaviors. A simulated annealing optimization algorithm is used to construct a 'most likely' PTBS state. Results on simulated imagery illustrate the power of this technique to correctly segment PTBS features, even when vehicle signatures are immersed in heavy background clutter. These segmentations also suppress sidelobe effects and delineate shadows.

  10. Patient Preferences for Features of Health Care Delivery Systems: A Discrete Choice Experiment.

    PubMed

    Mühlbacher, Axel C; Bethge, Susanne; Reed, Shelby D; Schulman, Kevin A

    2016-04-01

    To estimate the relative importance of organizational-, procedural-, and interpersonal-level features of health care delivery systems from the patient perspective. We designed four discrete choice experiments (DCEs) to measure patient preferences for 21 health system attributes. Participants were recruited through the online patient portal of a large health system. We analyzed the DCE data using random effects logit models. DCEs were performed in which respondents were provided with descriptions of alternative scenarios and asked to indicate which scenario they prefer. Respondents were randomly assigned to one of the three possible health scenarios (current health, new lung cancer diagnosis, or diabetes) and asked to complete 15 choice tasks. Each choice task included an annual out-of-pocket cost attribute. A total of 3,900 respondents completed the survey. The out-of-pocket cost attribute was considered the most important across the four different DCEs. Following the cost attribute, trust and respect, multidisciplinary care, and shared decision making were judged as most important. The relative importance of out-of-pocket cost was consistently lower in the hypothetical context of a new lung cancer diagnosis compared with diabetes or the patient's current health. This study demonstrates the complexity of patient decision making processes regarding features of health care delivery systems. Our findings suggest the importance of these features may change as a function of an individual's medical conditions. © Health Research and Educational Trust.

  11. Linking Six Sigma to simulation: a new roadmap to improve the quality of patient care.

    PubMed

    Celano, Giovanni; Costa, Antonio; Fichera, Sergio; Tringali, Giuseppe

    2012-01-01

    Improving the quality of patient care is a challenge that calls for a multidisciplinary approach, embedding a broad spectrum of knowledge and involving healthcare professionals from diverse backgrounds. The purpose of this paper is to present an innovative approach that implements discrete-event simulation (DES) as a decision-supporting tool in the management of Six Sigma quality improvement projects. A roadmap is designed to assist quality practitioners and health care professionals in the design and successful implementation of simulation models within the define-measure-analyse-design-verify (DMADV) or define-measure-analyse-improve-control (DMAIC) Six Sigma procedures. A case regarding the reorganisation of the flow of emergency patients affected by vertigo symptoms was developed in a large town hospital as a preliminary test of the roadmap. The positive feedback from professionals carrying out the project looks promising and encourages further roadmap testing in other clinical settings. The roadmap is a structured procedure that people involved in quality improvement can implement to manage projects based on the analysis and comparison of alternative scenarios. The role of Six Sigma philosophy in improvement of the quality of healthcare services is recognised both by researchers and by quality practitioners; discrete-event simulation models are commonly used to improve the key performance measures of patient care delivery. The two approaches are seldom referenced and implemented together; however, they could be successfully integrated to carry out quality improvement programs. This paper proposes an innovative approach to bridge the gap and enrich the Six Sigma toolbox of quality improvement procedures with DES.

  12. A FEniCS-based programming framework for modeling turbulent flow by the Reynolds-averaged Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Mortensen, Mikael; Langtangen, Hans Petter; Wells, Garth N.

    2011-09-01

    Finding an appropriate turbulence model for a given flow case usually calls for extensive experimentation with both models and numerical solution methods. This work presents the design and implementation of a flexible, programmable software framework for assisting with numerical experiments in computational turbulence. The framework targets Reynolds-averaged Navier-Stokes models, discretized by finite element methods. The novel implementation makes use of Python and the FEniCS package, the combination of which leads to compact and reusable code, where model- and solver-specific code resemble closely the mathematical formulation of equations and algorithms. The presented ideas and programming techniques are also applicable to other fields that involve systems of nonlinear partial differential equations. We demonstrate the framework in two applications and investigate the impact of various linearizations on the convergence properties of nonlinear solvers for a Reynolds-averaged Navier-Stokes model.
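
    In the same spirit, a minimal legacy-FEniCS (dolfin) sketch shows how solver code can mirror the variational formulation; this toy solves a nonlinear Poisson problem rather than a RANS model, and the coefficient and forcing are arbitrary assumptions.

        # Nonlinear Poisson problem in legacy FEniCS: the residual F mirrors the
        # mathematical weak form; solve(F == 0, ...) invokes a Newton solver.
        from dolfin import (UnitSquareMesh, FunctionSpace, Function, TestFunction,
                            DirichletBC, Expression, Constant, dot, grad, dx, solve)

        mesh = UnitSquareMesh(32, 32)
        V = FunctionSpace(mesh, "CG", 1)
        u, v = Function(V), TestFunction(V)
        bc = DirichletBC(V, Constant(0.0), "on_boundary")

        f = Expression("10*exp(-(pow(x[0]-0.5,2)+pow(x[1]-0.5,2))/0.02)", degree=2)
        F = (1 + u**2) * dot(grad(u), grad(v)) * dx - f * v * dx
        solve(F == 0, u, bc)
        print(u.vector().norm("l2"))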

  13. Discrete ellipsoidal statistical BGK model and Burnett equations

    NASA Astrophysics Data System (ADS)

    Zhang, Yu-Dong; Xu, Ai-Guo; Zhang, Guang-Cai; Chen, Zhi-Hua; Wang, Pei

    2018-06-01

    A new discrete Boltzmann model, the discrete ellipsoidal statistical Bhatnagar-Gross-Krook (ESBGK) model, is proposed to simulate nonequilibrium compressible flows. Compared with the original discrete BGK model, the discrete ES-BGK has a flexible Prandtl number. For the discrete ES-BGK model in the Burnett level, two kinds of discrete velocity model are introduced and the relations between nonequilibrium quantities and the viscous stress and heat flux in the Burnett level are established. The model is verified via four benchmark tests. In addition, a new idea is introduced to recover the actual distribution function through the macroscopic quantities and their space derivatives. The recovery scheme works not only for discrete Boltzmann simulation but also for hydrodynamic ones, for example, those based on the Navier-Stokes or the Burnett equations.

  14. ODM2 (Observation Data Model): The EarthChem Use Case

    NASA Astrophysics Data System (ADS)

    Lehnert, Kerstin; Song, Lulin; Hsu, Leslie; Horsburgh, Jeffrey S.; Aufdenkampe, Anthony K.; Mayorga, Emilio; Tarboton, David; Zaslavsky, Ilya

    2014-05-01

    PetDB is an online data system that was created in the late 1990s to serve online a synthesis of published geochemical and petrological data of igneous and metamorphic rocks. PetDB has today reached a volume of 2.5 million analytical values for nearly 70,000 rock samples. PetDB's data model (Lehnert et al., G-Cubed 2000) was designed to store sample-based observational data generated by the analysis of rocks, together with a wide range of metadata documenting provenance of the samples, analytical procedures, data quality, and data source. Attempts to store additional types of geochemical data such as time-series data of seafloor hydrothermal springs and volcanic gases, depth-series data for marine sediments and soils, and mineral or mineral inclusion data revealed the limitations of the schema: the inability to properly record sample hierarchies (for example, a garnet that is included in a diamond that is included in a xenolith that is included in a kimberlite rock sample), inability to properly store time-series data, inability to accommodate classification schemes other than rock lithologies, and deficiencies in identifying and documenting datasets that are not part of publications. In order to overcome these deficiencies, PetDB has been developing a new data schema using the ODM2 information model (ODM = Observation Data Model). The development of ODM2 is a collaborative project that leverages the experience of several existing information representations, including PetDB and EarthChem, and the CUAHSI HIS Observations Data Model (ODM), as well as the general specification for encoding observational data called Observations and Measurements (O&M) to develop a uniform information model that seamlessly manages spatially discrete, feature-based earth observations from environmental samples and sample fractions as well as in-situ sensors, and to test its initial implementation in a variety of user scenarios. The O&M model, adopted as an international standard by the Open Geospatial Consortium, and later by ISO, is the foundation of several domain markup languages such as OGC WaterML 2, used for exchanging hydrologic time series. O&M profiles for samples and sample fractions have not been standardized yet, and there is a significant variety in sample data representations used across agencies and academic projects. The intent of the ODM2 project is to create a unified relational representation for different types of spatially discrete observational data, ensuring that the data can be efficiently stored, transferred, catalogued and queried within a variety of earth science applications. We will report on the initial design and implementation of the new model for PetDB, and results of testing the model against a set of common queries. We have explored several aspects of the model, including semantic consistency, validation and integrity checking, portability and maintainability, query efficiency, and scalability. The sample datasets from PetDB have been loaded in the initial physical implementation for testing. The results of the experiments point to both benefits and challenges of the initial design, and illustrate the key trade-off between the generality of design, ease of interpretation, and query efficiency, especially as the system needs to scale to millions of records.

  15. Numerical solution of boundary-integral equations for molecular electrostatics.

    PubMed

    Bardhan, Jaydeep P

    2009-03-07

    Numerous molecular processes, such as ion permeation through channel proteins, are governed by relatively small changes in energetics. As a result, theoretical investigations of these processes require accurate numerical methods. In the present paper, we evaluate the accuracy of two approaches to simulating boundary-integral equations for continuum models of the electrostatics of solvation. The analysis emphasizes boundary-element method simulations of the integral-equation formulation known as the apparent-surface-charge (ASC) method or polarizable-continuum model (PCM). In many numerical implementations of the ASC/PCM model, one forces the integral equation to be satisfied exactly at a set of discrete points on the boundary. We demonstrate in this paper that this approach to discretization, known as point collocation, is significantly less accurate than an alternative approach known as qualocation. Furthermore, the qualocation method offers this improvement in accuracy without increasing simulation time. Numerical examples demonstrate that the electrostatic part of the solvation free energy, when calculated using the collocation and qualocation methods, can differ significantly; for a polypeptide, the answers can differ by as much as 10 kcal/mol (approximately 4% of the total electrostatic contribution to solvation). The applicability of the qualocation discretization to other integral-equation formulations is also discussed, and two equivalences between integral-equation methods are derived.

  16. Impact of eliminating fracture intersection nodes in multiphase compositional flow simulation

    NASA Astrophysics Data System (ADS)

    Walton, Kenneth M.; Unger, Andre J. A.; Ioannidis, Marios A.; Parker, Beth L.

    2017-04-01

    Algebraic elimination of nodes at discrete fracture intersections via the star-delta technique has proven to be a valuable tool for making multiphase numerical simulations more tractable and efficient. This study examines the assumptions of the star-delta technique and exposes its effects in a 3-D, multiphase context for advective and dispersive/diffusive fluxes. Key issues of relative permeability-saturation-capillary pressure (kr-S-Pc) and capillary barriers at fracture-fracture intersections are discussed. This study uses a multiphase compositional, finite difference numerical model in discrete fracture network (DFN) and discrete fracture-matrix (DFM) modes. It verifies that the numerical model replicates analytical solutions and performs adequately in convergence exercises (conservative and decaying tracer, one and two-phase flow, DFM and DFN domains). The study culminates in simulations of a two-phase laboratory experiment in which a fluid invades a simple fracture intersection. The experiment and simulations evoke different invading fluid flow paths by varying fracture apertures as oil invades water-filled fractures and as water invades air-filled fractures. Results indicate that the node elimination technique as implemented in the numerical model correctly reproduces the long-term flow path of the invading fluid, but that short-term temporal effects of the capillary traps and barriers arising from the intersection node are lost.
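
    For reference, the star-delta rule eliminates the intersection node by replacing the n branch conductances G_i meeting there with direct pairwise links of conductance G_ij = G_i * G_j / sum_k(G_k). A minimal single-phase sketch (ignoring the kr-S-Pc and capillary-barrier effects the paper examines):

        # Star-delta elimination of a hub node shared by n conducting branches.
        import numpy as np

        def star_delta(G):
            """Equivalent pairwise conductances after removing the hub node."""
            G = np.asarray(G, dtype=float)
            total = G.sum()
            return {(i, j): G[i] * G[j] / total
                    for i in range(len(G)) for j in range(i + 1, len(G))}

        # Four fracture branches meeting at an intersection node.
        print(star_delta([2.0, 1.0, 0.5, 0.5]))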

  17. An accurate front capturing scheme for tumor growth models with a free boundary limit

    NASA Astrophysics Data System (ADS)

    Liu, Jian-Guo; Tang, Min; Wang, Li; Zhou, Zhennan

    2018-07-01

    We consider a class of tumor growth models under the combined effects of density-dependent pressure and cell multiplication, with a free boundary model as its singular limit when the pressure-density relationship becomes highly nonlinear. In particular, the constitutive law connecting pressure p and density ρ is p(ρ) = (m/(m-1)) ρ^(m-1), and when m ≫ 1, the cell density ρ may evolve its support according to a pressure-driven geometric motion with sharp interface along its boundary. The nonlinearity and degeneracy in the diffusion bring great challenges in numerical simulations. Prior to the present paper, there was a lack of a standard mechanism to numerically capture the front propagation speed as m ≫ 1. In this paper, we develop a numerical scheme based on a novel prediction-correction reformulation that can accurately approximate the front propagation even when the nonlinearity is extremely strong. We show that the semi-discrete scheme naturally connects to the free boundary limit equation as m → ∞. With proper spatial discretization, the fully discrete scheme has improved stability, preserves positivity, and can be implemented without nonlinear solvers. Finally, extensive numerical examples in both one and two dimensions are provided to verify the claimed properties in various applications.

  18. Mechanical discrete simulator of the electro-mechanical lift with n:1 roping

    NASA Astrophysics Data System (ADS)

    Alonso, F. J.; Herrera, I.

    2016-05-01

    The design process of new products in lift engineering is a difficult task, mainly due to the complexity and slenderness of the lift system, demanding a predictive tool for the lift mechanics. A mechanical ad-hoc discrete simulator, as an alternative to ‘general purpose’ mechanical simulators, is proposed. Firstly, the synthesis and experimentation process that led to a suitable model capable of accurately simulating the response of the electromechanical lift is discussed. Then, the equations of motion are derived. The model comprises a discrete system of 5 vertically displaceable masses (car, counterweight, car frame, passengers/loads and lift drive), an inertial mass of the assembly tension pulley-rotor shaft which can rotate about the machine axis, and 6 mechanical connectors with a 1:1 suspension layout. The model is extended to any n:1 roping lift by setting 6 equivalent mechanical components (suspension systems for car and counterweight, lift drive silent blocks, tension pulley-lift drive stator and passengers/load equivalent spring-damper) by inductive inference from the 1:1 and generalized 2:1 roping systems. The application to simulating real elevator systems is demonstrated by numerical time integration of the governing equations using the Kutta-Merson algorithm, implemented in a computer program for ad-hoc elevator simulation called ElevaCAD.

  19. Terrain representation impact on periurban catchment morphological properties

    NASA Astrophysics Data System (ADS)

    Rodriguez, F.; Bocher, E.; Chancibault, K.

    2013-04-01

    Modelling the hydrological behaviour of suburban catchments requires an estimation of environmental features, including land use and hydrographic networks. Suburban areas display a highly heterogeneous composition and encompass many anthropogenic elements that affect water flow paths, such as ditches, sewers, culverts and embankments. The geographical data available, either raster or vector data, may be of various origins and resolutions. Urban databases often offer very detailed data for sewer networks and 3D streets, yet the data covering rural zones may be coarser. This study is intended to highlight the sensitivity of the essential features of a periurban catchment, i.e. the catchment border and the drainage network, to the geographical data as well as to the data discretisation method used. Three methods are implemented for this purpose. The first is the DEM (digital elevation model) treatment method, which has traditionally been applied in the field of catchment hydrology. The second is based on urban database analysis and focuses on vector data, i.e. polygons and segments. The third method is a TIN (triangular irregular network), which provides a consistent description of flow directions from an accurate representation of slope. It is assumed herein that the width function is representative of the catchment's hydrological response. The periurban Chézine catchment, located within the Nantes metropolitan area in western France, serves as the case study. The determination of both the main morphological features and the hydrological response of a suburban catchment varies significantly according to the discretisation method employed, especially in upstream rural areas. Vector- and TIN-based methods allow the higher drainage density of urban areas to be represented, and consequently reveal the impact of these areas on the width function, whereas the DEM method fails to do so. TINs seem more appropriate for taking streets into account, because they allow a finer representation of topographical discontinuities. These results may help future developments of distributed hydrological models in periurban areas.

  20. CAS2D: FORTRAN program for nonrotating blade-to-blade, steady, potential transonic cascade flows

    NASA Technical Reports Server (NTRS)

    Dulikravich, D. S.

    1980-01-01

    An exact, full-potential-equation (FPE) model for the steady, irrotational, homentropic and homoenergetic flow of a compressible, homocompositional, inviscid fluid through two-dimensional planar cascades of airfoils was derived, together with its appropriate boundary conditions. A computer program, CAS2D, was developed that numerically solves an artificially time-dependent form of the actual FPE. The governing equation was discretized by using type-dependent, rotated finite differencing and the finite area technique. The flow field was discretized by providing a boundary-fitted, nonuniform computational mesh. The mesh was generated by using a sequence of conformal mapping, nonorthogonal coordinate stretching, and local, isoparametric, bilinear mapping functions. The discretized form of the FPE was solved iteratively by using successive line overrelaxation. The possible isentropic shocks were correctly captured by adding explicitly an artificial viscosity in a conservative form. In addition, a three-level consecutive mesh-refinement feature makes CAS2D a reliable and fast algorithm for the analysis of transonic, two-dimensional cascade flows.

  1. Development of an Integrated Nonlinear Aeroservoelastic Flight Dynamic Model of the NASA Generic Transport Model

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan; Ting, Eric

    2018-01-01

    This paper describes a recent development of an integrated fully coupled aeroservoelastic flight dynamic model of the NASA Generic Transport Model (GTM). The integrated model couples nonlinear flight dynamics to a nonlinear aeroelastic model of the GTM. The nonlinearity includes the coupling of the rigid-body aircraft states in the partial derivatives of the aeroelastic angle of attack. Aeroservoelastic modeling of the control surfaces, which are modeled by the Variable Camber Continuous Trailing Edge Flap, is also conducted. R. T. Jones' method is implemented to approximate unsteady aerodynamics. Simulations of the GTM are conducted with simulated continuous and discrete gust loads.

  2. The Priority Inversion Problem and Real-Time Symbolic Model Checking

    DTIC Science & Technology

    1993-04-23

    Priority inversion can make real-time systems unpredictable in subtle ways. This makes it more difficult to implement and debug such systems. Our work discusses this problem and presents one possible solution. The solution is formalized and verified using temporal logic model checking techniques. In order to perform the verification, the BDD-based symbolic model checking algorithm given in previous works was extended to handle real-time properties using the bounded until operator. We believe that this algorithm, which is based on discrete time, is able to handle many real-time properties

  3. Pilot interaction with automated airborne decision making systems

    NASA Technical Reports Server (NTRS)

    Rouse, W. B.; Hammer, J. M.; Mitchell, C. M.; Morris, N. M.; Lewis, C. M.; Yoon, W. C.

    1985-01-01

    Progress was made in the three following areas. In the rule-based modeling area, two papers related to identification and significance testing of rule-based models were presented. In the area of operator aiding, research focused on aiding operators in novel failure situations; a discrete control modeling approach to aiding PLANT operators was developed; and a set of guidelines was developed for implementing automation. In the area of flight simulator hardware and software, the hardware will be completed within two months and the initial simulation software will then be integrated and tested.

  4. Improving the Teaching of Discrete-Event Control Systems Using a LEGO Manufacturing Prototype

    ERIC Educational Resources Information Center

    Sanchez, A.; Bucio, J.

    2012-01-01

    This paper discusses the usefulness of employing LEGO as a teaching-learning aid in a post-graduate-level first course on the control of discrete-event systems (DESs). The final assignment of the course is presented, which asks students to design and implement a modular hierarchical discrete-event supervisor for the coordination layer of a…

  5. Efficient Voronoi volume estimation for DEM simulations of granular materials under confined conditions

    PubMed Central

    Frenning, Göran

    2015-01-01

    When the discrete element method (DEM) is used to simulate confined compression of granular materials, the need arises to estimate the void space surrounding each particle with Voronoi polyhedra. This entails recurring Voronoi tessellation with small changes in the geometry, resulting in a considerable computational overhead. To overcome this limitation, we propose a method with the following features:
    • A local determination of the polyhedron volume is used, which considerably simplifies implementation of the method.
    • A linear approximation of the polyhedron volume is utilised, with intermittent exact volume calculations when needed.
    • The method allows highly accurate volume estimates to be obtained at a considerably reduced computational cost.
    PMID:26150975
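
    For comparison, exact (non-incremental) Voronoi cell volumes can be computed directly with SciPy, giving a baseline against which a linearized update scheme of this kind could be checked; skipping unbounded cells is an illustrative simplification.

        # Exact Voronoi cell volumes for interior particles via SciPy.
        import numpy as np
        from scipy.spatial import ConvexHull, Voronoi

        rng = np.random.default_rng(1)
        points = rng.random((200, 3))
        vor = Voronoi(points)

        volumes = {}
        for p, region_index in enumerate(vor.point_region):
            region = vor.regions[region_index]
            if -1 in region or len(region) == 0:
                continue                      # unbounded cell (boundary particle)
            volumes[p] = ConvexHull(vor.vertices[region]).volume

        print(len(volumes), "bounded cells; mean volume:",
              np.mean(list(volumes.values())))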

  6. Hybrid modeling in biochemical systems theory by means of functional petri nets.

    PubMed

    Wu, Jialiang; Voit, Eberhard

    2009-02-01

    Many biological systems are genuinely hybrids consisting of interacting discrete and continuous components and processes that often operate at different time scales. It is therefore desirable to create modeling frameworks capable of combining differently structured processes and permitting their analysis over multiple time horizons. During the past 40 years, Biochemical Systems Theory (BST) has been a very successful approach to elucidating metabolic, gene regulatory, and signaling systems. However, its foundation in ordinary differential equations has precluded BST from directly addressing problems containing switches, delays, and stochastic effects. In this study, we extend BST to hybrid modeling within the framework of Hybrid Functional Petri Nets (HFPN). First, we show how the canonical GMA and S-system models in BST can be directly implemented in a standard Petri Net framework. In a second step we demonstrate how to account for different types of time delays as well as for discrete, stochastic, and switching effects. Using representative test cases, we validate the hybrid modeling approach through comparative analyses and simulations with other approaches and highlight the feasibility, quality, and efficiency of the hybrid method.
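
    For reference, the canonical S-system rate law that BST builds on can be integrated directly as ODEs, as in the minimal sketch below (parameter values are arbitrary); the paper's contribution is mapping such terms, together with delays and discrete, stochastic, and switching effects, onto hybrid functional Petri nets, which this continuous-only toy omits.

        # S-system: dX_i/dt = alpha_i * prod_j X_j**g_ij - beta_i * prod_j X_j**h_ij
        import numpy as np
        from scipy.integrate import solve_ivp

        alpha, beta = np.array([2.0, 1.5]), np.array([1.0, 1.0])
        g = np.array([[0.0, -0.5], [0.8, 0.0]])   # production kinetic orders
        h = np.array([[0.5, 0.0], [0.0, 0.6]])    # degradation kinetic orders

        def s_system(t, X):
            return alpha * np.prod(X**g, axis=1) - beta * np.prod(X**h, axis=1)

        sol = solve_ivp(s_system, (0.0, 20.0), [1.0, 1.0])
        print(sol.y[:, -1])                       # near-steady-state concentrations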

  7. Discrete bacteria foraging optimization algorithm for graph based problems - a transition from continuous to discrete

    NASA Astrophysics Data System (ADS)

    Sur, Chiranjib; Shukla, Anupam

    2018-03-01

    The Bacteria Foraging Optimisation Algorithm is a collective behaviour-based meta-heuristic search method that relies on the social influence of the bacteria co-agents in the search space of the problem. The algorithm faces tremendous hindrance in its application to discrete and graph-based problems due to biased mathematical modelling and the dynamic structure of the algorithm. This has been the key motivation for introducing a discrete form, the Discrete Bacteria Foraging Optimisation (DBFO) Algorithm, for discrete problems, which outnumber the continuous-domain problems represented by mathematical and numerical equations in real life. In this work, we simulate a graph-based multi-objective road optimisation problem and discuss the prospects of utilising DBFO in similar optimisation and graph-based problems. The various solution representations that can be handled by DBFO are also discussed. The implications and dynamics of the various parameters used in DBFO are illustrated from the point of view of the problems, combining both exploration and exploitation. The results of DBFO are compared with the Ant Colony Optimisation and Intelligent Water Drops algorithms. An important feature of DBFO is that the bacteria agents do not depend on local heuristic information but estimate new exploration schemes from previous experience and covered-path analysis. This makes the algorithm better at combination generation for graph-based problems and for NP-hard problems.

  8. A fast Bayesian approach to discrete object detection in astronomical data sets - PowellSnakes I

    NASA Astrophysics Data System (ADS)

    Carvalho, Pedro; Rocha, Graça; Hobson, M. P.

    2009-03-01

    A new fast Bayesian approach is introduced for the detection of discrete objects immersed in a diffuse background. This new method, called PowellSnakes, speeds up traditional Bayesian techniques by (i) replacing the standard form of the likelihood for the parameters characterizing the discrete objects by an alternative exact form that is much quicker to evaluate; (ii) using a simultaneous multiple minimization code based on Powell's direction set algorithm to locate rapidly the local maxima in the posterior and (iii) deciding whether each located posterior peak corresponds to a real object by performing a Bayesian model selection using an approximate evidence value based on a local Gaussian approximation to the peak. The construction of this Gaussian approximation also provides the covariance matrix of the uncertainties in the derived parameter values for the object in question. This new approach provides a speed-up in performance by a factor of about 100 as compared to existing Bayesian source extraction methods that use Markov chain Monte Carlo to explore the parameter space, such as that presented by Hobson & McLachlan. The method can be implemented in either real or Fourier space. In the case of objects embedded in a homogeneous random field, working in Fourier space provides a further speed-up that takes advantage of the fact that the correlation matrix of the background is circulant. We illustrate the capabilities of the method by applying it to some simplified toy models. Furthermore, PowellSnakes has the advantage of consistently defining the threshold for acceptance/rejection based on priors, which cannot be said of the frequentist methods. We present here the first implementation of this technique (version I). Further improvements to this implementation are currently under investigation and will be published shortly. The application of the method to realistic simulated Planck observations will be presented in a forthcoming publication.

  9. Optimal Control of Micro Grid Operation Mode Seamless Switching Based on Radau Collocation Method

    NASA Astrophysics Data System (ADS)

    Chen, Xiaomin; Wang, Gang

    2017-05-01

    The seamless switching process of micro grid operation mode directly affects the safety and stability of its operation. For the switching process from island mode to grid-connected mode of a micro grid, we establish a dynamic optimization model based on two grid-connected inverters. We use the Radau collocation method to discretize the model, and the Newton iteration method to obtain the optimal solution. Finally, we implement the optimization model in MATLAB and obtain the optimal control trajectories of the inverters.
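
    The Radau IIA family underlying such collocation schemes is also available as SciPy's implicit "Radau" integrator; the sketch below merely propagates a toy first-order inverter-like dynamic with it, whereas the paper transcribes the whole optimal-control problem into a nonlinear program solved by Newton iteration.

        # Implicit Radau IIA integration of x' = -x + u (toy inverter loop).
        from scipy.integrate import solve_ivp

        def dynamics(t, x, u=0.5):
            return [-x[0] + u]

        sol = solve_ivp(dynamics, (0.0, 5.0), [0.0], method="Radau", max_step=0.1)
        print(sol.y[0, -1])   # approaches the commanded set point u = 0.5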

  10. Kinetic characteristics of debris flows as exemplified by field investigations and discrete element simulation of the catastrophic Jiweishan rockslide, China

    NASA Astrophysics Data System (ADS)

    Zou, Zongxing; Tang, Huiming; Xiong, Chengren; Su, Aijun; Criss, Robert E.

    2017-10-01

    The Jiweishan rockslide of June 5, 2009 in China provides an important opportunity to elucidate the kinetic characteristics of high-speed, long-runout debris flows. A 2D discrete element model whose mechanical parameters were calibrated using basic field data was used to simulate the kinetic behavior of this catastrophic landslide. The model output shows that the Jiweishan debris flow lasted about 3 min, released a gravitational potential energy of about 6 × 10^13 J, with collisions and friction dissipating approximately equal amounts of energy, and had a maximum fragment velocity of 60-70 m/s, almost twice the highest velocity of the overall slide mass (35 m/s). Notable simulated characteristics include the high velocity and energy of the slide material, the preservation of the original positional order of the slide blocks, the inverse vertical grading of blocks, and the downslope sorting of the slide deposits. Field observations that verify these features include uprooted trees in the frontal collision area of the air-blast wave, downslope reduction of average clast size, and undamaged plants atop huge blocks that prove their lack of downslope tumbling. The secondary acceleration effect and force chains derived from the numerical model help explain these deposit features and the long-distance transport. Our back-analyzed friction coefficients along the motion path in the PFC model provide a reference for analyzing and predicting the motion of similar geological hazards.

  11. Dynamic partitioning for hybrid simulation of the bistable HIV-1 transactivation network.

    PubMed

    Griffith, Mark; Courtney, Tod; Peccoud, Jean; Sanders, William H

    2006-11-15

    The stochastic kinetics of a well-mixed chemical system, governed by the chemical Master equation, can be simulated using the exact methods of Gillespie. However, these methods do not scale well as systems become more complex and larger models are built to include reactions with widely varying rates, since the computational burden of simulation increases with the number of reaction events. Continuous models may provide an approximate solution and are computationally less costly, but they fail to capture the stochastic behavior of small populations of macromolecules. In this article we present a hybrid simulation algorithm that dynamically partitions the system into subsets of continuous and discrete reactions, approximates the continuous reactions deterministically as a system of ordinary differential equations (ODE) and uses a Monte Carlo method for generating discrete reaction events according to a time-dependent propensity. Our approach to partitioning is improved such that we dynamically partition the system of reactions, based on a threshold relative to the distribution of propensities in the discrete subset. We have implemented the hybrid algorithm in an extensible framework, utilizing two rigorous ODE solvers to approximate the continuous reactions, and use an example model to illustrate the accuracy and potential speedup of the algorithm when compared with exact stochastic simulation. Software and benchmark models used for this publication can be made available upon request from the authors.
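
    A minimal sketch of the dynamic-partitioning idea: reactions whose propensity falls below a threshold defined relative to the current propensity distribution stay in the discrete (stochastic) subset, while the rest are treated as continuous ODE terms. The threshold rule and numbers below are illustrative assumptions, not the article's exact criterion.

        # Partition reactions into discrete and continuous subsets by propensity.
        import numpy as np

        def partition(propensities, rel_threshold=0.05):
            a = np.asarray(propensities, dtype=float)
            cutoff = rel_threshold * a.max()
            discrete = np.flatnonzero(a <= cutoff)    # slow reactions: stochastic
            continuous = np.flatnonzero(a > cutoff)   # fast reactions: ODE terms
            return discrete, continuous

        d, c = partition([0.02, 8.0, 150.0, 0.4, 95.0])
        print("discrete:", d, "continuous:", c)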

  12. Radiative transfer equation accounting for rotational Raman scattering and its solution by the discrete-ordinates method

    NASA Astrophysics Data System (ADS)

    Rozanov, Vladimir V.; Vountas, Marco

    2014-01-01

    Rotational Raman scattering of solar light in Earth's atmosphere leads to the filling-in of Fraunhofer and telluric lines observed in the reflected spectrum. The phenomenological derivation of the inelastic radiative transfer equation including rotational Raman scattering is presented. The different forms of the approximate radiative transfer equation with first-order rotational Raman scattering terms are obtained employing the Cabannes, Rayleigh, and Cabannes-Rayleigh scattering models. The solution of these equations is considered in the framework of the discrete-ordinates method using rigorous and approximate approaches to derive particular integrals. An alternative forward-adjoint technique is suggested as well. A detailed description of the model including the exact spectral matching and a binning scheme that significantly speeds up the calculations is given. The considered solution techniques are implemented in the radiative transfer software package SCIATRAN, and a specified benchmark setup is presented to enable readers to transparently compare their own results.
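
    For orientation, the sketch below solves a purely elastic toy version of the problem by the discrete-ordinates method: a 1D homogeneous slab with isotropic scattering, Gauss-Legendre ordinates, diamond differencing, and source iteration. The inelastic rotational Raman source terms that are the subject of the paper are omitted, and all slab parameters are hypothetical.

        import numpy as np

        # Minimal discrete-ordinates solver for a 1D slab with isotropic
        # elastic scattering: mu dI/dtau = -I + (omega/2) Int I dmu' + Q.
        N, nz = 8, 200                        # ordinates, spatial cells
        tau_max, omega, Q = 1.0, 0.9, 0.5     # optical depth, albedo, source
        mu, w = np.polynomial.legendre.leggauss(N)
        dtau = tau_max / nz

        I = np.zeros((N, nz + 1))             # edge intensities per ordinate
        for _ in range(200):                  # source iteration
            S = omega * 0.5 * (w @ I) + Q     # scattering + emission source
            I_new = np.zeros_like(I)          # vacuum boundaries on both sides
            for i in range(N):
                if mu[i] > 0:                 # sweep down through the slab
                    for k in range(nz):
                        Sm = 0.5 * (S[k] + S[k + 1])
                        I_new[i, k + 1] = (((mu[i] / dtau - 0.5) * I_new[i, k]
                                            + Sm) / (mu[i] / dtau + 0.5))
                else:                         # sweep up through the slab
                    for k in range(nz, 0, -1):
                        Sm = 0.5 * (S[k] + S[k - 1])
                        I_new[i, k - 1] = (((-mu[i] / dtau - 0.5) * I_new[i, k]
                                            + Sm) / (-mu[i] / dtau + 0.5))
            if np.max(np.abs(I_new - I)) < 1e-8:
                I = I_new
                break
            I = I_new
        print("emergent upward intensities at the top:", I[mu < 0, 0])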

  13. Toward On-Demand Deep Brain Stimulation Using Online Parkinson's Disease Prediction Driven by Dynamic Detection.

    PubMed

    Mohammed, Ameer; Zamani, Majid; Bayford, Richard; Demosthenous, Andreas

    2017-12-01

    In Parkinson's disease (PD), on-demand deep brain stimulation is required so that stimulation is regulated to reduce side effects resulting from continuous stimulation and PD exacerbation due to untimely stimulation. Also, the progressive nature of PD necessitates the use of dynamic detection schemes that can track the nonlinearities in PD. This paper proposes the use of dynamic feature extraction and dynamic pattern classification to achieve dynamic PD detection, taking into account the demand for high accuracy, low computation, and real-time detection. The dynamic feature extraction and dynamic pattern classification are selected by evaluating a subset of feature extraction, dimensionality reduction, and classification algorithms that have been used in brain-machine interfaces. A novel dimensionality reduction technique, the maximum ratio method (MRM), is proposed and provides the most efficient performance. In terms of accuracy and complexity for hardware implementation, a combination having discrete wavelet transform for feature extraction, MRM for dimensionality reduction, and dynamic k-nearest neighbor for classification was chosen as the most efficient. It achieves a classification accuracy of 99.29%, an F1-score of 97.90%, and a choice probability of 99.86%.
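
    The maximum ratio method itself is the paper's novel contribution and is not reproduced here. The sketch below only illustrates the shape of the surrounding pipeline (wavelet sub-band energies as features, k-NN classification, here static rather than dynamic) on surrogate data; the beta-band surrogate signal, wavelet choice, and all parameters are hypothetical.

        import numpy as np
        import pywt
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(1)

        def dwt_features(signal, wavelet="db4", level=4):
            # Energy of each wavelet sub-band as a feature vector.
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            return np.array([np.sum(c**2) for c in coeffs])

        # Surrogate data: class 0 = broadband noise, class 1 = noise plus a
        # beta-band (~20 Hz) oscillation, a crude stand-in for a PD marker.
        fs, n_samples, n_trials = 256, 512, 100
        t = np.arange(n_samples) / fs
        X, y = [], []
        for label in (0, 1):
            for _ in range(n_trials):
                sig = rng.standard_normal(n_samples)
                if label == 1:
                    sig += 1.5 * np.sin(2 * np.pi * 20.0 * t
                                        + rng.uniform(0, 2 * np.pi))
                X.append(dwt_features(sig))
                y.append(label)
        X, y = np.array(X), np.array(y)

        # Train/test split and k-NN classification (static here; the paper's
        # dynamic k-NN adapts its reference set over time).
        idx = rng.permutation(len(y))
        tr, te = idx[:150], idx[150:]
        clf = KNeighborsClassifier(n_neighbors=5).fit(X[tr], y[tr])
        print(f"test accuracy: {clf.score(X[te], y[te]):.2f}")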

  14. Snow Microwave Radiative Transfer (SMRT): A new model framework to simulate snow-microwave interactions for active and passive remote sensing applications

    NASA Astrophysics Data System (ADS)

    Loewe, H.; Picard, G.; Sandells, M. J.; Mätzler, C.; Kontu, A.; Dumont, M.; Maslanka, W.; Morin, S.; Essery, R.; Lemmetyinen, J.; Wiesmann, A.; Floury, N.; Kern, M.

    2016-12-01

    Forward modeling of snow-microwave interactions is widely used to interpret microwave remote sensing data from active and passive sensors. Though several models are already available for this purpose, a joint effort has been undertaken in the past two years within the ESA Project "Microstructural origin of electromagnetic signatures in microwave remote sensing of snow". The new Snow Microwave Radiative Transfer (SMRT) model primarily facilitates a flexible treatment of snow microstructure as seen by X-ray tomography and seeks to unite the respective advantages of existing models. In its main setting, SMRT considers radiation transfer in a plane-parallel snowpack consisting of homogeneous layers with a layer microstructure represented by an autocorrelation function. The electromagnetic model, which underlies permittivity, absorption and scattering calculations within a layer, is based on the improved Born approximation. The resulting vector-radiative transfer equation in the snowpack is solved using spectral decomposition of the discrete-ordinates discretization. SMRT is implemented in Python and employs an object-oriented, modular design which aims to (i) provide an intuitive and fail-safe API for basic users, (ii) enable efficient community development of extensions (e.g., improved sub-models for microstructure, permittivity, soil or interface reflectivity) by advanced users, and (iii) encapsulate the numerical core, which is maintained by the developers. For cross-validation and inter-model comparison, SMRT implements various ingredients of existing models as selectable options (e.g. Rayleigh or DMRT-QCA phase functions) and shallow wrappers to invoke legacy model code directly (MEMLS, DMRT-QMS, HUT). In this paper we give an overview of the model components and show examples and results from different validation schemes.
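
    A minimal usage sketch along the lines of SMRT's documented quickstart (exact argument names and defaults may differ between versions): a one-layer snowpack with exponential microstructure, the improved Born approximation as electromagnetic model, and the discrete-ordinates (DORT) solver.

        from smrt import make_snowpack, make_model, sensor_list

        # One-layer snowpack with an exponential-autocorrelation microstructure.
        snowpack = make_snowpack(thickness=[1.0],                 # m
                                 microstructure_model="exponential",
                                 density=[320.0],                 # kg m^-3
                                 corr_length=[5e-5])              # m

        # Improved Born approximation + discrete-ordinates (DORT) solver.
        model = make_model("iba", "dort")

        # Passive microwave sensor: 37 GHz at 55 degrees incidence.
        sensor = sensor_list.passive(37e9, 55.0)

        result = model.run(sensor, snowpack)
        print(result.TbV(), result.TbH())   # brightness temperatures, V and H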

  15. Discrete-time quantum walk with nitrogen-vacancy centers in diamond coupled to a superconducting flux qubit

    NASA Astrophysics Data System (ADS)

    Hardal, Ali Ü. C.; Xue, Peng; Shikano, Yutaka; Müstecaplıoğlu, Özgür E.; Sanders, Barry C.

    2013-08-01

    We propose a quantum-electrodynamics scheme for implementing the discrete-time, coined quantum walk with the walker corresponding to the phase degree of freedom for a quasimagnon field realized in an ensemble of nitrogen-vacancy centers in diamond. The coin is realized as a superconducting flux qubit. Our scheme improves on an existing proposal for implementing quantum walks in cavity quantum electrodynamics by removing the cumbersome requirement of varying drive-pulse durations according to mean quasiparticle number. Our improvement is relevant to all indirect-coin-flip cavity quantum-electrodynamics realizations of quantum walks. Our numerical analysis shows that this scheme can realize a discrete quantum walk under realistic conditions.
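
    Independently of the cavity-QED realization proposed above, the underlying unitary (a coin flip followed by a coin-conditioned shift) is easy to simulate. The sketch below runs a standard Hadamard-coined walk on a line and prints the ballistic spread of the position distribution; it is a generic illustration, not the scheme's physical model.

        import numpy as np

        # Discrete-time coined quantum walk on a line.
        n_steps = 50
        n_pos = 2 * n_steps + 1                    # positions -n_steps..n_steps
        psi = np.zeros((n_pos, 2), dtype=complex)  # amplitude[position, coin]
        psi[n_steps, :] = np.array([1.0, 1.0j]) / np.sqrt(2)  # symmetric coin

        H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # Hadamard coin

        for _ in range(n_steps):
            psi = psi @ H.T                        # coin flip on every site
            shifted = np.zeros_like(psi)
            shifted[1:, 0] = psi[:-1, 0]           # coin state 0 moves right
            shifted[:-1, 1] = psi[1:, 1]           # coin state 1 moves left
            psi = shifted

        prob = np.sum(np.abs(psi)**2, axis=1)
        x = np.arange(n_pos) - n_steps
        print("total probability:", prob.sum())    # unitary: stays 1.0
        print("position std dev:", np.sqrt(np.sum(prob * x**2)))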

  16. Fast Segmentation From Blurred Data in 3D Fluorescence Microscopy.

    PubMed

    Storath, Martin; Rickert, Dennis; Unser, Michael; Weinmann, Andreas

    2017-10-01

    We develop a fast algorithm for segmenting 3D images from linear measurements based on the Potts model (or piecewise constant Mumford-Shah model). To that end, we first derive suitable space discretizations of the 3D Potts model, which are capable of dealing with 3D images defined on non-cubic grids. Our discretization allows us to utilize a specific splitting approach, which results in decoupled subproblems of moderate size. The crucial point in the 3D setup is that the number of independent subproblems is so large that we can reasonably exploit the parallel processing capabilities of graphics processing units (GPUs). Our GPU implementation is up to 18 times faster than the sequential CPU version. This makes it possible to process even large volumes in acceptable runtimes. As a further contribution, we extend the algorithm in order to deal with non-negativity constraints. We demonstrate the efficiency of our method for combined image deconvolution and segmentation on simulated data and on real 3D wide field fluorescence microscopy data.
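
    The splitting approach decouples the problem into many small univariate Potts subproblems, each solvable exactly by dynamic programming. The sketch below shows that classic 1D Potts building block for direct (unblurred) data; it is not the authors' GPU solver, and the jump penalty gamma and the test signal are hypothetical.

        import numpy as np

        def potts_1d(f, gamma):
            # Exact solver for min_u gamma * (#jumps of u) + sum (u_i - f_i)^2
            # by dynamic programming over the left boundary of the last segment.
            n = len(f)
            cs, cs2 = np.cumsum(f), np.cumsum(f**2)

            def seg_err(l, r):  # squared error of the best constant on f[l..r]
                s = cs[r] - (cs[l - 1] if l > 0 else 0.0)
                s2 = cs2[r] - (cs2[l - 1] if l > 0 else 0.0)
                return s2 - s * s / (r - l + 1)

            B = np.zeros(n)           # B[r]: optimal value for prefix f[0..r]
            jump = np.zeros(n, int)   # left boundary of the last segment
            for r in range(n):
                best, arg = seg_err(0, r), 0
                for l in range(1, r + 1):
                    val = B[l - 1] + gamma + seg_err(l, r)
                    if val < best:
                        best, arg = val, l
                B[r], jump[r] = best, arg

            u, r = np.empty(n), n - 1  # backtrack and fill segment means
            while r >= 0:
                l = jump[r]
                u[l:r + 1] = f[l:r + 1].mean()
                r = l - 1
            return u

        f = (np.concatenate([np.full(50, 1.0), np.full(50, 3.0)])
             + 0.3 * np.random.default_rng(2).standard_normal(100))
        print(potts_1d(f, gamma=2.0)[[0, 49, 50, 99]])  # recovers the two plateaus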

  17. Electroencephalogram Signal Classification for Automated Epileptic Seizure Detection Using Genetic Algorithm

    PubMed Central

    Nanthini, B. Suguna; Santhi, B.

    2017-01-01

    Background: Epilepsy arises when repeated seizures occur in the brain. The electroencephalogram (EEG) test provides valuable information about brain function and can be used to detect brain disorders, especially epilepsy. In this study, an automated seizure detection model is introduced. Materials and Methods: The EEG signals are decomposed into sub-bands by discrete wavelet transform using the db2 (Daubechies) wavelet. Sixteen features (eight statistical features, four gray-level co-occurrence matrix features, and Renyi entropy estimates with four different orders) are extracted from the raw EEG and its sub-bands. A genetic algorithm (GA) is used to select eight relevant features from the 16-dimensional feature set. The model has been trained and tested on EEG signals using a support vector machine (SVM) classifier. The performance of the SVM classifier is evaluated on two different databases. Results: The study was carried out through two different analyses and achieved satisfactory performance for automated seizure detection using the relevant features as input to the SVM classifier. Conclusion: Relevant features selected by the GA give better accuracy for seizure detection. PMID:28781480
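
    A sketch of GA-based feature selection wrapped around an SVM, on surrogate features rather than real EEG: binary masks over a hypothetical 16-dimensional feature vector evolve by selection, uniform crossover, and bit-flip mutation, with cross-validated SVM accuracy as fitness. Unlike the paper, the subset size is not constrained to eight, and all data and GA parameters are hypothetical.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(3)

        # Surrogate 16-dimensional feature matrix; only 4 of the 16 dimensions
        # carry class information.
        n, d, informative = 300, 16, [0, 3, 7, 12]
        X = rng.standard_normal((n, d))
        y = rng.integers(0, 2, n)
        X[:, informative] += 2.0 * y[:, None]

        def fitness(mask):
            if not mask.any():
                return 0.0
            return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

        # Tiny genetic algorithm over binary feature masks.
        pop = rng.random((20, d)) < 0.5
        for _ in range(15):
            scores = np.array([fitness(m) for m in pop])
            parents = pop[np.argsort(scores)[-10:]]        # keep the fitter half
            kids = []
            for _ in range(10):
                a, b = parents[rng.integers(10, size=2)]
                child = np.where(rng.random(d) < 0.5, a, b)  # uniform crossover
                child ^= rng.random(d) < 0.05                # bit-flip mutation
                kids.append(child)
            pop = np.vstack([parents, kids])

        best = pop[np.argmax([fitness(m) for m in pop])]
        print("selected features:", np.flatnonzero(best))
        print("cross-validated accuracy:", fitness(best))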

  18. Taboo Search: An Approach to the Multiple Minima Problem

    NASA Astrophysics Data System (ADS)

    Cvijovic, Djurdje; Klinowski, Jacek

    1995-02-01

    Described here is a method, based on Glover's taboo search for discrete functions, of solving the multiple minima problem for continuous functions. As demonstrated by model calculations, the algorithm avoids entrapment in local minima and continues the search to give a near-optimal final solution. Unlike other methods of global optimization, this procedure is generally applicable, easy to implement, derivative-free, and conceptually simple.
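
    A hedged sketch of the idea for continuous functions: candidate moves are sampled around the current point, moves falling inside a taboo radius of recently visited points are discarded, and the best admissible move is accepted even when it is uphill, which is what allows escape from local minima. The test function, move generation, and taboo rule here are generic stand-ins, not the authors' exact algorithm.

        import numpy as np

        rng = np.random.default_rng(4)

        def rastrigin(x):
            # Standard multiple-minima test function, global minimum 0 at origin.
            return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

        def tabu_search(f, x0, step=0.5, n_iter=500, tabu_len=25,
                        n_moves=20, radius=0.3):
            x, best_x = np.array(x0, float), np.array(x0, float)
            best = f(best_x)
            tabu = []                           # recently visited points
            for _ in range(n_iter):
                moves = x + step * rng.standard_normal((n_moves, len(x)))
                # Discard moves inside the taboo radius of recent points.
                ok = [m for m in moves
                      if all(np.linalg.norm(m - t) > radius for t in tabu)]
                if not ok:
                    continue
                x = min(ok, key=f)              # best admissible neighbour,
                tabu.append(x.copy())           # accepted even if uphill
                tabu = tabu[-tabu_len:]
                if f(x) < best:
                    best, best_x = f(x), x.copy()
            return best_x, best

        x_star, f_star = tabu_search(rastrigin, x0=[3.0, -2.0])
        print(f"best point {x_star}, value {f_star:.4f}")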

  19. Well-posed and stable transmission problems

    NASA Astrophysics Data System (ADS)

    Nordström, Jan; Linders, Viktor

    2018-07-01

    We introduce the notion of a transmission problem to describe a general class of problems where different dynamics are coupled in time. Well-posedness and stability are analysed for continuous and discrete problems using both strong and weak formulations, and a general transmission condition is obtained. The theory is applied to the coupling of fluid-acoustic models, multi-grid implementations, adaptive mesh refinements, multi-block formulations and numerical filtering.

  20. Physics Verification Overview

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doebling, Scott William

    The purpose of the verification project is to establish, through rigorous convergence analysis, that each ASC computational physics code correctly implements a set of physics models and algorithms (code verification); to evaluate and analyze the uncertainties of code outputs associated with the choice of temporal and spatial discretization (solution or calculation verification); and to develop and maintain the capability to expand and update these analyses on demand. This presentation describes project milestones.
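
    A small generic example of the convergence-analysis ingredient of code verification (not an ASC code): the observed order of accuracy is computed from errors against an exact solution on successively refined discretizations, p = log(e_coarse / e_fine) / log(r) with refinement ratio r. The toy solver is a hypothetical stand-in.

        import numpy as np

        def solve(n):
            # Hypothetical solver: forward-Euler integration of dy/dt = -y
            # on [0, 1] with n steps.
            y, dt = 1.0, 1.0 / n
            for _ in range(n):
                y += dt * (-y)
            return y

        exact = np.exp(-1.0)
        errors = {n: abs(solve(n) - exact) for n in (50, 100, 200, 400)}
        ns = sorted(errors)
        for coarse, fine in zip(ns, ns[1:]):
            p = np.log(errors[coarse] / errors[fine]) / np.log(fine / coarse)
            print(f"n={coarse}->{fine}: observed order p = {p:.3f}")  # ~1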
