Sample records for computational domain extends

  1. Errors due to the truncation of the computational domain in static three-dimensional electrical impedance tomography.

    PubMed

    Vauhkonen, P J; Vauhkonen, M; Kaipio, J P

    2000-02-01

    In electrical impedance tomography (EIT), an approximation for the internal resistivity distribution is computed based on the knowledge of the injected currents and measured voltages on the surface of the body. The currents spread out in three dimensions, and therefore off-plane structures have a significant effect on the reconstructed images. A question arises: how far from the current-carrying electrodes should the discretized model of the object be extended? If the model is truncated too near the electrodes, errors are produced in the reconstructed images. On the other hand, if the model is extended very far from the electrodes, the computational time may become impractically long. In this paper the model truncation problem is studied with the extended finite element method. Forward solutions obtained using so-called infinite elements, long finite elements and separable long finite elements are compared to the correct solution. The effects of the truncation of the computational domain on the reconstructed images are also discussed, and results from the three-dimensional (3D) sensitivity analysis are given. We show that if the finite element method with ordinary elements is used in static 3D EIT, the dimension of the problem can become fairly large if the errors associated with the domain truncation are to be avoided.

  2. Extending the limits of complex learning in organic amnesia: computer training in a vocational domain.

    PubMed

    Glisky, E L; Schacter, D L

    1989-01-01

    This study explored the limits of learning that could be achieved by an amnesic patient in a complex real-world domain. Using a cuing procedure known as the method of vanishing cues, a severely amnesic encephalitic patient was taught over 250 discrete pieces of new information concerning the rules and procedures for performing a task involving data entry into a computer. Subsequently, she was able to use this acquired knowledge to perform the task accurately and efficiently in the workplace. These results suggest that amnesic patients' preserved learning abilities can be extended well beyond what has been reported previously.

  3. A Representational Approach to Knowledge and Multiple Skill Levels for Broad Classes of Computer Generated Forces

    DTIC Science & Technology

    1997-12-01

    that I’ll turn my attention to that computer game we’ve talked so much about... Dave Van Veldhuizen and Scott Brown (soon-to-be Drs. Van Veldhuizen and...Industry Training Systems Conference. 1988. 37. Van Veldhuizen, D. A. and L. J. Hutson. "A Design Methodology for Domain Independent Computer...proposed by Van Veldhuizen and Hutson (37), extends the general architecture to support both a domain-independent approach to implementing CGFs and

  4. Time-domain wavefield reconstruction inversion

    NASA Astrophysics Data System (ADS)

    Li, Zhen-Chun; Lin, Yu-Zhao; Zhang, Kai; Li, Yuan-Yuan; Yu, Zhen-Nan

    2017-12-01

    Wavefield reconstruction inversion (WRI) is an improved full waveform inversion method proposed in recent years. WRI expands the search space by introducing the wave equation into the objective function and reconstructing the wavefield to update the model parameters, thereby improving computational efficiency and mitigating the influence of local minima. However, frequency-domain WRI is difficult to apply to real seismic data because of its high computational memory demand and its requirement of a time-frequency transformation with additional computational costs. In this paper, wavefield reconstruction inversion theory is extended into the time domain, the augmented wave equation of WRI is derived in the time domain, and the model gradient is modified according to a numerical test with anomalies. Examples with synthetic data illustrate the accuracy of time-domain WRI and its low dependency on low-frequency information.

  5. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE PAGES

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...

    2017-09-21

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems, and though these algorithms exhibit excellent scalability, significant algorithmic and implementation challenges exist in extending them to solve extreme-scale stochastic systems on emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both the spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain-level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
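
    The core domain-decomposition idea behind such solvers can be sketched in a few lines. The following is a minimal alternating (overlapping) Schwarz iteration for a 1D Poisson problem; it is purely illustrative and unrelated to the authors' intrusive polynomial-chaos implementation, and the grid size, overlap width, and direct tridiagonal subdomain solves are all assumptions made for the sketch.

```python
# Minimal overlapping (alternating) Schwarz sketch for -u'' = f on [0, 1]
# with u(0) = u(1) = 0. Illustrates the domain-decomposition idea only.

def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def schwarz_poisson(n=99, sweeps=50):
    h = 1.0 / (n + 1)
    f = [1.0] * n                      # constant source term
    u = [0.0] * (n + 2)                # global iterate, includes boundaries
    mid, ov = n // 2, n // 8           # split point and overlap width
    dom1 = (1, mid + ov)               # interior index ranges of subdomains
    dom2 = (mid - ov, n)
    for _ in range(sweeps):
        for lo, hi in (dom1, dom2):
            m = hi - lo + 1
            # -u_{i-1} + 2u_i - u_{i+1} = h^2 f_i on the subdomain, with
            # Dirichlet data taken from the current global iterate
            d = [f[i - 1] * h * h for i in range(lo, hi + 1)]
            d[0] += u[lo - 1]
            d[-1] += u[hi + 1]
            u[lo:hi + 1] = thomas([-1.0] * m, [2.0] * m, [-1.0] * m, d)
    return u, h

u, h = schwarz_poisson()
# exact solution of -u'' = 1 with zero boundary values is u(x) = x(1-x)/2
err = max(abs(u[i] - (i * h) * (1 - i * h) / 2) for i in range(len(u)))
print(err)
```

    Each sweep solves the two overlapping subdomain problems in turn, exchanging Dirichlet data across the overlap; a generous overlap gives rapid convergence, which is the property domain-decomposition preconditioners exploit at scale.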

  6. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems, and though these algorithms exhibit excellent scalability, significant algorithmic and implementation challenges exist in extending them to solve extreme-scale stochastic systems on emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both the spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain-level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.

  7. Domain decomposition for aerodynamic and aeroacoustic analyses, and optimization

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay

    1995-01-01

    The overarching theme was domain decomposition, which was intended to improve the numerical solution technique for the partial differential equations at hand; in the present study, those that governed either the fluid flow, the aeroacoustic wave propagation, or the sensitivity analysis for a gradient-based optimization. The role of the domain decomposition extended beyond the original impetus of discretizing geometrically complex regions or writing modular software for distributed-hardware computers. It induced function-space decompositions and operator decompositions that offered the valuable property of near independence of operator evaluation tasks. The objectives gravitated around the extensions and implementations of methodologies either previously developed or concurrently being developed: (1) aerodynamic sensitivity analysis with domain decomposition (SADD); (2) computational aeroacoustics of cavities; and (3) dynamic, multibody computational fluid dynamics using unstructured meshes.

  8. Cerebellar contributions to motor control and language comprehension: searching for common computational principles.

    PubMed

    Moberget, Torgeir; Ivry, Richard B

    2016-04-01

    The past 25 years have seen the functional domain of the cerebellum extend beyond the realm of motor control, with considerable discussion of how this subcortical structure contributes to cognitive domains including attention, memory, and language. Drawing on evidence from neuroanatomy, physiology, neuropsychology, and computational work, sophisticated models have been developed to describe cerebellar function in sensorimotor control and learning. In contrast, mechanistic accounts of how the cerebellum contributes to cognition have remained elusive. Inspired by the homogeneous cerebellar microanatomy and a desire for parsimony, many researchers have sought to extend mechanistic ideas from motor control to cognition. One influential hypothesis centers on the idea that the cerebellum implements internal models, representations of the context-specific dynamics of an agent's interactions with the environment, enabling predictive control. We briefly review cerebellar anatomy and physiology and the internal model hypothesis as applied in the motor domain, before turning to extensions of these ideas in the linguistic domain, focusing on speech perception and semantic processing. While recent findings are consistent with this computational generalization, they also raise challenging questions regarding the nature of cerebellar learning, and may thus inspire revisions of our views on the role of the cerebellum in sensorimotor control. © 2016 New York Academy of Sciences.

  9. A fictitious domain approach for the Stokes problem based on the extended finite element method

    NASA Astrophysics Data System (ADS)

    Court, Sébastien; Fournié, Michel; Lozinski, Alexei

    2014-01-01

    In the present work, we propose to extend to the Stokes problem a fictitious domain approach inspired by the eXtended Finite Element Method and studied for the Poisson problem in [Renard]. The method allows computations in domains whose boundaries do not match the mesh. A mixed finite element method is used for the fluid flow. The interface between the fluid and the structure is localized by a level-set function. Dirichlet boundary conditions are taken into account using a Lagrange multiplier. A stabilization term is introduced to improve the approximation of the normal trace of the Cauchy stress tensor at the interface and to avoid an inf-sup condition between the spaces for the velocity and the Lagrange multiplier. A convergence analysis is given and several numerical tests are performed to illustrate the capabilities of the method.

  10. Noise Radiation From a Leading-Edge Slat

    NASA Technical Reports Server (NTRS)

    Lockard, David P.; Choudhari, Meelan M.

    2009-01-01

    This paper extends our previous computations of unsteady flow within the slat cove region of a multi-element high-lift airfoil configuration, which showed that both statistical and structural aspects of the experimentally observed unsteady flow behavior can be captured via 3D simulations over a computational domain of narrow spanwise extent. Although such narrow domain simulation can account for the spanwise decorrelation of the slat cove fluctuations, the resulting database cannot be applied towards acoustic predictions of the slat without invoking additional approximations to synthesize the fluctuation field over the rest of the span. This deficiency is partially alleviated in the present work by increasing the spanwise extent of the computational domain from 37.3% of the slat chord to nearly 226% (i.e., 15% of the model span). The simulation database is used to verify consistency with previous computational results and, then, to develop predictions of the far-field noise radiation in conjunction with a frequency-domain Ffowcs-Williams Hawkings solver.

  11. Green's function methods in heavy ion shielding

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Costen, Robert C.; Shinn, Judy L.; Badavi, Francis F.

    1993-01-01

    An analytic solution to the heavy ion transport in terms of Green's function is used to generate a highly efficient computer code for space applications. The efficiency of the computer code is accomplished by a nonperturbative technique extending Green's function over the solution domain. The computer code can also be applied to accelerator boundary conditions to allow code validation in laboratory experiments.

  12. Parallel deterministic transport sweeps of structured and unstructured meshes with overloaded mesh decompositions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pautz, Shawn D.; Bailey, Teresa S.

    Here, the efficiency of discrete ordinates transport sweeps depends on the scheduling algorithm, the domain decomposition, the problem to be solved, and the computational platform. Sweep scheduling algorithms may be categorized by their approach to several issues. In this paper we examine the strategy of domain overloading for mesh partitioning as one of the components of such algorithms. In particular, we extend the domain overloading strategy, previously defined and analyzed for structured meshes, to the general case of unstructured meshes. We also present computational results for both the structured and unstructured domain overloading cases. We find that an appropriate amount of domain overloading can greatly improve the efficiency of parallel sweeps for both structured and unstructured partitionings of the test problems examined on up to 10^5 processor cores.
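
    As a toy illustration of the overloading concept only (not the authors' scheduling algorithm), a mesh can be partitioned into more subdomains than processors, with several subdomains then assigned to each processor; the round-robin policy and all names below are assumptions made for the sketch.

```python
# Toy "domain overloading" sketch: split a mesh into n_procs * overload
# subdomains, then give each processor `overload` of them round-robin.

def overloaded_partition(n_cells, n_procs, overload):
    n_subdomains = n_procs * overload
    # split cells as evenly as possible into subdomains
    base, extra = divmod(n_cells, n_subdomains)
    subdomains, start = [], 0
    for s in range(n_subdomains):
        size = base + (1 if s < extra else 0)
        subdomains.append(list(range(start, start + size)))
        start += size
    # round-robin assignment: each processor receives `overload` subdomains
    assignment = {p: [] for p in range(n_procs)}
    for s, cells in enumerate(subdomains):
        assignment[s % n_procs].append(cells)
    return assignment

assignment = overloaded_partition(n_cells=20, n_procs=2, overload=3)
for proc, doms in assignment.items():
    print(proc, [len(d) for d in doms])
```

    The point of overloading is that a processor blocked on one subdomain's sweep dependencies can work on another of its subdomains in the meantime.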

  13. Applied Routh approximation

    NASA Technical Reports Server (NTRS)

    Merrill, W. C.

    1978-01-01

    The Routh approximation technique for reducing the complexity of system models was applied in the frequency domain to a 16th-order state-variable model of the F100 engine and to a 43rd-order transfer-function model of a launch vehicle boost pump pressure regulator. The results motivate extending the frequency-domain formulation of the Routh method to the time domain in order to handle the state-variable formulation directly. The time-domain formulation was derived and a characterization that specifies all possible Routh similarity transformations was given. The characterization was computed by solving two eigenvalue-eigenvector problems. The application of the time-domain Routh technique to the state-variable engine model is described, and some results are given. Additional computational problems are discussed, including an optimization procedure that can improve the approximation accuracy by taking advantage of the transformation characterization.

  14. Parallel deterministic transport sweeps of structured and unstructured meshes with overloaded mesh decompositions

    DOE PAGES

    Pautz, Shawn D.; Bailey, Teresa S.

    2016-11-29

    Here, the efficiency of discrete ordinates transport sweeps depends on the scheduling algorithm, the domain decomposition, the problem to be solved, and the computational platform. Sweep scheduling algorithms may be categorized by their approach to several issues. In this paper we examine the strategy of domain overloading for mesh partitioning as one of the components of such algorithms. In particular, we extend the domain overloading strategy, previously defined and analyzed for structured meshes, to the general case of unstructured meshes. We also present computational results for both the structured and unstructured domain overloading cases. We find that an appropriate amount of domain overloading can greatly improve the efficiency of parallel sweeps for both structured and unstructured partitionings of the test problems examined on up to 10^5 processor cores.

  15. Finite difference time domain electromagnetic scattering from frequency-dependent lossy materials

    NASA Technical Reports Server (NTRS)

    Luebbers, Raymond J.; Beggs, John H.

    1991-01-01

    Four different FDTD computer codes and companion Radar Cross Section (RCS) conversion codes on magnetic media are submitted. A single three-dimensional dispersive FDTD code for both dispersive dielectric and magnetic materials was developed, along with a user's manual. The extension of FDTD to more complicated materials was made. The code is efficient and is capable of modeling interesting radar targets using a modest computer workstation platform. RCS results for two different plate geometries are reported. The FDTD method was also extended to computing far-zone time domain results in two dimensions. The capability to model nonlinear materials was also incorporated into FDTD and validated.
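
    The leapfrog update that FDTD codes build on can be sketched in one dimension. The following is a minimal non-dispersive free-space loop with a hard Gaussian source; the frequency-dependent materials treated in the report require an additional recursive-convolution or auxiliary-equation update that is omitted here, and all grid and source parameters are illustrative.

```python
# Minimal 1D FDTD (Yee leapfrog) sketch in normalized free-space units with
# a hard Gaussian source. Courant number 0.5 is folded into the updates.

import math

def fdtd_1d(n_cells=200, n_steps=300, src=100):
    ez = [0.0] * n_cells               # electric field at integer nodes
    hy = [0.0] * n_cells               # magnetic field at half nodes
    for t in range(n_steps):
        # update H from the spatial difference (curl) of E
        for k in range(n_cells - 1):
            hy[k] += 0.5 * (ez[k + 1] - ez[k])
        # update E from the spatial difference (curl) of H
        for k in range(1, n_cells):
            ez[k] += 0.5 * (hy[k] - hy[k - 1])
        # hard Gaussian pulse source at the center cell
        ez[src] = math.exp(-0.5 * ((t - 30.0) / 10.0) ** 2)
    return ez

ez = fdtd_1d()
print(max(abs(e) for e in ez))
```

    The untouched end cells act as perfectly conducting walls, so the pulse reflects rather than leaving the grid; real codes add absorbing boundaries and, for dispersive media, a material-update step between the two curl updates.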

  16. Domain decomposition: A bridge between nature and parallel computers

    NASA Technical Reports Server (NTRS)

    Keyes, David E.

    1992-01-01

    Domain decomposition is an intuitive organizing principle for a partial differential equation (PDE) computation, both physically and architecturally. However, its significance extends beyond the readily apparent issues of geometry and discretization, on one hand, and of modular software and distributed hardware, on the other. Engineering and computer science aspects are bridged by an old but recently enriched mathematical theory that offers the subject not only unity, but also tools for analysis and generalization. Domain decomposition induces function-space and operator decompositions with valuable properties. Function-space bases and operator splittings that are not derived from domain decompositions generally lack one or more of these properties. The evolution of domain decomposition methods for elliptically dominated problems has linked two major algorithmic developments of the last 15 years: multilevel and Krylov methods. Domain decomposition methods may be considered descendants of both classes with an inheritance from each: they are nearly optimal and at the same time efficiently parallelizable. Many computationally driven application areas are ripe for these developments. A progression is made from a mathematically informal motivation for domain decomposition methods to a specific focus on fluid dynamics applications. To be introductory rather than comprehensive, simple examples are provided while convergence proofs and algorithmic details are left to the original references; however, an attempt is made to convey their most salient features, especially where this leads to algorithmic insight.

  17. An immersed-shell method for modelling fluid–structure interactions

    PubMed Central

    Viré, A.; Xiang, J.; Pain, C. C.

    2015-01-01

    The paper presents a novel method for numerically modelling fluid–structure interactions. The method consists of solving the fluid-dynamics equations on an extended domain, where the computational mesh covers both fluid and solid structures. The fluid and solid velocities are relaxed to one another through a penalty force. The latter acts on a thin shell surrounding the solid structures. Additionally, the shell is represented on the extended domain by a non-zero shell-concentration field, which is obtained by conservatively mapping the shell mesh onto the extended mesh. The paper outlines the theory underpinning this novel method, referred to as the immersed-shell approach. It also shows how the coupling between a fluid- and a structural-dynamics solver is achieved. At this stage, results are shown for cases of fundamental interest. PMID:25583857

  18. High-speed extended-term time-domain simulation for online cascading analysis of power system

    NASA Astrophysics Data System (ADS)

    Fu, Chuan

    A high-speed extended-term (HSET) time domain simulator (TDS), intended to become a part of an energy management system (EMS), has been newly developed for use in online extended-term dynamic cascading analysis of power systems. HSET-TDS includes the following attributes for providing situational awareness of high-consequence events: (i) online analysis, including n-1 and n-k events, (ii) ability to simulate both fast and slow dynamics for 1-3 hours in advance, (iii) inclusion of rigorous protection-system modeling, (iv) intelligence for corrective action identification, storage, and fast retrieval, and (v) high-speed execution. Very fast online computational capability is the most desired attribute of this simulator. Based on the process of solving the differential-algebraic equations describing the dynamics of a power system, HSET-TDS seeks computational efficiency at each of the following hierarchical levels: (i) hardware, (ii) strategies, (iii) integration methods, (iv) nonlinear solvers, and (v) linear solver libraries. This thesis first describes the Hammer-Hollingsworth 4 (HH4) implicit integration method. Like the trapezoidal rule, HH4 is symmetrically A-stable, but it possesses greater high-order precision (h^4) than the trapezoidal rule. Such precision enables larger integration steps and therefore improves simulation efficiency for variable-step-size implementations. This thesis provides the underlying theory on which we advocate use of HH4 over other numerical integration methods for power system time-domain simulation. Second, motivated by the need to perform high-speed extended-term time domain simulation (HSET-TDS) for online purposes, this thesis presents principles for designing numerical solvers of the differential-algebraic systems associated with power system time-domain simulation, including DAE construction strategies (direct solution method), integration methods (HH4), nonlinear solvers (very dishonest Newton), and linear solvers (SuperLU).
    We have implemented a design appropriate for HSET-TDS, and we compare it to various solvers, including the commercial-grade PSSE program, with respect to computational efficiency and accuracy, using as examples the New England 39-bus system, the expanded 8775-bus system, and the PJM 13029-bus system. Third, we have explored a stiffness-decoupling method, intended to be part of a parallel design of time domain simulation software for supercomputers. The stiffness-decoupling method is able to combine the advantages of implicit methods (A-stability) and explicit methods (less computation). With the new stiffness detection method proposed herein, the stiffness can be captured. The expanded 975-bus system is used to test simulation efficiency. Finally, several parallel strategies for supercomputer deployment to simulate power system dynamics are proposed and compared. Design A partitions the task via scale with the stiffness-decoupling method, waveform relaxation, and a parallel linear solver. Design B partitions the task via the time axis using a highly precise integration method, the Kuntzmann-Butcher method of order 8 (KB8). The strategy of partitioning events is designed to partition the whole simulation via the time axis through a simulated sequence of cascading events. Of all the strategies proposed, the strategy of partitioning cascading events is recommended, since the sub-tasks for each processor are totally independent and therefore minimum communication time is needed.
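
    HH4 itself is too involved for a short example, but the A-stability property the thesis emphasizes can be seen on the scalar stiff test equation y' = λy: an A-stable implicit method stays bounded at step sizes where explicit Euler diverges. The trapezoidal rule stands in for HH4 here, and the numbers are illustrative.

```python
# Why A-stable implicit integrators matter for stiff dynamics: on y' = -1000y
# explicit Euler blows up at a step size where the A-stable trapezoidal rule
# stays bounded. (HH4 shares A-stability but has order 4; not shown here.)

lam, h, steps = -1000.0, 0.01, 50   # h far above Euler's stability limit 2/1000

y_exp, y_trap = 1.0, 1.0
for _ in range(steps):
    y_exp += h * lam * y_exp        # explicit Euler: amplification 1 + h*lam
    # trapezoidal rule, solved exactly for this linear problem
    y_trap = y_trap * (1 + h * lam / 2) / (1 - h * lam / 2)

print(abs(y_exp), abs(y_trap))
```

    The explicit iterate is multiplied by |1 + hλ| = 9 each step and explodes, while the trapezoidal iterate decays; this is the property that lets implicit schemes take the large steps variable-step simulators rely on.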

  19. Magnetically Defined Qubits on 3D Topological Insulators

    NASA Astrophysics Data System (ADS)

    Ferreira, Gerson J.; Loss, Daniel

    2014-03-01

    We explore potentials that break time-reversal symmetry to confine the surface states of 3D topological insulators into quantum wires and quantum dots. A magnetic domain wall on a ferromagnet insulator cap layer provides interfacial states predicted to show the quantum anomalous Hall effect (QAHE). Here, we show that confinement can also occur at magnetic domain heterostructures, with states extended in the inner domain, as well as interfacial QAHE states at the surrounding domain walls. The proposed geometry allows the isolation of the wire and dot from spurious circumventing surface states. For the quantum dots, we find that highly spin-polarized quantized QAHE states at the dot edge constitute a promising candidate for quantum computing qubits. See [Ferreira and Loss, Phys. Rev. Lett. 111, 106802 (2013)]. We acknowledge support from the Swiss NSF, NCCR Nanoscience, NCCR QSIT, and the Brazilian Research Support Center Initiative (NAP Q-NANO) from Pró-Reitoria de Pesquisa (PRP/USP).

  20. Enabling fast, stable and accurate peridynamic computations using multi-time-step integration

    DOE PAGES

    Lindsay, P.; Parks, M. L.; Prakash, A.

    2016-04-13

    Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, performance of the proposed method in terms of stability, accuracy, and computational cost is examined and several numerical examples are presented to corroborate the findings.
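
    The multi-time-step idea can be sketched on a toy mass-spring chain: a "region of interest" subdomain is advanced with several small substeps per coarse step of the rest of the domain. The frozen-interface coupling and symplectic-Euler updates below are simplifying assumptions for illustration, not the paper's peridynamic formulation.

```python
# Toy multi-time-step integration: the left half of a unit mass-spring chain
# subcycles with m fine substeps per coarse step of the right half.

def spring_force(x, i):
    # unit masses and stiffness, fixed walls at both ends of the chain
    left = x[i - 1] if i > 0 else 0.0
    right = x[i + 1] if i < len(x) - 1 else 0.0
    return (left - x[i]) + (right - x[i])

def mts_step(x, v, fine, coarse, dt, m):
    for _ in range(m):                 # fine subdomain: m symplectic-Euler
        for i in fine:                 # substeps, interface neighbor frozen
            v[i] += (dt / m) * spring_force(x, i)
        for i in fine:
            x[i] += (dt / m) * v[i]
    for i in coarse:                   # coarse subdomain: one big step
        v[i] += dt * spring_force(x, i)
    for i in coarse:
        x[i] += dt * v[i]

n, m, dt = 10, 4, 0.05
x = [0.0] * n
x[2] = 0.1                             # small initial displacement
v = [0.0] * n
fine, coarse = list(range(n // 2)), list(range(n // 2, n))
for _ in range(200):
    mts_step(x, v, fine, coarse, dt, m)
print(max(abs(xi) for xi in x))
```

    The fine subdomain resolves the disturbance accurately while the coarse half does a quarter of the force evaluations, which is the cost/accuracy trade the paper formalizes for peridynamics.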

  1. Direct Computation of Sound Radiation by Jet Flow Using Large-scale Equations

    NASA Technical Reports Server (NTRS)

    Mankbadi, R. R.; Shih, S. H.; Hixon, D. R.; Povinelli, L. A.

    1995-01-01

    Jet noise is directly predicted using large-scale equations. The computational domain is extended in order to directly capture the radiated field. As in conventional large-eddy simulations, the effect of the unresolved scales on the resolved ones is accounted for. Special attention is given to boundary treatment to avoid spurious modes that can render the computed fluctuations totally unacceptable. Results are presented for a supersonic jet at Mach number 2.1.

  2. A fast parallel 3D Poisson solver with longitudinal periodic and transverse open boundary conditions for space-charge simulations

    NASA Astrophysics Data System (ADS)

    Qiang, Ji

    2017-10-01

    A three-dimensional (3D) Poisson solver with longitudinal periodic and transverse open boundary conditions can have important applications in the beam physics of particle accelerators. In this paper, we present a fast, efficient method to solve the Poisson equation using a spectral finite-difference method. This method uses a computational domain that contains the charged particle beam only and has a computational complexity of O(N_u log(N_mode)), where N_u is the total number of unknowns and N_mode is the maximum number of longitudinal or azimuthal modes. This saves both the computational time and the memory that would be needed to impose an artificial boundary condition on a large extended computational domain. The new 3D Poisson solver is parallelized using the message passing interface (MPI) on multi-processor computers and shows reasonable parallel performance up to hundreds of processor cores.
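
    The spectral idea is easiest to see in a stripped-down 1D periodic analogue: divide by -k^2 in Fourier space. This is an illustration only, not the paper's solver, which combines longitudinal and azimuthal Fourier modes with a finite-difference transverse treatment and open boundaries; a plain O(N^2) DFT is used here for brevity where an FFT would give the quoted O(N_u log N_mode)-type cost.

```python
# 1D periodic Poisson sketch: solve u'' = f on [0, 1) by dividing the
# Fourier coefficients of f by -(2*pi*k)^2. Hand-rolled O(N^2) DFT.

import cmath, math

def dft(a, sign):
    n = len(a)
    return [sum(a[j] * cmath.exp(sign * 2j * math.pi * k * j / n)
                for j in range(n)) for k in range(n)]

def poisson_periodic_1d(f):
    n = len(f)
    fh = dft(f, -1)                    # forward transform
    uh = [0.0] * n
    for k in range(1, n):              # k = 0 mode pinned (zero-mean u)
        kk = k if k <= n // 2 else k - n   # signed wavenumber
        uh[k] = fh[k] / (-(2 * math.pi * kk) ** 2)
    return [val.real / n for val in dft(uh, +1)]   # inverse transform

n = 64
f = [math.sin(2 * math.pi * j / n) for j in range(n)]
u = poisson_periodic_1d(f)
# exact solution for this f is u(x) = -sin(2*pi*x) / (2*pi)^2
exact = [-math.sin(2 * math.pi * j / n) / (2 * math.pi) ** 2 for j in range(n)]
err = max(abs(a - b) for a, b in zip(u, exact))
print(err)
```

    For a single-mode right-hand side the spectral solve is exact to rounding error, which is why mode-space solvers pay off when the beam occupies a small domain.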

  3. Phonological universals constrain the processing of nonspeech stimuli.

    PubMed

    Berent, Iris; Balaban, Evan; Lennertz, Tracy; Vaknin-Nusbaum, Vered

    2010-08-01

    Domain-specific systems are hypothetically specialized with respect to the outputs they compute and the inputs they allow (Fodor, 1983). Here, we examine whether these 2 conditions for specialization are dissociable. An initial experiment suggests that English speakers could extend a putatively universal phonological restriction to inputs identified as nonspeech. A subsequent comparison of English and Russian participants indicates that the processing of nonspeech inputs is modulated by linguistic experience. Striking qualitative differences between English and Russian participants suggest that they rely on linguistic principles, both universal and language-particular, rather than generic auditory processing strategies. Thus, the computation of idiosyncratic linguistic outputs is apparently not restricted to speech inputs. This conclusion presents various challenges to both domain-specific and domain-general accounts of cognition. © 2010 APA, all rights reserved.

  4. Scalable Parallel Computation for Extended MHD Modeling of Fusion Plasmas

    NASA Astrophysics Data System (ADS)

    Glasser, Alan H.

    2008-11-01

    Parallel solution of a linear system is scalable if simultaneously doubling the number of dependent variables and the number of processors results in little or no increase in the computation time to solution. Two approaches have this property for parabolic systems: multigrid and domain decomposition. Since extended MHD is primarily a hyperbolic rather than a parabolic system, additional steps must be taken to parabolize the linear system to be solved by such a method. Such physics-based preconditioning (PBP) methods have been pioneered by Chacón, using finite volumes for spatial discretization, multigrid for solution of the preconditioning equations, and matrix-free Newton-Krylov methods for the accurate solution of the full nonlinear preconditioned equations. The work described here is an extension of these methods using high-order spectral element methods and FETI-DP domain decomposition. Application of PBP to a flux-source representation of the physics equations is discussed. The resulting scalability will be demonstrated for simple waves and for ideal and Hall MHD waves.

  5. Large Eddy Simulation in the Computation of Jet Noise

    NASA Technical Reports Server (NTRS)

    Mankbadi, R. R.; Goldstein, M. E.; Povinelli, L. A.; Hayder, M. E.; Turkel, E.

    1999-01-01

    Noise can be predicted by solving the full (time-dependent) compressible Navier-Stokes equations (FCNSE) with the computational domain extended to the far field. The fluctuating near field of the jet produces propagating pressure waves that generate the far-field sound, so the fluctuating flow field as a function of time is needed in order to calculate the sound from first principles. At the high Reynolds numbers of technological interest, however, turbulence has a large range of scales, and direct numerical simulation (DNS) cannot capture the small scales, so solving the full equations over a domain extended to the far field is not feasible. Because the large scales are more efficient radiators of sound than the small scales, the emphasis is on calculating the sound radiated by the large scales.

  6. US Army Weapon Systems Human-Computer Interface (WSHCI) style guide, Version 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avery, L.W.; O'Mara, P.A.; Shepard, A.P.

    1996-09-30

    A stated goal of the U.S. Army has been the standardization of the human-computer interfaces (HCIs) of its systems. Some of the tools being used to accomplish this standardization are HCI design guidelines and style guides. Currently, the Army is employing a number of style guides. While these style guides provide good guidance for the command, control, communications, computers, and intelligence (C4I) domain, they do not necessarily represent the more unique requirements of the Army's real-time and near-real-time (RT/NRT) weapon systems. The Office of the Director of Information for Command, Control, Communications, and Computers (DISC4), in conjunction with the Weapon Systems Technical Architecture Working Group (WSTAWG), recognized this need as part of their activities to revise the Army Technical Architecture (ATA). To address this need, DISC4 tasked the Pacific Northwest National Laboratory (PNNL) to develop an Army weapon systems unique HCI style guide. This document, the U.S. Army Weapon Systems Human-Computer Interface (WSHCI) Style Guide, represents the first version of that style guide. The purpose of this document is to provide HCI design guidance for RT/NRT Army systems across the weapon systems domains of ground, aviation, missile, and soldier systems. Each domain should customize and extend this guidance by developing domain-specific style guides, which will be used to guide the development of future systems within their domains.

  7. Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake

    Complex physics and long computation time hinder the adoption of computer aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation of time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to particle-, electrode- and cell-length scales treated in the previous work, the present formulation extends to bus bar- and multi-cell module-length scales. We provide example simulations for several variants of GH electrode-domain models.

  8. Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD

    DOE PAGES

    Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake; ...

    2017-03-24

    Complex physics and long computation time hinder the adoption of computer aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation of time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to particle-, electrode- and cell-length scales treated in the previous work, the present formulation extends to bus bar- and multi-cell module-length scales. We provide example simulations for several variants of GH electrode-domain models.

  9. Computation of the acoustic radiation force using the finite-difference time-domain method.

    PubMed

    Cai, Feiyan; Meng, Long; Jiang, Chunxiang; Pan, Yu; Zheng, Hairong

    2010-10-01

    The computational details related to calculating the acoustic radiation force on an object using a 2-D grid finite-difference time-domain (FDTD) method are presented. The method is based on propagating the stress and velocity fields through the grid and determining the energy flow with and without the object. The axial and radial acoustic radiation forces predicted by the FDTD method are in excellent agreement with the results obtained by analytical evaluation of the scattering method. In particular, the results indicate that it is possible to trap the steel cylinder in the radial direction by optimizing the width of the Gaussian source and the operating frequency. Since the sizes of the objects involved are smaller than or comparable to the wavelength, the algorithm presented here can be easily extended to 3-D and to include torque computation, thus providing a highly flexible and universally usable computation engine.
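    The core of such a computation is the leapfrog FDTD update that propagates pressure/stress and velocity fields through a staggered grid. A minimal 1-D acoustic sketch of that structure (the cited work uses a full 2-D stress-velocity grid; all parameters here are illustrative):

```python
# 1-D acoustic FDTD leapfrog update: pressure p at cell centers, velocity u
# at cell faces. A sketch of the scheme's structure only; the cited work
# uses a 2-D stress/velocity grid. Parameters are illustrative (air-like).
n = 200
c, rho, dx = 343.0, 1.2, 0.01          # sound speed, density, cell size
dt = 0.5 * dx / c                      # CFL-stable time step (Courant 0.5)
p = [0.0] * n                          # pressure field
u = [0.0] * (n + 1)                    # velocity field (staggered)
p[n // 2] = 1.0                        # initial pressure pulse at center

for step in range(100):
    for i in range(1, n):              # velocity update from pressure gradient
        u[i] -= dt / (rho * dx) * (p[i] - p[i - 1])
    for i in range(n):                 # pressure update from velocity divergence
        p[i] -= dt * rho * c * c / dx * (u[i + 1] - u[i])

print(max(abs(x) for x in p))          # pulse has split and propagated outward
```

Summing the momentum flux through surfaces enclosing the object, with and without the object present, then yields the radiation force in the full method.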

  10. Finite difference time domain calculation of transients in antennas with nonlinear loads

    NASA Technical Reports Server (NTRS)

    Luebbers, Raymond J.; Beggs, John H.; Kunz, Karl S.; Chamberlin, Kent

    1991-01-01

    In this paper, transient fields for antennas with more general geometries are calculated directly using Finite Difference Time Domain (FDTD) methods. In each FDTD cell which contains a nonlinear load, a nonlinear equation is solved at each time step. As a test case, the transient current in a long dipole antenna with a nonlinear load excited by a pulsed plane wave is computed using this approach. The results agree well with both calculated and measured results previously published. The approach given here extends the applicability of the FDTD method to problems involving scattering from targets including nonlinear loads and materials, and to coupling between antennas containing nonlinear loads. It may also be extended to propagation through nonlinear materials.

  11. Methods and principles for determining task dependent interface content

    NASA Technical Reports Server (NTRS)

    Shalin, Valerie L.; Geddes, Norman D.; Mikesell, Brian G.

    1992-01-01

    Computer generated information displays provide a promising technology for offsetting the increasing complexity of the National Airspace System. To realize this promise, however, we must extend and adapt the domain-dependent knowledge that informally guides the design of traditional dedicated displays. In our view, the successful exploitation of computer generated displays revolves around the idea of information management, that is, the identification, organization, and presentation of relevant and timely information in a complex task environment. The program of research that is described leads to methods and principles for information management in the domain of commercial aviation. The multi-year objective of the proposed program of research is to develop methods and principles for determining task dependent interface content.

  12. Conical : An extended module for computing a numerically satisfactory pair of solutions of the differential equation for conical functions

    NASA Astrophysics Data System (ADS)

    Dunster, T. M.; Gil, A.; Segura, J.; Temme, N. M.

    2017-08-01

    Conical functions appear in a large number of applications in physics and engineering. In this paper we describe an extension of our module Conical (Gil et al., 2012) for the computation of conical functions. Specifically, the module now includes a routine for computing the function R^m_{-1/2+iτ}(x), a real-valued numerically satisfactory companion of the function P^m_{-1/2+iτ}(x) for x > 1. In this way, a natural basis for solving Dirichlet problems bounded by conical domains is provided. The module also improves the performance of our previous algorithm for the conical function P^m_{-1/2+iτ}(x), and it now includes the computation of the first-order derivative of the function. This is also provided for the function R^m_{-1/2+iτ}(x) in the extended algorithm.

  13. Computation of forces arising from the polarizable continuum model within the domain-decomposition paradigm

    NASA Astrophysics Data System (ADS)

    Gatto, Paolo; Lipparini, Filippo; Stamm, Benjamin

    2017-12-01

    The domain-decomposition (dd) paradigm, originally introduced for the conductor-like screening model, has been recently extended to the dielectric Polarizable Continuum Model (PCM), resulting in the ddPCM method. We present here a complete derivation of the analytical derivatives of the ddPCM energy with respect to the positions of the solute's atoms and discuss their efficient implementation. As it is the case for the energy, we observe a quadratic scaling, which is discussed and demonstrated with numerical tests.

  14. Autopoiesis + extended cognition + nature = can buildings think?

    PubMed Central

    Dollens, Dennis

    2015-01-01

    To incorporate metabolic, bioremedial functions into the performance of buildings and to balance generative architecture's dominant focus on computational programming and digital fabrication, this text first discusses hybridizing Maturana and Varela's biological theory of autopoiesis with Andy Clark's hypothesis of extended cognition. Doing so establishes a procedural protocol to research biological domains from which design could source data/insight from biosemiotics, sensory plants, and biocomputation. I trace computation and botanic simulations back to Alan Turing's little-known 1950s Morphogenetic drawings, reaction-diffusion algorithms, and pioneering artificial intelligence (AI) in order to establish bioarchitecture's generative point of origin. I ask provocatively, Can buildings think? as a question echoing Turing's own, "Can machines think?" PMID:26478784

  15. Dislocation dynamics in non-convex domains using finite elements with embedded discontinuities

    NASA Astrophysics Data System (ADS)

    Romero, Ignacio; Segurado, Javier; LLorca, Javier

    2008-04-01

    The standard strategy developed by Van der Giessen and Needleman (1995 Modelling Simul. Mater. Sci. Eng. 3 689) to simulate dislocation dynamics in two-dimensional finite domains was modified to account for the effect of dislocations leaving the crystal through a free surface in the case of arbitrary non-convex domains. The new approach incorporates the displacement jumps across the slip segments of the dislocations that have exited the crystal within the finite element analysis carried out to compute the image stresses on the dislocations due to the finite boundaries. This is done in a simple computationally efficient way by embedding the discontinuities in the finite element solution, a strategy often used in the numerical simulation of crack propagation in solids. Two academic examples are presented to validate and demonstrate the extended model and its implementation within a finite element program is detailed in the appendix.

  16. Time domain analysis of thin-wire antennas over lossy ground using the reflection-coefficient approximation

    NASA Astrophysics Data System (ADS)

    Fernández Pantoja, M.; Yarovoy, A. G.; Rubio Bretones, A.; González García, S.

    2009-12-01

    This paper presents a procedure to extend the method of moments in the time domain for the transient analysis of thin-wire antennas to include those cases where the antennas are located over a lossy half-space. This extended technique is based on the reflection coefficient (RC) approach, which approximates the fields incident on the ground interface as plane waves and calculates the time domain RC using the inverse Fourier transform of the Fresnel equations. The implementation presented in this paper uses general expressions for the RC which extend its range of applicability to lossy grounds, and is proven to be accurate and fast for antennas located not too near the ground. The resulting general-purpose procedure, able to treat arbitrarily oriented thin-wire antennas, is appropriate for all kinds of half-spaces, including lossy cases, and it has turned out to be as computationally fast when solving the problem of an arbitrary ground as when dealing with a perfect electric conductor ground plane. Results show a numerical validation of the method for different half-spaces, paying special attention to the influence of the antenna-to-ground distance on the accuracy of the results.
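    The RC approach evaluates the Fresnel reflection coefficient with a complex permittivity that accounts for ground loss; the cited method then obtains the time domain RC by inverse Fourier transform over frequency. A sketch of the frequency-domain evaluation only (TE polarization; the ground parameters are illustrative):

```python
import cmath
import math

def fresnel_rc_te(theta, eps_r, sigma, freq):
    """TE (horizontal) Fresnel reflection coefficient of a lossy half-space.
    Loss enters through the complex relative permittivity
    eps_c = eps_r - j*sigma/(omega*eps0)."""
    eps0 = 8.8541878128e-12
    omega = 2 * math.pi * freq
    eps_c = eps_r - 1j * sigma / (omega * eps0)
    cos_t = math.cos(theta)
    root = cmath.sqrt(eps_c - math.sin(theta) ** 2)
    return (cos_t - root) / (cos_t + root)

# Illustrative: moist-soil-like ground at 100 MHz, 45-degree incidence
r = fresnel_rc_te(math.radians(45), 15.0, 0.01, 100e6)
print(abs(r))                 # |R| <= 1 for a passive ground
# A highly conducting ground approaches a perfect electric conductor: R -> -1
r_pec = fresnel_rc_te(math.radians(45), 15.0, 1e6, 100e6)
print(r_pec)                  # ≈ -1
```

Sampling this coefficient over a band of frequencies and inverse-transforming gives the time-domain RC convolved with the incident field in the full method.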

  17. Applying Technology to Inquiry-Based Learning in Early Childhood Education

    ERIC Educational Resources Information Center

    Wang, Feng; Kinzie, Mable B.; McGuire, Patrick; Pan, Edward

    2009-01-01

    Children naturally explore and learn about their environments through inquiry, and computer technologies offer an accessible vehicle for extending the domain and range of this inquiry. Over the past decade, a growing number of interactive games and educational software packages have been implemented in early childhood education and addressed a…

  18. A flexible, extendable, modular and computationally efficient approach to scattering-integral-based seismic full waveform inversion

    NASA Astrophysics Data System (ADS)

    Schumacher, F.; Friederich, W.; Lamara, S.

    2016-02-01

    We present a new conceptual approach to scattering-integral-based seismic full waveform inversion (FWI) that allows a flexible, extendable, modular and both computationally and storage-efficient numerical implementation. To achieve maximum modularity and extendability, interactions between the three fundamental steps carried out sequentially in each iteration of the inversion procedure, namely, solving the forward problem, computing waveform sensitivity kernels and deriving a model update, are kept at an absolute minimum and are implemented by dedicated interfaces. To realize storage efficiency and maximum flexibility, the spatial discretization of the inverted earth model is allowed to be completely independent of the spatial discretization employed by the forward solver. For computational efficiency reasons, the inversion is done in the frequency domain. The benefits of our approach are as follows: (1) Each of the three stages of an iteration is realized by a stand-alone software program. In this way, we avoid the monolithic, inflexible and hard-to-modify codes that have often been written for solving inverse problems. (2) The solution of the forward problem, required for kernel computation, can be obtained by any wave propagation modelling code, giving users maximum flexibility in choosing the forward modelling method. Both time-domain and frequency-domain approaches can be used. (3) Forward solvers typically demand spatial discretizations that are significantly denser than actually desired for the inverted model. Exploiting this fact by pre-integrating the kernels allows a dramatic reduction of disk space and makes kernel storage feasible. No assumptions are made on the spatial discretization scheme employed by the forward solver. (4) In addition, working in the frequency domain effectively reduces the amount of data, the number of kernels to be computed and the number of equations to be solved. (5) Updating the model by solving a large equation system can be done using different mathematical approaches. Since kernels are stored on disk, it can be repeated many times for different regularization parameters without the need to solve the forward problem, making the approach accessible to Occam's method. Changes of the choice of misfit functional, weighting of data and selection of data subsets are still possible at this stage. We have coded our approach to FWI into a program package called ASKI (Analysis of Sensitivity and Kernel Inversion), which can be applied to inverse problems at various spatial scales in both Cartesian and spherical geometries. It is written in modern Fortran using object-oriented concepts that reflect the modular structure of the inversion procedure. We validate our FWI method by a small-scale synthetic study and present first results of its application to high-quality seismological data acquired in the southern Aegean.

  19. Downstream Effects on Orbiter Leeside Flow Separation for Hypersonic Flows

    NASA Technical Reports Server (NTRS)

    Buck, Gregory M.; Pulsonetti, Maria V.; Weilmuenster, K. James

    2005-01-01

    Discrepancies between experiment and computation for shuttle leeside flow separation, which came to light in the Columbia accident investigation, are resolved. Tests were run in the Langley Research Center 20-Inch Hypersonic CF4 Tunnel with a baseline orbiter model and two extended trailing edge models. The extended trailing edges altered the wing leeside separation lines, moving the lines toward the fuselage, proving that wing trailing edge modeling does affect the orbiter leeside flow. Computations were then made with a wake grid. These calculations more closely matched baseline experiments. Thus, the present findings demonstrate that it is imperative to include the wake flow domain in CFD calculations in order to accurately predict leeside flow separation for hypersonic vehicles at high angles of attack.

  20. Leading-edge effects on boundary-layer receptivity

    NASA Technical Reports Server (NTRS)

    Gatski, Thomas B.; Kerschen, Edward J.

    1990-01-01

    Numerical calculations are presented for the incompressible flow over a parabolic cylinder. The computational domain extends from a region upstream of the body downstream to the region where the Blasius boundary-layer solution holds. A steady mean flow solution is computed and the results for the scaled surface vorticity, surface pressure and displacement thickness are compared to previous studies. The unsteady problem is then formulated as a perturbation solution starting with and evolving from the mean flow. The response to irrotational time harmonic pulsation of the free-stream is examined. Results for the initial development of the velocity profile and displacement thickness are presented. These calculations will be extended to later times to investigate the initiation of instability waves within the boundary-layer.

  1. On the Assessment of Acoustic Scattering and Shielding by Time Domain Boundary Integral Equation Solutions

    NASA Technical Reports Server (NTRS)

    Hu, Fang Q.; Pizzo, Michelle E.; Nark, Douglas M.

    2016-01-01

    Based on the time domain boundary integral equation formulation of the linear convective wave equation, a computational tool dubbed Time Domain Fast Acoustic Scattering Toolkit (TD-FAST) has recently been under development. The time domain approach has a distinct advantage that the solutions at all frequencies are obtained in a single computation. In this paper, the formulation of the integral equation, as well as its stabilization by the Burton-Miller type reformulation, is extended to cases of a constant mean flow in an arbitrary direction. In addition, a "Source Surface" is also introduced in the formulation that can be employed to encapsulate regions of noise sources and to facilitate coupling with CFD simulations. This is particularly useful for applications where the noise sources are not easily described by analytical source terms. Numerical examples are presented to assess the accuracy of the formulation, including a computation of noise shielding by a thin barrier motivated by recent Historical Baseline F31A31 open rotor noise shielding experiments. Furthermore, spatial resolution requirements of the time domain boundary element method are also assessed using point per wavelength metrics. It is found that, using only constant basis functions and high-order quadrature for surface integration, relative errors of less than 2% may be obtained when the surface spatial resolution is 5 points-per-wavelength (PPW) or 25 points-per-wavelength squared (PPW2).

  2. PIES free boundary stellarator equilibria with improved initial conditions

    NASA Astrophysics Data System (ADS)

    Drevlak, M.; Monticello, D.; Reiman, A.

    2005-07-01

    The MFBE procedure developed by Strumberger (1997 Nucl. Fusion 37 19) is used to provide an improved starting point for free boundary equilibrium computations in the case of W7-X (Nührenberg and Zille 1986 Phys. Lett. A 114 129) using the Princeton iterative equilibrium solver (PIES) code (Reiman and Greenside 1986 Comput. Phys. Commun. 43 157). Transferring the consistent field found by the variational moments equilibrium code (VMEC) (Hirshmann and Whitson 1983 Phys. Fluids 26 3553) to an extended coordinate system using the VMORPH code, a safe margin between plasma boundary and PIES domain is established. The new EXTENDER_P code implements a generalization of the virtual casing principle, which allows field extension both for VMEC and PIES equilibria. This facilitates analysis of the 5/5 islands of the W7-X standard case without including them in the original PIES computation.

  3. Final Report from The University of Texas at Austin for DEGAS: Dynamic Global Address Space programming environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erez, Mattan; Yelick, Katherine; Sarkar, Vivek

    The Dynamic, Exascale Global Address Space programming environment (DEGAS) project will develop the next generation of programming models and runtime systems to meet the challenges of Exascale computing. Our approach is to provide an efficient and scalable programming model that can be adapted to application needs through the use of dynamic runtime features and domain-specific languages for computational kernels. We address the following technical challenges: Programmability: a rich set of programming constructs based on a Hierarchical Partitioned Global Address Space (HPGAS) model, demonstrated in UPC++. Scalability: hierarchical locality control, lightweight communication (extended GASNet), and efficient synchronization mechanisms (Phasers). Performance Portability: just-in-time specialization (SEJITS) for generating hardware-specific code and scheduling libraries for domain-specific adaptive runtimes (Habanero). Energy Efficiency: communication-optimal code generation to optimize energy efficiency by reducing data movement. Resilience: Containment Domains for flexible, domain-specific resilience, using state capture mechanisms and lightweight, asynchronous recovery mechanisms. Interoperability: runtime and language interoperability with MPI and OpenMP to encourage broad adoption.

  4. Dynamic Load-Balancing for Distributed Heterogeneous Computing of Parallel CFD Problems

    NASA Technical Reports Server (NTRS)

    Ecer, A.; Chien, Y. P.; Boenisch, T.; Akay, H. U.

    2000-01-01

    The developed methodology is aimed at improving the efficiency of executing block-structured algorithms on parallel, distributed, heterogeneous computers. The basic approach of these algorithms is to divide the flow domain into many subdomains called blocks, and solve the governing equations over these blocks. The dynamic load balancing problem is defined as the efficient distribution of the blocks among the available processors over a period of several hours of computations. In environments with computers of different architecture, operating systems, CPU speed, memory size, load, and network speed, balancing the loads and managing the communication between processors becomes crucial. Load balancing software tools for mutually dependent parallel processes have been created to efficiently utilize an advanced computation environment and algorithms. These tools are dynamic in nature because of the changes in the computer environment during execution time. More recently, these tools were extended to a second operating system: NT. In this paper, the problems associated with this application are discussed. Also, the developed algorithms were combined with the load-sharing capability of LSF to efficiently utilize workstation clusters for parallel computing. Finally, results are presented on running a NASA-based code, ADPAC, to demonstrate the developed tools for dynamic load balancing.
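    The block-distribution idea can be illustrated with a simple greedy heuristic: assign the largest remaining block to whichever processor would finish it soonest, given its speed and current load. This is only a sketch of the balancing principle, not the tool described in the paper:

```python
def assign_blocks(block_costs, proc_speeds):
    """Greedy LPT-style assignment of solver blocks to heterogeneous
    processors: each block goes to the processor that would complete its
    accumulated work soonest. A sketch of the balancing idea only."""
    loads = [0.0] * len(proc_speeds)            # accumulated time per processor
    assignment = [[] for _ in proc_speeds]
    for b in sorted(range(len(block_costs)),
                    key=lambda i: -block_costs[i]):   # biggest blocks first
        # pick the processor with the smallest finish time if given block b
        p = min(range(len(proc_speeds)),
                key=lambda q: loads[q] + block_costs[b] / proc_speeds[q])
        loads[p] += block_costs[b] / proc_speeds[p]
        assignment[p].append(b)
    return assignment, loads

blocks = [8.0, 6.0, 5.0, 4.0, 3.0, 2.0]         # work units per block
speeds = [2.0, 1.0, 1.0]                        # one fast, two slow processors
assignment, loads = assign_blocks(blocks, speeds)
print(loads)                                    # roughly equal finish times
```

A dynamic balancer would re-run such an assignment periodically as measured speeds and loads change during execution.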

  5. Proposal of an Extended Taxonomy of Serious Games for Health Rehabilitation.

    PubMed

    Rego, Paula Alexandra; Moreira, Pedro Miguel; Reis, Luís Paulo

    2018-06-29

    Serious Games is a field of research that has evolved substantially with valuable contributions to many application domains and areas. Patients often consider traditional rehabilitation approaches to be repetitive and boring, making it difficult for them to maintain their ongoing interest and assure the completion of the treatment program. Since the publication of our first taxonomy of Serious Games for Health Rehabilitation (SGHR), many studies have been published with game prototypes in this area. Based on literature review, our goal is to propose an updated taxonomy taking into account the works, updates, and innovations in game criteria that have been researched since our first publication in 2010. In addition, we aim to present the validation mechanism used for the proposed extended taxonomy. Based on a literature review in the area and on the analysis of the contributions made by other researchers, we propose an extended taxonomy for SGHR. For validating the taxonomy proposal, a questionnaire was designed to use on a survey among experts in the area. An extended taxonomy for SGHR was proposed. As we have identified that, in general, and besides the mechanisms associated with the adoption of a given taxonomy, there were no reported validation mechanisms for the proposals, we designed a mechanism to validate our proposal. The mechanism uses a questionnaire addressed to a sample of researchers and professionals with experience and expertise in domains of knowledge interrelated with SGHR, such as Computer Graphics, Game Design, Interaction Design, Computer Programming, and Health Rehabilitation. The extended taxonomy proposal for health rehabilitation serious games provides the research community with a tool to fully characterize serious games. The mechanism designed for validating the taxonomy proposal is another contribution of this work.

  6. Finite difference time domain calculation of transients in antennas with nonlinear loads

    NASA Technical Reports Server (NTRS)

    Luebbers, Raymond J.; Beggs, John H.; Kunz, Karl S.; Chamberlin, Kent

    1991-01-01

    Determining transient electromagnetic fields in antennas with nonlinear loads is a challenging problem. Typical methods used involve calculating frequency domain parameters at a large number of different frequencies, then applying Fourier transform methods plus nonlinear equation solution techniques. If the antenna is simple enough so that the open circuit time domain voltage can be determined independently of the effects of the nonlinear load on the antennas current, time stepping methods can be applied in a straightforward way. Here, transient fields for antennas with more general geometries are calculated directly using Finite Difference Time Domain (FDTD) methods. In each FDTD cell which contains a nonlinear load, a nonlinear equation is solved at each time step. As a test case, the transient current in a long dipole antenna with a nonlinear load excited by a pulsed plane wave is computed using this approach. The results agree well with both calculated and measured results previously published. The approach given here extends the applicability of the FDTD method to problems involving scattering from targets, including nonlinear loads and materials, and to coupling between antennas containing nonlinear loads. It may also be extended to propagation through nonlinear materials.
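    The per-cell coupling described above amounts to a scalar nonlinear equation at the load cell at each time step, typically solved by Newton iteration. A sketch with a diode-like load (the circuit values are illustrative and not taken from the cited antenna problem):

```python
import math

def solve_load_voltage(v_old, i_drive, dt, c_cell=1e-15,
                       i_s=1e-14, v_t=0.025):
    """Newton solve of the per-cell nonlinear equation that arises when a
    field update is coupled to a diode load:
        C*(v - v_old)/dt + i_s*(exp(v/v_t) - 1) = i_drive
    All circuit values here are illustrative, not from the cited paper."""
    v = v_old
    for _ in range(50):
        f = c_cell * (v - v_old) / dt + i_s * (math.exp(v / v_t) - 1) - i_drive
        df = c_cell / dt + (i_s / v_t) * math.exp(v / v_t)
        step = f / df
        v -= step
        if abs(step) < 1e-12:
            break
    return v

v = solve_load_voltage(v_old=0.0, i_drive=1e-3, dt=1e-12)
print(v)   # forward-biased diode voltage, roughly 0.6 V
```

In the full method this solve replaces the ordinary linear field update only in cells containing a nonlinear load, so the rest of the FDTD grid is unaffected.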

  7. Constitutive relations of ferroelectric ceramics

    NASA Astrophysics Data System (ADS)

    Su, Yu

    The objective of this thesis is to obtain a better understanding of the fundamental constitutive behavior of ferroelectric ceramics based on the physics of phase transition, micromechanics of heterogeneous materials, and principles of irreversible thermodynamics. Within this framework, a self-consistent model is developed to investigate the electromechanical responses of ferroelectric polycrystals under temperature change and electromechanical loading. Cooling of a paraelectric crystal below its Curie temperature Tc would result in spontaneous polarization, whereas electromechanical loading on a poled crystal could lead to domain switching. Domain growth and reorientation inside ferroelectric crystals are studied in light of these phase transitions and domain switches. In this process, the changes of the effective elastic, dielectric and piezoelectric constants during the evolution of microstructures are examined. In addition, hysteresis loops for the electric displacement and other related phenomena are computed under cyclic electric load. Underlying all the methods implemented in this work, the kinetic equation derived from irreversible thermodynamics is the key to studying domain evolution in ferroelectric crystals. The kinetic relation not only governs the growth of new domains in a ferroelectric crystal, but it also determines the onset of phase transition. This characteristic is used to study the effect of hydrostatic pressure on the shift of the Curie temperature of a ferroelectric crystal. Based on the derived expressions, it is observed that the driving force can increase or decrease upon applied hydrostatic mechanical loading, depending on the change of electromechanical moduli, eigenstrain and electro-polarization. Several typical cases are computed and it is found that the change of the electromechanical moduli during phase transformation plays the key role in the shift of the Curie temperature. 
Since ferroelectric ceramics are in a polycrystal form, a self-consistent model is used to examine the issues involved. In this model, each grain is represented by a spherical inclusion embedded in an infinitely extended piezoelectric matrix, and the inclusion further possesses an eigenstrain and eigen polarization. Secant relations between the polycrystal-matrix and the embedded inclusion are established by extending Hill's [1] incremental relations. An iterative computational program is developed for this self-consistent model.

  8. Exploring dangerous neighborhoods: Latent Semantic Analysis and computing beyond the bounds of the familiar

    PubMed Central

    Cohen, Trevor; Blatter, Brett; Patel, Vimla

    2005-01-01

    Certain applications require computer systems to approximate intended human meaning. This is achievable in constrained domains with a finite number of concepts. Areas such as psychiatry, however, draw on concepts from the world-at-large. A knowledge structure with broad scope is required to comprehend such domains. Latent Semantic Analysis (LSA) is an unsupervised corpus-based statistical method that derives quantitative estimates of the similarity between words and documents from their contextual usage statistics. The aim of this research was to evaluate the ability of LSA to derive meaningful associations between concepts relevant to the assessment of dangerousness in psychiatry. An expert reference model of dangerousness was used to guide the construction of a relevant corpus. Derived associations between words in the corpus were evaluated qualitatively. A similarity-based scoring function was used to assign dangerousness categories to discharge summaries. LSA was shown to derive intuitive relationships between concepts and correlated significantly better than random with human categorization of psychiatric discharge summaries according to dangerousness. The use of LSA to derive a simulated knowledge structure can extend the scope of computer systems beyond the boundaries of constrained conceptual domains. PMID:16779020
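    The first stage of LSA builds term-document count vectors and compares them by cosine similarity over contextual usage; full LSA then applies a truncated SVD to this matrix to expose latent dimensions, a step omitted in this small sketch (the toy clinical snippets are invented for illustration):

```python
import math
from collections import Counter

# Invented toy corpus, loosely echoing the dangerousness-assessment domain
docs = [
    "patient expressed violent thoughts toward staff",
    "patient threatened staff with violent behavior",
    "patient reported improved mood with no violent ideation",
    "weather was calm and mild today",
]

vocab = sorted({w for d in docs for w in d.split()})

def doc_vector(doc):
    """Raw term-count vector over the shared vocabulary."""
    counts = Counter(doc.split())
    return [counts[w] for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

vecs = [doc_vector(d) for d in docs]
print(cosine(vecs[0], vecs[1]))   # notes sharing "violent", "patient", "staff"
print(cosine(vecs[0], vecs[3]))   # unrelated note: zero overlap
```

The SVD step is what lets LSA relate documents that share no literal terms, which is why it extends beyond exact word matching.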

  9. A compressive sensing based secure watermark detection and privacy preserving storage framework.

    PubMed

    Qia Wang; Wenjun Zeng; Jun Tian

    2014-03-01

    Privacy is a critical issue when the data owners outsource data storage or processing to a third party computing service, such as the cloud. In this paper, we identify a cloud computing application scenario that requires simultaneously performing secure watermark detection and privacy preserving multimedia data storage. We then propose a compressive sensing (CS)-based framework using secure multiparty computation (MPC) protocols to address such a requirement. In our framework, the multimedia data and secret watermark pattern are presented to the cloud for secure watermark detection in a CS domain to protect the privacy. During CS transformation, the privacy of the CS matrix and the watermark pattern is protected by the MPC protocols under the semi-honest security model. We derive the expected watermark detection performance in the CS domain, given the target image, watermark pattern, and the size of the CS matrix (but without the CS matrix itself). The correctness of the derived performance has been validated by our experiments. Our theoretical analysis and experimental results show that secure watermark detection in the CS domain is feasible. Our framework can also be extended to other collaborative secure signal processing and data-mining applications in the cloud.
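The feasibility claim rests on a standard property of compressive sensing: a random Gaussian sensing matrix approximately preserves inner products, so a correlation detector still separates marked from unmarked content after CS projection. A minimal sketch of that property (illustrative additive embedding; the MPC protocols that protect the CS matrix and the watermark pattern are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4096, 1024                        # signal length, number of CS measurements

host = rng.standard_normal(n)            # host signal (e.g. a flattened image)
wm = rng.standard_normal(n)              # secret watermark pattern
wm /= np.linalg.norm(wm)
marked = host + 5.0 * wm                 # illustrative additive spread-spectrum embedding

# Gaussian sensing matrix scaled so inner products are preserved in expectation
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

def cs_correlation(signal, pattern):
    """Correlation detector evaluated entirely in the CS domain."""
    return float(np.dot(Phi @ signal, Phi @ pattern))

score_marked = cs_correlation(marked, wm)
score_clean = cs_correlation(host, wm)
assert score_marked > score_clean        # embedding raises the detector score
```

Since both scores are computed from the same projection, their difference equals the embedding strength times the squared norm of the projected pattern, which is why detection performance can be predicted from the size of the CS matrix alone.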

  10. Survey of non-linear hydrodynamic models of type-II Cepheids

    NASA Astrophysics Data System (ADS)

    Smolec, R.

    2016-03-01

We present a grid of non-linear convective type-II Cepheid models. The dense model grids are computed for 0.6 M⊙ and a range of metallicities ([Fe/H] = -2.0, -1.5, -1.0), and for 0.8 M⊙ ([Fe/H] = -1.5). Two sets of convective parameters are considered. The models cover the full temperature extent of the classical instability strip, but are limited in luminosity; for the most luminous models, violent pulsation leads to the decoupling of the outermost model shell. Hence, our survey reaches only the shortest-period RV Tau domain. In the Hertzsprung-Russell diagram, we detect two domains in which period-doubled pulsation is possible. The first extends through the BL Her domain and the low-luminosity W Vir domain (pulsation periods ˜2-6.5 d). The second extends to higher luminosities (W Vir domain; periods >9.5 d). Some models within these domains display period-4 pulsation. We also detect very narrow domains (˜10 K wide) in which modulation of pulsation is possible. Another interesting phenomenon we detect is double-mode pulsation in the fundamental mode and in the fourth radial overtone. The fourth overtone is a surface mode, trapped in the outer model layers. Single-mode pulsation in the fourth overtone is also possible on the hot side of the classical instability strip. The origin of the above phenomena is discussed. In particular, the role of resonances in driving different pulsation dynamics, as well as in shaping the morphology of the radius variation curves, is analysed.

  11. Extended Kalman filtering for the detection of damage in linear mechanical structures

    NASA Astrophysics Data System (ADS)

    Liu, X.; Escamilla-Ambrosio, P. J.; Lieven, N. A. J.

    2009-09-01

This paper addresses the problem of assessing the location and extent of damage in a vibrating structure by means of vibration measurements. Frequency-domain identification methods (e.g. finite element model updating) have been widely used in this area, while time-domain methods such as the extended Kalman filter (EKF) are more sparsely represented. The difficulty of applying the EKF to damage identification and localisation in mechanical systems lies in its high computational cost and in the dependence of the estimation results on the initial estimation error covariance matrix P(0), the initial values of the parameters to be estimated, and the statistics of the measurement noise R and process noise Q. To resolve these problems, a multiple-model adaptive estimator consisting of a bank of EKFs in the modal domain was designed, with each filter in the bank based on a different P(0). The algorithm was iterated using the weighted global iteration method. A fuzzy logic model was incorporated in each filter to estimate the variance of the measurement noise R. The application of the method is illustrated by simulated and real examples.
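The multiple-model idea can be sketched independently of the structural model: each filter in the bank produces an innovation (residual), and the bank is re-weighted by the Gaussian likelihood of those innovations. A minimal sketch of that re-weighting step (scalar innovations with illustrative values; not the authors' modal-domain formulation):

```python
import numpy as np

def mmae_weights(innovations, variances, prior):
    """One step of a multiple-model adaptive estimator: re-weight a bank
    of Kalman filters by the Gaussian likelihood of each filter's
    innovation, then renormalize (Magill's classic scheme)."""
    innovations = np.asarray(innovations, float)
    variances = np.asarray(variances, float)
    lik = np.exp(-0.5 * innovations**2 / variances) / np.sqrt(2*np.pi*variances)
    post = np.asarray(prior, float) * lik
    return post / post.sum()

# three filters started from different P(0); the filter whose model fits
# best produces the smallest normalized innovation and gains weight
w = mmae_weights(innovations=[0.1, 1.5, 3.0],
                 variances=[1.0, 1.0, 1.0],
                 prior=np.ones(3) / 3)
assert w[0] > w[1] > w[2]
```

The combined estimate is then the weight-averaged state of the bank, which is how the method sidesteps the sensitivity of a single EKF to its initial covariance.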

  12. Extending a CAD-Based Cartesian Mesh Generator for the Lattice Boltzmann Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cantrell, J Nathan; Inclan, Eric J; Joshi, Abhijit S

    2012-01-01

This paper describes the development of a custom preprocessor for the PaRAllel Thermal Hydraulics simulations using Advanced Mesoscopic methods (PRATHAM) code based on an open-source mesh generator, CartGen [1]. PRATHAM is a three-dimensional (3D) lattice Boltzmann method (LBM) based parallel flow simulation software currently under development at the Oak Ridge National Laboratory. The LBM algorithm in PRATHAM requires a uniform, coordinate system-aligned, non-body-fitted structured mesh for its computational domain. CartGen [1], which is a GNU-licensed open source code, already comes with some of the above needed functionalities. However, it needs to be further extended to fully support the LBM-specific preprocessing requirements. Therefore, CartGen is being modified to (i) be compiler independent while converting a neutral-format STL (Stereolithography) CAD geometry to a uniform structured Cartesian mesh, (ii) provide a mechanism for PRATHAM to import the mesh and identify the fluid/solid domains, and (iii) provide a mechanism to visually identify and tag the domain boundaries on which to apply different boundary conditions.

  13. A vectorization of the Jameson-Caughey NYU transonic swept-wing computer program FLO-22-V1 for the STAR-100 computer

    NASA Technical Reports Server (NTRS)

    Smith, R. E.; Pitts, J. I.; Lambiotte, J. J., Jr.

    1978-01-01

The computer program FLO-22 for analyzing inviscid transonic flow past 3-D swept-wing configurations was modified to use vector operations and run on the STAR-100 computer. The vectorized version described herein was called FLO-22-V1. Vector operations were incorporated into Successive Line Over-Relaxation in the transformed horizontal direction. Vector relational operations and control vectors were used to implement upwind differencing at supersonic points. High computational speed and an extended grid domain were characteristics of FLO-22-V1. The new program was not an optimal vectorization of Successive Line Over-Relaxation applied to transonic flow; however, it demonstrated that vector operations can readily be implemented to increase the computation rate of the algorithm.

  14. Calculation of recirculating flow behind flame-holders

    NASA Astrophysics Data System (ADS)

    Zeng, Q.; Sheng, Y.; Zhou, Q.

    1985-10-01

The applicability of the standard K-epsilon turbulence model to numerical calculation of recirculating flow is discussed. Many computations of recirculating flows behind bluff bodies used as flame-holders in aeroengine afterburners have been completed. A blocking-off method for treating the inclined walls of the flame-holder gives good results. In isothermal recirculating flows the flame-holder wall is assumed to be thermally isolated; it is therefore possible to remove the inactive zone from the calculation domain in the program to save computer time. The computation for a V-shaped flame-holder exhibits an interesting phenomenon: the recirculation zone extends into the cavity of the flame-holder.

  15. Frequency-domain-independent vector analysis for mode-division multiplexed transmission

    NASA Astrophysics Data System (ADS)

    Liu, Yunhe; Hu, Guijun; Li, Jiao

    2018-04-01

In this paper, we propose a demultiplexing method based on the frequency-domain independent vector analysis (FD-IVA) algorithm for mode-division multiplexing (MDM) systems. FD-IVA extends frequency-domain independent component analysis (FD-ICA) from univariate to multivariate variables, and provides an efficient way to eliminate the permutation ambiguity. In order to verify the performance of the FD-IVA algorithm, a 6×6 MDM system is simulated. The simulation results show that the FD-IVA algorithm has essentially the same bit-error-rate (BER) performance as the FD-ICA algorithm and the frequency-domain least mean squares (FD-LMS) algorithm. Meanwhile, the convergence speed of the FD-IVA algorithm is the same as that of FD-ICA. However, compared with FD-ICA and FD-LMS, FD-IVA has a markedly lower computational complexity.

  16. Augmenting the senses: a review on sensor-based learning support.

    PubMed

    Schneider, Jan; Börner, Dirk; van Rosmalen, Peter; Specht, Marcus

    2015-02-11

In recent years, sensor components have been extending classical computer-based support systems in a variety of application domains (sports, health, etc.). In this article we review the use of sensors for the application domain of learning. To that end, we analyzed 82 sensor-based prototypes and explored their learning support. To study this learning support we classified the prototypes according to Bloom's taxonomy of learning domains and explored how they can be used to assist in the implementation of formative assessment, paying special attention to their use as feedback tools. The analysis leads to current research foci and gaps in the development of sensor-based learning support systems and concludes with a research agenda based on the findings.

  17. Augmenting the Senses: A Review on Sensor-Based Learning Support

    PubMed Central

    Schneider, Jan; Börner, Dirk; van Rosmalen, Peter; Specht, Marcus

    2015-01-01

In recent years, sensor components have been extending classical computer-based support systems in a variety of application domains (sports, health, etc.). In this article we review the use of sensors for the application domain of learning. To that end, we analyzed 82 sensor-based prototypes and explored their learning support. To study this learning support we classified the prototypes according to Bloom's taxonomy of learning domains and explored how they can be used to assist in the implementation of formative assessment, paying special attention to their use as feedback tools. The analysis leads to current research foci and gaps in the development of sensor-based learning support systems and concludes with a research agenda based on the findings. PMID:25679313

  18. Computer analysis of multicircuit shells of revolution by the field method

    NASA Technical Reports Server (NTRS)

    Cohen, G. A.

    1975-01-01

    The field method, presented previously for the solution of even-order linear boundary value problems defined on one-dimensional open branch domains, is extended to boundary value problems defined on one-dimensional domains containing circuits. This method converts the boundary value problem into two successive numerically stable initial value problems, which may be solved by standard forward integration techniques. In addition, a new method for the treatment of singular boundary conditions is presented. This method, which amounts to a partial interchange of the roles of force and displacement variables, is problem independent with respect to both accuracy and speed of execution. This method was implemented in a computer program to calculate the static response of ring stiffened orthotropic multicircuit shells of revolution to asymmetric loads. Solutions are presented for sample problems which illustrate the accuracy and efficiency of the method.

  19. Scalable High Performance Computing: Direct and Large-Eddy Turbulent Flow Simulations Using Massively Parallel Computers

    NASA Technical Reports Server (NTRS)

    Morgan, Philip E.

    2004-01-01

This final report contains reports of research related to the tasks "Scalable High Performance Computing: Direct and Large-Eddy Turbulent Flow Simulations Using Massively Parallel Computers" and "Develop High-Performance Time-Domain Computational Electromagnetics Capability for RCS Prediction, Wave Propagation in Dispersive Media, and Dual-Use Applications". The discussion of Scalable High Performance Computing reports on three objectives: validate, assess the scalability of, and apply two parallel flow solvers for three-dimensional Navier-Stokes flows; develop and validate a high-order parallel solver for Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) problems; and investigate and develop a high-order Reynolds-averaged Navier-Stokes turbulence model. The discussion of High-Performance Time-Domain Computational Electromagnetics reports on five objectives: enhance an electromagnetics code (CHARGE) to model antenna problems effectively; apply lessons learned from high-order/spectral solutions of swirling 3D jets to the electromagnetics effort; transition a high-order fluids code, FDL3DI, to solve Maxwell's equations using compact differencing; develop and demonstrate improved radiation-absorbing boundary conditions for high-order CEM; and extend the high-order CEM solver to address variable material properties. The report also contains a review of work done by the systems engineer.

  20. External Boundary Conditions for Three-Dimensional Problems of Computational Aerodynamics

    NASA Technical Reports Server (NTRS)

    Tsynkov, Semyon V.

    1997-01-01

We consider an unbounded steady-state flow of viscous fluid over a three-dimensional finite body or configuration of bodies. For the purpose of solving this flow problem numerically, we discretize the governing equations (Navier-Stokes) on a finite-difference grid. The grid obviously cannot stretch from the body up to infinity, because the number of discrete variables in that case would not be finite. Therefore, prior to the discretization we truncate the original unbounded flow domain by introducing an artificial computational boundary at a finite distance from the body. Typically, the artificial boundary is introduced in a natural way as the external boundary of the domain covered by the grid. The flow problem formulated on the finite computational domain rather than on the original infinite domain is clearly subdefinite unless some artificial boundary conditions (ABC's) are specified at the external computational boundary. Similarly, the discretized flow problem is subdefinite (i.e., lacks equations with respect to unknowns) unless a special closing procedure is implemented at this artificial boundary. The closing procedure in the discrete case is called the ABC's as well. In this paper, we present an innovative approach to constructing highly accurate ABC's for three-dimensional flow computations. The approach extends our previous technique developed for the two-dimensional case; it employs the finite-difference counterparts to Calderon's pseudodifferential boundary projections calculated in the framework of the difference potentials method (DPM) of Ryaben'kii. The resulting ABC's appear spatially nonlocal but are particularly easy to implement along with existing solvers. The new boundary conditions have been successfully combined with the NASA-developed production code TLNS3D and used for the analysis of wing-shaped configurations in subsonic (including the incompressible limit) and transonic flow regimes.
As demonstrated by the computational experiments and comparisons with the standard (local) methods, the DPM-based ABC's allow one to greatly reduce the size of the computational domain while still maintaining high accuracy of the numerical solution. Moreover, they may provide for a noticeable increase of the convergence rate of multigrid iterations.

  1. A Systematic Approach for Obtaining Performance on Matrix-Like Operations

    NASA Astrophysics Data System (ADS)

    Veras, Richard Michael

Scientific Computation plays a critical role in the scientific process because it allows us to ask complex queries and test predictions that would otherwise be infeasible to perform experimentally. Because of its power, Scientific Computing has helped drive advances in many fields, ranging from Engineering and Physics to Biology and Sociology to Economics and Drug Development, and even to Machine Learning and Artificial Intelligence. Common among these domains is the desire for timely computational results, so a considerable amount of human expert effort is spent on obtaining performance for these scientific codes. However, this is no easy task, because each of these domains presents its own unique set of challenges to software developers, such as domain-specific operations, structurally complex data and ever-growing datasets. Compounding these problems are the myriad constantly changing, complex and unique hardware platforms that an expert must target. Unfortunately, an expert is typically forced to reproduce their effort across multiple problem domains and hardware platforms. In this thesis, we demonstrate the automatic generation of expert-level high-performance scientific codes for Dense Linear Algebra (DLA), Structured Mesh (Stencil), Sparse Linear Algebra and Graph Analytics. In particular, this thesis seeks to address the issue of obtaining performance on many complex platforms for a certain class of matrix-like operations that span many scientific, engineering and social fields. We do this by automating a method used for obtaining high performance in DLA and extending it to the structured, sparse and scale-free domains. We argue that it is the use of the underlying structure found in the data from these domains that enables this process. Thus, obtaining performance for most operations does not occur in isolation from the data being operated on, but instead depends significantly on the structure of the data.

  2. Vibration of carbon nanotubes with defects: order reduction methods

    NASA Astrophysics Data System (ADS)

    Hudson, Robert B.; Sinha, Alok

    2018-03-01

    Order reduction methods are widely used to reduce computational effort when calculating the impact of defects on the vibrational properties of nearly periodic structures in engineering applications, such as a gas-turbine bladed disc. However, despite obvious similarities these techniques have not yet been adapted for use in analysing atomic structures with inevitable defects. Two order reduction techniques, modal domain analysis and modified modal domain analysis, are successfully used in this paper to examine the changes in vibrational frequencies, mode shapes and mode localization caused by defects in carbon nanotubes. The defects considered are isotope defects and Stone-Wales defects, though the methods described can be extended to other defects.

  3. An image morphing technique based on optimal mass preserving mapping.

    PubMed

    Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen

    2007-06-01

    Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L(2) mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods.

  4. An Image Morphing Technique Based on Optimal Mass Preserving Mapping

    PubMed Central

    Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen

    2013-01-01

    Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L2 mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods. PMID:17547128

  5. Real-time algorithm for acoustic imaging with a microphone array.

    PubMed

    Huang, Xun

    2009-05-01

The acoustic phased array has become an important testing tool in aeroacoustic research, where the conventional beamforming algorithm has been adopted as a classical processing technique. The computation, however, has to be performed off-line due to its high cost. An innovative algorithm with real-time capability is proposed in this work. The algorithm resembles a classical observer in the time domain, extended to the frequency domain for array processing. The observer-based algorithm is beneficial mainly for its ability to operate over sampling blocks recursively. Expensive experimental time can therefore be reduced considerably, since any defect in a test can be corrected instantaneously.
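For reference, the conventional beamforming baseline the abstract mentions evaluates a steered quadratic form of the cross-spectral matrix at each frequency bin. A minimal sketch (1-D toy array at a single frequency; the observer-based recursive algorithm itself is not reproduced here):

```python
import numpy as np

def conventional_beamform(csm, steering):
    """Conventional (delay-and-sum) beamforming power for one frequency
    bin: P = g^H R g / M^2, where R is the cross-spectral matrix and g
    the steering vector toward a candidate source location."""
    M = len(steering)
    return float(np.real(steering.conj() @ csm @ steering)) / M**2

# toy example: M microphones on a line, one plane-wave source with phase
# signature v; the beamformer output peaks when steered at the source
M = 8
pos = np.arange(M)                       # unit microphone spacing
v = np.exp(1j * 1.3 * pos)               # true source phase signature
csm = np.outer(v, v.conj())              # rank-one cross-spectral matrix

p_on = conventional_beamform(csm, v)                       # steered at source
p_off = conventional_beamform(csm, np.exp(1j * 0.4 * pos)) # steered elsewhere
assert p_on > p_off
```

The off-line cost the abstract refers to comes from averaging the cross-spectral matrix over many sampling blocks and scanning this quadratic form over a grid of candidate locations for every frequency bin.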

  6. Prediction of heterotrimeric protein complexes by two-phase learning using neighboring kernels

    PubMed Central

    2014-01-01

Background Protein complexes play important roles in biological systems such as gene regulatory networks and metabolic pathways. Most methods for predicting protein complexes try to find complexes of size greater than three. It is known, however, that protein complexes of smaller size account for a large fraction of all complexes in several species. In our previous work, we developed a method with several feature space mappings and the domain composition kernel for prediction of heterodimeric protein complexes, which outperforms existing methods. Results We propose methods for prediction of heterotrimeric protein complexes by extending techniques in the previous work, on the basis of the idea that most heterotrimeric protein complexes are not likely to share the same protein with each other. We make use of the discriminant function in support vector machines (SVMs), and design novel feature space mappings for the second phase. As the second classifier, we examine SVMs and relevance vector machines (RVMs). We perform 10-fold cross-validation computational experiments. The results suggest that our proposed two-phase methods and SVM with the extended features outperform the existing method NWE, which was reported to outperform other existing methods such as MCL, MCODE, DPClus, CMC, COACH, RRW, and PPSampler for prediction of heterotrimeric protein complexes. Conclusions We propose two-phase prediction methods with the extended features, the domain composition kernel, SVMs and RVMs. The two-phase method with the extended features and the domain composition kernel using SVM as the second classifier is particularly useful for prediction of heterotrimeric protein complexes. PMID:24564744

  7. Effects of head geometry simplifications on acoustic radiation of vowel sounds based on time-domain finite-element simulations.

    PubMed

    Arnela, Marc; Guasch, Oriol; Alías, Francesc

    2013-10-01

One of the key effects to model in voice production is the acoustic radiation of sound waves emanating from the mouth. The use of three-dimensional numerical simulations makes it possible to account for this effect naturally, as well as to consider all geometrical head details, by extending the computational domain beyond the vocal tract. Despite this advantage, many approximations to the head geometry are often made for simplicity, and impedance-load models are still used to reduce the computational cost. In this work, the impact of some of these simplifications on radiation effects is examined for vowel production in the frequency range 0-10 kHz by comparison with radiation from a realistic head. As a result, recommendations are given on their validity depending on whether high-frequency energy (above 5 kHz) should be taken into account.

  8. Large-Eddy Simulation of Aeroacoustic Applications

    NASA Technical Reports Server (NTRS)

    Pruett, C. David; Sochacki, James S.

    1999-01-01

    This report summarizes work accomplished under a one-year NASA grant from NASA Langley Research Center (LaRC). The effort culminates three years of NASA-supported research under three consecutive one-year grants. The period of support was April 6, 1998, through April 5, 1999. By request, the grant period was extended at no-cost until October 6, 1999. Its predecessors have been directed toward adapting the numerical tool of large-eddy simulation (LES) to aeroacoustic applications, with particular focus on noise suppression in subsonic round jets. In LES, the filtered Navier-Stokes equations are solved numerically on a relatively coarse computational grid. Residual stresses, generated by scales of motion too small to be resolved on the coarse grid, are modeled. Although most LES incorporate spatial filtering, time-domain filtering affords certain conceptual and computational advantages, particularly for aeroacoustic applications. Consequently, this work has focused on the development of subgrid-scale (SGS) models that incorporate time-domain filters.

  9. Electron number probability distributions for correlated wave functions.

    PubMed

    Francisco, E; Martín Pendás, A; Blanco, M A

    2007-03-07

    Efficient formulas for computing the probability of finding exactly an integer number of electrons in an arbitrarily chosen volume are only known for single-determinant wave functions [E. Cances et al., Theor. Chem. Acc. 111, 373 (2004)]. In this article, an algebraic method is presented that extends these formulas to the case of multideterminant wave functions and any number of disjoint volumes. The derived expressions are applied to compute the probabilities within the atomic domains derived from the space partitioning based on the quantum theory of atoms in molecules. Results for a series of test molecules are presented, paying particular attention to the effects of electron correlation and of some numerical approximations on the computed probabilities.
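For the single-determinant case the abstract cites, the probabilities follow from the eigenvalues p_i of the atomic overlap matrix: P(n) is the coefficient of t^n in the product over i of (1 - p_i + p_i·t). A minimal sketch of that generating-polynomial evaluation (the multideterminant extension is the article's contribution and is not reproduced here):

```python
import numpy as np

def electron_count_distribution(p):
    """Probability P(n) of finding exactly n electrons in a volume, for a
    single-determinant wave function, from the eigenvalues p_i of the
    atomic overlap matrix: P(n) is the coefficient of t^n in
    prod_i (1 - p_i + p_i t).  Evaluated by polynomial convolution."""
    dist = np.array([1.0])
    for pi in p:
        dist = np.convolve(dist, [1.0 - pi, pi])
    return dist

# two orbitals, each with 50% of their density inside the volume
P = electron_count_distribution([0.5, 0.5])
assert np.allclose(P, [0.25, 0.5, 0.25])   # P(0), P(1), P(2)
```

Each factor treats orbital i as an independent "coin" that lands inside the volume with probability p_i, which is exactly the structure the single-determinant formula exploits.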

  10. Control of Cellular Structural Networks Through Unstructured Protein Domains

    DTIC Science & Technology

    2016-07-01

We had a third paper accepted to Scientific Reports, and a follow-up paper accepted to Integrative Biology extending these ideas to human pluripotent stem cells (hPSCs), including embryonic and induced pluripotent stem cells. The work also received a 2012 Stem Cells Young Investigator Award. Stated aims include characterizing morphology, mechanics, and neurogenesis in neural stem cells, and developing and using multiscale computational models.

  11. Extending the Utility of the Parabolic Approximation in Medical Ultrasound Using Wide-Angle Diffraction Modeling.

    PubMed

    Soneson, Joshua E

    2017-04-01

    Wide-angle parabolic models are commonly used in geophysics and underwater acoustics but have seen little application in medical ultrasound. Here, a wide-angle model for continuous-wave high-intensity ultrasound beams is derived, which approximates the diffraction process more accurately than the commonly used Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation without increasing implementation complexity or computing time. A method for preventing the high spatial frequencies often present in source boundary conditions from corrupting the solution is presented. Simulations of shallowly focused axisymmetric beams using both the wide-angle and standard parabolic models are compared to assess the accuracy with which they model diffraction effects. The wide-angle model proposed here offers improved focusing accuracy and less error throughout the computational domain than the standard parabolic model, offering a facile method for extending the utility of existing KZK codes.

  12. Cross-correlation least-squares reverse time migration in the pseudo-time domain

    NASA Astrophysics Data System (ADS)

    Li, Qingyang; Huang, Jianping; Li, Zhenchun

    2017-08-01

The least-squares reverse time migration (LSRTM) method, with its higher image resolution and amplitude fidelity, is becoming increasingly popular. However, LSRTM is not widely used in processing field land data because of its sensitivity to the initial migration velocity model, its large computational cost, and the mismatch of amplitudes between synthetic and observed data. To overcome these shortcomings of conventional LSRTM, we propose a cross-correlation least-squares reverse time migration algorithm in the pseudo-time domain (PTCLSRTM). Our algorithm not only reduces depth/velocity ambiguities, but also reduces the effect of velocity error on the imaging results. It relaxes the accuracy requirements on the migration velocity model of least-squares migration (LSM). The pseudo-time-domain algorithm eliminates the irregular wavelength sampling in the vertical direction, so it can reduce the number of vertical grid points and the memory required during computation, which makes our method more computationally efficient than the standard implementation. Moreover, for field-data applications, matching the recorded amplitudes is very difficult because of the viscoelastic nature of the Earth and inaccuracies in the estimation of the source wavelet. To relax the strong amplitude-matching requirement of LSM, we extend the normalized cross-correlation objective function to the pseudo-time domain. Our method is sensitive only to the similarity between the predicted and observed data. Numerical tests on synthetic and land field data confirm the effectiveness of our method and its adaptability to complex models.
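The normalized cross-correlation idea can be stated compactly: the misfit depends only on the angle between predicted and observed traces, so a global amplitude error costs nothing. A minimal sketch (single trace with an illustrative Gaussian wavelet; not the authors' full migration operator):

```python
import numpy as np

def ncc_objective(pred, obs):
    """Normalized cross-correlation misfit used to relax amplitude
    matching in least-squares migration: sensitive only to the shape
    (similarity) of predicted and observed traces, not their scale."""
    p = pred / np.linalg.norm(pred)
    o = obs / np.linalg.norm(obs)
    return -float(np.dot(p, o))          # minimized when traces are parallel

t = np.linspace(0.0, 1.0, 200)
obs = np.exp(-200.0 * (t - 0.4)**2)      # observed trace (illustrative wavelet)

# scaling the prediction leaves the misfit unchanged (amplitude-invariant)
assert np.isclose(ncc_objective(3.7 * obs, obs), -1.0)
# a time-shifted prediction fits worse than the correct one
assert ncc_objective(obs, obs) < ncc_objective(np.roll(obs, 30), obs)
```

This invariance is exactly what makes the objective usable on field data, where absolute amplitudes are corrupted by attenuation and source-wavelet errors.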

  13. Accurate and efficient seismic data interpolation in the principal frequency wavenumber domain

    NASA Astrophysics Data System (ADS)

    Wang, Benfeng; Lu, Wenkai

    2017-12-01

Seismic data irregularity, caused by economic limitations, acquisition environmental constraints or bad-trace elimination, can degrade the performance of downstream multi-channel algorithms such as surface-related multiple elimination (SRME), even though some of them can partly overcome irregularity defects. Accurate interpolation to provide the necessary complete data is therefore a prerequisite, but its wide application is constrained by the large computational burden of huge data volumes, especially in 3D exploration. For accurate and efficient interpolation, the curvelet transform (CT) based projection onto convex sets (POCS) method in the principal frequency wavenumber (PFK) domain is introduced. The complex-valued PF components can characterize the original signal with high accuracy while being about half its size, which provides a reasonable efficiency improvement. The irregularity of the observed data is transformed into incoherent noise in the PFK domain, and the curvelet coefficients may be sparser when the CT is performed on PFK-domain data, enhancing the interpolation accuracy. The performance of POCS-based algorithms using the complex-valued CT in the time space (TX), principal frequency space, and PFK domains is compared. Numerical examples on synthetic and field data demonstrate the validity and effectiveness of the proposed method. With less computational burden, the proposed method achieves a better interpolation result, and it can easily be extended to higher dimensions.
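The POCS iteration alternates two projections: hard thresholding in a sparsifying transform domain, and re-insertion of the observed traces. A minimal sketch using a 2-D FFT in place of the curvelet transform of the principal-frequency-wavenumber data (synthetic plane-wave section and an illustrative threshold schedule, not the authors' implementation):

```python
import numpy as np

def pocs_interpolate(data, mask, n_iter=200):
    """Minimal POCS interpolation sketch: alternate a sparsity projection
    (hard thresholding in the 2-D Fourier domain) with a data-consistency
    projection that re-inserts the observed traces.  `mask` is 1 on
    observed samples and 0 on gaps."""
    x = data * mask
    tau0 = np.abs(np.fft.fft2(x)).max()
    for it in range(n_iter):
        X = np.fft.fft2(x)
        tau = tau0 * 0.97 ** it              # exponentially decaying threshold
        X[np.abs(X) < tau] = 0.0             # sparsity projection
        x = np.real(np.fft.ifft2(X))
        x = data * mask + x * (1.0 - mask)   # keep observed samples exact
    return x

# synthetic section: two plane-wave events, 13 of 32 traces killed
rng = np.random.default_rng(1)
n = 32
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
true = np.cos(2*np.pi*(3*i + 5*j)/n) + 0.5*np.sin(2*np.pi*(7*i - 2*j)/n)
mask = np.ones((n, n))
mask[:, rng.choice(n, size=13, replace=False)] = 0.0

rec = pocs_interpolate(true, mask)
err_pocs = np.linalg.norm(rec - true) / np.linalg.norm(true)
err_zero = np.linalg.norm(true*mask - true) / np.linalg.norm(true)
assert err_pocs < err_zero    # interpolation beats leaving the gaps empty
```

Because the dead traces turn into incoherent leakage in the wavenumber direction, thresholding suppresses the leakage while the decaying schedule gradually readmits weaker coherent events; the paper's contribution is to run this loop on smaller complex-valued PFK-domain data with curvelet sparsity.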

  14. Extending a Flight Management Computer for Simulation and Flight Experiments

    NASA Technical Reports Server (NTRS)

    Madden, Michael M.; Sugden, Paul C.

    2005-01-01

    In modern transport aircraft, the flight management computer (FMC) has evolved from a flight planning aid to an important hub for pilot information and origin-to-destination optimization of flight performance. Current trends indicate increasing roles for the FMC in aviation safety, aviation security, increasing airport capacity, and reducing the environmental impact of aircraft. Related research conducted at the Langley Research Center (LaRC) often requires functional extension of a modern, full-featured FMC. Ideally, transport simulations would include an FMC simulation that could be tailored and extended for experiments. However, due to the complexity of a modern FMC, a large investment (millions of dollars over several years) and scarce domain knowledge are needed to create such a simulation for transport aircraft. As an intermediate alternative, the Flight Research Services Directorate (FRSD) at LaRC created a set of reusable software products to extend flight management functionality upstream of a Boeing 757 FMC, transparently simulating or sharing its operator interfaces. This paper details the design of these products and highlights their use on NASA projects.

  15. Micromorphic approach for gradient-extended thermo-elastic-plastic solids in the logarithmic strain space

    NASA Astrophysics Data System (ADS)

    Aldakheel, Fadi

    2017-11-01

    This work outlines a coupled thermo-mechanical strain gradient plasticity theory that accounts for microstructure-based size effects. It extends the recent work of Miehe et al. (Comput Methods Appl Mech Eng 268:704-734, 2014) to account for thermal effects at finite strains. From the computational viewpoint, the finite element design of the coupled problem is not straightforward and requires additional strategies due to the difficulties near the elastic-plastic boundaries. To simplify the finite element formulation, we extend it to a micromorphic approach to gradient thermo-plasticity in the logarithmic strain space. The key point is the introduction of dual local-global field variables via a penalty method, where only the global fields are restricted by boundary conditions. Hence, the problem of restricting the gradient variable to the plastic domain is relaxed, which makes the formulation very attractive for finite element implementation, as discussed in Forest (J Eng Mech 135:117-131, 2009) and Miehe et al. (Philos Trans R Soc A Math Phys Eng Sci 374:20150170, 2016).

  16. Molecular Mechanics of the α-Actinin Rod Domain: Bending, Torsional, and Extensional Behavior

    PubMed Central

    Golji, Javad; Collins, Robert; Mofrad, Mohammad R. K.

    2009-01-01

    α-Actinin is an actin-crosslinking molecule that can serve as a scaffold and maintain dynamic actin filament networks. As a crosslinker in the stressed cytoskeleton, α-actinin can retain conformation, function, and strength. α-Actinin has an actin-binding domain and a calmodulin homology domain separated by a long rod domain. Using molecular dynamics and normal mode analysis, we suggest that the α-actinin rod domain has flexible terminal regions that can twist and extend under mechanical stress, yet a highly rigid interior region stabilized by aromatic packing within each spectrin repeat, by electrostatic interactions between the spectrin repeats, and by strong salt bridges between its two anti-parallel monomers. By exploring the natural vibrations of the α-actinin rod domain and by conducting bending molecular dynamics simulations, we also predict that bending of the rod domain is possible with minimal force. We introduce computational methods for analyzing the torsional strain of molecules using rotating constraints. Molecular dynamics extension of the α-actinin rod is also performed, demonstrating transduction of the unfolding forces across salt bridges to the associated monomer of the α-actinin rod domain. PMID:19436721

  17. Computational modelling of an operational wind turbine and validation with LIDAR

    NASA Astrophysics Data System (ADS)

    Creech, Angus; Fruh, Wolf-Gerrit; Clive, Peter

    2010-05-01

    We present a computationally efficient method to model the interaction of wind turbines with the surrounding flow, where the interaction provides information on the power generation of the turbine and the wake generated behind it. The turbine representation is based on the principle of an actuator volume, whereby the energy extraction and the balancing forces on the fluid are formulated as body forces, which avoids the extremely high computational cost of resolving the rotor with explicit boundary conditions. Depending on the turbine information available, those forces can be derived either from published turbine performance specifications or from the rotor and blade design. This turbine representation is then coupled to a Computational Fluid Dynamics package, in this case the hr-adaptive finite-element code Fluidity from Imperial College, London. Here we present a simulation of an operational 950 kW NEG Micon NM54 wind turbine installed in the west of Scotland. The calculated wind is compared with measurements from a Galion LIDAR supplied by SgurrEnergy. The computational domain extends over an area of 6 km by 6 km and a height of 750 m, centred on the turbine. The lower boundary includes the orography of the terrain and surface roughness values representing the vegetation - some forested areas and some grassland. The boundary conditions on the sides are Dirichlet conditions relaxed toward an observed prevailing wind speed and direction. Within instrumental errors and model limitations, the overall flow field in general, and the wake behind the turbine in particular, show a very high degree of agreement, demonstrating the validity and value of this approach. The computational cost of this approach is such that it is possible to extend this single-turbine example to a full wind farm, as the number of required mesh nodes is set by the domain and then increases only linearly with the number of turbines.
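    The magnitude of the force that an actuator-type representation distributes over the rotor volume can be sketched from momentum theory; the wind speed, thrust coefficient, and air density below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def actuator_thrust(u_inf, rho, area, ct):
    # Momentum-theory thrust T = 0.5 * rho * A * Ct * U^2; in an
    # actuator-volume model this total force is spread as a body force
    # over the mesh cells occupied by the rotor.
    return 0.5 * rho * area * ct * u_inf ** 2

# NM54-like numbers are assumptions: 54 m rotor diameter, Ct ~ 0.75
# below rated wind, standard sea-level air density.
area = np.pi * (54.0 / 2) ** 2
T = actuator_thrust(10.0, 1.225, area, 0.75)
print(round(T / 1e3, 1), "kN thrust distributed over the actuator volume")
```

    Distributing this single scalar (or its blade-resolved equivalent) as a body force is what lets the mesh stay coarse near the rotor, which is where the method's cost advantage over body-fitted boundary conditions comes from.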

  18. An Improved Treatment of External Boundary for Three-Dimensional Flow Computations

    NASA Technical Reports Server (NTRS)

    Tsynkov, Semyon V.; Vatsa, Veer N.

    1997-01-01

    We present an innovative numerical approach for setting highly accurate nonlocal boundary conditions at the external computational boundaries when calculating three-dimensional compressible viscous flows over finite bodies. The approach is based on application of the difference potentials method by V. S. Ryaben'kii and extends our previous technique developed for the two-dimensional case. The new boundary conditions methodology has been successfully combined with the NASA-developed code TLNS3D and used for the analysis of wing-shaped configurations in subsonic and transonic flow regimes. As demonstrated by the computational experiments, the improved external boundary conditions allow one to greatly reduce the size of the computational domain while still maintaining high accuracy of the numerical solution. Moreover, they may provide for a noticeable speedup of convergence of the multigrid iterations.

  19. Hall-Effect Thruster Simulations with 2-D Electron Transport and Hydrodynamic Ions

    NASA Technical Reports Server (NTRS)

    Mikellides, Ioannis G.; Katz, Ira; Hofer, Richard H.; Goebel, Dan M.

    2009-01-01

    A computational approach that has been used extensively in the last two decades for Hall thruster simulations is to solve a diffusion equation and energy conservation law for the electrons in a direction that is perpendicular to the magnetic field, and use discrete-particle methods for the heavy species. This "hybrid" approach has allowed for the capture of bulk plasma phenomena inside these thrusters within reasonable computational times. Regions of the thruster with complex magnetic field arrangements (such as those near eroded walls and magnets) and/or reduced Hall parameter (such as those near the anode and the cathode plume) challenge the validity of the quasi-one-dimensional assumption for the electrons. This paper reports on the development of a computer code that solves numerically the 2-D axisymmetric vector form of Ohm's law, with no assumptions regarding the rate of electron transport in the parallel and perpendicular directions. The numerical challenges related to the large disparity of the transport coefficients in the two directions are met by solving the equations in a computational mesh that is aligned with the magnetic field. The fully-2D approach allows for a large physical domain that extends more than five times the thruster channel length in the axial direction, and encompasses the cathode boundary. Ions are treated as an isothermal, cold (relative to the electrons) fluid, accounting for charge-exchange and multiple-ionization collisions in the momentum equations. A first series of simulations of two Hall thrusters, namely the BPT-4000 and a 6-kW laboratory thruster, quantifies the significance of ion diffusion in the anode region and the importance of the extended physical domain on studies related to the impact of the transport coefficients on the electron flow field.

  20. Minimum-domain impulse theory for unsteady aerodynamic force

    NASA Astrophysics Data System (ADS)

    Kang, L. L.; Liu, L. Q.; Su, W. D.; Wu, J. Z.

    2018-01-01

    We extend the impulse theory for unsteady aerodynamics from its classic global form to a finite-domain formulation, then to a minimum-domain form, and from incompressible to compressible flows. For incompressible flow, the minimum-domain impulse theory raises the finding of Li and Lu ["Force and power of flapping plates in a fluid," J. Fluid Mech. 712, 598-613 (2012)] to a theorem: with a discrete wake, the entire force is completely determined by the time rate of impulse of only those vortical structures still connected to the body, along with the Lamb-vector integral thereof that captures the contribution of all the remaining, disconnected vortical structures. For compressible flows, we find that the global form in terms of the curl of momentum ∇ × (ρu), obtained by Huang [Unsteady Vortical Aerodynamics (Shanghai Jiaotong University Press, 1994)], can be generalized to an arbitrary finite domain, but the formula is cumbersome, and in general ∇ × (ρu) no longer has discrete structures, hence no minimum-domain theory exists. Nevertheless, as the measure of the transverse process only, the unsteady field of vorticity ω or ρω may still have a discrete wake. This leads to a minimum-domain compressible vorticity-moment theory in terms of ρω (though it is beyond the classic concept of impulse). These new findings and applications have been confirmed by our numerical experiments. The results not only open an avenue to combining the theory with computation and experiment in wide applications, but also reveal the physical truth that it is not necessary to account for all wake vortical structures when computing the force and moment.
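    For orientation, the classic global form of the incompressible impulse theory that the paper generalizes can be stated as follows (standard notation, not copied from the paper): the force on a body at rest in an unbounded incompressible flow follows from the time rate of change of the vortical impulse,

```latex
\mathbf{I} \;=\; \frac{1}{n-1}\int_{V_f} \mathbf{x}\times\boldsymbol{\omega}\,\mathrm{d}V ,
\qquad
\mathbf{F} \;=\; -\,\rho\,\frac{\mathrm{d}\mathbf{I}}{\mathrm{d}t},
```

    where $n = 2, 3$ is the space dimension, $\boldsymbol{\omega}$ the vorticity, and $V_f$ the entire fluid volume. The minimum-domain form described in the abstract retains in $\mathbf{I}$ only the vortical structures still connected to the body, with a Lamb-vector integral accounting for the shed structures.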

  1. The role of internal duplication in the evolution of multi-domain proteins.

    PubMed

    Nacher, J C; Hayashida, M; Akutsu, T

    2010-08-01

    Many proteins consist of several structural domains. These multi-domain proteins have likely been generated by selective genome growth dynamics during evolution to perform new functions as well as to create structures that fold on a biologically feasible time scale. Domain units frequently evolved through a variety of genetic shuffling mechanisms. Here we examine the protein domain statistics of more than 1000 organisms, including eukaryotic, archaeal and bacterial species. The analysis extends earlier findings on asymmetric statistical laws for the proteome to a wider variety of species. While proteins are composed of a wide range of domains, displaying a power-law decay, the computation of domain families for each protein reveals an exponential distribution, characterizing a protein universe composed of a small number of unique families. Structural studies in proteomics have shown that domain repeats, or internally duplicated domains, represent a small but significant fraction of the genome. In spite of its importance, this observation had been largely overlooked until recently. We model the evolutionary dynamics of the proteome and demonstrate that these distinct distributions are in fact rooted in an internal duplication mechanism. This process generates the contemporary universe of protein structural domains, determines its limited breadth, and tames its growth. These findings have important implications, ranging from protein interaction network modeling to evolutionary studies based on fundamental mechanisms governing genome expansion.

  2. Frequency-domain algorithm for the Lorenz-gauge gravitational self-force

    NASA Astrophysics Data System (ADS)

    Akcay, Sarp; Warburton, Niels; Barack, Leor

    2013-11-01

    State-of-the-art computations of the gravitational self-force (GSF) on massive particles in black hole spacetimes involve numerical evolution of the metric perturbation equations in the time domain, which is computationally very costly. We present here a new strategy based on a frequency-domain treatment of the perturbation equations, which offers considerable computational savings. The essential ingredients of our method are (i) a Fourier-harmonic decomposition of the Lorenz-gauge metric perturbation equations and a numerical solution of the resulting coupled set of ordinary differential equations with suitable boundary conditions; (ii) a generalized version of the method of extended homogeneous solutions [L. Barack, A. Ori, and N. Sago, Phys. Rev. D 78, 084021 (2008)] used to circumvent the Gibbs phenomenon that would otherwise hamper the convergence of the Fourier mode sum at the particle’s location; (iii) standard mode-sum regularization, which finally yields the physical GSF as a sum over regularized modal contributions. We present a working code that implements this strategy to calculate the Lorenz-gauge GSF along eccentric geodesic orbits around a Schwarzschild black hole. The code is far more efficient than existing time-domain methods; the gain in computation speed (at a given precision) is about an order of magnitude at an eccentricity of 0.2, and up to 3 orders of magnitude for circular or nearly circular orbits. This increased efficiency was crucial in enabling the recently reported calculation of the long-term orbital evolution of an extreme mass ratio inspiral [N. Warburton, S. Akcay, L. Barack, J. R. Gair, and N. Sago, Phys. Rev. D 85, 061501(R) (2012)]. Here we provide full technical details of our method to complement the above report.

  3. Discrete models for the numerical analysis of time-dependent multidimensional gas dynamics

    NASA Technical Reports Server (NTRS)

    Roe, P. L.

    1984-01-01

    A possible technique is explored for extending to multidimensional flows some of the upwind-differencing methods that are highly successful in the one-dimensional case. Emphasis is on the two-dimensional case, and the flow domain is assumed to be divided into polygonal computational elements. Inside each element, the flow is represented by a local superposition of elementary solutions consisting of plane waves not necessarily aligned with the element boundaries.
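    The one-dimensional upwind building block that such methods generalize can be sketched as follows (a generic first-order scheme, not the paper's formulation):

```python
import numpy as np

def upwind_advect(u, c, dt, dx, steps):
    # First-order upwind differencing for u_t + c u_x = 0 with c > 0:
    # each cell is updated from its upwind (left) neighbour only, the 1-D
    # principle that the paper seeks to extend to polygonal 2-D elements.
    nu = c * dt / dx  # CFL number, must satisfy nu <= 1 for stability
    for _ in range(steps):
        u = u - nu * (u - np.roll(u, 1))  # periodic domain
    return u

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.exp(-200.0 * (x - 0.3) ** 2)  # Gaussian pulse centred at x = 0.3
u1 = upwind_advect(u0.copy(), c=1.0, dt=0.0025, dx=1.0 / 200, steps=200)
# after c*dt*steps = 0.5, the pulse sits near x = 0.8, smeared by
# the scheme's numerical diffusion
print(round(x[np.argmax(u1)], 3))
```

    The scheme is stable and monotone but diffusive; capturing waves not aligned with the grid is exactly the difficulty that motivates the plane-wave decomposition inside polygonal elements described above.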

  4. Airbreathing Propulsion System Analysis Using Multithreaded Parallel Processing

    NASA Technical Reports Server (NTRS)

    Schunk, Richard Gregory; Chung, T. J.; Rodriguez, Pete (Technical Monitor)

    2000-01-01

    In this paper, parallel processing is used to analyze the mixing and combustion behavior of hypersonic flow. Preliminary work for a sonic transverse hydrogen jet injected from a slot into a Mach 4 airstream in a two-dimensional duct combustor has been completed [Moon and Chung, 1996]. Our aim is to extend this work to a three-dimensional domain using multithreaded domain-decomposition parallel processing based on the flowfield-dependent variation theory. Numerical simulations of chemically reacting flows are difficult because of the strong interactions between the turbulent hydrodynamic and chemical processes. The algorithm must provide an accurate representation of the flowfield, since unphysical flowfield calculations will lead to the faulty loss or creation of species mass fraction, or even premature ignition, which in turn alters the flowfield information. Another difficulty arises from the disparity in time scales between the flowfield and the chemical reactions, which may require the use of finite-rate chemistry. The situation is more complex when there is a disparity in the length scales involved in turbulence. In order to cope with these complicated physical phenomena, it is our plan to utilize the flowfield-dependent variation theory mentioned above, facilitated by large eddy simulation. Undoubtedly, the proposed computation requires the most sophisticated computational strategies. Multithreaded domain-decomposition parallel processing will be necessary in order to reduce both computational time and storage. Without special treatments from computer engineering, our attempt to analyze airbreathing combustion would be difficult, if not impossible.

  5. Investigation of Response Amplitude Operators for Floating Offshore Wind Turbines: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramachandran, G. K. V.; Robertson, A.; Jonkman, J. M.

    This paper examines the consistency between response amplitude operators (RAOs) computed from WAMIT, a linear frequency-domain tool, and RAOs derived from time-domain computations based on white-noise wave excitation using FAST, a nonlinear aero-hydro-servo-elastic tool. The RAO comparison is first made for a rigid floating wind turbine without wind excitation. The investigation is further extended to examine how these RAOs change for a flexible and operational wind turbine. The RAOs are computed for below-rated, rated, and above-rated wind conditions. The method is applied to a floating wind system composed of the OC3-Hywind spar buoy and the NREL 5-MW wind turbine. The responses are compared between FAST and WAMIT to verify the FAST model and to understand the influence of structural flexibility, aerodynamic damping, control actions, and waves on the system responses. The results show that, based on the RAO computation procedure implemented, the WAMIT- and FAST-computed RAOs are similar (as expected) for a rigid turbine subjected to waves only. However, WAMIT is unable to model the excitation from a flexible turbine. Further, the presence of aerodynamic damping decreased the platform surge and pitch responses, as computed by both WAMIT and FAST when wind was included. Additionally, the influence of gyroscopic excitation increased the yaw response, which was captured by both WAMIT and FAST.
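    Deriving an RAO from white-noise excitation can be sketched with a cross-spectral estimate; the transfer function and sampling parameters below are assumptions for illustration, not values from the study:

```python
import numpy as np

def estimate_rao(excitation, response, nseg, npts):
    # Cross-spectral RAO estimate: RAO(f) = |S_xy(f)| / S_xx(f),
    # averaged over nseg segments (Welch-style, rectangular window).
    X = np.fft.rfft(excitation.reshape(nseg, npts), axis=1)
    Y = np.fft.rfft(response.reshape(nseg, npts), axis=1)
    sxx = np.mean(np.abs(X) ** 2, axis=0)
    sxy = np.mean(np.conj(X) * Y, axis=0)
    return np.abs(sxy) / sxx

# White-noise "wave" excitation passed through an assumed linear,
# low-pass platform response (stand-in for the true hydrodynamics).
rng = np.random.default_rng(2)
nseg, npts = 64, 256
x = rng.standard_normal((nseg, npts))
f = np.fft.rfftfreq(npts, d=0.5)                # 2 Hz sampling (assumed)
h_true = 1.0 / (1.0 + (f / 0.2) ** 2)           # assumed transfer function
y = np.fft.irfft(np.fft.rfft(x, axis=1) * h_true, axis=1, n=npts)
rao = estimate_rao(x.ravel(), y.ravel(), nseg, npts)
print(np.max(np.abs(rao - h_true)))  # ~ 0: the estimate recovers the RAO
```

    For a truly linear system the estimate is exact up to round-off; for a nonlinear tool like FAST the same procedure yields the linearized response about the operating point, which is what the comparison with WAMIT relies on.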

  6. Morphological characterization of rat entorhinal neurons in vivo: soma-dendritic structure and axonal domains.

    PubMed

    Lingenhöhl, K; Finch, D M

    1991-01-01

    We used in vivo intracellular labeling with horseradish peroxidase in order to study the soma-dendritic morphology and axonal projections of rat entorhinal neurons. The cells responded to hippocampal stimulation with inhibitory postsynaptic potentials, and thus likely received direct or indirect hippocampal input. All cells (n = 24) showed extensive dendritic domains that extended in some cases for more than 1 mm. The dendrites of layer II neurons were largely restricted to layers I and II or layers I-III, while the dendrites of deeper cells could extend through all cortical layers. Computed 3D rotations showed that the basilar dendrites of deep pyramids extended roughly parallel to the cortical layering, and that they were mostly confined to the layer containing the soma and layers immediately adjacent. Total dendritic lengths averaged 9.8 mm +/- 3.8 (SD), and ranged from 5 mm to more than 18 mm. Axonal processes could be visualized in 21 cells. Most of these showed axonal branching within the entorhinal cortex, sometimes extensive. Efferent axonal domains were reconstructed in detail in 3 layer II stellate cells. All 3 projected axons across the subicular complex to the dentate gyrus. One of these cells showed an extensive net-like axonal domain that also projected to several other structures, including the hippocampus proper, subicular complex, and the amygdalo-piriform transition area. The axons of layer III and IV cells projected to the angular bundle, where they continued in a rostral direction. In contrast to the layer II, III and IV cells, no efferent axonal branches leaving the entorhinal cortex could be visualized in 5 layer V neurons. The data indicate that entorhinal neurons can integrate input from a considerable volume of entorhinal cortex by virtue of their extensive dendritic domains, and provide a further basis for specifying the layers in which cells receive synaptic input. The extensive axonal branching pattern seen in most of the cells would support divergent propagation of their activity.

  7. EXTENDING MULTIVARIATE DISTANCE MATRIX REGRESSION WITH AN EFFECT SIZE MEASURE AND THE ASYMPTOTIC NULL DISTRIBUTION OF THE TEST STATISTIC

    PubMed Central

    McArtor, Daniel B.; Lubke, Gitta H.; Bergeman, C. S.

    2017-01-01

    Person-centered methods are useful for studying individual differences in terms of (dis)similarities between response profiles on multivariate outcomes. Multivariate distance matrix regression (MDMR) tests the significance of associations of response profile (dis)similarities and a set of predictors using permutation tests. This paper extends MDMR by deriving and empirically validating the asymptotic null distribution of its test statistic, and by proposing an effect size for individual outcome variables, which is shown to recover true associations. These extensions alleviate the computational burden of permutation tests currently used in MDMR and render more informative results, thus making MDMR accessible to new research domains. PMID:27738957
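    A sketch of the MDMR pseudo-F statistic and of the permutation test whose cost the asymptotic null distribution is meant to remove (our illustration with simulated data; the exact statistic and degrees-of-freedom scaling in the paper may differ):

```python
import numpy as np

def mdmr_pseudo_f(D, x):
    # MDMR statistic sketch: Gower-center the squared distance matrix and
    # compare the explained to the residual trace under the hat matrix of x.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    G = -0.5 * J @ (D ** 2) @ J           # Gower-centered inner products
    X = np.column_stack([np.ones(n), x])  # intercept + predictor
    H = X @ np.linalg.pinv(X)             # hat (projection) matrix
    return np.trace(H @ G) / np.trace((np.eye(n) - H) @ G)

# The permutation test that the asymptotic null replaces:
rng = np.random.default_rng(3)
n = 40
x = rng.standard_normal(n)
Y = np.outer(x, [1.0, 0.5]) + rng.standard_normal((n, 2))  # two outcomes
D = np.sqrt(((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1))  # Euclidean
obs = mdmr_pseudo_f(D, x)
perm = [mdmr_pseudo_f(D, rng.permutation(x)) for _ in range(199)]
p = (1 + sum(s >= obs for s in perm)) / 200
print(p < 0.05)  # True: the association is detected
```

    Each permutation repeats the full matrix computation, which is the burden the derived asymptotic null distribution avoids.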

  8. Extending multivariate distance matrix regression with an effect size measure and the asymptotic null distribution of the test statistic.

    PubMed

    McArtor, Daniel B; Lubke, Gitta H; Bergeman, C S

    2017-12-01

    Person-centered methods are useful for studying individual differences in terms of (dis)similarities between response profiles on multivariate outcomes. Multivariate distance matrix regression (MDMR) tests the significance of associations of response profile (dis)similarities and a set of predictors using permutation tests. This paper extends MDMR by deriving and empirically validating the asymptotic null distribution of its test statistic, and by proposing an effect size for individual outcome variables, which is shown to recover true associations. These extensions alleviate the computational burden of permutation tests currently used in MDMR and render more informative results, thus making MDMR accessible to new research domains.

  9. Exact extreme-value statistics at mixed-order transitions.

    PubMed

    Bar, Amir; Majumdar, Satya N; Schehr, Grégory; Mukamel, David

    2016-05-01

    We study extreme-value statistics for spatially extended models exhibiting mixed-order phase transitions (MOT). These are phase transitions that exhibit features common to both first-order (discontinuity of the order parameter) and second-order (diverging correlation length) transitions. We consider here the truncated inverse-distance-squared Ising model, which is a prototypical model exhibiting MOT, and study analytically the extreme-value statistics of the domain lengths. The lengths of the domains are identically distributed random variables, except for the global constraint that their sum equals the total system size L. In addition, the number of such domains is also a fluctuating variable, not a fixed one. In the paramagnetic phase, we show that the distribution of the largest domain length l_{max} converges, in the large-L limit, to a Gumbel distribution. However, at the critical point (for a certain range of parameters) and in the ferromagnetic phase, we show that the fluctuations of l_{max} are governed by novel distributions, which we compute exactly. Our main analytical results are verified by numerical simulations.
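    The paramagnetic-phase result can be illustrated numerically with a stand-in model (i.i.d. exponential domain lengths, ignoring the sum constraint, which is our simplifying assumption):

```python
import numpy as np

# The maximum of many i.i.d. light-tailed "domain lengths", after the
# classical centering, follows a Gumbel law with mean gamma
# (Euler-Mascheroni, ~0.577) and standard deviation pi/sqrt(6) (~1.283).
rng = np.random.default_rng(4)
trials, ndom = 4000, 1000
lengths = rng.exponential(1.0, size=(trials, ndom))
lmax = lengths.max(axis=1) - np.log(ndom)  # Gumbel centering
print(round(lmax.mean(), 2), round(lmax.std(), 2))  # near 0.58 and 1.28
```

    In the actual model the constraint that the lengths sum to L and the fluctuating domain number modify the analysis, but in the paramagnetic phase the Gumbel limit survives, as the abstract states.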

  10. U.S. Army weapon systems human-computer interface style guide. Version 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avery, L.W.; O'Mara, P.A.; Shepard, A.P.

    1997-12-31

    A stated goal of the US Army has been the standardization of the human-computer interfaces (HCIs) of its systems. Some of the tools being used to accomplish this standardization are HCI design guidelines and style guides. Currently, the Army employs a number of HCI design guidance documents. While these style guides provide good guidance for the command, control, communications, computers, and intelligence (C4I) domain, they do not necessarily represent the more unique requirements of the Army's real-time and near-real-time (RT/NRT) weapon systems. The Office of the Director of Information for Command, Control, Communications, and Computers (DISC4), in conjunction with the Weapon Systems Technical Architecture Working Group (WSTAWG), recognized this need as part of their activities to revise the Army Technical Architecture (ATA), now termed the Joint Technical Architecture-Army (JTA-A). To address this need, DISC4 tasked the Pacific Northwest National Laboratory (PNNL) to develop an HCI style guide unique to Army weapon systems, which resulted in the US Army Weapon Systems Human-Computer Interface (WSHCI) Style Guide Version 1. Based on feedback from the user community, DISC4 further tasked PNNL to revise Version 1 and publish Version 2; the intent was to update some of the research and incorporate some enhancements. This document provides that revision. Its purpose is to provide HCI design guidance for the RT/NRT Army system domain across the weapon-system subdomains of ground, aviation, missile, and soldier systems. Each subdomain should customize and extend this guidance by developing its own domain-specific style guide, which will be used to guide the development of future systems within that subdomain.

  11. A new method for solving reachable domain of spacecraft with a single impulse

    NASA Astrophysics Data System (ADS)

    Chen, Qi; Qiao, Dong; Shang, Haibin; Liu, Xinfu

    2018-04-01

    This paper develops a new approach to solving the reachable domain of a spacecraft with a single maximum available impulse. First, the distance in a chosen direction, starting from a given position on the initial orbit, is formulated. Then, its extreme value is solved to obtain the maximum reachable distance in this direction. The envelope of the reachable domain in three-dimensional space is determined by solving for the maximum reachable distance in all directions. Four scenarios are analyzed, including three typical scenarios (either the maneuver position or the impulse direction is fixed, or both are arbitrary) and a new extended scenario (the maneuver position is restricted to an interval and the impulse direction is arbitrary). Moreover, the symmetry and the boundedness of the reachable domain are discussed in detail. The former is helpful for reducing the numerical computation, while the latter determines the maximum eccentricity of the initial orbit for a given maximum available impulse. The numerical simulations verify the effectiveness of the proposed method for solving the reachable domain in all four scenarios. In particular, the reachable domain with a highly elliptical initial orbit can be determined successfully, which remains unsolved in the existing literature.
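    One special case of the maximum-reachable-distance idea has a closed form: for a circular orbit, a prograde impulse maximizes the new apoapsis, i.e. the outward radial reach (the orbit and gravitational parameter below are assumptions, and this is not the paper's general method):

```python
import numpy as np

MU = 398600.4418  # km^3/s^2, Earth's gravitational parameter

def reach_after_prograde_burn(r, v, dv):
    # A prograde burn at radius r on a circular orbit makes that point the
    # perigee of the new orbit; vis-viva gives the new semi-major axis and
    # the angular momentum gives the eccentricity, hence the new apoapsis.
    v2 = v + dv
    a = 1.0 / (2.0 / r - v2 ** 2 / MU)                   # vis-viva
    e = np.sqrt(max(0.0, 1.0 - (r * v2) ** 2 / (MU * a)))
    return a * (1.0 + e)                                 # apoapsis radius

r0 = 6678.0               # 300-km-altitude circular orbit (assumed)
vc = np.sqrt(MU / r0)     # circular speed
print(reach_after_prograde_burn(r0, vc, 0.0))   # ~ r0: no impulse, no gain
print(reach_after_prograde_burn(r0, vc, 0.1) > r0)  # True: reach grows
```

    The paper's method generalizes this single direction to the full envelope by extremizing the directional distance over all impulse directions and maneuver positions.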

  12. Stencil computations for PDE-based applications with examples from DUNE and hypre

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engwer, C.; Falgout, R. D.; Yang, U. M.

    Here, stencils are commonly used to implement efficient on-the-fly computations of linear operators arising from partial differential equations. At the same time the term “stencil” is not fully defined and can be interpreted differently depending on the application domain and the background of the software developers. Common features in stencil codes are the preservation of the structure given by the discretization of the partial differential equation and the benefit of minimal data storage. We discuss stencil concepts of different complexity, show how they are used in modern software packages like hypre and DUNE, and discuss recent efforts to extend the software to enable stencil computations of more complex problems and methods such as inf-sup-stable Stokes discretizations and mixed finite element discretizations.
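    A concrete example of an on-the-fly stencil application is the 5-point Laplacian (a generic illustration, not code from hypre or DUNE):

```python
import numpy as np

def apply_laplacian_stencil(u, h=1.0):
    # Apply the 5-point Laplacian stencil on the fly: no matrix is ever
    # stored, only the fixed coefficient pattern
    # [[0, 1, 0], [1, -4, 1], [0, 1, 0]] / h^2 swept over the grid.
    out = np.zeros_like(u)
    out[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1]
                       + u[1:-1, 2:] + u[1:-1, :-2]
                       - 4.0 * u[1:-1, 1:-1]) / h ** 2
    return out

# u = x^2 + y^2 has Laplacian 4 everywhere, and the 5-point stencil is
# exact for quadratics, so every interior value comes out as exactly 4.
x, y = np.meshgrid(np.arange(6.0), np.arange(6.0), indexing="ij")
u = x ** 2 + y ** 2
print(apply_laplacian_stencil(u)[1:-1, 1:-1])  # -> all 4.0
```

    The memory saving the abstract mentions is visible here: the operator is the coefficient pattern itself, not an assembled sparse matrix.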

  13. Stencil computations for PDE-based applications with examples from DUNE and hypre

    DOE PAGES

    Engwer, C.; Falgout, R. D.; Yang, U. M.

    2017-02-24

    Here, stencils are commonly used to implement efficient on-the-fly computations of linear operators arising from partial differential equations. At the same time the term “stencil” is not fully defined and can be interpreted differently depending on the application domain and the background of the software developers. Common features in stencil codes are the preservation of the structure given by the discretization of the partial differential equation and the benefit of minimal data storage. We discuss stencil concepts of different complexity, show how they are used in modern software packages like hypre and DUNE, and discuss recent efforts to extend the software to enable stencil computations of more complex problems and methods such as inf-sup-stable Stokes discretizations and mixed finite element discretizations.

  14. Parallel computing on Unix workstation arrays

    NASA Astrophysics Data System (ADS)

    Reale, F.; Bocchino, F.; Sciortino, S.

    1994-12-01

    We have tested arrays of general-purpose Unix workstations used as MIMD systems for massive parallel computations. In particular, we have solved numerically a demanding test problem with a 2D hydrodynamic code, originally developed to study astrophysical flows, by executing it on arrays either of DECstation 5000/200 machines on an Ethernet LAN, or of DECstation 3000/400 machines, equipped with powerful Alpha processors, on an FDDI LAN. The code is appropriate for data-domain decomposition, and we have used a library for parallelization previously developed in our Institute and easily extended to work on Unix workstation arrays by using the PVM software toolset. We have compared the parallel efficiencies obtained on arrays of several processors to those obtained on a dedicated MIMD parallel system, namely a Meiko Computing Surface (CS-1) equipped with Intel i860 processors. We discuss the feasibility of using non-dedicated parallel systems and conclude that their convenience depends essentially on the size of the computational domain as compared to the relative processor power and network bandwidth. We point out that for future perspectives a parallel development of processor and network technology is important, and that the software still offers great opportunities for improvement, especially in terms of latency in the message-passing protocols. In conditions of significant speedup gain, such workstation arrays represent a cost-effective approach to massive parallel computations.

  15. Numerical Investigations of Two Typical Unsteady Flows in Turbomachinery Using the Multi-Passage Model

    NASA Astrophysics Data System (ADS)

    Zhou, Di; Lu, Zhiliang; Guo, Tongqing; Shen, Ennan

    2016-06-01

    In this paper, two types of unsteady flow problems in turbomachinery, blade flutter and rotor-stator interaction, are investigated by means of numerical simulation. For the former, the energy method is often used to predict aeroelastic stability by calculating the aerodynamic work per vibration cycle. The inter-blade phase angle (IBPA) is an important parameter in the computation and may have significant effects on aeroelastic behavior. For the latter, the numbers of blades in each row are usually not equal, and the unsteady rotor-stator interactions can be strong. An effective way to perform multi-row calculations is the domain scaling method (DSM). These two cases share a common feature: given their respective characteristics, the computational domain has to be extended to multiple passages (MP). The present work is aimed at modeling these two issues with the developed MP model. Computational fluid dynamics (CFD) techniques are applied to resolve the unsteady Reynolds-averaged Navier-Stokes (RANS) equations and simulate the flow fields. With the parallel technique, the additional time cost of modeling more passages can be largely reduced. Results are presented for two test cases: a vibrating rotor blade and a turbine stage.
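    The energy method mentioned above can be sketched for a single plunging degree of freedom; the harmonic pressure and motion below are illustrative assumptions:

```python
import numpy as np

def work_per_cycle(P, H, phi, nsamp=4096):
    # Energy-method sketch: aerodynamic work done on a plunging blade over
    # one vibration cycle. The pressure-motion phase phi (influenced in
    # practice by the inter-blade phase angle) controls the sign: positive
    # work feeds the vibration (flutter), negative work damps it.
    omega, T = 1.0, 2.0 * np.pi
    t = np.arange(nsamp) * T / nsamp
    p = P * np.cos(omega * t + phi)        # unsteady surface pressure
    hdot = -omega * H * np.sin(omega * t)  # plunging velocity
    return np.sum(p * hdot) * T / nsamp    # analytic value: pi*P*H*sin(phi)

print(work_per_cycle(1.0, 1.0, +0.5) > 0)  # True: destabilizing phase
print(work_per_cycle(1.0, 1.0, -0.5) < 0)  # True: aerodynamically damped
```

    Sweeping phi (equivalently, sweeping the IBPA over the multi-passage domain) and locating where the cycle work changes sign is how the energy method maps out the flutter boundary.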

  16. An Examination of Parameters Affecting Large Eddy Simulations of Flow Past a Square Cylinder

    NASA Technical Reports Server (NTRS)

    Mankbadi, M. R.; Georgiadis, N. J.

    2014-01-01

    Separated flow over a bluff body is analyzed via large eddy simulations. The turbulent flow around a square cylinder features a variety of complex flow phenomena such as highly unsteady vortical structures, reverse flow in the near-wall region, and wake turbulence. The formation of spanwise vortices is often artificially suppressed in computations by either insufficient depth or a coarse spanwise resolution. As the resolution is refined and the domain extended, the artificial turbulent energy exchange between spanwise and streamwise turbulence is eliminated within the wake region. A parametric study is performed highlighting the effects of spanwise vortices, where the spanwise computational domain's resolution and depth are varied. For Re = 22,000, the mean and turbulent statistics computed from the numerical large eddy simulations (NLES) are in good agreement with experimental data. Von Kármán shedding is observed in the wake of the cylinder. Mesh independence is illustrated by comparing mesh resolutions of 2 million and 16 million. Sensitivities to time stepping were minimized, and no sensitivity to the sampling frequency was observed. While increasing the spanwise depth and resolution can be costly, this practice was found to be necessary to eliminate the artificial turbulent energy exchange.

  17. Computational domain discretization in numerical analysis of flow within granular materials

    NASA Astrophysics Data System (ADS)

    Sosnowski, Marcin

    2018-06-01

    The discretization of the computational domain is a crucial step in Computational Fluid Dynamics (CFD) because it influences not only the numerical stability of the analysed model but also the agreement of the obtained results with real data. Modelling flow in packed beds of granular materials is a very challenging task in terms of discretization due to the existence of narrow spaces between spherical granules contacting tangentially in a single point. The standard approach to this issue results in a low-quality mesh and, in consequence, unreliable results. A common method is therefore to reduce the diameter of the modelled granules in order to eliminate the single-point contact between the individual granules. The drawback of such a method is that it distorts, among other things, the flow and the contact heat resistance. Therefore an innovative method is proposed in the paper: the single-point contact is extended to a cylinder-shaped volume contact. Such an approach eliminates the low-quality mesh elements and simultaneously introduces only slight distortion to the flow as well as to the contact heat transfer. The performed analysis of numerous test cases proves the great potential of the proposed method of meshing packed beds of granular materials.
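
    The cylinder-contact idea can be sketched with elementary geometry: widening the tangent point between two equal spheres into a contact cylinder of radius rc removes a spherical cap of height R − sqrt(R² − rc²) from each granule. A hypothetical helper (names are ours, not the paper's):

```python
import math

def cap_height(R, rc):
    """Height of the spherical cap removed from a granule of radius R
    when the single-point contact is widened to a contact cylinder of
    radius rc (0 <= rc < R). Hypothetical helper, not from the paper."""
    if not 0.0 <= rc < R:
        raise ValueError("require 0 <= rc < R")
    return R - math.sqrt(R * R - rc * rc)

# A small contact cylinder distorts the geometry only slightly:
h = cap_height(1.0, 0.1)   # about 0.005 for rc = 10% of R
```

    This illustrates why a modest contact radius can remove the degenerate single-point contact while perturbing the granule shape, and hence the flow, only marginally.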

  18. The potential benefits of photonics in the computing platform

    NASA Astrophysics Data System (ADS)

    Bautista, Jerry

    2005-03-01

    The increase in computational requirements for real-time image processing, complex computational fluid dynamics, very large scale data mining in the health industry/Internet, and predictive models for financial markets are driving computer architects to consider new paradigms that rely upon very high speed interconnects within and between computing elements. Further challenges result from reduced power requirements, reduced transmission latency, and greater interconnect density. Optical interconnects may solve many of these problems with the added benefit of extended reach. In addition, photonic interconnects provide relative EMI immunity, which is becoming an increasing issue with a greater dependence on wireless connectivity. However, to be truly functional, the optical interconnect mesh should be able to support arbitration, addressing, etc. completely in the optical domain with a BER that is more stringent than "traditional" communication requirements. Outlined are challenges in the advanced computing environment, some possible optical architectures and relevant platform technologies, as well as a rough sizing of these opportunities, which are quite large relative to the more "traditional" optical markets.

  19. High Performance Computing Technologies for Modeling the Dynamics and Dispersion of Ice Chunks in the Arctic Ocean

    DTIC Science & Technology

    2016-08-23

    Hybrid finite element / finite volume based CaMEL shallow water flow solvers have been successfully extended to study wave ... effects on ice floes in a simplified 10 sq-km ocean domain. Our solver combines the merits of both the finite element and finite volume methods and ... Keywords: sea ice dynamics, shallow water, finite element, finite volume.

  20. 37 CFR 201.26 - Recordation of documents pertaining to computer shareware and donation of public domain computer...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... pertaining to computer shareware and donation of public domain computer software. 201.26 Section 201.26... public domain computer software. (a) General. This section prescribes the procedures for submission of legal documents pertaining to computer shareware and the deposit of public domain computer software...

  1. 37 CFR 201.26 - Recordation of documents pertaining to computer shareware and donation of public domain computer...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... pertaining to computer shareware and donation of public domain computer software. 201.26 Section 201.26... public domain computer software. (a) General. This section prescribes the procedures for submission of legal documents pertaining to computer shareware and the deposit of public domain computer software...

  2. 37 CFR 201.26 - Recordation of documents pertaining to computer shareware and donation of public domain computer...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... pertaining to computer shareware and donation of public domain computer software. 201.26 Section 201.26... public domain computer software. (a) General. This section prescribes the procedures for submission of legal documents pertaining to computer shareware and the deposit of public domain computer software...

  3. 37 CFR 201.26 - Recordation of documents pertaining to computer shareware and donation of public domain computer...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... pertaining to computer shareware and donation of public domain computer software. 201.26 Section 201.26... public domain computer software. (a) General. This section prescribes the procedures for submission of legal documents pertaining to computer shareware and the deposit of public domain computer software...

  4. Continuous-Variable Instantaneous Quantum Computing is Hard to Sample.

    PubMed

    Douce, T; Markham, D; Kashefi, E; Diamanti, E; Coudreau, T; Milman, P; van Loock, P; Ferrini, G

    2017-02-17

    Instantaneous quantum computing is a subuniversal quantum complexity class, whose circuits have proven to be hard to simulate classically in the discrete-variable realm. We extend this proof to the continuous-variable (CV) domain by using squeezed states and homodyne detection, and by exploring the properties of postselected circuits. In order to treat postselection in CVs, we consider finitely resolved homodyne detectors, corresponding to a realistic scheme based on discrete probability distributions of the measurement outcomes. The unavoidable errors stemming from the use of finitely squeezed states are suppressed through a qubit-into-oscillator Gottesman-Kitaev-Preskill encoding of quantum information, which was previously shown to enable fault-tolerant CV quantum computation. Finally, we show that, in order to render postselected computational classes in CVs meaningful, a logarithmic scaling of the squeezing parameter with the circuit size is necessary, translating into a polynomial scaling of the input energy.

  5. Hybrid Numerical-Analytical Scheme for Calculating Elastic Wave Diffraction in Locally Inhomogeneous Waveguides

    NASA Astrophysics Data System (ADS)

    Glushkov, E. V.; Glushkova, N. V.; Evdokimov, A. A.

    2018-01-01

    Numerical simulation of traveling wave excitation, propagation, and diffraction in structures with local inhomogeneities (obstacles) is computationally expensive due to the need for mesh-based approximation of extended domains with the rigorous account for the radiation conditions at infinity. Therefore, hybrid numerical-analytic approaches are being developed based on the conjugation of a numerical solution in a local vicinity of the obstacle and/or source with an explicit analytic representation in the remaining semi-infinite external domain. However, in standard finite-element software, such a coupling with the external field, moreover, in the case of multimode expansion, is generally not provided. This work proposes a hybrid computational scheme that allows realization of such a conjugation using a standard software. The latter is used to construct a set of numerical solutions used as the basis for the sought solution in the local internal domain. The unknown expansion coefficients on this basis and on normal modes in the semi-infinite external domain are then determined from the conditions of displacement and stress continuity at the boundary between the two domains. We describe the implementation of this approach in the scalar and vector cases. To evaluate the reliability of the results and the efficiency of the algorithm, we compare it with a semianalytic solution to the problem of traveling wave diffraction by a horizontal obstacle, as well as with a finite-element solution obtained for a limited domain artificially restricted using absorbing boundaries. As an example, we consider the incidence of a fundamental antisymmetric Lamb wave onto surface and partially submerged elastic obstacles. It is noted that the proposed hybrid scheme can also be used to determine the eigenfrequencies and eigenforms of resonance scattering, as well as the characteristics of traveling waves in embedded waveguides.

  6. Homogenization in micro-magneto-mechanics

    NASA Astrophysics Data System (ADS)

    Sridhar, A.; Keip, M.-A.; Miehe, C.

    2016-07-01

    Ferromagnetic materials are characterized by a heterogeneous micro-structure that can be altered by external magnetic and mechanical stimuli. The understanding and the description of the micro-structure evolution is of particular importance for the design and the analysis of smart materials with magneto-mechanical coupling. The macroscopic response of the material results from complex magneto-mechanical interactions occurring on smaller length scales, which are driven by magnetization reorientation and associated magnetic domain wall motions. The aim of this work is to directly base the description of the macroscopic magneto-mechanical material behavior on the micro-magnetic domain evolution. This will be realized by the incorporation of a ferromagnetic phase-field formulation into a macroscopic Boltzmann continuum by the use of computational homogenization. The transition conditions between the two scales are obtained via rigorous exploitation of rate-type and incremental variational principles, which incorporate an extended version of the classical Hill-Mandel macro-homogeneity condition covering the phase field on the micro-scale. An efficient two-scale computational scenario is developed based on an operator splitting scheme that includes a predictor for the magnetization on the micro-scale. Two- and three-dimensional numerical simulations demonstrate the performance of the method. They investigate micro-magnetic domain evolution driven by macroscopic fields as well as the associated overall hysteretic response of ferromagnetic solids.

  7. Rapid Transient Pressure Field Computations in the Nearfield of Circular Transducers using Frequency Domain Time-Space Decomposition

    PubMed Central

    Alles, E. J.; Zhu, Y.; van Dongen, K. W. A.; McGough, R. J.

    2013-01-01

    The fast nearfield method, when combined with time-space decomposition, is a rapid and accurate approach for calculating transient nearfield pressures generated by ultrasound transducers. However, the standard time-space decomposition approach is only applicable to certain analytical representations of the temporal transducer surface velocity that, when applied to the fast nearfield method, are expressed as a finite sum of products of separate temporal and spatial terms. To extend time-space decomposition such that accelerated transient field simulations are enabled in the nearfield for an arbitrary transducer surface velocity, a new transient simulation method, frequency domain time-space decomposition (FDTSD), is derived. With this method, the temporal transducer surface velocity is transformed into the frequency domain, and then each complex-valued term is processed separately. Further improvements are achieved by spectral clipping, which reduces the number of terms and the computation time. Trade-offs between speed and accuracy are established for FDTSD calculations, and pressure fields obtained with the FDTSD method for a circular transducer are compared to those obtained with Field II and the impulse response method. The FDTSD approach, when combined with the fast nearfield method and spectral clipping, consistently achieves smaller errors in less time and requires less memory than Field II or the impulse response method. PMID:23160476
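
    The spectral-clipping step mentioned above can be sketched in a few lines: transform the surface-velocity samples to the frequency domain, then drop terms whose magnitude falls below a relative threshold so that fewer terms need to be processed. A toy sketch with a naive DFT (the actual FDTSD pipeline differs):

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (O(N^2), for illustration only)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def clip_spectrum(X, rel_threshold):
    """Keep only (index, coefficient) pairs whose magnitude exceeds
    rel_threshold times the largest magnitude in the spectrum."""
    peak = max(abs(c) for c in X)
    return [(k, c) for k, c in enumerate(X) if abs(c) >= rel_threshold * peak]

# A pure tone keeps only its two conjugate spectral lines after clipping
N = 16
signal = [math.cos(2 * math.pi * 3 * n / N) for n in range(N)]
kept = clip_spectrum(dft(signal), 0.5)
```

    Each retained complex term would then be processed separately, which is where the reduction in term count translates into reduced computation time.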

  8. TOPDOM: database of conservatively located domains and motifs in proteins.

    PubMed

    Varga, Julia; Dobson, László; Tusnády, Gábor E

    2016-09-01

    The TOPDOM database-originally created as a collection of domains and motifs located consistently on the same side of the membranes in α-helical transmembrane proteins-has been updated and extended by taking into consideration consistently localized domains and motifs in globular proteins, too. By taking advantage of the recently developed CCTOP algorithm to determine the type of a protein and predict topology in case of transmembrane proteins, and by applying a thorough search for domains and motifs as well as utilizing the most up-to-date version of all source databases, we managed to reach a 6-fold increase in the size of the whole database and a 2-fold increase in the number of transmembrane proteins. The TOPDOM database is available at http://topdom.enzim.hu. The webpage utilizes the common Apache, PHP5 and MySQL software to provide the user interface for accessing and searching the database. The database itself is generated on a high performance computer. Contact: tusnady.gabor@ttk.mta.hu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  9. Patterning in systems driven by nonlocal external forces.

    PubMed

    Luneville, L; Mallick, K; Pontikis, V; Simeone, D

    2016-11-01

    This work focuses on systems displaying domain patterns resulting from competing external and internal dynamics. To this end, we introduce a Lyapunov functional capable of describing the steady states of systems subject to external forces, by adding nonlocal terms to the Landau-Ginzburg free energy of the system. Thereby, we extend the existing methodology treating long-range order interactions to the case of external nonlocal forces. By studying the quadratic term of this Lyapunov functional, we compute the phase diagram in the temperature versus external field and we determine all possible modulated phases (domain patterns) as a function of the external forces and the temperature. Finally, we investigate patterning in chemical reactive mixtures and binary mixtures under irradiation, and we show that the last case opens the path toward micro-structural engineering of materials.

  10. Visualizing Spatially Varying Distribution Data

    NASA Technical Reports Server (NTRS)

    Kao, David; Luo, Alison; Dungan, Jennifer L.; Pang, Alex; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    A box plot is a compact representation that encodes the minimum, maximum, mean, median, and quartile information of a distribution. In practice, a single box plot is drawn for each variable of interest. With the advent of more accessible computing power, we are now facing the problem of visualizing data where there is a distribution at each 2D spatial location. Simply extending the box plot technique to distributions over a 2D domain is not straightforward. One challenge is reducing the visual clutter if a box plot is drawn over each grid location in the 2D domain. This paper presents and discusses two general approaches, using parametric statistics and shape descriptors, to present 2D distribution data sets. Both approaches provide additional insights compared to the traditional box plot technique.
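
    The per-location summary underlying such a visualization is just the box-plot statistics computed cell by cell; a minimal sketch using the standard library (the grid data are illustrative):

```python
import statistics

def cell_summary(samples):
    """Box-plot statistics for the distribution at one grid cell."""
    s = sorted(samples)
    q1, median, q3 = statistics.quantiles(s, n=4)
    return {"min": s[0], "q1": q1, "median": median,
            "q3": q3, "max": s[-1], "mean": statistics.fmean(s)}

# One summary per 2D location; drawing all of them is where the
# visual clutter described in the abstract arises.
grid = {(0, 0): [1, 2, 3, 4, 5, 6, 7, 8]}
summary = cell_summary(grid[(0, 0)])
```

    Reducing each cell to a handful of numbers is the easy part; the paper's contribution is how to render thousands of such summaries without clutter.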

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stubos, A.K.; Caseiras, C.P.; Buchlin, J.M.

    The transient two-phase flow and phase change heat transfer processes in porous media are investigated. Based on an enthalpic approach, a one-domain formulation of the problem is developed, avoiding explicit internal boundary tracking between single- and two-phase regions. An efficient numerical scheme is applied to obtain the solution on a fixed two-dimensional grid. The transient response of a liquid-saturated, self-heated porous bed is examined in detail. A physical interpretation of the computed response to fast power transients is attempted. Comparisons with experimental data are made regarding the average void fraction and the limiting dryout heat flux. The numerical approach is extended, keeping the one-domain formulation, to include the surrounding wall structure in the calculation.

  12. High-Order Methods for Incompressible Fluid Flow

    NASA Astrophysics Data System (ADS)

    Deville, M. O.; Fischer, P. F.; Mund, E. H.

    2002-08-01

    High-order numerical methods provide an efficient approach to simulating many physical problems. This book considers the range of mathematical, engineering, and computer science topics that form the foundation of high-order numerical methods for the simulation of incompressible fluid flows in complex domains. Introductory chapters present high-order spatial and temporal discretizations for one-dimensional problems. These are extended to multiple space dimensions with a detailed discussion of tensor-product forms, multi-domain methods, and preconditioners for iterative solution techniques. Numerous discretizations of the steady and unsteady Stokes and Navier-Stokes equations are presented, with particular attention given to enforcement of incompressibility. Advanced discretizations, implementation issues, and parallel and vector performance are considered in the closing sections. Numerous examples are provided throughout to illustrate the capabilities of high-order methods in actual applications.

  13. Coupling MHD and PIC models in 2 dimensions

    NASA Astrophysics Data System (ADS)

    Daldorff, L.; Toth, G.; Sokolov, I.; Gombosi, T. I.; Lapenta, G.; Brackbill, J. U.; Markidis, S.; Amaya, J.

    2013-12-01

    Even for extended fluid plasma models, such as Hall, anisotropic ion pressure, and multi-fluid MHD, there are still many plasma phenomena that are not well captured. For this reason, we have coupled the implicit Particle-in-Cell code iPIC3D with the BATSRUS global MHD code. The PIC solver is applied in a part of the computational domain, for example in the vicinity of reconnection sites, and overwrites the MHD solution. On the other hand, the fluid solver provides the boundary conditions for the PIC code. To demonstrate the use of the coupled codes for magnetospheric applications, we perform a 2D magnetosphere simulation, where BATSRUS solves for Hall MHD in the whole domain except for the tail reconnection region, which is handled by iPIC3D.

  14. Patterning in systems driven by nonlocal external forces

    NASA Astrophysics Data System (ADS)

    Luneville, L.; Mallick, K.; Pontikis, V.; Simeone, D.

    2016-11-01

    This work focuses on systems displaying domain patterns resulting from competing external and internal dynamics. To this end, we introduce a Lyapunov functional capable of describing the steady states of systems subject to external forces, by adding nonlocal terms to the Landau-Ginzburg free energy of the system. Thereby, we extend the existing methodology treating long-range order interactions to the case of external nonlocal forces. By studying the quadratic term of this Lyapunov functional, we compute the phase diagram in the temperature versus external field and we determine all possible modulated phases (domain patterns) as a function of the external forces and the temperature. Finally, we investigate patterning in chemical reactive mixtures and binary mixtures under irradiation, and we show that the last case opens the path toward micro-structural engineering of materials.

  15. THC-MP: High performance numerical simulation of reactive transport and multiphase flow in porous media

    NASA Astrophysics Data System (ADS)

    Wei, Xiaohui; Li, Weishan; Tian, Hailong; Li, Hongliang; Xu, Haixiao; Xu, Tianfu

    2015-07-01

    The numerical simulation of multiphase flow and reactive transport in porous media for complex subsurface problems is a computationally intensive application. To meet the increasing computational requirements, this paper presents a parallel computing method and architecture. Derived from TOUGHREACT, a well-established code for simulating subsurface multi-phase flow and reactive transport problems, we developed the high-performance computing code THC-MP for massively parallel computers, which greatly extends the computational capability of the original code. The domain decomposition method was applied to the coupled numerical computing procedure in THC-MP. We designed the distributed data structure and implemented the data initialization and exchange between the computing nodes and the core solving module using hybrid parallel iterative and direct solvers. Numerical accuracy of THC-MP was verified through a CO2 injection-induced reactive transport problem by comparing the results obtained from the parallel computing and sequential computing (original code). Execution efficiency and code scalability were examined through field-scale carbon sequestration applications on a multicore cluster. The results successfully demonstrate the enhanced performance of THC-MP on parallel computing facilities.
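
    The domain decomposition step amounts to partitioning grid cells across computing nodes as evenly as possible; a one-dimensional block-decomposition sketch (THC-MP's actual partitioning may differ):

```python
def decompose_1d(n_cells, n_procs):
    """Split n_cells into n_procs contiguous blocks whose sizes differ
    by at most one; returns a (start, size) pair per process."""
    base, rem = divmod(n_cells, n_procs)
    sizes = [base + (1 if p < rem else 0) for p in range(n_procs)]
    starts = [sum(sizes[:p]) for p in range(n_procs)]
    return list(zip(starts, sizes))

blocks = decompose_1d(10, 3)  # [(0, 4), (4, 3), (7, 3)]
```

    Each node then owns one block plus whatever halo data it must exchange with its neighbours, which is what the distributed data structure in the abstract manages.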

  16. An efficient linear-scaling CCSD(T) method based on local natural orbitals.

    PubMed

    Rolik, Zoltán; Szegedy, Lóránt; Ladjánszki, István; Ladóczki, Bence; Kállay, Mihály

    2013-09-07

    An improved version of our general-order local coupled-cluster (CC) approach [Z. Rolik and M. Kállay, J. Chem. Phys. 135, 104111 (2011)] and its efficient implementation at the CC singles and doubles with perturbative triples [CCSD(T)] level is presented. The method combines the cluster-in-molecule approach of Li and co-workers [J. Chem. Phys. 131, 114109 (2009)] with frozen natural orbital (NO) techniques. To break down the unfavorable fifth-power scaling of our original approach a two-level domain construction algorithm has been developed. First, an extended domain of localized molecular orbitals (LMOs) is assembled based on the spatial distance of the orbitals. The necessary integrals are evaluated and transformed in these domains invoking the density fitting approximation. In the second step, for each occupied LMO of the extended domain a local subspace of occupied and virtual orbitals is constructed including approximate second-order Møller-Plesset NOs. The CC equations are solved and the perturbative corrections are calculated in the local subspace for each occupied LMO using a highly-efficient CCSD(T) code, which was optimized for the typical sizes of the local subspaces. The total correlation energy is evaluated as the sum of the individual contributions. The computation time of our approach scales linearly with the system size, while its memory and disk space requirements are independent thereof. Test calculations demonstrate that currently our method is one of the most efficient local CCSD(T) approaches and can be routinely applied to molecules of up to 100 atoms with reasonable basis sets.
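
    The first stage of the two-level domain construction — collecting orbitals by spatial distance — can be sketched as a simple cutoff search over orbital centers (a hypothetical helper; the published algorithm involves further criteria):

```python
import math

def extended_domain(centers, i, cutoff):
    """Indices of localized orbitals whose centers lie within `cutoff`
    (same length units as `centers`) of orbital i. Illustrative only."""
    return [j for j, c in enumerate(centers)
            if math.dist(centers[i], c) <= cutoff]

# Toy centers in angstrom: orbitals 0 and 1 are close, 2 is remote
centers = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (12.0, 0.0, 0.0)]
dom = extended_domain(centers, 0, 5.0)  # [0, 1]
```

    Because each extended domain has a bounded size for large molecules, the per-domain work becomes independent of the total system size, which is the source of the linear scaling.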

  17. Crosswords to computers: a critical review of popular approaches to cognitive enhancement.

    PubMed

    Jak, Amy J; Seelye, Adriana M; Jurick, Sarah M

    2013-03-01

    Cognitive enhancement strategies have gained recent popularity and have the potential to benefit clinical and non-clinical populations. As technology advances and the number of cognitively healthy adults seeking methods of improving or preserving cognitive functioning grows, the role of electronic (e.g., computer and video game based) cognitive training becomes more relevant and warrants greater scientific scrutiny. This paper serves as a critical review of empirical evaluations of publicly available electronic cognitive training programs. Many studies have found that electronic training approaches result in significant improvements in trained cognitive tasks. Fewer studies have demonstrated improvements in untrained tasks within the trained cognitive domain, non-trained cognitive domains, or on measures of everyday function. Successful cognitive training programs will elicit effects that generalize to untrained, practical tasks for extended periods of time. Unfortunately, many studies of electronic cognitive training programs are hindered by methodological limitations such as the lack of an adequate control group, long-term follow-up, and ecologically valid outcome measures. Despite these limitations, evidence suggests that computerized cognitive training has the potential to positively impact one's sense of social connectivity and self-efficacy.

  18. A four stage approach for ontology-based health information system design.

    PubMed

    Kuziemsky, Craig E; Lau, Francis

    2010-11-01

    To describe and illustrate a four stage methodological approach to capture user knowledge in a biomedical domain area, use that knowledge to design an ontology, and then implement and evaluate the ontology as a health information system (HIS). A hybrid participatory design-grounded theory (GT-PD) method was used to obtain data and code them for ontology development. Prototyping was used to implement the ontology as a computer-based tool. Usability testing evaluated the computer-based tool. An empirically derived domain ontology and set of three problem-solving approaches were developed as a formalized model of the concepts and categories from the GT coding. The ontology and problem-solving approaches were used to design and implement a HIS that tested favorably in usability testing. The four stage approach illustrated in this paper is useful for designing and implementing an ontology as the basis for a HIS. The approach extends existing ontology development methodologies by providing an empirical basis for theory incorporated into ontology design. Copyright © 2010 Elsevier B.V. All rights reserved.

  19. Transport of phase space densities through tetrahedral meshes using discrete flow mapping

    NASA Astrophysics Data System (ADS)

    Bajars, Janis; Chappell, David J.; Søndergaard, Niels; Tanner, Gregor

    2017-01-01

    Discrete flow mapping was recently introduced as an efficient ray-based method for determining wave energy distributions in complex built-up structures. Wave energy densities are transported along ray trajectories through polygonal mesh elements using a finite dimensional approximation of a ray transfer operator. In this way the method can be viewed as a smoothed ray tracing method defined over meshed surfaces. Many applications require the resolution of wave energy distributions in three-dimensional domains, such as in room acoustics, underwater acoustics and for electromagnetic cavity problems. In this work we extend discrete flow mapping to three-dimensional domains by propagating wave energy densities through tetrahedral meshes. The geometric simplicity of the tetrahedral mesh elements is utilised to efficiently compute the ray transfer operator using a mixture of analytic and spectrally accurate numerical integration. The important issue of how to choose a suitable basis approximation in phase space whilst maintaining a reasonable computational cost is addressed via low order local approximations on tetrahedral faces in the position coordinate and high order orthogonal polynomial expansions in momentum space.

  20. Estimation of excitation forces for wave energy converters control using pressure measurements

    NASA Astrophysics Data System (ADS)

    Abdelkhalik, O.; Zou, S.; Robinett, R.; Bacelli, G.; Wilson, D.

    2017-08-01

    Most control algorithms of wave energy converters require prediction of wave elevation or excitation force for a short future horizon, to compute the control in an optimal sense. This paper presents an approach that requires the estimation of the excitation force and its derivatives at present time with no need for prediction. An extended Kalman filter is implemented to estimate the excitation force. The measurements in this approach are selected to be the pressures at discrete points on the buoy surface, in addition to the buoy heave position. The pressures on the buoy surface are more directly related to the excitation force on the buoy as opposed to wave elevation in front of the buoy. These pressure measurements are also more accurate and easier to obtain. A singular arc control is implemented to compute the steady-state control using the estimated excitation force. The estimated excitation force is expressed in the Laplace domain and substituted in the control, before the latter is transformed to the time domain. Numerical simulations are presented for a Bretschneider wave case study.
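
    The estimator described above is an extended Kalman filter; its core predict/update cycle, reduced here to a scalar linear sketch (all symbols and values are illustrative, not the paper's buoy model):

```python
def kf_step(x, P, z, F=1.0, Q=0.0, H=1.0, R=1.0):
    """One predict/update cycle of a scalar Kalman filter.
    x, P: prior state estimate and variance; z: new measurement;
    F, Q: state transition and process noise; H, R: measurement
    model and measurement noise. Linear sketch of the extended filter."""
    # Predict: propagate the state and its uncertainty
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update: blend the prediction with measurement z via the gain K
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# The estimate moves toward the measurement, weighted by the gain
x, P = kf_step(0.0, 1.0, 10.0)  # x -> 5.0, P -> 0.5
```

    In the paper's setting the measurement vector would hold the discrete pressure readings and the heave position, and the filter state the excitation force and its derivatives.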

  1. Improved design of subcritical and supercritical cascades using complex characteristics and boundary layer correction

    NASA Technical Reports Server (NTRS)

    Sanz, J. M.

    1983-01-01

    The method of complex characteristics and hodograph transformation for the design of shockless airfoils was extended to design supercritical cascades with high solidities and large inlet angles. This capability was achieved by introducing a conformal mapping of the hodograph domain onto an ellipse and expanding the solution in terms of Tchebycheff polynomials. A computer code was developed based on this idea. A number of airfoils designed with the code are presented. Various supercritical and subcritical compressor, turbine and propeller sections are shown. The lag-entrainment method for the calculation of a turbulent boundary layer was incorporated into the inviscid design code. The results of this calculation are shown for the airfoils described. The elliptic conformal transformation developed to map the hodograph domain onto an ellipse can be used to generate a conformal grid in the physical domain of a cascade of airfoils with open trailing edges with a single transformation. A grid generated with this transformation is shown for the Korn airfoil.

  2. Taking Proof based Verified Computation a Few Steps Closer to Practicality (extended version)

    DTIC Science & Technology

    2012-06-27

    [Fragmentary cost table from the report:] V's per-instance CPU costs — issue commit queries: (e + 2c) · n/β; process commit responses: d; issue PCP ... Symbol key: β: batch size (# of instances) (§2.3); e: cost of encrypting an element in F; d: cost of decrypting an encrypted element; f: cost of multiplying in F; h: cost of ... "... domain D (such as the integers, Z, or the rationals, Q) to equivalent constraints over a finite field, the programmer or compiler performs ..."

  3. Research on regional numerical weather prediction

    NASA Technical Reports Server (NTRS)

    Kreitzberg, C. W.

    1976-01-01

    Extension of the predictive power of dynamic weather forecasting to scales below the conventional synoptic or cyclonic scales in the near future is assessed. Lower costs per computation, more powerful computers, and a 100 km mesh over the North American area (with coarser mesh extending beyond it) are noted at present. Doubling the resolution even locally (to 50 km mesh) would entail a 16-fold increase in costs (including vertical resolution and halving the time interval), and constraints on domain size and length of forecast. Boundary conditions would be provided by the surrounding 100 km mesh, and time-varying lateral boundary conditions can be considered to handle moving phenomena. More physical processes to treat, more efficient numerical techniques, and faster computers (improved software and hardware) backing up satellite and radar data could produce further improvements in forecasting in the 1980s. Boundary layer modeling, initialization techniques, and quantitative precipitation forecasting are singled out among key tasks.
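    The quoted 16-fold cost increase can be checked with a one-line sketch: refining the two horizontal directions, the vertical resolution, and the time step each by the same factor multiplies the work by that factor to the fourth power, matching the abstract's parenthetical.

```python
# Back-of-envelope check of the 16-fold figure: cost scales with
# x-resolution * y-resolution * vertical resolution * time steps.
def cost_increase(refinement):
    return refinement ** 4

print(cost_increase(2))  # halving a 100 km mesh to 50 km -> 16
```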

  4. Computer simulation of single-phase nanocrystalline permanent magnets

    NASA Astrophysics Data System (ADS)

    Griffiths, M. K.; Bishop, J. E. L.; Tucker, J. W.; Davies, H. A.

    1998-03-01

    Demagnetizing curves have been calculated numerically for three-dimensional micromagnetic model assemblies of randomly oriented, magnetically hard, exchange coupled, uniaxial nanocrystals as typified by rapidly quenched Nd2Fe14B. The curves were obtained as a sequence of static equilibrium states in an incrementally changing applied field. The magnetization distribution in each state was obtained by minimizing the sum of the exchange, anisotropy and Zeeman energies of the assembly, using a modified LaBonte method, with computational elements as small as 1.11 nm (roughly 1/4 of the domain wall thickness in Nd2Fe14B). For computational economy, internal dipolar interactions were ignored in the energy minimization. For a material with the magnetic constants of stoichiometric Nd2Fe14B, tests showed that these interactions contribute less than 3% to the energy. On increasing the model grain size from 4.4 to 36 nm, the reduced remanence fell from 76 to 54% and the reduced intrinsic coercivity μ0 iHC MS/KU increased from 0.16 to 0.46 (just under half the Stoner-Wohlfarth value); both sets of results are in reasonable agreement with experimental values. The energy product, evaluated for Nd2Fe14B, ranged from ~224 kJ/m³ for 10 nm grains to ~128 kJ/m³ for 36 nm grains. For grain sizes ⩾ 20 nm, spatial magnetization variation was confined to domain walls centred on the grain boundaries. For grain sizes decreasing below about twice the domain wall thickness, spatial magnetization variation extended to the interior of the grains and exhibited increasingly long-range correlations.
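    The energy being minimized has the standard micromagnetic form; a hedged sketch of the total energy (exchange, uniaxial anisotropy and Zeeman terms, with the dipolar term the abstract argues is negligible omitted; notation is illustrative, not taken from the paper) is:

```latex
% Micromagnetic energy minimized per equilibrium state
% (dipolar term omitted, as it contributes < 3% here):
E[\mathbf{m}] = \int_V \Big[ A\,\lvert\nabla\mathbf{m}\rvert^{2}
  + K_U\big(1 - (\mathbf{m}\cdot\hat{\mathbf{u}})^{2}\big)
  - \mu_0 M_S\,\mathbf{m}\cdot\mathbf{H}_{\text{app}} \Big]\,\mathrm{d}V
```

    with m the unit magnetization, A the exchange stiffness, K_U the uniaxial anisotropy constant, u the local easy axis, and H_app the applied field.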

  5. Alteration of the C-terminal ligand specificity of the erbin PDZ domain by allosteric mutational effects.

    PubMed

    Murciano-Calles, Javier; McLaughlin, Megan E; Erijman, Ariel; Hooda, Yogesh; Chakravorty, Nishant; Martinez, Jose C; Shifman, Julia M; Sidhu, Sachdev S

    2014-10-23

    Modulation of protein binding specificity is important for basic biology and for applied science. Here we explore how binding specificity is conveyed in PDZ (postsynaptic density protein-95/discs large/zonula occludens-1) domains, small interaction modules that recognize various proteins by binding to an extended C terminus. Our goal was to engineer variants of the Erbin PDZ domain with altered specificity for the most C-terminal position (position 0) where a Val is strongly preferred by the wild-type domain. We constructed a library of PDZ domains by randomizing residues in direct contact with position 0 and in a loop that is close to but does not contact position 0. We used phage display to select for PDZ variants that bind to 19 peptide ligands differing only at position 0. To verify that each obtained PDZ domain exhibited the correct binding specificity, we selected peptide ligands for each domain. Despite intensive efforts, we were only able to evolve Erbin PDZ domain variants with selectivity for the aliphatic C-terminal side chains Val, Ile and Leu. Interestingly, many PDZ domains with these three distinct specificities contained identical amino acids at positions that directly contact position 0 but differed in the loop that does not contact position 0. Computational modeling of the selected PDZ domains shows how slight conformational changes in the loop region propagate to the binding site and result in different binding specificities. Our results demonstrate that second-sphere residues could be crucial in determining protein binding specificity. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Relating conformation to function in integrin α5β1.

    PubMed

    Su, Yang; Xia, Wei; Li, Jing; Walz, Thomas; Humphries, Martin J; Vestweber, Dietmar; Cabañas, Carlos; Lu, Chafen; Springer, Timothy A

    2016-07-05

    Whether β1 integrin ectodomains visit conformational states similarly to β2 and β3 integrins has not been characterized. Furthermore, despite a wealth of activating and inhibitory antibodies to β1 integrins, the conformational states that these antibodies stabilize, and the relation of these conformations to function, remain incompletely characterized. Using negative-stain electron microscopy, we show that the integrin α5β1 ectodomain adopts extended-closed and extended-open conformations as well as a bent conformation. Antibodies SNAKA51, 8E3, N29, and 9EG7 bind to different domains in the α5 or β1 legs, activate, and stabilize extended ectodomain conformations. Antibodies 12G10 and HUTS-4 bind to the β1 βI domain and hybrid domains, respectively, activate, and stabilize the open headpiece conformation. Antibody TS2/16 binds a similar epitope as 12G10, activates, and appears to stabilize an open βI domain conformation without requiring extension or hybrid domain swing-out. mAb13 and SG/19 bind to the βI domain and βI-hybrid domain interface, respectively, inhibit, and stabilize the closed conformation of the headpiece. The effects of the antibodies on cell adhesion to fibronectin substrates suggest that the extended-open conformation of α5β1 is adhesive and that the extended-closed and bent-closed conformations are nonadhesive. The functional effects and binding sites of antibodies and fibronectin were consistent with their ability in binding to α5β1 on cell surfaces to cross-enhance or inhibit one another by competitive or noncompetitive (allosteric) mechanisms.

  7. Frequency and time-domain inspiral templates for comparable mass compact binaries in eccentric orbits

    NASA Astrophysics Data System (ADS)

    Tanay, Sashwat; Haney, Maria; Gopakumar, Achamveedu

    2016-03-01

    Inspiraling compact binaries with non-negligible orbital eccentricities are plausible gravitational wave (GW) sources for the upcoming network of GW observatories. In this paper, we present two prescriptions to compute post-Newtonian (PN) accurate inspiral templates for such binaries. First, we adapt and extend the postcircular scheme of Yunes et al. [Phys. Rev. D 80, 084001 (2009)] to obtain a Fourier-domain inspiral approximant that incorporates the effects of PN-accurate orbital eccentricity evolution. This results in a fully analytic frequency-domain inspiral waveform with Newtonian amplitude and 2PN-order Fourier phase while incorporating eccentricity effects up to sixth order at each PN order. The importance of incorporating eccentricity evolution contributions to the Fourier phase in a PN-consistent manner is also demonstrated. Second, we present an accurate and efficient prescription to incorporate orbital eccentricity into the quasicircular time-domain TaylorT4 approximant at 2PN order. New features include the use of rational functions in orbital eccentricity to implement the 1.5PN-order tail contributions to the far-zone fluxes. This leads to closed form PN-accurate differential equations for evolving eccentric orbits, and the resulting time-domain approximant is accurate and efficient to handle initial orbital eccentricities ≤ 0.9. Preliminary GW data analysis implications are probed using match estimates.

  8. Saturation of the magnetorotational instability in the unstratified shearing box with zero net flux: convergence in taller boxes

    NASA Astrophysics Data System (ADS)

    Shi, Ji-Ming; Stone, James M.; Huang, Chelsea X.

    2016-03-01

    Previous studies of the non-linear regime of the magnetorotational instability (MRI) in one particular type of shearing box model - unstratified with no net magnetic flux - find that without explicit dissipation (viscosity and resistivity) the saturation amplitude decreases with increasing numerical resolution. We show that this result is strongly dependent on the vertical aspect ratio of the computational domain Lz/Lx. When Lz/Lx ≲ 1, we recover previous results. However, when the vertical domain is extended, Lz/Lx ≳ 2.5, we find the saturation level of the stress is greatly increased (giving a ratio of stress to pressure α ≳ 0.1), and moreover the results are independent of numerical resolution. Consistent with previous results, we find that saturation of the MRI in this regime is controlled by a cyclic dynamo which generates patches of strong toroidal field that switch sign on scales of Lx in the vertical direction. We speculate that when Lz/Lx ≲ 1, the dynamo is inhibited by the small size of the vertical domain, leading to the puzzling dependence of saturation amplitude on resolution. We show that previous toy models developed to explain the MRI dynamo are consistent with our results, and that the cyclic pattern of toroidal fields observed in stratified shearing box simulations (leading to the so-called butterfly diagram) may also be related. In tall boxes the saturation amplitude is insensitive to whether or not explicit dissipation is included in the calculations, at least for large magnetic Reynolds and Prandtl numbers. Finally, we show MRI turbulence in tall domains has a smaller critical magnetic Prandtl number Pm,c and an extended lifetime compared to Lz/Lx ≲ 1 boxes.

  9. A Dual Super-Element Domain Decomposition Approach for Parallel Nonlinear Finite Element Analysis

    NASA Astrophysics Data System (ADS)

    Jokhio, G. A.; Izzuddin, B. A.

    2015-05-01

    This article presents a new domain decomposition method for nonlinear finite element analysis introducing the concept of dual partition super-elements. The method extends ideas from the displacement frame method and is ideally suited for parallel nonlinear static/dynamic analysis of structural systems. In the new method, domain decomposition is realized by replacing one or more subdomains in a "parent system," each with a placeholder super-element, where the subdomains are processed separately as "child partitions," each wrapped by a dual super-element along the partition boundary. The analysis of the overall system, including the satisfaction of equilibrium and compatibility at all partition boundaries, is realized through direct communication between all pairs of placeholder and dual super-elements. The proposed method has particular advantages for matrix solution methods based on the frontal scheme, and can be readily implemented for existing finite element analysis programs to achieve parallelization on distributed memory systems with minimal intervention, thus overcoming memory bottlenecks typically faced in the analysis of large-scale problems. Several examples are presented in this article which demonstrate the computational benefits of the proposed parallel domain decomposition approach and its applicability to the nonlinear structural analysis of realistic structural systems.

  10. Peer Review-Based Scripted Collaboration to Support Domain-Specific and Domain-General Knowledge Acquisition in Computer Science

    ERIC Educational Resources Information Center

    Demetriadis, Stavros; Egerter, Tina; Hanisch, Frank; Fischer, Frank

    2011-01-01

    This study investigates the effectiveness of using peer review in the context of scripted collaboration to foster both domain-specific and domain-general knowledge acquisition in the computer science domain. Using a one-factor design with a script and a control condition, students worked in small groups on a series of computer science problems…

  11. 37 CFR 201.26 - Recordation of documents pertaining to computer shareware and donation of public domain computer...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... pertaining to computer shareware and donation of public domain computer software. 201.26 Section 201.26... of public domain computer software. (a) General. This section prescribes the procedures for... software under section 805 of Public Law 101-650, 104 Stat. 5089 (1990). Documents recorded in the...

  12. A methodology for extending domain coverage in SemRep.

    PubMed

    Rosemblat, Graciela; Shin, Dongwook; Kilicoglu, Halil; Sneiderman, Charles; Rindflesch, Thomas C

    2013-12-01

    We describe a domain-independent methodology to extend SemRep coverage beyond the biomedical domain. SemRep, a natural language processing application originally designed for biomedical texts, uses the knowledge sources provided by the Unified Medical Language System (UMLS©). Ontological and terminological extensions to the system are needed in order to support other areas of knowledge. We extended SemRep's application by developing a semantic representation of a previously unsupported domain. This was achieved by adapting well-known ontology engineering phases and integrating them with the UMLS knowledge sources on which SemRep crucially depends. While the process to extend SemRep coverage has been successfully applied in earlier projects, this paper presents in detail the step-wise approach we followed and the mechanisms implemented. A case study in the field of medical informatics illustrates how the ontology engineering phases have been adapted for optimal integration with the UMLS. We provide qualitative and quantitative results, which indicate the validity and usefulness of our methodology. Published by Elsevier Inc.

  13. From computers to cultivation: reconceptualizing evolutionary psychology.

    PubMed

    Barrett, Louise; Pollet, Thomas V; Stulp, Gert

    2014-01-01

    Does evolutionary theorizing have a role in psychology? This is a more contentious issue than one might imagine, given that, as evolved creatures, the answer must surely be yes. The contested nature of evolutionary psychology lies not in our status as evolved beings, but in the extent to which evolutionary ideas add value to studies of human behavior, and the rigor with which these ideas are tested. This, in turn, is linked to the framework in which particular evolutionary ideas are situated. While the framing of the current research topic places the brain-as-computer metaphor in opposition to evolutionary psychology, the most prominent school of thought in this field (born out of cognitive psychology, and often known as the Santa Barbara school) is entirely wedded to the computational theory of mind as an explanatory framework. Its unique aspect is to argue that the mind consists of a large number of functionally specialized (i.e., domain-specific) computational mechanisms, or modules (the massive modularity hypothesis). Far from offering an alternative to, or an improvement on, the current perspective, we argue that evolutionary psychology is a mainstream computational theory, and that its arguments for domain-specificity often rest on shaky premises. We then go on to suggest that the various forms of e-cognition (i.e., embodied, embedded, enactive) represent a true alternative to standard computational approaches, with an emphasis on "cognitive integration" or the "extended mind hypothesis" in particular. We feel this offers the most promise for human psychology because it incorporates the social and historical processes that are crucial to human "mind-making" within an evolutionarily informed framework. In addition to linking to other research areas in psychology, this approach is more likely to form productive links to other disciplines within the social sciences, not least by encouraging a healthy pluralism in approach.

  14. Time-Domain Computation Of Electromagnetic Fields In MMICs

    NASA Technical Reports Server (NTRS)

    Lansing, Faiza S.; Rascoe, Daniel L.

    1995-01-01

    Maxwell's equations solved on three-dimensional, conformed orthogonal grids by finite-difference techniques. Method of computing frequency-dependent electrical parameters of monolithic microwave integrated circuit (MMIC) involves time-domain computation of propagation of electromagnetic field in response to excitation by single pulse at input terminal, followed by computation of Fourier transforms to obtain frequency-domain response from time-domain response. Parameters computed include electric and magnetic fields, voltages, currents, impedances, scattering parameters, and effective dielectric constants. Powerful and efficient means for analyzing performance of even complicated MMIC.
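    The pulse-then-Fourier-transform procedure can be illustrated on a toy system: a discrete first-order low-pass filter standing in for the full electromagnetic solver. This is not Maxwell's equations or an MMIC model; all parameters are illustrative. The point is that one time-domain run excited by a single pulse yields, after an FFT ratio, the response at every frequency resolved by the record.

```python
import numpy as np

# Drive a first-order low-pass system with a single input pulse, record
# the time-domain response, and take the ratio of FFTs: output spectrum
# over input spectrum is the frequency-domain transfer function.
n, dt, tau = 4096, 1e-3, 0.05
alpha = dt / (tau + dt)                 # discrete low-pass coefficient
x = np.zeros(n); x[0] = 1.0             # single excitation pulse
y = np.zeros(n)
y[0] = alpha * x[0]
for k in range(1, n):
    y[k] = y[k - 1] + alpha * (x[k] - y[k - 1])

H = np.fft.rfft(y) / np.fft.rfft(x)     # frequency response, one run
f = np.fft.rfftfreq(n, dt)
H_exact = 1.0 / (1.0 + 2j * np.pi * f * tau)   # analytic low-pass
```

    The computed |H| matches the analytic first-order response closely across the resolved band, mirroring how a single time-domain MMIC simulation is post-processed into broadband scattering parameters and impedances.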

  15. Structure-function analysis of the auxilin J-domain reveals an extended Hsc70 interaction interface.

    PubMed

    Jiang, Jianwen; Taylor, Alexander B; Prasad, Kondury; Ishikawa-Brush, Yumiko; Hart, P John; Lafer, Eileen M; Sousa, Rui

    2003-05-20

    J-domains are widespread protein interaction modules involved in recruiting and stimulating the activity of Hsp70 family chaperones. We have determined the crystal structure of the J-domain of auxilin, a protein which is involved in uncoating clathrin-coated vesicles. Comparison to the known structures of J-domains from four other proteins reveals that the auxilin J-domain is the most divergent of all J-domain structures described to date. In addition to the canonical J-domain features described previously, the auxilin J-domain contains an extra N-terminal helix and a long loop inserted between helices I and II. The latter loop extends the positively charged surface which forms the Hsc70 binding site, and is shown by directed mutagenesis and surface plasmon resonance to contain side chains important for binding to Hsc70.

  16. Terascale Optimal PDE Simulations (TOPS) Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Professor Olof B. Widlund

    2007-07-09

    Our work has focused on the development and analysis of domain decomposition algorithms for a variety of problems arising in continuum mechanics modeling. In particular, we have extended and analyzed FETI-DP and BDDC algorithms; these iterative solvers were first introduced and studied by Charbel Farhat and his collaborators, see [11, 45, 12], and by Clark Dohrmann of SANDIA, Albuquerque, see [43, 2, 1], respectively. These two closely related families of methods are of particular interest since they are used more extensively than other iterative substructuring methods to solve very large and difficult problems. Thus, the FETI algorithms are part of the SALINAS system developed by the SANDIA National Laboratories for very large scale computations, and as already noted, BDDC was first developed by a SANDIA scientist, Dr. Clark Dohrmann. The FETI algorithms are also making inroads in commercial engineering software systems. We also note that the analysis of these algorithms poses very real mathematical challenges. The success in developing this theory has, in several instances, led to significant improvements in the performance of these algorithms. A very desirable feature of these iterative substructuring and other domain decomposition algorithms is that they respect the memory hierarchy of modern parallel and distributed computing systems, which is essential for approaching peak floating point performance. The development of improved methods, together with more powerful computer systems, is making it possible to carry out simulations in three dimensions, with quite high resolution, relatively easily. This work is supported by high quality software systems, such as Argonne's PETSc library, which facilitates code development as well as the access to a variety of parallel and distributed computer systems.
    The success in finding scalable and robust domain decomposition algorithms for very large numbers of processors and very large finite element problems is, e.g., illustrated in [24, 25, 26]. This work is based on [29, 31]. Our work over these five and a half years has, in our opinion, helped advance the knowledge of domain decomposition methods significantly. We see these methods as providing valuable alternatives to other iterative methods, in particular, those based on multi-grid. In our opinion, our accomplishments also match the goals of the TOPS project quite closely.

  17. Synthesizing 3D Surfaces from Parameterized Strip Charts

    NASA Technical Reports Server (NTRS)

    Robinson, Peter I.; Gomez, Julian; Morehouse, Michael; Gawdiak, Yuri

    2004-01-01

    We believe 3D information visualization has the power to unlock new levels of productivity in the monitoring and control of complex processes. Our goal is to provide visual methods to allow for rapid human insight into systems consisting of thousands to millions of parameters. We explore this hypothesis in two complex domains: NASA program management and NASA International Space Station (ISS) spacecraft computer operations. We seek to extend a common form of visualization called the strip chart from 2D to 3D. A strip chart can display the time series progression of a parameter and allows for trends and events to be identified. Strip charts can be overlaid when multiple parameters need to be visualized in order to correlate their events. When many parameters are involved, the direct overlaying of strip charts can become confusing and may not fully utilize the graphing area to convey the relationships between the parameters. We provide a solution to this problem by generating 3D surfaces from parameterized strip charts. The 3D surface utilizes significantly more screen area to illustrate the differences in the parameters and the overlaid strip charts, and it can rapidly be scanned by humans to gain insight. The selection of the third dimension must be a parallel or parameterized homogeneous resource in the target domain, defined using a finite, ordered, enumerated type, and not a heterogeneous type. We demonstrate our concepts with examples from the NASA program management domain (assessing the state of many plans) and the computers of the ISS (assessing the state of many computers). We identify 2D strip charts in each domain and show how to construct the corresponding 3D surfaces. The user can navigate the surface, zooming in on regions of interest, setting a mark and drilling down to source documents from which the data points have been derived. We close by discussing design issues, related work, and implementation challenges.

  18. Application of Linear Discriminant Analysis in Dimensionality Reduction for Hand Motion Classification

    NASA Astrophysics Data System (ADS)

    Phinyomark, A.; Hu, H.; Phukpattaranont, P.; Limsakul, C.

    2012-01-01

    The classification of upper-limb movements based on surface electromyography (EMG) signals is an important issue in the control of assistive devices and rehabilitation systems. Increasing the number of EMG channels and features in order to increase the number of control commands can yield a high-dimensional feature vector. To cope with the accuracy and computation problems associated with high dimensionality, it is commonplace to apply a processing step that transforms the data to a space of significantly lower dimensions with only a limited loss of useful information. Linear discriminant analysis (LDA) has been successfully applied as an EMG feature projection method. Recently, a number of extended LDA-based algorithms have been proposed, which are more competitive than classical LDA in terms of both classification accuracy and computational cost. This paper presents the findings of a comparative study of classical LDA and five extended LDA methods. From a quantitative comparison based on seven multi-feature sets, three extended LDA-based algorithms, consisting of uncorrelated LDA, orthogonal LDA and orthogonal fuzzy neighborhood discriminant analysis, produce better class separability when compared with a baseline system (without feature projection), principal component analysis (PCA), and classical LDA. Based on 7-dimensional time-domain and time-scale feature vectors, these methods achieved 95.2% and 93.2% classification accuracy, respectively, by using a linear discriminant classifier.
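    A minimal numpy-only sketch of classical (Fisher) LDA as a feature-projection step may help; this is not any of the extended variants compared in the paper, and the data and names are synthetic.

```python
import numpy as np

# Classical LDA: find directions maximizing between-class scatter
# relative to within-class scatter, then project features onto them.
def lda_fit(X, y, n_components=1):
    classes = np.unique(y)
    mean = X.mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))  # within-class scatter
    Sb = np.zeros_like(Sw)                   # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        d = (mc - mean)[:, None]
        Sb += len(Xc) * (d @ d.T)
    # generalized eigenproblem Sb w = l Sw w via Sw^{-1} Sb
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[:n_components]]

rng = np.random.default_rng(1)
X0 = rng.normal([0, 0, 0, 0], 1.0, (200, 4))   # class 0 "features"
X1 = rng.normal([2, 2, 0, 0], 1.0, (200, 4))   # class 1, shifted mean
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)
W = lda_fit(X, y)          # 4-D feature vector -> 1-D discriminant axis
z = X @ W
```

    The projection collapses the informative directions into a single axis on which the two classes separate cleanly, which is the dimensionality-reduction role LDA plays ahead of the linear discriminant classifier in the study.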

  19. Three-Dimensional User Interfaces for Immersive Virtual Reality

    NASA Technical Reports Server (NTRS)

    vanDam, Andries

    1997-01-01

    The focus of this grant was to experiment with novel user interfaces for immersive Virtual Reality (VR) systems, and thus to advance the state of the art of user interface technology for this domain. Our primary test application was a scientific visualization application for viewing Computational Fluid Dynamics (CFD) datasets. This technology has been transferred to NASA via periodic status reports and papers relating to this grant that have been published in conference proceedings. This final report summarizes the research completed over the past year, and extends last year's final report of the first three years of the grant.

  20. Design criteria and eigensequence plots for satellite computed tomography

    NASA Technical Reports Server (NTRS)

    Wahba, G.

    1983-01-01

    The use of the degrees of freedom for signal is proposed as a design criterion for comparing different designs for satellite and other measuring systems. It is also proposed that certain eigensequence plots be examined at the design stage along with appropriate estimates of the parameter lambda playing the role of noise-to-signal ratio. The degrees of freedom for signal and the eigensequence plots may be determined using prior information in the spectral domain which is presently available, along with a description of the system and simulated data for estimating lambda. This work extends the 1972 work of Weinreb and Crosby.
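    A sketch of the criterion, following the common formulation in the smoothing-spline literature rather than this paper's exact estimator: the degrees of freedom for signal is the trace of the influence (smoother) matrix, which in its eigenbasis is a sum of per-mode shrinkage factors. Names and values below are illustrative.

```python
import numpy as np

# Degrees of freedom for signal: each eigenmode of the estimator
# contributes l / (l + lambda), i.e. the fraction of that mode
# attributed to signal rather than noise.
def dof_signal(eigenvalues, lam):
    ev = np.asarray(eigenvalues, dtype=float)
    return float(np.sum(ev / (ev + lam)))

print(dof_signal([1.0, 1.0, 1.0, 1.0], 1.0))  # -> 2.0: four modes, each half signal
```

    Larger lambda (more noise relative to signal) shrinks every mode and lowers the degrees of freedom, so competing measurement-system designs can be ranked by how many effective signal modes they retain at the estimated lambda.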


  2. Nonlinear (time domain) and linearized (time and frequency domain) solutions to the compressible Euler equations in conservation law form

    NASA Technical Reports Server (NTRS)

    Sreenivas, Kidambi; Whitfield, David L.

    1995-01-01

    Two linearized solvers (time and frequency domain) based on a high resolution numerical scheme are presented. The basic approach is to linearize the flux vector by expressing it as a sum of a mean and a perturbation. This allows the governing equations to be maintained in conservation law form. A key difference between the time and frequency domain computations is that the frequency domain computations require only one grid block irrespective of the interblade phase angle for which the flow is being computed. As a result of this and due to the fact that the governing equations for this case are steady, frequency domain computations are substantially faster than the corresponding time domain computations. The linearized equations are used to compute flows in turbomachinery blade rows (cascades) arising due to blade vibrations. Numerical solutions are compared to linear theory (where available) and to numerical solutions of the nonlinear Euler equations.
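    The mean-plus-perturbation linearization can be sketched in standard notation (illustrative, not copied from the paper):

```latex
% Flux-Jacobian linearization about a mean state \bar{Q},
% with Q = \bar{Q} + Q' and |Q'| small:
F(Q) \approx F(\bar{Q}) + A(\bar{Q})\,Q',
\qquad A(\bar{Q}) = \left.\frac{\partial F}{\partial Q}\right|_{\bar{Q}}
```

    Substituting a harmonic perturbation Q' = \hat{Q} e^{i\omega t} at a fixed interblade phase angle turns time derivatives into multiplications by i\omega, leaving a steady complex system; this is why the frequency-domain computations need only one grid block and run substantially faster than the time-domain ones.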

  3. The Structure of the Plakin Domain of Plectin Reveals an Extended Rod-like Shape*

    PubMed Central

    Carballido, Ana M.

    2016-01-01

    Plakins are large multi-domain proteins that interconnect cytoskeletal structures. Plectin is a prototypical plakin that tethers intermediate filaments to membrane-associated complexes. Most plakins contain a plakin domain formed by up to nine spectrin repeats (SR1–SR9) and an SH3 domain. The plakin domains of plectin and other plakins harbor binding sites for junctional proteins. We have combined x-ray crystallography with small angle x-ray scattering (SAXS) to elucidate the structure of the plakin domain of plectin, extending our previous analysis of the SR1 to SR5 region. Two crystal structures of the SR5-SR6 region allowed us to characterize its uniquely wide inter-repeat conformational variability. We also report the crystal structures of the SR7-SR8 region, refined to 1.8 Å, and the SR7–SR9 at lower resolution. The SR7–SR9 region, which is conserved in all other plakin domains, forms a rigid segment stabilized by uniquely extensive inter-repeat contacts mediated by unusually long helices in SR8 and SR9. Using SAXS we show that in solution the SR3–SR6 and SR7–SR9 regions are rod-like segments and that SR3–SR9 of plectin has an extended shape with a small central kink. Other plakins, such as bullous pemphigoid antigen 1 and microtubule and actin cross-linking factor 1, are likely to have similar extended plakin domains. In contrast, desmoplakin has a two-segment structure with a central flexible hinge. The continuous versus segmented structures of the plakin domains of plectin and desmoplakin give insight into how different plakins might respond to tension and transmit mechanical signals. PMID:27413182

  4. Multi-region approach to free-boundary three-dimensional tokamak equilibria and resistive wall instabilities

    NASA Astrophysics Data System (ADS)

    Ferraro, N. M.; Jardin, S. C.; Lao, L. L.; Shephard, M. S.; Zhang, F.

    2016-05-01

    Free-boundary 3D tokamak equilibria and resistive wall instabilities are calculated using a new resistive wall model in the two-fluid M3D-C1 code. In this model, the resistive wall and surrounding vacuum region are included within the computational domain. This implementation contrasts with the method typically used in fluid codes in which the resistive wall is treated as a boundary condition on the computational domain boundary and has the advantage of maintaining purely local coupling of mesh elements. This new capability is used to simulate perturbed, free-boundary non-axisymmetric equilibria; the linear evolution of resistive wall modes; and the linear and nonlinear evolution of axisymmetric vertical displacement events (VDEs). Calculated growth rates for a resistive wall mode with arbitrary wall thickness are shown to agree well with the analytic theory. Equilibrium and VDE calculations are performed in diverted tokamak geometry, at physically realistic values of dissipation, and with resistive walls of finite width. Simulations of a VDE disruption extend into the current-quench phase, in which the plasma becomes limited by the first wall, and strong currents are observed to flow in the wall, in the SOL, and from the plasma to the wall.

  5. DICOM relay over the cloud.

    PubMed

    Silva, Luís A Bastião; Costa, Carlos; Oliveira, José Luis

    2013-05-01

    Healthcare institutions worldwide have adopted picture archiving and communication system (PACS) for enterprise access to images, relying on Digital Imaging Communication in Medicine (DICOM) standards for data exchange. However, communication over a wider domain of independent medical institutions is not well standardized. A DICOM-compliant bridge was developed for extending and sharing DICOM services across healthcare institutions without requiring complex network setups or dedicated communication channels. A set of DICOM routers interconnected through a public cloud infrastructure was implemented to support medical image exchange among institutions. Despite the advantages of cloud computing, new challenges were encountered regarding data privacy, particularly when medical data are transmitted over different domains. To address this issue, a solution was introduced by creating a ciphered data channel between the entities sharing DICOM services. Two main DICOM services were implemented in the bridge: Storage and Query/Retrieve. The performance measures demonstrated that it is straightforward to exchange information and processes between several institutions. The solution can be integrated with any currently installed PACS-DICOM infrastructure. This method works transparently with well-known cloud service providers. Cloud computing was introduced to augment enterprise PACS by providing standard medical imaging services across different institutions, offering communication privacy and enabling creation of wider PACS scenarios with suitable technical solutions.

  6. A hierarchical, ontology-driven Bayesian concept for ubiquitous medical environments--a case study for pulmonary diseases.

    PubMed

    Maragoudakis, Manolis; Lymberopoulos, Dimitrios; Fakotakis, Nikos; Spiropoulos, Kostas

    2008-01-01

    The present paper extends work on an existing computer-based Decision Support System (DSS) that aims to provide assistance to physicians with regard to pulmonary diseases. The extension allows for a hierarchical decomposition of the task, at different levels of domain granularity, using a novel approach: Hierarchical Bayesian Networks. The proposed framework uses data from various networking appliances such as mobile phones and wireless medical sensors to establish a ubiquitous environment for medical treatment of pulmonary diseases. Domain knowledge is encoded at the upper levels of the hierarchy, thus making the process of generalization easier to accomplish. The experiments were carried out at the Pulmonary Department, University Regional Hospital of Patras, Patras, Greece. The results supported our initial beliefs about the ability of Bayesian networks to provide an effective, yet semantically oriented, means of prognosis and reasoning under conditions of uncertainty.

  7. An efficient method for the calculation of mean extinction. I - The analyticity of the complex extinction efficiency of homogeneous spheres

    NASA Astrophysics Data System (ADS)

    Xing, Zhang-Fan; Greenberg, J. M.

    1992-11-01

    Results of an investigation of the analyticity of the complex extinction efficiency Q-tilde(ext) in different parameter domains are presented. In the size parameter domain, x = omega(a/c), numerical Hilbert transforms are used to study the analyticity properties of Q-tilde(ext) for homogeneous spheres. Q-tilde(ext) is found to be analytic in the entire lower complex x-tilde-plane when the refractive index, m, is fixed as a real constant (pure scattering) or infinity (perfect conductor); poles, however, appear in the left side of the lower complex x-tilde-plane as m becomes complex. The computation of the mean extinction produced by an extended size distribution of particles may be conveniently and accurately approximated using only a few values of the complex extinction evaluated in the complex plane.
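The numerical Hilbert transforms mentioned in this record exploit the fact that the real and imaginary parts of an analytic response function are Hilbert-transform pairs. As a rough illustration of the numerical tool (not the authors' actual code), a discrete Hilbert transform of a periodic signal can be computed via the FFT:

```python
import numpy as np

def hilbert_transform(x):
    """Discrete Hilbert transform of a real periodic signal via the FFT.

    Positive-frequency components are multiplied by -i, negative-frequency
    components by +i; DC (and Nyquist, for even lengths) are zeroed.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    X = np.fft.fft(x)
    mult = np.zeros(n, dtype=complex)
    mult[1:(n + 1) // 2] = -1j     # positive frequencies
    mult[n // 2 + 1:] = 1j         # negative frequencies
    return np.fft.ifft(X * mult).real
```

With this convention the Hilbert transform maps cos to sin, which is the identity an analyticity check of Q̃(ext) would rely on.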

  8. Serial interactome capture of the human cell nucleus.

    PubMed

    Conrad, Thomas; Albrecht, Anne-Susann; de Melo Costa, Veronica Rodrigues; Sauer, Sascha; Meierhofer, David; Ørom, Ulf Andersson

    2016-04-04

    Novel RNA-guided cellular functions are paralleled by an increasing number of RNA-binding proteins (RBPs). Here we present 'serial RNA interactome capture' (serIC), a multiple purification procedure of ultraviolet-crosslinked poly(A)-RNA-protein complexes that enables global RBP detection with high specificity. We apply serIC to the nuclei of proliferating K562 cells to obtain the first human nuclear RNA interactome. The domain composition of the 382 identified nuclear RBPs markedly differs from previous IC experiments, including few factors without known RNA-binding domains that are in good agreement with computationally predicted RNA binding. serIC extends the number of DNA-RNA-binding proteins (DRBPs), and reveals a network of RBPs involved in p53 signalling and double-strand break repair. serIC is an effective tool to couple global RBP capture with additional selection or labelling steps for specific detection of highly purified RBPs.

  9. Coupling lattice Boltzmann and continuum equations for flow and reactive transport in porous media.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coon, Ethan; Porter, Mark L.; Kang, Qinjun

    2012-06-18

    In spatially and temporally localized instances, capturing sub-reservoir scale information is necessary; capturing it everywhere is neither necessary nor computationally feasible. At the pore scale, the lattice Boltzmann method (LBM) provides an extremely scalable, efficient way of solving the Navier-Stokes equations on complex geometries. Pore-scale and continuum-scale systems are coupled via domain decomposition: by leveraging the interpolations implied by the pore-scale and continuum-scale discretizations, overlapping Schwarz domain decomposition is used to ensure continuity of pressure and flux. This approach is demonstrated on a fractured medium, in which the Navier-Stokes equations are solved within the fracture while Darcy's equation is solved away from it. Coupling reactive transport to pore-scale flow simulators allows such hybrid approaches to be extended to solve multi-scale reactive transport.
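The overlapping Schwarz coupling described in this record can be illustrated on a model problem. The sketch below is hypothetical code, not taken from the cited work: it alternately solves a 1D Poisson problem on two overlapping subdomains, exchanging Dirichlet data at the interfaces until the two solutions agree in the overlap (the continuum analogue of matching pressure across the pore-scale/Darcy interface).

```python
import numpy as np

def solve_dirichlet(f, a, b, ua, ub, n):
    """Solve -u'' = f on [a, b] with u(a)=ua, u(b)=ub by central finite differences."""
    x = np.linspace(a, b, n)
    h = x[1] - x[0]
    # Tridiagonal system for the n-2 interior nodes
    A = (np.diag(2.0 * np.ones(n - 2))
         + np.diag(-np.ones(n - 3), 1)
         + np.diag(-np.ones(n - 3), -1))
    rhs = h * h * f(x[1:-1])
    rhs[0] += ua           # boundary terms moved to the right-hand side
    rhs[-1] += ub
    u = np.empty(n)
    u[0], u[-1] = ua, ub
    u[1:-1] = np.linalg.solve(A, rhs)
    return x, u

def schwarz(f, iters=30, n=81):
    """Alternating overlapping Schwarz on [0, 0.6] and [0.4, 1] with u(0)=u(1)=0."""
    g1 = g2 = 0.0          # interface values at x=0.6 (domain 1) and x=0.4 (domain 2)
    for _ in range(iters):
        x1, u1 = solve_dirichlet(f, 0.0, 0.6, 0.0, g1, n)
        g2 = np.interp(0.4, x1, u1)        # pass Dirichlet data to domain 2
        x2, u2 = solve_dirichlet(f, 0.4, 1.0, g2, 0.0, n)
        g1 = np.interp(0.6, x2, u2)        # and back to domain 1
    return x1, u1, x2, u2
```

For 1D Laplace-type problems the iteration contracts geometrically with a rate set by the overlap width, so a few dozen sweeps suffice here.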

  10. Data-Flow Based Model Analysis

    NASA Technical Reports Server (NTRS)

    Saad, Christian; Bauer, Bernhard

    2010-01-01

    The concept of (meta) modeling combines an intuitive way of formalizing the structure of an application domain with a high expressiveness that makes it suitable for a wide variety of use cases and has therefore become an integral part of many areas in computer science. While the definition of modeling languages through the use of meta models, e.g. in Unified Modeling Language (UML), is a well-understood process, their validation and the extraction of behavioral information is still a challenge. In this paper we present a novel approach for dynamic model analysis along with several fields of application. Examining the propagation of information along the edges and nodes of the model graph makes it possible to extend and simplify the definition of semantic constraints compared to the capabilities offered by, e.g., the Object Constraint Language. Performing a flow-based analysis also enables the simulation of dynamic behavior, thus providing an "abstract interpretation"-like analysis method for the modeling domain.
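The flow-based analysis this record describes — propagating information along the edges of a model graph until it stabilises — boils down to a fixed-point computation. A minimal sketch (hypothetical Python, not the authors' model-driven toolchain), where each node's fact set grows monotonically with everything flowing in from its predecessors:

```python
def propagate(nodes, edges, init):
    """Fixed-point propagation of facts along graph edges.

    nodes: iterable of node ids; edges: list of (src, dst) pairs;
    init: dict mapping node -> iterable of initially known facts.
    Iterates until no node's fact set changes.
    """
    preds = {n: [] for n in nodes}
    for s, d in edges:
        preds[d].append(s)
    facts = {n: set(init.get(n, ())) for n in nodes}
    changed = True
    while changed:
        changed = False
        for n in nodes:
            merged = set(facts[n])
            for p in preds[n]:
                merged |= facts[p]        # union over incoming edges
            if merged != facts[n]:
                facts[n] = merged
                changed = True
    return facts
```

Because the transfer function is monotone over a finite lattice of fact sets, the loop is guaranteed to terminate at the least fixed point.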

  11. Parallel algorithm for multiscale atomistic/continuum simulations using LAMMPS

    NASA Astrophysics Data System (ADS)

    Pavia, F.; Curtin, W. A.

    2015-07-01

    Deformation and fracture processes in engineering materials often require simultaneous descriptions over a range of length and time scales, with each scale using a different computational technique. Here we present a high-performance parallel 3D computing framework for executing large multiscale studies that couple an atomic domain, modeled using molecular dynamics and a continuum domain, modeled using explicit finite elements. We use the robust Coupled Atomistic/Discrete-Dislocation (CADD) displacement-coupling method, but without the transfer of dislocations between atoms and continuum. The main purpose of the work is to provide a multiscale implementation within an existing large-scale parallel molecular dynamics code (LAMMPS) that enables use of all the tools associated with this popular open-source code, while extending CADD-type coupling to 3D. Validation of the implementation includes the demonstration of (i) stability in finite-temperature dynamics using Langevin dynamics, (ii) elimination of wave reflections due to large dynamic events occurring in the MD region and (iii) the absence of spurious forces acting on dislocations due to the MD/FE coupling, for dislocations further than 10 Å from the coupling boundary. A first non-trivial example application of dislocation glide and bowing around obstacles is shown, for dislocation lengths of ∼50 nm using fewer than 1 000 000 atoms but reproducing results of extremely large atomistic simulations at much lower computational cost.

  12. A UML profile for the OBO relation ontology.

    PubMed

    Guardia, Gabriela D A; Vêncio, Ricardo Z N; de Farias, Cléver R G

    2012-01-01

    Ontologies have increasingly been used in the biomedical domain, which has prompted the emergence of different initiatives to facilitate their development and integration. The Open Biological and Biomedical Ontologies (OBO) Foundry consortium provides a repository of life-science ontologies, which are developed according to a set of shared principles. This consortium has developed an ontology called OBO Relation Ontology aiming at standardizing the different types of biological entity classes and associated relationships. Since ontologies are primarily intended to be used by humans, the use of graphical notations for ontology development facilitates the capture, comprehension and communication of knowledge between its users. However, OBO Foundry ontologies are captured and represented basically using text-based notations. The Unified Modeling Language (UML) provides a standard and widely-used graphical notation for modeling computer systems. UML provides a well-defined set of modeling elements, which can be extended using a built-in extension mechanism named Profile. Thus, this work aims at developing a UML profile for the OBO Relation Ontology to provide a domain-specific set of modeling elements that can be used to create standard UML-based ontologies in the biomedical domain.

  13. Predicting chroma from luma with frequency domain intra prediction

    NASA Astrophysics Data System (ADS)

    Egge, Nathan E.; Valin, Jean-Marc

    2015-03-01

    This paper describes a technique for performing intra prediction of the chroma planes based on the reconstructed luma plane in the frequency domain. This prediction exploits the fact that while RGB to YUV color conversion has the property that it decorrelates the color planes globally across an image, there is still some correlation locally at the block level [1]. Previous proposals compute a linear model of the spatial relationship between the luma plane (Y) and the two chroma planes (U and V) [2]. In codecs that use lapped transforms this is not possible since transform support extends across the block boundaries [3] and thus neighboring blocks are unavailable during intra-prediction. We design a frequency domain intra predictor for chroma that exploits the same local correlation with lower complexity than the spatial predictor and which works with lapped transforms. We then describe a low-complexity algorithm that directly uses luma coefficients as a chroma predictor based on gain-shape quantization and band partitioning. An experiment is performed that compares these two techniques inside the experimental Daala video codec and shows the lower complexity algorithm to be a better chroma predictor.
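The spatial-domain linear model referenced above (the approach that lapped transforms rule out) fits chroma as an affine function of co-located luma, with the model parameters estimated by least squares over neighbouring reconstructed samples. A sketch with illustrative names; the Daala frequency-domain predictor described in the paper instead scales luma transform coefficients per band:

```python
import numpy as np

def cfl_params(luma_nbr, chroma_nbr):
    """Least-squares fit of chroma ~ alpha * luma + beta over
    neighbouring reconstructed samples."""
    l = np.asarray(luma_nbr, dtype=float)
    c = np.asarray(chroma_nbr, dtype=float)
    var = l.var()
    # Closed-form simple linear regression: alpha = cov(l, c) / var(l)
    alpha = 0.0 if var == 0 else ((l * c).mean() - l.mean() * c.mean()) / var
    beta = c.mean() - alpha * l.mean()
    return alpha, beta

def predict_chroma(luma_block, alpha, beta):
    """Predict a chroma block from the co-located reconstructed luma block."""
    return alpha * np.asarray(luma_block, dtype=float) + beta
```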

  14. Human immunoglobulin E flexes between acutely bent and extended conformations

    PubMed Central

    Keeble, Anthony H; Wright, Michael; Cain, Katharine; Hailu, Hanna; Oxbrow, Amanda; Delgado, Jean; Shuttleworth, Lindsay K; Kao, Michael W-P; McDonnell, James M; Beavil, Andrew J; Henry, Alistair J; Sutton, Brian J

    2014-01-01

    Crystallographic and solution studies have shown that IgE molecules are acutely bent in their Fc region. Crystal structures reveal the Cε2 domain pair folded back onto the Cε3-Cε4 domains, but is the molecule exclusively bent or can the Cε2 domains adopt extended conformations and even “flip” from one side of the molecule to the other? We report the crystal structure of IgE-Fc captured in a fully extended, symmetrical conformation and show by molecular dynamics, calorimetry, stopped-flow kinetic, SPR and FRET analyses, that the antibody can indeed adopt such extended conformations in solution. This diversity of conformational states available to IgE-Fc offers a new perspective on IgE function in allergen recognition, as part of the B cell receptor and as a therapeutic target in allergic disease. PMID:24632569

  15. The Application of Deterministic Spectral Domain Method to the Analysis of Planar Circuit Discontinuities on Open Substrates

    DTIC Science & Technology

    1990-08-01

    A deterministic formulation of the method of moments carried out in the spectral domain is extended to include the effects of two-dimensional, two-component current flow in planar transmission line discontinuities on open substrates. The method includes the effects of space... (Professor: Tatsuo Itoh)

  16. Compressional Alfvén eigenmodes in rotating spherical tokamak plasmas

    DOE PAGES

    Smith, H. M.; Fredrickson, E. D.

    2017-02-07

    Spherical tokamaks often have a considerable toroidal plasma rotation of several tens of kHz. Compressional Alfvén eigenmodes in such devices therefore experience a frequency shift which, if the plasma were rotating as a rigid body, would be a simple Doppler shift. However, since the rotation frequency depends on minor radius, the eigenmodes are affected in a more complicated way. The eigenmode solver CAE3B (Smith et al 2009 Plasma Phys. Control. Fusion 51 075001) has been extended to account for toroidal plasma rotation. The results show that the eigenfrequency shift due to rotation can be approximated by a rigid-body rotation with a frequency computed from a spatial average of the real rotation profile weighted with the eigenmode amplitude. To investigate the effect of extending the computational domain to the vessel wall, a simplified eigenmode equation, yet retaining plasma rotation, is solved by a modified version of the CAE code used in Fredrickson et al (2013 Phys. Plasmas 20 042112). Both solving the full eigenmode equation, as in the CAE3B code, and placing the boundary at the vessel wall, as in the CAE code, significantly influence the calculated eigenfrequencies.
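The rigid-body approximation described in this record amounts to an amplitude-weighted average of the rotation profile. A minimal sketch (a hypothetical helper, not the CAE3B implementation), using the squared eigenmode amplitude as the weight:

```python
import numpy as np

def effective_rotation(omega_rot, amplitude):
    """Rigid-body rotation frequency approximating the eigenfrequency shift:
    the rotation profile averaged with |A|^2 as the weight."""
    w = np.abs(np.asarray(amplitude, dtype=float)) ** 2
    return float(np.sum(w * np.asarray(omega_rot, dtype=float)) / np.sum(w))
```

A core-localised mode thus picks up the core rotation rate, while a radially broad mode sees something close to the plain volume average.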

  17. Extended reactance domain algorithms for DoA estimation onto an ESPAR antennas

    NASA Astrophysics Data System (ADS)

    Harabi, F.; Akkar, S.; Gharsallah, A.

    2016-07-01

    Based on an extended reactance domain (RD) covariance matrix, this article proposes new alternatives for direction-of-arrival (DoA) estimation of narrowband sources through electronically steerable parasitic array radiator (ESPAR) antennas. Because of the centro-symmetry of the classic ESPAR antenna, a unitary transformation is applied to the collected data, which allows a significant reduction in both computational cost and processing time, as well as an enhancement of the resolution capabilities of the proposed algorithms. Moreover, this article proposes a new approach for eigenvalue estimation through only a few linear operations. The DoA estimation algorithms based on this new approach show good behaviour with lower computational cost and processing time compared to other schemes based on the classic eigenvalue approach. The conducted simulations demonstrate that high-precision, high-resolution DoA estimation can be achieved, especially for very closely spaced sources and low source power, compared to the RD-MUSIC and RD-PM algorithms. The asymptotic behaviour of the proposed DoA estimators is analysed in various scenarios and compared with the Cramér-Rao bound (CRB). The simulations confirm the high resolution of the developed algorithms and the efficiency of the proposed approach.
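The computational saving from centro-symmetry comes from rotating the complex covariance matrix into a real one. The sketch below shows the standard unitary transformation used for centro-Hermitian matrices (a generic construction assumed here, not code from the article): after forward-backward averaging, the transformed covariance is real up to round-off, so eigendecompositions can run in real arithmetic.

```python
import numpy as np

def unitary_Q(m):
    """Sparse unitary matrix that maps centro-Hermitian matrices to real ones."""
    k = m // 2
    I, J = np.eye(k), np.fliplr(np.eye(k))
    if m % 2 == 0:
        top = np.hstack([I, 1j * I])
        bot = np.hstack([J, -1j * J])
        return np.vstack([top, bot]) / np.sqrt(2)
    mid = np.zeros((1, m), dtype=complex)
    mid[0, k] = np.sqrt(2)                      # extra middle row for odd m
    top = np.hstack([I, np.zeros((k, 1)), 1j * I])
    bot = np.hstack([J, np.zeros((k, 1)), -1j * J])
    return np.vstack([top, mid, bot]) / np.sqrt(2)

def real_covariance(R):
    """Forward-backward average R, then rotate it into the real domain."""
    m = R.shape[0]
    J = np.fliplr(np.eye(m))
    R_fb = 0.5 * (R + J @ R.conj() @ J)          # centro-Hermitian by construction
    Q = unitary_Q(m)
    C = Q.conj().T @ R_fb @ Q
    return C.real                                # imaginary part is ~0
```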

  18. Maximum-likelihood-based extended-source spatial acquisition and tracking for planetary optical communications

    NASA Astrophysics Data System (ADS)

    Tsou, Haiping; Yan, Tsun-Yee

    1999-04-01

    This paper describes an extended-source spatial acquisition and tracking scheme for planetary optical communications. This scheme uses the Sun-lit Earth image as the beacon signal, which can be computed according to the current Sun-Earth-Probe angle from a pre-stored Earth image or a recent snapshot taken by another Earth-orbiting satellite. Onboard the spacecraft, the reference image is correlated in the transform domain with the received image obtained from a detector array, which is assumed to have each of its pixels corrupted by an independent additive white Gaussian noise. The coordinate of the ground station is acquired and tracked, respectively, by an open-loop acquisition algorithm and a closed-loop tracking algorithm derived from the maximum likelihood criterion. As shown in the paper, the optimal spatial acquisition requires solving two nonlinear equations, or iteratively solving their linearized variants, to estimate the coordinate when translation in the relative positions of onboard and ground transceivers is considered. A similar linearization assumption leads to the closed-loop spatial tracking algorithm in which the loop feedback signals can be derived from the weighted transform-domain correlation. Numerical results using a sample Sun-lit Earth image demonstrate that sub-pixel resolutions can be achieved by this scheme in a high disturbance environment.

  19. The GENIUS Grid Portal and robot certificates: a new tool for e-Science

    PubMed Central

    Barbera, Roberto; Donvito, Giacinto; Falzone, Alberto; La Rocca, Giuseppe; Milanesi, Luciano; Maggi, Giorgio Pietro; Vicario, Saverio

    2009-01-01

    Background Grid technology is the computing model which allows users to share a wide plethora of distributed computational resources regardless of their geographical location. Up to now, the high security policy required to access distributed computing resources has been a rather significant limiting factor when trying to broaden the usage of Grids to a wide community of users. Grid security is indeed based on the Public Key Infrastructure (PKI) of X.509 certificates, and the procedure to obtain and manage those certificates is unfortunately not straightforward. A first step to make Grids more appealing for new users has recently been achieved with the adoption of robot certificates. Methods Robot certificates have recently been introduced to perform automated tasks on Grids on behalf of users. They are extremely useful, for instance, to automate grid service monitoring, data processing production, and distributed data collection systems. Basically, these certificates can be used to identify a person responsible for an unattended service or process acting as client and/or server. Robot certificates can be installed on a smart card and used behind a portal by everyone interested in running the related applications in a Grid environment using a user-friendly graphical interface. In this work, the GENIUS Grid Portal, powered by EnginFrame, has been extended to support the new authentication based on the adoption of these robot certificates. Results The work carried out and reported in this manuscript is particularly relevant for all users who are not familiar with personal digital certificates and the technical aspects of the Grid Security Infrastructure (GSI). The valuable benefits introduced by robot certificates in e-Science can thus be extended to users belonging to several scientific domains, providing an asset in raising Grid awareness among a wide number of potential users. 
Conclusion The adoption of Grid portals extended with robot certificates can really contribute to creating transparent access to computational resources of Grid Infrastructures, enhancing the spread of this new paradigm in researchers' working life to address new global scientific challenges. The evaluated solution can of course be extended to other portals, applications and scientific communities. PMID:19534747

  20. The GENIUS Grid Portal and robot certificates: a new tool for e-Science.

    PubMed

    Barbera, Roberto; Donvito, Giacinto; Falzone, Alberto; La Rocca, Giuseppe; Milanesi, Luciano; Maggi, Giorgio Pietro; Vicario, Saverio

    2009-06-16

    Grid technology is the computing model which allows users to share a wide plethora of distributed computational resources regardless of their geographical location. Up to now, the high security policy required to access distributed computing resources has been a rather significant limiting factor when trying to broaden the usage of Grids to a wide community of users. Grid security is indeed based on the Public Key Infrastructure (PKI) of X.509 certificates, and the procedure to obtain and manage those certificates is unfortunately not straightforward. A first step to make Grids more appealing for new users has recently been achieved with the adoption of robot certificates. Robot certificates have recently been introduced to perform automated tasks on Grids on behalf of users. They are extremely useful, for instance, to automate grid service monitoring, data processing production, and distributed data collection systems. Basically, these certificates can be used to identify a person responsible for an unattended service or process acting as client and/or server. Robot certificates can be installed on a smart card and used behind a portal by everyone interested in running the related applications in a Grid environment using a user-friendly graphical interface. In this work, the GENIUS Grid Portal, powered by EnginFrame, has been extended to support the new authentication based on the adoption of these robot certificates. The work carried out and reported in this manuscript is particularly relevant for all users who are not familiar with personal digital certificates and the technical aspects of the Grid Security Infrastructure (GSI). The valuable benefits introduced by robot certificates in e-Science can thus be extended to users belonging to several scientific domains, providing an asset in raising Grid awareness among a wide number of potential users. 
The adoption of Grid portals extended with robot certificates can really contribute to creating transparent access to computational resources of Grid Infrastructures, enhancing the spread of this new paradigm in researchers' working life to address new global scientific challenges. The evaluated solution can of course be extended to other portals, applications and scientific communities.

  1. Gravitational perturbations and metric reconstruction: Method of extended homogeneous solutions applied to eccentric orbits on a Schwarzschild black hole

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hopper, Seth; Evans, Charles R.

    2010-10-15

    We calculate the gravitational perturbations produced by a small mass in eccentric orbit about a much more massive Schwarzschild black hole and use the numerically computed perturbations to solve for the metric. The calculations are initially made in the frequency domain and provide Fourier-harmonic modes for the gauge-invariant master functions that satisfy inhomogeneous versions of the Regge-Wheeler and Zerilli equations. These gravitational master equations have specific singular sources containing both delta function and derivative-of-delta function terms. We demonstrate in this paper successful application of the method of extended homogeneous solutions, developed recently by Barack, Ori, and Sago, to handle source terms of this type. The method allows transformation back to the time domain, with exponential convergence of the partial mode sums that represent the field. This rapid convergence holds even in the region of r traversed by the point mass and includes the time-dependent location of the point mass itself. We present numerical results of mode calculations for certain orbital parameters, including highly accurate energy and angular momentum fluxes at infinity and at the black hole event horizon. We then address the issue of reconstructing the metric perturbation amplitudes from the master functions, the latter being weak solutions of a particular form to the wave equations. The spherical harmonic amplitudes that represent the metric in Regge-Wheeler gauge can themselves be viewed as weak solutions. They are in general a combination of (1) two differentiable solutions that adjoin at the instantaneous location of the point mass (a result that typically has order of continuity C^{-1}) and (2) (in some cases) a delta function distribution term with a computable time-dependent amplitude.

  2. A hybrid hydrostatic and non-hydrostatic numerical model for shallow flow simulations

    NASA Astrophysics Data System (ADS)

    Zhang, Jingxin; Liang, Dongfang; Liu, Hua

    2018-05-01

    Hydrodynamics of geophysical flows in oceanic shelves, estuaries, and rivers, are often studied by solving shallow water model equations. Although hydrostatic models are accurate and cost efficient for many natural flows, there are situations where the hydrostatic assumption is invalid, whereby a fully hydrodynamic model is necessary to increase simulation accuracy. There is growing interest in reducing the computational cost of non-hydrostatic pressure models in order to extend their applicability to large-scale flows with complex geometries. This study describes a hybrid hydrostatic and non-hydrostatic model to increase the efficiency of simulating shallow water flows. The basic numerical model is a three-dimensional hydrostatic model solved by the finite volume method (FVM) applied to unstructured grids. Herein, a second-order total variation diminishing (TVD) scheme is adopted. Using a predictor-corrector method to calculate the non-hydrostatic pressure, we extended the hydrostatic model to a fully hydrodynamic model. By localising the computational domain in the corrector step for non-hydrostatic pressure calculations, a hybrid model was developed. No special treatment of mode switching was required, and the developed numerical codes were highly efficient and robust. The hybrid model is applicable to the simulation of shallow flows when non-hydrostatic pressure is predominant only in the local domain. Beyond the non-hydrostatic domain, the hydrostatic model is still accurate. The applicability of the hybrid method was validated using several study cases.

  3. Advanced EMT and Phasor-Domain Hybrid Simulation with Simulation Mode Switching Capability for Transmission and Distribution Systems

    DOE PAGES

    Huang, Qiuhua; Vittal, Vijay

    2018-05-09

    Conventional electromagnetic transient (EMT) and phasor-domain hybrid simulation approaches presently exist for transmission system level studies. Their simulation efficiency is generally constrained by the EMT simulation. With an increasing number of distributed energy resources and non-conventional loads being installed in distribution systems, it is imperative to extend the hybrid simulation application to include distribution systems and integrated transmission and distribution systems. Meanwhile, it is equally important to improve the simulation efficiency as the modeling scope and complexity of the detailed system in the EMT simulation increases. To meet both requirements, this paper introduces an advanced EMT and phasor-domain hybrid simulation approach. This approach has two main features: 1) a comprehensive phasor-domain modeling framework which supports positive-sequence, three-sequence, three-phase and mixed three-sequence/three-phase representations and 2) a robust and flexible simulation mode switching scheme. The developed scheme enables simulation switching from hybrid simulation mode back to pure phasor-domain dynamic simulation mode to achieve significantly improved simulation efficiency. The proposed method has been tested on integrated transmission and distribution systems. In conclusion, the results show that with the developed simulation switching feature, the total computational time is significantly reduced compared to running the hybrid simulation for the whole simulation period, while maintaining good simulation accuracy.

  4. Advanced EMT and Phasor-Domain Hybrid Simulation with Simulation Mode Switching Capability for Transmission and Distribution Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Qiuhua; Vittal, Vijay

    Conventional electromagnetic transient (EMT) and phasor-domain hybrid simulation approaches presently exist for transmission system level studies. Their simulation efficiency is generally constrained by the EMT simulation. With an increasing number of distributed energy resources and non-conventional loads being installed in distribution systems, it is imperative to extend the hybrid simulation application to include distribution systems and integrated transmission and distribution systems. Meanwhile, it is equally important to improve the simulation efficiency as the modeling scope and complexity of the detailed system in the EMT simulation increases. To meet both requirements, this paper introduces an advanced EMT and phasor-domain hybrid simulation approach. This approach has two main features: 1) a comprehensive phasor-domain modeling framework which supports positive-sequence, three-sequence, three-phase and mixed three-sequence/three-phase representations and 2) a robust and flexible simulation mode switching scheme. The developed scheme enables simulation switching from hybrid simulation mode back to pure phasor-domain dynamic simulation mode to achieve significantly improved simulation efficiency. The proposed method has been tested on integrated transmission and distribution systems. In conclusion, the results show that with the developed simulation switching feature, the total computational time is significantly reduced compared to running the hybrid simulation for the whole simulation period, while maintaining good simulation accuracy.

  5. Transport of Space Environment Electrons: A Simplified Rapid-Analysis Computational Procedure

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Anderson, Brooke M.; Cucinotta, Francis A.; Wilson, John W.; Katz, Robert; Chang, C. K.

    2002-01-01

    A computational procedure for describing transport of electrons in condensed media has been formulated for application to effects and exposures from spectral distributions typical of electrons trapped in planetary magnetic fields. The procedure is based on earlier parameterizations established from numerous electron beam experiments. New parameterizations have been derived that logically extend the domain of application to low molecular weight (high hydrogen content) materials and higher energies (approximately 50 MeV). The production and transport of high energy photons (bremsstrahlung) generated in the electron transport processes have also been modeled using tabulated values of photon production cross sections. A primary purpose for developing the procedure has been to provide a means for rapidly performing numerous repetitive calculations essential for electron radiation exposure assessments for complex space structures. Several favorable comparisons have been made with previous calculations for typical space environment spectra, which have indicated that accuracy has not been substantially compromised at the expense of computational speed.

  6. Using animation quality metric to improve efficiency of global illumination computation for dynamic environments

    NASA Astrophysics Data System (ADS)

    Myszkowski, Karol; Tawara, Takehiro; Seidel, Hans-Peter

    2002-06-01

    In this paper, we consider applications of perception-based video quality metrics to improve the performance of global lighting computations for dynamic environments. For this purpose we extend the Visible Difference Predictor (VDP) developed by Daly to handle computer animations. We incorporate into the VDP the spatio-velocity CSF model developed by Kelly. The CSF model requires data on the velocity of moving patterns across the image plane. We use the 3D image warping technique to compensate for the camera motion, and we conservatively assume that the motion of animated objects (usually strong attractors of the visual attention) is fully compensated by the smooth pursuit eye motion. Our global illumination solution is based on stochastic photon tracing and takes advantage of temporal coherence of lighting distribution, by processing photons both in the spatial and temporal domains. The VDP is used to keep noise inherent in stochastic methods below the sensitivity level of the human observer. As a result a perceptually-consistent quality across all animation frames is obtained.

  7. Problem Solving and Computational Skill: Are They Shared or Distinct Aspects of Mathematical Cognition?

    PubMed Central

    Fuchs, Lynn S.; Fuchs, Douglas; Hamlett, Carol L.; Lambert, Warren; Stuebing, Karla; Fletcher, Jack M.

    2009-01-01

    The purpose of this study was to explore patterns of difficulty in 2 domains of mathematical cognition: computation and problem solving. Third graders (n = 924; 47.3% male) were representatively sampled from 89 classrooms; assessed on computation and problem solving; classified as having difficulty with computation, problem solving, both domains, or neither domain; and measured on 9 cognitive dimensions. Difficulty occurred across domains with the same prevalence as difficulty with a single domain; specific difficulty was distributed similarly across domains. Multivariate profile analysis on cognitive dimensions and chi-square tests on demographics showed that specific computational difficulty was associated with strength in language and weaknesses in attentive behavior and processing speed; problem-solving difficulty was associated with deficient language as well as race and poverty. Implications for understanding mathematics competence and for the identification and treatment of mathematics difficulties are discussed. PMID:20057912

  8. Structural Assembly of Multidomain Proteins and Protein Complexes Guided by the Overall Rotational Diffusion Tensor

    PubMed Central

    Ryabov, Yaroslav; Fushman, David

    2008-01-01

    We present a simple and robust approach that uses the overall rotational diffusion tensor as a structural constraint for domain positioning in multidomain proteins and protein-protein complexes. This method offers the possibility to use NMR relaxation data for detailed structure characterization of such systems provided the structures of individual domains are available. The proposed approach extends the concept of using long-range information contained in the overall rotational diffusion tensor. In contrast to the existing approaches, we use both the principal axes and principal values of protein’s rotational diffusion tensor to determine not only the orientation but also the relative positioning of the individual domains in a protein. This is achieved by finding the domain arrangement in a molecule that provides the best possible agreement with all components of the overall rotational diffusion tensor derived from experimental data. The accuracy of the proposed approach is demonstrated for two protein systems with known domain arrangement and parameters of the overall tumbling: the HIV-1 protease homodimer and Maltose Binding Protein. The accuracy of the method and its sensitivity to domain positioning is also tested using computer-generated data for three protein complexes, for which the experimental diffusion tensors are not available. In addition, the proposed method is applied here to determine, for the first time, the structure of both open and closed conformations of Lys48-linked di-ubiquitin chain, where domain motions render impossible accurate structure determination by other methods. The proposed method opens new avenues for improving structure characterization of proteins in solution. PMID:17550252

  9. Preliminary frequency-domain analysis for the reconstructed spatial resolution of muon tomography

    NASA Astrophysics Data System (ADS)

    Yu, B.; Zhao, Z.; Wang, X.; Wang, Y.; Wu, D.; Zeng, Z.; Zeng, M.; Yi, H.; Luo, Z.; Yue, X.; Cheng, J.

    2014-11-01

    Muon tomography is an advanced technology to non-destructively detect high atomic number materials. It exploits the multiple Coulomb scattering information of muons to reconstruct the scattering density image of the traversed object. Because of the statistics of muon scattering, the measurement error of the system and the data incompleteness, the reconstruction is always accompanied by a certain level of interference, which influences the reconstructed spatial resolution. While statistical noise can be reduced by extending the measuring time, system parameters determine the ultimate spatial resolution that a system can reach. In this paper, an effective frequency-domain model is proposed to analyze the reconstructed spatial resolution of muon tomography. The proposed method modifies the resolution analysis of conventional computed tomography (CT) to fit the different imaging mechanism of muon scattering tomography. The measured scattering information is described in the frequency domain, and a relationship between the measurements and the original image is proposed in the Fourier domain, which we name the "Muon Central Slice Theorem". Furthermore, a preliminary analytical expression for the ultimate reconstructed spatial resolution is derived, and simulations are performed for validation. While the method is able to predict the ultimate spatial resolution of a given system, it can also be utilized for the optimization of system design and construction.

  10. Internal Domains of Natural Porous Media Revealed: Critical Locations for Transport, Storage, and Chemical Reaction

    DOE PAGES

    Zachara, John; Brantley, Sue; Chorover, Jon; ...

    2016-02-05

    Internal pore domains exist within rocks, lithic fragments, subsurface sediments, and soil aggregates. These domains, termed internal domains in porous media (IDPM), represent a subset of a material’s porosity, contain a significant fraction of their porosity as nanopores, dominate the reactive surface area of diverse media types, and are important locations for chemical reactivity and fluid storage. IDPM are key features controlling hydrocarbon release from shales in hydraulic fracture systems, organic matter decomposition in soil, weathering and soil formation, and contaminant behavior in the vadose zone and groundwater. Although IDPM have traditionally been difficult to interrogate, advances in instrumentation and imaging methods are providing new insights on the physical structures and chemical attributes of IDPM, and their contributions to system behaviors. We discuss analytical methods to characterize IDPM; evaluate information on their size distributions, connectivity, and extended structures; determine whether they exhibit unique chemical reactivity; and assess the potential for their inclusion in reactive transport models. Moreover, ongoing developments in measurement technologies and sensitivity, and computer-assisted interpretation will improve understanding of these critical features in the future. Finally, impactful research opportunities exist to advance understanding of IDPM, and to incorporate their effects in reactive transport models for improved environmental simulation and prediction.

  11. Protein unfolding under isometric tension-what force can integrins generate, and can it unfold FNIII domains?

    PubMed

    Erickson, Harold P

    2017-02-01

    Extracellular matrix fibrils of fibronectin (FN) are highly elastic, and are typically stretched three to four times their relaxed length. The mechanism of stretching has been controversial, in particular whether it involves tension-induced unfolding of FNIII domains. Recent studies have found that ∼5pN is the threshold isometric force for unfolding various protein domains. FNIII domains should therefore not be unfolded until the tension approaches 5pN. Integrins have been reported to generate forces ranging from 1 to >50pN, but I argue that studies reporting 1-2pN are the most convincing. This is not enough to unfold FNIII domains. Even if domains were unfolded, 2pN would only extend the worm-like-chain to about twice the length of the folded domain. Overall I conclude that stretching FN matrix fibrils involves primarily the compact to extended conformational change of FN dimers, with minimal contribution from unfolding FNIII domains. Copyright © 2016 Elsevier Ltd. All rights reserved.
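
    The worm-like-chain estimate can be sanity-checked with the Marko-Siggia interpolation formula for force versus fractional extension. A minimal sketch; the ~0.4 nm persistence length and kT value below are assumed illustrative numbers for an unfolded polypeptide, not parameters taken from the paper:

```python
def wlc_force(frac, persistence_nm, kT=4.1):
    """Marko-Siggia interpolation: force (pN) needed to hold a worm-like
    chain at fractional extension frac = z/L. Persistence length in nm,
    kT in pN*nm (~4.1 at room temperature)."""
    return (kT / persistence_nm) * (0.25 / (1.0 - frac) ** 2 - 0.25 + frac)

def wlc_extension(force_pn, persistence_nm, kT=4.1):
    """Invert the monotone force-extension relation by bisection."""
    lo, hi = 0.0, 1.0 - 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if wlc_force(mid, persistence_nm, kT) < force_pn:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Assumed value: ~0.4 nm persistence length for an unfolded polypeptide.
frac = wlc_extension(2.0, 0.4)
print(f"fractional extension at 2 pN: {frac:.2f}")
```

    With these assumed values, a 2 pN tension holds the chain at only a small fraction of its contour length, consistent with the abstract's argument that piconewton-scale integrin forces produce limited extension even from an unfolded domain.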

  12. Non-local features of a hydrodynamic pilot-wave system

    NASA Astrophysics Data System (ADS)

    Nachbin, Andre; Couchman, Miles; Bush, John

    2016-11-01

    A droplet walking on the surface of a vibrating fluid bath constitutes a pilot-wave system of the form envisaged for quantum dynamics by Louis de Broglie: a particle moves in resonance with its guiding wave field. We here present an examination of pilot-wave hydrodynamics in a confined domain. Specifically, we present a one-dimensional water wave model that describes droplets walking in single and multiple cavities. The cavities are separated by a submerged barrier, and so allow for the study of tunneling. They also highlight the non-local dynamical features arising due to the spatially-extended wave field. Results from computational simulations are complemented by laboratory experiments.

  13. Kalman filter estimation of human pilot-model parameters

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Roland, V. R.

    1975-01-01

    The parameters of a human pilot-model transfer function are estimated by applying the extended Kalman filter to the corresponding retarded differential-difference equations in the time domain. Use of computer-generated data indicates that most of the parameters, including the implicit time delay, may be reasonably estimated in this way. When applied to two sets of experimental data obtained from a closed-loop tracking task performed by a human, the Kalman filter generated diverging residuals for one of the measurement types, apparently because of model assumption errors. Application of a modified adaptive technique was found to overcome the divergence and to produce reasonable estimates of most of the parameters.
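
    The parameter-as-state idea underlying this approach can be illustrated with a scalar sketch: treat an unknown pilot gain as a random-walk state and apply the Kalman prediction and update equations. This is a generic illustration with assumed noise levels and a simple static gain model, not the authors' retarded differential-difference formulation:

```python
import random

random.seed(0)

# True pilot gain to be recovered; the filter treats it as a slowly
# varying state observed through y = k * u + noise.
k_true = 2.5
theta, P = 0.0, 10.0   # parameter estimate and its variance
Q, R = 1e-6, 0.25      # assumed process and measurement noise variances

for t in range(500):
    u = random.uniform(-1.0, 1.0)             # known control input
    y = k_true * u + random.gauss(0.0, 0.5)   # noisy measurement
    P += Q                                    # predict step (random walk)
    H = u                                     # measurement Jacobian dy/dtheta
    K = P * H / (H * P * H + R)               # Kalman gain
    theta += K * (y - H * theta)              # measurement update
    P *= 1.0 - K * H

print(f"estimated gain: {theta:.2f} (true {k_true})")
```

    With enough samples the estimate settles near the true gain; persistently diverging residuals, as reported in the abstract, would signal model-assumption errors.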

  14. Design criteria and eigensequence plots for satellite-computed tomography [in meteorology]

    NASA Technical Reports Server (NTRS)

    Wahba, G.

    1985-01-01

    The use of the degrees of freedom for signal is proposed as a design criterion for comparing different designs for satellite and other measuring systems. It is also proposed that certain eigensequence plots be examined at the design stage along with appropriate estimates of the parameter lambda playing the role of noise-to-signal ratio. The degrees of freedom for signal and the eigensequence plots may be determined using prior information in the spectral domain, which is presently available, along with a description of the system and simulated data for estimating lambda. This work extends the 1972 work of Weinreb and Crosby.

  15. Direct Numerical Simulation of Automobile Cavity Tones

    NASA Technical Reports Server (NTRS)

    Kurbatskii, Konstantin; Tam, Christopher K. W.

    2000-01-01

    The Navier Stokes equation is solved computationally by the Dispersion-Relation-Preserving (DRP) scheme for the flow and acoustic fields associated with a laminar boundary layer flow over an automobile door cavity. In this work, the flow Reynolds number is restricted to R(sub delta*) < 3400; the range of Reynolds number for which laminar flow may be maintained. This investigation focuses on two aspects of the problem, namely, the effect of boundary layer thickness on the cavity tone frequency and intensity and the effect of the size of the computation domain on the accuracy of the numerical simulation. It is found that the tone frequency decreases with an increase in boundary layer thickness. When the boundary layer is thicker than a certain critical value, depending on the flow speed, no tone is emitted by the cavity. Computationally, solutions of aeroacoustics problems are known to be sensitive to the size of the computation domain. Numerical experiments indicate that the use of a small domain could result in normal mode type acoustic oscillations in the entire computation domain leading to an increase in tone frequency and intensity. When the computation domain is expanded so that the boundaries are at least one wavelength away from the noise source, the computed tone frequency and intensity are found to be computation domain size independent.

  16. Kinematics of reflections in subsurface offset and angle-domain image gathers

    NASA Astrophysics Data System (ADS)

    Dafni, Raanan; Symes, William W.

    2018-05-01

    Seismic migration in the angle-domain generates multiple images of the earth's interior in which reflection takes place at different scattering-angles. Mechanically, the angle-dependent reflection is restricted to happen instantaneously and at a fixed point in space: an incident wave hits a discontinuity in the subsurface media and instantly generates a scattered wave at the same common point of interaction. Alternatively, the angle-domain image may be associated with space-shift (regarded as subsurface offset) extended migration that artificially splits the reflection geometry, meaning that incident and scattered waves interact at some offset distance. The geometric differences between the two approaches lead to contradictory angle-domain behaviour and dissimilar kinematic descriptions. We present a phase space depiction of migration methods extended by the peculiar subsurface offset split and stress its profound dissimilarity. In spite of being in radical contradiction with the general physics, the subsurface offset reveals a link to some valuable angle-domain quantities, via post-migration transformations. The angle quantities are indicated by the direction normal to the subsurface offset extended image. They specifically define the local dip and scattering angles if the velocity at the split reflection coordinates is the same for incident and scattered wave pairs. Otherwise, the reflector normal is not a bisector of the opening angle, but of the corresponding slowness vectors. This evidence, together with the distinguished geometry configuration, fundamentally differentiates the angle-domain decomposition based on the subsurface offset split from the conventional decomposition at a common reflection point. An asymptotic simulation of angle-domain moveout curves in layered media exposes the notion of split versus common reflection point geometry. Traveltime inversion methods that involve the subsurface offset extended migration must accommodate the split geometry in the inversion scheme for a robust and successful convergence at the optimal velocity model.

  17. Hydrolysis by somatic angiotensin-I converting enzyme of basic dipeptides from a cholecystokinin/gastrin and a LH-RH peptide extended at the C-terminus with Gly-Arg/Lys-Arg, but not from diarginyl insulin.

    PubMed

    Isaac, R E; Michaud, A; Keen, J N; Williams, T A; Coates, D; Wetsel, W C; Corvol, P

    1999-06-01

    Endoproteolytic cleavage of protein prohormones often generates intermediates extended at the C-terminus by Arg-Arg or Lys-Arg, the removal of which by a carboxypeptidase (CPE) is normally an important step in the maturation of many peptide hormones. Recent studies in mice that lack CP activity indicate the existence of alternative tissue or plasma enzymes capable of removing C-terminal basic residues from prohormone intermediates. Using inhibitors of angiotensin I-converting enzyme (ACE) and CP, we show that both these enzymes in mouse serum can remove the basic amino acids from the C-terminus of CCK5-GRR and LH-RH-GKR, but only CP is responsible for converting diarginyl insulin to insulin. ACE activity removes C-terminal dipeptides to generate the Gly-extended peptides, whereas CP hydrolysis gives rise to CCK5-GR and LH-RH-GK, both of which are susceptible to the dipeptidyl carboxypeptidase activity of ACE. Somatic ACE has two similar protein domains (the N-domain and the C-domain), each with an active site that can display different substrate specificities. CCK5-GRR is a high-affinity substrate for both the N-domain and C-domain active sites of human sACE (Km of 9.4 microm and 9.0 microm, respectively) with the N-domain showing greater efficiency (kcat : Km ratio of 2.6 in favour of the N-domain). We conclude that somatic forms of ACE should be considered as alternatives to CPs for the removal of basic residues from some Arg/Lys-extended peptides.

  18. A well-posed numerical method to track isolated conformal map singularities in Hele-Shaw flow

    NASA Technical Reports Server (NTRS)

    Baker, Gregory; Siegel, Michael; Tanveer, Saleh

    1995-01-01

    We present a new numerical method for calculating an evolving 2D Hele-Shaw interface when surface tension effects are neglected. In the case where the flow is directed from the less viscous fluid into the more viscous fluid, the motion of the interface is ill-posed; small deviations in the initial condition will produce significant changes in the ensuing motion. This situation is disastrous for numerical computation, as small round-off errors can quickly lead to large inaccuracies in the computed solution. Our method of computation is most easily formulated using a conformal map from the fluid domain into a unit disk. The method relies on analytically continuing the initial data and equations of motion into the region exterior to the disk, where the evolution problem becomes well-posed. The equations are then numerically solved in the extended domain. The presence of singularities in the conformal map outside of the disk introduces specific structures along the fluid interface. Our method can explicitly track the location of isolated pole and branch point singularities, allowing us to draw connections between the development of interfacial patterns and the motion of singularities as they approach the unit disk. In particular, we are able to relate physical features such as finger shape, side-branch formation, and competition between fingers to the nature and location of the singularities. The usefulness of this method in studying the formation of topological singularities (self-intersections of the interface) is also pointed out.

  19. Spatio-temporal colour correction of strongly degraded movies

    NASA Astrophysics Data System (ADS)

    Islam, A. B. M. Tariqul; Farup, Ivar

    2011-01-01

    The archives of motion pictures represent an important part of precious cultural heritage. Unfortunately, these cinematography collections are vulnerable to different distortions such as colour fading, which is beyond the capability of the photochemical restoration process. Spatial colour algorithms such as Retinex and ACE provide a helpful tool in restoring strongly degraded colour films, but there are challenges associated with these algorithms. We present an automatic colour correction technique for digital colour restoration of strongly degraded movie material. The method is based upon the existing STRESS algorithm. In order to cope with the problem of highly correlated colour channels, we implemented a preprocessing step in which saturation enhancement is performed in a PCA space. Spatial colour algorithms tend to emphasize all details in the images, including dust and scratches. Surprisingly, we found that the presence of these defects does not affect the behaviour of the colour correction algorithm. Although the STRESS algorithm is already in itself more efficient than traditional spatial colour algorithms, it is still computationally expensive. To speed it up further, we went beyond the spatial domain of the frames and extended the algorithm to the temporal domain. This way, we were able to achieve an 80 percent reduction of the computational time compared to processing every single frame individually. We performed two user experiments and found that the visual quality of the resulting frames was significantly better than with existing methods. Thus, our method outperforms the existing ones in terms of both visual quality and computational efficiency.

  20. A FAST ITERATIVE METHOD FOR SOLVING THE EIKONAL EQUATION ON TETRAHEDRAL DOMAINS

    PubMed Central

    Fu, Zhisong; Kirby, Robert M.; Whitaker, Ross T.

    2014-01-01

    Generating numerical solutions to the eikonal equation and its many variations has a broad range of applications in both the natural and computational sciences. Efficient solvers on cutting-edge, parallel architectures require new algorithms that may not be theoretically optimal, but that are designed to allow asynchronous solution updates and have limited memory access patterns. This paper presents a parallel algorithm for solving the eikonal equation on fully unstructured tetrahedral meshes. The method is appropriate for the type of fine-grained parallelism found on modern massively-SIMD architectures such as graphics processors and takes into account the particular constraints and capabilities of these computing platforms. This work builds on previous work for solving these equations on triangle meshes; in this paper we adapt and extend previous two-dimensional strategies to accommodate three-dimensional, unstructured, tetrahedralized domains. These new developments include a local update strategy with data compaction for tetrahedral meshes that provides solutions on both serial and parallel architectures, with a generalization to inhomogeneous, anisotropic speed functions. We also propose two new update schemes, specialized to mitigate the natural data increase observed when moving to three dimensions, and the data structures necessary for efficiently mapping data to parallel SIMD processors in a way that maintains computational density. Finally, we present descriptions of the implementations for a single CPU, as well as multicore CPUs with shared memory and SIMD architectures, with comparative results against state-of-the-art eikonal solvers. PMID:25221418
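
    The local update at the heart of such solvers can be sketched in 2-D with a first-order Godunov scheme on a uniform grid. This is a serial illustration of the update rule only; the paper's tetrahedral, SIMD-mapped algorithm is considerably more involved:

```python
import math

def solve_eikonal_2d(n, h, speed=1.0, tol=1e-9):
    """Iterative first-order Godunov solver for |grad u| = 1/speed on an
    n x n grid with a point source at corner (0, 0)."""
    INF = 1e12
    u = [[INF] * n for _ in range(n)]
    u[0][0] = 0.0
    f = h / speed                    # travel cost per grid spacing
    changed = True
    while changed:                   # sweep until no value improves
        changed = False
        for i in range(n):
            for j in range(n):
                if i == 0 and j == 0:
                    continue
                a = min(u[i - 1][j] if i > 0 else INF,
                        u[i + 1][j] if i < n - 1 else INF)
                b = min(u[i][j - 1] if j > 0 else INF,
                        u[i][j + 1] if j < n - 1 else INF)
                if abs(a - b) >= f:  # one-sided (causal) update
                    cand = min(a, b) + f
                else:                # two-sided quadratic update
                    cand = 0.5 * (a + b + math.sqrt(2.0 * f * f - (a - b) ** 2))
                if cand < u[i][j] - tol:
                    u[i][j] = cand
                    changed = True
    return u

n, h = 41, 1.0 / 40
u = solve_eikonal_2d(n, h)
# With unit speed the solution approximates Euclidean distance from the corner.
print(u[n - 1][n - 1], "vs", math.sqrt(2.0))
```

    The first-order scheme carries O(h) error away from the grid axes; refining h reduces it, and the parallel algorithms in the paper reorganize exactly this kind of local update for SIMD hardware.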

  1. High Performance Computing of Meshless Time Domain Method on Multi-GPU Cluster

    NASA Astrophysics Data System (ADS)

    Ikuno, Soichiro; Nakata, Susumu; Hirokawa, Yuta; Itoh, Taku

    2015-01-01

    High performance computing of the Meshless Time Domain Method (MTDM) on multi-GPU using the supercomputer HA-PACS (Highly Accelerated Parallel Advanced system for Computational Sciences) at the University of Tsukuba is investigated. Generally, the finite difference time domain (FDTD) method is adopted for numerical simulation of electromagnetic wave propagation phenomena. However, the numerical domain must be divided into rectangular meshes, and it is difficult to apply the method to problems in complex domains. On the other hand, MTDM can easily be adapted to such problems because it does not require meshes. In the present study, we implement MTDM on a multi-GPU cluster to speed up the method, and numerically investigate its performance on the cluster. To reduce the computation time, the communication time between the decomposed domains is hidden behind the perfectly matched layer (PML) calculation procedure. The results show that MTDM on 128 GPUs is 173 times faster than a single-CPU calculation.
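
    For contrast, the conventional FDTD update that the abstract compares against can be written in a few lines in 1-D (normalized units, reflecting boundaries, and a soft Gaussian source; a minimal sketch, not the HA-PACS multi-GPU implementation):

```python
import math

# 1-D FDTD (Yee) update in normalized units: c = 1, dx = 1, dt = courant * dx.
nx, nt, courant = 200, 150, 0.5
ez = [0.0] * nx   # electric field at integer grid points
hy = [0.0] * nx   # magnetic field at half-integer grid points

for n in range(nt):
    for i in range(nx - 1):                          # update H from the curl of E
        hy[i] += courant * (ez[i + 1] - ez[i])
    for i in range(1, nx):                           # update E from the curl of H
        ez[i] += courant * (hy[i] - hy[i - 1])
    ez[100] += math.exp(-((n - 30.0) ** 2) / 100.0)  # soft Gaussian source

# After 150 steps the pulse has travelled away from the source cell.
print(max(abs(e) for e in ez[120:180]))
```

    Every update touches a fixed rectangular stencil, which is exactly the mesh restriction that motivates the meshless formulation for complex domains.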

  2. The graph neural network model.

    PubMed

    Scarselli, Franco; Gori, Marco; Tsoi, Ah Chung; Hagenbuchner, Markus; Monfardini, Gabriele

    2009-01-01

    Many underlying relationships among data in several areas of science and engineering, e.g., computer vision, molecular chemistry, molecular biology, pattern recognition, and data mining, can be represented in terms of graphs. In this paper, we propose a new neural network model, called graph neural network (GNN) model, that extends existing neural network methods for processing the data represented in graph domains. This GNN model, which can directly process most of the practically useful types of graphs, e.g., acyclic, cyclic, directed, and undirected, implements a function τ(G,n) ∈ ℝ^m that maps a graph G and one of its nodes n into an m-dimensional Euclidean space. A supervised learning algorithm is derived to estimate the parameters of the proposed GNN model. The computational cost of the proposed algorithm is also considered. Some experimental results are shown to validate the proposed learning algorithm, and to demonstrate its generalization capabilities.
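
    The fixed-point computation at the core of this model can be sketched in plain Python: each node repeatedly updates its state from its neighbours' states and its own label, and keeping the weights small makes the transition a contraction, so the iteration converges by Banach's fixed-point theorem. The toy graph, state dimension, and weight scale below are illustrative assumptions:

```python
import math
import random

random.seed(1)

# Undirected toy graph given as neighbour lists for 5 nodes.
nbrs = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3, 4], 3: [1, 2, 4], 4: [2, 3]}
d = 3                                            # state dimension
x = {v: [random.uniform(-1, 1) for _ in range(d)] for v in nbrs}  # node labels

# Entries bounded by 0.1 keep every row sum of W below 1, so the
# transition below is a contraction and the iteration must converge.
W = [[0.1 * random.uniform(-1, 1) for _ in range(d)] for _ in range(d)]

def transition(h):
    """One synchronous update: state(v) <- tanh(W . mean(neighbour states) + label)."""
    out = {}
    for v in nbrs:
        m = [sum(h[w][k] for w in nbrs[v]) / len(nbrs[v]) for k in range(d)]
        out[v] = [math.tanh(sum(W[i][k] * m[k] for k in range(d)) + x[v][i])
                  for i in range(d)]
    return out

h = {v: [0.0] * d for v in nbrs}
for it in range(1000):
    h_new = transition(h)
    delta = max(abs(h_new[v][i] - h[v][i]) for v in nbrs for i in range(d))
    h = h_new
    if delta < 1e-12:
        break

print("fixed point reached after", it + 1, "iterations")
```

    In the full model a second trained network reads the converged states to produce the output τ(G,n); the point here is only that the state iteration reaches a guaranteed fixed point.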

  3. New approach for logo recognition

    NASA Astrophysics Data System (ADS)

    Chen, Jingying; Leung, Maylor K. H.; Gao, Yongsheng

    2000-03-01

    The problem of logo recognition is of great interest in the document domain, especially for document databases. By recognizing the logo we obtain semantic information about the document, which may be useful in deciding whether or not to analyze the textual components. In order to develop a logo recognition method that is efficient to compute and produces intuitively reasonable results, we investigate the Line Segment Hausdorff Distance for logo recognition. Researchers apply the Hausdorff Distance to measure the dissimilarity of two point sets; it has been extended to match two sets of line segments. The new approach has the advantage of incorporating structural and spatial information in computing the dissimilarity. The added information can conceptually provide more and better distinctive capability for recognition. The proposed technique has been applied to line segments of logos with encouraging results that support the concept experimentally. This may suggest a new approach to logo recognition.
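
    The point-set Hausdorff Distance that the line-segment variant builds on can be stated in a few lines (a generic sketch of the classical definition, not the authors' Line Segment Hausdorff Distance):

```python
import math

def directed_hausdorff(A, B):
    """Largest distance from any point of A to its nearest point of B."""
    return max(min(math.dist(a, b) for b in B) for a in A)

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two finite point sets."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
shifted = [(0, 0.5), (1, 0.5), (1, 1.5), (0, 1.5)]
print(hausdorff(square, shifted))  # -> 0.5: each corner is 0.5 from its match
```

    The directed distance is asymmetric, which is why the symmetric maximum is taken; the line-segment extension replaces point-to-point distances with segment-to-segment ones to capture structural information.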

  4. Preferential Concentration Of Solid Particles In Turbulent Horizontal Circular Pipe Flow

    NASA Astrophysics Data System (ADS)

    Kim, Jaehee; Yang, Kyung-Soo

    2017-11-01

    In particle-laden turbulent pipe flow, turbophoresis can lead to a preferential concentration of particles near the wall. To investigate this phenomenon, one-way coupled Direct Numerical Simulation (DNS) has been performed. Fully-developed turbulent pipe flow of the carrier fluid (air) is at Reτ = 200 based on the pipe radius and the mean friction velocity, whereas the Stokes numbers of the particles (solid) are St+ = 0.1 , 1 , 10 based on the mean friction velocity and the kinematic viscosity of the fluid. The computational domain for particle simulation is extended along the axial direction by duplicating the domain of the fluid simulation. By doing so, particle statistics in the spatially developing region as well as in the fully-developed region can be obtained. Accumulation of particles has been noticed at St+ = 1 and 10 mostly in the viscous sublayer, more intensive in the latter case. Compared with other authors' previous results, our results suggest that drag force on the particles should be computed by using an empirical correlation and a higher-order interpolation scheme even in a low-Re regime in order to improve the accuracy of particle simulation. This work was supported by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (MSIP) (No. 2015R1A2A2A01002981).

  5. Multi-region approach to free-boundary three-dimensional tokamak equilibria and resistive wall instabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferraro, N. M.; Jardin, S. C.; Lao, L. L.

    Free-boundary 3D tokamak equilibria and resistive wall instabilities are calculated using a new resistive wall model in the two-fluid M3D-C1 code. In this model, the resistive wall and surrounding vacuum region are included within the computational domain. Our implementation contrasts with the method typically used in fluid codes in which the resistive wall is treated as a boundary condition on the computational domain boundary and has the advantage of maintaining purely local coupling of mesh elements. We use this new capability to simulate perturbed, free-boundary non-axisymmetric equilibria; the linear evolution of resistive wall modes; and the linear and nonlinear evolution of axisymmetric vertical displacement events (VDEs). Calculated growth rates for a resistive wall mode with arbitrary wall thickness are shown to agree well with the analytic theory. Equilibrium and VDE calculations are performed in diverted tokamak geometry, at physically realistic values of dissipation, and with resistive walls of finite width. Simulations of a VDE disruption extend into the current-quench phase, in which the plasma becomes limited by the first wall, and strong currents are observed to flow in the wall, in the SOL, and from the plasma to the wall.

  6. Multi-region approach to free-boundary three-dimensional tokamak equilibria and resistive wall instabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferraro, N. M., E-mail: nferraro@pppl.gov; Lao, L. L.; Jardin, S. C.

    Free-boundary 3D tokamak equilibria and resistive wall instabilities are calculated using a new resistive wall model in the two-fluid M3D-C1 code. In this model, the resistive wall and surrounding vacuum region are included within the computational domain. This implementation contrasts with the method typically used in fluid codes in which the resistive wall is treated as a boundary condition on the computational domain boundary and has the advantage of maintaining purely local coupling of mesh elements. This new capability is used to simulate perturbed, free-boundary non-axisymmetric equilibria; the linear evolution of resistive wall modes; and the linear and nonlinear evolution of axisymmetric vertical displacement events (VDEs). Calculated growth rates for a resistive wall mode with arbitrary wall thickness are shown to agree well with the analytic theory. Equilibrium and VDE calculations are performed in diverted tokamak geometry, at physically realistic values of dissipation, and with resistive walls of finite width. Simulations of a VDE disruption extend into the current-quench phase, in which the plasma becomes limited by the first wall, and strong currents are observed to flow in the wall, in the SOL, and from the plasma to the wall.

  7. Spectrally formulated user-defined element in conventional finite element environment for wave motion analysis in 2-D composite structures

    NASA Astrophysics Data System (ADS)

    Khalili, Ashkan; Jha, Ratneshwar; Samaratunga, Dulip

    2016-11-01

    Wave propagation analysis in 2-D composite structures is performed efficiently and accurately through the formulation of a User-Defined Element (UEL) based on the wavelet spectral finite element (WSFE) method. The WSFE method is based on the first-order shear deformation theory which yields accurate results for wave motion at high frequencies. The 2-D WSFE model is highly efficient computationally and provides a direct relationship between system input and output in the frequency domain. The UEL is formulated and implemented in Abaqus (commercial finite element software) for wave propagation analysis in 2-D composite structures with complexities. Frequency domain formulation of WSFE leads to complex valued parameters, which are decoupled into real and imaginary parts and presented to Abaqus as real values. The final solution is obtained by forming a complex value using the real number solutions given by Abaqus. Five numerical examples are presented in this article, namely undamaged plate, impacted plate, plate with ply drop, folded plate and plate with stiffener. Wave motions predicted by the developed UEL correlate very well with Abaqus simulations. The results also show that the UEL largely retains computational efficiency of the WSFE method and extends its ability to model complex features.

  8. Multi-region approach to free-boundary three-dimensional tokamak equilibria and resistive wall instabilities

    DOE PAGES

    Ferraro, N. M.; Jardin, S. C.; Lao, L. L.; ...

    2016-05-20

    Free-boundary 3D tokamak equilibria and resistive wall instabilities are calculated using a new resistive wall model in the two-fluid M3D-C1 code. In this model, the resistive wall and surrounding vacuum region are included within the computational domain. Our implementation contrasts with the method typically used in fluid codes in which the resistive wall is treated as a boundary condition on the computational domain boundary and has the advantage of maintaining purely local coupling of mesh elements. We use this new capability to simulate perturbed, free-boundary non-axisymmetric equilibria; the linear evolution of resistive wall modes; and the linear and nonlinear evolution of axisymmetric vertical displacement events (VDEs). Calculated growth rates for a resistive wall mode with arbitrary wall thickness are shown to agree well with the analytic theory. Equilibrium and VDE calculations are performed in diverted tokamak geometry, at physically realistic values of dissipation, and with resistive walls of finite width. Simulations of a VDE disruption extend into the current-quench phase, in which the plasma becomes limited by the first wall, and strong currents are observed to flow in the wall, in the SOL, and from the plasma to the wall.

  9. Knowledge Representation and Ontologies

    NASA Astrophysics Data System (ADS)

    Grimm, Stephan

    Knowledge representation and reasoning aims at designing computer systems that reason about a machine-interpretable representation of the world. Knowledge-based systems have a computational model of some domain of interest in which symbols serve as surrogates for real world domain artefacts, such as physical objects, events, relationships, etc. [1]. The domain of interest can cover any part of the real world or any hypothetical system about which one desires to represent knowledge for computational purposes. A knowledge-based system maintains a knowledge base, which stores the symbols of the computational model in the form of statements about the domain, and it performs reasoning by manipulating these symbols. Applications can base their decisions on answers to domain-relevant questions posed to a knowledge base.
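    A minimal illustration of the symbol-manipulation view described above: a knowledge base stores statements about a domain as triples, and a trivial reasoner answers questions by chaining over them. All names and the single transitivity rule here are invented for the example; real knowledge-based systems use far richer logics.

```python
# A toy knowledge base of (subject, relation, object) statements and a
# reasoner that answers queries by lookup plus "is_a" transitivity.

kb = {
    ("Shakespeare", "wrote", "Hamlet"),
    ("Hamlet", "is_a", "Tragedy"),
    ("Tragedy", "is_a", "Play"),
}

def holds(subject, relation, obj):
    """True if the statement is stored, or follows by is_a transitivity."""
    if (subject, relation, obj) in kb:
        return True
    if relation == "is_a":
        # chain: subject is_a mid, and mid is_a obj (possibly via more steps)
        return any(holds(mid, "is_a", obj)
                   for (s, r, mid) in kb
                   if s == subject and r == "is_a")
    return False

# "Hamlet is_a Play" is never stored, but the reasoner derives it.
answer = holds("Hamlet", "is_a", "Play")
```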

  10. Vertically-Integrated Dual-Continuum Models for CO2 Injection in Fractured Aquifers

    NASA Astrophysics Data System (ADS)

    Tao, Y.; Guo, B.; Bandilla, K.; Celia, M. A.

    2017-12-01

    Injection of CO2 into a saline aquifer leads to a two-phase flow system, with supercritical CO2 and brine being the two fluid phases. Various modeling approaches, including fully three-dimensional (3D) models and vertical-equilibrium (VE) models, have been used to study the system. Almost all of that work has focused on unfractured formations. 3D models solve the governing equations in three dimensions and are applicable to generic geological formations. VE models assume rapid and complete buoyant segregation of the two fluid phases, resulting in vertical pressure equilibrium and allowing integration of the governing equations in the vertical dimension. This reduction in dimensionality makes VE models computationally more efficient, but the associated assumptions restrict the applicability of the VE model to formations with moderate to high permeability. In this presentation, we extend the VE and 3D models for CO2 injection in fractured aquifers. This is done in the context of dual-continuum modeling, where the fractured formation is modeled as an overlap of two continuous domains, one representing the fractures and the other representing the rock matrix. Both domains are treated as porous media continua and can be modeled by either a VE or a 3D formulation. The transfer of fluid mass between rock matrix and fractures is represented by a mass transfer function connecting the two domains. We have developed a computational model that combines the VE and 3D models, where we use the VE model in the fractures, which typically have high permeability, and the 3D model in the less permeable rock matrix. A new mass transfer function is derived, which couples the VE and 3D models. The coupled VE-3D model can simulate CO2 injection and migration in fractured aquifers. Results from this model compare well with a full-3D model in which both the fractures and rock matrix are modeled with 3D models, with the hybrid VE-3D model having significantly reduced computational cost.
In addition to the VE-3D model, we explore simplifications of the rock matrix domain by using sugar-cube and matchstick conceptualizations and develop VE-dual porosity and VE-matchstick models. These vertically-integrated dual-permeability and dual-porosity models provide a range of computationally efficient tools to model CO2 storage in fractured saline aquifers.
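    The dual-continuum coupling idea in this abstract can be sketched in a heavily simplified form: two overlapping continua exchange fluid mass through a transfer term proportional to their pressure difference. The linear transfer form, the coefficient, and the explicit time stepping are all assumptions for illustration; the paper derives its own transfer function coupling VE and 3D domains.

```python
# Toy dual-continuum exchange: fracture and matrix pressures relax toward
# each other via a linear mass transfer term, integrated with explicit Euler.

def relax(p_fracture, p_matrix, k=0.5, dt=0.1, steps=200):
    """Integrate dp_f/dt = -k (p_f - p_m) and dp_m/dt = +k (p_f - p_m)."""
    for _ in range(steps):
        q = k * (p_fracture - p_matrix) * dt   # mass transferred this step
        p_fracture -= q                        # fracture continuum loses q
        p_matrix += q                          # matrix continuum gains q
    return p_fracture, p_matrix

# Symmetric exchange conserves total mass and equilibrates both continua
# toward the mean of the initial pressures.
pf, pm = relax(10.0, 0.0)
```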

  11. Computer-Delivered Interventions for Health Promotion and Behavioral Risk Reduction: A Meta-Analysis of 75 Randomized Controlled Trials, 1988 – 2007

    PubMed Central

    Portnoy, David B.; Scott-Sheldon, Lori A. J.; Johnson, Blair T.; Carey, Michael P.

    2008-01-01

    Objective: Use of computers to promote healthy behavior is increasing. To evaluate the efficacy of these computer-delivered interventions, we conducted a meta-analysis of the published literature. Method: Studies examining health domains related to the leading health indicators outlined in Healthy People 2010 were selected. Data from 75 randomized controlled trials, published between 1988 and 2007, with 35,685 participants and 82 separate interventions were included. All studies were coded independently by two raters for study and participant characteristics, design and methodology, and intervention content. We calculated weighted mean effect sizes for theoretically-meaningful psychosocial and behavioral outcomes; moderator analyses determined the relation between study characteristics and the magnitude of effect sizes for heterogeneous outcomes. Results: Compared with controls, participants who received a computer-delivered intervention improved several hypothesized antecedents of health behavior (knowledge, attitudes, intentions); intervention recipients also improved health behaviors (nutrition, tobacco use, substance use, safer sexual behavior, binge/purge behaviors) and general health maintenance. Several sample, study and intervention characteristics moderated the psychosocial and behavioral outcomes. Conclusion: Computer-delivered interventions can lead to improved behavioral health outcomes at first post-intervention assessment. Interventions evaluating outcomes at extended assessment periods are needed to evaluate the longer-term efficacy of computer-delivered interventions. PMID:18403003
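    The weighted mean effect size computation this abstract refers to can be sketched as follows. Standard fixed-effect inverse-variance weighting is assumed here; the authors' exact weighting scheme is not specified in the abstract, and the trial numbers below are hypothetical.

```python
# Fixed-effect meta-analytic mean: d+ = sum(w_i d_i) / sum(w_i),
# with inverse-variance weights w_i = 1 / v_i.
import math

def weighted_mean_effect(effects, variances):
    """Return the weighted mean effect size and its 95% confidence interval."""
    weights = [1.0 / v for v in variances]
    d_plus = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))             # standard error of d+
    ci = (d_plus - 1.96 * se, d_plus + 1.96 * se)  # 95% confidence interval
    return d_plus, ci

# Three hypothetical trials: effect size d and its sampling variance v.
d_plus, ci = weighted_mean_effect([0.30, 0.10, 0.50], [0.01, 0.04, 0.02])
```

Precise trials (small variance) dominate the average, which is why the pooled estimate sits closer to the better-powered studies than a simple mean would.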

  12. Hypersonic Shock Wave Computations Using the Generalized Boltzmann Equation

    NASA Astrophysics Data System (ADS)

    Agarwal, Ramesh; Chen, Rui; Cheremisin, Felix G.

    2006-11-01

    Hypersonic shock structure in diatomic gases is computed by solving the Generalized Boltzmann Equation (GBE), where the internal and translational degrees of freedom are considered in the framework of quantum and classical mechanics respectively [1]. The computational framework available for the standard Boltzmann equation [2] is extended by including both the rotational and vibrational degrees of freedom in the GBE. There are two main difficulties encountered in computation of high Mach number flows of diatomic gases with internal degrees of freedom: (1) a large velocity domain is needed for accurate numerical description of the distribution function resulting in enormous computational effort in calculation of the collision integral, and (2) about 50 energy levels are needed for accurate representation of the rotational spectrum of the gas. Our methodology addresses these problems, and as a result the efficiency of calculations has increased by several orders of magnitude. The code has been validated by computing the shock structure in Nitrogen for Mach numbers up to 25 including the translational and rotational degrees of freedom. [1] Beylich, A., ``An Interlaced System for Nitrogen Gas,'' Proc. of CECAM Workshop, ENS de Lyon, France, 2000. [2] Cheremisin, F., ``Solution of the Boltzmann Kinetic Equation for High Speed Flows of a Rarefied Gas,'' Proc. of the 24th Int. Symp. on Rarefied Gas Dynamics, Bari, Italy, 2004.

  13. Learning to Rank Figures within a Biomedical Article

    PubMed Central

    Liu, Feifan; Yu, Hong

    2014-01-01

    Hundreds of millions of figures are available in biomedical literature, representing important biomedical experimental evidence. This ever-increasing sheer volume has made it difficult for scientists to effectively and accurately access figures of their interest, the process of which is crucial for validating research facts and for formulating or testing novel research hypotheses. Current figure search applications can't fully meet this challenge as the “bag of figures” assumption doesn't take into account the relationship among figures. In our previous study, hundreds of biomedical researchers annotated articles for which they served as corresponding authors. They ranked each figure in their paper based on a figure's importance at their discretion, referred to as “figure ranking”. Using this collection of annotated data, we investigated computational approaches to automatically rank figures. We exploited and extended the state-of-the-art listwise learning-to-rank algorithms and developed a new supervised-learning model BioFigRank. The cross-validation results show that BioFigRank yielded the best performance compared with other state-of-the-art computational models, and the greedy feature selection can further boost the ranking performance significantly. Furthermore, we carried out the evaluation by comparing BioFigRank with domain-specific human experts at three levels: (1) First Author, (2) Non-Author-In-Domain-Expert, who is neither the author nor a co-author of an article but who works in the same field as the corresponding author of the article, and (3) Non-Author-Out-Domain-Expert, who is neither the author nor a co-author of an article and who may or may not work in the same field as the corresponding author of an article. Our results show that BioFigRank outperforms Non-Author-Out-Domain-Expert and performs as well as Non-Author-In-Domain-Expert. 
Although BioFigRank underperforms First Author, since most biomedical researchers are either in- or out-domain-experts for an article, we conclude that BioFigRank represents an artificial intelligence system that offers expert-level intelligence to help biomedical researchers to navigate increasingly proliferated big data efficiently. PMID:24625719

  14. Learning to rank figures within a biomedical article.

    PubMed

    Liu, Feifan; Yu, Hong

    2014-01-01

    Hundreds of millions of figures are available in biomedical literature, representing important biomedical experimental evidence. This ever-increasing sheer volume has made it difficult for scientists to effectively and accurately access figures of their interest, the process of which is crucial for validating research facts and for formulating or testing novel research hypotheses. Current figure search applications can't fully meet this challenge as the "bag of figures" assumption doesn't take into account the relationship among figures. In our previous study, hundreds of biomedical researchers annotated articles for which they served as corresponding authors. They ranked each figure in their paper based on a figure's importance at their discretion, referred to as "figure ranking". Using this collection of annotated data, we investigated computational approaches to automatically rank figures. We exploited and extended the state-of-the-art listwise learning-to-rank algorithms and developed a new supervised-learning model BioFigRank. The cross-validation results show that BioFigRank yielded the best performance compared with other state-of-the-art computational models, and the greedy feature selection can further boost the ranking performance significantly. Furthermore, we carried out the evaluation by comparing BioFigRank with domain-specific human experts at three levels: (1) First Author, (2) Non-Author-In-Domain-Expert, who is neither the author nor a co-author of an article but who works in the same field as the corresponding author of the article, and (3) Non-Author-Out-Domain-Expert, who is neither the author nor a co-author of an article and who may or may not work in the same field as the corresponding author of an article. Our results show that BioFigRank outperforms Non-Author-Out-Domain-Expert and performs as well as Non-Author-In-Domain-Expert. 
Although BioFigRank underperforms First Author, since most biomedical researchers are either in- or out-domain-experts for an article, we conclude that BioFigRank represents an artificial intelligence system that offers expert-level intelligence to help biomedical researchers to navigate increasingly proliferated big data efficiently.
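    The listwise learning-to-rank family this abstract builds on can be illustrated with the classic ListNet top-one objective: the cross-entropy between the permutation probabilities induced by model scores and by ground-truth relevance. This is a representative stand-in only; BioFigRank's actual model and features are not reproduced, and the figure scores below are invented.

```python
# ListNet-style top-one listwise loss over the figures of one article.
import math

def softmax(xs):
    m = max(xs)                      # shift for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def listnet_top1_loss(scores, relevance):
    """Cross-entropy between the top-one distributions induced by the
    model scores and by the ground-truth relevance labels."""
    p_true = softmax(relevance)
    p_model = softmax(scores)
    return -sum(pt * math.log(pm) for pt, pm in zip(p_true, p_model))

# Three figures in one article; figure 1 is most important.
relevance = [3.0, 1.0, 2.0]
good = listnet_top1_loss([2.9, 0.8, 2.1], relevance)  # ordering matches
bad = listnet_top1_loss([0.5, 3.0, 1.0], relevance)   # ordering inverted
```

A model trained to minimize this loss is pushed toward score orderings that agree with the annotated figure rankings, which is the defining property of listwise (as opposed to pointwise or pairwise) approaches.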

  15. Sub-domain methods for collaborative electromagnetic computations

    NASA Astrophysics Data System (ADS)

    Soudais, Paul; Barka, André

    2006-06-01

    In this article, we describe a sub-domain method for electromagnetic computations based on the boundary element method. The benefits of the sub-domain method are that the computation can be split between several companies for collaborative studies; also the computation time can be reduced by one or more orders of magnitude, especially in the context of parametric studies. The accuracy and efficiency of this technique are assessed by RCS computations on an aircraft air intake with duct and rotating engine mock-up called CHANNEL. Collaborative results, obtained by combining two sets of sub-domains computed by two companies, are compared with measurements on the CHANNEL mock-up. The comparisons are made for several angular positions of the engine to show the benefits of the method for parametric studies. We also discuss the accuracy of two formulations of the sub-domain connecting scheme using edge based or modal field expansion. To cite this article: P. Soudais, A. Barka, C. R. Physique 7 (2006).

  16. From computers to cultivation: reconceptualizing evolutionary psychology

    PubMed Central

    Barrett, Louise; Pollet, Thomas V.; Stulp, Gert

    2014-01-01

    Does evolutionary theorizing have a role in psychology? This is a more contentious issue than one might imagine, given that, as evolved creatures, the answer must surely be yes. The contested nature of evolutionary psychology lies not in our status as evolved beings, but in the extent to which evolutionary ideas add value to studies of human behavior, and the rigor with which these ideas are tested. This, in turn, is linked to the framework in which particular evolutionary ideas are situated. While the framing of the current research topic places the brain-as-computer metaphor in opposition to evolutionary psychology, the most prominent school of thought in this field (born out of cognitive psychology, and often known as the Santa Barbara school) is entirely wedded to the computational theory of mind as an explanatory framework. Its unique aspect is to argue that the mind consists of a large number of functionally specialized (i.e., domain-specific) computational mechanisms, or modules (the massive modularity hypothesis). Far from offering an alternative to, or an improvement on, the current perspective, we argue that evolutionary psychology is a mainstream computational theory, and that its arguments for domain-specificity often rest on shaky premises. We then go on to suggest that the various forms of e-cognition (i.e., embodied, embedded, enactive) represent a true alternative to standard computational approaches, with an emphasis on “cognitive integration” or the “extended mind hypothesis” in particular. We feel this offers the most promise for human psychology because it incorporates the social and historical processes that are crucial to human “mind-making” within an evolutionarily informed framework. In addition to linking to other research areas in psychology, this approach is more likely to form productive links to other disciplines within the social sciences, not least by encouraging a healthy pluralism in approach. PMID:25161633

  17. A Numerical Combination of Extended Boundary Condition Method and Invariant Imbedding Method Applied to Light Scattering by Large Spheroids and Cylinders

    NASA Technical Reports Server (NTRS)

    Bi, Lei; Yang, Ping; Kattawar, George W.; Mishchenko, Michael I.

    2013-01-01

    The extended boundary condition method (EBCM) and invariant imbedding method (IIM) are two fundamentally different T-matrix methods for the solution of light scattering by nonspherical particles. The standard EBCM is very efficient but encounters a loss of precision when the particle size is large, the maximum size being sensitive to the particle aspect ratio. The IIM can be applied to particles in a relatively large size parameter range but requires extensive computational time due to the number of spherical layers in the particle volume discretization. A numerical combination of the EBCM and the IIM (hereafter, the EBCM+IIM) is proposed to overcome the aforementioned disadvantages of each method. Even though the EBCM can fail to obtain the T-matrix of a considered particle, it is valuable for decreasing the computational domain (i.e., the number of spherical layers) of the IIM by providing the initial T-matrix associated with an iterative procedure in the IIM. The EBCM+IIM is demonstrated to be more efficient than the IIM in obtaining the optical properties of large size parameter particles beyond the convergence limit of the EBCM. The numerical performance of the EBCM+IIM is illustrated through representative calculations in spheroidal and cylindrical particle cases.

  18. A terracing operator for physical property mapping with potential field data

    USGS Publications Warehouse

    Cordell, L.; McCafferty, A.E.

    1989-01-01

    The terracing operator works iteratively on gravity or magnetic data, using the sense of the measured field's local curvature, to produce a field comprised of uniform domains separated by abrupt domain boundaries. The result is crudely proportional to a physical-property function defined in one (profile case) or two (map case) horizontal dimensions. This result can be extended to a physical-property model if its behavior in the third (vertical) dimension is defined, either arbitrarily or on the basis of the local geologic situation. The terracing algorithm is computationally fast and appropriate to use with very large digital data sets. The terracing operator was applied separately to aeromagnetic and gravity data from a 136 km × 123 km area in eastern Kansas. Results provide a reasonably good physical representation of both the gravity and the aeromagnetic data. Superposition of the results from the two data sets shows many areas of agreement that can be referenced to geologic features within the buried Precambrian crystalline basement. -from Authors
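    One much-simplified 1-D reading of the terracing iteration described above: at each point, use the sign of the discrete second difference (local curvature) to push the value toward the neighborhood maximum or minimum, repeating until the profile settles into flat domains with abrupt boundaries. The neighborhood choice and stopping rule here are assumptions; the published operator works on gridded 2-D data with additional controls.

```python
# Toy 1-D terracing pass: replace each interior value by the local max or
# min according to the sign of the discrete curvature, and iterate.

def terrace(f, n_iter=50):
    f = list(f)
    for _ in range(n_iter):
        g = list(f)
        for i in range(1, len(f) - 1):
            curv = f[i - 1] - 2.0 * f[i] + f[i + 1]
            if curv < 0:       # locally domed: pull up to the high side
                g[i] = max(f[i - 1], f[i], f[i + 1])
            elif curv > 0:     # locally dished: pull down to the low side
                g[i] = min(f[i - 1], f[i], f[i + 1])
        f = g
    return f

# A smooth ramp between two levels collapses into two uniform domains
# separated by an abrupt boundary.
result = terrace([0.0, 0.2, 1.8, 2.0], n_iter=10)
```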

  19. Integrating natural language processing and web GIS for interactive knowledge domain visualization

    NASA Astrophysics Data System (ADS)

    Du, Fangming

    Recent years have seen a powerful shift towards data-rich environments throughout society. This has extended to a change in how the artifacts and products of scientific knowledge production can be analyzed and understood. Bottom-up approaches are on the rise that combine access to huge amounts of academic publications with advanced computer graphics and data processing tools, including natural language processing. Knowledge domain visualization is one of those multi-technology approaches, with its aim of turning domain-specific human knowledge into highly visual representations in order to better understand the structure and evolution of domain knowledge. For example, network visualizations built from co-author relations contained in academic publications can provide insight on how scholars collaborate with each other in one or multiple domains, and visualizations built from the text content of articles can help us understand the topical structure of knowledge domains. These knowledge domain visualizations need to support interactive viewing and exploration by users. Such spatialization efforts are increasingly looking to geography and GIS as a source of metaphors and practical technology solutions, even when non-georeferenced information is managed, analyzed, and visualized. When it comes to deploying spatialized representations online, web mapping and web GIS can provide practical technology solutions for interactive viewing of knowledge domain visualizations, from panning and zooming to the overlay of additional information. This thesis presents a novel combination of advanced natural language processing - in the form of topic modeling - with dimensionality reduction through self-organizing maps and the deployment of web mapping/GIS technology towards intuitive, GIS-like, exploration of a knowledge domain visualization. 
A complete workflow is proposed and implemented that processes any corpus of input text documents into a map form and leverages a web application framework to let users explore knowledge domain maps interactively. This workflow is implemented and demonstrated for a data set of more than 66,000 conference abstracts.

  20. Structure of the kinase domain of Gilgamesh from Drosophila melanogaster

    PubMed Central

    Han, Ni; Chen, CuiCui; Shi, Zhubing; Cheng, Dianlin

    2014-01-01

    The CK1 family kinases regulate multiple cellular aspects and play important roles in Wnt/Wingless and Hedgehog signalling. The kinase domain of Drosophila Gilgamesh isoform I (Gilgamesh-I), a homologue of human CK1-γ, was purified and crystallized. Crystals of methylated Gilgamesh-I kinase domain with a D210A mutation diffracted to 2.85 Å resolution and belonged to space group P43212, with unit-cell parameters a = b = 52.025, c = 291.727 Å. The structure of Gilgamesh-I kinase domain, which was determined by molecular replacement, has conserved catalytic elements and an active conformation. Structural comparison indicates that an extended loop between the α1 helix and the β4 strand exists in the Gilgamesh-I kinase domain. This extended loop may regulate the activity and function of Gilgamesh-I. PMID:24699734

  1. Domain Decomposition By the Advancing-Partition Method for Parallel Unstructured Grid Generation

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.; Zagaris, George

    2009-01-01

    A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.

  2. Domain Decomposition By the Advancing-Partition Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2008-01-01

    A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.

  3. Traffic Simulations on Parallel Computers Using Domain Decomposition Techniques

    DOT National Transportation Integrated Search

    1995-01-01

    Large scale simulations of Intelligent Transportation Systems (ITS) can only be achieved by using the computing resources offered by parallel computing architectures. Domain decomposition techniques are proposed which allow the performance of traffic...

  4. Extending ALE3D, an Arbitrarily Connected hexahedral 3D Code, to Very Large Problem Size (U)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nichols, A L

    2010-12-15

    As the number of compute units increases on the ASC computers, the prospect of running previously unimaginably large problems is becoming a reality. In an arbitrarily connected 3D finite element code, like ALE3D, one must provide a unique identification number for every node, element, face, and edge. This is required for a number of reasons, including defining the global connectivity array required for domain decomposition, identifying appropriate communication patterns after domain decomposition, and determining the appropriate load locations for implicit solvers, for example. In most codes, the unique identification number is defined as a 32-bit integer. Thus the maximum value available is 2^31, or roughly 2.1 billion. For a 3D geometry consisting of arbitrarily connected hexahedral elements, there are approximately 3 faces for every element, and 3 edges for every node. Since the nodes and faces need id numbers, using 32-bit integers puts a hard limit on the number of elements in a problem at roughly 700 million. The first solution to this problem would be to replace 32-bit signed integers with 32-bit unsigned integers. This would increase the maximum size of a problem by a factor of 2. This provides some head room, but almost certainly not one that will last long. Another solution would be to replace all 32-bit int declarations with 64-bit long long declarations. (long is either a 32-bit or a 64-bit integer, depending on the OS). The problem with this approach is that there are only a few arrays that actually need the extended size, and thus this would increase the size of the problem unnecessarily. In a future computing environment where CPUs are abundant but memory relatively scarce, this is probably the wrong approach. Based on these considerations, we have chosen to replace only the global identifiers with the appropriate 64-bit integer. 
The remaining challenge is finding all the places where data specified as a 32-bit integer needs to be replaced with the 64-bit integer. In the rest of this paper we describe the techniques used to facilitate this transformation, issues raised, and issues still to be addressed. This poster will describe the reasons, methods, and issues associated with extending the ALE3D code to run problems larger than 700 million elements.
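    The back-of-the-envelope limit quoted above follows directly from the ID arithmetic: with signed 32-bit identifiers and roughly 3 faces per hexahedral element, face IDs overflow first.

```python
# Element limits implied by 32-bit ID numbering with ~3 faces per element.

SIGNED_32_MAX = 2**31 - 1          # largest signed 32-bit integer
FACES_PER_ELEMENT = 3              # ~3 unique faces per hex in a large mesh

max_elements_signed = SIGNED_32_MAX // FACES_PER_ELEMENT
max_elements_unsigned = (2**32 - 1) // FACES_PER_ELEMENT  # ~2x head room

print(f"signed 32-bit limit:   ~{max_elements_signed / 1e6:.0f} M elements")
print(f"unsigned 32-bit limit: ~{max_elements_unsigned / 1e6:.0f} M elements")
```

The signed limit works out to about 716 million elements, matching the "roughly 700 million" ceiling the abstract cites, and switching to unsigned integers only doubles it, which is why the authors move the global identifiers to 64 bits instead.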

  5. Extending Automatic Parallelization to Optimize High-Level Abstractions for Multicore

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, C; Quinlan, D J; Willcock, J J

    2008-12-12

    Automatic introduction of OpenMP for sequential applications has attracted significant attention recently because of the proliferation of multicore processors and the simplicity of using OpenMP to express parallelism for shared-memory systems. However, most previous research has only focused on C and Fortran applications operating on primitive data types. C++ applications using high-level abstractions, such as STL containers and complex user-defined types, are largely ignored due to the lack of research compilers that are readily able to recognize high-level object-oriented abstractions and leverage their associated semantics. In this paper, we automatically parallelize C++ applications using ROSE, a multiple-language source-to-source compiler infrastructure which preserves the high-level abstractions and gives us access to their semantics. Several representative parallelization candidate kernels are used to explore semantic-aware parallelization strategies for high-level abstractions, combined with extended compiler analyses. Those kernels include an array-base computation loop, a loop with task-level parallelism, and a domain-specific tree traversal. Our work extends the applicability of automatic parallelization to modern applications using high-level abstractions and exposes more opportunities to take advantage of multicore processors.

  6. SensorWeb 3G: Extending On-Orbit Sensor Capabilities to Enable Near Realtime User Configurability

    NASA Technical Reports Server (NTRS)

    Mandl, Daniel; Cappelaere, Pat; Frye, Stuart; Sohlberg, Rob; Ly, Vuong; Chien, Steve; Tran, Daniel; Davies, Ashley; Sullivan, Don; Ames, Troy

    2010-01-01

    This research effort prototypes an implementation of a standard interface, Web Coverage Processing Service (WCPS), which is an Open Geospatial Consortium (OGC) standard, to enable users to define, test, upload and execute algorithms for on-orbit sensor systems. The user is able to customize on-orbit data products that result from raw data streaming from an instrument. This extends the SensorWeb 2.0 concept that was developed under a previous Advanced Information System Technology (AIST) effort in which web services wrap sensors and a standardized Extensible Markup Language (XML) based scripting workflow language orchestrates processing steps across multiple domains. SensorWeb 3G extends the concept by providing the user controls into the flight software modules associated with on-orbit sensors and thus provides a degree of flexibility which does not presently exist. The successful demonstrations to date will be presented, which includes a realistic HyspIRI decadal mission testbed. Furthermore, benchmarks that were run will also be presented along with future demonstration and benchmark tests planned. Finally, we conclude with implications for the future and how this concept dovetails into efforts to develop "cloud computing" methods and standards.

  7. PSPs and ERPs: applying the dynamics of post-synaptic potentials to individual units in simulation of temporally extended Event-Related Potential reading data.

    PubMed

    Laszlo, Sarah; Armstrong, Blair C

    2014-05-01

    The Parallel Distributed Processing (PDP) framework is built on neural-style computation, and is thus well-suited for simulating the neural implementation of cognition. However, relatively little cognitive modeling work has concerned neural measures, instead focusing on behavior. Here, we extend a PDP model of reading-related components in the Event-Related Potential (ERP) to simulation of the N400 repetition effect. We accomplish this by incorporating the dynamics of cortical post-synaptic potentials--the source of the ERP signal--into the model. Simulations demonstrate that application of these dynamics is critical for model elicitation of repetition effects in the time and frequency domains. We conclude that by advancing a neurocomputational understanding of repetition effects, we are able to posit an interpretation of their source that is both explicitly specified and mechanistically different from the well-accepted cognitive one. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. Brain-Computer Interfaces: A Neuroscience Paradigm of Social Interaction? A Matter of Perspective

    PubMed Central

    Mattout, Jérémie

    2012-01-01

    A number of recent studies have put human subjects in true social interactions, with the aim of better identifying the psychophysiological processes underlying social cognition. Interestingly, this emerging Neuroscience of Social Interactions (NSI) field brings up challenges which resemble important ones in the field of Brain-Computer Interfaces (BCI). Importantly, these challenges go beyond common objectives such as the eventual use of BCI and NSI protocols in the clinical domain or common interests pertaining to the use of online neurophysiological techniques and algorithms. Common fundamental challenges are now apparent and one can argue that a crucial one is to develop computational models of brain processes relevant to human interactions with an adaptive agent, whether human or artificial. Coupled with neuroimaging data, such models have proved promising in revealing the neural basis and mental processes behind social interactions. Similar models could help BCI to move from well-performing but offline static machines to reliable online adaptive agents. This emphasizes a social perspective to BCI, which is not limited to a computational challenge but extends to all questions that arise when studying the brain in interaction with its environment. PMID:22675291

  9. Top-down predictions in the cognitive brain

    PubMed Central

    Kveraga, Kestutis; Ghuman, Avniel S.; Bar, Moshe

    2007-01-01

    The human brain is not a passive organ simply waiting to be activated by external stimuli. Instead, it is proposed that the brain continuously employs memory of past experiences to interpret sensory information and predict the immediately relevant future. This review concentrates on visual recognition as the model system for developing and testing ideas about the role and mechanisms of top-down predictions in the brain. We cover relevant behavioral, computational and neural aspects. These ideas are then extended to other domains. The basic elements of this proposal include analogical mapping, associative representations and the generation of predictions. Connections to a host of cognitive processes will be made and implications for several mental disorders will be proposed. PMID:17923222

  10. Orbiter Flying Qualities (OFQ) Workstation user's guide

    NASA Technical Reports Server (NTRS)

    Myers, Thomas T.; Parseghian, Zareh; Hogue, Jeffrey R.

    1988-01-01

    This project was devoted to the development of a software package, called the Orbiter Flying Qualities (OFQ) Workstation, for working with the OFQ Archives which are specially selected sets of space shuttle entry flight data relevant to flight control and flying qualities. The basic approach to creation of the workstation software was to federate and extend commercial software products to create a low cost package that operates on personal computers. Provision was made to link the workstation to large computers, but the OFQ Archive files were also converted to personal computer diskettes and can be stored on workstation hard disk drives. The primary element of the workstation developed in the project is the Interactive Data Handler (IDH) which allows the user to select data subsets from the archives and pass them to specialized analysis programs. The IDH was developed as an application in a relational database management system product. The specialized analysis programs linked to the workstation include a spreadsheet program, FREDA for spectral analysis, MFP for frequency domain system identification, and NIPIP for pilot-vehicle system parameter identification. The workstation also includes capability for ensemble analysis over groups of missions.

  11. A theoretical analysis of the electromagnetic environment of the AS330 super Puma helicopter external and internal coupling

    NASA Technical Reports Server (NTRS)

    Flourens, F.; Morel, T.; Gauthier, D.; Serafin, D.

    1991-01-01

    Numerical techniques such as Finite Difference Time Domain (FDTD) computer programs, which were first developed to analyze the external electromagnetic environment of an aircraft during a wave illumination, a lightning event, or any kind of current injection, are now very powerful investigative tools. The program, called GORFF-VE, was extended to compute the inner electromagnetic fields that are generated by the penetration of the outer fields through large apertures made in the all-metallic body. The internal fields can then drive the electrical response of a cable network. The coupling between the inside and the outside of the helicopter is implemented using Huygens' principle. Moreover, the spectacular increase in computer resources, such as calculation speed and memory capacity, allows structures as complex as those of helicopters to be modeled accurately. This numerical model was exploited, first, to analyze the electromagnetic environment of an in-flight helicopter for several injection configurations, and second, to design a coaxial return path to simulate the lightning-aircraft interaction with a strong current injection. The E-field and current mappings are the result of these calculations.
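    FDTD codes of the kind described above advance electric and magnetic fields on a staggered grid with a leapfrog time step. A minimal 1D sketch of that update loop is shown below; it uses normalized units, a hard sinusoidal source, and simple reflecting (truncated) boundaries, and all names and parameters are hypothetical - this is not the GORFF-VE code.

    ```python
    import numpy as np

    def fdtd_1d(n_cells=200, n_steps=500, source_cell=100):
        """1D FDTD leapfrog loop in normalized units (Courant number 0.5).

        ez lives on integer grid points, hy on the staggered half-cells.
        The grid is simply truncated, so the boundaries reflect - exactly
        the kind of artifact absorbing boundary conditions are built to
        suppress in real codes.
        """
        ez = np.zeros(n_cells)        # electric field
        hy = np.zeros(n_cells - 1)    # magnetic field, half-cell offset
        for t in range(n_steps):
            # Half-step: update H from the spatial difference of E.
            hy += 0.5 * (ez[1:] - ez[:-1])
            # Half-step: update interior E from the difference of H.
            ez[1:-1] += 0.5 * (hy[1:] - hy[:-1])
            # Hard source: overwrite one cell with a sinusoid.
            ez[source_cell] = np.sin(0.1 * t)
        return ez
    ```

    The same two-line curl update, extended to three field components each for E and H on a 3D Yee grid, is the core of full-aircraft codes of this type.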

  12. Improving Strategies via SMT Solving

    NASA Astrophysics Data System (ADS)

    Gawlitza, Thomas Martin; Monniaux, David

    We consider the problem of computing numerical invariants of programs by abstract interpretation. Our method eschews two traditional sources of imprecision: (i) the use of widening operators for enforcing convergence within a finite number of iterations (ii) the use of merge operations (often, convex hulls) at the merge points of the control flow graph. It instead computes the least inductive invariant expressible in the domain at a restricted set of program points, and analyzes the rest of the code en bloc. We emphasize that we compute this inductive invariant precisely. For that we extend the strategy improvement algorithm of Gawlitza and Seidl [17]. If we applied their method directly, we would have to solve an exponentially sized system of abstract semantic equations, resulting in memory exhaustion. Instead, we keep the system implicit and discover strategy improvements using SAT modulo real linear arithmetic (SMT). For evaluating strategies we use linear programming. Our algorithm has low polynomial space complexity and performs, on contrived worst-case examples, exponentially many strategy improvement steps; this is unsurprising, since we show that the associated abstract reachability problem is Π₂ᵖ-complete.

  13. Computer interfaces for the visually impaired

    NASA Technical Reports Server (NTRS)

    Higgins, Gerry

    1991-01-01

    Information access via computer terminals extends to blind and low-vision persons employed in many technical and nontechnical disciplines. Two aspects of providing computer technology for persons with a vision-related handicap are detailed. First, research was conducted into the most effective means of integrating existing adaptive technologies into information systems, so that off-the-shelf products could be combined with adaptive equipment into cohesive, integrated information processing systems. Details are included that describe the type of functionality required in software to facilitate its incorporation into a speech and/or braille system. The second aspect is research into providing audible and tactile interfaces to graphics-based interfaces. Parameters are included for the design and development of the Mercator Project. The project will develop a prototype system for audible access to graphics-based interfaces. The system is being built within the public domain architecture of X Windows to show that it is possible to provide access to text-based applications within a graphical environment. This information will be valuable to suppliers of ADP equipment, since new legislation requires manufacturers to provide electronic access to the visually impaired.

  14. Reconstruction of transient vibration and sound radiation of an impacted plate using time domain plane wave superposition method

    NASA Astrophysics Data System (ADS)

    Geng, Lin; Zhang, Xiao-Zheng; Bi, Chuan-Xing

    2015-05-01

    Time domain plane wave superposition method is extended to reconstruct the transient pressure field radiated by an impacted plate and the normal acceleration of the plate. In the extended method, the pressure measured on the hologram plane is expressed as a superposition of time convolutions between the time-wavenumber normal acceleration spectrum on a virtual source plane and the time domain propagation kernel relating the pressure on the hologram plane to the normal acceleration spectrum on the virtual source plane. By performing an inverse operation, the normal acceleration spectrum on the virtual source plane can be obtained by an iterative solving process, and then taken as the input to reconstruct the whole pressure field and the normal acceleration of the plate. An experiment of a clamped rectangular steel plate impacted by a steel ball is presented. The experimental results demonstrate that the extended method is effective in visualizing the transient vibration and sound radiation of an impacted plate in both time and space domains, thus providing important information for an overall understanding of the vibration and sound radiation of the plate.

  15. Frequency Domain Computer Programs for Prediction and Analysis of Rail Vehicle Dynamics : Volume 2. Appendixes

    DOT National Transportation Integrated Search

    1975-12-01

    Frequency domain computer programs developed or acquired by TSC for the analysis of rail vehicle dynamics are described in two volumes. Volume 2 contains program listings including subroutines for the four TSC frequency domain programs described in V...

  16. The Domain Shared by Computational and Digital Ontology: A Phenomenological Exploration and Analysis

    ERIC Educational Resources Information Center

    Compton, Bradley Wendell

    2009-01-01

    The purpose of this dissertation is to explore and analyze a domain of research thought to be shared by two areas of philosophy: computational and digital ontology. Computational ontology is philosophy used to develop information systems also called computational ontologies. Digital ontology is philosophy dealing with our understanding of Being…

  17. A well-posed numerical method to track isolated conformal map singularities in Hele-Shaw flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, G.; Siegel, M.; Tanveer, S.

    1995-09-01

    We present a new numerical method for calculating an evolving 2D Hele-Shaw interface when surface tension effects are neglected. In the case where the flow is directed from the less viscous fluid into the more viscous fluid, the motion of the interface is ill-posed; small deviations in the initial condition will produce significant changes in the ensuing motion. The situation is disastrous for numerical computation, as small roundoff errors can quickly lead to large inaccuracies in the computed solution. Our method of computation is most easily formulated using a conformal map from the fluid domain into a unit disk. The method relies on analytically continuing the initial data and equations of motion into the region exterior to the disk, where the evolution problem becomes well-posed. The equations are then numerically solved in the extended domain. The presence of singularities in the conformal map outside of the disk introduces specific structures along the fluid interface. Our method can explicitly track the location of isolated pole and branch point singularities, allowing us to draw connections between the development of interfacial patterns and the motion of singularities as they approach the unit disk. In particular, we are able to relate physical features such as finger shape, side-branch formation, and competition between fingers to the nature and location of the singularities. The usefulness of this method in studying the formation of topological singularities (self-intersections of the interface) is also pointed out. 47 refs., 10 figs., 1 tab.

  18. From sequencing to annotating: extending the metaphor of the book of life from genetics to genomics.

    PubMed

    Hellsten, Iina

    2005-12-01

    The article discusses how the metaphor of the Book of Life was extended over time to cover the life cycle of the Human Genome Project from genetics to genomics. In particular, the focus is on the role of extendable metaphors in the debate on the Human Genome Project in three European newspapers, popular scientific journals and scientific and scholarly articles from 1990 to 2002. In these different domains of use, various parts of the metaphor were highlighted. The metaphor of the Book of Life was mainly used to justify the continuation of gene research from gene sequencing to comparative genomics. Readily extendable metaphors, such as the Book of Life, function as useful communicative tools both over time and across domains of use.

  19. PyMT: A Python package for model-coupling in the Earth sciences

    NASA Astrophysics Data System (ADS)

    Hutton, E.

    2016-12-01

    The current landscape of Earth-system models is not only broad in scientific scope, but also broad in type. On the one hand, the large variety of models is exciting, as it provides fertile ground for extending or linking models together in novel ways to answer new scientific questions. However, the heterogeneity in model type acts to inhibit model coupling, model development, or even model use. Existing models are written in a variety of programming languages, operate on different grids, use their own file formats (both for input and output), have different user interfaces, have their own time steps, etc. Each of these factors becomes an obstruction to scientists wanting to couple, extend - or simply run - existing models. For scientists whose main focus may not be computer science, these barriers loom even larger and pose significant logistical hurdles. And this is all before the scientific difficulties of coupling or running models are addressed. The CSDMS Python Modeling Toolkit (PyMT) was developed to help non-computer scientists deal with these sorts of modeling logistics. PyMT is the fundamental package the Community Surface Dynamics Modeling System uses for the coupling of models that expose the Basic Model Interface (BMI). It contains:
    - Tools necessary for coupling models of disparate time and space scales (including grid mappers)
    - Time-steppers that coordinate the sequencing of coupled models
    - Exchange of data between BMI-enabled models
    - Wrappers that automatically load BMI-enabled models into the PyMT framework
    - Utilities that support open-source interfaces (UGRID, SGRID, CSDMS Standard Names, etc.)
    - A collection of community-submitted models, written in a variety of programming languages, from a variety of process domains - but all usable from within the Python programming language
    - A plug-in framework for adding additional BMI-enabled models to the framework
    In this presentation we introduce the basics of the PyMT as well as provide an example of coupling models of different domains and grid types.
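    The coupling machinery described above relies on every model exposing the Basic Model Interface. The sketch below shows what a BMI-style wrapper can look like: the method names (initialize/update/get_value/get_current_time/finalize) follow the BMI convention, but the toy 1D heat model and its variable name are invented here for illustration and are not part of PyMT.

    ```python
    class HeatModelBMI:
        """A made-up 1D explicit heat-diffusion model exposing a minimal
        subset of BMI-style methods, so a framework can step and query it
        without knowing anything about its internals."""

        def initialize(self, config=None):
            self._dt = 0.1
            self._time = 0.0
            self._alpha = 0.25                        # diffusivity * dt / dx^2
            self._temperature = [0.0, 0.0, 100.0, 0.0, 0.0]

        def update(self):
            # One explicit finite-difference step; ends held at 0.
            t = self._temperature
            new = t[:]
            for i in range(1, len(t) - 1):
                new[i] = t[i] + self._alpha * (t[i - 1] - 2 * t[i] + t[i + 1])
            self._temperature = new
            self._time += self._dt

        def get_current_time(self):
            return self._time

        def get_value(self, name):
            assert name == "plate_surface__temperature"  # hypothetical name
            return list(self._temperature)

        def finalize(self):
            self._temperature = None
    ```

    A framework can then drive any such model identically: call `initialize()`, loop on `update()` while exchanging `get_value(...)` arrays with other models, and call `finalize()` at the end.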

  20. Extending substructure based iterative solvers to multiple load and repeated analyses

    NASA Technical Reports Server (NTRS)

    Farhat, Charbel

    1993-01-01

    Direct solvers currently dominate commercial finite element structural software, but do not scale well in the fine granularity regime targeted by emerging parallel processors. Substructure based iterative solvers - often also called domain decomposition algorithms - lend themselves better to parallel processing, but must overcome several obstacles before earning their place in general purpose structural analysis programs. One such obstacle is the solution of systems with many or repeated right hand sides. Such systems arise, for example, in multiple load static analyses and in implicit linear dynamics computations. Direct solvers are well-suited for these problems because after the system matrix has been factored, the multiple or repeated solutions can be obtained through relatively inexpensive forward and backward substitutions. On the other hand, iterative solvers in general are ill-suited for these problems because they often must restart from scratch for every different right hand side. In this paper, we present a methodology for extending the range of applications of domain decomposition methods to problems with multiple or repeated right hand sides. Basically, we formulate the overall problem as a series of minimization problems over K-orthogonal and supplementary subspaces, and tailor the preconditioned conjugate gradient algorithm to solve them efficiently. The resulting solution method is scalable, whereas direct factorization schemes and forward and backward substitution algorithms are not. We illustrate the proposed methodology with the solution of static and dynamic structural problems, and highlight its potential to outperform forward and backward substitutions on parallel computers. 
As an example, we show that for a linear structural dynamics problem with 11640 degrees of freedom, every time-step beyond time-step 15 is solved in a single iteration and consumes 1.0 second on a 32 processor iPSC-860 system; for the same problem and the same parallel processor, a pair of forward/backward substitutions at each step consumes 15.0 seconds.
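    The core idea - minimizing over previously generated K-orthogonal subspaces instead of restarting from scratch - can be sketched with plain conjugate gradients: record the A-orthogonal search directions of one solve and use them to build the starting guess for the next right-hand side. This is an illustrative sketch of the principle, not Farhat's actual substructure-based algorithm, and it omits preconditioning entirely.

    ```python
    import numpy as np

    def cg_with_recycling(A, b, saved=None, tol=1e-10, max_iter=200):
        """Conjugate gradients that records its A-orthogonal search
        directions; later solves project the new right-hand side onto
        them to get a near-optimal initial guess essentially for free."""
        x = np.zeros(len(b))
        if saved:
            # Because the stored p are A-orthogonal, the A-norm-optimal
            # coefficients decouple: c = (p . b) / (p . A p).
            for p, pAp in saved:
                x += (p @ b) / pAp * p
        r = b - A @ x
        p = r.copy()
        new_dirs = []
        for _ in range(max_iter):
            if np.linalg.norm(r) < tol:
                break
            Ap = A @ p
            pAp = p @ Ap
            alpha = (r @ r) / pAp
            new_dirs.append((p.copy(), pAp))
            x += alpha * p
            r_new = r - alpha * Ap
            beta = (r_new @ r_new) / (r @ r)
            p = r_new + beta * p
            r = r_new
        return x, (saved or []) + new_dirs
    ```

    When successive right-hand sides are similar - as in time-stepped dynamics - the seeded solves need very few fresh iterations, which is the behavior the timing example above reports.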

  1. Probing the Production of Amidated Peptides following Genetic and Dietary Copper Manipulations

    PubMed Central

    Yin, Ping; Bousquet-Moore, Danielle; Annangudi, Suresh P.; Southey, Bruce R.; Mains, Richard E.; Eipper, Betty A.; Sweedler, Jonathan V.

    2011-01-01

    Amidated neuropeptides play essential roles throughout the nervous and endocrine systems. Mice lacking peptidylglycine α-amidating monooxygenase (PAM), the only enzyme capable of producing amidated peptides, are not viable. In the amidation reaction, the reactant (glycine-extended peptide) is converted into a reaction intermediate (hydroxyglycine-extended peptide) by the copper-dependent peptidylglycine-α-hydroxylating monooxygenase (PHM) domain of PAM. The hydroxyglycine-extended peptide is then converted into amidated product by the peptidyl-α-hydroxyglycine α-amidating lyase (PAL) domain of PAM. PHM and PAL are stitched together in vertebrates, but separated in some invertebrates such as Drosophila and Hydra. In addition to its luminal catalytic domains, PAM includes a cytosolic domain that can enter the nucleus following release from the membrane by γ-secretase. In this work, several glycine- and hydroxyglycine-extended peptides as well as amidated peptides were qualitatively and quantitatively assessed from pituitaries of wild-type mice and mice with a single copy of the Pam gene (PAM+/−) via liquid chromatography-mass spectrometry-based methods. We provide the first evidence for the presence of a peptidyl-α-hydroxyglycine in vivo, indicating that the reaction intermediate becomes free and is not handed directly from PHM to PAL in vertebrates. Wild-type mice fed a copper deficient diet and PAM+/− mice exhibit similar behavioral deficits. While glycine-extended reaction intermediates accumulated in the PAM+/− mice and reflected dietary copper availability, amidated products were far more prevalent under the conditions examined, suggesting that the behavioral deficits observed do not simply reflect a lack of amidated peptides. PMID:22194882

  2. Different functional modes of BAR domain proteins in formation and plasticity of mammalian postsynapses.

    PubMed

    Kessels, Michael M; Qualmann, Britta

    2015-09-01

    A plethora of cell biological processes involve modulations of cellular membranes. By using extended lipid-binding interfaces, some proteins have the power to shape membranes by attaching to them. Among such membrane shapers, the superfamily of Bin-Amphiphysin-Rvs (BAR) domain proteins has recently taken center stage. Extensive structural work on BAR domains has revealed a common curved fold that can serve as an extended membrane-binding interface to modulate membrane topologies and has allowed the grouping of the BAR domain superfamily into subfamilies with structurally slightly distinct BAR domain subtypes (N-BAR, BAR, F-BAR and I-BAR). Most BAR superfamily members are expressed in the mammalian nervous system. Neurons are elaborately shaped and highly compartmentalized cells. Therefore, analyses of synapse formation and of postsynaptic reorganization processes (synaptic plasticity) - a basis for learning and memory formation - has unveiled important physiological functions of BAR domain superfamily members. These recent advances, furthermore, have revealed that the functions of BAR domain proteins include different aspects. These functions are influenced by the often complex domain organization of BAR domain proteins. In this Commentary, we review these recent insights and propose to classify BAR domain protein functions into (1) membrane shaping, (2) physical integration, (3) action through signaling components, and (4) suppression of other BAR domain functions. © 2015. Published by The Company of Biologists Ltd.

  3. Machine learning in materials informatics: recent applications and prospects

    NASA Astrophysics Data System (ADS)

    Ramprasad, Rampi; Batra, Rohit; Pilania, Ghanshyam; Mannodi-Kanakkithodi, Arun; Kim, Chiho

    2017-12-01

    Propelled partly by the Materials Genome Initiative, and partly by the algorithmic developments and the resounding successes of data-driven efforts in other domains, informatics strategies are beginning to take shape within materials science. These approaches lead to surrogate machine learning models that enable rapid predictions based purely on past data rather than by direct experimentation or by computations/simulations in which fundamental equations are explicitly solved. Data-centric informatics methods are becoming useful to determine material properties that are hard to measure or compute using traditional methods—due to the cost, time or effort involved—but for which reliable data either already exists or can be generated for at least a subset of the critical cases. Predictions are typically interpolative, involving fingerprinting a material numerically first, and then following a mapping (established via a learning algorithm) between the fingerprint and the property of interest. Fingerprints, also referred to as "descriptors", may be of many types and scales, as dictated by the application domain and needs. Predictions may also be extrapolative—extending into new materials spaces—provided prediction uncertainties are properly taken into account. This article attempts to provide an overview of some of the recent successful data-driven "materials informatics" strategies undertaken in the last decade, with particular emphasis on the fingerprint or descriptor choices. The review also identifies some challenges the community is facing and those that should be overcome in the near future.
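    The interpolative mapping described above - numerical fingerprint in, property prediction out - can be sketched with a deliberately simple learner. Ridge regression in plain NumPy stands in here for whatever algorithm a real materials-informatics pipeline would use; the data shapes are generic and no real material descriptors are implied.

    ```python
    import numpy as np

    def fit_ridge(fingerprints, properties, lam=1e-3):
        """Learn a linear fingerprint -> property mapping.

        X rows are numerical material fingerprints ("descriptors");
        y holds the measured or computed property for each material.
        Closed-form ridge solution: w = (X^T X + lam I)^-1 X^T y.
        """
        X = np.asarray(fingerprints, dtype=float)
        y = np.asarray(properties, dtype=float)
        w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
        return w

    def predict(w, fingerprint):
        """Apply the learned mapping to a new material's fingerprint."""
        return np.asarray(fingerprint, dtype=float) @ w
    ```

    Predictions are trustworthy only interpolatively, i.e. for fingerprints inside the region spanned by the training data; extrapolating into new materials spaces requires the uncertainty estimates the review mentions.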

  4. Grain-size-yield stress relationship: Analysis and computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyers, M.A.; Benson, D.J.; Fu, H.H.

    1999-07-01

    The seminal contributions of Julia Weertman to the understanding of the mechanical properties of nanocrystalline materials will be briefly outlined. A constitutive equation predicting the effect of grain size on the yield stress of metals, based on the model proposed by M.A. Meyers and E. Ashworth, is discussed and extended to the nanocrystalline regime. At large grain sizes, it has the Hall-Petch form, and in the nanocrystalline domain the slope gradually decreases until it asymptotically approaches the flow stress of the grain boundaries. The material is envisaged as a composite, comprised of the grain interior, with flow stress σ_fB, and a grain-boundary work-hardened layer, with flow stress σ_fGB. Three principal factors contribute to the grain-boundary hardening: (1) the grain boundaries act as barriers to plastic flow; (2) the grain boundaries act as dislocation sources; and (3) elastic anisotropy causes additional stresses in grain-boundary surroundings. The predictions of this model are compared with experimental measurements over the mono-, micro-, and nanocrystalline domains. Computational predictions are made of plastic flow as a function of grain size, incorporating elastic and plastic anisotropy as well as differences in dislocation accumulation rate between grain boundary regions and grain interiors. This is the first plasticity calculation that accounts for grain size effects in a physically-based manner. 58 refs., 7 figs., 1 tab.
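    The composite picture above - grain interior plus a work-hardened boundary layer - can be sketched numerically with a rule of mixtures: as the grain size d shrinks, the boundary layer's volume fraction grows and the yield stress approaches the boundary flow stress. The stress values and layer thickness below are illustrative placeholders, not the authors' fitted parameters, and the cube-shaped-grain geometry is a simplification.

    ```python
    import numpy as np

    def yield_stress_composite(d, sigma_bulk=100.0, sigma_gb=800.0, t=1e-9):
        """Rule-of-mixtures yield stress (MPa) vs. grain size d (m).

        Each grain is idealized as a cube of side d whose outer shell of
        thickness t has flow stress sigma_gb; the interior has sigma_bulk.
        """
        d = np.asarray(d, dtype=float)
        # Volume fraction of the boundary shell of a cube-shaped grain.
        f_gb = 1.0 - ((d - 2.0 * t) / d) ** 3
        f_gb = np.clip(f_gb, 0.0, 1.0)
        return (1.0 - f_gb) * sigma_bulk + f_gb * sigma_gb
    ```

    At large d the boundary fraction is tiny and the stress sits just above sigma_bulk; as d approaches a few nanometers the prediction saturates toward sigma_gb, reproducing the asymptotic flattening of the Hall-Petch slope that the abstract describes.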

  5. Frequency Domain Computer Programs for Prediction and Analysis of Rail Vehicle Dynamics : Volume 1. Technical Report

    DOT National Transportation Integrated Search

    1975-12-01

    Frequency domain computer programs developed or acquired by TSC for the analysis of rail vehicle dynamics are described in two volumes. Volume I defines the general analytical capabilities required for computer programs applicable to single rail vehi...

  6. Human-computer interface incorporating personal and application domains

    DOEpatents

    Anderson, Thomas G [Albuquerque, NM

    2011-03-29

    The present invention provides a human-computer interface. The interface includes provision of an application domain, for example corresponding to a three-dimensional application. The user is allowed to navigate and interact with the application domain. The interface also includes a personal domain, offering the user controls and interaction distinct from the application domain. The separation into two domains allows the most suitable interface methods in each: for example, three-dimensional navigation in the application domain, and two- or three-dimensional controls in the personal domain. Transitions between the application domain and the personal domain are under control of the user, and the transition method is substantially independent of the navigation in the application domain. For example, the user can fly through a three-dimensional application domain, and always move to the personal domain by moving a cursor near one extreme of the display.

  7. Human-computer interface incorporating personal and application domains

    DOEpatents

    Anderson, Thomas G.

    2004-04-20

    The present invention provides a human-computer interface. The interface includes provision of an application domain, for example corresponding to a three-dimensional application. The user is allowed to navigate and interact with the application domain. The interface also includes a personal domain, offering the user controls and interaction distinct from the application domain. The separation into two domains allows the most suitable interface methods in each: for example, three-dimensional navigation in the application domain, and two- or three-dimensional controls in the personal domain. Transitions between the application domain and the personal domain are under control of the user, and the transition method is substantially independent of the navigation in the application domain. For example, the user can fly through a three-dimensional application domain, and always move to the personal domain by moving a cursor near one extreme of the display.

  8. Hybrid mesh finite volume CFD code for studying heat transfer in a forward-facing step

    NASA Astrophysics Data System (ADS)

    Jayakumar, J. S.; Kumar, Inder; Eswaran, V.

    2010-12-01

    Computational fluid dynamics (CFD) methods employ two types of grid: structured and unstructured. Developing the solver and data structures for a finite-volume solver is easier for structured grids than for unstructured ones. But real-life problems are too complicated to be fitted flexibly by structured grids. Therefore, unstructured grids are widely used for solving real-life problems. However, using only one type of unstructured element consumes a lot of computational time because the number of elements cannot be controlled. Hence, a hybrid grid that contains mixed elements, such as hexahedral elements along with tetrahedral and pyramidal elements, gives the user control over the number of elements in the domain; thus only the region that requires a finer grid is meshed finer, and not the entire domain. This work aims to develop such a finite-volume hybrid-grid solver capable of handling turbulent flows and conjugate heat transfer. It has been extended to solving flows involving separation and subsequent reattachment occurring due to sudden expansion or contraction. A significant effect of mixing high- and low-enthalpy fluid occurs in the reattached regions of these devices. This makes the study of the backward-facing and forward-facing step with heat transfer an important field of research. The problem of the forward-facing step with conjugate heat transfer was taken up and solved for turbulent flow using a two-equation k-ω model. The variation in the flow profile and heat transfer behavior has been studied with the variation in Re and solid-to-fluid thermal conductivity ratios. The results for the variation in local Nusselt number, interface temperature and skin friction factor are presented.

  9. Pushing the P300-based brain-computer interface beyond 100 bpm: extending performance guided constraints into the temporal domain.

    PubMed

    Townsend, G; Platsko, V

    2016-04-01

    A new presentation paradigm for the P300-based brain-computer interface (BCI) referred to as the 'asynchronous paradigm' (ASP) is introduced and studied. It is based on the principle of performance guided constraints (Townsend et al 2012 Neurosci. Lett. 531 63-8) extended from the spatial domain into the temporal domain. The traditional constraint of flashing targets in predefined constant epochs of time is eliminated and targets flash asynchronously with timing based instead on constraints intended to improve performance. We propose appropriate temporal constraints to derive the ASP and compare its performance to that of the 'checkerboard paradigm' (CBP), which has previously been shown to be superior to the standard 'row/column paradigm' introduced by Farwell and Donchin (1988 Electroencephalogr. Clin. Neurophysiol. 70 510-23). Ten participants were tested in the ASP and CBP conditions both with traditional flashing items and with flashing faces in place of the targets (see Zhang et al 2012 J. Neural Eng. 9 026018; Kaufmann and Kübler 2014 J. Neural Eng. 11 ; Chen et al 2015 J. Neurosci. Methods 239 18-27). Eleven minutes of calibration data were used as input to a stepwise linear discriminant analysis to derive classification coefficients used for online classification. Accuracy was consistently high for both paradigms (87% and 93%) while information transfer rate was 45% higher for the ASP than the CBP. In a free spelling task, one subject spelled a 66 character sentence (from a 72 item matrix) with 100% accuracy in 3 min and 24 s demonstrating a practical throughput of 120 bits per minute (bpm) with a theoretical upper bound of 258 bpm. The subject repeated the task three times in a row without error. This work represents an advance in P300 speller technology and raises the ceiling that was being reached on P300-based BCIs. Most importantly, the research presented here is a novel and effective general strategy for organising timing for flashing items. 
    The ASP is only one possible implementation of this approach, since in general the framework can describe all previously existing presentation paradigms as well as any possible new ones. This may be especially important for people with neuromuscular disabilities.
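    The throughput figures quoted above can be checked with the standard Wolpaw information-transfer-rate formula, which gives the bits conveyed per selection from an N-item matrix at accuracy P:

    ```python
    import math

    def wolpaw_bits_per_selection(n, p):
        """Wolpaw ITR: bits per selection from an n-item matrix at
        accuracy p; reduces to log2(n) for error-free selection."""
        if p >= 1.0:
            return math.log2(n)
        return (math.log2(n) + p * math.log2(p)
                + (1.0 - p) * math.log2((1.0 - p) / (n - 1)))

    # Free-spelling result quoted above: 66 error-free selections from a
    # 72-item matrix in 3 min 24 s (204 s).
    bits = 66 * wolpaw_bits_per_selection(72, 1.0)
    bpm = bits / (204.0 / 60.0)   # ≈ 120 bits per minute
    ```

    Plugging in the reported numbers reproduces the ≈120 bpm practical throughput claimed for the 66-character free-spelling run.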

  10. Time domain simulation of nonlinear acoustic beams generated by rectangular pistons with application to harmonic imaging

    NASA Astrophysics Data System (ADS)

    Yang, Xinmai; Cleveland, Robin O.

    2005-01-01

    A time-domain numerical code (the so-called Texas code) that solves the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation has been extended from an axis-symmetric coordinate system to a three-dimensional (3D) Cartesian coordinate system. The code accounts for diffraction (in the parabolic approximation), nonlinearity and absorption and dispersion associated with thermoviscous and relaxation processes. The 3D time domain code was shown to be in agreement with benchmark solutions for circular and rectangular sources, focused and unfocused beams, and linear and nonlinear propagation. The 3D code was used to model the nonlinear propagation of diagnostic ultrasound pulses through tissue. The prediction of the second-harmonic field was sensitive to the choice of frequency-dependent absorption: a frequency-squared (f^2) dependence produced a second-harmonic field which peaked closer to the transducer and had a lower amplitude than that computed for an f^1.1 dependence. In comparing spatial maps of the harmonics we found that the second harmonic had dramatically reduced amplitude in the near field and also lower amplitude side lobes in the focal region than the fundamental. These findings were consistent for both uniform and apodized sources and could be contributing factors in the improved imaging reported with clinical scanners using tissue harmonic imaging.
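The absorption comparison above can be sketched numerically with a power-law attenuation model; the coefficient, propagation depth and frequencies below are illustrative placeholders, not values from the paper:

```python
def attenuation_db(f_mhz, alpha0_db_cm_mhz=0.5, power=1.1, depth_cm=5.0):
    """Power-law absorption loss alpha0 * f**power * depth (dB, f in MHz).

    alpha0, power and depth are illustrative values, not from the paper.
    """
    return alpha0_db_cm_mhz * f_mhz ** power * depth_cm

# Second harmonic of a 2 MHz pulse (4 MHz): an f**2 law attenuates it far
# more strongly than the f**1.1 law typical of soft tissue, consistent
# with the weaker, closer-peaking second-harmonic field described above.
loss_f2  = attenuation_db(4.0, power=2.0)   # 40.0 dB
loss_f11 = attenuation_db(4.0, power=1.1)   # about 11.5 dB
```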

  12. Influence of spin creepage and contact angle on curve squeal: A numerical approach

    NASA Astrophysics Data System (ADS)

    Zenzerovic, I.; Kropp, W.; Pieringer, A.

    2018-04-01

    Curve squeal is a loud tonal sound that may arise when a railway vehicle negotiates a tight curve. Due to the nonlinear nature of squeal, time-domain models provide a higher degree of accuracy in comparison to frequency-domain models and also enable the determination of squeal amplitudes. In the present paper, a previously developed engineering time-domain model for curve squeal is extended to include the effects of the contact angle and spin creepage. The extensions enable the evaluation of more realistic squeal cases with the computationally efficient model. The model validation against Kalker's variational contact model shows good agreement between the models. Results of studies on the influence of spin creepage and contact angle show that the contact angle has a significant influence on the vertical-lateral dynamics coupling and, therefore, influences both squeal amplitude and frequency. Spin creepage mainly influences processes in the contact, therefore influencing the tangential contact force amplitude. In the combined spin-contact angle study the spin creepage value is kinematically related to the contact angle value. Results indicate that the influence of the contact angle is dominant over the influence of spin creepage. In general, results indicate that the most crucial factors in squeal are those that influence the dynamics coupling: the contact angle, wheel/rail contact positions and friction.

  13. The Impact of a Ligand Binding on Strand Migration in the SAM-I Riboswitch

    PubMed Central

    Huang, Wei; Kim, Joohyun; Jha, Shantenu; Aboul-ela, Fareed

    2013-01-01

    Riboswitches sense cellular concentrations of small molecules and use this information to adjust synthesis rates of related metabolites. Riboswitches include an aptamer domain to detect the ligand and an expression platform to control gene expression. Previous structural studies of riboswitches largely focused on aptamers, truncating the expression domain to suppress conformational switching. To link ligand/aptamer binding to conformational switching, we constructed models of an S-adenosyl methionine (SAM)-I riboswitch RNA segment incorporating elements of the expression platform, allowing formation of an antiterminator (AT) helix. Using Anton, a computer specially developed for long timescale Molecular Dynamics (MD), we simulated an extended (three microseconds) MD trajectory with SAM bound to a modeled riboswitch RNA segment. Remarkably, we observed a strand migration, converting three base pairs from an antiterminator (AT) helix, characteristic of the transcription ON state, to a P1 helix, characteristic of the OFF state. This conformational switching towards the OFF state is observed only in the presence of SAM. Among seven extended trajectories with three starting structures, the presence of SAM enhances the trend towards the OFF state for two out of three starting structures tested. Our simulation provides a visual demonstration of how a small molecule (<500 MW) binding to a limited surface can trigger a large scale conformational rearrangement in a 40 kDa RNA by perturbing the Free Energy Landscape. Such a mechanism can explain minimal requirements for SAM binding and transcription termination for SAM-I riboswitches previously reported experimentally. PMID:23704854

  14. Spatially extended hybrid methods: a review

    PubMed Central

    2018-01-01

    Many biological and physical systems exhibit behaviour at multiple spatial, temporal or population scales. Multiscale processes provide challenges when they are to be simulated using numerical techniques. While coarser methods such as partial differential equations are typically fast to simulate, they lack the individual-level detail that may be required in regions of low concentration or small spatial scale. However, to simulate at such an individual level throughout a domain and in regions where concentrations are high can be computationally expensive. Spatially coupled hybrid methods provide a bridge, allowing for multiple representations of the same species in one spatial domain by partitioning space into distinct modelling subdomains. Over the past 20 years, such hybrid methods have risen to prominence, leading to what is now a very active research area across multiple disciplines including chemistry, physics and mathematics. There are three main motivations for undertaking this review. Firstly, we have collated a large number of spatially extended hybrid methods and presented them in a single coherent document, while comparing and contrasting them, so that anyone who requires a multiscale hybrid method will be able to find the most appropriate one for their need. Secondly, we have provided canonical examples with algorithms and accompanying code, serving to demonstrate how these types of methods work in practice. Finally, we have presented papers that employ these methods on real biological and physical problems, demonstrating their utility. We also consider some open research questions in the area of hybrid method development and the future directions for the field. PMID:29491179

  15. A Short Note on Rules and Higher Order Rules.

    ERIC Educational Resources Information Center

    Scandura, Joseph M.

    This brief paper argues that structural analysis--an extended form of cognitive task analysis--demonstrates that both domain dependent and domain independent knowledge can be derived from specific content domains. It is noted that the major difference between the two is that lower order rules (specific knowledge) are derived directly from specific…

  16. GrigoraSNPs: Optimized Analysis of SNPs for DNA Forensics.

    PubMed

    Ricke, Darrell O; Shcherbina, Anna; Michaleas, Adam; Fremont-Smith, Philip

    2018-04-16

    High-throughput sequencing (HTS) of single nucleotide polymorphisms (SNPs) enables additional DNA forensic capabilities not attainable using traditional STR panels. However, the inclusion of sets of loci selected for mixture analysis, extended kinship, phenotype, biogeographic ancestry prediction, etc., can result in large panel sizes that are difficult to analyze in a rapid fashion. GrigoraSNPs was developed to address the allele-calling bottleneck encountered when analyzing SNP panels with more than 5000 loci using HTS. GrigoraSNPs uses MapReduce-style parallel data processing across multiple computational threads, plus a novel locus-identification hashing strategy that leverages target sequence tags. This tool optimizes the SNP calling module of the DNA analysis pipeline with runtimes that scale linearly with the number of HTS reads. Results are compared with SNP analysis pipelines implemented with SAMtools and GATK. GrigoraSNPs removes a computational bottleneck for processing forensic samples with large HTS SNP panels. Published 2018. This article is a U.S. Government work and is in the public domain in the USA.
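The tag-based locus identification idea can be sketched as follows: keying each panel locus by a short flanking-sequence tag turns read assignment into a single dictionary lookup rather than an alignment. The tags, locus names and SNP offsets below are invented for illustration; the actual GrigoraSNPs hashing scheme is not specified in this record:

```python
# Hypothetical sketch of tag-hashing locus identification. Each target locus
# is keyed by a fixed-length sequence tag, so assigning a read to a locus and
# reading its allele is O(1) per read.
TAG_LEN = 8
loci = {
    "ACGTACGT": ("rs0001", 8),   # (locus id, 0-based offset of the SNP base)
    "TTGGCCAA": ("rs0002", 8),
}

def call_allele(read: str):
    tag = read[:TAG_LEN]
    hit = loci.get(tag)
    if hit is None:
        return None                   # read does not match any panel locus
    locus_id, offset = hit
    return locus_id, read[offset]     # observed allele at the SNP position

print(call_allele("ACGTACGTGTT"))     # -> ('rs0001', 'G')
```

Because each read is handled independently, this lookup parallelizes naturally across threads in a map step, with allele counts merged in a reduce step.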

  17. Numerical simulation of strong wake/boundary layer interaction

    NASA Astrophysics Data System (ADS)

    Ovchinnikov, Victor; Piomelli, Ugo; Choudhari, Meelan M.

    2003-11-01

    DNS and LES of the strong interaction between an unsteady cylinder wake and a flat-plate boundary layer are carried out. Of the two Reynolds numbers examined, in the lower Reynolds number case (Re=385 based on cylinder diameter) the boundary layer is buffeted by the vortices shed off the cylinder, but the Reynolds number is too low to trigger transition to turbulence. In contrast, in the higher Reynolds number case (Re=1155) we observe the inception of a self-sustained turbulence-generation mechanism triggered by the Karman vortex street behind the cylinder. In previously performed simulations the computational box was not long enough to extend into the turbulent region; therefore, we have lengthened the streamwise domain using a second computational box in order to capture the transition point. In addition to examining turbulence statistics, we look at the Reynolds stress budgets up to and through the transitional regime to obtain further insights into the physics of bypass transition via wake contamination.

  18. Multi-Agent Patrolling under Uncertainty and Threats.

    PubMed

    Chen, Shaofei; Wu, Feng; Shen, Lincheng; Chen, Jing; Ramchurn, Sarvapali D

    2015-01-01

    We investigate a multi-agent patrolling problem where information is distributed alongside threats in environments with uncertainties. Specifically, the information and threat at each location are independently modelled as multi-state Markov chains, whose states are not observed until the location is visited by an agent. While agents will obtain information at a location, they may also suffer damage from the threat at that location. Therefore, the goal of the agents is to gather as much information as possible while mitigating the damage incurred. To address this challenge, we formulate the single-agent patrolling problem as a Partially Observable Markov Decision Process (POMDP) and propose a computationally efficient algorithm to solve this model. Building upon this, to compute patrols for multiple agents, the single-agent algorithm is extended for each agent with the aim of maximising its marginal contribution to the team. We empirically evaluate our algorithm on problems of multi-agent patrolling and show that it outperforms a baseline algorithm by up to 44% for 10 agents and by 21% for 15 agents in large domains.
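The per-location model can be sketched minimally: between visits, an agent can only propagate a belief over the hidden Markov state; visiting the location collapses the belief to the observed state. The two-state transition matrix below is invented for illustration (the paper's chains are multi-state):

```python
# Minimal pure-Python sketch of belief propagation for one location's
# hidden Markov chain: b' = b P each time step while the location is
# not being visited. Transition probabilities are illustrative only.
P = [[0.9, 0.1],   # P[i][j] = probability of moving from state i to state j
     [0.3, 0.7]]

def propagate(belief, P):
    n = len(P)
    return [sum(belief[i] * P[i][j] for i in range(n)) for j in range(n)]

b = [1.0, 0.0]        # last observation: state 0 (e.g. "no threat")
for _ in range(3):    # three time steps without visiting the location
    b = propagate(b, P)
print(b)              # belief drifts toward the chain's stationary mix
```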

  19. A Two-Dimensional Linear Bicharacteristic Scheme for Electromagnetics

    NASA Technical Reports Server (NTRS)

    Beggs, John H.

    2002-01-01

    The upwind leapfrog or Linear Bicharacteristic Scheme (LBS) has previously been implemented and demonstrated on one-dimensional electromagnetic wave propagation problems. This memorandum extends the Linear Bicharacteristic Scheme for computational electromagnetics to model lossy dielectric and magnetic materials and perfect electrical conductors in two dimensions. This is accomplished by proper implementation of the LBS for homogeneous lossy dielectric and magnetic media and for perfect electrical conductors. Both the Transverse Electric and Transverse Magnetic polarizations are considered. Computational requirements and a Fourier analysis are also discussed. Heterogeneous media are modeled through implementation of surface boundary conditions and no special extrapolations or interpolations at dielectric material boundaries are required. Results are presented for two-dimensional model problems on uniform grids, and the Finite Difference Time Domain (FDTD) algorithm is chosen as a convenient reference algorithm for comparison. The results demonstrate that the two-dimensional explicit LBS is a dissipation-free, second-order accurate algorithm which uses a smaller stencil than the FDTD algorithm, yet it has less phase velocity error.
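For context, the FDTD reference algorithm the LBS is compared against can be sketched in its simplest normalized one-dimensional vacuum form (the memorandum's actual two-dimensional lossy-media implementation is more involved; grid size, source and Courant number below are illustrative):

```python
import math

# Minimal 1-D Yee/FDTD sketch: staggered E and H updates in vacuum,
# normalized units with Courant number 1, soft Gaussian source at the
# centre of the grid. All parameter values are illustrative.
N, STEPS = 200, 100
ez = [0.0] * N   # electric field at integer grid points
hy = [0.0] * N   # magnetic field at half-integer grid points

for t in range(STEPS):
    for i in range(N - 1):                       # update H from curl of E
        hy[i] += ez[i + 1] - ez[i]
    ez[N // 2] += math.exp(-((t - 30) / 10.0) ** 2)   # soft additive source
    for i in range(1, N):                        # update E from curl of H
        ez[i] += hy[i] - hy[i - 1]
```

The leapfrog-in-time, staggered-in-space stencil above is the second-order baseline whose phase-velocity error the LBS is reported to improve upon.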

  20. Scattering Cross Section of Sound Waves by the Modal Element Method

    NASA Technical Reports Server (NTRS)

    Baumeister, Kenneth J.; Kreider, Kevin L.

    1994-01-01

    The modal element method has been employed to determine the scattered field from a plane acoustic wave impinging on a two dimensional body. In the modal element method, the scattering body is represented by finite elements, which are coupled to an eigenfunction expansion representing the acoustic pressure in the infinite computational domain surrounding the body. The present paper extends the previous work by developing the algorithm necessary to calculate the acoustic scattering cross section by the modal element method. The scattering cross section is the acoustical equivalent of the Radar Cross Section (RCS) in electromagnetic theory. Since the scattering cross section is evaluated at infinite distance from the body, an asymptotic approximation is used in conjunction with the standard modal element method. For validation, the scattering cross section of the rigid circular cylinder is computed for the frequency range 0.1 ≤ ka ≤ 100. Results show excellent agreement with the analytic solution.

  1. 3D multiscale crack propagation using the XFEM applied to a gas turbine blade

    NASA Astrophysics Data System (ADS)

    Holl, Matthias; Rogge, Timo; Loehnert, Stefan; Wriggers, Peter; Rolfes, Raimund

    2014-01-01

    This work presents a new multiscale technique to investigate advancing cracks in three dimensional space. This fully adaptive multiscale technique is designed to take into account cracks of different length scales efficiently, by enabling fine scale domains locally in regions of interest, i.e. where stress concentrations and high stress gradients occur. Due to crack propagation, these regions change during the simulation process. Cracks are modeled using the extended finite element method, such that an accurate and powerful numerical tool is achieved. Restricting ourselves to linear elastic fracture mechanics, the J-integral yields an accurate solution of the stress intensity factors, and with the criterion of maximum hoop stress, a precise direction of growth. If necessary, the crack surface computed on the finest scale is finally transferred to the corresponding coarser scale. In a final step, the model is applied to a quadrature point of a gas turbine blade, to compute crack growth on the microscale of a real structure.

  2. Structure Prediction of the Second Extracellular Loop in G-Protein-Coupled Receptors

    PubMed Central

    Kmiecik, Sebastian; Jamroz, Michal; Kolinski, Michal

    2014-01-01

    G-protein-coupled receptors (GPCRs) play key roles in living organisms. Therefore, it is important to determine their functional structures. The second extracellular loop (ECL2) is a functionally important region of GPCRs, which poses significant challenge for computational structure prediction methods. In this work, we evaluated CABS, a well-established protein modeling tool for predicting ECL2 structure in 13 GPCRs. The ECL2s (with between 13 and 34 residues) are predicted in an environment of other extracellular loops being fully flexible and the transmembrane domain fixed in its x-ray conformation. The modeling procedure used theoretical predictions of ECL2 secondary structure and experimental constraints on disulfide bridges. Our approach yielded ensembles of low-energy conformers and the most populated conformers that contained models close to the available x-ray structures. The level of similarity between the predicted models and x-ray structures is comparable to that of other state-of-the-art computational methods. Our results extend other studies by including newly crystallized GPCRs. PMID:24896119

  3. Beyond Mechanism Design

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Turner, Kagan

    2004-01-01

    The field of mechanism design is concerned with setting (incentives superimposed on) the utility functions of a group of players so as to induce desirable joint behavior of those players. It arose in the context of traditional equilibrium game theory applied to games involving human players. This has led it to have many implicit restrictions, which strongly limits its scope. In particular, it ignores many issues that are crucial for systems that are large (and therefore far off-equilibrium in general) and/or composed of non-human players (e.g., computer-based agents). This also means it has concentrated on issues that are often irrelevant in those broader domains (e.g., incentive compatibility). This paper illustrates these shortcomings by reviewing some of the recent theoretical work on the design of collectives, a body of work that constitutes a substantial broadening of mechanism design. It then presents computer experiments based on a recently suggested nanotechnology testbed that demonstrates the power of that extended version of mechanism design.

  4. Structural and Biochemical Studies of ALIX/AlP1 and Its Role in Retrovirus Budding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fisher,R.; Chung, H.; Zhai, Q.

    2007-01-01

    ALIX/AIP1 functions in enveloped virus budding, endosomal protein sorting, and many other cellular processes. Retroviruses, including HIV-1, SIV, and EIAV, bind and recruit ALIX through YPXnL late-domain motifs (X = any residue; n = 1-3). Crystal structures reveal that human ALIX is composed of an N-terminal Bro1 domain and a central domain that is composed of two extended three-helix bundles that form elongated arms that fold back into a 'V'. The structures also reveal conformational flexibility in the arms that suggests that the V domain may act as a flexible hinge in response to ligand binding. YPXnL late domains bind in a conserved hydrophobic pocket on the second arm near the apex of the V, whereas CHMP4/ESCRT-III proteins bind a conserved hydrophobic patch on the Bro1 domain, and both interactions are required for virus budding. ALIX therefore serves as a flexible, extended scaffold that connects retroviral Gag proteins to ESCRT-III and other cellular-budding machinery.

  5. A statistical package for computing time and frequency domain analysis

    NASA Technical Reports Server (NTRS)

    Brownlow, J.

    1978-01-01

    The spectrum analysis (SPA) program is a general purpose digital computer program designed to aid in data analysis. The program does time and frequency domain statistical analyses as well as some preanalysis data preparation. The capabilities of the SPA program include linear trend removal and/or digital filtering of data, plotting and/or listing of both filtered and unfiltered data, time domain statistical characterization of data, and frequency domain statistical characterization of data.
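Linear trend removal of the kind SPA performs can be sketched as an ordinary least-squares line fit followed by subtraction (a pure-Python illustration; SPA's actual implementation is not described in this record):

```python
def remove_linear_trend(y):
    """Subtract the least-squares line from a series (SPA-style detrending).

    Illustrative sketch only; assumes unit sample spacing.
    """
    n = len(y)
    xs = range(n)
    x_mean = (n - 1) / 2
    y_mean = sum(y) / n
    sxy = sum((x - x_mean) * (v - y_mean) for x, v in zip(xs, y))
    sxx = sum((x - x_mean) ** 2 for x in xs)
    slope = sxy / sxx
    return [v - (y_mean + slope * (x - x_mean)) for x, v in zip(xs, y)]

# A pure ramp detrends to zero residuals.
print(remove_linear_trend([1.0, 2.0, 3.0, 4.0]))   # -> [0.0, 0.0, 0.0, 0.0]
```

Detrending before spectral estimation avoids leaking a low-frequency ramp across the whole frequency-domain characterization.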

  6. Skierarchy: Extending the Power of Crowdsourcing Using a Hierarchy of Domain Experts, Crowd and Machine Learning

    DTIC Science & Technology

    2012-11-01

    college background and good reading comprehension skills in English, and bring them to the office space to work for us full-time on micro-tasks. This ... reasonable reading comprehension skills in English. The expert spent only 1/3rd the time as each member of the crowd in the entire annotation process ...

  7. Systematic analysis of human kinase genes: a large number of genes and alternative splicing events result in functional and structural diversity

    PubMed Central

    Milanesi, Luciano; Petrillo, Mauro; Sepe, Leandra; Boccia, Angelo; D'Agostino, Nunzio; Passamano, Myriam; Di Nardo, Salvatore; Tasco, Gianluca; Casadio, Rita; Paolella, Giovanni

    2005-01-01

    Background Protein kinases are a well defined family of proteins, characterized by the presence of a common kinase catalytic domain and playing a significant role in many important cellular processes, such as proliferation, maintenance of cell shape and apoptosis. In many members of the family, additional non-kinase domains contribute further specialization, resulting in subcellular localization, protein binding and regulation of activity, among others. About 500 genes encode members of the kinase family in the human genome, and although many of them represent well known genes, a larger number of genes code for proteins of more recent identification, or for unknown proteins identified as kinases only after computational studies. Results A systematic in silico study performed on the human genome led to the identification of 5 genes, on chromosomes 1, 11, 13, 15 and 16 respectively, and 1 pseudogene on chromosome X; some of these genes are reported as kinases by NCBI but are absent from other databases, such as KinBase. Comparative analysis of 483 gene regions and subsequent computational analysis, aimed at identifying unannotated exons, indicates that a large number of kinase genes may code for alternatively spliced forms or be incorrectly annotated. An InterProScan automated analysis was performed to study domain distribution and combination in the various families. At the same time, other structural features were also added to the annotation process, including the putative presence of transmembrane alpha helices, and the propensity of cysteines to participate in a disulfide bridge. Conclusion The predicted human kinome was extended by identifying both additional genes and potential splice variants, resulting in a varied panorama where functionality may be searched at the gene and protein level.
Structural analysis of kinase protein domains, as defined in multiple sources, together with transmembrane alpha helix and signal peptide prediction, provides hints for function assignment. The results of the human kinome analysis are collected in the KinWeb database, available for browsing and searching over the internet, where all results from the comparative analysis and the gene structure annotation are made available, alongside the domain information. Kinases may be searched by domain combinations, and the relative genes may be viewed in a graphic browser at various levels of magnification, up to gene organization on the full chromosome set. PMID:16351747

  8. Optical-domain subsampling for data efficient depth ranging in Fourier-domain optical coherence tomography

    PubMed Central

    Siddiqui, Meena; Vakoc, Benjamin J.

    2012-01-01

    Recent advances in optical coherence tomography (OCT) have led to higher-speed sources that support imaging over longer depth ranges. Limitations in the bandwidth of state-of-the-art acquisition electronics, however, prevent the adoption of these advances in clinical applications. Here, we introduce optical-domain subsampling as a method for imaging at high speeds and over extended depth ranges, but with a lower acquisition bandwidth than that required by conventional approaches. Optically subsampled laser sources utilize a discrete set of wavelengths to alias fringe signals along an extended depth range into a bandwidth-limited frequency window. By detecting the complex fringe signals and under the assumption of a depth-constrained signal, optical-domain subsampling enables recovery of the depth-resolved scattering signal without overlapping artifacts from this bandwidth-limited window. We highlight key principles behind optical-domain subsampled imaging, and demonstrate this principle experimentally using a polygon-filter based swept-source laser that includes an intra-cavity Fabry-Perot (FP) etalon. PMID:23038343
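The frequency-folding behaviour that subsampling exploits can be illustrated numerically: with complex (I/Q) fringe detection, a fringe frequency f sampled at rate fs aliases to f mod fs, so a depth-constrained signal can be recovered without overlap as long as its frequency support fits within fs. The numbers below are illustrative, not from the paper:

```python
# Sketch of complex-subsampling frequency folding: each depth-dependent
# fringe frequency lands in the baseband window [0, fs). With real-valued
# sampling the folding rule would differ (mirroring about fs/2); complex
# detection, as used in the paper, gives the simple modulo rule.
def alias_complex(f, fs):
    return f % fs

fs = 100.0                              # acquisition bandwidth (arbitrary units)
depth_freqs = [20.0, 130.0, 260.0]      # fringe frequencies along extended depth
print([alias_complex(f, fs) for f in depth_freqs])   # -> [20.0, 30.0, 60.0]
```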

  9. On buffer layers as non-reflecting computational boundaries

    NASA Technical Reports Server (NTRS)

    Hayder, M. Ehtesham; Turkel, Eli L.

    1996-01-01

    We examine an absorbing buffer layer technique for use as a non-reflecting boundary condition in the numerical simulation of flows. One such formulation was by Ta'asan and Nark for the linearized Euler equations. They modified the flow inside the buffer zone to artificially make it supersonic in the layer. We examine how this approach can be extended to the nonlinear Euler equations. We consider both a conservative and a non-conservative form modifying the governing equations in the buffer layer. We compare this with the case that the governing equations in the layer are the same as in the interior domain. We test the effectiveness of these buffer layers by a simulation of an excited axisymmetric jet based on the nonlinear compressible Navier-Stokes equations.

  10. A generalized Lyapunov theory for robust root clustering of linear state space models with real parameter uncertainty

    NASA Technical Reports Server (NTRS)

    Yedavalli, R. K.

    1992-01-01

    The problem of analyzing and designing controllers for linear systems subject to real parameter uncertainty is considered. An elegant, unified theory for robust eigenvalue placement is presented for a class of D-regions defined by algebraic inequalities by extending the nominal matrix root clustering theory of Gutman and Jury (1981) to linear uncertain time systems. The author presents explicit conditions for matrix root clustering for different D-regions and establishes the relationship between the eigenvalue migration range and the parameter range. The bounds are all obtained by one-shot computation in the matrix domain and do not need any frequency sweeping or parameter gridding. The method uses the generalized Lyapunov theory for getting the bounds.
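The simplest D-region, the open left half-plane Re(s) < 0, can be checked directly for a 2x2 example under a real parameter perturbation (an illustration of D-region membership only; the paper's Lyapunov-based migration bounds are not reproduced here, and the matrices are invented):

```python
import cmath

def eig2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] via the characteristic polynomial."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def in_left_half_plane(a, b, c, d):
    """True if both eigenvalues lie in the stability D-region Re(s) < 0."""
    return all(z.real < 0 for z in eig2(a, b, c, d))

# Nominal stable system, then a real parameter perturbation q added to a11.
print(in_left_half_plane(-1.0, 1.0, 0.0, -2.0))        # nominal: True
print(in_left_half_plane(-1.0 + 1.5, 1.0, 0.0, -2.0))  # q = 1.5: False
```

Sweeping q and recording where membership is lost gives a brute-force counterpart to the one-shot parameter-range bounds the abstract describes.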

  11. The Cerebellum: Adaptive Prediction for Movement and Cognition

    PubMed Central

    Sokolov, Arseny A.; Miall, R. Chris; Ivry, Richard B.

    2017-01-01

    Over the past 30 years, cumulative evidence has indicated that cerebellar function extends beyond sensorimotor control. This view has emerged from studies of neuroanatomy, neuroimaging, neuropsychology and brain stimulation, with the results implicating the cerebellum in domains as diverse as attention, language, executive function and social cognition. Although the literature provides sophisticated models of how the cerebellum helps refine movements, it remains unclear how the core mechanisms of these models can be applied when considering a broader conceptualization of cerebellar function. In light of recent multidisciplinary findings, we consider two key concepts that have been suggested as general computational principles of cerebellar function, prediction and error-based learning, examining how these might be relevant in the operation of cognitive cerebro-cerebellar loops. PMID:28385461

  12. Multidisciplinary model-based-engineering for laser weapon systems: recent progress

    NASA Astrophysics Data System (ADS)

    Coy, Steve; Panthaki, Malcolm

    2013-09-01

    We are working to develop a comprehensive, integrated software framework and toolset to support model-based engineering (MBE) of laser weapons systems. MBE has been identified by the Office of the Director, Defense Science and Engineering as one of four potentially "game-changing" technologies that could bring about revolutionary advances across the entire DoD research and development and procurement cycle. To be effective, however, MBE requires robust underlying modeling and simulation technologies capable of modeling all the pertinent systems, subsystems, components, effects, and interactions at any level of fidelity that may be required in order to support crucial design decisions at any point in the system development lifecycle. Very often the greatest technical challenges are posed by systems involving interactions that cut across two or more distinct scientific or engineering domains; even in cases where there are excellent tools available for modeling each individual domain, generally none of these domain-specific tools can be used to model the cross-domain interactions. In the case of laser weapons systems R&D these tools need to be able to support modeling of systems involving combined interactions among structural, thermal, and optical effects, including both ray optics and wave optics, controls, atmospheric effects, target interaction, computational fluid dynamics, and spatiotemporal interactions between lasing light and the laser gain medium. To address this problem we are working to extend Comet™ to add the additional modeling and simulation capabilities required for this particular application area. In this paper we will describe our progress to date.

  13. Medical informatics--an Australian perspective.

    PubMed

    Hannan, T

    1991-06-01

    Computers, like the X-ray and the stethoscope, can be seen as clinical tools that provide physicians with improved expertise in solving patient management problems. As tools they enable us to extend our clinical information base, and they also provide facilities that improve the delivery of the health care we provide. Automation (computerisation) in the health domain will cause the computer to become a more integral part of health care management and delivery before the start of the next century. To understand how the computer assists those who deliver and manage health care, it is important to be aware of its functional capabilities and how we can use them in medical practice. The rapid technological advances in computers over the last two decades have had both beneficial and counterproductive effects on the implementation of effective computer applications in the delivery of health care. For example, in the 1990s the computer hobbyist is able to make an investment of less than $10,000 on computer hardware that will match or exceed the technological capacities of machines of the 1960s. These rapid technological advances, which have produced a quantum leap in our ability to store and process information, have tended to make us overlook the need for effective computer programmes which will meet the needs of patient care. As the 1990s begin, those delivering health care (e.g., physicians, nurses, pharmacists, administrators ...) need to become more involved in directing the effective implementation of computer applications that will provide the tools for improved information management, knowledge processing, and ultimately better patient care.

  14. Tectono-sedimentary evolution of the eastern Gulf of Aden conjugate passive margins: Narrowness and asymmetry in oblique rifting context

    NASA Astrophysics Data System (ADS)

    Nonn, Chloé; Leroy, Sylvie; Khanbari, Khaled; Ahmed, Abdulhakim

    2017-11-01

    Here, we focus on the as yet unexplored eastern Gulf of Aden: Socotra Island (Yemen), southeastern Oman and the offshore conjugate passive margins between the Socotra-Hadbeen (SHFZ) and the eastern Gulf of Aden fracture zones. Our interpretation leads to onshore-offshore stratigraphic correlation between the passive margins. We present a new map reflecting the boundaries between the crustal domains (proximal, necking, hyper-extended, exhumed mantle, proto-oceanic and oceanic domains) and structures using bathymetry, magnetic surveys and seismic reflection data. The most striking result is that the magma-poor conjugate margins exhibit asymmetrical architecture since the thinning phase (Upper Rupelian-Burdigalian). Their necking domains are sharp (∼40-10 km wide) and their hyper-extended domains are narrow and asymmetric (∼10-40 km wide on the Socotra margin and ∼50-80 km wide on the Omani margin). We suggest that this asymmetry is related to the migration of the rift center producing significant lower crustal flow and sequential faulting in the hyper-extended domain. Throughout the Oligo-Miocene rifting, far-field forces dominated and the deformation was accommodated along EW to N110°E northward-dipping low-angle normal faults. Convection in the mantle near the SHFZ may be responsible for the change in fault dip polarity in the Omani hyper-extended domain. We show the existence of a northward-dipping detachment fault formed at the beginning of the exhumation phase (Burdigalian). It separates the northern upper plate (Oman) from the southern lower plate (Socotra Island) and may have generated rift-induced decompression melting and volcanism affecting the upper plate. We highlight multiple generations of detachment faults exhuming serpentinized subcontinental mantle in the ocean-continent transition. Associated with significant decompression melting, the final detachment fault may have triggered the formation of a proto-oceanic crust at 17.6 Ma and induced late volcanism up to 10 Ma. Finally, a steady-state oceanic spreading center was established at 17 Ma.

  15. Mathematical and computational studies of equilibrium capillary free surfaces

    NASA Technical Reports Server (NTRS)

    Albright, N.; Chen, N. F.; Concus, P.; Finn, R.

    1977-01-01

    The results of several independent studies are presented. The general question is considered of whether a wetting liquid always rises higher in a small capillary tube than in a larger one, when both are dipped vertically into an infinite reservoir. An analytical investigation is initiated to determine the qualitative behavior of the family of solutions of the equilibrium capillary free-surface equation that correspond to rotationally symmetric pendent liquid drops, and the relationship of these solutions to the singular solution, which corresponds to an infinite spike of liquid extending downward to infinity. The block successive overrelaxation-Newton method and the generalized conjugate gradient method are investigated for solving the capillary equation on a uniform square mesh in a square domain, including the case for which the solution is unbounded at the corners. Capillary surfaces are calculated on the ellipse, on a circle with reentrant notches, and on other irregularly shaped domains using JASON, a general purpose program for solving nonlinear elliptic equations on a nonuniform quadrilateral mesh. Analytical estimates for the nonexistence of solutions of the equilibrium capillary free-surface equation on the ellipse in zero gravity are evaluated.
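    The classical expectation behind the opening question is Jurin's law for a vertical circular tube, h = 2γ·cos(θ)/(ρgr). A minimal sketch, assuming room-temperature water properties (illustrative values, not taken from the study):

```python
# Jurin's law: equilibrium capillary rise in a vertical circular tube.
# gamma (surface tension), theta (contact angle) and rho (density) are
# assumed water/air values for illustration only.
import math

def jurin_height(r, gamma=0.0728, theta=0.0, rho=1000.0, g=9.81):
    """Capillary rise [m] for a tube of radius r [m]."""
    return 2 * gamma * math.cos(theta) / (rho * g * r)

# The narrower tube rises higher -- the classical behavior whose generality
# the study examines.
print(jurin_height(0.5e-3) > jurin_height(1.0e-3))  # True
```

    For non-circular or irregular domains such as those treated with JASON, no such closed form exists, which is what motivates the numerical methods above.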

  16. Prediction of submarine scattered noise by the acoustic analogy

    NASA Astrophysics Data System (ADS)

    Testa, C.; Greco, L.

    2018-07-01

    The prediction of the noise scattered by a submarine subject to propeller tonal noise is addressed here through a non-standard frequency-domain formulation that extends the use of the acoustic analogy to scattering problems. A boundary element method yields the scattered pressure upon the hull surface by the solution of a boundary integral equation, whereas the noise radiated in the fluid domain is evaluated by the corresponding boundary integral representation. The propeller-induced incident pressure field on the scatterer is obtained by combining an unsteady three-dimensional panel method with the Bernoulli equation. For each frequency of interest, numerical results concern sound pressure levels upon the hull and in the flowfield. The validity of the results is established by comparison with a time-marching hydrodynamic panel method that solves propeller and hull jointly. Within the framework of potential-flow hydrodynamics, it is found that the scattering formulation proposed herein successfully captures noise magnitude and directivity both on the hull surface and in the flowfield, yielding a computationally efficient solution procedure that may be useful in preliminary design/multidisciplinary optimization applications.

  17. Impulse response of a two-dimensional rough surface overlying an inhomogeneous, nondispersive medium: A hybrid model

    NASA Astrophysics Data System (ADS)

    Keiffer, Richard; Novarini, Jorge; Norton, Guy

    2002-11-01

    A numerical model to calculate the impulse response of a two-dimensional, impenetrable, rough surface directly in the time domain has recently been introduced [R. S. Keiffer and J. C. Novarini, J. Acoust. Soc. Am. 107, 27-39 (2000)]. This model is based on wedge diffraction theory and assumes the half-space containing the source and receiver is homogeneous. In this work, the model is extended to handle media where the index of refraction varies with depth by merging the scattering model with a ray-based propagation model. The resulting hybrid model is tested against a finite-difference time-domain (FDTD) method for backscattering from a corrugated surface in the presence of a refractive layer. This new model can be applied, for example, to calculate acoustic reverberation from the sea surface in cases where the water mass is inhomogeneous and dispersion is negligible. [Work supported by ONR/NRL (PE 61153N-32) and by grants of computer time at the DoD HPC Shared Resource Center at Stennis Space Center, MS.]

  18. Heat Transfer on a Film-Cooled Blade - Effect of Hole Physics

    NASA Technical Reports Server (NTRS)

    Garg, Vijay K.; Rigby, David L.

    1998-01-01

    A multi-block, three-dimensional Navier-Stokes code has been used to study the within-hole and near-hole physics in relation to heat transfer on a film-cooled blade. The flow domain consists of the coolant flow through the plenum and hole-pipes for the three staggered rows of shower-head holes on the VKI rotor, and the main flow over the blade. A multi-block grid is generated that is nearly orthogonal to the various surfaces. It may be noted that for the VKI rotor the shower-head holes are inclined at 30 deg. to the spanwise direction, and are normal to the streamwise direction on the blade. Wilcox's k-omega turbulence model is used. The present study provides a much better comparison of the heat transfer coefficient at the blade mid-span with the experimental data than an earlier analysis wherein coolant velocity and temperature distributions were specified at the hole exits rather than extending the computational domain into the hole-pipe and plenum. Details of the distributions of coolant velocity, temperature, k and omega at the hole exits are also presented.

  19. Fluid Structure Interaction simulation of heart prosthesis in patient-specific left-ventricle/aorta anatomies

    NASA Astrophysics Data System (ADS)

    Le, Trung; Borazjani, Iman; Sotiropoulos, Fotis

    2009-11-01

    In order to test and optimize heart valve prostheses and enable virtual implantation of other biomedical devices, it is essential to develop and validate high-resolution FSI-CFD codes for carrying out simulations in patient-specific geometries. We have developed a powerful numerical methodology for carrying out FSI simulations of cardiovascular flows based on the CURVIB approach (Borazjani, L. Ge, and F. Sotiropoulos, Journal of Computational Physics, vol. 227, pp. 7587-7620, 2008). We have extended our FSI method to overset grids to handle more complicated geometries efficiently, e.g. simulating an MHV implanted in an anatomically realistic aorta and left ventricle. A compliant, anatomic left ventricle is modeled using prescribed motion in one domain. The mechanical heart valve is placed inside the second domain, i.e. the body-fitted curvilinear mesh of the anatomic aorta. The simulations of an MHV with a left-ventricle model underscore the importance of inflow conditions and ventricular compliance for such simulations and demonstrate the potential of our method as a powerful tool for patient-specific simulations.

  20. Meta-analysis of diagnostic test data: a bivariate Bayesian modeling approach.

    PubMed

    Verde, Pablo E

    2010-12-30

    In the last decades, the amount of published results on clinical diagnostic tests has expanded very rapidly. The counterpart to this development has been the formal evaluation and synthesis of diagnostic results. However, published results present substantial heterogeneity, and they can be regarded as so far removed from the classical domain of meta-analysis that they provide a rather severe test of classical statistical methods. Recently, bivariate random effects meta-analytic methods, which model the pairs of sensitivities and specificities, have been presented from the classical point of view. In this work a bivariate Bayesian modeling approach is presented. This approach substantially extends the scope of classical bivariate methods by allowing the structural distribution of the random effects to depend on multiple sources of variability. Meta-analysis is summarized by the predictive posterior distributions for sensitivity and specificity. This new approach also allows substantial model checking, model diagnostics and model selection. Statistical computations are implemented in public domain statistical software (WinBUGS and R) and illustrated with real data examples. Copyright © 2010 John Wiley & Sons, Ltd.

  1. Domain fusion analysis by applying relational algebra to protein sequence and domain databases

    PubMed Central

    Truong, Kevin; Ikura, Mitsuhiko

    2003-01-01

    Background Domain fusion analysis is a useful method to predict functionally linked proteins that may be involved in direct protein-protein interactions or in the same metabolic or signaling pathway. As separate domain databases like BLOCKS, PROSITE, Pfam, SMART, PRINTS-S, ProDom, TIGRFAMs, and amalgamated domain databases like InterPro continue to grow in size and quality, a computational method to perform domain fusion analysis that leverages these efforts will become increasingly powerful. Results This paper proposes a computational method employing relational algebra to find domain fusions in protein sequence databases. The feasibility of this method was illustrated on the SWISS-PROT+TrEMBL sequence database using domain predictions from the Pfam HMM (hidden Markov model) database. We identified 235 and 189 putative functionally linked protein partners in H. sapiens and S. cerevisiae, respectively. From the scientific literature, we were able to confirm many of these functional linkages, while the remainder offer testable experimental hypotheses. Results can be viewed at . Conclusion As the analysis can be computed quickly on any relational database that supports standard SQL (structured query language), it can be dynamically updated along with the sequence and domain databases, thereby improving the quality of predictions over time. PMID:12734020
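    Because the method reduces to relational algebra over standard SQL, the core fusion query can be sketched as a self-join. A toy illustration with hypothetical protein and Pfam-style domain names (not the paper's actual schema or queries):

```python
# Domain-fusion sketch: two proteins carrying separately a pair of domains
# that co-occur on a third "composite" protein are predicted functional partners.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE domains (protein TEXT, domain TEXT)")
con.executemany("INSERT INTO domains VALUES (?, ?)", [
    ("fusionAB", "PF_A"), ("fusionAB", "PF_B"),  # composite (fusion) protein
    ("protA", "PF_A"), ("protB", "PF_B"),        # separate partner proteins
])

rows = con.execute("""
    SELECT DISTINCT a.protein, b.protein
    FROM domains f1
    JOIN domains f2 ON f1.protein = f2.protein AND f1.domain < f2.domain
    JOIN domains a  ON a.domain = f1.domain AND a.protein <> f1.protein
    JOIN domains b  ON b.domain = f2.domain AND b.protein <> f2.protein
    WHERE a.protein <> b.protein
""").fetchall()
print(rows)  # [('protA', 'protB')]
```

    Because the query is plain SQL, it can simply be re-run as the underlying sequence and domain tables grow, which is the dynamic-update property the authors emphasize.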

  2. Low resolution solution structure of HAMLET and the importance of its alpha-domains in tumoricidal activity.

    PubMed

    Ho, C S James; Rydstrom, Anna; Manimekalai, Malathy Sony Subramanian; Svanborg, Catharina; Grüber, Gerhard

    2012-01-01

    HAMLET (Human Alpha-lactalbumin Made LEthal to Tumor cells) is the first member in a new family of protein-lipid complexes with broad tumoricidal activity. Elucidating the molecular structure and the domains crucial for HAMLET formation is fundamental for understanding its tumoricidal function. Here we present the low-resolution solution structure of the complex of oleic acid bound HAMLET, derived from small angle X-ray scattering data. HAMLET shows a two-domain conformation with a large globular domain and an extended part of about 2.22 nm in length and 1.29 nm in width. The structure has been superimposed onto the related crystallographic structure of human α-lactalbumin, revealing that the major part of α-lactalbumin accommodates well in the shape of HAMLET. However, the C-terminal residues from L105 to L123 of the crystal structure of human α-lactalbumin do not fit well into the HAMLET structure, resulting in an extended conformation in HAMLET, proposed to be required to form the tumoricidally active HAMLET complex with oleic acid. Consistent with this low resolution structure, we identified biologically active peptide epitopes in the globular as well as the extended domains of HAMLET. Peptides covering the alpha1 and alpha2 domains of the protein triggered rapid ion fluxes in the presence of sodium oleate and were internalized by tumor cells, causing rapid and sustained changes in cell morphology. The alpha peptide-oleate bound forms also triggered tumor cell death with comparable efficiency as HAMLET. In addition, shorter peptides corresponding to those domains are biologically active. These findings provide novel insights into the structural prerequisites for the dramatic effects of HAMLET on tumor cells.

  3. Low Resolution Solution Structure of HAMLET and the Importance of Its Alpha-Domains in Tumoricidal Activity

    PubMed Central

    Ho, C S James; Rydstrom, Anna; Manimekalai, Malathy Sony Subramanian; Svanborg, Catharina; Grüber, Gerhard

    2012-01-01

    HAMLET (Human Alpha-lactalbumin Made LEthal to Tumor cells) is the first member in a new family of protein-lipid complexes with broad tumoricidal activity. Elucidating the molecular structure and the domains crucial for HAMLET formation is fundamental for understanding its tumoricidal function. Here we present the low-resolution solution structure of the complex of oleic acid bound HAMLET, derived from small angle X-ray scattering data. HAMLET shows a two-domain conformation with a large globular domain and an extended part of about 2.22 nm in length and 1.29 nm in width. The structure has been superimposed onto the related crystallographic structure of human α-lactalbumin, revealing that the major part of α-lactalbumin accommodates well in the shape of HAMLET. However, the C-terminal residues from L105 to L123 of the crystal structure of human α-lactalbumin do not fit well into the HAMLET structure, resulting in an extended conformation in HAMLET, proposed to be required to form the tumoricidally active HAMLET complex with oleic acid. Consistent with this low resolution structure, we identified biologically active peptide epitopes in the globular as well as the extended domains of HAMLET. Peptides covering the alpha1 and alpha2 domains of the protein triggered rapid ion fluxes in the presence of sodium oleate and were internalized by tumor cells, causing rapid and sustained changes in cell morphology. The alpha peptide-oleate bound forms also triggered tumor cell death with comparable efficiency as HAMLET. In addition, shorter peptides corresponding to those domains are biologically active. These findings provide novel insights into the structural prerequisites for the dramatic effects of HAMLET on tumor cells. PMID:23300861

  4. Further development of the dynamic gas temperature measurement system. Volume 2: Computer program user's manual

    NASA Technical Reports Server (NTRS)

    Stocks, Dana R.

    1986-01-01

    The Dynamic Gas Temperature Measurement System compensation software accepts digitized data from two different diameter thermocouples and computes a compensated frequency response spectrum for one of the thermocouples. Detailed discussions of the physical system, analytical model, and computer software are presented in this volume and in Volume 1 of this report under Task 3. Computer program software restrictions and test cases are also presented. Compensated and uncompensated data may be presented in either the time or frequency domain. Time domain data are presented as instantaneous temperature vs time. Frequency domain data may be presented in several forms such as power spectral density vs frequency.
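    The frequency-domain compensation step can be sketched as follows: if a thermocouple is modelled as a first-order lag with transfer function H(f) = 1/(1 + i2πfτ), dividing the measured spectrum by H restores the gas-temperature spectrum. The time constant and test signal below are assumed for illustration, not taken from the report:

```python
# Hedged sketch of frequency-domain compensation for a first-order sensor lag.
# tau and the 20 Hz test fluctuation are assumed values, not from the report.
import numpy as np

fs, tau = 1000.0, 0.05                         # sample rate [Hz], time constant [s]
t = np.arange(0, 1.0, 1/fs)
true_temp = np.sin(2*np.pi*20*t)               # gas-temperature fluctuation

# Simulate the sensor: first-order attenuation and phase lag in the frequency domain.
f = np.fft.rfftfreq(t.size, 1/fs)
H = 1.0/(1.0 + 2j*np.pi*f*tau)                 # first-order transfer function
measured = np.fft.irfft(np.fft.rfft(true_temp)*H, n=t.size)

# Compensate: divide the measured spectrum by H to recover the true spectrum.
compensated = np.fft.irfft(np.fft.rfft(measured)/H, n=t.size)
print(np.max(np.abs(compensated - true_temp)) < 1e-9)  # True
```

    The actual system records two thermocouples of different diameters, which supplies the response information this sketch assumes known.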

  5. Multiclass Continuous Correspondence Learning

    NASA Technical Reports Server (NTRS)

    Bue, Brian D.; Thompson, David R.

    2011-01-01

    We extend the Structural Correspondence Learning (SCL) domain adaptation algorithm of Blitzer et al. to the realm of continuous signals. Given a set of labeled examples belonging to a 'source' domain, we select a set of unlabeled examples in a related 'target' domain that play similar roles in both domains. Using these 'pivot' samples, we map both domains into a common feature space, allowing us to adapt a classifier trained on source examples to classify target examples. We show that when between-class distances are relatively preserved across domains, we can automatically select target pivots to bring the domains into correspondence.
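    The pivot-based correspondence idea can be sketched with a least-squares linear map (an illustration of the general idea, not the authors' algorithm; all data are synthetic):

```python
# Sketch: given pivot samples matched across domains, estimate a linear
# source-to-target map by least squares; source-trained classifiers can then
# operate on mapped samples in target coordinates.
import numpy as np

rng = np.random.default_rng(0)
src_pivots = rng.normal(size=(20, 3))                  # pivot samples, source domain
A_true = np.array([[2.0, 0, 0], [0, 0.5, 0], [0, 0, 1.0]])
tgt_pivots = src_pivots @ A_true.T + 0.01*rng.normal(size=(20, 3))

# Least-squares estimate of the source -> target correspondence map.
A_hat, *_ = np.linalg.lstsq(src_pivots, tgt_pivots, rcond=None)

src_sample = np.array([1.0, 1.0, 1.0])
mapped = src_sample @ A_hat                            # sample in target coordinates
print(np.allclose(mapped, src_sample @ A_true.T, atol=0.05))  # True
```

    In the paper's setting the pivots are selected automatically from unlabeled target data; here they are given, which isolates the mapping step.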

  6. Determining mode excitations of vacuum electronics devices via three-dimensional simulations using the SOS code

    NASA Technical Reports Server (NTRS)

    Warren, Gary

    1988-01-01

    The SOS code is used to compute the resonance modes (frequency-domain information) of sample devices and, separately, to compute the transient behavior of the same devices. A code, DOT, is created to compute appropriate dot products of the time-domain and frequency-domain results. The transient behavior of individual modes in the device is then plotted. Modes in a coupled-cavity traveling-wave tube (CCTWT) section excited by a beam are analyzed in separate simulations. Mode energy vs. time and mode phase vs. time are computed, and it is determined whether the transient waves are forward or backward waves in each case. Finally, the hot-test mode frequencies of the CCTWT section are computed.
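    The DOT post-processing idea, projecting time-domain results onto frequency-domain mode shapes, can be sketched with a hypothetical 1-D field (not SOS output):

```python
# Sketch: dot products of time-domain snapshots with orthonormal mode shapes
# give each mode's amplitude as a function of time.
import numpy as np

x = np.linspace(0, 1, 200)
modes = np.array([np.sin(np.pi*x), np.sin(2*np.pi*x)])   # frequency-domain shapes
modes /= np.linalg.norm(modes, axis=1, keepdims=True)    # orthonormalize

t = np.linspace(0, 1, 50)
# Hypothetical transient field: mode 1 grows linearly, mode 2 oscillates.
field = np.outer(t, modes[0]) + np.outer(np.cos(10*t), modes[1])

amps = field @ modes.T   # dot products: one amplitude-vs-time column per mode
print(np.allclose(amps[:, 0], t) and np.allclose(amps[:, 1], np.cos(10*t)))  # True
```

    Tracking the phase of such per-mode amplitudes over time is what allows forward and backward waves to be distinguished.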

  7. A compositional reservoir simulator on distributed memory parallel computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rame, M.; Delshad, M.

    1995-12-31

    This paper presents the application of distributed memory parallel computers to field scale reservoir simulations using a parallel version of UTCHEM, The University of Texas Chemical Flooding Simulator. The model is a general purpose, highly vectorized chemical compositional simulator that can simulate a wide range of displacement processes at both field and laboratory scales. The original simulator was modified to run on both distributed memory parallel machines (Intel iPSC/860 and Delta, Connection Machine 5, Kendall Square 1 and 2, and CRAY T3D) and a cluster of workstations. A domain decomposition approach has been taken towards parallelization of the code. A portion of the discrete reservoir model is assigned to each processor by a set-up routine that attempts a data layout as even as possible from the load-balance standpoint. Each of these subdomains is extended so that data can be shared between adjacent processors for stencil computation. The added routines that make parallel execution possible are written in a modular fashion that makes porting to new parallel platforms straightforward. Results of the distributed memory computing performance of the parallel simulator are presented for field scale applications such as tracer floods and polymer floods. A comparison of the wall-clock times for the same problems on a vector supercomputer is also presented.
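    The subdomain extension for stencil computation amounts to a halo (ghost cell) exchange. A serial toy emulation with two "processors" each holding half of a 1-D field (illustrative only, not UTCHEM code):

```python
# Halo-exchange sketch: each subdomain is extended by one cell per side so a
# 3-point stencil can be applied locally, matching the global computation.
import numpy as np

global_field = np.arange(10, dtype=float)
halves = [global_field[:5], global_field[5:]]            # two "processors"

def exchange_halos(parts):
    """Extend each part with neighbour data (edge values at physical boundaries)."""
    padded = []
    for i, p in enumerate(parts):
        left = parts[i-1][-1] if i > 0 else p[0]
        right = parts[i+1][0] if i < len(parts) - 1 else p[-1]
        padded.append(np.concatenate([[left], p, [right]]))
    return padded

# 3-point averaging stencil applied independently on each extended subdomain.
result = np.concatenate([(p[:-2] + p[1:-1] + p[2:]) / 3
                         for p in exchange_halos(halves)])
padded_global = np.pad(global_field, 1, mode="edge")
reference = (padded_global[:-2] + global_field + padded_global[2:]) / 3
print(np.allclose(result, reference))  # True
```

    The decomposed result matches the global stencil exactly, which is what keeps the data layout transparent to the numerical routines.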

  8. A UML profile for the OBO relation ontology

    PubMed Central

    2012-01-01

    Background Ontologies have increasingly been used in the biomedical domain, which has prompted the emergence of different initiatives to facilitate their development and integration. The Open Biological and Biomedical Ontologies (OBO) Foundry consortium provides a repository of life-science ontologies, which are developed according to a set of shared principles. This consortium has developed an ontology called OBO Relation Ontology aiming at standardizing the different types of biological entity classes and associated relationships. Since ontologies are primarily intended to be used by humans, the use of graphical notations for ontology development facilitates the capture, comprehension and communication of knowledge between its users. However, OBO Foundry ontologies are captured and represented basically using text-based notations. The Unified Modeling Language (UML) provides a standard and widely-used graphical notation for modeling computer systems. UML provides a well-defined set of modeling elements, which can be extended using a built-in extension mechanism named Profile. Thus, this work aims at developing a UML profile for the OBO Relation Ontology to provide a domain-specific set of modeling elements that can be used to create standard UML-based ontologies in the biomedical domain. Results We have studied the OBO Relation Ontology, the UML metamodel and the UML profiling mechanism. Based on these studies, we have proposed an extension to the UML metamodel in conformance with the OBO Relation Ontology and we have defined a profile that implements the extended metamodel. Finally, we have applied the proposed UML profile in the development of a number of fragments from different ontologies. Particularly, we have considered the Gene Ontology (GO), the PRotein Ontology (PRO) and the Xenopus Anatomy and Development Ontology (XAO). 
    Conclusions The use of an established and well-known graphical language in the development of biomedical ontologies provides a more intuitive form of capturing and representing knowledge than using only text-based notations. The use of the profile requires the domain expert to reason about the underlying semantics of the concepts and relationships being modeled, which helps prevent the introduction of inconsistencies in an ontology under development and facilitates the identification and correction of errors in an already defined ontology. PMID:23095840

  9. A robust upscaling of the effective particle deposition rate in porous media

    NASA Astrophysics Data System (ADS)

    Boccardo, Gianluca; Crevacore, Eleonora; Sethi, Rajandrea; Icardi, Matteo

    2018-05-01

    In the upscaling from pore to continuum (Darcy) scale, reaction and deposition phenomena at the solid-liquid interface of a porous medium have to be represented by macroscopic reaction source terms. The effective rates can be computed, in the case of periodic media, from three-dimensional microscopic simulations of the periodic cell. Several computational and semi-analytical models have been studied in the field of colloid filtration to describe this problem. They typically rely on effective deposition rates defined by complex fitting procedures, neglecting the advection-diffusion interplay and the pore-scale flow complexity, and assuming slow reactions (or large Péclet numbers). Therefore, when these rates are inserted into general macroscopic transport equations, they can lead to several conceptual inconsistencies and significant errors. To study more accurately the dependence of deposition on the flow parameters, in this work we advocate a clear distinction between the surface processes (which altogether define the so-called attachment efficiency) and the pore-scale processes. With this approach, valid when colloidal particles are small enough, we study Brownian and gravity-driven deposition on a face-centred cubic (FCC) arrangement of spherical grains, and define a robust upscaling based on a linear effective reaction rate. The case of partial deposition, defined by an attachment probability, is studied, and the limit of the perfect sink is retrieved as a particular case. We introduce a novel upscaling approach and a particularly convenient computational setup that allows the direct computation of the asymptotic stationary value of the effective rates. This makes it possible to reduce the computational domain drastically, down to the scale of a single repeating periodic unit. The savings are even more noticeable at higher Péclet numbers, when larger physical times are needed to reach the asymptotic regime and thus, equivalently, a much larger computational domain and simulation time would be needed in a traditional setup. We show that this new definition of the deposition rate is more robust and extendable to the whole range of Péclet numbers; it is also consistent with the classical heat and mass transfer literature.
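    Under a linear effective rate, the flux-averaged concentration decays exponentially across one periodic unit, so the asymptotic rate follows from a single-cell computation. A hedged sketch with hypothetical numbers (not values from the paper):

```python
# Upscaling sketch: at steady state, c_out = c_in * exp(-k_eff * L / U) across
# one periodic unit, so k_eff follows from a single repeating cell.
# U, L, c_in and c_out below are hypothetical illustrative values.
import math

U, L = 1e-3, 1e-2          # mean pore velocity [m/s], periodic-unit length [m]
c_in, c_out = 1.0, 0.9     # flux-averaged concentrations across the unit cell

k_eff = (U / L) * math.log(c_in / c_out)   # effective first-order rate [1/s]
print(round(k_eff, 6))  # 0.010536

# At the Darcy scale, this rate enters the transport equation as a linear
# sink term -k_eff * c.
```

    The point of the single-cell setup is that this one number is all the macroscopic equation needs, regardless of the pore-scale flow detail behind it.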

  10. Pivotal role of extended linker 2 in the activation of Gα by G protein-coupled receptor.

    PubMed

    Huang, Jianyun; Sun, Yutong; Zhang, J Jillian; Huang, Xin-Yun

    2015-01-02

    G protein-coupled receptors (GPCRs) relay extracellular signals mainly to heterotrimeric G-proteins (Gαβγ) and they are the most successful drug targets. The mechanisms of G-protein activation by GPCRs are not well understood. Previous studies have revealed a signal relay route from a GPCR via the C-terminal α5-helix of Gα to the guanine nucleotide-binding pocket. Recent structural and biophysical studies uncover a role for the opening or rotating of the α-helical domain of Gα during the activation of Gα by a GPCR. Here we show that β-adrenergic receptors activate eight Gαs mutant proteins (from a screen of 66 Gαs mutants) that are unable to bind Gβγ subunits in cells. Five of these eight mutants are in the αF/Linker 2/β2 hinge region (extended Linker 2) that connects the Ras-like GTPase domain and the α-helical domain of Gαs. This extended Linker 2 is the target site of a natural product inhibitor of Gq. Our data show that the extended Linker 2 is critical for Gα activation by GPCRs. We propose that a GPCR via its intracellular loop 2 directly interacts with the β2/β3 loop of Gα to communicate to Linker 2, resulting in the opening and closing of the α-helical domain and the release of GDP during G-protein activation. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.

  11. A new multi-domain method based on an analytical control surface for linear and second-order mean drift wave loads on floating bodies

    NASA Astrophysics Data System (ADS)

    Liang, Hui; Chen, Xiaobo

    2017-10-01

    A novel multi-domain method based on an analytical control surface is proposed by combining the use of the free-surface Green function and the Rankine source function. A cylindrical control surface is introduced to subdivide the fluid domain into external and internal domains. Unlike the traditional domain decomposition strategy or multi-block method, the control surface here is not panelized; on it, the velocity potential and normal velocity components are analytically expressed as a series of base functions composed of Laguerre functions in the vertical coordinate and Fourier series in the circumferential direction. The free-surface Green function is applied in the external domain, and the boundary integral equation is constructed on the control surface in the sense of Galerkin collocation via integrating test functions orthogonal to the base functions over the control surface. The external solution gives rise to the so-called Dirichlet-to-Neumann [DN2] and Neumann-to-Dirichlet [ND2] relations on the control surface. Irregular frequencies, which depend only on the radius of the control surface, are present in the external solution, and they are removed by extending the boundary integral equation to the interior free surface (circular disc), on which a null normal derivative of the potential is imposed and the dipole distribution is expressed as a Fourier-Bessel expansion on the disc. In the internal domain, where the Rankine source function is adopted, new boundary integral equations are formulated. Point collocation is imposed over the body surface and free surface, while collocation of the Galerkin type is applied on the control surface. The present method is valid in the computation of both linear and second-order mean drift wave loads. Furthermore, the second-order mean drift force based on the middle-field formulation can be calculated analytically by using the coefficients of the Fourier-Laguerre expansion.

  12. Atomistic modelling of scattering data in the Collaborative Computational Project for Small Angle Scattering (CCP-SAS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perkins, Stephen J.; Wright, David W.; Zhang, Hailiang

    2016-10-14

    The capabilities of current computer simulations provide a unique opportunity to model small-angle scattering (SAS) data at the atomistic level, and to include other structural constraints ranging from molecular and atomistic energetics to crystallography, electron microscopy and NMR. This extends the capabilities of solution scattering and provides deeper insights into the physics and chemistry of the systems studied. Realizing this potential, however, requires integrating the experimental data with a new generation of modelling software. To achieve this, the CCP-SAS collaboration (http://www.ccpsas.org/) is developing open-source, high-throughput and user-friendly software for the atomistic and coarse-grained molecular modelling of scattering data. Robust state-of-the-art molecular simulation engines and molecular dynamics and Monte Carlo force fields provide constraints to the solution structure inferred from the small-angle scattering data, which incorporates the known physical chemistry of the system. The implementation of this software suite involves a tiered approach in which GenApp provides the deployment infrastructure for running applications on both standard and high-performance computing hardware, and SASSIE provides a workflow framework into which modules can be plugged to prepare structures, carry out simulations, calculate theoretical scattering data and compare results with experimental data. GenApp produces the accessible web-based front end termed SASSIE-web, and GenApp and SASSIE also make community SAS codes available. Applications are illustrated by case studies: (i) inter-domain flexibility in two- to six-domain proteins as exemplified by HIV-1 Gag, MASP and ubiquitin; (ii) the hinge conformation in human IgG2 and IgA1 antibodies; (iii) the complex formed between a hexameric protein Hfq and mRNA; and (iv) synthetic 'bottlebrush' polymers.

  13. Automated Design of Complex Dynamic Systems

    PubMed Central

    Hermans, Michiel; Schrauwen, Benjamin; Bienstman, Peter; Dambre, Joni

    2014-01-01

    Several fields of study are concerned with uniting the concept of computation with that of the design of physical systems. For example, a recent trend in robotics is to design robots in such a way that they require a minimal control effort. Another example is found in the domain of photonics, where recent efforts try to benefit directly from the complex nonlinear dynamics to achieve more efficient signal processing. The underlying goal of these and similar research efforts is to internalize a large part of the necessary computations within the physical system itself by exploiting its inherent non-linear dynamics. This, however, often requires the optimization of large numbers of system parameters, related to both the system's structure as well as its material properties. In addition, many of these parameters are subject to fabrication variability or to variations through time. In this paper we apply a machine learning algorithm to optimize physical dynamic systems. We show that such algorithms, which are normally applied on abstract computational entities, can be extended to the field of differential equations and used to optimize an associated set of parameters which determine their behavior. We show that machine learning training methodologies are highly useful in designing robust systems, and we provide a set of both simple and complex examples using models of physical dynamical systems. Interestingly, the derived optimization method is intimately related to direct collocation, a method known in the field of optimal control. Our work suggests that the application domains of both machine learning and optimal control have a largely unexplored overlapping area which envelops a novel design methodology of smart and highly complex physical systems. PMID:24497969
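    The training idea can be illustrated on a toy dynamical system (not the paper's photonics or robotics models): choose a physical parameter of a simulated oscillator by minimizing a trajectory-matching loss. A simple coarse-to-fine sweep stands in here for the gradient-based machine learning training the paper applies:

```python
# Toy parameter design: find the damping c of x'' = -x - c*x' whose simulated
# trajectory best matches a target behaviour.
import numpy as np

def trajectory(c, dt=0.01, steps=500):
    """Damped oscillator integrated with semi-implicit Euler."""
    x, v, xs = 1.0, 0.0, []
    for _ in range(steps):
        v += dt * (-x - c * v)
        x += dt * v
        xs.append(x)
    return np.array(xs)

target = trajectory(0.8)                     # behaviour we want to reproduce
loss = lambda c: float(np.sum((trajectory(c) - target) ** 2))

coarse = min(np.linspace(0.0, 2.0, 41), key=loss)               # spacing 0.05
fine = min(np.linspace(coarse - 0.05, coarse + 0.05, 101), key=loss)
print(abs(fine - 0.8) < 1e-3)  # True
```

    In the paper the same loop is closed with gradient-based training over many parameters at once, which is where the connection to direct collocation arises.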

  14. Atomistic modelling of scattering data in the Collaborative Computational Project for Small Angle Scattering (CCP-SAS).

    PubMed

    Perkins, Stephen J; Wright, David W; Zhang, Hailiang; Brookes, Emre H; Chen, Jianhan; Irving, Thomas C; Krueger, Susan; Barlow, David J; Edler, Karen J; Scott, David J; Terrill, Nicholas J; King, Stephen M; Butler, Paul D; Curtis, Joseph E

    2016-12-01

    The capabilities of current computer simulations provide a unique opportunity to model small-angle scattering (SAS) data at the atomistic level, and to include other structural constraints ranging from molecular and atomistic energetics to crystallography, electron microscopy and NMR. This extends the capabilities of solution scattering and provides deeper insights into the physics and chemistry of the systems studied. Realizing this potential, however, requires integrating the experimental data with a new generation of modelling software. To achieve this, the CCP-SAS collaboration (http://www.ccpsas.org/) is developing open-source, high-throughput and user-friendly software for the atomistic and coarse-grained molecular modelling of scattering data. Robust state-of-the-art molecular simulation engines and molecular dynamics and Monte Carlo force fields provide constraints to the solution structure inferred from the small-angle scattering data, which incorporates the known physical chemistry of the system. The implementation of this software suite involves a tiered approach in which GenApp provides the deployment infrastructure for running applications on both standard and high-performance computing hardware, and SASSIE provides a workflow framework into which modules can be plugged to prepare structures, carry out simulations, calculate theoretical scattering data and compare results with experimental data. GenApp produces the accessible web-based front end termed SASSIE-web, and GenApp and SASSIE also make community SAS codes available. Applications are illustrated by case studies: (i) inter-domain flexibility in two- to six-domain proteins as exemplified by HIV-1 Gag, MASP and ubiquitin; (ii) the hinge conformation in human IgG2 and IgA1 antibodies; (iii) the complex formed between a hexameric protein Hfq and mRNA; and (iv) synthetic 'bottlebrush' polymers.

  15. Circuit transients due to negative bias arcs-II. [on solar cell power systems in low earth orbit

    NASA Technical Reports Server (NTRS)

    Metz, R. N.

    1986-01-01

    Two new models of negative-bias arcing on a solar cell power system in Low Earth Orbit are presented. One is an extended, analytical model and the other is a non-linear, numerical model. The models are based on an earlier analytical model in which the interactions between solar cell interconnects and the space plasma as well as the parameters of the power circuit are approximated linearly. Transient voltages due to arcs struck at the negative terminal of the solar panel are calculated in the time domain. The new models treat, respectively, further linear effects within the solar panel load circuit and non-linear effects associated with the plasma interactions. Results of computer calculations with the models show common-mode voltage transients of the electrically floating solar panel struck by an arc comparable to the early model but load transients that differ substantially from the early model. In particular, load transients of the non-linear model can be more than twice as great as those of the early model and more than twenty times as great as those of the extended, linear model.

  16. Dynamic train-turnout interaction in an extended frequency range using a detailed model of track dynamics

    NASA Astrophysics Data System (ADS)

    Kassa, Elias; Nielsen, Jens C. O.

    2009-03-01

    A time domain solution method for general three-dimensional dynamic interaction of train and turnout (switch and crossing) that accounts for excitation in an extended frequency range (up to several hundred Hz) is proposed. Based on a finite element (FE) model of a standard turnout design, a complex-valued modal superposition of track dynamics is applied using the first 500 eigenmodes of the turnout model. The three-dimensional model includes the distribution of structural flexibility along the turnout, such as bending and torsion of rails and sleepers, and the variations in rail cross-section and sleeper length. Convergence of simulation results is studied while using an increasing number of eigenmodes. It is shown that modes with eigenfrequencies up to at least 200 Hz have a significant influence on the magnitudes of the wheel-rail contact forces. Results from using a simplified track model with a commercial computer program for low-frequency vehicle dynamics are compared with the results from using the detailed FE model in conjunction with the proposed method.
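
    The core of a modal superposition is compact: solve the generalized eigenproblem once, then build the response from a truncated sum of modes. A toy 3-DOF mass-spring chain (all matrices invented) can stand in for the turnout FE model:

```python
import numpy as np

# Toy 3-DOF mass-spring chain standing in for the turnout FE model:
# M x'' + K x = 0, solved by modal superposition with a truncated mode set.
M = np.diag([2.0, 1.0, 1.0])
K = np.array([[ 4.0, -2.0,  0.0],
              [-2.0,  4.0, -2.0],
              [ 0.0, -2.0,  2.0]])

# Generalized eigenproblem K*phi = w^2 M*phi, symmetrized via M^(-1/2).
M_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(M)))
w2, Q = np.linalg.eigh(M_inv_sqrt @ K @ M_inv_sqrt)
omega = np.sqrt(w2)                 # natural frequencies (rad/s)
Phi = M_inv_sqrt @ Q                # mass-normalized mode shapes

def response(x0, t, n_modes):
    """Free response to an initial displacement, keeping only n_modes modes."""
    q0 = Phi.T @ M @ x0             # modal initial conditions
    return sum(Phi[:, i] * q0[i] * np.cos(omega[i] * t) for i in range(n_modes))

x0 = np.array([1.0, 0.0, 0.0])
print(response(x0, 0.0, 3))         # with all modes kept, reproduces x0 exactly
```

    The convergence study in the abstract corresponds to increasing n_modes until the quantities of interest (there, wheel-rail contact forces) stop changing.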

  17. Proceedings of the Sixth NASA Langley Formal Methods (LFM) Workshop

    NASA Technical Reports Server (NTRS)

    Rozier, Kristin Yvonne (Editor)

    2008-01-01

    Today's verification techniques are hard-pressed to scale with the ever-increasing complexity of safety critical systems. Within the field of aeronautics alone, we find the need for verification of algorithms for separation assurance, air traffic control, auto-pilot, Unmanned Aerial Vehicles (UAVs), adaptive avionics, automated decision authority, and much more. Recent advances in formal methods have made verifying more of these problems realistic. Thus we need to continually re-assess what we can solve now and identify the next barriers to overcome. Only through an exchange of ideas between theoreticians and practitioners from academia to industry can we extend formal methods for the verification of ever more challenging problem domains. This volume contains the extended abstracts of the talks presented at LFM 2008: The Sixth NASA Langley Formal Methods Workshop held on April 30 - May 2, 2008 in Newport News, Virginia, USA. The topics of interest that were listed in the call for abstracts were: advances in formal verification techniques; formal models of distributed computing; planning and scheduling; automated air traffic management; fault tolerance; hybrid systems/hybrid automata; embedded systems; safety critical applications; safety cases; accident/safety analysis.

  18. Seismic wavefield modeling based on time-domain symplectic and Fourier finite-difference method

    NASA Astrophysics Data System (ADS)

    Fang, Gang; Ba, Jing; Liu, Xin-xin; Zhu, Kun; Liu, Guo-Chang

    2017-06-01

    Seismic wavefield modeling is important for improving seismic data processing and interpretation. Calculations of wavefield propagation are sometimes not stable when forward modeling of seismic waves uses large time steps for long times. Based on the Hamiltonian expression of the acoustic wave equation, we propose a structure-preserving method for seismic wavefield modeling by applying the symplectic finite-difference method on time grids and the Fourier finite-difference method on space grids to solve the acoustic wave equation. The proposed method is called the symplectic Fourier finite-difference (symplectic FFD) method, and offers high computational accuracy and improved computational stability. Using acoustic approximation, we extend the method to anisotropic media. We discuss the calculations in the symplectic FFD method for seismic wavefield modeling of isotropic and anisotropic media, and use the BP salt model and BP TTI model to test the proposed method. The numerical examples suggest that the proposed method can be used in seismic modeling of strongly variable velocities, offering high computational accuracy and low numerical dispersion. The symplectic FFD method overcomes the residual qSV wave of seismic modeling in anisotropic media and maintains the stability of the wavefield propagation for large time steps.
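
    The structure-preserving ingredient is the symplectic time integrator. A minimal illustration on a harmonic oscillator (not the acoustic wave equation of the paper): explicit Euler inflates the energy exponentially over long runs, while the symplectic variant keeps it bounded:

```python
# Harmonic oscillator H = (p^2 + q^2)/2, integrated two ways. The symplectic
# (semi-implicit Euler) update preserves phase-space structure, so the energy
# stays bounded over long runs; plain explicit Euler inflates it exponentially.
def explicit_euler(p, q, dt, n):
    for _ in range(n):
        p, q = p - dt * q, q + dt * p   # both updates use the OLD values
    return p, q

def symplectic_euler(p, q, dt, n):
    for _ in range(n):
        p = p - dt * q                  # momentum first ...
        q = q + dt * p                  # ... then position with the NEW momentum
    return p, q

energy = lambda p, q: 0.5 * (p * p + q * q)
e0 = energy(1.0, 0.0)
pe, qe = explicit_euler(1.0, 0.0, 0.05, 2000)
ps, qs = symplectic_euler(1.0, 0.0, 0.05, 2000)
print(energy(pe, qe) / e0)   # grows far above 1
print(energy(ps, qs) / e0)   # stays close to 1
```

    The symplectic FFD method applies the same principle to the Hamiltonian form of the acoustic wave equation, with the Fourier finite-difference operator supplying the spatial derivatives.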

  19. Parallel Higher-order Finite Element Method for Accurate Field Computations in Wakefield and PIC Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candel, A.; Kabel, A.; Lee, L.

    Over the past years, SLAC's Advanced Computations Department (ACD), under SciDAC sponsorship, has developed a suite of 3D (2D) parallel higher-order finite element (FE) codes, T3P (T2P) and Pic3P (Pic2P), aimed at accurate, large-scale simulation of wakefields and particle-field interactions in radio-frequency (RF) cavities of complex shape. The codes are built on the FE infrastructure that supports SLAC's frequency domain codes, Omega3P and S3P, to utilize conformal tetrahedral (triangular) meshes, higher-order basis functions and quadratic geometry approximation. For time integration, they adopt an unconditionally stable implicit scheme. Pic3P (Pic2P) extends T3P (T2P) to treat charged-particle dynamics self-consistently using the PIC (particle-in-cell) approach, the first such implementation on a conformal, unstructured grid using Whitney basis functions. Examples from applications to the International Linear Collider (ILC), Positron Electron Project-II (PEP-II), Linac Coherent Light Source (LCLS) and other accelerators will be presented to compare the accuracy and computational efficiency of these codes versus their counterparts using structured grids.

  20. Biomimicry of symbiotic multi-species coevolution for discrete and continuous optimization in RFID networks.

    PubMed

    Lin, Na; Chen, Hanning; Jing, Shikai; Liu, Fang; Liang, Xiaodan

    2017-03-01

    In recent years, symbiosis as a rich source of potential engineering applications and computational models has attracted increasing attention in the adaptive complex systems and evolutionary computing domains. Inspired by different symbiotic coevolution forms in nature, this paper proposed a series of multi-swarm particle swarm optimizers called PS2Os, which extend the single population particle swarm optimization (PSO) algorithm to an interacting multi-swarms model by constructing hierarchical interaction topologies and enhanced dynamical update equations. According to different symbiotic interrelationships, four versions of PS2O are initiated to mimic mutualism, commensalism, predation, and competition mechanisms, respectively. In the experiments, with five benchmark problems, the proposed algorithms are proved to have considerable potential for solving complex optimization problems. The coevolutionary dynamics of symbiotic species in each PS2O version are also studied respectively to demonstrate the heterogeneity of different symbiotic interrelationships that affect the algorithm's performance. Then PS2O is used for solving the radio frequency identification (RFID) network planning (RNP) problem with a mixture of discrete and continuous variables. Simulation results show that the proposed algorithm outperforms the reference algorithms for planning RFID networks, in terms of optimization accuracy and computation robustness.
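
    The PS2O variants build on the canonical single-swarm PSO update, which is compact enough to sketch. The sphere benchmark and all coefficients below are illustrative choices, not the paper's settings, and the multi-swarm interaction topologies are not shown:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                       # benchmark objective: global minimum at 0
    return np.sum(x * x, axis=-1)

def pso(f, dim=5, swarm=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Canonical global-best PSO: inertia w, cognitive c1, social c2."""
    pos = rng.uniform(-5, 5, (swarm, dim))
    vel = np.zeros((swarm, dim))
    pbest, pbest_val = pos.copy(), f(pos)
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(iters):
        r1, r2 = rng.random((2, swarm, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = f(pos)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest, float(np.min(pbest_val))

best, best_val = pso(sphere)
print(best_val)   # close to 0
```

    A PS2O-style scheme runs several such swarms in parallel and adds inter-swarm attraction terms to the velocity update according to the chosen symbiotic relationship.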

  1. Performance analysis of FET microwave devices by use of extended spectral-element time-domain method

    NASA Astrophysics Data System (ADS)

    Sheng, Yijun; Xu, Kan; Wang, Daoxiang; Chen, Rushan

    2013-05-01

    The extended spectral-element time-domain (SETD) method is employed to analyse field effect transistor (FET) microwave devices. In order to impose the contribution of the FET microwave devices on the electromagnetic simulation, the SETD method is extended by introducing a lumped current term into the vector Helmholtz equation. The change of currents on each lumped component can be expressed by the change of voltage via corresponding equivalent-circuit models. The electric fields around each lumped component are influenced by the change of voltage on that component, and vice versa, so a global EM-circuit coupling can be built directly. The fully explicit solving scheme is maintained in this extended SETD method and CPU time is saved accordingly. Three practical FET microwave devices are analysed in this article. The numerical results demonstrate the ability and accuracy of this method.

  2. Domain identification in impedance computed tomography by spline collocation method

    NASA Technical Reports Server (NTRS)

    Kojima, Fumio

    1990-01-01

    A method for estimating an unknown domain in elliptic boundary value problems is considered. The problem is formulated as an inverse problem of integral equations of the second kind. A computational method is developed using a spline collocation scheme. The results can be applied to the inverse problem of impedance computed tomography (ICT) for image reconstruction.

  3. Influence of computational domain size on the pattern formation of the phase field crystals

    NASA Astrophysics Data System (ADS)

    Starodumov, Ilya; Galenko, Peter; Alexandrov, Dmitri; Kropotin, Nikolai

    2017-04-01

    Modeling of the crystallization process by the phase field crystal (PFC) method represents one of the important directions of modern computational materials science. This method makes it possible to study the formation of stable or metastable crystal structures. In this paper, we study the effect of computational domain size on the crystal pattern formation obtained as a result of computer simulation by the PFC method. We show that if the size of the computational domain is changed, the result of modeling may be a structure in a metastable phase instead of the pure stable state. We present a possible theoretical justification for the observed effect and suggest a possible modification of the PFC method to account for this phenomenon.

  4. A Very High Order, Adaptable MESA Implementation for Aeroacoustic Computations

    NASA Technical Reports Server (NTRS)

    Dyson, Roger W.; Goodrich, John W.

    2000-01-01

    Since computational efficiency and wave resolution scale with accuracy, the ideal would be infinitely high accuracy for problems with widely varying wavelength scales. Currently, many of the computational aeroacoustics methods are limited to 4th order accurate Runge-Kutta methods in time which limits their resolution and efficiency. However, a new procedure for implementing the Modified Expansion Solution Approximation (MESA) schemes, based upon Hermitian divided differences, is presented which extends the effective accuracy of the MESA schemes to 57th order in space and time when using 128 bit floating point precision. This new approach has the advantages of reducing round-off error, being easy to program, and being more computationally efficient when compared to previous approaches. Its accuracy is limited only by the floating point hardware. The advantages of this new approach are demonstrated by solving the linearized Euler equations in an open bi-periodic domain. A 500th order MESA scheme can now be created in seconds, making these schemes ideally suited for the next generation of high performance 256-bit (double quadruple) or higher precision computers. This ease of creation makes it possible to adapt the algorithm to the mesh in time instead of its converse: this is ideal for resolving varying wavelength scales which occur in noise generation simulations. And finally, the sources of round-off error which affect the very high order methods are examined and remedies provided that effectively increase the accuracy of the MESA schemes while using current computer technology.
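
    High-order schemes of this kind are built from divided differences. A minimal Newton divided-difference sketch follows, using exact rational arithmetic to sidestep the round-off issues the abstract discusses; this is illustrative of the building block, not the MESA scheme itself:

```python
from fractions import Fraction

def divided_differences(xs, ys):
    """Newton divided-difference coefficients, computed in place."""
    n = len(xs)
    coef = [Fraction(y) for y in ys]
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(coef, xs, x):
    """Evaluate the Newton-form interpolant by Horner's rule."""
    acc = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        acc = acc * (x - xs[i]) + coef[i]
    return acc

xs = [Fraction(i) for i in range(4)]
ys = [x ** 3 for x in xs]            # sample a cubic: 4 points determine it exactly
coef = divided_differences(xs, ys)
print(newton_eval(coef, xs, Fraction(5)))   # 125, recovered exactly
```

    Hermitian variants additionally match derivative values at the nodes, which is what pushes the effective order of the MESA schemes so high.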

  5. Genetic influences on heart rate variability

    PubMed Central

    Golosheykin, Simon; Grant, Julia D.; Novak, Olga V.; Heath, Andrew C.; Anokhin, Andrey P.

    2016-01-01

    Heart rate variability (HRV) is the variation of cardiac inter-beat intervals over time resulting largely from the interplay between the sympathetic and parasympathetic branches of the autonomic nervous system. Individual differences in HRV are associated with emotion regulation, personality, psychopathology, cardiovascular health, and mortality. Previous studies have shown significant heritability of HRV measures. Here we extend genetic research on HRV by investigating sex differences in genetic underpinnings of HRV, the degree of genetic overlap among different measurement domains of HRV, and phenotypic and genetic relationships between HRV and the resting heart rate (HR). We performed electrocardiogram (ECG) recordings in a large population-representative sample of young adult twins (n = 1060 individuals) and computed HRV measures from three domains: time, frequency, and nonlinear dynamics. Genetic and environmental influences on HRV measures were estimated using linear structural equation modeling of twin data. The results showed that variability of HRV and HR measures can be accounted for by additive genetic and non-shared environmental influences (AE model), with no evidence for significant shared environmental effects. Heritability estimates ranged from 47 to 64%, with little difference across HRV measurement domains. Genetic influences did not differ between genders for most variables except the square root of the mean squared differences between successive R-R intervals (RMSSD, higher heritability in males) and the ratio of low to high frequency power (LF/HF, distinct genetic factors operating in males and females). The results indicate high phenotypic and especially genetic correlations between HRV measures from different domains, suggesting that >90% of genetic influences are shared across measures. Finally, about 40% of genetic variance in HRV was shared with HR. 
In conclusion, both HR and HRV measures are highly heritable traits in the general population of young adults, with high degree of genetic overlap across different measurement domains. PMID:27114045
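
    The time-domain measures named above reduce to simple statistics over the R-R interval series. A sketch with invented interval data (RMSSD and SDNN as defined in standard HRV practice):

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Time-domain HRV measures from a series of R-R intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)                     # successive inter-beat differences
    return {
        "mean_hr_bpm": 60000.0 / rr.mean(),            # resting heart rate
        "sdnn": float(rr.std(ddof=1)),                 # overall variability
        "rmssd": float(np.sqrt(np.mean(diffs ** 2))),  # beat-to-beat variability
    }

rr = [812, 798, 840, 805, 822, 795, 830]    # invented R-R intervals (ms)
print(hrv_time_domain(rr))
```

    Frequency-domain and nonlinear measures start from the same interval series but apply spectral estimation or entropy-type statistics instead.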

  6. Learning from Experts: Fostering Extended Thinking in the Early Phases of the Design Process

    ERIC Educational Resources Information Center

    Haupt, Grietjie

    2015-01-01

    Empirical evidence on the way in which expert designers from different domains cognitively connect their internal processes with external resources is presented in the context of an extended cognition model. The article focuses briefly on the main trends in the extended design cognition theory and in particular on recent trends in information…

  7. Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems.

    PubMed

    Williams, Richard A; Timmis, Jon; Qwarnstrom, Eva E

    2016-01-01

    Computational modelling and simulation is increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. In order to ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model) that acts to accurately represent the biological system and to document the functional requirements for the resultant computational model. We present a domain model of the IL-1 stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that, in isolation, UML is not sufficient for the purpose of creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define the stochastic and dynamic nature of complex systems, is crucial for ensuring that conceptual models of complex dynamical biosystems, which are developed using UML, are fit for purpose, and unambiguously define the functional requirements for the resultant computational model.

  8. Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems

    PubMed Central

    Timmis, Jon; Qwarnstrom, Eva E.

    2016-01-01

    Computational modelling and simulation is increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. In order to ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model) that acts to accurately represent the biological system and to document the functional requirements for the resultant computational model. We present a domain model of the IL-1 stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that, in isolation, UML is not sufficient for the purpose of creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define the stochastic and dynamic nature of complex systems, is crucial for ensuring that conceptual models of complex dynamical biosystems, which are developed using UML, are fit for purpose, and unambiguously define the functional requirements for the resultant computational model. PMID:27571414

  9. Computationally efficient algorithm for high sampling-frequency operation of active noise control

    NASA Astrophysics Data System (ADS)

    Rout, Nirmal Kumar; Das, Debi Prasad; Panda, Ganapati

    2015-05-01

    In high sampling-frequency operation of an active noise control (ANC) system, the secondary path estimate and the ANC filter are very long. This increases the computational complexity of the conventional filtered-x least mean square (FXLMS) algorithm. To reduce the computational complexity of long order ANC systems using the FXLMS algorithm, frequency domain block ANC algorithms have been proposed in the past. These full block frequency domain ANC algorithms are associated with some disadvantages such as large block delay, quantization error due to computation of large size transforms and implementation difficulties in existing low-end DSP hardware. To overcome these shortcomings, the partitioned block ANC algorithm is newly proposed where the long length filters in ANC are divided into a number of equal partitions and suitably assembled to perform the FXLMS algorithm in the frequency domain. The complexity of this proposed frequency domain partitioned block FXLMS (FPBFXLMS) algorithm is quite reduced compared to the conventional FXLMS algorithm. It is further reduced by merging one fast Fourier transform (FFT)-inverse fast Fourier transform (IFFT) combination to derive the reduced structure FPBFXLMS (RFPBFXLMS) algorithm. Computational complexity analysis for different orders of filter and partition size are presented. Systematic computer simulations are carried out for both the proposed partitioned block ANC algorithms to show their accuracy compared to the time domain FXLMS algorithm.
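
    The partitioned frequency-domain algorithms are elaborations of the basic time-domain FXLMS update, which is worth seeing in its simplest single-channel form. The paths, step size and filter length below are illustrative, and the frequency-domain partitioning itself is not shown:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy single-channel FXLMS: P is the primary path, S the secondary path, and
# S_hat its estimate (here taken as perfect); the adaptive filter W produces
# anti-noise so the residual at the error microphone shrinks.
P = np.array([0.8, 0.4, 0.2])
S = np.array([0.6, 0.3])
S_hat = S.copy()

L = 16                                  # ANC filter length
W = np.zeros(L)
mu = 0.01                               # step size

x = rng.standard_normal(20000)          # reference noise signal
d = np.convolve(x, P)[:len(x)]          # disturbance at the error mic

x_buf = np.zeros(L)                     # reference history for W
fx_buf = np.zeros(len(S_hat))           # reference history for S_hat filtering
fx_hist = np.zeros(L)                   # filtered-x regressor for the update
y_buf = np.zeros(len(S))                # anti-noise history through true S
errs = []
for n in range(len(x)):
    x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
    y = W @ x_buf                       # anti-noise sample
    y_buf = np.roll(y_buf, 1); y_buf[0] = y
    e = d[n] - S @ y_buf                # residual at the error mic
    fx_buf = np.roll(fx_buf, 1); fx_buf[0] = x[n]
    fx = S_hat @ fx_buf                 # filtered reference sample
    fx_hist = np.roll(fx_hist, 1); fx_hist[0] = fx
    W = W + mu * e * fx_hist            # FXLMS weight update
    errs.append(e * e)

print(np.mean(errs[:1000]), np.mean(errs[-1000:]))   # error power drops sharply
```

    The block algorithms in the abstract compute the same convolutions and updates with FFTs over (partitioned) blocks, trading per-sample work for transform overhead and delay.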

  10. The organization of domains in proteins obeys Menzerath-Altmann's law of language.

    PubMed

    Shahzad, Khuram; Mittenthal, Jay E; Caetano-Anollés, Gustavo

    2015-08-11

    The combination of domains in multidomain proteins enhances their function and structure but lengthens the molecules and increases their cost at cellular level. The dependence of domain length on the number of domains a protein holds was surveyed for a set of 60 proteomes representing free-living organisms from all kingdoms of life. Distributions were fitted using non-linear functions and fitted parameters interpreted with a formulation of decreasing returns. We find that domain length decreases with increasing number of domains in proteins, following the Menzerath-Altmann (MA) law of language. Highly significant negative correlations exist for the set of proteomes examined. Mathematically, the MA law is expressed as a power-law relationship that unfolds when molecular persistence P is a function of domain accretion. P holds two terms, one reflecting the matter-energy cost of adding domains and extending their length, the other reflecting how domain length and number impinge on information and biophysics. The pattern of diminishing returns can therefore be explained as a frustrated interplay between the strategies of economy, flexibility and robustness, matching previously observed trade-offs in the domain makeup of proteomes. Proteomes of Archaea, Fungi and to a lesser degree Plants show the largest push towards molecular economy, each at their own economic stratum. Fungi increase domain size in single domain proteins while reinforcing the pattern of diminishing returns. In contrast, Metazoa, and to lesser degrees Protista and Bacteria, relax economy. Metazoa achieves maximum flexibility and robustness by harboring compact molecules and complex domain organization, offering a new functional vocabulary for molecular biology. The tendency of parts to decrease their size when systems enlarge is universal for language and music, and now for parts of macromolecules, extending the MA law to natural systems.
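
    In its simplest power-law form the MA relationship can be fitted by ordinary least squares in log-log space. The data points below are synthetic, for illustration only:

```python
import numpy as np

# Menzerath-Altmann in its simplest power-law form: y = a * x^b, where x is
# the number of domains in a protein and y the mean domain length.
n_domains   = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
mean_length = np.array([310.0, 220.0, 185.0, 160.0, 148.0, 139.0])   # synthetic

# Linearize: log y = log a + b log x, then fit by ordinary least squares.
b, log_a = np.polyfit(np.log(n_domains), np.log(mean_length), 1)
a = np.exp(log_a)
print(f"y = {a:.1f} * x^{b:.3f}")   # b < 0: domain length shrinks as domains accrete
```

    A negative fitted exponent b is the signature of diminishing returns the abstract describes; the paper's full formulation adds further terms to the persistence function.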

  11. A hierarchical network-based algorithm for multi-scale watershed delineation

    NASA Astrophysics Data System (ADS)

    Castronova, Anthony M.; Goodall, Jonathan L.

    2014-11-01

    Watershed delineation is a process for defining a land area that contributes surface water flow to a single outlet point. It is commonly used in water resources analysis to define the domain in which hydrologic process calculations are applied. There has been a growing effort over the past decade to improve surface elevation measurements in the U.S., which has had a significant impact on the accuracy of hydrologic calculations. Traditional watershed processing on these elevation rasters, however, becomes more burdensome as data resolution increases. As a result, processing of these datasets can be troublesome on standard desktop computers. This challenge has resulted in numerous works that aim to provide high performance computing solutions to large data, high resolution data, or both. This work proposes an efficient watershed delineation algorithm for use in desktop computing environments that leverages existing data, the U.S. Geological Survey (USGS) National Hydrography Dataset Plus (NHD+), and open source software tools to construct watershed boundaries. This approach makes use of U.S. national-level hydrography data that has been precomputed using raster processing algorithms coupled with quality control routines. Our approach uses carefully arranged data and mathematical graph theory to traverse river networks and identify catchment boundaries. We demonstrate this new watershed delineation technique, compare its accuracy with traditional algorithms that derive watersheds solely from digital elevation models, and then extend our approach to address subwatershed delineation. Our findings suggest that the open-source hierarchical network-based delineation procedure presented in this work is a promising approach to watershed delineation that can be used to summarize publicly available datasets for hydrologic model input pre-processing. Through our analysis, we explore the benefits of reusing the NHD+ datasets for watershed delineation, and find that our technique offers greater flexibility and extensibility than traditional raster algorithms.
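
    The graph-traversal core of such a network-based delineation is small: reverse the "flows into" relation and breadth-first search upstream from the outlet. The toy reach network below is invented, not NHD+ data:

```python
from collections import defaultdict, deque

# Hypothetical river network: each edge (a, b) means reach a drains into b.
# Delineating a watershed then reduces to collecting every reach upstream of
# the chosen outlet by breadth-first traversal of the reversed network.
flows_into = [("r1", "r3"), ("r2", "r3"), ("r3", "r5"),
              ("r4", "r5"), ("r5", "r7"), ("r6", "r7")]

upstream = defaultdict(list)
for a, b in flows_into:
    upstream[b].append(a)

def watershed(outlet):
    """Return the outlet reach plus every reach that drains into it."""
    seen, queue = {outlet}, deque([outlet])
    while queue:
        reach = queue.popleft()
        for up in upstream[reach]:
            if up not in seen:
                seen.add(up)
                queue.append(up)
    return seen

print(sorted(watershed("r5")))   # r5 plus everything draining into it
```

    In the full procedure each reach carries a precomputed catchment polygon, so the watershed boundary is the union of the polygons returned by this traversal.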

  12. Domain fusion analysis by applying relational algebra to protein sequence and domain databases.

    PubMed

    Truong, Kevin; Ikura, Mitsuhiko

    2003-05-06

    Domain fusion analysis is a useful method to predict functionally linked proteins that may be involved in direct protein-protein interactions or in the same metabolic or signaling pathway. As separate domain databases like BLOCKS, PROSITE, Pfam, SMART, PRINTS-S, ProDom, TIGRFAMs, and amalgamated domain databases like InterPro continue to grow in size and quality, a computational method to perform domain fusion analysis that leverages these efforts will become increasingly powerful. This paper proposes a computational method employing relational algebra to find domain fusions in protein sequence databases. The feasibility of this method was illustrated on the SWISS-PROT+TrEMBL sequence database using domain predictions from the Pfam HMM (hidden Markov model) database. We identified 235 and 189 putative functionally linked protein partners in H. sapiens and S. cerevisiae, respectively. From scientific literature, we were able to confirm many of these functional linkages, while the remainder offer testable experimental hypotheses. Results can be viewed at http://calcium.uhnres.utoronto.ca/pi. As the analysis can be computed quickly on any relational database that supports standard SQL (structured query language), it can be dynamically updated along with the sequence and domain databases, thereby improving the quality of predictions over time.
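
    The relational-algebra step amounts to a self-join on a protein-domain table. A toy sketch with SQLite follows; the table and identifiers are invented, not SWISS-PROT or Pfam data:

```python
import sqlite3

# Toy protein -> domain table; a self-join finds domains fused in one protein,
# the core relational-algebra operation behind domain fusion analysis.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pd (protein TEXT, domain TEXT)")
con.executemany("INSERT INTO pd VALUES (?, ?)", [
    ("fusAB", "A"), ("fusAB", "B"),   # fusion protein carrying both domains
    ("p1", "A"),                      # single-domain proteins elsewhere
    ("p2", "B"),
    ("p3", "C"),
])

# Domain pairs fused within a single protein (each unordered pair reported once).
fused = con.execute("""
    SELECT DISTINCT x.domain, y.domain
    FROM pd x JOIN pd y
      ON x.protein = y.protein AND x.domain < y.domain
""").fetchall()
print(fused)   # [('A', 'B')] -> predicts a functional link between A and B
```

    The full analysis then joins this fused-pair relation back against single-domain proteins in other organisms to nominate the functionally linked partners.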

  13. Advances in Numerical Boundary Conditions for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.

    1997-01-01

    Advances in Computational Aeroacoustics (CAA) depend critically on the availability of accurate, nondispersive, least dissipative computation algorithms as well as high quality numerical boundary treatments. This paper focuses on the recent developments of numerical boundary conditions. In a typical CAA problem, one often encounters two types of boundaries. Because a finite computation domain is used, there are external boundaries. On the external boundaries, boundary conditions simulating the solution outside the computation domain are to be imposed. Inside the computation domain, there may be internal boundaries. On these internal boundaries, boundary conditions simulating the presence of an object or surface with specific acoustic characteristics are to be applied. Numerical boundary conditions, both external and internal, developed for simple model problems are reviewed and examined. Numerical boundary conditions for real aeroacoustic problems are also discussed through specific examples. The paper concludes with a description of some much needed research in numerical boundary conditions for CAA.
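
    A one-dimensional caricature of an external boundary treatment: for the advection equation, an upwind scheme lets a pulse leave the finite computation domain through the outflow boundary without reflection, while the inflow boundary imposes the absence of incoming signal. The scheme and parameters are illustrative only:

```python
import numpy as np

# 1D advection u_t + c u_x = 0 on a finite domain with an upwind scheme.
# The upwind stencil only looks upstream, so the right (outflow) boundary
# needs no special closure and the pulse exits without reflecting; the left
# (inflow) boundary imposes "no incoming signal", u = 0.
c, dx = 1.0, 0.01
dt = 0.5 * dx / c                       # CFL-stable time step (Courant = 0.5)
x = np.arange(0.0, 2.0, dx)
u = np.exp(-((x - 0.5) ** 2) / 0.005)   # Gaussian pulse inside the domain

for _ in range(600):                    # long enough for the pulse to leave
    u[1:] -= c * dt / dx * (u[1:] - u[:-1])   # first-order upwind update
    u[0] = 0.0                                # inflow boundary condition

print(np.abs(u).max())                  # residual amplitude is tiny
```

    Radiation and outflow conditions for the full linearized Euler equations follow the same principle of respecting the outgoing characteristics, but require considerably more care.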

  14. Evolutionary versatility of eukaryotic protein domains revealed by their bigram networks

    PubMed Central

    2011-01-01

    Background Protein domains are globular structures of independently folded polypeptides that exert catalytic or binding activities. Their sequences are recognized as evolutionary units that, through genome recombination, constitute protein repertoires of linkage patterns. Via mutations, domains acquire modified functions that contribute to the fitness of cells and organisms. Recent studies have addressed the evolutionary selection that may have shaped the functions of individual domains and the emergence of particular domain combinations, which led to new cellular functions in multi-cellular animals. This study focuses on modeling domain linkage globally and investigates evolutionary implications that may be revealed by novel computational analysis. Results A survey of 77 completely sequenced eukaryotic genomes implies a potential hierarchical and modular organization of biological functions in most living organisms. Domains in a genome or multiple genomes are modeled as a network of hetero-duplex covalent linkages, termed bigrams. A novel computational technique is introduced to decompose such networks, whereby the notion of domain "networking versatility" is derived and measured. The most and least "versatile" domains (termed "core domains" and "peripheral domains" respectively) are examined both computationally via sequence conservation measures and experimentally using selected domains. Our study suggests that such a versatility measure extracted from the bigram networks correlates with the adaptivity of domains during evolution, where the network core domains are highly adaptive, significantly contrasting the network peripheral domains. Conclusions Domain recombination has played a major part in the evolution of eukaryotes, contributing to genome complexity. From a system point of view, as the results of selection and constant refinement, networks of domain linkage are structured in a hierarchical modular fashion. 
Domains with high degree of networking versatility appear to be evolutionary adaptive, potentially through functional innovations. Domain bigram networks are informative as a model of biological functions. The networking versatility indices extracted from such networks for individual domains reflect the strength of evolutionary selection that the domains have experienced. PMID:21849086

  15. Evolutionary versatility of eukaryotic protein domains revealed by their bigram networks.

    PubMed

    Xie, Xueying; Jin, Jing; Mao, Yongyi

    2011-08-18

    Protein domains are globular structures of independently folded polypeptides that exert catalytic or binding activities. Their sequences are recognized as evolutionary units that, through genome recombination, constitute protein repertoires of linkage patterns. Via mutations, domains acquire modified functions that contribute to the fitness of cells and organisms. Recent studies have addressed the evolutionary selection that may have shaped the functions of individual domains and the emergence of particular domain combinations, which led to new cellular functions in multi-cellular animals. This study focuses on modeling domain linkage globally and investigates evolutionary implications that may be revealed by novel computational analysis. A survey of 77 completely sequenced eukaryotic genomes implies a potential hierarchical and modular organization of biological functions in most living organisms. Domains in a genome or multiple genomes are modeled as a network of hetero-duplex covalent linkages, termed bigrams. A novel computational technique is introduced to decompose such networks, whereby the notion of domain "networking versatility" is derived and measured. The most and least "versatile" domains (termed "core domains" and "peripheral domains", respectively) are examined both computationally via sequence conservation measures and experimentally using selected domains. Our study suggests that such a versatility measure extracted from the bigram networks correlates with the adaptivity of domains during evolution, where the network core domains are highly adaptive, in marked contrast to the network peripheral domains. Domain recombination has played a major part in the evolution of eukaryotes, contributing to genome complexity. From a systems point of view, as the result of selection and constant refinement, networks of domain linkage are structured in a hierarchical, modular fashion. Domains with a high degree of networking versatility appear to be evolutionarily adaptive, potentially through functional innovations. Domain bigram networks are informative as a model of biological functions. The networking versatility indices extracted from such networks for individual domains reflect the strength of evolutionary selection that the domains have experienced.

  16. Effect of ram semen extenders and supplements on computer assisted sperm analysis parameters

    USDA-ARS?s Scientific Manuscript database

    A study evaluated the effects of ram semen extender and extender supplementation on computer assisted sperm analysis (CASA) parameters positively correlated with progressive motility. Semen collected from 5 rams was distributed across treatment combinations consisting of either TRIS citrate (T) or ...

  17. Machine learning in materials informatics: recent applications and prospects

    DOE PAGES

    Ramprasad, Rampi; Batra, Rohit; Pilania, Ghanshyam; ...

    2017-12-13

    Propelled partly by the Materials Genome Initiative, and partly by the algorithmic developments and the resounding successes of data-driven efforts in other domains, informatics strategies are beginning to take shape within materials science. These approaches lead to surrogate machine learning models that enable rapid predictions based purely on past data rather than by direct experimentation or by computations/simulations in which fundamental equations are explicitly solved. Data-centric informatics methods are becoming useful to determine material properties that are hard to measure or compute using traditional methods—due to the cost, time or effort involved—but for which reliable data either already exists or can be generated for at least a subset of the critical cases. Predictions are typically interpolative, involving fingerprinting a material numerically first, and then following a mapping (established via a learning algorithm) between the fingerprint and the property of interest. Fingerprints, also referred to as “descriptors”, may be of many types and scales, as dictated by the application domain and needs. Predictions may also be extrapolative—extending into new materials spaces—provided prediction uncertainties are properly taken into account. This article attempts to provide an overview of some of the recent successful data-driven “materials informatics” strategies undertaken in the last decade, with particular emphasis on the fingerprint or descriptor choices. The review also identifies some challenges the community is facing and those that should be overcome in the near future.
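
    The interpolative fingerprint-to-property mapping described above can be illustrated with a toy sketch. Everything below (the fingerprints, the property values, and the ridge-regression learner) is synthetic and hypothetical; it only shows the shape of such a pipeline, not any model from the reviewed literature.

```python
import numpy as np

# Toy "materials informatics" pipeline: numeric fingerprints -> property.
# Fingerprints, property values and the learner are all synthetic.
rng = np.random.default_rng(1)
fingerprints = rng.normal(size=(50, 4))        # 50 materials, 4 descriptors
true_w = np.array([2.0, -1.0, 0.5, 0.0])
prop = fingerprints @ true_w + 0.01 * rng.standard_normal(50)

# Learn the fingerprint-to-property mapping with ridge regression:
# w = (X^T X + lam I)^-1 X^T y
lam = 1e-3
X, y = fingerprints, prop
w = np.linalg.solve(X.T @ X + lam * np.eye(4), X.T @ y)

# Interpolative prediction for a new, unseen fingerprint
new_material = np.array([1.0, 0.0, -2.0, 0.3])
print(round(float(new_material @ w), 2))       # close to the true value 1.0
```

    The regularization term plays the role of the "prediction uncertainty" control mentioned in the abstract only in the loosest sense; real materials-informatics fingerprints are far richer than four random descriptors.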

  18. Machine learning in materials informatics: recent applications and prospects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramprasad, Rampi; Batra, Rohit; Pilania, Ghanshyam

    Propelled partly by the Materials Genome Initiative, and partly by the algorithmic developments and the resounding successes of data-driven efforts in other domains, informatics strategies are beginning to take shape within materials science. These approaches lead to surrogate machine learning models that enable rapid predictions based purely on past data rather than by direct experimentation or by computations/simulations in which fundamental equations are explicitly solved. Data-centric informatics methods are becoming useful to determine material properties that are hard to measure or compute using traditional methods—due to the cost, time or effort involved—but for which reliable data either already exists or can be generated for at least a subset of the critical cases. Predictions are typically interpolative, involving fingerprinting a material numerically first, and then following a mapping (established via a learning algorithm) between the fingerprint and the property of interest. Fingerprints, also referred to as “descriptors”, may be of many types and scales, as dictated by the application domain and needs. Predictions may also be extrapolative—extending into new materials spaces—provided prediction uncertainties are properly taken into account. This article attempts to provide an overview of some of the recent successful data-driven “materials informatics” strategies undertaken in the last decade, with particular emphasis on the fingerprint or descriptor choices. The review also identifies some challenges the community is facing and those that should be overcome in the near future.

  19. Tertiary model of a plant cellulose synthase

    PubMed Central

    Sethaphong, Latsavongsakda; Haigler, Candace H.; Kubicki, James D.; Zimmer, Jochen; Bonetta, Dario; DeBolt, Seth; Yingling, Yaroslava G.

    2013-01-01

    A 3D atomistic model of a plant cellulose synthase (CESA) has remained elusive despite over forty years of experimental effort. Here, we report a computationally predicted 3D structure of 506 amino acids of cotton CESA within the cytosolic region. Comparison of the predicted plant CESA structure with the solved structure of a bacterial cellulose-synthesizing protein validates the overall fold of the modeled glycosyltransferase (GT) domain. The coaligned plant and bacterial GT domains share a six-stranded β-sheet, five α-helices, and conserved motifs similar to those required for catalysis in other GT-2 glycosyltransferases. Extending beyond the cross-kingdom similarities related to cellulose polymerization, the predicted structure of cotton CESA reveals that plant-specific modules (plant-conserved region and class-specific region) fold into distinct subdomains on the periphery of the catalytic region. Computational results support the importance of the plant-conserved region and/or class-specific region in CESA oligomerization to form the multimeric cellulose–synthesis complexes that are characteristic of plants. Relatively high sequence conservation between plant CESAs allowed mapping of known mutations and two previously undescribed mutations that perturb cellulose synthesis in Arabidopsis thaliana to their analogous positions in the modeled structure. Most of these mutation sites are near the predicted catalytic region, and the confluence of other mutation sites supports the existence of previously undefined functional nodes within the catalytic core of CESA. Overall, the predicted tertiary structure provides a platform for the biochemical engineering of plant CESAs. PMID:23592721

  20. Modeling extracellular fields for a three-dimensional network of cells using NEURON.

    PubMed

    Appukuttan, Shailesh; Brain, Keith L; Manchanda, Rohit

    2017-10-01

    Computational modeling of biological cells usually ignores their extracellular fields, assuming them to be inconsequential. Though such an assumption might be justified in certain cases, it is debatable for networks of tightly packed cells, such as in the central nervous system and the syncytial tissues of cardiac and smooth muscle. In the present work, we demonstrate a technique to couple the extracellular fields of individual cells within the NEURON simulation environment. The existing features of the simulator are extended by explicitly defining current balance equations, resulting in the coupling of the extracellular fields of adjacent cells. With this technique, we achieved continuity of extracellular space for a network model, thereby allowing the exploration of extracellular interactions computationally. Using a three-dimensional network model, passive and active electrical properties were evaluated under varying levels of extracellular volumes. Simultaneous intracellular and extracellular recordings for synaptic and action potentials were analyzed, and the potential of ephaptic transmission towards functional coupling of cells was explored. We have implemented a true bi-domain representation of a network of cells, with the extracellular domain being continuous throughout the entire model. This has hitherto not been achieved using NEURON or other compartmental modeling platforms. We have demonstrated the coupling of the extracellular field of every cell in a three-dimensional model to obtain a continuous uniform extracellular space. This technique provides a framework for the investigation of interactions in tightly packed networks of cells via their extracellular fields. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Cognitive ability, academic achievement and academic self-concept: extending the internal/external frame of reference model.

    PubMed

    Chen, Ssu-Kuang; Hwang, Fang-Ming; Yeh, Yu-Chen; Lin, Sunny S J

    2012-06-01

    Marsh's internal/external (I/E) frame of reference model depicts the relationship between achievement and self-concept in specific academic domains. Few efforts have been made to examine concurrent relationships among cognitive ability, achievement, and academic self-concept (ASC) within an I/E model framework. To simultaneously examine the influences of domain-specific cognitive ability and grades on domain self-concept in an extended I/E model, including the indirect effect of domain-specific cognitive ability on domain self-concept via grades. Tenth grade respondents (628 male, 452 female) to a national adolescent survey conducted in Taiwan. Respondents completed surveys designed to measure maths and verbal aptitudes. Data on Maths and Chinese class grades and self-concepts were also collected. Statistically significant and positive path coefficients were found between cognitive ability and self-concept in the same domain (direct effect) and between these two constructs via grades (indirect effect). The cross-domain effects of either ability or grades on ASC were negatively significant. Taiwanese 10th graders tend to evaluate their ASCs based on a mix of ability and achievement, with achievement as a mediator exceeding ability as a predictor. In addition, the cross-domain effects suggest that Taiwanese students are likely to view Maths and verbal abilities and achievements as distinctly different. ©2011 The British Psychological Society.

  2. Time-Domain Impedance Boundary Conditions for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Auriault, Laurent

    1996-01-01

    It is an accepted practice in aeroacoustics to characterize the properties of an acoustically treated surface by a quantity known as impedance. Impedance is a complex quantity. As such, it is designed primarily for frequency-domain analysis. Time-domain boundary conditions that are the equivalent of the frequency-domain impedance boundary condition are proposed. Both single frequency and model broadband time-domain impedance boundary conditions are provided. It is shown that the proposed boundary conditions, together with the linearized Euler equations, form well-posed initial boundary value problems. Unlike ill-posed problems, they are free from spurious instabilities that would render time-marching computational solutions impossible.

  3. A Cross-Domain Collaborative Filtering Algorithm Based on Feature Construction and Locally Weighted Linear Regression

    PubMed Central

    Jiang, Feng; Han, Ji-zhong

    2018-01-01

    Cross-domain collaborative filtering (CDCF) solves the sparsity problem by transferring rating knowledge from auxiliary domains. Obviously, different auxiliary domains have different importance to the target domain. However, previous works cannot effectively evaluate the significance of different auxiliary domains. To overcome this drawback, we propose a cross-domain collaborative filtering algorithm based on Feature Construction and Locally Weighted Linear Regression (FCLWLR). We first construct features in different domains and use these features to represent different auxiliary domains. Thus the weight computation across different domains can be converted into the weight computation across different features. Then we combine the features in the target domain and in the auxiliary domains together and convert the cross-domain recommendation problem into a regression problem. Finally, we employ a Locally Weighted Linear Regression (LWLR) model to solve the regression problem. As LWLR is a nonparametric regression method, it can effectively avoid the underfitting or overfitting problems that occur in parametric regression methods. We conduct extensive experiments to show that the proposed FCLWLR algorithm is effective in addressing the data sparsity problem by transferring the useful knowledge from the auxiliary domains, as compared to many state-of-the-art single-domain or cross-domain CF methods. PMID:29623088

  4. A Cross-Domain Collaborative Filtering Algorithm Based on Feature Construction and Locally Weighted Linear Regression.

    PubMed

    Yu, Xu; Lin, Jun-Yu; Jiang, Feng; Du, Jun-Wei; Han, Ji-Zhong

    2018-01-01

    Cross-domain collaborative filtering (CDCF) solves the sparsity problem by transferring rating knowledge from auxiliary domains. Obviously, different auxiliary domains have different importance to the target domain. However, previous works cannot effectively evaluate the significance of different auxiliary domains. To overcome this drawback, we propose a cross-domain collaborative filtering algorithm based on Feature Construction and Locally Weighted Linear Regression (FCLWLR). We first construct features in different domains and use these features to represent different auxiliary domains. Thus the weight computation across different domains can be converted into the weight computation across different features. Then we combine the features in the target domain and in the auxiliary domains together and convert the cross-domain recommendation problem into a regression problem. Finally, we employ a Locally Weighted Linear Regression (LWLR) model to solve the regression problem. As LWLR is a nonparametric regression method, it can effectively avoid the underfitting or overfitting problems that occur in parametric regression methods. We conduct extensive experiments to show that the proposed FCLWLR algorithm is effective in addressing the data sparsity problem by transferring the useful knowledge from the auxiliary domains, as compared to many state-of-the-art single-domain or cross-domain CF methods.
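
    The locally weighted linear regression step at the heart of FCLWLR can be sketched in isolation. The code below is generic LWLR (Gaussian kernel weights plus a closed-form weighted least-squares fit) on synthetic one-dimensional data, not the authors' full cross-domain pipeline; the function and variable names are illustrative.

```python
import numpy as np

def lwlr_predict(x_query, X, y, tau=0.5):
    """Predict y at x_query with locally weighted linear regression.

    Each training point gets a Gaussian kernel weight of bandwidth tau
    centered on the query, so nearby points dominate the local fit.
    """
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])     # add intercept column
    qb = np.hstack([1.0, np.atleast_1d(x_query)])
    d2 = np.sum((X - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * tau ** 2))                # kernel weights
    W = np.diag(w)
    # Weighted normal equations: (Xb^T W Xb) theta = Xb^T W y
    theta = np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ y)
    return qb @ theta

# Noisy nonlinear data: a global line underfits, but LWLR tracks
# the local trend around each query point.
rng = np.random.default_rng(0)
X = rng.uniform(0, 6, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)
pred = lwlr_predict(np.array([1.5]), X, y, tau=0.3)
print(round(float(pred), 2))
```

    Because the fit is re-solved per query, LWLR is nonparametric in exactly the sense the abstract emphasizes: no single global parameter vector has to fit all regions of the data.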

  5. Distributed-Memory Computing With the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA)

    NASA Technical Reports Server (NTRS)

    Riley, Christopher J.; Cheatwood, F. McNeil

    1997-01-01

    The Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA), a Navier-Stokes solver, has been modified for use in a parallel, distributed-memory environment using the Message-Passing Interface (MPI) standard. A standard domain decomposition strategy is used in which the computational domain is divided into subdomains with each subdomain assigned to a processor. Performance is examined on dedicated parallel machines and a network of desktop workstations. The effect of domain decomposition and frequency of boundary updates on performance and convergence is also examined for several realistic configurations and conditions typical of large-scale computational fluid dynamic analysis.
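
    The "standard domain decomposition strategy" mentioned above, dividing the computational domain into per-processor subdomains, can be sketched at its simplest. The helper below is a generic 1D block partitioner under assumed even-as-possible splitting; it is not LAURA's actual decomposition, and the name is hypothetical.

```python
def decompose_1d(n_cells, n_procs):
    """Split n_cells into n_procs contiguous subdomains, as evenly as
    possible; returns (start, end) index pairs with end exclusive."""
    base, extra = divmod(n_cells, n_procs)
    bounds = []
    start = 0
    for rank in range(n_procs):
        # the first `extra` ranks each take one leftover cell
        size = base + (1 if rank < extra else 0)
        bounds.append((start, start + size))
        start += size
    return bounds

# 10 cells over 4 processors: ranks 0-1 get 3 cells, ranks 2-3 get 2
print(decompose_1d(10, 4))   # [(0, 3), (3, 6), (6, 8), (8, 10)]
```

    In an MPI code each rank would own one of these index ranges and exchange boundary (halo) values with its neighbors at the frequency discussed in the abstract.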

  6. Moving Computational Domain Method and Its Application to Flow Around a High-Speed Car Passing Through a Hairpin Curve

    NASA Astrophysics Data System (ADS)

    Watanabe, Koji; Matsuno, Kenichi

    This paper presents a new method for simulating flows driven by a body traveling with neither restrictions on its motion nor a limit on region size. In the present method, named the 'Moving Computational Domain Method', the whole computational domain, including the bodies inside it, moves through physical space without any limit on region size. Since the entire grid of the computational domain moves according to the movement of the body, the flow solver has to be constructed on a moving grid system, and it is important for the solver to satisfy the physical and geometric conservation laws simultaneously on the moving grid. To this end, the Moving-Grid Finite-Volume Method is employed as the flow solver. The Moving Computational Domain Method thus makes it possible to simulate flow driven by any kind of body motion, in a region of any size, while satisfying the physical and geometric conservation laws simultaneously. In this paper, the method is applied to the flow around a high-speed car passing through a hairpin curve. The distinctive flow field driven by the car at the hairpin curve is demonstrated in detail. The results show the promise of the method.

  7. Numerical models for fluid-grains interactions: opportunities and limitations

    NASA Astrophysics Data System (ADS)

    Esteghamatian, Amir; Rahmani, Mona; Wachs, Anthony

    2017-06-01

    In the framework of a multi-scale approach, we develop numerical models for suspension flows. At the micro scale level, we perform particle-resolved numerical simulations using a Distributed Lagrange Multiplier/Fictitious Domain approach. At the meso scale level, we use a two-way Euler/Lagrange approach with a Gaussian filtering kernel to model fluid-solid momentum transfer. At both the micro and meso scale levels, particles are individually tracked in a Lagrangian way and all inter-particle collisions are computed by a Discrete Element/Soft-sphere method. The previous numerical models have been extended to handle particles of arbitrary shape (non-spherical, angular and even non-convex) as well as to treat heat and mass transfer. All simulation tools are fully MPI-parallel with standard domain decomposition and run on supercomputers with satisfactory scalability on up to a few thousand cores. The main asset of multi-scale analysis is the ability to extend our comprehension of the dynamics of suspension flows based on the knowledge acquired from the high-fidelity micro scale simulations and to use that knowledge to improve the meso scale model. We illustrate how we can benefit from this strategy for a fluidized bed, where we introduce a stochastic drag force model derived from micro-scale simulations to recover the proper level of particle fluctuations. Conversely, we discuss the limitations of such modelling tools, such as their limited ability to capture lubrication forces and boundary layers in highly inertial flows. We suggest ways to overcome these limitations in order to enhance further the capabilities of the numerical models.

  8. Recent Advances in Laplace Transform Analytic Element Method (LT-AEM) Theory and Application to Transient Groundwater Flow

    NASA Astrophysics Data System (ADS)

    Kuhlman, K. L.; Neuman, S. P.

    2006-12-01

    Furman and Neuman (2003) proposed a Laplace Transform Analytic Element Method (LT-AEM) for transient groundwater flow. LT-AEM applies the traditionally steady-state AEM to the Laplace transformed groundwater flow equation, and back-transforms the resulting solution to the time domain using a Fourier series numerical inverse Laplace transform method (de Hoog et al., 1982). We have extended the method so it can compute hydraulic head and flow velocity distributions due to any two-dimensional combination and arrangement of point, line, circular and elliptical area sinks and sources, nested circular or elliptical regions having different hydraulic properties, and areas of specified head, flux or initial condition. The strengths of all sinks and sources, and the specified head and flux values, can all vary in both space and time in an independent and arbitrary fashion. Initial conditions may vary from one area element to another. A solution is obtained by matching heads and normal fluxes along the boundary of each element. The effect that each element has on the total flow is expressed in terms of generalized Fourier series that converge rapidly (<20 terms) in most cases. As there are more matching points than unknown Fourier terms, the matching is accomplished in Laplace space using least-squares. The method is illustrated by calculating the resulting transient head and flow velocities due to an arrangement of elements in both finite and infinite domains. The 2D LT-AEM elements already developed and implemented are currently being extended to solve the 3D groundwater flow equation.
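
    Numerical inversion of a Laplace-domain solution, the final step of LT-AEM, can be illustrated with the simpler Gaver-Stehfest algorithm; note the paper itself uses the de Hoog Fourier-series method, which is more robust for oscillatory solutions. The sketch below, with hypothetical names, inverts F(s) = 1/(s+1), whose exact inverse is e^(-t).

```python
import math

def stehfest_invert(F, t, N=12):
    """Gaver-Stehfest numerical inverse Laplace transform: approximate
    f(t) from samples of F(s) at real points s = k*ln(2)/t.

    N must be even; N=12 is a common choice in double precision."""
    ln2_t = math.log(2.0) / t
    total = 0.0
    for k in range(1, N + 1):
        # Stehfest weight V_k (standard closed-form expression)
        v = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            v += (j ** (N // 2) * math.factorial(2 * j)) / (
                math.factorial(N // 2 - j) * math.factorial(j)
                * math.factorial(j - 1) * math.factorial(k - j)
                * math.factorial(2 * j - k))
        v *= (-1) ** (k + N // 2)
        total += v * F(k * ln2_t)
    return ln2_t * total

# F(s) = 1/(s+1) has the exact time-domain inverse f(t) = exp(-t)
approx = stehfest_invert(lambda s: 1.0 / (s + 1.0), 1.0)
print(round(approx, 4))   # close to exp(-1) ~ 0.3679
```

    Stehfest only needs F at real s, which keeps the sketch short; the de Hoog method evaluates F along a contour of complex s and accelerates the resulting Fourier series, which is why it handles a wider class of transient-flow solutions.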

  9. Understanding Pre-Service Teachers' Computer Attitudes: Applying and Extending the Technology Acceptance Model

    ERIC Educational Resources Information Center

    Teo, T.; Lee, C. B.; Chai, C. S.

    2008-01-01

    Computers are increasingly widespread, influencing many aspects of our social and work lives. As we move into a technology-based society, it is important that classroom experiences with computers are made available for all students. The purpose of this study is to examine pre-service teachers' attitudes towards computers. This study extends the…

  10. A New Domain Decomposition Approach for the Gust Response Problem

    NASA Technical Reports Server (NTRS)

    Scott, James R.; Atassi, Hafiz M.; Susan-Resiga, Romeo F.

    2002-01-01

    A domain decomposition method is developed for solving the aerodynamic/aeroacoustic problem of an airfoil in a vortical gust. The computational domain is divided into inner and outer regions wherein the governing equations are cast in different forms suitable for accurate computations in each region. Boundary conditions which ensure continuity of pressure and velocity are imposed along the interface separating the two regions. A numerical study is presented for reduced frequencies ranging from 0.1 to 3.0. The domain decomposition approach is seen to provide robust, grid-independent solutions.

  11. Shielding from space radiations

    NASA Technical Reports Server (NTRS)

    Chang, C. Ken; Badavi, Forooz F.; Tripathi, Ram K.

    1993-01-01

    This Progress Report, covering the period December 1, 1992 to June 1, 1993, presents the development of an analytical solution to the heavy ion transport equation in terms of a Green's function formalism. The results of the mathematical development are recast into a highly efficient computer code for space applications. The efficiency of this algorithm is accomplished by a nonperturbative technique of extending the Green's function over the solution domain. The code may also be applied to accelerator boundary conditions to allow code validation in laboratory experiments. Results from the isotopic version of the code, with 59 isotopes present, are presented for a single-layer target material for the case of an iron beam projectile at 600 MeV/nucleon in water. A listing of the single-layer isotopic version of the code is included.

  12. Aging Mechanisms and Control. Symposium Part A - Developments in Computational Aero- and Hydro-Acoustics. Symposium Part B - Monitoring and Management of Gas Turbine Fleets for Extended Life and Reduced Costs (Les mecanismes vieillissants et le controle) (Symposium Partie A - Developpements dans le domaine de l’aeroacoustique et l’hydroacoustique numeriques) (Symposium Partie B - Le suivi et la gestion des turbomoteurs en vue du prolongement de l

    DTIC Science & Technology

    2003-02-01

    …the flow and noise in the diffuser of an industrial gas turbine engine. A steady RANS CFD calculation and experiments were used to identify the gross…

  13. Multitasking the three-dimensional transport code TORT on CRAY platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azmy, Y.Y.; Barnett, D.A.; Burre, C.A.

    1996-04-01

    The multitasking options in the three-dimensional neutral particle transport code TORT, originally implemented for Cray's CTSS operating system, are revived and extended to run on Cray Y/MP and C90 computers using the UNICOS operating system. These include two coarse-grained domain decompositions: across octants, and across directions within an octant, termed Octant Parallel (OP) and Direction Parallel (DP), respectively. Parallel performance of the DP is significantly enhanced by increasing the task grain size and reducing load imbalance via dynamic scheduling of the discrete angles among the participating tasks. Substantial Wall Clock speedup factors, approaching 4.5 using 8 tasks, have been measured in a time-sharing environment, and generally depend on the test problem specifications, number of tasks, and machine loading during execution.

  14. Theory of Mind: A Neural Prediction Problem

    PubMed Central

    Koster-Hale, Jorie; Saxe, Rebecca

    2014-01-01

    Predictive coding posits that neural systems make forward-looking predictions about incoming information. Neural signals contain information not about the currently perceived stimulus, but about the difference between the observed and the predicted stimulus. We propose to extend the predictive coding framework from high-level sensory processing to the more abstract domain of theory of mind; that is, to inferences about others’ goals, thoughts, and personalities. We review evidence that, across brain regions, neural responses to depictions of human behavior, from biological motion to trait descriptions, exhibit a key signature of predictive coding: reduced activity to predictable stimuli. We discuss how future experiments could distinguish predictive coding from alternative explanations of this response profile. This framework may provide an important new window on the neural computations underlying theory of mind. PMID:24012000

  15. The Cerebellum: Adaptive Prediction for Movement and Cognition.

    PubMed

    Sokolov, Arseny A; Miall, R Chris; Ivry, Richard B

    2017-05-01

    Over the past 30 years, cumulative evidence has indicated that cerebellar function extends beyond sensorimotor control. This view has emerged from studies of neuroanatomy, neuroimaging, neuropsychology, and brain stimulation, with the results implicating the cerebellum in domains as diverse as attention, language, executive function, and social cognition. Although the literature provides sophisticated models of how the cerebellum helps refine movements, it remains unclear how the core mechanisms of these models can be applied when considering a broader conceptualization of cerebellar function. In light of recent multidisciplinary findings, we examine how two key concepts that have been suggested as general computational principles of cerebellar function, prediction and error-based learning, might be relevant in the operation of cognitive cerebro-cerebellar loops. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Human Centered Autonomous and Assistant Systems Testbed for Exploration Operations

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Mount, Frances; Carreon, Patricia; Torney, Susan E.

    2001-01-01

    The Engineering and Mission Operations Directorates at NASA Johnson Space Center are combining laboratories and expertise to establish the Human Centered Autonomous and Assistant Systems Testbed for Exploration Operations. This is a testbed for human centered design, development and evaluation of intelligent autonomous and assistant systems that will be needed for human exploration and development of space. This project will improve human-centered analysis, design and evaluation methods for developing intelligent software. This software will support human-machine cognitive and collaborative activities in future interplanetary work environments where distributed computer and human agents cooperate. We are developing and evaluating prototype intelligent systems for distributed multi-agent mixed-initiative operations. The primary target domain is control of life support systems in a planetary base. Technical approaches will be evaluated for use during extended manned tests in the target domain, the Bioregenerative Advanced Life Support Systems Test Complex (BIO-Plex). A spinoff target domain is the International Space Station (ISS) Mission Control Center (MCC). Products of this project include human-centered intelligent software technology, innovative human interface designs, and human-centered software development processes, methods and products. The testbed uses adjustable autonomy software and life support systems simulation models from the Adjustable Autonomy Testbed to represent operations on the remote planet. Ground operations prototypes and concepts will be evaluated in the Exploration Planning and Operations Center (ExPOC) and Jupiter Facility.

  17. Ontogeny of the sheathing leaf base in maize (Zea mays).

    PubMed

    Johnston, Robyn; Leiboff, Samuel; Scanlon, Michael J

    2015-01-01

    Leaves develop from the shoot apical meristem (SAM) via recruitment of leaf founder cells. Unlike eudicots, most monocot leaves display parallel venation and sheathing bases wherein the margins overlap the stem. Here we utilized computed tomography (CT) imaging, localization of PIN-FORMED1 (PIN1) auxin transport proteins, and in situ hybridization of leaf developmental transcripts to analyze the ontogeny of monocot leaf morphology in maize (Zea mays). CT imaging of whole-mounted shoot apices illustrates the plastochron-specific stages during initiation of the basal sheath margins from the tubular disc of insertion (DOI). PIN1 localizations identify basipetal auxin transport in the SAM L1 layer at the site of leaf initiation, a process that continues reiteratively during later recruitment of lateral leaf domains. Refinement of these auxin transport domains results in multiple, parallel provascular strands within the initiating primordium. By contrast, auxin is transported from the L2 toward the L1 at the developing margins of the leaf sheath. Transcripts involved in organ boundary formation and dorsiventral patterning accumulate within the DOI, preceding the outgrowth of the overlapping margins of the sheathing leaf base. We suggest a model wherein sheathing bases and parallel veins are both patterned via the extended recruitment of lateral maize leaf domains from the SAM. © 2014 The Authors. New Phytologist © 2014 New Phytologist Trust.

  18. Finite-difference time-domain modelling of through-the-Earth radio signal propagation

    NASA Astrophysics Data System (ADS)

    Ralchenko, M.; Svilans, M.; Samson, C.; Roper, M.

    2015-12-01

    This research seeks to extend the knowledge of how a very low frequency (VLF) through-the-Earth (TTE) radio signal behaves as it propagates underground, by calculating and visualizing the strength of the electric and magnetic fields for an arbitrary geology through numeric modelling. To achieve this objective, a new software tool has been developed using the finite-difference time-domain method. This technique is particularly well suited to visualizing the distribution of electromagnetic fields in an arbitrary geology. The frequency range of TTE radio (400-9000 Hz) and the geometrical scales involved (1 m resolution for domains a few hundred metres in size) require processing a grid composed of millions of cells for thousands of time steps, which is computationally expensive. Graphics processing unit acceleration was used to reduce execution time from days and weeks to minutes and hours. Results from the new modelling tool were compared to three cases for which an analytic solution is known. Two more case studies were done featuring complex geologic environments relevant to TTE communications that cannot be solved analytically. There was good agreement between numeric and analytic results. Deviations were likely caused by numeric artifacts from the model boundaries; however, in a TTE application in field conditions, the uncertainty in the conductivity of the various geologic formations will greatly outweigh these small numeric errors.
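
    The leapfrog update at the heart of any FDTD solver can be sketched in one dimension. The following is a minimal toy in normalized units, not the paper's 3-D conductive-earth code: E and H live on staggered grids and are advanced alternately, with stability requiring a Courant number S = c·dt/dx ≤ 1 in 1-D.

```python
import numpy as np

# Minimal 1-D FDTD sketch (normalized units). Illustrative only -- the
# paper's tool solves 3-D VLF problems in conductive geology with GPU
# acceleration; none of those features appear here.
nx, nt = 400, 300        # grid cells, time steps
courant = 0.5            # Courant number S = c*dt/dx (<= 1 for stability)

ez = np.zeros(nx)        # electric field at integer grid points
hy = np.zeros(nx - 1)    # magnetic field at half-grid points

for n in range(nt):
    hy += courant * (ez[1:] - ez[:-1])         # update H from curl E
    ez[1:-1] += courant * (hy[1:] - hy[:-1])   # update E from curl H
    ez[50] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source

# the injected pulse propagates outward at S cells per time step
```

    The same pattern scales directly to 3-D, where each of the six field components is updated from the curl of the other field; the cell counts quoted in the abstract come from that volumetric grid.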

  19. Influence of the valine zipper region on the structure and aggregation of the basic leucine zipper (bZIP) domain of activating transcription factor 5 (ATF5).

    PubMed

    Ciaccio, Natalie A; Reynolds, T Steele; Middaugh, C Russell; Laurence, Jennifer S

    2012-11-05

    Protein aggregation is a major problem for biopharmaceuticals. While the control of aggregation is critically important for the future of protein pharmaceuticals, mechanisms of aggregate assembly, particularly the role that structure plays, are still poorly understood. Increasing evidence indicates that partially folded intermediates critically influence the aggregation pathway. We have previously reported the use of the basic leucine zipper (bZIP) domain of activating transcription factor 5 (ATF5) as a partially folded model system to investigate protein aggregation. This domain contains three regions with differing structural propensity: a N-terminal polybasic region, a central helical leucine zipper region, and a C-terminal extended valine zipper region. Additionally, a centrally positioned cysteine residue readily forms an intermolecular disulfide bond that reduces aggregation. Computational analysis of ATF5 predicts that the valine zipper region facilitates self-association. Here we test this hypothesis using a truncated mutant lacking the C-terminal valine zipper region. We compare the structure and aggregation of this mutant to the wild-type (WT) form under both reducing and nonreducing conditions. Our data indicate that removal of this region results in a loss of α-helical structure in the leucine zipper and a change in the mechanism of self-association. The mutant form displays increased association at low temperature but improved resistance to thermally induced aggregation.

  20. Convergence issues in domain decomposition parallel computation of hovering rotor

    NASA Astrophysics Data System (ADS)

    Xiao, Zhongyun; Liu, Gang; Mou, Bin; Jiang, Xiong

    2018-05-01

    The implicit LU-SGS time integration algorithm has been widely used in parallel computation despite its lack of information from adjacent domains. When applied to parallel computation of hovering rotor flows in a rotating frame, it gives rise to convergence issues. To remedy the problem, three LU factorization-based implicit schemes (LU-SGS, DP-LUR and HLU-SGS) are investigated comparatively. A test case of pure grid rotation is designed to verify these algorithms, and shows that the LU-SGS algorithm introduces errors on boundary cells. When partition boundaries are circumferential, the errors grow in proportion to grid speed, accumulate with the rotation, and ultimately lead to computational failure. Meanwhile, the DP-LUR and HLU-SGS methods show good convergence owing to their boundary treatment, which makes them desirable in domain decomposition parallel computations.
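
    The "SGS" in LU-SGS stands for symmetric Gauss-Seidel: one forward sweep through the lower triangle followed by one backward sweep through the upper triangle per iteration. A minimal dense-matrix sketch of that kernel is below; the CFD scheme applies the same idea block-wise to linearized flux Jacobians, and the domain-decomposition issue in the abstract arises precisely because each sweep needs up-to-date neighbor values that a partition boundary cuts off. This is an illustration of the sweep structure only, not the rotor solver.

```python
import numpy as np

def sgs_solve(A, b, sweeps=50):
    """Symmetric Gauss-Seidel: forward (lower) then backward (upper)
    sweep per iteration. Converges for diagonally dominant / SPD A."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in range(n):               # forward sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        for i in reversed(range(n)):     # backward sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

# diagonally dominant test system -> symmetric Gauss-Seidel converges
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = sgs_solve(A, b)
```

    Note how x[i] depends on already-updated entries within the sweep; when those entries live on another processor's subdomain, the sweep must either lag them (the source of the boundary errors discussed above) or exchange them, which is what the alternative schemes' boundary treatments address.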

  1. Molecular dynamics simulations of site point mutations in the TPR domain of cyclophilin 40 identify conformational states with distinct dynamic and enzymatic properties

    NASA Astrophysics Data System (ADS)

    Gur, Mert; Blackburn, Elizabeth A.; Ning, Jia; Narayan, Vikram; Ball, Kathryn L.; Walkinshaw, Malcolm D.; Erman, Burak

    2018-04-01

    Cyclophilin 40 (Cyp40) is a member of the immunophilin family that acts as a peptidyl-prolyl-isomerase enzyme and binds to the heat shock protein 90 (Hsp90). Its structure comprises an N-terminal cyclophilin domain and a C-terminal tetratricopeptide (TPR) domain. Cyp40 is overexpressed in prostate cancer and certain T-cell lymphomas. The groove for Hsp90 binding on the TPR domain includes residues Lys227 and Lys308, referred to as the carboxylate clamp, and is essential for Cyp40-Hsp90 binding. In this study, the effect of two mutations, K227A and K308A, and their combined mutant was investigated by performing a total of 5.76 μs of all-atom molecular dynamics (MD) simulations in explicit solvent. All simulations, except those of the K308A mutant, were found to adopt two distinct (extended or compact) conformers defined by different cyclophilin-TPR interdomain distances. The K308A mutant was only observed in the extended form, which is the form observed in the Cyp40 X-ray structure. The wild-type, K227A, and combined mutant also showed bimodal distributions. The experimental melting temperatures (Tm) of the mutants correlate with the degree of compactness, with the K308A extended mutant having a marginally lower melting temperature. Another novel measure of compactness determined from the MD data, the "coordination shell volume," also shows a direct correlation with Tm. In addition, the MD simulations show an allosteric effect: the mutations in the remote TPR domain have a pronounced effect on the molecular motions of the enzymatic cyclophilin domain, which helps rationalise the experimentally observed increase in enzyme activity measured for all three mutations.

  2. Structure of a two-CAP-domain protein from the human hookworm parasite Necator americanus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asojo, Oluwatoyin A., E-mail: oasojo@unmc.edu

    2011-05-01

    The first structure of a two-CAP-domain protein, Na-ASP-1, from the major human hookworm parasite N. americanus refined to a resolution limit of 2.2 Å is presented. Major proteins secreted by the infective larval stage hookworms upon host entry include Ancylostoma secreted proteins (ASPs), which are characterized by one or two CAP (cysteine-rich secretory protein/antigen 5/pathogenesis related-1) domains. The CAP domain has been reported in diverse phylogenetically unrelated proteins, but has no confirmed function. The first structure of a two-CAP-domain protein, Na-ASP-1, from the major human hookworm parasite Necator americanus was refined to a resolution limit of 2.2 Å. The structure was solved by molecular replacement (MR) using Na-ASP-2, a one-CAP-domain ASP, as the search model. The correct MR solution could only be obtained by truncating the polyalanine model of Na-ASP-2 and removing several loops. The structure reveals two CAP domains linked by an extended loop. Overall, the carboxyl-terminal CAP domain is more similar to Na-ASP-2 than to the amino-terminal CAP domain. A large central cavity extends from the amino-terminal CAP domain to the carboxyl-terminal CAP domain, encompassing the putative CAP-binding cavity. The putative CAP-binding cavity is a characteristic cavity in the carboxyl-terminal CAP domain that contains a His and Glu pair. These residues are conserved in all single-CAP-domain proteins, but are absent in the amino-terminal CAP domain. The conserved His residues are oriented such that they appear to be capable of directly coordinating a zinc ion as observed for CAP proteins from reptile venoms. This first structure of a two-CAP-domain ASP can serve as a template for homology modeling of other two-CAP-domain proteins.

  3. Computational methods for aerodynamic design using numerical optimization

    NASA Technical Reports Server (NTRS)

    Peeters, M. F.

    1983-01-01

    Five methods to increase the computational efficiency of aerodynamic design using numerical optimization, by reducing the computer time required to perform gradient calculations, are examined. The most promising method consists of drastically reducing the size of the computational domain on which aerodynamic calculations are made during gradient calculations. Since a gradient calculation requires the solution of the flow about an airfoil whose geometry was slightly perturbed from a base airfoil, the flow about the base airfoil is used to determine boundary conditions on the reduced computational domain. This method worked well in subcritical flow.

  4. Development of a new model for short period ocean tidal variations of Earth rotation

    NASA Astrophysics Data System (ADS)

    Schuh, Harald

    2015-08-01

    Within project SPOT (Short Period Ocean Tidal variations in Earth rotation) we develop a new high frequency Earth rotation model based on empirical ocean tide models. The main purpose of the SPOT model is its application to space geodetic observations such as GNSS and VLBI. We consider an empirical ocean tide model, which does not require hydrodynamic ocean modeling to determine ocean tidal angular momentum. We use here the EOT11a model of Savcenko & Bosch (2012), which is extended for some additional minor tides (e.g. M1, J1, T2). As empirical tidal models do not provide ocean tidal currents, which are required for the computation of oceanic relative angular momentum, we implement an approach first published by Ray (2001) to estimate ocean tidal current velocities for all tides considered in the extended EOT11a model. The approach itself is tested by application to tidal heights from hydrodynamic ocean tide models, which also provide tidal current velocities. Based on the tidal heights and the associated current velocities the oceanic tidal angular momentum (OTAM) is calculated. For the computation of the related short period variation of Earth rotation, we have re-examined the Euler-Liouville equation for an elastic Earth model with a liquid core. The focus here is on the consistent calculation of the elastic Love numbers and associated Earth model parameters, which are considered in the Euler-Liouville equation for diurnal and sub-diurnal periods in the frequency domain.

  5. Efficient Fourier-based algorithms for time-periodic unsteady problems

    NASA Astrophysics Data System (ADS)

    Gopinath, Arathi Kamath

    2007-12-01

    This dissertation work proposes two algorithms for the simulation of time-periodic unsteady problems via the solution of Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations. These algorithms use a Fourier representation in time and hence solve for the periodic state directly without resolving transients (which consume most of the resources in a time-accurate scheme). In contrast to conventional Fourier-based techniques which solve the governing equations in frequency space, the new algorithms perform all the calculations in the time domain, and hence require minimal modifications to an existing solver. The complete space-time solution is obtained by iterating in a fifth pseudo-time dimension. Various time-periodic problems such as helicopter rotors, wind turbines, turbomachinery and flapping-wings can be simulated using the Time Spectral method. The algorithm is first validated using pitching airfoil/wing test cases. The method is further extended to turbomachinery problems, and computational results verified by comparison with a time-accurate calculation. The technique can be very memory intensive for large problems, since the solution is computed (and hence stored) simultaneously at all time levels. Often, the blade counts of a turbomachine are rescaled such that a periodic fraction of the annulus can be solved. This approximation enables the solution to be obtained at a fraction of the cost of a full-scale time-accurate solution. For a viscous computation over a three-dimensional single-stage rescaled compressor, an order of magnitude savings is achieved. The second algorithm, the reduced-order Harmonic Balance method is applicable only to turbomachinery flows, and offers even larger computational savings than the Time Spectral method. It simulates the true geometry of the turbomachine using only one blade passage per blade row as the computational domain. 
In each blade row of the turbomachine, only the dominant frequencies are resolved, namely, combinations of neighbor's blade passing. An appropriate set of frequencies can be chosen by the analyst/designer based on a trade-off between accuracy and computational resources available. A cost comparison with a time-accurate computation for an Euler calculation on a two-dimensional multi-stage compressor obtained an order of magnitude savings, and a RANS calculation on a three-dimensional single-stage compressor achieved two orders of magnitude savings, with comparable accuracy.
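
    The core of the Time Spectral idea can be shown in a few lines: the solution is stored at N equally spaced time levels over one period, and the time derivative is evaluated by a dense spectral operator coupling all time levels, rather than by marching through transients. The sketch below uses the standard Fourier collocation derivative matrix for odd N; it illustrates the operator only and is not the dissertation's URANS solver.

```python
import numpy as np

# Time-spectral derivative operator: for N (odd) time levels over a
# period T, the derivative at level j is a weighted sum over all levels.
N = 9                            # odd number of time instances
T = 2.0 * np.pi                  # period of the unsteady problem
t = T * np.arange(N) / N         # collocation time levels

D = np.zeros((N, N))             # dense spectral time-derivative matrix
for j in range(N):
    for k in range(N):
        if j != k:
            D[j, k] = (np.pi / T) * (-1.0) ** (j - k) / np.sin(np.pi * (j - k) / N)

# for a periodic signal resolved on the grid, D differentiates exactly:
u = np.sin(2.0 * t)
dudt = D @ u                     # should equal 2*cos(2*t)
```

    In the flow solver this D @ u term replaces the physical time derivative in the residual at every spatial point, which is why the solution must be stored at all time levels simultaneously (the memory cost discussed above).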

  6. Rapid Frequency Chirps of TAE mode due to Finite Orbit Energetic Particles

    NASA Astrophysics Data System (ADS)

    Berk, Herb; Wang, Ge

    2013-10-01

    The tip model for the TAE mode in the large aspect ratio limit, conceived by Rosenbluth et al. in the frequency domain, together with a frequency-domain interaction term based on a map model, has been extended into the time domain. We present the formal basis for the model, starting with the Lagrangian for the particle-wave interaction. We shall discuss the formal nonlinear time domain problem and the procedure needed to obtain solutions in the adiabatic limit.

  7. A general structure-property relationship to predict the enthalpy of vaporisation at ambient temperatures.

    PubMed

    Oberg, T

    2007-01-01

    The vapour pressure is the most important property of an anthropogenic organic compound in determining its partitioning between the atmosphere and the other environmental media. The enthalpy of vaporisation quantifies the temperature dependence of the vapour pressure and its value around 298 K is needed for environmental modelling. The enthalpy of vaporisation can be determined by different experimental methods, but estimation methods are needed to extend the current database and several approaches are available from the literature. However, these methods have limitations, such as a need for other experimental results as input data, a limited applicability domain, a lack of domain definition, and a lack of predictive validation. Here we have attempted to develop a quantitative structure-property relationship (QSPR) that has general applicability and is thoroughly validated. Enthalpies of vaporisation at 298 K were collected from the literature for 1835 pure compounds. The three-dimensional (3D) structures were optimised and each compound was described by a set of computationally derived descriptors. The compounds were randomly assigned into a calibration set and a prediction set. Partial least squares regression (PLSR) was used to estimate a low-dimensional QSPR model with 12 latent variables. The predictive performance of this model, within the domain of application, was estimated at n=560, q2Ext=0.968 and s=0.028 (log transformed values). The QSPR model was subsequently applied to a database of 100,000+ structures, after a similar 3D optimisation and descriptor generation. Reliable predictions can be reported for compounds within the previously defined applicability domain.
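
    The statistical engine of such a QSPR model is partial least squares regression, which handles the many collinear molecular descriptors by regressing on a few latent variables. Below is a minimal single-response PLS1 (NIPALS) sketch on synthetic data; it illustrates the PLSR idea only, not the authors' 12-latent-variable model fitted on 1835 compounds with an applicability-domain check.

```python
import numpy as np

def pls1(X, y, n_components):
    """Minimal PLS1 via NIPALS deflation.
    Returns coefficients B such that X @ B approximates y."""
    Xk, yk = X.astype(float).copy(), y.astype(float).copy()
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk                    # weight vector
        w /= np.linalg.norm(w)
        t = Xk @ w                       # score vector
        tt = t @ t
        p = Xk.T @ t / tt                # X loading
        qk = (yk @ t) / tt               # regression of y on the score
        Xk -= np.outer(t, p)             # deflate X
        yk -= qk * t                     # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))            # synthetic descriptor matrix
beta = np.array([1.0, 2.0, 0.0, 0.0, 3.0])
y = X @ beta                             # noise-free synthetic property
B = pls1(X, y, n_components=5)           # full rank -> exact recovery
```

    In practice far fewer latent variables than descriptors are kept, which is exactly what gives PLSR its robustness to the collinearity of computed molecular descriptors.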

  8. Integration into Big Data: First Steps to Support Reuse of Comprehensive Toxicity Model Modules (SOT)

    EPA Science Inventory

    Data surrounding the needs of human disease and toxicity modeling are largely siloed limiting the ability to extend and reuse modules across knowledge domains. Using an infrastructure that supports integration across knowledge domains (animal toxicology, high-throughput screening...

  9. Pair correlation functions for identifying spatial correlation in discrete domains

    NASA Astrophysics Data System (ADS)

    Gavagnin, Enrico; Owen, Jennifer P.; Yates, Christian A.

    2018-06-01

    Identifying and quantifying spatial correlation are important aspects of studying the collective behavior of multiagent systems. Pair correlation functions (PCFs) are powerful statistical tools that can provide qualitative and quantitative information about correlation between pairs of agents. Despite the numerous PCFs defined for off-lattice domains, only a few recent studies have considered a PCF for discrete domains. Our work extends the study of spatial correlation in discrete domains by defining a new set of PCFs using two natural and intuitive definitions of distance for a square lattice: the taxicab and uniform metric. We show how these PCFs improve upon previous attempts and compare between the quantitative data acquired. We also extend our definitions of the PCF to other types of regular tessellation that have not been studied before, including hexagonal, triangular, and cuboidal. Finally, we provide a comprehensive PCF for any tessellation and metric, allowing investigation of spatial correlation in irregular lattices for which recognizing correlation is less intuitive.
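
    A lattice PCF of this kind can be sketched directly: count agent pairs at each taxicab distance and normalise by the count expected for the same number of agents placed uniformly at random, so that PCF = 1 signals no correlation and PCF > 1 signals clustering at that distance. This is a brute-force illustration of the general idea under the taxicab metric, not the paper's exact estimator.

```python
import numpy as np
from itertools import combinations

def taxicab_pcf(occ, max_d):
    """Pair correlation function on a square lattice (taxicab metric).
    occ: binary occupancy matrix. Returns PCF values for distances
    0..max_d (NaN where no site pair exists at that distance)."""
    coords = np.argwhere(occ)                 # occupied sites
    sites = np.argwhere(np.ones_like(occ))    # all sites
    n, s = len(coords), len(sites)
    obs = np.zeros(max_d + 1)
    for a, b in combinations(coords, 2):      # observed agent pairs
        d = abs(a[0] - b[0]) + abs(a[1] - b[1])
        if d <= max_d:
            obs[d] += 1
    site_pairs = np.zeros(max_d + 1)
    for a, b in combinations(sites, 2):       # all site pairs
        d = abs(a[0] - b[0]) + abs(a[1] - b[1])
        if d <= max_d:
            site_pairs[d] += 1
    expected = site_pairs * (n * (n - 1)) / (s * (s - 1))
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(expected > 0, obs / expected, np.nan)

occ = np.ones((6, 6), dtype=int)       # fully occupied lattice -> PCF = 1
pcf_full = taxicab_pcf(occ, max_d=4)

occ2 = np.zeros((6, 6), dtype=int)
occ2[0, 0] = occ2[0, 1] = 1            # a single adjacent pair
pcf_pair = taxicab_pcf(occ2, max_d=2)  # PCF(1) >> 1: clustering signal
```

    Swapping the distance line for `max(abs(...))` gives the uniform-metric variant, and the same normalisation carries over to the other tessellations discussed above.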

  10. Influence of computer work under time pressure on cardiac activity.

    PubMed

    Shi, Ping; Hu, Sijung; Yu, Hongliu

    2015-03-01

    Computer users are often under stress when required to complete computer work within a required time. Work stress has repeatedly been associated with an increased risk for cardiovascular disease. The present study examined the effects of time pressure workload during computer tasks on cardiac activity in 20 healthy subjects. Heart rate, time domain and frequency domain indices of heart rate variability (HRV) and Poincaré plot parameters were compared among five computer tasks and two rest periods. Faster heart rate and decreased standard deviation of R-R interval were noted in response to computer tasks under time pressure. The Poincaré plot parameters showed significant differences between different levels of time pressure workload during computer tasks, and between computer tasks and the rest periods. In contrast, no significant differences were identified for the frequency domain indices of HRV. The results suggest that the quantitative Poincaré plot analysis used in this study was able to reveal the intrinsic nonlinear nature of the autonomically regulated cardiac rhythm. Specifically, heightened vagal tone occurred during the relaxation computer tasks without time pressure. In contrast, the stressful computer tasks with added time pressure stimulated cardiac sympathetic activity. Copyright © 2015 Elsevier Ltd. All rights reserved.
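
    The Poincaré plot descriptors referred to here have a standard quantitative form: plotting each R-R interval against the next, SD1 measures the spread perpendicular to the line of identity (short-term, largely vagal variability) and SD2 the spread along it (longer-term variability). A minimal sketch of that computation, not the paper's full analysis:

```python
import numpy as np

def poincare_sd(rr):
    """SD1/SD2 of the Poincare plot of successive R-R intervals (s)."""
    rr = np.asarray(rr, dtype=float)
    x, y = rr[:-1], rr[1:]                 # (RR_n, RR_{n+1}) pairs
    sd1 = np.std((y - x) / np.sqrt(2.0))   # spread across line of identity
    sd2 = np.std((y + x) / np.sqrt(2.0))   # spread along line of identity
    return sd1, sd2

sd1_c, sd2_c = poincare_sd(np.full(10, 0.8))           # constant RR
sd1_t, sd2_t = poincare_sd(np.linspace(0.8, 1.0, 11))  # steady trend
```

    The constant series gives SD1 = SD2 = 0, while a monotone trend gives SD1 ≈ 0 but SD2 > 0, which is the kind of dissociation between beat-to-beat and longer-term variability that the study exploits.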

  11. An Efficient Semi-supervised Learning Approach to Predict SH2 Domain Mediated Interactions.

    PubMed

    Kundu, Kousik; Backofen, Rolf

    2017-01-01

    The Src homology 2 (SH2) domain is an important subclass of modular protein domains that plays an indispensable role in several biological processes in eukaryotes. SH2 domains specifically bind to the phosphotyrosine residue of their binding peptides to facilitate various molecular functions. For determining the subtle binding specificities of SH2 domains, it is very important to understand the intriguing mechanisms by which these domains recognize their target peptides in a complex cellular environment. Several attempts have been made to predict SH2-peptide interactions using high-throughput data. However, these high-throughput data are often affected by a low signal-to-noise ratio. Furthermore, the prediction methods have several additional shortcomings, such as the linearity problem and high computational complexity. Thus, computational identification of SH2-peptide interactions using high-throughput data remains challenging. Here, we propose a machine learning approach based on an efficient semi-supervised learning technique for the prediction of 51 SH2 domain mediated interactions in the human proteome. In our study, we have successfully employed several strategies to tackle the major problems in computational identification of SH2-peptide interactions.

  12. Self-consistent field theory simulations of polymers on arbitrary domains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ouaknin, Gaddiel, E-mail: gaddielouaknin@umail.ucsb.edu; Laachi, Nabil; Delaney, Kris

    2016-12-15

    We introduce a framework for simulating the mesoscale self-assembly of block copolymers in arbitrary confined geometries subject to Neumann boundary conditions. We employ a hybrid finite difference/volume approach to discretize the mean-field equations on an irregular domain represented implicitly by a level-set function. The numerical treatment of the Neumann boundary conditions is sharp, i.e. it avoids an artificial smearing in the irregular domain boundary. This strategy enables the study of self-assembly in confined domains and enables the computation of physically meaningful quantities at the domain interface. In addition, we employ adaptive grids encoded with Quad-/Oc-trees in parallel to automatically refine the grid where the statistical fields vary rapidly as well as at the boundary of the confined domain. This approach results in a significant reduction in the number of degrees of freedom and makes the simulations in arbitrary domains using effective boundary conditions computationally efficient in terms of both speed and memory requirement. Finally, in the case of regular periodic domains, where pseudo-spectral approaches are superior to finite differences in terms of CPU time and accuracy, we use the adaptive strategy to store chain propagators, reducing the memory footprint without loss of accuracy in computed physical observables.

  13. What are the Geophysical Fingerprints of hyper-extended Crustal Domains ?

    NASA Astrophysics Data System (ADS)

    Stanton, N.; Manatschal, G.; Maia, M.; Viana, A.; Tugend, J.; Autin, J.

    2012-04-01

    The Iberian margin is a well-studied region and presently the best tectonic setting for understanding the dynamic processes of margin formation and evolution. The world's largest available dataset has enabled the crustal structure to be properly constrained and has opened new paradigms for passive margin studies. Nevertheless, there are numerous remaining questions, for example: what is the spatial extent of continental inheritance along the margin, and what is the role of fluids (serpentinization/magmatism) during margin formation and deformation? The observation of a hyper-extended crustal domain, now also identified in other margins, reveals the highly diverse nature of the crust along rifted margins. What are its physical properties and how do they change laterally? The aim of this study is to explore the physical signature of the serpentinized crust, which composes this hyper-extended domain, to identify the limits of the system and discuss its nature and importance. To investigate the lateral variation of crustal types we use integrated gravity, magnetic, seismic and available geological/well data. Transformations on the potential field data enable us to enhance the horizontal and vertical variations of the crust, and future forward modeling will provide a geological correlation for Iberia. The preliminary results showed that the transitional crust can be subdivided into two zones with different geophysical signatures: from the necking zone, the continentward transitional crust displays a decreasing gravity anomaly, low horizontal gradient and smooth magnetic anomalies; towards offshore (to the west of the J anomaly) the transitional crust is characterized by a semi-cyclic magnetic anomaly pattern, with increasing gravity, a stronger horizontal gradient and rough bathymetry. We associate this transitional domain with an embryonic oceanic-type crust. 
Comparisons with other margins along the North Atlantic, despite the great spatial variation, preliminarily reveal that the hyper-extended crust at the non-volcanic Iberia Margin displays intrinsic characteristics distinct from the more volcanic transitional domains to the north. The physical properties of the different crustal types will be further modeled to properly constrain their characteristics. The final results shall enable us to identify the lateral transition between the different continental-transitional hydrated-oceanic crustal types and would potentially allow us to identify similar domains worldwide.

  14. Diffraction of seismic waves from 3-D canyons and alluvial basins modeled using the Fast Multipole-accelerated BEM

    NASA Astrophysics Data System (ADS)

    Chaillat, S.; Bonnet, M.; Semblat, J.

    2007-12-01

    Seismic wave propagation and amplification in complex media is a major issue in the field of seismology. To compute seismic wave propagation in complex geological structures such as alluvial basins, various numerical methods have been proposed. The main advantage of the Boundary Element Method (BEM) is that only the domain boundaries (and possibly interfaces) are discretized, leading to a reduction of the number of degrees of freedom. The main drawback of the standard BEM is that the governing matrix is full and non-symmetric, which gives rise to high computational and memory costs. In other areas where the BEM is used (electromagnetism, acoustics), considerable speedup of solution time and decrease of memory requirements have been achieved through the development, over the last decade, of the Fast Multipole Method (FMM). The goal of the FMM is to speed up the matrix-vector product computation needed at each iteration of the GMRES iterative solver. Moreover, the governing matrix is never explicitly formed, which leads to a storage requirement well below the memory necessary for holding the complete matrix. The FMM-accelerated BEM therefore achieves substantial savings in both CPU time and memory. In this work, the FMM is extended to 3-D frequency-domain elastodynamics and applied to the computation of seismic wave propagation in 3-D. The efficiency of the present FMM-BEM is demonstrated on seismology-oriented examples. First, the diffraction of a plane wave or a point source by a 3-D canyon is studied. The influence of the size of the meshed part of the free surface is studied, and computations are performed for non-dimensional frequencies higher than those considered in other studies (thanks to the use of the FM-BEM), with which comparisons are made whenever possible. The method is also applied to analyze the diffraction of a plane wave or a point source by a 3-D alluvial basin. A parametric study is performed on the effect of the shape of the basin, and the interaction of the wavefield with the basin edges is analyzed.

  15. Uncovering the formation and selection of benzylmalonyl-CoA from the biosynthesis of splenocin and enterocin reveals a versatile way to introduce amino acids into polyketide carbon scaffolds.

    PubMed

    Chang, Chenchen; Huang, Rong; Yan, Yan; Ma, Hongmin; Dai, Zheng; Zhang, Benying; Deng, Zixin; Liu, Wen; Qu, Xudong

    2015-04-01

    Selective modification of carbon scaffolds via biosynthetic engineering is important for polyketide structural diversification. Yet, this scope is currently restricted to simple aliphatic groups due to (1) the limited variety of CoA-linked extender units, which lack aromatic structures and chemical reactivity, and (2) narrow acyltransferase (AT) specificity, which is limited to aliphatic CoA-linked extender units. In this report, we uncovered and characterized the first aromatic CoA-linked extender unit, benzylmalonyl-CoA, from the biosynthetic pathways of splenocin and enterocin in Streptomyces sp. CNQ431. Its synthesis employs a deamination/reductive carboxylation strategy to convert phenylalanine into benzylmalonyl-CoA, providing a link between amino acid and CoA-linked extender unit synthesis. By characterizing its selection, we further validated that the AT domains of the splenocin and antimycin polyketide synthases are able to select this extender unit to introduce the phenyl group into their dilactone scaffolds. The biosynthetic machinery involved in the formation of this extender unit is highly versatile and can potentially be tailored for tyrosine, histidine and aspartic acid. The disclosed aromatic extender unit, amino acid-oriented synthetic pathway, and aromatic-selective AT domains provide a systematic breakthrough in current knowledge of polyketide extender unit formation and selection, and also open a route for further engineering of polyketide carbon scaffolds using amino acids.

  16. Speech Enhancement, Gain, and Noise Spectrum Adaptation Using Approximate Bayesian Estimation

    PubMed Central

    Hao, Jiucang; Attias, Hagai; Nagarajan, Srikantan; Lee, Te-Won; Sejnowski, Terrence J.

    2010-01-01

    This paper presents a new approximate Bayesian estimator for enhancing a noisy speech signal. The speech model is assumed to be a Gaussian mixture model (GMM) in the log-spectral domain. This is in contrast to most current models, which work in the frequency domain. Exact signal estimation is a computationally intractable problem. We derive three approximations to enhance the efficiency of signal estimation. The Gaussian approximation transforms the log-spectral domain GMM into the frequency domain using a minimum Kullback–Leibler (KL) divergence criterion. The frequency domain Laplace method computes the maximum a posteriori (MAP) estimator for the spectral amplitude. Correspondingly, the log-spectral domain Laplace method computes the MAP estimator for the log-spectral amplitude. Further, the gain and noise spectrum adaptation are implemented using the expectation–maximization (EM) algorithm within the GMM under the Gaussian approximation. The proposed algorithms are evaluated by applying them to enhance speech corrupted by speech-shaped noise (SSN). The experimental results demonstrate that the proposed algorithms offer improved signal-to-noise ratio, lower word recognition error rate, and less spectral distortion. PMID:20428253

  17. Genoarchitecture of the extended amygdala in zebra finch, and expression of FoxP2 in cell corridors of different genetic profile.

    PubMed

    Vicario, Alba; Mendoza, Ezequiel; Abellán, Antonio; Scharff, Constance; Medina, Loreta

    2017-01-01

    We used a battery of genes encoding transcription factors (Pax6, Islet1, Nkx2.1, Lhx6, Lhx5, Lhx9, FoxP2) and neuropeptides to study the extended amygdala in developing zebra finches. We identified different components of the central extended amygdala comparable to those found in mice and chickens, including the intercalated amygdalar cells, the central amygdala, and the lateral bed nucleus of the stria terminalis. Many cells likely originate in the dorsal striatal domain, ventral striatal domain, or the pallidal domain, as is the case in mice and chickens. Moreover, a cell subpopulation of the central extended amygdala appears to originate in the prethalamic eminence. As a general principle, these different cells with specific genetic profiles and embryonic origins form separate or partially intermingled cell corridors along the extended amygdala, which may be involved in different functional pathways. In addition, we identified the medial amygdala of the zebra finch. As in chickens and mice, it is located in the subpallium and is rich in cells of pallido-preoptic origin, containing minor subpopulations of immigrant cells from the ventral pallium, alar hypothalamus and prethalamic eminence. We also propose that the medial bed nucleus of the stria terminalis is composed of several parallel cell corridors with different genetic profiles and embryonic origins: preoptic, pallidal, hypothalamic, and prethalamic. Several of these cell corridors with distinct origins express FoxP2, a transcription factor implicated in synaptic plasticity. Our results pave the way for studies using zebra finches to understand the neural basis of social behavior, in which the extended amygdala is involved.

  18. Public Domain Microcomputer Software for Forestry.

    ERIC Educational Resources Information Center

    Martin, Les

    A project was conducted to develop a computer forestry/forest products bibliography applicable to high school and community college vocational/technical programs. The project director contacted curriculum clearinghouses, computer companies, and high school and community college instructors in order to obtain listings of public domain programs for…

  19. Characterization and Measurement of Passive and Active Metamaterial Devices

    DTIC Science & Technology

    2010-03-01

    A periodic boundary mirrors the computational domain along an axis. Unit cell boundary conditions mirror the computational domain along two axes... mirrored a number of times in each direction to create a square matrix of ring resonators. Figure 33(b) shows a 4×4 array. The frequency domain...created by mirroring the previous structure three times. Thus, the dimensions of the particles are identical. The same boundary conditions and spacing

  20. Algorithm for Wavefront Sensing Using an Extended Scene

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin; Green, Joseph; Ohara, Catherine

    2008-01-01

    A recently conceived algorithm for processing image data acquired by a Shack-Hartmann (SH) wavefront sensor is not subject to the restriction, previously applicable in SH wavefront sensing, that the image be formed from a distant star or other equivalent of a point light source. That is to say, the image could be of an extended scene. (One still has the option of using a point source.) The algorithm can be implemented in commercially available software on ordinary computers. The steps of the algorithm are the following: 1. Suppose that the image comprises M sub-images. Determine the x,y Cartesian coordinates of the centers of these sub-images and store them in a 2xM matrix. 2. Within each sub-image, choose an NxN-pixel cell centered at the coordinates determined in step 1. For the ith sub-image, let this cell be denoted as si(x,y). Let the cell of another sub-image (preferably near the center of the whole extended-scene image) be designated a reference cell, denoted r(x,y). 3. Calculate the fast Fourier transforms of the sub-sub-images in the central N'xN' portions (where N' < N and both are preferably powers of 2) of r(x,y) and si(x,y). 4. Multiply the two transforms to obtain a cross-correlation function Ci(u,v) in the Fourier domain. Then let the phase of Ci(u,v) constitute a phase function, phi(u,v). 5. Fit u and v slopes to phi(u,v) over a small u,v subdomain. 6. Compute the fast Fourier transform, Si(u,v), of the full NxN cell si(x,y). Multiply this transform by the u and v phase slopes obtained in step 5. Then compute the inverse fast Fourier transform of the product. 7. Repeat steps 4 through 6 in an iteration loop, cumulating the u and v slopes, until a maximum iteration number is reached or the change in image shift becomes smaller than a predetermined tolerance. 8. Repeat steps 4 through 7 for the cells of all other sub-images.
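Steps 3 through 5 above amount to reading a translation off the phase slope of the cross-spectrum. A minimal sketch for the special case of an integer, wrap-free cyclic shift (the function name and the 3-bin slope fit are our choices, not from the original):

```python
import numpy as np

def shift_via_phase(ref, sub):
    """Estimate the (dy, dx) translation of `sub` relative to `ref`
    from the phase slope of their cross-spectrum (cf. steps 3-5)."""
    R = np.fft.fft2(ref)
    S = np.fft.fft2(sub)
    C = np.conj(S) * R                  # cross-correlation, Fourier domain
    phase = np.angle(C)                 # phase slope encodes the shift
    u = np.fft.fftfreq(ref.shape[0])    # cycles per pixel (square cell)
    k = 3                               # small low-frequency subdomain
    dy = np.mean(phase[1:k, 0] / (2 * np.pi * u[1:k]))
    dx = np.mean(phase[0, 1:k] / (2 * np.pi * u[1:k]))
    return dy, dx
```

The full algorithm fits the slopes over a 2-D subdomain and iterates to sub-pixel accuracy; this sketch only shows why the phase of Ci(u,v) carries the shift.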

  1. Salvador has an extended SARAH domain that mediates binding to Hippo kinase.

    PubMed

    Cairns, Leah; Tran, Thao; Fowl, Brendan H; Patterson, Angela; Kim, Yoo Jin; Bothner, Brian; Kavran, Jennifer M

    2018-04-13

    The Hippo pathway controls cell proliferation and differentiation through the precisely tuned activity of a core kinase cassette. The activity of Hippo kinase is modulated by interactions between its C-terminal coiled-coil, termed the SARAH domain, and the SARAH domains of either dRassF or Salvador. Here, we wanted to understand the molecular basis of SARAH domain-mediated interactions and their influence on Hippo kinase activity. We focused on Salvador, a positive effector of Hippo activity and the least well-characterized SARAH domain-containing protein. We determined the crystal structure of a complex between Salvador and Hippo SARAH domains from Drosophila. This structure provided insight into the organization of the Salvador SARAH domain including a folded N-terminal extension that expands the binding interface with Hippo SARAH domain. We also found that this extension improves the solubility of the Salvador SARAH domain, enhances binding to Hippo, and is unique to Salvador. We therefore suggest expanding the definition of the Salvador SARAH domain to include this extended region. The heterodimeric assembly observed in the crystal was confirmed by cross-linked MS and provided a structural basis for the mutually exclusive interactions of Hippo with either dRassF or Salvador. Of note, Salvador influenced the kinase activity of Mst2, the mammalian Hippo homolog. In co-transfected HEK293T cells, human Salvador increased the levels of Mst2 autophosphorylation and Mst2-mediated phosphorylation of select substrates, whereas Salvador SARAH domain inhibited Mst2 autophosphorylation in vitro. These results suggest Salvador enhances the effects of Hippo kinase activity at multiple points in the Hippo pathway. © 2018 by The American Society for Biochemistry and Molecular Biology, Inc.

  2. Calculus domains modelled using an original bool algebra based on polygons

    NASA Astrophysics Data System (ADS)

    Oanta, E.; Panait, C.; Raicu, A.; Barhalescu, M.; Axinte, T.

    2016-08-01

    Analytical and numerical computer-based models require analytical definitions of the calculus domains. The paper presents a method to model a calculus domain based on a Boolean algebra which uses solid and hollow polygons. The general calculus relations of the geometrical characteristics that are widely used in mechanical engineering are tested using several shapes of the calculus domain in order to draw conclusions regarding the most effective methods to discretize the domain. The paper also tests several commercial CAD applications that compute these geometrical characteristics, and interesting conclusions are drawn. The tests also targeted the accuracy of the results vs. the number of nodes on the curved boundary of the cross section. The study required the development of an original software application consisting of more than 1700 lines of computer code. In comparison with other calculus methods, discretization using convex polygons is a simpler approach. Moreover, this method does not produce the very large numbers that the spline approximation did, which required special software packages offering multiple, arbitrary precision. The knowledge resulting from this study may be used to develop complex computer-based models in engineering.
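The solid/hollow polygon idea can be illustrated with the shoelace formula: the area of the domain is the sum of the solid polygon areas minus the hollow ones. A minimal sketch (our own function names; the paper's actual Boolean algebra handles general combinations of polygons):

```python
def polygon_area(vertices):
    """Signed shoelace area of a polygon given as a list of (x, y)."""
    a = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        a += x0 * y1 - x1 * y0
    return a / 2.0

def domain_area(solids, hollows):
    """Area of a calculus domain: solid polygons minus hollow ones."""
    return (sum(abs(polygon_area(p)) for p in solids)
            - sum(abs(polygon_area(p)) for p in hollows))
```

Higher-order geometrical characteristics (first and second moments of area) follow the same pattern, with edge-wise shoelace-style sums replacing the area term.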

  3. RAPPORT: running scientific high-performance computing applications on the cloud.

    PubMed

    Cohen, Jeremy; Filippis, Ioannis; Woodbridge, Mark; Bauer, Daniela; Hong, Neil Chue; Jackson, Mike; Butcher, Sarah; Colling, David; Darlington, John; Fuchs, Brian; Harvey, Matt

    2013-01-28

    Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software. The Robust Application Porting for HPC in the Cloud (RAPPORT) project took advantage of existing links between computing researchers and application scientists in the fields of bioinformatics, high-energy physics (HEP) and digital humanities, to investigate running a set of scientific HPC applications from these domains on cloud infrastructure. In this paper, we focus on the bioinformatics and HEP domains, describing the applications and target cloud platforms. We conclude that, while there are many factors that need consideration, there is no fundamental impediment to the use of cloud infrastructure for running many types of HPC applications and, in some cases, there is potential for researchers to benefit significantly from the flexibility offered by cloud platforms.

  4. Frequency domain FIR and IIR adaptive filters

    NASA Technical Reports Server (NTRS)

    Lynn, D. W.

    1990-01-01

    A discussion of the LMS adaptive filter relating to its convergence characteristics and the problems associated with disparate eigenvalues is presented. This is used to introduce the concept of proportional convergence. An approach is used to analyze the convergence characteristics of block frequency-domain adaptive filters. This leads to a development showing how the frequency-domain FIR adaptive filter is easily modified to provide proportional convergence. These ideas are extended to a block frequency-domain IIR adaptive filter and the idea of proportional convergence is applied. Experimental results illustrating proportional convergence in both FIR and IIR frequency-domain block adaptive filters are presented.
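Proportional convergence in the frequency domain is typically obtained by normalizing the step size per bin, so every frequency mode converges at the same rate regardless of the eigenvalue spread of the input. A minimal single-block sketch (our simplification; a practical FIR implementation would add overlap-save processing and a gradient constraint):

```python
import numpy as np

def fdaf_step(W, x_block, d_block, mu=0.5, eps=1e-8):
    """One update of a block frequency-domain adaptive filter with a
    per-bin normalized step size (proportional convergence)."""
    X = np.fft.fft(x_block)
    D = np.fft.fft(d_block)
    E = D - W * X                                       # per-bin error
    W = W + mu * np.conj(X) * E / (np.abs(X) ** 2 + eps)
    return W, E
```

Dividing by the per-bin input power |X|^2 is what removes the dependence on disparate eigenvalues that slows time-domain LMS.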

  5. Evaluation of stress intensity factors for bi-material interface cracks using displacement jump methods

    NASA Astrophysics Data System (ADS)

    Nehar, K. C.; Hachi, B. E.; Cazes, F.; Haboussi, M.

    2017-12-01

    The aim of the present work is to investigate the numerical modeling of interfacial cracks that may appear at the interface between two isotropic elastic materials. The extended finite element method is employed to analyze brittle and bi-material interfacial fatigue crack growth by computing the mixed mode stress intensity factors (SIF). Three different approaches are introduced to compute the SIFs. In the first one, the mixed mode SIF is deduced from the computation of the contour integral as per the classical J-integral method, whereas a displacement method is used to evaluate the SIF by using either one or two displacement jumps located along the crack path in the second and third approaches. The displacement jump method is rather classical for mono-materials, but to our knowledge has not previously been used for a bi-material. Hence, use of the displacement jump for characterizing bi-material cracks constitutes the main contribution of the present study. Several benchmark tests including parametric studies are performed to show the effectiveness of these computational methodologies for computing SIFs in static and fatigue problems of bi-material structures. It is found that results based on the displacement jump methods are in very good agreement with exact solutions, as they are for the J-integral method, but with a larger domain of applicability and a better numerical efficiency (less time consuming and less spurious boundary effect).

  6. Exploiting the spatial locality of electron correlation within the parametric two-electron reduced-density-matrix method

    NASA Astrophysics Data System (ADS)

    DePrince, A. Eugene; Mazziotti, David A.

    2010-01-01

    The parametric variational two-electron reduced-density-matrix (2-RDM) method is applied to computing electronic correlation energies of medium-to-large molecular systems by exploiting the spatial locality of electron correlation within the framework of the cluster-in-molecule (CIM) approximation [S. Li et al., J. Comput. Chem. 23, 238 (2002); J. Chem. Phys. 125, 074109 (2006)]. The 2-RDMs of individual molecular fragments within a molecule are determined, and selected portions of these 2-RDMs are recombined to yield an accurate approximation to the correlation energy of the entire molecule. In addition to extending CIM to the parametric 2-RDM method, we (i) suggest a more systematic selection of atomic-orbital domains than that presented in previous CIM studies and (ii) generalize the CIM method for open-shell quantum systems. The resulting method is tested with a series of polyacetylene molecules, water clusters, and diazobenzene derivatives in minimal and nonminimal basis sets. Calculations show that the computational cost of the method scales linearly with system size. We also compute hydrogen-abstraction energies for a series of hydroxyurea derivatives. Abstraction of hydrogen from hydroxyurea is thought to be a key step in its treatment of sickle cell anemia; the design of hydroxyurea derivatives that oxidize more rapidly is one approach to devising more effective treatments.

  7. Automatic violence detection in digital movies

    NASA Astrophysics Data System (ADS)

    Fischer, Stephan

    1996-11-01

    Research on computer-based recognition of violence is scant. We are working on the automatic recognition of violence in digital movies, a first step towards the goal of a computer-assisted system capable of protecting children against TV programs containing a great deal of violence. In the video domain, a collision detection and a model-mapping to locate human figures are run, while the creation and comparison of fingerprints to find certain events are run in the audio domain. This article centers on the recognition of fist-fights in the video domain and on the recognition of shots, explosions and cries in the audio domain.

  8. Direct numerical simulation of turbulent plane Couette flow under neutral and stable stratification

    NASA Astrophysics Data System (ADS)

    Mortikov, Evgeny

    2017-11-01

    Direct numerical simulation (DNS) approach was used to study turbulence dynamics in plane Couette flow under conditions ranging from neutral stability to the case of extreme stable stratification, where intermittency is observed. Simulations were performed for Reynolds numbers, based on the channel height and relative wall speed, up to 2 ×105 . Using DNS data, which covers a wide range of stability conditions, parameterizations of pressure correlation terms used in second-order closure turbulence models are discussed. Particular attention is also paid to the sustainment of intermittent turbulence under strong stratification. The intermittent regime is found to be associated with the formation of secondary large-scale structures elongated in the spanwise direction, which define spatially confined alternating regions of laminar and turbulent flow. The spanwise length of these structures increases with the increase in the bulk Richardson number and defines an additional constraint on the computational box size. In this work DNS results are presented in extended computational domains, where the intermittent turbulence is sustained for sufficiently higher Richardson numbers than previously reported.

  9. A Method for Large Eddy Simulation of Acoustic Combustion Instabilities

    NASA Astrophysics Data System (ADS)

    Wall, Clifton; Moin, Parviz

    2003-11-01

    A method for performing Large Eddy Simulation of acoustic combustion instabilities is presented. By extending the low Mach number pressure correction method to the case of compressible flow, a numerical method is developed in which the Poisson equation for pressure is replaced by a Helmholtz equation. The method avoids the acoustic CFL condition by using implicit time advancement, leading to large efficiency gains at low Mach number. The method also avoids artificial damping of acoustic waves. The numerical method is attractive for the simulation of acoustic combustion instabilities, since these flows are typically at low Mach number, and the acoustic frequencies of interest are usually low. Additionally, new boundary conditions based on the work of Poinsot and Lele have been developed to model the acoustic effect of a long channel upstream of the computational inlet, thus avoiding the need to include such a channel in the computational domain. The turbulent combustion model used is the Level Set model of Duchamp de Lageneste and Pitsch for premixed combustion. Comparison of LES results to the reacting experiments of Besson et al. will be presented.

  10. Computational Analysis of a Wells Turbine with Flexible Trailing Edges

    NASA Astrophysics Data System (ADS)

    Kincaid, Kellis; Macphee, David

    2017-11-01

    The Wells turbine is often used to produce a net positive power from an oscillating air column excited by ocean waves. It has been parametrically studied quite thoroughly in the past, both experimentally and numerically. The effects of various characteristics such as blade count and profile, solidity, and tip gap are well known. Several three-dimensional computational studies have been carried out using commercial code to investigate many phenomena detected in experiments: hysteresis, tip-gap drag, and post-stall behavior for example. In this work, the open-source code Foam-Extend is used to examine the effect of flexible blades on the performance of the Wells turbine. A new solver is created to integrate fluid-structure interaction into the code, allowing an accurate solution for both the solid and fluid domains. Reynolds-averaged governing equations are employed in a fully transient solution model. The elastic modulus of the flexible portion of the blade and the tip-gap width are varied, and the resulting flow fields are investigated to determine the cause of any performance differences. NSF Grant EEC 1659710.

  11. Overview of Aro Program on Network Science for Human Decision Making

    NASA Astrophysics Data System (ADS)

    West, Bruce J.

    This program brings together researchers from disparate disciplines to work on a complex research problem that defies confinement within any single discipline. Consequently, not only are new and rewarding solutions sought and obtained for a problem of importance to society and the Army, that is, the human dimension of complex networks, but, in addition, collaborations are established that would not otherwise have formed given the traditional disciplinary compartmentalization of research. This program develops the basic research foundation of a science of networks supporting the linkage between the physical and human (cognitive and social) domains as they relate to human decision making. The strategy is to extend the recent methods of non-equilibrium statistical physics to non-stationary, renewal stochastic processes that appear to be characteristic of the interactions among nodes in complex networks. We also pursue understanding of the phenomenon of synchronization, whose mathematical formulation has recently provided insight into how complex networks reach accommodation and cooperation. The theoretical analyses of complex networks, although mathematically rigorous, often elude analytic solutions and require computer simulation and computation to analyze the underlying dynamic process.

  12. A quantum–quantum Metropolis algorithm

    PubMed Central

    Yung, Man-Hong; Aspuru-Guzik, Alán

    2012-01-01

    The classical Metropolis sampling method is a cornerstone of many statistical modeling applications that range from physics, chemistry, and biology to economics. This method is particularly suitable for sampling the thermal distributions of classical systems. The challenge of extending this method to the simulation of arbitrary quantum systems is that, in general, eigenstates of quantum Hamiltonians cannot be obtained efficiently with a classical computer. However, this challenge can be overcome by quantum computers. Here, we present a quantum algorithm which fully generalizes the classical Metropolis algorithm to the quantum domain. The meaning of quantum generalization is twofold: The proposed algorithm is not only applicable to both classical and quantum systems, but also offers a quantum speedup relative to the classical counterpart. Furthermore, unlike the classical method of quantum Monte Carlo, this quantum algorithm does not suffer from the negative-sign problem associated with fermionic systems. Applications of this algorithm include the study of low-temperature properties of quantum systems, such as the Hubbard model, and preparing the thermal states of sizable molecules to simulate, for example, chemical reactions at an arbitrary temperature. PMID:22215584

  13. Human Motion Capture Data Tailored Transform Coding.

    PubMed

    Junhui Hou; Lap-Pui Chau; Magnenat-Thalmann, Nadia; Ying He

    2015-07-01

    Human motion capture (mocap) is a widely used technique for digitizing human movements. With growing usage, compressing mocap data has received increasing attention, since compact data size enables efficient storage and transmission. Our analysis shows that mocap data have some unique characteristics that distinguish them from images and videos. Therefore, directly borrowing image or video compression techniques, such as discrete cosine transform, does not work well. In this paper, we propose a novel mocap-tailored transform coding algorithm that takes advantage of these features. Our algorithm segments the input mocap sequences into clips, which are represented in 2D matrices. Then it computes a set of data-dependent orthogonal bases to transform the matrices to frequency domain, in which the transform coefficients have significantly less dependency. Finally, the compression is obtained by entropy coding of the quantized coefficients and the bases. Our method has low computational cost and can be easily extended to compress mocap databases. It also requires neither training nor complicated parameter setting. Experimental results demonstrate that the proposed scheme significantly outperforms state-of-the-art algorithms in terms of compression performance and speed.
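Data-dependent orthogonal bases for a clip matrix can be obtained from an SVD, with compression coming from keeping only the leading bases; the quantization and entropy coding steps described above are omitted here. A minimal sketch (our function names, not the authors' code):

```python
import numpy as np

def transform_code(clip, k):
    """Project a clip matrix (frames x channels) onto its top-k
    data-dependent orthogonal bases computed by SVD."""
    mean = clip.mean(axis=0)
    U, s, Vt = np.linalg.svd(clip - mean, full_matrices=False)
    coeffs = U[:, :k] * s[:k]      # coefficients to quantize and entropy-code
    return coeffs, Vt[:k], mean    # basis rows are orthonormal

def reconstruct(coeffs, basis, mean):
    """Invert the transform: coefficients times basis, plus the mean."""
    return coeffs @ basis + mean
```

Because mocap channels are highly correlated, a small k captures most of the energy, which is what makes a data-dependent basis beat a fixed transform like the DCT.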

  14. Trends in extreme learning machines: a review.

    PubMed

    Huang, Gao; Huang, Guang-Bin; Song, Shiji; You, Keyou

    2015-01-01

    Extreme learning machine (ELM) has gained increasing interest from various research fields recently. In this review, we aim to report the current state of the theoretical research and practical advances on this subject. We first give an overview of ELM from the theoretical perspective, including the interpolation theory, universal approximation capability, and generalization ability. Then we focus on the various improvements made to ELM which further improve its stability, sparsity and accuracy under general or specific conditions. Apart from classification and regression, ELM has recently been extended for clustering, feature selection, representational learning and many other learning tasks. These newly emerging algorithms greatly expand the applications of ELM. From the implementation perspective, hardware implementation and parallel computation techniques have substantially sped up the training of ELM, making it feasible for big data processing and real-time reasoning. Due to its remarkable efficiency, simplicity, and impressive generalization performance, ELM has been applied in a variety of domains, such as biomedical engineering, computer vision, system identification, and control and robotics. In this review, we try to provide a comprehensive view of these advances in ELM together with its future perspectives.
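The basic ELM recipe is short: draw the hidden-layer weights at random, then solve a linear least-squares problem for the output weights. A minimal regression sketch (our function names; practical variants add regularization):

```python
import numpy as np

def elm_train(X, T, n_hidden, rng):
    """Basic ELM: random, untrained hidden layer; output weights by
    linear least squares."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                        # random feature map
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)  # closed-form solve
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Apply the fixed random hidden layer, then the learned weights."""
    return np.tanh(X @ W + b) @ beta
```

Because only the output layer is trained, and in closed form, training cost is one least-squares solve, which is the source of ELM's speed advantage over backpropagation.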

  15. Structure prediction of the second extracellular loop in G-protein-coupled receptors.

    PubMed

    Kmiecik, Sebastian; Jamroz, Michal; Kolinski, Michal

    2014-06-03

    G-protein-coupled receptors (GPCRs) play key roles in living organisms. Therefore, it is important to determine their functional structures. The second extracellular loop (ECL2) is a functionally important region of GPCRs, which poses a significant challenge for computational structure prediction methods. In this work, we evaluated CABS, a well-established protein modeling tool for predicting ECL2 structure in 13 GPCRs. The ECL2s (with between 13 and 34 residues) are predicted in an environment of other extracellular loops being fully flexible and the transmembrane domain fixed in its x-ray conformation. The modeling procedure used theoretical predictions of ECL2 secondary structure and experimental constraints on disulfide bridges. Our approach yielded ensembles of low-energy conformers and the most populated conformers that contained models close to the available x-ray structures. The level of similarity between the predicted models and x-ray structures is comparable to that of other state-of-the-art computational methods. Our results extend other studies by including newly crystallized GPCRs. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  16. A Third-Order Item Response Theory Model for Modeling the Effects of Domains and Subdomains in Large-Scale Educational Assessment Surveys

    ERIC Educational Resources Information Center

    Rijmen, Frank; Jeon, Minjeong; von Davier, Matthias; Rabe-Hesketh, Sophia

    2014-01-01

    Second-order item response theory models have been used for assessments consisting of several domains, such as content areas. We extend the second-order model to a third-order model for assessments that include subdomains nested in domains. Using a graphical model framework, it is shown how the model does not suffer from the curse of…

  17. Internal Versus External DSLs for Trace Analysis: Extended Abstract

    NASA Technical Reports Server (NTRS)

    Barringer, Howard; Havelund, Klaus

    2011-01-01

    This tutorial explores the design and implementation issues arising in the development of domain-specific languages for trace analysis. It introduces the audience to the general concepts underlying such special-purpose languages building upon the authors' own experiences in developing both external domain-specific languages and systems, such as EAGLE, HAWK, RULER and LOGSCOPE, and the more recent internal domain-specific language and system TRACECONTRACT within the Scala language.

  18. Structural Interface Forms and Their Involvement in Stabilization of Multidomain Proteins or Protein Complexes.

    PubMed

    Dygut, Jacek; Kalinowska, Barbara; Banach, Mateusz; Piwowar, Monika; Konieczny, Leszek; Roterman, Irena

    2016-10-18

    The presented analysis concerns the inter-domain and inter-protein interface in protein complexes. We propose extending the traditional understanding of the protein domain as a function of local compactness with an additional criterion which refers to the presence of a well-defined hydrophobic core. Interface areas in selected homodimers vary with respect to their contribution to shared as well as individual (domain-specific) hydrophobic cores. The basic definition of a protein domain, i.e., a structural unit characterized by tighter packing than its immediate environment, is extended in order to acknowledge the role of a structured hydrophobic core, which includes the interface area. The hydrophobic properties of interfaces vary depending on the status of interacting domains. In this context we can distinguish: (1) Shared hydrophobic cores (spanning the whole dimer); (2) Individual hydrophobic cores present in each monomer irrespective of whether the dimer contains a shared core. Analysis of interfaces in dystrophin and utrophin indicates the presence of an additional quasi-domain with a prominent hydrophobic core, consisting of fragments contributed by both monomers. In addition, we have also attempted to determine the relationship between the type of interface (as categorized above) and the biological function of each complex. This analysis is entirely based on the fuzzy oil drop model.

  19. Evidence in Support of the Independent Channel Model Describing the Sensorimotor Control of Human Stance Using a Humanoid Robot

    PubMed Central

    Pasma, Jantsje H.; Assländer, Lorenz; van Kordelaar, Joost; de Kam, Digna; Mergner, Thomas; Schouten, Alfred C.

    2018-01-01

    The Independent Channel (IC) model is a commonly used linear balance control model in the frequency domain to analyze human balance control using system identification and parameter estimation. The IC model is a rudimentary and noise-free description of balance behavior in the frequency domain, where a stable model representation is not guaranteed. In this study, we conducted firstly time-domain simulations with added noise, and secondly robot experiments by implementing the IC model in a real-world robot (PostuRob II) to test the validity and stability of the model in the time domain and for real world situations. Balance behavior of seven healthy participants was measured during upright stance by applying pseudorandom continuous support surface rotations. System identification and parameter estimation were used to describe the balance behavior with the IC model in the frequency domain. The IC model with the estimated parameters from human experiments was implemented in Simulink for computer simulations including noise in the time domain and robot experiments using the humanoid robot PostuRob II. Again, system identification and parameter estimation were used to describe the simulated balance behavior. Time series, Frequency Response Functions, and estimated parameters from human experiments, computer simulations, and robot experiments were compared with each other. The computer simulations showed similar balance behavior and estimated control parameters compared to the human experiments, in the time and frequency domain. Also, the IC model was able to control the humanoid robot by keeping it upright, but showed small differences compared to the human experiments in the time and frequency domain, especially at high frequencies. We conclude that the IC model, a descriptive model in the frequency domain, can imitate human balance behavior also in the time domain, both in computer simulations with added noise and real world situations with a humanoid robot. 
This provides further evidence that the IC model is a valid description of human balance control. PMID:29615886
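The system-identification step described above estimates a frequency response function from measured input and output signals; a common estimator is the averaged cross-spectrum divided by the averaged input auto-spectrum (the H1 estimator). A minimal sketch without windowing or overlap (our function name, not the study's code):

```python
import numpy as np

def estimate_frf(x, y, nseg=8):
    """H1 frequency-response-function estimate from input x and output y:
    segment-averaged cross-spectrum over segment-averaged auto-spectrum."""
    n = len(x) // nseg
    Sxy = np.zeros(n, complex)
    Sxx = np.zeros(n)
    for i in range(nseg):
        X = np.fft.fft(x[i * n:(i + 1) * n])
        Y = np.fft.fft(y[i * n:(i + 1) * n])
        Sxy += np.conj(X) * Y    # accumulate cross-spectrum
        Sxx += np.abs(X) ** 2    # accumulate input auto-spectrum
    return Sxy / Sxx
```

Averaging over segments suppresses the contribution of noise that is uncorrelated with the input, which is why the estimate stabilizes as more segments are included.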


  1. Domain Decomposition: A Bridge between Nature and Parallel Computers

    DTIC Science & Technology

    1992-09-01

    B., "Domain Decomposition Algorithms for Indefinite Elliptic Problems," SIAM Journal of Scientific and Statistical Computing, Vol. 13, 1992, pp...AD-A256 575 NASA Contractor Report 189709 ICASE Report No. 92-44 ICASE DOMAIN DECOMPOSITION: A BRIDGE BETWEEN NATURE AND PARALLEL COMPUTERS DTIC...effectively implemented on distributed memory multiprocessors. In 1990 (as reported in Ref. 38 using the tile algorithm), a 103,201-unknown 2D elliptic

  2. Parallel computing of a climate model on the dawn 1000 by domain decomposition method

    NASA Astrophysics Data System (ADS)

    Bi, Xunqiang

    1997-12-01

    In this paper the parallel computing of a grid-point nine-level atmospheric general circulation model on the Dawn 1000 is introduced. The model was developed by the Institute of Atmospheric Physics (IAP), Chinese Academy of Sciences (CAS). The Dawn 1000 is a MIMD massively parallel computer made by the National Research Center for Intelligent Computer (NCIC), CAS. A two-dimensional domain decomposition method is adopted to perform the parallel computing. Potential ways to increase the speed-up ratio and to exploit the resources of future massively parallel supercomputers are also discussed.
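The two-dimensional decomposition described above can be sketched in a few lines. This is an illustrative partitioning scheme, not code from the IAP model or the Dawn 1000: it splits a global grid across a logical process grid, spreading remainder points so that per-rank loads differ by at most one row or column.

```python
def split_1d(n, parts):
    """Return (start, stop) pairs dividing n points into `parts` chunks."""
    base, rem = divmod(n, parts)
    bounds, start = [], 0
    for p in range(parts):
        size = base + (1 if p < rem else 0)
        bounds.append((start, start + size))
        start += size
    return bounds

def decompose_2d(nx, ny, px, py):
    """Map each rank in 0..px*py-1 to its (x-range, y-range) subdomain."""
    xb, yb = split_1d(nx, px), split_1d(ny, py)
    return {p * py + q: (xb[p], yb[q]) for p in range(px) for q in range(py)}

# Hypothetical sizes: a 144 x 72 grid on a 4 x 2 process grid (8 ranks).
subdomains = decompose_2d(144, 72, 4, 2)
print(len(subdomains))
print(subdomains[0])
```

Each rank would then time-step its own subdomain, exchanging halo rows/columns with neighbors; the speed-up ratio mentioned in the abstract is governed by the ratio of interior work to that halo traffic.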

  3. The Twist Tensor Nuclear Norm for Video Completion.

    PubMed

    Hu, Wenrui; Tao, Dacheng; Zhang, Wensheng; Xie, Yuan; Yang, Yehui

    2017-12-01

    In this paper, we propose a new low-rank tensor model based on the circulant algebra, namely, the twist tensor nuclear norm (t-TNN). The twist tensor denotes a three-way tensor representation that laterally stores 2-D data slices in order. On one hand, t-TNN convexly relaxes the tensor multirank of the twist tensor in the Fourier domain, which allows an efficient computation using the fast Fourier transform. On the other hand, t-TNN is equal to the nuclear norm of the block circulant matricization of the twist tensor in the original domain, which extends the traditional matrix nuclear norm in a block circulant way. We test the t-TNN model on a video completion application that aims to fill in missing values, and the experimental results validate its effectiveness, especially when dealing with video recorded by a nonstationary panning camera. The block circulant matricization of the twist tensor can be transformed into a circulant block representation with nuclear norm invariance. This representation, after transformation, exploits the horizontal translation relationship between the frames in a video and endows the t-TNN model with a more powerful ability to reconstruct panning videos than the existing state-of-the-art low-rank models.
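The Fourier-domain computation that makes this norm efficient can be illustrated as follows. This is a minimal sketch of the norm itself, not the authors' completion algorithm; the normalization by the third dimension is a common convention and is assumed here.

```python
import numpy as np

def tensor_nuclear_norm(T):
    """FFT a 3-way array along mode 3, then sum the nuclear norms of the
    frontal slices (normalized by n3, a common convention)."""
    F = np.fft.fft(T, axis=2)
    total = 0.0
    for k in range(T.shape[2]):
        total += np.linalg.svd(F[:, :, k], compute_uv=False).sum()
    return total / T.shape[2]

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 5, 3))   # e.g. three frames of a tiny 4x5 "video"
print(tensor_nuclear_norm(T))
```

For a tensor with a single frontal slice this reduces to the ordinary matrix nuclear norm, which is the sense in which t-TNN "extends the traditional matrix nuclear norm".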

  4. Reconstruction of Vectorial Acoustic Sources in Time-Domain Tomography

    PubMed Central

    Xia, Rongmin; Li, Xu; He, Bin

    2009-01-01

    A new theory is proposed for the reconstruction of curl-free vector field, whose divergence serves as acoustic source. The theory is applied to reconstruct vector acoustic sources from the scalar acoustic signals measured on a surface enclosing the source area. It is shown that, under certain conditions, the scalar acoustic measurements can be vectorized according to the known measurement geometry and subsequently be used to reconstruct the original vector field. Theoretically, this method extends the application domain of the existing acoustic reciprocity principle from a scalar field to a vector field, indicating that the stimulating vectorial source and the transmitted acoustic pressure vector (acoustic pressure vectorized according to certain measurement geometry) are interchangeable. Computer simulation studies were conducted to evaluate the proposed theory, and the numerical results suggest that reconstruction of a vector field using the proposed theory is not sensitive to variation in the detecting distance. The present theory may be applied to magnetoacoustic tomography with magnetic induction (MAT-MI) for reconstructing current distribution from acoustic measurements. A simulation on MAT-MI shows that, compared to existing methods, the present method can give an accurate estimation on the source current distribution and a better conductivity reconstruction. PMID:19211344

  5. Smart Sensors: Why and when the origin was and why and where the future will be

    NASA Astrophysics Data System (ADS)

    Corsi, C.

    2013-12-01

    Smart Sensors are a technique developed in the 1970s, when processing capabilities based on readout integrated with signal processing were still far from the complexity needed in advanced IR surveillance and warning systems, because of the enormous amount of noise/unwanted signals emitted by the operating scenario, especially in military applications. Smart Sensor technology was kept restricted within a closed military environment, exploding in applications and performance in the 1990s thanks to the impressive improvements in integrated signal readout and processing achieved by CCD-CMOS technologies in FPAs. In fact, the rapid advances of very-large-scale integration (VLSI) processor technology and mosaic EO detector array technology allowed the development of new generations of Smart Sensors with much improved signal processing, integrating microcomputers and other VLSI signal processors inside the sensor structure and achieving some basic functions of living eyes (dynamic stare, non-uniformity compensation, spatial and temporal filtering). New and future technologies (nanotechnology, bio-organic electronics, bio-computing) are lighting a new generation of Smart Sensors, extending the smartness from the space-time domain to spectroscopic functional multi-domain signal processing. The history and future forecasting of Smart Sensors are reported.

  6. Analysis of dual-frequency MEMS antenna using H-MRTD method

    NASA Astrophysics Data System (ADS)

    Yu, Wenge; Zhong, Xianxin; Chen, Yu; Wu, Zhengzhong

    2004-10-01

    To apply micro/nano and Micro-Electro-Mechanical System (MEMS) technologies in the Radio Frequency (RF) field to the manufacture of miniature microstrip antennas, a novel MEMS dual-band patch antenna designed using slot-loaded and short-circuited size-reduction techniques is presented in this paper. By controlling the short-plane width, the two resonant frequencies, f10 and f30, can be significantly reduced, and the frequency ratio (f30/f10) is tunable in the range 1.7~2.3. The Haar-wavelet-based multiresolution time domain (H-MRTD) method, with a compactly supported scaling function applied to a full three-dimensional (3-D) wave on Yee's staggered cell, is used for modeling and analyzing the antenna for the first time. For the practical model, a uniaxial perfectly matched layer (UPML) absorbing boundary condition was developed, and the mathematical formulae were extended to an inhomogeneous medium. Numerical simulation results are compared with those of the conventional 3-D finite-difference time-domain (FDTD) method and with measurements. It has been demonstrated that, with this technique, space discretization with only a few cells per wavelength gives accurate results, leading to a reduction in both memory requirements and computation time.

  7. Electron Energy-Loss Spectroscopy (EELS)Calculation in Finite-Difference Time-Domain (FDTD) Package: EELS-FDTD

    NASA Astrophysics Data System (ADS)

    Large, Nicolas; Cao, Yang; Manjavacas, Alejandro; Nordlander, Peter

    2015-03-01

    Electron energy-loss spectroscopy (EELS) is a unique tool that has been extensively used to investigate the plasmonic response of metallic nanostructures since the early works of the 1950s. To interpret and theoretically investigate EELS results, a myriad of different numerical techniques have been developed for EELS simulations (BEM, DDA, FEM, GDTD, Green dyadic functions). Although these techniques are able to predict and reproduce experimental results, they possess significant drawbacks and are often limited to highly symmetrical geometries, non-penetrating trajectories, small nanostructures, and free-standing nanostructures. We present here a novel approach for EELS calculations using the finite-difference time-domain (FDTD) method: EELS-FDTD. We benchmark our approach by direct comparison with results from the well-established boundary element method (BEM) and published experimental results. In particular, we compute EELS spectra for spherical nanoparticles, nanoparticle dimers, nanodisks supported by various substrates, and gold bowtie antennas on a silicon nitride substrate. Our EELS-FDTD implementation can be easily extended to more complex geometries and configurations and can be directly implemented within other numerical methods. Work funded by the Welch Foundation (C-1222, L-C-004) and the NSF (CNS-0821727, OCI-0959097).

  8. Fast Maximum Entropy Moment Closure Approach to Solving the Boltzmann Equation

    NASA Astrophysics Data System (ADS)

    Summy, Dustin; Pullin, Dale

    2015-11-01

    We describe a method for a moment-based solution of the Boltzmann Equation (BE). This is applicable to an arbitrary set of velocity moments whose transport is governed by partial-differential equations (PDEs) derived from the BE. The equations are unclosed, containing both higher-order moments and molecular-collision terms. These are evaluated using a maximum-entropy reconstruction of the velocity distribution function f(c, x, t), from the known moments, within a finite-box domain of single-particle velocity (c) space. Use of a finite domain alleviates known problems (Junk and Unterreiter, Continuum Mech. Thermodyn., 2002) concerning existence and uniqueness of the reconstruction. Unclosed moments are evaluated with quadrature while collision terms are calculated using any desired method. This allows integration of the moment PDEs in time. The high computational cost of the general method is greatly reduced by careful choice of the velocity moments, allowing the necessary integrals to be reduced from three- to one-dimensional in the case of strictly 1D flows. A method to extend this enhancement to fully 3D flows is discussed. Comparisons with relaxation and shock-wave problems using the DSMC method will be presented. Partially supported by NSF grant DMS-1418903.

  9. Comprehensive security framework for the communication and storage of medical images

    NASA Astrophysics Data System (ADS)

    Slik, David; Montour, Mike; Altman, Tym

    2003-05-01

    Confidentiality, integrity verification and access control of medical imagery and associated metadata is critical for the successful deployment of integrated healthcare networks that extend beyond the department level. As medical imagery continues to become widely accessed across multiple administrative domains and geographically distributed locations, image data should be able to travel and be stored on untrusted infrastructure, including public networks and server equipment operated by external entities. Given these challenges associated with protecting large-scale distributed networks, measures must be taken to protect patient identifiable information while guarding against tampering, denial of service attacks, and providing robust audit mechanisms. The proposed framework outlines a series of security practices for the protection of medical images, incorporating Transport Layer Security (TLS), public and secret key cryptography, certificate management and a token based trusted computing base. It outlines measures that can be utilized to protect information stored within databases, online and nearline storage, and during transport over trusted and untrusted networks. In addition, it provides a framework for ensuring end-to-end integrity of image data from acquisition to viewing, and presents a potential solution to the challenges associated with access control across multiple administrative domains and institution user bases.
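One ingredient of such a framework, end-to-end integrity verification of image data from acquisition to viewing, can be sketched with a keyed hash. This is a hedged illustration, not the paper's design: key management, TLS transport, and access control are out of scope, and the names and values below are hypothetical.

```python
import hashlib
import hmac

def seal(image_bytes, key):
    """Tag binding the pixel data to a secret key (SHA-256 HMAC)."""
    return hmac.new(key, image_bytes, hashlib.sha256).hexdigest()

def verify(image_bytes, key, tag):
    """Constant-time comparison guards against timing attacks."""
    return hmac.compare_digest(seal(image_bytes, key), tag)

key = b"demo-secret"              # in practice: per-domain managed keys
pixels = b"\x00\x01\x02\x03"      # stand-in for image pixel data
tag = seal(pixels, key)
print(verify(pixels, key, tag))           # True
print(verify(pixels + b"!", key, tag))    # False
```

A tag computed at acquisition and checked at viewing detects tampering anywhere along untrusted storage or transport, which is the end-to-end property the abstract describes.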

  10. Fully 3D modeling of tokamak vertical displacement events with realistic parameters

    NASA Astrophysics Data System (ADS)

    Pfefferle, David; Ferraro, Nathaniel; Jardin, Stephen; Bhattacharjee, Amitava

    2016-10-01

    In this work, we model the complex multi-domain and highly non-linear physics of Vertical Displacement Events (VDEs), one of the most damaging off-normal events in tokamaks, with the implicit 3D extended MHD code M3D-C1. The code has recently acquired the capability to include finite thickness conducting structures within the computational domain. By exploiting the possibility of running a linear 3D calculation on top of a non-linear 2D simulation, we monitor the non-axisymmetric stability and assess the eigen-structure of kink modes as the simulation proceeds. Once a stability boundary is crossed, a fully 3D non-linear calculation is launched for the remainder of the simulation, starting from an earlier time of the 2D run. This procedure, along with adaptive zoning, greatly increases the efficiency of the calculation, and allows to perform VDE simulations with realistic parameters and high resolution. Simulations are being validated with NSTX data where both axisymmetric (toroidally averaged) and non-axisymmetric induced and conductive (halo) currents have been measured. This work is supported by US DOE Grant DE-AC02-09CH11466.

  11. A Numerical Model of Unsteady, Subsonic Aeroelastic Behavior. Ph.D Thesis

    NASA Technical Reports Server (NTRS)

    Strganac, Thomas W.

    1987-01-01

    A method for predicting unsteady, subsonic aeroelastic responses was developed. The technique accounts for aerodynamic nonlinearities associated with angles of attack, vortex-dominated flow, static deformations, and unsteady behavior. The fluid and the wing together are treated as a single dynamical system, and the equations of motion for the structure and flow field are integrated simultaneously and interactively in the time domain. The method employs an iterative scheme based on a predictor-corrector technique. The aerodynamic loads are computed by the general unsteady vortex-lattice method and are determined simultaneously with the motion of the wing. Because the unsteady vortex-lattice method predicts the wake as part of the solution, the history of the motion is taken into account; hysteresis is predicted. Two models are used to demonstrate the technique: a rigid wing on an elastic support experiencing plunge and pitch about the elastic axis, and an elastic wing rigidly supported at the root chord experiencing spanwise bending and twisting. The method can be readily extended to account for structural nonlinearities and/or substitute aerodynamic load models. The time domain solution coupled with the unsteady vortex-lattice method provides the capability of graphically depicting wing and wake motion.
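The predictor-corrector idea behind the iterative scheme can be sketched generically. The block below is a standard Heun-type step, with a toy right-hand side standing in for the coupled structural/aerodynamic loads; it is not the paper's vortex-lattice solver.

```python
def heun_step(f, y, t, dt):
    """One predictor-corrector step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    y_pred = [yi + dt * ki for yi, ki in zip(y, k1)]      # predictor (Euler)
    k2 = f(t + dt, y_pred)                                # re-evaluate loads
    return [yi + 0.5 * dt * (a + b) for yi, a, b in zip(y, k1, k2)]

def rhs(t, y):
    """Toy dynamics: lightly damped oscillator standing in for a plunging
    wing section (state = displacement, velocity)."""
    x, v = y
    return [v, -x - 0.1 * v]

y, t, dt = [1.0, 0.0], 0.0, 0.01
for _ in range(100):        # march one second in time
    y = heun_step(rhs, y, t, dt)
    t += dt
print(y)
```

In the paper's setting the load evaluation inside each step is itself the unsteady vortex-lattice computation, which is why the loads and the motion are "determined simultaneously".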

  12. Proposal for automated transformations on single-photon multipath qudits

    NASA Astrophysics Data System (ADS)

    Baldijão, R. D.; Borges, G. F.; Marques, B.; Solís-Prosser, M. A.; Neves, L.; Pádua, S.

    2017-09-01

    We propose a method for implementing automated state transformations on single-photon multipath qudits encoded in a one-dimensional transverse spatial domain. It relies on transferring the encoding from this domain to the orthogonal one by applying a spatial phase modulation with diffraction gratings, merging all the initial propagation paths by using a stable interferometric network, and filtering out the unwanted diffraction orders. The automation feature is attained by utilizing a programmable phase-only spatial light modulator (SLM) where properly designed diffraction gratings displayed on its screen will implement the desired transformations, including, among others, projections, permutations, and random operations. We discuss the losses in the process which is, in general, inherently nonunitary. Some examples of transformations are presented and, considering a realistic scenario, we analyze how they will be affected by the pixelated structure of the SLM screen. The method proposed here enables one to implement much more general transformations on multipath qudits than is possible with a SLM alone operating in the diagonal basis of which-path states. Therefore, it will extend the range of applicability for this encoding in high-dimensional quantum information and computing protocols as well as fundamental studies in quantum theory.

  13. Grid generation and adaptation via Monge-Kantorovich optimization in 2D and 3D

    NASA Astrophysics Data System (ADS)

    Delzanno, Gian Luca; Chacon, Luis; Finn, John M.

    2008-11-01

    In a recent paper [1], Monge-Kantorovich (MK) optimization was proposed as a method of grid generation/adaptation in two dimensions (2D). The method is based on the minimization of the L2 norm of grid point displacement, constrained to producing a given positive-definite cell volume distribution (equidistribution constraint). The procedure gives rise to the Monge-Ampère (MA) equation: a single, non-linear scalar equation with no free parameters. The MA equation was solved in Ref. [1] with the Jacobian-Free Newton-Krylov technique and several challenging test cases were presented in square domains in 2D. Here, we extend the work of Ref. [1]. We first formulate the MK approach in physical domains with curved boundary elements and in 3D. We then show the results of applying it to these more general cases. We show that MK optimization produces optimal grids in which the constraint is satisfied numerically to truncation error. [1] G.L. Delzanno, L. Chacón, J.M. Finn, Y. Chung, G. Lapenta, A new, robust equidistribution method for two-dimensional grid generation, submitted to Journal of Computational Physics (2008).
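In one dimension the equidistribution constraint driving such grid adaptation reduces to inverting the cumulative integral of the prescribed density, which can be sketched directly (the 2D/3D case requires solving the Monge-Ampère equation, as in the paper). The density below is illustrative, and rho is assumed strictly positive.

```python
import math

def equidistribute(rho, n, a=0.0, b=1.0, samples=2000):
    """Return n+1 nodes on [a, b] equidistributing a positive density rho,
    by sampling its cumulative integral and inverting it linearly."""
    h = (b - a) / samples
    xs = [a + i * h for i in range(samples + 1)]
    cdf = [0.0]
    for i in range(samples):  # trapezoid rule for the running integral
        cdf.append(cdf[-1] + 0.5 * (rho(xs[i]) + rho(xs[i + 1])) * h)
    total, nodes, j = cdf[-1], [], 0
    for k in range(n + 1):
        target = total * k / n
        while cdf[j + 1] < target:  # advance to the bracketing sample
            j += 1
        t = (target - cdf[j]) / (cdf[j + 1] - cdf[j])
        nodes.append(xs[j] + t * h)
    return nodes

# A density peaked at x = 0.5 pulls nodes toward the peak.
nodes = equidistribute(lambda x: 1.0 + 10.0 * math.exp(-100.0 * (x - 0.5) ** 2), 8)
print(nodes)
```

Each resulting cell carries equal "mass" of rho, the 1D analogue of the prescribed cell volume distribution in the MK formulation.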

  14. Application of the piecewise rational quadratic interpolant to the AUC calculation in the bioavailability study.

    PubMed

    Akhter, Khalid P; Ahmad, Mahmood; Khan, Shujaat Ali; Ramzan, Munazza; Shafi, Ishrat; Muryam, Burhana; Javed, Zafar; Murtaza, Ghulam

    2012-01-01

    This study presents an application of the piecewise rational quadratic interpolant to the AUC calculation in a bioavailability study. The objective of this work is to find the area under the plasma concentration-time curve (AUC) for multiple doses of salbutamol sulfate sustained release tablets (Ventolin oral tablets SR 8 mg, GSK, Pakistan) in a group of 24 healthy adults by using computational mathematics techniques. Following the administration of 4 doses of Ventolin tablets 12 hourly to 24 healthy human subjects and bioanalysis of the obtained plasma samples, the plasma drug concentration-time profile was constructed. The approximated AUC was computed using computational mathematics techniques such as the extended rectangular, extended trapezium and extended Simpson's rules, and compared with the exact value of AUC calculated using the software Kinetica, to find the computational method that gives AUC values closest to the exact ones. The exact values of AUC for four consecutive doses of Ventolin oral tablets were 150.58, 157.81, 164.41 and 162.78 ng·h/mL, while the closest approximated AUC values were 149.24, 157.33, 164.25 and 162.28 ng·h/mL, respectively, as found by the extended rectangular rule. The errors in the approximated values of AUC were negligible. It is concluded that all computational tools approximated the values of AUC accurately, but the extended rectangular rule gives slightly better approximations than the extended trapezium and extended Simpson's rules.
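The composite quadrature rules compared in the study are straightforward to write down. The concentration values below are made up for illustration, not the Ventolin data; "extended" denotes the composite rule applied over all sampling intervals.

```python
def auc_rectangular(t, c):
    """Extended (composite) rectangular rule, left endpoints."""
    return sum(c[i] * (t[i + 1] - t[i]) for i in range(len(t) - 1))

def auc_trapezium(t, c):
    """Extended (composite) trapezium rule."""
    return sum(0.5 * (c[i] + c[i + 1]) * (t[i + 1] - t[i])
               for i in range(len(t) - 1))

def auc_simpson(t, c):
    """Extended (composite) Simpson's rule; assumes an even number of
    equally spaced intervals."""
    n = len(t) - 1
    h = (t[-1] - t[0]) / n
    return (c[0] + c[-1] + 4 * sum(c[1:-1:2]) + 2 * sum(c[2:-1:2])) * h / 3

# Hypothetical samples every 2 h over one 12 h dosing interval (ng/mL).
t = [0, 2, 4, 6, 8, 10, 12]
c = [0.0, 8.0, 12.0, 10.0, 7.0, 4.0, 2.0]
print(auc_rectangular(t, c), auc_trapezium(t, c), auc_simpson(t, c))
```

On sparse, unevenly sampled clinical data the three rules can rank differently than on smooth test functions, which is consistent with the rectangular rule coming out slightly ahead in this study.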

  15. Reconstruction of instantaneous surface normal velocity of a vibrating structure using interpolated time-domain equivalent source method

    NASA Astrophysics Data System (ADS)

    Geng, Lin; Bi, Chuan-Xing; Xie, Feng; Zhang, Xiao-Zheng

    2018-07-01

    The interpolated time-domain equivalent source method is extended to reconstruct the instantaneous surface normal velocity of a vibrating structure by using the time-evolving particle velocity as the input, which provides a non-contact way to gain an overall understanding of the instantaneous vibration behavior of the structure. In this method, the time-evolving particle velocity in the near field is first modeled by a set of equivalent sources positioned inside the vibrating structure, and then the integrals of the equivalent source strengths are solved by an iterative solving process and further used to calculate the instantaneous surface normal velocity. An experiment on a semi-cylindrical steel plate impacted by a steel ball is investigated to examine the ability of the extended method, where the time-evolving normal particle velocity and pressure on the hologram surface measured by a Microflown pressure-velocity probe are used as the inputs of the extended method and of the method based on pressure measurements, respectively, and the instantaneous surface normal velocity of the plate measured by a laser Doppler vibrometer is used as the reference for comparison. The experimental results demonstrate that the extended method is a powerful tool for visualizing the instantaneous surface normal velocity of a vibrating structure in both the time and space domains and can obtain more accurate results than the method based on pressure measurements.

  16. Repetitive Domain-Referenced Testing Using Computers: the TITA System.

    ERIC Educational Resources Information Center

    Olympia, P. L., Jr.

    The TITA (Totally Interactive Testing and Analysis) System algorithm for the repetitive construction of domain-referenced tests utilizes a compact data bank, is highly portable, is useful in any discipline, requires modest computer hardware, and does not present a security problem. Clusters of related keyphrases, statement phrases, and distractors…

  17. Mapping University Students' Epistemic Framing of Computational Physics Using Network Analysis

    ERIC Educational Resources Information Center

    Bodin, Madelen

    2012-01-01

    Solving physics problem in university physics education using a computational approach requires knowledge and skills in several domains, for example, physics, mathematics, programming, and modeling. These competences are in turn related to students' beliefs about the domains as well as about learning. These knowledge and beliefs components are…

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manjunath, Naren; Samajdar, Rhine; Jain, Sudhir R., E-mail: srjain@barc.gov.in

    Recently, the nodal domain counts of planar, integrable billiards with Dirichlet boundary conditions were shown to satisfy certain difference equations in Samajdar and Jain (2014). The exact solutions of these equations give the number of domains explicitly. For complete generality, we demonstrate this novel formulation for three additional separable systems and thus extend the statement to all integrable billiards.

  19. Extension theorems for homogenization on lattice structures

    NASA Technical Reports Server (NTRS)

    Miller, Robert E.

    1992-01-01

    When applying homogenization techniques to problems involving lattice structures, it is necessary to extend certain functions defined on a perforated domain to a simply connected domain. This paper provides general extension operators which preserve bounds on derivatives of order l. Only the special case of honeycomb structures is considered.

  20. The Interplay between Executive Control and Motor Functioning in Williams Syndrome

    ERIC Educational Resources Information Center

    Hocking, Darren R.; Thomas, Daniel; Menant, Jasmine C.; Porter, Melanie A.; Smith, Stuart; Lord, Stephen R.; Cornish, Kim M.

    2013-01-01

    Previous studies suggest that individuals with Williams syndrome (WS), a rare genetically based neurodevelopmental disorder, show specific weaknesses in visual attention and response inhibition within the visuospatial domain. Here we examine the extent to which impairments in attentional control extend to the visuomotor domain using a…

  1. Near-Optimal Guidance Method for Maximizing the Reachable Domain of Gliding Aircraft

    NASA Astrophysics Data System (ADS)

    Tsuchiya, Takeshi

    This paper proposes a guidance method for gliding aircraft by using onboard computers to calculate a near-optimal trajectory in real-time, and thereby expanding the reachable domain. The results are applicable to advanced aircraft and future space transportation systems that require high safety. The calculation load of the optimal control problem that is used to maximize the reachable domain is too large for current computers to calculate in real-time. Thus the optimal control problem is divided into two problems: a gliding distance maximization problem in which the aircraft motion is limited to a vertical plane, and an optimal turning flight problem in a horizontal direction. First, the former problem is solved using a shooting method. It can be solved easily because its scale is smaller than that of the original problem, and because some of the features of the optimal solution are obtained in the first part of this paper. Next, in the latter problem, the optimal bank angle is computed from the solution of the former; this is an analytical computation, rather than an iterative computation. Finally, the reachable domain obtained from the proposed near-optimal guidance method is compared with that obtained from the original optimal control problem.
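The shooting method used for the gliding-distance subproblem can be illustrated on a toy boundary-value problem: guess an unknown initial condition, integrate forward, and refine the guess until the terminal condition is met (bisection here). The ODE below is a stand-in, not the aircraft dynamics.

```python
import math

def integrate(slope, n=1000):
    """Explicit Euler for y'' = -y on [0, pi/2], y(0) = 0, y'(0) = slope."""
    h = (math.pi / 2) / n
    y, v = 0.0, slope
    for _ in range(n):
        y, v = y + h * v, v - h * y
    return y

def shoot(target=1.0, lo=0.0, hi=2.0, tol=1e-8):
    """Bisect on the unknown initial slope until y(pi/2) hits the target."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if integrate(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(shoot())   # near 1.0: the exact solution of this toy BVP is y = sin(t)
```

Restricting the motion to a vertical plane, as the paper does, shrinks the state space enough that such forward integrations become cheap, which is what makes the real-time near-optimal guidance feasible.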

  2. Computational modeling of Repeat1 region of INI1/hSNF5: An evolutionary link with ubiquitin

    PubMed Central

    Bhutoria, Savita

    2016-01-01

    Abstract The structure of a protein can be very informative of its function. However, determining protein structures experimentally can often be very challenging. Computational methods have been used successfully in modeling structures with sufficient accuracy. Here we have used computational tools to predict the structure of an evolutionarily conserved and functionally significant domain of Integrase interactor (INI)1/hSNF5 protein. INI1 is a component of the chromatin remodeling SWI/SNF complex, a tumor suppressor and is involved in many protein‐protein interactions. It belongs to SNF5 family of proteins that contain two conserved repeat (Rpt) domains. Rpt1 domain of INI1 binds to HIV‐1 Integrase, and acts as a dominant negative mutant to inhibit viral replication. Rpt1 domain also interacts with oncogene c‐MYC and modulates its transcriptional activity. We carried out an ab initio modeling of a segment of INI1 protein containing the Rpt1 domain. The structural model suggested the presence of a compact and well defined ββαα topology as core structure in the Rpt1 domain of INI1. This topology in Rpt1 was similar to PFU domain of Phospholipase A2 Activating Protein, PLAA. Interestingly, PFU domain shares similarity with Ubiquitin and has ubiquitin binding activity. Because of the structural similarity between Rpt1 domain of INI1 and PFU domain of PLAA, we propose that Rpt1 domain of INI1 may participate in ubiquitin recognition or binding with ubiquitin or ubiquitin related proteins. This modeling study may shed light on the mode of interactions of Rpt1 domain of INI1 and is likely to facilitate future functional studies of INI1. PMID:27261671

  3. Computational modeling of Repeat1 region of INI1/hSNF5: An evolutionary link with ubiquitin.

    PubMed

    Bhutoria, Savita; Kalpana, Ganjam V; Acharya, Seetharama A

    2016-09-01

    The structure of a protein can be very informative of its function. However, determining protein structures experimentally can often be very challenging. Computational methods have been used successfully in modeling structures with sufficient accuracy. Here we have used computational tools to predict the structure of an evolutionarily conserved and functionally significant domain of Integrase interactor (INI)1/hSNF5 protein. INI1 is a component of the chromatin remodeling SWI/SNF complex, a tumor suppressor and is involved in many protein-protein interactions. It belongs to SNF5 family of proteins that contain two conserved repeat (Rpt) domains. Rpt1 domain of INI1 binds to HIV-1 Integrase, and acts as a dominant negative mutant to inhibit viral replication. Rpt1 domain also interacts with oncogene c-MYC and modulates its transcriptional activity. We carried out an ab initio modeling of a segment of INI1 protein containing the Rpt1 domain. The structural model suggested the presence of a compact and well defined ββαα topology as core structure in the Rpt1 domain of INI1. This topology in Rpt1 was similar to PFU domain of Phospholipase A2 Activating Protein, PLAA. Interestingly, PFU domain shares similarity with Ubiquitin and has ubiquitin binding activity. Because of the structural similarity between Rpt1 domain of INI1 and PFU domain of PLAA, we propose that Rpt1 domain of INI1 may participate in ubiquitin recognition or binding with ubiquitin or ubiquitin related proteins. This modeling study may shed light on the mode of interactions of Rpt1 domain of INI1 and is likely to facilitate future functional studies of INI1. © 2016 The Protein Society.

  4. Hypertext Interchange Using ICA.

    ERIC Educational Resources Information Center

    Rada, Roy; And Others

    1995-01-01

    Discusses extended ICA (Integrated Chameleon Architecture), a public domain toolset for generating text-to-hypertext translators. A system called SGML-MUCH has been developed using E-ICA (Extended Integrated Chameleon Architecture) and is presented as a case study with converters for the hypertext systems MUCH, Guide, Hyperties, and Toolbook.…

  5. Cognitive Neuroscience of Attention Deficit Hyperactivity Disorder: Current Status and Working Hypotheses

    ERIC Educational Resources Information Center

    Vaidya, Chandan J.; Stollstorff, Melanie

    2008-01-01

    Cognitive neuroscience studies of Attention Deficit Hyperactivity Disorder (ADHD) suggest multiple loci of pathology with respect to both cognitive domains and neural circuitry. Cognitive deficits extend beyond executive functioning to include spatial, temporal, and lower-level "nonexecutive" functions. Atypical functional anatomy extends beyond…

  6. Sinogram restoration in computed tomography with an edge-preserving penalty

    PubMed Central

    Little, Kevin J.; La Rivière, Patrick J.

    2015-01-01

    Purpose: With the goal of producing a less computationally intensive alternative to fully iterative penalized-likelihood image reconstruction, our group has explored the use of penalized-likelihood sinogram restoration for transmission tomography. Previously, we have exclusively used a quadratic penalty in our restoration objective function. However, a quadratic penalty does not excel at preserving edges while reducing noise. Here, we derive a restoration update equation for nonquadratic penalties. Additionally, we perform a feasibility study to extend our sinogram restoration method to a helical cone-beam geometry and clinical data. Methods: A restoration update equation for nonquadratic penalties is derived using separable parabolic surrogates (SPS). A method for calculating sinogram degradation coefficients for a helical cone-beam geometry is proposed. Using simulated data, sinogram restorations are performed using both a quadratic penalty and the edge-preserving Huber penalty. After sinogram restoration, Fourier-based analytical methods are used to obtain reconstructions, and resolution-noise trade-offs are investigated. For the fan-beam geometry, a comparison is made to image-domain SPS reconstruction using the Huber penalty. The effects of varying object size and contrast are also investigated. For the helical cone-beam geometry, we investigate the effect of helical pitch (axial movement/rotation). Huber-penalty sinogram restoration is performed on 3D clinical data, and the reconstructed images are compared to those generated with no restoration. Results: We find that by applying the edge-preserving Huber penalty to our sinogram restoration methods, the reconstructed image has a better resolution-noise relationship than an image produced using a quadratic penalty in the sinogram restoration. 
However, we find that this relatively straightforward approach to edge preservation in the sinogram domain is affected by the physical size of imaged objects in addition to the contrast across the edge. This presents some disadvantages of this method relative to image-domain edge-preserving methods, although the computational burden of the sinogram-domain approach is much lower. For a helical cone-beam geometry, we found applying sinogram restoration in 3D was reasonable and that pitch did not make a significant difference in the general effect of sinogram restoration. The application of Huber-penalty sinogram restoration to clinical data resulted in a reconstruction with less noise while retaining resolution. Conclusions: Sinogram restoration with the Huber penalty is able to provide better resolution-noise performance than restoration with a quadratic penalty. Additionally, sinogram restoration with the Huber penalty is feasible for helical cone-beam CT and can be applied to clinical data. PMID:25735286
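The edge-preserving Huber penalty referred to above is quadratic for small neighbor differences (smoothing noise) and linear for large ones (preserving edges). A minimal 1D sketch, with an illustrative threshold delta not taken from the paper:

```python
def huber(d, delta):
    """Quadratic below the threshold delta, linear above it."""
    if abs(d) <= delta:
        return 0.5 * d * d
    return delta * (abs(d) - 0.5 * delta)

def roughness(signal, delta):
    """Sum of Huber penalties over neighboring samples (1D analogue of
    the sinogram-domain penalty)."""
    return sum(huber(b - a, delta) for a, b in zip(signal, signal[1:]))

noise = [0.0, 0.1, -0.1, 0.05]   # small fluctuations: quadratic regime
edge = [0.0, 0.0, 5.0, 5.0]      # one sharp jump: linear regime
print(roughness(noise, 1.0))
print(roughness(edge, 1.0))      # 4.5, versus 12.5 for a pure quadratic penalty
```

Because a large jump is charged linearly rather than quadratically, the restoration is not pushed to smear edges away, which is the resolution-noise advantage reported in the Results.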

  7. Sinogram restoration in computed tomography with an edge-preserving penalty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Little, Kevin J., E-mail: little@uchicago.edu; La Rivière, Patrick J.

    2015-03-15

    Purpose: With the goal of producing a less computationally intensive alternative to fully iterative penalized-likelihood image reconstruction, our group has explored the use of penalized-likelihood sinogram restoration for transmission tomography. Previously, we have exclusively used a quadratic penalty in our restoration objective function. However, a quadratic penalty does not excel at preserving edges while reducing noise. Here, we derive a restoration update equation for nonquadratic penalties. Additionally, we perform a feasibility study to extend our sinogram restoration method to a helical cone-beam geometry and clinical data. Methods: A restoration update equation for nonquadratic penalties is derived using separable parabolic surrogates (SPS). A method for calculating sinogram degradation coefficients for a helical cone-beam geometry is proposed. Using simulated data, sinogram restorations are performed using both a quadratic penalty and the edge-preserving Huber penalty. After sinogram restoration, Fourier-based analytical methods are used to obtain reconstructions, and resolution-noise trade-offs are investigated. For the fan-beam geometry, a comparison is made to image-domain SPS reconstruction using the Huber penalty. The effects of varying object size and contrast are also investigated. For the helical cone-beam geometry, we investigate the effect of helical pitch (axial movement/rotation). Huber-penalty sinogram restoration is performed on 3D clinical data, and the reconstructed images are compared to those generated with no restoration. Results: We find that by applying the edge-preserving Huber penalty to our sinogram restoration methods, the reconstructed image has a better resolution-noise relationship than an image produced using a quadratic penalty in the sinogram restoration.
However, we find that this relatively straightforward approach to edge preservation in the sinogram domain is affected by the physical size of imaged objects in addition to the contrast across the edge. This presents some disadvantages of this method relative to image-domain edge-preserving methods, although the computational burden of the sinogram-domain approach is much lower. For a helical cone-beam geometry, we found applying sinogram restoration in 3D was reasonable and that pitch did not make a significant difference in the general effect of sinogram restoration. The application of Huber-penalty sinogram restoration to clinical data resulted in a reconstruction with less noise while retaining resolution. Conclusions: Sinogram restoration with the Huber penalty is able to provide better resolution-noise performance than restoration with a quadratic penalty. Additionally, sinogram restoration with the Huber penalty is feasible for helical cone-beam CT and can be applied to clinical data. PMID:25735286
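The key contrast in this record is between a quadratic penalty, which keeps growing across large differences and therefore blurs edges, and the Huber penalty, which switches to linear growth beyond a threshold. A minimal sketch of the two penalties and a 1D roughness sum (the threshold name `delta` and the neighbor sum are illustrative, not the authors' SPS update equation):

```python
def huber_penalty(t, delta):
    """Huber penalty: quadratic for |t| <= delta, linear beyond.

    Small differences (noise) are penalized quadratically, while large
    differences (edges) are penalized only linearly, so edges are
    smoothed far less than with a pure quadratic penalty.
    """
    if abs(t) <= delta:
        return 0.5 * t * t
    return delta * (abs(t) - 0.5 * delta)

def quadratic_penalty(t):
    return 0.5 * t * t

def roughness(sinogram, penalty):
    """Sum a pairwise penalty over neighboring sinogram bins (1D sketch)."""
    return sum(penalty(a - b) for a, b in zip(sinogram, sinogram[1:]))
```

For a jump of 10 with `delta = 1`, the Huber penalty charges 9.5 where the quadratic charges 50, which is why the edge survives restoration.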

  8. Use of the Fracture Continuum Model for Numerical Modeling of Flow and Transport of Deep Geologic Disposal of Nuclear Waste in Crystalline Rock

    NASA Astrophysics Data System (ADS)

    Hadgu, T.; Kalinina, E.; Klise, K. A.; Wang, Y.

    2015-12-01

    Numerical modeling of disposal of nuclear waste in a deep geologic repository in fractured crystalline rock requires robust characterization of fractures. Various methods for fracture representation in granitic rocks exist. In this study we used the fracture continuum model (FCM) to characterize fractured rock for use in the simulation of flow and transport in the far field of a generic nuclear waste repository located at 500 m depth. The FCM approach is a stochastic method that maps the permeability of discrete fractures onto a regular grid. The method generates permeability fields using field observations of fracture sets. The original method described in McKenna and Reeves (2005) was designed for vertical fractures. The method has since been extended to incorporate fully three-dimensional representations of anisotropic permeability, multiple independent fracture sets, arbitrary fracture dips and orientations, and spatial correlation (Kalinina et al. 2012, 2014). For this study the numerical code PFLOTRAN (Lichtner et al., 2015) has been used to model flow and transport. PFLOTRAN solves a system of generally nonlinear partial differential equations describing multiphase, multicomponent and multiscale reactive flow and transport in porous materials. The code is designed to run on massively parallel computing architectures as well as workstations and laptops (e.g. Hammond et al., 2011). Benchmark tests were conducted to simulate flow and transport in a specified model domain. Distributions of fracture parameters were used to generate a selected number of realizations. For each realization, the FCM method was used to generate a permeability field of the fractured rock. The PFLOTRAN code was then used to simulate flow and transport in the domain. Simulation results and analysis are presented. The results indicate that the FCM approach is a viable method to model fractured crystalline rocks.
The FCM is a computationally efficient way to generate realistic representations of complex fracture systems. This approach is of interest for nuclear waste disposal models applied over large domains.
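The core FCM step described above, mapping stochastically generated discrete fractures onto a regular permeability grid, can be sketched as follows. The fracture statistics, permeability values, and 2D rasterization here are simplified stand-ins for the field-calibrated, fully 3D distributions the study uses:

```python
import math
import random

def fcm_permeability(nx, ny, n_fractures, k_matrix=1e-18, k_fracture=1e-12,
                     seed=0):
    """Map randomly generated 2D line fractures onto a regular grid.

    Every cell starts at the background (matrix) permeability; cells
    crossed by a fracture trace are raised to the fracture permeability.
    A real FCM implementation draws fracture statistics from field
    observations and handles anisotropy and dip; this is a sketch.
    """
    rng = random.Random(seed)
    grid = [[k_matrix] * nx for _ in range(ny)]
    for _ in range(n_fractures):
        # Random center, orientation, and half-length (in cell units).
        cx, cy = rng.uniform(0, nx), rng.uniform(0, ny)
        theta = rng.uniform(0, math.pi)
        half_len = rng.uniform(2, max(nx, ny) / 4)
        # Walk along the fracture trace, marking each cell it crosses.
        steps = int(4 * half_len)  # ~half-cell sampling along the trace
        for s in range(steps + 1):
            t = -half_len + s * (2 * half_len / steps)
            i = int(cx + t * math.cos(theta))
            j = int(cy + t * math.sin(theta))
            if 0 <= i < nx and 0 <= j < ny:
                grid[j][i] = k_fracture
    return grid
```

The resulting grid can be handed directly to a continuum flow simulator such as PFLOTRAN, which is the point of the approach: no unstructured discrete-fracture mesh is needed.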

  9. Towards human-computer synergetic analysis of large-scale biological data.

    PubMed

    Singh, Rahul; Yang, Hui; Dalziel, Ben; Asarnow, Daniel; Murad, William; Foote, David; Gormley, Matthew; Stillman, Jonathan; Fisher, Susan

    2013-01-01

    Advances in technology have led to the generation of massive amounts of complex and multifarious biological data in areas ranging from genomics to structural biology. The volume and complexity of such data leads to significant challenges in terms of its analysis, especially when one seeks to generate hypotheses or explore the underlying biological processes. At the state-of-the-art, the application of automated algorithms followed by perusal and analysis of the results by an expert continues to be the predominant paradigm for analyzing biological data. This paradigm works well in many problem domains. However, it also is limiting, since domain experts are forced to apply their instincts and expertise such as contextual reasoning, hypothesis formulation, and exploratory analysis after the algorithm has produced its results. In many areas where the organization and interaction of the biological processes is poorly understood and exploratory analysis is crucial, what is needed is to integrate domain expertise during the data analysis process and use it to drive the analysis itself. In context of the aforementioned background, the results presented in this paper describe advancements along two methodological directions. First, given the context of biological data, we utilize and extend a design approach called experiential computing from multimedia information system design. This paradigm combines information visualization and human-computer interaction with algorithms for exploratory analysis of large-scale and complex data. 
In the proposed approach, emphasis is laid on: (1) allowing users to directly visualize, interact, experience, and explore the data through interoperable visualization-based and algorithmic components, (2) supporting unified query and presentation spaces to facilitate experimentation and exploration, (3) providing external contextual information by assimilating relevant supplementary data, and (4) encouraging user-directed information visualization, data exploration, and hypothesis formulation. Second, to illustrate the proposed design paradigm and measure its efficacy, we describe two prototype web applications. The first, called XMAS (Experiential Microarray Analysis System), is designed for analysis of time-series transcriptional data. The second system, called PSPACE (Protein Space Explorer), is designed for holistic analysis of structural and structure-function relationships using interactive low-dimensional maps of the protein structure space. Both these systems promote and facilitate human-computer synergy, where cognitive elements such as domain knowledge, contextual reasoning, and purpose-driven exploration are integrated with a host of powerful algorithmic operations that support large-scale data analysis, multifaceted data visualization, and multi-source information integration. The proposed design philosophy combines visualization, algorithmic components, and cognitive expertise into a seamless processing-analysis-exploration framework that facilitates sense-making, exploration, and discovery. Using XMAS, we present case studies that analyze transcriptional data from two highly complex domains: gene expression in the placenta during human pregnancy and reaction of marine organisms to heat stress. With PSPACE, we demonstrate how complex structure-function relationships can be explored. These results demonstrate the novelty, advantages, and distinctions of the proposed paradigm.
Furthermore, the results also highlight how domain insights can be combined with algorithms to discover meaningful knowledge and formulate evidence-based hypotheses during the data analysis process. Finally, user studies against comparable systems indicate that both XMAS and PSPACE deliver results with better interpretability while placing lower cognitive loads on the users. XMAS is available at: http://tintin.sfsu.edu:8080/xmas. PSPACE is available at: http://pspace.info/.

  10. Towards human-computer synergetic analysis of large-scale biological data

    PubMed Central

    2013-01-01

    Background Advances in technology have led to the generation of massive amounts of complex and multifarious biological data in areas ranging from genomics to structural biology. The volume and complexity of such data leads to significant challenges in terms of its analysis, especially when one seeks to generate hypotheses or explore the underlying biological processes. At the state-of-the-art, the application of automated algorithms followed by perusal and analysis of the results by an expert continues to be the predominant paradigm for analyzing biological data. This paradigm works well in many problem domains. However, it also is limiting, since domain experts are forced to apply their instincts and expertise such as contextual reasoning, hypothesis formulation, and exploratory analysis after the algorithm has produced its results. In many areas where the organization and interaction of the biological processes is poorly understood and exploratory analysis is crucial, what is needed is to integrate domain expertise during the data analysis process and use it to drive the analysis itself. Results In context of the aforementioned background, the results presented in this paper describe advancements along two methodological directions. First, given the context of biological data, we utilize and extend a design approach called experiential computing from multimedia information system design. This paradigm combines information visualization and human-computer interaction with algorithms for exploratory analysis of large-scale and complex data. 
In the proposed approach, emphasis is laid on: (1) allowing users to directly visualize, interact, experience, and explore the data through interoperable visualization-based and algorithmic components, (2) supporting unified query and presentation spaces to facilitate experimentation and exploration, (3) providing external contextual information by assimilating relevant supplementary data, and (4) encouraging user-directed information visualization, data exploration, and hypothesis formulation. Second, to illustrate the proposed design paradigm and measure its efficacy, we describe two prototype web applications. The first, called XMAS (Experiential Microarray Analysis System), is designed for analysis of time-series transcriptional data. The second system, called PSPACE (Protein Space Explorer), is designed for holistic analysis of structural and structure-function relationships using interactive low-dimensional maps of the protein structure space. Both these systems promote and facilitate human-computer synergy, where cognitive elements such as domain knowledge, contextual reasoning, and purpose-driven exploration are integrated with a host of powerful algorithmic operations that support large-scale data analysis, multifaceted data visualization, and multi-source information integration. Conclusions The proposed design philosophy combines visualization, algorithmic components, and cognitive expertise into a seamless processing-analysis-exploration framework that facilitates sense-making, exploration, and discovery. Using XMAS, we present case studies that analyze transcriptional data from two highly complex domains: gene expression in the placenta during human pregnancy and reaction of marine organisms to heat stress. With PSPACE, we demonstrate how complex structure-function relationships can be explored. These results demonstrate the novelty, advantages, and distinctions of the proposed paradigm.
Furthermore, the results also highlight how domain insights can be combined with algorithms to discover meaningful knowledge and formulate evidence-based hypotheses during the data analysis process. Finally, user studies against comparable systems indicate that both XMAS and PSPACE deliver results with better interpretability while placing lower cognitive loads on the users. XMAS is available at: http://tintin.sfsu.edu:8080/xmas. PSPACE is available at: http://pspace.info/. PMID:24267485

  11. A Framework for Understanding Physics Students' Computational Modeling Practices

    NASA Astrophysics Data System (ADS)

    Lunk, Brandon Robert

    With the growing push to include computational modeling in the physics classroom, we are faced with the need to better understand students' computational modeling practices. While existing research on programming comprehension explores how novices and experts generate programming algorithms, little of this discusses how domain content knowledge, and physics knowledge in particular, can influence students' programming practices. In an effort to better understand this issue, I have developed a framework for modeling these practices based on a resource stance towards student knowledge. A resource framework models knowledge as the activation of vast networks of elements called "resources." Much like neurons in the brain, resources that become active can trigger cascading events of activation throughout the broader network. This model emphasizes the connectivity between knowledge elements and provides a description of students' knowledge base. Together with resources, the concepts of "epistemic games" and "frames" provide a means for addressing the interaction between content knowledge and practices. Although this framework has generally been limited to describing conceptual and mathematical understanding, it also provides a means for addressing students' programming practices. In this dissertation, I will demonstrate this facet of a resource framework as well as fill in an important missing piece: a set of epistemic games that can describe students' computational modeling strategies. The development of this theoretical framework emerged from the analysis of video data of students generating computational models during the laboratory component of a Matter & Interactions: Modern Mechanics course. Student participants across two semesters were recorded as they worked in groups to fix pre-written computational models that were initially missing key lines of code.
Analysis of this video data showed that the students' programming practices were highly influenced by their existing physics content knowledge, particularly their knowledge of analytic procedures. While this existing knowledge was often applied in inappropriate circumstances, the students were still able to display a considerable amount of understanding of the physics content and of analytic solution procedures. These observations could not be adequately accommodated by the existing literature of programming comprehension. In extending the resource framework to the task of computational modeling, I model students' practices in terms of three important elements. First, a knowledge base includes resources for understanding physics, math, and programming structures. Second, a mechanism for monitoring and control describes students' expectations as being directed towards numerical, analytic, qualitative or rote solution approaches, which can be influenced by the problem representation. Third, a set of solution approaches---many of which were identified in this study---describe what aspects of the knowledge base students use and how they use that knowledge to enact their expectations. This framework allows us as researchers to track student discussions and pinpoint the source of difficulties. This work opens up many avenues of potential research. First, this framework gives researchers a vocabulary for extending Resource Theory to other domains of instruction, such as modeling how physics students use graphs. Second, this framework can be used as the basis for modeling expert physicists' programming practices. Important instructional implications also follow from this research. Namely, as we broaden the use of computational modeling in the physics classroom, our instructional practices should focus on helping students understand the step-by-step nature of programming in contrast to the already salient analytic procedures.

  12. Scientific Reasoning across Different Domains.

    ERIC Educational Resources Information Center

    Glaser, Robert; And Others

    This study seeks to establish which scientific reasoning skills are primarily domain-general and which appear to be domain-specific. The subjects, 12 university undergraduates, each participated in self-directed experimentation with three different content domains. The experimentation contexts were computer-based laboratories in d.c. circuits…

  13. Scalable hybrid computation with spikes.

    PubMed

    Sarpeshkar, Rahul; O'Halloran, Micah

    2002-09-01

    We outline a hybrid analog-digital scheme for computing with three important features that enable it to scale to systems of large complexity: First, like digital computation, which uses several one-bit precise logical units to collectively compute a precise answer to a computation, the hybrid scheme uses several moderate-precision analog units to collectively compute a precise answer to a computation. Second, frequent discrete signal restoration of the analog information prevents analog noise and offset from degrading the computation. And, third, a state machine enables complex computations to be created using a sequence of elementary computations. A natural choice for implementing this hybrid scheme is one based on spikes because spike-count codes are digital, while spike-time codes are analog. We illustrate how spikes afford easy ways to implement all three components of scalable hybrid computation. First, as an important example of distributed analog computation, we show how spikes can create a distributed modular representation of an analog number by implementing digital carry interactions between spiking analog neurons. Second, we show how signal restoration may be performed by recursive spike-count quantization of spike-time codes. And, third, we use spikes from an analog dynamical system to trigger state transitions in a digital dynamical system, which reconfigures the analog dynamical system using a binary control vector; such feedback interactions between analog and digital dynamical systems create a hybrid state machine (HSM). The HSM extends and expands the concept of a digital finite-state-machine to the hybrid domain. We present experimental data from a two-neuron HSM on a chip that implements error-correcting analog-to-digital conversion with the concurrent use of spike-time and spike-count codes. We also present experimental data from silicon circuits that implement HSM-based pattern recognition using spike-time synchrony. 
We outline how HSMs may be used to perform learning, vector quantization, spike pattern recognition and generation, and how they may be reconfigured.

  14. Application of a sensitivity analysis technique to high-order digital flight control systems

    NASA Technical Reports Server (NTRS)

    Paduano, James D.; Downing, David R.

    1987-01-01

    A sensitivity analysis technique for multiloop flight control systems is studied. This technique uses the scaled singular values of the return difference matrix as a measure of the relative stability of a control system. It then uses the gradients of these singular values with respect to system and controller parameters to judge sensitivity. The sensitivity analysis technique is first reviewed; then it is extended to include digital systems, through the derivation of singular-value gradient equations. Gradients with respect to parameters which do not appear explicitly as control-system matrix elements are also derived, so that high-order systems can be studied. A complete review of the integrated technique is given by way of a simple example: the inverted pendulum problem. The technique is then demonstrated on the X-29 control laws. Results show that linear models of real systems can be analyzed by this sensitivity technique if it is applied with care. A computer program called SVA was written to implement the singular-value sensitivity analysis technique. Thus computational methods and considerations form an integral part of many of the discussions. A user's guide to the program is included. SVA is a fully public-domain program, running on the NASA/Dryden Elxsi computer.
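The stability measure at the heart of this technique is the minimum singular value of the return-difference matrix I + L(jw) over frequency. A minimal sketch of that computation (the function name and the toy state-space loop are illustrative; the gradient equations, scaling, and digital extension are not reproduced):

```python
import numpy as np

def min_return_difference_sv(A, B, K, omegas):
    """Minimum singular value of the return-difference matrix I + L(jw).

    L(jw) = K (jw I - A)^-1 B is the loop transfer matrix of the
    state-space loop (A, B) closed through gain K. Small singular values
    flag frequencies where the loop is nearly singular, i.e. where
    relative stability is poor.
    """
    n = A.shape[0]
    m = K.shape[0]
    worst = np.inf
    for w in omegas:
        L = K @ np.linalg.solve(1j * w * np.eye(n) - A, B)
        sv = np.linalg.svd(np.eye(m) + L, compute_uv=False)
        worst = min(worst, sv.min())
    return worst
```

For a well-damped loop the minimum stays comfortably above zero across the frequency sweep; a dip toward zero is the sensitivity warning the gradients are then used to diagnose.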

  15. Heterogeneous Compression of Large Collections of Evolutionary Trees.

    PubMed

    Matthews, Suzanne J

    2015-01-01

    Compressing heterogeneous collections of trees is an open problem in computational phylogenetics. In a heterogeneous tree collection, each tree can contain a unique set of taxa. An ideal compression method would allow for the efficient archival of large tree collections and enable scientists to identify common evolutionary relationships over disparate analyses. In this paper, we extend TreeZip to compress heterogeneous collections of trees. TreeZip is the most efficient algorithm for compressing homogeneous tree collections. To the best of our knowledge, no other domain-based compression algorithm exists for large heterogeneous tree collections or enables their rapid analysis. Our experimental results indicate that TreeZip averages 89.03 percent (72.69 percent) space savings on unweighted (weighted) collections of trees when the level of heterogeneity in a collection is moderate. The organization of the TRZ file allows for efficient computations over heterogeneous data. For example, consensus trees can be computed in mere seconds. Lastly, combining the TreeZip compressed (TRZ) file with general-purpose compression yields average space savings of 97.34 percent (81.43 percent) on unweighted (weighted) collections of trees. Our results lead us to believe that TreeZip will prove invaluable in the efficient archival of tree collections and will enable scientists to develop novel methods for relating heterogeneous collections of trees.
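TreeZip's space savings come from storing structure shared across trees only once. A toy sketch of that idea, keying unique bipartitions (here, the taxon set on one side of an edge) to per-tree occurrence sets; this is a simplification, not the actual TRZ format:

```python
def compress_bipartitions(trees):
    """TreeZip-style sketch: store each unique bipartition once,
    together with the set of trees that contain it. Structure shared
    across trees is stored a single time, which is where the space
    savings come from."""
    table = {}  # bipartition -> set of tree indices containing it
    for idx, tree in enumerate(trees):
        for clade in tree:  # each tree given as a list of clades
            table.setdefault(frozenset(clade), set()).add(idx)
    return table

def strict_consensus(table, n_trees):
    """Bipartitions present in every tree, computed directly on the
    compressed table without decompressing individual trees."""
    return [set(b) for b, occ in table.items() if len(occ) == n_trees]
```

Computing a consensus straight from the compressed table, as above, is why operations like consensus trees can run in seconds on the TRZ representation.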

  16. Wavelet-based multicomponent denoising on GPU to improve the classification of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Quesada-Barriuso, Pablo; Heras, Dora B.; Argüello, Francisco; Mouriño, J. C.

    2017-10-01

    Supervised classification allows handling a wide range of remote sensing hyperspectral applications. Enhancing the spatial organization of the pixels over the image has proven to be beneficial for the interpretation of the image content, thus increasing the classification accuracy. Denoising in the spatial domain of the image has been shown to be a technique that enhances the structures in the image. This paper proposes a multi-component denoising approach in order to increase the classification accuracy when a classification method is applied. It is computed on multicore CPUs and NVIDIA GPUs. The method combines feature extraction based on a 1D discrete wavelet transform (DWT) applied in the spectral dimension, followed by an Extended Morphological Profile (EMP) and a classifier (SVM or ELM). The multi-component noise reduction is applied to the EMP just before the classification. The denoising recursively applies a separable 2D DWT, after which the number of wavelet coefficients is reduced by using a threshold. Finally, inverse 2D-DWT filters are applied to reconstruct the noise-free original component. The computational cost of the classifiers, as well as that of the whole classification chain, is high, but it is reduced enough to achieve real-time behavior for some applications through computation on NVIDIA multi-GPU platforms.
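The denoising step described here, a DWT followed by coefficient thresholding and inverse reconstruction, can be illustrated with a one-level 1D Haar transform. The paper applies a separable 2D DWT recursively on GPUs; this sketch only shows the threshold mechanism:

```python
def haar_dwt(signal):
    """One level of the 1D Haar DWT: scaled pairwise averages
    (approximation) and differences (detail)."""
    s = 2 ** 0.5
    approx = [(a + b) / s for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / s for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of haar_dwt (perfect reconstruction)."""
    s = 2 ** 0.5
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s, (a - d) / s])
    return out

def denoise(signal, threshold):
    """Hard-threshold the detail coefficients: coefficients below the
    threshold are treated as noise and zeroed before reconstruction."""
    approx, detail = haar_dwt(signal)
    detail = [d if abs(d) > threshold else 0.0 for d in detail]
    return haar_idwt(approx, detail)
```

Small pairwise fluctuations are flattened while the large jump between the two halves of the signal, i.e. the structure, survives.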

  17. Domain generality vs. modality specificity: The paradox of statistical learning

    PubMed Central

    Frost, Ram; Armstrong, Blair C.; Siegelman, Noam; Christiansen, Morten H.

    2015-01-01

    Statistical learning is typically considered to be a domain-general mechanism by which cognitive systems discover the underlying distributional properties of the input. Recent studies examining whether there are commonalities in the learning of distributional information across different domains or modalities consistently reveal, however, modality and stimulus specificity. An important question is, therefore, how and why a hypothesized domain-general learning mechanism systematically produces such effects. We offer a theoretical framework according to which statistical learning is not a unitary mechanism but a set of domain-general computational principles that operate in different modalities and are therefore subject to the specific constraints characteristic of their respective brain regions. This framework offers testable predictions and we discuss its computational and neurobiological plausibility. PMID:25631249
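The "distributional properties" probed in statistical-learning experiments are often operationalized as transitional probabilities between adjacent elements: P(next | current) is high within a word and low at word boundaries. A minimal sketch (the syllable stream is a made-up example in the style of classic segmentation studies):

```python
from collections import Counter

def transitional_probabilities(stream):
    """P(next | current) for each adjacent pair in a stream.

    Word-internal transitions recur and get high probability; word
    boundaries are betrayed by low transitional probability.
    """
    pairs = Counter(zip(stream, stream[1:]))
    firsts = Counter(stream[:-1])
    return {(a, b): n / firsts[a] for (a, b), n in pairs.items()}
```

In a stream built from the pseudo-words "tu-pi-ro" and "go-la-bu", the within-word pair ("tu", "pi") gets probability 1.0 while the boundary pair ("ro", "go") gets a lower value.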

  18. Sequence Tolerance of a Highly Stable Single Domain Antibody: Comparison of Computational and Experimental Profiles

    DTIC Science & Technology

    2016-09-09

    ... evaluating 18 mutants using either the A or B conformer is only r = ~0.2. Given the poor performance of approximating the observed experimental ... Mark A. Olson, Patricia ... unusually high thermal stability is explored by a combined computational and experimental study. Starting with the crystallographic structure ...

  19. Simulation of human decision making

    DOEpatents

    Forsythe, J Chris [Sandia Park, NM; Speed, Ann E [Albuquerque, NM; Jordan, Sabina E [Albuquerque, NM; Xavier, Patrick G [Albuquerque, NM

    2008-05-06

    A method for computer emulation of human decision making defines a plurality of concepts related to a domain and a plurality of situations related to the domain, where each situation is a combination of at least two of the concepts. Each concept and situation is represented in the computer as an oscillator output, and each situation and concept oscillator output is distinguishable from all other oscillator outputs. Information is input to the computer representative of detected concepts, and the computer compares the detected concepts with the stored situations to determine if a situation has occurred.
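The patent abstract describes concepts and situations represented as distinguishable oscillator outputs, with a situation deemed to occur when its constituent concepts are detected. A toy sketch of that matching logic, using arbitrary frequencies as stand-ins for oscillator outputs (all names and values hypothetical):

```python
def build_situations(concepts, situation_defs):
    """Assign each concept a unique oscillator frequency and define each
    situation as the set of frequencies of its constituent concepts
    (every situation combines at least two concepts, as in the patent
    abstract). The frequencies are arbitrary stand-ins."""
    freq = {c: 10.0 * (i + 1) for i, c in enumerate(concepts)}
    situations = {name: frozenset(freq[c] for c in members)
                  for name, members in situation_defs.items()}
    return freq, situations

def detect(situations, detected_freqs):
    """A situation 'occurs' when all of its concept oscillators are
    present among the detected outputs."""
    hits = frozenset(detected_freqs)
    return [name for name, sig in situations.items() if sig <= hits]
```

Detecting the "smoke" and "heat" oscillators would trigger a stored "fire" situation but not one that also requires "motion".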

  20. The electromagnetic modeling of thin apertures using the finite-difference time-domain technique

    NASA Technical Reports Server (NTRS)

    Demarest, Kenneth R.

    1987-01-01

    A technique which computes transient electromagnetic responses of narrow apertures in complex conducting scatterers was implemented as an extension of previously developed Finite-Difference Time-Domain (FDTD) computer codes. Although these apertures are narrow with respect to the wavelengths contained within the power spectrum of excitation, this technique does not require significantly more computer resources to attain the increased resolution at the apertures. In the report, an analytical technique which utilizes Babinet's principle to model the apertures is developed, and an FDTD computer code which utilizes this technique is described.
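The FDTD technique this report extends advances interleaved electric and magnetic field samples in a leapfrog loop. A minimal normalized 1D version for orientation (the thin-aperture treatment itself is not reproduced; grid size, step count, and source placement are arbitrary):

```python
import math

def fdtd_1d(n_cells, n_steps, source_cell):
    """Minimal 1D FDTD leapfrog loop in normalized units (Courant
    number 1): H is updated from spatial differences of E, then E from
    differences of H, with a Gaussian-pulse soft source. A thin-aperture
    model such as the report's modifies the update equations near the
    aperture rather than refining the whole grid."""
    E = [0.0] * (n_cells + 1)
    H = [0.0] * n_cells
    for step in range(n_steps):
        for i in range(n_cells):
            H[i] += E[i + 1] - E[i]
        for i in range(1, n_cells):
            E[i] += H[i] - H[i - 1]
        # Gaussian-pulse soft source injected at one cell.
        E[source_cell] += math.exp(-((step - 30) / 10.0) ** 2)
    return E, H
```

The end cells act as fixed (perfect-conductor) boundaries; within the run shown in the test the excitation has not yet reached them.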

  1. Applying a Wearable Voice-Activated Computer to Instructional Applications in Clean Room Environments

    NASA Technical Reports Server (NTRS)

    Graves, Corey A.; Lupisella, Mark L.

    2004-01-01

    The use of wearable computing technology in restrictive environments related to space applications offers promise in a number of domains. The clean room environment is one such domain in which hands-free, heads-up, wearable computing is particularly attractive for education and training because of the nature of clean room work. We have developed and tested a Wearable Voice-Activated Computing (WEVAC) system based on clean room applications. Results of this initial proof-of-concept work indicate that there is a strong potential for WEVAC to enhance clean room activities.

  2. The electronic patient record: a strategic planning framework.

    PubMed

    Gordon, D B; Marafioti, S; Carter, M; Kunov, H; Dolan, A

    1995-01-01

    Sunnybrook Health Science Center (Sunnybrook) is a multifacility academic teaching center. In May 1994, Sunnybrook struck an electronic patient record taskforce to develop a strategic plan for the implementation of a comprehensive, facility wide electronic patient record (EPR). The taskforce sought to create a conceptual framework which provides context and integrates decision-making related to the comprehensive electronic patient record. The EPR is very much broader in scope than the traditional paper-based record. It is not restricted to simply reporting individual patient data. By the Institute of Medicine's definition, the electronic patient record resides in a system specifically designed to support users through availability of complete and accurate data, practitioner reminders and alerts, clinical decision support systems, links to bodies of medical knowledge, and other aids [1]. It is a comprehensive resource for patient care. The taskforce proposed a three domain model for determining how the EPR affects Sunnybrook. The EPR enables Sunnybrook to have a high performance team structure (domain 1), to function as an integrated organization (domain 2), and to reach out and develop new relationships with external organizations to become an extended enterprise (domain 3) [2]. Domain 1: Sunnybrook's high-performance teams, or patient service units (PSUs), are decentralized, autonomous operating units that provide care to patients grouped by 'like' diagnosis and resource needs. The EPR must provide functions and applications which promote patient focused care, such as cross functional charting and care maps, group scheduling, clinical email, and a range of enabling technologies for multiskilled workers. Domain 2: In the integrated organization domain, the EPR should facilitate closer linkages between the arrangement of PSUs into clinical teams and with other facilities within the center in order to provide a longitudinal record that covers a continuum of care.
Domain 3: In the inter-enterprise domain, the EPR must allow for patient information to be exchanged with external providers including referring doctors, laboratories, and other hospitals via community health information networks (CHINs). Sunnybrook will prioritize the development of first domain functionality within the corporate constraints imposed by the integrated organization domain. Inter-enterprise computing will be less of a priority until Sunnybrook has developed a critical mass of the electronic patient record internally. The three domain description is a useful model for describing the relationship between the electronic patient record enabling technologies and the Sunnybrook organizational structures. The taskforce has used this model to determine EPR development guidelines and implementation priorities.

  3. Final Report - High-Order Spectral Volume Method for the Navier-Stokes Equations On Unstructured Tetrahedral Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Z J

    2012-12-06

    The overriding objective for this project is to develop an efficient and accurate method for capturing strong discontinuities and fine smooth flow structures of disparate length scales with unstructured grids, and to demonstrate its potential for problems relevant to DOE. More specifically, we plan to achieve the following objectives: 1. Extend the SV method to three dimensions, and develop a fourth-order accurate SV scheme for tetrahedral grids. Optimize the SV partition by minimizing a form of the Lebesgue constant. Verify the order of accuracy using scalar conservation laws with an analytical solution; 2. Extend the SV method to the Navier-Stokes equations for the simulation of viscous flow problems. Two promising approaches to compute the viscous fluxes will be tested and analyzed; 3. Parallelize the 3D viscous SV flow solver using domain decomposition and message passing. Optimize the cache performance of the flow solver by designing data structures that minimize data access times; 4. Demonstrate the SV method on a wide range of flow problems including both discontinuities and complex smooth structures. The objectives remain the same as those outlined in the original proposal. We anticipate no technical obstacles in meeting these objectives.

  4. A High-Order Direct Solver for Helmholtz Equations with Neumann Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He; Zhuang, Yu

    1997-01-01

    In this study, a compact finite-difference discretization is first developed for Helmholtz equations on rectangular domains. Special treatments are then introduced for Neumann and Neumann-Dirichlet boundary conditions to achieve accuracy and separability. Finally, a Fast Fourier Transform (FFT) based technique is used to yield a fast direct solver. Analytical and experimental results show that this newly proposed solver is comparable to the conventional second-order elliptic solver when accuracy is not a primary concern, and is significantly faster than the conventional solver when a highly accurate solution is required. In addition, this newly proposed fourth-order Helmholtz solver is parallel in nature and readily available for parallel and distributed computers. The compact scheme introduced in this study can likely be extended to sixth-order accurate algorithms and to more general elliptic equations.
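The core of such an FFT-based direct solver is that the discrete Helmholtz operator diagonalizes in a trigonometric basis, so the solve reduces to a pointwise division in transform space. A minimal sketch of that idea, under simplifying assumptions (periodic boundary conditions and a plain spectral symbol, rather than the paper's compact fourth-order Neumann scheme; the paper's Neumann case would use cosine transforms instead):

```python
import numpy as np

def solve_helmholtz_periodic(f, lam, L=2*np.pi):
    """Fast direct spectral solve of (Laplacian + lam) u = f on a periodic square."""
    n = f.shape[0]
    k = np.fft.fftfreq(n, d=L/n) * 2*np.pi          # angular wavenumbers
    kx, ky = np.meshgrid(k, k, indexing='ij')
    fhat = np.fft.fft2(f)
    denom = lam - kx**2 - ky**2                     # symbol of (Laplacian + lam)
    uhat = fhat / denom                             # assumes lam avoids the eigenvalues
    return np.real(np.fft.ifft2(uhat))
```

Each solve costs O(n^2 log n) for the two FFTs plus a pointwise division, which is the source of the speed advantage over a generic sparse solve.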

  5. Relativistic extension of a charge-conservative finite element solver for time-dependent Maxwell-Vlasov equations

    NASA Astrophysics Data System (ADS)

    Na, D.-Y.; Moon, H.; Omelchenko, Y. A.; Teixeira, F. L.

    2018-01-01

    Accurate modeling of relativistic particle motion is essential for physical predictions in many problems involving vacuum electronic devices, particle accelerators, and relativistic plasmas. A local, explicit, and charge-conserving finite-element time-domain (FETD) particle-in-cell (PIC) algorithm for time-dependent (non-relativistic) Maxwell-Vlasov equations on irregular (unstructured) meshes was recently developed by Moon et al. [Comput. Phys. Commun. 194, 43 (2015); IEEE Trans. Plasma Sci. 44, 1353 (2016)]. Here, we extend this FETD-PIC algorithm to the relativistic regime by implementing and comparing three relativistic particle-pushers: (relativistic) Boris, Vay, and Higuera-Cary. We illustrate the application of the proposed relativistic FETD-PIC algorithm for the analysis of particle cyclotron motion at relativistic speeds, harmonic particle oscillation in the Lorentz-boosted frame, and relativistic Bernstein modes in magnetized charge-neutral (pair) plasmas.
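Of the three pushers compared, the relativistic Boris scheme is the most widely used; it splits the Lorentz-force update into two half electric kicks around an exact magnetic rotation of the normalized momentum. A minimal single-particle sketch in normalized units (a generic textbook form, not the paper's FETD-PIC implementation):

```python
import numpy as np

def boris_push(u, E, B, q, m, dt, c=1.0):
    """One relativistic Boris step; u is the normalized momentum gamma*v."""
    qmdt2 = q * dt / (2.0 * m)
    u_minus = u + qmdt2 * E                          # first half electric kick
    gamma = np.sqrt(1.0 + np.dot(u_minus, u_minus) / c**2)
    t = qmdt2 * B / gamma                            # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    u_prime = u_minus + np.cross(u_minus, t)
    u_plus = u_minus + np.cross(u_prime, s)          # exact magnetic rotation
    return u_plus + qmdt2 * E                        # second half electric kick
```

Because the magnetic update is a pure rotation, |u| (and hence the particle energy) is conserved to round-off when E = 0, which is why Boris-type pushers are favored for long cyclotron-motion runs like those analyzed in the paper.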

  6. A model-driven approach to information security compliance

    NASA Astrophysics Data System (ADS)

    Correia, Anacleto; Gonçalves, António; Teodoro, M. Filomena

    2017-06-01

    The availability, integrity and confidentiality of information are fundamental to the long-term survival of any organization. Information security is a complex issue that must be holistically approached, combining assets that support corporate systems, in an extended network of business partners, vendors, customers and other stakeholders. This paper addresses the conception and implementation of information security systems conforming to the ISO/IEC 27000 set of standards, using the model-driven approach. The process begins with the conception of a domain level model (computation independent model) based on information security vocabulary present in the ISO/IEC 27001 standard. Based on this model, after embedding in the model mandatory rules for attaining ISO/IEC 27001 conformance, a platform independent model is derived. Finally, a platform specific model serves as the base for testing the compliance of information security systems with the ISO/IEC 27000 set of standards.

  7. Modelling human problem solving with data from an online game.

    PubMed

    Rach, Tim; Kirsch, Alexandra

    2016-11-01

    Since the beginning of cognitive science, researchers have tried to understand human strategies in order to develop efficient and adequate computational methods. In the domain of problem solving, the travelling salesperson problem has been used for the investigation and modelling of human solutions. We propose to extend this effort with an online game, in which instances of the travelling salesperson problem have to be solved in the context of a game experience. We report on our effort to design and run such a game, present the data contained in the resulting openly available data set and provide an outlook on the use of games in general for cognitive science research. In addition, we present three geometrical models mapping the starting point preferences in the problems presented in the game as the result of an evaluation of the data set.

  8. Adaptive memory: enhanced location memory after survival processing.

    PubMed

    Nairne, James S; Vanarsdall, Joshua E; Pandeirada, Josefa N S; Blunt, Janell R

    2012-03-01

    Two experiments investigated whether survival processing enhances memory for location. From an adaptive perspective, remembering that food has been located in a particular area, or that potential predators are likely to be found in a given territory, should increase the chances of subsequent survival. Participants were shown pictures of food or animals located at various positions on a computer screen. The task was to rate the ease of collecting the food or capturing the animals relative to a central fixation point. Surprise retention tests revealed that people remembered the locations of the items better when the collection or capturing task was described as relevant to survival. These data extend the generality of survival processing advantages to a new domain (location memory) by means of a task that does not involve rating the relevance of words to a scenario. 2012 APA, all rights reserved

  9. Constant-Envelope Waveform Design for Optimal Target-Detection and Autocorrelation Performances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen, Satyabrata

    2013-01-01

    We propose an algorithm to directly synthesize in the time domain a constant-envelope transmit waveform that achieves the optimal performance in detecting an extended target in the presence of signal-dependent interference. This approach is in contrast to the traditional indirect methods that synthesize the transmit signal following the computation of the optimal energy spectral density. Additionally, we aim to maintain a good autocorrelation property of the designed signal. Therefore, our waveform design technique solves a bi-objective optimization problem in order to simultaneously improve the detection and autocorrelation performances, which are in general conflicting in nature. We demonstrate this trade-off between the detection and autocorrelation performances with numerical examples. Furthermore, in the absence of the autocorrelation criterion, our designed signal is shown to achieve near-optimum detection performance.

  10. Design sensitivity analysis using EAL. Part 1: Conventional design parameters

    NASA Technical Reports Server (NTRS)

    Dopker, B.; Choi, Kyung K.; Lee, J.

    1986-01-01

    A numerical implementation of design sensitivity analysis of built-up structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its database management system. The finite element code used in the implementation presented is the Engineering Analysis Language (EAL), which is based on a hybrid method of analysis. It was shown that design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program and a separate database. Conventional (sizing) design parameters such as cross-sectional area of beams or thickness of plates and plane elastic solid components are considered. Compliance, displacement, and stress functionals are considered as performance criteria. The method presented is being extended to implement shape design sensitivity analysis using a domain method and a design component method.

  11. PARAMESH: A Parallel Adaptive Mesh Refinement Community Toolkit

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; deFainchtein, Rosalinda; Packer, Charles

    1999-01-01

    In this paper, we describe a community toolkit which is designed to provide parallel support with adaptive mesh capability for a large and important class of computational models, those using structured, logically cartesian meshes. The package of Fortran 90 subroutines, called PARAMESH, is designed to provide an application developer with an easy route to extend an existing serial code which uses a logically cartesian structured mesh into a parallel code with adaptive mesh refinement. Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package can provide them with an incremental evolutionary path for their code, converting it first to uniformly refined parallel code, and then later if they so desire, adding adaptivity.
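In its simplest domain decomposition mode, the essential operation is carving a structured mesh into contiguous blocks, one per processor. A generic 1-D sketch of that partitioning step (a language-agnostic illustration, not PARAMESH's Fortran 90 API):

```python
def decompose(n_cells, n_procs):
    """Split n_cells into n_procs contiguous chunks whose sizes differ by at
    most one cell, returning (start, size) pairs per processor."""
    base, extra = divmod(n_cells, n_procs)
    sizes = [base + (1 if p < extra else 0) for p in range(n_procs)]
    starts = [sum(sizes[:p]) for p in range(n_procs)]
    return list(zip(starts, sizes))
```

In a real parallel code each block would additionally carry a layer of guard cells exchanged with neighboring processors, which is the kind of bookkeeping a toolkit like PARAMESH automates.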

  12. Adaptive multi-time-domain subcycling for crystal plasticity FE modeling of discrete twin evolution

    NASA Astrophysics Data System (ADS)

    Ghosh, Somnath; Cheng, Jiahao

    2018-02-01

    Crystal plasticity finite element (CPFE) models that account for discrete micro-twin nucleation and propagation have recently been developed for studying complex deformation behavior of hexagonal close-packed (HCP) materials (Cheng and Ghosh in Int J Plast 67:148-170, 2015, J Mech Phys Solids 99:512-538, 2016). A major difficulty with conducting high fidelity, image-based CPFE simulations of polycrystalline microstructures with explicit twin formation is the prohibitively high demand on computing time. High strain localization within fast propagating twin bands requires very fine simulation time steps and leads to enormous computational cost. To mitigate this shortcoming and improve the simulation efficiency, this paper proposes a multi-time-domain subcycling algorithm. It is based on adaptive partitioning of the evolving computational domain into twinned and untwinned domains. Based on the local deformation rate, the algorithm accelerates simulations by adopting different time steps for each sub-domain. The sub-domains are coupled back after coarse time increments using a predictor-corrector algorithm at the interface. The subcycling-augmented CPFEM is validated with a comprehensive set of numerical tests. Significant speed-up is observed with this novel algorithm without any loss of accuracy, which is advantageous for predicting twinning in polycrystalline microstructures.
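The payoff of subcycling is that only the fast-evolving sub-domain pays for the small time step. A toy scalar sketch of the idea (forward Euler on two decoupled decay rates standing in for the untwinned/twinned sub-domains; the paper's predictor-corrector interface coupling is not modeled here):

```python
def subcycled_euler(y_slow, y_fast, lam_slow, lam_fast, dt, steps, n_sub):
    """Forward Euler on y' = lam*y for two variables: the slow one takes one
    step of size dt per coarse increment, while the stiff (fast) one subcycles
    with n_sub steps of size dt/n_sub over the same interval."""
    for _ in range(steps):
        y_slow += dt * lam_slow * y_slow             # one coarse step
        for _ in range(n_sub):                       # n_sub fine steps
            y_fast += (dt / n_sub) * lam_fast * y_fast
    return y_slow, y_fast
```

With lam_fast = -50 and dt = 0.1, the unsubcycled explicit step would be unstable (|1 + dt*lam| = 4 > 1); subcycling restores stability at a fraction of the cost of refining the global time step.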

  13. Extended depth of focus adaptive optics spectral domain optical coherence tomography

    PubMed Central

    Sasaki, Kazuhiro; Kurokawa, Kazuhiro; Makita, Shuichi; Yasuno, Yoshiaki

    2012-01-01

    We present an adaptive optics spectral domain optical coherence tomography (AO-SDOCT) with a long focal range by active phase modulation of the pupil. A long focal range is achieved by introducing AO-controlled third-order spherical aberration (SA). The property of SA and its effects on focal range are investigated in detail using the Huygens-Fresnel principle, beam profile measurement and OCT imaging of a phantom. The results indicate that the focal range is extended by applying SA, and the direction of extension can be controlled by the sign of applied SA. Finally, we demonstrated in vivo human retinal imaging by altering the applied SA. PMID:23082278

  14. Extended depth of focus adaptive optics spectral domain optical coherence tomography.

    PubMed

    Sasaki, Kazuhiro; Kurokawa, Kazuhiro; Makita, Shuichi; Yasuno, Yoshiaki

    2012-10-01

    We present an adaptive optics spectral domain optical coherence tomography (AO-SDOCT) with a long focal range by active phase modulation of the pupil. A long focal range is achieved by introducing AO-controlled third-order spherical aberration (SA). The property of SA and its effects on focal range are investigated in detail using the Huygens-Fresnel principle, beam profile measurement and OCT imaging of a phantom. The results indicate that the focal range is extended by applying SA, and the direction of extension can be controlled by the sign of applied SA. Finally, we demonstrated in vivo human retinal imaging by altering the applied SA.

  15. Zipf’s Law Arises Naturally When There Are Underlying, Unobserved Variables

    PubMed Central

    Corradi, Nicola

    2016-01-01

    Zipf’s law, which states that the probability of an observation is inversely proportional to its rank, has been observed in many domains. While there are models that explain Zipf’s law in each of them, those explanations are typically domain specific. Recently, methods from statistical physics were used to show that a fairly broad class of models does provide a general explanation of Zipf’s law. This explanation rests on the observation that real world data is often generated from underlying causes, known as latent variables. Those latent variables mix together multiple models that do not obey Zipf’s law, giving a model that does. Here we extend that work both theoretically and empirically. Theoretically, we provide a far simpler and more intuitive explanation of Zipf’s law, which at the same time considerably extends the class of models to which this explanation can apply. Furthermore, we also give methods for verifying whether this explanation applies to a particular dataset. Empirically, these advances allowed us to extend this explanation to important classes of data, including word frequencies (the first domain in which Zipf’s law was discovered), data with variable sequence length, and multi-neuron spiking activity. PMID:27997544
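The latent-variable mechanism can be demonstrated in a few lines: mixing geometric distributions (none of which is heavy-tailed) over a uniformly distributed latent success probability yields the power-law marginal P(N = n) = 1/(n(n+1)), which decays like n^-2. A Monte Carlo sketch of this general mechanism (an illustration only, not one of the paper's models):

```python
import numpy as np

def zipf_from_latent(n_samples=1_000_000, seed=0):
    """Mix geometric distributions over a latent uniform success probability.
    The marginal P(N = n) = 1/(n(n+1)) is a power law, even though every
    conditional distribution (fixed q) decays exponentially."""
    rng = np.random.default_rng(seed)
    q = rng.uniform(1e-9, 1.0, size=n_samples)   # latent variable per sample
    n = rng.geometric(q)                          # geometric given q: not Zipfian
    values, counts = np.unique(n, return_counts=True)
    return dict(zip(values.tolist(), (counts / n_samples).tolist()))
```

The exact marginal follows from integrating q(1-q)^(n-1) over q in (0,1), giving the Beta integral 1/(n(n+1)); the simulation's empirical frequencies match it closely.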

  16. Cell Adhesion Molecule L1 in Folded (Horseshoe) and Extended Conformations

    PubMed Central

    Schürmann, Gregor; Haspel, Jeffrey; Grumet, Martin; Erickson, Harold P.

    2001-01-01

    We have investigated the structure of the cell adhesion molecule L1 by electron microscopy. We were particularly interested in the conformation of the four N-terminal immunoglobulin domains, because x-ray diffraction showed that these domains are bent into a horseshoe shape in the related molecules hemolin and axonin-1. Surprisingly, rotary-shadowed specimens showed the molecules to be elongated, with no indication of the horseshoe shape. However, sedimentation data suggested that these domains of L1 were folded into a compact shape in solution; therefore, this prompted us to look at the molecules by an alternative technique, negative stain. The negative stain images showed a compact shape consistent with the expected horseshoe conformation. We speculate that in rotary shadowing the contact with the mica caused a distortion of the protein, weakening the bonds forming the horseshoe and permitting the molecule to extend. We have thus confirmed that the L1 molecule is primarily in the horseshoe conformation in solution, and we have visualized for the first time its opening into an extended conformation. Our study resolves conflicting interpretations from previous electron microscopy studies of L1. PMID:11408583

  17. Artificial proteins as allosteric modulators of PDZ3 and SH3 in two-domain constructs: A computational characterization of novel chimeric proteins.

    PubMed

    Kirubakaran, Palani; Pfeiferová, Lucie; Boušová, Kristýna; Bednarova, Lucie; Obšilová, Veronika; Vondrášek, Jiří

    2016-10-01

    Artificial multidomain proteins with enhanced structural and functional properties can be utilized in a broad spectrum of applications. The design of chimeric fusion proteins utilizing protein domains or one-domain miniproteins as building blocks is an important advancement for the creation of new biomolecules for biotechnology and medical applications. However, computational studies to describe in detail the dynamics and geometry properties of two-domain constructs made from structurally and functionally different proteins are lacking. Here, we tested an in silico design strategy using all-atom explicit solvent molecular dynamics simulations. The well-characterized PDZ3 and SH3 domains of human zonula occludens (ZO-1) (3TSZ), along with 5 artificial domains and 2 types of molecular linkers, were selected to construct chimeric two-domain molecules. The influence of the artificial domains on the structure and dynamics of the PDZ3 and SH3 domains was determined using a range of analyses. We conclude that the artificial domains can function as allosteric modulators of the PDZ3 and SH3 domains. Proteins 2016; 84:1358-1374. © 2016 Wiley Periodicals, Inc.

  18. Direct observation of interlocked domain walls and topological four-state vortex-like domain patterns in multiferroic YMnO{sub 3} single crystal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Lei; School of Materials Science and Engineering, Dalian Jiaotong University, Dalian, Liaoning 116028; Wang, Yumei, E-mail: wangym@iphy.ac.cn

    2015-03-16

    Using advanced spherical aberration-corrected high-angle annular dark field scanning transmission electron microscope imaging techniques, we investigated atomic-scale structural features of domain walls and domain patterns in YMnO{sub 3} single crystal. Three different types of interlocked ferroelectric-antiphase domain walls and two abnormal topological four-state vortex-like domain patterns are identified. Each ferroelectric domain wall is accompanied by a translation vector, i.e., 1/6[210] or −1/6[210], demonstrating its interlocked nature. Different from the four-state vortex domain patterns caused by a partial edge dislocation, two four-state vortex-like domain configurations have been obtained at the atomic level. These observed phenomena can further extend our understanding of the fascinating vortex domain patterns in multiferroic hexagonal rare-earth manganites.

  19. The Davey-Stewartson Equation on the Half-Plane

    NASA Astrophysics Data System (ADS)

    Fokas, A. S.

    2009-08-01

    The Davey-Stewartson (DS) equation is a nonlinear integrable evolution equation in two spatial dimensions. It provides a multidimensional generalisation of the celebrated nonlinear Schrödinger (NLS) equation and it appears in several physical situations. The implementation of the Inverse Scattering Transform (IST) to the solution of the initial-value problem of the NLS was presented in 1972, whereas the analogous problem for the DS equation was solved in 1983. These results are based on the formulation and solution of certain classical problems in complex analysis, namely of a Riemann Hilbert problem (RH) and of either a d-bar or a non-local RH problem respectively. A method for solving the mathematically more complicated but physically more relevant case of boundary-value problems for evolution equations in one spatial dimension, like the NLS, was finally presented in 1997, after interjecting several novel ideas to the panoply of the IST methodology. Here, this method is further extended so that it can be applied to evolution equations in two spatial dimensions, like the DS equation. This novel extension involves several new steps, including the formulation of a d-bar problem for a sectionally non-analytic function, i.e. for a function which has different non-analytic representations in different domains of the complex plane. This, in addition to the computation of a d-bar derivative, also requires the computation of the relevant jumps across the different domains. This latter step has certain similarities (but is more complicated) with the corresponding step for those initial-value problems in two dimensions which can be solved via a non-local RH problem, like KPI.

  20. Application of transient CFD-procedures for S-shape computation in pump-turbines with and without FSI

    NASA Astrophysics Data System (ADS)

    Casartelli, E.; Mangani, L.; Ryan, O.; Schmid, A.

    2016-11-01

    CFD entered the product development process for hydraulic machines more than three decades ago. Besides the actual design process, in which the most appropriate geometry for a certain task is iteratively sought, several steady-state simulations and related analyses are performed with the help of CFD. Basic transient CFD analysis is becoming routine for rotor-stator interaction assessment, but in general unsteady CFD is still not standard due to the large computational effort. Especially for FSI simulations, where mesh motion is involved, a considerable amount of computational time is necessary for mesh handling and deformation as well as for resolving the related unsteady flow field. Therefore this kind of CFD computation is still unusual and mostly performed during trouble-shooting analysis rather than in the standard development process, i.e. in order to understand what went wrong instead of preventing failure or, even better, increasing the available knowledge. In this paper the application of an efficient and particularly robust algorithm for fast computations with moving meshes is presented for the analysis of transient effects encountered during highly dynamic procedures in the operation of a pump-turbine, such as runaway at fixed GV position and load rejection with GV motion imposed as one-way FSI. In both cases the computations extend through the S-shape of the machine into the turbine-brake and reverse-pump domain, showing that such exotic computations can be performed on a more regular basis, even if they are quite time consuming. Besides the presentation of the procedure and global results, some highlights of the encountered flow physics are also given.

  1. Extending Clause Learning of SAT Solvers with Boolean Gröbner Bases

    NASA Astrophysics Data System (ADS)

    Zengler, Christoph; Küchlin, Wolfgang

    We extend clause learning as performed by most modern SAT solvers by integrating the computation of Boolean Gröbner bases into the conflict learning process. Instead of learning only one clause per conflict, we compute and learn additional binary clauses from a Gröbner basis of the current conflict. We used the Gröbner basis engine of the logic package Redlog, contained in the computer algebra system Reduce, to extend the SAT solver MiniSAT with Gröbner basis learning. Our approach shows a significant reduction of conflicts and a reduction of restarts and computation time on many hard problems from the SAT 2009 competition.

  2. Knowledge Discovery in Chess Using an Aesthetics Approach

    ERIC Educational Resources Information Center

    Iqbal, Azlan

    2012-01-01

    Computational aesthetics is a relatively new subfield of artificial intelligence (AI). It includes research that enables computers to "recognize" (and evaluate) beauty in various domains such as visual art, music, and games. Aside from the benefit this gives to humans in terms of creating and appreciating art in these domains, there are perhaps…

  3. Domain decomposition methods for the parallel computation of reacting flows

    NASA Technical Reports Server (NTRS)

    Keyes, David E.

    1988-01-01

    Domain decomposition is a natural route to parallel computing for partial differential equation solvers. Subdomains of which the original domain of definition is comprised are assigned to independent processors at the price of periodic coordination between processors to compute global parameters and maintain the requisite degree of continuity of the solution at the subdomain interfaces. In the domain-decomposed solution of steady multidimensional systems of PDEs by finite difference methods using a pseudo-transient version of Newton iteration, the only portion of the computation which generally stands in the way of efficient parallelization is the solution of the large, sparse linear systems arising at each Newton step. For some Jacobian matrices drawn from an actual two-dimensional reacting flow problem, comparisons are made between relaxation-based linear solvers and also preconditioned iterative methods of Conjugate Gradient and Chebyshev type, focusing attention on both iteration count and global inner product count. The generalized minimum residual method with block-ILU preconditioning is judged the best serial method among those considered, and parallel numerical experiments on the Encore Multimax demonstrate for it approximately 10-fold speedup on 16 processors.
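The subdomain coordination described above is the classical Schwarz idea: solve each subdomain with boundary data taken from the current global iterate, then iterate until continuity is restored at the interfaces. A minimal serial 1-D sketch for -u'' = f with two overlapping subdomains (an illustration of the decomposition principle only, not the paper's Newton/Krylov reacting-flow solvers; `solve_dirichlet` is a helper defined here):

```python
import numpy as np

def solve_dirichlet(f, u, lo, hi, h):
    """Direct solve of -u'' = f on interior points lo+1..hi-1, with boundary
    values taken from the current global iterate u[lo], u[hi]."""
    m = hi - lo - 1
    A = (np.diag(np.full(m, 2.0))
         + np.diag(np.full(m - 1, -1.0), 1)
         + np.diag(np.full(m - 1, -1.0), -1))
    rhs = h * h * f[lo + 1:hi].copy()
    rhs[0] += u[lo]
    rhs[-1] += u[hi]
    return np.linalg.solve(A, rhs)

def schwarz_poisson_1d(f, n=101, overlap=10, iters=200):
    """Alternating Schwarz for -u'' = f on [0,1], u(0)=u(1)=0, with two
    overlapping subdomains solved independently each sweep."""
    h = 1.0 / (n - 1)
    u = np.zeros(n)
    mid = n // 2
    a, b = 0, mid + overlap          # subdomain 1 spans indices a..b
    c, d = mid - overlap, n - 1      # subdomain 2 spans indices c..d
    for _ in range(iters):
        u[a + 1:b] = solve_dirichlet(f, u, a, b, h)
        u[c + 1:d] = solve_dirichlet(f, u, c, d, h)
    return u
```

Convergence is geometric with a rate governed by the overlap width, which is exactly the price of "periodic coordination" the abstract mentions; in practice the subdomain solves would run concurrently on separate processors.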

  4. Delay differential analysis of time series.

    PubMed

    Lainscsek, Claudia; Sejnowski, Terrence J

    2015-03-01

    Nonlinear dynamical system analysis based on embedding theory has been used for modeling and prediction, but it also has applications to signal detection and classification of time series. An embedding creates a multidimensional geometrical object from a single time series. Traditionally either delay or derivative embeddings have been used. The delay embedding is composed of delayed versions of the signal, and the derivative embedding is composed of successive derivatives of the signal. The delay embedding has been extended to nonuniform embeddings to take multiple timescales into account. Both embeddings provide information on the underlying dynamical system without having direct access to all the system variables. Delay differential analysis is based on functional embeddings, a combination of the derivative embedding with nonuniform delay embeddings. Small delay differential equation (DDE) models that best represent relevant dynamic features of time series data are selected from a pool of candidate models for detection or classification. We show that the properties of DDEs support spectral analysis in the time domain where nonlinear correlation functions are used to detect frequencies, frequency and phase couplings, and bispectra. These can be efficiently computed with short time windows and are robust to noise. For frequency analysis, this framework is a multivariate extension of discrete Fourier transform (DFT), and for higher-order spectra, it is a linear and multivariate alternative to multidimensional fast Fourier transform of multidimensional correlations. This method can be applied to short or sparse time series and can be extended to cross-trial and cross-channel spectra if multiple short data segments of the same experiment are available. 
Together, this time-domain toolbox provides higher temporal resolution, increased frequency and phase coupling information, and it allows an easy and straightforward implementation of higher-order spectra across time compared with frequency-based methods such as the DFT and cross-spectral analysis.
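The building block of delay differential analysis is the (possibly nonuniform) delay embedding, which stacks lagged copies of a scalar signal into a multidimensional state. A minimal sketch of that construction (a generic helper written for illustration, not the authors' DDE toolbox):

```python
import numpy as np

def delay_embedding(x, delays):
    """Build a (possibly nonuniform) delay embedding: row t holds
    [x[t - d] for d in delays], for every t where all delays are valid."""
    x = np.asarray(x)
    d_max = max(delays)
    return np.column_stack([x[d_max - d: len(x) - d] for d in delays])
```

Uniform embeddings use delays like [0, tau, 2*tau]; the nonuniform embeddings discussed above simply pass an irregular delay list, and DDE model terms are then polynomial functions of these columns.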

  5. Examining Explanatory Biases in Young Children's Biological Reasoning

    ERIC Educational Resources Information Center

    Legare, Cristine H.; Gelman, Susan A.

    2014-01-01

    Despite the well-established literature on explanation in early childhood, little is known about what constrains children's explanations. State change and negative outcomes were examined as potential explanatory biases in the domain of naïve biology, extending upon previous work in the domain of naïve physics. In two studies, preschool children…

  6. Spatial-temporal dynamics of Newtonian and viscoelastic turbulence in channel flow

    NASA Astrophysics Data System (ADS)

    Wang, Sung-Ning; Shekar, Ashwin; Graham, Michael

    2016-11-01

    Introducing a trace amount of polymer into liquid turbulent flows can result in substantial reduction of friction drag. This phenomenon has been widely used in fluid transport; however, the mechanism is not well understood. Past studies have found that in minimal-domain turbulent simulations, there are occasional time periods when the flow exhibits features such as weaker vortices, lower friction drag and a larger log-law slope; these have been denoted "hibernating turbulence". Here we address the question of whether similar behavior arises spatio-temporally in extended domains, focusing on turbulence at friction Reynolds numbers near transition and Weissenberg numbers resulting in low-to-medium drag reduction. By using image analysis and conditional sampling tools, we identify the hibernating states in extended domains and show that they display striking similarity to those in minimal domains. The hibernating states at different Weissenberg numbers exhibit similar flow statistics, suggesting they are unaltered by low to medium viscoelasticity. In addition, the polymer is much less stretched during hibernation. Finally, these hibernating states vanish as the Reynolds number increases. However, they recur and gradually become dominant with increasing viscoelasticity.

  7. Cognitive reserve is not associated with improved performance in all cognitive domains.

    PubMed

    Lavrencic, Louise M; Churches, Owen F; Keage, Hannah A D

    2017-06-08

    Cognitive reserve beneficially affects cognitive performance, even into advanced age. However, the benefits afforded by high cognitive reserve may not extend to all cognitive domains. This study investigated whether cognitive reserve differentially affects performance on cognitive tasks, in 521 cognitively healthy individuals aged 60 to 98 years (mean age = 68, SD = 6.22; 287 female); years of education was used to index cognitive reserve. Cognitive performance variables assessed attention, executive functions, verbal memory, motor performance, orientation, perception of emotion, processing speed, and working memory. Bootstrapped regression analyses revealed that cognitive reserve was associated with attention, executive functions, verbal and working memory, and orientation; and not significantly related to emotion perception, processing speed, or motor performance. Cognitive reserve appears to differentially affect individual cognitive domains, which extends current theory that purports benefits for all domains. This finding highlights the possibility of using tests not (or minimally) associated with cognitive reserve to screen for cognitive impairment and dementia in late life; these tests will likely best track brain health, free of compensatory neural mechanisms.

  8. OGC and Grid Interoperability in enviroGRIDS Project

    NASA Astrophysics Data System (ADS)

    Gorgan, Dorian; Rodila, Denisa; Bacu, Victor; Giuliani, Gregory; Ray, Nicolas

    2010-05-01

    EnviroGRIDS (Black Sea Catchment Observation and Assessment System supporting Sustainable Development) [1] is a 4-year FP7 project aiming to address the subjects of ecologically unsustainable development and inadequate resource management. The project develops a Spatial Data Infrastructure of the Black Sea Catchment region. Geospatial technologies offer very specialized functionality for Earth Science oriented applications, while Grid technology is able to support distributed and parallel processing. One challenge of the enviroGRIDS project is the interoperability between geospatial and Grid infrastructures, providing the basic and extended features of both technologies. Geospatial interoperability technology has been promoted as a way of dealing with large volumes of geospatial data in distributed environments through the development of interoperable Web service specifications proposed by the Open Geospatial Consortium (OGC), with applications spread across multiple fields but especially in Earth observation research. Due to the huge volumes of data available in the geospatial domain and the additional issues introduced (data management, secure data transfer, data distribution and data computation), the need for an infrastructure capable of managing all those problems becomes an important aspect. The Grid promotes and facilitates the secure interoperation of heterogeneous distributed geospatial data within a distributed environment, the creation and management of large distributed computational jobs, and assures a security level for communication and transfer of messages based on certificates. This presentation analyses and discusses the most significant use cases for enabling OGC Web service interoperability with the Grid environment and focuses on the description and implementation of the most promising one. 
In these use cases we pay special attention to issues such as: the relations between the computational Grid and the OGC Web service protocols; the advantages offered by Grid technology, such as providing secure interoperability between distributed geospatial resources; and the issues introduced by the integration of distributed geospatial data in a secure environment: data and service discovery, management, access and computation. The enviroGRIDS project proposes a new architecture which allows a flexible and scalable approach for integrating the geospatial domain, represented by the OGC Web services, with the Grid domain, represented by the gLite middleware. The parallelism offered by the Grid technology is discussed and explored at the data level, management level and computation level. The analysis is carried out for OGC Web service interoperability in general, but specific details are emphasized for the Web Map Service (WMS), Web Feature Service (WFS), Web Coverage Service (WCS), Web Processing Service (WPS) and Catalog Service for the Web (CSW). Issues regarding the mapping and the interoperability between the OGC and Grid standards and protocols are analyzed, as they are the basis for solving the communication problems between the two environments, Grid and geospatial. The presentation mainly highlights how the Grid environment and Grid application capabilities can be extended and utilized in geospatial interoperability. Interoperability between geospatial and Grid infrastructures provides features such as the specific complex geospatial functionality together with the high-power computation and security of the Grid, high spatial model resolution and wide geographical coverage, and flexible combination and interoperability of geographical models. 
In accordance with the Service Oriented Architecture concepts and the requirements of interoperability between geospatial and Grid infrastructures, each main piece of functionality is visible from the enviroGRIDS Portal and, consequently, from end-user applications such as Decision Maker/Citizen oriented Applications. The enviroGRIDS portal is the user's single point of entry into the system, and the portal presents a uniform graphical user interface style. Main reference for further information: [1] enviroGRIDS Project, http://www.envirogrids.net/

  9. Efficient calculation of full waveform time domain inversion for electromagnetic problem using fictitious wave domain method and cascade decimation decomposition

    NASA Astrophysics Data System (ADS)

    Imamura, N.; Schultz, A.

    2016-12-01

Recently, a full waveform time domain inverse solution has been developed for the magnetotelluric (MT) and controlled-source electromagnetic (CSEM) methods. The ultimate goal of this approach is to obtain a computationally tractable direct waveform joint inversion to solve simultaneously for source fields and earth conductivity structure in three and four dimensions. This is desirable on several grounds, including the improved spatial resolving power expected from use of a multitude of source illuminations, and the ability to operate in areas with high levels of source signal spatial complexity and non-stationarity. This goal would not be obtainable if one were to adopt a pure time domain solution for the inverse problem. This is particularly true for the case of MT surveys, since an enormous number of degrees of freedom are required to represent the observed MT waveforms across a large frequency bandwidth. This means that for the forward simulation, the smallest time steps should be finer than those required to represent the highest frequency, while the number of time steps should also cover the lowest frequency. This leads to a sensitivity matrix that is computationally burdensome when solving for a model update. We have implemented a code that addresses this situation through the use of cascade decimation decomposition to reduce the size of the sensitivity matrix substantially, through quasi-equivalent time domain decomposition. We also use a fictitious wave domain method to speed up computation time of the forward simulation in the time domain. By combining these refinements, we have developed a full waveform joint source field/earth conductivity inverse modeling method. We found that cascade decimation speeds computation of the sensitivity matrices dramatically, while keeping the solution close to that of the undecimated case. 
For example, for a model discretized into 2.6×10^5 cells, we obtain model updates in less than 1 hour on a 4U rack-mounted workgroup Linux server, which is a practical computational time for the inverse problem.
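The band-halving idea behind cascade decimation can be sketched with a simple two-point moving average before each downsampling step (the averaging filter, signal, and level count here are illustrative assumptions, not the filters used in the paper):

```python
import numpy as np

def cascade_decimate(x, levels=4):
    # Successive halving: low-pass (two-point average) then downsample by 2.
    # Each level represents a lower frequency band with half the samples,
    # which is what shrinks the effective size of the sensitivity problem.
    bands = [x]
    for _ in range(levels - 1):
        m = len(x) // 2
        x = 0.5 * (x[0::2][:m] + x[1::2][:m])
        bands.append(x)
    return bands

# A low-frequency test signal survives the averaging largely unchanged.
x = np.sin(2.0 * np.pi * np.arange(1024) / 64.0)
bands = cascade_decimate(x)
# band lengths: 1024, 512, 256, 128
```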

  10. Effects of VR system fidelity on analyzing isosurface visualization of volume datasets.

    PubMed

    Laha, Bireswar; Bowman, Doug A; Socha, John J

    2014-04-01

    Volume visualization is an important technique for analyzing datasets from a variety of different scientific domains. Volume data analysis is inherently difficult because volumes are three-dimensional, dense, and unfamiliar, requiring scientists to precisely control the viewpoint and to make precise spatial judgments. Researchers have proposed that more immersive (higher fidelity) VR systems might improve task performance with volume datasets, and significant results tied to different components of display fidelity have been reported. However, more information is needed to generalize these results to different task types, domains, and rendering styles. We visualized isosurfaces extracted from synchrotron microscopic computed tomography (SR-μCT) scans of beetles, in a CAVE-like display. We ran a controlled experiment evaluating the effects of three components of system fidelity (field of regard, stereoscopy, and head tracking) on a variety of abstract task categories that are applicable to various scientific domains, and also compared our results with those from our prior experiment using 3D texture-based rendering. We report many significant findings. For example, for search and spatial judgment tasks with isosurface visualization, a stereoscopic display provides better performance, but for tasks with 3D texture-based rendering, displays with higher field of regard were more effective, independent of the levels of the other display components. We also found that systems with high field of regard and head tracking improve performance in spatial judgment tasks. Our results extend existing knowledge and produce new guidelines for designing VR systems to improve the effectiveness of volume data analysis.

  11. Hypercube matrix computation task

    NASA Technical Reports Server (NTRS)

    Calalo, Ruel H.; Imbriale, William A.; Jacobi, Nathan; Liewer, Paulett C.; Lockhart, Thomas G.; Lyzenga, Gregory A.; Lyons, James R.; Manshadi, Farzin; Patterson, Jean E.

    1988-01-01

A major objective of the Hypercube Matrix Computation effort at the Jet Propulsion Laboratory (JPL) is to investigate the applicability of a parallel computing architecture to the solution of large-scale electromagnetic scattering problems. Three scattering analysis codes are being implemented and assessed on a JPL/California Institute of Technology (Caltech) Mark 3 Hypercube. The codes, which utilize different underlying algorithms, give a means of evaluating the general applicability of this parallel architecture. The three analysis codes being implemented are a frequency domain method of moments code, a time domain finite difference code, and a frequency domain finite elements code. These analysis capabilities are being integrated into an electromagnetics interactive analysis workstation which can serve as a design tool for the construction of antennas and other radiating or scattering structures. The first two years of work on the Hypercube Matrix Computation effort are summarized. They include both new developments and results as well as work previously reported in the Hypercube Matrix Computation Task: Final Report for 1986 to 1987 (JPL Publication 87-18).

  12. CCOMP: An efficient algorithm for complex roots computation of determinantal equations

    NASA Astrophysics Data System (ADS)

    Zouros, Grigorios P.

    2018-01-01

In this paper a free Python algorithm, entitled CCOMP (Complex roots COMPutation), is developed for the efficient computation of complex roots of determinantal equations inside a prescribed complex domain. The key to the method presented is the efficient determination of candidate points inside the domain in whose close neighborhood a complex root may lie. Once these points are detected, the algorithm proceeds to a two-dimensional minimization problem with respect to the minimum modulus eigenvalue of the system matrix. At the core of CCOMP are three sub-algorithms whose tasks are the efficient estimation of the minimum modulus eigenvalues of the system matrix inside the prescribed domain, the efficient computation of candidate points which guarantee the existence of minima, and finally, the computation of minima via bound-constrained minimization algorithms. Theoretical results and heuristics support the development and the performance of the algorithm, which is discussed in detail. CCOMP supports general complex matrices, and its efficiency, applicability and validity are demonstrated in a variety of microwave applications.
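The candidate-points-plus-minimization strategy can be sketched on a toy determinantal system A(z) with det A(z) = z^2 - 1, whose roots are z = ±1 (the matrix, grid resolution, and tolerances below are illustrative assumptions, not CCOMP's actual implementation):

```python
import numpy as np
from scipy.optimize import minimize

def system_matrix(z):
    # Toy determinantal system: det A(z) = z**2 - 1, roots at z = +1 and -1.
    return np.array([[z, 1.0], [1.0, z]], dtype=complex)

def min_modulus_eig(xy):
    # Objective: smallest |eigenvalue| of A(z), vanishing at determinant roots.
    z = complex(xy[0], xy[1])
    return np.abs(np.linalg.eigvals(system_matrix(z))).min()

def find_roots(xmin, xmax, ymin, ymax, n=41, tol=1e-8):
    xs = np.linspace(xmin, xmax, n)
    ys = np.linspace(ymin, ymax, n)
    vals = np.array([[min_modulus_eig((x, y)) for x in xs] for y in ys])
    roots = []
    # Candidate points: interior grid nodes that are local minima of the map.
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            if vals[i, j] <= vals[i - 1:i + 2, j - 1:j + 2].min():
                res = minimize(min_modulus_eig, [xs[j], ys[i]],
                               method="Nelder-Mead",
                               options={"xatol": 1e-10, "fatol": 1e-12})
                if res.fun < tol:
                    z = complex(res.x[0], res.x[1])
                    if not any(abs(z - r) < 1e-6 for r in roots):
                        roots.append(z)
    return roots

roots = find_roots(-2.0, 2.0, -1.0, 1.0)
# roots is close to [-1, 1]
```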

  13. 3D Vectorial Time Domain Computational Integrated Photonics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kallman, J S; Bond, T C; Koning, J M

    2007-02-16

The design of integrated photonic structures poses considerable challenges. 3D-Time-Domain design tools are fundamental in enabling technologies such as all-optical logic, photonic bandgap sensors, THz imaging, and fast radiation diagnostics. Such technologies are essential to LLNL and WFO sponsors for a broad range of applications: encryption for communications and surveillance sensors (NSA, NAI and IDIV/PAT); high density optical interconnects for high-performance computing (ASCI); high-bandwidth instrumentation for NIF diagnostics; micro-sensor development for weapon miniaturization within the Stockpile Stewardship and DNT programs; and applications within HSO for CBNP detection devices. While there exist a number of photonics simulation tools on the market, they primarily model devices of interest to the communications industry. We saw the need to extend our previous software to match the Laboratory's unique emerging needs. These include modeling novel material effects (such as those of radiation induced carrier concentrations on refractive index) and device configurations (RadTracker bulk optics with radiation induced details, Optical Logic edge emitting lasers with lateral optical inputs). In addition we foresaw significant advantages to expanding our own internal simulation codes: parallel supercomputing could be incorporated from the start, and the simulation source code would be accessible for modification and extension. This work addressed Engineering's Simulation Technology Focus Area, specifically photonics. Problems addressed from the Engineering roadmap of the time included modeling the Auston switch (an important THz source/receiver), modeling Vertical Cavity Surface Emitting Lasers (VCSELs, which had been envisioned as part of fast radiation sensors), and multi-scale modeling of optical systems (for a variety of applications). 
We proposed to develop novel techniques to numerically solve the 3D multi-scale propagation problem for both the microchip laser logic devices as well as devices characterized by electromagnetic (EM) propagation in nonlinear materials with time-varying parameters. The deliverables for this project were extended versions of the laser logic device code Quench2D and the EM propagation code EMsolve with new modules containing the novel solutions incorporated by taking advantage of the existing software interface and structured computational modules. Our approach was multi-faceted since no single methodology can always satisfy the tradeoff between model runtime and accuracy requirements. We divided the problems to be solved into two main categories: those that required Full Wave Methods and those that could be modeled using Approximate Methods. Full Wave techniques are useful in situations where Maxwell's equations are not separable (or the problem is small in space and time), while approximate techniques can treat many of the remaining cases.

  14. Efficient integration method for fictitious domain approaches

    NASA Astrophysics Data System (ADS)

    Duczek, Sascha; Gabbert, Ulrich

    2015-10-01

In the current article, we present an efficient and accurate numerical method for the integration of the system matrices in fictitious domain approaches such as the finite cell method (FCM). In the framework of the FCM, the physical domain is embedded in a geometrically larger domain of simple shape which is discretized using a regular Cartesian grid of cells. Therefore, a spacetree-based adaptive quadrature technique is normally deployed to resolve the geometry of the structure. Depending on the complexity of the structure under investigation, this method accounts for most of the computational effort. To reduce the computational costs of computing the system matrices, an efficient quadrature scheme based on the divergence theorem (Gauß-Ostrogradsky theorem) is proposed. Using this theorem the dimension of the integral is reduced by one, i.e. instead of solving the integral over the whole domain, only its contour needs to be considered. In the current paper, we present the general principles of the integration method and its implementation. Results for several two-dimensional benchmark problems highlight its properties. The efficiency of the proposed method is compared to conventional spacetree-based integration techniques.
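The dimension-reduction idea can be illustrated in its simplest form: with F = (x, 0), div F = 1, so the divergence theorem turns the area integral of 1 over Omega into the contour integral of x dy, which a one-dimensional quadrature rule evaluates (a minimal sketch of the principle, not the FCM quadrature itself):

```python
import numpy as np

def area_by_contour(x_of_t, dydt_of_t, t):
    # Divergence theorem with F = (x, 0), div F = 1:
    # the area of Omega equals the contour integral of x dy over its boundary.
    return np.trapz(x_of_t(t) * dydt_of_t(t), t)

# Circle of radius r: x = r*cos(t), y = r*sin(t), so dy/dt = r*cos(t).
r = 1.5
t = np.linspace(0.0, 2.0 * np.pi, 4001)
area = area_by_contour(lambda s: r * np.cos(s), lambda s: r * np.cos(s), t)
# area is close to pi * r**2
```

The 2D area integral has been replaced by a 1D quadrature along the boundary, which is the same saving the FCM scheme exploits cell by cell.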

  15. Persistence of evolutionary memory: primordial six-transmembrane helical domain mu opiate receptors selectively linked to endogenous morphine signaling.

    PubMed

    Kream, Richard M; Sheehan, Melinda; Cadet, Patrick; Mantione, Kirk J; Zhu, Wei; Casares, Federico; Stefano, George B

    2007-12-01

Biochemical, molecular and pharmacological evidence for two unique six-transmembrane helical (TMH) domain opiate receptors expressed from the mu opioid receptor (MOR) gene has been presented. Designated mu3 and mu4 receptors, both protein species are Class A rhodopsin-like members of the superfamily of G-protein coupled receptors but are selectively tailored to mediate the cellular regulatory effects of endogenous morphine and related morphinan alkaloids via stimulation of nitric oxide (NO) production and release. Both mu3 and mu4 receptors lack a sequence of approximately 90 amino acids that constitutes the extracellular N-terminal and TMH1 domains and part of the first intracellular loop of the mu1 receptor, but retain the empirically defined ligand binding pocket distributed across conserved TMH2, TMH3, and TMH7 domains of the mu1 sequence. Additionally, the receptor proteins are terminated by unique intracellular C-terminal amino acid sequences that serve as putative coupling or docking domains required for constitutive NO synthase activation. Because the recognition profile of mu3 and mu4 receptors is restricted to rigid benzylisoquinoline alkaloids typified by morphine and its extended family of chemical congeners, it is hypothesized that conformational stabilization provided by interaction of extended extracellular N-terminal protein domains and the extracellular loops is required for binding of endogenous opioid peptides as well as synthetic flexible opiate alkaloids.

  16. Socially Extended Cognition and Shared Intentionality

    PubMed Central

    Lyre, Holger

    2018-01-01

    The paper looks at the intersection of extended cognition and social cognition. The central claim is that the mechanisms of shared intentionality can equally be considered as coupling mechanisms of cognitive extension into the social domain. This claim will be demonstrated by investigating a detailed example of cooperative action, and it will be argued that such cases imply that socially extended cognition is not only about cognitive vehicles, but that content must additionally be taken into account. It is finally outlined how social content externalism can in principle be grounded in socially extended cognition. PMID:29892254

  17. Predicting domain-domain interaction based on domain profiles with feature selection and support vector machines

    PubMed Central

    2010-01-01

    Background Protein-protein interaction (PPI) plays essential roles in cellular functions. The cost, time and other limitations associated with the current experimental methods have motivated the development of computational methods for predicting PPIs. As protein interactions generally occur via domains instead of the whole molecules, predicting domain-domain interaction (DDI) is an important step toward PPI prediction. Computational methods developed so far have utilized information from various sources at different levels, from primary sequences, to molecular structures, to evolutionary profiles. Results In this paper, we propose a computational method to predict DDI using support vector machines (SVMs), based on domains represented as interaction profile hidden Markov models (ipHMM) where interacting residues in domains are explicitly modeled according to the three dimensional structural information available at the Protein Data Bank (PDB). Features about the domains are extracted first as the Fisher scores derived from the ipHMM and then selected using singular value decomposition (SVD). Domain pairs are represented by concatenating their selected feature vectors, and classified by a support vector machine trained on these feature vectors. The method is tested by leave-one-out cross validation experiments with a set of interacting protein pairs adopted from the 3DID database. The prediction accuracy has shown significant improvement as compared to InterPreTS (Interaction Prediction through Tertiary Structure), an existing method for PPI prediction that also uses the sequences and complexes of known 3D structure. Conclusions We show that domain-domain interaction prediction can be significantly enhanced by exploiting information inherent in the domain profiles via feature selection based on Fisher scores, singular value decomposition and supervised learning based on support vector machines. 
Datasets and source code are freely available on the web at http://liao.cis.udel.edu/pub/svdsvm. Implemented in Matlab and supported on Linux and MS Windows. PMID:21034480
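The feature-selection-plus-SVM pipeline described above can be sketched with synthetic stand-ins for the Fisher-score profiles (scikit-learn's TruncatedSVD and SVC are used as generic substitutes, and the dataset sizes, labels and parameters are invented for illustration; the real method derives features from ipHMMs and 3DID data):

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-domain Fisher-score feature vectors.
n_domains, n_features = 60, 200
domain_feats = rng.normal(size=(n_domains, n_features))

# Step 1: feature selection / dimensionality reduction via SVD.
svd = TruncatedSVD(n_components=10, random_state=0)
reduced = svd.fit_transform(domain_feats)

# Step 2: represent each domain pair by concatenating the reduced vectors.
pairs = [(i, j) for i in range(n_domains) for j in range(i + 1, n_domains)]
X = np.array([np.concatenate([reduced[i], reduced[j]]) for i, j in pairs])

# Synthetic "interacts / does not interact" labels from a linear rule,
# so the classifier has something learnable to recover.
w = rng.normal(size=X.shape[1])
y = (X @ w > 0).astype(int)

# Step 3: train a support vector machine on the pair vectors.
clf = SVC(kernel="linear", C=10.0).fit(X, y)
acc = clf.score(X, y)
```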

  18. Interest point detection for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Dorado-Muñoz, Leidy P.; Vélez-Reyes, Miguel; Roysam, Badrinath; Mukherjee, Amit

    2009-05-01

This paper presents an algorithm for automated extraction of interest points (IPs) in multispectral and hyperspectral images. Interest points are features of the image that capture information from their neighbourhoods and are distinctive and stable under transformations such as translation and rotation. Interest-point operators for monochromatic images were proposed more than a decade ago and have since been studied extensively. IPs have been applied to diverse problems in computer vision, including image matching, recognition, registration, 3D reconstruction, change detection, and content-based image retrieval. Interest points aid data reduction and lessen the computational burden of various algorithms (such as registration, object detection, and 3D reconstruction) by replacing an exhaustive search over the entire image domain with a probe into a concise set of highly informative points. An interest operator seeks out points in an image that are structurally distinct, invariant to imaging conditions, stable under geometric transformation, and interpretable, making them good candidates for interest points. Our approach extends ideas from Lowe's keypoint operator, which uses local extrema of the Difference of Gaussians (DoG) operator at multiple scales to detect interest points in gray-level images. The proposed approach extends Lowe's method by directly converting scalar operations, such as scale-space generation and extreme point detection, into operations that take the vector nature of the image into consideration. Experimental results with RGB and hyperspectral images demonstrate the potential of the method for this application and the potential improvements of a fully vectorial approach over the band-by-band approaches described in the literature.
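The scalar DoG machinery that the paper extends can be sketched as follows (the scale values and threshold are arbitrary choices for illustration; the vectorial extension would replace the scalar image with band vectors):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def dog_keypoints(img, sigmas=(1.0, 1.6, 2.56, 4.1), thresh=0.05):
    # Build a Difference-of-Gaussians stack from successively blurred images.
    blurred = [gaussian_filter(img, s) for s in sigmas]
    dog = np.stack([blurred[i + 1] - blurred[i] for i in range(len(sigmas) - 1)])
    # A keypoint is a local maximum of |DoG| over space and adjacent scales.
    mag = np.abs(dog)
    is_max = (mag == maximum_filter(mag, size=(3, 3, 3))) & (mag > thresh)
    s, r, c = np.nonzero(is_max)
    return list(zip(r.tolist(), c.tolist(), s.tolist()))

# Synthetic image: one bright Gaussian blob should yield a keypoint at its center.
y, x = np.mgrid[0:64, 0:64]
img = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 3.0 ** 2))
kps = dog_keypoints(img)
# kps contains a keypoint at (row, col) near (32, 32)
```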

  19. Numerical Methods for Forward and Inverse Problems in Discontinuous Media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chartier, Timothy P.

The research emphasis under this grant's funding is in the area of algebraic multigrid methods. The research has two main branches: 1) exploring interdisciplinary applications in which algebraic multigrid can make an impact and 2) extending the scope of algebraic multigrid methods with algorithmic improvements that are based in strong analysis. The work in interdisciplinary applications falls primarily in the field of biomedical imaging. Work under this grant demonstrated the effectiveness and robustness of multigrid for solving linear systems that result from highly heterogeneous finite element method models of the human head. The results in this work also give promise to medical advances possible with software that may be developed. Research to extend the scope of algebraic multigrid has been focused in several areas. In collaboration with researchers at the University of Colorado, Lawrence Livermore National Laboratory, and Los Alamos National Laboratory, the PI developed an adaptive multigrid with subcycling via complementary grids. This method has very cheap computing costs per iterate and is showing promise as a preconditioner for conjugate gradient. Recent work with Los Alamos National Laboratory concentrates on developing algorithms that take advantage of the recent advances in adaptive multigrid research. The results of the various efforts in this research could ultimately have direct use and impact to researchers for a wide variety of applications, including, astrophysics, neuroscience, contaminant transport in porous media, bi-domain heart modeling, modeling of tumor growth, and flow in heterogeneous porous media. This work has already led to basic advances in computational mathematics and numerical linear algebra and will continue to do so into the future.

  20. Formatting scripts with computers and Extended BASIC.

    PubMed

    Menning, C B

    1984-02-01

A computer program, written in the language of Extended BASIC, is presented which enables scripts for educational media to be quickly written in a nearly unformatted style. From the resulting script file, stored on magnetic tape or disk, the computer program formats the script into a storyboard, a presentation, or a narrator's script. Script headings and page and paragraph numbers are automatic features of the word processing. Suggestions are given for making personal modifications to the computer program.

  1. Parallel Domain Decomposition Formulation and Software for Large-Scale Sparse Symmetrical/Unsymmetrical Aeroacoustic Applications

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Watson, Willie R. (Technical Monitor)

    2005-01-01

The overall objectives of this research work are to formulate and validate efficient parallel algorithms, and to efficiently design and implement computer software for solving large-scale acoustic problems arising from the unified frameworks of finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should take full advantage of the multiple processing capabilities offered by most modern high-performance computing platforms for efficient parallel computation. To achieve this objective, the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper preconditioning strategies, unrolling strategies, and effective interprocessor communication schemes. Finally, the numerical performance of the developed parallel finite element procedures will be evaluated by solving a series of structural and acoustic (symmetrical and unsymmetrical) problems on different computing platforms. Comparisons with existing "commercialized" and/or "public domain" software are also included, whenever possible.

  2. A Delphi Study on Technology Enhanced Learning (TEL) Applied on Computer Science (CS) Skills

    ERIC Educational Resources Information Center

    Porta, Marcela; Mas-Machuca, Marta; Martinez-Costa, Carme; Maillet, Katherine

    2012-01-01

    Technology Enhanced Learning (TEL) is a new pedagogical domain aiming to study the usage of information and communication technologies to support teaching and learning. The following study investigated how this domain is used to increase technical skills in Computer Science (CS). A Delphi method was applied, using three-rounds of online survey…

  3. Impurity distribution and microstructure of Ga-doped ZnO films grown by molecular beam epitaxy

    NASA Astrophysics Data System (ADS)

    Kvit, A. V.; Yankovich, A. B.; Avrutin, V.; Liu, H.; Izyumskaya, N.; Özgür, Ü.; Morkoç, H.; Voyles, P. M.

    2012-12-01

    We report microstructural characterization of heavily Ga-doped ZnO (GZO) thin films on GaN and sapphire by aberration-corrected scanning transmission electron microscopy. Growth under oxygen-rich and metal-rich growth conditions leads to changes in the GZO polarity and different extended defects. For GZO layers on sapphire, the primary extended defects are voids, inversion domain boundaries, and low-angle grain boundaries. Ga doping of ZnO grown under metal-rich conditions causes a switch from pure oxygen polarity to mixed oxygen and zinc polarity in small domains. Electron energy loss spectroscopy and energy dispersive spectroscopy spectrum imaging show that Ga is homogeneous, but other residual impurities tend to accumulate at the GZO surface and at extended defects. GZO grown on GaN on c-plane sapphire has Zn polarity and no voids. There are misfit dislocations at the interfaces between GZO and an undoped ZnO buffer layer and at the buffer/GaN interface. Low-angle grain boundaries are the only threading microstructural defects. The potential effects of different extended defects and impurity distributions on free carrier scattering are discussed.

  4. Using Molecular Dynamics Simulations as an Aid in the Prediction of Domain Swapping of Computationally Designed Protein Variants.

    PubMed

    Mou, Yun; Huang, Po-Ssu; Thomas, Leonard M; Mayo, Stephen L

    2015-08-14

    In standard implementations of computational protein design, a positive-design approach is used to predict sequences that will be stable on a given backbone structure. Possible competing states are typically not considered, primarily because appropriate structural models are not available. One potential competing state, the domain-swapped dimer, is especially compelling because it is often nearly identical with its monomeric counterpart, differing by just a few mutations in a hinge region. Molecular dynamics (MD) simulations provide a computational method to sample different conformational states of a structure. Here, we tested whether MD simulations could be used as a post-design screening tool to identify sequence mutations leading to domain-swapped dimers. We hypothesized that a successful computationally designed sequence would have backbone structure and dynamics characteristics similar to that of the input structure and that, in contrast, domain-swapped dimers would exhibit increased backbone flexibility and/or altered structure in the hinge-loop region to accommodate the large conformational change required for domain swapping. While attempting to engineer a homodimer from a 51-amino-acid fragment of the monomeric protein engrailed homeodomain (ENH), we had instead generated a domain-swapped dimer (ENH_DsD). MD simulations on these proteins showed increased B-factors derived from MD simulation in the hinge loop of the ENH_DsD domain-swapped dimer relative to monomeric ENH. Two point mutants of ENH_DsD designed to recover the monomeric fold were then tested with an MD simulation protocol. The MD simulations suggested that one of these mutants would adopt the target monomeric structure, which was subsequently confirmed by X-ray crystallography. Copyright © 2015. Published by Elsevier Ltd.
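The flexibility comparison described above rests on computing B-factors from atomic fluctuations over the trajectory, B = (8*pi^2/3) * <(delta r)^2>. A minimal sketch with a synthetic trajectory (the array shapes and noise scales are invented; a real analysis would use aligned MD coordinates):

```python
import numpy as np

def b_factors(traj):
    # traj: array of shape (n_frames, n_atoms, 3), coordinates in Angstroms.
    # B-factor from the mean squared fluctuation about the average structure.
    mean_pos = traj.mean(axis=0)
    msf = ((traj - mean_pos) ** 2).sum(axis=2).mean(axis=0)
    return (8.0 * np.pi ** 2 / 3.0) * msf

rng = np.random.default_rng(1)
rigid = rng.normal(scale=0.1, size=(200, 5, 3))   # well-ordered residues
floppy = rng.normal(scale=0.5, size=(200, 5, 3))  # hinge-loop-like residues
traj = np.concatenate([rigid, floppy], axis=1)

B = b_factors(traj)
# the hinge-like atoms (last five) show markedly larger B-factors
```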

  5. Periodic Time-Domain Nonlocal Nonreflecting Boundary Conditions for Duct Acoustics

    NASA Technical Reports Server (NTRS)

    Watson, Willie R.; Zorumski, William E.

    1996-01-01

    Periodic time-domain boundary conditions are formulated for direct numerical simulation of acoustic waves in ducts without flow. Well-developed frequency-domain boundary conditions are transformed into the time domain. The formulation is presented here in one space dimension and time; however, this formulation has an advantage in that its extension to variable-area, higher dimensional, and acoustically treated ducts is rigorous and straightforward. The boundary condition simulates a nonreflecting wave field in an infinite uniform duct and is implemented by impulse-response operators that are applied at the boundary of the computational domain. These operators are generated by convolution integrals of the corresponding frequency-domain operators. The acoustic solution is obtained by advancing the Euler equations to a periodic state with the MacCormack scheme. The MacCormack scheme utilizes the boundary condition to limit the computational space and preserve the radiation boundary condition. The success of the boundary condition is attributed to the fact that it is nonreflecting to periodic acoustic waves. In addition, transient waves can pass rapidly out of the solution domain. The boundary condition is tested for a pure tone and a multitone source in a linear setting. The effects of various initial conditions are assessed. Computational solutions with the boundary condition are consistent with the known solutions for nonreflecting wave fields in an infinite uniform duct.
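The mechanism of turning a frequency-domain boundary operator into a time-domain impulse-response operator applied by convolution can be sketched generically (the operator Z below is an arbitrary low-pass stand-in chosen only for illustration, not the duct-acoustics operator from the paper):

```python
import numpy as np

n, dt = 256, 0.01
freqs = np.fft.fftfreq(n, d=dt)

# Hypothetical frequency-domain boundary operator (assumed form).
Z = 1.0 / (1.0 + 2j * np.pi * freqs * 0.05)

# Impulse-response operator: inverse DFT of the sampled frequency response.
h = np.real(np.fft.ifft(Z))

# Apply the operator at the boundary by convolving the boundary signal
# history with the impulse response.
signal = np.sin(2.0 * np.pi * 5.0 * np.arange(n) * dt)
boundary_value = np.convolve(signal, h)[:n]
```

The zero-frequency gain of the operator is preserved exactly (the impulse response sums to Z at DC), while the oscillatory signal is attenuated, as a low-pass boundary operator should do.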

  6. New Flutter Analysis Technique for Time-Domain Computational Aeroelasticity

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi; Lung, Shun-Fat

    2017-01-01

    A new time-domain approach for computing flutter speed is presented. Based on the time-history result of aeroelastic simulation, the unknown unsteady aerodynamics model is estimated using a system identification technique. The full aeroelastic model is generated via coupling the estimated unsteady aerodynamic model with the known linear structure model. The critical dynamic pressure is computed and used in the subsequent simulation until the convergence of the critical dynamic pressure is achieved. The proposed method is applied to a benchmark cantilevered rectangular wing.
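The identification step can be sketched with an ordinary least-squares autoregressive (AR) fit to a simulated decaying response, with stability judged from the identified poles (the AR model, signal, and model order are illustrative assumptions, not the system identification technique of the paper):

```python
import numpy as np

def fit_ar(y, order=2):
    # Least-squares AR(p): y[k] ~ a1*y[k-1] + ... + ap*y[k-p].
    p = order
    cols = [y[p - i - 1:len(y) - i - 1] for i in range(p)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, y[p:], rcond=None)
    return coeffs

def poles(coeffs):
    # Roots of z^p - a1*z^(p-1) - ... - ap; |z| < 1 means a decaying response.
    return np.roots(np.concatenate(([1.0], -coeffs)))

# Synthetic "aeroelastic time history": a damped oscillation.
dt = 0.05
t = np.arange(0.0, 20.0, dt)
y = np.exp(-0.1 * t) * np.cos(2.0 * t)

p = poles(fit_ar(y))
stable = bool(np.all(np.abs(p) < 1.0))  # True: response decays (below flutter)
```

A pole magnitude crossing 1.0 as dynamic pressure increases would mark the flutter boundary in this discrete-time picture.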

  7. Large exchange-dominated domain wall velocities in antiferromagnetically coupled nanowires

    NASA Astrophysics Data System (ADS)

    Kuteifan, Majd; Lubarda, M. V.; Fu, S.; Chang, R.; Escobar, M. A.; Mangin, S.; Fullerton, E. E.; Lomakin, V.

    2016-04-01

    Magnetic nanowires supporting field- and current-driven domain wall motion are envisioned for methods of information storage and processing. A major obstacle for their practical use is the domain-wall velocity, which is traditionally limited for low fields and currents due to the Walker breakdown occurring when the driving component reaches a critical threshold value. We show through numerical and analytical modeling that the Walker breakdown limit can be extended or completely eliminated in antiferromagnetically coupled magnetic nanowires. These coupled nanowires allow for large domain-wall velocities driven by field and/or current as compared to conventional nanowires.

  8. Modeling software systems by domains

    NASA Technical Reports Server (NTRS)

    Dippolito, Richard; Lee, Kenneth

    1992-01-01

    The Software Architectures Engineering (SAE) Project at the Software Engineering Institute (SEI) has developed engineering modeling techniques that both reduce the complexity of software for domain-specific computer systems and result in systems that are easier to build and maintain. These techniques allow maximum freedom for system developers to apply their domain expertise to software. We have applied these techniques to several types of applications, including training simulators operating in real time, engineering simulators operating in non-real time, and real-time embedded computer systems. Our modeling techniques result in software that mirrors both the complexity of the application and the domain knowledge requirements. We submit that the proper measure of software complexity reflects neither the number of software component units nor the code count, but the locus and amount of domain knowledge. As a result of using these techniques, domain knowledge is isolated by fields of engineering expertise and removed from the concern of the software engineer. In this paper, we describe kinds of domain expertise, describe engineering by domains, and provide relevant examples of software developed for simulator applications using these techniques.

  9. Discussion summary: Fictitious domain methods

    NASA Technical Reports Server (NTRS)

    Glowinski, Roland; Rodrigue, Garry

    1991-01-01

    Fictitious domain methods are constructed in the following manner: Suppose a partial differential equation is to be solved on an open bounded set, Omega, in 2-D or 3-D. Let R be a rectangular domain containing the closure of Omega. The partial differential equation is first solved on R. Using the solution on R, the solution of the equation on Omega is then recovered by some procedure. The advantage of the fictitious domain method is that in many cases the solution of a partial differential equation on a rectangular region is easier to compute than on a nonrectangular region. Fictitious domain methods for solving elliptic PDEs on general regions are also very efficient when used on a parallel computer. The reason is that one can use the many domain decomposition methods that are available for solving the PDE on the fictitious rectangular region. The discussion on fictitious domain methods began with a talk by R. Glowinski in which he gave some examples of a variational approach to fictitious domain methods for solving the Helmholtz and Navier-Stokes equations.
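A one-dimensional analogue makes the recipe concrete: embed Omega in a larger interval (the "rectangle" R), solve over all of R with a volume penalty that forces the solution to vanish outside Omega, then restrict to Omega. This is a hedged sketch of one common penalty variant, not the formulation presented in the talk; the grid size, penalty parameter, and model problem are illustrative.

```python
# Solve -u'' = 1 on Omega = (0.25, 0.75) by embedding it in R = (0, 1)
# and penalizing u outside Omega (volume-penalty fictitious domain).
N = 41                      # grid points on the fictitious domain R = [0, 1]
h = 1.0 / (N - 1)
ia, ib = 10, 30             # grid indices of Omega's endpoints 0.25 and 0.75
eps = 1e-10                 # penalty parameter forcing u ~ 0 outside Omega
u = [0.0] * N
for _ in range(20000):      # Jacobi sweeps over the whole rectangle R
    nxt = [0.0] * N
    for i in range(1, N - 1):
        inside = ia < i < ib
        rhs = 1.0 if inside else 0.0
        pen = 0.0 if inside else 1.0 / eps
        # (2/h^2 + pen) * u_i = rhs + (u_{i-1} + u_{i+1}) / h^2
        nxt[i] = (rhs + (u[i - 1] + u[i + 1]) / h ** 2) / (2.0 / h ** 2 + pen)
    u = nxt
# Recover the solution on Omega: for -u'' = 1 with u(0.25) = u(0.75) = 0,
# the exact midpoint value is (b - a)^2 / 8 = 0.03125.
print(round(u[N // 2], 5))
```

The penalty term dominates the stencil outside Omega, pinning the solution near zero there, so the restriction to Omega reproduces the Dirichlet problem without ever meshing Omega's boundary explicitly.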

  10. A mechanistic role of Helix 8 in GPCRs: Computational modeling of the dopamine D2 receptor interaction with the GIPC1-PDZ-domain.

    PubMed

    Sensoy, Ozge; Weinstein, Harel

    2015-04-01

    Helix-8 (Hx8) is a structurally conserved amphipathic helical motif in class-A GPCRs, adjacent to the C-terminal sequence that is responsible for PDZ-domain-recognition. The Hx8 segment in the dopamine D2 receptor (D2R) constitutes the C-terminal segment and we investigate its role in the function of D2R by studying the interaction with the PDZ-containing GIPC1 using homology models based on the X-ray structures of very closely related analogs: the D3R for the D2R model, and the PDZ domain of GIPC2 for GIPC1-PDZ. The mechanism of this interaction was investigated with all-atom unbiased molecular dynamics (MD) simulations that reveal the role of the membrane in maintaining the helical fold of Hx8, and with biased MD simulations to elucidate the energy drive for the interaction with the GIPC1-PDZ. We found that it becomes more favorable energetically for Hx8 to adopt the extended conformation observed in all PDZ-ligand complexes when it moves away from the membrane, and that C-terminus palmitoylation of D2R enhanced membrane penetration by the Hx8 backbone. De-palmitoylation enables Hx8 to move out into the aqueous environment for interaction with the PDZ domain. All-atom unbiased MD simulations of the full D2R-GIPC1-PDZ complex in sphingolipid/cholesterol membranes show that the D2R carboxyl C-terminus samples the region of the conserved GFGL motif located on the carboxylate-binding loop of the GIPC1-PDZ, and the entire complex distances itself from the membrane interface. Together, these results outline a likely mechanism of Hx8 involvement in the interaction of the GPCR with PDZ-domains in the course of signaling. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  11. A mechanistic role of Helix 8 in GPCRs: Computational modeling of the dopamine D2 receptor interaction with the GIPC1-PDZ-domain

    PubMed Central

    Sensoy, Ozge; Weinstein, Harel

    2015-01-01

    Helix-8 (Hx8) is a structurally conserved amphipathic helical motif in class-A GPCRs, adjacent to the C-terminal sequence that is responsible for PDZ-domain-recognition. The Hx8 segment in the dopamine D2 receptor (D2R) constitutes the C-terminal segment and we investigate its role in the function of D2R by studying the interaction with the PDZ-containing GIPC1 using homology models based on the X-ray structures of very closely related analogs: the D3R for the D2R model, and the PDZ domain of GIPC2 for GIPC1-PDZ. The mechanism of this interaction was investigated with all-atom unbiased molecular dynamics (MD) simulations that reveal the role of the membrane in maintaining the helical fold of Hx8, and with biased MD simulations to elucidate the energy drive for the interaction with the GIPC1-PDZ. We found that it becomes more favorable energetically for Hx8 to adopt the extended conformation observed in all PDZ-ligand complexes when it moves away from the membrane, and that C-terminus palmitoylation of D2R enhanced membrane penetration by the Hx8 backbone. De-palmitoylation enables Hx8 to move out into the aqueous environment for interaction with the PDZ domain. All-atom unbiased MD simulations of the full D2R-GIPC1-PDZ complex in sphingolipid/cholesterol membranes show that the D2R carboxyl C-terminus samples the region of the conserved GFGL motif located on the carboxylate-binding loop of the GIPC1-PDZ, and the entire complex distances itself from the membrane interface. Together, these results outline a likely mechanism of Hx8 involvement in the interaction of the GPCR with PDZ-domains in the course of signaling. PMID:25592838

  12. A mechanistic role of Helix 8 in GPCRs: Computational modeling of the dopamine D2 receptor interaction with the GIPC1–PDZ-domain

    DOE PAGES

    Sensoy, Ozge; Weinstein, Harel

    2015-01-12

    Helix-8 (Hx8) is a structurally conserved amphipathic helical motif in class-A GPCRs, adjacent to the C-terminal sequence that is responsible for PDZ-domain-recognition. The Hx8 segment in the dopamine D2 receptor (D2R) constitutes the C-terminal segment and we investigate its role in the function of D2R by studying the interaction with the PDZ-containing GIPC1 using homology models based on the X-ray structures of very closely related analogs: the D3R for the D2R model, and the PDZ domain of GIPC2 for GIPC1–PDZ. The mechanism of this interaction was investigated with all-atom unbiased molecular dynamics (MD) simulations that reveal the role of the membrane in maintaining the helical fold of Hx8, and with biased MD simulations to elucidate the energy drive for the interaction with the GIPC1–PDZ. We found that it becomes more favorable energetically for Hx8 to adopt the extended conformation observed in all PDZ–ligand complexes when it moves away from the membrane, and that C-terminus palmitoylation of D2R enhanced membrane penetration by the Hx8 backbone. De-palmitoylation enables Hx8 to move out into the aqueous environment for interaction with the PDZ domain. All-atom unbiased MD simulations of the full D2R–GIPC1-PDZ complex in sphingolipid/cholesterol membranes show that the D2R carboxyl C-terminus samples the region of the conserved GFGL motif located on the carboxylate-binding loop of the GIPC1–PDZ, and the entire complex distances itself from the membrane interface. Altogether, these results outline a likely mechanism of Hx8 involvement in the interaction of the GPCR with PDZ-domains in the course of signaling.

  13. Aerodynamic Shape Optimization of Supersonic Aircraft Configurations via an Adjoint Formulation on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Reuther, James; Alonso, Juan Jose; Rimlinger, Mark J.; Jameson, Antony

    1996-01-01

    This work describes the application of a control theory-based aerodynamic shape optimization method to the problem of supersonic aircraft design. The design process is greatly accelerated through the use of both control theory and a parallel implementation on distributed memory computers. Control theory is employed to derive the adjoint differential equations whose solution allows for the evaluation of design gradient information at a fraction of the computational cost required by previous design methods. The resulting problem is then implemented on parallel distributed memory architectures using a domain decomposition approach, an optimized communication schedule, and the MPI (Message Passing Interface) Standard for portability and efficiency. The final result achieves very rapid aerodynamic design based on higher order computational fluid dynamics methods (CFD). In our earlier studies, the serial implementation of this design method was shown to be effective for the optimization of airfoils, wings, wing-bodies, and complex aircraft configurations using both the potential equation and the Euler equations. In our most recent paper, the Euler method was extended to treat complete aircraft configurations via a new multiblock implementation. Furthermore, during the same conference, we also presented preliminary results demonstrating that this basic methodology could be ported to distributed memory parallel computing architectures. In this paper, our concern will be to demonstrate that the combined power of these new technologies can be used routinely in an industrial design environment by applying it to the case study of the design of typical supersonic transport configurations. A particular difficulty of this test case is posed by the propulsion/airframe integration.
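The cost saving that the adjoint formulation provides can be shown on a one-unknown stand-in for the flow solve: a single adjoint solve yields the exact design gradient, matching a finite-difference check without the additional flow solutions that differencing every design variable would require. The scalar state equation, cost function, and all names below are illustrative, not the paper's formulation.

```python
# Adjoint-gradient sketch on a scalar "state equation" A(alpha) * u = b,
# with A(alpha) = alpha, and cost J(u) = 0.5 * u**2.

def solve_state(alpha, b=1.0):
    return b / alpha                 # u from A(alpha) u = b

def adjoint_gradient(alpha):
    u = solve_state(alpha)
    lam = u / alpha                  # adjoint solve: A^T lam = dJ/du = u
    dA_dalpha = 1.0                  # since A(alpha) = alpha
    return -lam * dA_dalpha * u      # dJ/dalpha = -lam^T (dA/dalpha) u

def fd_gradient(alpha, d=1e-6):
    # Finite-difference check: needs two extra state solves per variable.
    J = lambda a: 0.5 * solve_state(a) ** 2
    return (J(alpha + d) - J(alpha - d)) / (2 * d)

alpha = 2.0
print(adjoint_gradient(alpha), fd_gradient(alpha))
```

With many design variables the contrast sharpens: the adjoint route still costs one extra solve in total, while finite differencing costs two per variable, which is the "fraction of the computational cost" the abstract refers to.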

  14. PDON: Parkinson's disease ontology for representation and modeling of the Parkinson's disease knowledge domain.

    PubMed

    Younesi, Erfan; Malhotra, Ashutosh; Gündel, Michaela; Scordis, Phil; Kodamullil, Alpha Tom; Page, Matt; Müller, Bernd; Springstubbe, Stephan; Wüllner, Ullrich; Scheller, Dieter; Hofmann-Apitius, Martin

    2015-09-22

    Despite the unprecedented and increasing amount of data, relatively little progress has been made in molecular characterization of mechanisms underlying Parkinson's disease. In the area of Parkinson's research, there is a pressing need to integrate various pieces of information into a meaningful context of presumed disease mechanism(s). Disease ontologies provide a novel means for organizing, integrating, and standardizing the knowledge domains specific to disease in a compact, formalized and computer-readable form and serve as a reference for knowledge exchange or systems modeling of disease mechanism. The Parkinson's disease ontology was built according to the life cycle of ontology building. Structural, functional, and expert evaluation of the ontology was performed to ensure the quality and usability of the ontology. A novelty metric has been introduced to measure the gain of new knowledge using the ontology. Finally, a cause-and-effect model was built around PINK1 and two gene expression studies from the Gene Expression Omnibus database were re-annotated to demonstrate the usability of the ontology. The Parkinson's disease ontology with a subclass-based taxonomic hierarchy covers the broad spectrum of major biomedical concepts from molecular to clinical features of the disease, and also reflects different views on disease features held by molecular biologists, clinicians and drug developers. The current version of the ontology contains 632 concepts, which are organized under nine views. The structural evaluation showed the balanced dispersion of concept classes throughout the ontology. The functional evaluation demonstrated that the ontology-driven literature search could gain novel knowledge not present in the reference Parkinson's knowledge map. The ontology was able to answer specific questions related to Parkinson's when evaluated by experts. 
Finally, the added value of the Parkinson's disease ontology is demonstrated by ontology-driven modeling of PINK1 and re-annotation of gene expression datasets relevant to Parkinson's disease. The Parkinson's disease ontology delivers the knowledge domain of Parkinson's disease in a compact, computer-readable form, which can be further edited and enriched by the scientific community and also used to construct, represent, and automatically extend Parkinson's-related computable models. A practical version of the Parkinson's disease ontology for browsing and editing can be publicly accessed at http://bioportal.bioontology.org/ontologies/PDON.

  15. Decisions To Promote or Hire: The Case of University Administrative Appointments. ASHE Annual Meeting Paper.

    ERIC Educational Resources Information Center

    Johnsrud, Linda K.; Sagaria, Mary Ann D.

    Internal labor market theory is extended to identify market domains that influence administrative staffing decisions, and a theoretical predictive model about the role of market domains in decisions to promote or hire is proposed and tested. Information is presented as follows: theoretical framework; labor markets within higher education; and the…

  16. Impact of Expectancy-Value and Situational Interest Motivation Specificity on Physical Education Outcomes

    ERIC Educational Resources Information Center

    Ding, Haiyong; Sun, Haichun; Chen, Ang

    2013-01-01

    To be successful in learning, students need to be motivated to engage and learn. The domain-specificity motivation theory articulates that student motivation is often determined by the content being taught to them. The purpose of this study was to extend the theory by determining domain-specificity of situational interest and expectancy-value…

  17. The Degrees of Freedom Concept--Extending the Domain

    ERIC Educational Resources Information Center

    Biernacki, J. J.

    2016-01-01

    The degrees of freedom (DOF) concept is a powerful tool that has been taught since at least the 1970s in the undergraduate curriculum, typically introduced in the context of a first course on material and energy balances. The concept, however, has not been widely applied beyond the material balance domain and in general is not taught as a unified…

  18. Recovering the Role of Reasoning in Moral Education to Address Inequity and Social Justice

    ERIC Educational Resources Information Center

    Nucci, Larry

    2016-01-01

    This article reasserts the centrality of reasoning as the focus for moral education. Attention to moral cognition must be extended to incorporate sociogenetic processes in moral growth. Moral education is not simply growth within the moral domain, but addresses capacities of students to engage in cross-domain coordination. Development beyond…

  19. Modeling Expert Behavior in Support of an Adaptive Psychomotor Training Environment: A Marksmanship Use Case

    ERIC Educational Resources Information Center

    Goldberg, Benjamin; Amburn, Charles; Ragusa, Charlie; Chen, Dar-Wei

    2018-01-01

    The U.S. Army is interested in extending the application of intelligent tutoring systems (ITS) beyond cognitive problem spaces and into psychomotor skill domains. In this paper, we present a methodology and validation procedure for creating expert model representations in the domain of rifle marksmanship. GIFT (Generalized Intelligent Framework…

  20. Confocal Microscopy-Based Estimation of Parameters for Computational Modeling of Electrical Conduction in the Normal and Infarcted Heart.

    PubMed

    Greiner, Joachim; Sankarankutty, Aparna C; Seemann, Gunnar; Seidel, Thomas; Sachse, Frank B

    2018-01-01

    Computational modeling is an important tool to advance our knowledge on cardiac diseases and their underlying mechanisms. Computational models of conduction in cardiac tissues require identification of parameters. Our knowledge on these parameters is limited, especially for diseased tissues. Here, we assessed and quantified parameters for computational modeling of conduction in cardiac tissues. We used a rabbit model of myocardial infarction (MI) and an imaging-based approach to derive the parameters. Left ventricular tissue samples were obtained from fixed control hearts (animals: 5) and infarcted hearts (animals: 6) within 200 μm (region 1), 250-750 μm (region 2) and 1,000-1,250 μm (region 3) of the MI border. We assessed extracellular space, fibroblasts, smooth muscle cells, nuclei and gap junctions by a multi-label staining protocol. With confocal microscopy we acquired three-dimensional (3D) image stacks with a voxel size of 200 × 200 × 200 nm. Image segmentation yielded 3D reconstructions of tissue microstructure, which were used to numerically derive extracellular conductivity tensors. Volume fractions of myocyte, extracellular, interlaminar cleft, vessel and fibroblast domains in control were (in %) 65.03 ± 3.60, 24.68 ± 3.05, 3.95 ± 4.84, 7.71 ± 2.15, and 2.48 ± 1.11, respectively. Volume fractions in regions 1 and 2 were different for myocyte, myofibroblast, vessel, and extracellular domains. Fibrosis, defined as increase in fibrotic tissue constituents, was (in %) 21.21 ± 1.73, 16.90 ± 9.86, and 3.58 ± 8.64 in MI regions 1, 2, and 3, respectively. For control tissues, image-based computation of longitudinal, transverse and normal extracellular conductivity yielded (in S/m) 0.36 ± 0.11, 0.17 ± 0.07, and 0.1 ± 0.06, respectively. Conductivities were markedly increased in regions 1 (+75, +171, and +100%), 2 (+53, +165, and +80%), and 3 (+42, +141, and +60%).
Volume fractions of the extracellular space including interlaminar clefts strongly correlated with conductivities in control and MI hearts. Our study provides novel quantitative data for computational modeling of conduction in normal and MI hearts. Notably, our study introduces comprehensive statistical information on tissue composition and extracellular conductivities on a microscopic scale in the MI border zone. We suggest that the presented data fill a significant gap in modeling parameters and extend our foundation for computational modeling of cardiac conduction.

  1. Unrealistic optimism in advice taking: A computational account.

    PubMed

    Leong, Yuan Chang; Zaki, Jamil

    2018-02-01

    Expert advisors often make surprisingly inaccurate predictions about the future, yet people heed their suggestions nonetheless. Here we provide a novel, computational account of this unrealistic optimism in advice taking. Across 3 studies, participants observed as advisors predicted the performance of a stock. Advisors varied in their accuracy, performing reliably above, at, or below chance. Despite repeated feedback, participants exhibited inflated perceptions of advisors' accuracy, and reliably "bet" on advisors' predictions more than their performance warranted. Participants' decisions tightly tracked a computational model that makes 2 assumptions: (a) people hold optimistic initial expectations about advisors, and (b) people preferentially incorporate information that adheres to their expectations when learning about advisors. Consistent with model predictions, explicitly manipulating participants' initial expectations altered their optimism bias and subsequent advice-taking. With well-calibrated initial expectations, participants no longer exhibited an optimism bias. We then explored crowdsourced ratings as a strategy to curb unrealistic optimism in advisors. Star ratings for each advisor were collected from an initial group of participants, which were then shown to a second group of participants. Instead of calibrating expectations, these ratings propagated and exaggerated the unrealistic optimism. Our results provide a computational account of the cognitive processes underlying inflated perceptions of expertise, and explore the boundary conditions under which they occur. We discuss the adaptive value of this optimism bias, and how our account can be extended to explain unrealistic optimism in other domains. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
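The model's two assumptions translate into a short belief-update sketch: an optimistic prior about the advisor's accuracy, and a larger learning rate for outcomes that match the current belief. The parameter values and function names here are illustrative, not the paper's fitted estimates.

```python
import random
random.seed(7)

def learn(outcomes, prior=0.8, lr_confirm=0.30, lr_disconfirm=0.02):
    # belief = estimated P(advisor correct); assumption (a): optimistic prior.
    belief = prior
    for correct in outcomes:
        err = (1.0 if correct else 0.0) - belief
        # Assumption (b): outcomes in the direction of the current belief
        # (confirming evidence) are weighted more than disconfirming ones.
        confirming = (correct and belief >= 0.5) or (not correct and belief < 0.5)
        lr = lr_confirm if confirming else lr_disconfirm
        belief += lr * err
    return belief

# An advisor who actually performs at chance (50% correct) over 100 trials:
outcomes = [random.random() < 0.5 for _ in range(100)]
belief = learn(outcomes)
print(round(belief, 2))  # remains well above 0.5: inflated perceived accuracy
```

Because confirming evidence pulls the belief up faster than disconfirming evidence pulls it down, the estimate settles far above the advisor's true 50% accuracy, mirroring the optimism bias the studies report; a well-calibrated prior with symmetric learning rates would instead converge toward 0.5.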

  2. An Introduction to Programming for Bioscientists: A Python-Based Primer

    PubMed Central

    Mura, Cameron

    2016-01-01

    Computing has revolutionized the biological sciences over the past several decades, such that virtually all contemporary research in molecular biology, biochemistry, and other biosciences utilizes computer programs. The computational advances have come on many fronts, spurred by fundamental developments in hardware, software, and algorithms. These advances have influenced, and even engendered, a phenomenal array of bioscience fields, including molecular evolution and bioinformatics; genome-, proteome-, transcriptome- and metabolome-wide experimental studies; structural genomics; and atomistic simulations of cellular-scale molecular assemblies as large as ribosomes and intact viruses. In short, much of post-genomic biology is increasingly becoming a form of computational biology. The ability to design and write computer programs is among the most indispensable skills that a modern researcher can cultivate. Python has become a popular programming language in the biosciences, largely because (i) its straightforward semantics and clean syntax make it a readily accessible first language; (ii) it is expressive and well-suited to object-oriented programming, as well as other modern paradigms; and (iii) the many available libraries and third-party toolkits extend the functionality of the core language into virtually every biological domain (sequence and structure analyses, phylogenomics, workflow management systems, etc.). This primer offers a basic introduction to coding, via Python, and it includes concrete examples and exercises to illustrate the language’s usage and capabilities; the main text culminates with a final project in structural bioinformatics. A suite of Supplemental Chapters is also provided. Starting with basic concepts, such as that of a “variable,” the Chapters methodically advance the reader to the point of writing a graphical user interface to compute the Hamming distance between two DNA sequences. PMID:27271528

  3. An Introduction to Programming for Bioscientists: A Python-Based Primer.

    PubMed

    Ekmekci, Berk; McAnany, Charles E; Mura, Cameron

    2016-06-01

    Computing has revolutionized the biological sciences over the past several decades, such that virtually all contemporary research in molecular biology, biochemistry, and other biosciences utilizes computer programs. The computational advances have come on many fronts, spurred by fundamental developments in hardware, software, and algorithms. These advances have influenced, and even engendered, a phenomenal array of bioscience fields, including molecular evolution and bioinformatics; genome-, proteome-, transcriptome- and metabolome-wide experimental studies; structural genomics; and atomistic simulations of cellular-scale molecular assemblies as large as ribosomes and intact viruses. In short, much of post-genomic biology is increasingly becoming a form of computational biology. The ability to design and write computer programs is among the most indispensable skills that a modern researcher can cultivate. Python has become a popular programming language in the biosciences, largely because (i) its straightforward semantics and clean syntax make it a readily accessible first language; (ii) it is expressive and well-suited to object-oriented programming, as well as other modern paradigms; and (iii) the many available libraries and third-party toolkits extend the functionality of the core language into virtually every biological domain (sequence and structure analyses, phylogenomics, workflow management systems, etc.). This primer offers a basic introduction to coding, via Python, and it includes concrete examples and exercises to illustrate the language's usage and capabilities; the main text culminates with a final project in structural bioinformatics. A suite of Supplemental Chapters is also provided. Starting with basic concepts, such as that of a "variable," the Chapters methodically advance the reader to the point of writing a graphical user interface to compute the Hamming distance between two DNA sequences.
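The primer's capstone project computes the Hamming distance between two DNA sequences; stripped of the graphical interface, the core computation fits in a few lines. This is a generic sketch of that calculation, not the primer's own code.

```python
def hamming(seq1, seq2):
    """Count positions at which two equal-length DNA sequences differ."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be the same length")
    return sum(a != b for a, b in zip(seq1, seq2))

print(hamming("GATTACA", "GACTATA"))  # prints 2: mismatches at positions 3 and 6
```

The generator expression inside `sum` exploits the fact that Python booleans count as 0 and 1, a small example of the "straightforward semantics and clean syntax" the abstract credits for the language's popularity.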

  4. Kernel Manifold Alignment for Domain Adaptation.

    PubMed

    Tuia, Devis; Camps-Valls, Gustau

    2016-01-01

    The wealth of sensory data coming from different modalities has opened numerous opportunities for data analysis. The data are of increasing volume, complexity and dimensionality, thus calling for new methodological innovations towards multimodal data processing. However, multimodal architectures must rely on models able to adapt to changes in the data distribution. Differences in the density functions can be due to changes in acquisition conditions (pose, illumination), sensor characteristics (number of channels, resolution) or different views (e.g. street-level vs. aerial views of the same building). We call these different acquisition modes domains, and refer to the adaptation problem as domain adaptation. In this paper, instead of adapting the trained models themselves, we alternatively focus on finding mappings of the data sources into a common, semantically meaningful, representation domain. This field of manifold alignment extends traditional techniques in statistics such as canonical correlation analysis (CCA) to deal with nonlinear adaptation and possibly non-corresponding data pairs between the domains. We introduce a kernel method for manifold alignment (KEMA) that can match an arbitrary number of data sources without needing corresponding pairs, just a few labeled examples in all domains. KEMA has interesting properties: 1) it generalizes other manifold alignment methods, 2) it can align manifolds of very different complexities, performing a discriminative alignment preserving each manifold's inner structure, 3) it can define a domain-specific metric to cope with multimodal specificities, 4) it can align data spaces of different dimensionality, 5) it is robust to strong nonlinear feature deformations, and 6) it is closed-form invertible, which allows transfer across domains and data synthesis. To the authors' knowledge, this is the first method addressing all these important issues at once. 
We also present a reduced-rank version of KEMA for computational efficiency, and discuss the generalization performance of KEMA under Rademacher principles of stability. Aligning multimodal data with KEMA reports outstanding benefits when used as a data pre-conditioner step in the standard data analysis processing chain. KEMA exhibits very good performance over competing methods in synthetic controlled examples, visual object recognition and recognition of facial expressions tasks. KEMA is especially well-suited to deal with high-dimensional problems, such as images and videos, and under complicated distortions, twists and warpings of the data manifolds. A fully functional toolbox is available at https://github.com/dtuia/KEMA.git.

  5. Performance evaluation using SYSTID time domain simulation. [computer-aid design and analysis for communication systems

    NASA Technical Reports Server (NTRS)

    Tranter, W. H.; Ziemer, R. E.; Fashano, M. J.

    1975-01-01

    This paper reviews the SYSTID technique for performance evaluation of communication systems using time-domain computer simulation. An example program illustrates the language. The inclusion of both Gaussian and impulse noise models makes accurate simulation possible in a wide variety of environments. A very flexible postprocessor makes accurate and efficient performance evaluation possible.

  6. Computational Modeling and Numerical Methods for Spatiotemporal Calcium Cycling in Ventricular Myocytes

    PubMed Central

    Nivala, Michael; de Lange, Enno; Rovetti, Robert; Qu, Zhilin

    2012-01-01

    Intracellular calcium (Ca) cycling dynamics in cardiac myocytes is regulated by a complex network of spatially distributed organelles, such as sarcoplasmic reticulum (SR), mitochondria, and myofibrils. In this study, we present a mathematical model of intracellular Ca cycling and numerical and computational methods for computer simulations. The model consists of a coupled Ca release unit (CRU) network, which includes a SR domain and a myoplasm domain. Each CRU contains 10 L-type Ca channels and 100 ryanodine receptor channels, with individual channels simulated stochastically using a variant of Gillespie’s method, modified here to handle time-dependent transition rates. Both the SR domain and the myoplasm domain in each CRU are modeled by 5 × 5 × 5 voxels to maintain proper Ca diffusion. Advanced numerical algorithms implemented on graphical processing units were used for fast computational simulations. For a myocyte containing 100 × 20 × 10 CRUs, a 1-s heart time simulation takes about 10 min of machine time on a single NVIDIA Tesla C2050. Examples of simulated Ca cycling dynamics, such as Ca sparks, Ca waves, and Ca alternans, are shown. PMID:22586402
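The stochastic channel gating described above follows Gillespie's method: draw an exponential waiting time from the current total transition rate, then fire a transition. A minimal constant-rate sketch for a single two-state channel is shown below; the paper's variant additionally handles time-dependent rates, and the rate values here are illustrative.

```python
import random
random.seed(1)

def gillespie_channel(k_open=2.0, k_close=1.0, t_end=1000.0):
    # Single channel switching closed (0) <-> open (1) with constant rates.
    t, state = 0.0, 0
    open_time = 0.0
    while t < t_end:
        rate = k_open if state == 0 else k_close
        dt = random.expovariate(rate)        # exponential waiting time
        if state == 1:
            open_time += min(dt, t_end - t)  # accumulate time spent open
        t += dt
        state = 1 - state                    # fire the only possible reaction
    return open_time / t_end

p_open = gillespie_channel()
# The long-run open probability approaches k_open / (k_open + k_close) = 2/3.
print(round(p_open, 2))
```

With many channels per release unit, as in the model, the same draw is made from the sum of all channels' rates and the firing channel is chosen proportionally to its rate; GPU implementations parallelize this across the thousands of release units.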

  7. The extended TRIP supporting VoIP routing reservation with distributed QoS

    NASA Astrophysics Data System (ADS)

    Wang, Furong; Wu, Ye

    2004-04-01

    In this paper, an existing protocol, TRIP (Telephony Routing over IP), is extended to provide distributed QoS when making resource reservations for VoIP services such as H.323 and SIP. Enhanced LSs (location servers) are deployed in ITADs (IP Telephony Administrative Domains) to take charge of intra-domain routing policy, owing to the small propagation cost. This makes it easy to find an IP telephone route for an intra-domain VoIP media association while also providing intra-domain load balancing. For routing reservations that bridge domains, the inter-domain routing policy is responsible for finding the shortest inter-domain route with sufficient resources. We propose a routing preference policy based on QoS price for session traffic shaped by a token bucket, together with the related QoS messages and message cooperation.

  8. Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models

    NASA Astrophysics Data System (ADS)

    Altuntas, Alper; Baugh, John

    2017-07-01

    Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.

  9. Computer simulation: A modern day crystal ball?

    NASA Technical Reports Server (NTRS)

    Sham, Michael; Siprelle, Andrew

    1994-01-01

    It has long been the desire of managers to be able to look into the future and predict the outcome of decisions. With the advent of computer simulation and the tremendous capability provided by personal computers, that desire can now be realized. This paper presents an overview of computer simulation and modeling, and discusses the capabilities of Extend. Extend is an icon-driven, Macintosh-based software tool that brings the power of simulation to the average computer user. An example of an Extend-based model is presented in the form of the Space Transportation System (STS) Processing Model. The STS Processing Model produces eight shuttle launches per year, yet it takes only about ten minutes to run. In addition, statistical data such as facility utilization, wait times, and processing bottlenecks are produced. The addition or deletion of resources, such as orbiters or facilities, can be easily modeled and their impact analyzed. Through the use of computer simulation, it is possible to look into the future to see the impact of today's decisions.

  10. Domain Engineering

    NASA Astrophysics Data System (ADS)

    Bjørner, Dines

    Before software can be designed we must know its requirements. Before requirements can be expressed we must understand the domain. So it follows, from our dogma, that we must first establish precise descriptions of domains; then, from such descriptions, “derive” at least domain and interface requirements; and from those and machine requirements design the software, or, more generally, the computing systems.

  11. A comparative study of serial and parallel aeroelastic computations of wings

    NASA Technical Reports Server (NTRS)

    Byun, Chansup; Guruswamy, Guru P.

    1994-01-01

    A procedure for computing the aeroelasticity of wings on parallel multiple-instruction, multiple-data (MIMD) computers is presented. In this procedure, fluids are modeled using Euler equations, and structures are modeled using modal or finite element equations. The procedure is designed in such a way that each discipline can be developed and maintained independently by using a domain decomposition approach. In the present parallel procedure, each computational domain is scalable. A parallel integration scheme is used to compute aeroelastic responses by solving fluid and structural equations concurrently. The computational efficiency issues of parallel integration of both fluid and structural equations are investigated in detail. This approach, which reduces the total computational time by a factor of almost 2, is demonstrated for a typical aeroelastic wing by using various numbers of processors on the Intel iPSC/860.

  12. Large-Scale Computation of Nuclear Magnetic Resonance Shifts for Paramagnetic Solids Using CP2K.

    PubMed

    Mondal, Arobendo; Gaultois, Michael W; Pell, Andrew J; Iannuzzi, Marcella; Grey, Clare P; Hutter, Jürg; Kaupp, Martin

    2018-01-09

    Large-scale computations of nuclear magnetic resonance (NMR) shifts for extended paramagnetic solids (pNMR) are reported using the highly efficient Gaussian-augmented plane-wave implementation of the CP2K code. Combining hyperfine couplings obtained with hybrid functionals with g-tensors and orbital shieldings computed using gradient-corrected functionals, contact, pseudocontact, and orbital-shift contributions to pNMR shifts are accessible. Due to the efficient and highly parallel performance of CP2K, a wide variety of materials with large unit cells can be studied with extended Gaussian basis sets. Validation of various approaches for the different contributions to pNMR shifts is done first for molecules in a large supercell in comparison with typical quantum-chemical codes. This is then extended to a detailed study of g-tensors for extended solid transition-metal fluorides and for a series of complex lithium vanadium phosphates. Finally, lithium pNMR shifts are computed for Li3V2(PO4)3, for which detailed experimental data are available. This has allowed an in-depth study of different approaches (e.g., full periodic versus incremental cluster computations of g-tensors and different functionals and basis sets for hyperfine computations) as well as a thorough analysis of the different contributions to the pNMR shifts. This study paves the way for a more-widespread computational treatment of NMR shifts for paramagnetic materials.

  13. Photonic band structures solved by a plane-wave-based transfer-matrix method.

    PubMed

    Li, Zhi-Yuan; Lin, Lan-Lan

    2003-04-01

    Transfer-matrix methods adopting a plane-wave basis have been routinely used to calculate the scattering of electromagnetic waves by general multilayer gratings and photonic crystal slabs. In this paper we show that this technique, when combined with Bloch's theorem, can be extended to solve the photonic band structure for 2D and 3D photonic crystal structures. Three different eigensolution schemes to solve the traditional band diagrams along high-symmetry lines in the first Brillouin zone of the crystal are discussed. Optimal rules for the Fourier expansion over the dielectric function and electromagnetic fields with discontinuities occurring at the boundary of different material domains have been employed to accelerate the convergence of numerical computation. Application of this method to an important class of 3D layer-by-layer photonic crystals reveals the superior convergence of this approach over the conventional plane-wave expansion method.

  14. Collusion-Resistant Audio Fingerprinting System in the Modulated Complex Lapped Transform Domain

    PubMed Central

    Garcia-Hernandez, Jose Juan; Feregrino-Uribe, Claudia; Cumplido, Rene

    2013-01-01

    The collusion-resistant fingerprinting paradigm seems to be a practical solution to the piracy problem, as it allows media owners to detect any unauthorized copy and trace it back to the dishonest users. Despite billion-dollar losses in the music industry, most collusion-resistant fingerprinting systems are devoted to digital images and very few to audio signals. In this paper, state-of-the-art collusion-resistant fingerprinting ideas are extended to audio signals, and the corresponding parameters and operation conditions are proposed. Moreover, in order to carry out fingerprint detection using just a fraction of the pirate audio clip, block-based embedding and its corresponding detector are proposed. Extensive simulations show the robustness of the proposed system against the average collusion attack. Moreover, by using an efficient Fast Fourier Transform core and standard computer machines, it is shown that the proposed system is suitable for real-world scenarios. PMID:23762455

  15. Communications network design and costing model technical manual

    NASA Technical Reports Server (NTRS)

    Logan, K. P.; Somes, S. S.; Clark, C. A.

    1983-01-01

    This computer model provides the capability for analyzing long-haul trunking networks comprising a set of user-defined cities, traffic conditions, and tariff rates. Networks may consist of all terrestrial connectivity, all satellite connectivity, or a combination of terrestrial and satellite connectivity. Network solutions provide the least-cost routes between all cities, the least-cost network routing configuration, and terrestrial and satellite service cost totals. The CNDC model allows analyses involving three specific FCC-approved tariffs, which are uniquely structured and representative of most existing service connectivity and pricing philosophies. User-defined tariffs that can be variations of these three tariffs are accepted as input to the model and allow considerable flexibility in network problem specification. The resulting model extends the domain of network analysis from traditional fixed link cost (distance-sensitive) problems to more complex problems involving combinations of distance and traffic-sensitive tariffs.
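    The least-cost routing the model computes between all cities can be sketched with a standard Dijkstra shortest-path search over a tariff-weighted graph; the city names and link costs below are illustrative, not from the CNDC model.

```python
import heapq

def least_cost_route(links, src, dst):
    """Dijkstra search: `links` maps city -> list of (neighbor, cost) pairs."""
    queue = [(0.0, src, [src])]   # (accumulated cost, city, path so far)
    seen = set()
    while queue:
        cost, city, path = heapq.heappop(queue)
        if city == dst:
            return cost, path
        if city in seen:
            continue
        seen.add(city)
        for nxt, c in links.get(city, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + c, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical mixed terrestrial/satellite tariffs (cost per unit of traffic).
links = {
    "NYC": [("CHI", 3.0), ("SAT", 5.0)],
    "CHI": [("LAX", 4.0)],
    "SAT": [("LAX", 1.0)],   # satellite hop: distance-insensitive price
}
print(least_cost_route(links, "NYC", "LAX"))  # (6.0, ['NYC', 'SAT', 'LAX'])
```

    Traffic-sensitive tariffs would make the link costs functions of the offered load rather than constants, which is the generalization the abstract describes.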

  16. From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation

    DOE PAGES

    Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; ...

    2013-01-01

    Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.

  17. FPGA acceleration of rigid-molecule docking codes

    PubMed Central

    Sukhwani, B.; Herbordt, M.C.

    2011-01-01

    Modelling the interactions of biological molecules, or docking, is critical both to understanding basic life processes and to designing new drugs. The field programmable gate array (FPGA) based acceleration of a recently developed, complex, production docking code is described. The authors found that it is necessary to extend their previous three-dimensional (3D) correlation structure in several ways, most significantly to support simultaneous computation of several correlation functions. The result for small-molecule docking is a 100-fold speed-up of a section of the code that represents over 95% of the original run-time. An additional 2% is accelerated through a previously described method, yielding a total acceleration of 36× over a single core and 10× over a quad-core. This approach is found to be an ideal complement to graphics processing unit (GPU) based docking, which excels in the protein–protein domain. PMID:21857870

  18. Stationary engineering handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petrocelly, K.L.

    Years ago, the only qualifications you needed to become an operating engineer were the ability to shovel large chunks of coal through small furnace doors and the fortitude to sweat profusely for hours without fainting. As a consequence of technological evolution, the engineer's coal shovels have been replaced with computers, and now perspiration is more the result of job stress than exposure to high temperatures. The domain of the operator has been extended far beyond the smoke-filled caverns that once encased him, out into the physical plant, and his responsibilities have been expanded accordingly. Unlike his less sophisticated predecessor, today's technician must be well versed in all aspects of the operation. The field of power plant operations has become a full-fledged profession and its practitioners are called Stationary Engineers. This book addresses the areas of responsibility and the education and skills needed for successful operation of building services equipment.

  19. Normalization is a general neural mechanism for context-dependent decision making

    PubMed Central

    Louie, Kenway; Khaw, Mel W.; Glimcher, Paul W.

    2013-01-01

    Understanding the neural code is critical to linking brain and behavior. In sensory systems, divisive normalization seems to be a canonical neural computation, observed in areas ranging from retina to cortex and mediating processes including contrast adaptation, surround suppression, visual attention, and multisensory integration. Recent electrophysiological studies have extended these insights beyond the sensory domain, demonstrating an analogous algorithm for the value signals that guide decision making, but the effects of normalization on choice behavior are unknown. Here, we show that choice models using normalization generate significant (and classically irrational) choice phenomena driven by either the value or number of alternative options. In value-guided choice experiments, both monkey and human choosers show novel context-dependent behavior consistent with normalization. These findings suggest that the neural mechanism of value coding critically influences stochastic choice behavior and provide a generalizable quantitative framework for examining context effects in decision making. PMID:23530203
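    The divisive normalization computation described above can be sketched in a few lines; the gain and semi-saturation constants here are illustrative, not from the paper. Each option's value signal is divided by a constant plus the summed value of all options, so adding an alternative suppresses every response, which is the context dependence the abstract discusses.

```python
import numpy as np

def divisive_normalization(values, gain=100.0, sigma=1.0):
    """Normalized value code: R_i = gain * v_i / (sigma + sum_j v_j)."""
    values = np.asarray(values, dtype=float)
    return gain * values / (sigma + values.sum())

two = divisive_normalization([3.0, 1.0])
three = divisive_normalization([3.0, 1.0, 1.0])
# Adding a third option lowers the coded responses to the original two options.
print(two)    # responses to a two-option set
print(three)  # same two options plus a distractor: both responses shrink
```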

  20. A comprehensive computational study on pathogenic mis-sense mutations spanning the RING2 and REP domains of Parkin protein.

    PubMed

    Biswas, Ria; Bagchi, Angshuman

    2017-04-30

    Various mutations in the PARK2 gene, which encodes the protein parkin, are significantly associated with the onset of autosomal recessive juvenile Parkinsonism (ARJP) in neuronal cells. Parkin is a multi-domain protein: the N-terminal part contains the Ubl domain and the C-terminal part consists of four zinc-coordinating domains, viz., RING0, RING1, in-between-RING (IBR) and RING2. Disease mutations are spread over all the domains of Parkin, although mutations in some regions may affect its functionality more adversely. Mutations in the RING2 domain are seen to abolish the neuroprotective E3 ligase activity of Parkin. In the current work, we carried out a detailed in silico analysis to study the extent of pathogenicity of mutations spanning the Parkin RING2 domain and the adjoining REP region using SIFT, Mutation Assessor, PolyPhen2, SNPs&GO, GV/GD and I-Mutant. To study the structural and functional implications of these mutations on the RING2-REP domain of Parkin, we studied the solvent accessibility (SASA/RSA), hydrophobicity, intra-molecular hydrogen-bonding profile and domain architecture with various computational tools. Finally, we analysed the interaction energy profiles of the mutants and compared them to the wild-type protein using Discovery Studio 2.5. Comparing the various analyses, it could be safely concluded that, except for the P437L and A379V mutations, all other mutations were potentially deleterious, affecting various structural aspects of the RING2 domain architecture. This study is based purely on a computational approach that has the potential to identify disease mutations, and the information could further be used in the treatment and prognosis of disease.

  1. Coupling between Current and Dynamic Magnetization : from Domain Walls to Spin Waves

    NASA Astrophysics Data System (ADS)

    Lucassen, M. E.

    2012-05-01

    So far, we have derived some general expressions for domain-wall motion and the spin motive force. We have seen that the β parameter plays a large role in both subjects. In all chapters of this thesis, there is an emphasis on the determination of this parameter. We also know how to incorporate thermal fluctuations for rigid domain walls, as shown above. In Chapter 2, we study a different kind of fluctuation: shot noise. This noise arises because an electric current consists of discrete electrons and therefore fluctuates. In the process, we also compute transmission and reflection coefficients for a rigid domain wall, and from them the linear momentum transfer. More work on fluctuations is done in Chapter 3. Here, we consider an (extrinsically pinned) rigid domain wall under the influence of thermal fluctuations that induce a current via the spin motive force. We compute how the resulting noise in the current is related to the β parameter. In Chapter 4 we look in more detail into the spin motive forces from field-driven domain walls. Using micromagnetic simulations, we compute the spin motive force due to vortex domain walls explicitly. As mentioned before, this gives qualitatively different results than for a rigid domain wall. The final subject, in Chapter 5, is the application of the general expression for spin motive forces to magnons. Although this might seem unrelated to domain-wall motion, the calculation allows us to relate the β parameter to macroscopic transport coefficients. This work was supported by Stichting voor Fundamenteel Onderzoek der Materie (FOM), the Netherlands Organization for Scientific Research (NWO), and by the European Research Council (ERC) under the Seventh Framework Program (FP7).

  2. Performance evaluation of the inverse dynamics method for optimal spacecraft reorientation

    NASA Astrophysics Data System (ADS)

    Ventura, Jacopo; Romano, Marcello; Walter, Ulrich

    2015-05-01

    This paper investigates the application of the inverse dynamics in the virtual domain method to Euler angles, quaternions, and modified Rodrigues parameters for rapid optimal attitude trajectory generation for spacecraft reorientation maneuvers. The impact of the virtual domain and attitude representation is numerically investigated for both minimum time and minimum energy problems. Owing to the nature of the inverse dynamics method, it yields sub-optimal solutions for minimum time problems. Furthermore, the virtual domain improves the optimality of the solution, but at the cost of more computational time. The attitude representation also affects solution quality and computational speed. For minimum energy problems, the optimal solution can be obtained without the virtual domain with any considered attitude representation.

  3. Extended-Range High-Resolution Dynamical Downscaling over a Continental-Scale Domain

    NASA Astrophysics Data System (ADS)

    Husain, S. Z.; Separovic, L.; Yu, W.; Fernig, D.

    2014-12-01

    High-resolution mesoscale simulations, when applied for downscaling meteorological fields over large spatial domains and for extended time periods, can provide valuable information for many practical application scenarios including the weather-dependent renewable energy industry. In the present study, a strategy has been proposed to dynamically downscale coarse-resolution meteorological fields from Environment Canada's regional analyses for a period of multiple years over the entire Canadian territory. The study demonstrates that a continuous mesoscale simulation over the entire domain is the most suitable approach in this regard. Large-scale deviations in the different meteorological fields pose the biggest challenge for extended-range simulations over continental-scale domains, and the enforcement of the lateral boundary conditions is not sufficient to restrict such deviations. A scheme has therefore been developed to spectrally nudge the simulated high-resolution meteorological fields at the different model vertical levels towards those embedded in the coarse-resolution driving fields derived from the regional analyses. A series of experiments were carried out to determine the optimal nudging strategy, including the appropriate nudging length scales, nudging vertical profile and temporal relaxation. A forcing strategy based on grid nudging of the different surface fields, including surface temperature, soil moisture, and snow conditions, towards their expected values obtained from a high-resolution offline surface scheme was also devised to limit any considerable deviation in the evolving surface fields due to extended-range temporal integrations. The study shows that ensuring large-scale atmospheric similarities helps to deliver near-surface statistical scores for temperature, dew point temperature and horizontal wind speed that are better than or comparable to those of the operational regional forecasts issued by Environment Canada. Furthermore, the meteorological fields resulting from the proposed downscaling strategy have significantly improved spatiotemporal variance compared to those from the operational forecasts, and time series generated from the downscaled fields do not suffer from discontinuities due to switching between consecutive forecasts.

  4. Allan deviation computations of a linear frequency synthesizer system using frequency domain techniques

    NASA Technical Reports Server (NTRS)

    Wu, Andy

    1995-01-01

    Allan Deviation computations of linear frequency synthesizer systems have been reported previously using real-time simulations. Even though this takes less time than actual measurement, it is still very time consuming to compute the Allan Deviation for long sample times with the desired confidence level. Moreover, noises such as flicker phase noise and flicker frequency noise cannot be simulated precisely. The use of frequency-domain techniques can overcome these drawbacks. In this paper, the system error model of a fictitious linear frequency synthesizer is developed, and its performance using a cesium (Cs) atomic frequency standard (AFS) as a reference is evaluated using frequency-domain techniques. For a linear timing system, the power spectral density at the system output can be computed with known system transfer functions and known power spectral densities from the input noise sources. The resulting power spectral density can then be used to compute the Allan Variance at the system output. Sensitivities of the Allan Variance at the system output to each of its independent input noises are obtained, and they are valuable for design trade-offs and troubleshooting.
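    The PSD-to-Allan-variance step described above can be sketched numerically; the noise level h0 and integration limits are illustrative inputs, not values from the paper. The Allan variance follows from the one-sided PSD of fractional frequency S_y(f) via the standard transfer function sigma_y^2(tau) = 2 * integral of S_y(f) * sin^4(pi f tau) / (pi f tau)^2 df.

```python
import numpy as np

def allan_variance_from_psd(S_y, tau, f_max=1e4, n=1_000_000):
    """Allan variance via sigma^2(tau) = 2 * int S_y(f) sin^4(pi f tau)/(pi f tau)^2 df."""
    f = np.linspace(1e-6, f_max, n)  # avoid f = 0, where the filter vanishes anyway
    x = np.pi * f * tau
    kernel = np.sin(x) ** 4 / x ** 2
    return 2.0 * np.sum(S_y(f) * kernel) * (f[1] - f[0])  # simple quadrature

# White frequency noise S_y(f) = h0 has the known result sigma_y^2(tau) = h0/(2 tau),
# which the numerical integral should reproduce.
h0, tau = 1e-22, 1.0
sigma2 = allan_variance_from_psd(lambda f: np.full_like(f, h0), tau)
print(sigma2, h0 / (2 * tau))
```

    For a synthesizer, S_y(f) would instead be the sum of each input noise PSD weighted by the squared magnitude of its system transfer function.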

  5. Using neighborhood cohesiveness to infer interactions between protein domains.

    PubMed

    Segura, Joan; Sorzano, C O S; Cuenca-Alba, Jesus; Aloy, Patrick; Carazo, J M

    2015-08-01

    In recent years, large-scale studies have been undertaken to describe, at least partially, protein-protein interaction maps, or interactomes, for a number of relevant organisms, including human. However, current interactomes provide a somewhat limited picture of the molecular details involving protein interactions, mostly because essential experimental information, especially structural data, is lacking. Indeed, the gap between structural and interactomics information is widening and thus, for most interactions, key experimental information is missing. We elaborate on the observation that many interactions between proteins involve a pair of their constituent domains and, thus, the knowledge of how protein domains interact adds very significant information to any interactomic analysis. In this work, we describe a novel use of the neighborhood cohesiveness property to infer interactions between protein domains given a protein interaction network. We have shown that some clustering coefficients can be extended to measure a degree of cohesiveness between two sets of nodes within a network. Specifically, we used the meet/min coefficient to measure the proportion of interacting nodes between two sets of nodes and the fraction of common neighbors. This approach extends previous works where homolog coefficients were first defined around network nodes and later around edges. The proposed approach substantially increases both the number of predicted domain-domain interactions as well as its accuracy as compared with current methods.
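    The meet/min coefficient used above can be sketched as follows; the toy network and the set-based normalization are illustrative assumptions, not the authors' implementation. For two node sets A and B, it scores the shared neighborhood |N(A) ∩ N(B)| against the smaller of the two neighborhoods.

```python
def neighbors(edges, nodes):
    """Union of network neighbors of every node in `nodes`."""
    out = set()
    for a, b in edges:
        if a in nodes:
            out.add(b)
        if b in nodes:
            out.add(a)
    return out

def meet_min(edges, set_a, set_b):
    """meet/min coefficient: |N(A) & N(B)| / min(|N(A)|, |N(B)|)."""
    na, nb = neighbors(edges, set_a), neighbors(edges, set_b)
    if not na or not nb:
        return 0.0
    return len(na & nb) / min(len(na), len(nb))

# Toy interaction network: proteins p1 and p2 (say, carriers of two candidate
# domains) share all of p1's interaction partners.
edges = [("p1", "x"), ("p1", "y"), ("p2", "x"), ("p2", "y"), ("p2", "z")]
print(meet_min(edges, {"p1"}, {"p2"}))  # 1.0: N(p1) is contained in N(p2)
```

    A high score for the sets of proteins containing two given domains is then taken as evidence that those domains interact.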

  6. Combining MEDLINE and publisher data to create parallel corpora for the automatic translation of biomedical text

    PubMed Central

    2013-01-01

    Background Most of the institutional and research information in the biomedical domain is available in the form of English text. Even in countries where English is an official language, such as the United States, language can be a barrier for accessing biomedical information for non-native speakers. Recent progress in machine translation suggests that this technique could help make English texts accessible to speakers of other languages. However, the lack of adequate specialized corpora needed to train statistical models currently limits the quality of automatic translations in the biomedical domain. Results We show how a large-sized parallel corpus can automatically be obtained for the biomedical domain, using the MEDLINE database. The corpus generated in this work comprises article titles obtained from MEDLINE and abstract text automatically retrieved from journal websites, which substantially extends the corpora used in previous work. After assessing the quality of the corpus for two language pairs (English/French and English/Spanish) we use the Moses package to train a statistical machine translation model that outperforms previous models for automatic translation of biomedical text. Conclusions We have built translation data sets in the biomedical domain that can easily be extended to other languages available in MEDLINE. These sets can successfully be applied to train statistical machine translation models. While further progress should be made by incorporating out-of-domain corpora and domain-specific lexicons, we believe that this work improves the automatic translation of biomedical texts. PMID:23631733

  7. Dynamic texture recognition using local binary patterns with an application to facial expressions.

    PubMed

    Zhao, Guoying; Pietikäinen, Matti

    2007-06-01

    Dynamic texture (DT) is an extension of texture to the temporal domain. Description and recognition of DTs have attracted growing attention. In this paper, a novel approach for recognizing DTs is proposed and its simplifications and extensions to facial image analysis are also considered. First, the textures are modeled with volume local binary patterns (VLBP), which are an extension of the LBP operator widely used in ordinary texture analysis, combining motion and appearance. To make the approach computationally simple and easy to extend, only the co-occurrences of the local binary patterns on three orthogonal planes (LBP-TOP) are then considered. A block-based method is also proposed to deal with specific dynamic events such as facial expressions in which local information and its spatial locations should also be taken into account. In experiments with two DT databases, DynTex and Massachusetts Institute of Technology (MIT), both the VLBP and LBP-TOP clearly outperformed the earlier approaches. The proposed block-based method was evaluated with the Cohn-Kanade facial expression database with excellent results. The advantages of our approach include local processing, robustness to monotonic gray-scale changes, and simple computation.
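    The basic LBP operator that VLBP and LBP-TOP extend can be sketched minimally: the 8 neighbors of a pixel are thresholded against the center to form an 8-bit code. The neighbor ordering below is an illustrative convention, not the one in the paper.

```python
import numpy as np

def lbp_code(patch):
    """8-bit local binary pattern of a 3x3 patch: neighbors >= center set a bit."""
    center = patch[1, 1]
    # Clockwise ordering starting at the top-left corner (a convention chosen
    # for this sketch; implementations differ).
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(coords):
        if patch[r, c] >= center:
            code |= 1 << bit
    return code

patch = np.array([[9, 9, 9],
                  [1, 5, 1],
                  [1, 1, 1]])
print(lbp_code(patch))  # only the top row exceeds the center: bits 0-2 -> 7
```

    Because the code depends only on comparisons with the center, it is invariant to monotonic gray-scale changes, the robustness property the abstract highlights; LBP-TOP simply collects such histograms on the XY, XT, and YT planes of a video volume.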

  8. A Quasi-Dynamic Approach to modelling Hydrodynamic Focusing

    NASA Astrophysics Data System (ADS)

    Kommajosula, Aditya; Xu, Songzhe; Wu, Chueh-Yu; di Carlo, Dino; Ganapathysubramanian, Baskar; ComPM Lab Team; Di Carlo Lab Collaboration

    2016-11-01

    We examine a particle's tendency at different spatial locations to shift/rotate towards the equilibrium location, by constrained simulation. Although studies in the past have used this procedure in conjunction with FSI methods to great effect, the current work in 2D explores an alternative approach: a modified trust-region-based root-finding algorithm solves for the particle position and velocities at equilibrium, using "snapshots" of finite-element solutions to the steady-state Navier-Stokes equations iteratively over a computational domain attached to the particle reference frame. Through an assortment of test cases comprising circular and non-circular particle geometries, stability theory for dynamical systems is incorporated to locate the final focusing location and velocities. The results are compared with previous experimental/numerical reports and found to be in close agreement. A roughly thousand-fold reduction in computational time relative to the transient counterpart is observed for an illustrative case. The current framework is formulated in 2D for 3 degrees of freedom and will be extended to 3D. This framework potentially allows for quick, high-throughput parametric studies of equilibrium scaling laws.

  9. Extension of research data repository system to support direct compute access to biomedical datasets: enhancing Dataverse to support large datasets

    PubMed Central

    McKinney, Bill; Meyer, Peter A.; Crosas, Mercè; Sliz, Piotr

    2016-01-01

    Access to experimental X-ray diffraction image data is important for validation and reproduction of macromolecular models and indispensable for the development of structural biology processing methods. In response to the evolving needs of the structural biology community, we recently established a diffraction data publication system, the Structural Biology Data Grid (SBDG, data.sbgrid.org), to preserve primary experimental datasets supporting scientific publications. All datasets published through the SBDG are freely available to the research community under a public domain dedication license, with metadata compliant with the DataCite Schema (schema.datacite.org). A proof-of-concept study demonstrated community interest and utility. Publication of large datasets is a challenge shared by several fields, and the SBDG has begun collaborating with the Institute for Quantitative Social Science at Harvard University to extend the Dataverse (dataverse.org) open-source data repository system to structural biology datasets. Several extensions are necessary to support the size and metadata requirements for structural biology datasets. In this paper, we describe one such extension—functionality supporting preservation of filesystem structure within Dataverse—which is essential for both in-place computation and supporting non-http data transfers. PMID:27862010

  10. Time-dependent spectral renormalization method

    NASA Astrophysics Data System (ADS)

    Cole, Justin T.; Musslimani, Ziad H.

    2017-11-01

    The spectral renormalization method was introduced by Ablowitz and Musslimani (2005) as an effective way to numerically compute (time-independent) bound states for certain nonlinear boundary value problems. In this paper, we extend those ideas to the time domain and introduce a time-dependent spectral renormalization method as a numerical means to simulate linear and nonlinear evolution equations. The essence of the method is to convert the underlying evolution equation from its partial or ordinary differential form (using Duhamel's principle) into an integral equation. The solution sought is then viewed as a fixed point in both space and time. The resulting integral equation is numerically solved using a simple renormalized fixed-point iteration method. Convergence is achieved by introducing a time-dependent renormalization factor which is numerically computed from the physical properties of the governing evolution equation. The proposed method has the ability to incorporate physics into the simulations in the form of conservation laws or dissipation rates. This novel scheme is implemented on benchmark evolution equations: the classical nonlinear Schrödinger (NLS), the integrable PT-symmetric nonlocal NLS, and the viscous Burgers' equations, each a prototypical example of a conservative or dissipative dynamical system. Numerical implementation and algorithm performance are also discussed.
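    The time-independent spectral renormalization that this paper extends can be illustrated on the NLS soliton equation u'' - u + u^3 = 0, whose exact solution is sqrt(2)*sech(x). The sketch below is a Petviashvili-type renormalized fixed-point iteration; grid size, domain length, and initial guess are illustrative choices. The renormalization factor M keeps the iteration from collapsing to the trivial zero solution.

```python
import numpy as np

# Solve u'' - u + u^3 = 0 on a periodic grid; exact soliton: sqrt(2)*sech(x).
N, L = 256, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # spectral wavenumbers

u = np.exp(-x**2)                            # illustrative initial guess
for _ in range(200):
    uh = np.fft.fft(u)
    nh = np.fft.fft(u**3)                    # transform of the nonlinearity
    # Renormalization factor (power 3/2 for a cubic nonlinearity).
    M = np.sum((1 + k**2) * np.abs(uh)**2) / np.abs(np.sum(np.conj(uh) * nh))
    uh = M**1.5 * nh / (1 + k**2)            # renormalized fixed-point update
    u = np.real(np.fft.ifft(uh))

exact = np.sqrt(2) / np.cosh(x)
print(np.max(np.abs(u - exact)))  # residual against the known soliton
```

    The time-dependent method of the paper replaces this stationary fixed point with a fixed point of the Duhamel integral equation, renormalized at each step to enforce the equation's conservation or dissipation law.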

  11. All Roads Lead to Computing: Making, Participatory Simulations, and Social Computing as Pathways to Computer Science

    ERIC Educational Resources Information Center

    Brady, Corey; Orton, Kai; Weintrop, David; Anton, Gabriella; Rodriguez, Sebastian; Wilensky, Uri

    2017-01-01

    Computer science (CS) is becoming an increasingly diverse domain. This paper reports on an initiative designed to introduce underrepresented populations to computing using an eclectic, multifaceted approach. As part of a yearlong computing course, students engage in Maker activities, participatory simulations, and computing projects that…

  12. Final Report for "Implementation and Evaluation of Multigrid Linear Solvers into Extended Magnetohydrodynamic Codes for Petascale Computing"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Srinath Vadlamani; Scott Kruger; Travis Austin

    Extended magnetohydrodynamic (MHD) codes are used to model the large, slow-growing instabilities that are projected to limit the performance of the International Thermonuclear Experimental Reactor (ITER). The multiscale nature of the extended MHD equations requires an implicit approach. The current linear solvers needed for the implicit algorithm scale poorly because the resultant matrices are so ill-conditioned. A new solver is needed, especially one that scales to the petascale. The most successful scalable parallel solvers to date are multigrid solvers. Applying multigrid techniques to a set of equations whose fundamental modes are dispersive waves is a promising solution to CEMM problems. For Phase 1, we implemented multigrid preconditioners from the HYPRE project of the Center for Applied Scientific Computing at LLNL, via PETSc of the DOE SciDAC TOPS project, for the real matrix systems of the extended MHD code NIMROD, which is one of the primary modeling codes of the OFES-funded Center for Extended Magnetohydrodynamic Modeling (CEMM) SciDAC. We implemented the multigrid solvers successfully on the fusion test problem that allows for real matrix systems, and in the process learned about the details of NIMROD data structures and the difficulties of inverting NIMROD operators. The further success of this project will allow for efficient usage of future petascale computers at the National Leadership Facilities: Oak Ridge National Laboratory, Argonne National Laboratory, and the National Energy Research Scientific Computing Center. The project will be a collaborative effort between computational plasma physicists and applied mathematicians at Tech-X Corporation, applied mathematicians at Front Range Scientific Computations, Inc. (who are collaborators on the HYPRE project), and other computational plasma physicists involved with the CEMM project.
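As a hedged illustration of the multigrid idea behind such preconditioners, the sketch below is a toy 1-D geometric two-grid cycle (not HYPRE, PETSc, or NIMROD code): damped Jacobi smooths the high-frequency error on the fine grid, the remaining smooth residual equation is solved on a coarser grid, and the correction is interpolated back.

```python
import numpy as np

# Toy 1-D two-grid cycle for -u'' = f with zero Dirichlet boundaries.
def jacobi(u, f, h, iters=3, w=2.0/3.0):
    # damped Jacobi: effective at removing high-frequency error components
    for _ in range(iters):
        u[1:-1] = (1 - w)*u[1:-1] + w*0.5*(u[:-2] + u[2:] + h*h*f[1:-1])
    return u

def two_grid(u, f, h):
    u = jacobi(u, f, h)                                        # pre-smooth
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:])/h**2      # residual
    nc = u.size//2 + 1                                         # coarse grid size
    rc = np.zeros(nc)
    rc[1:-1] = 0.25*r[1:-2:2] + 0.5*r[2:-1:2] + 0.25*r[3::2]   # full weighting
    hc = 2*h
    # solve the coarse residual equation directly (it is small)
    Ac = (np.diag(2*np.ones(nc - 2)) - np.diag(np.ones(nc - 3), 1)
          - np.diag(np.ones(nc - 3), -1)) / hc**2
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(Ac, rc[1:-1])
    e = np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)  # prolong
    return jacobi(u + e, f, h)                                 # post-smooth

n = 129
h = 1.0/(n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi*x)        # exact solution u = sin(pi x)
u = np.zeros(n)
for _ in range(20):
    u = two_grid(u, f, h)
```

In practice libraries such as HYPRE recurse this idea over many levels and use it as a preconditioner rather than a standalone solver.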

  13. Good coupling for the multiscale patch scheme on systems with microscale heterogeneity

    NASA Astrophysics Data System (ADS)

    Bunder, J. E.; Roberts, A. J.; Kevrekidis, I. G.

    2017-05-01

    Computational simulation of microscale detailed systems is frequently only feasible over spatial domains much smaller than the macroscale of interest. The 'equation-free' methodology couples many small patches of microscale computations across space to empower efficient computational simulation over macroscale domains of interest. Motivated by molecular or agent simulations, we analyse the performance of various coupling schemes for patches when the microscale is inherently 'rough'. As a canonical problem in this universality class, we systematically analyse the case of heterogeneous diffusion on a lattice. Computer algebra explores how the dynamics of coupled patches predict the large scale emergent macroscale dynamics of the computational scheme. We determine good design for the coupling of patches by comparing the macroscale predictions from patch dynamics with the emergent macroscale on the entire domain, thus minimising the computational error of the multiscale modelling. The minimal error on the macroscale is obtained when the coupling utilises averaging regions which are between a third and a half of the patch. Moreover, when the symmetry of the inter-patch coupling matches that of the underlying microscale structure, patch dynamics predicts the desired macroscale dynamics to any specified order of error. The results confirm that the patch scheme is useful for macroscale computational simulation of a range of systems with microscale heterogeneity.

  14. Spatiotemporal Domain Decomposition for Massive Parallel Computation of Space-Time Kernel Density

    NASA Astrophysics Data System (ADS)

    Hohl, A.; Delmelle, E. M.; Tang, W.

    2015-07-01

    Accelerated processing capabilities are deemed critical when conducting analysis on spatiotemporal datasets of increasing size, diversity and availability. High-performance parallel computing offers the capacity to solve computationally demanding problems in a limited timeframe, but likewise poses the challenge of preventing processing inefficiency due to workload imbalance between computing resources. Therefore, when designing new algorithms capable of implementing parallel strategies, careful spatiotemporal domain decomposition is necessary to account for heterogeneity in the data. In this study, we perform octree-based adaptive decomposition of the spatiotemporal domain for parallel computation of space-time kernel density. In order to avoid edge effects near subdomain boundaries, we establish spatiotemporal buffers to include adjacent data points that are within the spatial and temporal kernel bandwidths. Then, we quantify the computational intensity of each subdomain to balance workloads among processors. We illustrate the benefits of our methodology using a space-time epidemiological dataset of Dengue fever, an infectious vector-borne disease that poses a severe threat to communities in tropical climates. Our parallel implementation of kernel density reaches substantial speedup compared to sequential processing, and achieves high levels of workload balance among processors due to accurate quantification of computational intensity. Our approach is portable to other space-time analytical methods.
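The effect of the buffers can be shown with a toy 1-D (temporal) analogue; the data and bandwidth below are invented for illustration, not the paper's Dengue dataset. Because the kernel has compact support, padding each subdomain by one bandwidth makes the stitched subdomain estimates agree with the global estimate, eliminating edge effects.

```python
import numpy as np

# Buffered decomposition for a compact-support kernel density estimate.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 10.0, 500)        # event times (synthetic)
h = 0.5                                   # kernel bandwidth

def intensity(eval_x, data, h):
    # unnormalized Epanechnikov kernel intensity at the evaluation points
    u = (eval_x[:, None] - data[None, :]) / h
    return np.where(np.abs(u) <= 1, 0.75*(1 - u**2), 0.0).sum(axis=1) / h

grid = np.linspace(0.0, 10.0, 201)
full = intensity(grid, pts, h)            # global (sequential) estimate

parts = []
for lo, hi in [(0.0, 5.0), (5.0, 10.0)]:  # two subdomains
    eval_mask = (grid >= lo) & (grid < hi)
    data_mask = (pts >= lo - h) & (pts < hi + h)   # buffer of one bandwidth
    parts.append(intensity(grid[eval_mask], pts[data_mask], h))
stitched = np.concatenate(parts)
# stitched matches the global estimate: no edge effects at the cut at t = 5
```

Without the buffer, evaluation points within one bandwidth of the cut would miss contributions from neighboring subdomains' data.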

  15. Tracking of large-scale structures in turbulent channel with direct numerical simulation of low Prandtl number passive scalar

    NASA Astrophysics Data System (ADS)

    Tiselj, Iztok

    2014-12-01

    Channel flow DNS (Direct Numerical Simulation) at friction Reynolds number 180 and with passive scalars of Prandtl numbers 1 and 0.01 was performed in various computational domains. The "normal" size domain was ˜2300 wall units long and ˜750 wall units wide; its size was taken from the similar DNS of Moser et al. The "large" computational domain, which should be sufficient to capture the largest structures of the turbulent flow, was 3 times longer and 3 times wider than the "normal" domain. The "very large" domain was 6 times longer and 6 times wider than the "normal" domain. All simulations were performed with the same spatial and temporal resolution. Comparison of the standard and large computational domains shows that the velocity field statistics (mean velocity, root-mean-square (RMS) fluctuations, and turbulent Reynolds stresses) agree to within 1%-2%. Similar agreement is observed for the Pr = 1 temperature fields and for the mean temperature profiles at Pr = 0.01. These differences can be attributed to the statistical uncertainties of the DNS. However, second-order moments, i.e., RMS temperature fluctuations, of the standard and large computational domains at Pr = 0.01 show significant differences of up to 20%. Stronger temperature fluctuations in the "large" and "very large" domains confirm the existence of the large-scale structures. Their influence is more or less invisible in the main velocity field statistics or in the statistics of the temperature fields at Prandtl numbers around 1. However, these structures play a visible role in the temperature fluctuations at low Prandtl number, where high temperature diffusivity effectively smears the small-scale structures in the thermal field and enhances the relative contribution of large scales.
    These large thermal structures represent a kind of echo of the large-scale velocity structures: the highest temperature-velocity correlations are observed not between the instantaneous temperatures and instantaneous streamwise velocities, but between the instantaneous temperatures and velocities averaged over a certain time interval.

  16. Smoothing Data Friction through building Service Oriented Data Platforms

    NASA Astrophysics Data System (ADS)

    Wyborn, L. A.; Richards, C. J.; Evans, B. J. K.; Wang, J.; Druken, K. A.

    2017-12-01

    Data friction has been commonly defined as the costs in time, energy and attention required to simply collect, check, store, move, receive, and access data. On average, researchers spend a significant fraction of their time finding the data for their research project and then reformatting it so that it can be used by the software application of their choice. There is an increasing role for both data repositories and software to be modernised to help reduce data friction in ways that support the better use of the data. Many generic data repositories simply accept data in the format supplied: the key check is that the data have sufficient metadata to enable discovery and download. Few generic repositories have both the expertise and the infrastructure to support the multiple domain-specific requirements that meet the increasing need for integration and reusability. In contrast, major science domain-focused repositories are increasingly able to implement and enforce community-endorsed best practices and guidelines that ensure reusability and harmonization of data for use within the community, by offering semi-automated QC workflows to improve the quality of submitted data. The most advanced of these science repositories now operate as service-oriented data platforms that extend the use of data across domain silos and increasingly provide server-side, programmatically enabled access to data via network protocols and community-standard APIs. To provide this, more rigorous QA/QC procedures are needed to validate data against standards and community software and tools. This ensures that the data can be accessed in expected ways and also demonstrates that the data work across the different (non-domain-specific) packages, tools and programming languages deployed by the various user communities.
    In Australia, the National Computational Infrastructure (NCI) has created such a service-oriented data platform, which is demonstrating how this approach can reduce data friction, serving individual domains as well as facilitating cross-domain collaboration. The approach has required additional effort and expertise from the repository, but it enables a more capable and efficient system that ultimately saves time for the individual researcher.

  17. Transient upset models in computer systems

    NASA Technical Reports Server (NTRS)

    Mason, G. M.

    1983-01-01

    Essential factors for the design of transient upset monitors for computers are discussed. The upset is a system-level event that is software dependent. It can occur in the program flow, the opcode set, the opcode address domain, the read address domain, and the write address domain. Most upsets occur in the program flow. It is shown that simple external monitors, functioning transparently with respect to system operations, can be built if a detailed accounting is made of the characteristics of the faults that can occur. Sample applications are provided for different states of Z-80 and 8085 based systems.
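A common software analogue of such a program-flow monitor is a signature check; the sketch below is a hypothetical illustration, not the paper's external hardware monitor. Each executed block feeds its id into a running signature, and a mismatch against the precomputed value at a checkpoint signals a program-flow upset.

```python
# Hypothetical program-flow signature monitor: the rotate-and-xor update
# makes the signature sensitive to both missing blocks and block order.
def step(sig, block_id):
    sig = ((sig << 1) | (sig >> 31)) & 0xFFFFFFFF  # 32-bit rotate-left
    return sig ^ block_id

def signature(path):
    sig = 0
    for block_id in path:
        sig = step(sig, block_id)
    return sig

golden = signature([1, 2, 3])            # precomputed for the correct flow
print(signature([1, 2, 3]) == golden)    # matching flow passes the check
print(signature([1, 3]) == golden)       # a skipped block is detected
```

Because the update rotates before mixing in the id, reordered blocks also change the signature, not just missing ones.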

  18. Membrane covered duct lining for high-frequency noise attenuation: prediction using a Chebyshev collocation method.

    PubMed

    Huang, Lixi

    2008-11-01

    A spectral method of Chebyshev collocation with domain decomposition is introduced for linear interaction between sound and structure in a duct lined with flexible walls backed by cavities with or without a porous material. The spectral convergence is validated by a one-dimensional problem with a closed-form analytical solution, and is then extended to the two-dimensional configuration and compared favorably against a previous method based on the Fourier-Galerkin procedure and a finite element modeling. The nonlocal, exact Dirichlet-to-Neumann boundary condition is embedded in the domain decomposition scheme without imposing extra computational burden. The scheme is applied to the problem of high-frequency sound absorption by duct lining, which is normally ineffective when the wavelength is comparable with or shorter than the duct height. When a tensioned membrane covers the lining, however, it scatters the incident plane wave into higher-order modes, which then penetrate the duct lining more easily and get dissipated. For the frequency range of f=0.3-3 studied here, f=0.5 being the first cut-on frequency of the central duct, the membrane cover is found to offer an additional 0.9 dB attenuation per unit axial distance equal to half of the duct height.
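The Chebyshev collocation machinery itself is standard; the sketch below uses the well-known differentiation-matrix construction with a model problem chosen for this illustration (not taken from the paper): solving u'' = e^x with homogeneous Dirichlet conditions, whose exact solution is e^x - x sinh(1) - cosh(1), and observing spectral accuracy.

```python
import numpy as np

def cheb(N):
    # Chebyshev differentiation matrix on the N+1 Gauss-Lobatto points
    # (the standard construction from the spectral-methods literature)
    x = np.cos(np.pi*np.arange(N + 1)/N)
    c = np.ones(N + 1); c[0] = c[-1] = 2.0
    c *= (-1.0)**np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0/c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))      # diagonal from negative row sums
    return D, x

N = 24
D, x = cheb(N)
D2 = D @ D                           # second-derivative matrix
u = np.zeros(N + 1)
# impose u(+-1) = 0 by restricting the solve to interior collocation points
u[1:-1] = np.linalg.solve(D2[1:-1, 1:-1], np.exp(x[1:-1]))
exact = np.exp(x) - x*np.sinh(1.0) - np.cosh(1.0)
```

Even at N = 24 the error is near machine precision, the spectral convergence that the paper validates on its one-dimensional test problem.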

  19. Extending (Q)SARs to incorporate proprietary knowledge for regulatory purposes: A case study using aromatic amine mutagenicity.

    PubMed

    Ahlberg, Ernst; Amberg, Alexander; Beilke, Lisa D; Bower, David; Cross, Kevin P; Custer, Laura; Ford, Kevin A; Van Gompel, Jacky; Harvey, James; Honma, Masamitsu; Jolly, Robert; Joossens, Elisabeth; Kemper, Raymond A; Kenyon, Michelle; Kruhlak, Naomi; Kuhnke, Lara; Leavitt, Penny; Naven, Russell; Neilan, Claire; Quigley, Donald P; Shuey, Dana; Spirkl, Hans-Peter; Stavitskaya, Lidiya; Teasdale, Andrew; White, Angela; Wichard, Joerg; Zwickl, Craig; Myatt, Glenn J

    2016-06-01

    Statistical-based and expert rule-based models built using public domain mutagenicity knowledge and data are routinely used for computational (Q)SAR assessments of pharmaceutical impurities in line with the approach recommended in the ICH M7 guideline. Knowledge from proprietary corporate mutagenicity databases could be used to increase the predictive performance for selected chemical classes as well as expand the applicability domain of these (Q)SAR models. This paper outlines a mechanism for sharing knowledge without the release of proprietary data. Primary aromatic amine mutagenicity was selected as a case study because this chemical class is often encountered in pharmaceutical impurity analysis and mutagenicity of aromatic amines is currently difficult to predict. As part of this analysis, a series of aromatic amine substructures were defined and the number of mutagenic and non-mutagenic examples for each chemical substructure calculated across a series of public and proprietary mutagenicity databases. This information was pooled across all sources to identify structural classes that activate or deactivate aromatic amine mutagenicity. This structure activity knowledge, in combination with newly released primary aromatic amine data, was incorporated into Leadscope's expert rule-based and statistical-based (Q)SAR models where increased predictive performance was demonstrated. Copyright © 2016 Elsevier Inc. All rights reserved.
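The knowledge-sharing mechanism, pooling per-substructure mutagenicity counts rather than the structures themselves, can be sketched as follows; the substructure class names and counts are invented for illustration.

```python
# Each source reports only (mutagenic, total) counts per predefined
# substructure class, so proprietary structures are never released.
sources = [
    {"para-alkyl amine": (3, 40), "ortho-nitro amine": (30, 35)},  # public data
    {"para-alkyl amine": (2, 25), "ortho-nitro amine": (8, 10)},   # company A
    {"para-alkyl amine": (1, 15)},                                 # company B
]

pooled = {}
for src in sources:
    for cls, (mut, tot) in src.items():
        m, t = pooled.get(cls, (0, 0))
        pooled[cls] = (m + mut, t + tot)

for cls, (m, t) in pooled.items():
    # a simple majority rule stands in for the expert assessment
    label = "activating" if m / t > 0.5 else "deactivating"
    print(f"{cls}: {m}/{t} mutagenic -> {label}")
```

The pooled activating/deactivating labels are the kind of structure-activity knowledge that can then be encoded into rule-based and statistical (Q)SAR models.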

  20. Large-eddy simulations of a solid-rocket booster jet

    NASA Astrophysics Data System (ADS)

    Paoli, Roberto; Poubeau, Adele; Cariolle, Daniel

    2014-11-01

    Emissions from solid-rocket boosters are responsible for a severe decrease in ozone concentration in the rocket plume during the first hours after a launch. The main source of ozone depletion is hydrogen chloride, which is converted into chlorine in the high-temperature regions of the jet (afterburning). The objective of this study is to evaluate the active chlorine concentration in the plume of a solid-rocket booster using large-eddy simulations (LES). The gas is injected through the entire nozzle of the booster, and a local time-stepping method based on coupling multiple instances of a fluid solver is used to extend the computational domain up to 600 nozzle exit diameters. The methodology is validated for a non-reactive case by analyzing the flow characteristics of supersonic co-flowing underexpanded jets. Then, the chemistry of chlorine is studied offline using a complex chemistry solver and the LES data extracted from the mean trajectories of sample fluid particles. Finally, the online chemistry is analyzed by means of the multispecies version of the LES solver using a reduced chemistry scheme. The LES are able to capture the mixing of the exhaust with ambient air and the species concentrations, which is also useful for initializing atmospheric simulations on larger domains.
