Science.gov

Sample records for computational geometry approach

  1. A computational approach to continuum damping of Alfven waves in two and three-dimensional geometry

    SciTech Connect

    Koenies, Axel; Kleiber, Ralf

    2012-12-15

    While the usual way of calculating continuum damping of global Alfven modes is the introduction of a small artificial resistivity, we present a computational approach to the problem based on a suitable path of integration in the complex plane. The approach is implemented with a Riccati shooting method, and it is shown that it can be transferred to the Galerkin method used in three-dimensional ideal magneto-hydrodynamics (MHD) codes. The new approach turns out to be less expensive with respect to resolution and computation time than the usual one. We present an application to large aspect ratio tokamak and stellarator equilibria retaining only a few Fourier harmonics, and calculate eigenfunctions and continuum damping rates. These may serve as input for kinetic MHD hybrid models, making it possible to bypass the problem of singularities on the path of integration on the one hand and to account for continuum damping on the other.
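
    The contour idea can be illustrated on a model integral with a pole on the real axis, the same difficulty a continuum resonance poses. The sketch below (Python with NumPy; an illustration of the integration-path idea only, not the authors' Riccati or Galerkin implementation) compares the artificial-damping limit with a path deformed into the complex plane:

      import numpy as np

      # Model problem: I = integral_{-1}^{1} f(z)/(z - x0) dz with a pole at
      # x0 on the real axis, mimicking a shear-Alfven continuum resonance.
      x0 = 0.3
      f = lambda z: np.exp(-z**2)
      t = np.linspace(-1.0, 1.0, 200001)

      # (a) "Artificial resistivity" analogue: real path, pole shifted off axis.
      for eps in (1e-1, 1e-2, 1e-3):
          I_eps = np.trapz(f(t) / (t - x0 - 1j * eps), t)
          print(f"eps={eps:.0e}   I = {complex(I_eps):.6f}")

      # (b) Deformed path: dip below the real axis (endpoints fixed), passing
      #     the pole on the correct side; no regularization parameter at all.
      z = t - 0.2j * (1.0 - t**2)
      I_contour = np.trapz(f(z) / (z - x0), z)
      print(f"contour     I = {complex(I_contour):.6f}")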

  2. Computer-Aided Geometry Modeling

    NASA Technical Reports Server (NTRS)

    Shoosmith, J. N. (Compiler); Fulton, R. E. (Compiler)

    1984-01-01

    Techniques in computer-aided geometry modeling and their application are addressed. Mathematical modeling, solid geometry models, management of geometric data, development of geometry standards, and interactive and graphic procedures are discussed. The applications include aeronautical and aerospace structures design, fluid flow modeling, and gas turbine design.

  3. An Automated Approach to Very High Order Aeroacoustic Computations in Complex Geometries

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Goodrich, John W.

    2000-01-01

    Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high order in space and time methods on small stencils. But the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid aligned boundaries and to 2nd order for irregular boundaries.
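
    A minimal sketch of Cauchy-Kowalewski time advancement on the linear advection equation u_t = -c u_x (Python with NumPy; spectral derivatives on a periodic grid stand in for the paper's Hermite divided differences): each time derivative in the Taylor series is exchanged for a spatial one through the PDE, giving a single-step scheme of arbitrary temporal order.

      import numpy as np

      # Cauchy-Kowalewski stepping for u_t = -c u_x: replace time derivatives
      # by spatial ones, u(t+dt) = sum_k (dt^k / k!) (-c d/dx)^k u.
      c, K, N = 1.0, 15, 64                      # speed, temporal order, grid
      x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
      ik = 1j * np.fft.fftfreq(N, d=1.0 / N)     # spectral d/dx
      u = np.exp(np.sin(x))                      # smooth initial condition

      steps = 126
      dt = 2 * np.pi / steps                     # one full revolution
      for _ in range(steps):
          uh = np.fft.fft(u)
          term, unew = uh.copy(), uh.copy()
          for m in range(1, K + 1):
              term = term * (-c) * ik * dt / m   # next Taylor term
              unew = unew + term
          u = np.real(np.fft.ifft(unew))

      print("max error after one period:", np.abs(u - np.exp(np.sin(x))).max())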

  4. Computational synthetic geometry

    SciTech Connect

    Sturmfels, B.

    1988-01-01

    This book deals with methods for realizing abstract geometric objects in concrete vector spaces. It considers a large class of problems from convexity and discrete geometry, including constructing convex polytopes from simplicial complexes, vector geometries from incidence structures, and hyperplane arrangements from oriented matroids. It appears that algorithms for these constructions exist if and only if arbitrary polynomial equations are decidable with respect to the underlying field. Besides such complexity theorems, a variety of symbolic algorithms are discussed, and the methods are applied to obtain mathematical results on convex polytopes, projective configurations, and the combinatorics of Grassmann varieties.

  5. Geometry of discrete quantum computing

    NASA Astrophysics Data System (ADS)

    Hanson, Andrew J.; Ortiz, Gerardo; Sabry, Amr; Tai, Yu-Tsung

    2013-05-01

    Conventional quantum computing entails a geometry based on the description of an n-qubit state using $2^n$ infinite precision complex numbers denoting a vector in a Hilbert space. Such numbers are in general uncomputable using any real-world resources, and, if we have the idea of physical law as some kind of computational algorithm of the universe, we would be compelled to alter our descriptions of physics to be consistent with computable numbers. Our purpose here is to examine the geometric implications of using finite fields $\mathbf{F}_p$ and finite complexified fields $\mathbf{F}_{p^2}$ (based on primes $p$ congruent to 3 (mod 4)) as the basis for computations in a theory of discrete quantum computing, which would therefore become a computable theory. Because the states of a discrete n-qubit system are in principle enumerable, we are able to determine the proportions of entangled and unentangled states. In particular, we extend the Hopf fibration that defines the irreducible state space of conventional continuous n-qubit theories (which is the complex projective space $\mathbf{CP}^{2^n-1}$) to an analogous discrete geometry in which the Hopf circle for any n is found to be a discrete set of $p+1$ points. The tally of unit-length n-qubit states is given, and reduced via the generalized Hopf fibration to $\mathbf{DCP}^{2^n-1}$, the discrete analogue of the complex projective space, which has $p^{2^n-1}(p-1)\prod_{k=1}^{n-1}(p^{2^k}+1)$ irreducible states. Using a measure of entanglement, the purity, we explore the entanglement features of discrete quantum states and find that the n-qubit states based on the complexified field $\mathbf{F}_{p^2}$ have $p^n(p-1)^n$ unentangled states (the product of the tally for a single qubit) with purity 1, and they have $p^{n+1}(p-1)(p+1)^{n-1}$ maximally entangled states with purity zero.
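
    The tallies can be checked by brute force for a single qubit. A short enumeration (Python; p = 3, the smallest prime congruent to 3 mod 4, with F_{p^2} built as a + bi, i^2 = -1) counts the unit-length states and divides out the discrete Hopf circle of p + 1 points:

      from itertools import product

      p = 3  # prime with p % 4 == 3, so F_{p^2} = {a + b*i : i^2 = -1}

      def norm(a, b):                 # |x|^2 = x * conj(x) = a^2 + b^2 (mod p)
          return (a * a + b * b) % p

      # 1-qubit states (alpha, beta) in F_{p^2}^2 with |alpha|^2 + |beta|^2 = 1.
      unit = [(al, be)
              for al in product(range(p), repeat=2)
              for be in product(range(p), repeat=2)
              if (norm(*al) + norm(*be)) % p == 1]

      print(len(unit), "unit states; predicted p^3 - p =", p**3 - p)
      # Dividing out the discrete Hopf circle (p + 1 unit multiples per ray)
      # leaves the p(p-1) irreducible states of the discrete CP^1.
      print(len(unit) // (p + 1), "irreducible states; p(p-1) =", p * (p - 1))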

  6. A Whirlwind Tour of Computational Geometry.

    ERIC Educational Resources Information Center

    Graham, Ron; Yao, Frances

    1990-01-01

    Described is computational geometry, which uses concepts and results from classical geometry, topology, and combinatorics, as well as standard algorithmic techniques such as sorting and searching, graph manipulations, and linear programming. Also included are special techniques and paradigms. (KR)

  7. Computational Approaches to the Determination of the Molecular Geometry of Acrolein in its T_1(n,π*) State

    NASA Astrophysics Data System (ADS)

    McAnally, Michael O.; Hlavacek, Nikolaus C.; Drucker, Stephen

    2012-06-01

    The spectroscopically derived inertial constants for acrolein (propenal) in its T_1(n,π*) state were used to test predictions from a variety of computational methods. One focus was on multiconfigurational methods, such as CASSCF and CASPT2, that are applicable to excited states. We also examined excited-state methods that utilize single reference configurations, including EOM-EE-CCSD and TD-PBE0. Finally, we applied unrestricted ground-state techniques, such as UCCSD(T) and the more economical UPBE0 method, to the T_1(n,π*) excited state under the constraint of C_s symmetry. The unrestricted ground-state methods are applicable because at a planar geometry, the T_1(n,π*) state of acrolein is the lowest-energy state of its spin multiplicity. Each of the above methods was used with a triple zeta quality basis set to optimize the T_1(n,π*) geometry. This procedure resulted in the following sets of inertial constants (cm^-1) of acrolein in its T_1(n,π*) state:

      Method        A       B        C
      CASPT2(6,5)   1.667   0.1491   0.1368
      CASSCF(6,5)   1.667   0.1491   0.1369
      EOM-EE-CCSD   1.675   0.1507   0.1383
      UCCSD(T)^b    1.668   0.1480   0.1360
      UPBE0         1.699   0.1487   0.1367
      TD-PBE0       1.719   0.1493   0.1374
      Experiment^a  1.662   0.1485   0.1363

    The two multiconfigurational methods produce the same inertial constants, and those constants agree closely with experiment. However, the sets of computed bond lengths differ significantly for the two methods. In the CASSCF calculation, the lengthening of the C=O and C=C bonds and the shortening of the C--C bond are more pronounced than in CASPT2. O. S. Bokareva et al., Int. J. Quant. Chem. 108, 2719 (2008).
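
    The inertial constants follow from the principal moments of inertia of the optimized geometry. A small sketch (Python with NumPy; the coordinates are hypothetical placeholders, not the paper's optimized T_1 geometry) shows the conversion:

      import numpy as np

      # Rotational (inertial) constants from a geometry:
      # A, B, C (cm^-1) = h / (8 pi^2 c I_k) ~= 16.8576 / I_k, I in amu*Angstrom^2.
      # Coordinates below are illustrative placeholders, NOT acrolein's T_1 geometry.
      atoms = [  # (mass in amu, x, y, z in Angstrom)
          (15.9949, 0.000, 0.000, 1.210),   # O
          (12.0000, 0.000, 0.000, 0.000),   # C
          (12.0000, 1.320, 0.000, -0.760),  # C
          (12.0000, 1.380, 0.000, -2.100),  # C
      ]
      m = np.array([a[0] for a in atoms])
      r = np.array([a[1:] for a in atoms])
      r -= (m[:, None] * r).sum(0) / m.sum()          # shift to center of mass

      I = np.zeros((3, 3))                            # inertia tensor
      for mi, (x, y, z) in zip(m, r):
          I += mi * np.array([[y*y + z*z, -x*y, -x*z],
                              [-x*y, x*x + z*z, -y*z],
                              [-x*z, -y*z, x*x + y*y]])
      Ia, Ib, Ic = np.sort(np.linalg.eigvalsh(I))     # principal moments
      A, B, C = 16.8576 / np.array([Ia, Ib, Ic])      # cm^-1
      print(f"A={A:.3f}  B={B:.4f}  C={C:.4f} cm^-1")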

  8. A cell-centered Lagrangian finite volume approach for computing elasto-plastic response of solids in cylindrical axisymmetric geometries

    NASA Astrophysics Data System (ADS)

    Sambasivan, Shiv Kumar; Shashkov, Mikhail J.; Burton, Donald E.

    2013-03-01

    A finite volume cell-centered Lagrangian formulation is presented for solving large deformation problems in cylindrical axisymmetric geometries. Since solid materials can sustain significant shear deformation, evolution equations for stress and strain fields are solved in addition to mass, momentum and energy conservation laws. The total strain-rate realized in the material is split into an elastic and plastic response. The elastic and plastic components in turn are modeled using hypo-elastic theory. In accordance with the hypo-elastic model, a predictor-corrector algorithm is employed for evolving the deviatoric component of the stress tensor. A trial elastic deviatoric stress state is obtained by integrating a rate equation, cast in the form of an objective (Jaumann) derivative, based on Hooke's law. The dilatational response of the material is modeled using an equation of state of the Mie-Grüneisen form. The plastic deformation is accounted for via an iterative radial return algorithm constructed from the J2 von Mises yield condition. Several benchmark example problems with non-linear strain hardening and thermal softening yield models are presented. Extensive comparisons with representative Eulerian and Lagrangian hydrocodes in addition to analytical and experimental results are made to validate the current approach.
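
    The predictor-corrector step has a compact closed form when the hardening is linear. The sketch below (Python with NumPy) is a small-strain stand-in for the paper's hypo-elastic/Jaumann formulation: an elastic trial deviatoric stress followed by a radial return onto the J2 (von Mises) yield surface.

      import numpy as np

      def radial_return(s_old, de, G, sigma_y, H, ep_old):
          """One predictor-corrector update of the deviatoric stress.

          s_old: deviatoric stress (3x3); de: deviatoric strain increment (3x3)
          G: shear modulus; sigma_y: initial yield stress
          H: linear isotropic hardening modulus
          ep_old: accumulated equivalent plastic strain
          """
          s_trial = s_old + 2.0 * G * de                 # elastic (Hooke) predictor
          s_eq = np.sqrt(1.5 * np.tensordot(s_trial, s_trial))
          if s_eq <= sigma_y + H * ep_old:               # inside yield surface
              return s_trial, ep_old
          # Plastic corrector: return radially to the hardened yield surface.
          dep = (s_eq - sigma_y - H * ep_old) / (3.0 * G + H)
          ep_new = ep_old + dep
          s_new = s_trial * (sigma_y + H * ep_new) / s_eq
          return s_new, ep_new

      # Example: a pure shear increment beyond yield (illustrative numbers).
      G, sigma_y, H = 26.0e9, 300.0e6, 1.0e9
      de = np.zeros((3, 3)); de[0, 1] = de[1, 0] = 5e-3
      s, ep = radial_return(np.zeros((3, 3)), de, G, sigma_y, H, 0.0)
      print("eq. stress after return:",
            np.sqrt(1.5 * np.tensordot(s, s)) / 1e6, "MPa")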

  9. Computing Bisectors in a Dynamic Geometry Environment

    ERIC Educational Resources Information Center

    Botana, Francisco

    2013-01-01

    In this note, an approach combining dynamic geometry and automated deduction techniques is used to study the bisectors between points and curves. Usual teacher constructions for bisectors are discussed, showing that inherent limitations in dynamic geometry software impede their thorough study. We show that the interactive sketching of bisectors…
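
    The simplest point-curve bisector can be derived symbolically. A short sketch (Python with SymPy; an illustration of the underlying locus computation, not the authors' software) shows that the bisector of a point and a line is a parabola, a locus that dynamic geometry tools can otherwise only trace pointwise:

      import sympy as sp

      x, y = sp.symbols('x y', real=True)

      # Bisector of the point F = (0, 1) and the line y = -1: the locus of
      # points equidistant from both; squaring the distances avoids radicals.
      dist_point_sq = x**2 + (y - 1)**2   # |P - F|^2
      dist_line_sq = (y + 1)**2           # squared distance to y = -1

      bisector = sp.expand(dist_point_sq - dist_line_sq)
      print(sp.Eq(bisector, 0))           # x**2 - 4*y = 0, the parabola y = x^2/4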

  10. Quadric solids and computational geometry

    SciTech Connect

    Emery, J.D.

    1980-07-25

    As part of the CAD-CAM development project, this report discusses the mathematics underlying the program QUADRIC, which does computations on objects modeled as Boolean combinations of quadric half-spaces. Topics considered include projective space, quadric surfaces, polars, affine transformations, the construction of solids, shaded image, the inertia tensor, moments, volume, surface integrals, Monte Carlo integration, and stratified sampling. 1 figure.
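
    The report's combination of quadric half-spaces and Monte Carlo integration is easy to illustrate. The sketch below (Python with NumPy; plain rather than stratified sampling, and an example solid chosen here, not taken from the report) estimates the volume of a sphere with a cylindrical hole, each primitive a quadric inequality q(x) <= 0:

      import numpy as np

      rng = np.random.default_rng(1)

      # Quadric half-spaces q(x) <= 0: a unit sphere and an axial cylinder r = 0.5.
      sphere = lambda p: p[:, 0]**2 + p[:, 1]**2 + p[:, 2]**2 - 1.0
      cylinder = lambda p: p[:, 0]**2 + p[:, 1]**2 - 0.25

      # Boolean combination: sphere minus cylinder (drill a hole through the ball).
      inside = lambda p: (sphere(p) <= 0) & ~(cylinder(p) <= 0)

      n = 2_000_000
      pts = rng.uniform(-1.0, 1.0, size=(n, 3))   # bounding box [-1,1]^3, volume 8
      vol = 8.0 * inside(pts).mean()

      # Exact: sphere with coaxial hole of radius a has V = (4*pi/3)(1 - a^2)^1.5.
      exact = (4.0 * np.pi / 3.0) * 0.75**1.5
      print(f"Monte Carlo: {vol:.4f}   exact: {exact:.4f}")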

  11. Computational fluid dynamics using CATIA created geometry

    NASA Astrophysics Data System (ADS)

    Gengler, Jeanne E.

    1989-07-01

    A method has been developed to link the geometry definition residing on a CAD/CAM system with a computational fluid dynamics (CFD) tool needed to evaluate aerodynamic designs and requiring the memory capacity of a supercomputer. Requirements for surfaces suitable for CFD analysis are discussed. Techniques for developing surfaces and verifying their smoothness are compared, showing the capability of the CAD/CAM system. The utilization of a CAD/CAM system to create a computational mesh is explained, and the mesh interaction with the geometry and input file preparation for the CFD analysis is discussed.

  12. An approach for management of geometry data

    NASA Technical Reports Server (NTRS)

    Dube, R. P.; Herron, G. J.; Schweitzer, J. E.; Warkentine, E. R.

    1980-01-01

    The strategies for managing Integrated Programs for Aerospace Design (IPAD) computer-based geometry are described. The computer model of geometry is the basis for communication, manipulation, and analysis of shape information. IPAD's data base system makes this information available to all authorized departments in a company. A discussion of the data structures and algorithms required to support geometry in IPIP (IPAD's data base management system) is presented. Through the use of IPIP's data definition language, the structure of the geometry components is defined. The data manipulation language is the vehicle by which a user defines an instance of the geometry. The manipulation language also allows a user to edit, query, and manage the geometry. The selection of canonical forms is a very important part of the IPAD geometry. IPAD has a canonical form for each entity and provides transformations to alternate forms; in particular, IPAD will provide a transformation to the ANSI standard. The DBMS schemas required to support IPAD geometry are explained.
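
    IPIP's data definition language predates SQL, so the following is only a loose modern analogue (Python with sqlite3; the table names and columns are hypothetical, not IPIP's actual schema) of defining the structure of a geometry entity and then instantiating and querying it through a data manipulation language:

      import sqlite3

      # Hypothetical relational schema for a curve entity; a modern stand-in
      # for defining geometry structure through a DBMS data definition language.
      db = sqlite3.connect(":memory:")
      db.executescript("""
      CREATE TABLE curve (
          curve_id INTEGER PRIMARY KEY,
          form     TEXT NOT NULL,     -- canonical form, e.g. 'bspline'
          degree   INTEGER
      );
      CREATE TABLE control_point (
          curve_id INTEGER REFERENCES curve(curve_id),
          seq      INTEGER,           -- ordering along the curve
          x REAL, y REAL, z REAL,
          PRIMARY KEY (curve_id, seq)
      );
      """)
      # "Data manipulation": define an instance of the geometry, then query it.
      db.execute("INSERT INTO curve VALUES (1, 'bspline', 3)")
      db.executemany("INSERT INTO control_point VALUES (1, ?, ?, ?, 0.0)",
                     [(i, float(i), float(i * i)) for i in range(4)])
      print(db.execute("SELECT COUNT(*) FROM control_point").fetchone())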

  13. Representational geometry: integrating cognition, computation, and the brain.

    PubMed

    Kriegeskorte, Nikolaus; Kievit, Rogier A

    2013-08-01

    The cognitive concept of representation plays a key role in theories of brain information processing. However, linking neuronal activity to representational content and cognitive theory remains challenging. Recent studies have characterized the representational geometry of neural population codes by means of representational distance matrices, enabling researchers to compare representations across stages of processing and to test cognitive and computational theories. Representational geometry provides a useful intermediate level of description, capturing both the information represented in a neuronal population code and the format in which it is represented. We review recent insights gained with this approach in perception, memory, cognition, and action. Analyses of representational geometry can compare representations between models and the brain, and promise to explain brain computation as transformation of representational similarity structure.
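
    A minimal version of the core computation (Python with NumPy/SciPy; synthetic response patterns stand in for measured data) builds representational distance matrices from condition-by-channel patterns and compares two geometries by rank correlation:

      import numpy as np
      from scipy.spatial.distance import pdist, squareform
      from scipy.stats import spearmanr

      rng = np.random.default_rng(0)

      # Response patterns: n_conditions x n_channels (voxels/neurons), synthetic.
      brain = rng.normal(size=(12, 100))
      model = brain + 0.5 * rng.normal(size=brain.shape)  # a model's predictions

      # Representational distance matrix: 1 - Pearson r between patterns.
      rdm_brain = pdist(brain, metric="correlation")
      rdm_model = pdist(model, metric="correlation")

      # Compare the two geometries over the unique condition pairs.
      rho, _ = spearmanr(rdm_brain, rdm_model)
      print("RDM shape:", squareform(rdm_brain).shape, " Spearman rho:", round(rho, 3))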

  14. Representational geometry: integrating cognition, computation, and the brain

    PubMed Central

    Kriegeskorte, Nikolaus; Kievit, Rogier A.

    2013-01-01

    The cognitive concept of representation plays a key role in theories of brain information processing. However, linking neuronal activity to representational content and cognitive theory remains challenging. Recent studies have characterized the representational geometry of neural population codes by means of representational distance matrices, enabling researchers to compare representations across stages of processing and to test cognitive and computational theories. Representational geometry provides a useful intermediate level of description, capturing both the information represented in a neuronal population code and the format in which it is represented. We review recent insights gained with this approach in perception, memory, cognition, and action. Analyses of representational geometry can compare representations between models and the brain, and promise to explain brain computation as transformation of representational similarity structure. PMID:23876494

  15. A computer program for analyzing channel geometry

    USGS Publications Warehouse

    Regan, R.S.; Schaffranek, R.W.

    1985-01-01

    The Channel Geometry Analysis Program (CGAP) provides the capability to process, analyze, and format cross-sectional data for input to flow/transport simulation models or other computational programs. CGAP allows for a variety of cross-sectional data input formats through use of variable format specification. The program accepts data from various computer media and provides for modification of machine-stored parameter values. CGAP has been devised to provide a rapid and efficient means of computing and analyzing the physical properties of an open-channel reach defined by a sequence of cross sections. CGAP's 16 options provide a wide range of methods by which to analyze and depict a channel reach and its individual cross-sectional properties. The primary function of the program is to compute the area, width, wetted perimeter, and hydraulic radius of cross sections at successive increments of water surface elevation (stage) from data that consist of coordinate pairs of cross-channel distances and land surface or channel bottom elevations. Longitudinal rates-of-change of cross-sectional properties are also computed, as are the mean properties of a channel reach. Output products include tabular lists of cross-sectional area, channel width, wetted perimeter, hydraulic radius, average depth, and cross-sectional symmetry computed as functions of stage; plots of cross sections; plots of cross-sectional area and (or) channel width as functions of stage; tabular lists of cross-sectional area and channel width computed as functions of stage for subdivisions of a cross section; plots of cross sections in isometric projection; and plots of cross-sectional area at a fixed stage as a function of longitudinal distance along an open-channel reach. A Command Procedure Language program and Job Control Language procedure exist to facilitate program execution on the U.S. Geological Survey Prime and Amdahl computer systems, respectively. (Lantz-PTT)
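
    The program's primary computation is easy to sketch for a single cross section (Python with NumPy; linear interpolation between survey points is assumed, and the channel below is illustrative): area, top width, wetted perimeter, and hydraulic radius at successive stages from (distance, elevation) coordinate pairs.

      import numpy as np

      def section_properties(dist, elev, stage):
          """Area, top width, wetted perimeter, hydraulic radius at a stage."""
          area = width = perim = 0.0
          for (x1, z1), (x2, z2) in zip(zip(dist, elev), zip(dist[1:], elev[1:])):
              d1, d2 = stage - z1, stage - z2          # depths at segment ends
              if d1 <= 0.0 and d2 <= 0.0:
                  continue                             # segment entirely dry
              if d1 < 0.0 or d2 < 0.0:                 # partially wet: clip
                  xw = x1 + (x2 - x1) * d1 / (d1 - d2) # waterline crossing
                  if d1 < 0.0: x1, z1, d1 = xw, stage, 0.0
                  else:        x2, z2, d2 = xw, stage, 0.0
              width += x2 - x1
              area += 0.5 * (d1 + d2) * (x2 - x1)      # trapezoid of depth
              perim += np.hypot(x2 - x1, z2 - z1)      # wetted channel bottom
          return area, width, perim, (area / perim if perim > 0 else 0.0)

      # A simple trapezoidal channel: distances and bed elevations across it.
      dist = [0.0, 2.0, 4.0, 8.0, 10.0, 12.0]
      elev = [3.0, 1.0, 0.0, 0.0, 1.0, 3.0]
      for stage in (0.5, 1.0, 2.0):
          a, w, p, r = section_properties(dist, elev, stage)
          print(f"stage {stage:3.1f}: area={a:6.2f} width={w:5.2f} "
                f"perim={p:5.2f} R={r:4.2f}")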

  16. Computational algebraic geometry of epidemic models

    NASA Astrophysics Data System (ADS)

    Rodríguez Vega, Martín.

    2014-06-01

    Computational Algebraic Geometry is applied to the analysis of various epidemic models for Schistosomiasis and Dengue, both without control measures and with control measures applied. The models were analyzed using the mathematical software Maple. Explicitly, the analysis is performed using Groebner bases, Hilbert dimension, and Hilbert polynomials. These computational tools are included automatically in Maple. Each of these models is represented by a system of ordinary differential equations, and for each model the basic reproductive number (R0) is calculated. The effects of the control measures are observed through the changes in the algebraic structure of R0, the changes in the Groebner bases, the changes in Hilbert dimension, and the changes in the Hilbert polynomials. It is hoped that the results obtained here prove useful for designing control measures against the epidemic diseases described. As future research, the use of algebraic epidemiology to analyze models for airborne and waterborne diseases is proposed.
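
    The flavor of the approach can be reproduced on a toy model (Python with SymPy in place of Maple; a normalized SIS model with vital dynamics, not the paper's schistosomiasis or dengue systems): the equilibrium equations generate an ideal whose Groebner basis, and whose solutions, expose the role of R0 = beta/(gamma + mu).

      import sympy as sp

      S, I = sp.symbols('S I')
      beta, gamma, mu = sp.symbols('beta gamma mu', positive=True)

      # Toy SIS model with vital dynamics, population normalized so S + I = 1:
      #   S' = mu - beta*S*I - mu*S + gamma*I
      #   I' = beta*S*I - (gamma + mu)*I
      f = [mu - beta*S*I - mu*S + gamma*I,
           beta*S*I - (gamma + mu)*I]

      # The equilibrium variety, through a lexicographic Groebner basis...
      gb = sp.groebner(f, I, S, order='lex')
      print(gb.exprs)

      # ...and solved directly: the disease-free and endemic equilibria; the
      # endemic branch is physical when R0 = beta/(gamma + mu) > 1.
      for sol in sp.solve(f, [S, I], dict=True):
          print({k: sp.simplify(v) for k, v in sol.items()})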

  17. Extraction of human stomach using computational geometry

    NASA Astrophysics Data System (ADS)

    Aisaka, Kazuo; Arai, Kiyoshi; Tsutsui, Kumiko; Hashizume, Akihide

    1991-06-01

    This paper presents a method for extracting the profile of the stomach by computational geometry. The stomach is difficult to recognize from an X-ray because of its elasticity. Global information of the stomach shape is required for recognition. The method has three steps. In the first step, the edge is enhanced, and then edge pieces are found as candidates for the border. Because the resulting border is almost always incomplete, a method for connecting the pieces is required. The second step uses computational geometry to create the global structure from the edge pieces. A Delaunay graph is drawn from the end points of the pieces. This enables us to decide which pieces are most likely to connect. The third step uses the shape of a stomach to find the best sequence of pieces. The knowledge is described in simple LISP functions. Because a Delaunay graph is planar, we can reduce the number of candidate pieces while searching for the most likely sequence. We applied this method to seven stomach pictures taken by the double contrast method and found the greater curvature in six cases. Enhancing the shape knowledge will increase the number of recognizable parts.
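
    The second step reduces to a standard construction. A sketch (Python with SciPy; random synthetic end points in place of detected edge pieces) builds the Delaunay graph of the end points and extracts its edges as the candidate connections; planarity keeps this candidate set small compared with all pairs:

      import numpy as np
      from scipy.spatial import Delaunay

      rng = np.random.default_rng(3)

      # End points of detected edge pieces (synthetic stand-ins for contour gaps).
      endpoints = rng.uniform(0, 100, size=(12, 2))

      tri = Delaunay(endpoints)

      # Collect the unique Delaunay edges: candidate piece-to-piece connections.
      edges = set()
      for simplex in tri.simplices:            # each simplex is a triangle (i, j, k)
          for a in range(3):
              i, j = sorted((simplex[a], simplex[(a + 1) % 3]))
              edges.add((i, j))

      print(f"{len(endpoints)} endpoints -> {len(edges)} candidate connections "
            f"(vs {len(endpoints) * (len(endpoints) - 1) // 2} unrestricted pairs)")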

  18. An Alternative Approach to Logo-Based Geometry

    ERIC Educational Resources Information Center

    Durmus, Soner; Karakirik, Erol

    2005-01-01

    Geometry is an important branch of mathematics. Geometry curriculum can be enriched by using different Technologies such as graphing calculators and computers. Logo-based different software packages aim to improve conceptual understanding in geometry. The goals of this paper are i) to present theoretical foundations of any computer software…

  19. An Alternative Approach to Logo-Based Geometry

    ERIC Educational Resources Information Center

    Karakirik, Erol; Durmus, Soner

    2005-01-01

    Geometry is an important branch of mathematics. Geometry curriculum can be enriched by using different Technologies such as graphing calculators and computers. Logo-based different software packages aim to improve conceptual understanding in geometry. The goals of this paper are i) to present theoretical foundations of any computer software…

  20. Teaching Geometry: An Experiential and Artistic Approach.

    ERIC Educational Resources Information Center

    Ogletree, Earl J.

    The view that geometry should be taught at every grade level is promoted. Primary and elementary school children are thought to rarely have any direct experience with geometry, except on an incidental basis. Children are supposed to be able to learn geometry rather easily, so long as the method and content are adapted to their development and…

  1. Parallel computation of geometry control in adaptive truss structures

    NASA Technical Reports Server (NTRS)

    Ramesh, A. V.; Utku, S.; Wada, B. K.

    1992-01-01

    The fast computation of geometry control in adaptive truss structures involves two distinct parts: the efficient integration of the inverse kinematic differential equations that govern the geometry control, and the fast computation of the Jacobian, which appears on the right-hand side of the inverse kinematic equations. This paper presents an efficient parallel implementation of the Jacobian computation on an MIMD machine. Large speedup from the parallel implementation is obtained, which reduces the Jacobian computation to an O(M^2/n) procedure on an n-processor machine, where M is the number of members in the adaptive truss. The parallel algorithm given here is a good candidate for on-line geometry control of adaptive structures using attached processors.

  2. Measurement of Fracture Geometry for Accurate Computation of Hydraulic Conductivity

    NASA Astrophysics Data System (ADS)

    Chae, B.; Ichikawa, Y.; Kim, Y.

    2003-12-01

    Fluid flow in rock mass is controlled by the geometry of fractures, which is mainly characterized by roughness, aperture, and orientation. Fracture roughness and aperture were observed with a new confocal laser scanning microscope (CLSM; Olympus OLS1100). The wavelength of the laser is 488 nm, and the laser scanning is managed by a light polarization method using two galvano-meter scanner mirrors. The system improves resolution in the light axis (namely z) direction because of the confocal optics. The sampling is managed at a spacing of 2.5 μm along the x and y directions. The highest measurement resolution in the z direction is 0.05 μm, which is more accurate than other methods. For the roughness measurements, core specimens of coarse and fine grained granites were provided. Measurements were performed along three scan lines on each fracture surface. The measured data were represented as 2-D and 3-D digital images showing detailed features of roughness. Spectral analyses by the fast Fourier transform (FFT) were performed to characterize the roughness data quantitatively and to identify the influential frequencies of roughness. The FFT results showed that components of low frequencies were dominant in the fracture roughness. This study also verifies that spectral analysis is a good approach to understanding the complicated characteristics of fracture roughness. For the aperture measurements, digital images of the aperture were acquired under five stages of applied uniaxial normal stress. This method can characterize the response of the aperture directly using the same specimen. Results of the measurements show that reduction values of aperture are different at each part due to the rough geometry of the fracture walls. Laboratory permeability tests were also conducted to evaluate changes of hydraulic conductivities related to aperture variation due to different stress levels. The results showed non-uniform reduction of hydraulic conductivity under increase of the normal stress and different values of
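
    The spectral step is brief in practice. The sketch below (Python with NumPy; a synthetic red-spectrum profile stands in for a CLSM scan line) computes an FFT power spectrum and the share of power carried by the low frequencies:

      import numpy as np

      # FFT characterization of a fracture roughness profile; a synthetic
      # profile with a red (low-frequency-heavy) spectrum is used here.
      n, dx = 4096, 2.5e-6                          # samples, 2.5 um spacing
      rng = np.random.default_rng(7)
      freq = np.fft.rfftfreq(n, d=dx)               # spatial frequencies (1/m)
      amp = np.zeros_like(freq)
      amp[1:] = freq[1:] ** -1.2                    # amplitude falls with frequency
      phase = rng.uniform(0, 2 * np.pi, len(freq))
      profile = np.fft.irfft(amp * np.exp(1j * phase), n)

      power = np.abs(np.fft.rfft(profile)) ** 2
      low, high = power[1:len(power) // 8].sum(), power[len(power) // 8:].sum()
      print(f"low-frequency share of power: {low / (low + high):.3f}")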

  3. Grid generation and inviscid flow computation about aircraft geometries

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.

    1989-01-01

    Grid generation and Euler flow about fighter aircraft are described. A fighter aircraft geometry is specified by an area ruled fuselage with an internal duct, cranked delta wing or strake/wing combinations, canard and/or horizontal tail surfaces, and vertical tail surfaces. The initial step before grid generation and flow computation is the determination of a suitable grid topology. The external grid topology that has been applied is called a dual-block topology which is a patched C^1-continuous multiple-block system where inner blocks cover the highly-swept part of a cranked wing or strake, rearward inner-part of the wing, and tail components. Outer-blocks cover the remainder of the fuselage, outer-part of the wing, canards and extend to the far field boundaries. The grid generation is based on transfinite interpolation with Lagrangian blending functions. This procedure has been applied to the Langley experimental fighter configuration and a modified F-18 configuration. Supersonic flow between Mach 1.3 and 2.5 and angles of attack between 0 degrees and 10 degrees have been computed with associated Euler solvers based on the finite-volume approach. When coupling geometric details such as boundary layer diverter regions, duct regions with inlets and outlets, or slots with the general external grid, imposing C^1 continuity can be extremely tedious. The approach taken here is to patch blocks together at common interfaces where there is no grid continuity, but enforce conservation in the finite-volume solution. The key to this technique is how to obtain the information required for a conservative interface. The Ramshaw technique which automates the computation of proportional areas of two overlapping grids on a planar surface and is suitable for coding was used. Researchers generated internal duct grids for the Langley experimental fighter configuration independent of the external grid topology, with a conservative interface at the inlet and outlet.
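
    Transfinite interpolation with linear Lagrangian blending has a compact closed form for one block. A sketch (Python with NumPy; an illustrative block with a bumped lower boundary, not one of the fighter configurations) fills the interior grid from the four boundary curves:

      import numpy as np

      def tfi(bottom, top, left, right):
          """2D transfinite interpolation with linear Lagrangian blending.

          bottom/top: arrays (ni, 2), boundary curves along xi at eta = 0, 1
          left/right: arrays (nj, 2), boundary curves along eta at xi = 0, 1
          """
          ni, nj = len(bottom), len(left)
          xi = np.linspace(0.0, 1.0, ni)[:, None, None]   # blending coordinates
          eta = np.linspace(0.0, 1.0, nj)[None, :, None]
          return ((1 - eta) * bottom[:, None, :] + eta * top[:, None, :]
                  + (1 - xi) * left[None, :, :] + xi * right[None, :, :]
                  - ((1 - xi) * (1 - eta) * bottom[0] + xi * eta * top[-1]
                     + xi * (1 - eta) * bottom[-1] + (1 - xi) * eta * top[0]))

      # Example block: straight sides and top, curved lower boundary (a bump).
      s = np.linspace(0.0, 1.0, 21)
      bottom = np.stack([s, 0.1 * np.sin(np.pi * s)], axis=1)
      top = np.stack([s, np.ones_like(s)], axis=1)
      t = np.linspace(0.0, 1.0, 11)
      left = np.stack([np.zeros_like(t), t], axis=1)
      right = np.stack([np.ones_like(t), t], axis=1)

      grid = tfi(bottom, top, left, right)            # shape (ni, nj, 2)
      print(grid.shape, grid[10, 0], grid[10, -1])    # boundary curves recovered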

  4. Techniques to derive geometries for image-based Eulerian computations

    PubMed Central

    Dillard, Seth; Buchholz, James; Vigmostad, Sarah; Kim, Hyunggun; Udaykumar, H.S.

    2014-01-01

    Purpose: The performance of three frequently used level set-based segmentation methods is examined for the purpose of defining features and boundary conditions for image-based Eulerian fluid and solid mechanics models. The focus of the evaluation is to identify an approach that produces the best geometric representation from a computational fluid/solid modeling point of view. In particular, extraction of geometries from a wide variety of imaging modalities and noise intensities, to supply to an immersed boundary approach, is targeted. Design/methodology/approach: Two- and three-dimensional images, acquired from optical, X-ray CT, and ultrasound imaging modalities, are segmented with active contours, k-means, and adaptive clustering methods. Segmentation contours are converted to level sets and smoothed as necessary for use in fluid/solid simulations. Results produced by the three approaches are compared visually and with contrast ratio, signal-to-noise ratio, and contrast-to-noise ratio measures. Findings: While the active contours method possesses built-in smoothing and regularization and produces continuous contours, the clustering methods (k-means and adaptive clustering) produce discrete (pixelated) contours that require smoothing using speckle-reducing anisotropic diffusion (SRAD). Thus, for images with high contrast and low to moderate noise, active contours are generally preferable. However, adaptive clustering is found to be far superior to the other two methods for images possessing high levels of noise and global intensity variations, due to its more sophisticated use of local pixel/voxel intensity statistics. Originality/value: It is often difficult to know a priori which segmentation will perform best for a given image type, particularly when geometric modeling is the ultimate goal. This work offers insight to the algorithm selection process, as well as outlining a practical framework for generating useful geometric surfaces in an Eulerian setting. PMID

  5. Using Computer-Assisted Multiple Representations in Learning Geometry Proofs

    ERIC Educational Resources Information Center

    Wong, Wing-Kwong; Yin, Sheng-Kai; Yang, Hsi-Hsun; Cheng, Ying-Hao

    2011-01-01

    Geometry theorem proving involves skills that are difficult to learn. Instead of working with abstract and complicated representations, students might start with concrete, graphical representations. A proof tree is a graphical representation of a formal proof, with each node representing a proposition or given conditions. A computer-assisted…

  6. Investigating the geometry of pig airways using computed tomography

    NASA Astrophysics Data System (ADS)

    Mansy, Hansen A.; Azad, Md Khurshidul; McMurray, Brandon; Henry, Brian; Royston, Thomas J.; Sandler, Richard H.

    2015-03-01

    Numerical modeling of sound propagation in the airways requires accurate knowledge of the airway geometry. These models are often validated using human and animal experiments. While many studies documented the geometric details of the human airways, information about the geometry of pig airways is scarcer. In addition, the morphology of animal airways can be significantly different from that of humans. The objective of this study is to measure the airway diameter, length and bifurcation angles in domestic pigs using computed tomography. After imaging the lungs of 3 pigs, segmentation software tools were used to extract the geometry of the airway lumen. The airway dimensions were then measured from the resulting 3D models for the first 10 airway generations. Results showed that the size and morphology of the airways of different animals were similar. The measured airway dimensions were compared with those of the human airways. While the trachea diameter was found to be comparable to that of the adult human, the diameter, length and branching angles of other airways were noticeably different from those of humans. For example, pigs consistently had an early airway branching from the trachea that feeds the superior (top) right lung lobe proximal to the carina. This branch is absent in the human airways. These results suggested that the human geometry may not be a good approximation of the pig airways and may increase errors when human airway geometric values are used in computational models of the pig chest.

  7. Computational vaccinology: quantitative approaches.

    PubMed

    Flower, Darren R; McSparron, Helen; Blythe, Martin J; Zygouri, Christianna; Taylor, Debra; Guan, Pingping; Wan, Shouzhan; Coveney, Peter V; Walshe, Valerie; Borrow, Persephone; Doytchinova, Irini A

    2003-01-01

    The immune system is hierarchical and has many levels, exhibiting much emergent behaviour. However, at its heart are molecular recognition events that are indistinguishable from other types of biomacromolecular interaction. These can be addressed well by quantitative experimental and theoretical biophysical techniques, and particularly by methods from drug design. We review here our approach to computational immunovaccinology. In particular, we describe the JenPep database and two new techniques for T cell epitope prediction. One is based on quantitative structure-activity relationships (a 3D-QSAR method based on CoMSIA and another 2D method based on the Free-Wilson approach) and the other on atomistic molecular dynamics simulations using high performance computing. JenPep (http://www.jenner.ac.uk/JenPep) is a relational database system supporting quantitative data on peptide binding to major histocompatibility complexes, TAP transporters, TCR-pMHC complexes, and an annotated list of B cell and T cell epitopes. Our 2D-QSAR method factors the contribution to peptide binding from individual amino acids as well as 1-2 and 1-3 residue interactions. In the 3D-QSAR approach, the influence of five physicochemical properties (volume, electrostatic potential, hydrophobicity, hydrogen-bond donor and acceptor abilities) on peptide affinity were considered. Both methods are exemplified through their application to the well-studied problem of peptide binding to the human class I MHC molecule HLA-A*0201. PMID:14712934

  8. Computer aided design and analysis of gear tooth geometry

    NASA Technical Reports Server (NTRS)

    Chang, S. H.; Huston, R. L.

    1987-01-01

    A simulation method for gear hobbing and shaping of straight and spiral bevel gears is presented. The method is based upon an enveloping theory for gear tooth profile generation. The procedure is applicable in the computer aided design of standard and nonstandard tooth forms. An inverse procedure for finding a conjugate gear tooth profile is presented for arbitrary cutter geometry. The kinematic relations for the tooth surfaces of straight and spiral bevel gears are proposed. The tooth surface equations for these gears are formulated in a manner suitable for their automated numerical development and solution.

  9. Ionization coefficient approach to modeling breakdown in nonuniform geometries.

    SciTech Connect

    Warne, Larry Kevin; Jorgenson, Roy Eberhardt; Nicolaysen, Scott D.

    2003-11-01

    This report summarizes the work on breakdown modeling in nonuniform geometries by the ionization coefficient approach. Included are: (1) fits to primary and secondary ionization coefficients used in the modeling; (2) analytical test cases for sphere-to-sphere, wire-to-wire, corner, coaxial, and rod-to-plane geometries; (3) a compilation of experimental data with source references; and (4) comparisons between code results, test case results, and experimental data. A simple criterion is proposed to differentiate between corona and spark. The effect of a dielectric surface on avalanche growth is examined by means of Monte Carlo simulations. The presence of a clean dry surface does not appear to enhance growth.

  10. Representing Range Compensators with Computational Geometry in TOPAS

    SciTech Connect

    Iandola, Forrest N.; /Illinois U., Urbana /SLAC

    2012-09-07

    In a proton therapy beamline, the range compensator modulates the beam energy, which subsequently controls the depth at which protons deposit energy. In this paper, we introduce two computational representations of the range compensator. One of our compensator representations, which we refer to as a subtraction solid-based range compensator, precisely represents the compensator. Our other representation, the 3D hexagon-based range compensator, closely approximates the compensator geometry. We have implemented both of these compensator models in a proton therapy Monte Carlo simulation called TOPAS (Tool for Particle Simulation). In the future, we will present a detailed study of the accuracy and runtime performance trade-offs between our two range compensator representations.

  11. Computational approaches to computational aero-acoustics

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C.

    1996-01-01

    The various techniques by which the goal of computational aeroacoustics (the calculation and noise prediction of a fluctuating fluid flow) may be achieved are reviewed. The governing equations for compressible fluid flow are presented. The direct numerical simulation approach is shown to be computationally intensive for high Reynolds number viscous flows. Therefore, other approaches, such as the acoustic analogy, vortex models and various perturbation techniques that aim to break the analysis into a viscous part and an acoustic part are presented. The choice of the approach is shown to be problem dependent.

  12. Computational approaches to vision

    NASA Technical Reports Server (NTRS)

    Barrow, H. G.; Tenenbaum, J. M.

    1986-01-01

    Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.

  13. SU-E-I-12: Flexible Geometry Computed Tomography

    SciTech Connect

    Shaw, R

    2015-06-15

    Purpose: The concept removes the mechanical connection between the radiation source and the detector. This design allows the trajectory and orientation of the radiation source/detector to be customized to the object that is being imaged. This is in contrast to the formulaic rotation-translation image acquisition of conventional computed tomography (CT). Background/significance: CT devices that image a full range of anatomy, patient populations, and imaging procedures are large. The root cause of the expanding size of comprehensive CT is the commitment to a helical geometry that is hardwired into the image reconstruction. FGCT extends the application of alternative reconstruction techniques, i.e. tomosynthesis, by separating the two main components, radiation source and detector, and allowing six degrees of freedom of motion for the radiation source, detector, or both. The image acquisition geometry is then tailored to how the patient/object is positioned. This provides greater flexibility in the position and location at which the patient/object is imaged. Additionally, removing the need for a rotating gantry reduces the footprint, so that CT is more mobile and can be moved to where the patient/object is, instead of the other way around. Methods: As proof-of-principle, a reconstruction algorithm is designed to produce FGCT images. Using simulated detector data, voxels intersecting a line drawn between the radiation source and an individual detector are traced and modified using the detector signal. The detector signal is modified to compensate for changes in the source to detector distance. Adjacent voxels are modified in proportion to the detector signal, providing a simple image filter. Results: Image quality from the proposed FGCT reconstruction technique is proving to be a challenge, producing hardly recognizable images from limited projection angles. Conclusion: Preliminary assessment of the reconstruction technique demonstrates the inevitable

  14. Geometry of a Quantized Spacetime: The Quantum Potential Approach

    NASA Astrophysics Data System (ADS)

    Mirza, Babur M.

    2014-03-01

    Quantum dynamics in a curved spacetime can be studied using a modified Lagrangian approach directly in terms of the spacetime variables [Mirza, B.M., Quantum Dynamics in Black Hole Spacetimes, IC-MSQUARE 2012]. Here we investigate the converse problem of determining the nature of the background spacetime when quantum dynamics of a test particle is known. We employ the quantum potential formalism here to obtain the modifications introduced by the quantum effects to the background spacetime. This leads to a novel geometry for the spacetime in which a test particle modifies the spacetime via interaction through the quantum potential. We present here the case of a Gaussian wave packet, and a localized quantum soliton, representing the test particle, and determine the corresponding geometries that emerge.
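
    The central object is the quantum potential Q = -(hbar^2/2m)(nabla^2 R)/R of the test particle's amplitude R. A short symbolic computation (Python with SymPy; one spatial dimension with hbar = m = 1, an illustration rather than the paper's derivation) evaluates it for the Gaussian wave packet case mentioned in the abstract:

      import sympy as sp

      x, s = sp.symbols('x sigma', positive=True)

      # Amplitude R of a 1D Gaussian wave packet (hbar = m = 1); the quantum
      # potential Q = -(1/2) R''/R is the term the quantum-potential formalism
      # adds to the classical Hamilton-Jacobi equation.
      R = sp.exp(-x**2 / (4 * s**2))
      Q = sp.simplify(-sp.Rational(1, 2) * sp.diff(R, x, 2) / R)
      print(Q)   # (2*sigma**2 - x**2)/(8*sigma**4), up to sympy's ordering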

  15. Computer code to interchange CDS and wave-drag geometry formats

    NASA Technical Reports Server (NTRS)

    Johnson, V. S.; Turnock, D. L.

    1986-01-01

    A computer program has been developed on the PRIME minicomputer to provide an interface for the passage of aircraft configuration geometry data between the Rockwell Configuration Development System (CDS) and a wireframe geometry format used by aerodynamic design and analysis codes. The interface program allows aircraft geometry which has been developed in CDS to be directly converted to the wireframe geometry format for analysis. Geometry which has been modified in the analysis codes can be transformed back to a CDS geometry file and examined for physical viability. Previously created wireframe geometry files may also be converted into CDS geometry files. The program provides a useful link between a geometry creation and manipulation code and analysis codes by providing rapid and accurate geometry conversion.

  16. Using 3D Computer Graphics Multimedia to Motivate Preservice Teachers' Learning of Geometry and Pedagogy

    ERIC Educational Resources Information Center

    Goodson-Espy, Tracy; Lynch-Davis, Kathleen; Schram, Pamela; Quickenton, Art

    2010-01-01

    This paper describes the genesis and purpose of our geometry methods course, focusing on a geometry-teaching technology we created using the NVIDIA® Chameleon demonstration. This article presents examples from a sequence of lessons centered about a 3D computer graphics demonstration of the chameleon and its geometry. In addition, we present data…

  17. Studies in computational geometry motivated by mesh generation

    SciTech Connect

    Smith, W.D.

    1989-01-01

    This thesis sprawls over most of discrete and computational geometry. There are four loose bodies of theory developed. (1) A quantitative and algorithmic theory of crossing number and crossing-free line segment graphs in the plane. As five applications of this theory: the author disproves two long-standing conjectures on the crossing number of the complete and complete bipartite graphs, he presents the first exponential algorithm for planar minimum Steiner tree, and the first subexponential algorithms for planar traveling salesman tour and optimum triangulation, and he presents an algorithm for generating all non-isomorphic V-vertex planar graphs, in O(V^3) time per graph, using O(V) total workspace. (2) Mesh generation, and the triangulation of polytopes: He has the strongest bounds on the number of d-simplices required to triangulate the d-cube, and new triangulation methods in the plane. A quantitative, qualitative (and practical) theory of finite element mesh quality suggests a new, simple strategy for generating good meshes. (3) The theory of geometrical graphs on N point sites in d-space. This subsumes many new results in: geometrical probability, sphere packing, and extremal configurations. An array of new multidimensional search data structures are used to devise fast algorithms for constructing many geometrical graphs. (4) Useful new results concerning the mensuration and structure of d-polytopes. In particular he extensively generalizes the famous formula of Heron of Alexandria (75 AD) for the area of a triangle, and he presents the first linear time congruence algorithm for 3-dimensional polyhedra. He closes with the largest bibliography of the field, containing over 3000 references.

  18. A Parametric Geometry Computational Fluid Dynamics (CFD) Study Utilizing Design of Experiments (DOE)

    NASA Technical Reports Server (NTRS)

    Rhew, Ray D.; Parker, Peter A.

    2007-01-01

    Design of Experiments (DOE) techniques were applied to the Launch Abort System (LAS) of the NASA Crew Exploration Vehicle (CEV) parametric geometry Computational Fluid Dynamics (CFD) study to efficiently identify and rank the primary contributors to the integrated drag over the vehicle's ascent trajectory. Typical approaches to these types of activities involve developing all possible combinations of geometries changing one variable at a time, analyzing them with CFD, and predicting the main effects on an aerodynamic parameter, which in this application is integrated drag. The original plan for the LAS study team was to generate and analyze more than 1000 geometry configurations to study 7 geometric parameters. By utilizing DOE techniques the number of geometries was strategically reduced to 84. In addition, critical information on interaction effects among the geometric factors was identified that would not have been possible with the traditional technique. Therefore, the study was performed in less time and provided more information on the geometric main effects and interactions impacting drag generated by the LAS. This paper discusses the methods utilized to develop the experimental design, execution, and data analysis.
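
    The run-reduction idea is easy to demonstrate at two levels per factor (the study's actual 84-case design is not reproduced here). The sketch below (Python; a standard 2^(7-3) fractional factorial with generators E=ABC, F=BCD, G=ACD and a hypothetical drag response) screens 7 factors in 16 runs instead of 128:

      from itertools import product

      # 2^(7-3) fractional factorial: 7 two-level factors in 16 runs (vs 128).
      runs = []
      for a, b, c, d in product((-1, 1), repeat=4):  # base factors A, B, C, D
          e, f, g = a * b * c, b * c * d, a * c * d  # E=ABC, F=BCD, G=ACD
          runs.append((a, b, c, d, e, f, g))

      # Hypothetical drag response: only factors B and E actually matter.
      drag = [10.0 + 2.0 * r[1] - 1.5 * r[4] for r in runs]

      # Main effect of each factor: mean(high) - mean(low), via a contrast.
      for name, j in zip("ABCDEFG", range(7)):
          effect = sum(r[j] * y for r, y in zip(runs, drag)) / (len(runs) / 2)
          print(name, round(effect, 3))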

  19. A Geometry Based Infra-Structure for Computational Analysis and Design

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1998-01-01

    This is particularly onerous for modern CAD systems based on solid modeling. The part was a proper solid and in the translation to IGES has lost this important characteristic. STEP is another standard for CAD data that exists and supports the concept of a solid. The problem with STEP is that a solid modeling geometry kernel is required to query and manipulate the data within this type of file. (2) 'Good' Geometry. A bottleneck in getting results from a solver is the construction of proper geometry to be fed to the grid generator. With 'good' geometry a grid can be constructed in tens of minutes (even with a complex configuration) using unstructured techniques. Adroit multi-block methods are not far behind. This means that a million node steady-state solution can be computed on the order of hours (using current high performance computers) starting from this 'good' geometry. Unfortunately, the geometry usually transmitted from the CAD system is not 'good' in the grid generator sense. The grid generator needs smooth closed solid geometry. It can take a week (or more) of interaction with the CAD output (sometimes by hand) before the process can begin. (3) One-way Communication. All information travels on from one phase to the next. This makes procedures like node adaptation difficult when attempting to add or move nodes that sit on bounding surfaces (when the actual surface data has been lost after the grid generation phase). Until this process can be automated, more complex problems such as multi-disciplinary analysis or using the above procedure for design become prohibitive. There is also no way to easily deal with this system in a modular manner. One can only replace the grid generator, for example, if the software reads and writes the same files. Instead of the serial approach to analysis as described above, CAPRI takes a geometry centric approach. This makes the actual geometry (not a discretized version) accessible to all phases of the

  20. Evaluation of a cone beam computed tomography geometry for image guided small animal irradiation.

    PubMed

    Yang, Yidong; Armour, Michael; Wang, Ken Kang-Hsin; Gandhi, Nishant; Iordachita, Iulian; Siewerdsen, Jeffrey; Wong, John

    2015-07-01

    The conventional imaging geometry for small animal cone beam computed tomography (CBCT) is that a detector panel rotates around the head-to-tail axis of an imaged animal ('tubular' geometry). Another unusual but possible imaging geometry is that the detector panel rotates around the anterior-to-posterior axis of the animal ('pancake' geometry). The small animal radiation research platform developed at Johns Hopkins University employs the pancake geometry, where a prone-positioned animal is rotated horizontally between an x-ray source and detector panel. This study assesses the CBCT image quality in the pancake geometry and investigates potential methods for improvement. We compared CBCT images acquired in the pancake geometry with those acquired in the tubular geometry when the phantom/animal was placed upright, simulating the conventional CBCT geometry. Results showed signal-to-noise and contrast-to-noise ratios in the pancake geometry were reduced in comparison to the tubular geometry at the same dose level. But the overall spatial resolution within the transverse plane of the imaged cylinder/animal was better in the pancake geometry. A modest, up to two-fold, increase in exposure in the pancake geometry can improve image quality to a level close to the tubular geometry. Image quality can also be improved by inclining the animal, which reduces streak artifacts caused by bony structures. The major factor resulting in the inferior image quality in the pancake geometry is the elevated beam attenuation along the long axis of the phantom/animal and the consequently increased scatter-to-primary ratio in that orientation. Notwithstanding, the image quality in the pancake-geometry CBCT is adequate to support image guided animal positioning, while providing unique advantages of non-coplanar and multiple mice irradiation. This study also provides useful knowledge about the image quality in the two very different imaging geometries, i.e. pancake and tubular geometry, respectively.

  1. Evaluation of a cone beam computed tomography geometry for image guided small animal irradiation

    NASA Astrophysics Data System (ADS)

    Yang, Yidong; Armour, Michael; Kang-Hsin Wang, Ken; Gandhi, Nishant; Iordachita, Iulian; Siewerdsen, Jeffrey; Wong, John

    2015-07-01

    The conventional imaging geometry for small animal cone beam computed tomography (CBCT) is that a detector panel rotates around the head-to-tail axis of an imaged animal (‘tubular’ geometry). Another unusual but possible imaging geometry is that the detector panel rotates around the anterior-to-posterior axis of the animal (‘pancake’ geometry). The small animal radiation research platform developed at Johns Hopkins University employs the pancake geometry, where a prone-positioned animal is rotated horizontally between an x-ray source and detector panel. This study assesses the CBCT image quality in the pancake geometry and investigates potential methods for improvement. We compared CBCT images acquired in the pancake geometry with those acquired in the tubular geometry when the phantom/animal was placed upright, simulating the conventional CBCT geometry. Results showed signal-to-noise and contrast-to-noise ratios in the pancake geometry were reduced in comparison to the tubular geometry at the same dose level. But the overall spatial resolution within the transverse plane of the imaged cylinder/animal was better in the pancake geometry. A modest, up to two-fold, increase in exposure in the pancake geometry can improve image quality to a level close to the tubular geometry. Image quality can also be improved by inclining the animal, which reduces streak artifacts caused by bony structures. The major factor resulting in the inferior image quality in the pancake geometry is the elevated beam attenuation along the long axis of the phantom/animal and the consequently increased scatter-to-primary ratio in that orientation. Notwithstanding, the image quality in the pancake-geometry CBCT is adequate to support image guided animal positioning, while providing unique advantages of non-coplanar and multiple mice irradiation. This study also provides useful knowledge about the image quality in the two very different imaging geometries, i.e. pancake and tubular geometry

  2. Experimental demonstration of novel imaging geometries for x-ray fluorescence computed tomography

    PubMed Central

    Fu, Geng; Meng, Ling-Jian; Eng, Peter; Newville, Matt; Vargas, Phillip; Riviere, Patrick La

    2013-01-01

    Purpose: X-ray fluorescence computed tomography (XFCT) is an emerging imaging modality that maps the three-dimensional distribution of elements, generally metals, in ex vivo specimens and potentially in living animals and humans. At present, it is generally performed at synchrotrons, taking advantage of the high flux of monochromatic x rays, but recent work has demonstrated the feasibility of using laboratory-based x-ray tube sources. In this paper, the authors report the development and experimental implementation of two novel imaging geometries for mapping of trace metals in biological samples with ∼50–500 μm spatial resolution. Methods: One of the new imaging approaches involves illuminating and scanning a single slice of the object and imaging each slice's x-ray fluorescent emissions using a position-sensitive detector and a pinhole collimator. The other involves illuminating a single line through the object and imaging the emissions using a position-sensitive detector and a slit collimator. They have implemented both of these using synchrotron radiation at the Advanced Photon Source. Results: The authors show that it is possible to achieve 250 eV energy resolution using an electron multiplying CCD operating in a quasiphoton-counting mode. Doing so allowed them to generate elemental images using both of the novel geometries for imaging of phantoms and, for the second geometry, an osmium-stained zebrafish. Conclusions: The authors have demonstrated the feasibility of these two novel approaches to XFCT imaging. While they use synchrotron radiation in this demonstration, the geometries could readily be translated to laboratory systems based on tube sources. PMID:23718594

  3. PERTURBATION APPROACH FOR QUANTUM COMPUTATION

    SciTech Connect

    G. P. BERMAN; D. I. KAMENEV; V. I. TSIFRINOVICH

    2001-04-01

    We discuss how to simulate errors in the implementation of simple quantum logic operations in a nuclear spin quantum computer with many qubits, using radio-frequency pulses. We verify our perturbation approach using exact solutions for a relatively small number of qubits (L = 10).

  4. Ideal spiral bevel gears: A new approach to surface geometry

    NASA Technical Reports Server (NTRS)

    Huston, R. L.; Coy, J. J.

    1980-01-01

    The fundamental geometrical characteristics of spiral bevel gear tooth surfaces are discussed. The parametric representation of an ideal spiral bevel tooth is developed based on the elements of involute geometry, differential geometry, and fundamental gearing kinematics. A foundation is provided for the study of nonideal gears and the effects of deviations from ideal geometry on the contact stresses, lubrication, wear, fatigue life, and gearing kinematics.

  5. Ideal spiral bevel gears - A new approach to surface geometry

    NASA Technical Reports Server (NTRS)

    Huston, R. L.; Coy, J. J.

    1980-01-01

    This paper discusses the fundamental geometrical characteristics of spiral bevel gear tooth surfaces. The parametric representation of an ideal spiral bevel tooth is developed. The development is based on the elements of involute geometry, differential geometry, and fundamental gearing kinematics. A foundation is provided for the study of nonideal gears and the effects of deviations from ideal geometry on the contact stresses, lubrication, wear, fatigue life, and gearing kinematics.

  6. Adjoint Sensitivity Computations for an Embedded-Boundary Cartesian Mesh Method and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2006-01-01

    Cartesian-mesh methods are perhaps the most promising approach for addressing the issues of flow solution automation for aerodynamic design problems. In these methods, the discretization of the wetted surface is decoupled from that of the volume mesh. This not only enables fast and robust mesh generation for geometry of arbitrary complexity, but also facilitates access to geometry modeling and manipulation using parametric Computer-Aided Design (CAD) tools. Our goal is to combine the automation capabilities of Cartesian methods with an efficient computation of design sensitivities. We address this issue using the adjoint method, where the computational cost of the design sensitivities, or objective function gradients, is essentially independent of the number of design variables. In previous work, we presented an accurate and efficient algorithm for the solution of the adjoint Euler equations discretized on Cartesian meshes with embedded, cut-cell boundaries. Novel aspects of the algorithm included the computation of surface shape sensitivities for triangulations based on parametric-CAD models and the linearization of the coupling between the surface triangulation and the cut-cells. The objective of the present work is to extend our adjoint formulation to problems involving general shape changes. Central to this development is the computation of volume-mesh sensitivities to obtain a reliable approximation of the objective function gradient. Motivated by the success of mesh-perturbation schemes commonly used in body-fitted unstructured formulations, we propose an approach based on a local linearization of a mesh-perturbation scheme similar to the spring analogy. This approach circumvents most of the difficulties that arise due to non-smooth changes in the cut-cell layer as the boundary shape evolves and provides a consistent approximation to the exact gradient of the discretized objective function. A detailed gradient accuracy study is presented to verify our approach.
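
    The flavor of a spring-analogy mesh perturbation can be shown on a structured block (Python with NumPy; a Laplacian-smoothing variant, simpler than the scheme linearized in the paper): boundary nodes carry the shape change, and interior displacements relax to the average of their neighbors.

      import numpy as np

      # Spring-analogy-style mesh perturbation on a structured 2D block:
      # interior displacements satisfy a discrete Laplace equation driven by
      # the boundary motion (each node pulled to its four-neighbor average).
      ni, nj = 41, 21
      dx = np.zeros((ni, nj, 2))                  # node displacements

      s = np.linspace(0.0, 1.0, ni)
      dx[:, 0, 1] = 0.05 * np.sin(np.pi * s)      # lower wall bumps upward

      for _ in range(2000):                       # Jacobi iterations
          dx[1:-1, 1:-1] = 0.25 * (dx[2:, 1:-1] + dx[:-2, 1:-1] +
                                   dx[1:-1, 2:] + dx[1:-1, :-2])

      print("max interior displacement:", np.abs(dx[1:-1, 1:-1]).max())
      print("decays away from the wall:", dx[ni // 2, 1, 1] > dx[ni // 2, -2, 1])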

  7. Molecular tailoring approach for geometry optimization of large molecules: Energy evaluation and parallelization strategies

    NASA Astrophysics Data System (ADS)

    Ganesh, V.; Dongare, Rameshwar K.; Balanarayan, P.; Gadre, Shridhar R.

    2006-09-01

    A linear-scaling scheme for estimating the electronic energy, gradients, and Hessian of a large molecule at ab initio level of theory based on fragment set cardinality is presented. With this proposition, a general, cardinality-guided molecular tailoring approach (CG-MTA) for ab initio geometry optimization of large molecules is implemented. The method employs energy gradients extracted from fragment wave functions, enabling computations otherwise impractical on PC hardware. Further, the method is readily amenable to large scale coarse-grain parallelization with minimal communication among nodes, resulting in a near-linear speedup. CG-MTA is applied for density-functional-theory-based geometry optimization of a variety of molecules including α-tocopherol, taxol, γ-cyclodextrin, and two conformations of polyglycine. In the tests performed, energy and gradient estimates obtained from CG-MTA during optimization runs show an excellent agreement with those obtained from actual computation. Accuracy of the Hessian obtained employing CG-MTA provides good hope for the application of Hessian-based geometry optimization to large molecules.
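    The cardinality-guided combination can be pictured as inclusion-exclusion over fragment overlaps. The sketch below is a naive Python rendering under that assumption; the energy callable is a stand-in, and the actual CG-MTA prunes and reuses terms rather than enumerating every overlap.

        from itertools import combinations

        def mta_energy(fragments, energy):
            # fragments: list of frozensets of atom indices (overlapping)
            # energy(subset): ab initio energy of a subsystem (stand-in)
            total = 0.0
            for k in range(1, len(fragments) + 1):
                for combo in combinations(fragments, k):
                    overlap = frozenset.intersection(*combo)
                    if overlap:
                        # sign alternates with the order of the overlap
                        total += (-1) ** (k + 1) * energy(overlap)
            return total

    Gradients and Hessians combine fragment-wise in the same fashion, which is why each fragment can be computed on a separate node with minimal communication.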

  8. Transport Equation Based Wall Distance Computations Aimed at Flows With Time-Dependent Geometry

    NASA Technical Reports Server (NTRS)

    Tucker, Paul G.; Rumsey, Christopher L.; Bartels, Robert E.; Biedron, Robert T.

    2003-01-01

    Eikonal, Hamilton-Jacobi and Poisson equations can be used for economical nearest wall distance computation and modification. Economical computations may be especially useful for aeroelastic and adaptive grid problems for which the grid deforms, and the nearest wall distance needs to be repeatedly computed. Modifications are directed at remedying turbulence model defects. For complex grid structures, implementation of the Eikonal and Hamilton-Jacobi approaches is not straightforward. This prohibits their use in industrial CFD solvers. However, both the Eikonal and Hamilton-Jacobi equations can be written in advection and advection-diffusion forms, respectively. These, like the Poisson's Laplacian, are commonly occurring industrial CFD solver elements. Use of the NASA CFL3D code to solve the Eikonal and Hamilton-Jacobi equations in advective-based forms is explored. The advection-based distance equations are found to have robust convergence. Geometries studied include single and two element airfoils, wing body and double delta configurations along with a complex electronics system. It is shown that for Eikonal accuracy, upwind metric differences are required. The Poisson approach is found effective and, since it does not require offset metric evaluations, easiest to implement. The sensitivity of flow solutions to wall distance assumptions is explored. Generally, results are not greatly affected by wall distance traits.
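    A minimal Python sketch of the Poisson variant on a uniform 2-D grid, assuming for the demonstration that all four domain edges are walls: the distance is recovered from the solution of laplacian(phi) = -1 as d = -|grad phi| + sqrt(|grad phi|^2 + 2 phi).

        import numpy as np

        def poisson_wall_distance(nx, ny, h, sweeps=20000):
            # Solve laplacian(phi) = -1 with phi = 0 on the walls
            phi = np.zeros((nx, ny))
            for _ in range(sweeps):
                phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1]
                                          + phi[1:-1, 2:] + phi[1:-1, :-2]
                                          + h * h)
            gx, gy = np.gradient(phi, h)
            g2 = gx * gx + gy * gy
            # distance recovery; exact for a plane channel
            return -np.sqrt(g2) + np.sqrt(g2 + 2.0 * phi)

    In a plane channel this recovery is exact at every point, and in general geometries it degrades gracefully, consistent with the observation that flow results are not greatly affected by wall distance traits.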

  9. DMG-α--a computational geometry library for multimolecular systems.

    PubMed

    Szczelina, Robert; Murzyn, Krzysztof

    2014-11-24

    The DMG-α library grants researchers in the fields of computational biology, chemistry, and biophysics access to an open-source, easy-to-use, and intuitive software package for performing fine-grained geometric analysis of molecular systems. The library can compute power diagrams (weighted Voronoi diagrams) in three dimensions with 3D periodic boundary conditions, compute approximate projective 2D Voronoi diagrams on arbitrarily defined surfaces, perform shape-property recognition using α-shape theory, and carry out exact Solvent Accessible Surface Area (SASA) computation. The software is written mainly as a template-based C++ library for greater performance, but a rich Python interface (pydmga) is provided as a convenient way to manipulate the DMG-α routines. To illustrate possible applications of the DMG-α library, we present results of sample analyses which allowed us to determine nontrivial geometric properties of two Escherichia coli-specific lipids as emerging from molecular dynamics simulations of relevant model bilayers.
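    For orientation, an approximate SASA can be obtained with the classic Shrake-Rupley point-sampling scheme; the Python sketch below is a numerical stand-in for, not a reproduction of, the exact SASA algorithm in the library.

        import numpy as np

        def sasa(centers, radii, probe=1.4, n=960):
            # quasi-uniform unit-sphere points from a golden-spiral lattice
            i = np.arange(n) + 0.5
            theta = np.arccos(1.0 - 2.0 * i / n)
            phi = np.pi * (1.0 + 5.0 ** 0.5) * i
            sph = np.c_[np.sin(theta) * np.cos(phi),
                        np.sin(theta) * np.sin(phi),
                        np.cos(theta)]
            total = 0.0
            for k, (c, r) in enumerate(zip(centers, radii)):
                pts = c + (r + probe) * sph      # candidate surface points
                free = np.ones(n, dtype=bool)
                for m, (c2, r2) in enumerate(zip(centers, radii)):
                    if m != k:                   # bury points inside neighbours
                        free &= np.linalg.norm(pts - c2, axis=1) >= r2 + probe
                total += 4.0 * np.pi * (r + probe) ** 2 * free.sum() / n
            return total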

  10. Computing Spatio-Temporal Multiple View Geometry from Mutual Projections of Multiple Cameras

    NASA Astrophysics Data System (ADS)

    Wan, Cheng; Sato, Jun

    The spatio-temporal multiple view geometry can represent the geometry of multiple images in the case where non-rigid arbitrary motions are viewed from multiple translational cameras. However, it requires many corresponding points and is sensitive to image noise. In this paper, we investigate mutual projections of cameras in four-dimensional space and show that they enable us to reduce the number of corresponding points required for computing the spatio-temporal multiple view geometry. Surprisingly, in the three-view case, for instance, we no longer need any corresponding points to calculate the spatio-temporal multiple view geometry, provided all the cameras are projected to the other cameras mutually for two time intervals. We also show that the stability of the computation of spatio-temporal multiple view geometry is drastically improved by considering the mutual projections of cameras.

  11. A functional approach to geometry optimization of complex systems

    NASA Astrophysics Data System (ADS)

    Maslen, P. E.

    A quadratically convergent procedure is presented for the geometry optimization of complex systems, such as biomolecules and molecular complexes. The costly evaluation of the exact Hessian is avoided by expanding the density functional to second order in both nuclear and electronic variables, and then searching for the minimum of the quadratic functional. The dependence of the functional on the choice of nuclear coordinate system is described, and illustrative geometry optimizations using Cartesian and internal coordinates are presented for Taxol™.

  12. Computer-Generated Geometry Instruction: A Preliminary Study

    ERIC Educational Resources Information Center

    Kang, Helen W.; Zentall, Sydney S.

    2011-01-01

    This study hypothesized that increased intensity of graphic information, presented in computer-generated instruction, could be differentially beneficial for students with hyperactivity and inattention by improving their ability to sustain attention and hold information in mind. To this purpose, 18 2nd-4th grade students, recruited from general…

  13. Interactive Geometry in the B.C. (Before Computers) Era

    ERIC Educational Resources Information Center

    Whittaker, Heather; Johnson, Iris DeLoach

    2005-01-01

    A 3-by-5 card is used to represent two or more sets of parallel lines, four right angles, opposite sides congruent and to investigate the Pythagorean theorem, similar triangles, and the tangent ratio before the introduction of computers. Students were asked to draw two parallel lines, cross them with a transversal and label the angles, which…

  14. Application of Computer Axial Tomography (CAT) to measuring crop canopy geometry. [corn and soybeans]

    NASA Technical Reports Server (NTRS)

    Bauer, M. E.; Vanderbilt, V. C. (Principal Investigator); Kilgore, R. W.

    1981-01-01

    The feasibility of using the principles of computer axial tomography (CAT) to quantify the structure of crop canopies was investigated; six variables are needed to describe the position and orientation over time of a small piece of canopy foliage. Several cross sections were cut through the foliage of healthy, green corn and soybean canopies in the dent and full pod development stages, respectively. A photograph of each cross section representing the intersection of a plane with the foliage was enlarged, and the air-foliage boundaries delineated by the plane were digitized. A computer program was written and used to reconstruct the cross section of the canopy. The approach used in applying optical computer axial tomography to measuring crop canopy geometry shows promise of being able to provide needed geometric information for input data to canopy reflectance models. The difficulty of using the CAT scanner to measure large canopies of crops like corn is discussed and a solution is proposed involving the measurement of plants one at a time.
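    Once the air-foliage boundaries are digitized as closed polygons, per-section quantities follow from elementary geometry; for example, the foliage area of one digitized cross section via the shoelace formula (an assumed post-processing step, not the authors' program):

        def cross_section_area(pts):
            # pts: list of (x, y) vertices of a digitized closed boundary
            n = len(pts)
            s = sum(pts[i][0] * pts[(i + 1) % n][1]
                    - pts[(i + 1) % n][0] * pts[i][1] for i in range(n))
            return abs(s) / 2.0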

  15. MHRDRing Z-Pinches and Related Geometries: Four Decades of Computational Modeling Using Still Unconventional Methods

    SciTech Connect

    Lindemuth, Irvin R.

    2009-01-21

    For approximately four decades, Z-pinches and related geometries have been computationally modeled using unique Alternating Direction Implicit (ADI) numerical methods. Computational results have provided illuminating and often provocative interpretations of experimental results. A number of past and continuing applications are reviewed and discussed.

  16. Potts models with magnetic field: Arithmetic, geometry, and computation

    NASA Astrophysics Data System (ADS)

    Dasu, Shival; Marcolli, Matilde

    2015-11-01

    We give a sheaf theoretic interpretation of Potts models with external magnetic field, in terms of constructible sheaves and their Euler characteristics. We show that the polynomial countability question for the hypersurfaces defined by the vanishing of the partition function is affected by changes in the magnetic field: elementary examples suffice to see non-polynomially countable cases that become polynomially countable after a perturbation of the magnetic field. The same recursive formula for the Grothendieck classes, under edge-doubling operations, holds as in the case without magnetic field, but the closed formulae for specific examples like banana graphs differ in the presence of magnetic field. We give examples of computation of the Euler characteristic with compact support, for the set of real zeros, and find a similar exponential growth with the size of the graph. This can be viewed as a measure of topological and algorithmic complexity. We also consider the computational complexity question for evaluations of the polynomial, and show both tractable and NP-hard examples, using dynamic programming.

  17. Computational aeroelastic analysis of aircraft wings including geometry nonlinearity

    NASA Astrophysics Data System (ADS)

    Tian, Binyu

    The objective of the present study is to show the ability to solve fluid-structure interaction problems more realistically by including the geometric nonlinearity of the structure, so that the aeroelastic analysis can be extended into the onset of flutter, or into the post-flutter regime. A nonlinear Finite Element Analysis software package is developed based on the second Piola-Kirchhoff stress and the Green-Lagrange strain. These form a pair of energetically conjugate tensors that can accommodate arbitrarily large structural deformations and deflections, allowing study of the flutter phenomenon. Since both of these tensors are objective tensors, i.e., rigid-body motion makes no contribution to their components, the movement of the body, including maneuvers and deformation, can be included. The nonlinear Finite Element Analysis software developed in this study is verified against ANSYS, NASTRAN, ABAQUS, and IDEAS for linear static, nonlinear static, linear dynamic and nonlinear dynamic structural solutions. To solve the flow problems with the Euler/Navier-Stokes equations, the nonlinear structural software is then embedded into ENSAERO, an aeroelastic analysis software package developed at NASA Ames Research Center. The coupling of the two codes, both nonlinear in their own fields, is achieved by the domain decomposition method first proposed by Guruswamy. A procedure has been established for the aeroelastic analysis process. Aeroelastic analysis results have been obtained for a fighter wing in the transonic regime for various cases. The influence of dynamic pressure on flutter has been checked for a range of Mach numbers. Even though the current analysis matches the general aeroelastic characteristics, the numerical values do not match previous studies very well and need further investigation. The flutter aeroelastic analysis results have also been plotted at several time points. The influence of the deforming wing geometry can be well seen

  18. Software-based geometry operations for 3D computer graphics

    NASA Astrophysics Data System (ADS)

    Sima, Mihai; Iancu, Daniel; Glossner, John; Schulte, Michael; Mamidi, Suman

    2006-02-01

    In order to support a broad dynamic range and a high degree of precision, many of 3D rendering's fundamental algorithms have traditionally been performed in floating point. However, fixed-point data representation is preferable to floating point in graphics applications on embedded devices, where performance is of paramount importance, while the dynamic range and precision requirements are limited due to the small display sizes (current PDAs are 640 × 480 (VGA), while cell phones are even smaller). In this paper we analyze the efficiency of a CORDIC-augmented Sandbridge processor when implementing a vertex processor in software using fixed-point arithmetic. A CORDIC-based solution for vertex processing exhibits a number of advantages over classical Multiply-and-Accumulate solutions. First, since a single primitive is used to describe the computation, the code can easily be vectorized and multithreaded, and thus fits the major Sandbridge architectural features. Second, since a CORDIC iteration consists of only a shift operation followed by an addition, the computation may be deeply pipelined. Initially, we outline the Sandbridge architecture extension, which encompasses a CORDIC functional unit and the associated instructions. Then, we consider rigid-body rotation, lighting, exponentiation, vector normalization, and perspective division (some of the most important data-intensive 3D graphics kernels) and propose a scheme to implement them on the CORDIC-augmented Sandbridge processor. Preliminary results indicate that the performance improvement within the extended instruction set ranges from 3× to 10× (with the exception of rigid-body rotation).
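    The shift-and-add structure is easiest to see in a generic fixed-point CORDIC rotation; the Python sketch below is textbook CORDIC in rotation mode (the 14-bit scaling and iteration count are arbitrary choices, and nothing here is specific to the Sandbridge implementation):

        import math

        def cordic_rotate(x, y, angle, iters=16, frac=14):
            one = 1 << frac
            atan_tab = [int(round(math.atan(2.0 ** -i) * one))
                        for i in range(iters)]
            gain = math.prod(math.sqrt(1.0 + 2.0 ** (-2 * i))
                             for i in range(iters))
            xi = int(round(x / gain * one))   # pre-scale by 1/gain
            yi = int(round(y / gain * one))
            zi = int(round(angle * one))
            for i in range(iters):
                # each iteration is two shifts and three adds, hence the
                # deep pipelining noted in the abstract
                if zi >= 0:
                    xi, yi, zi = xi - (yi >> i), yi + (xi >> i), zi - atan_tab[i]
                else:
                    xi, yi, zi = xi + (yi >> i), yi - (xi >> i), zi + atan_tab[i]
            return xi / one, yi / one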

  19. Computational Approaches to Vestibular Research

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Wade, Charles E. (Technical Monitor)

    1994-01-01

    The Biocomputation Center at NASA Ames Research Center is dedicated to a union between computational, experimental and theoretical approaches to the study of neuroscience and of life sciences in general. The current emphasis is on computer reconstruction and visualization of vestibular macular architecture in three dimensions (3-D), and on mathematical modeling and computer simulation of neural activity in the functioning system. Our methods are being used to interpret the influence of spaceflight on mammalian vestibular maculas in a model system, that of the adult Sprague-Dawley rat. More than twenty 3-D reconstructions of type I and type II hair cells and their afferents have been completed by digitization of contours traced from serial sections photographed in a transmission electron microscope. This labor-intensive method has now been replaced by a semiautomated method developed in the Biocomputation Center in which conventional photography is eliminated. All viewing, storage and manipulation of original data is done using Silicon Graphics workstations. Recent improvements to the software include a new mesh generation method for connecting contours. This method will permit the investigator to describe any surface, regardless of complexity, including highly branched structures such as are routinely found in neurons. This same mesh can be used for 3-D, finite volume simulation of synapse activation and voltage spread on neuronal surfaces visualized via the reconstruction process. These simulations help the investigator interpret the relationship between neuroarchitecture and physiology, and are of assistance in determining which experiments will best test theoretical interpretations. Data are also used to develop abstract, 3-D models that dynamically display neuronal activity ongoing in the system. Finally, the same data can be used to visualize the neural tissue in a virtual environment. Our exhibit will depict capabilities of our computational approaches and

  1. [A historical approach to computing].

    PubMed

    Babini, N

    1996-12-01

    Half a century after the conception of the computer, its effects, which already reach all human domains, appear to be driving a universal transition similar to the one that marked, in the West, the passage from artisanal to industrial production during the 18th and 19th centuries, and from mechanization to universal automation. If this supposition is correct, it would be urgent to intensify the historical study of informatics, already under way in many countries but not yet in ours. The gradual incorporation of the history of informatics into university curricula is proposed, starting from research centers and including the preparation of teachers and researchers, the production and publication of works, and the creation of museums and archives to preserve the physical testimonies and original documents of this scientific-technical invention that honors our century.

  2. The flux-coordinate independent approach applied to X-point geometries

    SciTech Connect

    Hariri, F.; Hill, P.; Ottaviani, M.; Sarazin, Y.

    2014-08-15

    A Flux-Coordinate Independent (FCI) approach for anisotropic systems, not based on magnetic flux coordinates, has been introduced in Hariri and Ottaviani [Comput. Phys. Commun. 184, 2419 (2013)]. In this paper, we show that the approach can tackle magnetic configurations including X-points. Using the code FENICIA, an equilibrium with a magnetic island has been used to show the robustness of the FCI approach to cases in which a magnetic separatrix is present in the system, either by design or as a consequence of instabilities. Numerical results are in good agreement with the analytic solutions of the sound-wave propagation problem. Conservation properties are verified. Finally, the critical gain of the FCI approach in situations including the magnetic separatrix with an X-point is demonstrated by a fast convergence of the code with the numerical resolution in the direction of symmetry. The results highlighted in this paper show that the FCI approach can efficiently deal with X-point geometries.

  3. Slant Path Distances Through Cells in Cylindrical Geometry and an Application to the Computation of Isophotes

    SciTech Connect

    Whitaker, Rodney; Symbalisty, Eugene

    2007-12-17

    In computer programs involving two-dimensional cylindrical geometry, it is often necessary to calculate the slant path distance in a given direction from a point to the boundary of a mesh cell. A subroutine, HOWFAR, has been written that accomplishes this, and is very economical in computer time. An example of its use is given in constructing the isophotes for a low altitude nuclear fireball.
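    In the spirit of that subroutine, the slant-path distance from a point (r, z) along a unit direction to the boundary of a cell bounded by radii r_in, r_out and planes z_lo, z_hi can be sketched in Python; the point is placed at azimuth zero without loss of generality, and the function assumes the point lies inside the cell (a reconstruction of the idea, not the HOWFAR source):

        import math

        def slant_path(r, z, d, r_in, r_out, z_lo, z_hi):
            dx, dy, dz = d
            ts = []
            for zb in (z_lo, z_hi):              # axial faces
                if dz != 0.0:
                    t = (zb - z) / dz
                    if t > 1e-12:
                        ts.append(t)
            a = dx * dx + dy * dy
            for R in (r_in, r_out):              # cylindrical faces
                if R > 0.0 and a > 0.0:
                    b, c = 2.0 * r * dx, r * r - R * R
                    disc = b * b - 4.0 * a * c
                    if disc >= 0.0:
                        for t in ((-b - math.sqrt(disc)) / (2.0 * a),
                                  (-b + math.sqrt(disc)) / (2.0 * a)):
                            if t > 1e-12:
                                ts.append(t)
            return min(ts)                       # nearest boundary crossing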

  4. Comparative Effects of Two Modes of Computer-Assisted Instructional Package on Solid Geometry Achievement

    ERIC Educational Resources Information Center

    Gambari, Isiaka Amosa; Ezenwa, Victoria Ifeoma; Anyanwu, Romanus Chogozie

    2014-01-01

    The study examined the effects of two modes of computer-assisted instructional package on solid geometry achievement amongst senior secondary school students in Minna, Niger State, Nigeria. Also, the influence of gender on the performance of students exposed to CAI(AT) and CAI(AN) packages was examined. This study adopted a pretest-posttest…

  5. Computation and Visualization of Casimir Forces in Arbitrary Geometries: Nonmonotonic Lateral-Wall Forces and the Failure of Proximity-Force Approximations

    SciTech Connect

    Rodriguez, Alejandro; Ibanescu, Mihai; Joannopoulos, J. D.; Johnson, Steven G.; Iannuzzi, Davide; Capasso, Federico

    2007-08-24

    We present a method of computing Casimir forces for arbitrary geometries, with any desired accuracy, that can directly exploit the efficiency of standard numerical-electromagnetism techniques. Using the simplest possible finite-difference implementation of this approach, we obtain both agreement with past results for cylinder-plate geometries, and also present results for new geometries. In particular, we examine a pistonlike problem involving two dielectric and metallic squares sliding between two metallic walls, in two and three dimensions, respectively, and demonstrate nonadditive and nonmonotonic changes in the force due to these lateral walls.

  6. Multivariate geometry as an approach to algal community analysis

    USGS Publications Warehouse

    Allen, T.F.H.; Skagen, S.

    1973-01-01

    Multivariate analyses are put in the context of more usual approaches to phycological investigations. The intuitive common sense involved in methods of ordination, classification and discrimination is emphasised by simple geometric accounts which avoid jargon and matrix algebra. Warnings are given that artifacts result from technique abuses by the naive or over-enthusiastic. An analysis of a simple periphyton data set is presented as an example of the approach. Suggestions are made as to situations in phycological investigations where the techniques could be appropriate. The discipline is reprimanded for its neglect of the multivariate approach.

  7. Learning theoretic approach to differential and perceptual geometry: I. Curvature and torsion are the independent components of space curves

    NASA Astrophysics Data System (ADS)

    Assadi, Amir H.; Eghbalnia, Hamid

    2000-06-01

    In standard differential geometry, the Fundamental Theorem of Space Curves states that two differential invariants of a curve, namely curvature and torsion, determine its geometry, or equivalently, the isometry class of the curve up to rigid motions in the Euclidean three-dimensional space. Consider a physical model of a space curve made from a sufficiently thin, yet visible rigid wire, and the problem of perceptual identification (by a human observer or a robot) of two given physical model curves. In a previous paper (perceptual geometry) we have emphasized a learning theoretic approach to construct a perceptual geometry of the surfaces in the environment. In particular, we have described a computational method for mathematical representation of objects in the perceptual geometry inspired by the ecological theory of Gibson, and adhering to the principles of Gestalt in perceptual organization of vision. In this paper, we continue our learning theoretic treatment of perceptual geometry of objects, focusing on the case of physical models of space curves. In particular, we address the question of perceptually distinguishing two possibly novel space curves based on the observer's prior visual experience of physical models of curves in the environment. The Fundamental Theorem of Space Curves inspires an analogous result in perceptual geometry as follows. We apply learning theory to the statistics of a sufficiently rich collection of physical models of curves, to derive two statistically independent local functions, that we call, by analogy, the curvature and torsion. This pair of invariants distinguishes physical models of curves in the sense of perceptual geometry. That is, in an appropriate resolution, an observer can distinguish two perceptually identical physical models in different locations. If these pairs of functions are approximately the same for two given space curves, then after possibly some changes of viewing planes, the observer confirms the two are the same.
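    The two invariants themselves are inexpensive to estimate from sampled curves. A Python sketch using finite differences and the Frenet formulas kappa = |r' x r''| / |r'|^3 and tau = (r' x r'') . r''' / |r' x r''|^2 (the statistical learning step of the paper is not reproduced here):

        import numpy as np

        def curvature_torsion(pts, dt=1.0):
            # pts: (n, 3) array of samples along a space curve
            r1 = np.gradient(pts, dt, axis=0)    # r'
            r2 = np.gradient(r1, dt, axis=0)     # r''
            r3 = np.gradient(r2, dt, axis=0)     # r'''
            c = np.cross(r1, r2)
            cn = np.linalg.norm(c, axis=1)
            kappa = cn / np.linalg.norm(r1, axis=1) ** 3
            tau = np.einsum('ij,ij->i', c, r3) / np.where(cn > 0, cn ** 2, 1.0)
            return kappa, tau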

  8. Nozzle extraction geometry of a liquid metal atomizer optimized by computer simulation of electric fields

    SciTech Connect

    Cvetkovic, S.R.; Balachandran, W.; Arnold, P.G.; Kleveland, B.; Wilson, F.G.; Zhao, A.P.

    1996-07-01

    Experimental measurements are compared with results obtained using a dedicated computer program for finite element modeling of electric fields in the vicinity of a liquid metal atomizer nozzle/tip. Good agreement between the experiment and the computer model has been achieved for two different nozzle geometries (Taylor cone and rounded tip), while paying particular attention to accuracy of the numerical solution near the tip (for equipotentials as well as derived values of field strength). In addition, the potential distribution has been calculated for several different positions of extractor voltage observed for each case. Finally, the assessment of suitability of the computer technique for qualitative consideration of the atomization process itself is presented.

  9. Efficient Probability of Failure Calculations for QMU using Computational Geometry LDRD 13-0144 Final Report

    SciTech Connect

    Mitchell, Scott A.; Ebeida, Mohamed Salah; Romero, Vicente J.; Swiler, Laura Painton; Rushdi, Ahmad A.; Abdelkader, Ahmad

    2015-09-01

    This SAND report summarizes our work on the Sandia National Laboratory LDRD project titled "Efficient Probability of Failure Calculations for QMU using Computational Geometry" which was project #165617 and proposal #13-0144. This report merely summarizes our work. Those interested in the technical details are encouraged to read the full published results, and contact the report authors for the status of the software and follow-on projects.

  10. An interactive approach to surface-fitting complex geometries for flowfield applications

    NASA Technical Reports Server (NTRS)

    Dejarnette, Fred R.; Hamilton, H. Harris, II; Cheatwood, F. Mcneil

    1987-01-01

    Numerical flowfield methods require a geometry subprogram which can calculate body coordinates, slopes, and radii of curvature for typical aircraft and spacecraft configurations. The objective of this paper is to develop a new surface-fitting technique which addresses two major problems with existing geometry packages: computer storage requirements and the time required of the user for the initial set-up of the geometry model. In the present method, coordinates of cross sections are fit in a least-squares sense using segments of general conic sections. After fitting each cross section, the next step is to blend the cross-sectional curve-fits in the longitudinal direction using general conics to fit specific meridional half-planes. For the initial setup of the geometry model, an interactive, completely menu-driven computer code has been developed to allow the user to make modifications to the initial fit for a given cross section or meridional cut. Graphic displays are provided to assist the user in the visualization of the effect of each modification. The completed model may be viewed from any angle using the code's three-dimensional graphics package. Geometry results for the modeling of the Space Shuttle and a proposed Aeroassist Flight Experiment (AFE) geometry are presented, in addition to calculated heat-transfer rates based on these models.

  11. Design geometry and design/off-design performance computer codes for compressors and turbines

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.

    1995-01-01

    This report summarizes some NASA Lewis (i.e., government owned) computer codes capable of being used for airbreathing propulsion system studies to determine the design geometry and to predict the design/off-design performance of compressors and turbines. These are not CFD codes; velocity-diagram energy and continuity computations are performed fore and aft of the blade rows using meanline, spanline, or streamline analyses. Losses are provided by empirical methods. Both axial-flow and radial-flow configurations are included.

  12. Computer simulation of blood flow patterns in arteries of various geometries.

    PubMed

    Wong, P K; Johnston, K W; Ethier, C R; Cobbold, R S

    1991-11-01

    The purpose of this study is to illustrate the application of computer simulation to the study of blood flow through arteries and to demonstrate the relationship between geometry of the vessels and local flow patterns. A finite element computer program was developed to simulate steady and pulsatile blood flow by solving the continuity and Navier-Stokes equations. The accuracy of the computational method has been confirmed by comparing the numeric results to analytic solutions and to published experimental data from physical models. The results are presented as plots of the velocity vectors, streamlines, and pressure contours. The computational model has been applied to illustrate flow patterns in the following situations: pulsatile flow in a cylindric artery and an artery with an axisymmetric stenosis, steady flow in cylindric arteries with stenoses of varying severity and with different flow rates, steady flow in an artery containing a fusiform aneurysm, steady flow in a two-dimensional model of a symmetric Y-shaped bifurcation, and steady flow in a two-dimensional model of the carotid bifurcation. Regions that are commonly associated with arterial disease often coincide with zones of reversed or stagnant flow. In conclusion, the versatility and feasibility of computational simulation of blood flow is illustrated by this study. Although this mathematic model is a simplification of the real flow phenomena, it yields results that provide useful insights into the understanding of local blood flow patterns for a variety of complex geometries.
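    One analytic solution of the kind used for such validation is steady Poiseuille flow in a straight cylindrical vessel, u(r) = -(dp/dx)(R^2 - r^2)/(4 mu); in Python (a generic benchmark, not the authors' code):

        def poiseuille_u(r, R, dpdx, mu):
            # axial velocity at radius r in a tube of radius R
            return -dpdx * (R * R - r * r) / (4.0 * mu)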

  13. A Parallel Cartesian Approach for External Aerodynamics of Vehicles with Complex Geometry

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.

    2001-01-01

    This workshop paper presents the current status in the development of a new approach for the solution of the Euler equations on Cartesian meshes with embedded boundaries in three dimensions on distributed and shared memory architectures. The approach uses adaptively refined Cartesian hexahedra to fill the computational domain. Where these cells intersect the geometry, they are cut by the boundary into arbitrarily shaped polyhedra which receive special treatment by the solver. The presentation documents a newly developed multilevel upwind solver based on a flexible domain-decomposition strategy. One novel aspect of the work is its use of space-filling curves (SFC) for memory-efficient on-the-fly parallelization, dynamic re-partitioning and automatic coarse mesh generation. Within each subdomain the approach employs a variety of reordering techniques so that relevant data are on the same page in memory, permitting high performance on cache-based processors. Details of the on-the-fly SFC-based partitioning are presented, as are construction rules for the automatic coarse mesh generation. After describing the approach, the paper uses model problems and 3-D configurations to both verify and validate the solver. The model problems demonstrate that second-order accuracy is maintained despite the presence of the irregular cut-cells in the mesh. In addition, it examines both parallel efficiency and convergence behavior. These investigations demonstrate a parallel speed-up in excess of 28 on 32 processors of an SGI Origin 2000 system and confirm that mesh partitioning has no effect on convergence behavior.
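    The SFC idea admits a compact sketch: interleaving the bits of integer cell coordinates yields a Morton (Z-order) key, sorting by key improves locality, and partitioning reduces to cutting the sorted list into equal chunks. Morton order is assumed here for concreteness; the paper treats SFCs generically.

        def morton_key(i, j, k, bits=10):
            key = 0
            for b in range(bits):            # interleave one bit per axis
                key |= ((i >> b) & 1) << (3 * b)
                key |= ((j >> b) & 1) << (3 * b + 1)
                key |= ((k >> b) & 1) << (3 * b + 2)
            return key

        def partition(cells, nproc):
            # cells: list of (i, j, k) integer cell coordinates
            order = sorted(cells, key=lambda c: morton_key(*c))
            size = -(-len(order) // nproc)   # ceiling division
            return [order[p * size:(p + 1) * size] for p in range(nproc)]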

  14. Modelling Mathematics Teachers' Intention to Use the Dynamic Geometry Environments in Macau: An SEM Approach

    ERIC Educational Resources Information Center

    Zhou, Mingming; Chan, Kan Kan; Teo, Timothy

    2016-01-01

    Dynamic geometry environments (DGEs) provide computer-based environments to construct and manipulate geometric figures with great ease. Research has shown that DGEs has positive impact on student motivation, engagement, and achievement in mathematics learning. However, the adoption of DGEs by mathematics teachers varies substantially worldwide.…

  15. Computational approaches for systems metabolomics.

    PubMed

    Krumsiek, Jan; Bartel, Jörg; Theis, Fabian J

    2016-06-01

    Systems genetics is defined as the simultaneous assessment and analysis of multi-omics datasets. In the past few years, metabolomics has been established as a robust tool describing an important functional layer in this approach. The metabolome of a biological system represents an integrated state of genetic and environmental factors and has been referred to as a 'link between genotype and phenotype'. In this review, we summarize recent progress in statistical analysis methods for metabolomics data in combination with other omics layers. We put a special focus on complex, multivariate statistical approaches as well as pathway-based and network-based analysis methods. Moreover, we outline current challenges and pitfalls of metabolomics-focused multi-omics analyses and discuss future steps for the field.

  16. Geometry Modeling and Grid Generation for Computational Aerodynamic Simulations Around Iced Airfoils and Wings

    NASA Technical Reports Server (NTRS)

    Choo, Yung K.; Slater, John W.; Vickerman, Mary B.; VanZante, Judith F.; Wadel, Mary F. (Technical Monitor)

    2002-01-01

    Issues associated with analysis of 'icing effects' on airfoil and wing performances are discussed, along with accomplishments and efforts to overcome difficulties with ice. Because of infinite variations of ice shapes and their high degree of complexity, computational 'icing effects' studies using available software tools must address many difficulties in geometry acquisition and modeling, grid generation, and flow simulation. The value of each technology component needs to be weighed from the perspective of the entire analysis process, from geometry to flow simulation. Even though CFD codes are yet to be validated for flows over iced airfoils and wings, numerical simulation, when considered together with wind tunnel tests, can provide valuable insights into 'icing effects' and advance our understanding of the relationship between ice characteristics and their effects on performance degradation.

  17. Thermal Protection System Cavity Heating for Simplified and Actual Geometries Using Computational Fluid Dynamics Simulations with Unstructured Grids

    NASA Technical Reports Server (NTRS)

    McCloud, Peter L.

    2010-01-01

    Thermal Protection System (TPS) Cavity Heating is predicted using Computational Fluid Dynamics (CFD) on unstructured grids for both simplified cavities and actual cavity geometries. Validation was performed using comparisons to wind tunnel experimental results and CFD predictions using structured grids. Full-scale predictions were made for simplified and actual geometry configurations on the Space Shuttle Orbiter in a mission support timeframe.

  18. An interactive user-friendly approach to surface-fitting three-dimensional geometries

    NASA Technical Reports Server (NTRS)

    Cheatwood, F. Mcneil; Dejarnette, Fred R.

    1988-01-01

    A surface-fitting technique has been developed which addresses two problems with existing geometry packages: computer storage requirements and the time required of the user for the initial setup of the geometry model. Coordinates of cross sections are fit using segments of general conic sections. The next step is to blend the cross-sectional curve-fits in the longitudinal direction using general conics to fit specific meridional half-planes. Provisions are made to allow the fitting of fuselages and wings so that entire wing-body combinations may be modeled. This report includes the development of the technique along with a User's Guide for the various menus within the program. Results for the modeling of the Space Shuttle and a proposed Aeroassist Flight Experiment geometry are presented.
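    Each cross-section fit reduces to a small linear-algebra problem. As a sketch, an unconstrained algebraic least-squares fit of a general conic A x^2 + B xy + C y^2 + D x + E y + F = 0 to digitized points can be written in a few lines of Python (the program itself fits conic segments under constraints, so this is only the core idea):

        import numpy as np

        def fit_conic(x, y):
            # x, y: 1-D arrays of cross-section points; returns (A..F)
            # up to scale, as the singular vector of the design matrix
            # belonging to the smallest singular value
            D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
            return np.linalg.svd(D)[2][-1]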

  19. Assessment and improvement of mapping algorithms for non-matching meshes and geometries in computational FSI

    NASA Astrophysics Data System (ADS)

    Wang, Tianyang; Wüchner, Roland; Sicklinger, Stefan; Bletzinger, Kai-Uwe

    2016-05-01

    This paper investigates data mapping between non-matching meshes and geometries in fluid-structure interaction. Mapping algorithms for surface meshes, including nearest-element interpolation, the standard mortar method and the dual mortar method, are studied and comparatively assessed. The inconsistency problem of mortar methods at curved edges of fluid-structure interfaces is solved by a newly developed consistency-enforcing approach, which is robust enough to handle even the case in which fluid boundary facets are not in contact with structure boundary elements at all, owing to high fluid refinement. Tests with representative geometries show that the mortar methods are suitable for conservative mapping, but that it is better to use nearest-element interpolation in a direct way; moreover, the dual mortar method can give slight oscillations. This work also develops a co-rotating mapping algorithm for 1D beam elements. Its novelty lies in its ability to handle large displacements and rotations.

  20. Quantitative approaches to computational vaccinology.

    PubMed

    Doytchinova, Irini A; Flower, Darren R

    2002-06-01

    This article reviews the newly released JenPep database and two new powerful techniques for T-cell epitope prediction: (i) the additive method; and (ii) a 3D-Quantitative Structure Activity Relationships (3D-QSAR) method, based on Comparative Molecular Similarity Indices Analysis (CoMSIA). The JenPep database is a family of relational databases supporting the growing need of immunoinformaticians for quantitative data on peptide binding to major histocompatibility complexes and to the Transporters associated with Antigen Processing (TAP). It also contains an annotated list of T-cell epitopes. The database is available free via the Internet (http://www.jenner.ac.uk/JenPep). The additive prediction method is based on the assumption that the binding affinity of a peptide depends on the contributions from each amino acid as well as on the interactions between the adjacent and every second side-chain. In the 3D-QSAR approach, the influence of five physicochemical properties (steric bulk, electrostatic potential, local hydrophobicity, hydrogen-bond donor and hydrogen-bond acceptor abilities) on the affinity of peptides binding to MHC molecules were considered. Both methods were exemplified through their application to the well-studied problem of peptides binding to the human class I MHC molecule HLA-A*0201. PMID:12067414

  1. A Geometry Based Infra-structure for Computational Analysis and Design

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1997-01-01

    The computational steps traditionally taken for most engineering analysis (CFD, structural analysis, etc.) are: Surface Generation - usually by employing a CAD system; Grid Generation - preparing the volume for the simulation; Flow Solver - producing the results at the specified operational point; and Post-processing Visualization - interactively attempting to understand the results. For structural analysis, integrated systems can be obtained from a number of commercial vendors. For CFD, these steps have worked well in the past for simple steady-state simulations, at the expense of much user interaction, with data transmitted between phases via files. Specifically, the problems with this procedure are: (1) File based. Information flows from one step to the next via data files with formats specified for that procedure. (2) 'Good' geometry. A bottleneck in getting results from a solver is the construction of proper geometry to be fed to the grid generator. With 'good' geometry a grid can be constructed in tens of minutes (even for a complex configuration) using unstructured techniques. (3) One-way communication. All information travels forward from one phase to the next. Until this process can be automated, more complex problems, such as multi-disciplinary analysis, or use of the above procedure for design, remain prohibitive.

  2. Toward exascale computing through neuromorphic approaches.

    SciTech Connect

    James, Conrad D.

    2010-09-01

    While individual neurons function at relatively low firing rates, naturally-occurring nervous systems not only surpass manmade systems in computing power, but accomplish this feat using relatively little energy. It is asserted that the next major breakthrough in computing power will be achieved through application of neuromorphic approaches that mimic the mechanisms by which neural systems integrate and store massive quantities of data for real-time decision making. The proposed LDRD provides a conceptual foundation for SNL to make unique advances toward exascale computing. First, a team consisting of experts from the HPC, MESA, cognitive and biological sciences and nanotechnology domains will be coordinated to conduct an exercise with the outcome being a concept for applying neuromorphic computing to achieve exascale computing. It is anticipated that this concept will involve innovative extension and integration of SNL capabilities in MicroFab, material sciences, high-performance computing, and modeling and simulation of neural processes/systems.

  3. Laser cone beam computed tomography scanner geometry for large volume 3D dosimetry

    NASA Astrophysics Data System (ADS)

    Jordan, K. J.; Turnbull, D.; Batista, J. J.

    2013-06-01

    A new scanner geometry for fast optical cone-beam computed tomography is reported. The system consists of a low power laser beam, raster scanned, under computer control, through a transparent object in a refractive index matching aquarium. The transmitted beam is scattered from a diffuser screen and detected by a photomultiplier tube. Modest stray light is present in the projection images since only a single ray is present in the object during measurement and there is no imaging optics to introduce further stray light in the form of glare. A scan time of 30 minutes was required for 512 projections with a field of view of 12 × 18 cm. Initial performance from scanning a 15 cm diameter jar with black solutions is presented. Averaged reconstruction coefficients are within 2% along the height of the jar and within the central 85% of diameter, due to the index mismatch of the jar. Agreement with spectrometer measurements was better than 0.5% for a minimum transmission of 4% and within 4% for a dark, 0.1% transmission sample. This geometry's advantages include high dynamic range and low cost of scaling to larger (>15 cm) fields of view.
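    Each raster-scanned ray contributes one projection value through the Beer-Lambert law; in Python (a generic preprocessing step, not the scanner's software):

        import numpy as np

        def projection(I, I0):
            # line integral of the attenuation coefficient along the ray
            return -np.log(np.asarray(I, dtype=float) / I0)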

  4. Spectral Transfer Learning Using Information Geometry for a User-Independent Brain-Computer Interface

    PubMed Central

    Waytowich, Nicholas R.; Lawhern, Vernon J.; Bohannon, Addison W.; Ball, Kenneth R.; Lance, Brent J.

    2016-01-01

    Recent advances in signal processing and machine learning techniques have enabled the application of Brain-Computer Interface (BCI) technologies to fields such as medicine, industry, and recreation; however, BCIs still suffer from the requirement of frequent calibration sessions due to the intra- and inter-individual variability of brain-signals, which makes calibration suppression through transfer learning an area of increasing interest for the development of practical BCI systems. In this paper, we present an unsupervised transfer method (spectral transfer using information geometry, STIG), which ranks and combines unlabeled predictions from an ensemble of information geometry classifiers built on data from individual training subjects. The STIG method is validated in both off-line and real-time feedback analysis during a rapid serial visual presentation task (RSVP). For detection of single-trial, event-related potentials (ERPs), the proposed method can significantly outperform existing calibration-free techniques as well as outperform traditional within-subject calibration techniques when limited data is available. This method demonstrates that unsupervised transfer learning for single-trial detection in ERP-based BCIs can be achieved without the requirement of costly training data, representing a step forward in the overall goal of achieving a practical user-independent BCI system. PMID:27713685

  5. Predicting the optimal geometry of microneedles and their array for dermal vaccination using a computational model.

    PubMed

    Römgens, Anne M; Bader, Dan L; Bouwstra, Joke A; Oomens, Cees W J

    2016-11-01

    Microneedle arrays have been developed to deliver a range of biomolecules including vaccines into the skin. These microneedles have been designed with a wide range of geometries and arrangements within an array. However, little is known about the effect of the geometry on the potency of the induced immune response. The aim of this study was to develop a computational model to predict the optimal design of the microneedles and their arrangement within an array. The three-dimensional finite element model described the diffusion and kinetics in the skin following antigen delivery with a microneedle array. The results revealed an optimum distance between microneedles based on the number of activated antigen presenting cells, which was assumed to be related to the induced immune response. This optimum depends on the delivered dose. In addition, the microneedle length affects the number of cells that will be involved in either the epidermis or dermis. By contrast, the radius at the base of the microneedle and release rate only minimally influenced the number of cells that were activated. The model revealed the importance of various geometric parameters to enhance the induced immune response. The model can be developed further to determine the optimal design of an array by adjusting its various parameters to a specific situation.

  6. Three-dimensional analysis of root canal geometry by high-resolution computed tomography.

    PubMed

    Peters, O A; Laib, A; Rüegsegger, P; Barbakow, F

    2000-06-01

    A detailed understanding of the complexity of root canal systems is imperative to ensure successful root canal preparation. The aim of this study was to evaluate the potential and accuracy of a three-dimensional, non-destructive technique for detailing root canal geometry by means of high-resolution tomography. The anatomy of root canals in 12 extracted human maxillary molars was analyzed by means of a micro-computed tomography scanner (microCT, cubic resolution 34 µm). A special mounting device facilitated repeated precise repositioning of the teeth in the microCT. Surface areas and volumes of each canal were calculated by triangulation, and means were determined. Model-independent methods were used to evaluate the canals' diameters and configuration. The calculated and measured volumes and areas of artificial root canals, produced by the drilling of precision holes into dentin disks, were well correlated. Semi-automated repositioning of specimens resulted in near-perfect matching (< 1 voxel) when outer canal contours were assessed. Root canal geometry was accurately assessed by this innovative technique; therefore, the variables and indices presented may serve as a basis for further analyses of root canal anatomy in experimental endodontology. PMID:10890720
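    The surface-area and volume calculations by triangulation can be sketched in a few lines of Python (divergence-theorem volume; a closed, consistently oriented surface mesh is assumed):

        import numpy as np

        def area_volume(verts, tris):
            # verts: (n, 3) vertex coordinates; tris: (m, 3) vertex indices
            v0, v1, v2 = (verts[tris[:, i]] for i in range(3))
            cr = np.cross(v1 - v0, v2 - v0)
            area = 0.5 * np.linalg.norm(cr, axis=1).sum()
            vol = np.einsum('ij,ij->i', v0, np.cross(v1, v2)).sum() / 6.0
            return area, abs(vol)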

  7. Grid generation and inviscid flow computation about cranked-winged airplane geometries

    NASA Technical Reports Server (NTRS)

    Eriksson, L.-E.; Smith, R. E.; Wiese, M. R.; Farr, N.

    1987-01-01

    An algebraic grid generation procedure that defines a patched multiple-block grid system suitable for fighter-type aircraft geometries with fuselage and engine inlet, canard or horizontal tail, cranked delta wing and vertical fin has been developed. The grid generation is based on transfinite interpolation and requires little computational power. A finite-volume Euler solver using explicit Runge-Kutta time-stepping has been adapted to this grid system and implemented on the VPS-32 vector processor with a high degree of vectorization. Grids are presented for an experimental aircraft with fuselage, canard, 70-20-cranked wing, and vertical fin. Computed inviscid compressible flow solutions are presented for Mach 2 at 3.79, 7 and 10 deg angles of attack. Comparisons of the 3.79 deg computed solutions are made with available full-potential flow and Euler flow solutions on the same configuration but with another grid system. The occurrence of an unsteady solution in the 10 deg angle of attack case is discussed.

  8. On the noncommutative spin geometry of the standard Podleś sphere and index computations

    NASA Astrophysics Data System (ADS)

    Wagner, Elmar

    2009-07-01

    The purpose of the paper is twofold: First, known results of the noncommutative spin geometry of the standard Podleś sphere are extended by discussing Poincaré duality and orientability. In the discussion of orientability, Hochschild homology is replaced by a twisted version which avoids the dimension drop. The twisted Hochschild cycle representing an orientation is related to the volume form of the distinguished covariant differential calculus. Integration over the volume form defines a twisted cyclic 2-cocycle which computes the q-winding numbers of quantum line bundles. Second, a "twisted" Chern character from equivariant K0-theory to even twisted cyclic homology is introduced which gives rise to a Chern-Connes pairing between equivariant K0-theory and twisted cyclic cohomology. The Chern-Connes pairing between the equivariant K0-group of the standard Podleś sphere and the generators of twisted cyclic cohomology relative to the modular automorphism and its inverse are computed. This includes the pairings with the twisted cyclic 2-cocycle associated to the volume form, and the one corresponding to the "no-dimension drop" case. From explicit index computations, it follows that the pairings with these cocycles give the q-indices of the known equivariant 0-summable Dirac operator on the standard Podleś sphere.

  9. A computational geometry framework for the optimisation of atom probe reconstructions.

    PubMed

    Felfer, Peter; Cairney, Julie

    2016-10-01

    In this paper, we present pathways for improving the reconstruction of atom probe data on a coarse (>10 nm) scale, based on computational geometry. We introduce a way to iteratively improve an atom probe reconstruction by adjusting it so that certain known shape criteria are fulfilled. This is achieved by creating an implicit approximation of the reconstruction through a barycentric coordinate transform. We demonstrate the application of these techniques to the compensation of trajectory aberrations and the iterative improvement of the reconstruction of a dataset containing a grain boundary. We also present a method for obtaining a hull of the dataset in both detector and reconstruction space. This maximises data utilisation, and can be used to compensate for ion trajectory aberrations caused by residual fields in the ion flight path through a 'master curve' and to correct for overall shape deviations in the data.
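    The barycentric coordinate transform can be cartooned in Python for a single tetrahedron: a point keeps its barycentric coordinates while the mesh that carries it is adjusted (a deliberately simplified picture of the implicit approximation described above):

        import numpy as np

        def barycentric(p, tet):
            # tet: (4, 3) vertex array; returns 4 barycentric coordinates of p
            T = np.column_stack([tet[1] - tet[0],
                                 tet[2] - tet[0],
                                 tet[3] - tet[0]])
            lam = np.linalg.solve(T, np.asarray(p, dtype=float) - tet[0])
            return np.concatenate(([1.0 - lam.sum()], lam))

        def map_point(p, tet_before, tet_after):
            # re-express p relative to the adjusted mesh: the barycentric
            # coordinates stay fixed while the vertices move
            return barycentric(p, tet_before) @ np.asarray(tet_after)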

  10. NASA geometry data exchange specification for computational fluid dynamics (NASA IGES)

    NASA Technical Reports Server (NTRS)

    Blake, Matthew W.; Kerr, Patricia A.; Thorp, Scott A.; Jou, Jin J.

    1994-01-01

    This document specifies a subset of an existing product data exchange specification that is widely used in industry and government. The existing document is called the Initial Graphics Exchange Specification. This document, a subset of IGES, is intended for engineers analyzing product performance using tools such as computational fluid dynamics (CFD) software. This document specifies how to define mathematically and exchange the geometric model of an object. The geometry is represented using nonuniform rational B-spline (NURBS) curves and surfaces. Only surface models are represented; no solid model representation is included. This specification does not include most of the other types of product information available in IGES (e.g., no material properties or surface finish properties) and does not provide all the specific file format details of IGES. The data exchange protocol specified in this document fully conforms to the American National Standard (ANSI) IGES 5.2.

  13. Computational Approaches to Study Microbes and Microbiomes

    PubMed Central

    Greene, Casey S.; Foster, James A.; Stanton, Bruce A.; Hogan, Deborah A.; Bromberg, Yana

    2016-01-01

    Technological advances are making large-scale measurements of microbial communities commonplace. These newly acquired datasets are allowing researchers to ask and answer questions about the composition of microbial communities, the roles of members in these communities, and how genes and molecular pathways are regulated in individual community members and communities as a whole to effectively respond to diverse and changing environments. In addition to providing a more comprehensive survey of the microbial world, this new information allows for the development of computational approaches to model the processes underlying microbial systems. We anticipate that the field of computational microbiology will continue to grow rapidly in the coming years. In this manuscript we highlight both areas of particular interest in microbiology and computational approaches that begin to address these challenges. PMID:26776218

  14. Project Integration Architecture (PIA) and Computational Analysis Programming Interface (CAPRI) for Accessing Geometry Data from CAD Files

    NASA Technical Reports Server (NTRS)

    Benyo, Theresa L.

    2002-01-01

    Integration of a supersonic inlet simulation with a computer aided design (CAD) system is demonstrated. The integration is performed using the Project Integration Architecture (PIA). PIA provides a common environment for wrapping many types of applications. Accessing geometry data from CAD files is accomplished by incorporating appropriate function calls from the Computational Analysis Programming Interface (CAPRI). CAPRI is a CAD-vendor-neutral programming interface that aids in acquiring geometry data directly from CAD files. The benefits of wrapping a supersonic inlet simulation into PIA using CAPRI are: direct access to geometry data, accurate capture of geometry data, automatic conversion of data units, CAD-vendor-neutral operation, and on-line interactive history capture. This paper describes the PIA and the CAPRI wrapper and details the supersonic inlet simulation demonstration.

  15. Description of the F-16XL Geometry and Computational Grids Used in CAWAPI

    NASA Technical Reports Server (NTRS)

    Boelens, O. J.; Badcock, K. J.; Gortz, S.; Morton, S.; Fritz, W.; Karman, S. L., Jr.; Michal, T.; Lamar, J. E.

    2009-01-01

    The objective of the Cranked-Arrow Wing Aerodynamics Project International (CAWAPI) was to allow a comprehensive validation of Computational Fluid Dynamics methods against the CAWAP flight database. A major part of this work involved the generation of high-quality computational grids. Prior to the grid generation, an IGES file containing the air-tight geometry of the F-16XL aircraft was generated through a cooperation of the CAWAPI partners. Based on this geometry description, both structured and unstructured grids have been generated. The baseline structured (multi-block) grid (and a family of derived grids) was generated by the National Aerospace Laboratory NLR. Although the algorithms used by NLR had become available just before CAWAPI, and thus only limited experience with their application to such a complex configuration had been gained, a grid of good quality was generated well within four weeks. This time compared favourably with that required to produce the unstructured grids in CAWAPI. The baseline all-tetrahedral and hybrid unstructured grids were generated at NASA Langley Research Center and the USAFA, respectively. To provide more geometrical resolution, trimmed unstructured grids were generated at EADS-MAS, the UTSimCenter, Boeing Phantom Works and KTH/FOI. All grids generated within the framework of CAWAPI are discussed in the article. Results obtained on both the structured and the unstructured grids showed a significant improvement in agreement with flight test data in comparison with those obtained on the structured multi-block grid used during CAWAP.

  16. Optimization of numerical weather/wave prediction models based on information geometry and computational techniques

    NASA Astrophysics Data System (ADS)

    Galanis, George; Famelis, Ioannis; Kalogeri, Christina

    2014-10-01

    In recent years, a new and highly demanding framework has been set for the environmental sciences and applied mathematics by issues that concern not only the scientific community but today's society in general: global warming, renewable energy resources and natural hazards can be listed among them. The research community today follows two main directions in order to address these problems: the utilization of environmental observations obtained from in situ or remote sensing sources, and meteorological-oceanographic simulations based on physical-mathematical models. In particular, in trying to reach credible local forecasts, the two previous data sources are combined by algorithms that are essentially based on optimization processes. The conventional approaches in this framework usually neglect the topological-geometrical properties of the space of the data under study by adopting least-squares methods based on classical Euclidean geometry tools. In the present work, new optimization techniques are discussed that make use of methodologies from a rapidly advancing branch of applied mathematics, information geometry. The latter shows that the distributions of data sets are elements of non-Euclidean structures in which the underlying geometry may differ significantly from the classical one. Geometrical entities like Riemannian metrics, distances, curvature and affine connections are utilized to define the optimal distributions fitting the environmental data at specific areas and to form the differential systems that describe the optimization procedures. The proposed methodology is illustrated by an application to wind speed forecasts on the island of Kefalonia, Greece.
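
    As a minimal illustration of the non-Euclidean structures invoked here, the sketch below estimates the Fisher information metric of a univariate Gaussian family by Monte Carlo and compares it with the analytic result g = diag(1/sigma^2, 2/sigma^2); the sample size and parameter values are arbitrary.

        import numpy as np

        def fisher_gaussian(mu, sigma, n=200000, seed=1):
            rng = np.random.default_rng(seed)
            x = rng.normal(mu, sigma, n)
            # score vector: derivatives of the log-density w.r.t. (mu, sigma)
            s_mu = (x - mu) / sigma**2
            s_sigma = ((x - mu)**2 - sigma**2) / sigma**3
            S = np.stack([s_mu, s_sigma])
            return S @ S.T / n          # Monte Carlo estimate of E[score score^T]

        print(fisher_gaussian(0.0, 2.0))  # ~ [[0.25, 0], [0, 0.5]] for sigma = 2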

  17. Two-phase flow in complex geometries: A diffuse domain approach

    PubMed Central

    Aland, S.; Voigt, A.

    2011-01-01

    We present a new method for simulating two-phase flows in complex geometries, taking into account contact lines separating immiscible incompressible components. We combine the diffuse domain method for solving PDEs in complex geometries with the diffuse-interface (phase-field) method for simulating multiphase flows. In this approach, the complex geometry is described implicitly by introducing a new phase-field variable, which is a smooth approximation of the characteristic function of the complex domain. The fluid and component concentration equations are reformulated and solved in a larger, regular domain with the boundary conditions being implicitly modeled using source terms. The method is straightforward to implement using standard software packages; we use adaptive finite elements here. We present numerical examples demonstrating the effectiveness of the algorithm. We simulate multiphase flow in a driven cavity on an extended domain and find very good agreement with results obtained by solving the equations and boundary conditions in the original domain. We then consider successively more complex geometries and simulate a droplet sliding down a rippled ramp in 2D and 3D, a droplet flowing through a Y-junction in a microfluidic network and finally chaotic mixing in a droplet flowing through a winding, serpentine channel. The latter example actually incorporates two different diffuse domains: one describes the evolving droplet where mixing occurs while the other describes the channel. PMID:21918638
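
    A minimal sketch of the core diffuse-domain construction, assuming a disc as the "complex" domain and a standard tanh interface profile (the paper's exact regularization of the characteristic function may differ):

        import numpy as np

        n, eps = 256, 0.05
        x = np.linspace(-1.0, 1.0, n)
        X, Y = np.meshgrid(x, x)
        d = np.sqrt(X**2 + Y**2) - 0.6           # signed distance to a disc boundary
        phi = 0.5 * (1.0 - np.tanh(3.0 * d / eps))

        # phi ~ 1 inside the domain, ~ 0 outside, smooth across a layer of width eps
        print(phi[n // 2, n // 2], phi[0, 0])    # ~1.0 at the center, ~0.0 in a corner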

  18. Computational Approach for Developing Blood Pump

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan

    2002-01-01

    This viewgraph presentation provides an overview of the computational approach to developing a ventricular assist device (VAD) which utilizes NASA aerospace technology. The VAD is used as a temporary support to sick ventricles for those who suffer from late stage congestive heart failure (CHF). The need for donor hearts is much greater than their availability, and the VAD is seen as a bridge-to-transplant. The computational issues confronting the design of a more advanced, reliable VAD include the modelling of viscous incompressible flow. A computational approach provides the possibility of quantifying the flow characteristics, which is especially valuable for analyzing compact designs with highly sensitive operating conditions. Computational fluid dynamics (CFD) and rocket engine technology have been applied to modify the design of a VAD which enabled human transplantation. The computing requirement for this project is still large, however, and the unsteady analysis of the entire system from natural heart to aorta involves several hundred revolutions of the impeller. Further study is needed to assess the impact of mechanical VADs on the human body.

  19. Approach to constructing reconfigurable computer vision system

    NASA Astrophysics Data System (ADS)

    Xue, Jianru; Zheng, Nanning; Wang, Xiaoling; Zhang, Yongping

    2000-10-01

    In this paper, we propose an approach to constructing a reconfigurable vision system. We found that timely and efficient execution of early tasks can significantly enhance the performance of whole computer vision tasks, so we abstract out a set of basic, computationally intensive stream operations that may be performed in parallel and embody them in a series of specific front-end processors. These processors, which are based on FPGAs (field programmable gate arrays), can be re-programmed to permit a range of different types of feature maps, such as edge detection and linking, and image filtering. The front-end processors and a powerful DSP constitute a computing platform which can perform many computer vision tasks. Additionally, we adopt focus-of-attention technologies to reduce the I/O and computational demands by performing early vision processing only within a particular region of interest. We then implement a multi-page, dual-ported image memory interface between the image input and the computing platform (front-end processors and DSP). Early vision features are loaded into banks of dual-ported image memory arrays, which are continually raster-scan updated at high speed from the input image or video data stream. Moreover, the computing platform has completely asynchronous, random access to the image data and any other early vision feature maps through the dual-ported memory banks. In this way, the computing platform resources can be properly allocated to a region of interest and decoupled from the task of dealing with a high-speed serial raster-scan input. Finally, we choose the PCI bus as the main channel between the PC and the computing platform. Consequently, the front-end processors' control registers and the DSP's program memory are mapped into the PC's memory space, which provides user access to reconfigure the system at any time. We also present the test results of a computer vision application based on the system.

  20. An immersed boundary computational model for acoustic scattering problems with complex geometries.

    PubMed

    Sun, Xiaofeng; Jiang, Yongsong; Liang, An; Jing, Xiaodong

    2012-11-01

    An immersed boundary computational model is presented in order to deal with acoustic scattering problems involving complex geometries, in which the wall boundary condition is treated as a direct body force determined by satisfying the non-penetrating boundary condition. Two distinct grids are used to discretize the fluid domain and the immersed boundary, respectively. The immersed boundaries are represented by Lagrangian points, and the direct body force determined on these points is applied on the neighboring Eulerian points. The coupling between the Lagrangian and Eulerian points is realized through a discrete delta function. The linearized Euler equations are spatially discretized with a fourth-order dispersion-relation-preserving scheme and temporally integrated with a low-dissipation and low-dispersion Runge-Kutta scheme. A perfectly matched layer technique is applied to absorb out-going waves and in-going waves in the immersed bodies. Several benchmark problems for computational aeroacoustic solvers are performed to validate the present method.
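
    The spreading of a Lagrangian point force through a discrete delta function can be sketched in one dimension as below. The cosine kernel used here is one common regularized delta (the paper's kernel may differ), and the grid spacing and force value are arbitrary.

        import numpy as np

        def delta_h(r, h):
            """Cosine-type regularized delta with support |r| < 2h."""
            out = np.zeros_like(r)
            m = np.abs(r) < 2 * h
            out[m] = (1 + np.cos(np.pi * r[m] / (2 * h))) / (4 * h)
            return out

        h = 0.1
        x = np.arange(0.0, 2.0 + h, h)           # Eulerian grid points
        X_lag, F = 0.93, 1.0                     # Lagrangian point and its force
        f_grid = F * delta_h(x - X_lag, h) * h   # force assigned to each cell

        print(f_grid.sum())                      # ~1.0: the kernel conserves total force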

  1. Parallel 3D computation of unsteady wake flows with complex geometries and fluid-structure interactions

    NASA Astrophysics Data System (ADS)

    Osawa, Yasuo

    New powerful parallel computational tools are developed for 3D simulation of unsteady wake flows with complex geometries and fluid-structure interactions. The base method for flow simulation is a finite element formulation for the Navier-Stokes equations. The finite element formulation is based on the streamline-upwind/Petrov-Galerkin (SUPG) and pressure-stabilizing/Petrov-Galerkin (PSPG) techniques. These stabilization techniques facilitate simulation of flows with high Reynolds numbers, and allow us to use equal-order interpolation functions for velocity and pressure without generating numerical oscillations. A multi-domain computational method is developed to simulate wake flow in both the near and far downstream regions. The formulations lead to coupled nonlinear equation systems which are solved, at every time step, with the Newton-Raphson method. The overall formulation and solution techniques are implemented on parallel platforms such as the CRAY T3E and SGI PowerChallenge. Two phases of vortex shedding for flow past a cylinder are simulated to verify the accuracy of this method. The Enhanced-Discretization Interface Capturing Technique (EDICT) is utilized to simulate wake flow accurately. A fluid-structure coupling solution method based on the Deforming-Spatial-Domain/Stabilized Space-Time (DSD/SST) formulation is applied to simulate the behavior of a parachute in the unsteady wake.

  2. Computational Simulations of Inferior Vena Cava (IVC) Filter Placement and Hemodynamics in Patient-Specific Geometries

    NASA Astrophysics Data System (ADS)

    Aycock, Kenneth; Sastry, Shankar; Kim, Jibum; Shontz, Suzanne; Campbell, Robert; Manning, Keefe; Lynch, Frank; Craven, Brent

    2013-11-01

    A computational methodology for simulating inferior vena cava (IVC) filter placement and IVC hemodynamics was developed and tested on two patient-specific IVC geometries: a left-sided IVC, and an IVC with a retroaortic left renal vein. Virtual IVC filter placement was performed with finite element analysis (FEA) using non-linear material models and contact modeling, yielding maximum vein displacements of approximately 10% of the IVC diameters. Blood flow was then simulated using computational fluid dynamics (CFD) with four cases for each patient IVC: 1) an IVC only, 2) an IVC with a placed filter, 3) an IVC with a placed filter and a model embolus, all at resting flow conditions, and 4) an IVC with a placed filter and a model embolus at exercise flow conditions. Significant hemodynamic differences were observed between the two patient IVCs, with the development of a right-sided jet (all cases) and a larger stagnation region (cases 3-4) in the left-sided IVC. These results support further investigation of the effects of IVC filter placement on a patient-specific basis.

  3. Fractal geometry-based classification approach for the recognition of lung cancer cells

    NASA Astrophysics Data System (ADS)

    Xia, Deshen; Gao, Wenqing; Li, Hua

    1994-05-01

    This paper describes a new fractal geometry-based classification approach for the recognition of lung cancer cells, for use in health inspection for lung cancer. Because cancer cells grow much faster and more irregularly than normal cells do, the shape of a segmented cancer cell is very irregular and can be considered as a figure without characteristic length. We use the Texture Energy Intensity Rn for fractal preprocessing to segment the cells from the image and calculate the fractal dimension value to extract the fractal features, so that we can obtain the shape characteristics of different cancer cells and of normal cells, respectively. Fractal geometry gives us a correct description of cancer-cell shapes. Through this method, good recognition of adenoma, squamous, and small cancer cells can be obtained.
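
    Box counting is one standard way to obtain the fractal dimension feature described here. The sketch below applies it to a toy binary mask (a straight line, of dimension 1) rather than to segmented cell images, and omits the authors' texture-energy preprocessing.

        import numpy as np

        def box_count_dimension(mask, sizes=(2, 4, 8, 16, 32)):
            counts = []
            for s in sizes:
                n = mask.shape[0] // s
                blocks = mask[:n * s, :n * s].reshape(n, s, n, s)
                counts.append(blocks.any(axis=(1, 3)).sum())  # boxes touching the set
            # slope of log N(s) vs log(1/s) approximates the box dimension
            slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
            return slope

        mask = np.zeros((64, 64), bool)
        mask[np.diag_indices(64)] = True          # a diagonal line: dimension ~ 1
        print(box_count_dimension(mask))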

  4. Cognitive Load for Configuration Comprehension in Computer-Supported Geometry Problem Solving: An Eye Movement Perspective

    ERIC Educational Resources Information Center

    Lin, John Jr-Hung; Lin, Sunny S. J.

    2014-01-01

    The present study investigated (a) whether the perceived cognitive load was different when geometry problems with various levels of configuration comprehension were solved and (b) whether eye movements in comprehending geometry problems showed sources of cognitive loads. In the first investigation, three characteristics of geometry configurations…

  5. Geometry Design Optimization of Functionally Graded Scaffolds for Bone Tissue Engineering: A Mechanobiological Approach

    PubMed Central

    Boccaccio, Antonio; Uva, Antonio Emmanuele; Fiorentino, Michele; Mori, Giorgio; Monno, Giuseppe

    2016-01-01

    Functionally Graded Scaffolds (FGSs) are porous biomaterials where porosity changes in space with a specific gradient. In spite of their wide use in bone tissue engineering, possible models that relate the scaffold gradient to the mechanical and biological requirements for the regeneration of the bony tissue are currently missing. In this study we attempt to bridge the gap by developing a mechanobiology-based optimization algorithm aimed to determine the optimal graded porosity distribution in FGSs. The algorithm combines the parametric finite element model of a FGS, a computational mechano-regulation model and a numerical optimization routine. For assigned boundary and loading conditions, the algorithm builds iteratively different scaffold geometry configurations with different porosity distributions until the best microstructure geometry is reached, i.e. the geometry that allows the amount of bone formation to be maximized. We tested different porosity distribution laws, loading conditions and scaffold Young’s modulus values. For each combination of these variables, the explicit equation of the porosity distribution law, i.e. the law that describes the pore dimensions as a function of the spatial coordinates, was determined that allows the highest amounts of bone to be generated. The results show that the loading conditions affect significantly the optimal porosity distribution. For a pure compression loading, it was found that the pore dimensions are almost constant throughout the entire scaffold and using a FGS allows the formation of amounts of bone slightly larger than those obtainable with a homogeneous porosity scaffold. For a pure shear loading, instead, FGSs allow the bone formation to be increased significantly compared to a homogeneous porosity scaffold. Although experimental data are still necessary to properly relate the mechanical/biological environment to the scaffold microstructure, this model represents an important step towards optimizing geometry

  6. High performance parallel computing of flows in complex geometries: II. Applications

    NASA Astrophysics Data System (ADS)

    Gourdain, N.; Gicquel, L.; Staffelbach, G.; Vermorel, O.; Duchaine, F.; Boussuge, J.-F.; Poinsot, T.

    2009-01-01

    Present regulations in terms of pollutant emissions, noise and economic constraints require new approaches and designs in the fields of energy supply and transportation. It is now well established that the next breakthrough will come from a better understanding of unsteady flow effects and from considering the entire system rather than only isolated components. However, these aspects are still not well captured by the numerical approaches, nor well understood, whatever design stage is considered. The main challenge lies in the computational requirements implied by such complex systems if they are to be simulated on supercomputers. This paper shows how these challenges can be addressed by using parallel computing platforms for distinct elements of a more complex system as encountered in aeronautical applications. Based on numerical simulations performed with modern aerodynamic and reactive flow solvers, this work underlines the value of high-performance computing for solving flows in complex industrial configurations such as aircraft, combustion chambers and turbomachines. Performance indicators related to parallel computing efficiency are presented, showing that establishing fair criteria is a difficult task for complex industrial applications. Examples of numerical simulations performed in industrial systems are also described, with particular attention to the computational time and the potential design improvements obtained with high-fidelity and multi-physics computing methods. These simulations use either unsteady Reynolds-averaged Navier-Stokes methods or large eddy simulation and deal with turbulent unsteady flows, such as coupled flow phenomena (thermo-acoustic instabilities, buffet, etc.). Some examples of the difficulties with grid generation and data analysis are also presented for these complex industrial applications.

  7. A Combined Geometric Approach for Computational Fluid Dynamics on Dynamic Grids

    NASA Technical Reports Server (NTRS)

    Slater, John W.

    1995-01-01

    A combined geometric approach for computational fluid dynamics is presented for the analysis of unsteady flow about mechanisms whose components are in moderate relative motion. For a CFD analysis, the total dynamics problem involves the dynamics of the geometry modeling, grid generation, and flow modeling aspects. The interrelationships between these three aspects allow for a more natural formulation of the problem and the sharing of information, which can be advantageous to the computation of the dynamics. The approach is applied to planar geometries with the use of an efficient multi-block, structured grid generation method to compute unsteady, two-dimensional and axisymmetric flow. The applications presented include the computation of the unsteady, inviscid flow about a hinged flap with flap deflections and a high-speed inlet with centerbody motion as part of the unstart/restart operation.

  8. Development of a Computer-Aided-Design-Based Geometry and Mesh Movement Algorithm for Three-Dimensional Aerodynamic Shape Optimization

    NASA Astrophysics Data System (ADS)

    Truong, Anh Hoang

    This thesis focuses on the development of a Computer-Aided-Design (CAD)-based geometry parameterization method and a corresponding surface mesh movement algorithm suitable for three-dimensional aerodynamic shape optimization. The geometry parameterization method includes a geometry control tool to aid in the construction and manipulation of a CAD geometry through a vendor-neutral application interface, CAPRI. It automates the tedious part of the construction phase involving data entry and provides intuitive and effective design variables that allow for both the flexibility and the precision required to control the movement of the geometry. The surface mesh movement algorithm, on the other hand, transforms an initial structured surface mesh to fit the new geometry using a discrete representation of the new CAD surface provided by CAPRI. Using a unique mapping procedure, the algorithm not only preserves the characteristics of the original surface mesh, but also guarantees that the new mesh points are on the CAD geometry. The new surface mesh is then smoothed in the parametric space before it is transformed back into three-dimensional space. The procedure is efficient in that all the processing is done in the parametric space, incurring minimal computational cost. The geometry parameterization and mesh movement tools are integrated into a three-dimensional shape optimization framework, with a linear-elasticity volume-mesh movement algorithm, a Newton-Krylov flow solver for the Euler equations, and a gradient-based optimizer. The validity and accuracy of the CAD-based optimization algorithm are demonstrated through a number of verification and optimization cases.

  9. Creating geometry and mesh models for nuclear reactor core geometries using a lattice hierarchy-based approach.

    SciTech Connect

    Tautges, T. J.; Jain, R.; Mathematics and Computer Science

    2010-01-01

    Nuclear reactor cores are constructed as rectangular or hexagonal lattices of assemblies, where each assembly is itself a lattice of fuel, control, and instrumentation pins, surrounded by water or other material that moderates neutron energy and carries away fission heat. We describe a system for generating geometry and mesh for these systems. The method takes advantage of information about repeated structures in both assembly and core lattices to simplify the overall process. The system allows targeted user intervention midway through the process, enabling modification and manipulation of models for meshing or other purposes. Starting from text files describing assemblies and core, the tool can generate geometry and mesh for these models automatically as well. Simple and complex examples of tool operation are given, with the latter demonstrating generation of meshes with 12 million hexahedral elements in less than 30 minutes on a desktop workstation, using about 4 GB of memory. The tool is released as open source software as part of the MeshKit mesh generation library.
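
    The repeated-lattice information such a tool exploits can be illustrated by generating pin-center coordinates for a hexagonal assembly. The axial-to-Cartesian mapping below is standard hex-grid geometry; the pitch and ring count are arbitrary, and this is a sketch, not MeshKit code.

        import numpy as np

        def hex_lattice(rings, pitch):
            """Pin centers for a hexagonal lattice with the given number of rings."""
            pts = [(0.0, 0.0)]
            for q in range(-rings, rings + 1):
                for r in range(-rings, rings + 1):
                    if (q == 0 and r == 0) or abs(q + r) > rings:
                        continue
                    x = pitch * (q + r / 2.0)                # axial -> Cartesian
                    y = pitch * (np.sqrt(3.0) / 2.0) * r
                    pts.append((x, y))
            return np.array(pts)

        print(len(hex_lattice(2, 1.26)))   # 19 pins in a two-ring hexagonal lattice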

  10. Effect of inlet geometry on macrosegregation during the direct chill casting of 7050 alloy billets: experiments and computer modelling

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Eskin, D. G.; Miroux, A.; Subroto, T.; Katgerman, L.

    2012-07-01

    Controlling macrosegregation is one of the major challenges in direct-chill (DC) casting of aluminium alloys. In this paper, the effect of the inlet geometry (which influences the melt distribution) on macrosegregation during the DC casting of 7050 alloy billets was studied experimentally and by using 2D computer modelling. The ALSIM model was used to determine the temperature and flow patterns during DC casting. The results from the computer simulations show that the sump profiles and flow patterns in the billet are strongly influenced by the melt flow distribution determined by the inlet geometry. These observations were correlated to the actual macrosegregation patterns found in the as-cast billets produced with two different inlet geometries. The macrosegregation analysis presented here may assist in determining the critical parameters to consider for improving the casting of 7XXX aluminium alloys.

  11. Computer-aided image geometry analysis and subset selection for optimizing texture quality in photorealistic models

    NASA Astrophysics Data System (ADS)

    Sima, Aleksandra Anna; Bonaventura, Xavier; Feixas, Miquel; Sbert, Mateu; Howell, John Anthony; Viola, Ivan; Buckley, Simon John

    2013-03-01

    Photorealistic 3D models are used for visualization, interpretation and spatial measurement in many disciplines, such as cultural heritage, archaeology and geoscience. Using modern image- and laser-based 3D modelling techniques, it is normal to acquire more data than is finally used for 3D model texturing, as images may be acquired from multiple positions, with large overlap, or with different cameras and lenses. Such redundant image sets require sorting to restrict the number of images, increasing the processing efficiency and realism of models. However, selection of image subsets optimized for texturing purposes is an example of complex spatial analysis. Manual selection may be challenging and time-consuming, especially for models of rugose topography, where the user must account for occlusions and ensure coverage of all relevant model triangles. To address this, this paper presents a framework for computer-aided image geometry analysis and subset selection for optimizing texture quality in photorealistic models. The framework was created to offer algorithms for candidate image subset selection, whilst supporting refinement of subsets in an intuitive and visual manner. Automatic image sorting was implemented using algorithms originating in computer science and information theory, and variants of these were compared using multiple 3D models and covering image sets collected for geological applications. The image subsets provided by the automatic procedures were compared to manually selected sets and their suitability for 3D model texturing was assessed. Results indicate that the automatic sorting algorithms are a promising alternative to manual methods. An algorithm based on a greedy solution to the weighted set-cover problem provided image sets closest to the quality and size of the manually selected sets. The improved automation and more reliable quality indicators make the photorealistic model creation workflow more accessible for application experts.
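
    A minimal sketch of the greedy weighted set-cover selection mentioned above: each image is treated as the set of model triangles it can texture, and the image with the best newly-covered-triangles-per-cost ratio is taken repeatedly. The data, names and uniform cost model are illustrative assumptions, not the paper's implementation.

        def greedy_weighted_set_cover(universe, sets, weights):
            uncovered, chosen = set(universe), []
            while uncovered:
                best = max(
                    (s for s in sets if sets[s] & uncovered),
                    key=lambda s: len(sets[s] & uncovered) / weights[s],
                    default=None,
                )
                if best is None:      # some triangles are visible in no image
                    break
                chosen.append(best)
                uncovered -= sets[best]
            return chosen

        triangles = range(6)
        visible = {"img0": {0, 1, 2}, "img1": {2, 3}, "img2": {3, 4, 5}, "img3": {0, 5}}
        cost = {"img0": 1.0, "img1": 1.0, "img2": 1.0, "img3": 1.0}
        print(greedy_weighted_set_cover(triangles, visible, cost))  # ['img0', 'img2']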

  12. Geometry, analysis, and computation in mathematics and applied sciences. Final report

    SciTech Connect

    Kusner, R.B.; Hoffman, D.A.; Norman, P.; Pedit, F.; Whitaker, N.; Oliver, D.

    1995-12-31

    Since 1993, the GANG laboratory has been co-directed by David Hoffman, Rob Kusner and Peter Norman. A great deal of mathematical research has been carried out here by them and by GANG faculty members Franz Pedit and Nate Whitaker. Also, new communication tools, such as the GANG Webserver, have been developed. GANG has trained and supported nearly a dozen graduate students, and at least half as many undergraduates in REU projects. The GANG Seminar continues to thrive, making Amherst a site for short- and long-term visitors to come to work with the GANG. Some of the highlights of recent or ongoing research at GANG include: CMC surfaces, minimal surfaces, fluid dynamics, harmonic maps, isometric immersions, knot energies, foam structures, high-dimensional soap film singularities, elastic curves and surfaces, self-similar curvature evolution, integrable systems and theta functions, fully nonlinear geometric PDE, and geometric chemistry and biology. This report is divided into the following sections: (1) geometric variational problems; (2) soliton geometry; (3) embedded minimal surfaces; (4) numerical fluid dynamics and mathematical modeling; (5) GANG graphics and mathematical software; (6) description of the computational and visual analysis facility; and (7) research by undergraduates and GANG graduate seminar.

  13. Differential Item Functioning (DIF) Analysis of Computation, Word Problem and Geometry Questions across Gender and SES Groups.

    ERIC Educational Resources Information Center

    Berberoglu, Giray

    1995-01-01

    Item characteristic curves were compared across gender and socioeconomic status (SES) groups for the university entrance mathematics examination in Turkey to see if any group had an advantage in solving computation, word-problem, or geometry questions. Differential item functioning was found, and patterns are discussed. (SLD)

  14. Computer Forensics Education - the Open Source Approach

    NASA Astrophysics Data System (ADS)

    Huebner, Ewa; Bem, Derek; Cheung, Hon

    In this chapter we discuss the application of the open source software tools in computer forensics education at tertiary level. We argue that open source tools are more suitable than commercial tools, as they provide the opportunity for students to gain in-depth understanding and appreciation of the computer forensic process as opposed to familiarity with one software product, however complex and multi-functional. With the access to all source programs the students become more than just the consumers of the tools as future forensic investigators. They can also examine the code, understand the relationship between the binary images and relevant data structures, and in the process gain necessary background to become the future creators of new and improved forensic software tools. As a case study we present an advanced subject, Computer Forensics Workshop, which we designed for the Bachelor's degree in computer science at the University of Western Sydney. We based all laboratory work and the main take-home project in this subject on open source software tools. We found that without exception more than one suitable tool can be found to cover each topic in the curriculum adequately. We argue that this approach prepares students better for forensic field work, as they gain confidence to use a variety of tools, not just a single product they are familiar with.

  15. Computational Approaches for Predicting Biomedical Research Collaborations

    PubMed Central

    Zhang, Qing; Yu, Hong

    2014-01-01

    Biomedical research is increasingly collaborative, and successful collaborations often produce high impact work. Computational approaches can be developed for automatically predicting biomedical research collaborations. Previous works of collaboration prediction mainly explored the topological structures of research collaboration networks, leaving out rich semantic information from the publications themselves. In this paper, we propose supervised machine learning approaches to predict research collaborations in the biomedical field. We explored both the semantic features extracted from author research interest profiles and the author network topological features. We found that the most informative semantic features for author collaborations are related to research interest, including similarity of out-citing citations and similarity of abstracts. Of the four supervised machine learning models (naïve Bayes, naïve Bayes multinomial, SVMs, and logistic regression), the best performing model is logistic regression, with an ROC ranging from 0.766 to 0.980 on different datasets. To our knowledge we are the first to study in depth how research interest and productivity can be used for collaboration prediction. Our approach is computationally efficient, scalable and yet simple to implement. The datasets of this study are available at https://github.com/qingzhanggithub/medline-collaboration-datasets. PMID:25375164

  16. Computational Flow Modeling of a Simplified Integrated Tractor-Trailer Geometry

    SciTech Connect

    Salari, K; McWherter-Payne, M

    2003-09-15

    For several years, Sandia National Laboratories and Lawrence Livermore National Laboratory have been part of a consortium funded by the Department of Energy to improve fuel efficiency of heavy vehicles such as Class 8 trucks through aerodynamic drag reduction. The objective of this work is to demonstrate the feasibility of using the steady Reynolds-Averaged Navier-Stokes (RANS) approach to predict the flow field around heavy vehicles, with special emphasis on the base region of the trailer, and to compute the aerodynamic forces. In particular, Sandia's computational fluid dynamics code, SACCARA, was used to simulate the flow on a simplified model of a tractor-trailer vehicle. The results are presented and compared with NASA Ames experimental data to assess the predictive capability of RANS to model the flow field and predict the aerodynamic forces.

  18. Sculpting the band gap: a computational approach.

    PubMed

    Prasai, Kiran; Biswas, Parthapratim; Drabold, D A

    2015-01-01

    Materials with optimized band gap are needed in many specialized applications. In this work, we demonstrate that Hellmann-Feynman forces associated with the gap states can be used to find atomic coordinates that yield desired electronic density of states. Using tight-binding models, we show that this approach may be used to arrive at electronically designed models of amorphous silicon and carbon. We provide a simple recipe to include a priori electronic information in the formation of computer models of materials, and prove that this information may have profound structural consequences. The models are validated with plane-wave density functional calculations. PMID:26490203

  19. A Parametric Geometry Computational Fluid Dynamics (CFD) Study Utilizing Design of Experiments (DOE)

    NASA Technical Reports Server (NTRS)

    Rhew, Ray D.; Parker, Peter A.

    2007-01-01

    Design of Experiments (DOE) was applied to the launch abort system (LAS) geometric parameter study to efficiently identify and rank primary contributors to integrated drag over the vehicle's ascent trajectory using an order of magnitude fewer CFD configurations, thereby reducing computational resources and solution time. SMEs were able to gain a better understanding of the underlying flow physics of different geometric parameter configurations through the identification of interaction effects. An interaction effect, which describes how the effect of one factor changes with respect to the levels of other factors, is often the key to product optimization. A DOE approach emphasizes a sequential approach to learning through successive experimentation to continuously build on previous knowledge. These studies represent a starting point for expanded experimental activities that will eventually cover the entire design space of the vehicle and flight trajectory.

  20. Computational approaches to motor learning by imitation.

    PubMed Central

    Schaal, Stefan; Ijspeert, Auke; Billard, Aude

    2003-01-01

    Movement imitation requires a complex set of mechanisms that map an observed movement of a teacher onto one's own movement apparatus. Relevant problems include movement recognition, pose estimation, pose tracking, body correspondence, coordinate transformation from external to egocentric space, matching of observed against previously learned movement, resolution of redundant degrees-of-freedom that are unconstrained by the observation, suitable movement representations for imitation, modularization of motor control, etc. All of these topics by themselves are active research problems in computational and neurobiological sciences, such that their combination into a complete imitation system remains a daunting undertaking; indeed, one could argue that we need to understand the complete perception-action loop. As a strategy to untangle the complexity of imitation, this paper will examine imitation purely from a computational point of view, i.e. we will review statistical and mathematical approaches that have been suggested for tackling parts of the imitation problem, and discuss their merits, disadvantages and underlying principles. Given the focus on action recognition of other contributions in this special issue, this paper will primarily emphasize the motor side of imitation, assuming that a perceptual system has already identified important features of a demonstrated movement and created their corresponding spatial information. Based on the formalization of motor control in terms of control policies and their associated performance criteria, useful taxonomies of imitation learning can be generated that clarify different approaches and future research directions. PMID:12689379

  1. Effect of ocular shape and vascular geometry on retinal hemodynamics: a computational model.

    PubMed

    Dziubek, Andrea; Guidoboni, Giovanna; Harris, Alon; Hirani, Anil N; Rusjan, Edmond; Thistleton, William

    2016-08-01

    A computational model for retinal hemodynamics accounting for ocular curvature is presented. The model combines (i) a hierarchical Darcy model for the flow through small arterioles, capillaries and small venules in the retinal tissue, where blood vessels of different size are comprised in different hierarchical levels of a porous medium; and (ii) a one-dimensional network model for the blood flow through retinal arterioles and venules of larger size. The non-planar ocular shape is included by (i) defining the hierarchical Darcy flow model on a two-dimensional curved surface embedded in the three-dimensional space; and (ii) mapping the simplified one-dimensional network model onto the curved surface. The model is solved numerically using a finite element method in which spatial domain and hierarchical levels are discretized separately. For the finite element method, we use an exterior calculus-based implementation which permits an easier treatment of non-planar domains. Numerical solutions are verified against suitably constructed analytical solutions. Numerical experiments are performed to investigate how retinal hemodynamics is influenced by the ocular shape (sphere, oblate spheroid, prolate spheroid and barrel are compared) and vascular architecture (four vascular arcs and a branching vascular tree are compared). The model predictions show that changes in ocular shape induce non-uniform alterations of blood pressure and velocity in the retina. In particular, we found that (i) the temporal region is affected the least by changes in ocular shape, and (ii) the barrel shape departs the most from the hemispherical reference geometry in terms of associated pressure and velocity distributions in the retinal microvasculature. These results support the clinical hypothesis that alterations in ocular shape, such as those occurring in myopic eyes, might be associated with pathological alterations in retinal hemodynamics. PMID:26445874

  2. Validation of Methods for Computational Catalyst Design: Geometries, Structures, and Energies of Neutral and Charged Silver Clusters

    SciTech Connect

    Duanmu, Kaining; Truhlar, Donald G.

    2015-04-30

    We report a systematic study of small silver clusters, Agn, Agn+, and Agn–, n = 1–7. We studied all possible isomers of clusters with n = 5–7. We tested 42 exchange–correlation functionals, and we assess these functionals for their accuracy in three respects: geometries (quantitative prediction of internuclear distances), structures (the nature of the lowest-energy structure, for example, whether it is planar or nonplanar), and energies. We find that the ingredients of exchange–correlation functionals are indicators of their success in predicting geometries and structures: local exchange–correlation functionals are generally better than hybrid functionals for geometries; functionals depending on kinetic energy density are the best for predicting the lowest-energy isomer correctly, especially for predicting two-dimensional to three-dimensional transitions correctly. The accuracy for energies is less sensitive to the ingredient list. Our findings could be useful for guiding the selection of methods for computational catalyst design.

  3. Computations of Viscous Flows in Complex Geometries Using Multiblock Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, Erlendur; Ameri, Ali A.

    1995-01-01

    Generating high quality, structured, continuous, body-fitted grid systems (multiblock grid systems) for complicated geometries has long been a most labor-intensive and frustrating part of simulating flows in complicated geometries. Recently, new methodologies and software have emerged that greatly reduce the human effort required to generate high quality multiblock grid systems for complicated geometries. These methods and software require minimal input from the user: typically, only information about the topology of the block structure and the number of grid points. This paper demonstrates the use of the new breed of multiblock grid systems in simulations of internal flows in complicated geometries. The geometry used in this study is a duct with a sudden expansion, a partition, and an array of cylindrical pins. This geometry has many of the features typical of internal coolant passages in turbine blades. The grid system used in this study was generated using a commercially available grid generator. The simulations were done using a recently developed flow solver, TRAF3D.MB, that was specially designed to use multiblock grid systems.

  4. A computational approach to negative priming

    NASA Astrophysics Data System (ADS)

    Schrobsdorff, H.; Ihrke, M.; Kabisch, B.; Behrendt, J.; Hasselhorn, M.; Herrmann, J. Michael

    2007-09-01

    Priming is characterized by a sensitivity of reaction times to the sequence of stimuli in psychophysical experiments. The reduction of the reaction time observed in positive priming is well-known and experimentally understood (Scarborough et al., J. Exp. Psychol.: Hum. Percept. Perform., 3, pp. 1-17, 1977). Negative priming, the opposite effect, is experimentally less tangible (Fox, Psychonom. Bull. Rev., 2, pp. 145-173, 1995), and its dependence on subtle parameter changes (such as the response-stimulus interval) usually varies. The sensitivity of the negative priming effect bears great potential for applications in research in fields such as memory, selective attention, and ageing. We develop and analyse a computational realization, CISAM, of a recent psychological model for action decision making, the ISAM (Kabisch, PhD thesis, Friedrich-Schiller-Universitat, 2003), which is sensitive to priming conditions. With the dynamical systems approach of the CISAM, we show that a single adaptive threshold mechanism is sufficient to explain both positive and negative priming effects. This is achieved by comparing results obtained by the computational modelling with experimental data from our laboratory. The implementation provides a rich base from which testable predictions can be derived, e.g. with respect to hitherto untested stimulus combinations (e.g. single-object trials).

  5. FACET: a radiation view factor computer code for axisymmetric, 2D planar, and 3D geometries with shadowing

    SciTech Connect

    Shapiro, A.B.

    1983-08-01

    The computer code FACET calculates the radiation geometric view factor (alternatively called shape factor, angle factor, or configuration factor) between surfaces for axisymmetric, two-dimensional planar and three-dimensional geometries with interposed third surface obstructions. FACET was developed to calculate view factors for input to finite-element heat-transfer analysis codes. The first section of this report is a brief review of previous radiation-view-factor computer codes. The second section presents the defining integral equation for the geometric view factor between two surfaces and the assumptions made in its derivation. Also in this section are the numerical algorithms used to integrate this equation for the various geometries. The third section presents the algorithms used to detect self-shadowing and third-surface shadowing between the two surfaces for which a view factor is being calculated. The fourth section provides a user's input guide followed by several example problems.
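
    For intuition, the defining view-factor integral can be estimated by Monte Carlo for two directly opposed unit squares one unit apart, a configuration whose catalog value is about 0.1998. FACET itself integrates the equation with deterministic algorithms and handles shadowing, which this sketch omits.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200000
        p1 = np.column_stack([rng.random(n), rng.random(n), np.zeros(n)])  # on A1 (z=0)
        p2 = np.column_stack([rng.random(n), rng.random(n), np.ones(n)])   # on A2 (z=1)

        d = p2 - p1
        r2 = (d * d).sum(axis=1)
        cos1 = d[:, 2] / np.sqrt(r2)   # angle to A1's normal (+z)
        cos2 = cos1                    # parallel plates: same angle to A2's normal
        A2 = 1.0
        F12 = A2 * np.mean(cos1 * cos2 / (np.pi * r2))
        print(F12)                     # ~0.1998 for unit squares at unit separation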

  6. Data-Driven Multimodal Sleep Apnea Events Detection: Synchrosqueezing Transform Processing and Riemannian Geometry Classification Approaches.

    PubMed

    Rutkowski, Tomasz M

    2016-07-01

    A novel multimodal and bio-inspired approach to biomedical signal processing and classification is presented in the paper. This approach allows for automatic semantic labeling (interpretation) of sleep apnea events based on the proposed data-driven biomedical signal processing and classification. The presented signal processing and classification methods have already been successfully applied to real-time unimodal brainwave (EEG-only) decoding in brain-computer interfaces developed by the author. In the current project, very encouraging results are obtained using multimodal biomedical (brainwave and peripheral physiological) signals in a unified processing approach allowing for automatic semantic data description. The results thus support the hypothesis that the data-driven and bio-inspired signal processing approach is valid for medical data semantic interpretation based on the machine-learning classification of sleep apnea events. PMID:27194241

  7. Computer Mediated Learning: An Example of an Approach.

    ERIC Educational Resources Information Center

    Arcavi, Abraham; Hadas, Nurit

    2000-01-01

    There are several possible approaches in which dynamic computerized environments play a significant and possibly unique role in supporting innovative learning trajectories in mathematics in general and geometry in particular. Describes an approach based on a problem situation and some experiences using it with students and teachers. (Contains 15…

  8. A geometric calibration method for inverse geometry computed tomography using P-matrices

    NASA Astrophysics Data System (ADS)

    Slagowski, Jordan M.; Dunkerley, David A. P.; Hatt, Charles R.; Speidel, Michael A.

    2016-03-01

    Accurate and artifact-free reconstruction of tomographic images requires precise knowledge of the imaging system geometry. This work proposes a novel projection matrix (P-matrix) based calibration method to enable C-arm inverse geometry CT (IGCT). The method is evaluated for scanning-beam digital x-ray (SBDX), a C-arm mounted inverse geometry fluoroscopic technology. A helical configuration of fiducials is imaged at each gantry angle in a rotational acquisition. For each gantry angle, digital tomosynthesis is performed at multiple planes and a composite image analogous to a cone-beam projection is generated from the plane stack. The geometry of the C-arm, source array, and detector array is determined at each angle by constructing a parameterized 3D-to-2D projection matrix that minimizes the sum-of-squared deviations between measured and projected fiducial coordinates. Simulations were used to evaluate calibration performance with translations and rotations of the source and detector. In a geometry with 1 mm translation of the central ray relative to the axis of rotation and 1 degree yaw of the detector and source arrays, the maximum error in the recovered translational parameters was 0.4 mm and the maximum error in the rotation parameter was 0.02 degrees. The relative root-mean-square error in a reconstruction of a numerical thorax phantom was 0.4% using the calibration method, versus 7.7% without calibration. Changes in source-detector distance were the most challenging to estimate. Reconstruction of experimental SBDX data using the proposed method eliminated double-contour artifacts present in a non-calibrated reconstruction. The proposed IGCT geometric calibration method reduces image artifacts when uncertainties exist in system geometry.
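
    The core P-matrix estimation step can be sketched as a direct linear transformation (DLT) fit of a 3D-to-2D projection matrix from point correspondences. The synthetic camera and fiducial points below are assumptions; the paper's constrained parameterization and tomosynthesis steps are omitted.

        import numpy as np

        def dlt(X, u):
            """X: (n, 3) world points, u: (n, 2) image points, n >= 6."""
            rows = []
            for (x, y, z), (px, py) in zip(X, u):
                P = [x, y, z, 1.0]
                rows.append([*P, 0, 0, 0, 0, *(-px * np.array(P))])
                rows.append([0, 0, 0, 0, *P, *(-py * np.array(P))])
            _, _, Vt = np.linalg.svd(np.asarray(rows))
            return Vt[-1].reshape(3, 4)       # null-space vector = projection matrix

        # Synthetic test: project known points with a known P, then recover it.
        P_true = np.hstack([np.eye(3), [[0.1], [0.2], [2.0]]])
        X = np.random.default_rng(2).uniform(-1, 1, (8, 3))
        uh = (P_true @ np.c_[X, np.ones(8)].T).T
        u = uh[:, :2] / uh[:, 2:]
        P_est = dlt(X, u)
        print(P_est / P_est[-1, -1])          # equals P_true up to overall scale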

  9. Fully Integrated Approach to Compute Vibrationally Resolved Optical Spectra: From Small Molecules to Macrosystems.

    PubMed

    Barone, Vincenzo; Bloino, Julien; Biczysko, Malgorzata; Santoro, Fabrizio

    2009-03-10

    A general and effective time-independent approach to compute vibrationally resolved electronic spectra from first principles has been integrated into the Gaussian computational chemistry package. This computational tool offers a simple and easy-to-use way to compute theoretical spectra starting from geometry optimization and frequency calculations for each electronic state. It is shown that in such a way it is straightforward to combine calculation of Franck-Condon integrals with any electronic computational model. The given examples illustrate the calculation of absorption and emission spectra, all in the UV-vis region, of various systems from small molecules to large ones, in gas as well as in condensed phases. The computational models applied range from fully quantum mechanical descriptions to discrete/continuum quantum mechanical/molecular mechanical/polarizable continuum models. PMID:26610221

  10. Using Dynamic Geometry and Computer Algebra Systems in Problem Based Courses for Future Engineers

    ERIC Educational Resources Information Center

    Tomiczková, Svetlana; Lávicka, Miroslav

    2015-01-01

    It is a modern trend today, when formulating the curriculum of a geometry course at technical universities, to start from a real-life problem originating in technical praxis and subsequently to define which geometric theories and skills are necessary for solving it. Nowadays, interactive and dynamic geometry software plays a more and more…

  11. Computer Metaphors: Approaches to Computer Literacy for Educators.

    ERIC Educational Resources Information Center

    Peelle, Howard A.

    Because metaphors offer ready perspectives for comprehending something new, this document examines various metaphors educators might use to help students develop computer literacy. Metaphors described are the computer as person (a complex system worthy of respect), tool (perhaps the most powerful and versatile known to humankind), brain (both…

  12. High-resolution structure of an HIV zinc fingerlike domain via a new NMR-based distance geometry approach

    SciTech Connect

    Summers, M.F.; South, T.L.; Kim, B. ); Hare, D.R. )

    1990-01-16

    A new method is described for determining molecular structures from NMR data. The approach utilizes 2D NOESY back-calculations to generate simulated spectra for structures obtained from distance geometry (DG) computations. Comparison of experimental and back-calculated spectra, including analysis of cross-peak buildup and auto-peak decay with increasing mixing time, provides a quantitative measure of the consistency between the experimental data and the generated structures and allows the use of tighter interproton distance constraints. For the first time, the goodness of the generated structures is evaluated on the basis of their consistency with the actual experimental data rather than on the basis of consistency with other generated structures. This method is applied to the structure determination of an 18-residue peptide with an amino acid sequence comprising the first zinc fingerlike domain from the gag protein p55 of HIV. This is the first structure determination to atomic resolution for a retroviral zinc fingerlike complex. The peptide (Zn(p55F1)) exhibits a novel folding pattern that includes type I and type II NH-S tight turns and is stabilized both by coordination of the three Cys and one His residues to zinc and by extensive internal hydrogen bonding. The backbone folding is significantly different from that of a classical DNA-binding zinc finger. The side chains of conservatively substituted Phe and Ile residues implicated in genomic RNA recognition form a hydrophobic patch on the peptide surface.
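
    The embedding step at the heart of distance geometry can be sketched with classical multidimensional scaling, which recovers coordinates from an exact distance matrix by double centering and eigendecomposition. Real DG calculations work from NOE-derived distance bounds and generate structure ensembles, which this toy round-trip omits.

        import numpy as np

        def embed_from_distances(D, dim=3):
            n = D.shape[0]
            J = np.eye(n) - np.ones((n, n)) / n
            G = -0.5 * J @ (D ** 2) @ J            # Gram matrix by double centering
            vals, vecs = np.linalg.eigh(G)
            idx = np.argsort(vals)[::-1][:dim]     # keep the largest eigenvalues
            return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

        X = np.random.default_rng(3).normal(size=(10, 3))
        D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
        Y = embed_from_distances(D)
        D2 = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
        print(np.allclose(D, D2))                  # True: distances are reproduced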

  13. CasimirSim - A Tool to Compute Casimir Polder Forces for Nontrivial 3D Geometries

    SciTech Connect

    Sedmik, Rene; Tajmar, Martin

    2007-01-30

    The so-called Casimir effect is one of the most interesting macro-quantum effects. Being negligible on the macro-scale, it becomes a governing factor below structure sizes of 1 μm, where it typically accounts for 100 kN m⁻². The force does not depend on gravity or electric charge but solely on the material properties and geometrical shape. This makes the effect a strong candidate for micro- and nano-mechanical devices (M(N)EMS). Despite a long history of research, the theory lacks a uniform description valid for arbitrary geometries, which retards technical application. We present an advanced state-of-the-art numerical tool overcoming all the usual geometrical restrictions, capable of calculating arbitrary 3D geometries by utilizing the Casimir Polder approximation for the Casimir force.

  14. Rapid Geometry Creation for Computer-Aided Engineering Parametric Analyses: A Case Study Using ComGeom2 for Launch Abort System Design

    NASA Technical Reports Server (NTRS)

    Hawke, Veronica; Gage, Peter; Manning, Ted

    2007-01-01

    ComGeom2, a tool developed to generate Common Geometry representation for multidisciplinary analysis, has been used to create a large set of geometries for use in a design study requiring analysis by two computational codes. This paper describes the process used to generate the large number of configurations and suggests ways to further automate the process and make it more efficient for future studies. The design geometry for this study is the launch abort system of the NASA Crew Launch Vehicle.

  15. An analytical approach to bistable biological circuit discrimination using real algebraic geometry.

    PubMed

    Siegal-Gaskins, Dan; Franco, Elisa; Zhou, Tiffany; Murray, Richard M

    2015-07-01

    Biomolecular circuits with two distinct and stable steady states have been identified as essential components in a wide range of biological networks, with a variety of mechanisms and topologies giving rise to their important bistable property. Understanding the differences between circuit implementations is an important question, particularly for the synthetic biologist faced with determining which bistable circuit design out of many is best for their specific application. In this work we explore the applicability of Sturm's theorem--a tool from nineteenth-century real algebraic geometry--to comparing 'functionally equivalent' bistable circuits without the need for numerical simulation. We first consider two genetic toggle variants and two different positive feedback circuits, and show how specific topological properties present in each type of circuit can serve to increase the size of the regions of parameter space in which they function as switches. We then demonstrate that a single competitive monomeric activator added to a purely monomeric (and otherwise monostable) mutual repressor circuit is sufficient for bistability. Finally, we compare our approach with the Routh-Hurwitz method and derive consistent, yet more powerful, parametric conditions. The predictive power and ease of use of Sturm's theorem demonstrated in this work suggest that algebraic geometric techniques may be underused in biomolecular circuit analysis.
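
    Sturm's theorem is straightforward to apply in practice: the steady states of a mass-action circuit reduce to the real roots of a polynomial, and the theorem counts distinct real roots in an interval from sign changes along the Sturm chain. A small sketch with sympy (the cubic is a stand-in for a circuit's steady-state polynomial, not one from the paper):

      from sympy import Poly, Symbol, sturm

      def count_real_roots(p, a, b):
          # Sturm's theorem: the number of distinct real roots in (a, b]
          # equals the drop in sign changes of the Sturm chain from a to b.
          chain = sturm(p)
          def sign_changes(x):
              signs = [v for v in (q.eval(x) for q in chain) if v != 0]
              return sum(1 for u, v in zip(signs, signs[1:]) if u * v < 0)
          return sign_changes(a) - sign_changes(b)

      x = Symbol('x')
      p = Poly(x**3 - 6*x**2 + 9*x - 1, x)   # hypothetical steady-state polynomial
      print(count_real_roots(p, 0, 10))      # 3 roots: the signature of bistability
                                             # (two stable states, one unstable)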

  16. Computer-aided evaluation of the railway track geometry on the basis of satellite measurements

    NASA Astrophysics Data System (ADS)

    Specht, Cezary; Koc, Władysław; Chrostowski, Piotr

    2016-05-01

    In recent years, GNSS (Global Navigation Satellite Systems) measurement techniques have been developed intensively all over the world and extended to applications in surveying and navigation. Moreover, in many countries a rising trend in the development of rail transportation systems has been noticed. In this paper, a method of railway track geometry assessment based on mobile satellite measurements is presented. The paper shows the implementation effects of satellite surveying of railway geometry. The investigation process described in the paper is divided into two phases. The first phase is the mobile GNSS survey and the analysis of the obtained data. The second phase is the analysis of the track geometry using the flat coordinates from the survey. The visualization of the measured route, the separation and quality assessment of the uniform geometric elements (straight sections, arcs), and the identification of the track polygon (main directions and intersection angles) are discussed and illustrated by a calculation example within the article.

  17. Experimental validation of an OSEM-type iterative reconstruction algorithm for inverse geometry computed tomography

    NASA Astrophysics Data System (ADS)

    David, Sabrina; Burion, Steve; Tepe, Alan; Wilfley, Brian; Menig, Daniel; Funk, Tobias

    2012-03-01

    Iterative reconstruction methods have emerged as a promising avenue to reduce dose in CT imaging. Another, perhaps less well-known, advance has been the development of inverse geometry CT (IGCT) imaging systems, which can significantly reduce the radiation dose delivered to a patient during a CT scan compared to conventional CT systems. Here we show that IGCT data can be reconstructed using iterative methods, thereby combining two novel methods for CT dose reduction. A prototype IGCT scanner was developed using a scanning beam digital X-ray system, an inverse geometry fluoroscopy system with an X-ray source that scans across 9,000 focal spot positions and a small photon-counting detector. Ninety fluoroscopic projections, or "superviews", spanning an angle of 360 degrees were acquired of an anthropomorphic phantom mimicking a 1-year-old boy. The superviews were reconstructed with a custom iterative reconstruction algorithm based on the maximum-likelihood algorithm for transmission tomography (ML-TR). The normalization term was calculated from flat-field data acquired without a phantom. Fifteen subsets were used, and a total of 10 complete iterations were performed. Initial reconstructed images showed faithful reconstruction of anatomical details. Good edge resolution and good contrast-to-noise properties were observed. Overall, ML-TR reconstruction of IGCT data collected by a bench-top prototype was shown to be viable, which may be an important milestone in the further development of inverse geometry CT.
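
    The ML-TR update has a compact form. The sketch below implements a simplified ordered-subsets ML-TR iteration for the Poisson transmission model y_i ~ Poisson(b_i exp(-[A mu]_i)), with a separable-surrogate curvature in the denominator; the dense system matrix A, blank-scan counts b, and measured counts y are assumed given, and this is not the authors' implementation:

      import numpy as np

      def ml_tr(A, y, b, n_iter=10, n_subsets=15):
          # Ordered-subsets ML-TR for y_i ~ Poisson(b_i * exp(-[A @ mu]_i)).
          n_rays, n_vox = A.shape
          mu = np.zeros(n_vox)
          subsets = np.array_split(np.random.default_rng(0).permutation(n_rays),
                                   n_subsets)
          row_sum = A.sum(axis=1)   # used by the surrogate curvature term
          for _ in range(n_iter):
              for s in subsets:
                  expected = b[s] * np.exp(-A[s] @ mu)
                  grad = A[s].T @ (expected - y[s])           # log-likelihood gradient
                  curv = A[s].T @ (row_sum[s] * expected) + 1e-12
                  mu = np.clip(mu + grad / curv, 0.0, None)   # attenuation stays >= 0
          return mu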

  18. CFD study of natural convection mixing in a steam generator mock-up: Comparison between full geometry and porous media approaches

    SciTech Connect

    Dehbi, A.; Badreddine, H.

    2012-07-01

    In CFD simulations of flow mixing in a steam generator (SG) during natural circulation, one is faced with the problem of representing the thousands of SG U-tubes. Typically, simplifications are made to render the problem computationally tractable. In particular, one or a number of tubes are lumped into one volume which is treated as a single porous medium. This approach dramatically reduces the computational size of the problem and hence the simulation time. In this work, we endeavor to investigate the adequacy of this approach by performing two separate simulations of flow in a mock-up with 262 U-tubes: one in which the porous media model is used for the tube bundle, and another in which the full geometry is represented. In both simulations, the Reynolds stress model (RSM) of turbulence is used. We show that in steady state conditions, the porous media treatment yields results which are comparable to those of the full geometry representation (temperature distribution, recirculation ratio, hot plume spread, etc.). Hence, the porous media approach can be extended with a good degree of confidence to the full scale SG. (authors)

  19. Along-strike complex geometry of subduction zones - an experimental approach

    NASA Astrophysics Data System (ADS)

    Midtkandal, I.; Gabrielsen, R. H.; Brun, J.-P.; Huismans, R.

    2012-04-01

    Recent knowledge of the great geometric and dynamic complexity in subduction zones, combined with new capacity for analogue mechanical and numerical modeling, has sparked a number of studies on subduction processes. Not unexpectedly, such models reveal a complex relation between the physical conditions during subduction initiation, the strength profile of the subducting plate, the thermodynamic conditions, and the geometry of the subduction zones. One rare geometrical complexity of subduction that remains particularly controversial is the potential for polarity shift in subduction systems. The present experiments were therefore performed to explore the influence of architecture, strength, and strain velocity on complexities in subduction zones, focusing on along-strike variation of the collision zone. Of particular concern were the consequences for the geometry and kinematics of the transition zones between segments of contrasting subduction direction. Although the model design was to some extent inspired by the configuration along the Iberian-Eurasian suture zone, the results are also of significance for other orogens with complex along-strike geometries. The experiments were set up to explore the initial state of subduction only, and were accordingly terminated before slab subduction occurred. The model was built from layers of silicone putty and sand, tailored to simulate the assumed lithospheric geometries and strength-viscosity profiles along the plate boundary zone prior to contraction, and comprises two 'continental' plates separated by a thinner 'oceanic' plate that represents the narrow seaway. The experiment floats on a substrate of sodium polytungstate, representing mantle. 24 experimental runs were performed, varying the thickness (and thus strength) of the upper mantle lithosphere, as well as the strain rate. Keeping all other parameters identical for each experiment, the models were shortened by a computer-controlled jackscrew while time-lapse images were

  20. Examining the Impact of an Integrative Method of Using Technology on Students' Achievement and Efficiency of Computer Usage and on Pedagogical Procedure in Geometry

    ERIC Educational Resources Information Center

    Gurevich, Irina; Gurev, Dvora

    2012-01-01

    In the current study we follow the development of the pedagogical procedure for the course "Constructions in Geometry" that resulted from using dynamic geometry software (DGS), where the computer became an integral part of the educational process. Furthermore, we examine the influence of integrating DGS into the course on students' achievement and…

  1. 3D geometry analysis of the medial meniscus--a statistical shape modeling approach.

    PubMed

    Vrancken, A C T; Crijns, S P M; Ploegmakers, M J M; O'Kane, C; van Tienen, T G; Janssen, D; Buma, P; Verdonschot, N

    2014-10-01

    The geometry-dependent functioning of the meniscus indicates that detailed knowledge of 3D meniscus geometry and its inter-subject variation is essential to design well-functioning anatomically shaped meniscus replacements. Therefore, the aim of this study was to quantify 3D meniscus geometry and to determine whether variation in medial meniscus geometry is size- or shape-driven. We also performed a cluster analysis to identify distinct morphological groups of medial menisci and assessed whether meniscal geometry is gender-dependent. A statistical shape model was created, containing the meniscus geometries of 35 subjects (20 females, 15 males) that were obtained from MR images. A principal component analysis was performed to determine the most important modes of geometry variation and the characteristic changes per principal component were evaluated. Each meniscus from the original dataset was then reconstructed as a linear combination of principal components. This allowed the comparison of male and female menisci, and a cluster analysis to determine distinct morphological meniscus groups. Of the variation in medial meniscus geometry, 53.8% was found to be due primarily to size-related differences and 29.6% to shape differences. Shape changes were most prominent in the cross-sectional plane, rather than in the transverse plane. Significant differences between male and female menisci were only found for principal component 1, which predominantly reflected size differences. The cluster analysis resulted in four clusters, yet these clusters represented two statistically different meniscal shapes, as differences between clusters 1, 2, and 4 were only present for principal component 1. This study illustrates that differences in meniscal geometry cannot be explained by scaling only, but that different meniscal shapes can be distinguished. Functional analysis, e.g. through finite element modeling, is required to assess whether these distinct shapes actually influence
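
    The statistical shape model pipeline (PCA over corresponding surface points, then reconstruction as a linear combination of modes) can be sketched in a few lines, assuming the meniscus surfaces have already been brought into point correspondence and aligned; the array layout and names are illustrative:

      import numpy as np

      def build_ssm(shapes):
          # shapes: (n_subjects, 3*n_points) aligned, corresponding coordinates.
          mean = shapes.mean(axis=0)
          U, S, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
          explained = S**2 / np.sum(S**2)     # variance fraction per principal component
          scores = U * S                      # per-subject mode weights; these feed the
          return mean, Vt, scores, explained  # cluster analysis and gender comparison

      def reconstruct(mean, Vt, weights, n_modes):
          # Each meniscus = mean shape + linear combination of the leading modes.
          return mean + weights[:n_modes] @ Vt[:n_modes]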

  2. Measuring Attitudes toward Computers: Two Approaches

    PubMed Central

    Kjerulff, K.H.; Counte, Michael A.

    1984-01-01

    The purpose of this paper is to present two questionnaires developed by the authors to measure attitudes toward computers. The first questionnaire is designed to measure attitudes toward computers in general. The second questionnaire is designed to assess attitudes toward a specific medical information system. Data concerning the reliability of these instruments are presented. Data concerning both the predictive and the concurrent validity of these measures are also presented. The results indicate that scores on both of the questionnaires assessed prior to computer implementation are reliable and valid predictors of post-implementation adaptation to the computer and perceptions of training. Scores on the questionnaires designed to assess specific attitudes, measured at three points in time, are also related to concurrent measures of job satisfaction and adaptation to the computer system.

  3. Tumor growth in complex, evolving microenvironmental geometries: A diffuse domain approach

    PubMed Central

    Chen, Ying; Lowengrub, John S.

    2014-01-01

    We develop a mathematical model of tumor growth in complex, dynamic microenvironments with active, deformable membranes. Using a diffuse domain approach, the complex domain is captured implicitly using an auxiliary function and the governing equations are appropriately modified, extended and solved in a larger, regular domain. The diffuse domain method enables us to develop an efficient numerical implementation that does not depend on the space dimension or the microenvironmental geometry. We model homotypic cell-cell adhesion and heterotypic cell-basement membrane (BM) adhesion with the latter being implemented via a membrane energy that models cell-BM interactions. We incorporate simple models of elastic forces and the degradation of the BM and ECM by tumor-secreted matrix degrading enzymes. We investigate tumor progression and BM response as a function of cell-BM adhesion and the stiffness of the BM. We find tumor sizes tend to be positively correlated with cell-BM adhesion since increasing cell-BM adhesion results in thinner, more elongated tumors. Prior to invasion of the tumor into the stroma, we find a negative correlation between tumor size and BM stiffness as the elastic restoring forces tend to inhibit tumor growth. In order to model tumor invasion of the stroma, we find it necessary to downregulate cell-BM adhesiveness, which is consistent with experimental observations. A stiff BM promotes invasiveness because at early stages the opening in the BM created by MDE degradation from tumor cells tends to be narrower when the BM is stiffer. This requires invading cells to squeeze through the narrow opening and thus promotes fragmentation that then leads to enhanced growth and invasion. In three dimensions, the opening in the BM was found to increase in size even when the BM is stiff because of pressure induced by growing tumor clusters. A larger opening in the BM can increase the potential for further invasiveness by increasing the possibility that additional

  4. Deterministic approach for unsteady rarefied flow simulations in complex geometries and its application to gas flows in microsystems

    NASA Astrophysics Data System (ADS)

    Chigullapalli, Sruti

    Micro-electro-mechanical systems (MEMS) are widely used in automotive, communications and consumer electronics applications, with microactuators, micro gyroscopes and microaccelerometers being just a few examples. However, in areas where high reliability is critical, such as in aerospace and defense applications, very few MEMS technologies have been adopted so far. Further development of high frequency microsystems such as resonators, RF MEMS, microturbines and pulsed-detonation microengines requires improved understanding of unsteady gas dynamics at the micro scale. Accurate computational simulation of such flows demands new approaches beyond the conventional formulations based on the macroscopic constitutive laws. This is due to the breakdown of the continuum hypothesis in the presence of significant non-equilibrium and rarefaction because of large gradients and small scales, respectively. More generally, the motion of molecules in a gas is described by the kinetic Boltzmann equation, which is valid for arbitrary Knudsen numbers. However, due to the multidimensionality of the phase space and the complex non-linearity of the collision term, numerical solution of the Boltzmann equation is challenging for practical problems. In this thesis a fully deterministic, as opposed to statistical, finite volume based three-dimensional solution of the Boltzmann ES-BGK model kinetic equation is formulated to enable simulations of unsteady rarefied flows. The main goal of this research is to develop an unsteady rarefied solver integrated with the finite volume method (FVM) solver in MEMOSA (MEMS Overall Simulation Administrator), developed at Purdue by PRISM, the NNSA Center for Prediction of Reliability, Integrity and Survivability of Microsystems, and to apply it to study micro-scale gas damping. Formulation and verification of the finite volume method for an unsteady rarefied flow solver based on the Boltzmann-ESBGK equations in arbitrary three-dimensional geometries are presented. The solver is

  5. A Computer Code for Swirling Turbulent Axisymmetric Recirculating Flows in Practical Isothermal Combustor Geometries

    NASA Technical Reports Server (NTRS)

    Lilley, D. G.; Rhode, D. L.

    1982-01-01

    A primitive pressure-velocity variable finite difference computer code was developed to predict swirling recirculating inert turbulent flows in axisymmetric combustors in general, and for application to a specific idealized combustion chamber with sudden or gradual expansion. The technique involves a staggered grid system for axial and radial velocities, a line relaxation procedure for efficient solution of the equations, a two-equation k-epsilon turbulence model, a stairstep boundary representation of the expansion flow, and realistic accommodation of swirl effects. A user's manual, dealing with the computational problem, showing how the mathematical basis and computational scheme may be translated into a computer program is presented. A flow chart, FORTRAN IV listing, notes about various subroutines and a user's guide are supplied as an aid to prospective users of the code.

  6. A Numerical Approach for Computing Standard Errors of Linear Equating.

    ERIC Educational Resources Information Center

    Zeng, Lingjia

    1993-01-01

    A numerical approach for computing standard errors (SEs) of a linear equating is described in which first partial derivatives of equating functions needed to compute SEs are derived numerically. Numerical and analytical approaches are compared using the Tucker equating method. SEs derived numerically are found indistinguishable from SEs derived…
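
    The numerical idea pairs the delta method with central differences: perturb each estimated moment, recompute the equating function, and combine the partials with the sampling covariance of the moments. A sketch for a generic linear equating function (the Tucker method adds anchor-test moments, omitted here; all names and values are illustrative):

      import numpy as np

      def numerical_gradient(f, theta, h=1e-5):
          # Central-difference partial derivatives of the equating function.
          g = np.zeros_like(theta)
          for k in range(len(theta)):
              tp, tm = theta.copy(), theta.copy()
              tp[k] += h
              tm[k] -= h
              g[k] = (f(tp) - f(tm)) / (2 * h)
          return g

      def linear_equate(theta, x):
          mu_x, sd_x, mu_y, sd_y = theta
          return sd_y / sd_x * (x - mu_x) + mu_y

      def equating_se(theta, sigma, x):
          # Delta method: Var(e(x)) ~ g' Sigma g, with g computed numerically.
          g = numerical_gradient(lambda t: linear_equate(t, x),
                                 np.asarray(theta, dtype=float))
          return float(np.sqrt(g @ sigma @ g))

      theta = [50.0, 10.0, 52.0, 9.5]            # mu_X, sd_X, mu_Y, sd_Y
      sigma = np.diag([0.04, 0.02, 0.04, 0.02])  # hypothetical moment covariance
      print(equating_se(theta, sigma, x=55.0))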

  7. Effective leaf area index retrieving from terrestrial point cloud data: coupling computational geometry application and Gaussian mixture model clustering

    NASA Astrophysics Data System (ADS)

    Jin, S.; Tamura, M.; Susaki, J.

    2014-09-01

    Leaf area index (LAI) is one of the most important structural parameters in forestry studies, as it quantifies the ability of green vegetation to interact with solar illumination. The classic understanding of LAI treats the green canopy as an integration of horizontal leaf layers. Since the development of multi-angle remote sensing techniques, LAI has had to be considered with respect to the observation geometry, and effective LAI formulates the leaf-light interaction more precisely. Retrieving LAI or effective LAI from remotely sensed data has therefore been a challenge over the past decades. Laser scanning can provide accurately surveyed surface-echo coordinates at densely scanned intervals. Utilizing density-based statistical algorithms to analyze the voluminous 3-D point data is one focus of laser scanning applications, and computational geometry provides mature methods for point cloud data (PCD) processing and analysis. In this paper, the authors investigate the feasibility of a new application for retrieving the effective LAI of an isolated broadleaf tree. A simplified curvature was calculated for each point in order to remove non-photosynthetic tissues. The PCD were then discretized into voxels and clustered using a Gaussian mixture model, and the area of each cluster was calculated with computational geometry methods. To validate the application, we chose an indoor plant and estimated its leaf area; the correlation coefficient between calculation and measurement was 98.28%. We finally calculated the effective LAI of the tree with 6 × 6 assumed observation directions.
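
    A minimal sketch of the clustering-plus-geometry step, assuming scikit-learn's GaussianMixture for the foliage points and SciPy's ConvexHull as the per-cluster area proxy; taking half the hull surface area as one-sided leaf area is an illustrative assumption, not the paper's exact procedure:

      import numpy as np
      from scipy.spatial import ConvexHull
      from sklearn.mixture import GaussianMixture

      def effective_leaf_area(points, n_clusters=50):
          # points: (n, 3) foliage returns, non-photosynthetic tissue removed.
          labels = GaussianMixture(n_components=n_clusters,
                                   random_state=0).fit_predict(points)
          leaf_area = 0.0
          for c in range(n_clusters):
              pts = points[labels == c]
              if len(pts) < 4:
                  continue
              try:
                  leaf_area += ConvexHull(pts).area / 2.0  # one side of hull surface
              except Exception:
                  pass                                     # degenerate (flat) cluster
          ground_area = ConvexHull(points[:, :2]).volume   # 2-D hull "volume" = area
          return leaf_area, leaf_area / ground_area        # area and a crude LAI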

  8. Computational dynamics for robotics systems using a non-strict computational approach

    NASA Technical Reports Server (NTRS)

    Orin, David E.; Wong, Ho-Cheung; Sadayappan, P.

    1989-01-01

    A Non-Strict computational approach for real-time robotics control computations is proposed. In contrast to the traditional approach to scheduling such computations, based strictly on task dependence relations, the proposed approach relaxes precedence constraints and scheduling is guided instead by the relative sensitivity of the outputs with respect to the various paths in the task graph. An example of the computation of the Inverse Dynamics of a simple inverted pendulum is used to demonstrate the reduction in effective computational latency through use of the Non-Strict approach. A speedup of 5 has been obtained when the processes of the task graph are scheduled to reduce the latency along the crucial path of the computation. While error is introduced by the relaxation of precedence constraints, the Non-Strict approach has a smaller error than the conventional Strict approach for a wide range of input conditions.

  9. Computational issues of importance to the inverse recovery of epicardial potentials in a realistic heart-torso geometry.

    PubMed

    Messinger-Rapport, B J; Rudy, Y

    1989-11-01

    In vitro data from a realistic-geometry electrolytic tank were used to demonstrate the consequences of computational issues critical to the ill-posed inverse problem in electrocardiography. The boundary element method was used to discretize the relationship between the body surface potentials and epicardial cage potentials. Variants of Tikhonov regularization were used to stabilize the inversion of the body surface potentials in order to reconstruct the epicardial surface potentials. The computational issues investigated were (1) computation of the regularization parameter; (2) effects of inaccuracy in locating the position of the heart; and (3) incorporation of a priori information on the properties of epicardial potentials into the regularization methodology. Two methods were suggested by which a priori information could be incorporated into the regularization formulation: (1) use of an estimate of the epicardial potential distribution everywhere on the surface and (2) use of regional bounds on the excursion of the potential. Results indicate that the a posteriori technique called CRESO, developed by Colli Franzone and coworkers, most consistently derives the regularization parameter closest to the optimal parameter for this experimental situation. The sensitivity of the inverse computation in a realistic-geometry torso to inaccuracies in estimating heart position is consistent with results from the eccentric spheres model; errors of 1 cm are well tolerated, but errors of 2 cm or greater result in a loss of position and amplitude information. Finally, estimates and bounds based on accurate, known information successfully lower the relative error associated with the inverse and have the potential to significantly enhance the amplitude and feature position information obtainable from the inverse-reconstructed epicardial potential map.
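
    Zero-order Tikhonov regularization has a closed form in the SVD of the forward (transfer) matrix, and a parameter sweep makes criteria such as CRESO easy to prototype. The sketch below is generic; the CRESO variant shown (first local maximum of the derivative of lambda^2*||x_lambda||^2 with respect to lambda^2) is an approximation of the published criterion, not the authors' code:

      import numpy as np

      def tikhonov_svd(A, b, lam):
          # Zero-order Tikhonov: x = V diag(s_i / (s_i^2 + lam^2)) U^T b.
          U, s, Vt = np.linalg.svd(A, full_matrices=False)
          return Vt.T @ ((s / (s**2 + lam**2)) * (U.T @ b))

      def creso_lambda(A, b, lams):
          # CRESO-style selection over an increasing lambda sweep.
          norms2 = np.array([np.sum(tikhonov_svd(A, b, l) ** 2) for l in lams])
          c = np.gradient(lams**2 * norms2, lams**2)
          for i in range(1, len(c) - 1):
              if c[i - 1] < c[i] > c[i + 1]:   # first local maximum
                  return lams[i]
          return lams[int(np.argmax(c))]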

  10. Computational Analysis of an effect of aerodynamic pressure on the side view mirror geometry

    NASA Astrophysics Data System (ADS)

    Murukesavan, P.; Mu'tasim, M. A. N.; Sahat, I. M.

    2013-12-01

    This paper describes the evaluation of aerodynamic flow effects on side view mirror geometry for a passenger car using ANSYS Fluent CFD simulation software. Results from the analysis of the pressure coefficient on side view mirror designs are evaluated to analyse the unsteady forces that cause fluctuations on the mirror surface and image blurring. The fluctuations also cause drag forces that increase the overall drag coefficient, which is assumed to result in higher fuel consumption and emissions. Three side view mirror designs were investigated with two input velocity parameters of 17 m/s and 33 m/s. Results indicate that the half-sphere design is the most effective, with less pressure coefficient fluctuation and a lower drag coefficient.

  11. Changes in root canal geometry after preparation assessed by high-resolution computed tomography.

    PubMed

    Peters, O A; Laib, A; Göhring, T N; Barbakow, F

    2001-01-01

    Root canal morphology changes during canal preparation, and these changes may vary depending on the technique used. Such changes have been studied in vitro by measuring cross-sections of canals before and after preparation. The current study used nondestructive high-resolution scanning tomography to assess changes in the canals' paths after preparation. A microcomputed tomography scanner (cubic resolution 34 μm) was used to analyze 18 canals in 6 extracted maxillary molars. Canals were scanned before and after preparation using either K-Files, Lightspeed, or ProFile .04 rotary instruments. A special mounting device enabled precise repositioning and scanning of the specimens after preparation. Differences in surface area (ΔA, in mm²) and volume (ΔV, in mm³) of each canal before and after preparation were calculated using custom-made software. ΔV ranged from 0.64 to 2.86, with a mean of 1.61 ± 0.7, whereas ΔA varied from 0.72 to 9.66, with a mean of 4.16 ± 2.63. Mean ΔV and ΔA for the K-File, ProFile, and Lightspeed groups were 1.28 ± 0.57 and 2.58 ± 1.83; 1.79 ± 0.66 and 4.86 ± 2.53; and 1.81 ± 0.57 and 5.31 ± 2.98, respectively. Canal anatomy and the effects of preparation were further analyzed using the Structure Model Index and the Transportation of Centers of Mass. Under the conditions of this study, variations in canal geometry before preparation had more influence on the changes during preparation than the techniques themselves. Consequently, studies comparing the effects of root canal instruments on canal anatomy should also consider details of the preoperative canal geometry. PMID:11487156

  12. Computer-based Approaches to Patient Education

    PubMed Central

    Lewis, Deborah

    1999-01-01

    All articles indexed in MEDLINE or CINAHL, related to the use of computer technology in patient education, and published in peer-reviewed journals between 1971 and 1998 were selected for review. Sixty-six articles, including 21 research-based reports, were identified. Forty-five percent of the studies were related to the management of chronic disease. Thirteen studies described an improvement in knowledge scores or clinical outcomes when computer-based patient education was compared with traditional instruction. Additional articles examined patients' computer experience, socioeconomic status, race, and gender and found no significant differences when compared with program outcomes. Sixteen of the 21 research-based studies had effect sizes greater than 0.5, indicating a significant change in the described outcome when the study subjects participated in computer-based patient education. The findings from this review support computer-based education as an effective strategy for transfer of knowledge and skill development for patients. The limited number of research studies (N = 21) points to the need for additional research. Recommendations for new studies include cost-benefit analysis and the impact of these new technologies on health outcomes over time. PMID:10428001

  13. Human brain mapping: Experimental and computational approaches

    SciTech Connect

    Wood, C.C.; George, J.S.; Schmidt, D.M.; Aine, C.J.; Sanders, J.; Belliveau, J.

    1998-11-01

    This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). This project combined Los Alamos' and collaborators' strengths in noninvasive brain imaging and high performance computing to develop potential contributions to the multi-agency Human Brain Project led by the National Institute of Mental Health. The experimental component of the project emphasized the optimization of spatial and temporal resolution of functional brain imaging by combining: (a) structural MRI measurements of brain anatomy; (b) functional MRI measurements of blood flow and oxygenation; and (c) MEG measurements of time-resolved neuronal population currents. The computational component of the project emphasized development of a high-resolution 3-D volumetric model of the brain based on anatomical MRI, in which structural and functional information from multiple imaging modalities can be integrated into a single computational framework for modeling, visualization, and database representation.

  14. Computed tomography in trauma: An atlas approach

    SciTech Connect

    Toombs, B.D.; Sandler, C.

    1986-01-01

    This book discusses computed tomography in trauma. The text is organized according to mechanism of injury and site of injury. In addition to CT, some correlation with other imaging modalities is included. Blunt trauma, penetrating trauma, complications and sequelae of trauma, and use of other modalities are covered.

  15. An Approach to Developing Computer Catalogs

    ERIC Educational Resources Information Center

    MacDonald, Robin W.; Elrod, J. McRee

    1973-01-01

    A method of developing computer catalogs is proposed which does not require unit card conversion but rather the accumulation of data from operating programs. It is proposed that the bibliographic and finding functions of the catalog be separated, with the latter being the first automated. (8 references) (Author)

  16. Reduced-Order Modeling: New Approaches for Computational Physics

    NASA Technical Reports Server (NTRS)

    Beran, Philip S.; Silva, Walter A.

    2001-01-01

    In this paper, we review the development of new reduced-order modeling techniques and discuss their applicability to various problems in computational physics. Emphasis is given to methods based on Volterra series representations and the proper orthogonal decomposition. Results are reported for different nonlinear systems to provide clear examples of the construction and use of reduced-order models, particularly in the multi-disciplinary field of computational aeroelasticity. Unsteady aerodynamic and aeroelastic behaviors of two-dimensional and three-dimensional geometries are described. Large increases in computational efficiency are obtained through the use of reduced-order models, thereby justifying the initial computational expense of constructing these models and motivating their use for multi-disciplinary design analysis.
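
    Of the two families reviewed, the proper orthogonal decomposition is the easier to sketch: POD modes are the left singular vectors of a mean-subtracted snapshot matrix, and the reduced-order model lives in the span of the leading modes. A minimal example (the snapshot data and the 99% energy cutoff are placeholders):

      import numpy as np

      def pod_basis(snapshots, energy=0.99):
          # snapshots: (n_dof, n_snapshots) flow-field samples, one per column.
          mean = snapshots.mean(axis=1, keepdims=True)
          U, S, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
          cum = np.cumsum(S**2) / np.sum(S**2)
          r = int(np.searchsorted(cum, energy)) + 1   # modes meeting the target
          return mean[:, 0], U[:, :r]

      def project(mean, modes, field):
          return modes.T @ (field - mean)   # full state -> reduced coordinates

      def lift(mean, modes, coords):
          return mean + modes @ coords      # reduced coordinates -> full state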

  17. CMEIAS JFrad: a digital computing tool to discriminate the fractal geometry of landscape architectures and spatial patterns of individual cells in microbial biofilms.

    PubMed

    Ji, Zhou; Card, Kyle J; Dazzo, Frank B

    2015-04-01

    Image analysis of fractal geometry can be used to gain deeper insights into complex ecophysiological patterns and processes occurring within natural microbial biofilm landscapes, including the scale-dependent heterogeneities of their spatial architecture, biomass, and cell-cell interactions, all driven by the colonization behavior of optimal spatial positioning of organisms to maximize their efficiency in utilization of allocated nutrient resources. Here, we introduce CMEIAS JFrad, a new computing technology that analyzes the fractal geometry of complex biofilm architectures in digital landscape images. The software uniquely features a data-mining opportunity based on a comprehensive collection of 11 different mathematical methods to compute fractal dimension that are implemented into a wizard design to maximize ease-of-use for semi-automatic analysis of single images or fully automatic analysis of multiple images in a batch process. As examples of application, quantitative analyses of fractal dimension were used to optimize the important variable settings of brightness threshold and minimum object size in order to discriminate the complex architecture of freshwater microbial biofilms at multiple spatial scales, and also to differentiate the spatial patterns of individual bacterial cells that influence their cooperative interactions, resource use, and apportionment in situ. Version 1.0 of JFrad is implemented into a software package containing the program files, user manual, and tutorial images that will be freely available at http://cme.msu.edu/cmeias/. This improvement in computational image informatics will strengthen microscopy-based approaches to analyze the dynamic landscape ecology of microbial biofilm populations and communities in situ at spatial resolutions that range from single cells to microcolonies.
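
    Among the estimators such a tool can implement, box counting is the most familiar: count the occupied boxes N(s) at a series of box sizes s and fit the slope of log N(s) against log(1/s). A generic sketch for a thresholded (binary) image; this is not JFrad's implementation, which provides 11 different methods:

      import numpy as np

      def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32, 64)):
          # mask: 2-D boolean array (the thresholded biofilm image).
          counts = []
          for s in sizes:
              h = mask.shape[0] // s * s
              w = mask.shape[1] // s * s
              blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
              counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
          # N(s) ~ s^-D, so D is the slope of log N versus log(1/s).
          slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
          return slope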

  18. Methods of information geometry in computational system biology (consistency between chemical and biological evolution).

    PubMed

    Astakhov, Vadim

    2009-01-01

    Interest in simulation of large-scale metabolic networks, species development, and genesis of various diseases requires new simulation techniques to accommodate the high complexity of realistic biological networks. Information geometry and topological formalisms are proposed to analyze information processes. We analyze the complexity of large-scale biological networks as well as transition of the system functionality due to modification in the system architecture, system environment, and system components. The dynamic core model is developed. The term dynamic core is used to define a set of causally related network functions. Delocalization of the dynamic core model provides a mathematical formalism to analyze migration of specific functions in biosystems which undergo structure transition induced by the environment. The term delocalization is used to describe these processes of migration. We constructed a holographic model with self-poietic dynamic cores which preserves functional properties under those transitions. Topological constraints such as Ricci flow and Pfaff dimension were found for statistical manifolds which represent biological networks. These constraints can provide insight into processes of degeneration and recovery which take place in large-scale networks. We would like to suggest that therapies which are able to effectively implement estimated constraints will successfully adjust biological systems and recover altered functionality. Also, we mathematically formulate the hypothesis that there is a direct consistency between biological and chemical evolution. Any set of causal relations within a biological network has its dual reimplementation in the chemistry of the system environment.

  19. Multicomputer approach to non-numeric computation

    SciTech Connect

    Baru, C.K.

    1985-01-01

    The architecture of a dynamically partitionable multicomputer system with switchable main memory modules (SM3) for non-numeric computations is studied. The architecture supports efficient execution of parallel algorithms for database processing by (i) allowing the sharing of switchable main memory modules between computers, (ii) supporting network partitioning, and (iii) employing global control lines to efficiently support interprocessor communication. The data transfer time is reduced to memory switching time by allowing some main memory modules to be switched between processors. Network partitioning gives a common bus network system the capability of an MIMD machine while performing global operations. The global control lines establish a quick and efficient high-level protocol in the system. The network is supervised by a control computer which oversees network partitioning and other functions. A simulation study of some commonly used database operations, using discrete-event simulation techniques, was carried out, and results of the study are presented. Certain aspects of the system architecture were modified on the basis of simulation results. The general-purpose nature of the SM3 system allows the implementation of a variety of parallel algorithms for database processing.

  20. Numerical computations of natural convection heat transfer in irregular geometries: Final technical report

    SciTech Connect

    Glakpe, E.K.

    1987-01-23

    The goal of the research program at Howard University is to develop and document a general purpose computer code that can be used to obtain flow and heat transfer data for the transport or storage of spent fuel configurations. We believe that this work is relevant to DOE/OCRWM storage and transportation programs for the protection of public health and the quality of the environment. The computer code is expected to be used to support primarily the following activities: (a) to obtain heat transfer and flow data for the design of sealed storage casks for transport to, and storage at, the proposed MRS facility; (b) to obtain heat transfer and flow data for storage of spent fuel assemblies in pools or transportable metal casks at reactor sites. It is therefore proposed that the research work be continued to modify and add to the BODYFIT-1FE code physical models and applicable equations that will simulate realistic configurations of shipping/storage casks.

  1. A Social Construction Approach to Computer Science Education

    ERIC Educational Resources Information Center

    Machanick, Philip

    2007-01-01

    Computer science education research has mostly focused on cognitive approaches to learning. Cognitive approaches to understanding learning do not account for all the phenomena observed in teaching and learning. A number of apparently successful educational approaches, such as peer assessment, apprentice-based learning and action learning, have…

  2. A Taxonomic Approach to Measuring Achievement in Mathematics 223 - Geometry for Elementary Teachers.

    ERIC Educational Resources Information Center

    Little, Richard A.

    The purpose was to evaluate the effectiveness of a geometry course for preservice elementary teachers and, at the same time, to validate the assumptions that the arrangement of categories in Bloom's "Taxonomy" is hierarchical and cumulative. Sixty-two preservice elementary education majors took an investigator-constructed achievement test after…

  3. Conceptualizing Vectors in College Geometry: A New Framework for Analysis of Student Approaches and Difficulties

    ERIC Educational Resources Information Center

    Kwon, Oh Hoon

    2012-01-01

    This dissertation documents a new way of conceptualizing vectors in college mathematics, especially in geometry. First, I will introduce three problems to show the complexity and subtlety of the construct of vectors with the classical vector representations. These highlight the need for a new framework that: (1) differentiates abstraction from a…

  4. Computational study of pulsatile blood flow in prototype vessel geometries of coronary segments

    PubMed Central

    Chaniotis, A.K.; Kaiktsis, L.; Katritsis, D.; Efstathopoulos, E.; Pantos, I.; Marmarellis, V.

    2010-01-01

    The spatial and temporal distributions of wall shear stress (WSS) in prototype vessel geometries of coronary segments are investigated via numerical simulation, and the potential association with vascular disease and specifically atherosclerosis and plaque rupture is discussed. In particular, simulation results of WSS spatio-temporal distributions are presented for pulsatile, non-Newtonian blood flow conditions for: (a) curved pipes with different curvatures, and (b) bifurcating pipes with different branching angles and flow division. The effects of non-Newtonian flow on WSS (compared to Newtonian flow) are found to be small at Reynolds numbers representative of blood flow in coronary arteries. Specific preferential sites of average low WSS (and likely atherogenesis) were found at the outer regions of the bifurcating branches just after the bifurcation, and at the outer-entry and inner-exit flow regions of the curved vessel segment. The drop in WSS was more dramatic at the bifurcating vessel sites (less than 5% of the pre-bifurcation value). These sites were also near rapid gradients of WSS changes in space and time – a fact that increases the risk of rupture of plaque likely to develop at these sites. The time variation of the WSS spatial distributions was very rapid around the start and end of the systolic phase of the cardiac cycle, when strong fluctuations of intravascular pressure were also observed. These rapid and strong changes of WSS and pressure coincide temporally with the greatest flexion and mechanical stresses induced in the vessel wall by myocardial motion (ventricular contraction). The combination of these factors may increase the risk of plaque rupture and thrombus formation at these sites. PMID:20400349

  5. Information theoretic approaches to multidimensional neural computations

    NASA Astrophysics Data System (ADS)

    Fitzgerald, Jeffrey D.

    Many systems in nature process information by transforming inputs from their environments into observable output states. These systems are often difficult to study because they are performing computations on multidimensional inputs with many degrees of freedom using highly nonlinear functions. The work presented in this dissertation deals with some of the issues involved with characterizing real-world input/output systems and understanding the properties of idealized systems using information theoretic methods. Using the principle of maximum entropy, a family of models are created that are consistent with certain measurable correlations from an input/output dataset but are maximally unbiased in all other respects, thereby eliminating all unjustified assumptions about the computation. In certain cases, including spiking neurons, we show that these models also minimize the mutual information. This property gives one the advantage of being able to identify the relevant input/output statistics by calculating their information content. We argue that these maximum entropy models provide a much needed quantitative framework for characterizing and understanding sensory processing neurons that are selective for multiple stimulus features. To demonstrate their usefulness, these ideas are applied to neural recordings from macaque retina and thalamus. These neurons, which primarily respond to two stimulus features, are shown to be well described using only first and second order statistics, indicating that their firing rates encode information about stimulus correlations. In addition to modeling multi-feature computations in the relevant feature space, we also show that maximum entropy models are capable of discovering the relevant feature space themselves. This technique overcomes the disadvantages of two commonly used dimensionality reduction methods and is explored using several simulated neurons, as well as retinal and thalamic recordings. Finally, we ask how neurons in a

  6. Functional quantum computing: An optical approach

    NASA Astrophysics Data System (ADS)

    Rambo, Timothy M.; Altepeter, Joseph B.; Kumar, Prem; D'Ariano, G. Mauro

    2016-05-01

    Recent theoretical investigations treat quantum computations as functions, quantum processes which operate on other quantum processes, rather than circuits. Much attention has been given to the N-switch function, which takes N black-box quantum operators as input, coherently permutes their ordering, and applies the result to a target quantum state. This is something which cannot be equivalently done using a quantum circuit. Here, we propose an all-optical system design which implements coherent operator permutation for an arbitrary number of input operators.
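
    For N = 2 the switch is easy to simulate with a control qubit that coherently selects the operator ordering; this textbook construction (not the paper's optical implementation) also shows the switch's signature use, testing commutation with a single query to each black box:

      import numpy as np

      def two_switch(A, B, psi, control=np.array([1.0, 1.0]) / np.sqrt(2)):
          # Control |0> applies the ordering B@A; control |1> applies A@B.
          # Returns the joint (control x target) state vector.
          return np.concatenate([control[0] * (B @ A @ psi),
                                 control[1] * (A @ B @ psi)])

      def measure_control_plus(state, dim):
          # Projecting the control onto |+> leaves ((B@A + A@B)/2) @ psi.
          return (state[:dim] + state[dim:]) / np.sqrt(2)

      X = np.array([[0.0, 1.0], [1.0, 0.0]])    # black box 1: Pauli X
      Z = np.array([[1.0, 0.0], [0.0, -1.0]])   # black box 2: Pauli Z
      out = measure_control_plus(two_switch(X, Z, np.array([1.0, 0.0])), 2)
      print(out)   # the zero vector: X and Z anticommute, so this branch vanishes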

  7. One-eyed stereo: a general approach to modeling 3-d scene geometry.

    PubMed

    Strat, T M; Fischler, M A

    1986-06-01

    A single two-dimensional image is an ambiguous representation of the three-dimensional world (many different scenes could have produced the same image), yet the human visual system is extremely successful at recovering a qualitatively correct depth model from this type of representation. Workers in the field of computational vision have devised a number of distinct schemes that attempt to emulate this human capability; these schemes are collectively known as "shape from..." methods (e.g., shape from shading, shape from texture, or shape from contour). In this paper we contend that the distinct assumptions made in each of these schemes are tantamount to providing a second (virtual) image of the original scene, and that each of these approaches can be translated into a conventional stereo formalism. In particular, we show that it is frequently possible to structure the problem as one of recovering depth from a stereo pair consisting of the supplied perspective image (the original image) and a hypothesized orthographic image (the virtual image). We present a new algorithm of the form required to accomplish this type of stereo reconstruction task. PMID:21869368

  8. Geometry Shapes Propagation: Assessing the Presence and Absence of Cortical Symmetries through a Computational Model of Cortical Spreading Depression

    PubMed Central

    Kroos, Julia M.; Diez, Ibai; Cortes, Jesus M.; Stramaglia, Sebastiano; Gerardo-Giorda, Luca

    2016-01-01

    Cortical spreading depression (CSD), a depolarization wave which originates in the visual cortex and travels toward the frontal lobe, has been suggested to be one neural correlate of aura migraine. To date, little is known about the mechanisms which can trigger or stop aura migraine. Here, to shed some light on this problem, and under the hypothesis that CSD might mediate aura migraine, we aim to study different aspects favoring or disfavoring the propagation of CSD. In particular, by using a computational neuronal model distributed throughout a realistic cortical mesh, we study the role that the geometry has in shaping CSD. Our results are two-fold: first, we found significant differences in the propagation traveling patterns of CSD, both intra- and inter-hemispherically, revealing important asymmetries in the propagation profile. Second, we developed methods able to identify brain regions featuring a peculiar behavior during CSD propagation. Our study reveals dynamical aspects of CSD, which, if applied to subject-specific cortical geometry, might shed some light on how to differentiate between healthy subjects and those suffering migraine. PMID:26869913

  9. A Computational Approach to Competitive Range Expansions

    NASA Astrophysics Data System (ADS)

    Weber, Markus F.; Poxleitner, Gabriele; Hebisch, Elke; Frey, Erwin; Opitz, Madeleine

    2014-03-01

    Bacterial communities represent complex and dynamic ecological systems. Environmental conditions and microbial interactions determine whether a bacterial strain survives an expansion to new territory. In our work, we studied competitive range expansions in a model system of three Escherichia coli strains. In this system, a colicin producing strain competed with a colicin resistant, and with a colicin sensitive strain for new territory. Genetic engineering allowed us to tune the strains' growth rates and to study their expansion in distinct ecological scenarios (with either cyclic or hierarchical dominance). The control over growth rates also enabled us to construct and to validate a predictive computational model of the bacterial dynamics. The model rested on an agent-based, coarse-grained description of the expansion process and we conducted independent experiments on the growth of single-strain colonies for its parametrization. Furthermore, the model considered the long-range nature of the toxin interaction between strains. The integration of experimental analysis with computational modeling made it possible to quantify how the level of biodiversity depends on the interplay between bacterial growth rates, the initial composition of the inoculum, and the toxin range.
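
    The coarse-grained, agent-based structure of such a model is simple to sketch: strains grow into empty lattice sites at strain-specific rates, while the producer's toxin acts at long range on sensitive cells. A deliberately simplified illustration with hypothetical parameters, not the published model:

      import numpy as np

      rng = np.random.default_rng(0)
      EMPTY, PROD, RES, SENS = 0, 1, 2, 3        # producer, resistant, sensitive
      GROWTH = {PROD: 0.7, RES: 0.9, SENS: 1.0}  # hypothetical relative growth rates

      def step(grid, toxin_range=3, kill_p=0.2):
          n = grid.shape[0]
          new = grid.copy()
          # Growth: an empty site may be colonized by a random neighbor.
          for i, j in np.argwhere(grid == EMPTY):
              ni = (i + rng.integers(-1, 2)) % n
              nj = (j + rng.integers(-1, 2)) % n
              s = grid[ni, nj]
              if s != EMPTY and rng.random() < GROWTH[s]:
                  new[i, j] = s
          # Long-range toxin: sensitive cells near any producer may die.
          producers = np.argwhere(grid == PROD)
          for i, j in np.argwhere(grid == SENS):
              if (len(producers) and rng.random() < kill_p and
                      np.abs(producers - [i, j]).sum(axis=1).min() <= toxin_range):
                  new[i, j] = EMPTY
          return new

      grid = np.zeros((100, 100), dtype=int)     # inoculum: mixed central patch
      grid[45:55, 45:55] = rng.integers(1, 4, size=(10, 10))
      for _ in range(200):
          grid = step(grid)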

  10. Computational approaches to homogeneous gold catalysis.

    PubMed

    Faza, Olalla Nieto; López, Carlos Silva

    2015-01-01

    Homogeneous gold catalysis has been exploding for the last decade at an outstanding pace. The best described reactivity of Au(I) and Au(III) species is based on gold's properties as a soft Lewis acid, but new reactivity patterns have recently emerged which further expand the range of transformations achievable using gold catalysis, with examples of dual gold activation, hydrogenation reactions, or Au(I)/Au(III) catalytic cycles. In this scenario, to develop fully all these new possibilities, the use of computational tools to understand at an atomistic level of detail the complete role of gold as a catalyst is unavoidable. In this work we aim to provide a comprehensive review of the available benchmark works on methodological options to study homogeneous gold catalysis, in the hope that this effort can help guide the choice of method in future mechanistic studies involving gold complexes. This is relevant because a representative number of current mechanistic studies still use methods which have been reported as inappropriate and dangerously inaccurate for this chemistry. Together with this, we describe a number of recent mechanistic studies where computational chemistry has provided relevant insights into non-conventional reaction paths, unexpected selectivities or novel reactivity, which illustrate the complexity behind gold-mediated organic chemistry.

  11. Novel Computational Approaches to Drug Discovery

    NASA Astrophysics Data System (ADS)

    Skolnick, Jeffrey; Brylinski, Michal

    2010-01-01

    New approaches to protein functional inference based on protein structure and evolution are described. First, FINDSITE, a threading based approach to protein function prediction, is summarized. Then, the results of large scale benchmarking of ligand binding site prediction, ligand screening, including applications to HIV protease, and GO molecular functional inference are presented. A key advantage of FINDSITE is its ability to use low resolution, predicted structures as well as high resolution experimental structures. Then, an extension of FINDSITE to ligand screening in GPCRs using predicted GPCR structures, FINDSITE/QDOCKX, is presented. This is a particularly difficult case as there are few experimentally solved GPCR structures. Thus, we first train on a subset of known binding ligands for a set of GPCRs; this is then followed by benchmarking against a large ligand library. For the virtual ligand screening of a number of Dopamine receptors, encouraging results are seen, with significant enrichment in identified ligands over those found in the training set. Thus, FINDSITE and its extensions represent a powerful approach to the successful prediction of a variety of molecular functions.

  12. Computational approaches to natural product discovery.

    PubMed

    Medema, Marnix H; Fischbach, Michael A

    2015-09-01

    Starting with the earliest Streptomyces genome sequences, the promise of natural product genome mining has been captivating: genomics and bioinformatics would transform compound discovery from an ad hoc pursuit to a high-throughput endeavor. Until recently, however, genome mining has advanced natural product discovery only modestly. Here, we argue that the development of algorithms to mine the continuously increasing amounts of (meta)genomic data will enable the promise of genome mining to be realized. We review computational strategies that have been developed to identify biosynthetic gene clusters in genome sequences and predict the chemical structures of their products. We then discuss networking strategies that can systematize large volumes of genetic and chemical data and connect genomic information to metabolomic and phenotypic data. Finally, we provide a vision of what natural product discovery might look like in the future, specifically considering longstanding questions in microbial ecology regarding the roles of metabolites in interspecies interactions.

  13. Metabolomics and diabetes: analytical and computational approaches.

    PubMed

    Sas, Kelli M; Karnovsky, Alla; Michailidis, George; Pennathur, Subramaniam

    2015-03-01

    Diabetes is characterized by altered metabolism of key molecules and regulatory pathways. The phenotypic expression of diabetes and associated complications encompasses complex interactions between genetic, environmental, and tissue-specific factors that require an integrated understanding of perturbations in the network of genes, proteins, and metabolites. Metabolomics attempts to systematically identify and quantitate small molecule metabolites from biological systems. The recent rapid development of a variety of analytical platforms based on mass spectrometry and nuclear magnetic resonance has enabled identification of complex metabolic phenotypes. Continued development of bioinformatics and analytical strategies has facilitated the discovery of causal links in understanding the pathophysiology of diabetes and its complications. Here, we summarize the metabolomics workflow, including analytical, statistical, and computational tools, highlight recent applications of metabolomics in diabetes research, and discuss the challenges in the field. PMID:25713200

  14. Computational approaches to natural product discovery

    PubMed Central

    Medema, Marnix H.; Fischbach, Michael A.

    2016-01-01

    From the earliest Streptomyces genome sequences, the promise of natural product genome mining has been captivating: genomics and bioinformatics would transform compound discovery from an ad hoc pursuit to a high-throughput endeavor. Until recently, however, genome mining has advanced natural product discovery only modestly. Here, we argue that the development of algorithms to mine the continuously increasing amounts of (meta)genomic data will enable the promise of genome mining to be realized. We review computational strategies that have been developed to identify biosynthetic gene clusters in genome sequences and predict the chemical structures of their products. We then discuss networking strategies that can systematize large volumes of genetic and chemical data, and connect genomic information to metabolomic and phenotypic data. Finally, we provide a vision of what natural product discovery might look like in the future, specifically considering long-standing questions in microbial ecology regarding the roles of metabolites in interspecies interactions. PMID:26284671

  15. Metabolomics and Diabetes: Analytical and Computational Approaches

    PubMed Central

    Sas, Kelli M.; Karnovsky, Alla; Michailidis, George

    2015-01-01

    Diabetes is characterized by altered metabolism of key molecules and regulatory pathways. The phenotypic expression of diabetes and associated complications encompasses complex interactions between genetic, environmental, and tissue-specific factors that require an integrated understanding of perturbations in the network of genes, proteins, and metabolites. Metabolomics attempts to systematically identify and quantitate small molecule metabolites from biological systems. The recent rapid development of a variety of analytical platforms based on mass spectrometry and nuclear magnetic resonance has enabled identification of complex metabolic phenotypes. Continued development of bioinformatics and analytical strategies has facilitated the discovery of causal links in understanding the pathophysiology of diabetes and its complications. Here, we summarize the metabolomics workflow, including analytical, statistical, and computational tools, highlight recent applications of metabolomics in diabetes research, and discuss the challenges in the field. PMID:25713200

  16. A bionic approach to mathematical modeling the fold geometry of deployable reflector antennas on satellites

    NASA Astrophysics Data System (ADS)

    Feng, C. M.; Liu, T. S.

    2014-10-01

    Inspired by biology, this study presents a method for designing the fold geometry of deployable reflectors. Since the space available inside rockets for transporting satellites with reflector antennas is typically cylindrical in shape, and its cross-sectional area is considerably smaller than that of the reflector antenna after deployment, the cross-sectional area of the folded reflector must be smaller than the available rocket interior space. Membrane reflectors in aerospace are a type of lightweight structure that can be packaged compactly. To design membrane reflectors from the perspective of deployment processes, bionic applications of the morphological changes of plants are investigated. Creating biologically inspired reflectors, this paper deals with the fold geometry of reflectors that imitate flower buds. This study uses mathematical formulation to describe the geometric profiles of flower buds. Based on the formulation, new designs for deployable membrane reflectors derived from bionics are proposed. Adjusting parameters in the formulation of these designs leads to decreases in reflector area before deployment.

  17. Computational Approaches for Understanding Energy Metabolism

    PubMed Central

    Shestov, Alexander A; Barker, Brandon; Gu, Zhenglong; Locasale, Jason W

    2013-01-01

    There has been a surge of interest in understanding the regulation of metabolic networks involved in disease in recent years. Quantitative models are increasingly being used to interrogate the metabolic pathways that are contained within this complex disease biology. At the core of this effort is the mathematical modeling of central carbon metabolism involving glycolysis and the citric acid cycle (referred to as energy metabolism). Here we discuss several approaches used to quantitatively model metabolic pathways relating to energy metabolism and discuss their formalisms, successes, and limitations. PMID:23897661

  18. A Social Constructivist Approach to Computer-Mediated Instruction.

    ERIC Educational Resources Information Center

    Pear, Joseph J.; Crone-Todd, Darlene E.

    2002-01-01

    Describes a computer-mediated teaching system called computer-aided personalized system of instruction (CAPSI) that incorporates a social constructivist approach, maintaining that learning occurs primarily through a socially interactive process. Discusses use of CAPSI in an undergraduate course at the University of Manitoba that showed students…

  19. Hyperdimensional Computing Approach to Word Sense Disambiguation

    PubMed Central

    Berster, Bjoern-Toby; Goodwin, J Caleb; Cohen, Trevor

    2012-01-01

    Coping with the ambiguous meanings of words has long been a hurdle for information retrieval and natural language processing systems. This paper presents a new word sense disambiguation approach using high-dimensional binary vectors, which encode meanings of words based on the different contexts in which they occur. In our approach, a randomly constructed vector is assigned to each ambiguous term, and another to each sense of this term. In the context of a sense-annotated training set, a reversible vector transformation is used to combine these vectors, such that both the term and the sense assigned to a context in which the term occurs are encoded into vectors representing the surrounding terms in this context. When a new context is encountered, the information required to disambiguate this term is extracted from the trained semantic vectors for the terms in this context by reversing the vector transformation to recover the correct sense of the term. On repeated experiments using ten-fold cross-validation and a standard test set, we obtained results comparable to the best obtained in previous studies. These results demonstrate the potential of our methodology, and suggest directions for future research. PMID:23304389
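
    The reversible transformation can be imitated with a toy binding-and-bundling scheme over bipolar hypervectors. In the sketch below (Python/NumPy), elementwise multiplication serves as a self-inverse binding operator; the two-sense vocabulary and all names are illustrative assumptions, not the authors' random-indexing implementation:

        import numpy as np

        D = 10_000                                   # hypervector dimensionality
        rng = np.random.default_rng(0)
        hv = lambda: rng.choice([-1, 1], size=D)     # random bipolar hypervector

        bind = lambda a, b: a * b                    # reversible: bind(bind(a, b), b) == a
        bundle = lambda vs: np.sign(np.sum(vs, axis=0))   # majority-rule superposition

        term = hv()                                  # an ambiguous term, e.g. "bank"
        senses = {"bank_river": hv(), "bank_money": hv()}

        # training: encode (term, sense) pairs into the context words' memory vectors
        memory = {
            "water": bundle([bind(term, senses["bank_river"])]),
            "loan": bundle([bind(term, senses["bank_money"])]),
        }

        def disambiguate(context_word):
            probe = bind(memory[context_word], term)      # unbind to recover the sense
            return max(senses, key=lambda s: np.dot(probe, senses[s]))

        print(disambiguate("water"))   # -> bank_river
        print(disambiguate("loan"))    # -> bank_money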

  20. A correlative microscopy approach relates microtubule behaviour, local organ geometry, and cell growth at the Arabidopsis shoot apical meristem

    PubMed Central

    Burian, Agata; Uyttewaal, Magalie

    2013-01-01

    Cortical microtubules (CMTs) are often aligned in a particular direction in individual cells or even in groups of cells and play a central role in the definition of growth anisotropy. How the CMTs themselves are aligned is not well known, but two hypotheses have been proposed. According to the first hypothesis, CMTs align perpendicular to the maximal growth direction, and, according to the second, CMTs align parallel to the maximal stress direction. Since both hypotheses were formulated on the basis of mainly qualitative assessments, the link between CMT organization, organ geometry, and cell growth is revisited using a quantitative approach. For this purpose, CMT orientation, local curvature, and growth parameters for each cell were measured in the growing shoot apical meristem (SAM) of Arabidopsis thaliana. Using this approach, it has been shown that stable CMTs tend to be perpendicular to the direction of maximal growth in cells at the SAM periphery, but parallel in the cells at the boundary domain. When examining the local curvature of the SAM surface, no strict correlation between curvature and CMT arrangement was found, which implies that SAM geometry, and presumed geometry-derived stress distribution, is not sufficient to prescribe the CMT orientation. However, a better match between stress and CMTs was found when mechanical stress derived from differential growth was also considered. PMID:24153420

  1. A correlative microscopy approach relates microtubule behaviour, local organ geometry, and cell growth at the Arabidopsis shoot apical meristem.

    PubMed

    Burian, Agata; Ludynia, Michal; Uyttewaal, Magalie; Traas, Jan; Boudaoud, Arezki; Hamant, Olivier; Kwiatkowska, Dorota

    2013-12-01

    Cortical microtubules (CMTs) are often aligned in a particular direction in individual cells or even in groups of cells and play a central role in the definition of growth anisotropy. How the CMTs themselves are aligned is not well known, but two hypotheses have been proposed. According to the first hypothesis, CMTs align perpendicular to the maximal growth direction, and, according to the second, CMTs align parallel to the maximal stress direction. Since both hypotheses were formulated on the basis of mainly qualitative assessments, the link between CMT organization, organ geometry, and cell growth is revisited using a quantitative approach. For this purpose, CMT orientation, local curvature, and growth parameters for each cell were measured in the growing shoot apical meristem (SAM) of Arabidopsis thaliana. Using this approach, it has been shown that stable CMTs tend to be perpendicular to the direction of maximal growth in cells at the SAM periphery, but parallel in the cells at the boundary domain. When examining the local curvature of the SAM surface, no strict correlation between curvature and CMT arrangement was found, which implies that SAM geometry, and presumed geometry-derived stress distribution, is not sufficient to prescribe the CMT orientation. However, a better match between stress and CMTs was found when mechanical stress derived from differential growth was also considered. PMID:24153420

  3. Helical gears with circular arc teeth: Generation, geometry, precision and adjustment to errors, computer aided simulation of conditions of meshing and bearing contact

    NASA Technical Reports Server (NTRS)

    Litvin, Faydor L.; Tsay, Chung-Biau

    1987-01-01

    The authors have proposed a method for the generation of circular arc helical gears which is based on the application of standard equipment, worked out all aspects of the geometry of the gears, proposed methods for the computer aided simulation of conditions of meshing and bearing contact, investigated the influence of manufacturing and assembly errors, and proposed methods for the adjustment of gears to these errors. The results of computer aided solutions are illustrated with computer graphics.

  4. Multivariate analysis: A statistical approach for computations

    NASA Astrophysics Data System (ADS)

    Michu, Sachin; Kaushik, Vandana

    2014-10-01

    Multivariate analysis is a statistical approach commonly used in automotive diagnosis, education, cluster evaluation in finance, and, more recently, in the health-related professions. The objective of this paper is to provide a detailed exploratory discussion of factor analysis (FA) in image retrieval and correlation analysis (CA) of network traffic. Image retrieval methods aim to retrieve relevant images from a collected database based on their content. The problem is made more difficult by the high dimension of the variable space in which the images are represented. Multivariate correlation analysis provides an anomaly detection and analysis method based on the correlation coefficient matrix. Anomalous behaviors in the network include various attacks on the network, such as DDoS attacks and network scanning.
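
    As a rough illustration of the correlation-matrix idea, the sketch below (Python/NumPy; the synthetic features and the Frobenius-norm score are illustrative assumptions, not the method evaluated in the paper) scores a traffic window by how far its correlation coefficient matrix drifts from a normal baseline:

        import numpy as np

        rng = np.random.default_rng(1)

        def corr(window):
            # correlation coefficient matrix of the feature columns
            return np.corrcoef(window, rowvar=False)

        # baseline: three traffic features under normal conditions, two correlated
        normal = rng.normal(size=(500, 3))
        normal[:, 1] += 0.8 * normal[:, 0]
        baseline = corr(normal)

        def anomaly_score(window):
            # Frobenius distance between correlation structures
            return np.linalg.norm(corr(window) - baseline)

        # a scanning-like anomaly destroys the usual inter-feature correlation
        attack = rng.normal(size=(100, 3))

        print(anomaly_score(normal[:100]))   # small: matches the baseline
        print(anomaly_score(attack))         # larger: correlation structure broken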

  5. Computational Analysis of Particle Nucleation in Dilution Tunnels: Effect of Flow Configuration and Tunnel Geometry

    NASA Astrophysics Data System (ADS)

    Singh, Satbir; Adams, Peter; Misquitta, Ashwin; Lee, Kyung; Lipsky, Eric; Robinson, Allen

    2013-11-01

    Measurement of fine particle emission from combustion sources is important to understand their health effects and to develop emissions regulations. Dilution sampling is the most commonly used technique to measure particle number distribution because it simulates the cooling of combustion exhaust with atmospheric air. Experiments suggest that the measured distribution is dependent on the dilution ratio used and the tunnel design. In the present work, computational analysis is performed to investigate the effect of tunnel flow and geometric parameters on H2SO4-H2O binary nucleation inside dilution tunnels using a large-eddy-simulation (LES) based model. Model predictions suggest that the experimental trends are likely due to differences in the level of turbulence inside the tunnels. It is found that the interaction of dilution air and combustion exhaust in the mixing layer greatly impacts the extent of nucleation. In general, a cross-flow configuration with enhanced turbulent mixing leads to a greater number of nucleation-mode particles than an axial-flow configuration.

  6. Computational analysis of a rarefied hypersonic flow over combined gap/step geometries

    NASA Astrophysics Data System (ADS)

    Leite, P. H. M.; Santos, W. F. N.

    2015-06-01

    This work describes a computational analysis of a hypersonic flow over a combined gap/step configuration at zero degree angle of attack, in chemical equilibrium and thermal nonequilibrium. Effects on the flowfield structure due to changes in the step frontal-face height have been investigated by employing the Direct Simulation Monte Carlo (DSMC) method. The work focuses the attention of designers of hypersonic configurations on the fundamental parameter of surface discontinuity, which can have an important impact on even initial designs. The results highlight the sensitivity of the primary flowfield properties, velocity, density, pressure, and temperature, to changes in the step frontal-face height. The analysis showed that the upstream disturbance in the gap/step configuration increased with increasing frontal-face height. In addition, it was observed that the separation region for the gap/step configuration increased with increasing step frontal-face height. It was found that density and pressure for the gap/step configuration dramatically increased inside the gap as compared to those observed for the gap configuration, i.e., a gap without a step.

  7. Computational approach to the study of thermal spin crossover phenomena

    SciTech Connect

    Rudavskyi, Andrii; Broer, Ria; Sousa, Carmen

    2014-05-14

    The key parameters associated with the thermally induced spin crossover process have been calculated for a series of Fe(II) complexes with mono-, bi-, and tridentate ligands. Combination of density functional theory calculations for the geometries and for normal vibrational modes, and highly correlated wave function methods for the energies, allows us to accurately compute the entropy variation associated with the spin transition and the zero-point corrected energy difference between the low- and high-spin states. From these values, the transition temperature, T1/2, is estimated for different compounds.
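
    The transition-temperature estimate follows from the condition that the Gibbs energy difference between the high- and low-spin states vanishes at T1/2. As a hedged sketch of this standard relation (with the zero-point corrected energy gap standing in for the enthalpy difference, an approximation rather than the authors' exact protocol):

        \Delta G_{HL}(T_{1/2}) = 0 \quad\Longrightarrow\quad T_{1/2} \approx \frac{\Delta E^{0}_{HL}}{\Delta S_{HL}}

    where \Delta E^{0}_{HL} is the zero-point corrected energy difference between the low- and high-spin states and \Delta S_{HL} the entropy variation associated with the transition.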

  8. Aluminium in Biological Environments: A Computational Approach

    PubMed Central

    Mujika, Jon I; Rezabal, Elixabete; Mercero, Jose M; Ruipérez, Fernando; Costa, Dominique; Ugalde, Jesus M; Lopez, Xabier

    2014-01-01

    The increased availability of aluminium in biological environments, due to human intervention in the last century, raises concerns on the effects that this so far "excluded from biology" metal might have on living organisms. Consequently, the bioinorganic chemistry of aluminium has emerged as a very active field of research. This review will focus on our contributions to this field, based on computational studies that can yield an understanding of aluminium biochemistry at a molecular level. Aluminium can interact and be stabilized in biological environments by complexing with both low molecular mass chelants and high molecular mass peptides. The speciation of the metal is, nonetheless, dictated by the hydrolytic species dominant in each case, which vary according to the pH of the medium. In blood, citrate and serum transferrin are identified as the main low molecular mass and high molecular mass molecules interacting with aluminium. The complexation of aluminium to citrate and the subsequent changes exerted on the deprotonation pathways of its titratable groups will be discussed, along with the mechanisms for the intake and release of aluminium in serum transferrin at two pH conditions, physiological neutral and endosomal acidic. Aluminium can substitute for other metals, in particular magnesium, in protein buried sites and trigger conformational disorder and alteration of the protonation states of the protein's side chains. A detailed account of the interaction of aluminium with protein side chains will be given. Finally, it will be described how aluminium can exert oxidative stress by stabilizing superoxide radicals either as mononuclear aluminium or clustered in boehmite. The possibility of promotion of the Fenton reaction and production of hydroxyl radicals will also be discussed. PMID:24757505

  9. A Process Education Approach To Teaching Computer Science.

    ERIC Educational Resources Information Center

    Smith, Peter D.

    The driving force of process education is its focus on students' "learning to learn." This paper describes an approach to teaching computer science which includes classroom management; the adaptation of four different courses to follow the process education approach; successes achieved; and students' responses. The courses are conducted in closed…

  10. Evaluation and optimization of the performance of frame geometries for lithium-ion battery application by computer simulation

    NASA Astrophysics Data System (ADS)

    Miranda, D.; Miranda, F.; Costa, C. M.; Almeida, A. M.; Lanceros-Méndez, S.

    2016-06-01

    Tailoring battery geometries is essential for many applications, as geometry influences the delivered capacity value. Two geometries, frame and conventional, have been studied and, for a given scan rate of 330C, the square frame shows a capacity value of 305.52 Ah m-2, which is 527 times higher than that of the conventional geometry for a constant area of all components.

  11. Rethinking Connes' approach to the standard model of particle physics via non-commutative geometry

    NASA Astrophysics Data System (ADS)

    Boyle, Latham; Farnsworth, Shane

    2015-04-01

    Connes' non-commutative geometry (NCG) is a generalization of Riemannian geometry that is particularly apt for expressing the standard model of particle physics coupled to Einstein gravity. Recently, we suggested a reformulation of this framework that is: (i) simpler and more unified in its axioms, and (ii) allows the Lagrangian for the standard model of particle physics (coupled to Einstein gravity) to be specified in a way that is tighter and more explanatory than the traditional algorithm based on effective field theory. Here we explain how this same reformulation yields a new perspective on the symmetries of a given NCG. Applying this perspective to the NCG traditionally used to describe the standard model we find, instead, an extension of the standard model by an extra U(1)_{B-L} gauge symmetry, and a single extra complex scalar field σ, which is a singlet under SU(3)_C × SU(2)_L × U(1)_Y, but has B-L = 2. This field has cosmological implications, and offers a new solution to the discrepancy between the observed Higgs mass and the NCG prediction. We acknowledge support from an NSERC Discovery Grant.

  12. Rethinking Connes’ Approach to the Standard Model of Particle Physics Via Non-Commutative Geometry

    NASA Astrophysics Data System (ADS)

    Farnsworth, Shane; Boyle, Latham

    2015-02-01

    Connes' non-commutative geometry (NCG) is a generalization of Riemannian geometry that is particularly apt for expressing the standard model of particle physics coupled to Einstein gravity. In a previous paper, we suggested a reformulation of this framework that is: (i) simpler and more unified in its axioms, and (ii) allows the Lagrangian for the standard model of particle physics (coupled to Einstein gravity) to be specified in a way that is tighter and more explanatory than the traditional algorithm based on effective field theory. Here we explain how this same reformulation yields a new perspective on the symmetries of a given NCG. Applying this perspective to the NCG traditionally used to describe the standard model we find, instead, an extension of the standard model by an extra U(1)_{B-L} gauge symmetry, and a single extra complex scalar field σ, which is a singlet under SU(3)_C × SU(2)_L × U(1)_Y, but has B-L = 2. This field has cosmological implications, and offers a new solution to the discrepancy between the observed Higgs mass and the NCG prediction.

  13. Influence of Subducting Plate Geometry on Upper Plate Deformation at Orogen Syntaxes: A Thermomechanical Modeling Approach

    NASA Astrophysics Data System (ADS)

    Nettesheim, Matthias; Ehlers, Todd; Whipp, David

    2016-04-01

    Syntaxes are short, convex bends in the otherwise slightly concave plate boundaries of subduction zones. These regions are of scientific interest because some syntaxes (e.g., the Himalaya or St. Elias region in Alaska) exhibit exceptionally rapid, focused rock uplift. These observations have led to a hypothesized connection between erosional and tectonic processes (a top-down control), but studies have so far neglected the unique 3D geometry of the subducting plates at these locations. In this study, we contribute to this discussion by exploring the idea that subduction geometry may be sufficient to trigger focused tectonic uplift in the overriding plate (a bottom-up control). For this, we use a fully coupled 3D thermomechanical model that includes thermochronometric age prediction. The downgoing plate is approximated as a spherical indenter of high rigidity, whereas both viscous and visco-plastic material properties are used to model deformation in the overriding plate. We also consider the influence of the curvature of the subduction zone and the ratio of subduction velocity to subduction zone advance. We evaluate these models with respect to their effect on the upper plate exhumation rates and localization. Results indicate that increasing curvature of the indenter and a stronger upper crust lead to more focused tectonic uplift, whereas slab advance causes the uplift focus to migrate and thus may hinder the emergence of a positive feedback.

  14. Ultrasonic approach for formation of erbium oxide nanoparticles with variable geometries.

    PubMed

    Radziuk, Darya; Skirtach, André; Gessner, Andre; Kumke, Michael U; Zhang, Wei; Möhwald, Helmuth; Shchukin, Dmitry

    2011-12-01

    Ultrasound (20 kHz, 29 W·cm(-2)) is employed to form three types of erbium oxide nanoparticles in the presence of multiwalled carbon nanotubes as a template material in water. The nanoparticles are (i) erbium carboxioxide nanoparticles deposited on the external walls of multiwalled carbon nanotubes and Er(2)O(3) in the bulk with (ii) hexagonal and (iii) spherical geometries. Each type of ultrasonically formed nanoparticle reveals Er(3+) photoluminescence from the crystal lattice. The main advantage of the erbium carboxioxide nanoparticles on the carbon nanotubes is their electromagnetic emission in the visible region, which has not been examined before. On the other hand, the photoluminescence of hexagonal erbium oxide nanoparticles is long-lived (μs) and enables the higher-energy transition ((4)S(3/2)-(4)I(15/2)), which is not observed for spherical nanoparticles. Our work is unique because it combines for the first time spectroscopy of Er(3+) electronic transitions in the host crystal lattices of nanoparticles with the geometry established by ultrasound in an aqueous solution of carbon nanotubes employed as a template material. The work can be of great interest for "green" chemistry synthesis of photoluminescent nanoparticles in water.

  15. Analytic reconstruction approach for parallel translational computed tomography.

    PubMed

    Kong, Huihua; Yu, Hengyong

    2015-01-01

    To develop low-cost and low-dose computed tomography (CT) scanners for developing countries, a parallel translational computed tomography (PTCT) scheme was recently proposed, in which the source and detector are translated in opposite directions with respect to the imaging object without a slip-ring. In this paper, we develop an analytic filtered-backprojection (FBP)-type reconstruction algorithm for two-dimensional (2D) fan-beam PTCT and extend it to three-dimensional (3D) cone-beam geometry in a Feldkamp-type framework. In particular, a weighting function is constructed to deal with data redundancy for multiple-translation PTCT to eliminate image artifacts. Extensive numerical simulations are performed to validate and evaluate the proposed analytic reconstruction algorithms, and the results confirm their correctness and merits. PMID:25882732
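
    For orientation, a classical parallel-beam FBP pipeline, ramp filtering each projection in the frequency domain and then backprojecting, can be sketched in a few lines (Python/NumPy; this is the textbook parallel-beam case with nearest-neighbour interpolation, not the PTCT geometry or the redundancy weighting described above):

        import numpy as np

        def ramp_filter(sinogram):
            # ramp-filter each projection (rows = angles) in the frequency domain
            freqs = np.abs(np.fft.fftfreq(sinogram.shape[1]))
            return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * freqs, axis=1))

        def backproject(filtered, angles, size):
            # smear each filtered projection back across the image grid
            xs = np.arange(size) - size / 2
            X, Y = np.meshgrid(xs, xs)
            recon = np.zeros((size, size))
            for proj, theta in zip(filtered, angles):
                t = X * np.cos(theta) + Y * np.sin(theta) + size / 2  # detector coord
                recon += proj[np.clip(np.round(t).astype(int), 0, size - 1)]
            return recon * np.pi / len(angles)

        # analytic parallel-beam sinogram of a centered disk of radius r (density 1)
        size, r = 128, 30
        angles = np.linspace(0, np.pi, 180, endpoint=False)
        t = np.arange(size) - size / 2
        sino = np.tile(2 * np.sqrt(np.clip(r**2 - t**2, 0, None)), (len(angles), 1))

        recon = backproject(ramp_filter(sino), angles, size)
        print(recon[size // 2, size // 2], recon[5, 5])  # ~1 inside the disk, ~0 outside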

  16. Propagation of computer virus both across the Internet and external computers: A complex-network approach

    NASA Astrophysics Data System (ADS)

    Gan, Chenquan; Yang, Xiaofan; Liu, Wanping; Zhu, Qingyi; Jin, Jian; He, Li

    2014-08-01

    Based on the assumption that external computers (particularly, infected external computers) are connected to the Internet, and by considering the influence of the Internet topology on computer virus spreading, this paper establishes a novel computer virus propagation model with a complex-network approach. This model possesses a unique (viral) equilibrium which is globally attractive. Some numerical simulations are also given to illustrate this result. Further study shows that the computers with higher node degrees are more susceptible to infection than those with lower node degrees. In this regard, some appropriate protective measures are suggested.
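
    The degree effect can be reproduced with a minimal degree-based mean-field SIS model on a power-law network (Python/NumPy sketch; the rates, degree cutoff, and the omission of the external-computer compartment are simplifying assumptions, so this illustrates the qualitative result rather than the paper's model):

        import numpy as np

        # degree-based mean-field SIS: drho_k/dt = -delta*rho_k + beta*k*(1-rho_k)*Theta
        degrees = np.arange(1, 51)
        pk = degrees.astype(float) ** -2.5
        pk /= pk.sum()                      # truncated power-law degree distribution
        beta, delta, dt = 0.05, 0.1, 0.1
        rho = np.full(len(degrees), 0.01)   # initial infected fraction per degree class

        for _ in range(5000):
            # Theta: probability a randomly followed edge reaches an infected node
            theta = (pk * degrees * rho).sum() / (pk * degrees).sum()
            rho += dt * (-delta * rho + beta * degrees * (1 - rho) * theta)
            rho = np.clip(rho, 0.0, 1.0)

        # equilibrium infection level rises with node degree
        for k in (1, 10, 50):
            print(f"degree {k}: infected fraction {rho[k - 1]:.2f}")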

  17. On the Geometry of the Berry-Robbins Approach to Spin-Statistics

    NASA Astrophysics Data System (ADS)

    Papadopoulos, Nikolaos; Reyes-Lega, Andrés F.

    2010-07-01

    Within a geometric and algebraic framework, the structures which are related to the spin-statistics connection are discussed. A comparison with the Berry-Robbins approach is made. The underlying geometric structure constitutes an additional support for this approach. In our work, a geometric approach to quantum indistinguishability is introduced which allows the treatment of single-valuedness of wave functions in a global, model-independent way.

  18. An approach to computing direction relations between separated object groups

    NASA Astrophysics Data System (ADS)

    Yan, H.; Wang, Z.; Li, J.

    2013-06-01

    Direction relations between object groups play an important role in qualitative spatial reasoning, spatial computation and spatial recognition. However, none of the existing models can be used to compute direction relations between object groups. To fill this gap, an approach to computing direction relations between separated object groups is proposed in this paper, which is theoretically based on Gestalt principles and the idea of multi-directions. The approach firstly triangulates the two object groups; then it constructs the Voronoi diagram between the two groups using the triangular network; after this, the normal of each Voronoi edge is calculated, and the quantitative expression of the direction relations is constructed; finally, the quantitative direction relations are transformed into qualitative ones. The psychological experiments show that the proposed approach can obtain direction relations both between two single objects and between two object groups, and the results are correct from the point of view of spatial cognition.

  19. An approach to computing direction relations between separated object groups

    NASA Astrophysics Data System (ADS)

    Yan, H.; Wang, Z.; Li, J.

    2013-09-01

    Direction relations between object groups play an important role in qualitative spatial reasoning, spatial computation and spatial recognition. However, none of the existing models can be used to compute direction relations between object groups. To fill this gap, an approach to computing direction relations between separated object groups is proposed in this paper, which is theoretically based on Gestalt principles and the idea of multi-directions. The approach firstly triangulates the two object groups, and then it constructs the Voronoi diagram between the two groups using the triangular network. After this, the normal of each Voronoi edge is calculated, and the quantitative expression of the direction relations is constructed. Finally, the quantitative direction relations are transformed into qualitative ones. The psychological experiments show that the proposed approach can obtain direction relations both between two single objects and between two object groups, and the results are correct from the point of view of spatial cognition.
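
    A drastically simplified stand-in for this pipeline, replacing the triangulation and Voronoi construction with nearest-neighbour offset vectors, still illustrates the final quantitative-to-qualitative step (Python/NumPy; the point sets and the four-sector labelling are hypothetical choices, not the paper's model):

        import numpy as np

        # two object groups as 2D point sets (e.g. building centroids)
        A = np.array([[0.0, 0.0], [1.0, 0.5], [0.5, 1.0]])
        B = np.array([[4.0, 3.0], [5.0, 3.5], [4.5, 4.5]])

        # for each point of A, the unit vector to its nearest neighbour in B
        d = B[None, :, :] - A[:, None, :]
        nearest = d[np.arange(len(A)), np.argmin((d**2).sum(-1), axis=1)]
        units = nearest / np.linalg.norm(nearest, axis=1, keepdims=True)

        # quantitative direction: mean bearing from group A to group B
        mean_dir = units.mean(axis=0)
        angle = np.degrees(np.arctan2(mean_dir[1], mean_dir[0])) % 360

        # qualitative direction relation via a four-sector cone model
        labels = ["east", "north", "west", "south"]
        print(labels[int(((angle + 45) % 360) // 90)])   # -> "east" here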

  20. A tale of three bio-inspired computational approaches

    NASA Astrophysics Data System (ADS)

    Schaffer, J. David

    2014-05-01

    I will provide a high-level walk-through of three computational approaches derived from Nature. First, evolutionary computation implements what we may call the "mother of all adaptive processes." Some variants on the basic algorithms will be sketched, along with some lessons I have gleaned from three decades of working with EC. Next come neural networks, computational approaches that have long been studied as possible ways to make "thinking machines," an old dream of ours, based upon the only known existing example of intelligence. I then give a brief overview of attempts to combine these two approaches, which some hope will allow us to evolve machines we could never hand-craft. Finally, I will touch on artificial immune systems, Nature's highly sophisticated defense mechanism, which has emerged in two major stages: the innate and the adaptive immune systems. This technology is finding applications in the cyber security world.

  1. Sensing and perception: Connectionist approaches to subcognitive computing

    NASA Technical Reports Server (NTRS)

    Skrzypek, J.

    1987-01-01

    New approaches to machine sensing and perception are presented. The motivation for cross-disciplinary studies of perception in terms of AI and the neurosciences is suggested. The question of computing architecture granularity as related to the global/local computation underlying perceptual function is considered, and examples of two environments are given. Finally, examples of using one of these environments, UCLA PUNNS, to study neural architectures for visual function are presented.

  2. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1992-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  3. A scalable and accurate method for classifying protein-ligand binding geometries using a MapReduce approach.

    PubMed

    Estrada, T; Zhang, B; Cicotti, P; Armen, R S; Taufer, M

    2012-07-01

    We present a scalable and accurate method for classifying protein-ligand binding geometries in molecular docking. Our method is a three-step process: the first step encodes the geometry of a three-dimensional (3D) ligand conformation into a single 3D point in the space; the second step builds an octree by assigning an octant identifier to every single point in the space under consideration; and the third step performs an octree-based clustering on the reduced conformation space and identifies the most dense octant. We adapt our method for MapReduce and implement it in Hadoop. The load-balancing, fault-tolerance, and scalability in MapReduce allow screening of very large conformation spaces not approachable with traditional clustering methods. We analyze results for docking trials for 23 protein-ligand complexes for HIV protease, 21 protein-ligand complexes for Trypsin, and 12 protein-ligand complexes for P38alpha kinase. We also analyze cross docking trials for 24 ligands, each docking into 24 protein conformations of the HIV protease, and receptor ensemble docking trials for 24 ligands, each docking in a pool of HIV protease receptors. Our method demonstrates significant improvement over energy-only scoring for the accurate identification of native ligand geometries in all these docking assessments. The advantages of our clustering approach make it attractive for complex applications in real-world drug design efforts. We demonstrate that our method is particularly useful for clustering docking results using a minimal ensemble of representative protein conformational states (receptor ensemble docking), which is now a common strategy to address protein flexibility in molecular docking. PMID:22658682
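
    The octant-identifier and densest-octant steps can be sketched without the Hadoop machinery (Python/NumPy; this assumes each conformation has already been reduced to a single 3D point as in the first step, and the synthetic poses are hypothetical):

        import numpy as np
        from collections import Counter

        def octant_ids(points, lo, hi, depth):
            """Interleaved octree octant identifier of each 3D point at a given depth."""
            u = (points - lo) / (hi - lo)                       # normalize into [0, 1)
            cells = np.clip((u * 2**depth).astype(int), 0, 2**depth - 1)
            ids = np.zeros(len(points), dtype=np.int64)
            for level in range(depth):
                bits = (cells >> (depth - 1 - level)) & 1       # one bit per axis
                ids = (ids << 3) | (bits[:, 0] << 2) | (bits[:, 1] << 1) | bits[:, 2]
            return ids

        # synthetic docking output: a tight near-native cluster plus scattered poses
        rng = np.random.default_rng(7)
        pts = np.vstack([rng.normal([1.0, 2.0, 3.0], 0.05, size=(80, 3)),
                         rng.uniform(-5, 5, size=(40, 3))])

        ids = octant_ids(pts, pts.min(0), pts.max(0) + 1e-9, depth=4)
        densest, count = Counter(ids).most_common(1)[0]
        print(count, pts[ids == densest].mean(axis=0))  # densest octant ~ native pose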

  4. Bending and twisting the embryonic heart: a computational model for c-looping based on realistic geometry

    PubMed Central

    Shi, Yunfei; Yao, Jiang; Young, Jonathan M.; Fee, Judy A.; Perucchio, Renato; Taber, Larry A.

    2014-01-01

    The morphogenetic process of cardiac looping transforms the straight heart tube into a curved tube that resembles the shape of the future four-chambered heart. Although great progress has been made in identifying the molecular and genetic factors involved in looping, the physical mechanisms that drive this process have remained poorly understood. Recent work, however, has shed new light on this complicated problem. After briefly reviewing the current state of knowledge, we propose a relatively comprehensive hypothesis for the mechanics of the first phase of looping, termed c-looping, as the straight heart tube deforms into a c-shaped tube. According to this hypothesis, differential hypertrophic growth in the myocardium supplies the main forces that cause the heart tube to bend ventrally, while regional growth and cytoskeletal contraction in the omphalomesenteric veins (primitive atria) and compressive loads exerted by the splanchnopleuric membrane drive rightward torsion. A computational model based on realistic embryonic heart geometry is used to test the physical plausibility of this hypothesis. The behavior of the model is in reasonable agreement with available experimental data from control and perturbed embryos, offering support for our hypothesis. The results also suggest, however, that several other mechanisms contribute secondarily to normal looping, and we speculate that these mechanisms play backup roles when looping is perturbed. Finally, some outstanding questions are discussed for future study. PMID:25161623

  5. Bending and twisting the embryonic heart: a computational model for c-looping based on realistic geometry.

    PubMed

    Shi, Yunfei; Yao, Jiang; Young, Jonathan M; Fee, Judy A; Perucchio, Renato; Taber, Larry A

    2014-01-01

    The morphogenetic process of cardiac looping transforms the straight heart tube into a curved tube that resembles the shape of the future four-chambered heart. Although great progress has been made in identifying the molecular and genetic factors involved in looping, the physical mechanisms that drive this process have remained poorly understood. Recent work, however, has shed new light on this complicated problem. After briefly reviewing the current state of knowledge, we propose a relatively comprehensive hypothesis for the mechanics of the first phase of looping, termed c-looping, as the straight heart tube deforms into a c-shaped tube. According to this hypothesis, differential hypertrophic growth in the myocardium supplies the main forces that cause the heart tube to bend ventrally, while regional growth and cytoskeletal contraction in the omphalomesenteric veins (primitive atria) and compressive loads exerted by the splanchnopleuric membrane drive rightward torsion. A computational model based on realistic embryonic heart geometry is used to test the physical plausibility of this hypothesis. The behavior of the model is in reasonable agreement with available experimental data from control and perturbed embryos, offering support for our hypothesis. The results also suggest, however, that several other mechanisms contribute secondarily to normal looping, and we speculate that these mechanisms play backup roles when looping is perturbed. Finally, some outstanding questions are discussed for future study.

  6. Higher spin approaches to quantum field theory and (pseudo)-Riemannian geometries

    NASA Astrophysics Data System (ADS)

    Hallowell, Karl Evan

    In this thesis, we study a number of higher spin quantum field theories and some of their algebraic and geometric consequences. These theories apply mostly either over constant curvature or more generally symmetric pseudo-Riemannian manifolds. The first part of this dissertation covers a superalgebra coming from a family of particle models over symmetric spaces. These theories are novel in that the symmetries of the (super)algebra osp(Q|2p) are larger and more elaborate than traditional symmetries. We construct useful (super)algebras related to and generalizing old work by Lichnerowicz and describe their role in developing the geometry of massless models with osp(Q|2p) symmetry. The result is two practical applications of these (super)algebras: (1) a much more concise description of a family of higher spin quantum field theories; and (2) an interesting algebraic probe of underlying background geometries. We also consider massive models over constant curvature spaces. We use a radial dimensional reduction process which converts massless models into massive ones over a lower dimensional space. In our case, we take from the family of theories above the particular free, massless model over flat space associated with sp(2,R) and derive a massive model. In the process, we develop a novel associative algebra, which is a deformation of the original differential operator algebra associated with the sp(2,R) model. This algebra is interesting in its own right since its operators realize the representation structure of the sp(2,R) group. The massive model also has implications for a sequence of unusual, "partially massless" theories. The derivation illuminates how reduced degrees of freedom become manifest in these particular models. Finally, we study a Yang-Mills model using an on-shell Poincare Yang-Mills twist of the Maxwell complex along with a non-minimal coupling. This is a special, higher spin case of a quantum field theory called a Yang-Mills detour complex

  7. The Interpretative Flexibility, Instrumental Evolution, and Institutional Adoption of Mathematical Software in Educational Practice: The Examples of Computer Algebra and Dynamic Geometry

    ERIC Educational Resources Information Center

    Ruthven, Kenneth

    2008-01-01

    This article examines three important facets of the incorporation of new technologies into educational practice, focusing on emergent usages of the mathematical tools of computer algebra and dynamic geometry. First, it illustrates the interpretative flexibility of these tools, highlighting important differences in ways of conceptualizing and…

  8. Aircraft Engine Noise Scattering by Fuselage and Wings: A Computational Approach

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Stanescu, D.; Hussaini, M. Y.

    2003-01-01

    The paper presents a time-domain method for computation of sound radiation from aircraft engine sources to the far field. The effects of non-uniform flow around the aircraft and scattering of sound by fuselage and wings are accounted for in the formulation. The approach is based on the discretization of the inviscid flow equations through a collocation form of the discontinuous Galerkin spectral element method. An isoparametric representation of the underlying geometry is used in order to take full advantage of the spectral accuracy of the method. Large-scale computations are made possible by a parallel implementation based on message passing. Results obtained for radiation from an axisymmetric nacelle alone are compared with those obtained when the same nacelle is installed in a generic configuration, with and without a wing.

  9. Aircraft Engine Noise Scattering By Fuselage and Wings: A Computational Approach

    NASA Technical Reports Server (NTRS)

    Stanescu, D.; Hussaini, M. Y.; Farassat, F.

    2003-01-01

    The paper presents a time-domain method for computation of sound radiation from aircraft engine sources to the far-field. The effects of nonuniform flow around the aircraft and scattering of sound by fuselage and wings are accounted for in the formulation. The approach is based on the discretization of the inviscid flow equations through a collocation form of the Discontinuous Galerkin spectral element method. An isoparametric representation of the underlying geometry is used in order to take full advantage of the spectral accuracy of the method. Large-scale computations are made possible by a parallel implementation based on message passing. Results obtained for radiation from an axisymmetric nacelle alone are compared with those obtained when the same nacelle is installed in a generic configuration, with and without a wing.

  11. Antisolvent crystallization approach to construction of CuI superstructures with defined geometries.

    PubMed

    Kozhummal, Rajeevan; Yang, Yang; Güder, Firat; Küçükbayrak, Umut M; Zacharias, Margit

    2013-03-26

    A facile high-yield production of cuprous iodide (CuI) superstructures is reported by antisolvent crystallization using acetonitrile/water as a solvent/antisolvent couple under ambient conditions. In the presence of trace water, the metastable water droplets act as templates to induce the precipitation of hollow spherical CuI superstructures consisting of orderly aligned building blocks after drop coating. With water in excess in the mixed solution, an instant precipitation of CuI random aggregates takes place due to rapid crystal growth via ion-by-ion attachment induced by a strong antisolvent effect. However, this uncontrolled process can be modified by adding the polymer polyvinyl pyrrolidone (PVP) in water to restrict the size of initially formed CuI crystal nuclei through the effective coordination effect of PVP. As a result, CuI superstructures with a cuboid geometry are constructed by gradual self-assembly of the small CuI crystals via oriented attachment. The precipitated CuI superstructures have been used as competent adsorbents to remove organic dyes from water due to their mesocrystal feature. Besides, the CuI superstructures have been applied either as a self-sacrificial template or only as a structuring template for the flexible design of other porous materials such as CuO and TiO2. This system provides an ideal platform to simultaneously investigate superstructure formation enforced by antisolvent crystallization with and without organic additives. PMID:23441989

  13. Euclidean Geometry via Programming.

    ERIC Educational Resources Information Center

    Filimonov, Rossen; Kreith, Kurt

    1992-01-01

    Describes the Plane Geometry System computer software developed at the Educational Computer Systems laboratory in Sofia, Bulgaria. The system enables students to use the concept of "algorithm" to correspond to the process of "deductive proof" in the development of plane geometry. Provides an example of the software's capability and compares it to…

  14. CellGeo: A computational platform for the analysis of shape changes in cells with complex geometries

    PubMed Central

    Tsygankov, Denis; Bilancia, Colleen G.; Vitriol, Eric A.; Hahn, Klaus M.

    2014-01-01

    Cell biologists increasingly rely on computer-aided image analysis, allowing them to collect precise, unbiased quantitative results. However, despite great progress in image processing and computer vision, current computational approaches fail to address many key aspects of cell behavior, including the cell protrusions that guide cell migration and drive morphogenesis. We developed the open source MATLAB application CellGeo, a user-friendly computational platform to allow simultaneous, automated tracking and analysis of dynamic changes in cell shape, including protrusions ranging from filopodia to lamellipodia. Our method maps an arbitrary cell shape onto a tree graph that, unlike traditional skeletonization algorithms, preserves complex boundary features. CellGeo allows rigorous but flexible definition and accurate automated detection and tracking of geometric features of interest. We demonstrate CellGeo's utility by deriving new insights into (a) the roles of Diaphanous, Enabled, and Capping protein in regulating filopodia and lamellipodia dynamics in Drosophila melanogaster cells and (b) the dynamic properties of growth cones in catecholaminergic a-differentiated neuroblastoma cells. PMID:24493591

  15. The method of characteristics and computational fluid dynamics applied to the prediction of underexpanded jet flows in annular geometry

    NASA Astrophysics Data System (ADS)

    Kim, Sangwon

    2005-11-01

    High pressure (3.4 MPa) injection from a shroud valve can improve natural gas engine efficiency by enhancing fuel-air mixing. Since the fuel jet issuing from the shroud valve has a nearly annular jet flow configuration, it is necessary to analyze the annular jet flow to understand the fuel jet behavior in the mixing process and to improve the shroud design for better mixing. The method of characteristics (MOC) was used as the primary modeling algorithm in this work and Computational Fluid Dynamics (CFD) was used primarily to validate the MOC results. A consistent process for dealing with the coalescence of compression characteristic lines into a shock wave during the MOC computation was developed. By the application of the shock polar in the pressure-flow angle plane to the incident shock wave for an axisymmetric underexpanded jet and the comparison with the triple point location found in experimental results, it was found that, for static pressure ratios of 2-50, a triple point of the jet was located at the point where the flow angle after the incident shock became -5° relative to the axis, and this point was situated between the von Neumann and detachment criteria on the incident shock. MOC computations of the jet flow with annular geometry were performed for pressure ratios of 10 and 20 with r_annulus = 10-50 units, Δr = 2 units. In this pressure ratio range, the MOC results did not predict a Mach disc in the core flow of the annular jet, but did indicate the formation of a Mach disc where the jet meets the axis of symmetry. The MOC results display the annular jet configurations clearly. Three types of nozzles for application to gas injectors (convergent-divergent nozzle, conical nozzle, and aerospike nozzle) were designed using the MOC and evaluated in on- and off-design conditions using CFD. The average axial momentum per unit mass was improved by 17 to 24% and the average kinetic energy per unit fuel mass was improved by 30 to 80% compared with a standard
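
    Any such MOC computation is built around the Prandtl-Meyer function, quoted here as a standard reference relation for a perfect gas with specific heat ratio \gamma (general theory, not a result of this thesis):

        \nu(M) = \sqrt{\frac{\gamma+1}{\gamma-1}}\,\arctan\sqrt{\frac{\gamma-1}{\gamma+1}\left(M^{2}-1\right)} - \arctan\sqrt{M^{2}-1}

    In planar flow the combinations \theta \pm \nu are invariant along the characteristics; the axisymmetric and annular cases add a curvature source term that must be integrated numerically, which is where compression characteristics can coalesce into the shocks handled by the procedure described above.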

  16. Fractal geometry as a new approach for proving nanosimilarity: a reflection note.

    PubMed

    Demetzos, Costas; Pippa, Natassa

    2015-04-10

    Nanosimilars are considered new medicinal products combining a generic drug with the nanocarrier as an innovative excipient, to be evaluated as a final product. They belong to the grey area, concerning the evaluation process, between generic drugs and biosimilar medicinal products. Generic drugs are well documented and a huge number of them are on the market, effectively replacing off-patent drugs. The scientific approach for releasing them to the market is based on bioequivalence studies, which are well documented and accepted by the regulatory agencies. On the other hand, the structural complexity of biological/biotechnology-derived products demands a new approach for the approval process, taking into consideration that bioequivalence studies are not considered sufficient as they are for generic drugs, and new clinical trials are needed to support the approval of the product for the market. Correspondingly, due to the technological complexity of nanomedicines, the approaches for proving statistical identity or similarity of generic and biosimilar products, respectively, with the prototypes are not considered effective for nanosimilar products. The aim of this note is to propose a complementary approach that can provide realistic evidence concerning nanosimilarity, based on fractal analysis. This approach fits well with the structural complexity of nanomedicines and eases the difficulties of proving similarity between off-patent and nanosimilar products. Fractal analysis could be considered the approach that completely characterizes the physicochemical/morphological characteristics of nanosimilar products and could be proposed as a starting point for a deep discussion on nanosimilarity.
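
    Box counting is one concrete fractal measure that such an approach could build on. A minimal sketch (Python/NumPy; the binary-image input and power-of-two grids are simplifying assumptions, offered as an illustration of fractal analysis in general rather than the authors' protocol):

        import numpy as np

        def box_counting_dimension(mask):
            """Estimate the fractal dimension of a square 2D binary image."""
            n = mask.shape[0]                      # assume n is a power of two
            sizes = [2**k for k in range(1, int(np.log2(n)))]
            counts = []
            for s in sizes:
                # count boxes of side s containing at least one occupied pixel
                boxes = mask.reshape(n // s, s, n // s, s).any(axis=(1, 3))
                counts.append(boxes.sum())
            # dimension = slope of log N(s) against log(1/s)
            slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
            return slope

        # sanity check: a filled square is ~2-dimensional, its outline ~1-dimensional
        n = 256
        filled = np.ones((n, n), dtype=bool)
        outline = np.zeros((n, n), dtype=bool)
        outline[[0, -1], :] = outline[:, [0, -1]] = True
        print(box_counting_dimension(filled), box_counting_dimension(outline))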

  17. Combined computational and experimental approach to improve the assessment of mitral regurgitation by echocardiography.

    PubMed

    Sonntag, Simon J; Li, Wei; Becker, Michael; Kaestner, Wiebke; Büsen, Martin R; Marx, Nikolaus; Merhof, Dorit; Steinseifer, Ulrich

    2014-05-01

    Mitral regurgitation (MR) is one of the most frequent valvular heart diseases. To assess MR severity, color Doppler imaging (CDI) is the clinical standard. However, inadequate reliability, poor reproducibility and heavy user-dependence are known limitations. A novel approach combining computational and experimental methods is currently under development aiming to improve the quantification. A flow chamber for a circulatory flow loop was developed. Three different orifices were used to mimic variations of MR. The flow field was recorded simultaneously by a 2D Doppler ultrasound transducer and Particle Image Velocimetry (PIV). Computational Fluid Dynamics (CFD) simulations were conducted using the same geometry and boundary conditions. The resulting computed velocity field was used to simulate synthetic Doppler signals. Comparison between PIV and CFD shows a high level of agreement. The simulated CDI exhibits the same characteristics as the recorded color Doppler images. The feasibility of the proposed combination of experimental and computational methods for the investigation of MR is shown and the numerical methods are successfully validated against the experiments. Furthermore, it is discussed how the approach can be used in the long run as a platform to improve the assessment of MR quantification.

  18. A new approach to compute accurate velocity of meteors

    NASA Astrophysics Data System (ADS)

    Egal, Auriane; Gural, Peter; Vaubaillon, Jeremie; Colas, Francois; Thuillot, William

    2016-10-01

    The CABERNET project was designed to push the limits of meteoroid orbit measurements by improving the determination of the meteors' velocities. Indeed, despite the development of camera networks dedicated to the observation of meteors, there is still an important discrepancy between the computed orbits of meteoroids and theoretical expectations. The gap between the observed and theoretical semi-major axes of the orbits is especially significant; an accurate determination of the orbits of meteoroids therefore largely depends on the computation of the pre-atmospheric velocities. It is then imperative to determine how to increase the precision of the velocity measurements. In this work, we perform an analysis of different methods currently used to compute the velocities and trajectories of meteors. They are based on the intersecting planes method developed by Ceplecha (1987), the least squares method of Borovicka (1990), and the multi-parameter fitting (MPF) method published by Gural (2012). In order to objectively compare the performances of these techniques, we have simulated realistic meteors ('fakeors') reproducing the measurement errors of many camera networks. Some fakeors are built following the propagation models studied by Gural (2012), and others are created by numerical integration using the model of Borovicka et al. (2007). Different optimization techniques have also been investigated in order to pick the most suitable one for solving the MPF, and the influence of the geometry of the trajectory on the result is also presented. We will present here the results of an improved implementation of the multi-parameter fitting that allows an accurate orbit computation of meteors with CABERNET. The comparison of different velocity computations suggests that, while the MPF is by far the best method for solving the trajectory and the velocity of a meteor, the ill-conditioning of the cost functions used can lead to large estimation errors for noisy
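
    The flavour of such a fit can be conveyed with the classic exponential-deceleration propagation model, one of the model families considered by Gural (2012). The sketch below (Python/SciPy; the noise level, time span and initial guess are illustrative assumptions) recovers the pre-atmospheric velocity from noisy positions along the track, and degrading the initial guess exposes exactly the ill-conditioning mentioned above:

        import numpy as np
        from scipy.optimize import curve_fit

        # position along the track: s(t) = v_inf*t - a1*(exp(a2*t) - 1)
        def track(t, v_inf, a1, a2):
            return v_inf * t - a1 * np.expm1(a2 * t)

        rng = np.random.default_rng(3)
        t = np.linspace(0.0, 0.6, 40)                  # seconds of visible flight
        true = (60.0, 0.005, 8.0)                      # km/s, km, 1/s
        s_obs = track(t, *true) + rng.normal(0.0, 0.02, t.size)   # ~20 m noise

        (v_inf, a1, a2), _ = curve_fit(track, t, s_obs, p0=(50.0, 0.01, 5.0))
        print(f"pre-atmospheric velocity ~ {v_inf:.1f} km/s")      # close to 60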

  19. Connecting Geometry and Chemistry: A Three-Step Approach to Three-Dimensional Thinking

    ERIC Educational Resources Information Center

    Donaghy, Kelley J.; Saxton, Kathleen J.

    2012-01-01

    A three-step active-learning approach is described to enhance the spatial abilities of general chemistry students with respect to three-dimensional molecular drawing and visualization. These activities are used in a medium-sized lecture hall with approximately 150 students in the first semester of the general chemistry course. The first activity…

  20. Diversifying Our Perspectives on Mathematics about Space and Geometry: An Ecocultural Approach

    ERIC Educational Resources Information Center

    Owens, Kay

    2014-01-01

    School mathematics tends to have developed from the major cultures of Asia, the Mediterranean and Europe. However, indigenous cultures in particular may have distinctly different systematic ways of referring to space and thinking mathematically about spatial activity. Their approaches are based on the close link between the environment and…

  1. Design of specially adapted reactive coordinates to economically compute potential and kinetic energy operators including geometry relaxation.

    PubMed

    Thallmair, Sebastian; Roos, Matthias K; de Vivie-Riedle, Regina

    2016-06-21

    Quantum dynamics simulations require prior knowledge of the potential energy surface as well as the kinetic energy operator. Typically, they are evaluated in a low-dimensional subspace of the full configuration space of the molecule as its dimensionality increases proportional to the number of atoms. This entails the challenge to find the most suitable subspace. We present an approach to design specially adapted reactive coordinates spanning this subspace. In addition to the essential geometric changes, these coordinates take into account the relaxation of the non-reactive coordinates without the necessity of performing geometry optimizations at each grid point. The method is demonstrated for an ultrafast photoinduced bond cleavage in a commonly used organic precursor for the generation of electrophiles. The potential energy surfaces for the reaction as well as the Wilson G-matrix as part of the kinetic energy operator are shown for a complex chemical reaction, both including the relaxation of the non-reactive coordinates on equal footing. A microscopic interpretation of the shape of the G-matrix elements allows to analyze the impact of the non-reactive coordinates on the kinetic energy operator. Additionally, we compare quantum dynamics simulations with and without the relaxation of the non-reactive coordinates included in the kinetic energy operator to demonstrate its influence.

  2. Design of specially adapted reactive coordinates to economically compute potential and kinetic energy operators including geometry relaxation

    NASA Astrophysics Data System (ADS)

    Thallmair, Sebastian; Roos, Matthias K.; de Vivie-Riedle, Regina

    2016-06-01

    Quantum dynamics simulations require prior knowledge of the potential energy surface as well as the kinetic energy operator. Typically, they are evaluated in a low-dimensional subspace of the full configuration space of the molecule, as its dimensionality increases in proportion to the number of atoms. This entails the challenge of finding the most suitable subspace. We present an approach to design specially adapted reactive coordinates spanning this subspace. In addition to the essential geometric changes, these coordinates take into account the relaxation of the non-reactive coordinates without the necessity of performing geometry optimizations at each grid point. The method is demonstrated for an ultrafast photoinduced bond cleavage in a commonly used organic precursor for the generation of electrophiles. The potential energy surfaces for the reaction as well as the Wilson G-matrix as part of the kinetic energy operator are shown for a complex chemical reaction, both including the relaxation of the non-reactive coordinates on equal footing. A microscopic interpretation of the shape of the G-matrix elements makes it possible to analyze the impact of the non-reactive coordinates on the kinetic energy operator. Additionally, we compare quantum dynamics simulations with and without the relaxation of the non-reactive coordinates included in the kinetic energy operator to demonstrate its influence.

  3. A Computer Program for the Reactivity and Kinetic Parameters for Two-Dimensional Triangular Geometry by Transport Perturbation Theory.

    1990-04-25

    Version 00 TPTRIA calculates reactivity, effective delayed neutron fractions and mean generation time for two-dimensional triangular geometry on the basis of neutron transport perturbation theory. DIAMANT2 (also designated as CCC-414), is a multigroup two-dimensional discrete ordinates transport code system for triangular and hexagonal geometry which calculates direct and adjoint angular fluxes.

  4. A Computationally Based Approach to Homogenizing Advanced Alloys

    SciTech Connect

    Jablonski, P D; Cowen, C J

    2011-02-27

    We have developed a computationally based approach to optimizing the homogenization heat treatment of complex alloys. The Scheil module within the Thermo-Calc software is used to predict the as-cast segregation present within alloys, and DICTRA (Diffusion Controlled TRAnsformations) is used to model the homogenization kinetics as a function of time, temperature and microstructural scale. We discuss this approach as it is applied to Ni-based superalloys as well as the (computationally) more complex case of alloys that solidify with more than one matrix phase as a result of segregation, as is typically observed in martensitic steels. With these alloys it is doubly important to homogenize them correctly, especially at the laboratory scale, since they are austenitic at high temperature and thus constituent elements will diffuse slowly. The computationally designed heat treatment and the subsequent verification on real castings are presented.

  5. Cloud computing approaches to accelerate drug discovery value chain.

    PubMed

    Garg, Vibhav; Arora, Suchir; Gupta, Chitra

    2011-12-01

    Continued advancements in the area of technology have helped high throughput screening (HTS) evolve from a linear to a parallel approach by performing system-level screening. Advanced experimental methods used for HTS at various steps of drug discovery (i.e. target identification, target validation, lead identification and lead validation) can generate data of the order of terabytes. As a consequence, there is a pressing need to store, manage, mine and analyze this data to identify informational tags. This need in turn poses challenges to computer scientists to offer matching hardware and software infrastructure, while managing the varying degree of desired computational power. Therefore, the potential of "On-Demand Hardware" and "Software as a Service (SAAS)" delivery mechanisms cannot be denied. This on-demand computing, largely referred to as Cloud Computing, is now transforming drug discovery research. Also, the integration of Cloud computing with parallel computing is certainly expanding its footprint in the life sciences community. The speed, efficiency and cost effectiveness have made cloud computing a 'good-to-have' tool for researchers, providing them with significant flexibility and allowing them to focus on the 'what' of science and not the 'how'. Once it reaches maturity, the Discovery-Cloud would be best placed to manage drug discovery and clinical development data generated using advanced HTS techniques, hence supporting the vision of personalized medicine.

  6. SU-C-207-01: Four-Dimensional Inverse Geometry Computed Tomography: Concept and Its Validation

    SciTech Connect

    Kim, K; Kim, D; Kim, T; Kang, S; Cho, M; Shin, D; Suh, T

    2015-06-15

    Purpose: In the past few years, the inverse geometry computed tomography (IGCT) system has been developed to overcome shortcomings of conventional computed tomography (CT) systems, such as the scatter problem induced by large detector size and the cone-beam artifact. In this study, we present the concept of a four-dimensional (4D) IGCT system that retains these advantages while adding temporal resolution for dynamic studies and reduction of motion artifact. Methods: Contrary to a conventional CT system, the projection data at a certain angle in IGCT form a group of fractionated narrow cone-beam projections, a projection group (PG), acquired from a multi-source array whose sources are operated sequentially with an extremely short time gap. For 4D IGCT imaging, time-related data acquisition parameters were determined by combining the multi-source scanning time for collecting one PG with a conventional 4D CBCT data acquisition sequence. Over a gantry rotation, the PGs acquired from the multi-source array were tagged with time and angle for 4D image reconstruction. The acquired PGs were sorted into 10 phases, and image reconstruction was performed independently at each phase. An image reconstruction algorithm based upon filtered backprojection was used in this study. Results: The 4D IGCT produced uniform images without cone-beam artifact, in contrast to 4D CBCT images. In addition, the 4D IGCT images of each phase had no significant motion-induced artifact compared with 3D CT. Conclusion: The 4D IGCT images appear to give relatively accurate dynamic information of the patient anatomy, as the results were more robust against motion artifact than 3D CT. It should therefore be useful for dynamic studies and respiratory-correlated radiation therapy. This work was supported by the Industrial R&D program of MOTIE/KEIT [10048997, Development of the core technology for integrated therapy devices based on real-time MRI guided tumor tracking] and the Mid-career Researcher Program (2014R1A2A1A
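
    The phase-sorting step in the Methods is simple to prototype: each time-tagged projection group (PG) is assigned to one of 10 respiratory phase bins, and reconstruction then runs independently per bin. Below is a minimal sketch under assumed inputs (a fixed breathing period and per-PG timestamps, both invented for illustration):

      import numpy as np

      def sort_into_phases(timestamps, period, n_phases=10):
          # Assign each PG a phase bin in [0, n_phases) from its acquisition time.
          phase = (timestamps % period) / period      # fractional phase in [0, 1)
          return np.floor(phase * n_phases).astype(int)

      # Illustrative values: 600 PGs over a 60 s rotation, 4 s breathing period.
      t = np.linspace(0.0, 60.0, 600, endpoint=False)
      bins = sort_into_phases(t, period=4.0)
      for p in range(10):
          pg_idx = np.where(bins == p)[0]             # reconstruct these PGs together
          print(f"phase {p}: {len(pg_idx)} PGs")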

  7. The geometry of distributional preferences and a non-parametric identification approach: The Equality Equivalence Test☆

    PubMed Central

    Kerschbamer, Rudolf

    2015-01-01

    This paper proposes a geometric delineation of distributional preference types and a non-parametric approach for their identification in a two-person context. It starts with a small set of assumptions on preferences and shows that this set (i) naturally results in a taxonomy of distributional archetypes that nests all empirically relevant types considered in previous work; and (ii) gives rise to a clean experimental identification procedure – the Equality Equivalence Test – that discriminates between archetypes according to core features of preferences rather than properties of specific modeling variants. As a by-product the test yields a two-dimensional index of preference intensity. PMID:26089571

  8. Mathematical analysis of the accordion grating illusion: a differential geometry approach to introduce the 3D aperture problem.

    PubMed

    Yazdanbakhsh, Arash; Gori, Simone

    2011-12-01

    When an observer moves towards a square-wave grating display, a non-rigid distortion of the pattern occurs in which the stripes bulge and expand perpendicularly to their orientation; these effects reverse when the observer moves away. Such distortions present a new problem beyond the classical aperture problem faced by visual motion detectors, one we describe as a 3D aperture problem as it incorporates depth signals. We applied differential geometry to obtain a closed form solution to characterize the fluid distortion of the stripes. Our solution replicates the perceptual distortions and enabled us to design a nulling experiment to distinguish our 3D aperture solution from other candidate mechanisms (see Gori et al. (in this issue)). We suggest that our approach may generalize to other motion illusions visible in 2D displays. PMID:21782387

  9. Mathematical analysis of the accordion grating illusion: a differential geometry approach to introduce the 3D aperture problem.

    PubMed

    Yazdanbakhsh, Arash; Gori, Simone

    2011-12-01

    When an observer moves towards a square-wave grating display, a non-rigid distortion of the pattern occurs in which the stripes bulge and expand perpendicularly to their orientation; these effects reverse when the observer moves away. Such distortions present a new problem beyond the classical aperture problem faced by visual motion detectors, one we describe as a 3D aperture problem as it incorporates depth signals. We applied differential geometry to obtain a closed form solution to characterize the fluid distortion of the stripes. Our solution replicates the perceptual distortions and enabled us to design a nulling experiment to distinguish our 3D aperture solution from other candidate mechanisms (see Gori et al. (in this issue)). We suggest that our approach may generalize to other motion illusions visible in 2D displays.

  10. Performance simulation of a combustion engine charged by a variable geometry turbocharger. I - Prerequirements, boundary conditions and model development. II - Simulation algorithm, computed results

    NASA Astrophysics Data System (ADS)

    Malobabic, M.; Buttschardt, W.; Rautenberg, M.

    The paper presents a theoretical derivation of the relationship between a variable geometry turbocharger and the combustion engine, using simplified boundary conditions and model restraints and taking into account the combustion process itself as well as the nonadiabatic operating conditions for the turbine and the compressor. The simulation algorithm is described, and the results computed using this algorithm are compared with measurements performed on a test engine in combination with a controllable turbocharger with adjustable turbine inlet guide vanes. In addition, the results of theoretical parameter studies are presented, which include the simulation of a given turbocharger with variable geometry in combination with different sized combustion engines and the simulation of different sized variable-geometry turbochargers in combination with a given combustion engine.

  11. Fragment-based approaches and computer-aided drug discovery.

    PubMed

    Rognan, Didier

    2012-01-01

    Fragment-based design has significantly modified drug discovery strategies and paradigms in the last decade. Besides technological advances and novel therapeutic avenues, one of the most significant changes brought by this new discipline has occurred in the minds of drug designers. Fragment-based approaches have markedly impacted rational computer-aided design both in method development and in applications. The present review illustrates the importance of molecular fragments in many aspects of rational ligand design, and discusses how thinking in "fragment space" has boosted computational biology and chemistry. PMID:21710380

  12. One approach for evaluating the Distributed Computing Design System (DCDS)

    NASA Technical Reports Server (NTRS)

    Ellis, J. T.

    1985-01-01

    The Distributed Computer Design System (DCDS) provides an integrated environment to support the life cycle of developing real-time distributed computing systems. The primary focus of DCDS is to significantly increase system reliability and software development productivity, and to minimize schedule and cost risk. DCDS consists of integrated methodologies, languages, and tools to support the life cycle of developing distributed software and systems. Smooth and well-defined transitions from phase to phase, language to language, and tool to tool provide a unique and unified environment. An approach to evaluating DCDS highlights its benefits.

  13. Novel fuzzy feedback linearization strategy for control via differential geometry approach.

    PubMed

    Li, Tzuu-Hseng S; Huang, Chiou-Jye; Chen, Chung-Cheng

    2010-07-01

    The study investigates a novel fuzzy feedback linearization strategy for control. The main contributions of this study are to construct a control strategy such that the resulting closed-loop system is valid for any initial condition with almost disturbance decoupling performance, and to develop the feedback linearization design for a class of nonlinear control systems. The feedback linearization control guarantees the almost disturbance decoupling performance and the uniform ultimate bounded stability of the tracking error system. Once the tracking errors are driven to touch the global final attractor with the desired radius, the fuzzy logic control is immediately applied via a human expert's knowledge to improve the convergence rate. One example, which cannot be solved by the first paper on the almost disturbance decoupling problem, is presented to demonstrate that the almost disturbance decoupling and convergence rate performances are easily achieved by the proposed approach.

  14. Novel fuzzy feedback linearization strategy for control via differential geometry approach.

    PubMed

    Li, Tzuu-Hseng S; Huang, Chiou-Jye; Chen, Chung-Cheng

    2010-07-01

    The study investigates a novel fuzzy feedback linearization strategy for control. The main contributions of this study are to construct a control strategy such that the resulting closed-loop system is valid for any initial condition with almost disturbance decoupling performance, and to develop the feedback linearization design for a class of nonlinear control systems. The feedback linearization control guarantees the almost disturbance decoupling performance and the uniform ultimate bounded stability of the tracking error system. Once the tracking errors are driven to touch the global final attractor with the desired radius, the fuzzy logic control is immediately applied via a human expert's knowledge to improve the convergence rate. One example, which cannot be solved by the first paper on the almost disturbance decoupling problem, is presented to demonstrate that the almost disturbance decoupling and convergence rate performances are easily achieved by the proposed approach. PMID:20347083

  15. Non-invasive Assessment of Lower Limb Geometry and Strength Using Hip Structural Analysis and Peripheral Quantitative Computed Tomography: A Population-Based Comparison.

    PubMed

    Litwic, A E; Clynes, M; Denison, H J; Jameson, K A; Edwards, M H; Sayer, A A; Taylor, P; Cooper, C; Dennison, E M

    2016-02-01

    Hip fracture is the most significant complication of osteoporosis in terms of mortality, long-term disability and decreased quality of life. In recent years, different techniques have been developed to assess lower limb strength and ultimately fracture risk. Here we examine relationships between two measures of lower limb bone geometry and strength: proximal femoral geometry and tibial peripheral quantitative computed tomography. We studied a sample of 431 women and 488 men aged 59-71 years. The hip structural analysis (HSA) programme was employed to measure the structural geometry of the left hip for each DXA scan obtained using a Hologic QDR 4500 instrument, while pQCT measurements of the tibia were obtained using a Stratec 2000 instrument in the same population. We observed strong sex differences in proximal femoral geometry at the narrow neck, intertrochanteric and femoral shaft regions. There were significant (p < 0.001) associations between pQCT-derived measures of bone geometry (tibial width, endocortical diameter and cortical thickness) and bone strength (strength strain index) with each corresponding HSA variable (all p < 0.001) in both men and women. These results demonstrate strong correlations between two different methods of assessment of lower limb bone strength: HSA and pQCT. Validation in prospective cohorts to study associations of each with incident fracture is now indicated.

  16. A Computational Approach for Model Update of an LS-DYNA Energy Absorbing Cell

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Jackson, Karen E.; Kellas, Sotiris

    2008-01-01

    NASA and its contractors are working on structural concepts for absorbing the impact energy of aerospace vehicles. Recently, concepts in the form of multi-cell honeycomb-like structures designed to crush under load have been investigated for both space and aeronautics applications. Efforts to understand these concepts are progressing from tests of individual cells to tests of systems with hundreds of cells. Because of fabrication irregularities, geometry irregularities, and material property uncertainties, the problem of reconciling analytical models, in particular LS-DYNA models, with experimental data is a challenge. A first look at the correlation between single-cell load/deflection data and LS-DYNA predictions showed problems, which prompted additional work in this area. This paper describes a computational approach that uses analysis of variance, deterministic sampling techniques, response surface modeling, and genetic optimization to reconcile test with analysis results. Analysis of variance provides a screening technique for the selection of critical parameters used when reconciling test with analysis. In this study, complete ignorance of the parameter distribution is assumed and, therefore, the value of any parameter within the range that is computed using the optimization procedure is considered to be equally likely. Mean values from tests are matched against LS-DYNA solutions by minimizing the square error using a genetic optimization. The paper presents the computational methodology along with results obtained using this approach.

  17. An analytical approach to bistable biological circuit discrimination using real algebraic geometry

    PubMed Central

    Siegal-Gaskins, Dan; Franco, Elisa; Zhou, Tiffany; Murray, Richard M.

    2015-01-01

    Biomolecular circuits with two distinct and stable steady states have been identified as essential components in a wide range of biological networks, with a variety of mechanisms and topologies giving rise to their important bistable property. Understanding the differences between circuit implementations is an important question, particularly for the synthetic biologist faced with determining which bistable circuit design out of many is best for their specific application. In this work we explore the applicability of Sturm's theorem—a tool from nineteenth-century real algebraic geometry—to comparing ‘functionally equivalent’ bistable circuits without the need for numerical simulation. We first consider two genetic toggle variants and two different positive feedback circuits, and show how specific topological properties present in each type of circuit can serve to increase the size of the regions of parameter space in which they function as switches. We then demonstrate that a single competitive monomeric activator added to a purely monomeric (and otherwise monostable) mutual repressor circuit is sufficient for bistability. Finally, we compare our approach with the Routh–Hurwitz method and derive consistent, yet more powerful, parametric conditions. The predictive power and ease of use of Sturm's theorem demonstrated in this work suggest that algebraic geometric techniques may be underused in biomolecular circuit analysis. PMID:26109633
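
    Sturm's theorem is mechanical enough to demonstrate in a few lines: the number of distinct real roots of a polynomial in an interval equals the drop in sign changes of its Sturm sequence between the endpoints, so one can certify, without simulation, that a steady-state polynomial has the three positive roots characteristic of a bistable regime. A minimal sketch using sympy's built-in sturm; the polynomial is an invented stand-in, not one of the paper's circuit models:

      from sympy import Poly, sturm, symbols

      x = symbols('x')
      # Hypothetical steady-state polynomial with three positive roots at 1, 2, 3.
      p = Poly((x - 1) * (x - 2) * (x - 3), x)

      def sign_changes(values):
          signs = [float(v) for v in values if v != 0]
          return sum(1 for a, b in zip(signs, signs[1:]) if (a > 0) != (b > 0))

      def count_real_roots(poly, lo, hi):
          # Distinct real roots of poly in (lo, hi] via Sturm's theorem.
          seq = sturm(poly)
          return sign_changes([q.eval(lo) for q in seq]) - \
                 sign_changes([q.eval(hi) for q in seq])

      print(count_real_roots(p, 0, 10))  # -> 3: the bistable root pattern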

  18. Style: A Computational and Conceptual Blending-Based Approach

    NASA Astrophysics Data System (ADS)

    Goguen, Joseph A.; Harrell, D. Fox

    This chapter proposes a new approach to style, arising from our work on computational media using structural blending, which enriches the conceptual blending of cognitive linguistics with structure building operations in order to encompass syntax and narrative as well as metaphor. We have implemented both conceptual and structural blending, and conducted initial experiments with poetry, including interactive multimedia poetry, although the approach generalizes to other media. The central idea is to generate multimedia content and analyze style in terms of blending principles, based on our finding that different principles from those of common sense blending are often needed for some contemporary poetic metaphors.

  19. Computational approaches for human disease gene prediction and ranking.

    PubMed

    Zhu, Cheng; Wu, Chao; Aronow, Bruce J; Jegga, Anil G

    2014-01-01

    While candidate gene association studies continue to be the most practical and frequently employed approach in disease gene investigation for complex disorders, selecting suitable genes to test is a challenge. There are several computational approaches available for selecting and prioritizing disease candidate genes. A majority of these tools are based on the guilt-by-association principle, whereby novel disease candidate genes are identified and prioritized based on either functional or topological similarity to known disease genes. In this chapter we review the prioritization criteria and the algorithms, along with some use cases that demonstrate how these tools can be used for identifying and ranking human disease candidate genes.
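
    In its simplest form, the guilt-by-association principle mentioned above reduces to scoring each candidate gene by its similarity to a seed set of known disease genes and ranking by that score. A toy sketch; the gene names and annotation sets are invented, and real tools substitute functional networks or curated ontologies for these sets:

      # Toy guilt-by-association ranking: score candidates by the Jaccard
      # similarity of their annotations to known disease genes.
      known = {"GENE_A": {"apoptosis", "dna_repair"},
               "GENE_B": {"dna_repair", "cell_cycle"}}
      candidates = {"GENE_X": {"dna_repair", "cell_cycle", "metabolism"},
                    "GENE_Y": {"metabolism", "transport"}}

      def jaccard(s, t):
          return len(s & t) / len(s | t) if s | t else 0.0

      def score(annotations):
          # Best similarity to any known disease gene ("guilt by association").
          return max(jaccard(annotations, k) for k in known.values())

      ranking = sorted(candidates, key=lambda g: score(candidates[g]), reverse=True)
      print(ranking)  # -> ['GENE_X', 'GENE_Y']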

  20. Computational approaches in modeling spectra of biological chromophores

    NASA Astrophysics Data System (ADS)

    Nemukhin, Alexander V.; Grigorenko, Bella L.; Bochenkova, Anastasia V.; Bravaya, Ksenia B.; Savitsky, Alexander P.

    2008-02-01

    Computational approaches to describing the optical spectra of biological chromophores in proteins, in solutions and in the gas phase are discussed. Recently, accurate measurements of spectral properties for a series of chromophores in different media have allowed the positions of the bands to be estimated with high accuracy, challenging theoreticians with the claim that the measured S0-S1 transition wavelengths may be used as new benchmark values for the theory. The novel computational approaches based on multiconfigurational quasidegenerate perturbation theory provide a practical means of adapting the high-level methodology to calculations of accurate excitation energies in large biological chromophores. The theory is illustrated for a series of model compounds for which experimental data are available: the retinal molecule in the protonated Schiff-base form, the chromophores from the Green Fluorescent Protein family including the kindling protein asFP595, and the chromophore from BLUF-domain-containing photoreceptor proteins.

  1. Ab initio and density functional computations of the vibrational spectrum, molecular geometry and some molecular properties of the antidepressant drug sertraline (Zoloft) hydrochloride

    NASA Astrophysics Data System (ADS)

    Sagdinc, Seda; Kandemirli, Fatma; Bayari, Sevgi Haman

    2007-02-01

    Sertraline hydrochloride is a highly potent and selective inhibitor of serotonin (5HT) reuptake. It is a basic compound with pharmaceutical application in antidepressant treatment (brand name: Zoloft). Ab initio and density functional computations of the vibrational (IR) spectrum, the molecular geometry, the atomic charges and the polarizabilities were carried out. The infrared spectrum of sertraline was recorded in the solid state. The observed IR wavenumbers were analysed in light of the computed vibrational spectrum. On the basis of the comparison between calculated and experimental results, and by comparison with related molecules, assignments of the fundamental vibrational modes are examined. The X-ray geometry and experimental frequencies are compared with the results of our theoretical calculations.

  2. Computer Synthesis Approaches of Hyperboloid Gear Drives with Linear Contact

    NASA Astrophysics Data System (ADS)

    Abadjiev, Valentin; Kawasaki, Haruhisa

    2014-09-01

    Computer-aided design has advanced through the creation of various types of software for scientific research in the field of gearing theory, as well as through adequate scientific support of gear drive manufacture. The computer programs presented here are based on mathematical models resulting from this research. Modern gear transmissions require the construction of new mathematical approaches to their geometric, technological and strength analysis. The process of optimization, synthesis and design is based on adequate iteration procedures to find an optimal solution by varying definite parameters. The study is dedicated to the methodology adopted in the creation of software for the synthesis of a class of high-reduction hyperboloid gears - Spiroid and Helicon ones (Spiroid and Helicon are trademarks registered by the Illinois Tool Works, Chicago, Ill). The basic computer products developed belong to software based on original mathematical models. They rest on two mathematical models for the synthesis: "upon a pitch contact point" and "upon a mesh region". Computer programs are worked out on the basis of the described mathematical models, and the relations between them are shown. The application of these approaches to the synthesis of the gear drives discussed is illustrated.

  3. Archiving Software Systems: Approaches to Preserve Computational Capabilities

    NASA Astrophysics Data System (ADS)

    King, T. A.

    2014-12-01

    A great deal of effort is made to preserve scientific data, not only because data is knowledge, but also because it is often costly to acquire and is sometimes collected under unique circumstances. Another part of the science enterprise is the development of software to process and analyze the data. Developed software is also a large investment and worthy of preservation. However, the long-term preservation of software presents some challenges. Software often requires a specific technology stack to operate, which can include software, operating system and hardware dependencies. One past approach to preserving computational capabilities is to maintain ancient hardware long past its typical viability. On an archive horizon of 100 years, this is not feasible. Another approach is to archive source code. While this can preserve details of the implementation and algorithms, it may not be possible to reproduce the technology stack needed to compile and run the resulting applications. This future-forward dilemma has a solution. Technology used to create clouds and process big data can also be used to archive and preserve computational capabilities. We explore how basic hardware, virtual machines, containers and appropriate metadata can be used to preserve computational capabilities and to archive functional software systems. In conjunction with data archives, this provides scientists with both the data and the capability to reproduce the processing and analysis used to generate past scientific results.

  4. A computational language approach to modeling prose recall in schizophrenia

    PubMed Central

    Rosenstein, Mark; Diaz-Asper, Catherine; Foltz, Peter W.; Elvevåg, Brita

    2014-01-01

    Many cortical disorders are associated with memory problems. In schizophrenia, verbal memory deficits are a hallmark feature. However, the exact nature of this deficit remains elusive. Modeling aspects of language features used in memory recall have the potential to provide means for measuring these verbal processes. We employ computational language approaches to assess time-varying semantic and sequential properties of prose recall at various retrieval intervals (immediate, 30 min and 24 h later) in patients with schizophrenia, unaffected siblings and healthy unrelated control participants. First, we model the recall data to quantify the degradation of performance with increasing retrieval interval and the effect of diagnosis (i.e., group membership) on performance. Next we model the human scoring of recall performance using an n-gram language sequence technique, and then with a semantic feature based on Latent Semantic Analysis. These models show that automated analyses of the recalls can produce scores that accurately mimic human scoring. The final analysis addresses the validity of this approach by ascertaining the ability to predict group membership from models built on the two classes of language features. Taken individually, the semantic feature is most predictive, while a model combining the features improves accuracy of group membership prediction slightly above the semantic feature alone as well as over the human rating approach. We discuss the implications for cognitive neuroscience of such a computational approach in exploring the mechanisms of prose recall. PMID:24709122
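
    The n-gram component of the scoring model can be prototyped in a few lines: represent the source passage and a recall as sets of word n-grams and score their overlap, which tends to track human propositional scoring. A toy sketch; the texts and the choice of plain bigram overlap are illustrative, not the study's actual model:

      def ngrams(text, n=2):
          words = text.lower().split()
          return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

      def overlap_score(source, recall, n=2):
          # Fraction of source n-grams reproduced in the recall.
          src, rec = ngrams(source, n), ngrams(recall, n)
          return len(src & rec) / len(src) if src else 0.0

      source = "the farmer sold his horse at the village market on tuesday"
      print(overlap_score(source, "the farmer sold his horse at the market"))  # higher
      print(overlap_score(source, "a man sold a horse somewhere"))             # lower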

  5. Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach

    NASA Technical Reports Server (NTRS)

    Warner, James E.; Hochhalter, Jacob D.

    2016-01-01

    This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.
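
    The core MCMC loop is easy to sketch. The example below runs a plain random-walk Metropolis sampler over a one-parameter damage model, with a cheap stand-in playing the role of the finite element (or sparse-grid surrogate) model; the response function, noise level, and prior are all invented for illustration, and DRAM adds delayed rejection and adaptive proposals on top of this same loop:

      import numpy as np

      rng = np.random.default_rng(0)

      def surrogate_strain(size):
          # Stand-in for the surrogate model: strain response vs. damage size.
          return np.array([1.0, 2.0, 0.5]) * size

      true_size, sigma = 0.8, 0.05
      data = surrogate_strain(true_size) + rng.normal(0.0, sigma, 3)

      def log_posterior(size):
          if not 0.0 < size < 2.0:                    # uniform prior on (0, 2)
              return -np.inf
          resid = data - surrogate_strain(size)
          return -0.5 * np.sum(resid**2) / sigma**2   # Gaussian likelihood

      samples, current, lp = [], 1.0, log_posterior(1.0)
      for _ in range(20000):                          # random-walk Metropolis
          prop = current + rng.normal(0.0, 0.05)
          lp_prop = log_posterior(prop)
          if np.log(rng.random()) < lp_prop - lp:
              current, lp = prop, lp_prop
          samples.append(current)

      post = np.array(samples[5000:])                 # discard burn-in
      print(post.mean(), post.std())                  # estimate with uncertainty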

  6. A computational language approach to modeling prose recall in schizophrenia.

    PubMed

    Rosenstein, Mark; Diaz-Asper, Catherine; Foltz, Peter W; Elvevåg, Brita

    2014-06-01

    Many cortical disorders are associated with memory problems. In schizophrenia, verbal memory deficits are a hallmark feature. However, the exact nature of this deficit remains elusive. Modeling aspects of language features used in memory recall have the potential to provide means for measuring these verbal processes. We employ computational language approaches to assess time-varying semantic and sequential properties of prose recall at various retrieval intervals (immediate, 30 min and 24 h later) in patients with schizophrenia, unaffected siblings and healthy unrelated control participants. First, we model the recall data to quantify the degradation of performance with increasing retrieval interval and the effect of diagnosis (i.e., group membership) on performance. Next we model the human scoring of recall performance using an n-gram language sequence technique, and then with a semantic feature based on Latent Semantic Analysis. These models show that automated analyses of the recalls can produce scores that accurately mimic human scoring. The final analysis addresses the validity of this approach by ascertaining the ability to predict group membership from models built on the two classes of language features. Taken individually, the semantic feature is most predictive, while a model combining the features improves accuracy of group membership prediction slightly above the semantic feature alone as well as over the human rating approach. We discuss the implications for cognitive neuroscience of such a computational approach in exploring the mechanisms of prose recall.

  7. Computational study of influence of diffuse basis functions on geometry optimization and spectroscopic properties of losartan potassium

    NASA Astrophysics Data System (ADS)

    Mizera, Mikołaj; Lewadowska, Kornelia; Talaczyńska, Alicja; Cielecka-Piontek, Judyta

    2015-02-01

    The work was aimed at investigating the influence of diffuse basis functions on the geometry optimization of the losartan molecule in acid and salt forms. Spectroscopic properties of losartan potassium were also calculated and compared with experiment. The density functional theory method with various basis sets, 6-31G(d,p) and its diffuse variants 6-31G(d,p)+ and 6-31G(d,p)++, was used. Application of diffuse basis functions in the geometry optimization resulted in a significant change of the total molecular energy. The total energy of losartan potassium decreased by 112.91 kJ/mol and 114.32 kJ/mol for the 6-31G(d,p)+ and 6-31G(d,p)++ basis sets, respectively. Almost the same decrease was observed for losartan: 114.99 kJ/mol and 117.08 kJ/mol, respectively, for the 6-31G(d,p)+ and 6-31G(d,p)++ basis sets. Further investigation showed significant differences among the geometries of losartan potassium optimized with the investigated basis sets. Application of diffuse basis functions resulted in an average 1.29 Å difference in the relative position of corresponding atoms across the three obtained geometries. A similar study of losartan resulted in an average displacement of only 0.22 Å. An extensive analysis of the geometry changes in molecules obtained with diffuse and non-diffuse basis functions was carried out in order to elucidate the observed changes. The analysis was supported by electrostatic potential maps and calculation of natural atomic charges. UV, FT-IR and Raman spectra of losartan potassium were calculated and compared with experimental results. No crucial differences between the Raman spectra obtained with the different basis sets were observed. However, the FT-IR spectrum calculated for the geometry of losartan potassium optimized with the 6-31G(d,p)++ basis set showed a 40% better correlation with the experimental FT-IR spectrum than that calculated with the geometry optimized with the 6-31G(d,p) basis set. Therefore, it is highly advisable to optimize the geometry of molecules with ionic interactions using diffuse basis functions

  8. Computational study of influence of diffuse basis functions on geometry optimization and spectroscopic properties of losartan potassium.

    PubMed

    Mizera, Mikołaj; Lewadowska, Kornelia; Talaczyńska, Alicja; Cielecka-Piontek, Judyta

    2015-02-25

    The work was aimed at investigating the influence of diffuse basis functions on the geometry optimization of the losartan molecule in acid and salt forms. Spectroscopic properties of losartan potassium were also calculated and compared with experiment. The density functional theory method with various basis sets, 6-31G(d,p) and its diffuse variants 6-31G(d,p)+ and 6-31G(d,p)++, was used. Application of diffuse basis functions in the geometry optimization resulted in a significant change of the total molecular energy. The total energy of losartan potassium decreased by 112.91 kJ/mol and 114.32 kJ/mol for the 6-31G(d,p)+ and 6-31G(d,p)++ basis sets, respectively. Almost the same decrease was observed for losartan: 114.99 kJ/mol and 117.08 kJ/mol, respectively, for the 6-31G(d,p)+ and 6-31G(d,p)++ basis sets. Further investigation showed significant differences among the geometries of losartan potassium optimized with the investigated basis sets. Application of diffuse basis functions resulted in an average 1.29 Å difference in the relative position of corresponding atoms across the three obtained geometries. A similar study of losartan resulted in an average displacement of only 0.22 Å. An extensive analysis of the geometry changes in molecules obtained with diffuse and non-diffuse basis functions was carried out in order to elucidate the observed changes. The analysis was supported by electrostatic potential maps and calculation of natural atomic charges. UV, FT-IR and Raman spectra of losartan potassium were calculated and compared with experimental results. No crucial differences between the Raman spectra obtained with the different basis sets were observed. However, the FT-IR spectrum calculated for the geometry of losartan potassium optimized with the 6-31G(d,p)++ basis set showed a 40% better correlation with the experimental FT-IR spectrum than that calculated with the geometry optimized with the 6-31G(d,p) basis set. Therefore, it is highly advisable to optimize the geometry of molecules with ionic interactions using diffuse basis functions when

  9. Automated Approach to Very High-Order Aeroacoustic Computations. Revision

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Goodrich, John W.

    2001-01-01

    Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high-order in space and time methods on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high-order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid-aligned boundaries and to 2nd order for irregular boundaries.

  10. Inconsistency in 9 mm bullets: correlation of jacket thickness to post-impact geometry measured with non-destructive X-ray computed tomography.

    PubMed

    Thornby, John; Landheer, Dirk; Williams, Tim; Barnes-Warden, Jane; Fenne, Paul; Norman, Daniel; Attridge, Alex; Williams, Mark A

    2014-01-01

    Fundamental to any ballistic armour standard is the reference projectile to be defeated. Typically, for certification purposes, a consistent and symmetrical bullet geometry is assumed; however, variations in bullet jacket dimensions can have far-reaching consequences. Traditionally, characteristics and internal dimensions have been analysed by physically sectioning bullets--an approach which is of restricted scope and which precludes subsequent ballistic assessment. The use of a non-destructive X-ray computed tomography (CT) method has been demonstrated and validated (Kumar et al., 2011 [15]); the authors now apply this technique to correlate bullet impact response with jacket thickness variations. A set of 20 bullets (9 mm DM11) was selected for comparison, and an image-based analysis method was employed to map jacket thickness and determine the centre of gravity of each specimen. Both intra- and inter-bullet variations were investigated, with thickness variations of the order of 200 μm commonly found along the length of all bullets and angular variations of up to 50 μm in some. The bullets were subsequently impacted against a rigid flat plate under controlled conditions (observed with a high-speed video camera) and the resulting deformed projectiles were re-analysed. The results of the experiments demonstrate a marked difference in ballistic performance between bullets from different manufacturers, and an asymmetric thinning of the jacket is observed in regions of pre-impact weakness. The conclusions are relevant for future soft armour standards and provide important quantitative data for numerical model correlation and development. The implications of the findings for the reliability and repeatability of the industry-standard V50 ballistic test are also discussed.

  11. Computational approaches to parameter estimation and model selection in immunology

    NASA Astrophysics Data System (ADS)

    Baker, C. T. H.; Bocharov, G. A.; Ford, J. M.; Lumb, P. M.; Norton, S. J.; Paul, C. A. H.; Junt, T.; Krebs, P.; Ludewig, B.

    2005-12-01

    One of the significant challenges in biomathematics (and other areas of science) is to formulate meaningful mathematical models. Our problem is to decide on a parametrized model which is, in some sense, most likely to represent the information in a set of observed data. In this paper, we illustrate the computational implementation of an information-theoretic approach (associated with a maximum likelihood treatment) to modelling in immunology. The approach is illustrated by modelling LCMV infection using a family of models based on systems of ordinary differential and delay differential equations. The models (which use parameters that have a scientific interpretation) are chosen to fit data arising from experimental studies of virus-cytotoxic T lymphocyte kinetics; the parametrized models that result are arranged in a hierarchy by the computation of Akaike indices. The practical illustration is used to convey more general insight. Because the mathematical equations that comprise the models are solved numerically, the accuracy of the computation has a bearing on the outcome, and we address this and other practical details in our discussion.
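
    The Akaike-index step that orders the model hierarchy is a one-liner once each fitted model's parameter count k and maximized log-likelihood ln L are known: AIC = 2k - 2 ln L, with lower values preferred. A minimal sketch; the numbers are invented placeholders for fitted ODE/DDE models:

      # (name, parameter count k, maximized log-likelihood lnL) - illustrative.
      models = [("ODE, 4 params", 4, -120.3),
                ("DDE, 5 params", 5, -112.9),
                ("DDE, 7 params", 7, -111.8)]

      def aic(k, lnL):
          return 2 * k - 2 * lnL

      ranked = sorted(models, key=lambda m: aic(m[1], m[2]))
      best = aic(ranked[0][1], ranked[0][2])
      for name, k, lnL in ranked:
          # Delta-AIC against the best model; small deltas mean comparable support.
          print(f"{name}: AIC={aic(k, lnL):.1f}, dAIC={aic(k, lnL) - best:.1f}")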

  12. Computational approaches to detect allosteric pathways in transmembrane molecular machines.

    PubMed

    Stolzenberg, Sebastian; Michino, Mayako; LeVine, Michael V; Weinstein, Harel; Shi, Lei

    2016-07-01

    Many of the functions of transmembrane proteins involved in signal processing and transduction across the cell membrane are determined by allosteric couplings that propagate the functional effects well beyond the original site of activation. Data gathered from breakthroughs in biochemistry, crystallography, and single molecule fluorescence have established a rich basis of information for the study of molecular mechanisms in the allosteric couplings of such transmembrane proteins. The mechanistic details of these couplings, many of which have therapeutic implications, however, have only become accessible in synergy with molecular modeling and simulations. Here, we review some recent computational approaches that analyze allosteric coupling networks (ACNs) in transmembrane proteins, and in particular the recently developed Protein Interaction Analyzer (PIA) designed to study ACNs in the structural ensembles sampled by molecular dynamics simulations. The power of these computational approaches in interrogating the functional mechanisms of transmembrane proteins is illustrated with selected examples of recent experimental and computational studies pursued synergistically in the investigation of secondary active transporters and GPCRs. This article is part of a Special Issue entitled: Membrane Proteins edited by J.C. Gumbart and Sergei Noskov.

  13. Computational approaches to phenotyping: high-throughput phenomics.

    PubMed

    Lussier, Yves A; Liu, Yang

    2007-01-01

    The recent completion of the Human Genome Project has made possible a high-throughput "systems approach" for accelerating the elucidation of molecular underpinnings of human diseases, and subsequent derivation of molecular-based strategies to more effectively prevent, diagnose, and treat these diseases. Although altered phenotypes are among the most reliable manifestations of altered gene functions, research using systematic analysis of phenotype relationships to study human biology is still in its infancy. This article focuses on the emerging field of high-throughput phenotyping (HTP) phenomics research, which aims to capitalize on novel high-throughput computation and informatics technology developments to derive genomewide molecular networks of genotype-phenotype associations, or "phenomic associations." The HTP phenomics research field faces the challenge of technological research and development to generate novel tools in computation and informatics that will allow researchers to amass, access, integrate, organize, and manage phenotypic databases across species and enable genomewide analysis to associate phenotypic information with genomic data at different scales of biology. Key state-of-the-art technological advancements critical for HTP phenomics research are covered in this review. In particular, we highlight the power of computational approaches to conduct large-scale phenomics studies.

  14. A Computer Code for 2-D Transport Calculations in x-y Geometry Using the Interface Current Method.

    1990-12-01

    Version 00 RICANT performs 2-dimensional neutron transport calculations in x-y geometry using the interface current method. In the interface current method, the angular neutron currents crossing region surfaces are expanded in terms of the Legendre polynomials in the two half-spaces made by the region surfaces.

  15. A GPU-COMPUTING APPROACH TO SOLAR STOKES PROFILE INVERSION

    SciTech Connect

    Harker, Brian J.; Mighell, Kenneth J. E-mail: mighell@noao.edu

    2012-09-20

    We present a new computational approach to the inversion of solar photospheric Stokes polarization profiles, under the Milne-Eddington model, for vector magnetography. Our code, named GENESIS, employs multi-threaded parallel-processing techniques to harness the computing power of graphics processing units (GPUs), along with algorithms designed to exploit the inherent parallelism of the Stokes inversion problem. Using a genetic algorithm (GA) engineered specifically for use with a GPU, we produce full-disk maps of the photospheric vector magnetic field from polarized spectral line observations recorded by the Synoptic Optical Long-term Investigations of the Sun (SOLIS) Vector Spectromagnetograph (VSM) instrument. We show the advantages of pairing a population-parallel GA with data-parallel GPU-computing techniques, and present an overview of the Stokes inversion problem, including a description of our adaptation to the GPU-computing paradigm. Full-disk vector magnetograms derived by this method are shown using SOLIS/VSM data observed on 2008 March 28 at 15:45 UT.
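
    As a minimal, hardware-independent illustration of the population-parallel idea (not GENESIS itself), the loop below fits model parameters by evolving a population whose fitness evaluations are vectorized, which is exactly the step a GPU accelerates; the "model" is a Gaussian line profile standing in for Milne-Eddington synthesis, and all values are invented:

      import numpy as np

      rng = np.random.default_rng(1)

      def model(params, x):
          # Stand-in for Milne-Eddington synthesis: a Gaussian absorption profile.
          depth, center, width = params.T
          return 1.0 - depth[:, None] * np.exp(-((x - center[:, None]) / width[:, None])**2)

      x = np.linspace(-1.0, 1.0, 50)
      observed = model(np.array([[0.6, 0.1, 0.3]]), x)[0] + rng.normal(0.0, 0.01, x.size)

      lo, hi = np.array([0.05, -0.5, 0.05]), np.array([0.95, 0.5, 0.9])
      pop = rng.uniform(lo, hi, size=(200, 3))
      for gen in range(100):
          # Fitness of the whole population at once - the data-parallel step.
          misfit = np.sum((model(pop, x) - observed)**2, axis=1)
          elite = pop[np.argsort(misfit)[:50]]                  # selection
          parents = elite[rng.integers(0, 50, size=(200, 2))]
          pop = parents.mean(axis=1)                            # blend crossover
          pop = np.clip(pop + rng.normal(0.0, 0.02, pop.shape), lo, hi)  # mutation
      best = pop[np.argmin(np.sum((model(pop, x) - observed)**2, axis=1))]
      print(best)  # should approach [0.6, 0.1, 0.3]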

  16. A GPU-computing Approach to Solar Stokes Profile Inversion

    NASA Astrophysics Data System (ADS)

    Harker, Brian J.; Mighell, Kenneth J.

    2012-09-01

    We present a new computational approach to the inversion of solar photospheric Stokes polarization profiles, under the Milne-Eddington model, for vector magnetography. Our code, named GENESIS, employs multi-threaded parallel-processing techniques to harness the computing power of graphics processing units (GPUs), along with algorithms designed to exploit the inherent parallelism of the Stokes inversion problem. Using a genetic algorithm (GA) engineered specifically for use with a GPU, we produce full-disk maps of the photospheric vector magnetic field from polarized spectral line observations recorded by the Synoptic Optical Long-term Investigations of the Sun (SOLIS) Vector Spectromagnetograph (VSM) instrument. We show the advantages of pairing a population-parallel GA with data-parallel GPU-computing techniques, and present an overview of the Stokes inversion problem, including a description of our adaptation to the GPU-computing paradigm. Full-disk vector magnetograms derived by this method are shown using SOLIS/VSM data observed on 2008 March 28 at 15:45 UT.

  17. Computational approaches in the design of synthetic receptors - A review.

    PubMed

    Cowen, Todd; Karim, Kal; Piletsky, Sergey

    2016-09-14

    The rational design of molecularly imprinted polymers (MIPs) has been a major contributor to their reputation as "plastic antibodies" - high-affinity, robust synthetic receptors which can be optimally designed and produced at a much lower cost than their biological equivalents. Computational design has become a routine procedure in the production of MIPs, and has led to major advances in functional monomer screening, selection of cross-linker and solvent, optimisation of the monomer(s)-template ratio, and selectivity analysis. In this review the various computational methods are discussed with reference to all the relevant literature published since the end of 2013, with each article described by the target molecule, the computational approach applied (whether molecular mechanics/molecular dynamics, semi-empirical quantum mechanics, ab initio quantum mechanics (Hartree-Fock, Møller-Plesset, etc.) or DFT) and the purpose for which it was used. Detailed analysis is given to novel techniques including the analysis of polymer binding sites, the use of novel screening programs, and simulations of the MIP polymerisation reaction. Further advances in molecular modelling and the computational design of synthetic receptors in particular will have a serious impact on the future of nanotechnology and biotechnology, permitting the further translation of MIPs into the realms of analytics and medical technology. PMID:27566340

  18. Computing electronic structures: A new multiconfiguration approach for excited states

    SciTech Connect

    Cances, Eric . E-mail: cances@cermics.enpc.fr; Galicher, Herve . E-mail: galicher@cermics.enpc.fr; Lewin, Mathieu . E-mail: lewin@cermic.enpc.fr

    2006-02-10

    We present a new method for the computation of electronic excited states of molecular systems. This method is based upon a recent theoretical definition of multiconfiguration excited states [due to one of us; see M. Lewin, Solutions of the multiconfiguration equations in quantum chemistry, Arch. Rat. Mech. Anal. 171 (2004) 83-114]. Our algorithm, dedicated to the computation of the first excited state, always converges to a stationary state of the multiconfiguration model, which can be interpreted as an approximate excited state of the molecule. The definition of this approximate excited state is variational. An interesting feature is that it satisfies a non-linear Hylleraas-Undheim-MacDonald type principle: the energy of the approximate excited state is an upper bound to the true excited state energy of the N-body Hamiltonian. To compute the first excited state, one has to deform paths on a manifold, as is usually done in the search for transition states between reactants and products on potential energy surfaces. We propose here a general method for the deformation of paths which could also be useful in other settings. We also compare our method to other approaches used in quantum chemistry and give some explanation of the unsatisfactory behaviours which are sometimes observed when using the latter. Numerical results for the special case of two-electron systems are provided: we compute the first singlet excited state potential energy surface of the H2 molecule.

  19. An alternative approach for computing seismic response with accidental eccentricity

    NASA Astrophysics Data System (ADS)

    Fan, Xuanhua; Yin, Jiacong; Sun, Shuli; Chen, Pu

    2014-09-01

    Accidental eccentricity is a non-standard assumption for the seismic design of tall buildings. Taking it into consideration requires reanalysis of seismic resistance, which demands either a time-consuming computation of the natural vibration of eccentric structures or a static displacement solution obtained by applying an approximate equivalent torsional moment for each eccentric case. This study proposes an alternative modal response spectrum analysis (MRSA) approach to calculate seismic responses with accidental eccentricity. The proposed approach, called the Rayleigh Ritz Projection-MRSA (RRP-MRSA), is developed based on MRSA and two strategies: (a) an RRP method to obtain a fast calculation of approximate modes of eccentric structures; and (b) an approach to assemble the mass matrices of eccentric structures. The efficiency of RRP-MRSA is tested via engineering examples and compared with the standard MRSA (ST-MRSA) and one approximate method, i.e., the equivalent torsional moment hybrid MRSA (ETM-MRSA). Numerical results show that RRP-MRSA not only achieves almost the same precision as ST-MRSA, and is much better than ETM-MRSA, but is also more economical. RRP-MRSA can thus be used in place of current accidental eccentricity computations in seismic design.
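
    The Rayleigh Ritz Projection step admits a compact linear-algebra sketch: rather than re-solving the full eigenproblem for every eccentric mass matrix, project the stiffness matrix K and the perturbed mass matrix onto the nominal modes and solve the small reduced eigenproblem. The example below is a generic illustration with invented matrices, not the paper's building model:

      import numpy as np
      from scipy.linalg import eigh

      rng = np.random.default_rng(2)
      n, m = 200, 10                      # full DOFs, retained Ritz vectors

      # Invented symmetric positive-definite K and unit M for a nominal structure.
      A = rng.standard_normal((n, n))
      K = A @ A.T + n * np.eye(n)
      M = np.eye(n)

      vals, vecs = eigh(K, M)             # nominal modes serve as the Ritz basis
      Phi = vecs[:, :m]

      M_ecc = M.copy()                    # eccentric case: perturbed mass matrix
      M_ecc[0, 0] += 0.05

      # Rayleigh-Ritz: project and solve the small m x m eigenproblem.
      Kr, Mr = Phi.T @ K @ Phi, Phi.T @ M_ecc @ Phi
      vals_r, Q = eigh(Kr, Mr)
      approx_modes = Phi @ Q              # approximate eccentric-structure modes

      exact = eigh(K, M_ecc, eigvals_only=True)[:m]
      print(np.max(np.abs(vals_r - exact) / exact))   # small relative error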

  20. Benchmarking of computer codes and approaches for modeling exposure scenarios

    SciTech Connect

    Seitz, R.R.; Rittmann, P.D.; Wood, M.I.; Cook, J.R.

    1994-08-01

    The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided.

  1. Computer-based patient record: the essential data set approach.

    PubMed Central

    Moidu, K.; Falsone, J. J.; Nair, S.

    1994-01-01

    The clamor for data to study the impact of care, to evaluate clinical performance and justify resource utilization is increasing. The data in demand normally should exist in the record of a clinical encounter. Advances in information technology and software techniques have provided us with tools to develop and implement computer-based patient record systems. The issues that constrain development are integral issues of clinical medicine, such as the variability in medical data, specialized practice of medicine, and differing demands of the numerous end-users of a medical record. This paper describes an approach to develop a computer-based patient record. The focus is on identification of the essential data set by infological data modeling and its implementation in a commercially available package for a physician's office. PMID:7949970

  2. Identifying pathogenicity islands in bacterial pathogenomics using computational approaches.

    PubMed

    Che, Dongsheng; Hasan, Mohammad Shabbir; Chen, Bernard

    2014-01-13

    High-throughput sequencing technologies have made it possible to study bacteria through analyzing their genome sequences. For instance, comparative genome sequence analyses can reveal phenomena such as gene loss, gene gain, or gene exchange in a genome. By analyzing pathogenic bacterial genomes, we can discover that pathogenic genomic regions in many pathogenic bacteria are horizontally transferred from other bacteria, and these regions are also known as pathogenicity islands (PAIs). PAIs have some detectable properties, such as having different genomic signatures than the rest of the host genomes, and containing mobility genes so that they can be integrated into the host genome. In this review, we discuss various pathogenicity island-associated features and current computational approaches for the identification of PAIs. Existing pathogenicity island databases and related computational resources are also discussed; researchers may find these useful for studies of bacterial evolution and pathogenicity mechanisms.
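
    One of the detectable properties noted above, an atypical genomic signature, can be prototyped as a sliding-window scan for GC-content anomalies: windows whose composition deviates strongly from the genome-wide distribution are PAI candidates (real tools combine this with codon usage, mobility genes, and tRNA insertion sites). A toy sketch on an invented sequence:

      import numpy as np

      def gc_anomaly_scan(seq, window=5000, step=1000, z_cut=3.0):
          # Flag windows whose GC fraction deviates from the mean by > z_cut sigma.
          gc = np.array([1 if b in "GC" else 0 for b in seq])
          starts = range(0, len(seq) - window + 1, step)
          frac = np.array([gc[s:s + window].mean() for s in starts])
          z = (frac - frac.mean()) / frac.std()
          return [(s, f) for s, f, zz in zip(starts, frac, z) if abs(zz) > z_cut]

      # Invented genome: AT-rich background with a GC-rich island inserted.
      rng = np.random.default_rng(3)
      background = rng.choice(list("ATGC"), size=100000, p=[0.3, 0.3, 0.2, 0.2])
      island = rng.choice(list("ATGC"), size=8000, p=[0.15, 0.15, 0.35, 0.35])
      genome = "".join(background[:50000]) + "".join(island) + "".join(background[50000:])

      for start, frac in gc_anomaly_scan(genome):
          print(f"candidate island near {start}: GC={frac:.2f}")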

  3. Computational approaches for rational design of proteins with novel functionalities.

    PubMed

    Tiwari, Manish Kumar; Singh, Ranjitha; Singh, Raushan Kumar; Kim, In-Won; Lee, Jung-Kul

    2012-01-01

    Proteins are the most multifaceted macromolecules in living systems and have various important functions, including structural, catalytic, sensory, and regulatory functions. Rational design of enzymes is a great challenge to our understanding of protein structure and physical chemistry and has numerous potential applications. Protein design algorithms have been applied to design or engineer proteins that fold, fold faster, catalyze, catalyze faster, signal, and adopt preferred conformational states. The field of de novo protein design, although only a few decades old, is beginning to produce exciting results. Developments in this field are already having a significant impact on biotechnology and chemical biology. The application of powerful computational methods for functional protein design has recently succeeded at engineering target activities. Here, we review recently reported de novo functional proteins that were developed using various protein design approaches, including rational design, computational optimization, and selection from combinatorial libraries, highlighting recent advances and successes.

  4. A Computational Approach for Probabilistic Analysis of Water Impact Simulations

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Mason, Brian H.; Lyle, Karen H.

    2009-01-01

    NASA's development of new concepts for the Crew Exploration Vehicle Orion presents many challenges similar to those worked during the Apollo program in the sixties. However, with improved modeling capabilities, new challenges arise. For example, the use of the commercial code LS-DYNA, although widely used and accepted in the technical community, often involves high-dimensional, time-consuming, and computationally intensive simulations. The challenge is to capture what is learned from a limited number of LS-DYNA simulations to develop models that allow users to conduct interpolation of solutions at a fraction of the computational time. This paper presents a description of the LS-DYNA model, a brief summary of the response surface techniques, the analysis of variance approach used in the sensitivity studies, equations used to estimate impact parameters, results showing conditions that might cause injuries, and concluding remarks.
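
    The surrogate-modeling idea described here, learning a cheap interpolator from a handful of expensive simulations, can be sketched with a quadratic response surface fit by least squares. The inputs, outputs, and sample values below are synthetic stand-ins for LS-DYNA runs, not data from the study.

      import numpy as np

      # Synthetic stand-ins for expensive simulation runs: impact velocity
      # and pitch angle in, peak acceleration out (invented numbers).
      X = np.array([[7.0, 0.0], [7.0, 5.0], [9.0, 0.0],
                    [9.0, 5.0], [8.0, 2.5], [8.0, 0.0]])
      y = np.array([10.2, 11.5, 14.1, 15.9, 12.7, 12.0])

      def quad_features(X):
          v, a = X[:, 0], X[:, 1]
          return np.column_stack([np.ones(len(X)), v, a, v * a, v**2, a**2])

      coef, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

      def surrogate(v, a):
          """Interpolate the response at a fraction of the simulation cost."""
          return float(quad_features(np.array([[v, a]])) @ coef)

      print(surrogate(8.5, 3.0))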

  5. Slide Star: An Approach to Videodisc/Computer Aided Instruction

    PubMed Central

    McEnery, Kevin W.

    1984-01-01

    One of medical education's primary goals is for the student to be proficient in the gross and microscopic identification of disease. The videodisc, with its storage capacity of up to 54,000 photomicrographs is ideally suited to assist in this educational process. “Slide Star” is a method of interactive instruction which is designed for use in any subject where it is essential to identify visual material. The instructional approach utilizes a computer controlled videodisc to display photomicrographs. In the demonstration program, these are slides of normal blood cells. The program is unique in that the instruction is created by the student's commands manipulating the photomicrograph data base. A prime feature is the use of computer generated multiple choice questions to reinforce the learning process.

  6. Identifying Pathogenicity Islands in Bacterial Pathogenomics Using Computational Approaches

    PubMed Central

    Che, Dongsheng; Hasan, Mohammad Shabbir; Chen, Bernard

    2014-01-01

    High-throughput sequencing technologies have made it possible to study bacteria through analyzing their genome sequences. For instance, comparative genome sequence analyses can reveal phenomena such as gene loss, gene gain, or gene exchange in a genome. By analyzing pathogenic bacterial genomes, we can discover that pathogenic genomic regions in many pathogenic bacteria are horizontally transferred from other bacteria, and these regions are also known as pathogenicity islands (PAIs). PAIs have some detectable properties, such as having different genomic signatures than the rest of the host genome, and containing mobility genes so that they can be integrated into the host genome. In this review, we discuss various pathogenicity island-associated features and current computational approaches for the identification of PAIs. Existing pathogenicity island databases and related computational resources are also discussed, so that researchers may find them useful for studies of bacterial evolution and pathogenicity mechanisms. PMID:25437607

  7. Discrete exterior geometry approach to structure-preserving discretization of distributed-parameter port-Hamiltonian systems

    NASA Astrophysics Data System (ADS)

    Seslija, Marko; van der Schaft, Arjan; Scherpen, Jacquelien M. A.

    2012-06-01

    This paper addresses the issue of structure-preserving discretization of open distributed-parameter systems with Hamiltonian dynamics. Employing the formalism of discrete exterior calculus, we introduce a simplicial Dirac structure as a discrete analogue of the Stokes-Dirac structure and demonstrate that it provides a natural framework for deriving finite-dimensional port-Hamiltonian systems that emulate their infinite-dimensional counterparts. The spatial domain, in the continuous theory represented by a finite-dimensional smooth manifold with boundary, is replaced by a homological manifold-like simplicial complex and its augmented circumcentric dual. The smooth differential forms, in the discrete setting, are mirrored by cochains on the primal and dual complexes, while the discrete exterior derivative is defined to be the coboundary operator. This approach of discrete differential geometry, rather than discretizing the partial differential equations, makes it possible first to discretize the underlying Stokes-Dirac structure and then to impose the corresponding finite-dimensional port-Hamiltonian dynamics. In this manner, a number of important intrinsically topological and geometrical properties of the system are preserved.
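
    The discrete exterior derivative described here is, concretely, a coboundary (signed incidence) matrix acting on cochains. A minimal sketch for a single oriented triangle shows the operator and the defining identity d(d(.)) = 0; the complex is a toy example, not the simplicial Dirac construction of the paper.

      import numpy as np

      # Signed incidence matrices for one oriented triangle: vertices 0,1,2;
      # edges e0=(0,1), e1=(1,2), e2=(0,2).
      d0 = np.array([[-1,  1,  0],     # e0: tail 0, head 1
                     [ 0, -1,  1],     # e1: tail 1, head 2
                     [-1,  0,  1]])    # e2: tail 0, head 2
      d1 = np.array([[1, 1, -1]])      # triangle boundary: e0 + e1 - e2

      f = np.array([0.0, 2.0, 5.0])    # a 0-cochain (values on vertices)
      df = d0 @ f                      # its discrete exterior derivative
      assert np.all(d1 @ d0 == 0)      # the defining identity d(d(.)) = 0
      print(df)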

  8. A pencil beam approach to proton computed tomography

    SciTech Connect

    Rescigno, Regina; Bopp, Cécile; Rousseau, Marc; Brasse, David

    2015-11-15

    Purpose: A new approach to proton computed tomography (pCT) is presented. In this approach, protons are not tracked one-by-one but a beam of particles is considered instead. The elements of the pCT reconstruction problem (residual energy and path) are redefined on the basis of this new approach. An analytical image reconstruction algorithm applicable to this scenario is also proposed. Methods: The pencil beam (PB) and its propagation in matter were modeled by making use of the generalization of the Fermi–Eyges theory to account for multiple Coulomb scattering (MCS). This model was integrated into the pCT reconstruction problem, allowing the definition of the mean beam path concept similar to the most likely path (MLP) used in the single-particle approach. A numerical validation of the model was performed. The algorithm of filtered backprojection along MLPs was adapted to the beam-by-beam approach. The acquisition of a perfect proton scan was simulated and the data were used to reconstruct images of the relative stopping power of the phantom with the single-proton and beam-by-beam approaches. The resulting images were compared in a qualitative way. Results: The parameters of the modeled PB (mean and spread) were compared to Monte Carlo results in order to validate the model. For a water target, good agreement was found for the mean value of the distributions. As far as the spread is concerned, depth-dependent discrepancies as large as 2%–3% were found. For a heterogeneous phantom, discrepancies in the distribution spread ranged from 6% to 8%. The image reconstructed with the beam-by-beam approach showed a high level of noise compared to the one reconstructed with the classical approach. Conclusions: The PB approach to proton imaging may allow technical challenges imposed by the current proton-by-proton method to be overcome. In this framework, an analytical algorithm is proposed. Further work will involve a detailed study of the performance and limitations of the proposed approach.

  9. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1991-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems which are substantially easier to develop, fault-tolerant, and self-managing. Six years of research on ISIS are reviewed, describing the model, the types of applications to which ISIS was applied, and some of the reasoning that underlies a recent effort to redesign and reimplement ISIS as a much smaller, lightweight system.

  10. Computational systems biology approaches to anti-angiogenic cancer therapeutics.

    PubMed

    Finley, Stacey D; Chu, Liang-Hui; Popel, Aleksander S

    2015-02-01

    Angiogenesis is an exquisitely regulated process that is required for physiological processes and is also important in numerous diseases. Tumors utilize angiogenesis to generate the vascular network needed to supply the cancer cells with nutrients and oxygen, and many cancer drugs aim to inhibit tumor angiogenesis. Anti-angiogenic therapy involves inhibiting multiple cell types, molecular targets, and intracellular signaling pathways. Computational tools are useful in guiding treatment strategies, predicting the response to treatment, and identifying new targets of interest. Here, we describe progress that has been made in applying mathematical modeling and bioinformatics approaches to study anti-angiogenic therapeutics in cancer.

  11. Identification of Protein–Excipient Interaction Hotspots Using Computational Approaches

    PubMed Central

    Barata, Teresa S.; Zhang, Cheng; Dalby, Paul A.; Brocchini, Steve; Zloh, Mire

    2016-01-01

    Protein formulation development relies on the selection of excipients that inhibit protein–protein interactions preventing aggregation. Empirical strategies involve screening many excipient and buffer combinations using forced degradation studies. Such methods do not readily provide information on intermolecular interactions responsible for the protective effects of excipients. This study describes a molecular docking approach to screen and rank interactions allowing for the identification of protein–excipient hotspots to aid in the selection of excipients to be experimentally screened. Previously published work with Drosophila Su(dx) was used to develop and validate the computational methodology, which was then used to determine the formulation hotspots for Fab A33. Commonly used excipients were examined and compared to the regions in Fab A33 prone to protein–protein interactions that could lead to aggregation. This approach could provide information on a molecular level about the protective interactions of excipients in protein formulations to aid the more rational development of future formulations. PMID:27258262

  12. Whole-genome CNV analysis: advances in computational approaches

    PubMed Central

    Pirooznia, Mehdi; Goes, Fernando S.; Zandi, Peter P.

    2015-01-01

    Accumulating evidence indicates that DNA copy number variation (CNV) is likely to make a significant contribution to human diversity and also play an important role in disease susceptibility. Recent advances in genome sequencing technologies have enabled the characterization of a variety of genomic features, including CNVs. This has led to the development of several bioinformatics approaches to detect CNVs from next-generation sequencing data. Here, we review recent advances in CNV detection from whole genome sequencing. We discuss the informatics approaches and current computational tools that have been developed as well as their strengths and limitations. This review will assist researchers and analysts in choosing the most suitable tools for CNV analysis as well as provide suggestions for new directions in future development. PMID:25918519

  13. Preconditioner and convergence study for the Quantum Computer Aided Design (QCAD) nonlinear Poisson problem posed on the Ottawa Flat 270 design geometry.

    SciTech Connect

    Kalashnikova, Irina

    2012-05-01

    A numerical study aimed at evaluating different preconditioners within the Trilinos Ifpack and ML packages for the Quantum Computer Aided Design (QCAD) non-linear Poisson problem implemented within the Albany code base and posed on the Ottawa Flat 270 design geometry is performed. This study led to some new development of Albany that allows the user to select an ML preconditioner with Zoltan repartitioning based on nodal coordinates, which is summarized. Convergence of the numerical solutions computed within the QCAD computational suite with successive mesh refinement is examined in two metrics: the mean value of the solution (an L^1 norm) and the field integral of the solution (an L^2 norm).
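
    The flavor of such a preconditioner comparison can be reproduced at toy scale without Trilinos: assemble a variable-coefficient 1-D Poisson matrix and count conjugate-gradient iterations with and without a Jacobi preconditioner. The problem, coefficients, and preconditioner choice below are illustrative assumptions, far simpler than the Ifpack/ML options studied.

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import cg, LinearOperator

      # Variable-coefficient 1-D Poisson problem -(k u')' = f, chosen so the
      # matrix diagonal actually varies and Jacobi has something to equilibrate.
      n = 400
      h = 1.0 / (n + 1)
      k = 1.0 + 1e3 * np.linspace(0.0, 1.0, n + 1) ** 2   # half-point coefficients
      main = (k[:-1] + k[1:]) / h**2
      off = -k[1:-1] / h**2
      A = sp.diags([off, main, off], [-1, 0, 1], format="csr")
      b = np.ones(n)
      d_inv = 1.0 / A.diagonal()
      M = LinearOperator((n, n), matvec=lambda x: d_inv * x, dtype=float)

      for name, prec in [("no preconditioner", None), ("Jacobi", M)]:
          count = [0]
          cg(A, b, M=prec, callback=lambda xk: count.__setitem__(0, count[0] + 1))
          print(name, count[0], "iterations")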

  14. Computational approaches to understand cardiac electrophysiology and arrhythmias

    PubMed Central

    Roberts, Byron N.; Yang, Pei-Chi; Behrens, Steven B.; Moreno, Jonathan D.

    2012-01-01

    Cardiac rhythms arise from electrical activity generated by precisely timed opening and closing of ion channels in individual cardiac myocytes. These impulses spread throughout the cardiac muscle to manifest as electrical waves in the whole heart. Regularity of electrical waves is critically important since they signal the heart muscle to contract, driving the primary function of the heart to act as a pump and deliver blood to the brain and vital organs. When electrical activity goes awry during a cardiac arrhythmia, the pump does not function, the brain does not receive oxygenated blood, and death ensues. For more than 50 years, mathematically based models of cardiac electrical activity have been used to improve understanding of basic mechanisms of normal and abnormal cardiac electrical function. Computer-based modeling approaches to understand cardiac activity are uniquely helpful because they allow for distillation of complex emergent behaviors into the key contributing components underlying them. Here we review the latest advances and novel concepts in the field as they relate to understanding the complex interplay between electrical, mechanical, structural, and genetic mechanisms during arrhythmia development at the level of ion channels, cells, and tissues. We also discuss the latest computational approaches to guiding arrhythmia therapy. PMID:22886409
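
    As a concrete taste of the mathematically based models reviewed here, the following sketch integrates the FitzHugh-Nagumo equations, a classic two-variable reduction of excitable-cell dynamics. It is a generic textbook model with standard parameter values, not one of the specific ionic models discussed in the review.

      import numpy as np

      def fitzhugh_nagumo(I=0.5, dt=0.01, steps=5000, a=0.7, b=0.8, tau=12.5):
          """Forward-Euler integration of a two-variable excitable-cell model:
          v is a membrane-potential-like variable, w a slow recovery variable."""
          v, w = -1.0, 1.0
          trace = np.empty(steps)
          for i in range(steps):
              dv = v - v**3 / 3.0 - w + I
              dw = (v + a - b * w) / tau
              v += dt * dv
              w += dt * dw
              trace[i] = v
          return trace

      trace = fitzhugh_nagumo()
      print(trace.min(), trace.max())   # sustained oscillation for this stimulus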

  15. Computational approaches to predict bacteriophage-host relationships.

    PubMed

    Edwards, Robert A; McNair, Katelyn; Faust, Karoline; Raes, Jeroen; Dutilh, Bas E

    2016-03-01

    Metagenomics has changed the face of virus discovery by enabling the accurate identification of viral genome sequences without requiring isolation of the viruses. As a result, metagenomic virus discovery leaves the first and most fundamental question about any novel virus unanswered: What host does the virus infect? The diversity of the global virosphere and the volumes of data obtained in metagenomic sequencing projects demand computational tools for virus-host prediction. We focus on bacteriophages (phages, viruses that infect bacteria), the most abundant and diverse group of viruses found in environmental metagenomes. By analyzing 820 phages with annotated hosts, we review and assess the predictive power of in silico phage-host signals. Sequence homology approaches are the most effective at identifying known phage-host pairs. Compositional and abundance-based methods contain significant signal for phage-host classification, providing opportunities for analyzing the unknowns in viral metagenomes. Together, these computational approaches further our knowledge of the interactions between phages and their hosts. Importantly, we find that all reviewed signals significantly link phages to their hosts, illustrating how current knowledge and insights about the interaction mechanisms and ecology of coevolving phages and bacteria can be exploited to predict phage-host relationships, with potential relevance for medical and industrial applications.

  16. Crowd Computing as a Cooperation Problem: An Evolutionary Approach

    NASA Astrophysics Data System (ADS)

    Christoforou, Evgenia; Fernández Anta, Antonio; Georgiou, Chryssis; Mosteiro, Miguel A.; Sánchez, Angel

    2013-05-01

    Cooperation is one of the socio-economic issues that has received the most attention from the physics community. The problem has been mostly considered by studying games such as the Prisoner's Dilemma or the Public Goods Game. Here, we take a step forward by studying cooperation in the context of crowd computing. We introduce a model loosely based on principal-agent theory in which people (workers) contribute to the solution of a distributed problem by computing answers and reporting to the problem proposer (master). To go beyond classical approaches involving the concept of Nash equilibrium, we work within an evolutionary framework in which both the master and the workers update their behavior through reinforcement learning. Using a Markov chain approach, we show theoretically that under certain, not very restrictive, conditions, the master can ensure the reliability of the answer resulting from the process. Then, we study the model by numerical simulations, finding that convergence, meaning that the system reaches a point in which it always produces reliable answers, may in general be much faster than the upper bounds given by the theoretical calculation. We also discuss the effects of the master's level of tolerance to defectors, about which the theory does not provide information. The discussion shows that the system works even with very large tolerances. We conclude with a discussion of our results and possible directions to carry this research further.

  17. Tensor scale: An analytic approach with efficient computation and applications

    PubMed Central

    Xu, Ziyue; Saha, Punam K.; Dasgupta, Soura

    2015-01-01

    Scale is a widely used notion in computer vision and image understanding that evolved in the form of scale-space theory, where the key idea is to represent and analyze an image at various resolutions. Recently, we introduced a notion of local morphometric scale referred to as “tensor scale” using an ellipsoidal model that yields a unified representation of structure size, orientation and anisotropy. In previous work, tensor scale was described using a 2-D algorithmic approach and a precise analytic definition was missing. Also, the application of tensor scale in 3-D using the previous framework is not practical due to high computational complexity. In this paper, an analytic definition of tensor scale is formulated for n-dimensional (n-D) images that captures local structure size, orientation and anisotropy. Also, an efficient computational solution in 2- and 3-D using several novel differential geometric approaches is presented and the accuracy of results is experimentally examined. In addition, a matrix representation of tensor scale is derived, facilitating several operations including tensor field smoothing to capture larger contextual knowledge. Finally, the applications of tensor scale in image filtering and n-linear interpolation are presented and the performance of their results is examined in comparison with respective state-of-the-art methods. Specifically, the performance of tensor scale based image filtering is compared with gradient and Weickert’s structure tensor based diffusive filtering algorithms. Also, the performance of tensor scale based n-linear interpolation is evaluated in comparison with standard n-linear and windowed-sinc interpolation methods. PMID:26236148

  18. Computational simulation methodologies for mechanobiological modelling: a cell-centred approach to neointima development in stents.

    PubMed

    Boyle, C J; Lennon, A B; Early, M; Kelly, D J; Lally, C; Prendergast, P J

    2010-06-28

    The design of medical devices could be very much improved if robust tools were available for computational simulation of tissue response to the presence of the implant. Such tools require algorithms to simulate the response of tissues to mechanical and chemical stimuli. Available methodologies include those based on the principle of mechanical homeostasis, those which use continuum models to simulate biological constituents, and the cell-centred approach, which models cells as autonomous agents. In the latter approach, cell behaviour is governed by rules based on the state of the local environment around the cell, informed by experiment. Tissue growth and differentiation require simulating many of these cells together. In this paper, the methodology and applications of cell-centred techniques, with particular application to mechanobiology, are reviewed, and a cell-centred model of tissue formation in the lumen of an artery in response to the deployment of a stent is presented. The method is capable of capturing some of the most important aspects of restenosis, including nonlinear lesion growth with time. The approach taken in this paper provides a framework for simulating restenosis; the next step will be to couple it with more patient-specific geometries and quantitative parameter data.
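
    The rule-based, cell-centred idea can be illustrated with a toy lattice model in which each occupied site may place a daughter cell in an empty neighbouring site. The division probability and lattice geometry are invented for illustration and are far simpler than the mechanobiological rules used in the paper.

      import random

      def grow(grid, p_div=0.3, steps=50):
          """Each occupied site may place a daughter cell in a random empty
          neighbour: the essence of a rule-based, cell-centred model."""
          n = len(grid)
          for _ in range(steps):
              cells = [(i, j) for i in range(n) for j in range(n) if grid[i][j]]
              random.shuffle(cells)
              for i, j in cells:
                  if random.random() < p_div:
                      nbrs = [(i + di, j + dj)
                              for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                              if 0 <= i + di < n and 0 <= j + dj < n
                              and not grid[i + di][j + dj]]
                      if nbrs:
                          x, y = random.choice(nbrs)
                          grid[x][y] = 1
          return grid

      n = 21
      grid = [[0] * n for _ in range(n)]
      grid[n // 2][n // 2] = 1              # seed a single cell
      print(sum(map(sum, grow(grid))), "cells")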

  19. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    NASA Astrophysics Data System (ADS)

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk considerations presents a great computational challenge. We investigate a parametric approach to computing optimal stochastic strategies using Monte Carlo simulations. This approach reduces computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. With this technique, constraints can be handled using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and Conditional Value-at-Risk (CVaR).
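
    A minimal sketch of the parametric simulation idea: parametrize a selling schedule, simulate execution costs over many Monte Carlo price paths, and report the expected cost and CVaR for a few parameter values. The price dynamics, impact model, and parameter family are toy assumptions, not the formulation in the paper.

      import numpy as np

      rng = np.random.default_rng(1)

      def execution_cost(theta, n_paths=20000, T=10, sigma=0.02, impact=5e-4):
          """Shortfall from selling one unit over T periods with schedule
          x_t proportional to exp(-theta * t), under a random-walk price
          and linear temporary impact (all toy assumptions)."""
          t = np.arange(T)
          x = np.exp(-theta * t)
          x /= x.sum()                              # shares sold per period
          noise = rng.normal(0.0, sigma, (n_paths, T)).cumsum(axis=1)
          prices = 1.0 + noise - impact * x         # impact penalizes big trades
          return 1.0 - (prices * x).sum(axis=1)

      def cvar(costs, alpha=0.95):
          """Mean cost in the worst (1 - alpha) tail."""
          tail = np.sort(costs)[int(alpha * len(costs)):]
          return tail.mean()

      for theta in (0.0, 0.5, 1.0):
          c = execution_cost(theta)
          print(theta, c.mean(), cvar(c))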

  20. Does the 4f electron configuration affect molecular geometries? A joint computational, vibrational spectroscopic, and electron diffraction study of dysprosium tribromide.

    PubMed

    Groen, Cornelis Petrus; Varga, Zoltán; Kolonits, Mária; Peterson, Kirk A; Hargittai, Magdolna

    2009-05-01

    The molecular geometry and vibrational frequencies of monomeric and dimeric dysprosium tribromide, DyBr(3) and Dy(2)Br(6), together with the electronic structure of their ground and first few excited-state molecules were determined by high-level computations, electron diffraction, gas-phase infrared, and matrix isolation infrared and Raman spectroscopy. The effect of partially filled 4f orbitals and spin-orbit coupling on their structure was studied by computations. While the geometry of the monomer does not depend on the 4f orbital occupation, the bond angles of the dimer are noticeably influenced by it. The monomer is found to be planar from all methods; the suggested equilibrium bond length of the molecule (r(e)) is 2.591(8) Å, while the thermal average distance (r(g)) is 2.606(8) Å. Although the gas-phase DyBr(3) molecule is planar, it forms a complex with the matrix molecules in the matrix-isolation spectroscopic experiments, leading to the pyramidalization of the DyBr(3) unit. Our model calculations in this regard also explain the often conflicting results of computations and different experiments about the shape of lanthanide trihalides.

  1. Comparison of phantom and computer-simulated MR images of flow in a convergent geometry: implications for improved two-dimensional MR angiography.

    PubMed

    Siegel, J M; Oshinski, J N; Pettigrew, R I; Ku, D N

    1995-01-01

    The signal loss that occurs in regions of disturbed flow significantly decreases the clinical usefulness of MR angiography in the imaging of diseased arteries. This signal loss is most often attributed to turbulent flow; but on a typical MR angiogram, the signal is lost in the nonturbulent upstream region of the stenosis as well as in the turbulent downstream region. In the current study we used a flow phantom with a forward-facing step geometry to model the upstream region. The flow upstream of the step was convergent, which created high levels of convective acceleration. This region of the flow field contributes to signal loss at the constriction, leading to overestimation of the area reduction of the stenosis. A computer program was designed to simulate the image artifacts that would be caused by this geometry in two-dimensional time-of-flight MR angiography. Simulated images were compared with actual phantom images and the flow artifacts were highly correlated. The computer simulation was then used to test the effects of different orders of motion compensation and of fewer pixels per diameter, as would be present in MR angiograms of small arteries. The results indicated that the computational simulation of flow artifacts upstream of the stenosis provides an important tool in the design of optimal imaging sequences for the reduction of signal loss.

  2. FASTER 3: A generalized-geometry Monte Carlo computer program for the transport of neutrons and gamma rays. Volume 2: Users manual

    NASA Technical Reports Server (NTRS)

    Jordan, T. M.

    1970-01-01

    A description of the FASTER-III program for Monte Carlo calculation of photon and neutron transport in complex geometries is presented. Major revisions include the capability of calculating minimum weight shield configurations for primary and secondary radiation and optimal importance sampling parameters. The program description includes a users manual describing the preparation of input data cards, the printout from a sample problem including the data card images, definitions of Fortran variables, the program logic, and the control cards required to run on the IBM 7094, IBM 360, UNIVAC 1108 and CDC 6600 computers.

  3. FASTER 3: A generalized-geometry Monte Carlo computer program for the transport of neutrons and gamma rays. Volume 1: Summary report

    NASA Technical Reports Server (NTRS)

    Jordan, T. M.

    1970-01-01

    The theory used in FASTER-III, a Monte Carlo computer program for the transport of neutrons and gamma rays in complex geometries, is outlined. The program includes the treatment of geometric regions bounded by quadratic and quadric surfaces with multiple radiation sources which have specified space, angle, and energy dependence. The program calculates, using importance sampling, the resulting number and energy fluxes at specified point, surface, and volume detectors. It can also calculate minimum weight shield configurations meeting a specified dose rate constraint. Results are presented for sample problems involving primary neutron, and primary and secondary photon, transport in a spherical reactor shield configuration.
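
    The importance sampling at the heart of FASTER-III can be illustrated generically: a rare deep-penetration probability that naive Monte Carlo essentially never samples becomes easy once draws come from a biased density and are reweighted by the likelihood ratio. The exponential toy problem below illustrates the variance-reduction principle, not the code itself.

      import numpy as np

      rng = np.random.default_rng(2)
      depth, n = 20.0, 100_000           # estimate P(X > depth) for X ~ Exp(1)

      # Naive Monte Carlo: essentially no samples reach the tail.
      naive = (rng.exponential(1.0, n) > depth).mean()

      # Importance sampling: draw from Exp(mean=depth) and reweight each
      # sample by the likelihood ratio p(y)/q(y) to keep the estimator unbiased.
      lam_q = 1.0 / depth
      y = rng.exponential(1.0 / lam_q, n)
      w = np.exp(-y) / (lam_q * np.exp(-lam_q * y))
      is_est = np.mean((y > depth) * w)

      print(naive, is_est, np.exp(-depth))   # exact answer: e**-20 ~ 2.1e-9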

  4. Computational approaches for microalgal biofuel optimization: a review.

    PubMed

    Koussa, Joseph; Chaiboonchoe, Amphun; Salehi-Ashtiani, Kourosh

    2014-01-01

    The increased demand for and consumption of fossil fuels have raised interest in finding renewable energy sources throughout the globe. Much focus has been placed on optimizing microorganisms, primarily microalgae, to efficiently produce compounds that can substitute for fossil fuels. However, the path to achieving economic feasibility is likely to require strain optimization through using available tools and technologies in the fields of systems and synthetic biology. Such approaches invoke a deep understanding of the metabolic networks of the organisms and their genomic and proteomic profiles. The advent of next-generation sequencing and other high-throughput methods has led to a major increase in the availability of biological data. Integration of such disparate data can help define the emergent metabolic system properties, which is of crucial importance in addressing biofuel production optimization. Herein, we review major computational tools and approaches developed and used in order to potentially identify target genes, pathways, and reactions of particular interest to biofuel production in algae. As the use of these tools and approaches has not been fully implemented in algal biofuel research, the aim of this review is to highlight the potential utility of these resources toward their future implementation in algal research.

  5. Computational approaches for microalgal biofuel optimization: a review.

    PubMed

    Koussa, Joseph; Chaiboonchoe, Amphun; Salehi-Ashtiani, Kourosh

    2014-01-01

    The increased demand for and consumption of fossil fuels have raised interest in finding renewable energy sources throughout the globe. Much focus has been placed on optimizing microorganisms, primarily microalgae, to efficiently produce compounds that can substitute for fossil fuels. However, the path to achieving economic feasibility is likely to require strain optimization through using available tools and technologies in the fields of systems and synthetic biology. Such approaches invoke a deep understanding of the metabolic networks of the organisms and their genomic and proteomic profiles. The advent of next-generation sequencing and other high-throughput methods has led to a major increase in the availability of biological data. Integration of such disparate data can help define the emergent metabolic system properties, which is of crucial importance in addressing biofuel production optimization. Herein, we review major computational tools and approaches developed and used in order to potentially identify target genes, pathways, and reactions of particular interest to biofuel production in algae. As the use of these tools and approaches has not been fully implemented in algal biofuel research, the aim of this review is to highlight the potential utility of these resources toward their future implementation in algal research. PMID:25309916

  6. Serial analysis of lumen geometry and hemodynamics in human arteriovenous fistula for hemodialysis using magnetic resonance imaging and computational fluid dynamics.

    PubMed

    He, Yong; Terry, Christi M; Nguyen, Cuong; Berceli, Scott A; Shiu, Yan-Ting E; Cheung, Alfred K

    2013-01-01

    The arteriovenous fistula (AVF) is the preferred form of vascular access for maintenance hemodialysis, but it often fails to mature to become clinically usable, likely due to aberrant hemodynamic forces. A robust pipeline for serial assessment of hemodynamic parameters and subsequent lumen cross-sectional area changes has been developed and applied to a data set from contrast-free MRI of a dialysis patient's AVF collected over a period of months after AVF creation surgery. Black-blood MRI yielded images of AVF lumen geometry, while cine phase-contrast MRI provided volumetric flow rates at the in-flow and out-flow locations. Lumen geometry and flow rates were used as inputs for computational fluid dynamics (CFD) modeling to provide serial wall shear stress (WSS), WSS gradient, and oscillatory shear index (OSI) profiles. The serial AVF lumen geometries were co-registered at 1 mm intervals using respective lumen centerlines, with the anastomosis as an anatomical landmark. Lumen enlargement was limited at the vein region near the anastomosis and a downstream vein valve, potentially attributable to the physical inhibition of wall expansion at those sites. This work is the first serial, detailed study of lumen and hemodynamic changes in human AVF using MRI and CFD. This novel protocol will be used in a multicenter prospective study to identify critical hemodynamic factors that contribute to AVF maturation failure.
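
    Of the hemodynamic indices named here, the oscillatory shear index has a standard definition, OSI = 0.5 * (1 - |time-averaged WSS vector| / time-average of |WSS vector|), which is easy to sketch; the WSS time series below is synthetic, not data from the study.

      import numpy as np

      def oscillatory_shear_index(tau):
          """OSI = 0.5 * (1 - |mean WSS vector| / mean |WSS vector|):
          0 for unidirectional shear, approaching 0.5 when purely oscillatory."""
          mag_of_mean = np.linalg.norm(tau.mean(axis=0))
          mean_of_mag = np.linalg.norm(tau, axis=1).mean()
          return 0.5 * (1.0 - mag_of_mean / mean_of_mag)

      # Synthetic cardiac cycle: a steady component plus an oscillating part.
      t = np.linspace(0.0, 1.0, 100, endpoint=False)
      tau = np.column_stack([0.2 + np.sin(2 * np.pi * t),
                             0.1 * np.cos(2 * np.pi * t)])
      print(oscillatory_shear_index(tau))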

  7. Suggested Approaches to the Measurement of Computer Anxiety.

    ERIC Educational Resources Information Center

    Toris, Carol

    Psychologists can gain insight into human behavior by examining what people feel about, know about, and do with, computers. Two extreme reactions to computers are computer phobia, or anxiety, and computer addiction, or "hacking". A four-part questionnaire was developed to measure computer anxiety. The first part is a projective technique which…

  8. Proof in Transformation Geometry

    ERIC Educational Resources Information Center

    Bell, A. W.

    1971-01-01

    The first of three articles showing how inductively-obtained results in transformation geometry may be organized into a deductive system. This article discusses two approaches to enlargement (dilatation), one using coordinates and the other using synthetic methods. (MM)

  9. Computer Modeling of Violent Intent: A Content Analysis Approach

    SciTech Connect

    Sanfilippo, Antonio P.; Mcgrath, Liam R.; Bell, Eric B.

    2014-01-03

    We present a computational approach to modeling the intent of a communication source representing a group or an individual to engage in violent behavior. Our aim is to identify and rank aspects of radical rhetoric that are endogenously related to violent intent to predict the potential for violence as encoded in written or spoken language. We use correlations between contentious rhetoric and the propensity for violent behavior found in documents from radical terrorist and non-terrorist groups and individuals to train and evaluate models of violent intent. We then apply these models to unseen instances of linguistic behavior to detect signs of contention that have a positive correlation with violent intent factors. Of particular interest is the application of violent intent models to social media, such as Twitter, that have proved to serve as effective channels in furthering sociopolitical change.

  10. A general computational approach for repeat protein design.

    PubMed

    Parmeggiani, Fabio; Huang, Po-Ssu; Vorobiev, Sergey; Xiao, Rong; Park, Keunwan; Caprari, Silvia; Su, Min; Seetharaman, Jayaraman; Mao, Lei; Janjua, Haleema; Montelione, Gaetano T; Hunt, John; Baker, David

    2015-01-30

    Repeat proteins have considerable potential for use as modular binding reagents or biomaterials in biomedical and nanotechnology applications. Here we describe a general computational method for building idealized repeats that integrates available family sequences and structural information with Rosetta de novo protein design calculations. Idealized designs from six different repeat families were generated and experimentally characterized; 80% of the proteins were expressed and soluble and more than 40% were folded and monomeric with high thermal stability. Crystal structures determined for members of three families are within 1 Å root-mean-square deviation to the design models. The method provides a general approach for fast and reliable generation of stable modular repeat protein scaffolds. PMID:25451037

  11. A General Computational Approach for Repeat Protein Design

    PubMed Central

    Parmeggiani, Fabio; Huang, Po-Ssu; Vorobiev, Sergey; Xiao, Rong; Park, Keunwan; Caprari, Silvia; Su, Min; Seetharaman, Jayaraman; Mao, Lei; Janjua, Haleema; Montelione, Gaetano T.; Hunt, John; Baker, David

    2014-01-01

    Repeat proteins have considerable potential for use as modular binding reagents or biomaterials in biomedical and nanotechnology applications. Here we describe a general computational method for building idealized repeats that integrates available family sequences and structural information with Rosetta de novo protein design calculations. Idealized designs from six different repeat families were generated and experimentally characterized; 80% of the proteins were expressed and soluble and more than 40% were folded and monomeric with high thermal stability. Crystal structures determined for members of three families are within 1 Å root-mean-square deviation to the design models. The method provides a general approach for fast and reliable generation of stable modular repeat protein scaffolds. PMID:25451037

  12. Systems approaches to computational modeling of the oral microbiome

    PubMed Central

    Dimitrov, Dimiter V.

    2013-01-01

    Current microbiome research has generated tremendous amounts of data providing snapshots of molecular activity in a variety of organisms, environments, and cell types. However, turning this knowledge into a whole-system level of understanding of pathways and processes has proven to be a challenging task. In this review we highlight the applicability of bioinformatics and visualization techniques to large collections of data in order to better understand the information they contain about diet, oral microbiome, and host mucosal transcriptome interactions. In particular, we focus on systems biology of Porphyromonas gingivalis in the context of high-throughput computational methods tightly integrated with translational systems medicine. Those approaches have applications for both basic research, where we can direct specific laboratory experiments in model organisms and cell cultures, and human disease, where we can validate new mechanisms and biomarkers for prevention and treatment of chronic disorders. PMID:23847548

  13. Computational approaches to study the effects of small genomic variations.

    PubMed

    Khafizov, Kamil; Ivanov, Maxim V; Glazova, Olga V; Kovalenko, Sergei P

    2015-10-01

    Advances in DNA sequencing technologies have led to an avalanche-like increase in the number of gene sequences deposited in public databases over the last decade as well as the detection of an enormous number of previously unseen nucleotide variants therein. Given the size and complex nature of the genome-wide sequence variation data, as well as the rate of data generation, experimental characterization of the disease association of each of these variations or their effects on protein structure/function would be costly, laborious, time-consuming, and essentially impossible. Thus, in silico methods to predict the functional effects of sequence variations are constantly being developed. In this review, we summarize the major computational approaches and tools that are aimed at the prediction of the functional effect of mutations, and describe the state-of-the-art databases that can be used to obtain information about mutation significance. We also discuss future directions in this highly competitive field.

  14. Computational Diagnostic: A Novel Approach to View Medical Data.

    SciTech Connect

    Mane, K. K.; Börner, K.

    2007-01-01

    A transition from traditional paper-based medical records to electronic health records is largely underway. The use of electronic records offers tremendous potential to personalize patient diagnosis and treatment. In this paper, we discuss a computational diagnostic tool that uses digital medical records to help doctors gain better insight about a patient's medical condition. The paper details different interactive features of the tool, which offer potential to practice evidence-based medicine and advance patient diagnosis practices. The healthcare industry is a constantly evolving domain. Research from this domain is often translated into better understanding of different medical conditions. This new knowledge often contributes towards improved diagnosis and treatment solutions for patients. But the healthcare industry lags in realizing the immediate benefits of this new knowledge, as it still adheres to the traditional paper-based approach to keeping track of medical records. Recently, however, there has been a drive promoting a transition towards the electronic health record (EHR). An EHR stores patient medical records in digital format and offers the potential to replace paper health records. Earlier attempts at an EHR replicated the paper layout on the screen, represented the medical history of a patient in a graphical time-series format, or provided interactive visualization with 2D/3D images generated from an imaging device. But an EHR can be much more than just an 'electronic view' of the paper record or a collection of images from an imaging device. In this paper, we present an EHR called the 'Computational Diagnostic Tool', which provides a novel computational approach to looking at patient medical data. The developed EHR system is knowledge driven and acts as a clinical decision support tool. The EHR tool provides two visual views of the medical data. Dynamic interaction with the data is supported to help doctors practice evidence-based decisions and make judicious choices about patient care.

  15. Dynamical Approach Study of Spurious Numerics in Nonlinear Computations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Mansour, Nagi (Technical Monitor)

    2002-01-01

    The last two decades have been an era when computation is ahead of analysis and when very large scale practical computations are increasingly used in poorly understood multiscale complex nonlinear physical problems and non-traditional fields. Ensuring a higher level of confidence in the predictability and reliability (PAR) of these numerical simulations could play a major role in furthering the design, understanding, affordability and safety of our next generation air and space transportation systems, and systems for planetary and atmospheric sciences, and in understanding the evolution and origin of life. The need to guarantee PAR becomes acute when computations offer the ONLY way of solving these types of data limited problems. Employing theory from nonlinear dynamical systems, some building blocks to ensure a higher level of confidence in PAR of numerical simulations have been revealed by the author and world expert collaborators in relevant fields. Five building blocks with supporting numerical examples were discussed. The next step is to utilize knowledge gained by including nonlinear dynamics, bifurcation and chaos theories as an integral part of the numerical process. The third step is to design integrated criteria for reliable and accurate algorithms that cater to the different multiscale nonlinear physics. This includes but is not limited to the construction of appropriate adaptive spatial and temporal discretizations that are suitable for the underlying governing equations. In addition, a multiresolution wavelets approach for adaptive numerical dissipation/filter controls for high speed turbulence, acoustics and combustion simulations will be sought. These steps are cornerstones for guarding against spurious numerical solutions that are solutions of the discretized counterparts but are not solutions of the underlying governing equations.

  16. Ab initio analysis of the energy and geometry during the rearrangement of cyclopentadienylboranes and the evaluation of the DAPSIC computer tutorial

    NASA Astrophysics Data System (ADS)

    Hill, Brian David

    1999-11-01

    The equilibrium and transition state geometries of the degenerate 1,5-sigmatropic rearrangement of cyclopentadienylborane, cyclopentadienyldifluoroborane, cyclopentadienyldichloroborane, pentamethylcyclopentadienylborane, pentamethylcyclopentadienyldifluoroborane, and pentamethylcyclopentadienyldichloroborane were optimized using ab initio (RHF/3-21G*, RHF/6-31G*, RMP2/3-21G*, and RMP2/6-31G*) calculations. Activation energies were predicted and compared with previously published experimental data [P. Jutzi, B. Krato, M. Hursthouse, A. J. Howes, Chem. Ber. (1987), 120, 565-574]. The molecules optimized to an asymmetrical geometry with the boron atom shifted away from its symmetric eta(1) position and toward one of the two neighboring carbons. This geometry was predicted for each molecule at each level of theory except for C5H5BH2 at the RMP2/6-31G* level and C5H5BH2 at the RHF/6-31G* level. This geometry was also predicted for bis(pentamethylcyclopentadienyl)fluoroborane. Also, a computer-aided instruction program called DAPSIC was evaluated for effectiveness in introductory college chemistry classes at MTSU. DAPSIC was designed to teach unit conversions using the factor-label method (also known as dimensional analysis or unit analysis). Student performance on a brief quiz before and after using DAPSIC was compared with student performance on a brief quiz before and after doing an equivalent worksheet assignment. In the chemistry class intended for non-majors, the improvement in the quiz scores of students who used DAPSIC was significantly greater than the improvement in the quiz scores of students who used the worksheet. No significant difference was seen in the chemistry majors' introductory class. In both classes, students over age 22 who used DAPSIC also showed significantly greater improvement than students over age 22 who used the worksheet.

  17. Molecular interactions and crystal packing in nematogen: Computational thermodynamic approach

    NASA Astrophysics Data System (ADS)

    Lakshmi Praveen, P.; Ojha, Durga P.

    2011-10-01

    A computational thermodynamic approach to molecular interactions in a nematogen, the p-n-alkyl benzoic acid (nBAC) molecule with the alkyl group butyl (4BAC), has been carried out with respect to translational and orientational motion. The atomic net charge and dipole moment at each atomic center were evaluated using the complete neglect of differential overlap (CNDO/2) method. The modified Rayleigh-Schrödinger perturbation theory along with the multicentered-multipole expansion method were employed to evaluate long-range intermolecular interactions, while a 6-exp potential function was assumed for short-range interactions. Various possible geometrical arrangements of molecular pairs with regard to different energy components were considered, and the energetically favorable configuration was identified to understand the crystal packing picture. Furthermore, these interaction energy values were taken as input to calculate the configurational entropy at room temperature (300 K), the nematic-isotropic transition temperature (386 K), and above the transition temperature (450 K) during different modes of interaction. An attempt has been made to describe interactions in a nematogen at the molecular level, through which one can simplify the system to make the model computationally feasible in understanding the delicate interplay between energy and entropy that accounts for mesomorphism, and thereby to analyze the molecular structure of a nematogen.
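
    The short-range "6-exp" potential named here is the standard Buckingham form V(r) = A exp(-B r) - C / r^6; a small sketch with arbitrary illustrative constants:

      import math

      def buckingham(r, A=1000.0, B=3.0, C=10.0):
          """6-exp (Buckingham) pair potential: exponential repulsion plus
          an attractive dispersion term falling off as r**-6 (toy constants)."""
          return A * math.exp(-B * r) - C / r**6

      for r in (1.5, 2.0, 3.0, 4.0):
          print(r, buckingham(r))

    Summing such pair terms over all atomic centres, together with the long-range multipole electrostatics, yields the configuration-dependent interaction energies compared in studies of this kind.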

  18. A machine-learning approach for computation of fractional flow reserve from coronary computed tomography.

    PubMed

    Itu, Lucian; Rapaka, Saikiran; Passerini, Tiziano; Georgescu, Bogdan; Schwemmer, Chris; Schoebinger, Max; Flohr, Thomas; Sharma, Puneet; Comaniciu, Dorin

    2016-07-01

    Fractional flow reserve (FFR) is a functional index quantifying the severity of coronary artery lesions and is clinically obtained using an invasive, catheter-based measurement. Recently, physics-based models have shown great promise in being able to noninvasively estimate FFR from patient-specific anatomical information, e.g., obtained from computed tomography scans of the heart and the coronary arteries. However, these models have high computational demand, limiting their clinical adoption. In this paper, we present a machine-learning-based model for predicting FFR as an alternative to physics-based approaches. The model is trained on a large database of synthetically generated coronary anatomies, where the target values are computed using the physics-based model. The trained model predicts FFR at each point along the centerline of the coronary tree, and its performance was assessed by comparing the predictions against physics-based computations and against invasively measured FFR for 87 patients and 125 lesions in total. Correlation between machine-learning and physics-based predictions was excellent (0.9994, P < 0.001), and no systematic bias was found in Bland-Altman analysis: mean difference was -0.00081 ± 0.0039. Invasive FFR ≤ 0.80 was found in 38 lesions out of 125 and was predicted by the machine-learning algorithm with a sensitivity of 81.6%, a specificity of 83.9%, and an accuracy of 83.2%. The correlation was 0.729 (P < 0.001). Compared with the physics-based computation, average execution time was reduced by more than 80 times, leading to near real-time assessment of FFR. Average execution time went down from 196.3 ± 78.5 s for the CFD model to ∼2.4 ± 0.44 s for the machine-learning model on a workstation with 3.4-GHz Intel i7 8-core processor.
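
    The workflow described, training on a synthetically generated database against physics-based targets and then predicting in near real time, can be sketched with generic tools. The features, target function, and regressor below are invented stand-ins, not the paper's actual descriptors or model.

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(3)
      n = 5000
      # Invented per-lesion features: reference radius, minimal radius,
      # lesion length (stand-ins for the real geometric descriptors).
      X = np.column_stack([rng.uniform(1.2, 2.0, n),
                           rng.uniform(0.4, 1.8, n),
                           rng.uniform(5.0, 30.0, n)])
      severity = 1.0 - (X[:, 1] / X[:, 0]) ** 2
      # Toy "physics-based" target: FFR drops with severity and length.
      y = np.clip(1.0 - 0.6 * severity - 0.004 * X[:, 2]
                  + rng.normal(0.0, 0.01, n), 0.0, 1.0)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      model = GradientBoostingRegressor().fit(X_tr, y_tr)
      print(np.corrcoef(model.predict(X_te), y_te)[0, 1])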

  19. Towards scalable quantum communication and computation: Novel approaches and realizations

    NASA Astrophysics Data System (ADS)

    Jiang, Liang

    Quantum information science involves exploration of fundamental laws of quantum mechanics for information processing tasks. This thesis presents several new approaches towards scalable quantum information processing. First, we consider a hybrid approach to scalable quantum computation, based on an optically connected network of few-qubit quantum registers. Specifically, we develop a novel scheme for scalable quantum computation that is robust against various imperfections. To justify that nitrogen-vacancy (NV) color centers in diamond can be a promising realization of the few-qubit quantum register, we show how to isolate a few proximal nuclear spins from the rest of the environment and use them for the quantum register. We also demonstrate experimentally that the nuclear spin coherence is only weakly perturbed under optical illumination, which allows us to implement quantum logical operations that use the nuclear spins to assist the repetitive-readout of the electronic spin. Using this technique, we demonstrate more than two-fold improvement in signal-to-noise ratio. Apart from direct application to enhance the sensitivity of the NV-based nano-magnetometer, this experiment represents an important step towards the realization of robust quantum information processors using electronic and nuclear spin qubits. We then study realizations of quantum repeaters for long distance quantum communication. Specifically, we develop an efficient scheme for quantum repeaters based on atomic ensembles. We use dynamic programming to optimize various quantum repeater protocols. In addition, we propose a new protocol of quantum repeater with encoding, which efficiently uses local resources (about 100 qubits) to identify and correct errors, to achieve fast one-way quantum communication over long distances. Finally, we explore quantum systems with topological order. Such systems can exhibit remarkable phenomena such as quasiparticles with anyonic statistics and have been proposed as

  20. Separation efficiency of a hydrodynamic separator using a 3D computational fluid dynamics multiscale approach.

    PubMed

    Schmitt, Vivien; Dufresne, Matthieu; Vazquez, Jose; Fischer, Martin; Morin, Antoine

    2014-01-01

    The aim of this study is to investigate the use of computational fluid dynamics (CFD) to predict the solid separation efficiency of a hydrodynamic separator. The numerical difficulty concerns the discretization of the geometry to simulate both the global behavior and the local phenomena that occur near the screen. In this context, a CFD multiscale approach was used: a global model (at the scale of the device) is used to observe the hydrodynamic behavior within the device; a local model (a portion of the screen) is used to determine the local phenomena that occur near the screen. The Eulerian-Lagrangian approach was used to model the particle trajectories in both models. The global model shows the influence of the particles' characteristics on the trapping efficiency. A high density favors sedimentation. In contrast, particles with small densities (1,040 kg/m(3)) are steered by the hydrodynamic behavior and can potentially be trapped by the separator. The use of the local model allows us to observe the particle trajectories near the screen. A comparison between two types of screens (perforated plate vs expanded metal) highlights the turbulent effects created by the shape of the screen.

  1. Computational approaches to protein inference in shotgun proteomics.

    PubMed

    Li, Yong Fuga; Radivojac, Predrag

    2012-01-01

    Shotgun proteomics has recently emerged as a powerful approach to characterizing proteomes in biological samples. Its overall objective is to identify the form and quantity of each protein in a high-throughput manner by coupling liquid chromatography with tandem mass spectrometry. As a consequence of its high throughput nature, shotgun proteomics faces challenges with respect to the analysis and interpretation of experimental data. Among such challenges, the identification of proteins present in a sample has been recognized as an important computational task. This task generally consists of (1) assigning experimental tandem mass spectra to peptides derived from a protein database, and (2) mapping assigned peptides to proteins and quantifying the confidence of identified proteins. Protein identification is fundamentally a statistical inference problem with a number of methods proposed to address its challenges. In this review we categorize current approaches into rule-based, combinatorial optimization and probabilistic inference techniques, and present them using integer programming and Bayesian inference frameworks. We also discuss the main challenges of protein identification and propose potential solutions with the goal of spurring innovative research in this area. PMID:23176300
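
    Step (2), mapping assigned peptides to a parsimonious set of proteins, is naturally a set-cover problem; a greedy sketch conveys the combinatorial-optimization view. This is one simple heuristic among the rule-based, integer-programming, and Bayesian families reviewed, not the authors' own method.

      def parsimonious_proteins(peptides_by_protein, observed_peptides):
          """Greedy set cover: repeatedly pick the protein explaining the most
          still-unexplained peptides (the parsimony heuristic)."""
          remaining = set(observed_peptides)
          chosen = []
          while remaining:
              best = max(peptides_by_protein,
                         key=lambda p: len(peptides_by_protein[p] & remaining))
              covered = peptides_by_protein[best] & remaining
              if not covered:
                  break                  # leftover peptides match no protein
              chosen.append(best)
              remaining -= covered
          return chosen

      db = {"P1": {"a", "b", "c"}, "P2": {"b", "c"}, "P3": {"d"}}
      print(parsimonious_proteins(db, {"a", "b", "c", "d"}))   # ['P1', 'P3']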

  2. Computational approach in estimating the need of ditch network maintenance

    NASA Astrophysics Data System (ADS)

    Lauren, Ari; Hökkä, Hannu; Launiainen, Samuli; Palviainen, Marjo; Repo, Tapani; Finér, Leena; Piirainen, Sirpa

    2015-04-01

    Ditch network maintenance (DNM), implemented annually over a 70,000 ha area in Finland, is the most controversial of all forest management practices. Nationwide, it is estimated to increase forest growth by 1-3 million m3 per year, but simultaneously to cause the export of 65,000 tons of suspended solids and 71 tons of phosphorus (P) to water courses. A systematic approach that allows simultaneous quantification of the positive and negative effects of DNM is required. Excess water in the rooting zone slows gas exchange and decreases biological activity, interfering with forest growth in boreal forested peatlands. DNM is needed when: 1) excess water in the rooting zone restricts forest growth before the DNM; 2) after the DNM the growth restriction ceases or decreases; and 3) the benefits of DNM are greater than the adverse effects caused. Aeration in the rooting zone can be used as a drainage criterion. Aeration is affected by several factors such as meteorological conditions, tree stand properties, hydraulic properties of peat, ditch depth, and ditch spacing. We developed a 2-dimensional DNM simulator that allows the user to adjust these factors and to evaluate their effect on soil aeration at different distances from the drainage ditch. The DNM simulator computes hydrological processes and soil aeration along a water flowpath between two ditches. Applying a daily time step, it calculates evapotranspiration, snow accumulation and melt, infiltration, soil water storage, ground water level, soil water content, air-filled porosity, and runoff. The model's hydrological performance has been tested against independent high-frequency field monitoring data. Soil aeration at different distances from the ditch is computed under a steady-state assumption using an empirical oxygen consumption model, simulated air-filled porosity, and diffusion coefficients at different depths in the soil. Aeration is adequate and forest growth rate is not limited by poor aeration if the

  3. An Evolutionary Computation Approach to Examine Functional Brain Plasticity

    PubMed Central

    Roy, Arnab; Campbell, Colin; Bernier, Rachel A.; Hillary, Frank G.

    2016-01-01

    One common research goal in systems neurosciences is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well-suited for the study of developmental processes, learning, and even recovery or treatment designs in response to injury. For most fMRI-based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signals representing each region. The drawback to this approach is that much information is lost due to averaging heterogeneous voxels, and therefore a functional relationship between an ROI-pair that evolves at a spatial scale much finer than the ROIs remains undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator-defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC based procedure is able to detect functional plasticity where a traditional averaging based approach fails. The subject-specific plasticity estimates obtained using the EC-procedure are highly consistent across multiple runs. Group-level analyses using these plasticity estimates showed an increase in the strength

  4. An Evolutionary Computation Approach to Examine Functional Brain Plasticity.

    PubMed

    Roy, Arnab; Campbell, Colin; Bernier, Rachel A; Hillary, Frank G

    2016-01-01

    One common research goal in systems neuroscience is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well suited to the study of developmental processes, learning, and even recovery or treatment designs in response to injury. For most fMRI-based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signals representing each region. The drawback to this approach is that much information is lost by averaging heterogeneous voxels, and therefore a functional relationship between an ROI-pair that evolves at a spatial scale much finer than the ROIs remains undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator-defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC-based procedure is able to detect functional plasticity where a traditional averaging-based approach fails. The subject-specific plasticity estimates obtained using the EC procedure are highly consistent across multiple runs. Group-level analyses using these plasticity estimates showed an increase in the strength

  5. Quasi-relativistic model-potential approach. Spin-orbit effects on energies and geometries of several di- and tri-atomic molecules

    NASA Astrophysics Data System (ADS)

    Hafner, P.; Habitz, P.; Ishikawa, Y.; Wechsel-Trakowski, E.; Schwarz, W. H. E.

    1981-06-01

    Calculations on ground and valence-excited states of Au2+, Tl2 and Pb2, and on the ground states of HgCl2, PbCl2 and PbH2 have been performed within the Kramers-restricted self-consistent-field approach using a quasi-relativistic model-potential Hamiltonian. The influence of spin-orbit coupling on molecular orbitals, bond energies and geometries is discussed.

  6. Mapping Agricultural Fields in Sub-Saharan Africa with a Computer Vision Approach

    NASA Astrophysics Data System (ADS)

    Debats, S. R.; Luo, D.; Estes, L. D.; Fuchs, T.; Caylor, K. K.

    2014-12-01

    Sub-Saharan Africa is an important focus for food security research, because it is experiencing unprecedented population growth, agricultural activities are largely dominated by smallholder production, and the region is already home to 25% of the world's undernourished. One of the greatest challenges to monitoring and improving food security in this region is obtaining an accurate accounting of the spatial distribution of agriculture. Households are the primary units of agricultural production in smallholder communities and typically rely on small fields of less than 2 hectares. Field sizes are directly related to household crop productivity, management choices, and adoption of new technologies. As population and agriculture expand, it becomes increasingly important to understand both the distribution of field sizes and how agricultural communities are spatially embedded in the landscape. In addition, household surveys, a common tool for tracking agricultural productivity in Sub-Saharan Africa, would greatly benefit from a spatially explicit accounting of fields. Current gridded land cover data sets do not provide information on individual agricultural fields or the distribution of field sizes. Therefore, we employ cutting-edge approaches from the field of computer vision to map fields across Sub-Saharan Africa, including semantic segmentation, discriminative classifiers, and automatic feature selection. Our approach aims not only to improve the binary classification accuracy of cropland, but also to isolate distinct fields, thereby capturing crucial information on size and geometry. Our research focuses on the development of descriptive features across scales to increase the accuracy and geographic range of our computer vision algorithm. Relevant data sets include high-resolution remote sensing imagery and Landsat (30-m) multi-spectral imagery. Training data for field boundaries are derived from hand-digitized data sets as well as crowdsourcing.

  7. Communication: An efficient approach to compute state-specific nuclear gradients for a generic state-averaged multi-configuration self-consistent field wavefunction

    SciTech Connect

    Granovsky, Alexander A.

    2015-12-21

    We present a new, very efficient semi-numerical approach for the computation of state-specific nuclear gradients of a generic state-averaged multi-configuration self-consistent field wavefunction. Our approach eliminates the costly coupled-perturbed multi-configuration Hartree-Fock step as well as the associated integral transformation stage. The details of the implementation within the Firefly quantum chemistry package are discussed and several sample applications are given. The new approach is routinely applicable to geometry optimization of molecular systems with 1000+ basis functions using a standalone multi-core workstation.
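
    The finite-difference ingredient of a semi-numerical gradient can be illustrated on a toy energy function; this is a generic sketch, not the Firefly implementation, which is organized to avoid the naive cost shown here.

```python
import numpy as np

def numerical_gradient(energy, coords, h=1e-3):
    """Central-difference nuclear gradient of a generic energy function.

    energy: callable mapping an (N, 3) coordinate array to a scalar energy.
    """
    grad = np.zeros_like(coords)
    flat = coords.ravel()
    for i in range(flat.size):
        step = np.zeros_like(flat)
        step[i] = h
        grad.ravel()[i] = (energy((flat + step).reshape(coords.shape))
                           - energy((flat - step).reshape(coords.shape))) / (2 * h)
    return grad

# Toy example: harmonic "energy" of a diatomic stretched to 1.1 units.
coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.1]])
e = lambda c: 0.5 * (np.linalg.norm(c[1] - c[0]) - 1.0) ** 2
print(numerical_gradient(e, coords))
```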

  8. Communication: An efficient approach to compute state-specific nuclear gradients for a generic state-averaged multi-configuration self-consistent field wavefunction

    NASA Astrophysics Data System (ADS)

    Granovsky, Alexander A.

    2015-12-01

    We present a new, very efficient semi-numerical approach for the computation of state-specific nuclear gradients of a generic state-averaged multi-configuration self-consistent field wavefunction. Our approach eliminates the costly coupled-perturbed multi-configuration Hartree-Fock step as well as the associated integral transformation stage. The details of the implementation within the Firefly quantum chemistry package are discussed and several sample applications are given. The new approach is routinely applicable to geometry optimization of molecular systems with 1000+ basis functions using a standalone multi-core workstation.

  9. A computational approach to mechanistic and predictive toxicology of pesticides.

    PubMed

    Kongsbak, Kristine; Vinggaard, Anne Marie; Hadrup, Niels; Audouze, Karine

    2014-01-01

    Emerging challenges of managing and interpreting large amounts of complex biological data have given rise to the growing field of computational biology. We investigated the applicability of an integrated systems toxicology approach on five selected pesticides to get an overview of their modes of action in humans, to group them according to their modes of action, and to hypothesize on their potential effects on human health. We extracted human proteins associated with prochloraz, tebuconazole, epoxiconazole, procymidone, and mancozeb and enriched each protein set by using a high-confidence human protein interactome. Then, we explored the modes of action of the chemicals by integrating protein-disease information into the resulting protein networks. The dominant human adverse effects were reproductive disorders, followed by adrenal diseases. Our results indicated that prochloraz, tebuconazole, and procymidone exerted their effects mainly via interference with steroidogenesis and nuclear receptors. Prochloraz was associated with a large number of human diseases, and together with tebuconazole showed several significant associations with Testicular Dysgenesis Syndrome. Mancozeb showed a differential mode of action, involving inflammatory processes. This method provides an efficient way of obtaining an overview of data and grouping chemicals according to their mode of action and potential human adverse effects. Such information is valuable when dealing with predictions of mixture effects of chemicals and may contribute to the development of adverse outcome pathways. PMID:24037280
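
    A minimal sketch of the interactome-enrichment step with networkx, using a toy graph and a hypothetical seed set; the study used a high-confidence human interactome.

```python
import networkx as nx

# Toy interactome; the study used a high-confidence human interactome.
interactome = nx.Graph([
    ("CYP17A1", "CYP19A1"), ("CYP19A1", "ESR1"), ("ESR1", "AR"),
    ("AR", "NR3C1"), ("TNF", "IL6"), ("IL6", "NFKB1"),
])

# Hypothetical seed proteins associated with one pesticide.
seeds = {"CYP19A1", "AR"}

def enrich(graph, seed_proteins):
    """Expand a seed set with its direct interactome neighbours."""
    enriched = set(seed_proteins)
    for protein in seed_proteins:
        if protein in graph:
            enriched.update(graph.neighbors(protein))
    return enriched

print(sorted(enrich(interactome, seeds)))
```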

  10. Applying a cloud computing approach to storage architectures for spacecraft

    NASA Astrophysics Data System (ADS)

    Baldor, Sue A.; Quiroz, Carlos; Wood, Paul

    As sensor technologies, processor speeds, and memory densities increase, spacecraft command, control, processing, and data storage systems have grown in complexity to take advantage of these improvements and expand the possible missions of spacecraft. Spacecraft systems engineers are increasingly looking for novel ways to address this growth in complexity and mitigate associated risks. Looking to conventional computing, many solutions have been developed to address both complexity and heterogeneity in systems. In particular, the cloud-based paradigm provides a solution for distributing applications and storage capabilities across multiple platforms. In this paper, we propose utilizing a cloud-like architecture as a scalable mechanism for providing mass storage in spacecraft networks that can be reused on multiple spacecraft systems. By presenting a consistent interface to applications and devices that request data to be stored, complex systems designed by multiple organizations may be more readily integrated. Behind the abstraction, the cloud storage capability would manage wear-leveling, power consumption, and other attributes related to the physical memory devices, critical components in any mass storage solution for spacecraft. Our approach employs SpaceWire networks and SpaceWire-capable devices, although the concept could easily be extended to non-heterogeneous networks consisting of multiple spacecraft and potentially the ground segment.

  11. A systems approach to computer-based training

    NASA Technical Reports Server (NTRS)

    Drape, Gaylen W.

    1994-01-01

    This paper describes the hardware and software systems approach used in the Automated Recertification Training System (ARTS), a Phase 2 Small Business Innovation Research (SBIR) project for NASA Kennedy Space Center (KSC). The goal of this project is to optimize recertification training of technicians who process the Space Shuttle before launch by providing computer-based training courseware. The objectives of ARTS are to implement more effective CBT applications identified through a needs-assessment process and to provide an enhanced courseware production system. The system's capabilities are demonstrated by using five different pilot applications to convert existing classroom courses into interactive courseware. When the system is fully implemented at NASA/KSC, trainee job performance will improve and the cost of courseware development will be lower. Commercialization of the technology developed as part of this SBIR project is planned for Phase 3. Anticipated spin-off products include custom courseware for technical skills training and courseware production software for use by corporate training organizations of aerospace and other industrial companies.

  12. A Computational Drug Repositioning Approach for Targeting Oncogenic Transcription Factors

    PubMed Central

    Gayvert, Kaitlyn; Dardenne, Etienne; Cheung, Cynthia; Boland, Mary Regina; Lorberbaum, Tal; Wanjala, Jackline; Chen, Yu; Rubin, Mark; Tatonetti, Nicholas P.; Rickman, David; Elemento, Olivier

    2016-01-01

    Mutations in transcription factor (TF) genes are frequently observed in tumors, often leading to aberrant transcriptional activity. Unfortunately, TFs are often considered undruggable due to the absence of targetable enzymatic activity. To address this problem, we developed CRAFTT, a Computational drug-Repositioning Approach For Targeting Transcription factor activity. CRAFTT combines ChIP-seq with drug-induced expression profiling to identify small molecules that can specifically perturb TF activity. Application to ENCODE ChIP-seq datasets revealed known drug-TF interactions, and a global drug-protein network analysis further supported these predictions. Application of CRAFTT to ERG, a pro-invasive, frequently over-expressed oncogenic TF, predicted that dexamethasone would inhibit ERG activity. Indeed, dexamethasone significantly decreased cell invasion and migration in an ERG-dependent manner. Furthermore, analysis of Electronic Medical Record data indicates a protective role for dexamethasone against prostate cancer. Altogether, our method provides a broadly applicable strategy to identify drugs that specifically modulate TF activity. PMID:27264179
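
    The core scoring idea, ranking candidate drugs by how strongly their induced expression profiles anti-correlate with a TF activity signature, can be sketched as follows; the data are invented and CRAFTT's actual statistics are more involved.

```python
import numpy as np

# Hypothetical TF activity signature over five genes (e.g., ChIP-seq targets).
tf_signature = np.array([1.0, 0.8, 0.2, -0.5, -0.9])
# Hypothetical drug-induced expression profiles over the same genes.
drug_profiles = {
    "drugA": np.array([-0.9, -0.7, 0.1, 0.4, 0.8]),
    "drugB": np.array([0.8, 0.9, 0.3, -0.2, -0.6]),
}

# Rank drugs by anti-correlation with the TF signature: a strongly negative
# correlation suggests the drug may reverse (perturb) the TF's activity.
for drug, profile in sorted(
        drug_profiles.items(),
        key=lambda kv: np.corrcoef(tf_signature, kv[1])[0, 1]):
    print(drug, round(np.corrcoef(tf_signature, profile)[0, 1], 2))
```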

  13. Lexical is as lexical does: computational approaches to lexical representation

    PubMed Central

    Woollams, Anna M.

    2015-01-01

    In much of neuroimaging and neuropsychology, regions of the brain have been associated with ‘lexical representation’, with little consideration as to what this cognitive construct actually denotes. Within current computational models of word recognition, there are a number of different approaches to the representation of lexical knowledge. Structural lexical representations, found in original theories of word recognition, have been instantiated in modern localist models. However, such a representational scheme lacks neural plausibility in terms of economy and flexibility. Connectionist models have therefore adopted distributed representations of form and meaning. Semantic representations in connectionist models necessarily encode lexical knowledge. Yet when equipped with recurrent connections, connectionist models can also develop attractors for familiar forms that function as lexical representations. Current behavioural, neuropsychological and neuroimaging evidence shows a clear role for semantic information, but also suggests some modality- and task-specific lexical representations. A variety of connectionist architectures could implement these distributed functional representations, and further experimental and simulation work is required to discriminate between these alternatives. Future conceptualisations of lexical representations will therefore emerge from a synergy between modelling and neuroscience. PMID:25893204

  14. Teaching of Geometry in Bulgaria

    ERIC Educational Resources Information Center

    Bankov, Kiril

    2013-01-01

    Geometry plays an important role in the school mathematics curriculum all around the world. The teaching of geometry varies a lot (Hoyles, Foxman, & Kuchemann, 2001). Many countries revise the objectives, the content, and the approaches to geometry in school. Studies of these processes show that there are no common trends in these changes…

  15. Computer vision approach to detect colonic polyps in computed tomographic colonography

    NASA Astrophysics Data System (ADS)

    McKenna, Matthew T.; Wang, Shijun; Nguyen, Tan B.; Burns, Joseph E.; Petrick, Nicholas; Sahiner, Berkman; Summers, Ronald M.

    2012-03-01

    In this paper, we present evaluation results for a novel colonic polyp classification method for use as part of a computed tomographic colonography (CTC) computer-aided detection (CAD) algorithm. Inspired by the interpretative methodology of radiologists using 3D fly-through mode in CTC reading, we have developed an algorithm which utilizes sequences of images (referred to here as videos) for classification of CAD marks. First, we generated an initial list of polyp candidates using an existing CAD system. For each of these candidates, we created a video composed of a series of intraluminal, volume-rendered images focusing on the candidate from multiple viewpoints. These videos illustrated the shape of the polyp candidate and gathered contextual information of diagnostic importance. We calculated the histogram of oriented gradients (HOG) feature on each frame of the video and utilized a support vector machine for classification. We tested our method by analyzing a CTC data set of 50 patients from three medical centers. Our proposed video analysis method for polyp classification showed significantly better performance than an approach using only the 2D CT slice data. The areas under the ROC curve for these methods were 0.88 (95% CI: [0.84, 0.91]) and 0.80 (95% CI: [0.75, 0.84]) respectively (p=0.0005).
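
    A compressed sketch of such a pipeline, with per-frame HOG descriptors concatenated into one video feature and fed to an SVM, using synthetic frames in place of the volume-rendered fly-through images.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def video_feature(frames):
    """Concatenate per-frame HOG descriptors for one candidate's video."""
    return np.concatenate([
        hog(f, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for f in frames
    ])

# Synthetic stand-ins for fly-through videos: 4 frames of 64x64 each.
videos = [rng.random((4, 64, 64)) for _ in range(20)]
labels = np.tile([0, 1], 10)          # 1 = true polyp, 0 = false positive

X = np.array([video_feature(v) for v in videos])
clf = SVC(kernel="rbf", probability=True).fit(X, labels)
print(clf.predict_proba(X[:3])[:, 1])  # polyp likelihood for 3 candidates
```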

  16. The Computer in Library Education: One School's Approach.

    ERIC Educational Resources Information Center

    Drott, M. Carl

    The increasing presence and use of computers in libraries has brought about the more frequent introduction of computers and their uses into library education. The Drexel University Graduate School of Library Science has introduced the computer into the curriculum more through individual experimentation and innovation than by planned development.…

  17. An Educational Approach to Computationally Modeling Dynamical Systems

    ERIC Educational Resources Information Center

    Chodroff, Leah; O'Neal, Tim M.; Long, David A.; Hemkin, Sheryl

    2009-01-01

    Chemists have used computational science methodologies for a number of decades and their utility continues to be unabated. For this reason we developed an advanced lab in computational chemistry in which students gain understanding of general strengths and weaknesses of computation-based chemistry by working through a specific research problem.…

  18. Computational approaches to stochastic systems in physics and biology

    NASA Astrophysics Data System (ADS)

    Jeraldo Maldonado, Patricio Rodrigo

    In this dissertation, I devise computational approaches to model and understand two very different systems which exhibit stochastic behavior: quantum fluids with topological defects arising during quenches and forcing, and complex microbial communities living and evolving within the gastrointestinal tracts of vertebrates. As such, this dissertation is organized into two parts. In Part I, I create a model for quantum fluids which incorporates conservative and dissipative parts, and I also allow the fluid to be externally forced by a normal fluid. I then use this model to calculate scaling laws arising from the stochastic interactions of the topological defects exhibited by the modeled fluid while undergoing a quench. In Chapter 2 I give a detailed description of this model of quantum fluids. Unlike more traditional approaches, this model is based on Cell Dynamical Systems (CDS), an approach that captures relevant physical features of the system and allows for long time steps during its evolution. I devise a two-step CDS model, implementing both the conservative and dissipative dynamics present in quantum fluids. I also couple the model with an external normal-fluid field that drives the system. I then validate the results of the model by measuring different scaling laws predicted for quantum fluids. I also propose an extension of the model that incorporates the excitations of the fluid and couples their dynamics with the dynamics of the condensate. In Chapter 3 I use the above model to calculate scaling laws predicted for the velocity of topological defects undergoing a critical quench. To accomplish this, I numerically implement an algorithm that extracts from the order parameter field the velocity components of the defects as they move during the quench process. This algorithm is robust and extensible to any system where defects are located by the zeros of the order parameter. The algorithm is also applied to a sheared stripe-forming system, allowing the

  19. Computational Study on Subdural Cortical Stimulation - The Influence of the Head Geometry, Anisotropic Conductivity, and Electrode Configuration

    PubMed Central

    Kim, Donghyeon; Seo, Hyeon; Kim, Hyoung-Ihl; Jun, Sung Chan

    2014-01-01

    Subdural cortical stimulation (SuCS) is a method used to inject electrical current through electrodes beneath the dura mater, and is known to be useful in treating brain disorders. However, precisely how SuCS must be applied to yield the most effective results has rarely been investigated. For this purpose, we developed a three-dimensional computational model that represents an anatomically realistic brain model including an upper chest. With this computational model, we investigated the influence of stimulation amplitudes, electrode configurations (single or paddle-array), and white matter conductivities (isotropy or anisotropy). Further, the effects of stimulation were compared with two other computational models, including an anatomically realistic brain-only model and a simplified extruded-slab model representing the precentral gyrus area. The results of voltage stimulation suggested that there was a synergistic effect with the paddle-array due to the use of multiple electrodes; however, a single electrode was more efficient with current stimulation. The conventional model (simplified extruded slab) far overestimated the effects of stimulation with both voltage and current by comparison to our proposed realistic upper-body model. However, the realistic upper-body and full brain-only models demonstrated similar stimulation effects. In our investigation of the influence of anisotropic conductivity, the model with a fixed-ratio (1:10) anisotropic conductivity yielded deeper penetration depths and larger extents of stimulation than the others. However, the isotropic model and anisotropic models with fixed ratios (1:2, 1:5) yielded similar stimulation effects. Lastly, whether the reference electrode was located on the right or left chest had no substantial effect on stimulation. PMID:25229673

  20. Developing framework to constrain the geometry of the seismic rupture plane on subduction interfaces a priori - A probabilistic approach

    USGS Publications Warehouse

    Hayes, G.P.; Wald, D.J.

    2009-01-01

    A key step in many earthquake source inversions requires knowledge of the geometry of the fault surface on which the earthquake occurred. Our knowledge of this surface is often uncertain, however, and as a result fault geometry misinterpretation can map into significant error in the final temporal and spatial slip patterns of these inversions. Relying solely on an initial hypocentre and CMT mechanism can be problematic when establishing rupture characteristics needed for rapid tsunami and ground shaking estimates. Here, we attempt to improve the quality of fast finite-fault inversion results by combining several independent and complementary data sets to more accurately constrain the geometry of the seismic rupture plane of subducting slabs. Unlike previous analyses aimed at defining the general form of the plate interface, we require mechanisms and locations of the seismicity considered in our inversions to be consistent with their occurrence on the plate interface, by limiting events to those with well-constrained depths and with CMT solutions indicative of shallow-dip thrust faulting. We construct probability density functions about each location based on formal assumptions of their depth uncertainty and use these constraints to solve for the ‘most-likely’ fault plane. Examples are shown for the trench in the source region of the Mw 8.6 Southern Sumatra earthquake of March 2005, and for the Northern Chile Trench in the source region of the November 2007 Antofagasta earthquake. We also show examples using only the historic catalogues in regions without recent great earthquakes, such as the Japan and Kamchatka Trenches. In most cases, this method produces a fault plane that is more consistent with all of the data available than is the plane implied by the initial hypocentre and CMT mechanism. Using the aggregated data sets, we have developed an algorithm to rapidly determine more accurate initial fault plane geometries for source inversions of future
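
    As a simplified stand-in for the probabilistic estimation, the sketch below fits a plane to synthetic interface seismicity by weighted least squares, with 1/sigma^2 weights playing the role of the per-event depth probability density functions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical well-located interface events: positions (km), depths (km),
# and an assumed depth uncertainty per event (standard deviation, km).
x = rng.uniform(0, 100, 40)
y = rng.uniform(0, 100, 40)
z = 5 + 0.15 * x + 0.05 * y + rng.normal(0, 2, 40)
sigma_z = rng.uniform(1, 5, 40)

# Weighted least squares for the plane z = a*x + b*y + c; better-constrained
# depths (smaller sigma) pull the fit harder.
A = np.column_stack([x, y, np.ones_like(x)])
w = 1.0 / sigma_z**2
coef = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * z))
a, b, c = coef
dip = np.degrees(np.arctan(np.hypot(a, b)))
print(f"best-fit plane: dip ~ {dip:.1f} deg, depth at (0, 0) ~ {c:.1f} km")
```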

  1. 3D Reconstruction of Chick Embryo Vascular Geometries Using Non-invasive High-Frequency Ultrasound for Computational Fluid Dynamics Studies.

    PubMed

    Tan, Germaine Xin Yi; Jamil, Muhammad; Tee, Nicole Gui Zhen; Zhong, Liang; Yap, Choon Hwai

    2015-11-01

    Recent animal studies have provided evidence that prenatal blood flow fluid mechanics may play a role in the pathogenesis of congenital cardiovascular malformations. To further this research, it is important to have an imaging technique for small animal embryos with sufficient resolution to support computational fluid dynamics studies, one that is also non-invasive and non-destructive to allow for subject-specific, longitudinal studies. In the current study, we developed such a technique, based on ultrasound biomicroscopy scans of chick embryos. Our technique included a motion-cancelation algorithm to negate embryonic body motion, a temporal averaging algorithm to differentiate blood spaces from tissue spaces, and 3D reconstruction of blood volumes in the embryo. The accuracy of the reconstructed models was validated with direct stereoscopic measurements. A computational fluid dynamics simulation was performed to model fluid flow in the generated construct of a Hamburger-Hamilton (HH) stage 27 embryo. Simulation results showed that there were divergent streamlines and a low-shear region at the carotid duct, which may be linked to the carotid duct's eventual regression and disappearance by HH stage 34. We show that our technique has sufficient resolution to produce accurate geometries for computational fluid dynamics simulations to quantify embryonic cardiovascular fluid mechanics.

  2. Novel Approaches in Astrocyte Protection: from Experimental Methods to Computational Approaches.

    PubMed

    Garzón, Daniel; Cabezas, Ricardo; Vega, Nelson; Ávila-Rodriguez, Marcos; Gonzalez, Janneth; Gómez, Rosa Margarita; Echeverria, Valentina; Aliev, Gjumrakch; Barreto, George E

    2016-04-01

    Astrocytes are important for normal brain functioning. Astrocytes are metabolic regulators of the brain that exert many functions such as the preservation of blood-brain barrier (BBB) function, clearance of toxic substances, and generation of antioxidant molecules and growth factors. These functions are fundamental to sustaining the function and survival of neurons and other brain cells. For these reasons, the protection of astrocytes has become relevant for the prevention of neuronal death during brain pathologies such as Parkinson's disease, Alzheimer's disease, stroke, and other neurodegenerative conditions. Currently, different strategies are being used to protect the main astrocytic functions during neurological diseases, including the use of growth factors, steroid derivatives, mesenchymal stem cell paracrine factors, nicotine derivatives, and computational biology tools. Moreover, the combined use of experimental approaches with bioinformatics tools such as those of systems biology has allowed a broader understanding of astrocyte protection in both normal and pathological conditions. In the present review, we highlight some of these recent paradigms in assessing astrocyte protection using experimental and computational approaches and discuss how they could be used for the study of restorative therapies for the brain in pathological conditions.

  3. Computational approaches for analyzing the mechanics of atherosclerotic plaques: a review.

    PubMed

    Holzapfel, Gerhard A; Mulvihill, John J; Cunnane, Eoghan M; Walsh, Michael T

    2014-03-01

    Vulnerable and stable atherosclerotic plaques are heterogeneous living materials with peculiar mechanical behaviors depending on geometry, composition, loading and boundary conditions. Computational approaches have the potential to characterize the three-dimensional stress/strain distributions in patient-specific diseased arteries of different types and sclerotic morphologies and to estimate the risk of plaque rupture, which is the main trigger of acute cardiovascular events. This review article summarizes a number of finite element (FE) studies for different vessel types and how these studies were performed, focusing on the stress measure used, the inclusion of residual stress, the imaging modality used and the material model. In addition to histology, the most frequently used imaging modalities are described, and the most common nonlinear material models and the limited number of plaque rupture models used in such studies are presented in more detail. A critical discussion of the stress measures and threshold stress values for plaque rupture used within the FE studies emphasizes the need to develop a more location- and tissue-specific threshold value, and a more appropriate failure criterion. With this addition, future FE studies should also consider more advanced strain-energy functions which then fit better to location- and tissue-specific experimental data.

  4. Absorption and Emission Spectra of a Flexible Dye in Solution: a Computational Time-Dependent Approach

    PubMed Central

    Monti, Susanna; Prampolini, Giacomo; Barone, Vincenzo

    2015-01-01

    The spectroscopic properties of the organic chromophore 4-naphthoyloxy-1-methoxy-2,2,6,6-tetramethylpiperidine (NfO-TEMPO-Me) in toluene solution are explored through an integrated computational strategy combining a classical dynamic sampling with a quantum mechanical description within the framework of the time-dependent density functional theory (TDDFT) approach. The atomistic simulations are based on an accurately parametrized force field, specifically designed to represent the conformational behavior of the molecule in its ground and bright excited states, whereas TDDFT calculations are performed through a selected combination of hybrid functionals and basis sets to obtain optical spectra closely matching the experimental findings. Solvent effects, crucial to obtain good accuracy, are taken into account through explicit molecules and polarizable continuum descriptions. Although, in the case of toluene, specific solvation is not fundamental, the detailed conformational sampling in solution has confirmed the importance of a dynamic description of the molecular geometry for a reliable description of the photophysical properties of the dye. The agreement between theoretical and experimental data is established and a robust protocol for the prediction of the optical behaviour of flexible fluorophores in solution is set. PMID:26504457
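
    At its simplest, the time-dependent strategy amounts to broadening and averaging per-snapshot excitation energies; a sketch with invented TDDFT outputs.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical vertical excitation energies (eV) and oscillator strengths
# from TDDFT calculations on snapshots of a classical MD trajectory.
energies = rng.normal(3.2, 0.08, 100)
strengths = rng.uniform(0.4, 0.6, 100)

grid = np.linspace(2.6, 3.8, 500)
sigma = 0.05  # Gaussian broadening (eV), an illustrative choice

# Sum one Gaussian per snapshot: the conformational sampling appears as
# inhomogeneous broadening of the absorption band.
spectrum = sum(
    f * np.exp(-((grid - e) ** 2) / (2 * sigma**2))
    for e, f in zip(energies, strengths)
) / len(energies)
print(f"band maximum at ~{grid[np.argmax(spectrum)]:.2f} eV")
```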

  5. A computational intelligence approach to the Mars Precision Landing problem

    NASA Astrophysics Data System (ADS)

    Birge, Brian Kent, III

    Various proposed Mars missions, such as the Mars Sample Return Mission (MRSR) and the Mars Smart Lander (MSL), require precise re-entry terminal position and velocity states. This is to achieve mission objectives including rendezvous with a previously landed mission, or reaching a particular geographic landmark. The current state-of-the-art footprint is on the order of kilometers. For this research a Mars Precision Landing is achieved with a landed footprint of no more than 100 meters, for a set of initial entry conditions representing worst-guess dispersions. Obstacles to reducing the landed footprint include trajectory dispersions due to initial atmospheric entry conditions (entry angle, parachute deployment height, etc.), environment (wind, atmospheric density, etc.), parachute deployment dynamics, unavoidable injection error (propagated error from launch on), etc. Weather and atmospheric models have been developed. Three descent scenarios have been examined. First, terminal re-entry is achieved via a ballistic parachute with concurrent thrusting events while on the parachute, followed by a gravity turn. Second, terminal re-entry is achieved via a ballistic parachute followed by a gravity turn to hover and then thrust vectoring to the desired location. Third, a guided parafoil approach followed by vectored thrusting to reach terminal velocity is examined. The guided parafoil is determined to be the best architecture. The purpose of this study is to examine the feasibility of using a computational intelligence strategy to facilitate precision planetary re-entry, specifically to take an approach that is somewhat more intuitive and less rigid, and see where it leads. The test problems used for all research are variations on proposed Mars landing mission scenarios developed by NASA. A relatively recent method of evolutionary computation is Particle Swarm Optimization (PSO), which can be considered to be in the same general class as Genetic Algorithms. An improvement over
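
    A minimal global-best PSO of the kind referred to above, applied to a toy quadratic cost standing in for a landing-dispersion objective.

```python
import numpy as np

rng = np.random.default_rng(4)

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Plain global-best Particle Swarm Optimization."""
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.apply_along_axis(objective, 1, pos)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia plus attraction to personal and global bests.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        vals = np.apply_along_axis(objective, 1, pos)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy stand-in for a dispersion cost: squared distance to a target site.
best, cost = pso(lambda x: np.sum((x - np.array([1.0, -2.0])) ** 2), dim=2)
print(best, cost)
```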

  6. Mutations that Cause Human Disease: A Computational/Experimental Approach

    SciTech Connect

    Beernink, P; Barsky, D; Pesavento, B

    2006-01-11

    International genome sequencing projects have produced billions of nucleotides (letters) of DNA sequence data, including the complete genome sequences of 74 organisms. These genome sequences have created many new scientific opportunities, including the ability to identify sequence variations among individuals within a species. These genetic differences, which are known as single nucleotide polymorphisms (SNPs), are particularly important in understanding the genetic basis for disease susceptibility. Since the report of the complete human genome sequence, over two million human SNPs have been identified, including a large-scale comparison of an entire chromosome from twenty individuals. Of the protein-coding SNPs (cSNPs), approximately half lead to a single amino acid change in the encoded protein (non-synonymous coding SNPs). Most of these changes are functionally silent, while the remainder negatively impact the protein and sometimes cause human disease. To date, over 550 SNPs have been found to cause single-locus (monogenic) diseases and many others have been associated with polygenic diseases. SNPs have been linked to specific human diseases, including late-onset Parkinson disease, autism, rheumatoid arthritis and cancer. The ability to predict accurately the effects of these SNPs on protein function would represent a major advance toward understanding these diseases. To date, several attempts have been made toward predicting the effects of such mutations. The most successful of these is a computational approach called "Sorting Intolerant From Tolerant" (SIFT). This method uses sequence conservation among many similar proteins to predict which residues in a protein are functionally important. However, this method suffers from several limitations. First, a query sequence must have a sufficient number of relatives to infer sequence conservation. Second, this method does not make use of or provide any information on protein structure, which can be used to
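
    The conservation signal SIFT exploits can be illustrated with a per-column entropy score over a toy alignment; SIFT's actual scoring is considerably more elaborate.

```python
from collections import Counter
import math

# Toy multiple sequence alignment of homologous proteins (rows = sequences).
alignment = [
    "MKTAYIAK",
    "MKTAYVAK",
    "MKSAYIAR",
    "MKTAFIAK",
]

def column_conservation(msa, col):
    """Shannon-entropy-based conservation of one alignment column (1.0 =
    fully conserved). Positions that tolerate few amino acids are likely
    functionally important, so substitutions there are predicted harmful.
    """
    counts = Counter(seq[col] for seq in msa)
    total = sum(counts.values())
    entropy = -sum((n / total) * math.log2(n / total) for n in counts.values())
    return 1.0 - entropy / math.log2(20)   # normalize by max entropy (20 aa)

for col in range(len(alignment[0])):
    print(col, f"{column_conservation(alignment, col):.2f}")
```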

  7. A new approach to tag design in dolphin telemetry: Computer simulations to minimise deleterious effects

    NASA Astrophysics Data System (ADS)

    Pavlov, V. V.; Wilson, R. P.; Lucke, K.

    2007-02-01

    Remote sensors and transmitters are powerful devices for studying cetaceans at sea. However, despite substantial progress in microelectronics and the miniaturisation of systems, dolphin tags are imperfectly designed; additional drag from tags increases swim costs, compromises swimming capacity and manoeuvrability, and leads to extra loads on the animal's tissue. We propose a new approach to tag design, elaborating basic principles and incorporating design stages that minimise device effects by using computer-aided design. Initially, the operational conditions of the device are defined by quantifying the shape, hydrodynamics and range of natural deformation of the dolphin body at the tag attachment site (such as close to the dorsal fin). Then, parametric models of both the dorsal fin and a tag are created using the derived data. The link between the parameters of the fin and tag models allows redesign of tag models according to expected changes in fin geometry (differences in fin shape related to species, sex, and age; simulation of the bending of the fin during manoeuvres). A final virtual modelling stage uses iterative improvement of a tag model in a computational fluid dynamics (CFD) environment to enhance tag performance. This new method is considered a suitable tool for tag design prior to creating a physical model of a tag and testing it with conventional wind/water tunnel techniques. Ultimately, tag materials are selected to conform to the conditions identified by the modelling process and thus help create a physical model of a tag which should minimise its impact on the animal carrier and thus increase the reliability and quality of the data obtained.

  8. A computer code for multiphase all-speed transient flows in complex geometries. MAST version 1.0

    NASA Technical Reports Server (NTRS)

    Chen, C. P.; Jiang, Y.; Kim, Y. M.; Shang, H. M.

    1991-01-01

    The operation of the MAST code, which computes transient solutions to the multiphase flow equations applicable to all-speed flows, is described. Two-phase flows are formulated based on the Eulerian-Lagrangian scheme, in which the continuous phase is described by the Navier-Stokes equations (or Reynolds equations for turbulent flows). The dispersed phase is formulated by a Lagrangian tracking scheme. The numerical solution algorithm utilized for the fluid flow is a newly developed pressure-implicit algorithm based on the operator-splitting technique in generalized nonorthogonal coordinates. This operator split allows separate operation on each of the variable fields to handle pressure-velocity coupling. The resulting pressure correction equation has a hyperbolic nature and is effective for Mach numbers ranging from the incompressible limit to supersonic flow regimes. The present code adopts a nonstaggered grid arrangement; thus, the velocity components and other dependent variables are collocated at the same grid points. A sequence of benchmark-quality problems, including incompressible, subsonic, transonic, supersonic, and gas-droplet two-phase flows, as well as spray-combustion problems, was performed to demonstrate the robustness and accuracy of the present code.

  9. The Computer Connection: Four Approaches to Microcomputer Laboratory Interfacing.

    ERIC Educational Resources Information Center

    Graef, Jean L.

    1983-01-01

    Four ways in which microcomputers can be turned into laboratory instruments are discussed. These include adding an analog/digital (A/D) converter on a printed circuit board, adding an external A/D converter using the computer's serial port, attaching transducers to the game paddle ports, or connecting an instrument to the computer. (JN)

  10. Computers and Education: The Wrong Approach Continually Being Executed

    ERIC Educational Resources Information Center

    Walker, Jacob J.

    2005-01-01

    This opinion paper explores the use of computers in the U.S. Public Education System, concluding that the technology is underutilized and has not fulfilled most of the educational promises attributed to it. Drawing upon research and personal experience, the paper explores 8 possible reasons for the problem, including: the computer itself; not enough…

  11. Computers and the Humanities Courses: Philosophical Bases and Approach.

    ERIC Educational Resources Information Center

    Ide, Nancy M.

    1987-01-01

    Discusses a Vassar College workshop and the debate it generated over the depth and breadth of computer knowledge needed by humanities students. Describes two positions: the "Holistic View," which emphasizes the understanding of the formal methods of computer implementation; and the "Expert Users View," which sees the humanist as a "user" of…

  12. Analysis of Children's Computational Errors: A Qualitative Approach

    ERIC Educational Resources Information Center

    Engelhardt, J. M.

    1977-01-01

    This study was designed to replicate and extend Roberts' (1968) efforts at classifying computational errors. 198 elementary school students were administered an 84-item arithmetic computation test. Eight types of errors were described which led to several tentative generalizations. (Editor/RK)

  13. Teaching Computer Science: A Problem Solving Approach that Works.

    ERIC Educational Resources Information Center

    Allan, V. H.; Kolesar, M. V.

    The typical introductory programming course is not an appropriate first computer science course for many students. Initial experiences with programming are often frustrating, resulting in a low rate of successful completion, and focus on syntax rather than providing a representative picture of computer science as a discipline. The paper discusses…

  14. COMPUTATIONAL TOXICOLOGY - OBJECTIVE 2: DEVELOPING APPROACHES FOR PRIORITIZING CHEMICALS FOR SUBSEQUENT SCREENING AND TESTING

    EPA Science Inventory

    One of the strategic objectives of the Computational Toxicology Program is to develop approaches for prioritizing chemicals for subsequent screening and testing. Approaches currently available for this process require extensive resources. Therefore, less costly and time-extensi...

  15. A streamline splitting pore-network approach for computationally inexpensive and accurate simulation of transport in porous media

    SciTech Connect

    Mehmani, Yashar; Oostrom, Martinus; Balhoff, Matthew

    2014-03-20

    Several approaches have been developed in the literature for solving flow and transport at the pore-scale. Some authors use a direct modeling approach where the fundamental flow and transport equations are solved on the actual pore-space geometry. Such direct modeling, while very accurate, comes at a great computational cost. Network models are computationally more efficient because the pore-space morphology is approximated. Typically, a mixed cell method (MCM) is employed for solving the flow and transport system which assumes pore-level perfect mixing. This assumption is invalid at moderate to high Peclet regimes. In this work, a novel Eulerian perspective on modeling flow and transport at the pore-scale is developed. The new streamline splitting method (SSM) allows for circumventing the pore-level perfect mixing assumption, while maintaining the computational efficiency of pore-network models. SSM was verified with direct simulations and excellent matches were obtained against micromodel experiments across a wide range of pore-structure and fluid-flow parameters. The increase in the computational cost from MCM to SSM is shown to be minimal, while the accuracy of SSM is much higher than that of MCM and comparable to direct modeling approaches. Therefore, SSM can be regarded as an appropriate balance between incorporating detailed physics and controlling computational cost. The truly predictive capability of the model allows for the study of pore-level interactions of fluid flow and transport in different porous materials. In this paper, we apply SSM and MCM to study the effects of pore-level mixing on transverse dispersion in 3D disordered granular media.
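
    The pore-level perfect-mixing assumption of the MCM is easiest to see in a toy one-dimensional chain of pores, where each pore's outflow carries its fully mixed average concentration; this sketch illustrates the assumption SSM relaxes, not SSM itself.

```python
import numpy as np

# Mixed cell method (MCM) transport on a toy 1-D chain of pores: each pore
# is assumed perfectly mixed, so its outflow carries the pore-average
# concentration. All values are illustrative.
n_pores = 5
volume = np.full(n_pores, 1.0)       # pore volumes (arbitrary units)
q = 0.2                              # steady throat flow, left to right
c = np.zeros(n_pores)                # initial concentrations
c_inlet = 1.0
dt = 0.5                             # obeys CFL: q * dt / volume <= 1

for step in range(40):
    upstream = np.concatenate(([c_inlet], c[:-1]))
    c = c + dt * q / volume * (upstream - c)   # inflow minus outflow

# The front is smeared: perfect mixing at each pore adds spurious dispersion.
print(np.round(c, 3))
```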

  16. Constraining Viewing Geometries of Pulsars with Single-Peaked Gamma-ray Profiles Using a Multiwavelength Approach

    NASA Technical Reports Server (NTRS)

    Seyffert, A. S.; Venter, C.; Johnson, T. J.; Harding, A. K.

    2012-01-01

    Since the launch of the Large Area Telescope (LAT) on board the Fermi spacecraft in June 2008, the number of observed gamma-ray pulsars has increased dramatically. A large number of these are also observed at radio frequencies. Constraints on the viewing geometries of 5 of the 6 gamma-ray pulsars exhibiting single-peaked gamma-ray profiles were derived using high-quality radio polarization data [1]. We obtain independent constraints on the viewing geometries of all 6 by using a geometric emission code to model the Fermi LAT and radio light curves (LCs). We find fits for the magnetic inclination and observer angles by searching the solution space by eye. Our results are generally consistent with those previously obtained [1], although we do find small differences in some cases. We will indicate how the gamma-ray and radio pulse shapes, as well as their relative phase lags, lead to constraints in the solution space. Values for the flux correction factor (f_Omega) corresponding to the fits are also derived (with errors).

  17. What Computational Approaches Should be Taught for Physics?

    NASA Astrophysics Data System (ADS)

    Landau, Rubin

    2005-03-01

    The standard Computational Physics courses are designed for upper-level physics majors who already have some computational skills. We believe that it is important for first-year physics students to learn modern computing techniques that will be useful throughout their college careers, even before they have learned the math and science required for Computational Physics. Teaching such Introductory Scientific Computing courses requires that some choices be made as to what subjects and computer languages will be taught. Our survey of colleagues active in Computational Physics and Physics Education shows no predominant choice, with strong positions taken for the compiled languages Java, C, C++ and Fortran90, as well as for problem-solving environments like Maple and Mathematica. Over the last seven years we have developed an Introductory course and have written up those courses as textbooks for others to use. We will describe our model of using both a problem-solving environment and a compiled language. The developed materials are available in both Maple and Mathematica, and in Java and Fortran90 (Princeton University Press, to be published; www.physics.orst.edu/~rubin/IntroBook/).

  18. Methodical Approaches to Teaching of Computer Modeling in Computer Science Course

    ERIC Educational Resources Information Center

    Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina

    2015-01-01

    The purpose of this study was to justify a technique for forming a representation of modeling methodology in computer science lessons. The necessity of studying computer modeling lies in the fact that current trends toward strengthening the general educational and worldview functions of computer science define the necessity of additional research of the…

  19. Reflections on John Monaghan's "Computer Algebra, Instrumentation, and the Anthropological Approach"

    ERIC Educational Resources Information Center

    Blume, Glen

    2007-01-01

    Reactions to John Monaghan's "Computer Algebra, Instrumentation and the Anthropological Approach" focus on a variety of issues related to the ergonomic approach (instrumentation) and anthropological approach to mathematical activity and practice. These include uses of the term technique; several possibilities for integration of the two approaches;…

  20. Computational Approaches for Translational Clinical Research in Disease Progression

    PubMed Central

    McGuire, Mary F.; Iyengar, M. Sriram; Mercer, David W.

    2011-01-01

    Today, there is an ever-increasing amount of biological and clinical data available that could be used to enhance a systems-based understanding of disease progression through innovative computational analysis. In this paper we review a selection of published research regarding computational methodologies, primarily from systems biology, that support translational research from the molecular level to the bedside, with a focus on applications in trauma and critical care. Trauma is the leading cause of mortality in Americans under 45 years of age, and its rapid progression offers both opportunities and challenges for computational analysis of trends in molecular patterns associated with outcomes and therapeutic interventions. This review presents methods and domain-specific examples that may inspire the development of new algorithms and computational methods that utilize both molecular and clinical data for diagnosis, prognosis and therapy in disease progression. PMID:21712727

  1. An Object-Oriented Approach to Writing Computational Electromagnetics Codes

    NASA Technical Reports Server (NTRS)

    Zimmerman, Martin; Mallasch, Paul G.

    1996-01-01

    Presently, most computer software development in the Computational Electromagnetics (CEM) community employs the structured programming paradigm, particularly using the Fortran language. Other segments of the software community began switching to an Object-Oriented Programming (OOP) paradigm in recent years to help ease design and development of highly complex codes. This paper examines design of a time-domain numerical analysis CEM code using the OOP paradigm, comparing OOP code and structured programming code in terms of software maintenance, portability, flexibility, and speed.

  2. AVES: A Computer Cluster System approach for INTEGRAL Scientific Analysis

    NASA Astrophysics Data System (ADS)

    Federici, M.; Martino, B. L.; Natalucci, L.; Umbertini, P.

    The AVES computing system, based on a "Cluster" architecture, is a fully integrated, low-cost computing facility dedicated to the archiving and analysis of INTEGRAL data. AVES is a modular system that uses the software resource manager SLURM and allows almost unlimited expandability (65,536 nodes and hundreds of thousands of processors); it is currently composed of 30 personal computers with quad-core CPUs able to reach a computing power of 300 gigaflops (300x10^9 floating-point operations per second), with 120 GB of RAM and 7.5 terabytes (TB) of storage memory in UFS configuration plus 6 TB for the users area. AVES was designed and built to solve growing problems raised by the analysis of the large amount of data accumulated by the INTEGRAL mission (currently about 9 TB), which increases every year. The analysis software used is the OSA package, distributed by the ISDC in Geneva. This is a very complex package consisting of dozens of programs that cannot be converted to parallel computing. To overcome this limitation we developed a series of programs to distribute the analysis workload on the various nodes, making AVES automatically divide the analysis into N jobs sent to N cores. This solution thus produces a result similar to that obtained by a parallel computing configuration. In support of this we have developed tools that allow flexible use of the scientific software and quality control of on-line data storing. The AVES software package is constituted by about 50 specific programs. Thus the whole computing time, compared to that provided by a personal computer with a single processor, has been enhanced by up to a factor of 70.
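
    The workload-distribution layer can be sketched with Python's standard multiprocessing pool; analyze() below is a hypothetical stand-in for one serial OSA job.

```python
from multiprocessing import Pool
import os

def analyze(science_window):
    """Hypothetical stand-in for one serial OSA analysis job."""
    return science_window, os.getpid()

if __name__ == "__main__":
    science_windows = [f"scw_{i:04d}" for i in range(16)]
    # Divide the serial analysis into N independent jobs sent to N cores,
    # mirroring the workload-distribution layer described above.
    with Pool(processes=4) as pool:
        for scw, pid in pool.map(analyze, science_windows):
            print(scw, "processed by worker", pid)
```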

  3. Computational challenges of structure-based approaches applied to HIV.

    PubMed

    Forli, Stefano; Olson, Arthur J

    2015-01-01

    Here, we review some of the opportunities and challenges that we face in computational modeling of HIV therapeutic targets and structural biology, both in terms of methodology development and structure-based drug design (SBDD). Computational methods have provided fundamental support to HIV research since the initial structural studies, helping to unravel details of HIV biology. Computational models have proved to be a powerful tool to analyze and understand the impact of mutations and to overcome their structural and functional influence in drug resistance. With the availability of structural data, in silico experiments have been instrumental in exploiting and improving interactions between drugs and viral targets, such as HIV protease, reverse transcriptase, and integrase. Issues such as viral target dynamics and mutational variability, as well as the role of water and estimates of binding free energy in characterizing ligand interactions, are areas of active computational research. Ever-increasing computational resources and theoretical and algorithmic advances have played a significant role in progress to date, and we envision a continually expanding role for computational methods in our understanding of HIV biology and SBDD in the future.

  4. Molecular Geometry.

    ERIC Educational Resources Information Center

    Desseyn, H. O.; And Others

    1985-01-01

    Compares linear-nonlinear and planar-nonplanar geometry through the valence-shell electron pairs repulsion (V.S.E.P.R.), Mulliken-Walsh, and electrostatic force theories. Indicates that although the V.S.E.P.R. theory has more advantages for elementary courses, an explanation of the best features of the different theories offers students a better…

  5. THE FUTURE OF COMPUTER-BASED TOXICITY PREDICTION: MECHANISM-BASED MODELS VS. INFORMATION MINING APPROACHES

    EPA Science Inventory


    When we speak of computer-based toxicity prediction, we are generally referring to a broad array of approaches which rely primarily upon chemical structure ...

  6. Common Geometry Module

    2005-01-01

    The Common Geometry Module (CGM) is a code library which provides geometry functionality used for mesh generation and other applications. This functionality includes that commonly found in solid modeling engines, like geometry creation, query and modification; CGM also includes capabilities not commonly found in solid modeling engines, like geometry decomposition tools and support for shared material interfaces. CGM is built upon the ACIS solid modeling engine, but also includes geometry capability developed beside and on top of ACIS. CGM can be used as-is to provide geometry functionality for codes needing this capability. However, CGM can also be extended using derived classes in C++, allowing the geometric model to serve as the basis for other applications, for example mesh generation. CGM is supported on Sun Solaris, SGI, HP, IBM, DEC, Linux and Windows NT platforms. CGM also includes support for loading ACIS models on parallel computers, using MPI-based communication. Future plans for CGM are to port it to different solid modeling engines, including Pro/Engineer or SolidWorks. CGM is being released into the public domain under an LGPL license; the ACIS-based engine is available to ACIS licensees on request.

  7. Comparison of theoretical approaches for computing the bond length alternation of polymethineimine

    NASA Astrophysics Data System (ADS)

    Jacquemin, Denis; Perpète, Eric A.; Chermette, Henry; Ciofini, Ilaria; Adamo, Carlo

    2007-01-01

    Using electron-correlated wavefunction approaches and several pure and hybrid density functionals combined with three atomic basis sets, we have optimized the ground-state geometry of increasingly long polymethineimine oligomers presenting all-trans and gliding-plane symmetries. It turns out that MP2 bond length alternations (BLA) are in good agreement with higher-order electron-correlated wavefunction approaches, whereas, for both conformers, large qualitative and quantitative discrepancies between MP2 and DFT geometries have been found. Indeed, all the selected GGA, meta-GGA and hybrid functionals tend to overestimate bond length equalization in extended polymethineimine structures. On the other hand, self-interaction corrections included in the ADSIC framework provide, in this particular case, a more efficient approach to predict the BLA for medium-size oligomers.

  8. A Computational Approach to Qualitative Analysis in Large Textual Datasets

    PubMed Central

    Evans, Michael S.

    2014-01-01

    In this paper I introduce computational techniques to extend qualitative analysis into the study of large textual datasets. I demonstrate these techniques by using probabilistic topic modeling to analyze a broad sample of 14,952 documents published in major American newspapers from 1980 through 2012. I show how computational data mining techniques can identify and evaluate the significance of qualitatively distinct subjects of discussion across a wide range of public discourse. I also show how examining large textual datasets with computational methods can overcome methodological limitations of conventional qualitative methods, such as how to measure the impact of particular cases on broader discourse, how to validate substantive inferences from small samples of textual data, and how to determine if identified cases are part of a consistent temporal pattern. PMID:24498398
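
    A compact sketch of probabilistic topic modeling with scikit-learn's LDA on a toy corpus; the study's corpus was of course far larger.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Tiny stand-in corpus; the study analyzed 14,952 newspaper documents.
docs = [
    "genome research funding university laboratory",
    "election campaign votes candidate policy",
    "dna sequencing genome cells biology",
    "senate policy debate election law",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Top terms per topic identify qualitatively distinct subjects of discussion.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-3:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```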

  9. A computational approach to qualitative analysis in large textual datasets.

    PubMed

    Evans, Michael S

    2014-01-01

    In this paper I introduce computational techniques to extend qualitative analysis into the study of large textual datasets. I demonstrate these techniques by using probabilistic topic modeling to analyze a broad sample of 14,952 documents published in major American newspapers from 1980 through 2012. I show how computational data mining techniques can identify and evaluate the significance of qualitatively distinct subjects of discussion across a wide range of public discourse. I also show how examining large textual datasets with computational methods can overcome methodological limitations of conventional qualitative methods, such as how to measure the impact of particular cases on broader discourse, how to validate substantive inferences from small samples of textual data, and how to determine if identified cases are part of a consistent temporal pattern.

  10. Geometry of trigonal boron coordination sphere in boronic acids derivatives - a bond-valence vector model approach.

    PubMed

    Czerwińska, Karolina; Madura, Izabela D; Zachara, Janusz

    2016-04-01

    The systematic analysis of the geometry of three-coordinate boron in boronic acid derivatives with a common [CBO2] skeleton is presented. The study is based on the bond-valence vector (BVV) model [Zachara (2007). Inorg. Chem. 46, 9760-9767], a simple tool for the identification and quantitative estimation of both steric and electronic factors causing deformations of the coordination sphere. The empirical bond-valence (BV) parameters r_ij and b in the exponential equation [Brown & Altermatt (1985). Acta Cryst. B41, 244-247] were determined for B-O and B-C bonds using data deposited in the Cambridge Structural Database. The values obtained amount to r_BO = 1.364 Å, b_BO = 0.37 Å, r_BC = 1.569 Å, b_BC = 0.28 Å, and they were further used in the calculation of BVV lengths. The length of the resultant BVV was less than 0.10 v.u. for 95% of the set comprising 897 [CBO2] fragments. Analysis of the distribution of BVV components allowed for the description of subtle in-plane and out-of-plane deviations from the 'ideal' (sp2) geometry of the boron coordination sphere. Distortions specific to distinct groups of compounds, such as boronic acids, cyclic and acyclic esters, benzoxaboroles and hemiesters, were revealed. In cyclic esters the direction of strain was found to be controlled by the ring-size effect. It was shown that the syn or anti location of substituents on the O atoms is decisive for the direction of deformation in both acids and acyclic esters. The greatest strains were observed in the case of the benzoxaboroles, which showed the highest deviation from zero of the resultant BVV. The out-of-plane distortions, described by the v_z component of the resultant BVV, were found to be useful in identifying weak secondary interactions at the fourth coordination site of the boron centre. PMID:27048726

  11. A user's guide for BREAKUP: A computer code for parallelizing the overset grid approach

    SciTech Connect

    Barnette, D.W.

    1998-04-01

    In this user's guide, details for running BREAKUP are discussed. BREAKUP allows the widely used overset grid method to be run in a parallel computer environment to achieve faster run times for computational field simulations over complex geometries. The overset grid method permits complex geometries to be divided into separate components. Each component is then gridded independently. The grids are computationally rejoined in a solver via interpolation coefficients used for grid-to-grid communications of boundary data. Overset grids have been in widespread use for many years on serial computers, and several well-known Navier-Stokes flow solvers have been extensively developed and validated to support their use. One drawback of serial overset grid methods has been the extensive compute time required to update flow solutions one grid at a time. Parallelizing the overset grid method overcomes this limitation by updating each grid or subgrid simultaneously. BREAKUP prepares overset grids for parallel processing by subdividing each overset grid into statically load-balanced subgrids. Two-dimensional examples with sample solutions, and three-dimensional examples, are presented.

  12. Target Detection Using Fractal Geometry

    NASA Technical Reports Server (NTRS)

    Fuller, J. Joseph

    1991-01-01

    The concepts and theory of fractal geometry were applied to the problem of segmenting a 256 x 256 pixel image so that manmade objects could be extracted from natural backgrounds. The two most important measurements necessary to extract these manmade objects were fractal dimension and lacunarity. Provision was made to pass the manmade portion to a lookup table for subsequent identification. A computer program was written to construct cloud backgrounds of fractal dimensions which were allowed to vary between 2.2 and 2.8. Images of three model space targets were combined with these backgrounds to provide a data set for testing the validity of the approach. Once the data set was constructed, computer programs were written to extract estimates of the fractal dimension and lacunarity on 4 x 4 pixel subsets of the image. It was shown that for clouds of fractal dimension 2.7 or less, appropriate thresholding on fractal dimension and lacunarity yielded a 64 x 64 edge-detected image with all or most of the cloud background removed. These images were enhanced by an erosion and dilation to provide the final image passed to the lookup table. While the ultimate goal was to pass the final image to a neural network for identification, this work shows the applicability of fractal geometry to the problems of image segmentation, edge detection and separating a target of interest from a natural background.
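
    The record does not spell out its estimators, and the cloud surfaces it describes have dimensions between 2 and 3 while a binary mask caps out at 2, but the standard box-counting and gliding-box computations behind this kind of segmentation are easy to sketch. A minimal illustration, assuming NumPy; the patch size, box sizes and random test mask are purely illustrative:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a binary mask by box counting:
    count occupied s-by-s boxes N(s), then fit log N(s) ~ -D log s."""
    counts = []
    for s in sizes:
        # Trim so the image tiles exactly into s-by-s boxes.
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        boxes = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

def lacunarity(mask, box=4):
    """Gliding-box lacunarity: ratio E[m^2] / E[m]^2 of box masses m."""
    h, w = mask.shape
    masses = [mask[i:i + box, j:j + box].sum()
              for i in range(h - box + 1) for j in range(w - box + 1)]
    m = np.asarray(masses, dtype=float)
    return 1.0 + m.var() / m.mean() ** 2

# Illustrative use on a random binary "cloud" patch.
rng = np.random.default_rng(0)
patch = rng.random((64, 64)) > 0.7
print(box_counting_dimension(patch), lacunarity(patch))
```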

  13. Artificial Intelligence Approaches to Computer-Based Instruction.

    ERIC Educational Resources Information Center

    Bregar, William S.; Farley, Arthur M.

    1980-01-01

    Explores how new, operational models of cognition processing developed in Artificial Intelligence (AI) can be applied in computer assisted instruction (CAI) systems. CAI systems are surveyed in terms of their goals and formalisms, and a model for the development of a tutorial CAI system for algebra problem solving is introduced. (Author)

  14. A "Service-Learning Approach" to Teaching Computer Graphics

    ERIC Educational Resources Information Center

    Hutzel, Karen

    2007-01-01

    The author taught a computer graphics course through a service-learning framework to undergraduate and graduate students in the spring of 2003 at Florida State University (FSU). The students in this course participated in learning a software program along with youths from a neighboring, low-income, primarily African-American community. Together,…

  15. Preparing Students for Computer Aided Drafting (CAD). A Conceptual Approach.

    ERIC Educational Resources Information Center

    Putnam, A. R.; Duelm, Brian

    This presentation outlines guidelines for developing and implementing an introductory course in computer-aided drafting (CAD) that is geared toward secondary-level students. The first section of the paper, which deals with content identification and selection, includes lists of mechanical drawing and CAD competencies and a list of rationales for…

  16. A Computer-Assisted Instructional Approach to Teaching Applied Therapeutics.

    ERIC Educational Resources Information Center

    Jim, Lucia K.; And Others

    1984-01-01

    The effectiveness of computer-assisted instruction to conduct pharmacy therapeutics case analysis exercises was compared with the traditional conference format. Pre- and posttest scores were compared statistically within and between groups to determine knowledge gained and comparative effectiveness of teaching. (Author/MLW)

  17. Lattice algebra approach to single-neuron computation.

    PubMed

    Ritter, G X; Urcid, G

    2003-01-01

    Recent advances in the biophysics of computation and neurocomputing models have brought to the foreground the importance of dendritic structures in a single neuron cell. Dendritic structures are now viewed as the primary autonomous computational units capable of realizing logical operations. By changing the classic simplified model of a single neuron with a more realistic one that incorporates the dendritic processes, a novel paradigm in artificial neural networks is being established. In this work, we introduce and develop a mathematical model of dendrite computation in a morphological neuron based on lattice algebra. The computational capabilities of this enriched neuron model are demonstrated by means of several illustrative examples and by proving that any single layer morphological perceptron endowed with dendrites and their corresponding input and output synaptic processes is able to approximate any compact region in higher dimensional Euclidean space to within any desired degree of accuracy. Based on this result, we describe a training algorithm for single layer morphological perceptrons and apply it to some well-known nonlinear problems in order to exhibit its performance.
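
    The abstract does not reproduce the model equations, but in the lattice-algebra setting the usual multiply-accumulate is replaced by addition combined with lattice max/min. The following is a minimal sketch of a single morphological neuron whose dendrites each recognize an axis-aligned box, in the spirit of (and simplified from) the Ritter-Urcid model; all weights are illustrative:

```python
import numpy as np

def dendrite_response(x, w_exc, w_inh):
    """One dendrite (simplified): it responds positively only when every
    input x_i lies inside the box [-w_exc_i, -w_inh_i]; a lattice min
    replaces the weighted sum of a classical neuron."""
    return np.minimum(x + w_exc, -(x + w_inh)).min()

def morphological_neuron(x, dendrites):
    """Neuron output: hard limit of the max over its dendrite responses."""
    tau = max(dendrite_response(x, we, wi) for we, wi in dendrites)
    return 1 if tau >= 0 else 0

# A single dendrite recognising the box [0, 1] x [0, 1]:
# x_i + w_exc_i >= 0 <=> x_i >= 0, and -(x_i + w_inh_i) >= 0 <=> x_i <= 1.
dendrites = [(np.array([0.0, 0.0]), np.array([-1.0, -1.0]))]
print(morphological_neuron(np.array([0.5, 0.5]), dendrites))  # 1 (inside)
print(morphological_neuron(np.array([1.5, 0.5]), dendrites))  # 0 (outside)
```

    Unions of such boxes, taken via the max over several dendrites, are what give the model its ability to approximate any compact region to arbitrary accuracy.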

  18. Linguistics, Computers, and the Language Teacher. A Communicative Approach.

    ERIC Educational Resources Information Center

    Underwood, John H.

    This analysis of the state of the art of computer programs and programming for language teaching has two parts. In the first part, an overview of the theory and practice of language teaching, Noam Chomsky's view of language, and the implications and problems of generative theory are presented. The theory behind the input model of language…

  19. Multiple von Neumann computers: an evolutionary approach to functional emergence.

    PubMed

    Suzuki, H

    1997-01-01

    A novel system composed of multiple von Neumann computers and an appropriate problem environment is proposed and simulated. Each computer has a memory to store the machine instruction program, and when a program is executed, a series of machine codes in the memory is sequentially decoded, leading to register operations in the central processing unit (CPU). By means of these operations, the computer not only can handle its generally used registers but also can read and write the environmental database. Simulation is driven by genetic algorithms (GAs) performed on the population of program memories. Mutation and crossover create program diversity in the memory, and selection facilitates the reproduction of appropriate programs. Through these evolutionary operations, advantageous combinations of machine codes are created and fixed in the population one by one, and the higher function, which enables the computer to calculate an appropriate number from the environment, finally emerges in the program memory. In the latter half of the article, the performance of GAs on this system is studied. Under different sets of parameters, the evolutionary speed, which is determined by the time until the domination of the final program, is examined and the conditions for faster evolution are clarified. At an intermediate mutation rate and at an intermediate population size, crossover helps create novel advantageous sets of machine codes and evidently accelerates optimization by GAs.
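
    The article's instruction set and environmental database are not specified in the abstract; the toy sketch below only illustrates the mutation/crossover/selection loop over fixed-length "program memories", here opcode strings for a hypothetical four-instruction stack machine scored by how closely their output matches a target value:

```python
import random

OPS = ("PUSH1", "ADD", "MUL", "DUP")  # hypothetical machine codes
TARGET = 42                           # stand-in for the environment

def run(program):
    """Interpret a program on a tiny stack machine; return final value."""
    stack = [1]
    for op in program:
        if op == "PUSH1":
            stack.append(1)
        elif op == "DUP":
            stack.append(stack[-1])
        elif len(stack) >= 2:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if op == "ADD" else a * b)
    return stack[-1]

def fitness(program):
    return -abs(run(program) - TARGET)

def evolve(pop_size=60, length=12, gens=200, mut=0.05):
    pop = [[random.choice(OPS) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # selection
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = random.sample(elite, 2)
            cut = random.randrange(length)     # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [random.choice(OPS) if random.random() < mut else op
                     for op in child]          # point mutation
            children.append(child)
        pop = elite + children
    best = max(pop, key=fitness)
    return best, run(best)

print(evolve())
```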

  20. A New Approach: Computer-Assisted Problem-Solving Systems

    ERIC Educational Resources Information Center

    Gok, Tolga

    2010-01-01

    Computer-assisted problem solving systems are rapidly growing in educational use with the advent of the Internet. These systems allow students to do their homework and solve problems online with the help of programs such as Blackboard, WebAssign and LON-CAPA. There are benefits and drawbacks of these systems. In this study, the…

  1. Traditional versus Computer-Mediated Approaches of Teaching Educational Measurement

    ERIC Educational Resources Information Center

    Alkharusi, Hussain; Kazem, Ali; Al-Musawai, Ali

    2010-01-01

    Research suggests that to adequately prepare teachers for the task of classroom assessment, attention should be given to the educational measurement instruction. In addition, the literature indicates that the use of computer-mediated instruction has the potential to affect student knowledge, skills, and attitudes. This study compared the effects…

  2. An Interdisciplinary, Computer-Centered Approach to Active Learning.

    ERIC Educational Resources Information Center

    Misale, Judi M.; And Others

    1996-01-01

    Describes a computer-assisted, interdisciplinary course in decision making developed to promote student participation and critical thinking. Students participate in 20 interactive exercises that utilize and illustrate psychological and economic concepts. Follow-up activities include receiving background information, group discussions, text…

  3. Computational Modelling and Simulation Fostering New Approaches in Learning Probability

    ERIC Educational Resources Information Center

    Kuhn, Markus; Hoppe, Ulrich; Lingnau, Andreas; Wichmann, Astrid

    2006-01-01

    Discovery learning in mathematics in the domain of probability based on hands-on experiments is normally limited because of the difficulty in providing sufficient materials and data volume in terms of repetitions of the experiments. Our cooperative, computational modelling and simulation environment engages students and teachers in composing and…

  4. Computer-Assisted Mathematics--A Model Approach.

    ERIC Educational Resources Information Center

    Bitter, Gary G.

    1987-01-01

    Discussion of need for improved mathematics education of preservice teachers focuses on a model program, the Mathematics Fitness Project, that includes a computer-generated testing system, management system, and remediation system. Use of the system to improve mathematics skills and attitudes of college students and post high school adults is…

  5. A Functional Analytic Approach to Computer-Interactive Mathematics

    ERIC Educational Resources Information Center

    Ninness, Chris; Rumph, Robin; McCuller, Glen; Harrison, Carol; Ford, Angela M.; Ninness, Sharon K.

    2005-01-01

    Following a pretest, 11 participants who were naive with regard to various algebraic and trigonometric transformations received an introductory lecture regarding the fundamentals of the rectangular coordinate system. Following the lecture, they took part in a computer-interactive matching-to-sample procedure in which they received training on…

  6. Countering Deterministic Tools: A Critical Theory Approach to Computers & Composition.

    ERIC Educational Resources Information Center

    Kimme Hea, Amy C.

    A writing instructor has grappled with how both to integrate and to complicate critical perspectives on technology in the writing classroom. In collaboration with another instructor, a computer classroom pedagogy was constructed emphasizing imperatives of cultural studies practice as outlined by James Berlin. The pedagogy is similar to Berlin's…

  7. An algebraic approach to computer program design and memory management

    NASA Astrophysics Data System (ADS)

    Raynolds, James; Mullin, Lenore

    2008-03-01

    Beginning with an algebra of multi-dimensional arrays and following a set of reduction rules embodying a calculus of array indices, we translate (in a mechanizable way) from the high-level mathematics of any array-based problem and a machine specification to a mathematically-optimized implementation. Raynolds and Mullin introduced the name Conformal Computing to describe this process, which will be discussed in the context of data transforms such as the Fast Fourier and Wavelet Transforms and QR decomposition. We discuss the discovery that the access patterns of the Wavelet Transform form a sufficiently regular subset of those for our cache-optimized FFT, so that we can be assured of achieving efficiency improvements for the Wavelet Transform similar to those that were found for the FFT. We present recent results in which careful attention to reproducible computational experiments in a dedicated/non-shared environment is demonstrated to be essential in order to optimally measure the response of the system (in this case the computer itself is the object of study), so as to be able to optimally tune the algorithm to the numerous cost functions associated with all of the elements of the memory/disk/network hierarchy. (The name Conformal Computing is protected: 2003, The Research Foundation, State University of New York.)

  8. Several Approaches to a Basic Problem in Computer Graphics.

    ERIC Educational Resources Information Center

    Smart, James R.

    1988-01-01

    Considers the problem of how to go from three coordinates to two. States that it is not a problem that mathematics students ordinarily encounter. Presents four methods of graphing three-dimensional figures on a two-dimensional computer screen. (PK)
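
    The four methods themselves are not listed in the abstract, but two of the most common 3D-to-2D mappings, an orthographic view after azimuth/elevation rotation and a single-eye perspective divide, can be sketched as follows (the angles and eye distance are illustrative):

```python
import numpy as np

def orthographic(points, elev=np.radians(30), azim=np.radians(45)):
    """Rotate the scene by azimuth and elevation, then drop the depth axis."""
    ca, sa = np.cos(azim), np.sin(azim)
    ce, se = np.cos(elev), np.sin(elev)
    Rz = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])   # about z
    Rx = np.array([[1, 0, 0], [0, ce, -se], [0, se, ce]])   # about x
    rotated = points @ (Rx @ Rz).T
    return rotated[:, :2]          # keep (x, y), discard depth

def perspective(points, d=5.0):
    """Project onto the plane z = 0 from an eye at (0, 0, d):
    by similar triangles, (x, y, z) -> (x, y) * d / (d - z)."""
    scale = d / (d - points[:, 2])
    return points[:, :2] * scale[:, None]

cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                dtype=float)
print(orthographic(cube))
print(perspective(cube))
```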

  9. The Cognitive Approach to Computer Courseware Design and Evaluation.

    ERIC Educational Resources Information Center

    Jay, Timothy B.

    1983-01-01

    Focuses on five human information processing abilities which cognitive psychologists anticipate must be accounted for in order to develop good computer courseware--memory and attention; language or text characteristics; graphics and visual processing; cognitive characteristics of user; feedback to users. A 31-item bibliography is included. (EJS)

  10. A flexible, extendable, modular and computationally efficient approach to scattering-integral-based seismic full waveform inversion

    NASA Astrophysics Data System (ADS)

    Schumacher, F.; Friederich, W.; Lamara, S.

    2016-02-01

    We present a new conceptual approach to scattering-integral-based seismic full waveform inversion (FWI) that allows a flexible, extendable, modular and both computationally and storage-efficient numerical implementation. To achieve maximum modularity and extendability, interactions between the three fundamental steps carried out sequentially in each iteration of the inversion procedure, namely, solving the forward problem, computing waveform sensitivity kernels and deriving a model update, are kept at an absolute minimum and are implemented by dedicated interfaces. To realize storage efficiency and maximum flexibility, the spatial discretization of the inverted earth model is allowed to be completely independent of the spatial discretization employed by the forward solver. For computational efficiency reasons, the inversion is done in the frequency domain. The benefits of our approach are as follows: (1) Each of the three stages of an iteration is realized by a stand-alone software program. In this way, we avoid the monolithic, inflexible and hard-to-modify codes that have often been written for solving inverse problems. (2) The solution of the forward problem, required for kernel computation, can be obtained by any wave propagation modelling code giving users maximum flexibility in choosing the forward modelling method. Both time-domain and frequency-domain approaches can be used. (3) Forward solvers typically demand spatial discretizations that are significantly denser than actually desired for the inverted model. Exploiting this fact by pre-integrating the kernels allows a dramatic reduction of disk space and makes kernel storage feasible. No assumptions are made on the spatial discretization scheme employed by the forward solver. (4) In addition, working in the frequency domain effectively reduces the amount of data, the number of kernels to be computed and the number of equations to be solved. (5) Updating the model by solving a large equation system can be
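
    The paper's actual interfaces are not reproduced in the abstract; the hypothetical sketch below only illustrates the design idea of keeping the three stages behind minimal, dedicated interfaces so that each can be swapped independently (all names are assumptions):

```python
from typing import Protocol
import numpy as np

class ForwardSolver(Protocol):
    def solve(self, model: np.ndarray, freqs: list[float]) -> np.ndarray:
        """Return synthetic spectra for the current model; any wave
        propagation code could sit behind this interface."""

class KernelComputer(Protocol):
    def kernels(self, wavefields: np.ndarray) -> np.ndarray:
        """Return sensitivity kernels, pre-integrated onto the coarser
        inversion grid to reduce storage."""

class ModelUpdater(Protocol):
    def update(self, model: np.ndarray, kernels: np.ndarray,
               residuals: np.ndarray) -> np.ndarray:
        """Solve the linearized system for a model update."""

def fwi_iteration(model, data, solver: ForwardSolver,
                  kc: KernelComputer, upd: ModelUpdater, freqs):
    synthetics = solver.solve(model, freqs)    # stage 1: forward problem
    residuals = synthetics - data
    K = kc.kernels(synthetics)                 # stage 2: sensitivity kernels
    return upd.update(model, K, residuals)     # stage 3: model update
```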

  11. Stereo Orthogonal Axonometric Perspective for the Teaching of Descriptive Geometry

    ERIC Educational Resources Information Center

    Méxas, José Geraldo Franco; Guedes, Karla Bastos; Tavares, Ronaldo da Silva

    2015-01-01

    Purpose: The purpose of this paper is to present the development of a software for stereo visualization of geometric solids, applied to the teaching/learning of Descriptive Geometry. Design/methodology/approach: The paper presents the traditional method commonly used in computer graphic stereoscopic vision (implemented in C language) and the…

  12. Unit cell geometry of 3-D braided structures

    NASA Technical Reports Server (NTRS)

    Du, Guang-Wu; Ko, Frank K.

    1993-01-01

    The traditional approach used in modeling of composites reinforced by three-dimensional (3-D) braids is to assume a simple unit cell geometry of a 3-D braided structure with known fiber volume fraction and orientation. In this article, we first examine 3-D braiding methods in the light of braid structures, followed by the development of geometric models for 3-D braids using a unit cell approach. The unit cell geometry of 3-D braids is identified and the relationship of structural parameters such as yarn orientation angle and fiber volume fraction with the key processing parameters established. The limiting geometry has been computed by establishing the point at which yarns jam against each other. Using this factor makes it possible to identify the complete range of allowable geometric arrangements for 3-D braided preforms. This identified unit cell geometry can be translated to mechanical models which relate the geometrical properties of fabric preforms to the mechanical responses of composite systems.

  13. Residue Geometry Networks: A Rigidity-Based Approach to the Amino Acid Network and Evolutionary Rate Analysis

    PubMed Central

    Fokas, Alexander S.; Cole, Daniel J.; Ahnert, Sebastian E.; Chin, Alex W.

    2016-01-01

    Amino acid networks (AANs) abstract the protein structure by recording the amino acid contacts and can provide insight into protein function. Herein, we describe a novel AAN construction technique that employs the rigidity analysis tool, FIRST, to build the AAN, which we refer to as the residue geometry network (RGN). We show that this new construction can be combined with network theory methods to include the effects of allowed conformal motions and local chemical environments. Importantly, this is done without costly molecular dynamics simulations required by other AAN-related methods, which allows us to analyse large proteins and/or data sets. We have calculated the centrality of the residues belonging to 795 proteins. The results display a strong, negative correlation between residue centrality and the evolutionary rate. Furthermore, among residues with high closeness, those with low degree were particularly strongly conserved. Random walk simulations using the RGN were also successful in identifying allosteric residues in proteins involved in GPCR signalling. The dynamic function of these residues largely remains hidden in the traditional distance-cutoff construction technique. Despite being constructed from only the crystal structure, the results in this paper suggest that the RGN can identify residues that fulfil a dynamical function. PMID:27623708
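
    Building the RGN itself requires the FIRST rigidity software, but once any residue contact network is in hand, the centrality screening described above reduces to standard graph measures. A toy sketch assuming networkx, with a hypothetical edge list and illustrative thresholds:

```python
import networkx as nx

# Toy residue network; in the paper the edges come from FIRST rigidity
# analysis of the crystal structure rather than a distance cutoff.
edges = [(1, 2), (2, 3), (3, 4), (2, 5), (5, 6), (3, 6), (6, 7)]
G = nx.Graph(edges)

closeness = nx.closeness_centrality(G)
degree = dict(G.degree())

# The paper reports that residues with high closeness but low degree are
# particularly strongly conserved; flag such residues here.
candidates = [n for n in G if closeness[n] > 0.5 and degree[n] <= 2]
print(sorted(candidates))
```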

  14. Residue Geometry Networks: A Rigidity-Based Approach to the Amino Acid Network and Evolutionary Rate Analysis.

    PubMed

    Fokas, Alexander S; Cole, Daniel J; Ahnert, Sebastian E; Chin, Alex W

    2016-01-01

    Amino acid networks (AANs) abstract the protein structure by recording the amino acid contacts and can provide insight into protein function. Herein, we describe a novel AAN construction technique that employs the rigidity analysis tool, FIRST, to build the AAN, which we refer to as the residue geometry network (RGN). We show that this new construction can be combined with network theory methods to include the effects of allowed conformal motions and local chemical environments. Importantly, this is done without costly molecular dynamics simulations required by other AAN-related methods, which allows us to analyse large proteins and/or data sets. We have calculated the centrality of the residues belonging to 795 proteins. The results display a strong, negative correlation between residue centrality and the evolutionary rate. Furthermore, among residues with high closeness, those with low degree were particularly strongly conserved. Random walk simulations using the RGN were also successful in identifying allosteric residues in proteins involved in GPCR signalling. The dynamic function of these residues largely remains hidden in the traditional distance-cutoff construction technique. Despite being constructed from only the crystal structure, the results in this paper suggest that the RGN can identify residues that fulfil a dynamical function.

  15. Residue Geometry Networks: A Rigidity-Based Approach to the Amino Acid Network and Evolutionary Rate Analysis

    NASA Astrophysics Data System (ADS)

    Fokas, Alexander S.; Cole, Daniel J.; Ahnert, Sebastian E.; Chin, Alex W.

    2016-09-01

    Amino acid networks (AANs) abstract the protein structure by recording the amino acid contacts and can provide insight into protein function. Herein, we describe a novel AAN construction technique that employs the rigidity analysis tool, FIRST, to build the AAN, which we refer to as the residue geometry network (RGN). We show that this new construction can be combined with network theory methods to include the effects of allowed conformal motions and local chemical environments. Importantly, this is done without costly molecular dynamics simulations required by other AAN-related methods, which allows us to analyse large proteins and/or data sets. We have calculated the centrality of the residues belonging to 795 proteins. The results display a strong, negative correlation between residue centrality and the evolutionary rate. Furthermore, among residues with high closeness, those with low degree were particularly strongly conserved. Random walk simulations using the RGN were also successful in identifying allosteric residues in proteins involved in GPCR signalling. The dynamic function of these residues largely remains hidden in the traditional distance-cutoff construction technique. Despite being constructed from only the crystal structure, the results in this paper suggest that the RGN can identify residues that fulfil a dynamical function.

  17. A comprehensive approach to decipher biological computation to achieve next generation high-performance exascale computing.

    SciTech Connect

    James, Conrad D.; Schiess, Adrian B.; Howell, Jamie; Baca, Michael J.; Partridge, L. Donald; Finnegan, Patrick Sean; Wolfley, Steven L.; Dagel, Daryl James; Spahn, Olga Blum; Harper, Jason C.; Pohl, Kenneth Roy; Mickel, Patrick R.; Lohn, Andrew; Marinella, Matthew

    2013-10-01

    The human brain (volume = 1200 cm^3) consumes 20 W and is capable of performing > 10^16 operations/s. Current supercomputer technology has reached 10^15 operations/s, yet it requires 1500 m^3 and 3 MW, giving the brain a 10^12 advantage in operations/s/W/cm^3. Thus, to reach exascale computation, two achievements are required: 1) improved understanding of computation in biological tissue, and 2) a paradigm shift towards neuromorphic computing where hardware circuits mimic properties of neural tissue. To address 1), we will interrogate corticostriatal networks in mouse brain tissue slices, specifically with regard to their frequency filtering capabilities as a function of input stimulus. To address 2), we will instantiate biological computing characteristics such as multi-bit storage into hardware devices with future computational and memory applications. Resistive memory devices will be modeled, designed, and fabricated in the MESA facility in consultation with our internal and external collaborators.
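
    The quoted 10^12 figure of merit follows directly from the numbers given, as a quick check shows:

```python
# Figure of merit: (operations/s) per watt per cm^3, from the record's numbers.
brain = 1e16 / (20 * 1200)           # 20 W, 1200 cm^3          -> ~4e11
super_ = 1e15 / (3e6 * 1500 * 1e6)   # 3 MW, 1500 m^3 = 1.5e9 cm^3 -> ~0.2
print(brain / super_)                # ~2e12, i.e. the quoted ~10^12 advantage
```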

  18. TOWARD HIGHLY SECURE AND AUTONOMIC COMPUTING SYSTEMS: A HIERARCHICAL APPROACH

    SciTech Connect

    Lee, Hsien-Hsin S

    2010-05-11

    The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting, introspective, with self-healing capability under the circumstances of improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) System-wide, unified introspection techniques for autonomic systems, (2) Secure information-flow microarchitecture, (3) Memory-centric security architecture, (4) Authentication control and its implication to security, (5) Digital rights management, (6) Microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.

  19. Alternative cosmology from cusp geometries

    NASA Astrophysics Data System (ADS)

    Rosa, Reinaldo; Herbin Stalder Díaz, Diego

    We study an alternative geometrical approach to the problem of the classical cosmological singularity. It is based on a generalized surface x^2 + y^2 = (1 - z)z^n, which consists of a cusped projected coupled isosurface. Such a projected geometry is computed and analyzed in the context of Friedmann singularity-free cosmology, where a pre-big bang scenario is considered. Assuming that the mechanism of cusp formation is described by non-linear oscillations of a pre-big bang extended field of very high energy density (> 3x10^94 kg/m^3), we show that the action under the gravitational field follows a tautochrone of revolution, understood here as the primary projected geometry that alternatively replaces the Friedmann singularity in the standard big bang theory. As shown here, this new approach allows us to interpret the nature of both matter and dark energy from first geometric principles [1]. [1] Rosa et al. DOI: 10.1063/1.4756991

  20. Computational Approaches to Viral Evolution and Rational Vaccine Design

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Tanmoy

    2006-10-01

    Viral pandemics, including HIV, are a major health concern across the world. Experimental techniques available today have uncovered a great wealth of information about how these viruses infect, grow, and cause disease, as well as how our body attempts to defend itself against them. Nevertheless, due to the high variability and fast evolution of many of these viruses, the traditional method of developing vaccines by presenting a heuristically chosen strain to the body fails, and an effective intervention strategy still eludes us. A large amount of carefully curated genomic data on a number of these viruses is now available, often annotated with disease and immunological context. The availability of parallel computers has now made it possible to carry out a systematic analysis of this data within an evolutionary framework. I will describe, as an example, how computations on such data have allowed us to understand the origins and diversification of HIV, the causative agent of AIDS. On the practical side, computations on the same data are now being used to inform the choice or design of optimal vaccine strains.

  1. Modeling Cu2+-Aβ complexes from computational approaches

    NASA Astrophysics Data System (ADS)

    Alí-Torres, Jorge; Mirats, Andrea; Maréchal, Jean-Didier; Rodríguez-Santiago, Luis; Sodupe, Mariona

    2015-09-01

    Amyloid plaque formation and oxidative stress are two key events in the pathology of Alzheimer's disease (AD), in which metal cations have been shown to play an important role. In particular, the interaction of the redox-active Cu2+ metal cation with Aβ has been found to interfere in amyloid aggregation and to lead to reactive oxygen species (ROS). A detailed knowledge of the electronic and molecular structure of Cu2+-Aβ complexes is thus important to get a better understanding of the role of these complexes in the development and progression of AD. The computational treatment of these systems requires a combination of several available computational methodologies, because two fundamental aspects have to be addressed: the metal coordination sphere and the conformation adopted by the peptide upon copper binding. In this paper we review the main computational strategies used to deal with the Cu2+-Aβ coordination and build plausible Cu2+-Aβ models that will afterwards allow determining physicochemical properties of interest, such as their redox potential.

  2. Computational Approach to Diarylprolinol-Silyl Ethers in Aminocatalysis.

    PubMed

    Halskov, Kim Søholm; Donslund, Bjarke S; Paz, Bruno Matos; Jørgensen, Karl Anker

    2016-05-17

    Asymmetric organocatalysis has witnessed a remarkable development since its "re-birth" at the beginning of the millennium. In this rapidly growing field, computational investigations have proven to be an important contribution to the elucidation of mechanisms and rationalizations of the stereochemical outcomes of many of the reaction concepts developed. The improved understanding of mechanistic details has facilitated the further advancement of the field. The diarylprolinol-silyl ethers have since their introduction been one of the most applied catalysts in asymmetric aminocatalysis due to their robustness and generality. Although aminocatalytic methods at first glance appear to follow relatively simple mechanistic principles, more comprehensive computational studies have shown that this notion is in some cases deceiving and that more complex pathways might be operating. In this Account, the application of density functional theory (DFT) and other computational methods to systems catalyzed by the diarylprolinol-silyl ethers is described. It will be illustrated how computational investigations have shed light on the structure and reactivity of important intermediates in aminocatalysis, such as enamines and iminium ions formed from aldehydes and α,β-unsaturated aldehydes, respectively. Enamine and iminium ion catalysis can be classified as HOMO-raising and LUMO-lowering activation modes. In these systems, the exclusive reactivity through one of the possible intermediates is often a requisite for achieving high stereoselectivity; therefore, the appreciation of subtle energy differences has been vital for the efficient development of new stereoselective reactions. The diarylprolinol-silyl ethers have also allowed for novel activation modes for unsaturated aldehydes, which have opened up avenues for the development of new remote functionalization reactions of poly-unsaturated carbonyl compounds via di-, tri-, and tetraenamine intermediates and vinylogous iminium ions.

  3. An engineering based approach for hydraulic computations in river flows

    NASA Astrophysics Data System (ADS)

    Di Francesco, S.; Biscarini, C.; Pierleoni, A.; Manciola, P.

    2016-06-01

    This paper presents an engineering-based approach to hydraulic risk evaluation. The aim of the research is to identify criteria for choosing the simplest model appropriate to different scenarios, as the characteristics of the main river channel vary. The complete flow field, generally expressed in terms of pressure, velocities and accelerations, can be described through a three-dimensional approach that considers all the flow properties varying in all directions. In many practical applications for river flow studies, however, the greatest changes occur only in two dimensions or even only in one. In these cases the use of simplified approaches can lead to accurate results, with easy-to-build and faster simulations. The study has been conducted taking into account a dimensionless channel parameter, the ratio of the curvature radius to the width of the channel (R/B).

  4. Component-based approach to robot vision for computational efficiency

    NASA Astrophysics Data System (ADS)

    Lee, Junhee; Kim, Dongsun; Park, Yeonchool; Park, Sooyong; Lee, Sukhan

    2007-12-01

    The purpose of this paper is to show the merit and feasibility of the component-based approach to robot system integration. Several methodologies, such as the component-based and middleware-based approaches, have been suggested for integrating various complex functions on robot systems efficiently. However, these methodologies have not been widely adopted in robot function development, because such 'top-down' methodologies are modeled and researched in the software engineering field, which differs from robot function research, and so they are not yet trusted by function developers. The developers' main concern with these methodologies is the performance decrease that originates from framework overhead. This paper addresses this misunderstanding by showing a time performance increase when an experiment uses the 'Self Healing, Adaptive and Growing softwarE (SHAGE)' framework, a component-based framework. As an example of a real robot function, visual object recognition is chosen for the experiment.

  5. Computational approach for calculating bound states in quantum field theory

    NASA Astrophysics Data System (ADS)

    Lv, Q. Z.; Norris, S.; Brennan, R.; Stefanovich, E.; Su, Q.; Grobe, R.

    2016-09-01

    We propose a nonperturbative approach to calculate bound-state energies and wave functions for quantum field theoretical models. It is based on the direct diagonalization of the corresponding quantum field theoretical Hamiltonian in an effectively discretized and truncated Hilbert space. We illustrate this approach for a Yukawa-like interaction between fermions and bosons in one spatial dimension and show where it agrees with the traditional method based on the potential picture and where it deviates due to recoil and radiative corrections. This method permits us also to obtain some insight into the spatial characteristics of the distribution of the fermions in the ground state, such as the bremsstrahlung-induced widening.

  6. Worldline approach for numerical computation of electromagnetic Casimir energies: Scalar field coupled to magnetodielectric media

    NASA Astrophysics Data System (ADS)

    Mackrory, Jonathan B.; Bhattacharya, Tanmoy; Steck, Daniel A.

    2016-10-01

    We present a worldline method for the calculation of Casimir energies for scalar fields coupled to magnetodielectric media. The scalar model we consider may be applied in arbitrary geometries, and it corresponds exactly to one polarization of the electromagnetic field in planar layered media. Starting from the field theory for electromagnetism, we work with the two decoupled polarizations in planar media and develop worldline path integrals, which represent the two polarizations separately, for computing both Casimir and Casimir-Polder potentials. We then show analytically that the path integrals for the transverse-electric polarization coupled to a dielectric medium converge to the proper solutions in certain special cases, including the Casimir-Polder potential of an atom near a planar interface, and the Casimir energy due to two planar interfaces. We also evaluate the path integrals numerically via Monte Carlo path-averaging for these cases, studying the convergence and performance of the resulting computational techniques. While these scalar methods are only exact in particular geometries, they may serve as an approximation for Casimir energies for the vector electromagnetic field in other geometries.
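
    The full renormalized Casimir energy involves a propertime integral and careful subtractions, but the geometric ingredient of the path-averaging step, namely how often a closed Brownian-bridge loop anchored at a point pierces the interface, can be sketched directly. A minimal illustration, assuming NumPy; loop counts, step counts and scales are illustrative and no normalization of the worldline integrand is attempted:

```python
import numpy as np

rng = np.random.default_rng(0)

def brownian_bridge_loops(n_loops, n_steps):
    """Unit Brownian bridges B with B(0) = B(1) = 0; one coordinate
    suffices for a planar interface."""
    dw = rng.normal(size=(n_loops, n_steps)) / np.sqrt(n_steps)
    w = np.cumsum(dw, axis=1)
    t = np.arange(1, n_steps + 1) / n_steps
    return w - t * w[:, -1:]     # subtract the drift to close each loop

def piercing_fraction(z, scale, loops):
    """Fraction of loops, centred at height z and scaled by sqrt(T),
    that cross the interface plane at height 0."""
    return np.mean((z + scale * loops).min(axis=1) <= 0.0)

loops = brownian_bridge_loops(n_loops=20000, n_steps=256)
for z in (0.5, 1.0, 2.0):
    print(z, piercing_fraction(z, scale=1.0, loops=loops))
```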

  7. Workflow Scheduling in Grid Computing Environment using a Hybrid GAACO Approach

    NASA Astrophysics Data System (ADS)

    Sathish, Kuppani; RamaMohan Reddy, A.

    2016-06-01

    In recent trends, grid computing is one of the emerging computing platforms supporting parallel and distributed environments. The main problem in grid computing is that scheduling workflows to user specifications is a challenging task, and it also impacts performance. This paper proposes a hybrid GAACO approach, a combination of a Genetic Algorithm and an Ant Colony Optimization algorithm. The GAACO approach provides different types of scheduling heuristics for the grid environment. The main objective of this approach is to satisfy all the defined constraints and user parameters.

  8. Computational approaches to metabolic engineering utilizing systems biology and synthetic biology.

    PubMed

    Fong, Stephen S

    2014-08-01

    Metabolic engineering modifies cellular function to address various biochemical applications. Underlying metabolic engineering efforts are a host of tools and knowledge that are integrated to enable successful outcomes. Concurrent development of computational and experimental tools has enabled different approaches to metabolic engineering. One approach is to leverage knowledge and computational tools to prospectively predict designs to achieve the desired outcome. An alternative approach is to utilize combinatorial experimental tools to empirically explore the range of cellular function and to screen for desired traits. This mini-review focuses on computational systems biology and synthetic biology tools that can be used in combination for prospective in silico strain design.

  9. One Approach to Computer Modeling of Microkinetics for Biotechnological Processes

    NASA Astrophysics Data System (ADS)

    Kabiljanov, A. S.; Fozilova, M. M.; Kuchkarov, F. S.

    An intelligence-algorithmic method for modeling the microkinetics of biotechnological processes (BTP) is considered. A generalized mathematical model of the microkinetics of biotechnological processes, derived on the basis of the systems approach, is given. The regularization of the parametric identification task is also considered.

  10. Synergy between experimental and computational approaches to homogeneous photoredox catalysis.

    PubMed

    Demissie, Taye B; Hansen, Jørn H

    2016-07-01

    In this Frontiers article, we highlight how state-of-the-art density functional theory calculations can contribute to the field of homogeneous photoredox catalysis. We discuss challenges in the fields and potential solutions to be found at the interface between theory and experiment. The exciting opportunities and insights that can arise through such an interdisciplinary approach are highlighted.

  11. A Simple and Efficient Computational Approach to Chafed Cable Time-Domain Reflectometry Signature Prediction

    NASA Technical Reports Server (NTRS)

    Kowalski, Marc Edward

    2009-01-01

    A method for the prediction of time-domain signatures of chafed coaxial cables is presented. The method is quasi-static in nature, and is thus efficient enough to be included in inference and inversion routines. Unlike previous models proposed, no restriction on the geometry or size of the chafe is required in the present approach. The model is validated and its speed is illustrated via comparison to simulations from a commercial, three-dimensional electromagnetic simulator.

  12. Novel Approaches to Adaptive Angular Approximations in Computational Transport

    SciTech Connect

    Marvin L. Adams; Igor Carron; Paul Nelson

    2006-06-04

    The particle-transport equation is notoriously difficult to discretize accurately, largely because the solution can be discontinuous in every variable. At any given spatial position and energy E, for example, the transport solution ψ can be discontinuous at an arbitrary number of arbitrary locations in the direction domain. Even if the solution is continuous it is often devoid of smoothness. This makes the direction variable extremely difficult to discretize accurately. We have attacked this problem with adaptive discretizations in the angle variables, using two distinctly different approaches. The first approach used wavelet function expansions directly and exploited their ability to capture sharp local variations. The second used discrete ordinates with a spatially varying quadrature set that adapts to the local solution. The first approach is very different from that in today’s transport codes, while the second could conceivably be implemented in such codes. Both approaches succeed in reducing angular discretization error to any desired level. The work described and results presented in this report add significantly to the understanding of angular discretization in transport problems and demonstrate that it is possible to solve this important long-standing problem in deterministic transport. Our results show that our adaptive discrete-ordinates (ADO) approach successfully: 1) Reduces angular discretization error to user-selected “tolerance” levels in a variety of difficult test problems; 2) Achieves a given error with significantly fewer unknowns than non-adaptive discrete ordinates methods; 3) Can be implemented within standard discrete-ordinates solution techniques, and thus could generate a significant impact on the field in a relatively short time. Our results show that our adaptive wavelet approach: 1) Successfully reduces the angular discretization error to arbitrarily small levels in a variety of difficult test problems, even when using the

  13. Complicated Geometry

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Using NASA SBIR funding, CFD Research Corporation has developed CFD-GEOM, an extension of traditional computer-aided design (CAD) software. CFD-GEOM provides modeling and interactivity for the latest field of computational fluid dynamics (CFD), mesh generation, and allows for quick and easy updating of a grid in response to changes in the CAD model.

  14. The effective molarity (EM)--a computational approach.

    PubMed

    Karaman, Rafik

    2010-08-01

    The effective molarities (EM) for 12 intramolecular SN2 processes involving the formation of substituted aziridines and substituted epoxides were computed using ab initio and DFT calculation methods. A strong correlation was found between the calculated effective molarity and the experimentally determined values. This result could open the door to obtaining EM values for intramolecular processes that are difficult to determine experimentally. Furthermore, the calculation results reveal that the driving forces for ring-closing reactions in the two different systems are the proximity orientation of the nucleophile to the electrophile and the ground strain energies of the products and the reactants.

  15. New approaches for computer analysis of nucleic acid sequences.

    PubMed

    Karlin, S; Ghandour, G; Ost, F; Tavare, S; Korn, L J

    1983-09-01

    A new high-speed computer algorithm is outlined that ascertains within and between nucleic acid and protein sequences all direct repeats, dyad symmetries, and other structural relationships. Large repeats, repeats of high frequency, dyad symmetries of specified stem length and loop distance, and their distributions are determined. Significance of homologies is assessed by a hierarchy of permutation procedures. Applications are made to papovaviruses, the human papillomavirus HPV, lambda phage, the human and mouse mitochondrial genomes, and the human and mouse immunoglobulin kappa-chain genes. PMID:6577449
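
    The abstract does not disclose the algorithm's internals; a simple hash-based scan suffices to illustrate the two quantities it catalogues, direct repeats and dyad symmetries. In the sketch below the word length k, loop-distance bounds and test sequence are all illustrative:

```python
from collections import defaultdict

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def kmer_positions(seq, k):
    """Map every length-k word to the positions where it occurs."""
    pos = defaultdict(list)
    for i in range(len(seq) - k + 1):
        pos[seq[i:i + k]].append(i)
    return pos

def direct_repeats(seq, k):
    """Words occurring more than once (direct repeats)."""
    return {w: p for w, p in kmer_positions(seq, k).items() if len(p) > 1}

def dyad_symmetries(seq, k, min_loop=3, max_loop=8):
    """Stems of length k whose reverse complement recurs a short
    'loop' distance downstream (hairpin / dyad symmetry)."""
    pos = kmer_positions(seq, k)
    hits = []
    for w, starts in pos.items():
        rc = w.translate(COMPLEMENT)[::-1]
        for i in starts:
            for j in pos.get(rc, []):
                loop = j - (i + k)
                if min_loop <= loop <= max_loop:
                    hits.append((i, j, loop))
    return hits

seq = "ACGTTTGCAAACGT"
print(direct_repeats(seq, 4))
print(dyad_symmetries(seq, 4))
```

    Assessing significance by the permutation procedures the paper describes would then amount to re-running these counts on shuffled versions of the sequence.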

  16. A complex systems approach to computational molecular biology

    SciTech Connect

    Lapedes, A. |

    1993-09-01

    We report on the ongoing research program at the Santa Fe Institute that applies complex systems methodology to computational molecular biology. Two aspects stressed here are the use of co-evolving adaptive neural networks for determining predictable protein structure classifications, and the use of information theory to elucidate protein structure and function. A "snapshot" of the current state of research in these two topics is presented, representing the present state of two major research thrusts in the program of Genetic Data and Sequence Analysis at the Santa Fe Institute.

  17. A computer simulation approach to measurement of human control strategy

    NASA Technical Reports Server (NTRS)

    Green, J.; Davenport, E. L.; Engler, H. F.; Sears, W. E., III

    1982-01-01

    Human control strategy is measured through use of a psychologically-based computer simulation which reflects a broader theory of control behavior. The simulation is called the human operator performance emulator, or HOPE. HOPE was designed to emulate control learning in a one-dimensional preview tracking task and to measure control strategy in that setting. When given a numerical representation of a track and information about current position in relation to that track, HOPE generates positions for a stick controlling the cursor to be moved along the track. In other words, HOPE generates control stick behavior corresponding to that which might be used by a person learning preview tracking.

  18. Computational Approaches to Enhance Nanosafety and Advance Nanomedicine

    NASA Astrophysics Data System (ADS)

    Mendoza, Eduardo R.

    With the increasing use of nanoparticles in food processing, filtration/purification and consumer products, as well as the huge potential of their use in nanomedicine, a quantitative understanding of the effects of nanoparticle uptake and transport is needed. We provide examples of novel methods for modeling complex bio-nano interactions which are based on stochastic process algebras. Since model construction presumes sufficient availability of experimental data, recent developments in "nanoinformatics", an emerging discipline analogous to bioinformatics, in building an accessible information infrastructure are subsequently discussed. Both computational areas offer opportunities for Filipinos to engage in collaborative, cutting-edge research in this impactful field.

  19. Positive approach: Implications for the relation between number theory and geometry, including connection to Santilli mathematics, from Fibonacci reconstitution of natural numbers and of prime numbers

    NASA Astrophysics Data System (ADS)

    Johansen, Stein E.

    2014-12-01

    The paper recapitulates some key elements in previously published results concerning exact and complete reconstitution of the field of natural numbers, both as ordinal and as cardinal numbers, from systematic unfoldment of the Fibonacci algorithm. By this, natural numbers emerge as Fibonacci "atoms" and "molecules" consistent with the notion of Zeckendorf sums. Here, the sub-set of prime numbers appears not as the primary numbers, but as an epistructure from a deeper Fibonacci constitution, and is thus targeted from a "positive approach". In the Fibonacci reconstitution of number theory, natural numbers show a double geometrical aspect: partly as extension in space and partly as position in a successive structuring of space. More specifically, the natural numbers are shown to be distributed by a concise 5:3 code structured from the Fibonacci algorithm via Pascal's triangle. The paper discusses possible implications for the more general relation between number theory and geometry, as well as more specifically in relation to hadronic mathematics, initiated by R.M. Santilli, and also briefly to some other recent science linking number theory more directly to geometry and natural systems.

  20. Positive approach: Implications for the relation between number theory and geometry, including connection to Santilli mathematics, from Fibonacci reconstitution of natural numbers and of prime numbers

    SciTech Connect

    Johansen, Stein E.

    2014-12-10

    The paper recapitulates some key elements in previously published results concerning exact and complete reconstitution of the field of natural numbers, both as ordinal and as cardinal numbers, from systematic unfoldment of the Fibonacci algorithm. By this, natural numbers emerge as Fibonacci 'atoms' and 'molecules' consistent with the notion of Zeckendorf sums. Here, the sub-set of prime numbers appears not as the primary numbers, but as an epistructure from a deeper Fibonacci constitution, and is thus targeted from a 'positive approach'. In the Fibonacci reconstitution of number theory, natural numbers show a double geometrical aspect: partly as extension in space and partly as position in a successive structuring of space. More specifically, the natural numbers are shown to be distributed by a concise 5:3 code structured from the Fibonacci algorithm via Pascal's triangle. The paper discusses possible implications for the more general relation between number theory and geometry, as well as more specifically in relation to hadronic mathematics, initiated by R.M. Santilli, and also briefly to some other recent science linking number theory more directly to geometry and natural systems.

  1. Cognitive control in majority search: a computational modeling approach.

    PubMed

    Wang, Hongbin; Liu, Xun; Fan, Jin

    2011-01-01

    Despite the importance of cognitive control in many cognitive tasks involving uncertainty, the computational mechanisms of cognitive control in response to uncertainty remain unclear. In this study, we develop biologically realistic neural network models to investigate the instantiation of cognitive control in a majority function task, where one determines the category to which the majority of items in a group belong. Two models are constructed, both of which include the same set of modules representing task-relevant brain functions and share the same model structure. However, with a critical change of a model parameter setting, the two models implement two different underlying algorithms: one for grouping search (where a subgroup of items are sampled and re-sampled until a congruent sample is found) and the other for self-terminating search (where the items are scanned and counted one-by-one until the majority is decided). The two algorithms hold distinct implications for the involvement of cognitive control. The modeling results show that while both models are able to perform the task, the grouping search model fit the human data better than the self-terminating search model. An examination of the dynamics underlying model performance reveals how cognitive control might be instantiated in the brain for computing the majority function.
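
    The two search algorithms are described only verbally in the abstract; direct toy transcriptions for a binary majority task might read as follows (the subgroup size is illustrative):

```python
import random

def self_terminating_search(items):
    """Scan and count items one-by-one; stop as soon as one category
    must be the majority of the whole group."""
    counts = {0: 0, 1: 0}
    need = len(items) // 2 + 1
    for x in items:
        counts[x] += 1
        if counts[x] >= need:
            return x
    return max(counts, key=counts.get)

def grouping_search(items, group=3):
    """Sample and re-sample a small subgroup until all of its items
    agree (a congruent sample), then answer with that category."""
    while True:
        sample = random.sample(items, group)
        if len(set(sample)) == 1:
            return sample[0]

items = [1, 1, 0, 1, 0, 1, 1]   # the majority category is 1
print(self_terminating_search(items), grouping_search(items))
```

    Note that the grouping strategy can, in general, return a congruent minority sample, which is one way such a model can reproduce human error patterns that the exhaustive scan cannot.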

  2. Computational Approaches to Toll-Like Receptor 4 Modulation.

    PubMed

    Billod, Jean-Marc; Lacetera, Alessandra; Guzmán-Caldentey, Joan; Martín-Santamaría, Sonsoles

    2016-01-01

    Toll-like receptor 4 (TLR4), along with its accessory protein myeloid differentiation factor 2 (MD-2), builds a heterodimeric complex that specifically recognizes lipopolysaccharides (LPS), which are present on the cell wall of Gram-negative bacteria, activating the innate immune response. Some TLR4 modulators are undergoing preclinical and clinical evaluation for the treatment of sepsis, inflammatory diseases, cancer and rheumatoid arthritis. Since the relatively recent elucidation of the X-ray crystallographic structure of the extracellular domain of TLR4, research around this fascinating receptor has risen to a new level, and thus, new perspectives have been opened. In particular, diverse computational techniques have been applied to decipher, at the atomic level, some of the basis of the mechanism of functioning and the ligand recognition processes involving the TLR4/MD-2 system. This review summarizes the reported molecular modeling and computational studies that have recently provided insights into the mechanism regulating the activation/inactivation of the TLR4/MD-2 system receptor and the key interactions modulating the molecular recognition process by agonist and antagonist ligands. These studies have contributed to the design and the discovery of novel small molecules with promising activity as TLR4 modulators. PMID:27483231

  3. Computational approaches for inferring the functions of intrinsically disordered proteins

    PubMed Central

    Varadi, Mihaly; Vranken, Wim; Guharoy, Mainak; Tompa, Peter

    2015-01-01

    Intrinsically disordered proteins (IDPs) are ubiquitously involved in cellular processes and often implicated in human pathological conditions. The critical biological roles of these proteins, despite not adopting a well-defined fold, encouraged structural biologists to revisit their views on the protein structure-function paradigm. Unfortunately, investigating the characteristics and describing the structural behavior of IDPs is far from trivial, and inferring the function(s) of a disordered protein region remains a major challenge. Computational methods have proven particularly relevant for studying IDPs: on the sequence level their dependence on distinct characteristics determined by the local amino acid context makes sequence-based prediction algorithms viable and reliable tools for large scale analyses, while on the structure level the in silico integration of fundamentally different experimental data types is essential to describe the behavior of a flexible protein chain. Here, we offer an overview of the latest developments and computational techniques that aim to uncover how protein function is connected to intrinsic disorder. PMID:26301226

  4. Computational approaches to understanding protein aggregation in neurodegeneration

    PubMed Central

    Redler, Rachel L.; Shirvanyants, David; Dagliyan, Onur; Ding, Feng; Kim, Doo Nam; Kota, Pradeep; Proctor, Elizabeth A.; Ramachandran, Srinivas; Tandon, Arpit; Dokholyan, Nikolay V.

    2014-01-01

    The generation of toxic non-native protein conformers has emerged as a unifying thread among disorders such as Alzheimer's disease, Parkinson's disease, and amyotrophic lateral sclerosis. Atomic-level detail regarding dynamical changes that facilitate protein aggregation, as well as the structural features of large-scale ordered aggregates and soluble non-native oligomers, would contribute significantly to current understanding of these complex phenomena and offer potential strategies for inhibiting formation of cytotoxic species. However, experimental limitations often preclude the acquisition of high-resolution structural and mechanistic information for aggregating systems. Computational methods, particularly those that combine both all-atom and coarse-grained simulations to cover a wide range of time and length scales, have thus emerged as crucial tools for investigating protein aggregation. Here we review the current state of computational methodology for the study of protein self-assembly, with a focus on the application of these methods toward understanding of protein aggregates in human neurodegenerative disorders. PMID:24620031

  5. A uniform approach for programming distributed heterogeneous computing systems

    PubMed Central

    Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas

    2014-01-01

    Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms, making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater's performance on three compute clusters hosted by the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling across different test applications and configurations. PMID:25844015

  6. Inverse problems and computational cell metabolic models: a statistical approach

    NASA Astrophysics Data System (ADS)

    Calvetti, D.; Somersalo, E.

    2008-07-01

    In this article, we give an overview of the Bayesian modelling of metabolic systems at the cellular and subcellular level. The models are based on detailed description of key biochemical reactions occurring in tissue, which may in turn be compartmentalized into cytosol and mitochondria, and of transports between the compartments. The classical deterministic approach, which models metabolic systems as dynamical systems with Michaelis-Menten kinetics, is replaced by a stochastic extension where the model parameters are interpreted as random variables with an appropriate probability density. The inverse problem of cell metabolism in this setting consists of estimating the density of the model parameters. After discussing some possible approaches to solving the problem, we address the issue of how to assess the reliability of the predictions of a stochastic model by proposing an output analysis in terms of model uncertainties. Visualization modalities for organizing the large amount of information provided by the Bayesian dynamic sensitivity analysis are also illustrated.
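
    As a minimal illustration of the stochastic extension described above, the sketch below draws the Michaelis-Menten parameters from hypothetical log-normal priors (the article's actual priors and reaction networks are far more detailed) and summarizes the resulting predictive flux distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical log-normal priors on the kinetic parameters; the
# abstract only states that parameters become random variables.
n = 10_000
Vmax = rng.lognormal(mean=np.log(1.0), sigma=0.3, size=n)  # mmol/(L*s)
Km = rng.lognormal(mean=np.log(0.5), sigma=0.3, size=n)    # mmol/L

S = 0.8                          # substrate concentration, mmol/L
v = Vmax * S / (Km + S)          # Michaelis-Menten rate per draw

# Output analysis: summarize the predictive distribution of the flux
lo, hi = np.quantile(v, [0.025, 0.975])
print(f"mean flux = {v.mean():.3f}, 95% interval = ({lo:.3f}, {hi:.3f})")
```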

  7. Asynchronous event-based hebbian epipolar geometry.

    PubMed

    Benosman, Ryad; Ieng, Sio-Hoï; Rogister, Paul; Posch, Christoph

    2011-11-01

    Epipolar geometry, the cornerstone of perspective stereo vision, has been studied extensively since the advent of computer vision. Establishing such a geometric constraint is of primary importance, as it allows the recovery of the 3-D structure of scenes. Estimating the epipolar constraints of nonperspective stereo is difficult: because of the complexity of the sensor geometry, they can no longer be defined in the conventional way. This paper will show that these limitations are, to some extent, a consequence of the static image frames commonly used in vision. The conventional frame-based approach suffers from a lack of the dynamics present in natural scenes. We introduce the use of neuromorphic event-based (rather than frame-based) vision sensors for perspective stereo vision. This type of sensor uses the dimension of time as the main conveyor of information. In this paper, we present a model for asynchronous event-based vision, which is then used to derive a general new concept of epipolar geometry linked to the temporal activation of pixels. Practical experiments demonstrate the validity of the approach, solving the problem of estimating the fundamental matrix applied, in a first stage, to classic perspective vision and then to more general cameras. Furthermore, this paper shows that the properties of event-based vision sensors allow the exploration of not-yet-defined geometric relationships; finally, we provide a definition of general epipolar geometry deployable to almost any visual sensor. PMID:21954205

  9. Freezing in confined geometries

    NASA Technical Reports Server (NTRS)

    Sokol, P. E.; Ma, W. J.; Herwig, K. W.; Snow, W. M.; Wang, Y.; Koplik, Joel; Banavar, Jayanth R.

    1992-01-01

    Results of detailed structural studies, using elastic neutron scattering, of the freezing of liquid O2 and D2 in porous vycor glass, are presented. The experimental studies have been complemented by computer simulations of the dynamics of freezing of a Lennard-Jones liquid in narrow channels bounded by molecular walls. Results point to a new simple physical interpretation of freezing in confined geometries.

  10. A dynamical-systems approach for computing ice-affected streamflow

    USGS Publications Warehouse

    Holtschlag, David J.

    1996-01-01

    A dynamical-systems approach was developed and evaluated for computing ice-affected streamflow. The approach provides for dynamic simulation and parameter estimation of site-specific equations relating ice effects to routinely measured environmental variables. Comparison indicates that results from the dynamical-systems approach ranked higher than results from 11 analytical methods previously investigated on the basis of accuracy and feasibility criteria. Additional research will likely lead to further improvements in the approach.

  11. Multiplexing Genetic and Nucleosome Positioning Codes: A Computational Approach

    PubMed Central

    Eslami-Mossallam, Behrouz; Schram, Raoul D.; Tompitak, Marco; van Noort, John; Schiessel, Helmut

    2016-01-01

    Eukaryotic DNA is strongly bent inside fundamental packaging units: the nucleosomes. It is known that their positions are strongly influenced by the mechanical properties of the underlying DNA sequence. Here we discuss the possibility that these mechanical properties and the concomitant nucleosome positions are not just a side product of the given DNA sequence, e.g. that of the genes, but that a mechanical evolution of DNA molecules might have taken place. We first demonstrate the possibility of multiplexing classical and mechanical genetic information using a computational nucleosome model. In a second step we give evidence for genome-wide multiplexing in Saccharomyces cerevisiae and Schizosaccharomyces pombe. This suggests that the exact positions of nucleosomes play crucial roles in chromatin function. PMID:27272176

  12. A computational approach to the twin paradox in curved spacetime

    NASA Astrophysics Data System (ADS)

    Fung, Kenneth K. H.; Clark, Hamish A.; Lewis, Geraint F.; Wu, Xiaofeng

    2016-09-01

    Despite being a major component in the teaching of special relativity, the twin ‘paradox’ is generally not examined in courses on general relativity. Due to the complexity of analytical solutions to the problem, the paradox is often neglected entirely, and students are left with an incomplete understanding of the relativistic behaviour of time. This article outlines a project, undertaken by undergraduate physics students at the University of Sydney, in which a novel computational method was derived in order to predict the time experienced by a twin following a number of paths between two given spacetime coordinates. By utilising this method, it is possible to make clear to students that following a geodesic in curved spacetime does not always result in the greatest experienced proper time.
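
    A worked example of the underlying point (not the students' actual code, which the abstract does not give): in Schwarzschild geometry, a twin completing one circular-orbit geodesic can age less than a twin hovering at the same radius, showing that a geodesic need not maximize proper time.

```python
import numpy as np

# Geometrized units G = c = 1; Schwarzschild mass M.
M = 1.0
r = 8.0 * M                            # orbit radius (> 6M, stable)
T = 2 * np.pi * np.sqrt(r**3 / M)      # coordinate time of one orbit

# Twin 1: hovers at fixed r (accelerated worldline)
tau_static = np.sqrt(1 - 2 * M / r) * T

# Twin 2: follows one circular geodesic orbit at the same r
tau_orbit = np.sqrt(1 - 3 * M / r) * T

print(f"hovering twin ages {tau_static:.2f}")
print(f"orbiting twin ages {tau_orbit:.2f}")   # smaller: the geodesic loses
```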

  13. Computational biology approach to uncover hepatitis C virus helicase operation.

    PubMed

    Flechsig, Holger

    2014-04-01

    Hepatitis C virus (HCV) helicase is a molecular motor that splits nucleic acid duplex structures during viral replication, therefore representing a promising target for antiviral treatment. Hence, a detailed understanding of the mechanism by which it operates would facilitate the development of efficient drug-assisted therapies aiming to inhibit helicase activity. Despite extensive investigations performed in the past, a thorough understanding of the activity of this important protein was lacking since the underlying internal conformational motions could not be resolved. Here we review investigations that have been previously performed by us for HCV helicase. Using methods of structure-based computational modelling it became possible to follow entire operation cycles of this motor protein in structurally resolved simulations and uncover the mechanism by which it moves along the nucleic acid and accomplishes strand separation. We also discuss observations from that study in the light of recent experimental studies that confirm our findings. PMID:24707123

  14. Embodied cognition of movement decisions: a computational modeling approach.

    PubMed

    Johnson, Joseph G

    2009-01-01

    This chapter presents a cognitive computational view of decision making as the search for, and accumulation of, evidence for options under consideration. It is based on existing models that have been successful in traditional decision tasks involving preferential choice. The model assumes shifting attention over time that determines momentary inputs to an evolving preference state. In this chapter, the cognitive model is extended to illustrate how links from the motor system may be incorporated. These links fall into one of three categories of influence: modifying the subjective evaluation of choice options, restricting attention, and altering the composition of the choice set. The implications for the formal model are introduced, and preliminary evidence is drawn from the extant literature.
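
    A minimal sequential-sampling sketch of the evidence-accumulation idea described above (the payoff matrix, attention probabilities, noise level, and threshold are illustrative assumptions, not parameters from the chapter):

```python
import numpy as np

rng = np.random.default_rng(1)

def accumulate_preference(values, attention_probs, threshold=5.0, dt=0.01):
    """Attention shifts stochastically among attributes; the attended
    attribute's values feed an evolving preference state per option."""
    n_options, n_attrs = values.shape
    P = np.zeros(n_options)            # evolving preference state
    t = 0.0
    while P.max() < threshold:
        attr = rng.choice(n_attrs, p=attention_probs)
        P += values[:, attr] * dt + rng.normal(0, 0.1, n_options) * np.sqrt(dt)
        t += dt
    return int(P.argmax()), t

values = np.array([[3.0, 1.0],         # rows = options,
                   [1.0, 2.5]])        # columns = attributes
choice, rt = accumulate_preference(values, attention_probs=[0.6, 0.4])
print(f"chose option {choice} after {rt:.2f} time units")
```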

  15. Open-ended approaches to science assessment using computers

    NASA Astrophysics Data System (ADS)

    Singley, Mark K.; Taft, Hessy L.

    1995-03-01

    We discuss the potential role of technology in evaluating learning outcomes in large-scale, widespread science assessments of the kind typically done at ETS, such as the GRE, or the College Board SAT II Subject Tests. We describe the current state-of-the-art in this area, as well as briefly outline the history of technology in large-scale science assessment and ponder possibilities for the future. We present examples from our own work in the domain of chemistry, in which we are designing problem solving interfaces and scoring programs for stoichiometric and other kinds of quantitative problem solving. We also present a new scientific reasoning item type that we are prototyping on the computer. It is our view that the technological infrastructure for large-scale constructed response science assessment is well on its way to being available, although many technical and practical hurdles remain.

  17. Photonic reservoir computing: a new approach to optical information processing

    NASA Astrophysics Data System (ADS)

    Vandoorne, Kristof; Fiers, Martin; Verstraeten, David; Schrauwen, Benjamin; Dambre, Joni; Bienstman, Peter

    2010-06-01

    Despite ever increasing computational power, recognition and classification problems remain challenging to solve. Recently, advances have been made by the introduction of the new concept of reservoir computing. This is a methodology coming from the field of machine learning and neural networks that has been successfully used in several pattern classification problems, like speech and image recognition. Thus far, most implementations have been in software, limiting their speed and power efficiency. Photonics could be an excellent platform for a hardware implementation of this concept because of its inherent parallelism and unique nonlinear behaviour. Moreover, a photonic implementation offers the promise of massively parallel information processing with low power and high speed. We propose using a network of coupled Semiconductor Optical Amplifiers (SOA) and show in simulation that it could be used as a reservoir by comparing it to conventional software implementations using a benchmark speech recognition task. In spite of the differences with classical reservoir models, the performance of our photonic reservoir is comparable to that of conventional implementations and sometimes slightly better. As our implementation uses coherent light for information processing, we find that phase tuning is crucial to obtain high performance. In parallel we investigate the use of a network of photonic crystal cavities. The coupled mode theory (CMT) is used to investigate these resonators. A new framework is designed to model networks of resonators and SOAs. The same network topologies are used, but feedback is added to control the internal dynamics of the system. By adjusting the readout weights of the network in a controlled manner, we can generate arbitrary periodic patterns.
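
    For readers unfamiliar with the reservoir computing methodology being ported to photonics, here is a conventional software reservoir (an echo state network) of the kind the photonic implementation is benchmarked against. All sizes and scalings are illustrative, and this is not the SOA network model itself:

```python
import numpy as np

rng = np.random.default_rng(42)

N, T = 200, 1000
W_in = rng.uniform(-0.5, 0.5, (N, 1))        # fixed input weights
W = rng.normal(0, 1.0, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))    # spectral radius < 1

u = rng.uniform(-1, 1, (T, 1))               # input signal
target = np.roll(u[:, 0], 3)                 # task: recall input 3 steps back

x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])         # fixed, untrained dynamics
    states[t] = x

# Only the linear readout is trained (ridge regression)
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(N),
                        states.T @ target)
pred = states @ W_out
print("train NRMSE:", np.sqrt(np.mean((pred - target) ** 2)) / target.std())
```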

  18. Computational Geometry and Computer-Aided Design

    NASA Technical Reports Server (NTRS)

    Fay, T. H. (Compiler); Shoosmith, J. N. (Compiler)

    1985-01-01

    Extended abstracts of papers addressing the analysis, representation, and synthesis of shape information are presented. Curves and shape control, grid generation and contouring, solid modelling, surfaces, and curve intersection are specifically addressed.

  19. Information and psychomotor skills knowledge acquisition: A student-customer-centered and computer-supported approach.

    PubMed

    Nicholson, Anita; Tobin, Mary

    2006-01-01

    This presentation will discuss coupling commercial and customized computer-supported teaching aids to provide BSN nursing students with a friendly customer-centered self-study approach to psychomotor skill acquisition.

  20. Asymptotic solutions for the relaxation of the contact line in the Wilhelmy-plate geometry: The contact line dissipation approach.

    PubMed

    Iliev, Stanimir; Pesheva, Nina; Iliev, Dimitar

    2010-01-01

    The relaxation of straight contact lines is considered in the context of the Wilhelmy-plate experiment: a homogeneous solid plate is moving vertically at constant velocity in a tank of liquid in the partial wetting regime. We apply the contact line dissipation approach to describe the quasistatic relaxation of the contact line toward the stationary state (below the entrainment transition). Asymptotic solutions are derived from the differential equations describing the capillary rise height and the contact angle relaxation for small initial deviations of the height from the final stationary value in the model considering the friction dissipation at the moving contact line, in the model considering the viscous flow dissipation in the wedge, and in the combined model taking into account both channels of dissipation. We find that for all models the time relaxation of the height and the cosine of the contact angle are given by sums of exponential functions up to second order in the expansion of the small parameter. We analyze the implications which follow when only one dissipation channel is taken into account and compare them to the case when both dissipation channels are included. The asymptotic solutions are compared with experimental results and with numerically obtained solutions which are based on the hydrodynamic approach in the lubrication approximation with and without a correction factor for finite contact angles. The best description of the experimental data, based on multicriteria testing, is obtained with the combined contact line dissipation model which takes into account both channels of dissipation. PMID:20365384
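
    In LaTeX form, the second-order asymptotic result stated above reads as follows (the amplitudes and rates depend on which dissipation channels are retained):

```latex
\begin{align}
h(t) - h_\infty &\simeq A_1\, e^{-t/\tau_1} + A_2\, e^{-t/\tau_2},\\
\cos\theta(t) - \cos\theta_\infty &\simeq B_1\, e^{-t/\tau_1} + B_2\, e^{-t/\tau_2}.
\end{align}
```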

  1. Perturbation approach for nuclear magnetic resonance solid-state quantum computation

    DOE PAGESBeta

    Berman, G. P.; Kamenev, D. I.; Tsifrinovich, V. I.

    2003-01-01

    The dynamics of a nuclear-spin quantum computer with a large number (L = 1000) of qubits is considered using a perturbation approach. Small parameters are introduced and used to compute the error in an implementation of an entanglement between remote qubits, using a sequence of radio-frequency pulses. The error is computed up to the different orders of the perturbation theory and tested using an exact numerical solution.

  2. Optimization of weld bead geometry in laser welding with filler wire process using Taguchi’s approach

    NASA Astrophysics Data System (ADS)

    dongxia, Yang; xiaoyan, Li; dingyong, He; zuoren, Nie; hui, Huang

    2012-10-01

    In the present work, laser welding with filler wire was successfully applied to the joining of a new-type Al-Mg alloy. Welding parameters of laser power, welding speed and wire feed rate were carefully selected with the objective of producing a weld joint with the minimum weld bead width and fusion zone area. The Taguchi approach was used as a statistical design-of-experiments technique for optimizing the selected welding parameters. From the experimental results, it is found that the effect of the welding parameters on weld quality decreases in the order of welding speed, wire feed rate, and laser power. The optimal combination of welding parameters is a laser power of 2.4 kW, a welding speed of 3 m/min and a wire feed rate of 2 m/min. Verification experiments were also conducted to validate the optimized parameters.
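
    The core of a Taguchi-style analysis can be sketched briefly. The design fragment and response values below are made up for illustration; only the factor names come from the study. A smaller-is-better signal-to-noise ratio suits minimizing bead width:

```python
import numpy as np

# Hypothetical two-level design: columns are laser power, welding
# speed, wire feed rate; rows are experimental runs.
levels = np.array([[0, 0, 0],
                   [0, 1, 1],
                   [1, 0, 1],
                   [1, 1, 0]])
bead_width = np.array([2.1, 1.6, 1.8, 1.4])    # response, mm (made up)

# Smaller-is-better signal-to-noise ratio for each run
sn = -10 * np.log10(bead_width ** 2)

for f, name in enumerate(["laser power", "welding speed", "wire feed rate"]):
    effect = sn[levels[:, f] == 1].mean() - sn[levels[:, f] == 0].mean()
    print(f"{name}: S/N effect = {effect:+.2f} dB")
```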

  3. Interactions between pool geometry and hydraulics

    USGS Publications Warehouse

    Thompson, D.M.; Nelson, J.M.; Wohl, E.E.

    1998-01-01

    An experimental and computational research approach was used to determine interactions between pool geometry and hydraulics. A 20-m-long, 1.8-m-wide flume was used to investigate the effect of four different geometric aspects of pool shape on flow velocity. Plywood sections were used to systematically alter constriction width, pool depth, pool length, and pool exit-slope gradient, each at two separate levels. Using the resulting 16 unique geometries with measured pool velocities in four-way factorial analyses produced an empirical assessment of the role of the four geometric aspects on the pool flow patterns and hence the stability of the pool. To complement the conclusions of these analyses, a two-dimensional computational flow model was used to investigate the relationships between pool geometry and flow patterns over a wider range of conditions. Both experimental and computational results show that constriction and depth effects dominate in the jet section of the pool and that pool length exhibits an increasing effect within the recirculating-eddy system. The pool exit slope appears to force flow reattachment. Pool length controls recirculating-eddy length and vena contracta strength. In turn, the vena contracta and recirculating eddy control velocities throughout the pool.

  4. Strategic Cognitive Sequencing: A Computational Cognitive Neuroscience Approach

    PubMed Central

    Herd, Seth A.; Krueger, Kai A.; Kriete, Trenton E.; Huang, Tsung-Ren; Hazy, Thomas E.; O'Reilly, Randall C.

    2013-01-01

    We address strategic cognitive sequencing, the “outer loop” of human cognition: how the brain decides what cognitive process to apply at a given moment to solve complex, multistep cognitive tasks. We argue that this topic has been neglected relative to its importance for systematic reasons but that recent work on how individual brain systems accomplish their computations has set the stage for productively addressing how brain regions coordinate over time to accomplish our most impressive thinking. We present four preliminary neural network models. The first addresses how the prefrontal cortex (PFC) and basal ganglia (BG) cooperate to perform trial-and-error learning of short sequences; the next, how several areas of PFC learn to make predictions of likely reward, and how this contributes to the BG making decisions at the level of strategies. The third models address how PFC, BG, parietal cortex, and hippocampus can work together to memorize sequences of cognitive actions from instruction (or “self-instruction”). The last shows how a constraint satisfaction process can find useful plans. The PFC maintains current and goal states and associates from both of these to find a “bridging” state, an abstract plan. We discuss how these processes could work together to produce strategic cognitive sequencing and discuss future directions in this area. PMID:23935605

  5. Computer-aided liver surgery planning: an augmented reality approach

    NASA Astrophysics Data System (ADS)

    Bornik, Alexander; Beichel, Reinhard; Reitinger, Bernhard; Gotschuli, Georg; Sorantin, Erich; Leberl, Franz W.; Sonka, Milan

    2003-05-01

    Surgical resection of liver tumors requires a detailed three-dimensional understanding of a complex arrangement of vasculature, liver segments and tumors inside the liver. In most cases, surgeons need to develop this understanding by looking at sequences of axial images from modalities like X-ray computed tomography. A system for liver surgery planning is reported that enables physicians to visualize and refine segmented input liver data sets, as well as to simulate and evaluate different resection plans. The system supports surgeons in finding the optimal treatment strategy for each patient and eases the data preparation process. The use of augmented reality contributes to a user-friendly design and simplifies complex interaction with 3D objects. The main function blocks developed so far are: basic augmented reality environment, user interface, rendering, surface reconstruction from segmented volume data sets, surface manipulation and quantitative measurement toolkit. The flexible design allows functionality to be added via plug-ins. First practical evaluation steps have shown good acceptance. Evaluation of the system is ongoing, and future feedback from surgeons will be collected and used for design refinements.

  6. Expert systems - New approaches to computer-aided engineering

    NASA Astrophysics Data System (ADS)

    Dym, C. L.

    This paper provides an overview of the burgeoning new field of expert (knowledge-based) systems. This survey is tutorial in nature, intended to convey the gestalt of such systems to engineers who are newly exposed to the field. The discussion includes definitions, basic concepts, expert system architecture, descriptions of some of the programming tools and environments with which knowledge-based systems can be built, and approaches to knowledge acquisition. Some currently extant expert systems are described en passant, including a few developed for engineering purposes. Comments follow on the engineering of knowledge, as both cultural and social processes. The paper closes with an assessment of the roles that expert systems can play in engineering analysis, design, planning, and education.

  7. Diabetes of the brain: computational approaches and interventional strategies.

    PubMed

    Narasimhan, Kothandaraman; Govindasamy, Meenakumari; Gauthaman, Kalamegam; Kamal, Mohammad A; Abuzenadeh, Adel M; Al-Qahtani, Mohammed; Kanagasabai, Rajaraman

    2014-04-01

    Diabetes mellitus (DM) is characterized by hyperglycemia either due to deficient insulin production (Type 1 Diabetes mellitus) or peripheral insulin resistance of the cells (Type 2 Diabetes mellitus). Both Type 1 and Type 2 Diabetes mellitus are increasingly prevalent, and efforts are directed at actively controlling these metabolic syndromes. Currently, Alzheimer's disease (AD) is gaining recognition as 'Type 3 diabetes' or 'Diabetes of the brain', and it is now evident that this neurodegenerative disease shares multiple pathological features with DM. Alarming is the fact that the incidence of AD might double within the next two decades, and this is certain to cause devastating effects not only to the afflicted individual or the family, but also to the global economy. Methods to either delay the onset or inhibit the progression of AD are therefore necessary. Progressive dementia, increased deposition of amyloid-β protein, neurofibrillary tangles and neuritic plaques in the brain are some of the hallmarks of AD. A better understanding of the disease at the cellular and molecular level will enable identification of possible targets for intervention and pave the way for either development of novel or modification of existing therapeutic options. In this work we have performed semantic data mining analysis on a large collection of recently published data and identified an updated list of common genes expressed in DM and AD. Functional analysis of these genes revealed both existing and missing links involved in a bigger network associated with both disease conditions. Thus we argue that computational analysis methods help not only in understanding the mechanistic links but also in narrowing down precise targets (genes, proteins, metabolites and signalling pathways), and provide the basis for both disease intervention and development of therapeutic options.

  8. A Computational Approach to Estimating Nondisjunction Frequency in Saccharomyces cerevisiae

    PubMed Central

    Chu, Daniel B.; Burgess, Sean M.

    2016-01-01

    Errors segregating homologous chromosomes during meiosis result in aneuploid gametes and are the largest contributing factor to birth defects and spontaneous abortions in humans. Saccharomyces cerevisiae has long served as a model organism for studying the gene network supporting normal chromosome segregation. Measuring homolog nondisjunction frequencies is laborious, and involves dissecting thousands of tetrads to detect missegregation of individually marked chromosomes. Here we describe a computational method (TetFit) to estimate the relative contributions of meiosis I nondisjunction and random-spore death to spore inviability in wild type and mutant strains. These values are based on finding the best-fit distribution of 4, 3, 2, 1, and 0 viable-spore tetrads to an observed distribution. Using TetFit, we found that meiosis I nondisjunction is an intrinsic component of spore inviability in wild-type strains. We show proof-of-principle that the calculated average meiosis I nondisjunction frequency determined by TetFit closely matches empirically determined values in mutant strains. Using these published data sets, TetFit uncovered two classes of mutants: Class A mutants skew toward increased nondisjunction death, and include those with known defects in establishing pairing, recombination, and/or synapsis of homologous chromosomes. Class B mutants skew toward random spore death, and include those with defects in sister-chromatid cohesion and centromere function. Epistasis analysis using TetFit is facilitated by the low numbers of tetrads (as few as 200) required to compare the contributions to spore death in different mutant backgrounds. TetFit analysis does not require any special strain construction, and can be applied to previously observed tetrad distributions. PMID:26747203
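
    The fitting idea can be sketched as follows. This is our simplified reconstruction, not the published TetFit code: we assume a nondisjunction tetrad leaves at most two viable spores, let every otherwise-viable spore die independently with a 'random spore death' probability, and grid-search the two parameters for the best-fitting distribution of 4, 3, 2, 1, and 0 viable-spore tetrads:

```python
import numpy as np
from itertools import product
from math import comb

def binom_pmf(n, k, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def expected_distribution(ndj, death):
    """Expected fractions of tetrads with 4,3,2,1,0 viable spores under
    the simplified model described in the lead-in."""
    dist = np.zeros(5)
    for v in range(5):                      # v = number of viable spores
        normal = binom_pmf(4, v, 1 - death)
        ndj_tet = binom_pmf(2, v, 1 - death) if v <= 2 else 0.0
        dist[v] = (1 - ndj) * normal + ndj * ndj_tet
    return dist[::-1]                       # reorder as 4,3,2,1,0

def fit(observed_counts):
    """Grid-search (nondisjunction, death) minimizing squared error."""
    obs = np.asarray(observed_counts, float)
    obs /= obs.sum()
    grid = np.linspace(0.0, 0.2, 201)
    return min(product(grid, grid),
               key=lambda p: ((expected_distribution(*p) - obs) ** 2).sum())

observed = [820, 90, 70, 12, 8]             # made-up counts (4..0 viable)
print(fit(observed))
```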

  10. Systematic Approach to Computational Design of Gene Regulatory Networks with Information Processing Capabilities.

    PubMed

    Moskon, Miha; Mraz, Miha

    2014-01-01

    We present several measures that can be used in the de novo computational design of biological systems with information processing capabilities. Their main purpose is to objectively evaluate the behavior and identify the biological information processing structures with the best dynamical properties. They can be used to define constraints that allow one to simplify the design of more complex biological systems. These measures can be applied to existing computational design approaches in synthetic biology, i.e., rational and automatic design approaches. We demonstrate their use on (a) computational models of several basic information processing structures implemented with gene regulatory networks and (b) a modular design of a synchronous toggle switch.

  11. Analyses of Physcomitrella patens Ankyrin Repeat Proteins by Computational Approach

    PubMed Central

    Mahmood, Niaz; Tamanna, Nahid

    2016-01-01

    Ankyrin (ANK) repeat containing proteins are evolutionarily conserved and have functions in crucial cellular processes like cell cycle regulation and signal transduction. In this study, through an entirely in silico approach using the first release of the moss genome annotation, we found that at least 54 ANK proteins are present in P. patens. Based on their differential domain composition, the identified ANK proteins were classified into nine subfamilies. Comparative analysis of the different subfamilies of ANK proteins revealed that P. patens contains almost all the known subgroups of ANK proteins found in the other angiosperm species except for the ones having the TPR domain. Phylogenetic analysis using full-length protein sequences supported the subfamily classification, where the members of the same subfamily almost always clustered together. Nonsynonymous-to-synonymous divergence (dN/dS) ratios showed positive selection for the ANK genes of P. patens, which probably helped them to attain significant functional diversity during the course of evolution. Taken together, the data presented here provide useful insights for future functional studies of proteins from this superfamily, as well as for comparative studies of ANK proteins. PMID:27429806

  12. Driving profile modeling and recognition based on soft computing approach.

    PubMed

    Wahab, Abdul; Quek, Chai; Tan, Chin Keong; Takeda, Kazuya

    2009-04-01

    Advancements in biometrics-based authentication have led to its increasing prominence and are being incorporated into everyday tasks. Existing vehicle security systems rely only on alarms or smart cards as forms of protection. A biometric driver recognition system utilizing driving behaviors is a highly novel and personalized approach and could be incorporated into existing vehicle security systems to form a multimodal identification system and offer a greater degree of multilevel protection. In this paper, detailed studies have been conducted to model individual driving behavior in order to identify features that may be efficiently and effectively used to profile each driver. Feature extraction techniques based on Gaussian mixture models (GMMs) are proposed and implemented. Features extracted from the accelerator and brake pedal pressure were then used as inputs to a fuzzy neural network (FNN) system to ascertain the identity of the driver. Two fuzzy neural networks, namely, the evolving fuzzy neural network (EFuNN) and the adaptive network-based fuzzy inference system (ANFIS), are used to demonstrate the viability of the two proposed feature extraction techniques. Performance was compared against an artificial neural network (NN) implementation using the multilayer perceptron (MLP) network and a statistical method based on the GMM. Extensive testing was conducted and the results show great potential in the use of the FNN for real-time driver identification and verification. In addition, the profiling of driver behaviors has numerous other potential applications for use by law enforcement and companies dealing with buses and truck drivers. PMID:19258199
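
    The GMM stage can be illustrated compactly. The study feeds GMM-based pedal-pressure features into fuzzy neural networks; the sketch below stops at per-driver GMMs scored by likelihood, and the data are synthetic stand-ins:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-ins for (accelerator, brake) pedal-pressure features
driver_a = rng.normal([0.3, 0.1], 0.05, (500, 2))
driver_b = rng.normal([0.5, 0.2], 0.08, (500, 2))

# One GMM per driver models that driver's pedal behavior
gmm_a = GaussianMixture(n_components=3, random_state=0).fit(driver_a)
gmm_b = GaussianMixture(n_components=3, random_state=0).fit(driver_b)

# Identify an unseen trace by average log-likelihood under each model
trace = rng.normal([0.31, 0.11], 0.05, (100, 2))
scores = {"driver A": gmm_a.score(trace), "driver B": gmm_b.score(trace)}
print("best match:", max(scores, key=scores.get))
```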

  14. An integrative computational approach for prioritization of genomic variants

    SciTech Connect

    Dubchak, Inna; Balasubramanian, Sandhya; Wang, Sheng; Meydan, Cem; Sulakhe, Dinanath; Poliakov, Alexander; Börnigen, Daniela; Xie, Bingqing; Taylor, Andrew; Ma, Jianzhu; Paciorkowski, Alex R.; Mirzaa, Ghayda M.; Dave, Paul; Agam, Gady; Xu, Jinbo; Al-Gazali, Lihadh; Mason, Christopher E.; Ross, M. Elizabeth; Maltsev, Natalia; Gilliam, T. Conrad; Huang, Qingyang

    2014-12-15

    An essential step in the discovery of molecular mechanisms contributing to disease phenotypes and efficient experimental planning is the development of weighted hypotheses that estimate the functional effects of sequence variants discovered by high-throughput genomics. With the increasing specialization of the bioinformatics resources, creating analytical workflows that seamlessly integrate data and bioinformatics tools developed by multiple groups becomes inevitable. Here we present a case study of a use of the distributed analytical environment integrating four complementary specialized resources, namely the Lynx platform, VISTA RViewer, the Developmental Brain Disorders Database (DBDB), and the RaptorX server, for the identification of high-confidence candidate genes contributing to pathogenesis of spina bifida. The analysis resulted in prediction and validation of deleterious mutations in the SLC19A placental transporter in mothers of the affected children that causes narrowing of the outlet channel and therefore leads to the reduced folate permeation rate. The described approach also enabled correct identification of several genes, previously shown to contribute to pathogenesis of spina bifida, and suggestion of additional genes for experimental validations. The study demonstrates that the seamless integration of bioinformatics resources enables fast and efficient prioritization and characterization of genomic factors and molecular networks contributing to the phenotypes of interest. PMID:25506935

  17. Effects of artificial gravity on the cardiovascular system: Computational approach

    NASA Astrophysics Data System (ADS)

    Diaz Artiles, Ana; Heldt, Thomas; Young, Laurence R.

    2016-09-01

    The model simulates steady-state cardiovascular behavior during sustained artificial gravity and exercise. Further validation of the model was performed using experimental data from the combined exercise and artificial gravity experiments conducted on the MIT CRC, and these results will be presented separately in future publications. This unique computational framework can be used to simulate a variety of centrifuge configurations and exercise intensities to improve understanding and inform decisions about future implementation of artificial gravity in space.

  18. Computer-Tailored Interventions Motivating People To Adopt Health Promoting Behaviours: Introduction to a New Approach.

    ERIC Educational Resources Information Center

    de Vries, Hein; Brug, Johannes

    1999-01-01

    Introduces computerized tailoring and provides a brief overview of its elements and applications as well as of the papers of this issue. The articles describe the application of behavioral change theories to this new approach, advantages of computer-tailored intervention over more general mass media approaches, their effectiveness and future…

  19. A Knowledge Engineering Approach to Developing Educational Computer Games for Improving Students' Differentiating Knowledge

    ERIC Educational Resources Information Center

    Hwang, Gwo-Jen; Sung, Han-Yu; Hung, Chun-Ming; Yang, Li-Hsueh; Huang, Iwen

    2013-01-01

    Educational computer games have been recognized as being a promising approach for motivating students to learn. Nevertheless, previous studies have shown that without proper learning strategies or supportive models, the learning achievement of students might not be as good as expected. In this study, a knowledge engineering approach is proposed…

  20. An efficient computational approach for evaluating radiation flux for laser driven inertial confinement fusion targets

    NASA Astrophysics Data System (ADS)

    Li, Haiyan; Huang, Yunbao; Jiang, Shaoen; Jing, Longfei; Ding, Yongkun

    2015-08-01

    Radiation flux computation on the target is very important for laser-driven Inertial Confinement Fusion, and view-factor-based equation models (MacFarlane, 2003; Srivastava et al., 2000) are often used to compute this radiation flux on the capsule or samples inside the hohlraum. However, the equation models do not lead to sparse matrices and may involve an intensive solution process as the discrete mesh elements become smaller and the number of equations increases. An efficient approach for the computation of radiation flux is proposed in this paper, in which (1) symmetric and positive definite properties are achieved by transformation, and (2) an efficient Cholesky factorization algorithm is applied to significantly accelerate the solution of the resulting equation models. Finally, two targets on a laser facility built in China are considered to validate the computing efficiency of the present approach. The results show that the radiation flux computation can be accelerated by a factor of 2.
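
    The payoff of step (2) is easy to demonstrate on a stand-in system: once the matrix is symmetric positive definite, a Cholesky factorization can be computed once and reused for cheap triangular solves. The matrix below is random, not an actual view-factor matrix:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(3)

# Dense SPD stand-in for the transformed view-factor system A x = b
n = 2000
M = rng.random((n, n))
A = M @ M.T + n * np.eye(n)      # this construction guarantees SPD
b = rng.random(n)

c, low = cho_factor(A)           # O(n^3/3) factorization, done once
x = cho_solve((c, low), b)       # cheap triangular solves per new b
print(np.allclose(A @ x, b))     # True
```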

  1. A new computer approach to mixed feature classification for forestry application

    NASA Technical Reports Server (NTRS)

    Kan, E. P.

    1976-01-01

    A computer approach for mapping mixed forest features (i.e., types, classes) from computer classification maps is discussed. Mixed features such as mixed softwood/hardwood stands are treated as admixtures of softwood and hardwood areas. Large-area mixed features are identified and small-area features neglected when the nominal size of a mixed feature can be specified. The computer program merges small isolated areas into surrounding areas by iterative application of a postprocessing algorithm that eliminates small connected sets. For a forestry application, computer-classified LANDSAT multispectral scanner data of the Sam Houston National Forest were used to demonstrate the proposed approach. The technique was successful in cleaning the salt-and-pepper appearance of multiclass classification maps and in mapping admixtures of softwood and hardwood areas. However, the computer-mapped mixed areas matched the ground truth very poorly because of inadequate resolution and an inappropriate definition of mixed features.
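
    A modern re-creation of the small-connected-set elimination idea (this is not the 1976 program; the neighborhood rule and stopping criterion are our own choices):

```python
import numpy as np
from scipy import ndimage

def merge_small_regions(classmap, min_size):
    """Reassign connected regions smaller than `min_size` pixels to the
    most common class among their immediate neighbors, iterating until
    nothing changes."""
    out = classmap.copy()
    changed = True
    while changed:
        changed = False
        for cls in np.unique(out):
            labels, n = ndimage.label(out == cls)
            sizes = ndimage.sum(np.ones_like(out), labels, range(1, n + 1))
            for i in np.flatnonzero(sizes < min_size) + 1:
                region = labels == i
                border = ndimage.binary_dilation(region) & ~region
                if border.any():
                    vals, counts = np.unique(out[border], return_counts=True)
                    out[region] = vals[counts.argmax()]
                    changed = True
    return out

demo = np.array([[0, 0, 0, 1],
                 [0, 2, 0, 1],          # the lone '2' gets absorbed
                 [0, 0, 0, 1],
                 [1, 1, 1, 1]])
print(merge_small_regions(demo, min_size=2))
```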

  2. Hyperspectral Aquatic Radiative Transfer Modeling Using a High-Performance Cluster Computing Based Approach

    SciTech Connect

    Filippi, Anthony; Bhaduri, Budhendra L; Naughton, III, Thomas J; King, Amy L; Scott, Stephen L; Guneralp, Inci

    2012-01-01

    For aquatic studies, radiative transfer (RT) modeling can be used to compute hyperspectral above-surface remote sensing reflectance that can be utilized for inverse model development. Inverse models can provide bathymetry and inherent- and bottom-optical property estimation. Because measured oceanic field/organic datasets are often spatio-temporally sparse, synthetic data generation is useful in yielding sufficiently large datasets for inversion model development; however, these forward-modeled data are computationally expensive and time-consuming to generate. This study establishes the magnitude of wall-clock-time savings achieved for performing large, aquatic RT batch-runs using parallel computing versus a sequential approach. Given 2,600 simulations and identical compute-node characteristics, sequential architecture required ~100 hours until termination, whereas a parallel approach required only ~2.5 hours (42 compute nodes), a 40x speed-up. Tools developed for this parallel execution are discussed.

  4. Hybrid approach for fast occlusion processing in computer-generated hologram calculation.

    PubMed

    Gilles, Antonin; Gioia, Patrick; Cozot, Rémi; Morin, Luce

    2016-07-10

    A hybrid approach for fast occlusion processing in computer-generated hologram calculation is studied in this paper. The proposed method is based on the combination of two commonly used approaches that complement one another: the point-source and wave-field approaches. By using these two approaches together, the proposed method thus takes advantage of both of them. In this method, the 3D scene is first sliced into several depth layers parallel to the hologram plane. Light scattered by the scene is then propagated and shielded from one layer to another using either a point-source or a wave-field approach according to a threshold criterion on the number of points within the layer. Finally, the hologram is obtained by computing the propagation of light from the nearest layer to the hologram plane. Experimental results reveal that the proposed method does not produce any visible artifact and outperforms both the point-source and wave-field approaches.
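
    A skeletal version of the per-layer decision follows. It omits the paper's occlusion/shielding step between layers, clamps evanescent waves, and uses an arbitrary threshold; it only illustrates how the two propagation routines coexist:

```python
import numpy as np

wavelength = 633e-9                        # HeNe wavelength, for example
k = 2 * np.pi / wavelength

def point_source_field(points, amps, grid_x, grid_y, z):
    """Spherical-wave superposition; cost grows with the point count."""
    field = np.zeros(grid_x.shape, dtype=complex)
    for (px, py), a in zip(points, amps):
        r = np.sqrt((grid_x - px) ** 2 + (grid_y - py) ** 2 + z ** 2)
        field += a * np.exp(1j * k * r) / r
    return field

def angular_spectrum(field, dx, z):
    """Wave-field propagation of a whole layer at fixed FFT cost."""
    fy = np.fft.fftfreq(field.shape[0], dx)
    fx = np.fft.fftfreq(field.shape[1], dx)
    FX, FY = np.meshgrid(fx, fy)
    kz = 2 * np.pi * np.sqrt(np.maximum(0.0, wavelength**-2 - FX**2 - FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

THRESHOLD = 500                            # illustrative switch point

def propagate_layer(points, layer_field, grid_x, grid_y, dx, z):
    """Point-source route for sparse layers, wave-field route otherwise."""
    if len(points) < THRESHOLD:
        return point_source_field(points, np.ones(len(points)),
                                  grid_x, grid_y, z)
    return angular_spectrum(layer_field, dx, z)
```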

  5. Computational Approach for Ranking Mutant Enzymes According to Catalytic Reaction Rates

    PubMed Central

    Kumarasiri, Malika; Baker, Gregory A.; Soudackov, Alexander V.

    2009-01-01

    A computationally efficient approach for ranking mutant enzymes according to the catalytic reaction rates is presented. This procedure requires the generation and equilibration of the mutant structures, followed by the calculation of partial free energy curves using an empirical valence bond potential in conjunction with biased molecular dynamics simulations and umbrella integration. The individual steps are automated and optimized for computational efficiency. This approach is used to rank a series of 15 dihydrofolate reductase mutants according to the hydride transfer reaction rate. The agreement between the calculated and experimental changes in the free energy barrier upon mutation is encouraging. The computational approach predicts the correct direction of the change in free energy barrier for all mutants, and the correlation coefficient between the calculated and experimental data is 0.82. This general approach for ranking protein designs has implications for protein engineering and drug design. PMID:19235997

  6. Teleportation-based quantum computation, extended Temperley-Lieb diagrammatical approach and Yang-Baxter equation

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Zhang, Kun; Pang, Jinglong

    2016-01-01

    This paper focuses on the study of topological features in teleportation-based quantum computation and aims at presenting a detailed review on teleportation-based quantum computation (Gottesman and Chuang in Nature 402: 390, 1999). In the extended Temperley-Lieb diagrammatical approach, we clearly show that such topological features make the fault-tolerant construction of both universal quantum gates and four-partite entangled states more intuitive and simpler. Furthermore, we describe the Yang-Baxter gate by its extended Temperley-Lieb configuration and then study teleportation-based quantum circuit models using the Yang-Baxter gate. Moreover, we discuss the relationship between the extended Temperley-Lieb diagrammatical approach and the Yang-Baxter gate approach. With these research results, we propose a worthwhile subject, the extended Temperley-Lieb diagrammatical approach, for physicists in quantum information and quantum computation.

  7. Use of Integrated Computational Approaches in the Search for New Therapeutic Agents.

    PubMed

    Persico, Marco; Di Dato, Antonio; Orteca, Nausicaa; Cimino, Paola; Novellino, Ettore; Fattorusso, Caterina

    2016-09-01

    Computer-aided drug discovery plays a strategic role in the development of new potential therapeutic agents. Nevertheless, the modeling of biological systems still represents a challenge for computational chemists and at present a single computational method able to face such challenge is not available. This prompted us, as computational medicinal chemists, to develop in-house methodologies by mixing various bioinformatics and computational tools. Importantly, thanks to multi-disciplinary collaborations, our computational studies were integrated and validated by experimental data in an iterative process. In this review, we describe some recent applications of such integrated approaches and how they were successfully applied in i) the search of new allosteric inhibitors of protein-protein interactions and ii) the development of new redox-active antimalarials from natural leads. PMID:27546035

  9. CATIA-GDML geometry builder

    NASA Astrophysics Data System (ADS)

    Belogurov, S.; Berchun, Yu; Chernogorov, A.; Malzacher, P.; Ovcharenko, E.; Semennikov, A.

    2011-12-01

Due to the conceptual difference between geometry descriptions in Computer-Aided Design (CAD) systems and particle transport Monte Carlo (MC) codes, direct conversion of detector geometry in either direction is not feasible. An original set of tools has been developed for building a GEANT4/ROOT-compatible geometry in the CATIA CAD system and exchanging it with the mentioned MC packages using the GDML file format. A special structure of the CATIA product tree, a wide range of primitives, different types of multiple volume instantiation, and supporting macros have been implemented.

  10. P ≠NP Millennium-Problem(MP) TRIVIAL Physics Proof Via NATURAL TRUMPS Artificial-"Intelligence" Via: Euclid Geometry, Plato Forms, Aristotle Square-of-Opposition, Menger Dimension-Theory Connections!!! NO Computational-Complexity(CC)/ANYthing!!!: Geometry!!!

    NASA Astrophysics Data System (ADS)

    Clay, London; Menger, Karl; Rota, Gian-Carlo; Euclid, Alexandria; Siegel, Edward

P ≠NP MP proof is by computer-"science"/SEANCE(!!!)(CS) computational-"intelligence" lingo jargonial-obfuscation(JO) NATURAL-Intelligence(NI) DISambiguation! CS P =(?) =NP MEANS (Deterministic)(PC) = (?) =(Non-D)(PC) i.e. D(P) =(?) = N(P). For inclusion(equality) vs. exclusion (inequality) irrelevant (P) simply cancels!!! (Equally any/all other CCs IF both sides identical). Crucial question left: (D) =(?) =(ND), i.e. D =(?) = N. Algorithmics[Sipser[Intro. Thy. Comp.('97) p.49 Fig.1.15!!!

  11. Non-invasive computation of aortic pressure maps: a phantom-based study of two approaches

    NASA Astrophysics Data System (ADS)

    Delles, Michael; Schalck, Sebastian; Chassein, Yves; Müller, Tobias; Rengier, Fabian; Speidel, Stefanie; von Tengg-Kobligk, Hendrik; Kauczor, Hans-Ulrich; Dillmann, Rüdiger; Unterhinninghofen, Roland

    2014-03-01

Patient-specific blood pressure values in the human aorta are an important parameter in the management of cardiovascular diseases. A direct measurement of these values is only possible by invasive catheterization at a limited number of measurement sites. To overcome these drawbacks, two non-invasive approaches to computing patient-specific relative aortic blood pressure maps throughout the entire aortic vessel volume are investigated by our group. The first approach uses computations from complete time-resolved, three-dimensional flow velocity fields acquired by phase-contrast magnetic resonance imaging (PC-MRI), whereas the second approach relies on computational fluid dynamics (CFD) simulations with ultrasound-based boundary conditions. A detailed evaluation of these computational methods under realistic conditions is necessary in order to investigate their overall robustness and accuracy as well as their sensitivity to certain algorithmic parameters. We present a comparative study of the two blood pressure computation methods in an experimental phantom setup, which mimics a simplified thoracic aorta. The comparative analysis includes the investigation of the impact of algorithmic parameters on the MRI-based blood pressure computation and the impact of extracting pressure maps in a voxel grid from the CFD simulations. Overall, a very good agreement between the results of the two computational approaches can be observed, despite the fact that both methods used completely separate measurements as input data. The comparative study therefore indicates that both non-invasive pressure computation methods show excellent robustness and accuracy and can be used for research purposes in the management of cardiovascular diseases.
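
    To make the velocity-to-pressure step of the PC-MRI branch concrete, here is a toy sketch for a steady 2-D field: a pressure gradient is estimated from the inviscid momentum balance, and a Poisson equation is relaxed for the relative pressure. Unsteady and viscous terms, 3-D vessel masks, and the boundary handling of the actual methods are omitted, and all names are illustrative.

      import numpy as np

      def relative_pressure_2d(u, v, dx, rho=1060.0, iters=5000):
          """Relative pressure from a steady 2-D velocity field (arrays [y, x]).

          grad p is taken from the steady inviscid momentum balance,
          grad p = -rho (u . grad) u, and div(grad p) = div(g) is relaxed by
          Jacobi sweeps with a crude zero border for brevity."""
          duy, dux = np.gradient(u, dx)
          dvy, dvx = np.gradient(v, dx)
          gx = -rho * (u * dux + v * duy)
          gy = -rho * (u * dvx + v * dvy)
          rhs = np.gradient(gx, dx)[1] + np.gradient(gy, dx)[0]
          p = np.zeros_like(u)
          for _ in range(iters):
              p[1:-1, 1:-1] = 0.25 * (p[1:-1, :-2] + p[1:-1, 2:]
                                      + p[:-2, 1:-1] + p[2:, 1:-1]
                                      - dx ** 2 * rhs[1:-1, 1:-1])
          return p - p.mean()     # only pressure differences are meaningful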

  12. A uniform algebraically-based approach to computational physics and efficient programming

    NASA Astrophysics Data System (ADS)

    Raynolds, James; Mullin, Lenore

    2007-03-01

We present an approach to computational physics in which a common formalism is used both to express the physical problem and to describe the underlying details of how computation is realized on arbitrary multiprocessor/memory computer architectures. This formalism is the embodiment of a generalized algebra of multi-dimensional arrays (A Mathematics of Arrays), and an efficient computational implementation is obtained through the composition of array indices (the psi-calculus) for algorithms defined using matrices, tensors, and arrays in general. The power of this approach arises from the fact that multiple computational steps (e.g. Fourier transform followed by convolution) can be algebraically composed and reduced to a simplified expression (an Operational Normal Form) that, when directly translated into computer code, can be mathematically proven to be the most efficient implementation with the least number of temporary variables. This approach will be illustrated in the context of a cache-optimized FFT that outperforms or is competitive with established library routines: ESSL, FFTW, IMSL, NAG.
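
    The index-composition idea can be conveyed with a toy sketch in Python (an illustration in the spirit of the psi-calculus, not A Mathematics of Arrays itself): arrays are (shape, index-function) pairs, operations compose the index functions, and a single traversal materializes the fused expression without intermediate temporaries.

      import numpy as np

      # A "lazy array" is a shape plus an index function; operations compose
      # index functions instead of materializing temporaries.
      class Lazy:
          def __init__(self, shape, idx):
              self.shape, self.idx = shape, idx

      def lift(a):
          return Lazy(a.shape, lambda i: a[i])

      def transpose(x):
          return Lazy(x.shape[::-1], lambda i: x.idx(i[::-1]))

      def shift(x, k):
          """Circular shift along axis 0, expressed purely on indices."""
          n = x.shape[0]
          return Lazy(x.shape, lambda i: x.idx(((i[0] + k) % n,) + i[1:]))

      def materialize(x):
          """A single traversal evaluates the whole fused expression."""
          out = np.empty(x.shape)
          for i in np.ndindex(*x.shape):
              out[i] = x.idx(i)
          return out

      a = np.arange(12.0).reshape(3, 4)
      expr = shift(transpose(lift(a)), 1)   # composed; no temporaries created
      print(np.allclose(materialize(expr), np.roll(a.T, -1, axis=0)))  # True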

  13. A computational approach to detect and segment cytoplasm in muscle fiber images.

    PubMed

    Guo, Yanen; Xu, Xiaoyin; Wang, Yuanyuan; Yang, Zhong; Wang, Yaming; Xia, Shunren

    2015-06-01

We developed a computational approach to detect and segment cytoplasm in microscopic images of skeletal muscle fibers. The computational approach provides computer-aided analysis of cytoplasm objects in muscle fiber images to facilitate biomedical research. Cytoplasm in muscle fibers plays an important role in maintaining the functioning and health of muscular tissues. Therefore, cytoplasm is often used as a marker in broad applications of musculoskeletal research, including our research on the treatment of muscular disorders such as Duchenne muscular dystrophy, a disease that has no available treatment. However, it is often challenging to analyze and quantify cytoplasm given the large number of images typically generated in experiments and the large number of muscle fibers contained in each image. Manual analysis is not only time-consuming but also prone to human error. In this work we developed a computational approach to detect and segment the longitudinal sections of cytoplasm based on a modified graph cuts technique and an iterative splitting method to extract cytoplasm objects from the background. First, cytoplasm objects are extracted from the background using the modified graph cuts technique, which is designed to optimize an energy function. Second, an iterative splitting method is designed to separate touching or adjacent cytoplasm objects from the results of the graph cuts. We tested the computational approach on real data from in vitro experiments and found that it achieves satisfactory performance in terms of precision and recall rates.
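
    A minimal sketch of the graph-cuts extraction step follows, assuming a toy energy with squared-intensity data terms and a constant smoothness term; networkx's generic min-cut stands in for a specialized max-flow solver, and the paper's modified energy function and iterative splitting are not reproduced.

      import numpy as np
      import networkx as nx

      def graph_cut_segment(img, mu_fg, mu_bg, smooth=0.1):
          """Binary segmentation by s-t minimum cut on a 4-connected grid.

          Cutting the p->sink edge (cost (I - mu_fg)^2) leaves p on the source
          (foreground) side; cutting source->p (cost (I - mu_bg)^2) assigns
          background. Constant neighbour capacities encourage smooth regions.
          """
          h, w = img.shape
          G = nx.DiGraph()
          for y in range(h):
              for x in range(w):
                  p = (y, x)
                  G.add_edge('s', p, capacity=float((img[y, x] - mu_bg) ** 2))
                  G.add_edge(p, 't', capacity=float((img[y, x] - mu_fg) ** 2))
                  for q in ((y + 1, x), (y, x + 1)):
                      if q[0] < h and q[1] < w:
                          G.add_edge(p, q, capacity=smooth)
                          G.add_edge(q, p, capacity=smooth)
          _, (source_side, _) = nx.minimum_cut(G, 's', 't')
          mask = np.zeros_like(img, dtype=bool)
          for p in source_side - {'s'}:
              mask[p] = True
          return mask

      rng = np.random.default_rng(1)
      img = rng.normal(0.2, 0.05, (16, 16))
      img[4:12, 4:12] += 0.6                      # bright square = "cytoplasm"
      print(graph_cut_segment(img, 0.8, 0.2).sum(), "foreground pixels")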

  14. A radial basis function network approach for the computation of inverse continuous time variant functions.

    PubMed

    Mayorga, René V; Carrera, Jonathan

    2007-06-01

This paper presents an efficient approach for the fast computation of inverse continuous time-variant functions through the proper use of Radial Basis Function Networks (RBFNs). The approach is based on implementing RBFNs for computing inverse continuous time-variant functions via an overall damped least squares solution that includes a novel null-space vector for singularity prevention. The singularity-avoidance null-space vector is derived by developing a sufficiency condition for singularity prevention, which leads to establishing characterizing matrices and an associated performance index.
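
    The overall damped least squares update described here has a standard closed form, sketched below with NumPy. The paper derives a specific singularity-avoidance null-space vector from its sufficiency condition; in this sketch z is left as a free input.

      import numpy as np

      def damped_inverse_step(J, err, z, lam=1e-2):
          """One damped least-squares update for inverting y = f(x).

          J    : Jacobian of f at the current x (m x n)
          err  : y_target - f(x)
          z    : null-space vector steering the update away from singular
                 configurations (the paper derives a specific z; here it is
                 a free input)
          lam  : damping factor keeping J J^T + lam^2 I well conditioned
          """
          m, n = J.shape
          dx_task = J.T @ np.linalg.solve(J @ J.T + lam ** 2 * np.eye(m), err)
          P = np.eye(n) - np.linalg.pinv(J) @ J    # projector onto null(J)
          return dx_task + P @ z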

  15. An Approach to Integrate a Space-Time GIS Data Model with High Performance Computers

    SciTech Connect

    Wang, Dali; Zhao, Ziliang; Shaw, Shih-Lung

    2011-01-01

In this paper, we describe an approach to integrating a Space-Time GIS data model with a high performance computing platform. The Space-Time GIS data model was developed in a desktop computing environment. We use the Space-Time GIS data model to generate a GIS module, which organizes a series of remote sensing data. We are in the process of porting the GIS module into an HPC environment, in which the GIS module handles large datasets directly via a parallel file system. Although this is an ongoing project, the authors hope this effort can inspire further discussions on the integration of GIS with high performance computing platforms.

  16. Anharmonic-potential-effective-charge approach for computing Raman cross sections of a gas

    NASA Astrophysics Data System (ADS)

    Kutteh, Ramzi; van Zandt, L. L.

    1993-05-01

    An anharmonic-potential-effective-charge approach for computing relative Raman intensities of a gas is developed. The equations of motion are set up and solved for the driven anharmonic molecular vibrations. An explicit expression for the differential polarizability tensor is derived and its properties discussed. This expression is then used within the context of Placzek's theory [Handbuch der Radiologie (Akademische Verlagsgesellschaft, Leipzig, 1934), Vol. VI] to compute the Raman cross section and depolarization ratio of a gas. The computation is carried out for the small molecules CO2, CS2, SO2, and CCl4; results are compared with experimental measurements and discussed.

  17. Algorithmic approaches for computing elementary modes in large biochemical reaction networks.

    PubMed

    Klamt, S; Gagneur, J; von Kamp, A

    2005-12-01

The concept of elementary (flux) modes provides a rigorous description of pathways in metabolic networks and has proved valuable in a number of applications. However, the computation of elementary modes is a hard computational task that has given rise to several variants of algorithms in recent years. This work brings substantial progress to this issue. The authors start with a brief review of results obtained from previous work regarding (a) a unified framework for elementary-mode computation, (b) network compression and redundancy removal and (c) the binary approach, by which elementary modes are determined as binary patterns, reducing the memory demand drastically without loss of speed. The authors then address further issues. First, a new way to perform the elementarity tests required during the computation of elementary modes is proposed, which empirically improves the computation time significantly in large networks. Second, a method to compute only those elementary modes in which certain reactions are involved is derived. Relying on this method, a promising approach for computing elementary modes in a completely distributed manner, by decomposing the full problem into arbitrarily many sub-tasks, is presented. The new methods have been implemented in the freely available software tools FluxAnalyzer and Metatool, and benchmark tests in realistic networks emphasise the potential of the proposed algorithms.
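
    The elementarity test at the core of such computations can be stated in a few lines; the rank-based version below is the generic mathematical test, not the authors' optimized binary implementation.

      import numpy as np

      def is_elementary(S, e, tol=1e-9):
          """Elementarity test: a flux vector e with S @ e = 0 is elementary
          iff the stoichiometric submatrix over its support has a
          one-dimensional null space, i.e. rank = |supp(e)| - 1."""
          supp = np.flatnonzero(np.abs(e) > tol)
          return np.linalg.matrix_rank(S[:, supp], tol=tol) == len(supp) - 1

      S = np.array([[1., -1., 0., 0.],    # A: produced by R1, consumed by R2
                    [0., 1., -1., -1.]])  # B: produced by R2, consumed by R3, R4
      print(is_elementary(S, np.array([1., 1., 1., 0.])))    # True
      print(is_elementary(S, np.array([1., 1., .5, .5])))    # False: decomposable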

  18. 76 FR 52353 - Assumption Buster Workshop: “Current Implementations of Cloud Computing Indicate a New Approach...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-22

... Assumption Buster Workshop: "Current Implementations of Cloud Computing Indicate a New Approach to Security... computing. The workshop on this topic will be held in Gaithersburg, MD on October 21, 2011. Assertion: "Current implementations of cloud computing indicate a new approach to security" Implementations of...

  19. Complex three dimensional modelling of porous media using high performance computing and multi-scale incompressible approach

    NASA Astrophysics Data System (ADS)

    Martin, R.; Orgogozo, L.; Noiriel, C. N.; Guibert, R.; Golfier, F.; Debenest, G.; Quintard, M.

    2013-05-01

In the context of biofilm growth in porous media, we developed high performance computing tools to study the impact of biofilms on fluid transport through the pores of a solid matrix. Indeed, biofilms are consortia of micro-organisms that develop in polymeric extracellular substances, generally located at fluid-solid interfaces such as pore interfaces in a water-saturated porous medium. Several applications of biofilms in porous media are encountered, for instance in bio-remediation methods, where they allow the dissolution of organic pollutants. Many theoretical studies have been carried out on the resulting effective properties of these modified media ([1], [2], [3]), but the bio-colonized porous media under consideration are mainly described by simplified theoretical geometries (stratified media, cubic networks of spheres, ...). Recent experimental advances, however, have provided tomography images of bio-colonized porous media, which allow us to observe realistic biofilm micro-structures inside the porous media [4]. To solve the closure systems of equations related to upscaling procedures in realistic porous media, we solve the velocity field of fluids through pores on complex geometries that are described with a huge number of cells (up to billions). Calculations are made on a realistic 3D sample geometry obtained by X-ray micro-tomography. Cell volumes come from a percolation experiment performed to estimate the impact of precipitation processes on fluid transport in porous media [5]. Average permeabilities of the sample are obtained from the velocities by using MPI-based high performance computing on up to 1000 processors. The steady-state Stokes equations are solved using a finite volume approach. Relaxation pre-conditioning is introduced to further accelerate the code. Good weak and strong scaling are reached, with results obtained in hours instead of weeks, and acceleration factors of 20 up to 40 can be achieved. Tens of geometries can now be

  20. Integral geometry and holography

    SciTech Connect

    Czech, Bartlomiej; Lamprou, Lampros; McCandlish, Samuel; Sully, James

    2015-10-27

    We present a mathematical framework which underlies the connection between information theory and the bulk spacetime in the AdS3/CFT2 correspondence. A key concept is kinematic space: an auxiliary Lorentzian geometry whose metric is defined in terms of conditional mutual informations and which organizes the entanglement pattern of a CFT state. When the field theory has a holographic dual obeying the Ryu-Takayanagi proposal, kinematic space has a direct geometric meaning: it is the space of bulk geodesics studied in integral geometry. Lengths of bulk curves are computed by kinematic volumes, giving a precise entropic interpretation of the length of any bulk curve. We explain how basic geometric concepts -- points, distances and angles -- are reflected in kinematic space, allowing one to reconstruct a large class of spatial bulk geometries from boundary entanglement entropies. In this way, kinematic space translates between information theoretic and geometric descriptions of a CFT state. As an example, we discuss in detail the static slice of AdS3 whose kinematic space is two-dimensional de Sitter space.

  2. Computer-aided fit testing: an approach for examining the user/equipment interface

    NASA Astrophysics Data System (ADS)

    Corner, Brian D.; Beecher, Robert M.; Paquette, Steven

    1997-03-01

Developments in laser digitizing technology now make it possible to capture very accurate 3D images of the surface of the human body in less than 20 seconds. Applications for the images range from animation of movie characters to the design and visualization of clothing and individual equipment (CIE). In this paper we focus on modeling the user/equipment interface. Defining the relative geometry between user and equipment provides a better understanding of equipment performance, and can make the design cycle more efficient. Computer-aided fit testing (CAFT) is the application of graphical and statistical techniques to visualize and quantify the human/equipment interface in virtual space. In short, CAFT aims to measure the relative geometry between a user and his or her equipment. The design cycle changes with the introduction of CAFT: some evaluation may now be done in the CAD environment prior to prototyping. CAFT may be applied in two general ways: (1) to aid in the creation of new equipment designs and (2) to evaluate current designs for compliance with performance specifications. We demonstrate the application of CAFT with two examples. First, we show how a prototype helmet may be evaluated for fit, and second, we demonstrate how CAFT may be used to measure body armor coverage.

  3. Quantum Consequences of Parameterizing Geometry

    NASA Astrophysics Data System (ADS)

    Wanas, M. I.

    2002-12-01

The marriage between geometrization and quantization has not been successful so far. It is well known that quantization of gravity, using known quantization schemes, is not satisfactory. It may therefore be of interest to look for another approach to this problem. Recently, it has been shown that geometries with torsion admit quantum paths. Such geometries should be parameterized in order to preserve the quantum properties that appear in the paths. The present work explores the consequences of parameterizing such a geometry. It is shown that the quantum properties appearing in the path equations are transferred to other geometric entities.

  4. Numerical characterization of nonlinear dynamical systems using parallel computing: The role of GPUS approach

    NASA Astrophysics Data System (ADS)

    Fazanaro, Filipe I.; Soriano, Diogo C.; Suyama, Ricardo; Madrid, Marconi K.; Oliveira, José Raimundo de; Muñoz, Ignacio Bravo; Attux, Romis

    2016-08-01

The characterization of nonlinear dynamical systems and their attractors in terms of invariant measures, basins of attraction and the structure of their vector fields usually outlines a task strongly related to the underlying computational cost. In this work, the practical aspects related to the use of parallel computing - especially the use of Graphics Processing Units (GPUs) and of the Compute Unified Device Architecture (CUDA) - are reviewed and discussed in the context of nonlinear dynamical systems characterization. Here, such characterization is performed by obtaining both local and global Lyapunov exponents for the classical forced Duffing oscillator. The local divergence measure was employed in the computation of the Lagrangian Coherent Structures (LCSs), revealing the general organization of the flow according to the obtained separatrices, while the global Lyapunov exponents were used to characterize the attractors obtained under one or more bifurcation parameters. These simulation sets also illustrate the required computation time and the speedup gains provided by different parallel computing strategies, justifying the employment and the relevance of GPUs and CUDA in such an extensive numerical approach. Finally, more than simply providing an overview supported by a representative set of simulations, this work also aims to be a unified introduction to the use of the mentioned parallel computing tools in the context of nonlinear dynamical systems, providing codes and examples to be executed in MATLAB using the CUDA environment, something that is usually fragmented across different scientific communities and restricted to specialists in parallel computing strategies.
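
    As a plain-CPU illustration of the global-exponent computation that the paper parallelizes on GPUs, the sketch below applies Benettin's two-trajectory renormalization to the forced Duffing oscillator; the parameter values are common textbook choices and not necessarily those of the paper.

      import numpy as np

      def duffing(t, s, delta=0.3, alpha=-1.0, beta=1.0, gamma=0.5, omega=1.2):
          """Forced Duffing: x'' + delta*x' + alpha*x + beta*x**3 = gamma*cos(omega*t)."""
          x, v = s
          return np.array([v, -delta * v - alpha * x - beta * x ** 3
                           + gamma * np.cos(omega * t)])

      def rk4_step(f, t, s, h):
          k1 = f(t, s)
          k2 = f(t + h / 2, s + h / 2 * k1)
          k3 = f(t + h / 2, s + h / 2 * k2)
          k4 = f(t + h, s + h * k3)
          return s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

      def largest_lyapunov(s0, h=0.01, steps=100_000, d0=1e-8):
          """Benettin's method: evolve a reference and a perturbed trajectory,
          accumulate the log stretching of their separation, renormalize."""
          s = np.array(s0, dtype=float)
          sp = s + np.array([d0, 0.0])
          acc = 0.0
          for n in range(steps):          # a transient-discard phase is omitted
              t = n * h
              s = rk4_step(duffing, t, s, h)
              sp = rk4_step(duffing, t, sp, h)
              d = np.linalg.norm(sp - s)
              acc += np.log(d / d0)
              sp = s + (sp - s) * (d0 / d)
          return acc / (steps * h)

      print(largest_lyapunov([0.1, 0.0]))  # > 0 signals chaos for these parameters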

  5. A dual-energy approach for improvement of the measurement consistency in computed tomography

    NASA Astrophysics Data System (ADS)

    Jansson, Anton; Pejryd, Lars

    2016-11-01

Computed tomography is increasingly adopted by industry for metrological and material evaluation. The technology enables new measurement possibilities, while also challenging old measurement methods in their established territories. There are, however, uncertainties related to the computed tomography method. Investigation of multi-material components, in particular with varying material thickness, can result in unreliable measurements. In this paper the effects of multiple materials, and differing material thickness, on computed tomography measurement consistency have been studied. The aim of the study was to identify measurement inconsistencies and attempt to correct these with a dual-energy computed tomography approach. In this pursuit, a multi-material phantom was developed, containing reliable measurement points and offering customizability with regard to material combinations. A dual-energy method was developed and implemented using sequential acquisition and pre-reconstruction fusing of projections. It was found that measurements made on the multi-material phantom with a single computed tomography scan were highly inconsistent. It was also found that the dual-energy approach was able to reduce the measurement inconsistencies. However, more work is required on the automation of the dual-energy approach presented in this paper, since it is highly operator dependent.
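
    A minimal sketch of the pre-reconstruction fusing of projections, assuming two registered stacks from the sequential low/high-energy acquisitions; the pixel-wise blend and the saturation-based weighting rule are illustrative heuristics, not the authors' calibrated method.

      import numpy as np

      def weight_from_attenuation(p_high, p_sat=4.0):
          """Heuristic weight map: trust the high-kV data where line integrals
          are large (the low-kV beam is nearly extinguished), and vice versa."""
          return np.clip(1.0 - p_high / p_sat, 0.0, 1.0)

      def fuse_projections(p_low, p_high, w):
          """Pixel-wise pre-reconstruction blend of two registered projection
          stacks from the sequential low/high-energy acquisitions."""
          return w * p_low + (1.0 - w) * p_high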

  6. Learning Probabilities in Computer Engineering by Using a Competency- and Problem-Based Approach

    ERIC Educational Resources Information Center

    Khoumsi, Ahmed; Hadjou, Brahim

    2005-01-01

    Our department has redesigned its electrical and computer engineering programs by adopting a learning methodology based on competence development, problem solving, and the realization of design projects. In this article, we show how this pedagogical approach has been successfully used for learning probabilities and their application to computer…

  7. An Overview of Three Approaches to Scoring Written Essays by Computer.

    ERIC Educational Resources Information Center

    Rudner, Lawrence; Gagne, Phill

    2001-01-01

    Describes the three most promising approaches to essay scoring by computer: (1) Project Essay Grade (PEG; E. Page, 1966); (2) Intelligent Essay Assessor (IEA; T. Landauer, 1997); and (3) E-rater (J. Burstein, Educational Testing Service). All of these proprietary systems return grades that correlate meaningfully with those of human raters. (SLD)

  8. An Overview of Three Approaches to Scoring Written Essays by Computer. ERIC Digest.

    ERIC Educational Resources Information Center

    Rudner, Lawrence; Gagne, Phill

    This digest describes the three most prominent approaches to essay scoring by computer: (1) Project Essay Grade (PEG), introduced by E. Page in 1966; (2) Intelligent Essay Assessor (IEA), introduced for essay grading in 1997 by T. Landauer and P. Foltz; and (3) e-rater, used by the Educational Testing Service and developed by J. Burstein. PEG…

  9. EMERGING MOLECULAR AND COMPUTATIONAL APPROACHES FOR CROSS-SPECIES EXTRAPOLATIONS: A WORKSHOP SUMMARY REPORT

    EPA Science Inventory

    Benson, W.H., R.T. Di Giulio, J.C. Cook, J. Freedman, R.L. Malek, C. Thompson and D. Versteeg. In press. Emerging Molecular and Computational Approaches for Cross-Species Extrapolations: A Workshop Summary Report (Abstract). To be presented at the SETAC Fourth World Congress, 14-...

  10. A Computer-Based Spatial Learning Strategy Approach That Improves Reading Comprehension and Writing

    ERIC Educational Resources Information Center

    Ponce, Hector R.; Mayer, Richard E.; Lopez, Mario J.

    2013-01-01

    This article explores the effectiveness of a computer-based spatial learning strategy approach for improving reading comprehension and writing. In reading comprehension, students received scaffolded practice in translating passages into graphic organizers. In writing, students received scaffolded practice in planning to write by filling in graphic…

  11. The Computational Experiment and Its Effects on Approach to Learning and Beliefs on Physics

    ERIC Educational Resources Information Center

    Psycharis, Sarantos

    2011-01-01

    Contemporary instructional approaches expect students to be active producers of knowledge. This leads to the need for creation of instructional tools and tasks that can offer students opportunities for active learning. This study examines the effect of a computational experiment as an instructional tool-for Grade 12 students, using a computer…

  12. The Criterion-Related Validity of a Computer-Based Approach for Scoring Concept Maps

    ERIC Educational Resources Information Center

    Clariana, Roy B.; Koul, Ravinder; Salehi, Roya

    2006-01-01

    This investigation seeks to confirm a computer-based approach that can be used to score concept maps (Poindexter & Clariana, 2004) and then describes the concurrent criterion-related validity of these scores. Participants enrolled in two graduate courses (n=24) were asked to read about and research online the structure and function of the heart…

  13. Integrated geometry and grid generation system for complex configurations

    NASA Technical Reports Server (NTRS)

    Akdag, Vedat; Wulf, Armin

    1992-01-01

A grid generation system was developed that enables grid generation for complex configurations. The system, called ICEM/CFD, is described and its role in computational fluid dynamics (CFD) applications is presented. The capabilities of the system include full computer-aided design (CAD), grid generation on the actual CAD geometry definition using robust surface projection algorithms, easy interfacing with known CAD packages through common file formats for geometry transfer, grid quality evaluation of the volume grid, coupling of boundary condition set-up for block faces with grid topology generation, multi-block grid generation with or without point continuity and block-to-block interface requirements, and generation of grid files directly compatible with known flow solvers. The interactive and integrated approach to the problem of computational grid generation not only substantially reduces manpower time but also increases the flexibility of later grid modifications and enhancements, which is required in an environment where CFD is integrated into a product design cycle.

  14. A New Finite Element Approach for Prediction of Aerothermal Loads: Progress in Inviscid Flow Computations

    NASA Technical Reports Server (NTRS)

    Bey, K. S.; Thornton, E. A.; Dechaumphai, P.; Ramakrishnan, R.

    1985-01-01

Recent progress in the development of finite element methodology for the prediction of aerothermal loads is described. Two-dimensional, inviscid computations are presented, but emphasis is placed on development of an approach extendable to three-dimensional viscous flows. Research progress is described for: (1) utilization of a commercially available program to construct flow solution domains and display computational results, (2) development of an explicit Taylor-Galerkin solution algorithm, (3) closed-form evaluation of finite element matrices, (4) vector computer programming strategies, and (5) validation of solutions. Two test problems of interest to NASA Langley aerothermal research are studied. Comparisons of finite element solutions for Mach 6 flow with other solution methods and experimental data validate fundamental capabilities of the approach for analyzing high speed inviscid compressible flows.

  15. Using an Adjoint Approach to Eliminate Mesh Sensitivities in Computational Design

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Park, Michael A.

    2005-01-01

    An algorithm for efficiently incorporating the effects of mesh sensitivities in a computational design framework is introduced. The method is based on an adjoint approach and eliminates the need for explicit linearizations of the mesh movement scheme with respect to the geometric parameterization variables, an expense that has hindered practical large-scale design optimization using discrete adjoint methods. The effects of the mesh sensitivities can be accounted for through the solution of an adjoint problem equivalent in cost to a single mesh movement computation, followed by an explicit matrix-vector product scaling with the number of design variables and the resolution of the parameterized surface grid. The accuracy of the implementation is established and dramatic computational savings obtained using the new approach are demonstrated using several test cases. Sample design optimizations are also shown.

  17. A new approach to modeling of selected human respiratory system diseases, directed to computer simulations.

    PubMed

    Redlarski, Grzegorz; Jaworski, Jacek

    2013-10-01

This paper presents a new, versatile approach to modelling severe human respiratory system diseases via computer simulation. The proposed approach enables one to predict the time histories of various diseases from information accessible in medical publications. This knowledge is useful to bioengineers involved in the design and construction of medical devices employed for monitoring respiratory condition. The approach provides data that are crucial for testing diagnostic systems. This can be achieved without the necessity of probing the physiological details of the respiratory system and without identification of parameters based on measurement data.

  18. Current Trend Towards Using Soft Computing Approaches to Phase Synchronization in Communication Systems

    NASA Technical Reports Server (NTRS)

    Drake, Jeffrey T.; Prasad, Nadipuram R.

    1999-01-01

    This paper surveys recent advances in communications that utilize soft computing approaches to phase synchronization. Soft computing, as opposed to hard computing, is a collection of complementary methodologies that act in producing the most desirable control, decision, or estimation strategies. Recently, the communications area has explored the use of the principal constituents of soft computing, namely, fuzzy logic, neural networks, and genetic algorithms, for modeling, control, and most recently for the estimation of phase in phase-coherent communications. If the receiver in a digital communications system is phase-coherent, as is often the case, phase synchronization is required. Synchronization thus requires estimation and/or control at the receiver of an unknown or random phase offset.

  19. A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.

    PubMed

    De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc

    2010-09-01

In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high resolution X-ray computed tomography, where reconstruction volumes contain a high number of volume elements (several giga voxels), this computational burden has prevented their breakthrough in practice. Besides the large number of calculations, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphics processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources. PMID:20350850
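
    The coarse-to-fine idea can be sketched with a dense toy system matrix and a SIRT iteration, as below; the paper's method is a GPU implementation for giga-voxel volumes, which this deliberately is not.

      import numpy as np

      def sirt(A, b, x0, iters=50):
          """SIRT with the usual inverse row/column-sum scalings."""
          R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)
          C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)
          x = x0.copy()
          for _ in range(iters):
              x = x + C * (A.T @ (R * (b - A @ x)))
          return x

      def prolong(n_coarse, n_fine):
          """Nearest-neighbour prolongation matrix (coarse -> fine voxels)."""
          P = np.zeros((n_fine, n_coarse))
          for i in range(n_fine):
              P[i, i * n_coarse // n_fine] = 1.0
          return P

      def multires_sirt(A, b, n, iters=50):
          """Iterate on a half-size grid first, then refine on the full grid;
          the expensive full-resolution problem only polishes a good start."""
          P = prolong(n // 2, n)
          x_coarse = sirt(A @ P, b, np.zeros(n // 2), iters)
          return sirt(A, b, P @ x_coarse, iters // 2)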

  20. A concurrent hybrid Navier-Stokes/Euler approach to fluid dynamic computations

    NASA Technical Reports Server (NTRS)

    Tavella, Domingo A.; Djomehri, M. J.; Kislitzin, Katherine T.; Blake, Matthew W.; Erickson, Larry L.

    1993-01-01

We present a methodology for the numerical simulation of flow fields by the simultaneous application of two distinct approaches to computational aerodynamics. We compute the three-dimensional flow field of a missile at moderate angle of attack by dividing the flow field into two regions: a region near the surface where we use a structured grid and a Navier-Stokes solver, and a region farther away from the surface where we utilize an unstructured grid and an Euler solver. The two solvers execute as independent UNIX processes, either on the same machine or on two machines. The solvers communicate data across their common interfaces within the same machine or over the network. The computations indicate that extensively separated flow fields can be computed without significant distortion by combining viscous and inviscid solvers.

  1. A computationally efficient approach for hidden-Markov model-augmented fingerprint-based positioning

    NASA Astrophysics Data System (ADS)

    Roth, John; Tummala, Murali; McEachen, John

    2016-09-01

This paper presents a computationally efficient approach for mobile subscriber position estimation in wireless networks. A method of data scaling assisted by timing adjust is introduced in fingerprint-based location estimation under a framework which allows for minimising computational cost. The proposed method maintains a level of accuracy comparable to the traditional case where no data scaling is used and is evaluated in a simulated environment under varying channel conditions. The proposed scheme is studied when it is augmented by a hidden-Markov model that matches the internal parameters to the prevailing channel conditions, thus minimising computational cost while maximising accuracy. Furthermore, the timing adjust quantity, available in modern wireless signalling messages, is shown to further reduce computational cost and increase accuracy when available. The results may be seen as a significant step towards integrating advanced position-based modelling with power-sensitive mobile devices.
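
    The hidden-Markov augmentation can be sketched as a standard Viterbi decoder over a discrete set of candidate positions with Gaussian fingerprint-similarity emissions; the data-scaling and timing-adjust refinements of the paper are not modelled here, and all names are illustrative.

      import numpy as np

      def emission_scores(rss, fingerprints, sigma=4.0):
          """Gaussian log-similarity between observed RSS vectors (T x K)
          and the radio-map fingerprints (N x K)."""
          d2 = ((rss[:, None, :] - fingerprints[None, :, :]) ** 2).sum(-1)
          return -d2 / (2.0 * sigma ** 2)

      def viterbi(log_trans, log_emis):
          """Most likely position sequence; log_trans[i, j] = log p(j | i)."""
          T, N = log_emis.shape
          dp = log_emis[0].copy()
          back = np.zeros((T, N), dtype=int)
          for t in range(1, T):
              cand = dp[:, None] + log_trans      # scores via each predecessor
              back[t] = cand.argmax(axis=0)
              dp = cand.max(axis=0) + log_emis[t]
          path = [int(dp.argmax())]
          for t in range(T - 1, 0, -1):           # backtrack the best path
              path.append(int(back[t, path[-1]]))
          return path[::-1]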

  2. Digital approach to planning computer-guided surgery and immediate provisionalization in a partially edentulous patient.

    PubMed

    Arunyanak, Sirikarn P; Harris, Bryan T; Grant, Gerald T; Morton, Dean; Lin, Wei-Shao

    2016-07-01

    This report describes a digital approach for computer-guided surgery and immediate provisionalization in a partially edentulous patient. With diagnostic data obtained from cone-beam computed tomography and intraoral digital diagnostic scans, a digital pathway of virtual diagnostic waxing, a virtual prosthetically driven surgical plan, a computer-aided design and computer-aided manufacturing (CAD/CAM) surgical template, and implant-supported screw-retained interim restorations were realized with various open-architecture CAD/CAM systems. The optional CAD/CAM diagnostic casts with planned implant placement were also additively manufactured to facilitate preoperative inspection of the surgical template and customization of the CAD/CAM-fabricated interim restorations. PMID:26868961

  3. An Integrated Spin-Labeling/Computational-Modeling Approach for Mapping Global Structures of Nucleic Acids

    PubMed Central

    Tangprasertchai, Narin S.; Zhang, Xiaojun; Ding, Yuan; Tham, Kenneth; Rohs, Remo; Haworth, Ian S.; Qin, Peter Z.

    2015-01-01

    The technique of site-directed spin labeling (SDSL) provides unique information on biomolecules by monitoring the behavior of a stable radical tag (i.e., spin label) using electron paramagnetic resonance (EPR) spectroscopy. In this chapter, we describe an approach in which SDSL is integrated with computational modeling to map conformations of nucleic acids. This approach builds upon a SDSL tool kit previously developed and validated, which includes three components: (i) a nucleotide-independent nitroxide probe, designated as R5, which can be efficiently attached at defined sites within arbitrary nucleic acid sequences; (ii) inter-R5 distances in the nanometer range, measured via pulsed EPR; and (iii) an efficient program, called NASNOX, that computes inter-R5 distances on given nucleic acid structures. Following a general framework of data mining, our approach uses multiple sets of measured inter-R5 distances to retrieve “correct” all-atom models from a large ensemble of models. The pool of models can be generated independently without relying on the inter-R5 distances, thus allowing a large degree of flexibility in integrating the SDSL-measured distances with a modeling approach best suited for the specific system under investigation. As such, the integrative experimental/computational approach described here represents a hybrid method for determining all-atom models based on experimentally-derived distance measurements. PMID:26477260

  5. Bayesian approaches to spatial inference: Modelling and computational challenges and solutions

    NASA Astrophysics Data System (ADS)

    Moores, Matthew; Mengersen, Kerrie

    2014-12-01

    We discuss a range of Bayesian modelling approaches for spatial data and investigate some of the associated computational challenges. This paper commences with a brief review of Bayesian mixture models and Markov random fields, with enabling computational algorithms including Markov chain Monte Carlo (MCMC) and integrated nested Laplace approximation (INLA). Following this, we focus on the Potts model as a canonical approach, and discuss the challenge of estimating the inverse temperature parameter that controls the degree of spatial smoothing. We compare three approaches to addressing the doubly intractable nature of the likelihood, namely pseudo-likelihood, path sampling and the exchange algorithm. These techniques are applied to satellite data used to analyse water quality in the Great Barrier Reef.
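
    Of the three strategies compared, pseudo-likelihood is the simplest to sketch: maximize the product of the local conditionals of the Potts field, so the intractable normalizing constant never appears. The sketch below assumes a q-label image and, for brevity, toroidal boundaries.

      import numpy as np
      from scipy.optimize import minimize_scalar

      def neighbour_agreements(z, q):
          """For every pixel and candidate label, count agreeing 4-neighbours
          (toroidal boundaries via np.roll, for brevity)."""
          counts = np.zeros((q,) + z.shape)
          for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
              shifted = np.roll(z, (dy, dx), axis=(0, 1))
              for k in range(q):
                  counts[k] += (shifted == k)
          return counts

      def neg_pseudo_loglik(beta, z, q):
          """-sum_i log p(z_i | neighbours); the intractable global
          normalizing constant of the Potts likelihood never appears."""
          counts = neighbour_agreements(z, q)
          rows = np.arange(z.shape[0])[:, None]
          cols = np.arange(z.shape[1])[None, :]
          own = counts[z, rows, cols]
          logZ = np.log(np.exp(beta * counts).sum(axis=0))
          return -(beta * own - logZ).sum()

      def estimate_beta(z, q):
          return minimize_scalar(neg_pseudo_loglik, bounds=(0.0, 3.0),
                                 args=(z, q), method='bounded').x

      rng = np.random.default_rng(0)
      print(estimate_beta(rng.integers(0, 3, (32, 32)), q=3))  # i.i.d. noise: ~0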

  6. An analytical approach to photonic reservoir computing - a network of SOA's - for noisy speech recognition

    NASA Astrophysics Data System (ADS)

    Salehi, Mohammad Reza; Abiri, Ebrahim; Dehyadegari, Louiza

    2013-10-01

This paper investigates a photonic reservoir computing approach - a network of SOAs - for optical speech recognition on an isolated digit recognition task. An analytical approach to photonic reservoir computing is drawn on to decrease time consumption compared to numerical methods, which is very important when processing large signals such as speech. It is also observed that adjusting the reservoir parameters, along with a good nonlinear mapping of the input signal into the reservoir, boosts recognition accuracy. Perfect recognition accuracy (i.e. 100%) can be achieved for noiseless speech signals. For noisy signals with signal-to-noise ratios of 0-10 dB, however, the observed accuracy ranged between 92% and 98%. In fact, the photonic reservoir demonstrated a 9-18% improvement compared to classical reservoir networks with hyperbolic tangent nodes.
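
    For readers unfamiliar with reservoir computing, the classical baseline against which the photonic network is compared - an echo-state reservoir with hyperbolic tangent nodes and a ridge-regression readout - can be sketched as follows; the sizes and spectral radius are generic choices, not the paper's settings.

      import numpy as np

      def esn_fit(inputs, targets, n_res=200, rho=0.9, ridge=1e-6, seed=0):
          """Echo-state network: fixed random reservoir with tanh nodes,
          trained only at the linear readout via ridge regression."""
          rng = np.random.default_rng(seed)
          W_in = rng.uniform(-0.5, 0.5, (n_res, inputs.shape[1]))
          W = rng.uniform(-0.5, 0.5, (n_res, n_res))
          W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # set spectral radius
          X = np.zeros((len(inputs), n_res))
          x = np.zeros(n_res)
          for t, u in enumerate(inputs):
              x = np.tanh(W_in @ u + W @ x)                # reservoir update
              X[t] = x
          W_out = targets.T @ X @ np.linalg.inv(X.T @ X + ridge * np.eye(n_res))
          return W_in, W, W_out                            # readout: y_t = W_out @ x_t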

  7. Viscous-inviscid interaction computations using a pseudo Navier-Stokes approach

    NASA Technical Reports Server (NTRS)

    Whitfield, D. L.

    1985-01-01

A new method is presented for the computation of viscous-inviscid interaction. The idea is to treat rotational inviscid flow (of which such flows are almost entirely composed) in a thorough manner, and to accept an approximate treatment of the vorticity introduced by viscous effects. The approach is to numerically solve the Navier-Stokes equations with the viscous terms determined from an inverse boundary-layer solution. The method falls somewhere between a Navier-Stokes approach and an Euler and boundary-layer equation coupling approach; consequently, it is referred to as a pseudo Navier-Stokes approach. Results from both the Navier-Stokes equations and the pseudo Navier-Stokes approach are presented.

  8. Non-Euclidean geometry of twisted filament bundle packing

    PubMed Central

    Bruss, Isaac R.; Grason, Gregory M.

    2012-01-01

    Densely packed and twisted assemblies of filaments are crucial structural motifs in macroscopic materials (cables, ropes, and textiles) as well as synthetic and biological nanomaterials (fibrous proteins). We study the unique and nontrivial packing geometry of this universal material design from two perspectives. First, we show that the problem of twisted bundle packing can be mapped exactly onto the problem of disc packing on a curved surface, the geometry of which has a positive, spherical curvature close to the center of rotation and approaches the intrinsically flat geometry of a cylinder far from the bundle center. From this mapping, we find the packing of any twisted bundle is geometrically frustrated, as it makes the sixfold geometry of filament close packing impossible at the core of the fiber. This geometrical equivalence leads to a spectrum of close-packed fiber geometries, whose low symmetry (five-, four-, three-, and twofold) reflect non-Euclidean packing constraints at the bundle core. Second, we explore the ground-state structure of twisted filament assemblies formed under the influence of adhesive interactions by a computational model. Here, we find that the underlying non-Euclidean geometry of twisted fiber packing disrupts the regular lattice packing of filaments above a critical radius, proportional to the helical pitch. Above this critical radius, the ground-state packing includes the presence of between one and six excess fivefold disclinations in the cross-sectional order. PMID:22711799

  9. MRI-Based Computational Fluid Dynamics in Experimental Vascular Models: Toward the Development of an Approach for Prediction of Cardiovascular Changes During Prolonged Space Missions

    NASA Technical Reports Server (NTRS)

    Spirka, T. A.; Myers, J. G.; Setser, R. M.; Halliburton, S. S.; White, R. D.; Chatzimavroudis, G. P.

    2005-01-01

A priority of NASA is to identify and study possible risks to astronauts' health during prolonged space missions [1]. The goal is to develop a procedure for a preflight evaluation of the cardiovascular system of an astronaut and to forecast how it will be affected during the mission. To predict these changes, a computational cardiovascular model must be constructed. Although physiology data can be used to make a general model, a more desirable subject-specific model requires anatomical, functional, and flow data from the specific astronaut. MRI has the unique advantage of providing images with all of the above information, including three-directional velocity data which can be used as boundary conditions in a computational fluid dynamics (CFD) program [2,3]. MRI-based CFD is very promising for reproduction of the flow patterns of a specific subject and prediction of changes in the absence of gravity. The aim of this study was to test the feasibility of this approach by reconstructing the geometry of MRI-scanned arterial models and reproducing the MRI-measured velocities using CFD simulations on these geometries.

  10. Toward a computational approach for collision avoidance with real-world scenes

    NASA Astrophysics Data System (ADS)

    Keil, Matthias S.; Rodriguez-Vazquez, Angel

    2003-04-01

    In the central nervous systems of animals like pigeons and locusts, neurons were identified which signal objects approaching the animal on a direct collision course. In order to timely initiate escape behavior, these neurons must recognize a possible approach (or at least differentiate it from similar but non-threatening situations), and estimate the time-to-collision (ttc). Unraveling the neural circuitry for collision avoidance, and identifying the underlying computational principles, should thus be promising for building vision-based neuromorphic architectures, which in the near future could find applications in cars or planes. Unfortunately, a corresponding computational architecture which is able to handle real-situations (e.g. moving backgrounds, different lighting conditions) is still not available (successful collision avoidance of a robot was demonstrated only for a closed environment). Here we present two computational models for signalling impending collision. These models are parsimonious since they possess only the minimum number of computational units which are essential to reproduce corresponding biological data. Our models show robust performance in adverse situations, such as with approaching low-contrast objects, or with highly textured backgrounds. Furthermore, a condition is proposed under which the responses of our models match the so-called eta-function. We finally discuss which components need to be added to our model to convert it into a full-fledged real-world-environment collision detector.
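
    The eta-function mentioned at the end is, in the looming-detection literature, the product of the angular expansion rate and a decaying exponential of the angular size; a sketch under that standard definition, with free constants, follows.

      import numpy as np

      def eta(t, l, v, alpha=1.0, C=1.0):
          """eta(t) = C * theta'(t) * exp(-alpha * theta(t)) for an object of
          size l approaching at speed v, collision at t = 0 (use t < 0)."""
          d = -v * t                                   # distance to the eye
          theta = 2.0 * np.arctan(l / (2.0 * d))       # angular size on the retina
          theta_dot = l * v / (d ** 2 + (l / 2.0) ** 2)
          return C * theta_dot * np.exp(-alpha * theta)

      t = np.linspace(-2.0, -0.05, 400)
      r = eta(t, l=0.1, v=1.0, alpha=3.0)
      print(t[np.argmax(r)])   # the peak precedes collision at a fixed angular threshold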

  11. Spectral geometry of symplectic spinors

    NASA Astrophysics Data System (ADS)

    Vassilevich, Dmitri

    2015-10-01

    Symplectic spinors form an infinite-rank vector bundle. Dirac operators on this bundle were constructed recently by Habermann, K. ["The Dirac operator on symplectic spinors," Ann. Global Anal. Geom. 13, 155-168 (1995)]. Here we study the spectral geometry aspects of these operators. In particular, we define the associated distance function and compute the heat trace asymptotics.

  12. Computational Complementation: A Modelling Approach to Study Signalling Mechanisms during Legume Autoregulation of Nodulation

    PubMed Central

    Han, Liqi

    2010-01-01

    Autoregulation of nodulation (AON) is a long-distance signalling regulatory system maintaining the balance of symbiotic nodulation in legume plants. However, the intricacy of internal signalling and absence of flux and biochemical data, are a bottleneck for investigation of AON. To address this, a new computational modelling approach called “Computational Complementation” has been developed. The main idea is to use functional-structural modelling to complement the deficiency of an empirical model of a loss-of-function (non-AON) mutant with hypothetical AON mechanisms. If computational complementation demonstrates a phenotype similar to the wild-type plant, the signalling hypothesis would be suggested as “reasonable”. Our initial case for application of this approach was to test whether or not wild-type soybean cotyledons provide the shoot-derived inhibitor (SDI) to regulate nodule progression. We predicted by computational complementation that the cotyledon is part of the shoot in terms of AON and that it produces the SDI signal, a result that was confirmed by reciprocal epicotyl-and-hypocotyl grafting in a real-plant experiment. This application demonstrates the feasibility of computational complementation and shows its usefulness for applications where real-plant experimentation is either difficult or impossible. PMID:20195551

  13. Hybrid approach for fast occlusion processing in computer-generated hologram calculation.

    PubMed

    Gilles, Antonin; Gioia, Patrick; Cozot, Rémi; Morin, Luce

    2016-07-10

    A hybrid approach for fast occlusion processing in computer-generated hologram calculation is studied in this paper. The proposed method is based on the combination of two commonly used approaches that complement one another: the point-source and wave-field approaches. By using these two approaches together, the proposed method thus takes advantage of both of them. In this method, the 3D scene is first sliced into several depth layers parallel to the hologram plane. Light scattered by the scene is then propagated and shielded from one layer to another using either a point-source or a wave-field approach according to a threshold criterion on the number of points within the layer. Finally, the hologram is obtained by computing the propagation of light from the nearest layer to the hologram plane. Experimental results reveal that the proposed method does not produce any visible artifact and outperforms both the point-source and wave-field approaches. PMID:27409327
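
    The per-layer switch at the heart of the hybrid scheme can be sketched as below, assuming monochromatic scalar diffraction on a square grid and a threshold on the layer's point count; occlusion shielding between layers, the paper's central concern, is deliberately omitted from this toy.

      import numpy as np

      def angular_spectrum(field, z, wl, pitch):
          """Wave-field propagation of a dense sampled layer over distance z."""
          n = field.shape[0]
          fx = np.fft.fftfreq(n, d=pitch)
          FX, FY = np.meshgrid(fx, fx)
          arg = np.maximum(1.0 - (wl * FX) ** 2 - (wl * FY) ** 2, 0.0)
          H = np.exp(2j * np.pi * z / wl * np.sqrt(arg))  # evanescent part dropped
          return np.fft.ifft2(np.fft.fft2(field) * H)

      def point_source(coords, amps, z, wl, pitch, n):
          """Direct summation of spherical wavelets from a sparse layer."""
          k = 2.0 * np.pi / wl
          xs = (np.arange(n) - n / 2) * pitch
          X, Y = np.meshgrid(xs, xs)
          out = np.zeros((n, n), dtype=complex)
          for (py, px), a in zip(coords, amps):
              r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + z ** 2)
              out += a * np.exp(1j * k * r) / r
          return out

      def propagate_layer(layer, z, wl, pitch, threshold=500):
          """Pick the cheaper model per layer: point sources when sparse,
          the angular spectrum when dense."""
          pts = np.argwhere(np.abs(layer) > 0)
          if len(pts) < threshold:
              coords = (pts - layer.shape[0] / 2) * pitch
              amps = layer[pts[:, 0], pts[:, 1]]
              return point_source(coords, amps, z, wl, pitch, layer.shape[0])
          return angular_spectrum(layer, z, wl, pitch)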

  14. Approaches for the computationally efficient assessment of the plug-in HEV impact on the grid

    NASA Astrophysics Data System (ADS)

    Lee, Tae-Kyung; Filipi, Zoran S.

    2012-11-01

    Realistic duty cycles are critical for design and assessment of hybrid propulsion systems, in particular, plug-in hybrid electric vehicles. The analysis of the PHEV impact requires a large amount of data about daily missions for ensuring realism in predicted temporal loads on the grid. This paper presents two approaches for the reduction of the computational effort while assessing the large scale PHEV impact on the grid, namely 1) "response surface modelling" approach; and 2) "daily driving schedule modelling" approach. The response surface modelling approach replaces the time-consuming vehicle simulations by response surfaces constructed off-line with the consideration of the real-world driving. The daily driving modelling approach establishes a correlation between departure and arrival times, and it predicts representative driving patterns with a significantly reduced number of simulation cases. In both cases, representative synthetic driving cycles are used to capture the naturalistic driving characteristics for a given trip length. The proposed approaches enable construction of 24-hour missions, assessments of charging requirements at the time of plugging-in, and temporal distributions of the load on the grid with high computational efficiency.

  15. A Dynamic Bayesian Network Approach to Location Prediction in Ubiquitous Computing Environments

    NASA Astrophysics Data System (ADS)

    Lee, Sunyoung; Lee, Kun Chang; Cho, Heeryon

    The ability to predict the future contexts of users significantly improves service quality and user satisfaction in ubiquitous computing environments. Location prediction is particularly useful because ubiquitous computing environments can dynamically adapt their behaviors according to a user's future location. In this paper, we present an inductive approach to recognizing a user's location by establishing a dynamic Bayesian network model. The dynamic Bayesian network model has been evaluated with a set of contextual data collected from undergraduate students. The evaluation result suggests that a dynamic Bayesian network model offers significant predictive power.
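
    In its simplest discrete form, the prediction step of such a dynamic Bayesian network is a filter-then-propagate update over a finite set of locations; the sketch below assumes known transition and observation models, with illustrative numbers.

      import numpy as np

      def predict_next_location(belief, trans, obs_lik):
          """One DBN slice: condition the belief on the current observation,
          then roll it forward through the transition model."""
          posterior = belief * obs_lik
          posterior /= posterior.sum()
          return trans.T @ posterior       # distribution over next locations

      belief = np.array([0.5, 0.3, 0.2])            # current location belief
      trans = np.array([[0.7, 0.2, 0.1],            # rows: from, columns: to
                        [0.1, 0.8, 0.1],
                        [0.2, 0.2, 0.6]])
      obs_lik = np.array([0.9, 0.05, 0.05])         # sensor likelihoods
      print(predict_next_location(belief, trans, obs_lik))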

  16. Computational modeling of chemo-electro-mechanical coupling: A novel implicit monolithic finite element approach

    PubMed Central

    Wong, J.; Göktepe, S.; Kuhl, E.

    2014-01-01

Computational modeling of the human heart allows us to predict how chemical, electrical, and mechanical fields interact throughout a cardiac cycle. Pharmacological treatment of cardiac disease has advanced significantly over the past decades, yet it remains unclear how the local biochemistry of an individual heart cell translates into global cardiac function. Here we propose a novel, unified strategy to simulate excitable biological systems across three biological scales. To discretize the governing chemical, electrical, and mechanical equations in space, we propose a monolithic finite element scheme. We apply a highly efficient and inherently modular global-local split, in which the deformation and the transmembrane potential are introduced globally as nodal degrees of freedom, while the chemical state variables are treated locally as internal variables. To ensure unconditional algorithmic stability, we apply an implicit backward Euler finite difference scheme to discretize the resulting system in time. To increase algorithmic robustness and guarantee optimal quadratic convergence, we suggest an incremental iterative Newton-Raphson scheme. The proposed algorithm allows us to simulate the interaction of chemical, electrical, and mechanical fields during a representative cardiac cycle on a patient-specific geometry, robustly and stably, with calculation times on the order of four days on a standard desktop computer. PMID:23798328
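
    The time-stepping core of the scheme - backward Euler in time with a Newton-Raphson loop per step - can be sketched for a generic excitable system; here a FitzHugh-Nagumo right-hand side and a finite-difference Jacobian stand in for the monolithic finite element residual and its consistent tangent.

      import numpy as np

      def numerical_jacobian(f, u, eps=1e-7):
          """Forward-difference Jacobian of f at u."""
          n = len(u)
          J = np.zeros((n, n))
          f0 = f(u)
          for j in range(n):
              up = u.copy()
              up[j] += eps
              J[:, j] = (f(up) - f0) / eps
          return J

      def backward_euler_newton(f, u0, dt, n_steps, tol=1e-10, max_newton=20):
          """Implicit backward Euler: solve R(u) = u - u_n - dt*f(u) = 0 at
          each step with a Newton-Raphson loop (quadratic convergence near
          the root, as in the monolithic scheme)."""
          u = np.array(u0, dtype=float)
          history = [u.copy()]
          for _ in range(n_steps):
              u_n = u.copy()
              for _ in range(max_newton):
                  R = u - u_n - dt * f(u)
                  if np.linalg.norm(R) < tol:
                      break
                  J = np.eye(len(u)) - dt * numerical_jacobian(f, u)
                  u = u - np.linalg.solve(J, R)
              history.append(u.copy())
          return np.array(history)

      # FitzHugh-Nagumo as a stand-in excitable system
      fhn = lambda u: np.array([u[0] - u[0] ** 3 / 3 - u[1] + 0.5,
                                0.08 * (u[0] + 0.7 - 0.8 * u[1])])
      traj = backward_euler_newton(fhn, [-1.0, 1.0], dt=0.5, n_steps=200)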

  17. Elastic wave field computation in multilayered nonplanar solid structures: a mesh-free semianalytical approach.

    PubMed

    Banerjee, Sourav; Kundu, Tribikram

    2008-03-01

    Multilayered solid structures made of isotropic, transversely isotropic, or general anisotropic materials are frequently used in aerospace, mechanical, and civil structures. Ultrasonic fields developed in such structures by finite-size transducers, simulating actual experiments in laboratories or in the field, have not been rigorously studied. Several attempts to compute the ultrasonic field inside solid media have been made based on approximate paraxial methods such as classical ray tracing and multi-Gaussian beam models. These approximate methods have several limitations. A new semianalytical method is adopted in this article to model the elastic wave field generated by finite-size transducers in multilayered solid structures with planar or nonplanar interfaces. A general formulation valid for both isotropic and anisotropic solids is presented, and a variety of conditions, including irregularities at the interfaces, are incorporated. The method requires frequency-domain displacement and stress Green's functions; because different materials are present in the problem geometry, the elastodynamic Green's functions of each material are used in the formulation. Expressions for the displacement and stress Green's functions for isotropic and anisotropic solids, as well as for fluid media, are presented. Computed results are verified by checking the stress and displacement continuity conditions across the interface between the two solids of a bimetal plate and by confirming that results for a plate with very small corrugation match the flat-plate results.
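
    The flavor of such a semianalytical computation can be conveyed by superposing point-source Green's functions over a discretized transducer face. The sketch below uses the free-space fluid Green's function exp(ikR)/(4*pi*R) with invented geometry and frequency; the article's layered-solid Green's functions are far more involved:

      import numpy as np

      # Illustrative parameters: a 1 MHz circular transducer radiating into water.
      k = 2 * np.pi * 1e6 / 1500.0   # wavenumber (sound speed 1500 m/s)
      radius = 5e-3                  # 5 mm transducer radius

      # Discretize the transducer face into point sources on a grid.
      xs = np.linspace(-radius, radius, 41)
      X, Y = np.meshgrid(xs, xs)
      mask = X**2 + Y**2 <= radius**2
      sources = np.stack([X[mask], Y[mask], np.zeros(mask.sum())], axis=1)

      def pressure(field_point):
          # Superpose the free-space Green's function over all point sources.
          R = np.linalg.norm(sources - field_point, axis=1)
          return np.sum(np.exp(1j * k * R) / (4 * np.pi * R))

      # On-axis field: near-field oscillation gives way to far-field decay.
      for z in (5e-3, 20e-3, 80e-3):
          p = pressure(np.array([0.0, 0.0, z]))
          print(f"z = {z*1e3:4.0f} mm  |p| = {abs(p):.3e}")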

  18. Managing search complexity in linguistic geometry.

    PubMed

    Stilman, B

    1997-01-01

    This paper is a new step in the development of linguistic geometry. This formal theory is intended to discover and generalize the inner properties of human expert heuristics that have been successful in a certain class of complex control systems, and to apply them to different systems. In this paper, we investigate heuristics extracted in the form of hierarchical networks of planning paths of autonomous agents. Employing linguistic geometry tools, the dynamic hierarchy of networks is represented as a hierarchy of formal attribute languages. The main ideas of this methodology are shown on two pilot examples of the solution of complex optimization problems. The first example is a problem of strategic planning for air combat, in which the concurrent actions of four vehicles are simulated as serial interleaving moves. The second example is a problem of strategic planning for the space combat of eight autonomous vehicles (with interleaving moves), which requires generating a search tree of depth 25 with a branching factor of 30; this is beyond the capabilities of modern and conceivable future computers employing conventional approaches. In both examples the linguistic geometry tools produced deep and highly selective searches in comparison with conventional search algorithms. For the first example, a sketch of the proof of optimality of the solution is given. PMID:18263105
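
    A quick calculation shows why the second example defeats brute force; the node-evaluation rate assumed below is deliberately generous:

      # Why a brute-force tree of depth 25 with branching factor 30 is hopeless.
      branching, depth = 30, 25
      nodes = branching ** depth
      print(f"full tree size: {nodes:.3e} nodes")   # about 8.5e36

      # Even at an (optimistic, assumed) 1e12 node evaluations per second:
      seconds = nodes / 1e12
      years = seconds / (3600 * 24 * 365)
      print(f"exhaustive search time: {years:.3e} years")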

  19. Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry.

    PubMed

    Fuchs, Franz G; Hjelmervik, Jon M

    2016-02-01

    A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire OpenGL pipeline is used in a multi-stage algorithm leveraging techniques from surface rendering, order-independent transparency, as well as theory and numerical methods for ordinary differential equations. We showcase the efficiency of our approach on different models relevant to industry, ranging from quality inspection of the parametrization of the geometry, to stress analysis in linear elasticity, to visualization of computational fluid dynamics results. PMID:26731454
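
    One ingredient of pixel-accurate geometry can be sketched: pulling a physical sample point back to parameter space by Newton inversion of the geometry map, so the solution field can be evaluated where it actually lives. The bilinear patch below is an illustrative stand-in for a NURBS volume; this is not the paper's OpenGL pipeline:

      import numpy as np

      # Control points of a bilinear patch, P[i][j] (invented coordinates).
      P = np.array([[[0.0, 0.0], [1.0, 0.2]],
                    [[0.1, 1.0], [1.3, 1.1]]])

      def G(u, v):
          # Bilinear geometry map x = G(u, v), a stand-in for a spline volume.
          return ((1-u)*(1-v)*P[0, 0] + u*(1-v)*P[0, 1]
                  + (1-u)*v*P[1, 0] + u*v*P[1, 1])

      def jacobian(u, v):
          dGdu = (1-v)*(P[0, 1] - P[0, 0]) + v*(P[1, 1] - P[1, 0])
          dGdv = (1-u)*(P[1, 0] - P[0, 0]) + u*(P[1, 1] - P[0, 1])
          return np.column_stack([dGdu, dGdv])

      def invert(x_target, u=0.5, v=0.5, tol=1e-12):
          # Newton iteration on the residual G(u, v) - x_target.
          for _ in range(20):
              r = G(u, v) - x_target
              if np.dot(r, r) < tol:
                  break
              du, dv = np.linalg.solve(jacobian(u, v), -r)
              u, v = u + du, v + dv
          return u, v

      u, v = invert(np.array([0.6, 0.55]))
      print(f"(u, v) = ({u:.6f}, {v:.6f}), maps back to {G(u, v)}")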
