Science.gov

Sample records for computational geometry approach

  1. Nozzle Geometry Optimization of an MPD Thruster by Soft-Computing Approaches

    NASA Astrophysics Data System (ADS)

    Nakane, Masakatsu; Konno, Tomokazu; Iketani, Gen; Ishikawa, Yoshio; Funaki, Ikkoh; Toki, Kyoichiro

    The optimization of the flowfield of a self-field magnetoplasmadynamic (MPD) thruster was conducted by two soft-computing methods. Both a genetic algorithm (GA) and a new method for locating minima/maxima based on the path integral were used to establish the optimum geometry that produces the highest thrust for specified operating conditions within a quasi-one-dimensional framework. The optimum geometry was found to be a quickly convergent and divergent geometry regardless of the method employed, and the optimized geometry was the same as that obtained by classical variational methods. This fact, together with their applicability to parallel computers, suggests that soft-computing methods are suitable for more complicated multi-dimensional flowfield optimization problems.

  2. A Computational Geometry Approach to Automated Pulmonary Fissure Segmentation in CT Examinations

    PubMed Central

    Pu, Jiantao; Leader, Joseph K; Zheng, Bin; Knollmann, Friedrich; Fuhrman, Carl; Sciurba, Frank C; Gur, David

    2010-01-01

    Identification of pulmonary fissures, which form the boundaries between the lobes of the lungs, may be useful during clinical interpretation of CT examinations for assessing the early presence and characterizing the manifestation of several lung diseases. Motivated by the unique surface shape of pulmonary fissures in three-dimensional space, we developed a new automated scheme using computational geometry methods to detect and segment fissures depicted on CT images. After geometric modeling of the lung volume using the Marching Cubes algorithm, Laplacian smoothing is applied iteratively to enhance pulmonary fissures by depressing non-fissure structures while smoothing the surfaces of lung fissures. Next, an Extended Gaussian Image based procedure is used to locate the fissures in a statistical manner that approximates the fissures using a set of plane “patches.” This approach has several advantages, such as independence from anatomic knowledge of lung structure other than the surface shape of fissures, limited sensitivity to other lung structures, and ease of implementation. Scheme performance was evaluated by two experienced thoracic radiologists using a set of 100 images (slices) randomly selected from 10 screening CT examinations. In this preliminary evaluation, 98.7% and 94.9% of scheme-segmented fissure voxels were within 2 mm of the fissures marked independently by the two radiologists in the testing image dataset. Using the scheme-detected fissures as the reference, 89.4% and 90.1% of manually marked fissure points lay within 2 mm of the reference, suggesting possible under-segmentation by the scheme. The case-based RMS (root-mean-square) distances (“errors”) between our scheme and the radiologists ranged from 1.48±0.92 to 2.04±3.88 mm. The discrepancy in fissure detection between the automated scheme and either radiologist was smaller in this dataset than the inter-reader variability. PMID:19272987
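
    The pipeline this abstract describes (isosurface extraction of the lung volume followed by iterative Laplacian smoothing of the mesh) can be sketched compactly. Below is a minimal illustration assuming scikit-image is available; the toy mask, smoothing weight, and iteration count are our own illustrative choices, not the paper's parameters.

    ```python
    import numpy as np
    from skimage.measure import marching_cubes

    def smooth_mesh(verts, faces, iterations=20, lam=0.5):
        """Iterative Laplacian smoothing: pull each vertex toward the centroid
        of its mesh neighbors, damping small-scale (non-fissure) structure."""
        neighbors = [set() for _ in range(len(verts))]
        for a, b, c in faces:                      # vertex adjacency from triangles
            neighbors[a].update((b, c))
            neighbors[b].update((a, c))
            neighbors[c].update((a, b))
        v = verts.copy()
        for _ in range(iterations):
            centroids = np.array([v[list(nb)].mean(axis=0) for nb in neighbors])
            v += lam * (centroids - v)             # move a fraction lam toward centroid
        return v

    # Toy stand-in for a binary lung mask (the real input is a segmented CT volume).
    mask = np.zeros((64, 64, 64))
    mask[16:48, 16:48, 16:48] = 1.0
    verts, faces, normals, values = marching_cubes(mask, level=0.5)
    smoothed = smooth_mesh(verts, faces)
    ```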

  3. A computational approach to continuum damping of Alfven waves in two and three-dimensional geometry

    SciTech Connect

    Koenies, Axel; Kleiber, Ralf

    2012-12-15

    While the usual way of calculating continuum damping of global Alfven modes is to introduce a small artificial resistivity, we present a computational approach to the problem based on a suitable path of integration in the complex plane. The approach is implemented with the Riccati shooting method, and it is shown that it can be transferred to the Galerkin method used in three-dimensional ideal magnetohydrodynamics (MHD) codes. The new approach turns out to be less expensive with respect to resolution and computation time than the usual one. We present an application to large-aspect-ratio tokamak and stellarator equilibria retaining only a few Fourier harmonics, and calculate eigenfunctions and continuum damping rates. These may serve as input for kinetic MHD hybrid models, making it possible both to bypass the problem of singularities on the path of integration and to account for continuum damping.

  4. Computer-Aided Geometry Modeling

    NASA Technical Reports Server (NTRS)

    Shoosmith, J. N. (Compiler); Fulton, R. E. (Compiler)

    1984-01-01

    Techniques in computer-aided geometry modeling and their application are addressed. Mathematical modeling, solid geometry models, management of geometric data, development of geometry standards, and interactive and graphic procedures are discussed. The applications include aeronautical and aerospace structures design, fluid flow modeling, and gas turbine design.

  5. An Automated Approach to Very High Order Aeroacoustic Computations in Complex Geometries

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Goodrich, John W.

    2000-01-01

    Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high order in space and time methods on small stencils. But the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high order methods (>15th order) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid-aligned boundaries and to 2nd order for irregular boundaries.
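
    The Cauchy-Kowalewski recursion mentioned above converts spatial derivatives into time derivatives, so a single Taylor series advances the solution to arbitrary order. Below is a minimal sketch for the linear advection equation u_t + c u_x = 0 on a periodic grid; it substitutes FFT differentiation for the paper's Hermite divided differences, and all names and parameters are our own.

    ```python
    import numpy as np

    def ck_step(u, c, dt, order=15):
        """One time step of u_t + c*u_x = 0 via a Taylor series in time.
        Cauchy-Kowalewski recursion: d^m u/dt^m = (-c)^m d^m u/dx^m, so every
        time derivative follows from spatial derivatives (computed here with
        the FFT; the paper uses Hermite divided differences instead)."""
        n = len(u)
        ik = 2j * np.pi * np.fft.fftfreq(n, d=1.0 / n)  # spectral d/dx on [0, 1)
        u_hat = np.fft.fft(u)
        total = np.zeros_like(u_hat)
        term = u_hat.copy()
        for m in range(order + 1):
            total += term                            # accumulate dt^m/m! * d^m u/dt^m
            term *= (-c) * ik * dt / (m + 1)         # build the next Taylor term
        return np.fft.ifft(total).real

    x = np.linspace(0.0, 1.0, 128, endpoint=False)
    u0 = np.exp(-100.0 * (x - 0.5) ** 2)
    u1 = ck_step(u0, c=1.0, dt=0.01)                 # pulse advected by c*dt
    ```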

  6. Computer Environments for Learning Geometry.

    ERIC Educational Resources Information Center

    Clements, Douglas H.; Battista, Michael T.

    1994-01-01

    Reviews research describing computer functions of construction-oriented computer environments and evaluates their contributions to students' learning of geometry. Topics discussed include constructing geometric concepts; the use of LOGO in elementary school mathematics; software that focuses on geometric construction; and implications for the…

  7. Geometry of discrete quantum computing

    NASA Astrophysics Data System (ADS)

    Hanson, Andrew J.; Ortiz, Gerardo; Sabry, Amr; Tai, Yu-Tsung

    2013-05-01

    Conventional quantum computing entails a geometry based on the description of an n-qubit state using $2^n$ infinite-precision complex numbers denoting a vector in a Hilbert space. Such numbers are in general uncomputable using any real-world resources, and, if we have the idea of physical law as some kind of computational algorithm of the universe, we would be compelled to alter our descriptions of physics to be consistent with computable numbers. Our purpose here is to examine the geometric implications of using finite fields $\mathbf{F}_p$ and finite complexified fields $\mathbf{F}_{p^2}$ (based on primes p congruent to 3 (mod 4)) as the basis for computations in a theory of discrete quantum computing, which would therefore become a computable theory. Because the states of a discrete n-qubit system are in principle enumerable, we are able to determine the proportions of entangled and unentangled states. In particular, we extend the Hopf fibration that defines the irreducible state space of conventional continuous n-qubit theories (which is the complex projective space $\mathbf{CP}^{2^n-1}$) to an analogous discrete geometry in which the Hopf circle for any n is found to be a discrete set of p + 1 points. The tally of unit-length n-qubit states is given, and reduced via the generalized Hopf fibration to $\mathbf{DCP}^{2^n-1}$, the discrete analogue of the complex projective space, which has $p^{2^n-1}(p-1)\prod_{k=1}^{n-1}(p^{2^k}+1)$ irreducible states. Using a measure of entanglement, the purity, we explore the entanglement features of discrete quantum states and find that the n-qubit states based on the complexified field $\mathbf{F}_{p^2}$ have $p^n(p-1)^n$ unentangled states (the product of the tally for a single qubit) with purity 1, and they have $p^{n+1}(p-1)(p+1)^{n-1}$ maximally entangled states with purity zero.
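
    One concrete claim above, that the discrete Hopf circle contains p + 1 points, is easy to verify by brute force: in $\mathbf{F}_{p^2} = \mathbf{F}_p[i]$ with $i^2 = -1$ (a valid construction when p ≡ 3 (mod 4)), conjugation sends a + bi to a - bi, so unit-norm elements satisfy a² + b² = 1 (mod p). A small check of our own, not code from the paper:

    ```python
    def hopf_circle_size(p):
        """Count z = a + b*i in F_{p^2} (p = 3 mod 4, so i^2 = -1 works) with
        norm z * conj(z) = a^2 + b^2 = 1 (mod p)."""
        return sum((a * a + b * b) % p == 1 for a in range(p) for b in range(p))

    for p in (3, 7, 11, 19):
        assert p % 4 == 3
        print(p, hopf_circle_size(p))   # prints p + 1 points in every case
    ```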

  8. A Whirlwind Tour of Computational Geometry.

    ERIC Educational Resources Information Center

    Graham, Ron; Yao, Frances

    1990-01-01

    Described is computational geometry, which uses concepts and results from classical geometry, topology, and combinatorics, as well as standard algorithmic techniques such as sorting and searching, graph manipulations, and linear programming. Also included are special techniques and paradigms. (KR)

  9. A computational approach to continuum damping of Alfvén waves in two and three-dimensional geometry

    NASA Astrophysics Data System (ADS)

    Könies, Axel; Kleiber, Ralf

    2012-12-01

    While the usual way of calculating continuum damping of global Alfvén modes is to introduce a small artificial resistivity, we present a computational approach to the problem based on a suitable path of integration in the complex plane. The approach is implemented with the Riccati shooting method, and it is shown that it can be transferred to the Galerkin method used in three-dimensional ideal magnetohydrodynamics (MHD) codes. The new approach turns out to be less expensive with respect to resolution and computation time than the usual one. We present an application to large-aspect-ratio tokamak and stellarator equilibria retaining only a few Fourier harmonics, and calculate eigenfunctions and continuum damping rates. These may serve as input for kinetic MHD hybrid models, making it possible both to bypass the problem of singularities on the path of integration and to account for continuum damping.

  10. A cell-centered Lagrangian finite volume approach for computing elasto-plastic response of solids in cylindrical axisymmetric geometries

    NASA Astrophysics Data System (ADS)

    Sambasivan, Shiv Kumar; Shashkov, Mikhail J.; Burton, Donald E.

    2013-03-01

    A finite volume cell-centered Lagrangian formulation is presented for solving large deformation problems in cylindrical axisymmetric geometries. Since solid materials can sustain significant shear deformation, evolution equations for stress and strain fields are solved in addition to mass, momentum and energy conservation laws. The total strain-rate realized in the material is split into an elastic and plastic response. The elastic and plastic components in turn are modeled using hypo-elastic theory. In accordance with the hypo-elastic model, a predictor-corrector algorithm is employed for evolving the deviatoric component of the stress tensor. A trial elastic deviatoric stress state is obtained by integrating a rate equation, cast in the form of an objective (Jaumann) derivative, based on Hooke's law. The dilatational response of the material is modeled using an equation of state of the Mie-Grüneisen form. The plastic deformation is accounted for via an iterative radial return algorithm constructed from the J2 von Mises yield condition. Several benchmark example problems with non-linear strain hardening and thermal softening yield models are presented. Extensive comparisons with representative Eulerian and Lagrangian hydrocodes in addition to analytical and experimental results are made to validate the current approach.
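
    The radial-return step mentioned above has a closed form when the hardening is linear and isotropic. The following is a textbook-style sketch (in the spirit of Simo and Hughes) rather than the paper's actual implementation; the variable names and the linear-hardening assumption are ours.

    ```python
    import numpy as np

    def radial_return(s_trial, eps_p, sigma_y, mu, H=0.0):
        """J2 (von Mises) radial return with linear isotropic hardening.

        s_trial : trial deviatoric stress (3x3) from the hypo-elastic predictor
        eps_p   : accumulated equivalent plastic strain
        sigma_y : initial yield stress; mu: shear modulus; H: hardening modulus
        """
        norm = np.sqrt(np.tensordot(s_trial, s_trial))        # ||s_trial||
        radius = np.sqrt(2.0 / 3.0) * (sigma_y + H * eps_p)   # current yield radius
        f = norm - radius                                     # trial yield function
        if f <= 0.0:
            return s_trial, eps_p                             # elastic: accept trial state
        dgamma = f / (2.0 * mu + (2.0 / 3.0) * H)             # plastic multiplier
        s = s_trial * (1.0 - 2.0 * mu * dgamma / norm)        # scale back to the surface
        return s, eps_p + np.sqrt(2.0 / 3.0) * dgamma
    ```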

  11. Computing Bisectors in a Dynamic Geometry Environment

    ERIC Educational Resources Information Center

    Botana, Francisco

    2013-01-01

    In this note, an approach combining dynamic geometry and automated deduction techniques is used to study the bisectors between points and curves. Usual teacher constructions for bisectors are discussed, showing that inherent limitations in dynamic geometry software impede their thorough study. We show that the interactive sketching of bisectors…

  12. Quadric solids and computational geometry

    SciTech Connect

    Emery, J.D.

    1980-07-25

    As part of the CAD-CAM development project, this report discusses the mathematics underlying the program QUADRIC, which does computations on objects modeled as Boolean combinations of quadric half-spaces. Topics considered include projective space, quadric surfaces, polars, affine transformations, the construction of solids, shaded image, the inertia tensor, moments, volume, surface integrals, Monte Carlo integration, and stratified sampling. 1 figure.

  13. Classical versus Computer Algebra Methods in Elementary Geometry

    ERIC Educational Resources Information Center

    Pech, Pavel

    2005-01-01

    Computer algebra methods based on results of commutative algebra, like Groebner bases of ideals and elimination of variables, make it possible to solve complex elementary and non-elementary problems of geometry which are difficult to solve using a classical approach. Computer algebra methods permit the proof of geometric theorems, automatic…
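
    Elimination of variables with Groebner bases, the core technique cited in this abstract, is straightforward to demonstrate in a computer algebra system. The example below is ours, not the paper's: eliminating the parameter t from x = t², y = t³ recovers the implicit equation y² = x³ of the curve.

    ```python
    from sympy import symbols, groebner

    t, x, y = symbols('t x y')
    # Hypotheses as polynomials: the parametric equations x = t^2, y = t^3.
    G = groebner([t**2 - x, t**3 - y], t, x, y, order='lex')
    # With lex order t > x > y, the basis elements free of t generate the
    # elimination ideal, i.e. the implicit equation of the curve.
    print([g for g in G.exprs if t not in g.free_symbols])   # [x**3 - y**2]
    ```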

  14. An approach for management of geometry data

    NASA Technical Reports Server (NTRS)

    Dube, R. P.; Herron, G. J.; Schweitzer, J. E.; Warkentine, E. R.

    1980-01-01

    The strategies for managing Integrated Programs for Aerospace Design (IPAD) computer-based geometry are described. The computer model of geometry is the basis for communication, manipulation, and analysis of shape information. IPAD's data base system makes this information available to all authorized departments in a company. A discussion of the data structures and algorithms required to support geometry in IPIP (IPAD's data base management system) is presented. Through the use of IPIP's data definition language, the structure of the geometry components is defined. The data manipulation language is the vehicle by which a user defines an instance of the geometry. The manipulation language also allows a user to edit, query, and manage the geometry. The selection of canonical forms is a very important part of the IPAD geometry. IPAD has a canonical form for each entity and provides transformations to alternate forms; in particular, IPAD will provide a transformation to the ANSI standard. The DBMS schemas required to support IPAD geometry are explained.

  15. Geometry of Quantum Computation with Qudits

    PubMed Central

    Luo, Ming-Xing; Chen, Xiu-Bo; Yang, Yi-Xian; Wang, Xiaojun

    2014-01-01

    The circuit complexity of quantum qubit system evolution is a primitive problem in quantum computation and has been discussed widely. We investigate this problem in terms of qudit systems. Using Riemannian geometry, the optimal quantum circuits are equivalent to geodesic evolutions in a specially curved parametrization of SU(d^n), and the quantum circuit complexity depends explicitly on a controllable approximation error bound. PMID:24509710

  16. A computer program for analyzing channel geometry

    USGS Publications Warehouse

    Regan, R.S.; Schaffranek, R.W.

    1985-01-01

    The Channel Geometry Analysis Program (CGAP) provides the capability to process, analyze, and format cross-sectional data for input to flow/transport simulation models or other computational programs. CGAP allows for a variety of cross-sectional data input formats through the use of variable format specification. The program accepts data from various computer media and provides for modification of machine-stored parameter values. CGAP has been devised to provide a rapid and efficient means of computing and analyzing the physical properties of an open-channel reach defined by a sequence of cross sections. CGAP's 16 options provide a wide range of methods by which to analyze and depict a channel reach and its individual cross-sectional properties. The primary function of the program is to compute the area, width, wetted perimeter, and hydraulic radius of cross sections at successive increments of water surface elevation (stage) from data that consist of coordinate pairs of cross-channel distances and land surface or channel bottom elevations. Longitudinal rates of change of cross-sectional properties are also computed, as are the mean properties of a channel reach. Output products include tabular lists of cross-sectional area, channel width, wetted perimeter, hydraulic radius, average depth, and cross-sectional symmetry computed as functions of stage; plots of cross sections; plots of cross-sectional area and (or) channel width as functions of stage; tabular lists of cross-sectional area and channel width computed as functions of stage for subdivisions of a cross section; plots of cross sections in isometric projection; and plots of cross-sectional area at a fixed stage as a function of longitudinal distance along an open-channel reach. A Command Procedure Language program and a Job Control Language procedure exist to facilitate program execution on the U.S. Geological Survey Prime and Amdahl computer systems, respectively. (Lantz-PTT)
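
    The primary computation described above (area, width, wetted perimeter, and hydraulic radius of a cross section at a given stage) fits in a short routine. CGAP itself is a USGS program; the following sketch and its conventions are ours, assuming stations x and bed elevations z are given as coordinate pairs.

    ```python
    import numpy as np

    def section_properties(x, z, stage):
        """Area, top width, wetted perimeter, and hydraulic radius of one
        cross section (stations x, bed elevations z) at water-surface
        elevation `stage`."""
        area = width = perim = 0.0
        for i in range(len(x) - 1):
            x0, x1, z0, z1 = x[i], x[i + 1], z[i], z[i + 1]
            d0, d1 = stage - z0, stage - z1            # water depths at segment ends
            if d0 <= 0.0 and d1 <= 0.0:
                continue                               # segment entirely dry
            if d0 * d1 < 0.0:                          # surface crosses the bed:
                xc = x0 + (x1 - x0) * d0 / (d0 - d1)   # interpolate the waterline
                if d0 > 0.0:
                    x1, z1, d1 = xc, stage, 0.0
                else:
                    x0, z0, d0 = xc, stage, 0.0
            dx = x1 - x0
            area += 0.5 * (d0 + d1) * dx               # trapezoidal depth integration
            width += dx
            perim += np.hypot(dx, z1 - z0)             # submerged bed length
        return area, width, perim, (area / perim if perim > 0.0 else 0.0)

    # A simple trapezoidal channel, stage 2.0 above the channel bottom.
    print(section_properties([0, 2, 6, 8], [3, 0, 0, 3], 2.0))
    ```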

  17. Computational algebraic geometry of epidemic models

    NASA Astrophysics Data System (ADS)

    Rodríguez Vega, Martín.

    2014-06-01

    Computational algebraic geometry is applied to the analysis of various epidemic models for schistosomiasis and dengue, both for the case without control measures and for the case where control measures are applied. The models were analyzed using the mathematical software Maple. Explicitly, the analysis is performed using Groebner bases, Hilbert dimension, and Hilbert polynomials; these computational tools are included automatically in Maple. Each of these models is represented by a system of ordinary differential equations, and for each model the basic reproductive number (R0) is calculated. The effects of the control measures are observed through changes in the algebraic structure of R0, in the Groebner basis, in the Hilbert dimension, and in the Hilbert polynomial. It is hoped that the results obtained in this paper prove useful for designing control measures against the epidemic diseases described. For future research, the use of algebraic epidemiology to analyze models for airborne and waterborne diseases is proposed.

  18. Computational field simulation of temporally deforming geometries

    SciTech Connect

    Boyalakuntla, K.; Soni, B.K.; Thornburg, H.J.

    1996-12-31

    A NURBS-based moving grid generation technique is presented to simulate temporally deforming geometries. Grid generation for a complex configuration can be a time-consuming process, and temporally varying geometries necessitate the regeneration of such a grid for every time step. The Non-Uniform Rational B-Spline (NURBS) control point information is used for geometry description. The parametric definition of the NURBS is utilized in the development of the methodology to generate well-distributed grids in a timely manner. The numerical simulation involving temporally deforming geometry is accomplished by appropriately linking to an unsteady, multi-block, thin-layer Navier-Stokes solver. The present method greatly reduces CPU requirements for time-dependent remeshing, facilitating the simulation of more complex unsteady problems. This effort is a first step towards multidisciplinary design optimization, which involves coupling aerodynamics, heat transfer, and structural analysis. Applications include simulation of temporally deforming bodies.

  19. An Alternative Approach to Logo-Based Geometry

    ERIC Educational Resources Information Center

    Durmus, Soner; Karakirik, Erol

    2005-01-01

    Geometry is an important branch of mathematics. The geometry curriculum can be enriched by using different technologies such as graphing calculators and computers. Different Logo-based software packages aim to improve conceptual understanding in geometry. The goals of this paper are i) to present theoretical foundations of any computer software…

  20. An Alternative Approach to Logo-Based Geometry

    ERIC Educational Resources Information Center

    Karakirik, Erol; Durmus, Soner

    2005-01-01

    Geometry is an important branch of mathematics. The geometry curriculum can be enriched by using different technologies such as graphing calculators and computers. Different Logo-based software packages aim to improve conceptual understanding in geometry. The goals of this paper are i) to present theoretical foundations of any computer software…

  1. Teaching Geometry: An Experiential and Artistic Approach.

    ERIC Educational Resources Information Center

    Ogletree, Earl J.

    The view that geometry should be taught at every grade level is promoted. Primary and elementary school children are thought to rarely have any direct experience with geometry, except on an incidental basis. Children are supposed to be able to learn geometry rather easily, so long as the method and content are adapted to their development and…

  2. Heuristic Approach to the Schwarzschild Geometry

    NASA Astrophysics Data System (ADS)

    Visser, Matt

    In this article I present a simple Newtonian heuristic for motivating a weak-field approximation for the spacetime geometry of a point particle. The heuristic is based on Newtonian gravity, the notion of local inertial frames (the Einstein equivalence principle), plus the use of Galilean coordinate transformations to connect the freely falling local inertial frames back to the "fixed stars." Because of the heuristic and quasi-Newtonian manner in which the specific choice of spacetime geometry is motivated, we are at best justified in expecting it to be a weak-field approximation to the true spacetime geometry. However, in the case of a spherically symmetric point mass the result is coincidentally an exact solution of the full vacuum Einstein field equations — it is the Schwarzschild geometry in Painlevé-Gullstrand coordinates. This result is much stronger than the well-known result of Michell and Laplace whereby a Newtonian argument correctly estimates the value of the Schwarzschild radius — using the heuristic presented in this article one obtains the entire Schwarzschild geometry. The heuristic also gives sensible results — a Riemann flat geometry — when applied to a constant gravitational field. Furthermore, a subtle extension of the heuristic correctly reproduces the Reissner-Nordström geometry and even the de Sitter geometry. Unfortunately the heuristic construction is not truly generic. For instance, it is incapable of generating the Kerr geometry or anti-de Sitter space. Despite this limitation, the heuristic does have useful pedagogical value in that it provides a simple and direct plausibility argument (not a derivation) for the Schwarzschild geometry — suitable for classroom use in situations where the full power and technical machinery of general relativity might be inappropriate. The extended heuristic provides more challenging problems — suitable for use at the graduate level.

  3. Computers in geology, geometry, planning, training, communications

    SciTech Connect

    Anderson, D.

    1996-07-01

    The Pittsburgh Research Center (PRC), in partnership with Carnegie Mellon University, is developing a mapping system to obtain two- and three-dimensional geometric information about the mine environment and produce two-dimensional (2-D) mine maps and three-dimensional (3-D) geometric models of mine areas. It is also developing mine visualization systems. In its current stage of development, the sensor and computer hardware can gather 2-D and 3-D data as it is transported about the mine environment. The driver transporting the sensor through the mine need only stop for seconds during each scan, to allow the gimballed mount of the sensor to settle, thereby eliminating errors due to sensor tilt. Post-processing software can compile the geometrical data and construct an extensive 2-D map. The final map is similar to mine maps generated by hand measurements today. However, the resolution and rate at which data are acquired result in maps that are more representative of the true shape of coal pillars, intersections, and corners. The 3-D geometric models are of sufficient resolution to support accurate volumetric calculations, navigation information, and assessment of structural size and conditions.

  4. Grid generation and inviscid flow computation about aircraft geometries

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.

    1989-01-01

    Grid generation and Euler flow about fighter aircraft are described. A fighter aircraft geometry is specified by an area-ruled fuselage with an internal duct, cranked delta wing or strake/wing combinations, canard and/or horizontal tail surfaces, and vertical tail surfaces. The initial step before grid generation and flow computation is the determination of a suitable grid topology. The external grid topology that has been applied is called a dual-block topology, which is a patched C^1-continuous multiple-block system where inner blocks cover the highly swept part of a cranked wing or strake, the rearward inner part of the wing, and tail components. Outer blocks cover the remainder of the fuselage, the outer part of the wing, and canards, and extend to the far-field boundaries. The grid generation is based on transfinite interpolation with Lagrangian blending functions. This procedure has been applied to the Langley experimental fighter configuration and a modified F-18 configuration. Supersonic flow between Mach 1.3 and 2.5 and angles of attack between 0 degrees and 10 degrees has been computed with associated Euler solvers based on the finite-volume approach. When coupling geometric details such as boundary-layer diverter regions, duct regions with inlets and outlets, or slots with the general external grid, imposing C^1 continuity can be extremely tedious. The approach taken here is to patch blocks together at common interfaces where there is no grid continuity, but to enforce conservation in the finite-volume solution. The key to this technique is how to obtain the information required for a conservative interface. The Ramshaw technique, which automates the computation of proportional areas of two overlapping grids on a planar surface and is suitable for coding, was used. Internal duct grids for the Langley experimental fighter configuration were generated independently of the external grid topology, with a conservative interface at the inlet and outlet.

  5. Measurement of Fracture Geometry for Accurate Computation of Hydraulic Conductivity

    NASA Astrophysics Data System (ADS)

    Chae, B.; Ichikawa, Y.; Kim, Y.

    2003-12-01

    Fluid flow in rock mass is controlled by the geometry of fractures, which is mainly characterized by roughness, aperture, and orientation. Fracture roughness and aperture were observed with a new confocal laser scanning microscope (CLSM; Olympus OLS1100). The wavelength of the laser is 488 nm, and the laser scanning is managed by a light polarization method using two galvanometer scanner mirrors. The system improves resolution in the light-axis (namely z) direction because of the confocal optics. Sampling is managed at a spacing of 2.5 μm along the x and y directions. The highest measurement resolution in the z direction is 0.05 μm, which is more accurate than other methods. For the roughness measurements, core specimens of coarse- and fine-grained granites were provided. Measurements were performed along three scan lines on each fracture surface. The measured data were represented as 2-D and 3-D digital images showing detailed features of roughness. Spectral analyses by the fast Fourier transform (FFT) were performed to characterize the roughness data quantitatively and to identify the influential frequencies of roughness. The FFT results showed that components of low frequency were dominant in the fracture roughness. This study also verifies that spectral analysis is a good approach to understanding the complicated characteristics of fracture roughness. For the aperture measurements, digital images of the aperture were acquired under five stages of applied uniaxial normal stress. This method can characterize the response of the aperture directly using the same specimen. Results of the measurements show that the reduction in aperture differs from part to part due to the rough geometry of the fracture walls. Laboratory permeability tests were also conducted to evaluate changes in hydraulic conductivity related to aperture variation under different stress levels. The results showed non-uniform reduction of hydraulic conductivity under increase of the normal stress and different values of

  6. Techniques to derive geometries for image-based Eulerian computations

    PubMed Central

    Dillard, Seth; Buchholz, James; Vigmostad, Sarah; Kim, Hyunggun; Udaykumar, H.S.

    2014-01-01

    Purpose: The performance of three frequently used level set-based segmentation methods is examined for the purpose of defining features and boundary conditions for image-based Eulerian fluid and solid mechanics models. The focus of the evaluation is to identify an approach that produces the best geometric representation from a computational fluid/solid modeling point of view. In particular, extraction of geometries from a wide variety of imaging modalities and noise intensities, to supply to an immersed boundary approach, is targeted.

    Design/methodology/approach: Two- and three-dimensional images, acquired from optical, X-ray CT, and ultrasound imaging modalities, are segmented with active contours, k-means, and adaptive clustering methods. Segmentation contours are converted to level sets and smoothed as necessary for use in fluid/solid simulations. Results produced by the three approaches are compared visually and with contrast ratio, signal-to-noise ratio, and contrast-to-noise ratio measures.

    Findings: While the active contours method possesses built-in smoothing and regularization and produces continuous contours, the clustering methods (k-means and adaptive clustering) produce discrete (pixelated) contours that require smoothing using speckle-reducing anisotropic diffusion (SRAD). Thus, for images with high contrast and low to moderate noise, active contours are generally preferable. However, adaptive clustering is found to be far superior to the other two methods for images possessing high levels of noise and global intensity variations, due to its more sophisticated use of local pixel/voxel intensity statistics.

    Originality/value: It is often difficult to know a priori which segmentation will perform best for a given image type, particularly when geometric modeling is the ultimate goal. This work offers insight into the algorithm selection process, as well as outlining a practical framework for generating useful geometric surfaces in an Eulerian setting. PMID

  7. Computer Aided Design and Descriptive Geometry for School Children.

    ERIC Educational Resources Information Center

    Edmonds, Geoff

    1984-01-01

    Discusses the place of descriptive geometry in schools and problems of teaching technical graphics to young students. Instructional materials for teaching technical graphics, graphics programs, program development, and the value of school project work are considered. Sample computer-generated drawings are included. (JN)

  8. Using Computer-Assisted Multiple Representations in Learning Geometry Proofs

    ERIC Educational Resources Information Center

    Wong, Wing-Kwong; Yin, Sheng-Kai; Yang, Hsi-Hsun; Cheng, Ying-Hao

    2011-01-01

    Geometry theorem proving involves skills that are difficult to learn. Instead of working with abstract and complicated representations, students might start with concrete, graphical representations. A proof tree is a graphical representation of a formal proof, with each node representing a proposition or given conditions. A computer-assisted…

  9. Investigating the geometry of pig airways using computed tomography

    NASA Astrophysics Data System (ADS)

    Mansy, Hansen A.; Azad, Md Khurshidul; McMurray, Brandon; Henry, Brian; Royston, Thomas J.; Sandler, Richard H.

    2015-03-01

    Numerical modeling of sound propagation in the airways requires accurate knowledge of the airway geometry. These models are often validated using human and animal experiments. While many studies have documented the geometric details of the human airways, information about the geometry of pig airways is scarcer. In addition, the morphology of animal airways can be significantly different from that of humans. The objective of this study is to measure the airway diameter, length, and bifurcation angles in domestic pigs using computed tomography. After imaging the lungs of 3 pigs, segmentation software tools were used to extract the geometry of the airway lumen. The airway dimensions were then measured from the resulting 3D models for the first 10 airway generations. Results showed that the size and morphology of the airways of different animals were similar. The measured airway dimensions were compared with those of the human airways. While the trachea diameter was found to be comparable to that of an adult human, the diameter, length, and branching angles of other airways were noticeably different from those of humans. For example, pigs consistently had an early airway branching from the trachea, proximal to the carina, that feeds the superior (top) right lung lobe. This branch is absent in the human airways. These results suggest that the human geometry may not be a good approximation of the pig airways, and may contribute to increased errors when human airway geometric values are used in computational models of the pig chest.

  10. Computational vaccinology: quantitative approaches.

    PubMed

    Flower, Darren R; McSparron, Helen; Blythe, Martin J; Zygouri, Christianna; Taylor, Debra; Guan, Pingping; Wan, Shouzhan; Coveney, Peter V; Walshe, Valerie; Borrow, Persephone; Doytchinova, Irini A

    2003-01-01

    The immune system is hierarchical and has many levels, exhibiting much emergent behaviour. However, at its heart are molecular recognition events that are indistinguishable from other types of biomacromolecular interaction. These can be addressed well by quantitative experimental and theoretical biophysical techniques, and particularly by methods from drug design. We review here our approach to computational immunovaccinology. In particular, we describe the JenPep database and two new techniques for T cell epitope prediction. One is based on quantitative structure-activity relationships (a 3D-QSAR method based on CoMSIA and another 2D method based on the Free-Wilson approach) and the other on atomistic molecular dynamics simulations using high performance computing. JenPep (http://www.jenner.ac.uk/JenPep) is a relational database system supporting quantitative data on peptide binding to major histocompatibility complexes, TAP transporters, TCR-pMHC complexes, and an annotated list of B cell and T cell epitopes. Our 2D-QSAR method factors the contribution to peptide binding from individual amino acids as well as 1-2 and 1-3 residue interactions. In the 3D-QSAR approach, the influence of five physicochemical properties (volume, electrostatic potential, hydrophobicity, hydrogen-bond donor and acceptor abilities) on peptide affinity was considered. Both methods are exemplified through their application to the well-studied problem of peptide binding to the human class I MHC molecule HLA-A*0201. PMID:14712934

  11. Deterministic point inclusion methods for computational applications with complex geometry

    SciTech Connect

    Khamayseh, Ahmed; Kuprat, Andrew P.

    2008-11-21

    A fundamental problem in computation is finding practical and efficient algorithms for determining if a query point is contained within a model of a three-dimensional solid. The solid is modeled using a general boundary representation that can contain polygonal elements and/or parametric patches. We have developed two such algorithms: the first is based on a global closest-feature query, and the second is based on a local intersection query. Both algorithms work for two- and three-dimensional objects. This paper presents both algorithms, as well as the spatial data structures and queries required for efficient implementation of the algorithms. Applications for these algorithms include computational geometry, mesh generation, particle simulation, multiphysics coupling, and computer graphics. These methods are deterministic in that they do not involve random perturbations of diagnostic rays cast from the query point in order to avoid 'unclean' or 'singular' intersections of the rays with the geometry. Avoiding the necessity of such random perturbations will become increasingly important as geometries become more convoluted and complex.
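
    For contrast, the conventional approach these deterministic methods replace is a ray-crossing (even-odd) test, in which a ray that grazes a vertex or edge produces exactly the 'unclean' intersections mentioned above. A minimal 2D version, ours for illustration:

    ```python
    def point_in_polygon(q, poly):
        """Even-odd ray-crossing test: cast a ray from q in the +x direction
        and count edge crossings. Degenerate hits (ray through a vertex) are
        the 'unclean' intersections that motivate deterministic methods."""
        x, y = q
        inside = False
        n = len(poly)
        for i in range(n):
            (x0, y0), (x1, y1) = poly[i], poly[(i + 1) % n]
            if (y0 > y) != (y1 > y):                      # edge spans the ray's y
                x_cross = x0 + (y - y0) * (x1 - x0) / (y1 - y0)
                if x_cross > x:
                    inside = not inside
        return inside

    square = [(0, 0), (2, 0), (2, 2), (0, 2)]
    print(point_in_polygon((1, 1), square), point_in_polygon((3, 1), square))  # True False
    ```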

  12. DETERMINISTIC POINT INCLUSION METHODS FOR COMPUTATIONAL APPLICATIONS WITH COMPLEX GEOMETRY.

    SciTech Connect

    Khamayseh, Ahmed K; Kuprat, Andrew

    2008-01-01

    A fundamental problem in computation is finding practical and efficient algorithms for determining if a query point is contained within a model of a three-dimensional solid. The solid is modeled using a general boundary representation that can contain polygonal elements and/or parametric patches. We have developed two such algorithms: the first is based on a global closest feature query, and the second is based on a local intersection query. Both algorithms work for two- and three-dimensional objects. This paper presents both algorithms, as well as the spatial data structures and queries required for efficient implementation of the algorithms. Applications for these algorithms include computational geometry, mesh generation, particle simulation, multiphysics coupling, and computer graphics. These methods are deterministic in that they do not involve random perturbations of diagnostic rays cast from the query point in order to avoid "unclean" or "singular" intersections of the rays with the geometry. Avoiding the necessity of such random perturbations will become increasingly important as geometries become more convoluted and complex.

  13. Parallel algorithms for computational geometry utilizing a fixed number of processors

    SciTech Connect

    Strader, R.G.

    1988-01-01

    The design of algorithms for systems where both communication and computation are important is presented. Approaches to parallel computation and the underlying theoretical models are surveyed. Two models of computation are developed, both based on a divide-and-conquer strategy. The first utilizes a tree-like merge resulting in several levels of communication and computation, the total number determined by the number of processors. The second model contains a fixed number of levels independent of the number of processors. Using the notation from the survey and the models of computation, algorithms are designed for the computational geometry problems of finding the convex hull and the Delaunay triangulation of a set of uniform random points in the Euclidean plane. Communication and computation timing measurements based on these algorithms are presented and analyzed. The results are then generalized to predict the behavior of larger problems. Architectural support, partitioning issues, and limitations of this approach are discussed.
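
    As a sequential reference point for the convex hull problem studied here, the standard O(n log n) monotone-chain algorithm is shown below; this is the classical baseline, not the thesis's parallel tree-merge algorithm.

    ```python
    def convex_hull(points):
        """Andrew's monotone chain: sort the points, then build the lower and
        upper chains with a cross-product turn test. O(n log n), sequential."""
        pts = sorted(set(points))
        if len(pts) <= 2:
            return pts

        def cross(o, a, b):   # z of (a - o) x (b - o); > 0 means a left turn
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

        def chain(seq):
            out = []
            for p in seq:
                while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                    out.pop()             # drop points that make a non-left turn
                out.append(p)
            return out

        lower, upper = chain(pts), chain(reversed(pts))
        return lower[:-1] + upper[:-1]    # endpoints of each chain are shared

    print(convex_hull([(0, 0), (1, 1), (2, 0), (1, 2), (1, 0.5)]))
    ```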

  14. Computer aided design and analysis of gear tooth geometry

    NASA Technical Reports Server (NTRS)

    Chang, S. H.; Huston, R. L.

    1987-01-01

    A simulation method for gear hobbing and shaping of straight and spiral bevel gears is presented. The method is based upon an enveloping theory for gear tooth profile generation. The procedure is applicable in the computer aided design of standard and nonstandard tooth forms. An inverse procedure for finding a conjugate gear tooth profile is presented for arbitrary cutter geometry. The kinematic relations for the tooth surfaces of straight and spiral bevel gears are proposed. The tooth surface equations for these gears are formulated in a manner suitable for their automated numerical development and solution.

  15. Ionization coefficient approach to modeling breakdown in nonuniform geometries.

    SciTech Connect

    Warne, Larry Kevin; Jorgenson, Roy Eberhardt; Nicolaysen, Scott D.

    2003-11-01

    This report summarizes the work on breakdown modeling in nonuniform geometries by the ionization coefficient approach. Included are: (1) fits to the primary and secondary ionization coefficients used in the modeling; (2) analytical test cases for sphere-to-sphere, wire-to-wire, corner, coaxial, and rod-to-plane geometries; (3) a compilation of experimental data with source references; and (4) comparisons between code results, test case results, and experimental data. A simple criterion is proposed to differentiate between corona and spark. The effect of a dielectric surface on avalanche growth is examined by means of Monte Carlo simulations. The presence of a clean, dry surface does not appear to enhance growth.
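
    The heart of the ionization-coefficient approach is integrating the primary (Townsend) coefficient along a field line and comparing the avalanche integral with a critical value (the Meek/Raether streamer criterion). Below is a schematic sketch of that calculation; the textbook-style air coefficients (A = 15 cm⁻¹ Torr⁻¹, B = 365 V cm⁻¹ Torr⁻¹) and the critical value K = 18 are illustrative assumptions, not the report's fits.

    ```python
    import numpy as np

    def avalanche_integral(E, x, p=760.0, A=15.0, B=365.0, K=18.0):
        """Integrate the primary ionization coefficient along a gap and apply
        the streamer criterion: alpha/p = A*exp(-B*p/E) [1/(cm*Torr)], with
        breakdown flagged when integral(alpha dx) >= K."""
        alpha = A * p * np.exp(-B * p / np.maximum(E, 1e-6))           # 1/cm
        integral = float(np.sum(0.5 * (alpha[1:] + alpha[:-1]) * np.diff(x)))
        return integral, integral >= K

    # Illustrative nonuniform field, decaying away from a rod electrode.
    x = np.linspace(0.0, 1.0, 500)       # gap coordinate, cm
    E = 60e3 / (1.0 + x / 0.1) ** 2      # V/cm
    print(avalanche_integral(E, x))
    ```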

  16. Computational approaches to vision

    NASA Technical Reports Server (NTRS)

    Barrow, H. G.; Tenenbaum, J. M.

    1986-01-01

    Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.

  17. Representing Range Compensators with Computational Geometry in TOPAS

    SciTech Connect

    Iandola, Forrest N.; /Illinois U., Urbana /SLAC

    2012-09-07

    In a proton therapy beamline, the range compensator modulates the beam energy, which in turn controls the depth at which protons deposit energy. In this paper, we introduce two computational representations of the range compensator. One of our compensator representations, which we refer to as a subtraction solid-based range compensator, precisely represents the compensator. Our other representation, the 3D hexagon-based range compensator, closely approximates the compensator geometry. We have implemented both of these compensator models in a proton therapy Monte Carlo simulation called TOPAS (Tool for Particle Simulation). In the future, we will present a detailed study of the accuracy and runtime performance trade-offs between our two range compensator representations.

  18. Computational structure analysis of biomacromolecule complexes by interface geometry.

    PubMed

    Mahdavi, Sedigheh; Salehzadeh-Yazdi, Ali; Mohades, Ali; Masoudi-Nejad, Ali

    2013-12-01

    The ability to analyze and compare protein-nucleic acid and protein-protein interaction interfaces is of critical importance in understanding the biological function and essential processes occurring in cells. Since high-resolution three-dimensional (3D) structures of biomacromolecule complexes are available, computational characterization of interface geometry has become an important research topic in the field of molecular biology. In this study, the interfaces of a set of 180 protein-nucleic acid and protein-protein complexes are computed to understand the principles of their interactions. The weighted Voronoi diagram of the atoms and the alpha complex provide an accurate description of the interface atoms. Our method is implemented in both the presence and absence of water molecules. A comparison among the three types of interaction interfaces shows that RNA-protein complexes have the largest interfaces. The results show a high correlation coefficient between our method and the PISA server in the presence and absence of water molecules, in both the Voronoi model and the traditional model based on solvent accessibility, and high validation parameters in comparison to the classical model. PMID:23850846

  19. SU-E-I-12: Flexible Geometry Computed Tomography

    SciTech Connect

    Shaw, R

    2015-06-15

    Purpose: The concept separates the mechanical connection between the radiation source and the detector. This design allows the trajectory and orientation of the radiation source/detector to be customized to the object that is being imaged, in contrast to the formulaic rotation-translation image acquisition of conventional computed tomography (CT).

    Background/significance: CT devices that image a full range of anatomy, patient populations, and imaging procedures are large. The root cause of the expanding size of comprehensive CT is the commitment to helical geometry that is hardwired into the image reconstruction. FGCT extends the application of alternative reconstruction techniques, i.e. tomosynthesis, by separating the two main components, radiation source and detector, and allowing six degrees of freedom of motion for the radiation source, detector, or both. The image acquisition geometry is then tailored to how the patient/object is positioned. This provides greater flexibility in the position and location at which the patient/object is imaged. Additionally, removing the need for a rotating gantry reduces the footprint, so that the CT system is more mobile and can be moved to where the patient/object is, instead of the other way around.

    Methods: As proof of principle, a reconstruction algorithm is designed to produce FGCT images. Using simulated detector data, voxels intersecting a line drawn between the radiation source and an individual detector are traced and modified using the detector signal. The detector signal is modified to compensate for changes in the source-to-detector distance. Adjacent voxels are modified in proportion to the detector signal, providing a simple image filter.

    Results: Image quality from the proposed FGCT reconstruction technique is proving to be a challenge, producing hardly recognizable images from limited projection angles.

    Conclusion: Preliminary assessment of the reconstruction technique demonstrates the inevitable

  20. SOLVING PDES IN COMPLEX GEOMETRIES: A DIFFUSE DOMAIN APPROACH

    PubMed Central

    LI, X.; LOWENGRUB, J.; RÄTZ, A.; VOIGT, A.

    2011-01-01

    We extend previous work and present a general approach for solving partial differential equations in complex, stationary, or moving geometries with Dirichlet, Neumann, and Robin boundary conditions. Using an implicit representation of the geometry through an auxiliary phase-field function, which replaces the sharp boundary of the domain with a diffuse layer (e.g. diffuse domain), the equation is reformulated on a larger, regular domain. The resulting partial differential equation is of the same order as the original equation, with additional lower-order terms to approximate the boundary conditions. The reformulated equation can be solved by standard numerical techniques. We use the method of matched asymptotic expansions to show that solutions of the reformulated equations converge to those of the original equations. We provide numerical simulations which confirm this analysis. We also present applications of the method to growing domains and complex three-dimensional structures, and we discuss applications to cell biology and heteroepitaxy. PMID:21603084
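
    For the simplest (homogeneous Neumann) case, the reformulation amounts to multiplying the operator through by the phase field: -u'' + u = f on the domain becomes -(φu')' + φu = φf on the enclosing box. The 1D finite-difference sketch below is our own construction of that case; the Dirichlet and Robin variants treated in the paper add further lower-order terms.

    ```python
    import numpy as np

    # Diffuse-domain sketch: solve -u'' + u = f on the "true" domain (0.2, 0.8)
    # with homogeneous Neumann BCs by reformulating as -(phi*u')' + phi*u = phi*f
    # on the larger box [0, 1].
    n, eps = 400, 0.01
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    dist = np.maximum(0.2 - x, x - 0.8)             # distance outside the true domain
    phi = 0.5 * (1.0 - np.tanh(3.0 * dist / eps))   # diffuse indicator of the domain

    u_exact = np.cos(np.pi * (x - 0.2) / 0.6)       # satisfies u'(0.2) = u'(0.8) = 0
    f = (1.0 + (np.pi / 0.6) ** 2) * u_exact        # chosen so that f = u - u''

    phi_half = 0.5 * (phi[:-1] + phi[1:])           # phi at the cell faces
    A = np.zeros((n, n))
    b = phi * f
    delta = 1e-8                                    # keeps rows solvable where phi ~ 0
    for i in range(1, n - 1):
        A[i, i - 1] = -phi_half[i - 1] / h**2
        A[i, i + 1] = -phi_half[i] / h**2
        A[i, i] = (phi_half[i - 1] + phi_half[i]) / h**2 + phi[i] + delta
    A[0, 0] = A[-1, -1] = 1.0
    b[0] = b[-1] = 0.0
    u = np.linalg.solve(A, b)

    inside = (x > 0.25) & (x < 0.75)
    print(np.max(np.abs(u[inside] - u_exact[inside])))   # error shrinks with eps
    ```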

  1. Effect of Microstructural Geometry for Computing Closure Models in Multiscale Modeling of Shocked Particle Laden Flow

    NASA Astrophysics Data System (ADS)

    Sen, Oishik; Udaykumar, H. S.; Jacobs, Gustaaf

    Interaction of a shock wave with dust particles is a complex physical phenomenon. A computational model for studying this two-phase system is the Particle-Source in Cell (PSIC) approach. In this method, the dust particles are tracked as point particles in a Lagrangian frame of reference immersed in a compressible fluid. Two-way interaction between the carrier and dispersed phases is ensured by coupling the momentum and energy transfer between the two phases as source terms in the respective governing equations. These source terms (e.g. the drag force on particles) may be computed from resolved numerical simulations by treating each macroscopic point particle as an ensemble of cylinders immersed in a compressible fluid. However, the drag so computed must be independent of the geometry of the mesoscale. In this work, the effect of the stochasticity of the microstructural geometry on the construction of drag laws from resolved mesoscale computations is studied. Several different arrangements of cylinders are considered, and the mean drag law as a function of Mach number and volume fraction for each arrangement is computed using the dynamic kriging method. The uncertainty in the drag forces arising from the arrangement of the cylinders at a given volume fraction is quantified as 90% credible sets, and the effect of this uncertainty on PSIC computations is studied.

  2. A Vector Approach to Euclidean Geometry: Vector Spaces and Affine Geometry, Volume 1. Teacher's Edition.

    ERIC Educational Resources Information Center

    Vaughan, Herbert E.; Szabo, Steven

    This is the teacher's edition of a text for the first year of a two-year high school geometry course. The course bases plane and solid geometry and trigonometry on the fact that the translations of a Euclidean space constitute a vector space which has an inner product. Volume 1 deals largely with affine geometry, and the notion of dimension is…

  3. Using 3D Computer Graphics Multimedia to Motivate Preservice Teachers' Learning of Geometry and Pedagogy

    ERIC Educational Resources Information Center

    Goodson-Espy, Tracy; Lynch-Davis, Kathleen; Schram, Pamela; Quickenton, Art

    2010-01-01

    This paper describes the genesis and purpose of our geometry methods course, focusing on a geometry-teaching technology we created using an NVIDIA[R] Chameleon demonstration. This article presents examples from a sequence of lessons centered on a 3D computer graphics demonstration of the chameleon and its geometry. In addition, we present data…

  4. A Parametric Geometry Computational Fluid Dynamics (CFD) Study Utilizing Design of Experiments (DOE)

    NASA Technical Reports Server (NTRS)

    Rhew, Ray D.; Parker, Peter A.

    2007-01-01

    Design of Experiments (DOE) techniques were applied to the Launch Abort System (LAS) of the NASA Crew Exploration Vehicle (CEV) parametric geometry Computational Fluid Dynamics (CFD) study to efficiently identify and rank the primary contributors to the integrated drag over the vehicle's ascent trajectory. Typical approaches to these types of activities involve developing all possible combinations of geometries by changing one variable at a time, analyzing them with CFD, and predicting the main effects on an aerodynamic parameter, which in this application is integrated drag. The original plan for the LAS study team was to generate and analyze more than 1000 geometry configurations to study 7 geometric parameters. By utilizing DOE techniques, the number of geometries was strategically reduced to 84. In addition, critical information on interaction effects among the geometric factors was identified that would not have been obtained with the traditional technique. The study was therefore performed in less time and provided more information on the geometric main effects and interactions affecting the drag generated by the LAS. This paper discusses the methods utilized to develop the experimental design, its execution, and the data analysis.
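
    A two-level fractional factorial of the kind this study used can be generated by aliasing extra factors onto interactions of a smaller full factorial. The toy design below screens 7 factors in 16 runs (a 2^(7-3) design); the particular generators are illustrative assumptions, not the ones chosen by the LAS team (whose design had 84 runs).

    ```python
    from itertools import product

    # Base 2^4 full factorial in factors A..D (16 runs), levels coded -1/+1;
    # three extra factors are aliased onto interactions (a 2^(7-3) design).
    runs = []
    for a, b, c, d in product((-1, 1), repeat=4):
        e = a * b * c        # generator E = ABC
        f = a * b * d        # generator F = ABD
        g = a * c * d        # generator G = ACD
        runs.append((a, b, c, d, e, f, g))

    print(len(runs))         # 16 runs screen 7 factors vs 2^7 = 128 combinations
    ```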

  5. Scattering Computations of Snow Aggregates from Simple Geometry Models

    NASA Astrophysics Data System (ADS)

    Liao, L.; Meneghini, R.; Nowell, H.; Liu, G.

    2012-12-01

    Accurately characterizing electromagnetic scattering from snow aggregates is one of the essential components in the development of algorithms for the GPM DPR and GMI. Recently, several realistic aggregate models have been developed using randomized procedures. Using pristine ice crystal habits found in nature as the basic elements of which the aggregates are made, more complex randomly aggregated structures can be formed to replicate snowflakes. For these particles, a numerical scheme is needed to compute the scattered fields. These computations, however, are usually time consuming, and are often limited to a certain range of particle sizes and to a few frequencies. The scattering results at other frequencies and sizes are then obtained by either interpolation or extrapolation from nearby computed points (anchor points). Because of the nonlinear nature of the scattering, particularly in the particle resonance region, this sometimes leads to severe errors if the number of anchor points is not sufficiently large to cover the spectral domain and particle size range. As an alternative to these complex models, simple geometric models, such as the sphere and spheroid, are useful for radar and radiometer applications if their scattering results can be shown to closely approximate those from complex aggregate structures. A great advantage of the simple models is their computational efficiency, owing to the existence of analytical solutions, so that the computations can easily be extended to as many frequencies and particle sizes as desired. In this study, two simple models are tested. One approach is to use a snow mass density defined as the ratio of the mass of the snow aggregate to the volume, where the volume is taken to be that of a sphere with a diameter equal to the maximum measured dimension of the aggregate; i.e., the diameter of the circumscribing sphere. Because of the way in which the aggregates are generated, where a size-density relation is used, the

  6. Evaluation of a cone beam computed tomography geometry for image guided small animal irradiation

    NASA Astrophysics Data System (ADS)

    Yang, Yidong; Armour, Michael; Kang-Hsin Wang, Ken; Gandhi, Nishant; Iordachita, Iulian; Siewerdsen, Jeffrey; Wong, John

    2015-07-01

    The conventional imaging geometry for small animal cone beam computed tomography (CBCT) is one in which a detector panel rotates around the head-to-tail axis of the imaged animal (‘tubular’ geometry). Another unusual but possible imaging geometry is one in which the detector panel rotates around the anterior-to-posterior axis of the animal (‘pancake’ geometry). The small animal radiation research platform developed at Johns Hopkins University employs the pancake geometry, in which a prone-positioned animal is rotated horizontally between an x-ray source and detector panel. This study assesses CBCT image quality in the pancake geometry and investigates potential methods for improvement. We compared CBCT images acquired in the pancake geometry with those acquired in the tubular geometry when the phantom/animal was placed upright, simulating the conventional CBCT geometry. Results showed that signal-to-noise and contrast-to-noise ratios in the pancake geometry were reduced in comparison to the tubular geometry at the same dose level, but the overall spatial resolution within the transverse plane of the imaged cylinder/animal was better in the pancake geometry. A modest exposure increase, up to two-fold, in the pancake geometry can improve image quality to a level close to the tubular geometry. Image quality can also be improved by inclining the animal, which reduces streak artifacts caused by bony structures. The major factor behind the inferior image quality in the pancake geometry is the elevated beam attenuation along the long axis of the phantom/animal and the consequently increased scatter-to-primary ratio in that orientation. Notwithstanding, the image quality in pancake-geometry CBCT is adequate to support image-guided animal positioning, while providing the unique advantages of non-coplanar and multiple-mouse irradiation. This study also provides useful knowledge about image quality in the two very different imaging geometries, i.e. pancake and tubular geometry

  7. Evaluation of a cone beam computed tomography geometry for image guided small animal irradiation.

    PubMed

    Yang, Yidong; Armour, Michael; Wang, Ken Kang-Hsin; Gandhi, Nishant; Iordachita, Iulian; Siewerdsen, Jeffrey; Wong, John

    2015-07-01

    The conventional imaging geometry for small animal cone beam computed tomography (CBCT) is one in which a detector panel rotates around the head-to-tail axis of the imaged animal ('tubular' geometry). Another unusual but possible imaging geometry is one in which the detector panel rotates around the anterior-to-posterior axis of the animal ('pancake' geometry). The small animal radiation research platform developed at Johns Hopkins University employs the pancake geometry, where a prone-positioned animal is rotated horizontally between an x-ray source and detector panel. This study aims to assess the CBCT image quality in the pancake geometry and investigate potential methods for improvement. We compared CBCT images acquired in the pancake geometry with those acquired in the tubular geometry when the phantom/animal was placed upright, simulating the conventional CBCT geometry. Results showed that signal-to-noise and contrast-to-noise ratios in the pancake geometry were reduced in comparison to the tubular geometry at the same dose level, but the overall spatial resolution within the transverse plane of the imaged cylinder/animal was better in the pancake geometry. A modest exposure increase, up to twofold, in the pancake geometry can improve image quality to a level close to that of the tubular geometry. Image quality can also be improved by inclining the animal, which reduces streak artifacts caused by bony structures. The major factor resulting in the inferior image quality in the pancake geometry is the elevated beam attenuation along the long axis of the phantom/animal and the consequently increased scatter-to-primary ratio in that orientation. Notwithstanding, the image quality in the pancake-geometry CBCT is adequate to support image-guided animal positioning, while providing the unique advantages of non-coplanar irradiation and irradiation of multiple mice. This study also provides useful knowledge about the image quality in the two very different imaging geometries, i.e. the pancake and tubular geometries, respectively.

  8. Compositional control of pore geometry in multivariate metal-organic frameworks: an experimental and computational study.

    PubMed

    Cadman, Laura K; Bristow, Jessica K; Stubbs, Naomi E; Tiana, Davide; Mahon, Mary F; Walsh, Aron; Burrows, Andrew D

    2016-03-14

    A new approach is reported for tailoring the pore geometry in five series of multivariate metal–organic frameworks (MOFs) based on the structure [Zn2(bdc)2(dabco)] (bdc = 1,4-benzenedicarboxylate, dabco = 1,4-diazabicyclo[2.2.2]octane), DMOF-1. A doping procedure has been adopted to form series of MOFs containing varying linker ratios. The series under investigation are [Zn2(bdc)(2-x)(bdc-Br)x(dabco)]·nDMF 1 (bdc-Br = 2-bromo-1,4-benzenedicarboxylate), [Zn2(bdc)(2-x)(bdc-I)x(dabco)]·nDMF 2 (bdc-I = 2-iodo-1,4-benzenedicarboxylate), [Zn2(bdc)(2-x)(bdc-NO2)x(dabco)]·nDMF 3 (bdc-NO2 = 2-nitro-1,4-benzenedicarboxylate), [Zn2(bdc)(2-x)(bdc-NH2)x(dabco)]·nDMF 4 (bdc-NH2 = 2-amino-1,4-benzenedicarboxylate) and [Zn2(bdc-Br)(2-x)(bdc-I)x(dabco)]·nDMF 5. Series 1-3 demonstrate a functionality-dependent pore geometry transition from the square, open pores of DMOF-1 to rhomboidal, narrow pores with an increasing proportion of the 2-substituted bdc linker, with the rhomboidal-pore MOFs also showing a temperature-dependent phase change. In contrast, all members of series 4 and 5 have uniform pore geometries. In series 4 this is a square pore topology, whilst series 5 exhibits the rhomboidal pore form. Computational analyses reveal that the pore size and shape in systems 1 and 2 are altered through non-covalent interactions between the organic linkers within the framework, and that this can be controlled by the ligand functionality and ratio. This approach affords the potential to tailor pore geometry and shape within MOFs through judicious choice of ligand ratios. PMID:26660286

  9. Geometric calculus: a new computational tool for Riemannian geometry

    SciTech Connect

    Moussiaux, A.; Tombal, P.

    1988-05-01

    We compare geometric calculus applied to Riemannian geometry with Cartan's exterior calculus method. The correspondence between the two methods is clearly established. We compare the results obtained with a package, written in an algebraic language, that performs general manipulations on multivectors. We see that geometric calculus is as powerful as exterior calculus.

  10. Ideal spiral bevel gears: A new approach to surface geometry

    NASA Technical Reports Server (NTRS)

    Huston, R. L.; Coy, J. J.

    1980-01-01

    The fundamental geometrical characteristics of spiral bevel gear tooth surfaces are discussed. The parametric representation of an ideal spiral bevel tooth is developed based on the elements of involute geometry, differential geometry, and fundamental gearing kinematics. A foundation is provided for the study of nonideal gears and the effects of deviations from ideal geometry on the contact stresses, lubrication, wear, fatigue life, and gearing kinematics.

  11. Adjoint Sensitivity Computations for an Embedded-Boundary Cartesian Mesh Method and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2006-01-01

    Cartesian-mesh methods are perhaps the most promising approach for addressing the issues of flow solution automation for aerodynamic design problems. In these methods, the discretization of the wetted surface is decoupled from that of the volume mesh. This not only enables fast and robust mesh generation for geometry of arbitrary complexity, but also facilitates access to geometry modeling and manipulation using parametric Computer-Aided Design (CAD) tools. Our goal is to combine the automation capabilities of Cartesian methods with an efficient computation of design sensitivities. We address this issue using the adjoint method, where the computational cost of the design sensitivities, or objective function gradients, is essentially independent of the number of design variables. In previous work, we presented an accurate and efficient algorithm for the solution of the adjoint Euler equations discretized on Cartesian meshes with embedded, cut-cell boundaries. Novel aspects of the algorithm included the computation of surface shape sensitivities for triangulations based on parametric-CAD models and the linearization of the coupling between the surface triangulation and the cut-cells. The objective of the present work is to extend our adjoint formulation to problems involving general shape changes. Central to this development is the computation of volume-mesh sensitivities to obtain a reliable approximation of the objective function gradient. Motivated by the success of mesh-perturbation schemes commonly used in body-fitted unstructured formulations, we propose an approach based on a local linearization of a mesh-perturbation scheme similar to the spring analogy. This approach circumvents most of the difficulties that arise due to non-smooth changes in the cut-cell layer as the boundary shape evolves and provides a consistent approximation to the exact gradient of the discretized objective function. A detailed gradient accuracy study is presented to verify our approach.
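
    The key property cited above, that the adjoint method prices the whole gradient at roughly one extra solve regardless of the number of design variables, can be illustrated on a generic discrete model problem. The sketch below is not the paper's Euler adjoint; a linear system stands in for the flow residual.

      import numpy as np

      # Model problem: residual A(x) u = b, objective J = g.u, with
      # A(x) = A0 + sum_i x_i * dA_i.  The adjoint method gives
      #   dJ/dx_i = -psi . (dA_i u),  where  A^T psi = g,
      # so one extra linear solve yields the gradient with respect to
      # all m design variables at once.
      n, m = 50, 10
      rng = np.random.default_rng(0)
      dA = [0.01 * rng.standard_normal((n, n)) for _ in range(m)]
      x = rng.standard_normal(m)
      A = 2.0 * np.eye(n) + sum(xi * dAi for xi, dAi in zip(x, dA))
      b = rng.standard_normal(n)
      g = rng.standard_normal(n)

      u = np.linalg.solve(A, b)        # one forward solve
      psi = np.linalg.solve(A.T, g)    # one adjoint solve
      grad = np.array([-psi @ (dAi @ u) for dAi in dA])

      # finite-difference check of a single component
      i, eps = 3, 1e-6
      u_pert = np.linalg.solve(A + eps * dA[i], b)
      print(grad[i], (g @ u_pert - g @ u) / eps)   # the two should agree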

  12. Molecular tailoring approach for geometry optimization of large molecules: Energy evaluation and parallelization strategies

    NASA Astrophysics Data System (ADS)

    Ganesh, V.; Dongare, Rameshwar K.; Balanarayan, P.; Gadre, Shridhar R.

    2006-09-01

    A linear-scaling scheme for estimating the electronic energy, gradients, and Hessian of a large molecule at the ab initio level of theory based on fragment set cardinality is presented. With this proposition, a general, cardinality-guided molecular tailoring approach (CG-MTA) for ab initio geometry optimization of large molecules is implemented. The method employs energy gradients extracted from fragment wave functions, enabling computations otherwise impractical on PC hardware. Further, the method is readily amenable to large-scale coarse-grain parallelization with minimal communication among nodes, resulting in a near-linear speedup. CG-MTA is applied for density-functional-theory-based geometry optimization of a variety of molecules including α-tocopherol, taxol, γ-cyclodextrin, and two conformations of polyglycine. In the tests performed, energy and gradient estimates obtained from CG-MTA during optimization runs show excellent agreement with those obtained from actual computation. The accuracy of the Hessian obtained employing CG-MTA provides good hope for the application of Hessian-based geometry optimization to large molecules.
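
    The phrase "fragment set cardinality" refers to an inclusion-exclusion expression over fragment intersections. Below is a schematic sketch of that bookkeeping, with a toy additive energy standing in for actual fragment wave-function energies; it illustrates the cardinality expression, not the CG-MTA code itself.

      from itertools import combinations

      def cardinality_energy(fragments, energy_of):
          """Inclusion-exclusion estimate of a total energy from
          overlapping fragments: sum over all nonempty intersections of
          k fragments, with sign (-1)**(k + 1)."""
          total = 0.0
          for k in range(1, len(fragments) + 1):
              for combo in combinations(fragments, k):
                  inter = frozenset.intersection(*combo)
                  if inter:
                      total += (-1) ** (k + 1) * energy_of(inter)
          return total

      # toy check with an additive 'energy' of 1 unit per atom: the
      # estimate must equal the atom count of the union, here 6
      frags = [frozenset({1, 2, 3}), frozenset({3, 4}), frozenset({4, 5, 6})]
      print(cardinality_energy(frags, lambda s: float(len(s))))  # 6.0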

  13. A Geometry Based Infra-Structure for Computational Analysis and Design

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1998-01-01

    ). This is particularly onerous for modern CAD systems based on solid modeling. The part was a proper solid, and in the translation to IGES it has lost this important characteristic. STEP is another standard for CAD data that exists and supports the concept of a solid. The problem with STEP is that a solid modeling geometry kernel is required to query and manipulate the data within this type of file. (2) 'Good' Geometry. A bottleneck in getting results from a solver is the construction of proper geometry to be fed to the grid generator. With 'good' geometry a grid can be constructed in tens of minutes (even with a complex configuration) using unstructured techniques. Adroit multi-block methods are not far behind. This means that a million-node steady-state solution can be computed on the order of hours (using current high-performance computers) starting from this 'good' geometry. Unfortunately, the geometry usually transmitted from the CAD system is not 'good' in the grid generator sense. The grid generator needs smooth, closed, solid geometry. It can take a week (or more) of interaction with the CAD output (sometimes by hand) before the process can begin. (3) One-way Communication -- All information travels on from one phase to the next. This makes procedures like node adaptation difficult when attempting to add or move nodes that sit on bounding surfaces (when the actual surface data has been lost after the grid generation phase). Until this process can be automated, more complex problems, such as multi-disciplinary analysis or using the above procedure for design, become prohibitive. There is also no way to easily deal with this system in a modular manner. One can only replace the grid generator, for example, if the software reads and writes the same files. Instead of the serial approach to analysis as described above, CAPRI takes a geometry-centric approach. This makes the actual geometry (not a discretized version) accessible to all phases of the

  14. Transport Equation Based Wall Distance Computations Aimed at Flows With Time-Dependent Geometry

    NASA Technical Reports Server (NTRS)

    Tucker, Paul G.; Rumsey, Christopher L.; Bartels, Robert E.; Biedron, Robert T.

    2003-01-01

    Eikonal, Hamilton-Jacobi and Poisson equations can be used for economical nearest wall distance computation and modification. Economical computations may be especially useful for aeroelastic and adaptive grid problems for which the grid deforms, and the nearest wall distance needs to be repeatedly computed. Modifications are directed at remedying turbulence model defects. For complex grid structures, implementation of the Eikonal and Hamilton-Jacobi approaches is not straightforward. This prohibits their use in industrial CFD solvers. However, both the Eikonal and Hamilton-Jacobi equations can be written in advection and advection-diffusion forms, respectively. These, like the Poisson's Laplacian, are commonly occurring industrial CFD solver elements. Use of the NASA CFL3D code to solve the Eikonal and Hamilton-Jacobi equations in advective-based forms is explored. The advection-based distance equations are found to have robust convergence. Geometries studied include single and two element airfoils, wing body and double delta configurations along with a complex electronics system. It is shown that for Eikonal accuracy, upwind metric differences are required. The Poisson approach is found effective and, since it does not require offset metric evaluations, easiest to implement. The sensitivity of flow solutions to wall distance assumptions is explored. Generally, results are not greatly affected by wall distance traits.
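
    The Poisson variant admits a compact demonstration. In one dimension, solving the Poisson equation with a unit source and recovering distance via d = -|grad phi| + sqrt(|grad phi|^2 + 2*phi) (the recovery formula commonly used with this approach) reproduces the exact wall distance of a plane channel. The discretization details below are illustrative, not CFL3D's.

      import numpy as np

      L, n = 1.0, 101
      x = np.linspace(0.0, L, n)
      h = x[1] - x[0]

      # solve phi'' = -1 with phi = 0 at both walls of a plane channel
      A = np.zeros((n, n))
      rhs = -np.ones(n)
      A[0, 0] = A[-1, -1] = 1.0
      rhs[0] = rhs[-1] = 0.0
      for i in range(1, n - 1):
          A[i, i - 1] = A[i, i + 1] = 1.0 / h**2
          A[i, i] = -2.0 / h**2
      phi = np.linalg.solve(A, rhs)

      # recover wall distance from phi and its gradient
      gphi = np.gradient(phi, h)
      d = -np.abs(gphi) + np.sqrt(gphi**2 + 2.0 * phi)
      print(np.max(np.abs(d - np.minimum(x, L - x))))  # tiny: exact here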

  15. Interactive Geometry in the B.C. (Before Computers) Era

    ERIC Educational Resources Information Center

    Whittaker, Heather; Johnson, Iris DeLoach

    2005-01-01

    A 3-by-5 card is used to represent two or more sets of parallel lines, four right angles, opposite sides congruent and to investigate the Pythagorean theorem, similar triangles, and the tangent ratio before the introduction of computers. Students were asked to draw two parallel lines, cross them with a transversal and label the angles, which…

  16. DMG-α--a computational geometry library for multimolecular systems.

    PubMed

    Szczelina, Robert; Murzyn, Krzysztof

    2014-11-24

    The DMG-α library grants researchers in the fields of computational biology, chemistry, and biophysics access to open-source, easy to use, and intuitive software for performing fine-grained geometric analysis of molecular systems. The library is capable of computing power diagrams (weighted Voronoi diagrams) in three dimensions with 3D periodic boundary conditions, computing approximate projective 2D Voronoi diagrams on arbitrarily defined surfaces, performing shape-property recognition using α-shape theory, and performing exact Solvent Accessible Surface Area (SASA) computation. The software is written mainly as a template-based C++ library for greater performance, but a rich Python interface (pydmga) is provided as a convenient way to manipulate the DMG-α routines. To illustrate possible applications of the DMG-α library, we present results of sample analyses which allowed us to determine nontrivial geometric properties of two Escherichia coli-specific lipids as emerging from molecular dynamics simulations of relevant model bilayers. PMID:25296168

  17. Computer-Generated Geometry Instruction: A Preliminary Study

    ERIC Educational Resources Information Center

    Kang, Helen W.; Zentall, Sydney S.

    2011-01-01

    This study hypothesized that increased intensity of graphic information, presented in computer-generated instruction, could be differentially beneficial for students with hyperactivity and inattention by improving their ability to sustain attention and hold information in-mind. To this purpose, 18 2nd-4th grade students, recruited from general…

  18. Application of Computer Axial Tomography (CAT) to measuring crop canopy geometry. [corn and soybeans

    NASA Technical Reports Server (NTRS)

    Bauer, M. E.; Vanderbilt, V. C. (Principal Investigator); Kilgore, R. W.

    1981-01-01

    The feasibility of using the principles of computer axial tomography (CAT) to quantify the structure of crop canopies was investigated, because six variables are needed to describe the position-orientation with time of a small piece of canopy foliage. Several cross sections were cut through the foliage of healthy, green corn and soybean canopies in the dent and full pod development stages, respectively. A photograph of each cross section, representing the intersection of a plane with the foliage, was enlarged, and the air-foliage boundaries delineated by the plane were digitized. A computer program was written and used to reconstruct the cross section of the canopy. The approach used in applying optical computer axial tomography to measuring crop canopy geometry shows promise of being able to provide the needed geometric information for input data to canopy reflectance models. The difficulty of using the CAT scanner to measure large canopies of crops like corn is discussed, and a solution is proposed involving the measurement of plants one at a time.

  19. Gauss-Green cubature and moment computation over arbitrary geometries

    NASA Astrophysics Data System (ADS)

    Sommariva, Alvise; Vianello, Marco

    2009-09-01

    We have implemented in Matlab a Gauss-like cubature formula over arbitrary bivariate domains with a piecewise regular boundary, which is tracked by splines of maximum degree p (spline curvilinear polygons). The formula is exact for polynomials of degree at most 2n−1 using N ≈ cmn² nodes, 1 ≤ c ≤ p, m being the total number of points given on the boundary. It does not need any decomposition of the domain, but relies directly on univariate Gauss-Legendre quadrature via Green's integral formula. Several numerical tests are presented, including computation of standard as well as orthogonal moments over a nonstandard planar region.
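
    The idea of trading a domain integral for boundary Gauss-Legendre quadrature via Green's formula can be shown on a straight-edged polygon. The sketch below is a toy analogue of the spline-boundary cubature, not the authors' Matlab code.

      import numpy as np

      def gauss_green_integral(f, poly, n=8):
          """Integrate f(x, y) over a counterclockwise polygon using only
          1D Gauss-Legendre rules, via Green's formula: the domain
          integral of f equals the boundary integral of F dy, where
          F(x, y) is the integral of f(t, y) dt from a to x."""
          t, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
          a = min(p[0] for p in poly)                 # reference abscissa
          verts = list(poly) + [poly[0]]
          total = 0.0
          for (x0, y0), (x1, y1) in zip(verts[:-1], verts[1:]):
              for ti, wi in zip(t, w):                # quadrature along the edge
                  s = 0.5 * (ti + 1.0)
                  xq, yq = x0 + s * (x1 - x0), y0 + s * (y1 - y0)
                  mid, half = 0.5 * (a + xq), 0.5 * (xq - a)
                  F = half * np.sum(w * f(mid + half * t, yq))  # inner rule
                  total += 0.5 * wi * F * (y1 - y0)
          return total

      # check on the unit square: the integral of x*y over [0,1]^2 is 1/4
      square = [(0, 0), (1, 0), (1, 1), (0, 1)]
      print(gauss_green_integral(lambda x, y: x * y, square))  # 0.25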

  1. MHRDRing Z-Pinches and Related Geometries: Four Decades of Computational Modeling Using Still Unconventional Methods

    SciTech Connect

    Lindemuth, Irvin R.

    2009-01-21

    For approximately four decades, Z-pinches and related geometries have been computationally modeled using unique Alternating Direction Implicit (ADI) numerical methods. Computational results have provided illuminating and often provocative interpretations of experimental results. A number of past and continuing applications are reviewed and discussed.

  2. Potts models with magnetic field: Arithmetic, geometry, and computation

    NASA Astrophysics Data System (ADS)

    Dasu, Shival; Marcolli, Matilde

    2015-11-01

    We give a sheaf theoretic interpretation of Potts models with external magnetic field, in terms of constructible sheaves and their Euler characteristics. We show that the polynomial countability question for the hypersurfaces defined by the vanishing of the partition function is affected by changes in the magnetic field: elementary examples suffice to see non-polynomially countable cases that become polynomially countable after a perturbation of the magnetic field. The same recursive formula for the Grothendieck classes, under edge-doubling operations, holds as in the case without magnetic field, but the closed formulae for specific examples like banana graphs differ in the presence of magnetic field. We give examples of computation of the Euler characteristic with compact support, for the set of real zeros, and find a similar exponential growth with the size of the graph. This can be viewed as a measure of topological and algorithmic complexity. We also consider the computational complexity question for evaluations of the polynomial, and show both tractable and NP-hard examples, using dynamic programming.

  3. Computational Approaches to Vestibular Research

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Wade, Charles E. (Technical Monitor)

    1994-01-01

    The Biocomputation Center at NASA Ames Research Center is dedicated to a union between computational, experimental and theoretical approaches to the study of neuroscience and of life sciences in general. The current emphasis is on computer reconstruction and visualization of vestibular macular architecture in three dimensions (3-D), and on mathematical modeling and computer simulation of neural activity in the functioning system. Our methods are being used to interpret the influence of spaceflight on mammalian vestibular maculas in a model system, that of the adult Sprague-Dawley rat. More than twenty 3-D reconstructions of type I and type II hair cells and their afferents have been completed by digitization of contours traced from serial sections photographed in a transmission electron microscope. This labor-intensive method has now been replaced by a semiautomated method developed in the Biocomputation Center in which conventional photography is eliminated. All viewing, storage and manipulation of original data is done using Silicon Graphics workstations. Recent improvements to the software include a new mesh generation method for connecting contours. This method will permit the investigator to describe any surface, regardless of complexity, including highly branched structures such as are routinely found in neurons. This same mesh can be used for 3-D, finite volume simulation of synapse activation and voltage spread on neuronal surfaces visualized via the reconstruction process. These simulations help the investigator interpret the relationship between neuroarchitecture and physiology, and are of assistance in determining which experiments will best test theoretical interpretations. Data are also used to develop abstract, 3-D models that dynamically display neuronal activity ongoing in the system. Finally, the same data can be used to visualize the neural tissue in a virtual environment. Our exhibit will depict capabilities of our computational approaches and

  4. Computational aeroelastic analysis of aircraft wings including geometry nonlinearity

    NASA Astrophysics Data System (ADS)

    Tian, Binyu

    The objective of the present study is to show the ability to solve fluid-structure interaction problems more realistically by including the geometric nonlinearity of the structure, so that the aeroelastic analysis can be extended into the onset of flutter, or into the post-flutter regime. A nonlinear Finite Element Analysis software package is developed based on the second Piola-Kirchhoff stress and Green-Lagrange strain. The second Piola-Kirchhoff stress and Green-Lagrange strain form a pair of energetically conjugate tensors that can accommodate arbitrarily large structural deformations and deflections, suitable for studying the flutter phenomenon. Since both of these tensors are objective tensors, i.e., rigid-body motion makes no contribution to their components, the movement of the body, including maneuvers and deformation, can be included. The nonlinear Finite Element Analysis software developed in this study is verified against ANSYS, NASTRAN, ABAQUS, and IDEAS for linear static, nonlinear static, linear dynamic and nonlinear dynamic structural solutions. To solve the flow problems with the Euler/Navier-Stokes equations, the current nonlinear structural software is then embedded into ENSAERO, an aeroelastic analysis software package developed at NASA Ames Research Center. The coupling of the two codes, both nonlinear in their own fields, is achieved by the domain decomposition method first proposed by Guruswamy. A procedure has been established for the aeroelastic analysis process. Aeroelastic analysis results have been obtained for a fighter wing in the transonic regime for various cases. The influence of dynamic pressure on flutter has been checked for a range of Mach numbers. Even though the current analysis matches the general aeroelastic characteristics, the numerical values do not match previous studies very well and need further investigation. The flutter aeroelastic analysis results have also been plotted at several time points. The influences of the deforming wing geometry can be well seen

  5. A Self-Paced Instructional Approach for Engineering Descriptive Geometry.

    ERIC Educational Resources Information Center

    Beck, Eugene J.

    These materials, used in a self-paced course in engineering descriptive geometry for freshman and sophomore college-level students, cover three modules: basic spatial relationships; intersections, parallelism, and perpendicularity; and graphic analysis and development. Each module contains five instructional units. Each unit includes a rationale,…

  6. Geometry of Covariance Matrices and Computation of Median

    NASA Astrophysics Data System (ADS)

    Yang, Le; Arnaudon, Marc; Barbaresco, Frédéric

    2011-03-01

    In this paper, we consider the manifold of covariance matrices of order n parametrized by reflection coefficients, which are derived from Levinson's recursion for the autoregressive model. The explicit expression of the reparametrization and its inverse are obtained. With the Riemannian metric given by the Hessian of a Kähler potential, we show that the manifold is in fact a Cartan-Hadamard manifold with sectional curvature bounded below by -4. The explicit expressions of the geodesics are also obtained. We then introduce the notion of the Riemannian median of points lying on a Riemannian manifold and give a simple algorithm to compute it. Finally, some simulation examples are given to illustrate the applications of the median method to radar signal processing.
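
    The median algorithm referred to above is a Weiszfeld-type fixed-point iteration. The sketch below illustrates the idea on the manifold of SPD matrices under the common affine-invariant metric; that metric is an assumption for illustration, since the paper works with a different metric derived from a Kähler potential and reflection coefficients.

      import numpy as np
      from scipy.linalg import sqrtm, logm, expm, inv

      def log_and_dist(X, P):
          """Log map at X and geodesic distance to P on the SPD manifold
          (affine-invariant metric; the paper's metric differs)."""
          Xs = np.real(sqrtm(X))
          Xsi = inv(Xs)
          S = np.real(logm(Xsi @ P @ Xsi))
          return Xs @ S @ Xs, np.linalg.norm(S)

      def exp_map(X, V):
          Xs = np.real(sqrtm(X))
          Xsi = inv(Xs)
          return Xs @ np.real(expm(Xsi @ V @ Xsi)) @ Xs

      def riemannian_median(mats, iters=100, step=0.5, eps=1e-9):
          """Weiszfeld-type iteration: step along the weighted mean of
          the unit tangent vectors pointing at the sample matrices."""
          X = sum(mats) / len(mats)          # arithmetic mean as a start
          for _ in range(iters):
              logs, dists = zip(*(log_and_dist(X, P) for P in mats))
              w = [1.0 / max(d, eps) for d in dists]
              V = sum(wi * L for wi, L in zip(w, logs)) / sum(w)
              X = exp_map(X, step * V)
          return X

      rng = np.random.default_rng(1)
      mats = []
      for _ in range(5):
          G = rng.standard_normal((3, 3))
          mats.append(G @ G.T + 3.0 * np.eye(3))   # random SPD samples
      print(np.round(riemannian_median(mats), 3))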

  7. Software-based geometry operations for 3D computer graphics

    NASA Astrophysics Data System (ADS)

    Sima, Mihai; Iancu, Daniel; Glossner, John; Schulte, Michael; Mamidi, Suman

    2006-02-01

    In order to support a broad dynamic range and a high degree of precision, many of 3D rendering's fundamental algorithms have traditionally been performed in floating point. However, fixed-point data representation is preferable to floating-point representation in graphics applications on embedded devices, where performance is of paramount importance, while the dynamic range and precision requirements are limited due to the small display sizes (current PDA displays are 640 × 480 (VGA), while cell-phone displays are even smaller). In this paper we analyze the efficiency of a CORDIC-augmented Sandbridge processor when implementing a vertex processor in software using fixed-point arithmetic. A CORDIC-based solution for vertex processing exhibits a number of advantages over classical Multiply-and-Accumulate solutions. First, since a single primitive is used to describe the computation, the code can easily be vectorized and multithreaded, and thus fits the major Sandbridge architectural features. Second, since a CORDIC iteration consists of only a shift operation followed by an addition, the computation may be deeply pipelined. Initially, we outline the Sandbridge architecture extension, which encompasses a CORDIC functional unit and the associated instructions. Then, we consider rigid-body rotation, lighting, exponentiation, vector normalization, and perspective division (which are some of the most important data-intensive 3D graphics kernels) and propose a scheme to implement them on the CORDIC-augmented Sandbridge processor. Preliminary results indicate that the performance improvement with the extended instruction set ranges from 3× to 10× (with the exception of rigid-body rotation).
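
    A CORDIC iteration really is just a shift, an add, and a table lookup, which is what makes the deep pipelining mentioned above possible. A minimal floating-point sketch of the rotation mode follows (hardware versions use integer shifts; the function name is illustrative).

      import math

      def cordic_rotate(x, y, angle, n=32):
          """Rotate (x, y) by angle (radians, |angle| < ~1.74) with CORDIC:
          each iteration is a shift-and-add in fixed-point hardware."""
          K = 1.0
          for i in range(n):
              K /= math.sqrt(1.0 + 4.0 ** (-i))    # accumulated CORDIC gain
          z = angle
          for i in range(n):
              d = 1.0 if z >= 0.0 else -1.0
              x, y = x - d * y * 2.0 ** (-i), y + d * x * 2.0 ** (-i)
              z -= d * math.atan(2.0 ** (-i))
          return x * K, y * K

      print(cordic_rotate(1.0, 0.0, math.pi / 3))  # ~(0.5000, 0.8660)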

  8. Fuzzy multiple linear regression: A computational approach

    NASA Technical Reports Server (NTRS)

    Juang, C. H.; Huang, X. H.; Fleming, J. W.

    1992-01-01

    This paper presents a new computational approach for performing fuzzy regression. In contrast to Bardossy's approach, the new approach, while dealing with fuzzy variables, closely follows the conventional regression technique. In this approach, treatment of fuzzy input is more 'computational' than 'symbolic.' The following sections first outline the formulation of the new approach, then deal with the implementation and computational scheme, and this is followed by examples to illustrate the new procedure.

  9. Redundancy approaches in spacecraft computers

    NASA Astrophysics Data System (ADS)

    Schonfeld, Chaim

    Twelve redundancy techniques for spacecraft computers are analyzed. The redundancy schemes include: a single unit; two active units; triple modular redundancy; NMR; a single unit with one and two spares; two units with one, two, and three spares; triple units with one and two spares; and a single unit with a spare per module; the basic properties of these schemes are described. The reliability of each scheme is evaluated as a function of the reliability of a single unit. The redundancy schemes are compared in terms of reliability, the number of failures the system can tolerate, coverage, recovery time, and mean time between failure improvement. The error detection and recovery systems and the random access memory redundancy of the schemes are examined. The data reveal that the single unit with a spare per module is the most effective redundancy approach; a description of the scheme is provided.
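
    The reliability comparison described above can be reproduced for the simplest schemes in closed form, assuming independent failures and a perfect voter (assumptions not stated in the abstract):

      from math import comb

      def r_duplex(R):   # two active units; up if at least one works
          return 1.0 - (1.0 - R) ** 2

      def r_tmr(R):      # triple modular redundancy with majority voting
          return 3.0 * R**2 - 2.0 * R**3

      def r_nmr(R, n):   # n-modular redundancy, majority vote, n odd
          k = n // 2 + 1
          return sum(comb(n, i) * R**i * (1.0 - R) ** (n - i)
                     for i in range(k, n + 1))

      for R in (0.90, 0.99):
          print(R, r_duplex(R), r_tmr(R), r_nmr(R, 5))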

  10. The flux-coordinate independent approach applied to X-point geometries

    SciTech Connect

    Hariri, F. Hill, P.; Ottaviani, M.; Sarazin, Y.

    2014-08-15

    A Flux-Coordinate Independent (FCI) approach for anisotropic systems, not based on magnetic flux coordinates, has been introduced in Hariri and Ottaviani [Comput. Phys. Commun. 184, 2419 (2013)]. In this paper, we show that the approach can tackle magnetic configurations including X-points. Using the code FENICIA, an equilibrium with a magnetic island has been used to show the robustness of the FCI approach to cases in which a magnetic separatrix is present in the system, either by design or as a consequence of instabilities. Numerical results are in good agreement with the analytic solutions of the sound-wave propagation problem. Conservation properties are verified. Finally, the critical gain of the FCI approach in situations including the magnetic separatrix with an X-point is demonstrated by a fast convergence of the code with the numerical resolution in the direction of symmetry. The results highlighted in this paper show that the FCI approach can efficiently deal with X-point geometries.

  11. New approach based on tetrahedral-mesh geometry for accurate 4D Monte Carlo patient-dose calculation.

    PubMed

    Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Kim, Seonghoon; Sohn, Jason W

    2015-02-21

    In the present study, to achieve accurate 4D Monte Carlo dose calculation in radiation therapy, we devised a new approach that combines (1) modeling of the patient body using tetrahedral-mesh geometry based on the patient's 4D CT data, (2) continuous movement/deformation of the tetrahedral patient model by interpolation of deformation vector fields acquired through deformable image registration, and (3) direct transportation of radiation particles during the movement and deformation of the tetrahedral patient model. The results of our feasibility study show that it is certainly possible to construct 4D patient models (= phantoms) with sufficient accuracy using the tetrahedral-mesh geometry and to directly transport radiation particles during continuous movement and deformation of the tetrahedral patient model. This new approach not only produces more accurate dose distribution in the patient but also replaces the current practice of using multiple 3D voxel phantoms and combining multiple dose distributions after Monte Carlo simulations. For routine clinical application of our new approach, the use of fast automatic segmentation algorithms is a must. In order to achieve, simultaneously, both dose accuracy and computation speed, the number of tetrahedrons for the lungs should be optimized. Although the current computation speed of our new 4D Monte Carlo simulation approach is slow (i.e. ~40 times slower than that of the conventional dose accumulation approach), this problem is resolvable by developing, in Geant4, a dedicated navigation class optimized for particle transportation in tetrahedral-mesh geometry. PMID:25615567

  12. Computational geometry for patient-specific reconstruction and meshing of blood vessels from MR and CT angiography.

    PubMed

    Antiga, Luca; Ene-Iordache, Bogdan; Remuzzi, Andrea

    2003-05-01

    Investigation of three-dimensional (3-D) geometry and fluid dynamics in human arteries is an important issue in vascular disease characterization and assessment. Thanks to recent advances in magnetic resonance (MR) and computed tomography (CT), it is now possible to address the problem of patient-specific modeling of blood vessels, in order to take into account the interindividual anatomic variability of vasculature. Generation of models suitable for computational fluid dynamics is still commonly performed by semiautomatic procedures, in general based on operator-dependent tasks, which cannot be easily extended to a significant number of clinical cases. In this paper, we overcome these limitations by making use of computational geometry techniques. In particular, 3-D modeling was carried out by means of a 3-D level set approach. Model editing was also implemented, ensuring a harmonic distribution of mean curvature vectors on the surface, and model geometric analysis was performed with a novel approach based on solving the Eikonal equation on the Voronoi diagram. This approach provides calculation of central paths, maximum inscribed sphere estimation and geometric characterization of the surface. Generation of adaptive-thickness boundary-layer finite elements is finally presented. The use of the techniques presented here makes it possible to introduce patient-specific modeling of blood vessels at the clinical level. PMID:12846436
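
    One geometric primitive from such pipelines, the maximum inscribed sphere estimate, can be sketched in two dimensions with a Euclidean distance transform on a synthetic segmented "vessel". This is a toy stand-in for the Voronoi-diagram/Eikonal machinery described above, not the authors' method.

      import numpy as np
      from scipy.ndimage import distance_transform_edt

      # synthetic segmented 'vessel': a curved 2D tube of varying radius
      h, w = 200, 200
      yy, xx = np.mgrid[0:h, 0:w]
      center = 100 + 40 * np.sin(xx / 30.0)
      radius = 8 + 4 * np.sin(xx / 50.0)
      vessel = np.abs(yy - center) < radius

      dist = distance_transform_edt(vessel)   # distance to the vessel wall
      i, j = np.unravel_index(np.argmax(dist), dist.shape)
      print(f"max inscribed radius ~ {dist[i, j]:.1f} px at ({i}, {j})")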

  13. Comparative Effects of Two Modes of Computer-Assisted Instructional Package on Solid Geometry Achievement

    ERIC Educational Resources Information Center

    Gambari, Isiaka Amosa; Ezenwa, Victoria Ifeoma; Anyanwu, Romanus Chogozie

    2014-01-01

    The study examined the effects of two modes of computer-assisted instructional package on solid geometry achievement amongst senior secondary school students in Minna, Niger State, Nigeria. Also, the influence of gender on the performance of students exposed to CAI(AT) and CAI(AN) packages were examined. This study adopted a pretest-posttest…

  14. Multivariate geometry as an approach to algal community analysis

    USGS Publications Warehouse

    Allen, T.F.H.; Skagen, S.

    1973-01-01

    Multivariate analyses are put in the context of more usual approaches to phycological investigations. The intuitive common sense involved in methods of ordination, classification and discrimination is emphasised through simple geometric accounts which avoid jargon and matrix algebra. Warnings are given that artifacts result from technique abuses by the naive or over-enthusiastic. An analysis of a simple periphyton data set is presented as an example of the approach. Suggestions are made as to situations in phycological investigations where the techniques could be appropriate. The discipline is reprimanded for its neglect of the multivariate approach.

  15. Machine Computation; An Algorithmic Approach.

    ERIC Educational Resources Information Center

    Gonzalez, Richard F.; McMillan, Claude, Jr.

    Designed for undergraduate social science students, this textbook concentrates on using the computer in a straightforward way to manipulate numbers and variables in order to solve problems. The text is problem oriented and assumes that the student has had little prior experience with either a computer or programing languages. An introduction to…

  16. Learning theoretic approach to differential and perceptual geometry: I. Curvature and torsion are the independent components of space curves

    NASA Astrophysics Data System (ADS)

    Assadi, Amir H.; Eghbalnia, Hamid

    2000-06-01

    In standard differential geometry, the Fundamental Theorem of Space Curves states that two differential invariants of a curve, namely curvature and torsion, determine its geometry, or equivalently, the isometry class of the curve up to rigid motions in the Euclidean three-dimensional space. Consider a physical model of a space curve made from a sufficiently thin, yet visible rigid wire, and the problem of perceptual identification (by a human observer or a robot) of two given physical model curves. In a previous paper (perceptual geometry) we have emphasized a learning theoretic approach to construct a perceptual geometry of the surfaces in the environment. In particular, we have described a computational method for mathematical representation of objects in the perceptual geometry inspired by the ecological theory of Gibson, and adhering to the principles of Gestalt in perceptual organization of vision. In this paper, we continue our learning theoretic treatment of perceptual geometry of objects, focusing on the case of physical models of space curves. In particular, we address the question of perceptually distinguishing two possibly novel space curves based on the observer's prior visual experience of physical models of curves in the environment. The Fundamental Theorem of Space Curves inspires an analogous result in perceptual geometry as follows. We apply learning theory to the statistics of a sufficiently rich collection of physical models of curves, to derive two statistically independent local functions, which we call, by analogy, the curvature and torsion. This pair of invariants distinguishes physical models of curves in the sense of perceptual geometry. That is, in an appropriate resolution, an observer can distinguish two perceptually identical physical models in different locations. If these pairs of functions are approximately the same for two given space curves, then after possibly some changes of viewing planes, the observer confirms the two are the same.
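
    The classical invariants underlying the analogy, curvature and torsion, can be computed for a sampled curve from the standard formulas kappa = |r' x r''| / |r'|^3 and tau = ((r' x r'') . r''') / |r' x r''|^2. A sketch, checked against a helix whose invariants are constant:

      import numpy as np

      t = np.linspace(0.0, 4.0 * np.pi, 2001)
      a, b = 2.0, 0.5
      r = np.stack([a * np.cos(t), a * np.sin(t), b * t], axis=1)  # a helix

      dt = t[1] - t[0]
      d1 = np.gradient(r, dt, axis=0)
      d2 = np.gradient(d1, dt, axis=0)
      d3 = np.gradient(d2, dt, axis=0)

      c = np.cross(d1, d2)
      kappa = np.linalg.norm(c, axis=1) / np.linalg.norm(d1, axis=1) ** 3
      tau = np.einsum('ij,ij->i', c, d3) / np.linalg.norm(c, axis=1) ** 2

      # analytic values: kappa = a/(a^2+b^2), tau = b/(a^2+b^2)
      print(kappa[1000], a / (a**2 + b**2))   # ~0.4706
      print(tau[1000], b / (a**2 + b**2))     # ~0.1176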

  17. Computation and Visualization of Casimir Forces in Arbitrary Geometries: Nonmonotonic Lateral-Wall Forces and the Failure of Proximity-Force Approximations

    SciTech Connect

    Rodriguez, Alejandro; Ibanescu, Mihai; Joannopoulos, J. D.; Johnson, Steven G.; Iannuzzi, Davide; Capasso, Federico

    2007-08-24

    We present a method of computing Casimir forces for arbitrary geometries, with any desired accuracy, that can directly exploit the efficiency of standard numerical-electromagnetism techniques. Using the simplest possible finite-difference implementation of this approach, we obtain both agreement with past results for cylinder-plate geometries, and also present results for new geometries. In particular, we examine a pistonlike problem involving two dielectric and metallic squares sliding between two metallic walls, in two and three dimensions, respectively, and demonstrate nonadditive and nonmonotonic changes in the force due to these lateral walls.

  18. Computation and visualization of Casimir forces in arbitrary geometries: nonmonotonic lateral-wall forces and the failure of proximity-force approximations.

    PubMed

    Rodriguez, Alejandro; Ibanescu, Mihai; Iannuzzi, Davide; Capasso, Federico; Joannopoulos, J D; Johnson, Steven G

    2007-08-24

    We present a method of computing Casimir forces for arbitrary geometries, with any desired accuracy, that can directly exploit the efficiency of standard numerical-electromagnetism techniques. Using the simplest possible finite-difference implementation of this approach, we obtain both agreement with past results for cylinder-plate geometries, and also present results for new geometries. In particular, we examine a pistonlike problem involving two dielectric and metallic squares sliding between two metallic walls, in two and three dimensions, respectively, and demonstrate nonadditive and nonmonotonic changes in the force due to these lateral walls. PMID:17930932

  19. Computer Algebra, Instrumentation and the Anthropological Approach

    ERIC Educational Resources Information Center

    Monaghan, John

    2007-01-01

    This article considers research and scholarship on the use of computer algebra in mathematics education following the instrumentation and the anthropological approaches. It outlines what these approaches are, positions them with regard to other approaches, examines tensions between the two approaches and makes suggestions for how work in this…

  20. Computational approaches to motor control

    PubMed Central

    Flash, Tamar; Sejnowski, Terrence J

    2010-01-01

    New concepts and computational models that integrate behavioral and neurophysiological observations have addressed several of the most fundamental long-standing problems in motor control. These problems include the selection of particular trajectories among the large number of possibilities, the solution of inverse kinematics and dynamics problems, motor adaptation and the learning of sequential behaviors. PMID:11741014

  1. Comparative study of auxetic geometries by means of computer-aided design and engineering

    NASA Astrophysics Data System (ADS)

    Álvarez Elipe, Juan Carlos; Díaz Lantada, Andrés

    2012-10-01

    Auxetic materials (or metamaterials) are those with a negative Poisson ratio (NPR) and display the unexpected property of lateral expansion when stretched, as well as an equal and opposing densification when compressed. Such geometries are being progressively employed in the development of novel products, especially in the fields of intelligent expandable actuators, shape morphing structures and minimally invasive implantable devices. Although several auxetic and potentially auxetic geometries have been summarized in previous reviews and research, precise information regarding relevant properties for design tasks is not always provided. In this study we present a comparative study of two-dimensional and three-dimensional auxetic geometries carried out by means of computer-aided design and engineering tools (from now on CAD-CAE). The first part of the study is focused on the development of a CAD library of auxetics. Once the library is developed we simulate the behavior of the different auxetic geometries and elaborate a systematic comparison, considering relevant properties of these geometries, such as Poisson ratio(s), maximum volume or area reductions attainable and equivalent Young’s modulus, hoping it may provide useful information for future designs of devices based on these interesting structures.
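
    The defining quantity is simple enough to state in code; the sample strains below are illustrative:

      def poisson_ratio(eps_axial, eps_transverse):
          """nu = -eps_transverse / eps_axial from a tensile test; auxetic
          (NPR) structures widen under tension, so nu comes out negative."""
          return -eps_transverse / eps_axial

      print(poisson_ratio(0.010, -0.003))  # conventional material: +0.3
      print(poisson_ratio(0.010, 0.003))   # auxetic lattice:       -0.3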

  2. Computation of Transverse Injection Into Supersonic Crossflow With Various Injector Orifice Geometries

    NASA Technical Reports Server (NTRS)

    Foster, Lancert; Engblom, William A.

    2003-01-01

    Computational results are presented for the performance and flow behavior of various injector geometries employed in transverse injection into a non-reacting Mach 1.2 flow. 3-D Reynolds-Averaged Navier-Stokes (RANS) results are obtained for the various injector geometries using the Wind code with Menter's Shear Stress Transport turbulence model in both single- and multi-species modes. Computed results for the injector mixing, penetration, and induced wall forces are presented. In the case of rectangular injectors, those longer in the direction of the freestream flow are predicted to generate the most mixing and penetration of the injector flow into the primary stream. These injectors are also predicted to provide the largest discharge coefficients and induced wall forces. Minor performance differences are indicated among diamond, circle, and square orifices. Grid sensitivity study results are presented which indicate consistent qualitative trends in the injector performance comparisons with increasing grid fineness.

  3. Efficient Probability of Failure Calculations for QMU using Computational Geometry LDRD 13-0144 Final Report

    SciTech Connect

    Mitchell, Scott A.; Ebeida, Mohamed Salah; Romero, Vicente J.; Swiler, Laura Painton; Rushdi, Ahmad A.; Abdelkader, Ahmad

    2015-09-01

    This SAND report summarizes our work on the Sandia National Laboratory LDRD project titled "Efficient Probability of Failure Calculations for QMU using Computational Geometry" which was project #165617 and proposal #13-0144. This report merely summarizes our work. Those interested in the technical details are encouraged to read the full published results, and contact the report authors for the status of the software and follow-on projects.

  4. Design geometry and design/off-design performance computer codes for compressors and turbines

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.

    1995-01-01

    This report summarizes some NASA Lewis (i.e., government owned) computer codes capable of being used for airbreathing propulsion system studies to determine the design geometry and to predict the design/off-design performance of compressors and turbines. These are not CFD codes; velocity-diagram energy and continuity computations are performed fore and aft of the blade rows using meanline, spanline, or streamline analyses. Losses are provided by empirical methods. Both axial-flow and radial-flow configurations are included.
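
    The velocity-diagram energy bookkeeping such meanline codes perform is anchored in the Euler turbomachinery equation; a one-line sketch with illustrative numbers:

      def euler_specific_work(U, dv_theta):
          """Euler turbomachinery equation: specific work = U * dV_theta
          (J/kg for U and dV_theta in m/s)."""
          return U * dv_theta

      # illustrative rotor: blade speed 350 m/s, swirl change 120 m/s
      print(euler_specific_work(350.0, 120.0) / 1e3, "kJ/kg")  # 42.0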

  5. A Parallel Cartesian Approach for External Aerodynamics of Vehicles with Complex Geometry

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.

    2001-01-01

    This workshop paper presents the current status in the development of a new approach for the solution of the Euler equations on Cartesian meshes with embedded boundaries in three dimensions on distributed and shared memory architectures. The approach uses adaptively refined Cartesian hexahedra to fill the computational domain. Where these cells intersect the geometry, they are cut by the boundary into arbitrarily shaped polyhedra which receive special treatment by the solver. The presentation documents a newly developed multilevel upwind solver based on a flexible domain-decomposition strategy. One novel aspect of the work is its use of space-filling curves (SFC) for memory-efficient on-the-fly parallelization, dynamic re-partitioning and automatic coarse mesh generation. Within each subdomain the approach employs a variety of reordering techniques so that relevant data are on the same page in memory, permitting high performance on cache-based processors. Details of the on-the-fly SFC-based partitioning are presented, as are construction rules for the automatic coarse mesh generation. After describing the approach, the paper uses model problems and 3-D configurations to both verify and validate the solver. The model problems demonstrate that second-order accuracy is maintained despite the presence of the irregular cut-cells in the mesh. In addition, it examines both parallel efficiency and convergence behavior. These investigations demonstrate a parallel speed-up in excess of 28 on 32 processors of an SGI Origin 2000 system and confirm that mesh partitioning has no effect on convergence behavior.
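
    Space-filling-curve partitioning amounts to sorting cells by an SFC key and cutting the sorted list into contiguous chunks. A sketch using a Morton (Z-order) key, one standard SFC (the paper's actual curve may differ):

      def morton3d(i, j, k, bits=10):
          """Interleave the bits of (i, j, k) into a Morton (Z-order) key."""
          key = 0
          for b in range(bits):
              key |= ((i >> b) & 1) << (3 * b)
              key |= ((j >> b) & 1) << (3 * b + 1)
              key |= ((k >> b) & 1) << (3 * b + 2)
          return key

      # sort cells along the curve, then cut into contiguous chunks
      cells = [(i, j, k) for i in range(4) for j in range(4) for k in range(4)]
      ordered = sorted(cells, key=lambda c: morton3d(*c))
      nproc = 4
      nc = len(ordered)
      parts = [ordered[p * nc // nproc:(p + 1) * nc // nproc]
               for p in range(nproc)]
      print([len(p) for p in parts], parts[0][:4])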

  6. Computational approaches for systems metabolomics.

    PubMed

    Krumsiek, Jan; Bartel, Jörg; Theis, Fabian J

    2016-06-01

    Systems genetics is defined as the simultaneous assessment and analysis of multi-omics datasets. In the past few years, metabolomics has been established as a robust tool describing an important functional layer in this approach. The metabolome of a biological system represents an integrated state of genetic and environmental factors and has been referred to as a 'link between genotype and phenotype'. In this review, we summarize recent progress in statistical analysis methods for metabolomics data in combination with other omics layers. We put a special focus on complex, multivariate statistical approaches as well as pathway-based and network-based analysis methods. Moreover, we outline current challenges and pitfalls of metabolomics-focused multi-omics analyses and discuss future steps for the field. PMID:27135552

  7. Modelling Mathematics Teachers' Intention to Use the Dynamic Geometry Environments in Macau: An SEM Approach

    ERIC Educational Resources Information Center

    Zhou, Mingming; Chan, Kan Kan; Teo, Timothy

    2016-01-01

    Dynamic geometry environments (DGEs) provide computer-based environments in which to construct and manipulate geometric figures with great ease. Research has shown that DGEs have a positive impact on student motivation, engagement, and achievement in mathematics learning. However, the adoption of DGEs by mathematics teachers varies substantially worldwide.…

  8. Quantitative approaches to computational vaccinology.

    PubMed

    Doytchinova, Irini A; Flower, Darren R

    2002-06-01

    This article reviews the newly released JenPep database and two new powerful techniques for T-cell epitope prediction: (i) the additive method; and (ii) a 3D-Quantitative Structure Activity Relationships (3D-QSAR) method, based on Comparative Molecular Similarity Indices Analysis (CoMSIA). The JenPep database is a family of relational databases supporting the growing need of immunoinformaticians for quantitative data on peptide binding to major histocompatibility complexes and to the Transporters associated with Antigen Processing (TAP). It also contains an annotated list of T-cell epitopes. The database is available free via the Internet (http://www.jenner.ac.uk/JenPep). The additive prediction method is based on the assumption that the binding affinity of a peptide depends on the contributions from each amino acid as well as on the interactions between the adjacent and every second side-chain. In the 3D-QSAR approach, the influence of five physicochemical properties (steric bulk, electrostatic potential, local hydrophobicity, hydrogen-bond donor and hydrogen-bond acceptor abilities) on the affinity of peptides binding to MHC molecules were considered. Both methods were exemplified through their application to the well-studied problem of peptides binding to the human class I MHC molecule HLA-A*0201. PMID:12067414
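
    The additive method described above is, in essence, a linear model over residue positions plus neighbor interaction terms. The coefficients below are invented purely for illustration; the real method fits them to measured binding affinities.

      pos_contrib = {(0, 'A'): 0.2, (0, 'L'): 0.5, (1, 'L'): 0.4,
                     (1, 'M'): 0.6, (2, 'V'): 0.3}              # hypothetical
      adj_contrib = {(0, 'A', 'L'): 0.1, (1, 'L', 'V'): -0.05}  # hypothetical
      const = 2.0

      def additive_affinity(peptide):
          """Additive model: constant + per-position residue contributions
          + interaction terms between adjacent side chains."""
          score = const
          score += sum(pos_contrib.get((i, aa), 0.0)
                       for i, aa in enumerate(peptide))
          score += sum(adj_contrib.get((i, peptide[i], peptide[i + 1]), 0.0)
                       for i in range(len(peptide) - 1))
          return score

      print(additive_affinity("ALV"))  # 2.0 + 0.9 + 0.05 = 2.95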

  9. Geometry Modeling and Grid Generation for Computational Aerodynamic Simulations Around Iced Airfoils and Wings

    NASA Technical Reports Server (NTRS)

    Choo, Yung K.; Slater, John W.; Vickerman, Mary B.; VanZante, Judith F.; Wadel, Mary F. (Technical Monitor)

    2002-01-01

    Issues associated with analysis of 'icing effects' on airfoil and wing performances are discussed, along with accomplishments and efforts to overcome difficulties with ice. Because of infinite variations of ice shapes and their high degree of complexity, computational 'icing effects' studies using available software tools must address many difficulties in geometry acquisition and modeling, grid generation, and flow simulation. The value of each technology component needs to be weighed from the perspective of the entire analysis process, from geometry to flow simulation. Even though CFD codes are yet to be validated for flows over iced airfoils and wings, numerical simulation, when considered together with wind tunnel tests, can provide valuable insights into 'icing effects' and advance our understanding of the relationship between ice characteristics and their effects on performance degradation.

  10. Computational studies of flow through cross flow fans - effect of blade geometry

    NASA Astrophysics Data System (ADS)

    Govardhan, M.; Sampat, D. Lakshmana

    2005-09-01

    This paper describes a three-dimensional computational analysis of the complex internal flow in a cross flow fan. The commercial computational fluid dynamics (CFD) software code CFX was used for the computation. An RNG k-ɛ two-equation turbulence model was used to simulate the model with an unstructured mesh. A sliding mesh interface was used at the interface between the rotating and stationary domains to capture the unsteady interactions. An accurate assessment of the present investigation is made by comparing various parameters with the available experimental data. Three impeller geometries with different blade angles and radius ratios are used in the present study. Maximum energy transfer through the impeller takes place in the region where the flow follows the blade curvature. The radial velocity is not uniform through the blade channels. Some blades work in turbine mode at very low flow coefficients. The static pressure is always negative in and around the impeller region.

  11. New perspectives for Discrete Element Modeling: Merging Computational Geometry and Molecular Dynamics

    NASA Astrophysics Data System (ADS)

    Alonso-Marroquín, Fernando; Galindo-Torres, Sergio-Andres; Tordesillas, Antoinette; Wang, Yucang

    2009-06-01

    One of the most challenging problems in the realistic modeling of granular materials is how to capture the real shape of the particles. Here we present a method to simulate systems with complex-shaped particles. This method integrates developments in two traditionally separate research areas: computational geometry and molecular dynamics. The computational geometry involves the implementation of techniques from computer graphics to represent particle shape and perform collision detection. Traditional techniques from molecular dynamics are used to integrate the equations of motion and to perform an efficient calculation of contact forces. The algorithm to solve the dynamics of the system is much more efficient and accurate, and easier to implement, than other models. The algorithm is used to simulate quasistatic deformation of granular materials using two different models. The first model consists of non-circular particles interacting via frictional forces. The second model consists of circular particles interacting via rolling and sliding resistance. The comparison of the two models helps us to understand and quantify the extent to which the effects of particle shape can be captured by the introduction of artificial rolling resistance on circular particles. Biaxial test simulations show that the overall response of the system and the collapse of force chains at the critical state are qualitatively similar in both 2D and 3D simulations.

  12. Thermal Protection System Cavity Heating for Simplified and Actual Geometries Using Computational Fluid Dynamics Simulations with Unstructured Grids

    NASA Technical Reports Server (NTRS)

    McCloud, Peter L.

    2010-01-01

    Thermal Protection System (TPS) Cavity Heating is predicted using Computational Fluid Dynamics (CFD) on unstructured grids for both simplified cavities and actual cavity geometries. Validation was performed using comparisons to wind tunnel experimental results and CFD predictions using structured grids. Full-scale predictions were made for simplified and actual geometry configurations on the Space Shuttle Orbiter in a mission support timeframe.

  13. A Comparison of Two Approaches to Teaching Selected Elements of College Level Descriptive Geometry.

    ERIC Educational Resources Information Center

    Beck, Eugene Jerome

    This study was designed to ascertain the relative effectiveness of two approaches for teaching descriptive geometry by a comparison of the following behavioral variables--(1) performance in the solution of graphical problems, (2) spatial perception, (3) abstract reasoning ability, (4) technical information achievement, and (5) attitude toward…

  14. Answering Typical Student Questions in Hyperbolic Geometry: A Transformation Group Approach

    ERIC Educational Resources Information Center

    Reyes, Edgar N.; Gray, Elizabeth D.

    2002-01-01

    It is shown that the bisector of a segment of a geodesic and the bisector of an angle in hyperbolic geometry can be expressed in terms of points which are equidistant from the end points of the segment, and points that are equidistant from the rays of the angle, respectively. An important tool in the approach is that the shortest distance between…

  15. Toward exascale computing through neuromorphic approaches.

    SciTech Connect

    James, Conrad D.

    2010-09-01

    While individual neurons function at relatively low firing rates, naturally-occurring nervous systems not only surpass manmade systems in computing power, but accomplish this feat using relatively little energy. It is asserted that the next major breakthrough in computing power will be achieved through application of neuromorphic approaches that mimic the mechanisms by which neural systems integrate and store massive quantities of data for real-time decision making. The proposed LDRD provides a conceptual foundation for SNL to make unique advances toward exascale computing. First, a team consisting of experts from the HPC, MESA, cognitive and biological sciences and nanotechnology domains will be coordinated to conduct an exercise with the outcome being a concept for applying neuromorphic computing to achieve exascale computing. It is anticipated that this concept will involve innovative extension and integration of SNL capabilities in MicroFab, material sciences, high-performance computing, and modeling and simulation of neural processes/systems.

  16. Computational modelling of variably saturated flow in porous media with complex three-dimensional geometries

    NASA Astrophysics Data System (ADS)

    McBride, D.; Cross, M.; Croft, N.; Bennett, C.; Gebhardt, J.

    2006-03-01

    A computational procedure is presented for solving complex variably saturated flows in porous media that may easily be implemented in existing conventional finite-volume-based computational fluid dynamics codes, so that their functionality can be leveraged to readily enable the modelling of a complex suite of interacting fluid, thermal and chemical reaction processes. This procedure has been integrated within a multi-physics finite volume unstructured mesh framework, allowing arbitrarily complex three-dimensional geometries to be modelled. The model is particularly targeted at ore heap-leaching processes, which encounter complex flow problems, such as infiltration into dry soil, drainage, perched water tables and flow through heterogeneous materials, but it is equally applicable to any process involving flow through porous media, such as environmental recovery processes. The computational procedure is based on the mixed form of the classical Richards equation, employing an adaptive transformed mixed algorithm that is numerically robust and significantly reduces compute (CPU) time. The computational procedure is accurate (it compares well with other methods and analytical data), comprehensive (representing any kind of porous flow model) and computationally efficient. As such, this procedure provides a suitable basis for the implementation of large-scale industrial heap-leach models.
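
    For orientation, the underlying physics can be sketched compactly. The fragment below integrates the simpler head-based form of the Richards equation explicitly in one dimension, with invented constitutive relations; the paper itself uses the mass-conservative mixed form with an adaptive transformed algorithm, so this is an illustration of the equation only:

    ```python
    import numpy as np

    # Placeholder constitutive relations (illustrative, not from the paper):
    K = lambda h: 1e-5 * np.exp(0.1 * np.minimum(h, 0.0))   # hydraulic conductivity
    C = lambda h: 1e-2 * np.exp(0.05 * np.minimum(h, 0.0))  # moisture capacity

    def richards_step(h, dz, dt):
        """One explicit step of the head-based Richards equation,
        C(h) dh/dt = d/dz [ K(h) (dh/dz + 1) ], z positive upward."""
        Kf = 0.5 * (K(h[:-1]) + K(h[1:]))          # interface conductivity
        q = Kf * ((h[1:] - h[:-1]) / dz + 1.0)     # fluxes at cell interfaces
        dhdt = np.zeros_like(h)
        dhdt[1:-1] = (q[1:] - q[:-1]) / (dz * C(h[1:-1]))
        return h + dt * dhdt                        # boundary heads held fixed

    # Infiltration into a dry column: wet (h = 0) at the top, dry below.
    h = np.full(101, -10.0)
    h[-1] = 0.0
    for _ in range(2000):
        h = richards_step(h, dz=0.01, dt=0.02)
    ```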

  17. An interactive user-friendly approach to surface-fitting three-dimensional geometries

    NASA Technical Reports Server (NTRS)

    Cheatwood, F. Mcneil; Dejarnette, Fred R.

    1988-01-01

    A surface-fitting technique has been developed which addresses two problems with existing geometry packages: computer storage requirements and the time required of the user for the initial setup of the geometry model. Coordinates of cross sections are fit using segments of general conic sections. The next step is to blend the cross-sectional curve-fits in the longitudinal direction using general conics to fit specific meridional half-planes. Provisions are made to allow the fitting of fuselages and wings so that entire wing-body combinations may be modeled. This report includes the development of the technique along with a User's Guide for the various menus within the program. Results for the modeling of the Space Shuttle and a proposed Aeroassist Flight Experiment geometry are presented.
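
    The cross-sectional fitting idea, stripped to its core, is a linear least-squares problem. The sketch below fits a single general conic to sampled section points; it is a hedged illustration of conic fitting in general, not the program's segment-wise scheme or its menus:

    ```python
    # Least-squares fit of a general conic a x^2 + b xy + c y^2 + d x + e y + f = 0
    # to cross-section points (normalization |coeffs| = 1 is a choice, not the
    # report's method).
    import numpy as np

    def fit_conic(x, y):
        """Return conic coefficients (a, b, c, d, e, f) with unit norm."""
        D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
        # The null-space direction of D minimizes ||D @ coeffs|| at unit norm.
        _, _, Vt = np.linalg.svd(D)
        return Vt[-1]

    # Points on the ellipse x^2/4 + y^2 = 1:
    t = np.linspace(0, 2 * np.pi, 50)
    coeffs = fit_conic(2 * np.cos(t), np.sin(t))
    print(coeffs / coeffs[0])   # ~ [1, 0, 4, 0, 0, -4]
    ```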

  18. A Vector Approach to Euclidean Geometry: Inner Product Spaces, Euclidean Geometry and Trigonometry, Volume 2. Teacher's Edition.

    ERIC Educational Resources Information Center

    Vaughan, Herbert E.; Szabo, Steven

    This is the teacher's edition of a text for the second year of a two-year high school geometry course. The course bases plane and solid geometry and trigonometry on the fact that the translations of a Euclidean space constitute a vector space which has an inner product. Congruence is a geometric topic reserved for Volume 2. Volume 2 opens with an…

  19. Assessment and improvement of mapping algorithms for non-matching meshes and geometries in computational FSI

    NASA Astrophysics Data System (ADS)

    Wang, Tianyang; Wüchner, Roland; Sicklinger, Stefan; Bletzinger, Kai-Uwe

    2016-05-01

    This paper investigates data mapping between non-matching meshes and geometries in fluid-structure interaction. Mapping algorithms for surface meshes, including nearest element interpolation, the standard mortar method and the dual mortar method, are studied and comparatively assessed. The inconsistency problem of mortar methods at curved edges of fluid-structure interfaces is solved by a newly developed consistency-enforcing approach, which is robust enough to handle even the case in which fluid boundary facets are not in contact with structure boundary elements at all, owing to high fluid-side refinement. Furthermore, tests with representative geometries show that the mortar methods are suitable for conservative mapping, whereas nearest element interpolation is better used in a direct way; moreover, the dual mortar method can introduce slight oscillations. This work also develops a co-rotating mapping algorithm for 1D beam elements. Its novelty lies in its ability to handle large displacements and rotations.
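
    Of the assessed algorithms, nearest element interpolation is the simplest to sketch: each target node takes the value interpolated at its closest point on the source mesh. The fragment below is a brute-force illustration under simplifying assumptions (triangle meshes, clamped barycentric projection), not the paper's implementation:

    ```python
    # Nearest-element interpolation between non-matching surface meshes.
    import numpy as np

    def barycentric(p, a, b, c):
        """Barycentric coordinates of the projection of p onto triangle abc."""
        v0, v1, v2 = b - a, c - a, p - a
        d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
        d20, d21 = v2 @ v0, v2 @ v1
        denom = d00 * d11 - d01 * d01
        v = (d11 * d20 - d01 * d21) / denom
        w = (d00 * d21 - d01 * d20) / denom
        return np.array([1.0 - v - w, v, w])

    def map_values(targets, nodes, tris, values):
        """Interpolate nodal 'values' of the source mesh onto target points."""
        out = np.empty(len(targets))
        for k, p in enumerate(targets):
            best, best_d = None, np.inf
            for tri in tris:
                lam = np.clip(barycentric(p, *nodes[tri]), 0.0, 1.0)
                lam /= lam.sum()                 # pull back into the triangle
                q = lam @ nodes[tri]             # approximate closest point
                d = np.linalg.norm(p - q)
                if d < best_d:
                    best_d, best = d, lam @ values[tri]
            out[k] = best
        return out
    ```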

  20. Computational Issues Associated with Temporally Deforming Geometries Such as Thrust Vectoring Nozzles

    NASA Technical Reports Server (NTRS)

    Boyalakuntla, Kishore; Soni, Bharat K.; Thornburg, Hugh J.; Yu, Robert

    1996-01-01

    During the past decade, computational simulation of fluid flow around complex configurations has progressed significantly, and many notable successes have been reported; however, unsteady time-dependent solutions are not easily obtainable. The present effort involves the unsteady, time-dependent simulation of temporally deforming geometries. Grid generation for a complex configuration can be a time-consuming process, and temporally varying geometries necessitate the regeneration of such grids for every time step. Traditional grid generation techniques have been tried and shown to be inadequate for such simulations. Non-Uniform Rational B-spline (NURBS) based techniques provide a compact and accurate representation of the geometry. This definition can be coupled with a distribution mesh for a user-defined spacing. The present method greatly reduces CPU requirements for time-dependent remeshing, facilitating the simulation of more complex unsteady problems. A thrust vectoring nozzle has been chosen to demonstrate the capability, as it is of current interest in the aerospace industry for better maneuverability of fighter aircraft in close combat and in post-stall regimes. This effort is the first step towards multidisciplinary design optimization, which involves coupling aerodynamic, heat transfer and structural analysis techniques. Applications include the simulation of temporally deforming bodies and aeroelastic problems.
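
    The compactness of the NURBS representation is easy to see in code. This sketch evaluates a point on a NURBS curve with the Cox-de Boor recursion; with three control points it reproduces a quarter circle exactly, though none of the report's remeshing machinery is included:

    ```python
    # Minimal NURBS curve evaluation (Cox-de Boor recursion).
    import numpy as np

    def basis(i, p, u, U):
        """B-spline basis function N_{i,p}(u) on knot vector U."""
        if p == 0:
            return 1.0 if U[i] <= u < U[i + 1] else 0.0
        left = 0.0 if U[i + p] == U[i] else \
            (u - U[i]) / (U[i + p] - U[i]) * basis(i, p - 1, u, U)
        right = 0.0 if U[i + p + 1] == U[i + 1] else \
            (U[i + p + 1] - u) / (U[i + p + 1] - U[i + 1]) * basis(i + 1, p - 1, u, U)
        return left + right

    def nurbs_point(u, ctrl, w, p, U):
        """Point on a NURBS curve with control points ctrl and weights w."""
        N = np.array([basis(i, p, u, U) for i in range(len(ctrl))])
        return (N * w) @ ctrl / (N * w).sum()

    # Quarter circle as a quadratic rational arc (exact with these weights):
    ctrl = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
    w = np.array([1.0, np.sqrt(0.5), 1.0])
    U = [0, 0, 0, 1, 1, 1]
    print(nurbs_point(0.5, ctrl, w, 2, U))   # ~ [0.7071, 0.7071]
    ```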

  1. A Geometry Based Infra-structure for Computational Analysis and Design

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1997-01-01

    The computational steps traditionally taken for most engineering analysis (CFD, structural analysis, etc.) are: Surface Generation - usually by employing a CAD system; Grid Generation - preparing the volume for the simulation; Flow Solver - producing the results at the specified operational point; and Post-processing Visualization - interactively attempting to understand the results. For structural analysis, integrated systems can be obtained from a number of commercial vendors. For CFD, these steps have worked well in the past for simple steady-state simulations, at the expense of much user interaction. The data was transmitted between phases via files. Specifically, the problems with this procedure are: (1) File based. Information flows from one step to the next via data files with formats specified for that procedure. (2) 'Good' Geometry. A bottleneck in getting results from a solver is the construction of proper geometry to be fed to the grid generator. With 'good' geometry a grid can be constructed in tens of minutes (even for a complex configuration) using unstructured techniques. (3) One-Way communication. All information travels on from one phase to the next. Until this process can be automated, more complex problems such as multi-disciplinary analysis, or using the above procedure for design, remain prohibitive.

  2. A computer program for fitting smooth surfaces to an aircraft configuration and other three dimensional geometries

    NASA Technical Reports Server (NTRS)

    Craidon, C. B.

    1975-01-01

    A computer program that uses a three-dimensional geometric technique for fitting a smooth surface to the component parts of an aircraft configuration is presented. The resulting surface equations are useful in performing various kinds of calculations in which a three-dimensional mathematical description is necessary. Program options may be used to compute information for three-view and orthographic projections of the configuration as well as cross-section plots at any orientation through the configuration. The aircraft geometry input section of the program may be easily replaced with a surface point description in a different form, so that the program could be of use for any three-dimensional surface.

  3. Computational Approaches to Study Microbes and Microbiomes

    PubMed Central

    Greene, Casey S.; Foster, James A.; Stanton, Bruce A.; Hogan, Deborah A.; Bromberg, Yana

    2016-01-01

    Technological advances are making large-scale measurements of microbial communities commonplace. These newly acquired datasets are allowing researchers to ask and answer questions about the composition of microbial communities, the roles of members in these communities, and how genes and molecular pathways are regulated in individual community members and communities as a whole to effectively respond to diverse and changing environments. In addition to providing a more comprehensive survey of the microbial world, this new information allows for the development of computational approaches to model the processes underlying microbial systems. We anticipate that the field of computational microbiology will continue to grow rapidly in the coming years. In this manuscript we highlight both areas of particular interest in microbiology and computational approaches that begin to address these challenges. PMID:26776218

  4. Computational Approach for Developing Blood Pump

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan

    2002-01-01

    This viewgraph presentation provides an overview of the computational approach to developing a ventricular assist device (VAD) which utilizes NASA aerospace technology. The VAD is used as a temporary support for sick ventricles in those who suffer from late-stage congestive heart failure (CHF). The need for donor hearts is much greater than their availability, and the VAD is seen as a bridge-to-transplant. The computational issues confronting the design of a more advanced, reliable VAD include the modelling of viscous incompressible flow. A computational approach provides the possibility of quantifying the flow characteristics, which is especially valuable for analyzing a compact design with highly sensitive operating conditions. Computational fluid dynamics (CFD) and rocket engine technology have been applied to modify the design of a VAD which enabled human transplantation. The computing requirement for this project is still large, however, and the unsteady analysis of the entire system from natural heart to aorta involves several hundred revolutions of the impeller. Further study is needed to assess the impact of mechanical VADs on the human body.

  5. Predicting the optimal geometry of microneedles and their array for dermal vaccination using a computational model.

    PubMed

    Römgens, Anne M; Bader, Dan L; Bouwstra, Joke A; Oomens, Cees W J

    2016-11-01

    Microneedle arrays have been developed to deliver a range of biomolecules including vaccines into the skin. These microneedles have been designed with a wide range of geometries and arrangements within an array. However, little is known about the effect of the geometry on the potency of the induced immune response. The aim of this study was to develop a computational model to predict the optimal design of the microneedles and their arrangement within an array. The three-dimensional finite element model described the diffusion and kinetics in the skin following antigen delivery with a microneedle array. The results revealed an optimum distance between microneedles based on the number of activated antigen presenting cells, which was assumed to be related to the induced immune response. This optimum depends on the delivered dose. In addition, the microneedle length affects the number of cells that will be involved in either the epidermis or dermis. By contrast, the radius at the base of the microneedle and release rate only minimally influenced the number of cells that were activated. The model revealed the importance of various geometric parameters to enhance the induced immune response. The model can be developed further to determine the optimal design of an array by adjusting its various parameters to a specific situation. PMID:27557398

  6. Three-dimensional analysis of root canal geometry by high-resolution computed tomography.

    PubMed

    Peters, O A; Laib, A; Rüegsegger, P; Barbakow, F

    2000-06-01

    A detailed understanding of the complexity of root canal systems is imperative to ensure successful root canal preparation. The aim of this study was to evaluate the potential and accuracy of a three-dimensional, non-destructive technique for detailing root canal geometry by means of high-resolution tomography. The anatomy of root canals in 12 extracted human maxillary molars was analyzed by means of a micro-computed tomography scanner (microCT, cubic resolution 34 µm). A special mounting device facilitated repeated precise repositioning of the teeth in the microCT. Surface areas and volumes of each canal were calculated by triangulation, and means were determined. Model-independent methods were used to evaluate the canals' diameters and configuration. The calculated and measured volumes and the areas of artificial root canals, produced by the drilling of precision holes into dentin disks, were well-correlated. Semi-automated repositioning of specimens resulted in near-perfect matching (< 1 voxel) when outer canal contours were assessed. Root canal geometry was accurately assessed by this innovative technique; therefore, the variables and indices presented may serve as a basis for further analyses of root canal anatomy in experimental endodontology. PMID:10890720
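
    The surface-area and volume quantities computed "by triangulation" in the study can be sketched generically: with a closed, consistently oriented triangle mesh, both follow from sums over the faces (the divergence theorem gives the volume). A toy tetrahedron stands in here for the microCT canal surfaces:

    ```python
    # Surface area and enclosed volume of a closed triangle mesh.
    import numpy as np

    def area_volume(verts, faces):
        """faces: (m, 3) vertex indices, consistently oriented outward."""
        a, b, c = (verts[faces[:, k]] for k in range(3))
        cross = np.cross(b - a, c - a)
        area = 0.5 * np.linalg.norm(cross, axis=1).sum()
        # Signed tetrahedron volumes against the origin sum to the volume.
        volume = np.einsum('ij,ij->i', a, np.cross(b, c)).sum() / 6.0
        return area, abs(volume)

    verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
    faces = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])
    print(area_volume(verts, faces))   # volume = 1/6
    ```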

  7. On the noncommutative spin geometry of the standard Podleś sphere and index computations

    NASA Astrophysics Data System (ADS)

    Wagner, Elmar

    2009-07-01

    The purpose of the paper is twofold: First, known results of the noncommutative spin geometry of the standard Podleś sphere are extended by discussing Poincaré duality and orientability. In the discussion of orientability, Hochschild homology is replaced by a twisted version which avoids the dimension drop. The twisted Hochschild cycle representing an orientation is related to the volume form of the distinguished covariant differential calculus. Integration over the volume form defines a twisted cyclic 2-cocycle which computes the q-winding numbers of quantum line bundles. Second, a "twisted" Chern character from equivariant K0-theory to even twisted cyclic homology is introduced which gives rise to a Chern-Connes pairing between equivariant K0-theory and twisted cyclic cohomology. The Chern-Connes pairing between the equivariant K0-group of the standard Podleś sphere and the generators of twisted cyclic cohomology relative to the modular automorphism and its inverse are computed. This includes the pairings with the twisted cyclic 2-cocycle associated to the volume form, and the one corresponding to the "no-dimension drop" case. From explicit index computations, it follows that the pairings with these cocycles give the q-indices of the known equivariant 0-summable Dirac operator on the standard Podleś sphere.

  8. Approach to constructing reconfigurable computer vision system

    NASA Astrophysics Data System (ADS)

    Xue, Jianru; Zheng, Nanning; Wang, Xiaoling; Zhang, Yongping

    2000-10-01

    In this paper, we propose an approach to constructing a reconfigurable vision system. Timely and efficient execution of early vision tasks can significantly enhance the performance of whole computer vision systems, so we abstract out a set of basic, computationally intensive stream operations that may be performed in parallel and embody them in a series of specific front-end processors. These processors, which are based on FPGAs (field-programmable gate arrays), can be reprogrammed to permit a range of different types of feature maps, such as edge detection and linking, and image filtering. The front-end processors and a powerful DSP constitute a computing platform which can perform many CV tasks. Additionally, we adopt focus-of-attention techniques to reduce the I/O and computational demands by performing early vision processing only within a particular region of interest. We then implement a multi-page, dual-ported image memory interface between the image input and the computing platform (including the front-end processors and DSP). Early vision features are loaded into banks of dual-ported image memory arrays, which are continually raster-scan updated at high speed from the input image or video data stream. Moreover, the computing platform has completely asynchronous, random access to the image data or any other early vision feature maps through the dual-ported memory banks. In this way, the computing platform resources can be properly allocated to a region of interest and decoupled from the task of dealing with a high-speed serial raster-scan input. Finally, we choose the PCI bus as the main channel between the PC and the computing platform. Consequently, the front-end processors' control registers and the DSP's program memory are mapped into the PC's memory space, which lets the user reconfigure the system at any time. We also present test results of a computer vision application based on the system.

  9. Computational Representation of Porous Media Features (Porosity, Permeability, Saturation and Physical Heterogeneous Geometry)

    NASA Astrophysics Data System (ADS)

    Ramírez-López, A.; Muñoz-Negrón, D.; Palomar-Pardavé, M.; Escarela-Perez, R.; Cruz-Morales, V.

    In materials science, the representation of properties in anisotropic materials is a very important topic. Porous media are heterogeneous in nature, and their representation is frequently regarded as a complex problem that is difficult to treat using conventional numerical methods. Chaos theory is used to treat problems without established rules in many fields. The present work is focused on the development of computational algorithms to simulate porous media properties such as porosity, permeability and saturation. The procedures involve the use of a random number generator to assign properties; the result is an amorphous medium formed using a cellular automaton. This work also includes the development of amorphous geometries that represent solid walls in empty samples, in order to capture the tortuosity of a porous media specimen. Finally, advantages and disadvantages of the models developed are discussed.
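
    A minimal version of the property-assignment idea is sketched below: a random number generator assigns a heterogeneous porosity field, random cells are flagged as solid walls, and a few cellular-automaton smoothing passes cluster the walls into blobs. All parameters are invented for illustration:

    ```python
    # Random assignment of porosity plus cellular-automaton wall clustering.
    import numpy as np

    rng = np.random.default_rng(7)
    nx, ny = 64, 64
    porosity = rng.uniform(0.05, 0.35, size=(nx, ny))   # heterogeneous field
    solid = rng.random((nx, ny)) < 0.15                  # random solid seeds

    # CA smoothing: a cell becomes solid if most of its 8 neighbours are solid.
    for _ in range(3):
        nbrs = sum(np.roll(np.roll(solid, i, 0), j, 1)
                   for i in (-1, 0, 1) for j in (-1, 0, 1)) - solid
        solid = nbrs >= 5

    porosity[solid] = 0.0
    print("mean porosity:", porosity.mean())
    ```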

  10. A computational geometry framework for the optimisation of atom probe reconstructions.

    PubMed

    Felfer, Peter; Cairney, Julie

    2016-10-01

    In this paper, we present pathways for improving the reconstruction of atom probe data on a coarse (>10 nm) scale, based on computational geometry. We introduce a way to iteratively improve an atom probe reconstruction by adjusting it so that certain known shape criteria are fulfilled. This is achieved by creating an implicit approximation of the reconstruction through a barycentric coordinate transform. We demonstrate the application of these techniques to the compensation of trajectory aberrations and the iterative improvement of the reconstruction of a dataset containing a grain boundary. We also present a method for obtaining a hull of the dataset in both detector and reconstruction space. This maximises data utilisation, can be used to compensate for ion trajectory aberrations caused by residual fields in the ion flight path through a 'master curve', and corrects for overall shape deviations in the data. PMID:27449275
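
    For the hull-of-the-dataset step, the simplest variant is a convex hull of the point cloud; the paper's hull may well be tighter, so the scipy sketch below (with synthetic stand-in data) is only indicative:

    ```python
    # Convex hull of a point cloud, analogous to a hull of the dataset in
    # detector or reconstruction space.
    import numpy as np
    from scipy.spatial import ConvexHull

    rng = np.random.default_rng(0)
    points = rng.normal(size=(10000, 3))   # stand-in for reconstructed ions
    hull = ConvexHull(points)
    print(hull.area, hull.volume)          # hull surface area and volume
    ```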

  11. Effects of frequency, irradiation geometry and polarisation on computation of SAR in human brain.

    PubMed

    Zhou, Hongmei; Su, Zhentao; Ning, Jing; Wang, Changzhen; Xie, Xiangdong; Qu, Decheng; Wu, Ke; Zhang, Xiaomin; Pan, Jie; Yang, Guoshan

    2014-12-01

    The power absorbed by the human brain has possible implications in the study of biological effects of electromagnetic fields related to the central nervous system. In order to determine the specific absorption rate (SAR) of radio frequency (RF) waves in the human brain, and to investigate the effects of irradiation geometry and polarisation on the SAR value, the finite-difference time-domain method was applied for the SAR computation. An anatomically realistic model scaled to a height of 1.70 m and a mass of 63 kg was selected, which included 14 million voxels segmented into 39 tissue types. The results suggest that high SAR values occur in the brain at ∼250 MHz for vertical polarisation and at 900-1200 MHz for both vertical and horizontal polarisation, which may be the result of head resonance at these frequencies. PMID:24399107
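
    Once the fields are available from the FDTD solve, the SAR itself is a per-voxel formula, SAR = σ|E_rms|²/ρ. The toy fragment below applies it to synthetic field amplitudes with generic tissue constants, not the segmented 39-tissue model:

    ```python
    # Voxelwise SAR from field amplitudes (toy values, W/kg).
    import numpy as np

    rng = np.random.default_rng(1)
    E_rms = rng.uniform(0, 5, size=(16, 16, 16))   # V/m, per voxel
    sigma = 0.1                                     # S/m, tissue conductivity
    rho = 1040.0                                    # kg/m^3, tissue density
    sar = sigma * E_rms**2 / rho
    print("mean SAR:", sar.mean())
    ```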

  12. NASA geometry data exchange specification for computational fluid dynamics (NASA IGES)

    NASA Technical Reports Server (NTRS)

    Blake, Matthew W.; Kerr, Patricia A.; Thorp, Scott A.; Jou, Jin J.

    1994-01-01

    This document specifies a subset of an existing product data exchange specification that is widely used in industry and government. The existing document is called the Initial Graphics Exchange Specification. This document, a subset of IGES, is intended for engineers analyzing product performance using tools such as computational fluid dynamics (CFD) software. This document specifies how to define mathematically and exchange the geometric model of an object. The geometry is represented utilizing nonuniform rational B-splines (NURBS) curves and surfaces. Only surface models are represented; no solid model representation is included. This specification does not include most of the other types of product information available in IGES (e.g., no material properties or surface finish properties) and does not provide all the specific file format details of IGES. The data exchange protocol specified in this document is fully conforming to the American National Standard (ANSI) IGES 5.2.

  13. Description of the F-16XL Geometry and Computational Grids Used in CAWAPI

    NASA Technical Reports Server (NTRS)

    Boelens, O. J.; Badcock, K. J.; Gortz, S.; Morton, S.; Fritz, W.; Karman, S. L., Jr.; Michal, T.; Lamar, J. E.

    2009-01-01

    The objective of the Cranked-Arrow Wing Aerodynamics Project International (CAWAPI) was to allow a comprehensive validation of Computational Fluid Dynamics methods against the CAWAP flight database. A major part of this work involved the generation of high-quality computational grids. Prior to the grid generation, an IGES file containing the air-tight geometry of the F-16XL aircraft was generated in a cooperation of the CAWAPI partners. Based on this geometry description, both structured and unstructured grids have been generated. The baseline structured (multi-block) grid (and a family of derived grids) was generated by the National Aerospace Laboratory NLR. Although the algorithms used by NLR had become available just before CAWAPI, and thus only limited experience with their application to such a complex configuration had been gained, a grid of good quality was generated well within four weeks. This time compared favourably with that required to produce the unstructured grids in CAWAPI. The baseline all-tetrahedral and hybrid unstructured grids were generated at NASA Langley Research Center and the USAFA, respectively. To provide more geometrical resolution, trimmed unstructured grids were generated at EADS-MAS, the UTSimCenter, Boeing Phantom Works and KTH/FOI. All grids generated within the framework of CAWAPI are discussed in the article. Results obtained on both the structured and the unstructured grids showed a significant improvement in agreement with flight test data in comparison with those obtained on the structured multi-block grid used during CAWAP.

  14. Optimization of numerical weather/wave prediction models based on information geometry and computational techniques

    NASA Astrophysics Data System (ADS)

    Galanis, George; Famelis, Ioannis; Kalogeri, Christina

    2014-10-01

    In recent years, a new and highly demanding framework has been set for environmental sciences and applied mathematics as a result of the needs posed by issues that are of interest not only to the scientific community but to today's society in general: global warming, renewable energy resources and natural hazards can be listed among them. The research community follows two main directions today in order to address the above problems: the utilization of environmental observations obtained from in situ or remote sensing sources, and meteorological-oceanographic simulations based on physical-mathematical models. In particular, in trying to reach credible local forecasts, the two previous data sources are combined by algorithms that are essentially based on optimization processes. The conventional approaches in this framework usually neglect the topological-geometrical properties of the space of the data under study by adopting least-squares methods based on classical Euclidean geometry tools. In the present work, new optimization techniques are discussed that make use of methodologies from a rapidly advancing branch of applied mathematics, Information Geometry. The latter proves that the distributions of data sets are elements of non-Euclidean structures in which the underlying geometry may differ significantly from the classical one. Geometrical entities like Riemannian metrics, distances, curvature and affine connections are utilized in order to define the optimum distributions fitting the environmental data at specific areas and to form differential systems that describe the optimization procedures. The proposed methodology is illustrated by an application to wind speed forecasts on the island of Kefalonia, Greece.

  15. Project Integration Architecture (PIA) and Computational Analysis Programming Interface (CAPRI) for Accessing Geometry Data from CAD Files

    NASA Technical Reports Server (NTRS)

    Benyo, Theresa L.

    2002-01-01

    Integration of a supersonic inlet simulation with a computer aided design (CAD) system is demonstrated. The integration is performed using the Project Integration Architecture (PIA). PIA provides a common environment for wrapping many types of applications. Accessing geometry data from CAD files is accomplished by incorporating appropriate function calls from the Computational Analysis Programming Interface (CAPRI). CAPRI is a CAD vendor neutral programming interface that aids in acquiring geometry data directly from CAD files. The benefits of wrapping a supersonic inlet simulation into PIA using CAPRI are: direct access of geometry data, accurate capture of geometry data, automatic conversion of data units, CAD vendor neutral operation, and on-line interactive history capture. This paper describes the PIA and the CAPRI wrapper and details the supersonic inlet simulation demonstration.

  16. Fractal geometry-based classification approach for the recognition of lung cancer cells

    NASA Astrophysics Data System (ADS)

    Xia, Deshen; Gao, Wenqing; Li, Hua

    1994-05-01

    This paper describes a new fractal geometry based classification approach for the recognition of lung cancer cells, used in health screening for lung cancer. Because cancer cells grow much faster and more irregularly than normal cells do, the shape of a segmented cancer cell is very irregular and is considered as a shape without characteristic length. We use Texture Energy Intensity Rn for fractal preprocessing to segment the cells from the image and to calculate the fractal dimension value for extracting fractal features, so that we can obtain the figure characteristics of different cancer cells and normal cells respectively. Fractal geometry gives us a correct description of cancer-cell shapes. Through this method, good recognition of adenoma, squamous and small-cell cancer cells can be obtained.
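
    The fractal-dimension feature at the heart of such classifiers is commonly estimated by box counting; the sketch below applies that standard estimator to a binary cell mask (the paper's Texture Energy preprocessing is not reproduced here):

    ```python
    # Box-counting estimate of the fractal dimension of a 2D binary mask.
    import numpy as np

    def box_count_dimension(mask):
        sizes = [2, 4, 8, 16, 32]
        counts = []
        for s in sizes:
            # Count boxes of side s containing at least one foreground pixel.
            h, w = mask.shape
            boxes = mask[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
            counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
        # Slope of log(count) vs log(1/size) estimates the dimension.
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope

    # A filled disk (dimension ~2) as a sanity check:
    y, x = np.ogrid[-64:64, -64:64]
    print(box_count_dimension(x**2 + y**2 < 50**2))
    ```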

  17. Computational Approaches to Nucleic Acid Origami.

    PubMed

    Jabbari, Hosna; Aminpour, Maral; Montemagno, Carlo

    2015-10-12

    Recent advances in experimental DNA origami have dramatically expanded the horizon of DNA nanotechnology. Complex 3D suprastructures have been designed and developed using DNA origami, with applications in biomaterial science, nanomedicine, nanorobotics, and molecular computation. Ribonucleic acid (RNA) origami has recently been realized as a new approach. Similar to DNA, RNA molecules can be designed to form complex 3D structures through complementary base pairings. RNA origami structures are, however, more compact and more thermodynamically stable due to RNA's non-canonical base pairing and tertiary interactions. Despite these advantages, the development of RNA origami lags far behind that of DNA origami. Furthermore, although computational methods have proven to be effective in designing DNA and RNA origami structures and in their evaluation, advances in computational nucleic acid origami are even more limited. In this paper, we review major milestones in experimental and computational DNA and RNA origami and present current challenges in these fields. We believe collaboration between experimental nanotechnologists and computer scientists is critical for advancing these new research paradigms. PMID:26348196

  18. An immersed boundary computational model for acoustic scattering problems with complex geometries.

    PubMed

    Sun, Xiaofeng; Jiang, Yongsong; Liang, An; Jing, Xiaodong

    2012-11-01

    An immersed boundary computational model is presented in order to deal with acoustic scattering problems involving complex geometries, in which the wall boundary condition is treated as a direct body force determined by satisfying the non-penetrating boundary condition. Two distinct discretized grids are used to discretize the fluid domain and the immersed boundary, respectively. The immersed boundaries are represented by Lagrangian points, and the direct body force determined on these points is applied to the neighboring Eulerian points. The coupling between the Lagrangian points and the Eulerian points is provided by a discrete delta function. The linearized Euler equations are spatially discretized with a fourth-order dispersion-relation-preserving scheme and temporally integrated with a low-dissipation and low-dispersion Runge-Kutta scheme. A perfectly matched layer technique is applied to absorb out-going waves and in-going waves inside the immersed bodies. Several benchmark problems for computational aeroacoustic solvers are performed to validate the present method. PMID:23145603
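
    The coupling step described above, spreading a Lagrangian force onto neighbouring Eulerian points through a discrete delta function, can be shown in one dimension. A simple hat-function kernel is used for brevity; the paper's kernel may differ:

    ```python
    # Spreading a Lagrangian point force to an Eulerian grid via a discrete
    # delta function (1D sketch with a hat kernel of one-cell support).
    import numpy as np

    def delta_hat(r):
        r = np.abs(r)
        return np.where(r < 1.0, 1.0 - r, 0.0)

    n, dx = 32, 1.0 / 32
    grid = np.arange(n) * dx
    X, F = 0.37, 1.0                 # Lagrangian point location and force
    # Body force felt by each Eulerian point:
    f = F * delta_hat((grid - X) / dx) / dx
    print(f.sum() * dx)              # ~= F: the spreading is conservative
    ```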

  19. Computer-assisted analysis of lower limb geometry: higher intraobserver reliability compared to conventional method.

    PubMed

    Hankemeier, S; Gosling, T; Richter, M; Hufner, T; Hochhausen, C; Krettek, C

    2006-03-01

    Exact radiographic evaluation of lower limb alignment, joint orientation and leg length is crucial for preoperative planning and successful treatment of deformities, fractures and osteoarthritis. Improvement of the accuracy of radiographic measurements is highly desirable. To determine the intraobserver reliability of conventional analysis of lower extremity geometry, 59 long leg radiographs were each analyzed 5 times, in random order, by a single surgeon. The measurements revealed standard deviations between 0.36 and 1.17 degrees for the angles mLPFA, mLDFA, MPTA, LDTA, JLCA and AMA (nomenclature according to Paley), and of 0.94 mm and 0.90 mm for the MAD and leg length, respectively. Computer-assisted analysis with dedicated software significantly reduced the standard deviation of the mLDFA, MPTA, LDTA, JLCA (each p < 0.001), AMA (p = 0.032) and MAD (p = 0.023) by 0.05-0.36 degrees and 0.14 mm, respectively. Measuring time was reduced by 44% to 6:34 +/- 0:45 min (p < 0.001). Digital calibration by the software revealed an average magnification of conventional long leg radiographs of 4.6 +/- 1.8% (range: 2.7-11.9%). Computer-assisted analysis increases intraobserver reliability and reduces the time needed for the analysis. Another major benefit is the ease of storage and transfer of digitized images. Due to the varying magnification factors of long leg radiographs, the use of magnification markers for calibration is recommended. PMID:16782643

  20. A Combined Geometric Approach for Computational Fluid Dynamics on Dynamic Grids

    NASA Technical Reports Server (NTRS)

    Slater, John W.

    1995-01-01

    A combined geometric approach for computational fluid dynamics is presented for the analysis of unsteady flow about mechanisms whose components are in moderate relative motion. For a CFD analysis, the total dynamics problem involves the dynamics of three aspects: geometry modeling, grid generation, and flow modeling. The interrelationships between these three aspects allow for a more natural formulation of the problem and the sharing of information, which can be advantageous to the computation of the dynamics. The approach is applied to planar geometries with the use of an efficient multi-block, structured grid generation method to compute unsteady, two-dimensional and axisymmetric flow. The applications presented include the computation of the unsteady, inviscid flow about a hinged flap with flap deflections and a high-speed inlet with centerbody motion as part of the unstart/restart operation.

  1. Cognitive Load for Configuration Comprehension in Computer-Supported Geometry Problem Solving: An Eye Movement Perspective

    ERIC Educational Resources Information Center

    Lin, John Jr-Hung; Lin, Sunny S. J.

    2014-01-01

    The present study investigated (a) whether the perceived cognitive load was different when geometry problems with various levels of configuration comprehension were solved and (b) whether eye movements in comprehending geometry problems showed sources of cognitive loads. In the first investigation, three characteristics of geometry configurations…

  2. Geometry Design Optimization of Functionally Graded Scaffolds for Bone Tissue Engineering: A Mechanobiological Approach

    PubMed Central

    Boccaccio, Antonio; Uva, Antonio Emmanuele; Fiorentino, Michele; Mori, Giorgio; Monno, Giuseppe

    2016-01-01

    Functionally Graded Scaffolds (FGSs) are porous biomaterials where porosity changes in space with a specific gradient. In spite of their wide use in bone tissue engineering, models that relate the scaffold gradient to the mechanical and biological requirements for the regeneration of bony tissue are currently missing. In this study we attempt to bridge the gap by developing a mechanobiology-based optimization algorithm aimed at determining the optimal graded porosity distribution in FGSs. The algorithm combines a parametric finite element model of a FGS, a computational mechano-regulation model and a numerical optimization routine. For assigned boundary and loading conditions, the algorithm iteratively builds different scaffold geometry configurations with different porosity distributions until the best microstructure geometry is reached, i.e. the geometry that maximizes the amount of bone formation. We tested different porosity distribution laws, loading conditions and scaffold Young's modulus values. For each combination of these variables, the explicit equation of the porosity distribution law, i.e. the law that describes the pore dimensions as a function of the spatial coordinates, was determined that allows the highest amounts of bone to be generated. The results show that the loading conditions significantly affect the optimal porosity distribution. For pure compression loading, it was found that the pore dimensions are almost constant throughout the entire scaffold, and using a FGS allows the formation of amounts of bone only slightly larger than those obtainable with a homogeneous porosity scaffold. For pure shear loading, by contrast, FGSs significantly increase bone formation compared to homogeneous porosity scaffolds. Although experimental data are still needed to properly relate the mechanical/biological environment to the scaffold microstructure, this model represents an important step towards optimizing geometry

  3. Influence of aquifer geometry on karst hydraulics using different distributive modeling approaches

    NASA Astrophysics Data System (ADS)

    Oehlmann, Sandra; Geyer, Tobias; Licha, Tobias; Birk, Steffen

    2013-04-01

    The simulation of flow and transport processes in karst systems is a challenge due to the unknown location of highly conductive karst conduit networks. In this work, the influence of aquifer geometry, particularly the geometry of highly conductive discrete elements, on three-dimensional groundwater flow in a large-scale aquifer system is examined. The area of investigation comprises several springs on the Western Swabian Alb, Germany, and covers approximately 150 km². The largest spring therein is the Gallusquelle, with an annual average discharge of 0.5 m³/s. Long-term spring hydrographs and hydraulic head measurements, as well as several tracer tests, are available from previous work and are used for model calibration. Four distributive continuum and discrete flow models with different degrees of complexity were set up employing the finite element simulation software Comsol Multiphysics®. Stationary groundwater flow equations were implemented for single continuum and hybrid modeling. The aquifer geometry was modeled previously with the software Geological Objects Computer Aided Design® (GoCAD®) and transferred to the Comsol® software. Simulation results show that not only the location of karst conduits but also their geometry has a significant impact on the simulated spring discharge and hydraulic head distribution. A constant conduit radius leads to distorted hydraulic head contour lines and a conduit-restrained flow regime close to the spring, while a radius increasing linearly towards the spring leads to evenly distributed contour lines. Models with such an increase in conduit diameters allow the simulation of annual discharge for several springs. This result is in agreement with synthetic karst genesis models, which suggest an increase of conduit diameters towards karst springs because of a positive correlation between flow rates and carbonate dissolution. The software Comsol Multiphysics®, while rarely used for groundwater flow modeling, was found to meet

  4. High performance parallel computing of flows in complex geometries: II. Applications

    NASA Astrophysics Data System (ADS)

    Gourdain, N.; Gicquel, L.; Staffelbach, G.; Vermorel, O.; Duchaine, F.; Boussuge, J.-F.; Poinsot, T.

    2009-01-01

    Present regulations in terms of pollutant emissions, noise and economical constraints require new approaches and designs in the fields of energy supply and transportation. It is now well established that the next breakthrough will come from a better understanding of unsteady flow effects and from considering the entire system, not only isolated components. However, these aspects are still not well understood, or taken into account by numerical approaches, whatever the design stage considered. The main challenge is essentially due to the computational requirements imposed by such complex systems if they are to be simulated on supercomputers. This paper shows how these new challenges can be addressed by using parallel computing platforms for distinct elements of the more complex systems encountered in aeronautical applications. Based on numerical simulations performed with modern aerodynamic and reactive flow solvers, this work underlines the interest of high-performance computing for solving flow in complex industrial configurations such as aircraft, combustion chambers and turbomachines. Performance indicators related to parallel computing efficiency are presented, showing that establishing fair criteria is a difficult task for complex industrial applications. Examples of numerical simulations performed in industrial systems are also described, with particular attention to the computational time and the potential design improvements obtained with high-fidelity and multi-physics computing methods. These simulations use either unsteady Reynolds-averaged Navier-Stokes methods or large eddy simulation and deal with turbulent unsteady flows, such as coupled flow phenomena (thermo-acoustic instabilities, buffet, etc.). Some examples of the difficulties with grid generation and data analysis are also presented for these complex industrial applications.

  5. Introducing Computational Approaches in Intermediate Mechanics

    NASA Astrophysics Data System (ADS)

    Cook, David M.

    2006-12-01

    In the winter of 2003, we at Lawrence University moved Lagrangian mechanics and rigid body dynamics from a required sophomore course to an elective junior/senior course, freeing 40% of the time for computational approaches to ordinary differential equations (trajectory problems, the large amplitude pendulum, non-linear dynamics); evaluation of integrals (finding centers of mass and moment of inertia tensors, calculating gravitational potentials for various sources); and finding eigenvalues and eigenvectors of matrices (diagonalizing the moment of inertia tensor, finding principal axes), and to generating graphical displays of computed results. Further, students begin to use LaTeX to prepare some of their submitted problem solutions. Placed in the middle of the sophomore year, this course provides the background that permits faculty members as appropriate to assign computer-based exercises in subsequent courses. Further, students are encouraged to use our Computational Physics Laboratory on their own initiative whenever that use seems appropriate. (Curricular development supported in part by the W. M. Keck Foundation, the National Science Foundation, and Lawrence University.)
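
    As an example of the kind of exercise described, the large-amplitude pendulum reduces to a two-line ODE system once a generic RK4 stepper is written. This is a standard classroom-style sketch, not actual course material from Lawrence:

    ```python
    # Large-amplitude pendulum integrated with classic fourth-order Runge-Kutta.
    import numpy as np

    def rk4_step(f, y, t, dt):
        k1 = f(t, y)
        k2 = f(t + dt / 2, y + dt / 2 * k1)
        k3 = f(t + dt / 2, y + dt / 2 * k2)
        k4 = f(t + dt, y + dt * k3)
        return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    g_over_L = 9.81   # gravitational acceleration over pendulum length
    pendulum = lambda t, y: np.array([y[1], -g_over_L * np.sin(y[0])])

    y, t, dt = np.array([3.0, 0.0]), 0.0, 0.001   # ~172 degrees initial angle
    for _ in range(5000):
        y = rk4_step(pendulum, y, t, dt)
        t += dt
    print(t, y)   # angle and angular velocity after 5 s
    ```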

  6. Computer Forensics Education - the Open Source Approach

    NASA Astrophysics Data System (ADS)

    Huebner, Ewa; Bem, Derek; Cheung, Hon

    In this chapter we discuss the application of the open source software tools in computer forensics education at tertiary level. We argue that open source tools are more suitable than commercial tools, as they provide the opportunity for students to gain in-depth understanding and appreciation of the computer forensic process as opposed to familiarity with one software product, however complex and multi-functional. With the access to all source programs the students become more than just the consumers of the tools as future forensic investigators. They can also examine the code, understand the relationship between the binary images and relevant data structures, and in the process gain necessary background to become the future creators of new and improved forensic software tools. As a case study we present an advanced subject, Computer Forensics Workshop, which we designed for the Bachelor's degree in computer science at the University of Western Sydney. We based all laboratory work and the main take-home project in this subject on open source software tools. We found that without exception more than one suitable tool can be found to cover each topic in the curriculum adequately. We argue that this approach prepares students better for forensic field work, as they gain confidence to use a variety of tools, not just a single product they are familiar with.

  7. Computational Approaches for Predicting Biomedical Research Collaborations

    PubMed Central

    Zhang, Qing; Yu, Hong

    2014-01-01

    Biomedical research is increasingly collaborative, and successful collaborations often produce high impact work. Computational approaches can be developed for automatically predicting biomedical research collaborations. Previous works of collaboration prediction mainly explored the topological structures of research collaboration networks, leaving out rich semantic information from the publications themselves. In this paper, we propose supervised machine learning approaches to predict research collaborations in the biomedical field. We explored both the semantic features extracted from author research interest profile and the author network topological features. We found that the most informative semantic features for author collaborations are related to research interest, including similarity of out-citing citations, similarity of abstracts. Of the four supervised machine learning models (naïve Bayes, naïve Bayes multinomial, SVMs, and logistic regression), the best performing model is logistic regression with an ROC ranging from 0.766 to 0.980 on different datasets. To our knowledge we are the first to study in depth how research interest and productivities can be used for collaboration prediction. Our approach is computationally efficient, scalable and yet simple to implement. The datasets of this study are available at https://github.com/qingzhanggithub/medline-collaboration-datasets. PMID:25375164
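
    A toy version of the best-performing model is easy to set up with scikit-learn; the feature columns below are loosely inspired by the paper's semantic features, and the data are entirely synthetic:

    ```python
    # Logistic regression on synthetic author-pair features.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(42)
    n = 1000
    # Columns: abstract similarity, out-citing citation similarity, shared coauthors
    X = rng.random((n, 3))
    # Synthetic ground truth: collaboration more likely when features are high.
    y = (X @ np.array([2.0, 2.5, 1.0]) + rng.normal(0, 0.5, n) > 2.75).astype(int)

    model = LogisticRegression().fit(X[:800], y[:800])
    probs = model.predict_proba(X[800:])[:, 1]
    print("AUC:", roc_auc_score(y[800:], probs))
    ```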

  8. Predicting microbial interactions through computational approaches.

    PubMed

    Li, Chenhao; Lim, Kun Ming Kenneth; Chng, Kern Rei; Nagarajan, Niranjan

    2016-06-01

    Microorganisms play a vital role in various ecosystems and characterizing interactions between them is an essential step towards understanding the organization and function of microbial communities. Computational prediction has recently become a widely used approach to investigate microbial interactions. We provide a thorough review of emerging computational methods organized by the type of data they employ. We highlight three major challenges in inferring interactions using metagenomic survey data and discuss the underlying assumptions and mathematics of interaction inference algorithms. In addition, we review interaction prediction methods relying on metabolic pathways, which are increasingly used to reveal mechanisms of interactions. Furthermore, we also emphasize the importance of mining the scientific literature for microbial interactions - a largely overlooked data source for experimentally validated interactions. PMID:27025964

  9. Sculpting the band gap: a computational approach.

    PubMed

    Prasai, Kiran; Biswas, Parthapratim; Drabold, D A

    2015-01-01

    Materials with optimized band gap are needed in many specialized applications. In this work, we demonstrate that Hellmann-Feynman forces associated with the gap states can be used to find atomic coordinates that yield desired electronic density of states. Using tight-binding models, we show that this approach may be used to arrive at electronically designed models of amorphous silicon and carbon. We provide a simple recipe to include a priori electronic information in the formation of computer models of materials, and prove that this information may have profound structural consequences. The models are validated with plane-wave density functional calculations. PMID:26490203

  10. Sculpting the band gap: a computational approach

    PubMed Central

    Prasai, Kiran; Biswas, Parthapratim; Drabold, D. A.

    2015-01-01

    Materials with optimized band gap are needed in many specialized applications. In this work, we demonstrate that Hellmann-Feynman forces associated with the gap states can be used to find atomic coordinates that yield desired electronic density of states. Using tight-binding models, we show that this approach may be used to arrive at electronically designed models of amorphous silicon and carbon. We provide a simple recipe to include a priori electronic information in the formation of computer models of materials, and prove that this information may have profound structural consequences. The models are validated with plane-wave density functional calculations. PMID:26490203

  11. Computational modeling approaches in gonadotropin signaling.

    PubMed

    Ayoub, Mohammed Akli; Yvinec, Romain; Crépieux, Pascale; Poupon, Anne

    2016-07-01

    Follicle-stimulating hormone and LH play essential roles in animal reproduction. They exert their function through binding to their cognate receptors, which belong to the large family of G protein-coupled receptors. This recognition at the plasma membrane triggers a plethora of cellular events, whose processing and integration ultimately lead to an adapted biological response. Understanding the nature and the kinetics of these events is essential for innovative approaches in drug discovery. The study and manipulation of such complex systems requires the use of computational modeling approaches combined with robust in vitro functional assays for calibration and validation. Modeling brings a detailed understanding of the system and can also be used to understand why existing drugs do not work as well as expected, and how to design more efficient ones. PMID:27165991

  12. Computational approaches for predicting mutant protein stability.

    PubMed

    Kulshreshtha, Shweta; Chaudhary, Vigi; Goswami, Girish K; Mathur, Nidhi

    2016-05-01

    Mutations in a protein affect not only its structure but also its function and stability. Accurate prediction of mutant protein stability is desired for uncovering the molecular aspects of diseases and for the design of novel proteins. Many advanced computational approaches have been developed over the years to predict the stability and function of a mutated protein. These approaches, based on structure features, sequence features or combined features (both structure and sequence), provide reasonably accurate estimates of the impact of an amino acid substitution on the stability and function of a protein. Recently, consensus tools have been developed that incorporate many tools together and provide single-window results for comparison purposes. In this review, a useful guide is provided for the selection of tools that can be employed in predicting the stability and disease-causing capability of mutated proteins. PMID:27160393

  13. Effect of inlet geometry on macrosegregation during the direct chill casting of 7050 alloy billets: experiments and computer modelling

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Eskin, D. G.; Miroux, A.; Subroto, T.; Katgerman, L.

    2012-07-01

    Controlling macrosegregation is one of the major challenges in direct-chill (DC) casting of aluminium alloys. In this paper, the effect of the inlet geometry (which influences the melt distribution) on macrosegregation during the DC casting of 7050 alloy billets was studied experimentally and by using 2D computer modelling. The ALSIM model was used to determine the temperature and flow patterns during DC casting. The results from the computer simulations show that the sump profiles and flow patterns in the billet are strongly influenced by the melt flow distribution determined by the inlet geometry. These observations were correlated to the actual macrosegregation patterns found in the as-cast billets produced by having two different inlet geometries. The macrosegregation analysis presented here may assist in determining the critical parameters to consider for improving the casting of 7XXX aluminium alloys.

  14. Geometry, analysis, and computation in mathematics and applied sciences. Final report

    SciTech Connect

    Kusner, R.B.; Hoffman, D.A.; Norman, P.; Pedit, F.; Whitaker, N.; Oliver, D.

    1995-12-31

    Since 1993, the GANG laboratory has been co-directed by David Hoffman, Rob Kusner and Peter Norman. A great deal of mathematical research has been carried out here by them and by GANG faculty members Franz Pedit and Nate Whitaker. New communication tools, such as the GANG Webserver, have also been developed. GANG has trained and supported nearly a dozen graduate students, and at least half as many undergraduates in REU projects. The GANG Seminar continues to thrive, making Amherst a site for short- and long-term visitors to come to work with the GANG. Some of the highlights of recent or ongoing research at GANG include: CMC surfaces, minimal surfaces, fluid dynamics, harmonic maps, isometric immersions, knot energies, foam structures, high-dimensional soap film singularities, elastic curves and surfaces, self-similar curvature evolution, integrable systems and theta functions, fully nonlinear geometric PDE, and geometric chemistry and biology. This report is divided into the following sections: (1) geometric variational problems; (2) soliton geometry; (3) embedded minimal surfaces; (4) numerical fluid dynamics and mathematical modeling; (5) GANG graphics and mathematical software; (6) description of the computational and visual analysis facility; and (7) research by undergraduates and GANG graduate seminar.

  15. Peer Interactions in a Computer Lab: Reflections on Results of a Case Study Involving Web-Based Dynamic Geometry Sketches

    ERIC Educational Resources Information Center

    Sinclair, Margaret P.

    2005-01-01

    A case study, originally set up to identify and describe some benefits and limitations of using dynamic web-based geometry sketches, provided an opportunity to examine peer interactions in a lab. Since classes were held in a computer lab, teachers and pairs faced the challenges of working and communicating in a lab environment. Research has shown…

  16. The Theory of Transactional Distance as a Framework for the Analysis of Computer-Aided Teaching of Geometry

    ERIC Educational Resources Information Center

    Papadopoulos, Ioannis; Dagdilelis, Vassilios

    2006-01-01

    In this paper, difficulties of students in the case of computer-mediated teaching of geometry in a traditional classroom are considered within the framework of "transactional distance", a concept well known in distance education. The main interest of this paper is to record and describe in detail the different forms of "distance" during students'…

  17. Using Video Modeling via Handheld Computers to Improve Geometry Skills for High School Students with Learning Disabilities

    ERIC Educational Resources Information Center

    Cihak, David F.; Bowlin, Tammy

    2009-01-01

    The researchers examined the use of video modeling by means of a handheld computer as an alternative instructional delivery system for learning basic geometry skills. Three high school students with learning disabilities participated in this study. Through video modeling, teacher-developed video clips showing step-by-step problem solving processes…

  18. Computational approaches to motor learning by imitation.

    PubMed Central

    Schaal, Stefan; Ijspeert, Auke; Billard, Aude

    2003-01-01

    Movement imitation requires a complex set of mechanisms that map an observed movement of a teacher onto one's own movement apparatus. Relevant problems include movement recognition, pose estimation, pose tracking, body correspondence, coordinate transformation from external to egocentric space, matching of observed against previously learned movement, resolution of redundant degrees-of-freedom that are unconstrained by the observation, suitable movement representations for imitation, modularization of motor control, etc. All of these topics by themselves are active research problems in computational and neurobiological sciences, such that their combination into a complete imitation system remains a daunting undertaking; indeed, one could argue that we need to understand the complete perception-action loop. As a strategy to untangle the complexity of imitation, this paper will examine imitation purely from a computational point of view, i.e. we will review statistical and mathematical approaches that have been suggested for tackling parts of the imitation problem, and discuss their merits, disadvantages and underlying principles. Given the focus on action recognition of other contributions in this special issue, this paper will primarily emphasize the motor side of imitation, assuming that a perceptual system has already identified important features of a demonstrated movement and created their corresponding spatial information. Based on the formalization of motor control in terms of control policies and their associated performance criteria, useful taxonomies of imitation learning can be generated that clarify different approaches and future research directions. PMID:12689379

  19. Computational Flow Modeling of a Simplified Integrated Tractor-Trailer Geometry

    SciTech Connect

    Salari, K; McWherter-Payne, M

    2003-09-15

    For several years, Sandia National Laboratories and Lawrence Livermore National Laboratory have been part of a consortium funded by the Department of Energy to improve fuel efficiency of heavy vehicles such as Class 8 trucks through aerodynamic drag reduction. The objective of this work is to demonstrate the feasibility of using the steady Reynolds-Averaged Navier-Stokes (RANS) approach to predict the flow field around heavy vehicles, with special emphasis on the base region of the trailer, and to compute the aerodynamic forces. In particular, Sandia's computational fluid dynamics code, SACCARA, was used to simulate the flow on a simplified model of a tractor-trailer vehicle. The results are presented and compared with NASA Ames experimental data to assess the predictive capability of RANS to model the flow field and predict the aerodynamic forces.

  20. Computational flow modeling of a simplified integrated tractor-trailer geometry.

    SciTech Connect

    McWherter-Payne, Mary Anna; Salari, Kambiz

    2003-09-01

    For several years, Sandia National Laboratories and Lawrence Livermore National Laboratory have been part of a consortium funded by the Department of Energy to improve fuel efficiency of heavy vehicles such as Class 8 trucks through aerodynamic drag reduction. The objective of this work is to demonstrate the feasibility of using the steady Reynolds-Averaged Navier-Stokes (RANS) approach to predict the flow field around heavy vehicles, with special emphasis on the base region of the trailer, and to compute the aerodynamic forces. In particular, Sandia's computational fluid dynamics code, SACCARA, was used to simulate the flow on a simplified model of a tractor-trailer vehicle. The results are presented and compared with NASA Ames experimental data to assess the predictive capability of RANS to model the flow field and predict the aerodynamic forces.

  1. A Parametric Geometry Computational Fluid Dynamics (CFD) Study Utilizing Design of Experiments (DOE)

    NASA Technical Reports Server (NTRS)

    Rhew, Ray D.; Parker, Peter A.

    2007-01-01

    Design of Experiments (DOE) was applied to the LAS geometric parameter study to efficiently identify and rank primary contributors to integrated drag over the vehicle's ascent trajectory in an order of magnitude fewer CFD configurations, thereby reducing computational resources and solution time. SMEs were able to gain a better understanding of the underlying flow physics of different geometric parameter configurations through the identification of interaction effects. An interaction effect, which describes how the effect of one factor changes with respect to the levels of other factors, is often the key to product optimization. A DOE approach emphasizes a sequential approach to learning through successive experimentation to continuously build on previous knowledge. These studies represent a starting point for expanded experimental activities that will eventually cover the entire design space of the vehicle and flight trajectory.

  2. Effect of ocular shape and vascular geometry on retinal hemodynamics: a computational model.

    PubMed

    Dziubek, Andrea; Guidoboni, Giovanna; Harris, Alon; Hirani, Anil N; Rusjan, Edmond; Thistleton, William

    2016-08-01

    A computational model for retinal hemodynamics accounting for ocular curvature is presented. The model combines (i) a hierarchical Darcy model for the flow through small arterioles, capillaries and small venules in the retinal tissue, where blood vessels of different size are grouped into different hierarchical levels of a porous medium; and (ii) a one-dimensional network model for the blood flow through retinal arterioles and venules of larger size. The non-planar ocular shape is included by (i) defining the hierarchical Darcy flow model on a two-dimensional curved surface embedded in the three-dimensional space; and (ii) mapping the simplified one-dimensional network model onto the curved surface. The model is solved numerically using a finite element method in which spatial domain and hierarchical levels are discretized separately. For the finite element method, we use an exterior calculus-based implementation which permits an easier treatment of non-planar domains. Numerical solutions are verified against suitably constructed analytical solutions. Numerical experiments are performed to investigate how retinal hemodynamics is influenced by the ocular shape (sphere, oblate spheroid, prolate spheroid and barrel are compared) and vascular architecture (four vascular arcs and a branching vascular tree are compared). The model predictions show that changes in ocular shape induce non-uniform alterations of blood pressure and velocity in the retina. In particular, we found that (i) the temporal region is affected the least by changes in ocular shape, and (ii) the barrel shape departs the most from the hemispherical reference geometry in terms of associated pressure and velocity distributions in the retinal microvasculature. These results support the clinical hypothesis that alterations in ocular shape, such as those occurring in myopic eyes, might be associated with pathological alterations in retinal hemodynamics. PMID:26445874
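
    As an illustration of the one-dimensional network component, the sketch below assembles Poiseuille conductances into a nodal pressure system for a toy arteriole tree. The viscosity, radii, lengths, and boundary pressures are invented for the example, and the paper's hierarchical Darcy coupling and curved-surface mapping are not included.

    ```python
    import numpy as np

    # Poiseuille conductance G = pi r^4 / (8 mu l) per vessel segment,
    # assembled into a nodal system A p = b (toy 4-node arteriole tree).
    mu = 3.5e-3                                   # blood viscosity, Pa s (assumed)
    edges = [(0, 1, 60e-6, 2e-3),                 # (node_i, node_j, radius m, length m)
             (1, 2, 40e-6, 1.5e-3),
             (1, 3, 40e-6, 1.5e-3)]
    n = 4
    A = np.zeros((n, n)); b = np.zeros(n)
    for i, j, r, l in edges:
        g = np.pi * r**4 / (8.0 * mu * l)
        A[i, i] += g; A[j, j] += g; A[i, j] -= g; A[j, i] -= g

    # Dirichlet boundary conditions: inlet pressure at node 0, outlets at leaves.
    for node, p_bc in [(0, 5000.0), (2, 2000.0), (3, 2000.0)]:
        A[node, :] = 0.0; A[node, node] = 1.0; b[node] = p_bc

    p = np.linalg.solve(A, b)
    print(p)   # nodal pressures, Pa; interior node lies between inlet and outlets
    ```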

  3. A computational approach to negative priming

    NASA Astrophysics Data System (ADS)

    Schrobsdorff, H.; Ihrke, M.; Kabisch, B.; Behrendt, J.; Hasselhorn, M.; Herrmann, J. Michael

    2007-09-01

    Priming is characterized by a sensitivity of reaction times to the sequence of stimuli in psychophysical experiments. The reduction of the reaction time observed in positive priming is well-known and experimentally understood (Scarborough et al., J. Exp. Psychol.: Hum. Percept. Perform., 3, pp. 1-17, 1977). Negative priming—the opposite effect—is experimentally less tangible (Fox, Psychonom. Bull. Rev., 2, pp. 145-173, 1995). Its dependence on subtle parameter changes (such as the response-stimulus interval) varies across studies. The sensitivity of the negative priming effect bears great potential for applications in research in fields such as memory, selective attention, and ageing effects. We develop and analyse a computational realization, CISAM, of a recent psychological model for action decision making, the ISAM (Kabisch, PhD thesis, Friedrich-Schiller-Universitat, 2003), which is sensitive to priming conditions. With the dynamical systems approach of the CISAM, we show that a single adaptive threshold mechanism is sufficient to explain both positive and negative priming effects. This is achieved by comparing results obtained by the computational modelling with experimental data from our laboratory. The implementation provides a rich base from which testable predictions can be derived, e.g. with respect to hitherto untested stimulus combinations (e.g. single-object trials).
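
    The ISAM/CISAM equations are not reproduced in the abstract, so the following is only a toy illustration of the core idea: a single accumulator whose decision threshold adapts toward recent activation, so that residual threshold carried over from a preceding trial shifts the simulated reaction time. All parameters are invented.

    ```python
    def simulate_trial(input_strength=0.8, theta0=1.0, adapt_rate=0.05,
                       gain=0.12, dt=1.0, max_steps=500):
        """Toy accumulator with a single adaptive decision threshold."""
        a, theta = 0.0, theta0
        for step in range(1, max_steps + 1):
            a += gain * input_strength * dt          # evidence accumulation
            theta += adapt_rate * (a - theta) * dt   # threshold tracks activation
            if a >= theta:
                return step, theta                   # reaction time (steps), final threshold
        return max_steps, theta

    rt_base, theta_end = simulate_trial()
    rt_primed, _ = simulate_trial(theta0=1.5 * theta_end)  # residual threshold carried over
    print(rt_base, rt_primed)   # the "primed" trial responds more slowly
    ```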

  4. Validation of Methods for Computational Catalyst Design: Geometries, Structures, and Energies of Neutral and Charged Silver Clusters

    SciTech Connect

    Duanmu, Kaining; Truhlar, Donald G.

    2015-04-30

    We report a systematic study of small silver clusters, Agn, Agn+, and Agn−, n = 1–7. We studied all possible isomers of clusters with n = 5–7. We tested 42 exchange–correlation functionals, and we assess these functionals for their accuracy in three respects: geometries (quantitative prediction of internuclear distances), structures (the nature of the lowest-energy structure, for example, whether it is planar or nonplanar), and energies. We find that the ingredients of exchange–correlation functionals are indicators of their success in predicting geometries and structures: local exchange–correlation functionals are generally better than hybrid functionals for geometries; functionals depending on kinetic energy density are the best for predicting the lowest-energy isomer correctly, especially for predicting two-dimensional to three-dimensional transitions correctly. The accuracy for energies is less sensitive to the ingredient list. Our findings could be useful for guiding the selection of methods for computational catalyst design.

  5. Data-Driven Multimodal Sleep Apnea Events Detection: Synchrosqueezing Transform Processing and Riemannian Geometry Classification Approaches.

    PubMed

    Rutkowski, Tomasz M

    2016-07-01

    A novel multimodal and bio-inspired approach to biomedical signal processing and classification is presented in the paper. This approach allows for an automatic semantic labeling (interpretation) of sleep apnea events based on the proposed data-driven biomedical signal processing and classification. The presented signal processing and classification methods have been already successfully applied to real-time unimodal brainwaves (EEG only) decoding in brain-computer interfaces developed by the author. In the current project, very encouraging results are obtained using multimodal biomedical (brainwaves and peripheral physiological) signals in a unified processing approach allowing for the automatic semantic data description. The results thus support a hypothesis of the data-driven and bio-inspired signal processing approach validity for medical data semantic interpretation based on the sleep apnea events machine-learning-related classification. PMID:27194241
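
    The abstract does not spell out the classifier, but Riemannian-geometry approaches to biomedical covariance features commonly use the affine-invariant distance between symmetric positive-definite (SPD) matrices with a minimum-distance-to-prototype rule. The sketch below shows that standard construction on synthetic covariance "prototypes"; all data and class labels are invented.

    ```python
    import numpy as np
    from scipy.linalg import fractional_matrix_power, logm

    def riemannian_distance(A, B):
        """Affine-invariant Riemannian distance between SPD matrices."""
        A_inv_sqrt = fractional_matrix_power(A, -0.5)
        M = A_inv_sqrt @ B @ A_inv_sqrt
        return np.linalg.norm(logm(M), "fro")

    # Toy usage: classify a trial covariance by the nearest class prototype.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 4))
    C_apnea = np.cov(X.T) + 0.5 * np.eye(4)     # placeholder class prototypes
    C_normal = np.cov(X.T) + 0.1 * np.eye(4)
    C_trial = np.cov(rng.standard_normal((200, 4)).T) + 0.4 * np.eye(4)

    d_a = riemannian_distance(C_trial, C_apnea)
    d_n = riemannian_distance(C_trial, C_normal)
    print("apnea" if d_a < d_n else "normal", round(d_a, 3), round(d_n, 3))
    ```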

  6. Computations of Viscous Flows in Complex Geometries Using Multiblock Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, Erlendur; Ameri, Ali A.

    1995-01-01

    Generating high quality, structured, continuous, body-fitted grid systems (multiblock grid systems) for complicated geometries has long been a most labor-intensive and frustrating part of simulating flows in complicated geometries. Recently, new methodologies and software have emerged that greatly reduce the human effort required to generate high quality multiblock grid systems for complicated geometries. These methods and software require minimal input from the user: typically, only information about the topology of the block structure and the number of grid points. This paper demonstrates the use of the new breed of multiblock grid systems in simulations of internal flows in complicated geometries. The geometry used in this study is a duct with a sudden expansion, a partition, and an array of cylindrical pins. This geometry has many of the features typical of internal coolant passages in turbine blades. The grid system used in this study was generated using a commercially available grid generator. The simulations were done using a recently developed flow solver, TRAF3D.MB, that was specially designed to use multiblock grid systems.

  7. FACET: a radiation view factor computer code for axisymmetric, 2D planar, and 3D geometries with shadowing

    SciTech Connect

    Shapiro, A.B.

    1983-08-01

    The computer code FACET calculates the radiation geometric view factor (alternatively called shape factor, angle factor, or configuration factor) between surfaces for axisymmetric, two-dimensional planar and three-dimensional geometries with interposed third surface obstructions. FACET was developed to calculate view factors for input to finite-element heat-transfer analysis codes. The first section of this report is a brief review of previous radiation-view-factor computer codes. The second section presents the defining integral equation for the geometric view factor between two surfaces and the assumptions made in its derivation. Also in this section are the numerical algorithms used to integrate this equation for the various geometries. The third section presents the algorithms used to detect self-shadowing and third-surface shadowing between the two surfaces for which a view factor is being calculated. The fourth section provides a user's input guide followed by several example problems.
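
    The defining double-area integral mentioned in the report can also be estimated by Monte Carlo sampling; the sketch below does this for two coaxial, parallel unit squares. This illustrates the integral itself, not FACET's quadrature or shadowing algorithms.

    ```python
    import numpy as np

    def view_factor_mc(h=1.0, n=200_000, seed=1):
        """Monte Carlo estimate of F_{1->2} = (1/A1) * integral of
        cos(t1) cos(t2) / (pi r^2) dA1 dA2 for two coaxial, parallel
        unit squares a distance h apart."""
        rng = np.random.default_rng(seed)
        p1 = rng.random((n, 2))                # sample points on surface 1 (z = 0)
        p2 = rng.random((n, 2))                # sample points on surface 2 (z = h)
        dx, dy = p2[:, 0] - p1[:, 0], p2[:, 1] - p1[:, 1]
        r2 = dx**2 + dy**2 + h**2
        # For facing parallel surfaces, cos(t1) = cos(t2) = h / r.
        integrand = h**2 / (np.pi * r2**2)
        return integrand.mean() * 1.0          # times A2 (unit square)

    print(view_factor_mc())   # tabulated analytic value is about 0.1998 for h = 1
    ```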

  8. Parallel methods for the computation of unsteady separated flows around complex geometries

    NASA Astrophysics Data System (ADS)

    Souliez, Frederic Jean

    A numerical investigation of separated flows is made using unstructured meshes around complex geometries. The flow data in the wake of a 60-degree vertex angle cone are analyzed for various versions of our finite volume solver, including a generic version without turbulence model, and a Large Eddy Simulation model with different sub-grid scale constant values. While the primary emphasis is on the comparison of the results against experimental data, the solution is also used as a benchmark tool for an aeroacoustic post-processing utility combined with the Ffowcs Williams-Hawkings (FW-H) equation. A concurrent study is performed of the flow around two 4-wheel landing gear models, the difference between them being the addition of two support struts. These unsteady calculations are used to provide aerodynamic and aeroacoustic data. The impact of the two configurations on the forces as well as on the acoustic near- and far-field is evaluated with the help of the above-mentioned aeroacoustic program. For both the cone and landing gear runs, parallel versions of the flow solver and of the FW-H utility are used via the implementation of the Message Passing Interface (MPI) library, resulting in very good scaling performance. The speed-up results for these cases are described for different platforms including inexpensive Beowulf-class clusters, which are the computing workhorse for the present numerical investigation. Furthermore, the analysis of the flow around a Bell 214 Super Transport (ST) fuselage is presented. A mesh sensitivity analysis is compared against experimental and numerical results collected by the helicopter manufacturer. Parameters such as surface pressure coefficient, lift and drag are evaluated resulting from both steady-state and time-accurate simulations. Various flight conditions are tested, with a slightly negative angle of attack, a large positive angle of attack and a positive yaw angle, all of which result in massive flow separation

  9. Geometry acquisition and grid generation: Recent experiences with complex aircraft configurations

    NASA Technical Reports Server (NTRS)

    Gatzke, Timothy D.; Labozzetta, Walter F.; Cooley, John W.; Finfrock, Gregory P.

    1992-01-01

    Important issues involved in working with complex geometries are discussed. Approaches taken to address complex geometry issues in the McDonnell Aircraft Computational Grid System and related geometry processing tools are discussed. The efficiency of acquiring a suitable geometry definition, the need to manipulate the geometry, and the time and skill level required to generate the grid while preserving geometric fidelity are discussed.

  10. Fully Integrated Approach to Compute Vibrationally Resolved Optical Spectra: From Small Molecules to Macrosystems.

    PubMed

    Barone, Vincenzo; Bloino, Julien; Biczysko, Malgorzata; Santoro, Fabrizio

    2009-03-10

    A general and effective time-independent approach to compute vibrationally resolved electronic spectra from first principles has been integrated into the Gaussian computational chemistry package. This computational tool offers a simple and easy-to-use way to compute theoretical spectra starting from geometry optimization and frequency calculations for each electronic state. It is shown that in such a way it is straightforward to combine calculation of Franck-Condon integrals with any electronic computational model. The given examples illustrate the calculation of absorption and emission spectra, all in the UV-vis region, of various systems from small molecules to large ones, in gas as well as in condensed phases. The computational models applied range from fully quantum mechanical descriptions to discrete/continuum quantum mechanical/molecular mechanical/polarizable continuum models. PMID:26610221
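
    For the simplest case, two displaced harmonic oscillators of equal frequency, the Franck-Condon progression reduces to a Poisson distribution in the Huang-Rhys factor S; a minimal sketch follows, with S an assumed value. The Gaussian implementation described above handles far more general cases (mode mixing, frequency changes, anharmonicity corrections).

    ```python
    import math

    # Franck-Condon factors for two displaced harmonic oscillators with equal
    # frequencies: |<0|n>|^2 = exp(-S) * S**n / n!, with Huang-Rhys factor S.
    S = 1.2   # assumed dimensionless displacement parameter
    for n in range(6):
        fc = math.exp(-S) * S**n / math.factorial(n)
        print(f"0 -> {n}: {fc:.4f}")   # relative vibronic band intensities
    ```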

  11. A geometric calibration method for inverse geometry computed tomography using P-matrices

    NASA Astrophysics Data System (ADS)

    Slagowski, Jordan M.; Dunkerley, David A. P.; Hatt, Charles R.; Speidel, Michael A.

    2016-03-01

    Accurate and artifact free reconstruction of tomographic images requires precise knowledge of the imaging system geometry. This work proposes a novel projection matrix (P-matrix) based calibration method to enable C-arm inverse geometry CT (IGCT). The method is evaluated for scanning-beam digital x-ray (SBDX), a C-arm mounted inverse geometry fluoroscopic technology. A helical configuration of fiducials is imaged at each gantry angle in a rotational acquisition. For each gantry angle, digital tomosynthesis is performed at multiple planes and a composite image analogous to a cone-beam projection is generated from the plane stack. The geometry of the C-arm, source array, and detector array is determined at each angle by constructing a parameterized 3D-to-2D projection matrix that minimizes the sum-of-squared deviations between measured and projected fiducial coordinates. Simulations were used to evaluate calibration performance with translations and rotations of the source and detector. In a geometry with 1 mm translation of the central ray relative to the axis-of-rotation and 1 degree yaw of the detector and source arrays, the maximum error in the recovered translational parameters was 0.4 mm and maximum error in the rotation parameter was 0.02 degrees. The relative root-mean-square error in a reconstruction of a numerical thorax phantom was 0.4% using the calibration method, versus 7.7% without calibration. Changes in source-detector distance were the most challenging to estimate. Reconstruction of experimental SBDX data using the proposed method eliminated double contour artifacts present in a non-calibrated reconstruction. The proposed IGCT geometric calibration method reduces image artifacts when uncertainties exist in system geometry.
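
    A minimal sketch of the P-matrix estimation step, using the standard direct linear transform (DLT) to fit a 3x4 projection matrix to 3D fiducial / 2D image correspondences. The paper additionally parameterizes the matrix by the C-arm/source/detector geometry and works from tomosynthesis composites, which is not reproduced here.

    ```python
    import numpy as np

    def fit_projection_matrix(X, x):
        """Direct linear transform: fit P (3x4) such that x ~ P @ [X; 1].
        X: (n, 3) fiducial coordinates; x: (n, 2) measured image points."""
        n = X.shape[0]
        Xh = np.hstack([X, np.ones((n, 1))])
        A = np.zeros((2 * n, 12))
        A[0::2, 0:4] = Xh
        A[0::2, 8:12] = -x[:, [0]] * Xh
        A[1::2, 4:8] = Xh
        A[1::2, 8:12] = -x[:, [1]] * Xh
        _, _, Vt = np.linalg.svd(A)          # null vector = least-squares solution
        return Vt[-1].reshape(3, 4)

    # Synthetic check: a random projection is recovered exactly from 20 points.
    rng = np.random.default_rng(2)
    P_true = rng.standard_normal((3, 4))
    X = rng.standard_normal((20, 3))
    xh = (P_true @ np.hstack([X, np.ones((20, 1))]).T).T
    x = xh[:, :2] / xh[:, 2:]
    P_est = fit_projection_matrix(X, x)
    P_est /= np.linalg.norm(P_est)
    P_ref = P_true / np.linalg.norm(P_true)
    print(min(np.linalg.norm(P_est - P_ref), np.linalg.norm(P_est + P_ref)) < 1e-6)
    ```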

  12. Along-strike complex geometry of subduction zones - an experimental approach

    NASA Astrophysics Data System (ADS)

    Midtkandal, I.; Gabrielsen, R. H.; Brun, J.-P.; Huismans, R.

    2012-04-01

    Recent knowledge of the great geometric and dynamic complexity in subduction zones, combined with new capacity for analogue mechanical and numerical modeling, has sparked a number of studies on subduction processes. Not unexpectedly, such models reveal a complex relation between physical conditions during subduction initiation, the strength profile of the subducting plate, the thermo-dynamic conditions, and the resulting subduction zone geometries. One rare geometrical complexity of subduction that remains particularly controversial is the potential for polarity shift in subduction systems. The present experiments were therefore performed to explore the influence of the architecture, strength and strain velocity on complexities in subduction zones, focusing on along-strike variation of the collision zone. Of particular concern were the consequences for the geometry and kinematics of the transition zones between segments of contrasting subduction direction. Although the model design to some extent was inspired by the configuration along the Iberian - Eurasian suture zone, the results are also of significance for other orogens with complex along-strike geometries. The experiments were set up to explore the initial state of subduction only, and were accordingly terminated before slab subduction occurred. The model was built from layers of silicone putty and sand, tailored to simulate the assumed lithospheric geometries and strength-viscosity profiles along the plate boundary zone prior to contraction, and comprises two 'continental' plates separated by a thinner 'oceanic' plate that represents the narrow seaway. The experiment floats on a substrate of sodium polytungstate, representing the mantle. 24 experimental runs were performed, varying the thickness (and thus strength) of the upper mantle lithosphere, as well as the strain rate. Keeping all other parameters identical for each experiment, the models were shortened by a computer-controlled jackscrew while time-lapse images were

  13. Computer-aided three-dimensional analysis of the small-geometry effects of a MOSFET

    SciTech Connect

    Hsueh, K.L.K.

    1987-01-01

    The 3-D effects of a small-geometry MOSFET can only be analyzed accurately by using a 3-D simulator. A 3-D MOSFET simulator, called MICROMOS, therefore, was developed for this purpose. The history of numerical analysis used to simulate semiconductor devices was reviewed. Numerical methods, their mathematical background, and the iteration techniques commonly used in semiconductor simulation are also discussed. The three-dimensional graphic results of the numerical analysis give valuable information for understanding the physics of the small-geometry effects in a VLSI MOSFET. A mutual modulation of the depletion depth underneath the gate is described. This leads to an accurate 3-D analytical model for the prediction of the threshold voltage of a small-geometry MOSFET with a fully-recessed isolation oxide structure. Also, there is a mutual modulation between the transverse electric field and its two perpendicular components. This modulation was proven to be the source of the small-geometry effects of a small-size MOSFET. The enhanced drain-induced barrier lowering (DIBL) due to the scaling of the device is also presented.

  14. Computational Approach for Epitaxial Polymorph Stabilization through Substrate Selection.

    PubMed

    Ding, Hong; Dwaraknath, Shyam S; Garten, Lauren; Ndione, Paul; Ginley, David; Persson, Kristin A

    2016-05-25

    With the ultimate goal of finding new polymorphs through targeted synthesis conditions and techniques, we outline a computational framework to select optimal substrates for epitaxial growth using first-principles calculations of formation energies, elastic strain energy, and topological information. To demonstrate the approach, we study the stabilization of metastable VO2 compounds, which provide a rich chemical and structural polymorph space. We find that common polymorph statistics, lattice matching, and energy above hull considerations recommend homostructural growth on TiO2 substrates, where the VO2 brookite phase would be preferentially grown on the a-c TiO2 brookite plane while the columbite and anatase structures favor the a-b plane on the respective TiO2 phases. Overall, we find that a model which incorporates a geometric unit cell area matching between the substrate and the target film as well as the resulting strain energy density of the film provides qualitative agreement with experimental observations for the heterostructural growth of known VO2 polymorphs: rutile, A and B phases. The minimal interfacial geometry matching and estimated strain energy criteria provide several suggestions for substrates and substrate-film orientations for the heterostructural growth of the hitherto hypothetical anatase, brookite, and columbite polymorphs. These criteria serve as a preliminary guidance for the experimental efforts stabilizing new materials and/or polymorphs through epitaxy. The current screening algorithm is being integrated within the Materials Project online framework, and the data are hence publicly available. PMID:27145398
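
    The lattice-matching part of such a screen can be sketched very simply: misfit strain from the in-plane parameter mismatch, and a crude elastic strain-energy density from it. The lattice parameters and modulus below are placeholders, not the paper's data, and the paper's full criterion also matches unit-cell areas and topology.

    ```python
    # Misfit strain and a crude strain-energy screen for film/substrate pairs.
    pairs = {
        ("VO2-rutile", "TiO2-rutile"): (4.55, 4.59),     # film a, substrate a (angstrom; placeholders)
        ("VO2-anatase", "TiO2-anatase"): (3.79, 3.78),
    }
    Y = 140e9   # effective film modulus, Pa (assumed)

    for (film, sub), (a_f, a_s) in pairs.items():
        eps = (a_s - a_f) / a_f          # misfit strain if the film adopts the substrate parameter
        u = 0.5 * Y * eps**2             # elastic strain-energy density, J/m^3
        print(f"{film} on {sub}: strain {eps:+.3%}, u = {u:.2e} J/m^3")
    ```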

  15. High-resolution structure of an HIV zinc fingerlike domain via a new NMR-based distance geometry approach

    SciTech Connect

    Summers, M.F.; South, T.L.; Kim, B. ); Hare, D.R. )

    1990-01-16

    A new method is described for determining molecular structures from NMR data. The approach utilizes 2D NOESY back-calculations to generate simulated spectra for structures obtained from distance geometry (DG) computations. Comparison of experimental and back-calculated spectra, including analysis of cross-peak buildup and auto-peak decay with increasing mixing time, provides a quantitative measure of the consistency between the experimental data and generated structures and allows for use of tighter interproton distance constraints. For the first time, the goodness of the generated structures is evaluated on the basis of their consistency with the actual experimental data rather than on the basis of consistency with other generated structures. This method is applied to the structure determination of an 18-residue peptide with an amino acid sequence comprising the first zinc fingerlike domain from the gag protein p55 of HIV. This is the first structure determination to atomic resolution for a retroviral zinc fingerlike complex. The peptide (Zn(p55F1)) exhibits a novel folding pattern that includes type I and type II NH-S tight turns and is stabilized both by coordination of the three Cys and one His residues to zinc and by extensive internal hydrogen bonding. The backbone folding is significantly different from that of a classical DNA-binding zinc finger. The side chains of conservatively substituted Phe and Ile residues implicated in genomic RNA recognition form a hydrophobic patch on the peptide surface.
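
    The quantitative comparison of experimental and back-calculated intensities can be summarized with a residual; below is a crystallography-style R-factor over hypothetical cross-peak intensities at several mixing times. The paper's exact figure of merit is not given in the abstract, so this is only one plausible choice.

    ```python
    import numpy as np

    def noe_r_factor(I_exp, I_calc):
        """Crystallography-style residual applied to NOE cross-peak intensities."""
        return np.abs(I_exp - I_calc).sum() / np.abs(I_exp).sum()

    # Rows: cross peaks; columns: increasing mixing times (invented numbers).
    I_exp = np.array([[1.00, 0.45, 0.12],
                      [0.95, 0.52, 0.18]])
    I_calc = np.array([[0.92, 0.48, 0.15],
                       [1.01, 0.47, 0.14]])
    print(f"R = {noe_r_factor(I_exp, I_calc):.3f}")   # smaller = more consistent
    ```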

  16. CasimirSim - A Tool to Compute Casimir Polder Forces for Nontrivial 3D Geometries

    SciTech Connect

    Sedmik, Rene; Tajmar, Martin

    2007-01-30

    The so-called Casimir effect is one of the most interesting macro-quantum effects. Being negligible on the macro-scale, it becomes a governing factor below structure sizes of 1 μm, where it accounts for typically 100 kN/m². The force does not depend on gravity or electric charge but solely on the material properties and geometrical shape. This makes the effect a strong candidate for micro- and nano-mechanical devices (M(N)EMS). Despite a long history of research, the theory lacks a uniform description valid for arbitrary geometries, which retards technical application. We present an advanced state-of-the-art numerical tool overcoming all the usual geometrical restrictions, capable of calculating arbitrary 3D geometries by utilizing the Casimir-Polder approximation for the Casimir force.
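
    For orientation on magnitudes, the textbook ideal parallel-plate result F/A = π²ħc/(240 d⁴) reproduces the scale quoted above; CasimirSim itself handles arbitrary 3D geometries via the Casimir-Polder approximation, which this one-liner does not attempt.

    ```python
    from math import pi

    hbar, c = 1.054571817e-34, 2.99792458e8   # SI units

    def ideal_plate_pressure(d):
        """Casimir pressure between ideal parallel plates at separation d, in Pa."""
        return pi**2 * hbar * c / (240.0 * d**4)

    for d in (10e-9, 100e-9, 1e-6):
        print(f"d = {d*1e9:6.0f} nm  ->  {ideal_plate_pressure(d):.3e} Pa")
    # about 1.3e5 Pa (roughly 100 kN/m^2) at 10 nm, falling off as 1/d^4
    ```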

  17. CFD study of natural convection mixing in a steam generator mock-up: Comparison between full geometry and porous media approaches

    SciTech Connect

    Dehbi, A.; Badreddine, H.

    2012-07-01

    In CFD simulations of flow mixing in a steam generator (SG) during natural circulation, one is faced with the problem of representing the thousands of SG U-tubes. Typically, simplifications are made to render the problem computationally tractable. In particular, one or a number of tubes are lumped into one volume which is treated as a single porous medium. This approach dramatically reduces the computational size of the problem and hence the simulation time. In this work, we endeavor to investigate the adequacy of this approach by performing two separate simulations of flow in a mock-up with 262 U-tubes, i.e. one in which the porous media model is used for the tube bundle, and another in which the full geometry is represented. In both simulations, the Reynolds Stress Model (RSM) of turbulence is used. We show that in steady state conditions, the porous media treatment yields results which are comparable to those of the full geometry representation (temperature distribution, recirculation ratio, hot plume spread, etc.). Hence, the porous media approach can be extended with a good degree of confidence to the full scale SG. (authors)
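
    The lumping step replaces the tube bundle with momentum sinks; in many CFD codes these take the Darcy-Forchheimer form dP/L = (μ/α)v + C₂·½ρv². A sketch of fitting the two coefficients from (velocity, pressure-drop) data follows; all numbers are invented, and the paper's actual porous-medium parameters are not given in the abstract.

    ```python
    import numpy as np

    # Fit dP/L = (mu/alpha) * v + C2 * 0.5 * rho * v**2 to pressure-drop data.
    mu, rho, L = 1.8e-5, 1.2, 0.5                 # fluid properties, bundle depth (assumed)
    v = np.array([0.5, 1.0, 1.5, 2.0])            # superficial velocities, m/s
    dp = np.array([8.0, 20.0, 36.0, 56.0])        # pressure drops, Pa (placeholder data)

    A = np.column_stack([mu * v, 0.5 * rho * v**2])
    coef, *_ = np.linalg.lstsq(A, dp / L, rcond=None)
    inv_alpha, C2 = coef
    print(f"1/alpha = {inv_alpha:.3e} 1/m^2, C2 = {C2:.3f} 1/m")
    ```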

  18. Computer-aided evaluation of the railway track geometry on the basis of satellite measurements

    NASA Astrophysics Data System (ADS)

    Specht, Cezary; Koc, Władysław; Chrostowski, Piotr

    2016-05-01

    Recent years have seen intensive worldwide development of GNSS (Global Navigation Satellite Systems) measurement techniques and their extension to applications in surveying and navigation. Moreover, many countries have shown a rising trend in the development of rail transportation systems. In this paper, a method of railway track geometry assessment based on mobile satellite measurements is presented. The paper shows the results of implementing satellite surveying of railway geometry. The investigation process described in the paper is divided into two phases. The first phase is the GNSS mobile survey and the analysis of the obtained data. The second phase is the analysis of the track geometry using the planar coordinates from the survey. The visualization of the measured route, the separation and quality assessment of the uniform geometric elements (straight sections, arcs), and the identification of the track polygon (main directions and intersection angles) are discussed and illustrated by a calculation example within the article.
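
    One way to separate straights from arcs in the second phase is a pointwise curvature estimate over the surveyed coordinates. The sketch below uses the three-point (Menger) curvature on a synthetic straight-plus-arc track with an assumed threshold; the paper's actual identification procedure is not detailed in the abstract.

    ```python
    import numpy as np

    def menger_curvature(p1, p2, p3):
        """Curvature of the circle through three consecutive survey points."""
        a = np.linalg.norm(p2 - p1)
        b = np.linalg.norm(p3 - p2)
        c = np.linalg.norm(p3 - p1)
        cross = (p2[0]-p1[0])*(p3[1]-p1[1]) - (p2[1]-p1[1])*(p3[0]-p1[0])
        return 2.0 * abs(cross) / (a * b * c)

    # Synthetic survey: a 200 m straight followed by a 500 m-radius arc.
    t = np.linspace(0.0, 200.0, 101)
    straight = np.column_stack([t, np.zeros_like(t)])
    phi = np.linspace(0.0, 0.4, 101)
    arc = np.column_stack([200.0 + 500.0*np.sin(phi), 500.0*(1.0 - np.cos(phi))])
    xy = np.vstack([straight, arc[1:]])

    kappa = [menger_curvature(xy[i-1], xy[i], xy[i+1]) for i in range(1, len(xy)-1)]
    labels = ["arc" if k > 1e-3 else "straight" for k in kappa]  # threshold assumed
    print(labels.count("arc"), "arc points of", len(labels))
    ```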

  19. Rapid Geometry Creation for Computer-Aided Engineering Parametric Analyses: A Case Study Using ComGeom2 for Launch Abort System Design

    NASA Technical Reports Server (NTRS)

    Hawke, Veronica; Gage, Peter; Manning, Ted

    2007-01-01

    ComGeom2, a tool developed to generate Common Geometry representation for multidisciplinary analysis, has been used to create a large set of geometries for use in a design study requiring analysis by two computational codes. This paper describes the process used to generate the large number of configurations and suggests ways to further automate the process and make it more efficient for future studies. The design geometry for this study is the launch abort system of the NASA Crew Launch Vehicle.

  20. Alternative Computational Approaches for Probabilistic Fatigue Analysis

    NASA Technical Reports Server (NTRS)

    Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Moore, N. R.; Grigoriu, M.

    1995-01-01

    The feasibility of computational methods alternative to direct Monte Carlo simulation for failure probability computations is discussed. First- and second-order reliability methods are applied to fatigue crack growth and low cycle fatigue structural failure modes to illustrate typical problems.
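
    For the linear limit state g = R − S with independent normal variables, the first-order reliability method (FORM) is exact and reduces to a closed form; a minimal sketch with invented moments follows. Real fatigue limit states are nonlinear, which is where the first- and second-order approximations referenced above come in.

    ```python
    from math import sqrt
    from statistics import NormalDist

    # Linear limit state g = R - S with independent normal R (resistance)
    # and S (load); illustrative moments only.
    mu_R, sigma_R = 500.0, 40.0
    mu_S, sigma_S = 350.0, 50.0

    beta = (mu_R - mu_S) / sqrt(sigma_R**2 + sigma_S**2)   # reliability index
    p_f = NormalDist().cdf(-beta)                          # failure probability
    print(f"beta = {beta:.2f}, Pf = {p_f:.2e}")
    ```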

  1. An Exploratory Study of Teachers' Application of Inductive Approaches in Developing an Awareness of Geometry with Fifth and Sixth Grade Children.

    ERIC Educational Resources Information Center

    Schloff, Charles E.

    The purpose of this study was to design a series of inservice seminars with teaching guides in geometry for fifth and sixth grade teachers in order to improve student competence in geometry. The guides stressed the use of a discovery approach. Twelve teachers and 293 children participated. Students took a pretest in geometry, the teachers attended…

  2. Examining the Impact of an Integrative Method of Using Technology on Students' Achievement and Efficiency of Computer Usage and on Pedagogical Procedure in Geometry

    ERIC Educational Resources Information Center

    Gurevich, Irina; Gurev, Dvora

    2012-01-01

    In the current study we follow the development of the pedagogical procedure for the course "Constructions in Geometry" that resulted from using dynamic geometry software (DGS), where the computer became an integral part of the educational process. Furthermore, we examine the influence of integrating DGS into the course on students' achievement and…

  3. GENIE - Generation of computational geometry-grids for internal-external flow configurations

    NASA Technical Reports Server (NTRS)

    Soni, B. K.

    1988-01-01

    Progress realized in the development of a master geometry-grid generation code GENIE is presented. The grid refinement process is enhanced by developing strategies to utilize Bézier curves/surfaces and splines along with a weighted transfinite interpolation technique, and by formulating a new forcing function for the elliptic solver based on the minimization of a non-orthogonality functional. A two-step grid adaptation procedure is developed by optimally blending adaptive weightings with the weighted transfinite interpolation technique. Examples of 2D and 3D grids are provided to illustrate the success of these methods.
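
    The transfinite interpolation at the heart of such grid generators can be illustrated by the bilinearly blended Coons patch, which fills the interior of four boundary curves. This is only the textbook construction; GENIE's weighted blending and Bézier/spline machinery are more elaborate.

    ```python
    import math
    import numpy as np

    def coons_patch(c_b, c_t, c_l, c_r, nu=11, nv=11):
        """Bilinearly blended transfinite interpolation (Coons patch).
        c_b, c_t: bottom/top curves of u in [0, 1]; c_l, c_r: left/right
        curves of v in [0, 1]. Curves must agree at the four corners."""
        grid = np.zeros((nu, nv, 2))
        for i, u in enumerate(np.linspace(0.0, 1.0, nu)):
            for j, v in enumerate(np.linspace(0.0, 1.0, nv)):
                grid[i, j] = ((1-v)*c_b(u) + v*c_t(u)
                              + (1-u)*c_l(v) + u*c_r(v)
                              - ((1-u)*(1-v)*c_b(0) + u*(1-v)*c_b(1)
                                 + (1-u)*v*c_t(0) + u*v*c_t(1)))
        return grid

    # Example: unit square with a bulged bottom edge.
    c_b = lambda u: np.array([u, 0.15 * math.sin(math.pi * u)])
    c_t = lambda u: np.array([u, 1.0])
    c_l = lambda v: np.array([0.0, v])
    c_r = lambda v: np.array([1.0, v])
    grid = coons_patch(c_b, c_t, c_l, c_r)
    print(grid.shape)   # (11, 11, 2) structured grid following the boundaries
    ```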

  4. Computational dynamics for robotics systems using a non-strict computational approach

    NASA Technical Reports Server (NTRS)

    Orin, David E.; Wong, Ho-Cheung; Sadayappan, P.

    1989-01-01

    A Non-Strict computational approach for real-time robotics control computations is proposed. In contrast to the traditional approach to scheduling such computations, based strictly on task dependence relations, the proposed approach relaxes precedence constraints and scheduling is guided instead by the relative sensitivity of the outputs with respect to the various paths in the task graph. An example of the computation of the Inverse Dynamics of a simple inverted pendulum is used to demonstrate the reduction in effective computational latency through use of the Non-Strict approach. A speedup of 5 has been obtained when the processes of the task graph are scheduled to reduce the latency along the crucial path of the computation. While error is introduced by the relaxation of precedence constraints, the Non-Strict approach has a smaller error than the conventional Strict approach for a wide range of input conditions.

  5. Deterministic approach for unsteady rarefied flow simulations in complex geometries and its application to gas flows in microsystems

    NASA Astrophysics Data System (ADS)

    Chigullapalli, Sruti

    Micro-electro-mechanical systems (MEMS) are widely used in automotive, communications and consumer electronics applications, with microactuators, micro gyroscopes and microaccelerometers being just a few examples. However, in areas where high reliability is critical, such as in aerospace and defense applications, very few MEMS technologies have been adopted so far. Further development of high frequency microsystems such as resonators, RF MEMS, microturbines and pulsed-detonation microengines requires improved understanding of unsteady gas dynamics at the micro scale. Accurate computational simulation of such flows demands new approaches beyond the conventional formulations based on the macroscopic constitutive laws. This is due to the breakdown of the continuum hypothesis in the presence of significant non-equilibrium and rarefaction because of large gradients and small scales, respectively. More generally, the motion of molecules in a gas is described by the kinetic Boltzmann equation, which is valid for arbitrary Knudsen numbers. However, due to the multidimensionality of the phase space and the complex non-linearity of the collision term, numerical solution of the Boltzmann equation is challenging for practical problems. In this thesis a fully deterministic, as opposed to statistical, finite-volume-based three-dimensional solution of the Boltzmann ES-BGK model kinetic equation is formulated to enable simulations of unsteady rarefied flows. The main goal of this research is to develop an unsteady rarefied solver integrated with the finite volume method (FVM) solver in MEMOSA (MEMS Overall Simulation Administrator), developed by PRISM, the NNSA center for Prediction of Reliability, Integrity and Survivability of Microsystems at Purdue, and to apply it to study micro-scale gas damping. Formulation and verification of a finite volume method for an unsteady rarefied flow solver based on the Boltzmann ES-BGK equations in arbitrary three-dimensional geometries are presented. The solver is
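
    The BGK-type collision model relaxes the distribution toward a (generalized) local equilibrium at rate 1/τ. Stripped to a space-homogeneous, 1D-velocity toy with a plain Maxwellian target, the update looks like the sketch below; the thesis couples this to full 3D finite-volume transport with the anisotropic ES-BGK equilibrium, which is omitted here, and all parameters are invented.

    ```python
    import numpy as np

    v = np.linspace(-6.0, 6.0, 121)          # 1D velocity grid
    dv = v[1] - v[0]
    f = np.exp(-(v - 1.5)**2) + np.exp(-(v + 1.5)**2)   # bimodal initial state
    tau, dt = 0.5, 0.05                       # relaxation time, time step (assumed)

    def moments(f):
        n = f.sum() * dv
        u = (f * v).sum() * dv / n
        T = (f * (v - u)**2).sum() * dv / n
        return n, u, T

    def maxwellian(f):
        n, u, T = moments(f)
        return n / np.sqrt(2.0*np.pi*T) * np.exp(-(v - u)**2 / (2.0*T))

    n0 = moments(f)
    for _ in range(200):
        f += dt / tau * (maxwellian(f) - f)   # BGK relaxation step
    # Collision invariants (density, momentum, energy) are preserved up to
    # the quadrature error of the discrete velocity grid.
    print(n0, moments(f))
    ```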

  6. Human brain mapping: Experimental and computational approaches

    SciTech Connect

    Wood, C.C.; George, J.S.; Schmidt, D.M.; Aine, C.J.; Sanders, J.; Belliveau, J.

    1998-11-01

    This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). This project combined Los Alamos' and collaborators' strengths in noninvasive brain imaging and high performance computing to develop potential contributions to the multi-agency Human Brain Project led by the National Institute of Mental Health. The experimental component of the project emphasized the optimization of spatial and temporal resolution of functional brain imaging by combining: (a) structural MRI measurements of brain anatomy; (b) functional MRI measurements of blood flow and oxygenation; and (c) MEG measurements of time-resolved neuronal population currents. The computational component of the project emphasized development of a high-resolution 3-D volumetric model of the brain based on anatomical MRI, in which structural and functional information from multiple imaging modalities can be integrated into a single computational framework for modeling, visualization, and database representation.

  7. A Computer Code for Swirling Turbulent Axisymmetric Recirculating Flows in Practical Isothermal Combustor Geometries

    NASA Technical Reports Server (NTRS)

    Lilley, D. G.; Rhode, D. L.

    1982-01-01

    A finite-difference computer code based on primitive pressure-velocity variables was developed to predict swirling recirculating inert turbulent flows in axisymmetric combustors in general, and for application to a specific idealized combustion chamber with sudden or gradual expansion. The technique involves a staggered grid system for axial and radial velocities, a line relaxation procedure for efficient solution of the equations, a two-equation k-epsilon turbulence model, a stairstep boundary representation of the expansion flow, and realistic accommodation of swirl effects. A user's manual, dealing with the computational problem and showing how the mathematical basis and computational scheme may be translated into a computer program, is presented. A flow chart, FORTRAN IV listing, notes about various subroutines and a user's guide are supplied as an aid to prospective users of the code.
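
    The line relaxation procedure mentioned above sweeps the grid solving one tridiagonal system per grid line; its kernel is the Thomas algorithm (TDMA), sketched here in Python rather than the code's FORTRAN IV and checked against a dense solve.

    ```python
    import numpy as np

    def tdma(a, b, c, d):
        """Thomas algorithm for a tridiagonal system: a (sub-diagonal,
        a[0] unused), b (diagonal), c (super-diagonal, c[-1] unused), d (rhs)."""
        n = len(d)
        cp, dp = np.zeros(n), np.zeros(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):                       # forward elimination
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = np.zeros(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):              # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # Check against a dense solve on a small 1D diffusion-like system.
    n = 6
    a = np.r_[0.0, -np.ones(n - 1)]
    b = 2.0 * np.ones(n)
    c = np.r_[-np.ones(n - 1), 0.0]
    d = np.arange(1.0, n + 1)
    A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
    print(np.allclose(tdma(a, b, c, d), np.linalg.solve(A, d)))
    ```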

  8. Computed tomography in trauma: An atlas approach

    SciTech Connect

    Toombs, B.D.; Sandler, C.

    1986-01-01

    This book discusses computed tomography in trauma. The text is organized according to mechanism of injury and site of injury. In addition to CT, some correlation with other imaging modalities is included. Blunt trauma, penetrating trauma, complications and sequelae of trauma, and the use of other modalities are covered.

  9. An Approach to Developing Computer Catalogs

    ERIC Educational Resources Information Center

    MacDonald, Robin W.; Elrod, J. McRee

    1973-01-01

    A method of developing computer catalogs is proposed which does not require unit card conversion but rather the accumulation of data from operating programs. It is proposed that the bibliographic and finding functions of the catalog be separated, with the latter being the first automated. (8 references) (Author)

  10. Designing Your Computer Curriculum: A Process Approach.

    ERIC Educational Resources Information Center

    Wepner, Shelley; Kramer, Steven

    Four essential steps for integrating computer technology into a school district's reading curriculum--needs assessment, planning, implementation, and evaluation--are described in terms of what educators can do at the district and building level to facilitate optimal instructional conditions for students. With regard to needs assessment,…

  11. Reduced-Order Modeling: New Approaches for Computational Physics

    NASA Technical Reports Server (NTRS)

    Beran, Philip S.; Silva, Walter A.

    2001-01-01

    In this paper, we review the development of new reduced-order modeling techniques and discuss their applicability to various problems in computational physics. Emphasis is given to methods based on Volterra series representations and the proper orthogonal decomposition. Results are reported for different nonlinear systems to provide clear examples of the construction and use of reduced-order models, particularly in the multi-disciplinary field of computational aeroelasticity. Unsteady aerodynamic and aeroelastic behaviors of two-dimensional and three-dimensional geometries are described. Large increases in computational efficiency are obtained through the use of reduced-order models, thereby justifying the initial computational expense of constructing these models and motivating their use for multi-disciplinary design analysis.
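
    The proper orthogonal decomposition underlying many such reduced-order models is just an SVD of a snapshot matrix: the left singular vectors are the modes and the singular values rank their energy. A self-contained sketch on a synthetic two-mode field (not the paper's aeroelastic cases):

    ```python
    import numpy as np

    # Snapshot matrix: space (rows) x time (columns), two coherent modes + noise.
    rng = np.random.default_rng(5)
    t = np.linspace(0.0, 2.0*np.pi, 200)
    x = np.linspace(0.0, 1.0, 64)
    snapshots = (np.outer(np.sin(np.pi*x), np.sin(3*t))
                 + 0.3 * np.outer(np.sin(2*np.pi*x), np.cos(7*t))
                 + 0.01 * rng.standard_normal((64, 200)))

    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.99)) + 1
    print(f"modes for 99% energy: {r}")   # expect ~2 for this toy field
    ```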

  12. Computational Analysis of an effect of aerodynamic pressure on the side view mirror geometry

    NASA Astrophysics Data System (ADS)

    Murukesavan, P.; Mu'tasim, M. A. N.; Sahat, I. M.

    2013-12-01

    This paper describes the evaluation of aerodynamic flow effects on side mirror geometry for a passenger car using ANSYS Fluent CFD simulation software. Results from the analysis of the pressure coefficient on side view mirror designs are evaluated to analyse the unsteady forces that cause fluctuations on the mirror surface and image blurring. The fluctuation also causes drag forces that increase the overall drag coefficient, which is assumed to result in higher fuel consumption and emissions. Three features of side view mirror design were investigated with two input velocity parameters of 17 m/s and 33 m/s. Results indicate that the half-sphere design is the most effective, showing the least pressure coefficient fluctuation and the lowest drag coefficient.
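
    The drag force behind the fuel-consumption argument follows from the quadratic drag law F = C_d · A · ½ρv². A back-of-envelope sketch at the study's two inlet speeds, with an assumed mirror frontal area and drag coefficient (the paper's values are not given in the abstract):

    ```python
    # Dynamic pressure and drag force for the two test speeds.
    rho = 1.225                      # air density, kg/m^3
    A = 0.02                         # mirror frontal area, m^2 (assumed)
    cd = 0.45                        # drag coefficient (assumed)

    for v in (17.0, 33.0):           # inlet velocities from the study, m/s
        q = 0.5 * rho * v**2         # dynamic pressure, Pa
        drag = cd * A * q            # drag force, N
        print(f"v = {v:4.1f} m/s  q = {q:7.1f} Pa  drag = {drag:5.2f} N")
    ```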

  13. Elasticity and geometry: a computational model of the Heineke-Mikulicz strictureplasty.

    PubMed

    Tsamis, Alkiviadis; Pocivavsek, Luka; Vorp, David A

    2014-11-01

    Crohn's disease is a challenging inflammatory process with a propensity for focal gastro-intestinal tract inflammation and stricture. Surgically, Crohn's is often treated with resection. However, a subtype of diffuse disease with multiple strictures is treated by strictureplasty procedures in hope of avoiding short-gut syndrome. Prior work by Pocivavsek et al. defined the geometry of a Heineke-Mikulicz strictureplasty. Here, we bring this analysis one step closer to clinical and biological relevance by calculating the mechanical stresses and strains that the strictureplasty deformation generates on a model intestinal wall. The small bowel is simulated as a linearly elastic isotropic deformable cylindrical shell using finite element modeling. Data show a divergence in elastic response between the anti-mesenteric and mesenteric halves. The anti-mesenteric surface shows a bending dominated elastic response that correlates with the prior purely geometric analysis. However, the mesenteric half is not a neutral bystander during strictureplasty formation, as geometric arguments predict. Strong in-plane stretching strains develop in a rim around the image of the transverse closure, which may impact local perfusion and serve as sites of disease recurrence. Lastly, nearly all the deformation energy is stored in the central vertex stitch, placing this part at highest risk of dehiscence. This study enhances our understanding of mechanical response in complex nonlinear cylindrical geometries like the surgically manipulated intestinal tract. The developed framework serves as a platform for future addition of more complex clinically relevant parameters to our model, including real tissue properties, anisotropy, blood supply modeling, and patient-derived anatomic factors. PMID:24671519

  14. Changes in root canal geometry after preparation assessed by high-resolution computed tomography.

    PubMed

    Peters, O A; Laib, A; Göhring, T N; Barbakow, F

    2001-01-01

    Root canal morphology changes during canal preparation, and these changes may vary depending on the technique used. Such changes have been studied in vitro by measuring cross-sections of canals before and after preparation. This current study used nondestructive high-resolution scanning tomography to assess changes in the canals' paths after preparation. A microcomputed tomography scanner (cubic resolution 34 μm) was used to analyze 18 canals in 6 extracted maxillary molars. Canals were scanned before and after preparation using either K-Files, Lightspeed, or ProFile .04 rotary instruments. A special mounting device enabled precise repositioning and scanning of the specimens after preparation. Differences in surface area (ΔA, in mm²) and volume (ΔV, in mm³) of each canal before and after preparation were calculated using custom-made software. ΔV ranged from 0.64 to 2.86, with a mean of 1.61 ± 0.7, whereas ΔA varied from 0.72 to 9.66, with a mean of 4.16 ± 2.63. Mean ΔV and ΔA for the K-File, ProFile, and Lightspeed groups were 1.28 ± 0.57 and 2.58 ± 1.83; 1.79 ± 0.66 and 4.86 ± 2.53; and 1.81 ± 0.57 and 5.31 ± 2.98, respectively. Canal anatomy and the effects of preparation were further analyzed using the Structure Model Index and the Transportation of Centers of Mass. Under the conditions of this study, variations in canal geometry before preparation had more influence on the changes during preparation than the techniques themselves. Consequently, studies comparing the effects of root canal instruments on canal anatomy should also consider details of the preoperative canal geometry. PMID:11487156
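
    ΔA and ΔV are differences of surface area and enclosed volume between registered pre- and post-preparation canal surfaces. For a closed triangle mesh, both quantities follow from simple sums, sketched below and checked on a unit tetrahedron; the study's custom software and registration pipeline are not reproduced.

    ```python
    import numpy as np

    def mesh_area_volume(verts, faces):
        """Surface area and enclosed volume of a closed, consistently oriented
        triangle mesh (volume via the signed-tetrahedron sum)."""
        v0, v1, v2 = (verts[faces[:, k]] for k in range(3))
        cross = np.cross(v1 - v0, v2 - v0)
        area = 0.5 * np.linalg.norm(cross, axis=1).sum()
        volume = np.einsum("ij,ij->i", v0, np.cross(v1, v2)).sum() / 6.0
        return area, abs(volume)

    # Quick check on a unit tetrahedron (volume 1/6, area 1.5 + sqrt(3)/2).
    verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
    faces = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])
    print(mesh_area_volume(verts, faces))

    # With pre/post meshes from registered scans (hypothetical arrays):
    # dA = mesh_area_volume(v_post, f_post)[0] - mesh_area_volume(v_pre, f_pre)[0]
    ```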

  15. A declarative approach to visualizing concurrent computations

    SciTech Connect

    Roman, G.C.; Cox, K.C. )

    1989-10-01

    That visualization can play a key role in the exploration of concurrent computations is central to the ideas presented. Equally important, although given less emphasis, is the concern that the full potential of visualization may not be reached unless the art of generating beautiful pictures is rooted in a solid formal technical foundation. The authors show that program verification provides a formal framework around which such a foundation can be built. Making these ideas a practical reality will require both research and experimentation.

  16. Information theoretic approaches to multidimensional neural computations

    NASA Astrophysics Data System (ADS)

    Fitzgerald, Jeffrey D.

    Many systems in nature process information by transforming inputs from their environments into observable output states. These systems are often difficult to study because they are performing computations on multidimensional inputs with many degrees of freedom using highly nonlinear functions. The work presented in this dissertation deals with some of the issues involved with characterizing real-world input/output systems and understanding the properties of idealized systems using information theoretic methods. Using the principle of maximum entropy, a family of models are created that are consistent with certain measurable correlations from an input/output dataset but are maximally unbiased in all other respects, thereby eliminating all unjustified assumptions about the computation. In certain cases, including spiking neurons, we show that these models also minimize the mutual information. This property gives one the advantage of being able to identify the relevant input/output statistics by calculating their information content. We argue that these maximum entropy models provide a much needed quantitative framework for characterizing and understanding sensory processing neurons that are selective for multiple stimulus features. To demonstrate their usefulness, these ideas are applied to neural recordings from macaque retina and thalamus. These neurons, which primarily respond to two stimulus features, are shown to be well described using only first and second order statistics, indicating that their firing rates encode information about stimulus correlations. In addition to modeling multi-feature computations in the relevant feature space, we also show that maximum entropy models are capable of discovering the relevant feature space themselves. This technique overcomes the disadvantages of two commonly used dimensionality reduction methods and is explored using several simulated neurons, as well as retinal and thalamic recordings. Finally, we ask how neurons in a
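
    For binary spike words, the maximum entropy model consistent with means and pairwise correlations is the Ising model, and the likelihood gradient is simply the gap between data and model moments. A brute-force toy fit for three "neurons" (all data synthetic; real recordings require far more careful estimation):

    ```python
    from itertools import product
    import numpy as np

    # Synthetic binary data with an induced pairwise correlation.
    rng = np.random.default_rng(3)
    data = (rng.random((5000, 3)) < [0.2, 0.5, 0.35]).astype(float)
    data[:, 1] = np.where(rng.random(5000) < 0.7, data[:, 0], data[:, 1])

    states = np.array(list(product([0, 1], repeat=3)), float)   # all 2^3 words
    target_mean = data.mean(0)
    target_corr = (data.T @ data) / len(data)

    h = np.zeros(3)                # fields (match first moments)
    J = np.zeros((3, 3))           # couplings (match pairwise moments)
    for _ in range(2000):
        E = states @ h + np.einsum("si,ij,sj->s", states, J, states)
        p = np.exp(E); p /= p.sum()
        model_mean = p @ states
        model_corr = np.einsum("s,si,sj->ij", p, states, states)
        h += 0.1 * (target_mean - model_mean)                   # likelihood gradient
        J += 0.1 * np.triu(target_corr - model_corr, 1)
    print(np.round(model_mean - target_mean, 3))   # residual moment mismatch
    ```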

  17. Acoustic gravity waves: A computational approach

    NASA Technical Reports Server (NTRS)

    Hariharan, S. I.; Dutt, P. K.

    1987-01-01

    This paper discusses numerical solutions of a hyperbolic initial boundary value problem that arises from acoustic wave propagation in the atmosphere. Field equations are derived from the atmospheric fluid flow governed by the Euler equations. The resulting original problem is nonlinear. A first order linearized version of the problem is used for computational purposes. The main difficulty in the problem as with any open boundary problem is in obtaining stable boundary conditions. Approximate boundary conditions are derived and shown to be stable. Numerical results are presented to verify the effectiveness of these boundary conditions.
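
    The open-boundary issue can be illustrated on the one-way wave (advection) reduction, where an upwind discretization lets the pulse exit without reflection, a toy analogue of the stable absorbing conditions derived in the paper; the scheme and parameters below are generic, not the paper's.

    ```python
    import numpy as np

    # One-way wave u_t + c u_x = 0, first-order upwind, c > 0. All
    # characteristics leave through the right boundary, so no boundary
    # condition is imposed there and the pulse exits without reflecting.
    c, N = 1.0, 400
    dx = 1.0 / N
    dt = 0.8 * dx / c                      # CFL number 0.8
    x = np.linspace(0.0, 1.0, N)
    u = np.exp(-300.0 * (x - 0.3)**2)      # Gaussian pressure-like pulse

    for _ in range(int(1.0 / dt)):         # advance until the pulse has exited
        u[1:] -= c * dt / dx * (u[1:] - u[:-1])   # upwind update; u[0] is inflow
    print(f"max |u| after exit: {np.abs(u).max():.3e}")
    ```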

  18. High performance parallel computing of flows in complex geometries: I. Methods

    NASA Astrophysics Data System (ADS)

    Gourdain, N.; Gicquel, L.; Montagnac, M.; Vermorel, O.; Gazaix, M.; Staffelbach, G.; Garcia, M.; Boussuge, J.-F.; Poinsot, T.

    2009-01-01

    Efficient numerical tools coupled with high-performance computers have become a key element of the design process in the fields of energy supply and transportation. However, flow phenomena that occur in complex systems such as gas turbines and aircraft are still not well understood, mainly because of the models they require. In fact, most computational fluid dynamics (CFD) predictions as found today in industry focus on a reduced or simplified version of the real system (such as a periodic sector) and are usually solved with a steady-state assumption. This paper shows how to overcome such barriers and how such a new challenge can be addressed by developing flow solvers running on high-end computing platforms, using thousands of computing cores. Parallel strategies used by modern flow solvers are discussed with particular emphasis on mesh-partitioning, load balancing and communication. Two examples are used to illustrate these concepts: a multi-block structured code and an unstructured code. Parallel computing strategies used with both flow solvers are detailed and compared. This comparison indicates that mesh-partitioning and load balancing are more straightforward with unstructured grids than with multi-block structured meshes. However, the mesh-partitioning stage can be challenging for unstructured grids, mainly due to memory limitations of the newly developed massively parallel architectures. Finally, detailed investigations show that the impact of mesh-partitioning on the numerical CFD solutions, due to rounding errors and block splitting, may be of importance and should be accurately addressed before qualifying massively parallel CFD tools for routine industrial use.
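
    The load-balancing idea is easiest to see for a contiguous block decomposition, where cells are divided as evenly as possible across ranks; a minimal sketch follows. Real mesh partitioners of the kind discussed above must additionally minimize the communication surface between partitions, which this one-dimensional split ignores.

    ```python
    def partition(n_cells, n_ranks):
        """Contiguous block partition with near-perfect load balance:
        the first (n_cells % n_ranks) ranks each get one extra cell."""
        base, extra = divmod(n_cells, n_ranks)
        offsets = [0]
        for r in range(n_ranks):
            offsets.append(offsets[-1] + base + (1 if r < extra else 0))
        return [(offsets[r], offsets[r + 1]) for r in range(n_ranks)]

    print(partition(10, 3))   # [(0, 4), (4, 7), (7, 10)]
    ```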

  19. A Computational Approach to Competitive Range Expansions

    NASA Astrophysics Data System (ADS)

    Weber, Markus F.; Poxleitner, Gabriele; Hebisch, Elke; Frey, Erwin; Opitz, Madeleine

    2014-03-01

    Bacterial communities represent complex and dynamic ecological systems. Environmental conditions and microbial interactions determine whether a bacterial strain survives an expansion to new territory. In our work, we studied competitive range expansions in a model system of three Escherichia coli strains. In this system, a colicin producing strain competed with a colicin resistant, and with a colicin sensitive strain for new territory. Genetic engineering allowed us to tune the strains' growth rates and to study their expansion in distinct ecological scenarios (with either cyclic or hierarchical dominance). The control over growth rates also enabled us to construct and to validate a predictive computational model of the bacterial dynamics. The model rested on an agent-based, coarse-grained description of the expansion process and we conducted independent experiments on the growth of single-strain colonies for its parametrization. Furthermore, the model considered the long-range nature of the toxin interaction between strains. The integration of experimental analysis with computational modeling made it possible to quantify how the level of biodiversity depends on the interplay between bacterial growth rates, the initial composition of the inoculum, and the toxin range.
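
    A much-reduced sketch of such an agent-based competition model, three strains on a 1D frontier with a growth-rate bias and a finite-range toxin, is given below. All rates, ranges, and the 1D geometry are invented for illustration; the paper's calibrated 2D colony model is far richer.

    ```python
    import numpy as np

    # Strains: 0 = colicin producer, 1 = resistant, 2 = sensitive; -1 = empty.
    rng = np.random.default_rng(4)
    L, steps = 200, 400
    growth = np.array([0.8, 1.0, 1.2])       # producer pays a growth cost (assumed)
    front = rng.integers(0, 3, L)

    for _ in range(steps):
        # Toxin kills sensitive cells within 2 sites of a producer (assumed range).
        for p in np.flatnonzero(front == 0):
            for d in (-2, -1, 1, 2):
                j = (p + d) % L
                if front[j] == 2:
                    front[j] = -1
        # Empty sites are recolonized by a random neighbor, biased by growth rate.
        for j in np.flatnonzero(front == -1):
            nbrs = [s for s in (front[(j - 1) % L], front[(j + 1) % L]) if s >= 0]
            if nbrs:
                w = growth[nbrs] / growth[nbrs].sum()
                front[j] = rng.choice(nbrs, p=w)

    print(np.bincount(front[front >= 0], minlength=3) / L)   # strain fractions
    ```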

  20. A unified approach to computational drug discovery.

    PubMed

    Tseng, Chih-Yuan; Tuszynski, Jack

    2015-11-01

    It has been reported that a slowdown in the development of new medical therapies is affecting clinical outcomes. The FDA has thus initiated the Critical Path Initiative project investigating better approaches. We review the current strategies in drug discovery and focus on the advantages of the maximum entropy method being introduced in this area. The maximum entropy principle is derived from statistical thermodynamics and has been demonstrated to be an inductive inference tool. We propose a unified approach to drug discovery that hinges on robust information processing using entropic inductive inference. Increasingly, applications of maximum entropy in drug discovery employ this unified approach and demonstrate the usefulness of the concept in the area of pharmaceutical sciences. PMID:26189935

  1. A 3D Computational fluid dynamics model validation for candidate molybdenum-99 target geometry

    NASA Astrophysics Data System (ADS)

    Zheng, Lin; Dale, Greg; Vorobieff, Peter

    2014-11-01

    Molybdenum-99 (99Mo) is the parent product of technetium-99m (99mTc), a radioisotope used in approximately 50,000 medical diagnostic tests per day in the U.S. The primary uses of this product include detection of heart disease, cancer, study of organ structure and function, and other applications. The US Department of Energy seeks new methods for generating 99Mo without the use of highly enriched uranium, to eliminate proliferation issues and provide a domestic supply of 99mTc for medical imaging. For this project, electron accelerating technology is used by sending an electron beam through a series of 100Mo targets. During this process a large amount of heat is created, which directly affects the operating temperature dictated by the tensile stress limit of the wall material. To maintain the required temperature range, helium gas is used as a cooling agent that flows through narrow channels between the target disks. In our numerical study, we investigate the cooling performance of a series of new cooling-channel geometries. This research is supported by Los Alamos National Laboratory.
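
    A first-cut energy balance fixes the helium mass flow needed to hold a given coolant temperature rise, ahead of any CFD of the channel geometry itself. The deposited power and allowed rise below are assumed for illustration; only the helium specific heat is a physical constant.

    ```python
    # Required coolant mass flow from Q = m_dot * cp * dT.
    P_beam = 10e3          # heat deposited in the target stack, W (assumed)
    cp_He = 5193.0         # helium specific heat at constant pressure, J/(kg K)
    dT = 100.0             # allowed coolant temperature rise, K (assumed)

    m_dot = P_beam / (cp_He * dT)
    print(f"required helium mass flow: {m_dot * 1000:.1f} g/s")
    ```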

  2. Radiation characteristics of selected long wire antennas as a function of geometry using computer modeling techniques

    NASA Astrophysics Data System (ADS)

    Gillespie, Robert J., Sr.

    1986-12-01

    This thesis, sponsored by the Marine Corps Development and Education Command, Quantico, Va., examines the far field patterns of five high frequency long wire antenna configurations through the use of the Numerical Electromagnetics Code (NEC). Lossy ground and the effects of variations made to these structures are considered. The resulting far field patterns are contained in the appendix. The antenna configurations vary in length from 1.87 to 17.19 wavelengths and in their height above ground from 0.103 to 0.610 wavelengths. Variations in the antennas' end-regions include: the use of a ground rod or radial screen attached to the transmitter, terminating the far end of the antenna, and varying the shape of the transmitter from a small box (radio-sized) to a large (vehicle-sized) configuration. It is concluded that both the antenna height and length determine the far field geometry, and that end-region variations also affect the pattern, though to a lesser degree. Tables of comparative results are provided.

  3. Methods of information geometry in computational system biology (consistency between chemical and biological evolution).

    PubMed

    Astakhov, Vadim

    2009-01-01

    Interest in simulation of large-scale metabolic networks, species development, and genesis of various diseases requires new simulation techniques to accommodate the high complexity of realistic biological networks. Information geometry and topological formalisms are proposed to analyze information processes. We analyze the complexity of large-scale biological networks as well as transition of the system functionality due to modification in the system architecture, system environment, and system components. The dynamic core model is developed. The term dynamic core is used to define a set of causally related network functions. Delocalization of the dynamic core model provides a mathematical formalism to analyze migration of specific functions in biosystems which undergo structure transition induced by the environment. The term delocalization is used to describe these processes of migration. We constructed a holographic model with self-poietic dynamic cores which preserves functional properties under those transitions. Topological constraints such as Ricci flow and Pfaff dimension were found for statistical manifolds which represent biological networks. These constraints can provide insight into processes of degeneration and recovery which take place in large-scale networks. We would like to suggest that therapies that are able to effectively implement the estimated constraints will successfully adjust biological systems and recover altered functionality. Also, we mathematically formulate the hypothesis that there is a direct consistency between biological and chemical evolution. Any set of causal relations within a biological network has its dual reimplementation in the chemistry of the system environment. PMID:19623488

  4. Conceptualizing Vectors in College Geometry: A New Framework for Analysis of Student Approaches and Difficulties

    ERIC Educational Resources Information Center

    Kwon, Oh Hoon

    2012-01-01

    This dissertation documents a new way of conceptualizing vectors in college mathematics, especially in geometry. First, I will introduce three problems to show the complexity and subtlety of the construct of vectors with the classical vector representations. These highlight the need for a new framework that: (1) differentiates abstraction from a…

  5. A Pilot Study of a Cultural-Historical Approach to Teaching Geometry

    ERIC Educational Resources Information Center

    Rowlands, Stuart

    2010-01-01

    There appears to be a widespread assumption that deductive geometry is inappropriate for most learners and that they are incapable of engaging with the abstract and rule-governed intellectual processes that became the world's first fully developed and comprehensive formalised system of thought. This article discusses a curriculum initiative that…

  6. Numerical computations of natural convection heat transfer in irregular geometries: Final technical report

    SciTech Connect

    Glakpe, E.K.

    1987-01-23

    The goal of the research program at Howard University is to develop and document a general purpose computer code that can be used to obtain flow and heat transfer data for the transport or storage of spent fuel configurations. We believe that this work is relevant to DOE/OCRWM storage and transportation programs for the protection of public health and the quality of the environment. The computer code is expected to be used to support primarily the following activities: (a) to obtain heat transfer and flow data for the design of sealed storage casks for transport to, and storage at, the proposed MRS facility; (b) to obtain heat transfer and flow data for storage of spent fuel assemblies in pools or transportable metal casks at reactor sites. It is therefore proposed that the research work be continued, modifying the BODYFIT-1FE code and adding physical models and applicable equations that will simulate realistic configurations of shipping/storage casks.

  7. Computational approaches to natural product discovery

    PubMed Central

    Medema, Marnix H.; Fischbach, Michael A.

    2016-01-01

    From the earliest Streptomyces genome sequences, the promise of natural product genome mining has been captivating: genomics and bioinformatics would transform compound discovery from an ad hoc pursuit to a high-throughput endeavor. Until recently, however, genome mining has advanced natural product discovery only modestly. Here, we argue that the development of algorithms to mine the continuously increasing amounts of (meta)genomic data will enable the promise of genome mining to be realized. We review computational strategies that have been developed to identify biosynthetic gene clusters in genome sequences and predict the chemical structures of their products. We then discuss networking strategies that can systematize large volumes of genetic and chemical data, and connect genomic information to metabolomic and phenotypic data. Finally, we provide a vision of what natural product discovery might look like in the future, specifically considering long-standing questions in microbial ecology regarding the roles of metabolites in interspecies interactions. PMID:26284671

  8. Metabolomics and Diabetes: Analytical and Computational Approaches

    PubMed Central

    Sas, Kelli M.; Karnovsky, Alla; Michailidis, George

    2015-01-01

    Diabetes is characterized by altered metabolism of key molecules and regulatory pathways. The phenotypic expression of diabetes and associated complications encompasses complex interactions between genetic, environmental, and tissue-specific factors that require an integrated understanding of perturbations in the network of genes, proteins, and metabolites. Metabolomics attempts to systematically identify and quantitate small molecule metabolites from biological systems. The recent rapid development of a variety of analytical platforms based on mass spectrometry and nuclear magnetic resonance has enabled identification of complex metabolic phenotypes. Continued development of bioinformatics and analytical strategies has facilitated the discovery of causal links in understanding the pathophysiology of diabetes and its complications. Here, we summarize the metabolomics workflow, including analytical, statistical, and computational tools, highlight recent applications of metabolomics in diabetes research, and discuss the challenges in the field. PMID:25713200

  9. Computational Approaches for Understanding Energy Metabolism

    PubMed Central

    Shestov, Alexander A; Barker, Brandon; Gu, Zhenglong; Locasale, Jason W

    2013-01-01

    There has been a surge of interest in understanding the regulation of metabolic networks involved in disease in recent years. Quantitative models are increasingly being used to interrogate the metabolic pathways that are contained within this complex disease biology. At the core of this effort is the mathematical modeling of central carbon metabolism involving glycolysis and the citric acid cycle (referred to as energy metabolism). Here we discuss several approaches used to quantitatively model metabolic pathways relating to energy metabolism and discuss their formalisms, successes, and limitations. PMID:23897661
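
    To make the modeling formalism concrete, here is a minimal kinetic sketch of a two-step pathway with Michaelis-Menten rate laws. The species, steps, and parameter values are hypothetical; published models of central carbon metabolism couple dozens of such rate laws.

        from scipy.integrate import solve_ivp

        # Toy two-step pathway (glucose -> G6P -> downstream) with
        # Michaelis-Menten fluxes; all parameters are hypothetical.
        def mm(v_max, km, s):
            return v_max * s / (km + s)

        def rhs(t, y):
            glc, g6p = y
            v1 = mm(1.0, 0.5, glc)    # hexokinase-like step
            v2 = mm(0.8, 0.3, g6p)    # downstream glycolytic flux
            return [-v1, v1 - v2]

        sol = solve_ivp(rhs, (0.0, 20.0), [5.0, 0.0])
        print("final [glucose, G6P]:", sol.y[:, -1].round(3))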

  10. One-eyed stereo: a general approach to modeling 3-d scene geometry.

    PubMed

    Strat, T M; Fischler, M A

    1986-06-01

    A single two-dimensional image is an ambiguous representation of the three-dimensional world: many different scenes could have produced the same image. Yet the human visual system is extremely successful at recovering a qualitatively correct depth model from this type of representation. Workers in the field of computational vision have devised a number of distinct schemes that attempt to emulate this human capability; these schemes are collectively known as "shape from..." methods (e.g., shape from shading, shape from texture, or shape from contour). In this paper we contend that the distinct assumptions made in each of these schemes are tantamount to providing a second (virtual) image of the original scene, and that each of these approaches can be translated into a conventional stereo formalism. In particular, we show that it is frequently possible to structure the problem as one of recovering depth from a stereo pair consisting of the supplied perspective image (the original image) and a hypothesized orthographic image (the virtual image). We present a new algorithm of the form required to accomplish this type of stereo reconstruction task. PMID:21869368

  11. Hyperdimensional computing approach to word sense disambiguation.

    PubMed

    Berster, Bjoern-Toby; Goodwin, J Caleb; Cohen, Trevor

    2012-01-01

    Coping with the ambiguous meanings of words has long been a hurdle for information retrieval and natural language processing systems. This paper presents a new word sense disambiguation approach using high-dimensional binary vectors, which encode meanings of words based on the different contexts in which they occur. In our approach, a randomly constructed vector is assigned to each ambiguous term, and another to each sense of this term. In the context of a sense-annotated training set, a reversible vector transformation is used to combine these vectors, such that both the term and the sense assigned to a context in which the term occurs are encoded into vectors representing the surrounding terms in this context. When a new context is encountered, the information required to disambiguate this term is extracted from the trained semantic vectors for the terms in this context by reversing the vector transformation to recover the correct sense of the term. On repeated experiments using ten-fold cross-validation and a standard test set, we obtained results comparable to the best obtained in previous studies. These results demonstrate the potential of our methodology, and suggest directions for future research. PMID:23304389
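
    The binding/unbinding idea can be sketched in a few lines: random binary hypervectors, XOR as the reversible transformation, and Hamming distance for comparison. This is a simplified variant for illustration, not the authors' exact training procedure, and the words and senses below are made up.

        import numpy as np

        # Simplified hyperdimensional sketch: XOR binding is its own inverse,
        # so a sense encoded into context vectors can be recovered later.
        rng = np.random.default_rng(0)
        D = 10_000
        vocab  = {w: rng.integers(0, 2, D, dtype=np.uint8)
                  for w in ["bank", "money", "deposit", "shore"]}
        senses = {s: rng.integers(0, 2, D, dtype=np.uint8)
                  for s in ["bank/finance", "bank/river"]}

        def encode(term, sense, context):
            # Bind term to sense, superpose into context vectors (majority vote).
            bound = vocab[term] ^ senses[sense]
            votes = sum((bound ^ vocab[w]).astype(int) for w in context)
            return (votes > len(context) / 2).astype(np.uint8)

        trace = encode("bank", "bank/finance", ["money", "deposit", "shore"])
        # Reverse the transformation: XOR out the term and one context word;
        # the annotated sense is the nearest sense vector (smallest Hamming).
        probe = trace ^ vocab["bank"] ^ vocab["money"]
        for s, v in senses.items():
            print(s, "hamming:", round(np.count_nonzero(probe != v) / D, 3))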

  12. Computational study of pulsatile blood flow in prototype vessel geometries of coronary segments.

    PubMed

    Chaniotis, A K; Kaiktsis, L; Katritsis, D; Efstathopoulos, E; Pantos, I; Marmarellis, V

    2010-01-01

    The spatial and temporal distributions of wall shear stress (WSS) in prototype vessel geometries of coronary segments are investigated via numerical simulation, and the potential association with vascular disease and specifically atherosclerosis and plaque rupture is discussed. In particular, simulation results of WSS spatio-temporal distributions are presented for pulsatile, non-Newtonian blood flow conditions for: (a) curved pipes with different curvatures, and (b) bifurcating pipes with different branching angles and flow division. The effects of non-Newtonian flow on WSS (compared to Newtonian flow) are found to be small at Reynolds numbers representative of blood flow in coronary arteries. Specific preferential sites of average low WSS (and likely atherogenesis) were found at the outer regions of the bifurcating branches just after the bifurcation, and at the outer-entry and inner-exit flow regions of the curved vessel segment. The drop in WSS was more dramatic at the bifurcating vessel sites (less than 5% of the pre-bifurcation value). These sites were also near rapid gradients of WSS changes in space and time - a fact that increases the risk of rupture of plaque likely to develop at these sites. The time variation of the WSS spatial distributions was very rapid around the start and end of the systolic phase of the cardiac cycle, when strong fluctuations of intravascular pressure were also observed. These rapid and strong changes of WSS and pressure coincide temporally with the greatest flexion and mechanical stresses induced in the vessel wall by myocardial motion (ventricular contraction). The combination of these factors may increase the risk of plaque rupture and thrombus formation at these sites. PMID:20400349
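
    As a point of reference for such simulations, the steady Newtonian baseline is a one-line formula: the Poiseuille wall shear stress τ_w = 4μQ/(πR³). The values below are typical textbook magnitudes for a coronary-sized vessel, not data from the study.

        import math

        # Poiseuille wall shear stress tau_w = 4*mu*Q/(pi*R^3); values are
        # illustrative magnitudes only.
        mu = 3.5e-3    # blood viscosity [Pa s], Newtonian approximation
        Q  = 1.0e-6    # volumetric flow rate [m^3/s] (~60 mL/min)
        R  = 1.5e-3    # vessel radius [m]
        tau_w = 4.0 * mu * Q / (math.pi * R**3)
        print(f"baseline wall shear stress ~ {tau_w:.2f} Pa")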

  13. Numerical Computation of Sensitivities and the Adjoint Approach

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael

    1997-01-01

    We discuss the numerical computation of sensitivities via the adjoint approach in optimization problems governed by differential equations. We focus on the adjoint problem in its weak form. We show how one can avoid some of the problems with the adjoint approach, such as deriving suitable boundary conditions for the adjoint equation. We discuss the convergence of numerical approximations of the costate computed via the weak form of the adjoint problem and show the significance for the discrete adjoint problem.
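
    The practical payoff of the adjoint approach is easiest to see in the discrete setting: for A(p)u = b with objective J(u), a single adjoint solve yields the gradient with respect to every parameter. A minimal sketch with a made-up one-parameter system and a finite-difference check:

        import numpy as np

        # Discrete adjoint: J(u) = 0.5*||u - u_t||^2 subject to A(p) u = b.
        # With A^T lam = (u - u_t), the gradient is dJ/dp = -lam^T (dA/dp) u.
        rng = np.random.default_rng(1)
        n = 5
        A0  = 2.0 * np.eye(n)
        B   = 0.1 * rng.standard_normal((n, n))   # dA/dp for a scalar p
        b   = rng.standard_normal(n)
        u_t = rng.standard_normal(n)

        solve = lambda p: np.linalg.solve(A0 + p * B, b)
        J     = lambda p: 0.5 * np.sum((solve(p) - u_t) ** 2)

        p = 0.3
        u = solve(p)
        lam = np.linalg.solve((A0 + p * B).T, u - u_t)   # one adjoint solve
        grad_adjoint = -lam @ (B @ u)

        eps = 1e-6
        grad_fd = (J(p + eps) - J(p - eps)) / (2 * eps)  # agreement check
        print(grad_adjoint, grad_fd)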

  14. Starting Computer Science Using C++ with Objects: A Workable Approach.

    ERIC Educational Resources Information Center

    Connolly, Mary V.

    Saint Mary's College (Indiana) offers a minor program in computer science. The program's introductory computer science class traditionally taught Pascal. The decision to change the introductory programming language to C++ with an object oriented approach was made when it became clear that there were good texts available for beginning students.…

  15. A Social Constructivist Approach to Computer-Mediated Instruction.

    ERIC Educational Resources Information Center

    Pear, Joseph J.; Crone-Todd, Darlene E.

    2002-01-01

    Describes a computer-mediated teaching system called computer-aided personalized system of instruction (CAPSI) that incorporates a social constructivist approach, maintaining that learning occurs primarily through a socially interactive process. Discusses use of CAPSI in an undergraduate course at the University of Manitoba that showed students…

  16. Computational fluid dynamics in ventilation: Practical approach

    NASA Astrophysics Data System (ADS)

    Fontaine, J. R.

    The potential of computational fluid dynamics (CFD) for conceiving ventilation systems is shown through the simulation of five practical cases. The following examples are considered: capture of pollutants on a surface treating tank equipped with a unilateral suction slot in the presence of a disturbing air draft opposed to suction; dispersion of solid aerosols inside fume cupboards; performance comparison of two general ventilation systems in a silkscreen printing workshop; ventilation of a large open painting area; and oil fog removal inside a mechanical engineering workshop. Whereas the first two problems are analyzed through two-dimensional numerical simulations, the three other cases require three-dimensional modeling. For the surface treating tank case, numerical results are compared to laboratory experiment data. All simulations are carried out using EOL, a CFD software package specially devised to deal with air quality problems in industrial ventilated premises. It contains many analysis tools to interpret the results in terms familiar to the industrial hygienist. Extensive experimental work has been undertaken to validate the predictions of EOL for ventilation flows.
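
    The transport physics such a code resolves in three dimensions can be illustrated with a one-dimensional advection-diffusion sketch of a pollutant concentration. The velocity and eddy diffusivity below are illustrative, and the scheme is the simplest explicit upwind/central discretization, not anything from EOL.

        import numpy as np

        # 1-D advection-diffusion of a pollutant slug on a periodic domain.
        nx, L = 200, 10.0
        dx = L / nx
        u, D = 0.5, 0.05                 # draft velocity [m/s], diffusivity [m^2/s]
        dt = 0.4 * min(dx / u, dx * dx / (2 * D))   # explicit stability margin

        c = np.zeros(nx)
        c[10:20] = 1.0                   # initial slug near the source

        for _ in range(500):
            adv = -u * (c - np.roll(c, 1)) / dx                       # upwind, u > 0
            dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
            c += dt * (adv + dif)

        print(f"peak concentration after transport: {c.max():.3f}")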

  17. Multivariate analysis: A statistical approach for computations

    NASA Astrophysics Data System (ADS)

    Michu, Sachin; Kaushik, Vandana

    2014-10-01

    Multivariate analysis is a statistical approach commonly used in automotive diagnosis and education, for evaluating clusters in finance, and more recently in the health-related professions. The objective of the paper is to provide a detailed exploratory discussion of factor analysis (FA) in image retrieval and correlation analysis (CA) of network traffic. Image retrieval methods aim to retrieve relevant images from a collected database, based on their content. The problem is made more difficult by the high dimension of the variable space in which the images are represented. Multivariate correlation analysis provides an anomaly detection and analysis method based on the correlation coefficient matrix. Anomalous behaviors in the network include various attacks on the network, such as DDoS attacks and network scanning.
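
    The correlation-based anomaly idea can be sketched directly: learn a baseline correlation coefficient matrix from normal traffic windows, then flag windows whose correlation structure drifts too far from it. The three traffic features, the synthetic data, and the threshold rule below are all assumptions for illustration.

        import numpy as np

        # Correlation-matrix anomaly detection on windows of traffic features.
        rng = np.random.default_rng(0)
        cov = [[1.0, 0.8, 0.2], [0.8, 1.0, 0.3], [0.2, 0.3, 1.0]]
        normal = [rng.multivariate_normal([0, 0, 0], cov, size=100)
                  for _ in range(30)]                       # "normal" windows

        corr = lambda X: np.corrcoef(X, rowvar=False)
        baseline = np.mean([corr(w) for w in normal], axis=0)
        score = lambda X: np.linalg.norm(corr(X) - baseline, ord="fro")

        scores = [score(w) for w in normal]
        threshold = np.mean(scores) + 3 * np.std(scores)

        attack = rng.standard_normal((100, 3))   # e.g. a scan breaks correlations
        print("anomaly!" if score(attack) > threshold else "normal")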

  18. Geometry Shapes Propagation: Assessing the Presence and Absence of Cortical Symmetries through a Computational Model of Cortical Spreading Depression

    PubMed Central

    Kroos, Julia M.; Diez, Ibai; Cortes, Jesus M.; Stramaglia, Sebastiano; Gerardo-Giorda, Luca

    2016-01-01

    Cortical spreading depression (CSD), a depolarization wave which originates in the visual cortex and travels toward the frontal lobe, has been suggested to be one neural correlate of aura migraine. To date, little is known about the mechanisms which can trigger or stop aura migraine. Here, to shed some light on this problem and, under the hypothesis that CSD might mediate aura migraine, we aim to study different aspects favoring or disfavoring the propagation of CSD. In particular, by using a computational neuronal model distributed throughout a realistic cortical mesh, we study the role that the geometry has in shaping CSD. Our results are two-fold: first, we found significant differences in the propagation traveling patterns of CSD, both intra and inter-hemispherically, revealing important asymmetries in the propagation profile. Second, we developed methods able to identify brain regions featuring a peculiar behavior during CSD propagation. Our study reveals dynamical aspects of CSD, which, if applied to subject-specific cortical geometry, might shed some light on how to differentiate between healthy subjects and those suffering migraine. PMID:26869913
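
    The kind of excitable-medium dynamics involved can be illustrated with a toy FitzHugh-Nagumo wave diffusing on a one-dimensional ring; geometry enters only through the discrete Laplacian, which on a real cortical mesh would carry the curvature information the paper studies. Parameters are illustrative, not the paper's neuronal model.

        import numpy as np

        # FitzHugh-Nagumo pulse on a ring: a minimal excitable-medium wave.
        n, steps, dt = 200, 4000, 0.05
        a, b, eps, D = 0.1, 0.5, 0.02, 1.0

        v = np.zeros(n); w = np.zeros(n)
        v[:5] = 1.0                      # local stimulus triggers the wave
        lap = lambda x: np.roll(x, 1) - 2 * x + np.roll(x, -1)

        excited = np.zeros(n, dtype=bool)
        for _ in range(steps):
            dv = v * (v - a) * (1 - v) - w + D * lap(v)
            dw = eps * (v - b * w)
            v += dt * dv
            w += dt * dw
            excited |= v > 0.5           # record which nodes the wave reached

        print(f"{excited.mean():.0%} of ring nodes excited by the traveling wave")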

  19. Aluminium in Biological Environments: A Computational Approach

    PubMed Central

    Mujika, Jon I; Rezabal, Elixabete; Mercero, Jose M; Ruipérez, Fernando; Costa, Dominique; Ugalde, Jesus M; Lopez, Xabier

    2014-01-01

    The increased availability of aluminium in biological environments, due to human intervention in the last century, raises concerns about the effects that this so far “excluded from biology” metal might have on living organisms. Consequently, the bioinorganic chemistry of aluminium has emerged as a very active field of research. This review will focus on our contributions to this field, based on computational studies that can yield an understanding of aluminium biochemistry at a molecular level. Aluminium can interact and be stabilized in biological environments by complexing with both low molecular mass chelants and high molecular mass peptides. The speciation of the metal is, nonetheless, dictated by the hydrolytic species dominant in each case and which vary according to the pH condition of the medium. In blood, citrate and serum transferrin are identified as the main low molecular mass and high molecular mass molecules interacting with aluminium. The complexation of aluminium to citrate and the subsequent changes exerted on the deprotonation pathways of its titratable groups will be discussed along with the mechanisms for the intake and release of aluminium in serum transferrin at two pH conditions, physiological neutral and endosomatic acidic. Aluminium can substitute other metals, in particular magnesium, in protein buried sites and trigger conformational disorder and alteration of the protonation states of the protein's sidechains. A detailed account of the interaction of aluminium with proteic sidechains will be given. Finally, it will be described how aluminium can exert oxidative stress by stabilizing superoxide radicals either as mononuclear aluminium or clustered in boehmite. The possibility of promotion of the Fenton reaction, and production of hydroxyl radicals, will also be discussed. PMID:24757505

  1. A bionic approach to mathematical modeling the fold geometry of deployable reflector antennas on satellites

    NASA Astrophysics Data System (ADS)

    Feng, C. M.; Liu, T. S.

    2014-10-01

    Inspired by biology, this study presents a method for designing the fold geometry of deployable reflectors. Since the space available inside rockets for transporting satellites with reflector antennas is typically cylindrical in shape, and its cross-sectional area is considerably smaller than that of the reflector antenna after deployment, the cross-sectional area of the folded reflector must be smaller than the available rocket interior space. Membrane reflectors in aerospace are a type of lightweight structure that can be packaged compactly. To design membrane reflectors from the perspective of deployment processes, bionic applications from morphological changes of plants are investigated. Creating biologically inspired reflectors, this paper deals with the fold geometry of reflectors that imitate flower buds. This study uses a mathematical formulation to describe the geometric profiles of flower buds. Based on the formulation, new designs for deployable membrane reflectors derived from bionics are proposed. Adjusting parameters in the formulation of these designs leads to decreases in reflector area before deployment.

  2. A Pilot Study of a Cultural-Historical Approach to Teaching Geometry

    NASA Astrophysics Data System (ADS)

    Rowlands, Stuart

    2010-01-01

    There appears to be a widespread assumption that deductive geometry is inappropriate for most learners and that they are incapable of engaging with the abstract and rule-governed intellectual processes that became the world’s first fully developed and comprehensive formalised system of thought. This article discusses a curriculum initiative that aims to ‘bring to life’ the major transformative (primary) events in the history of Greek geometry, aims to encourage a meta-discourse that can develop a reflective consciousness and aims to provide an opportunity for the induction into the formalities of proof and to engage with the abstract. The results of a pilot study to see whether 14-15 year old ‘mixed ability’ and 15-16 year old ‘gifted and talented’ students can be meaningfully engaged with two such transformative events are discussed.

  3. Computing 3-D steady supersonic flow via a new Lagrangian approach

    NASA Technical Reports Server (NTRS)

    Loh, C. Y.; Liou, M.-S.

    1993-01-01

    The new Lagrangian method introduced by Loh and Hui (1990) is extended for 3-D steady supersonic flow computation. Details of the conservation form, the implementation of the local Riemann solver, and the Godunov and the high resolution TVD schemes are presented. The new approach is robust yet accurate, capable of handling complicated geometry and reactions between discontinuous waves. It keeps all the advantages claimed in the 2-D method of Loh and Hui, e.g., crisp resolution for a slip surface (contact discontinuity) and automatic grid generation along the stream.

  4. Computation of leading edge film cooling from a CONSOLE geometry (CONverging Slot hOLE)

    NASA Astrophysics Data System (ADS)

    Guelailia, A.; Khorsi, A.; Hamidou, M. K.

    2016-01-01

    The aim of this study is to investigate the effect of mass flow rate on film cooling effectiveness and heat transfer over a gas turbine rotor blade with three staggered rows of shower-head holes, which are inclined at 30° to the spanwise direction and are normal to the streamwise direction on the blade. To improve film cooling effectiveness, the standard cylindrical holes located on the leading edge region are replaced with converging slot holes (consoles). ANSYS CFX has been used for this computational simulation. The turbulence is approximated by a k-ɛ model. Detailed film effectiveness distributions are presented for different mass flow rates. The numerical results are compared with experimental data.

  5. Computational approach to the study of thermal spin crossover phenomena

    SciTech Connect

    Rudavskyi, Andrii; Broer, Ria; Sousa, Carmen

    2014-05-14

    The key parameters associated with the thermally induced spin crossover process have been calculated for a series of Fe(II) complexes with mono-, bi-, and tridentate ligands. Combining density functional theory calculations for the geometries and normal vibrational modes with highly correlated wave function methods for the energies allows us to accurately compute the entropy variation associated with the spin transition and the zero-point corrected energy difference between the low- and high-spin states. From these values, the transition temperature, T_1/2, is estimated for different compounds.
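
    The last step is a one-line calculation: at the transition, ΔG = ΔH - TΔS = 0, so T_1/2 = ΔH/ΔS. The numbers below are illustrative magnitudes for Fe(II) spin-crossover complexes, not values from the paper.

        # T_1/2 from the computed energetics: Delta_G = 0 at the transition.
        dH = 16e3    # zero-point-corrected HS-LS energy difference [J/mol] (illustrative)
        dS = 80.0    # entropy variation on spin transition [J/(mol K)] (illustrative)
        print(f"T_1/2 ~ {dH / dS:.0f} K")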

  6. A correlative microscopy approach relates microtubule behaviour, local organ geometry, and cell growth at the Arabidopsis shoot apical meristem

    PubMed Central

    Burian, Agata; Uyttewaal, Magalie

    2013-01-01

    Cortical microtubules (CMTs) are often aligned in a particular direction in individual cells or even in groups of cells and play a central role in the definition of growth anisotropy. How the CMTs themselves are aligned is not well known, but two hypotheses have been proposed. According to the first hypothesis, CMTs align perpendicular to the maximal growth direction, and, according to the second, CMTs align parallel to the maximal stress direction. Since both hypotheses were formulated on the basis of mainly qualitative assessments, the link between CMT organization, organ geometry, and cell growth is revisited using a quantitative approach. For this purpose, CMT orientation, local curvature, and growth parameters for each cell were measured in the growing shoot apical meristem (SAM) of Arabidopsis thaliana. Using this approach, it has been shown that stable CMTs tend to be perpendicular to the direction of maximal growth in cells at the SAM periphery, but parallel in the cells at the boundary domain. When examining the local curvature of the SAM surface, no strict correlation between curvature and CMT arrangement was found, which implies that SAM geometry, and presumed geometry-derived stress distribution, is not sufficient to prescribe the CMT orientation. However, a better match between stress and CMTs was found when mechanical stress derived from differential growth was also considered. PMID:24153420

  7. What is Intrinsic Motivation? A Typology of Computational Approaches

    PubMed Central

    Oudeyer, Pierre-Yves; Kaplan, Frederic

    2007-01-01

    Intrinsic motivation, centrally involved in spontaneous exploration and curiosity, is a crucial concept in developmental psychology. It has been argued to be a crucial mechanism for open-ended cognitive development in humans, and as such has gathered growing interest from developmental roboticists in recent years. The goal of this paper is threefold. First, it provides a synthesis of the different approaches to intrinsic motivation in psychology. Second, by interpreting these approaches in a computational reinforcement learning framework, we argue that they are not operational and even sometimes inconsistent. Third, we set the ground for a systematic operational study of intrinsic motivation by presenting a formal typology of possible computational approaches. This typology is partly based on existing computational models, but also presents new ways of conceptualizing intrinsic motivation. We argue that this kind of computational typology might be useful for opening new avenues for research both in psychology and developmental robotics. PMID:18958277

  8. A new approach for fault identification in computer networks

    NASA Astrophysics Data System (ADS)

    Zhao, Dong; Wang, Tao

    2004-04-01

    Effective management of computer networks has become an increasingly difficult job because of the rapid development of network systems. Fault identification means finding where the problem in the network is and what it is. Data mining generally refers to the process of extracting models from large stores of data, and data mining techniques can assist in the fault identification task. Existing approaches to fault identification are introduced and a new approach is proposed. This approach improves on the MSDD algorithm but needs more computation, so some new techniques are used to increase its efficiency.

  9. Computational analysis of a rarefied hypersonic flow over combined gap/step geometries

    NASA Astrophysics Data System (ADS)

    Leite, P. H. M.; Santos, W. F. N.

    2015-06-01

    This work describes a computational analysis of a hypersonic flow over a combined gap/step configuration at zero degree angle of attack, in chemical equilibrium and thermal nonequilibrium. Effects on the flowfield structure due to changes on the step frontal-face height have been investigated by employing the Direct Simulation Monte Carlo (DSMC) method. The work focuses the attention of designers of hypersonic configurations on the fundamental parameter of surface discontinuity, which can have an important impact on even initial designs. The results highlight the sensitivity of the primary flowfield properties, velocity, density, pressure, and temperature due to changes on the step frontal-face height. The analysis showed that the upstream disturbance in the gap/step configuration increased with increasing the frontal-face height. In addition, it was observed that the separation region for the gap/step configuration increased with increasing the step frontal-face height. It was found that density and pressure for the gap/step configuration dramatically increased inside the gap as compared to those observed for the gap configuration, i.e., a gap without a step.

  10. Helical gears with circular arc teeth: Generation, geometry, precision and adjustment to errors, computer aided simulation of conditions of meshing and bearing contact

    NASA Technical Reports Server (NTRS)

    Litvin, Faydor L.; Tsay, Chung-Biau

    1987-01-01

    The authors have proposed a method for the generation of circular arc helical gears which is based on the application of standard equipment, worked out all aspects of the geometry of the gears, proposed methods for the computer aided simulation of conditions of meshing and bearing contact, investigated the influence of manufacturing and assembly errors, and proposed methods for the adjustment of gears to these errors. The results of computer aided solutions are illustrated with computer graphics.

  11. Propagation of computer virus both across the Internet and external computers: A complex-network approach

    NASA Astrophysics Data System (ADS)

    Gan, Chenquan; Yang, Xiaofan; Liu, Wanping; Zhu, Qingyi; Jin, Jian; He, Li

    2014-08-01

    Based on the assumption that external computers (particularly, infected external computers) are connected to the Internet, and by considering the influence of the Internet topology on computer virus spreading, this paper establishes a novel computer virus propagation model with a complex-network approach. This model possesses a unique (viral) equilibrium which is globally attractive. Some numerical simulations are also given to illustrate this result. Further study shows that the computers with higher node degrees are more susceptible to infection than those with lower node degrees. In this regard, some appropriate protective measures are suggested.
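
    The qualitative degree effect reported above is easy to reproduce with a discrete-time SIS simulation on a scale-free graph; the network size and the infection/recovery rates below are illustrative, not the paper's model parameters.

        import numpy as np
        import networkx as nx

        # SIS epidemic on a Barabasi-Albert graph: high-degree nodes end up
        # infected more often, echoing the paper's qualitative finding.
        rng = np.random.default_rng(0)
        G = nx.barabasi_albert_graph(2000, 3, seed=0)
        A = nx.to_numpy_array(G)
        deg = A.sum(axis=1)

        beta, gamma = 0.05, 0.1
        infected = rng.random(len(G)) < 0.01      # seed infections

        for _ in range(400):
            pressure = A @ infected               # infected neighbours per node
            p_inf = 1 - (1 - beta) ** pressure
            new_inf = ~infected & (rng.random(len(G)) < p_inf)
            recover = infected & (rng.random(len(G)) < gamma)
            infected = (infected | new_inf) & ~recover

        high = deg >= np.quantile(deg, 0.9)
        print(f"prevalence: high-degree {infected[high].mean():.2f}, "
              f"low-degree {infected[~high].mean():.2f}")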

  12. Ultrasonic approach for formation of erbium oxide nanoparticles with variable geometries.

    PubMed

    Radziuk, Darya; Skirtach, André; Gessner, Andre; Kumke, Michael U; Zhang, Wei; Möhwald, Helmuth; Shchukin, Dmitry

    2011-12-01

    Ultrasound (20 kHz, 29 W·cm⁻²) is employed to form three types of erbium oxide nanoparticles in the presence of multiwalled carbon nanotubes as a template material in water. The nanoparticles are (i) erbium carboxioxide nanoparticles deposited on the external walls of multiwalled carbon nanotubes and Er₂O₃ in the bulk with (ii) hexagonal and (iii) spherical geometries. Each type of ultrasonically formed nanoparticle reveals Er³⁺ photoluminescence from crystal lattice. The main advantage of the erbium carboxioxide nanoparticles on the carbon nanotubes is the electromagnetic emission in the visible region, which is new and not examined up to the present date. On the other hand, the photoluminescence of hexagonal erbium oxide nanoparticles is long-lived (μs) and enables the higher energy transition (⁴S₃/₂ → ⁴I₁₅/₂), which is not observed for spherical nanoparticles. Our work is unique because it combines for the first time spectroscopy of Er³⁺ electronic transitions in the host crystal lattices of nanoparticles with the geometry established by ultrasound in aqueous solution of carbon nanotubes employed as a template material. The work can be of great interest for "green" chemistry synthesis of photoluminescent nanoparticles in water. PMID:22022886

  13. An approach to computing direction relations between separated object groups

    NASA Astrophysics Data System (ADS)

    Yan, H.; Wang, Z.; Li, J.

    2013-06-01

    Direction relations between object groups play an important role in qualitative spatial reasoning, spatial computation and spatial recognition. However, none of the existing models can be used to compute direction relations between object groups. To fill this gap, an approach to computing direction relations between separated object groups is proposed in this paper, which is theoretically based on Gestalt principles and the idea of multi-directions. The approach firstly triangulates the two object groups; and then it constructs the Voronoi diagram between the two groups using the triangular network; after this, the normal of each Voronoi edge is calculated, and the quantitative expression of the direction relations is constructed; finally, the quantitative direction relations are transformed into qualitative ones. The psychological experiments show that the proposed approach can obtain direction relations both between two single objects and between two object groups, and the results are correct from the point of view of spatial cognition.

  14. An approach to computing direction relations between separated object groups

    NASA Astrophysics Data System (ADS)

    Yan, H.; Wang, Z.; Li, J.

    2013-09-01

    Direction relations between object groups play an important role in qualitative spatial reasoning, spatial computation and spatial recognition. However, none of the existing models can be used to compute direction relations between object groups. To fill this gap, an approach to computing direction relations between separated object groups is proposed in this paper, which is theoretically based on Gestalt principles and the idea of multi-directions. The approach firstly triangulates the two object groups, and then it constructs the Voronoi diagram between the two groups using the triangular network. After this, the normal of each Voronoi edge is calculated, and the quantitative expression of the direction relations is constructed. Finally, the quantitative direction relations are transformed into qualitative ones. The psychological experiments show that the proposed approach can obtain direction relations both between two single objects and between two object groups, and the results are correct from the point of view of spatial cognition.
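
    For contrast with the Voronoi-based model, the naive baseline it improves on reduces each group to a centroid and bins a single bearing into eight cones. The sketch below implements only that baseline, not the paper's method, and the coordinates are made up.

        import numpy as np

        # Naive single-direction baseline: centroid bearing binned to 8 cones.
        # The paper's Voronoi-edge-normal approach captures what this misses:
        # multi-directional relations between extended groups.
        LABELS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

        def centroid_direction(group_a, group_b):
            a = np.mean(group_a, axis=0)
            b = np.mean(group_b, axis=0)
            ang = np.arctan2(b[1] - a[1], b[0] - a[0])   # bearing a -> b
            return LABELS[int(np.round(ang / (np.pi / 4))) % 8]

        a = np.array([[0, 0], [1, 0], [0, 1]])
        b = np.array([[5, 5], [6, 5], [5, 6]])
        print(centroid_direction(a, b))   # -> NE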

  15. A tale of three bio-inspired computational approaches

    NASA Astrophysics Data System (ADS)

    Schaffer, J. David

    2014-05-01

    I will provide a high-level walk-through of three computational approaches derived from Nature. First, evolutionary computation, which implements what we may call the "mother of all adaptive processes"; some variants on the basic algorithms will be sketched, along with lessons I have gleaned from three decades of working with EC. Second, neural networks, computational approaches long studied as possible ways to make "thinking machines", an old dream, and based upon the only known existing example of intelligence; I will also give a brief overview of attempts to combine these two approaches, which some hope will allow us to evolve machines we could never hand-craft. Finally, I will touch on artificial immune systems, Nature's highly sophisticated defense mechanism, which has emerged in two major stages, the innate and the adaptive immune systems. This technology is finding applications in the cyber security world.

  16. Source coding for transmission of reconstructed dynamic geometry: a rate-distortion-complexity analysis of different approaches

    NASA Astrophysics Data System (ADS)

    Mekuria, Rufael N.; Cesar, Pablo; Bulterman, Dick C. A.

    2014-09-01

    Live 3D reconstruction of a human as a 3D mesh with commodity electronics is becoming a reality. Immersive applications (i.e. cloud gaming, tele-presence) benefit from effective transmission of such content over a bandwidth limited link. In this paper we outline different approaches for compressing live reconstructed mesh geometry based on distributing mesh reconstruction functions between sender and receiver. We evaluate rate-performance-complexity of different configurations. First, we investigate 3D mesh compression methods (i.e. dynamic/static) from MPEG-4. Second, we evaluate the option of using octree based point cloud compression and receiver side surface reconstruction.

  17. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1992-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  18. Sensing and perception: Connectionist approaches to subcognitive computing

    NASA Technical Reports Server (NTRS)

    Skrrypek, J.

    1987-01-01

    New approaches to machine sensing and perception are presented. The motivation for crossdisciplinary studies of perception in terms of AI and neurosciences is suggested. The question of computing architecture granularity as related to global/local computation underlying perceptual function is considered and examples of two environments are given. Finally, the examples of using one of the environments, UCLA PUNNS, to study neural architectures for visual function are presented.

  19. A geometry-based particle filtering approach to white matter tractography.

    PubMed

    Savadjiev, Peter; Rathi, Yogesh; Malcolm, James G; Shenton, Martha E; Westin, Carl-Fredrik

    2010-01-01

    We introduce a fibre tractography framework based on a particle filter which estimates a local geometrical model of the underlying white matter tract, formulated as a 'streamline flow' using generalized helicoids. The method is not dependent on the diffusion model, and is applicable to diffusion tensor (DT) data as well as to high angular resolution reconstructions. The geometrical model allows for a robust inference of local tract geometry, which, in the context of the causal filter estimation, guides tractography through regions with partial volume effects. We validate the method on synthetic data and present results on two types of in vivo data: diffusion tensors and a spherical harmonic reconstruction of the fibre orientation distribution function (fODF). PMID:20879320
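
    The filtering machinery itself is generic and can be sketched compactly: particles carry a position and direction, propagation adds angular noise, the data term reweights, and resampling concentrates particles on the tract. The constant synthetic "fibre orientation" below stands in for the paper's generalized-helicoid streamline model and fODF data.

        import numpy as np

        # Generic bootstrap particle filter for direction tracking (2-D toy).
        rng = np.random.default_rng(0)
        n_p, n_steps, step = 500, 50, 0.5
        fibre = np.deg2rad(30.0)     # assumed local fibre orientation (synthetic)

        pos = np.zeros((n_p, 2))
        ang = rng.normal(0.0, 0.5, n_p)            # uncertain initial direction

        for _ in range(n_steps):
            ang = ang + rng.normal(0.0, 0.1, n_p)              # predict
            w = np.exp(np.cos(ang - fibre) / 0.05)             # data likelihood
            w /= w.sum()
            idx = rng.choice(n_p, n_p, p=w)                    # resample
            ang, pos = ang[idx], pos[idx]
            pos = pos + step * np.column_stack([np.cos(ang), np.sin(ang)])

        mean_dir = np.rad2deg(np.arctan2(np.sin(ang).mean(), np.cos(ang).mean()))
        print(f"mean tract direction ~ {mean_dir:.1f} deg (true fibre: 30.0)")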

  20. Computational molecular biology approaches to ligand-target interactions

    PubMed Central

    Lupieri, Paola; Nguyen, Chuong Ha Hung; Bafghi, Zhaleh Ghaemi; Giorgetti, Alejandro; Carloni, Paolo

    2009-01-01

    Binding of small molecules to their targets triggers complex pathways. Computational approaches are key for predicting the molecular events involved in such cascades. Here we review current efforts at characterizing the molecular determinants in the largest membrane-bound receptor family, the G-protein-coupled receptors (GPCRs). We focus on odorant receptors, which constitute more than half of all GPCRs. The work presented in this review uncovers structural and energetic aspects of components of the cellular cascade. Finally, a computational approach in the context of radioactive boron-based antitumoral therapies is briefly described. PMID:20119480

  1. An Eulerian approach for computing the finite time Lyapunov exponent

    NASA Astrophysics Data System (ADS)

    Leung, Shingyu

    2011-05-01

    We propose efficient Eulerian methods for approximating the finite-time Lyapunov exponent (FTLE). The idea is to compute the related flow map using the Level Set Method and the Liouville equation. There are several advantages of the proposed approach. Unlike the usual Lagrangian-type computations, the resulting method requires the velocity field defined only at discrete locations. No interpolation of the velocity field is needed. Also, the method automatically stops a particle trajectory in the case when the ray hits the boundary of the computational domain. The computational complexity of the algorithm is O(Δx^(-(d+1))), with d the dimension of the physical space. Since there are the same number of mesh points in the (x, t) space, the computational complexity of the proposed Eulerian approach is optimal in the sense that each grid point is visited only O(1) times. We also extend the algorithm to compute the FTLE on a co-dimension one manifold. The resulting algorithm does not require computation on any local coordinate system and is simple to implement even for an evolving manifold.
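
    The quantity being computed has a compact definition: with flow map Φ_T and Cauchy-Green tensor C = (∇Φ_T)ᵀ(∇Φ_T), the FTLE is σ = ln(√λ_max(C))/T. The sketch below checks this on a linear saddle flow whose flow map is known in closed form, so the exact FTLE is 1 everywhere; it is a verification toy, not the paper's Level Set/Liouville machinery.

        import numpy as np

        # FTLE of the saddle u = (x, -y): Phi_T(x, y) = (x e^T, y e^(-T)).
        T, n = 2.0, 64
        x = np.linspace(-1, 1, n)
        X, Y = np.meshgrid(x, x, indexing="ij")
        PX, PY = X * np.exp(T), Y * np.exp(-T)     # analytic flow map

        dx = x[1] - x[0]
        dPXdx, dPXdy = np.gradient(PX, dx, dx)     # finite-difference gradient
        dPYdx, dPYdy = np.gradient(PY, dx, dx)

        # Cauchy-Green tensor and its largest eigenvalue (2x2, closed form)
        C11 = dPXdx**2 + dPYdx**2
        C12 = dPXdx * dPXdy + dPYdx * dPYdy
        C22 = dPXdy**2 + dPYdy**2
        lam = 0.5 * (C11 + C22 + np.sqrt((C11 - C22)**2 + 4 * C12**2))

        ftle = np.log(np.sqrt(lam)) / T
        print("FTLE min/max:", ftle.min().round(4), ftle.max().round(4))  # ~1.0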

  2. Spatio-temporal EEG source localization using a three-dimensional subspace FINE approach in a realistic geometry inhomogeneous head model.

    PubMed

    Ding, Lei; He, Bin

    2006-09-01

    The subspace source localization approach, i.e., first principle vectors (FINE), is able to enhance the spatial resolvability and localization accuracy for closely-spaced neural sources from EEG and MEG measurements. Computer simulations were conducted to evaluate the performance of the FINE algorithm in an inhomogeneous realistic geometry head model under a variety of conditions. The source localization abilities of FINE were examined at different cortical regions and at different depths. The present computer simulation results indicate that FINE has enhanced source localization capability, as compared with MUSIC and RAP-MUSIC, when sources are closely spaced, highly noise-contaminated, or inter-correlated. The source localization accuracy of FINE is better, for closely-spaced sources, than MUSIC at various noise levels, i.e., signal-to-noise ratio (SNR) from 6 dB to 16 dB, and RAP-MUSIC at relatively low noise levels, i.e., 6 dB to 12 dB. The FINE approach has been further applied to localize brain sources of motor potentials, obtained during the finger tapping tasks in a human subject. The experimental results suggest that the detailed neural activity distribution could be revealed by FINE. The present study suggests that FINE provides enhanced performance in localizing multiple closely spaced, and inter-correlated sources under low SNR, and may become an important alternative to brain source localization from EEG or MEG. PMID:16941829
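
    FINE refines subspace scanning methods of the MUSIC family, and the baseline idea is easy to demonstrate on a toy problem. The sketch below runs classic MUSIC on a linear sensor array (a stand-in for EEG lead fields): sources appear where the scanning vector is nearly orthogonal to the noise subspace. It is the baseline method, not FINE itself, and all signal parameters are made up.

        import numpy as np
        from scipy.signal import find_peaks

        # Classic MUSIC pseudospectrum on a 16-element linear array.
        rng = np.random.default_rng(0)
        m, snapshots = 16, 500
        true_deg = [20.0, 30.0]

        steering = lambda th: np.exp(1j * np.pi * np.arange(m) * np.sin(th))
        A = np.column_stack([steering(np.deg2rad(t)) for t in true_deg])
        X = A @ rng.standard_normal((2, snapshots)) + 0.3 * (
            rng.standard_normal((m, snapshots))
            + 1j * rng.standard_normal((m, snapshots)))

        R = X @ X.conj().T / snapshots
        eigval, eigvec = np.linalg.eigh(R)   # ascending eigenvalues
        En = eigvec[:, :-2]                  # noise subspace (m - 2 sources)

        grid = np.deg2rad(np.linspace(-90, 90, 721))
        P = np.array([1 / np.linalg.norm(En.conj().T @ steering(t))**2
                      for t in grid])
        pk, _ = find_peaks(P)
        best = np.sort(np.rad2deg(grid[pk[np.argsort(P[pk])[-2:]]]))
        print("estimated directions:", best.round(1), "true:", true_deg)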

  3. Aircraft Engine Noise Scattering by Fuselage and Wings: A Computational Approach

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Stanescu, D.; Hussaini, M. Y.

    2003-01-01

    The paper presents a time-domain method for computation of sound radiation from aircraft engine sources to the far field. The effects of non-uniform flow around the aircraft and scattering of sound by fuselage and wings are accounted for in the formulation. The approach is based on the discretization of the inviscid flow equations through a collocation form of the discontinuous Galerkin spectral element method. An isoparametric representation of the underlying geometry is used in order to take full advantage of the spectral accuracy of the method. Large-scale computations are made possible by a parallel implementation based on message passing. Results obtained for radiation from an axisymmetric nacelle alone are compared with those obtained when the same nacelle is installed in a generic configuration, with and without a wing.

  4. Aircraft Engine Noise Scattering by Fuselage and Wings: A Computational Approach

    NASA Technical Reports Server (NTRS)

    Stanescu, D.; Hussaini, M. Y.; Farassat, F.

    2003-01-01

    The paper presents a time-domain method for computation of sound radiation from aircraft engine sources to the far-field. The effects of nonuniform flow around the aircraft and scattering of sound by fuselage and wings are accounted for in the formulation. The approach is based on the discretization of the inviscid flow equations through a collocation form of the Discontinuous Galerkin spectral element method. An isoparametric representation of the underlying geometry is used in order to take full advantage of the spectral accuracy of the method. Large-scale computations are made possible by a parallel implementation based on message passing. Results obtained for radiation from an axisymmetric nacelle alone are compared with those obtained when the same nacelle is installed in a generic configuration, with and without a wing.

  5. A scalable and accurate method for classifying protein-ligand binding geometries using a MapReduce approach.

    PubMed

    Estrada, T; Zhang, B; Cicotti, P; Armen, R S; Taufer, M

    2012-07-01

    We present a scalable and accurate method for classifying protein-ligand binding geometries in molecular docking. Our method is a three-step process: the first step encodes the geometry of a three-dimensional (3D) ligand conformation into a single 3D point in the space; the second step builds an octree by assigning an octant identifier to every single point in the space under consideration; and the third step performs an octree-based clustering on the reduced conformation space and identifies the most dense octant. We adapt our method for MapReduce and implement it in Hadoop. The load-balancing, fault-tolerance, and scalability in MapReduce allow screening of very large conformation spaces not approachable with traditional clustering methods. We analyze results for docking trials for 23 protein-ligand complexes for HIV protease, 21 protein-ligand complexes for Trypsin, and 12 protein-ligand complexes for P38alpha kinase. We also analyze cross docking trials for 24 ligands, each docking into 24 protein conformations of the HIV protease, and receptor ensemble docking trials for 24 ligands, each docking in a pool of HIV protease receptors. Our method demonstrates significant improvement over energy-only scoring for the accurate identification of native ligand geometries in all these docking assessments. The advantages of our clustering approach make it attractive for complex applications in real-world drug design efforts. We demonstrate that our method is particularly useful for clustering docking results using a minimal ensemble of representative protein conformational states (receptor ensemble docking), which is now a common strategy to address protein flexibility in molecular docking. PMID:22658682
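
    The octree step at the heart of the method fits in a few lines once each conformation has been reduced to a 3-D point: quantize each axis to a few bits, interleave the bits into an octant identifier (a Morton code), and count identifiers. The point sets below are synthetic, and the paper distributes the counting via MapReduce rather than a single in-memory Counter.

        import numpy as np
        from collections import Counter

        # Octant ids by bit interleaving (Morton code) at a fixed depth.
        def octant_ids(points, depth, lo, hi):
            q = ((points - lo) / (hi - lo) * (1 << depth)).astype(np.int64)
            q = np.clip(q, 0, (1 << depth) - 1)
            ids = np.zeros(len(points), dtype=np.int64)
            for bit in range(depth):
                for axis in range(3):
                    ids |= ((q[:, axis] >> bit) & 1) << (3 * bit + axis)
            return ids

        rng = np.random.default_rng(0)
        native = rng.normal(0.0, 0.3, (300, 3))    # tight near-native cluster
        decoys = rng.uniform(-5, 5, (700, 3))      # scattered poses
        pts = np.vstack([native, decoys])

        ids = octant_ids(pts, depth=4, lo=-5.0, hi=5.0)
        densest, count = Counter(ids).most_common(1)[0]
        print(f"densest octant holds {count} of {len(pts)} poses")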

  6. Higher spin approaches to quantum field theory and (pseudo)-Riemannian geometries

    NASA Astrophysics Data System (ADS)

    Hallowell, Karl Evan

    In this thesis, we study a number of higher spin quantum field theories and some of their algebraic and geometric consequences. These theories apply mostly either over constant curvature or more generally symmetric pseudo-Riemannian manifolds. The first part of this dissertation covers a superalgebra coming from a family of particle models over symmetric spaces. These theories are novel in that the symmetries of the (super)algebra osp(Q|2p) are larger and more elaborate than traditional symmetries. We construct useful (super)algebras related to and generalizing old work by Lichnerowicz and describe their role in developing the geometry of massless models with osp(Q|2p) symmetry. The result is two practical applications of these (super)algebras: (1) a much more concise description of a family of higher spin quantum field theories; and (2) an interesting algebraic probe of underlying background geometries. We also consider massive models over constant curvature spaces. We use a radial dimensional reduction process which converts massless models into massive ones over a lower dimensional space. In our case, we take from the family of theories above the particular free, massless model over flat space associated with sp(2,R) and derive a massive model. In the process, we develop a novel associative algebra, which is a deformation of the original differential operator algebra associated with the sp(2,R) model. This algebra is interesting in its own right since its operators realize the representation structure of the sp(2,R) group. The massive model also has implications for a sequence of unusual, "partially massless" theories. The derivation illuminates how reduced degrees of freedom become manifest in these particular models. Finally, we study a Yang-Mills model using an on-shell Poincare Yang-Mills twist of the Maxwell complex along with a non-minimal coupling. This is a special, higher spin case of a quantum field theory called a Yang-Mills detour complex

  7. Antisolvent crystallization approach to construction of CuI superstructures with defined geometries.

    PubMed

    Kozhummal, Rajeevan; Yang, Yang; Güder, Firat; Küçükbayrak, Umut M; Zacharias, Margit

    2013-03-26

    A facile high-yield production of cuprous iodide (CuI) superstructures is reported by antisolvent crystallization using acetonitrile/water as a solvent/antisolvent couple under ambient conditions. In the presence of trace water, the metastable water droplets act as templates to induce the precipitation of hollow spherical CuI superstructures consisting of orderly aligned building blocks after drop coating. With water in excess in the mixed solution, an instant precipitation of CuI random aggregates takes place due to rapid crystal growth via ion-by-ion attachment induced by a strong antisolvent effect. However, this uncontrolled process can be modified by adding polymer polyvinyl pyrrolidone (PVP) in water to restrict the size of initially formed CuI crystal nuclei through the effective coordination effect of PVP. As a result, CuI superstructures with a cuboid geometry are constructed by gradual self-assembly of the small CuI crystals via oriented attachment. The precipitated CuI superstructures have been used as competent adsorbents to remove organic dyes from the water due to their mesocrystal feature. Besides, the CuI superstructures have been applied either as a self-sacrificial template or only as a structuring template for the flexible design of other porous materials such as CuO and TiO2. This system provides an ideal platform to simultaneously investigate the superstructure formation enforced by antisolvent crystallization with and without organic additives. PMID:23441989

  8. Bending and twisting the embryonic heart: a computational model for c-looping based on realistic geometry

    PubMed Central

    Shi, Yunfei; Yao, Jiang; Young, Jonathan M.; Fee, Judy A.; Perucchio, Renato; Taber, Larry A.

    2014-01-01

    The morphogenetic process of cardiac looping transforms the straight heart tube into a curved tube that resembles the shape of the future four-chambered heart. Although great progress has been made in identifying the molecular and genetic factors involved in looping, the physical mechanisms that drive this process have remained poorly understood. Recent work, however, has shed new light on this complicated problem. After briefly reviewing the current state of knowledge, we propose a relatively comprehensive hypothesis for the mechanics of the first phase of looping, termed c-looping, as the straight heart tube deforms into a c-shaped tube. According to this hypothesis, differential hypertrophic growth in the myocardium supplies the main forces that cause the heart tube to bend ventrally, while regional growth and cytoskeletal contraction in the omphalomesenteric veins (primitive atria) and compressive loads exerted by the splanchnopleuric membrane drive rightward torsion. A computational model based on realistic embryonic heart geometry is used to test the physical plausibility of this hypothesis. The behavior of the model is in reasonable agreement with available experimental data from control and perturbed embryos, offering support for our hypothesis. The results also suggest, however, that several other mechanisms contribute secondarily to normal looping, and we speculate that these mechanisms play backup roles when looping is perturbed. Finally, some outstanding questions are discussed for future study. PMID:25161623

  9. A Process Approach to Writing with a Computer.

    ERIC Educational Resources Information Center

    Miller-Jacobs, Sandy

    The word processor helps teachers to use the process approach to writing. In using the word processor, the teacher can create tasks on the computer to assist students during each step of the writing process, i.e., prewriting or idea processing, drafting or writing, revising/rewriting or editing, and the publishing process or communicating. Ideas…

  10. Computer-Assisted Approaches to Multiattribute Decision Making.

    ERIC Educational Resources Information Center

    Radcliff, Benjamin

    1986-01-01

    This article evaluates three general types of computer-assisted approaches to multicriteria decision problems in which criteria are attributes as opposed to objectives. Several programs specifically designed for multiattribute problems, as well as spreadsheets and decision-tree software, are discussed. (Author/BS)

  11. New Theoretical Approaches for Human-Computer Interaction.

    ERIC Educational Resources Information Center

    Rogers, Yvonne

    2004-01-01

    Presents a critique of recent theoretical developments in the field of human-computer interaction (HCI) together with an overview of HCI practice. This chapter discusses why theoretically based approaches have had little impact on the practice of interaction design and suggests mechanisms to enable designers and researchers to better articulate…

  12. CellGeo: a computational platform for the analysis of shape changes in cells with complex geometries.

    PubMed

    Tsygankov, Denis; Bilancia, Colleen G; Vitriol, Eric A; Hahn, Klaus M; Peifer, Mark; Elston, Timothy C

    2014-02-01

    Cell biologists increasingly rely on computer-aided image analysis, allowing them to collect precise, unbiased quantitative results. However, despite great progress in image processing and computer vision, current computational approaches fail to address many key aspects of cell behavior, including the cell protrusions that guide cell migration and drive morphogenesis. We developed the open source MATLAB application CellGeo, a user-friendly computational platform to allow simultaneous, automated tracking and analysis of dynamic changes in cell shape, including protrusions ranging from filopodia to lamellipodia. Our method maps an arbitrary cell shape onto a tree graph that, unlike traditional skeletonization algorithms, preserves complex boundary features. CellGeo allows rigorous but flexible definition and accurate automated detection and tracking of geometric features of interest. We demonstrate CellGeo's utility by deriving new insights into (a) the roles of Diaphanous, Enabled, and Capping protein in regulating filopodia and lamellipodia dynamics in Drosophila melanogaster cells and (b) the dynamic properties of growth cones in catecholaminergic a-differentiated neuroblastoma cells. PMID:24493591

  13. A Comparative Study of Achievement in the Concepts of Fundamentals of Geometry Taught by Computer Managed Individualized Behavioral Objective Instructional Units Versus Lecture-Demonstration Methods of Instruction.

    ERIC Educational Resources Information Center

    Fisher, Merrill Edgar

    The purposes of this study were (1) to identify and compare the effect on student achievement of an individualized computer-managed geometry course, built on behavioral objectives, with traditional instructional methods; and (2) to identify how selected individual aptitudes interact with the two instructional modes. The subjects were…

  15. The Interpretative Flexibility, Instrumental Evolution, and Institutional Adoption of Mathematical Software in Educational Practice: The Examples of Computer Algebra and Dynamic Geometry

    ERIC Educational Resources Information Center

    Ruthven, Kenneth

    2008-01-01

    This article examines three important facets of the incorporation of new technologies into educational practice, focusing on emergent usages of the mathematical tools of computer algebra and dynamic geometry. First, it illustrates the interpretative flexibility of these tools, highlighting important differences in ways of conceptualizing and…

  16. Connecting Geometry and Chemistry: A Three-Step Approach to Three-Dimensional Thinking

    ERIC Educational Resources Information Center

    Donaghy, Kelley J.; Saxton, Kathleen J.

    2012-01-01

    A three-step active-learning approach is described to enhance the spatial abilities of general chemistry students with respect to three-dimensional molecular drawing and visualization. These activities are used in a medium-sized lecture hall with approximately 150 students in the first semester of the general chemistry course. The first activity…

  17. Suggestions for Teaching Mathematics Using Laboratory Approaches Grades 1-6. 3. Geometry. Experimental Edition.

    ERIC Educational Resources Information Center

    New York State Education Dept., Albany. Bureau of Elementary Curriculum Development.

    This guide describes activities and materials which can be used in a mathematics laboratory approach to a basic mathematics program for grades 1-6. Thirty-five activities pertaining to geometric concepts are described in terms of purpose, suggested grade levels, materials needed, and procedures. Some concepts included in the guide are: basic…

  18. Diversifying Our Perspectives on Mathematics about Space and Geometry: An Ecocultural Approach

    ERIC Educational Resources Information Center

    Owens, Kay

    2014-01-01

    School mathematics tends to have developed from the major cultures of Asia, the Mediterranean and Europe. However, indigenous cultures in particular may have distinctly different systematic ways of referring to space and thinking mathematically about spatial activity. Their approaches are based on the close link between the environment and…

  19. Cloud computing approaches to accelerate drug discovery value chain.

    PubMed

    Garg, Vibhav; Arora, Suchir; Gupta, Chitra

    2011-12-01

    Continued advancements in the area of technology have helped high throughput screening (HTS) evolve from a linear to a parallel approach by performing system-level screening. Advanced experimental methods used for HTS at various steps of drug discovery (i.e. target identification, target validation, lead identification and lead validation) can generate data of the order of terabytes. As a consequence, there is a pressing need to store, manage, mine and analyze this data to identify informational tags. This need in turn challenges computer scientists to offer matching hardware and software infrastructure, while managing the varying degree of desired computational power. Therefore, the potential of "On-Demand Hardware" and "Software as a Service (SAAS)" delivery mechanisms cannot be denied. This on-demand computing, largely referred to as Cloud Computing, is now transforming drug discovery research. Integration of cloud computing with parallel computing is also expanding its footprint in the life sciences community. Speed, efficiency and cost effectiveness have made cloud computing a 'good to have tool' for researchers, providing them significant flexibility and allowing them to focus on the 'what' of science and not the 'how'. Once it reaches maturity, the Discovery-Cloud would be well suited to managing drug discovery and clinical development data generated using advanced HTS techniques, supporting the vision of personalized medicine. PMID:21843145

  20. Euclidean Geometry via Programming.

    ERIC Educational Resources Information Center

    Filimonov, Rossen; Kreith, Kurt

    1992-01-01

    Describes the Plane Geometry System computer software developed at the Educational Computer Systems laboratory in Sofia, Bulgaria. The system enables students to use the concept of "algorithm" to correspond to the process of "deductive proof" in the development of plane geometry. Provides an example of the software's capability and compares it to…

  1. Design of specially adapted reactive coordinates to economically compute potential and kinetic energy operators including geometry relaxation.

    PubMed

    Thallmair, Sebastian; Roos, Matthias K; de Vivie-Riedle, Regina

    2016-06-21

    Quantum dynamics simulations require prior knowledge of the potential energy surface as well as the kinetic energy operator. Typically, they are evaluated in a low-dimensional subspace of the full configuration space of the molecule, as its dimensionality increases in proportion to the number of atoms. This entails the challenge of finding the most suitable subspace. We present an approach to design specially adapted reactive coordinates spanning this subspace. In addition to the essential geometric changes, these coordinates take into account the relaxation of the non-reactive coordinates without the necessity of performing geometry optimizations at each grid point. The method is demonstrated for an ultrafast photoinduced bond cleavage in a commonly used organic precursor for the generation of electrophiles. The potential energy surfaces for the reaction as well as the Wilson G-matrix as part of the kinetic energy operator are shown for a complex chemical reaction, both including the relaxation of the non-reactive coordinates on equal footing. A microscopic interpretation of the shape of the G-matrix elements allows analysis of the impact of the non-reactive coordinates on the kinetic energy operator. Additionally, we compare quantum dynamics simulations with and without the relaxation of the non-reactive coordinates included in the kinetic energy operator to demonstrate its influence. PMID:27334151

  2. Design of specially adapted reactive coordinates to economically compute potential and kinetic energy operators including geometry relaxation

    NASA Astrophysics Data System (ADS)

    Thallmair, Sebastian; Roos, Matthias K.; de Vivie-Riedle, Regina

    2016-06-01

    Quantum dynamics simulations require prior knowledge of the potential energy surface as well as the kinetic energy operator. Typically, they are evaluated in a low-dimensional subspace of the full configuration space of the molecule, as its dimensionality increases in proportion to the number of atoms. This entails the challenge of finding the most suitable subspace. We present an approach to design specially adapted reactive coordinates spanning this subspace. In addition to the essential geometric changes, these coordinates take into account the relaxation of the non-reactive coordinates without the necessity of performing geometry optimizations at each grid point. The method is demonstrated for an ultrafast photoinduced bond cleavage in a commonly used organic precursor for the generation of electrophiles. The potential energy surfaces for the reaction as well as the Wilson G-matrix as part of the kinetic energy operator are shown for a complex chemical reaction, both including the relaxation of the non-reactive coordinates on equal footing. A microscopic interpretation of the shape of the G-matrix elements allows analysis of the impact of the non-reactive coordinates on the kinetic energy operator. Additionally, we compare quantum dynamics simulations with and without the relaxation of the non-reactive coordinates included in the kinetic energy operator to demonstrate its influence.

  3. A Computer Program for the Reactivity and Kinetic Parameters for Two-Dimensional Triangular Geometry by Transport Perturbation Theory.

    Energy Science and Technology Software Center (ESTSC)

    1990-04-25

    Version 00 TPTRIA calculates reactivity, effective delayed neutron fractions and mean generation time for two-dimensional triangular geometry on the basis of neutron transport perturbation theory. DIAMANT2 (also designated as CCC-414), is a multigroup two-dimensional discrete ordinates transport code system for triangular and hexagonal geometry which calculates direct and adjoint angular fluxes.

  4. Analytical and Computational Properties of Distributed Approaches to MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    Historical evolution of engineering disciplines and the complexity of the MDO problem suggest that disciplinary autonomy is a desirable goal in formulating and solving MDO problems. We examine the notion of disciplinary autonomy and discuss the analytical properties of three approaches to formulating and solving MDO problems that achieve varying degrees of autonomy by distributing the problem along disciplinary lines. Two of the approaches, Optimization by Linear Decomposition and Collaborative Optimization, are based on bi-level optimization and reflect what we call a structural perspective. The third approach, Distributed Analysis Optimization, is a single-level approach that arises from what we call an algorithmic perspective. The main conclusion of the paper is that disciplinary autonomy may come at a price: in the bi-level approaches, the system-level constraints introduced to relax the interdisciplinary coupling and enable disciplinary autonomy can cause analytical and computational difficulties for optimization algorithms. The single-level alternative we discuss affords a more limited degree of autonomy than that of the bi-level approaches, but without the computational difficulties of the bi-level methods. Key Words: Autonomy, bi-level optimization, distributed optimization, multidisciplinary optimization, multilevel optimization, nonlinear programming, problem integration, system synthesis

  5. The geometry of distributional preferences and a non-parametric identification approach: The Equality Equivalence Test

    PubMed Central

    Kerschbamer, Rudolf

    2015-01-01

    This paper proposes a geometric delineation of distributional preference types and a non-parametric approach for their identification in a two-person context. It starts with a small set of assumptions on preferences and shows that this set (i) naturally results in a taxonomy of distributional archetypes that nests all empirically relevant types considered in previous work; and (ii) gives rise to a clean experimental identification procedure – the Equality Equivalence Test – that discriminates between archetypes according to core features of preferences rather than properties of specific modeling variants. As a by-product the test yields a two-dimensional index of preference intensity. PMID:26089571

  6. The dependence of computed tomography number to relative electron density conversion on phantom geometry and its impact on planned dose.

    PubMed

    Inness, Emma K; Moutrie, Vaughan; Charles, Paul H

    2014-06-01

    A computed tomography number to relative electron density (CT-RED) calibration is performed when commissioning a radiotherapy CT scanner by imaging a calibration phantom with inserts of specified RED and recording the CT number displayed. In this work, CT-RED calibrations were generated using several commercially available phantoms to observe the effect of phantom geometry on conversion to electron density and, ultimately, the dose calculation in a treatment planning system. Using an anthropomorphic phantom as a gold standard, the CT number of a material was found to depend strongly on the amount and type of scattering material surrounding the volume of interest, with the largest variation observed for the highest density material tested, cortical bone. Cortical bone gave a maximum CT number difference of 1,110 when a cylindrical insert of diameter 28 mm scanned free in air was compared to that in the form of a 30 × 30 cm² slab. The effect of using each CT-RED calibration on planned dose to a patient was quantified using a commercially available treatment planning system. When all calibrations were compared to the anthropomorphic calibration, the largest percentage dose difference was 4.2 % which occurred when the CT-RED calibration curve was acquired with heterogeneity inserts removed from the phantom and scanned free in air. The maximum dose difference observed between two dedicated CT-RED phantoms was ±2.1 %. A phantom that is to be used for CT-RED calibrations must have sufficient water equivalent scattering material surrounding the heterogeneous objects that are to be used for calibration. PMID:24760737
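
    As an illustration of how a commissioned calibration curve of this kind is applied, the sketch below implements a piecewise-linear CT-RED lookup in Python. The calibration points and HU values are hypothetical, chosen only to mimic the kind of CT number shift the abstract describes for cortical bone under different scatter conditions.

      import numpy as np

      # Hypothetical CT-RED calibration points (CT number in HU vs. relative
      # electron density), of the kind measured with a calibration phantom.
      ct_numbers = np.array([-1000.0, -700.0, -90.0, 0.0, 60.0, 250.0, 900.0])
      rel_e_dens = np.array([0.00, 0.29, 0.95, 1.00, 1.05, 1.14, 1.51])

      def ct_to_red(hu):
          """Piecewise-linear CT-RED conversion, as a treatment planning
          system applies a commissioned calibration curve."""
          return np.interp(hu, ct_numbers, rel_e_dens)

      # The same insert imaged under different scatter conditions can report
      # very different CT numbers, shifting the RED used in dose calculation.
      for hu in (700.0, 900.0, 1100.0):
          print(f"HU {hu:6.0f} -> RED {ct_to_red(hu):.3f}")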

  7. Protein Engineering by Combined Computational and In Vitro Evolution Approaches.

    PubMed

    Rosenfeld, Lior; Heyne, Michael; Shifman, Julia M; Papo, Niv

    2016-05-01

    Two alternative strategies are commonly used to study protein-protein interactions (PPIs) and to engineer protein-based inhibitors. In one approach, binders are selected experimentally from combinatorial libraries of protein mutants that are displayed on a cell surface. In the other approach, computational modeling is used to explore an astronomically large number of protein sequences to select a small number of sequences for experimental testing. While both approaches have some limitations, their combination produces superior results in various protein engineering applications. Such applications include the design of novel binders and inhibitors, the enhancement of affinity and specificity, and the mapping of binding epitopes. The combination of these approaches also aids in the understanding of the specificity profiles of various PPIs. PMID:27061494

  8. Computational modeling approaches to the dynamics of oncolytic viruses.

    PubMed

    Wodarz, Dominik

    2016-05-01

    Replicating oncolytic viruses represent a promising treatment approach against cancer, specifically targeting the tumor cells. Significant progress has been made through experimental and clinical studies. Besides these approaches, however, mathematical models can be useful when analyzing the dynamics of virus spread through tumors, because the interactions between a growing tumor and a replicating virus are complex and nonlinear, making them difficult to understand by experimentation alone. Mathematical models have provided significant biological insight into the field of virus dynamics, and similar approaches can be adopted to study oncolytic viruses. The review discusses this approach and highlights some of the challenges that need to be overcome in order to build mathematical and computational models that are clinically predictive. WIREs Syst Biol Med 2016, 8:242-252. doi: 10.1002/wsbm.1332 For further resources related to this article, please visit the WIREs website. PMID:27001049

  9. SU-C-207-01: Four-Dimensional Inverse Geometry Computed Tomography: Concept and Its Validation

    SciTech Connect

    Kim, K; Kim, D; Kim, T; Kang, S; Cho, M; Shin, D; Suh, T

    2015-06-15

    Purpose: In the past few years, the inverse geometry computed tomography (IGCT) system has been developed to overcome shortcomings of the conventional computed tomography (CT) system, such as the scatter problem induced by large detector size and cone-beam artifact. In this study, we present a concept of a four-dimensional (4D) IGCT system that retains these advantages while adding temporal resolution for dynamic studies and reducing motion artifact. Methods: Contrary to a conventional CT system, the projection data at a certain angle in IGCT form a group of fractionated narrow cone-beam projections, a projection group (PG), acquired from a multi-source array whose sources are operated sequentially with an extremely short time gap. For 4D IGCT imaging, time-related data acquisition parameters were determined by combining the multi-source scanning time for collecting one PG with a conventional 4D CBCT data acquisition sequence. Over a gantry rotation, PGs acquired from the multi-source array were tagged with time and angle for 4D image reconstruction. Acquired PGs were sorted into 10 phases, and image reconstruction was performed independently for each phase. An image reconstruction algorithm based on filtered backprojection was used in this study. Results: The 4D IGCT produced uniform images without cone-beam artifact, in contrast to 4D CBCT images. In addition, the 4D IGCT images of each phase had no significant motion-induced artifact compared with 3D CT. Conclusion: The 4D IGCT images appear to give relatively accurate dynamic information of patient anatomy, being more robust against motion artifact than 3D CT. Accordingly, it should be useful for dynamic studies and respiratory-correlated radiation therapy. This work was supported by the Industrial R&D program of MOTIE/KEIT [10048997, Development of the core technology for integrated therapy devices based on real-time MRI guided tumor tracking] and the Mid-career Researcher Program (2014R1A2A1A
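
    A minimal sketch of the retrospective phase-sorting step described above, with hypothetical acquisition numbers (600 PGs over a 60 s scan, a 4 s breathing period, 10 phase bins); the actual timing parameters and the reconstruction itself belong to the paper, not this toy.

      import numpy as np

      # Hypothetical acquisition: 600 projection groups (PGs), each tagged
      # with a gantry angle and a timestamp over a 60 s scan.
      n_pg, scan_time, period, n_phase = 600, 60.0, 4.0, 10
      times = np.linspace(0.0, scan_time, n_pg, endpoint=False)
      angles = np.linspace(0.0, 360.0, n_pg, endpoint=False)

      # Retrospective sorting: each PG's fractional position within the
      # breathing cycle determines its phase bin, as in 4D CBCT.
      phase = np.floor((times % period) / period * n_phase).astype(int)

      for ph in range(n_phase):
          sel = phase == ph
          gaps = np.diff(np.sort(angles[sel]))
          # Each bin would then be reconstructed independently (e.g. by
          # filtered backprojection of its PGs).
          print(f"phase {ph}: {sel.sum():3d} PGs, "
                f"largest angular gap {gaps.max():.1f} deg")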

  10. Style: A Computational and Conceptual Blending-Based Approach

    NASA Astrophysics Data System (ADS)

    Goguen, Joseph A.; Harrell, D. Fox

    This chapter proposes a new approach to style, arising from our work on computational media using structural blending, which enriches the conceptual blending of cognitive linguistics with structure building operations in order to encompass syntax and narrative as well as metaphor. We have implemented both conceptual and structural blending, and conducted initial experiments with poetry, including interactive multimedia poetry, although the approach generalizes to other media. The central idea is to generate multimedia content and analyze style in terms of blending principles, based on our finding that different principles from those of common sense blending are often needed for some contemporary poetic metaphors.

  11. A computer-aided approach to nonlinear control synthesis

    NASA Technical Reports Server (NTRS)

    Wie, Bong; Anthony, Tobin

    1988-01-01

    The major objective of this project is to develop a computer-aided approach to nonlinear stability analysis and nonlinear control system design. This goal is to be attained by refining the describing function method as a synthesis tool for nonlinear control design. The interim report outlines this study's approach to meeting these goals, including an introduction to the INteractive Controls Analysis (INCA) program, which was instrumental in meeting the study objectives. A single-input describing function (SIDF) design methodology was developed in this study; coupled with the software constructed in this study, the results of this project provide a comprehensive tool for the design and integration of nonlinear control systems.

  12. A Computational Approach for Model Update of an LS-DYNA Energy Absorbing Cell

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Jackson, Karen E.; Kellas, Sotiris

    2008-01-01

    NASA and its contractors are working on structural concepts for absorbing the impact energy of aerospace vehicles. Recently, concepts in the form of multi-cell honeycomb-like structures designed to crush under load have been investigated for both space and aeronautics applications. Efforts to understand these concepts are progressing from tests of individual cells to tests of systems with hundreds of cells. Because of fabrication irregularities, geometry irregularities, and material property uncertainties, the problem of reconciling analytical models, in particular LS-DYNA models, with experimental data is a challenge. A first look at the correlation between single-cell load/deflection data and LS-DYNA predictions revealed discrepancies, which prompted additional work in this area. This paper describes a computational approach that uses analysis of variance, deterministic sampling techniques, response surface modeling, and genetic optimization to reconcile test with analysis results. Analysis of variance provides a screening technique for the selection of critical parameters used when reconciling test with analysis. In this study, complete ignorance of the parameter distribution is assumed and, therefore, the value of any parameter within the range that is computed using the optimization procedure is considered to be equally likely. Mean values from tests are matched against LS-DYNA solutions by minimizing the square error using genetic optimization. The paper presents the computational methodology along with results obtained using this approach.
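
    The reconciliation loop described here (sample the parameter box, fit a response surface, then match the test mean by global optimization) can be sketched as follows. The two-parameter "simulator" is a stand-in for an LS-DYNA run, and scipy's differential evolution stands in for the genetic optimizer; all numbers are illustrative.

      import numpy as np
      from scipy.optimize import differential_evolution

      rng = np.random.default_rng(6)

      # Stand-in for an LS-DYNA run: peak crush load as an unknown function
      # of two uncertain parameters (e.g. yield scale, thickness scale).
      def simulator(x):
          return 12.0 * x[0] ** 0.8 * x[1] ** 1.5 + 0.5 * np.sin(5 * x[0])

      # Deterministic sampling of the parameter box, then a quadratic
      # response surface so the optimizer avoids the costly simulation.
      samples = rng.uniform(0.5, 1.5, (50, 2))
      y = np.array([simulator(x) for x in samples])
      A = np.column_stack([np.ones(50), samples, samples ** 2,
                           samples[:, 0] * samples[:, 1]])
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)

      def surrogate(x):
          return coef @ np.array([1.0, x[0], x[1],
                                  x[0] ** 2, x[1] ** 2, x[0] * x[1]])

      # Reconcile with the test mean; any parameter value in the range is
      # treated as equally likely, so only the misfit drives the search.
      test_mean = 11.2
      res = differential_evolution(
          lambda x: (surrogate(x) - test_mean) ** 2,
          bounds=[(0.5, 1.5), (0.5, 1.5)], seed=0)
      print("updated parameters:", np.round(res.x, 3))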

  13. Computational Drug Repositioning: A Lateral Approach to Traditional Drug Discovery?

    PubMed

    Sahu, Niteshkumar U; Kharkar, Prashant S

    2016-01-01

    Computational drug repositioning is popular in academia and the pharmaceutical industry globally. The repositioning hypotheses, generated using a variety of computational methods, can be quickly tested experimentally. Several success stories have emerged in the past decade or so. Newer concepts and methods such as drug profile matching are being tried to address the limitations of current computational repositioning methods. The trend is shifting from earlier small-scale to large-scale or global-scale repositioning applications. Other related approaches, such as prediction of molecular targets for novel molecules and prediction of side-effect profiles of new molecular entities (NMEs), are applied routinely. The current article focuses on the state of the art of the computational drug repositioning field with the help of relevant examples and case studies. This 'lateral' approach has significant potential to bring down the time and cost of notoriously expensive drug discovery research and clinical development. Persistence and perseverance in the successful application of these methods are likely to pay off in the near future. PMID:26881717

  14. Computer Synthesis Approaches of Hyperboloid Gear Drives with Linear Contact

    NASA Astrophysics Data System (ADS)

    Abadjiev, Valentin; Kawasaki, Haruhisa

    2014-09-01

    Computer-aided design has produced various types of software for scientific research in the field of gearing theory, as well as providing adequate scientific support for gear drive manufacture. Such computer programs are based on mathematical models resulting from scientific research. Modern gear transmissions require new mathematical approaches to their geometric, technological, and strength analysis. The process of optimization, synthesis, and design is based on iterative procedures that find an optimal solution by varying definite parameters. This study describes the accepted methodology in the creation of software for the synthesis of a class of high-reduction hyperboloid gears, Spiroid and Helicon ones (Spiroid and Helicon are trademarks registered by the Illinois Tool Works, Chicago, Ill). The basic computer products developed here are software based on original mathematical models. They rest on two mathematical models for the synthesis: "upon a pitch contact point" and "upon a mesh region". Computer programs are worked out on the basis of the described mathematical models, and the relations between them are shown. The application of these approaches to the synthesis of the gear drives under discussion is illustrated.

  15. Archiving Software Systems: Approaches to Preserve Computational Capabilities

    NASA Astrophysics Data System (ADS)

    King, T. A.

    2014-12-01

    A great deal of effort is made to preserve scientific data, not only because data is knowledge, but because it is often costly to acquire and is sometimes collected under unique circumstances. Another part of the science enterprise is the development of software to process and analyze the data. Developed software is also a large investment and worthy of preservation. However, the long-term preservation of software presents some challenges. Software often requires a specific technology stack to operate, which can include software, operating system and hardware dependencies. One past approach to preserving computational capabilities is to maintain ancient hardware long past its typical viability; on an archive horizon of 100 years, this is not feasible. Another approach is to archive source code. While this can preserve details of the implementation and algorithms, it may not be possible to reproduce the technology stack needed to compile and run the resulting applications. This forward-looking dilemma has a solution. Technology used to create clouds and process big data can also be used to archive and preserve computational capabilities. We explore how basic hardware, virtual machines, containers and appropriate metadata can be used to preserve computational capabilities and to archive functional software systems. In conjunction with data archives, this provides scientists with both the data and the capability to reproduce the processing and analysis used to generate past scientific results.

  16. A computational approach to developing mathematical models of polyploid meiosis.

    PubMed

    Rehmsmeier, Marc

    2013-04-01

    Mathematical models of meiosis that relate offspring to parental genotypes through parameters such as meiotic recombination frequency have been difficult to develop for polyploids. Existing models have limitations with respect to their analytic potential, their compatibility with insights into mechanistic aspects of meiosis, and their treatment of model parameters in terms of parameter dependencies. In this article I put forward a computational approach to the probabilistic modeling of meiosis. A computer program enumerates all possible paths through the phases of replication, pairing, recombination, and segregation, while keeping track of the probabilities of the paths according to the various parameters involved. Probabilities for classes of genotypes or phenotypes are added, and the resulting formulas are simplified by the symbolic-computation system Mathematica. An example application to autotetraploids results in a model that remedies the limitations of previous models mentioned above. In addition to the immediate implications, the computational approach presented here can be expected to be useful by opening avenues for modeling a host of processes, including meiosis in higher-order ploidies. PMID:23335332
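
    The enumerate-then-simplify idea can be illustrated in miniature. The sketch below uses a two-locus diploid toy case and sympy in place of Mathematica for the symbolic simplification; the paper's models enumerate the far larger polyploid path space in the same spirit.

      import itertools
      import sympy as sp

      r = sp.Symbol('r')  # recombination frequency between two linked loci

      # A parent AB/ab produces gametes that either keep (prob 1 - r) or
      # recombine (prob r) the parental allele combinations.
      gametes = [
          ('AB', (1 - r) / 2), ('ab', (1 - r) / 2),
          ('Ab', r / 2), ('aB', r / 2),
      ]

      # Probability that an offspring of two such parents is doubly
      # heterozygous: sum the path probabilities over all gamete pairs.
      prob = sp.Integer(0)
      for (g1, p1), (g2, p2) in itertools.product(gametes, repeat=2):
          pair = g1 + g2
          if pair.count('A') == 1 and pair.count('B') == 1:
              prob += p1 * p2

      # -> r**2 - r + 1/2, i.e. ((1 - r)**2 + r**2)/2 after simplification.
      print(sp.simplify(sp.expand(prob)))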

  17. Reservoir Computing approach to Great Lakes water level forecasting

    NASA Astrophysics Data System (ADS)

    Coulibaly, Paulin

    2010-02-01

    The use of the echo state network (ESN) for dynamical system modeling is known as Reservoir Computing and has been shown to be effective for a number of applications, including signal processing, learning grammatical structure, time series prediction and motor/system control. However, the performance of the Reservoir Computing approach on hydrological time series remains largely unexplored. This study investigates the potential of the ESN, or Reservoir Computing, for long-term prediction of lake water levels. Great Lakes water levels from 1918 to 2005 are used to develop and evaluate the ESN models. The forecast performance of the ESN-based models is compared with the results obtained from two benchmark models, the conventional recurrent neural network (RNN) and the Bayesian neural network (BNN). The test results indicate a strong ability of ESN models to provide improved lake level forecasts up to 10 months ahead, suggesting that the inherent structure and innovative learning approach of the ESN are suitable for hydrological time series modeling. Another particular advantage of the ESN learning approach is that it simplifies network training and avoids the limitations inherent to the gradient descent optimization method. Overall, it is shown that the ESN can be a good alternative method for improved lake level forecasting, performing better than both the RNN and the BNN on the four selected Great Lakes time series, namely, the Lakes Erie, Huron-Michigan, Ontario, and Superior.
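
    To make the ESN idea concrete, here is a minimal NumPy sketch of Reservoir Computing for one-step-ahead prediction: a fixed random reservoir plus a trained linear readout. The synthetic seasonal series, reservoir size and hyperparameters are illustrative stand-ins, not the study's configuration.

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy monthly series standing in for lake levels: seasonality + noise.
      t = np.arange(1200)
      series = np.sin(2 * np.pi * t / 12) + 0.1 * rng.standard_normal(t.size)

      n_res, rho, ridge = 200, 0.9, 1e-6
      W_in = rng.uniform(-0.5, 0.5, n_res)
      W = rng.uniform(-0.5, 0.5, (n_res, n_res))
      W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # echo state property

      def run_reservoir(u):
          """Drive the fixed random reservoir and collect its states."""
          x, states = np.zeros(n_res), np.empty((u.size, n_res))
          for k, uk in enumerate(u):
              x = np.tanh(W_in * uk + W @ x)
              states[k] = x
          return states

      # Only the linear readout W_out is trained (ridge regression), which
      # is what sidesteps gradient-descent training of recurrent weights.
      washout, split = 100, 1000
      X, Y = run_reservoir(series[:-1]), series[1:]
      Xtr, Ytr = X[washout:split], Y[washout:split]
      W_out = np.linalg.solve(Xtr.T @ Xtr + ridge * np.eye(n_res), Xtr.T @ Ytr)

      pred = X[split:] @ W_out
      print("test RMSE:", np.sqrt(np.mean((pred - Y[split:]) ** 2)))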

  18. A computer vision-based approach for structural displacement measurement

    NASA Astrophysics Data System (ADS)

    Ji, Yunfeng

    2010-04-01

    Along with the incessant advancement in optics, electronics and computer technologies during the last three decades, commercial digital video cameras have experienced a remarkable evolution, and can now be employed to measure complex motions of objects with sufficient accuracy, which is of great assistance to structural displacement measurement in civil engineering. This paper proposes a computer vision-based approach for the dynamic measurement of structures. A single digital camera is used to capture image sequences of planar targets mounted on vibrating structures. The mathematical relationship between the image plane and real space is established based on computer vision theory. Then, the structural dynamic displacement at the target locations can be quantified using point reconstruction rules. Compared with traditional displacement measurement methods using sensors such as accelerometers, linear-variable-differential-transducers (LVDTs) and the global positioning system (GPS), the proposed approach offers the main advantages of great flexibility, a non-contact working mode and ease of increasing the number of measurement points. For validation, four tests were performed: sinusoidal motion of a point, free vibration of a cantilever beam, a wind tunnel test of a cross-section bridge model, and a field test of bridge displacement measurement. Results show that the proposed approach attains excellent accuracy compared with analytical results and measurements from conventional transducers, and delivers an innovative and low-cost solution to structural displacement measurement.
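
    The point reconstruction step (image plane to real space for a planar target) can be sketched with a direct linear transform homography. The coordinates below are hypothetical; a real system would add camera calibration, subpixel target tracking and lens distortion correction.

      import numpy as np

      def fit_homography(img_pts, world_pts):
          """Direct linear transform: plane-to-plane homography from
          four or more point correspondences."""
          A = []
          for (x, y), (X, Y) in zip(img_pts, world_pts):
              A.append([x, y, 1, 0, 0, 0, -X * x, -X * y, -X])
              A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y, -Y])
          _, _, Vt = np.linalg.svd(np.asarray(A, float))
          return Vt[-1].reshape(3, 3)

      def to_world(H, pts):
          """Point reconstruction: map pixel coordinates to the target plane."""
          p = np.column_stack([pts, np.ones(len(pts))]) @ H.T
          return p[:, :2] / p[:, 2:3]

      # Calibration from four target corners with known spacing (100 mm square).
      img = [(120.0, 80.0), (520.0, 90.0), (515.0, 470.0), (125.0, 460.0)]
      world = [(0.0, 0.0), (100.0, 0.0), (100.0, 100.0), (0.0, 100.0)]
      H = fit_homography(img, world)

      # Tracked target centroid per video frame -> displacement in mm.
      track_px = np.array([[320.0, 275.0], [321.4, 275.1], [323.1, 274.8]])
      track_mm = to_world(H, track_px)
      print(track_mm - track_mm[0])  # dynamic displacement vs. first frame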

  19. Comparison of kinetic and extended magnetohydrodynamics computational models for the linear ion temperature gradient instability in slab geometry

    SciTech Connect

    Schnack, D. D.; Cheng, J.; Parker, S. E.; Barnes, D. C.

    2013-06-15

    We perform linear stability studies of the ion temperature gradient (ITG) instability in unsheared slab geometry using kinetic and extended magnetohydrodynamics (MHD) models, in the regime $k_\parallel/k_\perp \ll 1$. The ITG is a parallel (to B) sound wave that may be destabilized by finite ion Larmor radius (FLR) effects in the presence of a gradient in the equilibrium ion temperature. The ITG is stable in both ideal and resistive MHD; for a given temperature scale length $L_{Ti0}$, instability requires that either $k_\perp \rho_i$ or $\rho_i/L_{Ti0}$ be sufficiently large. Kinetic models capture FLR effects to all orders in either parameter. In the extended MHD model, these effects are captured only to lowest order by means of the Braginskii ion gyro-viscous stress tensor and the ion diamagnetic heat flux. We present the linear electrostatic dispersion relations for the ITG for both kinetic Vlasov and extended MHD (two-fluid) models in the local approximation. In the low frequency fluid regime, these reduce to the same cubic equation for the complex eigenvalue $\omega = \omega_r + i\gamma$. An explicit solution is derived for the growth rate and real frequency in this regime. These are found to depend on a single non-dimensional parameter. We also compute the eigenvalues and the eigenfunctions with the extended MHD code NIMROD, and a hybrid kinetic $\delta f$ code that assumes six-dimensional Vlasov ions and isothermal fluid electrons, as functions of $k_\perp \rho_i$ and $\rho_i/L_{Ti0}$ using a spatially dependent equilibrium. These solutions are compared with each other, and with the predictions of the local kinetic and fluid dispersion relations. Kinetic and fluid calculations agree well at and near the marginal stability point, but diverge as $k_\perp \rho_i$ or $\rho_i/L_{Ti0}$ increases. There is good qualitative agreement between the models for the shape of the unstable global eigenfunction for $L_{Ti0}/\rho_i = 30$ and 20. The results quantify how far
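
    Because the fluid-regime dispersion relation reduces to a cubic in the complex eigenvalue, its numerical solution is a one-liner with numpy.roots. The cubic coefficients below are a hypothetical normalized stand-in (the paper derives the actual cubic from the two-fluid equations); the sketch only shows the root-selection logic.

      import numpy as np

      def itg_eigenvalue(alpha):
          """Solve a model cubic dispersion relation for complex omega.

          Illustrative normalized form omega**3 + alpha*omega - alpha = 0;
          the single parameter alpha mimics the paper's non-dimensional
          parameter, but the true coefficients come from the two-fluid model.
          """
          roots = np.roots([1.0, 0.0, alpha, -alpha])
          return roots[np.argmax(roots.imag)]  # root with the largest growth rate

      for a in (0.5, 1.0, 2.0, 4.0):
          w = itg_eigenvalue(a)
          print(f"alpha={a:4.1f}  omega_r={w.real:+.3f}  gamma={w.imag:+.3f}")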

  20. Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach

    NASA Technical Reports Server (NTRS)

    Warner, James E.; Hochhalter, Jacob D.

    2016-01-01

    This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.
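
    The surrogate-plus-MCMC pattern is easy to demonstrate at toy scale. In the sketch below, a one-parameter "damage location" is inferred from four noisy strain readings; a lookup-table surrogate stands in for the sparse-grid interpolant, and plain random-walk Metropolis stands in for DRAM.

      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical forward model: strain at four sensors as a function of
      # damage location theta in [0, 1] (stand-in for the FE model).
      sensors = np.array([0.2, 0.4, 0.6, 0.8])
      forward = lambda theta: np.exp(-20.0 * (sensors - theta) ** 2)

      # Cheap surrogate: tabulate the forward model once, then interpolate
      # (in the spirit of the paper's sparse-grid surrogate).
      grid = np.linspace(0.0, 1.0, 201)
      table = np.array([forward(g) for g in grid])

      def surrogate(theta):
          return np.array([np.interp(theta, grid, table[:, j])
                           for j in range(sensors.size)])

      sigma = 0.02
      data = forward(0.55) + sigma * rng.standard_normal(sensors.size)

      def log_post(theta):
          if not 0.0 <= theta <= 1.0:       # uniform prior on [0, 1]
              return -np.inf
          r = data - surrogate(theta)
          return -0.5 * np.sum(r ** 2) / sigma ** 2

      # Random-walk Metropolis; every step evaluates only the surrogate.
      theta, lp, samples = 0.5, log_post(0.5), []
      for _ in range(5000):
          prop = theta + 0.05 * rng.standard_normal()
          lp_prop = log_post(prop)
          if np.log(rng.random()) < lp_prop - lp:
              theta, lp = prop, lp_prop
          samples.append(theta)
      post = np.array(samples[1000:])       # discard burn-in
      print(f"damage location: {post.mean():.3f} +/- {post.std():.3f}")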

  1. A computational language approach to modeling prose recall in schizophrenia.

    PubMed

    Rosenstein, Mark; Diaz-Asper, Catherine; Foltz, Peter W; Elvevåg, Brita

    2014-06-01

    Many cortical disorders are associated with memory problems. In schizophrenia, verbal memory deficits are a hallmark feature. However, the exact nature of this deficit remains elusive. Modeling aspects of language features used in memory recall have the potential to provide means for measuring these verbal processes. We employ computational language approaches to assess time-varying semantic and sequential properties of prose recall at various retrieval intervals (immediate, 30 min and 24 h later) in patients with schizophrenia, unaffected siblings and healthy unrelated control participants. First, we model the recall data to quantify the degradation of performance with increasing retrieval interval and the effect of diagnosis (i.e., group membership) on performance. Next we model the human scoring of recall performance using an n-gram language sequence technique, and then with a semantic feature based on Latent Semantic Analysis. These models show that automated analyses of the recalls can produce scores that accurately mimic human scoring. The final analysis addresses the validity of this approach by ascertaining the ability to predict group membership from models built on the two classes of language features. Taken individually, the semantic feature is most predictive, while a model combining the features improves accuracy of group membership prediction slightly above the semantic feature alone as well as over the human rating approach. We discuss the implications for cognitive neuroscience of such a computational approach in exploring the mechanisms of prose recall. PMID:24709122
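
    A toy version of the semantic-feature scoring, using scikit-learn's TF-IDF and truncated SVD as an LSA stand-in. The passage and recalls are invented, and a real system would train the semantic space on a large corpus rather than on four short documents.

      from sklearn.decomposition import TruncatedSVD
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      # Invented prose passage and recalls at increasing retrieval intervals.
      source = ("the young woman traveled to the city to find work "
                "and support her family")
      recalls = {
          "immediate": "a young woman went to the city to find work for her family",
          "30 min": "a woman traveled to a city looking for a job",
          "24 h": "someone went somewhere to find something",
      }

      # LSA-style semantic feature: TF-IDF vectors reduced by truncated SVD,
      # then cosine similarity of each recall against the source passage.
      docs = [source] + list(recalls.values())
      tfidf = TfidfVectorizer().fit_transform(docs)
      lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

      for (label, _), vec in zip(recalls.items(), lsa[1:]):
          score = cosine_similarity(lsa[:1], [vec])[0, 0]
          print(f"{label:>9}: semantic similarity {score:.2f}")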

  2. A computational tool based on voxel geometry for dose reconstruction of a radiological accident due to external exposure.

    PubMed

    Lemosquet, A; Clairand, I; de Carlan, L; Franck, D; Aubineau-Lanièce, I; Bottollier-Depois, J-F

    2004-01-01

    In the case of overexposure to ionising radiation, estimation of the absorbed dose in the organism is an important indicator for evaluating the biological consequences of this exposure. The physical dosimetry approach is based either on real reconstruction of the accident, using physical phantoms, or on calculation techniques. Tools using Monte Carlo simulations associated with geometric models are very powerful since they offer the possibility to simulate faithfully the victim and the environment for dose calculations in various accidental situations. This work presents a new computational tool, called SESAME, dedicated to dose reconstruction of radiological accidents, based on anthropomorphic voxel phantoms built from real medical images of the victim in association with the MCNP Monte Carlo code. The tool was, as a first step, validated for neutrons by experimental means using a physical tissue-equivalent phantom. PMID:15353689

  3. Dynamics and friction drag behavior of viscoelastic flows in complex geometries: A multiscale simulation approach

    NASA Astrophysics Data System (ADS)

    Koppol, Anantha Padmanabha Rao

    Flows of viscoelastic polymeric fluids are of great fundamental and practical interest as polymeric materials for commodity and value-added products are processed typically in a fluid state. The nonlinear coupling between fluid motion and microstructure, which results in highly non-Newtonian rheology, memory/relaxation and normal stress development or tension along streamlines, greatly complicates the analysis, design and control of such flows. This has posed tremendous challenges to researchers engaged in developing first principles models and simulations that can accurately and robustly predict the dynamical behavior of polymeric flows. Despite this, the past two decades have witnessed several significant advances towards accomplishing this goal. Yet a problem of fundamental and great pragmatic interest has defied solution through years of ardent research by several groups, namely the relationship between friction drag and flow rate in inertialess flows of highly elastic polymer solutions in complex kinematics flows. A first principles-based solution of this long-standing problem in non-Newtonian fluid mechanics is the goal of this research. To achieve our objective, it is essential to develop the capability to perform large-scale multiscale simulations, which integrate continuum-level finite element solvers for the conservation of mass and momentum with fast integrators of stochastic differential equations that describe the evolution of polymer configuration. Hence, in this research we have focused our attention on the development of a parallel, multiscale simulation algorithm that is capable of robustly and efficiently simulating complex kinematics flows of dilute polymeric solutions using the first principles based bead-spring chain description of the polymer molecules. The fidelity and computational efficiency of the algorithm has been demonstrated via three benchmark flow problems, namely, the plane Couette flow, the Poiseuille flow and the 4:1:4 axisymmetric

  4. Ab initio and density functional computations of the vibrational spectrum, molecular geometry and some molecular properties of the antidepressant drug sertraline (Zoloft) hydrochloride

    NASA Astrophysics Data System (ADS)

    Sagdinc, Seda; Kandemirli, Fatma; Bayari, Sevgi Haman

    2007-02-01

    Sertraline hydrochloride is a highly potent and selective inhibitor of serotonin (5-HT) reuptake. It is a basic compound with pharmaceutical application in antidepressant treatment (brand name: Zoloft). Ab initio and density functional computations of the vibrational (IR) spectrum, the molecular geometry, the atomic charges and polarizabilities were carried out. The infrared spectrum of sertraline was recorded in the solid state. The observed IR wave numbers were analysed in light of the computed vibrational spectrum. On the basis of the comparison between calculated and experimental results and the comparison with related molecules, assignments of the fundamental vibrational modes are examined. The X-ray geometry and experimental frequencies are compared with the results of our theoretical calculations.

  5. Computational approaches to detect allosteric pathways in transmembrane molecular machines.

    PubMed

    Stolzenberg, Sebastian; Michino, Mayako; LeVine, Michael V; Weinstein, Harel; Shi, Lei

    2016-07-01

    Many of the functions of transmembrane proteins involved in signal processing and transduction across the cell membrane are determined by allosteric couplings that propagate the functional effects well beyond the original site of activation. Data gathered from breakthroughs in biochemistry, crystallography, and single molecule fluorescence have established a rich basis of information for the study of molecular mechanisms in the allosteric couplings of such transmembrane proteins. The mechanistic details of these couplings, many of which have therapeutic implications, however, have only become accessible in synergy with molecular modeling and simulations. Here, we review some recent computational approaches that analyze allosteric coupling networks (ACNs) in transmembrane proteins, and in particular the recently developed Protein Interaction Analyzer (PIA) designed to study ACNs in the structural ensembles sampled by molecular dynamics simulations. The power of these computational approaches in interrogating the functional mechanisms of transmembrane proteins is illustrated with selected examples of recent experimental and computational studies pursued synergistically in the investigation of secondary active transporters and GPCRs. This article is part of a Special Issue entitled: Membrane Proteins edited by J.C. Gumbart and Sergei Noskov. PMID:26806157

  6. Computational approaches to parameter estimation and model selection in immunology

    NASA Astrophysics Data System (ADS)

    Baker, C. T. H.; Bocharov, G. A.; Ford, J. M.; Lumb, P. M.; Norton, S. J.; Paul, C. A. H.; Junt, T.; Krebs, P.; Ludewig, B.

    2005-12-01

    One of the significant challenges in biomathematics (and other areas of science) is to formulate meaningful mathematical models. Our problem is to decide on a parametrized model which is, in some sense, most likely to represent the information in a set of observed data. In this paper, we illustrate the computational implementation of an information-theoretic approach (associated with a maximum likelihood treatment) to modelling in immunology. The approach is illustrated by modelling LCMV infection using a family of models based on systems of ordinary differential and delay differential equations. The models (which use parameters that have a scientific interpretation) are chosen to fit data arising from experimental studies of virus-cytotoxic T lymphocyte kinetics; the parametrized models that result are arranged in a hierarchy by the computation of Akaike indices. The practical illustration is used to convey more general insight. Because the mathematical equations that comprise the models are solved numerically, the accuracy in the computation has a bearing on the outcome, and we address this and other practical details in our discussion.
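
    The model-hierarchy step (fit each parametrized model, then rank by Akaike index) looks like this in miniature. The data and the two candidate models are invented; the paper's models are ODE/DDE systems fitted by maximum likelihood rather than curve_fit.

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(2)

      # Synthetic "infection kinetics" observations (invented data).
      t = np.linspace(0, 10, 25)
      obs = 50 / (1 + np.exp(-(t - 5))) + rng.standard_normal(t.size)

      # Two candidate parametrized models of differing complexity.
      models = {
          "exponential": (lambda x, a, b: a * np.exp(b * x), (1.0, 0.1)),
          "logistic": (lambda x, K, r, x0: K / (1 + np.exp(-r * (x - x0))),
                       (40.0, 1.0, 4.0)),
      }

      def aic(n, k, rss):
          """Gaussian-likelihood Akaike index: n*ln(RSS/n) + 2k."""
          return n * np.log(rss / n) + 2 * k

      # Fit each model, then arrange the hierarchy by Akaike index
      # (lower is better).
      for name, (f, p0) in models.items():
          p, _ = curve_fit(f, t, obs, p0=p0, maxfev=10000)
          rss = np.sum((obs - f(t, *p)) ** 2)
          print(f"{name:>12}: AIC = {aic(t.size, len(p), rss):8.2f}")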

  7. Automated Approach to Very High-Order Aeroacoustic Computations. Revision

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Goodrich, John W.

    2001-01-01

    Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high-order in space and time methods on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high-order methods (order > 15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid-aligned boundaries and to 2nd order for irregular boundaries.

  8. Computing electronic structures: A new multiconfiguration approach for excited states

    SciTech Connect

    Cances, Eric . E-mail: cances@cermics.enpc.fr; Galicher, Herve . E-mail: galicher@cermics.enpc.fr; Lewin, Mathieu . E-mail: lewin@cermic.enpc.fr

    2006-02-10

    We present a new method for the computation of electronic excited states of molecular systems. This method is based upon a recent theoretical definition of multiconfiguration excited states [due to one of us, see M. Lewin, Solutions of the multiconfiguration equations in quantum chemistry, Arch. Rat. Mech. Anal. 171 (2004) 83-114]. Our algorithm, dedicated to the computation of the first excited state, always converges to a stationary state of the multiconfiguration model, which can be interpreted as an approximate excited state of the molecule. The definition of this approximate excited state is variational. An interesting feature is that it satisfies a non-linear Hylleraas-Undheim-MacDonald type principle: the energy of the approximate excited state is an upper bound to the true excited state energy of the N-body Hamiltonian. To compute the first excited state, one has to deform paths on a manifold, as is usually done in the search for transition states between reactants and products on potential energy surfaces. We propose here a general method for the deformation of paths which could also be useful in other settings. We also compare our method to other approaches used in Quantum Chemistry and give some explanation of the unsatisfactory behaviours which are sometimes observed when using the latter. Numerical results for the special case of two-electron systems are provided: we compute the first singlet excited state potential energy surface of the H₂ molecule.

  9. A GPU-computing Approach to Solar Stokes Profile Inversion

    NASA Astrophysics Data System (ADS)

    Harker, Brian J.; Mighell, Kenneth J.

    2012-09-01

    We present a new computational approach to the inversion of solar photospheric Stokes polarization profiles, under the Milne-Eddington model, for vector magnetography. Our code, named GENESIS, employs multi-threaded parallel-processing techniques to harness the computing power of graphics processing units (GPUs), along with algorithms designed to exploit the inherent parallelism of the Stokes inversion problem. Using a genetic algorithm (GA) engineered specifically for use with a GPU, we produce full-disk maps of the photospheric vector magnetic field from polarized spectral line observations recorded by the Synoptic Optical Long-term Investigations of the Sun (SOLIS) Vector Spectromagnetograph (VSM) instrument. We show the advantages of pairing a population-parallel GA with data-parallel GPU-computing techniques, and present an overview of the Stokes inversion problem, including a description of our adaptation to the GPU-computing paradigm. Full-disk vector magnetograms derived by this method are shown using SOLIS/VSM data observed on 2008 March 28 at 15:45 UT.
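
    A scalar sketch of the population-based search at the heart of this inversion. The "profile" model below is a toy antisymmetric line shape with three parameters, not the Milne-Eddington model, and the GA is serial NumPy rather than a GPU kernel; it only illustrates the select-crossover-mutate loop that the paper parallelizes across the population and the data.

      import numpy as np

      rng = np.random.default_rng(3)
      wav = np.linspace(-1.0, 1.0, 60)

      def profile(p):
          """Toy antisymmetric line profile; p = (amplitude, center, width)
          stands in for the Milne-Eddington parameters."""
          a, c, w = p
          x = (wav - c) / w
          return -a * x * np.exp(-x ** 2)

      truth = np.array([0.8, 0.1, 0.3])
      obs = profile(truth) + 0.02 * rng.standard_normal(wav.size)

      lo, hi = np.array([0.0, -0.5, 0.05]), np.array([2.0, 0.5, 1.0])
      pop = rng.uniform(lo, hi, (100, 3))   # population of candidate fits

      def misfit(pop):
          return np.array([np.sum((obs - profile(p)) ** 2) for p in pop])

      for _ in range(150):
          order = np.argsort(misfit(pop))
          elite = pop[order[:20]]           # selection: keep the best 20%
          pairs = elite[rng.integers(0, 20, (80, 2))]
          mix = rng.random((80, 1))
          children = mix * pairs[:, 0] + (1 - mix) * pairs[:, 1]  # crossover
          children += 0.02 * rng.standard_normal(children.shape)  # mutation
          pop = np.clip(np.vstack([elite, children]), lo, hi)

      best = pop[np.argmin(misfit(pop))]
      print("recovered:", np.round(best, 3), " truth:", truth)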

  10. Computational approaches in the design of synthetic receptors - A review.

    PubMed

    Cowen, Todd; Karim, Kal; Piletsky, Sergey

    2016-09-14

    The rational design of molecularly imprinted polymers (MIPs) has been a major contributor to their reputation as "plastic antibodies" - high-affinity, robust synthetic receptors which can be optimally designed and produced at a much lower cost than their biological equivalents. Computational design has become a routine procedure in the production of MIPs, and has led to major advances in functional monomer screening, selection of cross-linker and solvent, optimisation of the monomer(s)-template ratio and selectivity analysis. In this review the various computational methods are discussed with reference to all the relevant literature published since the end of 2013, with each article described by the target molecule, the computational approach applied (whether molecular mechanics/molecular dynamics, semi-empirical quantum mechanics, ab initio quantum mechanics (Hartree-Fock, Møller-Plesset, etc.) or DFT) and the purpose for which they were used. Detailed analysis is given to novel techniques including the analysis of polymer binding sites, the use of novel screening programs and simulations of the MIP polymerisation reaction. Further advances in molecular modelling and computational design of synthetic receptors in particular will have a serious impact on the future of nanotechnology and biotechnology, permitting the further translation of MIPs into the realms of analytics and medical technology. PMID:27566340

  11. A GPU-COMPUTING APPROACH TO SOLAR STOKES PROFILE INVERSION

    SciTech Connect

    Harker, Brian J.; Mighell, Kenneth J. E-mail: mighell@noao.edu

    2012-09-20

    We present a new computational approach to the inversion of solar photospheric Stokes polarization profiles, under the Milne-Eddington model, for vector magnetography. Our code, named GENESIS, employs multi-threaded parallel-processing techniques to harness the computing power of graphics processing units (GPUs), along with algorithms designed to exploit the inherent parallelism of the Stokes inversion problem. Using a genetic algorithm (GA) engineered specifically for use with a GPU, we produce full-disk maps of the photospheric vector magnetic field from polarized spectral line observations recorded by the Synoptic Optical Long-term Investigations of the Sun (SOLIS) Vector Spectromagnetograph (VSM) instrument. We show the advantages of pairing a population-parallel GA with data-parallel GPU-computing techniques, and present an overview of the Stokes inversion problem, including a description of our adaptation to the GPU-computing paradigm. Full-disk vector magnetograms derived by this method are shown using SOLIS/VSM data observed on 2008 March 28 at 15:45 UT.

  12. An alternative approach for computing seismic response with accidental eccentricity

    NASA Astrophysics Data System (ADS)

    Fan, Xuanhua; Yin, Jiacong; Sun, Shuli; Chen, Pu

    2014-09-01

    Accidental eccentricity is a non-standard assumption for the seismic design of tall buildings. Taking it into consideration requires reanalysis of seismic resistance, which involves either time-consuming computation of the natural vibration of eccentric structures or finding a static displacement solution by applying an approximated equivalent torsional moment for each eccentric case. This study proposes an alternative modal response spectrum analysis (MRSA) approach to calculate seismic responses with accidental eccentricity. The proposed approach, called Rayleigh-Ritz Projection-MRSA (RRP-MRSA), is developed based on MRSA and two strategies: (a) an RRP method to obtain a fast calculation of approximate modes of eccentric structures; and (b) an approach to assemble the mass matrices of eccentric structures. The efficiency of RRP-MRSA is tested via engineering examples and compared with the standard MRSA (ST-MRSA) and one approximate method, the equivalent torsional moment hybrid MRSA (ETM-MRSA). Numerical results show that RRP-MRSA not only achieves almost the same precision as ST-MRSA and much better precision than ETM-MRSA, but is also more economical. Thus, RRP-MRSA can take the place of current accidental eccentricity computations in seismic design.
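
    The RRP idea (project the eccentric eigenproblem onto a few modes of the nominal structure) can be sketched directly. The tridiagonal "building" and the 5% mass perturbation are invented stand-ins for the eccentric mass assembly described in the paper.

      import numpy as np
      from scipy.linalg import eigh

      rng = np.random.default_rng(4)
      n, m = 200, 10

      # Nominal shear-building-like model: tridiagonal stiffness, unit mass.
      K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
      M = np.eye(n)

      # Ritz basis: the first m exact modes of the nominal structure.
      _, V = eigh(K, M, subset_by_index=(0, m - 1))

      # Eccentric case: perturbed mass matrix (stand-in for accidental
      # eccentricity assembly).
      Me = M + np.diag(0.05 * rng.random(n))

      # Rayleigh-Ritz projection: an m x m eigenproblem instead of n x n.
      lam_rr, _ = eigh(V.T @ K @ V, V.T @ Me @ V)

      lam_ex = eigh(K, Me, eigvals_only=True, subset_by_index=(0, m - 1))
      err = np.abs(np.sqrt(lam_rr) - np.sqrt(lam_ex)) / np.sqrt(lam_ex)
      print("max relative frequency error:", err.max())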

  13. Computational modeling of an endovascular approach to deep brain stimulation

    NASA Astrophysics Data System (ADS)

    Teplitzky, Benjamin A.; Connolly, Allison T.; Bajwa, Jawad A.; Johnson, Matthew D.

    2014-04-01

    Objective. Deep brain stimulation (DBS) therapy currently relies on a transcranial neurosurgical technique to implant one or more electrode leads into the brain parenchyma. In this study, we used computational modeling to investigate the feasibility of using an endovascular approach to target DBS therapy. Approach. Image-based anatomical reconstructions of the human brain and vasculature were used to identify 17 established and hypothesized anatomical targets of DBS, of which five were found adjacent to a vein or artery with intraluminal diameter ≥1 mm. Two of these targets, the fornix and subgenual cingulate white matter (SgCwm) tracts, were further investigated using a computational modeling framework that combined segmented volumes of the vascularized brain, finite element models of the tissue voltage during DBS, and multi-compartment axon models to predict the direct electrophysiological effects of endovascular DBS. Main results. The models showed that: (1) a ring-electrode conforming to the vessel wall was more efficient at neural activation than a guidewire design, (2) increasing the length of a ring-electrode had minimal effect on neural activation thresholds, (3) large variability in neural activation occurred with suboptimal placement of a ring-electrode along the targeted vessel, and (4) activation thresholds for the fornix and SgCwm tracts were comparable for endovascular and stereotactic DBS, though endovascular DBS was able to produce significantly larger contralateral activation for a unilateral implantation. Significance. Together, these results suggest that endovascular DBS can serve as a complementary approach to stereotactic DBS in select cases.

  14. Identifying Pathogenicity Islands in Bacterial Pathogenomics Using Computational Approaches

    PubMed Central

    Che, Dongsheng; Hasan, Mohammad Shabbir; Chen, Bernard

    2014-01-01

    High-throughput sequencing technologies have made it possible to study bacteria through analyzing their genome sequences. For instance, comparative genome sequence analyses can reveal phenomena such as gene loss, gene gain, or gene exchange in a genome. By analyzing pathogenic bacterial genomes, we can discover that pathogenic genomic regions in many pathogenic bacteria are horizontally transferred from other bacteria; these regions are also known as pathogenicity islands (PAIs). PAIs have some detectable properties, such as having different genomic signatures than the rest of the host genome, and containing mobility genes so that they can be integrated into the host genome. In this review, we discuss various pathogenicity island-associated features and current computational approaches for the identification of PAIs. Existing pathogenicity island databases and related computational resources are also discussed, so that researchers may find them useful for studies of bacterial evolution and pathogenicity mechanisms. PMID:25437607

  15. Slide Star: An Approach to Videodisc/Computer Aided Instruction

    PubMed Central

    McEnery, Kevin W.

    1984-01-01

    One of medical education's primary goals is for the student to be proficient in the gross and microscopic identification of disease. The videodisc, with its storage capacity of up to 54,000 photomicrographs is ideally suited to assist in this educational process. “Slide Star” is a method of interactive instruction which is designed for use in any subject where it is essential to identify visual material. The instructional approach utilizes a computer controlled videodisc to display photomicrographs. In the demonstration program, these are slides of normal blood cells. The program is unique in that the instruction is created by the student's commands manipulating the photomicrograph data base. A prime feature is the use of computer generated multiple choice questions to reinforce the learning process.

  16. [Computer work and De Quervain's tenosynovitis: an evidence based approach].

    PubMed

    Gigante, M R; Martinotti, I; Cirla, P E

    2012-01-01

    The debate around the role of personal computer work as a cause of De Quervain's tenosynovitis has developed only partially, without considering the available multidisciplinary data. A systematic review of the literature, using an evidence-based approach, was performed. Among the disorders associated with the use of video display units (VDUs), we must distinguish those of the upper limbs and, among them, those related to overload. Experimental studies on the occurrence of De Quervain's tenosynovitis are quite limited, and clinically it is quite difficult to prove an occupational etiology, considering the interference of other activities of daily living and of biological susceptibility (i.e. anatomical variability, sex, age, exercise). At present there is no evidence of any connection between De Quervain syndrome and time spent using a personal computer or keyboard; limited evidence of correlation is found with time spent using a mouse. No data are available regarding exclusive or predominant use of laptops or mobile smartphones. PMID:23405595

  17. Analytical and computational approaches to define the Aspergillus niger secretome

    SciTech Connect

    Tsang, Adrian; Butler, Gregory D.; Powlowski, Justin; Panisko, Ellen A.; Baker, Scott E.

    2009-03-01

    We used computational and mass spectrometric approaches to characterize the Aspergillus niger secretome. The 11,200 gene models predicted in the genome of A. niger strain ATCC 1015 were the data source for the analysis. Depending on the computational methods used, 691 to 881 proteins were predicted to be secreted proteins. We cultured A. niger in six different media and analyzed the extracellular proteins produced using mass spectrometry. A total of 222 proteins were identified, with 39 proteins expressed under all six conditions and 74 proteins expressed under only one condition. The secreted proteins identified by mass spectrometry were used to guide the correction of about 20 gene models. Additional analysis focused on extracellular enzymes of interest for biomass processing. Of the 63 glycoside hydrolases predicted to be capable of hydrolyzing cellulose, hemicellulose or pectin, 94% of the exo-acting enzymes and only 18% of the endo-acting enzymes were experimentally detected.

  18. Dataflow computing approach in high-speed digital simulation

    NASA Technical Reports Server (NTRS)

    Ercegovac, M. D.; Karplus, W. J.

    1984-01-01

    New computational tools and methodologies for the digital simulation of continuous systems were explored, and programmability and cost-effective performance in multiprocessor organizations for real-time simulation were investigated. The approach is based on functional-style languages and dataflow computing principles, which allow for the natural representation of parallelism in algorithms and provide a suitable basis for the design of cost-effective, high-performance distributed systems. The objectives of this research are to: (1) perform a comparative evaluation of several existing dataflow languages and develop an experimental dataflow language suitable for real-time simulation using multiprocessor systems; (2) investigate the main issues that arise in the architecture and organization of dataflow multiprocessors for real-time simulation; and (3) develop and apply performance evaluation models in typical applications.

  19. Computational approaches for rational design of proteins with novel functionalities

    PubMed Central

    Tiwari, Manish Kumar; Singh, Ranjitha; Singh, Raushan Kumar; Kim, In-Won; Lee, Jung-Kul

    2012-01-01

    Proteins are the most multifaceted macromolecules in living systems and have various important functions, including structural, catalytic, sensory, and regulatory functions. Rational design of enzymes is a great challenge to our understanding of protein structure and physical chemistry and has numerous potential applications. Protein design algorithms have been applied to design or engineer proteins that fold, fold faster, catalyze, catalyze faster, signal, and adopt preferred conformational states. The field of de novo protein design, although only a few decades old, is beginning to produce exciting results. Developments in this field are already having a significant impact on biotechnology and chemical biology. The application of powerful computational methods for functional protein design has recently succeeded in engineering target activities. Here, we review recently reported de novo functional proteins that were developed using various protein design approaches, including rational design, computational optimization, and selection from combinatorial libraries, highlighting recent advances and successes. PMID:24688643

  20. A Computational Approach for Probabilistic Analysis of Water Impact Simulations

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Mason, Brian H.; Lyle, Karen H.

    2009-01-01

    NASA's development of new concepts for the Crew Exploration Vehicle Orion presents many challenges similar to those addressed in the sixties during the Apollo program. However, with improved modeling capabilities, new challenges arise. For example, the use of the commercial code LS-DYNA, although widely used and accepted in the technical community, often involves high-dimensional, time-consuming, and computationally intensive simulations. The challenge is to capture what is learned from a limited number of LS-DYNA simulations to develop models that allow users to conduct interpolation of solutions at a fraction of the computational time. This paper presents a description of the LS-DYNA model, a brief summary of the response surface techniques, the analysis of variance approach used in the sensitivity studies, equations used to estimate impact parameters, results showing conditions that might cause injuries, and concluding remarks.
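
    The surrogate-modeling idea, fitting a cheap response surface to a handful of expensive simulation runs and then interpolating, can be sketched in a few lines. The quadratic form and the two normalized input factors below are hypothetical stand-ins for LS-DYNA inputs and outputs.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(30, 2))    # e.g., impact velocity and pitch (normalized)
    y = 3 + 2*X[:, 0] - X[:, 1] + 1.5*X[:, 0]*X[:, 1] + rng.normal(0, 0.05, 30)

    # design matrix with constant, linear, interaction, and quadratic terms
    A = np.column_stack([np.ones(30), X[:, 0], X[:, 1],
                         X[:, 0]*X[:, 1], X[:, 0]**2, X[:, 1]**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    def surrogate(v, p):
        """Cheap response-surface estimate at an untried condition."""
        return coef @ np.array([1.0, v, p, v*p, v**2, p**2])

    print(surrogate(0.3, -0.2))
    ```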

  1. A pencil beam approach to proton computed tomography

    SciTech Connect

    Rescigno, Regina; Bopp, Cécile; Rousseau, Marc; Brasse, David

    2015-11-15

    Purpose: A new approach to proton computed tomography (pCT) is presented. In this approach, protons are not tracked one by one; a beam of particles is considered instead. The elements of the pCT reconstruction problem (residual energy and path) are redefined on the basis of this new approach. An analytical image reconstruction algorithm applicable to this scenario is also proposed. Methods: The pencil beam (PB) and its propagation in matter were modeled by making use of the generalization of the Fermi–Eyges theory to account for multiple Coulomb scattering (MCS). This model was integrated into the pCT reconstruction problem, allowing the definition of a mean beam path concept similar to the most likely path (MLP) used in the single-particle approach. A numerical validation of the model was performed. The algorithm of filtered backprojection along MLPs was adapted to the beam-by-beam approach. The acquisition of a perfect proton scan was simulated and the data were used to reconstruct images of the relative stopping power of the phantom with the single-proton and beam-by-beam approaches. The resulting images were compared in a qualitative way. Results: The parameters of the modeled PB (mean and spread) were compared to Monte Carlo results in order to validate the model. For a water target, good agreement was found for the mean value of the distributions. As far as the spread is concerned, depth-dependent discrepancies as large as 2%–3% were found. For a heterogeneous phantom, discrepancies in the distribution spread ranged from 6% to 8%. The image reconstructed with the beam-by-beam approach showed a higher level of noise than the one reconstructed with the classical approach. Conclusions: The PB approach to proton imaging may allow technical challenges imposed by the current proton-by-proton method to be overcome. In this framework, an analytical algorithm is proposed. Further work will involve a detailed study of the performance and limitations of the proposed approach.
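
    A minimal numerical sketch of the Fermi–Eyges ingredient is given below: the lateral variance of a pencil beam at depth z is the moment sigma_x^2(z) = ∫_0^z (z - u)^2 T(u) du of the scattering power T. A constant T is assumed purely for illustration; real scattering power varies with depth and energy, and the value used here is hypothetical.

    ```python
    import numpy as np

    z = np.linspace(0.0, 0.2, 201)            # depth in water, m
    dz = z[1] - z[0]
    T = np.full_like(z, 5.0e-3)               # scattering power, rad^2/m (toy value)

    def sigma_x(depth):
        """Fermi-Eyges lateral beam spread at a given depth (Riemann-sum integral)."""
        mask = z <= depth
        return np.sqrt(np.sum((depth - z[mask]) ** 2 * T[mask]) * dz)

    print("lateral spread at 20 cm:", sigma_x(0.2), "m")
    ```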

  2. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1991-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, more fault-tolerant, and self-managing. Six years of research on ISIS are reviewed, describing the model, the types of applications to which ISIS was applied, and some of the reasoning that underlies a recent effort to redesign and reimplement ISIS as a much smaller, lightweight system.

  3. The geometric approach to quantum correlations: computability versus reliability

    NASA Astrophysics Data System (ADS)

    Tufarelli, Tommaso; MacLean, Tom; Girolami, Davide; Vasile, Ruggero; Adesso, Gerardo

    2013-07-01

    We propose a modified metric based on the Hilbert-Schmidt norm and adopt it to define a rescaled version of the geometric measure of quantum discord. Such a measure is found not to suffer from pathological dependence on state purity. Although the employed metric is still non-contractive under quantum operations, we show that the resulting indicator of quantum correlations is in agreement with other bona fide discord measures in a number of physical examples. We present a critical assessment of the requirements of reliability versus computability when approaching the task of quantifying, or measuring, general quantum correlations in a bipartite state.

  4. Whole-genome CNV analysis: advances in computational approaches

    PubMed Central

    Pirooznia, Mehdi; Goes, Fernando S.; Zandi, Peter P.

    2015-01-01

    Accumulating evidence indicates that DNA copy number variation (CNV) is likely to make a significant contribution to human diversity and also play an important role in disease susceptibility. Recent advances in genome sequencing technologies have enabled the characterization of a variety of genomic features, including CNVs. This has led to the development of several bioinformatics approaches to detect CNVs from next-generation sequencing data. Here, we review recent advances in CNV detection from whole genome sequencing. We discuss the informatics approaches and current computational tools that have been developed as well as their strengths and limitations. This review will assist researchers and analysts in choosing the most suitable tools for CNV analysis as well as provide suggestions for new directions in future development. PMID:25918519
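
    A minimal sketch of the read-depth idea common to many of the reviewed tools: bin read counts along the genome, normalize to the median, and flag windows whose log2 copy ratio departs from zero. Real callers add GC correction and segmentation; the simulated counts and threshold below are toy values.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    counts = rng.poisson(100, size=1000).astype(float)   # per-window read counts
    counts[400:420] *= 1.5                               # simulated duplication

    ratio = counts / np.median(counts)                   # copy ratio vs genome median
    log2r = np.log2(ratio)
    calls = np.where(np.abs(log2r) > 0.4)[0]             # naive threshold, no segmentation
    print(calls[:10])
    ```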

  5. Stochastic Computational Approach for Complex Nonlinear Ordinary Differential Equations

    NASA Astrophysics Data System (ADS)

    Khan, Junaid Ali; Raja, Muhammad Asif Zahoor; Qureshi, Ijaz Mansoor

    2011-02-01

    We present an evolutionary computational approach for the solution of nonlinear ordinary differential equations (NLODEs). The mathematical modeling is performed by a feed-forward artificial neural network that defines an unsupervised error. The training of these networks is achieved by a hybrid intelligent algorithm, a combination of global search with a genetic algorithm and local search by a pattern search technique. The applicability of this approach ranges from single-order NLODEs to systems of coupled differential equations. We illustrate the method by solving a variety of model problems and present comparisons with solutions obtained by exact methods and classical numerical methods. Unlike other numerical techniques of comparable accuracy, the solution is provided on a continuous finite time interval. With the advent of neuroprocessors and digital signal processors, the method becomes particularly interesting due to the expected substantial gains in execution speed.
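
    The flavor of the method, a neural-network trial solution whose ODE residual is minimized by an evolutionary search, can be reproduced in miniature. The sketch below solves y' = -y, y(0) = 1 with a tiny tanh network and a simple (1+1) evolution strategy standing in for the paper's GA-plus-pattern-search hybrid; the architecture and mutation scale are arbitrary assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 20)

    def net(w, t):
        # one hidden layer of 5 tanh units; w packs all 16 parameters
        W1, b1, W2 = w[:5], w[5:10], w[10:15]
        return np.tanh(np.outer(t, W1) + b1) @ W2 + w[15]

    def trial(w, t):
        return 1.0 + t * net(w, t)            # satisfies y(0) = 1 by construction

    def fitness(w):
        h = 1e-4
        dy = (trial(w, t + h) - trial(w, t - h)) / (2 * h)
        return np.mean((dy + trial(w, t)) ** 2)   # residual of y' = -y

    best = rng.normal(0, 0.5, 16)
    best_fit = fitness(best)
    for _ in range(3000):                     # (1+1) evolution: keep the better mutant
        cand = best + rng.normal(0, 0.1, 16)
        cf = fitness(cand)
        if cf < best_fit:
            best, best_fit = cand, cf

    print(np.max(np.abs(trial(best, t) - np.exp(-t))))   # error vs exact solution
    ```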

  6. Identification of Protein–Excipient Interaction Hotspots Using Computational Approaches

    PubMed Central

    Barata, Teresa S.; Zhang, Cheng; Dalby, Paul A.; Brocchini, Steve; Zloh, Mire

    2016-01-01

    Protein formulation development relies on the selection of excipients that inhibit protein–protein interactions preventing aggregation. Empirical strategies involve screening many excipient and buffer combinations using forced degradation studies. Such methods do not readily provide information on intermolecular interactions responsible for the protective effects of excipients. This study describes a molecular docking approach to screen and rank interactions allowing for the identification of protein–excipient hotspots to aid in the selection of excipients to be experimentally screened. Previously published work with Drosophila Su(dx) was used to develop and validate the computational methodology, which was then used to determine the formulation hotspots for Fab A33. Commonly used excipients were examined and compared to the regions in Fab A33 prone to protein–protein interactions that could lead to aggregation. This approach could provide information on a molecular level about the protective interactions of excipients in protein formulations to aid the more rational development of future formulations. PMID:27258262

  7. A Computer Code for 2-D Transport Calculations in x-y Geometry Using the Interface Current Method.

    Energy Science and Technology Software Center (ESTSC)

    1990-12-01

    Version 00 RICANT performs 2-dimensional neutron transport calculations in x-y geometry using the interface current method. In the interface current method, the angular neutron currents crossing region surfaces are expanded in terms of the Legendre polynomials in the two half-spaces made by the region surfaces.

  8. Computational approaches to predict bacteriophage-host relationships.

    PubMed

    Edwards, Robert A; McNair, Katelyn; Faust, Karoline; Raes, Jeroen; Dutilh, Bas E

    2016-03-01

    Metagenomics has changed the face of virus discovery by enabling the accurate identification of viral genome sequences without requiring isolation of the viruses. As a result, metagenomic virus discovery leaves the first and most fundamental question about any novel virus unanswered: What host does the virus infect? The diversity of the global virosphere and the volumes of data obtained in metagenomic sequencing projects demand computational tools for virus-host prediction. We focus on bacteriophages (phages, viruses that infect bacteria), the most abundant and diverse group of viruses found in environmental metagenomes. By analyzing 820 phages with annotated hosts, we review and assess the predictive power of in silico phage-host signals. Sequence homology approaches are the most effective at identifying known phage-host pairs. Compositional and abundance-based methods contain significant signal for phage-host classification, providing opportunities for analyzing the unknowns in viral metagenomes. Together, these computational approaches further our knowledge of the interactions between phages and their hosts. Importantly, we find that all reviewed signals significantly link phages to their hosts, illustrating how current knowledge and insights about the interaction mechanisms and ecology of coevolving phages and bacteria can be exploited to predict phage-host relationships, with potential relevance for medical and industrial applications. PMID:26657537
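
    The compositional signal discussed above can be illustrated with a tetranucleotide-frequency comparison: a phage and its host tend to share k-mer usage. The sketch below is a toy scoring function, not one of the benchmarked tools; k = 4 and Pearson correlation are common but by no means unique choices.

    ```python
    from itertools import product
    from collections import Counter

    def tetra_freq(seq):
        """Tetranucleotide frequency vector over the 256 canonical 4-mers."""
        kmers = ["".join(p) for p in product("ACGT", repeat=4)]
        counts = Counter(seq[i:i + 4] for i in range(len(seq) - 3))
        total = max(sum(counts[k] for k in kmers), 1)
        return [counts[k] / total for k in kmers]

    def pearson(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a) ** 0.5
        vb = sum((y - mb) ** 2 for y in b) ** 0.5
        return cov / (va * vb)

    # usage sketch: rank candidate hosts by composition similarity to the phage
    # score = pearson(tetra_freq(phage_genome), tetra_freq(host_genome))
    ```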

  9. Crowd Computing as a Cooperation Problem: An Evolutionary Approach

    NASA Astrophysics Data System (ADS)

    Christoforou, Evgenia; Fernández Anta, Antonio; Georgiou, Chryssis; Mosteiro, Miguel A.; Sánchez, Angel

    2013-05-01

    Cooperation is one of the socio-economic issues that has received the most attention from the physics community. The problem has been mostly considered by studying games such as the Prisoner's Dilemma or the Public Goods Game. Here, we take a step forward by studying cooperation in the context of crowd computing. We introduce a model loosely based on principal-agent theory in which people (workers) contribute to the solution of a distributed problem by computing answers and reporting to the problem proposer (master). To go beyond classical approaches involving the concept of Nash equilibrium, we work in an evolutionary framework in which both the master and the workers update their behavior through reinforcement learning. Using a Markov chain approach, we show theoretically that under certain, not very restrictive, conditions the master can ensure the reliability of the answer resulting from the process. We then study the model by numerical simulations, finding that convergence (meaning that the system reaches a point at which it always produces reliable answers) may in general be much faster than the upper bounds given by the theoretical calculation. We also discuss the effects of the master's level of tolerance to defectors, about which the theory does not provide information. The discussion shows that the system works even with very large tolerances. We conclude with a discussion of our results and possible directions to carry this research further.

  10. Computational approaches for microalgal biofuel optimization: a review.

    PubMed

    Koussa, Joseph; Chaiboonchoe, Amphun; Salehi-Ashtiani, Kourosh

    2014-01-01

    The increased demand and consumption of fossil fuels have raised interest in finding renewable energy sources throughout the globe. Much focus has been placed on optimizing microorganisms, primarily microalgae, to efficiently produce compounds that can substitute for fossil fuels. However, the path to achieving economic feasibility is likely to require strain optimization through using available tools and technologies in the fields of systems and synthetic biology. Such approaches invoke a deep understanding of the metabolic networks of the organisms and their genomic and proteomic profiles. The advent of next-generation sequencing and other high-throughput methods has led to a major increase in the availability of biological data. Integration of such disparate data can help define the emergent metabolic system properties, which is of crucial importance in addressing biofuel production optimization. Herein, we review major computational tools and approaches developed and used in order to potentially identify target genes, pathways, and reactions of particular interest to biofuel production in algae. As the use of these tools and approaches has not been fully implemented in algal biofuel research, the aim of this review is to highlight the potential utility of these resources toward their future implementation in algal research. PMID:25309916
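
    Among the systems-biology tools typically covered by such reviews, flux balance analysis (FBA) is a representative workhorse: it maximizes a target flux subject to steady-state mass balance and flux bounds, which is a linear program. The toy two-metabolite network below is hypothetical and only meant to show the shape of the computation.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Stoichiometric matrix S: rows = metabolites, columns = reactions
    # (uptake, conversion, biomass). Steady state requires S v = 0.
    S = np.array([[1, -1,  0],
                  [0,  1, -1]])
    bounds = [(0, 10), (0, 10), (0, 10)]       # flux bounds per reaction

    # maximize v3 (biomass) by minimizing -v3
    res = linprog(c=[0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds)
    print(res.x)    # optimal flux distribution; v3 is capped by the uptake bound
    ```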

  11. Preconditioner and convergence study for the Quantum Computer Aided Design (QCAD) nonlinear Poisson problem posed on the Ottawa Flat 270 design geometry.

    SciTech Connect

    Kalashnikova, Irina

    2012-05-01

    A numerical study aimed at evaluating different preconditioners within the Trilinos Ifpack and ML packages for the Quantum Computer Aided Design (QCAD) non-linear Poisson problem, implemented within the Albany code base and posed on the Ottawa Flat 270 design geometry, is performed. This study led to some new development of Albany that allows the user to select an ML preconditioner with Zoltan repartitioning based on nodal coordinates, which is summarized. Convergence of the numerical solutions computed within the QCAD computational suite with successive mesh refinement is examined in two metrics: the mean value of the solution (an L1 norm) and the field integral of the solution (an L2 norm).

  12. A General Computational Approach for Repeat Protein Design

    PubMed Central

    Parmeggiani, Fabio; Huang, Po-Ssu; Vorobiev, Sergey; Xiao, Rong; Park, Keunwan; Caprari, Silvia; Su, Min; Seetharaman, Jayaraman; Mao, Lei; Janjua, Haleema; Montelione, Gaetano T.; Hunt, John; Baker, David

    2014-01-01

    Repeat proteins have considerable potential for use as modular binding reagents or biomaterials in biomedical and nanotechnology applications. Here we describe a general computational method for building idealized repeats that integrates available family sequences and structural information with Rosetta de novo protein design calculations. Idealized designs from six different repeat families were generated and experimentally characterized; 80% of the proteins were expressed and soluble and more than 40% were folded and monomeric with high thermal stability. Crystal structures determined for members of three families are within 1 Å root-mean-square deviation to the design models. The method provides a general approach for fast and reliable generation of stable modular repeat protein scaffolds. PMID:25451037

  13. A Pseudopotential Approach to Compute Thermodynamic Properties of Liquid Semiconductors

    NASA Astrophysics Data System (ADS)

    Prajapati, Anand; Thakor, Pankaj; Sonvane, Yogesh

    2015-03-01

    This paper presents a theoretical approach for calculating thermodynamic properties, viz. enthalpy (E), entropy (S), and Helmholtz free energy (F), of some liquid semiconductors (Si, Ga, Ge, In, Sn, Tl, Bi, As, Se, Te, and Sb). The Gibbs-Bogoliubov (GB) variational method is applied to compute the thermodynamic properties. Our well-established model potential is used to define the electron-ion interaction. A charged hard sphere (CHS) reference system is used to describe the structural contribution to the Helmholtz free energy in the liquid phase. The local field correction function proposed by Farid et al. is adopted to account for screening. The newly constructed model potential proves effective in reproducing the thermodynamic properties of these liquid semiconductors.
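
    For reference, a hedged reminder of the Gibbs-Bogoliubov bound that underlies the variational method (notation assumed; eta denotes a reference-system parameter, here the CHS packing fraction, that is minimized over):

    ```latex
    % Gibbs-Bogoliubov inequality: the reference system bounds the true free energy
    % from above; the bound is tightened by minimizing over the reference parameter.
    F \;\le\; F_{\mathrm{ref}}(\eta) + \big\langle H - H_{\mathrm{ref}} \big\rangle_{\mathrm{ref}},
    \qquad
    F \;\approx\; \min_{\eta}\Big[ F_{\mathrm{ref}}(\eta) + \big\langle H - H_{\mathrm{ref}} \big\rangle_{\mathrm{ref}} \Big]
    ```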

  14. Systems approaches to computational modeling of the oral microbiome

    PubMed Central

    Dimitrov, Dimiter V.

    2013-01-01

    Current microbiome research has generated tremendous amounts of data providing snapshots of molecular activity in a variety of organisms, environments, and cell types. However, turning this knowledge into a whole-system level of understanding of pathways and processes has proven to be a challenging task. In this review we highlight the applicability of bioinformatics and visualization techniques to large collections of data in order to better understand the information they contain about diet / oral microbiome / host mucosal transcriptome interactions. In particular, we focus on systems biology of Porphyromonas gingivalis in the context of high-throughput computational methods tightly integrated with translational systems medicine. These approaches have applications for both basic research, where we can direct specific laboratory experiments in model organisms and cell cultures, and human disease, where we can validate new mechanisms and biomarkers for prevention and treatment of chronic disorders. PMID:23847548

  15. A Computational Drug Repositioning Approach for Targeting Oncogenic Transcription Factors.

    PubMed

    Gayvert, Kaitlyn M; Dardenne, Etienne; Cheung, Cynthia; Boland, Mary Regina; Lorberbaum, Tal; Wanjala, Jackline; Chen, Yu; Rubin, Mark A; Tatonetti, Nicholas P; Rickman, David S; Elemento, Olivier

    2016-06-14

    Mutations in transcription factor (TF) genes are frequently observed in tumors, often leading to aberrant transcriptional activity. Unfortunately, TFs are often considered undruggable due to the absence of targetable enzymatic activity. To address this problem, we developed CRAFTT, a computational drug-repositioning approach for targeting TF activity. CRAFTT combines ChIP-seq with drug-induced expression profiling to identify small molecules that can specifically perturb TF activity. Application to ENCODE ChIP-seq datasets revealed known drug-TF interactions, and a global drug-protein network analysis supported these predictions. Application of CRAFTT to ERG, a pro-invasive, frequently overexpressed oncogenic TF, predicted that dexamethasone would inhibit ERG activity. Dexamethasone significantly decreased cell invasion and migration in an ERG-dependent manner. Furthermore, analysis of electronic medical record data indicates a protective role for dexamethasone against prostate cancer. Altogether, our method provides a broadly applicable strategy for identifying drugs that specifically modulate TF activity. PMID:27264179
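
    The core scoring idea, intersecting a TF's ChIP-seq target set with drug-induced expression changes, can be sketched as below. The data structures and numbers are hypothetical placeholders, not the CRAFTT implementation.

    ```python
    def perturbation_score(tf_targets, drug_fold_changes):
        """Mean expression fold change of the TF's ChIP-seq target genes under a drug;
        a strongly negative score flags a candidate inhibitor of the TF program."""
        hits = [fc for gene, fc in drug_fold_changes.items() if gene in tf_targets]
        return sum(hits) / len(hits) if hits else 0.0

    # toy inputs (hypothetical gene names and fold changes)
    tf_targets = {"GENE1", "GENE2", "GENE3"}
    drug_profiles = {
        "drug_A": {"GENE1": -1.2, "GENE2": -0.8, "GENE3": -0.5, "GENE4": 0.1},
        "drug_B": {"GENE1":  0.1, "GENE2":  0.0, "GENE3":  0.2},
    }
    ranked = sorted(drug_profiles,
                    key=lambda d: perturbation_score(tf_targets, drug_profiles[d]))
    print(ranked[0])    # most strongly opposing drug first
    ```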

  16. Computer Modeling of Violent Intent: A Content Analysis Approach

    SciTech Connect

    Sanfilippo, Antonio P.; Mcgrath, Liam R.; Bell, Eric B.

    2014-01-03

    We present a computational approach to modeling the intent of a communication source representing a group or an individual to engage in violent behavior. Our aim is to identify and rank aspects of radical rhetoric that are endogenously related to violent intent to predict the potential for violence as encoded in written or spoken language. We use correlations between contentious rhetoric and the propensity for violent behavior found in documents from radical terrorist and non-terrorist groups and individuals to train and evaluate models of violent intent. We then apply these models to unseen instances of linguistic behavior to detect signs of contention that have a positive correlation with violent intent factors. Of particular interest is the application of violent intent models to social media, such as Twitter, that have proved to serve as effective channels in furthering sociopolitical change.
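
    A generic version of this kind of text-based modeling is a bag-of-words classifier trained on labeled documents. The sketch below (scikit-learn, toy corpus and labels) only illustrates the pipeline shape; the actual work relies on richer, hand-crafted rhetorical features rather than raw n-grams.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # toy corpus; labels mark documents from groups with/without violent outcomes
    docs = ["call to peaceful protest", "explicit threats of attack",
            "community meeting notes", "incitement and threats"]
    labels = [0, 1, 0, 1]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(docs, labels)
    print(model.predict_proba(["notes with threats"])[0, 1])   # estimated risk score
    ```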

  17. A general computational approach for repeat protein design.

    PubMed

    Parmeggiani, Fabio; Huang, Po-Ssu; Vorobiev, Sergey; Xiao, Rong; Park, Keunwan; Caprari, Silvia; Su, Min; Seetharaman, Jayaraman; Mao, Lei; Janjua, Haleema; Montelione, Gaetano T; Hunt, John; Baker, David

    2015-01-30

    Repeat proteins have considerable potential for use as modular binding reagents or biomaterials in biomedical and nanotechnology applications. Here we describe a general computational method for building idealized repeats that integrates available family sequences and structural information with Rosetta de novo protein design calculations. Idealized designs from six different repeat families were generated and experimentally characterized; 80% of the proteins were expressed and soluble and more than 40% were folded and monomeric with high thermal stability. Crystal structures determined for members of three families are within 1 Å root-mean-square deviation to the design models. The method provides a general approach for fast and reliable generation of stable modular repeat protein scaffolds. PMID:25451037

  18. Computational Approach to Seasonal Changes of Living Leaves

    PubMed Central

    Wu, Dong-Yan

    2013-01-01

    This paper proposes a computational approach to seasonal changes of living leaves by combining geometric deformations and textural color changes. The geometric model of a leaf is generated by triangulating the scanned image of a leaf using an optimized mesh. The triangular mesh of the leaf is deformed by an improved mass-spring model, with the deformation controlled by setting different mass values for the vertices on the leaf model. In order to adaptively control the deformation of different regions in the leaf, the mass values of vertices are set in proportion to the pixel intensities of a corresponding user-specified grayscale mask map. The geometric deformations as well as the textural color changes of a leaf are used to simulate the seasonal changing process of leaves based on a Markov chain model with different environmental parameters, including temperature, humidity, and time. Experimental results show that the method successfully simulates the seasonal changes of leaves. PMID:23533545
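
    A minimal sketch of the mass-spring step described above, assuming a simple Hooke spring per mesh edge and per-vertex masses (which the paper sets from a grayscale mask so that heavier vertices deform less). All constants are toy values.

    ```python
    import numpy as np

    def step(pos, vel, edges, rest, mass, k=50.0, damp=0.98,
             g=(0.0, -9.8, 0.0), dt=1e-3):
        """One explicit integration step of a damped mass-spring mesh."""
        force = np.tile(np.array(g), (len(pos), 1)) * mass[:, None]   # gravity
        for (i, j), r in zip(edges, rest):
            d = pos[j] - pos[i]
            length = np.linalg.norm(d)
            f = k * (length - r) * d / length     # Hooke spring along the edge
            force[i] += f
            force[j] -= f
        vel = damp * (vel + dt * force / mass[:, None])
        return pos + dt * vel, vel

    # usage: two vertices joined by one spring; the heavier one barely moves
    pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
    vel = np.zeros_like(pos)
    pos, vel = step(pos, vel, edges=[(0, 1)], rest=[0.9],
                    mass=np.array([100.0, 1.0]))
    ```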

  19. Suggested Approaches to the Measurement of Computer Anxiety.

    ERIC Educational Resources Information Center

    Toris, Carol

    Psychologists can gain insight into human behavior by examining what people feel about, know about, and do with, computers. Two extreme reactions to computers are computer phobia, or anxiety, and computer addiction, or "hacking". A four-part questionnaire was developed to measure computer anxiety. The first part is a projective technique which…

  20. Computational Diagnostic: A Novel Approach to View Medical Data.

    SciTech Connect

    Mane, K. K.; Börner, K.

    2007-01-01

    A transition from traditional paper-based medical records to electronic health records is largely underway. The use of electronic records offers tremendous potential to personalize patient diagnosis and treatment. In this paper, we discuss a computational diagnostic tool that uses digital medical records to help doctors gain better insight into a patient's medical condition. The paper details different interactive features of the tool, which offer the potential to practice evidence-based medicine and advance patient diagnosis practices. The healthcare industry is a constantly evolving domain. Research from this domain is often translated into a better understanding of different medical conditions, and this new knowledge often contributes to improved diagnosis and treatment solutions for patients. But the healthcare industry has lagged in realizing the immediate benefits of this new knowledge, as it still adheres to the traditional paper-based approach to keeping track of medical records. Recently, however, there has been a drive toward a transition to the electronic health record (EHR). An EHR stores patient medical records in digital format and offers the potential to replace paper health records. Earlier EHR attempts replicated the paper layout on the screen, represented a patient's medical history in a graphical time-series format, or provided interactive visualization with 2D/3D images generated by an imaging device. But an EHR can be much more than just an 'electronic view' of the paper record or a collection of images from an imaging device. In this paper, we present an EHR called the 'Computational Diagnostic Tool', which provides a novel computational approach to patient medical data. The developed EHR system is knowledge-driven and acts as a clinical decision support tool. It provides two visual views of the medical data, and dynamic interaction with the data is supported to help doctors practice evidence-based decisions and make judicious choices about patient care.

  1. A fast and accurate computational approach to protein ionization

    PubMed Central

    Spassov, Velin Z.; Yan, Lisa

    2008-01-01

    We report a very fast and accurate physics-based method to calculate pH-dependent electrostatic effects in protein molecules and to predict the pK values of individual sites of titration. In addition, a CHARMm-based algorithm is included to construct and refine the spatial coordinates of all hydrogen atoms at a given pH. The present method combines electrostatic energy calculations based on the Generalized Born approximation with an iterative mobile clustering approach to calculate the equilibria of proton binding to multiple titration sites in protein molecules. The use of the GBIM (Generalized Born with Implicit Membrane) CHARMm module makes it possible to model not only water-soluble proteins but membrane proteins as well. The method includes a novel algorithm for preliminary refinement of hydrogen coordinates. Another difference from existing approaches is that, instead of monopeptides, a set of relaxed pentapeptide structures are used as model compounds. Tests on a set of 24 proteins demonstrate the high accuracy of the method. On average, the RMSD between predicted and experimental pK values is close to 0.5 pK units on this data set, and the accuracy is achieved at very low computational cost. The pH-dependent assignment of hydrogen atoms also shows very good agreement with protonation states and hydrogen-bond network observed in neutron-diffraction structures. The method is implemented as a computational protocol in Accelrys Discovery Studio and provides a fast and easy way to study the effect of pH on many important mechanisms such as enzyme catalysis, ligand binding, protein–protein interactions, and protein stability. PMID:18714088
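
    For a single, uncoupled site, the underlying protonation equilibrium reduces to the Henderson-Hasselbalch relation, sketched below. This toy ignores the site-site coupling that the paper's mobile-clustering method is designed to handle; the pKa values are textbook approximations.

    ```python
    def protonated_fraction(pKa, pH):
        """Probability that an independent titratable site is protonated."""
        return 1.0 / (1.0 + 10.0 ** (pH - pKa))

    # approximate model pKa values for three residue types (toy numbers)
    for site, pKa in {"Asp": 3.9, "His": 6.5, "Lys": 10.4}.items():
        print(site, round(protonated_fraction(pKa, pH=7.0), 3))
    ```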

  2. Ab initio methods for nuclear properties - a computational physics approach

    NASA Astrophysics Data System (ADS)

    Maris, Pieter

    2011-04-01

    A microscopic theory for the structure and reactions of light nuclei poses formidable challenges for high-performance computing. Several ab-initio methods have now emerged that provide nearly exact solutions for some nuclear properties. The ab-initio no-core full configuration (NCFC) approach is based on basis space expansion methods and uses Slater determinants of single-nucleon basis functions to express the nuclear wave function. In this approach, the quantum many-particle problem becomes a large sparse matrix eigenvalue problem. The eigenvalues of this matrix give us the binding energies, and the corresponding eigenvectors the nuclear wave functions. These wave functions can be employed to evaluate experimental quantities. In order to reach numerical convergence for fundamental problems of interest, the matrix dimension often exceeds 1 billion, and the number of nonzero matrix elements may saturate available storage on present-day leadership class facilities. I discuss different strategies for distributing and solving this large sparse matrix on current multicore computer architectures, including methods to deal with the memory bottleneck. Several of these strategies have been implemented in the code MFDn, which is a parallel Fortran code for nuclear structure calculations. I will show scaling behavior and compare the performance of the pure MPI version with the hybrid MPI/OpenMP code on Cray XT4 and XT5 platforms. For large core counts (typically 5,000 and above), the hybrid version is more efficient than pure MPI. With this code, we have been able to predict properties of the unstable nucleus 14F, which have since been confirmed by experiments. I will also give an overview of other recent results for nuclei in the A = 6 to 16 range with 2- and 3-body interactions. Supported in part by US DOE Grant DE-FC02-09ER41582.
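
    The computational kernel described, extracting the lowest eigenpairs of a huge sparse symmetric matrix with a Lanczos-type method, can be mimicked at toy scale with SciPy; the random symmetric matrix below merely stands in for a nuclear Hamiltonian.

    ```python
    import numpy as np
    from scipy.sparse import random as sprandom
    from scipy.sparse.linalg import eigsh

    A = sprandom(2000, 2000, density=1e-3, random_state=0)
    H = ((A + A.T) * 0.5).tocsr()            # symmetrize the toy "Hamiltonian"

    vals, vecs = eigsh(H, k=4, which="SA")   # four smallest algebraic eigenvalues
    print(vals)                              # stand-ins for binding energies
    ```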

  3. Dynamical Approach Study of Spurious Numerics in Nonlinear Computations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Mansour, Nagi (Technical Monitor)

    2002-01-01

    The last two decades have been an era when computation is ahead of analysis and when very large scale practical computations are increasingly used in poorly understood multiscale complex nonlinear physical problems and non-traditional fields. Ensuring a higher level of confidence in the predictability and reliability (PAR) of these numerical simulations could play a major role in furthering the design, understanding, affordability, and safety of our next generation air and space transportation systems, and systems for planetary and atmospheric sciences, and in understanding the evolution and origin of life. The need to guarantee PAR becomes acute when computations offer the ONLY way of solving these types of data-limited problems. Employing theory from nonlinear dynamical systems, some building blocks to ensure a higher level of confidence in PAR of numerical simulations have been revealed by the author and world-expert collaborators in relevant fields. Five building blocks with supporting numerical examples were discussed. The next step is to utilize knowledge gained by including nonlinear dynamics, bifurcation, and chaos theories as an integral part of the numerical process. The third step is to design integrated criteria for reliable and accurate algorithms that cater to the different multiscale nonlinear physics. This includes but is not limited to the construction of appropriate adaptive spatial and temporal discretizations that are suitable for the underlying governing equations. In addition, a multiresolution wavelets approach for adaptive numerical dissipation/filter controls for high speed turbulence, acoustics, and combustion simulations will be sought. These steps are cornerstones for guarding against spurious numerical solutions that are solutions of the discretized counterparts but are not solutions of the underlying governing equations.
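
    A classic, concrete instance of such spurious numerics (a standard textbook example, not taken from this abstract) is explicit Euler applied to the logistic equation u' = u(1 - u): for small time steps the scheme finds the true steady state u = 1, while for dt > 2 it settles into a period-2 oscillation that solves the discretized map but not the ODE.

    ```python
    def euler_orbit(dt, u0=0.5, n=2000, keep=8):
        """Iterate the explicit-Euler map for the logistic ODE and return its tail."""
        u = u0
        for _ in range(n):                    # discard the transient
            u = u + dt * u * (1.0 - u)
        tail = []
        for _ in range(keep):
            u = u + dt * u * (1.0 - u)
            tail.append(round(u, 4))
        return tail

    print(euler_orbit(0.5))   # settles on 1.0, the correct steady state
    print(euler_orbit(2.2))   # alternates between two spurious values
    ```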

  4. A machine-learning approach for computation of fractional flow reserve from coronary computed tomography.

    PubMed

    Itu, Lucian; Rapaka, Saikiran; Passerini, Tiziano; Georgescu, Bogdan; Schwemmer, Chris; Schoebinger, Max; Flohr, Thomas; Sharma, Puneet; Comaniciu, Dorin

    2016-07-01

    Fractional flow reserve (FFR) is a functional index quantifying the severity of coronary artery lesions and is clinically obtained using an invasive, catheter-based measurement. Recently, physics-based models have shown great promise in being able to noninvasively estimate FFR from patient-specific anatomical information, e.g., obtained from computed tomography scans of the heart and the coronary arteries. However, these models have high computational demand, limiting their clinical adoption. In this paper, we present a machine-learning-based model for predicting FFR as an alternative to physics-based approaches. The model is trained on a large database of synthetically generated coronary anatomies, where the target values are computed using the physics-based model. The trained model predicts FFR at each point along the centerline of the coronary tree, and its performance was assessed by comparing the predictions against physics-based computations and against invasively measured FFR for 87 patients and 125 lesions in total. Correlation between machine-learning and physics-based predictions was excellent (0.9994, P < 0.001), and no systematic bias was found in Bland-Altman analysis: mean difference was -0.00081 ± 0.0039. Invasive FFR ≤ 0.80 was found in 38 lesions out of 125 and was predicted by the machine-learning algorithm with a sensitivity of 81.6%, a specificity of 83.9%, and an accuracy of 83.2%. The correlation was 0.729 (P < 0.001). Compared with the physics-based computation, average execution time was reduced by more than 80 times, leading to near real-time assessment of FFR. Average execution time went down from 196.3 ± 78.5 s for the CFD model to ∼2.4 ± 0.44 s for the machine-learning model on a workstation with 3.4-GHz Intel i7 8-core processor. PMID:27079692
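
    The training strategy, a fast regressor fitted to targets produced by the expensive physics model, can be sketched as follows. The two geometric features and the closed-form stand-in for CFD-computed FFR are hypothetical; the real model uses a much richer set of anatomical features.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    stenosis = rng.uniform(0.0, 0.8, 5000)          # degree of lumen narrowing
    length = rng.uniform(5.0, 30.0, 5000)           # lesion length, mm (hypothetical)
    X = np.column_stack([stenosis, length])
    ffr = 1.0 - 0.6 * stenosis**2 - 0.004 * length  # stand-in for physics-based FFR

    model = GradientBoostingRegressor().fit(X, ffr) # cheap surrogate of the CFD model
    print(model.predict([[0.5, 12.0]]))             # near-instant FFR estimate
    ```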

  5. Combined use of computed tomography and the lattice-Boltzmann method to investigate the influence of pore geometry of porous media on the permeability tensor

    NASA Astrophysics Data System (ADS)

    Striblet, J. C.; Rush, L.; Floyd, M.; Porter, M. L.; Al-Raoush, R. I.

    2011-12-01

    The objective of this work was to investigate the impact of pore geometry of porous media on the permeability tensor. High-resolution, three-dimensional maps of natural sand systems, comprising a range of grain sizes and shapes, were obtained using synchrotron microtomography. The lattice-Boltzmann (LB) method was used to simulate saturated flow through these packs to characterize the impact of particle shape on the permeability tensor. LB computations of the permeability tensor and its dependency on the internal structure of porous media will be presented and discussed.
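
    Once a flow field has been simulated, a single permeability component follows from Darcy's law, k = mu * q * L / dP, with q the superficial (Darcy) velocity. The sketch below uses toy values in place of LB simulation output.

    ```python
    mu = 1.0e-3        # dynamic viscosity of water, Pa*s
    L = 1.0e-3         # sample length along the flow direction, m
    dP = 10.0          # applied pressure drop, Pa
    q = 2.5e-4         # superficial velocity from the simulation, m/s (toy value)

    k = mu * q * L / dP
    print(f"k = {k:.3e} m^2")   # one component of the permeability tensor
    ```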

  6. FASTER 3: A generalized-geometry Monte Carlo computer program for the transport of neutrons and gamma rays. Volume 1: Summary report

    NASA Technical Reports Server (NTRS)

    Jordan, T. M.

    1970-01-01

    The theory used in FASTER-III, a Monte Carlo computer program for the transport of neutrons and gamma rays in complex geometries, is outlined. The program includes the treatment of geometric regions bounded by quadratic and quadric surfaces with multiple radiation sources which have specified space, angle, and energy dependence. The program calculates, using importance sampling, the resulting number and energy fluxes at specified point, surface, and volume detectors. It can also calculate the minimum-weight shield configuration meeting a specified dose rate constraint. Results are presented for sample problems involving primary neutron, and primary and secondary photon, transport in a spherical reactor shield configuration.
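
    The importance-sampling idea at the heart of such shielding codes can be shown on the simplest possible transport problem: estimating the probability that a particle crosses a slab of optical thickness 10 (true answer exp(-10)). Sampling path lengths from a stretched distribution and carrying statistical weights recovers the answer where analog sampling essentially never scores. All numbers are toy values.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    sigma, t, n = 1.0, 10.0, 100_000         # cross section, slab thickness, histories

    # analog sampling: crossing is a ~4.5e-5 event, so the estimate is very noisy
    s = rng.exponential(1 / sigma, n)
    print("analog  :", np.mean(s > t))

    # stretched path sampling (sigma_b < sigma) with likelihood-ratio weights
    sigma_b = 0.1
    s = rng.exponential(1 / sigma_b, n)
    w = (sigma / sigma_b) * np.exp(-(sigma - sigma_b) * s)
    print("weighted:", np.mean((s > t) * w), "vs exact", np.exp(-sigma * t))
    ```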

  7. FASTER 3: A generalized-geometry Monte Carlo computer program for the transport of neutrons and gamma rays. Volume 2: Users manual

    NASA Technical Reports Server (NTRS)

    Jordan, T. M.

    1970-01-01

    A description of the FASTER-III program for Monte Carlo calculation of photon and neutron transport in complex geometries is presented. Major revisions include the capability of calculating minimum weight shield configurations for primary and secondary radiation and optimal importance sampling parameters. The program description includes a users manual describing the preparation of input data cards, the printout from a sample problem including the data card images, definitions of Fortran variables, the program logic, and the control cards required to run on the IBM 7094, IBM 360, UNIVAC 1108 and CDC 6600 computers.

  8. A Computational Approach to Finding Novel Targets for Existing Drugs

    PubMed Central

    Li, Yvonne Y.; An, Jianghong; Jones, Steven J. M.

    2011-01-01

    Repositioning existing drugs for new therapeutic uses is an efficient approach to drug discovery. We have developed a computational drug repositioning pipeline to perform large-scale molecular docking of small molecule drugs against protein drug targets, in order to map the drug-target interaction space and find novel interactions. Our method emphasizes removing false positive interaction predictions using criteria from known interaction docking, consensus scoring, and specificity. In all, our database contains 252 human protein drug targets that we classify as reliable-for-docking as well as 4621 approved and experimental small molecule drugs from DrugBank. These were cross-docked, then filtered through stringent scoring criteria to select top drug-target interactions. In particular, we used MAPK14 and the kinase inhibitor BIM-8 as examples where our stringent thresholds enriched the predicted drug-target interactions with known interactions up to 20 times compared to standard score thresholds. We validated nilotinib as a potent MAPK14 inhibitor in vitro (IC50 40 nM), suggesting a potential use for this drug in treating inflammatory diseases. The published literature indicated experimental evidence for 31 of the top predicted interactions, highlighting the promising nature of our approach. Novel interactions discovered may lead to the drug being repositioned as a therapeutic treatment for its off-target's associated disease, added insight into the drug's mechanism of action, and added insight into the drug's side effects. PMID:21909252

  9. Computational approaches to protein inference in shotgun proteomics.

    PubMed

    Li, Yong Fuga; Radivojac, Predrag

    2012-01-01

    Shotgun proteomics has recently emerged as a powerful approach to characterizing proteomes in biological samples. Its overall objective is to identify the form and quantity of each protein in a high-throughput manner by coupling liquid chromatography with tandem mass spectrometry. As a consequence of its high throughput nature, shotgun proteomics faces challenges with respect to the analysis and interpretation of experimental data. Among such challenges, the identification of proteins present in a sample has been recognized as an important computational task. This task generally consists of (1) assigning experimental tandem mass spectra to peptides derived from a protein database, and (2) mapping assigned peptides to proteins and quantifying the confidence of identified proteins. Protein identification is fundamentally a statistical inference problem with a number of methods proposed to address its challenges. In this review we categorize current approaches into rule-based, combinatorial optimization and probabilistic inference techniques, and present them using integer programming and Bayesian inference frameworks. We also discuss the main challenges of protein identification and propose potential solutions with the goal of spurring innovative research in this area. PMID:23176300
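
    A minimal stand-in for the combinatorial-optimization view of protein inference is greedy set cover: explain all observed peptides with as few proteins as possible. The sketch below is a toy; real formulations solve an integer program or a Bayesian model and treat shared peptides with more care.

    ```python
    def greedy_inference(protein_to_peptides, observed):
        """Greedy minimal set cover of observed peptides by proteins."""
        remaining, chosen = set(observed), []
        while remaining:
            best = max(protein_to_peptides,
                       key=lambda p: len(protein_to_peptides[p] & remaining))
            if not protein_to_peptides[best] & remaining:
                break                      # some peptides cannot be explained
            chosen.append(best)
            remaining -= protein_to_peptides[best]
        return chosen

    # toy database: proteins mapped to their identifiable peptides
    db = {"P1": {"a", "b", "c"}, "P2": {"b"}, "P3": {"c", "d"}}
    print(greedy_inference(db, observed={"a", "b", "c", "d"}))   # ['P1', 'P3']
    ```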

  10. Computational approaches to protein inference in shotgun proteomics

    PubMed Central

    2012-01-01

    Shotgun proteomics has recently emerged as a powerful approach to characterizing proteomes in biological samples. Its overall objective is to identify the form and quantity of each protein in a high-throughput manner by coupling liquid chromatography with tandem mass spectrometry. As a consequence of its high throughput nature, shotgun proteomics faces challenges with respect to the analysis and interpretation of experimental data. Among such challenges, the identification of proteins present in a sample has been recognized as an important computational task. This task generally consists of (1) assigning experimental tandem mass spectra to peptides derived from a protein database, and (2) mapping assigned peptides to proteins and quantifying the confidence of identified proteins. Protein identification is fundamentally a statistical inference problem with a number of methods proposed to address its challenges. In this review we categorize current approaches into rule-based, combinatorial optimization and probabilistic inference techniques, and present them using integer programming and Bayesian inference frameworks. We also discuss the main challenges of protein identification and propose potential solutions with the goal of spurring innovative research in this area. PMID:23176300

  11. Computer vision approach for ultrasound Doppler angle estimation.

    PubMed

    Saad, Ashraf A; Loupas, Thanasis; Shapiro, Linda G

    2009-12-01

    Doppler ultrasound is an important noninvasive diagnostic tool for cardiovascular diseases. Modern ultrasound imaging systems utilize spectral Doppler techniques for quantitative evaluation of blood flow velocities, and these measurements play a crucial role in the diagnosis and grading of arterial stenosis. One drawback of Doppler-based blood flow quantification is that the operator has to manually specify the angle between the Doppler ultrasound beam and the vessel orientation, called the Doppler angle, in order to calculate flow velocities. In this paper, we describe a computer vision approach to automating Doppler angle estimation. Our approach starts with the segmentation of blood vessels in ultrasound color Doppler images. The segmentation step is followed by an estimation technique for the Doppler angle based on a skeleton representation of the segmented vessel. We conducted preliminary clinical experiments to evaluate the agreement between the expert operator's angle specification and the new automated method. Statistical regression analysis showed strong agreement between the manual and automated methods. We hypothesize that automation of the Doppler angle will enhance the workflow of the ultrasound Doppler exam and achieve more standardized clinical outcomes. PMID:18488268
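
    One simple way to realize the skeleton-based orientation step is principal component analysis of the segmented vessel pixels: the leading axis approximates the centerline direction, from which the angle to the beam follows. The synthetic pixel cloud and vertical beam direction below are assumptions for illustration only.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # synthetic segmented-vessel pixel coordinates along a slanted line with noise
    x = np.linspace(0, 50, 200)
    pts = np.column_stack([x, 0.45 * x + rng.normal(0, 0.5, x.size)])

    pts -= pts.mean(axis=0)                   # center before PCA
    _, _, Vt = np.linalg.svd(pts, full_matrices=False)
    vessel_dir = Vt[0]                        # principal axis = centerline direction

    beam_dir = np.array([0.0, 1.0])           # assumed vertical ultrasound beam
    angle = np.degrees(np.arccos(abs(vessel_dir @ beam_dir)))
    print(angle)                              # Doppler angle estimate, degrees
    ```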

  12. Computational approach in estimating the need of ditch network maintenance

    NASA Astrophysics Data System (ADS)

    Lauren, Ari; Hökkä, Hannu; Launiainen, Samuli; Palviainen, Marjo; Repo, Tapani; Finér, Leena; Piirainen, Sirpa

    2015-04-01

    Ditch network maintenance (DNM), implemented annually on a 70,000 ha area in Finland, is the most controversial of all forest management practices. Nationwide, it is estimated to increase forest growth by 1-3 million m3 per year, but simultaneously to cause the export of 65,000 tons of suspended solids and 71 tons of phosphorus (P) to water courses. A systematic approach that allows simultaneous quantification of the positive and negative effects of DNM is therefore required. Excess water in the rooting zone slows gas exchange and decreases biological activity, interfering with forest growth in boreal forested peatlands. DNM is needed when: (1) excess water in the rooting zone restricts forest growth before the DNM, (2) after the DNM the growth restriction ceases or decreases, and (3) the benefits of DNM are greater than the adverse effects caused. Aeration in the rooting zone can be used as a drainage criterion. Aeration is affected by several factors such as meteorological conditions, tree stand properties, hydraulic properties of peat, ditch depth, and ditch spacing. We developed a 2-dimensional DNM simulator that allows the user to adjust these factors and to evaluate their effect on soil aeration at different distances from the drainage ditch. The DNM simulator computes hydrological processes and soil aeration along a water flowpath between two ditches. Applying a daily time step, it calculates evapotranspiration, snow accumulation and melt, infiltration, soil water storage, ground water level, soil water content, air-filled porosity, and runoff. The model's hydrological performance has been tested against independent high-frequency field monitoring data. Soil aeration at different distances from the ditch is computed under a steady-state assumption using an empirical oxygen consumption model, simulated air-filled porosity, and diffusion coefficients at different depths in the soil. Aeration is adequate, and forest growth is not limited by poor aeration, if the air-filled porosity remains above a critical threshold.
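
    The steady-state aeration computation can be illustrated with the textbook 1-D diffusion-consumption balance D C'' = R (surface concentration fixed, zero flux at the water table), which has a closed form, C(z) = C_atm - (R/D) z (H - z/2). All values below are toy numbers, not the simulator's calibrated parameters.

    ```python
    import numpy as np

    D = 1.0e-6       # effective O2 diffusivity in peat, m^2/s (hypothetical)
    R = 2.0e-6       # O2 consumption rate, mol m^-3 s^-1 (hypothetical)
    H = 0.4          # depth to the water table, m
    C_atm = 8.6      # O2 concentration at the soil surface, mol/m^3

    z = np.linspace(0.0, H, 5)
    C = C_atm - (R / D) * z * (H - z / 2)    # closed-form steady-state profile
    print(dict(zip(np.round(z, 2), np.round(C, 2))))   # aeration vs depth
    ```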

  13. An Evolutionary Computation Approach to Examine Functional Brain Plasticity.

    PubMed

    Roy, Arnab; Campbell, Colin; Bernier, Rachel A; Hillary, Frank G

    2016-01-01

    One common research goal in systems neurosciences is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well suited to the study of developmental processes, learning, and even recovery or treatment designs in response to injury. For most fMRI-based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signals representing each region. The drawback to this approach is that much information is lost due to averaging heterogeneous voxels, and therefore functional relationships between a ROI-pair that evolve at a spatial scale much finer than the ROIs remain undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator-defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC-based procedure is able to detect functional plasticity where a traditional averaging-based approach fails. The subject-specific plasticity estimates obtained using the EC procedure are highly consistent across multiple runs. Group-level analyses using these plasticity estimates showed an increase in the strength of the functional relationship between the DMN and ECN during recovery.

  14. An Evolutionary Computation Approach to Examine Functional Brain Plasticity

    PubMed Central

    Roy, Arnab; Campbell, Colin; Bernier, Rachel A.; Hillary, Frank G.

    2016-01-01

    One common research goal in systems neurosciences is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well suited to the study of developmental processes, learning, and even recovery or treatment designs in response to injury. For most fMRI-based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signals representing each region. The drawback to this approach is that much information is lost due to averaging heterogeneous voxels, and therefore functional relationships between a ROI-pair that evolve at a spatial scale much finer than the ROIs remain undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator-defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC-based procedure is able to detect functional plasticity where a traditional averaging-based approach fails. The subject-specific plasticity estimates obtained using the EC procedure are highly consistent across multiple runs. Group-level analyses using these plasticity estimates showed an increase in the strength of the functional relationship between the DMN and ECN during recovery.

  15. Analysis of the effect of cone-beam geometry and test object configuration on the measurement accuracy of a computed tomography scanner used for dimensional measurement

    NASA Astrophysics Data System (ADS)

    Kumar, Jagadeesha; Attridge, Alex; Wood, P. K. C.; Williams, Mark A.

    2011-03-01

    Industrial x-ray computed tomography (CT) scanners are used for non-contact dimensional measurement of small, fragile components and difficult-to-access internal features of castings and mouldings. However, the accuracy and repeatability of measurements are influenced by factors such as cone-beam system geometry, test object configuration, x-ray power, material and size of the test object, detector characteristics, and data analysis methods. An attempt is made in this work to understand the measurement errors of a CT scanner over the complete scan volume, taking into account only the errors in system geometry and the object configuration within the scanner. A cone-beam simulation model is developed with the radiographic image projection and reconstruction steps. Known errors in the geometrical parameters were introduced in the model to understand the effect of the geometry of the cone-beam CT system on measurement accuracy for different positions, orientations, and sizes of the test object. Simulation analysis shows that the geometrical parameters have a significant influence on dimensional measurement at specific configurations of the test object. Finally, the importance of system alignment and estimation of the correct parameters for accurate CT measurements is outlined based on the analysis.
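
    The first-order geometric sensitivity is easy to see from the magnification relation M = SDD/SOD of a cone-beam setup: a calibration error in the source-object distance propagates directly into the measured dimension. The distances below are hypothetical.

    ```python
    SDD = 1000.0                    # source-detector distance, mm (assumed)
    SOD = 200.0                     # source-object distance, mm (assumed)
    feature_on_detector = 25.0      # measured feature size on the detector, mm

    true_size = feature_on_detector * SOD / SDD
    biased = feature_on_detector * (SOD + 1.0) / SDD   # 1 mm SOD calibration error
    print(true_size, biased - true_size)               # 5.0 mm, 0.025 mm bias
    ```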

  16. Serial analysis of lumen geometry and hemodynamics in human arteriovenous fistula for hemodialysis using magnetic resonance imaging and computational fluid dynamics

    PubMed Central

    He, Yong; Terry, Christi M.; Nguyen, Cuong; Berceli, Scott A.; Shiu, Yan-Ting E.; Cheung, Alfred K.

    2014-01-01

    The arteriovenous fistula (AVF) is the preferred form of vascular access for maintenance hemodialysis, but it often fails to mature to become clinically usable, likely due to aberrant hemodynamic forces. A robust pipeline for serial assessment of hemodynamic parameters and subsequent lumen cross-sectional area changes has been developed and applied to a data set from contrast-free MRI of a dialysis patient’s AVF collected over a period of months after AVF creation surgery. Black-blood MRI yielded images of AVF lumen geometry, while cine phase-contrast MRI provided volumetric flow rates at the in-flow and out-flow locations. Lumen geometry and flow rates were used as inputs for computational fluid dynamic (CFD) modeling to provide serial wall shear stress (WSS), WSS gradient, and oscillatory shear index profiles. The serial AVF lumen geometries were co-registered at 1-mm intervals using respective lumen centerlines, with the anastomosis as an anatomical landmark. Lumen enlargement was limited at the vein region near the anastomosis and a downstream vein valve, potentially attributed to a physical inhibition of wall expansion at those sites. This work is the first serial and detailed study of lumen and hemodynamic changes in human AVF using MRI and CFD. This novel protocol will be used for a multicenter prospective study to identify critical hemodynamic factors that contribute to AVF maturation failure. PMID:23122945

  17. Separation efficiency of a hydrodynamic separator using a 3D computational fluid dynamics multiscale approach.

    PubMed

    Schmitt, Vivien; Dufresne, Matthieu; Vazquez, Jose; Fischer, Martin; Morin, Antoine

    2014-01-01

    The aim of this study is to investigate the use of computational fluid dynamics (CFD) to predict the solid separation efficiency of a hydrodynamic separator. The numerical difficulty concerns the discretization of the geometry to simulate both the global behavior and the local phenomena that occur near the screen. In this context, a CFD multiscale approach was used: a global model (at the scale of the device) is used to observe the hydrodynamic behavior within the device, and a local model (a portion of the screen) is used to determine the local phenomena that occur near the screen. The Eulerian-Lagrangian approach was used to model the particle trajectories in both models. The global model shows the influence of the particles' characteristics on the trapping efficiency. High density favors sedimentation; in contrast, particles with low densities (1,040 kg/m3) are steered by the hydrodynamic behavior and can potentially be trapped by the separator. The use of the local model allows us to observe the particle trajectories near the screen. A comparison between two types of screens (perforated plate vs. expanded metal) highlights the turbulent effects created by the shape of the screen. PMID:24622557
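
    The density effect noted above follows from the Stokes settling velocity, v = (rho_p - rho_f) g d^2 / (18 mu), which is tiny for near-neutrally-buoyant particles. The sketch below contrasts a 1,040 kg/m3 particle with a sand grain; the particle diameter is an assumed toy value.

    ```python
    def settling_velocity(d, rho_p, rho_f=1000.0, mu=1.0e-3, g=9.81):
        """Stokes terminal settling velocity, m/s (positive = downward); low-Re only."""
        return (rho_p - rho_f) * g * d**2 / (18 * mu)

    for rho_p in (1040.0, 2650.0):            # near-buoyant particle vs sand grain
        print(rho_p, settling_velocity(d=200e-6, rho_p=rho_p))
    ```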

  18. Approach to reduce the computational image processing requirements for a computer vision system using sensor preprocessing and the Hotelling transform

    NASA Astrophysics Data System (ADS)

    Schei, Thomas R.; Wright, Cameron H. G.; Pack, Daniel J.

    2005-03-01

    We describe a new development approach to computer vision for a compact, low-power, real-time system such as mobile robots. We take advantage of preprocessing in a biomimetic vision sensor and employ a computational strategy using subspace methods and the Hotelling transform in an effort to reduce the computational imaging load. While the combination provides an overall reduction in the computational imaging requirements, the two stages are not yet optimized for each other and require additional investigation.
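    The Hotelling transform referred to above is the Karhunen-Loève/PCA transform. A minimal sketch of how it shrinks the imaging load (illustrative data and dimensions, not the authors' sensor pipeline):

```python
import numpy as np

def hotelling_transform(X, k):
    """Project rows of X (n_samples x n_features, e.g. flattened image patches)
    onto the k principal eigenvectors of their covariance matrix. The Hotelling
    (Karhunen-Loeve / PCA) transform decorrelates the data so that downstream
    vision algorithms can operate in a much smaller subspace."""
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / (len(X) - 1)
    w, V = np.linalg.eigh(cov)       # eigenvalues in ascending order
    basis = V[:, ::-1][:, :k]        # top-k principal directions
    return Xc @ basis, basis, mean

rng = np.random.default_rng(0)
patches = rng.normal(size=(500, 64))          # stand-in for 8x8 sensor patches
codes, basis, mean = hotelling_transform(patches, k=8)
print(codes.shape)                            # (500, 8): 8 numbers instead of 64
```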

  19. Proof in Transformation Geometry

    ERIC Educational Resources Information Center

    Bell, A. W.

    1971-01-01

    The first of three articles showing how inductively-obtained results in transformation geometry may be organized into a deductive system. This article discusses two approaches to enlargement (dilatation), one using coordinates and the other using synthetic methods. (MM)

  20. A task-specific approach to computational imaging system design

    NASA Astrophysics Data System (ADS)

    Ashok, Amit

    The traditional approach to imaging system design places the sole burden of image formation on optical components. In contrast, a computational imaging system relies on a combination of optics and post-processing to produce the final image and/or output measurement. Therefore, the joint-optimization (JO) of the optical and the post-processing degrees of freedom plays a critical role in the design of computational imaging systems. The JO framework also allows us to incorporate task-specific performance measures to optimize an imaging system for a specific task. In this dissertation, we consider the design of computational imaging systems within a JO framework for two separate tasks: object reconstruction and iris-recognition. The goal of these design studies is to optimize the imaging system to overcome the performance degradations introduced by under-sampled image measurements. Within the JO framework, we engineer the optical point spread function (PSF) of the imager, representing the optical degrees of freedom, in conjunction with the post-processing algorithm parameters to maximize the task performance. For the object reconstruction task, the optimized imaging system achieves a 50% improvement in resolution and nearly 20% lower reconstruction root-mean-square-error (RMSE) as compared to the un-optimized imaging system. For the iris-recognition task, the optimized imaging system achieves a 33% improvement in false rejection ratio (FRR) at a fixed false alarm ratio (FAR) relative to the conventional imaging system. The effect of performance measures such as resolution, RMSE, FRR, and FAR on the optimal design highlights the crucial role of task-specific design metrics in the JO framework. We introduce a fundamental measure of task-specific performance known as task-specific information (TSI), an information-theoretic measure that quantifies the information content of an image measurement relevant to a specific task. A variety of source-models are derived to illustrate

  1. On the Geometry of Space, Time, Energy, and Mass: Empirical Validation of the Computational Unified Field Theory

    NASA Astrophysics Data System (ADS)

    Bentwich, Jonathan

    The principal contradiction that exists between Quantum Mechanics and Relativity Theory constitutes the biggest unresolved enigma in modern Science. To date, none of the candidate theory-of-everything (TOE) models has received satisfactory empirical validation. A new hypothetical model called the `Computational Unified Field Theory' (CUFT) was developed over the past three years. In this paper it will be shown that the CUFT is capable of resolving the key theoretical inconsistencies between quantum and relativistic models. Additionally, the CUFT fully integrates the four physical parameters of space, time, energy and mass as secondary computational features of a singular universal computational principle (UCP), which produces the entire physical universe as an extremely rapid series of spatially exhaustive `Universal Simultaneous Computational Frames' (USCF) embodied within a novel `Universal Computational Formula' (UCF). An empirical validation of the CUFT as a satisfactory TOE is given based on the recently discovered `Proton Radius Puzzle', which confirms one of the CUFT `differential-critical predictions' distinguishing it from both quantum and relativistic models.

  2. Transmission electron microscope in situ fatigue experiments: a computer-control approach.

    PubMed

    Vecchio, K S; Hunt, J A; Williams, D B

    1991-03-01

    A computer-control procedure was developed to facilitate in situ fatigue experiments within an intermediate voltage transmission electron microscope using a goniometer-type straining holder. The procedure was designed to allow sine-wave tension-tension cyclic loading of a microfatigue specimen similar in geometry to a center-crack panel fatigue specimen. Computer control allows greater freedom for the operator to control the experiments while providing better reproducibility from one test to another. Further development of this procedure is possible by coupling this computer-control technique with computer-controlled stage motion and digitized TV imaging. PMID:2045966

  3. Computational approaches to selecting and optimising targets for structural biology.

    PubMed

    Overton, Ian M; Barton, Geoffrey J

    2011-09-01

    Selection of protein targets for study is central to structural biology and may be influenced by numerous factors. A key aim is to maximise returns for effort invested by identifying proteins with the balance of biophysical properties that are conducive to success at all stages (e.g. solubility, crystallisation) in the route towards a high resolution structural model. Selected targets can be optimised through construct design (e.g. to minimise protein disorder), switching to a homologous protein, and selection of experimental methodology (e.g. choice of expression system) to prime for efficient progress through the structural proteomics pipeline. Here we discuss computational techniques in target selection and optimisation, with more detailed focus on tools developed within the Scottish Structural Proteomics Facility (SSPF); namely XANNpred, ParCrys, OB-Score (target selection) and TarO (target optimisation). TarO runs a large number of algorithms, searching for homologues and annotating the pool of possible alternative targets. This pool of putative homologues is presented in a ranked, tabulated format and results are also visualised as an automatically generated and annotated multiple sequence alignment. The target selection algorithms each predict the propensity of a selected protein target to progress through the experimental stages leading to diffracting crystals. This single predictor approach has advantages for target selection, when compared with an approach using two or more predictors that each predict for success at a single experimental stage. The tools described here helped SSPF achieve a high (21%) success rate in progressing cloned targets to diffraction-quality crystals. PMID:21906678

  4. Mapping Agricultural Fields in Sub-Saharan Africa with a Computer Vision Approach

    NASA Astrophysics Data System (ADS)

    Debats, S. R.; Luo, D.; Estes, L. D.; Fuchs, T.; Caylor, K. K.

    2014-12-01

    Sub-Saharan Africa is an important focus for food security research, because it is experiencing unprecedented population growth, agricultural activities are largely dominated by smallholder production, and the region is already home to 25% of the world's undernourished. One of the greatest challenges to monitoring and improving food security in this region is obtaining an accurate accounting of the spatial distribution of agriculture. Households are the primary units of agricultural production in smallholder communities and typically rely on small fields of less than 2 hectares. Field sizes are directly related to household crop productivity, management choices, and adoption of new technologies. As population and agriculture expand, it becomes increasingly important to understand both the distribution of field sizes as well as how agricultural communities are spatially embedded in the landscape. In addition, household surveys, a common tool for tracking agricultural productivity in Sub-Saharan Africa, would greatly benefit from spatially explicit accounting of fields. Current gridded land cover data sets do not provide information on individual agricultural fields or the distribution of field sizes. Therefore, we employ cutting edge approaches from the field of computer vision to map fields across Sub-Saharan Africa, including semantic segmentation, discriminative classifiers, and automatic feature selection. Our approach aims to not only improve the binary classification accuracy of cropland, but also to isolate distinct fields, thereby capturing crucial information on size and geometry. Our research focuses on the development of descriptive features across scales to increase the accuracy and geographic range of our computer vision algorithm. Relevant data sets include high-resolution remote sensing imagery and Landsat (30-m) multi-spectral imagery. Training data for field boundaries is derived from hand-digitized data sets as well as crowdsourcing.
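    A hedged sketch of the pixel-feature-plus-discriminative-classifier idea described above (hypothetical random arrays in place of real imagery and digitized training labels; assumes scipy and scikit-learn are available; this is not the authors' algorithm, only an illustration of multi-scale features feeding a classifier with built-in feature ranking):

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier

def pixel_features(img, scales=(3, 9, 27)):
    """Per-pixel multi-scale texture features: local mean and variance of one
    band at several window sizes, capturing field-vs-background texture."""
    feats = []
    for s in scales:
        m = uniform_filter(img, size=s)
        v = uniform_filter(img**2, size=s) - m**2
        feats += [m, v]
    return np.stack(feats, axis=-1).reshape(-1, 2 * len(scales))

rng = np.random.default_rng(0)
img = rng.random((64, 64))                        # stand-in for one Landsat band
labels = (rng.random(64 * 64) > 0.5).astype(int)  # stand-in for digitized truth
X = pixel_features(img)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
print("feature importances:", clf.feature_importances_.round(3))
```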

  5. An analytical approach to computing biomolecular electrostatic potential. II. Validation and applications.

    PubMed

    Gordon, John C; Fenley, Andrew T; Onufriev, Alexey

    2008-08-21

    An ability to efficiently compute the electrostatic potential produced by molecular charge distributions under realistic solvation conditions is essential for a variety of applications. Here, the simple closed-form analytical approximation to the Poisson equation rigorously derived in Part I for idealized spherical geometry is tested on realistic shapes. The effects of mobile ions are included at the Debye-Hückel level. The accuracy of the resulting closed-form expressions for electrostatic potential is assessed through comparisons with numerical Poisson-Boltzmann (NPB) reference solutions on a test set of 580 representative biomolecular structures under typical conditions of aqueous solvation. For each structure, the deviation from the reference is computed for a large number of test points placed near the dielectric boundary (molecular surface). The accuracy of the approximation, averaged over all test points in each structure, is within 0.6 kcal/mol/|e| ≈ kT per unit charge for all structures in the test set. For 91.5% of the individual test points, the deviation from the NPB potential is within 0.6 kcal/mol/|e|. The deviations from the reference decrease with increasing distance from the dielectric boundary: The approximation is asymptotically exact far away from the source charges. Deviation of the overall shape of a structure from ideal spherical does not, by itself, appear to necessitate decreased accuracy of the approximation. The largest deviations from the NPB reference are found inside very deep and narrow indentations that occur on the dielectric boundaries of some structures. The dimensions of these pockets of locally highly negative curvature are comparable to the size of a water molecule; the applicability of continuum dielectric models in these regions is discussed. The maximum deviations from the NPB are reduced substantially when the boundary is smoothed by using a larger probe radius (3 Å) to generate the

  6. An analytical approach to computing biomolecular electrostatic potential. II. Validation and applications

    NASA Astrophysics Data System (ADS)

    Gordon, John C.; Fenley, Andrew T.; Onufriev, Alexey

    2008-08-01

    An ability to efficiently compute the electrostatic potential produced by molecular charge distributions under realistic solvation conditions is essential for a variety of applications. Here, the simple closed-form analytical approximation to the Poisson equation rigorously derived in Part I for idealized spherical geometry is tested on realistic shapes. The effects of mobile ions are included at the Debye-Hückel level. The accuracy of the resulting closed-form expressions for electrostatic potential is assessed through comparisons with numerical Poisson-Boltzmann (NPB) reference solutions on a test set of 580 representative biomolecular structures under typical conditions of aqueous solvation. For each structure, the deviation from the reference is computed for a large number of test points placed near the dielectric boundary (molecular surface). The accuracy of the approximation, averaged over all test points in each structure, is within 0.6 kcal/mol/|e| ≈ kT per unit charge for all structures in the test set. For 91.5% of the individual test points, the deviation from the NPB potential is within 0.6 kcal/mol/|e|. The deviations from the reference decrease with increasing distance from the dielectric boundary: The approximation is asymptotically exact far away from the source charges. Deviation of the overall shape of a structure from ideal spherical does not, by itself, appear to necessitate decreased accuracy of the approximation. The largest deviations from the NPB reference are found inside very deep and narrow indentations that occur on the dielectric boundaries of some structures. The dimensions of these pockets of locally highly negative curvature are comparable to the size of a water molecule; the applicability of continuum dielectric models in these regions is discussed. The maximum deviations from the NPB are reduced substantially when the boundary is smoothed by using a larger probe radius (3 Å) to generate the molecular surface. A detailed accuracy
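    For reference, the screened-Coulomb (Debye-Hückel) kernel underlying such approximations is easy to evaluate directly. The sketch below computes it for point charges in a uniform dielectric; note that it deliberately omits the dielectric boundary, which is the paper's actual subject, and the parameter values are illustrative:

```python
import numpy as np

K_COUL = 332.06  # Coulomb constant in kcal*Angstrom/(mol*e^2)

def dh_potential(test_points, charge_pos, charges, eps=80.0, kappa=0.1):
    """Screened-Coulomb (Debye-Hueckel) potential, in kcal/mol per unit charge,
    at test_points (N x 3, Angstroms) due to point charges in a uniform
    dielectric. kappa (1/Angstrom) is the inverse Debye length."""
    phi = np.zeros(len(test_points))
    for q, p in zip(charges, charge_pos):
        r = np.linalg.norm(test_points - p, axis=1)
        phi += K_COUL * q * np.exp(-kappa * r) / (eps * r)
    return phi

pts = np.array([[5.0, 0, 0], [10.0, 0, 0], [20.0, 0, 0]])
print(dh_potential(pts, charge_pos=np.array([[0.0, 0, 0]]), charges=[1.0]))
```

    At 5 Å from a unit charge in water this kernel is about 0.5 kcal/mol/|e|, i.e. of the same order as the paper's quoted accuracy threshold.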

  7. Tally and geometry definition influence on the computing time in radiotherapy treatment planning with MCNP Monte Carlo code.

    PubMed

    Juste, B; Miro, R; Gallardo, S; Santos, A; Verdu, G

    2006-01-01

    The present work has simulated the photon and electron transport in a Theratron 780 (MDS Nordion) (60)Co radiotherapy unit, using the Monte Carlo transport code, MCNP (Monte Carlo N-Particle), version 5. To make the model computationally efficient enough for practical radiotherapy treatment planning, this work focuses mainly on the analysis of dose results and on the computing time required by the different tallies applied in the model to speed up calculations. PMID:17946330

  8. Applying a cloud computing approach to storage architectures for spacecraft

    NASA Astrophysics Data System (ADS)

    Baldor, Sue A.; Quiroz, Carlos; Wood, Paul

    As sensor technologies, processor speeds, and memory densities increase, spacecraft command, control, processing, and data storage systems have grown in complexity to take advantage of these improvements and expand the possible missions of spacecraft. Spacecraft systems engineers are increasingly looking for novel ways to address this growth in complexity and mitigate associated risks. Looking to conventional computing, many solutions have been executed to solve both the problem of complexity and heterogeneity in systems. In particular, the cloud-based paradigm provides a solution for distributing applications and storage capabilities across multiple platforms. In this paper, we propose utilizing a cloud-like architecture to provide a scalable mechanism for providing mass storage in spacecraft networks that can be reused on multiple spacecraft systems. By presenting a consistent interface to applications and devices that request data to be stored, complex systems designed by multiple organizations may be more readily integrated. Behind the abstraction, the cloud storage capability would manage wear-leveling, power consumption, and other attributes related to the physical memory devices, critical components in any mass storage solution for spacecraft. Our approach employs SpaceWire networks and SpaceWire-capable devices, although the concept could easily be extended to non-heterogeneous networks consisting of multiple spacecraft and potentially the ground segment.

  9. A Computational Drug Repositioning Approach for Targeting Oncogenic Transcription Factors

    PubMed Central

    Gayvert, Kaitlyn; Dardenne, Etienne; Cheung, Cynthia; Boland, Mary Regina; Lorberbaum, Tal; Wanjala, Jackline; Chen, Yu; Rubin, Mark; Tatonetti, Nicholas P.; Rickman, David; Elemento, Olivier

    2016-01-01

    Mutations in transcription factor (TF) genes are frequently observed in tumors, often leading to aberrant transcriptional activity. Unfortunately, TFs are often considered undruggable due to the absence of targetable enzymatic activity. To address this problem, we developed CRAFTT, a Computational drug-Repositioning Approach For Targeting Transcription factor activity. CRAFTT combines ChIP-seq with drug-induced expression profiling to identify small molecules that can specifically perturb TF activity. Application to ENCODE ChIP-seq datasets revealed known drug-TF interactions, and a global drug-protein network analysis further supported these predictions. Application of CRAFTT to ERG, a pro-invasive, frequently over-expressed oncogenic TF, predicted that dexamethasone would inhibit ERG activity. Indeed, dexamethasone significantly decreased cell invasion and migration in an ERG-dependent manner. Furthermore, analysis of Electronic Medical Record data indicates a protective role for dexamethasone against prostate cancer. Altogether, our method provides a broadly applicable strategy to identify drugs that specifically modulate TF activity. PMID:27264179
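    A toy rank-based score in the spirit of combining a drug-induced expression signature with a TF's ChIP-seq target set might look like the following (hypothetical genes and values; this is an illustrative connectivity-style score, not the published CRAFTT implementation):

```python
import numpy as np

def perturbation_score(drug_signature, tf_targets):
    """drug_signature maps gene -> differential expression after drug treatment;
    tf_targets is the set of genes bound by the TF (e.g. from ChIP-seq peaks).
    Genes are ranked from most down-regulated (0) to most up-regulated (1); a
    strongly negative score suggests the drug down-regulates the TF's program."""
    genes = sorted(drug_signature, key=drug_signature.get)  # most down first
    ranks = {g: i / (len(genes) - 1) for i, g in enumerate(genes)}
    target_ranks = [ranks[g] for g in tf_targets if g in ranks]
    return np.mean(target_ranks) - 0.5

sig = {"ERG": -2.1, "MYC": 0.3, "PLAU": -1.4, "GAPDH": 0.0, "MMP3": -1.8}
print(perturbation_score(sig, tf_targets={"ERG", "PLAU", "MMP3"}))  # negative
```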

  10. A systems approach to computer-based training

    NASA Technical Reports Server (NTRS)

    Drape, Gaylen W.

    1994-01-01

    This paper describes the hardware and software systems approach used in the Automated Recertification Training System (ARTS), a Phase 2 Small Business Innovation Research (SBIR) project for NASA Kennedy Space Center (KSC). The goal of this project is to optimize recertification training of technicians who process the Space Shuttle before launch by providing computer-based training courseware. The objectives of ARTS are to implement more effective CBT applications identified through a needs assessment process and to provide an enhanced courseware production system. The system's capabilities are demonstrated by using five different pilot applications to convert existing classroom courses into interactive courseware. When the system is fully implemented at NASA/KSC, trainee job performance will improve and the cost of courseware development will be lower. Commercialization of the technology developed as part of this SBIR project is planned for Phase 3. Anticipated spin-off products include custom courseware for technical skills training and courseware production software for use by corporate training organizations of aerospace and other industrial companies.

  11. Lexical is as lexical does: computational approaches to lexical representation

    PubMed Central

    Woollams, Anna M.

    2015-01-01

    In much of neuroimaging and neuropsychology, regions of the brain have been associated with ‘lexical representation’, with little consideration as to what this cognitive construct actually denotes. Within current computational models of word recognition, there are a number of different approaches to the representation of lexical knowledge. Structural lexical representations, found in original theories of word recognition, have been instantiated in modern localist models. However, such a representational scheme lacks neural plausibility in terms of economy and flexibility. Connectionist models have therefore adopted distributed representations of form and meaning. Semantic representations in connectionist models necessarily encode lexical knowledge. Yet when equipped with recurrent connections, connectionist models can also develop attractors for familiar forms that function as lexical representations. Current behavioural, neuropsychological and neuroimaging evidence shows a clear role for semantic information, but also suggests some modality- and task-specific lexical representations. A variety of connectionist architectures could implement these distributed functional representations, and further experimental and simulation work is required to discriminate between these alternatives. Future conceptualisations of lexical representations will therefore emerge from a synergy between modelling and neuroscience. PMID:25893204

  12. Quasi-relativistic model-potential approach. Spin-orbit effects on energies and geometries of several di- and tri-atomic molecules

    NASA Astrophysics Data System (ADS)

    Hafner, P.; Habitz, P.; Ishikawa, Y.; Wechsel-Trakowski, E.; Schwarz, W. H. E.

    1981-06-01

    Calculations on ground and valence-excited states of Au2+, Tl2 and Pb2, and on the ground states of HgCl2, PbCl2 and PbH2 have been performed within the Kramers-restricted self-consistent-field approach using a quasi-relativistic model-potential Hamiltonian. The influence of spin-orbit coupling on molecular orbitals, bond energies and geometries is discussed.

  13. Computational approaches to stochastic systems in physics and biology

    NASA Astrophysics Data System (ADS)

    Jeraldo Maldonado, Patricio Rodrigo

    In this dissertation, I devise computational approaches to model and understand two very different systems which exhibit stochastic behavior: quantum fluids with topological defects arising during quenches and forcing, and complex microbial communities living and evolving within the gastrointestinal tracts of vertebrates. As such, this dissertation is organized into two parts. In Part I, I create a model for quantum fluids, which incorporates a conservative and dissipative part, and I also allow the fluid to be externally forced by a normal fluid. I then use this model to calculate scaling laws arising from the stochastic interactions of the topological defects exhibited by the modeled fluid while undergoing a quench. In Chapter 2 I give a detailed description of this model of quantum fluids. Unlike more traditional approaches, this model is based on Cell Dynamical Systems (CDS), an approach that captures relevant physical features of the system and allows for long time steps during its evolution. I devise a two-step CDS model, implementing both conservative and dissipative dynamics present in quantum fluids. I also couple the model with an external normal fluid field that drives the system. I then validate the results of the model by measuring different scaling laws predicted for quantum fluids. I also propose an extension of the model that incorporates the excitations of the fluid and couples its dynamics with the dynamics of the condensate. In Chapter 3 I use the above model to calculate scaling laws predicted for the velocity of topological defects undergoing a critical quench. To accomplish this, I numerically implement an algorithm that extracts from the order parameter field the velocity components of the defects as they move during the quench process. This algorithm is robust and extensible to any system where defects are located by the zeros of the order parameter. The algorithm is also applied to a sheared stripe-forming system, allowing the

  14. An Educational Approach to Computationally Modeling Dynamical Systems

    ERIC Educational Resources Information Center

    Chodroff, Leah; O'Neal, Tim M.; Long, David A.; Hemkin, Sheryl

    2009-01-01

    Chemists have used computational science methodologies for a number of decades and their utility continues to be unabated. For this reason we developed an advanced lab in computational chemistry in which students gain understanding of general strengths and weaknesses of computation-based chemistry by working through a specific research problem.…

  15. The Computer in Library Education: One School's Approach.

    ERIC Educational Resources Information Center

    Drott, M. Carl

    The increasing presence and use of computers in libraries has brought about the more frequent introduction of computers and their uses into library education. The Drexel University Graduate School of Library Science has introduced the computer into the curriculum more through individual experimentation and innovation than by planned development.…

  16. Communication: An efficient approach to compute state-specific nuclear gradients for a generic state-averaged multi-configuration self consistent field wavefunction

    NASA Astrophysics Data System (ADS)

    Granovsky, Alexander A.

    2015-12-01

    We present a new, very efficient semi-numerical approach for the computation of state-specific nuclear gradients of a generic state-averaged multi-configuration self consistent field wavefunction. Our approach eliminates the costly coupled-perturbed multi-configuration Hartree-Fock step as well as the associated integral transformation stage. The details of the implementation within the Firefly quantum chemistry package are discussed and several sample applications are given. The new approach is routinely applicable to geometry optimization of molecular systems with 1000+ basis functions using a standalone multi-core workstation.

  17. Communication: An efficient approach to compute state-specific nuclear gradients for a generic state-averaged multi-configuration self consistent field wavefunction

    SciTech Connect

    Granovsky, Alexander A.

    2015-12-21

    We present a new, very efficient semi-numerical approach for the computation of state-specific nuclear gradients of a generic state-averaged multi-configuration self consistent field wavefunction. Our approach eliminates the costly coupled-perturbed multi-configuration Hartree-Fock step as well as the associated integral transformation stage. The details of the implementation within the Firefly quantum chemistry package are discussed and several sample applications are given. The new approach is routinely applicable to geometry optimization of molecular systems with 1000+ basis functions using a standalone multi-core workstation.

  18. Computer Tutors: An Innovative Approach to Computer Literacy. Part I: The Early Stages.

    ERIC Educational Resources Information Center

    Targ, Joan

    1981-01-01

    Describes the development of the Computer Tutor project in Palo Alto, California, a computer literacy pilot program in which tutors are used to teach high school students and other interested persons computer programming. (JJD)

  19. Uncertainty budget for a whole body counter in the scan geometry and computer simulation of the calibration phantoms.

    PubMed

    Schlagbauer, M; Hrnecek, E; Rollet, S; Fischer, H; Brandl, A; Kindl, P

    2007-01-01

    At the Austrian Research Centers Seibersdorf (ARCS), a whole body counter (WBC) in the scan geometry is used to perform routine measurements for the determination of radioactive intake of workers. The calibration of the WBC is made using bottle phantoms with a homogeneous activity distribution. The same calibration procedures have been simulated using Monte Carlo N-Particle (MCNP) code and FLUKA and the results of the full energy peak efficiencies for eight energies and five phantoms have been compared with the experimental results. The deviation between experiment and simulation results is within 10%. Furthermore, uncertainty budget evaluations have been performed to find out which parameters make substantial contributions to these differences. Therefore, statistical errors of the Monte Carlo simulation, uncertainties in the cross section tables and differences due to geometrical considerations have been taken into account. Comparisons between these results and the one with inhomogeneous distribution, for which the activity is concentrated only in certain parts of the body (such as head, lung, arms and legs), have been performed. The maximum deviation of 43% from the homogeneous case has been found when the activity is concentrated on the arms. PMID:17656442

  20. Novel Approaches in Astrocyte Protection: from Experimental Methods to Computational Approaches.

    PubMed

    Garzón, Daniel; Cabezas, Ricardo; Vega, Nelson; Ávila-Rodriguez, Marcos; Gonzalez, Janneth; Gómez, Rosa Margarita; Echeverria, Valentina; Aliev, Gjumrakch; Barreto, George E

    2016-04-01

    Astrocytes are important for normal brain functioning. Astrocytes are metabolic regulators of the brain that exert many functions such as the preservation of blood-brain barrier (BBB) function, clearance of toxic substances, and generation of antioxidant molecules and growth factors. These functions are fundamental to sustain the function and survival of neurons and other brain cells. For these reasons, the protection of astrocytes has become relevant for the prevention of neuronal death during brain pathologies such as Parkinson's disease, Alzheimer's disease, stroke, and other neurodegenerative conditions. Currently, different strategies are being used to protect the main astrocytic functions during neurological diseases, including the use of growth factors, steroid derivatives, mesenchymal stem cell paracrine factors, nicotine derivatives, and computational biology tools. Moreover, the combined use of experimental approaches with bioinformatics tools such as the ones obtained through system biology has allowed a broader knowledge in astrocytic protection both in normal and pathological conditions. In the present review, we highlight some of these recent paradigms in assessing astrocyte protection using experimental and computational approaches and discuss how they could be used for the study of restorative therapies for the brain in pathological conditions. PMID:26803310

  1. A computational intelligence approach to the Mars Precision Landing problem

    NASA Astrophysics Data System (ADS)

    Birge, Brian Kent, III

    Various proposed Mars missions, such as the Mars Sample Return Mission (MRSR) and the Mars Smart Lander (MSL), require precise re-entry terminal position and velocity states. This is to achieve mission objectives including rendezvous with a previously landed mission, or reaching a particular geographic landmark. The current state-of-the-art footprint is on the order of kilometers. For this research a Mars Precision Landing is achieved with a landed footprint of no more than 100 meters, for a set of initial entry conditions representing worst-guess dispersions. Obstacles to reducing the landed footprint include trajectory dispersions due to initial atmospheric entry conditions (entry angle, parachute deployment height, etc.), environment (wind, atmospheric density, etc.), parachute deployment dynamics, unavoidable injection error (propagated error from launch on), etc. Weather and atmospheric models have been developed. Three descent scenarios have been examined. First, terminal re-entry is achieved via a ballistic parachute with concurrent thrusting events while on the parachute, followed by a gravity turn. Second, terminal re-entry is achieved via a ballistic parachute followed by gravity turn to hover and then thrust vector to desired location. Third, a guided parafoil approach followed by vectored thrusting to reach terminal velocity is examined. The guided parafoil is determined to be the best architecture. The purpose of this study is to examine the feasibility of using a computational intelligence strategy to facilitate precision planetary re-entry, specifically to take an approach that is somewhat more intuitive and less rigid, and see where it leads. The test problems used for all research are variations on proposed Mars landing mission scenarios developed by NASA. A relatively recent method of evolutionary computation is Particle Swarm Optimization (PSO), which can be considered to be in the same general class as Genetic Algorithms. An improvement over
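    A minimal particle swarm optimizer of the kind referenced above (standard textbook PSO with inertia, cognitive, and social terms; the landing-error objective here is a stand-in, not the dissertation's trajectory simulation):

```python
import numpy as np

def pso(f, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over a box. Each particle is pulled toward its own best
    position (cognitive term) and the swarm's best position (social term)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Toy landing-error surrogate: distance of touchdown point from the target
target = np.array([1.2, -0.7])
f = lambda p: np.linalg.norm(p - target)
print(pso(f, bounds=[(-5, 5), (-5, 5)]))
```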

  2. Mutations that Cause Human Disease: A Computational/Experimental Approach

    SciTech Connect

    Beernink, P; Barsky, D; Pesavento, B

    2006-01-11

    International genome sequencing projects have produced billions of nucleotides (letters) of DNA sequence data, including the complete genome sequences of 74 organisms. These genome sequences have created many new scientific opportunities, including the ability to identify sequence variations among individuals within a species. These genetic differences, which are known as single nucleotide polymorphisms (SNPs), are particularly important in understanding the genetic basis for disease susceptibility. Since the report of the complete human genome sequence, over two million human SNPs have been identified, including a large-scale comparison of an entire chromosome from twenty individuals. Of the protein coding SNPs (cSNPs), approximately half lead to a single amino acid change in the encoded protein (non-synonymous coding SNPs). Most of these changes are functionally silent, while the remainder negatively impact the protein and sometimes cause human disease. To date, over 550 SNPs have been found to cause single locus (monogenic) diseases and many others have been associated with polygenic diseases. SNPs have been linked to specific human diseases, including late-onset Parkinson disease, autism, rheumatoid arthritis and cancer. The ability to predict accurately the effects of these SNPs on protein function would represent a major advance toward understanding these diseases. To date several attempts have been made toward predicting the effects of such mutations. The most successful of these is a computational approach called 'Sorting Intolerant From Tolerant' (SIFT). This method uses sequence conservation among many similar proteins to predict which residues in a protein are functionally important. However, this method suffers from several limitations. First, a query sequence must have a sufficient number of relatives to infer sequence conservation. Second, this method does not make use of or provide any information on protein structure, which can be used to

  3. A Soft Computing Approach to Kidney Diseases Evaluation.

    PubMed

    Neves, José; Martins, M Rosário; Vilhena, João; Neves, João; Gomes, Sabino; Abelha, António; Machado, José; Vicente, Henrique

    2015-10-01

    Renal failure means that one's kidneys have unexpectedly stopped functioning, i.e., once chronic disease is exposed, the presence or degree of kidney dysfunction and its progression must be assessed, and the underlying syndrome has to be diagnosed. Although the patient's history and physical examination are good practice, some key information has to be obtained from evaluation of the glomerular filtration rate and the analysis of serum biomarkers. Indeed, chronic kidney disease denotes abnormal kidney function and/or structure, i.e., there is evidence that treatment may avoid or delay its progression, either by reducing or preventing the development of some associated complications, namely hypertension, obesity, diabetes mellitus, and cardiovascular complications. Acute kidney injury appears abruptly, with a rapid deterioration of the renal function, but is often reversible if it is recognized early and treated promptly. In both situations, i.e., acute kidney injury and chronic kidney disease, an early intervention can significantly improve the prognosis. The assessment of these pathologies is therefore mandatory, although it is hard to do with traditional methodologies and existing tools for problem solving. Hence, in this work, we focus on the development of a hybrid decision support system, in terms of its knowledge representation and reasoning procedures based on Logic Programming, that allows one to consider incomplete, unknown, and even contradictory information, complemented with an approach to computing centered on Artificial Neural Networks, in order to weigh the Degree-of-Confidence that one has in such a happening. The present study involved 558 patients with an average age of 51.7 years, and chronic kidney disease was observed in 175 cases. The dataset comprises twenty-four variables, grouped into five main categories. The proposed model showed a good performance in the diagnosis of chronic kidney disease, since the

  4. Developing framework to constrain the geometry of the seismic rupture plane on subduction interfaces a priori - A probabilistic approach

    USGS Publications Warehouse

    Hayes, G.P.; Wald, D.J.

    2009-01-01

    A key step in many earthquake source inversions requires knowledge of the geometry of the fault surface on which the earthquake occurred. Our knowledge of this surface is often uncertain, however, and as a result fault geometry misinterpretation can map into significant error in the final temporal and spatial slip patterns of these inversions. Relying solely on an initial hypocentre and CMT mechanism can be problematic when establishing rupture characteristics needed for rapid tsunami and ground shaking estimates. Here, we attempt to improve the quality of fast finite-fault inversion results by combining several independent and complementary data sets to more accurately constrain the geometry of the seismic rupture plane of subducting slabs. Unlike previous analyses aimed at defining the general form of the plate interface, we require mechanisms and locations of the seismicity considered in our inversions to be consistent with their occurrence on the plate interface, by limiting events to those with well-constrained depths and with CMT solutions indicative of shallow-dip thrust faulting. We construct probability density functions about each location based on formal assumptions of their depth uncertainty and use these constraints to solve for the ‘most-likely’ fault plane. Examples are shown for the trench in the source region of the Mw 8.6 Southern Sumatra earthquake of March 2005, and for the Northern Chile Trench in the source region of the November 2007 Antofagasta earthquake. We also show examples using only the historic catalogues in regions without recent great earthquakes, such as the Japan and Kamchatka Trenches. In most cases, this method produces a fault plane that is more consistent with all of the data available than is the plane implied by the initial hypocentre and CMT mechanism. Using the aggregated data sets, we have developed an algorithm to rapidly determine more accurate initial fault plane geometries for source inversions of future
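    One ingredient of such a scheme, an uncertainty-weighted least-squares fit of a plane to hypocenters, can be sketched as follows (synthetic catalogue and simplified 1/σ² weighting; the published method uses full probability density functions rather than this crude stand-in):

```python
import numpy as np

def fit_plane_weighted(points, sigma_z):
    """Least-squares plane z = a*x + b*y + c through hypocenters (points: N x 3),
    with each event weighted by 1/sigma_z**2, i.e. by its depth certainty."""
    x, y, z = points.T
    w = 1.0 / np.asarray(sigma_z)**2
    A = np.column_stack([x, y, np.ones_like(x)])
    Aw = A * w[:, None]
    coeffs, *_ = np.linalg.lstsq(Aw.T @ A, Aw.T @ z, rcond=None)
    return coeffs  # (a, b, c); dip angle follows from arctan(hypot(a, b))

rng = np.random.default_rng(1)
n = 50
xy = rng.uniform(0, 100, size=(n, 2))                 # km, epicentral positions
z_true = 0.3*xy[:, 0] + 0.05*xy[:, 1] + 10.0          # shallow-dipping interface
sigma = rng.uniform(1.0, 8.0, n)                      # km, depth uncertainties
z_obs = z_true + rng.normal(0, sigma)
a, b, c = fit_plane_weighted(np.column_stack([xy, z_obs]), sigma)
print("recovered dip = %.1f deg" % np.degrees(np.arctan(np.hypot(a, b))))
```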

  5. Computational Study on Subdural Cortical Stimulation - The Influence of the Head Geometry, Anisotropic Conductivity, and Electrode Configuration

    PubMed Central

    Kim, Donghyeon; Seo, Hyeon; Kim, Hyoung-Ihl; Jun, Sung Chan

    2014-01-01

    Subdural cortical stimulation (SuCS) is a method used to inject electrical current through electrodes beneath the dura mater, and is known to be useful in treating brain disorders. However, precisely how SuCS must be applied to yield the most effective results has rarely been investigated. For this purpose, we developed a three-dimensional computational model that represents an anatomically realistic brain model including an upper chest. With this computational model, we investigated the influence of stimulation amplitudes, electrode configurations (single or paddle-array), and white matter conductivities (isotropy or anisotropy). Further, the effects of stimulation were compared with two other computational models, including an anatomically realistic brain-only model and the simplified extruded slab model representing the precentral gyrus area. The results of voltage stimulation suggested that there was a synergistic effect with the paddle-array due to the use of multiple electrodes; however, a single electrode was more efficient with current stimulation. The conventional model (simplified extruded slab) far overestimated the effects of stimulation with both voltage and current by comparison to our proposed realistic upper body model. However, the realistic upper body and full brain-only models demonstrated similar stimulation effects. In our investigation of the influence of anisotropic conductivity, the model with a fixed-ratio (1:10) anisotropic conductivity yielded greater penetration depths and larger extents of stimulation than the others. However, isotropic and anisotropic models with fixed ratios (1:2, 1:5) yielded similar stimulation effects. Lastly, whether the reference electrode was located on the right or left chest had no substantial effects on stimulation. PMID:25229673

  6. Geometry Career Unit: Junior High.

    ERIC Educational Resources Information Center

    Jensen, Daniel

    The guide, the product of an exemplary career education program for junior high school students, was developed to show how geometry can be applied to real-life career-oriented areas and to bring a practical approach to the teaching of geometry. It is designed to show how some of the theorems or postulates in geometry are used in different careers.…

  7. Numerical simulation of polymer flows: A parallel computing approach

    SciTech Connect

    Aggarwal, R.; Keunings, R.; Roux, F.X.

    1993-12-31

    We present a parallel algorithm for the numerical simulation of viscoelastic fluids on distributed memory computers. The algorithm has been implemented within a general-purpose commercial finite element package used in polymer processing applications. Results obtained on the Intel iPSC/860 computer demonstrate high parallel efficiency in complex flow problems. However, since the computational load is unknown a priori, load balancing is a challenging issue. We have developed an adaptive allocation strategy which dynamically reallocates the work load to the processors based upon the history of the computational procedure. We compare the results obtained with the adaptive and static scheduling schemes.

  8. A new approach to tag design in dolphin telemetry: Computer simulations to minimise deleterious effects

    NASA Astrophysics Data System (ADS)

    Pavlov, V. V.; Wilson, R. P.; Lucke, K.

    2007-02-01

    Remote-sensors and transmitters are powerful devices for studying cetaceans at sea. However, despite substantial progress in microelectronics and miniaturisation of systems, dolphin tags are imperfectly designed; additional drag from tags increases swim costs, compromises swimming capacity and manoeuvrability, and leads to extra loads on the animal's tissue. We propose a new approach to tag design, elaborating basic principles and incorporating design stages to minimise device effects by using computer-aided design. Initially, the operational conditions of the device are defined by quantifying the shape, hydrodynamics and range of the natural deformation of the dolphin body at the tag attachment site (such as close to the dorsal fin). Then, parametric models of both the dorsal fin and a tag are created using the derived data. The link between parameters of the fin and a tag model allows redesign of tag models according to expected changes of fin geometry (differences in fin shape related to species, sex, and age, and simulation of the bend of the fin during manoeuvres). A final virtual modelling stage uses iterative improvement of a tag model in a computational fluid dynamics (CFD) environment to enhance tag performance. This new method is considered a suitable tool for tag design before creation of the physical model of a tag and testing with conventional wind/water tunnel techniques. Ultimately, tag materials are selected to conform to the conditions identified by the modelling process and thus help create a physical model of a tag, which should minimise its impact on the animal carrier and thus increase the reliability and quality of the data obtained.

  9. COMPUTATIONAL TOXICOLOGY - OBJECTIVE 2: DEVELOPING APPROACHES FOR PRIORITIZING CHEMICALS FOR SUBSEQUENT SCREENING AND TESTING

    EPA Science Inventory

    One of the strategic objectives of the Computational Toxicology Program is to develop approaches for prioritizing chemicals for subsequent screening and testing. Approaches currently available for this process require extensive resources. Therefore, less costly and time-extensi...

  10. Computers and the Humanities Courses: Philosophical Bases and Approach.

    ERIC Educational Resources Information Center

    Ide, Nancy M.

    1987-01-01

    Discusses a Vassar College workshop and the debate it generated over the depth and breadth of computer knowledge needed by humanities students. Describes two positions: the "Holistic View," which emphasizes the understanding of the formal methods of computer implementation; and the "Expert Users View," which sees the humanist as a "user" of…

  11. Analysis of Children's Computational Errors: A Qualitative Approach

    ERIC Educational Resources Information Center

    Engelhardt, J. M.

    1977-01-01

    This study was designed to replicate and extend Roberts' (1968) efforts at classifying computational errors. 198 elementary school students were administered an 84-item arithmetic computation test. Eight types of errors were described which led to several tentative generalizations. (Editor/RK)

  12. An HCI Approach to Computing in the Real World

    ERIC Educational Resources Information Center

    Yardi, Sarita; Krolikowski, Pamela; Marshall, Taneshia; Bruckman, Amy

    2008-01-01

    We describe the implementation of a six-week course to teach Human-Computer Interaction (HCI) to high school students. Our goal was to explore the potential of HCI in motivating students to pursue future studies in related computing fields. Participants in our course learned to make connections between the types of technology they use in their…

  13. Neutron stimulated emission computed tomography: a Monte Carlo simulation approach.

    PubMed

    Sharma, A C; Harrawood, B P; Bender, J E; Tourassi, G D; Kapadia, A J

    2007-10-21

    A Monte Carlo simulation has been developed for neutron stimulated emission computed tomography (NSECT) using the GEANT4 toolkit. NSECT is a new approach to biomedical imaging that allows spectral analysis of the elements present within the sample. In NSECT, a beam of high-energy neutrons interrogates a sample and the nuclei in the sample are stimulated to an excited state by inelastic scattering of the neutrons. The characteristic gammas emitted by the excited nuclei are captured in a spectrometer to form multi-energy spectra. Currently, a tomographic image is formed using a collimated neutron beam to define the line integral paths for the tomographic projections. These projection data are reconstructed to form a representation of the distribution of individual elements in the sample. To facilitate the development of this technique, a Monte Carlo simulation model has been constructed from the GEANT4 toolkit. This simulation includes modeling of the neutron beam source and collimation, the samples, the neutron interactions within the samples, the emission of characteristic gammas, and the detection of these gammas in a Germanium crystal. In addition, the model allows the absorbed radiation dose to be calculated for internal components of the sample. NSECT presents challenges not typically addressed in Monte Carlo modeling of high-energy physics applications. In order to address issues critical to the clinical development of NSECT, this paper will describe the GEANT4 simulation environment and three separate simulations performed to accomplish three specific aims. First, comparison of a simulation to a tomographic experiment will verify the accuracy of both the gamma energy spectra produced and the positioning of the beam relative to the sample. Second, parametric analysis of simulations performed with different user-defined variables will determine the best way to effectively model low energy neutrons in tissue, which is a concern with the high hydrogen content in
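    A drastically simplified spectrum tally in the spirit of the above (a pure-Python stand-in for the GEANT4 model; the element lines and relative interaction probabilities are illustrative placeholders, not evaluated nuclear data):

```python
import numpy as np

rng = np.random.default_rng(42)

# Characteristic gamma lines (MeV) for a toy two-element sample.
LINES = {"C": 4.44, "Fe": 0.847}
SIGMA_REL = {"C": 0.6, "Fe": 0.4}  # relative inelastic-scatter probabilities

def simulate_spectrum(composition, n_neutrons=200_000, resolution=0.01):
    """Tally a gamma energy spectrum: each interacting neutron excites a nucleus
    chosen by abundance * cross-section; the nucleus emits its characteristic
    gamma, which is detected with a Gaussian energy blur."""
    elems = list(composition)
    p = np.array([composition[e] * SIGMA_REL[e] for e in elems])
    p /= p.sum()
    choice = rng.choice(len(elems), size=n_neutrons, p=p)
    energies = np.array([LINES[elems[i]] for i in choice])
    energies += rng.normal(0.0, resolution, n_neutrons)  # detector resolution
    hist, edges = np.histogram(energies, bins=600, range=(0.0, 6.0))
    return hist, edges

hist, edges = simulate_spectrum({"C": 0.9, "Fe": 0.1})
print("strongest line near %.2f MeV" % edges[hist.argmax()])
```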

  14. Assessment of tissue optical parameters in a spherical geometry using three different optical spectroscopy methods: comparison based on a theoretical approach

    NASA Astrophysics Data System (ADS)

    Vaudelle, F.; Askoura, M.; L'Huillier, J. P.

    2015-07-01

    Non-invasive retrieval of information from inside biological tissues can be performed with setups using continuous, time-dependent or frequency-modulated light sources emitting in the visible or near-infrared range. Moreover, biological structures such as brain, breast or fruits are closer to a spherical shape than to a slab. This paper focuses on the retrieval of tissue optical parameters in a spherical geometry using fits with analytical solutions adapted for semi-infinite geometry. The data were generated using three different optical spectroscopy methods: frequency-resolved, spatially-resolved, and time-resolved modes. Simulations based on a Monte Carlo code were performed on homogeneous spheres, with 18 spaced detectors located on their boundary. First, data are examined in the frequency domain. Second, they are treated with optimization algorithms to assess the optical coefficients. The computations show that the spatially-resolved measurements are often more robust than those related to the frequency-resolved mode. In the temporal domain, errors in the estimates also appear when fitting with the Fourier transform of a solution based on the semi-infinite geometry. Furthermore, when the analytical solution is modified by taking into account the spherical shape, the retrieval of the coefficients is improved.

  15. A streamline splitting pore-network approach for computationally inexpensive and accurate simulation of transport in porous media

    NASA Astrophysics Data System (ADS)

    Mehmani, Yashar; Oostrom, Mart; Balhoff, Matthew T.

    2014-03-01

    Several approaches have been developed in the literature for solving flow and transport at the pore scale. Some authors use a direct modeling approach where the fundamental flow and transport equations are solved on the actual pore-space geometry. Such direct modeling, while very accurate, comes at a great computational cost. Network models are computationally more efficient because the pore-space morphology is approximated. Typically, a mixed cell method (MCM) is employed for solving the flow and transport system which assumes pore-level perfect mixing. This assumption is invalid at moderate to high Peclet regimes. In this work, a novel Eulerian perspective on modeling flow and transport at the pore scale is developed. The new streamline splitting method (SSM) allows for circumventing the pore-level perfect-mixing assumption, while maintaining the computational efficiency of pore-network models. SSM was verified with direct simulations and validated against micromodel experiments; excellent matches were obtained across a wide range of pore-structure and fluid-flow parameters. The increase in the computational cost from MCM to SSM is shown to be minimal, while the accuracy of SSM is much higher than that of MCM and comparable to direct modeling approaches. Therefore, SSM can be regarded as an appropriate balance between incorporating detailed physics and controlling computational cost. The truly predictive capability of the model allows for the study of pore-level interactions of fluid flow and transport in different porous materials. In this paper, we apply SSM and MCM to study the effects of pore-level mixing on transverse dispersion in 3-D disordered granular media.
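    The pore-level perfect-mixing assumption of the MCM baseline is easy to state in code: each pore's outflow concentration is the flow-weighted average of its inflows. A minimal steady-state sketch on a toy network (hypothetical conductances and pressures; this illustrates only the MCM baseline, not the authors' SSM):

```python
import numpy as np

def mcm_steady_concentrations(cond, p, inlet_conc):
    """Mixed cell method on a pore network: cond maps throat (i, j) -> hydraulic
    conductance, p holds pore pressures, inlet_conc fixes inlet concentrations.
    Each pore's steady concentration is the flow-weighted average of its inflows
    (pore-level perfect mixing). Pores are swept in order of decreasing pressure
    so that upstream pores are always resolved first."""
    n = len(p)
    c = np.array([inlet_conc.get(i, 0.0) for i in range(n)])
    for i in sorted(range(n), key=lambda k: -p[k]):
        if i in inlet_conc:
            continue
        q_in = c_in = 0.0
        for (a, b), g in cond.items():
            for up, dn in ((a, b), (b, a)):
                if dn == i and p[up] > p[i]:
                    q = g * (p[up] - p[i])   # throat flow from the upstream pore
                    q_in += q
                    c_in += q * c[up]
        c[i] = c_in / q_in if q_in > 0 else 0.0
    return c

# 4-pore diamond: pore 0 feeds pores 1 and 2, which feed pore 3
cond = {(0, 1): 1.0, (0, 2): 2.0, (1, 3): 1.0, (2, 3): 2.0}
p = np.array([3.0, 2.0, 2.0, 1.0])
print(mcm_steady_concentrations(cond, p, {0: 1.0}))  # all 1.0 downstream
```

    SSM's contribution, per the abstract, is to replace this single perfectly mixed value per pore with sub-pore streamline bundles, which matters once advection outpaces in-pore mixing.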

  16. A streamline splitting pore-network approach for computationally inexpensive and accurate simulation of transport in porous media

    SciTech Connect

    Mehmani, Yashar; Oostrom, Martinus; Balhoff, Matthew

    2014-03-20

    Several approaches have been developed in the literature for solving flow and transport at the pore-scale. Some authors use a direct modeling approach where the fundamental flow and transport equations are solved on the actual pore-space geometry. Such direct modeling, while very accurate, comes at a great computational cost. Network models are computationally more efficient because the pore-space morphology is approximated. Typically, a mixed cell method (MCM) is employed for solving the flow and transport system which assumes pore-level perfect mixing. This assumption is invalid at moderate to high Peclet regimes. In this work, a novel Eulerian perspective on modeling flow and transport at the pore-scale is developed. The new streamline splitting method (SSM) allows for circumventing the pore-level perfect mixing assumption, while maintaining the computational efficiency of pore-network models. SSM was verified with direct simulations and excellent matches were obtained against micromodel experiments across a wide range of pore-structure and fluid-flow parameters. The increase in the computational cost from MCM to SSM is shown to be minimal, while the accuracy of SSM is much higher than that of MCM and comparable to direct modeling approaches. Therefore, SSM can be regarded as an appropriate balance between incorporating detailed physics and controlling computational cost. The truly predictive capability of the model allows for the study of pore-level interactions of fluid flow and transport in different porous materials. In this paper, we apply SSM and MCM to study the effects of pore-level mixing on transverse dispersion in 3D disordered granular media.

  17. What Computational Approaches Should be Taught for Physics?

    NASA Astrophysics Data System (ADS)

    Landau, Rubin

    2005-03-01

    The standard Computational Physics courses are designed for upper-level physics majors who already have some computational skills. We believe that it is important for first-year physics students to learn modern computing techniques that will be useful throughout their college careers, even before they have learned the math and science required for Computational Physics. To teach such Introductory Scientific Computing courses requires that some choices be made as to what subjects and computer languages will be taught. Our survey of colleagues active in Computational Physics and Physics Education shows no predominant choice, with strong positions taken for the compiled languages Java, C, C++ and Fortran90, as well as for problem-solving environments like Maple and Mathematica. Over the last seven years we have developed an Introductory course and have written up those courses as textbooks for others to use. We will describe our model of using both a problem-solving environment and a compiled language. The developed materials are available in both Maple and Mathematica, and Java and Fortran90 (Princeton University Press, to be published; www.physics.orst.edu/~rubin/IntroBook/).

  18. Reflections on John Monaghan's "Computer Algebra, Instrumentation, and the Anthropological Approach"

    ERIC Educational Resources Information Center

    Blume, Glen

    2007-01-01

    Reactions to John Monaghan's "Computer Algebra, Instrumentation and the Anthropological Approach" focus on a variety of issues related to the ergonomic approach (instrumentation) and anthropological approach to mathematical activity and practice. These include uses of the term technique; several possibilities for integration of the two approaches;…

  19. Methodical Approaches to Teaching of Computer Modeling in Computer Science Course

    ERIC Educational Resources Information Center

    Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina

    2015-01-01

    The purpose of this study was to justify a technique for introducing modeling methodology in computer science lessons. The need to study computer modeling arises because current trends toward strengthening the general-education and worldview functions of computer science call for additional research on the…

  20. Computational representation and hemodynamic characterization of in vivo acquired severe stenotic renal artery geometries using turbulence modeling.

    PubMed

    Kagadis, George C; Skouras, Eugene D; Bourantas, George C; Paraskeva, Christakis A; Katsanos, Konstantinos; Karnabatidis, Dimitris; Nikiforidis, George C

    2008-06-01

    The present study reports on computational fluid dynamics in the case of severe renal artery stenosis (RAS). An anatomically realistic model of a renal artery was reconstructed from CT scans and used to conduct CFD simulations of blood flow across the RAS. The recently developed shear stress transport (SST) turbulence model was applied in the simulation of blood flow in the region of interest. Blood flow was studied in vivo under the presence of RAS, and subsequently in simulated cases before the development of the RAS and after endovascular stent implantation. The pressure gradients in the RAS case were many orders of magnitude larger than in the healthy case. The presence of RAS increased flow resistance, which led to considerably lower blood flow rates. A simulated stent in place of the RAS decreased the flow resistance to levels proportional to, and even lower than, the simulated healthy case without the RAS. The wall shear stresses, differential pressure profiles, and net forces exerted on the surface of the atherosclerotic plaque at peak pulse were sufficiently distinctive to be considered potential indicators of hemodynamically significant RAS. PMID:17714975
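    For intuition on why a severe stenosis dominates flow resistance, a laminar Poiseuille estimate already shows the r⁻⁴ sensitivity (the study itself used turbulent SST CFD on a patient-specific geometry; the numbers below are illustrative only):

```python
import numpy as np

def poiseuille_resistance(radius_m, length_m, mu=3.5e-3):
    """Hydraulic resistance of a straight tube, R = 8*mu*L / (pi*r^4), with
    blood treated as Newtonian (viscosity ~3.5 mPa*s)."""
    return 8.0 * mu * length_m / (np.pi * radius_m**4)

r_healthy = 2.5e-3                 # m, nominal renal artery radius (illustrative)
R0 = poiseuille_resistance(r_healthy, length_m=0.01)
for stenosis in (0.0, 0.5, 0.8):   # fractional diameter reduction
    r = r_healthy * (1.0 - stenosis)
    R = poiseuille_resistance(r, length_m=0.01)
    print(f"{stenosis:.0%} stenosis: resistance x{R / R0:.0f}")
```

    A 50% diameter reduction raises the segment resistance 16-fold and an 80% reduction 625-fold, consistent with the abstract's report of pressure gradients many orders of magnitude larger across the stenosis.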

  1. A computer code for multiphase all-speed transient flows in complex geometries. MAST version 1.0

    NASA Technical Reports Server (NTRS)

    Chen, C. P.; Jiang, Y.; Kim, Y. M.; Shang, H. M.

    1991-01-01

    The operation of the MAST code, which computes transient solutions to the multiphase flow equations applicable to all-speed flows, is described. Two-phase flows are formulated based on the Eulerian-Lagrangian scheme, in which the continuous phase is described by the Navier-Stokes equations (or Reynolds equations for turbulent flows). The dispersed phase is formulated by a Lagrangian tracking scheme. The numerical solution algorithm utilized for fluid flows is a newly developed pressure-implicit algorithm based on the operator-splitting technique in generalized nonorthogonal coordinates. This operator split allows separate operation on each of the variable fields to handle pressure-velocity coupling. The resulting pressure correction equation has a hyperbolic nature and is effective for Mach numbers ranging from the incompressible limit to supersonic flow regimes. The present code adopts a nonstaggered grid arrangement; thus, the velocity components and other dependent variables are collocated at the same grid points. A sequence of benchmark-quality problems, including incompressible, subsonic, transonic, supersonic, and gas-droplet two-phase flows, as well as spray-combustion problems, was solved to demonstrate the robustness and accuracy of the present code.

  2. Computational Approaches for Translational Clinical Research in Disease Progression

    PubMed Central

    McGuire, Mary F.; Iyengar, M. Sriram; Mercer, David W.

    2011-01-01

    Today, there is an ever-increasing amount of biological and clinical data available that could be used to enhance a systems-based understanding of disease progression through innovative computational analysis. In this paper we review a selection of published research regarding computational methodologies, primarily from systems biology, that support translational research from the molecular level to the bedside, with a focus on applications in trauma and critical care. Trauma is the leading cause of mortality in Americans under 45 years of age, and its rapid progression offers both opportunities and challenges for computational analysis of trends in molecular patterns associated with outcomes and therapeutic interventions. This review presents methods and domain-specific examples that may inspire the development of new algorithms and computational methods that utilize both molecular and clinical data for diagnosis, prognosis and therapy in disease progression. PMID:21712727

  3. An Object-Oriented Approach to Writing Computational Electromagnetics Codes

    NASA Technical Reports Server (NTRS)

    Zimmerman, Martin; Mallasch, Paul G.

    1996-01-01

    Presently, most computer software development in the Computational Electromagnetics (CEM) community employs the structured programming paradigm, particularly using the Fortran language. Other segments of the software community began switching to an Object-Oriented Programming (OOP) paradigm in recent years to help ease design and development of highly complex codes. This paper examines design of a time-domain numerical analysis CEM code using the OOP paradigm, comparing OOP code and structured programming code in terms of software maintenance, portability, flexibility, and speed.

  4. AVES: A Computer Cluster System approach for INTEGRAL Scientific Analysis

    NASA Astrophysics Data System (ADS)

    Federici, M.; Martino, B. L.; Natalucci, L.; Umbertini, P.

    The AVES computing system, based on a "Cluster" architecture, is a fully integrated, low-cost computing facility dedicated to the archiving and analysis of INTEGRAL data. AVES is a modular system that uses the SLURM software resource manager and allows almost unlimited expandability (65,536 nodes and hundreds of thousands of processors); it is currently composed of 30 personal computers with quad-core CPUs, able to reach a computing power of 300 gigaflops (300x10^9 floating-point operations per second), with 120 GB of RAM and 7.5 terabytes (TB) of storage in UFS configuration, plus 6 TB for the users' area. AVES was designed and built to address the growing problems raised by the analysis of the large amount of data accumulated by the INTEGRAL mission (currently about 9 TB), which increases every year. The analysis software used is the OSA package, distributed by the ISDC in Geneva. This is a very complex package consisting of dozens of programs that cannot be converted to parallel computing. To overcome this limitation we developed a series of programs to distribute the analysis workload over the various nodes, making AVES automatically divide the analysis into N jobs sent to N cores. This solution thus produces a result similar to that obtained with a parallel computing configuration. In support of this we have developed tools that allow flexible use of the scientific software and quality control of on-line data storage. The AVES software package consists of about 50 specific programs. Overall, the computing time, compared to that of a single-processor personal computer, has been improved by up to a factor of 70.

  5. Constraining Viewing Geometries of Pulsars with Single-Peaked Gamma-ray Profiles Using a Multiwavelength Approach

    NASA Technical Reports Server (NTRS)

    Seyffert, A. S.; Venter, C.; Johnson, T. J.; Harding, A. K.

    2012-01-01

    Since the launch of the Large Area Telescope (LAT) on board the Fermi spacecraft in June 2008, the number of observed gamma-ray pulsars has increased dramatically. A large number of these are also observed at radio frequencies. Constraints on the viewing geometries of 5 of the 6 gamma-ray pulsars exhibiting single-peaked gamma-ray profiles were previously derived using high-quality radio polarization data [1]. We obtain independent constraints on the viewing geometries of all 6 by using a geometric emission code to model the Fermi LAT and radio light curves (LCs). We find fits for the magnetic inclination and observer angles by searching the solution space by eye. Our results are generally consistent with those previously obtained [1], although we do find small differences in some cases. We will indicate how the gamma-ray and radio pulse shapes, as well as their relative phase lags, lead to constraints in the solution space. Values for the flux correction factor (f(omega)) corresponding to the fits are also derived (with errors).

  6. THE FUTURE OF COMPUTER-BASED TOXICITY PREDICTION: MECHANISM-BASED MODELS VS. INFORMATION MINING APPROACHES

    EPA Science Inventory


    When we speak of computer-based toxicity prediction, we are generally referring to a broad array of approaches which rely primarily upon chemical structure ...

  7. Computer Support for a Systems Approach to Instruction; Problem Statement and Data Entry Techniques.

    ERIC Educational Resources Information Center

    Collins, Eugene A.; Larsen, Dean C.

    The Jefferson County Public School System in Colorado is conducting a study which implements a digital time-shared computer as support for a systems approach to instruction. This study currently involves one elementary school but it will support a total of thirteen schools in the future. The computer support includes computer-generated criterion…

  8. A Computational Approach to Qualitative Analysis in Large Textual Datasets

    PubMed Central

    Evans, Michael S.

    2014-01-01

    In this paper I introduce computational techniques to extend qualitative analysis into the study of large textual datasets. I demonstrate these techniques by using probabilistic topic modeling to analyze a broad sample of 14,952 documents published in major American newspapers from 1980 through 2012. I show how computational data mining techniques can identify and evaluate the significance of qualitatively distinct subjects of discussion across a wide range of public discourse. I also show how examining large textual datasets with computational methods can overcome methodological limitations of conventional qualitative methods, such as how to measure the impact of particular cases on broader discourse, how to validate substantive inferences from small samples of textual data, and how to determine if identified cases are part of a consistent temporal pattern. PMID:24498398
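
    As a minimal sketch of the kind of probabilistic topic modeling described above, the following Python fragment fits a two-topic model to a toy corpus with scikit-learn; the documents, topic count, and settings are placeholders, not the paper's newspaper dataset.

        # Toy stand-in for probabilistic topic modeling of a document corpus.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        docs = [
            "school board debates science curriculum standards",
            "biotech firm reports gene therapy trial results",
            "church leaders discuss stem cell research policy",
            "senators question energy policy and climate research",
        ]

        vectorizer = CountVectorizer(stop_words="english")
        X = vectorizer.fit_transform(docs)              # document-term counts
        lda = LatentDirichletAllocation(n_components=2, random_state=0)
        doc_topics = lda.fit_transform(X)               # per-document topic mix

        terms = vectorizer.get_feature_names_out()
        for k, weights in enumerate(lda.components_):
            top = weights.argsort()[-4:][::-1]          # four top terms per topic
            print(f"topic {k}:", ", ".join(terms[i] for i in top))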

  9. The influence of the heel effect in cone-beam computed tomography: artifacts in standard and novel geometries and their correction

    NASA Astrophysics Data System (ADS)

    Braun, H.; Kyriakou, Y.; Kachelrieß, M.; Kalender, W. A.

    2010-10-01

    For decades, the heel effect has been known to cause an angular dependence of the emitted spectrum of an x-ray tube. In radiography, artifacts were observed and attributed to the heel effect. However, no problems due to the heel effect were discerned in multi-slice computed tomography (MSCT) so far. With flat-detector CT (FDCT), involving larger cone angles and different system geometries, the heel effect might cause new artifacts. These artifacts were analyzed in this paper for system geometries different from the ones widely used nowadays. Simulations and measurements were performed. Simulations included symmetric as well as asymmetric detector layouts and different x-ray tube orientations with respect to the detector plane. The measurements were performed on a micro-CT system in an asymmetric detector layout. Furthermore, an analytical correction scheme is proposed to overcome heel effect artifacts. It was shown that the type of artifact greatly depends on the orientation of the x-ray tube and also on the type of detector alignment (i.e. symmetric or different types of asymmetric alignment). Certain combinations exhibited almost no significant artifact while others greatly influenced the quality of the reconstructed images. The proposed correction scheme showed good results that were further improved when also applying a scatter correction. When designing CT systems, care should be taken when placing the tube and the detector. Orientation of the x-ray tube like in most MSCT systems seems advisable in asymmetric detector layouts. However, a different type of tube orientation can be overcome with suitable correction schemes.

  10. Galaxy: a comprehensive approach for supporting accessible, reproducible, and transparent computational research in the life sciences.

    PubMed

    Goecks, Jeremy; Nekrutenko, Anton; Taylor, James

    2010-01-01

    Increased reliance on computational approaches in the life sciences has revealed grave concerns about how accessible and reproducible computation-reliant results truly are. Galaxy http://usegalaxy.org, an open web-based platform for genomic research, addresses these problems. Galaxy automatically tracks and manages data provenance and provides support for capturing the context and intent of computational methods. Galaxy Pages are interactive, web-based documents that provide users with a medium to communicate a complete computational analysis. PMID:20738864

  11. Linguistics, Computers, and the Language Teacher. A Communicative Approach.

    ERIC Educational Resources Information Center

    Underwood, John H.

    This analysis of the state of the art of computer programs and programming for language teaching has two parts. In the first part, an overview of the theory and practice of language teaching, Noam Chomsky's view of language, and the implications and problems of generative theory are presented. The theory behind the input model of language…

  12. Statistical Learning of Phonetic Categories: Insights from a Computational Approach

    ERIC Educational Resources Information Center

    McMurray, Bob; Aslin, Richard N.; Toscano, Joseph C.

    2009-01-01

    Recent evidence (Maye, Werker & Gerken, 2002) suggests that statistical learning may be an important mechanism for the acquisition of phonetic categories in the infant's native language. We examined the sufficiency of this hypothesis and its implications for development by implementing a statistical learning mechanism in a computational model…

  13. Preparing Students for Computer Aided Drafting (CAD). A Conceptual Approach.

    ERIC Educational Resources Information Center

    Putnam, A. R.; Duelm, Brian

    This presentation outlines guidelines for developing and implementing an introductory course in computer-aided drafting (CAD) that is geared toward secondary-level students. The first section of the paper, which deals with content identification and selection, includes lists of mechanical drawing and CAD competencies and a list of rationales for…

  14. A New Approach: Computer-Assisted Problem-Solving Systems

    ERIC Educational Resources Information Center

    Gok, Tolga

    2010-01-01

    Computer-assisted problem-solving systems are rapidly growing in educational use with the advent of the Internet. These systems allow students to do their homework and solve problems online with the help of programs such as Blackboard, WebAssign, and LON-CAPA. There are benefits and drawbacks to these systems. In this study, the…

  15. A "Service-Learning Approach" to Teaching Computer Graphics

    ERIC Educational Resources Information Center

    Hutzel, Karen

    2007-01-01

    The author taught a computer graphics course through a service-learning framework to undergraduate and graduate students in the spring of 2003 at Florida State University (FSU). The students in this course participated in learning a software program along with youths from a neighboring, low-income, primarily African-American community. Together,…

  16. Computer-Assisted Argument Mapping: A "Rationale" Approach

    ERIC Educational Resources Information Center

    Davies, W. Martin

    2009-01-01

    Computer-Assisted Argument Mapping (CAAM) is a new way of understanding arguments. While still embryonic in its development and application, CAAM is being used increasingly as a training and development tool in the professions and government. Inroads are also being made in its application within education. CAAM claims to be helpful in an…

  17. An algebraic approach to computer program design and memory management

    NASA Astrophysics Data System (ADS)

    Raynolds, James; Mullin, Lenore

    2008-03-01

    Beginning with an algebra of multi-dimensional arrays and following a set of reduction rules embodying a calculus of array indices, we translate (in a mechanizable way) from the high-level mathematics of any array-based problem and a machine specification to a mathematically-optimized implementation. Raynolds and Mullin introduced the name Conformal Computing to describe this process, which will be discussed in the context of data transforms such as the Fast Fourier and Wavelet Transforms and the QR decomposition. We discuss the discovery that the access patterns of the Wavelet Transform form a sufficiently regular subset of those for our cache-optimized FFT that we can be assured of achieving efficiency improvements for the Wavelet Transform similar to those found for the FFT. We present recent results in which careful attention to reproducible computational experiments in a dedicated/non-shared environment is demonstrated to be essential in order to optimally measure the response of the system (in this case the computer itself is the object of study) so as to be able to optimally tune the algorithm to the numerous cost functions associated with all of the elements of the memory/disk/network hierarchy. (The name Conformal Computing is protected: 2003, The Research Foundation, State University of New York.)
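
    The flavor of such a calculus of array indices can be conveyed with a toy reduction: collapsing a multi-dimensional array index into a flat row-major memory offset via strides. This Python sketch is only an illustration of the idea, not the authors' Conformal Computing system.

        # Row-major strides: stride of axis i is the product of the dimensions
        # to its right; a flat offset is the stride-weighted sum of indices.
        from functools import reduce

        def row_major_strides(shape):
            return [reduce(lambda a, b: a * b, shape[i + 1:], 1)
                    for i in range(len(shape))]

        def linearize(index, shape):
            return sum(i * s for i, s in zip(index, row_major_strides(shape)))

        shape = (4, 8, 16)
        print(row_major_strides(shape))      # [128, 16, 1]
        print(linearize((2, 3, 5), shape))   # 2*128 + 3*16 + 5 = 309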

  18. A Functional Analytic Approach to Computer-Interactive Mathematics

    ERIC Educational Resources Information Center

    Ninness, Chris; Rumph, Robin; McCuller, Glen; Harrison, Carol; Ford, Angela M.; Ninness, Sharon K.

    2005-01-01

    Following a pretest, 11 participants who were naive with regard to various algebraic and trigonometric transformations received an introductory lecture regarding the fundamentals of the rectangular coordinate system. Following the lecture, they took part in a computer-interactive matching-to-sample procedure in which they received training on…

  19. A topological approach to computer-aided sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Chan, S. P.; Munoz, R. M.

    1971-01-01

    Sensitivities of an arbitrary system are calculated using a general-purpose digital computer with available software packages for transfer function analysis. Sensitivity shows how element variation within a system affects system performance. A signal flow graph illustrates topological system behavior and the relationships among parameters in the system.

  20. One Instructor's Approach to Computer Assisted Instruction in General Chemistry.

    ERIC Educational Resources Information Center

    DeLorenzo, Ronald

    1982-01-01

    Discusses advantages of using computer-assisted instruction in a college general chemistry course. Advantages include using programs which generate random equations with double arrows (equilibrium systems) or generate alkane structural formula, asking for the correct IUPAC name of the structure. (Author/JN)

  1. Modeling civil violence: An agent-based computational approach

    PubMed Central

    Epstein, Joshua M.

    2002-01-01

    This article presents an agent-based computational model of civil violence. Two variants of the civil violence model are presented. In the first a central authority seeks to suppress decentralized rebellion. In the second a central authority seeks to suppress communal violence between two warring ethnic groups. PMID:11997450

  2. Traditional versus Computer-Mediated Approaches of Teaching Educational Measurement

    ERIC Educational Resources Information Center

    Alkharusi, Hussain; Kazem, Ali; Al-Musawai, Ali

    2010-01-01

    Research suggests that to adequately prepare teachers for the task of classroom assessment, attention should be given to the educational measurement instruction. In addition, the literature indicates that the use of computer-mediated instruction has the potential to affect student knowledge, skills, and attitudes. This study compared the effects…

  3. An Interdisciplinary, Computer-Centered Approach to Active Learning.

    ERIC Educational Resources Information Center

    Misale, Judi M.; And Others

    1996-01-01

    Describes a computer-assisted, interdisciplinary course in decision making developed to promote student participation and critical thinking. Students participate in 20 interactive exercises that utilize and illustrate psychological and economic concepts. Follow-up activities include receiving background information, group discussions, text…

  4. Computational Modelling and Simulation Fostering New Approaches in Learning Probability

    ERIC Educational Resources Information Center

    Kuhn, Markus; Hoppe, Ulrich; Lingnau, Andreas; Wichmann, Astrid

    2006-01-01

    Discovery learning in mathematics in the domain of probability based on hands-on experiments is normally limited because of the difficulty in providing sufficient materials and data volume in terms of repetitions of the experiments. Our cooperative, computational modelling and simulation environment engages students and teachers in composing and…

  5. Artificial Intelligence Approaches to Computer-Based Instruction.

    ERIC Educational Resources Information Center

    Bregar, William S.; Farley, Arthur M.

    1980-01-01

    Explores how new, operational models of cognition processing developed in Artificial Intelligence (AI) can be applied in computer assisted instruction (CAI) systems. CAI systems are surveyed in terms of their goals and formalisms, and a model for the development of a tutorial CAI system for algebra problem solving is introduced. (Author)

  6. A Discrete Approach to Computer-Oriented Calculus.

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    1979-01-01

    Some of the implications and advantages of an instructional approach using results from the calculus of finite differences and finite sums, both for motivation and as tools leading to applications, are discussed. (MP)

  7. National Computing Studies Summit: Open Learning Approaches to Computing Studies--An ACCE Discussion Paper

    ERIC Educational Resources Information Center

    Webb, Ian

    2008-01-01

    In 2005 the Australian Council for Computers in Education (ACCE) was successful in obtaining a grant from the National Centre of Science, Information and Communication Technology and Mathematics Education for Rural and Regional Australia (SiMERR) to undertake the Computing Studies Teachers Network Rural and Regional Focus Project. The project had five…

  8. A flexible, extendable, modular and computationally efficient approach to scattering-integral-based seismic full waveform inversion

    NASA Astrophysics Data System (ADS)

    Schumacher, F.; Friederich, W.; Lamara, S.

    2016-02-01

    We present a new conceptual approach to scattering-integral-based seismic full waveform inversion (FWI) that allows a flexible, extendable, modular and both computationally and storage-efficient numerical implementation. To achieve maximum modularity and extendability, interactions between the three fundamental steps carried out sequentially in each iteration of the inversion procedure, namely, solving the forward problem, computing waveform sensitivity kernels and deriving a model update, are kept at an absolute minimum and are implemented by dedicated interfaces. To realize storage efficiency and maximum flexibility, the spatial discretization of the inverted earth model is allowed to be completely independent of the spatial discretization employed by the forward solver. For computational efficiency reasons, the inversion is done in the frequency domain. The benefits of our approach are as follows: (1) Each of the three stages of an iteration is realized by a stand-alone software program. In this way, we avoid the monolithic, unflexible and hard-to-modify codes that have often been written for solving inverse problems. (2) The solution of the forward problem, required for kernel computation, can be obtained by any wave propagation modelling code giving users maximum flexibility in choosing the forward modelling method. Both time-domain and frequency-domain approaches can be used. (3) Forward solvers typically demand spatial discretizations that are significantly denser than actually desired for the inverted model. Exploiting this fact by pre-integrating the kernels allows a dramatic reduction of disk space and makes kernel storage feasible. No assumptions are made on the spatial discretization scheme employed by the forward solver. (4) In addition, working in the frequency domain effectively reduces the amount of data, the number of kernels to be computed and the number of equations to be solved. (5) Updating the model by solving a large equation system can be
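
    A schematic of the three-stage iteration and its dedicated interfaces might look as follows; every class, method, and name here is invented for illustration and does not correspond to the authors' actual programs.

        # Hypothetical sketch: the three stages interact only through narrow
        # interfaces, so each can be swapped out independently.
        class ForwardSolver:                    # stage 1: any modelling code
            def solve(self, model, frequencies):
                raise NotImplementedError

        class KernelComputer:                   # stage 2: sensitivity kernels,
            def kernels(self, synthetics, residuals):   # pre-integrated onto
                raise NotImplementedError               # the coarser inversion grid

        class ModelUpdater:                     # stage 3: derive a model update
            def update(self, model, kernels, residuals):
                raise NotImplementedError

        def fwi_iteration(model, data, frequencies, solver, kernel_comp, updater):
            synthetics = solver.solve(model, frequencies)
            residuals = data - synthetics       # frequency-domain misfit
            kernels = kernel_comp.kernels(synthetics, residuals)
            return updater.update(model, kernels, residuals)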

  9. Molecular Geometry.

    ERIC Educational Resources Information Center

    Desseyn, H. O.; And Others

    1985-01-01

    Compares linear-nonlinear and planar-nonplanar geometry through the valence-shell electron pairs repulsion (V.S.E.P.R.), Mulliken-Walsh, and electrostatic force theories. Indicates that although the V.S.E.P.R. theory has more advantages for elementary courses, an explanation of the best features of the different theories offers students a better…

  10. Geometry of trigonal boron coordination sphere in boronic acids derivatives - a bond-valence vector model approach.

    PubMed

    Czerwińska, Karolina; Madura, Izabela D; Zachara, Janusz

    2016-04-01

    A systematic analysis of the geometry of three-coordinate boron in boronic acid derivatives with a common [CBO2] skeleton is presented. The study is based on the bond-valence vector (BVV) model [Zachara (2007). Inorg. Chem. 46, 9760-9767], a simple tool for the identification and quantitative estimation of both steric and electronic factors causing deformations of the coordination sphere. The empirical bond-valence (BV) parameters r_ij and b in the exponential equation of Brown & Altermatt [(1985). Acta Cryst. B41, 244-247] were determined for B-O and B-C bonds using data deposited in the Cambridge Structural Database. The values obtained amount to r_BO = 1.364 Å, b_BO = 0.37 Å, r_BC = 1.569 Å and b_BC = 0.28 Å, and they were further used in the calculation of BVV lengths. The resultant BVV was shorter than 0.10 v.u. for 95% of the set comprising 897 [CBO2] fragments. Analysis of the distribution of BVV components allowed the description of subtle in-plane and out-of-plane deviations from the 'ideal' (sp^2) geometry of the boron coordination sphere. Distortions specific to distinct groups of compounds such as boronic acids, cyclic and acyclic esters, benzoxaboroles and hemiesters were revealed. In cyclic esters the direction of strain was found to be controlled by the ring-size effect. It was shown that the syn or anti location of substituents on the O atoms is decisive for the direction of deformation in both acids and acyclic esters. The greatest strains were observed for the benzoxaboroles, which showed the largest deviation of the resultant BVV from zero. The out-of-plane distortions, described by the v_z component of the resultant BVV, proved useful in identifying weak secondary interactions at the fourth coordination site of the boron centre. PMID:27048726
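
    For concreteness, the bond-valence expression s = exp((r0 - r)/b) can be evaluated with the B-O and B-C parameters quoted above; the sample bond lengths below are hypothetical, chosen to illustrate that the valences at a trigonal [CBO2] boron should sum to roughly 3.

        # Bond valences from the Brown & Altermatt exponential expression,
        # using the r0 and b parameters reported in the abstract.
        import math

        PARAMS = {"B-O": (1.364, 0.37), "B-C": (1.569, 0.28)}  # (r0, b) in Å

        def bond_valence(bond, r):
            r0, b = PARAMS[bond]
            return math.exp((r0 - r) / b)

        bonds = [("B-O", 1.36), ("B-O", 1.37), ("B-C", 1.57)]  # hypothetical lengths
        total = sum(bond_valence(kind, r) for kind, r in bonds)
        print(f"total valence at boron: {total:.2f}")   # expect ~3 v.u.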

  11. Common Geometry Module

    Energy Science and Technology Software Center (ESTSC)

    2005-01-01

    The Common Geometry Module (CGM) is a code library which provides geometry functionality used for mesh generation and other applications. This functionality includes that commonly found in solid modeling engines, like geometry creation, query and modification; CGM also includes capabilities not commonly found in solid modeling engines, like geometry decomposition tools and support for shared material interfaces. CGM is built upon the ACIS solid modeling engine, but also includes geometry capability developed beside and on top of ACIS. CGM can be used as-is to provide geometry functionality for codes needing this capability. However, CGM can also be extended using derived classes in C++, allowing the geometric model to serve as the basis for other applications, for example mesh generation. CGM is supported on Sun Solaris, SGI, HP, IBM, DEC, Linux and Windows NT platforms. CGM also includes support for loading ACIS models on parallel computers, using MPI-based communication. Future plans for CGM are to port it to different solid modeling engines, including Pro/Engineer or SolidWorks. CGM is being released into the public domain under an LGPL license; the ACIS-based engine is available to ACIS licensees on request.

  12. TOWARD HIGHLY SECURE AND AUTONOMIC COMPUTING SYSTEMS: A HIERARCHICAL APPROACH

    SciTech Connect

    Lee, Hsien-Hsin S

    2010-05-11

    The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting, and introspective, with self-healing capability under the circumstances of improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) System-wide, unified introspection techniques for autonomic systems, (2) Secure information-flow microarchitecture, (3) Memory-centric security architecture, (4) Authentication control and its implications for security, (5) Digital rights management, (6) Microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.

  13. Computational approaches to identify functional genetic variants in cancer genomes

    PubMed Central

    Gonzalez-Perez, Abel; Mustonen, Ville; Reva, Boris; Ritchie, Graham R.S.; Creixell, Pau; Karchin, Rachel; Vazquez, Miguel; Fink, J. Lynn; Kassahn, Karin S.; Pearson, John V.; Bader, Gary; Boutros, Paul C.; Muthuswamy, Lakshmi; Ouellette, B.F. Francis; Reimand, Jüri; Linding, Rune; Shibata, Tatsuhiro; Valencia, Alfonso; Butler, Adam; Dronov, Serge; Flicek, Paul; Shannon, Nick B.; Carter, Hannah; Ding, Li; Sander, Chris; Stuart, Josh M.; Stein, Lincoln D.; Lopez-Bigas, Nuria

    2014-01-01

    The International Cancer Genome Consortium (ICGC) aims to catalog genomic abnormalities in tumors from 50 different cancer types. Genome sequencing reveals hundreds to thousands of somatic mutations in each tumor, but only a minority drive tumor progression. We present the result of discussions within the ICGC on how to address the challenge of identifying mutations that contribute to oncogenesis, tumor maintenance or response to therapy, and recommend computational techniques to annotate somatic variants and predict their impact on cancer phenotype. PMID:23900255

  14. Modeling Cu2+-Aβ complexes from computational approaches

    NASA Astrophysics Data System (ADS)

    Alí-Torres, Jorge; Mirats, Andrea; Maréchal, Jean-Didier; Rodríguez-Santiago, Luis; Sodupe, Mariona

    2015-09-01

    Amyloid plaque formation and oxidative stress are two key events in the pathology of Alzheimer's disease (AD), in which metal cations have been shown to play an important role. In particular, the interaction of the redox-active Cu2+ metal cation with Aβ has been found to interfere in amyloid aggregation and to lead to reactive oxygen species (ROS). A detailed knowledge of the electronic and molecular structure of Cu2+-Aβ complexes is thus important to gain a better understanding of the role of these complexes in the development and progression of AD. The computational treatment of these systems requires a combination of several available computational methodologies, because two fundamental aspects have to be addressed: the metal coordination sphere and the conformation adopted by the peptide upon copper binding. In this paper we review the main computational strategies used to deal with Cu2+-Aβ coordination and to build plausible Cu2+-Aβ models that will afterwards allow physicochemical properties of interest, such as their redox potential, to be determined.

  15. Computational Approaches to Viral Evolution and Rational Vaccine Design

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Tanmoy

    2006-10-01

    Viral pandemics, including HIV, are a major health concern across the world. Experimental techniques available today have uncovered a great wealth of information about how these viruses infect, grow, and cause disease, as well as how our body attempts to defend itself against them. Nevertheless, due to the high variability and fast evolution of many of these viruses, the traditional method of developing vaccines by presenting a heuristically chosen strain to the body fails, and an effective intervention strategy still eludes us. A large amount of carefully curated genomic data on a number of these viruses is now available, often annotated with disease and immunological context. The availability of parallel computers has now made it possible to carry out a systematic analysis of this data within an evolutionary framework. I will describe, as an example, how computations on such data have allowed us to understand the origins and diversification of HIV, the causative agent of AIDS. On the practical side, computations on the same data are now being used to inform the choice or design of optimal vaccine strains.

  16. An engineering based approach for hydraulic computations in river flows

    NASA Astrophysics Data System (ADS)

    Di Francesco, S.; Biscarini, C.; Pierleoni, A.; Manciola, P.

    2016-06-01

    This paper presents an engineering-based approach to hydraulic risk evaluation. The aim of the research is to identify a criterion for choosing the simplest appropriate model to use in different scenarios, as the characteristics of the main river channel vary. The complete flow field, generally expressed in terms of pressure, velocities and accelerations, can be described through a three-dimensional approach that considers all flow properties varying in all directions. In many practical applications for river flow studies, however, the greatest changes occur only in two dimensions or even only in one. In these cases the use of simplified approaches can lead to accurate results, with easy-to-build and faster simulations. The study has been conducted taking into account a dimensionless channel parameter, the ratio of the curvature radius to the channel width (R/B).

  17. A personal-computer-based package for interactive assessment of magnetohydrodynamic equilibrium and poloidal field coil design in axisymmetric toroidal geometry

    SciTech Connect

    Kelleher, W.P. ); Steiner, D. . Dept. of Nuclear Science)

    1989-07-01

    A personal-computer (PC)-based calculational approach assesses magnetohydrodynamic (MHD) equilibrium and poloidal field (PF) coil arrangement in a highly interactive mode, well suited for tokamak scoping studies. The system developed involves a two-step process: the MHD equilibrium is calculated, and then a PF coil arrangement consistent with the equilibrium is determined in an interactive design environment. In this paper the approach is used to examine four distinctly different toroidal configurations: the STARFIRE reactor, a spherical torus (ST), the Big Dee, and an elongated tokamak. In these applications the PC-based results are benchmarked against those of a mainframe code for STARFIRE, ST, and the Big Dee. The equilibrium and PF coil arrangement calculations obtained with the PC approach agree within a few percent with those obtained with the mainframe code.

  18. Residue Geometry Networks: A Rigidity-Based Approach to the Amino Acid Network and Evolutionary Rate Analysis.

    PubMed

    Fokas, Alexander S; Cole, Daniel J; Ahnert, Sebastian E; Chin, Alex W

    2016-01-01

    Amino acid networks (AANs) abstract the protein structure by recording the amino acid contacts and can provide insight into protein function. Herein, we describe a novel AAN construction technique that employs the rigidity analysis tool FIRST to build the AAN, which we refer to as the residue geometry network (RGN). We show that this new construction can be combined with network theory methods to include the effects of allowed conformal motions and local chemical environments. Importantly, this is done without the costly molecular dynamics simulations required by other AAN-related methods, which allows us to analyse large proteins and/or data sets. We have calculated the centrality of the residues belonging to 795 proteins. The results display a strong, negative correlation between residue centrality and the evolutionary rate. Furthermore, among residues with high closeness, those with low degree were particularly strongly conserved. Random walk simulations using the RGN were also successful in identifying allosteric residues in proteins involved in GPCR signalling. The dynamic function of these residues remains largely hidden in the traditional distance-cutoff construction technique. Despite being constructed from only the crystal structure, the results in this paper suggest that the RGN can identify residues that fulfil a dynamical function. PMID:27623708

  19. Controls on cross-sectional geometry of extensional basins, east-central Nevada -- A seismic-stratigraphic approach

    SciTech Connect

    Potter, C.J.; Grow, J.A.; Miller, J.J. )

    1993-04-01

    A 110-km regional seismic profile in east-central Nevada crosses Neogene east-tilted half-grabens in (from west to east) Railroad Valley (RRV), White River Valley, Cave Valley (CV), Muleshoe Valley, and Lake Valley. Variations in the internal architecture of these basins may be related to two factors: (1) differences in structural evolution and (2) the position of the cross section (i.e., the seismic profile) with respect to major depocenters, accommodation zones, and other along-strike transitions in basin geometry. A detailed grid of seismic data in Railroad Valley and proprietary data from other basins show that, without adequate three-dimensional seismic control, one should carefully consider factor (2) before generalizing about factor (1). As illustrations, the authors compare cross sections derived from the seismic data across RRV (latitude of Grant Canyon) and CV (latitude of Sidehill Pass). In summary, data from RRV and CV illustrate the complexity of broad basin-bounding extensional fault zones and suggest that listric faulting and stratal rotation are characteristic of basin depocenters, whereas translation above planar bounding faults is characteristic of parts of extensional basins that are removed from the depocenter.

  20. Computational Approach to Diarylprolinol-Silyl Ethers in Aminocatalysis.

    PubMed

    Halskov, Kim Søholm; Donslund, Bjarke S; Paz, Bruno Matos; Jørgensen, Karl Anker

    2016-05-17

    Asymmetric organocatalysis has witnessed a remarkable development since its "re-birth" in the beginning of the millennium. In this rapidly growing field, computational investigations have proven to be an important contribution to the elucidation of mechanisms and the rationalization of the stereochemical outcomes of many of the reaction concepts developed. The improved understanding of mechanistic details has facilitated the further advancement of the field. The diarylprolinol-silyl ethers have since their introduction been among the most applied catalysts in asymmetric aminocatalysis due to their robustness and generality. Although aminocatalytic methods at first glance appear to follow relatively simple mechanistic principles, more comprehensive computational studies have shown that this notion is in some cases deceiving and that more complex pathways might be operating. In this Account, the application of density functional theory (DFT) and other computational methods to systems catalyzed by the diarylprolinol-silyl ethers is described. It will be illustrated how computational investigations have shed light on the structure and reactivity of important intermediates in aminocatalysis, such as enamines and iminium ions formed from aldehydes and α,β-unsaturated aldehydes, respectively. Enamine and iminium ion catalysis can be classified as HOMO-raising and LUMO-lowering activation modes. In these systems, exclusive reactivity through one of the possible intermediates is often a requisite for achieving high stereoselectivity; therefore, the appreciation of subtle energy differences has been vital for the efficient development of new stereoselective reactions. The diarylprolinol-silyl ethers have also allowed for novel activation modes for unsaturated aldehydes, which have opened up avenues for the development of new remote functionalization reactions of poly-unsaturated carbonyl compounds via di-, tri-, and tetraenamine intermediates and vinylogous iminium ions.

  1. Novel Approaches to Adaptive Angular Approximations in Computational Transport

    SciTech Connect

    Marvin L. Adams; Igor Carron; Paul Nelson

    2006-06-04

    The particle-transport equation is notoriously difficult to discretize accurately, largely because the solution can be discontinuous in every variable. At any given spatial position and energy E, for example, the transport solution can be discontinuous at an arbitrary number of arbitrary locations in the direction domain. Even if the solution is continuous, it is often devoid of smoothness. This makes the direction variable extremely difficult to discretize accurately. We have attacked this problem with adaptive discretizations in the angle variables, using two distinctly different approaches. The first approach used wavelet function expansions directly and exploited their ability to capture sharp local variations. The second used discrete ordinates with a spatially varying quadrature set that adapts to the local solution. The first approach is very different from that in today’s transport codes, while the second could conceivably be implemented in such codes. Both approaches succeed in reducing angular discretization error to any desired level. The work described and results presented in this report add significantly to the understanding of angular discretization in transport problems and demonstrate that it is possible to solve this important long-standing problem in deterministic transport. Our results show that our adaptive discrete-ordinates (ADO) approach successfully: 1) Reduces angular discretization error to user-selected “tolerance” levels in a variety of difficult test problems; 2) Achieves a given error with significantly fewer unknowns than non-adaptive discrete ordinates methods; 3) Can be implemented within standard discrete-ordinates solution techniques, and thus could generate a significant impact on the field in a relatively short time. Our results show that our adaptive wavelet approach: 1) Successfully reduces the angular discretization error to arbitrarily small levels in a variety of difficult test problems, even when using the

  2. Computational approaches to metabolic engineering utilizing systems biology and synthetic biology

    PubMed Central

    Fong, Stephen S.

    2014-01-01

    Metabolic engineering modifies cellular function to address various biochemical applications. Underlying metabolic engineering efforts are a host of tools and knowledge that are integrated to enable successful outcomes. Concurrent development of computational and experimental tools has enabled different approaches to metabolic engineering. One approach is to leverage knowledge and computational tools to prospectively predict designs to achieve the desired outcome. An alternative approach is to utilize combinatorial experimental tools to empirically explore the range of cellular function and to screen for desired traits. This mini-review focuses on computational systems biology and synthetic biology tools that can be used in combination for prospective in silico strain design. PMID:25379141

  3. Workflow Scheduling in Grid Computing Environment using a Hybrid GAACO Approach

    NASA Astrophysics Data System (ADS)

    Sathish, Kuppani; RamaMohan Reddy, A.

    2016-06-01

    In recent trends, grid computing is one of the emerging areas in computing platforms, supporting parallel and distributed environments. A central problem in grid computing is the scheduling of workflows according to user specifications, a challenging task that also impacts performance. This paper proposes a hybrid GAACO approach, a combination of a Genetic Algorithm and an Ant Colony Optimization algorithm. The GAACO approach proposes different types of scheduling heuristics for the grid environment. The main objective of this approach is to satisfy all the defined constraints and user parameters.
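
    A stripped-down sketch of the genetic-algorithm half of such a scheduler appears below, evolving task-to-node assignments toward minimum makespan; the task costs, node speeds, and GA settings are toy values, and the ant colony component of the hybrid is omitted.

        # Toy GA for workflow scheduling: chromosome = task-to-node assignment.
        import random

        random.seed(0)
        task_cost = [4, 2, 7, 3, 5, 1]       # work units per task
        node_speed = [1.0, 2.0, 4.0]         # work units per second per node

        def makespan(assign):                # finishing time of the busiest node
            load = [0.0] * len(node_speed)
            for task, node in enumerate(assign):
                load[node] += task_cost[task] / node_speed[node]
            return max(load)

        def evolve(pop_size=30, generations=60, mutation=0.2):
            pop = [[random.randrange(len(node_speed)) for _ in task_cost]
                   for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=makespan)              # lower makespan is fitter
                pop = pop[:pop_size // 2]           # keep the best half
                while len(pop) < pop_size:
                    a, b = random.sample(pop[:pop_size // 2], 2)
                    cut = random.randrange(1, len(task_cost))
                    child = a[:cut] + b[cut:]       # one-point crossover
                    if random.random() < mutation:  # point mutation
                        child[random.randrange(len(child))] = \
                            random.randrange(len(node_speed))
                    pop.append(child)
            return min(pop, key=makespan)

        best = evolve()
        print(best, round(makespan(best), 2))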

  4. Target Detection Using Fractal Geometry

    NASA Technical Reports Server (NTRS)

    Fuller, J. Joseph

    1991-01-01

    The concepts and theory of fractal geometry were applied to the problem of segmenting a 256 x 256 pixel image so that manmade objects could be extracted from natural backgrounds. The two most important measurements necessary to extract these manmade objects were fractal dimension and lacunarity. Provision was made to pass the manmade portion to a lookup table for subsequent identification. A computer program was written to construct cloud backgrounds of fractal dimensions which were allowed to vary between 2.2 and 2.8. Images of three model space targets were combined with these backgrounds to provide a data set for testing the validity of the approach. Once the data set was constructed, computer programs were written to extract estimates of the fractal dimension and lacunarity on 4 x 4 pixel subsets of the image. It was shown that for clouds of fractal dimension 2.7 or less, appropriate thresholding on fractal dimension and lacunarity yielded a 64 x 64 edge-detected image with all or most of the cloud background removed. These images were enhanced by an erosion and dilation to provide the final image passed to the lookup table. While the ultimate goal was to pass the final image to a neural network for identification, this work shows the applicability of fractal geometry to the problems of image segmentation, edge detection and separating a target of interest from a natural background.
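
    The central measurement, fractal dimension, can be estimated with a standard box-counting scheme like the sketch below; this stand-in does not reproduce the 4 x 4 pixel windowed estimator or the lacunarity measure of the original work.

        # Box-counting dimension of a binary image: count occupied boxes at
        # several box sizes and fit the slope of log(count) vs log(1/size).
        import numpy as np

        def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32)):
            counts = []
            for s in sizes:
                h, w = img.shape[0] // s * s, img.shape[1] // s * s
                blocks = img[:h, :w].reshape(h // s, s, w // s, s)
                counts.append(blocks.any(axis=(1, 3)).sum())
            slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)),
                                  np.log(counts), 1)
            return slope

        rng = np.random.default_rng(0)
        noise = rng.random((256, 256)) > 0.5        # dense noise: dimension ~2
        print(f"estimated dimension: {box_counting_dimension(noise):.2f}")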

  5. A complex systems approach to computational molecular biology

    SciTech Connect

    Lapedes, A. |

    1993-09-01

    We report on the continuing research program at the Santa Fe Institute that applies complex systems methodology to computational molecular biology. Two aspects stressed here are the use of co-evolving adaptive neural networks for determining predictable protein structure classifications, and the use of information theory to elucidate protein structure and function. A "snapshot" of the current state of research in these two topics is presented, representing the present state of two major research thrusts in the program of Genetic Data and Sequence Analysis at the Santa Fe Institute.

  6. A computer simulation approach to measurement of human control strategy

    NASA Technical Reports Server (NTRS)

    Green, J.; Davenport, E. L.; Engler, H. F.; Sears, W. E., III

    1982-01-01

    Human control strategy is measured through use of a psychologically-based computer simulation which reflects a broader theory of control behavior. The simulation is called the human operator performance emulator, or HOPE. HOPE was designed to emulate control learning in a one-dimensional preview tracking task and to measure control strategy in that setting. When given a numerical representation of a track and information about current position in relation to that track, HOPE generates positions for a stick controlling the cursor to be moved along the track. In other words, HOPE generates control stick behavior corresponding to that which might be used by a person learning preview tracking.

  7. Stereo Orthogonal Axonometric Perspective for the Teaching of Descriptive Geometry

    ERIC Educational Resources Information Center

    Méxas, José Geraldo Franco; Guedes, Karla Bastos; Tavares, Ronaldo da Silva

    2015-01-01

    Purpose: The purpose of this paper is to present the development of a software for stereo visualization of geometric solids, applied to the teaching/learning of Descriptive Geometry. Design/methodology/approach: The paper presents the traditional method commonly used in computer graphic stereoscopic vision (implemented in C language) and the…

  8. Dark Geometry

    NASA Astrophysics Data System (ADS)

    Cembranos, J. A. R.; Dobado, A.; Maroto, A. L.

    Extra-dimensional theories contain additional degrees of freedom related to the geometry of the extra space which can be interpreted as new particles. Such theories allow us to reformulate most of the fundamental problems of physics from a completely different point of view. In this essay, we concentrate on the brane fluctuations which are present in brane-worlds, and on how such oscillations of the space-time geometry itself along curved extra dimensions can help to resolve the Universe's missing mass problem. The energy scales involved in these models are low compared to the Planck scale, which means that some of the distinctive signals of brane fluctuations could be detected in future colliders and in direct or indirect dark matter searches.

  9. A Simple and Efficient Computational Approach to Chafed Cable Time-Domain Reflectometry Signature Prediction

    NASA Technical Reports Server (NTRS)

    Kowalski, Marc Edward

    2009-01-01

    A method for the prediction of time-domain signatures of chafed coaxial cables is presented. The method is quasi-static in nature, and is thus efficient enough to be included in inference and inversion routines. Unlike previous models proposed, no restriction on the geometry or size of the chafe is required in the present approach. The model is validated and its speed is illustrated via comparison to simulations from a commercial, three-dimensional electromagnetic simulator.
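
    The physics behind a chafe signature can be conveyed by the basic transmission-line relation: a local impedance change from Z1 to Z2 reflects the fraction gamma = (Z2 - Z1)/(Z2 + Z1) of an incident step. The impedance values below are hypothetical; the quasi-static chafe model itself is not reproduced here.

        # Reflection coefficient at a local impedance discontinuity.
        def reflection_coefficient(z1, z2):
            return (z2 - z1) / (z2 + z1)

        z_nominal = 50.0    # ohms, intact coaxial cable
        z_chafed = 53.0     # ohms, hypothetical rise where the shield is abraded
        gamma = reflection_coefficient(z_nominal, z_chafed)
        print(f"reflected fraction at the chafe: {gamma:+.4f}")  # ~ +0.0291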

  10. Unit cell geometry of 3-D braided structures

    NASA Technical Reports Server (NTRS)

    Du, Guang-Wu; Ko, Frank K.

    1993-01-01

    The traditional approach used in modeling of composites reinforced by three-dimensional (3-D) braids is to assume a simple unit cell geometry of a 3-D braided structure with known fiber volume fraction and orientation. In this article, we first examine 3-D braiding methods in the light of braid structures, followed by the development of geometric models for 3-D braids using a unit cell approach. The unit cell geometry of 3-D braids is identified and the relationship of structural parameters such as yarn orientation angle and fiber volume fraction with the key processing parameters established. The limiting geometry has been computed by establishing the point at which yarns jam against each other. Using this factor makes it possible to identify the complete range of allowable geometric arrangements for 3-D braided preforms. This identified unit cell geometry can be translated to mechanical models which relate the geometrical properties of fabric preforms to the mechanical responses of composite systems.

  11. One-loop kink mass shifts: A computational approach

    NASA Astrophysics Data System (ADS)

    Alonso Izquierdo, A.; Guilarte, J. Mateos

    2011-11-01

    In this paper we develop a procedure to compute the one-loop quantum correction to the kink masses in generic (1+1)-dimensional one-component scalar field theoretical models. The procedure uses the generalized zeta function regularization method helped by the Gilkey-de Witt asymptotic expansion of the heat function via Mellin's transform. We find a formula for the one-loop kink mass shift that depends only on the part of the energy density with no field derivatives, evaluated by means of a symbolic software algorithm that automates the computation. The improved algorithm with respect to earlier work in this subject has been tested in the sine-Gordon and λ(ϕ^4)_2 models. The quantum corrections of the sG-soliton and λ(ϕ^4)_2-kink masses have been estimated with relative errors of 0.00006% and 0.00007%, respectively. Thereafter, the algorithm is applied to other models. In particular, an interesting one-parametric family of double sine-Gordon models interpolating between the ordinary sine-Gordon and a re-scaled sine-Gordon model is addressed. Another one-parametric family, in this case of ϕ^6 models, is analyzed. The main virtue of our procedure is its versatility: it can be applied to practically any type of relativistic scalar field model supporting kinks.

  12. A uniform approach for programming distributed heterogeneous computing systems

    PubMed Central

    Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas

    2014-01-01

    Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater’s performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations. PMID:25844015

  13. [Informational approach to radiology department by end user computing].

    PubMed

    Yamaguchi, Masaya; Katoh, Tsutomu; Murota, Makiko; Kohchi, Hideyuki; Miura, Shinji; Ishikawa, Midori; Ohhiro, Mika

    2009-04-20

    In recent years, due to the advanced computerization of medical institutions, systems such as the radiology information system (RIS) and reporting systems have been used extensively at radiology departments. However, the introduction of these systems requires a great amount of money, and they have not yet been introduced in our hospital. On the other hand, thanks to the increasing sophistication and falling prices of personal computers (PCs), there has been a rapid expansion of end-user computing (EUC), in which the users of a system actively build and manage the system for their own duties. Under these circumstances, in order to support these duties at low cost, we computerized the duties of our Radiology Department using EUC. Specifically, we used general-purpose database software to build a system with functions for recording the implementation of medical examinations and treatments, booking examinations, and producing diagnostic imaging reports. This system, developed according to the details of conventional duties and requests from medical personnel, makes it possible to alleviate duties that were previously done manually. PMID:19420829

  14. Inverse problems and computational cell metabolic models: a statistical approach

    NASA Astrophysics Data System (ADS)

    Calvetti, D.; Somersalo, E.

    2008-07-01

    In this article, we give an overview of the Bayesian modelling of metabolic systems at the cellular and subcellular level. The models are based on detailed description of key biochemical reactions occurring in tissue, which may in turn be compartmentalized into cytosol and mitochondria, and of transports between the compartments. The classical deterministic approach which models metabolic systems as dynamical systems with Michaelis-Menten kinetics, is replaced by a stochastic extension where the model parameters are interpreted as random variables with an appropriate probability density. The inverse problem of cell metabolism in this setting consists of estimating the density of the model parameters. After discussing some possible approaches to solving the problem, we address the issue of how to assess the reliability of the predictions of a stochastic model by proposing an output analysis in terms of model uncertainties. Visualization modalities for organizing the large amount of information provided by the Bayesian dynamic sensitivity analysis are also illustrated.
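
    A minimal sketch of the stochastic extension described above: treat the Michaelis-Menten parameters as random variables, sample them, and propagate each draw through the kinetic ODE. The priors and initial conditions below are made-up toy values.

        # Michaelis-Menten substrate decay with random Vmax and Km:
        # dS/dt = -Vmax * S / (Km + S)
        import numpy as np
        from scipy.integrate import solve_ivp

        rng = np.random.default_rng(1)

        def final_substrate(vmax, km, s0=1.0, t_end=10.0):
            sol = solve_ivp(lambda t, s: -vmax * s / (km + s),
                            (0.0, t_end), [s0], t_eval=[t_end])
            return sol.y[0, -1]

        vmax = rng.uniform(0.1, 0.5, size=200)     # toy prior on Vmax
        km = rng.uniform(0.2, 1.0, size=200)       # toy prior on Km
        outcomes = [final_substrate(v, k) for v, k in zip(vmax, km)]
        print(f"S(t=10): mean {np.mean(outcomes):.3f}, std {np.std(outcomes):.3f}")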

  15. A comprehensive approach to decipher biological computation to achieve next generation high-performance exascale computing.

    SciTech Connect

    James, Conrad D.; Schiess, Adrian B.; Howell, Jamie; Baca, Micheal J.; Partridge, L. Donald; Finnegan, Patrick Sean; Wolfley, Steven L.; Dagel, Daryl James; Spahn, Olga Blum; Harper, Jason C.; Pohl, Kenneth Roy; Mickel, Patrick R.; Lohn, Andrew; Marinella, Matthew

    2013-10-01

    The human brain (volume = 1200 cm^3) consumes 20 W and is capable of performing >10^16 operations/s. Current supercomputer technology has reached 10^15 operations/s, yet it requires 1500 m^3 and 3 MW, giving the brain a 10^12 advantage in operations/s/W/cm^3. Thus, to reach exascale computation, two achievements are required: 1) improved understanding of computation in biological tissue, and 2) a paradigm shift towards neuromorphic computing where hardware circuits mimic properties of neural tissue. To address 1), we will interrogate corticostriatal networks in mouse brain tissue slices, specifically with regard to their frequency-filtering capabilities as a function of input stimulus. To address 2), we will instantiate biological computing characteristics such as multi-bit storage into hardware devices with future computational and memory applications. Resistive memory devices will be modeled, designed, and fabricated in the MESA facility in consultation with our internal and external collaborators.
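
    The quoted advantage follows from simple arithmetic on the stated figure of merit, operations per second per watt per cm^3:

        # Check of the brain-vs-supercomputer figure of merit quoted above.
        brain = 1e16 / (20 * 1200)          # ops/s per (W * cm^3)
        machine = 1e15 / (3e6 * 1.5e9)      # 3 MW and 1500 m^3 = 1.5e9 cm^3
        print(f"brain  : {brain:.2e} ops/s/W/cm^3")
        print(f"machine: {machine:.2e} ops/s/W/cm^3")
        print(f"advantage: {brain / machine:.1e}")   # ~2e12, i.e. of order 10^12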

  16. A dynamical-systems approach for computing ice-affected streamflow

    USGS Publications Warehouse

    Holtschlag, David J.

    1996-01-01

    A dynamical-systems approach was developed and evaluated for computing ice-affected streamflow. The approach provides for dynamic simulation and parameter estimation of site-specific equations relating ice effects to routinely measured environmental variables. Comparison indicates that results from the dynamical-systems approach ranked higher than results from 11 analytical methods previously investigated on the basis of accuracy and feasibility criteria. Additional research will likely lead to further improvements in the approach.

  17. Alternative cosmology from cusp geometries

    NASA Astrophysics Data System (ADS)

    Rosa, Reinaldo; Herbin Stalder Díaz, Diego

    We study an alternative geometrical approach to the problem of the classical cosmological singularity. It is based on a generalized function f(x,y) defined by the cusped, projected coupled isosurface x^2 + y^2 = (1-z)z^n. This projected geometry is computed and analyzed in the context of Friedmann singularity-free cosmology, where a pre-big bang scenario is considered. Assuming that the mechanism of cusp formation is described by non-linear oscillations of a pre-big bang extended very high energy density field (>3x10^{94} kg/m^3), we show that the action under the gravitational field follows a tautochrone of revolution, understood here as the primary projected geometry that alternatively replaces the Friedmann singularity in the standard big bang theory. As shown here, this new approach allows us to interpret the nature of both matter and dark energy from first geometric principles [1]. [1] Rosa et al. DOI: 10.1063/1.4756991

  18. A computational approach to the twin paradox in curved spacetime

    NASA Astrophysics Data System (ADS)

    Fung, Kenneth K. H.; Clark, Hamish A.; Lewis, Geraint F.; Wu, Xiaofeng

    2016-09-01

    Despite being a major component in the teaching of special relativity, the twin ‘paradox’ is generally not examined in courses on general relativity. Due to the complexity of analytical solutions to the problem, the paradox is often neglected entirely, and students are left with an incomplete understanding of the relativistic behaviour of time. This article outlines a project, undertaken by undergraduate physics students at the University of Sydney, in which a novel computational method was derived in order to predict the time experienced by a twin following a number of paths between two given spacetime coordinates. By utilising this method, it is possible to make clear to students that following a geodesic in curved spacetime does not always result in the greatest experienced proper time.
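
    The closing point admits a compact numerical illustration. The sketch below (Python; the black-hole mass and orbital radius are arbitrary illustrative choices, and this is not the students' general path-integration method) compares, between two reunion events, a twin hovering at fixed Schwarzschild coordinate radius r with a twin on a circular geodesic orbit at the same r, using the standard Schwarzschild time-dilation factors.

      import numpy as np

      # dtau/dt = sqrt(1 - rs/r) for a hovering observer and
      # dtau/dt = sqrt(1 - 3*rs/(2*r)) for a circular geodesic orbit.
      G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

      M = 10 * M_sun                       # illustrative black-hole mass
      rs = 2 * G * M / c**2                # Schwarzschild radius
      r = 10 * rs                          # orbit radius (outside ISCO at 3*rs)

      T = 2 * np.pi * np.sqrt(r**3 / (G * M))   # coordinate orbital period
      tau_hover = T * np.sqrt(1 - rs / r)
      tau_orbit = T * np.sqrt(1 - 1.5 * rs / r)

      # The orbiting twin follows a geodesic yet ages less than the
      # accelerated hovering twin between the same two reunion events.
      print(f"hovering: {tau_hover:.5f} s, orbiting: {tau_orbit:.5f} s")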

  19. Multiplexing Genetic and Nucleosome Positioning Codes: A Computational Approach

    PubMed Central

    Eslami-Mossallam, Behrouz; Schram, Raoul D.; Tompitak, Marco; van Noort, John; Schiessel, Helmut

    2016-01-01

    Eukaryotic DNA is strongly bent inside fundamental packaging units: the nucleosomes. It is known that their positions are strongly influenced by the mechanical properties of the underlying DNA sequence. Here we discuss the possibility that these mechanical properties and the concomitant nucleosome positions are not just a side product of the given DNA sequence, e.g. that of the genes, but that a mechanical evolution of DNA molecules might have taken place. We first demonstrate the possibility of multiplexing classical and mechanical genetic information using a computational nucleosome model. In a second step we give evidence for genome-wide multiplexing in Saccharomyces cerevisiae and Schizosaccharomyces pombe. This suggests that the exact positions of nucleosomes play crucial roles in chromatin function. PMID:27272176

  20. Open-ended approaches to science assessment using computers

    NASA Astrophysics Data System (ADS)

    Singley, Mark K.; Taft, Hessy L.

    1995-03-01

    We discuss the potential role of technology in evaluating learning outcomes in large-scale, widespread science assessments of the kind typically done at ETS, such as the GRE, or the College Board SAT II Subject Tests. We describe the current state-of-the-art in this area, as well as briefly outline the history of technology in large-scale science assessment and ponder possibilities for the future. We present examples from our own work in the domain of chemistry, in which we are designing problem solving interfaces and scoring programs for stoichiometric and other kinds of quantitative problem solving. We also present a new scientific reasoning item type that we are prototyping on the computer. It is our view that the technological infrastructure for large-scale constructed response science assessment is well on its way to being available, although many technical and practical hurdles remain.

  1. Multiplexing Genetic and Nucleosome Positioning Codes: A Computational Approach.

    PubMed

    Eslami-Mossallam, Behrouz; Schram, Raoul D; Tompitak, Marco; van Noort, John; Schiessel, Helmut

    2016-01-01

    Eukaryotic DNA is strongly bent inside fundamental packaging units: the nucleosomes. It is known that their positions are strongly influenced by the mechanical properties of the underlying DNA sequence. Here we discuss the possibility that these mechanical properties and the concomitant nucleosome positions are not just a side product of the given DNA sequence, e.g. that of the genes, but that a mechanical evolution of DNA molecules might have taken place. We first demonstrate the possibility of multiplexing classical and mechanical genetic information using a computational nucleosome model. In a second step we give evidence for genome-wide multiplexing in Saccharomyces cerevisiae and Schizosaccharomyces pombe. This suggests that the exact positions of nucleosomes play crucial roles in chromatin function. PMID:27272176

  2. Positive approach: Implications for the relation between number theory and geometry, including connection to Santilli mathematics, from Fibonacci reconstitution of natural numbers and of prime numbers

    NASA Astrophysics Data System (ADS)

    Johansen, Stein E.

    2014-12-01

    The paper recapitulates some key elements in previously published results concerning exact and complete reconstitution of the field of natural numbers, both as ordinal and as cardinal numbers, from systematic unfoldment of the Fibonacci algorithm. In this way, natural numbers emerge as Fibonacci "atoms" and "molecules" consistent with the notion of Zeckendorf sums. Here, the sub-set of prime numbers appears not as the primary numbers, but as an epistructure from a deeper Fibonacci constitution, and is thus targeted from a "positive approach". In the Fibonacci reconstitution of number theory natural numbers show a double geometrical aspect: partly as extension in space and partly as position in a successive structuring of space. More specifically, the natural numbers are shown to be distributed by a concise 5:3 code structured from the Fibonacci algorithm via Pascal's triangle. The paper discusses possible implications for the more general relation between number theory and geometry, as well as more specifically in relation to hadronic mathematics, initiated by R.M. Santilli, and also briefly to some other recent science linking number theory more directly to geometry and natural systems.

  3. Positive approach: Implications for the relation between number theory and geometry, including connection to Santilli mathematics, from Fibonacci reconstitution of natural numbers and of prime numbers

    SciTech Connect

    Johansen, Stein E.

    2014-12-10

    The paper recapitulates some key elements in previously published results concerning exact and complete reconstitution of the field of natural numbers, both as ordinal and as cardinal numbers, from systematic unfoldment of the Fibonacci algorithm. In this way, natural numbers emerge as Fibonacci 'atoms' and 'molecules' consistent with the notion of Zeckendorf sums. Here, the sub-set of prime numbers appears not as the primary numbers, but as an epistructure from a deeper Fibonacci constitution, and is thus targeted from a 'positive approach'. In the Fibonacci reconstitution of number theory natural numbers show a double geometrical aspect: partly as extension in space and partly as position in a successive structuring of space. More specifically, the natural numbers are shown to be distributed by a concise 5:3 code structured from the Fibonacci algorithm via Pascal's triangle. The paper discusses possible implications for the more general relation between number theory and geometry, as well as more specifically in relation to hadronic mathematics, initiated by R.M. Santilli, and also briefly to some other recent science linking number theory more directly to geometry and natural systems.

  4. Perturbation approach for nuclear magnetic resonance solid-state quantum computation

    DOE PAGESBeta

    Berman, G. P.; Kamenev, D. I.; Tsifrinovich, V. I.

    2003-01-01

    The dynamics of a nuclear-spin quantum computer with a large number (L = 1000) of qubits is considered using a perturbation approach. Small parameters are introduced and used to compute the error in an implementation of an entanglement between remote qubits, using a sequence of radio-frequency pulses. The error is computed up to the different orders of the perturbation theory and tested using an exact numerical solution.

  5. A Functional Analytic Approach To Computer-Interactive Mathematics

    PubMed Central

    2005-01-01

    Following a pretest, 11 participants who were naive with regard to various algebraic and trigonometric transformations received an introductory lecture regarding the fundamentals of the rectangular coordinate system. Following the lecture, they took part in a computer-interactive matching-to-sample procedure in which they received training on particular formula-to-formula and formula-to-graph relations as these formulas pertain to reflections and vertical and horizontal shifts. In training A-B, standard formulas served as samples and factored formulas served as comparisons. In training B-C, factored formulas served as samples and graphs served as comparisons. Subsequently, the program assessed for mutually entailed B-A and C-B relations as well as combinatorially entailed C-A and A-C relations. After all participants demonstrated mutual entailment and combinatorial entailment, we employed a test of novel relations to assess 40 different and complex variations of the original training formulas and their respective graphs. Six of 10 participants who completed training demonstrated perfect or near-perfect performance in identifying novel formula-to-graph relations. Three of the 4 participants who made more than three incorrect responses during the assessment of novel relations showed some commonality among their error patterns. Derived transfer of stimulus control using mathematical relations is discussed. PMID:15898471

  6. Computer-aided liver surgery planning: an augmented reality approach

    NASA Astrophysics Data System (ADS)

    Bornik, Alexander; Beichel, Reinhard; Reitinger, Bernhard; Gotschuli, Georg; Sorantin, Erich; Leberl, Franz W.; Sonka, Milan

    2003-05-01

    Surgical resection of liver tumors requires a detailed three-dimensional understanding of a complex arrangement of vasculature, liver segments and tumors inside the liver. In most cases, surgeons need to develop this understanding by looking at sequences of axial images from modalities like X-ray computed tomography. A system for liver surgery planning is reported that enables physicians to visualize and refine segmented input liver data sets, as well as to simulate and evaluate different resection plans. The system supports surgeons in finding the optimal treatment strategy for each patient and eases the data preparation process. The use of augmented reality contributes to a user-friendly design and simplifies complex interaction with 3D objects. The main function blocks developed so far are: basic augmented reality environment, user interface, rendering, surface reconstruction from segmented volume data sets, surface manipulation, and a quantitative measurement toolkit. The flexible design allows functionality to be added via plug-ins. First practical evaluation steps have shown good acceptance. Evaluation of the system is ongoing, and future feedback from surgeons will be collected and used for design refinements.

  7. Strategic Cognitive Sequencing: A Computational Cognitive Neuroscience Approach

    PubMed Central

    Herd, Seth A.; Krueger, Kai A.; Kriete, Trenton E.; Huang, Tsung-Ren; Hazy, Thomas E.; O'Reilly, Randall C.

    2013-01-01

    We address strategic cognitive sequencing, the “outer loop” of human cognition: how the brain decides what cognitive process to apply at a given moment to solve complex, multistep cognitive tasks. We argue that this topic has been neglected relative to its importance for systematic reasons but that recent work on how individual brain systems accomplish their computations has set the stage for productively addressing how brain regions coordinate over time to accomplish our most impressive thinking. We present four preliminary neural network models. The first addresses how the prefrontal cortex (PFC) and basal ganglia (BG) cooperate to perform trial-and-error learning of short sequences; the next, how several areas of PFC learn to make predictions of likely reward, and how this contributes to the BG making decisions at the level of strategies. The third addresses how PFC, BG, parietal cortex, and hippocampus can work together to memorize sequences of cognitive actions from instruction (or “self-instruction”). The last shows how a constraint satisfaction process can find useful plans. The PFC maintains current and goal states and associates from both of these to find a “bridging” state, an abstract plan. We discuss how these processes could work together to produce strategic cognitive sequencing and discuss future directions in this area. PMID:23935605

  8. Computational approaches to 3D modeling of RNA.

    PubMed

    Laing, Christian; Schlick, Tamar

    2010-07-21

    Many exciting discoveries have recently revealed the versatility of RNA and its importance in a variety of functions within the cell. Since the structural features of RNA are of major importance to their biological function, there is much interest in predicting RNA structure, either in free form or in interaction with various ligands, including proteins, metabolites and other molecules. In recent years, an increasing number of researchers have developed novel RNA algorithms for predicting RNA secondary and tertiary structures. In this review, we describe current experimental and computational advances and discuss recent ideas that are transforming the traditional view of RNA folding. To evaluate the performance of the most recent RNA 3D folding algorithms, we provide a comparative study in order to test the performance of available 3D structure prediction algorithms for an RNA data set of 43 structures of various lengths and motifs. We find that the algorithms vary widely in terms of prediction quality across different RNA lengths and topologies; most predictions have very large root mean square deviations from the experimental structure. We conclude by outlining some suggestions for future RNA folding research. PMID:21399271
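
    The RMSD figures in such comparisons are normally computed after optimal rigid-body superposition (the Kabsch algorithm). A compact NumPy version of that standard metric is sketched below; it is generic reference code, not code from the review, and P and Q stand for matched (N, 3) coordinate arrays of predicted and experimental structures.

      import numpy as np

      def rmsd_kabsch(P, Q):
          # Center both coordinate sets, then find the optimal rotation.
          P = P - P.mean(axis=0)
          Q = Q - Q.mean(axis=0)
          U, S, Vt = np.linalg.svd(P.T @ Q)
          d = np.sign(np.linalg.det(U @ Vt))   # guard against reflections
          R = U @ np.diag([1.0, 1.0, d]) @ Vt
          return np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1)))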

  9. Computational approach to phenomenological mesoscopic field dislocation mechanics

    NASA Astrophysics Data System (ADS)

    Roy, Anish

    2005-11-01

    A variety of physically observed size-effects and patterning behavior in plastic response at the micron scale and below have raised interesting challenges for the modeling of plastic flow at these scales. In this thesis, two such models appropriate for length scales of <0.1 μm and 0.1 μm-100 μm are considered. The first (FDM) is conceptually appropriate for scales where all dislocations are resolved. The second (PMFDM) is a moving space-time averaged version of the first, appropriate for mesoscopic plasticity. In the first part of the thesis, FDM is shown to be capable of representing the elastic stress fields of dislocation distributions in a generally anisotropic medium of finite extent. It is also shown to have some success, naturally limited as expected, in prediction of yield drop, back stress and development of inhomogeneity from homogeneous initial conditions and boundary conditions which would otherwise produce homogeneous deformation in conventional plasticity. The space-time averaged version of FDM, PMFDM, requires additional closure statements due to the inherent nonlinearity of FDM. This is achieved through the use of a robust macroscopic model of strain-gradient plasticity that attempts to model effects of geometrically-necessary dislocations only in work-hardening. Finite element method-based computational predictions of the theory demonstrate several experimentally observed features of meso and macro scale plasticity. The model, which fundamentally accounts for fine scale dislocation mechanisms, seems to be an adequate representation of plasticity for these scales.

  10. A machine learning approach to computer-aided molecular design.

    PubMed

    Bolis, G; Di Pace, L; Fabrocini, F

    1991-12-01

    Preliminary results of a machine learning application concerning computer-aided molecular design applied to drug discovery are presented. The artificial intelligence techniques of machine learning use a sample of active and inactive compounds, which is viewed as a set of positive and negative examples, to allow the induction of a molecular model characterizing the interaction between the compounds and a target molecule. The algorithm proceeds in two phases. In the first--the specialization step--the program identifies a number of active/inactive pairs of compounds which appear to be the most useful in making the learning process as effective as possible, and generates a dictionary of molecular fragments deemed to be responsible for the activity of the compounds. In the second phase--the generalization step--the fragments thus generated are combined and generalized in order to select the most plausible hypothesis with respect to the sample of compounds. A knowledge base concerning physical and chemical properties is utilized during the inductive process. PMID:1818094

  11. Computational Geometry and Computer-Aided Design

    NASA Technical Reports Server (NTRS)

    Fay, T. H. (Compiler); Shoosmith, J. N. (Compiler)

    1985-01-01

    Extended abstracts of papers addressing the analysis, representation, and synthesis of shape information are presented. Curves and shape control, grid generation and contouring, solid modelling, surfaces, and curve intersection are specifically addressed.

  12. A Computational Approach to Estimating Nondisjunction Frequency in Saccharomyces cerevisiae

    PubMed Central

    Chu, Daniel B.; Burgess, Sean M.

    2016-01-01

    Errors segregating homologous chromosomes during meiosis result in aneuploid gametes and are the largest contributing factor to birth defects and spontaneous abortions in humans. Saccharomyces cerevisiae has long served as a model organism for studying the gene network supporting normal chromosome segregation. Measuring homolog nondisjunction frequencies is laborious, and involves dissecting thousands of tetrads to detect missegregation of individually marked chromosomes. Here we describe a computational method (TetFit) to estimate the relative contributions of meiosis I nondisjunction and random-spore death to spore inviability in wild type and mutant strains. These values are based on finding the best-fit distribution of 4, 3, 2, 1, and 0 viable-spore tetrads to an observed distribution. Using TetFit, we found that meiosis I nondisjunction is an intrinsic component of spore inviability in wild-type strains. We show proof-of-principle that the calculated average meiosis I nondisjunction frequency determined by TetFit closely matches empirically determined values in mutant strains. Using these published data sets, TetFit uncovered two classes of mutants: Class A mutants skew toward increased nondisjunction death, and include those with known defects in establishing pairing, recombination, and/or synapsis of homologous chromosomes. Class B mutants skew toward random spore death, and include those with defects in sister-chromatid cohesion and centromere function. Epistasis analysis using TetFit is facilitated by the low numbers of tetrads (as few as 200) required to compare the contributions to spore death in different mutant backgrounds. TetFit analysis does not require any special strain construction, and can be applied to previously observed tetrad distributions. PMID:26747203
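
    The fitting idea can be sketched compactly (Python/SciPy). This is not the published TetFit code, and the viability model is a deliberate simplification: a meiosis I nondisjunction (NDJ) tetrad is assumed to have at most two viable spores, and every spore is assumed to survive random spore death independently with probability s.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import binom

      def expected_dist(f, s):
          # Mixture over tetrad fates: normal meioses give Binomial(4, s)
          # viable spores; NDJ tetrads give at most Binomial(2, s).
          normal = binom.pmf(np.arange(5), 4, s)
          ndj = np.zeros(5)
          ndj[:3] = binom.pmf(np.arange(3), 2, s)
          return (1 - f) * normal + f * ndj

      def neg_log_lik(params, counts):
          f, s = params
          if not (0.0 <= f <= 1.0 and 0.0 < s <= 1.0):
              return np.inf
          return -np.sum(counts * np.log(expected_dist(f, s) + 1e-12))

      # Illustrative observed counts of tetrads with 0..4 viable spores.
      counts = np.array([12, 8, 30, 25, 125])

      res = minimize(neg_log_lik, x0=[0.1, 0.9], args=(counts,),
                     method="Nelder-Mead")
      print("fitted NDJ fraction f and spore survival s:", res.x)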

  13. A Computational Approach to Estimating Nondisjunction Frequency in Saccharomyces cerevisiae.

    PubMed

    Chu, Daniel B; Burgess, Sean M

    2016-03-01

    Errors segregating homologous chromosomes during meiosis result in aneuploid gametes and are the largest contributing factor to birth defects and spontaneous abortions in humans. Saccharomyces cerevisiae has long served as a model organism for studying the gene network supporting normal chromosome segregation. Measuring homolog nondisjunction frequencies is laborious, and involves dissecting thousands of tetrads to detect missegregation of individually marked chromosomes. Here we describe a computational method (TetFit) to estimate the relative contributions of meiosis I nondisjunction and random-spore death to spore inviability in wild type and mutant strains. These values are based on finding the best-fit distribution of 4, 3, 2, 1, and 0 viable-spore tetrads to an observed distribution. Using TetFit, we found that meiosis I nondisjunction is an intrinsic component of spore inviability in wild-type strains. We show proof-of-principle that the calculated average meiosis I nondisjunction frequency determined by TetFit closely matches empirically determined values in mutant strains. Using these published data sets, TetFit uncovered two classes of mutants: Class A mutants skew toward increased nondisjunction death, and include those with known defects in establishing pairing, recombination, and/or synapsis of homologous chromosomes. Class B mutants skew toward random spore death, and include those with defects in sister-chromatid cohesion and centromere function. Epistasis analysis using TetFit is facilitated by the low numbers of tetrads (as few as 200) required to compare the contributions to spore death in different mutant backgrounds. TetFit analysis does not require any special strain construction, and can be applied to previously observed tetrad distributions. PMID:26747203

  14. 76 FR 52353 - Assumption Buster Workshop: “Current Implementations of Cloud Computing Indicate a New Approach...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-22

    ... Assumption Buster Workshop: ``Current Implementations of Cloud Computing Indicate a New Approach to Security...: ``Current implementations of cloud computing indicate a new approach to security'' Implementations of cloud computing have provided new ways of thinking about how to secure data and computation. Cloud is a...

  15. An integrative computational approach for prioritization of genomic variants.

    PubMed

    Dubchak, Inna; Balasubramanian, Sandhya; Wang, Sheng; Meydan, Cem; Sulakhe, Dinanath; Poliakov, Alexander; Börnigen, Daniela; Xie, Bingqing; Taylor, Andrew; Ma, Jianzhu; Paciorkowski, Alex R; Mirzaa, Ghayda M; Dave, Paul; Agam, Gady; Xu, Jinbo; Al-Gazali, Lihadh; Mason, Christopher E; Ross, M Elizabeth; Maltsev, Natalia; Gilliam, T Conrad

    2014-01-01

    An essential step in the discovery of molecular mechanisms contributing to disease phenotypes and efficient experimental planning is the development of weighted hypotheses that estimate the functional effects of sequence variants discovered by high-throughput genomics. With the increasing specialization of the bioinformatics resources, creating analytical workflows that seamlessly integrate data and bioinformatics tools developed by multiple groups becomes inevitable. Here we present a case study of the use of a distributed analytical environment integrating four complementary specialized resources, namely the Lynx platform, VISTA RViewer, the Developmental Brain Disorders Database (DBDB), and the RaptorX server, for the identification of high-confidence candidate genes contributing to pathogenesis of spina bifida. The analysis resulted in prediction and validation of deleterious mutations in the SLC19A placental transporter in mothers of the affected children that cause narrowing of the outlet channel and therefore lead to a reduced folate permeation rate. The described approach also enabled correct identification of several genes, previously shown to contribute to pathogenesis of spina bifida, and suggestion of additional genes for experimental validation. The study demonstrates that the seamless integration of bioinformatics resources enables fast and efficient prioritization and characterization of genomic factors and molecular networks contributing to the phenotypes of interest. PMID:25506935

  16. Driving profile modeling and recognition based on soft computing approach.

    PubMed

    Wahab, Abdul; Quek, Chai; Tan, Chin Keong; Takeda, Kazuya

    2009-04-01

    Advancements in biometrics-based authentication have led to its increasing prominence and are being incorporated into everyday tasks. Existing vehicle security systems rely only on alarms or smart cards as forms of protection. A biometric driver recognition system utilizing driving behaviors is a highly novel and personalized approach and could be incorporated into existing vehicle security system to form a multimodal identification system and offer a greater degree of multilevel protection. In this paper, detailed studies have been conducted to model individual driving behavior in order to identify features that may be efficiently and effectively used to profile each driver. Feature extraction techniques based on Gaussian mixture models (GMMs) are proposed and implemented. Features extracted from the accelerator and brake pedal pressure were then used as inputs to a fuzzy neural network (FNN) system to ascertain the identity of the driver. Two fuzzy neural networks, namely, the evolving fuzzy neural network (EFuNN) and the adaptive network-based fuzzy inference system (ANFIS), are used to demonstrate the viability of the two proposed feature extraction techniques. The performances were compared against an artificial neural network (NN) implementation using the multilayer perceptron (MLP) network and a statistical method based on the GMM. Extensive testing was conducted and the results show great potential in the use of the FNN for real-time driver identification and verification. In addition, the profiling of driver behaviors has numerous other potential applications for use by law enforcement and companies dealing with buses and truck drivers. PMID:19258199
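
    The GMM-based feature-extraction step can be sketched briefly (Python/scikit-learn). The synthetic pedal-pressure traces and the mixture size are illustrative assumptions, and the published system feeds the extracted features into fuzzy neural networks rather than scoring the mixtures directly, as done here for compactness.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(1)

      # Placeholder for real [accelerator, brake] pedal-pressure samples.
      def pedal_trace(mean, n=2000):
          return rng.normal(mean, [0.8, 0.5], size=(n, 2))

      drivers = {"A": pedal_trace([3.0, 1.0]), "B": pedal_trace([4.5, 0.5])}
      models = {name: GaussianMixture(n_components=4, random_state=0).fit(x)
                for name, x in drivers.items()}

      test = pedal_trace([3.0, 1.0], n=500)       # unseen trace from driver A
      scores = {name: m.score(test) for name, m in models.items()}
      print("identified driver:", max(scores, key=scores.get))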

  17. An Integrative Computational Approach for Prioritization of Genomic Variants

    PubMed Central

    Wang, Sheng; Meyden, Cem; Sulakhe, Dinanath; Poliakov, Alexander; Börnigen, Daniela; Xie, Bingqing; Taylor, Andrew; Ma, Jianzhu; Paciorkowski, Alex R.; Mirzaa, Ghayda M.; Dave, Paul; Agam, Gady; Xu, Jinbo; Al-Gazali, Lihadh; Mason, Christopher E.; Ross, M. Elizabeth; Maltsev, Natalia; Gilliam, T. Conrad

    2014-01-01

    An essential step in the discovery of molecular mechanisms contributing to disease phenotypes and efficient experimental planning is the development of weighted hypotheses that estimate the functional effects of sequence variants discovered by high-throughput genomics. With the increasing specialization of the bioinformatics resources, creating analytical workflows that seamlessly integrate data and bioinformatics tools developed by multiple groups becomes inevitable. Here we present a case study of the use of a distributed analytical environment integrating four complementary specialized resources, namely the Lynx platform, VISTA RViewer, the Developmental Brain Disorders Database (DBDB), and the RaptorX server, for the identification of high-confidence candidate genes contributing to pathogenesis of spina bifida. The analysis resulted in prediction and validation of deleterious mutations in the SLC19A placental transporter in mothers of the affected children that cause narrowing of the outlet channel and therefore lead to a reduced folate permeation rate. The described approach also enabled correct identification of several genes, previously shown to contribute to pathogenesis of spina bifida, and suggestion of additional genes for experimental validation. The study demonstrates that the seamless integration of bioinformatics resources enables fast and efficient prioritization and characterization of genomic factors and molecular networks contributing to the phenotypes of interest. PMID:25506935

  18. Analyses of Physcomitrella patens Ankyrin Repeat Proteins by Computational Approach

    PubMed Central

    Mahmood, Niaz; Tamanna, Nahid

    2016-01-01

    Ankyrin (ANK) repeat containing proteins are evolutionary conserved and have functions in crucial cellular processes like cell cycle regulation and signal transduction. In this study, through an entirely in silico approach using the first release of the moss genome annotation, we found that at least 54 ANK proteins are present in P. patens. Based on their differential domain composition, the identified ANK proteins were classified into nine subfamilies. Comparative analysis of the different subfamilies of ANK proteins revealed that P. patens contains almost all the known subgroups of ANK proteins found in the other angiosperm species except for the ones having the TPR domain. Phylogenetic analysis using full length protein sequences supported the subfamily classification where the members of the same subfamily almost always clustered together. Synonymous divergence (dS) and nonsynonymous divergence (dN) ratios showed positive selection for the ANK genes of P. patens which probably helped them to attain significant functional diversity during the course of evolution. Taken together, the data presented here can provide useful insights for future functional studies of the proteins from this superfamily as well as comparative studies of ANK proteins. PMID:27429806

  19. An integrative computational approach for prioritization of genomic variants

    SciTech Connect

    Dubchak, Inna; Balasubramanian, Sandhya; Wang, Sheng; Meydan, Cem; Sulakhe, Dinanath; Poliakov, Alexander; Börnigen, Daniela; Xie, Bingqing; Taylor, Andrew; Ma, Jianzhu; Paciorkowski, Alex R.; Mirzaa, Ghayda M.; Dave, Paul; Agam, Gady; Xu, Jinbo; Al-Gazali, Lihadh; Mason, Christopher E.; Ross, M. Elizabeth; Maltsev, Natalia; Gilliam, T. Conrad; Huang, Qingyang

    2014-12-15

    An essential step in the discovery of molecular mechanisms contributing to disease phenotypes and efficient experimental planning is the development of weighted hypotheses that estimate the functional effects of sequence variants discovered by high-throughput genomics. With the increasing specialization of the bioinformatics resources, creating analytical workflows that seamlessly integrate data and bioinformatics tools developed by multiple groups becomes inevitable. Here we present a case study of the use of a distributed analytical environment integrating four complementary specialized resources, namely the Lynx platform, VISTA RViewer, the Developmental Brain Disorders Database (DBDB), and the RaptorX server, for the identification of high-confidence candidate genes contributing to pathogenesis of spina bifida. The analysis resulted in prediction and validation of deleterious mutations in the SLC19A placental transporter in mothers of the affected children that cause narrowing of the outlet channel and therefore lead to a reduced folate permeation rate. The described approach also enabled correct identification of several genes, previously shown to contribute to pathogenesis of spina bifida, and suggestion of additional genes for experimental validation. The study demonstrates that the seamless integration of bioinformatics resources enables fast and efficient prioritization and characterization of genomic factors and molecular networks contributing to the phenotypes of interest.

  20. An integrative computational approach for prioritization of genomic variants

    DOE PAGESBeta

    Dubchak, Inna; Balasubramanian, Sandhya; Wang, Sheng; Meydan, Cem; Sulakhe, Dinanath; Poliakov, Alexander; Börnigen, Daniela; Xie, Bingqing; Taylor, Andrew; Ma, Jianzhu; et al

    2014-12-15

    An essential step in the discovery of molecular mechanisms contributing to disease phenotypes and efficient experimental planning is the development of weighted hypotheses that estimate the functional effects of sequence variants discovered by high-throughput genomics. With the increasing specialization of the bioinformatics resources, creating analytical workflows that seamlessly integrate data and bioinformatics tools developed by multiple groups becomes inevitable. Here we present a case study of the use of a distributed analytical environment integrating four complementary specialized resources, namely the Lynx platform, VISTA RViewer, the Developmental Brain Disorders Database (DBDB), and the RaptorX server, for the identification of high-confidence candidate genes contributing to pathogenesis of spina bifida. The analysis resulted in prediction and validation of deleterious mutations in the SLC19A placental transporter in mothers of the affected children that cause narrowing of the outlet channel and therefore lead to a reduced folate permeation rate. The described approach also enabled correct identification of several genes, previously shown to contribute to pathogenesis of spina bifida, and suggestion of additional genes for experimental validation. The study demonstrates that the seamless integration of bioinformatics resources enables fast and efficient prioritization and characterization of genomic factors and molecular networks contributing to the phenotypes of interest.

  1. Asynchronous event-based hebbian epipolar geometry.

    PubMed

    Benosman, Ryad; Ieng, Sio-Hoï; Rogister, Paul; Posch, Christoph

    2011-11-01

    Epipolar geometry, the cornerstone of perspective stereo vision, has been studied extensively since the advent of computer vision. Establishing such a geometric constraint is of primary importance, as it allows the recovery of the 3-D structure of scenes. Estimating the epipolar constraints of nonperspective stereo is difficult, as they can no longer be defined because of the complexity of the sensor geometry. This paper will show that these limitations are, to some extent, a consequence of the static image frames commonly used in vision. The conventional frame-based approach suffers from a lack of the dynamics present in natural scenes. We introduce the use of neuromorphic event-based--rather than frame-based--vision sensors for perspective stereo vision. This type of sensor uses the dimension of time as the main conveyor of information. In this paper, we present a model for asynchronous event-based vision, which is then used to derive a general new concept of epipolar geometry linked to the temporal activation of pixels. Practical experiments demonstrate the validity of the approach, solving the problem of estimating the fundamental matrix applied, in a first stage, to classic perspective vision and then to more general cameras. Furthermore, this paper shows that the properties of event-based vision sensors allow the exploration of not-yet-defined geometric relationships; finally, we provide a definition of general epipolar geometry deployable to almost any visual sensor. PMID:21954205
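
    For reference, the frame-based baseline that the event-based formulation generalizes is the normalized eight-point estimate of the fundamental matrix from matched image points. A textbook NumPy sketch (not the paper's event-based method) is given below; pts1 and pts2 are (N, 2) arrays of corresponding pixel coordinates with N >= 8.

      import numpy as np

      def normalize(pts):
          # Translate to the centroid and scale to mean distance sqrt(2).
          mean = pts.mean(axis=0)
          scale = np.sqrt(2) / np.mean(np.linalg.norm(pts - mean, axis=1))
          T = np.array([[scale, 0, -scale * mean[0]],
                        [0, scale, -scale * mean[1]],
                        [0, 0, 1]])
          homog = np.column_stack([pts, np.ones(len(pts))])
          return (T @ homog.T).T, T

      def eight_point(pts1, pts2):
          x1, T1 = normalize(pts1)
          x2, T2 = normalize(pts2)
          # One row per correspondence from the constraint x2^T F x1 = 0.
          A = np.column_stack([x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1],
                               x2[:, 0], x2[:, 1] * x1[:, 0],
                               x2[:, 1] * x1[:, 1], x2[:, 1],
                               x1[:, 0], x1[:, 1], np.ones(len(x1))])
          _, _, Vt = np.linalg.svd(A)
          F = Vt[-1].reshape(3, 3)
          U, S, Vt = np.linalg.svd(F)             # enforce rank 2
          F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
          return T2.T @ F @ T1                    # undo the normalization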

  2. A Knowledge Engineering Approach to Developing Educational Computer Games for Improving Students' Differentiating Knowledge

    ERIC Educational Resources Information Center

    Hwang, Gwo-Jen; Sung, Han-Yu; Hung, Chun-Ming; Yang, Li-Hsueh; Huang, Iwen

    2013-01-01

    Educational computer games have been recognized as being a promising approach for motivating students to learn. Nevertheless, previous studies have shown that without proper learning strategies or supportive models, the learning achievement of students might not be as good as expected. In this study, a knowledge engineering approach is proposed…

  3. Freezing in confined geometries

    NASA Technical Reports Server (NTRS)

    Sokol, P. E.; Ma, W. J.; Herwig, K. W.; Snow, W. M.; Wang, Y.; Koplik, Joel; Banavar, Jayanth R.

    1992-01-01

    Results of detailed structural studies, using elastic neutron scattering, of the freezing of liquid O2 and D2 in porous vycor glass, are presented. The experimental studies have been complemented by computer simulations of the dynamics of freezing of a Lennard-Jones liquid in narrow channels bounded by molecular walls. Results point to a new simple physical interpretation of freezing in confined geometries.
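
    For reference, the pair potential underlying such simulations is the standard Lennard-Jones form (a generic definition, not the study's code):

      import numpy as np

      # V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6); minimum at r = 2**(1/6)*sigma.
      def lj(r, eps=1.0, sigma=1.0):
          sr6 = (sigma / r) ** 6
          return 4.0 * eps * (sr6 ** 2 - sr6)

      r = np.linspace(0.9, 3.0, 1000)
      print("potential minimum near r =", r[np.argmin(lj(r))])   # ~1.122*sigma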

  4. A Computational Approach to Understand In Vitro Alveolar Morphogenesis

    PubMed Central

    Kim, Sean H. J.; Yu, Wei; Mostov, Keith; Matthay, Michael A.; Hunt, C. Anthony

    2009-01-01

    Primary human alveolar type II (AT II) epithelial cells maintained in Matrigel cultures form alveolar-like cysts (ALCs) using a cytogenesis mechanism that is different from that of other studied epithelial cell types: neither proliferation nor death is involved. During ALC formation, AT II cells engage simultaneously in fundamentally different, but not fully characterized activities. Mechanisms enabling these activities and the roles they play during different process stages are virtually unknown. Identifying, characterizing, and understanding the activities and mechanisms are essential to achieving deeper insight into this fundamental feature of morphogenesis. That deeper insight is needed to answer important questions. When and how does an AT cell choose to switch from one activity to another? Why does it choose one action rather than another? We report obtaining plausible answers using a rigorous, multi-attribute modeling and simulation approach that leveraged earlier efforts by using new, agent and object-oriented capabilities. We discovered a set of cell-level operating principles that enabled in silico cells to self-organize and generate systemic cystogenesis phenomena that are quantitatively indistinguishable from those observed in vitro. Success required that the cell components be quasi-autonomous. As simulation time advances, each in silico cell autonomously updates its environment information to reclassify its condition. It then uses the axiomatic operating principles to execute just one action for each possible condition. The quasi-autonomous actions of individual in silico cells were sufficient for developing stable cyst-like structures. The results strengthen in silico to in vitro mappings at three levels: mechanisms, behaviors, and operating principles, thereby achieving a degree of validation and enabling answering the questions posed. We suggest that the in silico operating principles presented may have a biological counterpart and that a

  5. Space base laser torque applied on LEO satellites of various geometries at satellite’s closest approach

    NASA Astrophysics Data System (ADS)

    Khalifa, N. S.

    2013-12-01

    In light of using laser power in space applications, the motivation of this paper is to use a space-based solar-pumped laser to produce a torque on LEO satellites of various shapes. It is assumed that there is a space station that fires a laser beam toward the satellite, so beam spreading due to diffraction is considered to be the dominant effect on the laser beam propagation. The laser torque is calculated at the point of closest approach between the space station and some sun-synchronous low Earth orbit cubesats. The numerical application shows that space-based laser torque has a significant contribution on the LEO cubesats. It has a maximum value on the order of 10^-8 N m, which is comparable with that of the residual magnetic moment. However, it has a minimum value on the order of 10^-11 N m, which is comparable with the aerodynamic and gravity gradient torques. Consequently, space-based laser torque can be used as an active attitude control system.
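
    The scaling behind those numbers can be sketched in a few lines of Python. Every value below (laser power, beam waist, range, face size, lever arm) is an illustrative assumption rather than a value from the paper: diffraction sets the far-field spot size, the intercepted power sets the radiation-pressure force, and the offset between the centers of pressure and mass turns that force into a torque.

      import numpy as np

      c = 2.998e8
      wavelength = 1.06e-6      # m, Nd:YAG-class laser (assumed)
      w0 = 0.5                  # m, transmitter beam waist (assumed)
      P = 1e3                   # W, transmitted power (assumed)
      R = 1e6                   # m, range at closest approach (assumed)
      side = 0.1                # m, cubesat face (assumed)
      d = 0.05                  # m, center-of-pressure/mass offset (assumed)

      theta = wavelength / (np.pi * w0)       # Gaussian divergence half-angle
      spot = np.pi * (theta * R) ** 2         # far-field spot area
      P_hit = P * min(1.0, side ** 2 / spot)  # power intercepted by the face
      F = 2 * P_hit / c                       # perfectly reflecting face
      print(f"torque ~ {F * d:.1e} N m")      # falls in the 1e-8..1e-11 range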

  6. Hyperspectral Aquatic Radiative Transfer Modeling Using a High-Performance Cluster Computing Based Approach

    SciTech Connect

    Fillippi, Anthony; Bhaduri, Budhendra L; Naughton, III, Thomas J; King, Amy L; Scott, Stephen L; Guneralp, Inci

    2012-01-01

    For aquatic studies, radiative transfer (RT) modeling can be used to compute hyperspectral above-surface remote sensing reflectance that can be utilized for inverse model development. Inverse models can provide bathymetry and inherent- and bottom-optical property estimation. Because measured oceanic field/organic datasets are often spatio-temporally sparse, synthetic data generation is useful in yielding sufficiently large datasets for inversion model development; however, these forward-modeled data are computationally expensive and time-consuming to generate. This study establishes the magnitude of wall-clock-time savings achieved for performing large, aquatic RT batch-runs using parallel computing versus a sequential approach. Given 2,600 simulations and identical compute-node characteristics, sequential architecture required ~100 hours until termination, whereas a parallel approach required only ~2.5 hours (42 compute nodes) - a 40x speed-up. Tools developed for this parallel execution are discussed.

  7. Hyperspectral Aquatic Radiative Transfer Modeling Using a High-Performance Cluster Computing-Based Approach

    SciTech Connect

    Filippi, Anthony M; Bhaduri, Budhendra L; Naughton, III, Thomas J; King, Amy L; Scott, Stephen L; Guneralp, Inci

    2012-01-01

    For aquatic studies, radiative transfer (RT) modeling can be used to compute hyperspectral above-surface remote sensing reflectance that can be utilized for inverse model development. Inverse models can provide bathymetry and inherent- and bottom-optical property estimation. Because measured oceanic field/organic datasets are often spatio-temporally sparse, synthetic data generation is useful in yielding sufficiently large datasets for inversion model development; however, these forward-modeled data are computationally expensive and time-consuming to generate. This study establishes the magnitude of wall-clock-time savings achieved for performing large, aquatic RT batch-runs using parallel computing versus a sequential approach. Given 2,600 simulations and identical compute-node characteristics, sequential architecture required ~100 hours until termination, whereas a parallel approach required only ~2.5 hours (42 compute nodes) - a 40x speed-up. Tools developed for this parallel execution are discussed.
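
    The reported timings imply a parallel efficiency of (100 h / 2.5 h) / 42 nodes = 40/42, roughly 95%, which is expected since the batch is embarrassingly parallel: each RT run is independent. A single-machine Python sketch of the pattern is shown below; the study distributed the runs across cluster nodes, where the same pattern would be driven by MPI or a batch scheduler, and run_simulation is a hypothetical placeholder for the actual RT invocation.

      from multiprocessing import Pool

      def run_simulation(params):
          # ... invoke the radiative transfer code for one parameter set ...
          return params

      if __name__ == "__main__":
          param_sets = [{"run": i} for i in range(2600)]   # 2,600 simulations
          with Pool(processes=42) as pool:                 # 42 workers
              results = pool.map(run_simulation, param_sets)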

  8. A new computer approach to mixed feature classification for forestry application

    NASA Technical Reports Server (NTRS)

    Kan, E. P.

    1976-01-01

    A computer approach for mapping mixed forest features (i.e., types, classes) from computer classification maps is discussed. Mixed features such as mixed softwood/hardwood stands are treated as admixtures of softwood and hardwood areas. Large-area mixed features are identified and small-area features neglected when the nominal size of a mixed feature can be specified. The computer program merges small isolated areas into surrounding areas by the iterative manipulation of the postprocessing algorithm that eliminates small connected sets. For a forestry application, computer-classified LANDSAT multispectral scanner data of the Sam Houston National Forest were used to demonstrate the proposed approach. The technique was successful in cleaning the salt-and-pepper appearance of multiclass classification maps and in mapping admixtures of softwood areas and hardwood areas. However, the computer-mapped mixed areas matched very poorly with the ground truth because of inadequate resolution and inappropriate definition of mixed features.
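
    The small-area elimination step can be sketched with SciPy's connected-component tools (a generic filter under stated assumptions, not the original program): components of each class smaller than a threshold are deleted and refilled from the nearest surviving labels. class_map is assumed to be an integer array with non-negative class labels.

      import numpy as np
      from scipy import ndimage

      def remove_small_regions(class_map, min_pixels):
          cleaned = class_map.copy()
          for cls in np.unique(class_map):
              mask = class_map == cls
              labels, n = ndimage.label(mask)
              sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
              for lab in np.where(sizes < min_pixels)[0] + 1:
                  cleaned[labels == lab] = -1     # mark for reassignment
          # Fill marked pixels from the nearest pixel that kept its label.
          missing = cleaned == -1
          idx = ndimage.distance_transform_edt(
              missing, return_distances=False, return_indices=True)
          return cleaned[tuple(idx)]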

  9. Use of CAD Geometry in MDO

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    1996-01-01

    The purpose of this paper is to discuss the use of Computer-Aided Design (CAD) geometry in a Multi-Disciplinary Design Optimization (MDO) environment. Two techniques are presented to facilitate the use of CAD geometry by different disciplines, such as Computational Fluid Dynamics (CFD) and Computational Structural Mechanics (CSM). One method is to transfer the load from a CFD grid to a CSM grid. The second method is to update the CAD geometry for CSM deflection.
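
    The first technique amounts to interpolating surface loads between non-matching grids. A minimal SciPy sketch is given below (the function name and the nearest-neighbour fallback are assumptions; a production transfer would also rescale so that the total force is conserved). cfd_pts and csm_pts are (N, 3) surface coordinates.

      import numpy as np
      from scipy.interpolate import griddata

      def transfer_load(cfd_pts, cfd_pressure, csm_pts):
          p = griddata(cfd_pts, cfd_pressure, csm_pts, method="linear")
          # Nearest-neighbour fallback where a CSM node falls outside
          # the convex hull of the CFD surface points.
          nearest = griddata(cfd_pts, cfd_pressure, csm_pts, method="nearest")
          return np.where(np.isnan(p), nearest, p)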

  10. Computational Approach for Ranking Mutant Enzymes According to Catalytic Reaction Rates

    PubMed Central

    Kumarasiri, Malika; Baker, Gregory A.; Soudackov, Alexander V.

    2009-01-01

    A computationally efficient approach for ranking mutant enzymes according to the catalytic reaction rates is presented. This procedure requires the generation and equilibration of the mutant structures, followed by the calculation of partial free energy curves using an empirical valence bond potential in conjunction with biased molecular dynamics simulations and umbrella integration. The individual steps are automated and optimized for computational efficiency. This approach is used to rank a series of 15 dihydrofolate reductase mutants according to the hydride transfer reaction rate. The agreement between the calculated and experimental changes in the free energy barrier upon mutation is encouraging. The computational approach predicts the correct direction of the change in free energy barrier for all mutants, and the correlation coefficient between the calculated and experimental data is 0.82. This general approach for ranking protein designs has implications for protein engineering and drug design. PMID:19235997

  11. Teleportation-based quantum computation, extended Temperley-Lieb diagrammatical approach and Yang-Baxter equation

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Zhang, Kun; Pang, Jinglong

    2016-01-01

    This paper focuses on the study of topological features in teleportation-based quantum computation and aims at presenting a detailed review of teleportation-based quantum computation (Gottesman and Chuang in Nature 402:390, 1999). In the extended Temperley-Lieb diagrammatical approach, we clearly show that such topological features make the fault-tolerant construction of both universal quantum gates and four-partite entangled states more intuitive and simpler. Furthermore, we describe the Yang-Baxter gate by its extended Temperley-Lieb configuration and then study teleportation-based quantum circuit models using the Yang-Baxter gate. Moreover, we discuss the relationship between the extended Temperley-Lieb diagrammatical approach and the Yang-Baxter gate approach. With these research results, we propose a worthwhile subject, the extended Temperley-Lieb diagrammatical approach, for physicists in quantum information and quantum computation.

  12. Interactions between pool geometry and hydraulics

    USGS Publications Warehouse

    Thompson, D.M.; Nelson, J.M.; Wohl, E.E.

    1998-01-01

    An experimental and computational research approach was used to determine interactions between pool geometry and hydraulics. A 20-m-long, 1.8-m-wide flume was used to investigate the effect of four different geometric aspects of pool shape on flow velocity. Plywood sections were used to systematically alter constriction width, pool depth, pool length, and pool exit-slope gradient, each at two separate levels. Using the resulting 16 unique geometries with measured pool velocities in four-way factorial analyses produced an empirical assessment of the role of the four geometric aspects on the pool flow patterns and hence the stability of the pool. To complement the conclusions of these analyses, a two-dimensional computational flow model was used to investigate the relationships between pool geometry and flow patterns over a wider range of conditions. Both experimental and computational results show that constriction and depth effects dominate in the jet section of the pool and that pool length exhibits an increasing effect within the recirculating-eddy system. The pool exit slope appears to force flow reattachment. Pool length controls recirculating-eddy length and vena contracta strength. In turn, the vena contracta and recirculating eddy control velocities throughout the pool.

  13. Use of Integrated Computational Approaches in the Search for New Therapeutic Agents.

    PubMed

    Persico, Marco; Di Dato, Antonio; Orteca, Nausicaa; Cimino, Paola; Novellino, Ettore; Fattorusso, Caterina

    2016-09-01

    Computer-aided drug discovery plays a strategic role in the development of new potential therapeutic agents. Nevertheless, the modeling of biological systems still represents a challenge for computational chemists, and at present no single computational method is able to face such a challenge. This prompted us, as computational medicinal chemists, to develop in-house methodologies by combining various bioinformatics and computational tools. Importantly, thanks to multi-disciplinary collaborations, our computational studies were integrated and validated by experimental data in an iterative process. In this review, we describe some recent applications of such integrated approaches and how they were successfully applied in (i) the search for new allosteric inhibitors of protein-protein interactions and (ii) the development of new redox-active antimalarials from natural leads. PMID:27546035

  14. Comparison of algebraic and analytical approaches to the formulation of the statistical model-based reconstruction problem for X-ray computed tomography.

    PubMed

    Cierniak, Robert; Lorent, Anna

    2016-09-01

    The main aim of this paper is to investigate the properties, related to its conditioning, of our originally formulated statistical model-based iterative approach to the problem of image reconstruction from projections, and in this manner to prove the superiority of this approach over those recently used by other authors. The reconstruction algorithm based on this concept uses maximum likelihood estimation with an objective adjusted to the probability distribution of measured signals obtained from an X-ray computed tomography system with parallel-beam geometry. The analysis and experimental results presented here show that our analytical approach outperforms the referential algebraic methodology, which is explored widely in the literature and exploited in various commercial implementations. PMID:27289536
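
    As a point of reference for the class of methods compared here, a generic statistical model-based iterative update is the MLEM scheme for y ~ Poisson(Ax), sketched below in NumPy. This is standard illustrative code for the iterative family, not the paper's analytical formulation, and the toy system matrix stands in for a real CT projection operator.

      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.uniform(0.0, 1.0, (60, 40))     # toy system matrix (assumed)
      x_true = rng.uniform(0.5, 1.5, 40)
      y = rng.poisson(A @ x_true)             # noisy measurements

      x = np.ones(40)
      sens = A.T @ np.ones(60)                # sensitivity (column sums of A)
      for _ in range(200):
          x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens

      print("relative error:",
            np.linalg.norm(x - x_true) / np.linalg.norm(x_true))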

  15. Computer-aided analysis of Landsat-1 MSS data - A comparison of three approaches, including a 'modified clustering' approach

    NASA Technical Reports Server (NTRS)

    Fleming, M. D.; Berkebile, J. S.; Hoffer, R. M.

    1975-01-01

    Three approaches for analyzing Landsat-1 data from Ludwig Mountain in the San Juan Mountain range in Colorado are considered. In the 'supervised' approach the analyst selects areas of known spectral cover types and specifies these to the computer as training fields. Statistics are obtained for each cover type category and the data are classified. Such classifications are called 'supervised' because the analyst has defined specific areas of known cover types. The second approach uses a clustering algorithm which divides the entire training area into a number of spectrally distinct classes. Because the analyst need not define particular portions of the data for use but has only to specify the number of spectral classes into which the data is to be divided, this classification is called 'nonsupervised'. A hybrid method which selects training areas of known cover type but then uses the clustering algorithm to refine the data into a number of unimodal spectral classes is called the 'modified-supervised' approach.
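
    The 'modified-supervised' step is easy to sketch: pixels from analyst-selected training fields of one known cover type are split by a clustering algorithm into unimodal spectral subclasses whose statistics then drive the classifier. In the Python/scikit-learn sketch below, the synthetic four-band pixels are an illustrative stand-in for MSS data.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(2)
      # Two spectrally distinct conifer stands within one training class.
      X_conifer = np.vstack([rng.normal(m, 2.0, size=(300, 4))
                             for m in ([30, 25, 40, 20], [35, 28, 55, 30])])

      km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_conifer)
      for k in range(km.n_clusters):
          sub = X_conifer[km.labels_ == k]
          print(f"conifer subclass {k}: mean spectrum {sub.mean(axis=0).round(1)}")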

  16. Logo Activities in Elementary Geometry.

    ERIC Educational Resources Information Center

    Libeskind, Shlomo; And Others

    These activities were designed for use at the University of Montana, where they were tested for four quarters in a mathematics for elementary teachers course on informal geometry. They are for use with Apple II-Plus computers with 64K memory or Apple IIe computers and MIT Logo. (Modifications are necessary if the activities are to be used with…

  17. Non-invasive computation of aortic pressure maps: a phantom-based study of two approaches

    NASA Astrophysics Data System (ADS)

    Delles, Michael; Schalck, Sebastian; Chassein, Yves; Müller, Tobias; Rengier, Fabian; Speidel, Stefanie; von Tengg-Kobligk, Hendrik; Kauczor, Hans-Ulrich; Dillmann, Rüdiger; Unterhinninghofen, Roland

    2014-03-01

    Patient-specific blood pressure values in the human aorta are an important parameter in the management of cardiovascular diseases. A direct measurement of these values is only possible by invasive catheterization at a limited number of measurement sites. To overcome these drawbacks, two non-invasive approaches of computing patient-specific relative aortic blood pressure maps throughout the entire aortic vessel volume are investigated by our group. The first approach uses computations from complete time-resolved, three-dimensional flow velocity fields acquired by phase-contrast magnetic resonance imaging (PC-MRI), whereas the second approach relies on computational fluid dynamics (CFD) simulations with ultrasound-based boundary conditions. A detailed evaluation of these computational methods under realistic conditions is necessary in order to investigate their overall robustness and accuracy as well as their sensitivity to certain algorithmic parameters. We present a comparative study of the two blood pressure computation methods in an experimental phantom setup, which mimics a simplified thoracic aorta. The comparative analysis includes the investigation of the impact of algorithmic parameters on the MRI-based blood pressure computation and the impact of extracting pressure maps in a voxel grid from the CFD simulations. Overall, a very good agreement between the results of the two computational approaches can be observed despite the fact that both methods used completely separate measurements as input data. Therefore, the comparative study of the presented work indicates that both non-invasive pressure computation methods show an excellent robustness and accuracy and can therefore be used for research purposes in the management of cardiovascular diseases.
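
    The core of the PC-MRI route can be sketched compactly: for a measured incompressible velocity field, the Navier-Stokes momentum balance grad(p) = -rho*(du/dt + (u.grad)u) + mu*laplacian(u) yields the relative pressure gradient, which is then integrated into a pressure map. The NumPy sketch below is a simplification of the method described above: steady flow, a 2-D field on a uniform grid, and nominal blood properties.

      import numpy as np

      def pressure_gradient(u, v, dx, dy, rho=1060.0, mu=3.5e-3):
          # u, v indexed as [x, y] on a uniform grid with spacings dx, dy.
          dudx, dudy = np.gradient(u, dx, dy, edge_order=2)
          dvdx, dvdy = np.gradient(v, dx, dy, edge_order=2)
          conv_x = u * dudx + v * dudy            # (u . grad) u
          conv_y = u * dvdx + v * dvdy
          lap_u = (np.gradient(dudx, dx, axis=0, edge_order=2)
                   + np.gradient(dudy, dy, axis=1, edge_order=2))
          lap_v = (np.gradient(dvdx, dx, axis=0, edge_order=2)
                   + np.gradient(dvdy, dy, axis=1, edge_order=2))
          dpdx = -rho * conv_x + mu * lap_u
          dpdy = -rho * conv_y + mu * lap_v
          return dpdx, dpdy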

  18. A uniform algebraically-based approach to computational physics and efficient programming

    NASA Astrophysics Data System (ADS)

    Raynolds, James; Mullin, Lenore

    2007-03-01

    We present an approach to computational physics in which a common formalism is used both to express the physical problem and to describe the underlying details of how computation is realized on arbitrary multiprocessor/memory computer architectures. This formalism is the embodiment of a generalized algebra of multi-dimensional arrays (A Mathematics of Arrays), and an efficient computational implementation is obtained through the composition of array indices (the psi-calculus) of algorithms defined using matrices, tensors, and arrays in general. The power of this approach arises from the fact that multiple computational steps (e.g. Fourier Transform followed by convolution, etc.) can be algebraically composed and reduced to a simplified expression (i.e. an Operational Normal Form) that, when directly translated into computer code, can be mathematically proven to be the most efficient implementation with the least number of temporary variables, etc. This approach will be illustrated in the context of a cache-optimized FFT that outperforms or is competitive with established library routines: ESSL, FFTW, IMSL, NAG.

  19. An Approach to Integrate a Space-Time GIS Data Model with High Performance Computers

    SciTech Connect

    Wang, Dali; Zhao, Ziliang; Shaw, Shih-Lung

    2011-01-01

    In this paper, we describe an approach to integrating a Space-Time GIS data model on a high performance computing platform. The Space-Time GIS data model was developed in a desktop computing environment. We use the Space-Time GIS data model to generate a GIS module, which organizes a series of remote sensing data. We are in the process of porting the GIS module into an HPC environment, in which the GIS modules handle large datasets directly via a parallel file system. Although this is an ongoing project, the authors hope this effort can inspire further discussions on the integration of GIS on high performance computing platforms.

  20. P ≠NP Millenium-Problem(MP) TRIVIAL Physics Proof Via NATURAL TRUMPS Artificial-"Intelligence" Via: Euclid Geometry, Plato Forms, Aristotle Square-of-Opposition, Menger Dimension-Theory Connections!!! NO Computational-Complexity(CC)/ANYthing!!!: Geometry!!!

    NASA Astrophysics Data System (ADS)

    Clay, London; Menger, Karl; Rota, Gian-Carlo; Euclid, Alexandria; Siegel, Edward

    P ≠NP MP proof is by computer-"science"/SEANCE(!!!)(CS) computational-"intelligence" lingo jargonial-obfuscation(JO) NATURAL-Intelligence(NI) DISambiguation! CS P =(?) =NP MEANS (Deterministic)(PC) = (?) =(Non-D)(PC) i.e. D(P) =(?) = N(P). For inclusion(equality) vs. exclusion (inequality) irrelevant (P) simply cancels!!! (Equally any/all other CCs IF both sides identical). Crucial question left: (D) =(?) =(ND), i.e. D =(?) = N. Algorithmics[Sipser[Intro. Thy.Comp.(`97)-p.49Fig.1.15!!!

  1. Massless Flavor in Geometry and Matrix Models

    SciTech Connect

    Roiban, Radu; Tatar, Radu; Walcher, Johannes

    2003-01-27

    The proper inclusion of flavor in the Dijkgraaf-Vafa proposal for the solution of N=1 gauge theories through matrix models has been a subject of debate in the recent literature. Here we reexamine this issue by geometrically engineering fundamental matter with type IIB branes wrapped on non-compact cycles in the resolved geometry, and following them through the geometric transition. Our approach treats massive and massless flavor fields on an equal footing, including the mesons. We also study the geometric transitions and superpotentials for finite mass of the adjoint field. All superpotentials we compute reproduce the field theory results. Crucial insights come from T-dual brane constructions in type IIA.

  2. Decreasing Computer Anxiety and Increasing Computer Usage among Early Childhood Education Majors through a Hands-On Approach in a Nonthreatening Environment.

    ERIC Educational Resources Information Center

    Castleman, Jacquelyn B.

    This practicum was designed to lessen the computer anxiety of early childhood education majors enrolled in General Curriculum or General Methods courses, to assist them in learning more about computer applications, and to increase the amount of time spent using computers. Weekly guidelines were given to the students, and a hands-on approach was…

  3. Robot Geometry and the High School Curriculum.

    ERIC Educational Resources Information Center

    Meyer, Walter

    1988-01-01

    Description of the field of robotics and its possible use in high school computational geometry classes emphasizes motion planning exercises and computer graphics displays. Eleven geometrical problems based on robotics are presented along with the correct solutions and explanations. (LRW)

  4. Complex three dimensional modelling of porous media using high performance computing and multi-scale incompressible approach

    NASA Astrophysics Data System (ADS)

    Martin, R.; Orgogozo, L.; Noiriel, C. N.; Guibert, R.; Golfier, F.; Debenest, G.; Quintard, M.

    2013-05-01

    In the context of biofilm growth in porous media, we developed high-performance computing tools to study the impact of biofilms on fluid transport through the pores of a solid matrix. Biofilms are consortia of micro-organisms that develop within polymeric extracellular substances, generally located at fluid-solid interfaces such as the pore interfaces in a water-saturated porous medium. Applications of biofilms in porous media include, for instance, bio-remediation methods that promote the dissolution of organic pollutants. Many theoretical studies have addressed the resulting effective properties of these modified media ([1], [2], [3]), but the bio-colonized porous media under consideration are mainly described by simplified model media (stratified media, cubic networks of spheres, ...). Recent experimental advances, however, have provided tomography images of bio-colonized porous media, which allow us to observe realistic biofilm micro-structures inside the porous media [4]. To solve the closure systems of equations arising in upscaling procedures for realistic porous media, we compute the velocity field of the fluid through the pores on complex geometries described by very large numbers of cells (up to billions). Calculations are made on a realistic 3D sample geometry obtained by X-ray micro-tomography. The cell volumes come from a percolation experiment performed to estimate the impact of precipitation processes on fluid transport properties in porous media [5]. Average permeabilities of the sample are obtained from the velocities by using MPI-based high-performance computing on up to 1000 processors. The steady-state Stokes equations are solved using a finite-volume approach, and relaxation pre-conditioning is introduced to accelerate the code further. Good weak and strong scaling is reached, with results obtained in hours instead of weeks; acceleration factors of 20 up to 40 can be reached. Tens of geometries can now be
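
    The permeability extraction mentioned above follows the standard upscaling relation: once the pore-scale Stokes velocity field is known, Darcy's law links its volume average to the imposed pressure drop (schematic, our notation, for a drop Delta p applied along a sample of length L):

        \langle u_x \rangle \;=\; \frac{K_{xx}}{\mu}\,\frac{\Delta p}{L} \qquad\Longrightarrow\qquad K_{xx} \;=\; \frac{\mu\,\langle u_x \rangle\,L}{\Delta p}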

  5. Numerical characterization of nonlinear dynamical systems using parallel computing: The role of GPUS approach

    NASA Astrophysics Data System (ADS)

    Fazanaro, Filipe I.; Soriano, Diogo C.; Suyama, Ricardo; Madrid, Marconi K.; Oliveira, José Raimundo de; Muñoz, Ignacio Bravo; Attux, Romis

    2016-08-01

    The characterization of nonlinear dynamical systems and their attractors in terms of invariant measures, basins of attraction and the structure of their vector fields is usually a task strongly tied to the underlying computational cost. In this work, the practical aspects of parallel computing - especially the use of Graphics Processing Units (GPUs) and of the Compute Unified Device Architecture (CUDA) - are reviewed and discussed in the context of nonlinear dynamical systems characterization. Here such characterization is performed by obtaining both local and global Lyapunov exponents for the classical forced Duffing oscillator. The local divergence measure is employed in the computation of the Lagrangian Coherent Structures (LCSs), revealing the general organization of the flow according to the obtained separatrices, while the global Lyapunov exponents are used to characterize the attractors obtained under one or more bifurcation parameters. These simulation sets also illustrate the required computation time and the speedup gains provided by different parallel computing strategies, justifying the employment and the relevance of GPUs and CUDA in such an extensive numerical approach. Finally, more than simply providing an overview supported by a representative set of simulations, this work aims to be a unified introduction to the use of these parallel computing tools in the context of nonlinear dynamical systems, providing codes and examples to be executed in MATLAB using the CUDA environment, something that is usually fragmented across different scientific communities and restricted to specialists in parallel computing.
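
    As a pointer to what such a computation involves, here is a minimal CPU-only sketch (ours, not the paper's MATLAB/CUDA code; the parameter values are a standard chaotic choice) of the largest Lyapunov exponent of the forced Duffing oscillator via the two-trajectory Benettin method:

        # Largest Lyapunov exponent of the forced Duffing oscillator
        # (Benettin renormalisation method) -- illustrative sketch.
        import numpy as np

        def duffing(t, s, delta=0.3, alpha=-1.0, beta=1.0, gamma=0.5, omega=1.2):
            x, v = s
            return np.array([v, -delta*v - alpha*x - beta*x**3
                             + gamma*np.cos(omega*t)])

        def rk4(f, t, s, h):
            k1 = f(t, s); k2 = f(t + h/2, s + h*k1/2)
            k3 = f(t + h/2, s + h*k2/2); k4 = f(t + h, s + h*k3)
            return s + h*(k1 + 2*k2 + 2*k3 + k4)/6

        h, steps, d0 = 0.01, 50_000, 1e-8
        s, sp, lyap = np.array([0.1, 0.0]), np.array([0.1 + 1e-8, 0.0]), 0.0
        for k in range(steps):
            s, sp = rk4(duffing, k*h, s, h), rk4(duffing, k*h, sp, h)
            d = np.linalg.norm(sp - s)
            lyap += np.log(d / d0)
            sp = s + (sp - s) * (d0 / d)      # renormalise the perturbation
        print("largest Lyapunov exponent ~", lyap / (steps * h))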

  6. A "Black-and-White Box" Approach to User Empowerment with Component Computing

    ERIC Educational Resources Information Center

    Kynigos, C.

    2004-01-01

    The paper discusses three aspects central to the 10 year-old process of design, development and use of E-slate, a construction kit for educational software. These are: (1) the design of computational media for user empowerment, (2) the socially-grounded approach to the building of user communities and (3) the issue of long-term sustainability as…

  7. An Overview of Three Approaches to Scoring Written Essays by Computer.

    ERIC Educational Resources Information Center

    Rudner, Lawrence; Gagne, Phill

    2001-01-01

    Describes the three most promising approaches to essay scoring by computer: (1) Project Essay Grade (PEG; E. Page, 1966); (2) Intelligent Essay Assessor (IEA; T. Landauer, 1997); and (3) E-rater (J. Burstein, Educational Testing Service). All of these proprietary systems return grades that correlate meaningfully with those of human raters. (SLD)

  8. An Overview of Three Approaches to Scoring Written Essays by Computer. ERIC Digest.

    ERIC Educational Resources Information Center

    Rudner, Lawrence; Gagne, Phill

    This digest describes the three most prominent approaches to essay scoring by computer: (1) Project Essay Grade (PEG), introduced by E. Page in 1966; (2) Intelligent Essay Assessor (IEA), introduced for essay grading in 1997 by T. Landauer and P. Foltz; and (3) e-rater, used by the Educational Testing Service and developed by J. Burstein. PEG…

  9. EMERGING MOLECULAR AND COMPUTATIONAL APPROACHES FOR CROSS-SPECIES EXTRAPOLATIONS: A WORKSHOP SUMMARY REPORT

    EPA Science Inventory

    Benson, W.H., R.T. Di Giulio, J.C. Cook, J. Freedman, R.L. Malek, C. Thompson and D. Versteeg. In press. Emerging Molecular and Computational Approaches for Cross-Species Extrapolations: A Workshop Summary Report (Abstract). To be presented at the SETAC Fourth World Congress, 14-...

  10. The Effects of Computer Supported Problem Based Learning on Students' Approaches to Learning

    ERIC Educational Resources Information Center

    Ak, Serife

    2011-01-01

    The purpose of this paper is to investigate the effects of computer supported problem based learning on students' approaches to learning. The research was conducted as a pre-test and posttest one-grouped design used to achieve the objectives of the study. The experimental process of study lasted 5 weeks and was carried out on 78 university…

  11. Adaptive Computer-Assisted Tutorials: A Cybernetic Approach Optimization with Finite-State Machines.

    ERIC Educational Resources Information Center

    Offir, Joseph

    This paper presents the concepts of a computer-directed system to improve human performance in structured learning situations. Attention is focused on finite-state systems in order to provide a systematic method for constructing training systems and to assist in analysis of problem solving and curriculum planning. The finite-state approach allows…

  12. Application of Computational Toxicological Approaches in Supporting Human Health Risk Assessment, Project Summary

    EPA Science Inventory

    Summary

    This project has three parts. The first part focuses on developing a tiered strategy and applying computational toxicological approaches to support human health risk assessment by deriving a surrogate point-of-departure (e.g., NOAEL, LOAEL, etc.) using a test c...

  13. A Computer-Based Spatial Learning Strategy Approach That Improves Reading Comprehension and Writing

    ERIC Educational Resources Information Center

    Ponce, Hector R.; Mayer, Richard E.; Lopez, Mario J.

    2013-01-01

    This article explores the effectiveness of a computer-based spatial learning strategy approach for improving reading comprehension and writing. In reading comprehension, students received scaffolded practice in translating passages into graphic organizers. In writing, students received scaffolded practice in planning to write by filling in graphic…

  14. A SAND approach based on cellular computation models for analysis and optimization

    NASA Astrophysics Data System (ADS)

    Canyurt, O. E.; Hajela, P.

    2007-06-01

    Genetic algorithms (GAs) have received considerable recent attention in problems of design optimization. The mechanics of population-based search in GAs are highly amenable to implementation on parallel computers. The present article describes a fine-grained model of parallel GA implementation that derives from a cellular-automata-like computation. The central idea behind the cellular genetic algorithm (CGA) approach is to treat the GA population as being distributed over a 2-D grid of cells, with each member of the population occupying a particular cell and defining the state of that cell. Evolution of the cell state is tantamount to updating the design information contained in a cell site and, as in cellular automata computations, takes place on the basis of local interaction with neighbouring cells. A special focus of the article is in the use of cellular automata (CA)-based models for structural analysis in conjunction with the CGA approach to optimization. In such an approach, the analysis and optimization are evolved simultaneously in a unified cellular computational framework. The article describes the implementation of this approach and examines its efficiency in the context of representative structural optimization problems.
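
    A minimal sketch of one such update sweep (our toy setting: binary genomes, a one-max fitness, von Neumann neighbourhoods on a toroidal grid; the article's coupled CA-based structural analysis is not reproduced here):

        # One cellular-GA sweep: each cell mates with its best neighbour,
        # and the child replaces the cell only if it is at least as fit.
        import numpy as np
        rng = np.random.default_rng(0)

        G, L = 16, 20                              # grid side, genome length
        pop = rng.integers(0, 2, size=(G, G, L))
        fitness = lambda g: g.sum(axis=-1)         # toy: maximise ones

        def sweep(pop):
            fit = fitness(pop)
            new = pop.copy()
            for i in range(G):
                for j in range(G):
                    nbrs = [((i-1) % G, j), ((i+1) % G, j),
                            (i, (j-1) % G), (i, (j+1) % G)]
                    bi, bj = max(nbrs, key=lambda ij: fit[ij])
                    cut = rng.integers(1, L)       # one-point crossover
                    child = np.concatenate([pop[i, j, :cut], pop[bi, bj, cut:]])
                    child[rng.random(L) < 0.01] ^= 1   # mutation
                    if fitness(child) >= fit[i, j]:    # local replacement rule
                        new[i, j] = child
            return new

        for _ in range(30):
            pop = sweep(pop)
        print("best fitness:", fitness(pop).max())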

  15. The Computational Experiment and Its Effects on Approach to Learning and Beliefs on Physics

    ERIC Educational Resources Information Center

    Psycharis, Sarantos

    2011-01-01

    Contemporary instructional approaches expect students to be active producers of knowledge. This leads to the need for creation of instructional tools and tasks that can offer students opportunities for active learning. This study examines the effect of a computational experiment as an instructional tool-for Grade 12 students, using a computer…

  16. Learning Probabilities in Computer Engineering by Using a Competency- and Problem-Based Approach

    ERIC Educational Resources Information Center

    Khoumsi, Ahmed; Hadjou, Brahim

    2005-01-01

    Our department has redesigned its electrical and computer engineering programs by adopting a learning methodology based on competence development, problem solving, and the realization of design projects. In this article, we show how this pedagogical approach has been successfully used for learning probabilities and their application to computer…

  17. The Criterion-Related Validity of a Computer-Based Approach for Scoring Concept Maps

    ERIC Educational Resources Information Center

    Clariana, Roy B.; Koul, Ravinder; Salehi, Roya

    2006-01-01

    This investigation seeks to confirm a computer-based approach that can be used to score concept maps (Poindexter & Clariana, 2004) and then describes the concurrent criterion-related validity of these scores. Participants enrolled in two graduate courses (n=24) were asked to read about and research online the structure and function of the heart…

  18. Computational enzyme design approaches with significant biological outcomes: progress and challenges

    PubMed Central

    Li, Xiaoman; Zhang, Ziding; Song, Jiangning

    2012-01-01

    Enzymes are powerful biocatalysts; however, there is still a large gap between the number of enzyme-based practical applications and the number of naturally occurring enzymes. Multiple experimental approaches have been applied to generate nearly all possible mutations of target enzymes, allowing the identification of desirable variants with improved properties that meet practical needs. Meanwhile, an increasing number of computational methods have been developed over the past few decades to assist in the modification of enzymes. With the development of bioinformatic algorithms, computational approaches are now able to provide more precise guidance for enzyme engineering, making it more efficient and less laborious. In this review, we summarize recent advances in method development with significant biological outcomes to provide important insights into successful computational protein designs. We also discuss the limitations and challenges of existing methods and future directions for improving them. PMID:24688648

  19. Using an Adjoint Approach to Eliminate Mesh Sensitivities in Computational Design

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Park, Michael A.

    2006-01-01

    An algorithm for efficiently incorporating the effects of mesh sensitivities in a computational design framework is introduced. The method is based on an adjoint approach and eliminates the need for explicit linearizations of the mesh movement scheme with respect to the geometric parameterization variables, an expense that has hindered practical large-scale design optimization using discrete adjoint methods. The effects of the mesh sensitivities can be accounted for through the solution of an adjoint problem equivalent in cost to a single mesh movement computation, followed by an explicit matrix-vector product scaling with the number of design variables and the resolution of the parameterized surface grid. The accuracy of the implementation is established and dramatic computational savings obtained using the new approach are demonstrated using several test cases. Sample design optimizations are also shown.
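
    The cost argument can be made explicit with a schematic linear mesh-movement model (our notation, not the paper's). If the interior mesh x follows the design variables D through K x = b(D), and f(x) is the objective with the flow adjoint already folded in, then

        \frac{df}{dD} \;=\; \frac{\partial f}{\partial x}\,K^{-1}\,\frac{\partial b}{\partial D} \;=\; \Lambda^{T}\,\frac{\partial b}{\partial D}, \qquad K^{T}\Lambda \;=\; \left(\frac{\partial f}{\partial x}\right)^{T}

    A single adjoint solve with K^T (the cost of one mesh movement) replaces one linearized mesh movement per design variable; only the inexpensive products with \partial b/\partial D scale with the number of design variables, which is the saving the abstract describes.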

  20. A New Finite Element Approach for Prediction of Aerothermal Loads: Progress in Inviscid Flow Computations

    NASA Technical Reports Server (NTRS)

    Bey, K. S.; Thornton, E. A.; Dechaumphai, P.; Ramakrishnan, R.

    1985-01-01

    Recent progress in the development of finite element methodology for the prediction of aerothermal loads is described. Two-dimensional, inviscid computations are presented, but emphasis is placed on the development of an approach extendable to three-dimensional viscous flows. Research progress is described for: (1) utilization of a commercially available program to construct flow solution domains and display computational results; (2) development of an explicit Taylor-Galerkin solution algorithm; (3) closed-form evaluation of finite element matrices; (4) vector computer programming strategies; and (5) validation of solutions. Two test problems of interest to NASA Langley aerothermal research are studied. Comparisons of finite element solutions for Mach 6 flow with other solution methods and experimental data validate the fundamental capabilities of the approach for analyzing high-speed inviscid compressible flows.

  1. Using an Adjoint Approach to Eliminate Mesh Sensitivities in Computational Design

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Park, Michael A.

    2005-01-01

    An algorithm for efficiently incorporating the effects of mesh sensitivities in a computational design framework is introduced. The method is based on an adjoint approach and eliminates the need for explicit linearizations of the mesh movement scheme with respect to the geometric parameterization variables, an expense that has hindered practical large-scale design optimization using discrete adjoint methods. The effects of the mesh sensitivities can be accounted for through the solution of an adjoint problem equivalent in cost to a single mesh movement computation, followed by an explicit matrix-vector product scaling with the number of design variables and the resolution of the parameterized surface grid. The accuracy of the implementation is established and dramatic computational savings obtained using the new approach are demonstrated using several test cases. Sample design optimizations are also shown.

  2. An Integrated Spin-Labeling/Computational-Modeling Approach for Mapping Global Structures of Nucleic Acids

    PubMed Central

    Tangprasertchai, Narin S.; Zhang, Xiaojun; Ding, Yuan; Tham, Kenneth; Rohs, Remo; Haworth, Ian S.; Qin, Peter Z.

    2015-01-01

    The technique of site-directed spin labeling (SDSL) provides unique information on biomolecules by monitoring the behavior of a stable radical tag (i.e., spin label) using electron paramagnetic resonance (EPR) spectroscopy. In this chapter, we describe an approach in which SDSL is integrated with computational modeling to map conformations of nucleic acids. This approach builds upon a SDSL tool kit previously developed and validated, which includes three components: (i) a nucleotide-independent nitroxide probe, designated as R5, which can be efficiently attached at defined sites within arbitrary nucleic acid sequences; (ii) inter-R5 distances in the nanometer range, measured via pulsed EPR; and (iii) an efficient program, called NASNOX, that computes inter-R5 distances on given nucleic acid structures. Following a general framework of data mining, our approach uses multiple sets of measured inter-R5 distances to retrieve “correct” all-atom models from a large ensemble of models. The pool of models can be generated independently without relying on the inter-R5 distances, thus allowing a large degree of flexibility in integrating the SDSL-measured distances with a modeling approach best suited for the specific system under investigation. As such, the integrative experimental/computational approach described here represents a hybrid method for determining all-atom models based on experimentally-derived distance measurements. PMID:26477260

  3. An Integrated Spin-Labeling/Computational-Modeling Approach for Mapping Global Structures of Nucleic Acids.

    PubMed

    Tangprasertchai, Narin S; Zhang, Xiaojun; Ding, Yuan; Tham, Kenneth; Rohs, Remo; Haworth, Ian S; Qin, Peter Z

    2015-01-01

    The technique of site-directed spin labeling (SDSL) provides unique information on biomolecules by monitoring the behavior of a stable radical tag (i.e., spin label) using electron paramagnetic resonance (EPR) spectroscopy. In this chapter, we describe an approach in which SDSL is integrated with computational modeling to map conformations of nucleic acids. This approach builds upon a SDSL tool kit previously developed and validated, which includes three components: (i) a nucleotide-independent nitroxide probe, designated as R5, which can be efficiently attached at defined sites within arbitrary nucleic acid sequences; (ii) inter-R5 distances in the nanometer range, measured via pulsed EPR; and (iii) an efficient program, called NASNOX, that computes inter-R5 distances on given nucleic acid structures. Following a general framework of data mining, our approach uses multiple sets of measured inter-R5 distances to retrieve "correct" all-atom models from a large ensemble of models. The pool of models can be generated independently without relying on the inter-R5 distances, thus allowing a large degree of flexibility in integrating the SDSL-measured distances with a modeling approach best suited for the specific system under investigation. As such, the integrative experimental/computational approach described here represents a hybrid method for determining all-atom models based on experimentally-derived distance measurements. PMID:26477260

  4. A concurrent hybrid Navier-Stokes/Euler approach to fluid dynamic computations

    NASA Technical Reports Server (NTRS)

    Tavella, Domingo A.; Djomehri, M. J.; Kislitzin, Katherine T.; Blake, Matthew W.; Erickson, Larry L.

    1993-01-01

    We present a methodology for the numerical simulation of flow fields by the simultaneous application of two distinct approaches to computational aerodynamics. We compute the three-dimensional flow field of a missile at a moderate angle of attack by dividing the flow field into two regions: a region near the surface, where we use a structured grid and a Navier-Stokes solver, and a region farther from the surface, where we use an unstructured grid and an Euler solver. The two solvers execute as independent UNIX processes, either on the same machine or on two machines, and communicate data across their common interfaces within the same machine or over the network. The computations indicate that extensively separated flow fields can be computed without significant distortion by combining viscous and inviscid solvers.

  5. Current Trend Towards Using Soft Computing Approaches to Phase Synchronization in Communication Systems

    NASA Technical Reports Server (NTRS)

    Drake, Jeffrey T.; Prasad, Nadipuram R.

    1999-01-01

    This paper surveys recent advances in communications that utilize soft computing approaches to phase synchronization. Soft computing, as opposed to hard computing, is a collection of complementary methodologies that act together to produce the most desirable control, decision, or estimation strategies. Recently, the communications area has explored the use of the principal constituents of soft computing, namely fuzzy logic, neural networks, and genetic algorithms, for modeling, control, and most recently for the estimation of phase in phase-coherent communications. If the receiver in a digital communications system is phase-coherent, as is often the case, phase synchronization is required. Synchronization thus requires estimation and/or control at the receiver of an unknown or random phase offset.

  6. A computationally efficient approach for hidden-Markov model-augmented fingerprint-based positioning

    NASA Astrophysics Data System (ADS)

    Roth, John; Tummala, Murali; McEachen, John

    2016-09-01

    This paper presents a computationally efficient approach for mobile subscriber position estimation in wireless networks. A method of data scaling assisted by timing adjust is introduced for fingerprint-based location estimation within a framework that minimises computational cost. The proposed method maintains a level of accuracy comparable to the traditional case where no data scaling is used, and it is evaluated in a simulated environment under varying channel conditions. The proposed scheme is studied when augmented by a hidden-Markov model that matches the internal parameters to the prevailing channel conditions, thus minimising computational cost while maximising accuracy. Furthermore, the timing adjust quantity, available in modern wireless signalling messages, is shown to further reduce computational cost and increase accuracy when available. The results may be seen as a significant step towards integrating advanced position-based modelling with power-sensitive mobile devices.

  7. Digital approach to planning computer-guided surgery and immediate provisionalization in a partially edentulous patient.

    PubMed

    Arunyanak, Sirikarn P; Harris, Bryan T; Grant, Gerald T; Morton, Dean; Lin, Wei-Shao

    2016-07-01

    This report describes a digital approach for computer-guided surgery and immediate provisionalization in a partially edentulous patient. With diagnostic data obtained from cone-beam computed tomography and intraoral digital diagnostic scans, a digital pathway of virtual diagnostic waxing, a virtual prosthetically driven surgical plan, a computer-aided design and computer-aided manufacturing (CAD/CAM) surgical template, and implant-supported screw-retained interim restorations were realized with various open-architecture CAD/CAM systems. The optional CAD/CAM diagnostic casts with planned implant placement were also additively manufactured to facilitate preoperative inspection of the surgical template and customization of the CAD/CAM-fabricated interim restorations. PMID:26868961

  8. An analytical approach to photonic reservoir computing - a network of SOA's - for noisy speech recognition

    NASA Astrophysics Data System (ADS)

    Salehi, Mohammad Reza; Abiri, Ebrahim; Dehyadegari, Louiza

    2013-10-01

    This paper investigates a photonic reservoir computing approach to optical speech recognition on an isolated digit recognition task. An analytical approach to photonic reservoir computing is drawn on to decrease computation time compared with numerical methods, which is important when processing large signals such as speech. It is also observed that adjusting the reservoir parameters, together with a good nonlinear mapping of the input signal into the reservoir, boosts recognition accuracy. Perfect recognition accuracy (i.e., 100%) can be achieved for noiseless speech signals. For noisy signals with signal-to-noise ratios of 0-10 dB, the observed accuracy ranged between 92% and 98%. The photonic reservoir demonstrated a 9-18% improvement over classical reservoir networks with hyperbolic-tangent nodes.
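
    For orientation, the classical baseline that the photonic reservoir is compared against can be sketched in a few lines (an echo-state network with hyperbolic-tangent nodes and a ridge-regression readout; the sizes, spectral radius and toy task are our illustrative choices):

        # Classical reservoir (echo-state network) with tanh nodes:
        # drive with an input, collect states, train a linear readout.
        import numpy as np
        rng = np.random.default_rng(1)

        N, T = 100, 500
        W_in = rng.uniform(-0.5, 0.5, (N, 1))
        W = rng.normal(0, 1, (N, N))
        W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius < 1

        u = np.sin(0.1 * np.arange(T))[:, None]     # toy input signal
        target = np.roll(u[:, 0], -1)               # toy task: one-step prediction

        x, states = np.zeros(N), np.empty((T, N))
        for t in range(T):
            x = np.tanh(W @ x + W_in @ u[t])
            states[t] = x

        lam = 1e-6                                  # ridge-regression readout
        W_out = np.linalg.solve(states.T @ states + lam*np.eye(N),
                                states.T @ target)
        print("train MSE:", np.mean((states @ W_out - target)**2))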

  9. Bayesian approaches to spatial inference: Modelling and computational challenges and solutions

    NASA Astrophysics Data System (ADS)

    Moores, Matthew; Mengersen, Kerrie

    2014-12-01

    We discuss a range of Bayesian modelling approaches for spatial data and investigate some of the associated computational challenges. This paper commences with a brief review of Bayesian mixture models and Markov random fields, with enabling computational algorithms including Markov chain Monte Carlo (MCMC) and integrated nested Laplace approximation (INLA). Following this, we focus on the Potts model as a canonical approach, and discuss the challenge of estimating the inverse temperature parameter that controls the degree of spatial smoothing. We compare three approaches to addressing the doubly intractable nature of the likelihood, namely pseudo-likelihood, path sampling and the exchange algorithm. These techniques are applied to satellite data used to analyse water quality in the Great Barrier Reef.
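
    Of the three techniques compared, pseudo-likelihood is the simplest to sketch: it replaces the intractable Potts likelihood by a product of per-pixel conditionals, which can be maximised directly. A minimal two-label illustration (the random field below is a stand-in for real data, not the paper's satellite imagery):

        # Pseudo-likelihood estimate of the Potts inverse temperature beta
        # for a two-label field on a toroidal grid.
        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(2)
        z = rng.integers(0, 2, (32, 32))        # stand-in labelled image

        def agreements(z):
            # number of the 4 neighbours sharing each pixel's label
            s = np.zeros(z.shape)
            for ax, sh in [(0, 1), (0, -1), (1, 1), (1, -1)]:
                s += (z == np.roll(z, sh, axis=ax))
            return s

        def neg_pseudo_loglik(beta, z):
            s1 = agreements(z)                  # agreements with own label
            s0 = 4 - s1                         # agreements with other label
            return -np.sum(beta*s1 - np.logaddexp(beta*s1, beta*s0))

        res = minimize_scalar(neg_pseudo_loglik, args=(z,),
                              bounds=(0.0, 2.0), method="bounded")
        print("pseudo-likelihood estimate of beta:", res.x)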

  10. Integral geometry and holography

    DOE PAGESBeta

    Czech, Bartlomiej; Lamprou, Lampros; McCandlish, Samuel; Sully, James

    2015-10-27

    We present a mathematical framework which underlies the connection between information theory and the bulk spacetime in the AdS3/CFT2 correspondence. A key concept is kinematic space: an auxiliary Lorentzian geometry whose metric is defined in terms of conditional mutual informations and which organizes the entanglement pattern of a CFT state. When the field theory has a holographic dual obeying the Ryu-Takayanagi proposal, kinematic space has a direct geometric meaning: it is the space of bulk geodesics studied in integral geometry. Lengths of bulk curves are computed by kinematic volumes, giving a precise entropic interpretation of the length of any bulk curve. We explain how basic geometric concepts -- points, distances and angles -- are reflected in kinematic space, allowing one to reconstruct a large class of spatial bulk geometries from boundary entanglement entropies. In this way, kinematic space translates between information theoretic and geometric descriptions of a CFT state. As an example, we discuss in detail the static slice of AdS3 whose kinematic space is two-dimensional de Sitter space.

  11. Integral geometry and holography

    SciTech Connect

    Czech, Bartlomiej; Lamprou, Lampros; McCandlish, Samuel; Sully, James

    2015-10-27

    We present a mathematical framework which underlies the connection between information theory and the bulk spacetime in the AdS3/CFT2 correspondence. A key concept is kinematic space: an auxiliary Lorentzian geometry whose metric is defined in terms of conditional mutual informations and which organizes the entanglement pattern of a CFT state. When the field theory has a holographic dual obeying the Ryu-Takayanagi proposal, kinematic space has a direct geometric meaning: it is the space of bulk geodesics studied in integral geometry. Lengths of bulk curves are computed by kinematic volumes, giving a precise entropic interpretation of the length of any bulk curve. We explain how basic geometric concepts -- points, distances and angles -- are reflected in kinematic space, allowing one to reconstruct a large class of spatial bulk geometries from boundary entanglement entropies. In this way, kinematic space translates between information theoretic and geometric descriptions of a CFT state. As an example, we discuss in detail the static slice of AdS3 whose kinematic space is two-dimensional de Sitter space.
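
    The statement that lengths of bulk curves are computed by kinematic volumes is, schematically, a Crofton formula: writing S(u, v) for the entanglement entropy of the boundary interval (u, v) and n(gamma; u, v) for the number of intersections of the geodesic (u, v) with a bulk curve gamma, the paper's construction takes the form (schematic transcription)

        \frac{\mathrm{length}(\gamma)}{4G_{N}} \;=\; \frac{1}{4}\int_{\mathcal{K}} n(\gamma;u,v)\,\frac{\partial^{2}S(u,v)}{\partial u\,\partial v}\;du\,dv

    so the second derivative of the entanglement entropy supplies the measure on the space of geodesics, which is exactly the "kinematic volume" referred to above.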

  12. Quantum Consequences of Parameterizing Geometry

    NASA Astrophysics Data System (ADS)

    Wanas, M. I.

    2002-12-01

    The marriage between geometrization and quantization has so far not been successful. It is well known that quantization of gravity, using known quantization schemes, is not satisfactory. It may therefore be of interest to look for another approach to this problem. Recently, it has been shown that geometries with torsion admit quantum paths. Such geometries should be parameterized in order to preserve the quantum properties that appear in the paths. The present work explores the consequences of parameterizing such geometries. It is shown that the quantum properties appearing in the path equations are transferred to other geometric entities.

  13. A hybrid computing approach to accelerating the multiple scattering theory based ab initio methods

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Stocks, G. Malcolm

    2014-03-01

    The multiple scattering theory method, also known as the Korringa-Kohn-Rostoker (KKR) method, is considered an elegant approach to ab initio electronic structure calculations for solids. Its convenient access to the one-electron Green function has led to the development of the locally self-consistent multiple scattering (LSMS) method, a linear-scaling ab initio method that allows electronic structure calculations for complex structures requiring tens of thousands of atoms in the unit cell. It is one of the few applications that have demonstrated petascale computing capability. In this presentation, we discuss our recent efforts in developing a hybrid computing approach for accelerating full-potential electronic structure calculations. Specifically, within the framework of our existing LSMS code in FORTRAN 90/95, we exploit the many-core resources of GPGPU accelerators by implementing the compute-intensive functions (the calculation of the multiple scattering matrices and of the single-site solutions) in CUDA, and move these computational tasks to the GPGPUs when they are available. We explain in detail our approach to CUDA programming and the code structure, and show the speed-up of the new hybrid code by comparing its performance on CPU/GPGPU and on CPU only. The work was supported in part by the Center for Defect Physics, a DOE-BES Energy Frontier Research Center.

  14. Toward a computational approach for collision avoidance with real-world scenes

    NASA Astrophysics Data System (ADS)

    Keil, Matthias S.; Rodriguez-Vazquez, Angel

    2003-04-01

    In the central nervous systems of animals such as pigeons and locusts, neurons have been identified which signal objects approaching the animal on a direct collision course. In order to initiate escape behavior in time, these neurons must recognize a possible approach (or at least differentiate it from similar but non-threatening situations) and estimate the time-to-collision (ttc). Unraveling the neural circuitry for collision avoidance, and identifying the underlying computational principles, should thus be promising for building vision-based neuromorphic architectures, which in the near future could find applications in cars or planes. Unfortunately, a corresponding computational architecture able to handle real-world situations (e.g., moving backgrounds, different lighting conditions) is still not available (successful collision avoidance of a robot has been demonstrated only in a closed environment). Here we present two computational models for signalling impending collision. These models are parsimonious, since they possess only the minimum number of computational units essential to reproduce the corresponding biological data. Our models show robust performance in adverse situations, such as approaching low-contrast objects or highly textured backgrounds. Furthermore, a condition is proposed under which the responses of our models match the so-called eta-function. We finally discuss which components need to be added to our model to convert it into a full-fledged real-world collision detector.
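
    For reference, the eta-function mentioned here is commonly written in the locust-LGMD literature in terms of the angular size theta(t) of the approaching object (hedged paraphrase; gains and delays vary between studies):

        \eta(t) \;=\; C\,\dot{\theta}(t-\delta)\;e^{-\alpha\,\theta(t-\delta)}

    where delta is a fixed response delay and alpha sets the angular size at which the response peaks before collision.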

  15. Hybrid approach for fast occlusion processing in computer-generated hologram calculation.

    PubMed

    Gilles, Antonin; Gioia, Patrick; Cozot, Rémi; Morin, Luce

    2016-07-10

    A hybrid approach for fast occlusion processing in computer-generated hologram calculation is studied in this paper. The proposed method is based on the combination of two commonly used approaches that complement one another: the point-source and wave-field approaches. By using these two approaches together, the proposed method thus takes advantage of both of them. In this method, the 3D scene is first sliced into several depth layers parallel to the hologram plane. Light scattered by the scene is then propagated and shielded from one layer to another using either a point-source or a wave-field approach according to a threshold criterion on the number of points within the layer. Finally, the hologram is obtained by computing the propagation of light from the nearest layer to the hologram plane. Experimental results reveal that the proposed method does not produce any visible artifact and outperforms both the point-source and wave-field approaches. PMID:27409327
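
    A minimal sketch of the dispatch rule (the threshold value, grid parameters and the omission of the paper's occlusion/shielding step are all our simplifications):

        # Per depth layer: sparse layers are added as point sources,
        # dense layers are rasterised and diffracted with one FFT pass.
        import numpy as np

        N, pitch, wl = 256, 8e-6, 633e-9   # samples, pixel pitch, wavelength

        def angular_spectrum(u, dz):
            # wave-field propagation of a sampled field over distance dz
            fx = np.fft.fftfreq(N, pitch)
            FX, FY = np.meshgrid(fx, fx)
            kz = 2*np.pi*np.sqrt(np.maximum(0.0, 1/wl**2 - FX**2 - FY**2))
            return np.fft.ifft2(np.fft.fft2(u) * np.exp(1j*kz*dz))

        def point_sources(points, dz):
            # sum of spherical wavelets evaluated on the target plane
            x = (np.arange(N) - N/2) * pitch
            X, Y = np.meshgrid(x, x)
            u = np.zeros((N, N), complex)
            for px, py in points:
                r = np.sqrt((X - px)**2 + (Y - py)**2 + dz**2)
                u += np.exp(2j*np.pi*r/wl) / r
            return u

        def hybrid_layer(u_prev, points, dz, threshold=500):
            u = angular_spectrum(u_prev, dz)   # carry the accumulated field
            if len(points) < threshold:        # sparse: point sources win
                return u + point_sources(points, dz)
            layer = np.zeros((N, N), complex)  # dense: rasterise + FFT wins
            idx = (np.asarray(points)/pitch + N/2).astype(int)
            layer[idx[:, 1], idx[:, 0]] = 1.0
            return u + angular_spectrum(layer, dz)

        u = hybrid_layer(np.zeros((N, N), complex), [(0.0, 0.0)], 1e-3)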

  16. Approaches for the computationally efficient assessment of the plug-in HEV impact on the grid

    NASA Astrophysics Data System (ADS)

    Lee, Tae-Kyung; Filipi, Zoran S.

    2012-11-01

    Realistic duty cycles are critical for the design and assessment of hybrid propulsion systems, in particular plug-in hybrid electric vehicles (PHEVs). The analysis of the PHEV impact requires a large amount of data about daily missions to ensure realism in the predicted temporal loads on the grid. This paper presents two approaches for reducing the computational effort of assessing the large-scale PHEV impact on the grid, namely (1) a "response surface modelling" approach and (2) a "daily driving schedule modelling" approach. The response surface modelling approach replaces the time-consuming vehicle simulations with response surfaces constructed off-line with consideration of real-world driving. The daily driving schedule modelling approach establishes a correlation between departure and arrival times and predicts representative driving patterns with a significantly reduced number of simulation cases. In both cases, representative synthetic driving cycles are used to capture naturalistic driving characteristics for a given trip length. The proposed approaches enable the construction of 24-hour missions, assessments of charging requirements at the time of plugging in, and temporal distributions of the load on the grid with high computational efficiency.

  17. Computational Complementation: A Modelling Approach to Study Signalling Mechanisms during Legume Autoregulation of Nodulation

    PubMed Central

    Han, Liqi

    2010-01-01

    Autoregulation of nodulation (AON) is a long-distance signalling regulatory system maintaining the balance of symbiotic nodulation in legume plants. However, the intricacy of internal signalling and the absence of flux and biochemical data are a bottleneck for the investigation of AON. To address this, a new computational modelling approach called "Computational Complementation" has been developed. The main idea is to use functional-structural modelling to complement the deficiency of an empirical model of a loss-of-function (non-AON) mutant with hypothetical AON mechanisms. If computational complementation demonstrates a phenotype similar to the wild-type plant, the signalling hypothesis is suggested to be "reasonable". Our initial application of this approach was to test whether or not wild-type soybean cotyledons provide the shoot-derived inhibitor (SDI) that regulates nodule progression. We predicted by computational complementation that the cotyledon is part of the shoot in terms of AON and that it produces the SDI signal, a result that was confirmed by reciprocal epicotyl-and-hypocotyl grafting in a real-plant experiment. This application demonstrates the feasibility of computational complementation and shows its usefulness for applications where real-plant experimentation is either difficult or impossible. PMID:20195551

  18. Integrated geometry and grid generation system for complex configurations

    NASA Technical Reports Server (NTRS)

    Akdag, Vedat; Wulf, Armin

    1992-01-01

    A grid generation system was developed that enables grid generation for complex configurations. The system, called ICEM/CFD, is described and its role in computational fluid dynamics (CFD) applications is presented. The capabilities of the system include full computer-aided design (CAD), grid generation on the actual CAD geometry definition using robust surface projection algorithms, easy interfacing with known CAD packages through common file formats for geometry transfer, grid quality evaluation of the volume grid, coupling of boundary condition set-up for block faces with grid topology generation, multi-block grid generation with or without point continuity and block-to-block interface requirements, and generation of grid files directly compatible with known flow solvers. The interactive and integrated approach to the problem of computational grid generation not only substantially reduces manpower time but also increases the flexibility of later grid modifications and enhancements, which is required in an environment where CFD is integrated into a product design cycle.

  19. MRI-Based Computational Fluid Dynamics in Experimental Vascular Models: Toward the Development of an Approach for Prediction of Cardiovascular Changes During Prolonged Space Missions

    NASA Technical Reports Server (NTRS)

    Spirka, T. A.; Myers, J. G.; Setser, R. M.; Halliburton, S. S.; White, R. D.; Chatzimavroudis, G. P.

    2005-01-01

    A priority of NASA is to identify and study possible risks to astronauts' health during prolonged space missions [1]. The goal is to develop a procedure for a preflight evaluation of the cardiovascular system of an astronaut and to forecast how it will be affected during the mission. To predict these changes, a computational cardiovascular model must be constructed. Although physiology data can be used to build a general model, a more desirable subject-specific model requires anatomical, functional, and flow data from the specific astronaut. MRI has the unique advantage of providing images with all of the above information, including three-directional velocity data which can be used as boundary conditions in a computational fluid dynamics (CFD) program [2,3]. MRI-based CFD is very promising for reproducing the flow patterns of a specific subject and predicting changes in the absence of gravity. The aim of this study was to test the feasibility of this approach by reconstructing the geometry of MRI-scanned arterial models and reproducing the MRI-measured velocities using CFD simulations on these geometries.

  20. Computational modeling of chemo-electro-mechanical coupling: A novel implicit monolithic finite element approach

    PubMed Central

    Wong, J.; Göktepe, S.; Kuhl, E.

    2014-01-01

    Computational modeling of the human heart allows us to predict how chemical, electrical, and mechanical fields interact throughout a cardiac cycle. Pharmacological treatment of cardiac disease has advanced significantly over the past decades, yet it remains unclear how the local biochemistry of an individual heart cell translates into global cardiac function. Here we propose a novel, unified strategy to simulate excitable biological systems across three biological scales. To discretize the governing chemical, electrical, and mechanical equations in space, we propose a monolithic finite element scheme. We apply a highly efficient and inherently modular global-local split, in which the deformation and the transmembrane potential are introduced globally as nodal degrees of freedom, while the chemical state variables are treated locally as internal variables. To ensure unconditional algorithmic stability, we apply an implicit backward Euler finite difference scheme to discretize the resulting system in time. To increase algorithmic robustness and guarantee optimal quadratic convergence, we suggest an incremental iterative Newton-Raphson scheme. The proposed algorithm allows us to simulate the interaction of chemical, electrical, and mechanical fields during a representative cardiac cycle on a patient-specific geometry, robustly and stably, with calculation times on the order of four days on a standard desktop computer. PMID:23798328
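
    Reduced to its essentials (and stripped of the spatial FEM discretization and the global-local split), the time-stepping pattern described above is backward Euler with an inner Newton-Raphson loop. A minimal sketch on a 0-D excitable cell (FitzHugh-Nagumo with illustrative parameters, standing in for the cardiac cell model):

        # Implicit backward Euler + Newton-Raphson for an excitable cell.
        import numpy as np

        def f(y, I=0.5, a=0.7, b=0.8, tau=12.5):
            v, w = y                      # membrane potential, recovery
            return np.array([v - v**3/3 - w + I, (v + a - b*w)/tau])

        def J(y, b=0.8, tau=12.5):        # Jacobian of f
            v, w = y
            return np.array([[1 - v**2, -1.0], [1/tau, -b/tau]])

        h, y = 0.1, np.array([-1.0, 1.0])
        for step in range(400):
            yn = y.copy()                 # Newton starts from previous state
            for _ in range(20):
                R = yn - y - h*f(yn)      # backward-Euler residual
                dy = np.linalg.solve(np.eye(2) - h*J(yn), -R)
                yn += dy
                if np.linalg.norm(dy) < 1e-10:   # quadratic convergence
                    break
            y = yn
        print("final state:", y)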