Science.gov

Sample records for computational geometry approach

  1. A computational approach to modeling cellular-scale blood flow in complex geometry

    NASA Astrophysics Data System (ADS)

    Balogh, Peter; Bagchi, Prosenjit

    2017-04-01

    We present a computational methodology for modeling cellular-scale blood flow in arbitrary and highly complex geometry. Our approach is based on immersed-boundary methods, which allow modeling flows in arbitrary geometry while resolving the large deformation and dynamics of every blood cell with high fidelity. The present methodology seamlessly integrates different modeling components dealing with stationary rigid boundaries of complex shape, moving rigid bodies, and highly deformable interfaces governed by nonlinear elasticity. Thus it enables us to simulate 'whole' blood suspensions flowing through physiologically realistic microvascular networks that are characterized by multiple bifurcating and merging vessels, as well as geometrically complex lab-on-chip devices. The focus of the present work is on the development of a versatile numerical technique that is able to consider deformable cells and rigid bodies flowing in three-dimensional arbitrarily complex geometries over a diverse range of scenarios. After describing the methodology, a series of validation studies are presented against analytical theory, experimental data, and previous numerical results. Then, the capability of the methodology is demonstrated by simulating flows of deformable blood cells and heterogeneous cell suspensions in both physiologically realistic microvascular networks and geometrically intricate microfluidic devices. It is shown that the methodology can predict several complex microhemodynamic phenomena observed in vascular networks and microfluidic devices. The present methodology is robust and versatile, and has the potential to scale up to very large microvascular networks at organ levels.
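
    As an illustrative aside, the Lagrangian-Eulerian coupling at the heart of immersed-boundary methods is typically realized with a regularized delta function. The Python sketch below (a minimal toy, not the authors' code; Peskin's classic 4-point kernel is assumed) spreads marker forces onto a one-dimensional Eulerian grid:

```python
import numpy as np

def peskin_delta(r):
    """Peskin's 4-point regularized delta function (argument in grid units)."""
    r = np.abs(r)
    phi = np.zeros_like(r)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r < 2.0)
    phi[m1] = (3 - 2*r[m1] + np.sqrt(1 + 4*r[m1] - 4*r[m1]**2)) / 8
    phi[m2] = (5 - 2*r[m2] - np.sqrt(-7 + 12*r[m2] - 4*r[m2]**2)) / 8
    return phi

def spread_force(markers, forces, n, h):
    """Spread Lagrangian marker forces onto a 1D Eulerian grid of n cells."""
    x = (np.arange(n) + 0.5) * h              # cell-center coordinates
    f_grid = np.zeros(n)
    for xm, fm in zip(markers, forces):
        f_grid += fm * peskin_delta((x - xm) / h) / h   # kernel integrates to 1
    return f_grid

# a single unit force at x = 0.5 is smeared over ~4 neighboring cells
print(spread_force([0.5], [1.0], n=16, h=1.0/16).round(3))
```

    The same kernel, applied in reverse, interpolates grid velocities back to the markers, which is what lets a deformable cell membrane and a rigid vessel wall share one Cartesian grid.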

  2. Computer-Aided Geometry Modeling

    NASA Technical Reports Server (NTRS)

    Shoosmith, J. N. (Compiler); Fulton, R. E. (Compiler)

    1984-01-01

    Techniques in computer-aided geometry modeling and their application are addressed. Mathematical modeling, solid geometry models, management of geometric data, development of geometry standards, and interactive and graphic procedures are discussed. The applications include aeronautical and aerospace structures design, fluid flow modeling, and gas turbine design.

  3. A computational approach to continuum damping of Alfven waves in two and three-dimensional geometry

    SciTech Connect

    Koenies, Axel; Kleiber, Ralf

    2012-12-15

    While the usual way of calculating continuum damping of global Alfven modes is the introduction of a small artificial resistivity, we present a computational approach to the problem based on a suitable path of integration in the complex plane. This approach is implemented by the Riccati shooting method and it is shown that it can be transferred to the Galerkin method used in three-dimensional ideal magneto-hydrodynamics (MHD) codes. The new approach turns out to be less expensive with respect to resolution and computation time than the usual one. We present an application to large aspect ratio tokamak and stellarator equilibria retaining only a few Fourier harmonics, and calculate eigenfunctions and continuum damping rates. These may serve as input for kinetic MHD hybrid models, making it possible to bypass the problem of singularities on the path of integration on the one hand while retaining continuum damping on the other.

  4. An Automated Approach to Very High Order Aeroacoustic Computations in Complex Geometries

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Goodrich, John W.

    2000-01-01

    Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high order in space and time methods on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid aligned boundaries and to 2nd order for irregular boundaries.
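
    As a sketch of the time-advancement idea (a toy example, not the authors' solver): for the linear advection equation u_t = -c u_x, the Cauchy-Kowalewski procedure trades each time derivative for a spatial one, d^k u/dt^k = (-c)^k d^k u/dx^k, so one Taylor series in time advances the solution to arbitrary order. The spatial derivative below is spectral for brevity, standing in for the paper's Hermite divided-difference interpolation:

```python
import numpy as np

def ddx(u, dx):
    """Spectral x-derivative on a periodic grid (a stand-in for the paper's
    Hermite divided-difference interpolation)."""
    k = 2*np.pi * np.fft.fftfreq(u.size, d=dx)
    return np.real(np.fft.ifft(1j*k*np.fft.fft(u)))

def ck_advance(u, c, dx, dt, order=8):
    """One Cauchy-Kowalewski time step for u_t = -c u_x:
    u(t+dt) = sum_k (dt^k / k!) * (-c d/dx)^k u(t)."""
    unew, du, fact = u.copy(), u.copy(), 1.0
    for k in range(1, order + 1):
        du = -c * ddx(du, dx)          # next term of the recursion
        fact *= k
        unew += dt**k / fact * du
    return unew

x = np.linspace(0, 2*np.pi, 128, endpoint=False)
u1 = ck_advance(np.sin(x), c=1.0, dx=x[1]-x[0], dt=0.05)
print(abs(u1 - np.sin(x - 0.05)).max())   # error near machine precision
```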

  5. A Whirlwind Tour of Computational Geometry.

    ERIC Educational Resources Information Center

    Graham, Ron; Yao, Frances

    1990-01-01

    Described is computational geometry, which uses concepts and results from classical geometry, topology, and combinatorics, as well as standard algorithmic techniques such as sorting and searching, graph manipulation, and linear programming. Also included are special techniques and paradigms. (KR)

  6. Computational Approaches to the Determination of the Molecular Geometry of Acrolein in its T_1(n,π*) State

    NASA Astrophysics Data System (ADS)

    McAnally, Michael O.; Hlavacek, Nikolaus C.; Drucker, Stephen

    2012-06-01

    The spectroscopically derived inertial constants for acrolein (propenal) in its T_1(n,π*) state were used to test predictions from a variety of computational methods. One focus was on multiconfigurational methods, such as CASSCF and CASPT2, that are applicable to excited states. We also examined excited-state methods that utilize single reference configurations, including EOM-EE-CCSD and TD-PBE0. Finally, we applied unrestricted ground-state techniques, such as UCCSD(T) and the more economical UPBE0 method, to the T_1(n,π*) excited state under the constraint of C_s symmetry. The unrestricted ground-state methods are applicable because at a planar geometry, the T_1(n,π*) state of acrolein is the lowest-energy state of its spin multiplicity. Each of the above methods was used with a triple zeta quality basis set to optimize the T_1(n,π*) geometry. This procedure resulted in the following inertial constants (cm-1) of acrolein in its T_1(n,π*) state:

        Method         A      B       C
        CASPT2(6,5)    1.667  0.1491  0.1368
        CASSCF(6,5)    1.667  0.1491  0.1369
        EOM-EE-CCSD    1.675  0.1507  0.1383
        TD-PBE0        1.719  0.1493  0.1374
        UCCSD(T)^b     1.668  0.1480  0.1360
        UPBE0          1.699  0.1487  0.1367
        Experiment^a   1.662  0.1485  0.1363

    The two multiconfigurational methods produce the same inertial constants, and those constants agree closely with experiment. However, the sets of computed bond lengths differ significantly for the two methods. In the CASSCF calculation, the lengthening of the C=O and C=C bonds and the shortening of the C--C bond are more pronounced than in CASPT2. ^a O. S. Bokareva et al., Int. J. Quant. Chem. 108, 2719 (2008).
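
    For context, the tabulated rotational constants follow from the principal moments of inertia of each optimized structure through the standard rigid-rotor relations (general background, not specific to this work):

    \[ A = \frac{h}{8\pi^2 c\, I_a}, \qquad B = \frac{h}{8\pi^2 c\, I_b}, \qquad C = \frac{h}{8\pi^2 c\, I_c}, \qquad I_\alpha = \sum_i m_i\, d_{i,\alpha}^2, \]

    where \( d_{i,\alpha} \) is the perpendicular distance of atom i from principal axis α; this is how each optimized T_1(n,π*) geometry is reduced to the three numbers compared against experiment above.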

  7. Geometry of quantum computation with qutrits.

    PubMed

    Li, Bin; Yu, Zu-Huan; Fei, Shao-Ming

    2013-01-01

    Determining the quantum circuit complexity of a unitary operation is an important problem in quantum computation. By using the mathematical techniques of Riemannian geometry, we investigate the efficient quantum circuits in quantum computation with n qutrits. We show that the optimal quantum circuits are essentially equivalent to the shortest path between two points in a certain curved geometry of SU(3^n). As an example, three-qutrit systems are investigated in detail.

  8. Computing Bisectors in a Dynamic Geometry Environment

    ERIC Educational Resources Information Center

    Botana, Francisco

    2013-01-01

    In this note, an approach combining dynamic geometry and automated deduction techniques is used to study the bisectors between points and curves. Usual teacher constructions for bisectors are discussed, showing that inherent limitations in dynamic geometry software impede their thorough study. We show that the interactive sketching of bisectors…

  9. Multilinear Computing and Multilinear Algebraic Geometry

    DTIC Science & Technology

    2016-08-10

    satisfying a polynomial growth condition: for such a given function, computation of its Fenchel dual/conjugate is polynomial-time reducible to... computation of the given function. Hence the computation of a norm or a convex function of polynomial growth is NP-hard if and only if the computation of... growth and that of its Fenchel dual. This paper has been submitted and is available as a preprint (see [19] in Section 4). Semialgebraic geometry of
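
    For reference, the Fenchel dual (conjugate) named in this snippet is the standard convex-analysis object

    \[ f^*(y) = \sup_x \left\{ \langle x, y \rangle - f(x) \right\}, \]

    so the quoted statement ties the complexity of evaluating a convex function of polynomial growth to the complexity of evaluating its conjugate.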

  10. Quadric solids and computational geometry

    SciTech Connect

    Emery, J.D.

    1980-07-25

    As part of the CAD-CAM development project, this report discusses the mathematics underlying the program QUADRIC, which does computations on objects modeled as Boolean combinations of quadric half-spaces. Topics considered include projective space, quadric surfaces, polars, affine transformations, the construction of solids, shaded images, the inertia tensor, moments, volume, surface integrals, Monte Carlo integration, and stratified sampling. 1 figure.
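
    A minimal sketch of the Monte Carlo ingredient (an illustration of the idea, not the QUADRIC program): a quadric half-space is the point set where a quadratic form is non-positive, a solid is a Boolean combination of such tests, and the volume follows from the hit fraction of uniform samples in a bounding box:

```python
import numpy as np

rng = np.random.default_rng(0)

# quadric half-space tests: a point is kept where f(x) <= 0
def sphere(p):    # x^2 + y^2 + z^2 - 1 <= 0
    return (p**2).sum(axis=1) - 1.0 <= 0.0

def cylinder(p):  # x^2 + y^2 - 0.25 <= 0
    return (p[:, :2]**2).sum(axis=1) - 0.25 <= 0.0

def solid(p):     # Boolean combination: sphere minus cylinder
    return sphere(p) & ~cylinder(p)

pts = rng.uniform(-1.0, 1.0, size=(200_000, 3))      # sample the bounding cube
print("estimated volume:", 8.0 * solid(pts).mean())  # ~2.72 for this solid
```

    Stratified sampling, also listed in the abstract, reduces the variance of exactly this estimator by sampling each subcell of the bounding box separately.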

  11. Classical versus Computer Algebra Methods in Elementary Geometry

    ERIC Educational Resources Information Center

    Pech, Pavel

    2005-01-01

    Computer algebra methods based on results of commutative algebra, such as Groebner bases of ideals and elimination of variables, make it possible to solve complex elementary and non-elementary problems of geometry, which are difficult to solve using a classical approach. Computer algebra methods permit the proof of geometric theorems, automatic…

  12. Randomized Control Trials on the Dynamic Geometry Approach

    ERIC Educational Resources Information Center

    Jiang, Zhonghong; White, Alexander; Rosenwasser, Alana

    2011-01-01

    The project reported here is conducting repeated randomized control trials of an approach to high school geometry that utilizes Dynamic Geometry (DG) software to supplement ordinary instructional practices. It compares effects of that intervention with standard instruction that does not make use of computer drawing/exploration tools. The basic…

  13. Computational fluid dynamics using CATIA created geometry

    NASA Astrophysics Data System (ADS)

    Gengler, Jeanne E.

    1989-07-01

    A method has been developed to link the geometry definition residing on a CAD/CAM system with a computational fluid dynamics (CFD) tool needed to evaluate aerodynamic designs and requiring the memory capacity of a supercomputer. Requirements for surfaces suitable for CFD analysis are discussed. Techniques for developing surfaces and verifying their smoothness are compared, showing the capability of the CAD/CAM system. The utilization of a CAD/CAM system to create a computational mesh is explained, and the mesh interaction with the geometry and input file preparation for the CFD analysis is discussed.

  14. Geometry of behavioral spaces: A computational approach to analysis and understanding of agent based models and agent behaviors

    NASA Astrophysics Data System (ADS)

    Cenek, Martin; Dahl, Spencer K.

    2016-11-01

    Systems with non-linear dynamics frequently exhibit emergent system behavior, which is important to find and specify rigorously to understand the nature of the modeled phenomena. Through this analysis, it is possible to characterize phenomena such as how systems assemble or dissipate and what behaviors lead to specific final system configurations. Agent Based Modeling (ABM) is one of the modeling techniques used to study the interaction dynamics between a system's agents and its environment. Although the methodology of ABM construction is well understood and practiced, there are no computational, statistically rigorous, comprehensive tools to evaluate an ABM's execution. Often, a human has to observe an ABM's execution in order to analyze how the ABM functions, identify the emergent processes in the agent's behavior, or study a parameter's effect on the system-wide behavior. This paper introduces a new statistically based framework to automatically analyze agents' behavior, identify common system-wide patterns, and record the probability of agents changing their behavior from one pattern of behavior to another. We use network based techniques to analyze the landscape of common behaviors in an ABM's execution. Finally, we test the proposed framework with a series of experiments featuring increasingly emergent behavior. The proposed framework will allow computational comparison of ABM executions, exploration of a model's parameter configuration space, and identification of the behavioral building blocks in a model's dynamics.
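
    The transition-recording step described here reduces to simple bookkeeping once each time step carries a behavior-pattern label (a generic sketch, not the authors' framework):

```python
import numpy as np

def transition_matrix(labels, k):
    """Row-stochastic matrix of empirical transition probabilities
    between k behavior patterns, from a per-step label sequence."""
    counts = np.zeros((k, k))
    for a, b in zip(labels[:-1], labels[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# toy sequence of behavior-cluster labels from one agent's trajectory
labels = [0, 0, 1, 1, 2, 1, 0, 0, 2, 2, 1]
print(transition_matrix(labels, k=3).round(2))
```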

  15. Computation of three-phase capillary entry pressures and arc menisci configurations in pore geometries from 2D rock images: A combinatorial approach

    NASA Astrophysics Data System (ADS)

    Zhou, Yingfang; Helland, Johan Olav; Hatzignatiou, Dimitrios G.

    2014-07-01

    We present a semi-analytical, combinatorial approach to compute three-phase capillary entry pressures for gas invasion into pore throats with constant cross-sections of arbitrary shapes that are occupied by oil and/or water. For a specific set of three-phase capillary pressures, geometrically allowed gas/oil, oil/water and gas/water arc menisci are determined by moving two circles in opposite directions along the pore/solid boundary for each fluid pair such that the contact angle is defined at the front circular arcs. Intersections of the two circles determine the geometrically allowed arc menisci for each fluid pair. The resulting interfaces are combined systematically to allow for all geometrically possible three-phase configuration changes. The three-phase extension of the Mayer-Stowe-Princen (MS-P) method is adopted to calculate capillary entry pressures for all determined configuration candidates, from which the most favorable gas invasion configuration is determined. The model is validated by comparing computed three-phase capillary entry pressures and corresponding fluid configurations with analytical solutions in idealized triangular star-shaped pores. It is demonstrated that the model accounts for all scenarios that have been analyzed previously in these shapes. Finally, three-phase capillary entry pressures and associated fluid configurations are computed in throat cross-sections extracted from segmented SEM images of Bentheim sandstone. The computed gas/oil capillary entry pressures account for the expected dependence of oil/water capillary pressure in spreading and non-spreading fluid systems at the considered wetting conditions. Because these geometries are irregular and include constrictions, we introduce three-phase displacements that have not been identified previously in pore-network models that are based on idealized pore shapes. However, in the limited number of pore geometries considered in this work, we find that the favorable displacements are
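
    For background (textbook capillarity relations, not the paper's three-phase derivation), the entry-pressure calculation rests on the Young-Laplace equation,

    \[ P_c = P_{nw} - P_w = \sigma \left( \frac{1}{r_1} + \frac{1}{r_2} \right), \]

    which for a uniform circular tube of radius r and contact angle θ reduces to the familiar threshold \( P_c = 2\sigma\cos\theta/r \). The MS-P method generalizes this balance to arbitrary cross-sections by equating pressure work with the interfacial energy change along the tube, and the combinatorial search described above extends that balance to three phases.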

  16. An approach for management of geometry data

    NASA Technical Reports Server (NTRS)

    Dube, R. P.; Herron, G. J.; Schweitzer, J. E.; Warkentine, E. R.

    1980-01-01

    The strategies for managing Integrated Programs for Aerospace Design (IPAD) computer-based geometry are described. The computer model of geometry is the basis for communication, manipulation, and analysis of shape information. IPAD's data base system makes this information available to all authorized departments in a company. A discussion of the data structures and algorithms required to support geometry in IPIP (IPAD's data base management system) is presented. Through the use of IPIP's data definition language, the structure of the geometry components is defined. The data manipulation language is the vehicle by which a user defines an instance of the geometry. The manipulation language also allows a user to edit, query, and manage the geometry. The selection of canonical forms is a very important part of the IPAD geometry. IPAD has a canonical form for each entity and provides transformations to alternate forms; in particular, IPAD will provide a transformation to the ANSI standard. The DBMS schemas required to support IPAD geometry are explained.

  17. A fractal approach to the dark silicon problem: A comparison of 3D computer architectures - Standard slices versus fractal Menger sponge geometry

    NASA Astrophysics Data System (ADS)

    Herrmann, Richard

    2015-01-01

    The dark silicon problem, which limits the power-growth of future computer generations, is interpreted as a heat energy transport problem when increasing the energy emitting surface area within a given volume. A comparison of two 3D-configuration models, namely a standard slicing and a fractal surface generation within the Menger sponge geometry, is presented. It is shown that for iteration orders n > 3 the fractal model shows increasingly better thermal behavior. As a consequence, cooling problems may be minimized by using a fractal architecture. The Menger sponge geometry is therefore a good example of a fractal architecture applicable not only in computer science, but also e.g. in chemistry when building chemical reactors, optimizing catalytic processes, or in sensor construction technology when building highly effective sensors for toxic gases or water analysis.
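
    The scaling claim is easy to reproduce with a voxel model (an independent check, not the paper's derivation): each Menger iteration multiplies the filled volume by 20/27 while the exposed surface area keeps growing, which is exactly the property the heat-transport argument exploits:

```python
import numpy as np

def menger(n):
    """Boolean voxel model of the n-th Menger sponge iteration."""
    s = np.ones((1, 1, 1), dtype=bool)
    for _ in range(n):
        s = s.repeat(3, 0).repeat(3, 1).repeat(3, 2)
        mid = np.arange(s.shape[0]) % 3 == 1          # middle third per axis
        ix, iy, iz = np.meshgrid(mid, mid, mid, indexing="ij")
        s &= ~((ix & iy) | (iy & iz) | (ix & iz))     # carve cross-shaped holes
    return s

def surface_area(s):
    """Total exposed voxel-face area, with the sponge scaled to a unit cube."""
    h2 = 1.0 / s.shape[0]**2
    faces = 0
    for ax in range(3):
        a = np.swapaxes(s, 0, ax)
        faces += a[0].sum() + a[-1].sum()     # faces on the domain boundary
        faces += (a[1:] ^ a[:-1]).sum()       # filled/empty interior faces
    return faces * h2

for n in range(4):
    s = menger(n)
    print(n, "volume:", round(s.mean(), 4), "area:", round(surface_area(s), 3))
# volume shrinks as (20/27)^n while area grows roughly as (20/9)^n
```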

  18. A computer program for analyzing channel geometry

    USGS Publications Warehouse

    Regan, R.S.; Schaffranek, R.W.

    1985-01-01

    The Channel Geometry Analysis Program (CGAP) provides the capability to process, analyze, and format cross-sectional data for input to flow/transport simulation models or other computational programs. CGAP allows for a variety of cross-sectional data input formats through use of variable format specification. The program accepts data from various computer media and provides for modification of machine-stored parameter values. CGAP has been devised to provide a rapid and efficient means of computing and analyzing the physical properties of an open-channel reach defined by a sequence of cross sections. CGAP's 16 options provide a wide range of methods by which to analyze and depict a channel reach and its individual cross-sectional properties. The primary function of the program is to compute the area, width, wetted perimeter, and hydraulic radius of cross sections at successive increments of water surface elevation (stage) from data that consist of coordinate pairs of cross-channel distances and land surface or channel bottom elevations. Longitudinal rates-of-change of cross-sectional properties are also computed, as are the mean properties of a channel reach. Output products include tabular lists of cross-sectional area, channel width, wetted perimeter, hydraulic radius, average depth, and cross-sectional symmetry computed as functions of stage; plots of cross sections; plots of cross-sectional area and (or) channel width as functions of stage; tabular lists of cross-sectional area and channel width computed as functions of stage for subdivisions of a cross section; plots of cross sections in isometric projection; and plots of cross-sectional area at a fixed stage as a function of longitudinal distance along an open-channel reach. A Command Procedure Language program and Job Control Language procedure exist to facilitate program execution on the U.S. Geological Survey Prime and Amdahl computer systems, respectively. (Lantz-PTT)
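
    The core per-section computation CGAP performs can be sketched compactly (a simplified single-section illustration of the stated input format, not the USGS code): clip each surveyed segment at the waterline, then accumulate area, top width, and wetted perimeter:

```python
import numpy as np

def section_properties(dist, elev, stage):
    """Area, top width, wetted perimeter, and hydraulic radius of one cross
    section (cross-channel distance / elevation pairs) at a given stage."""
    area = width = wp = 0.0
    for (x1, z1), (x2, z2) in zip(zip(dist, elev), zip(dist[1:], elev[1:])):
        d1, d2 = stage - z1, stage - z2        # water depths at segment ends
        if d1 <= 0 and d2 <= 0:
            continue                           # segment entirely dry
        if d1 < 0 or d2 < 0:                   # clip the segment at the waterline
            t = d1 / (d1 - d2)                 # fraction along segment to zero depth
            xi = x1 + t * (x2 - x1)
            if d1 < 0: x1, z1, d1 = xi, stage, 0.0
            else:      x2, z2, d2 = xi, stage, 0.0
        area  += 0.5 * (d1 + d2) * (x2 - x1)   # trapezoid rule on depth
        width += x2 - x1
        wp    += np.hypot(x2 - x1, z2 - z1)
    return area, width, wp, (area / wp if wp > 0 else 0.0)

# triangular test channel with 1:1 side slopes, stage 1 m above the thalweg
print(section_properties([0.0, 2.0, 4.0], [2.0, 0.0, 2.0], stage=1.0))
# -> area 1.0, width 2.0, wetted perimeter 2.83, hydraulic radius 0.354
```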

  19. Computational algebraic geometry of epidemic models

    NASA Astrophysics Data System (ADS)

    Rodríguez Vega, Martín.

    2014-06-01

    Computational Algebraic Geometry is applied to the analysis of various epidemic models for Schistosomiasis and Dengue, both for the case without control measures and for the case where control measures are applied. The models were analyzed using the mathematical software Maple. Explicitly, the analysis is performed using Groebner bases, Hilbert dimension, and Hilbert polynomials; these computational tools are built into Maple. Each of these models is represented by a system of ordinary differential equations, and for each model the basic reproductive number (R0) is calculated. The effects of the control measures are observed through the changes in the algebraic structure of R0, the Groebner bases, the Hilbert dimension, and the Hilbert polynomials. It is hoped that the results obtained here will prove useful for designing control measures against the epidemic diseases described. For future research, the use of algebraic epidemiology to analyze models for airborne and waterborne diseases is proposed.
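
    A small sympy sketch of the kind of computation described (a toy SIR model with vital dynamics, not the paper's Schistosomiasis or Dengue systems): the Groebner basis of the equilibrium ideal separates the disease-free branch from the endemic branch, where R0 appears in the algebraic structure:

```python
from sympy import symbols, groebner

s, i = symbols("s i")
beta, gamma, mu = symbols("beta gamma mu", positive=True)

# equilibrium equations of a normalized SIR model with births and deaths
eqs = [mu - beta*s*i - mu*s,           # dS/dt = 0
       beta*s*i - (gamma + mu)*i]      # dI/dt = 0

G = groebner(eqs, i, s, order="lex")
print(G)
# the basis factors into the disease-free branch (i = 0, s = 1) and the
# endemic branch with s = (gamma + mu)/beta, i.e. s* = 1/R0
# where R0 = beta/(gamma + mu)
```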

  20. Prime factorization using quantum annealing and computational algebraic geometry

    PubMed Central

    Dridi, Raouf; Alghassi, Hedayat

    2017-01-01

    We investigate prime factorization from two perspectives: quantum annealing and computational algebraic geometry, specifically Gröbner bases. We present a novel autonomous algorithm which combines the two approaches and leads to the factorization of all bi-primes up to just over 200000, the largest number factored to date using a quantum processor. We also explain how Gröbner bases can be used to reduce the degree of Hamiltonians. PMID:28220854
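
    To make the encoding concrete, here is a toy version of the reduction (the paper lowers Hamiltonian degree with Groebner bases; the sketch below uses the simpler binarity substitution x^2 = x instead):

```python
from sympy import symbols, expand

p1, p2, q1, q2 = symbols("p1 p2 q1 q2")
N = 15
p = 1 + 2*p1 + 4*p2          # odd factor candidate with unknown bits p1, p2
q = 1 + 2*q1 + 4*q2

cost = expand((N - p*q)**2)            # quartic cost Hamiltonian
for x in (p1, p2, q1, q2):
    cost = expand(cost.subs(x**2, x))  # binarity x^2 = x lowers the degree
print(cost)

# any zero over {0,1}^4 is a factorization; p1=1, p2=0, q1=0, q2=1
# (p = 3, q = 5) makes the cost vanish
print(cost.subs({p1: 1, p2: 0, q1: 0, q2: 1}))   # -> 0
```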

  1. Prime factorization using quantum annealing and computational algebraic geometry

    NASA Astrophysics Data System (ADS)

    Dridi, Raouf; Alghassi, Hedayat

    2017-02-01

    We investigate prime factorization from two perspectives: quantum annealing and computational algebraic geometry, specifically Gröbner bases. We present a novel autonomous algorithm which combines the two approaches and leads to the factorization of all bi-primes up to just over 200000, the largest number factored to date using a quantum processor. We also explain how Gröbner bases can be used to reduce the degree of Hamiltonians.

  2. An Alternative Approach to Logo-Based Geometry

    ERIC Educational Resources Information Center

    Durmus, Soner; Karakirik, Erol

    2005-01-01

    Geometry is an important branch of mathematics. The geometry curriculum can be enriched by using different technologies such as graphing calculators and computers. Different Logo-based software packages aim to improve conceptual understanding in geometry. The goals of this paper are i) to present theoretical foundations of any computer software…

  3. An Alternative Approach to Logo-Based Geometry

    ERIC Educational Resources Information Center

    Karakirik, Erol; Durmus, Soner

    2005-01-01

    Geometry is an important branch of mathematics. The geometry curriculum can be enriched by using different technologies such as graphing calculators and computers. Different Logo-based software packages aim to improve conceptual understanding in geometry. The goals of this paper are i) to present theoretical foundations of any computer software…

  4. Teaching Geometry: An Experiential and Artistic Approach.

    ERIC Educational Resources Information Center

    Ogletree, Earl J.

    The view that geometry should be taught at every grade level is promoted. Primary and elementary school children are thought to rarely have any direct experience with geometry, except on an incidental basis. Children are supposed to be able to learn geometry rather easily, so long as the method and content are adapted to their development and…

  5. Turbulent flow computations in complex geometries

    NASA Astrophysics Data System (ADS)

    Burns, A. D.; Clarke, D. S.; Jones, I. P.; Simcox, S.; Wilkes, N. S.

    The nonstaggered-grid Navier-Stokes algorithm of Rhie and Chow (1983) and its implementation in the FLOW3D code (Burns et al., 1987) are described, with a focus on their application to problems involving complex geometries. Results for the flow in a tile-lined burner and for the flow over an automobile model are presented in extensive graphs and discussed in detail, and the advantages of supercomputer vectorization of the code are considered.

  6. General Geometry PIC for MIMD Computers

    DTIC Science & Technology

    1993-06-01

    allow efficient usage of Distributed Memory MIMD architecture computers. The report has been broken down into a number of largely self-contained...

  7. Aircraft geometry verification with enhanced computer-generated displays

    NASA Technical Reports Server (NTRS)

    Cozzolongo, J. V.

    1982-01-01

    A method for visual verification of aerodynamic geometries using computer-generated, color-shaded images is described. The mathematical models representing aircraft geometries are created for use in theoretical aerodynamic analyses and in computer-aided manufacturing. The aerodynamic shapes are defined using parametric bi-cubic splined patches. This mathematical representation is then used as input to an algorithm that generates a color-shaded image of the geometry. A discussion of the techniques used in the mathematical representation of the geometry and in the rendering of the color-shaded display is presented. The results include examples of color-shaded displays, which are contrasted with wire-frame-type displays. The examples also show the use of mapped surface pressures in terms of color-shaded images of V/STOL fighter/attack aircraft and advanced turboprop aircraft.

  8. Aircraft geometry verification with enhanced computer generated displays

    NASA Technical Reports Server (NTRS)

    Cozzolongo, J. V.

    1982-01-01

    A method for visual verification of aerodynamic geometries using computer generated, color shaded images is described. The mathematical models representing aircraft geometries are created for use in theoretical aerodynamic analyses and in computer aided manufacturing. The aerodynamic shapes are defined using parametric bi-cubic splined patches. This mathematical representation is then used as input to an algorithm that generates a color shaded image of the geometry. A discussion of the techniques used in the mathematical representation of the geometry and in the rendering of the color shaded display is presented. The results include examples of color shaded displays, which are contrasted with wire frame type displays. The examples also show the use of mapped surface pressures in terms of color shaded images of V/STOL fighter/attack aircraft and advanced turboprop aircraft.

  9. Grid generation and inviscid flow computation about aircraft geometries

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.

    1989-01-01

    Grid generation and Euler flow about fighter aircraft are described. A fighter aircraft geometry is specified by an area ruled fuselage with an internal duct, cranked delta wing or strake/wing combinations, canard and/or horizontal tail surfaces, and vertical tail surfaces. The initial step before grid generation and flow computation is the determination of a suitable grid topology. The external grid topology that has been applied is called a dual-block topology, a patched C^1-continuous multiple-block system where inner blocks cover the highly-swept part of a cranked wing or strake, the rearward inner part of the wing, and tail components. Outer blocks cover the remainder of the fuselage, the outer part of the wing, and canards, and extend to the far field boundaries. The grid generation is based on transfinite interpolation with Lagrangian blending functions. This procedure has been applied to the Langley experimental fighter configuration and a modified F-18 configuration. Supersonic flow between Mach 1.3 and 2.5 and angles of attack between 0 degrees and 10 degrees have been computed with associated Euler solvers based on the finite-volume approach. When coupling geometric details such as boundary layer diverter regions, duct regions with inlets and outlets, or slots with the general external grid, imposing C^1 continuity can be extremely tedious. The approach taken here is to patch blocks together at common interfaces where there is no grid continuity, but enforce conservation in the finite-volume solution. The key to this technique is how to obtain the information required for a conservative interface. The Ramshaw technique, which automates the computation of proportional areas of two overlapping grids on a planar surface and is suitable for coding, was used. Researchers generated internal duct grids for the Langley experimental fighter configuration independent of the external grid topology, with a conservative interface at the inlet and outlet.
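
    The transfinite-interpolation step has a compact closed form. The sketch below implements the simplest bilinear-blending (Coons patch) case, whereas the paper uses Lagrangian blending functions; it fills a quarter-annulus block from its four boundary curves:

```python
import numpy as np

def tfi(bottom, top, left, right):
    """Bilinear transfinite interpolation: fill a structured block from its
    four boundary point arrays. bottom/top: (ni, 2); left/right: (nj, 2)."""
    ni, nj = bottom.shape[0], left.shape[0]
    u = np.linspace(0, 1, ni)[:, None, None]
    v = np.linspace(0, 1, nj)[None, :, None]
    b, t = bottom[:, None, :], top[:, None, :]
    l, r = left[None, :, :], right[None, :, :]
    # boundary blending minus the doubly-counted corner contributions
    return ((1 - v)*b + v*t + (1 - u)*l + u*r
            - (1 - u)*(1 - v)*bottom[0] - u*(1 - v)*bottom[-1]
            - (1 - u)*v*top[0] - u*v*top[-1])

# quarter-annulus: circular arcs as bottom/top, radial lines as left/right
th = np.linspace(0, np.pi/2, 17)
arc = np.stack([np.cos(th), np.sin(th)], axis=1)
rad = np.linspace(1, 2, 9)
grid = tfi(bottom=arc, top=2*arc,
           left=np.stack([rad, np.zeros(9)], axis=1),
           right=np.stack([np.zeros(9), rad], axis=1))
print(grid.shape)   # (17, 9, 2): a structured grid matching all four boundaries
```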

  10. Techniques to derive geometries for image-based Eulerian computations

    PubMed Central

    Dillard, Seth; Buchholz, James; Vigmostad, Sarah; Kim, Hyunggun; Udaykumar, H.S.

    2014-01-01

    Purpose: The performance of three frequently used level set-based segmentation methods is examined for the purpose of defining features and boundary conditions for image-based Eulerian fluid and solid mechanics models. The focus of the evaluation is to identify an approach that produces the best geometric representation from a computational fluid/solid modeling point of view. In particular, extraction of geometries from a wide variety of imaging modalities and noise intensities, to supply to an immersed boundary approach, is targeted. Design/methodology/approach: Two- and three-dimensional images, acquired from optical, X-ray CT, and ultrasound imaging modalities, are segmented with active contours, k-means, and adaptive clustering methods. Segmentation contours are converted to level sets and smoothed as necessary for use in fluid/solid simulations. Results produced by the three approaches are compared visually and with contrast ratio, signal-to-noise ratio, and contrast-to-noise ratio measures. Findings: While the active contours method possesses built-in smoothing and regularization and produces continuous contours, the clustering methods (k-means and adaptive clustering) produce discrete (pixelated) contours that require smoothing using speckle-reducing anisotropic diffusion (SRAD). Thus, for images with high contrast and low to moderate noise, active contours are generally preferable. However, adaptive clustering is found to be far superior to the other two methods for images possessing high levels of noise and global intensity variations, due to its more sophisticated use of local pixel/voxel intensity statistics. Originality/value: It is often difficult to know a priori which segmentation will perform best for a given image type, particularly when geometric modeling is the ultimate goal. This work offers insight to the algorithm selection process, as well as outlining a practical framework for generating useful geometric surfaces in an Eulerian setting.
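
    As a sketch of the clustering branch (plain k-means on raw intensities; the paper's adaptive clustering additionally exploits local pixel/voxel statistics), note how the output labels form the discrete, pixelated contour that then needs SRAD-style smoothing:

```python
import numpy as np

def kmeans_segment(img, k=2, iters=20, seed=0):
    """Segment a grayscale image by k-means clustering of intensities;
    returns a per-pixel label array (a pixelated contour)."""
    rng = np.random.default_rng(seed)
    x = img.ravel().astype(float)
    centers = rng.choice(x, size=k, replace=False)
    for _ in range(iters):
        labels = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()   # recenter each cluster
    return labels.reshape(img.shape)

# noisy synthetic image: a bright disk on a dark background
yy, xx = np.mgrid[:64, :64]
img = 100.0 * ((xx - 32)**2 + (yy - 32)**2 < 15**2)
img += np.random.default_rng(1).normal(0, 10, img.shape)
print(np.bincount(kmeans_segment(img, k=2).ravel()))   # pixels per class
```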

  11. Using Computer-Assisted Multiple Representations in Learning Geometry Proofs

    ERIC Educational Resources Information Center

    Wong, Wing-Kwong; Yin, Sheng-Kai; Yang, Hsi-Hsun; Cheng, Ying-Hao

    2011-01-01

    Geometry theorem proving involves skills that are difficult to learn. Instead of working with abstract and complicated representations, students might start with concrete, graphical representations. A proof tree is a graphical representation of a formal proof, with each node representing a proposition or given conditions. A computer-assisted…

  12. The Effects of Instructional Practices on Computation and Geometry Achievement.

    ERIC Educational Resources Information Center

    DeVaney, Thomas A.

    The purpose of this study was to examine the relationships between classroom instructional practices and computation and geometry achievement. Relationships between mathematics achievement and classroom characteristics were also explored. The sample of 1,032 students and their teachers (n=147) was selected from the 1992 Trial State Mathematics…

  13. Investigating the geometry of pig airways using computed tomography

    NASA Astrophysics Data System (ADS)

    Mansy, Hansen A.; Azad, Md Khurshidul; McMurray, Brandon; Henry, Brian; Royston, Thomas J.; Sandler, Richard H.

    2015-03-01

    Numerical modeling of sound propagation in the airways requires accurate knowledge of the airway geometry. These models are often validated using human and animal experiments. While many studies documented the geometric details of the human airways, information about the geometry of pig airways is scarcer. In addition, the morphology of animal airways can be significantly different from that of humans. The objective of this study is to measure the airway diameter, length and bifurcation angles in domestic pigs using computed tomography. After imaging the lungs of 3 pigs, segmentation software tools were used to extract the geometry of the airway lumen. The airway dimensions were then measured from the resulting 3D models for the first 10 airway generations. Results showed that the size and morphology of the airways of different animals were similar. The measured airway dimensions were compared with those of the human airways. While the trachea diameter was found to be comparable to the adult human, the diameter, length and branching angles of other airways were noticeably different from those of humans. For example, pigs consistently had an early airway branching from the trachea that feeds the superior (top) right lung lobe proximal to the carina. This branch is absent in the human airways. These results suggested that the human geometry may not be a good approximation of the pig airways and may contribute to increasing the errors when the human airway geometric values are used in computational models of the pig chest.

  14. Computational geometry assessment for morphometric analysis of the mandible.

    PubMed

    Raith, Stefan; Varga, Viktoria; Steiner, Timm; Hölzle, Frank; Fischer, Horst

    2017-01-01

    This paper presents a fully automated algorithm for geometry assessment of the mandible. Anatomical landmarks could be reliably detected and distances were statistically evaluated with principal component analysis. The method allows for the first time to generate a mean mandible shape with statistically valid geometrical variations based on a large set of 497 CT-scans of human mandibles. The data may be used in bioengineering for designing novel oral implants, for planning of computer-guided surgery, and for the improvement of biomechanical models, as it is shown that commercially available mandible replicas differ significantly from the mean of the investigated population.

  15. Computer aided design and analysis of gear tooth geometry

    NASA Technical Reports Server (NTRS)

    Chang, S. H.; Huston, R. L.

    1987-01-01

    A simulation method for gear hobbing and shaping of straight and spiral bevel gears is presented. The method is based upon an enveloping theory for gear tooth profile generation. The procedure is applicable in the computer aided design of standard and nonstandard tooth forms. An inverse procedure for finding a conjugate gear tooth profile is presented for arbitrary cutter geometry. The kinematic relations for the tooth surfaces of straight and spiral bevel gears are proposed. The tooth surface equations for these gears are formulated in a manner suitable for their automated numerical development and solution.

  16. Learning Geometry through Discovery Learning Using a Scientific Approach

    ERIC Educational Resources Information Center

    In'am, Akhsanul; Hajar, Siti

    2017-01-01

    The objective of this present research is to analyze the implementation of learning geometry through a scientific learning consisting of three aspects: 1) teacher's activities, 2) students' activities and, 3) the achievement results. The adopted approach is a descriptive-quantitative one and the subject is the Class VII students of Islamic Junior…

  17. Ionization coefficient approach to modeling breakdown in nonuniform geometries.

    SciTech Connect

    Warne, Larry Kevin; Jorgenson, Roy Eberhardt; Nicolaysen, Scott D.

    2003-11-01

    This report summarizes the work on breakdown modeling in nonuniform geometries by the ionization coefficient approach. Included are: (1) fits to primary and secondary ionization coefficients used in the modeling; (2) analytical test cases for sphere-to-sphere, wire-to-wire, corner, coaxial, and rod-to-plane geometries; (3) a compilation of experimental data with source references; and (4) comparisons between code results, test case results, and experimental data. A simple criterion is proposed to differentiate between corona and spark. The effect of a dielectric surface on avalanche growth is examined by means of Monte Carlo simulations. The presence of a clean dry surface does not appear to enhance growth.
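
    For orientation (the classical uniform-field relations that these nonuniform test cases generalize): an electron avalanche grows with the effective ionization coefficient, and Townsend's self-sustainment condition marks the spark threshold,

    \[ n(d) = n_0 \exp\!\left( \int_0^d \left[ \alpha(E(x)) - \eta(E(x)) \right] dx \right), \qquad \gamma \left[ \exp\!\left( \int_0^d (\alpha - \eta)\, dx \right) - 1 \right] = 1, \]

    where α and η are the primary ionization and attachment coefficients and γ is the secondary coefficient. In nonuniform gaps the field E(x) varies along the integration path, which is why sphere-to-sphere, wire-to-wire, and rod-to-plane geometries each require their own test case.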

  18. Representing Range Compensators with Computational Geometry in TOPAS

    SciTech Connect

    Iandola, Forrest N.; /Illinois U., Urbana /SLAC

    2012-09-07

    In a proton therapy beamline, the range compensator modulates the beam energy, which subsequently controls the depth at which protons deposit energy. In this paper, we introduce two computational representations of the range compensator. One of our compensator representations, which we refer to as a subtraction solid-based range compensator, precisely represents the compensator. Our other representation, the 3D hexagon-based range compensator, closely approximates the compensator geometry. We have implemented both of these compensator models in a proton therapy Monte Carlo simulation called TOPAS (Tool for Particle Simulation). In the future, we will present a detailed study of the accuracy and runtime performance trade-offs between our two range compensator representations.

  19. Computation of recirculating compressible flow in axisymmetric geometries

    SciTech Connect

    Isaac, K.M.; Nejad, A.S.

    1985-01-01

    A computational study of compressible, turbulent, recirculating flow in axisymmetric geometries is reported in this paper. The SIMPLE algorithm was used in the differencing scheme and the k-epsilon model for turbulence was used for turbulence closure. Special attention was given to the specification of the boundary conditions. The study revealed the significant influence of the boundary conditions on the solution. The eddy length scale at the inlet to the solution domain was the most uncertain parameter in the specification of the boundary conditions. The predictions were compared with the recent data based on laser velocimetry. The two are seen to be in good agreement. The present study underscores the need to have a more reliable means of specifying the inlet boundary conditions for the k-epsilon turbulence model.

  20. Computational approaches to computational aero-acoustics

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C.

    1996-01-01

    The various techniques by which the goal of computational aeroacoustics (the calculation and noise prediction of a fluctuating fluid flow) may be achieved are reviewed. The governing equations for compressible fluid flow are presented. The direct numerical simulation approach is shown to be computationally intensive for high Reynolds number viscous flows. Therefore, other approaches, such as the acoustic analogy, vortex models and various perturbation techniques that aim to break the analysis into a viscous part and an acoustic part are presented. The choice of the approach is shown to be problem dependent.

  1. Computational approaches to vision

    NASA Technical Reports Server (NTRS)

    Barrow, H. G.; Tenenbaum, J. M.

    1986-01-01

    Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.

  2. Computational Analysis on Stent Geometries in Carotid Artery: A Review

    NASA Astrophysics Data System (ADS)

    Paisal, Muhammad Sufyan Amir; Taib, Ishkrizat; Ismail, Al Emran

    2017-01-01

    This paper reviews the work done by previous researchers in order to gather information for the current study, which concerns the computational analysis of stent geometries in the carotid artery. The implantation of stents in the carotid artery has become a popular treatment for arterial diseases such as stenosis, thrombosis, atherosclerosis and embolization, reducing the rate of mortality and morbidity. For the stenting of an artery, previous researchers developed many types of mathematical models in which the physiological variables of the artery are analogized to electrical variables. Computational fluid dynamics (CFD) of the artery can then be performed, a method also used by previous researchers. This leads to the current study of the hemodynamic characteristics due to artery stenting, such as wall shear stress (WSS) and wall shear stress gradient (WSSG). Another objective of this study is to evaluate present-day stent configurations for full optimization in reducing arterial side effects such as the restenosis rate in the weeks after stenting. The evaluation of a stent is based on decreased strut-strut intersection, decreased strut width, and increased strut-strut spacing. Existing stent configurations are adequate for widening the narrowed arterial wall, but diseases such as thrombosis still occur in the early and late stages after stent implantation. Thus, the outcome of this study is a prediction of the reduction in restenosis rate, and the WSS distribution is expected to allow classifying which stent configuration is best.
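
    For reference (standard definitions, not specific to this review), the two hemodynamic metrics are

    \[ \tau_w = \mu \left. \frac{\partial u_t}{\partial n} \right|_{\text{wall}}, \qquad \mathrm{WSSG} = \lVert \nabla \tau_w \rVert, \]

    where μ is the dynamic viscosity, u_t the velocity component tangential to the wall, and n the wall-normal coordinate; regions of low WSS and locally high WSSG around stent struts are commonly associated with restenosis risk.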

  3. Geometric algebra and information geometry for quantum computational software

    NASA Astrophysics Data System (ADS)

    Cafaro, Carlo

    2017-03-01

    The art of quantum algorithm design is highly nontrivial. Grover's search algorithm constitutes a masterpiece of quantum computational software. In this article, we use methods of geometric algebra (GA) and information geometry (IG) to enhance the algebraic efficiency and the geometrical significance of the digital and analog representations of Grover's algorithm, respectively. Specifically, GA is used to describe the Grover iterate and the discretized iterative procedure that exploits quantum interference to amplify the probability amplitude of the target state before measuring the query register. The transition from digital to analog descriptions occurs via Stone's theorem, which relates the (unitary) Grover iterate to a suitable (Hermitian) Hamiltonian that controls Schrödinger's quantum mechanical evolution of a quantum state towards the target state. Once the discrete-to-continuous transition is completed, IG is used to interpret Grover's iterative procedure as a geodesic path on the manifold of the parametric density operators of pure quantum states constructed from the continuous approximation of the parametric quantum output state in Grover's algorithm. Finally, we discuss the dissipationless nature of quantum computing, recover the quadratic speedup relation, and identify the superfluity of the Walsh-Hadamard operation from an IG perspective, with emphasis on statistical mechanical considerations.
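
    The digital (circuit) side is small enough to simulate directly. A dense-matrix toy (exponential in the number of qubits, for illustration only) shows the Grover iterate, an oracle phase flip followed by inversion about the mean, concentrating probability on the target:

```python
import numpy as np

n = 3                       # qubits; search space of size 8
N = 2**n
target = 5

psi = np.full(N, 1/np.sqrt(N))           # Walsh-Hadamard: uniform superposition
oracle = np.eye(N)
oracle[target, target] = -1.0            # phase-flip the marked state
diffusion = 2.0/N * np.ones((N, N)) - np.eye(N)   # inversion about the mean

# apply the Grover iterate G = D * O about (pi/4) sqrt(N) times
for _ in range(int(np.round(np.pi/4 * np.sqrt(N)))):
    psi = diffusion @ (oracle @ psi)

print(np.round(psi**2, 3))   # probability concentrated on index 5 (~0.95)
```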

  4. SU-E-I-12: Flexible Geometry Computed Tomography

    SciTech Connect

    Shaw, R

    2015-06-15

    Purpose: The concept separates the mechanical connection between the radiation source and detector. This design allows the trajectory and orientation of the radiation source/detector to be customized to the object that is being imaged. This is in contrast to the formulaic rotation-translation image acquisition of conventional computed tomography (CT). Background/significance: CT devices that image a full range of anatomy, patient populations, and imaging procedures are large. The root cause of the expanding size of comprehensive CT is the commitment to helical geometry that is hardwired into the image reconstruction. FGCT extends the application of alternative reconstruction techniques, i.e. tomosynthesis, by separating the two main components (radiation source and detector) and allowing 6 degrees of freedom of motion for the radiation source, detector, or both. The image acquisition geometry is then tailored to how the patient/object is positioned. This provides greater flexibility in the position and location at which the patient/object is imaged. Additionally, removing the need for a rotating gantry reduces the footprint so that CT is more mobile and can move to where the patient/object is, instead of the other way around. Methods: As proof-of-principle, a reconstruction algorithm was designed to produce FGCT images. Using simulated detector data, voxels intersecting a line drawn between the radiation source and an individual detector are traced and modified using the detector signal. The detector signal is modified to compensate for changes in the source-to-detector distance. Adjacent voxels are modified in proportion to the detector signal, providing a simple image filter. Results: Image quality from the proposed FGCT reconstruction technique is proving to be a challenge, producing hardly recognizable images from limited projection angles. Conclusion: Preliminary assessment of the reconstruction technique demonstrates the inevitable
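
    The trace-and-modify step described under Methods can be sketched generically (an illustration, not the author's algorithm; the inverse-square distance compensation is an assumption, since the abstract only says the signal is compensated for source-to-detector distance):

```python
import numpy as np

def backproject_line(vol, src, det, signal, n_samples=200):
    """Deposit one detector reading along the source-to-detector line,
    crudely compensating for the source-detector distance."""
    dist = np.linalg.norm(det - src)
    w = signal / (n_samples * dist**2)       # assumed 1/d^2 compensation
    for t in np.linspace(0.0, 1.0, n_samples):
        i, j = np.round(src + t * (det - src)).astype(int)
        if 0 <= i < vol.shape[0] and 0 <= j < vol.shape[1]:
            vol[i, j] += w                   # modify voxels along the trace
    return vol

vol = np.zeros((64, 64))
backproject_line(vol, np.array([0.0, 32.0]), np.array([63.0, 10.0]), signal=1.0)
print(vol.sum())   # total deposited weight from this one source-detector pair
```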

  5. Using 3D Computer Graphics Multimedia to Motivate Preservice Teachers' Learning of Geometry and Pedagogy

    ERIC Educational Resources Information Center

    Goodson-Espy, Tracy; Lynch-Davis, Kathleen; Schram, Pamela; Quickenton, Art

    2010-01-01

    This paper describes the genesis and purpose of our geometry methods course, focusing on a geometry-teaching technology we created using the NVIDIA[R] Chameleon demonstration. This article presents examples from a sequence of lessons centered on a 3D computer graphics demonstration of the chameleon and its geometry. In addition, we present data…

  6. Automated Preparation of Geometry for Computational Applications Final Report

    DTIC Science & Technology

    2011-01-31

    the GPW exports the CAD geometry to commonly used grid generation tools such as Chimera Grid Tools, Cart3D, and SolidMesh. Export in STL format is also...

  7. A Parametric Geometry Computational Fluid Dynamics (CFD) Study Utilizing Design of Experiments (DOE)

    NASA Technical Reports Server (NTRS)

    Rhew, Ray D.; Parker, Peter A.

    2007-01-01

    Design of Experiments (DOE) techniques were applied to the Launch Abort System (LAS) of the NASA Crew Exploration Vehicle (CEV) parametric geometry Computational Fluid Dynamics (CFD) study to efficiently identify and rank the primary contributors to the integrated drag over the vehicle's ascent trajectory. Typical approaches to these types of activities involve developing all possible combinations of geometries changing one variable at a time, analyzing them with CFD, and predicting the main effects on an aerodynamic parameter, which in this application is integrated drag. The original plan for the LAS study team was to generate and analyze more than 1000 geometry configurations to study 7 geometric parameters. By utilizing DOE techniques the number of geometries was strategically reduced to 84. In addition, critical information on interaction effects among the geometric factors was identified that would not have been possible with the traditional technique. Therefore, the study was performed in less time and provided more information on the geometric main effects and interactions impacting drag generated by the LAS. This paper discusses the methods utilized to develop the experimental design, execution, and data analysis.

  8. A Geometry Based Infra-Structure for Computational Analysis and Design

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1998-01-01

    ... This is particularly onerous for modern CAD systems based on solid modeling: the part was a proper solid, and in the translation to IGES it has lost this important characteristic. STEP is another standard for CAD data that exists and supports the concept of a solid. The problem with STEP is that a solid modeling geometry kernel is required to query and manipulate the data within this type of file. (2) 'Good' Geometry. A bottleneck in getting results from a solver is the construction of proper geometry to be fed to the grid generator. With 'good' geometry a grid can be constructed in tens of minutes (even with a complex configuration) using unstructured techniques. Adroit multi-block methods are not far behind. This means that a million node steady-state solution can be computed on the order of hours (using current high performance computers) starting from this 'good' geometry. Unfortunately, the geometry usually transmitted from the CAD system is not 'good' in the grid generator sense. The grid generator needs smooth closed solid geometry. It can take a week (or more) of interaction with the CAD output (sometimes by hand) before the process can begin. (3) One-way Communication. All information travels on from one phase to the next. This makes procedures like node adaptation difficult when attempting to add or move nodes that sit on bounding surfaces (when the actual surface data has been lost after the grid generation phase). Until this process can be automated, more complex problems such as multi-disciplinary analysis or using the above procedure for design become prohibitive. There is also no way to easily deal with this system in a modular manner. One can only replace the grid generator, for example, if the software reads and writes the same files. Instead of the serial approach to analysis as described above, CAPRI takes a geometry centric approach. This makes the actual geometry (not a discretized version) accessible to all phases of the

  9. Evaluation of a cone beam computed tomography geometry for image guided small animal irradiation

    NASA Astrophysics Data System (ADS)

    Yang, Yidong; Armour, Michael; Wang, Ken Kang-Hsin; Gandhi, Nishant; Iordachita, Iulian; Siewerdsen, Jeffrey; Wong, John

    2015-07-01

    The conventional imaging geometry for small animal cone beam computed tomography (CBCT) is that a detector panel rotates around the head-to-tail axis of an imaged animal (‘tubular’ geometry). Another unusual but possible imaging geometry is that the detector panel rotates around the anterior-to-posterior axis of the animal (‘pancake’ geometry). The small animal radiation research platform developed at Johns Hopkins University employs the pancake geometry where a prone-positioned animal is rotated horizontally between an x-ray source and detector panel. This study is to assess the CBCT image quality in the pancake geometry and investigate potential methods for improvement. We compared CBCT images acquired in the pancake geometry with those acquired in the tubular geometry when the phantom/animal was placed upright simulating the conventional CBCT geometry. Results showed signal-to-noise and contrast-to-noise ratios in the pancake geometry were reduced in comparison to the tubular geometry at the same dose level. But the overall spatial resolution within the transverse plane of the imaged cylinder/animal was better in the pancake geometry. A modest exposure increase to two folds in the pancake geometry can improve image quality to a level close to the tubular geometry. Image quality can also be improved by inclining the animal, which reduces streak artifacts caused by bony structures. The major factor resulting in the inferior image quality in the pancake geometry is the elevated beam attenuation along the long axis of the phantom/animal and consequently increased scatter-to-primary ratio in that orientation. Notwithstanding, the image quality in the pancake-geometry CBCT is adequate to support image guided animal positioning, while providing unique advantages of non-coplanar and multiple mice irradiation. This study also provides useful knowledge about the image quality in the two very different imaging geometries, i.e. pancake and tubular geometry

  10. Evaluation of a Cone Beam Computed Tomography Geometry for Image Guided Small Animal Irradiation

    PubMed Central

    Yang, Yidong; Armour, Michael; Wang, Ken Kang-Hsin; Gandhi, Nishant; Iordachita, Iulian; Siewerdsen, Jeffrey; Wong, John

    2015-01-01

    The conventional imaging geometry for small animal cone beam computed tomography (CBCT) is that a detector panel rotates around the head-to-tail axis of an imaged animal (“tubular” geometry). Another unusual but possible imaging geometry is that the detector panel rotates around the anterior-to-posterior axis of the animal (“pancake” geometry). The small animal radiation research platform (SARRP) developed at Johns Hopkins University employs the pancake geometry where a prone-positioned animal is rotated horizontally between an x-ray source and detector panel. This study is to assess the CBCT image quality in the pancake geometry and investigate potential methods for improvement. We compared CBCT images acquired in the pancake geometry with those acquired in the tubular geometry when the phantom/animal was placed upright simulating the conventional CBCT geometry. Results showed signal-to-noise and contrast-to-noise ratios in the pancake geometry were reduced in comparison to the tubular geometry at the same dose level. But the overall spatial resolution within the transverse plane of the imaged cylinder/animal was better in the pancake geometry. A modest exposure increase to two folds in the pancake geometry can improve image quality to a level close to the tubular geometry. Image quality can also be improved by inclining the animal, which reduces streak artifacts caused by bony structures. The major factor resulting in the inferior image quality in the pancake geometry is the elevated beam attenuation along the long axis of the phantom/animal and consequently increased scatter-to-primary ratio in that orientation. Notwithstanding, the image quality in the pancake-geometry CBCT is adequate to support image guided animal positioning, while providing unique advantages of non-coplanar and multiple mice irradiation. This study also provides useful knowledge about the image quality in the two very different imaging geometries, i.e., pancake and tubular geometry

  11. Evaluation of a cone beam computed tomography geometry for image guided small animal irradiation.

    PubMed

    Yang, Yidong; Armour, Michael; Wang, Ken Kang-Hsin; Gandhi, Nishant; Iordachita, Iulian; Siewerdsen, Jeffrey; Wong, John

    2015-07-07

    The conventional imaging geometry for small animal cone beam computed tomography (CBCT) is that a detector panel rotates around the head-to-tail axis of an imaged animal ('tubular' geometry). Another unusual but possible imaging geometry is that the detector panel rotates around the anterior-to-posterior axis of the animal ('pancake' geometry). The small animal radiation research platform developed at Johns Hopkins University employs the pancake geometry where a prone-positioned animal is rotated horizontally between an x-ray source and detector panel. This study is to assess the CBCT image quality in the pancake geometry and investigate potential methods for improvement. We compared CBCT images acquired in the pancake geometry with those acquired in the tubular geometry when the phantom/animal was placed upright simulating the conventional CBCT geometry. Results showed signal-to-noise and contrast-to-noise ratios in the pancake geometry were reduced in comparison to the tubular geometry at the same dose level. But the overall spatial resolution within the transverse plane of the imaged cylinder/animal was better in the pancake geometry. A modest exposure increase to two folds in the pancake geometry can improve image quality to a level close to the tubular geometry. Image quality can also be improved by inclining the animal, which reduces streak artifacts caused by bony structures. The major factor resulting in the inferior image quality in the pancake geometry is the elevated beam attenuation along the long axis of the phantom/animal and consequently increased scatter-to-primary ratio in that orientation. Notwithstanding, the image quality in the pancake-geometry CBCT is adequate to support image guided animal positioning, while providing unique advantages of non-coplanar and multiple mice irradiation. This study also provides useful knowledge about the image quality in the two very different imaging geometries, i.e. pancake and tubular geometry, respectively.

  12. Automatic computation of pebble roundness using digital imagery and discrete geometry

    NASA Astrophysics Data System (ADS)

    Roussillon, Tristan; Piégay, Hervé; Sivignon, Isabelle; Tougne, Laure; Lavigne, Franck

    2009-10-01

    The shape of sedimentary particles is an important property, from which geographical hypotheses related to abrasion, distance of transport, river behavior, etc. can be formulated. In this paper, we use digital image analysis, especially discrete geometry, to automatically compute shape parameters such as roundness, i.e., a measure of how much the corners and edges of a particle have been worn away. In contrast to previous work, in which traditional digital image analysis techniques such as the Fourier transform were used, we opted for a discrete geometry approach that allowed us to implement Wadell's original index, which is known to be more accurate but is time consuming to measure in the field. Our implementation of Wadell's original index is highly correlated (92%) with the roundness classes of Krumbein's chart, used as a ground truth. In addition, we show that other geometrical parameters, which are easier to compute, can be used to provide good approximations of roundness. We also used our shape parameters to study a set of digital images of pebbles taken from the Progo basin river network (Indonesia). The results we obtained are in agreement with previous work and open new possibilities for geomorphologists thanks to automatic computation.
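
    One of the "easier to compute" shape parameters the abstract alludes to can be illustrated compactly. The sketch below, a minimal example assuming a closed contour sampled as ordered (x, y) points, computes the isoperimetric circularity 4πA/P², which tracks (but does not equal) Wadell roundness; the function name and test contour are illustrative, not taken from the paper.

```python
import numpy as np

def circularity(contour):
    """Isoperimetric circularity 4*pi*A / P**2 of a closed digitized contour.

    contour: (N, 2) array of (x, y) boundary points in order.
    Equals 1 for a perfect circle and decreases for angular particles.
    """
    x, y = contour[:, 0], contour[:, 1]
    # Shoelace formula for the enclosed area.
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    # Perimeter as the sum of edge lengths, closing edge included.
    perim = np.linalg.norm(np.roll(contour, -1, axis=0) - contour, axis=1).sum()
    return 4.0 * np.pi * area / perim**2

# A regular octagon scores close to, but below, a circle's 1.0.
theta = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
octagon = np.stack([np.cos(theta), np.sin(theta)], axis=1)
print(circularity(octagon))  # ~0.948
```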

  13. Target Impact Detection Algorithm Using Computer-aided Design (CAD) Model Geometry

    DTIC Science & Technology

    2014-09-01

    Technical Report ARMET-TR-13024. This report documents a method and algorithm to export geometry from a three-dimensional, computer-aided design (CAD) model in a format that can be…

  14. Experimental demonstration of novel imaging geometries for x-ray fluorescence computed tomography

    PubMed Central

    Fu, Geng; Meng, Ling-Jian; Eng, Peter; Newville, Matt; Vargas, Phillip; La Riviere, Patrick

    2013-01-01

    Purpose: X-ray fluorescence computed tomography (XFCT) is an emerging imaging modality that maps the three-dimensional distribution of elements, generally metals, in ex vivo specimens and potentially in living animals and humans. At present, it is generally performed at synchrotrons, taking advantage of the high flux of monochromatic x rays, but recent work has demonstrated the feasibility of using laboratory-based x-ray tube sources. In this paper, the authors report the development and experimental implementation of two novel imaging geometries for mapping of trace metals in biological samples with ∼50–500 μm spatial resolution. Methods: One of the new imaging approaches involves illuminating and scanning a single slice of the object and imaging each slice's x-ray fluorescent emissions using a position-sensitive detector and a pinhole collimator. The other involves illuminating a single line through the object and imaging the emissions using a position-sensitive detector and a slit collimator. They have implemented both of these using synchrotron radiation at the Advanced Photon Source. Results: The authors show that it is possible to achieve 250 eV energy resolution using an electron multiplying CCD operating in a quasiphoton-counting mode. Doing so allowed them to generate elemental images using both of the novel geometries for imaging of phantoms and, for the second geometry, an osmium-stained zebrafish. Conclusions: The authors have demonstrated the feasibility of these two novel approaches to XFCT imaging. While they use synchrotron radiation in this demonstration, the geometries could readily be translated to laboratory systems based on tube sources. PMID:23718594

  15. Kinematics and computation of workspace for adaptive geometry structures

    NASA Astrophysics Data System (ADS)

    Pourki, Forouza; Sosa, Horacio

    1993-09-01

    A new feature in the design of smart structures is the capability of the structure to respond autonomously to undesirable phenomena and environments. This capability is often synonymous with the requirement that the structure assume a set of different geometric shapes or adapt to a set of kinematic constraints to accomplish a maneuver. Systems with these characteristics have been referred to as 'shape adaptive' or 'variable geometry' structures. The present paper introduces a basis for the kinematics and workspace studies of statically deterministic truss structures which are shape adaptive. The difference between these structures and traditional truss structures, which are merely built to support weight and may be modelled by finite element methods, is that variable geometry structures allow for large (and nonlinear) deformations. On the other hand, these structures, unlike structures composed of well-investigated 'four bar mechanisms,' are statically deterministic.

  16. Adjoint Sensitivity Computations for an Embedded-Boundary Cartesian Mesh Method and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2006-01-01

    Cartesian-mesh methods are perhaps the most promising approach for addressing the issues of flow solution automation for aerodynamic design problems. In these methods, the discretization of the wetted surface is decoupled from that of the volume mesh. This not only enables fast and robust mesh generation for geometry of arbitrary complexity, but also facilitates access to geometry modeling and manipulation using parametric Computer-Aided Design (CAD) tools. Our goal is to combine the automation capabilities of Cartesian methods with an efficient computation of design sensitivities. We address this issue using the adjoint method, where the computational cost of the design sensitivities, or objective function gradients, is essentially independent of the number of design variables. In previous work, we presented an accurate and efficient algorithm for the solution of the adjoint Euler equations discretized on Cartesian meshes with embedded, cut-cell boundaries. Novel aspects of the algorithm included the computation of surface shape sensitivities for triangulations based on parametric-CAD models and the linearization of the coupling between the surface triangulation and the cut-cells. The objective of the present work is to extend our adjoint formulation to problems involving general shape changes. Central to this development is the computation of volume-mesh sensitivities to obtain a reliable approximation of the objective function gradient. Motivated by the success of mesh-perturbation schemes commonly used in body-fitted unstructured formulations, we propose an approach based on a local linearization of a mesh-perturbation scheme similar to the spring analogy. This approach circumvents most of the difficulties that arise due to non-smooth changes in the cut-cell layer as the boundary shape evolves and provides a consistent approximation to the exact gradient of the discretized objective function. A detailed gradient accuracy study is presented to verify our approach.

  17. Molecular tailoring approach for geometry optimization of large molecules: energy evaluation and parallelization strategies.

    PubMed

    Ganesh, V; Dongare, Rameshwar K; Balanarayan, P; Gadre, Shridhar R

    2006-09-14

    A linear-scaling scheme for estimating the electronic energy, gradients, and Hessian of a large molecule at the ab initio level of theory, based on fragment set cardinality, is presented. With this proposition, a general, cardinality-guided molecular tailoring approach (CG-MTA) for ab initio geometry optimization of large molecules is implemented. The method employs energy gradients extracted from fragment wave functions, enabling computations otherwise impractical on PC hardware. Further, the method is readily amenable to large-scale coarse-grain parallelization with minimal communication among nodes, resulting in a near-linear speedup. CG-MTA is applied for density-functional-theory-based geometry optimization of a variety of molecules including alpha-tocopherol, taxol, gamma-cyclodextrin, and two conformations of polyglycine. In the tests performed, energy and gradient estimates obtained from CG-MTA during optimization runs show excellent agreement with those obtained from actual computation. Accuracy of the Hessian obtained employing CG-MTA provides good hope for the application of Hessian-based geometry optimization to large molecules.

  18. Molecular tailoring approach for geometry optimization of large molecules: Energy evaluation and parallelization strategies

    NASA Astrophysics Data System (ADS)

    Ganesh, V.; Dongare, Rameshwar K.; Balanarayan, P.; Gadre, Shridhar R.

    2006-09-01

    A linear-scaling scheme for estimating the electronic energy, gradients, and Hessian of a large molecule at the ab initio level of theory, based on fragment set cardinality, is presented. With this proposition, a general, cardinality-guided molecular tailoring approach (CG-MTA) for ab initio geometry optimization of large molecules is implemented. The method employs energy gradients extracted from fragment wave functions, enabling computations otherwise impractical on PC hardware. Further, the method is readily amenable to large-scale coarse-grain parallelization with minimal communication among nodes, resulting in a near-linear speedup. CG-MTA is applied for density-functional-theory-based geometry optimization of a variety of molecules including α-tocopherol, taxol, γ-cyclodextrin, and two conformations of polyglycine. In the tests performed, energy and gradient estimates obtained from CG-MTA during optimization runs show excellent agreement with those obtained from actual computation. Accuracy of the Hessian obtained employing CG-MTA provides good hope for the application of Hessian-based geometry optimization to large molecules.
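
    The cardinality-based assembly described in the two records above follows the general inclusion-exclusion pattern for overlapping fragments. The sketch below illustrates that pattern only, not the actual CG-MTA code: the energy callable is a placeholder for an ab initio fragment calculation, and the exhaustive subset enumeration is exponential in the number of fragments, which a cardinality-guided scheme would prune.

```python
from itertools import combinations

def fragment_energy_estimate(fragments, energy):
    """Inclusion-exclusion estimate of a total energy from overlapping fragments.

    fragments: list of frozensets of atom indices (overlapping subsystems).
    energy:    callable returning the energy of a given atom set (placeholder
               for an actual quantum-chemistry fragment calculation).
    Every k-fold overlap enters with sign (-1)**(k + 1), so atoms shared by
    several fragments are counted exactly once.
    """
    total = 0.0
    for k in range(1, len(fragments) + 1):
        for subset in combinations(fragments, k):
            overlap = frozenset.intersection(*subset)
            if overlap:
                total += (-1) ** (k + 1) * energy(overlap)
    return total

# Toy check with an additive "energy" of 1 per atom: the 6 distinct atoms
# must be counted exactly once despite the overlaps.
frags = [frozenset({0, 1, 2, 3}), frozenset({2, 3, 4}), frozenset({3, 4, 5})]
print(fragment_energy_estimate(frags, energy=len))  # 6
```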

  19. Transport Equation Based Wall Distance Computations Aimed at Flows With Time-Dependent Geometry

    NASA Technical Reports Server (NTRS)

    Tucker, Paul G.; Rumsey, Christopher L.; Bartels, Robert E.; Biedron, Robert T.

    2003-01-01

    Eikonal, Hamilton-Jacobi and Poisson equations can be used for economical nearest wall distance computation and modification. Economical computations may be especially useful for aeroelastic and adaptive grid problems for which the grid deforms, and the nearest wall distance needs to be repeatedly computed. Modifications are directed at remedying turbulence model defects. For complex grid structures, implementation of the Eikonal and Hamilton-Jacobi approaches is not straightforward. This prohibits their use in industrial CFD solvers. However, both the Eikonal and Hamilton-Jacobi equations can be written in advection and advection-diffusion forms, respectively. These, like the Poisson's Laplacian, are commonly occurring industrial CFD solver elements. Use of the NASA CFL3D code to solve the Eikonal and Hamilton-Jacobi equations in advective-based forms is explored. The advection-based distance equations are found to have robust convergence. Geometries studied include single and two element airfoils, wing body and double delta configurations along with a complex electronics system. It is shown that for Eikonal accuracy, upwind metric differences are required. The Poisson approach is found effective and, since it does not require offset metric evaluations, easiest to implement. The sensitivity of flow solutions to wall distance assumptions is explored. Generally, results are not greatly affected by wall distance traits.

  20. Transport Equation Based Wall Distance Computations Aimed at Flows With Time-Dependent Geometry

    NASA Technical Reports Server (NTRS)

    Tucker, Paul G.; Rumsey, Christopher L.; Bartels, Robert E.; Biedron, Robert T.

    2003-01-01

    Eikonal, Hamilton-Jacobi and Poisson equations can be used for economical nearest wall distance computation and modification. Economical computations may be especially useful for aeroelastic and adaptive grid problems for which the grid deforms, and the nearest wall distance needs to be repeatedly computed. Modifications are directed at remedying turbulence model defects. For complex grid structures, implementation of the Eikonal and Hamilton-Jacobi approaches is not straightforward. This prohibits their use in industrial CFD solvers. However, both the Eikonal and Hamilton-Jacobi equations can be written in advection and advection-diffusion forms, respectively. These, like the Poisson's Laplacian, are commonly occurring industrial CFD solver elements. Use of the NASA CFL3D code to solve the Eikonal and Hamilton-Jacobi equations in advective-based forms is explored. The advection-based distance equations are found to have robust convergence. Geometries studied include single and two element airfoils, wing body and double delta configurations along with a complex electronics system. It is shown that for Eikonal accuracy, upwind metric differences are required. The Poisson approach is found effective and, since it does not require offset metric evaluations, easiest to implement. The sensitivity of flow solutions to wall distance assumptions is explored. Generally, results are not greatly affected by wall distance traits.
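
    Of the three equation families discussed in these two records, the Poisson approach is the simplest to illustrate. The sketch below, a minimal 1-D example under a plane-channel assumption (grid size and iteration count are illustrative), solves φ'' = -1 with φ = 0 at both walls and recovers the wall distance from d = -|∇φ| + √(|∇φ|² + 2φ), which is exact for this geometry.

```python
import numpy as np

# Plane channel of height h with walls at y = 0 and y = h, on a 1-D grid.
h, n = 1.0, 101
y = np.linspace(0.0, h, n)
dy = y[1] - y[0]

# Solve phi'' = -1 with phi = 0 on both walls by Jacobi iteration.
phi = np.zeros(n)
for _ in range(50000):
    phi[1:-1] = 0.5 * (phi[2:] + phi[:-2] + dy * dy)

# Recover the wall distance: d = -|grad phi| + sqrt(|grad phi|^2 + 2 phi).
grad = np.gradient(phi, dy)
d = -np.abs(grad) + np.sqrt(grad * grad + 2.0 * phi)

# For a plane channel the exact distance is min(y, h - y).
print(np.abs(d - np.minimum(y, h - y)).max())  # limited by the iterative solve
```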

  1. Computer-Generated Geometry Instruction: A Preliminary Study

    ERIC Educational Resources Information Center

    Kang, Helen W.; Zentall, Sydney S.

    2011-01-01

    This study hypothesized that increased intensity of graphic information, presented in computer-generated instruction, could be differentially beneficial for students with hyperactivity and inattention by improving their ability to sustain attention and hold information in-mind. To this purpose, 18 2nd-4th grade students, recruited from general…

  2. MHRDRing Z-Pinches and Related Geometries: Four Decades of Computational Modeling Using Still Unconventional Methods

    SciTech Connect

    Lindemuth, Irvin R.

    2009-01-21

    For approximately four decades, Z-pinches and related geometries have been computationally modeled using unique Alternating Direction Implicit (ADI) numerical methods. Computational results have provided illuminating and often provocative interpretations of experimental results. A number of past and continuing applications are reviewed and discussed.

  3. Application of Computer Axial Tomography (CAT) to measuring crop canopy geometry. [corn and soybeans

    NASA Technical Reports Server (NTRS)

    Bauer, M. E.; Vanderbilt, V. C. (Principal Investigator); Kilgore, R. W.

    1981-01-01

    The feasibility of using the principles of computer axial tomography (CAT) to quantify the structure of crop canopies was investigated because six variables are needed to describe the position-orientation with time of a small piece of canopy foliage. Several cross sections were cut through the foliage of healthy, green corn and soybean canopies in the dent and full pod development stages, respectively. A photograph of each cross section representing the intersection of a plane with the foliage was enlarged, and the air-foliage boundaries delineated by the plane were digitized. A computer program was written and used to reconstruct the cross section of the canopy. The approach used in applying optical computer axial tomography to measuring crop canopy geometry shows promise of being able to provide the needed geometric information for input to canopy reflectance models. The difficulty of using the CAT scanner to measure large canopies of crops like corn is discussed, and a solution is proposed involving the measurement of plants one at a time.

  4. Potts models with magnetic field: Arithmetic, geometry, and computation

    NASA Astrophysics Data System (ADS)

    Dasu, Shival; Marcolli, Matilde

    2015-11-01

    We give a sheaf theoretic interpretation of Potts models with external magnetic field, in terms of constructible sheaves and their Euler characteristics. We show that the polynomial countability question for the hypersurfaces defined by the vanishing of the partition function is affected by changes in the magnetic field: elementary examples suffice to see non-polynomially countable cases that become polynomially countable after a perturbation of the magnetic field. The same recursive formula for the Grothendieck classes, under edge-doubling operations, holds as in the case without magnetic field, but the closed formulae for specific examples like banana graphs differ in the presence of magnetic field. We give examples of computation of the Euler characteristic with compact support, for the set of real zeros, and find a similar exponential growth with the size of the graph. This can be viewed as a measure of topological and algorithmic complexity. We also consider the computational complexity question for evaluations of the polynomial, and show both tractable and NP-hard examples, using dynamic programming.
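
    The dynamic-programming evaluation mentioned at the end of the abstract can be made concrete for the simplest tractable graph, a chain. The sketch below, a minimal transfer-matrix example with illustrative parameters (not code from the paper), evaluates the q-state Potts partition function with an external magnetic field in O(nq²) operations and cross-checks it against brute-force enumeration.

```python
import numpy as np
from itertools import product

def potts_chain_partition(q, n, J, H, beta=1.0):
    """Partition function of an open q-state Potts chain of n sites.

    Hamiltonian: -J * sum_edges delta(s_i, s_j) - H * sum_sites delta(s_i, 0),
    i.e. the field H favours state 0.  Evaluated by the transfer-matrix
    (dynamic programming) recursion instead of summing q**n configurations.
    """
    site = np.exp(beta * H * (np.arange(q) == 0))  # per-site field weight
    edge = np.exp(beta * J * np.eye(q))            # per-edge coupling weight
    v = site.copy()
    for _ in range(n - 1):
        v = (v @ edge) * site
    return v.sum()

# Cross-check against brute-force enumeration for a small chain.
q, n, J, H = 3, 4, 0.7, 0.3
brute = sum(
    np.exp(J * sum(a == b for a, b in zip(s, s[1:])) + H * s.count(0))
    for s in product(range(q), repeat=n)
)
print(potts_chain_partition(q, n, J, H), brute)  # identical up to round-off
```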

  5. Computational Approaches to Vestibular Research

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Wade, Charles E. (Technical Monitor)

    1994-01-01

    The Biocomputation Center at NASA Ames Research Center is dedicated to a union between computational, experimental and theoretical approaches to the study of neuroscience and of life sciences in general. The current emphasis is on computer reconstruction and visualization of vestibular macular architecture in three dimensions (3-D), and on mathematical modeling and computer simulation of neural activity in the functioning system. Our methods are being used to interpret the influence of spaceflight on mammalian vestibular maculas in a model system, that of the adult Sprague-Dawley rat. More than twenty 3-D reconstructions of type I and type II hair cells and their afferents have been completed by digitization of contours traced from serial sections photographed in a transmission electron microscope. This labor-intensive method has now been replaced by a semiautomated method developed in the Biocomputation Center in which conventional photography is eliminated. All viewing, storage and manipulation of original data is done using Silicon Graphics workstations. Recent improvements to the software include a new mesh generation method for connecting contours. This method will permit the investigator to describe any surface, regardless of complexity, including highly branched structures such as are routinely found in neurons. This same mesh can be used for 3-D, finite volume simulation of synapse activation and voltage spread on neuronal surfaces visualized via the reconstruction process. These simulations help the investigator interpret the relationship between neuroarchitecture and physiology, and are of assistance in determining which experiments will best test theoretical interpretations. Data are also used to develop abstract, 3-D models that dynamically display neuronal activity ongoing in the system. Finally, the same data can be used to visualize the neural tissue in a virtual environment. Our exhibit will depict capabilities of our computational approaches and

  6. The flux-coordinate independent approach applied to X-point geometries

    SciTech Connect

    Hariri, F.; Hill, P.; Ottaviani, M.; Sarazin, Y.

    2014-08-15

    A Flux-Coordinate Independent (FCI) approach for anisotropic systems, not based on magnetic flux coordinates, has been introduced in Hariri and Ottaviani [Comput. Phys. Commun. 184, 2419 (2013)]. In this paper, we show that the approach can tackle magnetic configurations including X-points. Using the code FENICIA, an equilibrium with a magnetic island has been used to show the robustness of the FCI approach to cases in which a magnetic separatrix is present in the system, either by design or as a consequence of instabilities. Numerical results are in good agreement with the analytic solutions of the sound-wave propagation problem. Conservation properties are verified. Finally, the critical gain of the FCI approach in situations including the magnetic separatrix with an X-point is demonstrated by a fast convergence of the code with the numerical resolution in the direction of symmetry. The results highlighted in this paper show that the FCI approach can efficiently deal with X-point geometries.

  7. Virtual photons in imaginary time: Computing Casimir forces in new geometries

    NASA Astrophysics Data System (ADS)

    Johnson, Steven G.

    2009-03-01

    One of the most dramatic manifestations of the quantum nature of light in the past half-century has been the Casimir force: a force between neutral objects at close separations caused by quantum vacuum fluctuations in the electromagnetic fields. In classical photonics, wavelength-scale structures can be designed to dramatically alter the behavior of light, so it is natural to consider whether analogous geometry-based effects occur for Casimir forces. However, this problem turns out to be surprisingly difficult for all but the simplest planar geometries. (The deceptively simple case of an infinite plate and infinite cylinder, for perfect metals, was first solved in 2006.) Many formulations of the Casimir force, indeed, correspond to impossibly hard numerical problems. We will describe how the availability of large-scale computing resources in NSF's Teragrid, combined with reformulations of the Casimir-force problem oriented towards numerical computation, are enabling the exploration of Casimir forces in new regimes of geometry and materials.

  8. New approach based on tetrahedral-mesh geometry for accurate 4D Monte Carlo patient-dose calculation

    NASA Astrophysics Data System (ADS)

    Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Kim, Seonghoon; Sohn, Jason W.

    2015-02-01

    In the present study, to achieve accurate 4D Monte Carlo dose calculation in radiation therapy, we devised a new approach that combines (1) modeling of the patient body using tetrahedral-mesh geometry based on the patient’s 4D CT data, (2) continuous movement/deformation of the tetrahedral patient model by interpolation of deformation vector fields acquired through deformable image registration, and (3) direct transportation of radiation particles during the movement and deformation of the tetrahedral patient model. The results of our feasibility study show that it is certainly possible to construct 4D patient models (= phantoms) with sufficient accuracy using the tetrahedral-mesh geometry and to directly transport radiation particles during continuous movement and deformation of the tetrahedral patient model. This new approach not only produces more accurate dose distribution in the patient but also replaces the current practice of using multiple 3D voxel phantoms and combining multiple dose distributions after Monte Carlo simulations. For routine clinical application of our new approach, the use of fast automatic segmentation algorithms is a must. In order to achieve, simultaneously, both dose accuracy and computation speed, the number of tetrahedrons for the lungs should be optimized. Although the current computation speed of our new 4D Monte Carlo simulation approach is slow (i.e. ~40 times slower than that of the conventional dose accumulation approach), this problem is resolvable by developing, in Geant4, a dedicated navigation class optimized for particle transportation in tetrahedral-mesh geometry.

  9. New approach based on tetrahedral-mesh geometry for accurate 4D Monte Carlo patient-dose calculation.

    PubMed

    Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Kim, Seonghoon; Sohn, Jason W

    2015-02-21

    In the present study, to achieve accurate 4D Monte Carlo dose calculation in radiation therapy, we devised a new approach that combines (1) modeling of the patient body using tetrahedral-mesh geometry based on the patient's 4D CT data, (2) continuous movement/deformation of the tetrahedral patient model by interpolation of deformation vector fields acquired through deformable image registration, and (3) direct transportation of radiation particles during the movement and deformation of the tetrahedral patient model. The results of our feasibility study show that it is certainly possible to construct 4D patient models (= phantoms) with sufficient accuracy using the tetrahedral-mesh geometry and to directly transport radiation particles during continuous movement and deformation of the tetrahedral patient model. This new approach not only produces more accurate dose distribution in the patient but also replaces the current practice of using multiple 3D voxel phantoms and combining multiple dose distributions after Monte Carlo simulations. For routine clinical application of our new approach, the use of fast automatic segmentation algorithms is a must. In order to achieve, simultaneously, both dose accuracy and computation speed, the number of tetrahedrons for the lungs should be optimized. Although the current computation speed of our new 4D Monte Carlo simulation approach is slow (i.e. ~40 times slower than that of the conventional dose accumulation approach), this problem is resolvable by developing, in Geant4, a dedicated navigation class optimized for particle transportation in tetrahedral-mesh geometry.
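
    Component (2) of the approach in the two records above, continuous deformation of the tetrahedral patient model, reduces to moving mesh vertices along registration-derived deformation vectors. The sketch below is a minimal illustration under a linear-interpolation assumption between two 4D CT phases; the function name and numbers are hypothetical, not taken from the papers.

```python
import numpy as np

def deform_vertices(vertices, dvf_to_next, t):
    """Move tetrahedral-mesh vertices between two 4D CT phases.

    vertices:    (N, 3) vertex positions at phase k.
    dvf_to_next: (N, 3) deformation vectors carrying each vertex from phase k
                 to phase k + 1 (from deformable image registration).
    t:           interpolation parameter in [0, 1] within the phase interval.
    The connectivity never changes; only the vertices move, so one mesh
    serves every intermediate anatomy during particle transport.
    """
    return vertices + t * dvf_to_next

# One tetrahedron displaced halfway to the next breathing phase
# (illustrative numbers only).
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
dvf = np.array([[0., 0., .2], [0., 0., .2], [0., 0., .2], [0., 0., .5]])
print(deform_vertices(verts, dvf, t=0.5))
```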

  10. Slant Path Distances Through Cells in Cylindrical Geometry and an Application to the Computation of Isophotes

    SciTech Connect

    Whitaker, Rodney; Symbalisty, Eugene

    2007-12-17

    In computer programs involving two-dimensional cylindrical geometry, it is often necessary to calculate the slant path distance in a given direction from a point to the boundary of a mesh cell. A subroutine, HOWFAR, has been written that accomplishes this, and is very economical in computer time. An example of its use is given in constructing the isophotes for a low altitude nuclear fireball.
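
    The geometric task HOWFAR performs can be sketched compactly: from a point in an annular r-z mesh cell, find the nearest forward intersection of a 3-D ray with the cell's two bounding planes and two bounding cylinders. The following is a minimal re-derivation in Python, not the HOWFAR subroutine itself; names and the example cell are illustrative.

```python
import numpy as np

def slant_distance(r, z, direction, r_in, r_out, z_lo, z_hi):
    """Distance from a point at radius r, height z along a 3-D unit direction
    to the boundary of the annular cell [r_in, r_out] x [z_lo, z_hi].

    By azimuthal symmetry the point is placed at (x, y) = (r, 0).
    """
    ux, uy, uz = direction
    hits = []
    # Axial planes z = z_lo and z = z_hi.
    if uz != 0.0:
        hits += [(zp - z) / uz for zp in (z_lo, z_hi)]
    # Cylinders x^2 + y^2 = R^2 give a*t^2 + b*t + c = 0 along the ray.
    a = ux * ux + uy * uy
    if a > 0.0:
        for R in (r_in, r_out):
            b, c = 2.0 * r * ux, r * r - R * R
            disc = b * b - 4.0 * a * c
            if disc >= 0.0:
                root = np.sqrt(disc)
                hits += [(-b - root) / (2 * a), (-b + root) / (2 * a)]
    # Nearest crossing strictly ahead of the point.
    return min(t for t in hits if t > 1e-12)

# From the centre of a unit cell, at 45 degrees in the x-z plane.
u = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)
print(slant_distance(0.5, 0.5, u, 0.0, 1.0, 0.0, 1.0))  # ~0.7071
```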

  11. Identifying Critical Learner Traits in a Dynamic Computer-Based Geometry Program.

    ERIC Educational Resources Information Center

    Hannafin, Robert D.; Scott, Barry N.

    1998-01-01

    Investigated the effects of student working-memory capacity, preference for amount of instruction, spatial problem-solving ability, and school mathematics grades on eighth graders' recall of factual information and conceptual understanding. Pairs of students worked through 16 activities using a dynamic, computer-based geometry program. Presents…

  12. Comparative Effects of Two Modes of Computer-Assisted Instructional Package on Solid Geometry Achievement

    ERIC Educational Resources Information Center

    Gambari, Isiaka Amosa; Ezenwa, Victoria Ifeoma; Anyanwu, Romanus Chogozie

    2014-01-01

    The study examined the effects of two modes of computer-assisted instructional package on solid geometry achievement amongst senior secondary school students in Minna, Niger State, Nigeria. Also, the influence of gender on the performance of students exposed to the CAI(AT) and CAI(AN) packages was examined. This study adopted a pretest-posttest…

  13. Computational geometry for patient-specific reconstruction and meshing of blood vessels from MR and CT angiography.

    PubMed

    Antiga, Luca; Ene-Iordache, Bogdan; Remuzzi, Andrea

    2003-05-01

    Investigation of three-dimensional (3-D) geometry and fluid dynamics in human arteries is an important issue in vascular disease characterization and assessment. Thanks to recent advances in magnetic resonance (MR) and computed tomography (CT), it is now possible to address the problem of patient-specific modeling of blood vessels, in order to take into account the interindividual anatomic variability of vasculature. Generation of models suitable for computational fluid dynamics is still commonly performed by semiautomatic procedures, in general based on operator-dependent tasks, which cannot be easily extended to a significant number of clinical cases. In this paper, we overcome these limitations by making use of computational geometry techniques. In particular, 3-D modeling was carried out by means of a 3-D level set approach. Model editing was also implemented, ensuring a harmonic distribution of mean curvature vectors on the surface, and model geometric analysis was performed with a novel approach based on solving the Eikonal equation on the Voronoi diagram. This approach provides calculation of central paths, maximum inscribed sphere estimation and geometric characterization of the surface. Generation of adaptive-thickness boundary layer finite elements is finally presented. The use of the techniques presented here makes it possible to introduce patient-specific modeling of blood vessels at the clinical level.

  14. Multivariate geometry as an approach to algal community analysis

    USGS Publications Warehouse

    Allen, T.F.H.; Skagen, S.

    1973-01-01

    Multivariate analyses are put in the context of more usual approaches to phycological investigations. The intuitive common-sense involved in methods of ordination, classification and discrimination are emphasised by simple geometric accounts which avoid jargon and matrix algebra. Warnings are given that artifacts result from technique abuses by the naive or over-enthusiastic. An analysis of a simple periphyton data set is presented as an example of the approach. Suggestions are made as to situations in phycological investigations, where the techniques could be appropriate. The discipline is reprimanded for its neglect of the multivariate approach.

  15. Comparative study of auxetic geometries by means of computer-aided design and engineering

    NASA Astrophysics Data System (ADS)

    Álvarez Elipe, Juan Carlos; Díaz Lantada, Andrés

    2012-10-01

    Auxetic materials (or metamaterials) are those with a negative Poisson ratio (NPR); they display the unexpected property of lateral expansion when stretched, as well as an equal and opposing densification when compressed. Such geometries are being progressively employed in the development of novel products, especially in the fields of intelligent expandable actuators, shape morphing structures and minimally invasive implantable devices. Although several auxetic and potentially auxetic geometries have been summarized in previous reviews and research, precise information regarding the properties relevant for design tasks is not always provided. In this study we present a comparative study of two-dimensional and three-dimensional auxetic geometries carried out by means of computer-aided design and engineering tools (hereafter CAD-CAE). The first part of the study is focused on the development of a CAD library of auxetics. Once the library is developed, we simulate the behavior of the different auxetic geometries and elaborate a systematic comparison, considering relevant properties of these geometries such as Poisson ratio(s), maximum attainable volume or area reductions, and equivalent Young's modulus, in the hope that it may provide useful information for future designs of devices based on these interesting structures.

  16. Computation of Transverse Injection Into Supersonic Crossflow With Various Injector Orifice Geometries

    NASA Technical Reports Server (NTRS)

    Foster, Lancert; Engblom, William A.

    2003-01-01

    Computational results are presented for the performance and flow behavior of various injector geometries employed in transverse injection into a non-reacting Mach 1.2 flow. 3-D Reynolds-Averaged Navier-Stokes (RANS) results are obtained for the various injector geometries using the Wind code with Menter's Shear Stress Transport turbulence model in both single- and multi-species modes. Computed results for the injector mixing, penetration, and induced wall forces are presented. In the case of rectangular injectors, those longer in the direction of the freestream flow are predicted to generate the most mixing and penetration of the injector flow into the primary stream. These injectors are also predicted to provide the largest discharge coefficients and induced wall forces. Minor performance differences are indicated among diamond, circle, and square orifices. Grid sensitivity study results are presented which indicate consistent qualitative trends in the injector performance comparisons with increasing grid fineness.

  17. Efficient Probability of Failure Calculations for QMU using Computational Geometry LDRD 13-0144 Final Report

    SciTech Connect

    Mitchell, Scott A.; Ebeida, Mohamed Salah; Romero, Vicente J.; Swiler, Laura Painton; Rushdi, Ahmad A.; Abdelkader, Ahmad

    2015-09-01

    This SAND report summarizes our work on the Sandia National Laboratory LDRD project titled "Efficient Probability of Failure Calculations for QMU using Computational Geometry" which was project #165617 and proposal #13-0144. This report merely summarizes our work. Those interested in the technical details are encouraged to read the full published results, and contact the report authors for the status of the software and follow-on projects.

  18. Computer Algebra, Instrumentation and the Anthropological Approach

    ERIC Educational Resources Information Center

    Monaghan, John

    2007-01-01

    This article considers research and scholarship on the use of computer algebra in mathematics education following the instrumentation and the anthropological approaches. It outlines what these approaches are, positions them with regard to other approaches, examines tensions between the two approaches and makes suggestions for how work in this…

  19. Computer-Based Training: An Institutional Approach.

    ERIC Educational Resources Information Center

    Barker, Philip; Manji, Karim

    1992-01-01

    Discussion of issues related to computer-assisted learning (CAL) and computer-based training (CBT) describes approaches to electronic learning; principles underlying courseware development to support these approaches; and a plan for creation of a CAL/CBT development center, including its functional role, campus services, staffing, and equipment…

  20. Design geometry and design/off-design performance computer codes for compressors and turbines

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.

    1995-01-01

    This report summarizes some NASA Lewis (i.e., government owned) computer codes capable of being used for airbreathing propulsion system studies to determine the design geometry and to predict the design/off-design performance of compressors and turbines. These are not CFD codes; velocity-diagram energy and continuity computations are performed fore and aft of the blade rows using meanline, spanline, or streamline analyses. Losses are provided by empirical methods. Both axial-flow and radial-flow configurations are included.

  1. A new approach for turbulent simulations in complex geometries

    NASA Astrophysics Data System (ADS)

    Israel, Daniel M.

    Historically, turbulence modeling has been sharply divided into Reynolds-averaged Navier-Stokes (RANS), in which all the turbulent scales of motion are modeled, and large-eddy simulation (LES), in which only a portion of the turbulent spectrum is modeled. In recent years there have been numerous attempts to couple these two approaches, either by patching RANS and LES calculations together (zonal methods) or by blending the two sets of equations. In order to create a proper bridging model, that is, a single set of equations which captures both RANS- and LES-like behavior, it is necessary to place both RANS and LES in a more general framework. The goal of the current work is threefold: to provide such a framework, to demonstrate how the Flow Simulation Methodology (FSM) fits into this framework, and to evaluate the strengths and weaknesses of the current version of the FSM. To do this, first a set of filtered Navier-Stokes (FNS) equations are introduced in terms of an arbitrary generalized filter. Additional exact equations are given for the second order moments and the generalized subfilter dissipation rate tensor. This is followed by a discussion of the role of implicit and explicit filters in turbulence modeling. The FSM is then described, with particular attention to its role as a bridging model. In order to evaluate the method, a specific implementation of the FSM approach is proposed. Simulations are presented using this model for the case of a separating flow over a "hump" with and without flow control. Careful attention is paid to error estimation and, in particular, how using flow statistics and time series affects the error analysis. Both mean flow and Reynolds stress profiles are presented, as well as the phase averaged turbulent structures and wall pressure spectra. Using the phase averaged data it is possible to examine how the FSM partitions the energy between the coherent resolved scale motions, the random resolved scale fluctuations, and the subfilter

  2. Computational modelling approaches to vaccinology.

    PubMed

    Pappalardo, Francesco; Flower, Darren; Russo, Giulia; Pennisi, Marzio; Motta, Santo

    2015-02-01

    Excepting the Peripheral and Central Nervous Systems, the Immune System is the most complex of somatic systems in higher animals. This complexity manifests itself at many levels, from the molecular to that of the whole organism. Much insight into this confounding complexity can be gained through computational simulation. Such simulations range in application from epitope prediction through to the modelling of vaccination strategies. In this review, we selectively evaluate various key applications relevant to computational vaccinology; these include techniques that operate at different scales, that is, from the molecular to the organism and even the population level.

  3. A Parallel Cartesian Approach for External Aerodynamics of Vehicles with Complex Geometry

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.

    2001-01-01

    This workshop paper presents the current status in the development of a new approach for the solution of the Euler equations on Cartesian meshes with embedded boundaries in three dimensions on distributed and shared memory architectures. The approach uses adaptively refined Cartesian hexahedra to fill the computational domain. Where these cells intersect the geometry, they are cut by the boundary into arbitrarily shaped polyhedra which receive special treatment by the solver. The presentation documents a newly developed multilevel upwind solver based on a flexible domain-decomposition strategy. One novel aspect of the work is its use of space-filling curves (SFC) for memory-efficient on-the-fly parallelization, dynamic re-partitioning and automatic coarse mesh generation. Within each subdomain the approach employs a variety of reordering techniques so that relevant data are on the same page in memory, permitting high performance on cache-based processors. Details of the on-the-fly SFC-based partitioning are presented, as are construction rules for the automatic coarse mesh generation. After describing the approach, the paper uses model problems and 3-D configurations to both verify and validate the solver. The model problems demonstrate that second-order accuracy is maintained despite the presence of the irregular cut-cells in the mesh. In addition, it examines both parallel efficiency and convergence behavior. These investigations demonstrate a parallel speed-up in excess of 28 on 32 processors of an SGI Origin 2000 system and confirm that mesh partitioning has no effect on convergence behavior.
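
    The space-filling-curve partitioning described above can be illustrated with the Morton (Z-order) curve, one common SFC choice (the solver in the abstract may use a different curve, e.g. Peano-Hilbert). In this sketch, cells are keyed by interleaving the bits of their integer coordinates, sorted, and cut into equal contiguous segments; all names are illustrative.

```python
import numpy as np

def morton3(i, j, k, bits=10):
    """Index of cell (i, j, k) along a Morton (Z-order) space-filling curve,
    obtained by interleaving the bits of the integer coordinates."""
    code = 0
    for b in range(bits):
        code |= (((i >> b) & 1) << (3 * b)) \
              | (((j >> b) & 1) << (3 * b + 1)) \
              | (((k >> b) & 1) << (3 * b + 2))
    return code

def partition(cells, nparts):
    """Sort cells along the curve and cut it into equal contiguous segments.
    Cells close on the curve are close in space, so each segment is a compact
    subdomain, and repartitioning after mesh adaptation is just a re-sort."""
    order = sorted(range(len(cells)), key=lambda n: morton3(*cells[n]))
    return np.array_split(np.array(order), nparts)

# A 4 x 4 x 4 block of cells divided among 4 ranks of 16 cells each.
cells = [(i, j, k) for i in range(4) for j in range(4) for k in range(4)]
for rank, part in enumerate(partition(cells, 4)):
    print(rank, len(part))
```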

  4. Computational approaches to motor control.

    PubMed

    Flash, T; Sejnowski, T J

    2001-12-01

    New concepts and computational models that integrate behavioral and neurophysiological observations have addressed several of the most fundamental long-standing problems in motor control. These problems include the selection of particular trajectories among the large number of possibilities, the solution of inverse kinematics and dynamics problems, motor adaptation and the learning of sequential behaviors.

  5. Modelling Mathematics Teachers' Intention to Use the Dynamic Geometry Environments in Macau: An SEM Approach

    ERIC Educational Resources Information Center

    Zhou, Mingming; Chan, Kan Kan; Teo, Timothy

    2016-01-01

    Dynamic geometry environments (DGEs) provide computer-based environments in which to construct and manipulate geometric figures with great ease. Research has shown that DGEs have a positive impact on student motivation, engagement, and achievement in mathematics learning. However, the adoption of DGEs by mathematics teachers varies substantially worldwide…

  6. Geometry Modeling and Grid Generation for Computational Aerodynamic Simulations Around Iced Airfoils and Wings

    NASA Technical Reports Server (NTRS)

    Choo, Yung K.; Slater, John W.; Vickerman, Mary B.; VanZante, Judith F.; Wadel, Mary F. (Technical Monitor)

    2002-01-01

    Issues associated with analysis of 'icing effects' on airfoil and wing performances are discussed, along with accomplishments and efforts to overcome difficulties with ice. Because of infinite variations of ice shapes and their high degree of complexity, computational 'icing effects' studies using available software tools must address many difficulties in geometry acquisition and modeling, grid generation, and flow simulation. The value of each technology component needs to be weighed from the perspective of the entire analysis process, from geometry to flow simulation. Even though CFD codes are yet to be validated for flows over iced airfoils and wings, numerical simulation, when considered together with wind tunnel tests, can provide valuable insights into 'icing effects' and advance our understanding of the relationship between ice characteristics and their effects on performance degradation.

  7. Hydrodynamic optimization of membrane bioreactor by horizontal geometry modification using computational fluid dynamics.

    PubMed

    Yan, Xiaoxu; Wu, Qing; Sun, Jianyu; Liang, Peng; Zhang, Xiaoyuan; Xiao, Kang; Huang, Xia

    2016-01-01

    Geometry affects the hydrodynamics of a membrane bioreactor (MBR) and is directly related to the membrane fouling rate. Simulation of a bench-scale MBR by computational fluid dynamics (CFD) showed that the shear stress on the membrane surface could be elevated by 74% if the membrane was sandwiched between two baffles (baffled MBR), compared with that without baffles (unbaffled MBR). The effects of the horizontal geometry characteristics of a bench-scale membrane tank were discussed (riser length index Lr, downcomer length index Ld, tank width index Wt). Simulation results indicated that the average cross flow of the riser was negatively correlated with the ratio of riser to downcomer cross-sectional area. A relatively small tank width would also be preferable for promoting shear stress on the membrane surface. The optimized MBR had a shear elevation of 21.3-91.4% compared with the unbaffled MBR under the same aeration intensity.

  8. Computational studies of flow through cross flow fans - effect of blade geometry

    NASA Astrophysics Data System (ADS)

    Govardhan, M.; Sampat, D. Lakshmana

    2005-09-01

    The present paper describes a three-dimensional computational analysis of the complex internal flow in a cross flow fan. The commercial computational fluid dynamics (CFD) software code CFX was used for the computation. The RNG k-ε two-equation turbulence model was used to simulate the model with an unstructured mesh. A sliding mesh interface was used at the interface between the rotating and stationary domains to capture the unsteady interactions. An accurate assessment of the present investigation is made by comparing various parameters with the available experimental data. Three impeller geometries with different blade angles and radius ratios are used in the present study. Maximum energy transfer through the impeller takes place in the region where the flow follows the blade curvature. The radial velocity is not uniform through the blade channels. Some blades work in turbine mode at very low flow coefficients. The static pressure is always negative in and around the impeller region.

  9. Thermal Protection System Cavity Heating for Simplified and Actual Geometries Using Computational Fluid Dynamics Simulations with Unstructured Grids

    NASA Technical Reports Server (NTRS)

    McCloud, Peter L.

    2010-01-01

    Thermal Protection System (TPS) Cavity Heating is predicted using Computational Fluid Dynamics (CFD) on unstructured grids for both simplified cavities and actual cavity geometries. Validation was performed using comparisons to wind tunnel experimental results and CFD predictions using structured grids. Full-scale predictions were made for simplified and actual geometry configurations on the Space Shuttle Orbiter in a mission support timeframe.

  10. Dependence of Monte Carlo microdosimetric computations on the simulation geometry of gold nanoparticles.

    PubMed

    Zygmanski, Piotr; Liu, Bo; Tsiamas, Panagiotis; Cifter, Fulya; Petersheim, Markus; Hesser, Jürgen; Sajo, Erno

    2013-11-21

    Recently, interactions of x-rays with gold nanoparticles (GNPs) and the resulting dose enhancement have been studied using several Monte Carlo (MC) codes (Jones et al 2010 Med. Phys. 37 3809-16, Lechtman et al 2011 Phys. Med. Biol. 56 4631-47, McMahon et al 2011 Sci. Rep. 1 1-9, Leung et al 2011 Med. Phys. 38 624-31). These MC simulations were carried out in simplified geometries and provided encouraging preliminary data in support of GNP radiotherapy. As these studies showed, radiation transport computations of clinical beams to obtain dose enhancement from nanoparticles have several challenges, mostly arising from the requirement of high spatial resolution and from the approximations used at the interface between the macroscopic clinical beam transport and the nanoscopic electron transport originating in the nanoparticle or its vicinity. We investigate the impact of MC simulation geometry on the energy deposition due to the presence of GNPs, including the effects of particle clustering and morphology. Dose enhancement due to single and multiple GNPs using various simulation geometries is computed using the GEANT4 MC radiation transport code. Various approximations in the geometry and in the phase space transition from macro- to micro-beams incident on GNPs are analyzed. Simulations using GEANT4 are compared to a deterministic code CEPXS/ONEDANT for microscopic (nm-µm) geometry. Dependence on the following microscopic (µ) geometry parameters is investigated: µ-source-to-GNP distance (µSAD), µ-beam size (µS), and GNP size (µC). Because a micro-beam represents clinical beam properties at the microscopic scale, the effect of using different types of micro-beams is also investigated. In particular, a micro-beam with the phase space of a clinical beam versus a plane-parallel beam with an equivalent photon spectrum is characterized. Furthermore, the spatial anisotropy of energy deposition around a nanoparticle is analyzed. Finally, dependence of dose enhancement

  11. A Museum Approach to Computer Learning.

    ERIC Educational Resources Information Center

    Wall, Roger

    1986-01-01

    Compares and contrasts the approaches taken by museums and schools in computer education. Reviews representative museum computer programs as "Keyboards for Kids" for preschool children of the Franklin Institute in Philadelphia and the teacher training project of Boston's Museum of Science. Offers perspectives on expanded program options.…

  12. Assessment and improvement of mapping algorithms for non-matching meshes and geometries in computational FSI

    NASA Astrophysics Data System (ADS)

    Wang, Tianyang; Wüchner, Roland; Sicklinger, Stefan; Bletzinger, Kai-Uwe

    2016-05-01

    This paper investigates data mapping between non-matching meshes and geometries in fluid-structure interaction. Mapping algorithms for surface meshes, including nearest element interpolation, the standard mortar method and the dual mortar method, are studied and comparatively assessed. The inconsistency problem of mortar methods at curved edges of fluid-structure interfaces is solved by a newly developed consistency-enforcing approach, which is robust enough to handle even the case in which fluid boundary facets are not in contact with structure boundary elements at all, owing to high fluid refinement. In addition, tests with representative geometries show that the mortar methods are suitable for conservative mapping, that nearest element interpolation is better used in a direct way, and that the dual mortar method can give slight oscillations. This work also develops a co-rotating mapping algorithm for 1D beam elements. Its novelty lies in the ability to handle large displacements and rotations.
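
    The nearest element interpolation assessed above can be sketched as follows: for each target node, locate the source triangle whose closest surface point is nearest, then blend the nodal values with barycentric weights. The version below is a simplified illustration, not the authors' implementation: the barycentric clamp is a crude stand-in for an exact closest-point-on-triangle test, and the search is brute force where real mappers would use a search tree.

```python
import numpy as np

def barycentric(p, tri):
    """Barycentric coordinates of the projection of p onto the plane of the
    triangle tri (a 3 x 3 array of vertex coordinates)."""
    a, b, c = tri
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return np.array([1.0 - v - w, v, w])

def map_nodal_data(targets, src_nodes, src_tris, src_vals):
    """For each target point, pick the source triangle whose (approximate)
    closest point is nearest, then interpolate the nodal values with the
    clamped barycentric weights."""
    out = np.empty(len(targets))
    for n, p in enumerate(targets):
        best = (np.inf, None, None)
        for tri in src_tris:
            lam = np.clip(barycentric(p, src_nodes[tri]), 0.0, 1.0)
            lam /= lam.sum()                 # crude clamp into the element
            dist = np.linalg.norm(p - lam @ src_nodes[tri])
            if dist < best[0]:
                best = (dist, tri, lam)
        _, tri, lam = best
        out[n] = lam @ src_vals[tri]
    return out

# Two triangles forming the unit square, nodal values = x coordinate:
# a point just above (0.25, 0.25) must map to ~0.25.
nodes = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
tris = np.array([[0, 1, 2], [0, 2, 3]])
print(map_nodal_data(np.array([[0.25, 0.25, 0.1]]), nodes, tris, nodes[:, 0]))
```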

  13. An interactive user-friendly approach to surface-fitting three-dimensional geometries

    NASA Technical Reports Server (NTRS)

    Cheatwood, F. Mcneil; Dejarnette, Fred R.

    1988-01-01

    A surface-fitting technique has been developed which addresses two problems with existing geometry packages: computer storage requirements and the time required of the user for the initial setup of the geometry model. Coordinates of cross sections are fit using segments of general conic sections. The next step is to blend the cross-sectional curve-fits in the longitudinal direction using general conics to fit specific meridional half-planes. Provisions are made to allow the fitting of fuselages and wings so that entire wing-body combinations may be modeled. This report includes the development of the technique along with a User's Guide for the various menus within the program. Results for the modeling of the Space Shuttle and a proposed Aeroassist Flight Experiment geometry are presented.
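
    The core fitting step described above, fitting general conic sections to digitized cross-section coordinates, can be illustrated with a standard least-squares construction. The sketch below fits a single full conic via the SVD null-space of the design matrix, whereas the program fits conic segments and blends them longitudinally; all names and the test ellipse are illustrative.

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares fit of a general conic
    a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 to digitized points, taken as
    the SVD null-space direction of the design matrix (unit-norm coefficients)."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    return np.linalg.svd(D)[2][-1]  # singular vector of smallest singular value

# Noisy samples of the ellipse x^2/4 + y^2 = 1.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 60)
x = 2.0 * np.cos(t) + 0.001 * rng.standard_normal(t.size)
y = np.sin(t) + 0.001 * rng.standard_normal(t.size)
coef = fit_conic(x, y)
print(coef / coef[0])  # ~[1, 0, 4, 0, 0, -4], i.e. x^2 + 4 y^2 - 4 = 0
```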

  14. Answering Typical Student Questions in Hyperbolic Geometry: A Transformation Group Approach

    ERIC Educational Resources Information Center

    Reyes, Edgar N.; Gray, Elizabeth D.

    2002-01-01

    It is shown that the bisector of a segment of a geodesic and the bisector of an angle in hyperbolic geometry can be expressed in terms of points which are equidistant from the end points of the segment, and points that are equidistant from the rays of the angle, respectively. An important tool in the approach is that the shortest distance between…

  15. A Geometry Based Infra-structure for Computational Analysis and Design

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1997-01-01

    The computational steps traditionally taken for most engineering analysis (CFD, structural analysis, etc.) are: Surface Generation - usually by employing a CAD system; Grid Generation - preparing the volume for the simulation; Flow Solver - producing the results at the specified operational point; and Post-processing Visualization - interactively attempting to understand the results. For structural analysis, integrated systems can be obtained from a number of commercial vendors. For CFD, these steps have worked well in the past for simple steady-state simulations, at the expense of much user interaction. The data was transmitted between phases via files. Specifically, the problems with this procedure are: (1) File based. Information flows from one step to the next via data files with formats specified for that procedure. (2) 'Good' Geometry. A bottleneck in getting results from a solver is the construction of proper geometry to be fed to the grid generator. With 'good' geometry a grid can be constructed in tens of minutes (even with a complex configuration) using unstructured techniques. (3) One-way communication. All information travels on from one phase to the next. Until this process can be automated, more complex problems such as multi-disciplinary analysis, or using the above procedure for design, become prohibitive.

  16. Computational analysis of two-fluid edge plasma stability in tokamak geometries

    NASA Astrophysics Data System (ADS)

    Neiser, Tom; Baver, Derek; Carter, Troy; Myra, Jim; Snyder, Phil; Umansky, Maxim

    2013-10-01

    In H-mode, the edge pressure gradient is disrupted quasi-periodically by Edge Localized Modes (ELMs), which lead to confinement loss and place large heat loads on the divertor. This poster gives an overview of the peeling-ballooning model for ELM formation and presents recent results of 2DX, a fast eigenvalue code capable of solving the equations of any fluid model. We use 2DX to solve reduced ideal MHD equations of two-fluid plasma in the R-Z plane, with the toroidal mode number resolving the third dimension. Previously, 2DX has been successfully benchmarked against ELITE and BOUT++ for ballooning-dominated cases in simple shifted-circle geometries. We present follow-up work in simple geometry as well as similar benchmarks for the full X-point geometry of DIII-D. We demonstrate 2DX's capability as a computational tool that supports nonlinear codes with linear verification and as an experimental tool to identify density limits, map the spatial distribution of eigenmodes and investigate marginal stability of the edge region.

  17. A computer program for fitting smooth surfaces to an aircraft configuration and other three dimensional geometries

    NASA Technical Reports Server (NTRS)

    Craidon, C. B.

    1975-01-01

    A computer program that uses a three-dimensional geometric technique for fitting a smooth surface to the component parts of an aircraft configuration is presented. The resulting surface equations are useful in performing various kinds of calculations in which a three-dimensional mathematical description is necessary. Program options may be used to compute information for three-view and orthographic projections of the configuration, as well as cross-section plots at any orientation through the configuration. The aircraft geometry input section of the program may be easily replaced with a surface point description in a different form, so that the program could be of use for any three-dimensional surface equations.

  18. Laser cone beam computed tomography scanner geometry for large volume 3D dosimetry

    NASA Astrophysics Data System (ADS)

    Jordan, K. J.; Turnbull, D.; Batista, J. J.

    2013-06-01

    A new scanner geometry for fast optical cone-beam computed tomography is reported. The system consists of a low power laser beam, raster scanned under computer control through a transparent object in a refractive-index-matching aquarium. The transmitted beam is scattered from a diffuser screen and detected by a photomultiplier tube. Stray light in the projection images is modest, since only a single ray is present in the object during measurement and there are no imaging optics to introduce further stray light in the form of glare. A scan time of 30 minutes was required for 512 projections with a field of view of 12 × 18 cm. Initial performance from scanning a 15 cm diameter jar filled with black solutions is presented. Averaged reconstruction coefficients are within 2% along the height of the jar and within the central 85% of its diameter; deviations outside this region are due to the refractive index mismatch of the jar. Agreement with spectrometer measurements was better than 0.5% for a minimum transmission of 4% and within 4% for a dark, 0.1% transmission sample. This geometry's advantages include high dynamic range and low cost of scaling to larger (>15 cm) fields of view.
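
    The projection model underlying such an optical CT scanner is the Beer-Lambert law, which is also how the abstract's transmission figures translate into attenuation values. A minimal sketch, with the 4% minimum transmission and the 15 cm jar taken from the abstract as illustrative inputs (the function name is hypothetical):

```python
import numpy as np

def mu_from_transmission(transmission, path_cm):
    """Mean linear attenuation coefficient (cm^-1) from a measured laser
    transmission I/I0 over a known path length, via Beer-Lambert:
    I = I0 * exp(-mu * L)  =>  mu = -ln(I / I0) / L."""
    return -np.log(transmission) / path_cm

# Central ray through the 15 cm jar at the 4% minimum transmission quoted above.
print(mu_from_transmission(0.04, 15.0))  # ~0.215 cm^-1
```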

  19. Toward exascale computing through neuromorphic approaches.

    SciTech Connect

    James, Conrad D.

    2010-09-01

    While individual neurons function at relatively low firing rates, naturally-occurring nervous systems not only surpass manmade systems in computing power, but accomplish this feat using relatively little energy. It is asserted that the next major breakthrough in computing power will be achieved through application of neuromorphic approaches that mimic the mechanisms by which neural systems integrate and store massive quantities of data for real-time decision making. The proposed LDRD provides a conceptual foundation for SNL to make unique advances toward exascale computing. First, a team consisting of experts from the HPC, MESA, cognitive and biological sciences and nanotechnology domains will be coordinated to conduct an exercise with the outcome being a concept for applying neuromorphic computing to achieve exascale computing. It is anticipated that this concept will involve innovative extension and integration of SNL capabilities in MicroFab, material sciences, high-performance computing, and modeling and simulation of neural processes/systems.

  20. Computational approach for probing the flow through artificial heart devices.

    PubMed

    Kiris, C; Kwak, D; Rogers, S; Chang, I D

    1997-11-01

    Computational fluid dynamics (CFD) has become an indispensable part of aerospace research and design. The solution procedure for incompressible Navier-Stokes equations can be used for biofluid mechanics research. The computational approach provides detailed knowledge of the flowfield complementary to that obtained by experimental measurements. This paper illustrates the extension of CFD techniques to artificial heart flow simulation. Unsteady incompressible Navier-Stokes equations written in three-dimensional generalized curvilinear coordinates are solved iteratively at each physical time step until the incompressibility condition is satisfied. The solution method is based on the pseudocompressibility approach. It uses an implicit upwind-differencing scheme together with the Gauss-Seidel line-relaxation method. The efficiency and robustness of the time-accurate formulation of the numerical algorithm are tested by computing the flow through model geometries. A channel flow with a moving indentation is computed and validated by experimental measurements and other numerical solutions. In order to handle the geometric complexity and the moving boundary problems, a zonal method and an overlapped grid embedding scheme are employed, respectively. Steady-state solutions for the flow through a tilting-disk heart valve are compared with experimental measurements. Good agreement is obtained. Aided by experimental data, the flow through an entire Penn State artificial heart model is computed.
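
    For reference, the pseudocompressibility (artificial compressibility) formulation named above can be sketched as follows. This is the standard dual-time form of the method, with beta the artificial compressibility parameter and tau a pseudo-time; it is a sketch of the general approach, not the paper's exact discretization.

    ```latex
    % Subiterate in pseudo-time \tau at each physical time step t until
    % \partial p/\partial\tau \to 0, which restores \nabla\cdot\mathbf{u} = 0.
    \begin{aligned}
      \frac{\partial p}{\partial \tau} + \beta\,\nabla\cdot\mathbf{u} &= 0,\\[4pt]
      \frac{\partial \mathbf{u}}{\partial \tau}
        + \frac{\partial \mathbf{u}}{\partial t}
        + (\mathbf{u}\cdot\nabla)\mathbf{u}
        &= -\nabla p + \nu\,\nabla^{2}\mathbf{u}.
    \end{aligned}
    ```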

  1. [Geometry, analysis, and computation in mathematics and applied science]. Progress report

    SciTech Connect

    Hoffman, D.

    1994-02-01

    The principal investigators' work on a variety of pure and applied problems in Differential Geometry, Calculus of Variations and Mathematical Physics has been done in a computational laboratory and been based on interactive scientific computer graphics and high speed computation created by the principal investigators to study geometric interface problems in the physical sciences. We have developed software to simulate various physical phenomena from constrained plasma flow to the electron microscope imaging of the microstructure of compound materials, techniques for the visualization of geometric structures that have been used to make significant breakthroughs in the global theory of minimal surfaces, and graphics tools to study evolution processes, such as flow by mean curvature, while simultaneously developing the mathematical foundation of the subject. An increasingly important activity of the laboratory is to extend this environment in order to support and enhance scientific collaboration with researchers at other locations. Toward this end, the Center developed the GANGVideo distributed video software system and software methods for running lab-developed programs simultaneously on remote and local machines. Further, the Center operates a broadcast video network, running in parallel with the Center's data networks, over which researchers can access stored video materials or view ongoing computations. The graphical front-end to GANGVideo can be used to make "multi-media mail" from both "live" computing sessions and stored materials without video editing. Currently, videotape is used as the delivery medium, but GANGVideo is compatible with future "all-digital" distribution systems. Thus as a byproduct of mathematical research, we are developing methods for scientific communication. But, most important, our research focuses on important scientific problems; the parallel development of computational and graphical tools is driven by scientific needs.

  2. Quantifying normal geometric variation in human pulmonary lobar geometry from high resolution computed tomography.

    PubMed

    Chan, Ho-Fung; Clark, Alys R; Hoffman, Eric A; Malcolm, Duane T K; Tawhai, Merryn H

    2015-05-01

    Previous studies of the ex vivo lung have suggested significant intersubject variability in lung lobe geometry. A quantitative description of normal lung lobe shape would therefore have value in improving the discrimination between normal population variability in shape and pathology. To quantify normal human lobe shape variability, a principal component analysis (PCA) was performed on high resolution computed tomography (HRCT) imaging of the lung at full inspiration. Volumetric imaging from 22 never-smoking subjects (10 female and 12 male) with normal lung function was included in the analysis. For each subject, an initial finite element mesh geometry was generated from a group of manually selected nodes that were placed at distinct anatomical locations on the lung surface. Each mesh used cubic shape functions to describe the surface curvilinearity, and the mesh was fitted to surface data for each lobe. A PCA was performed on the surface meshes for each lobe. Nine principal components (PCs) were sufficient to capture >90% of the normal variation in each of the five lobes. The analysis shows that lobe size can explain between 20% and 50% of intersubject variability, depending on the lobe considered. Diaphragm shape was the next most significant intersubject difference. When the influence of lung size difference is removed, the angle of the fissures becomes the most significant shape difference, and the variability in relative lobe size becomes important. We also show how a lobe from an independent subject can be projected onto the study population's PCs, demonstrating potential for abnormalities in lobar geometry to be defined in a quantitative manner.
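
    As a sketch of the statistical machinery, assuming each fitted lobe mesh has been flattened to a vector of corresponding node coordinates (the function names and threshold handling are illustrative, not the study's code):

    ```python
    import numpy as np

    def fit_shape_pca(shapes, var_target=0.90):
        """shapes: (n_subjects, 3*n_nodes) array of corresponding coordinates."""
        mean_shape = shapes.mean(axis=0)
        X = shapes - mean_shape
        # SVD of the centered data; rows of Vt are the principal components.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        var = s**2 / (len(shapes) - 1)
        explained = np.cumsum(var) / var.sum()
        k = int(np.searchsorted(explained, var_target)) + 1
        return mean_shape, Vt[:k], var[:k]

    def project(shape, mean_shape, components):
        """Scores of an independent subject in the study population's PC space."""
        return components @ (shape - mean_shape)
    ```

    Per the abstract, nine such components were sufficient to capture more than 90% of the normal variation in each of the five lobes.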

  3. Computation of inviscid compressible flows about arbitrary geometries and moving boundaries

    NASA Astrophysics Data System (ADS)

    Bayyuk, Sami Alan

    2008-10-01

    The computational simulation of aerodynamic flows with moving boundaries has numerous scientific and practical motivations. In this work, a new technique for computation of inviscid, compressible flows about two-dimensional, arbitrarily-complex geometries that are allowed to undergo arbitrarily-complex motions or deformations is developed and studied. The computational technique is constructed from five main components: (i) an adaptive, Quadtree-based, Cartesian-Grid generation algorithm that divides the computational region into stationary square cells, with local refinement and coarsening to resolve the geometry of all internal boundaries, even as such boundaries move. The algorithm automatically clips cells that straddle boundaries to form arbitrary polygonal cells; (ii) a representation of internal boundaries as exact, infinitesimally-thin discontinuities separating two arbitrarily-different states. The exactness of this representation, and its preclusion of diffusive or dispersive effects while boundaries travel across the grid, combines the advantages of Eulerian and Lagrangian methods and is the main distinguishing characteristic of the technique; (iii) a second-order-accurate Finite-Volume, Arbitrary Lagrangian-Eulerian, characteristic-based flow-solver. The discretization of the boundaries and their motion is matched with the discretization of the flux quadratures to ensure that the overall second-order-accurate discretization also satisfies the Geometric Conservation Laws; (iv) an algorithm for dynamic merging of the cells in the vicinity of internal boundaries to form composite cells that retain the same topologic configuration during individual boundary motion steps and can therefore be treated as deforming cells, eliminating the need to treat crossing of grid lines by moving boundaries. Cell merging is also used to circumvent the "small-cell problem" of non-boundary-conformal Cartesian Grids; and (v) a solution-adaptation algorithm for resolving flow features.

  4. Spectral Transfer Learning Using Information Geometry for a User-Independent Brain-Computer Interface

    PubMed Central

    Waytowich, Nicholas R.; Lawhern, Vernon J.; Bohannon, Addison W.; Ball, Kenneth R.; Lance, Brent J.

    2016-01-01

    Recent advances in signal processing and machine learning techniques have enabled the application of Brain-Computer Interface (BCI) technologies to fields such as medicine, industry, and recreation; however, BCIs still suffer from the requirement of frequent calibration sessions due to the intra- and inter-individual variability of brain-signals, which makes calibration suppression through transfer learning an area of increasing interest for the development of practical BCI systems. In this paper, we present an unsupervised transfer method (spectral transfer using information geometry, STIG), which ranks and combines unlabeled predictions from an ensemble of information geometry classifiers built on data from individual training subjects. The STIG method is validated in both off-line and real-time feedback analysis during a rapid serial visual presentation task (RSVP). For detection of single-trial, event-related potentials (ERPs), the proposed method can significantly outperform existing calibration-free techniques as well as traditional within-subject calibration techniques when limited data are available. This method demonstrates that unsupervised transfer learning for single-trial detection in ERP-based BCIs can be achieved without the requirement of costly training data, representing a step forward in the overall goal of achieving a practical user-independent BCI system. PMID:27713685
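
    The ranking-and-combining step can be illustrated with an unsupervised, spectral-style ensemble sketch. This is an assumption about the flavor of the combination rule, not the authors' STIG implementation, and all names are illustrative.

    ```python
    import numpy as np

    def spectral_combine(preds):
        """preds: (n_classifiers, n_trials) array of +/-1 predictions."""
        Q = np.cov(preds)
        # For roughly independent classifiers the off-diagonal structure of Q
        # is near rank one; the leading eigenvector weights each classifier
        # by its (unknown) reliability.
        eigvals, eigvecs = np.linalg.eigh(Q)
        weights = eigvecs[:, -1]
        if weights.sum() < 0:            # resolve the sign ambiguity
            weights = -weights
        return np.sign(weights @ preds)  # combined +/-1 label per trial
    ```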

  5. Grid generation and inviscid flow computation about cranked-winged airplane geometries

    NASA Technical Reports Server (NTRS)

    Eriksson, L.-E.; Smith, R. E.; Wiese, M. R.; Farr, N.

    1987-01-01

    An algebraic grid generation procedure that defines a patched multiple-block grid system suitable for fighter-type aircraft geometries with fuselage and engine inlet, canard or horizontal tail, cranked delta wing and vertical fin has been developed. The grid generation is based on transfinite interpolation and requires little computational power. A finite-volume Euler solver using explicit Runge-Kutta time-stepping has been adapted to this grid system and implemented on the VPS-32 vector processor with a high degree of vectorization. Grids are presented for an experimental aircraft with fuselage, canard, 70-20-cranked wing, and vertical fin. Computed inviscid compressible flow solutions are presented for Mach 2 at 3.79, 7 and 10 deg angles of attack. Comparisons of the 3.79 deg computed solutions are made with available full-potential flow and Euler flow solutions on the same configuration but with another grid system. The occurrence of an unsteady solution in the 10 deg angle of attack case is discussed.

  6. NASA geometry data exchange specification for computational fluid dynamics (NASA IGES)

    NASA Technical Reports Server (NTRS)

    Blake, Matthew W.; Kerr, Patricia A.; Thorp, Scott A.; Jou, Jin J.

    1994-01-01

    This document specifies a subset of an existing product data exchange specification that is widely used in industry and government. The existing document is called the Initial Graphics Exchange Specification. This document, a subset of IGES, is intended for engineers analyzing product performance using tools such as computational fluid dynamics (CFD) software. This document specifies how to define mathematically and exchange the geometric model of an object. The geometry is represented utilizing nonuniform rational B-splines (NURBS) curves and surfaces. Only surface models are represented; no solid model representation is included. This specification does not include most of the other types of product information available in IGES (e.g., no material properties or surface finish properties) and does not provide all the specific file format details of IGES. The data exchange protocol specified in this document is fully conforming to the American National Standard (ANSI) IGES 5.2.
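
    Since the subset represents geometry with NURBS curves and surfaces, a minimal evaluation sketch may be useful. The homogeneous-coordinate construction below is the standard way to evaluate a rational B-spline with a plain B-spline routine; the sample knot vector, control points, and weights (an exact quarter circle) are illustrative, not from the specification.

    ```python
    import numpy as np
    from scipy.interpolate import BSpline

    def nurbs_curve(t_eval, degree, knots, ctrl, weights):
        ctrl = np.asarray(ctrl, float)            # (n_ctrl, dim)
        w = np.asarray(weights, float)[:, None]   # (n_ctrl, 1)
        homog = np.hstack([ctrl * w, w])          # lift to (w*x, w*y, ..., w)
        spline = BSpline(knots, homog, degree)
        vals = spline(t_eval)
        return vals[:, :-1] / vals[:, -1:]        # perspective divide

    # Quarter circle as an exact quadratic NURBS (a classic conic example).
    knots = [0, 0, 0, 1, 1, 1]
    ctrl = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
    weights = [1.0, np.sqrt(2) / 2, 1.0]
    pts = nurbs_curve(np.linspace(0, 1, 5), 2, knots, ctrl, weights)
    print(np.linalg.norm(pts, axis=1))  # all ~1.0: points lie on the unit circle
    ```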

  7. A computational geometry framework for the optimisation of atom probe reconstructions.

    PubMed

    Felfer, Peter; Cairney, Julie

    2016-10-01

    In this paper, we present pathways for improving the reconstruction of atom probe data on a coarse (>10 nm) scale, based on computational geometry. We introduce a way to iteratively improve an atom probe reconstruction by adjusting it so that certain known shape criteria are fulfilled. This is achieved by creating an implicit approximation of the reconstruction through a barycentric coordinate transform. We demonstrate the application of these techniques to the compensation of trajectory aberrations and the iterative improvement of the reconstruction of a dataset containing a grain boundary. We also present a method for obtaining a hull of the dataset in both detector and reconstruction space. This maximises data utilisation, and can be used to compensate for ion trajectory aberrations caused by residual fields in the ion flight path through a 'master curve' and to correct for overall shape deviations in the data.

  8. Project Integration Architecture (PIA) and Computational Analysis Programming Interface (CAPRI) for Accessing Geometry Data from CAD Files

    NASA Technical Reports Server (NTRS)

    Benyo, Theresa L.

    2002-01-01

    Integration of a supersonic inlet simulation with a computer aided design (CAD) system is demonstrated. The integration is performed using the Project Integration Architecture (PIA). PIA provides a common environment for wrapping many types of applications. Accessing geometry data from CAD files is accomplished by incorporating appropriate function calls from the Computational Analysis Programming Interface (CAPRI). CAPRI is a CAD vendor neutral programming interface that aids in acquiring geometry data directly from CAD files. The benefits of wrapping a supersonic inlet simulation into PIA using CAPRI are: direct access to geometry data, accurate capture of geometry data, automatic conversion of data units, CAD vendor neutral operation, and on-line interactive history capture. This paper describes the PIA and the CAPRI wrapper and details the supersonic inlet simulation demonstration.

  9. New Protocols for Solving Geometric Calculation Problems Incorporating Dynamic Geometry and Computer Algebra Software.

    ERIC Educational Resources Information Center

    Schumann, Heinz; Green, David

    2000-01-01

    Discusses software for geometric construction, measurement, and calculation, and software for numerical calculation and symbolic analysis that allows for new approaches to the solution of geometric problems. Illustrates these computer-aided graphical, numerical, and algebraic methods of solution and discusses examples using the appropriate choice…

  10. Examination of the three-dimensional geometry of cetacean flukes using computed tomography scans: hydrodynamic implications.

    PubMed

    Fish, Frank E; Beneski, John T; Ketten, Darlene R

    2007-06-01

    The flukes of cetaceans function in the hydrodynamic generation of forces for thrust, stability, and maneuverability. The three-dimensional geometry of flukes is associated with production of lift and drag. Data on fluke geometry were collected from 19 cetacean specimens representing eight odontocete genera (Delphinus, Globicephala, Grampus, Kogia, Lagenorhynchus, Phocoena, Stenella, Tursiops). Flukes were imaged as 1 mm thick cross-sections using X-ray computer-assisted tomography. Fluke shapes were characterized quantitatively by dimensions of the chord, maximum thickness, and position of maximum thickness from the leading edge. Sections were symmetrical about the chordline and had a rounded leading edge and highly tapered trailing edge. The thickness ratio (maximum thickness/chord) among species increased from the insertion on the tailstock to a maximum at 20% of span and then decreased steadily to the tip. Thickness ratio ranged from 0.139 to 0.232. These low values indicate reduced drag while moving at high speed. The position of maximum thickness from the leading edge remained constant over the fluke span at an average of 0.285 chord for all species. The displacement of the maximum thickness reduces the tendency of the flow to separate from the fluke surface, potentially affecting stall patterns. Similarly, the relatively large leading edge radius allows greater lift generation and delays stall. Computational analysis of fluke profiles at 50% of span showed that flukes were generally comparable to or better than engineered foils for lift generation. Tursiops had the highest lift coefficients, which were superior to engineered foils by 12-19%. Variation in the structure of cetacean flukes reflects different hydrodynamic characteristics that could influence swimming performance.

  11. Description of the F-16XL Geometry and Computational Grids Used in CAWAPI

    NASA Technical Reports Server (NTRS)

    Boelens, O. J.; Badcock, K. J.; Gortz, S.; Morton, S.; Fritz, W.; Karman, S. L., Jr.; Michal, T.; Lamar, J. E.

    2009-01-01

    The objective of the Cranked-Arrow Wing Aerodynamics Project International (CAWAPI) was to allow a comprehensive validation of Computational Fluid Dynamics methods against the CAWAP flight database. A major part of this work involved the generation of high-quality computational grids. Prior to the grid generation, an IGES file containing the air-tight geometry of the F-16XL aircraft was generated through a cooperation of the CAWAPI partners. Based on this geometry description both structured and unstructured grids have been generated. The baseline structured (multi-block) grid (and a family of derived grids) was generated by the National Aerospace Laboratory NLR. Although the algorithms used by NLR had become available just before CAWAPI, and thus only limited experience with their application to such a complex configuration had been gained, a grid of good quality was generated well within four weeks. This time compared favourably with that required to produce the unstructured grids in CAWAPI. The baseline all-tetrahedral and hybrid unstructured grids were generated at NASA Langley Research Center and the USAFA, respectively. To provide more geometrical resolution, trimmed unstructured grids have been generated at EADS-MAS, the UTSimCenter, Boeing Phantom Works and KTH/FOI. All grids generated within the framework of CAWAPI will be discussed in the article. Results obtained on both the structured and the unstructured grids showed a significant improvement in agreement with flight test data in comparison with those obtained on the structured multi-block grid used during CAWAP.

  12. Optimization of numerical weather/wave prediction models based on information geometry and computational techniques

    NASA Astrophysics Data System (ADS)

    Galanis, George; Famelis, Ioannis; Kalogeri, Christina

    2014-10-01

    In recent years a new, highly demanding framework has been set for environmental sciences and applied mathematics as a result of the needs posed by issues of interest not only to the scientific community but to today's society in general: global warming, renewable energy resources, and natural hazards can be listed among them. The research community today follows two main directions to address these problems: the utilization of environmental observations obtained from in situ or remote sensing sources, and meteorological-oceanographic simulations based on physical-mathematical models. In particular, to reach credible local forecasts, the two data sources are combined by algorithms that are essentially based on optimization processes. Conventional approaches in this framework usually neglect the topological-geometrical properties of the space of the data under study by adopting least squares methods based on classical Euclidean geometry tools. In the present work new optimization techniques are discussed that make use of methodologies from a rapidly advancing branch of applied mathematics, information geometry. The latter proves that the distributions of data sets are elements of non-Euclidean structures in which the underlying geometry may differ significantly from the classical one. Geometrical entities like Riemannian metrics, distances, curvature and affine connections are utilized to define the optimum distributions fitting the environmental data in specific areas and to form differential systems that describe the optimization procedures. The proposed methodology is illustrated by an application to wind speed forecasts on the island of Kefalonia, Greece.

  13. Floating-point geometry: toward guaranteed geometric computations with approximate arithmetics

    NASA Astrophysics Data System (ADS)

    Bajard, Jean-Claude; Langlois, Philippe; Michelucci, Dominique; Morin, Géraldine; Revol, Nathalie

    2008-08-01

    Geometric computations can fail because of inconsistencies due to floating-point inaccuracy. For instance, the computed intersection point between two curves does not lie on the curves: this is unavoidable when the intersection point coordinates are non-rational, and thus not representable using floating-point arithmetic. A popular heuristic approach tests equalities and nullities up to a tolerance ɛ. But transitivity of equality is lost: we can have A ≈ B and B ≈ C, but not A ≈ C (where A ≈ B means ||A - B|| < ɛ for two floating-point values A and B). Interval arithmetic is another, self-validated, alternative; the difficulty is to limit the growth of the interval widths as computations proceed. Unfortunately, interval arithmetic cannot decide equality or nullity, even in cases where they are decidable by other means. A new approach, developed in this paper, consists in modifying the geometric problems and algorithms to account for the undecidability of the equality test and the unavoidable inaccuracy. In particular, all curves come with a non-zero thickness, so two curves (generically) cut in a region with non-zero area, an inner and outer representation of which is computable. This last approach no longer assumes that an equality or nullity test is available. The question which arises is: which geometric problems can still be solved with this last approach, and which cannot? This paper begins with the description of some cases where every known arithmetic fails in practice. Then, for each arithmetic, some properties of the problems it can solve are given. We end this work by proposing the bases of a new approach which aims to fulfill the requirements of geometric computation.
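
    The loss of transitivity described above is easy to reproduce; the tolerance and values below are arbitrary:

    ```python
    # Approximate equality up to a tolerance eps is not an equivalence relation.
    eps = 1e-9
    def approx(a, b):
        return abs(a - b) < eps

    A, B, C = 0.0, 0.6e-9, 1.2e-9
    print(approx(A, B), approx(B, C), approx(A, C))  # True True False
    ```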

  14. A Creative Arts Approach to Computer Programming.

    ERIC Educational Resources Information Center

    Greenberg, Gary

    1991-01-01

    Discusses "Object LOGO," a symbolic computer programing language for use in the creative arts. Describes the use of the program in approaching arts projects from textual, graphic, and musical perspectives. Suggests that use of the program can promote development of creative skills and humanities learning in general. (SG)

  15. Two-phase flow in complex geometries: A diffuse domain approach

    PubMed Central

    Aland, S.; Voigt, A.

    2011-01-01

    We present a new method for simulating two-phase flows in complex geometries, taking into account contact lines separating immiscible incompressible components. We combine the diffuse domain method for solving PDEs in complex geometries with the diffuse-interface (phase-field) method for simulating multiphase flows. In this approach, the complex geometry is described implicitly by introducing a new phase-field variable, which is a smooth approximation of the characteristic function of the complex domain. The fluid and component concentration equations are reformulated and solved in a larger, regular domain with the boundary conditions being implicitly modeled using source terms. The method is straightforward to implement using standard software packages; we use adaptive finite elements here. We present numerical examples demonstrating the effectiveness of the algorithm. We simulate multiphase flow in a driven cavity on an extended domain and find very good agreement with results obtained by solving the equations and boundary conditions in the original domain. We then consider successively more complex geometries and simulate a droplet sliding down a rippled ramp in 2D and 3D, a droplet flowing through a Y-junction in a microfluidic network and finally chaotic mixing in a droplet flowing through a winding, serpentine channel. The latter example actually incorporates two different diffuse domains: one describes the evolving droplet where mixing occurs while the other describes the channel. PMID:21918638
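
    A minimal sketch of the implicit domain description, assuming a tanh profile of the signed distance (one common regularization; the paper's exact smoothing may differ), is given below for a disk-shaped domain:

    ```python
    import numpy as np

    def diffuse_domain(X, Y, radius=0.3, eps=0.02):
        """Smooth approximation of the characteristic function of a disk."""
        d = np.sqrt(X**2 + Y**2) - radius             # signed distance to the circle
        return 0.5 * (1.0 - np.tanh(3.0 * d / eps))   # ~1 inside, ~0 outside

    x = np.linspace(-0.5, 0.5, 256)
    X, Y = np.meshgrid(x, x)
    phi = diffuse_domain(X, Y)
    # The PDEs are then solved on the extended regular grid, with boundary
    # conditions imposed through source terms weighted by phi and its gradient.
    ```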

  16. Minimal curvature trajectories: Riemannian geometry concepts for slow manifold computation in chemical kinetics

    NASA Astrophysics Data System (ADS)

    Lebiedz, Dirk; Reinhardt, Volkmar; Siehr, Jochen

    2010-09-01

    In dissipative ordinary differential equation systems different time scales cause anisotropic phase volume contraction along solution trajectories. Model reduction methods exploit this for simplifying chemical kinetics via a time scale separation into fast and slow modes. The aim is to approximate the system dynamics with a dimension-reduced model after eliminating the fast modes by enslaving them to the slow ones via computation of a slow attracting manifold. We present a novel method for computing approximations of such manifolds using trajectory-based optimization. We discuss Riemannian geometry concepts as a basis for suitable optimization criteria characterizing trajectories near slow attracting manifolds and thus provide insight into fundamental geometric properties of multiple time scale chemical kinetics. The optimization criteria correspond to a suitable mathematical formulation of "minimal relaxation" of chemical forces along reaction trajectories under given constraints. We present various geometrically motivated criteria and the results of their application to four test case reaction mechanisms serving as examples. We demonstrate that accurate numerical approximations of slow invariant manifolds can be obtained.

  17. Computational Approach for Developing Blood Pump

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan

    2002-01-01

    This viewgraph presentation provides an overview of the computational approach to developing a ventricular assist device (VAD) which utilizes NASA aerospace technology. The VAD is used as a temporary support for sick ventricles of those who suffer from late-stage congestive heart failure (CHF). The need for donor hearts is much greater than their availability, and the VAD is seen as a bridge-to-transplant. The computational issues confronting the design of a more advanced, reliable VAD include the modelling of viscous incompressible flow. A computational approach provides the possibility of quantifying the flow characteristics, which is especially valuable for analyzing a compact design with highly sensitive operating conditions. Computational fluid dynamics (CFD) and rocket engine technology have been applied to modify the design of a VAD that enabled human transplantation. The computing requirement for this project is still large, however, and the unsteady analysis of the entire system from natural heart to aorta involves several hundred revolutions of the impeller. Further study is needed to assess the impact of mechanical VADs on the human body.

  18. Cognitive Load for Configuration Comprehension in Computer-Supported Geometry Problem Solving: An Eye Movement Perspective

    ERIC Educational Resources Information Center

    Lin, John Jr-Hung; Lin, Sunny S. J.

    2014-01-01

    The present study investigated (a) whether the perceived cognitive load was different when geometry problems with various levels of configuration comprehension were solved and (b) whether eye movements in comprehending geometry problems showed sources of cognitive loads. In the first investigation, three characteristics of geometry configurations…

  19. Geometry Design Optimization of Functionally Graded Scaffolds for Bone Tissue Engineering: A Mechanobiological Approach

    PubMed Central

    Boccaccio, Antonio; Uva, Antonio Emmanuele; Fiorentino, Michele; Mori, Giorgio; Monno, Giuseppe

    2016-01-01

    Functionally Graded Scaffolds (FGSs) are porous biomaterials where porosity changes in space with a specific gradient. In spite of their wide use in bone tissue engineering, possible models that relate the scaffold gradient to the mechanical and biological requirements for the regeneration of the bony tissue are currently missing. In this study we attempt to bridge the gap by developing a mechanobiology-based optimization algorithm aimed at determining the optimal graded porosity distribution in FGSs. The algorithm combines the parametric finite element model of a FGS, a computational mechano-regulation model and a numerical optimization routine. For assigned boundary and loading conditions, the algorithm iteratively builds different scaffold geometry configurations with different porosity distributions until the best microstructure geometry is reached, i.e. the geometry that allows the amount of bone formation to be maximized. We tested different porosity distribution laws, loading conditions and scaffold Young's modulus values. For each combination of these variables, the explicit equation of the porosity distribution law, i.e. the law that describes the pore dimensions as a function of the spatial coordinates, was determined that allows the highest amounts of bone to be generated. The results show that the loading conditions affect significantly the optimal porosity distribution. For a pure compression loading, it was found that the pore dimensions are almost constant throughout the entire scaffold and that using a FGS allows the formation of amounts of bone slightly larger than those obtainable with a homogeneous porosity scaffold. For a pure shear loading, instead, FGSs allow bone formation to be increased significantly compared to homogeneous porosity scaffolds. Although experimental data is still necessary to properly relate the mechanical/biological environment to the scaffold microstructure, this model represents an important step towards optimizing geometry

  20. A fully-coupled upwind discontinuous Galerkin method for incompressible porous media flows: High-order computations of viscous fingering instabilities in complex geometry

    NASA Astrophysics Data System (ADS)

    Scovazzi, G.; Huang, H.; Collis, S. S.; Yin, J.

    2013-11-01

    We present a new approach to the simulation of viscous fingering instabilities in incompressible, miscible displacement flows in porous media. In the past, high resolution computational simulations of viscous fingering instabilities have always been performed using high-order finite difference or Fourier-spectral methods, which do not possess the flexibility to compute very complex subsurface geometries. Our approach, instead, by means of a fully-coupled nonlinear implementation of the discontinuous Galerkin method, possesses a fundamental differentiating feature, in that it maintains high-order accuracy on fully unstructured meshes. In addition, the proposed method shows very low sensitivity to mesh orientation, in contrast with classical finite volume approximations used in porous media flow simulations. The robustness and accuracy of the method are demonstrated in a number of challenging computational problems.

  1. Computational Approaches to Nucleic Acid Origami.

    PubMed

    Jabbari, Hosna; Aminpour, Maral; Montemagno, Carlo

    2015-10-12

    Recent advances in experimental DNA origami have dramatically expanded the horizon of DNA nanotechnology. Complex 3D suprastructures have been designed and developed using DNA origami with applications in biomaterial science, nanomedicine, nanorobotics, and molecular computation. Ribonucleic acid (RNA) origami has recently been realized as a new approach. Similar to DNA, RNA molecules can be designed to form complex 3D structures through complementary base pairings. RNA origami structures are, however, more compact and more thermodynamically stable due to RNA's non-canonical base pairing and tertiary interactions. With all these advantages, the development of RNA origami still lags behind DNA origami by a large gap. Furthermore, although computational methods have proven to be effective in designing DNA and RNA origami structures and in their evaluation, advances in computational nucleic acid origami are even more limited. In this paper, we review major milestones in experimental and computational DNA and RNA origami and present current challenges in these fields. We believe collaboration between experimental nanotechnologists and computer scientists is critical for advancing these new research paradigms.

  2. Computational approach to compact Riemann surfaces

    NASA Astrophysics Data System (ADS)

    Frauendiener, Jörg; Klein, Christian

    2017-01-01

    A purely numerical approach to compact Riemann surfaces starting from plane algebraic curves is presented. The critical points of the algebraic curve are computed via a two-dimensional Newton iteration. The starting values for this iteration are obtained from the resultants with respect to both coordinates of the algebraic curve and a suitable pairing of their zeros. A set of generators of the fundamental group for the complement of these critical points in the complex plane is constructed from circles around these points and connecting lines obtained from a minimal spanning tree. The monodromies are computed by solving the defining equation of the algebraic curve on collocation points along these contours and by analytically continuing the roots. The collocation points are chosen to correspond to Chebychev collocation points for an ensuing Clenshaw-Curtis integration of the holomorphic differentials which gives the periods of the Riemann surface with spectral accuracy. At the singularities of the algebraic curve, Puiseux expansions computed by contour integration on the circles around the singularities are used to identify the holomorphic differentials. The Abel map is also computed with the Clenshaw-Curtis algorithm and contour integrals. As an application of the code, solutions to the Kadomtsev-Petviashvili equation are computed on non-hyperelliptic Riemann surfaces.
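
    A small sketch of Clenshaw-Curtis quadrature on [-1, 1], the rule that delivers the spectral accuracy mentioned above (standard cosine-series weights; even n is assumed here):

    ```python
    import numpy as np

    def clenshaw_curtis(n):
        """Nodes and weights for integration over [-1, 1]; n even."""
        theta = np.pi * np.arange(n + 1) / n
        x = np.cos(theta)                      # Chebyshev collocation points
        w = np.ones(n + 1)
        for j in range(1, n // 2 + 1):
            b = 1.0 if 2 * j == n else 2.0     # half weight for the top mode
            w -= b * np.cos(2 * j * theta) / (4 * j * j - 1)
        w *= 2.0 / n
        w[0] /= 2.0                            # endpoint corrections
        w[-1] /= 2.0
        return x, w

    x, w = clenshaw_curtis(16)
    print(w @ np.exp(x))   # ~2.35040, i.e. e - 1/e, already near machine accuracy
    ```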

  3. The Theory of Transactional Distance as a Framework for the Analysis of Computer-Aided Teaching of Geometry

    ERIC Educational Resources Information Center

    Papadopoulos, Ioannis; Dagdilelis, Vassilios

    2006-01-01

    In this paper, difficulties of students in the case of computer-mediated teaching of geometry in a traditional classroom are considered within the framework of "transactional distance", a concept well known in distance education. The main interest of this paper is to record and describe in detail the different forms of…

  4. Peer Interactions in a Computer Lab: Reflections on Results of a Case Study Involving Web-Based Dynamic Geometry Sketches

    ERIC Educational Resources Information Center

    Sinclair, Margaret P.

    2005-01-01

    A case study, originally set up to identify and describe some benefits and limitations of using dynamic web-based geometry sketches, provided an opportunity to examine peer interactions in a lab. Since classes were held in a computer lab, teachers and pairs faced the challenges of working and communicating in a lab environment. Research has shown…

  5. Geometry, analysis, and computation in mathematics and applied sciences. Final report

    SciTech Connect

    Kusner, R.B.; Hoffman, D.A.; Norman, P.; Pedit, F.; Whitaker, N.; Oliver, D.

    1995-12-31

    Since 1993, the GANG laboratory has been co-directed by David Hoffman, Rob Kusner and Peter Norman. A great deal of mathematical research has been carried out here by them and by GANG faculty members Franz Pedit and Nate Whitaker. Also new communication tools, such as the GANG Webserver, have been developed. GANG has trained and supported nearly a dozen graduate students, and at least half as many undergrads in REU projects. The GANG Seminar continues to thrive, making Amherst a site for short and long term visitors to come to work with the GANG. Some of the highlights of recent or ongoing research at GANG include: CMC surfaces, minimal surfaces, fluid dynamics, harmonic maps, isometric immersions, knot energies, foam structures, high dimensional soap film singularities, elastic curves and surfaces, self-similar curvature evolution, integrable systems and theta functions, fully nonlinear geometric PDE, geometric chemistry and biology. This report is divided into the following sections: (1) geometric variational problems; (2) soliton geometry; (3) embedded minimal surfaces; (4) numerical fluid dynamics and mathematical modeling; (5) GANG graphics and mathematical software; (6) description of the computational and visual analysis facility; and (7) research by undergraduates and GANG graduate seminar.

  6. Scalable, massively parallel approaches to upstream drainage area computation

    NASA Astrophysics Data System (ADS)

    Richardson, A.; Hill, C. N.; Perron, T.

    2011-12-01

    Accumulated drainage area maps of large regions are required for several applications. Among these are assessments of regional patterns of flow and sediment routing, high-resolution landscape evolution models in which drainage basin geometry evolves with time, and surveys of the characteristics of river basins that drain to continental margins. The computation of accumulated drainage areas is accomplished by inferring the vector field of drainage flow directions from a two-dimensional digital elevation map, and then computing the integrated upstream area that drains to each tile. Generally this last step is done with a recursive algorithm that accumulates upstream areas sequentially. The inherently serial nature of this restricts the number of tiles that can be included, thereby limiting the resolution of continental-size domains. This is because of the requirements of both memory, which rises proportionally to the number of tiles, N, and computing time, which is O(N²). The fundamental sequential property of this approach prohibits effective use of large-scale parallelism. An alternate method of calculating accumulated drainage area from drainage direction data can be arrived at by reformulating the problem as the solution of a system of simultaneous linear equations. The equations express the relation that the total upslope area of a particular tile is the sum of the upslope areas of the immediately adjacent tiles that drain to it, plus the tile's own area. Solving these equations amounts to solving a sparse, nine-diagonal matrix system whose right-hand side is simply the vector of individual tile areas and whose diagonals are determined by the landscape geometry. We show how an iterative method, Bi-CGSTAB, can be used to solve this problem in a scalable, massively parallel manner. However, this introduces
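
    A minimal sketch of this linear-system formulation using SciPy's Bi-CGSTAB solver; the `receiver` encoding of the flow-direction field (the index of the tile each tile drains to, -1 at outlets) and the function name are assumptions for illustration:

    ```python
    import numpy as np
    from scipy.sparse import csr_matrix, identity
    from scipy.sparse.linalg import bicgstab

    def upstream_area(receiver, own_area):
        """Solve (I - D) a = own_area, with D[i, j] = 1 when tile j drains to i."""
        receiver = np.asarray(receiver)
        n = len(receiver)
        src = np.flatnonzero(receiver >= 0)            # tiles that drain somewhere
        D = csr_matrix((np.ones(len(src)), (receiver[src], src)), shape=(n, n))
        A = identity(n, format="csr") - D
        a, info = bicgstab(A, np.asarray(own_area, float))
        if info != 0:
            raise RuntimeError("Bi-CGSTAB did not converge")
        return a
    ```

    On a regular grid with single-direction (D8) routing, each tile drains to one of its eight neighbors, which is what gives the matrix its sparse, nine-diagonal structure.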

  7. An integrated experimental and computational approach for ...

    EPA Pesticide Factsheets

    Enantiomers of chiral molecules commonly exhibit differing pharmacokinetics and toxicities, which can introduce significant uncertainty when evaluating biological and environmental fates and potential risks to humans and the environment. However, racemization (the irreversible transformation of one enantiomer into the racemic mixture) and enantiomerization (the reversible conversion of one enantiomer into the other) are poorly understood. To better understand these processes, we investigated the chiral fungicide triadimefon, which undergoes racemization in soils, water, and organic solvents. Nuclear magnetic resonance (NMR) and gas chromatography/mass spectrometry (GC/MS) techniques were used to measure the rates of enantiomerization and racemization, deuterium isotope effects, and activation energies for triadimefon in H2O and D2O. From these results we were able to determine that: 1) the alpha-carbonyl carbon of triadimefon is the reaction site; 2) cleavage of the C-H (C-D) bond is the rate-determining step; 3) the reaction is base-catalyzed; and 4) the reaction likely involves a symmetrical intermediate. The B3LYP/6-311+G** level of theory was used to compute optimized geometries, harmonic vibrational frequencies, natural population analyses, and intrinsic reaction coordinates for triadimefon in water, and three racemization pathways were hypothesized. This work provides an initial step in developing predictive, structure-based models that are needed to

  8. Computational Flow Modeling of a Simplified Integrated Tractor-Trailer Geometry

    SciTech Connect

    Salari, K; McWherter-Payne, M

    2003-09-15

    For several years, Sandia National Laboratories and Lawrence Livermore National Laboratory have been part of a consortium funded by the Department of Energy to improve fuel efficiency of heavy vehicles such as Class 8 trucks through aerodynamic drag reduction. The objective of this work is to demonstrate the feasibility of using the steady Reynolds-Averaged Navier-Stokes (RANS) approach to predict the flow field around heavy vehicles, with special emphasis on the base region of the trailer, and to compute the aerodynamic forces. In particular, Sandia's computational fluid dynamics code, SACCARA, was used to simulate the flow on a simplified model of a tractor-trailer vehicle. The results are presented and compared with NASA Ames experimental data to assess the predictive capability of RANS to model the flow field and predict the aerodynamic forces.

  9. Computational flow modeling of a simplified integrated tractor-trailer geometry.

    SciTech Connect

    McWherter-Payne, Mary Anna; Salari, Kambiz

    2003-09-01

    For several years, Sandia National Laboratories and Lawrence Livermore National Laboratory have been part of a consortium funded by the Department of Energy to improve fuel efficiency of heavy vehicles such as Class 8 trucks through aerodynamic drag reduction. The objective of this work is to demonstrate the feasibility of using the steady Reynolds-Averaged Navier-Stokes (RANS) approach to predict the flow field around heavy vehicles, with special emphasis on the base region of the trailer, and to compute the aerodynamic forces. In particular, Sandia's computational fluid dynamics code, SACCARA, was used to simulate the flow on a simplified model of a tractor-trailer vehicle. The results are presented and compared with NASA Ames experimental data to assess the predictive capability of RANS to model the flow field and predict the aerodynamic forces.

  10. Computer Forensics Education - the Open Source Approach

    NASA Astrophysics Data System (ADS)

    Huebner, Ewa; Bem, Derek; Cheung, Hon

    In this chapter we discuss the application of open source software tools in computer forensics education at tertiary level. We argue that open source tools are more suitable than commercial tools, as they provide the opportunity for students to gain in-depth understanding and appreciation of the computer forensic process, as opposed to familiarity with one software product, however complex and multi-functional. With access to all source programs, the students become more than just consumers of the tools as future forensic investigators. They can also examine the code, understand the relationship between the binary images and relevant data structures, and in the process gain the necessary background to become the future creators of new and improved forensic software tools. As a case study we present an advanced subject, Computer Forensics Workshop, which we designed for the Bachelor's degree in computer science at the University of Western Sydney. We based all laboratory work and the main take-home project in this subject on open source software tools. We found that without exception more than one suitable tool can be found to cover each topic in the curriculum adequately. We argue that this approach prepares students better for forensic field work, as they gain confidence to use a variety of tools, not just a single product they are familiar with.

  11. Computational Approaches for Predicting Biomedical Research Collaborations

    PubMed Central

    Zhang, Qing; Yu, Hong

    2014-01-01

    Biomedical research is increasingly collaborative, and successful collaborations often produce high impact work. Computational approaches can be developed for automatically predicting biomedical research collaborations. Previous works of collaboration prediction mainly explored the topological structures of research collaboration networks, leaving out rich semantic information from the publications themselves. In this paper, we propose supervised machine learning approaches to predict research collaborations in the biomedical field. We explored both the semantic features extracted from author research interest profile and the author network topological features. We found that the most informative semantic features for author collaborations are related to research interest, including similarity of out-citing citations, similarity of abstracts. Of the four supervised machine learning models (naïve Bayes, naïve Bayes multinomial, SVMs, and logistic regression), the best performing model is logistic regression with an ROC ranging from 0.766 to 0.980 on different datasets. To our knowledge we are the first to study in depth how research interest and productivities can be used for collaboration prediction. Our approach is computationally efficient, scalable and yet simple to implement. The datasets of this study are available at https://github.com/qingzhanggithub/medline-collaboration-datasets. PMID:25375164
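
    A toy sketch of the supervised setup with scikit-learn; the feature names and synthetic data are illustrative stand-ins, since the paper's actual features come from author research-interest profiles and co-authorship network topology:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    # Columns: [abstract_similarity, out_citing_citation_similarity,
    #           common_coauthors, network_distance] -- hypothetical features.
    X = rng.random((1000, 4))
    y = (X[:, 0] + X[:, 1] + 0.3 * rng.standard_normal(1000) > 1.0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = LogisticRegression().fit(X_tr, y_tr)
    print("ROC AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
    ```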

  12. Computational approaches to fMRI analysis.

    PubMed

    Cohen, Jonathan D; Daw, Nathaniel; Engelhardt, Barbara; Hasson, Uri; Li, Kai; Niv, Yael; Norman, Kenneth A; Pillow, Jonathan; Ramadge, Peter J; Turk-Browne, Nicholas B; Willke, Theodore L

    2017-02-23

    Analysis methods in cognitive neuroscience have not always matched the richness of fMRI data. Early methods focused on estimating neural activity within individual voxels or regions, averaged over trials or blocks and modeled separately in each participant. This approach mostly neglected the distributed nature of neural representations over voxels, the continuous dynamics of neural activity during tasks, the statistical benefits of performing joint inference over multiple participants and the value of using predictive models to constrain analysis. Several recent exploratory and theory-driven methods have begun to pursue these opportunities. These methods highlight the importance of computational techniques in fMRI analysis, especially machine learning, algorithmic optimization and parallel computing. Adoption of these techniques is enabling a new generation of experiments and analyses that could transform our understanding of some of the most complex, and distinctly human, signals in the brain: acts of cognition such as thoughts, intentions and memories.

  13. Computational Approach to Hyperelliptic Riemann Surfaces

    NASA Astrophysics Data System (ADS)

    Frauendiener, Jörg; Klein, Christian

    2015-03-01

    We present a computational approach to general hyperelliptic Riemann surfaces in Weierstrass normal form. The surface is given by a list of the branch points, the coefficients of the defining polynomial or a system of cuts for the curve. A canonical basis of the homology is introduced algorithmically for this curve. The periods of the holomorphic differentials and the Abel map are computed with the Clenshaw-Curtis method to achieve spectral accuracy. The code can handle almost degenerate Riemann surfaces. This work generalizes previous work on real hyperelliptic surfaces with prescribed cuts to arbitrary hyperelliptic surfaces. As an example, solutions to the sine-Gordon equation in terms of multi-dimensional theta functions are studied, also in the solitonic limit of these solutions.

  14. A Parametric Geometry Computational Fluid Dynamics (CFD) Study Utilizing Design of Experiments (DOE)

    NASA Technical Reports Server (NTRS)

    Rhew, Ray D.; Parker, Peter A.

    2007-01-01

    Design of Experiments (DOE) was applied to the LAS geometric parameter study to efficiently identify and rank primary contributors to integrated drag over the vehicle's ascent trajectory using an order of magnitude fewer CFD configurations, thereby reducing computational resources and solution time. SMEs were able to gain a better understanding of the underlying flow physics of different geometric parameter configurations through the identification of interaction effects. An interaction effect, which describes how the effect of one factor changes with respect to the levels of other factors, is often the key to product optimization. A DOE approach emphasizes a sequential approach to learning through successive experimentation to continuously build on previous knowledge. These studies represent a starting point for expanded experimental activities that will eventually cover the entire design space of the vehicle and flight trajectory.
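
    A toy sketch of the main-effect and interaction contrasts a two-level factorial design provides; the factor names and the response function are stand-ins, not the actual LAS parameters or CFD results:

    ```python
    import itertools
    import numpy as np

    factors = ["nose_length", "shoulder_radius", "tower_diameter"]  # hypothetical
    design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))

    def response(x):
        # Stand-in for an integrated-drag CFD computation at one configuration.
        return 3.0 + 0.8 * x[0] - 0.3 * x[1] + 0.5 * x[0] * x[1] + 0.05 * x[2]

    y = np.array([response(row) for row in design])
    for i, name in enumerate(factors):
        effect = y[design[:, i] == 1].mean() - y[design[:, i] == -1].mean()
        print(f"main effect of {name}: {effect:+.2f}")
    prod = design[:, 0] * design[:, 1]   # two-factor interaction contrast
    print("interaction:", y[prod == 1].mean() - y[prod == -1].mean())
    ```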

  15. Computer Automated Structure Evaluation (CASE) of the teratogenicity of retinoids with the aid of a novel geometry index

    NASA Astrophysics Data System (ADS)

    Klopman, Gilles; Dimayuga, Mario L.

    1990-06-01

    The CASE (Computer Automated Structure Evaluation) program, with the aid of a geometry index for discriminating cis and trans isomers, has been used to study a set of retinoids tested for teratogenicity in hamsters. CASE identified 8 fragments, the most important representing the non-polar terminus of a retinoid with an additional ring system which introduces some rigidity in the isoprenoid side chain. The geometry index helped to identify relevant fragments with an all-trans configuration and to distinguish them from irrelevant fragments with other configurations.

  16. Sculpting the band gap: a computational approach

    PubMed Central

    Prasai, Kiran; Biswas, Parthapratim; Drabold, D. A.

    2015-01-01

    Materials with optimized band gap are needed in many specialized applications. In this work, we demonstrate that Hellmann-Feynman forces associated with the gap states can be used to find atomic coordinates that yield desired electronic density of states. Using tight-binding models, we show that this approach may be used to arrive at electronically designed models of amorphous silicon and carbon. We provide a simple recipe to include a priori electronic information in the formation of computer models of materials, and prove that this information may have profound structural consequences. The models are validated with plane-wave density functional calculations. PMID:26490203

  17. Sculpting the band gap: a computational approach

    NASA Astrophysics Data System (ADS)

    Prasai, Kiran; Biswas, Parthapratim; Drabold, D. A.

    2015-10-01

    Materials with optimized band gap are needed in many specialized applications. In this work, we demonstrate that Hellmann-Feynman forces associated with the gap states can be used to find atomic coordinates that yield desired electronic density of states. Using tight-binding models, we show that this approach may be used to arrive at electronically designed models of amorphous silicon and carbon. We provide a simple recipe to include a priori electronic information in the formation of computer models of materials, and prove that this information may have profound structural consequences. The models are validated with plane-wave density functional calculations.

  18. Effect of ocular shape and vascular geometry on retinal hemodynamics: a computational model.

    PubMed

    Dziubek, Andrea; Guidoboni, Giovanna; Harris, Alon; Hirani, Anil N; Rusjan, Edmond; Thistleton, William

    2016-08-01

    A computational model for retinal hemodynamics accounting for ocular curvature is presented. The model combines (i) a hierarchical Darcy model for the flow through small arterioles, capillaries and small venules in the retinal tissue, where blood vessels of different size are comprised in different hierarchical levels of a porous medium; and (ii) a one-dimensional network model for the blood flow through retinal arterioles and venules of larger size. The non-planar ocular shape is included by (i) defining the hierarchical Darcy flow model on a two-dimensional curved surface embedded in the three-dimensional space; and (ii) mapping the simplified one-dimensional network model onto the curved surface. The model is solved numerically using a finite element method in which spatial domain and hierarchical levels are discretized separately. For the finite element method, we use an exterior calculus-based implementation which permits an easier treatment of non-planar domains. Numerical solutions are verified against suitably constructed analytical solutions. Numerical experiments are performed to investigate how retinal hemodynamics is influenced by the ocular shape (sphere, oblate spheroid, prolate spheroid and barrel are compared) and vascular architecture (four vascular arcs and a branching vascular tree are compared). The model predictions show that changes in ocular shape induce non-uniform alterations of blood pressure and velocity in the retina. In particular, we found that (i) the temporal region is affected the least by changes in ocular shape, and (ii) the barrel shape departs the most from the hemispherical reference geometry in terms of associated pressure and velocity distributions in the retinal microvasculature. These results support the clinical hypothesis that alterations in ocular shape, such as those occurring in myopic eyes, might be associated with pathological alterations in retinal hemodynamics.

  19. Computational approaches to motor learning by imitation.

    PubMed Central

    Schaal, Stefan; Ijspeert, Auke; Billard, Aude

    2003-01-01

    Movement imitation requires a complex set of mechanisms that map an observed movement of a teacher onto one's own movement apparatus. Relevant problems include movement recognition, pose estimation, pose tracking, body correspondence, coordinate transformation from external to egocentric space, matching of observed against previously learned movement, resolution of redundant degrees-of-freedom that are unconstrained by the observation, suitable movement representations for imitation, modularization of motor control, etc. All of these topics by themselves are active research problems in computational and neurobiological sciences, such that their combination into a complete imitation system remains a daunting undertaking; indeed, one could argue that we need to understand the complete perception-action loop. As a strategy to untangle the complexity of imitation, this paper will examine imitation purely from a computational point of view, i.e. we will review statistical and mathematical approaches that have been suggested for tackling parts of the imitation problem, and discuss their merits, disadvantages and underlying principles. Given the focus on action recognition of other contributions in this special issue, this paper will primarily emphasize the motor side of imitation, assuming that a perceptual system has already identified important features of a demonstrated movement and created their corresponding spatial information. Based on the formalization of motor control in terms of control policies and their associated performance criteria, useful taxonomies of imitation learning can be generated that clarify different approaches and future research directions. PMID:12689379

  20. Validation of Methods for Computational Catalyst Design: Geometries, Structures, and Energies of Neutral and Charged Silver Clusters

    SciTech Connect

    Duanmu, Kaining; Truhlar, Donald G.

    2015-04-30

    We report a systematic study of small silver clusters, Agn, Agn+, and Agn–, n = 1–7. We studied all possible isomers of clusters with n = 5–7. We tested 42 exchange–correlation functionals, and we assess these functionals for their accuracy in three respects: geometries (quantitative prediction of internuclear distances), structures (the nature of the lowest-energy structure, for example, whether it is planar or nonplanar), and energies. We find that the ingredients of exchange–correlation functionals are indicators of their success in predicting geometries and structures: local exchange–correlation functionals are generally better than hybrid functionals for geometries; functionals depending on kinetic energy density are the best for predicting the lowest-energy isomer correctly, especially for predicting two-dimensional to three-dimensional transitions correctly. The accuracy for energies is less sensitive to the ingredient list. Our findings could be useful for guiding the selection of methods for computational catalyst design.

  1. The Fractal Geometry of Nature; Its Mathematical Basis and Application to Computer Graphics

    DTIC Science & Technology

    1986-01-01

    mathematical constructs. It was first popularized by complex renderings of terrain on a computer graphics medium. Fractal geometry has since...geometry has not yet been realized. In the final analysis, we expect that even the skeptical reader will discover the mathematical beauty and...into [0,1] preserves the distribution from the original range ([0, 2^31 - 1]). Analysis of the normalized uniform random numbers...

  2. Computations of Viscous Flows in Complex Geometries Using Multiblock Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, Erlendur; Ameri, Ali A.

    1995-01-01

    Generating high quality, structured, continuous, body-fitted grid systems (multiblock grid systems) for complicated geometries has long been one of the most labor-intensive and frustrating parts of simulating flows in complicated geometries. Recently, new methodologies and software have emerged that greatly reduce the human effort required to generate high quality multiblock grid systems for complicated geometries. These methods and software require minimal input from the user: typically, only information about the topology of the block structure and the number of grid points. This paper demonstrates the use of the new breed of multiblock grid systems in simulations of internal flows in complicated geometries. The geometry used in this study is a duct with a sudden expansion, a partition, and an array of cylindrical pins. This geometry has many of the features typical of internal coolant passages in turbine blades. The grid system used in this study was generated using a commercially available grid generator. The simulations were done using a recently developed flow solver, TRAF3D.MB, that was specially designed to use multiblock grid systems.

  3. FACET: a radiation view factor computer code for axisymmetric, 2D planar, and 3D geometries with shadowing

    SciTech Connect

    Shapiro, A.B.

    1983-08-01

    The computer code FACET calculates the radiation geometric view factor (alternatively called shape factor, angle factor, or configuration factor) between surfaces for axisymmetric, two-dimensional planar and three-dimensional geometries with interposed third surface obstructions. FACET was developed to calculate view factors for input to finite-element heat-transfer analysis codes. The first section of this report is a brief review of previous radiation-view-factor computer codes. The second section presents the defining integral equation for the geometric view factor between two surfaces and the assumptions made in its derivation. Also in this section are the numerical algorithms used to integrate this equation for the various geometries. The third section presents the algorithms used to detect self-shadowing and third-surface shadowing between the two surfaces for which a view factor is being calculated. The fourth section provides a user's input guide followed by several example problems.
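
    The defining double-area integral mentioned above, F12 = (1/A1) ∫∫ cos θ1 cos θ2 / (π r²) dA2 dA1, can be estimated directly by Monte Carlo sampling; the sketch below does so for two directly opposed parallel unit squares. This illustrates the integral only, not FACET's quadrature or shadowing algorithms.

    ```python
    import numpy as np

    # Monte Carlo estimate of F12 for two parallel unit squares one unit apart:
    # sample point pairs on both plates and average the integral's kernel.
    rng = np.random.default_rng(0)
    n = 200_000
    p1 = np.column_stack([rng.random(n), rng.random(n), np.zeros(n)])  # plate 1, z=0
    p2 = np.column_stack([rng.random(n), rng.random(n), np.ones(n)])   # plate 2, z=1
    d = p2 - p1
    r2 = (d**2).sum(axis=1)
    cos1 = d[:, 2] / np.sqrt(r2)   # plate-1 normal is +z
    cos2 = cos1                    # plate-2 normal is -z; same cosine by symmetry
    A2 = 1.0
    F12 = A2 * np.mean(cos1 * cos2 / (np.pi * r2))   # (1/A1) * A1 * A2 * mean(kernel)
    print(f"F12 ~ {F12:.4f}  (tabulated value ~0.1998)")
    ```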

  4. Data-Driven Multimodal Sleep Apnea Events Detection: Synchrosqueezing Transform Processing and Riemannian Geometry Classification Approaches.

    PubMed

    Rutkowski, Tomasz M

    2016-07-01

    A novel multimodal and bio-inspired approach to biomedical signal processing and classification is presented in the paper. This approach allows for automatic semantic labeling (interpretation) of sleep apnea events based on the proposed data-driven biomedical signal processing and classification. The presented signal processing and classification methods have already been successfully applied to real-time unimodal brainwave (EEG only) decoding in brain-computer interfaces developed by the author. In the current project, very encouraging results are obtained using multimodal biomedical (brainwave and peripheral physiological) signals in a unified processing approach allowing for automatic semantic data description. The results thus support the validity of the data-driven, bio-inspired signal processing approach for semantic interpretation of medical data, demonstrated here by machine-learning classification of sleep apnea events.
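
    One common concrete form of the Riemannian-geometry classification named in the title represents each signal epoch by its channel covariance matrix and classifies by affine-invariant Riemannian distance to class-mean covariances. A minimal sketch of that idea (not the author's pipeline; channel count, class means and data are placeholders):

    ```python
    import numpy as np
    from scipy.linalg import sqrtm, logm

    def riemann_dist(A, B):
        """Affine-invariant distance d(A,B) = ||log(A^-1/2 B A^-1/2)||_F."""
        A_inv_sqrt = np.linalg.inv(sqrtm(A))
        return np.linalg.norm(logm(A_inv_sqrt @ B @ A_inv_sqrt), "fro")

    rng = np.random.default_rng(1)
    epoch = rng.standard_normal((4, 500))        # hypothetical 4-channel epoch
    cov_epoch = np.cov(epoch)
    cov_apnea = np.eye(4) * 2.0                  # placeholder class-mean covariances
    cov_normal = np.eye(4)
    label = ("apnea" if riemann_dist(cov_epoch, cov_apnea)
             < riemann_dist(cov_epoch, cov_normal) else "normal")
    print(label)                                 # minimum-distance-to-mean classifier
    ```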

  5. A computational approach to negative priming

    NASA Astrophysics Data System (ADS)

    Schrobsdorff, H.; Ihrke, M.; Kabisch, B.; Behrendt, J.; Hasselhorn, M.; Herrmann, J. Michael

    2007-09-01

    Priming is characterized by a sensitivity of reaction times to the sequence of stimuli in psychophysical experiments. The reduction of reaction time observed in positive priming is well known and experimentally understood (Scarborough et al., J. Exp. Psychol.: Hum. Percept. Perform., 3, pp. 1-17, 1977). Negative priming, the opposite effect, is experimentally less tangible (Fox, Psychonom. Bull. Rev., 2, pp. 145-173, 1995), as it depends sensitively on subtle parameter changes (such as the response-stimulus interval). This sensitivity of the negative priming effect bears great potential for applications in research fields such as memory, selective attention, and ageing effects. We develop and analyse a computational realization, CISAM, of a recent psychological model for action decision making, the ISAM (Kabisch, PhD thesis, Friedrich-Schiller-Universitat, 2003), which is sensitive to priming conditions. With the dynamical systems approach of the CISAM, we show that a single adaptive threshold mechanism is sufficient to explain both positive and negative priming effects. This is achieved by comparing results obtained by the computational modelling with experimental data from our laboratory. The implementation provides a rich base from which testable predictions can be derived, e.g. with respect to hitherto untested stimulus combinations (e.g. single-object trials).

  6. A geometric calibration method for inverse geometry computed tomography using P-matrices

    NASA Astrophysics Data System (ADS)

    Slagowski, Jordan M.; Dunkerley, David A. P.; Hatt, Charles R.; Speidel, Michael A.

    2016-03-01

    Accurate and artifact free reconstruction of tomographic images requires precise knowledge of the imaging system geometry. This work proposes a novel projection matrix (P-matrix) based calibration method to enable C-arm inverse geometry CT (IGCT). The method is evaluated for scanning-beam digital x-ray (SBDX), a C-arm mounted inverse geometry fluoroscopic technology. A helical configuration of fiducials is imaged at each gantry angle in a rotational acquisition. For each gantry angle, digital tomosynthesis is performed at multiple planes and a composite image analogous to a cone-beam projection is generated from the plane stack. The geometry of the C-arm, source array, and detector array is determined at each angle by constructing a parameterized 3D-to-2D projection matrix that minimizes the sum-of-squared deviations between measured and projected fiducial coordinates. Simulations were used to evaluate calibration performance with translations and rotations of the source and detector. In a geometry with 1 mm translation of the central ray relative to the axis-of-rotation and 1 degree yaw of the detector and source arrays, the maximum error in the recovered translational parameters was 0.4 mm and maximum error in the rotation parameter was 0.02 degrees. The relative root-mean-square error in a reconstruction of a numerical thorax phantom was 0.4% using the calibration method, versus 7.7% without calibration. Changes in source-detector-distance were the most challenging to estimate. Reconstruction of experimental SBDX data using the proposed method eliminated double contour artifacts present in a non-calibrated reconstruction. The proposed IGCT geometric calibration method reduces image artifacts when uncertainties exist in system geometry.
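
    The core fitting step, recovering a 3x4 projection matrix from fiducial correspondences by least squares, can be illustrated with the standard direct linear transform (DLT). The sketch below is a generic DLT on synthetic helical fiducials, not the SBDX calibration code, which additionally parameterizes the C-arm, source-array and detector-array geometry.

    ```python
    import numpy as np

    def estimate_P(X, x):
        """DLT: X (n,3) world fiducials, x (n,2) measured projections, n >= 6."""
        A = []
        for (Xw, Yw, Zw), (u, v) in zip(X, x):
            Xh = [Xw, Yw, Zw, 1.0]
            A.append([0.0] * 4 + [-c for c in Xh] + [v * c for c in Xh])
            A.append(Xh + [0.0] * 4 + [-u * c for c in Xh])
        _, _, Vt = np.linalg.svd(np.asarray(A))
        return Vt[-1].reshape(3, 4)   # least-squares P, defined up to scale

    # synthetic check: project helical fiducials with a known P, then recover it
    P_true = np.hstack([np.eye(3), [[0.1], [0.2], [2.0]]])
    t = np.linspace(0, 4 * np.pi, 12)
    X = np.column_stack([np.cos(t), np.sin(t), 0.1 * t])      # helix of fiducials
    proj = np.hstack([X, np.ones((12, 1))]) @ P_true.T
    x = proj[:, :2] / proj[:, 2:]
    P = estimate_P(X, x)
    print(np.allclose(P / P[2, 3], P_true / P_true[2, 3]))    # True: same up to scale
    ```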

  7. A geometric calibration method for inverse geometry computed tomography using P-matrices.

    PubMed

    Slagowski, Jordan M; Dunkerley, David A P; Hatt, Charles R; Speidel, Michael A

    2016-02-27

    Accurate and artifact free reconstruction of tomographic images requires precise knowledge of the imaging system geometry. This work proposes a novel projection matrix (P-matrix) based calibration method to enable C-arm inverse geometry CT (IGCT). The method is evaluated for scanning-beam digital x-ray (SBDX), a C-arm mounted inverse geometry fluoroscopic technology. A helical configuration of fiducials is imaged at each gantry angle in a rotational acquisition. For each gantry angle, digital tomosynthesis is performed at multiple planes and a composite image analogous to a cone-beam projection is generated from the plane stack. The geometry of the C-arm, source array, and detector array is determined at each angle by constructing a parameterized 3D-to-2D projection matrix that minimizes the sum-of-squared deviations between measured and projected fiducial coordinates. Simulations were used to evaluate calibration performance with translations and rotations of the source and detector. In a geometry with 1 mm translation of the central ray relative to the axis-of-rotation and 1 degree yaw of the detector and source arrays, the maximum error in the recovered translational parameters was 0.4 mm and maximum error in the rotation parameter was 0.02 degrees. The relative root-mean-square error in a reconstruction of a numerical thorax phantom was 0.4% using the calibration method, versus 7.7% without calibration. Changes in source-detector-distance were the most challenging to estimate. Reconstruction of experimental SBDX data using the proposed method eliminated double contour artifacts present in a non-calibrated reconstruction. The proposed IGCT geometric calibration method reduces image artifacts when uncertainties exist in system geometry.

  8. A geometric calibration method for inverse geometry computed tomography using P-matrices

    PubMed Central

    Slagowski, Jordan M.; Dunkerley, David A. P.; Hatt, Charles R.; Speidel, Michael A.

    2016-01-01

    Accurate and artifact free reconstruction of tomographic images requires precise knowledge of the imaging system geometry. This work proposes a novel projection matrix (P-matrix) based calibration method to enable C-arm inverse geometry CT (IGCT). The method is evaluated for scanning-beam digital x-ray (SBDX), a C-arm mounted inverse geometry fluoroscopic technology. A helical configuration of fiducials is imaged at each gantry angle in a rotational acquisition. For each gantry angle, digital tomosynthesis is performed at multiple planes and a composite image analogous to a cone-beam projection is generated from the plane stack. The geometry of the C-arm, source array, and detector array is determined at each angle by constructing a parameterized 3D-to-2D projection matrix that minimizes the sum-of-squared deviations between measured and projected fiducial coordinates. Simulations were used to evaluate calibration performance with translations and rotations of the source and detector. In a geometry with 1 mm translation of the central ray relative to the axis-of-rotation and 1 degree yaw of the detector and source arrays, the maximum error in the recovered translational parameters was 0.4 mm and maximum error in the rotation parameter was 0.02 degrees. The relative root-mean-square error in a reconstruction of a numerical thorax phantom was 0.4% using the calibration method, versus 7.7% without calibration. Changes in source-detector-distance were the most challenging to estimate. Reconstruction of experimental SBDX data using the proposed method eliminated double contour artifacts present in a non-calibrated reconstruction. The proposed IGCT geometric calibration method reduces image artifacts when uncertainties exist in system geometry. PMID:27375313

  9. Using Dynamic Geometry and Computer Algebra Systems in Problem Based Courses for Future Engineers

    ERIC Educational Resources Information Center

    Tomiczková, Svetlana; Lávicka, Miroslav

    2015-01-01

    It is a modern trend today when formulating the curriculum of a geometric course at the technical universities to start from a real-life problem originated in technical praxis and subsequently to define which geometric theories and which skills are necessary for its solving. Nowadays, interactive and dynamic geometry software plays a more and more…

  10. Computer Metaphors: Approaches to Computer Literacy for Educators.

    ERIC Educational Resources Information Center

    Peelle, Howard A.

    Because metaphors offer ready perspectives for comprehending something new, this document examines various metaphors educators might use to help students develop computer literacy. Metaphors described are the computer as person (a complex system worthy of respect), tool (perhaps the most powerful and versatile known to humankind), brain (both…

  11. Rapid Geometry Creation for Computer-Aided Engineering Parametric Analyses: A Case Study Using ComGeom2 for Launch Abort System Design

    NASA Technical Reports Server (NTRS)

    Hawke, Veronica; Gage, Peter; Manning, Ted

    2007-01-01

    ComGeom2, a tool developed to generate Common Geometry representation for multidisciplinary analysis, has been used to create a large set of geometries for use in a design study requiring analysis by two computational codes. This paper describes the process used to generate the large number of configurations and suggests ways to further automate the process and make it more efficient for future studies. The design geometry for this study is the launch abort system of the NASA Crew Launch Vehicle.

  12. Effect of phenolic radicals on the geometry and electronic structure of DNA base pairs: computational study

    NASA Astrophysics Data System (ADS)

    Zarei, Mohammad; Seif, Abdolvahab; Azizi, Khaled; Zarei, Mohanna; Bahrami, Jamil

    2016-04-01

    In this paper, we show the reaction of hydroxyl, phenyl and phenoxy radicals with DNA base pairs by density functional theory (DFT) calculations. The influence of solvation on the mechanism is also presented by the same DFT calculations under the continuum solvation model. The results showed that hydroxyl, phenyl and phenoxy radicals increase the length of the nearest hydrogen bond of the adjacent DNA base pair, which is accompanied by a decrease in the length of the furthest hydrogen bond of the DNA base pair. The hydroxyl, phenyl and phenoxy radicals also influenced the dihedral angle between DNA base pairs. According to the results, hydrogen bond lengths between AT and GC base pairs in water solvent are longer than in vacuum. All of the presented radicals influenced the structure and geometry of AT and GC base pairs, but the phenoxy radical showed more influence on the geometry and electronic properties of DNA base pairs compared with the phenyl and hydroxyl radicals.

  13. CasimirSim - A Tool to Compute Casimir Polder Forces for Nontrivial 3D Geometries

    SciTech Connect

    Sedmik, Rene; Tajmar, Martin

    2007-01-30

    The so-called Casimir effect is one of the most interesting macro-quantum effects. Negligible on the macro-scale, it becomes a governing factor below structure sizes of 1 {mu}m, where it accounts for typically 100 kN m-2. The force does not depend on gravity or electric charge, but solely on the material properties and geometrical shape. This makes the effect a strong candidate for micro(nano)-mechanical devices M(N)EMS. Despite a long history of research, the theory lacks a uniform description valid for arbitrary geometries, which retards technical application. We present an advanced state-of-the-art numerical tool overcoming all the usual geometrical restrictions, capable of calculating arbitrary 3D geometries by utilizing the Casimir Polder approximation for the Casimir force.
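
    As a sanity check on the magnitude quoted above, the ideal parallel-plate limit P(a) = π² ħ c / (240 a⁴) can be evaluated directly; this is a textbook closed form, not the arbitrary-geometry algorithm of the paper.

    ```python
    import numpy as np

    # Ideal parallel-plate Casimir pressure P(a) = pi^2 * hbar * c / (240 a^4).
    hbar, c = 1.054571817e-34, 2.99792458e8          # SI values
    pressure = lambda a: np.pi**2 * hbar * c / (240.0 * a**4)
    for a in (10e-9, 100e-9, 1e-6):                  # plate separations [m]
        print(f"gap {a*1e9:7.1f} nm : {pressure(a):.3e} Pa")
    # ~1.3e5 Pa (about 100 kN m^-2) at a 10 nm gap, falling off as 1/a^4
    ```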

  14. Computer-aided evaluation of the railway track geometry on the basis of satellite measurements

    NASA Astrophysics Data System (ADS)

    Specht, Cezary; Koc, Władysław; Chrostowski, Piotr

    2016-05-01

    In recent years, there has been worldwide intensive development of GNSS (Global Navigation Satellite Systems) measurement techniques and their extension for applications in the fields of surveying and navigation. Moreover, in many countries a rising trend in the development of rail transportation systems has been noticed. In this paper, a method of railway track geometry assessment based on mobile satellite measurements is presented, and the results of implementing satellite surveying of railway geometry are shown. The investigation process described in the paper is divided into two phases. The first phase is the GNSS mobile surveying and the analysis of the obtained data. The second phase is the analysis of the track geometry using the planar coordinates from the survey. The visualization of the measured route, separation and quality assessment of the uniform geometric elements (straight sections, arcs), and identification of the track polygon (main directions and intersection angles) are discussed and illustrated by a calculation example within the article.

  15. Computational Approach for Epitaxial Polymorph Stabilization through Substrate Selection

    SciTech Connect

    Ding, Hong; Dwaraknath, Shyam S.; Garten, Lauren; Ndione, Paul; Ginley, David; Persson, Kristin A.

    2016-05-25

    With the ultimate goal of finding new polymorphs through targeted synthesis conditions and techniques, we outline a computational framework to select optimal substrates for epitaxial growth using first-principles calculations of formation energies, elastic strain energy, and topological information. To demonstrate the approach, we study the stabilization of metastable VO2 compounds, which provide a rich chemical and structural polymorph space. We find that common polymorph statistics, lattice matching, and energy-above-hull considerations recommend homostructural growth on TiO2 substrates, where the VO2 brookite phase would be preferentially grown on the a-c TiO2 brookite plane while the columbite and anatase structures favor the a-b plane on the respective TiO2 phases. Overall, we find that a model which incorporates a geometric unit cell area matching between the substrate and the target film as well as the resulting strain energy density of the film provides qualitative agreement with experimental observations for the heterostructural growth of known VO2 polymorphs: rutile, A and B phases. The minimal interfacial geometry matching and estimated strain energy criteria provide several suggestions for substrates and substrate-film orientations for the heterostructural growth of the hitherto hypothetical anatase, brookite, and columbite polymorphs. These criteria serve as preliminary guidance for experimental efforts at stabilizing new materials and/or polymorphs through epitaxy. The current screening algorithm is being integrated within the Materials Project online framework, and the algorithm and data will hence be publicly available.

  16. Examining the Impact of an Integrative Method of Using Technology on Students' Achievement and Efficiency of Computer Usage and on Pedagogical Procedure in Geometry

    ERIC Educational Resources Information Center

    Gurevich, Irina; Gurev, Dvora

    2012-01-01

    In the current study we follow the development of the pedagogical procedure for the course "Constructions in Geometry" that resulted from using dynamic geometry software (DGS), where the computer became an integral part of the educational process. Furthermore, we examine the influence of integrating DGS into the course on students' achievement and…

  17. Along-strike complex geometry of subduction zones - an experimental approach

    NASA Astrophysics Data System (ADS)

    Midtkandal, I.; Gabrielsen, R. H.; Brun, J.-P.; Huismans, R.

    2012-04-01

    Recent knowledge of the great geometric and dynamic complexity in subduction zones, combined with new capacity for analogue mechanical and numerical modeling, has sparked a number of studies on subduction processes. Not unexpectedly, such models reveal a complex relation between the physical conditions during subduction initiation, the strength profile of the subducting plate, the thermo-dynamic conditions, and the subduction zone geometries. One rare geometrical complexity of subduction that remains particularly controversial is the potential for polarity shift in subduction systems. The present experiments were therefore performed to explore the influence of the architecture, strength and strain velocity on complexities in subduction zones, focusing on along-strike variation of the collision zone. Of particular concern were the consequences for the geometry and kinematics of the transition zones between segments of contrasting subduction direction. Although the model design was to some extent inspired by the configuration along the Iberian - Eurasian suture zone, the results are also of significance for other orogens with complex along-strike geometries. The experiments were set up to explore the initial state of subduction only, and were accordingly terminated before slab subduction occurred. The model was built from layers of silicone putty and sand, tailored to simulate the assumed lithospheric geometries and strength-viscosity profiles along the plate boundary zone prior to contraction, and comprises two 'continental' plates separated by a thinner 'oceanic' plate that represents the narrow seaway. The experiment floats on a substrate of sodium polytungstate, representing the mantle. 24 experimental runs were performed, varying the thickness (and thus strength) of the upper mantle lithosphere, as well as the strain rate. Keeping all other parameters identical for each experiment, the models were shortened by a computer-controlled jackscrew while time-lapse images were

  18. 3D geometry analysis of the medial meniscus--a statistical shape modeling approach.

    PubMed

    Vrancken, A C T; Crijns, S P M; Ploegmakers, M J M; O'Kane, C; van Tienen, T G; Janssen, D; Buma, P; Verdonschot, N

    2014-10-01

    The geometry-dependent functioning of the meniscus indicates that detailed knowledge of 3D meniscus geometry and its inter-subject variation is essential to design well-functioning anatomically shaped meniscus replacements. Therefore, the aim of this study was to quantify 3D meniscus geometry and to determine whether variation in medial meniscus geometry is size- or shape-driven. We also performed a cluster analysis to identify distinct morphological groups of medial menisci and assessed whether meniscal geometry is gender-dependent. A statistical shape model was created, containing the meniscus geometries of 35 subjects (20 females, 15 males) that were obtained from MR images. A principal component analysis was performed to determine the most important modes of geometry variation and the characteristic changes per principal component were evaluated. Each meniscus from the original dataset was then reconstructed as a linear combination of principal components. This allowed the comparison of male and female menisci, and a cluster analysis to determine distinct morphological meniscus groups. Of the variation in medial meniscus geometry, 53.8% was found to be due to primarily size-related differences and 29.6% due to shape differences. Shape changes were most prominent in the cross-sectional plane, rather than in the transverse plane. Significant differences between male and female menisci were only found for principal component 1, which predominantly reflected size differences. The cluster analysis resulted in four clusters, yet these clusters represented two statistically different meniscal shapes, as differences between clusters 1, 2 and 4 were only present for principal component 1. This study illustrates that differences in meniscal geometry cannot be explained by scaling only, but that different meniscal shapes can be distinguished. Functional analysis, e.g. through finite element modeling, is required to assess whether these distinct shapes actually influence

  19. GENIE - Generation of computational geometry-grids for internal-external flow configurations

    NASA Technical Reports Server (NTRS)

    Soni, B. K.

    1988-01-01

    Progress realized in the development of a master geometry-grid generation code GENIE is presented. The grid refinement process is enhanced by developing strategies to utilize Bézier curves/surfaces and splines along with the weighted transfinite interpolation technique, and by formulating a new forcing function for the elliptic solver based on the minimization of a non-orthogonality functional. A two-step grid adaptation procedure is developed by optimally blending adaptive weightings with the weighted transfinite interpolation technique. Examples of 2D-3D grids are provided to illustrate the success of these methods.
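
    The transfinite interpolation at the heart of such grid generators can be shown in its simplest, unweighted form: a Coons patch blending four boundary curves into an interior grid. The sketch below is that textbook formula only, not GENIE's weighted variant or its Bézier/spline machinery.

    ```python
    import numpy as np

    # Coons patch: S(u,v) = (1-v)B + vT + (1-u)L + uR  minus the corner terms.
    def coons(bottom, top, left, right):
        """Boundary curves as (n,2)/(m,2) point arrays with matching corners."""
        n, m = bottom.shape[0], left.shape[0]
        u = np.linspace(0, 1, n)[None, :, None]   # shape (1,n,1)
        v = np.linspace(0, 1, m)[:, None, None]   # shape (m,1,1)
        ruled_v = (1 - v) * bottom[None, :, :] + v * top[None, :, :]
        ruled_u = (1 - u) * left[:, None, :] + u * right[:, None, :]
        corners = ((1 - u) * (1 - v) * bottom[0] + u * (1 - v) * bottom[-1]
                   + (1 - u) * v * top[0] + u * v * top[-1])
        return ruled_v + ruled_u - corners        # grid of shape (m,n,2)

    n = m = 11
    s = np.linspace(0, 1, n)
    bottom = np.column_stack([s, 0.2 * np.sin(np.pi * s)])   # curved lower wall
    top = np.column_stack([s, np.ones(n)])
    left = np.column_stack([np.zeros(m), np.linspace(bottom[0, 1], 1, m)])
    right = np.column_stack([np.ones(m), np.linspace(bottom[-1, 1], 1, m)])
    grid = coons(bottom, top, left, right)
    print(grid.shape)   # (11, 11, 2): a structured grid fitted to the boundaries
    ```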

  20. Deterministic approach for unsteady rarefied flow simulations in complex geometries and its application to gas flows in microsystems

    NASA Astrophysics Data System (ADS)

    Chigullapalli, Sruti

    Micro-electro-mechanical systems (MEMS) are widely used in automotive, communications and consumer electronics applications, with microactuators, micro gyroscopes and microaccelerometers being just a few examples. However, in areas where high reliability is critical, such as in aerospace and defense applications, very few MEMS technologies have been adopted so far. Further development of high frequency microsystems such as resonators, RF MEMS, microturbines and pulsed-detonation microengines requires improved understanding of unsteady gas dynamics at the micro scale. Accurate computational simulation of such flows demands new approaches beyond the conventional formulations based on the macroscopic constitutive laws. This is due to the breakdown of the continuum hypothesis in the presence of significant non-equilibrium and rarefaction because of large gradients and small scales, respectively. More generally, the motion of molecules in a gas is described by the kinetic Boltzmann equation, which is valid for arbitrary Knudsen numbers. However, due to the multidimensionality of the phase space and the complex non-linearity of the collision term, numerical solution of the Boltzmann equation is challenging for practical problems. In this thesis a fully deterministic, as opposed to statistical, finite-volume-based three-dimensional solution of the Boltzmann ES-BGK model kinetic equation is formulated to enable simulations of unsteady rarefied flows. The main goal of this research is to develop an unsteady rarefied solver integrated with the finite volume method (FVM) solver in MEMOSA (MEMS Overall Simulation Administrator), developed by the NNSA Center for Prediction of Reliability, Integrity and Survivability of Microsystems (PRISM) at Purdue, and to apply it to study micro-scale gas damping. Formulation and verification of the finite volume method for an unsteady rarefied flow solver based on the Boltzmann ES-BGK equations in arbitrary three-dimensional geometries are presented. The solver is
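
    The relaxation structure of BGK-type kinetic solvers is easy to illustrate in miniature. The sketch below is a toy 1D-in-space, 1D-in-velocity discrete-velocity scheme with plain BGK relaxation, not the thesis's 3D finite-volume ES-BGK solver; grid sizes, time step and relaxation time are arbitrary.

    ```python
    import numpy as np

    nx, nv = 100, 32
    x = np.linspace(0.0, 1.0, nx)
    v = np.linspace(-4.0, 4.0, nv)
    dx, dv, dt, tau = x[1] - x[0], v[1] - v[0], 2e-3, 5e-2

    rho0 = 1.0 + 0.2 * np.exp(-100.0 * (x - 0.5) ** 2)          # density bump
    f = rho0[:, None] * np.exp(-v[None, :] ** 2 / 2.0) / np.sqrt(2 * np.pi)

    def maxwellian(f):
        """Local Maxwellian matching the density, velocity and temperature of f."""
        rho = f.sum(1) * dv
        u = (f * v).sum(1) * dv / rho
        T = (f * (v - u[:, None]) ** 2).sum(1) * dv / rho
        return (rho / np.sqrt(2 * np.pi * T))[:, None] * np.exp(
            -((v[None, :] - u[:, None]) ** 2) / (2.0 * T[:, None]))

    for _ in range(200):
        # first-order upwind transport in x (periodic), then BGK relaxation
        dfdx = np.where(v[None, :] > 0,
                        f - np.roll(f, 1, axis=0),
                        np.roll(f, -1, axis=0) - f) / dx
        f += -dt * v[None, :] * dfdx + (dt / tau) * (maxwellian(f) - f)

    print("total mass ~", (f.sum() * dv * dx).round(4))  # approximately conserved
    ```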

  1. Tumor growth in complex, evolving microenvironmental geometries: A diffuse domain approach

    PubMed Central

    Chen, Ying; Lowengrub, John S.

    2014-01-01

    We develop a mathematical model of tumor growth in complex, dynamic microenvironments with active, deformable membranes. Using a diffuse domain approach, the complex domain is captured implicitly using an auxiliary function and the governing equations are appropriately modified, extended and solved in a larger, regular domain. The diffuse domain method enables us to develop an efficient numerical implementation that does not depend on the space dimension or the microenvironmental geometry. We model homotypic cell-cell adhesion and heterotypic cell-basement membrane (BM) adhesion with the latter being implemented via a membrane energy that models cell-BM interactions. We incorporate simple models of elastic forces and the degradation of the BM and ECM by tumor-secreted matrix degrading enzymes. We investigate tumor progression and BM response as a function of cell-BM adhesion and the stiffness of the BM. We find tumor sizes tend to be positively correlated with cell-BM adhesion since increasing cell-BM adhesion results in thinner, more elongated tumors. Prior to invasion of the tumor into the stroma, we find a negative correlation between tumor size and BM stiffness as the elastic restoring forces tend to inhibit tumor growth. In order to model tumor invasion of the stroma, we find it necessary to downregulate cell-BM adhesiveness, which is consistent with experimental observations. A stiff BM promotes invasiveness because at early stages the opening in the BM created by MDE degradation from tumor cells tends to be narrower when the BM is stiffer. This requires invading cells to squeeze through the narrow opening and thus promotes fragmentation that then leads to enhanced growth and invasion. In three dimensions, the opening in the BM was found to increase in size even when the BM is stiff because of pressure induced by growing tumor clusters. A larger opening in the BM can increase the potential for further invasiveness by increasing the possibility that additional

  2. A Computer Code for Swirling Turbulent Axisymmetric Recirculating Flows in Practical Isothermal Combustor Geometries

    NASA Technical Reports Server (NTRS)

    Lilley, D. G.; Rhode, D. L.

    1982-01-01

    A primitive pressure-velocity variable finite difference computer code was developed to predict swirling recirculating inert turbulent flows in axisymmetric combustors in general, and for application to a specific idealized combustion chamber with sudden or gradual expansion. The technique involves a staggered grid system for axial and radial velocities, a line relaxation procedure for efficient solution of the equations, a two-equation k-epsilon turbulence model, a stairstep boundary representation of the expansion flow, and realistic accommodation of swirl effects. A user's manual, dealing with the computational problem, showing how the mathematical basis and computational scheme may be translated into a computer program is presented. A flow chart, FORTRAN IV listing, notes about various subroutines and a user's guide are supplied as an aid to prospective users of the code.

  3. Computational issues of importance to the inverse recovery of epicardial potentials in a realistic heart-torso geometry.

    PubMed

    Messinger-Rapport, B J; Rudy, Y

    1989-11-01

    In vitro data from a realistic-geometry electrolytic tank were used to demonstrate the consequences of computational issues critical to the ill-posed inverse problem in electrocardiography. The boundary element method was used to discretize the relationship between the body surface potentials and epicardial cage potentials. Variants of Tikhonov regularization were used to stabilize the inversion of the body surface potentials in order to reconstruct the epicardial surface potentials. The computational issues investigated were (1) computation of the regularization parameter; (2) effects of inaccuracy in locating the position of the heart; and (3) incorporation of a priori information on the properties of epicardial potentials into the regularization methodology. Two methods were suggested by which a priori information could be incorporated into the regularization formulation: (1) use of an estimate of the epicardial potential distribution everywhere on the surface and (2) use of regional bounds on the excursion of the potential. Results indicate that the a posteriori technique called CRESO, developed by Colli Franzone and coworkers, most consistently derives the regularization parameter closest to the optimal parameter for this experimental situation. The sensitivity of the inverse computation in a realistic-geometry torso to inaccuracies in estimating heart position is consistent with results from the eccentric spheres model; errors of 1 cm are well tolerated, but errors of 2 cm or greater result in a loss of position and amplitude information. Finally, estimates and bounds based on accurate, known information successfully lower the relative error associated with the inverse and have the potential to significantly enhance the amplitude and feature position information obtainable from the inverse-reconstructed epicardial potential map.
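
    Zero-order Tikhonov regularization as used above replaces the unstable inverse by x_λ = argmin ||Ax − b||² + λ²||x||². The sketch below demonstrates the regularization trade-off on a generic ill-conditioned system; it is a stand-in, not a boundary-element transfer matrix, and λ is scanned by hand rather than chosen a posteriori as CRESO does.

    ```python
    import numpy as np

    # b = A x + noise, with A synthetically ill-conditioned (condition ~1e8)
    rng = np.random.default_rng(0)
    n = 40
    U, _ = np.linalg.qr(rng.standard_normal((n, n)))
    V, _ = np.linalg.qr(rng.standard_normal((n, n)))
    A = U @ np.diag(np.logspace(0, -8, n)) @ V.T
    x_true = np.sin(np.linspace(0, 3 * np.pi, n))
    b = A @ x_true + 1e-5 * rng.standard_normal(n)

    def tikhonov(A, b, lam):
        """x_lam = argmin ||A x - b||^2 + lam^2 ||x||^2 (zero-order Tikhonov)."""
        return np.linalg.solve(A.T @ A + lam**2 * np.eye(A.shape[1]), A.T @ b)

    for lam in (1e-8, 1e-6, 1e-4, 1e-2):   # under- to over-regularized
        err = np.linalg.norm(tikhonov(A, b, lam) - x_true) / np.linalg.norm(x_true)
        print(f"lambda = {lam:.0e}: relative error = {err:.3f}")
    ```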

  4. Optimization of a DPI Inhaler: A Computational Approach.

    PubMed

    Milenkovic, Jovana; Alexopoulos, Aleck H; Kiparissides, Costas

    2017-03-01

    Alternate geometries of a commercial dry powder inhaler (DPI, i.e., Turbuhaler; AstraZeneca, London, UK) are proposed based on simulation results obtained from a fluid and particle dynamics computational model previously developed by Milenkovic et al. The alternate DPI geometries are constructed by simple alterations to components of the commercial inhaler device, leading to smoother flow patterns in regions where significant particle-wall collisions occur. The modified DPIs are investigated under the same conditions as the original studies of Milenkovic et al. for a wide range of inhalation flow rates (i.e., 30-70 L/min). Based on the computational results in terms of total particle deposition and fine particle fraction, the modified DPIs show improved performance over the original design of the commercial device.

  5. Computational dynamics for robotics systems using a non-strict computational approach

    NASA Technical Reports Server (NTRS)

    Orin, David E.; Wong, Ho-Cheung; Sadayappan, P.

    1989-01-01

    A Non-Strict computational approach for real-time robotics control computations is proposed. In contrast to the traditional approach to scheduling such computations, based strictly on task dependence relations, the proposed approach relaxes precedence constraints and scheduling is guided instead by the relative sensitivity of the outputs with respect to the various paths in the task graph. An example of the computation of the Inverse Dynamics of a simple inverted pendulum is used to demonstrate the reduction in effective computational latency through use of the Non-Strict approach. A speedup of 5 has been obtained when the processes of the task graph are scheduled to reduce the latency along the crucial path of the computation. While error is introduced by the relaxation of precedence constraints, the Non-Strict approach has a smaller error than the conventional Strict approach for a wide range of input conditions.

  6. CMEIAS JFrad: a digital computing tool to discriminate the fractal geometry of landscape architectures and spatial patterns of individual cells in microbial biofilms.

    PubMed

    Ji, Zhou; Card, Kyle J; Dazzo, Frank B

    2015-04-01

    Image analysis of fractal geometry can be used to gain deeper insights into complex ecophysiological patterns and processes occurring within natural microbial biofilm landscapes, including the scale-dependent heterogeneities of their spatial architecture, biomass, and cell-cell interactions, all driven by the colonization behavior of optimal spatial positioning of organisms to maximize their efficiency in utilization of allocated nutrient resources. Here, we introduce CMEIAS JFrad, a new computing technology that analyzes the fractal geometry of complex biofilm architectures in digital landscape images. The software uniquely features a data-mining opportunity based on a comprehensive collection of 11 different mathematical methods to compute fractal dimension that are implemented into a wizard design to maximize ease-of-use for semi-automatic analysis of single images or fully automatic analysis of multiple images in a batch process. As examples of application, quantitative analyses of fractal dimension were used to optimize the important variable settings of brightness threshold and minimum object size in order to discriminate the complex architecture of freshwater microbial biofilms at multiple spatial scales, and also to differentiate the spatial patterns of individual bacterial cells that influence their cooperative interactions, resource use, and apportionment in situ. Version 1.0 of JFrad is implemented into a software package containing the program files, user manual, and tutorial images that will be freely available at http://cme.msu.edu/cmeias/. This improvement in computational image informatics will strengthen microscopy-based approaches to analyze the dynamic landscape ecology of microbial biofilm populations and communities in situ at spatial resolutions that range from single cells to microcolonies.
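
    Of the many fractal-dimension estimators of the kind JFrad collects, box counting is the most familiar: the dimension is the slope of log N(s) against log(1/s), where N(s) counts the occupied s-by-s boxes. A minimal sketch on a synthetic test image (this is not JFrad's code):

    ```python
    import numpy as np

    def box_count(img, s):
        """Number of s-by-s boxes containing at least one foreground pixel."""
        h, w = img.shape
        trimmed = img[: h - h % s, : w - w % s]
        blocks = trimmed.reshape(h // s, s, w // s, s)
        return np.count_nonzero(blocks.any(axis=(1, 3)))

    def carpet(level):
        """Sierpinski-carpet test pattern; true dimension log 8 / log 3 ~ 1.893."""
        img = np.ones((1, 1), bool)
        for _ in range(level):
            z = np.zeros_like(img)
            img = np.block([[img, img, img], [img, z, img], [img, img, img]])
        return img

    img = carpet(5)                             # 243 x 243 binary image
    sizes = np.array([1, 3, 9, 27, 81])
    counts = np.array([box_count(img, s) for s in sizes])
    slope, _ = np.polyfit(np.log(1 / sizes), np.log(counts), 1)
    print(f"estimated fractal dimension ~ {slope:.3f}")   # ~1.893
    ```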

  7. Molecular geometry of vanadium dichloride and vanadium trichloride: a gas-phase electron diffraction and computational study.

    PubMed

    Varga, Zoltán; Vest, Brian; Schwerdtfeger, Peter; Hargittai, Magdolna

    2010-03-15

    The molecular geometries of VCl2 and VCl3 have been determined by computations and gas-phase electron diffraction (ED). The ED study is a reinvestigation of the previously published analysis for VCl2. The structure of the vanadium dichloride dimer has also been calculated. According to our joint ED and computational study, the evaporation of a solid sample of VCl2 resulted in about 66% vanadium trichloride and 34% vanadium dichloride in the vapor. Vanadium dichloride is unambiguously linear in its 4Σg+ ground electronic state. For VCl3, all computations yielded a Jahn-Teller-distorted ground-state structure of C2v symmetry. However, it lies merely less than 3 kJ/mol lower than the 3E'' state (D3h symmetry). Due to the dynamic nature of the Jahn-Teller effect in this case, rigorous distinction cannot be made between the planar models of either D3h or C2v symmetry for the equilibrium structure of VCl3. Furthermore, the presence of several low-lying excited electronic states of VCl3 is expected in the high-temperature vapor. To our knowledge, this is the first experimental and computational study of the VCl3 molecule.

  8. Human brain mapping: Experimental and computational approaches

    SciTech Connect

    Wood, C.C.; George, J.S.; Schmidt, D.M.; Aine, C.J.; Sanders, J.; Belliveau, J.

    1998-11-01

    This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). This project combined Los Alamos' and collaborators' strengths in noninvasive brain imaging and high performance computing to develop potential contributions to the multi-agency Human Brain Project led by the National Institute of Mental Health. The experimental component of the project emphasized the optimization of spatial and temporal resolution of functional brain imaging by combining: (a) structural MRI measurements of brain anatomy; (b) functional MRI measurements of blood flow and oxygenation; and (c) MEG measurements of time-resolved neuronal population currents. The computational component of the project emphasized development of a high-resolution 3-D volumetric model of the brain based on anatomical MRI, in which structural and functional information from multiple imaging modalities can be integrated into a single computational framework for modeling, visualization, and database representation.

  9. Reduced-Order Modeling: New Approaches for Computational Physics

    NASA Technical Reports Server (NTRS)

    Beran, Philip S.; Silva, Walter A.

    2001-01-01

    In this paper, we review the development of new reduced-order modeling techniques and discuss their applicability to various problems in computational physics. Emphasis is given to methods based on Volterra series representations and the proper orthogonal decomposition. Results are reported for different nonlinear systems to provide clear examples of the construction and use of reduced-order models, particularly in the multi-disciplinary field of computational aeroelasticity. Unsteady aerodynamic and aeroelastic behaviors of two-dimensional and three-dimensional geometries are described. Large increases in computational efficiency are obtained through the use of reduced-order models, thereby justifying the initial computational expense of constructing these models and motivating their use for multi-disciplinary design analysis.
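
    Computationally, the proper orthogonal decomposition mentioned above is an SVD of a snapshot matrix with the basis truncated at an energy threshold. A minimal sketch on synthetic data:

    ```python
    import numpy as np

    # snapshots of a field stacked column-wise; SVD gives an energy-ranked basis
    rng = np.random.default_rng(0)
    x = np.linspace(0, 2 * np.pi, 200)
    t = np.linspace(0, 10, 80)
    snapshots = (np.outer(np.sin(x), np.cos(2 * t))            # coherent mode 1
                 + 0.3 * np.outer(np.sin(3 * x), np.sin(5 * t))  # coherent mode 2
                 + 0.01 * rng.standard_normal((200, 80)))        # noise
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.99) + 1)         # modes for 99% energy
    recon = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]          # reduced-order reconstruction
    err = np.linalg.norm(recon - snapshots) / np.linalg.norm(snapshots)
    print(f"{r} modes capture 99% of the energy; reconstruction error {err:.3%}")
    ```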

  10. A 3D Computational fluid dynamics model validation for candidate molybdenum-99 target geometry

    NASA Astrophysics Data System (ADS)

    Zheng, Lin; Dale, Greg; Vorobieff, Peter

    2014-11-01

    Molybdenum-99 (99Mo) is the parent product of technetium-99m (99mTc), a radioisotope used in approximately 50,000 medical diagnostic tests per day in the U.S. The primary uses of this product include detection of heart disease, cancer, study of organ structure and function, and other applications. The US Department of Energy seeks new methods for generating 99Mo without the use of highly enriched uranium, to eliminate proliferation issues and provide a domestic supply of 99mTc for medical imaging. For this project, electron accelerating technology is used by sending an electron beam through a series of 100Mo targets. During this process a large amount of heat is created, which directly affects the operating temperature dictated by the tensile stress limit of the wall material. To maintain the required temperature range, helium gas is used as a cooling agent that flows through narrow channels between the target disks. In our numerical study, we investigate the cooling performance on a series of new geometry designs of the cooling channel. This research is supported by Los Alamos National Laboratory.

  11. Methods of information geometry in computational system biology (consistency between chemical and biological evolution).

    PubMed

    Astakhov, Vadim

    2009-01-01

    Interest in simulation of large-scale metabolic networks, species development, and the genesis of various diseases requires new simulation techniques to accommodate the high complexity of realistic biological networks. Information geometry and topological formalisms are proposed to analyze information processes. We analyze the complexity of large-scale biological networks as well as the transition of system functionality due to modifications in the system architecture, system environment, and system components. A dynamic core model is developed, where the term dynamic core is used to define a set of causally related network functions. Delocalization of the dynamic core model provides a mathematical formalism to analyze the migration of specific functions in biosystems which undergo structure transitions induced by the environment; the term delocalization is used to describe these processes of migration. We constructed a holographic model with self-poetic dynamic cores which preserves functional properties under those transitions. Topological constraints such as Ricci flow and Pfaff dimension were found for statistical manifolds which represent biological networks. These constraints can provide insight into the processes of degeneration and recovery which take place in large-scale networks. We suggest that therapies which are able to effectively implement the estimated constraints will successfully adjust biological systems and recover altered functionality. Also, we mathematically formulate the hypothesis that there is a direct consistency between biological and chemical evolution. Any set of causal relations within a biological network has its dual reimplementation in the chemistry of the system environment.

  12. Batch Computer Scheduling: A Heuristically Motivated Approach

    DTIC Science & Technology

    1974-09-01

    Massachusetts, 1957. FRANH 70 Frank, H., I. T. Frisch, and W. Chou, "Topological Considerations in the Design of the ARPA Computer Network," AFIPS Conference Proceedings, 1970 Spring Joint Computer Conference, pp. 531-587. FRANH 72 Frank, H., R. E. Kahn, and L. Kleinrock, "Computer

  13. Experimental and Computational Study of the Flow past a Simplified Geometry of an Engine/Pylon/Wing Installation at low velocity/moderate incidence flight conditions

    NASA Astrophysics Data System (ADS)

    Bury, Yannick; Lucas, Matthieu; Bonnaud, Cyril; Joly, Laurent; ISAE Team; Airbus Team

    2014-11-01

    We study numerically and experimentally the vortices that develop past a model geometry of a wing equipped with a pylon-mounted engine at low speed/moderate incidence flight conditions. For such a configuration, the presence of the powerplant installation under the wing initiates a complex, unsteady vortical flow field at the nacelle/pylon/wing junctions. Its interaction with the upper wing boundary layer causes a drop in aircraft performance. In order to decipher the underlying physics, this study is initially conducted on a simplified geometry at a Reynolds number of 200000, based on the wing chord and on the freestream velocity. Two configurations of angle of attack and side-slip angle are investigated. This work relies on unsteady Reynolds-Averaged Navier-Stokes computations, oil flow visualizations and stereoscopic Particle Image Velocimetry measurements. The resulting vortex dynamics is described in terms of vortex core position, intensity, size and turbulent intensity thanks to a vortex tracking approach. In addition, the analysis of the velocity flow fields obtained from PIV highlights the influence of the longitudinal vortex initiated at the pylon/wing junction on the separation process of the boundary layer near the upper wing leading edge.

  14. Kindergarteners' Achievement on Geometry and Measurement Units That Incorporate a Gifted Education Approach

    ERIC Educational Resources Information Center

    Casa, Tutita M.; Firmender, Janine M.; Gavin, M. Katherine; Carroll, Susan R.

    2017-01-01

    This research responds to the call by early childhood educators advocating for more challenging mathematics curriculum at the primary level. The kindergarten Project M[superscript 2] units focus on challenging geometry and measurement concepts by positioning students as practicing mathematicians. The research reported herein highlights the…

  15. Specific heat critical amplitudes and the approach to bulk criticality in parallel plate geometries

    NASA Astrophysics Data System (ADS)

    Leite, M. M.; Nemirovsky, A. M.; Coutinho-Filho, M. D.

    1992-02-01

    We calculate the universal ratio A+/A- of the specific heat critical amplitudes of an Ising system confined in a layered geometry of thickness L in the regime L/ξ ≥ 1, where ξ is the bulk critical correlation length. Using field-theoretic renormalization-group techniques, we determine A+/A- under various surface boundary conditions for the local field.

  16. Conceptualizing Vectors in College Geometry: A New Framework for Analysis of Student Approaches and Difficulties

    ERIC Educational Resources Information Center

    Kwon, Oh Hoon

    2012-01-01

    This dissertation documents a new way of conceptualizing vectors in college mathematics, especially in geometry. First, I will introduce three problems to show the complexity and subtlety of the construct of vectors with the classical vector representations. These highlight the need for a new framework that: (1) differentiates abstraction from a…

  17. Computational study of pulsatile blood flow in prototype vessel geometries of coronary segments

    PubMed Central

    Chaniotis, A.K.; Kaiktsis, L.; Katritsis, D.; Efstathopoulos, E.; Pantos, I.; Marmarellis, V.

    2010-01-01

    The spatial and temporal distributions of wall shear stress (WSS) in prototype vessel geometries of coronary segments are investigated via numerical simulation, and the potential association with vascular disease and specifically atherosclerosis and plaque rupture is discussed. In particular, simulation results of WSS spatio-temporal distributions are presented for pulsatile, non-Newtonian blood flow conditions for: (a) curved pipes with different curvatures, and (b) bifurcating pipes with different branching angles and flow division. The effects of non-Newtonian flow on WSS (compared to Newtonian flow) are found to be small at Reynolds numbers representative of blood flow in coronary arteries. Specific preferential sites of average low WSS (and likely atherogenesis) were found at the outer regions of the bifurcating branches just after the bifurcation, and at the outer-entry and inner-exit flow regions of the curved vessel segment. The drop in WSS was more dramatic at the bifurcating vessel sites (less than 5% of the pre-bifurcation value). These sites were also near rapid gradients of WSS changes in space and time – a fact that increases the risk of rupture of plaque likely to develop at these sites. The time variation of the WSS spatial distributions was very rapid around the start and end of the systolic phase of the cardiac cycle, when strong fluctuations of intravascular pressure were also observed. These rapid and strong changes of WSS and pressure coincide temporally with the greatest flexion and mechanical stresses induced in the vessel wall by myocardial motion (ventricular contraction). The combination of these factors may increase the risk of plaque rupture and thrombus formation at these sites. PMID:20400349
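
    For orientation, the magnitude of WSS in such vessels can be anchored with the steady Newtonian Poiseuille relation τ_w = 4 μ Q / (π R³); this gives only a reference scale for the pulsatile, non-Newtonian distributions computed in the study, and the flow rate and radius below are assumed, coronary-like values.

    ```python
    import numpy as np

    mu = 3.5e-3    # blood viscosity [Pa s], assumed Newtonian value
    Q = 1.0e-6     # mean volumetric flow rate [m^3/s] (~60 mL/min, assumed)
    R = 1.5e-3     # lumen radius [m], coronary-like
    tau_w = 4.0 * mu * Q / (np.pi * R**3)          # Poiseuille wall shear stress
    print(f"wall shear stress ~ {tau_w:.2f} Pa")   # ~1.3 Pa, typical arterial scale
    ```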

  18. Interpolation Approach To Computer-Generated Holograms

    NASA Astrophysics Data System (ADS)

    Yatagai, Toyohiko

    1983-10-01

    A computer-generated hologram (CGH) for reconstructing NxN independent resolution points would actually require a hologram made up of NxN sampling cells. For Fourier-transform CGHs with dependent sampling points, the memory size required for computation can be reduced by using an interpolation technique for the reconstructed image points. We have made a mosaic hologram which consists of K x K subholograms with N x N sampling points multiplied by an appropriate weighting factor. It is shown that the mosaic hologram can reconstruct an image with NK x NK resolution points. The main advantage of the present algorithm is that a sufficiently large hologram of NK x NK sample points is synthesized from K x K subholograms which are successively calculated from the data of N x N sample points and also successively plotted.
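
    A plain (non-mosaic) Fourier-transform CGH is easy to sketch: attach a random diffuser phase to the target, take its FFT as the hologram, and reconstruct by inverse transform. The snippet below shows only this baseline, not the interpolation/mosaic scheme of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 64
    target = np.zeros((N, N))
    target[24:40, 24:40] = 1.0                                 # simple test object
    field = target * np.exp(2j * np.pi * rng.random((N, N)))   # random diffuser phase
    hologram = np.fft.fft2(field)                              # Fourier hologram
    kinoform = np.exp(1j * np.angle(hologram))                 # keep phase only
    recon = np.abs(np.fft.ifft2(kinoform))
    print("reconstruction/target correlation:",
          round(np.corrcoef(recon.ravel(), target.ravel())[0, 1], 2))
    ```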

  19. A Social Construction Approach to Computer Science Education

    ERIC Educational Resources Information Center

    Machanick, Philip

    2007-01-01

    Computer science education research has mostly focused on cognitive approaches to learning. Cognitive approaches to understanding learning do not account for all the phenomena observed in teaching and learning. A number of apparently successful educational approaches, such as peer assessment, apprentice-based learning and action learning, have…

  20. A declarative approach to visualizing concurrent computations

    SciTech Connect

    Roman, G.C.; Cox, K.C. )

    1989-10-01

    That visualization can play a key role in the exploration of concurrent computations is central to the ideas presented. Equally important, although given less emphasis, is concern that the full potential of visualization may not be reached unless the art of generating beautiful pictures is rooted in a solid, formally technical foundation. The authors show that program verification provides a formal framework around which such a foundation can be built. Making these ideas a practical reality will require both research and experimentation.

  1. Geometry Shapes Propagation: Assessing the Presence and Absence of Cortical Symmetries through a Computational Model of Cortical Spreading Depression

    PubMed Central

    Kroos, Julia M.; Diez, Ibai; Cortes, Jesus M.; Stramaglia, Sebastiano; Gerardo-Giorda, Luca

    2016-01-01

    Cortical spreading depression (CSD), a depolarization wave which originates in the visual cortex and travels toward the frontal lobe, has been suggested to be one neural correlate of aura migraine. To date, little is known about the mechanisms which can trigger or stop aura migraine. Here, to shed some light on this problem and, under the hypothesis that CSD might mediate aura migraine, we aim to study different aspects favoring or disfavoring the propagation of CSD. In particular, by using a computational neuronal model distributed throughout a realistic cortical mesh, we study the role that the geometry has in shaping CSD. Our results are two-fold: first, we found significant differences in the propagation traveling patterns of CSD, both intra and inter-hemispherically, revealing important asymmetries in the propagation profile. Second, we developed methods able to identify brain regions featuring a peculiar behavior during CSD propagation. Our study reveals dynamical aspects of CSD, which, if applied to subject-specific cortical geometry, might shed some light on how to differentiate between healthy subjects and those suffering migraine. PMID:26869913

  2. Geometry Shapes Propagation: Assessing the Presence and Absence of Cortical Symmetries through a Computational Model of Cortical Spreading Depression.

    PubMed

    Kroos, Julia M; Diez, Ibai; Cortes, Jesus M; Stramaglia, Sebastiano; Gerardo-Giorda, Luca

    2016-01-01

    Cortical spreading depression (CSD), a depolarization wave which originates in the visual cortex and travels toward the frontal lobe, has been suggested to be one neural correlate of aura migraine. To date, little is known about the mechanisms which can trigger or stop aura migraine. Here, to shed some light on this problem and, under the hypothesis that CSD might mediate aura migraine, we aim to study different aspects favoring or disfavoring the propagation of CSD. In particular, by using a computational neuronal model distributed throughout a realistic cortical mesh, we study the role that the geometry has in shaping CSD. Our results are two-fold: first, we found significant differences in the propagation traveling patterns of CSD, both intra and inter-hemispherically, revealing important asymmetries in the propagation profile. Second, we developed methods able to identify brain regions featuring a peculiar behavior during CSD propagation. Our study reveals dynamical aspects of CSD, which, if applied to subject-specific cortical geometry, might shed some light on how to differentiate between healthy subjects and those suffering migraine.

  3. Real geometry gyrokinetic PIC computations of ion turbulence in tokamak discharges with SUMMIT/PG3EQ_NC

    NASA Astrophysics Data System (ADS)

    Leboeuf, Jean-Noel; Rhodes, Terry; Dimits, Andris; Shumaker, Dan

    2006-10-01

    The PG3EQ_NC module within the SUMMIT Gyrokinetic PIC FORTRAN90 Framework makes possible 3D nonlinear toroidal computations of ion turbulence in the real geometry of DIII-D discharges. This is accomplished with the use of local, field line following, quasi-ballooning coordinates and through a direct interface with DIII-D equilibrium data via the EFIT and ONETWO codes, as well as Holger Saint John's PLOTEQ code for the (R, Z) position of each flux surface. The effect of real geometry is being elucidated with the CYCLONE shot by comparing results for growth rates and diffusivities from PG3EQ_NC to those of its circular counterpart. The PG3EQ_NC module is also being used to model ion channel turbulence in DIII-D discharges 118561 and 120327. Linear results will be compared to growth rate calculations with the GKS code. Nonlinear results will also be compared with scattering measurements of turbulence, as well as with accessible measurements of fluctuation amplitudes and spectra from other diagnostics.

  4. Real geometry gyrokinetic PIC computations of ion turbulence in advanced tokamak discharges with SUMMIT/PG3EQ_NC

    NASA Astrophysics Data System (ADS)

    Leboeuf, Jean-Noel; Decyk, Viktor; Rhodes, Terry; Dimits, Andris; Shumaker, Dan

    2006-04-01

    The PG3EQ_NC module within the SUMMIT Gyrokinetic PIC FORTRAN90 Framework makes possible 3D nonlinear toroidal computations of ion turbulence in the real geometry of DIII-D discharges. This is accomplished with the use of local, field line following, quasi-ballooning coordinates and through a direct interface with DIII-D equilibrium data via the EFIT and ONETWO codes, as well as Holger Saint John's PLOTEQ code for the (R, Z) position of each flux surface. The effect of real geometry is being elucidated with CYCLONE shot 81499 by comparing results from PG3EQ_NC to those of its circular counterpart. The PG3EQ_NC module is also being used to model ion channel turbulence in advanced tokamak discharges 118561 and 120327. Linear results will be compared to growth rate calculations with the GKS code. Nonlinear results will also be compared with scattering measurements of turbulence, as well as with accessible measurements of fluctuation amplitudes and spectra from other diagnostics.

  5. A modern approach to storing of 3D geometry of objects in machine engineering industry

    NASA Astrophysics Data System (ADS)

    Sokolova, E. A.; Aslanov, G. A.; Sokolov, A. A.

    2017-02-01

    3D graphics is a kind of computer graphics that has absorbed much from vector and raster computer graphics. It is used in interior design projects, architectural projects, advertising, educational computer programs, movies, visual images of parts and products in engineering, etc. 3D computer graphics allows one to create 3D scenes along with simulation of lighting conditions and the setting of viewpoints.

  6. Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories

    NASA Technical Reports Server (NTRS)

    Ng, Hok Kwan; Sridhar, Banavar

    2016-01-01

    This study examines three possible approaches to improving the speed of generating wind-optimal routes for air traffic at the national or global level: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers, and (c) implementing the same algorithms in NASA's Future ATM Concepts Evaluation Tool (FACET); each is compared to a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization using various numbers of CPUs, ranging from 80 to 10,240 units, are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers, to assess the potential computational enhancement from parallel processing on computer clusters. The study also re-implements the trajectory optimization algorithm to further reduce computational time through algorithmic modifications, and integrates it with FACET so that time-optimal routes between worldwide airport pairs in a wind field can be calculated for use with existing FACET applications. The implementations use the MATLAB, Python, and Java programming languages, and performance is evaluated by comparing computational efficiency in light of the intended applications of the optimized trajectories. The paper shows that, in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.
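
    To illustrate approach (b) only (a toy sketch, not the paper's code; the great-circle objective is a hypothetical stand-in for a wind-optimal solver), independent airport-pair computations can be farmed out to worker processes with Python's standard library:

        import math
        from concurrent.futures import ProcessPoolExecutor

        def route_cost(pair):
            """Hypothetical stand-in for one wind-optimal trajectory solve:
            just the great-circle distance (km) between two airports."""
            (lat1, lon1), (lat2, lon2) = pair
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dl = math.radians(lon2 - lon1)
            c = math.sin(p1) * math.sin(p2) + math.cos(p1) * math.cos(p2) * math.cos(dl)
            return 6371.0 * math.acos(max(-1.0, min(1.0, c)))

        if __name__ == "__main__":
            pairs = [((37.62, -122.38), (40.64, -73.78)),   # approx. SFO-JFK
                     ((51.47, -0.45), (35.55, 139.78))]     # approx. LHR-HND
            with ProcessPoolExecutor() as pool:             # one task per airport pair
                print(list(, pairs)))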

  7. Information theoretic approaches to multidimensional neural computations

    NASA Astrophysics Data System (ADS)

    Fitzgerald, Jeffrey D.

    Many systems in nature process information by transforming inputs from their environments into observable output states. These systems are often difficult to study because they perform computations on multidimensional inputs with many degrees of freedom using highly nonlinear functions. The work presented in this dissertation deals with some of the issues involved in characterizing real-world input/output systems and understanding the properties of idealized systems using information theoretic methods. Using the principle of maximum entropy, a family of models is created that are consistent with certain measurable correlations from an input/output dataset but are maximally unbiased in all other respects, thereby eliminating all unjustified assumptions about the computation. In certain cases, including spiking neurons, we show that these models also minimize the mutual information. This property gives one the advantage of being able to identify the relevant input/output statistics by calculating their information content. We argue that these maximum entropy models provide a much-needed quantitative framework for characterizing and understanding sensory processing neurons that are selective for multiple stimulus features. To demonstrate their usefulness, these ideas are applied to neural recordings from macaque retina and thalamus. These neurons, which primarily respond to two stimulus features, are shown to be well described using only first- and second-order statistics, indicating that their firing rates encode information about stimulus correlations. In addition to modeling multi-feature computations in the relevant feature space, we also show that maximum entropy models are capable of discovering the relevant feature space themselves. This technique overcomes the disadvantages of two commonly used dimensionality reduction methods and is explored using several simulated neurons, as well as retinal and thalamic recordings. Finally, we ask how neurons in a

  8. Computation of leading edge film cooling from a CONSOLE geometry (CONverging Slot hOLE)

    NASA Astrophysics Data System (ADS)

    Guelailia, A.; Khorsi, A.; Hamidou, M. K.

    2016-01-01

    The aim of this study is to investigate the effect of mass flow rate on film cooling effectiveness and heat transfer over a gas turbine rotor blade with three staggered rows of shower-head holes, which are inclined at 30° to the spanwise direction and are normal to the streamwise direction on the blade. To improve film cooling effectiveness, the standard cylindrical holes located on the leading edge region are replaced with converging slot holes (consoles). ANSYS CFX has been used for the computational simulation, with turbulence approximated by the k-ε model. Detailed film effectiveness distributions are presented for different mass flow rates. The numerical results are compared with experimental data.

  9. A computational framework to characterize and compare the geometry of coronary networks.

    PubMed

    Bulant, C A; Blanco, P J; Lima, T P; Assunção, A N; Liberato, G; Parga, J R; Ávila, L F R; Pereira, A C; Feijóo, R A; Lemos, P A

    2017-03-01

    This work presents a computational framework to perform a systematic and comprehensive assessment of the morphometry of coronary arteries from in vivo medical images. The methodology embraces image segmentation, arterial vessel representation, characterization and comparison, data storage, and, finally, analysis. Validation is performed using a sample of 48 patients. Data mining of morphometric information on several coronary arteries is presented. Results agree with medical reports in terms of basic geometric and anatomical variables. Concerning geometric descriptors, inter-artery and intra-artery correlations are studied. The data reported here can be useful for the construction and setup of blood flow models of the coronary circulation. Finally, as an application example, a similarity criterion to assess vasculature likeness based on geometric features is presented and used to test geometric similarity among sibling patients. Results indicate that likeness, measured through geometric descriptors, is stronger between siblings than between non-relative patients.

  10. A Computational Approach to Competitive Range Expansions

    NASA Astrophysics Data System (ADS)

    Weber, Markus F.; Poxleitner, Gabriele; Hebisch, Elke; Frey, Erwin; Opitz, Madeleine

    2014-03-01

    Bacterial communities represent complex and dynamic ecological systems. Environmental conditions and microbial interactions determine whether a bacterial strain survives an expansion to new territory. In our work, we studied competitive range expansions in a model system of three Escherichia coli strains. In this system, a colicin-producing strain competed with a colicin-resistant and a colicin-sensitive strain for new territory. Genetic engineering allowed us to tune the strains' growth rates and to study their expansion in distinct ecological scenarios (with either cyclic or hierarchical dominance). The control over growth rates also enabled us to construct and validate a predictive computational model of the bacterial dynamics. The model rested on an agent-based, coarse-grained description of the expansion process, and we conducted independent experiments on the growth of single-strain colonies for its parametrization. Furthermore, the model considered the long-range nature of the toxin interaction between strains. The integration of experimental analysis with computational modeling made it possible to quantify how the level of biodiversity depends on the interplay between bacterial growth rates, the initial composition of the inoculum, and the toxin range.
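
    The flavor of such an agent-based, coarse-grained expansion model can be sketched as follows (an illustrative toy, not the authors' parametrized model; the lattice size, growth rates, toxin range, and kill probability are all assumptions):

        import numpy as np

        rng = np.random.default_rng(2)
        N = 60
        grid = np.zeros((N, N), dtype=int)   # 0 empty; 1 producer; 2 resistant; 3 sensitive
        grid[0, : N // 3] = 1                # inoculum: three strains side by side
        grid[0, N // 3 : 2 * N // 3] = 2
        grid[0, 2 * N // 3 :] = 3
        rate = {1: 0.8, 2: 0.9, 3: 1.0}      # toxin production carries a growth cost

        for _ in range(150):
            for x, y in rng.permutation(np.argwhere(grid == 0)):
                nbrs = [grid[i, j]
                        for i, j in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
                        if 0 <= i < N and 0 <= j < N and grid[i, j] > 0]
                if nbrs:
                    s = int(rng.choice(nbrs))            # random occupied neighbor
                    if rng.random() < rate[s]:
                        grid[x, y] = s                   # colonize the frontier site
            prod = np.argwhere(grid == 1)
            for x, y in (np.argwhere(grid == 3) if len(prod) else []):
                if np.abs(prod - (x, y)).sum(1).min() <= 3 and rng.random() < 0.2:
                    grid[x, y] = 0                       # long-range colicin kill
        print([int((grid == s).sum()) for s in (1, 2, 3)])  # final abundances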

  11. A bionic approach to mathematical modeling the fold geometry of deployable reflector antennas on satellites

    NASA Astrophysics Data System (ADS)

    Feng, C. M.; Liu, T. S.

    2014-10-01

    Inspired by biology, this study presents a method for designing the fold geometry of deployable reflectors. Since the space available inside rockets for transporting satellites with reflector antennas is typically cylindrical in shape, and its cross-sectional area is considerably smaller than that of the reflector antenna after deployment, the cross-sectional area of the folded reflector must fit within the available rocket interior space. Membrane reflectors in aerospace are a type of lightweight structure that can be packaged compactly. To design membrane reflectors from the perspective of deployment processes, bionic applications of the morphological changes of plants are investigated. Creating biologically inspired reflectors, this paper deals with the fold geometry of reflectors that imitate flower buds. The study uses a mathematical formulation to describe the geometric profiles of flower buds. Based on the formulation, new designs for deployable membrane reflectors derived from bionics are proposed. Adjusting parameters in the formulation of these designs leads to decreases in reflector area before deployment.

  12. Helical gears with circular arc teeth: Generation, geometry, precision and adjustment to errors, computer aided simulation of conditions of meshing and bearing contact

    NASA Technical Reports Server (NTRS)

    Litvin, Faydor L.; Tsay, Chung-Biau

    1987-01-01

    The authors have proposed a method for the generation of circular arc helical gears which is based on the application of standard equipment, worked out all aspects of the geometry of the gears, proposed methods for the computer aided simulation of conditions of meshing and bearing contact, investigated the influence of manufacturing and assembly errors, and proposed methods for the adjustment of gears to these errors. The results of computer aided solutions are illustrated with computer graphics.

  13. Novel Computational Approaches to Drug Discovery

    NASA Astrophysics Data System (ADS)

    Skolnick, Jeffrey; Brylinski, Michal

    2010-01-01

    New approaches to protein functional inference based on protein structure and evolution are described. First, FINDSITE, a threading based approach to protein function prediction, is summarized. Then, the results of large scale benchmarking of ligand binding site prediction, ligand screening, including applications to HIV protease, and GO molecular functional inference are presented. A key advantage of FINDSITE is its ability to use low resolution, predicted structures as well as high resolution experimental structures. Then, an extension of FINDSITE to ligand screening in GPCRs using predicted GPCR structures, FINDSITE/QDOCKX, is presented. This is a particularly difficult case as there are few experimentally solved GPCR structures. Thus, we first train on a subset of known binding ligands for a set of GPCRs; this is then followed by benchmarking against a large ligand library. For the virtual ligand screening of a number of Dopamine receptors, encouraging results are seen, with significant enrichment in identified ligands over those found in the training set. Thus, FINDSITE and its extensions represent a powerful approach to the successful prediction of a variety of molecular functions.

  14. Computational approaches to natural product discovery

    PubMed Central

    Medema, Marnix H.; Fischbach, Michael A.

    2016-01-01

    From the earliest Streptomyces genome sequences, the promise of natural product genome mining has been captivating: genomics and bioinformatics would transform compound discovery from an ad hoc pursuit to a high-throughput endeavor. Until recently, however, genome mining has advanced natural product discovery only modestly. Here, we argue that the development of algorithms to mine the continuously increasing amounts of (meta)genomic data will enable the promise of genome mining to be realized. We review computational strategies that have been developed to identify biosynthetic gene clusters in genome sequences and predict the chemical structures of their products. We then discuss networking strategies that can systematize large volumes of genetic and chemical data, and connect genomic information to metabolomic and phenotypic data. Finally, we provide a vision of what natural product discovery might look like in the future, specifically considering long-standing questions in microbial ecology regarding the roles of metabolites in interspecies interactions. PMID:26284671

  15. Computational approaches to natural product discovery.

    PubMed

    Medema, Marnix H; Fischbach, Michael A

    2015-09-01

    Starting with the earliest Streptomyces genome sequences, the promise of natural product genome mining has been captivating: genomics and bioinformatics would transform compound discovery from an ad hoc pursuit to a high-throughput endeavor. Until recently, however, genome mining has advanced natural product discovery only modestly. Here, we argue that the development of algorithms to mine the continuously increasing amounts of (meta)genomic data will enable the promise of genome mining to be realized. We review computational strategies that have been developed to identify biosynthetic gene clusters in genome sequences and predict the chemical structures of their products. We then discuss networking strategies that can systematize large volumes of genetic and chemical data and connect genomic information to metabolomic and phenotypic data. Finally, we provide a vision of what natural product discovery might look like in the future, specifically considering longstanding questions in microbial ecology regarding the roles of metabolites in interspecies interactions.

  16. Numerical Computation of Sensitivities and the Adjoint Approach

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael

    1997-01-01

    We discuss the numerical computation of sensitivities via the adjoint approach in optimization problems governed by differential equations. We focus on the adjoint problem in its weak form. We show how one can avoid some of the problems with the adjoint approach, such as deriving suitable boundary conditions for the adjoint equation. We discuss the convergence of numerical approximations of the costate computed via the weak form of the adjoint problem and show the significance for the discrete adjoint problem.
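
    A finite-dimensional caricature of the adjoint trick (an illustration, not the report's weak-form derivation): for J(p) = c^T u with A(p)u = b, a single adjoint solve A^T lam = c yields every sensitivity at once, since dJ/dp_k = -lam^T (dA/dp_k) u when b does not depend on p.

        import numpy as np

        def adjoint_gradient(A, dA_dp, b, c):
            """Sensitivities of J = c^T u, with A u = b, via one adjoint solve.
            dA_dp: list of dA/dp_k matrices (b assumed independent of p)."""
            u = np.linalg.solve(A, b)        # forward (state) solve
            lam = np.linalg.solve(A.T, c)    # adjoint (costate) solve
            return np.array([-lam @ (dAk @ u) for dAk in dA_dp])

        # Check one component against a finite difference:
        A0 = np.array([[2.0, 1.0], [0.0, 3.0]])
        dA = [np.array([[1.0, 0.0], [0.0, 0.0]])]
        b, c, eps = np.array([1.0, 1.0]), np.array([1.0, 2.0]), 1e-6
        J = lambda A: c @ np.linalg.solve(A, b)
        print(adjoint_gradient(A0, dA, b, c)[0],
              (J(A0 + eps * dA[0]) - J(A0)) / eps)   # the two should nearly agree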

  17. Computational analysis of a rarefied hypersonic flow over combined gap/step geometries

    NASA Astrophysics Data System (ADS)

    Leite, P. H. M.; Santos, W. F. N.

    2015-06-01

    This work describes a computational analysis of hypersonic flow over a combined gap/step configuration at a zero-degree angle of attack, in chemical equilibrium and thermal nonequilibrium. Effects on the flowfield structure due to changes in the step frontal-face height have been investigated by employing the Direct Simulation Monte Carlo (DSMC) method. The work focuses the attention of designers of hypersonic configurations on the fundamental parameter of surface discontinuity, which can have an important impact even on initial designs. The results highlight the sensitivity of the primary flowfield properties (velocity, density, pressure, and temperature) to changes in the step frontal-face height. The analysis showed that the upstream disturbance in the gap/step configuration increased with increasing frontal-face height. In addition, it was observed that the separation region for the gap/step configuration increased with increasing step frontal-face height. It was found that density and pressure for the gap/step configuration dramatically increased inside the gap as compared to those observed for the gap configuration, i.e., a gap without a step.

  18. Geometry and Topology of Two-Dimensional Dry Foams: Computer Simulation and Experimental Characterization.

    PubMed

    Tong, Mingming; Cole, Katie; Brito-Parada, Pablo R; Neethling, Stephen; Cilliers, Jan J

    2017-04-05

    Pseudo-two-dimensional (2D) foams are commonly used in foam studies as it is experimentally easier to measure the bubble size distribution and other geometric and topological properties of these foams than it is for a 3D foam. Despite the widespread use of 2D foams in both simulation and experimental studies, many important geometric and topological relationships are still not well understood. Film size, for example, is a key parameter in the stability of bubbles and the overall structure of foams. The relationship between the size distribution of the films in a foam and that of the bubbles themselves is thus a key relationship in the modeling and simulation of unstable foams. This work uses structural simulation from Surface Evolver to statistically analyze this relationship and to ultimately formulate a relationship for the film size in 2D foams that is shown to be valid across a wide range of different bubble polydispersities. These results and other topological features are then validated using digital image analysis of experimental pseudo-2D foams produced in a vertical Hele-Shaw cell, which contains a monolayer of bubbles between two plates. From both the experimental and computational results, it is shown that there is a distribution of sizes that a film can adopt and that this distribution is very strongly dependent on the sizes of the two bubbles to which the film is attached, especially the smaller one, but that it is virtually independent of the underlying polydispersity of the foam.

  19. A Social Constructivist Approach to Computer-Mediated Instruction.

    ERIC Educational Resources Information Center

    Pear, Joseph J.; Crone-Todd, Darlene E.

    2002-01-01

    Describes a computer-mediated teaching system called computer-aided personalized system of instruction (CAPSI) that incorporates a social constructivist approach, maintaining that learning occurs primarily through a socially interactive process. Discusses use of CAPSI in an undergraduate course at the University of Manitoba that showed students…

  20. Computing 3-D steady supersonic flow via a new Lagrangian approach

    NASA Technical Reports Server (NTRS)

    Loh, C. Y.; Liou, M.-S.

    1993-01-01

    The new Lagrangian method introduced by Loh and Hui (1990) is extended to 3-D steady supersonic flow computation. Details of the conservation form, the implementation of the local Riemann solver, and the Godunov and high-resolution TVD schemes are presented. The new approach is robust yet accurate, capable of handling complicated geometry and interactions between discontinuous waves. It retains all the advantages claimed for the 2-D method of Loh and Hui, e.g., crisp resolution of a slip surface (contact discontinuity) and automatic grid generation along the stream.

  1. Computational Approaches for Understanding Energy Metabolism

    PubMed Central

    Shestov, Alexander A; Barker, Brandon; Gu, Zhenglong; Locasale, Jason W

    2013-01-01

    There has been a surge of interest in understanding the regulation of metabolic networks involved in disease in recent years. Quantitative models are increasingly being used to interrogate the metabolic pathways that are contained within this complex disease biology. At the core of this effort is the mathematical modeling of central carbon metabolism involving glycolysis and the citric acid cycle (referred to as energy metabolism). Here we discuss several approaches used to quantitatively model metabolic pathways relating to energy metabolism and discuss their formalisms, successes, and limitations. PMID:23897661

  2. In silico drug discovery approaches on grid computing infrastructures.

    PubMed

    Wolf, Antje; Shahid, Mohammad; Kasam, Vinod; Ziegler, Wolfgang; Hofmann-Apitius, Martin

    2010-02-01

    The first step in finding a "drug" is screening chemical compound databases against a protein target. In silico approaches like virtual screening by molecular docking are well established in modern drug discovery. As molecular databases of compounds and target structures become larger and more and more computational screening approaches are available, there is an increased need for compute power and more complex workflows. In this regard, computational Grids are well suited and offer seamless compute and storage capacity. In recent projects related to pharmaceutical research, the high computational and data storage demands of large-scale in silico drug discovery approaches have been addressed by using Grid computing infrastructures, in both the pharmaceutical industry and academic research. Grid infrastructures are part of the so-called eScience paradigm, where a digital infrastructure supports collaborative processes by providing relevant resources and tools for data- and compute-intensive applications. Substantial computing resources, large data collections, and services for data analysis are shared on the Grid infrastructure and can be mobilized on demand. This review gives an overview of the use of Grid computing for in silico drug discovery and tries to provide a vision of the future development of more complex and integrated workflows on Grids, spanning from target identification and target validation via protein-structure- and ligand-dependent screenings to advanced mining of large-scale in silico experiments.

  3. Computational approaches for RNA energy parameter estimation

    PubMed Central

    Andronescu, Mirela; Condon, Anne; Hoos, Holger H.; Mathews, David H.; Murphy, Kevin P.

    2010-01-01

    Methods for efficient and accurate prediction of RNA structure are increasingly valuable, given the current rapid advances in understanding the diverse functions of RNA molecules in the cell. To enhance the accuracy of secondary structure predictions, we developed and refined optimization techniques for the estimation of energy parameters. We build on two previous approaches to RNA free-energy parameter estimation: (1) the Constraint Generation (CG) method, which iteratively generates constraints that enforce known structures to have energies lower than other structures for the same molecule; and (2) the Boltzmann Likelihood (BL) method, which infers a set of RNA free-energy parameters that maximize the conditional likelihood of a set of reference RNA structures. Here, we extend these approaches in two main ways: We propose (1) a max-margin extension of CG, and (2) a novel linear Gaussian Bayesian network that models feature relationships, which effectively makes use of sparse data by sharing statistical strength between parameters. We obtain significant improvements in the accuracy of RNA minimum free-energy pseudoknot-free secondary structure prediction when measured on a comprehensive set of 2518 RNA molecules with reference structures. Our parameters can be used in conjunction with software that predicts RNA secondary structures, RNA hybridization, or ensembles of structures. Our data, software, results, and parameter sets in various formats are freely available at http://www.cs.ubc.ca/labs/beta/Projects/RNA-Params. PMID:20940338
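
    The constraint-generation idea can be caricatured in a few lines (a schematic stand-in, not the authors' optimizer, which solves a constrained quadratic program over thousands of structures): with energies linear in features, E(s) = theta . f(s), any violated "reference scores below alternative" constraint drives a perceptron-style update of theta.

        import numpy as np

        def cg_sketch(f_ref, f_alts, n_pass=100, margin=1.0, lr=0.01):
            """f_ref: (n_mol, n_feat) features of reference structures;
            f_alts: list of (n_alt_i, n_feat) features of competing structures."""
            theta = np.zeros(f_ref.shape[1])
            for _ in range(n_pass):
                for fr, fa in zip(f_ref, f_alts):
                    e_ref, e_alt = fr @ theta, fa @ theta
                    k = np.argmin(e_alt)             # most competitive alternative
                    if e_ref + margin > e_alt[k]:    # constraint E(ref) < E(alt) violated
                        theta += lr * (fa[k] - fr)   # push E(ref) down, E(alt_k) up
            return theta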

  4. Real geometry gyrokinetic PIC computations of ion turbulence in advanced tokamak discharges with SUMMIT/PG3EQ/NC

    NASA Astrophysics Data System (ADS)

    Leboeuf, Jean-Noel; Dimits, Andris; Shumaker, Dan

    2005-10-01

    Development of the PG3EQ/NC module within the SUMMIT gyrokinetic PIC FORTRAN 90 framework is largely complete. It provides SUMMIT with the capability of performing 3D nonlinear toroidal gyrokinetic computations of ion turbulence in real DIII-D geometry. PG3EQ/NC uses local, field line following, quasi-ballooning coordinates and a direct interface with DIII-D equilibrium data via the EFIT and ONETWO codes. In addition, Holger Saint John's PLOTEQ code is used to determine the (R, Z) position of each flux surface. SUMMIT computations initialized in this way have been carried out for shot 118561 at times 1450 and 2050 at many of the 51 flux surfaces from the core to the edge. Linear SUMMIT results will be compared to available data from calculations with the GKS code for the same discharges. Nonlinear SUMMIT results will also be compared with scattering measurements of turbulence, as well as with accessible measurements of fluctuation amplitudes and spectra from other diagnostics.

  5. Computational approach to the study of thermal spin crossover phenomena

    SciTech Connect

    Rudavskyi, Andrii; Broer, Ria; Sousa, Carmen

    2014-05-14

    The key parameters associated with the thermally induced spin crossover process have been calculated for a series of Fe(II) complexes with mono-, bi-, and tridentate ligands. Combining density functional theory calculations for the geometries and normal vibrational modes with highly correlated wave function methods for the energies allows us to accurately compute the entropy variation associated with the spin transition and the zero-point corrected energy difference between the low- and high-spin states. From these values, the transition temperature, T_{1/2}, is estimated for different compounds.
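
    The last step follows from equating the free energies of the two spin states: setting ΔG = ΔE - T·ΔS = 0 gives T_{1/2} = ΔE/ΔS. A numerical check with hypothetical values (not taken from the record):

        # Hypothetical zero-point corrected LS/HS energy gap and entropy change
        delta_E = 12.0e3             # J/mol
        delta_S = 60.0               # J/(mol K)
        T_half = delta_E / delta_S   # 200 K; the temperature where dG vanishes
        print(T_half)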

  6. Evaluation and optimization of the performance of frame geometries for lithium-ion battery application by computer simulation

    NASA Astrophysics Data System (ADS)

    Miranda, D.; Miranda, F.; Costa, C. M.; Almeida, A. M.; Lanceros-Méndez, S.

    2016-06-01

    Tailoring battery geometries is essential for many applications, as geometry influences the delivered capacity value. Two geometries, frame and conventional, have been studied and, for a given scan rate of 330C, the square frame shows a capacity value of 305.52 Ah·m⁻², which is 527 times higher than that of the conventional geometry, with the area of all components held constant.

  7. Multivariate analysis: A statistical approach for computations

    NASA Astrophysics Data System (ADS)

    Michu, Sachin; Kaushik, Vandana

    2014-10-01

    Multivariate analysis is a statistical approach commonly used in automotive diagnosis, education, cluster evaluation in finance, etc., and more recently in the health-related professions. The objective of the paper is to provide a detailed exploratory discussion of factor analysis (FA) in image retrieval and correlation analysis (CA) of network traffic. Image retrieval methods aim to retrieve relevant images from a collected database based on their content. The problem is made more difficult by the high dimension of the variable space in which the images are represented. Multivariate correlation analysis proposes an anomaly detection and analysis method based on the correlation coefficient matrix. Anomalous behaviors in the network include various attacks on the network, such as DDoS attacks and network scanning.

  8. Spatial stochastic and analytical approaches to describe the complex hydraulic variability inherent channel geometry

    NASA Astrophysics Data System (ADS)

    Hadadin, N.

    2011-07-01

    The effects of basin hydrology on channel hydraulic variability for incised streams were investigated using available field data sets and models of watershed hydrology and channel hydraulics for the Yazoo River Basin, USA. The study presents the hydraulic relations of bankfull discharge, channel width, mean depth, cross-sectional area, longitudinal slope, unit stream power, and runoff production as a function of drainage area using simple linear regression. The hydraulic geometry relations were developed for sixty-one streams, twenty of which are classified as channel evolution model (CEM) Types IV and V and forty-one of which are streams of CEM Types II and III. These relationships are invaluable to hydraulic and water resources engineers, hydrologists, and geomorphologists involved in stream restoration and protection. They can be used to assist in field identification of bankfull stage and stream dimensions in un-gauged watersheds, as well as in estimating the comparative stability of a stream channel. Results of this research show a good fit of the hydraulic geometry relationships in the Yazoo River Basin. The relations indicate that bankfull discharge, channel width, mean depth, and cross-sectional area have a stronger correlation to changes in drainage area than longitudinal slope, unit stream power, and runoff production for streams of CEM Types II and III. The hydraulic geometry relations show that runoff production, bankfull discharge, cross-sectional area, and unit stream power are much more responsive to changes in drainage area than are channel width, mean depth, and slope for streams of CEM Types IV and V. Also, the relations show that bankfull discharge and cross-sectional area are more responsive to changes in drainage area than are the other hydraulic variables for streams of CEM Types II and III. The greater the regression slope, the more responsive the variable is to changes in drainage area.
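
    Such downstream hydraulic geometry relations are power laws fit by simple linear regression in log-log space; a minimal sketch with illustrative numbers (not the Yazoo Basin data):

        import numpy as np

        def powerlaw_fit(area, var):
            """Fit var = a * area**b by regressing log(var) on log(area)."""
            b, log_a = np.polyfit(np.log(area), np.log(var), 1)
            return np.exp(log_a), b

        # Illustrative only: bankfull width (m) versus drainage area (km^2)
        A = np.array([5.0, 20.0, 80.0, 300.0, 1200.0])
        W = np.array([3.1, 6.0, 11.8, 22.5, 44.0])
        a, b = powerlaw_fit(A, W)
        print(a, b)   # the exponent b is the 'regression slope' discussed above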

  9. Geometry-driven diffusion: an alternative approach to image filtering/segmentation in diagnostic imaging

    NASA Astrophysics Data System (ADS)

    Bajla, Ivan

    1998-02-01

    The major goal of this survey is to provide the reader with the motivation for image filtering and segmentation in diagnostic imaging, with a brief overview of the state of the art in nonlinear filters based on geometry-driven diffusion (GDD), and with a possible generalization of GDD filtering towards the complex problem of image segmentation, stated as the minimization of particular energy functionals. An example of the application of GDD filtering to the 3D visualization of MRI data of the brain is presented and discussed in the paper.

  10. Ultrasonic approach for formation of erbium oxide nanoparticles with variable geometries.

    PubMed

    Radziuk, Darya; Skirtach, André; Gessner, Andre; Kumke, Michael U; Zhang, Wei; Möhwald, Helmuth; Shchukin, Dmitry

    2011-12-06

    Ultrasound (20 kHz, 29 W·cm⁻²) is employed to form three types of erbium oxide nanoparticles in the presence of multiwalled carbon nanotubes as a template material in water. The nanoparticles are (i) erbium carboxioxide nanoparticles deposited on the external walls of multiwalled carbon nanotubes, and Er₂O₃ in the bulk with (ii) hexagonal and (iii) spherical geometries. Each type of ultrasonically formed nanoparticle reveals Er³⁺ photoluminescence from the crystal lattice. The main advantage of the erbium carboxioxide nanoparticles on the carbon nanotubes is their electromagnetic emission in the visible region, which had not been examined before. On the other hand, the photoluminescence of the hexagonal erbium oxide nanoparticles is long-lived (μs) and enables the higher-energy transition (⁴S₃/₂ → ⁴I₁₅/₂), which is not observed for spherical nanoparticles. Our work is unique in that it combines, for the first time, spectroscopy of Er³⁺ electronic transitions in the host crystal lattices of nanoparticles with the geometry established by ultrasound in an aqueous solution of carbon nanotubes employed as a template material. The work can be of great interest for "green" chemistry synthesis of photoluminescent nanoparticles in water.

  11. Influence of Subducting Plate Geometry on Upper Plate Deformation at Orogen Syntaxes: A Thermomechanical Modeling Approach

    NASA Astrophysics Data System (ADS)

    Nettesheim, Matthias; Ehlers, Todd; Whipp, David

    2016-04-01

    Syntaxes are short, convex bends in the otherwise slightly concave plate boundaries of subduction zones. These regions are of scientific interest because some syntaxes (e.g., the Himalaya or the St. Elias region in Alaska) exhibit exceptionally rapid, focused rock uplift. These observations have led to a hypothesized connection between erosional and tectonic processes (a top-down control), but previous work has so far neglected the unique 3D geometry of the subducting plates at these locations. In this study, we contribute to this discussion by exploring the idea that subduction geometry may be sufficient to trigger focused tectonic uplift in the overriding plate (a bottom-up control). For this, we use a fully coupled 3D thermomechanical model that includes thermochronometric age prediction. The downgoing plate is approximated as a spherical indenter of high rigidity, whereas both viscous and visco-plastic material properties are used to model deformation in the overriding plate. We also consider the influence of the curvature of the subduction zone and the ratio of subduction velocity to subduction zone advance. We evaluate these models with respect to their effect on upper-plate exhumation rates and localization. Results indicate that increasing curvature of the indenter and a stronger upper crust lead to more focused tectonic uplift, whereas slab advance causes the uplift focus to migrate and thus may hinder the emergence of a positive feedback.

  12. Analytic reconstruction approach for parallel translational computed tomography.

    PubMed

    Kong, Huihua; Yu, Hengyong

    2015-01-01

    To develop low-cost and low-dose computed tomography (CT) scanners for developing countries, a parallel translational computed tomography (PTCT) was recently proposed, in which the source and detector are translated in opposite directions with respect to the imaged object, without a slip ring. In this paper, we develop an analytic filtered-backprojection (FBP)-type reconstruction algorithm for two-dimensional (2D) fan-beam PTCT and extend it to three-dimensional (3D) cone-beam geometry in a Feldkamp-type framework. In particular, a weighting function is constructed to handle data redundancy for multiple-translation PTCT and eliminate image artifacts. Extensive numerical simulations are performed to validate and evaluate the proposed analytic reconstruction algorithms, and the results confirm their correctness and merits.
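
    For orientation, a minimal parallel-beam FBP in NumPy (a generic textbook sketch, not the PTCT algorithm, which additionally handles the translational acquisition and the redundancy weighting described above):

        import numpy as np

        def fbp_parallel(sinogram, thetas):
            """Minimal filtered backprojection for parallel-beam data.
            sinogram: (n_det, n_angles); thetas: projection angles in radians."""
            n, n_ang = sinogram.shape
            ramp = np.abs(np.fft.fftfreq(n))                  # ramp filter
            filt = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=0)
                                       * ramp[:, None], axis=0))
            xs = np.arange(n) - n / 2.0
            X, Y = np.meshgrid(xs, xs)
            recon = np.zeros((n, n))
            for k, th in enumerate(thetas):
                t = X * np.cos(th) + Y * np.sin(th)           # detector coordinate
                idx = np.clip(np.round(t + n / 2.0).astype(int), 0, n - 1)
                recon += filt[idx, k]                         # nearest-neighbor BP
            return recon * np.pi / (2.0 * n_ang)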

  13. Propagation of computer virus both across the Internet and external computers: A complex-network approach

    NASA Astrophysics Data System (ADS)

    Gan, Chenquan; Yang, Xiaofan; Liu, Wanping; Zhu, Qingyi; Jin, Jian; He, Li

    2014-08-01

    Based on the assumption that external computers (particularly, infected external computers) are connected to the Internet, and by considering the influence of the Internet topology on computer virus spreading, this paper establishes a novel computer virus propagation model with a complex-network approach. This model possesses a unique (viral) equilibrium which is globally attractive. Some numerical simulations are also given to illustrate this result. Further study shows that the computers with higher node degrees are more susceptible to infection than those with lower node degrees. In this regard, some appropriate protective measures are suggested.
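
    The degree effect is easy to reproduce with a stochastic SIS-type simulation on a scale-free graph (an illustrative toy requiring the networkx package, not the paper's mean-field model; all rates are assumptions):

        import networkx as nx
        import numpy as np

        rng = np.random.default_rng(1)
        G = nx.barabasi_albert_graph(500, 3, seed=1)   # heavy-tailed degrees
        beta, delta = 0.05, 0.1                        # infection / cure probabilities
        infected = set(rng.choice(500, 10, replace=False))
        hits = np.zeros(500)
        for _ in range(300):
            new = set(infected)
            for u in infected:
                new.update(v for v in G.neighbors(u) if rng.random() < beta)
            infected = {u for u in new if rng.random() >= delta}
            for u in infected:
                hits[u] += 1
        deg = np.array([G.degree(u) for u in range(500)])
        # Mean infected fraction for low- versus high-degree nodes:
        print(hits[deg <= 5].mean() / 300, hits[deg > 5].mean() / 300)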

  14. On the Geometry of the Berry-Robbins Approach to Spin-Statistics

    NASA Astrophysics Data System (ADS)

    Papadopoulos, Nikolaos; Reyes-Lega, Andrés F.

    2010-07-01

    Within a geometric and algebraic framework, the structures related to the spin-statistics connection are discussed. A comparison with the Berry-Robbins approach is made. The underlying geometric structure constitutes additional support for this approach. In our work, a geometric approach to quantum indistinguishability is introduced which allows the treatment of single-valuedness of wave functions in a global, model-independent way.

  15. Speckle interferometry from fiber-reinforced materials: A fractal geometry approach

    NASA Astrophysics Data System (ADS)

    Horta, J. M.; Castano, V. M.

    Speckle field studies were performed on fiber-modified, Portland cement-based microconcrete beam models subjected to flexural loading. The resulting speckle fields were analyzed in terms of their associated mass fractal dimension using digital image processing techniques. The experiments showed a change in the fractal dimension of the speckle fields as a function of both the loading and the structure of the microconcrete beams. A study was also conducted on the free-damped frequencies of the beams, which allowed us to draw a fractal dimension vs. frequency plot for each loading cycle. These results suggest fractal geometry as a promising tool for better understanding the mechanical behavior of structures.

  16. Continuity of the maximum-entropy inference: Convex geometry and numerical ranges approach

    SciTech Connect

    Rodman, Leiba; Spitkovsky, Ilya M.; Szkoła, Arleta; Weis, Stephan

    2016-01-15

    We study the continuity of an abstract generalization of the maximum-entropy inference—a maximizer. It is defined as a right-inverse of a linear map restricted to a convex body which uniquely maximizes on each fiber of the linear map a continuous function on the convex body. Using convex geometry we prove, amongst others, the existence of discontinuities of the maximizer at limits of extremal points not being extremal points themselves and apply the result to quantum correlations. Further, we use numerical range methods in the case of quantum inference which refers to two observables. One result is a complete characterization of points of discontinuity for 3 × 3 matrices.

  17. Investigation of voxel warping and energy mapping approaches for fast 4D Monte Carlo dose calculations in deformed geometries using VMC++

    NASA Astrophysics Data System (ADS)

    Heath, Emily; Tessier, Frederic; Kawrakow, Iwan

    2011-08-01

    A new deformable geometry class for the VMC++ Monte Carlo code was implemented based on the voxel warping method. Alternative geometries using tetrahedral sub-elements were implemented, and efficiency improvements were investigated. A new energy mapping method, based on calculating the volume overlap between the deformed reference dose grid and the target dose grid, was also developed. Dose calculations using both the voxel warping and energy mapping methods were compared in simple phantoms as well as in a patient geometry. The new deformed geometry implementation in VMC++ increased calculation times by approximately a factor of 6 compared to standard VMC++ calculations in rectilinear geometries. However, the tetrahedron-based geometries were found to improve computational efficiency, relative to the dodecahedron-based geometry, by a factor of 2. When an exact transformation between the reference and target geometries was provided, the voxel and energy warping methods produced identical results. However, when the transformation was not exact, there were discrepancies in the energy deposited on the target geometry, which led to significant differences in the dose calculated by the two methods. Preliminary investigations indicate that these energy differences may correlate with registration errors; however, further work is needed to determine the usefulness of this metric for quantifying registration accuracy.
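
    The energy-mapping idea reduces, in one dimension, to distributing each deformed reference voxel's energy over target voxels in proportion to their overlap (a schematic, not the VMC++ implementation):

        import numpy as np

        def map_energy(ref_edges, energy, target_edges):
            """Distribute per-voxel energies from deformed reference bins onto
            target bins by overlap length (1D stand-in for volume overlap)."""
            out = np.zeros(len(target_edges) - 1)
            for i, e in enumerate(energy):
                a, b = ref_edges[i], ref_edges[i + 1]
                for j in range(len(out)):
                    lo, hi = target_edges[j], target_edges[j + 1]
                    ov = max(0.0, min(b, hi) - max(a, lo))   # bin overlap
                    out[j] += e * ov / (b - a)
            return out

        ref = np.array([0.0, 1.2, 2.1, 3.0])     # deformed reference voxel edges
        tgt = np.array([0.0, 1.0, 2.0, 3.0])     # regular target grid
        print(map_energy(ref, np.array([1.0, 1.0, 1.0]), tgt))  # energy conserved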

  18. Flyby Geometry Optimization Tool

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.

    2007-01-01

    The Flyby Geometry Optimization Tool is a computer program for computing trajectories and trajectory-altering impulsive maneuvers for spacecraft used in radio relay of scientific data to Earth from an exploratory airplane flying in the atmosphere of Mars.

  19. Spatio-temporal EEG source localization using a three-dimensional subspace FINE approach in a realistic geometry inhomogeneous head model.

    PubMed

    Ding, Lei; He, Bin

    2006-09-01

    The subspace source localization approach, i.e., first principle vectors (FINE), is able to enhance the spatial resolvability and localization accuracy for closely-spaced neural sources from EEG and MEG measurements. Computer simulations were conducted to evaluate the performance of the FINE algorithm in an inhomogeneous realistic geometry head model under a variety of conditions. The source localization abilities of FINE were examined at different cortical regions and at different depths. The present computer simulation results indicate that FINE has enhanced source localization capability, as compared with MUSIC and RAP-MUSIC, when sources are closely spaced, highly noise-contaminated, or inter-correlated. The source localization accuracy of FINE is better, for closely-spaced sources, than MUSIC at various noise levels, i.e., signal-to-noise ratio (SNR) from 6 dB to 16 dB, and RAP-MUSIC at relatively low noise levels, i.e., 6 dB to 12 dB. The FINE approach has been further applied to localize brain sources of motor potentials, obtained during the finger tapping tasks in a human subject. The experimental results suggest that the detailed neural activity distribution could be revealed by FINE. The present study suggests that FINE provides enhanced performance in localizing multiple closely spaced, and inter-correlated sources under low SNR, and may become an important alternative to brain source localization from EEG or MEG.

  20. A Comparative Study of Achievement in the Concepts of Fundamentals of Geometry Taught by Computer Managed Individualized Behavioral Objective Instructional Units Versus Lecture-Demonstration Methods of Instruction.

    ERIC Educational Resources Information Center

    Fisher, Merrill Edgar

    The purposes of this study were (1) to identify and compare the effect on student achievement of an individualized computer-managed geometry course, built on behavioral objectives, with traditional instructional methods; and (2) to identify how selected individual aptitudes interact with the two instructional modes. The subjects were…

  1. The Interpretative Flexibility, Instrumental Evolution, and Institutional Adoption of Mathematical Software in Educational Practice: The Examples of Computer Algebra and Dynamic Geometry

    ERIC Educational Resources Information Center

    Ruthven, Kenneth

    2008-01-01

    This article examines three important facets of the incorporation of new technologies into educational practice, focusing on emergent usages of the mathematical tools of computer algebra and dynamic geometry. First, it illustrates the interpretative flexibility of these tools, highlighting important differences in ways of conceptualizing and…

  2. A scalable and accurate method for classifying protein-ligand binding geometries using a MapReduce approach.

    PubMed

    Estrada, T; Zhang, B; Cicotti, P; Armen, R S; Taufer, M

    2012-07-01

    We present a scalable and accurate method for classifying protein-ligand binding geometries in molecular docking. Our method is a three-step process: the first step encodes the geometry of a three-dimensional (3D) ligand conformation into a single 3D point in the space; the second step builds an octree by assigning an octant identifier to every single point in the space under consideration; and the third step performs an octree-based clustering on the reduced conformation space and identifies the most dense octant. We adapt our method for MapReduce and implement it in Hadoop. The load-balancing, fault-tolerance, and scalability in MapReduce allow screening of very large conformation spaces not approachable with traditional clustering methods. We analyze results for docking trials for 23 protein-ligand complexes for HIV protease, 21 protein-ligand complexes for Trypsin, and 12 protein-ligand complexes for P38alpha kinase. We also analyze cross docking trials for 24 ligands, each docking into 24 protein conformations of the HIV protease, and receptor ensemble docking trials for 24 ligands, each docking in a pool of HIV protease receptors. Our method demonstrates significant improvement over energy-only scoring for the accurate identification of native ligand geometries in all these docking assessments. The advantages of our clustering approach make it attractive for complex applications in real-world drug design efforts. We demonstrate that our method is particularly useful for clustering docking results using a minimal ensemble of representative protein conformational states (receptor ensemble docking), which is now a common strategy to address protein flexibility in molecular docking.
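
    The octant-encoding step can be sketched as follows (a simplified, single-machine rendition of the Hadoop pipeline, with made-up data): the "map" emits an octant identifier per conformation point, and the "reduce" counts identifiers to find the densest octant.

        import numpy as np
        from collections import Counter

        def octant_id(p, lo, hi, depth):
            """Encode 3D point p into the id of its octant at the given depth."""
            lo, hi, oid = np.array(lo, float), np.array(hi, float), 0
            for _ in range(depth):
                mid = (lo + hi) / 2.0
                bits = (p >= mid).astype(int)          # one bit per axis
                oid = (oid << 3) | (bits[0] << 2) | (bits[1] << 1) | bits[2]
                lo = np.where(bits, mid, lo)           # descend into the sub-octant
                hi = np.where(bits, hi, mid)
            return oid

        # Map: encode each (stand-in) point; Reduce: count points per octant.
        pts = np.random.default_rng(0).random((1000, 3))
        counts = Counter(octant_id(p, [0, 0, 0], [1, 1, 1], depth=4) for p in pts)
        densest_octant, count = counts.most_common(1)[0]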

  3. A computational strategy for geometry optimization of ionic and covalent excited states, applied to butadiene and hexatriene.

    PubMed

    Boggio-Pasqua, Martial; Bearpark, Michael J; Klene, Michael; Robb, Michael A

    2004-05-01

    We propose a computational strategy that enables ionic and covalent ππ* excited states to be described in a balanced way. This strategy depends upon (1) the restricted active space self-consistent field method, in which the dynamic correlation between core σ and valence π electrons can be described by adding single σ excitations to all π configurations, and (2) the use of a new conventional one-electron basis set specifically designed for the description of valence ionic states. Together, these provide excitation energies comparable with more accurate and expensive ab initio methods, e.g., multiconfigurational second-order perturbation theory and multireference configuration interaction. Moreover, our strategy also allows full optimization of excited-state geometries, including conical intersections between ionic and covalent excited states, to be routinely carried out, thanks to the availability of analytical energy gradients. The prototype systems studied are the cis and trans isomers of butadiene and hexatriene, for which the ground 1A_{1/g}, the lower-lying dark (i.e., symmetry-forbidden covalent) 2A_{1/g}, and the spectroscopic 1B_{2/u} (valence ionic) states were investigated.

  4. Bending and twisting the embryonic heart: a computational model for c-looping based on realistic geometry

    PubMed Central

    Shi, Yunfei; Yao, Jiang; Young, Jonathan M.; Fee, Judy A.; Perucchio, Renato; Taber, Larry A.

    2014-01-01

    The morphogenetic process of cardiac looping transforms the straight heart tube into a curved tube that resembles the shape of the future four-chambered heart. Although great progress has been made in identifying the molecular and genetic factors involved in looping, the physical mechanisms that drive this process have remained poorly understood. Recent work, however, has shed new light on this complicated problem. After briefly reviewing the current state of knowledge, we propose a relatively comprehensive hypothesis for the mechanics of the first phase of looping, termed c-looping, as the straight heart tube deforms into a c-shaped tube. According to this hypothesis, differential hypertrophic growth in the myocardium supplies the main forces that cause the heart tube to bend ventrally, while regional growth and cytoskeletal contraction in the omphalomesenteric veins (primitive atria) and compressive loads exerted by the splanchnopleuric membrane drive rightward torsion. A computational model based on realistic embryonic heart geometry is used to test the physical plausibility of this hypothesis. The behavior of the model is in reasonable agreement with available experimental data from control and perturbed embryos, offering support for our hypothesis. The results also suggest, however, that several other mechanisms contribute secondarily to normal looping, and we speculate that these mechanisms play backup roles when looping is perturbed. Finally, some outstanding questions are discussed for future study. PMID:25161623

  5. Euclidean Geometry via Programming.

    ERIC Educational Resources Information Center

    Filimonov, Rossen; Kreith, Kurt

    1992-01-01

    Describes the Plane Geometry System computer software developed at the Educational Computer Systems laboratory in Sofia, Bulgaria. The system enables students to use the concept of "algorithm" to correspond to the process of "deductive proof" in the development of plane geometry. Provides an example of the software's capability…

  6. An approach to computing direction relations between separated object groups

    NASA Astrophysics Data System (ADS)

    Yan, H.; Wang, Z.; Li, J.

    2013-09-01

    Direction relations between object groups play an important role in qualitative spatial reasoning, spatial computation, and spatial recognition. However, none of the existing models can be used to compute direction relations between object groups. To fill this gap, an approach to computing direction relations between separated object groups is proposed in this paper, which is theoretically based on Gestalt principles and the idea of multi-directions. The approach firstly triangulates the two object groups, and then it constructs the Voronoi diagram between the two groups using the triangular network. After this, the normal of each Voronoi edge is calculated, and the quantitative expression of the direction relations is constructed. Finally, the quantitative direction relations are transformed into qualitative ones. The psychological experiments show that the proposed approach can obtain direction relations both between two single objects and between two object groups, and the results are correct from the point of view of spatial cognition.
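
    A rough SciPy rendition of the core idea (an approximation of the method: only finite Voronoi ridges separating the two groups are used, and their normals, which are parallel to the vector between the generating points, are averaged unweighted):

        import numpy as np
        from scipy.spatial import Voronoi

        def group_direction(pts_a, pts_b):
            """Mean unit normal of Voronoi edges separating group A from group B."""
            pts = np.vstack([pts_a, pts_b])
            na = len(pts_a)
            vor = Voronoi(pts)
            normals = []
            for (i, j), verts in zip(vor.ridge_points, vor.ridge_vertices):
                if -1 in verts or (i < na) == (j < na):
                    continue                  # skip unbounded or same-group ridges
                d = pts[j] - pts[i] if j >= na else pts[i] - pts[j]  # A -> B
                normals.append(d / np.linalg.norm(d))
            return np.mean(normals, axis=0)

        a = np.random.default_rng(0).random((20, 2))
        b = np.random.default_rng(1).random((20, 2)) + [3.0, 0.5]
        print(group_direction(a, b))   # points roughly from group a toward group b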

  7. An approach to computing direction relations between separated object groups

    NASA Astrophysics Data System (ADS)

    Yan, H.; Wang, Z.; Li, J.

    2013-06-01

    Direction relations between object groups play an important role in qualitative spatial reasoning, spatial computation, and spatial recognition. However, none of the existing models can be used to compute direction relations between object groups. To fill this gap, an approach to computing direction relations between separated object groups is proposed in this paper, which is theoretically based on Gestalt principles and the idea of multi-directions. The approach firstly triangulates the two object groups, and then it constructs the Voronoi diagram between the two groups using the triangular network. After this, the normal of each Voronoi edge is calculated, and the quantitative expression of the direction relations is constructed. Finally, the quantitative direction relations are transformed into qualitative ones. The psychological experiments show that the proposed approach can obtain direction relations both between two single objects and between two object groups, and the results are correct from the point of view of spatial cognition.

  8. A tale of three bio-inspired computational approaches

    NASA Astrophysics Data System (ADS)

    Schaffer, J. David

    2014-05-01

    I will provide a high-level walk-through of three computational approaches derived from Nature. First, evolutionary computation implements what we may call the "mother of all adaptive processes." Some variants on the basic algorithms will be sketched, and some lessons I have gleaned from three decades of working with EC will be covered. Second, neural networks are computational approaches that have long been studied as possible ways to make "thinking machines," an old dream of humankind, and are based upon the only known existing example of intelligence. I will then give a brief overview of attempts to combine these two approaches, which some hope will allow us to evolve machines we could never hand-craft. Finally, I will touch on artificial immune systems, Nature's highly sophisticated defense mechanism, which has emerged in two major stages, the innate and the adaptive immune systems. This technology is finding applications in the cyber security world.

  9. Sensing and perception: Connectionist approaches to subcognitive computing

    NASA Technical Reports Server (NTRS)

    Skrrypek, J.

    1987-01-01

    New approaches to machine sensing and perception are presented. The motivation for cross-disciplinary studies of perception in terms of AI and the neurosciences is discussed. The question of computing architecture granularity as related to the global/local computation underlying perceptual function is considered, and examples of two environments are given. Finally, examples of using one of the environments, UCLA PUNNS, to study neural architectures for visual function are presented.

  10. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1992-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  11. Higher spin approaches to quantum field theory and (pseudo)-Riemannian geometries

    NASA Astrophysics Data System (ADS)

    Hallowell, Karl Evan

    In this thesis, we study a number of higher spin quantum field theories and some of their algebraic and geometric consequences. These theories apply mostly either over constant curvature or more generally symmetric pseudo-Riemannian manifolds. The first part of this dissertation covers a superalgebra coming from a family of particle models over symmetric spaces. These theories are novel in that the symmetries of the (super)algebra osp(Q|2p) are larger and more elaborate than traditional symmetries. We construct useful (super)algebras related to and generalizing old work by Lichnerowicz and describe their role in developing the geometry of massless models with osp(Q|2p) symmetry. The result is two practical applications of these (super)algebras: (1) a much more concise description of a family of higher spin quantum field theories; and (2) an interesting algebraic probe of underlying background geometries. We also consider massive models over constant curvature spaces. We use a radial dimensional reduction process which converts massless models into massive ones over a lower dimensional space. In our case, we take from the family of theories above the particular free, massless model over flat space associated with sp(2,R) and derive a massive model. In the process, we develop a novel associative algebra, which is a deformation of the original differential operator algebra associated with the sp(2,R) model. This algebra is interesting in its own right since its operators realize the representation structure of the sp(2,R) group. The massive model also has implications for a sequence of unusual, "partially massless" theories. The derivation illuminates how reduced degrees of freedom become manifest in these particular models. Finally, we study a Yang-Mills model using an on-shell Poincare Yang-Mills twist of the Maxwell complex along with a non-minimal coupling. This is a special, higher spin case of a quantum field theory called a Yang-Mills detour complex

  12. Aircraft Engine Noise Scattering By Fuselage and Wings: A Computational Approach

    NASA Technical Reports Server (NTRS)

    Stanescu, D.; Hussaini, M. Y.; Farassat, F.

    2003-01-01

    The paper presents a time-domain method for computation of sound radiation from aircraft engine sources to the far-field. The effects of nonuniform flow around the aircraft and scattering of sound by fuselage and wings are accounted for in the formulation. The approach is based on the discretization of the inviscid flow equations through a collocation form of the Discontinuous Galerkin spectral element method. An isoparametric representation of the underlying geometry is used in order to take full advantage of the spectral accuracy of the method. Large-scale computations are made possible by a parallel implementation based on message passing. Results obtained for radiation from an axisymmetric nacelle alone are compared with those obtained when the same nacelle is installed in a generic configuration, with and without a wing.

  14. Aircraft Engine Noise Scattering by Fuselage and Wings: A Computational Approach

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Stanescu, D.; Hussaini, M. Y.

    2003-01-01

    The paper presents a time-domain method for computation of sound radiation from aircraft engine sources to the far field. The effects of non-uniform flow around the aircraft and scattering of sound by fuselage and wings are accounted for in the formulation. The approach is based on the discretization of the inviscid flow equations through a collocation form of the discontinuous Galerkin spectral element method. An isoparametric representation of the underlying geometry is used in order to take full advantage of the spectral accuracy of the method. Large-scale computations are made possible by a parallel implementation based on message passing. Results obtained for radiation from an axisymmetric nacelle alone are compared with those obtained when the same nacelle is installed in a generic configuration, with and without a wing.

  15. A distributed computing approach to mission operations support [for spacecraft]

    NASA Technical Reports Server (NTRS)

    Larsen, R. L.

    1975-01-01

    Computing support for mission operations includes orbit determination, attitude processing, maneuver computation, resource scheduling, etc. The large-scale third-generation distributed computer network discussed is capable of fulfilling these dynamic requirements. It is shown that distribution of resources and control leads to increased reliability and exhibits potential for incremental growth. Through functional specialization, a distributed system may be tuned to very specific operational requirements. Fundamental to the approach is the notion of process-to-process communication, which is effected through a high-bandwidth communications network. Both resource sharing and load sharing may be realized in the system.

  16. Comparison of kinetic and extended magnetohydrodynamics computational models for the linear ion temperature gradient instability in slab geometry

    NASA Astrophysics Data System (ADS)

    Schnack, D. D.; Cheng, J.; Barnes, D. C.; Parker, S. E.

    2013-06-01

    We perform linear stability studies of the ion temperature gradient (ITG) instability in unsheared slab geometry using kinetic and extended magnetohydrodynamics (MHD) models, in the regime k∥/k⊥ ≪ 1. The ITG is a parallel (to B) sound wave that may be destabilized by finite ion Larmor radius (FLR) effects in the presence of a gradient in the equilibrium ion temperature. The ITG is stable in both ideal and resistive MHD; for a given temperature scale length LTi0, instability requires that either k⊥ρi or ρi/LTi0 be sufficiently large. Kinetic models capture FLR effects to all orders in either parameter. In the extended MHD model, these effects are captured only to lowest order by means of the Braginskii ion gyro-viscous stress tensor and the ion diamagnetic heat flux. We present the linear electrostatic dispersion relations for the ITG for both kinetic Vlasov and extended MHD (two-fluid) models in the local approximation. In the low frequency fluid regime, these reduce to the same cubic equation for the complex eigenvalue ω = ωr + iγ. An explicit solution is derived for the growth rate and real frequency in this regime. These are found to depend on a single non-dimensional parameter. We also compute the eigenvalues and the eigenfunctions with the extended MHD code NIMROD, and a hybrid kinetic δf code that assumes six-dimensional Vlasov ions and isothermal fluid electrons, as functions of k⊥ρi and ρi/LTi0 using a spatially dependent equilibrium. These solutions are compared with each other, and with the predictions of the local kinetic and fluid dispersion relations. Kinetic and fluid calculations agree well at and near the marginal stability point, but diverge as k⊥ρi or ρi/LTi0 increases. There is good qualitative agreement between the models for the shape of the unstable global eigenfunction for LTi0/ρi = 30 and 20. The results quantify how far fluid calculations can be extended accurately into the kinetic regime. We conclude that for the linear ITG
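
    As a rough illustration of the last step, the sketch below solves a monic cubic dispersion relation numerically and extracts the growth rate as the largest imaginary part among the roots. The coefficients here are placeholders, not the ones derived in the paper, which depend on the single non-dimensional parameter mentioned above.

```python
import numpy as np

def growth_rate(c2, c1, c0):
    """Solve the cubic dispersion relation
        w**3 + c2*w**2 + c1*w + c0 = 0
    for the complex eigenvalue w = w_r + i*gamma and return the root
    with the largest imaginary part (the fastest-growing mode).  The
    coefficients stand in for the combination of k_perp*rho_i and
    rho_i/L_Ti that the local fluid limit would produce.
    """
    roots = np.roots([1.0, c2, c1, c0])
    return roots[np.argmax(roots.imag)]

# Example: a cubic with one real root and a complex-conjugate pair.
w = growth_rate(c2=0.0, c1=1.0, c0=0.5)
print(f"real frequency w_r = {w.real:.4f}, growth rate gamma = {w.imag:.4f}")
```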

  17. The method of characteristics and computational fluid dynamics applied to the prediction of underexpanded jet flows in annular geometry

    NASA Astrophysics Data System (ADS)

    Kim, Sangwon

    2005-11-01

    High pressure (3.4 MPa) injection from a shroud valve can improve natural gas engine efficiency by enhancing fuel-air mixing. Since the fuel jet issuing from the shroud valve has a nearly annular jet flow configuration, it is necessary to analyze the annular jet flow to understand the fuel jet behavior in the mixing process and to improve the shroud design for better mixing. The method of characteristics (MOC) was used as the primary modeling algorithm in this work, and Computational Fluid Dynamics (CFD) was used primarily to validate the MOC results. A consistent process for dealing with the coalescence of compression characteristic lines into a shock wave during the MOC computation was developed. By applying the shock polar in the pressure-flow-angle plane to the incident shock wave of an axisymmetric underexpanded jet and comparing with the triple point location found in experimental results, it was found that, for static pressure ratios of 2-50, the triple point of the jet was located where the flow angle behind the incident shock reached -5° relative to the axis, and that this point was situated between the von Neumann and detachment criteria on the incident shock. MOC computations of the jet flow with annular geometry were performed for pressure ratios of 10 and 20 with r_annulus = 10-50 units and Δr = 2 units. In this pressure ratio range, the MOC results did not predict a Mach disc in the core flow of the annular jet, but did indicate the formation of a Mach disc where the jet meets the axis of symmetry. The MOC results display the annular jet configurations clearly. Three types of nozzles for application to gas injectors (convergent-divergent nozzle, conical nozzle, and aerospike nozzle) were designed using the MOC and evaluated in on- and off-design conditions using CFD. The average axial momentum per unit mass was improved by 17 to 24% and the average kinetic energy per unit fuel mass was improved by 30 to 80% compared with a standard
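
    To make the MOC machinery concrete, here is a minimal sketch of the classical interior-point unit process for planar, irrotational, isentropic supersonic flow. The thesis treats axisymmetric and annular jets, which add an axisymmetric source term to the compatibility relations; that term, and the usual iteration with averaged characteristic angles, are omitted here, and all point states are made-up examples.

```python
import numpy as np
from scipy.optimize import brentq

G = 1.4  # ratio of specific heats

def prandtl_meyer(M):
    """Prandtl-Meyer function nu(M), in radians."""
    lam = np.sqrt((G - 1.0) / (G + 1.0))
    s = np.sqrt(M * M - 1.0)
    return np.arctan(lam * s) / lam - np.arctan(s)

def mach_from_nu(nu):
    return brentq(lambda M: prandtl_meyer(M) - nu, 1.0 + 1e-9, 50.0)

def mach_angle(M):
    return np.arcsin(1.0 / M)

def interior_point(p1, p2):
    """MOC unit process: a C- characteristic from upper point p1 and a
    C+ characteristic from lower point p2 meet at a new point 3.
    Compatibility relations for planar irrotational flow:
        theta + nu = const along C-,   theta - nu = const along C+.
    Slopes use the upstream angles only (no averaging iteration).
    """
    Km = p1["theta"] + prandtl_meyer(p1["M"])   # invariant on C-
    Kp = p2["theta"] - prandtl_meyer(p2["M"])   # invariant on C+
    theta3 = 0.5 * (Km + Kp)
    nu3 = 0.5 * (Km - Kp)
    M3 = mach_from_nu(nu3)
    # straight-segment intersection gives the location of point 3
    m1 = np.tan(p1["theta"] - mach_angle(p1["M"]))  # C- slope
    m2 = np.tan(p2["theta"] + mach_angle(p2["M"]))  # C+ slope
    x3 = (p1["y"] - p2["y"] + m2 * p2["x"] - m1 * p1["x"]) / (m2 - m1)
    y3 = p1["y"] + m1 * (x3 - p1["x"])
    return {"x": x3, "y": y3, "theta": theta3, "M": M3}

p1 = {"x": 0.0, "y": 1.0, "theta": np.radians(2.0), "M": 2.0}
p2 = {"x": 0.0, "y": 0.0, "theta": np.radians(-2.0), "M": 2.2}
print(interior_point(p1, p2))
```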

  18. Fractal geometry as a new approach for proving nanosimilarity: a reflection note.

    PubMed

    Demetzos, Costas; Pippa, Natassa

    2015-04-10

    Nanosimilars are considered as new medicinal outcomes combining the generic drug and the nanocarrier as an innovative excipient, in order to evaluate them as final products. They belong to the grey area, as far as the evaluation process is concerned, between generic drugs and biosimilar medicinal products. Generic drugs are well documented and a huge number of them are on the market, effectively replacing off-patent drugs. The scientific approach for releasing them to the market is based on bioequivalence studies, which are well documented and accepted by the regulatory agencies. On the other hand, the structural complexity of biological/biotechnology-derived products demands a new approach to the approval process, taking into consideration that bioequivalence studies are not considered as sufficient as they are for generic drugs, and new clinical trials are needed to support the approval of the product for the market. By the same token, due to the technological complexity of nanomedicines, the approaches for proving the statistical identity or the similarity used for generic and biosimilar products, respectively, with those of prototypes are not considered effective for nanosimilar products. The aim of this note is to propose a complementary approach that can provide realistic evidence concerning nanosimilarity, based on fractal analysis. This approach fits well with the structural complexity of nanomedicines and eases the difficulties of proving the similarity between off-patent and nanosimilar products. Fractal analysis could be considered as the approach that completely characterizes the physicochemical/morphological characteristics of nanosimilar products and could be proposed as a starting point for a deeper discussion of nanosimilarity.
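
    Fractal analysis of the kind proposed in this note usually begins with an estimate of a fractal dimension. Below is a minimal box-counting sketch; the input points are hypothetical stand-ins for digitized morphological data (e.g., pixels on a nanocarrier contour), and the sanity check uses a filled unit square, whose dimension should come out near 2.

```python
import numpy as np

def box_counting_dimension(points, eps_list):
    """Estimate the box-counting dimension of a 2-D point set: for each
    box size eps, count the occupied boxes N(eps); the dimension is the
    slope of log N(eps) against log(1/eps).
    """
    pts = np.asarray(points, dtype=float)
    pts = pts - pts.min(axis=0)          # shift into the first quadrant
    counts = []
    for eps in eps_list:
        boxes = set(map(tuple, np.floor(pts / eps).astype(int)))
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(eps_list)), np.log(counts), 1)
    return slope

# Sanity check on a filled unit square sample: dimension should be ~2.
rng = np.random.default_rng(0)
square = rng.random((20000, 2))
print(f"estimated dimension: {box_counting_dimension(square, [0.2, 0.1, 0.05, 0.025]):.2f}")
```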

  19. Novel Approaches to Quantum Computation Using Solid State Qubits

    DTIC Science & Technology

    2007-12-31

    [Report documentation fragment.] Recoverable references include: Han et al., A scheme for the teleportation of multiqubit quantum information via the control of many agents in a network, submitted to Phys. Lett. A; C.-P. Yang, S.-I. Chu, and S. Han, Phys. Rev. B 70, 094513 (2004); and C.-P. Yang, S.-I. Chu, and S. Han, Efficient many-party controlled teleportation of multiqubit quantum information. Period of performance: June 1, 2001 - September 30, 2007. Contract F49620.

  20. General Approach in Computing Sums of Products of Binary Sequences

    DTIC Science & Technology

    2011-12-08

    General Approach in Computing Sums of Products of Binary Sequences. E. Kiliç (TOBB Economics and Technology University, Mathematics) and P. Stănică (pstanica@nps.edu), December 8, 2011. Abstract: In this paper we give a general approach for finding closed forms of sums of products of arbitrary sequences satisfying the same recurrence with different initial conditions. We successfully apply our technique to sums of products of such sequences with indices in
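
    As a concrete instance of the kind of closed form at stake, the sketch below checks numerically that for the Fibonacci numbers F and Lucas numbers L, two sequences satisfying the same recurrence with different initial conditions, Σ_{i=1}^{n} F_i L_i = F_{2n+1} - 1 (a classical identity chosen for illustration, not necessarily one treated in the paper).

```python
def seq(a0, a1, n):
    """First n+1 terms of the recurrence x[k] = x[k-1] + x[k-2]."""
    x = [a0, a1]
    while len(x) <= n:
        x.append(x[-1] + x[-2])
    return x

n = 12
F = seq(0, 1, 2 * n + 1)   # Fibonacci: 0, 1, 1, 2, ...
L = seq(2, 1, n)           # Lucas:     2, 1, 3, 4, ...

lhs = sum(F[i] * L[i] for i in range(1, n + 1))
rhs = F[2 * n + 1] - 1     # closed form, using F_i * L_i = F_{2i}
assert lhs == rhs
print(n, lhs, rhs)
```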

  1. Design of specially adapted reactive coordinates to economically compute potential and kinetic energy operators including geometry relaxation

    NASA Astrophysics Data System (ADS)

    Thallmair, Sebastian; Roos, Matthias K.; de Vivie-Riedle, Regina

    2016-06-01

    Quantum dynamics simulations require prior knowledge of the potential energy surface as well as the kinetic energy operator. Typically, they are evaluated in a low-dimensional subspace of the full configuration space of the molecule, whose dimensionality increases in proportion to the number of atoms. This entails the challenge of finding the most suitable subspace. We present an approach to design specially adapted reactive coordinates spanning this subspace. In addition to the essential geometric changes, these coordinates take into account the relaxation of the non-reactive coordinates without the necessity of performing geometry optimizations at each grid point. The method is demonstrated for an ultrafast photoinduced bond cleavage in a commonly used organic precursor for the generation of electrophiles. The potential energy surfaces for the reaction as well as the Wilson G-matrix as part of the kinetic energy operator are shown for a complex chemical reaction, both including the relaxation of the non-reactive coordinates on equal footing. A microscopic interpretation of the shape of the G-matrix elements allows one to analyze the impact of the non-reactive coordinates on the kinetic energy operator. Additionally, we compare quantum dynamics simulations with and without the relaxation of the non-reactive coordinates included in the kinetic energy operator to demonstrate its influence.
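
    For orientation, the Wilson G-matrix entering the kinetic energy operator is G = B M⁻¹ Bᵀ, where B holds the derivatives of the internal coordinates with respect to the Cartesians and M is the diagonal mass matrix. The sketch below evaluates it by finite differences for a toy molecule with two bond lengths as coordinates; the paper's specially adapted reactive coordinates, which fold in the relaxation of the non-reactive modes, are not reproduced here.

```python
import numpy as np

def wilson_g(internal_coords, x0, masses, h=1e-6):
    """Wilson G-matrix  G = B M^{-1} B^T  by finite differences.

    internal_coords: function mapping flat Cartesians (3N,) to the
    chosen internal coordinates.  B-matrix rows are d q_i / d x_k;
    each atomic mass is repeated over its x, y, z components.
    """
    q0 = np.asarray(internal_coords(x0))
    B = np.empty((q0.size, x0.size))
    for k in range(x0.size):
        xp = x0.copy(); xp[k] += h
        xm = x0.copy(); xm[k] -= h
        B[:, k] = (np.asarray(internal_coords(xp)) - np.asarray(internal_coords(xm))) / (2 * h)
    inv_m = 1.0 / np.repeat(masses, 3)
    return (B * inv_m) @ B.T

# Toy water-like molecule: two O-H bond lengths as reactive coordinates.
def bonds(x):
    o, h1, h2 = x[0:3], x[3:6], x[6:9]
    return [np.linalg.norm(h1 - o), np.linalg.norm(h2 - o)]

x0 = np.array([0.0, 0.0, 0.0,  0.96, 0.0, 0.0,  -0.24, 0.93, 0.0])
masses = np.array([16.0, 1.0, 1.0])   # amu: O, H, H
print(wilson_g(bonds, x0, masses))
```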

  2. Design of specially adapted reactive coordinates to economically compute potential and kinetic energy operators including geometry relaxation.

    PubMed

    Thallmair, Sebastian; Roos, Matthias K; de Vivie-Riedle, Regina

    2016-06-21

    Quantum dynamics simulations require prior knowledge of the potential energy surface as well as the kinetic energy operator. Typically, they are evaluated in a low-dimensional subspace of the full configuration space of the molecule, whose dimensionality increases in proportion to the number of atoms. This entails the challenge of finding the most suitable subspace. We present an approach to design specially adapted reactive coordinates spanning this subspace. In addition to the essential geometric changes, these coordinates take into account the relaxation of the non-reactive coordinates without the necessity of performing geometry optimizations at each grid point. The method is demonstrated for an ultrafast photoinduced bond cleavage in a commonly used organic precursor for the generation of electrophiles. The potential energy surfaces for the reaction as well as the Wilson G-matrix as part of the kinetic energy operator are shown for a complex chemical reaction, both including the relaxation of the non-reactive coordinates on equal footing. A microscopic interpretation of the shape of the G-matrix elements allows one to analyze the impact of the non-reactive coordinates on the kinetic energy operator. Additionally, we compare quantum dynamics simulations with and without the relaxation of the non-reactive coordinates included in the kinetic energy operator to demonstrate its influence.

  3. Diversifying Our Perspectives on Mathematics about Space and Geometry: An Ecocultural Approach

    ERIC Educational Resources Information Center

    Owens, Kay

    2014-01-01

    School mathematics tends to have developed from the major cultures of Asia, the Mediterranean and Europe. However, indigenous cultures in particular may have distinctly different systematic ways of referring to space and thinking mathematically about spatial activity. Their approaches are based on the close link between the environment and…

  4. Connecting Geometry and Chemistry: A Three-Step Approach to Three-Dimensional Thinking

    ERIC Educational Resources Information Center

    Donaghy, Kelley J.; Saxton, Kathleen J.

    2012-01-01

    A three-step active-learning approach is described to enhance the spatial abilities of general chemistry students with respect to three-dimensional molecular drawing and visualization. These activities are used in a medium-sized lecture hall with approximately 150 students in the first semester of the general chemistry course. The first activity…

  5. Ring polymer chains confined in a slit geometry of two parallel walls: the massive field theory approach

    NASA Astrophysics Data System (ADS)

    Usatenko, Z.; Halun, J.

    2017-01-01

    The investigation of a dilute solution of phantom ideal ring polymer chains confined in a slit geometry of two parallel repulsive walls, two inert walls, and the mixed case of one inert and one repulsive wall was performed. Taking into account the well-known correspondence between the field-theoretical φ⁴ O(n)-vector model in the limit n → 0 and the behaviour of long flexible polymer chains in a good solvent, the investigation of a dilute solution of long flexible ring polymer chains with the excluded volume interaction (EVI) confined in a slit geometry of two parallel repulsive walls was performed in the framework of the massive field theory approach at fixed space dimension d = 3 up to one-loop order. For all the above-mentioned cases, the corresponding depletion interaction potentials, the depletion forces, and the forces which the phantom ideal ring polymers and the ring polymers with the EVI exert on the walls were calculated, respectively. The obtained results indicate that phantom ideal ring polymer chains and ring polymer chains with the EVI, due to the complexity of the chain topology and for entropic reasons, demonstrate completely different behaviour in confined geometries than linear polymer chains. For example, the phantom ideal ring polymers prefer to escape from the space not only between two repulsive walls but also in the case of two inert walls, which leads to attractive depletion forces. Ring polymer chains with less complex knot types (with a bigger radius of gyration) in a ring topology in the wide slit region exert higher forces on the confining repulsive walls. The depletion force in the case of mixed boundary conditions becomes repulsive, in contrast to the case of linear polymer chains.

  6. A new approach to compute accurate velocity of meteors

    NASA Astrophysics Data System (ADS)

    Egal, Auriane; Gural, Peter; Vaubaillon, Jeremie; Colas, Francois; Thuillot, William

    2016-10-01

    The CABERNET project was designed to push the limits of meteoroid orbit measurements by improving the determination of meteor velocities. Indeed, despite the development of camera networks dedicated to meteor observation, there is still an important discrepancy between the measured orbits of meteoroids and theoretical results. The gap between the observed and theoretical semi-major axes of the orbits is especially significant; an accurate determination of meteoroid orbits therefore largely depends on the computation of pre-atmospheric velocities. It is thus imperative to determine how to increase the precision of the velocity measurements. In this work, we analyze different methods currently used to compute meteor velocities and trajectories. They are based on the intersecting planes method developed by Ceplecha (1987), the least squares method of Borovicka (1990), and the multi-parameter fitting (MPF) method published by Gural (2012). In order to objectively compare the performance of these techniques, we have simulated realistic meteors ('fakeors') reproducing the measurement errors of many camera networks. Some fakeors are built following the propagation models studied by Gural (2012), and others are created by numerical integration using the Borovicka et al. 2007 model. Different optimization techniques have also been investigated in order to pick the most suitable one for solving the MPF, and the influence of the trajectory geometry on the result is also presented. We will present here the results of an improved implementation of the multi-parameter fitting that allows accurate orbit computation of meteors with CABERNET. The comparison of different velocity computations seems to show that while the MPF is by far the best method for solving the trajectory and velocity of a meteor, the ill-conditioning of the cost functions used can lead to large estimate errors for noisy
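
    To illustrate the fitting idea in miniature, the sketch below recovers a pre-atmospheric speed by least squares from synthetic along-track positions, using a deliberately simple constant-deceleration propagation model. The models compared in the abstract (Gural 2012; Borovicka et al. 2007) are considerably richer, and every number here is made up.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic along-track positions for a decelerating meteor.
rng = np.random.default_rng(1)
v0_true, a_true = 35.0, 4.0          # km/s, km/s^2 (made-up values)
t = np.linspace(0.0, 0.6, 40)        # observation times, s
s_obs = v0_true * t - 0.5 * a_true * t**2 + rng.normal(0.0, 0.02, t.size)

def residuals(p):
    """Misfit of the constant-deceleration model s(t) = v0*t - a*t^2/2."""
    v0, a = p
    return v0 * t - 0.5 * a * t**2 - s_obs

fit = least_squares(residuals, x0=[30.0, 1.0])
v0_fit, a_fit = fit.x
print(f"pre-atmospheric speed ~ {v0_fit:.2f} km/s (true {v0_true}), "
      f"deceleration ~ {a_fit:.2f} km/s^2 (true {a_true})")
```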

  7. New Theoretical Approaches for Human-Computer Interaction.

    ERIC Educational Resources Information Center

    Rogers, Yvonne

    2004-01-01

    Presents a critique of recent theoretical developments in the field of human-computer interaction (HCI) together with an overview of HCI practice. This chapter discusses why theoretically based approaches have had little impact on the practice of interaction design and suggests mechanisms to enable designers and researchers to better articulate…

  8. WebMTA: a web-interface for ab initio geometry optimization of large molecules using molecular tailoring approach.

    PubMed

    Kavathekar, Ritwik; Khire, Subodh; Ganesh, V; Rahalkar, Anuja P; Gadre, Shridhar R

    2009-05-01

    A web interface for geometry optimization of large molecules using a linear scaling method, i.e., the cardinality guided molecular tailoring approach (CG-MTA), is presented. CG-MTA is a cut-and-stitch, fragmentation-based method developed in our laboratory for linear scaling of conventional ab initio techniques. This interface provides limited access to CG-MTA-enabled GAMESS. It can be used to obtain fragmentation schemes for a given spatially extended molecule depending on the maximum allowed fragment size and minimum cut radius values provided by the user. Currently, we support submission of single point or geometry optimization jobs at Hartree-Fock and density functional theory levels of theory for systems containing between 80 and 200 first-row atoms and comprising up to 1000 basis functions. The graphical user interface is built using HTML with Python at the back end. The back end farms out the jobs on an in-house Linux-based cluster running on Pentium-4 class or higher machines using an @Home-based parallelization scheme (http://chem.unipune.ernet.in/~tcg/mtaweb/).

  9. Predicting relative permeability from water retention: A direct approach based on fractal geometry

    NASA Astrophysics Data System (ADS)

    Cihan, Abdullah; Tyner, John S.; Perfect, Edmund

    2009-04-01

    Commonly, a soil's relative permeability curve is predicted from its measured water retention curve by fitting equations that share parameters between the two curves (e.g., Brooks/Corey-Mualem and van Genuchten-Mualem). We present a new approach to predict relative permeability by direct application of measured soil water retention data without any fitting procedures. The new relative permeability model, derived from a probabilistic fractal approach, appears in series form as a function of suction and the incremental change in water content. This discrete approach describes the drained pore space and permeability at different suctions, incorporating the effects of both pore size distribution and connectivity among water-filled pores. We compared the new model's performance in predicting relative permeability with that of the van Genuchten-Mualem (VG-M) model for 35 paired data sets from the Unsaturated Soil Hydraulic Database (UNSODA) and five other previously published data sets. At the 5% level of significance, the new method predicts relative permeabilities from the UNSODA database significantly better (mean logarithmic root-mean-square error, LRMSE = 0.813) than the VG-M model (LRMSE = 1.555). Each prediction of relative permeability from the five other previously published data sets was also significantly better.
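
    The flavor of a direct, fitting-free evaluation can be conveyed with a generic capillary-bundle discretization: each increment of drained water content contributes a conductance proportional to Δθ/ψ² (pore radius scaling as 1/ψ), and relative permeability at a given suction sums the contributions of the still-filled pore classes. This is a sketch in the spirit of the paper's series, not its exact fractal-connectivity form, and the retention data below are hypothetical.

```python
import numpy as np

def kr_from_retention(psi, theta):
    """Relative permeability directly from retention data (psi, theta),
    ordered wet-to-dry, via a discrete capillary-bundle sum.
    """
    psi, theta = np.asarray(psi, float), np.asarray(theta, float)
    d_theta = theta[:-1] - theta[1:]          # water drained per step
    psi_mid = 0.5 * (psi[:-1] + psi[1:])      # midpoint suction of each class
    w = d_theta / psi_mid**2                  # conductance of each pore class
    # at suction psi[j], classes j..end (smaller pores) are still filled
    kr = np.array([w[j:].sum() for j in range(w.size)]) / w.sum()
    return psi[:-1], kr

# Hypothetical retention measurements (suction in cm, water content).
psi = [2, 5, 10, 30, 100, 300, 1000]
theta = [0.42, 0.40, 0.36, 0.28, 0.20, 0.14, 0.10]
for p, k in zip(*kr_from_retention(psi, theta)):
    print(f"psi = {p:6.1f} cm   kr = {k:.3f}")
```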

  10. SU-C-207-01: Four-Dimensional Inverse Geometry Computed Tomography: Concept and Its Validation

    SciTech Connect

    Kim, K; Kim, D; Kim, T; Kang, S; Cho, M; Shin, D; Suh, T

    2015-06-15

    Purpose: In the past few years, the inverse geometry computed tomography (IGCT) system has been developed to overcome shortcomings of the conventional computed tomography (CT) system, such as the scatter problem induced by large detector size and cone-beam artifact. In this study, we present a concept for a four-dimensional (4D) IGCT system that retains these advantages while adding temporal resolution for dynamic studies and reducing motion artifact. Methods: In contrast to a conventional CT system, the projection data at a given angle in IGCT form a group of fractionated narrow cone-beam projections, a projection group (PG), acquired from a multi-source array whose sources fire sequentially with an extremely short time gap. For 4D IGCT imaging, the time-related data acquisition parameters were determined by combining the multi-source scanning time for collecting one PG with a conventional 4D CBCT data acquisition sequence. Over a gantry rotation, the PGs acquired from the multi-source array were tagged with time and angle for 4D image reconstruction. The acquired PGs were sorted into 10 phases, and image reconstruction was performed independently for each phase using an algorithm based on filtered backprojection. Results: The 4D IGCT produced uniform images without cone-beam artifact, in contrast to the 4D CBCT image. In addition, the 4D IGCT images of each phase showed no significant motion-induced artifact compared with 3D CT. Conclusion: The 4D IGCT images appear to give relatively accurate dynamic information of patient anatomy, being more robust to motion artifact than 3D CT. This should be useful for dynamic studies and respiratory-correlated radiation therapy. This work was supported by the Industrial R&D program of MOTIE/KEIT [10048997, Development of the core technology for integrated therapy devices based on real-time MRI guided tumor tracking] and the Mid-career Researcher Program (2014R1A2A1A
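
    The phase-sorting step can be illustrated with a toy binning routine that assumes a known, perfectly regular breathing period; a real system would derive the phase from a measured respiratory signal.

```python
import numpy as np

def sort_into_phase_bins(timestamps, period, n_bins=10):
    """Assign projection groups (PGs) to respiratory phase bins: the
    phase is the position of each acquisition time within the breathing
    period, split into n_bins equal bins.
    """
    phase = (np.asarray(timestamps) % period) / period     # in [0, 1)
    return np.floor(phase * n_bins).astype(int)

# One gantry rotation's worth of PG timestamps (hypothetical numbers).
t_pg = np.arange(0.0, 60.0, 0.25)      # a PG every 0.25 s for 60 s
bins = sort_into_phase_bins(t_pg, period=4.0, n_bins=10)
print(np.bincount(bins))               # PGs available per phase bin
```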

  11. Computational intelligence approaches for pattern discovery in biological systems.

    PubMed

    Fogel, Gary B

    2008-07-01

    Biology, chemistry and medicine are faced by tremendous challenges caused by an overwhelming amount of data and the need for rapid interpretation. Computational intelligence (CI) approaches such as artificial neural networks, fuzzy systems and evolutionary computation are being used with increasing frequency to contend with this problem, in light of noise, non-linearity and temporal dynamics in the data. Such methods can be used to develop robust models of processes either on their own or in combination with standard statistical approaches. This is especially true for database mining, where modeling is a key component of scientific understanding. This review provides an introduction to current CI methods, their application to biological problems, and concludes with a commentary about the anticipated impact of these approaches in bioinformatics.

  12. Cloud computing approaches to accelerate drug discovery value chain.

    PubMed

    Garg, Vibhav; Arora, Suchir; Gupta, Chitra

    2011-12-01

    Continued advancements in technology have helped high throughput screening (HTS) evolve from a linear to a parallel approach by performing system-level screening. Advanced experimental methods used for HTS at various steps of drug discovery (i.e. target identification, target validation, lead identification and lead validation) can generate data of the order of terabytes. As a consequence, there is a pressing need to store, manage, mine and analyze this data to identify informational tags. This need in turn challenges computer scientists to offer matching hardware and software infrastructure while managing the varying degrees of desired computational power. Therefore, the potential of "On-Demand Hardware" and "Software as a Service (SAAS)" delivery mechanisms cannot be denied. This on-demand computing, largely referred to as Cloud Computing, is now transforming drug discovery research. Also, the integration of Cloud computing with parallel computing is certainly expanding its footprint in the life sciences community. The speed, efficiency and cost effectiveness have made cloud computing a 'good to have tool' for researchers, providing them significant flexibility and allowing them to focus on the 'what' of science and not the 'how'. Once it reaches maturity, the Discovery-Cloud would fit best to manage drug discovery and clinical development data generated using advanced HTS techniques, supporting the vision of personalized medicine.

  13. The geometry of distributional preferences and a non-parametric identification approach: The Equality Equivalence Test.

    PubMed

    Kerschbamer, Rudolf

    2015-05-01

    This paper proposes a geometric delineation of distributional preference types and a non-parametric approach for their identification in a two-person context. It starts with a small set of assumptions on preferences and shows that this set (i) naturally results in a taxonomy of distributional archetypes that nests all empirically relevant types considered in previous work; and (ii) gives rise to a clean experimental identification procedure - the Equality Equivalence Test - that discriminates between archetypes according to core features of preferences rather than properties of specific modeling variants. As a by-product the test yields a two-dimensional index of preference intensity.

  14. The geometry of distributional preferences and a non-parametric identification approach: The Equality Equivalence Test

    PubMed Central

    Kerschbamer, Rudolf

    2015-01-01

    This paper proposes a geometric delineation of distributional preference types and a non-parametric approach for their identification in a two-person context. It starts with a small set of assumptions on preferences and shows that this set (i) naturally results in a taxonomy of distributional archetypes that nests all empirically relevant types considered in previous work; and (ii) gives rise to a clean experimental identification procedure – the Equality Equivalence Test – that discriminates between archetypes according to core features of preferences rather than properties of specific modeling variants. As a by-product the test yields a two-dimensional index of preference intensity. PMID:26089571

  15. Creation of an idealized nasopharynx geometry for accurate computational fluid dynamics simulations of nasal airflow in patient-specific models lacking the nasopharynx anatomy.

    PubMed

    A T Borojeni, Azadeh; Frank-Ito, Dennis O; Kimbell, Julia S; Rhee, John S; Garcia, Guilherme J M

    2016-08-15

    Virtual surgery planning based on computational fluid dynamics (CFD) simulations has the potential to improve surgical outcomes for nasal airway obstruction patients, but the benefits of virtual surgery planning must outweigh the risks of radiation exposure. Cone beam computed tomography (CT) scans represent an attractive imaging modality for virtual surgery planning due to lower costs and lower radiation exposures compared with conventional CT scans. However, to minimize the radiation exposure, the cone beam CT sinusitis protocol sometimes images only the nasal cavity, excluding the nasopharynx. The goal of this study was to develop an idealized nasopharynx geometry for accurate representation of outlet boundary conditions when the nasopharynx geometry is unavailable. Anatomically accurate models of the nasopharynx created from 30 CT scans were intersected with planes rotated at different angles to obtain an average geometry. Cross sections of the idealized nasopharynx were approximated as ellipses with cross-sectional areas and aspect ratios equal to the average in the actual patient-specific models. CFD simulations were performed to investigate whether nasal airflow patterns were affected when the CT-based nasopharynx was replaced by the idealized nasopharynx in 10 nasal airway obstruction patients. Despite the simple form of the idealized geometry, all biophysical variables (nasal resistance, airflow rate, and heat fluxes) were very similar in the idealized vs patient-specific models. The results confirmed the expectation that the nasopharynx geometry has a minimal effect in the nasal airflow patterns during inspiration. The idealized nasopharynx geometry will be useful in future CFD studies of nasal airflow based on medical images that exclude the nasopharynx.
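
    The elliptical cross sections are fixed by two numbers each, since area = πab and the aspect ratio r = a/b give b = sqrt(area/(πr)) and a = rb. A small sketch with hypothetical values:

```python
import numpy as np

def ellipse_semi_axes(area, aspect_ratio):
    """Semi-axes (a, b) of an ellipse with given cross-sectional area
    and aspect ratio r = a/b, from area = pi*a*b and a = r*b.
    """
    b = np.sqrt(area / (np.pi * aspect_ratio))
    return aspect_ratio * b, b

# Hypothetical averaged cross section: 180 mm^2 with aspect ratio 2.5.
a, b = ellipse_semi_axes(180.0, 2.5)
print(f"a = {a:.2f} mm, b = {b:.2f} mm, area check = {np.pi*a*b:.1f} mm^2")
```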

  16. Numerical approach to reproduce instabilities of partial cavitation in a Venturi 8° geometry

    NASA Astrophysics Data System (ADS)

    Charriere, Boris; Goncalves, Eric

    2016-11-01

    Unsteady partial cavitation mainly takes the form of an attached cavity that exhibits periodic oscillations. Under certain conditions, the instabilities are characterized by the formation of vapour clouds, convected downstream of the cavity, which collapse in a higher-pressure region. In order to gain a better understanding of the complex physics involved, many experimental and numerical studies have been carried out. These identified two main mechanisms responsible for the break-off cycles. The development of a liquid re-entrant jet is the most common type of instability, but more recently the role of pressure waves created by the cloud collapses has been highlighted. This paper presents a one-fluid compressible Reynolds-Averaged Navier-Stokes (RANS) solver closed by two different equations of state (EOS) for the mixture. Based on experimental data, we investigate the ability of our simulations to reproduce the instabilities of a self-sustained oscillating cavitation pocket. Two cavitation models are first compared. The importance of considering a non-equilibrium state for the vapour phase is also exhibited. Finally, the role played by the added transport equation used to compute the void ratio is emphasised. In the case of partially cavitating flows with detached cavitation clouds, the reproduction of convective mechanisms is clearly improved.

  17. Non-invasive Assessment of Lower Limb Geometry and Strength Using Hip Structural Analysis and Peripheral Quantitative Computed Tomography: A Population-Based Comparison.

    PubMed

    Litwic, A E; Clynes, M; Denison, H J; Jameson, K A; Edwards, M H; Sayer, A A; Taylor, P; Cooper, C; Dennison, E M

    2016-02-01

    Hip fracture is the most significant complication of osteoporosis in terms of mortality, long-term disability and decreased quality of life. In recent years, different techniques have been developed to assess lower limb strength and ultimately fracture risk. Here we examine relationships between two measures of lower limb bone geometry and strength: proximal femoral geometry and tibial peripheral quantitative computed tomography (pQCT). We studied a sample of 431 women and 488 men aged 59-71 years. The hip structural analysis (HSA) programme was employed to measure the structural geometry of the left hip for each DXA scan obtained using a Hologic QDR 4500 instrument, while pQCT measurements of the tibia were obtained using a Stratec 2000 instrument in the same population. We observed strong sex differences in proximal femoral geometry at the narrow neck, intertrochanteric and femoral shaft regions. There were significant (p < 0.001) associations between pQCT-derived measures of bone geometry (tibial width, endocortical diameter and cortical thickness) and bone strength (strength-strain index) and each corresponding HSA variable (all p < 0.001) in both men and women. These results demonstrate strong correlations between two different methods of assessment of lower limb bone strength: HSA and pQCT. Validation in prospective cohorts to study associations of each with incident fracture is now indicated.

  18. Topological expansion of the β-ensemble model and quantum algebraic geometry in the sectorwise approach

    NASA Astrophysics Data System (ADS)

    Chekhov, L. O.; Eynard, B.; Marchal, O.

    2011-02-01

    We construct the solution of the loop equations of the β-ensemble model in a form analogous to the solution in the case of the Hermitian matrices β = 1. The solution for β = 1 is expressed in terms of the algebraic spectral curve given by y² = U(x). The spectral curve for arbitrary β converts into the Schrödinger equation ((ħ∂)² - U(x))ψ(x) = 0, where ħ ∝ (√β - 1/√β)/N. The basic ingredients of the method based on the algebraic solution retain their meaning, but we use an alternative approach to construct a solution of the loop equations in which the resolvents are given separately in each sector. Although this approach turns out to be more involved technically, it allows consistently defining the B-cycle structure for constructing the quantum algebraic curve (a D-module of the form y² - U(x), where [y, x] = ħ) and explicitly writing the correlation functions and the corresponding symplectic invariants F_h, or the terms of the free energy, in a 1/N²-expansion at arbitrary ħ. The set of "flat" coordinates includes the potential times t_k and the occupation numbers ε̃_α. We define and investigate the properties of the A- and B-cycles, forms of the first, second, and third kinds, and the Riemann bilinear identities. These identities allow finding the singular part of F₀, which depends only on ε̃_α.

  19. Analytical and Computational Properties of Distributed Approaches to MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    Historical evolution of engineering disciplines and the complexity of the MDO problem suggest that disciplinary autonomy is a desirable goal in formulating and solving MDO problems. We examine the notion of disciplinary autonomy and discuss the analytical properties of three approaches to formulating and solving MDO problems that achieve varying degrees of autonomy by distributing the problem along disciplinary lines. Two of the approaches, Optimization by Linear Decomposition and Collaborative Optimization, are based on bi-level optimization and reflect what we call a structural perspective. The third approach, Distributed Analysis Optimization, is a single-level approach that arises from what we call an algorithmic perspective. The main conclusion of the paper is that disciplinary autonomy may come at a price: in the bi-level approaches, the system-level constraints introduced to relax the interdisciplinary coupling and enable disciplinary autonomy can cause analytical and computational difficulties for optimization algorithms. The single-level alternative we discuss affords a more limited degree of autonomy than that of the bi-level approaches, but without the computational difficulties of the bi-level methods. Key words: autonomy, bi-level optimization, distributed optimization, multidisciplinary optimization, multilevel optimization, nonlinear programming, problem integration, system synthesis.

  20. One approach for evaluating the Distributed Computing Design System (DCDS)

    NASA Technical Reports Server (NTRS)

    Ellis, J. T.

    1985-01-01

    The Distributed Computer Design System (DCDS) provides an integrated environment to support the life cycle of developing real-time distributed computing systems. The primary focus of DCDS is to significantly increase system reliability and software development productivity, and to minimize schedule and cost risk. DCDS consists of integrated methodologies, languages, and tools to support the life cycle of developing distributed software and systems. Smooth and well-defined transitions from phase to phase, language to language, and tool to tool provide a unique and unified environment. An approach to evaluating DCDS highlights its benefits.

  1. The DYNAMO Simulation Language--An Alternate Approach to Computer Science Education.

    ERIC Educational Resources Information Center

    Bronson, Richard

    1986-01-01

    Suggests the use of computer simulation of continuous systems as a problem solving approach to computer languages. Outlines the procedures that the system dynamics approach employs in computer simulations. Explains the advantages of the special purpose language, DYNAMO. (ML)

  2. A Computational Approach for Model Update of an LS-DYNA Energy Absorbing Cell

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Jackson, Karen E.; Kellas, Sotiris

    2008-01-01

    NASA and its contractors are working on structural concepts for absorbing the impact energy of aerospace vehicles. Recently, concepts in the form of multi-cell honeycomb-like structures designed to crush under load have been investigated for both space and aeronautics applications. Efforts to understand these concepts are progressing from tests of individual cells to tests of systems with hundreds of cells. Because of fabrication irregularities, geometry irregularities, and material property uncertainties, the problem of reconciling analytical models, in particular LS-DYNA models, with experimental data is a challenge. A first look at the correlation between single-cell load/deflection data and LS-DYNA predictions showed problems, which prompted additional work in this area. This paper describes a computational approach that uses analysis of variance, deterministic sampling techniques, response surface modeling, and genetic optimization to reconcile test with analysis results. Analysis of variance provides a screening technique for selecting the critical parameters used when reconciling test with analysis. In this study, complete ignorance of the parameter distribution is assumed and, therefore, any parameter value within the range computed using the optimization procedure is considered to be equally likely. Mean values from tests are matched against LS-DYNA solutions by minimizing the square error using genetic optimization. The paper presents the computational methodology along with results obtained using this approach.
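
    A schematic of the reconciliation loop is sketched below, with a cheap algebraic surrogate standing in for the LS-DYNA response surface and scipy's differential evolution standing in for the genetic optimizer; parameter names, bounds, and test values are all made up.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical stand-in for an LS-DYNA run: a cheap surrogate mapping two
# uncertain cell parameters (wall thickness, yield stress) to a predicted
# crush force.  In the actual workflow this would be a response surface
# fitted to deterministic samples of the finite element model.
def surrogate_crush_force(thickness, yield_stress):
    return 1.8e3 * thickness**1.5 * (yield_stress / 200.0)

test_mean = 950.0   # mean crush force from cell tests (made-up value)

def sq_error(p):
    """Squared error between surrogate prediction and the test mean."""
    return (surrogate_crush_force(*p) - test_mean) ** 2

# Parameters are only known to lie in a range; any value inside the
# bounds is treated as equally likely, mirroring the paper's assumption.
bounds = [(0.4, 1.2),      # wall thickness, mm
          (150.0, 280.0)]  # yield stress, MPa
res = differential_evolution(sq_error, bounds, seed=3, tol=1e-10)
print("reconciled parameters:", res.x, "residual:", res.fun)
```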

  3. Ab initio and density functional computations of the vibrational spectrum, molecular geometry and some molecular properties of the antidepressant drug sertraline (Zoloft) hydrochloride.

    PubMed

    Sagdinc, Seda; Kandemirli, Fatma; Bayari, Sevgi Haman

    2007-02-01

    Sertraline hydrochloride is a highly potent and selective inhibitor of serotonin (5HT). It is a basic compound of pharmaceutical application for antidepressant treatment (brand name: Zoloft). Ab initio and density functional computations of the vibrational (IR) spectrum, the molecular geometry, the atomic charges and polarizabilities were carried out. The infrared spectrum of sertraline is recorded in the solid state. The observed IR wave numbers were analysed in light of the computed vibrational spectrum. On the basis of the comparison between calculated and experimental results and the comparison with related molecules, assignments of fundamental vibrational modes are examined. The X-ray geometry and experimental frequencies are compared with the results of our theoretical calculations.

  4. Ab initio and density functional computations of the vibrational spectrum, molecular geometry and some molecular properties of the antidepressant drug sertraline (Zoloft) hydrochloride

    NASA Astrophysics Data System (ADS)

    Sagdinc, Seda; Kandemirli, Fatma; Bayari, Sevgi Haman

    2007-02-01

    Sertraline hydrochloride is a highly potent and selective inhibitor of serotonin (5HT). It is a basic compound of pharmaceutical application for antidepressant treatment (brand name: Zoloft). Ab initio and density functional computations of the vibrational (IR) spectrum, the molecular geometry, the atomic charges and polarizabilities were carried out. The infrared spectrum of sertraline is recorded in the solid state. The observed IR wave numbers were analysed in light of the computed vibrational spectrum. On the basis of the comparison between calculated and experimental results and the comparison with related molecules, assignments of fundamental vibrational modes are examined. The X-ray geometry and experimental frequencies are compared with the results of our theoretical calculations.

  5. Protein Engineering by Combined Computational and In Vitro Evolution Approaches.

    PubMed

    Rosenfeld, Lior; Heyne, Michael; Shifman, Julia M; Papo, Niv

    2016-05-01

    Two alternative strategies are commonly used to study protein-protein interactions (PPIs) and to engineer protein-based inhibitors. In one approach, binders are selected experimentally from combinatorial libraries of protein mutants that are displayed on a cell surface. In the other approach, computational modeling is used to explore an astronomically large number of protein sequences to select a small number of sequences for experimental testing. While both approaches have some limitations, their combination produces superior results in various protein engineering applications. Such applications include the design of novel binders and inhibitors, the enhancement of affinity and specificity, and the mapping of binding epitopes. The combination of these approaches also aids in the understanding of the specificity profiles of various PPIs.

  6. Dynamics and friction drag behavior of viscoelastic flows in complex geometries: A multiscale simulation approach

    NASA Astrophysics Data System (ADS)

    Koppol, Anantha Padmanabha Rao

    Flows of viscoelastic polymeric fluids are of great fundamental and practical interest, as polymeric materials for commodity and value-added products are typically processed in a fluid state. The nonlinear coupling between fluid motion and microstructure, which results in highly non-Newtonian rheology, memory/relaxation, and normal stress development or tension along streamlines, greatly complicates the analysis, design and control of such flows. This has posed tremendous challenges to researchers engaged in developing first-principles models and simulations that can accurately and robustly predict the dynamical behavior of polymeric flows. Despite this, the past two decades have witnessed several significant advances towards accomplishing this goal. Yet a problem of fundamental and great pragmatic interest has defied years of ardent research by several groups, namely the relationship between friction drag and flow rate in inertialess flows of highly elastic polymer solutions in complex kinematics flows. A first-principles-based solution of this long-standing problem in non-Newtonian fluid mechanics is the goal of this research. To achieve our objective, it is essential to develop the capability to perform large-scale multiscale simulations, which integrate continuum-level finite element solvers for the conservation of mass and momentum with fast integrators of the stochastic differential equations that describe the evolution of polymer configuration. Hence, in this research we have focused our attention on the development of a parallel, multiscale simulation algorithm that is capable of robustly and efficiently simulating complex kinematics flows of dilute polymeric solutions using a first-principles-based bead-spring chain description of the polymer molecules. The fidelity and computational efficiency of the algorithm have been demonstrated via three benchmark flow problems, namely, the plane Couette flow, the Poiseuille flow and the 4:1:4 axisymmetric

  7. A computer-aided approach to nonlinear control synthesis

    NASA Technical Reports Server (NTRS)

    Wie, Bong; Anthony, Tobin

    1988-01-01

    The major objective of this project is to develop a computer-aided approach to nonlinear stability analysis and nonlinear control system design. This goal is to be obtained by refining the describing function method as a synthesis tool for nonlinear control design. The interim report outlines this study's approach to meeting these goals, including an introduction to the INteractive Controls Analysis (INCA) program, which was instrumental in meeting the study objectives. A single-input describing function (SIDF) design methodology was developed in this study; coupled with the software constructed in this study, the results of this project provide a comprehensive tool for the design and integration of nonlinear control systems.

  8. Style: A Computational and Conceptual Blending-Based Approach

    NASA Astrophysics Data System (ADS)

    Goguen, Joseph A.; Harrell, D. Fox

    This chapter proposes a new approach to style, arising from our work on computational media using structural blending, which enriches the conceptual blending of cognitive linguistics with structure building operations in order to encompass syntax and narrative as well as metaphor. We have implemented both conceptual and structural blending, and conducted initial experiments with poetry, including interactive multimedia poetry, although the approach generalizes to other media. The central idea is to generate multimedia content and analyze style in terms of blending principles, based on our finding that different principles from those of common sense blending are often needed for some contemporary poetic metaphors.

  9. The Visualization Management System Approach To Visualization In Scientific Computing

    NASA Astrophysics Data System (ADS)

    Butler, David M.; Pendley, Michael H.

    1989-09-01

    We introduce the visualization management system (ViMS), a new approach to the development of software for visualization in scientific computing (ViSC). The conceptual foundation for a ViMS is an abstract visualization model which specifies a class of geometric objects, the graphic representations of the objects and the operations on both. A ViMS provides a modular implementation of its visualization model. We describe ViMS requirements and a model-independent ViMS architecture. We briefly describe the vector bundle visualization model and the visualization taxonomy it generates. We conclude by summarizing the benefits of the ViMS approach.

  10. Beyond the Melnikov method: A computer assisted approach

    NASA Astrophysics Data System (ADS)

    Capiński, Maciej J.; Zgliczyński, Piotr

    2017-01-01

    We present a Melnikov type approach for establishing transversal intersections of stable/unstable manifolds of perturbed normally hyperbolic invariant manifolds (NHIMs). The method is based on a new geometric proof of the normally hyperbolic invariant manifold theorem, which establishes the existence of a NHIM, together with its associated invariant manifolds and bounds on their first and second derivatives. We do not need to know the explicit formulas for the homoclinic orbits prior to the perturbation. We also do not need to compute any integrals along such homoclinics. All needed bounds are established using rigorous computer assisted numerics. Lastly, and most importantly, the method establishes intersections for an explicit range of parameters, and not only for perturbations that are 'small enough', as is the case in the classical Melnikov approach.

  11. A computational approach for the health care market.

    PubMed

    Montefiori, Marcello; Resta, Marina

    2009-12-01

    In this work we analyze the market for health care through a computational approach based on Kohonen's Self-Organizing Maps, observing the competition dynamics of health care providers versus those of patients. As a result, we offer a new tool for modelling hospital behaviour and the demand mechanism, one that combines a robust theoretical implementation with an instrument of strong graphical impact.
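
    For readers unfamiliar with the technique, a minimal self-organizing map update loop looks as follows; the two artificial clusters stand in for provider and patient feature vectors, which are hypothetical here, and production work would use a dedicated SOM library.

```python
import numpy as np

def train_som(data, grid=(8, 8), iters=3000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal Kohonen Self-Organizing Map: each input is mapped to its
    best-matching unit, and that unit's neighbourhood is pulled toward
    the input with decaying learning rate and radius.
    """
    rng = np.random.default_rng(seed)
    rows, cols = grid
    w = rng.random((rows, cols, data.shape[1]))    # codebook vectors
    yy, xx = np.mgrid[0:rows, 0:cols]
    for t in range(iters):
        x = data[rng.integers(len(data))]
        d = ((w - x) ** 2).sum(axis=2)
        bi, bj = np.unravel_index(np.argmin(d), d.shape)   # best unit
        frac = t / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        h = np.exp(-((yy - bi) ** 2 + (xx - bj) ** 2) / (2 * sigma**2))
        w += lr * h[:, :, None] * (x - w)
    return w

# Two artificial clusters standing in for providers vs. patients.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.2, 0.05, (200, 3)),
                  rng.normal(0.8, 0.05, (200, 3))])
som = train_som(data)
print(som.shape)   # (8, 8, 3); nearby units encode similar profiles
```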

  12. Computer-based Approaches for Training Interactive Digital Map Displays

    DTIC Science & Technology

    2005-09-01

    [Report documentation fragment; subject matter POC: Jean L. Dyer. Keywords: training assessment, exploratory learning, guided exploratory training, guided discovery.] Five computer-based training approaches for learning digital skills... the other extreme of letting Soldiers learn a digital interface on their own. The research reported here examined these two conditions and three other

  13. WSRC approach to validation of criticality safety computer codes

    SciTech Connect

    Finch, D.R.; Mincey, J.F.

    1991-12-31

    Recent hardware and operating system changes at the Westinghouse Savannah River Site (WSRC) have necessitated review of the validation of the JOSHUA criticality safety computer codes. As part of the planning for this effort, a policy for validation of JOSHUA and other criticality safety codes has been developed. This policy is illustrated with the steps being taken at WSRC. The objective in validating a specific computational method is to reliably correlate its calculated neutron multiplication factor (k_eff) with known values over a well-defined set of neutronic conditions. Said another way, such correlations should (1) be repeatable; (2) be demonstrated with defined confidence; and (3) identify the range of neutronic conditions (area of applicability) for which the correlations are valid. The general approach to validation of computational methods at WSRC must encompass a large number of diverse types of fissile material processes in different operations. Special problems arise in validating computational methods when very few experiments are available (such as for enriched uranium systems whose principal second isotope is ²³⁶U). To cover all process conditions at WSRC, a broad validation approach has been used. Broad validation is based upon calculation of many experiments to span all possible ranges of reflection, nuclide concentrations, moderation ratios, etc. Narrow validation, in comparison, relies on calculations of a few experiments very near anticipated worst-case process conditions. The methods and problems of broad validation are discussed.

  15. Archiving Software Systems: Approaches to Preserve Computational Capabilities

    NASA Astrophysics Data System (ADS)

    King, T. A.

    2014-12-01

    A great deal of effort is made to preserve scientific data, not only because data is knowledge, but because it is often costly to acquire and is sometimes collected under unique circumstances. Another part of the science enterprise is the development of software to process and analyze the data. Developed software is also a large investment and worthy of preservation. However, the long-term preservation of software presents some challenges. Software often requires a specific technology stack to operate, which can include software, operating system and hardware dependencies. One past approach to preserving computational capabilities is to maintain ancient hardware long past its typical viability. On an archive horizon of 100 years, this is not feasible. Another approach is to archive source code. While this can preserve details of the implementation and algorithms, it may not be possible to reproduce the technology stack needed to compile and run the resulting applications. This forward-looking dilemma has a solution. Technology used to create clouds and process big data can also be used to archive and preserve computational capabilities. We explore how basic hardware, virtual machines, containers and appropriate metadata can be used to preserve computational capabilities and to archive functional software systems. In conjunction with data archives, this provides scientists with both the data and the capability to reproduce the processing and analysis used to generate past scientific results.

  16. Utero-fetal unit and pregnant woman modeling using a computer graphics approach for dosimetry studies.

    PubMed

    Anquez, Jérémie; Boubekeur, Tamy; Bibin, Lazar; Angelini, Elsa; Bloch, Isabelle

    2009-01-01

    Potential health effects related to electromagnetic field exposure raise public concerns, especially for fetuses during pregnancy. Human fetus exposure can only be assessed through simulated dosimetry studies, performed on anthropomorphic models of pregnant women. In this paper, we propose a new methodology to generate a set of detailed utero-fetal unit (UFU) 3D models during the first and third trimesters of pregnancy, based on segmented 3D ultrasound and MRI data. UFU models are built using recent geometry processing methods derived from mesh-based computer graphics techniques and embedded in a synthetic woman body. Nine pregnant woman models have been generated using this approach and validated by obstetricians for anatomical accuracy and representativeness.

  17. Examples of computational approaches for elliptic, possibly multiscale PDEs with random inputs

    NASA Astrophysics Data System (ADS)

    Le Bris, Claude; Legoll, Frédéric

    2017-01-01

    We overview a series of recent works addressing numerical simulations of partial differential equations in the presence of some elements of randomness. The specific equations manipulated are linear elliptic, and arise in the context of multiscale problems, but the purpose is more general. On a set of prototypical situations, we investigate two critical issues present in many settings: variance reduction techniques to obtain sufficiently accurate results at a limited computational cost when solving PDEs with random coefficients, and finite element techniques that are sufficiently flexible to carry over to geometries with random fluctuations. Some elements of theoretical analysis and numerical analysis are briefly mentioned. Numerical experiments, although simple, provide convincing evidence of the efficiency of the approaches.

  18. Computational study of influence of diffuse basis functions on geometry optimization and spectroscopic properties of losartan potassium

    NASA Astrophysics Data System (ADS)

    Mizera, Mikołaj; Lewadowska, Kornelia; Talaczyńska, Alicja; Cielecka-Piontek, Judyta

    2015-02-01

    The work was aimed at investigating the influence of diffuse basis functions on the geometry optimization of the losartan molecule in its acid and salt forms. Spectroscopic properties of losartan potassium were also calculated and compared with experiment. The density functional theory method was used with several basis sets: 6-31G(d,p) and its diffuse variants 6-31G(d,p)+ and 6-31G(d,p)++. Applying diffuse basis functions in geometry optimization resulted in a significant change of the total molecular energy: the total energy of losartan potassium decreased by 112.91 kJ/mol and 114.32 kJ/mol for the 6-31G(d,p)+ and 6-31G(d,p)++ basis sets, respectively. Almost the same decrease was observed for losartan: 114.99 kJ/mol and 117.08 kJ/mol, respectively. Further investigation showed significant differences among the geometries of losartan potassium optimized with the investigated basis sets: applying diffuse basis functions resulted in an average 1.29 Å difference in the relative positions of corresponding atoms across the three obtained geometries. A similar comparison for losartan yielded an average displacement of only 0.22 Å. An extensive analysis of the geometry changes in molecules obtained with diffuse and non-diffuse basis functions was carried out to elucidate the observed changes; the analysis was supported by electrostatic potential maps and calculation of natural atomic charges. UV, FT-IR and Raman spectra of losartan potassium were calculated and compared with experimental results. No crucial differences between the Raman spectra obtained with different basis sets were observed. However, the FT-IR spectrum computed for the losartan potassium geometry optimized with the 6-31G(d,p)++ basis set correlated 40% better with the experimental FT-IR spectrum than the one computed for the geometry optimized with 6-31G(d,p). It is therefore highly advisable to optimize geometries of molecules with ionic interactions using diffuse basis functions.
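
    As a concrete illustration of the basis-set comparison described above, here is a minimal sketch using PySCF (not necessarily the software used in the paper); water stands in for losartan, whose geometry is not reproduced here, and B3LYP with the Pople sets 6-31G** (equivalent to 6-31G(d,p)) and 6-31++G** is assumed:

        # Hedged sketch: compare DFT total energies with and without diffuse
        # functions, in the spirit of the study above. Water is a stand-in.
        from pyscf import gto, dft

        XYZ = """O 0.0000  0.0000  0.1173
                 H 0.0000  0.7572 -0.4692
                 H 0.0000 -0.7572 -0.4692"""

        def b3lyp_energy(basis):
            mol = gto.M(atom=XYZ, basis=basis, verbose=0)
            mf = dft.RKS(mol)
            mf.xc = "b3lyp"
            return mf.kernel()  # total energy in Hartree

        # 6-31G** is equivalent to 6-31G(d,p); "++" adds diffuse functions.
        e_plain = b3lyp_energy("6-31g**")
        e_diff = b3lyp_energy("6-31++g**")
        print(f"Energy change from diffuse functions: {(e_diff - e_plain) * 2625.5:.2f} kJ/mol")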

  19. Computational study of influence of diffuse basis functions on geometry optimization and spectroscopic properties of losartan potassium.

    PubMed

    Mizera, Mikołaj; Lewadowska, Kornelia; Talaczyńska, Alicja; Cielecka-Piontek, Judyta

    2015-02-25

    The work was aimed at investigating the influence of diffuse basis functions on the geometry optimization of the losartan molecule in its acid and salt forms. Spectroscopic properties of losartan potassium were also calculated and compared with experiment. The density functional theory method was used with several basis sets: 6-31G(d,p) and its diffuse variants 6-31G(d,p)+ and 6-31G(d,p)++. Applying diffuse basis functions in geometry optimization resulted in a significant change of the total molecular energy: the total energy of losartan potassium decreased by 112.91 kJ/mol and 114.32 kJ/mol for the 6-31G(d,p)+ and 6-31G(d,p)++ basis sets, respectively. Almost the same decrease was observed for losartan: 114.99 kJ/mol and 117.08 kJ/mol, respectively. Further investigation showed significant differences among the geometries of losartan potassium optimized with the investigated basis sets: applying diffuse basis functions resulted in an average 1.29 Å difference in the relative positions of corresponding atoms across the three obtained geometries. A similar comparison for losartan yielded an average displacement of only 0.22 Å. An extensive analysis of the geometry changes in molecules obtained with diffuse and non-diffuse basis functions was carried out to elucidate the observed changes; the analysis was supported by electrostatic potential maps and calculation of natural atomic charges. UV, FT-IR and Raman spectra of losartan potassium were calculated and compared with experimental results. No crucial differences between the Raman spectra obtained with different basis sets were observed. However, the FT-IR spectrum computed for the losartan potassium geometry optimized with the 6-31G(d,p)++ basis set correlated 40% better with the experimental FT-IR spectrum than the one computed for the geometry optimized with 6-31G(d,p). It is therefore highly advisable to optimize geometries of molecules with ionic interactions using diffuse basis functions.

  20. A computational language approach to modeling prose recall in schizophrenia.

    PubMed

    Rosenstein, Mark; Diaz-Asper, Catherine; Foltz, Peter W; Elvevåg, Brita

    2014-06-01

    Many cortical disorders are associated with memory problems. In schizophrenia, verbal memory deficits are a hallmark feature. However, the exact nature of this deficit remains elusive. Modeling aspects of language features used in memory recall has the potential to provide a means for measuring these verbal processes. We employ computational language approaches to assess time-varying semantic and sequential properties of prose recall at various retrieval intervals (immediate, 30 min and 24 h later) in patients with schizophrenia, unaffected siblings and healthy unrelated control participants. First, we model the recall data to quantify the degradation of performance with increasing retrieval interval and the effect of diagnosis (i.e., group membership) on performance. Next, we model the human scoring of recall performance using an n-gram language sequence technique, and then with a semantic feature based on Latent Semantic Analysis. These models show that automated analyses of the recalls can produce scores that accurately mimic human scoring. The final analysis addresses the validity of this approach by ascertaining the ability to predict group membership from models built on the two classes of language features. Taken individually, the semantic feature is most predictive, while a model combining the features improves the accuracy of group membership prediction slightly above the semantic feature alone, as well as over the human rating approach. We discuss the implications for cognitive neuroscience of such a computational approach to exploring the mechanisms of prose recall.
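
    A minimal sketch of the semantic-feature idea, assuming scikit-learn: an LSA space is built from a corpus, and a recall is scored by cosine similarity to the source passage. The tiny corpus and texts below are illustrative placeholders, not the study's materials:

        # Hedged sketch of an LSA-style semantic score for a recalled passage.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import TruncatedSVD
        from sklearn.metrics.pairwise import cosine_similarity

        corpus = [
            "the fox crossed the river and found food",
            "a story about a journey through the forest",
            "the man told a story about a fox and a river",
            "memory for stories fades over a day",
        ]
        source = "the man told a story about a fox crossing a river"
        recall = "a man said something about a fox near water"

        vec = TfidfVectorizer().fit(corpus)
        lsa = TruncatedSVD(n_components=3, random_state=0).fit(vec.transform(corpus))

        def semantic_score(a, b):
            # Project both texts into the LSA space, then compare directions.
            va, vb = lsa.transform(vec.transform([a, b]))
            return cosine_similarity([va], [vb])[0, 0]

        print(f"LSA similarity of recall to source: {semantic_score(source, recall):.2f}")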

  1. Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach

    NASA Technical Reports Server (NTRS)

    Warner, James E.; Hochhalter, Jacob D.

    2016-01-01

    This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.
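
    A minimal sketch of the Bayesian step under strong simplifications: a toy analytic forward model stands in for the finite-element/sparse-grid surrogate, a single damage parameter (location) is inferred, and plain random-walk Metropolis stands in for DRAM:

        # Hedged sketch: Metropolis sampling of a damage location from noisy
        # "strain" data; the forward model here is a toy, not a FE surrogate.
        import numpy as np

        rng = np.random.default_rng(0)
        sensors = np.linspace(0.0, 1.0, 8)

        def forward(loc):
            # Strain signature peaking at the damage location (toy model).
            return np.exp(-((sensors - loc) ** 2) / 0.02)

        true_loc, sigma = 0.35, 0.05
        data = forward(true_loc) + sigma * rng.normal(size=sensors.size)

        def log_post(loc):
            if not 0.0 <= loc <= 1.0:          # uniform prior on [0, 1]
                return -np.inf
            r = data - forward(loc)
            return -0.5 * np.sum(r**2) / sigma**2

        loc, lp, samples = 0.5, log_post(0.5), []
        for _ in range(20000):
            prop = loc + 0.05 * rng.normal()   # random-walk proposal
            lpp = log_post(prop)
            if np.log(rng.random()) < lpp - lp:
                loc, lp = prop, lpp
            samples.append(loc)
        post = np.array(samples[5000:])        # discard burn-in
        print(f"posterior mean {post.mean():.3f} +/- {post.std():.3f}")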

  2. Uranyl-glycine-water complexes in solution: comprehensive computational modeling of coordination geometries, stabilization energies, and luminescence properties.

    PubMed

    Su, Jing; Zhang, Kai; Schwarz, W H Eugen; Li, Jun

    2011-03-21

    Comprehensive computational modeling of coordination structures, thermodynamic stabilities, and luminescence spectra of uranyl-glycine-water complexes [UO(2)(Gly)(n)aq(m)](2+) (Gly = glycine, aq = H(2)O, n = 0-2, m = 0-5) in aqueous solution has been carried out using relativistic density functional approaches. The solvent is approximated by a dielectric continuum model and additional explicit water molecules. Detailed pictures are obtained by a synergic combination of experimental and theoretical data. The optimal equatorial coordination number of uranyl is determined to be five. The energies of several complex conformations are close to one another. In non-basic solution the most probable complex forms are those with two water ligands replaced by the bidentate carboxyl groups of zwitterionic glycine. The N,O-chelation in non-basic solution is neither entropically nor enthalpically favored. The symmetric and antisymmetric stretch vibrations of the nearly linear O-U-O unit determine the luminescence features. The shapes of the vibrationally resolved experimental solution spectra are reproduced theoretically with an empirically fitted overall line-width parameter. The calculated luminescence origins correspond to thermally populated, near-degenerate groups of the lowest electronically excited states of (3)Δ(g) and (3)Φ(g) character, originating from (U-O)σ(u) → (U-5f)δ(u),ϕ(u) configurations of the linear [OUO](2+) unit. The intensity distributions of the vibrational progressions are consistent with U-O bond-length changes of around 5.5 pm. The unusually high intensity of the short-wavelength foot is explained by near-degeneracy of vibrationally and electronically excited states, and by intensity enhancement through the asymmetric O-U-O stretch mode. The combination of contemporary computational chemistry and experimental techniques leads to a detailed understanding of the structures, thermodynamics, and luminescence of actinide compounds.

  3. A Computer Vision Approach to Identify Einstein Rings and Arcs

    NASA Astrophysics Data System (ADS)

    Lee, Chien-Hsiu

    2017-03-01

    Einstein rings are rare gems of strong lensing phenomena; the ring images can be used to probe the underlying lens gravitational potential at all position angles, tightly constraining the lens mass profile. In addition, the magnified images also enable us to probe high-z galaxies with enhanced resolution and signal-to-noise ratios. However, only a handful of Einstein rings have been reported, either from serendipitous discoveries or from visual inspections of hundreds of thousands of massive galaxies or galaxy clusters. In the era of large sky surveys, an automated approach to identify ring patterns in the big data to come is in high demand. Here, we present an Einstein ring recognition approach based on computer vision techniques. The workhorse is the circle Hough transform, which recognises circular patterns or arcs in images. We propose a two-tier approach: first pre-select massive galaxies associated with multiple blue objects as possible lenses, then use the Hough transform to identify circular patterns. As a proof of concept, we apply our approach to SDSS, with high completeness, albeit with low purity. We also apply our approach to other lenses in the DES, HSC-SSP, and UltraVISTA surveys, illustrating its versatility.
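
    The core of the second tier is a circle Hough transform over candidate cutouts. A minimal sketch of that step, assuming OpenCV (cv2); the file name and all threshold/radius parameters below are illustrative placeholders, not values from the paper:

        # Hedged sketch: detect circular (ring-like) patterns in a cutout image.
        import cv2
        import numpy as np

        img = cv2.imread("candidate_cutout.png", cv2.IMREAD_GRAYSCALE)  # placeholder file
        img = cv2.medianBlur(img, 5)  # suppress noise before edge detection

        circles = cv2.HoughCircles(
            img, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
            param1=100,   # Canny high threshold
            param2=30,    # accumulator threshold: lower -> more (spurious) circles
            minRadius=5, maxRadius=50,
        )
        if circles is not None:
            for x, y, r in np.round(circles[0]).astype(int):
                print(f"ring candidate at ({x}, {y}), radius {r} px")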

  4. A Computer Code for 2-D Transport Calculations in x-y Geometry Using the Interface Current Method.

    SciTech Connect

    1990-12-01

    Version 00 RICANT performs 2-dimensional neutron transport calculations in x-y geometry using the interface current method. In the interface current method, the angular neutron currents crossing region surfaces are expanded in terms of the Legendre polynomials in the two half-spaces made by the region surfaces.

  5. Computational neuroscience approach to biomarkers and treatments for mental disorders.

    PubMed

    Yahata, Noriaki; Kasai, Kiyoto; Kawato, Mitsuo

    2017-04-01

    Psychiatry research has long experienced stagnation stemming from a lack of understanding of the neurobiological underpinnings of phenomenologically defined mental disorders. Recently, the application of computational neuroscience to psychiatry research has shown great promise in establishing a link between phenomenological and pathophysiological aspects of mental disorders, thereby recasting current nosology in more biologically meaningful dimensions. In this review, we highlight recent investigations into computational neuroscience that have undertaken either theory- or data-driven approaches to quantitatively delineate the mechanisms of mental disorders. The theory-driven approach, including reinforcement learning models, plays an integrative role in this process by enabling correspondence between behavior and disorder-specific alterations at multiple levels of brain organization, ranging from molecules to cells to circuits. Previous studies have explicated a plethora of defining symptoms of mental disorders, including anhedonia, inattention, and poor executive function. The data-driven approach, on the other hand, is an emerging field in computational neuroscience seeking to identify disorder-specific features among high-dimensional big data. Remarkably, various machine-learning techniques have been applied to neuroimaging data, and the extracted disorder-specific features have been used for automatic case-control classification. For many disorders, the reported accuracies have reached 90% or more. However, we note that rigorous tests on independent cohorts are critically required to translate this research into clinical applications. Finally, we discuss the utility of the disorder-specific features found by the data-driven approach to psychiatric therapies, including neurofeedback. Such developments will allow simultaneous diagnosis and treatment of mental disorders using neuroimaging, thereby establishing 'theranostics' for the first time in clinical practice.
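
    A minimal sketch of the data-driven case-control classification step, assuming scikit-learn and synthetic "imaging features"; as the review stresses, cross-validated accuracy on one cohort is no substitute for testing on an independent cohort:

        # Hedged sketch: sparse logistic regression for case-control labels.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        n, p = 120, 50                        # participants, imaging features
        X = rng.normal(size=(n, p))
        y = rng.integers(0, 2, size=n)        # 0 = control, 1 = case
        X[y == 1, :5] += 0.8                  # "disorder-specific" signal in 5 features

        # L1 penalty extracts a small set of disorder-specific features.
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
        acc = cross_val_score(clf, X, y, cv=10)
        print(f"10-fold accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")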

  6. Automated Approach to Very High-Order Aeroacoustic Computations. Revision

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Goodrich, John W.

    2001-01-01

    Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high-order in space and time methods on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high-order methods (> 15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid-aligned boundaries and to 2nd order for irregular boundaries.

  7. Conversion Coefficients for Proton Beams using Standing and Sitting Male Hybrid Computational Phantom Calculated in Idealized Irradiation Geometries.

    PubMed

    Alves, M C; Santos, W S; Lee, C; Bolch, W E; Hunt, J G; Júnior, A B Carvalho

    2016-09-24

    The aim of this study was the calculation of conversion coefficients for absorbed dose per fluence (DT/Φ) using the sitting and standing male hybrid phantom (UFH/NCI) exposed to monoenergetic protons with energies ranging from 2 MeV to 10 GeV. Sex-averaged effective dose per fluence (E/Φ) values, based on the DT/Φ results for the male and female hybrid phantoms in standing and sitting postures, were also calculated. Results of E/Φ for the standing UFH/NCI phantom were compared with the tabulated effective dose conversion coefficients provided in ICRP publication 116. The radiation transport code MCNPX was used to develop an exposure scenario implementing the male UFH/NCI phantom in sitting and standing postures. Whole-body irradiations were performed using the irradiation geometries recommended by ICRP publication 116: antero-posterior (AP), postero-anterior (PA), right and left lateral, rotational (ROT) and isotropic (ISO). In most organs, the conversion coefficients DT/Φ were similar for both postures. However, relative differences were significant for organs located in the lower abdominal region, such as the prostate, testes and urinary bladder, especially in the AP geometry. Effective dose conversion coefficients were 18% higher in the standing posture of the UFH/NCI phantom, especially below 100 MeV in AP and PA. In lateral geometry, the conversion coefficient values below 20 MeV were 16% higher in the sitting posture. In ROT geometry, the differences were below 10% for almost all energies. In ISO geometry, the differences in E/Φ were negligible. The results of E/Φ for the UFH/NCI phantom were in general below the conversion coefficients provided in ICRP publication 116.

  8. Computational approaches to parameter estimation and model selection in immunology

    NASA Astrophysics Data System (ADS)

    Baker, C. T. H.; Bocharov, G. A.; Ford, J. M.; Lumb, P. M.; Norton, S. J.; Paul, C. A. H.; Junt, T.; Krebs, P.; Ludewig, B.

    2005-12-01

    One of the significant challenges in biomathematics (and other areas of science) is to formulate meaningful mathematical models. Our problem is to decide on a parametrized model which is, in some sense, most likely to represent the information in a set of observed data. In this paper, we illustrate the computational implementation of an information-theoretic approach (associated with a maximum likelihood treatment) to modelling in immunology. The approach is illustrated by modelling LCMV infection using a family of models based on systems of ordinary differential and delay differential equations. The models (which use parameters that have a scientific interpretation) are chosen to fit data arising from experimental studies of virus-cytotoxic T lymphocyte kinetics; the parametrized models that result are arranged in a hierarchy by the computation of Akaike indices. The practical illustration is used to convey more general insight. Because the mathematical equations that comprise the models are solved numerically, the accuracy of the computation has a bearing on the outcome, and we address this and other practical details in our discussion.
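
    A minimal sketch of the model-ranking step: Akaike's index AIC = 2k - 2 ln L, computed from the number of parameters k and the maximized log-likelihood ln L of each fitted model. The model names and numbers below are illustrative, not the paper's:

        # Hedged sketch: arrange candidate models in a hierarchy by AIC.
        # Each entry: (model name, number of parameters k, maximized ln L).
        fits = [("ODE, 3 compartments", 5, -112.4),
                ("DDE, one delay",      6, -104.9),
                ("DDE, two delays",     8, -103.7)]

        ranked = sorted(fits, key=lambda f: 2 * f[1] - 2 * f[2])  # AIC = 2k - 2 ln L
        best_aic = 2 * ranked[0][1] - 2 * ranked[0][2]
        for name, k, ll in ranked:
            aic = 2 * k - 2 * ll
            print(f"{name:22s} AIC={aic:7.1f}  dAIC={aic - best_aic:5.1f}")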

  9. Understanding Plant Nitrogen Metabolism through Metabolomics and Computational Approaches

    PubMed Central

    Beatty, Perrin H.; Klein, Matthias S.; Fischer, Jeffrey J.; Lewis, Ian A.; Muench, Douglas G.; Good, Allen G.

    2016-01-01

    A comprehensive understanding of plant metabolism could provide a direct mechanism for improving nitrogen use efficiency (NUE) in crops. One of the major barriers to achieving this outcome is our poor understanding of the complex metabolic networks, physiological factors, and signaling mechanisms that affect NUE in agricultural settings. However, an exciting collection of computational and experimental approaches has begun to elucidate whole-plant nitrogen usage and provides an avenue for connecting nitrogen-related phenotypes to genes. Herein, we describe how metabolomics, computational models of metabolism, and flux balance analysis have been harnessed to advance our understanding of plant nitrogen metabolism. We introduce a model describing the complex flow of nitrogen through crops in a real-world agricultural setting and describe how experimental metabolomics data, such as isotope labeling rates and analyses of nutrient uptake, can be used to refine these models. In summary, the metabolomics/computational approach offers an exciting mechanism for understanding NUE that may ultimately lead to more effective crop management and engineered plants with higher yields. PMID:27735856
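
    A minimal sketch of flux balance analysis, the optimization at the heart of the computational models mentioned above: maximize a biomass flux subject to steady-state mass balance S v = 0 and flux bounds. The three-reaction network is a toy, not a plant nitrogen model; SciPy is assumed:

        # Hedged sketch: toy flux balance analysis as a linear program.
        import numpy as np
        from scipy.optimize import linprog

        # Columns: v1 uptake, v2 assimilation, v3 biomass. Rows: metabolites A, B.
        S = np.array([[1, -1,  0],    # A: made by uptake, used by assimilation
                      [0,  1, -1]])   # B: made by assimilation, used by biomass
        bounds = [(0, 10), (0, 8), (0, None)]

        # Maximize biomass flux v3 (linprog minimizes, so negate the objective).
        res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
        print("optimal fluxes:", res.x)   # steady state S v = 0 is enforced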

  10. A GPU-computing Approach to Solar Stokes Profile Inversion

    NASA Astrophysics Data System (ADS)

    Harker, Brian J.; Mighell, Kenneth J.

    2012-09-01

    We present a new computational approach to the inversion of solar photospheric Stokes polarization profiles, under the Milne-Eddington model, for vector magnetography. Our code, named GENESIS, employs multi-threaded parallel-processing techniques to harness the computing power of graphics processing units (GPUs), along with algorithms designed to exploit the inherent parallelism of the Stokes inversion problem. Using a genetic algorithm (GA) engineered specifically for use with a GPU, we produce full-disk maps of the photospheric vector magnetic field from polarized spectral line observations recorded by the Synoptic Optical Long-term Investigations of the Sun (SOLIS) Vector Spectromagnetograph (VSM) instrument. We show the advantages of pairing a population-parallel GA with data-parallel GPU-computing techniques, and present an overview of the Stokes inversion problem, including a description of our adaptation to the GPU-computing paradigm. Full-disk vector magnetograms derived by this method are shown using SOLIS/VSM data observed on 2008 March 28 at 15:45 UT.

  11. Computing electronic structures: A new multiconfiguration approach for excited states

    NASA Astrophysics Data System (ADS)

    Cancès, Éric; Galicher, Hervé; Lewin, Mathieu

    2006-02-01

    We present a new method for the computation of electronic excited states of molecular systems. This method is based upon a recent theoretical definition of multiconfiguration excited states [due to one of us, see M. Lewin, Solutions of the multiconfiguration equations in quantum chemistry, Arch. Rat. Mech. Anal. 171 (2004) 83-114]. Our algorithm, dedicated to the computation of the first excited state, always converges to a stationary state of the multiconfiguration model, which can be interpreted as an approximate excited state of the molecule. The definition of this approximate excited state is variational. An interesting feature is that it satisfies a non-linear Hylleraas-Undheim-MacDonald type principle: the energy of the approximate excited state is an upper bound to the true excited state energy of the N-body Hamiltonian. To compute the first excited state, one has to deform paths on a manifold, as is usually done in the search for transition states between reactants and products on potential energy surfaces. We propose here a general method for the deformation of paths which could also be useful in other settings. We also compare our method to other approaches used in Quantum Chemistry and give some explanation of the unsatisfactory behaviours which are sometimes observed when using the latter. Numerical results for the special case of two-electron systems are provided: we compute the first singlet excited state potential energy surface of the H2 molecule.

  12. Solubility of nonelectrolytes: a first-principles computational approach.

    PubMed

    Jackson, Nicholas E; Chen, Lin X; Ratner, Mark A

    2014-05-15

    Using a combination of classical molecular dynamics and symmetry adapted intermolecular perturbation theory, we develop a high-accuracy computational method for examining the solubility energetics of nonelectrolytes. This approach is used to accurately compute the cohesive energy density and Hildebrand solubility parameters of 26 molecular liquids. The energy decomposition of symmetry adapted perturbation theory is then utilized to develop multicomponent Hansen-like solubility parameters. These parameters are shown to reproduce the solvent categorizations (nonpolar, polar aprotic, or polar protic) of all molecular liquids studied while lending quantitative rigor to these qualitative categorizations via the introduction of simple, easily computable parameters. Notably, we find that by monitoring the first-order exchange energy contribution to the total interaction energy, one can rigorously determine the hydrogen bonding character of a molecular liquid. Finally, this method is applied to compute explicitly the Flory interaction parameter and the free energy of mixing for two different small molecule mixtures, reproducing the known miscibilities. This methodology represents an important step toward the prediction of molecular solubility from first principles.
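
    For reference, the standard definitions behind the quantities computed above (textbook relations, not taken from the paper itself): the cohesive energy density (CED), the Hildebrand parameter δ, and the Hansen-style decomposition into dispersive, polar and hydrogen-bonding components:

        \[
          \mathrm{CED} = \frac{\Delta H_{\mathrm{vap}} - RT}{V_m}, \qquad
          \delta = \sqrt{\mathrm{CED}}, \qquad
          \delta_{\mathrm{tot}}^2 = \delta_d^2 + \delta_p^2 + \delta_h^2
        \]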

  13. Computational approaches in the design of synthetic receptors - A review.

    PubMed

    Cowen, Todd; Karim, Kal; Piletsky, Sergey

    2016-09-14

    The rational design of molecularly imprinted polymers (MIPs) has been a major contributor to their reputation as "plastic antibodies" - high-affinity, robust synthetic receptors which can be optimally designed and produced at a much lower cost than their biological equivalents. Computational design has become a routine procedure in the production of MIPs, and has led to major advances in functional monomer screening, selection of cross-linker and solvent, optimisation of monomer(s)-template ratio and selectivity analysis. In this review the various computational methods are discussed with reference to all the relevant literature published since the end of 2013, with each article described by the target molecule, the computational approach applied (whether molecular mechanics/molecular dynamics, semi-empirical quantum mechanics, ab initio quantum mechanics (Hartree-Fock, Møller-Plesset, etc.) or DFT) and the purpose for which it was used. Detailed analysis is given to novel techniques, including analysis of polymer binding sites, the use of novel screening programs and simulation of the MIP polymerisation reaction. Further advances in molecular modelling and computational design of synthetic receptors in particular will have a serious impact on the future of nanotechnology and biotechnology, permitting the further translation of MIPs into the realms of analytics and medical technology.

  14. Computing electronic structures: A new multiconfiguration approach for excited states

    SciTech Connect

    Cances, Eric . E-mail: cances@cermics.enpc.fr; Galicher, Herve . E-mail: galicher@cermics.enpc.fr; Lewin, Mathieu . E-mail: lewin@cermic.enpc.fr

    2006-02-10

    We present a new method for the computation of electronic excited states of molecular systems. This method is based upon a recent theoretical definition of multiconfiguration excited states [due to one of us, see M. Lewin, Solutions of the multiconfiguration equations in quantum chemistry, Arch. Rat. Mech. Anal. 171 (2004) 83-114]. Our algorithm, dedicated to the computation of the first excited state, always converges to a stationary state of the multiconfiguration model, which can be interpreted as an approximate excited state of the molecule. The definition of this approximate excited state is variational. An interesting feature is that it satisfies a non-linear Hylleraas-Undheim-MacDonald type principle: the energy of the approximate excited state is an upper bound to the true excited state energy of the N-body Hamiltonian. To compute the first excited state, one has to deform paths on a manifold, as is usually done in the search for transition states between reactants and products on potential energy surfaces. We propose here a general method for the deformation of paths which could also be useful in other settings. We also compare our method to other approaches used in Quantum Chemistry and give some explanation of the unsatisfactory behaviours which are sometimes observed when using the latter. Numerical results for the special case of two-electron systems are provided: we compute the first singlet excited state potential energy surface of the H {sub 2} molecule.

  15. Computational modeling of an endovascular approach to deep brain stimulation

    NASA Astrophysics Data System (ADS)

    Teplitzky, Benjamin A.; Connolly, Allison T.; Bajwa, Jawad A.; Johnson, Matthew D.

    2014-04-01

    Objective. Deep brain stimulation (DBS) therapy currently relies on a transcranial neurosurgical technique to implant one or more electrode leads into the brain parenchyma. In this study, we used computational modeling to investigate the feasibility of using an endovascular approach to target DBS therapy. Approach. Image-based anatomical reconstructions of the human brain and vasculature were used to identify 17 established and hypothesized anatomical targets of DBS, of which five were found adjacent to a vein or artery with intraluminal diameter ≥1 mm. Two of these targets, the fornix and subgenual cingulate white matter (SgCwm) tracts, were further investigated using a computational modeling framework that combined segmented volumes of the vascularized brain, finite element models of the tissue voltage during DBS, and multi-compartment axon models to predict the direct electrophysiological effects of endovascular DBS. Main results. The models showed that: (1) a ring-electrode conforming to the vessel wall was more efficient at neural activation than a guidewire design, (2) increasing the length of a ring-electrode had minimal effect on neural activation thresholds, (3) large variability in neural activation occurred with suboptimal placement of a ring-electrode along the targeted vessel, and (4) activation thresholds for the fornix and SgCwm tracts were comparable for endovascular and stereotactic DBS, though endovascular DBS was able to produce significantly larger contralateral activation for a unilateral implantation. Significance. Together, these results suggest that endovascular DBS can serve as a complementary approach to stereotactic DBS in select cases.

  16. An alternative approach for computing seismic response with accidental eccentricity

    NASA Astrophysics Data System (ADS)

    Fan, Xuanhua; Yin, Jiacong; Sun, Shuli; Chen, Pu

    2014-09-01

    Accidental eccentricity is a non-standard assumption in the seismic design of tall buildings. Taking it into consideration requires reanalysis of seismic resistance, which entails either time-consuming computation of the natural vibrations of eccentric structures or finding a static displacement solution by applying an approximated equivalent torsional moment for each eccentric case. This study proposes an alternative modal response spectrum analysis (MRSA) approach to calculate seismic responses with accidental eccentricity. The proposed approach, called Rayleigh-Ritz Projection-MRSA (RRP-MRSA), is developed based on MRSA and two strategies: (a) an RRP method to obtain a fast calculation of approximate modes of eccentric structures; and (b) an approach to assemble the mass matrices of eccentric structures. The efficiency of RRP-MRSA is tested via engineering examples and compared with the standard MRSA (ST-MRSA) and one approximate method, the equivalent torsional moment hybrid MRSA (ETM-MRSA). Numerical results show that RRP-MRSA not only achieves almost the same precision as ST-MRSA, and far better precision than ETM-MRSA, but is also more economical. RRP-MRSA can thus be used in place of current accidental eccentricity computations in seismic design.
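
    A minimal sketch of the Rayleigh-Ritz projection idea, assuming SciPy: modes of the nominal structure serve as a Ritz basis, so each eccentric case only requires solving a small projected eigenproblem. The matrices below are random toys, not building models:

        # Hedged sketch: approximate modes of a perturbed (eccentric) structure
        # in the subspace spanned by a few modes of the nominal structure.
        import numpy as np
        from scipy.linalg import eigh

        rng = np.random.default_rng(2)
        n, m = 200, 10                       # full DOFs, retained nominal modes
        A = rng.normal(size=(n, n))
        K = A @ A.T + n * np.eye(n)          # nominal stiffness (SPD toy)
        M = np.eye(n)                        # nominal mass

        w, Phi = eigh(K, M)                  # nominal generalized eigenproblem
        Phi = Phi[:, :m]                     # Ritz basis: lowest m modes

        dM = np.zeros((n, n)); dM[0, 0] = 0.1    # "accidental eccentricity" toy
        Kr, Mr = Phi.T @ K @ Phi, Phi.T @ (M + dM) @ Phi
        w_ecc, _ = eigh(Kr, Mr)              # small m x m eigenproblem
        print("approx. eccentric frequencies:", np.sqrt(w_ecc[:3]))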

  17. Steady-State Fluorescence of Highly Absorbing Samples in Transmission Geometry: A Simplified Quantitative Approach Considering Reabsorption Events.

    PubMed

    Krimer, Nicolás I; Rodrigues, Darío; Rodríguez, Hernán B; Mirenda, Martín

    2017-01-03

    A simplified methodology to acquire steady-state emission spectra and quantum yields of highly absorbing samples is presented. The experimental setup consists of a commercial spectrofluorometer adapted to transmission geometry, allowing detection of the emitted light at 180° with respect to the excitation beam. The procedure includes two different mathematical approaches to describe and reproduce the distortions caused by reabsorption in emission spectra and quantum yields. Toluene solutions of 9,10-diphenylanthracene, DPA, with concentrations ranging between 1.12 × 10(-5) and 1.30 × 10(-2) M, were used to validate the proposed methodology. This dye has a significant probability of reabsorption and re-emission in concentrated solutions without showing self-quenching or aggregation phenomena. The results indicate that the reabsorption corrections, applied to the molecular emission spectra and quantum yields of the samples, accurately reproduce the experimental data. We further discuss why the re-emitted radiation is not detected in these experiments, even at the highest DPA concentrations.

  18. [Computer work and De Quervain's tenosynovitis: an evidence based approach].

    PubMed

    Gigante, M R; Martinotti, I; Cirla, P E

    2012-01-01

    The debate about the role of personal computer work as a cause of De Quervain's tenosynovitis has developed only partially, without consideration of the available multidisciplinary data. A systematic review of the literature, using an evidence-based approach, was performed. Among the disorders associated with the use of VDUs, those of the upper limbs must be distinguished, and among these, those related to overload. Experimental studies on the occurrence of De Quervain's tenosynovitis are quite limited, and the occupational etiology is clinically quite difficult to prove, considering the interference of other activities of daily living and of biological susceptibility (i.e., anatomical variability, sex, age, exercise). At present there is no evidence of any connection between De Quervain syndrome and time spent using a personal computer or keyboard; limited evidence of a correlation is found with time using a mouse. No data are available regarding exclusive or predominant use of laptops or mobile smartphones.

  19. Slide Star: An Approach to Videodisc/Computer Aided Instruction

    PubMed Central

    McEnery, Kevin W.

    1984-01-01

    One of medical education's primary goals is for the student to be proficient in the gross and microscopic identification of disease. The videodisc, with its storage capacity of up to 54,000 photomicrographs is ideally suited to assist in this educational process. “Slide Star” is a method of interactive instruction which is designed for use in any subject where it is essential to identify visual material. The instructional approach utilizes a computer controlled videodisc to display photomicrographs. In the demonstration program, these are slides of normal blood cells. The program is unique in that the instruction is created by the student's commands manipulating the photomicrograph data base. A prime feature is the use of computer generated multiple choice questions to reinforce the learning process.

  20. Analytical and computational approaches to define the Aspergillus niger secretome

    SciTech Connect

    Tsang, Adrian; Butler, Gregory D.; Powlowski, Justin; Panisko, Ellen A.; Baker, Scott E.

    2009-03-01

    We used computational and mass spectrometric approaches to characterize the Aspergillus niger secretome. The 11,200 gene models predicted in the genome of A. niger strain ATCC 1015 were the data source for the analysis. Depending on the computational methods used, 691 to 881 proteins were predicted to be secreted proteins. We cultured A. niger in six different media and analyzed the extracellular proteins produced using mass spectrometry. A total of 222 proteins were identified, with 39 proteins expressed under all six conditions and 74 proteins expressed under only one condition. The secreted proteins identified by mass spectrometry were used to guide the correction of about 20 gene models. Additional analysis focused on extracellular enzymes of interest for biomass processing. Of the 63 glycoside hydrolases predicted to be capable of hydrolyzing cellulose, hemicellulose or pectin, 94% of the exo-acting enzymes and only 18% of the endo-acting enzymes were experimentally detected.

  1. A Computational Approach for Probabilistic Analysis of Water Impact Simulations

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Mason, Brian H.; Lyle, Karen H.

    2009-01-01

    NASA's development of new concepts for the Crew Exploration Vehicle Orion presents many challenges similar to those addressed during the Apollo program in the sixties. However, with improved modeling capabilities, new challenges arise. For example, the commercial code LS-DYNA, although widely used and accepted in the technical community, often involves high-dimensional, time-consuming, and computationally intensive simulations. The challenge is to capture what is learned from a limited number of LS-DYNA simulations to develop models that allow users to conduct interpolation of solutions at a fraction of the computational time. This paper presents a description of the LS-DYNA model, a brief summary of the response surface techniques, the analysis of variance approach used in the sensitivity studies, the equations used to estimate impact parameters, results showing conditions that might cause injuries, and concluding remarks.

  2. Computational approaches for rational design of proteins with novel functionalities

    PubMed Central

    Tiwari, Manish Kumar; Singh, Ranjitha; Singh, Raushan Kumar; Kim, In-Won; Lee, Jung-Kul

    2012-01-01

    Proteins are the most multifaceted macromolecules in living systems and have various important functions, including structural, catalytic, sensory, and regulatory functions. Rational design of enzymes is a great challenge to our understanding of protein structure and physical chemistry and has numerous potential applications. Protein design algorithms have been applied to design or engineer proteins that fold, fold faster, catalyze, catalyze faster, signal, and adopt preferred conformational states. The field of de novo protein design, although only a few decades old, is beginning to produce exciting results. Developments in this field are already having a significant impact on biotechnology and chemical biology. The application of powerful computational methods for functional protein designing has recently succeeded at engineering target activities. Here, we review recently reported de novo functional proteins that were developed using various protein design approaches, including rational design, computational optimization, and selection from combinatorial libraries, highlighting recent advances and successes. PMID:24688643

  3. A pencil beam approach to proton computed tomography

    SciTech Connect

    Rescigno, Regina; Bopp, Cécile; Rousseau, Marc; Brasse, David

    2015-11-15

    Purpose: A new approach to proton computed tomography (pCT) is presented. In this approach, protons are not tracked one-by-one; a beam of particles is considered instead. The elements of the pCT reconstruction problem (residual energy and path) are redefined on the basis of this new approach. An analytical image reconstruction algorithm applicable to this scenario is also proposed. Methods: The pencil beam (PB) and its propagation in matter were modeled by making use of the generalization of the Fermi–Eyges theory to account for multiple Coulomb scattering (MCS). This model was integrated into the pCT reconstruction problem, allowing the definition of a mean beam path concept similar to the most likely path (MLP) used in the single-particle approach. A numerical validation of the model was performed. The algorithm of filtered backprojection along MLPs was adapted to the beam-by-beam approach. The acquisition of a perfect proton scan was simulated and the data were used to reconstruct images of the relative stopping power of the phantom with the single-proton and beam-by-beam approaches. The resulting images were compared qualitatively. Results: The parameters of the modeled PB (mean and spread) were compared to Monte Carlo results in order to validate the model. For a water target, good agreement was found for the mean value of the distributions. As far as the spread is concerned, depth-dependent discrepancies as large as 2%–3% were found. For a heterogeneous phantom, discrepancies in the distribution spread ranged from 6% to 8%. The image reconstructed with the beam-by-beam approach showed a higher level of noise than the one reconstructed with the classical approach. Conclusions: The PB approach to proton imaging may allow the technical challenges imposed by the current proton-by-proton method to be overcome. In this framework, an analytical algorithm is proposed. Further work will involve a detailed study of the performance and limitations of the approach.

  4. Preconditioner and convergence study for the Quantum Computer Aided Design (QCAD) nonlinear poisson problem posed on the Ottawa Flat 270 design geometry.

    SciTech Connect

    Kalashnikova, Irina

    2012-05-01

    A numerical study evaluating different preconditioners within the Trilinos Ifpack and ML packages for the Quantum Computer Aided Design (QCAD) non-linear Poisson problem, implemented within the Albany code base and posed on the Ottawa Flat 270 design geometry, is performed. This study led to new development in Albany that allows the user to select an ML preconditioner with Zoltan repartitioning based on nodal coordinates, which is summarized here. Convergence of the numerical solutions computed within the QCAD computational suite under successive mesh refinement is examined in two metrics: the mean value of the solution (an L{sup 1} norm) and the field integral of the solution (an L{sup 2} norm).

  5. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1991-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, fault-tolerant, and self-managing. Six years of research on ISIS are reviewed, describing the model, the types of applications to which ISIS was applied, and some of the reasoning that underlies a recent effort to redesign and reimplement ISIS as a much smaller, lightweight system.

  6. Physiologically based computational approach to camouflage and masking patterns

    NASA Astrophysics Data System (ADS)

    Irvin, Gregg E.; Dowler, Michael G.

    1992-09-01

    A computational system was developed to integrate both Fourier image processing techniques and biologically based image processing techniques. The Fourier techniques allow the spatially global manipulation of phase and amplitude spectra. The biologically based techniques allow for spatially localized manipulation of phase, amplitude and orientation independently on multiple spatial frequency scales. These techniques combined with a large variety of basic image processing functions allow for a versatile and systematic approach to be taken toward the development of specialized patterning and visual textures. Current applications involve research for the development of 2-dimensional spatial patterning that can function as effective camouflage patterns and masking patterns for the human visual system.
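
    A minimal sketch of the spatially global Fourier manipulation described above, assuming NumPy: keep an image's amplitude spectrum but randomize its phase, a standard way to synthesize a texture with a prescribed power spectrum:

        # Hedged sketch: phase randomization with preserved amplitude spectrum.
        import numpy as np

        rng = np.random.default_rng(3)
        img = rng.normal(size=(128, 128))          # placeholder source texture

        F = np.fft.fft2(img)
        amplitude = np.abs(F)
        random_phase = np.exp(1j * rng.uniform(0, 2 * np.pi, F.shape))
        # Taking .real is the usual shortcut in place of enforcing the exact
        # Hermitian symmetry a real-valued image would require.
        texture = np.fft.ifft2(amplitude * random_phase).real
        print(texture.shape, texture.std())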

  7. Approaches to Computer Modeling of Phosphate Hide-Out.

    DTIC Science & Technology

    1984-06-28

    phosphate acts as a buffer to keep pH at a value above which acid corrosion occurs and below which caustic corrosion becomes significant. Difficulties arise from the ionization of dihydrogen phosphate, governed by the equilibria: H{sub 2}PO{sub 4}{sup -} = H{sup +} + HPO{sub 4}{sup 2-}, with constant K (B-7); H{sup +} + OH{sup -} = H{sub 2}O, with constant 1/K{sub w} (B-8); and H{sub 2}PO{sub 4}{sup -} + OH{sup -} = HPO{sub 4}{sup 2-} + H{sub 2}O, with constant K/K{sub w} (B-9). [The remainder of this scanned abstract is illegible; the record corresponds to NRL Memorandum Report 5361, "Approaches to Computer Modeling of Phosphate Hide-Out".]

  8. Identification of Protein–Excipient Interaction Hotspots Using Computational Approaches

    PubMed Central

    Barata, Teresa S.; Zhang, Cheng; Dalby, Paul A.; Brocchini, Steve; Zloh, Mire

    2016-01-01

    Protein formulation development relies on the selection of excipients that inhibit protein–protein interactions preventing aggregation. Empirical strategies involve screening many excipient and buffer combinations using force degradation studies. Such methods do not readily provide information on intermolecular interactions responsible for the protective effects of excipients. This study describes a molecular docking approach to screen and rank interactions allowing for the identification of protein–excipient hotspots to aid in the selection of excipients to be experimentally screened. Previously published work with Drosophila Su(dx) was used to develop and validate the computational methodology, which was then used to determine the formulation hotspots for Fab A33. Commonly used excipients were examined and compared to the regions in Fab A33 prone to protein–protein interactions that could lead to aggregation. This approach could provide information on a molecular level about the protective interactions of excipients in protein formulations to aid the more rational development of future formulations. PMID:27258262

  9. A Computational Approach for Identifying Synergistic Drug Combinations

    PubMed Central

    Gayvert, Kaitlyn M.; Aly, Omar; Bosenberg, Marcus W.; Stern, David F.; Elemento, Olivier

    2017-01-01

    A promising alternative to address the problem of acquired drug resistance is to rely on combination therapies. Identification of the right combinations is often accomplished through trial and error, a labor and resource intensive process whose scale quickly escalates as more drugs can be combined. To address this problem, we present a broad computational approach for predicting synergistic combinations using easily obtainable single drug efficacy, no detailed mechanistic understanding of drug function, and limited drug combination testing. When applied to mutant BRAF melanoma, we found that our approach exhibited significant predictive power. Additionally, we validated previously untested synergy predictions involving anticancer molecules. As additional large combinatorial screens become available, this methodology could prove to be impactful for identification of drug synergy in context of other types of cancers. PMID:28085880

  10. Stochastic Computational Approach for Complex Nonlinear Ordinary Differential Equations

    NASA Astrophysics Data System (ADS)

    Junaid, Ali Khan; Muhammad, Asif Zahoor Raja; Ijaz Mansoor, Qureshi

    2011-02-01

    We present an evolutionary computational approach for the solution of nonlinear ordinary differential equations (NLODEs). The mathematical modeling is performed by a feed-forward artificial neural network that defines an unsupervised error. The training of these networks is achieved by a hybrid intelligent algorithm, a combination of global search with a genetic algorithm and local search by a pattern search technique. The applicability of this approach ranges from single-order NLODEs to systems of coupled differential equations. We illustrate the method by solving a variety of model problems and present comparisons with solutions obtained by exact methods and classical numerical methods. The solution is provided on a continuous finite time interval, unlike other numerical techniques of comparable accuracy. With the advent of neuroprocessors and digital signal processors, the method becomes particularly interesting due to the expected gains in execution speed.
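
    A minimal sketch of the unsupervised-error idea, with substitutions: a tiny neural-network trial solution is tuned to minimize the squared ODE residual, but a generic derivative-free optimizer (Nelder-Mead from SciPy) stands in for the paper's genetic-algorithm/pattern-search hybrid, and the derivative is taken numerically:

        # Hedged sketch: neural trial solution for y' = -2xy, y(0) = 1.
        import numpy as np
        from scipy.optimize import minimize

        xs = np.linspace(0.0, 2.0, 40)

        def trial(params, x):
            # y(x) = 1 + x * N(x): satisfies y(0) = 1 by construction.
            w, b, v = params[:5], params[5:10], params[10:15]
            return 1.0 + x * (np.tanh(np.outer(x, w) + b) @ v)

        def residual(params):
            y = trial(params, xs)
            dy = np.gradient(y, xs)                   # numerical derivative
            return np.mean((dy + 2.0 * xs * y) ** 2)  # unsupervised ODE error

        res = minimize(residual, np.zeros(15), method="Nelder-Mead",
                       options={"maxiter": 20000, "xatol": 1e-8})
        exact = np.exp(-xs**2)
        print("max abs error:", np.abs(trial(res.x, xs) - exact).max())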

  11. Crowd Computing as a Cooperation Problem: An Evolutionary Approach

    NASA Astrophysics Data System (ADS)

    Christoforou, Evgenia; Fernández Anta, Antonio; Georgiou, Chryssis; Mosteiro, Miguel A.; Sánchez, Angel

    2013-05-01

    Cooperation is one of the socio-economic issues that has received the most attention from the physics community. The problem has mostly been considered by studying games such as the Prisoner's Dilemma or the Public Goods Game. Here, we take a step forward by studying cooperation in the context of crowd computing. We introduce a model loosely based on principal-agent theory in which people (workers) contribute to the solution of a distributed problem by computing answers and reporting to the problem proposer (master). To go beyond classical approaches involving the concept of Nash equilibrium, we work in an evolutionary framework in which both the master and the workers update their behavior through reinforcement learning. Using a Markov chain approach, we show theoretically that under certain, not very restrictive, conditions the master can ensure the reliability of the answer resulting from the process. We then study the model by numerical simulations, finding that convergence, meaning that the system reaches a point at which it always produces reliable answers, may in general be much faster than the upper bounds given by the theoretical calculation. We also discuss the effects of the master's level of tolerance to defectors, about which the theory provides no information; the discussion shows that the system works even with very large tolerances. We conclude with a discussion of our results and possible directions to carry this research further.

  12. Learning about modes of speciation by computational approaches.

    PubMed

    Becquet, Céline; Przeworski, Molly

    2009-10-01

    How often do the early stages of speciation occur in the presence of gene flow? To address this enduring question, a number of recent papers have used computational approaches, estimating parameters of simple divergence models from multilocus polymorphism data collected in closely related species. Applications to a variety of species have yielded extensive evidence for migration, with the results interpreted as supporting the widespread occurrence of parapatric speciation. Here, we conduct a simulation study to assess the reliability of such inferences, using a program that we recently developed, MIMAR (MCMC estimation of the isolation-migration model allowing for recombination), as well as the isolation-migration (IM) program of Hey and Nielsen (2004). We find that when one of the many assumptions of the isolation-migration model is violated, the methods tend to yield biased estimates of the parameters, potentially lending spurious support for allopatric or parapatric divergence. More generally, our results highlight the difficulty of drawing inferences about modes of speciation from the existing computational approaches alone.

  13. Computational Approach to Dendritic Spine Taxonomy and Shape Transition Analysis

    PubMed Central

    Bokota, Grzegorz; Magnowska, Marta; Kuśmierczyk, Tomasz; Łukasik, Michał; Roszkowska, Matylda; Plewczynski, Dariusz

    2016-01-01

    The common approach in morphological analysis of dendritic spines of mammalian neuronal cells is to categorize spines into subpopulations based on whether they are stubby, mushroom, thin, or filopodia shaped. The corresponding cellular models of synaptic plasticity, long-term potentiation, and long-term depression associate synaptic strength with either spine enlargement or spine shrinkage. Although a variety of automatic spine segmentation and feature extraction methods have been developed recently, no approaches exist that allow for an automatic and unbiased distinction between dendritic spine subpopulations, nor detailed computational models of spine behavior. We propose an automatic and statistically based method for the unsupervised construction of a spine shape taxonomy based on arbitrary features. The taxonomy is then utilized in the newly introduced computational model of behavior, which relies on transitions between shapes. Models of different populations are compared using supplied bootstrap-based statistical tests. We compared two populations of spines at two time points. The first population was stimulated with long-term potentiation, and the other, in the resting state, was used as a control. The comparison of shape transition characteristics allowed us to identify differences between population behaviors. Although some extreme changes were observed in the stimulated population, statistically significant differences were found only when whole models were compared. The source code of our software is freely available for non-commercial use. Contact: d.plewczynski@cent.uw.edu.pl. PMID:28066226
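
    A minimal sketch of the unsupervised-taxonomy step, assuming scikit-learn: cluster spines by a Gaussian mixture over shape features rather than imposing the four classical labels. The two-feature toy data are illustrative only:

        # Hedged sketch: unsupervised grouping of spine shape features.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(6)
        # Toy features per spine: (length, head diameter).
        spines = np.vstack([rng.normal([0.5, 0.6], 0.05, (100, 2)),   # mushroom-like
                            rng.normal([1.2, 0.2], 0.05, (100, 2))])  # thin-like

        gmm = GaussianMixture(n_components=2, random_state=0).fit(spines)
        labels = gmm.predict(spines)
        print("cluster sizes:", np.bincount(labels))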

  14. Computational approaches to predict bacteriophage-host relationships.

    PubMed

    Edwards, Robert A; McNair, Katelyn; Faust, Karoline; Raes, Jeroen; Dutilh, Bas E

    2016-03-01

    Metagenomics has changed the face of virus discovery by enabling the accurate identification of viral genome sequences without requiring isolation of the viruses. As a result, metagenomic virus discovery leaves the first and most fundamental question about any novel virus unanswered: What host does the virus infect? The diversity of the global virosphere and the volumes of data obtained in metagenomic sequencing projects demand computational tools for virus-host prediction. We focus on bacteriophages (phages, viruses that infect bacteria), the most abundant and diverse group of viruses found in environmental metagenomes. By analyzing 820 phages with annotated hosts, we review and assess the predictive power of in silico phage-host signals. Sequence homology approaches are the most effective at identifying known phage-host pairs. Compositional and abundance-based methods contain significant signal for phage-host classification, providing opportunities for analyzing the unknowns in viral metagenomes. Together, these computational approaches further our knowledge of the interactions between phages and their hosts. Importantly, we find that all reviewed signals significantly link phages to their hosts, illustrating how current knowledge and insights about the interaction mechanisms and ecology of coevolving phages and bacteria can be exploited to predict phage-host relationships, with potential relevance for medical and industrial applications.
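
    A minimal sketch of one composition-based signal reviewed above, assuming NumPy: compare tetranucleotide-frequency profiles of a phage genome and candidate host genomes, and predict the compositionally closest host. The sequences are toy strings:

        # Hedged sketch: k-mer composition distance for phage-host prediction.
        from itertools import product
        import numpy as np

        KMERS = ["".join(p) for p in product("ACGT", repeat=4)]

        def profile(seq):
            # Non-overlapping counts via str.count are adequate for a sketch.
            counts = np.array([seq.count(k) for k in KMERS], dtype=float)
            return counts / max(counts.sum(), 1.0)

        phage = "ATGCGTACGTTAGC" * 50
        hosts = {"hostA": "ATGCGTACGTAAGC" * 200, "hostB": "GGGCCCGGGCCCAA" * 200}

        dists = {name: np.abs(profile(phage) - profile(g)).sum()  # L1 distance
                 for name, g in hosts.items()}
        print("predicted host:", min(dists, key=dists.get), dists)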

  15. Computational approaches to understand cardiac electrophysiology and arrhythmias

    PubMed Central

    Roberts, Byron N.; Yang, Pei-Chi; Behrens, Steven B.; Moreno, Jonathan D.

    2012-01-01

    Cardiac rhythms arise from electrical activity generated by precisely timed opening and closing of ion channels in individual cardiac myocytes. These impulses spread throughout the cardiac muscle to manifest as electrical waves in the whole heart. Regularity of electrical waves is critically important since they signal the heart muscle to contract, driving the primary function of the heart to act as a pump and deliver blood to the brain and vital organs. When electrical activity goes awry during a cardiac arrhythmia, the pump does not function, the brain does not receive oxygenated blood, and death ensues. For more than 50 years, mathematically based models of cardiac electrical activity have been used to improve understanding of basic mechanisms of normal and abnormal cardiac electrical function. Computer-based modeling approaches to understand cardiac activity are uniquely helpful because they allow for distillation of complex emergent behaviors into the key contributing components underlying them. Here we review the latest advances and novel concepts in the field as they relate to understanding the complex interplay between electrical, mechanical, structural, and genetic mechanisms during arrhythmia development at the level of ion channels, cells, and tissues. We also discuss the latest computational approaches to guiding arrhythmia therapy. PMID:22886409

  16. Computational inference of gene regulatory networks: Approaches, limitations and opportunities.

    PubMed

    Banf, Michael; Rhee, Seung Y

    2017-01-01

    Gene regulatory networks lie at the core of cell function control. In E. coli and S. cerevisiae, the study of gene regulatory networks has led to the discovery of regulatory mechanisms responsible for the control of cell growth, differentiation and responses to environmental stimuli. In plants, computational rendering of gene regulatory networks is gaining momentum, thanks to the recent availability of high-quality genomes and transcriptomes and development of computational network inference approaches. Here, we review current techniques, challenges and trends in gene regulatory network inference and highlight challenges and opportunities for plant science. We provide plant-specific application examples to guide researchers in selecting methodologies that suit their particular research questions. Given the interdisciplinary nature of gene regulatory network inference, we tried to cater to both biologists and computer scientists to help them engage in a dialogue about concepts and caveats in network inference. Specifically, we discuss problems and opportunities in heterogeneous data integration for eukaryotic organisms and common caveats to be considered during network model evaluation. This article is part of a Special Issue entitled: Plant Gene Regulatory Mechanisms and Networks, edited by Dr. Erich Grotewold and Dr. Nathan Springer.
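
    A minimal example of the simplest class of inference methods such reviews cover, co-expression networks, is sketched below: score gene pairs by absolute Pearson correlation across samples and keep edges above a threshold. The data are synthetic and the threshold is arbitrary; real pipelines add regularization, directionality, and significance testing.

```python
# Toy co-expression network inference from a synthetic expression matrix.
import numpy as np

rng = np.random.default_rng(1)
n_genes, n_samples = 6, 40
expr = rng.normal(size=(n_genes, n_samples))
expr[3] = 0.9 * expr[0] + 0.1 * rng.normal(size=n_samples)  # gene 3 tracks gene 0

corr = np.corrcoef(expr)              # gene-by-gene correlation matrix
threshold = 0.8
edges = [(i, j, round(corr[i, j], 3))
         for i in range(n_genes) for j in range(i + 1, n_genes)
         if abs(corr[i, j]) >= threshold]
print(edges)                          # should recover the planted 0 - 3 dependency
```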

  17. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    NASA Astrophysics Data System (ADS)

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach reduces computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can likewise be handled using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and Conditional Value-at-Risk (CVaR).
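
    The simulation side of such an approach can be illustrated schematically: fix a parametric execution schedule, simulate its cost over many Monte Carlo price paths, and read off the expected cost and CVaR as the mean of the worst tail. The price and impact model below is a toy stand-in, not the authors' formulation.

```python
# Toy Monte Carlo estimate of expected cost and CVaR for a uniform schedule.
import numpy as np

rng = np.random.default_rng(0)
shares, periods, paths = 1.0e6, 10, 20000
sigma, eta = 0.02, 1.0e-7           # volatility per period, temporary impact

schedule = np.full(periods, shares / periods)       # uniform liquidation
price_moves = sigma * rng.standard_normal((paths, periods))
prices = 50.0 * (1.0 + np.cumsum(price_moves, axis=1))

# Cost = shortfall vs. the initial price plus a temporary-impact penalty.
shortfall = ((50.0 - prices) * schedule).sum(axis=1)
impact = (eta * schedule**2).sum()
cost = shortfall + impact

alpha = 0.95
var = np.quantile(cost, alpha)
cvar = cost[cost >= var].mean()                     # mean of the worst 5% tail
print(f"expected cost {cost.mean():.0f}, CVaR_{alpha:.0%} {cvar:.0f}")
```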

  18. FASTER 3: A generalized-geometry Monte Carlo computer program for the transport of neutrons and gamma rays. Volume 2: Users manual

    NASA Technical Reports Server (NTRS)

    Jordan, T. M.

    1970-01-01

    A description of the FASTER-III program for Monte Carlo calculation of photon and neutron transport in complex geometries is presented. Major revisions include the capability of calculating minimum weight shield configurations for primary and secondary radiation and optimal importance sampling parameters. The program description includes a users manual describing the preparation of input data cards, the printout from a sample problem including the data card images, definitions of Fortran variables, the program logic, and the control cards required to run on the IBM 7094, IBM 360, UNIVAC 1108 and CDC 6600 computers.

  19. FASTER 3: A generalized-geometry Monte Carlo computer program for the transport of neutrons and gamma rays. Volume 1: Summary report

    NASA Technical Reports Server (NTRS)

    Jordan, T. M.

    1970-01-01

    The theory used in FASTER-III, a Monte Carlo computer program for the transport of neutrons and gamma rays in complex geometries, is outlined. The program includes the treatment of geometric regions bounded by quadratic and quadric surfaces with multiple radiation sources which have specified space, angle, and energy dependence. The program calculates, using importance sampling, the resulting number and energy fluxes at specified point, surface, and volume detectors. It can also calculate minimum weight shield configurations meeting a specified dose rate constraint. Results are presented for sample problems involving primary neutron, and primary and secondary photon, transport in a spherical reactor shield configuration.
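
    The core random walk that programs of this family simulate can be illustrated with an analog (non-importance-sampled) toy: particles from a point source in a homogeneous sphere, with exponentially sampled free paths and a fixed absorption probability per collision. All material constants below are invented for illustration; a production code like FASTER adds real cross sections, energy dependence, and variance reduction.

```python
# Toy analog Monte Carlo transport in a homogeneous sphere.
import numpy as np

rng = np.random.default_rng(42)
radius, sigma_t, absorb_prob = 10.0, 0.3, 0.4   # cm, 1/cm, dimensionless

def random_direction():
    mu = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * np.pi)
    s = np.sqrt(1.0 - mu * mu)
    return np.array([s * np.cos(phi), s * np.sin(phi), mu])

leaked = absorbed = 0
for _ in range(20000):
    pos = np.zeros(3)                     # point source at the sphere center
    direction = random_direction()
    while True:
        pos = pos + direction * rng.exponential(1.0 / sigma_t)  # free path
        if np.linalg.norm(pos) > radius:
            leaked += 1
            break
        if rng.random() < absorb_prob:    # collision outcome
            absorbed += 1
            break
        direction = random_direction()    # isotropic scatter
print(f"leakage fraction: {leaked / (leaked + absorbed):.3f}")
```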

  20. The mechanism of nitrogenase. Computed details of the site and geometry of binding of alkyne and alkene substrates and intermediates.

    PubMed

    Dance, Ian

    2004-09-29

    The chemical mechanism by which the enzyme nitrogenase effects the remarkable reduction of N₂ to NH₃ under ambient conditions continues to be enigmatic, because no intermediate has been observed directly. Recent experimental investigation of the enzymatic consequences of the valine → alanine modification of residue α-70 of the component MoFe protein on the reduction of alkynes, together with EPR and ENDOR spectroscopic characterization of a trappable intermediate in the reduction of propargyl alcohol or propargyl amine (HC≡C-CH₂OH/NH₂), has localized the site of binding and reduction of these substrates on the FeMo-cofactor and led to a proposed η²-Fe coordination geometry. Here these experimental data are modeled using density functional calculations of the allyl alcohol/amine intermediates and the propargyl alcohol/amine reactants coordinated to the FeMo-cofactor, together with force-field calculations of the interactions of these models with the surrounding MoFe protein. The results support and elaborate the earlier proposals, with the most probable binding site and geometry being η²-coordination at Fe6 of the FeMo-cofactor (crystal structure in the Protein Database), in a position that is intermediate between the exo and endo coordination extremes at Fe6. The models described account for (1) the steric influence of the α-70 residue, (2) the crucial hydrogen bonding with Nε of α-195(His), (3) the spectroscopic symmetry of the allyl-alcohol intermediate, and (4) the preferential stabilization of the allyl alcohol/amine relative to propargyl alcohol/amine. Alternative binding sites and geometries for ethyne and ethene, relevant to the wild-type protein, are described. This model defines the location and scene for detailed investigation of the mechanism of nitrogenase.

  1. Facilitating Understandings of Geometry.

    ERIC Educational Resources Information Center

    Pappas, Christine C.; Bush, Sara

    1989-01-01

    Illustrates some learning encounters for facilitating first graders' understanding of geometry. Describes some of the children's approaches using Cuisenaire rods and the teacher's interventions. Presents six problems involving various combinations of Cuisenaire rods and cubes. (YP)

  2. Proof in Transformation Geometry

    ERIC Educational Resources Information Center

    Bell, A. W.

    1971-01-01

    The first of three articles showing how inductively-obtained results in transformation geometry may be organized into a deductive system. This article discusses two approaches to enlargement (dilatation), one using coordinates and the other using synthetic methods. (MM)

  3. Computational Approaches for Microalgal Biofuel Optimization: A Review

    PubMed Central

    Chaiboonchoe, Amphun

    2014-01-01

    The increased demand and consumption of fossil fuels have raised interest in finding renewable energy sources throughout the globe. Much focus has been placed on optimizing microorganisms and primarily microalgae, to efficiently produce compounds that can substitute for fossil fuels. However, the path to achieving economic feasibility is likely to require strain optimization through using available tools and technologies in the fields of systems and synthetic biology. Such approaches invoke a deep understanding of the metabolic networks of the organisms and their genomic and proteomic profiles. The advent of next generation sequencing and other high throughput methods has led to a major increase in availability of biological data. Integration of such disparate data can help define the emergent metabolic system properties, which is of crucial importance in addressing biofuel production optimization. Herein, we review major computational tools and approaches developed and used in order to potentially identify target genes, pathways, and reactions of particular interest to biofuel production in algae. As the use of these tools and approaches has not been fully implemented in algal biofuel research, the aim of this review is to highlight the potential utility of these resources toward their future implementation in algal research. PMID:25309916

  4. Computational approaches for microalgal biofuel optimization: a review.

    PubMed

    Koussa, Joseph; Chaiboonchoe, Amphun; Salehi-Ashtiani, Kourosh

    2014-01-01

    The increased demand and consumption of fossil fuels have raised interest in finding renewable energy sources throughout the globe. Much focus has been placed on optimizing microorganisms and primarily microalgae, to efficiently produce compounds that can substitute for fossil fuels. However, the path to achieving economic feasibility is likely to require strain optimization through using available tools and technologies in the fields of systems and synthetic biology. Such approaches invoke a deep understanding of the metabolic networks of the organisms and their genomic and proteomic profiles. The advent of next generation sequencing and other high throughput methods has led to a major increase in availability of biological data. Integration of such disparate data can help define the emergent metabolic system properties, which is of crucial importance in addressing biofuel production optimization. Herein, we review major computational tools and approaches developed and used in order to potentially identify target genes, pathways, and reactions of particular interest to biofuel production in algae. As the use of these tools and approaches has not been fully implemented in algal biofuel research, the aim of this review is to highlight the potential utility of these resources toward their future implementation in algal research.

  5. An adaptive three-dimensional Cartesian approach for the parallel computation of inviscid flow about static and dynamic configurations

    NASA Astrophysics Data System (ADS)

    Hunt, Jason Daniel

    An adaptive three-dimensional Cartesian approach for the parallel computation of compressible flow about static and dynamic configurations has been developed and validated. This is a further step towards a goal that remains elusive for CFD codes: the ability to model complex dynamic-geometry problems in a quick and automated manner. The underlying flow-solution method solves the three-dimensional Euler equations using a MUSCL-type finite-volume approach to achieve higher-order spatial accuracy. The flow solution, either steady or unsteady, is advanced in time via a two-stage time-stepping scheme. This basic solution method has been incorporated into a parallel block-adaptive Cartesian framework, using a block-octtree data structure to represent varying spatial resolution, and to compute flow solutions in parallel. The ability to represent static geometric configurations has been introduced by cutting a geometric configuration out of a background block-adaptive Cartesian grid, then solving for the flow on the resulting volume grid. This approach has been extended for dynamic geometric configurations: components of a given configuration were permitted to independently move, according to prescribed rigid-body motion. Two flow-solver difficulties arise as a result of introducing static and dynamic configurations: small time steps; and the disappearance/appearance of cell volume during a time integration step. Both of these problems have been remedied through cell merging. The concept of cell merging and its implementation within the parallel block-adaptive method is described. While the parallelization of certain grid-generation and cell-cutting routines resulted from this work, the most significant contribution was developing the novel cell-merging paradigm that was incorporated into the parallel block-adaptive framework. Lastly, example simulations both to validate the developed method and to demonstrate its full capabilities have been carried out. A simple, steady
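
    The block-octree idea at the heart of such adaptive Cartesian methods can be sketched compactly: recursively split cells that satisfy a refinement criterion, such as proximity to an embedded surface. The sketch below refines single cells rather than blocks and ignores parallelism and cut-cell logic; the sphere criterion is a hypothetical stand-in for real geometry.

```python
# Minimal octree refinement toward an embedded surface (a sphere).
import numpy as np

class OctreeNode:
    def __init__(self, center, half, level):
        self.center = np.asarray(center, float)
        self.half, self.level = half, level      # half-width and tree depth
        self.children = []

    def refine(self, needs_refinement, max_level):
        if self.level >= max_level or not needs_refinement(self):
            return
        for dx in (-0.5, 0.5):
            for dy in (-0.5, 0.5):
                for dz in (-0.5, 0.5):
                    child = OctreeNode(
                        self.center + self.half * np.array([dx, dy, dz]),
                        self.half / 2, self.level + 1)
                    child.refine(needs_refinement, max_level)
                    self.children.append(child)

    def leaves(self):
        if not self.children:
            yield self
        else:
            for c in self.children:
                yield from c.leaves()

# Refine cells cut by a sphere of radius 0.6 (a stand-in for body geometry).
near_surface = lambda n: abs(np.linalg.norm(n.center) - 0.6) < n.half * np.sqrt(3)
root = OctreeNode(center=(0, 0, 0), half=1.0, level=0)
root.refine(near_surface, max_level=4)
print("leaf cells:", sum(1 for _ in root.leaves()))
```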

  6. Integration of Computational Geometry, Finite Element, and Multibody System Algorithms for the Development of New Computational Methodology for High-Fidelity Vehicle Systems Modeling and Simulation

    DTIC Science & Technology

    2013-04-11

    suited for efficient communications with CAD systems. It is the main objective of phase I of this SBIR project to demonstrate the feasibility of developing a...civilian wheeled and tracked vehicle models that include significant details. The new software technology will allow for: 1) preserving CAD geometry

  7. Integration of Computational Geometry, Finite Element, and Multibody System Algorithms for the Development of New Computational Methodology for High-Fidelity Vehicle Systems Modeling and Simulation. ADDENDUM

    DTIC Science & Technology

    2013-11-12

    suited for efficient communications with CAD systems. It is the main objective of phase I and Phase I Option of this SBIR project to demonstrate the feasibility of developing a new MBS...wheeled and tracked vehicle models that include significant details. The new software technology will allow for: 1) preserving CAD geometry when FE

  8. Suggested Approaches to the Measurement of Computer Anxiety.

    ERIC Educational Resources Information Center

    Toris, Carol

    Psychologists can gain insight into human behavior by examining what people feel about, know about, and do with, computers. Two extreme reactions to computers are computer phobia, or anxiety, and computer addiction, or "hacking". A four-part questionnaire was developed to measure computer anxiety. The first part is a projective technique which…

  9. Computational Approach to Structural Alerts: Furans, Phenols, Nitroaromatics, and Thiophenes.

    PubMed

    Dang, Na Le; Hughes, Tyler B; Miller, Grover P; Swamidass, S Joshua

    2017-03-14

    Structural alerts are commonly used in drug discovery to identify molecules likely to form reactive metabolites and thereby become toxic. Unfortunately, as useful as structural alerts are, they do not effectively model if, when, and why metabolism renders safe molecules toxic. Toxicity due to a specific structural alert is highly conditional, depending on the metabolism of the alert, the reactivity of its metabolites, dosage, and competing detoxification pathways. A systems approach, which explicitly models these pathways, could more effectively assess the toxicity risk of drug candidates. In this study, we demonstrated that mathematical models of P450 metabolism can predict the context-specific probability that a structural alert will be bioactivated in a given molecule. This study focuses on the furan, phenol, nitroaromatic, and thiophene alerts. Each of these structural alerts can produce reactive metabolites through certain metabolic pathways but not always. We tested whether our metabolism modeling approach, XenoSite, can predict when a given molecule's alerts will be bioactivated. Specifically, we used models of epoxidation, quinone formation, reduction, and sulfur-oxidation to predict the bioactivation of furan-, phenol-, nitroaromatic-, and thiophene-containing drugs. Our models separated bioactivated and not-bioactivated furan-, phenol-, nitroaromatic-, and thiophene-containing drugs with AUC performances of 100%, 73%, 93%, and 88%, respectively. Metabolism models accurately predict whether alerts are bioactivated and thus serve as a practical approach to improve the interpretability and usefulness of structural alerts. We expect that this same computational approach can be extended to most other structural alerts and later integrated into toxicity risk models. This advance is one necessary step toward our long-term goal of building comprehensive metabolic models of bioactivation and detoxification to guide assessment and design of new therapeutic

  10. Computer Modeling of Violent Intent: A Content Analysis Approach

    SciTech Connect

    Sanfilippo, Antonio P.; Mcgrath, Liam R.; Bell, Eric B.

    2014-01-03

    We present a computational approach to modeling the intent of a communication source representing a group or an individual to engage in violent behavior. Our aim is to identify and rank aspects of radical rhetoric that are endogenously related to violent intent to predict the potential for violence as encoded in written or spoken language. We use correlations between contentious rhetoric and the propensity for violent behavior found in documents from radical terrorist and non-terrorist groups and individuals to train and evaluate models of violent intent. We then apply these models to unseen instances of linguistic behavior to detect signs of contention that have a positive correlation with violent intent factors. Of particular interest is the application of violent intent models to social media, such as Twitter, that have proved to serve as effective channels in furthering sociopolitical change.

  11. Systems approaches to computational modeling of the oral microbiome

    PubMed Central

    Dimitrov, Dimiter V.

    2013-01-01

    Current microbiome research has generated tremendous amounts of data providing snapshots of molecular activity in a variety of organisms, environments, and cell types. However, turning this knowledge into a whole-system level of understanding of pathways and processes has proven to be a challenging task. In this review we highlight the applicability of bioinformatics and visualization techniques to large collections of data in order to better understand the information they contain about diet-oral microbiome-host mucosal transcriptome interactions. In particular, we focus on systems biology of Porphyromonas gingivalis in the context of high throughput computational methods tightly integrated with translational systems medicine. Those approaches have applications for both basic research, where we can direct specific laboratory experiments in model organisms and cell cultures, and human disease, where we can validate new mechanisms and biomarkers for prevention and treatment of chronic disorders. PMID:23847548

  12. Computational approaches to substrate-based cell motility

    NASA Astrophysics Data System (ADS)

    Ziebert, Falko; Aranson, Igor S.

    2016-07-01

    Substrate-based crawling motility of eukaryotic cells is essential for many biological functions, both in developing and mature organisms. Motility dysfunctions are involved in several life-threatening pathologies such as cancer and metastasis. Motile cells are also a natural realisation of active, self-propelled 'particles', a popular research topic in nonequilibrium physics. Finally, from the materials perspective, assemblies of motile cells and evolving tissues constitute a class of adaptive self-healing materials that respond to the topography, elasticity and surface chemistry of the environment and react to external stimuli. Although a comprehensive understanding of substrate-based cell motility remains elusive, progress has been achieved recently in its modelling on the whole-cell level. Here we survey the most recent advances in computational approaches to cell movement and demonstrate how these models improve our understanding of complex self-organised systems such as living cells.

  13. Local-basis-function approach to computed tomography

    NASA Astrophysics Data System (ADS)

    Hanson, K. M.; Wecksung, G. W.

    1985-12-01

    In the local basis-function approach, a reconstruction is represented as a linear expansion of basis functions, which are arranged on a rectangular grid and possess a local region of support. The basis functions considered here are positive and may overlap. It is found that basis functions based on cubic B-splines offer significant improvements in the calculational accuracy that can be achieved with iterative tomographic reconstruction algorithms. By employing repetitive basis functions, the computational effort involved in these algorithms can be minimized through the use of tabulated values for the line or strip integrals over a single-basis function. The local nature of the basis functions reduces the difficulties associated with applying local constraints on reconstruction values, such as upper and lower limits. Since a reconstruction is specified everywhere by a set of coefficients, display of a coarsely represented image does not require an arbitrary choice of an interpolation function.
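
    The expansion idea is easy to state in one dimension: a reconstruction is f(x) = sum_k c_k B(x - k), where B is the uniform cubic B-spline kernel with support |t| < 2, so any point is covered by at most four overlapping basis functions. A minimal sketch with toy coefficients follows; the 2-D tomographic machinery (line and strip integrals, iterative updates) is omitted.

```python
# 1-D expansion in shifted uniform cubic B-splines on an integer grid.
import numpy as np

def cubic_bspline(t):
    """Uniform cubic B-spline kernel, support |t| < 2."""
    t = abs(t)
    if t < 1.0:
        return 2.0 / 3.0 - t * t + 0.5 * t**3
    if t < 2.0:
        return (2.0 - t) ** 3 / 6.0
    return 0.0

def evaluate(coeffs, x):
    """f(x) = sum_k c_k * B(x - k) for basis functions centered at integers."""
    k0 = int(np.floor(x))
    total = 0.0
    for k in range(k0 - 2, k0 + 3):      # only nearby kernels overlap x
        if 0 <= k < len(coeffs):
            total += coeffs[k] * cubic_bspline(x - k)
    return total

coeffs = np.array([0.0, 1.0, 4.0, 9.0, 16.0, 25.0])
print([round(evaluate(coeffs, x), 3) for x in (2.0, 2.5, 3.0)])
```

    Note that the coefficients are expansion weights, not sample values, which is why a reconstruction is "specified everywhere" without any separate interpolation choice.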

  14. Computational Diagnostic: A Novel Approach to View Medical Data.

    SciTech Connect

    Mane, K. K.; Börner, K.

    2007-01-01

    A transition from traditional paper-based medical records to electronic health records is largely underway. The use of electronic records offers tremendous potential to personalize patient diagnosis and treatment. In this paper, we discuss a computational diagnostic tool that uses digital medical records to help doctors gain better insight about a patient's medical condition. The paper details different interactive features of the tool which offer potential to practice evidence-based medicine and advance patient diagnosis practices. The healthcare industry is a constantly evolving domain. Research from this domain is often translated into better understanding of different medical conditions. This new knowledge often contributes towards improved diagnosis and treatment solutions for patients. But the healthcare industry lags behind in seeking immediate benefits of the new knowledge, as it still adheres to the traditional paper-based approach to keeping track of medical records. Recently, however, we have noticed a drive that promotes a transition towards the electronic health record (EHR). An EHR stores patient medical records in digital format and offers the potential to replace paper health records. Earlier EHR attempts replicated the paper layout on the screen, represented the medical history of a patient in a graphical time-series format, or provided interactive visualization with 2D/3D images generated from an imaging device. But an EHR can be much more than just an 'electronic view' of the paper record or a collection of images from an imaging device. In this paper, we present an EHR called 'Computational Diagnostic Tool', which provides a novel computational approach to looking at patient medical data. The developed EHR system is knowledge driven and acts as a clinical decision support tool. The EHR tool provides two visual views of the medical data. Dynamic interaction with data is supported to help doctors practice evidence-based decisions and make judicious choices about patient

  15. A machine-learning approach for computation of fractional flow reserve from coronary computed tomography.

    PubMed

    Itu, Lucian; Rapaka, Saikiran; Passerini, Tiziano; Georgescu, Bogdan; Schwemmer, Chris; Schoebinger, Max; Flohr, Thomas; Sharma, Puneet; Comaniciu, Dorin

    2016-07-01

    Fractional flow reserve (FFR) is a functional index quantifying the severity of coronary artery lesions and is clinically obtained using an invasive, catheter-based measurement. Recently, physics-based models have shown great promise in being able to noninvasively estimate FFR from patient-specific anatomical information, e.g., obtained from computed tomography scans of the heart and the coronary arteries. However, these models have high computational demand, limiting their clinical adoption. In this paper, we present a machine-learning-based model for predicting FFR as an alternative to physics-based approaches. The model is trained on a large database of synthetically generated coronary anatomies, where the target values are computed using the physics-based model. The trained model predicts FFR at each point along the centerline of the coronary tree, and its performance was assessed by comparing the predictions against physics-based computations and against invasively measured FFR for 87 patients and 125 lesions in total. Correlation between machine-learning and physics-based predictions was excellent (0.9994, P < 0.001), and no systematic bias was found in Bland-Altman analysis: mean difference was -0.00081 ± 0.0039. Invasive FFR ≤ 0.80 was found in 38 lesions out of 125 and was predicted by the machine-learning algorithm with a sensitivity of 81.6%, a specificity of 83.9%, and an accuracy of 83.2%. The correlation was 0.729 (P < 0.001). Compared with the physics-based computation, average execution time was reduced by more than 80 times, leading to near real-time assessment of FFR. Average execution time went down from 196.3 ± 78.5 s for the CFD model to ∼2.4 ± 0.44 s for the machine-learning model on a workstation with 3.4-GHz Intel i7 8-core processor.
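
    The workflow described, an expensive physics model generating synthetic training labels for a fast learned surrogate, follows a generic pattern that can be sketched as below. The 'physics' function and the two lesion features are invented placeholders; the paper's actual anatomical features and model are far richer.

```python
# Schematic surrogate-model workflow: train a regressor on synthetic data
# labeled by an expensive "physics" stand-in, then predict cheaply.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)

def physics_model(stenosis, length):
    """Placeholder for the expensive CFD computation (toy formula)."""
    return np.clip(1.0 - 0.6 * stenosis**2 - 0.1 * length * stenosis, 0.3, 1.0)

# Synthetic training database of (stenosis severity, lesion length) pairs.
X = rng.uniform([0.0, 0.5], [0.9, 5.0], size=(5000, 2))
y = physics_model(X[:, 0], X[:, 1])

model = GradientBoostingRegressor().fit(X, y)

X_test = rng.uniform([0.0, 0.5], [0.9, 5.0], size=(200, 2))
pred = model.predict(X_test)
ref = physics_model(X_test[:, 0], X_test[:, 1])
print("correlation:", np.corrcoef(pred, ref)[0, 1])
print("Bland-Altman mean difference:", (pred - ref).mean())
```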

  16. Dynamical Approach Study of Spurious Numerics in Nonlinear Computations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Mansour, Nagi (Technical Monitor)

    2002-01-01

    The last two decades have been an era when computation is ahead of analysis and when very large scale practical computations are increasingly used in poorly understood multiscale complex nonlinear physical problems and non-traditional fields. Ensuring a higher level of confidence in the predictability and reliability (PAR) of these numerical simulations could play a major role in furthering the design, understanding, affordability and safety of our next generation air and space transportation systems, and systems for planetary and atmospheric sciences, and in understanding the evolution and origin of life. The need to guarantee PAR becomes acute when computations offer the ONLY way of solving these types of data-limited problems. Employing theory from nonlinear dynamical systems, some building blocks to ensure a higher level of confidence in PAR of numerical simulations have been revealed by the author and world expert collaborators in relevant fields. Five building blocks with supporting numerical examples were discussed. The next step is to utilize knowledge gained by including nonlinear dynamics, bifurcation and chaos theories as an integral part of the numerical process. The third step is to design integrated criteria for reliable and accurate algorithms that cater to the different multiscale nonlinear physics. This includes but is not limited to the construction of appropriate adaptive spatial and temporal discretizations that are suitable for the underlying governing equations. In addition, a multiresolution wavelets approach for adaptive numerical dissipation/filter controls for high speed turbulence, acoustics and combustion simulations will be sought. These steps are cornerstones for guarding against spurious numerical solutions that are solutions of the discretized counterparts but are not solutions of the underlying governing equations.

  17. Separation efficiency of a hydrodynamic separator using a 3D computational fluid dynamics multiscale approach.

    PubMed

    Schmitt, Vivien; Dufresne, Matthieu; Vazquez, Jose; Fischer, Martin; Morin, Antoine

    2014-01-01

    The aim of this study is to investigate the use of computational fluid dynamics (CFD) to predict the solid separation efficiency of a hydrodynamic separator. The numerical difficulty concerns the discretization of the geometry to simulate both the global behavior and the local phenomena that occur near the screen. In this context, a CFD multiscale approach was used: a global model (at the scale of the device) is used to observe the hydrodynamic behavior within the device; a local model (portion of the screen) is used to determine the local phenomena that occur near the screen. The Eulerian-Lagrangian approach was used to model the particle trajectories in both models. The global model shows the influence of the particles' characteristics on the trapping efficiency. A high density favors sedimentation. In contrast, particles with small densities (1,040 kg/m³) are steered by the hydrodynamic behavior and can potentially be trapped by the separator. The use of the local model allows us to observe the particle trajectories near the screen. A comparison between two types of screens (perforated plate vs expanded metal) highlights the turbulent effects created by the shape of the screen.

  18. An Evolutionary Computation Approach to Examine Functional Brain Plasticity

    PubMed Central

    Roy, Arnab; Campbell, Colin; Bernier, Rachel A.; Hillary, Frank G.

    2016-01-01

    One common research goal in systems neurosciences is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well-suited for the study of developmental processes, learning, and even in recovery or treatment designs in response to injury. For most fMRI based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signal representing each region. The drawback to this approach is that much information is lost due to averaging heterogeneous voxels, and therefore, the functional relationship between a ROI-pair that evolve at a spatial scale much finer than the ROIs remain undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions seperated by finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC based procedure is able to detect functional plasticity where a traditional averaging based approach fails. The subject-specific plasticity estimates obtained using the EC-procedure are highly consistent across multiple runs. Group-level analyses using these plasticity estimates showed an increase in the strength

  19. Quasi-relativistic model-potential approach. Spin-orbit effects on energies and geometries of several di- and tri-atomic molecules

    NASA Astrophysics Data System (ADS)

    Hafner, P.; Habitz, P.; Ishikawa, Y.; Wechsel-Trakowski, E.; Schwarz, W. H. E.

    1981-06-01

    Calculations on ground and valence-excited states of Au₂⁺, Tl₂ and Pb₂, and on the ground states of HgCl₂, PbCl₂ and PbH₂ have been performed within the Kramers-restricted self-consistent-field approach using a quasi-relativistic model-potential Hamiltonian. The influence of spin-orbit coupling on molecular orbitals, bond energies and geometries is discussed.

  20. Mapping Agricultural Fields in Sub-Saharan Africa with a Computer Vision Approach

    NASA Astrophysics Data System (ADS)

    Debats, S. R.; Luo, D.; Estes, L. D.; Fuchs, T.; Caylor, K. K.

    2014-12-01

    Sub-Saharan Africa is an important focus for food security research, because it is experiencing unprecedented population growth, agricultural activities are largely dominated by smallholder production, and the region is already home to 25% of the world's undernourished. One of the greatest challenges to monitoring and improving food security in this region is obtaining an accurate accounting of the spatial distribution of agriculture. Households are the primary units of agricultural production in smallholder communities and typically rely on small fields of less than 2 hectares. Field sizes are directly related to household crop productivity, management choices, and adoption of new technologies. As population and agriculture expand, it becomes increasingly important to understand both the distribution of field sizes as well as how agricultural communities are spatially embedded in the landscape. In addition, household surveys, a common tool for tracking agricultural productivity in Sub-Saharan Africa, would greatly benefit from spatially explicit accounting of fields. Current gridded land cover data sets do not provide information on individual agricultural fields or the distribution of field sizes. Therefore, we employ cutting edge approaches from the field of computer vision to map fields across Sub-Saharan Africa, including semantic segmentation, discriminative classifiers, and automatic feature selection. Our approach aims to not only improve the binary classification accuracy of cropland, but also to isolate distinct fields, thereby capturing crucial information on size and geometry. Our research focuses on the development of descriptive features across scales to increase the accuracy and geographic range of our computer vision algorithm. Relevant data sets include high-resolution remote sensing imagery and Landsat (30-m) multi-spectral imagery. Training data for field boundaries is derived from hand-digitized data sets as well as crowdsourcing.

  1. An analytical approach to computing biomolecular electrostatic potential. II. Validation and applications

    NASA Astrophysics Data System (ADS)

    Gordon, John C.; Fenley, Andrew T.; Onufriev, Alexey

    2008-08-01

    An ability to efficiently compute the electrostatic potential produced by molecular charge distributions under realistic solvation conditions is essential for a variety of applications. Here, the simple closed-form analytical approximation to the Poisson equation rigorously derived in Part I for idealized spherical geometry is tested on realistic shapes. The effects of mobile ions are included at the Debye-Hückel level. The accuracy of the resulting closed-form expressions for electrostatic potential is assessed through comparisons with numerical Poisson-Boltzmann (NPB) reference solutions on a test set of 580 representative biomolecular structures under typical conditions of aqueous solvation. For each structure, the deviation from the reference is computed for a large number of test points placed near the dielectric boundary (molecular surface). The accuracy of the approximation, averaged over all test points in each structure, is within 0.6 kcal/mol/|e| (roughly kT per unit charge) for all structures in the test set. For 91.5% of the individual test points, the deviation from the NPB potential is within 0.6 kcal/mol/|e|. The deviations from the reference decrease with increasing distance from the dielectric boundary: The approximation is asymptotically exact far away from the source charges. Deviation of the overall shape of a structure from ideal spherical does not, by itself, appear to necessitate decreased accuracy of the approximation. The largest deviations from the NPB reference are found inside very deep and narrow indentations that occur on the dielectric boundaries of some structures. The dimensions of these pockets of locally highly negative curvature are comparable to the size of a water molecule; the applicability of continuum dielectric models in these regions is discussed. The maximum deviations from the NPB are reduced substantially when the boundary is smoothed by using a larger probe radius (3 Å) to generate the molecular surface. A detailed accuracy
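
    For reference, the Debye-Hückel level of mobile-ion treatment mentioned here corresponds, for a single point charge, to the screened Coulomb potential φ(r) = q·e^(−κr) / (4πε₀ε_r·r). The sketch below evaluates this textbook limiting form; it is not the paper's closed-form approximation for arbitrary molecular shapes.

```python
# Debye-Hueckel screened Coulomb potential of a point charge in a dielectric.
import numpy as np

def debye_huckel_potential(q, r, eps_rel=80.0, kappa=0.1):
    """phi(r) = q * exp(-kappa*r) / (4*pi*eps0*eps_rel*r); kappa in 1/Angstrom."""
    eps0 = 8.8541878128e-12          # vacuum permittivity, F/m
    r_m = r * 1e-10                  # Angstrom -> m
    kappa_m = kappa * 1e10           # 1/Angstrom -> 1/m
    return q * np.exp(-kappa_m * r_m) / (4.0 * np.pi * eps0 * eps_rel * r_m)

e = 1.602176634e-19                  # elementary charge, C
for r in (5.0, 10.0, 20.0):          # distances in Angstrom
    print(f"r = {r:4.1f} A: phi = {debye_huckel_potential(e, r):.4e} V")
```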

  2. Slab-geometry Nd:glass laser performance studies

    NASA Technical Reports Server (NTRS)

    Eggleston, J. M.; Kane, T. J.; Byer, R. L.; Unternahrer, J.

    1982-01-01

    It is noted that slab-geometry solid-state lasers potentially provide significant performance improvements relative to conventional rod-geometry lasers. Experimental measurements that use an Nd:glass test-bed slab laser are presented. A comparison is made between the results and computer-model predictions of the slab-geometry approach. The computer model calculates and displays the temperature and stress fields in the slab, and on the basis of these predicts birefringence and index-of-refraction distributions. The effect that these distributions have on optical propagation is determined in a polarization-sensitive ray-tracing section of the model. Calculations are also made of stress-induced surface curvature and the resulting focusing effects. The measurements are found to be in good agreement with the computer-model predictions. It is concluded that the slab configuration offers significant laser-performance advantages in comparison with the traditional rod-laser geometry.

  3. Teaching of Geometry in Bulgaria

    ERIC Educational Resources Information Center

    Bankov, Kiril

    2013-01-01

    Geometry plays an important role in the school mathematics curriculum all around the world. Teaching of geometry varies a lot (Hoyles, Foxman, & Küchemann, 2001). Many countries revise the objectives, the content, and the approaches to geometry in school. Studies of the processes show that there are no common trends in these changes…

  4. Influence of LVAD cannula outflow tract location on hemodynamics in the ascending aorta: a patient-specific computational fluid dynamics approach.

    PubMed

    Karmonik, Christof; Partovi, Sasan; Loebe, Matthias; Schmack, Bastian; Ghodsizad, Ali; Robbin, Mark R; Noon, George P; Kallenbach, Klaus; Karck, Matthias; Davies, Mark G; Lumsden, Alan B; Ruhparwar, Arjang

    2012-01-01

    To develop a better understanding of the hemodynamic alterations in the ascending aorta induced by variation of the cannula outflow position of the left ventricular assist device (LVAD) based on patient-specific geometries, transient computational fluid dynamics (CFD) simulations using the realizable k-ε turbulence model were conducted for two of the most common LVAD outflow geometries. Thoracic aortic flow patterns, pressures, wall shear stresses (WSSs), turbulent dissipation, and energy were quantified in the ascending aorta at the location of the cannula outflow. Streamlines for the lateral geometry showed a large region of disturbed flow surrounding the LVAD outflow, with an impingement zone at the contralateral wall exhibiting increased WSSs and pressures. Flow disturbance was reduced for the anterior geometries, with clearly reduced pressures and WSSs. Turbulent dissipation was higher for the lateral geometry and turbulent energy was lower. Variation in the position of the cannula outflow clearly affects hemodynamics in the ascending aorta, favoring an anterior geometry for a more ordered flow pattern. The new patient-specific approach used in this study for LVAD patients emphasizes the potential use of CFD as a truly translational technique.

  5. Communication: An efficient approach to compute state-specific nuclear gradients for a generic state-averaged multi-configuration self consistent field wavefunction

    SciTech Connect

    Granovsky, Alexander A.

    2015-12-21

    We present a new, very efficient semi-numerical approach for the computation of state-specific nuclear gradients of a generic state-averaged multi-configuration self consistent field wavefunction. Our approach eliminates the costly coupled-perturbed multi-configuration Hartree-Fock step as well as the associated integral transformation stage. The details of the implementation within the Firefly quantum chemistry package are discussed and several sample applications are given. The new approach is routinely applicable to geometry optimization of molecular systems with 1000+ basis functions using a standalone multi-core workstation.

  6. Computational Study on Subdural Cortical Stimulation - The Influence of the Head Geometry, Anisotropic Conductivity, and Electrode Configuration

    PubMed Central

    Kim, Donghyeon; Seo, Hyeon; Kim, Hyoung-Ihl; Jun, Sung Chan

    2014-01-01

    Subdural cortical stimulation (SuCS) is a method used to inject electrical current through electrodes beneath the dura mater, and is known to be useful in treating brain disorders. However, precisely how SuCS must be applied to yield the most effective results has rarely been investigated. For this purpose, we developed a three-dimensional computational model that represents an anatomically realistic brain model including an upper chest. With this computational model, we investigated the influence of stimulation amplitudes, electrode configurations (single or paddle-array), and white matter conductivities (isotropy or anisotropy). Further, the effects of stimulation were compared with two other computational models, including an anatomically realistic brain-only model and the simplified extruded slab model representing the precentral gyrus area. The results of voltage stimulation suggested that there was a synergistic effect with the paddle-array due to the use of multiple electrodes; however, a single electrode was more efficient with current stimulation. The conventional model (simplified extruded slab) far overestimated the effects of stimulation with both voltage and current by comparison to our proposed realistic upper body model. However, the realistic upper body and full brain-only models demonstrated similar stimulation effects. In our investigation of the influence of anisotropic conductivity, the model with a fixed-ratio (1:10) anisotropic conductivity yielded deeper penetration depths and larger extents of stimulation than the others. However, isotropic and anisotropic models with fixed ratios (1:2, 1:5) yielded similar stimulation effects. Lastly, whether the reference electrode was located on the right or left chest had no substantial effects on stimulation. PMID:25229673

  7. A systems approach to computer-based training

    NASA Technical Reports Server (NTRS)

    Drape, Gaylen W.

    1994-01-01

    This paper describes the hardware and software systems approach used in the Automated Recertification Training System (ARTS), a Phase 2 Small Business Innovation Research (SBIR) project for NASA Kennedy Space Center (KSC). The goal of this project is to optimize recertification training of technicians who process the Space Shuttle before launch by providing computer-based training courseware. The objectives of ARTS are to implement more effective CBT applications identified through a needs assessment process and to provide an enhanced courseware production system. The system's capabilities are demonstrated by using five different pilot applications to convert existing classroom courses into interactive courseware. When the system is fully implemented at NASA/KSC, trainee job performance will improve and the cost of courseware development will be lower. Commercialization of the technology developed as part of this SBIR project is planned for Phase 3. Anticipated spin-off products include custom courseware for technical skills training and courseware production software for use by corporate training organizations of aerospace and other industrial companies.

  8. Computer-aided interpretation approach for optical tomographic images

    NASA Astrophysics Data System (ADS)

    Klose, Christian D.; Klose, Alexander D.; Netz, Uwe J.; Scheel, Alexander K.; Beuthan, Jürgen; Hielscher, Andreas H.

    2010-11-01

    A computer-aided interpretation approach is proposed to detect rheumatic arthritis (RA) in human finger joints using optical tomographic images. The image interpretation method employs a classification algorithm that makes use of a so-called self-organizing mapping scheme to classify fingers as either affected or unaffected by RA. Unlike in previous studies, this allows for combining multiple image features, such as minimum and maximum values of the absorption coefficient, for identifying affected and not affected joints. Classification performances obtained by the proposed method were evaluated in terms of sensitivity, specificity, Youden index, and mutual information. Different methods (i.e., clinical diagnostics, ultrasound imaging, magnetic resonance imaging, and inspection of optical tomographic images) were used to produce ground truth benchmarks to determine the performance of image interpretations. Using data from 100 finger joints, findings suggest that some parameter combinations lead to higher sensitivities, while others to higher specificities when compared to single parameter classifications employed in previous studies. Maximum performances are reached when combining the minimum/maximum ratio of the absorption coefficient and image variance. In this case, sensitivities and specificities over 0.9 can be achieved. These values are much higher than values obtained when only single parameter classifications were used, where sensitivities and specificities remained well below 0.8.

  9. A Computational Drug Repositioning Approach for Targeting Oncogenic Transcription Factors

    PubMed Central

    Gayvert, Kaitlyn; Dardenne, Etienne; Cheung, Cynthia; Boland, Mary Regina; Lorberbaum, Tal; Wanjala, Jackline; Chen, Yu; Rubin, Mark; Tatonetti, Nicholas P.; Rickman, David; Elemento, Olivier

    2016-01-01

    Mutations in transcription factor (TF) genes are frequently observed in tumors, often leading to aberrant transcriptional activity. Unfortunately, TFs are often considered undruggable due to the absence of targetable enzymatic activity. To address this problem, we developed CRAFTT, a Computational drug-Repositioning Approach For Targeting Transcription factor activity. CRAFTT combines ChIP-seq with drug-induced expression profiling to identify small molecules that can specifically perturb TF activity. Application to ENCODE ChIP-seq datasets revealed known drug-TF interactions, and a global drug-protein network analysis further supported these predictions. Application of CRAFTT to ERG, a pro-invasive, frequently over-expressed oncogenic TF, predicted that dexamethasone would inhibit ERG activity. Indeed, dexamethasone significantly decreased cell invasion and migration in an ERG-dependent manner. Furthermore, analysis of Electronic Medical Record data indicates a protective role for dexamethasone against prostate cancer. Altogether, our method provides a broadly applicable strategy to identify drugs that specifically modulate TF activity. PMID:27264179

  10. An Integrated Soft Computing Approach to Hughes Syndrome Risk Assessment.

    PubMed

    Vilhena, João; Rosário Martins, M; Vicente, Henrique; Grañeda, José M; Caldeira, Filomena; Gusmão, Rodrigo; Neves, João; Neves, José

    2017-03-01

    The AntiPhospholipid Syndrome (APS) is an acquired autoimmune disorder induced by high levels of antiphospholipid antibodies that cause arterial and venous thrombosis, as well as pregnancy-related complications and morbidity, as clinical manifestations. This autoimmune hypercoagulable state, usually known as Hughes syndrome, has severe consequences for patients, being one of the main causes of thrombotic disorders and death. It is therefore important to be preventive, i.e., to be aware of how probable it is to have this kind of syndrome. Despite the updated antiphospholipid syndrome classification, the diagnosis remains difficult to establish. Additional research on clinically relevant antibodies and standardization of their quantification are required in order to improve the antiphospholipid syndrome risk assessment. Thus, this work focuses on the development of a diagnosis decision support system in terms of a formal agenda built on a Logic Programming approach to knowledge representation and reasoning, complemented with a computational framework based on Artificial Neural Networks. The proposed model allows for improving the diagnosis, properly classifying the patients that really present this pathology (sensitivity higher than 85%), as well as classifying the absence of APS (specificity close to 95%).

  11. Computer-aided interpretation approach for optical tomographic images.

    PubMed

    Klose, Christian D; Klose, Alexander D; Netz, Uwe J; Scheel, Alexander K; Beuthan, Jurgen; Hielscher, Andreas H

    2010-01-01

    A computer-aided interpretation approach is proposed to detect rheumatic arthritis (RA) in human finger joints using optical tomographic images. The image interpretation method employs a classification algorithm that makes use of a so-called self-organizing mapping scheme to classify fingers as either affected or unaffected by RA. Unlike in previous studies, this allows for combining multiple image features, such as minimum and maximum values of the absorption coefficient, for identifying affected and not affected joints. Classification performances obtained by the proposed method were evaluated in terms of sensitivity, specificity, Youden index, and mutual information. Different methods (i.e., clinical diagnostics, ultrasound imaging, magnetic resonance imaging, and inspection of optical tomographic images) were used to produce ground truth benchmarks to determine the performance of image interpretations. Using data from 100 finger joints, findings suggest that some parameter combinations lead to higher sensitivities, while others to higher specificities when compared to single parameter classifications employed in previous studies. Maximum performances are reached when combining the minimum/maximum ratio of the absorption coefficient and image variance. In this case, sensitivities and specificities over 0.9 can be achieved. These values are much higher than values obtained when only single parameter classifications were used, where sensitivities and specificities remained well below 0.8.

  12. Lexical is as lexical does: computational approaches to lexical representation

    PubMed Central

    Woollams, Anna M.

    2015-01-01

    In much of neuroimaging and neuropsychology, regions of the brain have been associated with ‘lexical representation’, with little consideration as to what this cognitive construct actually denotes. Within current computational models of word recognition, there are a number of different approaches to the representation of lexical knowledge. Structural lexical representations, found in original theories of word recognition, have been instantiated in modern localist models. However, such a representational scheme lacks neural plausibility in terms of economy and flexibility. Connectionist models have therefore adopted distributed representations of form and meaning. Semantic representations in connectionist models necessarily encode lexical knowledge. Yet when equipped with recurrent connections, connectionist models can also develop attractors for familiar forms that function as lexical representations. Current behavioural, neuropsychological and neuroimaging evidence shows a clear role for semantic information, but also suggests some modality- and task-specific lexical representations. A variety of connectionist architectures could implement these distributed functional representations, and further experimental and simulation work is required to discriminate between these alternatives. Future conceptualisations of lexical representations will therefore emerge from a synergy between modelling and neuroscience. PMID:25893204

  13. 3D Reconstruction of Chick Embryo Vascular Geometries Using Non-invasive High-Frequency Ultrasound for Computational Fluid Dynamics Studies.

    PubMed

    Tan, Germaine Xin Yi; Jamil, Muhammad; Tee, Nicole Gui Zhen; Zhong, Liang; Yap, Choon Hwai

    2015-11-01

    Recent animal studies have provided evidence that prenatal blood flow fluid mechanics may play a role in the pathogenesis of congenital cardiovascular malformations. To further this research, it is important to have an imaging technique for small animal embryos with sufficient resolution to support computational fluid dynamics studies, one that is also non-invasive and non-destructive to allow for subject-specific, longitudinal studies. In the current study, we developed such a technique, based on ultrasound biomicroscopy scans of chick embryos. Our technique included a motion cancelation algorithm to negate embryonic body motion, a temporal averaging algorithm to differentiate blood spaces from tissue spaces, and 3D reconstruction of blood volumes in the embryo. The accuracy of the reconstructed models was validated with direct stereoscopic measurements. A computational fluid dynamics simulation was performed to model fluid flow in the generated construct of a Hamburger-Hamilton (HH) stage 27 embryo. Simulation results showed that there were divergent streamlines and a low shear region at the carotid duct, which may be linked to the carotid duct's eventual regression and disappearance by HH stage 34. We show that our technique has sufficient resolution to produce accurate geometries for computational fluid dynamics simulations to quantify embryonic cardiovascular fluid mechanics.

  14. Developing framework to constrain the geometry of the seismic rupture plane on subduction interfaces a priori - A probabilistic approach

    USGS Publications Warehouse

    Hayes, G.P.; Wald, D.J.

    2009-01-01

    A key step in many earthquake source inversions requires knowledge of the geometry of the fault surface on which the earthquake occurred. Our knowledge of this surface is often uncertain, however, and as a result fault geometry misinterpretation can map into significant error in the final temporal and spatial slip patterns of these inversions. Relying solely on an initial hypocentre and CMT mechanism can be problematic when establishing rupture characteristics needed for rapid tsunami and ground shaking estimates. Here, we attempt to improve the quality of fast finite-fault inversion results by combining several independent and complementary data sets to more accurately constrain the geometry of the seismic rupture plane of subducting slabs. Unlike previous analyses aimed at defining the general form of the plate interface, we require mechanisms and locations of the seismicity considered in our inversions to be consistent with their occurrence on the plate interface, by limiting events to those with well-constrained depths and with CMT solutions indicative of shallow-dip thrust faulting. We construct probability density functions about each location based on formal assumptions of their depth uncertainty and use these constraints to solve for the ‘most-likely’ fault plane. Examples are shown for the trench in the source region of the Mw 8.6 Southern Sumatra earthquake of March 2005, and for the Northern Chile Trench in the source region of the November 2007 Antofagasta earthquake. We also show examples using only the historic catalogues in regions without recent great earthquakes, such as the Japan and Kamchatka Trenches. In most cases, this method produces a fault plane that is more consistent with all of the data available than is the plane implied by the initial hypocentre and CMT mechanism. Using the aggregated data sets, we have developed an algorithm to rapidly determine more accurate initial fault plane geometries for source inversions of future
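
    The final fitting step can be caricatured as a weighted least-squares plane fit in which each event's weight reflects its depth uncertainty, a simplification of the probability-density treatment described here. The catalog below is synthetic, and the planar (rather than curved) interface is an assumption of the sketch.

```python
# Weighted least-squares plane fit to synthetic hypocenters with depth errors.
import numpy as np

rng = np.random.default_rng(3)
n = 60
x, y = rng.uniform(0, 100, n), rng.uniform(0, 100, n)   # along-strike coords, km
true_depth = 5.0 + 0.15 * x + 0.05 * y                  # gently dipping interface
sigma_z = rng.uniform(2.0, 8.0, n)                      # per-event depth sigma, km
z = true_depth + sigma_z * rng.standard_normal(n)

# Solve z ~ a*x + b*y + c, weighting each row by 1/sigma_z.
A = np.column_stack([x, y, np.ones(n)])
sw = (1.0 / sigma_z)[:, None]                           # sqrt of 1/sigma^2 weights
coef, *_ = np.linalg.lstsq(sw * A, (sw[:, 0]) * z, rcond=None)
a, b, c = coef
dip = np.degrees(np.arctan(np.hypot(a, b)))
print(f"recovered plane: z = {a:.3f}x + {b:.3f}y + {c:.2f}, dip ~ {dip:.1f} deg")
```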

  15. An Educational Approach to Computationally Modeling Dynamical Systems

    ERIC Educational Resources Information Center

    Chodroff, Leah; O'Neal, Tim M.; Long, David A.; Hemkin, Sheryl

    2009-01-01

    Chemists have used computational science methodologies for a number of decades and their utility continues to be unabated. For this reason we developed an advanced lab in computational chemistry in which students gain understanding of general strengths and weaknesses of computation-based chemistry by working through a specific research problem.…

  16. Computational approaches to stochastic systems in physics and biology

    NASA Astrophysics Data System (ADS)

    Jeraldo Maldonado, Patricio Rodrigo

    In this dissertation, I devise computational approaches to model and understand two very different systems which exhibit stochastic behavior: quantum fluids with topological defects arising during quenches and forcing, and complex microbial communities living and evolving with the gastrointestinal tracts of vertebrates. As such, this dissertation is organized into two parts. In Part I, I create a model for quantum fluids, which incorporates a conservative and dissipative part, and I also allow the fluid to be externally forced by a normal fluid. I use then this model to calculate scaling laws arising from the stochastic interactions of the topological defects exhibited by the modeled fluid while undergoing a quench. In Chapter 2 I give a detailed description of this model of quantum fluids. Unlike more traditional approaches, this model is based on Cell Dynamical Systems (CDS), an approach that captures relevant physical features of the system and allows for long time steps during its evolution. I devise a two step CDS model, implementing both conservative and dissipative dynamics present in quantum fluids. I also couple the model with an external normal fluid field that drives the system. I then validate the results of the model by measuring different scaling laws predicted for quantum fluids. I also propose an extension of the model that also incorporates the excitations of the fluid and couples its dynamics with the dynamics of the condensate. In Chapter 3 I use the above model to calculate scaling laws predicted for the velocity of topological defects undergoing a critical quench. To accomplish this, I numerically implement an algorithm that extracts from the order parameter field the velocity components of the defects as they move during the quench process. This algorithm is robust and extensible to any system where defects are located by the zeros of the order parameter. The algorithm is also applied to a sheared stripe-forming system, allowing the

  17. An adaptive Cartesian grid generation method for Dirty geometry

    NASA Astrophysics Data System (ADS)

    Wang, Z. J.; Srinivasan, Kumar

    2002-07-01

    Traditional structured and unstructured grid generation methods need a water-tight boundary surface grid to start. Therefore, these methods are named boundary to interior (B2I) approaches. Although these methods have achieved great success in fluid flow simulations, the grid generation process can still be very time consuming if non-water-tight geometries are given. Significant user time can be taken to repair or clean a dirty geometry with cracks, overlaps or invalid manifolds before grid generation can take place. In this paper, we advocate a different approach to grid generation, namely the interior to boundary (I2B) approach. With an I2B approach, the computational grid is first generated inside the computational domain. Then this grid is intelligently connected to the boundary, and the boundary grid is a result of this connection. A significant advantage of the I2B approach is that dirty geometries can be handled without cleaning or repairing, dramatically reducing grid generation time. An I2B adaptive Cartesian grid generation method is developed in this paper to handle dirty geometries without geometry repair. Compared with a B2I approach, the grid generation time with the I2B approach for a complex automotive engine can be reduced by three orders of magnitude.
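
    A minimal 2-D sketch of the I2B idea, under the assumption that a point-membership query is available even for imperfect geometry: Cartesian cells are generated first and classified by their centres. The cut/snap connection of interior cells to the boundary, which is the substantive part of the method, is only noted in a comment.

    ```python
    # Sketch of the interior-to-boundary (I2B) idea in 2-D: generate Cartesian
    # cells first, keep those whose centres fall inside a (possibly dirty)
    # polygon; a point-in-polygon query never needs a watertight surface mesh.
    def point_in_polygon(px, py, poly):
        """Even-odd ray-casting test; poly is a list of (x, y) vertices."""
        inside = False
        n = len(poly)
        for k in range(n):
            x1, y1 = poly[k]
            x2, y2 = poly[(k + 1) % n]
            if (y1 > py) != (y2 > py):
                x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                if px < x_cross:
                    inside = not inside
        return inside

    poly = [(0, 0), (10, 0), (10, 6), (4, 6), (4, 9), (0, 9)]  # L-shaped domain
    h = 0.5                                                    # cell size
    cells = [(i, j) for i in range(24) for j in range(20)
             if point_in_polygon((i + 0.5) * h, (j + 0.5) * h, poly)]
    # A real I2B method would now cut/snap the outermost kept cells to the
    # geometry to produce the boundary grid.
    print(f"{len(cells)} interior cells kept")
    ```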

  18. A computational intelligence approach to the Mars Precision Landing problem

    NASA Astrophysics Data System (ADS)

    Birge, Brian Kent, III

    Various proposed Mars missions, such as the Mars Sample Return Mission (MRSR) and the Mars Smart Lander (MSL), require precise re-entry terminal position and velocity states. This is to achieve mission objectives including rendezvous with a previously landed mission, or reaching a particular geographic landmark. The current state-of-the-art footprint is on the order of kilometers. For this research a Mars Precision Landing is achieved with a landed footprint of no more than 100 meters, for a set of initial entry conditions representing worst-guess dispersions. Obstacles to reducing the landed footprint include trajectory dispersions due to initial atmospheric entry conditions (entry angle, parachute deployment height, etc.), environment (wind, atmospheric density, etc.), parachute deployment dynamics, unavoidable injection error (propagated error from launch on), etc. Weather and atmospheric models have been developed. Three descent scenarios have been examined. First, terminal re-entry is achieved via a ballistic parachute with concurrent thrusting events while on the parachute, followed by a gravity turn. Second, terminal re-entry is achieved via a ballistic parachute followed by a gravity turn to hover and then thrust vectoring to the desired location. Third, a guided parafoil approach followed by vectored thrusting to reach terminal velocity is examined. The guided parafoil is determined to be the best architecture. The purpose of this study is to examine the feasibility of using a computational intelligence strategy to facilitate precision planetary re-entry, specifically to take an approach that is somewhat more intuitive and less rigid, and see where it leads. The test problems used for all research are variations on proposed Mars landing mission scenarios developed by NASA. A relatively recent method of evolutionary computation is Particle Swarm Optimization (PSO), which can be considered to be in the same general class as Genetic Algorithms. An improvement over
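
    Since the abstract names Particle Swarm Optimization, here is a minimal generic PSO loop; the objective below is a hypothetical stand-in for the landed miss distance, not the author's entry simulation.

    ```python
    # Minimal particle swarm optimization (PSO) sketch: particles track their
    # personal best and the swarm's global best while exploring the space.
    import numpy as np

    def miss_distance(p):
        """Toy objective: distance from a desired landing target (assumption)."""
        return np.sum((p - np.array([3.0, -1.5])) ** 2)

    rng = np.random.default_rng(1)
    n, dim, iters = 30, 2, 100
    w, c1, c2 = 0.7, 1.5, 1.5                       # inertia, cognitive, social

    x = rng.uniform(-10, 10, (n, dim))              # particle positions
    v = np.zeros((n, dim))
    pbest = x.copy()
    pbest_f = np.array([miss_distance(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()

    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([miss_distance(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()

    print("best parameters:", gbest, "miss:", miss_distance(gbest))
    ```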

  19. Mutations that Cause Human Disease: A Computational/Experimental Approach

    SciTech Connect

    Beernink, P; Barsky, D; Pesavento, B

    2006-01-11

    International genome sequencing projects have produced billions of nucleotides (letters) of DNA sequence data, including the complete genome sequences of 74 organisms. These genome sequences have created many new scientific opportunities, including the ability to identify sequence variations among individuals within a species. These genetic differences, which are known as single nucleotide polymorphisms (SNPs), are particularly important in understanding the genetic basis for disease susceptibility. Since the report of the complete human genome sequence, over two million human SNPs have been identified, including a large-scale comparison of an entire chromosome from twenty individuals. Of the protein-coding SNPs (cSNPs), approximately half lead to a single amino acid change in the encoded protein (non-synonymous coding SNPs). Most of these changes are functionally silent, while the remainder negatively impact the protein and sometimes cause human disease. To date, over 550 SNPs have been found to cause single-locus (monogenic) diseases and many others have been associated with polygenic diseases. SNPs have been linked to specific human diseases, including late-onset Parkinson disease, autism, rheumatoid arthritis and cancer. The ability to accurately predict the effects of these SNPs on protein function would represent a major advance toward understanding these diseases. To date, several attempts have been made to predict the effects of such mutations. The most successful of these is a computational approach called "Sorting Intolerant From Tolerant" (SIFT). This method uses sequence conservation among many similar proteins to predict which residues in a protein are functionally important. However, this method suffers from several limitations. First, a query sequence must have a sufficient number of relatives to infer sequence conservation. Second, this method does not make use of or provide any information on protein structure, which can be used to
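
    The conservation idea behind SIFT-style prediction can be illustrated compactly: score each alignment column by Shannon entropy, flagging invariant columns where substitutions are most likely deleterious. The alignment below is a made-up toy, and SIFT itself uses position-specific scoring rather than raw entropy.

    ```python
    # Sketch: per-column conservation scoring of a multiple sequence alignment.
    import math
    from collections import Counter

    alignment = [            # one homologous sequence per row (hypothetical)
        "MKTAYIAKQR",
        "MKTAHIAKQR",
        "MKSAYIGKQR",
        "MKTAYIAKHR",
    ]

    def column_entropy(col):
        counts = Counter(col)
        total = len(col)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    for pos in range(len(alignment[0])):
        col = [seq[pos] for seq in alignment]
        h = column_entropy(col)
        flag = "conserved -> substitution likely deleterious" if h == 0 else ""
        print(f"position {pos + 1}: entropy {h:.2f} {flag}")
    ```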

  20. A Soft Computing Approach to Kidney Diseases Evaluation.

    PubMed

    Neves, José; Martins, M Rosário; Vilhena, João; Neves, João; Gomes, Sabino; Abelha, António; Machado, José; Vicente, Henrique

    2015-10-01

    Renal failure means that one's kidneys have unexpectedly stopped functioning, i.e., once chronic disease is exposed, the presence or degree of kidney dysfunction and its progression must be assessed, and the underlying syndrome has to be diagnosed. Although the patient's history and physical examination may denote good practice, some key information has to be obtained from evaluation of the glomerular filtration rate and the analysis of serum biomarkers. Indeed, chronic kidney disease denotes abnormal kidney function and/or structure, and there is evidence that treatment may avoid or delay its progression, in part by reducing or preventing the development of some associated complications, namely hypertension, obesity, diabetes mellitus, and cardiovascular complications. Acute kidney injury appears abruptly, with a rapid deterioration of renal function, but is often reversible if it is recognized early and treated promptly. In both situations, i.e., acute kidney injury and chronic kidney disease, an early intervention can significantly improve the prognosis. The assessment of these pathologies is therefore mandatory, although it is hard to do with traditional methodologies and existing tools for problem solving. Hence, in this work, we focus on the development of a hybrid decision support system, in terms of its knowledge representation and reasoning procedures based on Logic Programming, that allows one to consider incomplete, unknown, and even contradictory information, complemented with an approach to computing centered on Artificial Neural Networks, in order to weigh the Degree-of-Confidence that one has in such a happening. The present study involved 558 patients with an average age of 51.7 years, and chronic kidney disease was observed in 175 cases. The dataset comprises twenty-four variables, grouped into five main categories. The proposed model showed a good performance in the diagnosis of chronic kidney disease, since the

  1. A computer code for multiphase all-speed transient flows in complex geometries. MAST version 1.0

    NASA Technical Reports Server (NTRS)

    Chen, C. P.; Jiang, Y.; Kim, Y. M.; Shang, H. M.

    1991-01-01

    The operation of the MAST code, which computes transient solutions to the multiphase flow equations applicable to all-speed flows, is described. Two-phase flows are formulated based on the Eulerian-Lagrangian scheme in which the continuous phase is described by the Navier-Stokes equations (or Reynolds equations for turbulent flows). The dispersed phase is formulated by a Lagrangian tracking scheme. The numerical solution algorithm utilized for fluid flows is a newly developed pressure-implicit algorithm based on the operator-splitting technique in generalized nonorthogonal coordinates. This operator split allows separate operation on each of the variable fields to handle pressure-velocity coupling. The resulting pressure correction equation has a hyperbolic character and is effective for Mach numbers ranging from the incompressible limit to supersonic flow regimes. The present code adopts a nonstaggered grid arrangement; thus, the velocity components and other dependent variables are collocated at the same grid points. A sequence of benchmark-quality problems, including incompressible, subsonic, transonic, supersonic and gas-droplet two-phase flows, as well as spray-combustion problems, was solved to demonstrate the robustness and accuracy of the present code.

  2. A new approach to tag design in dolphin telemetry: Computer simulations to minimise deleterious effects

    NASA Astrophysics Data System (ADS)

    Pavlov, V. V.; Wilson, R. P.; Lucke, K.

    2007-02-01

    Remote sensors and transmitters are powerful devices for studying cetaceans at sea. However, despite substantial progress in microelectronics and the miniaturisation of systems, dolphin tags are imperfectly designed; additional drag from tags increases swim costs, compromises swimming capacity and manoeuvrability, and leads to extra loads on the animal's tissue. We propose a new approach to tag design, elaborating basic principles and incorporating design stages to minimise device effects by using computer-aided design. Initially, the operational conditions of the device are defined by quantifying the shape, hydrodynamics and range of natural deformation of the dolphin body at the tag attachment site (such as close to the dorsal fin). Then, parametric models of both the dorsal fin and a tag are created using the derived data. The link between the parameters of the fin and the tag model allows redesign of tag models according to expected changes of fin geometry (differences in fin shape related to species, sex and age, and simulation of the bend of the fin during manoeuvres). A final virtual modelling stage uses iterative improvement of a tag model in a computational fluid dynamics (CFD) environment to enhance tag performance. This new method is considered a suitable tool for tag design before creation of the physical model of a tag and testing with conventional wind/water tunnel techniques. Ultimately, tag materials are selected to conform to the conditions identified by the modelling process and thus help create a physical model of a tag which minimises its impact on the animal carrier and thus increases the reliability and quality of the data obtained.

  3. Role of Soft Computing Approaches in HealthCare Domain: A Mini Review.

    PubMed

    Gambhir, Shalini; Malik, Sanjay Kumar; Kumar, Yugal

    2016-12-01

    In the present era, soft computing approaches play a vital role in solving different kinds of problems and provide promising solutions. Due to the popularity of soft computing approaches, they have also been applied to healthcare data for effectively diagnosing diseases and obtaining better results in comparison to traditional approaches. Soft computing approaches have the ability to adapt themselves to the problem domain. Another aspect is a good balance between the exploration and exploitation processes. These aspects make soft computing approaches powerful, reliable and efficient. The above-mentioned characteristics make soft computing approaches suitable and competent for healthcare data. The first objective of this review paper is to identify the various soft computing approaches used for diagnosing and predicting diseases. The second objective is to identify the various diseases to which these approaches are applied. The third objective is to categorize the soft computing approaches for clinical support systems. In the literature, it is found that a large number of soft computing approaches have been applied for effectively diagnosing and predicting diseases from healthcare data; some of these are particle swarm optimization, genetic algorithms, artificial neural networks, and support vector machines. A detailed discussion of these approaches is presented in the literature section. This work summarizes the various soft computing approaches used in the healthcare domain over the last decade. These approaches are categorized into five different categories based on methodology: classification-model-based systems, expert systems, fuzzy and neuro-fuzzy systems, rule-based systems, and case-based systems. Many techniques are discussed under the above categories, and all discussed techniques are also summarized in tables. This work also focuses on the accuracy rates of soft computing techniques, and tabular information is provided for

  4. A streamline splitting pore-network approach for computationally inexpensive and accurate simulation of transport in porous media

    SciTech Connect

    Mehmani, Yashar; Oostrom, Martinus; Balhoff, Matthew

    2014-03-20

    Several approaches have been developed in the literature for solving flow and transport at the pore-scale. Some authors use a direct modeling approach where the fundamental flow and transport equations are solved on the actual pore-space geometry. Such direct modeling, while very accurate, comes at a great computational cost. Network models are computationally more efficient because the pore-space morphology is approximated. Typically, a mixed cell method (MCM) is employed for solving the flow and transport system which assumes pore-level perfect mixing. This assumption is invalid at moderate to high Peclet regimes. In this work, a novel Eulerian perspective on modeling flow and transport at the pore-scale is developed. The new streamline splitting method (SSM) allows for circumventing the pore-level perfect mixing assumption, while maintaining the computational efficiency of pore-network models. SSM was verified with direct simulations and excellent matches were obtained against micromodel experiments across a wide range of pore-structure and fluid-flow parameters. The increase in the computational cost from MCM to SSM is shown to be minimal, while the accuracy of SSM is much higher than that of MCM and comparable to direct modeling approaches. Therefore, SSM can be regarded as an appropriate balance between incorporating detailed physics and controlling computational cost. The truly predictive capability of the model allows for the study of pore-level interactions of fluid flow and transport in different porous materials. In this paper, we apply SSM and MCM to study the effects of pore-level mixing on transverse dispersion in 3D disordered granular media.
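
    To make the pore-network setting concrete, the sketch below assembles and solves the linear pressure system for a tiny hypothetical network; this flow step is shared by MCM and SSM, which differ downstream in how solute is mixed or split among streamlines at pore junctions.

    ```python
    # Sketch: pressure solve on a toy pore network. Mass conservation at each
    # pore gives a linear system in pore pressures via throat conductances g.
    import numpy as np

    # Pores 0..3; throats as (i, j, conductance). Pore 0 = inlet, 3 = outlet.
    throats = [(0, 1, 2.0), (0, 2, 1.0), (1, 3, 1.5), (2, 3, 2.5), (1, 2, 0.5)]
    n = 4
    A = np.zeros((n, n))
    b = np.zeros(n)

    for i, j, g in throats:          # assemble sum_j g_ij (p_i - p_j) = 0
        A[i, i] += g; A[j, j] += g
        A[i, j] -= g; A[j, i] -= g

    # Dirichlet boundary pressures at the inlet and outlet pores.
    for pore, p_fixed in [(0, 1.0), (3, 0.0)]:
        A[pore, :] = 0.0
        A[pore, pore] = 1.0
        b[pore] = p_fixed

    p = np.linalg.solve(A, b)
    q = [(i, j, g * (p[i] - p[j])) for i, j, g in throats]   # throat fluxes
    print("pressures:", p)
    print("fluxes:", q)
    ```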

  5. COMPUTATIONAL TOXICOLOGY - OBJECTIVE 2: DEVELOPING APPROACHES FOR PRIORITIZING CHEMICALS FOR SUBSEQUENT SCREENING AND TESTING

    EPA Science Inventory

    One of the strategic objectives of the Computational Toxicology Program is to develop approaches for prioritizing chemicals for subsequent screening and testing. Approaches currently available for this process require extensive resources. Therefore, less costly and time-extensi...

  6. Constraining Viewing Geometries of Pulsars with Single-Peaked Gamma-ray Profiles Using a Multiwavelength Approach

    NASA Technical Reports Server (NTRS)

    Seyffert, A. S.; Venter, C.; Johnson, T. J.; Harding, A. K.

    2012-01-01

    Since the launch of the Large Area Telescope (LAT) on board the Fermi spacecraft in June 2008, the number of observed gamma-ray pulsars has increased dramatically. A large number of these are also observed at radio frequencies. Constraints on the viewing geometries of 5 of the 6 gamma-ray pulsars exhibiting single-peaked gamma-ray profiles were previously derived using high-quality radio polarization data [1]. We obtain independent constraints on the viewing geometries of all 6 by using a geometric emission code to model the Fermi LAT and radio light curves (LCs). We find fits for the magnetic inclination and observer angles by searching the solution space by eye. Our results are generally consistent with those previously obtained [1], although we do find small differences in some cases. We indicate how the gamma-ray and radio pulse shapes, as well as their relative phase lags, lead to constraints in the solution space. Values for the flux correction factor f(omega) corresponding to the fits are also derived (with errors).

  7. Securing Applications in Personal Computers: The Relay Race Approach

    DTIC Science & Technology

    1991-09-01

    Tanenbaum, A., Operating Systems: Design and Implementation, Prentice-Hall, Inc., 1987. Tanenbaum, A., Structured Computer Organization, Prentice-Hall, Inc., 1990. Zarger, C., "Is Your PC Secure?", Mechanical Engineering, pg. 57. Walker

  8. An HCI Approach to Computing in the Real World

    ERIC Educational Resources Information Center

    Yardi, Sarita; Krolikowski, Pamela; Marshall, Taneshia; Bruckman, Amy

    2008-01-01

    We describe the implementation of a six-week course to teach Human-Computer Interaction (HCI) to high school students. Our goal was to explore the potential of HCI in motivating students to pursue future studies in related computing fields. Participants in our course learned to make connections between the types of technology they use in their…

  9. The Computer Connection: Four Approaches to Microcomputer Laboratory Interfacing.

    ERIC Educational Resources Information Center

    Graef, Jean L.

    1983-01-01

    Four ways in which microcomputers can be turned into laboratory instruments are discussed. These include adding an analog/digital (A/D) converter on a printed circuit board, adding an external A/D converter using the computer's serial port, attaching transducers to the game paddle ports, or connecting an instrument to the computer. (JN)

  10. Neutron stimulated emission computed tomography: a Monte Carlo simulation approach.

    PubMed

    Sharma, A C; Harrawood, B P; Bender, J E; Tourassi, G D; Kapadia, A J

    2007-10-21

    A Monte Carlo simulation has been developed for neutron stimulated emission computed tomography (NSECT) using the GEANT4 toolkit. NSECT is a new approach to biomedical imaging that allows spectral analysis of the elements present within the sample. In NSECT, a beam of high-energy neutrons interrogates a sample and the nuclei in the sample are stimulated to an excited state by inelastic scattering of the neutrons. The characteristic gammas emitted by the excited nuclei are captured in a spectrometer to form multi-energy spectra. Currently, a tomographic image is formed using a collimated neutron beam to define the line integral paths for the tomographic projections. These projection data are reconstructed to form a representation of the distribution of individual elements in the sample. To facilitate the development of this technique, a Monte Carlo simulation model has been constructed from the GEANT4 toolkit. This simulation includes modeling of the neutron beam source and collimation, the samples, the neutron interactions within the samples, the emission of characteristic gammas, and the detection of these gammas in a Germanium crystal. In addition, the model allows the absorbed radiation dose to be calculated for internal components of the sample. NSECT presents challenges not typically addressed in Monte Carlo modeling of high-energy physics applications. In order to address issues critical to the clinical development of NSECT, this paper will describe the GEANT4 simulation environment and three separate simulations performed to accomplish three specific aims. First, comparison of a simulation to a tomographic experiment will verify the accuracy of both the gamma energy spectra produced and the positioning of the beam relative to the sample. Second, parametric analysis of simulations performed with different user-defined variables will determine the best way to effectively model low energy neutrons in tissue, which is a concern with the high hydrogen content in

  11. Common Geometry Module

    SciTech Connect

    Tautges, Timothy J.

    2005-01-01

    The Common Geometry Module (CGM) is a code library which provides geometry functionality used for mesh generation and other applications. This functionality includes that commonly found in solid modeling engines, like geometry creation, query and modification; CGM also includes capabilities not commonly found in solid modeling engines, like geometry decomposition tools and support for shared material interfaces. CGM is built upon the ACIS solid modeling engine, but also includes geometry capability developed beside and on top of ACIS. CGM can be used as-is to provide geometry functionality for codes needing this capability. However, CGM can also be extended using derived classes in C++, allowing the geometric model to serve as the basis for other applications, for example mesh generation. CGM is supported on Sun Solaris, SGI, HP, IBM, DEC, Linux and Windows NT platforms. CGM also includes support for loading ACIS models on parallel computers, using MPI-based communication. Future plans for CGM are to port it to different solid modeling engines, including Pro/Engineer or SolidWorks. CGM is being released into the public domain under an LGPL license; the ACIS-based engine is available to ACIS licensees on request.

  12. Molecular Geometry.

    ERIC Educational Resources Information Center

    Desseyn, H. O.; And Others

    1985-01-01

    Compares linear-nonlinear and planar-nonplanar geometry through the valence-shell electron pairs repulsion (V.S.E.P.R.), Mulliken-Walsh, and electrostatic force theories. Indicates that although the V.S.E.P.R. theory has more advantages for elementary courses, an explanation of the best features of the different theories offers students a better…

  13. Methodical Approaches to Teaching of Computer Modeling in Computer Science Course

    ERIC Educational Resources Information Center

    Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina

    2015-01-01

    The purpose of this study was to justify the technique for forming representations of modeling methodology in computer science lessons. The necessity of studying computer modeling is that current trends toward strengthening the general-educational and worldview functions of computer science define the necessity of additional research of the…

  14. A Practical and Theoretical Approach to Assessing Computer Attitudes: The Computer Attitudes Measure (CAM).

    ERIC Educational Resources Information Center

    Kay, Robin H.

    1989-01-01

    Describes study conducted at the University of Toronto that assessed the attitudes of student teachers toward computers by using a multicomponent model, the Computer Attitude Measure (CAM). Cognitive, affective, and behavioral attitudes are examined, and correlations of computer literacy, experience, and internal locus of control are discussed.…

  15. Reflections on John Monaghan's "Computer Algebra, Instrumentation, and the Anthropological Approach"

    ERIC Educational Resources Information Center

    Blume, Glen

    2007-01-01

    Reactions to John Monaghan's "Computer Algebra, Instrumentation and the Anthropological Approach" focus on a variety of issues related to the ergonomic approach (instrumentation) and anthropological approach to mathematical activity and practice. These include uses of the term technique; several possibilities for integration of the two approaches;…

  16. Fermion Interactions, Cosmological Constant and Space-Time Dimensionality in a Unified Approach Based on Affine Geometry

    NASA Astrophysics Data System (ADS)

    Capozziello, Salvatore; Cirilo-Lombardo, Diego Julio; Dorokhov, Alexander E.

    2014-11-01

    One of the main features of unified models based on affine geometries is that all possible interactions and fields arise naturally under the same standard. Here we consider, starting from the effective Lagrangian of the theory, the torsion-induced 4-fermion interaction and, in particular, how this interaction affects the cosmological term, supposing that a condensation occurs for quark fields during the quark-gluon/hadron phase transition in the early universe. We explicitly show that there is no parity-violating pseudo-scalar density dual to the curvature tensor (Holst term), and that the spinor-bilinear scalar density has no mixed couplings of A-V form. On the other hand, the space-time dimensionality cannot be constrained from multidimensional phenomenological models admitting torsion.

  17. NEW APPROACHES: Using a computer to graphically illustrate equipotential lines

    NASA Astrophysics Data System (ADS)

    Phongdara, Boonlua

    1998-09-01

    A simple mathematical model and computer program allow students to plot equipotential lines, for example for two terminals in a tank of water, in a way that is easier and faster but just as accurate as the traditional method.
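
    A minimal version of such a program, assuming the standard finite-difference treatment (not necessarily the author's exact scheme): relax Laplace's equation with two fixed-potential terminals and grounded tank walls, then contour the converged array.

    ```python
    # Sketch: Jacobi relaxation of Laplace's equation; equipotential lines are
    # contours of the converged potential array.
    import numpy as np

    n = 61
    V = np.zeros((n, n))
    fixed = np.zeros((n, n), dtype=bool)
    V[30, 15], fixed[30, 15] = 1.0, True     # +1 V terminal
    V[30, 45], fixed[30, 45] = -1.0, True    # -1 V terminal
    fixed[0, :] = fixed[-1, :] = fixed[:, 0] = fixed[:, -1] = True  # walls at 0 V

    for _ in range(5000):                    # Jacobi sweeps
        Vn = 0.25 * (np.roll(V, 1, 0) + np.roll(V, -1, 0)
                     + np.roll(V, 1, 1) + np.roll(V, -1, 1))
        V = np.where(fixed, V, Vn)           # keep terminal/wall values fixed

    # Equipotentials: e.g. plt.contour(V, levels=np.linspace(-0.9, 0.9, 9))
    print("potential midway between terminals:", V[30, 30])  # ~0 by symmetry
    ```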

  18. AVES: A Computer Cluster System approach for INTEGRAL Scientific Analysis

    NASA Astrophysics Data System (ADS)

    Federici, M.; Martino, B. L.; Natalucci, L.; Umbertini, P.

    The AVES computing system, based on a "cluster" architecture, is a fully integrated, low-cost computing facility dedicated to the archiving and analysis of INTEGRAL data. AVES is a modular system that uses the SLURM software resource manager and allows almost unlimited expandability (65,536 nodes and hundreds of thousands of processors); it is currently composed of 30 personal computers with quad-core CPUs able to reach a computing power of 300 gigaflops (300x10^9 floating-point operations per second), with 120 GB of RAM and 7.5 terabytes (TB) of storage memory in UFS configuration plus 6 TB for the users' area. AVES was designed and built to solve growing problems raised by the analysis of the large amount of data accumulated by the INTEGRAL mission (currently about 9 TB), which increases every year. The analysis software used is the OSA package, distributed by the ISDC in Geneva. This is a very complex package consisting of dozens of programs that cannot be converted to parallel computing. To overcome this limitation we developed a series of programs to distribute the analysis workload over the various nodes, making AVES automatically divide the analysis into N jobs sent to N cores. This solution thus produces a result similar to that obtained by a parallel computing configuration. In support of this we have developed tools that allow flexible use of the scientific software and quality control of on-line data storage. The AVES software package consists of about 50 specific programs. The overall computing speed, compared to that of a personal computer with a single processor, has thus been enhanced by up to a factor of 70.
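
    The workload-splitting idea is independent of SLURM specifics and can be sketched with Python's standard library alone; analyse() below is a dummy stand-in for one serial OSA-style analysis job, and the science-window names are hypothetical.

    ```python
    # Sketch: divide a long list of independent analysis units into N jobs,
    # one per core, so a serial package runs embarrassingly parallel.
    from multiprocessing import Pool

    def analyse(window):
        """Stand-in for one serial analysis job (assumption)."""
        return sum(ord(c) for c in window)      # dummy work

    windows = [f"scw_{i:06d}" for i in range(120)]
    n_cores = 8

    if __name__ == "__main__":
        with Pool(n_cores) as pool:
            results = pool.map(analyse, windows)   # N jobs over N cores
        print(len(results), "windows analysed")
    ```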

  19. An Object-Oriented Approach to Writing Computational Electromagnetics Codes

    NASA Technical Reports Server (NTRS)

    Zimmerman, Martin; Mallasch, Paul G.

    1996-01-01

    Presently, most computer software development in the Computational Electromagnetics (CEM) community employs the structured programming paradigm, particularly using the Fortran language. Other segments of the software community began switching to an Object-Oriented Programming (OOP) paradigm in recent years to help ease design and development of highly complex codes. This paper examines design of a time-domain numerical analysis CEM code using the OOP paradigm, comparing OOP code and structured programming code in terms of software maintenance, portability, flexibility, and speed.

  20. Computational challenges of structure-based approaches applied to HIV.

    PubMed

    Forli, Stefano; Olson, Arthur J

    2015-01-01

    Here, we review some of the opportunities and challenges that we face in computational modeling of HIV therapeutic targets and structural biology, both in terms of methodology development and structure-based drug design (SBDD). Computational methods have provided fundamental support to HIV research since the initial structural studies, helping to unravel details of HIV biology. Computational models have proved to be a powerful tool to analyze and understand the impact of mutations and to overcome their structural and functional influence in drug resistance. With the availability of structural data, in silico experiments have been instrumental in exploiting and improving interactions between drugs and viral targets, such as HIV protease, reverse transcriptase, and integrase. Issues such as viral target dynamics and mutational variability, as well as the role of water and estimates of binding free energy in characterizing ligand interactions, are areas of active computational research. Ever-increasing computational resources and theoretical and algorithmic advances have played a significant role in progress to date, and we envision a continually expanding role for computational methods in our understanding of HIV biology and SBDD in the future.

  1. THE FUTURE OF COMPUTER-BASED TOXICITY PREDICTION: MECHANISM-BASED MODELS VS. INFORMATION MINING APPROACHES

    EPA Science Inventory


    The Future of Computer-Based Toxicity Prediction:
    Mechanism-Based Models vs. Information Mining Approaches

    When we speak of computer-based toxicity prediction, we are generally referring to a broad array of approaches which rely primarily upon chemical structure ...

  2. New Approaches to Quantum Computing using Nuclear Magnetic Resonance Spectroscopy

    SciTech Connect

    Colvin, M; Krishnan, V V

    2003-02-07

    The power of a quantum computer (QC) relies on the fundamental concept of superposition in quantum mechanics, thus allowing an inherent large-scale parallelization of computation. In a QC, binary information embodied in a quantum system, such as the spin degrees of freedom of a spin-1/2 particle, forms the qubits (quantum mechanical bits), over which appropriate logical gates perform the computation. In classical computers, the basic unit of information is the bit, which can take a value of either 0 or 1. Bits are connected together by logic gates to form logic circuits to implement complex logical operations. The expansion of modern computers has been driven by the development of faster, smaller and cheaper logic gates. As the size of the logic gates becomes smaller, toward the level of atomic dimensions, the performance of such a system is no longer considered classical but is rather governed by quantum mechanics. Quantum computers offer the potentially superior prospect of solving computational problems that are intractable to classical computers, such as efficient database searches and cryptography. A variety of algorithms have been developed recently, most notably Shor's algorithm for factorizing large numbers into prime factors in polynomial time and Grover's quantum search algorithm. These algorithms were of only theoretical interest until recently, when several methods were proposed to build an experimental QC. These methods include trapped ions, cavity QED, coupled quantum dots, Josephson junctions, spin resonance transistors, linear optics and nuclear magnetic resonance. Nuclear magnetic resonance (NMR) is uniquely capable of constructing small QCs, and several algorithms have been implemented successfully. NMR-QC differs from other implementations in one important way: it is not a single QC, but a statistical ensemble of them. Thus, quantum computing based on NMR is considered ensemble quantum computing. In NMR quantum computing, the spins with
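
    As a concrete illustration of the algorithmic side (not an NMR simulation), a two-qubit Grover search can be written in a few lines of numpy; a single oracle-plus-diffusion iteration finds the marked state |11> with certainty.

    ```python
    # Sketch: Grover's search on two qubits by direct state-vector simulation.
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    H2 = np.kron(H, H)                                  # Hadamard on both qubits

    psi = H2 @ np.array([1, 0, 0, 0])                   # uniform superposition
    oracle = np.diag([1, 1, 1, -1])                     # flips phase of |11>
    s = np.full(4, 0.5)                                 # |s> amplitudes
    diffusion = 2 * np.outer(s, s) - np.eye(4)          # inversion about the mean

    psi = diffusion @ (oracle @ psi)
    print(np.abs(psi) ** 2)                             # -> [0, 0, 0, 1]
    ```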

  3. Target Detection Using Fractal Geometry

    NASA Technical Reports Server (NTRS)

    Fuller, J. Joseph

    1991-01-01

    The concepts and theory of fractal geometry were applied to the problem of segmenting a 256 x 256 pixel image so that manmade objects could be extracted from natural backgrounds. The two most important measurements necessary to extract these manmade objects were fractal dimension and lacunarity. Provision was made to pass the manmade portion to a lookup table for subsequent identification. A computer program was written to construct cloud backgrounds of fractal dimensions which were allowed to vary between 2.2 and 2.8. Images of three model space targets were combined with these backgrounds to provide a data set for testing the validity of the approach. Once the data set was constructed, computer programs were written to extract estimates of the fractal dimension and lacunarity on 4 x 4 pixel subsets of the image. It was shown that for clouds of fractal dimension 2.7 or less, appropriate thresholding on fractal dimension and lacunarity yielded a 64 x 64 edge-detected image with all or most of the cloud background removed. These images were enhanced by an erosion and dilation to provide the final image passed to the lookup table. While the ultimate goal was to pass the final image to a neural network for identification, this work shows the applicability of fractal geometry to the problems of image segmentation, edge detection and separating a target of interest from a natural background.
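
    The two measurements named above are easy to sketch on a binary patch: box-counting fractal dimension from the slope of log N(s) against log(1/s), and gliding-box lacunarity from the moments of box masses. The patch below is random noise, not the model cloud imagery used in the study.

    ```python
    # Sketch: box-counting dimension and gliding-box lacunarity of a patch.
    import numpy as np
    from numpy.lib.stride_tricks import sliding_window_view

    def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16)):
        counts = []
        for s in sizes:
            h, w = img.shape[0] // s * s, img.shape[1] // s * s
            blocks = img[:h, :w].reshape(h // s, s, w // s, s)
            counts.append((blocks.sum(axis=(1, 3)) > 0).sum())  # occupied boxes
        slope, _ = np.polyfit(np.log(1 / np.asarray(sizes)), np.log(counts), 1)
        return slope

    def lacunarity(img, box=4):
        """Gliding-box lacunarity: 1 + variance/mean^2 of box masses."""
        masses = sliding_window_view(img, (box, box)).sum(axis=(2, 3)).ravel()
        return masses.var() / masses.mean() ** 2 + 1.0

    rng = np.random.default_rng(2)
    patch = (rng.random((64, 64)) > 0.7).astype(float)   # hypothetical patch
    print("D ~", round(box_counting_dimension(patch), 2),
          " lacunarity ~", round(lacunarity(patch), 2))
    ```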

  4. Stereo Orthogonal Axonometric Perspective for the Teaching of Descriptive Geometry

    ERIC Educational Resources Information Center

    Méxas, José Geraldo Franco; Guedes, Karla Bastos; Tavares, Ronaldo da Silva

    2015-01-01

    Purpose: The purpose of this paper is to present the development of a software for stereo visualization of geometric solids, applied to the teaching/learning of Descriptive Geometry. Design/methodology/approach: The paper presents the traditional method commonly used in computer graphic stereoscopic vision (implemented in C language) and the…

  5. Solving the molecular distance geometry problem with inaccurate distance data

    PubMed Central

    2013-01-01

    We present a new iterative algorithm for the molecular distance geometry problem with inaccurate and sparse data, which is based on the solution of linear systems, maximum cliques, and the minimization of a nonlinear least-squares function. Computational results with real protein structures are presented in order to validate our approach. PMID:23901894

  6. Solving the molecular distance geometry problem with inaccurate distance data.

    PubMed

    Souza, Michael; Lavor, Carlile; Muritiba, Albert; Maculan, Nelson

    2013-01-01

    We present a new iterative algorithm for the molecular distance geometry problem with inaccurate and sparse data, which is based on the solution of linear systems, maximum cliques, and the minimization of a nonlinear least-squares function. Computational results with real protein structures are presented in order to validate our approach.
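
    One classical building block for such problems (not the authors' algorithm, which additionally handles sparse and inaccurate distances) is the recovery of coordinates from a complete, exact distance matrix by classical multidimensional scaling:

    ```python
    # Sketch: coordinates from an exact distance matrix via classical MDS.
    import numpy as np

    rng = np.random.default_rng(3)
    X_true = rng.random((10, 3)) * 10.0                   # hypothetical atoms
    D = np.linalg.norm(X_true[:, None] - X_true[None, :], axis=-1)

    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n                   # centring matrix
    B = -0.5 * J @ (D ** 2) @ J                           # Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:3]                         # top-3 eigenpairs
    X = V[:, idx] * np.sqrt(w[idx])                       # embedded coordinates

    # Distances are reproduced up to a rigid motion of the point set.
    D_rec = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    print("max distance error:", np.abs(D - D_rec).max())
    ```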

  7. Unit cell geometry of 3-D braided structures

    NASA Technical Reports Server (NTRS)

    Du, Guang-Wu; Ko, Frank K.

    1993-01-01

    The traditional approach used in modeling composites reinforced by three-dimensional (3-D) braids is to assume a simple unit cell geometry of a 3-D braided structure with known fiber volume fraction and orientation. In this article, we first examine 3-D braiding methods in the light of braid structures, followed by the development of geometric models for 3-D braids using a unit cell approach. The unit cell geometry of 3-D braids is identified, and the relationship of structural parameters, such as yarn orientation angle and fiber volume fraction, to the key processing parameters is established. The limiting geometry has been computed by establishing the point at which yarns jam against each other. Using this factor makes it possible to identify the complete range of allowable geometric arrangements for 3-D braided preforms. This identified unit cell geometry can be translated into mechanical models which relate the geometrical properties of fabric preforms to the mechanical responses of composite systems.

  8. A Computational Approach to Qualitative Analysis in Large Textual Datasets

    PubMed Central

    Evans, Michael S.

    2014-01-01

    In this paper I introduce computational techniques to extend qualitative analysis into the study of large textual datasets. I demonstrate these techniques by using probabilistic topic modeling to analyze a broad sample of 14,952 documents published in major American newspapers from 1980 through 2012. I show how computational data mining techniques can identify and evaluate the significance of qualitatively distinct subjects of discussion across a wide range of public discourse. I also show how examining large textual datasets with computational methods can overcome methodological limitations of conventional qualitative methods, such as how to measure the impact of particular cases on broader discourse, how to validate substantive inferences from small samples of textual data, and how to determine if identified cases are part of a consistent temporal pattern. PMID:24498398
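
    A minimal version of the topic-modeling step, using scikit-learn's LDA on a four-document stand-in corpus (the study's corpus and model settings are not reproduced here):

    ```python
    # Sketch: probabilistic topic modeling with latent Dirichlet allocation.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = [                      # stand-in for the newspaper corpus
        "science funding policy debate in congress",
        "genome research ethics and public opinion",
        "congress votes on science budget policy",
        "public debate over genome editing ethics",
    ]

    tf = CountVectorizer(stop_words="english")
    X = tf.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

    terms = tf.get_feature_names_out()
    for k, comp in enumerate(lda.components_):
        top = [terms[i] for i in comp.argsort()[::-1][:4]]
        print(f"topic {k}: {top}")
    ```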

  9. A user's guide for BREAKUP: A computer code for parallelizing the overset grid approach

    SciTech Connect

    Barnette, D.W.

    1998-04-01

    In this user's guide, details for running BREAKUP are discussed. BREAKUP allows the widely used overset grid method to be run in a parallel computer environment to achieve faster run times for computational field simulations over complex geometries. The overset grid method permits complex geometries to be divided into separate components. Each component is then gridded independently. The grids are computationally rejoined in a solver via interpolation coefficients used for grid-to-grid communications of boundary data. Overset grids have been in widespread use for many years on serial computers, and several well-known Navier-Stokes flow solvers have been extensively developed and validated to support their use. One drawback of serial overset grid methods has been the extensive compute time required to update flow solutions one grid at a time. Parallelizing the overset grid method overcomes this limitation by updating each grid or subgrid simultaneously. BREAKUP prepares overset grids for parallel processing by subdividing each overset grid into statically load-balanced subgrids. Two-dimensional examples with sample solutions, and three-dimensional examples, are presented.
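
    The static load-balancing idea can be sketched independently of BREAKUP's file formats: recursively bisect each grid along its longest index direction until every subgrid falls below a target cell count. The overlap and donor-cell bookkeeping that BREAKUP must also produce is omitted here.

    ```python
    # Sketch: recursive bisection of a structured grid into load-balanced
    # subgrids (interpolation/halo bookkeeping intentionally omitted).
    def split(grid_dims, max_cells):
        """grid_dims = (ni, nj, nk); returns a list of subgrid dimensions."""
        ni, nj, nk = grid_dims
        if ni * nj * nk <= max_cells:
            return [grid_dims]
        axis = max(range(3), key=lambda a: grid_dims[a])   # longest direction
        half = grid_dims[axis] // 2
        left, right = list(grid_dims), list(grid_dims)
        left[axis], right[axis] = half, grid_dims[axis] - half
        return split(tuple(left), max_cells) + split(tuple(right), max_cells)

    subgrids = split((120, 60, 40), max_cells=40000)
    print(len(subgrids), "subgrids:", [a * b * c for a, b, c in subgrids])
    ```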

  10. Galaxy: a comprehensive approach for supporting accessible, reproducible, and transparent computational research in the life sciences.

    PubMed

    Goecks, Jeremy; Nekrutenko, Anton; Taylor, James

    2010-01-01

    Increased reliance on computational approaches in the life sciences has revealed grave concerns about how accessible and reproducible computation-reliant results truly are. Galaxy (http://usegalaxy.org), an open web-based platform for genomic research, addresses these problems. Galaxy automatically tracks and manages data provenance and provides support for capturing the context and intent of computational methods. Galaxy Pages are interactive, web-based documents that provide users with a medium to communicate a complete computational analysis.

  11. A flexible, extendable, modular and computationally efficient approach to scattering-integral-based seismic full waveform inversion

    NASA Astrophysics Data System (ADS)

    Schumacher, F.; Friederich, W.; Lamara, S.

    2016-02-01

    We present a new conceptual approach to scattering-integral-based seismic full waveform inversion (FWI) that allows a flexible, extendable, modular and both computationally and storage-efficient numerical implementation. To achieve maximum modularity and extendability, interactions between the three fundamental steps carried out sequentially in each iteration of the inversion procedure, namely, solving the forward problem, computing waveform sensitivity kernels and deriving a model update, are kept at an absolute minimum and are implemented by dedicated interfaces. To realize storage efficiency and maximum flexibility, the spatial discretization of the inverted earth model is allowed to be completely independent of the spatial discretization employed by the forward solver. For computational efficiency reasons, the inversion is done in the frequency domain. The benefits of our approach are as follows: (1) Each of the three stages of an iteration is realized by a stand-alone software program. In this way, we avoid the monolithic, inflexible and hard-to-modify codes that have often been written for solving inverse problems. (2) The solution of the forward problem, required for kernel computation, can be obtained by any wave propagation modelling code, giving users maximum flexibility in choosing the forward modelling method. Both time-domain and frequency-domain approaches can be used. (3) Forward solvers typically demand spatial discretizations that are significantly denser than actually desired for the inverted model. Exploiting this fact by pre-integrating the kernels allows a dramatic reduction of disk space and makes kernel storage feasible. No assumptions are made on the spatial discretization scheme employed by the forward solver. (4) In addition, working in the frequency domain effectively reduces the amount of data, the number of kernels to be computed and the number of equations to be solved. (5) Updating the model by solving a large equation system can be
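
    Point (3), kernel pre-integration, reduces storage by integrating fine-grid sensitivities onto coarse inversion cells once. A 1-D toy sketch with a hypothetical kernel and cell layout:

    ```python
    # Sketch: integrate a sensitivity kernel from a dense forward-solver grid
    # onto coarse inversion cells, so only one value per cell is stored.
    import numpy as np

    nf = 300                                     # fine forward-grid points
    x = np.linspace(0.0, 30.0, nf, endpoint=False)
    dx = x[1] - x[0]
    kernel_fine = np.sin(x) * np.exp(-0.1 * x)   # hypothetical kernel values
    cell_of = (x // 3.0).astype(int)             # 10 coarse cells, 3 km wide

    kernel_coarse = np.zeros(cell_of.max() + 1)
    np.add.at(kernel_coarse, cell_of, kernel_fine * dx)   # integrate per cell
    print("stored values reduced from", nf, "to", kernel_coarse.size)
    ```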

  12. Control of fault plane geometry on the formation of a normal fault-related anticline: an experimental approach.

    PubMed

    Long, Wei; Li, Zhongquan; Li, Ying; Chen, Junliang; Li, Hongkui; Wan, Shuangshuang

    2017-12-01

    In one of the largest oil-gas fields in Daqing, China, the anticlines are important structures that hold natural gas. The origin of the symmetric anticlines, which have bends on both limbs, remains under debate. This is especially true in the case of the anticline in Xujiaweizi (XJWZ), which has recently been the focus of gas exploration. A compressive force introduced by a ramp/flat fault was suggested as its origin of formation; however, this is inconsistent with the reconstruction of the regional stress fields, which show an extensional environment. An alternative explanation suggests a normal fault-related fold under extensional stress. However, this mechanism has difficulty explaining the very localized, rather than widespread, development of the anticline along the proposed controlling normal fault. The well-developed bends on both limbs of the anticline are also very different from the typical roll-over anticline. Here, we conduct an experimental study showing that the very localized development of the bent-on-both-limbs anticline is controlled by the geometry of the underlying fault plane. A ramp/flat fault plane can introduce an anticline with bends on both limbs, while a smooth fault plane will develop a roll-over anticline with a bend on only one limb.

  13. Residue Geometry Networks: A Rigidity-Based Approach to the Amino Acid Network and Evolutionary Rate Analysis

    PubMed Central

    Fokas, Alexander S.; Cole, Daniel J.; Ahnert, Sebastian E.; Chin, Alex W.

    2016-01-01

    Amino acid networks (AANs) abstract the protein structure by recording the amino acid contacts and can provide insight into protein function. Herein, we describe a novel AAN construction technique that employs the rigidity analysis tool, FIRST, to build the AAN, which we refer to as the residue geometry network (RGN). We show that this new construction can be combined with network theory methods to include the effects of allowed conformal motions and local chemical environments. Importantly, this is done without the costly molecular dynamics simulations required by other AAN-related methods, which allows us to analyse large proteins and/or data sets. We have calculated the centrality of the residues belonging to 795 proteins. The results display a strong, negative correlation between residue centrality and the evolutionary rate. Furthermore, among residues with high closeness, those with low degree were particularly strongly conserved. Random walk simulations using the RGN were also successful in identifying allosteric residues in proteins involved in GPCR signalling. The dynamic function of these residues largely remains hidden in the traditional distance-cutoff construction technique. Despite being constructed from only the crystal structure, the results in this paper suggest that the RGN can identify residues that fulfil a dynamical function. PMID:27623708

  14. Residue Geometry Networks: A Rigidity-Based Approach to the Amino Acid Network and Evolutionary Rate Analysis

    NASA Astrophysics Data System (ADS)

    Fokas, Alexander S.; Cole, Daniel J.; Ahnert, Sebastian E.; Chin, Alex W.

    2016-09-01

    Amino acid networks (AANs) abstract the protein structure by recording the amino acid contacts and can provide insight into protein function. Herein, we describe a novel AAN construction technique that employs the rigidity analysis tool, FIRST, to build the AAN, which we refer to as the residue geometry network (RGN). We show that this new construction can be combined with network theory methods to include the effects of allowed conformal motions and local chemical environments. Importantly, this is done without the costly molecular dynamics simulations required by other AAN-related methods, which allows us to analyse large proteins and/or data sets. We have calculated the centrality of the residues belonging to 795 proteins. The results display a strong, negative correlation between residue centrality and the evolutionary rate. Furthermore, among residues with high closeness, those with low degree were particularly strongly conserved. Random walk simulations using the RGN were also successful in identifying allosteric residues in proteins involved in GPCR signalling. The dynamic function of these residues largely remains hidden in the traditional distance-cutoff construction technique. Despite being constructed from only the crystal structure, the results in this paper suggest that the RGN can identify residues that fulfil a dynamical function.
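
    The network-analysis step can be sketched with networkx, here using a plain distance-cutoff contact graph on synthetic coordinates (the RGN itself is built with the rigidity tool FIRST, which is exactly what distinguishes it from this baseline) and correlating closeness centrality against synthetic per-residue evolutionary rates:

    ```python
    # Sketch: residue contact network, closeness centrality, and correlation
    # with evolutionary rate. All data here are synthetic stand-ins.
    import networkx as nx
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(4)
    coords = rng.random((60, 3)) * 30.0              # fake C-alpha coordinates
    G = nx.Graph()
    G.add_nodes_from(range(len(coords)))
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            if np.linalg.norm(coords[i] - coords[j]) < 8.0:   # contact cutoff
                G.add_edge(i, j)

    cc = nx.closeness_centrality(G)
    closeness = np.array([cc[u] for u in G.nodes])
    rates = rng.random(len(coords))                  # fake evolutionary rates
    rho, p = spearmanr(closeness, rates)
    print(f"Spearman rho = {rho:.2f} (p = {p:.2g})")
    ```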

  15. A Computer-based Approach to Analyzing a Patient Pool.

    ERIC Educational Resources Information Center

    Wooten, Ruth K.

    1981-01-01

    A computer system for analyzing a patient pool used by the division of dental hygiene at Virginia Commonwealth University is presented as a model for evaluating patient profiles. Descriptions of the various patient parameters are provided, and guidelines are presented to assist institutions considering such a system. (Author/MLW)

  16. Statistical Learning of Phonetic Categories: Insights from a Computational Approach

    ERIC Educational Resources Information Center

    McMurray, Bob; Aslin, Richard N.; Toscano, Joseph C.

    2009-01-01

    Recent evidence (Maye, Werker & Gerken, 2002) suggests that statistical learning may be an important mechanism for the acquisition of phonetic categories in the infant's native language. We examined the sufficiency of this hypothesis and its implications for development by implementing a statistical learning mechanism in a computational model…
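
    The distributional-learning idea can be illustrated with a Gaussian mixture fitted by EM along a single phonetic cue; the voice-onset-time data below are synthetic, and the study's actual model differs.

    ```python
    # Sketch: unsupervised recovery of two phonetic categories from a bimodal
    # voice-onset-time (VOT) distribution via a Gaussian mixture model.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(5)
    vot = np.concatenate([rng.normal(0, 8, 300),      # voiced /b/-like tokens (ms)
                          rng.normal(50, 12, 300)])   # voiceless /p/-like tokens
    gmm = GaussianMixture(n_components=2, random_state=0).fit(vot.reshape(-1, 1))
    print("category means (ms):", sorted(gmm.means_.ravel().round(1)))
    ```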

  17. Computational Conceptual Change: An Explanation-Based Approach

    DTIC Science & Technology

    2012-06-01

    ...provide a consistent computational account of human mental models, (2) our psychologically plausible model of analogical generalization can learn these models from examples, and (3) conceptual...four simulations. We simulate conceptual change in the domains of astronomy, biology, and force dynamics, where examples of psychological conceptual...

  18. One Instructor's Approach to Computer Assisted Instruction in General Chemistry.

    ERIC Educational Resources Information Center

    DeLorenzo, Ronald

    1982-01-01

    Discusses advantages of using computer-assisted instruction in a college general chemistry course. Advantages include using programs which generate random equations with double arrows (equilibrium systems) or generate alkane structural formulas, asking for the correct IUPAC name of the structure. (Author/JN)

  19. Modeling civil violence: An agent-based computational approach

    PubMed Central

    Epstein, Joshua M.

    2002-01-01

    This article presents an agent-based computational model of civil violence. Two variants of the civil violence model are presented. In the first a central authority seeks to suppress decentralized rebellion. In the second a central authority seeks to suppress communal violence between two warring ethnic groups. PMID:11997450
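
    A heavily simplified sketch of the first variant, using Epstein-style grievance and net-risk rules but with global (all-see-all) vision instead of a lattice neighbourhood, and with arrest simply deactivating an agent for one step rather than imposing jail terms:

    ```python
    # Sketch: agents rebel when grievance minus perceived risk exceeds a
    # threshold; cops deter via the cop/active ratio (global-vision toy).
    import numpy as np

    rng = np.random.default_rng(6)
    N_AGENTS, N_COPS, THRESHOLD, K, LEGITIMACY = 500, 50, 0.1, 2.3, 0.6

    hardship = rng.random(N_AGENTS)
    risk_aversion = rng.random(N_AGENTS)
    grievance = hardship * (1.0 - LEGITIMACY)         # G = H * (1 - L)
    active = np.zeros(N_AGENTS, dtype=bool)

    for step in range(50):
        n_active = max(int(active.sum()), 1)
        p_arrest = 1.0 - np.exp(-K * N_COPS / n_active)   # perceived arrest risk
        net_risk = risk_aversion * p_arrest
        active = grievance - net_risk > THRESHOLD         # activation rule
        if active.any():                                  # one arrest per cop
            jailed = rng.choice(np.flatnonzero(active),
                                size=min(N_COPS, int(active.sum())),
                                replace=False)
            active[jailed] = False
        if step % 10 == 0:
            print(f"step {step}: {int(active.sum())} active rebels")
    ```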

  20. Linguistics, Computers, and the Language Teacher. A Communicative Approach.

    ERIC Educational Resources Information Center

    Underwood, John H.

    This analysis of the state of the art of computer programs and programming for language teaching has two parts. In the first part, an overview of the theory and practice of language teaching, Noam Chomsky's view of language, and the implications and problems of generative theory are presented. The theory behind the input model of language…

  1. Computational Modelling and Simulation Fostering New Approaches in Learning Probability

    ERIC Educational Resources Information Center

    Kuhn, Markus; Hoppe, Ulrich; Lingnau, Andreas; Wichmann, Astrid

    2006-01-01

    Discovery learning in mathematics in the domain of probability based on hands-on experiments is normally limited because of the difficulty in providing sufficient materials and data volume in terms of repetitions of the experiments. Our cooperative, computational modelling and simulation environment engages students and teachers in composing and…

  2. A "Service-Learning Approach" to Teaching Computer Graphics

    ERIC Educational Resources Information Center

    Hutzel, Karen

    2007-01-01

    The author taught a computer graphics course through a service-learning framework to undergraduate and graduate students in the spring of 2003 at Florida State University (FSU). The students in this course participated in learning a software program along with youths from a neighboring, low-income, primarily African-American community. Together,…

  3. A New Approach: Computer-Assisted Problem-Solving Systems

    ERIC Educational Resources Information Center

    Gok, Tolga

    2010-01-01

    Computer-assisted problem-solving systems are rapidly growing in educational use with the advent of the Internet. These systems allow students to do their homework and solve problems online with the help of programs such as Blackboard, WebAssign, and LON-CAPA. There are benefits and drawbacks to these systems. In this study, the…

  4. A Practical Computational Approach to Study Molecular Instability Using the Pseudo-Jahn-Teller Effect.

    PubMed

    García-Fernández, Pablo; Aramburu, Jose Antonio; Moreno, Miguel; Zlatar, Matija; Gruden-Pavlović, Maja

    2014-04-08

    Vibronic coupling theory shows that the cause of spontaneous instability in systems presenting a nondegenerate ground state is the so-called pseudo-Jahn-Teller effect, and thus its study can be extremely helpful for understanding the structure of many molecules. While this theory, based on the mixing of the ground and excited states under a distortion, has long been studied, there are two obscure points that we try to clarify in the present work. First, the operators involved in both the vibronic and nonvibronic parts of the force constant take into account only electron-nuclear and nuclear-nuclear interactions, apparently leaving electron-electron repulsion and the electrons' kinetic energy out of the chemical picture. Second, a fully quantitative computational appraisal of this effect has until now been problematic. Here, we present a reformulation of the pseudo-Jahn-Teller theory that explicitly shows the contributions of all operators in the molecular Hamiltonian and allows connecting the results obtained with this model to other chemical theories relating electron distribution and geometry. Moreover, we develop a practical approach based on Hartree-Fock and density functional theory that allows quantification of the pseudo-Jahn-Teller effect. We demonstrate the usefulness of our method by studying the pyramidal distortion in ammonia and its absence in borane, revealing the strong importance of the kinetic energy of the electrons in the lowest a2″ orbital in triggering this instability. The present tool opens a window for exploring in detail the actual microscopic origin of structural instabilities in molecules and solids.
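
    The textbook two-level pseudo-Jahn-Teller result, K = K0 - 2F^2/Delta, already captures the instability criterion discussed above and is easy to verify numerically; the parameter values below are illustrative, not the paper's computed ones.

    ```python
    # Sketch: ground-state curvature of a two-level pseudo-Jahn-Teller model;
    # it turns negative (instability) when 2F^2/Delta exceeds K0.
    import numpy as np

    def ground_energy(Q, K0, F, Delta):
        """Lower eigenvalue of the 2x2 vibronic Hamiltonian along distortion Q."""
        return 0.5 * K0 * Q**2 + Delta / 2 - np.sqrt((Delta / 2)**2 + (F * Q)**2)

    Q = np.linspace(-1, 1, 201)
    for K0, F, Delta in [(2.0, 0.5, 1.0),    # K = 2 - 0.5 > 0: stable minimum
                         (2.0, 1.5, 1.0)]:   # K = 2 - 4.5 < 0: distorts
        E = ground_energy(Q, K0, F, Delta)
        curvature = np.gradient(np.gradient(E, Q), Q)[100]   # d2E/dQ2 at Q = 0
        print(f"K0={K0}, F={F}, Delta={Delta}: curvature at Q=0 ~ {curvature:.2f}")
    ```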

  5. A comprehensive approach to decipher biological computation to achieve next generation high-performance exascale computing.

    SciTech Connect

    James, Conrad D.; Schiess, Adrian B.; Howell, Jamie; Baca, Michael J.; Partridge, L. Donald; Finnegan, Patrick Sean; Wolfley, Steven L.; Dagel, Daryl James; Spahn, Olga Blum; Harper, Jason C.; Pohl, Kenneth Roy; Mickel, Patrick R.; Lohn, Andrew; Marinella, Matthew

    2013-10-01

    The human brain (volume = 1200 cm^3) consumes 20 W and is capable of performing >10^16 operations/s. Current supercomputer technology has reached 10^15 operations/s, yet it requires 1500 m^3 and 3 MW, giving the brain a 10^12 advantage in operations/s/W/cm^3. Thus, to reach exascale computation, two achievements are required: 1) improved understanding of computation in biological tissue, and 2) a paradigm shift towards neuromorphic computing where hardware circuits mimic properties of neural tissue. To address 1), we will interrogate corticostriatal networks in mouse brain tissue slices, specifically with regard to their frequency filtering capabilities as a function of input stimulus. To address 2), we will instantiate biological computing characteristics such as multi-bit storage into hardware devices with future computational and memory applications. Resistive memory devices will be modeled, designed, and fabricated in the MESA facility in consultation with our internal and external collaborators.

  6. TOWARD HIGHLY SECURE AND AUTONOMIC COMPUTING SYSTEMS: A HIERARCHICAL APPROACH

    SciTech Connect

    Lee, Hsien-Hsin S

    2010-05-11

    The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting, introspective, with self-healing capability under the circumstances of improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) system-wide, unified introspection techniques for autonomic systems, (2) secure information-flow microarchitecture, (3) memory-centric security architecture, (4) authentication control and its implications for security, (5) digital rights management, and (6) microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.

  7. Computational approach to quantum encoder design for purity optimization

    SciTech Connect

    Yamamoto, Naoki; Fazel, Maryam

    2007-07-15

    In this paper, we address the problem of designing a quantum encoder that maximizes the minimum output purity of a given decohering channel, where the minimum is taken over all possible pure inputs. This problem is cast as a max-min optimization problem with a rank constraint on an appropriately defined matrix variable. The problem is computationally very hard because it is nonconvex with respect to both the objective function (output purity) and the rank constraint. Despite this difficulty, we provide a tractable computational algorithm that produces the exact optimal solution for a codespace of dimension 2. Moreover, this algorithm is easily extended to cover the general class of codespaces, in which case the solution is suboptimal in the sense that the suboptimized output purity serves as a lower bound on the exact optimal purity. The algorithm consists of a sequence of semidefinite programs and can be performed easily. Two typical quantum error channels are investigated to illustrate the effectiveness of our method.
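
    To fix ideas, the sketch below merely evaluates the figure of merit, the minimum output purity over sampled pure inputs, for one fixed two-qubit encoder and a dephasing channel; the paper's contribution is the optimization over encoders, which is not attempted here. All channel and encoder choices are hypothetical.

    ```python
    # Sketch: estimate min_psi Tr[E(|psi><psi|)^2] for a fixed encoder V and a
    # two-qubit dephasing channel given by its Kraus operators.
    import numpy as np

    rng = np.random.default_rng(7)

    # Encoder isometry V: 1 logical qubit -> 2 physical qubits.
    V = np.zeros((4, 2), dtype=complex)
    V[0, 0] = 1.0   # |0> -> |00>
    V[3, 1] = 1.0   # |1> -> |11>

    # Independent phase flips with probability p on each qubit.
    p = 0.1
    Z, I = np.diag([1.0, -1.0]), np.eye(2)
    kraus = [wa * wb * np.kron(a, b)
             for a, wa in [(I, np.sqrt(1 - p)), (Z, np.sqrt(p))]
             for b, wb in [(I, np.sqrt(1 - p)), (Z, np.sqrt(p))]]

    def output_purity(psi2):
        rho = np.outer(psi2, psi2.conj())
        out = sum(K @ rho @ K.conj().T for K in kraus)
        return np.real(np.trace(out @ out))

    purities = []
    for _ in range(2000):                     # sample random pure logical inputs
        c = rng.normal(size=2) + 1j * rng.normal(size=2)
        c /= np.linalg.norm(c)
        purities.append(output_purity(V @ c))
    print("estimated min output purity:", min(purities))
    ```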

  8. Probing Cosmic Infrared Sources: A Computer Modeling Approach

    DTIC Science & Technology

    1992-06-01

    Bits and Bytes.", NASA Goddard Space Right Center, Green Belt , Maryland (May 1990). 11. "Probing Infrared Sources by Computer Modeling.", review talk...is this expected from elementary theory, but many observations can be well described with this assumption ( Kuiper 1967; Kwok 1980; Sopka et al. 1985...Neugebauer, H. J. Habing, P. E. Clegg, & T. J. Chester (Washington: GPO) Kuiper , T. B. H. et al. 1976, Api, 204, 408 Kwan, J., Scoville, N. 1976, Api, 209, 102

  9. A User Modelling Approach for Computer-Based Critiquing

    DTIC Science & Technology

    1990-01-01

    9.1.2 Explicit Acquisition Methods; 9.1.3 Tutoring-based Methods; 9.1.4 Statistical Analysis of User's...accomplish the second process are the subject of this research. Cooperative problem solving systems assume that the third process is inherent in the...process of human-computer interaction. The second class of models above are psychological models developed by and for the analysis of human behavior

  10. Modeling Cu2+-Aβ complexes from computational approaches

    NASA Astrophysics Data System (ADS)

    Alí-Torres, Jorge; Mirats, Andrea; Maréchal, Jean-Didier; Rodríguez-Santiago, Luis; Sodupe, Mariona

    2015-09-01

    Amyloid plaque formation and oxidative stress are two key events in the pathology of Alzheimer's disease (AD), in which metal cations have been shown to play an important role. In particular, the interaction of the redox-active Cu2+ metal cation with Aβ has been found to interfere with amyloid aggregation and to lead to reactive oxygen species (ROS). A detailed knowledge of the electronic and molecular structure of Cu2+-Aβ complexes is thus important for a better understanding of the role of these complexes in the development and progression of AD. The computational treatment of these systems requires a combination of several available computational methodologies, because two fundamental aspects have to be addressed: the metal coordination sphere and the conformation adopted by the peptide upon copper binding. In this paper we review the main computational strategies used to deal with Cu2+-Aβ coordination and to build plausible Cu2+-Aβ models that will afterwards allow the determination of physicochemical properties of interest, such as their redox potential.

  11. Computational Approaches to Viral Evolution and Rational Vaccine Design

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Tanmoy

    2006-10-01

    Viral pandemics, including HIV, are a major health concern across the world. Experimental techniques available today have uncovered a great wealth of information about how these viruses infect, grow, and cause disease, as well as how our body attempts to defend itself against them. Nevertheless, due to the high variability and fast evolution of many of these viruses, the traditional method of developing vaccines by presenting a heuristically chosen strain to the body fails, and an effective intervention strategy still eludes us. A large amount of carefully curated genomic data on a number of these viruses is now available, often annotated with disease and immunological context. The availability of parallel computers has made it possible to carry out a systematic analysis of these data within an evolutionary framework. I will describe, as an example, how computations on such data have allowed us to understand the origins and diversification of HIV, the causative agent of AIDS. On the practical side, computations on the same data are now being used to inform the choice or design of optimal vaccine strains.

  12. Worldline approach for numerical computation of electromagnetic Casimir energies: Scalar field coupled to magnetodielectric media

    NASA Astrophysics Data System (ADS)

    Mackrory, Jonathan B.; Bhattacharya, Tanmoy; Steck, Daniel A.

    2016-10-01

    We present a worldline method for the calculation of Casimir energies for scalar fields coupled to magnetodielectric media. The scalar model we consider may be applied in arbitrary geometries, and it corresponds exactly to one polarization of the electromagnetic field in planar layered media. Starting from the field theory for electromagnetism, we work with the two decoupled polarizations in planar media and develop worldline path integrals, which represent the two polarizations separately, for computing both Casimir and Casimir-Polder potentials. We then show analytically that the path integrals for the transverse-electric polarization coupled to a dielectric medium converge to the proper solutions in certain special cases, including the Casimir-Polder potential of an atom near a planar interface, and the Casimir energy due to two planar interfaces. We also evaluate the path integrals numerically via Monte Carlo path-averaging for these cases, studying the convergence and performance of the resulting computational techniques. While these scalar methods are only exact in particular geometries, they may serve as an approximation for Casimir energies for the vector electromagnetic field in other geometries.
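
    The numerical core of worldline methods is Monte Carlo averaging of a geometry-dependent functional over an ensemble of closed Brownian paths. The sketch below is a simplified stand-in rather than the authors' scheme: it only generates unit Brownian bridges and estimates how often a loop of a given scale touches two parallel planar interfaces. The separation, scale, and sample counts are invented, and the full Casimir energy would still require the weighted integrals over loop scale and position described in the paper.

      import numpy as np

      rng = np.random.default_rng(1)

      def brownian_bridge(n_steps):
          """Closed unit Brownian bridge in the coordinate normal to the plates."""
          steps = rng.standard_normal(n_steps) / np.sqrt(n_steps)
          w = np.cumsum(steps)
          t = np.arange(1, n_steps + 1) / n_steps
          return w - t * w[-1]                  # pin the path back to its start

      # Fraction of loops of scale sqrt(T), centered at z between plates at
      # 0 and a, that intersect both plates -- the averaged indicator.
      a, z, T = 1.0, 0.5, 0.8                   # illustrative values
      n_loops, n_steps = 20000, 256
      hits = 0
      for _ in range(n_loops):
          loop = z + np.sqrt(T) * brownian_bridge(n_steps)
          if loop.min() <= 0.0 and loop.max() >= a:
              hits += 1
      print("both-plate intersection probability:", hits / n_loops)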

  13. Computational Approaches to RNA Structure Prediction, Analysis and Design

    PubMed Central

    Laing, Christian; Schlick, Tamar

    2011-01-01

    RNA molecules are important cellular components involved in many fundamental biological processes. Understanding the mechanisms behind their functions requires RNA tertiary structure knowledge. While modeling approaches for the study of RNA structures and dynamics lag behind efforts in protein folding, much progress has been achieved in the past two years. Here, we review recent advances in RNA folding algorithms, RNA tertiary motif discovery, applications of graph theory approaches to RNA structure and function, and in silico generation of RNA sequence pools for aptamer design. Advances within each area can be combined to impact many problems in RNA structure and function. PMID:21514143

  14. Computational approach for calculating bound states in quantum field theory

    NASA Astrophysics Data System (ADS)

    Lv, Q. Z.; Norris, S.; Brennan, R.; Stefanovich, E.; Su, Q.; Grobe, R.

    2016-09-01

    We propose a nonperturbative approach to calculate bound-state energies and wave functions for quantum field theoretical models. It is based on the direct diagonalization of the corresponding quantum field theoretical Hamiltonian in an effectively discretized and truncated Hilbert space. We illustrate this approach for a Yukawa-like interaction between fermions and bosons in one spatial dimension and show where it agrees with the traditional method based on the potential picture and where it deviates due to recoil and radiative corrections. This method permits us also to obtain some insight into the spatial characteristics of the distribution of the fermions in the ground state, such as the bremsstrahlung-induced widening.
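
    The 'discretize, truncate, diagonalize' strategy is easy to demonstrate on a much smaller problem than the authors' fermion-boson field theory. The sketch below is a minimal single-particle analogue with an invented short-ranged attractive well: it builds a finite-difference Hamiltonian on a grid and diagonalizes it directly.

      import numpy as np

      # Single particle in a 1D attractive well (hbar = m = 1); not the
      # authors' Fock-space Hamiltonian, but the same numerical pattern.
      N, L = 400, 40.0                          # grid size and box length (illustrative)
      x = np.linspace(-L / 2, L / 2, N)
      dx = x[1] - x[0]
      V = -2.0 * np.exp(-np.abs(x))             # invented short-ranged potential

      main = 1.0 / dx**2 + V                    # -psi''/2 by central differences
      off = -0.5 / dx**2 * np.ones(N - 1)
      H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

      E, psi = np.linalg.eigh(H)                # direct diagonalization of truncated H
      print("lowest energies:", E[:3])          # negative entries are bound states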

  15. Workflow Scheduling in Grid Computing Environment using a Hybrid GAACO Approach

    NASA Astrophysics Data System (ADS)

    Sathish, Kuppani; RamaMohan Reddy, A.

    2016-06-01

    In recent trends, grid computing is one of the emerging areas in computing, supporting parallel and distributed environments. A central problem in grid computing is the scheduling of workflows according to user specifications, a challenging task that also impacts performance. This paper proposes a hybrid GAACO approach, a combination of a Genetic Algorithm and an Ant Colony Optimization algorithm, which provides different types of scheduling heuristics for the grid environment. The main objective of this approach is to satisfy all the defined constraints and user parameters.
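
    The paper does not spell out its operators in this record, so the sketch below shows only the genetic-algorithm half of such a hybrid on an invented problem: chromosomes encode a task-to-resource mapping and fitness is the makespan. Population size, rates, and costs are all illustrative.

      import random

      random.seed(42)
      costs = [[random.uniform(1, 10) for _ in range(4)] for _ in range(12)]

      def makespan(assign):                     # fitness: finish time of busiest resource
          load = [0.0] * 4
          for task, res in enumerate(assign):
              load[res] += costs[task][res]
          return max(load)

      def mutate(assign, rate=0.1):             # random reassignment of tasks
          return [random.randrange(4) if random.random() < rate else r for r in assign]

      def crossover(a, b):                      # one-point crossover
          cut = random.randrange(1, len(a))
          return a[:cut] + b[cut:]

      pop = [[random.randrange(4) for _ in range(12)] for _ in range(30)]
      for _ in range(100):
          pop.sort(key=makespan)
          elite = pop[:10]
          pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                         for _ in range(20)]
      print("best makespan:", makespan(min(pop, key=makespan)))

    In a GAACO-style hybrid, the uniform random choices above would instead be biased by ant-colony pheromone trails.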

  16. Workflow Scheduling in Grid Computing Environment using a Hybrid GAACO Approach

    NASA Astrophysics Data System (ADS)

    Sathish, Kuppani; RamaMohan Reddy, A.

    2017-02-01

    In recent trends, grid computing is one of the emerging areas in computing, supporting parallel and distributed environments. A central problem in grid computing is the scheduling of workflows according to user specifications, a challenging task that also impacts performance. This paper proposes a hybrid GAACO approach, a combination of a Genetic Algorithm and an Ant Colony Optimization algorithm, which provides different types of scheduling heuristics for the grid environment. The main objective of this approach is to satisfy all the defined constraints and user parameters.

  17. A Simple and Efficient Computational Approach to Chafed Cable Time-Domain Reflectometry Signature Prediction

    NASA Technical Reports Server (NTRS)

    Kowalski, Marc Edward

    2009-01-01

    A method for the prediction of time-domain signatures of chafed coaxial cables is presented. The method is quasi-static in nature, and is thus efficient enough to be included in inference and inversion routines. Unlike previous models proposed, no restriction on the geometry or size of the chafe is required in the present approach. The model is validated and its speed is illustrated via comparison to simulations from a commercial, three-dimensional electromagnetic simulator.

  18. Asynchronous event-based hebbian epipolar geometry.

    PubMed

    Benosman, Ryad; Ieng, Sio-Hoï; Rogister, Paul; Posch, Christoph

    2011-11-01

    Epipolar geometry, the cornerstone of perspective stereo vision, has been studied extensively since the advent of computer vision. Establishing such a geometric constraint is of primary importance, as it allows the recovery of the 3-D structure of scenes. Estimating the epipolar constraints of nonperspective stereo is difficult, as they can no longer be defined in the usual way because of the complexity of the sensor geometry. This paper will show that these limitations are, to some extent, a consequence of the static image frames commonly used in vision. The conventional frame-based approach suffers from a lack of the dynamics present in natural scenes. We introduce the use of neuromorphic event-based--rather than frame-based--vision sensors for perspective stereo vision. This type of sensor uses the dimension of time as the main conveyor of information. In this paper, we present a model for asynchronous event-based vision, which is then used to derive a general new concept of epipolar geometry linked to the temporal activation of pixels. Practical experiments demonstrate the validity of the approach, solving the problem of estimating the fundamental matrix applied, in a first stage, to classic perspective vision and then to more general cameras. Furthermore, this paper shows that the properties of event-based vision sensors allow the exploration of not-yet-defined geometric relationships. Finally, we provide a definition of general epipolar geometry deployable to almost any visual sensor.
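
    For the frame-based baseline that the event-based formulation generalizes, fundamental-matrix estimation is classical. The sketch below is the standard eight-point algorithm run on synthetic correspondences (cameras and points invented), not the event-based estimator of the paper.

      import numpy as np

      def eight_point(p1, p2):
          """Estimate F from >= 8 correspondences p1[i] <-> p2[i] so that
          x2^T F x1 = 0; coordinates here are O(1), so Hartley normalization
          is omitted for brevity."""
          A = np.array([[x2 * x1, x2 * y1, x2, y2 * x1, y2 * y1, y2, x1, y1, 1.0]
                        for (x1, y1), (x2, y2) in zip(p1, p2)])
          _, _, Vt = np.linalg.svd(A)
          F = Vt[-1].reshape(3, 3)
          U, S, Vt = np.linalg.svd(F)           # enforce the rank-2 constraint
          S[2] = 0.0
          return U @ np.diag(S) @ Vt

      rng = np.random.default_rng(0)
      X = rng.uniform(-1, 1, (20, 3)) + [0.0, 0.0, 5.0]    # points in front of cameras
      h = np.hstack([X, np.ones((20, 1))])
      P1 = np.hstack([np.eye(3), np.zeros((3, 1))])        # canonical camera
      P2 = np.hstack([np.eye(3), [[1.0], [0.0], [0.0]]])   # translated camera
      x1 = h @ P1.T; x1 = x1[:, :2] / x1[:, 2:]
      x2 = h @ P2.T; x2 = x2[:, :2] / x2[:, 2:]
      F = eight_point(x1, x2)
      errs = [abs(np.array([u2, v2, 1.0]) @ F @ np.array([u1, v1, 1.0]))
              for (u1, v1), (u2, v2) in zip(x1, x2)]
      print("max epipolar residual:", max(errs))           # ~0 for noise-free data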

  19. A contact detection algorithm for deformable tetrahedral geometries based on a novel approach for general simplices used in the discrete element method

    NASA Astrophysics Data System (ADS)

    Stühler, Sven; Fleissner, Florian; Eberhard, Peter

    2016-11-01

    We present an extended particle model for the discrete element method that is tetrahedral in shape and capable of describing deformations. The deformations of the tetrahedral particles require a framework interrelating the particle strains and the resulting stresses; hence, adaptations from the finite element method were used. This links the two methods and allows material and simulation parameters to be described adequately and separately in each scope. Due to the complexity arising from the non-spherical tetrahedral geometry, all possible contact combinations of vertices, edges, and surfaces must be considered by the contact detection algorithm, and the deformations of the particles make the contact evaluation even more challenging. Therefore, a robust contact detection algorithm based on an optimization approach that exploits temporal coherence is presented. This algorithm is suitable for general simplices in R^n. An evaluation of the robustness of this algorithm is performed using a numerical example. In order to create complex geometries, bonds between these deformable particles are introduced. This coupling via the tetrahedra faces allows the simulation of bonded deformable bodies composed of several particles. Numerical examples are presented and validated against results obtained from the same simulation setup modeled with the finite element method. The intention of using these bonds is to be able to model fracture and material failure; therefore, the bonds between the particles are not lasting and feature a release mechanism based on a predefined criterion.

  20. Positive approach: Implications for the relation between number theory and geometry, including connection to Santilli mathematics, from Fibonacci reconstitution of natural numbers and of prime numbers

    NASA Astrophysics Data System (ADS)

    Johansen, Stein E.

    2014-12-01

    The paper recapitulates some key elements of previously published results concerning the exact and complete reconstitution of the field of natural numbers, both as ordinal and as cardinal numbers, from systematic unfoldment of the Fibonacci algorithm. By this, natural numbers emerge as Fibonacci "atoms" and "molecules" consistent with the notion of Zeckendorf sums. Here, the subset of prime numbers appears not as the primary numbers, but as an epistructure from a deeper Fibonacci constitution, and is thus targeted from a "positive approach". In the Fibonacci reconstitution of number theory, natural numbers show a double geometrical aspect: partly as extension in space and partly as position in a successive structuring of space. More specifically, the natural numbers are shown to be distributed by a concise 5:3 code structured from the Fibonacci algorithm via Pascal's triangle. The paper discusses possible implications for the more general relation between number theory and geometry, as well as more specifically in relation to hadronic mathematics, initiated by R.M. Santilli, and also briefly to some other recent science linking number theory more directly to geometry and natural systems.
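
    The Zeckendorf sums mentioned above are simple to make concrete: every natural number is uniquely a sum of non-consecutive Fibonacci numbers, found greedily. The following sketch illustrates the representation only; it is not the paper's reconstitution machinery.

      def zeckendorf(n):
          """Greedy Zeckendorf decomposition of n >= 1 into a sum of
          non-consecutive Fibonacci numbers (always unique)."""
          fibs = [1, 2]
          while fibs[-1] < n:
              fibs.append(fibs[-1] + fibs[-2])
          parts = []
          for f in reversed(fibs):
              if f <= n:
                  parts.append(f)
                  n -= f
          return parts

      print(zeckendorf(100))                    # [89, 8, 3]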

  1. Positive approach: Implications for the relation between number theory and geometry, including connection to Santilli mathematics, from Fibonacci reconstitution of natural numbers and of prime numbers

    SciTech Connect

    Johansen, Stein E.

    2014-12-10

    The paper recapitulates some key elements of previously published results concerning the exact and complete reconstitution of the field of natural numbers, both as ordinal and as cardinal numbers, from systematic unfoldment of the Fibonacci algorithm. By this, natural numbers emerge as Fibonacci 'atoms' and 'molecules' consistent with the notion of Zeckendorf sums. Here, the subset of prime numbers appears not as the primary numbers, but as an epistructure from a deeper Fibonacci constitution, and is thus targeted from a 'positive approach'. In the Fibonacci reconstitution of number theory, natural numbers show a double geometrical aspect: partly as extension in space and partly as position in a successive structuring of space. More specifically, the natural numbers are shown to be distributed by a concise 5:3 code structured from the Fibonacci algorithm via Pascal's triangle. The paper discusses possible implications for the more general relation between number theory and geometry, as well as more specifically in relation to hadronic mathematics, initiated by R.M. Santilli, and also briefly to some other recent science linking number theory more directly to geometry and natural systems.

  2. Freezing in confined geometries

    NASA Technical Reports Server (NTRS)

    Sokol, P. E.; Ma, W. J.; Herwig, K. W.; Snow, W. M.; Wang, Y.; Koplik, Joel; Banavar, Jayanth R.

    1992-01-01

    Results of detailed structural studies, using elastic neutron scattering, of the freezing of liquid O2 and D2 in porous vycor glass, are presented. The experimental studies have been complemented by computer simulations of the dynamics of freezing of a Lennard-Jones liquid in narrow channels bounded by molecular walls. Results point to a new simple physical interpretation of freezing in confined geometries.

  3. Use of CAD Geometry in MDO

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    1996-01-01

    The purpose of this paper is to discuss the use of Computer-Aided Design (CAD) geometry in a Multi-Disciplinary Design Optimization (MDO) environment. Two techniques are presented to facilitate the use of CAD geometry by different disciplines, such as Computational Fluid Dynamics (CFD) and Computational Structural Mechanics (CSM). One method is to transfer the load from a CFD grid to a CSM grid. The second method is to update the CAD geometry for CSM deflection.

  4. A computer simulation approach to measurement of human control strategy

    NASA Technical Reports Server (NTRS)

    Green, J.; Davenport, E. L.; Engler, H. F.; Sears, W. E., III

    1982-01-01

    Human control strategy is measured through use of a psychologically-based computer simulation which reflects a broader theory of control behavior. The simulation is called the human operator performance emulator, or HOPE. HOPE was designed to emulate control learning in a one-dimensional preview tracking task and to measure control strategy in that setting. When given a numerical representation of a track and information about current position in relation to that track, HOPE generates positions for a stick controlling the cursor to be moved along the track. In other words, HOPE generates control stick behavior corresponding to that which might be used by a person learning preview tracking.

  5. A complex systems approach to computational molecular biology

    SciTech Connect

    Lapedes, A.

    1993-09-01

    We report on the continuing research program at the Santa Fe Institute that applies complex systems methodology to computational molecular biology. Two aspects stressed here are the use of co-evolving adaptive neural networks for determining predictable protein structure classifications, and the use of information theory to elucidate protein structure and function. A "snapshot" of the current state of research in these two topics is presented, representing the present state of two major research thrusts in the program of Genetic Data and Sequence Analysis at the Santa Fe Institute.

  6. NEW APPROACHES: Physics with a car headlamp and a computer

    NASA Astrophysics Data System (ADS)

    Cooper, Ian

    1997-05-01

    An experiment suitable for high school students or undergraduates uses a car headlamp. With the use of a spreadsheet program, extensive data manipulation becomes a simple task, enabling students to answer relevant questions as opposed to just verifying a well-known law. By using the computer in this way, students can be given an awareness of many mathematical techniques used in data analysis not otherwise possible because of time constraints or students' lack of knowledge. Templates of varying degrees of complexity can be used to cater for different groups of students.

  7. Computational Approaches to Enhance Nanosafety and Advance Nanomedicine

    NASA Astrophysics Data System (ADS)

    Mendoza, Eduardo R.

    With the increasing use of nanoparticles in food processing, filtration/purification and consumer products, as well as the huge potential of their use in nanomedicine, a quantitative understanding of the effects of nanoparticle uptake and transport is needed. We provide examples of novel methods for modeling complex bio-nano interactions which are based on stochastic process algebras. Since model construction presumes sufficient availability of experimental data, recent developments in "nanoinformatics", an emerging discipline analogous to bioinformatics, in building an accessible information infrastructure are subsequently discussed. Both computational areas offer opportunities for Filipinos to engage in collaborative, cutting edge research in this impactful field.

  8. Ultrasonic ray models for complex geometries

    NASA Astrophysics Data System (ADS)

    Schumm, A.

    2000-05-01

    Computer Aided Design techniques have become an inherent part of many industrial applications and are also gaining popularity in Nondestructive Testing. In sound field calculations, CAD representations can address one of the generic problems in ultrasonic modeling: wave propagation in complex geometries. Ray tracing codes were the first to take account of the geometry, providing qualitative information on beam propagation, such as geometrical echoes, multiple sound paths and possible conversions between wave modes. The forward ray tracing approach is intuitive and straightforward and can evolve towards a more quantitative code if transmission, divergence and polarization information is added. If used to evaluate the impulse response of a given geometry, an approximated time-dependent received signal can be obtained after convolution with the excitation signal. A more accurate reconstruction of a sound field after interaction with a geometrical interface according to ray theory requires inverse (or Fermat) ray tracing to obtain the contribution of each elementary point source to the field at a given observation point. The resulting field of a finite transducer can then be obtained after integration over all point sources. While conceptually close to classical ray tracing, this approach puts more stringent requirements on the CAD representation employed and is more difficult to extend towards multiple interfaces. In this communication we present examples of both approaches. In a prospective step, the link between the two ray techniques is shown, and we illustrate how a combination of both contributes to the solution of an industrial problem.

  9. Cognitive control in majority search: a computational modeling approach.

    PubMed

    Wang, Hongbin; Liu, Xun; Fan, Jin

    2011-01-01

    Despite the importance of cognitive control in many cognitive tasks involving uncertainty, the computational mechanisms of cognitive control in response to uncertainty remain unclear. In this study, we develop biologically realistic neural network models to investigate the instantiation of cognitive control in a majority function task, where one determines the category to which the majority of items in a group belong. Two models are constructed, both of which include the same set of modules representing task-relevant brain functions and share the same model structure. However, with a critical change of a model parameter setting, the two models implement two different underlying algorithms: one for grouping search (where a subgroup of items are sampled and re-sampled until a congruent sample is found) and the other for self-terminating search (where the items are scanned and counted one-by-one until the majority is decided). The two algorithms hold distinct implications for the involvement of cognitive control. The modeling results show that while both models are able to perform the task, the grouping search model fit the human data better than the self-terminating search model. An examination of the dynamics underlying model performance reveals how cognitive control might be instantiated in the brain for computing the majority function.

  10. A uniform approach for programming distributed heterogeneous computing systems

    PubMed Central

    Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas

    2014-01-01

    Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater’s performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations. PMID:25844015

  11. A uniform approach for programming distributed heterogeneous computing systems.

    PubMed

    Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas

    2014-12-01

    Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater's performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations.

  12. Computational approaches for inferring the functions of intrinsically disordered proteins

    PubMed Central

    Varadi, Mihaly; Vranken, Wim; Guharoy, Mainak; Tompa, Peter

    2015-01-01

    Intrinsically disordered proteins (IDPs) are ubiquitously involved in cellular processes and often implicated in human pathological conditions. The critical biological roles of these proteins, despite not adopting a well-defined fold, encouraged structural biologists to revisit their views on the protein structure-function paradigm. Unfortunately, investigating the characteristics and describing the structural behavior of IDPs is far from trivial, and inferring the function(s) of a disordered protein region remains a major challenge. Computational methods have proven particularly relevant for studying IDPs: on the sequence level their dependence on distinct characteristics determined by the local amino acid context makes sequence-based prediction algorithms viable and reliable tools for large scale analyses, while on the structure level the in silico integration of fundamentally different experimental data types is essential to describe the behavior of a flexible protein chain. Here, we offer an overview of the latest developments and computational techniques that aim to uncover how protein function is connected to intrinsic disorder. PMID:26301226

  13. Cognitive Control in Majority Search: A Computational Modeling Approach

    PubMed Central

    Wang, Hongbin; Liu, Xun; Fan, Jin

    2011-01-01

    Despite the importance of cognitive control in many cognitive tasks involving uncertainty, the computational mechanisms of cognitive control in response to uncertainty remain unclear. In this study, we develop biologically realistic neural network models to investigate the instantiation of cognitive control in a majority function task, where one determines the category to which the majority of items in a group belong. Two models are constructed, both of which include the same set of modules representing task-relevant brain functions and share the same model structure. However, with a critical change of a model parameter setting, the two models implement two different underlying algorithms: one for grouping search (where a subgroup of items are sampled and re-sampled until a congruent sample is found) and the other for self-terminating search (where the items are scanned and counted one-by-one until the majority is decided). The two algorithms hold distinct implications for the involvement of cognitive control. The modeling results show that while both models are able to perform the task, the grouping search model fit the human data better than the self-terminating search model. An examination of the dynamics underlying model performance reveals how cognitive control might be instantiated in the brain for computing the majority function. PMID:21369357

  14. Computational Geometry and Computer-Aided Design

    NASA Technical Reports Server (NTRS)

    Fay, T. H. (Compiler); Shoosmith, J. N. (Compiler)

    1985-01-01

    Extended abstracts of papers addressing the analysis, representation, and synthesis of shape information are presented. Curves and shape control, grid generation and contouring, solid modelling, surfaces, and curve intersection are specifically addressed.

  15. Algebraic geometry approach in gravity theory and new relations between the parameters in type I low-energy string theory action in theories with extra dimensions

    NASA Astrophysics Data System (ADS)

    Dimitrov, B. G.

    2010-02-01

    On the basis of the distinction between covariant and contravariant metric tensor components, a new (multivariable) cubic algebraic equation for reparametrization invariance of the gravitational Lagrangian has been derived and parametrized with complicated non-elliptic functions, depending on the (elliptic) Weierstrass function and its derivative. This is different from standard algebraic geometry, where only two-dimensional cubic equations, and not multivariable ones, are parametrized with elliptic functions. Physical applications of the approach have been considered with reference to theories with extra dimensions. The so-called "length function" l(x) has been introduced and found as a solution of quasilinear partial differential equations for the two different cases of "compactification + rescaling" and "rescaling + compactification". New physically important relations (inequalities) between the parameters in the action are established, which cannot be derived in the case l = 1 of the standard gravitational theory, but should also be fulfilled in that case.

  16. Information and psychomotor skills knowledge acquisition: A student-customer-centered and computer-supported approach.

    PubMed

    Nicholson, Anita; Tobin, Mary

    2006-01-01

    This presentation will discuss coupling commercial and customized computer-supported teaching aids to provide BSN nursing students with a friendly customer-centered self-study approach to psychomotor skill acquisition.

  17. Scaling Watershed Models: Modern Approaches to Science Computation with MapReduce, Parallelization, and Cloud Optimization

    EPA Science Inventory

    Environmental models are products of the computer architecture and software tools available at the time of development. Scientifically sound algorithms may persist in their original state even as system architectures and software development approaches evolve and progress. Dating...

  18. Multiplexing Genetic and Nucleosome Positioning Codes: A Computational Approach

    PubMed Central

    Eslami-Mossallam, Behrouz; Schram, Raoul D.; Tompitak, Marco; van Noort, John; Schiessel, Helmut

    2016-01-01

    Eukaryotic DNA is strongly bent inside fundamental packaging units: the nucleosomes. It is known that their positions are strongly influenced by the mechanical properties of the underlying DNA sequence. Here we discuss the possibility that these mechanical properties and the concomitant nucleosome positions are not just a side product of the given DNA sequence, e.g. that of the genes, but that a mechanical evolution of DNA molecules might have taken place. We first demonstrate the possibility of multiplexing classical and mechanical genetic information using a computational nucleosome model. In a second step we give evidence for genome-wide multiplexing in Saccharomyces cerevisiae and Schizosaccharomyces pombe. This suggests that the exact positions of nucleosomes play crucial roles in chromatin function. PMID:27272176

  19. Computational biology approach to uncover hepatitis C virus helicase operation

    PubMed Central

    Flechsig, Holger

    2014-01-01

    Hepatitis C virus (HCV) helicase is a molecular motor that splits nucleic acid duplex structures during viral replication, therefore representing a promising target for antiviral treatment. Hence, a detailed understanding of the mechanism by which it operates would facilitate the development of efficient drug-assisted therapies aiming to inhibit helicase activity. Despite extensive investigations performed in the past, a thorough understanding of the activity of this important protein was lacking since the underlying internal conformational motions could not be resolved. Here we review investigations that have been previously performed by us for HCV helicase. Using methods of structure-based computational modelling it became possible to follow entire operation cycles of this motor protein in structurally resolved simulations and uncover the mechanism by which it moves along the nucleic acid and accomplishes strand separation. We also discuss observations from that study in the light of recent experimental studies that confirm our findings. PMID:24707123

  20. A computational approach to the twin paradox in curved spacetime

    NASA Astrophysics Data System (ADS)

    Fung, Kenneth K. H.; Clark, Hamish A.; Lewis, Geraint F.; Wu, Xiaofeng

    2016-09-01

    Despite being a major component in the teaching of special relativity, the twin ‘paradox’ is generally not examined in courses on general relativity. Due to the complexity of analytical solutions to the problem, the paradox is often neglected entirely, and students are left with an incomplete understanding of the relativistic behaviour of time. This article outlines a project, undertaken by undergraduate physics students at the University of Sydney, in which a novel computational method was derived in order to predict the time experienced by a twin following a number of paths between two given spacetime coordinates. By utilising this method, it is possible to make clear to students that following a geodesic in curved spacetime does not always result in the greatest experienced proper time.
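
    The students' general method is not reproduced in this record; the point about geodesics can nevertheless be checked with two standard Schwarzschild worldlines that meet at the same two events: a twin hovering at fixed radius and a twin on a circular geodesic orbit at the same radius. The radius and mass below are illustrative, with G = c = 1.

      import math

      M = 1.0
      r = 10.0 * M                              # outside the r = 6M innermost stable orbit
      rs = 2.0 * M                              # Schwarzschild radius

      T = 2.0 * math.pi * math.sqrt(r**3 / M)   # coordinate time of one full orbit
      tau_hover = T * math.sqrt(1.0 - rs / r)   # static (accelerated) twin
      tau_orbit = T * math.sqrt(1.0 - 3.0 * M / r)  # circular geodesic twin

      print("hovering twin ages:", tau_hover)
      print("orbiting twin ages:", tau_orbit)   # smaller: the geodesic does not
                                                # maximize proper time here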

  1. Economic growth rate management by soft computing approach

    NASA Astrophysics Data System (ADS)

    Maksimović, Goran; Jović, Srđan; Jovanović, Radomir

    2017-01-01

    Economic growth rate management is a very important process for improving the economic stability of any country. The main goal of the study was to manage the impact of agriculture, manufacturing, industry and services on economic growth rate prediction. A soft computing methodology was used to select the influence of the inputs on the economic growth rate prediction. It is known that economic growth may develop on the basis of a combination of different factors. Gross domestic product (GDP) was used as the economic growth indicator. It was found that services have the highest impact on the GDP growth rate; by contrast, manufacturing has the smallest impact.

  2. Computational biology approach to uncover hepatitis C virus helicase operation.

    PubMed

    Flechsig, Holger

    2014-04-07

    Hepatitis C virus (HCV) helicase is a molecular motor that splits nucleic acid duplex structures during viral replication, therefore representing a promising target for antiviral treatment. Hence, a detailed understanding of the mechanism by which it operates would facilitate the development of efficient drug-assisted therapies aiming to inhibit helicase activity. Despite extensive investigations performed in the past, a thorough understanding of the activity of this important protein was lacking since the underlying internal conformational motions could not be resolved. Here we review investigations that have been previously performed by us for HCV helicase. Using methods of structure-based computational modelling it became possible to follow entire operation cycles of this motor protein in structurally resolved simulations and uncover the mechanism by which it moves along the nucleic acid and accomplishes strand separation. We also discuss observations from that study in the light of recent experimental studies that confirm our findings.

  3. An algebraic approach to the study of weakly excited states for a condensate in a ring geometry

    NASA Astrophysics Data System (ADS)

    Buonsante, P.; Franco, R.; Penna, V.

    2005-09-01

    We determine the low-energy spectrum and the eigenstates for a two-mode bosonic nonlinear model by applying the Inönü-Wigner contraction method to the Hamiltonian algebra. This model is known to well represent a Bose-Einstein condensate rotating in a thin torus endowed with two angular-momentum modes, as well as a condensate in a double-well potential characterized by two space modes. We consider such a model in the presence of both an attractive and a repulsive boson interaction and investigate regimes corresponding to different values of the inter-mode tunnelling parameter. We show that the results ensuing from our approach are in many cases extremely satisfactory. To this end, we compare our results with the ground state obtained both numerically and within a standard semiclassical approximation based on su(2) coherent states.

  4. Perturbation approach for nuclear magnetic resonance solid-state quantum computation

    DOE PAGES

    Berman, G. P.; Kamenev, D. I.; Tsifrinovich, V. I.

    2003-01-01

    The dynamics of a nuclear-spin quantum computer with a large number (L = 1000) of qubits is considered using a perturbation approach. Small parameters are introduced and used to compute the error in an implementation of entanglement between remote qubits, using a sequence of radio-frequency pulses. The error is computed up to different orders of the perturbation theory and tested using an exact numerical solution.

  5. Strategic Cognitive Sequencing: A Computational Cognitive Neuroscience Approach

    PubMed Central

    Herd, Seth A.; Krueger, Kai A.; Kriete, Trenton E.; Huang, Tsung-Ren; Hazy, Thomas E.; O'Reilly, Randall C.

    2013-01-01

    We address strategic cognitive sequencing, the “outer loop” of human cognition: how the brain decides what cognitive process to apply at a given moment to solve complex, multistep cognitive tasks. We argue that this topic has been neglected relative to its importance for systematic reasons but that recent work on how individual brain systems accomplish their computations has set the stage for productively addressing how brain regions coordinate over time to accomplish our most impressive thinking. We present four preliminary neural network models. The first addresses how the prefrontal cortex (PFC) and basal ganglia (BG) cooperate to perform trial-and-error learning of short sequences; the next, how several areas of PFC learn to make predictions of likely reward, and how this contributes to the BG making decisions at the level of strategies. The third models address how PFC, BG, parietal cortex, and hippocampus can work together to memorize sequences of cognitive actions from instruction (or “self-instruction”). The last shows how a constraint satisfaction process can find useful plans. The PFC maintains current and goal states and associates from both of these to find a “bridging” state, an abstract plan. We discuss how these processes could work together to produce strategic cognitive sequencing and discuss future directions in this area. PMID:23935605

  6. A New Approach on Computing Free Core Nutation

    NASA Astrophysics Data System (ADS)

    Zhang, Mian; Huang, Chengling

    2015-04-01

    Free core nutation (FCN) is a rotational mode of the earth related to the non-alignment of the rotation axes of the core and the mantle. The FCN period obtained by traditional theoretical methods is near 460 days with PREM, while precise observations (VLBI + SG tides) indicate it should be near 430 days. To fill this large gap, astronomers and geophysicists have proposed various assumptions, e.g., increasing the core-mantle-boundary (CMB) flattening by about 5%, a strong coupling between nutation and the geomagnetic field near the CMB, viscous coupling, or topographic coupling. Do we really need these unproved assumptions, or does the problem lie with the traditional theoretical methods themselves? Earth models (e.g. PREM) provide accurate and robust profiles of physical parameters, like density and the Lamé parameters, but their radial derivatives, which are also used in all traditional methods to calculate normal modes (e.g. FCN), nutation and tides of the non-rigid earth, are not as trustworthy as the parameters themselves. A new stratified Galerkin method is proposed and applied to the computation of rotational modes to avoid these problems. This new method can handle not only a first-order ellipsoid but also irregular asymmetric 3D earth models. Our preliminary result for the FCN period is 435 sidereal days.

  7. Non-racemic mixture model: a computational approach.

    PubMed

    Polanco, Carlos; Buhse, Thomas

    2017-01-01

    The behavior of a slight chiral bias in favor of l-amino acids over d-amino acids was studied in an evolutionary mathematical model generating mixed chiral peptide hexamers. The simulations aimed to reproduce a very generalized prebiotic scenario involving a specified couple of amino acid enantiomers and a possible asymmetric amplification through autocatalytic peptide self-replication while forming small multimers of a defined length. Our simplified model allowed the observation of a small ascending but not conclusive tendency in the l-amino acid over the d-amino acid profile for the resulting mixed chiral hexamers in computer simulations of 100 peptide generations. This simulation was carried out by changing the chiral bias from 1% to 3%, in three stages of 15, 50 and 100 generations to observe any alteration that could mean a drastic change in behavior. So far, our simulations lead to the assumption that under the exposure of very slight non-racemic conditions, a significant bias between l- and d-amino acids, as present in our biosphere, was unlikely generated under prebiotic conditions if autocatalytic peptide self-replication was the main or the only driving force of chiral auto-amplification.

  8. Localized tissue mineralization regulated by bone remodelling: A computational approach

    PubMed Central

    Berli, Marcelo; Borau, Carlos; Decco, Oscar; Adams, George; Cook, Richard B.; García Aznar, José Manuel; Zioupos, Peter

    2017-01-01

    Bone is a living tissue whose main mechanical function is to provide stiffness, strength and protection to the body. Both stiffness and strength depend on the mineralization of the organic matrix, which is constantly being remodelled by the coordinated action of the bone multicellular units (BMUs). Due to the dynamics of both remodelling and mineralization, each sample of bone is composed of structural units (osteons in cortical and packets in cancellous bone) created at different times, therefore presenting different levels of mineral content. In this work, a computational model is used to understand the feedback between the remodelling and the mineralization processes under different load conditions and bone porosities. This model considers that osteoclasts primarily resorb those parts of bone closer to the surface, which are younger and less mineralized than older inner ones. Under equilibrium loads, results show that bone volumes with both the highest and the lowest levels of porosity (cancellous and cortical respectively) tend to develop higher levels of mineral content compared to volumes with intermediate porosity, thus presenting higher material densities. In good agreement with recent experimental measurements, a boomerang-like pattern emerges when plotting apparent density at the tissue level versus material density at the bone material level. Overload and disuse states are studied too, resulting in a translation of the apparent–material density curve. Numerical results are discussed pointing to potential clinical applications. PMID:28306746

  9. Understanding auditory distance estimation by humpback whales: a computational approach.

    PubMed

    Mercado, E; Green, S R; Schneider, J N

    2008-02-01

    Ranging, the ability to judge the distance to a sound source, depends on the presence of predictable patterns of attenuation. We measured long-range sound propagation in coastal waters to assess whether humpback whales might use frequency degradation cues to range singing whales. Two types of neural networks, a multi-layer and a single-layer perceptron, were trained to classify recorded sounds by distance traveled based on their frequency content. The multi-layer network successfully classified received sounds, demonstrating that the distorting effects of underwater propagation on frequency content provide sufficient cues to estimate source distance. Normalizing received sounds with respect to ambient noise levels increased the accuracy of distance estimates by single-layer perceptrons, indicating that familiarity with background noise can potentially improve a listening whale's ability to range. To assess whether frequency patterns predictive of source distance were likely to be perceived by whales, recordings were pre-processed using a computational model of the humpback whale's peripheral auditory system. Although signals processed with this model contained less information than the original recordings, neural networks trained with these physiologically based representations estimated source distance more accurately, suggesting that listening whales should be able to range singers using distance-dependent changes in frequency content.
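
    The study's classifiers were trained on measured propagation data; the sketch below reproduces only the shape of the task on invented data, where higher frequency bands attenuate faster with range, using scikit-learn's multi-layer perceptron.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)
      distances = rng.choice([1, 5, 10, 20], size=2000)    # range classes, km (invented)
      bands = np.array([0.05, 0.1, 0.2, 0.4])              # per-band loss, dB/km (invented)
      X = -np.outer(distances, bands) + rng.normal(0.0, 1.0, (2000, 4))
      y = distances

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
      clf.fit(X_tr, y_tr)
      print("held-out accuracy:", clf.score(X_te, y_te))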

  10. Localized tissue mineralization regulated by bone remodelling: A computational approach.

    PubMed

    Berli, Marcelo; Borau, Carlos; Decco, Oscar; Adams, George; Cook, Richard B; García Aznar, José Manuel; Zioupos, Peter

    2017-01-01

    Bone is a living tissue whose main mechanical function is to provide stiffness, strength and protection to the body. Both stiffness and strength depend on the mineralization of the organic matrix, which is constantly being remodelled by the coordinated action of the bone multicellular units (BMUs). Due to the dynamics of both remodelling and mineralization, each sample of bone is composed of structural units (osteons in cortical and packets in cancellous bone) created at different times, therefore presenting different levels of mineral content. In this work, a computational model is used to understand the feedback between the remodelling and the mineralization processes under different load conditions and bone porosities. This model considers that osteoclasts primarily resorb those parts of bone closer to the surface, which are younger and less mineralized than older inner ones. Under equilibrium loads, results show that bone volumes with both the highest and the lowest levels of porosity (cancellous and cortical respectively) tend to develop higher levels of mineral content compared to volumes with intermediate porosity, thus presenting higher material densities. In good agreement with recent experimental measurements, a boomerang-like pattern emerges when plotting apparent density at the tissue level versus material density at the bone material level. Overload and disuse states are studied too, resulting in a translation of the apparent-material density curve. Numerical results are discussed pointing to potential clinical applications.

  11. TOPICAL REVIEW: Computational approaches to 3D modeling of RNA

    NASA Astrophysics Data System (ADS)

    Laing, Christian; Schlick, Tamar

    2010-07-01

    Many exciting discoveries have recently revealed the versatility of RNA and its importance in a variety of functions within the cell. Since the structural features of RNA are of major importance to their biological function, there is much interest in predicting RNA structure, either in free form or in interaction with various ligands, including proteins, metabolites and other molecules. In recent years, an increasing number of researchers have developed novel RNA algorithms for predicting RNA secondary and tertiary structures. In this review, we describe current experimental and computational advances and discuss recent ideas that are transforming the traditional view of RNA folding. To evaluate the performance of the most recent RNA 3D folding algorithms, we provide a comparative study in order to test the performance of available 3D structure prediction algorithms for an RNA data set of 43 structures of various lengths and motifs. We find that the algorithms vary widely in terms of prediction quality across different RNA lengths and topologies; most predictions have very large root mean square deviations from the experimental structure. We conclude by outlining some suggestions for future RNA folding research.
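
    The headline metric in the comparison above is the root mean square deviation between predicted and experimental coordinates after optimal superposition. A minimal sketch of that metric (the Kabsch algorithm), independent of any particular folding program:

      import numpy as np

      def kabsch_rmsd(P, Q):
          """RMSD between (N, 3) coordinate sets after optimal rotation."""
          P = P - P.mean(axis=0)
          Q = Q - Q.mean(axis=0)
          U, _, Vt = np.linalg.svd(P.T @ Q)
          d = np.sign(np.linalg.det(U @ Vt))    # guard against reflections
          R = U @ np.diag([1.0, 1.0, d]) @ Vt
          return np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1)))

      pts = np.random.default_rng(2).standard_normal((50, 3))
      c, s = np.cos(0.7), np.sin(0.7)
      Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
      print(kabsch_rmsd(pts @ Rz.T, pts))       # ~0: a rotated copy superposes exactly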

  12. A computational approach to studying ageing at the individual level

    PubMed Central

    Mourão, Márcio A.; Schnell, Santiago; Pletcher, Scott D.

    2016-01-01

    The ageing process is actively regulated throughout an organism's life, but studying the rate of ageing in individuals is difficult with conventional methods. Consequently, ageing studies typically make biological inference based on population mortality rates, which often do not accurately reflect the probabilities of death at the individual level. To study the relationship between individual and population mortality rates, we integrated in vivo switch experiments with in silico stochastic simulations to elucidate how carefully designed experiments allow key aspects of individual ageing to be deduced from group mortality measurements. As our case study, we used the recent report demonstrating that pheromones of the opposite sex decrease lifespan in Drosophila melanogaster by reversibly increasing population mortality rates. We showed that the population mortality reversal following pheromone removal was almost surely occurring in individuals, albeit more slowly than suggested by population measures. Furthermore, heterogeneity among individuals due to the inherent stochasticity of behavioural interactions skewed population mortality rates in middle-age away from the individual-level trajectories of which they are comprised. This article exemplifies how computational models function as important predictive tools for designing wet-laboratory experiments to use population mortality rates to understand how genetic and environmental manipulations affect ageing in the individual. PMID:26865300

  13. Higher Order Modeling in Hybrid Approaches to the Computation of Electromagnetic Fields

    NASA Technical Reports Server (NTRS)

    Wilton, Donald R.; Fink, Patrick W.; Graglia, Roberto D.

    2000-01-01

    Higher order geometry representations and interpolatory basis functions for computational electromagnetics are reviewed. Two types of vector-valued basis functions are described: curl-conforming bases, used primarily in finite element solutions, and divergence-conforming bases used primarily in integral equation formulations. Both sets satisfy Nedelec constraints, which optimally reduce the number of degrees of freedom required for a given order. Results are presented illustrating the improved accuracy and convergence properties of higher order representations for hybrid integral equation and finite element methods.

  14. Generative CAI in Analytical Geometry.

    ERIC Educational Resources Information Center

    Uttal, William R.; And Others

    A generative computer-assisted instruction system is being developed to tutor students in analytical geometry. The basis of this development is the thesis that a generative teaching system can be developed by establishing and then stimulating a simplified, explicit model of the human tutor. The goal attempted is that of a computer environment…

  15. Computer Mediated Social Network Approach to Software Support and Maintenance

    DTIC Science & Technology

    2010-06-01

    mathematics (Euler, 1741;  Sachs, Stiebitz, & Wilson, 1988), philosophy ( Durkheim , 2001), the social science domain (Granovetter  1973; 1983; Milgram, 1967...to philosophy  ( Durkheim , 2001), to the strength of the connections a (Granovetter 1973; Granovetter, 1983) and the  number of connections (Milgram...Qualitative, quantitative, and mixed method approaches  (Second ed.) Sage Publications Inc.   Durkheim , É. (2001). The elementary forms of religious life, New

  16. A Computational Approach to Estimating Nondisjunction Frequency in Saccharomyces cerevisiae

    PubMed Central

    Chu, Daniel B.; Burgess, Sean M.

    2016-01-01

    Errors segregating homologous chromosomes during meiosis result in aneuploid gametes and are the largest contributing factor to birth defects and spontaneous abortions in humans. Saccharomyces cerevisiae has long served as a model organism for studying the gene network supporting normal chromosome segregation. Measuring homolog nondisjunction frequencies is laborious, and involves dissecting thousands of tetrads to detect missegregation of individually marked chromosomes. Here we describe a computational method (TetFit) to estimate the relative contributions of meiosis I nondisjunction and random-spore death to spore inviability in wild type and mutant strains. These values are based on finding the best-fit distribution of 4, 3, 2, 1, and 0 viable-spore tetrads to an observed distribution. Using TetFit, we found that meiosis I nondisjunction is an intrinsic component of spore inviability in wild-type strains. We show proof-of-principle that the calculated average meiosis I nondisjunction frequency determined by TetFit closely matches empirically determined values in mutant strains. Using these published data sets, TetFit uncovered two classes of mutants: Class A mutants skew toward increased nondisjunction death, and include those with known defects in establishing pairing, recombination, and/or synapsis of homologous chromosomes. Class B mutants skew toward random spore death, and include those with defects in sister-chromatid cohesion and centromere function. Epistasis analysis using TetFit is facilitated by the low numbers of tetrads (as few as 200) required to compare the contributions to spore death in different mutant backgrounds. TetFit analysis does not require any special strain construction, and can be applied to previously observed tetrad distributions. PMID:26747203
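
    TetFit itself is described in the paper; the sketch below only illustrates the fitting idea on an invented two-parameter model, in which a fraction alpha of tetrads suffer meiosis I nondisjunction (killing the two nullisomic spores) and every spore additionally dies at random with probability d. The observed counts are fabricated for the example.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import binom

      def predicted(alpha, d):
          """Distribution of 0..4 viable spores per tetrad under the toy model."""
          s = 1.0 - d
          normal = binom.pmf(np.arange(5), 4, s)   # all 4 spores at risk
          ndj = np.zeros(5)
          ndj[:3] = binom.pmf(np.arange(3), 2, s)  # only 2 spores can survive
          return (1.0 - alpha) * normal + alpha * ndj

      observed = np.array([10.0, 25.0, 40.0, 180.0, 745.0])  # invented counts, 0..4 viable
      observed /= observed.sum()

      def loss(p):
          return np.sum((predicted(*p) - observed) ** 2)

      fit = minimize(loss, x0=[0.05, 0.05], bounds=[(0.0, 1.0), (0.0, 1.0)])
      print("estimated NDJ frequency and death probability:", fit.x)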

  17. Enrichment Activities for Geometry.

    ERIC Educational Resources Information Center

    Usiskin, Zalman

    1983-01-01

    Enrichment activities that teach about geometry as they instruct in geometry are given for some significant topics. The facets of geometry included are tessellations, round robin tournaments, geometric theorems on triangles, and connections between geometry and complex numbers. (MNS)

  18. An integrative computational approach for prioritization of genomic variants

    SciTech Connect

    Dubchak, Inna; Balasubramanian, Sandhya; Wang, Sheng; Meydan, Cem; Sulakhe, Dinanath; Poliakov, Alexander; Börnigen, Daniela; Xie, Bingqing; Taylor, Andrew; Ma, Jianzhu; Paciorkowski, Alex R.; Mirzaa, Ghayda M.; Dave, Paul; Agam, Gady; Xu, Jinbo; Al-Gazali, Lihadh; Mason, Christopher E.; Ross, M. Elizabeth; Maltsev, Natalia; Gilliam, T. Conrad; Huang, Qingyang

    2014-12-15

    An essential step in the discovery of molecular mechanisms contributing to disease phenotypes and efficient experimental planning is the development of weighted hypotheses that estimate the functional effects of sequence variants discovered by high-throughput genomics. With the increasing specialization of bioinformatics resources, creating analytical workflows that seamlessly integrate data and bioinformatics tools developed by multiple groups becomes inevitable. Here we present a case study of the use of a distributed analytical environment integrating four complementary specialized resources, namely the Lynx platform, VISTA RViewer, the Developmental Brain Disorders Database (DBDB), and the RaptorX server, for the identification of high-confidence candidate genes contributing to the pathogenesis of spina bifida. The analysis resulted in the prediction and validation of deleterious mutations in the SLC19A placental transporter in mothers of the affected children that cause narrowing of the outlet channel and therefore lead to a reduced folate permeation rate. The described approach also enabled the correct identification of several genes previously shown to contribute to the pathogenesis of spina bifida, and the suggestion of additional genes for experimental validation. The study demonstrates that the seamless integration of bioinformatics resources enables fast and efficient prioritization and characterization of genomic factors and molecular networks contributing to the phenotypes of interest.

  19. An integrative computational approach for prioritization of genomic variants

    DOE PAGES

    Dubchak, Inna; Balasubramanian, Sandhya; Wang, Sheng; ...

    2014-12-15

    An essential step in the discovery of molecular mechanisms contributing to disease phenotypes, and in efficient experimental planning, is the development of weighted hypotheses that estimate the functional effects of sequence variants discovered by high-throughput genomics. With the increasing specialization of bioinformatics resources, creating analytical workflows that seamlessly integrate data and bioinformatics tools developed by multiple groups becomes inevitable. Here we present a case study of the use of a distributed analytical environment integrating four complementary specialized resources, namely the Lynx platform, VISTA RViewer, the Developmental Brain Disorders Database (DBDB), and the RaptorX server, for the identification of high-confidence candidate genes contributing to the pathogenesis of spina bifida. The analysis resulted in the prediction and validation of deleterious mutations in the SLC19A placental transporter in mothers of affected children; these mutations cause narrowing of the outlet channel and therefore lead to a reduced folate permeation rate. The described approach also enabled correct identification of several genes previously shown to contribute to the pathogenesis of spina bifida, and suggested additional genes for experimental validation. The study demonstrates that the seamless integration of bioinformatics resources enables fast and efficient prioritization and characterization of genomic factors and molecular networks contributing to the phenotypes of interest.

  20. Driving profile modeling and recognition based on soft computing approach.

    PubMed

    Wahab, Abdul; Quek, Chai; Tan, Chin Keong; Takeda, Kazuya

    2009-04-01

    Advancements in biometrics-based authentication have led to its increasing prominence, and it is being incorporated into everyday tasks. Existing vehicle security systems rely only on alarms or smart cards as forms of protection. A biometric driver recognition system utilizing driving behaviors is a highly novel and personalized approach, and could be incorporated into existing vehicle security systems to form a multimodal identification system offering a greater degree of multilevel protection. In this paper, detailed studies have been conducted to model individual driving behavior in order to identify features that may be efficiently and effectively used to profile each driver. Feature extraction techniques based on Gaussian mixture models (GMMs) are proposed and implemented. Features extracted from the accelerator and brake pedal pressure were then used as inputs to a fuzzy neural network (FNN) system to ascertain the identity of the driver. Two fuzzy neural networks, namely, the evolving fuzzy neural network (EFuNN) and the adaptive network-based fuzzy inference system (ANFIS), are used to demonstrate the viability of the two proposed feature extraction techniques. The performances were compared against an artificial neural network (NN) implementation using the multilayer perceptron (MLP) network and a statistical method based on the GMM. Extensive testing was conducted, and the results show great potential in the use of the FNN for real-time driver identification and verification. In addition, the profiling of driver behaviors has numerous other potential applications for use by law enforcement and companies dealing with bus and truck drivers.

  1. An Integrative Computational Approach for Prioritization of Genomic Variants

    PubMed Central

    Wang, Sheng; Meydan, Cem; Sulakhe, Dinanath; Poliakov, Alexander; Börnigen, Daniela; Xie, Bingqing; Taylor, Andrew; Ma, Jianzhu; Paciorkowski, Alex R.; Mirzaa, Ghayda M.; Dave, Paul; Agam, Gady; Xu, Jinbo; Al-Gazali, Lihadh; Mason, Christopher E.; Ross, M. Elizabeth; Maltsev, Natalia; Gilliam, T. Conrad

    2014-01-01

    An essential step in the discovery of molecular mechanisms contributing to disease phenotypes, and in efficient experimental planning, is the development of weighted hypotheses that estimate the functional effects of sequence variants discovered by high-throughput genomics. With the increasing specialization of bioinformatics resources, creating analytical workflows that seamlessly integrate data and bioinformatics tools developed by multiple groups becomes inevitable. Here we present a case study of the use of a distributed analytical environment integrating four complementary specialized resources, namely the Lynx platform, VISTA RViewer, the Developmental Brain Disorders Database (DBDB), and the RaptorX server, for the identification of high-confidence candidate genes contributing to the pathogenesis of spina bifida. The analysis resulted in the prediction and validation of deleterious mutations in the SLC19A placental transporter in mothers of affected children; these mutations cause narrowing of the outlet channel and therefore lead to a reduced folate permeation rate. The described approach also enabled correct identification of several genes previously shown to contribute to the pathogenesis of spina bifida, and suggested additional genes for experimental validation. The study demonstrates that the seamless integration of bioinformatics resources enables fast and efficient prioritization and characterization of genomic factors and molecular networks contributing to the phenotypes of interest. PMID:25506935

  2. Effects of artificial gravity on the cardiovascular system: Computational approach

    NASA Astrophysics Data System (ADS)

    Diaz Artiles, Ana; Heldt, Thomas; Young, Laurence R.

    2016-09-01

    steady-state cardiovascular behavior during sustained artificial gravity and exercise. Further validation of the model was performed using experimental data from the combined exercise and artificial gravity experiments conducted on the MIT CRC, and these results will be presented separately in future publications. This unique computational framework can be used to simulate a variety of centrifuge configuration and exercise intensities to improve understanding and inform decisions about future implementation of artificial gravity in space.

  3. A new computer approach to mixed feature classification for forestry application

    NASA Technical Reports Server (NTRS)

    Kan, E. P.

    1976-01-01

    A computer approach for mapping mixed forest features (i.e., types, classes) from computer classification maps is discussed. Mixed features such as mixed softwood/hardwood stands are treated as admixtures of softwood and hardwood areas. Large-area mixed features are identified and small-area features neglected when the nominal size of a mixed feature can be specified. The computer program merges small isolated areas into surrounding areas by the iterative manipulation of the postprocessing algorithm that eliminates small connected sets. For a forestry application, computer-classified LANDSAT multispectral scanner data of the Sam Houston National Forest were used to demonstrate the proposed approach. The technique was successful in cleaning the salt-and-pepper appearance of multiclass classification maps and in mapping admixtures of softwood areas and hardwood areas. However, the computer-mapped mixed areas matched very poorly with the ground truth because of inadequate resolution and inappropriate definition of mixed features.
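
    The post-processing algorithm itself is only summarized above; the following is a rough modern reconstruction of the "eliminate small connected sets" step, using scipy.ndimage as a stand-in for the original 1976 implementation (the function name and the min_size threshold are ours).

```python
# Hedged sketch: merge small connected regions of a classification map into
# their surroundings, as in the abstract's small-connected-set elimination.
import numpy as np
from scipy import ndimage

def clean_map(class_map, min_size):
    out = class_map.copy()
    for cls in np.unique(class_map):
        mask = out == cls
        labels, n = ndimage.label(mask)            # connected sets of cls
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        for lab in np.where(sizes < min_size)[0] + 1:
            region = labels == lab
            # grow the region by one pixel and take the commonest
            # surrounding class as the replacement value
            ring = ndimage.binary_dilation(region) & ~region
            if ring.any():
                vals, counts = np.unique(out[ring], return_counts=True)
                out[region] = vals[np.argmax(counts)]
    return out
```

    Repeating the pass until the map no longer changes would mirror the "iterative manipulation" described in the abstract.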

  4. A scalable approach to modeling groundwater flow on massively parallel computers

    SciTech Connect

    Ashby, S.F.; Falgout, R.D.; Tompson, A.F.B.

    1995-12-01

    We describe a fully scalable approach to the simulation of groundwater flow on a hierarchy of computing platforms, ranging from workstations to massively parallel computers. Specifically, we advocate the use of scalable conceptual models in which the subsurface model is defined independently of the computational grid on which the simulation takes place. We also describe a scalable multigrid algorithm for computing the groundwater flow velocities. We are thus able to leverage both the engineer's time spent developing the conceptual model and the computing resources used in the numerical simulation. We have successfully employed this approach at the LLNL site, where we have run simulations ranging in size from just a few thousand spatial zones (on workstations) to more than eight million spatial zones (on the CRAY T3D) - all using the same conceptual model.

  5. Hyperspectral Aquatic Radiative Transfer Modeling Using a High-Performance Cluster Computing-Based Approach

    SciTech Connect

    Filippi, Anthony M; Bhaduri, Budhendra L; Naughton, III, Thomas J; King, Amy L; Scott, Stephen L; Guneralp, Inci

    2012-01-01

    For aquatic studies, radiative transfer (RT) modeling can be used to compute hyperspectral above-surface remote sensing reflectance that can be utilized for inverse model development. Inverse models can provide bathymetry and inherent- and bottom-optical property estimation. Because measured oceanic field/organic datasets are often spatio-temporally sparse, synthetic data generation is useful in yielding sufficiently large datasets for inversion model development; however, these forward-modeled data are computationally expensive and time-consuming to generate. This study establishes the magnitude of wall-clock-time savings achieved for performing large, aquatic RT batch-runs using parallel computing versus a sequential approach. Given 2,600 simulations and identical compute-node characteristics, sequential architecture required ~100 hours until termination, whereas a parallel approach required only ~2.5 hours (42 compute nodes) - a 40x speed-up. Tools developed for this parallel execution are discussed.

  6. Hyperspectral Aquatic Radiative Transfer Modeling Using a High-Performance Cluster Computing Based Approach

    SciTech Connect

    Filippi, Anthony; Bhaduri, Budhendra L; Naughton, III, Thomas J; King, Amy L; Scott, Stephen L; Guneralp, Inci

    2012-01-01

    For aquatic studies, radiative transfer (RT) modeling can be used to compute hyperspectral above-surface remote sensing reflectance that can be utilized for inverse model development. Inverse models can provide bathymetry and inherent- and bottom-optical property estimation. Because measured oceanic field/organic datasets are often spatio-temporally sparse, synthetic data generation is useful in yielding sufficiently large datasets for inversion model development; however, these forward-modeled data are computationally expensive and time-consuming to generate. This study establishes the magnitude of wall-clock-time savings achieved for performing large, aquatic RT batch-runs using parallel computing versus a sequential approach. Given 2,600 simulations and identical compute-node characteristics, sequential architecture required ~100 hours until termination, whereas a parallel approach required only ~2.5 hours (42 compute nodes) - a 40x speed-up. Tools developed for this parallel execution are discussed.

  7. Differential spectral attenuation measurements at microwaves in a LEO-LEO satellites radio occultation geometry: a novel approach for limiting scintillation effects in tropospheric water vapor measurements

    NASA Astrophysics Data System (ADS)

    Facheris, Luca; Martini, Enrica; Cuccoli, Fabrizio; Argenti, Fabrizio

    2004-12-01

    The DSA (Differential Spectral Attenuation) approach, presented in a companion paper in this conference's proceedings, has the potential to provide the total content of water vapor (IWV, Integrated Water Vapor) along the propagation path between two Low Earth Orbiting (LEO) satellites. Interest in the DSA, based on the ratio of simultaneous measurements of the total attenuation at two relatively close frequencies in the K-Ku bands, was motivated by the need for limiting the effects of tropospheric scintillation and by the fact that DSA measurements are highly correlated with the IWV along the LEO-LEO link. However, the impact of tropospheric scintillation in a LEO-LEO radio occultation geometry using frequencies above 10 GHz still has to be thoroughly investigated. In this paper we focus on the analysis of such effects, taking into account the fact that the formulations presented in the literature have to be modified in order to fit the specific problem under consideration. Specifically, an expression is derived for the variances of the amplitude and phase fluctuations of the wave, their spectrum and the correlation between fluctuations at different frequencies. In particular, the latter is extremely useful to evaluate the potential of the DSA approach through simulations whose results are reported in the last part of the paper.
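
    Schematically, the observable can be written as follows; the exact definition and normalization are given in the companion paper, so this form is our assumption rather than the authors' formula.

```latex
% Our schematic notation: with total link attenuations A(f) expressed in dB,
% the ratio of the two received powers reduces to a finite-difference
% spectral sensitivity,
\mathrm{DSA}(f_1, f_2) \;=\; \frac{A(f_1) - A(f_2)}{f_1 - f_2}
\qquad [\mathrm{dB/GHz}],
% which the abstract reports to be highly correlated with the IWV
% along the LEO-LEO link.
```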

  8. Glacial landscape evolution by subglacial quarrying: A multiscale computational approach

    NASA Astrophysics Data System (ADS)

    Ugelvig, Sofie V.; Egholm, David L.; Iverson, Neal R.

    2016-11-01

    Quarrying of bedrock is a primary agent of subglacial erosion. Although the mechanical theory behind the process has been studied for decades, it has proven difficult to formulate the governing principles so that large-scale landscape evolution models can be used to integrate erosion over time. The existing mechanical theory thus stands largely untested in its ability to explain postglacial topography. In this study we relate the physics of quarrying to long-term landscape evolution with a multiscale approach that connects meter-scale cavities to kilometer-scale glacial landscapes. By averaging the quarrying rate across many small-scale bedrock steps, we quantify how regional trends in basal sliding speed, effective pressure, and bed slope affect the rate of erosion. A sensitivity test indicates that a power law formulated in terms of these three variables provides an acceptable basis for quantifying regional-scale rates of quarrying. Our results highlight the strong influence of effective pressure, which intensifies quarrying by increasing the volume of the bed that is stressed by the ice and thereby the probability of rock failure. The resulting pressure dependency points to subglacial hydrology as a primary factor for influencing rates of quarrying and hence for shaping the bedrock topography under warm-based glaciers. When applied in a landscape evolution model, the erosion law for quarrying produces recognizable large-scale glacial landforms: U-shaped valleys, hanging valleys, and overdeepenings. The landforms produced are very similar to those predicted by more standard sliding-based erosion laws, but overall quarrying is more focused in valleys, and less effective at higher elevations.
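
    The abstract names the three regional variables but not the calibrated constants, so the erosion law can only be indicated schematically; the prefactor K and the exponents a, b, c below are placeholders, not the paper's fitted values.

```latex
% Schematic regional quarrying law (placeholders K, a, b, c):
\dot{E}_{\mathrm{quarry}} \;=\; K \, u_b^{\,a} \, N^{\,b} \, S^{\,c},
% u_b: basal sliding speed, N: effective pressure, S: bed slope.
```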

  9. A 19-week exercise program for people with chronic stroke enhances bone geometry at the tibia: a peripheral quantitative computed tomography study

    PubMed Central

    Pang, Marco YC; Ashe, Maureen C.; Eng, Janice J.; McKay, Heather A.; Dawson, Andrew S.

    2011-01-01

    Introduction and Hypothesis: Regular skeletal-loading exercise is an effective intervention to improve bone health in older individuals. However, little is known about the bone responses to exercise in people with stroke. Following a stroke, muscle atrophy and bone loss occur. Diminished areal bone mineral density combined with an increased number of falls substantially enhances the risk of a fragility fracture. We undertook a randomized controlled intervention trial to assess the impact of a 19-week comprehensive exercise program on lower extremity bone health in people with chronic stroke. Methods: Sixty-three community-dwelling individuals with chronic stroke were randomly allocated to either an intervention group or a control group. The intervention group participated in a 19-week thrice-weekly exercise program consisting of skeletal-loading, aerobic, strengthening, and balance exercises. The control group completed a seated upper extremity exercise program. We used peripheral quantitative computed tomography (pQCT) to measure bone geometry and volumetric bone mineral density at the distal 4% and midshaft 50% sites of the tibia before and after the intervention. Results: Following the exercise program, the intervention group had significantly greater percent gain in trabecular bone content at the 4% site on the paretic side than the control group (p=0.048). At the 50% site on the paretic side, the intervention group also had significantly greater percent gain in cortical thickness (p=0.026) but not in the polar stress-strain index (p-SSI) when compared with the control group. However, no significant between-group difference was found in percent gain in trabecular bone density (4% site) or cortical bone density (50% site) on the paretic side. No significant changes were observed in any variables on the non-paretic side at the 4% or 50% site. Conclusions: This study provided some evidence that the 19-week comprehensive exercise program could have a positive impact on bone

  10. Computational Modeling Approaches for Studying Transverse Combustion Instability in a Multi-element Injector

    DTIC Science & Technology

    2015-01-01

    Technical paper (January 2015 - May 2015) by M.E. Harvazinski, K.J. Shipley, D.G. Talley, and V. Sankaran. The paper describes computational modeling approaches for studying transverse combustion instability in a multi-element injector: in one approach the effect of the transverse instability on the center study element can be examined parametrically, while the second approach models the entire injector.

  11. Management of low back pain in computer users: A multidisciplinary approach

    PubMed Central

    Shete, Kiran M.; Suryawanshi, Prachi; Gandhi, Neha

    2012-01-01

    Background: Low back pain is a very common phenomenon in computer users. More than 80% of people using computers for more than 4 h complain of back pain. Objective: To compare the effectiveness of a multidisciplinary treatment approach and a conventional treatment approach amongst computer users. Materials and Methods: A prospective interventional study was carried out at a private spine clinic amongst computer users with the complaint of low back pain. The study participants were randomly distributed into two groups. The first group was treated by the conventional approach and the second group by the multidisciplinary approach. Primary outcomes analyzed were pain intensity, sick leave availed, and quality of life. Statistical analysis was done using proportions, the unpaired "t" test, and the Wilcoxon signed-rank test. Results: In total, 44 study participants were randomly assigned to groups I and II, and each group had 22 study participants. Intensity of pain was reduced significantly in the group treated by the multidisciplinary approach (t = 5.718; P = 0.0001). Similarly, only 4 (18.18%) of the study participants in the group treated by the multidisciplinary approach availed sick leave due to low back pain, while 14 (63.63%) study participants availed sick leave in the other group (P = 0.02). The quality of life amongst the study participants treated by the multidisciplinary approach was significantly improved compared to the group treated by the conventional approach (t = 7.037; P = 0.0001). Conclusion and Recommendation: The multidisciplinary treatment approach was better than the conventional treatment approach in low back pain cases when factors like pain and quality of life were assessed. The multidisciplinary approach for treatment of low back pain should be promoted over the conventional approach. Larger studies are required to confirm the findings in different settings. PMID:23741122

  12. CATIA-GDML geometry builder

    NASA Astrophysics Data System (ADS)

    Belogurov, S.; Berchun, Yu; Chernogorov, A.; Malzacher, P.; Ovcharenko, E.; Semennikov, A.

    2011-12-01

    Due to the conceptual difference between geometry descriptions in Computer-Aided Design (CAD) systems and particle transport Monte Carlo (MC) codes, direct conversion of detector geometry in either direction is not feasible. An original set of tools has been developed for building a GEANT4/ROOT-compatible geometry in the CATIA CAD system and exchanging it with the mentioned MC packages using the GDML file format. A special structure of the CATIA product tree, a wide range of primitives, different types of multiple volume instantiation, and supporting macros have been implemented.

  13. Emergent Hyperbolic Network Geometry.

    PubMed

    Bianconi, Ginestra; Rahmede, Christoph

    2017-02-07

    A large variety of interacting complex systems are characterized by interactions occurring between more than two nodes. These systems are described by simplicial complexes. Simplicial complexes are formed by simplices (nodes, links, triangles, tetrahedra, etc.) that have a natural geometric interpretation. As such, simplicial complexes are widely used in quantum gravity approaches that involve a discretization of spacetime. Here, by extending our knowledge of growing complex networks to growing simplicial complexes, we investigate the nature of the emergent geometry of complex networks and explore whether this geometry is hyperbolic. Specifically, we show that a hyperbolic network geometry emerges spontaneously from models of growing simplicial complexes that are purely combinatorial. The statistical and geometrical properties of the growing simplicial complexes strongly depend on their dimensionality and at the same time display the major universal properties of real complex networks (scale-free degree distribution, small-world character, and communities). Interestingly, when the network dynamics includes a heterogeneous fitness of the faces, the growing simplicial complex can undergo phase transitions that are reflected by relevant changes in the network geometry.

  14. Emergent Hyperbolic Network Geometry

    NASA Astrophysics Data System (ADS)

    Bianconi, Ginestra; Rahmede, Christoph

    2017-02-01

    A large variety of interacting complex systems are characterized by interactions occurring between more than two nodes. These systems are described by simplicial complexes. Simplicial complexes are formed by simplices (nodes, links, triangles, tetrahedra, etc.) that have a natural geometric interpretation. As such, simplicial complexes are widely used in quantum gravity approaches that involve a discretization of spacetime. Here, by extending our knowledge of growing complex networks to growing simplicial complexes, we investigate the nature of the emergent geometry of complex networks and explore whether this geometry is hyperbolic. Specifically, we show that a hyperbolic network geometry emerges spontaneously from models of growing simplicial complexes that are purely combinatorial. The statistical and geometrical properties of the growing simplicial complexes strongly depend on their dimensionality and at the same time display the major universal properties of real complex networks (scale-free degree distribution, small-world character, and communities). Interestingly, when the network dynamics includes a heterogeneous fitness of the faces, the growing simplicial complex can undergo phase transitions that are reflected by relevant changes in the network geometry.

  15. Emergent Hyperbolic Network Geometry

    PubMed Central

    Bianconi, Ginestra; Rahmede, Christoph

    2017-01-01

    A large variety of interacting complex systems are characterized by interactions occurring between more than two nodes. These systems are described by simplicial complexes. Simplicial complexes are formed by simplices (nodes, links, triangles, tetrahedra, etc.) that have a natural geometric interpretation. As such, simplicial complexes are widely used in quantum gravity approaches that involve a discretization of spacetime. Here, by extending our knowledge of growing complex networks to growing simplicial complexes, we investigate the nature of the emergent geometry of complex networks and explore whether this geometry is hyperbolic. Specifically, we show that a hyperbolic network geometry emerges spontaneously from models of growing simplicial complexes that are purely combinatorial. The statistical and geometrical properties of the growing simplicial complexes strongly depend on their dimensionality and at the same time display the major universal properties of real complex networks (scale-free degree distribution, small-world character, and communities). Interestingly, when the network dynamics includes a heterogeneous fitness of the faces, the growing simplicial complex can undergo phase transitions that are reflected by relevant changes in the network geometry. PMID:28167818

  16. P ≠NP Millenium-Problem(MP) TRIVIAL Physics Proof Via NATURAL TRUMPS Artificial-``Intelligence'' Via: Euclid Geometry, Plato Forms, Aristotle Square-of-Opposition, Menger Dimension-Theory Connections!!! NO Computational-Complexity(CC)/ANYthing!!!: Geometry!!!

    NASA Astrophysics Data System (ADS)

    Clay, London; Menger, Karl; Rota, Gian-Carlo; Euclid, Alexandria; Siegel, Edward

    P ≠ NP MP proof is by computer-"science"/SEANCE(!!!)(CS) computational-"intelligence" lingo jargonial-obfuscation (JO) NATURAL-Intelligence (NI) DISambiguation! CS P =(?)= NP MEANS (Deterministic)(PC) =(?)= (Non-D)(PC), i.e. D(P) =(?)= N(P). For inclusion (equality) vs. exclusion (inequality), the irrelevant (P) simply cancels!!! (Equally any/all other CCs IF both sides identical). Crucial question left: (D) =(?)= (ND), i.e. D =(?)= N. Algorithmics [Sipser, Intro. Thy. Comp. ('97), p. 49, Fig. 1.15]!!!

  17. Helical Gears with Circular Arc Teeth: Generation, Geometry, Precision and Adjustment to Errors, Computer Aided Simulation of Conditions of Meshing, and Bearing Contact.

    DTIC Science & Technology

    1987-10-01

    Helical Gears With Circular Arc Teeth: Generation, Geometry, Precision and Adjustment to Errors, Computer Aided Simulation of Conditions of Meshing, and Bearing Contact. F. L. Litvin et al., University of Illinois at Chicago, October 1987. NASA Contractor Report 4089 / AVSCOM Technical Report 87-C-18.

  18. Computer-aided analysis of Landsat-1 MSS data - A comparison of three approaches, including a 'modified clustering' approach

    NASA Technical Reports Server (NTRS)

    Fleming, M. D.; Berkebile, J. S.; Hoffer, R. M.

    1975-01-01

    Three approaches for analyzing Landsat-1 data from Ludwig Mountain in the San Juan Mountain range in Colorado are considered. In the 'supervised' approach the analyst selects areas of known spectral cover types and specifies these to the computer as training fields. Statistics are obtained for each cover type category and the data are classified. Such classifications are called 'supervised' because the analyst has defined specific areas of known cover types. The second approach uses a clustering algorithm which divides the entire training area into a number of spectrally distinct classes. Because the analyst need not define particular portions of the data for use but has only to specify the number of spectral classes into which the data is to be divided, this classification is called 'nonsupervised'. A hybrid method which selects training areas of known cover type but then uses the clustering algorithm to refine the data into a number of unimodal spectral classes is called the 'modified-supervised' approach.

  19. Non-invasive computation of aortic pressure maps: a phantom-based study of two approaches

    NASA Astrophysics Data System (ADS)

    Delles, Michael; Schalck, Sebastian; Chassein, Yves; Müller, Tobias; Rengier, Fabian; Speidel, Stefanie; von Tengg-Kobligk, Hendrik; Kauczor, Hans-Ulrich; Dillmann, Rüdiger; Unterhinninghofen, Roland

    2014-03-01

    Patient-specific blood pressure values in the human aorta are an important parameter in the management of cardiovascular diseases. A direct measurement of these values is only possible by invasive catheterization at a limited number of measurement sites. To overcome these drawbacks, two non-invasive approaches to computing patient-specific relative aortic blood pressure maps throughout the entire aortic vessel volume are investigated by our group. The first approach uses computations from complete time-resolved, three-dimensional flow velocity fields acquired by phase-contrast magnetic resonance imaging (PC-MRI), whereas the second approach relies on computational fluid dynamics (CFD) simulations with ultrasound-based boundary conditions. A detailed evaluation of these computational methods under realistic conditions is necessary in order to investigate their overall robustness and accuracy as well as their sensitivity to certain algorithmic parameters. We present a comparative study of the two blood pressure computation methods in an experimental phantom setup, which mimics a simplified thoracic aorta. The comparative analysis includes the investigation of the impact of algorithmic parameters on the MRI-based blood pressure computation and the impact of extracting pressure maps in a voxel grid from the CFD simulations. Overall, a very good agreement between the results of the two computational approaches can be observed despite the fact that both methods used completely separate measurements as input data. Therefore, the comparative study of the presented work indicates that both non-invasive pressure computation methods show excellent robustness and accuracy and can therefore be used for research purposes in the management of cardiovascular diseases.
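
    For context, relative pressure mapping from PC-MRI velocity data conventionally evaluates the pressure gradient from the Navier-Stokes momentum balance and integrates it spatially; whether the authors' pipeline includes additional terms is not stated in the abstract.

```latex
% Pressure gradient from the measured velocity field v (incompressible flow):
\nabla p \;=\; -\rho \left( \frac{\partial \mathbf{v}}{\partial t}
        + (\mathbf{v} \cdot \nabla)\,\mathbf{v} \right)
        + \mu \, \nabla^{2}\mathbf{v}
% Spatial integration yields relative (not absolute) pressure maps,
% consistent with the "relative aortic blood pressure maps" above.
```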

  20. Integral geometry and holography

    DOE PAGES

    Czech, Bartlomiej; Lamprou, Lampros; McCandlish, Samuel; ...

    2015-10-27

    We present a mathematical framework which underlies the connection between information theory and the bulk spacetime in the AdS3/CFT2 correspondence. A key concept is kinematic space: an auxiliary Lorentzian geometry whose metric is defined in terms of conditional mutual informations and which organizes the entanglement pattern of a CFT state. When the field theory has a holographic dual obeying the Ryu-Takayanagi proposal, kinematic space has a direct geometric meaning: it is the space of bulk geodesics studied in integral geometry. Lengths of bulk curves are computed by kinematic volumes, giving a precise entropic interpretation of the length of any bulk curve. We explain how basic geometric concepts -- points, distances and angles -- are reflected in kinematic space, allowing one to reconstruct a large class of spatial bulk geometries from boundary entanglement entropies. In this way, kinematic space translates between information theoretic and geometric descriptions of a CFT state. As an example, we discuss in detail the static slice of AdS3 whose kinematic space is two-dimensional de Sitter space.
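
    The statement that "lengths of bulk curves are computed by kinematic volumes" is a Crofton-type relation; schematically (overall normalization deliberately left as a proportionality, so this is our paraphrase rather than the paper's exact formula):

```latex
% Length of a bulk curve \gamma as an integral over kinematic space K,
% weighted by the number of intersections n_\gamma with each geodesic:
\frac{\mathrm{length}(\gamma)}{4G} \;\propto\; \int_{K} n_{\gamma} \, \omega,
\qquad
\omega \;=\; \frac{\partial^{2} S(u,v)}{\partial u \, \partial v} \, du \wedge dv,
% with S(u,v) the entanglement entropy of the boundary interval (u,v),
% which is how conditional mutual information enters the metric.
```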

  1. Integral geometry and holography

    SciTech Connect

    Czech, Bartlomiej; Lamprou, Lampros; McCandlish, Samuel; Sully, James

    2015-10-27

    We present a mathematical framework which underlies the connection between information theory and the bulk spacetime in the AdS3/CFT2 correspondence. A key concept is kinematic space: an auxiliary Lorentzian geometry whose metric is defined in terms of conditional mutual informations and which organizes the entanglement pattern of a CFT state. When the field theory has a holographic dual obeying the Ryu-Takayanagi proposal, kinematic space has a direct geometric meaning: it is the space of bulk geodesics studied in integral geometry. Lengths of bulk curves are computed by kinematic volumes, giving a precise entropic interpretation of the length of any bulk curve. We explain how basic geometric concepts -- points, distances and angles -- are reflected in kinematic space, allowing one to reconstruct a large class of spatial bulk geometries from boundary entanglement entropies. In this way, kinematic space translates between information theoretic and geometric descriptions of a CFT state. As an example, we discuss in detail the static slice of AdS3 whose kinematic space is two-dimensional de Sitter space.

  2. Reducing the computational complexity of information theoretic approaches for reconstructing gene regulatory networks.

    PubMed

    Qiu, Peng; Gentles, Andrew J; Plevritis, Sylvia K

    2010-02-01

    Information theoretic approaches are increasingly being used for reconstructing regulatory networks from microarray data. These approaches start by computing the pairwise mutual information (MI) between all gene pairs. The resulting MI matrix is then manipulated to identify regulatory relationships. A barrier to these approaches is the time-consuming step of computing the MI matrix. We present a method to reduce this computation time. We apply spectral analysis to re-order the genes, so that genes that share regulatory relationships are more likely to be placed close to each other. Then, using a "sliding window" approach with appropriate window size and step size, we compute the MI for the genes within the sliding window, and the remainder is assumed to be zero. Using both simulated data and microarray data, we demonstrate that our method does not incur performance loss in regions of high-precision and low-recall, while the computational time is significantly lowered. The proposed method can be used with any method that relies on the mutual information to reconstruct networks.
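
    A minimal sketch of the scheme as described: spectrally reorder the genes so that likely regulatory partners sit near each other, then evaluate MI only inside a sliding window. The affinity construction and the histogram MI estimator below are generic stand-ins, not necessarily the paper's choices.

```python
# Sliding-window mutual information after spectral reordering of genes.
import numpy as np
from scipy.sparse.csgraph import laplacian

def mi(x, y, bins=8):
    """Histogram estimate of mutual information between two profiles."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(1), pxy.sum(0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

def windowed_mi(expr, window=50, step=25):
    """expr: genes x samples matrix; MI computed only within windows."""
    A = np.abs(np.corrcoef(expr))                 # gene-gene affinity
    L = laplacian(A, normed=True)
    fiedler = np.linalg.eigh(L)[1][:, 1]          # 2nd-smallest eigenvector
    order = np.argsort(fiedler)                   # spectral reordering
    n = expr.shape[0]
    M = np.zeros((n, n))                          # entries outside windows stay 0
    for s in range(0, n - window + 1, step):
        idx = order[s:s + window]
        for i in range(len(idx)):
            for j in range(i + 1, len(idx)):
                a, b = idx[i], idx[j]
                M[a, b] = M[b, a] = mi(expr[a], expr[b])
    return M
```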

  3. An Approach to Integrate a Space-Time GIS Data Model with High Performance Computers

    SciTech Connect

    Wang, Dali; Zhao, Ziliang; Shaw, Shih-Lung

    2011-01-01

    In this paper, we describe an approach to integrating a Space-Time GIS data model on a high performance computing platform. The Space-Time GIS data model was developed in a desktop computing environment. We use the Space-Time GIS data model to generate a GIS module, which organizes a series of remote sensing data. We are in the process of porting the GIS module into an HPC environment, in which the GIS modules handle large datasets directly via a parallel file system. Although this is an ongoing project, the authors hope this effort can inspire further discussions on the integration of GIS on high performance computing platforms.

  4. Decreasing Computer Anxiety and Increasing Computer Usage among Early Childhood Education Majors through a Hands-On Approach in a Nonthreatening Environment.

    ERIC Educational Resources Information Center

    Castleman, Jacquelyn B.

    This practicum was designed to lessen the computer anxiety of early childhood education majors enrolled in General Curriculum or General Methods courses, to assist them in learning more about computer applications, and to increase the amount of time spent using computers. Weekly guidelines were given to the students, and a hands-on approach was…

  5. Complex three dimensional modelling of porous media using high performance computing and multi-scale incompressible approach

    NASA Astrophysics Data System (ADS)

    Martin, R.; Orgogozo, L.; Noiriel, C. N.; Guibert, R.; Golfier, F.; Debenest, G.; Quintard, M.

    2013-05-01

    In the context of biofilm growth in porous media, we developed high performance computing tools to study the impact of biofilms on fluid transport through the pores of a solid matrix. Biofilms are consortia of micro-organisms that develop in polymeric extracellular substances generally located at fluid-solid interfaces, such as pore interfaces in a water-saturated porous medium. Several applications of biofilms in porous media are encountered, for instance in bio-remediation methods that allow the dissolution of organic pollutants. Many theoretical studies have been done on the resulting effective properties of these modified media ([1], [2], [3]), but the bio-colonized porous media under consideration are mainly described as simplified theoretical media (stratified media, cubic networks of spheres, ...). Recent experimental advances have provided tomography images of bio-colonized porous media which allow us to observe realistic biofilm micro-structures inside the porous media [4]. To solve the closure systems of equations related to upscaling procedures in realistic porous media, we solve the velocity field of fluids through pores on complex geometries that are described with a huge number of cells (up to billions). Calculations are made on a realistic 3D sample geometry obtained by X-ray micro-tomography. The cell volumes come from a percolation experiment performed to estimate the impact of precipitation processes on the properties of fluid transport phenomena in porous media [5]. Average permeabilities of the sample are obtained from the velocities by using MPI-based high performance computing on up to 1000 processors. Steady-state Stokes equations are solved using a finite-volume approach. Relaxation pre-conditioning is introduced to accelerate the code further. Good weak and strong scaling is reached, with results obtained in hours instead of weeks. Acceleration factors of 20 up to 40 can be reached. Tens of geometries can now be
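
    The pore-scale problem solved on each voxelized geometry is steady incompressible Stokes flow; the sample permeability then follows from volume averaging via Darcy's law.

```latex
% Steady incompressible Stokes flow in the pore space:
\mu \, \nabla^{2}\mathbf{u} \;=\; \nabla p, \qquad \nabla \cdot \mathbf{u} = 0
% Volume-averaged Darcy closure giving the permeability K:
\langle \mathbf{u} \rangle \;=\; -\frac{K}{\mu} \, \nabla \langle p \rangle
```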

  6. Linking Individual Learning Styles to Approach-Avoidance Motivational Traits and Computational Aspects of Reinforcement Learning.

    PubMed

    Aberg, Kristoffer Carl; Doell, Kimberly C; Schwartz, Sophie

    2016-01-01

    Learning how to gain rewards (approach learning) and avoid punishments (avoidance learning) is fundamental for everyday life. While individual differences in approach and avoidance learning styles have been related to genetics and aging, the contribution of personality factors, such as traits, remains undetermined. Moreover, little is known about the computational mechanisms mediating differences in learning styles. Here, we used a probabilistic selection task with positive and negative feedback, in combination with computational modelling, to show that individuals displaying better approach (vs. avoidance) learning scored higher on measures of approach (vs. avoidance) trait motivation but, paradoxically, also displayed reduced learning speed following positive (vs. negative) outcomes. These data suggest that learning different types of information depends on the associated reward values and internal motivational drives, possibly determined by personality traits.
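
    The abstract does not spell out the model, but asymmetric learning from positive versus negative outcomes is commonly captured by a Q-learner with two learning rates; the sketch below is that generic model, not the authors' fitted one.

```python
# Q-learning with separate learning rates for positive and negative
# prediction errors, a standard way to model approach/avoidance asymmetry.
import numpy as np

rng = np.random.default_rng(0)

def simulate(alpha_pos, alpha_neg, beta=3.0, trials=200, p_reward=(0.8, 0.2)):
    q = np.zeros(2)                                # action values
    for _ in range(trials):
        e = np.exp(beta * q)
        p = e / e.sum()                            # softmax action selection
        a = rng.choice(2, p=p)
        r = 1.0 if rng.random() < p_reward[a] else -1.0
        delta = r - q[a]                           # reward prediction error
        q[a] += (alpha_pos if delta > 0 else alpha_neg) * delta
    return q

# An "approach learner" (alpha_pos > alpha_neg) vs. an "avoidance learner"
print(simulate(0.3, 0.05), simulate(0.05, 0.3))
```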

  7. Linking Individual Learning Styles to Approach-Avoidance Motivational Traits and Computational Aspects of Reinforcement Learning

    PubMed Central

    Aberg, Kristoffer Carl; Doell, Kimberly C.; Schwartz, Sophie

    2016-01-01

    Learning how to gain rewards (approach learning) and avoid punishments (avoidance learning) is fundamental for everyday life. While individual differences in approach and avoidance learning styles have been related to genetics and aging, the contribution of personality factors, such as traits, remains undetermined. Moreover, little is known about the computational mechanisms mediating differences in learning styles. Here, we used a probabilistic selection task with positive and negative feedback, in combination with computational modelling, to show that individuals displaying better approach (vs. avoidance) learning scored higher on measures of approach (vs. avoidance) trait motivation but, paradoxically, also displayed reduced learning speed following positive (vs. negative) outcomes. These data suggest that learning different types of information depends on the associated reward values and internal motivational drives, possibly determined by personality traits. PMID:27851807

  8. An approach to experimental evaluation of real-time fault-tolerant distributed computing schemes

    NASA Technical Reports Server (NTRS)

    Kim, K. H.

    1989-01-01

    A testbed-based approach to the evaluation of fault-tolerant distributed computing schemes is discussed. The approach is based on experimental incorporation of system structuring and design techniques into real-time distributed-computing testbeds centered around tightly coupled microcomputer networks. The effectiveness of this approach has been experimentally confirmed. Primary advantages of this approach include the accuracy of the timing and logical-complexity data and the degree of assurance of the practical effectiveness of the scheme evaluated. Various design issues encountered in the course of establishing the network testbed facilities are discussed, along with their augmentation to support some experiments. The shortcomings of the testbeds are also discussed together with the desired extensions of the testbeds.

  9. Integrated geometry and grid generation system for complex configurations

    NASA Technical Reports Server (NTRS)

    Akdag, Vedat; Wulf, Armin

    1992-01-01

    A grid generation system was developed that enables grid generation for complex configurations. The system called ICEM/CFD is described and its role in computational fluid dynamics (CFD) applications is presented. The capabilities of the system include full computer aided design (CAD), grid generation on the actual CAD geometry definition using robust surface projection algorithms, interfacing easily with known CAD packages through common file formats for geometry transfer, grid quality evaluation of the volume grid, coupling boundary condition set-up for block faces with grid topology generation, multi-block grid generation with or without point continuity and block to block interface requirement, and generating grid files directly compatible with known flow solvers. The interactive and integrated approach to the problem of computational grid generation not only substantially reduces manpower time but also increases the flexibility of later grid modifications and enhancements which is required in an environment where CFD is integrated into a product design cycle.

  10. A Computational Approach for Near-Optimal Path Planning and Guidance for Systems with Nonholonomic Contraints

    DTIC Science & Technology

    2010-04-14

    The report develops novel methods for discretization based on Legendre-Gauss and Legendre-Gauss-Radau quadrature points. Using this approach, the finite-dimensional approximation is kept low-dimensional, potentially enabling near real-time computation. Related publication: "Costate Estimation of Finite-Horizon and Infinite-Horizon Optimal Control Problems Using a Radau Pseudospectral Method," Computational Optimization and

  11. Computational Modeling Approaches for Studying Transverse Combustion Instability in a Multi-Element Injector (Briefing Charts)

    DTIC Science & Technology

    2015-05-01

    Briefing charts (May 2015 - June 2015) by Matt Harvazinski, Kevin Shipley, Doug Talley, and Venke Sankaran. The charts describe computational modeling approaches for studying transverse combustion instability in a multi-element injector; one approach applies an artificial forcing term whose amplitude can be adjusted so that the effect of the transverse instability on the center study element can be examined.

  12. Capstone: A Geometry-Centric Platform to Enable Physics-Based Simulation and Design of Systems

    DTIC Science & Technology

    2015-10-05

    Capstone is a geometry-centric platform enabling physics-based simulation and design of systems, developed by multiple organizations as part of the DoD HPCMP CREATE(TM) Program [6]. It combines geometric modeling, mesh generation, and computer-aided-engineering capabilities, including access to geometry at runtime for scalable and accurate a-posteriori mesh adaptation, with meshing driven by the physics and the requirements of the discretization approach used. While computer-aided-design (CAD) systems have been used

  13. Computer-aided fit testing: an approach for examining the user/equipment interface

    NASA Astrophysics Data System (ADS)

    Corner, Brian D.; Beecher, Robert M.; Paquette, Steven

    1997-03-01

    Developments in laser digitizing technology now make it possible to capture very accurate 3D images of the surface of the human body in less than 20 seconds. Applications for the images range from animation of movie characters to the design and visualization of clothing and individual equipment (CIE). In this paper we focus on modeling the user/equipment interface. Defining the relative geometry between user and equipment provides a better understanding of equipment performance, and can make the design cycle more efficient. Computer-aided fit testing (CAFT) is the application of graphical and statistical techniques to visualize and quantify the human/equipment interface in virtual space. In short, CAFT seeks to measure the relative geometry between a user and his or her equipment. The design cycle changes with the introduction of CAFT: now some evaluation may be done in the CAD environment prior to prototyping. CAFT may be applied in two general ways: (1) to aid in the creation of new equipment designs and (2) to evaluate current designs for compliance to performance specifications. We demonstrate the application of CAFT with two examples. First, we show how a prototype helmet may be evaluated for fit, and second we demonstrate how CAFT may be used to measure body armor coverage.

  14. Numerical characterization of nonlinear dynamical systems using parallel computing: The role of GPUS approach

    NASA Astrophysics Data System (ADS)

    Fazanaro, Filipe I.; Soriano, Diogo C.; Suyama, Ricardo; Madrid, Marconi K.; Oliveira, José Raimundo de; Muñoz, Ignacio Bravo; Attux, Romis

    2016-08-01

    The characterization of nonlinear dynamical systems and their attractors in terms of invariant measures, basins of attraction and the structure of their vector fields usually outlines a task strongly related to the underlying computational cost. In this work, the practical aspects related to the use of parallel computing - especially the use of Graphics Processing Units (GPUs) and of the Compute Unified Device Architecture (CUDA) - are reviewed and discussed in the context of nonlinear dynamical systems characterization. In this work such characterization is performed by obtaining both local and global Lyapunov exponents for the classical forced Duffing oscillator. The local divergence measure was employed in the computation of the Lagrangian Coherent Structures (LCSs), revealing the general organization of the flow according to the obtained separatrices, while the global Lyapunov exponents were used to characterize the attractors obtained under one or more bifurcation parameters. These simulation sets also illustrate the required computation time and speedup gains provided by different parallel computing strategies, justifying the employment and the relevance of GPUs and CUDA in such an extensive numerical approach. Finally, more than simply providing an overview supported by a representative set of simulations, this work also aims to be a unified introduction to the use of the mentioned parallel computing tools in the context of nonlinear dynamical systems, providing codes and examples to be executed in MATLAB using the CUDA environment - something that is usually fragmented across different scientific communities and restricted to specialists in parallel computing strategies.
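
    As a single-trajectory, CPU-only illustration of the global measure mentioned above, the largest Lyapunov exponent of the forced Duffing oscillator can be estimated with the classic two-trajectory renormalization scheme. The parameters below are a commonly cited chaotic regime, not necessarily those used in the paper, and the serial loop is exactly the kind of per-trajectory work the authors offload to GPUs.

```python
# Largest Lyapunov exponent of the forced Duffing oscillator
#   x'' + d x' + a x + b x^3 = g cos(w t)
# via two nearby trajectories with periodic renormalization of their
# separation (Benettin-style scheme).
import numpy as np

def duffing(state, t, d=0.3, a=-1.0, b=1.0, g=0.5, w=1.2):
    x, v = state
    return np.array([v, -d * v - a * x - b * x**3 + g * np.cos(w * t)])

def rk4(f, s, t, h):
    k1 = f(s, t); k2 = f(s + h/2*k1, t + h/2)
    k3 = f(s + h/2*k2, t + h/2); k4 = f(s + h*k3, t + h)
    return s + h/6 * (k1 + 2*k2 + 2*k3 + k4)

def largest_lyapunov(steps=100_000, h=0.005, d0=1e-8):
    s = np.array([0.1, 0.0])
    sp = s + np.array([d0, 0.0])                  # perturbed companion
    total, t = 0.0, 0.0
    for _ in range(steps):
        s, sp = rk4(duffing, s, t, h), rk4(duffing, sp, t, h)
        t += h
        dist = np.linalg.norm(sp - s)
        total += np.log(dist / d0)
        sp = s + (sp - s) * (d0 / dist)           # renormalize separation
    return total / (steps * h)

print(largest_lyapunov())                          # > 0 suggests chaos
```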

  15. A dual-energy approach for improvement of the measurement consistency in computed tomography

    NASA Astrophysics Data System (ADS)

    Jansson, Anton; Pejryd, Lars

    2016-11-01

    Computed tomography is increasingly adopted by industry for metrological and material evaluation. The technology enables new measurement possibilities, while also challenging old measurement methods in their established territories. There are, however, uncertainties related to the computed tomography method. Investigation of multi-material components, in particular with varying material thickness, can result in unreliable measurements. In this paper, the effects of multiple materials and differing material thickness on computed tomography measurement consistency have been studied. The aim of the study was to identify measurement inconsistencies and attempt to correct them with a dual-energy computed tomography approach. In this pursuit, a multi-material phantom was developed, containing reliable measurement points and customizability with regard to material combinations. A dual-energy method was developed and implemented using sequential acquisition and pre-reconstruction fusing of projections. It was found that measurements made on the multi-material phantom with a single computed tomography scan were highly inconsistent. It was also found that the dual-energy approach was able to reduce the measurement inconsistencies. However, more work is required on the automation of the dual-energy approach presented in this paper, since it is highly operator-dependent.
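
    The pipeline is described only as sequential acquisition followed by pre-reconstruction fusing of the two projection sets; the simplest version of that fusing step is a weighted blend, sketched below. The fixed global weight w is our simplification - the paper's fusion rule may be spatially varying or operator-tuned.

```python
# Schematic pre-reconstruction fusion of dual-energy CT projections.
import numpy as np

def fuse_projections(proj_low_kv, proj_high_kv, w=0.5):
    """Blend two co-registered projection stacks (angles x rows x cols).

    proj_low_kv / proj_high_kv: projections from the two sequential scans.
    Returns the fused stack that would be fed to a single reconstruction.
    """
    p_lo = np.asarray(proj_low_kv, dtype=float)
    p_hi = np.asarray(proj_high_kv, dtype=float)
    return w * p_lo + (1.0 - w) * p_hi
```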

  16. Poisson-Riemannian geometry

    NASA Astrophysics Data System (ADS)

    Beggs, Edwin J.; Majid, Shahn

    2017-04-01

    We study noncommutative bundles and Riemannian geometry at the semiclassical level of first order in a deformation parameter λ, using a functorial approach. This leads us to field equations of 'Poisson-Riemannian geometry' between the classical metric, the Poisson bracket and a certain Poisson-compatible connection needed as initial data for the quantisation of the differential structure. We use such data to define a functor Q to O(λ²) from the monoidal category of all classical vector bundles equipped with connections to the monoidal category of bimodules equipped with bimodule connections over the quantised algebra. This is used to 'semiquantise' the wedge product of the exterior algebra and, in the Riemannian case, the metric and the Levi-Civita connection in the sense of constructing a noncommutative geometry to O(λ²). We solve our field equations for the Schwarzschild black-hole metric under the assumption of spherical symmetry and classical dimension, finding a unique solution and the necessity of nonassociativity at order λ², which is similar to previous results for quantum groups. The paper also includes a nonassociative hyperboloid, a nonassociative fuzzy sphere and our previous algebraic bicrossproduct model.

  17. Noncommutative geometry and arithmetics

    NASA Astrophysics Data System (ADS)

    Almeida, P.

    2009-09-01

    We intend to illustrate how the methods of noncommutative geometry are currently used to tackle problems in class field theory. Noncommutative geometry enables one to think geometrically in situations in which the classical notion of space formed of points is no longer adequate, and thus a “noncommutative space” is needed; a full account of this approach is given in [3] by its main contributor, Alain Connes. The class field theory, i.e., number theory within the realm of Galois theory, is undoubtedly one of the main achievements in arithmetics, leading to an important algebraic machinery; for a modern overview, see [23]. The relationship between noncommutative geometry and number theory is one of the many themes treated in [22, 7-9, 11], a small part of which we will try to put in a more down-to-earth perspective, illustrating through an example what should be called an “application of physics to mathematics,” and our only purpose is to introduce nonspecialists to this beautiful area.

  18. Relationships among Taiwanese Children's Computer Game Use, Academic Achievement and Parental Governing Approach

    ERIC Educational Resources Information Center

    Yeh, Duen-Yian; Cheng, Ching-Hsue

    2016-01-01

    This study examined the relationships among children's computer game use, academic achievement and parental governing approach to propose probable answers for the doubts of Taiwanese parents. 355 children (ages 11-14) were randomly sampled from 20 elementary schools in a typically urbanised county in Taiwan. Questionnaire survey (five questions)…

  19. The Computational Experiment and Its Effects on Approach to Learning and Beliefs on Physics

    ERIC Educational Resources Information Center

    Psycharis, Sarantos

    2011-01-01

    Contemporary instructional approaches expect students to be active producers of knowledge. This leads to the need for creation of instructional tools and tasks that can offer students opportunities for active learning. This study examines the effect of a computational experiment as an instructional tool for Grade 12 students, using a computer…

  20. Laplace transform approach for solving integral equations using computer algebra system

    NASA Astrophysics Data System (ADS)

    Paneva-Konovska, Jordanka; Nikolova, Yanka

    2016-12-01

    The Laplace transform method, along with the Computer Algebra System (CAS) "Maple" v. 13, is applied with great success to solving a class of integral equations of arbitrary order, including fractional-order integral equations. Combining both powerful approaches allows students to master the material more quickly, enjoyably, and thoroughly.
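
    The same technique can be sketched in a freely available CAS; below, SymPy stands in for Maple on a convolution-type Volterra equation (our example, not one from the paper), where the transform turns the integral equation into algebra, Y = F + K·Y.

```python
# Laplace-transform solution of  y(t) = t + integral_0^t (t - u) y(u) du.
# By the convolution theorem the equation becomes Y = F + K*Y in the
# transform domain, so Y = F / (1 - K).
import sympy as sp

t, s = sp.symbols('t s', positive=True)
F = sp.laplace_transform(t, t, s)[0]      # transform of f(t) = t   -> 1/s**2
K = sp.laplace_transform(t, t, s)[0]      # kernel k(t) = t         -> 1/s**2
Y = sp.simplify(F / (1 - K))              # algebraic solution: 1/(s**2 - 1)
y = sp.inverse_laplace_transform(Y, s, t)
print(y)                                  # sinh(t)*Heaviside(t)
```

    Substituting y(t) = sinh t back into the equation confirms the result, since the convolution of t with sinh t equals sinh t - t.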