Science.gov

Sample records for computational geometry approach

  1. Geometry in the Computer Age.

    ERIC Educational Resources Information Center

    Scott, Paul

    1988-01-01

    Discusses the use of computer graphics in the teaching of geometry. Describes five types of geometry: Euclidean geometry, transformation geometry, coordinate geometry, three-dimensional geometry, and geometry of convex sets. (YP)

  2. A computational approach to modeling cellular-scale blood flow in complex geometry

    NASA Astrophysics Data System (ADS)

    Balogh, Peter; Bagchi, Prosenjit

    2017-04-01

    We present a computational methodology for modeling cellular-scale blood flow in arbitrary and highly complex geometry. Our approach is based on immersed-boundary methods, which allow modeling flows in arbitrary geometry while resolving the large deformation and dynamics of every blood cell with high fidelity. The present methodology seamlessly integrates different modeling components dealing with stationary rigid boundaries of complex shape, moving rigid bodies, and highly deformable interfaces governed by nonlinear elasticity. Thus it enables us to simulate 'whole' blood suspensions flowing through physiologically realistic microvascular networks that are characterized by multiple bifurcating and merging vessels, as well as geometrically complex lab-on-chip devices. The focus of the present work is on the development of a versatile numerical technique that is able to consider deformable cells and rigid bodies flowing in three-dimensional arbitrarily complex geometries over a diverse range of scenarios. After describing the methodology, a series of validation studies are presented against analytical theory, experimental data, and previous numerical results. Then, the capability of the methodology is demonstrated by simulating flows of deformable blood cells and heterogeneous cell suspensions in both physiologically realistic microvascular networks and geometrically intricate microfluidic devices. It is shown that the methodology can predict several complex microhemodynamic phenomena observed in vascular networks and microfluidic devices. The present methodology is robust and versatile, and has the potential to scale up to very large microvascular networks at organ levels.

  3. Computer-Aided Geometry Modeling

    NASA Technical Reports Server (NTRS)

    Shoosmith, J. N. (Compiler); Fulton, R. E. (Compiler)

    1984-01-01

    Techniques in computer-aided geometry modeling and their application are addressed. Mathematical modeling, solid geometry models, management of geometric data, development of geometry standards, and interactive and graphic procedures are discussed. The applications include aeronautical and aerospace structures design, fluid flow modeling, and gas turbine design.

  4. A computational approach to continuum damping of Alfvén waves in two and three-dimensional geometry

    SciTech Connect

    Koenies, Axel; Kleiber, Ralf

    2012-12-15

    While the usual way of calculating continuum damping of global Alfvén modes is the introduction of a small artificial resistivity, we present a computational approach to the problem based on a suitable path of integration in the complex plane. This approach is implemented by the Riccati shooting method, and it is shown that it can be transferred to the Galerkin method used in three-dimensional ideal magneto-hydrodynamics (MHD) codes. The new approach turns out to be less expensive with respect to resolution and computation time than the usual one. We present an application to large aspect ratio tokamak and stellarator equilibria retaining only a few Fourier harmonics, and calculate eigenfunctions and continuum damping rates. These may serve as input for kinetic MHD hybrid models, making it possible on the one hand to bypass the problem of singularities on the path of integration and on the other to account for continuum damping.
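
    The core idea of the complex-plane integration path can be illustrated with a toy example (a minimal Python sketch, not the Riccati or Galerkin implementation described above): an integrand with a pole on the real axis, mimicking a continuum resonance, is integrated along a path deformed into the complex plane.

```python
def integrate_along_path(f, points, steps_per_segment=2000):
    """Trapezoidal rule along the piecewise-linear path through `points`
    in the complex plane."""
    total = 0.0 + 0.0j
    for z0, z1 in zip(points, points[1:]):
        dz = (z1 - z0) / steps_per_segment
        for k in range(steps_per_segment):
            a = z0 + k * dz
            total += 0.5 * (f(a) + f(a + dz)) * dz
    return total

# Integrand with a pole at z = 0.5 on the real axis, mimicking a
# continuum resonance on the integration interval [0, 1].
f = lambda z: 1.0 / (z - 0.5)

# Deforming the path into the lower complex half-plane avoids the
# singularity; the result picks up the resonance contribution i*pi
# (the principal value of this integral is zero by symmetry).
path = [0.0 + 0.0j, 0.5 - 0.2j, 1.0 + 0.0j]
result = integrate_along_path(f, path)
print(result)  # ~ (0 + 3.1416j)
```

    The imaginary part that the deformed path picks up plays the role of the damping contribution that a small artificial resistivity would otherwise have to regularize.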

  5. A Comparative Study on the Effectiveness of the Computer Assisted Method and the Interactionist Approach to Teaching Geometry Shapes to Young Children

    ERIC Educational Resources Information Center

    Zaranis, Nicholas; Synodi, Evanthia

    2017-01-01

    The purpose of this study is to compare and evaluate the effectiveness of computer assisted teaching of geometry shapes and an interactionist approach to teaching geometry in kindergarten versus other more traditional teaching methods. Our research compares the improvement of the children's geometrical competence using two teaching approaches. The…

  6. An Automated Approach to Very High Order Aeroacoustic Computations in Complex Geometries

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Goodrich, John W.

    2000-01-01

    Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with methods of very high order in space and time on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid aligned boundaries and to 2nd order for irregular boundaries.
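
    The benefit of carrying derivative (Hermite) data at each grid point can be illustrated at low order; the sketch below is plain cubic Hermite interpolation, far below the >15th-order methods described above, but it shows how storing endpoint derivatives sharpens resolution on a coarse grid.

```python
import math

def hermite_cubic(x0, x1, f0, f1, d0, d1, x):
    """Cubic Hermite interpolant from values and first derivatives
    at the interval endpoints."""
    h = x1 - x0
    t = (x - x0) / h
    h00 = (1 + 2*t) * (1 - t)**2
    h10 = t * (1 - t)**2
    h01 = t**2 * (3 - 2*t)
    h11 = t**2 * (t - 1)
    return h00*f0 + h10*h*d0 + h01*f1 + h11*h*d1

# Resolve one wavelength of sin(2*pi*x) with only 4 intervals by also
# storing the derivative at each grid point (Hermite data).
n = 4
xs = [i / n for i in range(n + 1)]
max_err = 0.0
for k in range(n):
    x0, x1 = xs[k], xs[k + 1]
    for j in range(50):
        x = x0 + (x1 - x0) * j / 49
        approx = hermite_cubic(
            x0, x1,
            math.sin(2*math.pi*x0), math.sin(2*math.pi*x1),
            2*math.pi*math.cos(2*math.pi*x0), 2*math.pi*math.cos(2*math.pi*x1),
            x)
        max_err = max(max_err, abs(approx - math.sin(2*math.pi*x)))
print(max_err)  # ~0.016, versus ~0.3 for linear interpolation on the same grid
```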

  7. A Whirlwind Tour of Computational Geometry.

    ERIC Educational Resources Information Center

    Graham, Ron; Yao, Frances

    1990-01-01

    Described is computational geometry, which uses concepts and results from classical geometry, topology, and combinatorics, as well as standard algorithmic techniques such as sorting and searching, graph manipulation, and linear programming. Also included are special techniques and paradigms. (KR)

  8. A computational approach to continuum damping of Alfvén waves in two and three-dimensional geometry

    NASA Astrophysics Data System (ADS)

    Könies, Axel; Kleiber, Ralf

    2012-12-01

    While the usual way of calculating continuum damping of global Alfvén modes is the introduction of a small artificial resistivity, we present a computational approach to the problem based on a suitable path of integration in the complex plane. This approach is implemented by the Riccati shooting method, and it is shown that it can be transferred to the Galerkin method used in three-dimensional ideal magneto-hydrodynamics (MHD) codes. The new approach turns out to be less expensive with respect to resolution and computation time than the usual one. We present an application to large aspect ratio tokamak and stellarator equilibria retaining only a few Fourier harmonics, and calculate eigenfunctions and continuum damping rates. These may serve as input for kinetic MHD hybrid models, making it possible on the one hand to bypass the problem of singularities on the path of integration and on the other to account for continuum damping.

  9. Computational Approaches to the Determination of the Molecular Geometry of Acrolein in its T_1(n,π*) State

    NASA Astrophysics Data System (ADS)

    McAnally, Michael O.; Hlavacek, Nikolaus C.; Drucker, Stephen

    2012-06-01

    The spectroscopically derived inertial constants for acrolein (propenal) in its T_1(n,π*) state were used to test predictions from a variety of computational methods. One focus was on multiconfigurational methods, such as CASSCF and CASPT2, that are applicable to excited states. We also examined excited-state methods that utilize single reference configurations, including EOM-EE-CCSD and TD-PBE0. Finally, we applied unrestricted ground-state techniques, such as UCCSD(T) and the more economical UPBE0 method, to the T_1(n,π*) excited state under the constraint of C_s symmetry. The unrestricted ground-state methods are applicable because at a planar geometry, the T_1(n,π*) state of acrolein is the lowest-energy state of its spin multiplicity. Each of the above methods was used with a triple zeta quality basis set to optimize the T_1(n,π*) geometry. This procedure resulted in the following sets of inertial constants (cm-1) of acrolein in its T_1(n,π*) state:

      Method        A      B       C
      ------------  -----  ------  ------
      CASPT2(6,5)   1.667  0.1491  0.1368
      CASSCF(6,5)   1.667  0.1491  0.1369
      EOM-EE-CCSD   1.675  0.1507  0.1383
      UCCSD(T)^b    1.668  0.1480  0.1360
      UPBE0         1.699  0.1487  0.1367
      TD-PBE0       1.719  0.1493  0.1374
      Experiment^a  1.662  0.1485  0.1363

    The two multiconfigurational methods produce the same inertial constants, and those constants agree closely with experiment. However, the sets of computed bond lengths differ significantly for the two methods. In the CASSCF calculation, the lengthening of the C=O and C=C bonds and the shortening of the C--C bond are more pronounced than in CASPT2. O. S. Bokareva et al., Int. J. Quant. Chem. 108, 2719 (2008).
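
    The inertial constants quoted above relate to moments of inertia through B = h / (8π²cI). A small sketch of that conversion (the 10.14 amu·Å² moment of inertia is back-computed here for illustration, not a value from the paper):

```python
import math

# Physical constants (SI, CODATA values).
H = 6.62607015e-34       # Planck constant, J*s
C_CM = 2.99792458e10     # speed of light, cm/s
AMU = 1.66053906660e-27  # atomic mass unit, kg

def rotational_constant_cm1(moment_amu_A2):
    """Rotational constant in cm^-1 from a moment of inertia given in
    amu*Angstrom^2, via B = h / (8 * pi^2 * c * I)."""
    I = moment_amu_A2 * AMU * 1e-20  # convert to kg*m^2
    return H / (8 * math.pi**2 * C_CM * I)

# The experimental A constant of 1.662 cm^-1 corresponds to a moment of
# inertia of roughly 10.1 amu*Angstrom^2 about the a-axis:
print(rotational_constant_cm1(10.14))  # ~1.66 cm^-1
```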

  10. A cell-centered Lagrangian finite volume approach for computing elasto-plastic response of solids in cylindrical axisymmetric geometries

    NASA Astrophysics Data System (ADS)

    Sambasivan, Shiv Kumar; Shashkov, Mikhail J.; Burton, Donald E.

    2013-03-01

    A finite volume cell-centered Lagrangian formulation is presented for solving large deformation problems in cylindrical axisymmetric geometries. Since solid materials can sustain significant shear deformation, evolution equations for stress and strain fields are solved in addition to mass, momentum and energy conservation laws. The total strain-rate realized in the material is split into an elastic and plastic response. The elastic and plastic components in turn are modeled using hypo-elastic theory. In accordance with the hypo-elastic model, a predictor-corrector algorithm is employed for evolving the deviatoric component of the stress tensor. A trial elastic deviatoric stress state is obtained by integrating a rate equation, cast in the form of an objective (Jaumann) derivative, based on Hooke's law. The dilatational response of the material is modeled using an equation of state of the Mie-Grüneisen form. The plastic deformation is accounted for via an iterative radial return algorithm constructed from the J2 von Mises yield condition. Several benchmark example problems with non-linear strain hardening and thermal softening yield models are presented. Extensive comparisons with representative Eulerian and Lagrangian hydrocodes in addition to analytical and experimental results are made to validate the current approach.
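
    The radial return step described above can be sketched for the special case of perfect plasticity, where the return to the J2 von Mises yield surface is closed-form; the paper's iterative version additionally handles non-linear strain hardening and thermal softening.

```python
import math

def radial_return(s_trial, yield_stress):
    """Scale a trial deviatoric stress tensor (3x3 nested lists, traceless)
    back to the J2 von Mises yield surface if it lies outside
    (perfect plasticity, no hardening)."""
    # J2 = 0.5 * s:s ; von Mises equivalent stress = sqrt(3 * J2)
    j2 = 0.5 * sum(s_trial[i][j]**2 for i in range(3) for j in range(3))
    seq = math.sqrt(3.0 * j2)
    if seq <= yield_stress:
        return s_trial  # elastic: the trial state is admissible
    scale = yield_stress / seq  # radial return factor
    return [[scale * s_trial[i][j] for j in range(3)] for i in range(3)]

# Uniaxial-like trial deviator exceeding a yield stress of 1.0:
s = [[2.0, 0.0, 0.0],
     [0.0, -1.0, 0.0],
     [0.0, 0.0, -1.0]]
s_ret = radial_return(s, 1.0)
# The returned state sits exactly on the yield surface (seq == yield stress).
```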

  11. Geometry of quantum computation with qutrits.

    PubMed

    Li, Bin; Yu, Zu-Huan; Fei, Shao-Ming

    2013-01-01

    Determining the quantum circuit complexity of a unitary operation is an important problem in quantum computation. By using the mathematical techniques of Riemannian geometry, we investigate the efficient quantum circuits in quantum computation with n qutrits. We show that the optimal quantum circuits are essentially equivalent to the shortest path between two points in a certain curved geometry of SU(3^n). As an example, three-qutrit systems are investigated in detail.

  12. Computing Bisectors in a Dynamic Geometry Environment

    ERIC Educational Resources Information Center

    Botana, Francisco

    2013-01-01

    In this note, an approach combining dynamic geometry and automated deduction techniques is used to study the bisectors between points and curves. Usual teacher constructions for bisectors are discussed, showing that inherent limitations in dynamic geometry software impede their thorough study. We show that the interactive sketching of bisectors…

  14. Quadric solids and computational geometry

    SciTech Connect

    Emery, J.D.

    1980-07-25

    As part of the CAD-CAM development project, this report discusses the mathematics underlying the program QUADRIC, which does computations on objects modeled as Boolean combinations of quadric half-spaces. Topics considered include projective space, quadric surfaces, polars, affine transformations, the construction of solids, shaded images, the inertia tensor, moments, volume, surface integrals, Monte Carlo integration, and stratified sampling. 1 figure.
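
    The Monte Carlo integration mentioned in the topic list can be sketched for a solid modeled as a Boolean combination of quadric half-spaces; the solid below (a unit ball intersected with a half-space) is a hypothetical example, not one from the report.

```python
import random

def inside(p):
    """Solid modeled as a Boolean combination of quadric half-spaces:
    the unit ball x^2 + y^2 + z^2 <= 1 intersected with z >= 0."""
    x, y, z = p
    return x*x + y*y + z*z <= 1.0 and z >= 0.0

def mc_volume(inside, lo, hi, n=200_000, seed=1):
    """Plain (unstratified) Monte Carlo volume estimate over the
    bounding box [lo, hi]^3."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if inside(tuple(rng.uniform(lo, hi) for _ in range(3))))
    return (hi - lo)**3 * hits / n

vol = mc_volume(inside, -1.0, 1.0)
print(vol)  # ~2.094 (analytic half-ball volume is 2*pi/3)
```

    Stratified sampling, also listed above, would subdivide the bounding box and sample each cell separately to reduce the variance of this estimate.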

  15. Classical versus Computer Algebra Methods in Elementary Geometry

    ERIC Educational Resources Information Center

    Pech, Pavel

    2005-01-01

    Computer algebra methods based on results of commutative algebra like Groebner bases of ideals and elimination of variables make it possible to solve complex, elementary and non elementary problems of geometry, which are difficult to solve using a classical approach. Computer algebra methods permit the proof of geometric theorems, automatic…

  16. Computational fluid dynamics using CATIA created geometry

    SciTech Connect

    Gengler, J.E.

    1989-01-01

    A method has been developed to link the geometry definition residing on a CAD/CAM system with a computational fluid dynamics (CFD) tool needed to evaluate aerodynamic designs and requiring the memory capacity of a supercomputer. Requirements for surfaces suitable for CFD analysis are discussed. Techniques for developing surfaces and verifying their smoothness are compared, showing the capability of the CAD/CAM system. The utilization of a CAD/CAM system to create a computational mesh is explained, and the mesh interaction with the geometry and input file preparation for the CFD analysis is discussed.

  17. Computational fluid dynamics using CATIA created geometry

    NASA Astrophysics Data System (ADS)

    Gengler, Jeanne E.

    1989-07-01

    A method has been developed to link the geometry definition residing on a CAD/CAM system with a computational fluid dynamics (CFD) tool needed to evaluate aerodynamic designs and requiring the memory capacity of a supercomputer. Requirements for surfaces suitable for CFD analysis are discussed. Techniques for developing surfaces and verifying their smoothness are compared, showing the capability of the CAD/CAM system. The utilization of a CAD/CAM system to create a computational mesh is explained, and the mesh interaction with the geometry and input file preparation for the CFD analysis is discussed.

  18. Frequency-selective near-field radiative heat transfer between photonic crystal slabs: a computational approach for arbitrary geometries and materials.

    PubMed

    Rodriguez, Alejandro W; Ilic, Ognjen; Bermel, Peter; Celanovic, Ivan; Joannopoulos, John D; Soljačić, Marin; Johnson, Steven G

    2011-09-09

    We demonstrate the possibility of achieving enhanced frequency-selective near-field radiative heat transfer between patterned (photonic-crystal) slabs at designable frequencies and separations, exploiting a general numerical approach for computing heat transfer in arbitrary geometries and materials based on the finite-difference time-domain method. Our simulations reveal a tradeoff between selectivity and near-field enhancement as the slab-slab separation decreases, with the patterned heat transfer eventually reducing to the unpatterned result multiplied by a fill factor (described by a standard proximity approximation). We also find that heat transfer can be further enhanced at selective frequencies when the slabs are brought into a glide-symmetric configuration, a consequence of the degeneracies associated with the nonsymmorphic symmetry group.

  19. Geometry of behavioral spaces: A computational approach to analysis and understanding of agent based models and agent behaviors

    NASA Astrophysics Data System (ADS)

    Cenek, Martin; Dahl, Spencer K.

    2016-11-01

    Systems with non-linear dynamics frequently exhibit emergent system behavior, which is important to find and specify rigorously to understand the nature of the modeled phenomena. Through this analysis, it is possible to characterize phenomena such as how systems assemble or dissipate and what behaviors lead to specific final system configurations. Agent Based Modeling (ABM) is one of the modeling techniques used to study the interaction dynamics between a system's agents and its environment. Although the methodology of ABM construction is well understood and practiced, there are no computational, statistically rigorous, comprehensive tools to evaluate an ABM's execution. Often, a human has to observe an ABM's execution in order to analyze how the ABM functions, identify the emergent processes in the agent's behavior, or study a parameter's effect on the system-wide behavior. This paper introduces a new statistically based framework to automatically analyze agents' behavior, identify common system-wide patterns, and record the probability of agents changing their behavior from one pattern of behavior to another. We use network based techniques to analyze the landscape of common behaviors in an ABM's execution. Finally, we test the proposed framework with a series of experiments featuring increasingly emergent behavior. The proposed framework will allow computational comparison of ABM executions, exploration of a model's parameter configuration space, and identification of the behavioral building blocks in a model's dynamics.
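
    Recording the probability of agents changing from one behavior pattern to another, as described above, amounts to estimating a transition matrix from labeled sequences. A minimal sketch with hypothetical behavior labels (the label names are illustrative, not from the paper):

```python
from collections import Counter, defaultdict

def transition_probabilities(runs):
    """Estimate the probability of switching from one behavior pattern to
    another, from observed label sequences (one sequence per agent)."""
    counts = defaultdict(Counter)
    for labels in runs:
        for a, b in zip(labels, labels[1:]):
            counts[a][b] += 1
    return {a: {b: c / sum(row.values()) for b, c in row.items()}
            for a, row in counts.items()}

# Hypothetical behavior labels for three agents over four time steps:
runs = [
    ["explore", "explore", "cluster", "cluster"],
    ["explore", "cluster", "cluster", "disperse"],
    ["cluster", "cluster", "disperse", "disperse"],
]
probs = transition_probabilities(runs)
print(probs["explore"])  # explore -> cluster with probability 2/3
```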

  20. Computation of three-phase capillary entry pressures and arc menisci configurations in pore geometries from 2D rock images: A combinatorial approach

    NASA Astrophysics Data System (ADS)

    Zhou, Yingfang; Helland, Johan Olav; Hatzignatiou, Dimitrios G.

    2014-07-01

    We present a semi-analytical, combinatorial approach to compute three-phase capillary entry pressures for gas invasion into pore throats with constant cross-sections of arbitrary shapes that are occupied by oil and/or water. For a specific set of three-phase capillary pressures, geometrically allowed gas/oil, oil/water and gas/water arc menisci are determined by moving two circles in opposite directions along the pore/solid boundary for each fluid pair such that the contact angle is defined at the front circular arcs. Intersections of the two circles determine the geometrically allowed arc menisci for each fluid pair. The resulting interfaces are combined systematically to allow for all geometrically possible three-phase configuration changes. The three-phase extension of the Mayer and Stowe - Princen method is adopted to calculate capillary entry pressures for all determined configuration candidates, from which the most favorable gas invasion configuration is determined. The model is validated by comparing computed three-phase capillary entry pressures and corresponding fluid configurations with analytical solutions in idealized triangular star-shaped pores. It is demonstrated that the model accounts for all scenarios that have been analyzed previously in these shapes. Finally, three-phase capillary entry pressures and associated fluid configurations are computed in throat cross-sections extracted from segmented SEM images of Bentheim sandstone. The computed gas/oil capillary entry pressures account for the expected dependence of oil/water capillary pressure in spreading and non-spreading fluid systems at the considered wetting conditions. Because these geometries are irregular and include constrictions, we introduce three-phase displacements that have not been identified previously in pore-network models that are based on idealized pore shapes. 
However, in the limited number of pore geometries considered in this work, we find that the favorable displacements are
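
    For a circular cross-section, the Mayer and Stowe - Princen calculation reduces to the familiar Young-Laplace entry pressure, which gives a sense of the magnitudes involved (the fluid values below are illustrative, not from the paper):

```python
import math

def entry_pressure_circular(gamma, theta_deg, r):
    """Capillary entry pressure (Young-Laplace) for a circular tube of
    radius r, interfacial tension gamma, and contact angle theta;
    the MS-P method reduces to this in circular cross sections."""
    return 2.0 * gamma * math.cos(math.radians(theta_deg)) / r

# Water/air, gamma = 0.072 N/m, contact angle 0, 10-micron pore radius:
print(entry_pressure_circular(0.072, 0.0, 10e-6))  # 14400 Pa
```

    In angular and irregular cross sections the arc menisci along the corners alter this balance, which is what the combinatorial search over allowed menisci configurations described above accounts for.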

  1. An approach for management of geometry data

    NASA Technical Reports Server (NTRS)

    Dube, R. P.; Herron, G. J.; Schweitzer, J. E.; Warkentine, E. R.

    1980-01-01

    The strategies for managing Integrated Programs for Aerospace Design (IPAD) computer-based geometry are described. The computer model of geometry is the basis for communication, manipulation, and analysis of shape information. IPAD's data base system makes this information available to all authorized departments in a company. A discussion of the data structures and algorithms required to support geometry in IPIP (IPAD's data base management system) is presented. Through the use of IPIP's data definition language, the structure of the geometry components is defined. The data manipulation language is the vehicle by which a user defines an instance of the geometry. The manipulation language also allows a user to edit, query, and manage the geometry. The selection of canonical forms is a very important part of the IPAD geometry. IPAD has a canonical form for each entity and provides transformations to alternate forms; in particular, IPAD will provide a transformation to the ANSI standard. The DBMS schemas required to support IPAD geometry are explained.

  3. Geometry: A Flow Proof Approach.

    ERIC Educational Resources Information Center

    McMurray, Robert

    The inspiration for this text was provided by an exposure to the flow proof approach to a proof format as opposed to the conventional two-column approach. Historical background is included, to provide a frame of reference to give the student an appreciation of the subject. The basic constructions are introduced early and briefly, to aid the…

  4. A fractal approach to the dark silicon problem: A comparison of 3D computer architectures - Standard slices versus fractal Menger sponge geometry

    NASA Astrophysics Data System (ADS)

    Herrmann, Richard

    2015-01-01

    The dark silicon problem, which limits the power growth of future computer generations, is interpreted as a heat energy transport problem when increasing the energy emitting surface area within a given volume. A comparison of two 3D-configuration models, namely a standard slicing and a fractal surface generation within the Menger sponge geometry, is presented. It is shown that for iteration orders n > 3 the fractal model shows increasingly better thermal behavior. As a consequence, cooling problems may be minimized by using a fractal architecture. The Menger sponge geometry is therefore a good example of a fractal architecture applicable not only in computer science, but also, e.g., in chemistry when building chemical reactors, optimizing catalytic processes, or in sensor construction technology when building highly effective sensors for toxic gases or water analysis.
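
    The geometric effect behind the claimed thermal advantage can be checked with a small voxel model: counting exposed faces of the level-n Menger sponge shows its surface-to-volume ratio growing with iteration order. This is a sketch using the standard base-3 digit test for sponge membership, not the paper's thermal model.

```python
def in_sponge(x, y, z, n):
    """True if voxel (x, y, z) of the 3^n grid belongs to the level-n
    Menger sponge: at no scale may two or more of the three base-3
    digits equal 1."""
    for _ in range(n):
        if (x % 3 == 1) + (y % 3 == 1) + (z % 3 == 1) >= 2:
            return False
        x, y, z = x // 3, y // 3, z // 3
    return True

def surface_to_volume(n):
    """Exposed surface area divided by volume for a unit-edge sponge
    built from voxels of side 1/3^n."""
    m = 3 ** n
    s = 1.0 / m
    filled = {(x, y, z) for x in range(m) for y in range(m)
              for z in range(m) if in_sponge(x, y, z, n)}
    faces = sum((x + dx, y + dy, z + dz) not in filled
                for (x, y, z) in filled
                for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                   (0, -1, 0), (0, 0, 1), (0, 0, -1)))
    return (faces * s * s) / (len(filled) * s ** 3)

# Surface-to-volume ratio grows rapidly with iteration order,
# which is the heat-dissipation advantage discussed above:
for n in range(3):
    print(n, surface_to_volume(n))  # 6.0, 10.8, ~23.8
```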

  5. A computer program for analyzing channel geometry

    USGS Publications Warehouse

    Regan, R.S.; Schaffranek, R.W.

    1985-01-01

    The Channel Geometry Analysis Program (CGAP) provides the capability to process, analyze, and format cross-sectional data for input to flow/transport simulation models or other computational programs. CGAP allows for a variety of cross-sectional data input formats through use of variable format specification. The program accepts data from various computer media and provides for modification of machine-stored parameter values. CGAP has been devised to provide a rapid and efficient means of computing and analyzing the physical properties of an open-channel reach defined by a sequence of cross sections. CGAP's 16 options provide a wide range of methods by which to analyze and depict a channel reach and its individual cross-sectional properties. The primary function of the program is to compute the area, width, wetted perimeter, and hydraulic radius of cross sections at successive increments of water surface elevation (stage) from data that consist of coordinate pairs of cross-channel distances and land surface or channel bottom elevations. Longitudinal rates-of-change of cross-sectional properties are also computed, as are the mean properties of a channel reach. Output products include tabular lists of cross-sectional area, channel width, wetted perimeter, hydraulic radius, average depth, and cross-sectional symmetry computed as functions of stage; plots of cross sections; plots of cross-sectional area and (or) channel width as functions of stage; tabular lists of cross-sectional area and channel width computed as functions of stage for subdivisions of a cross section; plots of cross sections in isometric projection; and plots of cross-sectional area at a fixed stage as a function of longitudinal distance along an open-channel reach. A Command Procedure Language program and Job Control Language procedure exist to facilitate program execution on the U.S. Geological Survey Prime and Amdahl computer systems, respectively. (Lantz-PTT)
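
    The program's primary computation, area, top width, wetted perimeter, and hydraulic radius from coordinate pairs at a given stage, can be sketched as follows; this is a simplified single-stage version for illustration, not the CGAP program itself.

```python
import math

def section_properties(coords, stage):
    """Area, top width, wetted perimeter, and hydraulic radius of a
    channel cross section at a given water-surface elevation (stage).
    `coords` is a list of (cross-channel distance, bed elevation) pairs,
    ordered left to right."""
    area = width = perimeter = 0.0
    for (x0, z0), (x1, z1) in zip(coords, coords[1:]):
        d0, d1 = stage - z0, stage - z1  # water depths at segment ends
        if d0 <= 0 and d1 <= 0:
            continue  # segment entirely above the waterline
        if d0 * d1 < 0:  # segment crosses the waterline: clip it
            xc = x0 + (x1 - x0) * d0 / (d0 - d1)
            if d0 > 0:
                x1, d1 = xc, 0.0
            else:
                x0, d0 = xc, 0.0
        dx = x1 - x0
        area += 0.5 * (d0 + d1) * dx       # trapezoidal slice of water
        width += dx                        # contribution to top width
        perimeter += math.hypot(dx, d1 - d0)  # wetted bed length
    return area, width, perimeter, (area / perimeter if perimeter else 0.0)

# Symmetric triangular channel with 1:1 side slopes, stage 1.0 above the invert:
coords = [(0.0, 2.0), (2.0, 0.0), (4.0, 2.0)]
a, w, p, r = section_properties(coords, 1.0)
print(a, w, p, r)  # 1.0, 2.0, 2*sqrt(2), ~0.354
```

    CGAP repeats this computation at successive stage increments and aggregates the results along the reach; only that single-stage core is shown here.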

  6. Computational algebraic geometry of epidemic models

    NASA Astrophysics Data System (ADS)

    Rodríguez Vega, Martín.

    2014-06-01

    Computational Algebraic Geometry is applied to the analysis of various epidemic models for Schistosomiasis and Dengue, both for the case without control measures and for the case where control measures are applied. The models were analyzed using the mathematical software Maple. Explicitly, the analysis is performed using Groebner bases, Hilbert dimension, and Hilbert polynomials. These computational tools are included automatically in Maple. Each of these models is represented by a system of ordinary differential equations, and for each model the basic reproductive number (R0) is calculated. The effects of the control measures are observed through the changes in the algebraic structure of R0, the Groebner basis, the Hilbert dimension, and the Hilbert polynomials. It is hoped that the results obtained in this paper will prove important for designing control measures against the epidemic diseases described. For future research, the use of algebraic epidemiology to analyze models for airborne and waterborne diseases is proposed.

  7. Extraction of human stomach using computational geometry

    NASA Astrophysics Data System (ADS)

    Aisaka, Kazuo; Arai, Kiyoshi; Tsutsui, Kumiko; Hashizume, Akihide

    1991-06-01

    This paper presents a method for extracting the profile of the stomach by computational geometry. The stomach is difficult to recognize from an X-ray because of its elasticity. Global information of the stomach shape is required for recognition. The method has three steps. In the first step, the edge is enhanced, and then edge pieces are found as candidates for the border. Because the resulting border is almost always incomplete, a method for connecting the pieces is required. The second step uses computational geometry to create the global structure from the edge pieces. A Delaunay graph is drawn from the end points of the pieces. This enables us to decide which pieces are most likely to connect. The third step uses the shape of a stomach to find the best sequence of pieces. The knowledge is described in simple LISP functions. Because a Delaunay graph is planar, we can reduce the number of candidate pieces while searching for the most likely sequence. We applied this method to seven stomach pictures taken by the double contrast method and found the greater curvature in six cases. Enhancing the shape knowledge will increase the number of recognizable parts.

  8. Prime factorization using quantum annealing and computational algebraic geometry

    PubMed Central

    Dridi, Raouf; Alghassi, Hedayat

    2017-01-01

    We investigate prime factorization from two perspectives: quantum annealing and computational algebraic geometry, specifically Gröbner bases. We present a novel autonomous algorithm which combines the two approaches and leads to the factorization of all bi-primes up to just over 200000, the largest number factored to date using a quantum processor. We also explain how Gröbner bases can be used to reduce the degree of Hamiltonians. PMID:28220854

  9. Prime factorization using quantum annealing and computational algebraic geometry

    NASA Astrophysics Data System (ADS)

    Dridi, Raouf; Alghassi, Hedayat

    2017-02-01

    We investigate prime factorization from two perspectives: quantum annealing and computational algebraic geometry, specifically Gröbner bases. We present a novel autonomous algorithm which combines the two approaches and leads to the factorization of all bi-primes up to just over 200000, the largest number factored to date using a quantum processor. We also explain how Gröbner bases can be used to reduce the degree of Hamiltonians.

  10. Experimental Approach for the Uncertainty Assessment of 3D Complex Geometry Dimensional Measurements Using Computed Tomography at the mm and Sub-mm Scales.

    PubMed

    Jiménez, Roberto; Torralba, Marta; Yagüe-Fabra, José A; Ontiveros, Sinué; Tosello, Guido

    2017-05-16

    The dimensional verification of miniaturized components with 3D complex geometries is particularly challenging. Computed Tomography (CT) can represent a suitable alternative solution to micro metrology tools based on optical and tactile techniques. However, the establishment of CT systems' traceability when measuring 3D complex geometries is still an open issue. In this work, an alternative method for the measurement uncertainty assessment of 3D complex geometries by using CT is presented. The method is based on the micro-CT system Maximum Permissible Error (MPE) estimation, determined experimentally by using several calibrated reference artefacts. The main advantage of the presented method is that a previous calibration of the component by a more accurate Coordinate Measuring System (CMS) is not needed. In fact, such CMS would still hold all the typical limitations of optical and tactile techniques, particularly when measuring miniaturized components with complex 3D geometries and their inability to measure inner parts. To validate the presented method, the most accepted standard currently available for CT sensors, the Verein Deutscher Ingenieure/Verband Deutscher Elektrotechniker (VDI/VDE) guideline 2630-2.1 is applied. Considering the high number of influence factors in CT and their impact on the measuring result, two different techniques for surface extraction are also considered to obtain a realistic determination of the influence of data processing on uncertainty. The uncertainty assessment of a workpiece used for micro mechanical material testing is firstly used to confirm the method, due to its feasible calibration by an optical CMS. Secondly, the measurement of a miniaturized dental file with 3D complex geometry is carried out. The estimated uncertainties are eventually compared with the component's calibration and the micro manufacturing tolerances to demonstrate the suitability of the presented CT calibration procedure. The 2U/T ratios resulting from the

  11. Experimental Approach for the Uncertainty Assessment of 3D Complex Geometry Dimensional Measurements Using Computed Tomography at the mm and Sub-mm Scales

    PubMed Central

    Jiménez, Roberto; Torralba, Marta; Yagüe-Fabra, José A.; Ontiveros, Sinué; Tosello, Guido

    2017-01-01

    The dimensional verification of miniaturized components with 3D complex geometries is particularly challenging. Computed Tomography (CT) can represent a suitable alternative solution to micro metrology tools based on optical and tactile techniques. However, the establishment of CT systems’ traceability when measuring 3D complex geometries is still an open issue. In this work, an alternative method for the measurement uncertainty assessment of 3D complex geometries by using CT is presented. The method is based on the micro-CT system Maximum Permissible Error (MPE) estimation, determined experimentally by using several calibrated reference artefacts. The main advantage of the presented method is that a previous calibration of the component by a more accurate Coordinate Measuring System (CMS) is not needed. In fact, such CMS would still hold all the typical limitations of optical and tactile techniques, particularly when measuring miniaturized components with complex 3D geometries and their inability to measure inner parts. To validate the presented method, the most accepted standard currently available for CT sensors, the Verein Deutscher Ingenieure/Verband Deutscher Elektrotechniker (VDI/VDE) guideline 2630-2.1 is applied. Considering the high number of influence factors in CT and their impact on the measuring result, two different techniques for surface extraction are also considered to obtain a realistic determination of the influence of data processing on uncertainty. The uncertainty assessment of a workpiece used for micro mechanical material testing is firstly used to confirm the method, due to its feasible calibration by an optical CMS. Secondly, the measurement of a miniaturized dental file with 3D complex geometry is carried out. The estimated uncertainties are eventually compared with the component’s calibration and the micro manufacturing tolerances to demonstrate the suitability of the presented CT calibration procedure. The 2U/T ratios resulting from
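The 2U/T comparison in the abstract can be made concrete with a small sketch. Everything below is illustrative: the function names, the k = 2 coverage factor, and the idea of deriving a standard uncertainty from the MPE via a rectangular distribution are assumptions, not the paper's actual uncertainty budget.

```python
import math

def expanded_uncertainty(mpe_um, u_proc_um, u_work_um=0.0, k=2.0):
    # Hypothetical combination: a standard uncertainty obtained from the MPE
    # (rectangular distribution) combined with procedure and workpiece
    # contributions, then expanded with coverage factor k.
    u_mpe = mpe_um / math.sqrt(3)
    u_c = math.sqrt(u_mpe**2 + u_proc_um**2 + u_work_um**2)
    return k * u_c

def ratio_2U_T(U_um, tolerance_um):
    # The paper compares 2U against the manufacturing tolerance T;
    # a ratio well below 1 indicates the measurement is fit for purpose.
    return 2.0 * U_um / tolerance_um
```

For example, an expanded uncertainty of 2 µm against a 20 µm tolerance gives 2U/T = 0.2.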

  12. An Alternative Approach to Logo-Based Geometry

    ERIC Educational Resources Information Center

    Durmus, Soner; Karakirik, Erol

    2005-01-01

    Geometry is an important branch of mathematics. The geometry curriculum can be enriched by using different technologies such as graphing calculators and computers. Different Logo-based software packages aim to improve conceptual understanding in geometry. The goals of this paper are i) to present theoretical foundations of any computer software…

  13. An Alternative Approach to Logo-Based Geometry

    ERIC Educational Resources Information Center

    Karakirik, Erol; Durmus, Soner

    2005-01-01

    Geometry is an important branch of mathematics. The geometry curriculum can be enriched by using different technologies such as graphing calculators and computers. Different Logo-based software packages aim to improve conceptual understanding in geometry. The goals of this paper are i) to present theoretical foundations of any computer software…

  14. Turbulent flow computations in complex geometries

    NASA Astrophysics Data System (ADS)

    Burns, A. D.; Clarke, D. S.; Jones, I. P.; Simcox, S.; Wilkes, N. S.

    The nonstaggered-grid Navier-Stokes algorithm of Rhie and Chow (1983) and its implementation in the FLOW3D code (Burns et al., 1987) are described, with a focus on their application to problems involving complex geometries. Results for the flow in a tile-lined burner and for the flow over an automobile model are presented in extensive graphs and discussed in detail, and the advantages of supercomputer vectorization of the code are considered.

  15. Teaching Geometry: An Experiential and Artistic Approach.

    ERIC Educational Resources Information Center

    Ogletree, Earl J.

    The view that geometry should be taught at every grade level is promoted. Primary and elementary school children are thought to rarely have any direct experience with geometry, except on an incidental basis. Children are supposed to be able to learn geometry rather easily, so long as the method and content are adapted to their development and…

  16. Aircraft geometry verification with enhanced computer generated displays

    NASA Technical Reports Server (NTRS)

    Cozzolongo, J. V.

    1982-01-01

    A method for visual verification of aerodynamic geometries using computer generated, color shaded images is described. The mathematical models representing aircraft geometries are created for use in theoretical aerodynamic analyses and in computer aided manufacturing. The aerodynamic shapes are defined using parametric bi-cubic splined patches. This mathematical representation is then used as input to an algorithm that generates a color shaded image of the geometry. A discussion of the techniques used in the mathematical representation of the geometry and in the rendering of the color shaded display is presented. The results include examples of color shaded displays, which are contrasted with wire frame type displays. The examples also show the use of mapped surface pressures in terms of color shaded images of V/STOL fighter/attack aircraft and advanced turboprop aircraft.
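Color-shaded rendering of a parametric surface reduces to evaluating a point and a surface normal, then applying a lighting model. A minimal sketch, assuming a bicubic Bezier patch as a stand-in for the paper's bi-cubic splined patches, with simple Lambertian (diffuse) shading; all names are hypothetical:

```python
import numpy as np

def bernstein3(t):
    # Cubic Bernstein basis used to evaluate a bicubic Bezier patch.
    return np.array([(1 - t)**3, 3*t*(1 - t)**2, 3*t**2*(1 - t), t**3])

def patch_point(P, u, v):
    # P: 4x4x3 control net; returns the surface point at parameters (u, v).
    return np.einsum('i,ijk,j->k', bernstein3(u), P, bernstein3(v))

def lambert_shade(P, u, v, light_dir):
    # Normal from the cross product of central-difference tangents,
    # then diffuse intensity max(0, n·l) in [0, 1].
    h = 1e-5
    du = (patch_point(P, u + h, v) - patch_point(P, u - h, v)) / (2 * h)
    dv = (patch_point(P, u, v + h) - patch_point(P, u, v - h)) / (2 * h)
    n = np.cross(du, dv)
    n /= np.linalg.norm(n)
    l = np.asarray(light_dir, float)
    l /= np.linalg.norm(l)
    return max(0.0, float(n @ l))
```

Mapping a scalar field such as surface pressure to color is then a matter of substituting the field value for (or modulating) the diffuse intensity.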

  18. Multilinear Computing and Multilinear Algebraic Geometry

    DTIC Science & Technology

    2016-08-10

    Landmark paper titled "Most tensor problems are NP-hard" (see [14] in Section 3) in the Journal of the ACM, the premier journal in computer science. ...Multi-Valued Data, Springer-Verlag, Berlin Heidelberg, 2014. [14] C. J. Hillar and L.-H. Lim, "Most tensor problems are NP-hard," Journal of the ACM, 60 (2013), no. 6, Art

  19. Computational modeling of geometry dependent phonon transport in silicon nanostructures

    NASA Astrophysics Data System (ADS)

    Cheney, Drew A.

    Recent experiments have demonstrated that thermal properties of semiconductor nanostructures depend on nanostructure boundary geometry. Phonons are quantized mechanical vibrations that are the dominant carrier of heat in semiconductor materials, and their aggregate behavior determines a nanostructure's thermal performance. Phonon-geometry scattering processes, as well as waveguiding effects which result from coherent phonon interference, are responsible for the shape dependence of thermal transport in these systems. Nanoscale phonon-geometry interactions provide a mechanism by which nanostructure geometry may be used to create materials with targeted thermal properties. However, the ability to manipulate material thermal properties via controlling nanostructure geometry is contingent upon first obtaining increased theoretical understanding of fundamental geometry-induced phonon scattering processes and having robust analytical and computational models capable of exploring the nanostructure design space, simulating the phonon scattering events, and linking the behavior of individual phonon modes to overall thermal behavior. The overall goal of this research is to predict and analyze the effect of nanostructure geometry on thermal transport. To this end, a harmonic lattice-dynamics-based atomistic computational modeling tool was created to calculate phonon spectra and modal phonon transmission coefficients in geometrically irregular nanostructures. The computational tool is used to evaluate the accuracy and regimes of applicability of alternative computational techniques based upon continuum elastic wave theory. The model is also used to investigate phonon transmission and thermal conductance in diameter-modulated silicon nanowires. Motivated by the complexity of the transmission results, a simplified model based upon long-wavelength beam theory was derived and helps explain geometry-induced phonon scattering of low-frequency nanowire phonon modes.

  20. Grid generation and inviscid flow computation about aircraft geometries

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.

    1989-01-01

    Grid generation and Euler flow about fighter aircraft are described. A fighter aircraft geometry is specified by an area ruled fuselage with an internal duct, cranked delta wing or strake/wing combinations, canard and/or horizontal tail surfaces, and vertical tail surfaces. The initial step before grid generation and flow computation is the determination of a suitable grid topology. The external grid topology that has been applied is called a dual-block topology which is a patched C^1-continuous multiple-block system where inner blocks cover the highly-swept part of a cranked wing or strake, rearward inner-part of the wing, and tail components. Outer-blocks cover the remainder of the fuselage, outer-part of the wing, canards and extend to the far field boundaries. The grid generation is based on transfinite interpolation with Lagrangian blending functions. This procedure has been applied to the Langley experimental fighter configuration and a modified F-18 configuration. Supersonic flow between Mach 1.3 and 2.5 and angles of attack between 0 degrees and 10 degrees have been computed with associated Euler solvers based on the finite-volume approach. When coupling geometric details such as boundary layer diverter regions, duct regions with inlets and outlets, or slots with the general external grid, imposing C^1 continuity can be extremely tedious. The approach taken here is to patch blocks together at common interfaces where there is no grid continuity, but enforce conservation in the finite-volume solution. The key to this technique is how to obtain the information required for a conservative interface. The Ramshaw technique which automates the computation of proportional areas of two overlapping grids on a planar surface and is suitable for coding was used. Researchers generated internal duct grids for the Langley experimental fighter configuration independent of the external grid topology, with a conservative interface at the inlet and outlet.

  1. Techniques to derive geometries for image-based Eulerian computations

    PubMed Central

    Dillard, Seth; Buchholz, James; Vigmostad, Sarah; Kim, Hyunggun; Udaykumar, H.S.

    2014-01-01

    Purpose: The performance of three frequently used level set-based segmentation methods is examined for the purpose of defining features and boundary conditions for image-based Eulerian fluid and solid mechanics models. The focus of the evaluation is to identify an approach that produces the best geometric representation from a computational fluid/solid modeling point of view. In particular, extraction of geometries from a wide variety of imaging modalities and noise intensities, to supply to an immersed boundary approach, is targeted.

    Design/methodology/approach: Two- and three-dimensional images, acquired from optical, X-ray CT, and ultrasound imaging modalities, are segmented with active contours, k-means, and adaptive clustering methods. Segmentation contours are converted to level sets and smoothed as necessary for use in fluid/solid simulations. Results produced by the three approaches are compared visually and with contrast ratio, signal-to-noise ratio, and contrast-to-noise ratio measures.

    Findings: While the active contours method possesses built-in smoothing and regularization and produces continuous contours, the clustering methods (k-means and adaptive clustering) produce discrete (pixelated) contours that require smoothing using speckle-reducing anisotropic diffusion (SRAD). Thus, for images with high contrast and low to moderate noise, active contours are generally preferable. However, adaptive clustering is found to be far superior to the other two methods for images possessing high levels of noise and global intensity variations, due to its more sophisticated use of local pixel/voxel intensity statistics.

    Originality/value: It is often difficult to know a priori which segmentation will perform best for a given image type, particularly when geometric modeling is the ultimate goal. This work offers insight to the algorithm selection process, as well as outlining a practical framework for generating useful geometric surfaces in an Eulerian setting. PMID
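Of the three segmentation methods compared, k-means is the simplest to sketch. The version below clusters pixel intensities only, with a deterministic quantile initialization and no spatial regularization; it is an illustration of the technique, not the authors' implementation:

```python
import numpy as np

def kmeans_segment(img, k=2, iters=20):
    # Plain Lloyd's algorithm on pixel intensities: returns a label image
    # and the final cluster centers.
    x = img.ravel().astype(float)
    centers = np.quantile(x, np.linspace(0.0, 1.0, k))  # deterministic init
    labels = np.zeros(x.shape, dtype=int)
    for _ in range(iters):
        # Assign each pixel to its nearest intensity center.
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        # Recompute each center as the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels.reshape(img.shape), centers
```

As the Findings note, the resulting label boundaries are pixelated and would need smoothing (e.g. SRAD) before conversion to a level set.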

  2. Using Computer-Assisted Multiple Representations in Learning Geometry Proofs

    ERIC Educational Resources Information Center

    Wong, Wing-Kwong; Yin, Sheng-Kai; Yang, Hsi-Hsun; Cheng, Ying-Hao

    2011-01-01

    Geometry theorem proving involves skills that are difficult to learn. Instead of working with abstract and complicated representations, students might start with concrete, graphical representations. A proof tree is a graphical representation of a formal proof, with each node representing a proposition or given conditions. A computer-assisted…

  3. The Effects of Instructional Practices on Computation and Geometry Achievement.

    ERIC Educational Resources Information Center

    DeVaney, Thomas A.

    The purpose of this study was to examine the relationships between classroom instructional practices and computation and geometry achievement. Relationships between mathematics achievement and classroom characteristics were also explored. The sample of 1,032 students and their teachers (n=147) was selected from the 1992 Trial State Mathematics…

  5. Investigating the geometry of pig airways using computed tomography

    NASA Astrophysics Data System (ADS)

    Mansy, Hansen A.; Azad, Md Khurshidul; McMurray, Brandon; Henry, Brian; Royston, Thomas J.; Sandler, Richard H.

    2015-03-01

    Numerical modeling of sound propagation in the airways requires accurate knowledge of the airway geometry. These models are often validated using human and animal experiments. While many studies documented the geometric details of the human airways, information about the geometry of pig airways is scarcer. In addition, the morphology of animal airways can be significantly different from that of humans. The objective of this study is to measure the airway diameter, length and bifurcation angles in domestic pigs using computed tomography. After imaging the lungs of 3 pigs, segmentation software tools were used to extract the geometry of the airway lumen. The airway dimensions were then measured from the resulting 3D models for the first 10 airway generations. Results showed that the size and morphology of the airways of different animals were similar. The measured airway dimensions were compared with those of the human airways. While the trachea diameter was found to be comparable to the adult human, the diameter, length and branching angles of other airways were noticeably different from those of humans. For example, pigs consistently had an early airway branching from the trachea that feeds the superior (top) right lung lobe proximal to the carina. This branch is absent in the human airways. These results suggested that the human geometry may not be a good approximation of the pig airways and may increase errors when human airway geometric values are used in computational models of the pig chest.
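The bifurcation angles measured in such a study can be computed from three centerline points per bifurcation. A sketch, with a hypothetical function name and the convention that the angle is taken between the parent's flow direction and the daughter branch:

```python
import numpy as np

def branching_angle(parent_pt, bifurcation_pt, daughter_pt):
    # Angle (degrees) between the parent direction (parent -> bifurcation)
    # and a daughter direction (bifurcation -> daughter), from centerline
    # points extracted from the segmented airway tree.
    u = np.asarray(bifurcation_pt, float) - np.asarray(parent_pt, float)
    v = np.asarray(daughter_pt, float) - np.asarray(bifurcation_pt, float)
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))
```

A daughter continuing straight through the bifurcation gives 0 degrees; a side branch at right angles gives 90.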

  6. Computational geometry assessment for morphometric analysis of the mandible.

    PubMed

    Raith, Stefan; Varga, Viktoria; Steiner, Timm; Hölzle, Frank; Fischer, Horst

    2017-01-01

    This paper presents a fully automated algorithm for geometry assessment of the mandible. Anatomical landmarks could be reliably detected and distances were statistically evaluated with principal component analysis. The method allows for the first time to generate a mean mandible shape with statistically valid geometrical variations based on a large set of 497 CT-scans of human mandibles. The data may be used in bioengineering for designing novel oral implants, for planning of computer-guided surgery, and for the improvement of biomechanical models, as it is shown that commercially available mandible replicas differ significantly from the mean of the investigated population.
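The statistical shape analysis described above, a mean shape plus principal modes of variation over corresponding landmarks, can be sketched with an SVD-based PCA. The function below is illustrative and assumes the landmark sets are already registered (the paper's alignment and landmark-detection steps are omitted):

```python
import numpy as np

def shape_pca(landmarks):
    # landmarks: (n_specimens, n_landmarks, 3) array of corresponding
    # anatomical landmarks. Returns the mean shape, the principal modes
    # of variation (one per row), and the variance explained by each mode.
    n = landmarks.shape[0]
    X = landmarks.reshape(n, -1)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    var = s**2 / (n - 1)          # variance captured by each mode
    return mean.reshape(-1, 3), Vt, var
```

A "mean mandible with statistically valid variations" is then mean + sum of a few leading modes scaled within, say, two standard deviations (sqrt of the mode variance).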

  7. Computer aided design and analysis of gear tooth geometry

    NASA Technical Reports Server (NTRS)

    Chang, S. H.; Huston, R. L.

    1987-01-01

    A simulation method for gear hobbing and shaping of straight and spiral bevel gears is presented. The method is based upon an enveloping theory for gear tooth profile generation. The procedure is applicable in the computer aided design of standard and nonstandard tooth forms. An inverse procedure for finding a conjugate gear tooth profile is presented for arbitrary cutter geometry. The kinematic relations for the tooth surfaces of straight and spiral bevel gears are proposed. The tooth surface equations for these gears are formulated in a manner suitable for their automated numerical development and solution.

  8. The 3-D General Geometry PIC Software for Distributed Memory MIMD Computers; EM Software Specification

    DTIC Science & Technology

    1994-09-01

    THE 3-D GENERAL GEOMETRY PIC SOFTWARE FOR DISTRIBUTED MEMORY MIMD COMPUTERS: TASK 1 FINAL REPORT. J W Eastwood, W Arter, N J Brealey, R W Hockney. September 1994. General geometry PIC for MIMD computers: Final report. Report RFFX(93)56,

  9. Learning Geometry through Discovery Learning Using a Scientific Approach

    ERIC Educational Resources Information Center

    In'am, Akhsanul; Hajar, Siti

    2017-01-01

    The objective of this research is to analyze the implementation of learning geometry through a scientific approach, considering three aspects: 1) the teacher's activities, 2) the students' activities and, 3) the achievement results. The adopted approach is descriptive-quantitative and the subjects are the Class VII students of an Islamic Junior…

  10. Computational simulation of intracoronary flow based on real coronary geometry.

    PubMed

    Boutsianis, Evangelos; Dave, Hitendu; Frauenfelder, Thomas; Poulikakos, Dimos; Wildermuth, Simon; Turina, Marko; Ventikos, Yiannis; Zund, Gregor

    2004-08-01

    To assess the feasibility of computationally simulating intracoronary blood flow based on real coronary artery geometry and to graphically depict various mechanical characteristics of this flow. Explanted fresh pig hearts were fixed using a continuous perfusion of 4% formaldehyde at physiological pressures. Omnipaque dye added to lead rubber solution was titrated to an optimum proportion of 1:25, to cast the coronary arterial tree. The heart was stabilized in a phantom model so as to suspend the base and the apex without causing external deformation. High resolution computerized tomography scans of this model were utilized to reconstruct the three-dimensional coronary artery geometry, which in turn was used to generate several volumetric tetrahedral meshes of sufficient density needed for numerical accuracy. The transient equations of momentum and mass conservation were numerically solved by employing methods of computational fluid dynamics under realistic pulsatile inflow boundary conditions. The simulations have yielded graphic distributions of intracoronary flow stream lines, static pressure drop, wall shear stress, bifurcation mass flow ratios and velocity profiles. The variability of these quantities within the cardiac cycle has been investigated at a temporal resolution of 1/100th of a second and a spatial resolution of about 10 microm. The areas of amplified variations in wall shear stress, mostly evident in the neighborhoods of arterial branching, seem to correlate well with clinically observed increased atherogenesis. The intracoronary flow lines showed stasis and extreme vorticity during the phase of minimum coronary flow in contrast to streamlined undisturbed flow during the phase of maximum flow. Computational tools of this kind along with a state-of-the-art multislice computerized tomography or magnetic resonance-based non-invasive coronary imaging, could enable realistic, repetitive, non-invasive and multidimensional quantifications of the effects of

  11. Ionization coefficient approach to modeling breakdown in nonuniform geometries.

    SciTech Connect

    Warne, Larry Kevin; Jorgenson, Roy Eberhardt; Nicolaysen, Scott D.

    2003-11-01

    This report summarizes the work on breakdown modeling in nonuniform geometries by the ionization coefficient approach. Included are: (1) fits to primary and secondary ionization coefficients used in the modeling; (2) analytical test cases for sphere-to-sphere, wire-to-wire, corner, coaxial, and rod-to-plane geometries; (3) a compilation of experimental data with source references; and (4) comparisons between code results, test case results, and experimental data. A simple criterion is proposed to differentiate between corona and spark. The effect of a dielectric surface on avalanche growth is examined by means of Monte Carlo simulations. The presence of a clean dry surface does not appear to enhance growth.
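The ionization coefficient approach predicts breakdown from an integral of the effective ionization coefficient along a field line. A sketch of that criterion; the critical value K = 18 is a commonly quoted figure for streamer onset in air, not a number taken from the report, whose fitted coefficients differ:

```python
import numpy as np

def avalanche_integral(alpha_eff, path_x):
    # Trapezoidal integral of the effective ionization coefficient
    # (alpha - eta, per meter) along sampled positions of a field line.
    a = np.asarray(alpha_eff, float)
    x = np.asarray(path_x, float)
    return float(np.sum(0.5 * (a[1:] + a[:-1]) * np.diff(x)))

def predicts_breakdown(alpha_eff, path_x, K=18.0):
    # Streamer-type breakdown is predicted when the avalanche integral
    # exceeds the critical value K; below it, growth stays corona-like.
    return avalanche_integral(alpha_eff, path_x) >= K
```

In a nonuniform gap the coefficient varies strongly along the line, which is why the integral (rather than a single field value) decides between corona and spark.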

  12. Computation of recirculating compressible flow in axisymmetric geometries

    SciTech Connect

    Isaac, K.M.; Nejad, A.S.

    1985-01-01

    A computational study of compressible, turbulent, recirculating flow in axisymmetric geometries is reported in this paper. The SIMPLE algorithm was used in the differencing scheme and the k-epsilon model for turbulence was used for turbulence closure. Special attention was given to the specification of the boundary conditions. The study revealed the significant influence of the boundary conditions on the solution. The eddy length scale at the inlet to the solution domain was the most uncertain parameter in the specification of the boundary conditions. The predictions were compared with the recent data based on laser velocimetry. The two are seen to be in good agreement. The present study underscores the need to have a more reliable means of specifying the inlet boundary conditions for the k-epsilon turbulence model.
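The inlet eddy length scale the abstract singles out enters the k-epsilon boundary conditions through standard estimates: turbulent kinetic energy from a turbulence intensity, dissipation from k and the length scale. A sketch, assuming the usual model constant C_mu = 0.09:

```python
def k_epsilon_inlet(U, intensity, length_scale, c_mu=0.09):
    # Common inlet estimates for the k-epsilon model:
    #   k   = 3/2 (I U)^2
    #   eps = C_mu^(3/4) k^(3/2) / l
    # U: mean inlet velocity, intensity: turbulence intensity (fraction),
    # length_scale: eddy length scale l (the most uncertain input here).
    k = 1.5 * (intensity * U) ** 2
    eps = c_mu ** 0.75 * k ** 1.5 / length_scale
    return k, eps
```

Because eps scales as 1/l, an uncertain length scale translates directly into an uncertain inlet dissipation, consistent with the sensitivity the study reports.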

  13. Representing Range Compensators with Computational Geometry in TOPAS

    SciTech Connect

    Iandola, Forrest N.; /Illinois U., Urbana /SLAC

    2012-09-07

    In a proton therapy beamline, the range compensator modulates the beam energy, which subsequently controls the depth at which protons deposit energy. In this paper, we introduce two computational representations of the range compensator. One of our compensator representations, which we refer to as a subtraction solid-based range compensator, precisely represents the compensator. Our other representation, the 3D hexagon-based range compensator, closely approximates the compensator geometry. We have implemented both of these compensator models in a proton therapy Monte Carlo simulation called TOPAS (Tool for Particle Simulation). In the future, we will present a detailed study of the accuracy and runtime performance trade-offs between our two range compensator representations.

  14. Geometric algebra and information geometry for quantum computational software

    NASA Astrophysics Data System (ADS)

    Cafaro, Carlo

    2017-03-01

    The art of quantum algorithm design is highly nontrivial. Grover's search algorithm constitutes a masterpiece of quantum computational software. In this article, we use methods of geometric algebra (GA) and information geometry (IG) to enhance the algebraic efficiency and the geometrical significance of the digital and analog representations of Grover's algorithm, respectively. Specifically, GA is used to describe the Grover iterate and the discretized iterative procedure that exploits quantum interference to amplify the probability amplitude of the target state before measuring the query register. The transition from digital to analog descriptions occurs via Stone's theorem, which relates the (unitary) Grover iterate to a suitable (Hermitian) Hamiltonian that controls Schrödinger's quantum mechanical evolution of a quantum state towards the target state. Once the discrete-to-continuous transition is completed, IG is used to interpret Grover's iterative procedure as a geodesic path on the manifold of the parametric density operators of pure quantum states constructed from the continuous approximation of the parametric quantum output state in Grover's algorithm. Finally, we discuss the dissipationless nature of quantum computing, recover the quadratic speedup relation, and identify the superfluity of the Walsh-Hadamard operation from an IG perspective with emphasis on statistical mechanical considerations.
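The quadratic speedup recovered in the paper follows from the standard amplitude amplification relation p = sin²((2k+1)θ) with sin²θ = M/N. A sketch of that relation (of the textbook algebra, not of the paper's GA/IG machinery):

```python
import math

def grover_success_probability(n_qubits, marked, iterations):
    # Probability of measuring a marked item after k Grover iterations:
    # p = sin^2((2k+1) * theta), with sin^2(theta) = M/N.
    N, M = 2 ** n_qubits, marked
    theta = math.asin(math.sqrt(M / N))
    return math.sin((2 * iterations + 1) * theta) ** 2

def optimal_iterations(n_qubits, marked=1):
    # Closest integer to pi/(4*theta) - 1/2, approximately (pi/4)*sqrt(N/M):
    # the quadratic speedup over classical O(N) search.
    N = 2 ** n_qubits
    theta = math.asin(math.sqrt(marked / N))
    return round(math.pi / (4 * theta) - 0.5)
```

For 10 qubits (N = 1024) and one marked state, about 25 iterations suffice to push the success probability above 99%, versus roughly 512 classical queries on average.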

  15. Computational Analysis on Stent Geometries in Carotid Artery: A Review

    NASA Astrophysics Data System (ADS)

    Paisal, Muhammad Sufyan Amir; Taib, Ishkrizat; Ismail, Al Emran

    2017-01-01

    This paper reviews work done by previous researchers in order to gather information for the current study, which concerns computational analysis of stent geometry in the carotid artery. The implantation of stents in the carotid artery has become a popular treatment for arterial diseases such as stenosis, thrombosis, atherosclerosis and embolization, reducing the rate of mortality and morbidity. For the stenting of an artery, previous researchers developed many types of mathematical models in which the physiological variables of the artery are analogized to electrical variables. This enables computational fluid dynamics (CFD) analysis of the artery, a method also used by previous researchers. This leads to the current study's aim of finding the hemodynamic characteristics of artery stenting, such as wall shear stress (WSS) and wall shear stress gradient (WSSG). Another objective of this study is to evaluate current stent configurations for full optimization in reducing arterial side effects such as the restenosis rate in the weeks after stenting. The evaluation of a stent is based on decreasing strut-strut intersections, decreasing strut width and increasing strut-strut spacing. Existing stent configurations are good enough at widening the narrowed arterial wall, but diseases such as thrombosis still occur in the early and late stages after stent implantation. Thus, the outcome of this study is a prediction of the reduction in restenosis rate, and the WSS distribution is expected to classify which stent configuration is best.

  16. SU-E-I-12: Flexible Geometry Computed Tomography

    SciTech Connect

    Shaw, R

    2015-06-15

    Purpose: The concept separates the mechanical connection between the radiation source and detector. This design allows the trajectory and orientation of the radiation source/detector to be customized to the object being imaged, in contrast to the formulaic rotation-translation image acquisition of conventional computed tomography (CT). Background/significance: CT devices that image a full range of anatomy, patient populations, and imaging procedures are large. The root cause of the expanding size of comprehensive CT is the commitment to helical geometry that is hardwired into the image reconstruction. FGCT extends the application of alternative reconstruction techniques, i.e. tomosynthesis, by separating the two main components, radiation source and detector, and allowing 6 degrees of freedom of motion for the radiation source, detector, or both. The image acquisition geometry is then tailored to how the patient/object is positioned. This provides greater flexibility in the position and location at which the patient/object is imaged. Additionally, removing the need for a rotating gantry reduces the footprint, so that CT is more mobile and can move to where the patient/object is, instead of the other way around. Methods: As proof of principle, a reconstruction algorithm was designed to produce FGCT images. Using simulated detector data, voxels intersecting a line drawn between the radiation source and an individual detector are traced and modified using the detector signal. The detector signal is modified to compensate for changes in the source-to-detector distance. Adjacent voxels are modified in proportion to the detector signal, providing a simple image filter. Results: Image quality from the proposed FGCT reconstruction technique is proving to be a challenge, producing hardly recognizable images from limited projection angles. Conclusion: Preliminary assessment of the reconstruction technique demonstrates the inevitable
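The voxel-tracing update in the Methods paragraph can be illustrated with a toy 2D backprojection: deposit each detector reading along the source-detector line. The sketch below omits the distance compensation and neighbouring-voxel filter the abstract mentions, and all names are hypothetical:

```python
import numpy as np

def backproject(image_shape, source, detector, signal, n_samples=200):
    # Deposit a single detector reading uniformly along the line from the
    # source to the detector, accumulating into the voxels it crosses.
    img = np.zeros(image_shape)
    s = np.asarray(source, float)
    d = np.asarray(detector, float)
    for t in np.linspace(0.0, 1.0, n_samples):
        p = s + t * (d - s)                      # point along the ray
        i, j = int(round(p[0])), int(round(p[1]))
        if 0 <= i < image_shape[0] and 0 <= j < image_shape[1]:
            img[i, j] += signal / n_samples
    return img
```

Because source and detector positions are free parameters here, the same update works for an arbitrary (non-helical) acquisition trajectory, which is the point of the flexible-geometry concept.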

  17. Computational approaches to computational aero-acoustics

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C.

    1996-01-01

    The various techniques by which the goal of computational aeroacoustics (the calculation and noise prediction of a fluctuating fluid flow) may be achieved are reviewed. The governing equations for compressible fluid flow are presented. The direct numerical simulation approach is shown to be computationally intensive for high Reynolds number viscous flows. Therefore, other approaches, such as the acoustic analogy, vortex models and various perturbation techniques that aim to break the analysis into a viscous part and an acoustic part are presented. The choice of the approach is shown to be problem dependent.

  18. Computational approaches to vision

    NASA Technical Reports Server (NTRS)

    Barrow, H. G.; Tenenbaum, J. M.

    1986-01-01

    Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.

  20. SOLVING PDES IN COMPLEX GEOMETRIES: A DIFFUSE DOMAIN APPROACH

    PubMed Central

    LI, X.; LOWENGRUB, J.; RÄTZ, A.; VOIGT, A.

    2011-01-01

    We extend previous work and present a general approach for solving partial differential equations in complex, stationary, or moving geometries with Dirichlet, Neumann, and Robin boundary conditions. Using an implicit representation of the geometry through an auxiliary phase field function, which replaces the sharp boundary of the domain with a diffuse layer (e.g. diffuse domain), the equation is reformulated on a larger regular domain. The resulting partial differential equation is of the same order as the original equation, with additional lower order terms to approximate the boundary conditions. The reformulated equation can be solved by standard numerical techniques. We use the method of matched asymptotic expansions to show that solutions of the reformulated equations converge to those of the original equations. We provide numerical simulations which confirm this analysis. We also present applications of the method to growing domains and complex three-dimensional structures and we discuss applications to cell biology and heteroepitaxy. PMID:21603084
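The idea can be made concrete with a crude sketch: solve a Poisson problem on a disk embedded in a square grid, with the boundary represented by a smooth phase field and the Dirichlet condition imposed through a penalty term. This is one common diffuse-domain variant, not the paper's exact formulation; the penalty weight and phase-field profile are assumptions.

```python
import numpy as np

def diffuse_domain_poisson(f, g, phi, h, penalty=1e3, iters=5000):
    """Solve -div(phi grad u) + penalty*(1-phi)*(u-g) = phi*f by Jacobi
    iteration on a uniform periodic grid (a crude diffuse-domain scheme).
    phi ~ 1 inside the domain, ~ 0 outside; u -> g where phi -> 0."""
    u = np.full_like(phi, g)
    for _ in range(iters):
        # face-centered phi values for the divergence stencil
        pe = 0.5 * (phi + np.roll(phi, -1, 1)); pw = 0.5 * (phi + np.roll(phi, 1, 1))
        pn = 0.5 * (phi + np.roll(phi, -1, 0)); ps = 0.5 * (phi + np.roll(phi, 1, 0))
        num = (pe * np.roll(u, -1, 1) + pw * np.roll(u, 1, 1) +
               pn * np.roll(u, -1, 0) + ps * np.roll(u, 1, 0)) / h**2 \
              + phi * f + penalty * (1 - phi) * g
        den = (pe + pw + pn + ps) / h**2 + penalty * (1 - phi)
        u = num / np.maximum(den, 1e-12)
    return u
```

The key property the abstract emphasizes survives even in this sketch: the solver never sees the curved boundary, only a regular Cartesian grid and a smooth field phi.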

  1. Using 3D Computer Graphics Multimedia to Motivate Preservice Teachers' Learning of Geometry and Pedagogy

    ERIC Educational Resources Information Center

    Goodson-Espy, Tracy; Lynch-Davis, Kathleen; Schram, Pamela; Quickenton, Art

    2010-01-01

    This paper describes the genesis and purpose of our geometry methods course, focusing on a geometry-teaching technology we created using the NVIDIA[R] Chameleon demonstration. This article presents examples from a sequence of lessons centered on a 3D computer graphics demonstration of the chameleon and its geometry. In addition, we present data…

  2. Automated Preparation of Geometry for Computational Applications Final Report

    DTIC Science & Technology

    2011-01-31

    The GPW exports the CAD geometry to commonly used grid generation tools such as Chimera Grid Tools, Cart3D, and SolidMesh. Export in STL format is also supported.

  3. A Parametric Geometry Computational Fluid Dynamics (CFD) Study Utilizing Design of Experiments (DOE)

    NASA Technical Reports Server (NTRS)

    Rhew, Ray D.; Parker, Peter A.

    2007-01-01

    Design of Experiments (DOE) techniques were applied to the Launch Abort System (LAS) of the NASA Crew Exploration Vehicle (CEV) parametric geometry Computational Fluid Dynamics (CFD) study to efficiently identify and rank the primary contributors to the integrated drag over the vehicle's ascent trajectory. Typical approaches to these types of activities involve developing all possible combinations of geometries, changing one variable at a time, analyzing them with CFD, and predicting the main effects on an aerodynamic parameter, which in this application is integrated drag. The original plan for the LAS study team was to generate and analyze more than 1,000 geometry configurations to study 7 geometric parameters. By utilizing DOE techniques the number of geometries was strategically reduced to 84. In addition, critical information on interaction effects among the geometric factors was identified that would not have been possible with the traditional technique. Therefore, the study was performed in less time and provided more information on the geometric main effects and interactions impacting drag generated by the LAS. This paper discusses the methods utilized to develop the experimental design, execution, and data analysis.
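The study's exact 84-run design is not specified, but the core idea, studying 7 two-level factors in far fewer runs than the full factorial, can be illustrated with a small fractional factorial sketch. The generators and response below are hypothetical; note that in such a compact (resolution III) design, main effects are aliased with interactions, which is why larger designs like the 84-run one are used when interaction information matters.

```python
import itertools
import numpy as np

def fractional_factorial_7_in_8():
    """2^(7-4) design: 7 two-level factors in 8 runs.

    Columns for factors A, B, C form a full 2^3 factorial; the remaining
    factors are assigned to interaction columns (D=AB, E=AC, F=BC, G=ABC).
    """
    base = np.array(list(itertools.product([-1, 1], repeat=3)))
    A, B, C = base.T
    return np.column_stack([A, B, C, A * B, A * C, B * C, A * B * C])

def main_effects(design, response):
    """Per factor: mean response at the +1 level minus mean at the -1 level."""
    return np.array([response[col == 1].mean() - response[col == -1].mean()
                     for col in design.T])
```

Because the design matrix is orthogonal, each main effect is estimated independently from the same 8 runs instead of one-factor-at-a-time sweeps.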

  4. Development and Verification of Body Armor Target Geometry Created Using Computed Tomography Scans

    DTIC Science & Technology

    2017-07-13

    ARL-MR-0957 ● JULY 2017. US Army Research Laboratory. Development and Verification of Body Armor Target Geometry Created Using Computed Tomography Scans, by Autumn R Kulaga.

  5. A Geometry Based Infra-Structure for Computational Analysis and Design

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1998-01-01

    ). This is particularly onerous for modern CAD systems based on solid modeling. The part was a proper solid and, in the translation to IGES, has lost this important characteristic. STEP is another standard for CAD data that exists and supports the concept of a solid. The problem with STEP is that a solid modeling geometry kernel is required to query and manipulate the data within this type of file. (2) 'Good' Geometry. A bottleneck in getting results from a solver is the construction of proper geometry to be fed to the grid generator. With 'good' geometry a grid can be constructed in tens of minutes (even with a complex configuration) using unstructured techniques. Adroit multi-block methods are not far behind. This means that a million-node steady-state solution can be computed on the order of hours (using current high performance computers) starting from this 'good' geometry. Unfortunately, the geometry usually transmitted from the CAD system is not 'good' in the grid generator sense. The grid generator needs smooth, closed solid geometry. It can take a week (or more) of interaction with the CAD output (sometimes by hand) before the process can begin. (3) One-way Communication -- All information travels forward from one phase to the next. This makes procedures like node adaptation difficult when attempting to add or move nodes that sit on bounding surfaces (when the actual surface data has been lost after the grid generation phase). Until this process can be automated, more complex problems such as multi-disciplinary analysis, or using the above procedure for design, become prohibitive. There is also no way to easily deal with this system in a modular manner. One can only replace the grid generator, for example, if the software reads and writes the same files. Instead of the serial approach to analysis as described above, CAPRI takes a geometry-centric approach. 
This makes the actual geometry (not a discretized version) accessible to all phases of the

  6. Evaluation of a Cone Beam Computed Tomography Geometry for Image Guided Small Animal Irradiation

    PubMed Central

    Yang, Yidong; Armour, Michael; Wang, Ken Kang-Hsin; Gandhi, Nishant; Iordachita, Iulian; Siewerdsen, Jeffrey; Wong, John

    2015-01-01

    The conventional imaging geometry for small animal cone beam computed tomography (CBCT) is that a detector panel rotates around the head-to-tail axis of an imaged animal (“tubular” geometry). Another unusual but possible imaging geometry is that the detector panel rotates around the anterior-to-posterior axis of the animal (“pancake” geometry). The small animal radiation research platform (SARRP) developed at Johns Hopkins University employs the pancake geometry where a prone-positioned animal is rotated horizontally between an x-ray source and detector panel. This study aims to assess the CBCT image quality in the pancake geometry and investigate potential methods for improvement. We compared CBCT images acquired in the pancake geometry with those acquired in the tubular geometry when the phantom/animal was placed upright simulating the conventional CBCT geometry. Results showed signal-to-noise and contrast-to-noise ratios in the pancake geometry were reduced in comparison to the tubular geometry at the same dose level. But the overall spatial resolution within the transverse plane of the imaged cylinder/animal was better in the pancake geometry. A modest exposure increase of up to twofold in the pancake geometry can improve image quality to a level close to the tubular geometry. Image quality can also be improved by inclining the animal, which reduces streak artifacts caused by bony structures. The major factor resulting in the inferior image quality in the pancake geometry is the elevated beam attenuation along the long axis of the phantom/animal and consequently increased scatter-to-primary ratio in that orientation. Notwithstanding, the image quality in the pancake-geometry CBCT is adequate to support image guided animal positioning, while providing unique advantages of non-coplanar and multiple mice irradiation. This study also provides useful knowledge about the image quality in the two very different imaging geometries, i.e., pancake and tubular geometry

  7. Evaluation of a cone beam computed tomography geometry for image guided small animal irradiation

    NASA Astrophysics Data System (ADS)

    Yang, Yidong; Armour, Michael; Kang-Hsin Wang, Ken; Gandhi, Nishant; Iordachita, Iulian; Siewerdsen, Jeffrey; Wong, John

    2015-07-01

    The conventional imaging geometry for small animal cone beam computed tomography (CBCT) is that a detector panel rotates around the head-to-tail axis of an imaged animal (‘tubular’ geometry). Another unusual but possible imaging geometry is that the detector panel rotates around the anterior-to-posterior axis of the animal (‘pancake’ geometry). The small animal radiation research platform developed at Johns Hopkins University employs the pancake geometry where a prone-positioned animal is rotated horizontally between an x-ray source and detector panel. This study aims to assess the CBCT image quality in the pancake geometry and investigate potential methods for improvement. We compared CBCT images acquired in the pancake geometry with those acquired in the tubular geometry when the phantom/animal was placed upright simulating the conventional CBCT geometry. Results showed signal-to-noise and contrast-to-noise ratios in the pancake geometry were reduced in comparison to the tubular geometry at the same dose level. But the overall spatial resolution within the transverse plane of the imaged cylinder/animal was better in the pancake geometry. A modest exposure increase of up to twofold in the pancake geometry can improve image quality to a level close to the tubular geometry. Image quality can also be improved by inclining the animal, which reduces streak artifacts caused by bony structures. The major factor resulting in the inferior image quality in the pancake geometry is the elevated beam attenuation along the long axis of the phantom/animal and consequently increased scatter-to-primary ratio in that orientation. Notwithstanding, the image quality in the pancake-geometry CBCT is adequate to support image guided animal positioning, while providing unique advantages of non-coplanar and multiple mice irradiation. This study also provides useful knowledge about the image quality in the two very different imaging geometries, i.e. pancake and tubular geometry

  8. Evaluation of a cone beam computed tomography geometry for image guided small animal irradiation.

    PubMed

    Yang, Yidong; Armour, Michael; Wang, Ken Kang-Hsin; Gandhi, Nishant; Iordachita, Iulian; Siewerdsen, Jeffrey; Wong, John

    2015-07-07

    The conventional imaging geometry for small animal cone beam computed tomography (CBCT) is that a detector panel rotates around the head-to-tail axis of an imaged animal ('tubular' geometry). Another unusual but possible imaging geometry is that the detector panel rotates around the anterior-to-posterior axis of the animal ('pancake' geometry). The small animal radiation research platform developed at Johns Hopkins University employs the pancake geometry where a prone-positioned animal is rotated horizontally between an x-ray source and detector panel. This study aims to assess the CBCT image quality in the pancake geometry and investigate potential methods for improvement. We compared CBCT images acquired in the pancake geometry with those acquired in the tubular geometry when the phantom/animal was placed upright simulating the conventional CBCT geometry. Results showed signal-to-noise and contrast-to-noise ratios in the pancake geometry were reduced in comparison to the tubular geometry at the same dose level. But the overall spatial resolution within the transverse plane of the imaged cylinder/animal was better in the pancake geometry. A modest exposure increase of up to twofold in the pancake geometry can improve image quality to a level close to the tubular geometry. Image quality can also be improved by inclining the animal, which reduces streak artifacts caused by bony structures. The major factor resulting in the inferior image quality in the pancake geometry is the elevated beam attenuation along the long axis of the phantom/animal and consequently increased scatter-to-primary ratio in that orientation. Notwithstanding, the image quality in the pancake-geometry CBCT is adequate to support image guided animal positioning, while providing unique advantages of non-coplanar and multiple mice irradiation. This study also provides useful knowledge about the image quality in the two very different imaging geometries, i.e. 
pancake and tubular geometry, respectively.

  9. Automatic computation of pebble roundness using digital imagery and discrete geometry

    NASA Astrophysics Data System (ADS)

    Roussillon, Tristan; Piégay, Hervé; Sivignon, Isabelle; Tougne, Laure; Lavigne, Franck

    2009-10-01

    The shape of sedimentary particles is an important property, from which geographical hypotheses related to abrasion, distance of transport, river behavior, etc. can be formulated. In this paper, we use digital image analysis, especially discrete geometry, to automatically compute some shape parameters such as roundness, i.e. a measure of how much the corners and edges of a particle have been worn away. In contrast to previous work in which traditional digital image analysis techniques, such as Fourier transform, are used, we opted for a discrete geometry approach that allowed us to implement Wadell's original index, which is known to be more accurate, but more time consuming to implement in the field. Our implementation of Wadell's original index is highly correlated (92%) with the roundness classes of Krumbein's chart, used as a ground-truth. In addition, we show that other geometrical parameters, which are easier to compute, can be used to provide good approximations of roundness. We also used our shape parameters to study a set of digital images of pebbles from the Progo basin river network (Indonesia). The results we obtained are in agreement with previous work and open new possibilities for geomorphologists thanks to automatic computation.
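Wadell's index and one easier-to-compute proxy can be sketched in a few lines. The hard part the paper addresses, extracting corner-circle radii and the maximum inscribed circle from a digital contour via discrete geometry, is omitted here; the inputs are assumed given.

```python
import math

def wadell_roundness(corner_radii, r_inscribed):
    """Wadell roundness: mean radius of the corner circles divided by the
    radius of the maximum inscribed circle. Equals 1.0 for a disk."""
    if not corner_radii:
        raise ValueError("at least one corner radius is required")
    return sum(corner_radii) / len(corner_radii) / r_inscribed

def circularity(area, perimeter):
    """4*pi*A / P^2: a cheaper shape proxy (1.0 for a disk, smaller for
    angular outlines), of the kind usable to approximate roundness."""
    return 4 * math.pi * area / perimeter**2
```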

  10. Target Impact Detection Algorithm Using Computer-aided Design (CAD) Model Geometry

    DTIC Science & Technology

    2014-09-01

    UNCLASSIFIED. AD-E403 558. Technical Report ARMET-TR-13024: Target Impact Detection Algorithm Using Computer-Aided Design (CAD) Model Geometry. This report documents a method and algorithm to export geometry from a three-dimensional, computer-aided design (CAD) model in a format that can be

  11. Experimental demonstration of novel imaging geometries for x-ray fluorescence computed tomography

    PubMed Central

    Fu, Geng; Meng, Ling-Jian; Eng, Peter; Newville, Matt; Vargas, Phillip; La Riviere, Patrick

    2013-01-01

    Purpose: X-ray fluorescence computed tomography (XFCT) is an emerging imaging modality that maps the three-dimensional distribution of elements, generally metals, in ex vivo specimens and potentially in living animals and humans. At present, it is generally performed at synchrotrons, taking advantage of the high flux of monochromatic x rays, but recent work has demonstrated the feasibility of using laboratory-based x-ray tube sources. In this paper, the authors report the development and experimental implementation of two novel imaging geometries for mapping of trace metals in biological samples with ∼50–500 μm spatial resolution. Methods: One of the new imaging approaches involves illuminating and scanning a single slice of the object and imaging each slice's x-ray fluorescent emissions using a position-sensitive detector and a pinhole collimator. The other involves illuminating a single line through the object and imaging the emissions using a position-sensitive detector and a slit collimator. They have implemented both of these using synchrotron radiation at the Advanced Photon Source. Results: The authors show that it is possible to achieve 250 eV energy resolution using an electron multiplying CCD operating in a quasiphoton-counting mode. Doing so allowed them to generate elemental images using both of the novel geometries for imaging of phantoms and, for the second geometry, an osmium-stained zebrafish. Conclusions: The authors have demonstrated the feasibility of these two novel approaches to XFCT imaging. While they use synchrotron radiation in this demonstration, the geometries could readily be translated to laboratory systems based on tube sources. PMID:23718594

  12. Interplay of spatial aggregation and computational geometry in extracting diagnostic features from cardiac activation data.

    PubMed

    Ironi, Liliana; Tentoni, Stefania

    2012-09-01

    Functional imaging plays an important role in the assessment of organ functions, as it provides methods to represent the spatial behavior of diagnostically relevant variables within reference anatomical frameworks. The salient physical events that underlie a functional image can be unveiled by appropriate feature extraction methods capable of exploiting domain-specific knowledge and spatial relations at multiple abstraction levels and scales. In this work we focus on general feature extraction methods that can be applied to cardiac activation maps, a class of functional images that embed spatio-temporal information about the wavefront propagation. The described approach integrates a qualitative spatial reasoning methodology with techniques borrowed from computational geometry to provide a computational framework for the automated extraction of basic features of the activation wavefront kinematics and specific sets of diagnostic features that identify an important class of rhythm pathologies. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  13. Kinematics and computation of workspace for adaptive geometry structures

    NASA Astrophysics Data System (ADS)

    Pourki, Forouza; Sosa, Horacio

    1993-09-01

    A new feature in the design of smart structures is the capability of the structure to respond autonomously to undesirable phenomena and environment. This capability is often synonymous with the requirement that the structure should assume a set of different geometric shapes or adapt to a set of kinematic constraints to accomplish a maneuver. Systems with these characteristics have been referred to as `shape adaptive' or `variable geometry' structures. The present paper introduces a basis for the kinematics and workspace studies of statically deterministic truss structures which are shape adaptive. The difference between these structures and the traditional truss structures, which are merely built to support the weight and may be modelled by finite element methods, is the fact that these variable geometry structures allow for large (and nonlinear) deformations. On the other hand, these structures, unlike structures composed of well-investigated `four bar mechanisms,' are statically deterministic.

  14. Computational Approaches to Interface Design

    NASA Technical Reports Server (NTRS)

    Corker; Lebacqz, J. Victor (Technical Monitor)

    1997-01-01

    Tools which make use of computational processes - mathematical, algorithmic and/or knowledge-based - to perform portions of the design, evaluation and/or construction of interfaces have become increasingly available and powerful. Nevertheless, there is little agreement as to the appropriate role for a computational tool to play in the interface design process. Current tools fall into broad classes depending on which portions, and how much, of the design process they automate. The purpose of this panel is to review and generalize about computational approaches developed to date, discuss the tasks for which they are suited, and suggest methods to enhance their utility and acceptance. Panel participants represent a wide diversity of application domains and methodologies. This should provide for lively discussion about implementation approaches, accuracy of design decisions, acceptability of representational tradeoffs and the optimal role for a computational tool to play in the interface design process.

  15. Adjoint Sensitivity Computations for an Embedded-Boundary Cartesian Mesh Method and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2006-01-01

    Cartesian-mesh methods are perhaps the most promising approach for addressing the issues of flow solution automation for aerodynamic design problems. In these methods, the discretization of the wetted surface is decoupled from that of the volume mesh. This not only enables fast and robust mesh generation for geometry of arbitrary complexity, but also facilitates access to geometry modeling and manipulation using parametric Computer-Aided Design (CAD) tools. Our goal is to combine the automation capabilities of Cartesian methods with an efficient computation of design sensitivities. We address this issue using the adjoint method, where the computational cost of the design sensitivities, or objective function gradients, is essentially independent of the number of design variables. In previous work, we presented an accurate and efficient algorithm for the solution of the adjoint Euler equations discretized on Cartesian meshes with embedded, cut-cell boundaries. Novel aspects of the algorithm included the computation of surface shape sensitivities for triangulations based on parametric-CAD models and the linearization of the coupling between the surface triangulation and the cut-cells. The objective of the present work is to extend our adjoint formulation to problems involving general shape changes. Central to this development is the computation of volume-mesh sensitivities to obtain a reliable approximation of the objective function gradient. Motivated by the success of mesh-perturbation schemes commonly used in body-fitted unstructured formulations, we propose an approach based on a local linearization of a mesh-perturbation scheme similar to the spring analogy. This approach circumvents most of the difficulties that arise due to non-smooth changes in the cut-cell layer as the boundary shape evolves and provides a consistent approximation to the exact gradient of the discretized objective function. A detailed gradient accuracy study is presented to verify our approach.
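The adjoint method's key property, gradient cost essentially independent of the number of design variables, is easiest to see on a toy problem rather than the Euler equations. In the sketch below (all names hypothetical), the objective is J(x) = c^T u with A u = b(x): one adjoint solve of A^T lam = c replaces one linear solve per design variable; only cheap evaluations of b remain in the per-variable loop.

```python
import numpy as np

def gradient_via_adjoint(A, b_of_x, c, x, eps=1e-6):
    """Gradient of J(x) = c^T u(x) subject to A u = b(x).

    Uses a single adjoint solve A^T lam = c; the loop over design
    variables only finite-differences the cheap right-hand side b(x),
    never re-solving the (expensive) state equation.
    """
    lam = np.linalg.solve(A.T, c)            # one adjoint solve, total
    b0 = b_of_x(x)
    grad = np.empty_like(x)
    for i in range(x.size):
        xp = x.copy(); xp[i] += eps
        grad[i] = lam @ (b_of_x(xp) - b0) / eps   # lam^T * db/dx_i
    return grad
```

In the paper's setting, A stands for the linearized flow equations and b's sensitivities for the surface/volume-mesh linearizations; the scaling argument is the same.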

  16. Ideal spiral bevel gears: A new approach to surface geometry

    NASA Technical Reports Server (NTRS)

    Huston, R. L.; Coy, J. J.

    1980-01-01

    The fundamental geometrical characteristics of spiral bevel gear tooth surfaces are discussed. The parametric representation of an ideal spiral bevel tooth is developed based on the elements of involute geometry, differential geometry, and fundamental gearing kinematics. A foundation is provided for the study of nonideal gears and the effects of deviations from ideal geometry on the contact stresses, lubrication, wear, fatigue life, and gearing kinematics.

  17. Ideal spiral bevel gears - A new approach to surface geometry

    NASA Technical Reports Server (NTRS)

    Huston, R. L.; Coy, J. J.

    1980-01-01

    This paper discusses the fundamental geometrical characteristics of spiral bevel gear tooth surfaces. The parametric representation of an ideal spiral bevel tooth is developed. The development is based on the elements of involute geometry, differential geometry, and fundamental gearing kinematics. A foundation is provided for the study of nonideal gears and the effects of deviations from ideal geometry on the contact stresses, lubrication, wear, fatigue life, and gearing kinematics.

  18. Transport Equation Based Wall Distance Computations Aimed at Flows With Time-Dependent Geometry

    NASA Technical Reports Server (NTRS)

    Tucker, Paul G.; Rumsey, Christopher L.; Bartels, Robert E.; Biedron, Robert T.

    2003-01-01

    Eikonal, Hamilton-Jacobi and Poisson equations can be used for economical nearest wall distance computation and modification. Economical computations may be especially useful for aeroelastic and adaptive grid problems for which the grid deforms, and the nearest wall distance needs to be repeatedly computed. Modifications are directed at remedying turbulence model defects. For complex grid structures, implementation of the Eikonal and Hamilton-Jacobi approaches is not straightforward. This prohibits their use in industrial CFD solvers. However, both the Eikonal and Hamilton-Jacobi equations can be written in advection and advection-diffusion forms, respectively. These, like the Poisson's Laplacian, are commonly occurring industrial CFD solver elements. Use of the NASA CFL3D code to solve the Eikonal and Hamilton-Jacobi equations in advective-based forms is explored. The advection-based distance equations are found to have robust convergence. Geometries studied include single and two element airfoils, wing body and double delta configurations along with a complex electronics system. It is shown that for Eikonal accuracy, upwind metric differences are required. The Poisson approach is found effective and, since it does not require offset metric evaluations, easiest to implement. The sensitivity of flow solutions to wall distance assumptions is explored. Generally, results are not greatly affected by wall distance traits.
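The Poisson approach the abstract finds most effective solves Laplacian(phi) = -1 with phi = 0 on walls and then recovers the distance; the recovery formula below, d = -|grad phi| + sqrt(|grad phi|^2 + 2 phi), is the one commonly used in the wall-distance literature and is assumed here rather than taken from this abstract. A minimal 1D channel sketch, where the recovered distance is exact:

```python
import numpy as np

def poisson_wall_distance(n=101):
    """Wall distance in a unit channel from a Poisson equation.

    Solve d2(phi)/dy2 = -1 with phi = 0 at both walls, then recover
    d = -|grad phi| + sqrt(|grad phi|^2 + 2 phi)."""
    y = np.linspace(0.0, 1.0, n)
    h = y[1] - y[0]
    # tridiagonal second-difference operator for the interior nodes
    A = (np.diag(-2.0 * np.ones(n - 2))
         + np.diag(np.ones(n - 3), 1)
         + np.diag(np.ones(n - 3), -1)) / h**2
    phi = np.zeros(n)
    phi[1:-1] = np.linalg.solve(A, -np.ones(n - 2))
    grad = np.gradient(phi, h)
    d = -np.abs(grad) + np.sqrt(grad**2 + 2.0 * phi)
    return y, d
```

For the channel, phi = y(1-y)/2 and the formula returns exactly min(y, 1-y); on general grids it is only an approximation, which is the "wall distance traits" trade-off the abstract examines.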

  20. Molecular tailoring approach for geometry optimization of large molecules: energy evaluation and parallelization strategies.

    PubMed

    Ganesh, V; Dongare, Rameshwar K; Balanarayan, P; Gadre, Shridhar R

    2006-09-14

    A linear-scaling scheme for estimating the electronic energy, gradients, and Hessian of a large molecule at ab initio level of theory based on fragment set cardinality is presented. With this proposition, a general, cardinality-guided molecular tailoring approach (CG-MTA) for ab initio geometry optimization of large molecules is implemented. The method employs energy gradients extracted from fragment wave functions, enabling computations otherwise impractical on PC hardware. Further, the method is readily amenable to large scale coarse-grain parallelization with minimal communication among nodes, resulting in a near-linear speedup. CG-MTA is applied for density-functional-theory-based geometry optimization of a variety of molecules including alpha-tocopherol, taxol, gamma-cyclodextrin, and two conformations of polyglycine. In the tests performed, energy and gradient estimates obtained from CG-MTA during optimization runs show an excellent agreement with those obtained from actual computation. Accuracy of the Hessian obtained employing CG-MTA provides good hope for the application of Hessian-based geometry optimization to large molecules.
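The "fragment set cardinality" idea is an inclusion-exclusion over overlapping fragments: fragment energies are summed, then over-counted overlap energies are subtracted with alternating signs. The sketch below is a generic cardinality expression with a hypothetical additive energy function, not CG-MTA's actual ab initio machinery.

```python
from itertools import combinations

def cardinality_energy(fragments, energy_of):
    """Inclusion-exclusion estimate of a total energy from fragments:
        E = sum_k (-1)^(k+1) * sum over k-fold fragment intersections.

    `fragments` are sets of atom indices; `energy_of` maps a frozenset of
    atoms to its (hypothetical) fragment energy."""
    total = 0.0
    for k in range(1, len(fragments) + 1):
        sign = (-1) ** (k + 1)
        for combo in combinations(fragments, k):
            inter = frozenset.intersection(*map(frozenset, combo))
            if inter:                     # empty intersections contribute nothing
                total += sign * energy_of(inter)
    return total
```

The same signed sum applied to fragment gradients and Hessians is what allows each fragment computation to run independently, hence the near-linear parallel speedup reported.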

  1. Molecular tailoring approach for geometry optimization of large molecules: Energy evaluation and parallelization strategies

    NASA Astrophysics Data System (ADS)

    Ganesh, V.; Dongare, Rameshwar K.; Balanarayan, P.; Gadre, Shridhar R.

    2006-09-01

    A linear-scaling scheme for estimating the electronic energy, gradients, and Hessian of a large molecule at ab initio level of theory based on fragment set cardinality is presented. With this proposition, a general, cardinality-guided molecular tailoring approach (CG-MTA) for ab initio geometry optimization of large molecules is implemented. The method employs energy gradients extracted from fragment wave functions, enabling computations otherwise impractical on PC hardware. Further, the method is readily amenable to large scale coarse-grain parallelization with minimal communication among nodes, resulting in a near-linear speedup. CG-MTA is applied for density-functional-theory-based geometry optimization of a variety of molecules including α-tocopherol, taxol, γ-cyclodextrin, and two conformations of polyglycine. In the tests performed, energy and gradient estimates obtained from CG-MTA during optimization runs show an excellent agreement with those obtained from actual computation. Accuracy of the Hessian obtained employing CG-MTA provides good hope for the application of Hessian-based geometry optimization to large molecules.

  2. Analytical, computational and experimental studies of capillary flow in complex geometries

    NASA Astrophysics Data System (ADS)

    Peng, Yongqing

    The dynamic processes of capillary flow in complex geometries have been studied analytically, computationally and experimentally in this research. A general approach for modeling capillary flow in arbitrary irregular geometries with a straight axis of symmetry is proposed. Using this approach, the governing equation describing dynamic capillary rise in capillaries with nonuniform elliptical cross-section is first derived under the assumptions of a parabolic distribution of the axial velocity and a constant contact angle. The calculation results for capillary flow in different tubes with irregular walls show that, in comparison with existing models that have been tested, the present model can reduce the underestimation of nonuniformity effects. Using the perturbation method, an asymptotic solution of the flow field in nonuniform circular tubes is obtained and is shown to be superior to the traditional Hagen-Poiseuille solution when compared against numerical FLUENT results. A new DCA (dynamic contact angle) model, combining the current velocity-dependent model based on molecular-kinetic theory with an empirical time-dependent model based on experiments, is proposed to describe the dynamic transition process of the gas-liquid interface. The applicable scope of the new DCA model is extended to the entire process from the initial state to the equilibrium state. The capillary flow model is further developed by using the new velocity distribution and the DCA model. The proposed theoretical models are validated by a series of experiments on capillary flow in complex geometries. The industrial application of the research was explored by adopting the proposed model to describe the water flow through a multi-layer porous medium that is used in Procter & Gamble's dewatering device for the paper making industry.
Compared with the experimental data, the proposed model gives good predictions of the dewatering performance of the device, and hence, can potentially be
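
    The paper's own governing equations are not reproduced in this abstract. As a point of reference, the classical Lucas-Washburn law for a uniform circular capillary, which nonuniform-tube models such as this one generalize, can be sketched as follows; the default parameter values are illustrative (roughly water in a 100 µm tube) and are not taken from the paper.

```python
import math

def washburn_height(t, r=1e-4, gamma=0.0728, theta=0.0, mu=1e-3):
    """Penetration depth h(t) under the classical Lucas-Washburn law,
    neglecting gravity and inertia:
        h(t) = sqrt(r * gamma * cos(theta) * t / (2 * mu))
    r: tube radius [m], gamma: surface tension [N/m],
    theta: (constant) contact angle [rad], mu: dynamic viscosity [Pa s]."""
    return math.sqrt(r * gamma * math.cos(theta) * t / (2.0 * mu))
```

    Models for tubes with irregular walls replace the constant radius r with a shape-dependent correction, which is where the nonuniformity effects discussed in the abstract enter.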

  3. PERTURBATION APPROACH FOR QUANTUM COMPUTATION

    SciTech Connect

    G. P. BERMAN; D. I. KAMENEV; V. I. TSIFRINOVICH

    2001-04-01

    We discuss how to simulate errors in the implementation of simple quantum logic operations in a nuclear spin quantum computer with many qubits, using radio-frequency pulses. We verify our perturbation approach using the exact solutions for a relatively small number of qubits (L = 10).

  4. DMG-α--a computational geometry library for multimolecular systems.

    PubMed

    Szczelina, Robert; Murzyn, Krzysztof

    2014-11-24

    The DMG-α library grants researchers in the field of computational biology, chemistry, and biophysics access to open-source, easy-to-use, and intuitive software for performing fine-grained geometric analysis of molecular systems. The library is capable of computing power diagrams (weighted Voronoi diagrams) in three dimensions with 3D periodic boundary conditions, computing approximate projective 2D Voronoi diagrams on arbitrarily defined surfaces, performing shape properties recognition using α-shape theory, and can do exact Solvent Accessible Surface Area (SASA) computation. The software is written mainly as a template-based C++ library for greater performance, but a rich Python interface (pydmga) is provided as a convenient way to manipulate the DMG-α routines. To illustrate possible applications of the DMG-α library, we present results of sample analyses which allowed us to determine nontrivial geometric properties of two Escherichia coli-specific lipids as emerging from molecular dynamics simulations of relevant model bilayers.
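
    The pydmga interface itself is not documented in this abstract, so the following is a from-scratch sketch of the core primitive the library computes: locating a point's cell in a power diagram (weighted Voronoi diagram), including the minimum-image convention used for 3D periodic boundary conditions. All names are illustrative; a real implementation builds the full diagram rather than querying point by point.

```python
def power_cell(x, sites, weights):
    """Index of the power-diagram cell containing x.
    Power distance to site s_i with weight w_i: |x - s_i|^2 - w_i."""
    best, best_d = None, float("inf")
    for i, (s, w) in enumerate(zip(sites, weights)):
        d = sum((a - b) ** 2 for a, b in zip(x, s)) - w
        if d < best_d:
            best, best_d = i, d
    return best

def power_cell_pbc(x, sites, weights, box):
    """Same query under periodic boundary conditions: each displacement
    component is wrapped to the nearest periodic image (minimum image)."""
    best, best_d = None, float("inf")
    for i, (s, w) in enumerate(zip(sites, weights)):
        d = 0.0
        for a, b, L in zip(x, s, box):
            dx = a - b
            dx -= L * round(dx / L)  # minimum-image convention
            d += dx * dx
        d -= w
        if d < best_d:
            best, best_d = i, d
    return best
```

    With all weights equal, the power diagram reduces to the ordinary Voronoi diagram; increasing a site's weight enlarges its cell.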

  5. High performance parallel computing of flows in complex geometries

    NASA Astrophysics Data System (ADS)

    Gicquel, Laurent Y. M.; Gourdain, N.; Boussuge, J.-F.; Deniau, H.; Staffelbach, G.; Wolf, P.; Poinsot, Thierry

    2011-02-01

    Efficient numerical tools that take advantage of the ever-increasing power of high-performance computers have become key elements in the fields of energy supply and transportation, not only from a purely scientific point of view, but also at the design stage in industry. Indeed, flow phenomena that occur in or around industrial applications such as gas turbines or aircraft are still not mastered. In fact, most Computational Fluid Dynamics (CFD) predictions produced today focus on reduced or simplified versions of the real systems and are usually solved with a steady-state assumption. This article shows how recent developments of CFD codes and parallel computer architectures can help overcome this barrier. With this new environment, new scientific and technological challenges can be addressed, provided that thousands of computing cores are efficiently used in parallel. Strategies of modern flow solvers are discussed with particular emphasis on mesh-partitioning, load balancing and communication. These concepts are used in two CFD codes developed by CERFACS: a multi-block structured code dedicated to aircraft and turbo-machinery as well as an unstructured code for gas turbine flow predictions. Leading-edge computations obtained with these high-end massively parallel CFD codes are illustrated and discussed in the context of aircraft, turbo-machinery and gas turbine applications. Finally, future developments of CFD and high-end computers are proposed to provide leading-edge tools and end applications with strong industrial implications at the design stage of the next generation of aircraft and gas turbines.

  6. Comparison of micro-computed tomography and laser scanning for reverse engineering orthopaedic component geometries.

    PubMed

    Teeter, Matthew G; Brophy, Paul; Naudie, Douglas D R; Holdsworth, David W

    2012-03-01

    A significant amount of research has been undertaken to evaluate the function of implanted joint replacement components. Many of these studies require the acquisition of an accurate three-dimensional geometric model of the various implant components, using methods such as micro-computed tomography or laser scanning. The purpose of this study was to compare micro-computed tomography and laser scanning for obtaining component geometries. Five never-implanted polyethylene tibial inserts of one type were scanned with both micro-computed tomography and laser scanning to determine the repeatability of each method and to measure any deviations between the geometries acquired from the different scans. Overall, good agreement was found between the micro-computed tomography and laser scans, to within 71 microm on average. Micro-computed tomography was found to have superior repeatability to laser scanning (mean of 1 microm for micro-computed tomography versus 19 microm for laser scans). Micro-computed tomography may be preferred for visualizing small surface features, whereas laser scanning may be preferred for acquiring the geometry of metal objects to avoid computed tomography artifacts. In conclusion, the choice of micro-computed tomography versus laser scanning for acquiring orthopaedic component geometries will likely involve considerations of user preference, the specific application the scan will be used for, and the availability of each system.
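
    A minimal sketch of the kind of deviation measurement described here: the mean nearest-neighbour distance from one scanned point cloud to another. This brute-force version is illustrative only; real comparisons of registered scans would use meshes and a spatial index rather than an all-pairs search.

```python
import math

def mean_deviation(cloud_a, cloud_b):
    """Mean nearest-neighbour distance from points in cloud_a to cloud_b,
    a simple one-sided measure of agreement between two registered scans."""
    total = 0.0
    for p in cloud_a:
        total += min(math.dist(p, q) for q in cloud_b)
    return total / len(cloud_a)
```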

  7. Computer-Generated Geometry Instruction: A Preliminary Study

    ERIC Educational Resources Information Center

    Kang, Helen W.; Zentall, Sydney S.

    2011-01-01

    This study hypothesized that increased intensity of graphic information, presented in computer-generated instruction, could be differentially beneficial for students with hyperactivity and inattention by improving their ability to sustain attention and hold information in-mind. To this purpose, 18 2nd-4th grade students, recruited from general…

  9. Tensor methodology and computational geometry in direct computational experiments in fluid mechanics

    NASA Astrophysics Data System (ADS)

    Degtyarev, Alexander; Khramushin, Vasily; Shichkina, Julia

    2017-07-01

    The paper considers a generalized functional and algorithmic construction of direct computational experiments in fluid dynamics. The notation of tensor mathematics is naturally embedded in the finite-element operations used to construct numerical schemes. A large fluid particle, which has a finite size, its own weight, and internal displacement and deformation, is considered as the elementary computational object. The tensor representation of computational objects yields a straightforward, linear, and unique approximation of elementary volumes and of the fluid particles inside them. The proposed approach allows the use of explicit numerical schemes, an important condition for increasing the efficiency of the developed algorithms through numerical procedures with natural parallelism. Among the advantages of the proposed approach is the representation of the motion of large particles of a continuous medium in dual coordinate systems, with computational operations performed in the projections of these two coordinate systems via direct and inverse transformations. A new method for the mathematical representation and synthesis of computational experiments based on the large-particle method is thus proposed.

  10. A functional approach to geometry optimization of complex systems

    NASA Astrophysics Data System (ADS)

    Maslen, P. E.

    A quadratically convergent procedure is presented for the geometry optimization of complex systems, such as biomolecules and molecular complexes. The costly evaluation of the exact Hessian is avoided by expanding the density functional to second order in both nuclear and electronic variables, and then searching for the minimum of the quadratic functional. The dependence of the functional on the choice of nuclear coordinate system is described, and illustrative geometry optimizations using Cartesian and internal coordinates are presented for Taxol™.
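
    The core of any quadratically convergent optimization step is solving the Newton equations H dx = -g for the quadratic model of the objective. A minimal 2-D sketch with a hand-inverted 2x2 Hessian follows; this is a generic Newton iteration, not Maslen's density-functional expansion, and the test function is illustrative.

```python
def newton_minimize(grad, hess, x0, tol=1e-10, max_iter=50):
    """Newton's method for a 2-D minimization: at each step solve
    H dx = -g using the explicit 2x2 inverse, then update the point."""
    x, y = x0
    for _ in range(max_iter):
        gx, gy = grad(x, y)
        (a, b), (c, d) = hess(x, y)
        det = a * d - b * c
        # inverse of [[a, b], [c, d]] is (1/det) * [[d, -b], [-c, a]]
        dx = -(d * gx - b * gy) / det
        dy = -(-c * gx + a * gy) / det
        x, y = x + dx, y + dy
        if gx * gx + gy * gy < tol * tol:
            break
    return x, y
```

    For a quadratic objective a single Newton step lands exactly on the minimum, which is what "quadratically convergent" buys near the solution.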

  11. MHRDRing Z-Pinches and Related Geometries: Four Decades of Computational Modeling Using Still Unconventional Methods

    SciTech Connect

    Lindemuth, Irvin R.

    2009-01-21

    For approximately four decades, Z-pinches and related geometries have been computationally modeled using unique Alternating Direction Implicit (ADI) numerical methods. Computational results have provided illuminating and often provocative interpretations of experimental results. A number of past and continuing applications are reviewed and discussed.

  12. Application of Computer Axial Tomography (CAT) to measuring crop canopy geometry. [corn and soybeans

    NASA Technical Reports Server (NTRS)

    Bauer, M. E.; Vanderbilt, V. C. (Principal Investigator); Kilgore, R. W.

    1981-01-01

    The feasibility of using the principles of computer axial tomography (CAT) to quantify the structure of crop canopies was investigated, because six variables are needed to describe the position and orientation with time of a small piece of canopy foliage. Several cross sections were cut through the foliage of healthy, green corn and soybean canopies in the dent and full pod development stages, respectively. A photograph of each cross section representing the intersection of a plane with the foliage was enlarged, and the air-foliage boundaries delineated by the plane were digitized. A computer program was written and used to reconstruct the cross section of the canopy. The approach used in applying optical computer axial tomography to measuring crop canopy geometry shows promise of being able to provide needed geometric information for input data to canopy reflectance models. The difficulty of using the CAT scanner to measure large canopies of crops like corn is discussed, and a solution is proposed involving the measurement of plants one at a time.

  13. Potts models with magnetic field: Arithmetic, geometry, and computation

    NASA Astrophysics Data System (ADS)

    Dasu, Shival; Marcolli, Matilde

    2015-11-01

    We give a sheaf theoretic interpretation of Potts models with external magnetic field, in terms of constructible sheaves and their Euler characteristics. We show that the polynomial countability question for the hypersurfaces defined by the vanishing of the partition function is affected by changes in the magnetic field: elementary examples suffice to see non-polynomially countable cases that become polynomially countable after a perturbation of the magnetic field. The same recursive formula for the Grothendieck classes, under edge-doubling operations, holds as in the case without magnetic field, but the closed formulae for specific examples like banana graphs differ in the presence of magnetic field. We give examples of computation of the Euler characteristic with compact support, for the set of real zeros, and find a similar exponential growth with the size of the graph. This can be viewed as a measure of topological and algorithmic complexity. We also consider the computational complexity question for evaluations of the polynomial, and show both tractable and NP-hard examples, using dynamic programming.
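
    For small graphs, the partition function whose vanishing locus is studied here can be evaluated by brute force. Below is a sketch for the q-state Potts model with a magnetic field coupled to one chosen state; conventions for the field term vary, so this normalization is illustrative rather than the one used in the paper.

```python
from itertools import product
import math

def potts_partition(q, edges, n, J, h, beta=1.0):
    """Brute-force partition function of the q-state Potts model on n sites
    with edge list `edges`, coupling J, and a field h on state 0:
        Z = sum over configurations sigma of
            exp(beta * (J * #aligned edges + h * #sites in state 0))."""
    Z = 0.0
    for sigma in product(range(q), repeat=n):
        E = J * sum(sigma[i] == sigma[j] for i, j in edges)
        E += h * sum(s == 0 for s in sigma)
        Z += math.exp(beta * E)
    return Z
```

    The cost is q^n configurations, which is exactly why the paper's complexity questions (polynomial countability, NP-hardness of evaluations) are nontrivial.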

  14. Computational approaches for drug discovery.

    PubMed

    Hung, Che-Lun; Chen, Chi-Chun

    2014-09-01

    Cellular proteins are the mediators of multiple organism functions, being involved in physiological mechanisms and disease. By discovering lead compounds that affect the function of target proteins, the target diseases or physiological mechanisms can be modulated. Based on knowledge of the ligand-receptor interaction, the chemical structures of leads can be modified to improve efficacy and selectivity and to reduce side effects. One rational drug design technology, which enables drug discovery based on knowledge of target structures, functional properties and mechanisms, is computer-aided drug design (CADD). The application of CADD can be cost-effective: experiments comparing predicted and actual drug activity yield results that can be used iteratively to improve compound properties. The two major CADD-based approaches are structure-based drug design, where protein structures are required, and ligand-based drug design, where ligands and ligand activities can be used to design compounds interacting with the protein structure. Approaches in structure-based drug design include docking, de novo design, fragment-based drug discovery and structure-based pharmacophore modeling. Approaches in ligand-based drug design include quantitative structure-affinity relationships and pharmacophore modeling based on ligand properties. Based on whether the structure of the receptor and its interaction with the ligand are known, different design strategies can be selected. After lead compounds are generated, the rule of five can be used to assess whether these have drug-like properties. Several quality validation methods, such as cost function analysis, Fisher's cross-validation analysis and the goodness-of-hit test, can be used to estimate the metrics of different drug design strategies. To further improve CADD performance, multiple computers and graphics processing units may be applied to reduce costs. © 2014 Wiley Periodicals, Inc.
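
    The rule of five mentioned above is simple to state in code. The thresholds below are Lipinski's standard ones (molecular weight ≤ 500 Da, logP ≤ 5, ≤ 5 hydrogen-bond donors, ≤ 10 hydrogen-bond acceptors), with the common allowance of at most one violation:

```python
def passes_rule_of_five(mol_weight, logp, h_donors, h_acceptors):
    """Lipinski's rule of five: flag compounds likely to have poor oral
    bioavailability. At most one violation is usually tolerated."""
    violations = sum([
        mol_weight > 500,   # molecular weight in daltons
        logp > 5,           # octanol-water partition coefficient
        h_donors > 5,       # hydrogen-bond donors
        h_acceptors > 10,   # hydrogen-bond acceptors
    ])
    return violations <= 1
```

    For example, an aspirin-like profile (MW 180, logP 1.2, 1 donor, 4 acceptors) passes, while a large lipophilic compound violating all four criteria fails.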

  15. Virtual photons in imaginary time: Computing Casimir forces in new geometries

    NASA Astrophysics Data System (ADS)

    Johnson, Steven G.

    2009-03-01

    One of the most dramatic manifestations of the quantum nature of light in the past half-century has been the Casimir force: a force between neutral objects at close separations caused by quantum vacuum fluctuations in the electromagnetic fields. In classical photonics, wavelength-scale structures can be designed to dramatically alter the behavior of light, so it is natural to consider whether analogous geometry-based effects occur for Casimir forces. However, this problem turns out to be surprisingly difficult for all but the simplest planar geometries. (The deceptively simple case of an infinite plate and infinite cylinder, for perfect metals, was first solved in 2006.) Many formulations of the Casimir force, indeed, correspond to impossibly hard numerical problems. We will describe how the availability of large-scale computing resources in NSF's Teragrid, combined with reformulations of the Casimir-force problem oriented towards numerical computation, are enabling the exploration of Casimir forces in new regimes of geometry and materials.

  16. Slant Path Distances Through Cells in Cylindrical Geometry and an Application to the Computation of Isophotes

    SciTech Connect

    Rodney Whitaker; Eugene Symbalisty

    2007-12-17

    In computer programs involving two-dimensional cylindrical geometry, it is often necessary to calculate the slant path distance in a given direction from a point to the boundary of a mesh cell. A subroutine, HOWFAR, has been written that accomplishes this, and is very economical in computer time. An example of its use is given in constructing the isophotes for a low altitude nuclear fireball.
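
    The HOWFAR subroutine itself is not listed in this record, but the underlying geometric problem can be sketched from scratch: the slant distance from an interior point to the boundary of an r-z cell is the smallest positive ray parameter among the two z-planes (linear equations) and the two bounding cylinders (quadratics in the ray parameter). All names below are illustrative.

```python
import math

def slant_path(pos, direc, r_in, r_out, z_lo, z_hi):
    """Distance along unit direction `direc` from `pos` (assumed inside the
    cell) to the boundary of an r-z cylindrical mesh cell. Not the authors'
    HOWFAR; a from-scratch sketch of the same geometric problem."""
    x, y, z = pos
    ux, uy, uz = direc
    hits = []
    # planar caps z = z_lo and z = z_hi
    if uz > 0:
        hits.append((z_hi - z) / uz)
    elif uz < 0:
        hits.append((z_lo - z) / uz)
    # cylindrical surfaces r = R: |(x, y) + t*(ux, uy)|^2 = R^2
    a = ux * ux + uy * uy
    if a > 0:
        b = x * ux + y * uy
        for R in (r_in, r_out):
            disc = b * b - a * (x * x + y * y - R * R)
            if disc >= 0:
                root = math.sqrt(disc)
                for t in ((-b - root) / a, (-b + root) / a):
                    if t > 1e-12:
                        hits.append(t)
    return min(hits)
```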

  17. Identifying Critical Learner Traits in a Dynamic Computer-Based Geometry Program.

    ERIC Educational Resources Information Center

    Hannafin, Robert D.; Scott, Barry N.

    1998-01-01

    Investigated the effects of student working-memory capacity, preference for amount of instruction, spatial problem-solving ability, and school mathematics grades on eighth graders' recall of factual information and conceptual understanding. Pairs of students worked through 16 activities using a dynamic, computer-based geometry program. Presents…

  18. Comparative Effects of Two Modes of Computer-Assisted Instructional Package on Solid Geometry Achievement

    ERIC Educational Resources Information Center

    Gambari, Isiaka Amosa; Ezenwa, Victoria Ifeoma; Anyanwu, Romanus Chogozie

    2014-01-01

    The study examined the effects of two modes of computer-assisted instructional package on solid geometry achievement amongst senior secondary school students in Minna, Niger State, Nigeria. Also, the influence of gender on the performance of students exposed to CAI(AT) and CAI(AN) packages were examined. This study adopted a pretest-posttest…

  19. Computational Approaches to Vestibular Research

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Wade, Charles E. (Technical Monitor)

    1994-01-01

    The Biocomputation Center at NASA Ames Research Center is dedicated to a union between computational, experimental and theoretical approaches to the study of neuroscience and of life sciences in general. The current emphasis is on computer reconstruction and visualization of vestibular macular architecture in three dimensions (3-D), and on mathematical modeling and computer simulation of neural activity in the functioning system. Our methods are being used to interpret the influence of spaceflight on mammalian vestibular maculas in a model system, that of the adult Sprague-Dawley rat. More than twenty 3-D reconstructions of type I and type II hair cells and their afferents have been completed by digitization of contours traced from serial sections photographed in a transmission electron microscope. This labor-intensive method has now been replaced by a semiautomated method developed in the Biocomputation Center in which conventional photography is eliminated. All viewing, storage and manipulation of original data is done using Silicon Graphics workstations. Recent improvements to the software include a new mesh generation method for connecting contours. This method will permit the investigator to describe any surface, regardless of complexity, including highly branched structures such as are routinely found in neurons. This same mesh can be used for 3-D, finite volume simulation of synapse activation and voltage spread on neuronal surfaces visualized via the reconstruction process. These simulations help the investigator interpret the relationship between neuroarchitecture and physiology, and are of assistance in determining which experiments will best test theoretical interpretations. Data are also used to develop abstract, 3-D models that dynamically display neuronal activity ongoing in the system. Finally, the same data can be used to visualize the neural tissue in a virtual environment.
Our exhibit will depict capabilities of our computational approaches and

  1. The flux-coordinate independent approach applied to X-point geometries

    SciTech Connect

    Hariri, F.; Hill, P.; Ottaviani, M.; Sarazin, Y.

    2014-08-15

    A Flux-Coordinate Independent (FCI) approach for anisotropic systems, not based on magnetic flux coordinates, has been introduced in Hariri and Ottaviani [Comput. Phys. Commun. 184, 2419 (2013)]. In this paper, we show that the approach can tackle magnetic configurations including X-points. Using the code FENICIA, an equilibrium with a magnetic island has been used to show the robustness of the FCI approach to cases in which a magnetic separatrix is present in the system, either by design or as a consequence of instabilities. Numerical results are in good agreement with the analytic solutions of the sound-wave propagation problem. Conservation properties are verified. Finally, the critical gain of the FCI approach in situations including the magnetic separatrix with an X-point is demonstrated by a fast convergence of the code with the numerical resolution in the direction of symmetry. The results highlighted in this paper show that the FCI approach can efficiently deal with X-point geometries.

  2. Computational geometry for patient-specific reconstruction and meshing of blood vessels from MR and CT angiography.

    PubMed

    Antiga, Luca; Ene-Iordache, Bogdan; Remuzzi, Andrea

    2003-05-01

    Investigation of three-dimensional (3-D) geometry and fluid dynamics in human arteries is an important issue in vascular disease characterization and assessment. Thanks to recent advances in magnetic resonance (MR) and computed tomography (CT), it is now possible to address the problem of patient-specific modeling of blood vessels, in order to take into account the interindividual anatomic variability of vasculature. Generation of models suitable for computational fluid dynamics is still commonly performed by semiautomatic procedures, in general based on operator-dependent tasks, which cannot be easily extended to a significant number of clinical cases. In this paper, we overcome these limitations by making use of computational geometry techniques. In particular, 3-D modeling was carried out by means of a 3-D level set approach. Model editing was also implemented, ensuring a harmonic distribution of mean curvature vectors on the surface, and model geometric analysis was performed with a novel approach based on solving the Eikonal equation on the Voronoi diagram. This approach provides calculation of central paths, maximum inscribed sphere estimation and geometric characterization of the surface. Generation of adaptive-thickness boundary-layer finite elements is finally presented. The use of the techniques presented here makes it possible to introduce patient-specific modeling of blood vessels at the clinical level.

  3. Fuzzy multiple linear regression: A computational approach

    NASA Technical Reports Server (NTRS)

    Juang, C. H.; Huang, X. H.; Fleming, J. W.

    1992-01-01

    This paper presents a new computational approach for performing fuzzy regression. In contrast to Bardossy's approach, the new approach, while dealing with fuzzy variables, closely follows the conventional regression technique. In this approach, treatment of fuzzy input is more 'computational' than 'symbolic.' The following sections first outline the formulation of the new approach, then deal with the implementation and computational scheme, and this is followed by examples to illustrate the new procedure.
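
    The paper's formulation is not reproduced in this abstract. A much-simplified illustration of treating fuzzy input "computationally" is to fit the centers of triangular fuzzy observations by ordinary least squares and carry the spreads separately; this is illustrative only and is not the authors' procedure.

```python
def fuzzy_linear_fit(xs, y_centers, y_spreads):
    """Fit y = slope*x + intercept to the centers of triangular fuzzy
    observations (center, spread) by ordinary least squares; the fitted
    output spread is taken, crudely, as the mean observed spread."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(y_centers) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, y_centers))
    slope = sxy / sxx
    intercept = my - slope * mx
    spread = sum(y_spreads) / n
    return slope, intercept, spread
```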

  4. New approach based on tetrahedral-mesh geometry for accurate 4D Monte Carlo patient-dose calculation

    NASA Astrophysics Data System (ADS)

    Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Kim, Seonghoon; Sohn, Jason W.

    2015-02-01

    In the present study, to achieve accurate 4D Monte Carlo dose calculation in radiation therapy, we devised a new approach that combines (1) modeling of the patient body using tetrahedral-mesh geometry based on the patient’s 4D CT data, (2) continuous movement/deformation of the tetrahedral patient model by interpolation of deformation vector fields acquired through deformable image registration, and (3) direct transportation of radiation particles during the movement and deformation of the tetrahedral patient model. The results of our feasibility study show that it is certainly possible to construct 4D patient models (= phantoms) with sufficient accuracy using the tetrahedral-mesh geometry and to directly transport radiation particles during continuous movement and deformation of the tetrahedral patient model. This new approach not only produces more accurate dose distribution in the patient but also replaces the current practice of using multiple 3D voxel phantoms and combining multiple dose distributions after Monte Carlo simulations. For routine clinical application of our new approach, the use of fast automatic segmentation algorithms is a must. In order to achieve, simultaneously, both dose accuracy and computation speed, the number of tetrahedrons for the lungs should be optimized. Although the current computation speed of our new 4D Monte Carlo simulation approach is slow (i.e. ~40 times slower than that of the conventional dose accumulation approach), this problem is resolvable by developing, in Geant4, a dedicated navigation class optimized for particle transportation in tetrahedral-mesh geometry.
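
    Step (2) above, continuous movement/deformation of the tetrahedral model, reduces at its simplest to interpolating node positions between two phases along the deformation vector field. A hypothetical sketch using linear interpolation follows; the interpolation scheme actually used in the paper may differ.

```python
def interpolate_nodes(nodes_a, nodes_b, t):
    """Linearly interpolate tetrahedral-mesh node positions between two
    respiratory phases, t in [0, 1]. nodes_a and nodes_b are corresponding
    lists of (x, y, z) tuples; the connectivity is unchanged."""
    return [tuple(a + t * (b - a) for a, b in zip(pa, pb))
            for pa, pb in zip(nodes_a, nodes_b)]
```

    Because only node coordinates move while the tetrahedral connectivity is fixed, particles can be transported through a smoothly deforming geometry rather than through a sequence of separate voxel phantoms.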

  6. The influence of the patient size and geometry on cone beam-computed tomography hounsfield unit.

    PubMed

    Ping, Heng Siew; Kandaiya, Sivamany

    2012-07-01

    The objective of this work is to study the influence of patient size and geometry on the CBCT Hounsfield Unit, and the accuracy of the Hounsfield Unit to electron density (HU-ED) calibration using a patient-specific HU-ED mapping method for dose calculation. Two clinical cases, namely a nasopharyngeal carcinoma (NPC) case and a prostate case, for 4 patients of different size and geometry, were enrolled to assess the impact of size and geometry on the CBCT Hounsfield Unit. The accuracy of the patient-specific HU-ED mapping method was validated by comparing dose distributions based on planning CT and CBCT, dose-volume based indices and the digitally reconstructed radiograph (DRR) by analyzing their line profile plots. Significant differences in Hounsfield unit and line profile plots were found for the NPC and prostate cases. The doses computed based on planning CT data sets and CBCT data sets for both clinical cases agree to within 1% for planning target volumes and 3% for organs at risk. The data show a high dependence of HU on patient size and geometry; thus, one CBCT HU-ED calibration curve made for a single size and geometry will not be accurate for a patient of a different size and geometry.
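
    An HU-ED mapping is, in its usual form, a piecewise-linear calibration curve. A sketch with hypothetical calibration points follows; real curves are measured per scanner and, as this study argues, should be made patient-specific in size and geometry.

```python
def hu_to_ed(hu, calib):
    """Piecewise-linear lookup from Hounsfield Units to relative electron
    density, clamped at the ends of the calibration range.
    `calib` is a list of (HU, ED) calibration points."""
    pts = sorted(calib)
    if hu <= pts[0][0]:
        return pts[0][1]
    if hu >= pts[-1][0]:
        return pts[-1][1]
    for (h0, e0), (h1, e1) in zip(pts, pts[1:]):
        if h0 <= hu <= h1:
            return e0 + (e1 - e0) * (hu - h0) / (h1 - h0)

# Hypothetical calibration points (air, water, dense bone), for illustration:
calib = [(-1000, 0.001), (0, 1.0), (1000, 1.6)]
```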

  8. Computational study of Coanda based Fluidic Thrust Vectoring system for optimising Coanda geometry

    NASA Astrophysics Data System (ADS)

    Bharathwaj, R.; Giridharan, P.; Karthick, K.; Prasath, C. Hari; Marimuthu, K. Prakash

    2016-09-01

    This paper presents a study intended to identify the optimum geometries for different operating conditions of a Coanda-based fluidic thrust vectoring system using the ACHEON nozzle (Aerial Coanda High Efficiency Orienting Nozzle). The computational study was performed using the software ANSYS. The study aims to optimise the Coanda surface by identifying the most appropriate geometry of the surface for different operating conditions. The radius of curvature of the Coanda surface was modified across the studies, and different levels of truncation of the surface were also examined. In this study the radius varies from 52.73 mm to 62.67 mm.

  9. Geometry of Dynamic Large Networks: A Scaling and Renormalization Group Approach

    DTIC Science & Technology

    2013-12-11

    Final performance report for the grant "Geometry of Dynamic Large Networks: A Scaling and Renormalization Group Approach" (Iraj Saniee, Lucent Technologies Inc., 11 December 2013). The report covers congestion metrics for prototypical synthetic and real-life large graphs and the analytical estimation of these metrics for such graphs. In our earlier

  10. Multivariate geometry as an approach to algal community analysis

    USGS Publications Warehouse

    Allen, T.F.H.; Skagen, S.

    1973-01-01

    Multivariate analyses are put in the context of more usual approaches to phycological investigations. The intuitive common sense involved in methods of ordination, classification and discrimination is emphasised by simple geometric accounts which avoid jargon and matrix algebra. Warnings are given that artifacts result from technique abuses by the naive or over-enthusiastic. An analysis of a simple periphyton data set is presented as an example of the approach. Suggestions are made as to situations in phycological investigations where the techniques could be appropriate. The discipline is reprimanded for its neglect of the multivariate approach.

  11. An algebraic geometry approach to protein structure determination from NMR data.

    PubMed

    Wang, Lincong; Mettu, Ramgopal R; Donald, Bruce Randall

    2005-01-01

    Our paper describes the first provably-efficient algorithm for determining protein structures de novo, solely from experimental data. We show how the global nature of a certain kind of NMR data provides quantifiable complexity-theoretic benefits, allowing us to classify our algorithm as running in polynomial time. While our algorithm uses NMR data as input, it is the first polynomial-time algorithm to compute high-resolution structures de novo using any experimentally-recorded data, from either NMR spectroscopy or X-Ray crystallography. Improved algorithms for protein structure determination are needed, because currently, the process is expensive and time-consuming. For example, an area of intense research in NMR methodology is automated assignment of nuclear Overhauser effect (NOE) restraints, in which structure determination sits in a tight inner-loop (cycle) of assignment/refinement. These algorithms are very time-consuming, and typically require a large cluster. Thus, algorithms for protein structure determination that are known to run in polynomial time and provide guarantees on solution accuracy are likely to have great impact in the long-term. Methods stemming from a technique called "distance geometry embedding" do come with provable guarantees, but the NP-hardness of these problem formulations implies that in the worst case these techniques cannot run in polynomial time. We are able to avoid the NP-hardness by (a) some mild assumptions about the protein being studied, (b) the use of residual dipolar couplings (RDCs) instead of a dense network of NOEs, and (c) novel algorithms and proofs that exploit the biophysical geometry of (a) and (b), drawing on a variety of computer science, computational geometry, and computational algebra techniques. In our algorithm, RDC data, which gives global restraints on the orientation of internuclear bond vectors, is used in conjunction with very sparse NOE data to obtain a polynomial-time algorithm for protein structure

  12. Comparative study of auxetic geometries by means of computer-aided design and engineering

    NASA Astrophysics Data System (ADS)

    Álvarez Elipe, Juan Carlos; Díaz Lantada, Andrés

    2012-10-01

    Auxetic materials (or metamaterials) are those with a negative Poisson ratio (NPR) and display the unexpected property of lateral expansion when stretched, as well as an equal and opposing densification when compressed. Such geometries are being progressively employed in the development of novel products, especially in the fields of intelligent expandable actuators, shape morphing structures and minimally invasive implantable devices. Although several auxetic and potentially auxetic geometries have been summarized in previous reviews and research, precise information regarding relevant properties for design tasks is not always provided. In this study we present a comparative study of two-dimensional and three-dimensional auxetic geometries carried out by means of computer-aided design and engineering tools (from now on CAD-CAE). The first part of the study is focused on the development of a CAD library of auxetics. Once the library is developed we simulate the behavior of the different auxetic geometries and elaborate a systematic comparison, considering relevant properties of these geometries, such as Poisson ratio(s), maximum volume or area reductions attainable and equivalent Young’s modulus, hoping it may provide useful information for future designs of devices based on these interesting structures.
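    The defining property compared in the study, the negative Poisson ratio, is computed directly from the strains a CAE simulation reports. A minimal sketch with illustrative strain values (not the paper's results):

    ```python
    def poisson_ratio(transverse_strain, axial_strain):
        """Poisson ratio nu = -eps_transverse / eps_axial."""
        return -transverse_strain / axial_strain

    # A conventional material contracts laterally when stretched:
    print(poisson_ratio(-0.01, 0.03))  # ~ 0.33
    # An auxetic geometry expands laterally under the same axial stretch,
    # giving the negative Poisson ratio (NPR) described in the abstract:
    print(poisson_ratio(0.01, 0.03))   # ~ -0.33
    ```
    
    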

  13. Computation of Transverse Injection Into Supersonic Crossflow With Various Injector Orifice Geometries

    NASA Technical Reports Server (NTRS)

    Foster, Lancert; Engblom, William A.

    2003-01-01

    Computational results are presented for the performance and flow behavior of various injector geometries employed in transverse injection into a non-reacting Mach 1.2 flow. 3-D Reynolds-Averaged Navier-Stokes (RANS) results are obtained for the various injector geometries using the Wind code with Menter's Shear Stress Transport turbulence model in both single- and multi-species modes. Computed results for the injector mixing, penetration, and induced wall forces are presented. In the case of rectangular injectors, those longer in the direction of the freestream flow are predicted to generate the most mixing and penetration of the injector flow into the primary stream. These injectors are also predicted to provide the largest discharge coefficients and induced wall forces. Minor performance differences are indicated among diamond, circle, and square orifices. Grid sensitivity study results are presented which indicate consistent qualitative trends in the injector performance comparisons with increasing grid fineness.

  14. Efficient Probability of Failure Calculations for QMU using Computational Geometry LDRD 13-0144 Final Report

    SciTech Connect

    Mitchell, Scott A.; Ebeida, Mohamed Salah; Romero, Vicente J.; Swiler, Laura Painton; Rushdi, Ahmad A.; Abdelkader, Ahmad

    2015-09-01

    This SAND report summarizes our work on the Sandia National Laboratory LDRD project titled "Efficient Probability of Failure Calculations for QMU using Computational Geometry" which was project #165617 and proposal #13-0144. This report merely summarizes our work. Those interested in the technical details are encouraged to read the full published results, and contact the report authors for the status of the software and follow-on projects.

  15. Design geometry and design/off-design performance computer codes for compressors and turbines

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.

    1995-01-01

    This report summarizes some NASA Lewis (i.e., government owned) computer codes capable of being used for airbreathing propulsion system studies to determine the design geometry and to predict the design/off-design performance of compressors and turbines. These are not CFD codes; velocity-diagram energy and continuity computations are performed fore and aft of the blade rows using meanline, spanline, or streamline analyses. Losses are provided by empirical methods. Both axial-flow and radial-flow configurations are included.

  16. New method for computing ideal MHD normal modes in axisymmetric toroidal geometry

    SciTech Connect

    Wysocki, F.; Grimm, R.C.

    1984-11-01

    Analytic elimination of the two magnetic surface components of the displacement vector permits the normal mode ideal MHD equations to be reduced to a scalar form. A Galerkin procedure, similar to that used in the PEST codes, is implemented to determine the normal modes computationally. The method retains the efficient stability capabilities of the PEST 2 energy principle code, while allowing computation of the normal mode frequencies and eigenfunctions, if desired. The procedure is illustrated by comparison with earlier versions of PEST and by application to tilting modes in spheromaks, and to stable discrete Alfven waves in tokamak geometry.

  17. Computer Algebra, Instrumentation and the Anthropological Approach

    ERIC Educational Resources Information Center

    Monaghan, John

    2007-01-01

    This article considers research and scholarship on the use of computer algebra in mathematics education following the instrumentation and the anthropological approaches. It outlines what these approaches are, positions them with regard to other approaches, examines tensions between the two approaches and makes suggestions for how work in this…

  18. A new approach for turbulent simulations in complex geometries

    NASA Astrophysics Data System (ADS)

    Israel, Daniel M.

    Historically turbulence modeling has been sharply divided into Reynolds-averaged Navier-Stokes (RANS), in which all the turbulent scales of motion are modeled, and large-eddy simulation (LES), in which only a portion of the turbulent spectrum is modeled. In recent years there have been numerous attempts to couple these two approaches, either by patching RANS and LES calculations together (zonal methods) or by blending the two sets of equations. In order to create a proper bridging model, that is, a single set of equations which captures both RANS- and LES-like behavior, it is necessary to place both RANS and LES in a more general framework. The goal of the current work is threefold: to provide such a framework, to demonstrate how the Flow Simulation Methodology (FSM) fits into this framework, and to evaluate the strengths and weaknesses of the current version of the FSM. To do this, first a set of filtered Navier-Stokes (FNS) equations is introduced in terms of an arbitrary generalized filter. Additional exact equations are given for the second order moments and the generalized subfilter dissipation rate tensor. This is followed by a discussion of the role of implicit and explicit filters in turbulence modeling. The FSM is then described with particular attention to its role as a bridging model. In order to evaluate the method a specific implementation of the FSM approach is proposed. Simulations are presented using this model for the case of a separating flow over a "hump" with and without flow control. Careful attention is paid to error estimation and, in particular, to how using flow statistics and time series affects the error analysis. Both mean flow and Reynolds stress profiles are presented, as well as the phase averaged turbulent structures and wall pressure spectra. Using the phase averaged data it is possible to examine how the FSM partitions the energy between the coherent resolved scale motions, the random resolved scale fluctuations, and the subfilter

  19. Computer-Based Training: An Institutional Approach.

    ERIC Educational Resources Information Center

    Barker, Philip; Manji, Karim

    1992-01-01

    Discussion of issues related to computer-assisted learning (CAL) and computer-based training (CBT) describes approaches to electronic learning; principles underlying courseware development to support these approaches; and a plan for creation of a CAL/CBT development center, including its functional role, campus services, staffing, and equipment…

  20. A Parallel Cartesian Approach for External Aerodynamics of Vehicles with Complex Geometry

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.

    2001-01-01

    This workshop paper presents the current status in the development of a new approach for the solution of the Euler equations on Cartesian meshes with embedded boundaries in three dimensions on distributed and shared memory architectures. The approach uses adaptively refined Cartesian hexahedra to fill the computational domain. Where these cells intersect the geometry, they are cut by the boundary into arbitrarily shaped polyhedra which receive special treatment by the solver. The presentation documents a newly developed multilevel upwind solver based on a flexible domain-decomposition strategy. One novel aspect of the work is its use of space-filling curves (SFC) for memory-efficient on-the-fly parallelization, dynamic re-partitioning and automatic coarse mesh generation. Within each subdomain the approach employs a variety of reordering techniques so that relevant data are on the same page in memory, permitting high performance on cache-based processors. Details of the on-the-fly SFC based partitioning are presented, as are construction rules for the automatic coarse mesh generation. After describing the approach, the paper uses model problems and 3-D configurations to both verify and validate the solver. The model problems demonstrate that second-order accuracy is maintained despite the presence of the irregular cut-cells in the mesh. In addition, it examines both parallel efficiency and convergence behavior. These investigations demonstrate a parallel speed-up in excess of 28 on 32 processors of an SGI Origin 2000 system and confirm that mesh partitioning has no effect on convergence behavior.
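    Space-filling-curve partitioning of the kind the abstract describes can be sketched in a few lines: assign each cell a key by interleaving the bits of its integer indices (Morton/Z-order, one common SFC choice), sort, and cut the sorted list into contiguous chunks. This is a generic illustration, not the authors' actual implementation:

    ```python
    def morton3(i, j, k, bits=10):
        """Interleave the bits of three cell indices into a Morton (Z-order) key."""
        key = 0
        for b in range(bits):
            key |= ((i >> b) & 1) << (3 * b)
            key |= ((j >> b) & 1) << (3 * b + 1)
            key |= ((k >> b) & 1) << (3 * b + 2)
        return key

    def partition(cells, nparts):
        """Sort cells along the space-filling curve, then cut into contiguous chunks."""
        ordered = sorted(cells, key=lambda c: morton3(*c))
        n = len(ordered)
        return [ordered[p * n // nparts:(p + 1) * n // nparts] for p in range(nparts)]

    # 4x4x4 block of cells split among 4 ranks; because cells adjacent on the
    # curve are spatially close, each chunk is a reasonably coherent subdomain.
    cells = [(i, j, k) for i in range(4) for j in range(4) for k in range(4)]
    parts = partition(cells, 4)
    print([len(p) for p in parts])  # [16, 16, 16, 16]
    ```

    The "on-the-fly" quality in the paper comes from the fact that re-partitioning after mesh adaptation only requires re-sorting keys, not rebuilding a graph.
    
    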

  1. Modelling Mathematics Teachers' Intention to Use the Dynamic Geometry Environments in Macau: An SEM Approach

    ERIC Educational Resources Information Center

    Zhou, Mingming; Chan, Kan Kan; Teo, Timothy

    2016-01-01

    Dynamic geometry environments (DGEs) provide computer-based environments to construct and manipulate geometric figures with great ease. Research has shown that DGEs has positive impact on student motivation, engagement, and achievement in mathematics learning. However, the adoption of DGEs by mathematics teachers varies substantially worldwide.…

  3. Geometry Modeling and Grid Generation for Computational Aerodynamic Simulations Around Iced Airfoils and Wings

    NASA Technical Reports Server (NTRS)

    Choo, Yung K.; Slater, John W.; Vickerman, Mary B.; VanZante, Judith F.; Wadel, Mary F. (Technical Monitor)

    2002-01-01

    Issues associated with analysis of 'icing effects' on airfoil and wing performances are discussed, along with accomplishments and efforts to overcome difficulties with ice. Because of infinite variations of ice shapes and their high degree of complexity, computational 'icing effects' studies using available software tools must address many difficulties in geometry acquisition and modeling, grid generation, and flow simulation. The value of each technology component needs to be weighed from the perspective of the entire analysis process, from geometry to flow simulation. Even though CFD codes are yet to be validated for flows over iced airfoils and wings, numerical simulation, when considered together with wind tunnel tests, can provide valuable insights into 'icing effects' and advance our understanding of the relationship between ice characteristics and their effects on performance degradation.

  4. Hydrodynamic optimization of membrane bioreactor by horizontal geometry modification using computational fluid dynamics.

    PubMed

    Yan, Xiaoxu; Wu, Qing; Sun, Jianyu; Liang, Peng; Zhang, Xiaoyuan; Xiao, Kang; Huang, Xia

    2016-01-01

    The geometry of a membrane bioreactor (MBR) affects its hydrodynamics, which are directly related to the membrane fouling rate. The simulation of a bench-scale MBR by computational fluid dynamics (CFD) showed that the shear stress on the membrane surface could be elevated by 74% if the membrane was sandwiched between two baffles (baffled MBR), compared with that without baffles (unbaffled MBR). The effects of the horizontal geometry characteristics of a bench-scale membrane tank were discussed (riser length index Lr, downcomer length index Ld, tank width index Wt). Simulation results indicated that the average cross flow of the riser was negatively correlated to the ratio of riser and downcomer cross-sectional area. A relatively small tank width would also be preferable in promoting shear stress on the membrane surface. The optimized MBR had a shear elevation of 21.3-91.4% compared with the unbaffled MBR under the same aeration intensity.

  5. Integrative approaches to computational biomedicine

    PubMed Central

    Coveney, Peter V.; Diaz-Zuccarini, Vanessa; Graf, Norbert; Hunter, Peter; Kohl, Peter; Tegner, Jesper; Viceconti, Marco

    2013-01-01

    The new discipline of computational biomedicine is concerned with the application of computer-based techniques and particularly modelling and simulation to human health. Since 2007, this discipline has been synonymous, in Europe, with the name given to the European Union's ambitious investment in integrating these techniques with the eventual aim of modelling the human body as a whole: the virtual physiological human. This programme and its successors are expected, over the next decades, to transform the study and practice of healthcare, moving it towards the priorities known as ‘4P's’: predictive, preventative, personalized and participatory medicine.

  6. Computational modelling approaches to vaccinology.

    PubMed

    Pappalardo, Francesco; Flower, Darren; Russo, Giulia; Pennisi, Marzio; Motta, Santo

    2015-02-01

    Excepting the Peripheral and Central Nervous Systems, the Immune System is the most complex of somatic systems in higher animals. This complexity manifests itself at many levels, from the molecular to that of the whole organism. Much insight into this confounding complexity can be gained through computational simulation. Such simulations range in application from epitope prediction through to the modelling of vaccination strategies. In this review, we selectively evaluate various key applications relevant to computational vaccinology; these include techniques that operate at different scales, that is, from the molecular to the organismal and even the population level.

  7. Computational studies of flow through cross flow fans - effect of blade geometry

    NASA Astrophysics Data System (ADS)

    Govardhan, M.; Sampat, D. Lakshmana

    2005-09-01

    This paper describes a three-dimensional computational analysis of the complex internal flow in a cross flow fan. A commercial computational fluid dynamics (CFD) software code, CFX, was used for the computation. The RNG k-ɛ two-equation turbulence model was used to simulate the model with an unstructured mesh. A sliding mesh interface was used at the interface between the rotating and stationary domains to capture the unsteady interactions. An accurate assessment of the present investigation is made by comparing various parameters with the available experimental data. Three impeller geometries with different blade angles and radius ratios are used in the present study. Maximum energy transfer through the impeller takes place in the region where the flow follows the blade curvature. Radial velocity is not uniform through blade channels. Some blades work in turbine mode at very low flow coefficients. Static pressure is always negative in and around the impeller region.

  8. Thermal Protection System Cavity Heating for Simplified and Actual Geometries Using Computational Fluid Dynamics Simulations with Unstructured Grids

    NASA Technical Reports Server (NTRS)

    McCloud, Peter L.

    2010-01-01

    Thermal Protection System (TPS) Cavity Heating is predicted using Computational Fluid Dynamics (CFD) on unstructured grids for both simplified cavities and actual cavity geometries. Validation was performed using comparisons to wind tunnel experimental results and CFD predictions using structured grids. Full-scale predictions were made for simplified and actual geometry configurations on the Space Shuttle Orbiter in a mission support timeframe.

  9. Computational approaches to motor control.

    PubMed

    Flash, T; Sejnowski, T J

    2001-12-01

    New concepts and computational models that integrate behavioral and neurophysiological observations have addressed several of the most fundamental long-standing problems in motor control. These problems include the selection of particular trajectories among the large number of possibilities, the solution of inverse kinematics and dynamics problems, motor adaptation and the learning of sequential behaviors.

  10. Computational approaches to motor control

    PubMed Central

    Flash, Tamar; Sejnowski, Terrence J

    2010-01-01

    New concepts and computational models that integrate behavioral and neurophysiological observations have addressed several of the most fundamental long-standing problems in motor control. These problems include the selection of particular trajectories among the large number of possibilities, the solution of inverse kinematics and dynamics problems, motor adaptation and the learning of sequential behaviors. PMID:11741014

  11. Dependence of Monte Carlo microdosimetric computations on the simulation geometry of gold nanoparticles.

    PubMed

    Zygmanski, Piotr; Liu, Bo; Tsiamas, Panagiotis; Cifter, Fulya; Petersheim, Markus; Hesser, Jürgen; Sajo, Erno

    2013-11-21

    Recently, interactions of x-rays with gold nanoparticles (GNPs) and the resulting dose enhancement have been studied using several Monte Carlo (MC) codes (Jones et al 2010 Med. Phys. 37 3809-16, Lechtman et al 2011 Phys. Med. Biol. 56 4631-47, McMahon et al 2011 Sci. Rep. 1 1-9, Leung et al 2011 Med. Phys. 38 624-31). These MC simulations were carried out in simplified geometries and provided encouraging preliminary data in support of GNP radiotherapy. As these studies showed, radiation transport computations of clinical beams to obtain dose enhancement from nanoparticles has several challenges, mostly arising from the requirement of high spatial resolution and from the approximations used at the interface between the macroscopic clinical beam transport and the nanoscopic electron transport originating in the nanoparticle or its vicinity. We investigate the impact of MC simulation geometry on the energy deposition due to the presence of GNPs, including the effects of particle clustering and morphology. Dose enhancement due to a single and multiple GNPs using various simulation geometries is computed using GEANT4 MC radiation transport code. Various approximations in the geometry and in the phase space transition from macro- to micro-beams incident on GNPs are analyzed. Simulations using GEANT4 are compared to a deterministic code CEPXS/ONEDANT for microscopic (nm-µm) geometry. Dependence on the following microscopic (µ) geometry parameters is investigated: µ-source-to-GNP distance (µSAD), µ-beam size (µS), and GNP size (µC). Because a micro-beam represents clinical beam properties at the microscopic scale, the effect of using different types of micro-beams is also investigated. In particular, a micro-beam with the phase space of a clinical beam versus a plane-parallel beam with an equivalent photon spectrum is characterized. Furthermore, the spatial anisotropy of energy deposition around a nanoparticle is analyzed. Finally, dependence of dose enhancement

  12. Assessment and improvement of mapping algorithms for non-matching meshes and geometries in computational FSI

    NASA Astrophysics Data System (ADS)

    Wang, Tianyang; Wüchner, Roland; Sicklinger, Stefan; Bletzinger, Kai-Uwe

    2016-05-01

    This paper investigates data mapping between non-matching meshes and geometries in fluid-structure interaction. Mapping algorithms for surface meshes, including nearest element interpolation, the standard mortar method and the dual mortar method, are studied and comparatively assessed. The inconsistency problem of mortar methods at curved edges of fluid-structure interfaces is solved by a newly developed consistency-enforcing approach, which is robust enough to handle even the case in which fluid boundary facets are not in contact with structure boundary elements at all, owing to high fluid-side refinement. Besides, tests with representative geometries show that the mortar methods are suitable for conservative mapping, but the nearest element interpolation is better used in a direct way; moreover, the dual mortar method can give slight oscillations. This work also develops a co-rotating mapping algorithm for 1D beam elements. Its novelty lies in the ability to handle large displacements and rotations.
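    The simplest of the mapping families the paper compares can be sketched briefly. The sketch below is a nearest-node simplification (true nearest element interpolation also interpolates within the found element); the meshes and field values are illustrative:

    ```python
    def nearest_node_map(source_nodes, target_nodes):
        """For each target node, index of the nearest source node (brute force)."""
        def d2(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return [min(range(len(source_nodes)),
                    key=lambda i: d2(source_nodes[i], t))
                for t in target_nodes]

    def map_field(values, mapping):
        """Transfer a nodal field through the precomputed mapping."""
        return [values[i] for i in mapping]

    # Non-matching 1D interface meshes: a coarse structure mesh and a finer
    # fluid mesh, as arises at a fluid-structure interface.
    structure = [(0.0,), (0.5,), (1.0,)]
    fluid = [(0.1,), (0.4,), (0.6,), (0.9,)]
    m = nearest_node_map(structure, fluid)
    print(map_field([10.0, 20.0, 30.0], m))  # [10.0, 20.0, 20.0, 30.0]
    ```

    Direct mapping of this kind is simple and local; the mortar methods discussed in the paper instead enforce a weak (integral) equality across the interface, which is what makes them conservative.
    
    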

  13. A Geometry Based Infra-structure for Computational Analysis and Design

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1997-01-01

    The computational steps traditionally taken for most engineering analysis (CFD, structural analysis, etc.) are: Surface Generation - usually by employing a CAD system; Grid Generation - preparing the volume for the simulation; Flow Solver - producing the results at the specified operational point; and Post-processing Visualization - interactively attempting to understand the results. For structural analysis, integrated systems can be obtained from a number of commercial vendors. For CFD, these steps have worked well in the past for simple steady-state simulations, at the expense of much user interaction. The data was transmitted between phases via files. Specifically, the problems with this procedure are: (1) File based. Information flows from one step to the next via data files with formats specified for that procedure. (2) 'Good' Geometry. A bottleneck in getting results from a solver is the construction of proper geometry to be fed to the grid generator. With 'good' geometry a grid can be constructed in tens of minutes (even with a complex configuration) using unstructured techniques. (3) One-Way communication. All information travels on from one phase to the next. Until this process can be automated, more complex problems such as multi-disciplinary analysis, or using the above procedure for design, become prohibitive.

  14. Computational analysis of two-fluid edge plasma stability in tokamak geometries

    NASA Astrophysics Data System (ADS)

    Neiser, Tom; Baver, Derek; Carter, Troy; Myra, Jim; Snyder, Phil; Umansky, Maxim

    2013-10-01

    In H-mode, the edge pressure gradient is disrupted quasi-periodically by Edge Localized Modes (ELMs), which leads to confinement loss and places large heat loads on the divertor. This poster gives an overview of the peeling-ballooning model for ELM formation and presents recent results of 2DX, a fast eigenvalue code capable of solving equations of any fluid model. We use 2DX to solve reduced ideal MHD equations of two-fluid plasma in the R-Z plane, with the toroidal mode number resolving the third dimension. Previously, 2DX has been successfully benchmarked against ELITE and BOUT++ for ballooning dominated cases in simple shifted circle geometries. We present follow-up work in simple geometry as well as similar benchmarks for the full X-point geometry of DIII-D. We demonstrate 2DX's capability as a computational tool that supports nonlinear codes with linear verification and as an experimental tool to identify density limits, map the spatial distribution of eigenmodes and investigate marginal stability of the edge region.

  15. An interactive user-friendly approach to surface-fitting three-dimensional geometries

    NASA Technical Reports Server (NTRS)

    Cheatwood, F. Mcneil; Dejarnette, Fred R.

    1988-01-01

    A surface-fitting technique has been developed which addresses two problems with existing geometry packages: computer storage requirements and the time required of the user for the initial setup of the geometry model. Coordinates of cross sections are fit using segments of general conic sections. The next step is to blend the cross-sectional curve-fits in the longitudinal direction using general conics to fit specific meridional half-planes. Provisions are made to allow the fitting of fuselages and wings so that entire wing-body combinations may be modeled. This report includes the development of the technique along with a User's Guide for the various menus within the program. Results for the modeling of the Space Shuttle and a proposed Aeroassist Flight Experiment geometry are presented.

  16. Answering Typical Student Questions in Hyperbolic Geometry: A Transformation Group Approach

    ERIC Educational Resources Information Center

    Reyes, Edgar N.; Gray, Elizabeth D.

    2002-01-01

    It is shown that the bisector of a segment of a geodesic and the bisector of an angle in hyperbolic geometry can be expressed in terms of points which are equidistant from the end points of the segment, and points that are equidistant from the rays of the angle, respectively. An important tool in the approach is that the shortest distance between…
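    The equidistance characterisations mentioned in the abstract can be stated compactly (with $d$ denoting the hyperbolic distance):

    ```latex
    \operatorname{bis}\bigl(\overline{AB}\bigr) = \{\, p : d(p, A) = d(p, B) \,\},
    \qquad
    \operatorname{bis}\bigl(\angle(r_1, r_2)\bigr) = \{\, p : d(p, r_1) = d(p, r_2) \,\},
    ```

    where $A$, $B$ are the end points of the geodesic segment and $r_1$, $r_2$ the rays of the angle.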

  17. A computer program for fitting smooth surfaces to an aircraft configuration and other three dimensional geometries

    NASA Technical Reports Server (NTRS)

    Craidon, C. B.

    1975-01-01

    A computer program that uses a three-dimensional geometric technique for fitting a smooth surface to the component parts of an aircraft configuration is presented. The resulting surface equations are useful in performing various kinds of calculations in which a three-dimensional mathematical description is necessary. Program options may be used to compute information for three-view and orthographic projections of the configuration as well as cross-section plots at any orientation through the configuration. The aircraft geometry input section of the program may be easily replaced with a surface point description in a different form so that the program could be of use for any three-dimensional surface equations.

  18. Computations of VSWR and mode conversion for complex gyrotron window geometries

    SciTech Connect

    Salop, A.; Caplan, M.

    1984-01-01

    A computational method is described for determining VSWR and mode conversion for complex gyrotron window geometries. Assuming symmetric TE0n modes propagating in a circular cross-section guide, containing the window, one can write the total solution to the wave equation as the sum of the incident wave plus a wave scattered from the dielectric window region. The equations can be reformulated in terms of the scattered wave, resulting in a Helmholtz wave equation with an inhomogeneous driving term corresponding to the polarization current of the dielectric. Solutions are obtained using a suitable modification of the wave equation solver OPNCAV, and reflection coefficients, VSWR's and mode conversion information are then derived from an analysis of the reflected and transmitted powers. VSWR computations for typical single- and double-disk windows agree with conventional impedance calculations to within about 1%. Results for more complicated curved-boundary windows which cannot be treated by the standard methods are discussed.

  19. Laser cone beam computed tomography scanner geometry for large volume 3D dosimetry

    NASA Astrophysics Data System (ADS)

    Jordan, K. J.; Turnbull, D.; Batista, J. J.

    2013-06-01

    A new scanner geometry for fast optical cone-beam computed tomography is reported. The system consists of a low power laser beam, raster scanned, under computer control, through a transparent object in a refractive index matching aquarium. The transmitted beam is scattered from a diffuser screen and detected by a photomultiplier tube. Modest stray light is present in the projection images since only a single ray is present in the object during measurement and there is no imaging optics to introduce further stray light in the form of glare. A scan time of 30 minutes was required for 512 projections with a field of view of 12 × 18 cm. Initial performance from scanning a 15 cm diameter jar with black solutions is presented. Averaged reconstruction coefficients are within 2% along the height of the jar and within the central 85% of diameter, due to the index mismatch of the jar. Agreement with spectrometer measurements was better than 0.5% for a minimum transmission of 4% and within 4% for a dark, 0.1% transmission sample. This geometry's advantages include high dynamic range and low cost of scaling to larger (>15 cm) fields of view.
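    Transmission measurements of this kind are converted to optical density via the Beer-Lambert law before reconstruction; for example (a generic helper, not the authors' code):

    ```python
    import math

    def optical_density(transmission):
        """Convert fractional transmission (0 < T <= 1) to optical
        density, OD = -log10(T), the quantity reconstructed in
        optical CT dosimetry."""
        return -math.log10(transmission)
    ```

    With this convention, the 4% minimum transmission quoted above corresponds to an optical density of about 1.4, and the dark 0.1% sample to OD 3.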

  20. [Geometry, analysis, and computation in mathematics and applied science]. Progress report

    SciTech Connect

    Hoffman, D.

    1994-02-01

    The principal investigators' work on a variety of pure and applied problems in Differential Geometry, Calculus of Variations and Mathematical Physics has been done in a computational laboratory and been based on interactive scientific computer graphics and high speed computation created by the principal investigators to study geometric interface problems in the physical sciences. We have developed software to simulate various physical phenomena from constrained plasma flow to the electron microscope imaging of the microstructure of compound materials, techniques for the visualization of geometric structures that have been used to make significant breakthroughs in the global theory of minimal surfaces, and graphics tools to study evolution processes, such as flow by mean curvature, while simultaneously developing the mathematical foundation of the subject. An increasingly important activity of the laboratory is to extend this environment in order to support and enhance scientific collaboration with researchers at other locations. Toward this end, the Center developed the GANGVideo distributed video software system and software methods for running lab-developed programs simultaneously on remote and local machines. Further, the Center operates a broadcast video network, running in parallel with the Center's data networks, over which researchers can access stored video materials or view ongoing computations. The graphical front-end to GANGVideo can be used to make "multi-media mail" from both "live" computing sessions and stored materials without video editing. Currently, videotape is used as the delivery medium, but GANGVideo is compatible with future "all-digital" distribution systems. Thus, as a byproduct of mathematical research, we are developing methods for scientific communication. But, most important, our research focuses on important scientific problems; the parallel development of computational and graphical tools is driven by scientific needs.

  1. Computational approach for probing the flow through artificial heart devices.

    PubMed

    Kiris, C; Kwak, D; Rogers, S; Chang, I D

    1997-11-01

    Computational fluid dynamics (CFD) has become an indispensable part of aerospace research and design. The solution procedure for incompressible Navier-Stokes equations can be used for biofluid mechanics research. The computational approach provides detailed knowledge of the flowfield complementary to that obtained by experimental measurements. This paper illustrates the extension of CFD techniques to artificial heart flow simulation. Unsteady incompressible Navier-Stokes equations written in three-dimensional generalized curvilinear coordinates are solved iteratively at each physical time step until the incompressibility condition is satisfied. The solution method is based on the pseudocompressibility approach. It uses an implicit upwind-differencing scheme together with the Gauss-Seidel line-relaxation method. The efficiency and robustness of the time-accurate formulation of the numerical algorithm are tested by computing the flow through model geometries. A channel flow with a moving indentation is computed and validated by experimental measurements and other numerical solutions. In order to handle the geometric complexity and the moving boundary problems, a zonal method and an overlapped grid embedding scheme are employed, respectively. Steady-state solutions for the flow through a tilting-disk heart valve are compared with experimental measurements. Good agreement is obtained. Aided by experimental data, the flow through an entire Penn State artificial heart model is computed.
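    The pseudocompressibility approach mentioned above augments the continuity equation with a pseudo-time pressure derivative (Chorin's artificial compressibility; generic notation, not necessarily the authors' exact formulation):

    ```latex
    \frac{\partial p}{\partial \tau} + \beta \, \frac{\partial u_{j}}{\partial x_{j}} = 0,
    ```

    where $\tau$ is pseudo-time and $\beta$ the artificial compressibility parameter; sub-iterating in $\tau$ at each physical time step drives $\partial u_{j}/\partial x_{j} \to 0$, recovering the incompressibility constraint.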

  2. Quantifying normal geometric variation in human pulmonary lobar geometry from high resolution computed tomography.

    PubMed

    Chan, Ho-Fung; Clark, Alys R; Hoffman, Eric A; Malcolm, Duane T K; Tawhai, Merryn H

    2015-05-01

    Previous studies of the ex vivo lung have suggested significant intersubject variability in lung lobe geometry. A quantitative description of normal lung lobe shape would therefore have value in improving the discrimination between normal population variability in shape and pathology. To quantify normal human lobe shape variability, a principal component analysis (PCA) was performed on high resolution computed tomography (HRCT) imaging of the lung at full inspiration. Volumetric imaging from 22 never-smoking subjects (10 female and 12 male) with normal lung function was included in the analysis. For each subject, an initial finite element mesh geometry was generated from a group of manually selected nodes that were placed at distinct anatomical locations on the lung surface. Each mesh used cubic shape functions to describe the surface curvilinearity, and the mesh was fitted to surface data for each lobe. A PCA was performed on the surface meshes for each lobe. Nine principal components (PCs) were sufficient to capture >90% of the normal variation in each of the five lobes. The analysis shows that lobe size can explain between 20% and 50% of intersubject variability, depending on the lobe considered. Diaphragm shape was the next most significant intersubject difference. When the influence of lung size difference is removed, the angle of the fissures becomes the most significant shape difference, and the variability in relative lobe size becomes important. We also show how a lobe from an independent subject can be projected onto the study population's PCs, demonstrating potential for abnormalities in lobar geometry to be defined in a quantitative manner.
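    The PCA pipeline described above can be sketched as follows (assuming each fitted mesh is flattened into one coordinate row; names and the input layout are hypothetical):

    ```python
    import numpy as np

    def shape_pca(meshes, var_target=0.90):
        """PCA of fitted surface meshes.

        meshes: (n_subjects, n_nodes * 3) array, each row a flattened
        set of mesh node coordinates. Returns the mean shape, the
        principal components, and the number of PCs needed to capture
        var_target of the total variance."""
        X = np.asarray(meshes, dtype=float)
        mean = X.mean(axis=0)
        Xc = X - mean
        # SVD of the centred data gives the principal components directly.
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        var = s ** 2 / Xc.shape[0]
        frac = np.cumsum(var) / var.sum()
        n_pc = int(np.searchsorted(frac, var_target) + 1)
        return mean, Vt, n_pc

    def project(shape, mean, Vt, n_pc):
        """Project an independent subject onto the first n_pc components,
        as done for the independent-subject lobe above."""
        return Vt[:n_pc] @ (np.asarray(shape, dtype=float) - mean)
    ```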

  3. Computational predictions of the embolus-trapping performance of an IVC filter in patient-specific and idealized IVC geometries.

    PubMed

    Aycock, Kenneth I; Campbell, Robert L; Lynch, Frank C; Manning, Keefe B; Craven, Brent A

    2017-06-27

    Embolus transport simulations are performed to investigate the dependence of inferior vena cava (IVC) filter embolus-trapping performance on IVC anatomy. Simulations are performed using a resolved two-way coupled computational fluid dynamics/six-degree-of-freedom approach. Three IVC geometries are studied: a straight-tube IVC, a patient-averaged IVC, and a patient-specific IVC reconstructed from medical imaging data. Additionally, two sizes of spherical emboli (3 and 5 mm in diameter) and two IVC orientations (supine and upright) are considered. The embolus-trapping efficiency of the IVC filter is quantified for each combination of IVC geometry, embolus size, and IVC orientation by performing 2560 individual simulations. The predicted embolus-trapping efficiencies of the IVC filter range from 10 to 100%, and IVC anatomy is found to have a significant influence on the efficiency results ([Formula: see text]). In the upright IVC orientation, greater secondary flow in the patient-specific IVC geometry decreases the filter embolus-trapping efficiency by 22-30 percentage points compared with the efficiencies predicted in the idealized straight-tube or patient-averaged IVCs. In a supine orientation, the embolus-trapping efficiency of the filter in the idealized IVCs decreases by 21-90 percentage points compared with the upright orientation. In contrast, the embolus-trapping efficiency is insensitive to IVC orientation in the patient-specific IVC. In summary, simulations predict that anatomical features of the IVC that are often neglected in the idealized models used for benchtop testing, such as iliac vein compression and anteroposterior curvature, generate secondary flow and mixing in the IVC and influence the embolus-trapping efficiency of IVC filters. Accordingly, inter-subject variability studies and additional embolus transport investigations that consider patient-specific IVC anatomy are recommended for future work.

  4. Toward exascale computing through neuromorphic approaches.

    SciTech Connect

    James, Conrad D.

    2010-09-01

    While individual neurons function at relatively low firing rates, naturally-occurring nervous systems not only surpass manmade systems in computing power, but accomplish this feat using relatively little energy. It is asserted that the next major breakthrough in computing power will be achieved through application of neuromorphic approaches that mimic the mechanisms by which neural systems integrate and store massive quantities of data for real-time decision making. The proposed LDRD provides a conceptual foundation for SNL to make unique advances toward exascale computing. First, a team consisting of experts from the HPC, MESA, cognitive and biological sciences and nanotechnology domains will be coordinated to conduct an exercise with the outcome being a concept for applying neuromorphic computing to achieve exascale computing. It is anticipated that this concept will involve innovative extension and integration of SNL capabilities in MicroFab, material sciences, high-performance computing, and modeling and simulation of neural processes/systems.

  5. Computation of inviscid compressible flows about arbitrary geometries and moving boundaries

    NASA Astrophysics Data System (ADS)

    Bayyuk, Sami Alan

    2008-10-01

    The computational simulation of aerodynamic flows with moving boundaries has numerous scientific and practical motivations. In this work, a new technique for computation of inviscid, compressible flows about two-dimensional, arbitrarily-complex geometries that are allowed to undergo arbitrarily-complex motions or deformations is developed and studied. The computational technique is constructed from five main components: (i) an adaptive, Quadtree-based, Cartesian-Grid generation algorithm that divides the computational region into stationary square cells, with local refinement and coarsening to resolve the geometry of all internal boundaries, even as such boundaries move. The algorithm automatically clips cells that straddle boundaries to form arbitrary polygonal cells; (ii) a representation of internal boundaries as exact, infinitesimally-thin discontinuities separating two arbitrarily-different states. The exactness of this representation, and its preclusion of diffusive or dispersive effects while boundaries travel across the grid, combines the advantages of Eulerian and Lagrangian methods and is the main distinguishing characteristic of the technique; (iii) a second-order-accurate Finite-Volume, Arbitrary Lagrangian-Eulerian, characteristic-based flow-solver. The discretization of the boundaries and their motion is matched with the discretization of the flux quadratures to ensure that the overall second-order-accurate discretization also satisfies the Geometric Conservation Laws; (iv) an algorithm for dynamic merging of the cells in the vicinity of internal boundaries to form composite cells that retain the same topologic configuration during individual boundary motion steps and can therefore be treated as deforming cells, eliminating the need to treat crossing of grid lines by moving boundaries. Cell merging is also used to circumvent the "small-cell problem" of non-boundary-conformal Cartesian Grids; and (v) a solution-adaptation algorithm for resolving flow
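    The Quadtree refinement of component (i) can be sketched in a few lines (a toy 2-D version; the boundary test is a user-supplied predicate, and cell clipping and merging are omitted):

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class QuadCell:
        x: float          # lower-left corner
        y: float
        size: float
        depth: int
        children: list = field(default_factory=list)

        def refine(self, near_boundary, max_depth):
            """Recursively split cells flagged by the boundary predicate,
            mirroring the adaptive Quadtree refinement described above."""
            if self.depth >= max_depth or not near_boundary(self):
                return
            h = self.size / 2.0
            for dx in (0.0, h):
                for dy in (0.0, h):
                    child = QuadCell(self.x + dx, self.y + dy, h, self.depth + 1)
                    child.refine(near_boundary, max_depth)
                    self.children.append(child)

        def leaves(self):
            """Yield the leaf cells, i.e. the active grid."""
            if not self.children:
                yield self
            else:
                for c in self.children:
                    yield from c.leaves()
    ```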

  6. Predicting the optimal geometry of microneedles and their array for dermal vaccination using a computational model.

    PubMed

    Römgens, Anne M; Bader, Dan L; Bouwstra, Joke A; Oomens, Cees W J

    2016-11-01

    Microneedle arrays have been developed to deliver a range of biomolecules including vaccines into the skin. These microneedles have been designed with a wide range of geometries and arrangements within an array. However, little is known about the effect of the geometry on the potency of the induced immune response. The aim of this study was to develop a computational model to predict the optimal design of the microneedles and their arrangement within an array. The three-dimensional finite element model described the diffusion and kinetics in the skin following antigen delivery with a microneedle array. The results revealed an optimum distance between microneedles based on the number of activated antigen presenting cells, which was assumed to be related to the induced immune response. This optimum depends on the delivered dose. In addition, the microneedle length affects the number of cells that will be involved in either the epidermis or dermis. By contrast, the radius at the base of the microneedle and release rate only minimally influenced the number of cells that were activated. The model revealed the importance of various geometric parameters to enhance the induced immune response. The model can be developed further to determine the optimal design of an array by adjusting its various parameters to a specific situation.
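    The diffusion part of such a model can be illustrated with a 1-D explicit finite-difference step (a toy analogue of the paper's 3-D FE diffusion-kinetics model; all names and parameters are illustrative):

    ```python
    def diffuse_1d(c, D, dx, dt, steps):
        """Explicit finite-difference diffusion of an antigen depot on a
        1-D grid with fixed (Dirichlet) end values -- a toy version of
        the diffusion component of the skin model described above."""
        c = list(c)
        r = D * dt / dx ** 2
        assert r <= 0.5, "explicit scheme stability limit"
        for _ in range(steps):
            c = [c[i] + r * (c[i + 1] - 2 * c[i] + c[i - 1])
                 if 0 < i < len(c) - 1 else c[i]
                 for i in range(len(c))]
        return c
    ```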

  7. Spectral Transfer Learning Using Information Geometry for a User-Independent Brain-Computer Interface

    DOE PAGES

    Waytowich, Nicholas R.; Lawhern, Vernon J.; Bohannon, Addison W.; ...

    2016-09-22

    Recent advances in signal processing and machine learning techniques have enabled the application of Brain-Computer Interface (BCI) technologies to fields such as medicine, industry, and recreation; however, BCIs still suffer from the requirement of frequent calibration sessions due to the intra- and inter-individual variability of brain-signals, which makes calibration suppression through transfer learning an area of increasing interest for the development of practical BCI systems. In this paper, we present an unsupervised transfer method (spectral transfer using information geometry, STIG), which ranks and combines unlabeled predictions from an ensemble of information geometry classifiers built on data from individual training subjects. The STIG method is validated in both off-line and real-time feedback analysis during a rapid serial visual presentation task (RSVP). For detection of single-trial, event-related potentials (ERPs), the proposed method can significantly outperform existing calibration-free techniques as well as outperform traditional within-subject calibration techniques when limited data is available. This method demonstrates that unsupervised transfer learning for single-trial detection in ERP-based BCIs can be achieved without the requirement of costly training data, representing a step forward in the overall goal of achieving a practical user-independent BCI system.

  8. Spectral Transfer Learning Using Information Geometry for a User-Independent Brain-Computer Interface

    SciTech Connect

    Waytowich, Nicholas R.; Lawhern, Vernon J.; Bohannon, Addison W.; Ball, Kenneth R.; Lance, Brent J.

    2016-09-22

    Recent advances in signal processing and machine learning techniques have enabled the application of Brain-Computer Interface (BCI) technologies to fields such as medicine, industry, and recreation; however, BCIs still suffer from the requirement of frequent calibration sessions due to the intra- and inter-individual variability of brain-signals, which makes calibration suppression through transfer learning an area of increasing interest for the development of practical BCI systems. In this paper, we present an unsupervised transfer method (spectral transfer using information geometry, STIG), which ranks and combines unlabeled predictions from an ensemble of information geometry classifiers built on data from individual training subjects. The STIG method is validated in both off-line and real-time feedback analysis during a rapid serial visual presentation task (RSVP). For detection of single-trial, event-related potentials (ERPs), the proposed method can significantly outperform existing calibration-free techniques as well as outperform traditional within-subject calibration techniques when limited data is available. This method demonstrates that unsupervised transfer learning for single-trial detection in ERP-based BCIs can be achieved without the requirement of costly training data, representing a step forward in the overall goal of achieving a practical user-independent BCI system.

  9. Spectral Transfer Learning Using Information Geometry for a User-Independent Brain-Computer Interface

    PubMed Central

    Waytowich, Nicholas R.; Lawhern, Vernon J.; Bohannon, Addison W.; Ball, Kenneth R.; Lance, Brent J.

    2016-01-01

    Recent advances in signal processing and machine learning techniques have enabled the application of Brain-Computer Interface (BCI) technologies to fields such as medicine, industry, and recreation; however, BCIs still suffer from the requirement of frequent calibration sessions due to the intra- and inter-individual variability of brain-signals, which makes calibration suppression through transfer learning an area of increasing interest for the development of practical BCI systems. In this paper, we present an unsupervised transfer method (spectral transfer using information geometry, STIG), which ranks and combines unlabeled predictions from an ensemble of information geometry classifiers built on data from individual training subjects. The STIG method is validated in both off-line and real-time feedback analysis during a rapid serial visual presentation task (RSVP). For detection of single-trial, event-related potentials (ERPs), the proposed method can significantly outperform existing calibration-free techniques as well as outperform traditional within-subject calibration techniques when limited data is available. This method demonstrates that unsupervised transfer learning for single-trial detection in ERP-based BCIs can be achieved without the requirement of costly training data, representing a step forward in the overall goal of achieving a practical user-independent BCI system. PMID:27713685
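    Information-geometry classifiers of this kind typically operate on EEG covariance matrices under the affine-invariant Riemannian metric; a minimal distance computation (a generic sketch, not the STIG pipeline itself) is:

    ```python
    import numpy as np

    def spd_geodesic_distance(A, B):
        """Affine-invariant Riemannian distance between two symmetric
        positive-definite matrices: the Frobenius norm of
        log(A^{-1/2} B A^{-1/2}), computed via eigendecompositions."""
        w, V = np.linalg.eigh(A)
        A_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
        M = A_inv_sqrt @ B @ A_inv_sqrt
        lam = np.linalg.eigvalsh(M)
        return np.sqrt(np.sum(np.log(lam) ** 2))
    ```

    The distance is symmetric and invariant under congruence transforms, which is what makes it attractive for comparing covariance matrices across recording sessions.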

  10. Machine learning-based 3-D geometry reconstruction and modeling of aortic valve deformation using 3-D computed tomography images.

    PubMed

    Liang, Liang; Kong, Fanwei; Martin, Caitlin; Pham, Thuy; Wang, Qian; Duncan, James; Sun, Wei

    2017-05-01

    To conduct patient-specific computational modeling of the aortic valve, 3-D aortic valve anatomic geometries of an individual patient need to be reconstructed from clinical 3-D cardiac images. Currently, most computational studies involve manual heart valve geometry reconstruction and manual finite element (FE) model generation, which is both time-consuming and prone to human errors. A seamless computational modeling framework, which can automate this process based on machine learning algorithms, is desirable, as it can not only eliminate human errors and ensure the consistency of the modeling results but also allow fast feedback to clinicians and permit a future population-based probabilistic analysis of large patient cohorts. In this study, we developed a novel computational modeling method to automatically reconstruct the 3-D geometries of the aortic valve from computed tomographic images. The reconstructed valve geometries have built-in mesh correspondence, which carries through to the subsequent FE modeling. The proposed method was evaluated by comparing the reconstructed geometries from 10 patients with those manually created by human experts, and a mean discrepancy of 0.69 mm was obtained. Based on these reconstructed geometries, FE models of valve leaflets were developed, and aortic valve closure from end systole to mid-diastole was simulated for 7 patients and validated by comparing the deformed geometries with those manually created by human experts, and a mean discrepancy of 1.57 mm was obtained. The proposed method offers great potential to streamline the computational modeling process and enables the development of a preoperative planning system for aortic valve disease diagnosis and treatment. Copyright © 2016 John Wiley & Sons, Ltd.

  11. Project Integration Architecture (PIA) and Computational Analysis Programming Interface (CAPRI) for Accessing Geometry Data from CAD Files

    NASA Technical Reports Server (NTRS)

    Benyo, Theresa L.

    2002-01-01

    Integration of a supersonic inlet simulation with a computer aided design (CAD) system is demonstrated. The integration is performed using the Project Integration Architecture (PIA). PIA provides a common environment for wrapping many types of applications. Accessing geometry data from CAD files is accomplished by incorporating appropriate function calls from the Computational Analysis Programming Interface (CAPRI). CAPRI is a CAD vendor neutral programming interface that aids in acquiring geometry data directly from CAD files. The benefits of wrapping a supersonic inlet simulation into PIA using CAPRI are: direct access of geometry data, accurate capture of geometry data, automatic conversion of data units, CAD vendor neutral operation, and on-line interactive history capture. This paper describes the PIA and the CAPRI wrapper and details the supersonic inlet simulation demonstration.

  12. Grid generation and inviscid flow computation about cranked-winged airplane geometries

    NASA Technical Reports Server (NTRS)

    Eriksson, L.-E.; Smith, R. E.; Wiese, M. R.; Farr, N.

    1987-01-01

    An algebraic grid generation procedure that defines a patched multiple-block grid system suitable for fighter-type aircraft geometries with fuselage and engine inlet, canard or horizontal tail, cranked delta wing and vertical fin has been developed. The grid generation is based on transfinite interpolation and requires little computational power. A finite-volume Euler solver using explicit Runge-Kutta time-stepping has been adapted to this grid system and implemented on the VPS-32 vector processor with a high degree of vectorization. Grids are presented for an experimental aircraft with fuselage, canard, 70-20-cranked wing, and vertical fin. Computed inviscid compressible flow solutions are presented for Mach 2 at 3.79, 7 and 10 deg angles of attack. Comparisons of the 3.79 deg computed solutions are made with available full-potential flow and Euler flow solutions on the same configuration but with another grid system. The occurrence of an unsteady solution in the 10 deg angle of attack case is discussed.
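    In 2-D, transfinite interpolation on a four-sided block reduces to a bilinearly blended Coons patch; a minimal sketch (generic, not the code described above):

    ```python
    import numpy as np

    def transfinite_patch(bottom, top, left, right, u, v):
        """Bilinearly blended Coons patch: interpolates four boundary
        curves (each a function of a parameter in [0, 1] returning an
        (x, y) point) to fill the block interior -- the algebraic grid
        generation idea in its simplest 2-D form."""
        b, t = np.asarray(bottom(u)), np.asarray(top(u))
        l, r = np.asarray(left(v)), np.asarray(right(v))
        c00, c10 = np.asarray(bottom(0.0)), np.asarray(bottom(1.0))
        c01, c11 = np.asarray(top(0.0)), np.asarray(top(1.0))
        # edge blends minus the doubly counted corner contributions
        return ((1 - v) * b + v * t + (1 - u) * l + u * r
                - ((1 - u) * (1 - v) * c00 + u * (1 - v) * c10
                   + (1 - u) * v * c01 + u * v * c11))
    ```

    Evaluating the patch on a lattice of (u, v) values yields the interior grid points for one block; only boundary-curve evaluations are required, which is why the method needs little computational power.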

  14. A computational geometry framework for the optimisation of atom probe reconstructions.

    PubMed

    Felfer, Peter; Cairney, Julie

    2016-10-01

    In this paper, we present pathways for improving the reconstruction of atom probe data on a coarse (>10 nm) scale, based on computational geometry. We introduce a way to iteratively improve an atom probe reconstruction by adjusting it, so that certain known shape criteria are fulfilled. This is achieved by creating an implicit approximation of the reconstruction through a barycentric coordinate transform. We demonstrate the application of these techniques to the compensation of trajectory aberrations and the iterative improvement of the reconstruction of a dataset containing a grain boundary. We also present a method for obtaining a hull of the dataset in both detector and reconstruction space. This maximises data utilisation, and can be used to compensate for ion trajectory aberrations caused by residual fields in the ion flight path through a 'master curve' and correct for overall shape deviations in the data.
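    A hull of the detector-space point cloud, as mentioned above, can be obtained in 2-D with Andrew's monotone-chain algorithm (a generic sketch, not the authors' implementation):

    ```python
    def convex_hull_2d(points):
        """Andrew's monotone-chain convex hull of 2-D detector hits,
        returned in counter-clockwise order."""
        pts = sorted(set(map(tuple, points)))
        if len(pts) <= 2:
            return pts

        def cross(o, a, b):
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

        lower, upper = [], []
        for p in pts:                      # build lower hull left to right
            while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                lower.pop()
            lower.append(p)
        for p in reversed(pts):            # build upper hull right to left
            while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                upper.pop()
            upper.append(p)
        return lower[:-1] + upper[:-1]     # drop duplicated endpoints
    ```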

  15. NASA geometry data exchange specification for computational fluid dynamics (NASA IGES)

    NASA Technical Reports Server (NTRS)

    Blake, Matthew W.; Kerr, Patricia A.; Thorp, Scott A.; Jou, Jin J.

    1994-01-01

    This document specifies a subset of an existing product data exchange specification that is widely used in industry and government. The existing document is called the Initial Graphics Exchange Specification. This document, a subset of IGES, is intended for engineers analyzing product performance using tools such as computational fluid dynamics (CFD) software. This document specifies how to define mathematically and exchange the geometric model of an object. The geometry is represented utilizing nonuniform rational B-splines (NURBS) curves and surfaces. Only surface models are represented; no solid model representation is included. This specification does not include most of the other types of product information available in IGES (e.g., no material properties or surface finish properties) and does not provide all the specific file format details of IGES. The data exchange protocol specified in this document is fully conforming to the American National Standard (ANSI) IGES 5.2.
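    A NURBS curve of the kind the specification mandates is a weighted rational combination of control points; a minimal evaluator via the Cox-de Boor recursion might look like the following (illustrative only; valid for parameter values strictly below the final knot):

    ```python
    def bspline_basis(i, p, t, knots):
        """Cox-de Boor recursion for the i-th degree-p B-spline basis."""
        if p == 0:
            return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
        left = 0.0
        if knots[i + p] != knots[i]:
            left = ((t - knots[i]) / (knots[i + p] - knots[i])
                    * bspline_basis(i, p - 1, t, knots))
        right = 0.0
        if knots[i + p + 1] != knots[i + 1]:
            right = ((knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1])
                     * bspline_basis(i + 1, p - 1, t, knots))
        return left + right

    def nurbs_point(t, ctrl, weights, knots, p):
        """Evaluate a 2-D NURBS curve point as a weighted rational
        combination of control points."""
        num = [0.0, 0.0]
        den = 0.0
        for i, (c, w) in enumerate(zip(ctrl, weights)):
            b = bspline_basis(i, p, t, knots) * w
            num[0] += b * c[0]
            num[1] += b * c[1]
            den += b
        return (num[0] / den, num[1] / den)
    ```

    With all weights equal to 1 the curve degenerates to an ordinary (non-rational) B-spline, which is why NURBS subsume the polynomial surface forms as special cases.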

  16. Description of the F-16XL Geometry and Computational Grids Used in CAWAPI

    NASA Technical Reports Server (NTRS)

    Boelens, O. J.; Badcock, K. J.; Gortz, S.; Morton, S.; Fritz, W.; Karman, S. L., Jr.; Michal, T.; Lamar, J. E.

    2009-01-01

    The objective of the Cranked-Arrow Wing Aerodynamics Project International (CAWAPI) was to allow a comprehensive validation of Computational Fluid Dynamics methods against the CAWAP flight database. A major part of this work involved the generation of high-quality computational grids. Prior to the grid generation an IGES file containing the air-tight geometry of the F-16XL aircraft was generated by a cooperation of the CAWAPI partners. Based on this geometry description, both structured and unstructured grids have been generated. The baseline structured (multi-block) grid (and a family of derived grids) has been generated by the National Aerospace Laboratory NLR. Although the algorithms used by NLR had become available just before CAWAPI and thus only a limited experience with their application to such a complex configuration had been gained, a grid of good quality was generated well within four weeks. This time compared favourably with that required to produce the unstructured grids in CAWAPI. The baseline all-tetrahedral and hybrid unstructured grids have been generated at NASA Langley Research Center and the USAFA, respectively. To provide more geometrical resolution, trimmed unstructured grids have been generated at EADS-MAS, the UTSimCenter, Boeing Phantom Works and KTH/FOI. All grids generated within the framework of CAWAPI will be discussed in the article. Both results obtained on the structured grids and the unstructured grids showed a significant improvement in agreement with flight test data in comparison with those obtained on the structured multi-block grid used during CAWAP.

  17. Examination of the three-dimensional geometry of cetacean flukes using computed tomography scans: hydrodynamic implications.

    PubMed

    Fish, Frank E; Beneski, John T; Ketten, Darlene R

    2007-06-01

    The flukes of cetaceans function in the hydrodynamic generation of forces for thrust, stability, and maneuverability. The three-dimensional geometry of flukes is associated with production of lift and drag. Data on fluke geometry were collected from 19 cetacean specimens representing eight odontocete genera (Delphinus, Globicephala, Grampus, Kogia, Lagenorhynchus, Phocoena, Stenella, Tursiops). Flukes were imaged as 1 mm thickness cross-sections using X-ray computer-assisted tomography. Fluke shapes were characterized quantitatively by dimensions of the chord, maximum thickness, and position of maximum thickness from the leading edge. Sections were symmetrical about the chordline and had a rounded leading edge and highly tapered trailing edge. The thickness ratio (maximum thickness/chord) among species increased from insertion on the tailstock to a maximum at 20% of span and then decreased steadily to the tip. Thickness ratio ranged from 0.139 to 0.232. These low values indicate reduced drag while moving at high speed. The position of maximum thickness from the leading edge remained constant over the fluke span at an average for all species of 0.285 chord. The displacement of the maximum thickness reduces the tendency of the flow to separate from the fluke surface, potentially affecting stall patterns. Similarly, the relatively large leading edge radius allows greater lift generation and delays stall. Computational analysis of fluke profiles at 50% of span showed that flukes were generally comparable or better for lift generation than engineered foils. Tursiops had the highest lift coefficients, which were superior to engineered foils by 12-19%. Variation in the structure of cetacean flukes reflects different hydrodynamic characteristics that could influence swimming performance.
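    The two shape descriptors reported above (thickness ratio and chordwise position of maximum thickness) are straightforward to compute from section data; a sketch with hypothetical input lists:

    ```python
    def section_geometry(xs, thickness):
        """Summarise one fluke cross-section: xs are chordwise stations
        from the leading edge, thickness the local section thickness
        at each station (illustrative input layout)."""
        chord = max(xs) - min(xs)
        t_max = max(thickness)
        i = thickness.index(t_max)
        return {
            "thickness_ratio": t_max / chord,                # 0.139-0.232 in the study
            "max_thickness_pos": (xs[i] - min(xs)) / chord,  # ~0.285 chord reported
        }
    ```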

  18. New Protocols for Solving Geometric Calculation Problems Incorporating Dynamic Geometry and Computer Algebra Software.

    ERIC Educational Resources Information Center

    Schumann, Heinz; Green, David

    2000-01-01

    Discusses software for geometric construction, measurement, and calculation, and software for numerical calculation and symbolic analysis that allows for new approaches to the solution of geometric problems. Illustrates these computer-aided graphical, numerical, and algebraic methods of solution and discusses examples using the appropriate choice…

  20. Optimization of numerical weather/wave prediction models based on information geometry and computational techniques

    NASA Astrophysics Data System (ADS)

    Galanis, George; Famelis, Ioannis; Kalogeri, Christina

    2014-10-01

    In recent years, a highly demanding framework has been set for environmental sciences and applied mathematics by issues that concern not only the scientific community but today's society in general: global warming, renewable energy resources and natural hazards can be listed among them. The research community today follows two main directions in order to address these problems: the utilization of environmental observations obtained from in situ or remote sensing sources, and meteorological-oceanographic simulations based on physical-mathematical models. In particular, in order to reach credible local forecasts, the two previous data sources are combined by algorithms that are essentially based on optimization processes. Conventional approaches in this framework usually neglect the topological-geometrical properties of the space of the data under study, adopting least-squares methods based on classical Euclidean geometry tools. In the present work, new optimization techniques are discussed that make use of methodologies from a rapidly advancing branch of applied mathematics, information geometry. The latter proves that the distributions of data sets are elements of non-Euclidean structures in which the underlying geometry may differ significantly from the classical one. Geometrical entities such as Riemannian metrics, distances, curvature and affine connections are utilized to define the optimum distributions fitting the environmental data at specific areas and to form the differential systems that describe the optimization procedures. The proposed methodology is illustrated by an application to wind speed forecasts on the island of Kefalonia, Greece.

  1. Floating-point geometry: toward guaranteed geometric computations with approximate arithmetics

    NASA Astrophysics Data System (ADS)

    Bajard, Jean-Claude; Langlois, Philippe; Michelucci, Dominique; Morin, Géraldine; Revol, Nathalie

    2008-08-01

    Geometric computations can fail because of inconsistencies due to floating-point inaccuracy. For instance, the computed intersection point between two curves does not lie on the curves: this is unavoidable when the intersection point coordinates are non-rational, and thus not representable in floating-point arithmetic. A popular heuristic approach tests equalities and nullities up to a tolerance ɛ. But transitivity of equality is lost: we can have A approx B and B approx C, but A not approx C (where A approx B means ||A - B|| < ɛ for two floating-point values A, B). Interval arithmetic is another, self-validated, alternative; the difficulty is to limit the growth of interval widths during computation. Unfortunately, interval arithmetic cannot decide equality or nullity, even in cases where they are decidable by other means. A new approach, developed in this paper, consists in modifying the geometric problems and algorithms to account for the undecidability of the equality test and the unavoidable inaccuracy. In particular, all curves come with a non-zero thickness, so two curves (generically) cut in a region with non-zero area, of which an inner and outer representation is computable. This last approach no longer assumes that an equality or nullity test is available. The question which arises is: which geometric problems can still be solved with this last approach, and which cannot? This paper begins with the description of some cases where every known arithmetic fails in practice. Then, for each arithmetic, some properties of the problems it can solve are given. We end this work by proposing the bases of a new approach which aims to fulfill the requirements of geometric computation.
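
    A tiny sketch of the transitivity failure described above (the values and the helper name are illustrative, not from the paper):

```python
EPS = 1e-9  # tolerance used by the heuristic equality test

def approx_eq(a, b, eps=EPS):
    """Heuristic equality up to a tolerance: |a - b| < eps."""
    return abs(a - b) < eps

A, B, C = 0.0, 0.6e-9, 1.2e-9   # B is within eps of both A and C

print(approx_eq(A, B))  # True
print(approx_eq(B, C))  # True
print(approx_eq(A, C))  # False: A approx B and B approx C, yet A not approx C
```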

  2. Two-phase flow in complex geometries: A diffuse domain approach

    PubMed Central

    Aland, S.; Voigt, A.

    2011-01-01

    We present a new method for simulating two-phase flows in complex geometries, taking into account contact lines separating immiscible incompressible components. We combine the diffuse domain method for solving PDEs in complex geometries with the diffuse-interface (phase-field) method for simulating multiphase flows. In this approach, the complex geometry is described implicitly by introducing a new phase-field variable, which is a smooth approximation of the characteristic function of the complex domain. The fluid and component concentration equations are reformulated and solved in a larger, regular domain, with the boundary conditions implicitly modeled using source terms. The method is straightforward to implement using standard software packages; we use adaptive finite elements here. We present numerical examples demonstrating the effectiveness of the algorithm. We simulate multiphase flow in a driven cavity on an extended domain and find very good agreement with results obtained by solving the equations and boundary conditions in the original domain. We then consider successively more complex geometries and simulate a droplet sliding down a rippled ramp in 2D and 3D, a droplet flowing through a Y-junction in a microfluidic network, and finally chaotic mixing in a droplet flowing through a winding, serpentine channel. The latter example actually incorporates two different diffuse domains: one describes the evolving droplet where mixing occurs while the other describes the channel. PMID:21918638
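
    The implicit description of the geometry can be sketched as follows, assuming a tanh transition profile of width ~ε (the paper's exact phase-field profile may differ, and the helper name is hypothetical):

```python
import numpy as np

def diffuse_domain_indicator(signed_dist, eps):
    """Smooth approximation of the characteristic function of a domain,
    built from its signed distance d (d < 0 inside): phi -> 1 inside,
    phi -> 0 outside, with a transition layer of width ~eps."""
    return 0.5 * (1.0 - np.tanh(3.0 * signed_dist / eps))

# Example: a disk of radius 0.3 centered in the unit square.
x = np.linspace(0.0, 1.0, 101)
X, Y = np.meshgrid(x, x)
d = np.sqrt((X - 0.5)**2 + (Y - 0.5)**2) - 0.3  # signed distance to the circle
phi = diffuse_domain_indicator(d, eps=0.05)

print(phi[50, 50])  # center of the disk: approximately 1 (inside)
print(phi[0, 0])    # corner, far outside: approximately 0
```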

  3. An immersed boundary computational model for acoustic scattering problems with complex geometries.

    PubMed

    Sun, Xiaofeng; Jiang, Yongsong; Liang, An; Jing, Xiaodong

    2012-11-01

    An immersed boundary computational model is presented for acoustic scattering problems involving complex geometries, in which the wall boundary condition is treated as a direct body force determined by satisfying the non-penetration boundary condition. Two distinct grids are used to discretize the fluid domain and the immersed boundary, respectively. The immersed boundaries are represented by Lagrangian points, and the direct body force determined on these points is applied to the neighboring Eulerian points. The coupling between the Lagrangian and Eulerian points is achieved through a discrete delta function. The linearized Euler equations are spatially discretized with a fourth-order dispersion-relation-preserving scheme and temporally integrated with a low-dissipation and low-dispersion Runge-Kutta scheme. A perfectly matched layer technique is applied to absorb out-going waves, as well as in-going waves inside the immersed bodies. Several benchmark problems for computational aeroacoustic solvers are performed to validate the present method.

  4. Minimal curvature trajectories: Riemannian geometry concepts for slow manifold computation in chemical kinetics

    NASA Astrophysics Data System (ADS)

    Lebiedz, Dirk; Reinhardt, Volkmar; Siehr, Jochen

    2010-09-01

    In dissipative ordinary differential equation systems different time scales cause anisotropic phase volume contraction along solution trajectories. Model reduction methods exploit this for simplifying chemical kinetics via a time scale separation into fast and slow modes. The aim is to approximate the system dynamics with a dimension-reduced model after eliminating the fast modes by enslaving them to the slow ones via computation of a slow attracting manifold. We present a novel method for computing approximations of such manifolds using trajectory-based optimization. We discuss Riemannian geometry concepts as a basis for suitable optimization criteria characterizing trajectories near slow attracting manifolds and thus provide insight into fundamental geometric properties of multiple time scale chemical kinetics. The optimization criteria correspond to a suitable mathematical formulation of "minimal relaxation" of chemical forces along reaction trajectories under given constraints. We present various geometrically motivated criteria and the results of their application to four test case reaction mechanisms serving as examples. We demonstrate that accurate numerical approximations of slow invariant manifolds can be obtained.

  5. Cognitive Load for Configuration Comprehension in Computer-Supported Geometry Problem Solving: An Eye Movement Perspective

    ERIC Educational Resources Information Center

    Lin, John Jr-Hung; Lin, Sunny S. J.

    2014-01-01

    The present study investigated (a) whether the perceived cognitive load was different when geometry problems with various levels of configuration comprehension were solved and (b) whether eye movements in comprehending geometry problems showed sources of cognitive loads. In the first investigation, three characteristics of geometry configurations…

  7. A Creative Arts Approach to Computer Programming.

    ERIC Educational Resources Information Center

    Greenberg, Gary

    1991-01-01

    Discusses "Object LOGO," a symbolic computer programing language for use in the creative arts. Describes the use of the program in approaching arts projects from textual, graphic, and musical perspectives. Suggests that use of the program can promote development of creative skills and humanities learning in general. (SG)

  8. New approach for 3D imaging and geometry modeling of the human inner ear.

    PubMed

    Vogel, U

    1999-01-01

    Obtaining high-resolution three-dimensional (3D) geometry data is a necessary prerequisite for modeling cochlear mechanics. Preferably, this procedure is done noninvasively to preserve the original morphology. Depending on the actual application, various levels of spatial resolution and tissue differentiation must be reached. Here a new approach is presented which allows 3D imaging of temporal bone specimens with intact regions of interest and spatial resolution currently in the 10-μm range, with capabilities for future enhancement down to the submicron level. The technique is based on microtomography using X-rays or synchrotron radiation. The structural data are reconstructed and converted to geometry data by 3D image processing, and eventually transferred into simulation environments, e.g., finite element analysis, but may also be used for general visualization tasks in research, clinics, and education.

  9. Projective geometry, duality and Plücker coordinates for geometric computations with determinants on GPUs

    NASA Astrophysics Data System (ADS)

    Sayin, Baris; Bekdaş, Gebrail; Nigdeli, Sinan Melih

    2017-07-01

    Many algorithms are based on geometric computation, and several criteria guide the selection of an appropriate algorithm from those already known. Until recently the fastest algorithms were preferred; nowadays, algorithms with high stability and acceptable complexity are favored instead. Today's technology and computer architectures, such as GPUs, also play a significant role in efficient processing of large data sets. However, some algorithms are ill-conditioned due to the numerical representation used, a consequence of floating-point representation with a limited mantissa length. In this paper, relations between projective representation, duality and Plücker coordinates are explored and demonstrated on simple geometric examples. The presented approach is especially convenient for application on GPUs or vector-vector computational architectures.
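
    A minimal illustration of the duality the abstract refers to, using 2-D homogeneous coordinates, where both "line through two points" and "intersection of two lines" reduce to determinant-based cross products (the example is ours, not from the paper):

```python
import numpy as np

# Points of the projective plane in homogeneous coordinates (x, y, w).
p1, p2 = np.array([0., 0., 1.]), np.array([1., 1., 1.])
p3, p4 = np.array([0., 1., 1.]), np.array([1., 0., 1.])

# Duality: the line through two points AND the intersection of two lines
# are both cross products, i.e. triples of 2x2 determinants.
l1 = np.cross(p1, p2)   # the line y = x
l2 = np.cross(p3, p4)   # the line y = 1 - x
q = np.cross(l1, l2)    # their intersection point, homogeneous

# No special case is needed until the final dehomogenization
# (q[2] == 0 would indicate parallel lines, i.e. a point at infinity).
print(q[:2] / q[2])     # [0.5 0.5]
```

Because every operation is a cross product on fixed-size vectors, the computation maps naturally onto GPU and vector architectures, which is the point made in the abstract.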

  10. Transoesophageal ultrasound and computer tomographic assessment of the equine cricoarytenoid dorsalis muscle: Relationship between muscle geometry and exercising laryngeal function.

    PubMed

    Kenny, M; Cercone, M; Rawlinson, J J; Ducharme, N G; Bookbinder, L; Thompson, M; Cheetham, J

    2017-05-01

    Early detection of recurrent laryngeal neuropathy (RLN) is of considerable interest to the equine industry. To describe two imaging modalities, transoesophageal ultrasound (TEU) and computed tomography (CT) with multiplanar reconstruction, to assess laryngeal muscle geometry, and to determine the relationship between cricoarytenoid dorsalis (CAD) geometry and function. Two-phase study evaluating CAD geometry in experimental horses and horses with naturally occurring RLN. Equine CAD muscle volume was determined from CT scan sets using volumetric reconstruction with LiveWire. The midbody and caudal dorsal-ventral thickness of the CAD muscle was determined using TEU in the same horses, and in horses with a range of severity of RLN (n = 112). Transoesophageal ultrasound was able to readily image the CAD muscles, and lower left:right CAD thickness ratios were observed with increasing disease severity. Computed tomography-based muscle volume correlated very closely with ex vivo muscle volume (R² = 0.77). Computed tomography reconstruction can accurately determine intrinsic laryngeal muscle geometry. A relationship between TEU measurements of CAD geometry and laryngeal function was established. These imaging techniques could be used to track the response of the CAD muscle to restorative surgical treatments such as nerve muscle pedicle graft, nerve anastomosis and functional electrical stimulation. © 2016 EVJ Ltd.

  11. A fully-coupled upwind discontinuous Galerkin method for incompressible porous media flows: High-order computations of viscous fingering instabilities in complex geometry

    NASA Astrophysics Data System (ADS)

    Scovazzi, G.; Huang, H.; Collis, S. S.; Yin, J.

    2013-11-01

    We present a new approach to the simulation of viscous fingering instabilities in incompressible, miscible displacement flows in porous media. In the past, high resolution computational simulations of viscous fingering instabilities have always been performed using high-order finite difference or Fourier-spectral methods, which do not possess the flexibility to compute very complex subsurface geometries. Our approach, instead, by means of a fully-coupled nonlinear implementation of the discontinuous Galerkin method, possesses a fundamental differentiating feature in that it maintains high-order accuracy on fully unstructured meshes. In addition, the proposed method shows very low sensitivity to mesh orientation, in contrast with the classical finite volume approximations used in porous media flow simulations. The robustness and accuracy of the method are demonstrated in a number of challenging computational problems.

  12. Geometry Design Optimization of Functionally Graded Scaffolds for Bone Tissue Engineering: A Mechanobiological Approach

    PubMed Central

    Boccaccio, Antonio; Uva, Antonio Emmanuele; Fiorentino, Michele; Mori, Giorgio; Monno, Giuseppe

    2016-01-01

    Functionally Graded Scaffolds (FGSs) are porous biomaterials whose porosity changes in space with a specific gradient. In spite of their wide use in bone tissue engineering, models that relate the scaffold gradient to the mechanical and biological requirements for the regeneration of bony tissue are currently missing. In this study we attempt to bridge the gap by developing a mechanobiology-based optimization algorithm aimed at determining the optimal graded porosity distribution in FGSs. The algorithm combines a parametric finite element model of a FGS, a computational mechano-regulation model and a numerical optimization routine. For assigned boundary and loading conditions, the algorithm iteratively builds different scaffold geometry configurations with different porosity distributions until the best microstructure geometry is reached, i.e. the geometry that maximizes the amount of bone formation. We tested different porosity distribution laws, loading conditions and scaffold Young's modulus values. For each combination of these variables, the explicit equation of the porosity distribution law, i.e. the law that describes the pore dimensions as a function of the spatial coordinates, was determined that allows the highest amount of bone to be generated. The results show that the loading conditions significantly affect the optimal porosity distribution. For pure compression loading, the pore dimensions were found to be almost constant throughout the entire scaffold, and using a FGS allows the formation of amounts of bone only slightly larger than those obtainable with a homogeneous porosity scaffold. For pure shear loading, instead, FGSs allow bone formation to be increased significantly compared with homogeneous porosity scaffolds. Although experimental data are still necessary to properly relate the mechanical/biological environment to the scaffold microstructure, this model represents an important step towards optimizing geometry

  13. Computational Approach for Developing Blood Pump

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan

    2002-01-01

    This viewgraph presentation provides an overview of the computational approach to developing a ventricular assist device (VAD) which utilizes NASA aerospace technology. The VAD is used as a temporary support for sick ventricles in those who suffer from late-stage congestive heart failure (CHF). The need for donor hearts is much greater than their availability, and the VAD is seen as a bridge-to-transplant. The computational issues confronting the design of a more advanced, reliable VAD include the modelling of viscous incompressible flow. A computational approach provides the possibility of quantifying the flow characteristics, which is especially valuable for analyzing compact designs with highly sensitive operating conditions. Computational fluid dynamics (CFD) and rocket engine technology have been applied to modify the design of a VAD which enabled human transplantation. The computing requirement for this project is still large, however, and the unsteady analysis of the entire system from natural heart to aorta involves several hundred revolutions of the impeller. Further study is needed to assess the impact of mechanical VADs on the human body.

  14. Effect of inlet geometry on macrosegregation during the direct chill casting of 7050 alloy billets: experiments and computer modelling

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Eskin, D. G.; Miroux, A.; Subroto, T.; Katgerman, L.

    2012-07-01

    Controlling macrosegregation is one of the major challenges in direct-chill (DC) casting of aluminium alloys. In this paper, the effect of the inlet geometry (which influences the melt distribution) on macrosegregation during the DC casting of 7050 alloy billets was studied experimentally and by using 2D computer modelling. The ALSIM model was used to determine the temperature and flow patterns during DC casting. The results from the computer simulations show that the sump profiles and flow patterns in the billet are strongly influenced by the melt flow distribution determined by the inlet geometry. These observations were correlated to the actual macrosegregation patterns found in the as-cast billets produced by having two different inlet geometries. The macrosegregation analysis presented here may assist in determining the critical parameters to consider for improving the casting of 7XXX aluminium alloys.

  15. A Combined Geometric Approach for Computational Fluid Dynamics on Dynamic Grids

    NASA Technical Reports Server (NTRS)

    Slater, John W.

    1995-01-01

    A combined geometric approach for computational fluid dynamics is presented for the analysis of unsteady flow about mechanisms whose components are in moderate relative motion. For a CFD analysis, the total dynamics problem involves the interrelated aspects of geometry modeling, grid generation, and flow modeling. The interrelationships between these three aspects allow a more natural formulation of the problem and the sharing of information, which can be advantageous to the computation of the dynamics. The approach is applied to planar geometries with the use of an efficient multi-block, structured grid generation method to compute unsteady, two-dimensional and axisymmetric flow. The applications presented include the computation of the unsteady, inviscid flow about a hinged flap with flap deflections and a high-speed inlet with centerbody motion as part of the unstart/restart operation.

  16. The Theory of Transactional Distance as a Framework for the Analysis of Computer-Aided Teaching of Geometry

    ERIC Educational Resources Information Center

    Papadopoulos, Ioannis; Dagdilelis, Vassilios

    2006-01-01

    In this paper, difficulties of students in the case of computer-mediated teaching of geometry in a traditional classroom are considered within the framework of "transactional distance", a concept well known in distance education. The main interest of this paper is to record and describe in detail the different forms of…

  17. Peer Interactions in a Computer Lab: Reflections on Results of a Case Study Involving Web-Based Dynamic Geometry Sketches

    ERIC Educational Resources Information Center

    Sinclair, Margaret P.

    2005-01-01

    A case study, originally set up to identify and describe some benefits and limitations of using dynamic web-based geometry sketches, provided an opportunity to examine peer interactions in a lab. Since classes were held in a computer lab, teachers and pairs faced the challenges of working and communicating in a lab environment. Research has shown…

  19. Geometry, analysis, and computation in mathematics and applied sciences. Final report

    SciTech Connect

    Kusner, R.B.; Hoffman, D.A.; Norman, P.; Pedit, F.; Whitaker, N.; Oliver, D.

    1995-12-31

    Since 1993, the GANG laboratory has been co-directed by David Hoffman, Rob Kusner and Peter Norman. A great deal of mathematical research has been carried out here by them and by GANG faculty members Franz Pedit and Nate Whitaker. New communication tools, such as the GANG Webserver, have also been developed. GANG has trained and supported nearly a dozen graduate students, and at least half as many undergraduates in REU projects. The GANG Seminar continues to thrive, making Amherst a site for short- and long-term visitors to come work with the GANG. Some of the highlights of recent or ongoing research at GANG include: CMC surfaces, minimal surfaces, fluid dynamics, harmonic maps, isometric immersions, knot energies, foam structures, high-dimensional soap film singularities, elastic curves and surfaces, self-similar curvature evolution, integrable systems and theta functions, fully nonlinear geometric PDE, and geometric chemistry and biology. This report is divided into the following sections: (1) geometric variational problems; (2) soliton geometry; (3) embedded minimal surfaces; (4) numerical fluid dynamics and mathematical modeling; (5) GANG graphics and mathematical software; (6) description of the computational and visual analysis facility; and (7) research by undergraduates and the GANG graduate seminar.

  20. Effects of frequency, irradiation geometry and polarisation on computation of SAR in human brain.

    PubMed

    Zhou, Hongmei; Su, Zhentao; Ning, Jing; Wang, Changzhen; Xie, Xiangdong; Qu, Decheng; Wu, Ke; Zhang, Xiaomin; Pan, Jie; Yang, Guoshan

    2014-12-01

    The power absorbed by the human brain has possible implications in the study of the central nervous system-related biological effects of electromagnetic fields. In order to determine the specific absorption rate (SAR) of radio frequency (RF) waves in the human brain, and to investigate the effects of irradiation geometry and polarisation on SAR values, the finite-difference time-domain method was applied for the SAR computation. An anatomically realistic model scaled to a height of 1.70 m and a mass of 63 kg was selected, which included 14 million voxels segmented into 39 tissue types. The results suggested that high SAR values occur in the brain at ∼250 MHz for vertical polarisation and at 900-1200 MHz for both vertical and horizontal polarisation, which may be the result of head resonance at these frequencies. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  1. Computational Flow Modeling of a Simplified Integrated Tractor-Trailer Geometry

    SciTech Connect

    Salari, K; McWherter-Payne, M

    2003-09-15

    For several years, Sandia National Laboratories and Lawrence Livermore National Laboratory have been part of a consortium funded by the Department of Energy to improve fuel efficiency of heavy vehicles such as Class 8 trucks through aerodynamic drag reduction. The objective of this work is to demonstrate the feasibility of using the steady Reynolds-Averaged Navier-Stokes (RANS) approach to predict the flow field around heavy vehicles, with special emphasis on the base region of the trailer, and to compute the aerodynamic forces. In particular, Sandia's computational fluid dynamics code, SACCARA, was used to simulate the flow on a simplified model of a tractor-trailer vehicle. The results are presented and compared with NASA Ames experimental data to assess the predictive capability of RANS to model the flow field and predict the aerodynamic forces.

  3. Computational approach to compact Riemann surfaces

    NASA Astrophysics Data System (ADS)

    Frauendiener, Jörg; Klein, Christian

    2017-01-01

    A purely numerical approach to compact Riemann surfaces starting from plane algebraic curves is presented. The critical points of the algebraic curve are computed via a two-dimensional Newton iteration. The starting values for this iteration are obtained from the resultants with respect to both coordinates of the algebraic curve and a suitable pairing of their zeros. A set of generators of the fundamental group for the complement of these critical points in the complex plane is constructed from circles around these points and connecting lines obtained from a minimal spanning tree. The monodromies are computed by solving the defining equation of the algebraic curve on collocation points along these contours and by analytically continuing the roots. The collocation points are chosen to correspond to Chebyshev collocation points for an ensuing Clenshaw-Curtis integration of the holomorphic differentials, which gives the periods of the Riemann surface with spectral accuracy. At the singularities of the algebraic curve, Puiseux expansions computed by contour integration on the circles around the singularities are used to identify the holomorphic differentials. The Abel map is also computed with the Clenshaw-Curtis algorithm and contour integrals. As an application of the code, solutions to the Kadomtsev-Petviashvili equation are computed on non-hyperelliptic Riemann surfaces.
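
    The Clenshaw-Curtis step mentioned above can be sketched as follows (a standard construction of the nodes and weights, not the authors' code):

```python
import numpy as np

def clenshaw_curtis(n):
    """Nodes and weights for Clenshaw-Curtis quadrature on [-1, 1]
    with n + 1 Chebyshev points; spectrally accurate for smooth
    integrands (standard construction, cf. Trefethen's clencurt)."""
    theta = np.pi * np.arange(n + 1) / n
    x = np.cos(theta)
    w = np.zeros(n + 1)
    v = np.ones(n - 1)
    if n % 2 == 0:
        w[0] = w[n] = 1.0 / (n**2 - 1)
        for k in range(1, n // 2):
            v -= 2.0 * np.cos(2 * k * theta[1:-1]) / (4 * k**2 - 1)
        v -= np.cos(n * theta[1:-1]) / (n**2 - 1)
    else:
        w[0] = w[n] = 1.0 / n**2
        for k in range(1, (n - 1) // 2 + 1):
            v -= 2.0 * np.cos(2 * k * theta[1:-1]) / (4 * k**2 - 1)
    w[1:-1] = 2.0 * v / n
    return x, w

x, w = clenshaw_curtis(32)
approx = w @ np.exp(x)                # integrate e^x over [-1, 1]
print(abs(approx - (np.e - 1/np.e)))  # error near machine precision
```

In the paper this quadrature is applied along contours to the holomorphic differentials; the example above only shows the spectral convergence on a smooth model integrand.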

  4. Computational Approaches to Nucleic Acid Origami.

    PubMed

    Jabbari, Hosna; Aminpour, Maral; Montemagno, Carlo

    2015-10-12

    Recent advances in experimental DNA origami have dramatically expanded the horizon of DNA nanotechnology. Complex 3D suprastructures have been designed and developed using DNA origami, with applications in biomaterial science, nanomedicine, nanorobotics, and molecular computation. Ribonucleic acid (RNA) origami has recently been realized as a new approach. Similar to DNA, RNA molecules can be designed to form complex 3D structures through complementary base pairings. RNA origami structures are, however, more compact and more thermodynamically stable due to RNA's non-canonical base pairing and tertiary interactions. With all these advantages, the development of RNA origami lags behind DNA origami by a large gap. Furthermore, although computational methods have proven effective in designing DNA and RNA origami structures and in their evaluation, advances in computational nucleic acid origami are even more limited. In this paper, we review major milestones in experimental and computational DNA and RNA origami and present current challenges in these fields. We believe collaboration between experimental nanotechnologists and computer scientists is critical for advancing these new research paradigms.

  5. Scalable, massively parallel approaches to upstream drainage area computation

    NASA Astrophysics Data System (ADS)

    Richardson, A.; Hill, C. N.; Perron, T.

    2011-12-01

    Accumulated drainage area maps of large regions are required for several applications. Among these are assessments of regional patterns of flow and sediment routing, high-resolution landscape evolution models in which drainage basin geometry evolves with time, and surveys of the characteristics of river basins that drain to continental margins. The computation of accumulated drainage areas is accomplished by inferring the vector field of drainage flow directions from a two-dimensional digital elevation map, and then computing the integrated upstream area that drains to each tile of the map. Generally this last step is done with a recursive algorithm that accumulates upstream areas sequentially. The inherently serial nature of this approach restricts the number of tiles that can be included, thereby limiting the resolution of continental-size domains, because of the requirements of both memory, which rises proportionally to the number of tiles N, and computing time, which is O(N²). The fundamentally sequential property of this approach also prohibits effective use of large-scale parallelism. An alternate method of calculating accumulated drainage area from drainage direction data can be arrived at by reformulating the problem as the solution of a system of simultaneous linear equations. The equations express the relation that the total upslope area of a particular tile is the sum of the upslope areas of all immediately adjacent tiles that drain to it, plus the tile's own area. Solving these equations amounts to solving a sparse, nine-diagonal matrix system whose right-hand side is simply the vector of individual tile areas and whose diagonals are determined by the landscape geometry. We show how an iterative method, Bi-CGSTAB, can be used to solve this problem in a scalable, massively parallel manner. However, this introduces

  6. Computer Automated Structure Evaluation (CASE) of the teratogenicity of retinoids with the aid of a novel geometry index

    NASA Astrophysics Data System (ADS)

    Klopman, Gilles; Dimayuga, Mario L.

    1990-06-01

    The CASE (Computer Automated Structure Evaluation) program, with the aid of a geometry index for discriminating cis and trans isomers, has been used to study a set of retinoids tested for teratogenicity in hamsters. CASE identified 8 fragments, the most important representing the non-polar terminus of a retinoid with an additional ring system that introduces some rigidity into the isoprenoid side chain. The geometry index helped to identify relevant fragments with an all-trans configuration and to distinguish them from irrelevant fragments with other configurations.

  7. A Parametric Geometry Computational Fluid Dynamics (CFD) Study Utilizing Design of Experiments (DOE)

    NASA Technical Reports Server (NTRS)

    Rhew, Ray D.; Parker, Peter A.

    2007-01-01

    Design of Experiments (DOE) was applied to the LAS geometric parameter study to efficiently identify and rank the primary contributors to integrated drag over the vehicle's ascent trajectory using an order of magnitude fewer CFD configurations, thereby reducing computational resources and solution time. Subject matter experts (SMEs) were able to gain a better understanding of the underlying flow physics of different geometric parameter configurations through the identification of interaction effects. An interaction effect, which describes how the effect of one factor changes with the levels of other factors, is often the key to product optimization. A DOE approach emphasizes sequential learning through successive experimentation, continuously building on previous knowledge. These studies represent a starting point for expanded experimental activities that will eventually cover the entire design space of the vehicle and flight trajectory.
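
    The interaction effect described above can be made concrete with a minimal two-level full-factorial example (hypothetical numbers, not the LAS study data): with coded factor levels of ±1, the interaction coefficient is estimated alongside the main effects by least squares.

```python
import numpy as np

# Hypothetical 2^2 full factorial: two geometric factors at coded levels
# -1/+1 and a made-up drag response for each of the four configurations.
X = np.array([[-1.0, -1.0], [+1.0, -1.0], [-1.0, +1.0], [+1.0, +1.0]])
y = np.array([10.0, 14.0, 12.0, 22.0])

# Model matrix: intercept, two main effects, and the two-factor interaction.
M = np.column_stack([np.ones(4), X[:, 0], X[:, 1], X[:, 0] * X[:, 1]])
b0, b1, b2, b12 = np.linalg.lstsq(M, y, rcond=None)[0]

# A nonzero b12 means the effect of factor 1 changes with the level of
# factor 2 -- exactly the interaction a one-factor-at-a-time sweep over
# the same number of runs would miss.
print(b1, b2, b12)   # 3.5 2.5 1.5
```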

  8. An integrated experimental and computational approach for ...

    EPA Pesticide Factsheets

    Enantiomers of chiral molecules commonly exhibit differing pharmacokinetics and toxicities, which can introduce significant uncertainty when evaluating biological and environmental fates and potential risks to humans and the environment. However, racemization (the irreversible transformation of one enantiomer into the racemic mixture) and enantiomerization (the reversible conversion of one enantiomer into the other) are poorly understood. To better understand these processes, we investigated the chiral fungicide triadimefon, which undergoes racemization in soils, water, and organic solvents. Nuclear magnetic resonance (NMR) and gas chromatography/mass spectrometry (GC/MS) techniques were used to measure the rates of enantiomerization and racemization, deuterium isotope effects, and activation energies for triadimefon in H2O and D2O. From these results we were able to determine that: 1) the alpha-carbonyl carbon of triadimefon is the reaction site; 2) cleavage of the C-H (C-D) bond is the rate-determining step; 3) the reaction is base-catalyzed; and 4) the reaction likely involves a symmetrical intermediate. The B3LYP/6-311+G** level of theory was used to compute optimized geometries, harmonic vibrational frequencies, natural population analysis, and intrinsic reaction coordinates for triadimefon in water, and three racemization pathways were hypothesized. This work provides an initial step in developing predictive, structure-based models that are needed to
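
    The activation-energy measurements mentioned above rest on the standard Arrhenius relation; a minimal sketch (with invented rate constants, not the measured triadimefon values) recovers Ea from rate constants at two temperatures.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def activation_energy(k1, T1, k2, T2):
    """Two-point Arrhenius estimate: k = A exp(-Ea / (R T)) implies
    Ea = R * ln(k2 / k1) / (1/T1 - 1/T2)."""
    return R * np.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)

# Hypothetical racemization rate constants at 25 C and 45 C:
Ea = activation_energy(k1=1.0e-5, T1=298.15, k2=8.0e-5, T2=318.15)
print(Ea / 1000.0, "kJ/mol")   # roughly 82 kJ/mol for these numbers
```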

  10. Effect of ocular shape and vascular geometry on retinal hemodynamics: a computational model.

    PubMed

    Dziubek, Andrea; Guidoboni, Giovanna; Harris, Alon; Hirani, Anil N; Rusjan, Edmond; Thistleton, William

    2016-08-01

    A computational model for retinal hemodynamics accounting for ocular curvature is presented. The model combines (i) a hierarchical Darcy model for the flow through small arterioles, capillaries and small venules in the retinal tissue, where blood vessels of different size are comprised in different hierarchical levels of a porous medium; and (ii) a one-dimensional network model for the blood flow through retinal arterioles and venules of larger size. The non-planar ocular shape is included by (i) defining the hierarchical Darcy flow model on a two-dimensional curved surface embedded in the three-dimensional space; and (ii) mapping the simplified one-dimensional network model onto the curved surface. The model is solved numerically using a finite element method in which spatial domain and hierarchical levels are discretized separately. For the finite element method, we use an exterior calculus-based implementation which permits an easier treatment of non-planar domains. Numerical solutions are verified against suitably constructed analytical solutions. Numerical experiments are performed to investigate how retinal hemodynamics is influenced by the ocular shape (sphere, oblate spheroid, prolate spheroid and barrel are compared) and vascular architecture (four vascular arcs and a branching vascular tree are compared). The model predictions show that changes in ocular shape induce non-uniform alterations of blood pressure and velocity in the retina. In particular, we found that (i) the temporal region is affected the least by changes in ocular shape, and (ii) the barrel shape departs the most from the hemispherical reference geometry in terms of associated pressure and velocity distributions in the retinal microvasculature. These results support the clinical hypothesis that alterations in ocular shape, such as those occurring in myopic eyes, might be associated with pathological alterations in retinal hemodynamics.

  11. Validation of Methods for Computational Catalyst Design: Geometries, Structures, and Energies of Neutral and Charged Silver Clusters

    SciTech Connect

    Duanmu, Kaining; Truhlar, Donald G.

    2015-04-30

    We report a systematic study of small silver clusters, Agn, Agn+, and Agn–, n = 1–7. We studied all possible isomers of clusters with n = 5–7. We tested 42 exchange–correlation functionals and assessed them for their accuracy in three respects: geometries (quantitative prediction of internuclear distances), structures (the nature of the lowest-energy structure, for example, whether it is planar or nonplanar), and energies. We find that the ingredients of exchange–correlation functionals are indicators of their success in predicting geometries and structures: local exchange–correlation functionals are generally better than hybrid functionals for geometries; functionals depending on kinetic energy density are the best for predicting the lowest-energy isomer correctly, especially for predicting two-dimensional to three-dimensional transitions correctly. The accuracy for energies is less sensitive to the ingredient list. Our findings could be useful for guiding the selection of methods for computational catalyst design.

  13. Computer Forensics Education - the Open Source Approach

    NASA Astrophysics Data System (ADS)

    Huebner, Ewa; Bem, Derek; Cheung, Hon

    In this chapter we discuss the application of open source software tools in computer forensics education at the tertiary level. We argue that open source tools are more suitable than commercial tools, as they provide the opportunity for students to gain an in-depth understanding and appreciation of the computer forensic process, as opposed to familiarity with one software product, however complex and multi-functional. With access to all source programs, the students become more than just consumers of the tools as future forensic investigators. They can also examine the code, understand the relationship between the binary images and relevant data structures, and in the process gain the necessary background to become the future creators of new and improved forensic software tools. As a case study we present an advanced subject, Computer Forensics Workshop, which we designed for the Bachelor's degree in computer science at the University of Western Sydney. We based all laboratory work and the main take-home project in this subject on open source software tools. We found that, without exception, more than one suitable tool can be found to cover each topic in the curriculum adequately. We argue that this approach prepares students better for forensic field work, as they gain the confidence to use a variety of tools, not just a single product they are familiar with.

  14. Computational Approaches for Predicting Biomedical Research Collaborations

    PubMed Central

    Zhang, Qing; Yu, Hong

    2014-01-01

    Biomedical research is increasingly collaborative, and successful collaborations often produce high impact work. Computational approaches can be developed for automatically predicting biomedical research collaborations. Previous work on collaboration prediction has mainly explored the topological structures of research collaboration networks, leaving out the rich semantic information in the publications themselves. In this paper, we propose supervised machine learning approaches to predict research collaborations in the biomedical field. We explored both semantic features extracted from author research interest profiles and author network topological features. We found that the most informative semantic features for author collaborations are related to research interest, including similarity of out-citing citations and similarity of abstracts. Of the four supervised machine learning models (naïve Bayes, naïve Bayes multinomial, SVMs, and logistic regression), the best performing model is logistic regression, with an area under the ROC curve ranging from 0.766 to 0.980 on different datasets. To our knowledge we are the first to study in depth how research interests and productivity can be used for collaboration prediction. Our approach is computationally efficient, scalable and yet simple to implement. The datasets of this study are available at https://github.com/qingzhanggithub/medline-collaboration-datasets. PMID:25375164
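
    As a toy illustration of the best-performing model class (plain logistic regression on pairwise similarity features; the data here are invented, not the MEDLINE datasets above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in features for author pairs: column 0 ~ abstract similarity,
# column 1 ~ out-citing citation similarity (both invented).
n = 200
X = rng.normal(size=(n, 2))
# Label pairs with high combined similarity as collaborations.
y = (X[:, 0] + X[:, 1] + 0.1 * rng.normal(size=n) > 0).astype(float)

# Logistic regression fitted by batch gradient descent.
Xb = np.column_stack([np.ones(n), X])   # prepend an intercept column
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))   # predicted collaboration probability
    w -= 0.1 * Xb.T @ (p - y) / n       # gradient step on the log loss

acc = np.mean(((Xb @ w) > 0) == (y > 0.5))
print(acc)   # training accuracy, well above chance on this toy data
```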

  15. Geometry acquisition and grid generation: Recent experiences with complex aircraft configurations

    NASA Technical Reports Server (NTRS)

    Gatzke, Timothy D.; Labozzetta, Walter F.; Cooley, John W.; Finfrock, Gregory P.

    1992-01-01

    Important issues involved in working with complex geometries are discussed. Approaches taken to address complex geometry issues in the McDonnell Aircraft Computational Grid System and related geometry processing tools are discussed. The efficiency of acquiring a suitable geometry definition, the need to manipulate the geometry, and the time and skill level required to generate the grid while preserving geometric fidelity are discussed.

  16. Computational Approach to Hyperelliptic Riemann Surfaces

    NASA Astrophysics Data System (ADS)

    Frauendiener, Jörg; Klein, Christian

    2015-03-01

    We present a computational approach to general hyperelliptic Riemann surfaces in Weierstrass normal form. The surface is given by a list of the branch points, the coefficients of the defining polynomial or a system of cuts for the curve. A canonical basis of the homology is introduced algorithmically for this curve. The periods of the holomorphic differentials and the Abel map are computed with the Clenshaw-Curtis method to achieve spectral accuracy. The code can handle almost degenerate Riemann surfaces. This work generalizes previous work on real hyperelliptic surfaces with prescribed cuts to arbitrary hyperelliptic surfaces. As an example, solutions to the sine-Gordon equation in terms of multi-dimensional theta functions are studied, also in the solitonic limit of these solutions.
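
    The Clenshaw-Curtis rule underlying the period computations can be sketched generically (following Trefethen's well-known clencurt construction for even N; this is an illustration, not the authors' code). Spectral accuracy means the quadrature error decays faster than any power of N for smooth integrands.

```python
import numpy as np

def clencurt(N):
    """Clenshaw-Curtis nodes x and weights w on [-1, 1] for even N;
    the integral of f is approximated by w @ f(x)."""
    theta = np.pi * np.arange(N + 1) / N
    x = np.cos(theta)                      # Chebyshev points
    v = np.ones(N - 1)
    for k in range(1, N // 2):
        v -= 2.0 * np.cos(2 * k * theta[1:-1]) / (4 * k * k - 1)
    v -= np.cos(N * theta[1:-1]) / (N * N - 1)
    w = np.empty(N + 1)
    w[0] = w[-1] = 1.0 / (N * N - 1)
    w[1:-1] = 2.0 * v / N
    return x, w

x, w = clencurt(16)
approx = w @ np.exp(x)           # integral of exp over [-1, 1]
exact = np.exp(1) - np.exp(-1)
print(abs(approx - exact))       # spectrally small error
```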

  17. Computational approaches to fMRI analysis.

    PubMed

    Cohen, Jonathan D; Daw, Nathaniel; Engelhardt, Barbara; Hasson, Uri; Li, Kai; Niv, Yael; Norman, Kenneth A; Pillow, Jonathan; Ramadge, Peter J; Turk-Browne, Nicholas B; Willke, Theodore L

    2017-02-23

    Analysis methods in cognitive neuroscience have not always matched the richness of fMRI data. Early methods focused on estimating neural activity within individual voxels or regions, averaged over trials or blocks and modeled separately in each participant. This approach mostly neglected the distributed nature of neural representations over voxels, the continuous dynamics of neural activity during tasks, the statistical benefits of performing joint inference over multiple participants and the value of using predictive models to constrain analysis. Several recent exploratory and theory-driven methods have begun to pursue these opportunities. These methods highlight the importance of computational techniques in fMRI analysis, especially machine learning, algorithmic optimization and parallel computing. Adoption of these techniques is enabling a new generation of experiments and analyses that could transform our understanding of some of the most complex-and distinctly human-signals in the brain: acts of cognition such as thoughts, intentions and memories.

  18. Sculpting the band gap: a computational approach

    NASA Astrophysics Data System (ADS)

    Prasai, Kiran; Biswas, Parthapratim; Drabold, D. A.

    2015-10-01

    Materials with optimized band gap are needed in many specialized applications. In this work, we demonstrate that Hellmann-Feynman forces associated with the gap states can be used to find atomic coordinates that yield desired electronic density of states. Using tight-binding models, we show that this approach may be used to arrive at electronically designed models of amorphous silicon and carbon. We provide a simple recipe to include a priori electronic information in the formation of computer models of materials, and prove that this information may have profound structural consequences. The models are validated with plane-wave density functional calculations.

  19. Sculpting the band gap: a computational approach

    PubMed Central

    Prasai, Kiran; Biswas, Parthapratim; Drabold, D. A.

    2015-01-01

    Materials with optimized band gap are needed in many specialized applications. In this work, we demonstrate that Hellmann-Feynman forces associated with the gap states can be used to find atomic coordinates that yield desired electronic density of states. Using tight-binding models, we show that this approach may be used to arrive at electronically designed models of amorphous silicon and carbon. We provide a simple recipe to include a priori electronic information in the formation of computer models of materials, and prove that this information may have profound structural consequences. The models are validated with plane-wave density functional calculations. PMID:26490203

  20. The Fractal Geometry of Nature; Its Mathematical Basis and Application to Computer Graphics

    DTIC Science & Technology

    1986-01-01

    ...mathematical constructs. It was first popularized by complex renderings of terrain on a computer graphics medium. Fractal geometry has since... geometry has not yet been realized. In the final analysis, we expect that even the skeptical reader will discover the mathematical beauty and... into [0,1] preserves the distribution from the original range ([0, 2^31 - 1]). Analysis of the normalized uniform random numbers...

  1. Computations of Viscous Flows in Complex Geometries Using Multiblock Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, Erlendur; Ameri, Ali A.

    1995-01-01

    Generating high quality, structured, continuous, body-fitted grid systems (multiblock grid systems) for complicated geometries has long been one of the most labor-intensive and frustrating parts of simulating flows in complicated geometries. Recently, new methodologies and software have emerged that greatly reduce the human effort required to generate high quality multiblock grid systems for complicated geometries. These methods and software require minimal input from the user: typically, only information about the topology of the block structure and the number of grid points. This paper demonstrates the use of the new breed of multiblock grid systems in simulations of internal flows in complicated geometries. The geometry used in this study is a duct with a sudden expansion, a partition, and an array of cylindrical pins. This geometry has many of the features typical of internal coolant passages in turbine blades. The grid system used in this study was generated using a commercially available grid generator. The simulations were done using a recently developed flow solver, TRAF3D.MB, that was specially designed to use multiblock grid systems.

  2. FACET: a radiation view factor computer code for axisymmetric, 2D planar, and 3D geometries with shadowing

    SciTech Connect

    Shapiro, A.B.

    1983-08-01

    The computer code FACET calculates the radiation geometric view factor (alternatively called shape factor, angle factor, or configuration factor) between surfaces for axisymmetric, two-dimensional planar and three-dimensional geometries with interposed third surface obstructions. FACET was developed to calculate view factors for input to finite-element heat-transfer analysis codes. The first section of this report is a brief review of previous radiation-view-factor computer codes. The second section presents the defining integral equation for the geometric view factor between two surfaces and the assumptions made in its derivation. Also in this section are the numerical algorithms used to integrate this equation for the various geometries. The third section presents the algorithms used to detect self-shadowing and third-surface shadowing between the two surfaces for which a view factor is being calculated. The fourth section provides a user's input guide followed by several example problems.
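
    The defining double integral mentioned in the second section can be evaluated directly for a simple configuration. The sketch below (an assumed illustration; FACET itself handles general axisymmetric, planar, and 3D geometries with shadowing) uses a midpoint rule for two coaxial parallel squares, for which the tabulated view factor at unit side and unit separation is about 0.1998.

```python
import numpy as np

def view_factor_parallel(a1, a2, h, n=40):
    """Midpoint-rule evaluation of the defining integral
    F12 = (1/A1) * double-integral of cos(t1) cos(t2) / (pi r^2) dA2 dA1
    for two coaxial parallel squares (sides a1, a2) separated by h."""
    u1 = (np.arange(n) + 0.5) / n * a1 - a1 / 2.0
    u2 = (np.arange(n) + 0.5) / n * a2 - a2 / 2.0
    x1, y1 = np.meshgrid(u1, u1, indexing="ij")
    x2, y2 = np.meshgrid(u2, u2, indexing="ij")
    dA1, dA2 = (a1 / n) ** 2, (a2 / n) ** 2
    total = 0.0
    for px, py in zip(x1.ravel(), y1.ravel()):
        r2 = (x2 - px) ** 2 + (y2 - py) ** 2 + h * h
        # For parallel facing surfaces, cos(t1) = cos(t2) = h / r.
        total += np.sum(h * h / (np.pi * r2 * r2)) * dA1 * dA2
    return total / (a1 * a1)

F12 = view_factor_parallel(1.0, 1.0, 1.0)
print(F12)   # ~0.20 for unit squares one unit apart
```

    Reciprocity (A1 F12 = A2 F21) provides a quick sanity check on any such implementation.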

  3. Computational approaches to motor learning by imitation.

    PubMed Central

    Schaal, Stefan; Ijspeert, Auke; Billard, Aude

    2003-01-01

    Movement imitation requires a complex set of mechanisms that map an observed movement of a teacher onto one's own movement apparatus. Relevant problems include movement recognition, pose estimation, pose tracking, body correspondence, coordinate transformation from external to egocentric space, matching of observed against previously learned movement, resolution of redundant degrees-of-freedom that are unconstrained by the observation, suitable movement representations for imitation, modularization of motor control, etc. All of these topics by themselves are active research problems in computational and neurobiological sciences, such that their combination into a complete imitation system remains a daunting undertaking; indeed, one could argue that we need to understand the complete perception-action loop. As a strategy to untangle the complexity of imitation, this paper will examine imitation purely from a computational point of view, i.e. we will review statistical and mathematical approaches that have been suggested for tackling parts of the imitation problem, and discuss their merits, disadvantages and underlying principles. Given the focus on action recognition of other contributions in this special issue, this paper will primarily emphasize the motor side of imitation, assuming that a perceptual system has already identified important features of a demonstrated movement and created their corresponding spatial information. Based on the formalization of motor control in terms of control policies and their associated performance criteria, useful taxonomies of imitation learning can be generated that clarify different approaches and future research directions. PMID:12689379

  4. A geometric calibration method for inverse geometry computed tomography using P-matrices.

    PubMed

    Slagowski, Jordan M; Dunkerley, David A P; Hatt, Charles R; Speidel, Michael A

    2016-02-27

    Accurate and artifact free reconstruction of tomographic images requires precise knowledge of the imaging system geometry. This work proposes a novel projection matrix (P-matrix) based calibration method to enable C-arm inverse geometry CT (IGCT). The method is evaluated for scanning-beam digital x-ray (SBDX), a C-arm mounted inverse geometry fluoroscopic technology. A helical configuration of fiducials is imaged at each gantry angle in a rotational acquisition. For each gantry angle, digital tomosynthesis is performed at multiple planes and a composite image analogous to a cone-beam projection is generated from the plane stack. The geometry of the C-arm, source array, and detector array is determined at each angle by constructing a parameterized 3D-to-2D projection matrix that minimizes the sum-of-squared deviations between measured and projected fiducial coordinates. Simulations were used to evaluate calibration performance with translations and rotations of the source and detector. In a geometry with 1 mm translation of the central ray relative to the axis-of-rotation and 1 degree yaw of the detector and source arrays, the maximum error in the recovered translational parameters was 0.4 mm and maximum error in the rotation parameter was 0.02 degrees. The relative root-mean-square error in a reconstruction of a numerical thorax phantom was 0.4% using the calibration method, versus 7.7% without calibration. Changes in source-detector-distance were the most challenging to estimate. Reconstruction of experimental SBDX data using the proposed method eliminated double contour artifacts present in a non-calibrated reconstruction. The proposed IGCT geometric calibration method reduces image artifacts when uncertainties exist in system geometry.
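
    The core fitting step, recovering a 3D-to-2D projection matrix that minimizes reprojection error over fiducial correspondences, can be sketched with the standard direct linear transform (DLT). All numbers below are invented for illustration and are not SBDX geometry.

```python
import numpy as np

# Invented ground-truth 3x4 projection matrix: intrinsics K times [R | t].
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
Rt = np.hstack([np.eye(3), [[0.05], [0.02], [2.0]]])
P_true = K @ Rt

# Fiducials on a helix, echoing the calibration phantom described above.
t = np.linspace(0.0, 4.0 * np.pi, 12)
X = np.column_stack([0.1 * np.cos(t), 0.1 * np.sin(t),
                     0.05 * t, np.ones_like(t)])   # homogeneous 3-D points
x = (P_true @ X.T).T
x = x[:, :2] / x[:, 2:3]                           # projected 2-D coordinates

# Direct linear transform: each correspondence gives two rows of A p = 0;
# the flattened P is the right singular vector of the smallest singular value.
A = []
for Xi, xi in zip(X, x):
    A.append(np.concatenate([Xi, np.zeros(4), -xi[0] * Xi]))
    A.append(np.concatenate([np.zeros(4), Xi, -xi[1] * Xi]))
P = np.linalg.svd(np.asarray(A))[2][-1].reshape(3, 4)

# Reprojection check: the recovered P reproduces the fiducial projections.
xp = (P @ X.T).T
xp = xp[:, :2] / xp[:, 2:3]
print(np.max(np.abs(xp - x)))   # essentially zero for noise-free data
```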

  5. A geometric calibration method for inverse geometry computed tomography using P-matrices

    NASA Astrophysics Data System (ADS)

    Slagowski, Jordan M.; Dunkerley, David A. P.; Hatt, Charles R.; Speidel, Michael A.

    2016-03-01

    Accurate and artifact free reconstruction of tomographic images requires precise knowledge of the imaging system geometry. This work proposes a novel projection matrix (P-matrix) based calibration method to enable C-arm inverse geometry CT (IGCT). The method is evaluated for scanning-beam digital x-ray (SBDX), a C-arm mounted inverse geometry fluoroscopic technology. A helical configuration of fiducials is imaged at each gantry angle in a rotational acquisition. For each gantry angle, digital tomosynthesis is performed at multiple planes and a composite image analogous to a cone-beam projection is generated from the plane stack. The geometry of the C-arm, source array, and detector array is determined at each angle by constructing a parameterized 3D-to-2D projection matrix that minimizes the sum-of-squared deviations between measured and projected fiducial coordinates. Simulations were used to evaluate calibration performance with translations and rotations of the source and detector. In a geometry with 1 mm translation of the central ray relative to the axis-of-rotation and 1 degree yaw of the detector and source arrays, the maximum error in the recovered translational parameters was 0.4 mm and maximum error in the rotation parameter was 0.02 degrees. The relative root-mean-square error in a reconstruction of a numerical thorax phantom was 0.4% using the calibration method, versus 7.7% without calibration. Changes in source-detector-distance were the most challenging to estimate. Reconstruction of experimental SBDX data using the proposed method eliminated double contour artifacts present in a non-calibrated reconstruction. The proposed IGCT geometric calibration method reduces image artifacts when uncertainties exist in system geometry.

  6. A geometric calibration method for inverse geometry computed tomography using P-matrices

    PubMed Central

    Slagowski, Jordan M.; Dunkerley, David A. P.; Hatt, Charles R.; Speidel, Michael A.

    2016-01-01

    Accurate and artifact free reconstruction of tomographic images requires precise knowledge of the imaging system geometry. This work proposes a novel projection matrix (P-matrix) based calibration method to enable C-arm inverse geometry CT (IGCT). The method is evaluated for scanning-beam digital x-ray (SBDX), a C-arm mounted inverse geometry fluoroscopic technology. A helical configuration of fiducials is imaged at each gantry angle in a rotational acquisition. For each gantry angle, digital tomosynthesis is performed at multiple planes and a composite image analogous to a cone-beam projection is generated from the plane stack. The geometry of the C-arm, source array, and detector array is determined at each angle by constructing a parameterized 3D-to-2D projection matrix that minimizes the sum-of-squared deviations between measured and projected fiducial coordinates. Simulations were used to evaluate calibration performance with translations and rotations of the source and detector. In a geometry with 1 mm translation of the central ray relative to the axis-of-rotation and 1 degree yaw of the detector and source arrays, the maximum error in the recovered translational parameters was 0.4 mm and maximum error in the rotation parameter was 0.02 degrees. The relative root-mean-square error in a reconstruction of a numerical thorax phantom was 0.4% using the calibration method, versus 7.7% without calibration. Changes in source-detector-distance were the most challenging to estimate. Reconstruction of experimental SBDX data using the proposed method eliminated double contour artifacts present in a non-calibrated reconstruction. The proposed IGCT geometric calibration method reduces image artifacts when uncertainties exist in system geometry. PMID:27375313

  7. Data-Driven Multimodal Sleep Apnea Events Detection: Synchrosqueezing Transform Processing and Riemannian Geometry Classification Approaches.

    PubMed

    Rutkowski, Tomasz M

    2016-07-01

    A novel multimodal and bio-inspired approach to biomedical signal processing and classification is presented in the paper. This approach allows for automatic semantic labeling (interpretation) of sleep apnea events based on the proposed data-driven biomedical signal processing and classification. The presented signal processing and classification methods have already been successfully applied to real-time unimodal brainwave (EEG only) decoding in brain-computer interfaces developed by the author. In the current project, very encouraging results are obtained using multimodal biomedical (brainwave and peripheral physiological) signals in a unified processing approach allowing for automatic semantic data description. The results thus support the hypothesis that the data-driven and bio-inspired signal processing approach is valid for medical data semantic interpretation based on machine-learning classification of sleep apnea events.

  8. Aging adult skull vaults by applying the concept of fractal geometry to high-resolution computed tomography images.

    PubMed

    Obert, Martin; Seyfried, Maren; Schumacher, Falk; Krombach, Gabriele A; Verhoff, Marcel A

    2014-09-01

    Aging human remains is a critical issue in anthropology and forensic medicine, and the search for accurate, new age-estimation methods is ongoing. In our study, we therefore explored a new approach to investigate a possible correlation between age-at-death (aad) and geometric irregularities in the bone structure of human skull caps. We applied the concept of fractal geometry and fractal dimension D analysis to describe heterogeneity within the bone structure. A high-resolution flat-panel computed tomography scanner (eXplore Locus Ultra) was used to obtain 229,500 images from 221 male and 120 female (total 341) European human skulls. Automated image analysis software was developed to evaluate the fractal dimension D using the mass-radius method. The frontal and occipital portions of the skull caps of adult females and males were investigated separately. The age dependence of the fractal dimension D was studied by correlation analysis, and the prediction accuracy of age-at-death (aad) estimates for individual observations was calculated. D values for human skull caps scatter strongly as a function of age. We found sex-dependent correlation coefficients (CC) between D and age for adults (females CC=-0.67; males CC=-0.05). Prediction errors for aad estimates for individual observations were in the range of ±18 years at a 75% confidence interval. The detailed quantitative description of age-dependent irregularities in the bone microarchitecture of skull vaults through fractal dimension analysis does not, as we had hoped, enable a new aging method: severe scattering of the data leads to an estimation error that is too great for this method to be of practical relevance in aad estimates. We did, however, disclose an interesting sex difference. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
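
    The mass-radius method used above has a compact generic form: count the "mass" of the structure inside growing circles around a center and take D as the log-log slope of mass versus radius. The sketch below (an illustration, not the study's software) validates the estimator on a solid disk, for which D should be close to 2.

```python
import numpy as np

def mass_radius_dimension(img, center, radii):
    """Mass-radius estimate of the fractal dimension D: the slope of
    log(mass within radius r) versus log(r)."""
    yy, xx = np.indices(img.shape)
    dist = np.hypot(yy - center[0], xx - center[1])
    masses = [img[dist <= r].sum() for r in radii]
    slope, _ = np.polyfit(np.log(radii), np.log(masses), 1)
    return slope

# Sanity check on a filled disk: a solid 2-D object should give D close to 2.
n = 512
yy, xx = np.indices((n, n))
disk = (np.hypot(yy - n / 2, xx - n / 2) <= 200).astype(float)
radii = np.arange(20, 200, 20)
D = mass_radius_dimension(disk, (n / 2, n / 2), radii)
print(D)   # ~2 for a solid disk
```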

  9. Using Dynamic Geometry and Computer Algebra Systems in Problem Based Courses for Future Engineers

    ERIC Educational Resources Information Center

    Tomiczková, Svetlana; Lávicka, Miroslav

    2015-01-01

    A modern trend in formulating the curriculum of a geometry course at technical universities is to start from a real-life problem originating in technical praxis and subsequently to define which geometric theories and skills are necessary for solving it. Nowadays, interactive and dynamic geometry software plays a more and more…

  11. A computational approach to negative priming

    NASA Astrophysics Data System (ADS)

    Schrobsdorff, H.; Ihrke, M.; Kabisch, B.; Behrendt, J.; Hasselhorn, M.; Herrmann, J. Michael

    2007-09-01

    Priming is characterized by a sensitivity of reaction times to the sequence of stimuli in psychophysical experiments. The reduction of reaction time observed in positive priming is well known and experimentally understood (Scarborough et al., J. Exp. Psychol.: Hum. Percept. Perform., 3, pp. 1-17, 1977). Negative priming, the opposite effect, is experimentally less tangible (Fox, Psychonom. Bull. Rev., 2, pp. 145-173, 1995) and depends sensitively on subtle parameter changes such as the response-stimulus interval. The sensitivity of the negative priming effect bears great potential for applications in research fields such as memory, selective attention, and ageing. We develop and analyse a computational realization, CISAM, of a recent psychological model for action decision making, the ISAM (Kabisch, PhD thesis, Friedrich-Schiller-Universität, 2003), which is sensitive to priming conditions. With the dynamical systems approach of the CISAM, we show that a single adaptive threshold mechanism is sufficient to explain both positive and negative priming effects. This is achieved by comparing results obtained by the computational modelling with experimental data from our laboratory. The implementation provides a rich base from which testable predictions can be derived, e.g. with respect to hitherto untested stimulus combinations such as single-object trials.
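    The threshold idea can be caricatured with a leaky accumulator; this sketch is not the actual ISAM/CISAM dynamics, and the parameters and residual-activation mechanism are illustrative assumptions only:

```python
def reaction_time(activation0, inp=1.0, theta=0.8, tau=10.0, dt=0.01):
    """Toy leaky accumulator: activation relaxes toward the input and a
    response is made once it crosses the threshold theta. Residual
    activation from the previous trial (activation0) stands in for a
    priming condition."""
    x, t = activation0, 0.0
    while x < theta:
        x += dt * (inp - x) / tau
        t += dt
    return t

rt_unprimed = reaction_time(activation0=0.0)  # no residual activation
rt_primed = reaction_time(activation0=0.3)    # target was recently active
```

    In this caricature, residual activation shortens the time to threshold, reproducing the positive-priming speedup; the model of the paper additionally obtains slowdowns (negative priming) from the adaptation of the threshold itself.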

  12. Rapid Geometry Creation for Computer-Aided Engineering Parametric Analyses: A Case Study Using ComGeom2 for Launch Abort System Design

    NASA Technical Reports Server (NTRS)

    Hawke, Veronica; Gage, Peter; Manning, Ted

    2007-01-01

    ComGeom2, a tool developed to generate Common Geometry representation for multidisciplinary analysis, has been used to create a large set of geometries for use in a design study requiring analysis by two computational codes. This paper describes the process used to generate the large number of configurations and suggests ways to further automate the process and make it more efficient for future studies. The design geometry for this study is the launch abort system of the NASA Crew Launch Vehicle.

  13. CasimirSim - A Tool to Compute Casimir Polder Forces for Nontrivial 3D Geometries

    SciTech Connect

    Sedmik, Rene; Tajmar, Martin

    2007-01-30

    The so-called Casimir effect is one of the most interesting macro-quantum effects. Negligible on the macro-scale, it becomes a governing factor below structure sizes of 1 μm, where it typically accounts for 100 kN m⁻². The force does not depend on gravity or electric charge but solely on material properties and geometrical shape. This makes the effect a strong candidate for micro- and nano-electromechanical devices (M(N)EMS). Despite a long history of research, the theory lacks a uniform description valid for arbitrary geometries, which retards technical application. We present an advanced state-of-the-art numerical tool that overcomes the usual geometrical restrictions and is capable of calculating arbitrary 3D geometries by utilizing the Casimir Polder approximation for the Casimir force.
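    One common way to approximate Casimir forces for arbitrary shapes is pairwise summation of the retarded Casimir-Polder pair potential over discretized volumes; the abstract does not state that CasimirSim works exactly this way, so the following is a generic, hedged sketch (the coefficient c3, units, and cube geometry are placeholders):

```python
import itertools

def casimir_polder_energy(cells_a, cells_b, c3=1.0):
    """Pairwise-additive estimate of the interaction energy between two
    bodies discretized into volume cells, using the retarded
    Casimir-Polder pair potential U(r) = -C / r**7 (C is an arbitrary
    placeholder coefficient here)."""
    energy = 0.0
    for (xa, ya, za), (xb, yb, zb) in itertools.product(cells_a, cells_b):
        r2 = (xa - xb) ** 2 + (ya - yb) ** 2 + (za - zb) ** 2
        energy -= c3 / r2 ** 3.5
    return energy

def cube(origin, n=4, h=0.25):
    x0, y0, z0 = origin
    return [(x0 + i * h, y0 + j * h, z0 + k * h)
            for i in range(n) for j in range(n) for k in range(n)]

# The attraction weakens rapidly as the gap between the two cubes grows.
e_near = casimir_polder_energy(cube((0, 0, 0)), cube((2, 0, 0)))
e_far = casimir_polder_energy(cube((0, 0, 0)), cube((4, 0, 0)))
```

    Pairwise summation neglects many-body screening, which is one reason uniform descriptions for arbitrary geometries remain difficult.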

  14. Effect of phenolic radicals on the geometry and electronic structure of DNA base pairs: computational study

    NASA Astrophysics Data System (ADS)

    Zarei, Mohammad; Seif, Abdolvahab; Azizi, Khaled; Zarei, Mohanna; Bahrami, Jamil

    2016-04-01

    In this paper, we investigate the reactions of hydroxyl, phenyl and phenoxy radicals with DNA base pairs by density functional theory (DFT) calculations. The influence of solvation on the mechanism is also examined with the same DFT approach under a continuum solvation model. The results show that hydroxyl, phenyl and phenoxy radicals increase the length of the nearest hydrogen bond of the adjacent DNA base pair, accompanied by a decrease in the length of the furthest hydrogen bond. The radicals also influence the dihedral angle between the DNA bases. According to the results, hydrogen bond lengths between AT and GC base pairs are longer in water solvent than in vacuum. All of the radicals examined influence the structure and geometry of AT and GC base pairs, but the phenoxy radical shows more influence on the geometry and electronic properties of the DNA base pairs than the phenyl and hydroxyl radicals.

  15. CasimirSim — A Tool to Compute Casimir Polder Forces for Nontrivial 3D Geometries

    NASA Astrophysics Data System (ADS)

    Sedmik, René; Tajmar, Martin

    2007-01-01

    The so-called Casimir effect is one of the most interesting macro-quantum effects. Negligible on the macro-scale, it becomes a governing factor below structure sizes of 1 μm, where it typically accounts for 100 kN m⁻². The force does not depend on gravity or electric charge but solely on material properties and geometrical shape. This makes the effect a strong candidate for micro- and nano-electromechanical devices (M(N)EMS). Despite a long history of research, the theory lacks a uniform description valid for arbitrary geometries, which retards technical application. We present an advanced state-of-the-art numerical tool that overcomes the usual geometrical restrictions and is capable of calculating arbitrary 3D geometries by utilizing the Casimir Polder approximation for the Casimir force.

  16. Computer-aided evaluation of the railway track geometry on the basis of satellite measurements

    NASA Astrophysics Data System (ADS)

    Specht, Cezary; Koc, Władysław; Chrostowski, Piotr

    2016-05-01

    In recent years there has been intensive development worldwide of GNSS (Global Navigation Satellite Systems) measurement techniques and their extension to applications in surveying and navigation. Moreover, in many countries a rising trend in the development of rail transportation systems has been noticed. In this paper, a method of railway track geometry assessment based on mobile satellite measurements is presented, together with results from its implementation. The investigation process is divided into two phases. The first phase is the mobile GNSS survey and the analysis of the obtained data; the second is the analysis of the track geometry using the flat (projected) coordinates from the survey. Visualization of the measured route, separation and quality assessment of the uniform geometric elements (straight sections, arcs), and identification of the track polygon (main directions and intersection angles) are discussed and illustrated by a calculation example.
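    Separating straight sections from arcs in the flat coordinates can be done by estimating curvature from consecutive point triples (Menger curvature); this is a generic sketch, not the authors' algorithm, and the tolerance and radii are hypothetical:

```python
import math

def menger_curvature(p, q, r):
    """Curvature of the circle through three points (1/radius):
    ~0 on a straight section, ~1/R on an arc of radius R."""
    a = math.dist(p, q)
    b = math.dist(q, r)
    c = math.dist(p, r)
    # Twice the (unsigned) triangle area via the cross product.
    cross = (q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0])
    return 2.0 * abs(cross) / (a * b * c)

def classify(points, tol=1e-3):
    """Label each interior survey point as lying on a straight or an arc."""
    labels = []
    for p, q, r in zip(points, points[1:], points[2:]):
        labels.append('arc' if menger_curvature(p, q, r) > tol else 'straight')
    return labels

R = 500.0  # hypothetical 500 m curve radius
arc = [(R*math.cos(t), R*math.sin(t)) for t in (0.00, 0.01, 0.02, 0.03)]
line = [(float(i), 2.0*i) for i in range(4)]
```

    Real survey data would need smoothing first, since GNSS noise inflates the pointwise curvature estimate.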

  17. Examining the Impact of an Integrative Method of Using Technology on Students' Achievement and Efficiency of Computer Usage and on Pedagogical Procedure in Geometry

    ERIC Educational Resources Information Center

    Gurevich, Irina; Gurev, Dvora

    2012-01-01

    In the current study we follow the development of the pedagogical procedure for the course "Constructions in Geometry" that resulted from using dynamic geometry software (DGS), where the computer became an integral part of the educational process. Furthermore, we examine the influence of integrating DGS into the course on students' achievement and…

  18. Computer Metaphors: Approaches to Computer Literacy for Educators.

    ERIC Educational Resources Information Center

    Peelle, Howard A.

    Because metaphors offer ready perspectives for comprehending something new, this document examines various metaphors educators might use to help students develop computer literacy. Metaphors described are the computer as person (a complex system worthy of respect), tool (perhaps the most powerful and versatile known to humankind), brain (both…

  19. An analytical approach to bistable biological circuit discrimination using real algebraic geometry.

    PubMed

    Siegal-Gaskins, Dan; Franco, Elisa; Zhou, Tiffany; Murray, Richard M

    2015-07-06

    Biomolecular circuits with two distinct and stable steady states have been identified as essential components in a wide range of biological networks, with a variety of mechanisms and topologies giving rise to their important bistable property. Understanding the differences between circuit implementations is an important question, particularly for the synthetic biologist faced with determining which bistable circuit design out of many is best for their specific application. In this work we explore the applicability of Sturm's theorem, a tool from nineteenth-century real algebraic geometry, to comparing 'functionally equivalent' bistable circuits without the need for numerical simulation. We first consider two genetic toggle variants and two different positive feedback circuits, and show how specific topological properties present in each type of circuit can serve to increase the size of the regions of parameter space in which they function as switches. We then demonstrate that a single competitive monomeric activator added to a purely monomeric (and otherwise monostable) mutual repressor circuit is sufficient for bistability. Finally, we compare our approach with the Routh-Hurwitz method and derive consistent, yet more powerful, parametric conditions. The predictive power and ease of use of Sturm's theorem demonstrated in this work suggest that algebraic geometric techniques may be underused in biomolecular circuit analysis.
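    Sturm's theorem counts the distinct real roots of a polynomial in an interval from sign changes along the Sturm chain; a minimal, self-contained sketch of generic root counting (not the circuit-specific equilibrium polynomials of the paper):

```python
def polydiv(a, b):
    """Polynomial long division (coefficients highest degree first);
    returns only the remainder, which is all the Sturm chain needs."""
    a = list(a)
    while len(a) >= len(b):
        c = a[0] / b[0]
        a = [ai - c * bi for ai, bi in zip(a, b + [0.0] * (len(a) - len(b)))][1:]
    while a and abs(a[0]) < 1e-12:
        a = a[1:]
    return a

def sturm_chain(p):
    """p0 = p, p1 = p', then p_{k+1} = -rem(p_{k-1}, p_k)."""
    chain = [list(p), [c * (len(p) - 1 - i) for i, c in enumerate(p[:-1])]]
    while len(chain[-1]) > 1:
        r = polydiv(chain[-2], chain[-1])
        if not r:
            break
        chain.append([-c for c in r])
    return chain

def sign_changes(chain, x):
    vals = [sum(c * x ** (len(p) - 1 - i) for i, c in enumerate(p)) for p in chain]
    signs = [v for v in vals if abs(v) > 1e-12]
    return sum(1 for u, v in zip(signs, signs[1:]) if u * v < 0)

def real_roots_in(p, lo, hi):
    """Number of distinct real roots of p in (lo, hi] by Sturm's theorem."""
    chain = sturm_chain(p)
    return sign_changes(chain, lo) - sign_changes(chain, hi)

# x^3 - 3x + 1 has three real roots, all inside (-2, 2).
n_roots = real_roots_in([1.0, 0.0, -3.0, 1.0], -2.0, 2.0)
```

    Applied to the steady-state polynomial of a candidate circuit, a root count of three in the biologically relevant interval is the signature of bistability (two stable states plus an unstable one).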

  20. Along-strike complex geometry of subduction zones - an experimental approach

    NASA Astrophysics Data System (ADS)

    Midtkandal, I.; Gabrielsen, R. H.; Brun, J.-P.; Huismans, R.

    2012-04-01

    Recent knowledge of the great geometric and dynamic complexity in subduction zones, combined with new capacity for analogue mechanical and numerical modeling, has sparked a number of studies on subduction processes. Not unexpectedly, such models reveal a complex relation between the physical conditions during subduction initiation, the strength profile of the subducting plate, the thermo-dynamic conditions and the subduction zone geometries. One rare geometrical complexity of subduction that remains particularly controversial is the potential for polarity shifts in subduction systems. The present experiments were therefore performed to explore the influence of architecture, strength and strain velocity on complexities in subduction zones, focusing on along-strike variation of the collision zone. Of particular concern were the consequences for the geometry and kinematics of the transition zones between segments of contrasting subduction direction. Although the model design was to some extent inspired by the configuration along the Iberian-Eurasian suture zone, the results are also of significance for other orogens with complex along-strike geometries. The experiments were set up to explore the initial state of subduction only, and were accordingly terminated before slab subduction occurred. The model was built from layers of silicone putty and sand, tailored to simulate the assumed lithospheric geometries and strength-viscosity profiles along the plate boundary zone prior to contraction, and comprises two 'continental' plates separated by a thinner 'oceanic' plate that represents the narrow seaway. The experiment floats on a substrate of sodium polytungstate, representing the mantle. 24 experimental runs were performed, varying the thickness (and thus strength) of the upper mantle lithosphere, as well as the strain rate. Keeping all other parameters identical for each experiment, the models were shortened by a computer-controlled jackscrew while time-lapse images were

  1. Approaches to Classroom-Based Computational Science.

    ERIC Educational Resources Information Center

    Guzdial, Mark

    Computational science includes the use of computer-based modeling and simulation to define and test theories about scientific phenomena. The challenge for educators is to develop techniques for implementing computational science in the classroom. This paper reviews some previous work on the use of simulation alone (without modeling), modeling…

  2. GENIE - Generation of computational geometry-grids for internal-external flow configurations

    NASA Technical Reports Server (NTRS)

    Soni, B. K.

    1988-01-01

    Progress realized in the development of the master geometry-grid generation code GENIE is presented. The grid refinement process is enhanced by strategies that utilize Bézier curves/surfaces and splines along with a weighted transfinite interpolation technique, and by formulating a new forcing function for the elliptic solver based on the minimization of a non-orthogonality functional. A two-step grid adaptation procedure is developed by optimally blending adaptive weightings with the weighted transfinite interpolation technique. Examples of 2D and 3D grids are provided to illustrate the success of these methods.
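    Transfinite interpolation blends the four boundary curves of a patch into interior grid points; a minimal unweighted 2D (Coons) version, for illustration only and not GENIE's weighted variant:

```python
def tfi(bottom, top, left, right, xi, eta):
    """2-D transfinite (Coons) interpolation: blend the four boundary
    curves into an interior grid point at parameters (xi, eta) in [0,1]^2."""
    def lerp(p, q, s):
        return tuple(a + s * (b - a) for a, b in zip(p, q))
    u = lerp(bottom(xi), top(xi), eta)   # blend bottom/top curves
    v = lerp(left(eta), right(eta), xi)  # blend left/right curves
    # Subtract the bilinear corner contribution, counted twice above.
    c = lerp(lerp(bottom(0), top(0), eta), lerp(bottom(1), top(1), eta), xi)
    return tuple(a + b - d for a, b, d in zip(u, v, c))

# Unit-square boundaries: TFI reproduces a uniform grid exactly.
bottom = lambda s: (s, 0.0)
top    = lambda s: (s, 1.0)
left   = lambda t: (0.0, t)
right  = lambda t: (1.0, t)
pt = tfi(bottom, top, left, right, 0.25, 0.5)
```

    Replacing the linear blending functions with adaptive weightings is what turns this into the weighted TFI used for grid adaptation.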

  3. Computational Approach for Epitaxial Polymorph Stabilization through Substrate Selection

    SciTech Connect

    Ding, Hong; Dwaraknath, Shyam S.; Garten, Lauren; Ndione, Paul; Ginley, David; Persson, Kristin A.

    2016-05-25

    With the ultimate goal of finding new polymorphs through targeted synthesis conditions and techniques, we outline a computational framework to select optimal substrates for epitaxial growth using first-principles calculations of formation energies, elastic strain energy, and topological information. To demonstrate the approach, we study the stabilization of metastable VO2 compounds, which provide a rich chemical and structural polymorph space. We find that common polymorph statistics, lattice matching, and energy-above-hull considerations recommend homostructural growth on TiO2 substrates, where the VO2 brookite phase would be preferentially grown on the a-c TiO2 brookite plane while the columbite and anatase structures favor the a-b plane on the respective TiO2 phases. Overall, we find that a model which incorporates geometric unit-cell area matching between the substrate and the target film, as well as the resulting strain energy density of the film, provides qualitative agreement with experimental observations for the heterostructural growth of the known VO2 polymorphs: rutile, A and B phases. The minimal interfacial geometry matching and estimated strain energy criteria suggest several substrates and substrate-film orientations for the heterostructural growth of the hitherto hypothetical anatase, brookite, and columbite polymorphs. These criteria serve as preliminary guidance for experimental efforts to stabilize new materials and/or polymorphs through epitaxy. The screening algorithm is being integrated within the Materials Project online framework, and the data are hence publicly available.
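    The strain-energy criterion can be caricatured as ranking substrates by the square of the coherent misfit strain; the substrate names and lattice parameters below are hypothetical placeholders, not values from the paper:

```python
def misfit_strain(a_film, a_substrate):
    """Biaxial misfit strain when the film is coherently strained to the
    substrate's in-plane lattice parameter."""
    return (a_substrate - a_film) / a_film

def rank_substrates(a_film, candidates, stiffness=1.0):
    """Rank candidate substrates by an elastic-energy-like score ~ eps^2,
    a crude stand-in for the paper's strain-energy-density criterion."""
    scored = [(name, stiffness * misfit_strain(a_film, a) ** 2)
              for name, a in candidates.items()]
    return sorted(scored, key=lambda kv: kv[1])

# Hypothetical lattice parameters (angstroms), for illustration only.
ranking = rank_substrates(4.55, {'sub_A': 4.59, 'sub_B': 4.75, 'sub_C': 4.50})
```

    The full framework additionally matches unit-cell areas and interface topology, so the lowest-strain substrate is not automatically the recommended one.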

  4. Computational Approach for Epitaxial Polymorph Stabilization through Substrate Selection

    DOE PAGES

    Ding, Hong; Dwaraknath, Shyam S.; Garten, Lauren; ...

    2016-05-04

    With the ultimate goal of finding new polymorphs through targeted synthesis conditions and techniques, we outline a computational framework to select optimal substrates for epitaxial growth using first-principles calculations of formation energies, elastic strain energy, and topological information. To demonstrate the approach, we study the stabilization of metastable VO2 compounds, which provide a rich chemical and structural polymorph space. Here, we find that common polymorph statistics, lattice matching, and energy-above-hull considerations recommend homostructural growth on TiO2 substrates, where the VO2 brookite phase would be preferentially grown on the a-c TiO2 brookite plane while the columbite and anatase structures favor the a-b plane on the respective TiO2 phases. Overall, we find that a model which incorporates geometric unit-cell area matching between the substrate and the target film, as well as the resulting strain energy density of the film, provides qualitative agreement with experimental observations for the heterostructural growth of the known VO2 polymorphs: rutile, A and B phases. The minimal interfacial geometry matching and estimated strain energy criteria suggest several substrates and substrate-film orientations for the heterostructural growth of the hitherto hypothetical anatase, brookite, and columbite polymorphs. Our criteria serve as preliminary guidance for experimental efforts to stabilize new materials and/or polymorphs through epitaxy. The screening algorithm is being integrated within the Materials Project online framework, and the data are hence publicly available.

  5. Nuclear-relaxed elastic and piezoelectric constants of materials: Computational aspects of two quantum-mechanical approaches.

    PubMed

    Erba, Alessandro; Caglioti, Dominique; Zicovich-Wilson, Claudio Marcelo; Dovesi, Roberto

    2017-02-15

    Two alternative approaches for the quantum-mechanical calculation of the nuclear-relaxation term of the elastic and piezoelectric tensors of crystalline materials are illustrated and their computational aspects discussed: (i) a numerical approach based on the geometry optimization of atomic positions at strained lattice configurations, and (ii) a quasi-analytical approach based on the evaluation of the force- and displacement-response internal-strain tensors combined with the interatomic force-constant matrix. The two schemes are compared as regards both their computational accuracy and performance. The latter approach, not being affected by the many numerical parameters and procedures of a typical quasi-Newton geometry optimizer, constitutes a more reliable and robust means of evaluating such properties, at a reduced computational cost for most crystalline systems. © 2016 Wiley Periodicals, Inc.
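    Approach (i) reduces, in miniature, to differentiating an energy surface with respect to strain; the toy quadratic energy below stands in for a relaxed ab initio energy (the stiffness value is arbitrary):

```python
def elastic_constant(energy, h=1e-3):
    """Numerical approach in miniature: estimate an elastic constant as
    the second derivative of energy with respect to strain, by central
    finite differences about the equilibrium (strain = 0)."""
    return (energy(h) - 2.0 * energy(0.0) + energy(-h)) / h ** 2

# Toy energy surface E(eps) = E0 + 0.5*C*eps^2 with C = 150 (arbitrary
# units); in the real scheme E(eps) is the energy after relaxing the
# internal atomic coordinates at each strained lattice configuration.
C_est = elastic_constant(lambda eps: -10.0 + 0.5 * 150.0 * eps ** 2)
```

    The sensitivity of this estimate to the step h and to incomplete geometry optimization is exactly the numerical fragility that motivates the quasi-analytical internal-strain route.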

  6. 3D geometry analysis of the medial meniscus--a statistical shape modeling approach.

    PubMed

    Vrancken, A C T; Crijns, S P M; Ploegmakers, M J M; O'Kane, C; van Tienen, T G; Janssen, D; Buma, P; Verdonschot, N

    2014-10-01

    The geometry-dependent functioning of the meniscus indicates that detailed knowledge of 3D meniscus geometry and its inter-subject variation is essential to the design of well-functioning, anatomically shaped meniscus replacements. Therefore, the aim of this study was to quantify 3D meniscus geometry and to determine whether variation in medial meniscus geometry is size- or shape-driven. We also performed a cluster analysis to identify distinct morphological groups of medial menisci and assessed whether meniscal geometry is gender-dependent. A statistical shape model was created containing the meniscus geometries of 35 subjects (20 females, 15 males) obtained from MR images. A principal component analysis was performed to determine the most important modes of geometry variation, and the characteristic changes per principal component were evaluated. Each meniscus from the original dataset was then reconstructed as a linear combination of principal components. This allowed the comparison of male and female menisci, and a cluster analysis to determine distinct morphological meniscus groups. Of the variation in medial meniscus geometry, 53.8% was found to be due to primarily size-related differences and 29.6% due to shape differences. Shape changes were most prominent in the cross-sectional plane, rather than in the transverse plane. Significant differences between male and female menisci were found only for principal component 1, which predominantly reflected size differences. The cluster analysis resulted in four clusters, yet these represented two statistically different meniscal shapes, as differences between clusters 1, 2 and 4 were present only for principal component 1. This study illustrates that differences in meniscal geometry cannot be explained by scaling only, but that different meniscal shapes can be distinguished. Functional analysis, e.g. through finite element modeling, is required to assess whether these distinct shapes actually influence
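    The statistical-shape-model pipeline (flatten landmarks, centre, PCA) can be sketched on toy contours; here size is the only variation mode by construction, so the first principal component should capture nearly all variance (synthetic data, not the MR-derived menisci):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "menisci": one template contour scaled per subject, so size is the
# dominant mode of variation (real shape modes add non-uniform changes).
t = np.linspace(0.0, np.pi, 30)
template = np.column_stack([np.cos(t), 0.4 * np.sin(t)])
shapes = np.array([s * template for s in rng.uniform(0.8, 1.2, 35)])

# Statistical shape model: flatten landmarks, centre, then PCA via SVD.
X = shapes.reshape(35, -1)
Xc = X - X.mean(axis=0)
_, sing, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = sing ** 2 / np.sum(sing ** 2)
```

    On real data the landmarks must first be brought into correspondence and aligned (e.g. Procrustes); the split between size-like and shape-like components (53.8% versus 29.6% in the study) is then read off from `explained` and the loadings in `Vt`.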

  7. 3D geometry analysis of the medial meniscus – a statistical shape modeling approach

    PubMed Central

    Vrancken, A C T; Crijns, S P M; Ploegmakers, M J M; O'Kane, C; van Tienen, T G; Janssen, D; Buma, P; Verdonschot, N

    2014-01-01

    The geometry-dependent functioning of the meniscus indicates that detailed knowledge of 3D meniscus geometry and its inter-subject variation is essential to the design of well-functioning, anatomically shaped meniscus replacements. Therefore, the aim of this study was to quantify 3D meniscus geometry and to determine whether variation in medial meniscus geometry is size- or shape-driven. We also performed a cluster analysis to identify distinct morphological groups of medial menisci and assessed whether meniscal geometry is gender-dependent. A statistical shape model was created containing the meniscus geometries of 35 subjects (20 females, 15 males) obtained from MR images. A principal component analysis was performed to determine the most important modes of geometry variation, and the characteristic changes per principal component were evaluated. Each meniscus from the original dataset was then reconstructed as a linear combination of principal components. This allowed the comparison of male and female menisci, and a cluster analysis to determine distinct morphological meniscus groups. Of the variation in medial meniscus geometry, 53.8% was found to be due to primarily size-related differences and 29.6% due to shape differences. Shape changes were most prominent in the cross-sectional plane, rather than in the transverse plane. Significant differences between male and female menisci were found only for principal component 1, which predominantly reflected size differences. The cluster analysis resulted in four clusters, yet these represented two statistically different meniscal shapes, as differences between clusters 1, 2 and 4 were present only for principal component 1. This study illustrates that differences in meniscal geometry cannot be explained by scaling only, but that different meniscal shapes can be distinguished. Functional analysis, e.g. through finite element modeling, is required to assess whether these distinct shapes actually influence

  8. A Computer Code for Swirling Turbulent Axisymmetric Recirculating Flows in Practical Isothermal Combustor Geometries

    NASA Technical Reports Server (NTRS)

    Lilley, D. G.; Rhode, D. L.

    1982-01-01

    A primitive-variable (pressure-velocity) finite-difference computer code was developed to predict swirling, recirculating, inert turbulent flows in axisymmetric combustors in general, and for application to a specific idealized combustion chamber with sudden or gradual expansion. The technique involves a staggered grid system for axial and radial velocities, a line relaxation procedure for efficient solution of the equations, a two-equation k-epsilon turbulence model, a stairstep boundary representation of the expansion flow, and realistic accommodation of swirl effects. A user's manual is presented, dealing with the computational problem and showing how the mathematical basis and computational scheme may be translated into a computer program. A flow chart, FORTRAN IV listing, notes about various subroutines, and a user's guide are supplied as an aid to prospective users of the code.
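    Line relaxation sweeps the grid solving one line of unknowns at a time with a direct tridiagonal (Thomas-algorithm) solve; a minimal version of that inner solver, shown on a 1-D Poisson-like line (the coefficients are illustrative):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, main diagonal b,
    super-diagonal c and right-hand side d, by forward elimination and
    back substitution (a[0] and c[-1] are unused)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Discrete line -x[i-1] + 2 x[i] - x[i+1] = d[i]; this one has solution 1,1,1,1.
x = thomas(a=[0, -1, -1, -1], b=[2, 2, 2, 2], c=[-1, -1, -1, 0], d=[1, 0, 0, 1])
```

    In a line-relaxation sweep the off-line neighbours are lagged into d, so each grid line costs only this O(n) solve.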

  9. Effective leaf area index retrieving from terrestrial point cloud data: coupling computational geometry application and Gaussian mixture model clustering

    NASA Astrophysics Data System (ADS)

    Jin, S.; Tamura, M.; Susaki, J.

    2014-09-01

    Leaf area index (LAI) is one of the most important structural parameters in forestry studies, characterizing the interaction of green vegetation with solar illumination. Classically, LAI has been understood by treating the green canopy as an integration of horizontal leaf layers. Since the development of multi-angle remote sensing techniques, LAI must be considered with respect to the observation geometry; effective LAI formulates the leaf-light interaction virtually and precisely. Retrieving LAI or effective LAI from remotely sensed data has therefore been a challenge over the past decades. Laser scanning can provide accurate surface-echo coordinates at densely scanned intervals, and density-based statistical algorithms for analyzing the voluminous 3-D point data are one focus of laser scanning applications. Computational geometry also provides mature tools for point cloud data (PCD) processing and analysis. In this paper, the authors investigate the feasibility of a new application for retrieving the effective LAI of an isolated broadleaf tree. A simplified curvature was calculated for each point in order to remove non-photosynthetic tissues. The PCD were then discretized into voxels and clustered using a Gaussian mixture model, and the area of each cluster was calculated using computational geometry methods. To validate the application, we estimated the leaf area of an indoor plant; the correlation coefficient between calculation and measurement was 98.28%. We finally calculated the effective LAI of the tree with 6 × 6 assumed observation directions.
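    The per-cluster area step relies on standard computational-geometry primitives; a minimal stand-in is a 2-D convex hull area (monotone chain plus shoelace), shown here on toy points (the study itself works with 3-D point clusters):

```python
def hull_area(points):
    """Area of the 2-D convex hull of a point set: build the hull with
    the monotone-chain algorithm, then apply the shoelace formula."""
    pts = sorted(set(points))
    if len(pts) < 3:
        return 0.0

    def half(seq):
        chain = []
        for p in seq:
            while len(chain) >= 2 and (
                (chain[-1][0] - chain[-2][0]) * (p[1] - chain[-2][1])
                - (chain[-1][1] - chain[-2][1]) * (p[0] - chain[-2][0])) <= 0:
                chain.pop()
            chain.append(p)
        return chain

    hull = half(pts)[:-1] + half(pts[::-1])[:-1]
    return 0.5 * abs(sum(x1 * y2 - x2 * y1 for (x1, y1), (x2, y2)
                         in zip(hull, hull[1:] + hull[:1])))

# A unit square with an interior point: the interior point is discarded
# by the hull, and the area is exactly 1.
area = hull_area([(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)])
```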

  10. Computational issues of importance to the inverse recovery of epicardial potentials in a realistic heart-torso geometry.

    PubMed

    Messinger-Rapport, B J; Rudy, Y

    1989-11-01

    In vitro data from a realistic-geometry electrolytic tank were used to demonstrate the consequences of computational issues critical to the ill-posed inverse problem in electrocardiography. The boundary element method was used to discretize the relationship between the body surface potentials and epicardial cage potentials. Variants of Tikhonov regularization were used to stabilize the inversion of the body surface potentials in order to reconstruct the epicardial surface potentials. The computational issues investigated were (1) computation of the regularization parameter; (2) effects of inaccuracy in locating the position of the heart; and (3) incorporation of a priori information on the properties of epicardial potentials into the regularization methodology. Two methods were suggested by which a priori information could be incorporated into the regularization formulation: (1) use of an estimate of the epicardial potential distribution everywhere on the surface and (2) use of regional bounds on the excursion of the potential. Results indicate that the a posteriori technique called CRESO, developed by Colli Franzone and coworkers, most consistently derives the regularization parameter closest to the optimal parameter for this experimental situation. The sensitivity of the inverse computation in a realistic-geometry torso to inaccuracies in estimating heart position is consistent with results from the eccentric spheres model; errors of 1 cm are well tolerated, but errors of 2 cm or greater result in a loss of position and amplitude information. Finally, estimates and bounds based on accurate, known information successfully lower the relative error associated with the inverse and have the potential to significantly enhance the amplitude and feature position information obtainable from the inverse-reconstructed epicardial potential map.
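    Zeroth-order Tikhonov regularization of an ill-conditioned linear inverse problem can be sketched as follows; the Hilbert matrix is a generic stand-in for the torso transfer matrix, and the noise and λ are illustrative (the CRESO criterion for choosing λ is not implemented here):

```python
import numpy as np

def tikhonov_inverse(A, b, lam):
    """Zeroth-order Tikhonov regularization: minimize
    ||A x - b||^2 + lam * ||x||^2, i.e. x = (A^T A + lam I)^-1 A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Ill-conditioned toy "transfer matrix" (Hilbert matrix) standing in for
# the body-surface-to-epicardium relationship.
n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
b = A @ x_true
b_noisy = b + 1e-6 * np.array([1, -1] * (n // 2))  # fixed "measurement noise"

x_naive = np.linalg.solve(A, b_noisy)           # unregularized: blows up
x_reg = tikhonov_inverse(A, b_noisy, lam=1e-8)  # stabilized
```

    The entire practical difficulty, as the abstract stresses, is choosing λ: too small and the noise amplification returns, too large and genuine epicardial features are smoothed away.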

  11. Deterministic approach for unsteady rarefied flow simulations in complex geometries and its application to gas flows in microsystems

    NASA Astrophysics Data System (ADS)

    Chigullapalli, Sruti

    Micro-electro-mechanical systems (MEMS) are widely used in automotive, communications and consumer electronics applications, with microactuators, micro gyroscopes and microaccelerometers being just a few examples. However, in areas where high reliability is critical, such as aerospace and defense applications, very few MEMS technologies have been adopted so far. Further development of high-frequency microsystems such as resonators, RF MEMS, microturbines and pulsed-detonation microengines requires improved understanding of unsteady gas dynamics at the micro scale. Accurate computational simulation of such flows demands new approaches beyond the conventional formulations based on macroscopic constitutive laws, owing to the breakdown of the continuum hypothesis in the presence of significant non-equilibrium and rarefaction caused by large gradients and small scales, respectively. More generally, the motion of molecules in a gas is described by the kinetic Boltzmann equation, which is valid for arbitrary Knudsen numbers; however, due to the multidimensionality of the phase space and the complex non-linearity of the collision term, numerical solution of the Boltzmann equation is challenging for practical problems. In this thesis a fully deterministic, as opposed to statistical, finite-volume-based three-dimensional solution of the Boltzmann ES-BGK model kinetic equation is formulated to enable simulations of unsteady rarefied flows. The main goal of this research is to develop an unsteady rarefied solver integrated with the finite volume method (FVM) solver in MEMOSA (MEMS Overall Simulation Administrator), developed by PRISM (NNSA Center for Prediction of Reliability, Integrity and Survivability of Microsystems) at Purdue, and to apply it to study micro-scale gas damping. Formulation and verification of a finite volume method for an unsteady rarefied flow solver based on the Boltzmann ES-BGK equations in arbitrary three-dimensional geometries are presented. The solver is
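    The BGK-family collision models relax the distribution toward a local equilibrium at rate 1/τ; a space-homogeneous caricature with a fixed equilibrium shows the relaxation and mass conservation (the real ES-BGK solver evolves an anisotropic equilibrium self-consistently from the moments of f):

```python
def bgk_relax(f, f_eq, tau, dt, steps):
    """0-D (space-homogeneous) BGK relaxation: the discrete-velocity
    distribution f decays toward the equilibrium f_eq at rate 1/tau,
    which is the collision model underlying the ES-BGK solver."""
    for _ in range(steps):
        f = [fi + dt / tau * (fe - fi) for fi, fe in zip(f, f_eq)]
    return f

f0 = [0.2, 0.5, 0.3, 0.0]
feq = [0.1, 0.4, 0.4, 0.1]  # same total mass: collisions conserve mass
f = bgk_relax(f0, feq, tau=0.5, dt=0.01, steps=1000)
```

    Explicit integration like this constrains dt relative to τ; at very small Knudsen numbers (small τ) the collision term becomes stiff, which is one of the practical difficulties of deterministic kinetic solvers.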

  12. Tumor growth in complex, evolving microenvironmental geometries: A diffuse domain approach

    PubMed Central

    Chen, Ying; Lowengrub, John S.

    2014-01-01

    We develop a mathematical model of tumor growth in complex, dynamic microenvironments with active, deformable membranes. Using a diffuse domain approach, the complex domain is captured implicitly using an auxiliary function and the governing equations are appropriately modified, extended and solved in a larger, regular domain. The diffuse domain method enables us to develop an efficient numerical implementation that does not depend on the space dimension or the microenvironmental geometry. We model homotypic cell-cell adhesion and heterotypic cell-basement membrane (BM) adhesion with the latter being implemented via a membrane energy that models cell-BM interactions. We incorporate simple models of elastic forces and the degradation of the BM and ECM by tumor-secreted matrix degrading enzymes. We investigate tumor progression and BM response as a function of cell-BM adhesion and the stiffness of the BM. We find tumor sizes tend to be positively correlated with cell-BM adhesion since increasing cell-BM adhesion results in thinner, more elongated tumors. Prior to invasion of the tumor into the stroma, we find a negative correlation between tumor size and BM stiffness as the elastic restoring forces tend to inhibit tumor growth. In order to model tumor invasion of the stroma, we find it necessary to downregulate cell-BM adhesiveness, which is consistent with experimental observations. A stiff BM promotes invasiveness because at early stages the opening in the BM created by MDE degradation from tumor cells tends to be narrower when the BM is stiffer. This requires invading cells to squeeze through the narrow opening and thus promotes fragmentation that then leads to enhanced growth and invasion. In three dimensions, the opening in the BM was found to increase in size even when the BM is stiff because of pressure induced by growing tumor clusters. 
A larger opening in the BM can increase the potential for further invasiveness by increasing the possibility that additional
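    The core diffuse-domain trick described above (capturing a complex domain implicitly with a smooth auxiliary function and solving on a larger regular grid) can be sketched in one dimension. This is a generic illustration under assumed parameters, not the authors' tumor model:

```python
import numpy as np

# Minimal 1-D sketch of the diffuse domain idea (illustrative assumptions,
# not the authors' tumor model): the physical domain [0.3, 0.7] is encoded
# by a smooth auxiliary function phi, and a diffusion equation is solved on
# the larger regular grid [0, 1] with phi-weighted fluxes, so no
# boundary-fitted mesh is needed.
n, eps, dt = 200, 0.02, 1e-5
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

phi = 0.5 * (np.tanh((x - 0.3) / eps) - np.tanh((x - 0.7) / eps))

u = np.exp(-((x - 0.5) / 0.05) ** 2)      # initial concentration bump
mass0 = np.sum(phi * u) * dx              # phi-weighted mass

for _ in range(2000):
    phi_f = 0.5 * (phi[1:] + phi[:-1])    # phi at cell faces
    flux = phi_f * np.diff(u) / dx        # phi-weighted diffusive flux
    div = np.zeros_like(u)
    div[1:-1] = np.diff(flux) / dx
    # where phi -> 0 the flux vanishes, so a no-flux condition on the
    # embedded boundary emerges automatically
    u = u + dt * div / np.maximum(phi, 1e-8)

mass1 = np.sum(phi * u) * dx
print(mass0, mass1)   # phi-weighted mass inside the domain is conserved
```

    The same construction carries over unchanged to 2-D or 3-D grids, which is the dimension-independence the abstract emphasizes.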

  13. CMEIAS JFrad: a digital computing tool to discriminate the fractal geometry of landscape architectures and spatial patterns of individual cells in microbial biofilms.

    PubMed

    Ji, Zhou; Card, Kyle J; Dazzo, Frank B

    2015-04-01

    Image analysis of fractal geometry can be used to gain deeper insights into complex ecophysiological patterns and processes occurring within natural microbial biofilm landscapes, including the scale-dependent heterogeneities of their spatial architecture, biomass, and cell-cell interactions, all driven by the colonization behavior of optimal spatial positioning of organisms to maximize their efficiency in utilization of allocated nutrient resources. Here, we introduce CMEIAS JFrad, a new computing technology that analyzes the fractal geometry of complex biofilm architectures in digital landscape images. The software uniquely features a data-mining opportunity based on a comprehensive collection of 11 different mathematical methods to compute fractal dimension that are implemented into a wizard design to maximize ease-of-use for semi-automatic analysis of single images or fully automatic analysis of multiple images in a batch process. As examples of application, quantitative analyses of fractal dimension were used to optimize the important variable settings of brightness threshold and minimum object size in order to discriminate the complex architecture of freshwater microbial biofilms at multiple spatial scales, and also to differentiate the spatial patterns of individual bacterial cells that influence their cooperative interactions, resource use, and apportionment in situ. Version 1.0 of JFrad is implemented into a software package containing the program files, user manual, and tutorial images that will be freely available at http://cme.msu.edu/cmeias/. This improvement in computational image informatics will strengthen microscopy-based approaches to analyze the dynamic landscape ecology of microbial biofilm populations and communities in situ at spatial resolutions that range from single cells to microcolonies.
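    One of the simplest fractal-dimension estimators of the kind referenced above is box counting; the generic sketch below (not the CMEIAS JFrad implementation) recovers the expected dimensions for a filled region and a line in a binary image:

```python
import numpy as np

# Box-counting estimate of the fractal dimension of a binary image:
# count occupied boxes at several scales and fit the log-log slope.
def box_count_dimension(img, sizes=(1, 2, 4, 8, 16, 32)):
    counts = []
    for s in sizes:
        h, w = img.shape
        n = 0
        for i in range(0, h, s):
            for j in range(0, w, s):
                if img[i:i + s, j:j + s].any():   # box contains foreground?
                    n += 1
        counts.append(n)
    # slope of log(count) versus log(1/size) estimates the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity checks: a filled square has dimension 2, a straight line 1.
square = np.ones((64, 64), dtype=bool)
line = np.zeros((64, 64), dtype=bool)
line[32, :] = True
print(round(box_count_dimension(square), 2))   # ~2.0
print(round(box_count_dimension(line), 2))     # ~1.0
```

    For biofilm architectures the interesting cases lie between these extremes, which is why scale-dependent heterogeneity shows up in the fit.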

  14. Optimization of a DPI Inhaler: A Computational Approach.

    PubMed

    Milenkovic, Jovana; Alexopoulos, Aleck H; Kiparissides, Costas

    2017-03-01

    Alternate geometries of a commercial dry powder inhaler (DPI, i.e., Turbuhaler; AstraZeneca, London, UK) are proposed based on the simulation results obtained from a fluid and particle dynamic computational model, previously developed by Milenkovic et al. The alternate DPI geometries are constructed by simple alterations to components of the commercial inhaler device leading to smoother flow patterns in regions where significant particle-wall collisions occur. The modified DPIs are investigated under the same conditions of the original studies of Milenkovic et al. for a wide range of inhalation flow rates (i.e., 30-70 L/min). Based on the computational results in terms of total particle deposition and fine particle fraction, the modified DPIs were improved over the original design of the commercial device.

  15. Molecular geometry of vanadium dichloride and vanadium trichloride: a gas-phase electron diffraction and computational study.

    PubMed

    Varga, Zoltán; Vest, Brian; Schwerdtfeger, Peter; Hargittai, Magdolna

    2010-03-15

    The molecular geometries of VCl2 and VCl3 have been determined by computations and gas-phase electron diffraction (ED). The ED study is a reinvestigation of the previously published analysis for VCl2. The structure of the vanadium dichloride dimer has also been calculated. According to our joint ED and computational study, the evaporation of a solid sample of VCl2 resulted in about 66% vanadium trichloride and 34% vanadium dichloride in the vapor. Vanadium dichloride is unambiguously linear in its 4Sigma(g)+ ground electronic state. For VCl3, all computations yielded a Jahn-Teller-distorted ground-state structure of C(2v) symmetry. However, it lies less than 3 kJ/mol below the 3E'' state (D(3h) symmetry). Due to the dynamic nature of the Jahn-Teller effect in this case, a rigorous distinction cannot be made between planar models of either D(3h) or C(2v) symmetry for the equilibrium structure of VCl3. Furthermore, the presence of several low-lying excited electronic states of VCl3 is expected in the high-temperature vapor. To our knowledge, this is the first experimental and computational study of the VCl3 molecule.

  16. A frequentist approach to computer model calibration

    SciTech Connect

    Wong, Raymond K. W.; Storlie, Curtis Byron; Lee, Thomas C. M.

    2016-05-05

    The paper considers the computer model calibration problem and provides a general frequentist solution. Under the framework proposed, the data model is semiparametric with a non-parametric discrepancy function which accounts for any discrepancy between physical reality and the computer model. In an attempt to solve a fundamentally important (but often ignored) identifiability issue between the computer model parameters and the discrepancy function, the paper proposes a new and identifiable parameterization of the calibration problem. It also develops a two-step procedure for estimating all the relevant quantities under the new parameterization. This estimation procedure is shown to enjoy excellent rates of convergence and can be straightforwardly implemented with existing software. For uncertainty quantification, bootstrapping is adopted to construct confidence regions for the quantities of interest. As a result, the practical performance of the methodology is illustrated through simulation examples and an application to a computational fluid dynamics model.
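    The two-step flavor of such a calibration (estimate the model parameter, treat the leftover structure as discrepancy, bootstrap for confidence regions) can be illustrated on a toy problem. The linear computer model, sinusoidal discrepancy, and all numbers below are assumptions for illustration, not the paper's estimator:

```python
import numpy as np

# Toy calibration: field data = computer_model(x, theta) + discrepancy + noise.
rng = np.random.default_rng(0)

def computer_model(x, theta):
    return theta * x

x = np.linspace(0, 1, 50)
y = 2.0 * x + 0.1 * np.sin(2 * np.pi * x) + rng.normal(0, 0.05, x.size)

def fit_theta(x, y):
    # Step 1: least-squares estimate of the scalar calibration parameter
    return np.sum(x * y) / np.sum(x * x)

theta_hat = fit_theta(x, y)
resid = y - computer_model(x, theta_hat)   # Step 2 would smooth these
                                           # residuals into a discrepancy term

# Residual bootstrap for a confidence interval on theta
boot = [fit_theta(x, computer_model(x, theta_hat)
                  + rng.choice(resid, resid.size, replace=True))
        for _ in range(500)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(theta_hat, 2), round(lo, 2), round(hi, 2))
```

    Note how the unmodeled sinusoid biases the naive estimate slightly away from 2.0; it is exactly this confounding between parameter and discrepancy that the paper's identifiable parameterization addresses.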

  17. A frequentist approach to computer model calibration

    DOE PAGES

    Wong, Raymond K. W.; Storlie, Curtis Byron; Lee, Thomas C. M.

    2016-05-05

    The paper considers the computer model calibration problem and provides a general frequentist solution. Under the framework proposed, the data model is semiparametric with a non-parametric discrepancy function which accounts for any discrepancy between physical reality and the computer model. In an attempt to solve a fundamentally important (but often ignored) identifiability issue between the computer model parameters and the discrepancy function, the paper proposes a new and identifiable parameterization of the calibration problem. It also develops a two-step procedure for estimating all the relevant quantities under the new parameterization. This estimation procedure is shown to enjoy excellent rates of convergence and can be straightforwardly implemented with existing software. For uncertainty quantification, bootstrapping is adopted to construct confidence regions for the quantities of interest. As a result, the practical performance of the methodology is illustrated through simulation examples and an application to a computational fluid dynamics model.

  18. Methods of information geometry in computational system biology (consistency between chemical and biological evolution).

    PubMed

    Astakhov, Vadim

    2009-01-01

    Interest in simulation of large-scale metabolic networks, species development, and the genesis of various diseases requires new simulation techniques to accommodate the high complexity of realistic biological networks. Information geometry and topological formalisms are proposed to analyze information processes. We analyze the complexity of large-scale biological networks as well as the transition of system functionality due to modifications in the system architecture, system environment, and system components. A dynamic core model is developed; the term dynamic core is used to define a set of causally related network functions. Delocalization of the dynamic core model provides a mathematical formalism to analyze the migration of specific functions in biosystems which undergo structural transitions induced by the environment; the term delocalization is used to describe these processes of migration. We constructed a holographic model with self-poetic dynamic cores which preserves functional properties under those transitions. Topological constraints such as Ricci flow and Pfaff dimension were found for statistical manifolds which represent biological networks. These constraints can provide insight into the processes of degeneration and recovery which take place in large-scale networks. We suggest that therapies which are able to effectively implement the estimated constraints will successfully adjust biological systems and recover altered functionality. We also mathematically formulate the hypothesis that there is a direct consistency between biological and chemical evolution: any set of causal relations within a biological network has its dual reimplementation in the chemistry of the system environment.

  19. A 3D Computational fluid dynamics model validation for candidate molybdenum-99 target geometry

    NASA Astrophysics Data System (ADS)

    Zheng, Lin; Dale, Greg; Vorobieff, Peter

    2014-11-01

    Molybdenum-99 (99Mo) is the parent product of technetium-99m (99mTc), a radioisotope used in approximately 50,000 medical diagnostic tests per day in the U.S. The primary uses of this product include detection of heart disease, cancer, study of organ structure and function, and other applications. The US Department of Energy seeks new methods for generating 99Mo without the use of highly enriched uranium, to eliminate proliferation issues and provide a domestic supply of 99mTc for medical imaging. For this project, electron accelerating technology is used by sending an electron beam through a series of 100Mo targets. During this process a large amount of heat is created, which directly affects the operating temperature dictated by the tensile stress limit of the wall material. To maintain the required temperature range, helium gas is used as a cooling agent that flows through narrow channels between the target disks. In our numerical study, we investigate the cooling performance on a series of new geometry designs of the cooling channel. This research is supported by Los Alamos National Laboratory.

  20. Radiation characteristics of selected long wire antennas as a function of geometry using computer modeling techniques

    NASA Astrophysics Data System (ADS)

    Gillespie, Robert J., Sr.

    1986-12-01

    This thesis, sponsored by the Marine Corps Development and Education Command, Quantico, Va., examines the far-field patterns of five high-frequency long wire antenna configurations through the use of the Numerical Electromagnetics Code (NEC). Lossy ground and the effects of variations made to these structures are considered. The resulting far-field patterns are contained in the appendix. The antenna configurations vary in length from 1.87 to 17.19 wavelengths and in height above ground from 0.103 to 0.610 wavelengths. Variations in the antennas' end-regions include: the use of a ground rod or radial screen attached to the transmitter, terminating the far end of the antenna, and varying the shape of the transmitter from a small box (radio-sized) to a large (vehicle-sized) configuration. It is concluded that both antenna height and length determine the far-field geometry, and that end-region variations also affect the pattern, though to a lesser degree. Tables of comparative results are provided.

  1. Gasdynamic approach to small plumes computation

    NASA Astrophysics Data System (ADS)

    Genkin, L.; Baer, M.; Falcovitz, J.

    1993-07-01

    The semi-inverse marching characteristics scheme SIMA was extended to treat rotational flows; it is applied to the computation of free plumes, starting out from a non-uniform nozzle exit flow that reflects substantial viscous effects. For lack of measurements of exit flow in small nozzles, the exit plane flow is approximated by introducing a Power Law Interpolation (PLI) between the exit plane center and lip values. The exit plane flow variables thus approximated are Mach number, pressure, flow angle, and stagnation temperature. This choice is guided by gasdynamic considerations of exhaust flow from small nozzles into vacuum. The PLI is adjusted so as to obtain a match between computations and measurements at intermediate range from the nozzle. Computed plumes were found to be in good agreement with five different sets of small plume experiments. Comparative computations were performed using two alternate methods: the Boynton-Simons point-source approximation, and a SIMA computation starting out from a uniform exit flow. It is demonstrated that for small nozzles having an exit flow dominated by viscous effects, the combined SIMA/PLI computational method is reasonably accurate and is clearly superior to either of the two alternate methods.

  2. Computational dynamics for robotics systems using a non-strict computational approach

    NASA Technical Reports Server (NTRS)

    Orin, David E.; Wong, Ho-Cheung; Sadayappan, P.

    1989-01-01

    A Non-Strict computational approach for real-time robotics control computations is proposed. In contrast to the traditional approach to scheduling such computations, based strictly on task dependence relations, the proposed approach relaxes precedence constraints and scheduling is guided instead by the relative sensitivity of the outputs with respect to the various paths in the task graph. An example of the computation of the Inverse Dynamics of a simple inverted pendulum is used to demonstrate the reduction in effective computational latency through use of the Non-Strict approach. A speedup of 5 has been obtained when the processes of the task graph are scheduled to reduce the latency along the crucial path of the computation. While error is introduced by the relaxation of precedence constraints, the Non-Strict approach has a smaller error than the conventional Strict approach for a wide range of input conditions.

  3. Experimental and Computational Study of the Flow past a Simplified Geometry of an Engine/Pylon/Wing Installation at low velocity/moderate incidence flight conditions

    NASA Astrophysics Data System (ADS)

    Bury, Yannick; Lucas, Matthieu; Bonnaud, Cyril; Joly, Laurent; ISAE Team; Airbus Team

    2014-11-01

    We study numerically and experimentally the vortices that develop past a model geometry of a wing equipped with a pylon-mounted engine at low-speed/moderate-incidence flight conditions. For such a configuration, the presence of the powerplant installation under the wing initiates a complex, unsteady vortical flow field at the nacelle/pylon/wing junctions. Its interaction with the upper wing boundary layer causes a drop in aircraft performance. In order to decipher the underlying physics, this study is initially conducted on a simplified geometry at a Reynolds number of 200000, based on the wing chord and on the freestream velocity. Two configurations of angle of attack and side-slip angle are investigated. This work relies on unsteady Reynolds-Averaged Navier-Stokes computations, oil flow visualizations and stereoscopic Particle Image Velocimetry measurements. The vortex dynamics thus produced is described in terms of vortex core position, intensity, size and turbulent intensity thanks to a vortex tracking approach. In addition, the analysis of the velocity fields obtained from PIV highlights the influence of the longitudinal vortex initiated at the pylon/wing junction on the separation process of the boundary layer near the upper wing leading edge.

  4. Micro-computed tomographic analysis of the radial geometry of intrarenal artery-vein pairs in rats and rabbits: Comparison with light microscopy.

    PubMed

    Ngo, Jennifer P; Le, Bianca; Khan, Zohaib; Kett, Michelle M; Gardiner, Bruce S; Smith, David W; Melhem, Mayer M; Maksimenko, Anton; Pearson, James T; Evans, Roger G

    2017-08-10

    We assessed the utility of synchrotron-radiation micro-computed tomography (micro-CT) for quantification of the radial geometry of the renal cortical vasculature. The kidneys of nine rats and six rabbits were perfusion fixed and the renal circulation filled with Microfil. In order to assess shrinkage of Microfil, rat kidneys were imaged at the Australian Synchrotron immediately upon tissue preparation and then post-fixed in paraformaldehyde and reimaged 24 hours later. The Microfil shrank only 2-5% over the 24-hour period. All subsequent micro-CT imaging was completed within 24 hours of sample preparation. After micro-CT imaging, the kidneys were processed for histological analysis. In both rat and rabbit kidneys, vascular structures identified in histological sections could be identified in two-dimensional (2D) micro-CT images from the original kidney. Vascular morphology was similar in the two sets of images. Radial geometry quantified by manual analysis of 2D images from micro-CT was consistent with corresponding data generated by light microscopy. However, due to the limited spatial resolution when imaging a whole organ using contrast-enhanced micro-CT, only arteries ≥100 and ≥60 μm in diameter, for the rat and rabbit respectively, could be assessed. We conclude that it is feasible and valid to use micro-CT to quantify vascular geometry of the renal cortical circulation in both the rat and rabbit. However, a combination of light microscopic and micro-CT approaches is required to evaluate the spatial relationships between intrarenal arteries and veins over an extensive range of vessel sizes. © 2017 John Wiley & Sons Australia, Ltd.

  5. Computational insights into the S3 transfer reaction: A special case of double group transfer reaction featuring bicyclically delocalized aromatic transition state geometries.

    PubMed

    Algarra, Andrés G

    2017-08-15

    An unusual pericyclic process that involves the intermolecular transfer of thiozone (S3) is computationally described. The process can be considered a special case of double group transfer reaction whereby the two migrating groups are connected to the same substituent, taking place in a concerted manner via transition states featuring two five-membered C2S3 rings fused together. Analysis of the aromaticity at the TS geometries, by computing NICS values at the (3,+1) RCPs as well as by ACID calculations, confirms the aromatic character of each C2S3 ring, thus resulting in bicyclically delocalized aromatic structures. The free energy barriers for the transfer of S3 (40-50 kcal mol(-1)) are similar to those computed for typical double hydrogen transfer reactions. The similarities and differences between these processes have been further analysed by applying ASM-EDA and NBO approaches to the model reactions between ethene and ethane, and between ethene and 1,2,3-trithiolane. © 2017 Wiley Periodicals, Inc.

  6. Computer-based Approaches to Patient Education

    PubMed Central

    Lewis, Deborah

    1999-01-01

    All articles indexed in MEDLINE or CINAHL, related to the use of computer technology in patient education, and published in peer-reviewed journals between 1971 and 1998 were selected for review. Sixty-six articles, including 21 research-based reports, were identified. Forty-five percent of the studies were related to the management of chronic disease. Thirteen studies described an improvement in knowledge scores or clinical outcomes when computer-based patient education was compared with traditional instruction. Additional articles examined patients' computer experience, socioeconomic status, race, and gender and found no significant differences when compared with program outcomes. Sixteen of the 21 research-based studies had effect sizes greater than 0.5, indicating a significant change in the described outcome when the study subjects participated in computer-based patient education. The findings from this review support computer-based education as an effective strategy for transfer of knowledge and skill development for patients. The limited number of research studies (N = 21) points to the need for additional research. Recommendations for new studies include cost-benefit analysis and the impact of these new technologies on health outcomes over time. PMID:10428001

  7. Computational study of pulsatile blood flow in prototype vessel geometries of coronary segments

    PubMed Central

    Chaniotis, A.K.; Kaiktsis, L.; Katritsis, D.; Efstathopoulos, E.; Pantos, I.; Marmarellis, V.

    2010-01-01

    The spatial and temporal distributions of wall shear stress (WSS) in prototype vessel geometries of coronary segments are investigated via numerical simulation, and the potential association with vascular disease and specifically atherosclerosis and plaque rupture is discussed. In particular, simulation results of WSS spatio-temporal distributions are presented for pulsatile, non-Newtonian blood flow conditions for: (a) curved pipes with different curvatures, and (b) bifurcating pipes with different branching angles and flow division. The effects of non-Newtonian flow on WSS (compared to Newtonian flow) are found to be small at Reynolds numbers representative of blood flow in coronary arteries. Specific preferential sites of average low WSS (and likely atherogenesis) were found at the outer regions of the bifurcating branches just after the bifurcation, and at the outer-entry and inner-exit flow regions of the curved vessel segment. The drop in WSS was more dramatic at the bifurcating vessel sites (less than 5% of the pre-bifurcation value). These sites were also near rapid gradients of WSS changes in space and time – a fact that increases the risk of rupture of plaque likely to develop at these sites. The time variation of the WSS spatial distributions was very rapid around the start and end of the systolic phase of the cardiac cycle, when strong fluctuations of intravascular pressure were also observed. These rapid and strong changes of WSS and pressure coincide temporally with the greatest flexion and mechanical stresses induced in the vessel wall by myocardial motion (ventricular contraction). The combination of these factors may increase the risk of plaque rupture and thrombus formation at these sites. PMID:20400349

  8. Computational study of pulsatile blood flow in prototype vessel geometries of coronary segments.

    PubMed

    Chaniotis, A K; Kaiktsis, L; Katritsis, D; Efstathopoulos, E; Pantos, I; Marmarellis, V

    2010-01-01

    The spatial and temporal distributions of wall shear stress (WSS) in prototype vessel geometries of coronary segments are investigated via numerical simulation, and the potential association with vascular disease and specifically atherosclerosis and plaque rupture is discussed. In particular, simulation results of WSS spatio-temporal distributions are presented for pulsatile, non-Newtonian blood flow conditions for: (a) curved pipes with different curvatures, and (b) bifurcating pipes with different branching angles and flow division. The effects of non-Newtonian flow on WSS (compared to Newtonian flow) are found to be small at Reynolds numbers representative of blood flow in coronary arteries. Specific preferential sites of average low WSS (and likely atherogenesis) were found at the outer regions of the bifurcating branches just after the bifurcation, and at the outer-entry and inner-exit flow regions of the curved vessel segment. The drop in WSS was more dramatic at the bifurcating vessel sites (less than 5% of the pre-bifurcation value). These sites were also near rapid gradients of WSS changes in space and time - a fact that increases the risk of rupture of plaque likely to develop at these sites. The time variation of the WSS spatial distributions was very rapid around the start and end of the systolic phase of the cardiac cycle, when strong fluctuations of intravascular pressure were also observed. These rapid and strong changes of WSS and pressure coincide temporally with the greatest flexion and mechanical stresses induced in the vessel wall by myocardial motion (ventricular contraction). The combination of these factors may increase the risk of plaque rupture and thrombus formation at these sites.
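    As a point of reference for the WSS magnitudes discussed, the steady Poiseuille value tau = 4*mu*Q/(pi*R^3) gives a quick order-of-magnitude check; the radius, flow rate, and viscosity below are assumed illustrative values, not data from the study:

```python
import math

# Steady Poiseuille wall shear stress for a coronary-like vessel
# (illustrative values only: R = 1.5 mm, Q = 1 mL/s, mu = 3.5 mPa*s).
def wall_shear_stress(mu, Q, R):
    """WSS in Pa; mu in Pa*s, Q in m^3/s, R in m."""
    return 4.0 * mu * Q / (math.pi * R ** 3)

tau = wall_shear_stress(3.5e-3, 1.0e-6, 1.5e-3)
print(round(tau, 2))   # ~1.3 Pa, a typical physiological magnitude
```

    The simulations in the paper resolve how pulsatility and geometry push local WSS well below such a baseline near bifurcations, to under 5% of the pre-bifurcation value.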

  9. Multiphysics computations on cellular interaction in complex geometries and vortex-accelerated vorticity deposition in Richtmyer-Meshkov instability

    NASA Astrophysics Data System (ADS)

    Peng, Gaozhu

    The cellular interactions during the leukocyte margination and adhesion cascade in cardiovascular microcirculation are multi-scale and multiphysics phenomena, involving fluid flow, cell mechanics, chemical reaction kinetics and transport, and fluid-structure interaction. The vascular network in vivo has a rather complicated topology, unlike the straight and flat channels and pipes where most biological experiments in vitro and numerical simulations are carried out. A computational framework is formulated towards the goal of building a virtual blood vessel system to simulate the hydrodynamic and kinetic interactions of blood cells in complex vascular geometries, including vascular network bifurcations and the irregular shapes of the endothelial monolayer lining the blood vessel lumen in vivo. Mixed front-tracking, immersed-boundary, and ghost-cell methods are applied. The codes are benchmarked and validated on five selected problems. We find that erythrocyte-leukocyte interaction, leukocyte-leukocyte interaction, and vascular geometry play important roles in leukocyte margination, initial tethering, and adhesion to the vascular endothelium. In part II of the dissertation, we studied two-dimensional microscale Richtmyer-Meshkov interfaces and discovered the self-driven vortex-accelerated vorticity deposition (VAVD) process. Opposite-signed secondary vorticity deposited by VAVD is rolled into vortex double layers which are extremely unstable and lead to enhanced fluid mixing. The VAVD process examined and the new quantification procedure, the circulation rate of change, comprise a new vortex paradigm for examining the effect of specific initial conditions on the evolution of Richtmyer-Meshkov and Rayleigh-Taylor interfaces through intermediate times.

  10. Geometry Shapes Propagation: Assessing the Presence and Absence of Cortical Symmetries through a Computational Model of Cortical Spreading Depression.

    PubMed

    Kroos, Julia M; Diez, Ibai; Cortes, Jesus M; Stramaglia, Sebastiano; Gerardo-Giorda, Luca

    2016-01-01

    Cortical spreading depression (CSD), a depolarization wave which originates in the visual cortex and travels toward the frontal lobe, has been suggested to be one neural correlate of aura migraine. To date, little is known about the mechanisms which can trigger or stop aura migraine. Here, to shed some light on this problem, and under the hypothesis that CSD might mediate aura migraine, we aim to study different aspects favoring or disfavoring the propagation of CSD. In particular, by using a computational neuronal model distributed throughout a realistic cortical mesh, we study the role that geometry has in shaping CSD. Our results are two-fold: first, we found significant differences in the traveling patterns of CSD propagation, both intra- and inter-hemispherically, revealing important asymmetries in the propagation profile. Second, we developed methods able to identify brain regions featuring a peculiar behavior during CSD propagation. Our study reveals dynamical aspects of CSD which, if applied to subject-specific cortical geometry, might shed some light on how to differentiate between healthy subjects and those suffering from migraine.
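    The wave character of CSD can be caricatured by the simplest reaction-diffusion model, a 1-D bistable ("Nagumo") front. This generic sketch with assumed parameters is not the neuronal model used in the study, but it shows the depolarization front advancing across tissue:

```python
import numpy as np

# 1-D bistable reaction-diffusion front: du/dt = D*u_xx + u(1-u)(u-a).
# A depolarized patch at the left seeds a traveling wave.
n, dx, dt, D, a = 400, 0.1, 0.002, 1.0, 0.25
u = np.zeros(n)
u[:20] = 1.0

def step(u):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    return u + dt * (D * lap + u * (1 - u) * (u - a))

front0 = int(np.argmax(u < 0.5))      # first grid point still at rest
for _ in range(2000):
    u = step(u)
front1 = int(np.argmax(u < 0.5))
print(front0, front1)                 # the front has advanced rightward
```

    On a realistic cortical mesh, as in the paper, the local geometry (curvature, folding) modulates exactly this propagation.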

  11. Real geometry gyrokinetic PIC computations of ion turbulence in tokamak discharges with SUMMIT/PG3EQ_NC

    NASA Astrophysics Data System (ADS)

    Leboeuf, Jean-Noel; Rhodes, Terry; Dimits, Andris; Shumaker, Dan

    2006-10-01

    The PG3EQ_NC module within the SUMMIT Gyrokinetic PIC FORTRAN90 Framework makes possible 3D nonlinear toroidal computations of ion turbulence in the real geometry of DIII-D discharges. This is accomplished with the use of local, field-line-following, quasi-ballooning coordinates and through a direct interface with DIII-D equilibrium data via the EFIT and ONETWO codes, as well as Holger Saint John's PLOTEQ code for the (R, Z) position of each flux surface. The effect of real geometry is being elucidated with the CYCLONE shot by comparing results for growth rates and diffusivities from PG3EQ_NC to those of its circular counterpart. The PG3EQ_NC module is also being used to model ion-channel turbulence in DIII-D discharges 118561 and 120327. Linear results will be compared to growth rate calculations with the GKS code. Nonlinear results will also be compared with scattering measurements of turbulence, as well as with accessible measurements of fluctuation amplitudes and spectra from other diagnostics.

  12. Real geometry gyrokinetic PIC computations of ion turbulence in advanced tokamak discharges with SUMMIT/PG3EQ_NC

    NASA Astrophysics Data System (ADS)

    Leboeuf, Jean-Noel; Decyk, Viktor; Rhodes, Terry; Dimits, Andris; Shumaker, Dan

    2006-04-01

    The PG3EQ_NC module within the SUMMIT Gyrokinetic PIC FORTRAN90 Framework makes possible 3D nonlinear toroidal computations of ion turbulence in the real geometry of DIII-D discharges. This is accomplished with the use of local, field-line-following, quasi-ballooning coordinates and through a direct interface with DIII-D equilibrium data via the EFIT and ONETWO codes, as well as Holger Saint John's PLOTEQ code for the (R, Z) position of each flux surface. The effect of real geometry is being elucidated with CYCLONE shot 81499 by comparing results from PG3EQ_NC to those of its circular counterpart. The PG3EQ_NC module is also being used to model ion-channel turbulence in advanced tokamak discharges 118561 and 120327. Linear results will be compared to growth rate calculations with the GKS code. Nonlinear results will also be compared with scattering measurements of turbulence, as well as with accessible measurements of fluctuation amplitudes and spectra from other diagnostics.

  13. Geometry Shapes Propagation: Assessing the Presence and Absence of Cortical Symmetries through a Computational Model of Cortical Spreading Depression

    PubMed Central

    Kroos, Julia M.; Diez, Ibai; Cortes, Jesus M.; Stramaglia, Sebastiano; Gerardo-Giorda, Luca

    2016-01-01

    Cortical spreading depression (CSD), a depolarization wave which originates in the visual cortex and travels toward the frontal lobe, has been suggested to be one neural correlate of aura migraine. To date, little is known about the mechanisms which can trigger or stop aura migraine. Here, to shed some light on this problem, and under the hypothesis that CSD might mediate aura migraine, we aim to study different aspects favoring or disfavoring the propagation of CSD. In particular, by using a computational neuronal model distributed throughout a realistic cortical mesh, we study the role that geometry has in shaping CSD. Our results are two-fold: first, we found significant differences in the traveling patterns of CSD propagation, both intra- and inter-hemispherically, revealing important asymmetries in the propagation profile. Second, we developed methods able to identify brain regions featuring a peculiar behavior during CSD propagation. Our study reveals dynamical aspects of CSD which, if applied to subject-specific cortical geometry, might shed some light on how to differentiate between healthy subjects and those suffering from migraine. PMID:26869913

  14. Reduced-Order Modeling: New Approaches for Computational Physics

    NASA Technical Reports Server (NTRS)

    Beran, Philip S.; Silva, Walter A.

    2001-01-01

    In this paper, we review the development of new reduced-order modeling techniques and discuss their applicability to various problems in computational physics. Emphasis is given to methods based on Volterra series representations and the proper orthogonal decomposition. Results are reported for different nonlinear systems to provide clear examples of the construction and use of reduced-order models, particularly in the multi-disciplinary field of computational aeroelasticity. Unsteady aerodynamic and aeroelastic behaviors of two-dimensional and three-dimensional geometries are described. Large increases in computational efficiency are obtained through the use of reduced-order models, thereby justifying the initial computational expense of constructing these models and motivating their use for multi-disciplinary design analysis.
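    One building block mentioned above, the proper orthogonal decomposition, reduces in practice to an SVD of a snapshot matrix; the synthetic two-mode data below is an assumed illustration, not the authors' aeroelastic cases:

```python
import numpy as np

# POD via SVD: collect solution snapshots as columns, extract the dominant
# spatial modes, and project the dynamics onto that low-dimensional basis.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 200)
t = np.linspace(0, 1, 50)
# Synthetic snapshots dominated by two spatial modes plus small noise
snapshots = (np.outer(np.sin(np.pi * x), np.cos(2 * np.pi * t))
             + 0.3 * np.outer(np.sin(3 * np.pi * x), np.sin(2 * np.pi * t))
             + 0.001 * rng.normal(size=(200, 50)))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1   # modes for 99.9% of energy
basis = U[:, :r]

coeffs = basis.T @ snapshots                  # reduced representation
reconstruction = basis @ coeffs
err = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
print(r, round(err, 4))
```

    The computational payoff described in the paper comes from evolving only the handful of modal coefficients instead of the full-order state.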

  15. Human brain mapping: Experimental and computational approaches

    SciTech Connect

    Wood, C.C.; George, J.S.; Schmidt, D.M.; Aine, C.J.; Sanders, J.; Belliveau, J.

    1998-11-01

    This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). This project combined Los Alamos' and collaborators' strengths in noninvasive brain imaging and high performance computing to develop potential contributions to the multi-agency Human Brain Project led by the National Institute of Mental Health. The experimental component of the project emphasized the optimization of spatial and temporal resolution of functional brain imaging by combining: (a) structural MRI measurements of brain anatomy; (b) functional MRI measurements of blood flow and oxygenation; and (c) MEG measurements of time-resolved neuronal population currents. The computational component of the project emphasized development of a high-resolution 3-D volumetric model of the brain based on anatomical MRI, in which structural and functional information from multiple imaging modalities can be integrated into a single computational framework for modeling, visualization, and database representation.

  16. Postprocessing of Voxel-Based Topologies for Additive Manufacturing Using the Computational Geometry Algorithms Library (CGAL)

    DTIC Science & Technology

    2015-06-01

    Points in space are first extracted from the voxels, using either their center points or corners. Next, a surface is generated by computing an α-hull of the point set.
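    The α-hull step can be illustrated in two dimensions: triangulate the extracted voxel points, discard triangles whose circumradius exceeds α, and take the boundary of what remains. A sketch using scipy rather than CGAL (the function and parameter names here are invented for illustration):

```python
import numpy as np
from scipy.spatial import Delaunay

def alpha_shape_edges(points, alpha):
    """Boundary edges of a 2D alpha complex: keep Delaunay triangles
    whose circumradius is below alpha, then return the edges that
    belong to exactly one kept triangle."""
    tri = Delaunay(points)
    kept = []
    for s in tri.simplices:
        a, b, c = points[s]
        la = np.linalg.norm(b - c)
        lb = np.linalg.norm(a - c)
        lc = np.linalg.norm(a - b)
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) -
                         (b[1] - a[1]) * (c[0] - a[0]))
        # Circumradius R = (la * lb * lc) / (4 * area).
        if area > 1e-12 and la * lb * lc / (4.0 * area) < alpha:
            kept.append(s)
    count = {}
    for s in kept:
        for i, j in ((s[0], s[1]), (s[1], s[2]), (s[0], s[2])):
            key = (min(i, j), max(i, j))
            count[key] = count.get(key, 0) + 1
    return [e for e, n in count.items() if n == 1]

# Voxel-centre points on a 5x5 grid; with a generous alpha the
# surviving boundary edges trace the outer square.
pts = np.array([[i, j] for i in range(5) for j in range(5)], dtype=float)
edges = alpha_shape_edges(pts, alpha=1.0)
```

CGAL's 3D alpha-shape machinery does the analogous filtering on tetrahedra, which is presumably what the report's postprocessing pipeline relies on.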

  17. Conceptualizing Vectors in College Geometry: A New Framework for Analysis of Student Approaches and Difficulties

    ERIC Educational Resources Information Center

    Kwon, Oh Hoon

    2012-01-01

    This dissertation documents a new way of conceptualizing vectors in college mathematics, especially in geometry. First, I will introduce three problems to show the complexity and subtlety of the construct of vectors with the classical vector representations. These highlight the need for a new framework that: (1) differentiates abstraction from a…

  18. Specific heat critical amplitudes and the approach to bulk criticality in parallel plate geometries

    NASA Astrophysics Data System (ADS)

    Leite, M. M.; Nemirovsky, A. M.; Coutinho-Filho, M. D.

    1992-02-01

    We calculate the universal ratio A+/A- of the specific heat critical amplitudes of an Ising system confined in a layered geometry of thickness L in the regime L/ξ≥1, where ξ is the bulk critical correlation length. Using field-theoretic renormalization-group techniques we determine A+/A- under various surface boundary conditions for the local field.

  19. Kindergarteners' Achievement on Geometry and Measurement Units That Incorporate a Gifted Education Approach

    ERIC Educational Resources Information Center

    Casa, Tutita M.; Firmender, Janine M.; Gavin, M. Katherine; Carroll, Susan R.

    2017-01-01

    This research responds to the call by early childhood educators advocating for more challenging mathematics curriculum at the primary level. The kindergarten Project M[superscript 2] units focus on challenging geometry and measurement concepts by positioning students as practicing mathematicians. The research reported herein highlights the…

  1. Conceptualizing Vectors in College Geometry: A New Framework for Analysis of Student Approaches and Difficulties

    ERIC Educational Resources Information Center

    Kwon, Oh Hoon

    2012-01-01

    This dissertation documents a new way of conceptualizing vectors in college mathematics, especially in geometry. First, I will introduce three problems to show the complexity and subtlety of the construct of vectors with the classical vector representations. These highlight the need for a new framework that: (1) differentiates abstraction from a…

  2. Computational Approach for Epitaxial Polymorph Stabilization through Substrate Selection

    SciTech Connect

    Ding, Hong; Dwaraknath, Shyam S.; Garten, Lauren; Ndione, Paul; Ginley, David; Persson, Kristin A.

    2016-05-04

    With the ultimate goal of finding new polymorphs through targeted synthesis conditions and techniques, we outline a computational framework to select optimal substrates for epitaxial growth using first-principles calculations of formation energies, elastic strain energy, and topological information. To demonstrate the approach, we study the stabilization of metastable VO2 compounds, which provide a rich chemical and structural polymorph space. Here, we find that common polymorph statistics, lattice matching, and energy-above-hull considerations recommend homostructural growth on TiO2 substrates, where the VO2 brookite phase would be preferentially grown on the a-c TiO2 brookite plane while the columbite and anatase structures favor the a-b plane on the respective TiO2 phases. Overall, we find that a model which incorporates geometric unit-cell area matching between the substrate and the target film, as well as the resulting strain energy density of the film, provides qualitative agreement with experimental observations for the heterostructural growth of known VO2 polymorphs: rutile, A, and B phases. The minimal interfacial geometry matching and estimated strain energy criteria suggest several substrates and substrate-film orientations for the heterostructural growth of the hitherto hypothetical anatase, brookite, and columbite polymorphs. Our criteria serve as preliminary guidance for experimental efforts to stabilize new materials and/or polymorphs through epitaxy. The current screening algorithm is being integrated within the Materials Project online framework, and the data are hence publicly available.
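    The two screening quantities described, a unit-cell area match between film and substrate and the elastic strain energy density the film incurs, can be sketched as a toy model. All parameter values below are invented for illustration and are not the paper's actual data or workflow:

```python
def area_mismatch(film_ab, sub_ab):
    """Fractional mismatch between film and substrate surface unit-cell areas."""
    a_film = film_ab[0] * film_ab[1]
    a_sub = sub_ab[0] * sub_ab[1]
    return abs(a_film - a_sub) / a_sub

def strain_energy_density(eps_xx, eps_yy, c11, c12):
    """Toy biaxial elastic strain energy density from in-plane strains
    and two elastic constants (a rough isotropic-style estimate)."""
    return 0.5 * c11 * (eps_xx ** 2 + eps_yy ** 2) + c12 * eps_xx * eps_yy

# Hypothetical in-plane lattice parameters (angstrom): film vs substrate.
film, sub = (4.55, 5.70), (4.59, 5.44)
eps = tuple((s - f) / f for f, s in zip(film, sub))
mismatch = area_mismatch(film, sub)
sed = strain_energy_density(eps[0], eps[1], c11=200.0, c12=100.0)  # invented GPa values
```

A screening pass would then rank candidate substrates by low area mismatch and low strain energy density, in the spirit of the criteria described above.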

  3. Optimization of the geometry and speed of a moving blocker system for cone-beam computed tomography scatter correction.

    PubMed

    Chen, Xi; Ouyang, Luo; Yan, Hao; Jia, Xun; Li, Bin; Lyu, Qingwen; Zhang, You; Wang, Jing

    2017-09-01

    X-ray scatter is a significant barrier to image quality improvements in cone-beam computed tomography (CBCT). A moving blocker-based strategy was previously proposed to simultaneously estimate scatter and reconstruct the complete volume within the field of view (FOV) from a single CBCT scan. A blocker consisting of lead stripes is inserted between the X-ray source and the imaging object, and moves back and forth along the rotation axis during gantry rotation. While promising results were obtained in our previous studies, the geometric design and moving speed of the blocker were set empirically. The goal of this work is to optimize the geometry and speed of the moving blocker system. Performance of the blocker was examined through Monte Carlo (MC) simulation and experimental studies with various geometry designs and moving speeds. All hypothetical designs employed an anthropomorphic pelvic phantom. The scatter estimation accuracy was quantified by using lead stripes ranging from 5 to 100 pixels on the detector plane. An iterative reconstruction based on total variation minimization was used to reconstruct CBCT images from unblocked projection data after scatter correction. The reconstructed image was evaluated under various combinations of lead strip width and interspace (ranging from 10 to 60 pixels) and different moving speeds (ranging from 1 to 30 pixels per projection). MC simulation showed that the scatter estimation error varied from 0.8% to 5.8%. The phantom experiment showed that CT number error in the reconstructed CBCT images varied from 13 to 35. Highest reconstruction accuracy was achieved when the strip width was 20 pixels, the interspace was 60 pixels, and the moving speed was 15 pixels per projection. Scatter estimation can be achieved in a large range of lead strip width and interspace combinations. The moving speed does not have a very strong effect on the reconstruction result if it is above 5 pixels per projection. Geometry design of the blocker affected image
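    In blocker-based schemes like this one, the signal measured in the shadow of the lead strips is taken as a scatter sample and interpolated across the unblocked detector regions. A one-dimensional sketch of that interpolation step (illustrative only; the actual method operates on 2D projections and a moving blocker):

```python
import numpy as np

def estimate_scatter(detector_cols, blocked_cols, measured_scatter):
    """Linearly interpolate scatter sampled behind blocker strips
    to every detector column."""
    return np.interp(detector_cols, blocked_cols, measured_scatter)

cols = np.arange(100)
blocked = np.array([10, 30, 50, 70, 90])        # columns behind lead strips
scatter_samples = np.array([5.0, 6.0, 8.0, 6.5, 5.5])
scatter_full = estimate_scatter(cols, blocked, scatter_samples)
# Scatter-corrected signal = open-field measurement minus scatter_full.
```

Wider interspaces give fewer scatter samples (coarser interpolation) but more unblocked data for reconstruction, which is the trade-off the optimization above quantifies.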

  4. Coronary Artery Axial Plaque Stress and its Relationship With Lesion Geometry: Application of Computational Fluid Dynamics to Coronary CT Angiography.

    PubMed

    Choi, Gilwoo; Lee, Joo Myung; Kim, Hyun-Jin; Park, Jun-Bean; Sankaran, Sethuraman; Otake, Hiromasa; Doh, Joon-Hyung; Nam, Chang-Wook; Shin, Eun-Seok; Taylor, Charles A; Koo, Bon-Kwon

    2015-10-01

    The purpose of this study was to characterize the hemodynamic force acting on plaque and to investigate its relationship with lesion geometry. Coronary plaque rupture occurs when plaque stress exceeds plaque strength. Computational fluid dynamics was applied to 114 lesions (81 patients) from coronary computed tomography angiography. The axial plaque stress (APS) was computed by extracting the axial component of hemodynamic stress acting on stenotic lesions, and the axial lesion asymmetry was assessed by the luminal radius change over length (radius gradient [RG]). Lesions were divided into upstream-dominant (upstream RG > downstream RG) and downstream-dominant lesions (upstream RG < downstream RG) according to the RG. Thirty-three lesions (28.9%) showed net retrograde axial plaque force. Upstream APS linearly increased as lesion severity increased, whereas downstream APS exhibited a concave function for lesion severity. There was a negative correlation (r = -0.274, p = 0.003) between APS and lesion length. The pressure gradient, computed tomography-derived fractional flow reserve (FFRCT), and wall shear stress were consistently higher in upstream segments, regardless of the lesion asymmetry. However, APS was higher in the upstream segment of upstream-dominant lesions (11,371.96 ± 5,575.14 dyne/cm(2) vs. 6,878.14 ± 4,319.51 dyne/cm(2), p < 0.001), and in the downstream segment of downstream-dominant lesions (7,681.12 ± 4,556.99 dyne/cm(2) vs. 11,990.55 ± 5,556.64 dyne/cm(2), p < 0.001). Although there were no differences in FFRCT, % diameter stenosis, and wall shear stress pattern, the distribution of APS was different between upstream- and downstream-dominant lesions. APS uniquely characterizes the stenotic segment and has a strong relationship with lesion geometry. Clinical application of these hemodynamic and geometric indices may be helpful to assess the future risk of plaque rupture and to determine treatment strategy for patients with coronary artery
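    The radius gradient used above to classify lesion asymmetry is simply the luminal radius change over segment length on each side of the lesion. A hedged sketch of the classification (function names and numbers are invented; the study's exact geometric definitions may differ):

```python
def radius_gradient(radius_reference, radius_minimal, length):
    """Luminal radius change over segment length (RG)."""
    return abs(radius_reference - radius_minimal) / length

def classify_lesion(upstream_rg, downstream_rg):
    """Upstream-dominant if the lumen narrows faster than it recovers."""
    return "upstream-dominant" if upstream_rg > downstream_rg else "downstream-dominant"

# Example: radius tapers from 2.0 mm to 1.0 mm over 5 mm upstream,
# and recovers from 1.0 mm to 1.8 mm over 10 mm downstream.
up = radius_gradient(2.0, 1.0, 5.0)     # 0.20 per mm
down = radius_gradient(1.8, 1.0, 10.0)  # 0.08 per mm
label = classify_lesion(up, down)
```

Per the abstract, this geometric label tracks where the axial plaque stress concentrates even when FFRCT and wall shear stress patterns do not differ.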

  5. Alchembed: A Computational Method for Incorporating Multiple Proteins into Complex Lipid Geometries

    PubMed Central

    2015-01-01

    A necessary step prior to starting any membrane protein computer simulation is the creation of a well-packed configuration of protein(s) and lipids. Here, we demonstrate a method, alchembed, that can simultaneously and rapidly embed multiple proteins into arrangements of lipids described using either atomistic or coarse-grained force fields. During a short simulation, the interactions between the protein(s) and lipids are gradually switched on using a soft-core van der Waals potential. We validate the method on a range of membrane proteins and determine the optimal soft-core parameters required to insert membrane proteins. Since all of the major biomolecular codes include soft-core van der Waals potentials, no additional code is required to apply this method. A tutorial is included in the Supporting Information. PMID:26089745
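    The soft-core van der Waals interaction that alchembed ramps up removes the r → 0 singularity of the ordinary Lennard-Jones potential via a λ-dependent term in the denominator. A sketch of one common soft-core parameterization (not necessarily the exact functional form or parameters used by any particular MD code):

```python
def softcore_lj(r, lam, epsilon=1.0, sigma=1.0, alpha=0.5):
    """Soft-core Lennard-Jones: finite at r=0 for lam < 1,
    recovering the standard LJ potential at lam = 1."""
    s6 = alpha * (1.0 - lam) + (r / sigma) ** 6
    return 4.0 * epsilon * lam * (1.0 / s6 ** 2 - 1.0 / s6)

# At lam = 1 this is plain LJ: zero at r = sigma, minimum of -epsilon.
v_at_sigma = softcore_lj(1.0, 1.0)
# At small lam the energy stays finite even at full overlap (r = 0),
# which is what lets initially overlapping protein and lipid atoms
# be resolved gradually as lam is switched on.
v_overlap = softcore_lj(0.0, 0.1)
```

Ramping lam from 0 to 1 over a short simulation, as described above, therefore turns clashes into gentle repulsions rather than infinite forces.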

  6. A computational framework to characterize and compare the geometry of coronary networks.

    PubMed

    Bulant, C A; Blanco, P J; Lima, T P; Assunção, A N; Liberato, G; Parga, J R; Ávila, L F R; Pereira, A C; Feijóo, R A; Lemos, P A

    2017-03-01

    This work presents a computational framework to perform a systematic and comprehensive assessment of the morphometry of coronary arteries from in vivo medical images. The methodology embraces image segmentation, arterial vessel representation, characterization and comparison, data storage, and finally analysis. Validation is performed using a sample of 48 patients. Data mining of morphometric information of several coronary arteries is presented. Results agree with medical reports in terms of basic geometric and anatomical variables. Concerning geometric descriptors, inter-artery and intra-artery correlations are studied. Data reported here can be useful for the construction and setup of blood flow models of the coronary circulation. Finally, as an application example, a similarity criterion to assess vasculature likelihood based on geometric features is presented and used to test geometric similarity among sibling patients. Results indicate that likelihood, measured through geometric descriptors, is stronger between siblings compared with non-relative patients. Copyright © 2016 John Wiley & Sons, Ltd.
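    A geometric similarity criterion of the kind described can be as simple as a distance between standardized descriptor vectors. A hedged sketch (the descriptor choices, scales, and values below are invented; the paper's actual criterion may differ):

```python
import numpy as np

def geometric_similarity(desc_a, desc_b, scale):
    """Similarity in (0, 1]: 1 for identical descriptor vectors,
    decaying with the scaled Euclidean distance between them."""
    d = np.linalg.norm((desc_a - desc_b) / scale)
    return 1.0 / (1.0 + d)

# Hypothetical descriptors: [vessel length (mm), mean radius (mm), tortuosity]
scale = np.array([50.0, 1.0, 0.5])       # assumed typical spread per descriptor
sibling_a = np.array([120.0, 1.6, 1.25])
sibling_b = np.array([115.0, 1.5, 1.20])
unrelated = np.array([80.0, 2.4, 1.60])
sib_sim = geometric_similarity(sibling_a, sibling_b, scale)
other_sim = geometric_similarity(sibling_a, unrelated, scale)
```

In this toy setup the sibling pair scores higher than the unrelated pair, mirroring the paper's finding that geometric likelihood is stronger between siblings.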

  7. Computation of leading edge film cooling from a CONSOLE geometry (CONverging Slot hOLE)

    NASA Astrophysics Data System (ADS)

    Guelailia, A.; Khorsi, A.; Hamidou, M. K.

    2016-01-01

    The aim of this study is to investigate the effect of mass flow rate on film cooling effectiveness and heat transfer over a gas turbine rotor blade with three staggered rows of shower-head holes which are inclined at 30° to the spanwise direction, and are normal to the streamwise direction on the blade. To improve film cooling effectiveness, the standard cylindrical holes, located on the leading edge region, are replaced with converging slot holes (console). ANSYS CFX has been used for this computational simulation. The turbulence is approximated by a k-ɛ model. Detailed film effectiveness distributions are presented for different mass flow rates. The numerical results are compared with experimental data.

  8. A modern approach to storing of 3D geometry of objects in machine engineering industry

    NASA Astrophysics Data System (ADS)

    Sokolova, E. A.; Aslanov, G. A.; Sokolov, A. A.

    2017-02-01

    3D graphics is a kind of computer graphics which has absorbed much from vector and raster computer graphics. It is used in interior design projects, architectural projects, advertising, while creating educational computer programs, movies, visual images of parts and products in engineering, etc. 3D computer graphics allows one to create 3D scenes along with simulated lighting conditions and configurable viewpoints.

  9. Helical gears with circular arc teeth: Generation, geometry, precision and adjustment to errors, computer aided simulation of conditions of meshing and bearing contact

    NASA Technical Reports Server (NTRS)

    Litvin, Faydor L.; Tsay, Chung-Biau

    1987-01-01

    The authors have proposed a method for the generation of circular arc helical gears which is based on the application of standard equipment, worked out all aspects of the geometry of the gears, proposed methods for the computer aided simulation of conditions of meshing and bearing contact, investigated the influence of manufacturing and assembly errors, and proposed methods for the adjustment of gears to these errors. The results of computer aided solutions are illustrated with computer graphics.

  10. Interpolation Approach To Computer-Generated Holograms

    NASA Astrophysics Data System (ADS)

    Yatagai, Toyohiko

    1983-10-01

    A computer-generated hologram (CGH) for reconstructing independent N×N resolution points would actually require a hologram made up of N×N sampling cells. For dependent sampling points of Fourier transform CGHs, the required memory size for computation can be reduced by using an interpolation technique for the reconstructed image points. We have made a mosaic hologram which consists of K×K subholograms with N×N sampling points multiplied by an appropriate weighting factor. It is shown that the mosaic hologram can reconstruct an image with NK×NK resolution points. The main advantage of the present algorithm is that a sufficiently large hologram of NK×NK sample points is synthesized from K×K subholograms which are successively calculated from the data of N×N sample points and also successively plotted.
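    The interpolation idea underlying such schemes can be illustrated with Fourier-domain zero-padding: embedding the N×N spectrum of the sampled data in an NK×NK array and inverse-transforming yields a band-limited interpolation onto an NK×NK grid that agrees with the original samples at the coarse grid points. A sketch of that principle (not the paper's mosaic weighting scheme itself):

```python
import numpy as np

N, K = 8, 4
rng = np.random.default_rng(1)
img = rng.random((N, N))               # N x N sample points

H = np.fft.fft2(img)                   # Fourier data of the samples
# Zero-pad the centered spectrum to NK x NK; the inverse transform then
# gives a band-limited interpolation on an NK x NK grid.
Hpad = np.zeros((N * K, N * K), dtype=complex)
off = (N * K - N) // 2
Hpad[off:off + N, off:off + N] = np.fft.fftshift(H)
recon = np.fft.ifft2(np.fft.ifftshift(Hpad)).real * K * K
# recon[::K, ::K] reproduces the original N x N samples exactly.
```

The mosaic construction described above achieves a comparable NK×NK output by assembling K×K weighted subholograms instead of one large transform, keeping the per-step memory at N×N.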

  11. A declarative approach to visualizing concurrent computations

    SciTech Connect

    Roman, G.C.; Cox, K.C.

    1989-10-01

    That visualization can play a key role in the exploration of concurrent computations is central to the ideas presented. Equally important, although given less emphasis, is concern that the full potential of visualization may not be reached unless the art of generating beautiful pictures is rooted in a solid, formally technical foundation. The authors show that program verification provides a formal framework around which such a foundation can be built. Making these ideas a practical reality will require both research and experimentation.

  12. Computational Approaches to Drug Repurposing and Pharmacology

    PubMed Central

    Hodos, Rachel A; Kidd, Brian A; Khader, Shameer; Readhead, Ben P; Dudley, Joel T

    2016-01-01

    Data in the biological, chemical, and clinical domains are accumulating at ever-increasing rates and have the potential to accelerate and inform drug development in new ways. Challenges and opportunities now lie in developing analytic tools to transform these often complex and heterogeneous data into testable hypotheses and actionable insights. This is the aim of computational pharmacology, which uses in silico techniques to better understand and predict how drugs affect biological systems, which can in turn improve clinical use, avoid unwanted side effects, and guide selection and development of better treatments. One exciting application of computational pharmacology is drug repurposing: finding new uses for existing drugs. Already yielding many promising candidates, this strategy has the potential to improve the efficiency of the drug development process and reach patient populations with previously unmet needs such as those with rare diseases. While current techniques in computational pharmacology and drug repurposing often focus on just a single data modality such as gene expression or drug-target interactions, we rationalize that methods such as matrix factorization that can integrate data within and across diverse data types have the potential to improve predictive performance and provide a fuller picture of a drug's pharmacological action. PMID:27080087

  13. A Social Construction Approach to Computer Science Education

    ERIC Educational Resources Information Center

    Machanick, Philip

    2007-01-01

    Computer science education research has mostly focused on cognitive approaches to learning. Cognitive approaches to understanding learning do not account for all the phenomena observed in teaching and learning. A number of apparently successful educational approaches, such as peer assessment, apprentice-based learning and action learning, have…

  14. A bionic approach to mathematical modeling the fold geometry of deployable reflector antennas on satellites

    NASA Astrophysics Data System (ADS)

    Feng, C. M.; Liu, T. S.

    2014-10-01

    Inspired from biology, this study presents a method for designing the fold geometry of deployable reflectors. Since the space available inside rockets for transporting satellites with reflector antennas is typically cylindrical in shape, and its cross-sectional area is considerably smaller than the reflector antenna after deployment, the cross-sectional area of the folded reflector must be smaller than the available rocket interior space. Membrane reflectors in aerospace are a type of lightweight structure that can be packaged compactly. To design membrane reflectors from the perspective of deployment processes, bionic applications from morphological changes of plants are investigated. Creating biologically inspired reflectors, this paper deals with fold geometry of reflectors, which imitate flower buds. This study uses mathematical formulation to describe geometric profiles of flower buds. Based on the formulation, new designs for deployable membrane reflectors derived from bionics are proposed. Adjusting parameters in the formulation of these designs leads to decreases in reflector area before deployment.

  15. Machine learning and computer vision approaches for phenotypic profiling.

    PubMed

    Grys, Ben T; Lo, Dara S; Sahin, Nil; Kraus, Oren Z; Morris, Quaid; Boone, Charles; Andrews, Brenda J

    2017-01-02

    With recent advances in high-throughput, automated microscopy, there has been an increased demand for effective computational strategies to analyze large-scale, image-based data. To this end, computer vision approaches have been applied to cell segmentation and feature extraction, whereas machine-learning approaches have been developed to aid in phenotypic classification and clustering of data acquired from biological images. Here, we provide an overview of the commonly used computer vision and machine-learning methods for generating and categorizing phenotypic profiles, highlighting the general biological utility of each approach. © 2017 Grys et al.

  16. Error regions in quantum state tomography: computational complexity caused by geometry of quantum states

    NASA Astrophysics Data System (ADS)

    Suess, Daniel; Rudnicki, Łukasz; Maciel, Thiago O.; Gross, David

    2017-09-01

    The outcomes of quantum mechanical measurements are inherently random. It is therefore necessary to develop stringent methods for quantifying the degree of statistical uncertainty about the results of quantum experiments. For the particularly relevant task of quantum state tomography, it has been shown that a significant reduction in uncertainty can be achieved by taking the positivity of quantum states into account. However—the large number of partial results and heuristics notwithstanding—no efficient general algorithm is known that produces an optimal uncertainty region from experimental data, while making use of the prior constraint of positivity. Here, we provide a precise formulation of this problem and show that the general case is NP-hard. Our result leaves room for the existence of efficient approximate solutions, and therefore does not in itself imply that the practical task of quantum uncertainty quantification is intractable. However, it does show that there exists a non-trivial trade-off between optimality and computational efficiency for error regions. We prove two versions of the result: one for frequentist and one for Bayesian statistics.

  17. Geometry and Topology of Two-Dimensional Dry Foams: Computer Simulation and Experimental Characterization.

    PubMed

    Tong, Mingming; Cole, Katie; Brito-Parada, Pablo R; Neethling, Stephen; Cilliers, Jan J

    2017-04-05

    Pseudo-two-dimensional (2D) foams are commonly used in foam studies as it is experimentally easier to measure the bubble size distribution and other geometric and topological properties of these foams than it is for a 3D foam. Despite the widespread use of 2D foams in both simulation and experimental studies, many important geometric and topological relationships are still not well understood. Film size, for example, is a key parameter in the stability of bubbles and the overall structure of foams. The relationship between the size distribution of the films in a foam and that of the bubbles themselves is thus a key relationship in the modeling and simulation of unstable foams. This work uses structural simulation from Surface Evolver to statistically analyze this relationship and to ultimately formulate a relationship for the film size in 2D foams that is shown to be valid across a wide range of different bubble polydispersities. These results and other topological features are then validated using digital image analysis of experimental pseudo-2D foams produced in a vertical Hele-Shaw cell, which contains a monolayer of bubbles between two plates. From both the experimental and computational results, it is shown that there is a distribution of sizes that a film can adopt and that this distribution is very strongly dependent on the sizes of the two bubbles to which the film is attached, especially the smaller one, but that it is virtually independent of the underlying polydispersity of the foam.

  18. Computational analysis of a rarefied hypersonic flow over combined gap/step geometries

    NASA Astrophysics Data System (ADS)

    Leite, P. H. M.; Santos, W. F. N.

    2015-06-01

    This work describes a computational analysis of a hypersonic flow over a combined gap/step configuration at zero degree angle of attack, in chemical equilibrium and thermal nonequilibrium. Effects on the flowfield structure due to changes on the step frontal-face height have been investigated by employing the Direct Simulation Monte Carlo (DSMC) method. The work focuses the attention of designers of hypersonic configurations on the fundamental parameter of surface discontinuity, which can have an important impact on even initial designs. The results highlight the sensitivity of the primary flowfield properties, velocity, density, pressure, and temperature due to changes on the step frontal-face height. The analysis showed that the upstream disturbance in the gap/step configuration increased with increasing the frontal-face height. In addition, it was observed that the separation region for the gap/step configuration increased with increasing the step frontal-face height. It was found that density and pressure for the gap/step configuration dramatically increased inside the gap as compared to those observed for the gap configuration, i.e., a gap without a step.

  19. Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories

    NASA Technical Reports Server (NTRS)

    Ng, Hok Kwan; Sridhar, Banavar

    2016-01-01

    This study examines three possible approaches to improving the speed in generating wind-optimal routes for air traffic at the national or global level. They are: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers, and (c) implementing those same algorithms in NASA's Future ATM Concepts Evaluation Tool (FACET); each is compared to a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization using various numbers of CPUs ranging from 80 to 10,240 units are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers for potential computational enhancement through parallel processing on the computer clusters. This study also re-implements the trajectory optimization algorithm for further reduction of computational time through algorithm modifications and integrates that with FACET to facilitate the use of the new features which calculate time-optimal routes between worldwide airport pairs in a wind field for use with existing FACET applications. The implementations of trajectory optimization algorithms use the MATLAB, Python, and Java programming languages. The performance evaluations are done by comparing their computational efficiencies and based on the potential application of optimized trajectories. The paper shows that in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.
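    The parallelization strategy in (b) works because each origin-destination optimization is independent of the others, so airport pairs can simply be farmed out to workers. A minimal sketch using a thread pool (a stand-in for the paper's cluster setup; the great-circle distance here stands in for a full wind-optimal trajectory optimization):

```python
import math
from concurrent.futures import ThreadPoolExecutor

def great_circle_km(a, b):
    """Great-circle distance between (lat, lon) points in degrees --
    a stand-in cost for a full wind-optimal trajectory optimization."""
    (la1, lo1), (la2, lo2) = (map(math.radians, a), map(math.radians, b))
    h = (math.sin((la2 - la1) / 2) ** 2 +
         math.cos(la1) * math.cos(la2) * math.sin((lo2 - lo1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

airport_pairs = [((37.62, -122.38), (40.64, -73.78)),   # SFO-JFK
                 ((51.47, -0.45), (35.55, 139.78))]     # LHR-HND
# Each pair is an independent task, so they parallelize trivially.
with ThreadPoolExecutor(max_workers=4) as pool:
    costs = list(pool.map(lambda p: great_circle_km(*p), airport_pairs))
```

Scaling this embarrassingly parallel pattern from a thread pool to a commodity cluster is essentially approach (b) in the study.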

  20. A correlative microscopy approach relates microtubule behaviour, local organ geometry, and cell growth at the Arabidopsis shoot apical meristem

    PubMed Central

    Burian, Agata; Uyttewaal, Magalie

    2013-01-01

    Cortical microtubules (CMTs) are often aligned in a particular direction in individual cells or even in groups of cells and play a central role in the definition of growth anisotropy. How the CMTs themselves are aligned is not well known, but two hypotheses have been proposed. According to the first hypothesis, CMTs align perpendicular to the maximal growth direction, and, according to the second, CMTs align parallel to the maximal stress direction. Since both hypotheses were formulated on the basis of mainly qualitative assessments, the link between CMT organization, organ geometry, and cell growth is revisited using a quantitative approach. For this purpose, CMT orientation, local curvature, and growth parameters for each cell were measured in the growing shoot apical meristem (SAM) of Arabidopsis thaliana. Using this approach, it has been shown that stable CMTs tend to be perpendicular to the direction of maximal growth in cells at the SAM periphery, but parallel in the cells at the boundary domain. When examining the local curvature of the SAM surface, no strict correlation between curvature and CMT arrangement was found, which implies that SAM geometry, and presumed geometry-derived stress distribution, is not sufficient to prescribe the CMT orientation. However, a better match between stress and CMTs was found when mechanical stress derived from differential growth was also considered. PMID:24153420

  1. Information theoretic approaches to multidimensional neural computations

    NASA Astrophysics Data System (ADS)

    Fitzgerald, Jeffrey D.

    Many systems in nature process information by transforming inputs from their environments into observable output states. These systems are often difficult to study because they are performing computations on multidimensional inputs with many degrees of freedom using highly nonlinear functions. The work presented in this dissertation deals with some of the issues involved with characterizing real-world input/output systems and understanding the properties of idealized systems using information theoretic methods. Using the principle of maximum entropy, a family of models are created that are consistent with certain measurable correlations from an input/output dataset but are maximally unbiased in all other respects, thereby eliminating all unjustified assumptions about the computation. In certain cases, including spiking neurons, we show that these models also minimize the mutual information. This property gives one the advantage of being able to identify the relevant input/output statistics by calculating their information content. We argue that these maximum entropy models provide a much needed quantitative framework for characterizing and understanding sensory processing neurons that are selective for multiple stimulus features. To demonstrate their usefulness, these ideas are applied to neural recordings from macaque retina and thalamus. These neurons, which primarily respond to two stimulus features, are shown to be well described using only first and second order statistics, indicating that their firing rates encode information about stimulus correlations. In addition to modeling multi-feature computations in the relevant feature space, we also show that maximum entropy models are capable of discovering the relevant feature space themselves. This technique overcomes the disadvantages of two commonly used dimensionality reduction methods and is explored using several simulated neurons, as well as retinal and thalamic recordings. 
Finally, we ask how neurons in a
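The maximum-entropy construction in this abstract can be illustrated with a small sketch (not the dissertation's models; the target moments below are made up): a discrete distribution of the exponential-family form p(x) ∝ exp(λ1·x + λ2·x²) is fit to match a target mean and second moment by gradient ascent on the Lagrange multipliers of the dual problem.

```python
import numpy as np

# Illustrative sketch: a discrete maximum-entropy distribution constrained
# to match a target mean and second moment. Target values are hypothetical.
x = np.linspace(-3, 3, 61)
feats = np.vstack([x, x**2])           # feature functions f(x)
target = np.array([0.5, 1.2])          # desired <x> and <x^2>

lam = np.zeros(2)                      # Lagrange multipliers
for _ in range(5000):                  # gradient ascent on the dual
    p = np.exp(lam @ feats)
    p /= p.sum()
    lam += 0.1 * (target - feats @ p)  # nudge moments toward the target

p = np.exp(lam @ feats)
p /= p.sum()
print(np.round(feats @ p, 3))          # moments now match the target
```

Any distribution satisfying the same moment constraints has lower entropy than this one, which is the sense in which the model is "maximally unbiased".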

  2. Acoustic gravity waves: A computational approach

    NASA Technical Reports Server (NTRS)

    Hariharan, S. I.; Dutt, P. K.

    1987-01-01

    This paper discusses numerical solutions of a hyperbolic initial boundary value problem that arises from acoustic wave propagation in the atmosphere. Field equations are derived from the atmospheric fluid flow governed by the Euler equations. The resulting original problem is nonlinear. A first order linearized version of the problem is used for computational purposes. The main difficulty in the problem as with any open boundary problem is in obtaining stable boundary conditions. Approximate boundary conditions are derived and shown to be stable. Numerical results are presented to verify the effectiveness of these boundary conditions.
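The open-boundary difficulty mentioned in this abstract can be sketched in one dimension (this is a generic stand-in, not the paper's linearized acoustic-gravity system): a first-order upwind discretization of linear advection lets a pulse leave the computational domain without reflection, because the outflow boundary is handled by the upwind stencil itself.

```python
import numpy as np

# Minimal open-boundary sketch: 1-D linear advection u_t + c*u_x = 0 with a
# quiescent inflow and an outflow absorbed by the upwind stencil. Not the
# paper's scheme; parameters are illustrative.
nx, c = 201, 1.0
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.5 * dx / c                        # CFL number 0.5, stable
u = np.exp(-200.0 * (x - 0.3) ** 2)      # Gaussian pulse

for _ in range(500):
    u[1:] -= c * dt / dx * (u[1:] - u[:-1])  # first-order upwind update
    u[0] = 0.0                               # quiescent inflow boundary

print(float(np.abs(u).max()))            # pulse has exited; residual is tiny
```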

  3. Real geometry gyrokinetic PIC computations of ion turbulence in advanced tokamak discharges with SUMMIT/PG3EQ/NC

    NASA Astrophysics Data System (ADS)

    Leboeuf, Jean-Noel; Dimits, Andris; Shumaker, Dan

    2005-10-01

    Development of the PG3EQ/NC module within the SUMMIT gyrokinetic PIC FORTRAN 90 framework is largely completed. It provides SUMMIT with the capability of performing 3D nonlinear toroidal gyrokinetic computations of ion turbulence in real DIII-D geometry. PG3EQ/NC uses local, field-line-following, quasi-ballooning coordinates and a direct interface with DIII-D equilibrium data via the EFIT and ONETWO codes. In addition, Holger Saint John's PLOTEQ code is used to determine the (r,z) position of each flux surface. The SUMMIT computations thus initialized have been carried out for shot /118561 at times 01450 and 02050 at many of the 51 flux surfaces from the core to the edge. Linear SUMMIT results will be compared to available data from calculations with the GKS code for the same discharges. Nonlinear SUMMIT results will also be compared with scattering measurements of turbulence, as well as with accessible measurements of fluctuation amplitudes and spectra from other diagnostics.

  4. Evaluation and optimization of the performance of frame geometries for lithium-ion battery application by computer simulation

    SciTech Connect

    Miranda, D.; Miranda, F.; Costa, C. M.; Almeida, A. M.; Lanceros-Méndez, S.

    2016-06-08

    Tailoring battery geometries is essential for many applications, as geometry influences the delivered capacity. Two geometries, frame and conventional, have been studied: for a given scan rate of 330C, the square frame shows a capacity of 305.52 Ah m⁻², which is 527 times higher than that of the conventional geometry for a constant area of all components.

  5. Evaluation and optimization of the performance of frame geometries for lithium-ion battery application by computer simulation

    NASA Astrophysics Data System (ADS)

    Miranda, D.; Miranda, F.; Costa, C. M.; Almeida, A. M.; Lanceros-Méndez, S.

    2016-06-01

    Tailoring battery geometries is essential for many applications, as geometry influences the delivered capacity. Two geometries, frame and conventional, have been studied: for a given scan rate of 330C, the square frame shows a capacity of 305.52 Ah m⁻², which is 527 times higher than that of the conventional geometry for a constant area of all components.

  6. A Computational Approach to Competitive Range Expansions

    NASA Astrophysics Data System (ADS)

    Weber, Markus F.; Poxleitner, Gabriele; Hebisch, Elke; Frey, Erwin; Opitz, Madeleine

    2014-03-01

    Bacterial communities represent complex and dynamic ecological systems. Environmental conditions and microbial interactions determine whether a bacterial strain survives an expansion to new territory. In our work, we studied competitive range expansions in a model system of three Escherichia coli strains. In this system, a colicin producing strain competed with a colicin resistant, and with a colicin sensitive strain for new territory. Genetic engineering allowed us to tune the strains' growth rates and to study their expansion in distinct ecological scenarios (with either cyclic or hierarchical dominance). The control over growth rates also enabled us to construct and to validate a predictive computational model of the bacterial dynamics. The model rested on an agent-based, coarse-grained description of the expansion process and we conducted independent experiments on the growth of single-strain colonies for its parametrization. Furthermore, the model considered the long-range nature of the toxin interaction between strains. The integration of experimental analysis with computational modeling made it possible to quantify how the level of biodiversity depends on the interplay between bacterial growth rates, the initial composition of the inoculum, and the toxin range.
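A drastically coarsened, hypothetical stand-in for the kind of agent-based competition model described above (the paper's model is spatially 2-D, parametrized from experiments, and includes long-range toxin interactions, none of which is captured here): two strains compete on a 1-D periodic frontier, and sites copy a random neighbor with probability proportional to that neighbor's growth rate.

```python
import numpy as np

# Toy frontier competition: strain 2 has a hypothetical growth advantage.
rng = np.random.default_rng(2)
rates = {1: 1.0, 2: 1.2}                 # strain growth rates (made up)
front = rng.choice([1, 2], size=200)     # initial inoculum composition

for _ in range(20000):
    i = rng.integers(0, 200)
    j = (i + rng.choice([-1, 1])) % 200  # random neighbor on the ring
    if rng.random() < rates[int(front[j])] / max(rates.values()):
        front[i] = front[j]              # replication into site i

print(float((front == 2).mean()))        # final fraction of the faster strain
```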

  7. Computing 3-D steady supersonic flow via a new Lagrangian approach

    NASA Technical Reports Server (NTRS)

    Loh, C. Y.; Liou, M.-S.

    1993-01-01

    The new Lagrangian method introduced by Loh and Hui (1990) is extended for 3-D steady supersonic flow computation. Details of the conservation form, the implementation of the local Riemann solver, and the Godunov and the high resolution TVD schemes are presented. The new approach is robust yet accurate, capable of handling complicated geometry and reactions between discontinuous waves. It keeps all the advantages claimed in the 2-D method of Loh and Hui, e.g., crisp resolution for a slip surface (contact discontinuity) and automatic grid generation along the stream.

  9. Novel Computational Approaches to Drug Discovery

    NASA Astrophysics Data System (ADS)

    Skolnick, Jeffrey; Brylinski, Michal

    2010-01-01

    New approaches to protein functional inference based on protein structure and evolution are described. First, FINDSITE, a threading based approach to protein function prediction, is summarized. Then, the results of large scale benchmarking of ligand binding site prediction, ligand screening, including applications to HIV protease, and GO molecular functional inference are presented. A key advantage of FINDSITE is its ability to use low resolution, predicted structures as well as high resolution experimental structures. Then, an extension of FINDSITE to ligand screening in GPCRs using predicted GPCR structures, FINDSITE/QDOCKX, is presented. This is a particularly difficult case as there are few experimentally solved GPCR structures. Thus, we first train on a subset of known binding ligands for a set of GPCRs; this is then followed by benchmarking against a large ligand library. For the virtual ligand screening of a number of Dopamine receptors, encouraging results are seen, with significant enrichment in identified ligands over those found in the training set. Thus, FINDSITE and its extensions represent a powerful approach to the successful prediction of a variety of molecular functions.

  10. Computational approaches to natural product discovery

    PubMed Central

    Medema, Marnix H.; Fischbach, Michael A.

    2016-01-01

    From the earliest Streptomyces genome sequences, the promise of natural product genome mining has been captivating: genomics and bioinformatics would transform compound discovery from an ad hoc pursuit to a high-throughput endeavor. Until recently, however, genome mining has advanced natural product discovery only modestly. Here, we argue that the development of algorithms to mine the continuously increasing amounts of (meta)genomic data will enable the promise of genome mining to be realized. We review computational strategies that have been developed to identify biosynthetic gene clusters in genome sequences and predict the chemical structures of their products. We then discuss networking strategies that can systematize large volumes of genetic and chemical data, and connect genomic information to metabolomic and phenotypic data. Finally, we provide a vision of what natural product discovery might look like in the future, specifically considering long-standing questions in microbial ecology regarding the roles of metabolites in interspecies interactions. PMID:26284671

  11. Computational approaches to natural product discovery.

    PubMed

    Medema, Marnix H; Fischbach, Michael A

    2015-09-01

    Starting with the earliest Streptomyces genome sequences, the promise of natural product genome mining has been captivating: genomics and bioinformatics would transform compound discovery from an ad hoc pursuit to a high-throughput endeavor. Until recently, however, genome mining has advanced natural product discovery only modestly. Here, we argue that the development of algorithms to mine the continuously increasing amounts of (meta)genomic data will enable the promise of genome mining to be realized. We review computational strategies that have been developed to identify biosynthetic gene clusters in genome sequences and predict the chemical structures of their products. We then discuss networking strategies that can systematize large volumes of genetic and chemical data and connect genomic information to metabolomic and phenotypic data. Finally, we provide a vision of what natural product discovery might look like in the future, specifically considering longstanding questions in microbial ecology regarding the roles of metabolites in interspecies interactions.

  12. Numerical Computation of Sensitivities and the Adjoint Approach

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael

    1997-01-01

    We discuss the numerical computation of sensitivities via the adjoint approach in optimization problems governed by differential equations. We focus on the adjoint problem in its weak form. We show how one can avoid some of the problems with the adjoint approach, such as deriving suitable boundary conditions for the adjoint equation. We discuss the convergence of numerical approximations of the costate computed via the weak form of the adjoint problem and show the significance for the discrete adjoint problem.
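The core economy of the adjoint approach can be shown with a finite-dimensional stand-in for the differential-equation setting (a sketch, not the paper's weak-form machinery): for a state equation A(p)u = b and objective J = cᵀu, a single adjoint solve Aᵀλ = c yields the sensitivity dJ/dp = −λᵀ(dA/dp)u for any number of parameters, instead of one forward solve per parameter.

```python
import numpy as np

# Toy adjoint sensitivity check against a finite difference. All matrices
# and vectors are illustrative.
def objective_and_sensitivity(p):
    A = np.array([[2.0 + p, 1.0], [0.0, 3.0]])
    dA = np.array([[1.0, 0.0], [0.0, 0.0]])   # dA/dp
    b = np.array([1.0, 2.0])
    c = np.array([1.0, 1.0])
    u = np.linalg.solve(A, b)                 # forward (state) solve
    lam = np.linalg.solve(A.T, c)             # adjoint (costate) solve
    return c @ u, -lam @ dA @ u

J0, dJ = objective_and_sensitivity(0.0)
J1, _ = objective_and_sensitivity(1e-6)
print(dJ, (J1 - J0) / 1e-6)                   # adjoint vs finite difference
```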

  13. A Social Constructivist Approach to Computer-Mediated Instruction.

    ERIC Educational Resources Information Center

    Pear, Joseph J.; Crone-Todd, Darlene E.

    2002-01-01

    Describes a computer-mediated teaching system called computer-aided personalized system of instruction (CAPSI) that incorporates a social constructivist approach, maintaining that learning occurs primarily through a socially interactive process. Discusses use of CAPSI in an undergraduate course at the University of Manitoba that showed students…

  14. Spatial stochastic and analytical approaches to describe the complex hydraulic variability inherent channel geometry

    NASA Astrophysics Data System (ADS)

    Hadadin, N.

    2011-07-01

    The effects of basin hydrology on channel hydraulic variability for incised streams were investigated using available field data sets and models of watershed hydrology and channel hydraulics for the Yazoo River Basin, USA. The study presents the hydraulic relations of bankfull discharge, channel width, mean depth, cross-sectional area, longitudinal slope, unit stream power, and runoff production as a function of drainage area using simple linear regression. The hydraulic geometry relations were developed for sixty-one streams: twenty are classified as channel evaluation model (CEM) Types IV and V, and forty-one as CEM Types II and III. These relationships are invaluable to hydraulic and water resources engineers, hydrologists, and geomorphologists involved in stream restoration and protection. They can be used to assist in field identification of bankfull stage and stream dimensions in un-gauged watersheds, as well as in estimating the comparative stability of a stream channel. Results of this research show a good fit of the hydraulic geometry relationships in the Yazoo River Basin. The relations indicate that bankfull discharge, channel width, mean depth, and cross-sectional area have stronger correlation to changes in drainage area than longitudinal slope, unit stream power, and runoff production for streams of CEM Types II and III. The hydraulic geometry relations show that runoff production, bankfull discharge, cross-sectional area, and unit stream power are much more responsive to changes in drainage area than are channel width, mean depth, and slope for streams of CEM Types IV and V. Also, the relations show that bankfull discharge and cross-sectional area are more responsive to changes in drainage area than the other hydraulic variables for streams of CEM Types II and III. The greater the regression slope, the more responsive the variable is to changes in drainage area.
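Downstream hydraulic-geometry relations of the kind fitted above are power laws, e.g. width w = a·Aᵇ, estimated by simple linear regression in log-log space. The sketch below uses synthetic data, not the Yazoo Basin measurements; the coefficient and exponent are invented.

```python
import numpy as np

# Fit w = a * A^b by linear regression on log10-transformed synthetic data.
rng = np.random.default_rng(0)
A = np.logspace(0, 3, 40)                         # drainage area, km^2
w = 3.2 * A ** 0.5 * rng.lognormal(0.0, 0.1, 40)  # noisy bankfull widths

b, log_a = np.polyfit(np.log10(A), np.log10(w), 1)
print(round(b, 2), round(10.0 ** log_a, 2))       # recovers b ≈ 0.5, a ≈ 3.2
```

The fitted slope b is exactly the "regression slope" the abstract uses to rank how responsive each hydraulic variable is to drainage area.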

  15. Geometry-driven diffusion: an alternative approach to image filtering/segmentation in diagnostic imaging

    NASA Astrophysics Data System (ADS)

    Bajla, Ivan

    1998-02-01

    The major goal of this survey is to provide the reader with the motivation for image filtering and segmentation in diagnostic imaging, with a brief overview of the state-of-the-art in nonlinear filters based on geometry-driven diffusion (GDD), and with a possible generalization of GDD filtering towards the complex problem of image segmentation, stated as the minimization of particular energy functionals. An example of the application of GDD filtering to the task of 3D visualization of MRI data of the brain is illustrated and discussed in the paper.
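A Perona-Malik-type step is the standard example of the GDD filter class this survey covers (this sketch is generic, not the survey's specific algorithm): the conductivity g decays with gradient magnitude, so noise in flat regions is smoothed while strong edges survive. The "image" below is a toy step, not MRI data.

```python
import numpy as np

# One explicit geometry-driven diffusion step on a noisy step image.
def gdd_step(img, k=0.3, dt=0.1):
    g = lambda d: np.exp(-((d / k) ** 2))    # edge-stopping conductivity
    out = img.copy()
    for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
        d = np.roll(img, shift, axis) - img  # neighbor difference
        out += dt * g(d) * d                 # diffuse, but not across edges
    return out

rng = np.random.default_rng(1)
img = np.zeros((32, 32))
img[:, 16:] = 1.0                            # vertical edge
img += 0.05 * rng.standard_normal(img.shape)
for _ in range(20):
    img = gdd_step(img)
print(round(float(img[:, 20:30].std()), 3))  # flat-region noise reduced
```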

  16. Computational approach to the study of thermal spin crossover phenomena

    SciTech Connect

    Rudavskyi, Andrii; Broer, Ria; Sousa, Carmen

    2014-05-14

    The key parameters associated with the thermally induced spin crossover process have been calculated for a series of Fe(II) complexes with mono-, bi-, and tridentate ligands. Combining density functional theory calculations for the geometries and normal vibrational modes with highly correlated wave function methods for the energies allows us to accurately compute the entropy variation associated with the spin transition and the zero-point corrected energy difference between the low- and high-spin states. From these values, the transition temperature, T1/2, is estimated for different compounds.
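The relation behind the last step can be written out as a back-of-envelope calculation: at the transition temperature the Gibbs free energies of the low- and high-spin states match, so T1/2 = ΔE/ΔS. The numbers below are hypothetical, not values from the paper.

```python
# Estimate the spin-crossover transition temperature from the computed
# energy gap and entropy change (illustrative magnitudes only).
dE = 12.0e3     # zero-point-corrected HS-LS energy difference, J/mol
dS = 60.0       # entropy gain on going LS -> HS, J/(mol K)
T_half = dE / dS
print(T_half, "K")   # 200.0 K
```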

  17. Computational Approaches for Understanding Energy Metabolism

    PubMed Central

    Shestov, Alexander A; Barker, Brandon; Gu, Zhenglong; Locasale, Jason W

    2013-01-01

    There has been a surge of interest in recent years in understanding the regulation of metabolic networks involved in disease. Quantitative models are increasingly being used to interrogate the metabolic pathways contained within this complex disease biology. At the core of this effort is the mathematical modeling of central carbon metabolism involving glycolysis and the citric acid cycle (referred to as energy metabolism). Here we discuss several approaches used to quantitatively model metabolic pathways relating to energy metabolism and discuss their formalisms, successes, and limitations. PMID:23897661

  18. In silico drug discovery approaches on grid computing infrastructures.

    PubMed

    Wolf, Antje; Shahid, Mohammad; Kasam, Vinod; Ziegler, Wolfgang; Hofmann-Apitius, Martin

    2010-02-01

    The first step in finding a "drug" is screening chemical compound databases against a protein target. In silico approaches like virtual screening by molecular docking are well established in modern drug discovery. As molecular databases of compounds and target structures become larger and more computational screening approaches become available, there is an increased need for compute power and more complex workflows. In this regard, computational Grids are predestined, offering seamless compute and storage capacity. In recent projects related to pharmaceutical research, the high computational and data storage demands of large-scale in silico drug discovery approaches have been addressed by using Grid computing infrastructures in both pharmaceutical industry and academic research. Grid infrastructures are part of the so-called eScience paradigm, where a digital infrastructure supports collaborative processes by providing relevant resources and tools for data- and compute-intensive applications. Substantial computing resources, large data collections, and services for data analysis are shared on the Grid infrastructure and can be mobilized on demand. This review gives an overview of the use of Grid computing for in silico drug discovery and tries to provide a vision of the future development of more complex and integrated workflows on Grids, spanning from target identification and target validation via protein-structure- and ligand-dependent screenings to advanced mining of large-scale in silico experiments.

  19. Computational approaches for RNA energy parameter estimation

    PubMed Central

    Andronescu, Mirela; Condon, Anne; Hoos, Holger H.; Mathews, David H.; Murphy, Kevin P.

    2010-01-01

    Methods for efficient and accurate prediction of RNA structure are increasingly valuable, given the current rapid advances in understanding the diverse functions of RNA molecules in the cell. To enhance the accuracy of secondary structure predictions, we developed and refined optimization techniques for the estimation of energy parameters. We build on two previous approaches to RNA free-energy parameter estimation: (1) the Constraint Generation (CG) method, which iteratively generates constraints that enforce known structures to have energies lower than other structures for the same molecule; and (2) the Boltzmann Likelihood (BL) method, which infers a set of RNA free-energy parameters that maximize the conditional likelihood of a set of reference RNA structures. Here, we extend these approaches in two main ways: We propose (1) a max-margin extension of CG, and (2) a novel linear Gaussian Bayesian network that models feature relationships, which effectively makes use of sparse data by sharing statistical strength between parameters. We obtain significant improvements in the accuracy of RNA minimum free-energy pseudoknot-free secondary structure prediction when measured on a comprehensive set of 2518 RNA molecules with reference structures. Our parameters can be used in conjunction with software that predicts RNA secondary structures, RNA hybridization, or ensembles of structures. Our data, software, results, and parameter sets in various formats are freely available at http://www.cs.ubc.ca/labs/beta/Projects/RNA-Params. PMID:20940338
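The flavor of the Constraint Generation idea can be conveyed with a perceptron-style toy (this is not the paper's LP-based CG or Boltzmann Likelihood machinery, and the loop/stack feature vectors are invented): energy parameters θ are adjusted until each reference structure is assigned a lower energy than a competing decoy, by a margin.

```python
import numpy as np

# Push reference structures below their decoys in energy, with margin 0.1.
refs = np.array([[2, 1, 0], [1, 0, 2]])     # features of reference folds
alts = np.array([[1, 2, 1], [0, 1, 3]])     # features of competing decoys

theta = np.zeros(3)
for _ in range(100):
    for fr, fa in zip(refs, alts):
        if theta @ fr >= theta @ fa - 0.1:  # margin constraint violated
            theta += 0.1 * (fa - fr)        # lower the reference energy
print(theta, bool(theta @ refs[0] < theta @ alts[0]))
```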

  20. Computational fluid dynamics in ventilation: Practical approach

    NASA Astrophysics Data System (ADS)

    Fontaine, J. R.

    The potential of computational fluid dynamics (CFD) for designing ventilation systems is shown through the simulation of five practical cases. The following examples are considered: capture of pollutants on a surface treating tank equipped with a unilateral suction slot in the presence of a disturbing air draft opposed to the suction; dispersion of solid aerosols inside fume cupboards; performance comparison of two general ventilation systems in a silkscreen printing workshop; ventilation of a large open painting area; and oil fog removal inside a mechanical engineering workshop. Whereas the first two problems are analyzed through two-dimensional numerical simulations, the three other cases require three-dimensional modeling. For the surface treating tank case, numerical results are compared to laboratory experiment data. All simulations are carried out using EOL, a CFD software package specially devised to deal with air quality problems in industrial ventilated premises. It contains many analysis tools to interpret the results in terms familiar to the industrial hygienist. Much experimental work has been undertaken to validate the predictions of EOL for ventilation flows.

  1. Ultrasonic approach for formation of erbium oxide nanoparticles with variable geometries.

    PubMed

    Radziuk, Darya; Skirtach, André; Gessner, Andre; Kumke, Michael U; Zhang, Wei; Möhwald, Helmuth; Shchukin, Dmitry

    2011-12-06

    Ultrasound (20 kHz, 29 W·cm⁻²) is employed to form three types of erbium oxide nanoparticles in the presence of multiwalled carbon nanotubes as a template material in water. The nanoparticles are (i) erbium carboxioxide nanoparticles deposited on the external walls of multiwalled carbon nanotubes, and Er₂O₃ in the bulk with (ii) hexagonal and (iii) spherical geometries. Each type of ultrasonically formed nanoparticle reveals Er³⁺ photoluminescence from the crystal lattice. The main advantage of the erbium carboxioxide nanoparticles on the carbon nanotubes is the electromagnetic emission in the visible region, which had not been examined before. On the other hand, the photoluminescence of hexagonal erbium oxide nanoparticles is long-lived (μs) and enables the higher-energy transition (⁴S₃/₂–⁴I₁₅/₂), which is not observed for spherical nanoparticles. Our work is unique because it combines for the first time spectroscopy of Er³⁺ electronic transitions in the host crystal lattices of nanoparticles with the geometry established by ultrasound in an aqueous solution of carbon nanotubes employed as a template material. The work can be of great interest for "green" chemistry synthesis of photoluminescent nanoparticles in water. © 2011 American Chemical Society

  2. Influence of Subducting Plate Geometry on Upper Plate Deformation at Orogen Syntaxes: A Thermomechanical Modeling Approach

    NASA Astrophysics Data System (ADS)

    Nettesheim, Matthias; Ehlers, Todd; Whipp, David

    2016-04-01

    Syntaxes are short, convex bends in the otherwise slightly concave plate boundaries of subduction zones. These regions are of scientific interest because some syntaxes (e.g., the Himalaya or the St. Elias region in Alaska) exhibit exceptionally rapid, focused rock uplift. These observations have led to a hypothesized connection between erosional and tectonic processes (a top-down control), but previous studies have largely neglected the unique 3D geometry of the subducting plates at these locations. In this study, we contribute to this discussion by exploring the idea that subduction geometry alone may be sufficient to trigger focused tectonic uplift in the overriding plate (a bottom-up control). For this, we use a fully coupled 3D thermomechanical model that includes thermochronometric age prediction. The downgoing plate is approximated as a spherical indenter of high rigidity, whereas both viscous and visco-plastic material properties are used to model deformation in the overriding plate. We also consider the influence of the curvature of the subduction zone and the ratio of subduction velocity to subduction zone advance. We evaluate these models with respect to their effect on upper plate exhumation rates and localization. Results indicate that increasing curvature of the indenter and a stronger upper crust lead to more focused tectonic uplift, whereas slab advance causes the uplift focus to migrate and thus may hinder the emergence of a positive feedback.

  3. Investigation of Stent Implant Mechanics Using Linear Analytical and Computational Approach.

    PubMed

    Yang, Hua; Fortier, Aleksandra; Horne, Kyle; Mohammad, Atif; Banerjee, Subhash; Han, Hai-Chao

    2017-03-01

    Stent implants are essential in restoring normal blood flow in atherosclerotic arteries. Recent studies have shown high failure rates of stent implants in the superficial femoral artery (SFA) as a result of the dynamic loading environment imposed on the stents by the diseased arterial wall and turbulent blood flow. A variety of stent designs and materials are currently on the market; however, there is no clear understanding of whether a specific stent design suits the material it is manufactured from, and whether this combination can sustain the life cycle that stent implants must undergo once inside the artery. Few studies relate stent mechanical properties to stent geometry and material. This study presents a linear theoretical and computational modeling approach that determines stent mechanical properties through the effective stiffness of the deployed stent. The effective stiffness is derived based on the stent structure design and loading in the axial and radial directions. A rhombus stent structure was selected for this study because of its common use by mainstream manufacturers. The derived theoretical model was validated using a numerical finite element modeling approach. Results from this study provide preliminary insight into stent deformation as a function of stent geometry, material properties, and arterial wall pressure, and into how to match stent geometry with a suitable material for long life cycle, increased strength, and reliable performance of stent implants.
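The "effective stiffness" idea can be caricatured as a spring network (a heavily simplified picture, not the paper's derivation; real strut bending mechanics is more involved, and every number here is hypothetical): cells in a ring act in parallel around the circumference, rings stack in series along the axis.

```python
# Spring-combination sketch of an effective axial stiffness for a
# rhombus-cell stent. All values are made up for illustration.
k_strut = 50.0      # N/mm, stiffness of a single strut
n_circ = 8          # rhombus cells around the circumference
n_rings = 6         # rings of cells along the stent axis

k_cell = k_strut / 2.0        # struts within one rhombus act in series
k_ring = n_circ * k_cell      # cells in a ring load in parallel
k_axial = k_ring / n_rings    # rings stack in series along the axis
print(k_axial, "N/mm")
```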

  4. Multivariate analysis: A statistical approach for computations

    NASA Astrophysics Data System (ADS)

    Michu, Sachin; Kaushik, Vandana

    2014-10-01

    Multivariate analysis is a statistical approach commonly used in automotive diagnosis, education, cluster evaluation in finance, and more recently in the health-related professions. The objective of the paper is to provide a detailed exploratory discussion of factor analysis (FA) in image retrieval and correlation analysis (CA) of network traffic. Image retrieval methods aim to retrieve relevant images from a collected database, based on their content. The problem is made more difficult by the high dimension of the variable space in which the images are represented. Multivariate correlation analysis proposes an anomaly detection and analysis method based on the correlation coefficient matrix. Anomalous behaviors in the network include various attacks on the network, such as DDoS attacks and network scanning.
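The correlation-matrix anomaly idea can be sketched minimally (synthetic feature streams, not real traffic): learn a baseline correlation matrix from normal traffic, then flag windows whose correlation matrix deviates strongly from it.

```python
import numpy as np

# Correlation-matrix deviation as an anomaly score on synthetic "traffic".
rng = np.random.default_rng(0)
normal = rng.standard_normal((500, 3))
normal[:, 1] += normal[:, 0]              # features co-vary under normal load
attack = rng.standard_normal((500, 3))    # correlation destroyed (e.g. DDoS)

baseline = np.corrcoef(normal, rowvar=False)
score = lambda win: float(np.abs(np.corrcoef(win, rowvar=False) - baseline).max())
print(score(normal[:250]), score(attack)) # small for normal, large for attack
```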

  5. Flyby Geometry Optimization Tool

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.

    2007-01-01

    The Flyby Geometry Optimization Tool is a computer program for computing trajectories and trajectory-altering impulsive maneuvers for spacecraft used in radio relay of scientific data to Earth from an exploratory airplane flying in the atmosphere of Mars.

  6. Aluminium in Biological Environments: A Computational Approach

    PubMed Central

    Mujika, Jon I; Rezabal, Elixabete; Mercero, Jose M; Ruipérez, Fernando; Costa, Dominique; Ugalde, Jesus M; Lopez, Xabier

    2014-01-01

    The increased availability of aluminium in biological environments, due to human intervention in the last century, raises concerns about the effects that this so far “excluded from biology” metal might have on living organisms. Consequently, the bioinorganic chemistry of aluminium has emerged as a very active field of research. This review will focus on our contributions to this field, based on computational studies that can yield an understanding of aluminium biochemistry at a molecular level. Aluminium can interact and be stabilized in biological environments by complexing with both low molecular mass chelants and high molecular mass peptides. The speciation of the metal is, nonetheless, dictated by the hydrolytic species dominant in each case, which vary according to the pH of the medium. In blood, citrate and serum transferrin are identified as the main low molecular mass and high molecular mass molecules interacting with aluminium. The complexation of aluminium to citrate and the subsequent changes exerted on the deprotonation pathways of its titratable groups will be discussed, along with the mechanisms for the intake and release of aluminium in serum transferrin at two pH conditions, physiological neutral and endosomal acidic. Aluminium can substitute for other metals, in particular magnesium, in protein buried sites and trigger conformational disorder and alteration of the protonation states of the protein's side chains. A detailed account of the interaction of aluminium with protein side chains will be given. Finally, it will be described how aluminium can exert oxidative stress by stabilizing superoxide radicals either as mononuclear aluminium or clustered in boehmite. The possibility of promotion of the Fenton reaction, and production of hydroxyl radicals, will also be discussed. PMID:24757505

  7. Reduce beam hardening artifacts of polychromatic X-ray computed tomography by an iterative approximation approach.

    PubMed

    Shi, Hongli; Yang, Zhi; Luo, Shuqian

    2017-01-01

    The beam hardening artifact is one of the most important types of metal artifact in polychromatic X-ray computed tomography (CT), and it can impair image quality seriously. An iterative approach is proposed to reduce the beam hardening artifact caused by metallic components in polychromatic X-ray CT. According to the Lambert-Beer law, the (detected) projections can be expressed as monotonic nonlinear functions of the element geometry projections, which are the theoretical projections produced only by the pixel intensities (image grayscale) of a certain element (component). With the help of prior knowledge of the spectrum distribution of the X-ray beam source and the energy-dependent attenuation coefficients, the functions have explicit expressions. The Newton-Raphson algorithm is employed to solve the functions. The solutions are named the synthetical geometry projections, which are the nearly linear weighted sum of the element geometry projections with respect to the mean of each attenuation coefficient. In this process, the attenuation coefficients are modified to make the Newton-Raphson iterative functions satisfy the convergence conditions of fixed point iteration (FPI), so that the solutions approach the true synthetical geometry projections stably. The underlying images are obtained from the projections by general reconstruction algorithms such as filtered back projection (FBP). The image gray values are adjusted according to the attenuation coefficient means to obtain proper CT numbers. Several examples demonstrate that the proposed approach is efficient in reducing beam hardening artifacts and has satisfactory performance in terms of some general criteria. In a simulation example, the normalized root mean square difference (NRMSD) is reduced by 17.52% compared to a recent algorithm. Since the element geometry projections are free from the effect of beam hardening, the nearly linear weighted sum of them, the synthetical geometry projections, are almost free from the effect of beam hardening.
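A much-reduced, hypothetical version of the inversion step can be written down directly (the spectrum and attenuation values are made up, and the paper's per-element coupling is omitted): with a known source spectrum, the measured projection p is a monotone nonlinear function f of the geometry projection g via Lambert-Beer summed over energy bins, and Newton-Raphson recovers g detector by detector.

```python
import numpy as np

# Invert the polychromatic projection function with Newton-Raphson.
w = np.array([0.3, 0.5, 0.2])      # normalized spectral weights (made up)
mu = np.array([0.25, 0.20, 0.18])  # attenuation per unit length per bin

f = lambda g: -np.log(np.sum(w * np.exp(-mu * g)))  # measured projection
df = lambda g: np.sum(w * mu * np.exp(-mu * g)) / np.sum(w * np.exp(-mu * g))

p_meas = f(5.0)                    # simulated measurement, true g = 5
g = 0.0
for _ in range(20):                # Newton-Raphson iterations
    g -= (f(g) - p_meas) / df(g)
print(g)                           # converges to 5.0
```

Because f is concave and increasing here, the iterates approach the root monotonically, which is the kind of stable convergence the abstract's FPI conditions are meant to guarantee.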

  8. Source coding for transmission of reconstructed dynamic geometry: a rate-distortion-complexity analysis of different approaches

    NASA Astrophysics Data System (ADS)

    Mekuria, Rufael N.; Cesar, Pablo; Bulterman, Dick C. A.

    2014-09-01

    Live 3D reconstruction of a human as a 3D mesh with commodity electronics is becoming a reality. Immersive applications (i.e. cloud gaming, tele-presence) benefit from effective transmission of such content over a bandwidth limited link. In this paper we outline different approaches for compressing live reconstructed mesh geometry based on distributing mesh reconstruction functions between sender and receiver. We evaluate rate-performance-complexity of different configurations. First, we investigate 3D mesh compression methods (i.e. dynamic/static) from MPEG-4. Second, we evaluate the option of using octree based point cloud compression and receiver side surface reconstruction.
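The rate-distortion trade-off at the heart of this comparison can be sketched with the quantization step that underlies octree point-cloud coding (the MPEG-4 and octree codecs evaluated in the paper are far more sophisticated; the cloud here is random): snapping coordinates to a 2^bits grid bounds both the rate per point and the distortion at once.

```python
import numpy as np

# Coordinate quantization: more bits per axis -> lower geometric distortion.
rng = np.random.default_rng(3)
pts = rng.random((1000, 3))                   # points in the unit cube

def quantize(p, bits):
    levels = 2 ** bits - 1
    return np.round(p * levels) / levels      # snap to grid, reconstruct

for bits in (4, 8):
    err = float(np.abs(quantize(pts, bits) - pts).max())
    print(bits, round(err, 5))                # more bits, less distortion
```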

  9. What is Intrinsic Motivation? A Typology of Computational Approaches

    PubMed Central

    Oudeyer, Pierre-Yves; Kaplan, Frederic

    2007-01-01

    Intrinsic motivation, centrally involved in spontaneous exploration and curiosity, is a crucial concept in developmental psychology. It has been argued to be a crucial mechanism for open-ended cognitive development in humans, and as such has gathered growing interest from developmental roboticists in recent years. The goal of this paper is threefold. First, it provides a synthesis of the different approaches to intrinsic motivation in psychology. Second, by interpreting these approaches in a computational reinforcement learning framework, we argue that they are not operational and are even sometimes inconsistent. Third, we set the ground for a systematic operational study of intrinsic motivation by presenting a formal typology of possible computational approaches. This typology is partly based on existing computational models, but also presents new ways of conceptualizing intrinsic motivation. We argue that this kind of computational typology might be useful for opening new avenues for research both in psychology and developmental robotics. PMID:18958277

  10. Investigation of voxel warping and energy mapping approaches for fast 4D Monte Carlo dose calculations in deformed geometries using VMC++

    NASA Astrophysics Data System (ADS)

    Heath, Emily; Tessier, Frederic; Kawrakow, Iwan

    2011-08-01

    A new deformable geometry class for the VMC++ Monte Carlo code was implemented based on the voxel warping method. Alternative geometries which use tetrahedral sub-elements were implemented and efficiency improvements investigated. A new energy mapping method, based on calculating the volume overlap between the deformed reference dose grid and the target dose grid, was also developed. Dose calculations using both the voxel warping and energy mapping methods were compared in simple phantoms as well as a patient geometry. The new deformed geometry implementation in VMC++ increased calculation times by approximately a factor of 6 compared to standard VMC++ calculations in rectilinear geometries. However, the tetrahedron-based geometries were found to improve computational efficiency, relative to the dodecahedron-based geometry, by a factor of 2. When an exact transformation between the reference and target geometries was provided, the voxel and energy warping methods produced identical results. However, when the transformation was not exact, there were discrepancies in the energy deposited on the target geometry, which led to significant differences in the dose calculated by the two methods. Preliminary investigations indicate that these energy differences may correlate with registration errors; however, further work is needed to determine the usefulness of this metric for quantifying registration accuracy.
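The overlap-based energy mapping idea can be illustrated in one dimension: energy scored in each deformed reference voxel is distributed to target voxels in proportion to their geometric overlap, which conserves total energy by construction. This is a hedged 1D sketch of the general principle, not the VMC++ implementation.

```python
import numpy as np

def map_energy(ref_edges_deformed, ref_energy, target_edges):
    """Distribute energy from deformed reference voxels onto a target grid
    in proportion to the (1D) overlap of each voxel pair."""
    n_t = len(target_edges) - 1
    target_energy = np.zeros(n_t)
    for i, e in enumerate(ref_energy):
        a, b = ref_edges_deformed[i], ref_edges_deformed[i + 1]
        width = b - a
        for j in range(n_t):
            lo = max(a, target_edges[j])
            hi = min(b, target_edges[j + 1])
            if hi > lo:  # voxels overlap
                target_energy[j] += e * (hi - lo) / width
    return target_energy

# a deformed reference voxel spanning [1, 3] splits its energy evenly
# over the two unit target voxels it covers
mapped = map_energy([0.0, 1.0, 3.0], [1.0, 2.0], [0.0, 1.0, 2.0, 3.0])
```

The same bookkeeping extends to 3D by replacing interval overlaps with polyhedral volume overlaps, which is where the real computational cost lies.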

  11. Euclidean Geometry via Programming.

    ERIC Educational Resources Information Center

    Filimonov, Rossen; Kreith, Kurt

    1992-01-01

    Describes the Plane Geometry System computer software developed at the Educational Computer Systems laboratory in Sofia, Bulgaria. The system enables students to use the concept of "algorithm" to correspond to the process of "deductive proof" in the development of plane geometry. Provides an example of the software's capability…

  12. Analytic reconstruction approach for parallel translational computed tomography.

    PubMed

    Kong, Huihua; Yu, Hengyong

    2015-01-01

    To develop low-cost and low-dose computed tomography (CT) scanners for developing countries, a parallel translational computed tomography (PTCT) scheme was recently proposed in which the source and detector are translated oppositely with respect to the imaged object, without a slip-ring. In this paper, we develop an analytic filtered-backprojection (FBP)-type reconstruction algorithm for two-dimensional (2D) fan-beam PTCT and extend it to three-dimensional (3D) cone-beam geometry in a Feldkamp-type framework. In particular, a weighting function is constructed to deal with data redundancy for multiple-translation PTCT and eliminate image artifacts. Extensive numerical simulations are performed to validate and evaluate the proposed analytic reconstruction algorithms, and the results confirm their correctness and merits.
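The FBP-type reconstruction underlying this family of algorithms can be sketched for the simplest parallel-beam case: ramp-filter each projection in the frequency domain, then backproject. The disk phantom, grid, and Ram-Lak filter here are illustrative assumptions, not the fan-beam/cone-beam PTCT weighting of the paper.

```python
import numpy as np

# Minimal parallel-beam FBP sketch (Ram-Lak filter).
n, ds = 200, 0.01
s = np.arange(n) * ds - 1.0                 # detector coordinates
angles = np.linspace(0.0, np.pi, 180, endpoint=False)
R = 0.5                                      # centered disk of unit density
# analytic parallel projections of the disk: p(s) = 2*sqrt(R^2 - s^2)
proj = 2.0 * np.sqrt(np.clip(R**2 - s**2, 0.0, None))

freqs = np.fft.fftfreq(n, d=ds)              # cycles per unit length
q = np.fft.ifft(np.fft.fft(proj) * np.abs(freqs)).real  # filtered projection

def fbp_value(x, y):
    """Backproject the filtered projections at a single point (x, y)."""
    total = 0.0
    for th in angles:
        total += np.interp(x * np.cos(th) + y * np.sin(th), s, q)
    return np.pi * total / len(angles)

inside = fbp_value(0.0, 0.0)    # should recover ~1 inside the disk
outside = fbp_value(0.8, 0.0)   # should recover ~0 outside the disk
```

Because the phantom is circularly symmetric, one filtered projection serves all angles here; in PTCT the extra work is precisely the redundancy weighting applied before this filtering step.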

  13. A Comparative Study of Achievement in the Concepts of Fundamentals of Geometry Taught by Computer Managed Individualized Behavioral Objective Instructional Units Versus Lecture-Demonstration Methods of Instruction.

    ERIC Educational Resources Information Center

    Fisher, Merrill Edgar

    The purposes of this study were (1) to identify and compare the effect on student achievement of an individualized computer-managed geometry course, built on behavioral objectives, with traditional instructional methods; and (2) to identify how selected individual aptitudes interact with the two instructional modes. The subjects were…

  14. The Interpretative Flexibility, Instrumental Evolution, and Institutional Adoption of Mathematical Software in Educational Practice: The Examples of Computer Algebra and Dynamic Geometry

    ERIC Educational Resources Information Center

    Ruthven, Kenneth

    2008-01-01

    This article examines three important facets of the incorporation of new technologies into educational practice, focusing on emergent usages of the mathematical tools of computer algebra and dynamic geometry. First, it illustrates the interpretative flexibility of these tools, highlighting important differences in ways of conceptualizing and…

  16. Continuity of the maximum-entropy inference: Convex geometry and numerical ranges approach

    SciTech Connect

    Rodman, Leiba; Spitkovsky, Ilya M. E-mail: ilya@math.wm.edu; Szkoła, Arleta Weis, Stephan

    2016-01-15

    We study the continuity of an abstract generalization of the maximum-entropy inference—a maximizer. It is defined as a right-inverse of a linear map restricted to a convex body which uniquely maximizes on each fiber of the linear map a continuous function on the convex body. Using convex geometry, we prove, among other results, the existence of discontinuities of the maximizer at limits of extremal points that are not themselves extremal points, and apply the result to quantum correlations. Further, we use numerical range methods in the case of quantum inference which refers to two observables. One result is a complete characterization of points of discontinuity for 3 × 3 matrices.

  17. Speckle interferometry from fiber-reinforced materials: A fractal geometry approach

    NASA Astrophysics Data System (ADS)

    Horta, J. M.; Castano, V. M.

    Speckle field studies were performed on fiber-modified Portland cement-based microconcrete beam models subjected to flexural loading. The resulting speckle fields were analyzed in terms of their associated mass fractal dimension by using digital image processing techniques. The experiments showed a change in the fractal dimension of the speckle fields as a function of both the loading and the structure of the microconcrete beams. A study was also conducted on the free-damped frequencies of the beams, which made it possible to plot fractal dimension vs. frequency for each loading cycle. These results suggest that fractal geometry is a promising tool for better understanding the mechanical behavior of structures.
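A standard way to estimate the mass fractal dimension of a binary (thresholded) speckle image is box counting. The sketch below checks the estimator on a Sierpinski carpet, whose exact dimension log 8 / log 3 ≈ 1.893 is known; the test pattern is an assumption used only to validate the estimator, not data from the study.

```python
import numpy as np

def sierpinski_carpet(level):
    """Exactly self-similar binary test pattern (3^level x 3^level)."""
    img = np.ones((1, 1), dtype=bool)
    for _ in range(level):
        z = np.zeros_like(img)
        img = np.block([[img, img, img],
                        [img, z, img],
                        [img, img, img]])
    return img

def box_counting_dimension(img, sizes):
    """Slope of log N(k) vs log(1/k), where N(k) counts occupied k x k boxes."""
    counts = []
    h, w = img.shape
    for k in sizes:
        c = 0
        for i in range(0, h, k):
            for j in range(0, w, k):
                if img[i:i + k, j:j + k].any():
                    c += 1
        counts.append(c)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

img = sierpinski_carpet(5)          # 243 x 243
D = box_counting_dimension(img, sizes=[1, 3, 9, 27, 81])
```

On real speckle imagery the same routine runs on a thresholded intensity field, and the fit is restricted to the range of box sizes where the log-log plot is linear.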

  18. A new simplifying approach to molecular geometry description: the vectorial bond-valence model.

    PubMed

    Harvey, Miguel Angel; Baggio, Sergio; Baggio, Ricardo

    2006-12-01

    A method to describe, analyze and even predict the coordination geometries of metal complexes is proposed, based on previously well-established concepts such as bond valence and valence-shell electron-pair repulsion (VSEPR). The idea behind the method is the generalization of the scalar bond-valence concept into a vector quantity, the bond-valence vector (BVV), with the innovation that multidentate ligands are represented by their resultant BVVs. Complex n-ligand coordination spheres (frequently indescribable at the atomic level) reduce to much simpler ones when analyzed in BVV space, with the bonus of better applicability of the VSEPR predictions. The geometrical implications of the BVV description are analyzed for the cases n = 2 and 3 (where n is the number of ligands), and the validity of its predictions is checked for a large number of metal complexes.
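The BVV construction can be sketched directly: each bond contributes a vector of length equal to its scalar bond valence, directed along the bond, and a multidentate ligand is represented by the resultant of its donors' vectors. The exponential bond-valence formula is standard, but the parameters `R0` and `b` and the donor coordinates below are illustrative placeholders, not tabulated values.

```python
import numpy as np

# Placeholder bond-valence parameters (tabulated per atom pair in practice).
R0, b = 1.8, 0.37

def bond_valence_vector(metal, donor):
    """Vector of length s = exp((R0 - d)/b) along the metal-donor bond."""
    metal, donor = np.asarray(metal, float), np.asarray(donor, float)
    r = donor - metal
    d = np.linalg.norm(r)
    s = np.exp((R0 - d) / b)   # scalar bond valence
    return s * r / d

# two donors of a hypothetical symmetric bidentate ligand
v1 = bond_valence_vector([0, 0, 0], [2.0, 0.5, 0.0])
v2 = bond_valence_vector([0, 0, 0], [2.0, -0.5, 0.0])
resultant = v1 + v2            # single BVV representing the whole ligand
```

By symmetry the resultant lies along the ligand's bisector, which is exactly the simplification that lets VSEPR-style reasoning apply in BVV space.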

  19. Propagation of computer virus both across the Internet and external computers: A complex-network approach

    NASA Astrophysics Data System (ADS)

    Gan, Chenquan; Yang, Xiaofan; Liu, Wanping; Zhu, Qingyi; Jin, Jian; He, Li

    2014-08-01

    Based on the assumption that external computers (particularly, infected external computers) are connected to the Internet, and by considering the influence of the Internet topology on computer virus spreading, this paper establishes a novel computer virus propagation model with a complex-network approach. This model possesses a unique (viral) equilibrium which is globally attractive. Some numerical simulations are also given to illustrate this result. Further study shows that the computers with higher node degrees are more susceptible to infection than those with lower node degrees. In this regard, some appropriate protective measures are suggested.
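The qualitative finding that high-degree nodes are more susceptible can be reproduced with a standard degree-based mean-field SIS iteration on a scale-free degree distribution. This is a generic textbook sketch under assumed rates, not the specific propagation model of the paper.

```python
import numpy as np

# Illustrative parameters: infection rate beta, cure rate delta, and a
# truncated power-law degree distribution p(k) ~ k^-3.
beta, delta = 0.3, 0.4
degrees = np.arange(1, 51)
pk = degrees ** -3.0
pk /= pk.sum()
mean_k = (degrees * pk).sum()

rho = np.full(len(degrees), 0.1)  # infection density per degree class
for _ in range(2000):
    # probability that a randomly followed link points to an infected node
    theta = (degrees * pk * rho).sum() / mean_k
    # equilibrium balance: infection vs cure in each degree class
    rho = beta * degrees * theta / (beta * degrees * theta + delta)
```

At the fixed point, rho_k = lam*k*theta / (1 + lam*k*theta) with lam = beta/delta, which is strictly increasing in k whenever the epidemic persists.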

  20. Critiquing: A Different Approach to Expert Computer Advice in Medicine

    PubMed Central

    Miller, Perry L.

    1984-01-01

    The traditional approach to computer-based advice in medicine has been to design systems which simulate a physician's decision process. This paper describes a different approach to computer advice in medicine: a critiquing approach. A critiquing system first asks how the physician is planning to manage his patient and then critiques that plan, discussing the advantages and disadvantages of the proposed approach, compared to other approaches which might be reasonable or preferred. Several critiquing systems are currently in different stages of implementation. The paper describes these systems and discusses the characteristics which make each domain suitable for critiquing. The critiquing approach may prove especially well-suited in domains where decisions involve a great deal of subjective judgement.

  1. Bending and twisting the embryonic heart: a computational model for c-looping based on realistic geometry

    PubMed Central

    Shi, Yunfei; Yao, Jiang; Young, Jonathan M.; Fee, Judy A.; Perucchio, Renato; Taber, Larry A.

    2014-01-01

    The morphogenetic process of cardiac looping transforms the straight heart tube into a curved tube that resembles the shape of the future four-chambered heart. Although great progress has been made in identifying the molecular and genetic factors involved in looping, the physical mechanisms that drive this process have remained poorly understood. Recent work, however, has shed new light on this complicated problem. After briefly reviewing the current state of knowledge, we propose a relatively comprehensive hypothesis for the mechanics of the first phase of looping, termed c-looping, as the straight heart tube deforms into a c-shaped tube. According to this hypothesis, differential hypertrophic growth in the myocardium supplies the main forces that cause the heart tube to bend ventrally, while regional growth and cytoskeletal contraction in the omphalomesenteric veins (primitive atria) and compressive loads exerted by the splanchnopleuric membrane drive rightward torsion. A computational model based on realistic embryonic heart geometry is used to test the physical plausibility of this hypothesis. The behavior of the model is in reasonable agreement with available experimental data from control and perturbed embryos, offering support for our hypothesis. The results also suggest, however, that several other mechanisms contribute secondarily to normal looping, and we speculate that these mechanisms play backup roles when looping is perturbed. Finally, some outstanding questions are discussed for future study. PMID:25161623

  2. Bending and twisting the embryonic heart: a computational model for c-looping based on realistic geometry.

    PubMed

    Shi, Yunfei; Yao, Jiang; Young, Jonathan M; Fee, Judy A; Perucchio, Renato; Taber, Larry A

    2014-01-01

    The morphogenetic process of cardiac looping transforms the straight heart tube into a curved tube that resembles the shape of the future four-chambered heart. Although great progress has been made in identifying the molecular and genetic factors involved in looping, the physical mechanisms that drive this process have remained poorly understood. Recent work, however, has shed new light on this complicated problem. After briefly reviewing the current state of knowledge, we propose a relatively comprehensive hypothesis for the mechanics of the first phase of looping, termed c-looping, as the straight heart tube deforms into a c-shaped tube. According to this hypothesis, differential hypertrophic growth in the myocardium supplies the main forces that cause the heart tube to bend ventrally, while regional growth and cytoskeletal contraction in the omphalomesenteric veins (primitive atria) and compressive loads exerted by the splanchnopleuric membrane drive rightward torsion. A computational model based on realistic embryonic heart geometry is used to test the physical plausibility of this hypothesis. The behavior of the model is in reasonable agreement with available experimental data from control and perturbed embryos, offering support for our hypothesis. The results also suggest, however, that several other mechanisms contribute secondarily to normal looping, and we speculate that these mechanisms play backup roles when looping is perturbed. Finally, some outstanding questions are discussed for future study.

  3. A computational strategy for geometry optimization of ionic and covalent excited states, applied to butadiene and hexatriene.

    PubMed

    Boggio-Pasqua, Martial; Bearpark, Michael J; Klene, Michael; Robb, Michael A

    2004-05-01

    We propose a computational strategy that enables ionic and covalent ππ* excited states to be described in a balanced way. This strategy depends upon (1) the restricted active space self-consistent field method, in which the dynamic correlation between core σ and valence π electrons can be described by adding single σ excitations to all π configurations, and (2) the use of a new conventional one-electron basis set specifically designed for the description of valence ionic states. Together, these provide excitation energies comparable with more accurate and expensive ab initio methods, e.g., multiconfigurational second-order perturbation theory and multireference configuration interaction. Moreover, our strategy also allows full optimization of excited-state geometries, including conical intersections between ionic and covalent excited states, to be routinely carried out, thanks to the availability of analytical energy gradients. The prototype systems studied are the cis and trans isomers of butadiene and hexatriene, for which the ground 1A(1/g), lower-lying dark (i.e., symmetry-forbidden covalent) 2A(1/g), and spectroscopic 1B(2/u) (valence ionic) states were investigated. Copyright 2004 American Institute of Physics

  4. On the Geometry of the Berry-Robbins Approach to Spin-Statistics

    NASA Astrophysics Data System (ADS)

    Papadopoulos, Nikolaos; Reyes-Lega, Andrés F.

    2010-07-01

    Within a geometric and algebraic framework, the structures which are related to the spin-statistics connection are discussed. A comparison with the Berry-Robbins approach is made. The underlying geometric structure constitutes an additional support for this approach. In our work, a geometric approach to quantum indistinguishability is introduced which allows the treatment of single-valuedness of wave functions in a global, model-independent way.

  5. Spatio-temporal EEG source localization using a three-dimensional subspace FINE approach in a realistic geometry inhomogeneous head model.

    PubMed

    Ding, Lei; He, Bin

    2006-09-01

    The subspace source localization approach, i.e., first principle vectors (FINE), is able to enhance the spatial resolvability and localization accuracy for closely-spaced neural sources from EEG and MEG measurements. Computer simulations were conducted to evaluate the performance of the FINE algorithm in an inhomogeneous realistic geometry head model under a variety of conditions. The source localization abilities of FINE were examined at different cortical regions and at different depths. The present computer simulation results indicate that FINE has enhanced source localization capability, as compared with MUSIC and RAP-MUSIC, when sources are closely spaced, highly noise-contaminated, or inter-correlated. The source localization accuracy of FINE is better, for closely-spaced sources, than MUSIC at various noise levels, i.e., signal-to-noise ratio (SNR) from 6 dB to 16 dB, and RAP-MUSIC at relatively low noise levels, i.e., 6 dB to 12 dB. The FINE approach has been further applied to localize brain sources of motor potentials, obtained during the finger tapping tasks in a human subject. The experimental results suggest that the detailed neural activity distribution could be revealed by FINE. The present study suggests that FINE provides enhanced performance in localizing multiple closely spaced, and inter-correlated sources under low SNR, and may become an important alternative to brain source localization from EEG or MEG.
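The subspace idea that FINE refines can be illustrated with classic MUSIC, against which it is benchmarked above: after an eigendecomposition of the data covariance, source locations appear as peaks of a noise-subspace pseudospectrum. The uniform linear array, angles, and noise level below are illustrative assumptions, not the paper's EEG head model.

```python
import numpy as np

rng = np.random.default_rng(1)
m, snapshots = 8, 200
true_deg = [10.0, 20.0]            # two closely spaced sources

def steering(theta_rad):
    # half-wavelength element spacing
    return np.exp(1j * np.pi * np.arange(m) * np.sin(theta_rad))

A = np.column_stack([steering(np.deg2rad(t)) for t in true_deg])
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
noise = 0.1 * (rng.standard_normal((m, snapshots))
               + 1j * rng.standard_normal((m, snapshots)))
X = A @ S + noise
R = X @ X.conj().T / snapshots      # sample covariance

_, eigvecs = np.linalg.eigh(R)      # eigenvalues in ascending order
En = eigvecs[:, : m - 2]            # noise subspace (2 sources assumed)
grid = np.deg2rad(np.arange(0.0, 40.0, 0.25))
pseudo = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
                   for t in grid])  # MUSIC pseudospectrum

lower = grid < np.deg2rad(15.0)     # search each source in its own half
est_deg = sorted([np.rad2deg(grid[lower][pseudo[lower].argmax()]),
                  np.rad2deg(grid[~lower][pseudo[~lower].argmax()])])
```

FINE's contribution, per the abstract, is to replace the full noise subspace with a smaller set of vectors chosen per location, improving resolvability at low SNR and for correlated sources.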

  6. Spatio-temporal EEG Source Localization Using a Three-dimensional Subspace FINE Approach in a Realistic Geometry Inhomogeneous Head Model

    PubMed Central

    Ding, Lei; He, Bin

    2007-01-01

    The subspace source localization approach, i.e. first principle vectors (FINE), is able to enhance the spatial resolvability and localization accuracy for closely-spaced neural sources from EEG and MEG measurements. Computer simulations were conducted to evaluate the performance of the FINE algorithm in an inhomogeneous realistic geometry head model under a variety of conditions. The source localization abilities of FINE were examined at different cortical regions and at different depths. The present computer simulation results indicate that FINE has enhanced source localization capability, as compared with MUSIC and RAP-MUSIC, when sources are closely spaced, highly noise-contaminated, or inter-correlated. The source localization accuracy of FINE is better, for closely-spaced sources, than MUSIC at various noise levels, i.e. SNR from 6 dB to 16 dB, and RAP-MUSIC at relatively low noise levels, i.e. 6 dB to 12 dB. The FINE approach has been further applied to localize brain sources of motor potentials, obtained during the finger tapping tasks in a human subject. The experimental results suggest that the detailed neural activity distribution could be revealed by FINE. The present study suggests that FINE provides enhanced performance in localizing multiple closely-spaced, and inter-correlated sources under low signal-to-noise ratio, and may become an important alternative to brain source localization from EEG or MEG. PMID:16941829

  7. A scalable and accurate method for classifying protein-ligand binding geometries using a MapReduce approach.

    PubMed

    Estrada, T; Zhang, B; Cicotti, P; Armen, R S; Taufer, M

    2012-07-01

    We present a scalable and accurate method for classifying protein-ligand binding geometries in molecular docking. Our method is a three-step process: the first step encodes the geometry of a three-dimensional (3D) ligand conformation into a single 3D point in the space; the second step builds an octree by assigning an octant identifier to every single point in the space under consideration; and the third step performs an octree-based clustering on the reduced conformation space and identifies the most dense octant. We adapt our method for MapReduce and implement it in Hadoop. The load-balancing, fault-tolerance, and scalability in MapReduce allow screening of very large conformation spaces not approachable with traditional clustering methods. We analyze results for docking trials for 23 protein-ligand complexes for HIV protease, 21 protein-ligand complexes for Trypsin, and 12 protein-ligand complexes for P38alpha kinase. We also analyze cross docking trials for 24 ligands, each docking into 24 protein conformations of the HIV protease, and receptor ensemble docking trials for 24 ligands, each docking in a pool of HIV protease receptors. Our method demonstrates significant improvement over energy-only scoring for the accurate identification of native ligand geometries in all these docking assessments. The advantages of our clustering approach make it attractive for complex applications in real-world drug design efforts. We demonstrate that our method is particularly useful for clustering docking results using a minimal ensemble of representative protein conformational states (receptor ensemble docking), which is now a common strategy to address protein flexibility in molecular docking.
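The octant-identifier step of the method can be sketched as follows: each 3D point (one encoded ligand conformation) receives an identifier built from three bits per subdivision level, and the densest octant is found by counting identifiers. The bounding box, depth, and synthetic point cloud are illustrative assumptions; the MapReduce distribution of this counting step is omitted.

```python
import numpy as np
from collections import Counter

def octant_id(point, lo, hi, depth):
    """Identifier of the octant containing `point` after `depth` subdivisions."""
    lo, hi = np.array(lo, float), np.array(hi, float)
    oid = 0
    for _ in range(depth):
        mid = (lo + hi) / 2.0
        octant = 0
        for axis in range(3):       # one bit per axis
            if point[axis] >= mid[axis]:
                octant |= 1 << axis
                lo[axis] = mid[axis]
            else:
                hi[axis] = mid[axis]
        oid = (oid << 3) | octant
    return oid

rng = np.random.default_rng(0)
# dense cluster near (0.2, 0.2, 0.2) plus uniform background noise
points = np.vstack([rng.normal(0.2, 0.02, size=(200, 3)),
                    rng.uniform(0, 1, size=(50, 3))])
ids = [octant_id(p, [0, 0, 0], [1, 1, 1], depth=3) for p in points]
densest, count = Counter(ids).most_common(1)[0]
```

In a MapReduce setting, the map phase emits `(octant_id, 1)` pairs and the reduce phase sums them, so the densest-octant query parallelizes trivially.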

  8. Comparison of kinetic and extended magnetohydrodynamics computational models for the linear ion temperature gradient instability in slab geometry

    NASA Astrophysics Data System (ADS)

    Schnack, D. D.; Cheng, J.; Barnes, D. C.; Parker, S. E.

    2013-06-01

    We perform linear stability studies of the ion temperature gradient (ITG) instability in unsheared slab geometry using kinetic and extended magnetohydrodynamics (MHD) models, in the regime k∥/k⊥ ≪ 1. The ITG is a parallel (to B) sound wave that may be destabilized by finite ion Larmor radius (FLR) effects in the presence of a gradient in the equilibrium ion temperature. The ITG is stable in both ideal and resistive MHD; for a given temperature scale length LTi0, instability requires that either k⊥ρi or ρi/LTi0 be sufficiently large. Kinetic models capture FLR effects to all orders in either parameter. In the extended MHD model, these effects are captured only to lowest order by means of the Braginskii ion gyro-viscous stress tensor and the ion diamagnetic heat flux. We present the linear electrostatic dispersion relations for the ITG for both kinetic Vlasov and extended MHD (two-fluid) models in the local approximation. In the low frequency fluid regime, these reduce to the same cubic equation for the complex eigenvalue ω = ωr + iγ. An explicit solution is derived for the growth rate and real frequency in this regime. These are found to depend on a single non-dimensional parameter. We also compute the eigenvalues and the eigenfunctions with the extended MHD code NIMROD, and a hybrid kinetic δf code that assumes six-dimensional Vlasov ions and isothermal fluid electrons, as functions of k⊥ρi and ρi/LTi0 using a spatially dependent equilibrium. These solutions are compared with each other, and with the predictions of the local kinetic and fluid dispersion relations. Kinetic and fluid calculations agree well at and near the marginal stability point, but diverge as k⊥ρi or ρi/LTi0 increases. There is good qualitative agreement between the models for the shape of the unstable global eigenfunction for LTi0/ρi = 30 and 20. The results quantify how far fluid calculations can be extended accurately into the kinetic regime. We conclude that for the linear ITG
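As a generic illustration of extracting a growth rate from a cubic dispersion relation: in the strong-gradient ("hydrodynamic") slab limit the ITG cubic is often quoted in the textbook form ω³ = −Λ for a single real parameter Λ > 0 (sign conventions vary by reference; this is not the specific cubic derived in the paper). The growth rate is then the largest imaginary part among the complex roots.

```python
import numpy as np

def itg_growth_rate(lam):
    """Growth rate gamma = max Im(w) over the roots of w**3 + lam = 0."""
    roots = np.roots([1.0, 0.0, 0.0, lam])
    return roots.imag.max()

gamma = itg_growth_rate(1.0)   # roots of w^3 = -1: -1 and 1/2 +/- i*sqrt(3)/2
```

For this form, gamma = (sqrt(3)/2) * lam**(1/3), reproducing the familiar one-third-power scaling of the hydrodynamic ITG growth rate with the drive strength.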

  9. A Modular Approach to Building Adult Computing Competencies: The Desktop Computer Series.

    ERIC Educational Resources Information Center

    Joseph, John J.

    The Fox Valley Technical Institute's approach to teaching adults about computers is based on three underlying premises: there is a widespread need for adult education related to desktop computers; the needs are not the same for everyone; and to be effective, a program that addresses these needs must be flexible, pertinent, and current. (Desktop is…

  10. Lithologically controlled strength variation and the Himalayan megathrust geometry: an analogue modeling approach

    NASA Astrophysics Data System (ADS)

    Ghosh, Subhajit; Das, Animesh; Bose, Santanu; Mandal, Nibir

    2017-04-01

    A moment magnitude (Mw) 7.8 earthquake, followed by a Mw 7.3 aftershock, hit the Gorkha region near Kathmandu, Nepal on April 25, 2015. The rupture propagated eastward for about 140 km and caused thousands of deaths. The focal mechanism of the Gorkha earthquake shows thrust sense over the steeply dipping mid-crustal ramp on the basal décollement known as the Main Himalayan Thrust (MHT). The MHT is the largest and fastest-slipping continental megathrust, over which the southward-tapering Himalayan thrust wedge, similar to an accretionary wedge, is moving. The MHT ramps up to the surface beneath the Siwalik group of rocks as the Main Frontal Thrust (MFT). Below the MFT the basal décollement is flat until it reaches the mid-crustal ramp (~20°) below the Himalayan klippen, beyond which it becomes flat again. This geometry of the décollement is consistent with the balanced cross sections, microseismic data, magnetotelluric images, the INDEPTH seismic reflection profile, and the present-day stress distribution, and fits well with the prominent topographic break (physiographic transition) in the Lesser Himalaya. Lithologically stratified sedimentary sequences in the upper crust are mechanically heterogeneous. It has long been known that the mechanical properties of the stratigraphic succession influence the resultant structural architecture of fold and thrust belts. A rheologically weak stratigraphic horizon generally contains the basal décollement due to its relatively low frictional strength. Hence, any vertical or lateral change in frictional property may control the effective strength and the positions of the décollement in space. In the present study, we used non-cohesive sand and mica dust layers as analogue materials for simulating the strong and weak layers, respectively, in the sandbox apparatus. Experimental results with relatively high basal friction (μ=0.46) show that such a weak horizon at a shallow depth perturbs the sequential thrust progression, and forces a

  11. Higher spin approaches to quantum field theory and (pseudo)-Riemannian geometries

    NASA Astrophysics Data System (ADS)

    Hallowell, Karl Evan

    In this thesis, we study a number of higher spin quantum field theories and some of their algebraic and geometric consequences. These theories apply mostly either over constant curvature or more generally symmetric pseudo-Riemannian manifolds. The first part of this dissertation covers a superalgebra coming from a family of particle models over symmetric spaces. These theories are novel in that the symmetries of the (super)algebra osp(Q|2p) are larger and more elaborate than traditional symmetries. We construct useful (super)algebras related to and generalizing old work by Lichnerowicz and describe their role in developing the geometry of massless models with osp(Q|2p) symmetry. The result is two practical applications of these (super)algebras: (1) a much more concise description of a family of higher spin quantum field theories; and (2) an interesting algebraic probe of underlying background geometries. We also consider massive models over constant curvature spaces. We use a radial dimensional reduction process which converts massless models into massive ones over a lower dimensional space. In our case, we take from the family of theories above the particular free, massless model over flat space associated with sp(2,R) and derive a massive model. In the process, we develop a novel associative algebra, which is a deformation of the original differential operator algebra associated with the sp(2,R) model. This algebra is interesting in its own right since its operators realize the representation structure of the sp(2,R) group. The massive model also has implications for a sequence of unusual, "partially massless" theories. The derivation illuminates how reduced degrees of freedom become manifest in these particular models. Finally, we study a Yang-Mills model using an on-shell Poincare Yang-Mills twist of the Maxwell complex along with a non-minimal coupling. This is a special, higher spin case of a quantum field theory called a Yang-Mills detour complex

  12. The method of characteristics and computational fluid dynamics applied to the prediction of underexpanded jet flows in annular geometry

    NASA Astrophysics Data System (ADS)

    Kim, Sangwon

    2005-11-01

    High pressure (3.4 MPa) injection from a shroud valve can improve natural gas engine efficiency by enhancing fuel-air mixing. Since the fuel jet issuing from the shroud valve has a nearly annular jet flow configuration, it is necessary to analyze the annular jet flow to understand the fuel jet behavior in the mixing process and to improve the shroud design for better mixing. The method of characteristics (MOC) was used as the primary modeling algorithm in this work, and Computational Fluid Dynamics (CFD) was used primarily to validate the MOC results. A consistent process for dealing with the coalescence of compression characteristic lines into a shock wave during the MOC computation was developed. By applying the shock polar in the pressure-flow angle plane to the incident shock wave of an axisymmetric underexpanded jet, and by comparison with the triple point location found in experimental results, it was found that, for static pressure ratios of 2-50, the triple point of the jet was located at the point where the flow angle after the incident shock became -5° relative to the axis, and that this point was situated between the von Neumann and detachment criteria on the incident shock. MOC computations of the jet flow with annular geometry were performed for pressure ratios of 10 and 20 with r_annulus = 10-50 units and Δr = 2 units. In this pressure ratio range, the MOC results did not predict a Mach disc in the core flow of the annular jet, but did indicate the formation of a Mach disc where the jet meets the axis of symmetry. The MOC results display the annular jet configurations clearly. Three types of nozzles for application to gas injectors (convergent-divergent nozzle, conical nozzle, and aerospike nozzle) were designed using the MOC and evaluated in on- and off-design conditions using CFD. The average axial momentum per unit mass was improved by 17 to 24% and the average kinetic energy per unit fuel mass was improved by 30 to 80% compared with a standard

  13. Antisolvent crystallization approach to construction of CuI superstructures with defined geometries.

    PubMed

    Kozhummal, Rajeevan; Yang, Yang; Güder, Firat; Küçükbayrak, Umut M; Zacharias, Margit

    2013-03-26

    A facile high-yield production of cuprous iodide (CuI) superstructures is reported by antisolvent crystallization using acetonitrile/water as a solvent/antisolvent couple under ambient conditions. In the presence of trace water, the metastable water droplets act as templates to induce the precipitation of hollow spherical CuI superstructures consisting of orderly aligned building blocks after drop coating. With water in excess in the mixed solution, instant precipitation of random CuI aggregates takes place due to rapid crystal growth via ion-by-ion attachment induced by a strong antisolvent effect. However, this uncontrolled process can be modified by adding the polymer polyvinylpyrrolidone (PVP) to the water to restrict the size of the initially formed CuI crystal nuclei through the effective coordination of PVP. As a result, CuI superstructures with a cuboid geometry are constructed by gradual self-assembly of the small CuI crystals via oriented attachment. The precipitated CuI superstructures have been used as effective adsorbents for removing organic dyes from water, owing to their mesocrystal nature. In addition, the CuI superstructures have been applied either as a self-sacrificial template or only as a structuring template for the flexible design of other porous materials such as CuO and TiO2. This system provides an ideal platform to investigate superstructure formation enforced by antisolvent crystallization both with and without organic additives.

  14. Aircraft Engine Noise Scattering by Fuselage and Wings: A Computational Approach

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Stanescu, D.; Hussaini, M. Y.

    2003-01-01

    The paper presents a time-domain method for computation of sound radiation from aircraft engine sources to the far field. The effects of non-uniform flow around the aircraft and scattering of sound by fuselage and wings are accounted for in the formulation. The approach is based on the discretization of the inviscid flow equations through a collocation form of the discontinuous Galerkin spectral element method. An isoparametric representation of the underlying geometry is used in order to take full advantage of the spectral accuracy of the method. Large-scale computations are made possible by a parallel implementation based on message passing. Results obtained for radiation from an axisymmetric nacelle alone are compared with those obtained when the same nacelle is installed in a generic configuration, with and without a wing.

  15. Aircraft Engine Noise Scattering By Fuselage and Wings: A Computational Approach

    NASA Technical Reports Server (NTRS)

    Stanescu, D.; Hussaini, M. Y.; Farassat, F.

    2003-01-01

    The paper presents a time-domain method for computation of sound radiation from aircraft engine sources to the far-field. The effects of nonuniform flow around the aircraft and scattering of sound by fuselage and wings are accounted for in the formulation. The approach is based on the discretization of the inviscid flow equations through a collocation form of the Discontinuous Galerkin spectral element method. An isoparametric representation of the underlying geometry is used in order to take full advantage of the spectral accuracy of the method. Large-scale computations are made possible by a parallel implementation based on message passing. Results obtained for radiation from an axisymmetric nacelle alone are compared with those obtained when the same nacelle is installed in a generic configuration, with and without a wing.

  16. Aircraft Engine Noise Scattering by Fuselage and Wings: A Computational Approach

    NASA Technical Reports Server (NTRS)

    Stanescu, D.; Hussaini, M. Y.; Farassat, F.

    2003-01-01

    The paper presents a time-domain method for computation of sound radiation from aircraft engine sources to the far-field. The effects of nonuniform flow around the aircraft and scattering of sound by fuselage and wings are accounted for in the formulation. The approach is based on the discretization of the inviscid flow equations through a collocation form of the Discontinuous Galerkin spectral element method. An isoparametric representation of the underlying geometry is used in order to take full advantage of the spectral accuracy of the method. Large-scale computations are made possible by a parallel implementation based on message passing. Results obtained for radiation from an axisymmetric nacelle alone are compared with those obtained when the same nacelle is installed in a generic configuration, with and without a wing.

  17. Design of specially adapted reactive coordinates to economically compute potential and kinetic energy operators including geometry relaxation

    NASA Astrophysics Data System (ADS)

    Thallmair, Sebastian; Roos, Matthias K.; de Vivie-Riedle, Regina

    2016-06-01

    Quantum dynamics simulations require prior knowledge of the potential energy surface as well as the kinetic energy operator. Typically, they are evaluated in a low-dimensional subspace of the full configuration space of the molecule, since its dimensionality increases in proportion to the number of atoms. This entails the challenge of finding the most suitable subspace. We present an approach to design specially adapted reactive coordinates spanning this subspace. In addition to the essential geometric changes, these coordinates take into account the relaxation of the non-reactive coordinates without the necessity of performing geometry optimizations at each grid point. The method is demonstrated for an ultrafast photoinduced bond cleavage in a commonly used organic precursor for the generation of electrophiles. The potential energy surfaces for the reaction as well as the Wilson G-matrix as part of the kinetic energy operator are shown for a complex chemical reaction, both including the relaxation of the non-reactive coordinates on equal footing. A microscopic interpretation of the shape of the G-matrix elements allows one to analyze the impact of the non-reactive coordinates on the kinetic energy operator. Additionally, we compare quantum dynamics simulations with and without the relaxation of the non-reactive coordinates included in the kinetic energy operator to demonstrate its influence.
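For reference, the Wilson G-matrix mentioned here has the standard form built from the coordinate Jacobian and the atomic masses; schematically, the kinetic energy operator it enters reads (this sketch omits the volume-element/pseudo-potential corrections that a full curvilinear operator carries):

```latex
G_{rs}(\mathbf{q}) \;=\; \sum_{i=1}^{3N} \frac{1}{m_i}\,
  \frac{\partial q_r}{\partial x_i}\,\frac{\partial q_s}{\partial x_i},
\qquad
\hat{T} \;\approx\; -\frac{\hbar^2}{2} \sum_{r,s}
  \frac{\partial}{\partial q_r}\, G_{rs}(\mathbf{q})\, \frac{\partial}{\partial q_s}.
```

Here the q_r are the reactive coordinates, the x_i the Cartesian coordinates, and the m_i the atomic masses; relaxation of the non-reactive coordinates changes the Jacobian ∂q_r/∂x_i and hence the G-matrix elements.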

  18. Design of specially adapted reactive coordinates to economically compute potential and kinetic energy operators including geometry relaxation.

    PubMed

    Thallmair, Sebastian; Roos, Matthias K; de Vivie-Riedle, Regina

    2016-06-21

    Quantum dynamics simulations require prior knowledge of the potential energy surface as well as the kinetic energy operator. Typically, they are evaluated in a low-dimensional subspace of the full configuration space of the molecule, since its dimensionality increases in proportion to the number of atoms. This entails the challenge of finding the most suitable subspace. We present an approach to design specially adapted reactive coordinates spanning this subspace. In addition to the essential geometric changes, these coordinates take into account the relaxation of the non-reactive coordinates without the necessity of performing geometry optimizations at each grid point. The method is demonstrated for an ultrafast photoinduced bond cleavage in a commonly used organic precursor for the generation of electrophiles. The potential energy surfaces for the reaction as well as the Wilson G-matrix as part of the kinetic energy operator are shown for a complex chemical reaction, both including the relaxation of the non-reactive coordinates on equal footing. A microscopic interpretation of the shape of the G-matrix elements allows one to analyze the impact of the non-reactive coordinates on the kinetic energy operator. Additionally, we compare quantum dynamics simulations with and without the relaxation of the non-reactive coordinates included in the kinetic energy operator to demonstrate its influence.

  19. An approach to computing direction relations between separated object groups

    NASA Astrophysics Data System (ADS)

    Yan, H.; Wang, Z.; Li, J.

    2013-09-01

    Direction relations between object groups play an important role in qualitative spatial reasoning, spatial computation and spatial recognition. However, none of the existing models can be used to compute direction relations between object groups. To fill this gap, an approach to computing direction relations between separated object groups is proposed in this paper, which is theoretically based on Gestalt principles and the idea of multi-directions. The approach first triangulates the two object groups, and then it constructs the Voronoi diagram between the two groups using the triangular network. After this, the normal of each Voronoi edge is calculated, and the quantitative expression of the direction relations is constructed. Finally, the quantitative direction relations are transformed into qualitative ones. The psychological experiments show that the proposed approach can obtain direction relations both between two single objects and between two object groups, and the results are correct from the point of view of spatial cognition.
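The pipeline in this abstract (triangulate, build the Voronoi diagram, take edge normals, then map quantitative directions to qualitative ones) can be illustrated in miniature. The sketch below skips the triangulation/Voronoi stages and reduces each group to its centroid before the quantitative-to-qualitative mapping; the function name and the eight-sector scheme are illustrative assumptions, not the authors' exact construction:

```python
import math

def qualitative_direction(group_a, group_b):
    """Map the angle from group A's centroid to group B's centroid onto one
    of eight qualitative direction classes (a drastic simplification of the
    paper's Voronoi-edge-normal construction)."""
    ax = sum(p[0] for p in group_a) / len(group_a)
    ay = sum(p[1] for p in group_a) / len(group_a)
    bx = sum(p[0] for p in group_b) / len(group_b)
    by = sum(p[1] for p in group_b) / len(group_b)
    # quantitative direction: angle of the centroid-to-centroid vector
    angle = math.degrees(math.atan2(by - ay, bx - ax)) % 360.0
    # qualitative direction: 45-degree sectors centered on the 8 compass points
    labels = ["east", "northeast", "north", "northwest",
              "west", "southwest", "south", "southeast"]
    return labels[int(((angle + 22.5) % 360.0) // 45.0)]
```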

  20. An approach to computing direction relations between separated object groups

    NASA Astrophysics Data System (ADS)

    Yan, H.; Wang, Z.; Li, J.

    2013-06-01

    Direction relations between object groups play an important role in qualitative spatial reasoning, spatial computation and spatial recognition. However, none of the existing models can be used to compute direction relations between object groups. To fill this gap, an approach to computing direction relations between separated object groups is proposed in this paper, which is theoretically based on Gestalt principles and the idea of multi-directions. The approach first triangulates the two object groups; then it constructs the Voronoi diagram between the two groups using the triangular network; after this, the normal of each Voronoi edge is calculated, and the quantitative expression of the direction relations is constructed; finally, the quantitative direction relations are transformed into qualitative ones. The psychological experiments show that the proposed approach can obtain direction relations both between two single objects and between two object groups, and the results are correct from the point of view of spatial cognition.

  1. A tale of three bio-inspired computational approaches

    NASA Astrophysics Data System (ADS)

    Schaffer, J. David

    2014-05-01

    I will provide a high-level walk-through of three computational approaches derived from Nature. First, evolutionary computation implements what we may call the "mother of all adaptive processes." Some variants on the basic algorithms will be sketched, and some lessons I have gleaned from three decades of working with EC will be covered. Next come neural networks: computational approaches, based upon the only known existing example of intelligence, that have long been studied as possible ways to make "thinking machines," an old dream of mankind. Then I give a little overview of attempts to combine these two approaches, which some hope will allow us to evolve machines we could never hand-craft. Finally, I will touch on artificial immune systems, Nature's highly sophisticated defense mechanism, which has emerged in two major stages: the innate and the adaptive immune systems. This technology is finding applications in the cyber security world.

  2. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1992-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  3. Sensing and perception: Connectionist approaches to subcognitive computing

    NASA Technical Reports Server (NTRS)

    Skrrypek, J.

    1987-01-01

    New approaches to machine sensing and perception are presented. The motivation for cross-disciplinary studies of perception in terms of AI and neuroscience is suggested. The question of computing architecture granularity as related to the global/local computation underlying perceptual function is considered, and examples of two environments are given. Finally, examples of using one of these environments, UCLA PUNNS, to study neural architectures for visual function are presented.

  4. Direct approach to Gaussian measurement based quantum computation

    NASA Astrophysics Data System (ADS)

    Ferrini, G.; Roslund, J.; Arzani, F.; Fabre, C.; Treps, N.

    2016-12-01

    In this work we introduce an original scheme for measurement based quantum computation in continuous variables. Our approach does not necessarily rely on the use of ancillary cluster states to achieve its aim, but rather on the detection of a resource state in a suitable mode basis followed by digital postprocessing, and involves an optimization of the adjustable experimental parameters. After introducing the general method, we present some examples of application to simple specific computations.

  5. A distributed computing approach to mission operations support. [for spacecraft

    NASA Technical Reports Server (NTRS)

    Larsen, R. L.

    1975-01-01

    Computing support for mission operations includes orbit determination, attitude processing, maneuver computation, resource scheduling, etc. The large-scale, third-generation distributed computer network discussed is capable of fulfilling these dynamic requirements. It is shown that distribution of resources and control leads to increased reliability and exhibits potential for incremental growth. Through functional specialization, a distributed system may be tuned to very specific operational requirements. Fundamental to the approach is the notion of process-to-process communication, which is effected through a high-bandwidth communications network. Both resource-sharing and load-sharing may be realized in the system.

  6. Computational molecular biology approaches to ligand-target interactions

    PubMed Central

    Lupieri, Paola; Nguyen, Chuong Ha Hung; Bafghi, Zhaleh Ghaemi; Giorgetti, Alejandro; Carloni, Paolo

    2009-01-01

    Binding of small molecules to their targets triggers complex pathways. Computational approaches are key for predicting the molecular events involved in such cascades. Here we review current efforts at characterizing the molecular determinants in the largest membrane-bound receptor family, the G-protein-coupled receptors (GPCRs). We focus on odorant receptors, which constitute more than half of all GPCRs. The work presented in this review uncovers structural and energetic aspects of components of the cellular cascade. Finally, a computational approach in the context of radioactive boron-based antitumoral therapies is briefly described. PMID:20119480

  7. Fractal geometry as a new approach for proving nanosimilarity: a reflection note.

    PubMed

    Demetzos, Costas; Pippa, Natassa

    2015-04-10

    Nanosimilars are considered new medicinal outcomes combining a generic drug with the nanocarrier as an innovative excipient, to be evaluated as final products. Concerning the evaluation process, they belong to the grey area between generic drugs and biosimilar medicinal products. Generic drugs are well documented and a huge number of them are on the market, effectively replacing off-patent drugs. The scientific approach for releasing them to the market is based on bioequivalence studies, which are well documented and accepted by the regulatory agencies. On the other hand, the structural complexity of biological/biotechnology-derived products demands a new approach for the approval process, taking into consideration that bioequivalence studies are not considered sufficient, as they are for generic drugs, and new clinical trials are needed to support the product's approval for the market. Likewise, owing to the technological complexity of nanomedicines, the approaches for proving statistical identity (for generics) or similarity (for biosimilars) with the prototype are not considered effective for nanosimilar products. The aim of this note is to propose a complementary approach, based on fractal analysis, that can provide realistic evidence concerning nanosimilarity. This approach fits well with the structural complexity of nanomedicines and eases the difficulties of proving similarity between off-patent and nanosimilar products. Fractal analysis could be considered the approach that completely characterizes the physicochemical/morphological characteristics of nanosimilar products and could be proposed as a starting point for a deeper discussion of nanosimilarity.
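The note does not specify which fractal descriptor would be used, but box counting is a common way to reduce morphology data to a single dimension that two products could be compared on. A hypothetical sketch (the function name and the (box size, count) input format are assumptions for illustration):

```python
import math

def fractal_dimension(scales_counts):
    """Estimate a box-counting fractal dimension D from the slope of
    log N(eps) versus log(1/eps), via an ordinary least-squares fit.
    Input: a list of (box size eps, occupied box count N) pairs."""
    xs = [math.log(1.0 / eps) for eps, _ in scales_counts]
    ys = [math.log(n) for _, n in scales_counts]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # slope of the log-log regression line = fractal dimension estimate
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```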

  8. The dependence of computed tomography number to relative electron density conversion on phantom geometry and its impact on planned dose.

    PubMed

    Inness, Emma K; Moutrie, Vaughan; Charles, Paul H

    2014-06-01

    A computed tomography number to relative electron density (CT-RED) calibration is performed when commissioning a radiotherapy CT scanner by imaging a calibration phantom with inserts of specified RED and recording the CT number displayed. In this work, CT-RED calibrations were generated using several commercially available phantoms to observe the effect of phantom geometry on conversion to electron density and, ultimately, on the dose calculation in a treatment planning system. Using an anthropomorphic phantom as a gold standard, the CT number of a material was found to depend strongly on the amount and type of scattering material surrounding the volume of interest, with the largest variation observed for the highest density material tested, cortical bone. Cortical bone gave a maximum CT number difference of 1,110 when a cylindrical insert of diameter 28 mm scanned free in air was compared to the same material in the form of a 30 × 30 cm² slab. The effect of using each CT-RED calibration on planned dose to a patient was quantified using a commercially available treatment planning system. When all calibrations were compared to the anthropomorphic calibration, the largest percentage dose difference was 4.2%, which occurred when the CT-RED calibration curve was acquired with the heterogeneity inserts removed from the phantom and scanned free in air. The maximum dose difference observed between two dedicated CT-RED phantoms was ±2.1%. A phantom that is to be used for CT-RED calibrations must have sufficient water-equivalent scattering material surrounding the heterogeneous objects that are used for calibration.
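The commissioning step described here amounts to a piecewise-linear lookup from CT number to RED built from the measured insert points. A minimal sketch of such a conversion; the function name and the calibration values used in testing are illustrative, not taken from this paper:

```python
from bisect import bisect_right

def ct_to_red(ct_number, calibration):
    """Convert a CT number to relative electron density by piecewise-linear
    interpolation of a measured calibration table [(CT, RED), ...] sorted
    by CT number. Values outside the table are clamped to the end points."""
    cts = [c for c, _ in calibration]
    reds = [r for _, r in calibration]
    if ct_number <= cts[0]:
        return reds[0]
    if ct_number >= cts[-1]:
        return reds[-1]
    i = bisect_right(cts, ct_number)           # first point above ct_number
    frac = (ct_number - cts[i - 1]) / (cts[i] - cts[i - 1])
    return reds[i - 1] + frac * (reds[i] - reds[i - 1])
```

The work above shows why the table itself matters: the same insert material can yield CT numbers differing by over a thousand depending on the surrounding scatter geometry, shifting every interpolated RED downstream.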

  9. Ring polymer chains confined in a slit geometry of two parallel walls: the massive field theory approach

    NASA Astrophysics Data System (ADS)

    Usatenko, Z.; Halun, J.

    2017-01-01

    The investigation of a dilute solution of phantom ideal ring polymer chains confined in a slit geometry of two parallel repulsive walls, two inert walls, and the mixed case of one inert and one repulsive wall was performed. Taking into account the well known correspondence between the field-theoretical φ⁴ O(n)-vector model in the limit n → 0 and the behaviour of long flexible polymer chains in a good solvent, the investigation of a dilute solution of long flexible ring polymer chains with the excluded volume interaction (EVI) confined in a slit geometry of two parallel repulsive walls was performed in the framework of the massive field theory approach at fixed space dimension d = 3 up to one-loop order. For all the above-mentioned cases, the corresponding depletion interaction potentials, the depletion forces, and the forces that the phantom ideal ring polymers and the ring polymers with the EVI exert on the walls were calculated. The obtained results indicate that phantom ideal ring polymer chains and ring polymer chains with the EVI, owing to the complexity of the chain topology and for entropic reasons, demonstrate completely different behaviour in confined geometries than linear polymer chains. For example, the phantom ideal ring polymers prefer to escape from the space not only between two repulsive walls but also in the case of two inert walls, which leads to attractive depletion forces. Ring polymer chains with less complex knot types (with a bigger radius of gyration) in the wide-slit region exert higher forces on the confining repulsive walls. The depletion force in the case of mixed boundary conditions becomes repulsive, in contrast to the case of linear polymer chains.

  10. Diversifying Our Perspectives on Mathematics about Space and Geometry: An Ecocultural Approach

    ERIC Educational Resources Information Center

    Owens, Kay

    2014-01-01

    School mathematics tends to have developed from the major cultures of Asia, the Mediterranean and Europe. However, indigenous cultures in particular may have distinctly different systematic ways of referring to space and thinking mathematically about spatial activity. Their approaches are based on the close link between the environment and…

  11. Connecting Geometry and Chemistry: A Three-Step Approach to Three-Dimensional Thinking

    ERIC Educational Resources Information Center

    Donaghy, Kelley J.; Saxton, Kathleen J.

    2012-01-01

    A three-step active-learning approach is described to enhance the spatial abilities of general chemistry students with respect to three-dimensional molecular drawing and visualization. These activities are used in a medium-sized lecture hall with approximately 150 students in the first semester of the general chemistry course. The first activity…

  12. Connecting Geometry and Chemistry: A Three-Step Approach to Three-Dimensional Thinking

    ERIC Educational Resources Information Center

    Donaghy, Kelley J.; Saxton, Kathleen J.

    2012-01-01

    A three-step active-learning approach is described to enhance the spatial abilities of general chemistry students with respect to three-dimensional molecular drawing and visualization. These activities are used in a medium-sized lecture hall with approximately 150 students in the first semester of the general chemistry course. The first activity…

  13. Diversifying Our Perspectives on Mathematics about Space and Geometry: An Ecocultural Approach

    ERIC Educational Resources Information Center

    Owens, Kay

    2014-01-01

    School mathematics tends to have developed from the major cultures of Asia, the Mediterranean and Europe. However, indigenous cultures in particular may have distinctly different systematic ways of referring to space and thinking mathematically about spatial activity. Their approaches are based on the close link between the environment and…

  14. SU-C-207-01: Four-Dimensional Inverse Geometry Computed Tomography: Concept and Its Validation

    SciTech Connect

    Kim, K; Kim, D; Kim, T; Kang, S; Cho, M; Shin, D; Suh, T

    2015-06-15

    Purpose: In the past few years, the inverse geometry computed tomography (IGCT) system has been developed to overcome shortcomings of the conventional computed tomography (CT) system, such as the scatter induced by large detector size and cone-beam artifact. In this study, we present a concept for a four-dimensional (4D) IGCT system that retains these advantages while adding temporal resolution for dynamic studies and reduction of motion artifact. Methods: In contrast to a conventional CT system, the projection data at a certain angle in IGCT is a group of fractionated narrow cone-beam projections, a projection group (PG), acquired from a multi-source array whose sources are operated sequentially with an extremely short time gap. For 4D IGCT imaging, the time-related data acquisition parameters were determined by combining the multi-source scanning time for collecting one PG with a conventional 4D CBCT data acquisition sequence. Over a gantry rotation, the PGs acquired from the multi-source array were tagged with time and angle for 4D image reconstruction. The acquired PGs were sorted into 10 phases, and image reconstruction was performed independently at each phase. An image reconstruction algorithm based upon filtered backprojection was used in this study. Results: The 4D IGCT produced uniform images without cone-beam artifact, in contrast to the 4D CBCT images. In addition, the 4D IGCT images of each phase had no significant motion-induced artifact compared with 3D CT. Conclusion: The 4D IGCT images seem to give relatively accurate dynamic information about patient anatomy, as the results were more robust to motion artifact than 3D CT. Accordingly, it will be useful for dynamic studies and respiratory-correlated radiation therapy. This work was supported by the Industrial R&D program of MOTIE/KEIT [10048997, Development of the core technology for integrated therapy devices based on real-time MRI guided tumor tracking] and the Mid-career Researcher Program (2014R1A2A1A
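The phase-sorting step (tagging each PG with time, then binning into 10 phases) can be sketched as simple phase binning under the assumption of a periodic breathing signal with a known period; a real 4D sort would use a measured respiratory trace rather than a fixed period. Function name and interface are illustrative:

```python
def sort_into_phases(timestamps, period, n_phases=10):
    """Assign each projection-group timestamp to one of n_phases respiratory
    phase bins, where phase = fraction of the breathing cycle elapsed.
    Returns a list of bins, each holding the indices of its PGs."""
    bins = [[] for _ in range(n_phases)]
    for i, t in enumerate(timestamps):
        phase = (t % period) / period        # 0.0 <= phase < 1.0
        bins[int(phase * n_phases)].append(i)
    return bins
```

Each bin can then be reconstructed independently (here, by filtered backprojection) to give one 3D volume per respiratory phase.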

  15. Computation of molecular parity violation using the coupled-cluster linear response approach

    NASA Astrophysics Data System (ADS)

    Horný, Ľuboš; Quack, Martin

    2015-07-01

    In memoriam, Nicholas C. Handy. We report the implementation of a coupled-cluster linear response approach for the computation of molecular parity violation (in the framework of the PSI3 code, in particular). The approach is applied first to molecules such as hydrogen peroxide (HOOH), hydrogen disulfide (HSSH) and dichlorine dioxide (ClOOCl), which have been studied previously. The importance of including correlation is demonstrated for these examples, also including selected variations of geometry providing parity violation as a function of torsional angles. For the substituted allenes, 1,3-difluoroallene (CHF=C=CHF), 1-fluoro-3-chloroallene (CHF=C=CHCl) and 1,3-dichloroallene (CHCl=C=CHCl), we find that in particular the last molecule may be a suitable candidate for the experimental study of molecular parity violation.

  16. General Approach in Computing Sums of Products of Binary Sequences

    DTIC Science & Technology

    2011-12-08

    E. Kiliç (TOBB Economics and Technology University, Mathematics...) and P. Stănică (pstanica@nps.edu), December 8, 2011. Abstract: In this paper we find a general approach to finding closed forms of sums of products of arbitrary sequences... satisfying the same recurrence with different initial conditions. We successfully apply our technique to sums of products of such sequences with indices in

  17. Novel Approaches to Quantum Computation Using Solid State Qubits

    DTIC Science & Technology

    2007-12-31

    Reporting period: June 1, 2001 - September 30, 2007; contract F49620... Representative publications: Han, A scheme for the teleportation of multiqubit quantum information via the control of many agents in a network, submitted to Phys. Lett. A, 343... approach, Phys. Rev. B 70, 094513 (2004). 22. C.-P. Yang, S.-I. Chu, and S. Han, Efficient many party controlled teleportation of multiqubit quantum...

  18. WebMTA: a web-interface for ab initio geometry optimization of large molecules using molecular tailoring approach.

    PubMed

    Kavathekar, Ritwik; Khire, Subodh; Ganesh, V; Rahalkar, Anuja P; Gadre, Shridhar R

    2009-05-01

    A web interface for geometry optimization of large molecules using a linear-scaling method, the cardinality-guided molecular tailoring approach (CG-MTA), is presented. CG-MTA is a cut-and-stitch, fragmentation-based method developed in our laboratory for linear scaling of conventional ab initio techniques. This interface provides limited access to CG-MTA-enabled GAMESS. It can be used to obtain fragmentation schemes for a given spatially extended molecule depending on the maximum allowed fragment size and minimum cut radius values provided by the user. Currently, we support submission of single-point or geometry optimization jobs at Hartree-Fock and density functional theory levels of theory for systems containing 80 to 200 first-row atoms and comprising up to 1000 basis functions. The graphical user interface is built using HTML with Python at the back end. The back end farms out the jobs on an in-house Linux-based cluster of Pentium 4-class or higher machines using an @Home-based parallelization scheme (http://chem.unipune.ernet.in/~tcg/mtaweb/).

  19. Creation of an idealized nasopharynx geometry for accurate computational fluid dynamics simulations of nasal airflow in patient-specific models lacking the nasopharynx anatomy.

    PubMed

    A T Borojeni, Azadeh; Frank-Ito, Dennis O; Kimbell, Julia S; Rhee, John S; Garcia, Guilherme J M

    2016-08-15

    Virtual surgery planning based on computational fluid dynamics (CFD) simulations has the potential to improve surgical outcomes for nasal airway obstruction patients, but the benefits of virtual surgery planning must outweigh the risks of radiation exposure. Cone beam computed tomography (CT) scans represent an attractive imaging modality for virtual surgery planning due to lower costs and lower radiation exposures compared with conventional CT scans. However, to minimize the radiation exposure, the cone beam CT sinusitis protocol sometimes images only the nasal cavity, excluding the nasopharynx. The goal of this study was to develop an idealized nasopharynx geometry for accurate representation of outlet boundary conditions when the nasopharynx geometry is unavailable. Anatomically accurate models of the nasopharynx created from 30 CT scans were intersected with planes rotated at different angles to obtain an average geometry. Cross sections of the idealized nasopharynx were approximated as ellipses with cross-sectional areas and aspect ratios equal to the averages in the actual patient-specific models. CFD simulations were performed to investigate whether nasal airflow patterns were affected when the CT-based nasopharynx was replaced by the idealized nasopharynx in 10 nasal airway obstruction patients. Despite the simple form of the idealized geometry, all biophysical variables (nasal resistance, airflow rate, and heat fluxes) were very similar in the idealized vs. patient-specific models. The results confirmed the expectation that the nasopharynx geometry has a minimal effect on the nasal airflow patterns during inspiration. The idealized nasopharynx geometry will be useful in future CFD studies of nasal airflow based on medical images that exclude the nasopharynx.
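Approximating each cross section as an ellipse with a prescribed area (= πab) and aspect ratio (= a/b) fixes the two semi-axes in closed form. A small sketch of that construction (the function name is illustrative):

```python
import math

def ellipse_axes(area, aspect_ratio):
    """Semi-axes (a, b) of an ellipse with the given cross-sectional area
    (area = pi * a * b) and aspect ratio (aspect_ratio = a / b)."""
    b = math.sqrt(area / (math.pi * aspect_ratio))
    a = aspect_ratio * b
    return a, b
```

Applying this at each sampling plane, with the area and aspect ratio averaged over the 30 patient models, yields the idealized nasopharynx surface.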

  20. A new approach to compute accurate velocity of meteors

    NASA Astrophysics Data System (ADS)

    Egal, Auriane; Gural, Peter; Vaubaillon, Jeremie; Colas, Francois; Thuillot, William

    2016-10-01

    The CABERNET project was designed to push the limits of meteoroid orbit measurements by improving the determination of the meteors' velocities. Indeed, despite the development of camera networks dedicated to meteor observation, there is still an important discrepancy between the computed meteoroid orbits and theoretical results. The gap between the observed and theoretical semi-major axes of the orbits is especially significant; an accurate determination of meteoroid orbits therefore largely depends on the computation of the pre-atmospheric velocities. It is thus imperative to determine how to increase the precision of the velocity measurements. In this work, we analyze different methods currently used to compute the velocities and trajectories of meteors. They are based on the intersecting planes method developed by Ceplecha (1987), the least squares method of Borovicka (1990), and the multi-parameter fitting (MPF) method published by Gural (2012). In order to objectively compare the performance of these techniques, we have simulated realistic meteors ('fakeors') reproducing the measurement errors of many camera networks. Some fakeors are built following the propagation models studied by Gural (2012), and others are created by numerical integration using the Borovicka et al. (2007) model. Different optimization techniques have also been investigated in order to pick the most suitable one for solving the MPF, and the influence of the geometry of the trajectory on the result is also presented. We will present here the results of an improved implementation of the multi-parameter fitting that allows accurate orbit computation of meteors with CABERNET. The comparison of different velocity computations seems to show that, while the MPF is by far the best method to solve for the trajectory and velocity of a meteor, the ill-conditioning of the cost functions used can lead to large estimation errors for noisy
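As a baseline for the velocity-estimation problem discussed here, the simplest estimator is a least-squares fit of distance along the trajectory versus time; the methods compared in the abstract (Ceplecha's intersecting planes, Borovicka's least squares, Gural's MPF) are considerably more elaborate, e.g. modeling deceleration. An illustrative constant-velocity sketch, with names and units assumed:

```python
def fit_velocity(times, distances):
    """Least-squares estimate of a constant speed from distance-along-
    trajectory samples: fits s = s0 + v*t and returns v (slope)."""
    n = len(times)
    mt = sum(times) / n
    ms = sum(distances) / n
    # slope of the ordinary least-squares regression line
    return (sum((t - mt) * (s - ms) for t, s in zip(times, distances))
            / sum((t - mt) ** 2 for t in times))
```

The abstract's point about ill-conditioning is easy to appreciate from this sketch: adding deceleration parameters to such a fit makes the cost surface much flatter along some directions, so noise in the samples maps into large parameter errors.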

  1. Predicting relative permeability from water retention: A direct approach based on fractal geometry

    NASA Astrophysics Data System (ADS)

    Cihan, Abdullah; Tyner, John S.; Perfect, Edmund

    2009-04-01

    Commonly, a soil's relative permeability curve is predicted from its measured water retention curve by fitting equations that share parameters between the two curves (e.g., Brooks/Corey-Mualem and van Genuchten-Mualem). We present a new approach to predict relative permeability by direct application of measured soil water retention data without any fitting procedures. The new relative permeability model, derived from a probabilistic fractal approach, appears in series form as a function of suction and the incremental change in water content. This discrete approach describes the drained pore space and permeability at different suctions, incorporating the effects of both pore size distribution and connectivity among water-filled pores. We compared the new model's performance in predicting relative permeability to that of the van Genuchten-Mualem (VG-M) model for 35 paired data sets from the Unsaturated Soil Hydraulic Database (UNSODA) and five other previously published data sets. At the 5% level of significance, the new method predicts relative permeabilities from the UNSODA database significantly better (mean logarithmic root-mean-square error, LRMSE = 0.813) than the VG-M model (LRMSE = 1.555). Each prediction of relative permeability from the five other previously published data sets was also significantly better.
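The general idea of a discrete, fitting-free prediction from retention increments can be sketched as follows. This is a Childs/Collis-George-style capillary-bundle sum, not the paper's actual fractal series, and the retention data are hypothetical:

```python
import numpy as np

# Hypothetical drainage retention data: suction head (cm) vs. water content
psi = np.array([10.0, 30.0, 100.0, 300.0, 1000.0])
theta = np.array([0.40, 0.35, 0.28, 0.22, 0.15])

def relative_permeability(psi, theta):
    """Discrete capillary-bundle estimate of k_r along a retention curve.

    Each suction step drains a pore class whose conductance weight is taken
    as d_theta / psi_mid**2; k_r at a given suction is the weight of the
    classes still water-filled, normalized by the total. The paper's fractal
    series differs in detail; this only illustrates predicting k_r directly
    from measured retention increments, with no fitted parameters.
    """
    d_theta = theta[:-1] - theta[1:]            # water drained per step
    psi_mid = 0.5 * (psi[:-1] + psi[1:])        # step-average suction
    c = d_theta / psi_mid**2                    # pore-class weights
    remaining = c.sum() - np.cumsum(c)          # classes still filled
    return np.concatenate(([1.0], remaining / c.sum()))

k_r = relative_permeability(psi, theta)
print(k_r)
```

The output decreases monotonically from 1 at the wettest measured point, reflecting that the large, most conductive pores drain first.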

  2. A multidisciplinary approach to solving computer related vision problems.

    PubMed

    Long, Jennifer; Helland, Magne

    2012-09-01

    This paper proposes a multidisciplinary approach to solving computer-related vision issues by including optometry as part of the problem-solving team. Computer workstation design is increasing in complexity. There are at least ten different professions that contribute to workstation design or provide advice to improve worker comfort, safety and efficiency. Optometrists have a role in identifying and solving computer-related vision issues and in prescribing appropriate optical devices. However, it is possible that advice given by optometrists to improve visual comfort may conflict with other requirements and demands within the workplace. A multidisciplinary approach has been advocated for solving computer-related vision issues. There are opportunities for optometrists to collaborate with ergonomists, who coordinate information from physical, cognitive and organisational disciplines to enact holistic solutions to problems. This paper proposes a model of collaboration, with examples of successful partnerships at a number of professional levels, including individual relationships between optometrists and ergonomists when they have mutual clients/patients, in undergraduate and postgraduate education, and in research. There is also scope for dialogue between optometry and ergonomics professional associations. A multidisciplinary approach offers the opportunity to solve computer-related vision issues in a cohesive, rather than fragmented, way. Further exploration is required to understand the barriers to these professional relationships. © 2012 The College of Optometrists.

  3. Non-invasive Assessment of Lower Limb Geometry and Strength Using Hip Structural Analysis and Peripheral Quantitative Computed Tomography: A Population-Based Comparison.

    PubMed

    Litwic, A E; Clynes, M; Denison, H J; Jameson, K A; Edwards, M H; Sayer, A A; Taylor, P; Cooper, C; Dennison, E M

    2016-02-01

    Hip fracture is the most significant complication of osteoporosis in terms of mortality, long-term disability and decreased quality of life. In recent years, different techniques have been developed to assess lower limb strength and ultimately fracture risk. Here we examine relationships between two measures of lower limb bone geometry and strength: proximal femoral geometry and tibial peripheral quantitative computed tomography (pQCT). We studied a sample of 431 women and 488 men aged 59-71 years. The hip structural analysis (HSA) programme was employed to measure the structural geometry of the left hip from DXA scans obtained using a Hologic QDR 4500 instrument, while pQCT measurements of the tibia were obtained using a Stratec 2000 instrument in the same population. We observed strong sex differences in proximal femoral geometry at the narrow neck, intertrochanteric and femoral shaft regions. There were significant associations between pQCT-derived measures of bone geometry (tibial width, endocortical diameter and cortical thickness) and bone strength (strength strain index) and each corresponding HSA variable (all p < 0.001) in both men and women. These results demonstrate strong correlations between two different methods of assessment of lower limb bone strength: HSA and pQCT. Validation in prospective cohorts to study associations of each with incident fracture is now indicated.

  4. New Theoretical Approaches for Human-Computer Interaction.

    ERIC Educational Resources Information Center

    Rogers, Yvonne

    2004-01-01

    Presents a critique of recent theoretical developments in the field of human-computer interaction (HCI) together with an overview of HCI practice. This chapter discusses why theoretically based approaches have had little impact on the practice of interaction design and suggests mechanisms to enable designers and researchers to better articulate…

  5. A Unified Computational Approach to Oxide Aging Processes

    SciTech Connect

    Bowman, D.J.; Fleetwood, D.M.; Hjalmarson, H.P.; Schultz, P.A.

    1999-01-27

    In this paper we describe a unified, hierarchical computational approach to aging and reliability problems caused by materials changes in the oxide layers of Si-based microelectronic devices. We apply this method to a particular low-dose-rate radiation effects problem.

  6. Conformational dynamics of proanthocyanidins: physical and computational approaches

    Treesearch

    Fred L. Tobiason; Richard W. Hemingway; T. Hatano

    1998-01-01

    The interaction of plant polyphenols with proteins accounts for a good part of their commercial (e.g., leather manufacture) and biological (e.g., antimicrobial activity) significance. The interplay between observations of physical data such as crystal structure, NMR analyses, and time-resolved fluorescence with results of computational chemistry approaches has been...

  7. New Theoretical Approaches for Human-Computer Interaction.

    ERIC Educational Resources Information Center

    Rogers, Yvonne

    2004-01-01

    Presents a critique of recent theoretical developments in the field of human-computer interaction (HCI) together with an overview of HCI practice. This chapter discusses why theoretically based approaches have had little impact on the practice of interaction design and suggests mechanisms to enable designers and researchers to better articulate…

  8. The geometry of distributional preferences and a non-parametric identification approach: The Equality Equivalence Test

    PubMed Central

    Kerschbamer, Rudolf

    2015-01-01

    This paper proposes a geometric delineation of distributional preference types and a non-parametric approach for their identification in a two-person context. It starts with a small set of assumptions on preferences and shows that this set (i) naturally results in a taxonomy of distributional archetypes that nests all empirically relevant types considered in previous work; and (ii) gives rise to a clean experimental identification procedure – the Equality Equivalence Test – that discriminates between archetypes according to core features of preferences rather than properties of specific modeling variants. As a by-product the test yields a two-dimensional index of preference intensity. PMID:26089571

  9. The geometry of distributional preferences and a non-parametric identification approach: The Equality Equivalence Test.

    PubMed

    Kerschbamer, Rudolf

    2015-05-01

    This paper proposes a geometric delineation of distributional preference types and a non-parametric approach for their identification in a two-person context. It starts with a small set of assumptions on preferences and shows that this set (i) naturally results in a taxonomy of distributional archetypes that nests all empirically relevant types considered in previous work; and (ii) gives rise to a clean experimental identification procedure - the Equality Equivalence Test - that discriminates between archetypes according to core features of preferences rather than properties of specific modeling variants. As a by-product the test yields a two-dimensional index of preference intensity.

  10. A moving control volume approach to computing hydrodynamic forces and torques on immersed bodies

    DOE PAGES

    Nangia, Nishant; Johansen, Hans; Patankar, Neelesh A.; ...

    2017-10-01

    Here, we present a moving control volume (CV) approach to computing hydrodynamic forces and torques on complex geometries. The method requires surface and volumetric integrals over a simple and regular Cartesian box that moves with an arbitrary velocity to enclose the body at all times. The moving box is aligned with Cartesian grid faces, which makes the integral evaluation straightforward in an immersed boundary (IB) framework. Discontinuous and noisy derivatives of velocity and pressure at the fluid-structure interface are avoided and far-field (smooth) velocity and pressure information is used. We re-visit the approach to compute hydrodynamic forces and torques through force/torque balance equations in a Lagrangian frame that some of us took in a prior work (Bhalla et al., 2013 [13]). We prove the equivalence of the two approaches for IB methods, thanks to the use of Peskin's delta functions. Both approaches are able to suppress spurious force oscillations and are in excellent agreement, as expected theoretically. Test cases ranging from Stokes to high Reynolds number regimes are considered. We discuss regridding issues for the moving CV method in an adaptive mesh refinement (AMR) context. The proposed moving CV method is not limited to a specific IB method and can also be used, for example, with embedded boundary methods.

  11. A moving control volume approach to computing hydrodynamic forces and torques on immersed bodies

    NASA Astrophysics Data System (ADS)

    Nangia, Nishant; Johansen, Hans; Patankar, Neelesh A.; Bhalla, Amneet Pal Singh

    2017-10-01

    We present a moving control volume (CV) approach to computing hydrodynamic forces and torques on complex geometries. The method requires surface and volumetric integrals over a simple and regular Cartesian box that moves with an arbitrary velocity to enclose the body at all times. The moving box is aligned with Cartesian grid faces, which makes the integral evaluation straightforward in an immersed boundary (IB) framework. Discontinuous and noisy derivatives of velocity and pressure at the fluid-structure interface are avoided and far-field (smooth) velocity and pressure information is used. We re-visit the approach to compute hydrodynamic forces and torques through force/torque balance equations in a Lagrangian frame that some of us took in a prior work (Bhalla et al., 2013 [13]). We prove the equivalence of the two approaches for IB methods, thanks to the use of Peskin's delta functions. Both approaches are able to suppress spurious force oscillations and are in excellent agreement, as expected theoretically. Test cases ranging from Stokes to high Reynolds number regimes are considered. We discuss regridding issues for the moving CV method in an adaptive mesh refinement (AMR) context. The proposed moving CV method is not limited to a specific IB method and can also be used, for example, with embedded boundary methods.
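The control-volume momentum balance that this method generalizes can be illustrated with the classic fixed-CV result: for steady flow past a body, drag per unit span equals the momentum-deficit flux through a downstream plane. The wake profile below is hypothetical, and this is far simpler than the paper's moving-CV/IB formulation:

```python
import numpy as np

# Fixed control-volume momentum balance: integrate the momentum deficit
# rho * u * (U_inf - u) across a downstream wake plane to recover the drag.
rho, U_inf = 1.0, 1.0
y = np.linspace(-5.0, 5.0, 2001)
u = U_inf - 0.5 * np.exp(-y**2)            # hypothetical Gaussian wake deficit
dy = y[1] - y[0]
drag_per_span = rho * np.sum(u * (U_inf - u)) * dy
print(drag_per_span)  # analytically 0.5*sqrt(pi) - 0.25*sqrt(pi/2) ~ 0.5729
```

The moving-CV method of the paper evaluates the same kind of surface and volume integrals, but over a Cartesian box that translates with the body, which avoids differentiating noisy interface quantities.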

  12. Cloud Computing – A Unified Approach for Surveillance Issues

    NASA Astrophysics Data System (ADS)

    Rachana, C. R.; Banu, Reshma, Dr.; Ahammed, G. F. Ali, Dr.; Parameshachari, B. D., Dr.

    2017-08-01

    Cloud computing describes highly scalable resources provided as an external service via the Internet on a pay-per-use basis. From the economic point of view, the main attractiveness of cloud computing is that users only use what they need, and only pay for what they actually use. Resources are available for access from the cloud at any time, and from any location through networks. Cloud computing is gradually replacing the traditional Information Technology Infrastructure. Securing data is one of the leading concerns and biggest issues for cloud computing. Privacy of information is always a crucial point, especially when an individual's personal information or sensitive information is being stored in the organization. It is indeed true that, today, cloud authorization systems are not robust enough. This paper presents a unified approach for analyzing the various security issues and techniques to overcome the challenges in the cloud environment.

  13. Ab initio and density functional computations of the vibrational spectrum, molecular geometry and some molecular properties of the antidepressant drug sertraline (Zoloft) hydrochloride

    NASA Astrophysics Data System (ADS)

    Sagdinc, Seda; Kandemirli, Fatma; Bayari, Sevgi Haman

    2007-02-01

    Sertraline hydrochloride is a highly potent and selective inhibitor of serotonin (5HT). It is a basic compound of pharmaceutical application for antidepressant treatment (brand name: Zoloft). Ab initio and density functional computations of the vibrational (IR) spectrum, the molecular geometry, the atomic charges and polarizabilities were carried out. The infrared spectrum of sertraline is recorded in the solid state. The observed IR wave numbers were analysed in light of the computed vibrational spectrum. On the basis of the comparison between calculated and experimental results and the comparison with related molecules, assignments of fundamental vibrational modes are examined. The X-ray geometry and experimental frequencies are compared with the results of our theoretical calculations.

  14. Ab initio and density functional computations of the vibrational spectrum, molecular geometry and some molecular properties of the antidepressant drug sertraline (Zoloft) hydrochloride.

    PubMed

    Sagdinc, Seda; Kandemirli, Fatma; Bayari, Sevgi Haman

    2007-02-01

    Sertraline hydrochloride is a highly potent and selective inhibitor of serotonin (5HT). It is a basic compound of pharmaceutical application for antidepressant treatment (brand name: Zoloft). Ab initio and density functional computations of the vibrational (IR) spectrum, the molecular geometry, the atomic charges and polarizabilities were carried out. The infrared spectrum of sertraline is recorded in the solid state. The observed IR wave numbers were analysed in light of the computed vibrational spectrum. On the basis of the comparison between calculated and experimental results and the comparison with related molecules, assignments of fundamental vibrational modes are examined. The X-ray geometry and experimental frequencies are compared with the results of our theoretical calculations.

  15. Numerical approach to reproduce instabilities of partial cavitation in a Venturi 8° geometry

    NASA Astrophysics Data System (ADS)

    Charriere, Boris; Goncalves, Eric

    2016-11-01

    Unsteady partial cavitation is mainly formed by an attached cavity which presents periodic oscillations. Under certain conditions, the instabilities are characterized by the formation of vapour clouds, convected downstream of the cavity, which collapse in higher-pressure regions. In order to gain a better understanding of the complex physics involved, many experimental and numerical studies have been carried out. These identified two main mechanisms responsible for the break-off cycles. The development of a liquid re-entrant jet is the most common type of instability, but more recently, the role of pressure waves created by the cloud collapses has been highlighted. This paper presents a one-fluid compressible Reynolds-Averaged Navier-Stokes (RANS) solver closed by two different equations of state (EOS) for the mixture. Based on experimental data, we investigate the ability of our simulations to reproduce the instabilities of a self-sustained oscillating cavitation pocket. Two cavitation models are first compared. The importance of considering a non-equilibrium state for the vapour phase is also exhibited. Finally, the role played by the added transport equation used to compute the void ratio is emphasised. In the case of partially cavitating flows with detached cavitation clouds, the reproduction of convective mechanisms is clearly improved.
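Why a compressible one-fluid formulation matters here can be seen from the standard Wallis (homogeneous-mixture) speed of sound, which is not the paper's EOS but illustrates the wave dynamics involved; the phase properties below are rough textbook values:

```python
import numpy as np

# Wallis homogeneous-mixture speed of sound for a liquid-vapour mixture.
rho_l, c_l = 1000.0, 1500.0    # liquid water: density (kg/m^3), sound speed (m/s)
rho_v, c_v = 0.02, 400.0       # water vapour (rough values)

def mixture_sound_speed(alpha):
    """Sound speed of a homogeneous liquid-vapour mixture with void ratio alpha."""
    rho_m = alpha * rho_v + (1.0 - alpha) * rho_l
    inv = alpha / (rho_v * c_v**2) + (1.0 - alpha) / (rho_l * c_l**2)
    return 1.0 / np.sqrt(rho_m * inv)

# Even a modest void fraction collapses the sound speed to a few m/s,
# so cavitating regions become locally supersonic and support the
# pressure waves discussed in the abstract.
print(mixture_sound_speed(0.5))
```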

  16. Cloud computing approaches to accelerate drug discovery value chain.

    PubMed

    Garg, Vibhav; Arora, Suchir; Gupta, Chitra

    2011-12-01

    Continued advancements in the area of technology have helped high throughput screening (HTS) evolve from a linear to a parallel approach by performing system-level screening. Advanced experimental methods used for HTS at various steps of drug discovery (i.e. target identification, target validation, lead identification and lead validation) can generate data of the order of terabytes. As a consequence, there is a pressing need to store, manage, mine and analyze this data to identify informational tags. This need is in turn posing challenges to computer scientists to offer the matching hardware and software infrastructure, while managing the varying degree of desired computational power. Therefore, the potential of "On-Demand Hardware" and "Software as a Service (SAAS)" delivery mechanisms cannot be denied. This on-demand computing, largely referred to as Cloud Computing, is now transforming drug discovery research. Also, the integration of Cloud computing with parallel computing is certainly expanding its footprint in the life sciences community. The speed, efficiency and cost effectiveness have made cloud computing a 'good to have' tool for researchers, providing them significant flexibility and allowing them to focus on the 'what' of science and not the 'how'. Once it reaches maturity, the Discovery-Cloud would fit best to manage drug discovery and clinical development data, generated using advanced HTS techniques, hence supporting the vision of personalized medicine.

  17. Computational intelligence approaches for pattern discovery in biological systems.

    PubMed

    Fogel, Gary B

    2008-07-01

    Biology, chemistry and medicine are faced by tremendous challenges caused by an overwhelming amount of data and the need for rapid interpretation. Computational intelligence (CI) approaches such as artificial neural networks, fuzzy systems and evolutionary computation are being used with increasing frequency to contend with this problem, in light of noise, non-linearity and temporal dynamics in the data. Such methods can be used to develop robust models of processes either on their own or in combination with standard statistical approaches. This is especially true for database mining, where modeling is a key component of scientific understanding. This review provides an introduction to current CI methods, their application to biological problems, and concludes with a commentary about the anticipated impact of these approaches in bioinformatics.

  18. Topological expansion of the β-ensemble model and quantum algebraic geometry in the sectorwise approach

    NASA Astrophysics Data System (ADS)

    Chekhov, L. O.; Eynard, B.; Marchal, O.

    2011-02-01

    We construct the solution of the loop equations of the β-ensemble model in a form analogous to the solution in the case of Hermitian matrices, β = 1. The solution for β = 1 is expressed in terms of the algebraic spectral curve given by y² = U(x). The spectral curve for arbitrary β converts into the Schrödinger equation ((ħ∂)² − U(x))ψ(x) = 0, where ħ ∝ (√β − 1/√β)/N. The basic ingredients of the method based on the algebraic solution retain their meaning, but we use an alternative approach to construct a solution of the loop equations in which the resolvents are given separately in each sector. Although this approach turns out to be more involved technically, it allows consistently defining the B-cycle structure for constructing the quantum algebraic curve (a D-module of the form y² − U(x), where [y, x] = ħ) and explicitly writing the correlation functions and the corresponding symplectic invariants F_h, or the terms of the free energy, in a 1/N²-expansion at arbitrary ħ. The set of "flat" coordinates includes the potential times t_k and the occupation numbers ε̃_α. We define and investigate the properties of the A- and B-cycles, forms of the first, second, and third kinds, and the Riemann bilinear identities. These identities allow finding the singular part of F_0, which depends only on ε̃_α.

  19. An analytical approach to bistable biological circuit discrimination using real algebraic geometry

    PubMed Central

    Siegal-Gaskins, Dan; Franco, Elisa; Zhou, Tiffany; Murray, Richard M.

    2015-01-01

    Biomolecular circuits with two distinct and stable steady states have been identified as essential components in a wide range of biological networks, with a variety of mechanisms and topologies giving rise to their important bistable property. Understanding the differences between circuit implementations is an important question, particularly for the synthetic biologist faced with determining which bistable circuit design out of many is best for their specific application. In this work we explore the applicability of Sturm's theorem—a tool from nineteenth-century real algebraic geometry—to comparing ‘functionally equivalent’ bistable circuits without the need for numerical simulation. We first consider two genetic toggle variants and two different positive feedback circuits, and show how specific topological properties present in each type of circuit can serve to increase the size of the regions of parameter space in which they function as switches. We then demonstrate that a single competitive monomeric activator added to a purely monomeric (and otherwise monostable) mutual repressor circuit is sufficient for bistability. Finally, we compare our approach with the Routh–Hurwitz method and derive consistent, yet more powerful, parametric conditions. The predictive power and ease of use of Sturm's theorem demonstrated in this work suggest that algebraic geometric techniques may be underused in biomolecular circuit analysis. PMID:26109633
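Sturm's theorem itself is easy to demonstrate with SymPy. The cubic below is a hypothetical stand-in for a circuit's fixed-point polynomial; three distinct real roots in the physically relevant interval would correspond to two stable steady states plus one unstable one, i.e. bistability:

```python
import sympy as sp

x = sp.symbols('x')
# Hypothetical steady-state polynomial with roots 1, 2, 3 standing in for
# the fixed-point equation of a toggle-like circuit.
p = sp.Poly(x**3 - 6*x**2 + 11*x - 6, x)

def count_real_roots(poly, a, b):
    """Count distinct real roots of poly in (a, b] via Sturm's theorem:
    the root count is V(a) - V(b), where V(t) is the number of sign
    changes in the Sturm chain evaluated at t."""
    chain = sp.sturm(poly)

    def sign_changes(val):
        signs = [sp.sign(q.eval(val)) for q in chain]
        signs = [s for s in signs if s != 0]
        return sum(1 for s1, s2 in zip(signs, signs[1:]) if s1 != s2)

    return sign_changes(a) - sign_changes(b)

print(count_real_roots(p, 0, 10))  # → 3
```

Because the count is exact and symbolic in the coefficients, the same machinery can delimit regions of parameter space where a circuit has three fixed points, with no numerical simulation, which is the use the abstract describes.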

  20. A Computational Approach for Model Update of an LS-DYNA Energy Absorbing Cell

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Jackson, Karen E.; Kellas, Sotiris

    2008-01-01

    NASA and its contractors are working on structural concepts for absorbing impact energy of aerospace vehicles. Recently, concepts in the form of multi-cell honeycomb-like structures designed to crush under load have been investigated for both space and aeronautics applications. Efforts to understand these concepts are progressing from tests of individual cells to tests of systems with hundreds of cells. Because of fabrication irregularities, geometry irregularities, and material properties uncertainties, the problem of reconciling analytical models, in particular LS-DYNA models, with experimental data is a challenge. A first look at the correlation results between single cell load/deflection data with LS-DYNA predictions showed problems which prompted additional work in this area. This paper describes a computational approach that uses analysis of variance, deterministic sampling techniques, response surface modeling, and genetic optimization to reconcile test with analysis results. Analysis of variance provides a screening technique for selection of critical parameters used when reconciling test with analysis. In this study, complete ignorance of the parameter distribution is assumed and, therefore, the value of any parameter within the range that is computed using the optimization procedure is considered to be equally likely. Mean values from tests are matched against LS-DYNA solutions by minimizing the square error using a genetic optimization. The paper presents the computational methodology along with results obtained using this approach.
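The final reconciliation step can be sketched with SciPy's differential evolution standing in for the paper's genetic optimizer; the "response surface" and all parameter names and numbers are hypothetical stand-ins for the actual LS-DYNA model:

```python
from scipy.optimize import differential_evolution

# Hypothetical measured mean from crush tests (arbitrary units)
test_mean_peak_load = 12.0

def surrogate_peak_load(params):
    """Toy response-surface model standing in for an LS-DYNA prediction."""
    thickness, yield_stress = params
    return 3.0 * thickness**2 * yield_stress

def squared_error(params):
    return (surrogate_peak_load(params) - test_mean_peak_load) ** 2

# Uniform bounds reflect complete ignorance of the parameter distributions,
# as assumed in the abstract: any value in range is equally likely a priori.
result = differential_evolution(squared_error,
                                bounds=[(0.5, 2.0), (1.0, 5.0)],
                                seed=0, tol=1e-10)
print(result.x, result.fun)
```

A population-based optimizer is a reasonable choice here because the test-analysis discrepancy surface is typically multimodal and the screened parameters enter the crush response nonlinearly.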

  1. Analytical and Computational Properties of Distributed Approaches to MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    Historical evolution of engineering disciplines and the complexity of the MDO problem suggest that disciplinary autonomy is a desirable goal in formulating and solving MDO problems. We examine the notion of disciplinary autonomy and discuss the analytical properties of three approaches to formulating and solving MDO problems that achieve varying degrees of autonomy by distributing the problem along disciplinary lines. Two of the approaches, Optimization by Linear Decomposition and Collaborative Optimization, are based on bi-level optimization and reflect what we call a structural perspective. The third approach, Distributed Analysis Optimization, is a single-level approach that arises from what we call an algorithmic perspective. The main conclusion of the paper is that disciplinary autonomy may come at a price: in the bi-level approaches, the system-level constraints introduced to relax the interdisciplinary coupling and enable disciplinary autonomy can cause analytical and computational difficulties for optimization algorithms. The single-level alternative we discuss affords a more limited degree of autonomy than that of the bi-level approaches, but without the computational difficulties of the bi-level methods. Key words: autonomy, bi-level optimization, distributed optimization, multidisciplinary optimization, multilevel optimization, nonlinear programming, problem integration, system synthesis

  2. One approach for evaluating the Distributed Computing Design System (DCDS)

    NASA Technical Reports Server (NTRS)

    Ellis, J. T.

    1985-01-01

    The Distributed Computing Design System (DCDS) provides an integrated environment to support the life cycle of developing real-time distributed computing systems. The primary focus of DCDS is to significantly increase system reliability and software development productivity, and to minimize schedule and cost risk. DCDS consists of integrated methodologies, languages, and tools to support the life cycle of developing distributed software and systems. Smooth and well-defined transitions from phase to phase, language to language, and tool to tool provide a unique and unified environment. An approach to evaluating DCDS highlights its benefits.

  3. Parallel computing alters approaches, raises integration challenges in reservoir modeling

    SciTech Connect

    Shiralkar, G.S.; Volz, R.F.; Stephenson, R.E.; Valle, M.J.; Hird, K.B.

    1996-05-20

    Parallel computing is emerging as an important force in reservoir characterization, with the potential of altering the way one approaches reservoir modeling. In just hours, it is possible to routinely simulate the fluid flow in reservoir models 10 times larger than the largest studies conducted previously within Amoco. Although parallel computing provides solutions to reservoir characterization problems not possible in the past, such a state-of-the-art technology also raises several new problems, including the need to handle large amounts of data and data integration. This paper presents a reservoir study recently conducted by Amoco providing a showcase for these emerging technologies.

  4. The DYNAMO Simulation Language--An Alternate Approach to Computer Science Education.

    ERIC Educational Resources Information Center

    Bronson, Richard

    1986-01-01

    Suggests the use of computer simulation of continuous systems as a problem solving approach to computer languages. Outlines the procedures that the system dynamics approach employs in computer simulations. Explains the advantages of the special purpose language, DYNAMO. (ML)

  5. Dynamics and friction drag behavior of viscoelastic flows in complex geometries: A multiscale simulation approach

    NASA Astrophysics Data System (ADS)

    Koppol, Anantha Padmanabha Rao

    Flows of viscoelastic polymeric fluids are of great fundamental and practical interest, as polymeric materials for commodity and value-added products are typically processed in a fluid state. The nonlinear coupling between fluid motion and microstructure, which results in highly non-Newtonian rheology, memory/relaxation and normal stress development or tension along streamlines, greatly complicates the analysis, design and control of such flows. This has posed tremendous challenges to researchers engaged in developing first-principles models and simulations that can accurately and robustly predict the dynamical behavior of polymeric flows. Despite this, the past two decades have witnessed several significant advances towards accomplishing this goal. Yet a problem of fundamental and great pragmatic interest has defied solution despite years of ardent research by several groups, namely the relationship between friction drag and flow rate in inertialess flows of highly elastic polymer solutions in complex kinematics flows. A first-principles-based solution of this long-standing problem in non-Newtonian fluid mechanics is the goal of this research. To achieve our objective, it is essential to develop the capability to perform large-scale multiscale simulations, which integrate continuum-level finite element solvers for the conservation of mass and momentum with fast integrators of stochastic differential equations that describe the evolution of polymer configuration. Hence, in this research we have focused our attention on the development of a parallel, multiscale simulation algorithm that is capable of robustly and efficiently simulating complex kinematics flows of dilute polymeric solutions using the first-principles-based bead-spring chain description of the polymer molecules. The fidelity and computational efficiency of the algorithm have been demonstrated via three benchmark flow problems, namely, the plane Couette flow, the Poiseuille flow and the 4:1:4 axisymmetric

  6. Computational study of influence of diffuse basis functions on geometry optimization and spectroscopic properties of losartan potassium

    NASA Astrophysics Data System (ADS)

    Mizera, Mikołaj; Lewadowska, Kornelia; Talaczyńska, Alicja; Cielecka-Piontek, Judyta

    2015-02-01

    The work was aimed at investigating the influence of diffuse basis functions on the geometry optimization of the losartan molecule in its acidic and salt forms. Spectroscopic properties of losartan potassium were also calculated and compared with experiment. The density functional theory method was used with various basis sets: 6-31G(d,p) and its diffuse variants 6-31G(d,p)+ and 6-31G(d,p)++. Application of diffuse basis functions in geometry optimization resulted in a significant change of the total molecular energy. The total energy of losartan potassium decreased by 112.91 kJ/mol and 114.32 kJ/mol for the 6-31G(d,p)+ and 6-31G(d,p)++ basis sets, respectively. Almost the same decrease was observed for losartan: 114.99 kJ/mol and 117.08 kJ/mol, respectively, for the 6-31G(d,p)+ and 6-31G(d,p)++ basis sets. Further investigation showed significant differences among the geometries of losartan potassium optimized with the investigated basis sets. Application of diffuse basis functions resulted in an average 1.29 Å difference in the relative positions of corresponding atoms across the three obtained geometries. A similar study of losartan resulted in an average dislocation of only 0.22 Å. An extensive analysis of the geometry changes in molecules obtained with diffuse and non-diffuse basis functions was carried out in order to elucidate the observed changes. The analysis was supported by electrostatic potential maps and the calculation of natural atomic charges. UV, FT-IR and Raman spectra of losartan potassium were calculated and compared with experimental results. No crucial differences between the Raman spectra obtained with the different basis sets were observed. However, the FT-IR spectrum calculated for the losartan potassium geometry optimized with the 6-31G(d,p)++ basis set showed a 40% better correlation with the experimental FT-IR spectrum than that calculated with the geometry optimized with the 6-31G(d,p) basis set. It is therefore highly advisable to optimize the geometry of molecules with ionic interactions using diffuse basis functions.

  7. Computational study of influence of diffuse basis functions on geometry optimization and spectroscopic properties of losartan potassium.

    PubMed

    Mizera, Mikołaj; Lewadowska, Kornelia; Talaczyńska, Alicja; Cielecka-Piontek, Judyta

    2015-02-25

    The work investigated the influence of diffuse basis functions on the geometry optimization of the losartan molecule in its acidic and salt forms. Spectroscopic properties of losartan potassium were also calculated and compared with experiment. The density functional theory method was used with the 6-31G(d,p) basis set and its diffuse-augmented variants 6-31G(d,p)+ and 6-31G(d,p)++. Applying diffuse basis functions in geometry optimization significantly changed the total molecular energy: for losartan potassium it decreased by 112.91 kJ/mol and 114.32 kJ/mol for the 6-31G(d,p)+ and 6-31G(d,p)++ basis sets, respectively. Almost the same decrease was observed for losartan: 114.99 kJ/mol and 117.08 kJ/mol, respectively. Further investigation showed significant differences among the geometries of losartan potassium optimized with the investigated basis sets: adding diffuse functions produced an average 1.29 Å difference in the relative positions of corresponding atoms across the three optimized geometries. A similar analysis of losartan gave an average displacement of only 0.22 Å. An extensive analysis of the geometry changes between structures obtained with diffuse and non-diffuse basis functions was carried out to elucidate the observed differences, supported by electrostatic potential maps and calculated natural atomic charges. UV, FT-IR and Raman spectra of losartan potassium were calculated and compared with experimental results. No crucial differences were observed between Raman spectra obtained with the different basis sets. However, the FT-IR spectrum calculated from the geometry optimized with the 6-31G(d,p)++ basis set correlated 40% better with the experimental FT-IR spectrum than the one calculated from the 6-31G(d,p) geometry. It is therefore highly advisable to optimize the geometry of molecules with ionic interactions using diffuse basis functions.

  8. Protein Engineering by Combined Computational and In Vitro Evolution Approaches.

    PubMed

    Rosenfeld, Lior; Heyne, Michael; Shifman, Julia M; Papo, Niv

    2016-05-01

    Two alternative strategies are commonly used to study protein-protein interactions (PPIs) and to engineer protein-based inhibitors. In one approach, binders are selected experimentally from combinatorial libraries of protein mutants that are displayed on a cell surface. In the other approach, computational modeling is used to explore an astronomically large number of protein sequences to select a small number of sequences for experimental testing. While both approaches have some limitations, their combination produces superior results in various protein engineering applications. Such applications include the design of novel binders and inhibitors, the enhancement of affinity and specificity, and the mapping of binding epitopes. The combination of these approaches also aids in the understanding of the specificity profiles of various PPIs.

  9. Computational modeling approaches to the dynamics of oncolytic viruses

    PubMed Central

    Wodarz, Dominik

    2016-01-01

    Replicating oncolytic viruses represent a promising treatment approach against cancer, specifically targeting the tumor cells. Significant progress has been made through experimental and clinical studies. Beyond these approaches, however, mathematical models can be useful when analyzing the dynamics of virus spread through tumors, because the interactions between a growing tumor and a replicating virus are complex and nonlinear, making them difficult to understand by experimentation alone. Mathematical models have provided significant biological insight into the field of virus dynamics, and similar approaches can be adopted to study oncolytic viruses. This review discusses this approach and highlights some of the challenges that need to be overcome in order to build mathematical and computational models that are clinically predictive. PMID:27001049
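
    The kind of tumor-virus interaction model discussed above can be illustrated with a minimal ODE sketch (hypothetical parameters and functional forms, not the review's actual equations): uninfected tumor cells grow logistically, become infected on contact with free virus, and infected cells release new virions when they die.

```python
def simulate(x0=1.0, y0=0.0, v0=0.05, r=0.2, K=10.0, beta=0.5,
             delta=0.3, burst=4.0, c=0.1, dt=0.01, steps=20000):
    """Forward-Euler integration of a toy oncolytic-virus model:
    x = uninfected tumor cells, y = infected cells, v = free virus."""
    x, y, v = x0, y0, v0
    for _ in range(steps):
        dx = r * x * (1.0 - (x + y) / K) - beta * x * v   # logistic growth minus infection
        dy = beta * x * v - delta * y                     # infection minus lysis
        dv = burst * delta * y - c * v - beta * x * v     # burst release minus clearance/absorption
        x, y, v = x + dt * dx, y + dt * dy, v + dt * dv
    return x, y, v

x_final, y_final, v_final = simulate()
```

    With these toy parameters the infection can persist and depress the tumor below its virus-free carrying capacity K; the point of the sketch is only that the coupled nonlinearities make such outcomes hard to anticipate without simulation, which is the review's motivation for modeling.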

  10. Inconsistency in 9 mm bullets: correlation of jacket thickness to post-impact geometry measured with non-destructive X-ray computed tomography.

    PubMed

    Thornby, John; Landheer, Dirk; Williams, Tim; Barnes-Warden, Jane; Fenne, Paul; Norman, Daniel; Attridge, Alex; Williams, Mark A

    2014-01-01

    Fundamental to any ballistic armour standard is the reference projectile to be defeated. Typically, for certification purposes, a consistent and symmetrical bullet geometry is assumed; however, variations in bullet jacket dimensions can have far-reaching consequences. Traditionally, characteristics and internal dimensions have been analysed by physically sectioning bullets, an approach which is of restricted scope and which precludes subsequent ballistic assessment. The use of a non-destructive X-ray computed tomography (CT) method has been demonstrated and validated (Kumar et al., 2011 [15]); the authors now apply this technique to correlate bullet impact response with jacket thickness variations. A set of 20 bullets (9 mm DM11) was selected for comparison, and an image-based analysis method was employed to map jacket thickness and determine the centre of gravity of each specimen. Both intra- and inter-bullet variations were investigated, with thickness variations of the order of 200 μm commonly found along the length of all bullets and angular variations of up to 50 μm in some. The bullets were subsequently impacted against a rigid flat plate under controlled conditions (observed on a high-speed video camera) and the resulting deformed projectiles were re-analysed. The results of the experiments demonstrate a marked difference in ballistic performance between bullets from different manufacturers, and an asymmetric thinning of the jacket is observed in regions of pre-impact weakness. The conclusions are relevant for future soft armour standards and provide important quantitative data for numerical model correlation and development. The implications of the findings for the reliability and repeatability of the industry standard V50 ballistic test are also discussed. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  11. Beyond the Melnikov method: A computer assisted approach

    NASA Astrophysics Data System (ADS)

    Capiński, Maciej J.; Zgliczyński, Piotr

    2017-01-01

    We present a Melnikov type approach for establishing transversal intersections of stable/unstable manifolds of perturbed normally hyperbolic invariant manifolds (NHIMs). The method is based on a new geometric proof of the normally hyperbolic invariant manifold theorem, which establishes the existence of a NHIM, together with its associated invariant manifolds and bounds on their first and second derivatives. We do not need to know the explicit formulas for the homoclinic orbits prior to the perturbation. We also do not need to compute any integrals along such homoclinics. All needed bounds are established using rigorous computer assisted numerics. Lastly, and most importantly, the method establishes intersections for an explicit range of parameters, and not only for perturbations that are 'small enough', as is the case in the classical Melnikov approach.
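
    For contrast, the classical first-order criterion that this computer assisted approach generalizes is the Melnikov function. For a planar system \( \dot q = f(q) + \varepsilon g(q, t) \) with an unperturbed homoclinic orbit \( q_0(t) \), the textbook statement is:

```latex
% Classical first-order Melnikov function for a periodically perturbed
% planar system \dot{q} = f(q) + \varepsilon g(q, t):
\[
  M(t_0) \;=\; \int_{-\infty}^{\infty}
    f\bigl(q_0(t)\bigr) \wedge g\bigl(q_0(t),\, t + t_0\bigr)\, dt .
\]
% A simple zero of M(t_0) implies transversal intersection of the stable
% and unstable manifolds for all sufficiently small \varepsilon; removing
% that "small enough" restriction is precisely the contribution described
% in the abstract.
```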

  12. Computer Programs. A Systems Approach to Placement and Follow-Up: A Computer Model.

    ERIC Educational Resources Information Center

    Jones, Charles B.

    The computer programs utilized in a systems approach to job placement and followup for students in the public high schools of the Bryan Independent School District, Bryan, Texas, are presented in this document. The programs are for record update, delete, change format, followup summary, followup detail, roster, mailer, and nonvocational followup…

  13. The Visualization Management System Approach To Visualization In Scientific Computing

    NASA Astrophysics Data System (ADS)

    Butler, David M.; Pendley, Michael H.

    1989-09-01

    We introduce the visualization management system (ViMS), a new approach to the development of software for visualization in scientific computing (ViSC). The conceptual foundation for a ViMS is an abstract visualization model which specifies a class of geometric objects, the graphic representations of the objects and the operations on both. A ViMS provides a modular implementation of its visualization model. We describe ViMS requirements and a model-independent ViMS architecture. We briefly describe the vector bundle visualization model and the visualization taxonomy it generates. We conclude by summarizing the benefits of the ViMS approach.

  14. Style: A Computational and Conceptual Blending-Based Approach

    NASA Astrophysics Data System (ADS)

    Goguen, Joseph A.; Harrell, D. Fox

    This chapter proposes a new approach to style, arising from our work on computational media using structural blending, which enriches the conceptual blending of cognitive linguistics with structure building operations in order to encompass syntax and narrative as well as metaphor. We have implemented both conceptual and structural blending, and conducted initial experiments with poetry, including interactive multimedia poetry, although the approach generalizes to other media. The central idea is to generate multimedia content and analyze style in terms of blending principles, based on our finding that different principles from those of common sense blending are often needed for some contemporary poetic metaphors.

  15. A computer-aided approach to nonlinear control synthesis

    NASA Technical Reports Server (NTRS)

    Wie, Bong; Anthony, Tobin

    1988-01-01

    The major objective of this project is to develop a computer-aided approach to nonlinear stability analysis and nonlinear control system design. This goal is to be attained by refining the describing function method as a synthesis tool for nonlinear control design. The interim report outlines the approach taken by this study to meet these goals, including an introduction to the INteractive Controls Analysis (INCA) program, which was instrumental in meeting the study objectives. A single-input describing function (SIDF) design methodology was developed in this study; coupled with the software constructed in this study, the results of this project provide a comprehensive tool for the design and integration of nonlinear control systems.
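
    The describing function method mentioned above admits a compact textbook example (a standard result, not taken from INCA): the single-input describing function of a unit-slope saturation with limits ±a, driven by a sinusoid of amplitude A.

```python
import math

def sidf_saturation(A, a=1.0):
    """Single-input describing function N(A) of a unit-slope saturation
    with limits +/-a, for a sinusoidal input of amplitude A."""
    if A <= a:
        return 1.0  # input stays in the linear region: gain is exactly 1
    r = a / A
    # Classical result: N(A) = (2/pi) * [asin(a/A) + (a/A) * sqrt(1 - (a/A)^2)]
    return (2.0 / math.pi) * (math.asin(r) + r * math.sqrt(1.0 - r * r))
```

    The equivalent gain falls off as the sinusoid drives the element deeper into saturation, approaching 4a/(πA) for large A; limit cycles are then predicted where the locus of -1/N(A) intersects the linear plant's frequency response.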

  16. Uranyl-glycine-water complexes in solution: comprehensive computational modeling of coordination geometries, stabilization energies, and luminescence properties.

    PubMed

    Su, Jing; Zhang, Kai; Schwarz, W H Eugen; Li, Jun

    2011-03-21

    Comprehensive computational modeling of coordination structures, thermodynamic stabilities, and luminescence spectra of uranyl-glycine-water complexes [UO(2)(Gly)(n)aq(m)](2+) (Gly = glycine, aq = H(2)O, n = 0-2, m = 0-5) in aqueous solution has been carried out using relativistic density functional approaches. The solvent is approximated by a dielectric continuum model and additional explicit water molecules. Detailed pictures are obtained by synergic combination of experimental and theoretical data. The optimal equatorial coordination numbers of uranyl are determined to be five. The energies of several complex conformations are competitively close to each other. In non-basic solution the most probable complex forms are those with two water ligands replaced by the bidentate carboxyl groups of zwitterionic glycine. The N,O-chelation in non-basic solution is neither entropically nor enthalpically favored. The symmetric and antisymmetric stretch vibrations of the nearly linear O-U-O unit determine the luminescence features. The shapes of the vibrationally resolved experimental solution spectra are reproduced theoretically with an empirically fitted overall line-width parameter. The calculated luminescence origins correspond to thermally populated, near-degenerate groups of the lowest electronically excited states of (3)Δ(g) and (3)Φ(g) character, originating from (U-O)σ(u) → (U-5f)δ(u),ϕ(u) configurations of the linear [OUO](2+) unit. The intensity distributions of the vibrational progressions are consistent with U-O bond-length changes around 5 1/2 pm. The unusually high intensity of the short wavelength foot is explained by near-degeneracy of vibrationally and electronically excited states, and by intensity enhancement through the asymmetric O-U-O stretch mode. The combination of contemporary computational chemistry and experimental techniques leads to a detailed understanding of structures, thermodynamics, and luminescence of actinide compounds, including

  17. A Computer Code for 2-D Transport Calculations in x-y Geometry Using the Interface Current Method.

    SciTech Connect

    1990-12-01

    Version 00 RICANT performs 2-dimensional neutron transport calculations in x-y geometry using the interface current method. In the interface current method, the angular neutron currents crossing region surfaces are expanded in terms of the Legendre polynomials in the two half-spaces made by the region surfaces.
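
    In generic form (the exact normalization used by RICANT may differ), such a half-range expansion writes each partial current crossing a region surface in shifted Legendre polynomials of the direction cosine \( \mu \):

```latex
% Outgoing/incoming partial currents on a region surface, expanded in
% shifted Legendre polynomials P_l(2\mu - 1) over the half-space \mu \in (0, 1]:
\[
  j^{\pm}(\mu) \;\approx\; \sum_{l=0}^{L} (2l+1)\, j_l^{\pm}\, P_l(2\mu - 1),
  \qquad
  j_l^{\pm} \;=\; \int_0^1 j^{\pm}(\mu)\, P_l(2\mu - 1)\, d\mu .
\]
```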

  18. Computer-based Approaches for Training Interactive Digital Map Displays

    DTIC Science & Technology

    2005-09-01

    SUPPLEMENTARY NOTES: Subject Matter POC: Jean L. Dyer. ABSTRACT: Five computer-based training approaches for learning digital skills… [report indexing terms: training assessment, exploratory learning, guided exploratory training, guided discovery] …the other extreme of letting Soldiers learn a digital interface on their own. The research reported here examined these two conditions and three other…

  19. A computational approach for the health care market.

    PubMed

    Montefiori, Marcello; Resta, Marina

    2009-12-01

    In this work we analyze the health care market through a computational approach that relies on Kohonen's Self-Organizing Maps, and we observe the competition dynamics of health care providers versus those of patients. As a result, we offer a new tool for modelling hospital behaviour and the demand mechanism, one that combines a robust theoretical implementation with strong graphical impact.
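
    The Kohonen Self-Organizing Map at the core of this approach can be sketched in a few lines (a minimal 1-D toy version on made-up scalar data, not the authors' actual model):

```python
import math
import random

def train_som(data, n_nodes=4, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal 1-D Kohonen Self-Organizing Map for scalar inputs: for each
    sample, find the best-matching unit (BMU) and pull it and its neighbors
    toward the sample, shrinking learning rate and neighborhood over time."""
    rng = random.Random(seed)
    weights = [rng.random() for _ in range(n_nodes)]
    for epoch in range(epochs):
        frac = 1.0 - epoch / epochs
        lr = lr0 * frac
        sigma = max(sigma0 * frac, 0.5)
        for x in data:
            bmu = min(range(n_nodes), key=lambda i: abs(weights[i] - x))
            for i in range(n_nodes):
                # Gaussian neighborhood in index (topological) distance
                h = math.exp(-((i - bmu) ** 2) / (2.0 * sigma ** 2))
                weights[i] += lr * h * (x - weights[i])
    return weights

# Two clusters of scalar features (toy stand-in for provider/patient data):
weights = train_som([0.1, 0.15, 0.9, 0.95])
```

    After training, the map's nodes spread out to cover the clusters in the data, which is what makes SOMs useful as a graphical tool for inspecting competition dynamics.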

  20. Archiving Software Systems: Approaches to Preserve Computational Capabilities

    NASA Astrophysics Data System (ADS)

    King, T. A.

    2014-12-01

    A great deal of effort is made to preserve scientific data, not only because data is knowledge, but because it is often costly to acquire and is sometimes collected under unique circumstances. Another part of the science enterprise is the development of software to process and analyze the data. Developed software is also a large investment and worthy of preservation. However, the long-term preservation of software presents some challenges. Software often requires a specific technology stack to operate, which can include software, operating system and hardware dependencies. One past approach to preserving computational capabilities is to maintain ancient hardware long past its typical viability; on an archive horizon of 100 years, this is not feasible. Another approach is to archive source code. While this can preserve details of the implementation and algorithms, it may not be possible to reproduce the technology stack needed to compile and run the resulting applications. This forward-looking dilemma has a solution. Technology used to create clouds and process big data can also be used to archive and preserve computational capabilities. We explore how basic hardware, virtual machines, containers and appropriate metadata can be used to preserve computational capabilities and to archive functional software systems. In conjunction with data archives, this provides scientists with both the data and the capability to reproduce the processing and analysis used to generate past scientific results.

  1. WSRC approach to validation of criticality safety computer codes

    SciTech Connect

    Finch, D.R.; Mincey, J.F.

    1991-12-31

    Recent hardware and operating system changes at the Westinghouse Savannah River Site (WSRC) have necessitated review of the validation for JOSHUA criticality safety computer codes. As part of the planning for this effort, a policy for validation of JOSHUA and other criticality safety codes has been developed; this policy is illustrated with the steps being taken at WSRC. The objective in validating a specific computational method is to reliably correlate its calculated neutron multiplication factor (K{sub eff}) with known values over a well-defined set of neutronic conditions. Said another way, such correlations should be: (1) repeatable; (2) demonstrated with defined confidence; and (3) valid over an identified range of neutronic conditions (the area of applicability). The general approach to validation of computational methods at WSRC must encompass a large number of diverse types of fissile material processes in different operations. Special problems arise in validating computational methods when very few experiments are available (such as for enriched uranium systems whose principal second isotope is {sup 236}U). To cover all process conditions at WSRC, a broad validation approach has been used. Broad validation is based upon calculation of many experiments to span all possible ranges of reflection, nuclide concentrations, moderation ratios, etc. Narrow validation, in comparison, relies on calculations of a few experiments very near anticipated worst-case process conditions. The methods and problems of broad validation are discussed.
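
    The correlation step described above can be sketched numerically: given K{sub eff} values calculated for benchmark experiments known to be critical (true value 1.0), estimate the bias and a tolerance band. The function name and the fixed tolerance factor are illustrative stand-ins; a real validation derives the multiplier from sample size and the required confidence level.

```python
import statistics

def broad_validation_stats(calculated_keff, tolerance_factor=2.0):
    """Bias statistics for a broad-validation sketch.  Inputs are k_eff
    values calculated for critical benchmarks (true k_eff = 1.0);
    'tolerance_factor' stands in for a one-sided tolerance-limit
    multiplier chosen from sample size and confidence level."""
    bias = statistics.mean(calculated_keff) - 1.0
    spread = statistics.stdev(calculated_keff)
    # A conservative upper limit on acceptable calculated k_eff values:
    upper_safety_limit = 1.0 + bias - tolerance_factor * spread
    return bias, spread, upper_safety_limit

bias, spread, usl = broad_validation_stats([0.998, 1.002, 0.995, 1.001, 0.999])
```

    Spanning many benchmarks (broad validation) makes the bias and spread meaningful across the full range of reflection, concentration and moderation conditions, rather than only near one worst case.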

  3. Utero-fetal unit and pregnant woman modeling using a computer graphics approach for dosimetry studies.

    PubMed

    Anquez, Jérémie; Boubekeur, Tamy; Bibin, Lazar; Angelini, Elsa; Bloch, Isabelle

    2009-01-01

    Potential health effects related to electromagnetic field exposure raise public concern, especially for fetuses during pregnancy. Human fetal exposure can only be assessed through simulated dosimetry studies, performed on anthropomorphic models of pregnant women. In this paper, we propose a new methodology to generate a set of detailed utero-fetal unit (UFU) 3D models during the first and third trimesters of pregnancy, based on segmented 3D ultrasound and MRI data. UFU models are built using recent geometry processing methods derived from mesh-based computer graphics techniques and embedded in a synthetic woman body. Nine pregnant woman models have been generated using this approach and validated by obstetricians for anatomical accuracy and representativeness.

  4. Examples of computational approaches for elliptic, possibly multiscale PDEs with random inputs

    NASA Astrophysics Data System (ADS)

    Le Bris, Claude; Legoll, Frédéric

    2017-01-01

    We overview a series of recent works addressing numerical simulations of partial differential equations in the presence of some elements of randomness. The specific equations considered are linear elliptic and arise in the context of multiscale problems, but the purpose is more general. On a set of prototypical situations, we investigate two critical issues present in many settings: variance reduction techniques to obtain sufficiently accurate results at a limited computational cost when solving PDEs with random coefficients, and finite element techniques that are sufficiently flexible to carry over to geometries with random fluctuations. Some elements of theoretical and numerical analysis are briefly mentioned. The numerical experiments, although simple, provide convincing evidence of the efficiency of the approaches.
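
    One family of variance reduction techniques of the kind alluded to above, control variates, can be sketched on a scalar toy integrand standing in for the random PDE solution (the function names and setup are illustrative): a correlated quantity with known mean absorbs most of the sampling error.

```python
import math
import random
import statistics

def estimate(n=20000, seed=1):
    """Control-variate sketch: estimate E[f(U)] with f(u) = exp(u) and U
    uniform on (0, 1), using g(u) = u (known mean 1/2) as the control
    variate.  In the PDE setting, g would be a cheap surrogate of the
    random solution."""
    rng = random.Random(seed)
    us = [rng.random() for _ in range(n)]
    f = [math.exp(u) for u in us]
    mf = statistics.mean(f)
    mg = statistics.mean(us)
    # Estimated optimal coefficient: beta = Cov(f, g) / Var(g)
    cov = sum((a - mf) * (b - mg) for a, b in zip(f, us)) / (n - 1)
    beta = cov / statistics.variance(us)
    corrected = [a - beta * (b - 0.5) for a, b in zip(f, us)]
    return mf, statistics.mean(corrected)

plain, controlled = estimate()
```

    Because exp(u) and u are highly correlated on (0, 1), the corrected estimator attains a markedly smaller error than the plain Monte Carlo mean at the same sample count, which is the whole point of variance reduction when each sample requires a PDE solve.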

  5. Conversion Coefficients for Proton Beams using Standing and Sitting Male Hybrid Computational Phantom Calculated in Idealized Irradiation Geometries.

    PubMed

    Alves, M C; Santos, W S; Lee, C; Bolch, W E; Hunt, J G; Júnior, A B Carvalho

    2016-09-24

    The aim of this study was to calculate conversion coefficients for absorbed dose per fluence (DT/Φ) using the sitting and standing male hybrid phantom (UFH/NCI) exposed to monoenergetic protons with energies ranging from 2 MeV to 10 GeV. Sex-averaged effective dose per fluence (E/Φ), based on the DT/Φ results for the male and female hybrid phantoms in standing and sitting postures, was also calculated. The E/Φ results for the standing UFH/NCI phantom were also compared with the tabulated effective dose conversion coefficients provided in ICRP Publication 116. The radiation transport code MCNPX was used to develop exposure scenarios implementing the male UFH/NCI phantom in sitting and standing postures. Whole-body irradiations were performed using the irradiation geometries recommended by ICRP Publication 116: antero-posterior (AP), postero-anterior (PA), right and left lateral, rotational (ROT) and isotropic (ISO). In most organs, the DT/Φ conversion coefficients were similar for both postures. However, relative differences were significant for organs located in the lower abdominal region, such as the prostate, testes and urinary bladder, especially in the AP geometry. Effective dose conversion coefficients were 18% higher in the standing posture of the UFH/NCI phantom, especially below 100 MeV in the AP and PA geometries. In the lateral geometry, conversion coefficient values below 20 MeV were 16% higher in the sitting posture. In the ROT geometry, the differences were below 10% for almost all energies. In the ISO geometry, the differences in E/Φ were negligible. The E/Φ results for the UFH/NCI phantom were in general below the conversion coefficients provided in ICRP Publication 116.
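
    The effective dose quoted above is, by definition, a tissue-weighted sum over organ doses. A minimal sketch (the function and the dose values are illustrative, and the radiation weighting that converts absorbed to equivalent dose is glossed over; only the 0.12 weights for lung and stomach are actual ICRP Publication 103 values):

```python
def effective_dose(organ_dose_per_fluence, tissue_weights):
    """Effective dose per fluence as the tissue-weighted sum of
    (sex-averaged) organ doses: E/phi = sum_T w_T * (D_T/phi)."""
    return sum(tissue_weights[t] * organ_dose_per_fluence[t]
               for t in tissue_weights)

# Toy two-tissue example (dose numbers are made up):
E = effective_dose({"lung": 10.0, "stomach": 8.0},
                   {"lung": 0.12, "stomach": 0.12})
```

    This weighted-sum structure explains why posture matters mainly through the organs whose shielding changes, such as the lower-abdominal organs in the AP geometry.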

  6. A computational language approach to modeling prose recall in schizophrenia.

    PubMed

    Rosenstein, Mark; Diaz-Asper, Catherine; Foltz, Peter W; Elvevåg, Brita

    2014-06-01

    Many cortical disorders are associated with memory problems. In schizophrenia, verbal memory deficits are a hallmark feature. However, the exact nature of this deficit remains elusive. Modeling aspects of language features used in memory recall have the potential to provide means for measuring these verbal processes. We employ computational language approaches to assess time-varying semantic and sequential properties of prose recall at various retrieval intervals (immediate, 30 min and 24 h later) in patients with schizophrenia, unaffected siblings and healthy unrelated control participants. First, we model the recall data to quantify the degradation of performance with increasing retrieval interval and the effect of diagnosis (i.e., group membership) on performance. Next we model the human scoring of recall performance using an n-gram language sequence technique, and then with a semantic feature based on Latent Semantic Analysis. These models show that automated analyses of the recalls can produce scores that accurately mimic human scoring. The final analysis addresses the validity of this approach by ascertaining the ability to predict group membership from models built on the two classes of language features. Taken individually, the semantic feature is most predictive, while a model combining the features improves accuracy of group membership prediction slightly above the semantic feature alone as well as over the human rating approach. We discuss the implications for cognitive neuroscience of such a computational approach in exploring the mechanisms of prose recall.
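
    The n-gram sequence scoring can be illustrated with a toy overlap measure (an illustrative stand-in, not the study's actual language model): the fraction of the source passage's n-grams that reappear in a participant's recall.

```python
def ngram_overlap(recall, source, n=2):
    """Toy n-gram sequence score: fraction of the source passage's
    n-grams that also appear in the recall text."""
    def grams(text):
        toks = text.lower().split()
        return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}
    src = grams(source)
    return len(src & grams(recall)) / len(src) if src else 0.0

score = ngram_overlap("the dog chased the cat", "the dog chased the cat quickly")
```

    A semantic feature in the spirit of Latent Semantic Analysis would instead compare the texts in a learned vector space, so that paraphrases score well even with little n-gram overlap; the study's finding is that combining the two feature classes predicts group membership slightly better than either alone.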

  7. Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach

    NASA Technical Reports Server (NTRS)

    Warner, James E.; Hochhalter, Jacob D.

    2016-01-01

    This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.
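
    The MCMC step can be illustrated with a plain random-walk Metropolis sampler; this is a deliberately simplified stand-in for DRAM (no delayed rejection or adaptation), applied to a hypothetical one-dimensional "damage size" posterior.

```python
import math
import random

def metropolis(log_post, x0, step=0.5, n=5000, seed=0):
    """Plain random-walk Metropolis: propose a Gaussian step and accept
    with probability min(1, posterior ratio), working in log space."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n):
        cand = x + rng.gauss(0.0, step)
        lp_cand = log_post(cand)
        if math.log(rng.random()) < lp_cand - lp:
            x, lp = cand, lp_cand
        samples.append(x)
    return samples

# Hypothetical 1-D "damage size" posterior: Gaussian around 2.0 mm.
samples = metropolis(lambda s: -0.5 * ((s - 2.0) / 0.3) ** 2, x0=0.0)
```

    In the paper's setting each call to log_post would require a forward strain prediction, which is exactly why the surrogate model and the more efficient DRAM sampler matter.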

  9. Gyrokinetic Simulations of Microinstabilities in Stellarator Geometry

    SciTech Connect

    J.L.V. Lewandowski

    2003-08-29

    A computational study of microinstabilities in general geometry is presented. The ion gyrokinetic equation is solved as an initial value problem. The advantage of this approach is the accurate treatment of some important kinetic effects. The magnetohydrodynamic equilibrium is obtained from a three-dimensional local equilibrium model. The use of a local magnetohydrodynamic equilibrium model allows for a computationally efficient, systematic study of the impact of the magnetic structure on microinstabilities.

  10. Toward a verifiable approach to the design of concurrent computations

    SciTech Connect

    Chisholm, G.H.

    1993-01-01

    Distributed programs are dependent on explicit message passing between disjoint components of the computation. This paper is concerned with investigating an approach for proving correctness of distributed programs under an assumed data-exchange capability. Stated informally, the data-exchange assumption is that every message is passed correctly, i.e., neither lost nor corrupted. One approach for constructing a proof under this assumption would be to embed an abstract model of the data communications mechanism into the program specification. The Message Passing Interface (MPI) standard provides a basis for such a model. In support of our investigations, we have developed a high-level specification using the ASLAN specification language. Our specification is based on a generalized communications model from which the MPI model may be derived. We describe the specification of this model and an approach to the specification of distributed programs with explicit message passing based on a verifiable data exchange model.

  12. Automated Approach to Very High-Order Aeroacoustic Computations. Revision

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Goodrich, John W.

    2001-01-01

    Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high-order in space and time methods on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high-order methods (> 15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid aligned boundaries and to 2nd order for irregular boundaries.
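
    The Cauchy-Kowalewski recursion at the heart of this time advancement uses the PDE itself to convert time derivatives into spatial ones. A minimal single-point sketch for linear advection follows (an illustration of the idea only, not the authors' Hermite machinery; all constants here are arbitrary):

```python
import math

def ck_advance(derivs, c, dt):
    """Cauchy-Kowalewski time step for u_t + c*u_x = 0 at a single point.

    The PDE converts time derivatives into spatial ones,
    d^n u/dt^n = (-c)^n * d^n u/dx^n, so the Taylor series in time needs
    only the spatial derivatives derivs[n] = d^n u/dx^n at that point."""
    return sum(((-c * dt) ** n / math.factorial(n)) * d
               for n, d in enumerate(derivs))

# u(x, 0) = sin(x); its spatial derivatives cycle with period 4.
x, c, dt, order = 0.3, 1.0, 0.1, 15
cycle = [math.sin, math.cos, lambda v: -math.sin(v), lambda v: -math.cos(v)]
derivs = [cycle[n % 4](x) for n in range(order + 1)]

approx = ck_advance(derivs, c, dt)
exact = math.sin(x - c * dt)   # pure advection just shifts the profile
print(abs(approx - exact))     # truncation error is negligible at order 15
```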

  13. Preconditioner and convergence study for the Quantum Computer Aided Design (QCAD) nonlinear Poisson problem posed on the Ottawa Flat 270 design geometry.

    SciTech Connect

    Kalashnikova, Irina

    2012-05-01

    A numerical study aimed at evaluating different preconditioners within the Trilinos Ifpack and ML packages for the Quantum Computer Aided Design (QCAD) non-linear Poisson problem, implemented within the Albany code base and posed on the Ottawa Flat 270 design geometry, is performed. This study led to some new development in Albany that allows the user to select an ML preconditioner with Zoltan repartitioning based on nodal coordinates, which is summarized. Convergence of the numerical solutions computed within the QCAD computational suite with successive mesh refinement is examined in two metrics: the mean value of the solution (an L^1 norm) and the field integral of the solution (an L^2 norm).

  14. A Computer Vision Approach to Identify Einstein Rings and Arcs

    NASA Astrophysics Data System (ADS)

    Lee, Chien-Hsiu

    2017-03-01

    Einstein rings are rare gems of strong lensing phenomena; the ring images can be used to probe the underlying lens gravitational potential at every position angle, tightly constraining the lens mass profile. In addition, the magnified images also enable us to probe high-z galaxies with enhanced resolution and signal-to-noise ratios. However, only a handful of Einstein rings have been reported, either from serendipitous discoveries or from visual inspections of hundreds of thousands of massive galaxies or galaxy clusters. In the era of large sky surveys, an automated approach to identify ring patterns in the big data to come is in high demand. Here, we present an Einstein ring recognition approach based on computer vision techniques. The workhorse is the circle Hough transform, which recognises circular patterns or arcs in images. We propose a two-tier approach: first pre-select massive galaxies associated with multiple blue objects as possible lenses, then use the Hough transform to identify circular patterns. As a proof of concept, we apply our approach to SDSS, with high completeness, albeit low purity. We also apply our approach to other lenses in the DES, HSC-SSP, and UltraVISTA surveys, illustrating the versatility of our approach.
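
    The circle Hough transform named above can be sketched with a bare voting accumulator. The toy below runs on a synthetic edge map with one known circle; the grid size, radius range, and angular discretisation are arbitrary illustrative choices, not from the paper:

```python
import math
from collections import Counter

def hough_circles(edge_points, radii, n_theta=120):
    """Minimal circle Hough transform: each edge point votes for every
    centre that would place it on a circle of each candidate radius.
    The accumulator peak identifies the best (cx, cy, r)."""
    votes = Counter()
    for (x, y) in edge_points:
        for r in radii:
            for k in range(n_theta):
                t = 2 * math.pi * k / n_theta
                a = round(x - r * math.cos(t))
                b = round(y - r * math.sin(t))
                votes[(a, b, r)] += 1
    return votes.most_common(1)[0]  # ((cx, cy, r), vote_count)

# Synthetic "edge image": pixels on a circle of radius 10 centred at (20, 25).
true_cx, true_cy, true_r = 20, 25, 10
edges = {(round(true_cx + true_r * math.cos(2 * math.pi * k / 90)),
          round(true_cy + true_r * math.sin(2 * math.pi * k / 90)))
         for k in range(90)}

best, count = hough_circles(sorted(edges), radii=range(8, 13))
print(best)  # the peak should land at or next to (20, 25, 10)
```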

  15. Computational neuroscience approach to biomarkers and treatments for mental disorders.

    PubMed

    Yahata, Noriaki; Kasai, Kiyoto; Kawato, Mitsuo

    2017-04-01

    Psychiatry research has long experienced a stagnation stemming from a lack of understanding of the neurobiological underpinnings of phenomenologically defined mental disorders. Recently, the application of computational neuroscience to psychiatry research has shown great promise in establishing a link between phenomenological and pathophysiological aspects of mental disorders, thereby recasting current nosology in more biologically meaningful dimensions. In this review, we highlight recent investigations into computational neuroscience that have undertaken either theory- or data-driven approaches to quantitatively delineate the mechanisms of mental disorders. The theory-driven approach, including reinforcement learning models, plays an integrative role in this process by enabling correspondence between behavior and disorder-specific alterations at multiple levels of brain organization, ranging from molecules to cells to circuits. Previous studies have explicated a plethora of defining symptoms of mental disorders, including anhedonia, inattention, and poor executive function. The data-driven approach, on the other hand, is an emerging field in computational neuroscience seeking to identify disorder-specific features among high-dimensional big data. Remarkably, various machine-learning techniques have been applied to neuroimaging data, and the extracted disorder-specific features have been used for automatic case-control classification. For many disorders, the reported accuracies have reached 90% or more. However, we note that rigorous tests on independent cohorts are critically required to translate this research into clinical applications. Finally, we discuss the utility of the disorder-specific features found by the data-driven approach to psychiatric therapies, including neurofeedback. Such developments will allow simultaneous diagnosis and treatment of mental disorders using neuroimaging, thereby establishing 'theranostics' for the first time in clinical psychiatry.

  16. Steady-State Fluorescence of Highly Absorbing Samples in Transmission Geometry: A Simplified Quantitative Approach Considering Reabsorption Events.

    PubMed

    Krimer, Nicolás I; Rodrigues, Darío; Rodríguez, Hernán B; Mirenda, Martín

    2017-01-03

    A simplified methodology to acquire steady-state emission spectra and quantum yields of highly absorbing samples is presented. The experimental setup consists of a commercial spectrofluorometer adapted to transmission geometry, allowing the detection of the emitted light at 180° with respect to the excitation beam. The procedure includes two different mathematical approaches to describe and reproduce the distortions caused by reabsorption on emission spectra and quantum yields. Toluene solutions of 9,10-diphenylanthracene (DPA), with concentrations ranging between 1.12 × 10^-5 and 1.30 × 10^-2 M, were used to validate the proposed methodology. This dye has a significant probability of reabsorption and re-emission in concentrated solutions without showing self-quenching or aggregation phenomena. The results indicate that the reabsorption corrections, applied on molecular emission spectra and quantum yields of the samples, accurately reproduce experimental data. We further discuss why the re-emitted radiation is not detected in the experiments, even at the highest DPA concentrations.
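
    The reabsorption distortion described above can be illustrated with the simplest Beer-Lambert model, in which emitters are distributed uniformly along the optical path. This is an illustrative sketch of the correction idea only, not either of the paper's two mathematical approaches:

```python
import math

def reabsorption_factor(absorbance):
    """Fraction of emitted photons escaping the cuvette when emitters are
    spread uniformly along the path and reabsorption follows Beer-Lambert
    attenuation: f(A) = (1 - 10^-A) / (A ln 10), with f -> 1 as A -> 0."""
    if absorbance < 1e-9:
        return 1.0
    return (1.0 - 10.0 ** (-absorbance)) / (absorbance * math.log(10))

def correct_spectrum(observed, absorbances):
    """Undo the reabsorption distortion wavelength by wavelength."""
    return [f_obs / reabsorption_factor(a)
            for f_obs, a in zip(observed, absorbances)]

# The strong-overlap region (high A) is attenuated most, as in concentrated DPA.
observed = [100.0, 100.0, 100.0]   # flat observed intensity, arbitrary units
absorb = [2.0, 0.5, 0.01]          # absorbance at three wavelengths
print([round(v, 1) for v in correct_spectrum(observed, absorb)])
```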

  17. Understanding Plant Nitrogen Metabolism through Metabolomics and Computational Approaches

    PubMed Central

    Beatty, Perrin H.; Klein, Matthias S.; Fischer, Jeffrey J.; Lewis, Ian A.; Muench, Douglas G.; Good, Allen G.

    2016-01-01

    A comprehensive understanding of plant metabolism could provide a direct mechanism for improving nitrogen use efficiency (NUE) in crops. One of the major barriers to achieving this outcome is our poor understanding of the complex metabolic networks, physiological factors, and signaling mechanisms that affect NUE in agricultural settings. However, an exciting collection of computational and experimental approaches has begun to elucidate whole-plant nitrogen usage and provides an avenue for connecting nitrogen-related phenotypes to genes. Herein, we describe how metabolomics, computational models of metabolism, and flux balance analysis have been harnessed to advance our understanding of plant nitrogen metabolism. We introduce a model describing the complex flow of nitrogen through crops in a real-world agricultural setting and describe how experimental metabolomics data, such as isotope labeling rates and analyses of nutrient uptake, can be used to refine these models. In summary, the metabolomics/computational approach offers an exciting mechanism for understanding NUE that may ultimately lead to more effective crop management and engineered plants with higher yields. PMID:27735856

  18. Computational approaches to parameter estimation and model selection in immunology

    NASA Astrophysics Data System (ADS)

    Baker, C. T. H.; Bocharov, G. A.; Ford, J. M.; Lumb, P. M.; Norton, S. J.; Paul, C. A. H.; Junt, T.; Krebs, P.; Ludewig, B.

    2005-12-01

    One of the significant challenges in biomathematics (and other areas of science) is to formulate meaningful mathematical models. Our problem is to decide on a parametrized model which is, in some sense, most likely to represent the information in a set of observed data. In this paper, we illustrate the computational implementation of an information-theoretic approach (associated with a maximum likelihood treatment) to modelling in immunology. The approach is illustrated by modelling LCMV infection using a family of models based on systems of ordinary differential and delay differential equations. The models (which use parameters that have a scientific interpretation) are chosen to fit data arising from experimental studies of virus-cytotoxic T lymphocyte kinetics; the parametrized models that result are arranged in a hierarchy by the computation of Akaike indices. The practical illustration is used to convey more general insight. Because the mathematical equations that comprise the models are solved numerically, the accuracy in the computation has a bearing on the outcome, and we address this and other practical details in our discussion.
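
    The Akaike ranking step is easy to sketch. The following assumes the common least-squares form of the index, AIC = n ln(RSS/n) + 2k, with synthetic polynomial data standing in for the LCMV kinetics models:

```python
import math
import numpy as np

def aic_from_rss(rss, n_obs, n_params):
    """Akaike index for a least-squares fit: n*ln(RSS/n) + 2k.
    Lower is better; the 2k term penalises extra parameters."""
    return n_obs * math.log(rss / n_obs) + 2 * n_params

# Data generated from a quadratic law plus small noise (fixed seed).
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0, 0.05, x.size)

scores = {}
for degree in (1, 2, 6):
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    scores[degree] = aic_from_rss(rss, x.size, degree + 1)

best = min(scores, key=scores.get)
print(best)  # the straight line is heavily penalised by its misfit
```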

  19. Solubility of nonelectrolytes: a first-principles computational approach.

    PubMed

    Jackson, Nicholas E; Chen, Lin X; Ratner, Mark A

    2014-05-15

    Using a combination of classical molecular dynamics and symmetry adapted intermolecular perturbation theory, we develop a high-accuracy computational method for examining the solubility energetics of nonelectrolytes. This approach is used to accurately compute the cohesive energy density and Hildebrand solubility parameters of 26 molecular liquids. The energy decomposition of symmetry adapted perturbation theory is then utilized to develop multicomponent Hansen-like solubility parameters. These parameters are shown to reproduce the solvent categorizations (nonpolar, polar aprotic, or polar protic) of all molecular liquids studied while lending quantitative rigor to these qualitative categorizations via the introduction of simple, easily computable parameters. Notably, we find that by monitoring the first-order exchange energy contribution to the total interaction energy, one can rigorously determine the hydrogen bonding character of a molecular liquid. Finally, this method is applied to compute explicitly the Flory interaction parameter and the free energy of mixing for two different small molecule mixtures, reproducing the known miscibilities. This methodology represents an important step toward the prediction of molecular solubility from first principles.
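
    The Hildebrand parameter computed above has a compact thermodynamic definition, delta = sqrt((dH_vap - RT) / V_m), i.e. the square root of the cohesive energy density. A sketch using rough textbook-level values (illustrative only, not the paper's first-principles pipeline):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def hildebrand_delta(dH_vap, molar_volume, T=298.15):
    """Hildebrand solubility parameter from the cohesive energy density:
    delta = sqrt((dH_vap - R*T) / V_m), returned in Pa^0.5.
    dH_vap in J/mol, molar_volume in m^3/mol."""
    ced = (dH_vap - R * T) / molar_volume  # cohesive energy density, Pa
    return math.sqrt(ced)

# Approximate literature values (illustrative, not from the paper):
liquids = {
    "water":   (40650.0, 18.0e-6),
    "toluene": (38060.0, 106.3e-6),
    "hexane":  (31560.0, 131.6e-6),
}
for name, (dh, vm) in liquids.items():
    print(name, round(hildebrand_delta(dh, vm) / 1e3, 1), "MPa^0.5")
```

    With these inputs the nonpolar hexane comes out lowest and the hydrogen-bonding water highest, matching the qualitative solvent categorisation the paper seeks to quantify.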

  20. A GPU-computing Approach to Solar Stokes Profile Inversion

    NASA Astrophysics Data System (ADS)

    Harker, Brian J.; Mighell, Kenneth J.

    2012-09-01

    We present a new computational approach to the inversion of solar photospheric Stokes polarization profiles, under the Milne-Eddington model, for vector magnetography. Our code, named GENESIS, employs multi-threaded parallel-processing techniques to harness the computing power of graphics processing units (GPUs), along with algorithms designed to exploit the inherent parallelism of the Stokes inversion problem. Using a genetic algorithm (GA) engineered specifically for use with a GPU, we produce full-disk maps of the photospheric vector magnetic field from polarized spectral line observations recorded by the Synoptic Optical Long-term Investigations of the Sun (SOLIS) Vector Spectromagnetograph (VSM) instrument. We show the advantages of pairing a population-parallel GA with data-parallel GPU-computing techniques, and present an overview of the Stokes inversion problem, including a description of our adaptation to the GPU-computing paradigm. Full-disk vector magnetograms derived by this method are shown using SOLIS/VSM data observed on 2008 March 28 at 15:45 UT.
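
    The population-parallel GA idea can be conveyed with a serial toy inversion: fitting two parameters of a synthetic line profile. The Gaussian dip is a stand-in for a Milne-Eddington Stokes profile, and the operators and constants are arbitrary illustrative choices, not those of GENESIS:

```python
import math
import random

random.seed(7)

xs = [-10 + 0.5 * i for i in range(41)]

def profile(depth, width):
    """Toy absorption line: continuum minus a Gaussian dip, a stand-in
    for the Milne-Eddington Stokes profiles the real code inverts."""
    return [1.0 - depth * math.exp(-(x / width) ** 2) for x in xs]

observed = profile(0.6, 2.5)   # synthetic "observation"

def fitness(params):
    # negated chi-square misfit between model and observation
    return -sum((m - o) ** 2 for m, o in zip(profile(*params), observed))

def evolve(generations=60, pop_size=40):
    pop = [(random.uniform(0, 1), random.uniform(0.5, 5)) for _ in range(pop_size)]
    for _ in range(generations):
        # In GENESIS each member's fitness is evaluated on its own GPU
        # thread (population-parallel); here the loop is simply serial.
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 4]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            pa, pb = random.sample(parents, 2)
            children.append(tuple((a + b) / 2 + random.gauss(0, 0.05)
                                  for a, b in zip(pa, pb)))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best)  # should land near the true parameters (0.6, 2.5)
```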

  1. Computing electronic structures: A new multiconfiguration approach for excited states

    NASA Astrophysics Data System (ADS)

    Cancès, Éric; Galicher, Hervé; Lewin, Mathieu

    2006-02-01

    We present a new method for the computation of electronic excited states of molecular systems. This method is based upon a recent theoretical definition of multiconfiguration excited states [due to one of us, see M. Lewin, Solutions of the multiconfiguration equations in quantum chemistry, Arch. Rat. Mech. Anal. 171 (2004) 83-114]. Our algorithm, dedicated to the computation of the first excited state, always converges to a stationary state of the multiconfiguration model, which can be interpreted as an approximate excited state of the molecule. The definition of this approximate excited state is variational. An interesting feature is that it satisfies a non-linear Hylleraas-Undheim-MacDonald type principle: the energy of the approximate excited state is an upper bound to the true excited state energy of the N-body Hamiltonian. To compute the first excited state, one has to deform paths on a manifold, as is usually done in the search for transition states between reactants and products on potential energy surfaces. We propose here a general method for the deformation of paths which could also be useful in other settings. We also compare our method to other approaches used in Quantum Chemistry and give some explanation of the unsatisfactory behaviours that are sometimes observed when using the latter. Numerical results for the special case of two-electron systems are provided: we compute the first singlet excited state potential energy surface of the H2 molecule.

  2. Computational approaches in the design of synthetic receptors - A review.

    PubMed

    Cowen, Todd; Karim, Kal; Piletsky, Sergey

    2016-09-14

    The rational design of molecularly imprinted polymers (MIPs) has been a major contributor to their reputation as "plastic antibodies" - high-affinity, robust synthetic receptors which can be optimally designed and produced at a much lower cost than their biological equivalents. Computational design has become a routine procedure in the production of MIPs, and has led to major advances in functional monomer screening, selection of cross-linker and solvent, optimisation of monomer(s)-template ratio and selectivity analysis. In this review the various computational methods will be discussed with reference to all the published relevant literature since the end of 2013, with each article described by the target molecule, the computational approach applied (whether molecular mechanics/molecular dynamics, semi-empirical quantum mechanics, ab initio quantum mechanics (Hartree-Fock, Møller-Plesset, etc.) or DFT) and the purpose for which it was used. Detailed analysis is given to novel techniques including analysis of polymer binding sites, the use of novel screening programs and simulations of the MIP polymerisation reaction. The further advances in molecular modelling and computational design of synthetic receptors in particular will have a serious impact on the future of nanotechnology and biotechnology, permitting the further translation of MIPs into the realms of analytics and medical technology.

  3. Computing electronic structures: A new multiconfiguration approach for excited states

    SciTech Connect

    Cances, Eric . E-mail: cances@cermics.enpc.fr; Galicher, Herve . E-mail: galicher@cermics.enpc.fr; Lewin, Mathieu . E-mail: lewin@cermic.enpc.fr

    2006-02-10

    We present a new method for the computation of electronic excited states of molecular systems. This method is based upon a recent theoretical definition of multiconfiguration excited states [due to one of us, see M. Lewin, Solutions of the multiconfiguration equations in quantum chemistry, Arch. Rat. Mech. Anal. 171 (2004) 83-114]. Our algorithm, dedicated to the computation of the first excited state, always converges to a stationary state of the multiconfiguration model, which can be interpreted as an approximate excited state of the molecule. The definition of this approximate excited state is variational. An interesting feature is that it satisfies a non-linear Hylleraas-Undheim-MacDonald type principle: the energy of the approximate excited state is an upper bound to the true excited state energy of the N-body Hamiltonian. To compute the first excited state, one has to deform paths on a manifold, as is usually done in the search for transition states between reactants and products on potential energy surfaces. We propose here a general method for the deformation of paths which could also be useful in other settings. We also compare our method to other approaches used in Quantum Chemistry and give some explanation of the unsatisfactory behaviours which are sometimes observed when using the latter. Numerical results for the special case of two-electron systems are provided: we compute the first singlet excited state potential energy surface of the H2 molecule.

  4. A GPU-COMPUTING APPROACH TO SOLAR STOKES PROFILE INVERSION

    SciTech Connect

    Harker, Brian J.; Mighell, Kenneth J. E-mail: mighell@noao.edu

    2012-09-20

    We present a new computational approach to the inversion of solar photospheric Stokes polarization profiles, under the Milne-Eddington model, for vector magnetography. Our code, named GENESIS, employs multi-threaded parallel-processing techniques to harness the computing power of graphics processing units (GPUs), along with algorithms designed to exploit the inherent parallelism of the Stokes inversion problem. Using a genetic algorithm (GA) engineered specifically for use with a GPU, we produce full-disk maps of the photospheric vector magnetic field from polarized spectral line observations recorded by the Synoptic Optical Long-term Investigations of the Sun (SOLIS) Vector Spectromagnetograph (VSM) instrument. We show the advantages of pairing a population-parallel GA with data-parallel GPU-computing techniques, and present an overview of the Stokes inversion problem, including a description of our adaptation to the GPU-computing paradigm. Full-disk vector magnetograms derived by this method are shown using SOLIS/VSM data observed on 2008 March 28 at 15:45 UT.

  5. Comparison of phantom and computer-simulated MR images of flow in a convergent geometry: implications for improved two-dimensional MR angiography.

    PubMed

    Siegel, J M; Oshinski, J N; Pettigrew, R I; Ku, D N

    1995-01-01

    The signal loss that occurs in regions of disturbed flow significantly decreases the clinical usefulness of MR angiography in the imaging of diseased arteries. This signal loss is most often attributed to turbulent flow, but on a typical MR angiogram the signal is lost in the nonturbulent upstream region of the stenosis as well as in the turbulent downstream region. In the current study we used a flow phantom with a forward-facing step geometry to model the upstream region. The flow upstream of the step was convergent, which created high levels of convective acceleration. This region of the flow field contributes to signal loss at the constriction, leading to overestimation of the area of stenosis reduction. A computer program was designed to simulate the image artifacts that would be caused by this geometry in two-dimensional time-of-flight MR angiography. Simulated images were compared with actual phantom images and the flow artifacts were highly correlated. The computer simulation was then used to test the effects of different orders of motion compensation and of fewer pixels per diameter, as would be present in MR angiograms of small arteries. The results indicated that the computational simulation of flow artifacts upstream of the stenosis provides an important tool in the design of optimal imaging sequences for the reduction of signal loss.

  6. An alternative approach for computing seismic response with accidental eccentricity

    NASA Astrophysics Data System (ADS)

    Fan, Xuanhua; Yin, Jiacong; Sun, Shuli; Chen, Pu

    2014-09-01

    Accidental eccentricity is a non-standard assumption for the seismic design of tall buildings. Taking it into consideration requires reanalysis of seismic resistance, which requires either time-consuming computation of the natural vibration of eccentric structures or finding a static displacement solution by applying an approximated equivalent torsional moment for each eccentric case. This study proposes an alternative modal response spectrum analysis (MRSA) approach to calculate seismic responses with accidental eccentricity. The proposed approach, called the Rayleigh Ritz Projection-MRSA (RRP-MRSA), is developed based on MRSA and two strategies: (a) a RRP method to obtain a fast calculation of approximate modes of eccentric structures; and (b) an approach to assemble mass matrices of eccentric structures. The efficiency of RRP-MRSA is tested via engineering examples and compared with the standard MRSA (ST-MRSA) and one approximate method, i.e., the equivalent torsional moment hybrid MRSA (ETM-MRSA). Numerical results show that RRP-MRSA not only achieves almost the same precision as ST-MRSA, performing much better than ETM-MRSA, but is also more economical. Thus, RRP-MRSA can take the place of current accidental eccentricity computations in seismic design.
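
    The RRP idea, projecting each eccentric variant onto a fixed basis of nominal modes instead of re-solving the full eigenproblem, can be sketched in a few lines of linear algebra. The structural matrices below are generic illustrative stand-ins, not from the paper:

```python
import numpy as np

def ritz_eigenvalues(K, M, basis):
    """Project (K, M) onto a fixed basis and solve the small generalized
    eigenproblem: the RRP idea of reusing nominal modes to approximate
    each eccentric variant without a full eigen-solve."""
    Kr = basis.T @ K @ basis
    Mr = basis.T @ M @ basis
    # reduce the generalized symmetric problem via Cholesky of Mr
    Linv = np.linalg.inv(np.linalg.cholesky(Mr))
    return np.sort(np.linalg.eigvalsh(Linv @ Kr @ Linv.T))

n = 12
rng = np.random.default_rng(3)

# Shear-building-like tridiagonal stiffness; nominal unit masses.
K0 = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
# "Accidental eccentricity": a small perturbation of the mass distribution.
M1 = np.eye(n) + np.diag(0.05 * rng.random(n))

# Basis: the lowest 6 modes of the nominal structure.
_, vecs = np.linalg.eigh(K0)
basis = vecs[:, :6]

approx = ritz_eigenvalues(K0, M1, basis)

# Reference: direct generalized solve (M1 is diagonal, so M^-1/2 is cheap).
m_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(M1)))
exact = np.sort(np.linalg.eigvalsh(m_inv_sqrt @ K0 @ m_inv_sqrt))[:6]
print(np.max(np.abs(approx - exact)))  # small for the lowest modes
```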

  7. Computational approaches for rational design of proteins with novel functionalities

    PubMed Central

    Tiwari, Manish Kumar; Singh, Ranjitha; Singh, Raushan Kumar; Kim, In-Won; Lee, Jung-Kul

    2012-01-01

    Proteins are the most multifaceted macromolecules in living systems and have various important functions, including structural, catalytic, sensory, and regulatory functions. Rational design of enzymes is a great challenge to our understanding of protein structure and physical chemistry and has numerous potential applications. Protein design algorithms have been applied to design or engineer proteins that fold, fold faster, catalyze, catalyze faster, signal, and adopt preferred conformational states. The field of de novo protein design, although only a few decades old, is beginning to produce exciting results. Developments in this field are already having a significant impact on biotechnology and chemical biology. The application of powerful computational methods for functional protein designing has recently succeeded at engineering target activities. Here, we review recently reported de novo functional proteins that were developed using various protein design approaches, including rational design, computational optimization, and selection from combinatorial libraries, highlighting recent advances and successes. PMID:24688643

  8. Computational approaches for fragment-based and de novo design.

    PubMed

    Loving, Kathryn; Alberts, Ian; Sherman, Woody

    2010-01-01

    Fragment-based and de novo design strategies have been used in drug discovery for years. The methodologies for these strategies are typically discussed separately, yet the applications of these techniques overlap substantially. We present a review of various fragment-based discovery and de novo design protocols with an emphasis on successful applications in real-world drug discovery projects. Furthermore, we illustrate the strengths and weaknesses of the various approaches and discuss how one method can be used to complement another. We also discuss how the incorporation of experimental data as constraints in computational models can produce novel compounds that occupy unique areas in intellectual property (IP) space yet are biased toward the desired chemical property space. Finally, we present recent research results suggesting that computational tools applied to fragment-based discovery and de novo design can have a greater impact on the discovery process when coupled with the right experiments.

  9. Slide Star: An Approach to Videodisc/Computer Aided Instruction

    PubMed Central

    McEnery, Kevin W.

    1984-01-01

    One of medical education's primary goals is for the student to be proficient in the gross and microscopic identification of disease. The videodisc, with its storage capacity of up to 54,000 photomicrographs is ideally suited to assist in this educational process. “Slide Star” is a method of interactive instruction which is designed for use in any subject where it is essential to identify visual material. The instructional approach utilizes a computer controlled videodisc to display photomicrographs. In the demonstration program, these are slides of normal blood cells. The program is unique in that the instruction is created by the student's commands manipulating the photomicrograph data base. A prime feature is the use of computer generated multiple choice questions to reinforce the learning process.

  10. [Computer work and De Quervain's tenosynovitis: an evidence based approach].

    PubMed

    Gigante, M R; Martinotti, I; Cirla, P E

    2012-01-01

    The debate around the role of personal computer work as a cause of De Quervain's tenosynovitis has developed only partially, without considering the available multidisciplinary data. A systematic review of the literature, using an evidence-based approach, was performed. Among disorders associated with the use of video display units (VDUs), we must distinguish those affecting the upper limbs and, among these, those related to overload. Experimental studies on the occurrence of De Quervain's tenosynovitis are quite limited, and the occupational etiology is clinically quite difficult to prove, considering the interference of other activities of daily living and of biological susceptibility (i.e., anatomical variability, sex, age, exercise). At present there is no evidence of any connection between De Quervain syndrome and time of use of the personal computer or keyboard; limited evidence of a correlation is found with time using a mouse. No data are available regarding the exclusive or predominant use of laptops or mobile smartphones.

  11. A Computational Approach for Probabilistic Analysis of Water Impact Simulations

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Mason, Brian H.; Lyle, Karen H.

    2009-01-01

    NASA's development of new concepts for the Crew Exploration Vehicle Orion presents many challenges similar to those faced in the sixties during the Apollo program. However, with improved modeling capabilities, new challenges arise. For example, the use of the commercial code LS-DYNA, although widely used and accepted in the technical community, often involves high-dimensional, time consuming, and computationally intensive simulations. The challenge is to capture what is learned from a limited number of LS-DYNA simulations to develop models that allow users to conduct interpolation of solutions at a fraction of the computational time. This paper presents a description of the LS-DYNA model, a brief summary of the response surface techniques, the analysis of variance approach used in the sensitivity studies, equations used to estimate impact parameters, results showing conditions that might cause injuries, and concluding remarks.
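
    The response-surface idea, capturing a few expensive LS-DYNA runs in a cheap interpolating surrogate, can be sketched with an ordinary least-squares quadratic fit. The input names and the test function below are hypothetical, chosen only to illustrate the technique:

```python
import numpy as np

def quad_features(X):
    """Design matrix for a full quadratic response surface in two inputs:
    [1, x1, x2, x1^2, x2^2, x1*x2]."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

# Pretend each row is one expensive LS-DYNA run: two normalised inputs
# (say, impact velocity and pitch angle -- hypothetical names) and one
# scalar response (say, peak acceleration).
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(30, 2))
true = lambda x1, x2: 3.0 + 1.5 * x1 - 2.0 * x2 + 0.8 * x1 * x2
y = true(X[:, 0], X[:, 1]) + rng.normal(0, 0.01, 30)

coeffs, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

# The surrogate now interpolates at a fraction of the simulation cost.
x_new = np.array([[0.25, -0.4]])
pred = (quad_features(x_new) @ coeffs).item()
print(pred, true(0.25, -0.4))
```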

  12. FASTER 3: A generalized-geometry Monte Carlo computer program for the transport of neutrons and gamma rays. Volume 1: Summary report

    NASA Technical Reports Server (NTRS)

    Jordan, T. M.

    1970-01-01

    The theory used in FASTER-III, a Monte Carlo computer program for the transport of neutrons and gamma rays in complex geometries, is outlined. The program includes the treatment of geometric regions bounded by quadratic and quadric surfaces with multiple radiation sources which have specified space, angle, and energy dependence. The program calculates, using importance sampling, the resulting number and energy fluxes at specified point, surface, and volume detectors. It can also calculate the minimum weight shield configuration meeting a specified dose rate constraint. Results are presented for sample problems involving primary neutron, and primary and secondary photon, transport in a spherical reactor shield configuration.
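
    The flavor of such a Monte Carlo transport calculation can be conveyed by the simplest possible case: uncollided transmission through a homogeneous slab. This is an analog toy without FASTER-III's importance sampling or 3-D geometry handling, and the cross section and thickness are arbitrary:

```python
import math
import random

random.seed(42)

def transmission(mu_t, thickness, n_particles=200_000):
    """Analog Monte Carlo estimate of uncollided transmission through a
    slab: sample each particle's free path from the exponential
    distribution and count those crossing without a collision."""
    crossed = sum(1 for _ in range(n_particles)
                  if -math.log(random.random()) / mu_t > thickness)
    return crossed / n_particles

mu_t, t = 0.5, 3.0               # total cross section (1/cm), slab (cm)
est = transmission(mu_t, t)
print(est, math.exp(-mu_t * t))  # estimate vs the analytic exp(-mu_t * t)
```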

  13. FASTER 3: A generalized-geometry Monte Carlo computer program for the transport of neutrons and gamma rays. Volume 2: Users manual

    NASA Technical Reports Server (NTRS)

    Jordan, T. M.

    1970-01-01

    A description of the FASTER-III program for Monte Carlo Carlo calculation of photon and neutron transport in complex geometries is presented. Major revisions include the capability of calculating minimum weight shield configurations for primary and secondary radiation and optimal importance sampling parameters. The program description includes a users manual describing the preparation of input data cards, the printout from a sample problem including the data card images, definitions of Fortran variables, the program logic, and the control cards required to run on the IBM 7094, IBM 360, UNIVAC 1108 and CDC 6600 computers.

  14. Combined use of computed tomography and the lattice-Boltzmann method to investigate the influence of pore geometry of porous media on the permeability tensor

    NASA Astrophysics Data System (ADS)

    Striblet, J. C.; Rush, L.; Floyd, M.; Porter, M. L.; Al-Raoush, R. I.

    2011-12-01

    The objective of this work was to investigate the impact of pore geometry of porous media on the permeability tensor. High-resolution, three-dimensional maps of natural sand systems, comprising a range of grain sizes and shapes, were obtained using synchrotron microtomography. The lattice-Boltzmann (LB) method was used to simulate saturated flow through these packs to characterize the impact of particle shape on the permeability tensor. LB computations of the permeability tensor and its dependency on the internal structure of porous media will be presented and discussed.
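
    Once LB simulations provide volume-averaged fluxes for independent driving directions, the permeability tensor follows from Darcy's law, q = -(K/mu) grad p. A sketch with a hypothetical anisotropic medium standing in for the real LB output:

```python
import numpy as np

def permeability_tensor(fluxes, gradients, viscosity):
    """Recover K from Darcy's law q = -(K/mu) grad p, given average flux
    vectors from three simulations driven along independent pressure-
    gradient directions (as one would do with the LB results)."""
    Q = np.column_stack(fluxes)     # 3x3: one flux vector per column
    G = np.column_stack(gradients)  # 3x3: the matching gradients
    return -viscosity * Q @ np.linalg.inv(G)

# Hypothetical anisotropic medium: generate Darcy responses from a known K.
K_true = np.array([[2.0, 0.3, 0.0],
                   [0.3, 1.0, 0.0],
                   [0.0, 0.0, 0.5]]) * 1e-12   # m^2
mu = 1.0e-3                                    # Pa s (water-like)
grads = [np.array([1.0, 0, 0]), np.array([0, 1.0, 0]), np.array([0, 0, 1.0])]
fluxes = [-(K_true / mu) @ g for g in grads]

K_rec = permeability_tensor(fluxes, grads, viscosity=mu)
print(np.allclose(K_rec, K_true))  # True
```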

  15. Degree of rate control approach to computational catalyst screening

    SciTech Connect

    Wolcott, Christopher A.; Medford, Andrew J.; Studt, Felix; Campbell, Charles T.

    2015-10-01

    A new method for computational catalyst screening that is based on the concept of the degree of rate control (DRC) is introduced. It starts by developing a full mechanism and microkinetic model at the conditions of interest for a reference catalyst (ideally, the best known material) and then determines the degrees of rate control of the species in the mechanism (i.e., all adsorbed intermediates and transition states). It then uses the energies of the few species with the highest DRCs for this reference catalyst as descriptors to estimate the rates on related materials and predict which are most active. The predictions of this method regarding the relative rates of twelve late transition metals for methane steam reforming, using the Rh(2 1 1) surface as the reference catalyst, are compared to the most commonly-used approach for computational catalyst screening, the Nørskov-Bligaard (NB) method, which uses linear scaling relationships to estimate the energies of all adsorbed intermediates and transition states. It is slightly more accurate than the NB approach when the metals are similar to the reference metal (<0.5 eV different on a plot where the axes are the bond energies to C and O adatoms), but worse when too different from the reference. It is computationally faster than the NB method when screening a moderate number of materials (<100), thus adding a valuable complement to the NB approach. It can be implemented without a microkinetic model if the degrees of rate control are already known approximately, e.g., from experiments.
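
    Campbell's degree of rate control, X_i = (k_i/r)(dr/dk_i), is straightforward to evaluate numerically. A sketch on a two-step toy mechanism (not a full microkinetic model); for two irreversible steps in series the DRCs are k2/(k1+k2) and k1/(k1+k2), so the slower step dominates and the DRCs sum to one:

```python
def rate(k1, k2):
    """Overall rate of two irreversible steps in series, A -> I -> B,
    at pseudo-steady state: r = k1*k2 / (k1 + k2)."""
    return k1 * k2 / (k1 + k2)

def degree_of_rate_control(rate_fn, ks, i, eps=1e-6):
    """DRC by central finite difference: X_i = (k_i / r) * (dr/dk_i),
    perturbing only step i's rate constant."""
    up = list(ks); up[i] *= (1 + eps)
    dn = list(ks); dn[i] *= (1 - eps)
    r0 = rate_fn(*ks)
    return (ks[i] / r0) * (rate_fn(*up) - rate_fn(*dn)) / (2 * eps * ks[i])

ks = [1.0, 4.0]   # step 1 is the slow step
x1 = degree_of_rate_control(rate, ks, 0)
x2 = degree_of_rate_control(rate, ks, 1)
print(round(x1, 4), round(x2, 4), round(x1 + x2, 4))  # -> 0.8 0.2 1.0
```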

  16. Computer-assisted adjuncts for aneurysmal morphologic assessment: toward more precise and accurate approaches

    NASA Astrophysics Data System (ADS)

    Rajabzadeh-Oghaz, Hamidreza; Varble, Nicole; Davies, Jason M.; Mowla, Ashkan; Shakir, Hakeem J.; Sonig, Ashish; Shallwani, Hussain; Snyder, Kenneth V.; Levy, Elad I.; Siddiqui, Adnan H.; Meng, Hui

    2017-03-01

    Neurosurgeons currently base most of their treatment decisions for intracranial aneurysms (IAs) on morphological measurements made manually from 2D angiographic images. These measurements tend to be inaccurate because 2D measurements cannot capture the complex geometry of IAs and because manual measurements vary with the clinician's experience and opinion. Incorrect morphological measurements may lead to inappropriate treatment strategies. In order to improve the accuracy and consistency of morphological analysis of IAs, we have developed an image-based computational tool, AView. In this study, we quantified the accuracy of the computer-assisted adjuncts of AView for aneurysmal morphologic assessment by performing measurements on spheres of known size and on anatomical IA models. AView has an average morphological error of 0.56% in size and 2.1% in volume measurement. We also investigated the clinical utility of this tool on a retrospective clinical dataset, comparing size and neck diameter measurements between 2D manual and 3D computer-assisted approaches. The average error was 22% and 30% in the manual measurement of size and aneurysm neck diameter, respectively. Inaccuracies due to manual measurements could therefore lead to wrong treatment decisions in 44% and inappropriate treatment strategies in 33% of the IAs. Furthermore, computer-assisted analysis of IAs improves the consistency in measurement among clinicians by 62% in size and 82% in neck diameter measurement. We conclude that AView dramatically improves the accuracy of morphological analysis. These results illustrate the necessity of a computer-assisted approach for the morphological analysis of IAs.

  17. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1991-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, fault-tolerant, and self-managing. Six years of research on ISIS are reviewed, describing the model, the types of applications to which ISIS was applied, and some of the reasoning that underlies a recent effort to redesign and reimplement ISIS as a much smaller, lightweight system.

  18. Approaches to Computer Modeling of Phosphate Hide-Out.

    DTIC Science & Technology

    1984-06-28

    Phosphate acts as a buffer to keep pH at a value above which acid corrosion occurs and below which caustic corrosion becomes significant. Difficulties arise from the ionization equilibria of dihydrogen phosphate: H2PO4(-) <=> H(+) + HPO4(2-), with constant K (B-7); H(+) + OH(-) <=> H2O, 1/Kw (B-8); and H2PO4(-) + OH(-) <=> HPO4(2-) + H2O, K/Kw (B-9). Such zero heat... NRL Memorandum Report 5361, Approaches to Computer Modeling of Phosphate Hide-Out, K. A. S. Hardy and J. C

  19. Physiologically based computational approach to camouflage and masking patterns

    NASA Astrophysics Data System (ADS)

    Irvin, Gregg E.; Dowler, Michael G.

    1992-09-01

    A computational system was developed to integrate both Fourier image processing techniques and biologically based image processing techniques. The Fourier techniques allow the spatially global manipulation of phase and amplitude spectra. The biologically based techniques allow for spatially localized manipulation of phase, amplitude and orientation independently on multiple spatial frequency scales. These techniques combined with a large variety of basic image processing functions allow for a versatile and systematic approach to be taken toward the development of specialized patterning and visual textures. Current applications involve research for the development of 2-dimensional spatial patterning that can function as effective camouflage patterns and masking patterns for the human visual system.
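    One of the spatially global Fourier manipulations the text describes, separating an image into amplitude and phase spectra and operating on one of them, can be sketched as follows. Phase randomization with a preserved amplitude spectrum is a standard texture/masking construction, not necessarily the exact operation used in the system, and the biologically based, spatially local filters are not modeled here:

```python
import numpy as np

rng = np.random.default_rng(0)

def phase_randomize(img, rng=rng):
    """Keep an image's Fourier amplitude spectrum but replace its phase.

    Drawing the new phase from the FFT of a real random field keeps it
    conjugate-symmetric, so the inverse transform is (numerically) real.
    """
    F = np.fft.fft2(img)
    amp = np.abs(F)
    random_phase = np.angle(np.fft.fft2(rng.standard_normal(img.shape)))
    return np.fft.ifft2(amp * np.exp(1j * random_phase)).real

img = rng.random((64, 64))     # stand-in for a source texture
masked = phase_randomize(img)  # same power spectrum, scrambled structure
```

    Because the amplitude spectrum is untouched, the result has the same spatial-frequency content as the original while its recognizable structure is destroyed, the basic ingredient of a spectral masking pattern.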

  20. A pencil beam approach to proton computed tomography

    SciTech Connect

    Rescigno, Regina; Bopp, Cécile; Rousseau, Marc; Brasse, David

    2015-11-15

    Purpose: A new approach to proton computed tomography (pCT) is presented. In this approach, protons are not tracked one-by-one but a beam of particles is considered instead. The elements of the pCT reconstruction problem (residual energy and path) are redefined on the basis of this new approach. An analytical image reconstruction algorithm applicable to this scenario is also proposed. Methods: The pencil beam (PB) and its propagation in matter were modeled by making use of the generalization of the Fermi–Eyges theory to account for multiple Coulomb scattering (MCS). This model was integrated into the pCT reconstruction problem, allowing the definition of the mean beam path concept similar to the most likely path (MLP) used in the single-particle approach. A numerical validation of the model was performed. The algorithm of filtered backprojection along MLPs was adapted to the beam-by-beam approach. The acquisition of a perfect proton scan was simulated and the data were used to reconstruct images of the relative stopping power of the phantom with the single-proton and beam-by-beam approaches. The resulting images were compared in a qualitative way. Results: The parameters of the modeled PB (mean and spread) were compared to Monte Carlo results in order to validate the model. For a water target, good agreement was found for the mean value of the distributions. As far as the spread is concerned, depth-dependent discrepancies as large as 2%–3% were found. For a heterogeneous phantom, discrepancies in the distribution spread ranged from 6% to 8%. The image reconstructed with the beam-by-beam approach showed a high level of noise compared to the one reconstructed with the classical approach. Conclusions: The PB approach to proton imaging may allow technical challenges imposed by the current proton-by-proton method to be overcome. In this framework, an analytical algorithm is proposed. Further work will involve a detailed study of the performance and limitations of
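    The Fermi-Eyges description gives the lateral variance of a pencil beam at depth z as an integral of the scattering power T over the upstream path. A numerical sketch, assuming a constant T in water for simplicity (the paper's model is more general and depth-dependent):

```python
import numpy as np

def fermi_eyges_sigma(depths, T):
    """Lateral spread sigma(z) of a pencil beam from the Fermi-Eyges
    second moment, sigma^2(z) = int_0^z T(u) * (z - u)^2 du, where T is
    the scattering power. Trapezoidal quadrature on the depth grid."""
    sig2 = np.zeros_like(depths)
    for i, z in enumerate(depths):
        u = depths[: i + 1]
        f = T[: i + 1] * (z - u) ** 2
        sig2[i] = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(u))
    return np.sqrt(sig2)

z = np.linspace(0.0, 10.0, 201)   # depth in water, cm
T = np.full_like(z, 1.0e-3)       # rad^2/cm; constant T is a simplification
sigma = fermi_eyges_sigma(z, T)   # analytic result for constant T: sqrt(T z^3 / 3)
```

    The cubic growth of the variance with depth is what makes the mean-beam-path treatment (rather than straight rays) necessary for pCT reconstruction.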

  1. Exploiting Self-organization in Bioengineered Systems: A Computational Approach.

    PubMed

    Davis, Delin; Doloman, Anna; Podgorski, Gregory J; Vargis, Elizabeth; Flann, Nicholas S

    2017-01-01

    The productivity of bioengineered cell factories is limited by inefficiencies in nutrient delivery and waste and product removal. Current solution approaches explore changes in the physical configurations of the bioreactors. This work investigates the possibilities of exploiting self-organizing vascular networks to support producer cells within the factory. A computational model simulates de novo vascular development of endothelial-like cells and the resultant network functioning to deliver nutrients and extract product and waste from the cell culture. Microbial factories with vascular networks are evaluated for their scalability, robustness, and productivity compared to the cell factories without a vascular network. Initial studies demonstrate that at least an order of magnitude increase in production is possible, the system can be scaled up, and the self-organization of an efficient vascular network is robust. The work suggests that bioengineered multicellularity may offer efficiency improvements difficult to achieve with physical engineering approaches.

  2. Identification of Protein–Excipient Interaction Hotspots Using Computational Approaches

    PubMed Central

    Barata, Teresa S.; Zhang, Cheng; Dalby, Paul A.; Brocchini, Steve; Zloh, Mire

    2016-01-01

    Protein formulation development relies on the selection of excipients that inhibit protein–protein interactions preventing aggregation. Empirical strategies involve screening many excipient and buffer combinations using forced degradation studies. Such methods do not readily provide information on intermolecular interactions responsible for the protective effects of excipients. This study describes a molecular docking approach to screen and rank interactions allowing for the identification of protein–excipient hotspots to aid in the selection of excipients to be experimentally screened. Previously published work with Drosophila Su(dx) was used to develop and validate the computational methodology, which was then used to determine the formulation hotspots for Fab A33. Commonly used excipients were examined and compared to the regions in Fab A33 prone to protein–protein interactions that could lead to aggregation. This approach could provide information on a molecular level about the protective interactions of excipients in protein formulations to aid the more rational development of future formulations. PMID:27258262

  3. A Computational Approach for Identifying Synergistic Drug Combinations

    PubMed Central

    Gayvert, Kaitlyn M.; Aly, Omar; Bosenberg, Marcus W.; Stern, David F.; Elemento, Olivier

    2017-01-01

    A promising alternative to address the problem of acquired drug resistance is to rely on combination therapies. Identification of the right combinations is often accomplished through trial and error, a labor and resource intensive process whose scale quickly escalates as more drugs can be combined. To address this problem, we present a broad computational approach for predicting synergistic combinations using easily obtainable single drug efficacy, no detailed mechanistic understanding of drug function, and limited drug combination testing. When applied to mutant BRAF melanoma, we found that our approach exhibited significant predictive power. Additionally, we validated previously untested synergy predictions involving anticancer molecules. As additional large combinatorial screens become available, this methodology could prove to be impactful for identification of drug synergy in context of other types of cancers. PMID:28085880
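    A common baseline for this kind of prediction, though not necessarily the score used in the paper, is Bliss independence: from single-drug efficacies alone one computes the expected combination effect, and an observed excess over it suggests synergy:

```python
def bliss_expected(e_a, e_b):
    """Expected fractional inhibition of a two-drug combination under
    Bliss independence, given single-drug inhibitions e_a, e_b in [0, 1]."""
    return e_a + e_b - e_a * e_b

def bliss_excess(e_a, e_b, e_observed):
    """Observed minus expected inhibition: positive values suggest
    synergy, negative values antagonism."""
    return e_observed - bliss_expected(e_a, e_b)

# Hypothetical numbers: two drugs giving 40% and 30% inhibition alone,
# and 75% inhibition when combined.
excess = bliss_excess(0.40, 0.30, 0.75)  # positive, so nominally synergistic
```

    Ranking candidate pairs by such an excess score, then validating only the top of the list experimentally, is the general workflow the abstract describes.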

  4. Exploiting Self-organization in Bioengineered Systems: A Computational Approach

    PubMed Central

    Davis, Delin; Doloman, Anna; Podgorski, Gregory J.; Vargis, Elizabeth; Flann, Nicholas S.

    2017-01-01

    The productivity of bioengineered cell factories is limited by inefficiencies in nutrient delivery and waste and product removal. Current solution approaches explore changes in the physical configurations of the bioreactors. This work investigates the possibilities of exploiting self-organizing vascular networks to support producer cells within the factory. A computational model simulates de novo vascular development of endothelial-like cells and the resultant network functioning to deliver nutrients and extract product and waste from the cell culture. Microbial factories with vascular networks are evaluated for their scalability, robustness, and productivity compared to the cell factories without a vascular network. Initial studies demonstrate that at least an order of magnitude increase in production is possible, the system can be scaled up, and the self-organization of an efficient vascular network is robust. The work suggests that bioengineered multicellularity may offer efficiency improvements difficult to achieve with physical engineering approaches. PMID:28503548

  5. Proof in Transformation Geometry

    ERIC Educational Resources Information Center

    Bell, A. W.

    1971-01-01

    The first of three articles showing how inductively-obtained results in transformation geometry may be organized into a deductive system. This article discusses two approaches to enlargement (dilatation), one using coordinates and the other using synthetic methods. (MM)

  6. Proof in Transformation Geometry

    ERIC Educational Resources Information Center

    Bell, A. W.

    1971-01-01

    The first of three articles showing how inductively-obtained results in transformation geometry may be organized into a deductive system. This article discusses two approaches to enlargement (dilatation), one using coordinates and the other using synthetic methods. (MM)

  7. Facilitating Understandings of Geometry.

    ERIC Educational Resources Information Center

    Pappas, Christine C.; Bush, Sara

    1989-01-01

    Illustrates some learning encounters for facilitating first graders' understanding of geometry. Describes some of children's approaches using Cuisenaire rods and teacher's intervening. Presents six problems involving various combinations of Cuisenaire rods and cubes. (YP)

  8. Integration of Computational Geometry, Finite Element, and Multibody System Algorithms for the Development of New Computational Methodology for High-Fidelity Vehicle Systems Modeling and Simulation

    DTIC Science & Technology

    2013-04-11

    suited for efficient communications with CAD systems. It is the main objective of phase I of this SBIR project to demonstrate the feasibility of developing a...civilian wheeled and tracked vehicle models that include significant details. The new software technology will allow for: 1) preserving CAD geometry

  9. Integration of Computational Geometry, Finite Element, and Multibody System Algorithms for the Development of New Computational Methodology for High-Fidelity Vehicle Systems Modeling and Simulation. ADDENDUM

    DTIC Science & Technology

    2013-11-12

    suited for efficient communications with CAD systems. It is the main objective of phase I and Phase I Option of this SBIR project to demonstrate the feasibility of developing a new MBS...wheeled and tracked vehicle models that include significant details. The new software technology will allow for: 1) preserving CAD geometry when FE

  10. Computational inference of gene regulatory networks: Approaches, limitations and opportunities.

    PubMed

    Banf, Michael; Rhee, Seung Y

    2017-01-01

    Gene regulatory networks lie at the core of cell function control. In E. coli and S. cerevisiae, the study of gene regulatory networks has led to the discovery of regulatory mechanisms responsible for the control of cell growth, differentiation and responses to environmental stimuli. In plants, computational rendering of gene regulatory networks is gaining momentum, thanks to the recent availability of high-quality genomes and transcriptomes and development of computational network inference approaches. Here, we review current techniques, challenges and trends in gene regulatory network inference and highlight challenges and opportunities for plant science. We provide plant-specific application examples to guide researchers in selecting methodologies that suit their particular research questions. Given the interdisciplinary nature of gene regulatory network inference, we tried to cater to both biologists and computer scientists to help them engage in a dialogue about concepts and caveats in network inference. Specifically, we discuss problems and opportunities in heterogeneous data integration for eukaryotic organisms and common caveats to be considered during network model evaluation. This article is part of a Special Issue entitled: Plant Gene Regulatory Mechanisms and Networks, edited by Dr. Erich Grotewold and Dr. Nathan Springer.
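    The simplest network-inference baseline against which the reviewed methods are usually compared is coexpression thresholding. A minimal sketch (the threshold and toy data are arbitrary; regression-, tree-, and information-theoretic methods discussed in the review are far richer):

```python
import numpy as np

rng = np.random.default_rng(1)

def coexpression_network(expr, threshold=0.8):
    """Naive network inference: link two genes whenever the absolute
    Pearson correlation of their expression profiles exceeds a threshold.

    expr: genes x samples expression matrix.
    Returns a boolean adjacency matrix with no self-edges.
    """
    C = np.corrcoef(expr)
    A = np.abs(C) >= threshold
    np.fill_diagonal(A, False)
    return A

# Toy data: gene 1 is a noisy copy of gene 0; gene 2 is independent.
g0 = rng.standard_normal(50)
expr = np.vstack([g0,
                  g0 + 0.1 * rng.standard_normal(50),
                  rng.standard_normal(50)])
A = coexpression_network(expr)  # edge 0-1 present, gene 2 isolated
```

    Note that coexpression is symmetric and says nothing about regulatory direction, one of the caveats in network model evaluation the review highlights.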

  11. Learning about modes of speciation by computational approaches.

    PubMed

    Becquet, Céline; Przeworski, Molly

    2009-10-01

    How often do the early stages of speciation occur in the presence of gene flow? To address this enduring question, a number of recent papers have used computational approaches, estimating parameters of simple divergence models from multilocus polymorphism data collected in closely related species. Applications to a variety of species have yielded extensive evidence for migration, with the results interpreted as supporting the widespread occurrence of parapatric speciation. Here, we conduct a simulation study to assess the reliability of such inferences, using a program that we recently developed, MIMAR (MCMC estimation of the isolation-migration model allowing for recombination), as well as the isolation-migration (IM) program of Hey and Nielsen (2004). We find that when one of the many assumptions of the isolation-migration model is violated, the methods tend to yield biased estimates of the parameters, potentially lending spurious support for allopatric or parapatric divergence. More generally, our results highlight the difficulty in drawing inferences about modes of speciation from the existing computational approaches alone.

  12. Computational Approach to Dendritic Spine Taxonomy and Shape Transition Analysis

    PubMed Central

    Bokota, Grzegorz; Magnowska, Marta; Kuśmierczyk, Tomasz; Łukasik, Michał; Roszkowska, Matylda; Plewczynski, Dariusz

    2016-01-01

    The common approach in morphological analysis of dendritic spines of mammalian neuronal cells is to categorize spines into subpopulations based on whether they are stubby, mushroom, thin, or filopodia shaped. The corresponding cellular models of synaptic plasticity, long-term potentiation, and long-term depression associate the synaptic strength with either spine enlargement or spine shrinkage. Although a variety of automatic spine segmentation and feature extraction methods were developed recently, no approaches allowing for an automatic and unbiased distinction between dendritic spine subpopulations and detailed computational models of spine behavior exist. We propose an automatic and statistically based method for the unsupervised construction of spine shape taxonomy based on arbitrary features. The taxonomy is then utilized in the newly introduced computational model of behavior, which relies on transitions between shapes. Models of different populations are compared using supplied bootstrap-based statistical tests. We compared two populations of spines at two time points. The first population was stimulated with long-term potentiation, and the other in the resting state was used as a control. The comparison of shape transition characteristics allowed us to identify the differences between population behaviors. Although some extreme changes were observed in the stimulated population, statistically significant differences were found only when whole models were compared. The source code of our software is freely available for non-commercial use. Contact: d.plewczynski@cent.uw.edu.pl. PMID:28066226
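    Once spines have been assigned to shape classes at two time points, the transition model reduces to estimating a row-stochastic matrix from paired labels. A bookkeeping sketch using the four classical classes (the paper's taxonomy is instead built, unsupervised, from arbitrary features, and the labels below are hypothetical):

```python
import numpy as np

SHAPES = ["stubby", "mushroom", "thin", "filopodia"]

def transition_matrix(before, after, classes=SHAPES):
    """Row-stochastic matrix P[i, j] = P(shape j at t2 | shape i at t1),
    estimated from paired per-spine labels at the two time points."""
    idx = {c: k for k, c in enumerate(classes)}
    counts = np.zeros((len(classes), len(classes)))
    for b, a in zip(before, after):
        counts[idx[b], idx[a]] += 1
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0  # leave rows of unobserved classes as zeros
    return counts / rows

# Hypothetical paired labels for six spines:
before = ["thin", "thin", "mushroom", "stubby", "thin", "mushroom"]
after  = ["mushroom", "thin", "mushroom", "stubby", "mushroom", "thin"]
P = transition_matrix(before, after)
```

    Comparing such matrices between a stimulated and a control population (with bootstrap resampling of the spines for significance) is the shape-transition analysis the abstract describes.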

  13. Crowd Computing as a Cooperation Problem: An Evolutionary Approach

    NASA Astrophysics Data System (ADS)

    Christoforou, Evgenia; Fernández Anta, Antonio; Georgiou, Chryssis; Mosteiro, Miguel A.; Sánchez, Angel

    2013-05-01

    Cooperation is one of the socio-economic issues that has received the most attention from the physics community. The problem has been mostly considered by studying games such as the Prisoner's Dilemma or the Public Goods Game. Here, we take a step forward by studying cooperation in the context of crowd computing. We introduce a model loosely based on Principal-agent theory in which people (workers) contribute to the solution of a distributed problem by computing answers and reporting to the problem proposer (master). To go beyond classical approaches involving the concept of Nash equilibrium, we work on an evolutionary framework in which both the master and the workers update their behavior through reinforcement learning. Using a Markov chain approach, we show theoretically that under certain (not very restrictive) conditions, the master can ensure the reliability of the answer resulting from the process. Then, we study the model by numerical simulations, finding that convergence, meaning that the system reaches a point in which it always produces reliable answers, may in general be much faster than the upper bounds given by the theoretical calculation. We also discuss the effects of the master's level of tolerance to defectors, about which the theory does not provide information. The discussion shows that the system works even with very large tolerances. We conclude with a discussion of our results and possible directions to carry this research further.
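    The flavor of such a model can be reproduced in a few lines: workers keep a propensity to compute honestly, the master audits rounds with some probability, and payoffs reinforce behavior. All payoff values and the update rule below are assumptions for illustration, not the paper's exact dynamics:

```python
import random

random.seed(7)

def simulate(n_workers=20, rounds=2000, p_audit=0.3,
             reward=1.0, penalty=2.0, lr=0.01):
    """Toy master/worker reinforcement dynamics. Each worker keeps a
    propensity to compute honestly; the master audits a round with
    probability p_audit, rewarding honest answers (which carry a
    computation cost) and penalizing detected cheats."""
    p_honest = [0.5] * n_workers
    for _ in range(rounds):
        audited = random.random() < p_audit
        for i in range(n_workers):
            honest = random.random() < p_honest[i]
            if honest:
                payoff = reward - 0.5              # reward minus computing cost
            else:
                payoff = -penalty if audited else reward
            # Reinforcement: move the propensity toward the acted choice
            # in proportion to the (positive or negative) payoff.
            target = 1.0 if honest else 0.0
            p_honest[i] += lr * payoff * (target - p_honest[i])
            p_honest[i] = min(1.0, max(0.0, p_honest[i]))
    return p_honest

final = simulate()
mean_honesty = sum(final) / len(final)  # drifts toward honest behavior
```

    With these (assumed) numbers the expected payoff of honesty exceeds that of cheating, so the population converges to reliable answers, the qualitative outcome the abstract reports under its theoretical conditions.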

  14. Computational Approaches for Translational Oncology: Concepts and Patents.

    PubMed

    Scianna, Marco; Munaron, Luca

    2016-01-01

    Cancer is a heterogeneous disease, which is based on an intricate network of processes at different spatiotemporal scales, from the genome to the tissue level. Hence the necessity for biomedical and pharmaceutical research to work in a multiscale fashion. In this respect, significant help derives from collaboration with the theoretical sciences. Mathematical models can in fact provide insights into tumor-related processes and support clinical oncologists in the design of treatment regimen, dosage, schedule and toxicity. The main objective of this article is to review the recent computational-based patents which tackle some relevant aspects of tumor treatment. We first analyze a series of patents concerning the purposing or repurposing of anti-tumor compounds. These approaches rely on pharmacokinetics and pharmacodynamics modules that incorporate data obtained in the different phases of clinical trials. Similar methods are also at the basis of other patents included in this paper, which deal with treatment optimization, in terms of maximizing therapy efficacy while minimizing side effects on the host. A group of patents predicting drug response and tumor evolution by the use of kinetics graphs are commented on as well. We finally focus on patents that implement informatics tools to map and screen biological, medical, and pharmaceutical knowledge. Despite promising aspects (and an increasing amount of related literature), we found few computational-based patents: significant effort is still needed to allow modeling approaches to become an integral component of pharmaceutical research.

  15. Computational approaches to predict bacteriophage-host relationships.

    PubMed

    Edwards, Robert A; McNair, Katelyn; Faust, Karoline; Raes, Jeroen; Dutilh, Bas E

    2016-03-01

    Metagenomics has changed the face of virus discovery by enabling the accurate identification of viral genome sequences without requiring isolation of the viruses. As a result, metagenomic virus discovery leaves the first and most fundamental question about any novel virus unanswered: What host does the virus infect? The diversity of the global virosphere and the volumes of data obtained in metagenomic sequencing projects demand computational tools for virus-host prediction. We focus on bacteriophages (phages, viruses that infect bacteria), the most abundant and diverse group of viruses found in environmental metagenomes. By analyzing 820 phages with annotated hosts, we review and assess the predictive power of in silico phage-host signals. Sequence homology approaches are the most effective at identifying known phage-host pairs. Compositional and abundance-based methods contain significant signal for phage-host classification, providing opportunities for analyzing the unknowns in viral metagenomes. Together, these computational approaches further our knowledge of the interactions between phages and their hosts. Importantly, we find that all reviewed signals significantly link phages to their hosts, illustrating how current knowledge and insights about the interaction mechanisms and ecology of coevolving phages and bacteria can be exploited to predict phage-host relationships, with potential relevance for medical and industrial applications.
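    Among the compositional signals assessed, a simple one is k-mer (e.g. tetranucleotide) frequency similarity between a phage genome and candidate host genomes, exploiting the tendency of coevolving phages to mimic host composition. A toy sketch with hypothetical sequences:

```python
from itertools import product
from collections import Counter

def kmer_profile(seq, k=4):
    """Normalized k-mer frequency vector of a DNA string, in a fixed
    lexicographic order over the ACGT alphabet."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(sum(counts.values()), 1)
    return [counts["".join(p)] / total for p in product("ACGT", repeat=k)]

def predict_host(phage, hosts, k=4):
    """Alignment-free host prediction: choose the candidate whose genome
    composition is closest (L1 distance) to the phage's. A toy version
    of the composition-based signals the review benchmarks."""
    pv = kmer_profile(phage, k)
    def dist(name):
        hv = kmer_profile(hosts[name], k)
        return sum(abs(a - b) for a, b in zip(pv, hv))
    return min(hosts, key=dist)

# Hypothetical genomes: host A is GC-rich, host B is AT-rich, and the
# phage shares host A's composition.
hostA = "GCGGCCGCATGCGGCCGG" * 40
hostB = "ATATTAATTTAAATATTA" * 40
phage = "GCGGCCGCATGCGGCCGG" * 10
pred = predict_host(phage, {"A": hostA, "B": hostB})
```

    Real pipelines combine such compositional distances with homology and abundance co-variation, which the review finds to be the strongest individual signal.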

  16. Computational approaches to understand cardiac electrophysiology and arrhythmias

    PubMed Central

    Roberts, Byron N.; Yang, Pei-Chi; Behrens, Steven B.; Moreno, Jonathan D.

    2012-01-01

    Cardiac rhythms arise from electrical activity generated by precisely timed opening and closing of ion channels in individual cardiac myocytes. These impulses spread throughout the cardiac muscle to manifest as electrical waves in the whole heart. Regularity of electrical waves is critically important since they signal the heart muscle to contract, driving the primary function of the heart to act as a pump and deliver blood to the brain and vital organs. When electrical activity goes awry during a cardiac arrhythmia, the pump does not function, the brain does not receive oxygenated blood, and death ensues. For more than 50 years, mathematically based models of cardiac electrical activity have been used to improve understanding of basic mechanisms of normal and abnormal cardiac electrical function. Computer-based modeling approaches to understand cardiac activity are uniquely helpful because they allow for distillation of complex emergent behaviors into the key contributing components underlying them. Here we review the latest advances and novel concepts in the field as they relate to understanding the complex interplay between electrical, mechanical, structural, and genetic mechanisms during arrhythmia development at the level of ion channels, cells, and tissues. We also discuss the latest computational approaches to guiding arrhythmia therapy. PMID:22886409
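    At the single-cell level, the kind of model described can be illustrated with the FitzHugh-Nagumo equations, a classic two-variable reduction of excitable ion-channel dynamics (far simpler than the biophysically detailed myocyte models the review covers; parameters are the textbook values):

```python
import numpy as np

def fitzhugh_nagumo(I_ext, t_end=200.0, dt=0.01, a=0.7, b=0.8, eps=0.08):
    """Forward-Euler integration of the FitzHugh-Nagumo model:
        dv/dt = v - v^3/3 - w + I_ext   (fast voltage-like variable)
        dw/dt = eps * (v + a - b * w)   (slow recovery variable)
    """
    n = int(t_end / dt)
    v = np.empty(n); w = np.empty(n)
    v[0], w[0] = -1.0, 1.0
    for i in range(n - 1):
        v[i + 1] = v[i] + dt * (v[i] - v[i] ** 3 / 3 - w[i] + I_ext)
        w[i + 1] = w[i] + dt * eps * (v[i] + a - b * w[i])
    return v, w

# A sustained stimulus destabilizes the rest state and produces
# repetitive spiking, the cell-level analogue of a pacing rhythm.
v, _ = fitzhugh_nagumo(I_ext=0.5)
```

    Coupling many such units diffusively is the standard route from the single-cell equations to the tissue-level wave propagation (and reentrant arrhythmias) discussed in the review.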

  17. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    NASA Astrophysics Data System (ADS)

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach allows reduction in computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can be similarly handled using appropriate penalty functions. We illustrate the proposed approach to minimize the expected execution cost and Conditional Value-at-Risk (CVaR).
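    The risk term can be estimated directly from simulated cost samples: CVaR at level α is the mean of the worst (1−α) fraction of outcomes. A sketch with a one-parameter execution schedule in a toy linear-impact market (all model details below are assumptions for illustration, not the paper's formulation):

```python
import numpy as np

rng = np.random.default_rng(3)

def cvar(samples, alpha=0.95):
    """Conditional Value-at-Risk: mean of the worst (1 - alpha) tail."""
    s = np.sort(samples)
    return s[int(np.ceil(alpha * len(s))):].mean()

def execution_costs(theta, n_paths=20000, shares=1.0, periods=10,
                    impact=0.1, vol=0.2):
    """Monte Carlo execution costs of a one-parameter schedule:
    theta = 0 trades everything immediately (all impact, no price risk),
    theta = 1 spreads trades evenly (less impact, more price risk)."""
    even = np.full(periods, shares / periods)
    front = np.zeros(periods); front[0] = shares
    trades = (1 - theta) * front + theta * even
    noise = rng.standard_normal((n_paths, periods)) * vol
    prices = np.cumsum(noise, axis=1)        # price deviation per period
    return (impact * trades ** 2).sum() + prices @ trades

costs = execution_costs(theta=1.0)
objective = costs.mean() + 1.0 * cvar(costs)  # expected cost + CVaR penalty
```

    Minimizing this sampled objective over the strategy parameter (here a single theta, in the paper a richer parametric representation) is the static optimization the abstract refers to.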

  18. Tally and geometry definition influence on the computing time in radiotherapy treatment planning with MCNP Monte Carlo code.

    PubMed

    Juste, B; Miro, R; Gallardo, S; Santos, A; Verdu, G

    2006-01-01

    The present work has simulated the photon and electron transport in a Theratron 780 (MDS Nordion) (60)Co radiotherapy unit, using the Monte Carlo transport code MCNP (Monte Carlo N-Particle), version 5. With a view toward the computational efficiency required for practical radiotherapy treatment planning, this work focuses mainly on the analysis of dose results and on the computing time required by the different tallies applied in the model to speed up calculations.

  19. An adaptive three-dimensional Cartesian approach for the parallel computation of inviscid flow about static and dynamic configurations

    NASA Astrophysics Data System (ADS)

    Hunt, Jason Daniel

    An adaptive three-dimensional Cartesian approach for the parallel computation of compressible flow about static and dynamic configurations has been developed and validated. This is a further step towards a goal that remains elusive for CFD codes: the ability to model complex dynamic-geometry problems in a quick and automated manner. The underlying flow-solution method solves the three-dimensional Euler equations using a MUSCL-type finite-volume approach to achieve higher-order spatial accuracy. The flow solution, either steady or unsteady, is advanced in time via a two-stage time-stepping scheme. This basic solution method has been incorporated into a parallel block-adaptive Cartesian framework, using a block-octree data structure to represent varying spatial resolution and to compute flow solutions in parallel. The ability to represent static geometric configurations has been introduced by cutting a geometric configuration out of a background block-adaptive Cartesian grid, then solving for the flow on the resulting volume grid. This approach has been extended to dynamic geometric configurations: components of a given configuration were permitted to move independently, according to prescribed rigid-body motion. Two flow-solver difficulties arise as a result of introducing static and dynamic configurations: small time steps, and the disappearance/appearance of cell volume during a time integration step. Both of these problems have been remedied through cell merging. The concept of cell merging and its implementation within the parallel block-adaptive method is described. While the parallelization of certain grid-generation and cell-cutting routines resulted from this work, the most significant contribution was developing the novel cell-merging paradigm that was incorporated into the parallel block-adaptive framework. Lastly, example simulations both to validate the developed method and to demonstrate its full capabilities have been carried out. A simple, steady

  20. On the Geometry of Space, Time, Energy, and Mass: Empirical Validation of the Computational Unified Field Theory

    NASA Astrophysics Data System (ADS)

    Bentwich, Jonathan

    The principal contradiction that exists between Quantum Mechanics and Relativity Theory constitutes the biggest unresolved enigma in modern Science. To date, none of the candidate theory-of-everything (TOE) models has received any satisfactory empirical validation. A new hypothetical model called the `Computational Unified Field Theory' (CUFT) was discovered over the past three years. In this paper it will be shown that the CUFT is capable of resolving the key theoretical inconsistencies between quantum and relativistic models. Additionally, the CUFT fully integrates the four physical parameters of space, time, energy and mass as secondary computational features of a singular universal computational principle (UCP) which produces the entire physical universe as an extremely rapid series of spatially exhaustive `Universal Simultaneous Computational Frames' (USCF) embodied within a novel `Universal Computational Formula' (UCF). An empirical validation of the CUFT as a satisfactory TOE is given based on the recently discovered `Proton Radius Puzzle', which confirms one of the CUFT's `differential-critical predictions' distinguishing it from both quantum and relativistic models.

  1. Computational Approaches for Microalgal Biofuel Optimization: A Review

    PubMed Central

    Chaiboonchoe, Amphun

    2014-01-01

    The increased demand and consumption of fossil fuels have raised interest in finding renewable energy sources throughout the globe. Much focus has been placed on optimizing microorganisms and primarily microalgae, to efficiently produce compounds that can substitute for fossil fuels. However, the path to achieving economic feasibility is likely to require strain optimization through using available tools and technologies in the fields of systems and synthetic biology. Such approaches invoke a deep understanding of the metabolic networks of the organisms and their genomic and proteomic profiles. The advent of next generation sequencing and other high throughput methods has led to a major increase in availability of biological data. Integration of such disparate data can help define the emergent metabolic system properties, which is of crucial importance in addressing biofuel production optimization. Herein, we review major computational tools and approaches developed and used in order to potentially identify target genes, pathways, and reactions of particular interest to biofuel production in algae. As the use of these tools and approaches has not been fully implemented in algal biofuel research, the aim of this review is to highlight the potential utility of these resources toward their future implementation in algal research. PMID:25309916

  2. Computational approaches for microalgal biofuel optimization: a review.

    PubMed

    Koussa, Joseph; Chaiboonchoe, Amphun; Salehi-Ashtiani, Kourosh

    2014-01-01

    The increased demand for and consumption of fossil fuels have raised interest in finding renewable energy sources throughout the globe. Much focus has been placed on optimizing microorganisms, primarily microalgae, to efficiently produce compounds that can substitute for fossil fuels. However, the path to achieving economic feasibility is likely to require strain optimization using available tools and technologies from the fields of systems and synthetic biology. Such approaches require a deep understanding of the metabolic networks of the organisms and their genomic and proteomic profiles. The advent of next-generation sequencing and other high-throughput methods has led to a major increase in the availability of biological data. Integration of such disparate data can help define the emergent metabolic system properties, which is of crucial importance in addressing biofuel production optimization. Herein, we review major computational tools and approaches developed and used to identify target genes, pathways, and reactions of particular interest to biofuel production in algae. As the use of these tools and approaches has not been fully implemented in algal biofuel research, the aim of this review is to highlight their potential utility toward future implementation in algal research.
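
One widely used computational approach in this space is constraint-based metabolic modeling (flux balance analysis), which finds a flux distribution maximizing a target reaction subject to steady-state stoichiometry. A toy sketch, assuming SciPy is available; the three-reaction network below is invented purely for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix (rows: metabolites A, B; columns: reactions).
# R1: -> A (uptake), R2: A -> B, R3: B -> product (export)
S = np.array([
    [1.0, -1.0,  0.0],   # metabolite A balance
    [0.0,  1.0, -1.0],   # metabolite B balance
])

# Maximize flux through R3 (linprog minimizes, so negate the objective).
c = np.array([0.0, 0.0, -1.0])
bounds = [(0.0, 10.0), (0.0, 100.0), (0.0, 100.0)]  # uptake capped at 10

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print(res.x)  # steady state forces v1 = v2 = v3, so R3 flux hits the uptake cap
```

In real algal models the matrix S has thousands of reactions and the objective is a biomass or lipid-synthesis pseudo-reaction, but the linear-programming core is the same.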

  3. A Computer Program for Variable-Geometry Single-Stage Axial Compressor Test Data Analysis (UD0400).

    DTIC Science & Technology

    1981-09-01

    coefficients A_n, B_n, and D_n. These are determined from A_1 = 1.0, B_1 = -1.0, D_1 = 0.0 (Equation 15) and from divided differences of the tabulated x_n, y_n values (Equation 16; garbled in the source). The remaining coefficients M_n may be determined from Equation 17 (garbled in the source) and M_n = (D_n - B_n M_{n+1}) / A_n (Equation 18); in Equation 18, n is varied from N - 1 to 1. e. Computing [...] a Given Static Pressure. One option in the program is to determine the blockage coefficient at a computing station such that the computed static
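
As far as it can be recovered from the garbled excerpt above, Equation 18 is a backward sweep for the spline coefficients M_n. A minimal sketch of that recurrence (coefficient values invented for illustration; the report's Equations 15-17, which supply the real A, B, D, are not recoverable here):

```python
# Back-substitution recurrence M_n = (D_n - B_n * M_{n+1}) / A_n,
# swept from n = N-1 down to 1 (0-indexed here), as in Equation 18.
def back_substitute(A, B, D, M_last):
    """Solve A[n]*M[n] + B[n]*M[n+1] = D[n] for M, given the final value."""
    N = len(A)
    M = [0.0] * (N + 1)
    M[N] = M_last
    for n in range(N - 1, -1, -1):       # backward sweep
        M[n] = (D[n] - B[n] * M[n + 1]) / A[n]
    return M

# Invented demonstration coefficients
A = [2.0, 3.0, 4.0]
B = [1.0, 1.0, 1.0]
D = [5.0, 7.0, 9.0]
M = back_substitute(A, B, D, M_last=1.0)
# Each equation A[n]*M[n] + B[n]*M[n+1] = D[n] is satisfied by construction
```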

  4. Suggested Approaches to the Measurement of Computer Anxiety.

    ERIC Educational Resources Information Center

    Toris, Carol

    Psychologists can gain insight into human behavior by examining what people feel about, know about, and do with, computers. Two extreme reactions to computers are computer phobia, or anxiety, and computer addiction, or "hacking". A four-part questionnaire was developed to measure computer anxiety. The first part is a projective technique which…

  5. A machine-learning approach for computation of fractional flow reserve from coronary computed tomography.

    PubMed

    Itu, Lucian; Rapaka, Saikiran; Passerini, Tiziano; Georgescu, Bogdan; Schwemmer, Chris; Schoebinger, Max; Flohr, Thomas; Sharma, Puneet; Comaniciu, Dorin

    2016-07-01

    Fractional flow reserve (FFR) is a functional index quantifying the severity of coronary artery lesions and is clinically obtained using an invasive, catheter-based measurement. Recently, physics-based models have shown great promise in being able to noninvasively estimate FFR from patient-specific anatomical information, e.g., obtained from computed tomography scans of the heart and the coronary arteries. However, these models have high computational demand, limiting their clinical adoption. In this paper, we present a machine-learning-based model for predicting FFR as an alternative to physics-based approaches. The model is trained on a large database of synthetically generated coronary anatomies, where the target values are computed using the physics-based model. The trained model predicts FFR at each point along the centerline of the coronary tree, and its performance was assessed by comparing the predictions against physics-based computations and against invasively measured FFR for 87 patients and 125 lesions in total. Correlation between machine-learning and physics-based predictions was excellent (0.9994, P < 0.001), and no systematic bias was found in Bland-Altman analysis: mean difference was -0.00081 ± 0.0039. Invasive FFR ≤ 0.80 was found in 38 lesions out of 125 and was predicted by the machine-learning algorithm with a sensitivity of 81.6%, a specificity of 83.9%, and an accuracy of 83.2%. The correlation was 0.729 (P < 0.001). Compared with the physics-based computation, average execution time was reduced by more than 80 times, leading to near real-time assessment of FFR. Average execution time went down from 196.3 ± 78.5 s for the CFD model to ∼2.4 ± 0.44 s for the machine-learning model on a workstation with 3.4-GHz Intel i7 8-core processor.
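
The reported sensitivity, specificity, and accuracy follow from thresholding both the invasive and the predicted FFR at 0.80. A minimal sketch with synthetic values (not the study's data; `diagnostic_metrics` is a hypothetical helper name):

```python
# Invasive FFR <= 0.80 defines disease; predictions are thresholded the same way.
def diagnostic_metrics(ffr_pred, ffr_invasive, cutoff=0.80):
    pairs = list(zip(ffr_pred, ffr_invasive))
    tp = sum(p <= cutoff and t <= cutoff for p, t in pairs)
    tn = sum(p > cutoff and t > cutoff for p, t in pairs)
    fp = sum(p <= cutoff and t > cutoff for p, t in pairs)
    fn = sum(p > cutoff and t <= cutoff for p, t in pairs)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(pairs)
    return sensitivity, specificity, accuracy

pred = [0.75, 0.85, 0.78, 0.90, 0.82, 0.70]   # synthetic machine-learning FFR
inv  = [0.72, 0.88, 0.83, 0.91, 0.79, 0.68]   # synthetic invasive FFR
sens, spec, acc = diagnostic_metrics(pred, inv)
```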

  6. Computational Diagnostic: A Novel Approach to View Medical Data.

    SciTech Connect

    Mane, K. K.; Börner, K.

    2007-01-01

    A transition from traditional paper-based medical records to electronic health records is largely underway. The use of electronic records offers tremendous potential to personalize patient diagnosis and treatment. In this paper, we discuss a computational diagnostic tool that uses digital medical records to help doctors gain better insight into a patient's medical condition. The paper details different interactive features of the tool, which offer the potential to practice evidence-based medicine and advance patient diagnosis practices. The healthcare industry is a constantly evolving domain. Research from this domain is often translated into a better understanding of different medical conditions, and this new knowledge often contributes toward improved diagnosis and treatment solutions for patients. But the healthcare industry has lagged in realizing the immediate benefits of this new knowledge, as it still adheres to the traditional paper-based approach to keeping medical records. Recently, however, there has been a drive toward the electronic health record (EHR). An EHR stores patient medical records in digital format and offers the potential to replace paper health records. Earlier EHR attempts replicated the paper layout on the screen, represented a patient's medical history in a graphical time-series format, or provided interactive visualization with 2D/3D images generated from an imaging device. But an EHR can be much more than just an 'electronic view' of the paper record or a collection of images from an imaging device. In this paper, we present an EHR called the 'Computational Diagnostic Tool', which provides a novel computational approach to patient medical data. The developed EHR system is knowledge driven and acts as a clinical decision support tool. The tool provides two visual views of the medical data, and dynamic interaction with the data is supported to help doctors practice evidence-based decision making and make judicious choices about patient care.

  7. Computational Approach to Structural Alerts: Furans, Phenols, Nitroaromatics, and Thiophenes.

    PubMed

    Dang, Na Le; Hughes, Tyler B; Miller, Grover P; Swamidass, S Joshua

    2017-04-17

    Structural alerts are commonly used in drug discovery to identify molecules likely to form reactive metabolites and thereby become toxic. Unfortunately, as useful as structural alerts are, they do not effectively model if, when, and why metabolism renders safe molecules toxic. Toxicity due to a specific structural alert is highly conditional, depending on the metabolism of the alert, the reactivity of its metabolites, dosage, and competing detoxification pathways. A systems approach, which explicitly models these pathways, could more effectively assess the toxicity risk of drug candidates. In this study, we demonstrated that mathematical models of P450 metabolism can predict the context-specific probability that a structural alert will be bioactivated in a given molecule. This study focuses on the furan, phenol, nitroaromatic, and thiophene alerts. Each of these structural alerts can produce reactive metabolites through certain metabolic pathways but not always. We tested whether our metabolism modeling approach, XenoSite, can predict when a given molecule's alerts will be bioactivated. Specifically, we used models of epoxidation, quinone formation, reduction, and sulfur-oxidation to predict the bioactivation of furan-, phenol-, nitroaromatic-, and thiophene-containing drugs. Our models separated bioactivated and not-bioactivated furan-, phenol-, nitroaromatic-, and thiophene-containing drugs with AUC performances of 100%, 73%, 93%, and 88%, respectively. Metabolism models accurately predict whether alerts are bioactivated and thus serve as a practical approach to improve the interpretability and usefulness of structural alerts. We expect that this same computational approach can be extended to most other structural alerts and later integrated into toxicity risk models. This advance is one necessary step toward our long-term goal of building comprehensive metabolic models of bioactivation and detoxification to guide assessment and design of new therapeutic
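
The AUC figures above measure how well the models rank bioactivated alerts above non-bioactivated ones. A minimal sketch of the metric with made-up scores and labels (not the paper's data):

```python
# AUC as the probability that a randomly chosen positive (bioactivated)
# example is scored above a randomly chosen negative one, with ties
# counted as half.
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.4, 0.7, 0.2, 0.1]   # synthetic model outputs
labels = [1,   1,   0,   1,   0,   0]     # 1 = bioactivated
print(auc(scores, labels))  # perfectly separated scores give 1.0
```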

  8. Computational approaches to substrate-based cell motility

    NASA Astrophysics Data System (ADS)

    Ziebert, Falko; Aranson, Igor S.

    2016-07-01

    Substrate-based crawling motility of eukaryotic cells is essential for many biological functions, both in developing and mature organisms. Motility dysfunctions are involved in several life-threatening pathologies such as cancer and metastasis. Motile cells are also a natural realisation of active, self-propelled 'particles', a popular research topic in nonequilibrium physics. Finally, from the materials perspective, assemblies of motile cells and evolving tissues constitute a class of adaptive self-healing materials that respond to the topography, elasticity and surface chemistry of the environment and react to external stimuli. Although a comprehensive understanding of substrate-based cell motility remains elusive, progress has been achieved recently in its modelling on the whole-cell level. Here we survey the most recent advances in computational approaches to cell movement and demonstrate how these models improve our understanding of complex self-organised systems such as living cells.

  9. Computer Modeling of Violent Intent: A Content Analysis Approach

    SciTech Connect

    Sanfilippo, Antonio P.; Mcgrath, Liam R.; Bell, Eric B.

    2014-01-03

    We present a computational approach to modeling the intent of a communication source representing a group or an individual to engage in violent behavior. Our aim is to identify and rank aspects of radical rhetoric that are endogenously related to violent intent to predict the potential for violence as encoded in written or spoken language. We use correlations between contentious rhetoric and the propensity for violent behavior found in documents from radical terrorist and non-terrorist groups and individuals to train and evaluate models of violent intent. We then apply these models to unseen instances of linguistic behavior to detect signs of contention that have a positive correlation with violent intent factors. Of particular interest is the application of violent intent models to social media, such as Twitter, that have proved to serve as effective channels in furthering sociopolitical change.

  10. Computational approaches to substrate-based cell motility

    DOE PAGES

    Ziebert, Falko; Aranson, Igor S.

    2016-07-15

    Substrate-based crawling motility of eukaryotic cells is essential for many biological functions, both in developing and mature organisms. Motility dysfunctions are involved in several life-threatening pathologies such as cancer and metastasis. Motile cells are also a natural realization of active, self-propelled ‘particles’, a popular research topic in nonequilibrium physics. Finally, from the materials perspective, assemblies of motile cells and evolving tissues constitute a class of adaptive self-healing materials that respond to the topography, elasticity, and surface chemistry of the environment and react to external stimuli. Although a comprehensive understanding of substrate-based cell motility remains elusive, progress has been achieved recently in its modeling on the whole-cell level. Here we survey the most recent advances in computational approaches to cell movement and demonstrate how these models improve our understanding of complex self-organized systems such as living cells.

  11. Computational approaches to substrate-based cell motility

    SciTech Connect

    Ziebert, Falko; Aranson, Igor S.

    2016-07-15

    Substrate-based crawling motility of eukaryotic cells is essential for many biological functions, both in developing and mature organisms. Motility dysfunctions are involved in several life-threatening pathologies such as cancer and metastasis. Motile cells are also a natural realization of active, self-propelled ‘particles’, a popular research topic in nonequilibrium physics. Finally, from the materials perspective, assemblies of motile cells and evolving tissues constitute a class of adaptive self-healing materials that respond to the topography, elasticity, and surface chemistry of the environment and react to external stimuli. Although a comprehensive understanding of substrate-based cell motility remains elusive, progress has been achieved recently in its modeling on the whole-cell level. Here we survey the most recent advances in computational approaches to cell movement and demonstrate how these models improve our understanding of complex self-organized systems such as living cells.

  12. Advancing risk assessment of engineered nanomaterials: application of computational approaches.

    PubMed

    Gajewicz, Agnieszka; Rasulev, Bakhtiyor; Dinadayalane, Tandabany C; Urbaszek, Piotr; Puzyn, Tomasz; Leszczynska, Danuta; Leszczynski, Jerzy

    2012-12-01

    Nanotechnology, which develops novel materials at sizes of 100 nm or less, has become one of the most promising areas of human endeavor. Because of their intrinsic properties, nanoparticles are commonly employed in electronics, photovoltaics, catalysis, environmental and space engineering, the cosmetic industry and - finally - in medicine and pharmacy. In that sense, nanotechnology creates great opportunities for the progress of modern medicine. However, recent studies have shown evident toxicity of some nanoparticles to living organisms (toxicity), and their potentially negative impact on environmental ecosystems (ecotoxicity). Lack of available data and low adequacy of experimental protocols prevent comprehensive risk assessment. The purpose of this review is to present the current state of knowledge related to the risks of engineered nanoparticles and to assess the potential for efficient expansion and development of new approaches offered by the application of theoretical and computational methods applicable to the evaluation of nanomaterials. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Systems approaches to computational modeling of the oral microbiome

    PubMed Central

    Dimitrov, Dimiter V.

    2013-01-01

    Current microbiome research has generated tremendous amounts of data providing snapshots of molecular activity in a variety of organisms, environments, and cell types. However, turning this knowledge into a whole-system level of understanding of pathways and processes has proven to be a challenging task. In this review we highlight the applicability of bioinformatics and visualization techniques to large collections of data in order to better understand the information contained in diet-oral microbiome-host mucosal transcriptome interactions. In particular, we focus on the systems biology of Porphyromonas gingivalis in the context of high-throughput computational methods tightly integrated with translational systems medicine. These approaches have applications for both basic research, where we can direct specific laboratory experiments in model organisms and cell cultures, and human disease, where we can validate new mechanisms and biomarkers for prevention and treatment of chronic disorders. PMID:23847548

  14. A General Computational Approach for Repeat Protein Design

    PubMed Central

    Parmeggiani, Fabio; Huang, Po-Ssu; Vorobiev, Sergey; Xiao, Rong; Park, Keunwan; Caprari, Silvia; Su, Min; Jayaraman, Seetharaman; Mao, Lei; Janjua, Haleema; Montelione, Gaetano T.; Hunt, John; Baker, David

    2014-01-01

    Repeat proteins have considerable potential for use as modular binding reagents or biomaterials in biomedical and nanotechnology applications. Here we describe a general computational method for building idealized repeats that integrates available family sequences and structural information with Rosetta de novo protein design calculations. Idealized designs from six different repeat families were generated and experimentally characterized; 80% of the proteins were expressed and soluble and more than 40% were folded and monomeric with high thermal stability. Crystal structures determined for members of three families are within 1 Å root-mean-square deviation to the design models. The method provides a general approach for fast and reliable generation of stable modular repeat protein scaffolds. PMID:25451037
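
The design-vs-crystal agreement above is reported as a root-mean-square deviation over atomic coordinates. A minimal sketch with toy, pre-aligned coordinates (a real comparison would first superpose the structures, e.g. with the Kabsch algorithm):

```python
import numpy as np

def rmsd(X, Y):
    """Root-mean-square deviation between two (N, 3) coordinate arrays (A)."""
    return float(np.sqrt(np.mean(np.sum((X - Y) ** 2, axis=1))))

# Toy "design model" and "crystal structure" coordinates (angstroms)
design  = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [3.0, 0.0, 0.0]])
crystal = design + np.array([0.5, 0.0, 0.0])   # uniform 0.5 A shift
print(rmsd(design, crystal))  # 0.5
```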

  15. Local-basis-function approach to computed tomography

    NASA Astrophysics Data System (ADS)

    Hanson, K. M.; Wecksung, G. W.

    1985-12-01

    In the local basis-function approach, a reconstruction is represented as a linear expansion of basis functions, which are arranged on a rectangular grid and possess a local region of support. The basis functions considered here are positive and may overlap. It is found that basis functions based on cubic B-splines offer significant improvements in the calculational accuracy that can be achieved with iterative tomographic reconstruction algorithms. By employing repetitive basis functions, the computational effort involved in these algorithms can be minimized through the use of tabulated values for the line or strip integrals over a single-basis function. The local nature of the basis functions reduces the difficulties associated with applying local constraints on reconstruction values, such as upper and lower limits. Since a reconstruction is specified everywhere by a set of coefficients, display of a coarsely represented image does not require an arbitrary choice of an interpolation function.
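
The cubic B-spline basis function mentioned above can be sketched as follows (uniform unit knot spacing assumed; the paper's exact normalization may differ). It is positive, supported only on |t| < 2, and shifted copies sum to one, which is what makes tabulated line integrals over a single basis function reusable across the grid:

```python
def cubic_bspline(t):
    """Centered uniform cubic B-spline: positive, nonzero only for |t| < 2."""
    t = abs(t)
    if t < 1.0:
        return (4.0 - 6.0 * t**2 + 3.0 * t**3) / 6.0
    if t < 2.0:
        return (2.0 - t) ** 3 / 6.0
    return 0.0

print(cubic_bspline(0.0))  # central value 2/3; neighbors overlap, as in the paper
```

The partition-of-unity property (shifted copies summing to 1 at any point) is what lets a coarse coefficient grid still represent the reconstruction everywhere without an arbitrary interpolation choice.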

  16. Dynamical Approach Study of Spurious Numerics in Nonlinear Computations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Mansour, Nagi (Technical Monitor)

    2002-01-01

    The last two decades have been an era when computation is ahead of analysis and when very large scale practical computations are increasingly used in poorly understood multiscale complex nonlinear physical problems and non-traditional fields. Ensuring a higher level of confidence in the predictability and reliability (PAR) of these numerical simulations could play a major role in furthering the design, understanding, affordability and safety of our next generation air and space transportation systems, and systems for planetary and atmospheric sciences, and in understanding the evolution and origin of life. The need to guarantee PAR becomes acute when computations offer the ONLY way of solving these types of data-limited problems. Employing theory from nonlinear dynamical systems, some building blocks to ensure a higher level of confidence in PAR of numerical simulations have been revealed by the author and world expert collaborators in relevant fields. Five building blocks with supporting numerical examples were discussed. The next step is to utilize knowledge gained by including nonlinear dynamics, bifurcation and chaos theories as an integral part of the numerical process. The third step is to design integrated criteria for reliable and accurate algorithms that cater to the different multiscale nonlinear physics. This includes but is not limited to the construction of appropriate adaptive spatial and temporal discretizations that are suitable for the underlying governing equations. In addition, a multiresolution wavelets approach for adaptive numerical dissipation/filter controls for high speed turbulence, acoustics and combustion simulations will be sought. These steps are cornerstones for guarding against spurious numerical solutions that are solutions of the discretized counterparts but are not solutions of the underlying governing equations.
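
A classic illustration of such spurious numerics (a standard textbook example, not taken from this paper): explicit Euler applied to the logistic ODE du/dt = u(1 - u) converges to the true steady state u = 1 for small time steps, but beyond the linearized stability limit (h > 2) it locks onto a spurious periodic orbit that solves the discretized map yet is not a solution of the underlying ODE:

```python
def euler_orbit(h, u0=0.9, n=2000):
    """Iterate explicit Euler for du/dt = u*(1 - u) and return the final state."""
    u = u0
    for _ in range(n):
        u = u + h * u * (1.0 - u)
    return u

print(euler_orbit(0.5))   # converges to the true steady state u = 1
print(euler_orbit(2.2))   # spurious period-2 behavior: stays away from u = 1
```

The h = 2.2 iteration is conjugate to the logistic map with r = 3.2, whose attracting 2-cycle is a pure artifact of the discretization, exactly the kind of trap the dynamical-systems building blocks above are meant to detect.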

  17. Computer aided diagnosis of prostate cancer: A texton based approach

    PubMed Central

    Rampun, Andrik; Tiddeman, Bernie; Zwiggelaar, Reyer; Malcolm, Paul

    2016-01-01

    Purpose: In this paper the authors propose a texton-based prostate computer-aided diagnosis approach that bypasses typical feature extraction processes, such as filtering and convolution, which can be computationally expensive. The study focuses on the peripheral zone because 75% of prostate cancers start within this region and the majority of prostate cancers arising within this region are more aggressive than those arising in the transitional zone. Methods: For the model development, square patches were extracted at random locations from malignant and benign regions. Subsequently, extracted patches were aggregated and clustered using k-means clustering to generate textons that represent both regions. All textons together form a texton dictionary, which was used to construct a texton map for every peripheral zone in the training images. Based on the texton map, histogram models for each malignant and benign tissue sample were constructed and used as feature vectors to train our classifiers. In the testing phase, four machine learning algorithms were employed to classify each unknown tissue sample based on its corresponding feature vector. Results: The proposed method was tested on 418 T2-W MR images taken from 45 patients. Evaluation results show that the best three classifiers were Bayesian network (Az = 92.8% ± 5.9%), random forest (89.5% ± 7.1%), and k-NN (86.9% ± 7.5%). These results are comparable to the state-of-the-art in the literature. Conclusions: The authors have developed a prostate computer-aided diagnosis method based on textons using a single modality of T2-W MRI, without the need for typical feature extraction methods such as filtering and convolution. The proposed method could form a solid basis for multimodality magnetic resonance imaging based systems. PMID:27782724
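
The texton pipeline described above, clustering raw patches into a dictionary and describing a region by its histogram of nearest textons, can be sketched schematically (random data in place of MR patches, and a deliberately simplified k-means; patch sizes and k are arbitrary choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
patches = rng.normal(size=(200, 25))          # 200 flattened 5x5 "patches"

def kmeans(X, k, iters=20):
    """Naive Lloyd's k-means; returns the k cluster centers (the textons)."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

textons = kmeans(patches, k=8)                # the texton dictionary

# Describe a test region by its normalized histogram of nearest textons:
region = rng.normal(size=(50, 25))            # patches from one test region
ids = np.argmin(((region[:, None] - textons) ** 2).sum(-1), axis=1)
hist = np.bincount(ids, minlength=8) / len(ids)   # the feature vector
```

This `hist` vector plays the role of the per-region feature vector that the paper feeds to its four classifiers.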

  18. Dynamical Approach Study of Spurious Numerics in Nonlinear Computations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Mansour, Nagi (Technical Monitor)

    2002-01-01

    The last two decades have been an era when computation is ahead of analysis and when very large scale practical computations are increasingly used in poorly understood multiscale complex nonlinear physical problems and non-traditional fields. Ensuring a higher level of confidence in the predictability and reliability (PAR) of these numerical simulations could play a major role in furthering the design, understanding, affordability and safety of our next generation air and space transportation systems, and systems for planetary and atmospheric sciences, and in understanding the evolution and origin of life. The need to guarantee PAR becomes acute when computations offer the ONLY way of solving these types of data-limited problems. Employing theory from nonlinear dynamical systems, some building blocks to ensure a higher level of confidence in PAR of numerical simulations have been revealed by the author and world expert collaborators in relevant fields. Five building blocks with supporting numerical examples were discussed. The next step is to utilize knowledge gained by including nonlinear dynamics, bifurcation and chaos theories as an integral part of the numerical process. The third step is to design integrated criteria for reliable and accurate algorithms that cater to the different multiscale nonlinear physics. This includes but is not limited to the construction of appropriate adaptive spatial and temporal discretizations that are suitable for the underlying governing equations. In addition, a multiresolution wavelets approach for adaptive numerical dissipation/filter controls for high speed turbulence, acoustics and combustion simulations will be sought. These steps are cornerstones for guarding against spurious numerical solutions that are solutions of the discretized counterparts but are not solutions of the underlying governing equations.

  19. A computational approach for deciphering the organization of glycosaminoglycans.

    PubMed

    Spencer, Jean L; Bernanke, Joel A; Buczek-Thomas, Jo Ann; Nugent, Matthew A

    2010-02-23

    Increasing evidence has revealed important roles for complex glycans as mediators of normal and pathological processes. Glycosaminoglycans are a class of glycans that bind and regulate the function of a wide array of proteins at the cell-extracellular matrix interface. The specific sequence and chemical organization of these polymers likely define function; however, identification of the structure-function relationships of glycosaminoglycans has been met with challenges associated with the unique level of complexity and the nontemplate-driven biosynthesis of these biopolymers. To address these challenges, we have devised a computational approach to predict fine structure and patterns of domain organization of the specific glycosaminoglycan, heparan sulfate (HS). Using chemical composition data obtained after complete and partial digestion of mixtures of HS chains with specific degradative enzymes, the computational analysis produces populations of theoretical HS chains with structures that meet both biosynthesis and enzyme degradation rules. The model performs these operations through a modular format consisting of input/output sections and three routines called chainmaker, chainbreaker, and chainsorter. We applied this methodology to analyze HS preparations isolated from pulmonary fibroblasts and epithelial cells. Significant differences in the general organization of these two HS preparations were observed, with HS from epithelial cells having a greater frequency of highly sulfated domains. Epithelial HS also showed a higher density of specific HS domains that have been associated with inhibition of neutrophil elastase. Experimental analysis of elastase inhibition was consistent with the model predictions and demonstrated that HS from epithelial cells had greater inhibitory activity than HS from fibroblasts. This model establishes the conceptual framework for a new class of computational tools to use to assess patterns of domain organization within

  20. A computational approach for identifying pathogenicity islands in prokaryotic genomes

    PubMed Central

    Yoon, Sung Ho; Hur, Cheol-Goo; Kang, Ho-Young; Kim, Yeoun Hee; Oh, Tae Kwang; Kim, Jihyun F

    2005-01-01

    Background Pathogenicity islands (PAIs), distinct genomic segments of pathogens encoding virulence factors, represent a subgroup of genomic islands (GIs) that have been acquired by horizontal gene transfer events. Up to now, computational approaches for identifying PAIs have focused on the detection of genomic regions that differ from the rest of the genome only in their base composition and codon usage. These approaches often lead to the identification of genomic islands rather than PAIs. Results We present a computational method for detecting potential PAIs in complete prokaryotic genomes by combining sequence similarities and abnormalities in genomic composition. We first collected 207 GenBank accessions containing either part or all of the reported PAI loci. In sequenced genomes, strips of PAI-homologs were defined based on the proximity of the homologs of genes in the same PAI accession. An algorithm reminiscent of a sequence-assembly procedure was then devised to merge overlapping or adjacent genomic strips into a large genomic region. Among the defined genomic regions, PAI-like regions were identified by the presence of homolog(s) of virulence genes. Also, GIs were postulated by calculating G+C content anomalies and codon usage bias. Of 148 prokaryotic genomes examined, 23 pathogenic and 6 non-pathogenic bacteria contained 77 candidate PAIs that partly or entirely overlap GIs. Conclusion Supporting the validity of our method, the list of candidate PAIs included thirty-four PAIs previously identified in genome sequencing papers. Furthermore, in some instances, our method was able to detect entire PAIs for which only partial sequences are available. Our method proved to be efficient for demarcating potential PAIs in our study. Also, the function(s) and origin(s) of a candidate PAI can be inferred by investigating the PAI queries comprising it. 
Identification and analysis of potential PAIs in prokaryotic genomes will broaden our
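
One of the compositional signals the method combines, the G+C content anomaly, can be sketched as follows (synthetic sequence, fixed non-overlapping windows, and an arbitrary deviation threshold; real tools use longer windows and statistical cutoffs):

```python
def gc_content(seq):
    """Fraction of G and C bases in a sequence."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def gc_anomalies(genome, window=8, thresh=0.3):
    """Start positions of windows whose G+C deviates from the genome mean."""
    mean_gc = gc_content(genome)
    hits = []
    for i in range(0, len(genome) - window + 1, window):
        if abs(gc_content(genome[i:i + window]) - mean_gc) > thresh:
            hits.append(i)
    return hits

# Synthetic genome: AT-rich background with one GC-rich inserted segment
genome = "ATATATAT" * 3 + "GCGCGCGC" + "ATATATAT" * 2
print(gc_anomalies(genome))  # flags the window covering the GC-rich insert
```

In the actual method such compositionally anomalous regions only become PAI candidates when they also carry homologs of known virulence genes.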

  1. Separation efficiency of a hydrodynamic separator using a 3D computational fluid dynamics multiscale approach.

    PubMed

    Schmitt, Vivien; Dufresne, Matthieu; Vazquez, Jose; Fischer, Martin; Morin, Antoine

    2014-01-01

    The aim of this study is to investigate the use of computational fluid dynamics (CFD) to predict the solid separation efficiency of a hydrodynamic separator. The numerical difficulty concerns the discretization of the geometry to simulate both the global behavior and the local phenomena that occur near the screen. In this context, a CFD multiscale approach was used: a global model (at the scale of the device) is used to observe the hydrodynamic behavior within the device; a local model (a portion of the screen) is used to determine the local phenomena that occur near the screen. The Eulerian-Lagrangian approach was used to model the particle trajectories in both models. The global model shows the influence of the particles' characteristics on the trapping efficiency. A high density favors sedimentation. In contrast, particles with small densities (1,040 kg/m³) are steered by the hydrodynamic behavior and can potentially be trapped by the separator. The use of the local model allows us to observe the particle trajectories near the screen. A comparison between two types of screens (perforated plate vs expanded metal) highlights the turbulent effects created by the shape of the screen.
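
The density effect noted above can be illustrated with a back-of-the-envelope Stokes-law estimate (assumed water properties and particle sizes, not values from the study; the paper's Eulerian-Lagrangian CFD model is far more detailed):

```python
def stokes_velocity(d, rho_p, rho_f=1000.0, mu=1.0e-3, g=9.81):
    """Terminal settling velocity (m/s) of a small sphere in the Stokes regime.

    d: diameter (m), rho_p: particle density (kg/m^3),
    rho_f: fluid density (kg/m^3), mu: dynamic viscosity (Pa s).
    """
    return g * (rho_p - rho_f) * d ** 2 / (18.0 * mu)

v_light = stokes_velocity(100e-6, 1040.0)   # 100 um particle at 1,040 kg/m^3
v_dense = stokes_velocity(100e-6, 2650.0)   # same size, sand-like density
print(v_light, v_dense)
```

The settling speed scales with the density difference to water, so the near-neutrally-buoyant 1,040 kg/m³ particles settle roughly 40 times slower than sand-like grains of the same size, which is why their fate is governed by the hydrodynamics rather than by sedimentation.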

  2. Slab-geometry Nd:glass laser performance studies

    NASA Technical Reports Server (NTRS)

    Eggleston, J. M.; Kane, T. J.; Byer, R. L.; Unternahrer, J.

    1982-01-01

    It is noted that slab-geometry solid-state lasers potentially provide significant performance improvements relative to conventional rod-geometry lasers. Experimental measurements that use an Nd:glass test-bed slab laser are presented. A comparison is made between the results and computer-model predictions of the slab-geometry approach. The computer model calculates and displays the temperature and stress fields in the slab, and on the basis of these predicts birefringence and index-of-refraction distributions. The effect that these distributions have on optical propagation is determined in a polarization-sensitive ray-tracing section of the model. Calculations are also made of stress-induced surface curvature and the resulting focusing effects. The measurements are found to be in good agreement with the computer-model predictions. It is concluded that the slab configuration offers significant laser-performance advantages in comparison with the traditional rod-laser geometry.

  3. Slab-geometry Nd:glass laser performance studies

    NASA Technical Reports Server (NTRS)

    Eggleston, J. M.; Kane, T. J.; Byer, R. L.; Unternahrer, J.

    1982-01-01

    It is noted that slab-geometry solid-state lasers potentially provide significant performance improvements relative to conventional rod-geometry lasers. Experimental measurements that use an Nd:glass test-bed slab laser are presented. A comparison is made between the results and computer-model predictions of the slab-geometry approach. The computer model calculates and displays the temperature and stress fields in the slab, and on the basis of these predicts birefringence and index-of-refraction distributions. The effect that these distributions have on optical propagation is determined in a polarization-sensitive ray-tracing section of the model. Calculations are also made of stress-induced surface curvature and the resulting focusing effects. The measurements are found to be in good agreement with the computer-model predictions. It is concluded that the slab configuration offers significant laser-performance advantages in comparison with the traditional rod-laser geometry.

  4. Teaching of Geometry in Bulgaria

    ERIC Educational Resources Information Center

    Bankov, Kiril

    2013-01-01

    Geometry plays an important role in the school mathematics curriculum all around the world. The teaching of geometry varies a lot (Hoyles, Foxman, & Küchemann, 2001). Many countries revise the objectives, the content, and the approaches to geometry in school. Studies of these processes show that there are no common trends in these changes…

  5. Quasi-relativistic model-potential approach. Spin-orbit effects on energies and geometries of several di- and tri-atomic molecules

    NASA Astrophysics Data System (ADS)

    Hafner, P.; Habitz, P.; Ishikawa, Y.; Wechsel-Trakowski, E.; Schwarz, W. H. E.

    1981-06-01

    Calculations on ground and valence-excited states of Au2+, Tl2 and Pb2, and on the ground states of HgCl2, PbCl2 and PbH2 have been performed within the Kramers-restricted self-consistent-field approach using a quasi-relativistic model-potential Hamiltonian. The influence of spin-orbit coupling on molecular orbitals, bond energies and geometries is discussed.

  6. Influence of LVAD cannula outflow tract location on hemodynamics in the ascending aorta: a patient-specific computational fluid dynamics approach.

    PubMed

    Karmonik, Christof; Partovi, Sasan; Loebe, Matthias; Schmack, Bastian; Ghodsizad, Ali; Robbin, Mark R; Noon, George P; Kallenbach, Klaus; Karck, Matthias; Davies, Mark G; Lumsden, Alan B; Ruhparwar, Arjang

    2012-01-01

    To develop a better understanding of the hemodynamic alterations in the ascending aorta, induced by variation of the cannula outflow position of the left ventricular assist device (LVAD) based on patient-specific geometries, transient computational fluid dynamics (CFD) simulations using the realizable k-ε turbulence model were conducted for two of the most common LVAD outflow geometries. Thoracic aortic flow patterns, pressures, wall shear stresses (WSSs), turbulent dissipation, and energy were quantified in the ascending aorta at the location of the cannula outflow. Streamlines for the lateral geometry showed a large region of disturbed flow surrounding the LVAD outflow with an impingement zone at the contralateral wall exhibiting increased WSSs and pressures. Flow disturbance was reduced for the anterior geometries with clearly reduced pressures and WSSs. Turbulent dissipation was higher for the lateral geometry and turbulent energy was lower. Variation in the position of the cannula outflow clearly affects hemodynamics in the ascending aorta, favoring an anterior geometry for a more ordered flow pattern. The new patient-specific approach used in this study for LVAD patients emphasizes the potential use of CFD as a truly translational technique.

  7. An Evolutionary Computation Approach to Examine Functional Brain Plasticity.

    PubMed

    Roy, Arnab; Campbell, Colin; Bernier, Rachel A; Hillary, Frank G

    2016-01-01

    One common research goal in systems neurosciences is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well-suited for the study of developmental processes, learning, and even in recovery or treatment designs in response to injury. For most fMRI-based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signal representing each region. The drawback to this approach is that much information is lost due to averaging heterogeneous voxels, and therefore a functional relationship between an ROI-pair that evolves at a spatial scale much finer than the ROIs remains undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator-defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC-based procedure is able to detect functional plasticity where a traditional averaging-based approach fails. The subject-specific plasticity estimates obtained using the EC procedure are highly consistent across multiple runs. 
Group-level analyses using these plasticity estimates showed an increase in the strength
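
    The averaging-based baseline that this record contrasts with is easy to state concretely. Below is a minimal sketch (NumPy, synthetic data with assumed array shapes; the EC search over sub-regional voxel sets is not reproduced here):

```python
import numpy as np

def roi_pair_strength(roi_a, roi_b):
    """Correlation between the mean BOLD signals of two ROIs.

    roi_a, roi_b: arrays of shape (n_voxels, n_timepoints).
    This is the traditional averaging-based estimate; the EC procedure
    described above instead searches for sub-regional voxel sets whose
    pairwise strength changes significantly between sessions.
    """
    mean_a = roi_a.mean(axis=0)
    mean_b = roi_b.mean(axis=0)
    return np.corrcoef(mean_a, mean_b)[0, 1]

# Synthetic two-session example (hypothetical shapes, not real fMRI data)
rng = np.random.default_rng(0)
session1_a = rng.standard_normal((50, 200))
session1_b = rng.standard_normal((40, 200))
session2_a = rng.standard_normal((50, 200))
session2_b = rng.standard_normal((40, 200))

# "Plasticity" here is simply the change in strength across sessions.
plasticity = (roi_pair_strength(session2_a, session2_b)
              - roi_pair_strength(session1_a, session1_b))
```

    Averaging first, as above, can mask a strength change confined to a small voxel subset, which is precisely the failure mode the EC procedure targets.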

  8. An Evolutionary Computation Approach to Examine Functional Brain Plasticity

    PubMed Central

    Roy, Arnab; Campbell, Colin; Bernier, Rachel A.; Hillary, Frank G.

    2016-01-01

    One common research goal in systems neurosciences is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well-suited for the study of developmental processes, learning, and even in recovery or treatment designs in response to injury. For most fMRI-based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signal representing each region. The drawback to this approach is that much information is lost due to averaging heterogeneous voxels, and therefore a functional relationship between an ROI-pair that evolves at a spatial scale much finer than the ROIs remains undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator-defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC-based procedure is able to detect functional plasticity where a traditional averaging-based approach fails. The subject-specific plasticity estimates obtained using the EC procedure are highly consistent across multiple runs. 
Group-level analyses using these plasticity estimates showed an increase in the strength

  9. Variable geometry trusses

    NASA Technical Reports Server (NTRS)

    Robertshaw, H. H.; Reinholtz, C. F.

    1989-01-01

    Vibration control and kinematic control with variable-geometry trusses are covered. The analytical approach taken is to model each actuator with lumped masses and model a beam with finite elements, including in each model the generalized reaction forces from the beam on the actuator or vice versa. It is concluded that, from an operational standpoint, the variable-geometry truss actuator is more favorable than the inertia-type actuator. A spatial variable-geometry truss is used to test out rudimentary robotic tasks.

  10. Mapping Agricultural Fields in Sub-Saharan Africa with a Computer Vision Approach

    NASA Astrophysics Data System (ADS)

    Debats, S. R.; Luo, D.; Estes, L. D.; Fuchs, T.; Caylor, K. K.

    2014-12-01

    Sub-Saharan Africa is an important focus for food security research, because it is experiencing unprecedented population growth, agricultural activities are largely dominated by smallholder production, and the region is already home to 25% of the world's undernourished. One of the greatest challenges to monitoring and improving food security in this region is obtaining an accurate accounting of the spatial distribution of agriculture. Households are the primary units of agricultural production in smallholder communities and typically rely on small fields of less than 2 hectares. Field sizes are directly related to household crop productivity, management choices, and adoption of new technologies. As population and agriculture expand, it becomes increasingly important to understand both the distribution of field sizes as well as how agricultural communities are spatially embedded in the landscape. In addition, household surveys, a common tool for tracking agricultural productivity in Sub-Saharan Africa, would greatly benefit from spatially explicit accounting of fields. Current gridded land cover data sets do not provide information on individual agricultural fields or the distribution of field sizes. Therefore, we employ cutting-edge approaches from the field of computer vision to map fields across Sub-Saharan Africa, including semantic segmentation, discriminative classifiers, and automatic feature selection. Our approach aims to not only improve the binary classification accuracy of cropland, but also to isolate distinct fields, thereby capturing crucial information on size and geometry. Our research focuses on the development of descriptive features across scales to increase the accuracy and geographic range of our computer vision algorithm. Relevant data sets include high-resolution remote sensing imagery and Landsat (30-m) multi-spectral imagery. Training data for field boundaries is derived from hand-digitized data sets as well as crowdsourcing.

  11. An analytical approach to computing biomolecular electrostatic potential. II. Validation and applications

    NASA Astrophysics Data System (ADS)

    Gordon, John C.; Fenley, Andrew T.; Onufriev, Alexey

    2008-08-01

    An ability to efficiently compute the electrostatic potential produced by molecular charge distributions under realistic solvation conditions is essential for a variety of applications. Here, the simple closed-form analytical approximation to the Poisson equation rigorously derived in Part I for idealized spherical geometry is tested on realistic shapes. The effects of mobile ions are included at the Debye-Hückel level. The accuracy of the resulting closed-form expressions for electrostatic potential is assessed through comparisons with numerical Poisson-Boltzmann (NPB) reference solutions on a test set of 580 representative biomolecular structures under typical conditions of aqueous solvation. For each structure, the deviation from the reference is computed for a large number of test points placed near the dielectric boundary (molecular surface). The accuracy of the approximation, averaged over all test points in each structure, is within 0.6 kcal/mol/|e| (roughly kT per unit charge) for all structures in the test set. For 91.5% of the individual test points, the deviation from the NPB potential is within 0.6 kcal/mol/|e|. The deviations from the reference decrease with increasing distance from the dielectric boundary: the approximation is asymptotically exact far away from the source charges. Deviation of the overall shape of a structure from the ideal sphere does not, by itself, appear to necessitate decreased accuracy of the approximation. The largest deviations from the NPB reference are found inside very deep and narrow indentations that occur on the dielectric boundaries of some structures. The dimensions of these pockets of locally highly negative curvature are comparable to the size of a water molecule; the applicability of continuum dielectric models in these regions is discussed. The maximum deviations from the NPB are reduced substantially when the boundary is smoothed by using a larger probe radius (3 Å) to generate the molecular surface. 
A detailed accuracy
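
    The idealized spherical reference case behind this approximation is the screened Debye-Hückel potential outside a charged sphere. A hedged sketch with illustrative parameter values (this is the textbook single-sphere formula, not the paper's generalized closed form for realistic shapes):

```python
import math

def dh_sphere_potential(q, r, a, eps_out=80.0, kappa=0.1):
    """Screened Coulomb (Debye-Hueckel) potential outside a charged
    sphere of radius a (Angstrom), in kcal/(mol*|e|):

        phi(r) = 332.0636 * q * exp(-kappa*(r - a)) / (eps_out * (1 + kappa*a) * r)

    q: source charge in |e|; r: distance from the charge center (r >= a);
    kappa: inverse Debye length in 1/Angstrom; 332.0636 converts
    e^2/Angstrom to kcal/mol. Parameter values here are illustrative
    assumptions, not taken from this record.
    """
    return (332.0636 * q * math.exp(-kappa * (r - a))
            / (eps_out * (1.0 + kappa * a) * r))
```

    Setting kappa to zero recovers the plain Coulomb potential in a uniform dielectric, which is a useful sanity check on any implementation.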

  12. Subtracted geometry

    NASA Astrophysics Data System (ADS)

    Saleem, Zain Hamid

    In this thesis we study a special class of black hole geometries called subtracted geometries. Subtracted geometry black holes are obtained when one omits certain terms from the warp factor of the metric of general charged rotating black holes. The omission of these terms allows one to write the wave equation of the black hole in a completely separable way, and one can explicitly see that the wave equation of a massless scalar field in this slightly altered background of a general multi-charged rotating black hole acquires an SL(2, R) x SL(2, R) x SO(3) symmetry. The "subtracted limit" is considered an appropriate limit for studying the internal structure of the non-subtracted black holes because the new 'subtracted' black holes have the same horizon area and periodicity of the angular and time coordinates in the near-horizon region as the original black hole geometry from which they were constructed. The new geometry is asymptotically conical and is physically similar to that of a black hole in an asymptotically confining box. We use several useful properties of these geometries to understand various classically and quantum-mechanically important features of general charged rotating black holes.

  13. 3D Reconstruction of Chick Embryo Vascular Geometries Using Non-invasive High-Frequency Ultrasound for Computational Fluid Dynamics Studies.

    PubMed

    Tan, Germaine Xin Yi; Jamil, Muhammad; Tee, Nicole Gui Zhen; Zhong, Liang; Yap, Choon Hwai

    2015-11-01

    Recent animal studies have provided evidence that prenatal blood flow fluid mechanics may play a role in the pathogenesis of congenital cardiovascular malformations. To further this research, it is important to have an imaging technique for small animal embryos with sufficient resolution to support computational fluid dynamics studies, and that is also non-invasive and non-destructive to allow for subject-specific, longitudinal studies. In the current study, we developed such a technique, based on ultrasound biomicroscopy scans on chick embryos. Our technique included a motion cancelation algorithm to negate embryonic body motion, a temporal averaging algorithm to differentiate blood spaces from tissue spaces, and 3D reconstruction of blood volumes in the embryo. The accuracy of the reconstructed models was validated with direct stereoscopic measurements. A computational fluid dynamics simulation was performed to model fluid flow in the generated construct of a Hamburger-Hamilton (HH) stage 27 embryo. Simulation results showed that there were divergent streamlines and a low shear region at the carotid duct, which may be linked to the carotid duct's eventual regression and disappearance by HH stage 34. We show that our technique has sufficient resolution to produce accurate geometries for computational fluid dynamics simulations to quantify embryonic cardiovascular fluid mechanics.
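
    The temporal-averaging idea for separating blood from tissue can be sketched as a variance threshold over an ultrasound cine loop: moving blood decorrelates the speckle between frames, while tissue stays comparatively steady. This simplified stand-in (NumPy, synthetic frames) is not the paper's exact algorithm:

```python
import numpy as np

def segment_blood(frames, threshold=None):
    """Classify pixels as blood vs tissue by temporal statistics.

    frames: array of shape (n_frames, H, W). Pixels whose intensity
    varies strongly across frames (high temporal std, characteristic
    of flowing blood speckle) are marked True; steady tissue pixels
    are False. The default threshold (mean + 1 std of the temporal-std
    map) is an illustrative assumption, not a value from this record.
    """
    temporal_std = frames.std(axis=0)
    if threshold is None:
        threshold = temporal_std.mean() + temporal_std.std()
    return temporal_std > threshold
```

    In practice such a mask would feed the 3D reconstruction step after motion cancelation, which this sketch omits.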

  14. Computational Study on Subdural Cortical Stimulation - The Influence of the Head Geometry, Anisotropic Conductivity, and Electrode Configuration

    PubMed Central

    Kim, Donghyeon; Seo, Hyeon; Kim, Hyoung-Ihl; Jun, Sung Chan

    2014-01-01

    Subdural cortical stimulation (SuCS) is a method used to inject electrical current through electrodes beneath the dura mater, and is known to be useful in treating brain disorders. However, precisely how SuCS must be applied to yield the most effective results has rarely been investigated. For this purpose, we developed a three-dimensional computational model that represents an anatomically realistic brain model including an upper chest. With this computational model, we investigated the influence of stimulation amplitudes, electrode configurations (single or paddle-array), and white matter conductivities (isotropy or anisotropy). Further, the effects of stimulation were compared with two other computational models, including an anatomically realistic brain-only model and the simplified extruded slab model representing the precentral gyrus area. The results of voltage stimulation suggested that there was a synergistic effect with the paddle-array due to the use of multiple electrodes; however, a single electrode was more efficient with current stimulation. The conventional model (simplified extruded slab) far overestimated the effects of stimulation with both voltage and current by comparison to our proposed realistic upper body model. However, the realistic upper body and full brain-only models demonstrated similar stimulation effects. In our investigation of the influence of anisotropic conductivity, the model with a fixed-ratio (1:10) anisotropic conductivity yielded deeper penetration depths and larger extents of stimulation than the others. However, the isotropic model and the anisotropic models with fixed ratios (1:2, 1:5) yielded similar stimulation effects. Lastly, whether the reference electrode was located on the right or left chest had no substantial effect on stimulation. PMID:25229673

  15. Communication: An efficient approach to compute state-specific nuclear gradients for a generic state-averaged multi-configuration self consistent field wavefunction

    SciTech Connect

    Granovsky, Alexander A.

    2015-12-21

    We present a new, very efficient semi-numerical approach for the computation of state-specific nuclear gradients of a generic state-averaged multi-configuration self consistent field wavefunction. Our approach eliminates the costly coupled-perturbed multi-configuration Hartree-Fock step as well as the associated integral transformation stage. The details of the implementation within the Firefly quantum chemistry package are discussed and several sample applications are given. The new approach is routinely applicable to geometry optimization of molecular systems with 1000+ basis functions using a standalone multi-core workstation.
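
    For contrast with the semi-numerical method this record describes, a fully numerical central-difference nuclear gradient is simple to sketch; the paper's efficiency comes precisely from avoiding this brute-force loop for most of the energy expression, so treat this only as the naive baseline:

```python
import numpy as np

def numerical_gradient(energy_fn, coords, step=1e-3):
    """Central finite-difference gradient dE/dx_i for each nuclear
    coordinate: (E(x + h e_i) - E(x - h e_i)) / (2h).

    coords: flat array of nuclear coordinates; energy_fn: coords -> float.
    Requires 2N energy evaluations for N coordinates, which is exactly
    the cost a semi-numerical scheme tries to restrict to a small part
    of the energy expression.
    """
    grad = np.zeros_like(coords)
    for i in range(coords.size):
        plus = coords.copy()
        minus = coords.copy()
        plus[i] += step
        minus[i] -= step
        grad[i] = (energy_fn(plus) - energy_fn(minus)) / (2.0 * step)
    return grad
```

    Central differences are exact for quadratic energy surfaces up to rounding error, which makes a harmonic test function a convenient check.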

  16. Ab initio computation of semiempirical π-electron methods. V. Geometry dependence of Hν π-electron effective integrals

    NASA Astrophysics Data System (ADS)

    Martin, Charles H.; Freed, Karl F.

    1996-07-01

    The ab initio effective valence shell Hamiltonian (Hν) provides ab initio analogs of the correlated π-electron integrals which should appear in the traditional Pariser-Parr-Pople (PPP) semiempirical π-electron theory. In our continuing studies of the ab initio basis of an improved PPP theory, we examine the geometry dependence of the correlated Hν π-electron effective integrals (also called parameters) for the linear polyenes, ethylene, the allyl radical, trans-butadiene, and hexatriene, and the cyclic polyenes, cyclobutadiene and benzene. We find particularly interesting features for each of the true π-electron parameters corresponding to the PPP αi, βi,j, and γi,j integrals. First, the one-electron, two-center resonance integrals βi,j differ from the so-called "theoretical" values by roughly a constant shift of 0.3-0.4 eV for nearest neighbors i and j and not at all for more distant neighbors. Second, the correlated αi parameters conform to the standard point charge model fairly well, except the slopes and intercepts lack the transferability typically ascribed to them. A more accurate PPP model therefore must model the one-center, one-electron interactions more carefully. Finally, the effective Coulomb interactions γi,j follow the standard Mataga-Nishimoto distance dependence quite well for the linear polyenes, although there is a small breakdown of transferability due to long range correlation effects. For instance, the hexatriene γ1,2 is 0.5 eV smaller than the ethylene γ1,2 even when the C1=C2 bond lengths are identical. Additionally, the set of γi,j for the cyclic polyenes is not even a single function of Ri,j, a feature reflecting the subtle contributions of electron correlation to the ab initio γi,j. However, plots of 1/γi,j vs Ri,j display some unforeseen regularity which may prove useful in improving current semiempirical models for cyclic polyenes.
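
    The Mataga-Nishimoto distance dependence mentioned above has a standard closed form. A sketch using the conventional carbon 2p one-center value as a default (that default is an assumption for illustration, not a value from this record):

```python
def mataga_nishimoto(r_ij, gamma_ii=11.13, gamma_jj=11.13):
    """Two-center Coulomb repulsion integral gamma_ij (eV) from the
    Mataga-Nishimoto formula used in PPP theory:

        gamma_ij = e^2 / (R_ij + a_ij),  a_ij = 2 e^2 / (gamma_ii + gamma_jj)

    with e^2 = 14.397 eV*Angstrom and R_ij in Angstrom. At R_ij = 0 this
    interpolates to the mean of the one-center values, and at large R_ij
    it approaches the bare Coulomb law e^2 / R_ij.
    """
    e2 = 14.397  # eV * Angstrom
    a_ij = 2.0 * e2 / (gamma_ii + gamma_jj)
    return e2 / (r_ij + a_ij)
```

    The record's observation that the cyclic-polyene γi,j are not a single function of Ri,j is exactly a statement that no one-parameter formula of this shape can reproduce them.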

  17. An adaptive Cartesian grid generation method for Dirty geometry

    NASA Astrophysics Data System (ADS)

    Wang, Z. J.; Srinivasan, Kumar

    2002-07-01

    Traditional structured and unstructured grid generation methods need a water-tight boundary surface grid to start. Therefore, these methods are named boundary to interior (B2I) approaches. Although these methods have achieved great success in fluid flow simulations, the grid generation process can still be very time consuming if non-water-tight geometries are given. Significant user time can be taken to repair or clean a dirty geometry with cracks, overlaps or invalid manifolds before grid generation can take place. In this paper, we advocate a different approach in grid generation, namely the interior to boundary (I2B) approach. With an I2B approach, the computational grid is first generated inside the computational domain. Then this grid is intelligently connected to the boundary, and the boundary grid is a result of this connection. A significant advantage of the I2B approach is that dirty geometries can be handled without cleaning or repairing, dramatically reducing grid generation time. An I2B adaptive Cartesian grid generation method is developed in this paper to handle dirty geometries without geometry repair. Compared with a B2I approach, the grid generation time with the I2B approach for a complex automotive engine can be reduced by three orders of magnitude.
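
    The interior-first idea can be illustrated with a toy quadtree that refines Cartesian cells wherever geometry is present, without ever needing a watertight surface: a dirty point cloud is enough to drive refinement. A simplified 2D sketch, not the paper's algorithm:

```python
def refine(cell, points, max_depth):
    """Recursively split a square Cartesian cell that contains geometry
    points, building an adaptive quadtree of leaf cells.

    cell: (x, y, size, depth); points: iterable of (px, py) samples of
    the (possibly non-watertight) geometry. Only cells touched by the
    point cloud subdivide, so no surface repair is required -- the
    boundary-conforming connection step of a real I2B method is omitted.
    """
    x, y, s, d = cell
    inside = [(px, py) for (px, py) in points
              if x <= px < x + s and y <= py < y + s]
    if not inside or d >= max_depth:
        return [cell]  # empty or maximally refined: keep as a leaf
    h = s / 2.0
    leaves = []
    for cx, cy in ((x, y), (x + h, y), (x, y + h), (x + h, y + h)):
        leaves += refine((cx, cy, h, d + 1), inside, max_depth)
    return leaves
```

    Because cracks and overlaps in the input only change which cells contain points, they degrade nothing, which is the essence of the robustness argument above.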

  18. Developing framework to constrain the geometry of the seismic rupture plane on subduction interfaces a priori - A probabilistic approach

    USGS Publications Warehouse

    Hayes, G.P.; Wald, D.J.

    2009-01-01

    A key step in many earthquake source inversions requires knowledge of the geometry of the fault surface on which the earthquake occurred. Our knowledge of this surface is often uncertain, however, and as a result fault geometry misinterpretation can map into significant error in the final temporal and spatial slip patterns of these inversions. Relying solely on an initial hypocentre and CMT mechanism can be problematic when establishing rupture characteristics needed for rapid tsunami and ground shaking estimates. Here, we attempt to improve the quality of fast finite-fault inversion results by combining several independent and complementary data sets to more accurately constrain the geometry of the seismic rupture plane of subducting slabs. Unlike previous analyses aimed at defining the general form of the plate interface, we require mechanisms and locations of the seismicity considered in our inversions to be consistent with their occurrence on the plate interface, by limiting events to those with well-constrained depths and with CMT solutions indicative of shallow-dip thrust faulting. We construct probability density functions about each location based on formal assumptions of their depth uncertainty and use these constraints to solve for the ‘most-likely’ fault plane. Examples are shown for the trench in the source region of the Mw 8.6 Southern Sumatra earthquake of March 2005, and for the Northern Chile Trench in the source region of the November 2007 Antofagasta earthquake. We also show examples using only the historic catalogues in regions without recent great earthquakes, such as the Japan and Kamchatka Trenches. In most cases, this method produces a fault plane that is more consistent with all of the data available than is the plane implied by the initial hypocentre and CMT mechanism. Using the aggregated data sets, we have developed an algorithm to rapidly determine more accurate initial fault plane geometries for source inversions of future
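
    A crude stand-in for the 'most-likely' plane step is a depth-uncertainty-weighted least-squares plane through the event locations; the probabilistic treatment described above is richer, but the weighting idea can be sketched as (NumPy, hypothetical inputs):

```python
import numpy as np

def fit_fault_plane(x, y, z, depth_sigma):
    """Weighted least-squares plane z = a*x + b*y + c through event
    hypocentres, weighting each event by 1/sigma^2 of its depth
    uncertainty, so poorly constrained depths pull the plane less.

    x, y: horizontal coordinates; z: depths; depth_sigma: per-event
    depth uncertainties. Returns (a, b, c); the dip angle follows from
    arctan(hypot(a, b)). This is a simplified illustration, not the
    record's probability-density-function construction.
    """
    w = 1.0 / np.asarray(depth_sigma, dtype=float) ** 2
    A = np.column_stack([x, y, np.ones_like(x)])
    AtW = A.T * w  # scale each column of A.T by its event weight
    a, b, c = np.linalg.solve(AtW @ A, AtW @ z)
    return a, b, c
```

    With events restricted to well-constrained, shallow-dip thrust mechanisms, as the record requires, such a fit approximates the plate-interface geometry rather than off-interface seismicity.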

  19. Computational approaches to selecting and optimising targets for structural biology

    PubMed Central

    Overton, Ian M.; Barton, Geoffrey J.

    2011-01-01

    Selection of protein targets for study is central to structural biology and may be influenced by numerous factors. A key aim is to maximise returns for effort invested by identifying proteins with the balance of biophysical properties that are conducive to success at all stages (e.g. solubility, crystallisation) in the route towards a high resolution structural model. Selected targets can be optimised through construct design (e.g. to minimise protein disorder), switching to a homologous protein, and selection of experimental methodology (e.g. choice of expression system) to prime for efficient progress through the structural proteomics pipeline. Here we discuss computational techniques in target selection and optimisation, with more detailed focus on tools developed within the Scottish Structural Proteomics Facility (SSPF); namely XANNpred, ParCrys, OB-Score (target selection) and TarO (target optimisation). TarO runs a large number of algorithms, searching for homologues and annotating the pool of possible alternative targets. This pool of putative homologues is presented in a ranked, tabulated format and results are also visualised as an automatically generated and annotated multiple sequence alignment. The target selection algorithms each predict the propensity of a selected protein target to progress through the experimental stages leading to diffracting crystals. This single predictor approach has advantages for target selection, when compared with an approach using two or more predictors that each predict for success at a single experimental stage. The tools described here helped SSPF achieve a high (21%) success rate in progressing cloned targets to diffraction-quality crystals. PMID:21906678

  20. Lexical is as lexical does: computational approaches to lexical representation

    PubMed Central

    Woollams, Anna M.

    2015-01-01

    In much of neuroimaging and neuropsychology, regions of the brain have been associated with ‘lexical representation’, with little consideration as to what this cognitive construct actually denotes. Within current computational models of word recognition, there are a number of different approaches to the representation of lexical knowledge. Structural lexical representations, found in original theories of word recognition, have been instantiated in modern localist models. However, such a representational scheme lacks neural plausibility in terms of economy and flexibility. Connectionist models have therefore adopted distributed representations of form and meaning. Semantic representations in connectionist models necessarily encode lexical knowledge. Yet when equipped with recurrent connections, connectionist models can also develop attractors for familiar forms that function as lexical representations. Current behavioural, neuropsychological and neuroimaging evidence shows a clear role for semantic information, but also suggests some modality- and task-specific lexical representations. A variety of connectionist architectures could implement these distributed functional representations, and further experimental and simulation work is required to discriminate between these alternatives. Future conceptualisations of lexical representations will therefore emerge from a synergy between modelling and neuroscience. PMID:25893204