Poikela, Paula; Ruokamo, Heli; Teräs, Marianne
2015-02-01
Nursing educators must ensure that nursing students acquire the necessary competencies; finding the most purposeful teaching methods and encouraging learning through meaningful learning opportunities is necessary to meet this goal. We investigated student learning in a simulated nursing practice using videography. The purpose of this paper is to examine how two different teaching methods supported students' meaningful learning in a simulated nursing experience. The 6-hour study was divided into three parts: part I, general information; part II, training; and part III, simulated nursing practice. Part II was delivered by two different methods: a computer-based simulation and a lecture. The study was carried out in the simulated nursing practice of two universities of applied sciences in Northern Finland. The participants in parts I and II were 40 first-year nursing students; 12 student volunteers continued to part III. A qualitative analysis method was used: the data were collected as video recordings and analyzed by videography. The students who used the computer-based simulation program were more likely to report meaningful learning themes than those who were first exposed to the lecture method. Educators should be encouraged to use computer-based simulation teaching in conjunction with other teaching methods to ensure that nursing students receive the greatest educational benefits. Copyright © 2014 Elsevier Ltd. All rights reserved.
5 CFR 831.703 - Computation of annuities for part-time service.
Code of Federal Regulations, 2010 CFR
2010-01-01
... (a) Purpose. The computational method in this section shall be used to determine the annuity for an employee who has part-time service on or after April 7, 1986. (b) Definitions. In this...
NASA Astrophysics Data System (ADS)
Lazanja, David; Boozer, Allen
2006-10-01
Given the total magnetic field on a toroidal plasma surface, a method for decomposing the field into a part due to internal currents (often the plasma) and a part due to external currents is presented. The method exploits Laplace theory, which is valid in the vacuum region between the plasma surface and the chamber walls. The method is developed for the full three-dimensional case, which is necessary for studying stellarator plasma configurations. A change in the plasma shape is produced by the total normal field perturbation on the plasma surface. This method allows a separation of the total normal field perturbation into a part produced by external currents and a part produced by the plasma response. There are immediate applications to coil design. The computational procedure is based on Merkel's 1986 work on vacuum field computations. Several test cases are presented for toroidal surfaces which verify the method and the computational robustness of the code.
Making Ceramic/Polymer Parts By Extrusion Stereolithography
NASA Technical Reports Server (NTRS)
Stuffle, Kevin; Mulligan, A.; Creegan, P.; Boulton, J. M.; Lombardi, J. L.; Calvert, P. D.
1996-01-01
Extrusion stereolithography developmental method of computer-controlled manufacturing of objects out of ceramic/polymer composite materials. Computer-aided design/computer-aided manufacturing (CAD/CAM) software used to create image of desired part and translate image into motion commands for combination of mechanisms moving resin dispenser. Extrusion performed in coordination with motion of dispenser so buildup of extruded material takes on size and shape of desired part. Part thermally cured after deposition.
APC: A New Code for Atmospheric Polarization Computations
NASA Technical Reports Server (NTRS)
Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.
2014-01-01
A new polarized radiative transfer code Atmospheric Polarization Computations (APC) is described. The code is based on separation of the diffuse light field into anisotropic and smooth (regular) parts. The anisotropic part is computed analytically. The smooth regular part is computed numerically using the discrete ordinates method. Vertical stratification of the atmosphere, common types of bidirectional surface reflection and scattering by spherical particles or spheroids are included. A particular consideration is given to computation of the bidirectional polarization distribution function (BPDF) of the waved ocean surface.
ERIC Educational Resources Information Center
Gökmen, Ömer Faruk; Duman, Ibrahim; Akgün, Özcan Erkan
2018-01-01
The purpose of this study is to investigate teachers' views about the use of tablet computers distributed as a part of the FATIH (Movement for Enhancing Opportunities and Improving Technology) Project. In this study, the case study method, one of the qualitative research methods, was used. The participants were 20 teachers from various fields…
12 CFR Appendix G to Part 1026 - Open-End Model Forms and Clauses
Code of Federal Regulations, 2013 CFR
2013-01-01
... Appendix G to Part 1026—Open-End Model Forms and Clauses. G-1 Balance Computation Methods Model Clauses (Home-equity Plans) (§§ 1026.6 and 1026.7). G-1(A) Balance Computation Methods Model...
12 CFR Appendix G to Part 1026 - Open-End Model Forms and Clauses
Code of Federal Regulations, 2012 CFR
2012-01-01
... Appendix G to Part 1026—Open-End Model Forms and Clauses. G-1 Balance Computation Methods Model Clauses (Home-equity Plans) (§§ 1026.6 and 1026.7). G-1(A) Balance Computation Methods Model...
12 CFR Appendix G to Part 1026 - Open-End Model Forms and Clauses
Code of Federal Regulations, 2014 CFR
2014-01-01
... Appendix G to Part 1026—Open-End Model Forms and Clauses. G-1 Balance Computation Methods Model Clauses (Home-equity Plans) (§§ 1026.6 and 1026.7). G-1(A) Balance Computation Methods Model...
Efficient computational methods to study new and innovative signal detection techniques in SETI
NASA Technical Reports Server (NTRS)
Deans, Stanley R.
1991-01-01
The purpose of the research reported here is to provide a rapid computational method for computing various statistical parameters associated with overlapped Hann spectra. These results are important for the Targeted Search part of the Search for ExtraTerrestrial Intelligence (SETI) Microwave Observing Project.
Method and system for environmentally adaptive fault tolerant computing
NASA Technical Reports Server (NTRS)
Copenhaver, Jason L. (Inventor); Ramos, Jeremy (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)
2010-01-01
A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.
Improved Collision-Detection Method for Robotic Manipulator
NASA Technical Reports Server (NTRS)
Leger, Chris
2003-01-01
An improved method has been devised for the computational prediction of a collision between (1) a robotic manipulator and (2) another part of the robot or an external object in the vicinity of the robot. The method is intended to be used to test commanded manipulator trajectories in advance, so that execution of the commands can be stopped before damage is done. It utilizes both (1) mathematical models of the robot and its environment constructed manually prior to operation and (2) similar models constructed automatically from sensory data acquired during operation. The representation of objects in this method is simpler and more efficient, with respect to both computation time and computer memory, than the representations used in most prior methods. The method was developed especially for use on a robotic land vehicle (rover) equipped with a manipulator arm and a vision system that includes stereoscopic electronic cameras. Objects are represented, and collisions detected, by use of a previously developed technique known in the art as the method of oriented bounding boxes (OBBs): an object is represented approximately, for computational purposes, by a box that encloses its outer boundary. Because many parts of a robotic manipulator are cylindrical, the OBB technique has been extended here to enable the approximate representation of cylindrical parts by octagonal or other multiple-OBB assemblies denoted oriented bounding prisms (OBPs), as in the example of Figure 1. Unlike prior methods, the OBB/OBP method does not require any divisions or transcendental functions; this feature leads to greater robustness and numerical accuracy. The OBB/OBP method was selected because it offers the best compromise between accuracy and computational efficiency (and thus computational speed).
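For illustration, below is a minimal sketch of the separating-axis overlap test on which OBB-based collision checks of this kind are typically built (the standard Gottschalk 15-axis test); the function name and the numpy formulation are ours, not NASA's code, and the OBP extension described above is not shown. Consistent with the robustness claim in the abstract, the test uses no divisions or transcendental functions.

```python
import numpy as np

def obb_overlap(c_a, e_a, R_a, c_b, e_b, R_b, eps=1e-9):
    """Separating-axis test for two oriented bounding boxes.

    c: center (3,); e: half-extents (3,); R: 3x3 matrix whose columns
    are the box's local axes. Returns True if the boxes intersect.
    """
    R = R_a.T @ R_b              # orientation of B in A's frame
    t = R_a.T @ (c_b - c_a)      # translation in A's frame
    absR = np.abs(R) + eps       # epsilon guards near-parallel edge pairs

    for i in range(3):           # face axes of A
        if abs(t[i]) > e_a[i] + e_b @ absR[i, :]:
            return False
    for j in range(3):           # face axes of B
        if abs(t @ R[:, j]) > e_a @ absR[:, j] + e_b[j]:
            return False
    for i in range(3):           # 9 edge-edge cross-product axes
        for j in range(3):
            ra = (e_a[(i + 1) % 3] * absR[(i + 2) % 3, j]
                  + e_a[(i + 2) % 3] * absR[(i + 1) % 3, j])
            rb = (e_b[(j + 1) % 3] * absR[i, (j + 2) % 3]
                  + e_b[(j + 2) % 3] * absR[i, (j + 1) % 3])
            if abs(t[(i + 2) % 3] * R[(i + 1) % 3, j]
                   - t[(i + 1) % 3] * R[(i + 2) % 3, j]) > ra + rb:
                return False
    return True                  # no separating axis found
```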
Computer program to perform cost and weight analysis of transport aircraft. Volume 1: Summary
NASA Technical Reports Server (NTRS)
1973-01-01
A digital computer program for evaluating the weight and costs of advanced transport designs was developed. The resultant program, intended for use at the preliminary design level, incorporates both batch mode and interactive graphics run capability. The basis of the weight and cost estimation method developed is a unique way of predicting the physical design of each detail part of a vehicle structure at a time when only configuration concept drawings are available. In addition, the technique relies on methods to predict the precise manufacturing processes and the associated material required to produce each detail part. Weight data are generated in four areas of the program. Overall vehicle system weights are derived on a statistical basis as part of the vehicle sizing process. Theoretical weights, actual weights, and the weight of the raw material to be purchased are derived as part of the structural synthesis and part definition processes based on the computed part geometry.
NASA Workshop on Computational Structural Mechanics 1987, part 1
NASA Technical Reports Server (NTRS)
Sykes, Nancy P. (Editor)
1989-01-01
Topics in Computational Structural Mechanics (CSM) are reviewed. CSM parallel structural methods, a transputer finite element solver, architectures for multiprocessor computers, and parallel eigenvalue extraction are among the topics discussed.
Symbol Tables and Branch Tables: Linking Applications Together
NASA Technical Reports Server (NTRS)
Handler, Louis M.
2011-01-01
This document explores the computer techniques used to execute software whose parts are compiled and linked separately. The computer techniques include using a branch table or indirect address table to connect the parts. Methods of storing the information in data structures are discussed as well as differences between C and C++.
Imaging systems and methods for obtaining and using biometric information
McMakin, Douglas L [Richland, WA; Kennedy, Mike O [Richland, WA
2010-11-30
Disclosed herein are exemplary embodiments of imaging systems and methods of using such systems. In one exemplary embodiment, one or more direct images of the body of a clothed subject are received, and a motion signature is determined from the one or more images. In this embodiment, the one or more images show movement of the body of the subject over time, and the motion signature is associated with the movement of the subject's body. In certain implementations, the subject can be identified based at least in part on the motion signature. Imaging systems for performing any of the disclosed methods are also disclosed herein. Furthermore, the disclosed imaging, rendering, and analysis methods can be implemented, at least in part, as one or more computer-readable media comprising computer-executable instructions for causing a computer to perform the respective methods.
Lee, Casey J.; Murphy, Jennifer C.; Crawford, Charles G.; Deacon, Jeffrey R.
2017-10-24
The U.S. Geological Survey publishes information on concentrations and loads of water-quality constituents at 111 sites across the United States as part of the U.S. Geological Survey National Water Quality Network (NWQN). This report details historical and updated methods for computing water-quality loads at NWQN sites. The primary updates to historical load estimation methods include (1) an adaptation to methods for computing loads to the Gulf of Mexico; (2) the inclusion of loads computed using the Weighted Regressions on Time, Discharge, and Season (WRTDS) method; and (3) the inclusion of loads computed using continuous water-quality data. Loads computed using WRTDS and continuous water-quality data are provided along with those computed using historical methods. Various aspects of method updates are evaluated in this report to help users of water-quality loading data determine which estimation methods best suit their particular application.
A Modeling and Data Analysis of Laser Beam Propagation in the Maritime Domain
2015-05-18
approach to computing pdfs is the Kernel Density Method (Reference [9] has an introduction to the method), which we will apply to compute the pdf of our... The project has two parts: 1) we present a computational analysis of different probability density function approximation techniques; and 2) we introduce preliminary steps towards developing a...
Method and System for Determining Relative Displacement and Heading for Navigation
NASA Technical Reports Server (NTRS)
Sheikh, Suneel Ismail (Inventor); Pines, Darryll J. (Inventor); Conroy, Joseph Kim (Inventor); Spiridonov, Timofey N. (Inventor)
2015-01-01
A system and method for determining the location of a mobile object is provided. The system determines the location of the mobile object by determining distances between a plurality of sensors provided on first and second movable parts of the mobile object. A stride length, heading, and separation distance between the first and second movable parts are computed based on the determined distances, and the location of the mobile object is determined based on the computed stride length, heading, and separation distance.
A rule based computer aided design system
NASA Technical Reports Server (NTRS)
Premack, T.
1986-01-01
A Computer Aided Design (CAD) system is presented which supports the iterative process of design, the dimensional continuity between mating parts, and the hierarchical structure of the parts in their assembled configuration. Prolog, an interactive logic programming language, is used to represent and interpret the data base. The solid geometry representing the parts is defined in parameterized form using the swept volume method. The system is demonstrated with a design of a spring piston.
ERIC Educational Resources Information Center
Armoni, Michal; Gal-Ezer, Judith
2005-01-01
When dealing with a complex problem, solving it by reduction to simpler problems, or problems for which the solution is already known, is a common method in mathematics and other scientific disciplines, as in computer science and, specifically, in the field of computability. However, when teaching computational models (as part of computability)…
ERIC Educational Resources Information Center
Garner, Stuart
2009-01-01
This paper reports on the findings from a quantitative research study into the use of a software tool that was built to support a part-complete solution method (PCSM) for the learning of computer programming. The use of part-complete solutions to programming problems is one of the methods that can be used to reduce the cognitive load that students…
On computing closed forms for summations. [polynomials and rational functions
NASA Technical Reports Server (NTRS)
Moenck, R.
1977-01-01
The problem of finding closed forms for a summation involving polynomials and rational functions is considered. A method closely related to Hermite's method for the integration of rational functions is derived. The method expresses the sum of a rational function as a rational-function part and a transcendental part involving derivatives of the gamma function.
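As a worked illustration of this rational/transcendental split (using SymPy, whose `summation` routine implements related but more recent decision procedures, not Moenck's algorithm itself):

```python
from sympy import symbols, summation, simplify

n, k = symbols('n k', integer=True, positive=True)

# Rational part: this sum telescopes to a rational function of n.
rational_part = summation(1/(k*(k + 1)), (k, 1, n))   # -> 1 - 1/(n + 1)

# Transcendental part: no rational closed form exists; the result is
# harmonic(n), expressible through the digamma (psi) function,
# i.e. a derivative of log(gamma).
transcendental_part = summation(1/k, (k, 1, n))       # -> harmonic(n)

print(simplify(rational_part), transcendental_part)
```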
NASA Astrophysics Data System (ADS)
Lidar, Daniel A.; Brun, Todd A.
2013-09-01
Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and Harold Baranger; 26. Critique of fault-tolerant quantum information processing Robert Alicki; References; Index.
ICAN/PART: Particulate composite analyzer, user's manual and verification studies
NASA Technical Reports Server (NTRS)
Goldberg, Robert K.; Murthy, Pappu L. N.; Mital, Subodh K.
1996-01-01
A methodology for predicting the equivalent properties and constituent microstresses of particulate matrix composites, based on the micromechanics approach, is developed. These equations are integrated into a computer code developed to predict the equivalent properties and microstresses of fiber-reinforced polymer matrix composites, forming a new computer code, ICAN/PART. Details of the flowchart, input, and output for ICAN/PART are described, along with examples of the input and output. Only the differences between ICAN/PART and the original ICAN code are described in detail, and the user is assumed to be familiar with the structure and usage of the original ICAN code. Detailed verification studies, utilizing finite element and boundary element analyses, are conducted in order to verify that the micromechanics methodology accurately models the mechanics of particulate matrix composites. The equivalent properties computed by ICAN/PART fall within bounds established by the finite element and boundary element results. Furthermore, constituent microstresses computed by ICAN/PART agree in an average sense with results computed using the finite element method. The verification studies indicate that the micromechanics programmed into ICAN/PART do indeed accurately model the mechanics of particulate matrix composites.
Steady and unsteady three-dimensional transonic flow computations by integral equation method
NASA Technical Reports Server (NTRS)
Hu, Hong
1994-01-01
This is the final technical report of the research performed under the grant: NAG1-1170, from the National Aeronautics and Space Administration. The report consists of three parts. The first part presents the work on unsteady flows around a zero-thickness wing. The second part presents the work on steady flows around non-zero thickness wings. The third part presents the massively parallel processing implementation and performance analysis of integral equation computations. At the end of the report, publications resulting from this grant are listed and attached.
Quantum Mechanical Study of Atoms and Molecules
NASA Technical Reports Server (NTRS)
Sahni, R. C.
1961-01-01
This paper, following a brief introduction, is divided into five parts. Part I outlines the theory of the molecular orbital method for the ground, ionized, and excited states of molecules. Part II gives a brief summary of the interaction integrals and their tabulation. Part III outlines an automatic program designed for the computation of various states of molecules. Part IV gives examples of the study of ground, ionized, and excited states of CO, BH, and N2, where the program of automatic computation and the molecular integrals have been utilized. Part V lists some special problems of molecular quantum mechanics that are being tackled at New York University.
New method of processing heat treatment experiments with numerical simulation support
NASA Astrophysics Data System (ADS)
Kik, T.; Moravec, J.; Novakova, I.
2017-08-01
In this work, the benefits of combining modern software for numerical simulation of welding processes with laboratory research are described. A new method of processing heat-treatment experiments is presented that yields relevant input data for numerical simulations of the heat treatment of large parts. Using experiments on small test samples, it is now possible to simulate cooling conditions comparable to those of larger parts. Results from this method of testing make the boundary conditions used for the real cooling process more accurate, and can also be used to improve software databases and optimize computational models. The aim is to refine the computation of temperature fields for large-scale hardened parts, based on a new method for determining the temperature dependence of the heat transfer coefficient into the hardening medium for a particular material, a defined maximum thickness of the processed part, and given cooling conditions. The paper also presents a comparison of standard and modified (according to the newly suggested methodology) heat transfer coefficient data and their influence on simulation results, showing how even small changes affect the distributions of temperature, metallurgical phases, hardness, and stress. The experiment thus provides not only input data and data enabling optimization of the computational model but also verification data. The greatest advantage of the described method is its independence of the type of cooling medium used.
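As a hedged illustration of the kind of data processing involved, the sketch below extracts a constant heat-transfer coefficient from a measured cooling curve of a small sample under a lumped-capacitance assumption; the function and its arguments are hypothetical, and the paper's method goes further by determining the temperature dependence of h.

```python
import numpy as np

def fit_heat_transfer_coefficient(t, T, T_medium, mass, c_p, area):
    """Fit a constant h from a cooling curve T(t), assuming lumped
    capacitance: T(t) = T_medium + (T0 - T_medium)*exp(-h*A/(m*c_p)*t).

    t: times (s); T: sample temperatures; T_medium: quenchant temperature;
    mass (kg), c_p (J/(kg K)), and area (m^2) of the small test sample.
    """
    theta = (T - T_medium) / (T[0] - T_medium)   # normalized excess temp.
    slope = np.polyfit(t, np.log(theta), 1)[0]   # slope = -h*A/(m*c_p)
    return -slope * mass * c_p / area            # h in W/(m^2 K)
```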
Computer-aided drug discovery.
Bajorath, Jürgen
2015-01-01
Computational approaches are an integral part of interdisciplinary drug discovery research. Understanding the science behind computational tools, their opportunities, and limitations is essential to make a true impact on drug discovery at different levels. If applied in a scientifically meaningful way, computational methods improve the ability to identify and evaluate potential drug molecules, but there remain weaknesses in the methods that preclude naïve applications. Herein, current trends in computer-aided drug discovery are reviewed, and selected computational areas are discussed. Approaches are highlighted that aid in the identification and optimization of new drug candidates. Emphasis is put on the presentation and discussion of computational concepts and methods, rather than case studies or application examples. As such, this contribution aims to provide an overview of the current methodological spectrum of computational drug discovery for a broad audience.
Recursive computation of mutual potential between two polyhedra
NASA Astrophysics Data System (ADS)
Hirabayashi, Masatoshi; Scheeres, Daniel J.
2013-11-01
Recursive computation of mutual potential, force, and torque between two polyhedra is studied. Based on formulations by Werner and Scheeres (Celest Mech Dyn Astron 91:337-349, 2005) and Fahnestock and Scheeres (Celest Mech Dyn Astron 96:317-339, 2006) who applied the Legendre polynomial expansion to gravity interactions and expressed each order term by a shape-dependent part and a shape-independent part, this paper generalizes the computation of each order term, giving recursive relations of the shape-dependent part. To consider the potential, force, and torque, we introduce three tensors. This method is applicable to any multi-body systems. Finally, we implement this recursive computation to simulate the dynamics of a two rigid-body system that consists of two equal-sized parallelepipeds.
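For context, the shape-dependent/shape-independent factorization rests on the Legendre expansion of the inverse distance between mass elements; in our notation (not the paper's):

```latex
\frac{1}{\lvert \mathbf{R} + \mathbf{d} \rvert}
  = \sum_{n=0}^{\infty}
    \frac{\lvert \mathbf{d} \rvert^{n}}{\lvert \mathbf{R} \rvert^{n+1}}
    \, P_n(\cos\gamma),
  \qquad \lvert \mathbf{d} \rvert < \lvert \mathbf{R} \rvert ,
```

where R joins the two body centers, d collects the positions of the two mass elements relative to their body centers, and gamma is the angle between R and d. Integrating each order over both polyhedra separates into body-fixed (shape-dependent) integrals, which the paper evaluates recursively, multiplied by shape-independent functions of R alone.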
Computer-Assisted College Administration. Final Report.
ERIC Educational Resources Information Center
Punga, V.
Rensselaer Polytechnic Institute of Connecticut offered a part-time training program "Computer-Assisted-College-Administration" during the academic year 1969-70. Participants were trained in the utilization of computer-assisted methods in dealing with the common tasks of college administration, the problems of college development and…
Convergence acceleration of the Proteus computer code with multigrid methods
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1992-01-01
Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lisitsa, Vadim, E-mail: lisitsavv@ipgg.sbras.ru; Novosibirsk State University, Novosibirsk; Tcheverda, Vladimir
We present an algorithm for the numerical simulation of seismic wave propagation in models with a complex near-surface part and free-surface topography. The approach is based on the combination of finite differences with the discontinuous Galerkin method. The discontinuous Galerkin method can be used on polyhedral meshes; thus, it is easy to handle the complex surfaces in the models. However, this approach is computationally intense in comparison with finite differences. Finite differences are computationally efficient, but in general they require rectangular grids, leading to a stair-step approximation of the interfaces, which causes strong diffraction of the wavefield. In this research we present a hybrid algorithm where the discontinuous Galerkin method is used in a relatively small upper part of the model and finite differences are applied to the main part of the model.
Light aircraft lift, drag, and moment prediction: A review and analysis
NASA Technical Reports Server (NTRS)
Smetana, F. O.; Summey, D. C.; Smith, N. S.; Carden, R. K.
1975-01-01
The historical development of analytical methods for predicting the lift, drag, and pitching moment of complete light aircraft configurations in cruising flight is reviewed. Theoretical methods, based in part on techniques described in the literature and in part on original work, are developed. These methods form the basis for understanding the computer programs given to: (1) compute the lift, drag, and moment of conventional airfoils, (2) extend these two-dimensional characteristics to three dimensions for moderate-to-high aspect ratio unswept wings, (3) plot complete configurations, (4) convert the fuselage geometric data to the correct input format, (5) compute the fuselage lift and drag, (6) compute the lift and moment of symmetrical airfoils to M = 1.0 by a simplified semi-empirical procedure, and (7) compute, in closed form, the pressure distribution over a prolate spheroid at alpha = 0. Comparisons of the predictions with experiment indicate excellent lift and drag agreement for conventional airfoils and wings. Limited comparisons of body-alone drag characteristics yield reasonable agreement. Also included are discussions for interference effects and techniques for summing the results above to obtain predictions for complete configurations.
NASA Workshop on Computational Structural Mechanics 1987, part 2
NASA Technical Reports Server (NTRS)
Sykes, Nancy P. (Editor)
1989-01-01
Advanced methods and testbed/simulator development topics are discussed. Computational Structural Mechanics (CSM) testbed architecture, engine structures simulation, applications to laminate structures, and a generic element processor are among the topics covered.
NASA Workshop on Computational Structural Mechanics 1987, part 3
NASA Technical Reports Server (NTRS)
Sykes, Nancy P. (Editor)
1989-01-01
Computational Structural Mechanics (CSM) topics are explored. Algorithms and software for nonlinear structural dynamics, concurrent algorithms for transient finite element analysis, computational methods and software systems for dynamics and control of large space structures, and the use of multi-grid for structural analysis are discussed.
Fuels and fire in land-management planning: Part 3. Costs and losses for management options.
Wayne G. Maxwell; David V. Sandberg; Franklin R. Ward
1983-01-01
An approach is illustrated for computing expected costs of fire protection; fuel treatment; fire suppression; damage values; and percent of area lost to wildfire for a management or rotation cycle. Input is derived from Part 1, a method for collecting and classifying the total fuel complex, and Part 2, a method for appraising and rating probable fire behavior. This...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-15
...) will place all orders by EDI using computer-to-computer EDI. If computer-to-computer EDI is not possible, FAS will use an alternative EDI method allowing the Contractor to receive orders by facsimile transmission. Subject to the Contractor's agreement, other agencies may place orders by EDI. * * * * * (g) The...
Scattering properties of electromagnetic waves from metal object in the lower terahertz region
NASA Astrophysics Data System (ADS)
Chen, Gang; Dang, H. X.; Hu, T. Y.; Su, Xiang; Lv, R. C.; Li, Hao; Tan, X. M.; Cui, T. J.
2018-01-01
An efficient hybrid algorithm is proposed to analyze the electromagnetic scattering properties of metal objects in the lower terahertz (THz) region. A metal object can be viewed as a perfectly electrically conducting object with a slightly rough surface in the lower THz region; hence the THz field scattered from the metal object can be divided into coherent and incoherent parts. The physical optics and truncated-wedge incremental-length diffraction coefficients methods are combined to compute the coherent part, while the small perturbation method is used for the incoherent part. With the Monte Carlo method, the radar cross section of the rough metal surface is computed by both the multilevel fast multipole algorithm and the proposed hybrid algorithm. The numerical results show that the proposed algorithm has good accuracy and simulates the scattering properties rapidly in the lower THz region.
COMSAC: Computational Methods for Stability and Control. Part 1
NASA Technical Reports Server (NTRS)
Fremaux, C. Michael (Compiler); Hall, Robert M. (Compiler)
2004-01-01
Work on stability and control included the following reports: Introductory Remarks; Introduction to Computational Methods for Stability and Control (COMSAC); Stability & Control Challenges for COMSAC: A NASA Langley Perspective; Emerging CFD Capabilities and Outlook: A NASA Langley Perspective; The Role for Computational Fluid Dynamics for Stability and Control: Is It Time?; Northrop Grumman Perspective on COMSAC; Boeing Integrated Defense Systems Perspective on COMSAC; Computational Methods in Stability and Control: WPAFB Perspective; Perspective: Raytheon Aircraft Company; A Greybeard's View of the State of Aerodynamic Prediction; Computational Methods for Stability and Control: A Perspective; Boeing TacAir Stability and Control Issues for Computational Fluid Dynamics; NAVAIR S&C Issues for CFD; An S&C Perspective on CFD; Issues, Challenges & Payoffs: A Boeing User's Perspective on CFD for S&C; and Stability and Control in Computational Simulations for Conceptual and Preliminary Design: The Past, Today, and Future?
A new method for computing the gyrocenter orbit in the tokamak configuration
NASA Astrophysics Data System (ADS)
Xu, Yingfeng
2013-10-01
Gyrokinetic theory is an important tool for studying the long-time behavior of magnetized plasmas in tokamaks. The gyrocenter trajectory determined by the gyrocenter equations of motion can be computed by using a special kind of Lie-transform perturbation method. The corresponding Lie transform, called the I-transform, ensures that the transformed equations of motion have the same form as the unperturbed ones. The gyrocenter trajectory over a short time is divided into two parts: one is along the unperturbed orbit, and the other, which is related to the perturbation, is determined by the I-transform generating vector. A numerical gyrocenter orbit code based on this new method has been developed for the tokamak configuration and benchmarked against another orbit code in some simple cases. Furthermore, it is clearly demonstrated that this new method for computing the gyrocenter orbit is equivalent to the gyrocenter Hamilton equations of motion up to second order in the timestep. The new method can be applied to gyrokinetic simulation: the unperturbed part of the gyrocenter orbit, determined by the equilibrium fields, can be computed in advance, and the corresponding time consumption is negligible.
Conjugate Gradient Algorithms For Manipulator Simulation
NASA Technical Reports Server (NTRS)
Fijany, Amir; Scheid, Robert E.
1991-01-01
Report discusses applicability of conjugate-gradient algorithms to computation of forward dynamics of robotic manipulators. Rapid computation of forward dynamics essential to teleoperation and other advanced robotic applications. Part of continuing effort to find algorithms meeting requirements for increased computational efficiency and speed. Method used for iterative solution of systems of linear equations.
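For reference, a textbook conjugate-gradient solver of the kind evaluated in the report is sketched below (our Python rendering, not NASA's code; in the forward-dynamics setting the matrix would be the symmetric positive-definite joint-space inertia matrix):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Iteratively solve A x = b for symmetric positive-definite A."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x                     # initial residual
    p = r.copy()                      # initial search direction
    rs = r @ r
    for _ in range(max_iter or len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)         # optimal step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:     # converged
            break
        p = r + (rs_new / rs) * p     # next A-conjugate direction
        rs = rs_new
    return x
```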
pyro: Python-based tutorial for computational methods for hydrodynamics
NASA Astrophysics Data System (ADS)
Zingale, Michael
2015-07-01
pyro is a simple python-based tutorial on computational methods for hydrodynamics. It includes 2-d solvers for advection, compressible, incompressible, and low Mach number hydrodynamics, diffusion, and multigrid. It is written with ease of understanding in mind. An extensive set of notes that is part of the Open Astrophysics Bookshelf project provides details of the algorithms.
Construction of Orthonormal Wavelets Using Symbolic Algebraic Methods
NASA Astrophysics Data System (ADS)
Černá, Dana; Finěk, Václav
2009-09-01
Our contribution is concerned with the solution of systems of nonlinear algebraic equations arising from the computation of scaling coefficients of orthonormal wavelets with compact support, specifically Daubechies wavelets, symmlets, coiflets, and generalized coiflets. These wavelets are defined as solutions of equation systems which are partly linear and partly nonlinear. The idea of the presented methods is to replace the equations for scaling coefficients by equations for scaling moments. This enables us to eliminate some quadratic conditions in the original system and thereby simplify it. The simplified system is solved with the aid of the Gröbner basis method. The advantage of our approach is that in some cases it provides all possible solutions, and these solutions can be computed to arbitrary precision. For small systems, we are even able to find explicit solutions. The computation was carried out with the symbolic algebra software Maple.
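A small illustration of the underlying idea, using SymPy rather than Maple: the Daubechies-4 design conditions form a partly linear, partly quadratic system whose Gröbner basis triangularizes it into an easily solvable form. The paper's moment-based reformulation is not reproduced here; these are the raw coefficient equations under the sum-equals-2 normalization.

```python
from sympy import symbols, groebner, solve

h0, h1, h2, h3 = symbols('h0 h1 h2 h3', real=True)

eqs = [
    h0 + h1 + h2 + h3 - 2,               # partition of unity (linear)
    h0**2 + h1**2 + h2**2 + h3**2 - 2,   # orthonormality, shift 0
    h0*h2 + h1*h3,                       # orthonormality, shift 2
    -h1 + 2*h2 - 3*h3,                   # one vanishing wavelet moment
]

G = groebner(eqs, h0, h1, h2, h3, order='lex')  # triangularized system
print(solve(eqs, [h0, h1, h2, h3]))
# two mirror-image solutions, e.g. h0 = (1 + sqrt(3))/4, ...
```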
Cooley, Richard L.
1993-01-01
Calibration data (observed values corresponding to model-computed values of dependent variables) are incorporated into a general method of computing exact Scheffé-type confidence intervals analogous to the confidence intervals developed in part 1 (Cooley, this issue) for a function of parameters derived from a groundwater flow model. Parameter uncertainty is specified by a distribution of parameters conditioned on the calibration data. This distribution was obtained as a posterior distribution by applying Bayes' theorem to the hydrogeologically derived prior distribution of parameters from part 1 and a distribution of differences between the calibration data and corresponding model-computed dependent variables. Tests show that the new confidence intervals can be much smaller than the intervals of part 1 because the prior parameter variance-covariance structure is altered so that combinations of parameters that give poor model fit to the data are unlikely. The confidence intervals of part 1 and the new confidence intervals can be effectively employed in a sequential method of model construction whereby new information is used to reduce confidence interval widths at each stage.
Computational models for predicting interactions with membrane transporters.
Xu, Y; Shen, Q; Liu, X; Lu, J; Li, S; Luo, C; Gong, L; Luo, X; Zheng, M; Jiang, H
2013-01-01
Membrane transporters, including the two families of ATP-binding cassette (ABC) transporters and solute carrier (SLC) transporters, are proteins that play important roles in facilitating the movement of molecules into and out of cells. Consequently, these transporters can be major determinants of the therapeutic efficacy, toxicity, and pharmacokinetics of a variety of drugs. Considering the time and expense that biological experiments require, computational methods have arisen as a complementary choice for evaluating efficacy and safety. In this article, we provide an overview of the contributions that computational methods have made to the transporter field in the past decades. At the beginning, we present a brief introduction to the structure and function of the major members of the two transporter families. In the second part, we focus on widely used computational methods in different aspects of transporter research. In the absence of a high-resolution structure for most transporters, homology modeling is a useful tool to interpret experimental data and potentially guide experimental studies, and we summarize reported homology modeling in this review. Research on computational methods covers the major transporters and a variety of topics, including the classification of substrates and/or inhibitors, prediction of protein-ligand interactions, constitution of the binding pocket, phenotypes of non-synonymous single-nucleotide polymorphisms, and conformational analyses that try to explain the mechanism of action. As an example, one of the most important transporters, P-gp, is discussed in detail to explain the differences and advantages of various computational models. In the third part, the challenges of developing computational methods that give reliable predictions, as well as potential future directions in transporter-related modeling, are discussed.
Ultrasonic material hardness depth measurement
Good, M.S.; Schuster, G.J.; Skorpik, J.R.
1997-07-08
The invention is an ultrasonic surface hardness depth measurement apparatus and method permitting rapid determination of hardness depth of shafts, rods, tubes and other cylindrical parts. The apparatus of the invention has a part handler, sensor, ultrasonic electronics component, computer, computer instruction sets, and may include a display screen. The part handler has a vessel filled with a couplant, and a part rotator for rotating a cylindrical metal part with respect to the sensor. The part handler further has a surface follower upon which the sensor is mounted, thereby maintaining a constant distance between the sensor and the exterior surface of the cylindrical metal part. The sensor is mounted so that a front surface of the sensor is within the vessel with couplant between the front surface of the sensor and the part. 12 figs.
Methods for Prediction of High-Speed Reacting Flows in Aerospace Propulsion
NASA Technical Reports Server (NTRS)
Drummond, J. Philip
2014-01-01
Research to develop high-speed airbreathing aerospace propulsion systems was underway in the late 1950s. A major part of the effort involved the supersonic combustion ramjet, or scramjet, engine. Work had also begun to develop computational techniques for solving the equations governing the flow through a scramjet engine. However, scramjet technology and the computational methods to assist in its evolution would remain apart for another decade. The principal barrier was that the computational methods needed for engine evolution lacked the computer technology required for solving the discrete equations resulting from the numerical methods. Even today, computer resources remain a major pacing item in overcoming this barrier. Significant advances have been made over the past 35 years, however, in modeling the supersonic chemically reacting flow in a scramjet combustor. To see how scramjet development and the required computational tools finally merged, we briefly trace the evolution of the technology in both areas.
Transonic Unsteady Aerodynamics and Aeroelasticity 1987, part 1
NASA Technical Reports Server (NTRS)
Bland, Samuel R. (Compiler)
1989-01-01
Computational fluid dynamics methods have been widely accepted for transonic aeroelastic analysis. Previously, calculations with the TSD methods were used for 2-D airfoils, but now the TSD methods are applied to the aeroelastic analysis of the complete aircraft. The Symposium papers are grouped into five subject areas, two of which are covered in this part: (1) Transonic Small Disturbance (TSD) theory for complete aircraft configurations; and (2) Full potential and Euler equation methods.
A coarse-grid-projection acceleration method for finite-element incompressible flow computations
NASA Astrophysics Data System (ADS)
Kashefi, Ali; Staples, Anne; FiN Lab Team
2015-11-01
Coarse grid projection (CGP) methodology provides a framework for accelerating computations by performing part of the computation on a coarsened grid. We apply CGP to pressure projection methods for finite element-based incompressible flow simulations. In this approach, the predicted velocity field is restricted to a coarsened grid, the pressure is determined by solving the Poisson equation on the coarse grid, and the resulting data are prolonged to the preset fine grid. The contributions of the CGP method to the pressure correction technique are twofold: first, it substantially lessens the computational cost devoted to the Poisson equation, which is the most time-consuming part of the simulation process; second, it preserves the accuracy of the velocity field. The velocity and pressure spaces are approximated by a Galerkin spectral element method using piecewise linear basis functions. A restriction operator is designed so that fine-grid data are directly injected into the coarse grid. The Laplacian and divergence matrices are derived by taking inner products of coarse-grid shape functions. Linear interpolation is implemented to construct a prolongation operator. A study of the data accuracy and the CPU time for CGP-based versus non-CGP computations is presented.
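The essence of the acceleration can be shown in one dimension. This is a deliberately simplified sketch under our own assumptions (uniform grid, homogeneous Dirichlet conditions, finite differences standing in for the paper's finite element spaces):

```python
import numpy as np

def cgp_pressure_solve(rhs_fine):
    """Solve the pressure Poisson problem on a grid with half the nodes,
    then linearly prolong the solution back to the fine grid."""
    n = len(rhs_fine)                          # n = 2*m + 1 interior fine nodes
    m = (n - 1) // 2
    rhs_coarse = np.asarray(rhs_fine)[1::2]    # restriction: direct injection
    h = 1.0 / (m + 1)
    # 1D Laplacian with Dirichlet boundaries on the coarse grid
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    p_coarse = np.linalg.solve(A, rhs_coarse)  # the expensive step, now small
    # prolongation: linear interpolation back to the fine grid
    x_c = np.concatenate(([0.0], np.arange(1, m + 1) / (m + 1), [1.0]))
    p_c = np.concatenate(([0.0], p_coarse, [0.0]))
    x_f = np.arange(1, n + 1) / (n + 1)
    return np.interp(x_f, x_c, p_c)
```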
ERIC Educational Resources Information Center
Kashef, Ali E.
A study was conducted to determine the effectiveness of teaching multiview and pictorial drawing using traditional methods and using computer-aided drafting (CAD). Research used a quasi-experimental design; subjects were 37 full- and part-time undergraduate students in industrial technology or technology education courses. The students were…
Estimates of ground-water recharge based on streamflow-hydrograph methods: Pennsylvania
Risser, Dennis W.; Conger, Randall W.; Ulrich, James E.; Asmussen, Michael P.
2005-01-01
This study, completed by the U.S. Geological Survey (USGS) in cooperation with the Pennsylvania Department of Conservation and Natural Resources, Bureau of Topographic and Geologic Survey (T&GS), provides estimates of ground-water recharge for watersheds throughout Pennsylvania computed by use of two automated streamflow-hydrograph-analysis methods--PART and RORA. The PART computer program uses a hydrograph-separation technique to divide the streamflow hydrograph into components of direct runoff and base flow. Base flow can be a useful approximation of recharge if losses and interbasin transfers of ground water are minimal. The RORA computer program uses a recession-curve displacement technique to estimate ground-water recharge from each storm period indicated on the streamflow hydrograph. Recharge estimates were made using streamflow records collected during 1885-2001 from 197 active and inactive streamflow-gaging stations in Pennsylvania where streamflow is relatively unaffected by regulation. Estimates of mean-annual recharge in Pennsylvania computed by the use of PART ranged from 5.8 to 26.6 inches; estimates from RORA ranged from 7.7 to 29.3 inches. Estimates from the RORA program were about 2 inches greater than those derived from the PART program. Mean-monthly recharge was computed from the RORA program and was reported as a percentage of mean-annual recharge. On the basis of this analysis, the major ground-water recharge period in Pennsylvania typically is November through May; the greatest monthly recharge typically occurs in March.
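To convey the flavor of hydrograph separation, here is a deliberately simplified local-minimum baseflow separation in Python; it is not the USGS PART code, which adds antecedent-recession requirements, and the window length is an assumption.

```python
import numpy as np

def simple_baseflow_separation(q, window_days=5):
    """Crude hydrograph separation: daily streamflow -> baseflow estimate.

    Connects local minima of the daily flow record by linear
    interpolation and caps the result at the observed streamflow.
    """
    q = np.asarray(q, dtype=float)
    half = window_days // 2
    idx = [i for i in range(half, len(q) - half)
           if q[i] == q[i - half:i + half + 1].min()]   # local minima
    base = np.interp(np.arange(len(q)), idx, q[idx])
    return np.minimum(base, q)   # baseflow cannot exceed total flow

# mean baseflow (as a crude recharge proxy) in the same units as q:
# recharge_proxy = simple_baseflow_separation(daily_flows).mean()
```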
A Single Camera Motion Capture System for Human-Computer Interaction
NASA Astrophysics Data System (ADS)
Okada, Ryuzo; Stenger, Björn
This paper presents a method for markerless human motion capture using a single camera. It uses tree-based filtering to efficiently propagate a probability distribution over poses of a 3D body model. The pose vectors and associated shapes are arranged in a tree, which is constructed by hierarchical pairwise clustering, in order to efficiently evaluate the likelihood in each frame. A new likelihood function based on silhouette matching is proposed that improves the pose estimation of thinner body parts, i.e., the limbs. The dynamic model takes self-occlusion into account by increasing the variance of occluded body parts, thus allowing for recovery when the body part reappears. We present two applications of our method that work in real time on a Cell Broadband Engine™: a computer game and a virtual clothing application.
Computation of aeroelastic characteristics and stress-strained state of parachutes
NASA Astrophysics Data System (ADS)
Dneprov, Igor' V.
The paper presents computation results for the stress-strained state and aeroelastic characteristics of different types of parachutes in the process of their interaction with a flow. Simulation of the aerodynamic part of the aeroelastic problem is based on the discrete vortex method, while the elastic part of the problem is solved by employing either the finite element method or the finite difference method. The research covers the following problems in the dynamic aeroelasticity of axisymmetric parachutes: parachute inflation, forebody influence on the aerodynamic characteristics of the object-parachute system, parachute disreefing, and parachute inflation in the presence of the engagement parachute. The paper also presents the solution of the spatial problem of static aeroelasticity for a single-envelope ram-air parachute. Some practical recommendations are suggested.
Computational prediction of chemical reactions: current status and outlook.
Engkvist, Ola; Norrby, Per-Ola; Selmi, Nidhal; Lam, Yu-Hong; Peng, Zhengwei; Sherer, Edward C; Amberg, Willi; Erhard, Thomas; Smyth, Lynette A
2018-06-01
Over the past few decades, various computational methods have become increasingly important for discovering and developing novel drugs. Computational prediction of chemical reactions is a key part of an efficient drug discovery process. In this review, we discuss important parts of this field, with a focus on utilizing reaction data to build predictive models, the existing programs for synthesis prediction, and usage of quantum mechanics and molecular mechanics (QM/MM) to explore chemical reactions. We also outline potential future developments with an emphasis on pre-competitive collaboration opportunities. Copyright © 2018 Elsevier Ltd. All rights reserved.
Separation of the Magnetic Field into Parts Produced by Internal and External Sources
NASA Astrophysics Data System (ADS)
Lazanja, David
2005-10-01
Given the total magnetic field on a toroidal plasma surface, a method for decomposing the field into a part due to internal currents (often the plasma) and a part due to external currents is presented. The decomposition exploits Laplace theory which is valid in the vacuum region between the plasma surface and the chamber walls. The method does not assume toroidal symmetry, and it is partly based on Merkel's 1986 work on vacuum field computations. A change in the plasma shape is produced by the total normal field perturbation on the plasma surface. This method allows a separation of the total normal field perturbation into a part produced by external currents and a part produced by the plasma response.
Visualization assisted by parallel processing
NASA Astrophysics Data System (ADS)
Lange, B.; Rey, H.; Vasques, X.; Puech, W.; Rodriguez, N.
2011-01-01
This paper discusses the experimental results of our visualization model for data extracted from sensors. The objective is to find a computationally efficient method for producing a real-time rendering of a large amount of data. We develop a visualization method to monitor the temperature variance of a data center. Sensors are placed on three layers and do not cover the whole room, so we use a particle paradigm to interpolate the sensor data: particles model the "space" of the room. We partition the particle set using two mathematical methods, Delaunay triangulation and Voronoi cells, both presented by Avis and Bhattacharya. Particles provide information on the room temperature at different coordinates over time. To locate and update particle data we define a computational cost function; to evaluate this function efficiently, we use a client-server paradigm in which the server computes the data and the client displays it on different kinds of hardware. The paper is organized as follows. The first part presents related algorithms used to visualize large flows of data. The second part presents the different platforms and methods that were evaluated to determine the best solution for the proposed task. The benchmark measures the computational cost of our algorithm, based on locating particles relative to sensors and on updating particle values; it was run on a personal computer using single-core CPU, multi-core, GPU, and hybrid GPU/CPU programming. GPU programming is a growing approach in the research field, as it allows real-time rendering instead of precomputed rendering. To improve our results, we also ran our algorithm on a high-performance computing (HPC) system, and this benchmark was used to improve the multi-core method; HPC is commonly used in data visualization (astronomy, physics, etc.) to improve rendering and achieve real-time performance.
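For the interpolation step, a Delaunay-based approach like the one described can be sketched with SciPy (hypothetical sensor layout and room dimensions; the paper's client-server and GPU machinery is not shown):

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(0)
sensors = rng.uniform(0.0, 10.0, size=(30, 3))   # (x, y, z) on three layers
readings = 20.0 + 5.0 * rng.random(30)           # temperatures in deg C

# LinearNDInterpolator builds a Delaunay triangulation of the sensor
# positions and interpolates linearly inside each tetrahedron.
interpolate = LinearNDInterpolator(sensors, readings)

particles = rng.uniform(0.0, 10.0, size=(1000, 3))  # the "space" of the room
particle_temps = interpolate(particles)   # NaN outside the convex hull
```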
Fixtureless nonrigid part inspection using depth cameras
NASA Astrophysics Data System (ADS)
Xiong, Hanwei; Xu, Jun; Xu, Chenxi; Pan, Ming
2016-10-01
In the automobile industry, flexible thin-shell parts are used to cover the car body. Such parts can have a different shape in the free state than the design model due to dimensional variation, gravity loads, and residual strains, so special inspection fixtures are generally indispensable for geometric inspection. Recently, some researchers have proposed fixtureless nonrigid inspection methods using intrinsic geometry or virtual spring-mass systems, based on assumptions about the deformation between the free-state shape and the nominal CAD shape. In this paper, we propose a new fixtureless method to inspect flexible parts with a depth camera that is efficient and of low computational complexity. Unlike traditional methods, we gather two point cloud sets of the manufactured part in two different states, establish correspondences between the two sets, and match one of them to the CAD model. The manufacturing defects can be derived from the correspondences; the finite element method (FEM) is not needed in our approach. An experimental evaluation of the proposed method is presented.
Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.
Huynh, Linh; Tagkopoulos, Ilias
2015-08-21
In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.
Comparing errors in ED computer-assisted vs conventional pediatric drug dosing and administration.
Yamamoto, Loren; Kanemori, Joan
2010-06-01
Compared to fixed-dose, single-vial drug administration in adults, pediatric drug dosing and administration requires a series of calculations, all of which are potentially error prone. The purpose of this study is to compare error rates and task completion times for common pediatric medication scenarios using computer program assistance vs conventional methods. Two versions of a 4-part paper-based test were developed. Each part consisted of a set of medication administration and/or dosing tasks. Emergency department and pediatric intensive care unit nurse volunteers completed these tasks using both methods (sequence assigned to start with a conventional or a computer-assisted approach). Completion times, errors, and the reason for each error were recorded. Thirty-eight nurses completed the study. Summing the completion of all 4 parts, the mean conventional total time was 1243 seconds vs the mean computer program total time of 879 seconds (P < .001). The conventional manual method had a mean of 1.8 errors vs the computer program with a mean of 0.7 errors (P < .001). Of the 97 total errors, 36 were due to misreading the drug concentration on the label, 34 were due to calculation errors, and 8 were due to misplaced decimals. Of the 36 label interpretation errors, 18 (50%) occurred with digoxin or insulin. Computerized assistance reduced errors and the time required for drug administration calculations. A pattern of errors emerged: reading/interpreting certain drug labels was more error prone. Optimizing the layout of drug labels could reduce the error rate for error-prone labels. Copyright (c) 2010 Elsevier Inc. All rights reserved.
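The kind of calculation the computer assistance replaces is simple but error prone; a hypothetical sketch (the function, parameter names, and capping rule are illustrative assumptions, not the program evaluated in the study):

```python
def weight_based_dose(weight_kg, dose_mg_per_kg,
                      concentration_mg_per_ml, max_dose_mg=None):
    """Return (dose in mg, volume to draw in mL) for a weight-based order."""
    dose_mg = weight_kg * dose_mg_per_kg
    if max_dose_mg is not None:
        dose_mg = min(dose_mg, max_dose_mg)   # cap at the maximum dose
    volume_ml = dose_mg / concentration_mg_per_ml
    return round(dose_mg, 2), round(volume_ml, 2)

# e.g. a 50 mg/kg order for a 12 kg child from a 100 mg/mL vial:
print(weight_based_dose(12, 50, 100))   # -> (600.0, 6.0)
```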
Potential applications of computational fluid dynamics to biofluid analysis
NASA Technical Reports Server (NTRS)
Kwak, D.; Chang, J. L. C.; Rogers, S. E.; Rosenfeld, M.; Kwak, D.
1988-01-01
Computational fluid dynamics has been developed to the stage where it has become an indispensable part of aerospace research and design. In view of the advances made in aerospace applications, the computational approach can be used for biofluid mechanics research. Several flow simulation methods developed for aerospace problems are briefly discussed for potential applications to biofluids, especially to blood flow analysis.
[Axial computer tomography of the neurocranium (author's transl)].
Stöppler, L
1977-05-27
Computed tomography (CT), a new radiographic examination technique, is highly efficient: it has high informative content with little stress for the patient. In contrast to conventional X-ray technology, CT succeeds, by direct presentation of the structure of the soft parts, in obtaining information which comes close to that of macroscopic neuropathology. The capacity and limitations of the method at the present stage of development are reported. Computed tomography cannot displace conventional neuroradiological methods of investigation, although it rightly serves as a screening method and helps towards their selective use. Proper indications, technical integration, and handling of CT are prerequisites for deriving the full benefit of this excellent new technique.
Thai Language Sentence Similarity Computation Based on Syntactic Structure and Semantic Vector
NASA Astrophysics Data System (ADS)
Wang, Hongbin; Feng, Yinhan; Cheng, Liang
2018-03-01
Sentence similarity computation plays an increasingly important role in text mining, Web page retrieval, machine translation, speech recognition, and question answering systems. Thai is a resource-scarce language; unlike Chinese, it lacks resources such as HowNet and CiLin, so Thai sentence similarity research faces some challenges. To address this problem, this paper proposes a novel method to compute the similarity of Thai sentences based on syntactic structure and semantic vectors. The method first uses Part-of-Speech (POS) dependencies to calculate the syntactic structure similarity of two sentences, and then uses word vectors to calculate their semantic similarity. Finally, we combine the two measures to calculate the overall similarity of two Thai sentences. The proposed method considers not only semantics but also sentence syntactic structure. The experimental results show that this method is feasible for Thai sentence similarity computation.
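A minimal sketch of the combination, assuming precomputed word vectors and POS tags, and using POS bigram overlap as a crude stand-in for the paper's POS-dependency measure:

```python
import numpy as np

def semantic_sim(vecs_a, vecs_b):
    """Cosine similarity between mean word vectors of two sentences."""
    a, b = np.mean(vecs_a, axis=0), np.mean(vecs_b, axis=0)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def syntactic_sim(pos_a, pos_b):
    """Jaccard overlap of POS-tag bigrams as a crude proxy for
    POS-dependency structure similarity."""
    ba, bb = set(zip(pos_a, pos_a[1:])), set(zip(pos_b, pos_b[1:]))
    return len(ba & bb) / max(1, len(ba | bb))

def sentence_sim(vecs_a, vecs_b, pos_a, pos_b, alpha=0.5):
    return alpha * syntactic_sim(pos_a, pos_b) + \
           (1 - alpha) * semantic_sim(vecs_a, vecs_b)

# Hypothetical 4-word sentences with random 50-d embeddings and POS tags.
rng = np.random.default_rng(0)
va, vb = rng.normal(size=(4, 50)), rng.normal(size=(4, 50))
print(sentence_sim(va, vb, ["N", "V", "N", "ADJ"], ["N", "V", "ADJ", "N"]))
```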
Computers and terminals as an aid to international technology transfer
NASA Technical Reports Server (NTRS)
Sweeney, W. T.
1974-01-01
Technology transfer is becoming more popular and is proving to be an economical method for companies of all sizes to take advantage of the tremendous amount of new technology available from sources all over the world. The introduction of computers and terminals into the international technology transfer process is proving to be a successful way for companies to take part in this beneficial approach to new business opportunities.
Search and retrieval of office files using dBASE 3
NASA Technical Reports Server (NTRS)
Breazeale, W. L.; Talley, C. R.
1986-01-01
Described is a method of automating the office files retrieval process using a commercially available software package (dBASE III). The resulting product is a menu-driven computer program which requires no computer skills to operate. One part of the document is written for the potential user who has minimal computer experience and uses sample menu screens to explain the program, while a second part is oriented towards the computer-literate individual and includes rather detailed descriptions of the methodology and search routines. Although many of the programming techniques are explained, this document is not intended to be a tutorial on dBASE III. It is hoped that the document will serve as a stimulus for other applications of dBASE III.
Computer-Based Instruction in Dietetics Education.
ERIC Educational Resources Information Center
Schroeder, Lois; Kent, Phyllis
1982-01-01
Details the development and system design of a computer-based instruction (CBI) program designed to provide tutorial training in diet modification as part of renal therapy and provides the results of a study that compared the effectiveness of the CBI program with the traditional lecture/laboratory method. (EAO)
Determining casting defects in near-net shape casting aluminum parts by computed tomography
NASA Astrophysics Data System (ADS)
Li, Jiehua; Oberdorfer, Bernd; Habe, Daniel; Schumacher, Peter
2018-03-01
Three types of near-net shape casting aluminum parts were investigated by computed tomography to determine casting defects and evaluate quality. The first, second, and third parts were produced by low-pressure die casting (Al-12Si-0.8Cu-0.5Fe-0.9Mg-0.7Ni-0.2Zn alloy), die casting (A356, Al-7Si-0.3Mg), and semi-solid casting (A356, Al-7Si-0.3Mg), respectively. Unlike die casting (second part), low-pressure die casting (first part) significantly reduced the formation of casting defects (i.e., porosity) due to its smooth filling and solidification under pressure. No significant casting defect was observed in the third part, and this absence of defects indicates that semi-solid casting could produce high-quality near-net shape casting aluminum parts. Moreover, casting defects were mostly distributed along the eutectic grain boundaries. This finding reveals that refinement of eutectic grains is necessary to optimize the distribution of casting defects and reduce their size. This investigation demonstrated that computed tomography is an efficient method to determine casting defects in near-net shape casting aluminum parts.
A hybridized method for computing high-Reynolds-number hypersonic flow about blunt bodies
NASA Technical Reports Server (NTRS)
Weilmuenster, K. J.; Hamilton, H. H., II
1979-01-01
A hybridized method for computing the flow about blunt bodies is presented. In this method the flow field is split into its viscid and inviscid parts. The forebody flow field about a parabolic body is computed. For the viscous solution, the Navier-Stokes equations are solved on orthogonal parabolic coordinates using explicit finite differencing. The inviscid flow is determined by using a Moretti type scheme in which the Euler equations are solved, using explicit finite differences, on a nonorthogonal coordinate system which uses the bow shock as an outer boundary. The two solutions are coupled along a common data line and are marched together in time until a converged solution is obtained. Computed results, when compared with experimental and analytical results, indicate the method works well over a wide range of Reynolds numbers and Mach numbers.
Solar Power Tower Integrated Layout and Optimization Tool
The tool uses methods that reduce the overall computational burden while generating accurate and precise results. These methods have been developed as part of the U.S. Department of Energy (DOE) SunShot Initiative research.
Structural neuroimaging in neuropsychology: History and contemporary applications.
Bigler, Erin D
2017-11-01
Neuropsychology's origins began long before there were any in vivo methods to image the brain. That changed with the advent of computed tomography in the 1970s and magnetic resonance imaging in the early 1980s. Now computed tomography and magnetic resonance imaging are routinely a part of neuropsychological investigations, with an increasing number of sophisticated methods for image analysis. This review examines the history of neuroimaging utilization in neuropsychological investigations, highlighting the basic methods that go into image quantification and the various metrics that can be derived. Neuroimaging methods, and their limitations for identifying what constitutes a lesion, are discussed. Likewise, various demographic and developmental factors that influence quantification of brain structure are reviewed. Neuroimaging is an integral part of 21st Century neuropsychology, and its importance to advancing neuropsychology is emphasized. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Population Education Accessions Lists, July-December 1986.
ERIC Educational Resources Information Center
United Nations Educational, Scientific, and Cultural Organization, Bangkok (Thailand). Regional Office for Education in Asia and the Pacific.
Part I of this resource guide contains listings of instructional materials, computer-assisted instructions, classroom activities and teaching methods. Part II deals with the knowledge base of population education. These publications are divided into 11 topics including: (1) demography; (2) documentation; (3) education (including environmental,…
NASA Astrophysics Data System (ADS)
Xie, Lizhe; Hu, Yining; Chen, Yang; Shi, Luyao
2015-03-01
Projection and back-projection are the most computationally consuming parts of Computed Tomography (CT) reconstruction. Parallelization strategies using GPU computing techniques have been introduced. In this paper we present a new parallelization scheme for both projection and back-projection. The proposed method is based on the CUDA technology provided by NVIDIA Corporation. Instead of building a complex model, we aimed at optimizing the existing algorithm and making it suitable for CUDA implementation so as to gain fast computation speed. Besides making use of texture fetching operations, which provide faster interpolation, we fixed the number of samples in the projection computation to ensure the synchronization of blocks and threads, thus preventing the latency caused by inconsistent computational complexity. Experimental results demonstrate the computational efficiency and imaging quality of the proposed method.
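The fixed-sampling idea can be sketched in NumPy as below; on the GPU each (angle, detector) pair would map to one CUDA thread, with texture fetches replacing the interpolation gather (a simplified parallel-beam stand-in, not the authors' code):

```python
import numpy as np

def forward_project(img, angles, n_samples=256):
    """Parallel-beam forward projection with a fixed number of samples
    per ray, so every ray (thread) does equal work."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    ts = np.linspace(-c, c, n)          # detector positions
    ss = np.linspace(-c, c, n_samples)  # fixed sampling along each ray
    sino = np.zeros((len(angles), n))
    for i, th in enumerate(angles):
        ct, st = np.cos(th), np.sin(th)
        for j, t in enumerate(ts):
            x = t * ct - ss * st + c    # sample coordinates
            y = t * st + ss * ct + c
            xi = np.clip(x.astype(int), 0, n - 1)
            yi = np.clip(y.astype(int), 0, n - 1)
            sino[i, j] = img[yi, xi].sum()  # nearest-neighbour gather
    return sino

phantom = np.zeros((64, 64)); phantom[24:40, 24:40] = 1.0
print(forward_project(phantom, np.linspace(0, np.pi, 90, endpoint=False)).shape)
```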
NASA Astrophysics Data System (ADS)
Oda, Masahiro; Kitasaka, Takayuki; Furukawa, Kazuhiro; Watanabe, Osamu; Ando, Takafumi; Goto, Hidemi; Mori, Kensaku
2011-03-01
The purpose of this paper is to present a new method to detect ulcers, one of the symptoms of Crohn's disease, from CT images. Crohn's disease is an inflammatory disease of the digestive tract that commonly affects the small intestine. An optical or a capsule endoscope is used for small intestine examinations; however, these endoscopes cannot pass through intestinal stenosis in some cases. A CT image based diagnosis allows a physician to observe the whole intestine even if intestinal stenosis exists. However, because of the complicated shapes of the small and large intestines, understanding the shapes of the intestines and the positions of lesions is difficult in CT image based diagnosis. A computer-aided diagnosis system for Crohn's disease with automated lesion detection is required for efficient diagnosis. We propose an automated method to detect ulcers from CT images. Longitudinal ulcers roughen the surface of the small and large intestinal walls; the rough surface consists of a combination of convex and concave parts on the intestinal wall. We detect convex and concave parts on the intestinal wall with blob and inverse-blob structure enhancement filters. Many convex and concave parts concentrate on the roughened regions. We introduce a roughness value to differentiate convex and concave parts concentrated on the roughened regions from others on the intestinal wall. The roughness value effectively reduces false positives in ulcer detection. Experimental results showed that the proposed method can detect convex and concave parts on the ulcers.
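A toy 2-D version of the roughness idea, using scikit-image's Laplacian-of-Gaussian blob detector as a stand-in for the paper's blob/inverse-blob filters (assumes skimage is available; data are synthetic):

```python
import numpy as np
from skimage.feature import blob_log

def roughness_map(wall_img, window=16):
    """Count bright (convex) and dark (concave) blobs per local window;
    a high count marks the roughened regions typical of ulcers."""
    bright = blob_log(wall_img, max_sigma=4, threshold=0.1)
    dark = blob_log(wall_img.max() - wall_img, max_sigma=4, threshold=0.1)
    rough = np.zeros([s // window for s in wall_img.shape])
    for y, x, _ in np.vstack([bright, dark]):
        rough[int(y) // window, int(x) // window] += 1
    return rough  # roughness value per window

# Hypothetical 2-D unfolded wall image with bumps clustered in one corner.
rng = np.random.default_rng(1)
img = rng.random((64, 64)) * 0.1
for y, x in rng.integers(8, 24, size=(20, 2)):
    img[y, x] = 1.0
print(roughness_map(img))
```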
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mota, Alejandro; Tezaur, Irina; Alleman, Coleman
2017-12-06
This corrigendum clarifies the conditions under which the proof of convergence of Theorem 1 from the original article is valid. We erroneously stated, as one of the conditions for the Schwarz alternating method to converge, that the energy functional be strictly convex for the solid mechanics problem. We have relaxed that assumption and changed the corresponding parts of the text. None of the results or other parts of the original article are affected.
Modeling Structure-Function Relationships in Synthetic DNA Sequences using Attribute Grammars
Cai, Yizhi; Lux, Matthew W.; Adam, Laura; Peccoud, Jean
2009-01-01
Recognizing that certain biological functions can be associated with specific DNA sequences has led various fields of biology to adopt the notion of the genetic part. This concept provides a finer level of granularity than the traditional notion of the gene. However, a method of formally relating a set of parts to a function has not yet emerged. Synthetic biology both demands such a formalism and provides an ideal setting for testing hypotheses about relationships between DNA sequences and phenotypes beyond the gene-centric methods used in genetics. Attribute grammars are used in computer science to translate the text of a program source code into the computational operations it represents. By associating attributes with parts, modifying the value of these attributes using rules that describe the structure of DNA sequences, and using a multi-pass compilation process, it is possible to translate DNA sequences into molecular interaction network models. These capabilities are illustrated by simple example grammars expressing how gene expression rates depend upon single or multiple parts. The translation process is validated by systematically generating, translating, and simulating the phenotype of all the sequences in the design space generated by a small library of genetic parts. Attribute grammars represent a flexible framework connecting parts with models of biological function. They will be instrumental for building mathematical models of libraries of genetic constructs synthesized to characterize the function of genetic parts. This formalism is also expected to provide a solid foundation for the development of computer assisted design applications for synthetic biology. PMID:19816554
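A minimal sketch of the attribute-propagation idea, with hypothetical part attributes and a toy rule (promoter strength times RBS efficiency), assuming nothing about the authors' grammar definitions:

```python
# Each genetic part carries attributes; a scan over the sequence plays the
# role of grammar rules propagating attribute values along the construct.
class Part:
    def __init__(self, kind, **attrs):
        self.kind, self.attrs = kind, attrs

def expression_rate(sequence):
    """Synthesized attribute: promoter strength x RBS efficiency for
    each coding sequence found while scanning the part sequence."""
    rates, strength, efficiency = {}, 0.0, 0.0
    for part in sequence:
        if part.kind == "promoter":
            strength = part.attrs["strength"]
        elif part.kind == "rbs":
            efficiency = part.attrs["efficiency"]
        elif part.kind == "cds":
            rates[part.attrs["name"]] = strength * efficiency
        elif part.kind == "terminator":
            strength = efficiency = 0.0   # reset for the next unit
    return rates

circuit = [Part("promoter", strength=2.5), Part("rbs", efficiency=0.8),
           Part("cds", name="gfp"), Part("terminator")]
print(expression_rate(circuit))   # {'gfp': 2.0}
```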
Interactive computer aided technology, evolution in the design/manufacturing process
NASA Technical Reports Server (NTRS)
English, C. H.
1975-01-01
A powerful computer-operated three-dimensional graphic system and associated auxiliary computer equipment used in advanced design, production design, and manufacturing are described. This system has made these activities more productive than older and more conventional methods of designing and building aerospace vehicles. With this graphic system, designers are able to define parts using a wide variety of geometric entities, and to define parts as fully surfaced 3-dimensional models as well as "wire-frame" models. Once a part is geometrically defined, the designer can take section cuts of the surfaced model and automatically determine all of the section properties of the planar cut, light-pen detect all of the surface patches, and automatically determine the volume and weight of the part. Further, designs are defined mathematically at a degree of accuracy never before achievable.
James, Andrew I.; Jawitz, James W.; Munoz-Carpena, Rafael
2009-01-01
A model to simulate transport of materials in surface water and ground water has been developed to numerically approximate solutions to the advection-dispersion equation. This model, known as the Transport and Reaction Simulation Engine (TaRSE), uses an algorithm that incorporates a time-splitting technique where the advective part of the equation is solved separately from the dispersive part. An explicit finite-volume Godunov method is used to approximate the advective part, while a mixed-finite element technique is used to approximate the dispersive part. The dispersive part uses an implicit discretization, which allows it to run stably with a larger time step than the explicit advective step. The potential exists to develop algorithms that run several advective steps, and then one dispersive step that encompasses the time interval of the advective steps. Because the dispersive step is computationally most expensive, schemes can be implemented that are more computationally efficient than non-time-split algorithms. This technique enables scientists to solve problems with high grid Peclet numbers, such as transport problems with sharp solute fronts, without spurious oscillations in the numerical approximation to the solution and with virtually no artificial diffusion.
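A one-dimensional sketch of the time-splitting scheme, with explicit upwind advection sub-steps followed by one implicit dispersion step over the same span (periodic boundaries for brevity; a simplification of TaRSE's finite-volume/mixed-finite-element discretization):

```python
import numpy as np
from scipy.linalg import solve_banded

def advect_diffuse(c, u, D, dx, dt, n_adv_per_disp=4):
    """One time-split step: several explicit upwind advection sub-steps,
    then one implicit (backward Euler) dispersion step."""
    for _ in range(n_adv_per_disp):                   # explicit advection
        c = c - u * (dt / dx) * (c - np.roll(c, 1))   # first-order upwind
    r = D * (n_adv_per_disp * dt) / dx**2             # implicit dispersion
    n = len(c)
    ab = np.zeros((3, n))
    ab[0, 1:], ab[1, :], ab[2, :-1] = -r, 1 + 2 * r, -r  # tridiagonal bands
    return solve_banded((1, 1), ab, c)

c = np.zeros(200); c[40:60] = 1.0                     # sharp solute front
for _ in range(50):
    c = advect_diffuse(c, u=1.0, D=0.01, dx=1.0, dt=0.2)
print(round(c.sum(), 3))
```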
NASA Astrophysics Data System (ADS)
Whitford, Dennis J.
2002-05-01
Ocean waves are the most recognized phenomena in oceanography. Unfortunately, undergraduate study of ocean wave dynamics and forecasting involves mathematics and physics and therefore can pose difficulties for some students because of the subject's interrelated dependence on time and space. Verbal descriptions and two-dimensional illustrations are often insufficient for student comprehension. Computer-generated visualization and animation offer a visually intuitive and pedagogically sound medium to present geoscience, yet there are very few oceanographic examples. A two-part article series is offered to explain ocean wave forecasting using computer-generated visualization and animation. This paper, Part 1, addresses forecasting of sea wave conditions and serves as the basis for the more difficult topic of swell wave forecasting addressed in Part 2. Computer-aided visualization and animation, accompanied by oral explanation, are a welcome pedagogical supplement to more traditional methods of instruction. In this article, several MATLAB® software programs have been written to visualize and animate the development and comparison of wave spectra, wave interference, and forecasting of sea conditions. These programs also set the stage for the more advanced and difficult animation topics in Part 2. The programs are user-friendly, interactive, easy to modify, and developed as instructional tools. By using these software programs, teachers can enhance their instruction of these topics with colorful visualizations and animation without requiring an extensive background in computer programming.
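For instance, a sea spectrum of the kind such programs animate can be computed from the classic Pierson-Moskowitz form (a standard textbook formula, shown here in Python rather than the article's MATLAB):

```python
import numpy as np

def pierson_moskowitz(f, wind_speed):
    """Pierson-Moskowitz spectrum S(f) [m^2/Hz] for a fully developed sea;
    wind_speed is the classic U19.5 in m/s, alpha = 8.1e-3, beta = 0.74."""
    g, alpha = 9.81, 8.1e-3
    w = 2 * np.pi * f
    s_w = (alpha * g**2 / w**5) * np.exp(-0.74 * (g / (wind_speed * w))**4)
    return s_w * 2 * np.pi   # convert S(omega) to S(f)

f = np.linspace(0.03, 0.3, 100)              # wave frequencies in Hz
S = pierson_moskowitz(f, wind_speed=15.0)
print("peak frequency (Hz):", round(f[S.argmax()], 3))
```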
Introducing the Boundary Element Method with MATLAB
ERIC Educational Resources Information Center
Ang, Keng-Cheng
2008-01-01
The boundary element method provides an excellent platform for learning and teaching a computational method for solving problems in physical and engineering science. However, it is often left out in many undergraduate courses as its implementation is deemed to be difficult. This is partly due to the perception that coding the method requires…
Analyzing the security of an existing computer system
NASA Technical Reports Server (NTRS)
Bishop, M.
1986-01-01
Most work concerning secure computer systems has dealt with the design, verification, and implementation of provably secure computer systems, or has explored ways of making existing computer systems more secure. The problem of locating security holes in existing systems has received considerably less attention; methods generally rely on thought experiments as a critical step in the procedure. The difficulty is that such experiments require that a large amount of information be available in a format that makes correlating the details of various programs straightforward. This paper describes a method of providing such a basis for the thought experiment by writing a special manual for parts of the operating system, system programs, and library subroutines.
Manufacturing Methods and Technology Project Summary Reports
1985-06-01
Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM) Process for the Production of Cold Forged Gears; Project 483 6121 - Robotic Welding and ... Caliber Projectile Bodies; Project 682 8370 - Automatic Inspection and Process Control of Weapons Parts Manufacturing; METALS: Project 181 7285 - Cast ... designed for use on each project. Experience suggested that a general-purpose computer interface might be designed that could be used on any project.
Study of effects of injector geometry on fuel-air mixing and combustion
NASA Technical Reports Server (NTRS)
Bangert, L. H.; Roach, R. L.
1977-01-01
An implicit finite-difference method has been developed for computing the flow in the near field of a fuel injector as part of a broader study of the effects of fuel injector geometry on fuel-air mixing and combustion. Detailed numerical results have been obtained for cases of laminar and turbulent flow without base injection, corresponding to the supersonic base flow problem. These numerical results indicated that the method is stable and convergent, and that significant savings in computer time can be achieved, compared with explicit methods.
Finding the Hook: Computer Science Education in Elementary Contexts
ERIC Educational Resources Information Center
Ozturk, Zehra; Dooley, Caitlin McMunn; Welch, Meghan
2018-01-01
The purpose of this study was to investigate how elementary teachers with little knowledge of computer science (CS) and project-based learning (PBL) experienced integrating CS through PBL as a part of a standards-based elementary curriculum in Grades 3-5. The researchers used qualitative constant comparison methods on field notes and reflections…
IDEA Technical Report No. 2. Description of Data Base, 1976-77.
ERIC Educational Resources Information Center
Cashin, William E.; Slawson, Hugh M.
The data and computational procedures used by the IDEA system at Kansas State University (during the 1976-77 academic year) to interpret ratings of teacher performance are described in this technical report. The computations for each of the seven parts (evaluation, course description, students' self ratings, methods, additional questions,…
IDEA Technical Report No. 3. Description of Data Base, 1977-78.
ERIC Educational Resources Information Center
Cashin, William E.; Slawson, Hugh M.
The data and computational procedures used by the IDEA System during the 1977-78 academic year at Kansas State University to interpret ratings of teacher performance are described in this technical report. The computations for each of the seven parts (evaluation, course description, students' self-ratings, methods, additional questions, diagnostic…
Manual of phosphoric acid fuel cell power plant cost model and computer program
NASA Technical Reports Server (NTRS)
Lu, C. Y.; Alkasab, K. A.
1984-01-01
Cost analysis of a phosphoric acid fuel cell power plant includes two parts: a method for estimation of system capital costs, and an economic analysis which determines the levelized annual cost of operating the system used in the capital cost estimation. A FORTRAN computer program has been developed for this cost analysis.
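The levelized-cost part can be illustrated with the standard capital-recovery form (a textbook sketch with hypothetical numbers, not the report's exact model):

```python
def levelized_annual_cost(capital_cost, interest, years, annual_om):
    """Levelized annual cost = capital recovery + annual O&M, using the
    capital recovery factor CRF = i(1+i)^n / ((1+i)^n - 1)."""
    crf = interest * (1 + interest) ** years / ((1 + interest) ** years - 1)
    return capital_cost * crf + annual_om

# Hypothetical plant: $2.5M capital, 8% interest, 20-year life, $90k/yr O&M.
print(round(levelized_annual_cost(2.5e6, 0.08, 20, 9.0e4)))  # ~ $344,600/yr
```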
Richardson, D
1997-12-01
This study compared student perceptions and learning outcomes of computer-assisted instruction against those of traditional didactic lectures. Components of Quantitative Circulatory Physiology (Biological Simulators) and Mechanical Properties of Active Muscle (Trinity Software) were used to teach regulation of tissue blood flow and muscle mechanics, respectively, in the course Medical Physiology. These topics were each taught, in part, by 1) standard didactic lectures, 2) computer-assisted lectures, and 3) computer laboratory assignment. Subjective evaluation was derived from a questionnaire assessing student opinions of the effectiveness of each method. Objective evaluation consisted of comparing scores on examination questions generated from each method. On a 1-10 scale, effectiveness ratings were higher (P < 0.0001) for the didactic lectures (7.7) compared with either computer-assisted lecture (3.8) or computer laboratory (4.2) methods. A follow-up discussion with representatives from the class indicated that students did not perceive computer instruction as being time effective. However, examination scores from computer laboratory questions (94.3%) were significantly higher compared with ones from either computer-assisted (89.9%; P < 0.025) or didactic (86.6%; P < 0.001) lectures. Thus computer laboratory instruction enhanced learning outcomes in medical physiology despite student perceptions to the contrary.
Code of Federal Regulations, 2014 CFR
2014-07-01
... rate and the sampling time. The concentration of SO2 in the ambient air is computed and expressed in... tetracetic acid disodium salt (EDTA) and phosphoric acid,(10, 12) and ozone by time delay.(10) Up to 60 µg Fe... requirements of section 7 of 40 CFR part 58, appendix E (Teflon ® or glass with residence time less than 20 sec...
Code of Federal Regulations, 2010 CFR
2010-07-01
... sampling time. The concentration of SO2 in the ambient air is computed and expressed in micrograms per... tetracetic acid disodium salt (EDTA) and phosphoric acid,(10, 12) and ozone by time delay.(10) Up to 60 µg Fe... requirements of section 7 of 40 CFR part 58, appendix E (Teflon ® or glass with residence time less than 20 sec...
Code of Federal Regulations, 2012 CFR
2012-07-01
... rate and the sampling time. The concentration of SO2 in the ambient air is computed and expressed in... tetracetic acid disodium salt (EDTA) and phosphoric acid,(10, 12) and ozone by time delay.(10) Up to 60 µg Fe... requirements of section 7 of 40 CFR part 58, appendix E (Teflon ® or glass with residence time less than 20 sec...
Code of Federal Regulations, 2013 CFR
2013-07-01
... rate and the sampling time. The concentration of SO2 in the ambient air is computed and expressed in... tetracetic acid disodium salt (EDTA) and phosphoric acid,(10, 12) and ozone by time delay.(10) Up to 60 µg Fe... requirements of section 7 of 40 CFR part 58, appendix E (Teflon ® or glass with residence time less than 20 sec...
Code of Federal Regulations, 2011 CFR
2011-07-01
... rate and the sampling time. The concentration of SO2 in the ambient air is computed and expressed in... tetracetic acid disodium salt (EDTA) and phosphoric acid,(10, 12) and ozone by time delay.(10) Up to 60 µg Fe... requirements of section 7 of 40 CFR part 58, appendix E (Teflon ® or glass with residence time less than 20 sec...
NASA Technical Reports Server (NTRS)
Middleton, W. D.; Lundry, J. L.
1975-01-01
An integrated system of computer programs has been developed for the design and analysis of supersonic configurations. The system uses linearized theory methods for the calculation of surface pressures and supersonic area rule concepts in combination with linearized theory for calculation of aerodynamic force coefficients. Interactive graphics are optional at the user's request. This part presents a general description of the system and describes the theoretical methods used.
Surface Segregation Energies of BCC Binaries from Ab Initio and Quantum Approximate Calculations
NASA Technical Reports Server (NTRS)
Good, Brian S.
2003-01-01
We compare dilute-limit segregation energies for selected BCC transition metal binaries computed using ab initio and quantum approximate energy methods. Ab initio calculations are carried out using the CASTEP plane-wave pseudopotential computer code, while quantum approximate results are computed using the Bozzolo-Ferrante-Smith (BFS) method with the most recent parameterization. Quantum approximate segregation energies are computed with and without atomistic relaxation. The ab initio calculations are performed without relaxation for the most part, but predicted relaxations from quantum approximate calculations are used in selected cases to compute approximate relaxed ab initio segregation energies. Results are discussed within the context of segregation models driven by strain and bond-breaking effects. We compare our results with other quantum approximate and ab initio theoretical work, and available experimental results.
Vlachopoulos, Lazaros; Lüthi, Marcel; Carrillo, Fabio; Gerber, Christian; Székely, Gábor; Fürnstahl, Philipp
2018-04-18
In computer-assisted reconstructive surgeries, the contralateral anatomy is established as the best available reconstruction template. However, existing intra-individual bilateral differences or a pathological, contralateral humerus may limit the applicability of the method. The aim of the study was to evaluate whether a statistical shape model (SSM) has the potential to predict accurately the pretraumatic anatomy of the humerus from the posttraumatic condition. Three-dimensional (3D) triangular surface models were extracted from the computed tomographic data of 100 paired cadaveric humeri without a pathological condition. An SSM was constructed, encoding the characteristic shape variations among the individuals. To predict the patient-specific anatomy of the proximal (or distal) part of the humerus with the SSM, we generated segments of the humerus of predefined length excluding the part to predict. The proximal and distal humeral prediction (p-HP and d-HP) errors, defined as the deviation of the predicted (bone) model from the original (bone) model, were evaluated. For comparison with the state-of-the-art technique, i.e., the contralateral registration method, we used the same segments of the humerus to evaluate whether the SSM or the contralateral anatomy yields a more accurate reconstruction template. The p-HP error (mean and standard deviation, 3.8° ± 1.9°) using 85% of the distal end of the humerus to predict the proximal humeral anatomy was significantly smaller (p = 0.001) compared with the contralateral registration method. The difference between the d-HP error (mean, 5.5° ± 2.9°), using 85% of the proximal part of the humerus to predict the distal humeral anatomy, and the contralateral registration method was not significant (p = 0.61). The restoration of the humeral length was not significantly different between the SSM and the contralateral registration method. SSMs accurately predict the patient-specific anatomy of the proximal and distal aspects of the humerus. The prediction errors of the SSM depend on the size of the healthy part of the humerus. The prediction of the patient-specific anatomy of the humerus is of fundamental importance for computer-assisted reconstructive surgeries.
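The prediction step of an SSM can be sketched with plain PCA and a least-squares fit of mode coefficients to the observed (healthy) part (synthetic data; a simplification of the authors' statistical shape model):

```python
import numpy as np

def fit_ssm(training_shapes):
    """PCA shape model from shapes stacked as rows (n_shapes, 3*n_points)."""
    mean = training_shapes.mean(axis=0)
    U, s, Vt = np.linalg.svd(training_shapes - mean, full_matrices=False)
    return mean, Vt                       # modes as rows of Vt

def predict_missing(observed, obs_idx, mean, Vt, n_modes=10):
    """Least-squares fit of mode coefficients to the observed coordinates,
    then reconstruction of the full shape."""
    A = Vt[:n_modes, obs_idx].T                       # observed rows of modes
    b, *_ = np.linalg.lstsq(A, observed - mean[obs_idx], rcond=None)
    return mean + b @ Vt[:n_modes]                    # full predicted shape

# Hypothetical data: 100 training shapes; predict the last 30% of coordinates.
rng = np.random.default_rng(2)
shapes = rng.normal(size=(100, 300)) @ rng.normal(size=(300, 300)) * 0.01
mean, Vt = fit_ssm(shapes)
obs_idx = np.arange(210)                              # "healthy" 70%
test = shapes[0]
pred = predict_missing(test[obs_idx], obs_idx, mean, Vt)
print("RMS error on missing part:",
      np.sqrt(((pred[210:] - test[210:]) ** 2).mean()))
```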
Computational Predictions of the Performance Wright 'Bent End' Propellers
NASA Technical Reports Server (NTRS)
Wang, Xiang-Yu; Ash, Robert L.; Bobbitt, Percy J.; Prior, Edwin (Technical Monitor)
2002-01-01
Computational analyses of two 1911 Wright brothers 'Bent End' wooden propeller reproductions have been performed and compared with experimental test results from the Langley Full Scale Wind Tunnel. The purpose of the analysis was to check the consistency of the experimental results and to validate the reliability of the tests. This report is one part of a project on the propeller performance of the Wright 'Bent End' propellers, intended to document the Wright brothers' pioneering propeller design contributions. Two computer codes were used in the computational predictions. The FLO-MG code is a CFD (Computational Fluid Dynamics) code based on the Navier-Stokes equations; it is mainly used to compute the lift and drag coefficients at specified angles of attack at different radii. Those calculated data are intermediate results of the computation and a part of the necessary input for the Propeller Design Analysis Code (based on the Adkins and Liebeck method), which is a propeller design code used to compute the propeller thrust coefficient, the propeller power coefficient, and the propeller propulsive efficiency.
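The quantities the design code reports follow the standard nondimensional definitions, sketched below with a hypothetical operating point loosely sized like a Wright propeller:

```python
def propeller_coefficients(thrust, power, rho, rps, diameter, airspeed):
    """Standard nondimensional propeller coefficients and efficiency."""
    ct = thrust / (rho * rps**2 * diameter**4)   # thrust coefficient
    cp = power / (rho * rps**3 * diameter**5)    # power coefficient
    j = airspeed / (rps * diameter)              # advance ratio
    return ct, cp, j * ct / cp                   # eta = J*CT/CP

# Hypothetical operating point (SI units): 2.6 m propeller at 5.8 rev/s.
print(propeller_coefficients(thrust=250.0, power=6.0e3, rho=1.225,
                             rps=5.8, diameter=2.6, airspeed=12.0))
```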
DOE Office of Scientific and Technical Information (OSTI.GOV)
I. W. Ginsberg
Multiresolutional decompositions known as spectral fingerprints are often used to extract spectral features from multispectral/hyperspectral data. In this study, the authors investigate the use of wavelet-based algorithms for generating spectral fingerprints. The wavelet-based algorithms are compared to the currently used method, traditional convolution with first-derivative Gaussian filters. The comparison analysis consists of two parts: (a) the computational expense of the new method is compared with the computational costs of the current method, and (b) the outputs of the wavelet-based methods are compared with those of the current method to determine any practical differences in the resulting spectral fingerprints. The results show that the wavelet-based algorithms can greatly reduce the computational expense of generating spectral fingerprints, while practically no differences exist in the resulting fingerprints. The analysis is conducted on a database of hyperspectral signatures, namely, Hyperspectral Digital Image Collection Experiment (HYDICE) signatures. The reduction in computational expense is by a factor of about 30, and the average Euclidean distance between resulting fingerprints is on the order of 0.02.
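The two fingerprinting routes can be sketched side by side, assuming SciPy and the PyWavelets package (pywt) are available; the filter scales and wavelet choice here are illustrative, not the study's settings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
import pywt

rng = np.random.default_rng(3)
spectrum = np.cumsum(rng.normal(size=1024))  # stand-in hyperspectral signature

# Current approach: first-derivative-of-Gaussian filter bank over scales.
scales = [2, 4, 8, 16]
fp_gauss = np.stack([gaussian_filter1d(spectrum, s, order=1) for s in scales])

# Wavelet alternative: multiresolution detail coefficients from one transform.
fp_wave = pywt.wavedec(spectrum, "db2", level=4)[1:]   # detail bands only

print(fp_gauss.shape, [len(d) for d in fp_wave])
```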
NASA Technical Reports Server (NTRS)
Pickett, G. F.; Wells, R. A.; Love, R. A.
1977-01-01
A computer user's manual describing the operation and the essential features of the microphone location program is presented. The Microphone Location Program determines microphone locations that ensure accurate and stable results from the equation system used to calculate modal structures. As part of the computational procedure for the Microphone Location Program, a first-order measure of the stability of the equation system was indicated by a matrix 'conditioning' number.
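The "conditioning" measure can be illustrated with NumPy's condition number on a hypothetical microphone-array matrix; nearly coincident microphones make the equation system ill-conditioned:

```python
import numpy as np

# The stability of the modal-decomposition equation system can be gauged
# by the condition number of its coefficient matrix: large values mean
# small measurement errors are amplified in the computed mode amplitudes.
rng = np.random.default_rng(4)
good = rng.normal(size=(8, 8))                 # well-spread microphone set
bad = good.copy(); bad[7] = good[6] + 1e-6     # two nearly coincident mics
print(np.linalg.cond(good), np.linalg.cond(bad))
```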
Computing thermal Wigner densities with the phase integration method.
Beutier, J; Borgis, D; Vuilleumier, R; Bonella, S
2014-08-28
We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.
Design and Analysis Tool for External-Compression Supersonic Inlets
NASA Technical Reports Server (NTRS)
Slater, John W.
2012-01-01
A computational tool named SUPIN has been developed to design and analyze external-compression supersonic inlets for aircraft at cruise speeds from Mach 1.6 to 2.0. The inlet types available include the axisymmetric outward-turning, two-dimensional single-duct, two-dimensional bifurcated-duct, and streamline-traced Busemann inlets. The aerodynamic performance is characterized by the flow rates, total pressure recovery, and drag. The inlet flowfield is divided into parts to provide a framework for the geometry and aerodynamic modeling and the parts are defined in terms of geometric factors. The low-fidelity aerodynamic analysis and design methods are based on analytic, empirical, and numerical methods which provide for quick analysis. SUPIN provides inlet geometry in the form of coordinates and surface grids useable by grid generation methods for higher-fidelity computational fluid dynamics (CFD) analysis. SUPIN is demonstrated through a series of design studies and CFD analyses were performed to verify some of the analysis results.
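For a quick sense of the numbers involved, a commonly used empirical recovery schedule (MIL-E-5008B; not SUPIN's own model) gives total-pressure recovery versus cruise Mach:

```python
def mil_spec_recovery(mach):
    """MIL-E-5008B ram-recovery schedule often used for quick estimates
    of supersonic inlet total-pressure recovery."""
    return 1.0 if mach <= 1.0 else 1.0 - 0.075 * (mach - 1.0) ** 1.35

for m in (1.6, 1.8, 2.0):   # the cruise range covered by SUPIN
    print(m, round(mil_spec_recovery(m), 4))
```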
Behaviour of Frictional Joints in Steel Arch Yielding Supports
NASA Astrophysics Data System (ADS)
Horyl, Petr; Šňupárek, Richard; Maršálek, Pavel
2014-10-01
The loading capacity and the ability of steel arch supports to accept deformations from the surrounding rock mass are influenced significantly by the function of the connections and, in particular, the tightening of the bolts. This contribution deals with computer modelling of the yielding bolt connections for different torques to determine the load-bearing capacity of the connections. Another parameter that affects the loading capacity significantly is the friction coefficient of the contacts between the elements of the joints. The authors investigated both the behaviour and conditions of the individual parts for three values of tightening moment and the relation between the value of screw tightening and load-bearing capacity of the connections for different friction coefficients. ANSYS software and the finite element method were used for the computer modelling. The solution is nonlinear because of the bi-linear material properties of steel and the large deformations. The geometry of the computer model was created from designs of all four parts of the structure. The calculation also identifies the weakest part of the joint's structure based on stress analysis. The load was divided into two loading steps: the pre-tensioning of connecting bolts and the deformation loading corresponding to 50-mm slip of one support. The full Newton-Raphson method was chosen for the solution. The calculations were carried out on a computer at the Supercomputing Centre VSB-Technical University of Ostrava.
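The torque-to-capacity chain can be sketched with the common torque-tension relation T = K d F and a Coulomb slip limit (textbook formulas with hypothetical joint values, not the paper's finite element model):

```python
def joint_slip_capacity(torque_nm, bolt_dia_m, nut_factor, mu, n_bolts):
    """Preload from the torque-tension relation T = K*d*F, then the
    frictional slip capacity of the clamped joint."""
    preload = torque_nm / (nut_factor * bolt_dia_m)   # per-bolt clamp force
    return mu * preload * n_bolts                     # force to start slipping

# Hypothetical yielding-clamp joint: M24 bolts, K = 0.2, mu = 0.3, two bolts.
for torque in (150.0, 250.0, 350.0):                  # three tightening moments
    print(torque, round(joint_slip_capacity(torque, 0.024, 0.2, 0.3, 2)))
```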
Analysis of New Composite Architectures
NASA Technical Reports Server (NTRS)
Whitcomb, John D.
1996-01-01
Efficient and accurate specialty finite elements methods to analyze textile composites were developed and are described. Textile composites present unique challenges to the analyst because of the large, complex 'microstructure'. The geometry of the microstructure is difficult to model and it introduces unusual free surface effects. The size of the microstructure complicates the use of traditional homogenization methods. The methods developed constitute considerable progress in addressing the modeling difficulties. The details of the methods and attended results obtained therefrom, are described in the various chapters included in Part 1 of the report. Specific conclusions and computer codes generated are included in Part 2 of the report.
Parallel solution of the symmetric tridiagonal eigenproblem. Research report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jessup, E.R.
1989-10-01
This thesis discusses methods for computing all eigenvalues and eigenvectors of a symmetric tridiagonal matrix on a distributed-memory Multiple Instruction, Multiple Data (MIMD) multiprocessor. Only those techniques having the potential for both high numerical accuracy and significant large-grained parallelism are investigated. These include the QL method, Cuppen's divide and conquer method based on rank-one updating to compute both eigenvalues and eigenvectors, bisection to determine eigenvalues, and inverse iteration to compute eigenvectors. To begin, the methods are compared with respect to computation time, communication time, parallel speedup, and accuracy. Experiments on an iPSC hypercube multiprocessor reveal that Cuppen's method is the most accurate approach, but bisection with inverse iteration is the fastest and most parallel. Because the accuracy of the latter combination is determined by the quality of the computed eigenvectors, the factors influencing the accuracy of inverse iteration are examined. This includes, in part, statistical analysis of the effect of a starting vector with random components. These results are used to develop an implementation of inverse iteration producing eigenvectors with lower residual error and better orthogonality than those generated by the EISPACK routine TINVIT. The thesis concludes with adaptations of methods for the symmetric tridiagonal eigenproblem to the related problem of computing the singular value decomposition (SVD) of a bidiagonal matrix.
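Today the same eigenproblem is exposed directly in SciPy; a minimal example on the 1-D Laplacian test matrix (illustrative, unrelated to the thesis code):

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# A symmetric tridiagonal matrix is defined by its diagonal d and
# off-diagonal e; eigh_tridiagonal wraps LAPACK routines from the same
# family of methods compared in the thesis.
n = 1000
d = 2.0 * np.ones(n)
e = -1.0 * np.ones(n - 1)                 # discrete 1-D Laplacian
vals, vecs = eigh_tridiagonal(d, e)
print(vals[0], vals[-1])                  # extreme eigenvalues
print(np.abs(vecs.T @ vecs - np.eye(n)).max())  # orthogonality check
```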
Fan Flutter Computations Using the Harmonic Balance Method
NASA Technical Reports Server (NTRS)
Bakhle, Milind A.; Thomas, Jeffrey P.; Reddy, T.S.R.
2009-01-01
An experimental forward-swept fan encountered flutter at part-speed conditions during wind tunnel testing. A new propulsion aeroelasticity code, based on a computational fluid dynamics (CFD) approach, was used to model the aeroelastic behavior of this fan. This three-dimensional code models the unsteady flowfield due to blade vibrations using a harmonic balance method to solve the Navier-Stokes equations. This paper describes the flutter calculations and compares the results to experimental measurements and previous results from a time-accurate propulsion aeroelasticity code.
Fekete, Szabolcs; Fekete, Jeno; Molnár, Imre; Ganzler, Katalin
2009-11-06
Many different strategies for reversed-phase high performance liquid chromatographic (RP-HPLC) method development are used today. This paper describes a strategy for the systematic development of ultrahigh-pressure liquid chromatographic (UHPLC or UPLC) methods using 5 cm x 2.1 mm columns packed with sub-2 µm particles and computer simulation (DryLab® package). Data for the accuracy of computer modeling in the Design Space under ultrahigh-pressure conditions are reported, and an acceptable accuracy for the predictions of the computer models is presented. This work illustrates a method development strategy that reduces development time by a factor of 3-5 compared with conventional HPLC method development, and it exhibits parts of the Design Space elaboration as requested by the FDA and ICH Q8R1. Furthermore, this paper demonstrates the accuracy of retention time prediction at elevated pressure (enhanced flow rate) and shows that computer-assisted simulation can be applied with sufficient precision for UHPLC applications (p > 400 bar). Examples of fast and effective method development in pharmaceutical analysis, for both gradient and isocratic separations, are presented.
ERIC Educational Resources Information Center
Thomas, Shailendra Nelle
2010-01-01
Purpose, scope, and method of study: Although computer technology has been a part of the educational community for many years, it is still not used at its optimal capacity (Gosmire & Grady, 2007b; Trotter, 2007). While teachers were identified early as playing important roles in the success of technology implementation, principals were often…
ERIC Educational Resources Information Center
Hussmann, Katja; Grande, Marion; Meffert, Elisabeth; Christoph, Swetlana; Piefke, Martina; Willmes, Klaus; Huber, Walter
2012-01-01
Although generally accepted as an important part of aphasia assessment, detailed analysis of spontaneous speech is rarely carried out in clinical practice mostly due to time limitations. The Aachener Sprachanalyse (ASPA; Aachen Speech Analysis) is a computer-assisted method for the quantitative analysis of German spontaneous speech that allows for…
On One Unusual Method of Computation of Limits of Rational Functions in the Program Mathematica[R]
ERIC Educational Resources Information Center
Hora, Jaroslav; Pech, Pavel
2005-01-01
Computing limits of functions is a traditional part of mathematical analysis which is very difficult for students. Now an algorithm for the elimination of quantifiers in the field of real numbers is implemented in the program Mathematica. This offers a non-traditional view on this classical theme. (Contains 1 table.)
NASA Technical Reports Server (NTRS)
Magnus, Alfred E.; Epton, Michael A.
1981-01-01
An outline of the derivation of the differential equation governing linear subsonic and supersonic potential flow is given. The use of Green's Theorem to obtain an integral equation over the boundary surface is discussed. The engineering techniques incorporated in the PAN AIR (Panel Aerodynamics) program (a discretization method which solves the integral equation for arbitrary first order boundary conditions) are then discussed in detail. Items discussed include the construction of the compressibility transformations, splining techniques, imposition of the boundary conditions, influence coefficient computation (including the concept of the finite part of an integral), computation of pressure coefficients, and computation of forces and moments.
A numerical method for computing unsteady 2-D boundary layer flows
NASA Technical Reports Server (NTRS)
Krainer, Andreas
1988-01-01
A numerical method for computing unsteady two-dimensional boundary layers in incompressible laminar and turbulent flows is described and applied to a single airfoil changing its incidence angle in time. The solution procedure adopts a first order panel method with a simple wake model to solve for the inviscid part of the flow, and an implicit finite difference method for the viscous part of the flow. Both procedures integrate in time in a step-by-step fashion, in the course of which each step involves the solution of the elliptic Laplace equation and the solution of the parabolic boundary layer equations. The Reynolds shear stress term of the boundary layer equations is modeled by an algebraic eddy viscosity closure. The location of transition is predicted by an empirical data correlation originating from Michel. Since transition and turbulence modeling are key factors in the prediction of viscous flows, their accuracy will be of dominant influence to the overall results.
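Michel's correlation is simple enough to state directly; the sketch below uses the commonly quoted form together with a flat-plate estimate of the momentum-thickness Reynolds number (the exact constants in the report may differ):

```python
import numpy as np

def michel_transition(re_x, re_theta):
    """Michel's empirical criterion: transition where the momentum-
    thickness Reynolds number exceeds 1.174*(1 + 22400/Re_x)*Re_x**0.46."""
    return re_theta > 1.174 * (1.0 + 22400.0 / re_x) * re_x**0.46

# Flat-plate laminar estimate Re_theta ~ 0.664*sqrt(Re_x) for illustration.
for re_x in (1e5, 1e6, 5e6):
    re_theta = 0.664 * np.sqrt(re_x)
    print(f"Re_x={re_x:.0e}  transition={michel_transition(re_x, re_theta)}")
```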
NASA Astrophysics Data System (ADS)
MacDonald, Christopher L.; Bhattacharya, Nirupama; Sprouse, Brian P.; Silva, Gabriel A.
2015-09-01
Computing numerical solutions to fractional differential equations can be computationally intensive due to the effect of non-local derivatives in which all previous time points contribute to the current iteration. In general, numerical approaches that depend on truncating part of the system history while efficient, can suffer from high degrees of error and inaccuracy. Here we present an adaptive time step memory method for smooth functions applied to the Grünwald-Letnikov fractional diffusion derivative. This method is computationally efficient and results in smaller errors during numerical simulations. Sampled points along the system's history at progressively longer intervals are assumed to reflect the values of neighboring time points. By including progressively fewer points backward in time, a temporally 'weighted' history is computed that includes contributions from the entire past of the system, maintaining accuracy, but with fewer points actually calculated, greatly improving computational efficiency.
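The underlying Grünwald-Letnikov sum that the adaptive-memory method approximates can be written compactly; the sketch below evaluates the full-history form and checks it against the known half-derivative of f(t) = t:

```python
import numpy as np
from scipy.special import binom

def gl_fractional_derivative(f, alpha, h):
    """Grünwald-Letnikov derivative of order alpha on a uniform grid:
    D^a f(t_n) ~ h**-alpha * sum_k (-1)^k * C(alpha, k) * f(t_{n-k});
    this is the full-history sum the adaptive-memory method approximates."""
    n = len(f)
    w = (-1.0) ** np.arange(n) * binom(alpha, np.arange(n))  # GL weights
    return np.array([w[:k + 1] @ f[k::-1] for k in range(n)]) / h**alpha

t = np.linspace(0, 1, 200)
approx = gl_fractional_derivative(t, 0.5, t[1] - t[0])   # D^0.5 of f(t) = t
exact = 2.0 * np.sqrt(t / np.pi)                         # known closed form
print("max error:", np.abs(approx - exact).max())
```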
Precise satellite orbit determination with particular application to ERS-1
NASA Astrophysics Data System (ADS)
Fernandes, Maria Joana Afonso Pereira
The motivation behind this study is twofold: first, to assess the accuracy of ERS-1 long arc ephemerides using state-of-the-art models; second, to develop improved methods for determining precise ERS-1 orbits using either short or long arc techniques. The SATAN programs, for the computation of satellite orbits using laser data, were used. Several facilities were added to the original programs: the processing of PRARE range and altimeter data, and a number of algorithms that allow more flexible solutions by adjusting additional parameters. The first part of this study, before the launch of ERS-1, was done with SEASAT data. The accuracy of SEASAT orbits computed with PRARE simulated data has been determined. The effects of the temporal distribution of tracking data along the arc, and the extent to which altimetry can replace range data, have been investigated. The second part starts with the computation of ERS-1 long arc solutions using laser data. Some aspects of modelling the two main forces affecting ERS-1's orbit are investigated. With regard to the gravitational forces, the adjustment of a set of geopotential coefficients has been considered. With respect to atmospheric drag, extensive research has been carried out on determining the influence on orbit accuracy of the measurements of solar flux (F10.7 indices) and geomagnetic activity (Kp indices) used by the atmospheric model in the computation of atmospheric density at satellite height. Two new short arc methods have been developed: the Constrained and the Bayesian method. Both methods are dynamic and consist of solving for the 6 osculating elements. Using different techniques, both methods overcome the problem of normal matrix ill-conditioning by constraining the solution. The accuracy and applicability of these methods are discussed and compared with the traditional non-dynamic TAR method.
A Fast Hyperspectral Vector Radiative Transfer Model in UV to IR spectral bands
NASA Astrophysics Data System (ADS)
Ding, J.; Yang, P.; Sun, B.; Kattawar, G. W.; Platnick, S. E.; Meyer, K.; Wang, C.
2016-12-01
We develop a fast hyperspectral vector radiative transfer model covering the spectral range from UV to IR at 5 nm resolution. This model can simulate top-of-the-atmosphere (TOA) diffuse radiance and polarized reflectance by considering gas absorption, Rayleigh scattering, and aerosol and cloud scattering. The absorption component considers several major atmospheric absorbers such as water vapor, CO2, O3, and O2, including both line and continuum absorption. A regression-based method is used to parameterize the layer effective optical thickness for each gas, which substantially increases the computational efficiency for absorption while maintaining high accuracy. This method is over 500 times faster than the existing line-by-line method. The scattering component uses the successive order of scattering (SOS) method. For Rayleigh scattering, convergence is fast due to the small optical thickness of atmospheric gases. For cloud and aerosol layers, a small-angle approximation method is used in the SOS calculations. The scattering process is divided into two parts, a forward part and a diffuse part: scattering within the small-angle range in the forward direction is approximated as forward scattering. A cloud or aerosol layer is divided into thin layers; as a ray propagates through each thin layer, a portion diverges as diffuse radiation, while the remainder continues propagating in the forward direction. The computed diffuse radiance is the sum of all of the diffuse parts. The small-angle approximation makes the SOS calculation converge rapidly even in a thick cloud layer.
Nonlinear dynamics as an engine of computation.
Kia, Behnam; Lindner, John F; Ditto, William L
2017-03-06
Control of chaos teaches that control theory can tame the complex, random-like behaviour of chaotic systems. This alliance between control methods and physics-cybernetical physics-opens the door to many applications, including dynamics-based computing. In this article, we introduce nonlinear dynamics and its rich, sometimes chaotic behaviour as an engine of computation. We review our work that has demonstrated how to compute using nonlinear dynamics. Furthermore, we investigate the interrelationship between invariant measures of a dynamical system and its computing power to strengthen the bridge between physics and computation.This article is part of the themed issue 'Horizons of cybernetical physics'. © 2017 The Author(s).
Domain Decomposition By the Advancing-Partition Method for Parallel Unstructured Grid Generation
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.; Zagaris, George
2009-01-01
A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.
Domain Decomposition By the Advancing-Partition Method
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
2008-01-01
A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.
NASA Technical Reports Server (NTRS)
1991-01-01
The technical effort and computer code enhancements performed during the sixth year of the Probabilistic Structural Analysis Methods program are summarized. Various capabilities are described to probabilistically combine structural response and structural resistance to compute component reliability. A library of structural resistance models is implemented in the Numerical Evaluations of Stochastic Structures Under Stress (NESSUS) code that included fatigue, fracture, creep, multi-factor interaction, and other important effects. In addition, a user interface was developed for user-defined resistance models. An accurate and efficient reliability method was developed and was successfully implemented in the NESSUS code to compute component reliability based on user-selected response and resistance models. A risk module was developed to compute component risk with respect to cost, performance, or user-defined criteria. The new component risk assessment capabilities were validated and demonstrated using several examples. Various supporting methodologies were also developed in support of component risk assessment.
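The core response/resistance idea can be illustrated with a few lines of Monte Carlo; this is a hedged sketch with assumed lognormal distributions and invented parameters, not the NESSUS implementation.

```python
# Component reliability as P(resistance > response), estimated by Monte
# Carlo sampling of assumed lognormal stress and resistance distributions.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
# Hypothetical distributions: stress response S and material resistance R.
S = rng.lognormal(mean=np.log(300.0), sigma=0.10, size=n)   # MPa
R = rng.lognormal(mean=np.log(420.0), sigma=0.08, size=n)   # MPa

pf = np.mean(R <= S)                  # probability of failure
print(f"P(failure) ~ {pf:.2e}, reliability ~ {1.0 - pf:.6f}")
```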
Tveito, Aslak; Skavhaug, Ola; Lines, Glenn T; Artebrant, Robert
2011-08-01
Instabilities in the electro-chemical resting state of the heart can generate ectopic waves that in turn can initiate arrhythmias. We derive methods for computing the resting state for mathematical models of the electro-chemical process underpinning a heartbeat, and we estimate the stability of the resting state by invoking the largest real part of the eigenvalues of a linearized model. The implementation of the methods is described and a number of numerical experiments illustrate the feasibility of the methods. In particular, we test the methods for problems where we can compare the solutions with analytical results, and problems where we have solutions computed by independent software. The software is also tested for a fairly realistic 3D model. Copyright © 2011 Elsevier Ltd. All rights reserved.
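A toy version of the procedure (find the resting state by root-finding, linearize, and inspect the largest real part of the eigenvalues) can be shown on the FitzHugh-Nagumo model; the model and parameters below are stand-ins, not the authors' cardiac cell models.

```python
# Illustrative sketch: resting state of the FitzHugh-Nagumo model and its
# stability from the eigenvalues of a finite-difference Jacobian.
import numpy as np
from scipy.optimize import fsolve

a, b, eps, I = 0.7, 0.8, 0.08, 0.0   # assumed parameter values

def rhs(y):
    v, w = y
    return np.array([v - v**3 / 3.0 - w + I, eps * (v + a - b * w)])

rest = fsolve(rhs, np.array([-1.0, -0.5]))

# Finite-difference Jacobian at the resting state.
h, J = 1e-7, np.zeros((2, 2))
for j in range(2):
    e = np.zeros(2); e[j] = h
    J[:, j] = (rhs(rest + e) - rhs(rest - e)) / (2.0 * h)

lam = np.linalg.eigvals(J)
print("resting state:", rest)
print("max Re(lambda):", lam.real.max(),
      "-> stable" if lam.real.max() < 0 else "-> unstable")
```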
Error Mitigation of Point-to-Point Communication for Fault-Tolerant Computing
NASA Technical Reports Server (NTRS)
Akamine, Robert L.; Hodson, Robert F.; LaMeres, Brock J.; Ray, Robert E.
2011-01-01
Fault tolerant systems require the ability to detect and recover from physical damage caused by the hardware's environment, faulty connectors, and system degradation over time. This ability applies to military, space, and industrial computing applications. The integrity of Point-to-Point (P2P) communication, between two microcontrollers for example, is an essential part of fault tolerant computing systems. In this paper, different methods of fault detection and recovery are presented and analyzed.
Fast algorithms for computing phylogenetic divergence time.
Crosby, Ralph W; Williams, Tiffani L
2017-12-06
The inference of species divergence time is a key step in most phylogenetic studies. Methods have been available for the last ten years to perform the inference, but the performance of the methods does not yet scale well to studies with hundreds of taxa and thousands of DNA base pairs. For example, a study of 349 primate taxa was estimated to require over 9 months of processing time. In this work, we present a new algorithm, AncestralAge, that significantly improves the performance of the divergence time process. As part of AncestralAge, we demonstrate a new method for the computation of phylogenetic likelihood, and our experiments show a 90% improvement in likelihood computation time on the aforementioned dataset of 349 primate taxa with over 60,000 DNA base pairs. Additionally, we show that our new method for the computation of the Bayesian prior on node ages reduces the running time for this computation on the 349 taxa dataset by 99%. Through the use of these new algorithms we open up the ability to perform divergence time inference on large phylogenetic studies.
Petersson, K J F; Friberg, L E; Karlsson, M O
2010-10-01
Computer models of biological systems grow more complex as computing power increases. Often these models are defined as differential equations for which no analytical solutions exist. Numerical integration is used to approximate the solution; this can be computationally intensive, time consuming, and account for a large proportion of the total computer runtime. The performance of different integration methods depends on the mathematical properties of the differential equation system at hand. In this paper we investigate the possibility of runtime gains by calculating parts of, or the whole, differential equation system at given time intervals, outside of the differential equation solver. This approach was tested on nine models defined as differential equations, with the goal of reducing runtime while maintaining model fit, based on the objective function value. The software used was NONMEM. In four models the computational runtime was successfully reduced (by 59-96%). The differences in parameter estimates, compared to using only the differential equation solver, were less than 12% for all fixed effects parameters. For the variance parameters, estimates were within 10% for the majority of the parameters. Population and individual predictions were similar, and the differences in OFV were between 1 and -14 units. When computational runtime seriously affects the usefulness of a model, we suggest evaluating this approach for repetitive elements of model building and evaluation such as covariate inclusions or bootstraps.
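The idea of evaluating part of the system outside the solver can be sketched in a few lines; the model below is invented and the solver is SciPy rather than NONMEM: an expensive sub-expression is pre-computed on a coarse time grid and interpolated inside the right-hand side.

```python
# Toy sketch: pre-compute a slowly varying part of the ODE right-hand side
# on a coarse grid and interpolate it during integration.
import numpy as np
from scipy.integrate import solve_ivp

def expensive_input(t):
    # Stand-in for a costly sub-expression of the full RHS.
    return np.exp(-0.1 * t) * (1.0 + 0.05 * np.sin(0.3 * t))

t_grid = np.linspace(0.0, 50.0, 26)          # coarse pre-computation grid
u_grid = expensive_input(t_grid)

def rhs_interp(t, y):
    u = np.interp(t, t_grid, u_grid)         # cheap lookup inside the solver
    return -0.5 * y + u

def rhs_exact(t, y):
    return -0.5 * y + expensive_input(t)

sol_a = solve_ivp(rhs_interp, (0, 50), [1.0], rtol=1e-8, atol=1e-10)
sol_b = solve_ivp(rhs_exact,  (0, 50), [1.0], rtol=1e-8, atol=1e-10)
print("end-point difference:", abs(sol_a.y[0, -1] - sol_b.y[0, -1]))
```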
49 CFR 395.16 - Electronic on-board recording devices.
Code of Federal Regulations, 2010 CFR
2010-10-01
... transfer through wired and wireless methods to portable computers used by roadside safety assurance... the results of power-on self-tests and diagnostic error codes. (e) Date and time. (1) The date and... part. Wireless communication information interchange methods must comply with the requirements of the...
Topology and grid adaption for high-speed flow computations
NASA Technical Reports Server (NTRS)
Abolhassani, Jamshid S.; Tiwari, Surendra N.
1989-01-01
This study investigates the effects of grid topology and grid adaptation on numerical solutions of the Navier-Stokes equations. In the first part of this study, a general procedure is presented for computation of high-speed flow over complex three-dimensional configurations. The flow field is simulated on the surface of a Butler wing in a uniform stream. Results are presented for Mach number 3.5 and a Reynolds number of 2,000,000. The O-type and H-type grids have been used for this study, and the results are compared with each other and with other theoretical and experimental results. The results demonstrate that while the H-type grid is suitable for the leading and trailing edges, a more accurate solution can be obtained for the middle part of the wing with an O-type grid. In the second part of this study, methods of grid adaption are reviewed and a method is developed with the capability of adapting to several variables. This method is based on a variational approach and is an algebraic method. Also, the method has been formulated in such a way that there is no need for any matrix inversion. This method is used in conjunction with the calculation of hypersonic flow over a blunt-nose body. A movie has been produced which shows simultaneously the transient behavior of the solution and the grid adaption.
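The core of algebraic grid adaption without matrix inversion is equidistribution of a weight function; the following 1-D sketch (with an assumed weight function and grid sizes) shows that core idea only, whereas the paper's method is variational and adapts to several variables.

```python
# 1-D algebraic grid adaption by equidistribution: place nodes so that each
# cell carries an equal share of the total weight; no matrix inversion.
import numpy as np

n, x = 41, np.linspace(0.0, 1.0, 401)            # fine background grid
w = 1.0 + 50.0 * np.exp(-200.0 * (x - 0.5)**2)   # assumed weight (feature at x=0.5)

# Cumulative weight by the trapezoidal rule, then invert it by interpolation.
W = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
levels = np.linspace(0.0, W[-1], n)              # equal increments of total weight
x_adapted = np.interp(levels, W, x)

print(np.round(x_adapted, 3))                    # nodes cluster near x = 0.5
```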
Development of Improved Surface Integral Methods for Jet Aeroacoustic Predictions
NASA Technical Reports Server (NTRS)
Pilon, Anthony R.; Lyrintzis, Anastasios S.
1997-01-01
The accurate prediction of aerodynamically generated noise has become an important goal over the past decade. Aeroacoustics must now be an integral part of the aircraft design process. The direct calculation of aerodynamically generated noise with CFD-like algorithms is plausible. However, large computer time and memory requirements often make these predictions impractical. It is therefore necessary to separate the aeroacoustics problem into two parts, one in which aerodynamic sound sources are determined, and another in which the propagating sound is calculated. This idea is applied in acoustic analogy methods. However, in the acoustic analogy, the determination of far-field sound requires the solution of a volume integral. This volume integration again leads to impractical computer requirements. An alternative to the volume integrations can be found in the Kirchhoff method. In this method, Green's theorem for the linear wave equation is used to determine sound propagation based on quantities on a surface surrounding the source region. The change from volume to surface integrals represents a tremendous savings in the computer resources required for an accurate prediction. This work is concerned with the development of enhancements of the Kirchhoff method for use in a wide variety of aeroacoustics problems. This enhanced method, the modified Kirchhoff method, is shown to be a Green's function solution of Lighthill's equation. It is also shown rigorously to be identical to the methods of Ffowcs Williams and Hawkings. This allows for development of versatile computer codes which can easily alternate between the different Kirchhoff and Ffowcs Williams-Hawkings formulations, using the most appropriate method for the problem at hand. The modified Kirchhoff method is developed primarily for use in jet aeroacoustics predictions. Applications of the method are shown for two dimensional and three dimensional jet flows. Additionally, the enhancements are generalized so that they may be used in any aeroacoustics problem.
Suzuki, Kimichi; Morokuma, Keiji; Maeda, Satoshi
2017-10-05
We propose a multistructural microiteration (MSM) method for geometry optimization and reaction path calculation in large systems. MSM is a simple extension of the geometrical microiteration technique. In conventional microiteration, the structure of the non-reaction-center (surrounding) part is optimized by fixing atoms in the reaction-center part before displacements of the reaction-center atoms. In MSM, the surrounding part is described as the weighted sum of multiple surrounding structures that are independently optimized. Then, geometric displacements of the reaction-center atoms are performed in the mean field generated by the weighted sum of the surrounding parts. MSM was combined with the QM/MM-ONIOM method and applied to chemical reactions in aqueous solution or enzyme. In all three cases, MSM gave lower reaction energy profiles than the QM/MM-ONIOM-microiteration method over the entire reaction paths, with comparable computational costs. © 2017 Wiley Periodicals, Inc.
An interactive wire-wrap board layout program
NASA Technical Reports Server (NTRS)
Schlutsmeyer, A.
1987-01-01
An interactive computer-graphics-based tool for specifying the placement of electronic parts on a wire-wrap circuit board is presented. Input is a data file (currently produced by a commercial logic design system) which describes the parts used and their interconnections. Output includes printed reports describing the parts and wire paths, parts counts, placement lists, board drawing, and a tape to send to the wire-wrap vendor. The program should reduce the engineer's layout time by a factor of 3 to 5 as compared to manual methods.
NASA Astrophysics Data System (ADS)
Alifanov, O. M.; Budnik, S. A.; Mikhaylov, V. V.; Nenarokomov, A. V.; Titov, D. M.; Yudin, V. M.
2007-06-01
An experimental-computational system, developed at the Thermal Laboratory, Department of Space Systems Engineering, Moscow Aviation Institute (MAI), is presented for investigating the thermal properties of composite materials by methods of inverse heat transfer problems. The system is aimed at investigating the materials in conditions of unsteady contact and/or radiation heating over a wide range of temperature changes and heating rates in a vacuum, air and inert gas medium. The paper considers the hardware components of the system, including the experiment facility and the automated system of control, measurement, data acquisition and processing, as well as the aspects of methodical support of thermal tests. In the next part, the conception and realization of a computer code for experimental data processing to estimate the thermal properties of thermal-insulating materials is given. The most promising direction in the further development of methods for the non-destructive evaluation of composite materials using the solution of inverse problems is the simultaneous determination of a combination of their thermal and radiation properties. The general method of iterative regularization is discussed with application to the estimation of material properties (e.g., thermal conductivity λ(T) and heat capacity C(T)). Such problems are of great practical importance in the study of materials used as surface shields in objects of space engineering, power engineering, etc. In the third part, the results of practical implementation of the hardware and software presented in the previous two parts are given for the estimation of thermal properties of thermal-insulating materials. The main purpose of this study is to confirm the feasibility and effectiveness of the methods developed and the hardware equipment for determining thermal properties of particular modern highly porous materials.
A hybrid method for the computation of quasi-3D seismograms.
NASA Astrophysics Data System (ADS)
Masson, Yder; Romanowicz, Barbara
2013-04-01
The development of powerful computer clusters and efficient numerical computation methods, such as the Spectral Element Method (SEM), has made possible the computation of seismic wave propagation in a heterogeneous 3D earth. However, the cost of these computations is still problematic for global scale tomography, which requires hundreds of such simulations. Part of the ongoing research effort is dedicated to the development of faster modeling methods based on the spectral element method. Capdeville et al. (2002) proposed to couple SEM simulations with normal mode calculations (C-SEM). Nissen-Meyer et al. (2007) used 2D SEM simulations to compute 3D seismograms in a 1D earth model. Thanks to these developments, and for the first time, Lekic et al. (2011) developed a 3D global model of the upper mantle using SEM simulations. At the local and continental scale, adjoint tomography, which uses many SEM simulations, can be implemented on current computers (Tape, Liu et al. 2009). Due to their smaller size, these models offer higher resolution. They provide us with images of the crust and the upper part of the mantle. In an attempt to extend such local adjoint tomographic inversions into the deep earth, we are developing a hybrid method where SEM computation is limited to a region of interest within the earth. That region can have an arbitrary shape and size. Outside this region, the seismic wavefield is extrapolated to obtain synthetic data at the Earth's surface. A key feature of the method is the use of a time reversal mirror to inject the wavefield induced by a distant seismic source into the region of interest (Robertsson and Chapman 2000). We compute synthetic seismograms as follows: inside the region of interest, we use the regional spectral element software RegSEM to compute wave propagation in 3D; outside this region, the wavefield is extrapolated to the surface by convolution with the Green's functions from the mirror to the seismic stations. For now, these Green's functions are computed using 2D SEM simulations in a 1D Earth model. Such seismograms account for the 3D structure inside the region of interest in a quasi-exact manner. Later we plan to extrapolate the misfit function computed from such seismograms at the stations back into the SEM region in order to compute local adjoint kernels. This opens a new path toward regional adjoint tomography into the deep Earth. Capdeville, Y., et al. (2002). "Coupling the spectral element method with a modal solution for elastic wave propagation in global Earth models." Geophysical Journal International 152(1): 34-67. Lekic, V. and B. Romanowicz (2011). "Inferring upper-mantle structure by full waveform tomography with the spectral element method." Geophysical Journal International 185(2): 799-831. Nissen-Meyer, T., et al. (2007). "A two-dimensional spectral-element method for computing spherical-earth seismograms-I. Moment-tensor source." Geophysical Journal International 168(3): 1067-1092. Robertsson, J. O. A. and C. H. Chapman (2000). "An efficient method for calculating finite-difference seismograms after model alterations." Geophysics 65(3): 907-918. Tape, C., et al. (2009). "Adjoint tomography of the southern California crust." Science 325(5943): 988-992.
ERIC Educational Resources Information Center
Fox, Janna; Cheng, Liying
2015-01-01
In keeping with the trend to elicit multiple stakeholder responses to operational tests as part of test validation, this exploratory mixed methods study examines test-taker accounts of an Internet-based (i.e., computer-administered) test in the high-stakes context of proficiency testing for university admission. In 2013, as language testing…
Center for Efficient Exascale Discretizations Software Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolev, Tzanio; Dobrev, Veselin; Tomov, Vladimir
The CEED software suite is a collection of generally applicable software tools focusing on the following computational motifs: PDE discretizations on unstructured meshes, high-order finite element and spectral element methods, and unstructured adaptive mesh refinement. All of this software is being developed as part of CEED, a co-design Center for Efficient Exascale Discretizations, within DOE's Exascale Computing Project (ECP) program.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Return Method Rate of return for a period may be calculated by computing the net performance divided by the beginning net asset value for each trading day in the period and compounding each daily rate of... commodity pool operator or commodity trading advisor may present to the Commission proposals regarding any...
Code of Federal Regulations, 2013 CFR
2013-04-01
... Return Method Rate of return for a period may be calculated by computing the net performance divided by the beginning net asset value for each trading day in the period and compounding each daily rate of... commodity pool operator or commodity trading advisor may present to the Commission proposals regarding any...
Code of Federal Regulations, 2011 CFR
2011-04-01
... Return Method Rate of return for a period may be calculated by computing the net performance divided by the beginning net asset value for each trading day in the period and compounding each daily rate of... commodity pool operator or commodity trading advisor may present to the Commission proposals regarding any...
Code of Federal Regulations, 2012 CFR
2012-04-01
... Return Method Rate of return for a period may be calculated by computing the net performance divided by the beginning net asset value for each trading day in the period and compounding each daily rate of... commodity pool operator or commodity trading advisor may present to the Commission proposals regarding any...
Code of Federal Regulations, 2010 CFR
2010-04-01
... Return Method Rate of return for a period may be calculated by computing the net performance divided by the beginning net asset value for each trading day in the period and compounding each daily rate of... commodity pool operator or commodity trading advisor may present to the Commission proposals regarding any...
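The daily-compounding computation these CFR excerpts describe reduces to a running product; a small sketch with invented figures (illustrative only, not regulatory guidance):

```python
# Compounded daily rate of return: each day's net performance divided by
# that day's beginning net asset value, compounded over the period.
daily_net_performance = [1200.0, -450.0, 300.0]        # hypothetical, per trading day
beginning_net_asset_value = [100000.0, 101200.0, 100750.0]

rate = 1.0
for perf, bnav in zip(daily_net_performance, beginning_net_asset_value):
    rate *= 1.0 + perf / bnav                          # compound each daily rate
period_rate_of_return = rate - 1.0
print(f"period rate of return: {period_rate_of_return:.4%}")
```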
Computer graphics and cultural heritage, part 2: continuing inspiration for future tools.
Arnold, David
2014-01-01
The availability of large quantities of cultural-heritage data will enable new, previously inconceivable, types of analysis and new applications. Currently, most emerging analysis methods are experimental research. It's likely to take many years before the research matures and provides cultural-heritage professionals with novel research methods that they use routinely. Indeed, we can expect further disruptive technologies to emerge in the foreseeable future and a "steady state" of continuing rapid change. Part 1 can be found at 10.1109/MCG.2014.47.
Evaluation of Three Microcomputer Teaching Modules. SUMIT Courseware Development Project.
ERIC Educational Resources Information Center
Soldan, Ted
The purpose of this series of experiments was to examine two questions related to the effectiveness of computer assisted instruction (CAI). Can microcomputer modules teach effectively, and do they enhance learning when used as a supplement to traditional teaching methods? Part 1 of this report addresses the former question and part 2 addresses the…
High-kVp Assisted Metal Artifact Reduction for X-ray Computed Tomography
Xi, Yan; Jin, Yannan; De Man, Bruno; Wang, Ge
2016-01-01
In X-ray computed tomography (CT), the presence of metallic parts in patients causes serious artifacts and degrades image quality. Many algorithms have been published for metal artifact reduction (MAR) over the past decades, with varying degrees of success but no perfect solution. Some MAR algorithms are based on the assumption that metal artifacts are due only to strong beam hardening and may fail in the case of serious photon starvation. Iterative methods handle photon starvation by discarding or underweighting corrupted data, but the results are not always stable and they come with high computational cost. In this paper, we propose a high-kVp-assisted CT scan mode combining a standard CT scan with a few projection views at a high kVp value to obtain critical projection information near the metal parts. This method requires only minor hardware modifications on a modern CT scanner. Two MAR algorithms are proposed: dual-energy normalized MAR (DNMAR) and high-energy embedded MAR (HEMAR), aimed at situations without and with photon starvation, respectively. Simulation results obtained with the CT simulator CatSim demonstrate that the proposed DNMAR and HEMAR methods can eliminate metal artifacts effectively. PMID:27891293
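The DNMAR and HEMAR algorithms themselves are not reproduced here; as a baseline flavour of sinogram-domain MAR, this sketch applies the classical linear-interpolation correction across the metal trace of a toy phantom (scikit-image transforms; all sizes and intensities are invented):

```python
# Classical linear-interpolation MAR on a toy phantom: forward-project,
# mask the metal trace in the sinogram, interpolate across it, reconstruct.
import numpy as np
from skimage.transform import radon, iradon

img = np.zeros((128, 128))
img[40:90, 40:90] = 1.0                      # soft-tissue-like block
img[60:66, 60:66] = 20.0                     # hypothetical metal insert

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sino = radon(img, theta=theta)
metal_trace = radon((img > 10.0).astype(float), theta=theta) > 0.5

corrected = sino.copy()
for j in range(sino.shape[1]):               # interpolate each view separately
    col, bad = corrected[:, j], metal_trace[:, j]
    if bad.any():
        idx = np.arange(len(col))
        col[bad] = np.interp(idx[bad], idx[~bad], col[~bad])

recon = iradon(corrected, theta=theta)
print("reconstruction shape:", recon.shape)
```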
Madanat, Rami; Moritz, Niko; Aro, Hannu T
2007-01-01
Physical phantom models have conventionally been used to determine the accuracy and precision of radiostereometric analysis (RSA) in various orthopaedic applications. Using a phantom model of a fracture of the distal radius, it has previously been shown that RSA is a highly accurate and precise method for measuring both translation and rotation in three dimensions (3-D). The main shortcoming of a physical phantom model is its inability to mimic complex 3-D motion. The goal of this study was to create a realistic computer model for preoperative planning of RSA studies and to test the accuracy of RSA in measuring complex movements in fractures of the distal radius using this new model. The 3-D computer model was created from a set of tomographic scans. The simulation of the radiographic imaging was performed using ray-tracing software (POV-Ray). RSA measurements were performed according to standard protocol. Using a two-part fracture model (AO/ASIF type A2), it was found that for simple movements along one axis, translations in the range of 25 µm to 2 mm could be measured with an accuracy of ±2 µm. Rotations ranging from 16° down to 2° could be measured with an accuracy of ±0.015°. Using a three-part fracture model, the corresponding values of accuracy were found to be ±4 µm and ±0.031° for translation and rotation, respectively. For complex 3-D motion in a three-part fracture model (AO/ASIF type C1) the accuracy was ±6 µm for translation and ±0.120° for rotation. The use of 3-D computer modelling can provide a method for preoperative planning of RSA studies in complex fractures of the distal radius and in other clinical situations in which the RSA method is applicable.
THE COMPREHENSION OF RAPID SPEECH BY THE BLIND, PART III.
ERIC Educational Resources Information Center
FOULKE, EMERSON
A REVIEW OF THE RESEARCH ON THE COMPREHENSION OF RAPID SPEECH BY THE BLIND IDENTIFIES FIVE METHODS OF SPEECH COMPRESSION--SPEECH CHANGING, ELECTROMECHANICAL SAMPLING, COMPUTER SAMPLING, SPEECH SYNTHESIS, AND FREQUENCY DIVIDING WITH THE HARMONIC COMPRESSOR. THE SPEECH CHANGING AND ELECTROMECHANICAL SAMPLING METHODS AND THE NECESSARY APPARATUS HAVE…
Scalable Kernel Methods and Algorithms for General Sequence Analysis
ERIC Educational Resources Information Center
Kuksa, Pavel
2011-01-01
Analysis of large-scale sequential data has become an important task in machine learning and pattern recognition, inspired in part by numerous scientific and technological applications such as the document and text classification or the analysis of biological sequences. However, current computational methods for sequence comparison still lack…
A Computational Approach to Qualitative Analysis in Large Textual Datasets
Evans, Michael S.
2014-01-01
In this paper I introduce computational techniques to extend qualitative analysis into the study of large textual datasets. I demonstrate these techniques by using probabilistic topic modeling to analyze a broad sample of 14,952 documents published in major American newspapers from 1980 through 2012. I show how computational data mining techniques can identify and evaluate the significance of qualitatively distinct subjects of discussion across a wide range of public discourse. I also show how examining large textual datasets with computational methods can overcome methodological limitations of conventional qualitative methods, such as how to measure the impact of particular cases on broader discourse, how to validate substantive inferences from small samples of textual data, and how to determine if identified cases are part of a consistent temporal pattern. PMID:24498398
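A minimal sketch of probabilistic topic modeling of the kind described, on an invented four-document corpus (scikit-learn's LDA stands in for the paper's modeling pipeline):

```python
# Toy probabilistic topic modeling: fit LDA to a tiny corpus and print the
# top words per topic. Corpus and settings are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "stem cell research funding debate in congress",
    "evolution curriculum battle in state schools",
    "congress passes science funding bill",
    "school board debates teaching evolution",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
vocab = vec.get_feature_names_out()

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
for k, topic in enumerate(lda.components_):
    top = [vocab[i] for i in topic.argsort()[::-1][:4]]
    print(f"topic {k}: {top}")
```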
Computational fluid mechanics utilizing the variational principle of modeling damping seals
NASA Technical Reports Server (NTRS)
Abernathy, J. M.
1986-01-01
A computational fluid dynamics code for application to traditional incompressible flow problems has been developed. The method is actually a slight compressibility approach which takes advantage of the bulk modulus and finite sound speed of all real fluids. The finite element numerical analog uses a dynamic differencing scheme based, in part, on a variational principle for computational fluid dynamics. The code was developed in order to study the feasibility of damping seals for high speed turbomachinery. Preliminary seal analyses have been performed.
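The slight-compressibility idea can be sketched in 1-D: pressure evolves through the bulk modulus and velocity through the pressure gradient, giving a finite sound speed. The constants below are water-like assumptions, and the staggered leapfrog scheme is illustrative rather than the code's actual finite element discretization.

```python
# 1-D slight-compressibility toy: dp/dt = -K du/dx, du/dt = -(1/rho) dp/dx,
# so disturbances travel at the finite sound speed c = sqrt(K/rho).
import numpy as np

rho, K, nx = 1000.0, 2.2e9, 200                 # water-like values (assumed)
c = np.sqrt(K / rho)
dx = 1.0 / nx
dt = 0.5 * dx / c                               # acoustic CFL limit

u = np.zeros(nx + 1)                            # staggered velocity nodes
p = np.zeros(nx)                                # pressure cells
p[nx // 2 - 5: nx // 2 + 5] = 1.0e5             # initial pressure pulse

for _ in range(100):
    u[1:-1] -= dt / rho * (p[1:] - p[:-1]) / dx
    p -= dt * K * (u[1:] - u[:-1]) / dx

print("sound speed c = %.1f m/s; pulse has split and propagated" % c)
```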
Examination and notes to the astronomical records in >SUISHU<.
NASA Astrophysics Data System (ADS)
Liu, Ciyuan
1996-06-01
Astronomical records are an important part of Chinese official historical books. Their main purpose was astrological, and they are an obstacle for historians who read those books. With modern astronomical methods, one can compute and examine most of those ancient records. By comparing the computed results with the original texts, one can examine the texts, find their mistakes, study their observation methods and conventions, examine astrological theory, and gain a deeper understanding of these important historical materials. As an example, the author deals with the astronomical records of the Liang and Chen dynasties, covering 60 years, in >SUISHU<, the official history of the Sui dynasty. He also synthesized other historical sources in addition to the astronomical computation.
Progress in Computational Electron-Molecule Collisions
NASA Astrophysics Data System (ADS)
Rescigno, T. N.
1997-10-01
The past few years have witnessed tremendous progress in the development of sophisticated ab initio methods for treating collisions of slow electrons with isolated small molecules. Researchers in this area have benefited greatly from advances in computer technology; indeed, the advent of parallel computers has made it possible to carry out calculations at a level of sophistication inconceivable a decade ago. But bigger and faster computers are only part of the picture. Even with today's computers, the practical need to study electron collisions with the kinds of complex molecules and fragments encountered in real-world plasma processing environments is taxing present methods beyond their current capabilities. Since extrapolation of existing methods to handle increasingly larger targets will ultimately fail as it would require computational resources beyond any imagined, continued progress must also be linked to new theoretical developments. Some of the techniques recently introduced to address these problems will be discussed and illustrated with examples of electron-molecule collision calculations we have carried out on some fairly complex target gases encountered in processing plasmas. Electron-molecule scattering continues to pose many formidable theoretical and computational challenges. I will touch on some of the outstanding open questions.
Integrand-level reduction of loop amplitudes by computational algebraic geometry methods
NASA Astrophysics Data System (ADS)
Zhang, Yang
2012-09-01
We present an algorithm for the integrand-level reduction of multi-loop amplitudes of renormalizable field theories, based on computational algebraic geometry. This algorithm uses (1) the Gröbner basis method to determine the basis for integrand-level reduction, and (2) the primary decomposition of an ideal to classify all inequivalent solutions of unitarity cuts. The resulting basis and cut solutions can be used to reconstruct the integrand from unitarity cuts, via polynomial fitting techniques. The basis determination part of the algorithm has been implemented in the Mathematica package BasisDet. The primary decomposition part can be readily carried out by algebraic geometry software, with the output of the package BasisDet. The algorithm works in both D = 4 and D = 4 - 2ε dimensions, and we present some two- and three-loop examples of applications of this algorithm.
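Both ingredients named in the abstract, Gröbner bases and reduction modulo an ideal, can be demonstrated with SymPy (rather than the paper's Mathematica package BasisDet); the polynomials below are toy stand-ins for cut equations:

```python
# Compute a Groebner basis for a toy ideal and reduce a polynomial modulo
# it: the remainder is the part that survives on the "cut" the ideal defines.
import sympy as sp

x, y = sp.symbols("x y")
ideal = [x**2 + y**2 - 1, x*y - 1]            # assumed cut-equation stand-ins

G = sp.groebner(ideal, x, y, order="lex")
print("Groebner basis:", list(G.exprs))

p = x**3 * y + y**2
coeffs, remainder = G.reduce(p)               # p = sum(coeffs * basis) + remainder
print("remainder modulo the ideal:", remainder)
```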
NASA Technical Reports Server (NTRS)
Gunness, R. C., Jr.; Knight, C. J.; Dsylva, E.
1972-01-01
The unified small disturbance equations are numerically solved using the well-known Lax-Wendroff finite difference technique. The method allows complete determination of the inviscid flow field and surface properties as long as the flow remains supersonic. Shock waves and other discontinuities are accounted for implicitly in the numerical method. This technique was programmed for general application to the three-dimensional case. The validity of the method is demonstrated by calculations on cones, axisymmetric bodies, lifting bodies, delta wings, and a conical wing/body combination. Part 1 contains the discussion of problem development and results of the study. Part 2 contains flow charts, subroutine descriptions, and a listing of the computer program.
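For reference, the Lax-Wendroff scheme itself in its simplest setting, 1-D linear advection on a periodic grid (this does not attempt the report's 3-D small disturbance equations):

```python
# Minimal 1-D Lax-Wendroff sketch for u_t + a u_x = 0 on a periodic domain.
import numpy as np

a, L, nx, cfl = 1.0, 1.0, 200, 0.8
dx = L / nx
dt = cfl * dx / a
x = np.linspace(0.0, L, nx, endpoint=False)
u = np.exp(-200.0 * (x - 0.3)**2)             # initial Gaussian pulse

for _ in range(int(0.4 / dt)):
    up = np.roll(u, -1)                        # u_{i+1}
    um = np.roll(u, 1)                         # u_{i-1}
    u = (u - 0.5 * cfl * (up - um)
           + 0.5 * cfl**2 * (up - 2.0 * u + um))

print("pulse peak now near x =", x[np.argmax(u)])  # expect ~0.3 + a*0.4 = 0.7
```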
Fast focus estimation using frequency analysis in digital holography.
Oh, Seungtaik; Hwang, Chi-Young; Jeong, Il Kwon; Lee, Sung-Keun; Park, Jae-Hyeung
2014-11-17
A novel fast frequency-based method to estimate the focus distance of digital hologram for a single object is proposed. The focus distance is computed by analyzing the distribution of intersections of smoothed-rays. The smoothed-rays are determined by the directions of energy flow which are computed from local spatial frequency spectrum based on the windowed Fourier transform. So our method uses only the intrinsic frequency information of the optical field on the hologram and therefore does not require any sequential numerical reconstructions and focus detection techniques of conventional photography, both of which are the essential parts in previous methods. To show the effectiveness of our method, numerical results and analysis are presented as well.
NASA Technical Reports Server (NTRS)
Desmarais, R. N.
1982-01-01
The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential functions and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation that is currently most often used for this purpose. The method can be used to generate approximations to attain any desired trade-off between accuracy and computing cost.
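A simplified sketch of the recipe: fit a sum of exponentials with geometrically spaced exponents to an assumed algebraic function by linear least squares. The paper also optimizes the exponent multiplier by least squares; here it is fixed for brevity.

```python
# Approximate an assumed algebraic function by a sum of exponentials whose
# exponents form a geometric sequence; coefficients from linear least squares.
import numpy as np

u = np.linspace(0.0, 20.0, 2001)
f = 1.0 / np.sqrt(1.0 + u**2)                # algebraic stand-in to approximate

n_terms, b0, ratio = 8, 0.05, 2.0            # geometric exponent spacing (assumed)
exponents = b0 * ratio ** np.arange(n_terms)

A = np.exp(-np.outer(u, exponents))          # design matrix of exponentials
coef, *_ = np.linalg.lstsq(A, f, rcond=None)

approx = A @ coef
print("max abs error:", np.abs(approx - f).max())
```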
Cut set-based risk and reliability analysis for arbitrarily interconnected networks
Wyss, Gregory D.
2000-01-01
Method for computing all-terminal reliability for arbitrarily interconnected networks such as the United States public switched telephone network. The method includes an efficient search algorithm to generate minimal cut sets for nonhierarchical networks directly from the network connectivity diagram. Efficiency of the search algorithm stems in part from its basis on only link failures. The method also includes a novel quantification scheme that likewise reduces computational effort associated with assessing network reliability based on traditional risk importance measures. Vast reductions in computational effort are realized since combinatorial expansion and subsequent Boolean reduction steps are eliminated through analysis of network segmentations using a technique of assuming node failures to occur on only one side of a break in the network, and repeating the technique for all minimal cut sets generated with the search algorithm. The method functions equally well for planar and non-planar networks.
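A brute-force toy version of minimal cut set generation for all-terminal reliability (the patented search algorithm is far more efficient; this version simply checks every link subset on a small, invented network):

```python
# Enumerate minimal link-failure cut sets of a small network: a failed set
# of links is a cut set if the surviving links leave the nodes disconnected.
from itertools import combinations

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]      # assumed small mesh plus a spur

def connected(active_edges, nodes):
    parent = {n: n for n in nodes}            # union-find over the nodes
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for a, b in active_edges:
        parent[find(a)] = find(b)
    return len({find(n) for n in nodes}) == 1

nodes = {n for e in edges for n in e}
cut_sets = []
for k in range(1, len(edges) + 1):            # ascending size keeps sets minimal
    for failed in combinations(edges, k):
        remaining = [e for e in edges if e not in failed]
        if not connected(remaining, nodes):
            fs = set(failed)
            if not any(c <= fs for c in cut_sets):
                cut_sets.append(fs)
print(cut_sets)
```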
Levman, Jacob E D; Gallego-Ortiz, Cristina; Warner, Ellen; Causer, Petrina; Martel, Anne L
2016-02-01
Magnetic resonance imaging (MRI)-enabled cancer screening has been shown to be a highly sensitive method for the early detection of breast cancer. Computer-aided detection systems have the potential to improve the screening process by standardizing radiologists to a high level of diagnostic accuracy. This retrospective study was approved by the institutional review board of Sunnybrook Health Sciences Centre. This study compares the performance of a proposed method for computer-aided detection (based on the second-order spatial derivative of the relative signal intensity) with the signal enhancement ratio (SER) on MRI-based breast screening examinations. Comparison is performed using receiver operating characteristic (ROC) curve analysis as well as free-response receiver operating characteristic (FROC) curve analysis. A modified computer-aided detection system combining the proposed approach with the SER method is also presented. The proposed method provides improvements in the rates of false positive markings over the SER method in the detection of breast cancer (as assessed by FROC analysis). The modified computer-aided detection system that incorporates both the proposed method and the SER method yields ROC results equal to that produced by SER while simultaneously providing improvements over the SER method in terms of false positives per noncancerous exam. The proposed method for identifying malignancies outperforms the SER method in terms of false positives on a challenging dataset containing many small lesions and may play a useful role in breast cancer screening by MRI as part of a computer-aided detection system.
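The proposed feature, the second-order spatial derivative of the relative signal intensity, can be sketched with SciPy on synthetic arrays (the Gaussian smoothing step and all image values are assumptions, not the study's pipeline):

```python
# Second-order spatial derivative (Laplacian) of the relative signal
# intensity between post- and pre-contrast images, on synthetic data.
import numpy as np
from scipy.ndimage import laplace, gaussian_filter

rng = np.random.default_rng(1)
pre = 100.0 + 5.0 * rng.standard_normal((64, 64))
post = pre.copy()
post[30:34, 30:34] += 60.0                      # hypothetical enhancing lesion

rsi = (post - pre) / pre                        # relative signal intensity
feature = laplace(gaussian_filter(rsi, sigma=1.0))

ij = np.unravel_index(np.abs(feature).argmax(), feature.shape)
print("strongest second-derivative response at voxel", ij)
```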
Method and system for rapid piece handling
Spletzer, Barry L.
1996-01-01
The advent of high-speed fabric cutters has made necessary the development of automated techniques for the collection and sorting of garment pieces into collated piles of pieces ready for assembly. The present invention provides a new method for such handling and sorting of garment parts, and apparatus capable of carrying out this new method. The common thread is the application of computer-controlled shuttling bins, capable of picking up a desired piece of fabric and dropping it in collated order for assembly. Such apparatus with appropriate computer control relieves the bottleneck now presented by the sorting and collation procedure, thus greatly increasing the overall rate at which garments can be assembled.
The present state and future directions of PDF methods
NASA Technical Reports Server (NTRS)
Pope, S. B.
1992-01-01
The objectives of the workshop are presented in viewgraph format, as is this entire article. The objectives are to discuss the present status and the future direction of various levels of engineering turbulence modeling related to Computational Fluid Dynamics (CFD) computations for propulsion; to emphasize that combustion is an essential part of propulsion; and to discuss Probability Density Function (PDF) methods for turbulent combustion. Essential to the integration of turbulent combustion models is the development of turbulence models, chemical kinetics, and numerical methods. Some turbulent combustion models typically used in industry are the k-epsilon turbulence model, equilibrium/mixing-limited combustion, and finite volume codes.
A Review of Methods for Analysis of the Expected Value of Information.
Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca
2017-10-01
In recent years, value-of-information analysis has become more widespread in health economic evaluations, specifically as a tool to guide further research and perform probabilistic sensitivity analysis. This is partly due to methodological advancements allowing for the fast computation of a typical summary known as the expected value of partial perfect information (EVPPI). A recent review discussed some approximation methods for calculating the EVPPI, but as the research has been active over the intervening years, that review does not discuss some key estimation methods. Therefore, this paper presents a comprehensive review of these new methods. We begin by providing the technical details of these computation methods. We then present two case studies in order to compare the estimation performance of these new methods. We conclude that a method based on nonparametric regression offers the best method for calculating the EVPPI in terms of accuracy, computational time, and ease of implementation. This means that the EVPPI can now be used practically in health economic evaluations, especially as all the methods are developed in parallel with R functions and a web app to aid practitioners.
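A hedged sketch of the regression-based EVPPI estimator the review favours, with a simple polynomial smoother standing in for a proper nonparametric regression and an invented two-option decision model:

```python
# Regression-based EVPPI: regress each option's net benefit on the parameter
# of interest, then EVPPI = E[max_d fitted] - max_d E[net benefit].
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
theta = rng.normal(0.0, 1.0, n)                  # parameter of interest
noise = rng.normal(0.0, 2.0, (n, 2))             # all remaining uncertainty

# Net benefit of two options as functions of the sampled inputs (assumed).
nb = np.column_stack([0.5 + 0.0 * theta, theta]) + noise

# Conditional expectations E[NB_d | theta] via cubic polynomial regression.
fitted = np.column_stack([
    np.polyval(np.polyfit(theta, nb[:, d], deg=3), theta) for d in range(2)
])

evppi = np.mean(fitted.max(axis=1)) - nb.mean(axis=0).max()
print(f"EVPPI estimate: {evppi:.4f}")
```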
Equivalent circuit simulation of HPEM-induced transient responses at nonlinear loads
NASA Astrophysics Data System (ADS)
Kotzev, Miroslav; Bi, Xiaotang; Kreitlow, Matthias; Gronwald, Frank
2017-09-01
In this paper the equivalent circuit modeling of a nonlinearly loaded loop antenna and its transient responses to HPEM field excitations are investigated. For the circuit modeling, the general strategy of characterizing the nonlinearly loaded antenna by a linear and a nonlinear circuit part is pursued. The linear circuit part can be determined by standard methods of antenna theory and numerical field computation. The modeling of the nonlinear circuit part requires realistic circuit models of the nonlinear loads, which are given by Schottky diodes. Combining both parts, appropriate circuit models are obtained and analyzed by means of a standard SPICE circuit simulator. The main result is that full-wave simulation results can be reproduced in this way. Furthermore, it is clearly seen that the equivalent circuit modeling offers considerable advantages with respect to computation speed and also leads to improved physical insight into the coupling between the HPEM field excitation and the nonlinearly loaded loop antenna.
Real Gas Computation Using an Energy Relaxation Method and High-Order WENO Schemes
NASA Technical Reports Server (NTRS)
Montarnal, Philippe; Shu, Chi-Wang
1998-01-01
In this paper, we use a recently developed energy relaxation theory by Coquel and Perthame and high order weighted essentially non-oscillatory (WENO) schemes to simulate the Euler equations of real gas. The main idea is an energy decomposition into two parts: one part is associated with a simpler pressure law and the other part (the nonlinear deviation) is convected with the flow. A relaxation process is performed for each time step to ensure that the original pressure law is satisfied. The necessary characteristic decomposition for the high order WENO schemes is performed on the characteristic fields based on the first part. The algorithm only calls for the original pressure law once per grid point per time step, without the need to compute its derivatives or any Riemann solvers. Both one and two dimensional numerical examples are shown to illustrate the effectiveness of this approach.
NASA Technical Reports Server (NTRS)
1979-01-01
The objective of the current program was to modify a discrete vortex wake method to efficiently compute the aerodynamic forces and moments on high fineness ratio bodies (f approximately 10.0). The approach is to increase computational efficiency by structuring the program to take advantage of new computer vector software and by developing new algorithms when vector software cannot be used efficiently. An efficient program was written and substantial savings achieved. Several test cases were run for fineness ratios up to f = 16.0 and angles of attack up to 50 degrees.
NASA Technical Reports Server (NTRS)
Sword, A. J.; Park, W. T.
1975-01-01
A teleoperator system with a computer for manipulator control to combine the capabilities of both man and computer to accomplish a task is described. This system allows objects in unpredictable locations to be successfully located and acquired. By using a method of characterizing the work-space together with man's ability to plan a strategy and coarsely locate an object, the computer is provided with enough information to complete the tedious part of the task. In addition, the use of voice control is shown to be a useful component of the man/machine interface.
Discussion of "Computational Electrocardiography: Revisiting Holter ECG Monitoring".
Baumgartner, Christian; Caiani, Enrico G; Dickhaus, Hartmut; Kulikowski, Casimir A; Schiecke, Karin; van Bemmel, Jan H; Witte, Herbert
2016-08-05
This article is part of a For-Discussion-Section of Methods of Information in Medicine about the paper "Computational Electrocardiography: Revisiting Holter ECG Monitoring" written by Thomas M. Deserno and Nikolaus Marx. It is introduced by an editorial. This article contains the combined commentaries invited to independently comment on the paper of Deserno and Marx. In subsequent issues the discussion can continue through letters to the editor.
Fundamentals of digital filtering with applications in geophysical prospecting for oil
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mesko, A.
This book is a comprehensive work bringing together the important mathematical foundations and computing techniques for numerical filtering methods. The first two parts of the book introduce the techniques, fundamental theory and applications, while the third part treats specific applications in geophysical prospecting. Discussion is limited to linear filters, but takes in related fields such as correlational and spectral analysis.
Numerical study of the flow in a three-dimensional thermally driven cavity
NASA Astrophysics Data System (ADS)
Rauwoens, Pieter; Vierendeels, Jan; Merci, Bart
2008-06-01
Solutions for the fully compressible Navier-Stokes equations are presented for the flow and temperature fields in a cubic cavity with large horizontal temperature differences. The ideal-gas approximation for air is assumed and viscosity is computed using Sutherland's law. The three-dimensional case forms an extension of previous studies performed on a two-dimensional square cavity. The influence of imposed boundary conditions in the third dimension is investigated as a numerical experiment. Comparison is made between convergence rates for periodic and free-slip boundary conditions. Results with no-slip boundary conditions are presented as well. The effect of the Rayleigh number is studied. Results are computed using a finite volume method on a structured, collocated grid. An explicit third-order discretization is used for the convective part, and an implicit central discretization for the acoustic and diffusive parts. To stabilize the scheme, an artificial dissipation term for the pressure and the temperature is introduced. The discrete equations are solved using a time-marching method with restrictions on the timestep corresponding to the explicit parts of the solver. Multigrid is used as an acceleration technique.
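For reference, Sutherland's law as used for the viscosity here; the reference constants below are the commonly tabulated values for air and are an assumption on our part:

```python
# Sutherland's law for the dynamic viscosity of air.
def sutherland_viscosity(T, mu_ref=1.716e-5, T_ref=273.15, S=110.4):
    """Dynamic viscosity [Pa s] of air at temperature T [K] (assumed constants)."""
    return mu_ref * (T / T_ref) ** 1.5 * (T_ref + S) / (T + S)

for T in (250.0, 300.0, 600.0, 1200.0):
    print(f"T = {T:6.1f} K  ->  mu = {sutherland_viscosity(T):.3e} Pa s")
```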
Proposed algorithm to improve job shop production scheduling using ant colony optimization method
NASA Astrophysics Data System (ADS)
Pakpahan, Eka KA; Kristina, Sonna; Setiawan, Ari
2017-12-01
This paper deals with the determination of a job shop production schedule in an automated environment. In this environment, machines and the material handling system are integrated and controlled by a computer center, where schedules are created and then used to dictate the movement of parts and the operations at each machine. This setting is usually designed for an unmanned production process over a specified time interval. We consider parts with various operation requirements. Each operation requires specific cutting tools. These parts are to be scheduled on machines of identical capability, meaning that each machine is equipped with a similar set of cutting tools and is therefore capable of processing any operation. The availability of a particular machine to process a particular operation is determined by the remaining lifetime of its cutting tools. We propose an algorithm based on the ant colony optimization method, implemented in Matlab, to generate a production schedule which minimizes the total processing time of the parts (makespan). We tested the algorithm on data provided by a real industry partner, and the process shows a very short computation time. This contributes considerably to the flexibility and timeliness targeted in an automated environment.
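A compact ant colony sketch for a much-simplified stand-in problem, sequencing jobs with sequence-dependent setup times on one machine with invented data (the paper's job shop setting and Matlab implementation are richer):

```python
# Minimal ant colony optimization: pheromone trails bias successive job
# orderings toward low makespan; data and parameters are invented.
import numpy as np

rng = np.random.default_rng(3)
n = 6
proc = rng.uniform(2.0, 5.0, n)                 # processing times
setup = rng.uniform(0.5, 2.0, (n, n))           # setup[i, j]: time between i and j

def makespan(seq):
    t = proc[seq[0]]
    for a, b in zip(seq, seq[1:]):
        t += setup[a, b] + proc[b]
    return t

tau = np.ones((n, n))                            # pheromone matrix
best_seq, best_val = None, np.inf
for _ in range(200):                             # iterations
    for _ant in range(10):
        seq = [rng.integers(n)]
        while len(seq) < n:
            cand = [j for j in range(n) if j not in seq]
            w = np.array([tau[seq[-1], j] / setup[seq[-1], j] for j in cand])
            seq.append(cand[rng.choice(len(cand), p=w / w.sum())])
        val = makespan(seq)
        if val < best_val:
            best_seq, best_val = seq, val
    tau *= 0.95                                  # evaporation
    for a, b in zip(best_seq, best_seq[1:]):
        tau[a, b] += 1.0 / best_val              # reinforce the best sequence

print("best sequence:", best_seq, "makespan:", round(best_val, 2))
```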
Parallelized modelling and solution scheme for hierarchically scaled simulations
NASA Technical Reports Server (NTRS)
Padovan, Joe
1995-01-01
This two-part paper presents the results of a benchmarked analytical-numerical investigation into the operational characteristics of a unified parallel processing strategy for implicit fluid mechanics formulations. This hierarchical poly tree (HPT) strategy is based on multilevel substructural decomposition. The tree morphology is chosen to minimize memory, communications and computational effort. The methodology is general enough to apply to existing finite difference (FD), finite element (FEM), finite volume (FV) or spectral element (SE) based computer programs without an extensive rewrite of code. In addition to finding large reductions in memory, communications, and computational effort associated with a parallel computing environment, substantial reductions are generated in the sequential mode of application. Such improvements grow with increasing problem size. Along with a theoretical development of general 2-D and 3-D HPT, several techniques for expanding the problem size that the current generation of computers is capable of solving are presented and discussed. Among these techniques are several interpolative reduction methods. It was found that by combining several of these techniques, a relatively small interpolative reduction resulted in substantial performance gains. Several other unique features/benefits are discussed in this paper. Along with Part 1's theoretical development, Part 2 presents a numerical approach to the HPT along with four prototype CFD applications. These demonstrate the potential of the HPT strategy.
Nurses' computer literacy and attitudes towards the use of computers in health care.
Gürdaş Topkaya, Sati; Kaya, Nurten
2015-05-01
This descriptive and cross-sectional study was designed to address nurses' computer literacy and attitudes towards the use of computers in health care and to determine the correlation between these two variables. This study was conducted with the participation of 688 nurses who worked at two university-affiliated hospitals. These nurses were chosen using a stratified random sampling method. The data were collected using the Multicomponent Assessment of Computer Literacy and the Pretest for Attitudes Towards Computers in Healthcare Assessment Scale v. 2. The nurses, in general, had positive attitudes towards computers, and their computer literacy was good. Computer literacy in general had significant positive correlations with individual elements of computer competency and with attitudes towards computers. If the computer is to be an effective and beneficial part of the health-care system, it is necessary to help nurses improve their computer competency. © 2014 Wiley Publishing Asia Pty Ltd.
NASA Astrophysics Data System (ADS)
Hannachi, Ammar; Kohler, Sophie; Lallement, Alex; Hirsch, Ernest
2015-04-01
3D modeling of scene contents is of increasing importance for many computer vision based applications. In particular, industrial applications of computer vision require efficient tools for the computation of this 3D information. Stereo-vision is routinely used as a powerful technique to obtain the 3D outline of imaged objects from the corresponding 2D images; as a consequence, this approach provides only a sparse and partial description of the scene contents. On the other hand, structured light based reconstruction techniques can often compute the 3D surfaces of imaged objects with high accuracy; however, the resulting active range data fail to characterize the object edges. Thus, in order to benefit from the strengths of both acquisition techniques, we introduce in this paper promising approaches enabling complete 3D reconstruction based on the cooperation of two complementary acquisition and processing techniques, in our case stereoscopic and structured light based methods, providing two 3D data sets that describe, respectively, the outlines and surfaces of the imaged objects. Accordingly, we present the principles of three fusion techniques and compare them using evaluation criteria related to the nature of the workpiece and the type of application tackled. The proposed fusion methods rely on geometric characteristics of the workpiece, which favour the quality of the registration. Further, the results obtained demonstrate that the developed approaches are well adapted for 3D modeling of manufactured parts including free-form surfaces and, consequently, for quality control applications using these 3D reconstructions.
A transient performance method for CO2 removal with regenerable adsorbents
NASA Technical Reports Server (NTRS)
Hwang, K. C.
1972-01-01
A computer program is described which can be used to predict the transient performance of vacuum-desorbed sorbent beds for CO2 or water removal, and composite beds of two sorbents for simultaneous humidity control and CO2 removal. The program was written primarily for silica gel and molecular sieve inorganic sorbents, but can be used for a variety of adsorbent materials. Part 2 of this report describes a computer program which can be used to predict performance for multiple-bed CO2-removal sorbent systems. This program is an expanded version of the composite sorbent bed program described in Part 1.
Modeling and reduction with applications to semiconductor processing
NASA Astrophysics Data System (ADS)
Newman, Andrew Joseph
This thesis consists of several somewhat distinct but connected parts, with an underlying motivation in problems pertaining to control and optimization of semiconductor processing. The first part (Chapters 3 and 4) addresses problems in model reduction for nonlinear state-space control systems. In 1993, Scherpen generalized the balanced truncation method to the nonlinear setting. However, the Scherpen procedure is not easily computable and has not yet been applied in practice. We offer a method for computing a working approximation to the controllability energy function, one of the main objects involved in the method. Moreover, we show that for a class of second-order mechanical systems with dissipation, under certain conditions related to the dissipation, an exact formula for the controllability function can be derived. We then present an algorithm for a numerical implementation of the Morse-Palais lemma, which produces a local coordinate transformation under which a real-valued function with a non-degenerate critical point is quadratic on a neighborhood of the critical point. Application of the algorithm to the controllability function plays a key role in computing the balanced representation. We then apply our methods and algorithms to derive balanced realizations for nonlinear state-space models of two example mechanical systems: a simple pendulum and a double pendulum. The second part (Chapter 5) deals with modeling of rapid thermal chemical vapor deposition (RTCVD) for growth of silicon thin films, via first-principles and empirical analysis. We develop detailed process-equipment models and study the factors that influence deposition uniformity, such as temperature, pressure, and precursor gas flow rates, through analysis of experimental and simulation results. We demonstrate that temperature uniformity does not guarantee deposition thickness uniformity in a particular commercial RTCVD reactor of interest. In the third part (Chapter 6) we continue the modeling effort, specializing to a control system for RTCVD heat transfer. We then develop and apply ad-hoc versions of prominent model reduction approaches to derive reduced models and perform a comparative study.
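The linear computation that Scherpen's method generalizes can be written down directly; this sketch balances an assumed stable LTI system with SciPy's Lyapunov solvers (the thesis's nonlinear energy functions are not attempted here):

```python
# Classical linear balanced realization: solve the two Lyapunov equations
# for the Gramians, then build a transformation that makes both diagonal.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

# Assumed stable LTI system x' = Ax + Bu, y = Cx.
A = np.array([[-1.0, 0.5], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 2.0]])

# Controllability and observability Gramians: AP + PA' = -BB', A'Q + QA = -C'C.
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

R = cholesky(P, lower=True)                 # P = R R'
U, s, _ = svd(R.T @ Q @ R)                  # s = squared Hankel singular values
T = R @ U @ np.diag(s ** -0.25)             # balancing transformation
Tinv = np.linalg.inv(T)

Ab, Bb, Cb = Tinv @ A @ T, Tinv @ B, C @ T
print("Hankel singular values:", np.sqrt(s))
print("balanced Gramian diag:", np.diag(Tinv @ P @ Tinv.T).round(6))
```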
Advanced numerical methods for three dimensional two-phase flow calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toumi, I.; Caruge, D.
1997-07-01
This paper is devoted to new numerical methods developed for both one and three dimensional two-phase flow calculations. These methods are finite volume numerical methods and are based on the use of Approximate Riemann Solver concepts to define convective fluxes versus mean cell quantities. The first part of the paper presents the numerical method for a one dimensional hyperbolic two-fluid model including differential terms such as added mass and interface pressure. This numerical solution scheme makes use of the Riemann problem solution to define backward and forward differencing to approximate spatial derivatives. The construction of this approximate Riemann solver uses an extension of Roe's method that has been successfully used to solve gas dynamic equations. As far as the two-fluid model is hyperbolic, this numerical method seems very efficient for the numerical solution of two-phase flow problems. The scheme was applied both to shock tube problems and to standard tests for two-fluid computer codes. The second part describes the numerical method in the three dimensional case. The authors also discuss some improvements performed to obtain a fully implicit solution method that provides fast running steady state calculations. Such a scheme is now implemented in a thermal-hydraulic computer code devoted to 3-D steady-state and transient computations. Some results obtained for Pressurised Water Reactors concerning upper plenum calculations and a steady state flow in the core with rod bow effect evaluation are presented. In practice these new numerical methods have proved to be stable on non staggered grids and capable of generating accurate non oscillating solutions for two-phase flow calculations.
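A scalar analogue conveys the approximate-Riemann-solver idea: a Roe-averaged wave speed in the numerical flux for Burgers' equation (toy setting; the paper's two-fluid solver is far more involved):

```python
# Scalar Roe-type approximate Riemann solver for Burgers' equation.
import numpy as np

def roe_flux(ul, ur):
    """Roe numerical flux for f(u) = u^2/2."""
    fl, fr = 0.5 * ul**2, 0.5 * ur**2
    a = 0.5 * (ul + ur)                       # Roe-averaged wave speed
    return 0.5 * (fl + fr) - 0.5 * np.abs(a) * (ur - ul)

nx, dx, dt = 400, 1.0 / 400, 0.001
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.where(x < 0.5, 1.0, 0.0)               # shock-tube-like initial data

for _ in range(300):                          # advance to t = 0.3
    F = roe_flux(u, np.roll(u, -1))           # flux at each right interface
    u = u - dt / dx * (F - np.roll(F, 1))

print("shock position ~", x[np.argmin(np.abs(u - 0.5))])  # expect ~0.5 + 0.5*0.3
```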
Computational Methods for Structural Mechanics and Dynamics, part 1
NASA Technical Reports Server (NTRS)
Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)
1989-01-01
The structural analysis methods research has several goals. One goal is to develop analysis methods that are general. This goal of generality leads naturally to finite-element methods, but the research will also include other structural analysis methods. Another goal is that the methods be amenable to error analysis; that is, given a physical problem and a mathematical model of that problem, an analyst would like to know the probable error in predicting a given response quantity. The ultimate objective is to specify the error tolerances and to use automated logic to adjust the mathematical model or solution strategy to obtain that accuracy. A third goal is to develop structural analysis methods that can exploit parallel processing computers. The structural analysis methods research will focus initially on three types of problems: local/global nonlinear stress analysis, nonlinear transient dynamics, and tire modeling.
40 CFR 63.772 - Test methods, compliance procedures, and compliance demonstrations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Oil and Natural Gas Production Facilities § 63.772 Test methods, compliance procedures, and compliance...) A mixture of methane in air at a concentration less than 10,000 parts per million by volume. (5) An... methane and ethane) or total HAP (Ei, Eo) shall be computed using the equations and procedures specified...
40 CFR 63.772 - Test methods, compliance procedures, and compliance demonstrations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Oil and Natural Gas Production Facilities § 63.772 Test methods, compliance procedures, and compliance...) A mixture of methane in air at a concentration less than 10,000 parts per million by volume. (5) An... methane and ethane) or total HAP (Ei, Eo) shall be computed using the equations and procedures specified...
SUPIN: A Computational Tool for Supersonic Inlet Design
NASA Technical Reports Server (NTRS)
Slater, John W.
2016-01-01
A computational tool named SUPIN is being developed to design and analyze the aerodynamic performance of supersonic inlets. The inlet types available include the axisymmetric pitot, three-dimensional pitot, axisymmetric outward-turning, two-dimensional single-duct, two-dimensional bifurcated-duct, and streamline-traced inlets. The aerodynamic performance is characterized by the flow rates, total pressure recovery, and drag. The inlet flow-field is divided into parts to provide a framework for the geometry and aerodynamic modeling. Each part of the inlet is defined in terms of geometric factors. The low-fidelity aerodynamic analysis and design methods are based on analytic, empirical, and numerical methods which provide for quick design and analysis. SUPIN provides inlet geometry in the form of coordinates, surface angles, and cross-sectional areas. SUPIN can generate inlet surface grids and three-dimensional, structured volume grids for use with higher-fidelity computational fluid dynamics (CFD) analysis. Capabilities highlighted in this paper include the design and analysis of streamline-traced external-compression inlets, modeling of porous bleed, and the design and analysis of mixed-compression inlets. CFD analyses are used to verify the SUPIN results.
EMRlog method for computer security for electronic medical records with logic and data mining.
Martínez Monterrubio, Sergio Mauricio; Frausto Solis, Juan; Monroy Borja, Raúl
2015-01-01
The proper functioning of a hospital computer system is arduous work for managers and staff. However, inconsistent policies are frequent and can produce enormous problems, such as stolen information, frequent failures, and loss of the entire or part of the hospital data. This paper presents a new method named EMRlog for computer security systems in hospitals. EMRlog is focused on two kinds of security policies: directive and implemented policies. Security policies are applied to computer systems that handle huge amounts of information such as databases, applications, and medical records. Firstly, a syntactic verification step is applied by using predicate logic. Then data mining techniques are used to detect which security policies have really been implemented by the computer systems staff. Subsequently, consistency is verified in both kinds of policies; in addition these subsets are contrasted and validated. This is performed by an automatic theorem prover. Thus, many kinds of vulnerabilities can be removed to achieve a safer computer system.
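To make the verification step concrete, here is a toy Python sketch of checking a directive policy against an implemented one, assuming z3 as the automatic theorem prover. The policy atoms (remote_access, vpn_required, audit_log) are hypothetical illustrations, not the paper's actual encoding.

```python
from z3 import Bools, Implies, Not, Solver

# Directive policy: remote access requires a VPN, and audit logging is on.
remote_access, vpn_required, audit_log = Bools("remote_access vpn_required audit_log")
directive = [Implies(remote_access, vpn_required), audit_log]

# Implemented policy as mined from system configuration: remote access is
# enabled but the VPN requirement was switched off by staff.
implemented = [remote_access, Not(vpn_required)]

s = Solver()
s.add(*(directive + implemented))
print(s.check())   # unsat -> the implemented policy contradicts the directive
```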
EMRlog Method for Computer Security for Electronic Medical Records with Logic and Data Mining
Frausto Solis, Juan; Monroy Borja, Raúl
2015-01-01
The proper functioning of a hospital computer system is arduous work for managers and staff. However, inconsistent policies are frequent and can produce enormous problems, such as stolen information, frequent failures, and loss of the entire or part of the hospital data. This paper presents a new method named EMRlog for computer security systems in hospitals. EMRlog is focused on two kinds of security policies: directive and implemented policies. Security policies are applied to computer systems that handle huge amounts of information such as databases, applications, and medical records. Firstly, a syntactic verification step is applied by using predicate logic. Then data mining techniques are used to detect which security policies have really been implemented by the computer systems staff. Subsequently, consistency is verified in both kinds of policies; in addition these subsets are contrasted and validated. This is performed by an automatic theorem prover. Thus, many kinds of vulnerabilities can be removed to achieve a safer computer system. PMID:26495300
Ahmed, N; Zheng, Ziyi; Mueller, K
2012-12-01
Due to the inherent characteristics of the visualization process, most of the problems in this field have strong ties with human cognition and perception. This makes the human brain and sensory system the only truly appropriate platform for evaluating and fine-tuning a new visualization method or paradigm. However, getting humans to volunteer for these purposes has always been a significant obstacle, and thus this phase of the development process has traditionally formed a bottleneck, slowing down progress in visualization research. We propose to take advantage of the newly emerging field of Human Computation (HC) to overcome these challenges. HC promotes the idea that rather than considering humans as users of the computational system, they can be made part of a hybrid computational loop consisting of traditional computation resources and the human brain and sensory system. This approach is particularly successful in cases where part of the computational problem is considered intractable using known computer algorithms but is trivial to common-sense human knowledge. In this paper, we focus on HC from the perspective of solving visualization problems and also outline a framework by which humans can be easily enticed to volunteer their HC resources. We introduce a purpose-driven game titled "Disguise" which serves as a prototypical example of how the evaluation of visualization algorithms can be mapped into a fun and addictive activity, allowing this task to be accomplished in an extensive yet cost-effective way. Finally, we sketch out a framework that transcends the pure evaluation of existing visualization methods to the design of new ones.
NASA Technical Reports Server (NTRS)
Engquist, B. E. (Editor); Osher, S. (Editor); Somerville, R. C. J. (Editor)
1985-01-01
Papers are presented on such topics as the use of semi-Lagrangian advective schemes in meteorological modeling; computation with high-resolution upwind schemes for hyperbolic equations; dynamics of flame propagation in a turbulent field; a modified finite element method for solving the incompressible Navier-Stokes equations; computational fusion magnetohydrodynamics; and a nonoscillatory shock capturing scheme using flux-limited dissipation. Consideration is also given to the use of spectral techniques in numerical weather prediction; numerical methods for the incorporation of mountains in atmospheric models; techniques for the numerical simulation of large-scale eddies in geophysical fluid dynamics; high-resolution TVD schemes using flux limiters; upwind-difference methods for aerodynamic problems governed by the Euler equations; and an MHD model of the earth's magnetosphere.
Schwenke, M; Hennemuth, A; Fischer, B; Friman, O
2012-01-01
Phase-contrast MRI (PC MRI) can be used to assess blood flow dynamics noninvasively inside the human body. The acquired images can be reconstructed into flow vector fields. Traditionally, streamlines can be computed based on the vector fields to visualize flow patterns and particle trajectories. The traditional methods may give a false impression of precision, as they do not consider the measurement uncertainty in the PC MRI images. In our prior work, we incorporated the uncertainty of the measurement into the computation of particle trajectories. As a major part of the contribution, a novel numerical scheme for solving the anisotropic Fast Marching problem is presented. A computing time comparison to state-of-the-art methods is conducted on artificial tensor fields. A visual comparison of healthy to pathological blood flow patterns is given. The comparison shows that the novel anisotropic Fast Marching solver outperforms previous schemes in terms of computing time. The visual comparison of flow patterns directly visualizes large deviations of pathological flow from healthy flow. The novel anisotropic Fast Marching solver efficiently resolves even strongly anisotropic path costs. The visualization method enables the user to assess the uncertainty of particle trajectories derived from PC MRI images.
A new graph-based method for pairwise global network alignment
Klau, Gunnar W
2009-01-01
Background In addition to component-based comparative approaches, network alignments provide the means to study conserved network topology such as common pathways and more complex network motifs. Yet, unlike in classical sequence alignment, the comparison of networks becomes computationally more challenging, as most meaningful assumptions instantly lead to NP-hard problems. Most previous algorithmic work on network alignments is heuristic in nature. Results We introduce the graph-based maximum structural matching formulation for pairwise global network alignment. We relate the formulation to previous work and prove NP-hardness of the problem. Based on the new formulation we build upon recent results in computational structural biology and present a novel Lagrangian relaxation approach that, in combination with a branch-and-bound method, computes provably optimal network alignments. The Lagrangian algorithm alone is a powerful heuristic method, which produces solutions that are often near-optimal and – unlike those computed by pure heuristics – come with a quality guarantee. Conclusion Computational experiments on the alignment of protein-protein interaction networks and on the classification of metabolic subnetworks demonstrate that the new method is reasonably fast and has advantages over pure heuristics. Our software tool is freely available as part of the LISA library. PMID:19208162
Near-Optimal Guidance Method for Maximizing the Reachable Domain of Gliding Aircraft
NASA Astrophysics Data System (ADS)
Tsuchiya, Takeshi
This paper proposes a guidance method for gliding aircraft by using onboard computers to calculate a near-optimal trajectory in real-time, and thereby expanding the reachable domain. The results are applicable to advanced aircraft and future space transportation systems that require high safety. The calculation load of the optimal control problem that is used to maximize the reachable domain is too large for current computers to calculate in real-time. Thus the optimal control problem is divided into two problems: a gliding distance maximization problem in which the aircraft motion is limited to a vertical plane, and an optimal turning flight problem in a horizontal direction. First, the former problem is solved using a shooting method. It can be solved easily because its scale is smaller than that of the original problem, and because some of the features of the optimal solution are obtained in the first part of this paper. Next, in the latter problem, the optimal bank angle is computed from the solution of the former; this is an analytical computation, rather than an iterative computation. Finally, the reachable domain obtained from the proposed near-optimal guidance method is compared with that obtained from the original optimal control problem.
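The two-stage split above relies on solving the vertical-plane subproblem by shooting. As a generic illustration (not Tsuchiya's formulation, which shoots on the costates of the gliding-distance problem), here is a minimal Python shooting method for a toy two-point boundary value problem.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Shooting method: choose the unknown initial slope s so that the terminal
# boundary condition is met. Toy BVP: y'' = -y, y(0) = 0, y(pi/2) = 1.
def bvp_residual(s):
    sol = solve_ivp(lambda t, y: [y[1], -y[0]], (0.0, np.pi / 2), [0.0, s],
                    rtol=1e-10, atol=1e-10)
    return sol.y[0, -1] - 1.0        # mismatch at the right boundary

s_star = brentq(bvp_residual, 0.0, 5.0)   # root-find on the shooting parameter
print(s_star)                             # analytic answer: y = sin(t), y'(0) = 1
```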
Comparison of Computational Approaches for Rapid Aerodynamic Assessment of Small UAVs
NASA Technical Reports Server (NTRS)
Shafer, Theresa C.; Lynch, C. Eric; Viken, Sally A.; Favaregh, Noah; Zeune, Cale; Williams, Nathan; Dansie, Jonathan
2014-01-01
Computational Fluid Dynamics (CFD) methods were used to determine the basic aerodynamic, performance, and stability and control characteristics of the unmanned air vehicle (UAV), Kahu. Accurate and timely prediction of the aerodynamic characteristics of small UAVs is an essential part of military system acquisition and airworthiness evaluations. The forces and moments of the UAV were predicted using a variety of analytical methods for a range of configurations and conditions. The methods included Navier-Stokes (N-S) flow solvers (USM3D, Kestrel and Cobalt) that take days to set up and hours to converge on a single solution; potential flow methods (PMARC, LSAERO, and XFLR5) that take hours to set up and minutes to compute; empirical methods (Datcom) that involve table lookups and produce a solution quickly; and handbook calculations. A preliminary aerodynamic database can be developed very efficiently by using a combination of computational tools. The database can be generated with low-order and empirical methods in linear regions, then replacing or adjusting the data as predictions from higher-order methods are obtained. A comparison of results from all the data sources as well as experimental data obtained from a wind-tunnel test will be shown, and the methods will be evaluated on their utility during each portion of the flight envelope.
Computation of Temperature-Dependent Legendre Moments of a Double-Differential Elastic Cross Section
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arbanas, Goran; Dunn, Michael E; Larson, Nancy M
2011-01-01
A general expression for temperature-dependent Legendre moments of a double-differential elastic scattering cross section was derived by Ouisloumen and Sanchez [Nucl. Sci. Eng. 107, 189-200 (1991)]. Attempts to compute this expression are hindered by the three-fold nested integral, limiting its practical application to just the zeroth Legendre moment of isotropic scattering. It is shown that the two innermost integrals can be evaluated analytically, to all orders of Legendre moments and for anisotropic scattering, by recursive application of the integration-by-parts method. For this method to work, the anisotropic angular distribution in the center of mass is expressed as an expansion in Legendre polynomials. The first several Legendre moments of elastic scattering of neutrons on U-238 are computed at T=1000 K at incoming energy 6.5 eV for isotropic scattering in the center-of-mass frame. Legendre moments of the anisotropic angular distribution given via Blatt-Biedenharn coefficients are computed at ~1 keV. The results are in agreement with those computed by the Monte Carlo method.
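For a concrete handle on Legendre moments, here is a small Python sketch that projects an angular distribution f(mu) onto Legendre polynomials by Gauss-Legendre quadrature. The toy distribution and truncation order are illustrative; the paper's temperature-dependent kernel and analytic recursion are not reproduced.

```python
import numpy as np

# Legendre moments f_l = (2l+1)/2 * integral of f(mu) P_l(mu) over [-1, 1],
# so that f(mu) is recovered as sum_l f_l P_l(mu).
nodes, weights = np.polynomial.legendre.leggauss(64)

def legendre_moments(f, lmax):
    vals = f(nodes)
    moments = []
    for l in range(lmax + 1):
        P = np.polynomial.legendre.Legendre.basis(l)(nodes)
        moments.append((2 * l + 1) / 2 * np.sum(weights * vals * P))
    return np.array(moments)

f = lambda mu: 1.0 + 0.3 * mu        # mildly anisotropic toy distribution
print(legendre_moments(f, 3))        # expect [1.0, 0.3, 0.0, 0.0]
```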
Application of multi-grid method on the simulation of incremental forging processes
NASA Astrophysics Data System (ADS)
Ramadan, Mohamad; Khaled, Mahmoud; Fourment, Lionel
2016-10-01
Numerical simulation has become essential in manufacturing large parts by incremental forging processes. It is a splendid tool for revealing the underlying physical phenomena, but behind the scenes an expensive bill must be paid: computational time. Many techniques have therefore been developed to decrease the computational time of numerical simulation. The Multi-Grid method is a numerical procedure that reduces the computational time by performing the resolution of the system of equations on several meshes of decreasing size, which smooths both the low- and high-frequency components of the solution faster. In this paper a Multi-Grid method is applied to a cogging process in the software Forge 3. The study is carried out using an increasing number of degrees of freedom. The results show that calculation time is divided by two for a mesh of 39,000 nodes. The method is promising, especially if coupled with the Multi-Mesh method.
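As a concrete illustration of smoothing on grids of decreasing size, here is a minimal two-grid cycle for a 1D Poisson problem in Python. It is a textbook sketch, not the Forge 3 implementation: restriction is by simple injection and the coarse problem is only smoothed, which is enough to show the mechanism.

```python
import numpy as np

# Two-grid cycle for -u'' = f on [0,1] with homogeneous Dirichlet BCs.
def jacobi(u, f, h, sweeps=3, w=2/3):
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def two_grid(u, f, h):
    u = jacobi(u, f, h)                            # pre-smoothing on fine grid
    r = residual(u, f, h)[::2]                     # restrict residual by injection
    e = jacobi(np.zeros_like(r), r, 2 * h, sweeps=50)  # approximate coarse solve
    u[::2] += e                                    # prolongate the correction...
    u[1:-1:2] += 0.5 * (e[:-1] + e[1:])            # ...with linear interpolation
    return jacobi(u, f, h)                         # post-smoothing

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)                   # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(20):
    u = two_grid(u, f, h)
```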
spMC: an R-package for 3D lithological reconstructions based on spatial Markov chains
NASA Astrophysics Data System (ADS)
Sartore, Luca; Fabbri, Paolo; Gaetan, Carlo
2016-09-01
The paper presents the spatial Markov Chains (spMC) R-package and a case study of subsoil simulation/prediction at a plain site in Northeastern Italy. spMC is a fairly complete collection of advanced methods for data inspection; in addition, spMC implements Markov chain models to estimate experimental transition probabilities of categorical lithological data. Furthermore, simulation methods based on the best-known prediction methods (such as indicator kriging and co-kriging) are implemented in the spMC package. Other, more advanced methods are also available for simulation, e.g., path methods and Bayesian procedures that exploit maximum entropy. Since the spMC package was developed for intensive geostatistical computations, part of the code is implemented for parallel computation via OpenMP constructs. A final analysis of computational efficiency compares the simulation/prediction algorithms using different numbers of CPU cores, considering the example data set of the case study included in the package.
Design and Analysis Tools for Supersonic Inlets
NASA Technical Reports Server (NTRS)
Slater, John W.; Folk, Thomas C.
2009-01-01
Computational tools are being developed for the design and analysis of supersonic inlets. The objective is to update existing tools and provide design and low-order aerodynamic analysis capability for advanced inlet concepts. The Inlet Tools effort includes aspects of creating an electronic database of inlet design information, a document describing inlet design and analysis methods, a geometry model for describing the shape of inlets, and computer tools that implement the geometry model and methods. The geometry model has a set of basic inlet shapes that include pitot, two-dimensional, axisymmetric, and stream-traced inlet shapes. The inlet model divides the inlet flow field into parts that facilitate the design and analysis methods. The inlet geometry model constructs the inlet surfaces through the generation and transformation of planar entities based on key inlet design factors. Future efforts will focus on developing the inlet geometry model, the inlet design and analysis methods, and a Fortran 95 code to implement the model and methods. Other computational platforms, such as Java, will also be explored.
TinkerCell: modular CAD tool for synthetic biology.
Chandran, Deepak; Bergmann, Frank T; Sauro, Herbert M
2009-10-29
Synthetic biology brings together concepts and techniques from engineering and biology. In this field, computer-aided design (CAD) is necessary in order to bridge the gap between computational modeling and biological data. Using a CAD application, it would be possible to construct models using available biological "parts" and directly generate the DNA sequence that represents the model, thus increasing the efficiency of design and construction of synthetic networks. An application named TinkerCell has been developed in order to serve as a CAD tool for synthetic biology. TinkerCell is a visual modeling tool that supports a hierarchy of biological parts. Each part in this hierarchy consists of a set of attributes that define the part, such as sequence or rate constants. Models that are constructed using these parts can be analyzed using various third-party C and Python programs that are hosted by TinkerCell via an extensive C and Python application programming interface (API). TinkerCell supports the notion of a module, which are networks with interfaces. Such modules can be connected to each other, forming larger modular networks. TinkerCell is a free and open-source project under the Berkeley Software Distribution license. Downloads, documentation, and tutorials are available at http://www.tinkercell.com. An ideal CAD application for engineering biological systems would provide features such as: building and simulating networks, analyzing robustness of networks, and searching databases for components that meet the design criteria. At the current state of synthetic biology, there are no established methods for measuring robustness or identifying components that fit a design. The same is true for databases of biological parts. TinkerCell's flexible modeling framework allows it to cope with changes in the field. Such changes may involve the way parts are characterized or the way synthetic networks are modeled and analyzed computationally. TinkerCell can readily accept third-party algorithms, allowing it to serve as a platform for testing different methods relevant to synthetic biology.
TinkerCell: modular CAD tool for synthetic biology
Chandran, Deepak; Bergmann, Frank T; Sauro, Herbert M
2009-01-01
Background Synthetic biology brings together concepts and techniques from engineering and biology. In this field, computer-aided design (CAD) is necessary in order to bridge the gap between computational modeling and biological data. Using a CAD application, it would be possible to construct models using available biological "parts" and directly generate the DNA sequence that represents the model, thus increasing the efficiency of design and construction of synthetic networks. Results An application named TinkerCell has been developed in order to serve as a CAD tool for synthetic biology. TinkerCell is a visual modeling tool that supports a hierarchy of biological parts. Each part in this hierarchy consists of a set of attributes that define the part, such as sequence or rate constants. Models that are constructed using these parts can be analyzed using various third-party C and Python programs that are hosted by TinkerCell via an extensive C and Python application programming interface (API). TinkerCell supports the notion of a module, which are networks with interfaces. Such modules can be connected to each other, forming larger modular networks. TinkerCell is a free and open-source project under the Berkeley Software Distribution license. Downloads, documentation, and tutorials are available at http://www.tinkercell.com. Conclusion An ideal CAD application for engineering biological systems would provide features such as: building and simulating networks, analyzing robustness of networks, and searching databases for components that meet the design criteria. At the current state of synthetic biology, there are no established methods for measuring robustness or identifying components that fit a design. The same is true for databases of biological parts. TinkerCell's flexible modeling framework allows it to cope with changes in the field. Such changes may involve the way parts are characterized or the way synthetic networks are modeled and analyzed computationally. TinkerCell can readily accept third-party algorithms, allowing it to serve as a platform for testing different methods relevant to synthetic biology. PMID:19874625
[Online therapies - what is known about their functionality].
Stenberg, Jan-Henry; Joutsenniemi, Kaisla; Holi, Matti
2015-01-01
Online therapies are partly automated therapies in which psychotherapeutic content is complemented with computer-aided presentational and educational content, with a therapist supporting the patient's progress. These therapeutic programs incorporate methods that have proven effective, such as remodeling of thoughts, activation of behavior and exposure, empathy, strengthening of the cooperative relationship and motivation, and general support for self-reflection. For instance, online therapies already constitute part of the Finnish treatment guidelines on depression. Online therapies are available throughout Finland for the essential psychiatric illnesses.
Bayesian Treed Calibration: An Application to Carbon Capture With AX Sorbent
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konomi, Bledar A.; Karagiannis, Georgios; Lai, Kevin
2017-01-02
In cases where field or experimental measurements are not available, computer models can represent real physical or engineering systems and reproduce their outcomes. They are usually calibrated in light of experimental data to create a better representation of the real system. Statistical methods for calibration and prediction, based on Gaussian processes, have been especially important when the computer models are expensive and experimental data limited. In this paper, we develop Bayesian treed calibration (BTC) as an extension of standard Gaussian process calibration methods to deal with non-stationary computer models and/or their discrepancy from the field (or experimental) data. Our proposed method partitions both the calibration and observable input space, based on a binary tree partitioning, into sub-regions where existing model calibration methods can be applied to connect a computer model with the real system. The estimation of the parameters in the proposed model is carried out using Markov chain Monte Carlo (MCMC) computational techniques. Different strategies have been applied to improve mixing. We illustrate our method in two artificial examples and a real application that concerns the capture of carbon dioxide with AX amine-based sorbents. The source code and the examples analyzed in this paper are available as part of the supplementary materials.
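The building block inside each tree partition is an ordinary Gaussian process emulator. A minimal scikit-learn sketch of that building block (not the BTC tree machinery or its MCMC estimation) might look as follows; the toy "computer model" and kernel choices are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Emulate an expensive "computer model" from 20 runs; BTC would fit a GP
# like this within each sub-region found by the binary tree partition.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(20, 1))                  # design points
y = np.sin(6.0 * X[:, 0]) + 0.05 * rng.normal(size=20)   # noisy model output

gp = GaussianProcessRegressor(kernel=RBF(0.2) + WhiteKernel(1e-3),
                              normalize_y=True).fit(X, y)
mean, std = gp.predict(np.array([[0.5]]), return_std=True)
print(mean, std)   # posterior mean and uncertainty at a new input
```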
The development of a revised version of multi-center molecular Ornstein-Zernike equation
NASA Astrophysics Data System (ADS)
Kido, Kentaro; Yokogawa, Daisuke; Sato, Hirofumi
2012-04-01
Ornstein-Zernike (OZ)-type theory is a powerful tool to obtain the 3-dimensional solvent distribution around a solute molecule. Recently, we proposed the multi-center molecular OZ method, which is suitable for parallel computing of 3D solvation structure. The distribution function in this method consists of two components, namely reference and residue parts. Several types of the function were examined as the reference part to investigate the numerical robustness of the method. As benchmarks, the method is applied to water, to benzene in aqueous solution, and to a single-walled carbon nanotube in chloroform solution. The results indicate that full parallelization is achieved by utilizing the newly proposed reference functions.
Shamir, Lior; Yerby, Carol; Simpson, Robert; von Benda-Beckmann, Alexander M; Tyack, Peter; Samarra, Filipa; Miller, Patrick; Wallin, John
2014-02-01
Vocal communication is a primary communication method of killer and pilot whales, and is used for transmitting a broad range of messages and information over short and long distances. The large variation in the call types of these species makes it challenging to categorize them. In this study, sounds recorded by audio sensors carried by ten killer whales and eight pilot whales close to the coasts of Norway, Iceland, and the Bahamas were analyzed using computer methods and citizen scientists as part of the Whale FM project. Results show that the computer analysis automatically separated the killer whales into Icelandic and Norwegian whales, and the pilot whales into Norwegian long-finned and Bahamas short-finned pilot whales, indicating that at least some whales from these locations have different acoustic repertoires that can be sensed by the computer analysis. The citizen science analysis was also able to separate the whales by location from their sounds, but the separation was somewhat less accurate compared to the computer method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carpentier, J.L.; Di Bono, P.J.; Tournebise, P.J.
The efficient bounding method for DC contingency analysis is improved using reciprocity properties. Knowing the consequences of the outage of a branch, these properties provide the consequences on that branch of various kinds of outages. This is used in order to reduce computation times and to get rid of some difficulties, such as those occurring when a branch flow is close to its limit before outage. Compensation, sparse vector, sparse inverse and bounding techniques are also used. A program has been implemented for single branch outages and tested on the actual French EHV 650-bus network. Computation times are 60% of those of the efficient bounding method. The relevant algorithm is described in detail in the first part of this paper. In the second part, reciprocity properties and bounding formulas are extended for multiple branch outages and for multiple generator or load outages. An algorithm is proposed in order to handle all these cases simultaneously.
2D modeling of direct laser metal deposition process using a finite particle method
NASA Astrophysics Data System (ADS)
Anedaf, T.; Abbès, B.; Abbès, F.; Li, Y. M.
2018-05-01
Direct laser metal deposition is one of the material additive manufacturing processes used to produce complex metallic parts. A thorough understanding of the underlying physical phenomena is required to obtain high-quality parts. In this work, a mathematical model is presented to simulate the coaxial laser direct deposition process, taking into account mass addition, heat transfer, and fluid flow with a free surface and melting. The fluid flow in the melt pool, together with mass and energy balances, is solved using the Computational Fluid Dynamics (CFD) software NOGRID-points, based on the meshless Finite Pointset Method (FPM). The basis of the computations is a point cloud, which represents the continuum fluid domain. Each finite point carries all fluid information (density, velocity, pressure and temperature). The dynamic shape of the molten zone is explicitly described by the point cloud. The proposed model is used to simulate a single-layer cladding.
Camera calibration method of binocular stereo vision based on OpenCV
NASA Astrophysics Data System (ADS)
Zhong, Wanzhen; Dong, Xiaona
2015-10-01
Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of spatial objects. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the process, obtaining higher precision and efficiency. First, the camera model in OpenCV and an algorithm of camera calibration are presented, especially considering the influence of camera lens radial distortion and decentering distortion. Then, the camera calibration procedure is designed to compute the camera parameters and calculate calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners are also used in this part. Finally, results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results can reach the requirement of robot binocular stereo vision.
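A minimal sketch of the standard OpenCV calibration pipeline for one camera follows, assuming hypothetical image files of a checkerboard with 48 inner corners (8x6); the paper's own procedure and parameters may differ, and a binocular setup would additionally call cv2.stereoCalibrate.

```python
import cv2
import numpy as np

pattern = (8, 6)    # 48 inner corners, assumed board layout
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for fname in ["board01.png", "board02.png"]:       # hypothetical image files
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, pattern)
    if ok:
        # refine corner locations to sub-pixel accuracy
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
        obj_pts.append(objp)
        img_pts.append(corners)

# Solve for intrinsics K and distortion (radial + tangential) coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```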
User's manual for the BNW-II optimization code for dry/wet-cooled power plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Braun, D.J.; Bamberger, J.A.; Braun, D.J.
1978-05-01
The User's Manual describes how to operate BNW-II, a computer code developed by the Pacific Northwest Laboratory (PNL) as a part of its activities under the Department of Energy (DOE) Dry Cooling Enhancement Program. The computer program offers a comprehensive method of evaluating the cost savings potential of dry/wet-cooled heat rejection systems. Going beyond simple "figure-of-merit" cooling tower optimization, this method includes such items as the cost of annual replacement capacity, and the optimum split between plant scale-up and replacement capacity, as well as the purchase and operating costs of all major heat rejection components. Hence the BNW-II code is a useful tool for determining potential cost savings of new dry/wet surfaces, new piping, or other components as part of an optimized system for a dry/wet-cooled plant.
Miller, Kirk A.; Mason, John P.
2000-01-01
The water-surface profile and flood boundaries for the computed 100-year flood were determined for a part of the lower Salt River in Lincoln County, Wyoming. Channel cross-section data were provided by Lincoln County. Cross-section data for bridges and other structures were collected and compiled by the U.S. Geological Survey. Roughness coefficients ranged from 0.034 to 0.100. The 100-year flood discharge, computed using standard methods, ranged from 5,170 to 4,120 cubic feet per second through the study reach and was adjusted in proportion to contributing drainage area. Water-surface elevations were determined by the standard step-backwater method. Flood boundaries were plotted on digital basemaps.
[Research and application of computer-aided technology in restoration of maxillary defect].
Cheng, Xiaosheng; Liao, Wenhe; Hu, Qingang; Wang, Qian; Dai, Ning
2008-08-01
This paper presents a new method of designing a restoration model for maxillectomy defects using computer-aided technology. Firstly, a 3D maxillectomy triangle mesh model is constructed from helical CT data. Secondly, the triangle mesh model is transformed into an initial computer-aided design (CAD) model of the maxillectomy through reverse engineering software. Thirdly, the 3D virtual restoration model of the maxillary defect is obtained after designing and adjusting the initial CAD model through CAD software according to the patient's practical condition. Therefore, the 3D virtual restoration can be fitted very well to the broken part of the maxilla. The exported design data can be manufactured using rapid prototyping technology and foundry technology. Finally, the results proved that this method is effective and feasible.
Computational problems and signal processing in SETI
NASA Technical Reports Server (NTRS)
Deans, Stanley R.; Cullers, D. K.; Stauduhar, Richard
1991-01-01
The Search for Extraterrestrial Intelligence (SETI), currently being planned at NASA, will require that an enormous amount of data (on the order of 10^11 distinct signal paths for a typical observation) be analyzed in real time by special-purpose hardware. Even though the SETI system design is not based on maximum entropy and Bayesian methods (partly due to the real-time processing constraint), it is expected that enough data will be saved to be able to apply these and other methods off line where computational complexity is not an overriding issue. Interesting computational problems that relate directly to the system design for processing such an enormous amount of data have emerged. Some of these problems are discussed, along with the current status on their solution.
A method of evaluating crown fuels in forest stands.
Rodney W. Sando; Charles H. Wick
1972-01-01
A method of describing the crown fuels in a forest fuel complex based on crown weight and crown volume was developed. A computer program is an integral part of the method. Crown weight data are presented in graphical form and are separated into hardwood and coniferous fuels. The fuel complex is described using total crown weight per acre, mean height to the base of...
NASA Astrophysics Data System (ADS)
Viana, Ilisio; Orteu, Jean-José; Cornille, Nicolas; Bugarin, Florian
2015-11-01
We focus on quality control of mechanical parts in aeronautical context using a single pan-tilt-zoom (PTZ) camera and a computer-aided design (CAD) model of the mechanical part. We use the CAD model to create a theoretical image of the element to be checked, which is further matched with the sensed image of the element to be inspected, using a graph theory-based approach. The matching is carried out in two stages. First, the two images are used to create two attributed graphs representing the primitives (ellipses and line segments) in the images. In the second stage, the graphs are matched using a similarity function built from the primitive parameters. The similarity scores of the matching are injected in the edges of a bipartite graph. A best-match-search procedure in the bipartite graph guarantees the uniqueness of the match solution. The method achieves promising performance in tests with synthetic data including missing elements, displaced elements, size changes, and combinations of these cases. The results open good prospects for using the method with realistic data.
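The best-match search on a bipartite graph of similarity scores can be made concrete with an assignment solver. A minimal Python sketch, a generic stand-in for the paper's procedure, using scipy's Hungarian-algorithm implementation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Similarity matrix between CAD primitives (rows) and sensed primitives
# (columns); values are illustrative scores in [0, 1].
S = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.8, 0.3],
              [0.0, 0.4, 0.7]])

# Maximizing total similarity yields a unique one-to-one matching.
rows, cols = linear_sum_assignment(-S)
print(list(zip(rows, cols)), S[rows, cols].sum())
```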
NASA Astrophysics Data System (ADS)
Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing; Walker, Paul D.
2017-02-01
This paper proposes an uncertain modelling and computational method to analyze dynamic responses of rigid-flexible multibody systems (or mechanisms) with random geometry and material properties. Firstly, the deterministic model for the rigid-flexible multibody system is built with the absolute nodal coordinate formulation (ANCF), in which the flexible parts are modeled using ANCF elements, while the rigid parts are described by ANCF reference nodes (ANCF-RNs). Secondly, uncertainty in the geometry of the rigid parts is expressed as uniform random variables, while uncertainty in the material properties of the flexible parts is modeled as a continuous random field, which is further discretized into Gaussian random variables using a series expansion method. Finally, a non-intrusive numerical method is developed to solve the dynamic equations of systems involving both types of random variables, which systematically integrates the deterministic generalized-α solver with Latin Hypercube sampling (LHS) and Polynomial Chaos (PC) expansion. The benchmark slider-crank mechanism is used as a numerical example to demonstrate the characteristics of the proposed method.
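The non-intrusive part of such a scheme is easy to sketch: draw space-filling samples of the random inputs, run the deterministic solver once per sample, and post-process the outputs. A minimal Python illustration of the sampling step, with made-up tolerance and material numbers (the ANCF solver itself is only a placeholder):

```python
import numpy as np
from scipy.stats import qmc, norm

# Latin Hypercube samples of two uncertain inputs: a uniform geometric
# tolerance and one Gaussian coefficient of a discretized random field.
sampler = qmc.LatinHypercube(d=2, seed=0)
u = sampler.random(n=64)                              # 64 samples in [0, 1)^2

length = 0.99 + 0.02 * u[:, 0]                        # uniform on [0.99, 1.01] (hypothetical)
modulus = 200e9 * (1.0 + 0.05 * norm.ppf(u[:, 1]))    # Gaussian field coefficient (hypothetical)

# Each sample drives one deterministic multibody run; a PC expansion would
# then be fitted to the collected responses.
responses = [None for _ in zip(length, modulus)]      # deterministic solver calls go here
```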
NASA Technical Reports Server (NTRS)
Seidman, T. I.; Munteanu, M. J.
1979-01-01
The relationships among a variety of general computational methods (and variants) for treating ill-posed problems, such as geophysical inverse problems, are considered. Differences in approach and interpretation based on varying assumptions as to, e.g., the nature of measurement uncertainties are discussed, along with the factors to be considered in selecting an approach. The reliability of the results of such computation is addressed.
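One standard way to stabilize such ill-posed computations is Tikhonov regularization. A small Python sketch (generic, not from the paper) with an ill-conditioned smoothing operator as the forward model:

```python
import numpy as np

# Tikhonov-regularized least squares: minimize ||Ax - b||^2 + lam * ||x||^2,
# solved via the normal equations (A^T A + lam I) x = A^T b.
def tikhonov(A, b, lam):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Ill-conditioned toy forward operator: a narrow Gaussian smoothing kernel.
x = np.linspace(0.0, 1.0, 50)
A = np.exp(-100.0 * (x[:, None] - x[None, :])**2)
true = np.sin(2 * np.pi * x)
b = A @ true + 1e-3 * np.random.default_rng(0).normal(size=50)

rec = tikhonov(A, b, lam=1e-4)   # regularized reconstruction of `true`
```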
ERIC Educational Resources Information Center
Gurevich, Irina; Gurev, Dvora
2012-01-01
In the current study we follow the development of the pedagogical procedure for the course "Constructions in Geometry" that resulted from using dynamic geometry software (DGS), where the computer became an integral part of the educational process. Furthermore, we examine the influence of integrating DGS into the course on students' achievement and…
Automation of the CFD Process on Distributed Computing Systems
NASA Technical Reports Server (NTRS)
Tejnil, Ed; Gee, Ken; Rizk, Yehia M.
2000-01-01
A script system was developed to automate and streamline portions of the CFD process. The system was designed to facilitate the use of CFD flow solvers on supercomputer and workstation platforms within a parametric design event. Integrating solver pre- and postprocessing phases, the fully automated ADTT script system marshalled the required input data, submitted the jobs to available computational resources, and processed the resulting output data. A number of codes were incorporated into the script system, which itself was part of a larger integrated design environment software package. The IDE and scripts were used in a design event involving a wind tunnel test. This experience highlighted the need for efficient data and resource management in all parts of the CFD process. To facilitate the use of CFD methods to perform parametric design studies, the script system was developed using UNIX shell and Perl languages. The goal of the work was to minimize the user interaction required to generate the data necessary to fill a parametric design space. The scripts wrote out the required input files for the user-specified flow solver, transferred all necessary input files to the computational resource, submitted and tracked the jobs using the resource queuing structure, and retrieved and post-processed the resulting dataset. For computational resources that did not run queueing software, the script system established its own simple first-in-first-out queueing structure to manage the workload. A variety of flow solvers were incorporated in the script system, including INS2D, PMARC, TIGER and GASP. Adapting the script system to a new flow solver was made easier through the use of object-oriented programming methods. The script system was incorporated into an ADTT integrated design environment and evaluated as part of a wind tunnel experiment. The system successfully generated the data required to fill the desired parametric design space. This stressed the computational resources required to compute and store the information. The scripts were continually modified to improve the utilization of the computational resources and reduce the likelihood of data loss due to failures. An ad-hoc file server was created to manage the large amount of data being generated as part of the design event. Files were stored and retrieved as needed to create new jobs and analyze the results. Additional information is contained in the original.
NASA Astrophysics Data System (ADS)
Mueller, David S.
2013-04-01
Selection of the appropriate extrapolation methods for computing the discharge in the unmeasured top and bottom parts of a moving-boat acoustic Doppler current profiler (ADCP) streamflow measurement is critical to the total discharge computation. The software tool, extrap, combines normalized velocity profiles from the entire cross section and multiple transects to determine a mean profile for the measurement. The use of an exponent derived from normalized data from the entire cross section is shown to be valid for application of the power velocity distribution law in the computation of the unmeasured discharge in a cross section. Selected statistics are combined with empirically derived criteria to automatically select the appropriate extrapolation methods. A graphical user interface (GUI) provides the user tools to visually evaluate the automatically selected extrapolation methods and manually change them, as necessary. The sensitivity of the total discharge to available extrapolation methods is presented in the GUI. Use of extrap by field hydrographers has demonstrated that extrap is a more accurate and efficient method of determining the appropriate extrapolation methods compared with tools currently (2012) provided in the ADCP manufacturers' software.
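The power-law idea is easy to demonstrate. The Python sketch below fits u = a*z^b to a simulated measured portion of the water column and integrates the fit over the unmeasured top and bottom; it is a schematic of power-fit extrapolation, not the extrap tool's actual algorithm or defaults.

```python
import numpy as np

# Simulated measured velocities over normalized heights 0.2..0.8 above bed
# (an ADCP cannot measure near the transducer or the bed).
z = np.linspace(0.2, 0.8, 20)
u = 1.1 * z**(1 / 6) + 0.01 * np.random.default_rng(0).normal(size=20)

# Fit the power law in log space: log u = b * log z + log a.
b, loga = np.polyfit(np.log(z), np.log(u), 1)
a = np.exp(loga)

# Per-unit-width discharge in the unmeasured zones: integral of a*z^b.
q_bottom = a / (b + 1) * 0.2**(b + 1)
q_top = a / (b + 1) * (1.0**(b + 1) - 0.8**(b + 1))
print(b, q_bottom, q_top)
```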
12 CFR Appendix G to Part 226 - Open-End Model Forms and Clauses
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 3 2013-01-01 2013-01-01 false Open-End Model Forms and Clauses G Appendix G... RESERVE SYSTEM (CONTINUED) TRUTH IN LENDING (REGULATION Z) Pt. 226, App. G Appendix G to Part 226—Open-End Model Forms and Clauses G-1Balance Computation Methods Model Clauses (Home-equity Plans) (§§ 226.6 and...
The Potential of Computer Controlled Optimizing Equipment in the Wooden Furniture Industry
R. Edward Thomas; Urs Buehlmann; Urs Buehlmann
2003-01-01
The goal of the wooden furniture industry is to convert lumber into parts by using the most efficient and cost-effective processing methods. The key steps in processing lumber are removing the regions that contain unacceptable defects or character marks and cutting the remaining areas to the widths and lengths of needed parts. Such equipment has been used in furniture...
A Virtual Learning Environment for Part-Time MASW Students: An Evaluation of the WebCT
ERIC Educational Resources Information Center
Chan, Charles C.; Tsui, Ming-sum; Chan, Mandy Y. C.; Hong, Joe H.
2008-01-01
This study aims to evaluate the perception of a cohort of social workers studying for a part-time master's program in social work in using the popular Web-based learning platform--World Wide Web Course Tools (WebCT)--as a complementary method of teaching and learning. It was noted that the social work profession began incorporating computer technology…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dulat, Falko; Lionetti, Simone; Mistlberger, Bernhard
We present an analytic computation of the Higgs production cross section in the gluon fusion channel, which is differential in the components of the Higgs momentum and inclusive in the associated partonic radiation through NNLO in perturbative QCD. Our computation includes the necessary higher-order terms in the dimensional regulator beyond the finite part that are required for renormalisation and collinear factorisation at N3LO. We outline in detail the computational methods which we employ. We present numerical predictions for realistic final state observables, specifically distributions for the decay products of the Higgs boson in the γγ decay channel.
NASA Astrophysics Data System (ADS)
Somerville, W. R. C.; Auguié, B.; Le Ru, E. C.
2013-07-01
We propose, describe, and demonstrate a new numerically stable implementation of the extended boundary-condition method (EBCM) to compute the T-matrix for electromagnetic scattering by spheroidal particles. Our approach relies on the fact that for many of the EBCM integrals in the special case of spheroids, a leading part of the integrand integrates exactly to zero, which causes catastrophic loss of precision in numerical computations. This feature was in fact first pointed out by Waterman in the context of acoustic scattering and electromagnetic scattering by infinite cylinders. We have recently studied it in detail in the case of electromagnetic scattering by particles. Based on this study, the principle of our new implementation is therefore to compute all the integrands without the problematic part to avoid the primary cause of loss of precision. Particular attention is also given to choosing the algorithms that minimise loss of precision in every step of the method, without compromising on speed. We show that the resulting implementation can efficiently compute in double precision arithmetic the T-matrix and therefore optical properties of spheroidal particles to a high precision, often down to a remarkable accuracy (10^-10 relative error), over a wide range of parameters that are typically considered problematic. We discuss examples such as high-aspect ratio metallic nanorods and large size parameter (≈35) dielectric particles, which had been previously modelled only using quadruple-precision arithmetic codes.
In silico pharmacology for drug discovery: applications to targets and beyond
Ekins, S; Mestres, J; Testa, B
2007-01-01
Computational (in silico) methods have been developed and widely applied to pharmacology hypothesis development and testing. These in silico methods include databases, quantitative structure-activity relationships, similarity searching, pharmacophores, homology models and other molecular modeling, machine learning, data mining, network analysis tools and data analysis tools that use a computer. Such methods have seen frequent use in the discovery and optimization of novel molecules with affinity to a target, the clarification of absorption, distribution, metabolism, excretion and toxicity properties as well as physicochemical characterization. The first part of this review discussed the methods that have been used for virtual ligand and target-based screening and profiling to predict biological activity. The aim of this second part of the review is to illustrate some of the varied applications of in silico methods for pharmacology in terms of the targets addressed. We will also discuss some of the advantages and disadvantages of in silico methods with respect to in vitro and in vivo methods for pharmacology research. Our conclusion is that the in silico pharmacology paradigm is ongoing and presents a rich array of opportunities that will assist in expediting the discovery of new targets, and ultimately lead to compounds with predicted biological activity for these novel targets. PMID:17549046
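As one concrete instance of the similarity-searching methods named above, here is a small RDKit sketch ranking a toy library by Tanimoto similarity of Morgan fingerprints; the molecules and parameter choices are illustrative only.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# Query molecule (aspirin) and a tiny illustrative library.
query = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")
library = ["CC(=O)Nc1ccc(O)cc1",   # paracetamol
           "OC(=O)c1ccccc1O"]      # salicylic acid

# Morgan (circular) fingerprints, radius 2, hashed to 2048 bits.
qfp = AllChem.GetMorganFingerprintAsBitVect(query, 2, nBits=2048)
for smi in library:
    fp = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smi), 2, nBits=2048)
    print(smi, round(DataStructs.TanimotoSimilarity(qfp, fp), 3))
```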
QMC Goes BOINC: Using Public Resource Computing to Perform Quantum Monte Carlo Calculations
NASA Astrophysics Data System (ADS)
Rainey, Cameron; Engelhardt, Larry; Schröder, Christian; Hilbig, Thomas
2008-10-01
Theoretical modeling of magnetic molecules traditionally involves the diagonalization of quantum Hamiltonian matrices. However, as the complexity of these molecules increases, the matrices become so large that this process becomes unusable. An additional challenge to this modeling is that many repetitive calculations must be performed, further increasing the need for computing power. Both of these obstacles can be overcome by using a quantum Monte Carlo (QMC) method and a distributed computing project. We have recently implemented a QMC method within the Spinhenge@home project, which is a Public Resource Computing (PRC) project where private citizens allow part-time usage of their PCs for scientific computing. The use of PRC for scientific computing will be described in detail, as well as how you can contribute to the project. See, e.g., L. Engelhardt et al., Angew. Chem. Int. Ed. 47, 924 (2008); C. Schröder, in Distributed & Grid Computing - Science Made Transparent for Everyone. Principles, Applications and Supporting Communities. (Weber, M.H.W., ed., 2008). Project URL: http://spin.fh-bielefeld.de
Computer simulation of position and maximum of linear polarization of asteroids
NASA Astrophysics Data System (ADS)
Petrov, Dmitry; Kiselev, Nikolai
2018-01-01
The ground-based observations of near-Earth asteroids at large phase angles have shown a notable feature: the linear polarization maximum position of the high-albedo E-type asteroids is shifted markedly towards smaller phase angles (αmax ≈ 70°) with respect to that for the moderate-albedo S-type asteroids (αmax ≈ 110°), weakly depending on the wavelength. To study this phenomenon, a theoretical approach and the modified T-matrix method (the so-called Sh-matrices method) were used. The theoretical approach was devoted to finding the values of αmax corresponding to the maximal values of positive polarization Pmax. Computer simulations were performed for an ensemble of random Gaussian particles, whose scattering properties were averaged over different particle orientations and size parameters in the range X = 2.0 ... 21.0, with the power-law distribution X^-k, where k = 3.6. The real part of the refractive index m_r took the values 1.5, 1.6, and 1.7; the imaginary part varied from m_i = 0.0 to m_i = 0.5. Both the theoretical approach and the computer simulations showed that the value of αmax strongly depends on the refractive index: the increase of m_i leads to increased αmax and Pmax, while the increase of the real part of the refractive index reduces Pmax. Whereas high-albedo E-type asteroids have smaller values of m_i than S-type asteroids, we can conclude that αmax of E-type asteroids should be smaller than for S-type ones. This is in qualitative agreement with the observed effect in asteroids.
NASA Astrophysics Data System (ADS)
Whitford, Dennis J.
2002-05-01
This paper, the second of a two-part series, introduces undergraduate students to ocean wave forecasting using interactive computer-generated visualization and animation. Verbal descriptions and two-dimensional illustrations are often insufficient for student comprehension. Fortunately, the introduction of computers in the geosciences provides a tool for addressing this problem. Computer-generated visualization and animation, accompanied by oral explanation, have been shown to be a pedagogical improvement over more traditional methods of instruction. Cartographic science and other disciplines using geographical information systems have been especially aggressive in pioneering the use of visualization and animation, whereas oceanography has not. This paper will focus on the teaching of ocean swell wave forecasting, often considered a difficult oceanographic topic due to the mathematics and physics required, as well as its interdependence on time and space. Several MATLAB® programs are described and offered to visualize and animate group speed, frequency dispersion, angular dispersion, propagation, and wave height forecasting of deep-water ocean swell waves. Teachers may use these interactive visualizations and animations without requiring an extensive background in computer programming.
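The physics being animated reduces to the deep-water dispersion relation ω² = gk, which gives phase speed c = g/ω and group speed c_g = c/2. A short Python sketch (the paper's programs are MATLAB; the distance here is illustrative) computing swell arrival times from group speed:

```python
import numpy as np

g = 9.81
T = np.array([8.0, 12.0, 16.0])   # wave periods in seconds
omega = 2 * np.pi / T
c = g / omega                      # deep-water phase speed (m/s)
cg = c / 2                         # deep-water group speed (m/s)

distance = 2.0e6                   # 2000 km from the storm (hypothetical)
hours = distance / cg / 3600.0
for Ti, h in zip(T, hours):
    # longer-period swell travels faster and arrives first
    print(f"T = {Ti:4.1f} s swell arrives after {h:6.1f} h")
```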
Calude, Cristian S; Păun, Gheorghe
2004-11-01
Are there 'biologically computing agents' capable of computing Turing-uncomputable functions? It is perhaps tempting to dismiss this question with a negative answer. Quite the opposite: for the first time in the literature on molecular computing, we contend that the answer is not theoretically negative. Our results will be formulated in the language of membrane computing (P systems). Some mathematical results presented here are interesting in themselves. In contrast with most speed-up methods, which are based on non-determinism, our results rest upon universality results proved for deterministic P systems. These results will be used for building "accelerated P systems". In contrast with the case of Turing machines, acceleration is a part of the hardware (not a quality of the environment) and is realised either by decreasing the size of "reactors" or by speeding up the communication channels. Consequently, two acceleration postulates of biological inspiration are introduced; each of them poses specific questions to biology. Finally, in a more speculative part of the paper, we deal with Turing non-computable activity of the brain and possible forms of (extraterrestrial) intelligence.
Determination of discharge during pulsating flow
Thompson, T.H.
1968-01-01
Pulsating flow in an open channel is a manifestation of unstable-flow conditions in which a series of translatory waves of perceptible magnitude develops and moves rapidly downstream. Pulsating flow is a matter of concern in the design and operation of steep-gradient channels. If it should occur at high stages in a channel designed for stable flow, the capacity of the channel may be inadequate at a discharge that is much smaller than that for which the channel was designed. If the overriding translatory wave carries an appreciable part of the total flow, conventional stream-gaging procedures cannot be used to determine the discharge; neither the conventional instrumentation nor conventional methodology is adequate. A method of determining the discharge during pulsating flow was tested in the Santa Anita Wash flood control channel in Arcadia, Calif., April 16, 1965. Observations of the dimensions and velocities of translatory waves were made during a period of controlled reservoir releases of about 100, 200, and 300 cfs (cubic feet per second). The method of computing discharge was based on (1) computation of the discharge in the overriding waves and (2) computation of the discharge in the shallow-depth, or overrun, part of the flow. Satisfactory results were obtained by this method. However, the procedure used--separating the flow into two components and then treating the shallow-depth component as though it were steady--has no theoretical basis. It is simply an expedient for use until laboratory investigation can provide a satisfactory analytical solution to the problem of computing discharge during pulsating flow. Sixteen months prior to the test in Santa Anita Wash, a robot camera had been designed and programmed to obtain the data needed to compute discharge by the method described above. The photographic equipment had been installed in Haines Creek flood control channel in Los Angeles, Calif., but it had not been completely tested because of the infrequency of flow in that channel. Because the Santa Anita Wash tests afforded excellent data for analysis, further development of the photographic technique at Haines Creek was discontinued. Three methods for obtaining the data needed to compute discharge during pulsating flow are proposed. In two of the methods--the photographic method and the depth-recorder method--the dimensions and velocities of translatory waves are recorded, and discharge is then computed by the procedure developed in this report. The third method--the constant-rate-dye-dilution method--yields the discharge more directly. The discharge is computed from the dye-injection rate and the ratio of the concentration of dye in the injected solution to the concentration of dye in the water sampled at a site downstream. The three methods should be developed and tested in the Santa Anita Wash flood control channel under controlled conditions similar to those in the test of April 1965.
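The constant-rate-dye-dilution computation is a one-line mass balance. A tiny Python illustration with made-up numbers on the scale of the 1965 test releases:

```python
# Mass balance for constant-rate dye dilution with zero background dye:
# q*C1 = (Q + q)*C2, so Q = q*(C1/C2 - 1) ~= q*C1/C2 when C1 >> C2.
q = 0.002      # dye solution injection rate, cfs (hypothetical)
C1 = 1.0e5     # dye concentration in the injected solution, ppb (hypothetical)
C2 = 0.65      # fully mixed concentration sampled downstream, ppb (hypothetical)

Q = q * (C1 / C2 - 1)
print(f"Estimated discharge: {Q:.0f} cfs")   # about 300 cfs for these numbers
```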
Inventory-transportation integrated optimization for maintenance spare parts of high-speed trains
Wang, Jiaxi; Wang, Huasheng; Wang, Zhongkai; Li, Jian; Lin, Ruixi; Xiao, Jie; Wu, Jianping
2017-01-01
This paper presents a 0–1 programming model aimed at obtaining the optimal inventory policy and transportation mode for maintenance spare parts of high-speed trains. To obtain the model parameters for occasionally replaced spare parts, a demand estimation method based on the maintenance strategies of China’s high-speed railway system is proposed. In addition, we analyse the shortage time using PERT, and then calculate the unit-time shortage cost from the viewpoint of train operation revenue. Finally, a real-world case study from Shanghai Depot is conducted to demonstrate our method. Computational results offer effective and efficient decision support for inventory managers. PMID:28472097
Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing
Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin
2016-01-01
With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time image processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU based methods. In the classical GPU based imaging algorithm, the GPU is employed to accelerate image processing by massive parallel computing, and the CPU is only used to perform auxiliary work such as data input/output (IO). However, the computing capability of the CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPU/GPU is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated with deep collaborative multiple CPU/GPU computing. In the CPU parallel imaging part, the advanced vector extension (AVX) method is first introduced into the multi-core CPU parallel method for higher efficiency. As for the GPU parallel imaging, not only are the bottlenecks of memory limitation and frequent data transfer overcome, but various optimization strategies are applied, such as streaming and parallel pipelining. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method enhances the efficiency of SAR imaging by 270 times over a single-core CPU and realizes real-time imaging in that the imaging rate outperforms the raw data generation rate. PMID:27070606
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habte, A.; Sengupta, M.; Wilcox, S.
This report was part of a multiyear collaboration with the University of Wisconsin and the National Oceanic and Atmospheric Administration (NOAA) to produce high-quality, satellite-based, solar resource datasets for the United States. High-quality solar resource assessment accelerates technology deployment by making a positive impact on decision making and reducing uncertainty in investment decisions. Satellite-based solar resource datasets are used as a primary source in solar resource assessment. This is mainly because satellites provide larger areal coverage and longer periods of record than ground-based measurements. With the advent of newer satellites with increased information content and faster computers that can process increasingly higher data volumes, methods that were considered too computationally intensive are now feasible. One class of sophisticated methods for retrieving solar resource information from satellites is a two-step, physics-based method that computes cloud properties and uses the information in a radiative transfer model to compute solar radiation. This method has the advantage of adding additional information as satellites with newer channels come on board. This report evaluates the two-step method developed at NOAA and adapted for solar resource assessment for renewable energy with the goal of identifying areas that can be improved in the future.
Automated Measurement of Patient-Specific Tibial Slopes from MRI
Amerinatanzi, Amirhesam; Summers, Rodney K.; Ahmadi, Kaveh; Goel, Vijay K.; Hewett, Timothy E.; Nyman, Edward
2017-01-01
Background: Multi-planar proximal tibial slopes may be associated with increased likelihood of osteoarthritis and anterior cruciate ligament injury, due in part to their role in checking the anterior-posterior stability of the knee. Established methods suffer repeatability limitations and lack computational efficiency for intuitive clinical adoption. The aims of this study were to develop a novel automated approach and to compare the repeatability and computational efficiency of the approach against previously established methods. Methods: Tibial slope geometries were obtained via MRI and measured using an automated Matlab-based approach. Data were compared for repeatability and evaluated for computational efficiency. Results: Mean lateral tibial slope (LTS) for females (7.2°) was greater than for males (1.66°). Mean LTS in the lateral concavity zone was greater for females (7.8° for females, 4.2° for males). Mean medial tibial slope (MTS) for females was greater (9.3° vs. 4.6°). Along the medial concavity zone, female subjects demonstrated greater MTS. Conclusion: The automated method was more repeatable and computationally efficient than previously identified methods and may aid in the clinical assessment of knee injury risk and inform surgical planning and implant design efforts. PMID:28952547
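One ingredient of such an automated pipeline, fitting a least-squares line to subchondral landmarks from a sagittal slice and reporting its inclination, can be sketched in a few lines. The landmark coordinates below are invented and this is not the authors' Matlab implementation.

```python
# Minimal sketch: slope angle of a least-squares line through landmark points.
import numpy as np

# (anterior-posterior, superior-inferior) coordinates of medial plateau
# landmarks from one sagittal MRI slice, in mm (hypothetical values):
ap = np.array([5.0, 12.0, 19.0, 26.0, 33.0])
si = np.array([0.0, -0.9, -2.1, -3.2, -4.5])

slope, intercept = np.polyfit(ap, si, 1)      # least-squares line fit
mts_degrees = np.degrees(np.arctan(-slope))   # posterior-inferior slope taken as +
print(f"medial tibial slope ~ {mts_degrees:.1f} deg")
```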
Multiscale Methods, Parallel Computation, and Neural Networks for Real-Time Computer Vision.
NASA Astrophysics Data System (ADS)
Battiti, Roberto
1990-01-01
This thesis presents new algorithms for low and intermediate level computer vision. The guiding ideas in the presented approach are those of hierarchical and adaptive processing, concurrent computation, and supervised learning. Processing of the visual data at different resolutions is used not only to reduce the amount of computation necessary to reach the fixed point, but also to produce a more accurate estimation of the desired parameters. The presented adaptive multiple scale technique is applied to the problem of motion field estimation. Different parts of the image are analyzed at a resolution that is chosen in order to minimize the error in the coefficients of the differential equations to be solved. Tests with video-acquired images show that velocity estimation is more accurate over a wide range of motion with respect to the homogeneous scheme. In some cases introduction of explicit discontinuities coupled to the continuous variables can be used to avoid propagation of visual information from areas corresponding to objects with different physical and/or kinematic properties. The human visual system uses concurrent computation in order to process the vast amount of visual data in "real-time." Although with different technological constraints, parallel computation can be used efficiently for computer vision. All the presented algorithms have been implemented on medium grain distributed memory multicomputers with a speed-up approximately proportional to the number of processors used. A simple two-dimensional domain decomposition assigns regions of the multiresolution pyramid to the different processors. The inter-processor communication needed during the solution process is proportional to the linear dimension of the assigned domain, so that efficiency is close to 100% if a large region is assigned to each processor. Finally, learning algorithms are shown to be a viable technique to engineer computer vision systems for different applications starting from multiple-purpose modules. In the last part of the thesis a well known optimization method (the Broyden-Fletcher-Goldfarb-Shanno memoryless quasi-Newton method) is applied to simple classification problems and shown to be superior to the "error back-propagation" algorithm for numerical stability, automatic selection of parameters, and convergence properties.
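The closing comparison of a quasi-Newton optimizer with error back-propagation can be reproduced in miniature: hand a logistic classifier's loss and gradient to a quasi-Newton routine instead of plain gradient descent. Note this sketch uses SciPy's full-memory BFGS where the thesis uses a memoryless variant, and the data are synthetic.

```python
# Quasi-Newton training of a tiny logistic classifier on synthetic data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)    # linearly separable labels

def loss_and_grad(w):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    grad = X.T @ (p - y) / len(y)
    # small L2 term keeps the separable problem well conditioned
    return loss + 0.01 * w @ w, grad + 0.02 * w

res = minimize(loss_and_grad, x0=np.zeros(2), jac=True, method="BFGS")
print("converged:", res.success, "weights:", res.x)
```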
A Fast Vector Radiative Transfer Model for Atmospheric and Oceanic Remote Sensing
NASA Astrophysics Data System (ADS)
Ding, J.; Yang, P.; King, M. D.; Platnick, S. E.; Meyer, K.
2017-12-01
A fast vector radiative transfer model is developed in support of atmospheric and oceanic remote sensing. This model is capable of simulating the Stokes vector observed at the top of the atmosphere (TOA) and the terrestrial surface by considering absorption, scattering, and emission. The gas absorption is parameterized in terms of atmospheric gas concentrations, temperature, and pressure. The parameterization scheme combines a regression method and the correlated-K distribution method, and can easily integrate with multiple scattering computations. The approach is more than four orders of magnitude faster than a line-by-line radiative transfer model with errors less than 0.5% in terms of transmissivity. A two-component approach is utilized to solve the vector radiative transfer equation (VRTE). The VRTE solver separates the phase matrices of aerosol and cloud into forward and diffuse parts, and thus the solution is also separated. The forward solution can be expressed by a semi-analytical equation based on the small-angle approximation, and serves as the source of the diffuse part. The diffuse part is solved by the adding-doubling method. The adding-doubling implementation is computationally efficient because the diffuse component needs far fewer spherical function expansion terms. The simulated Stokes vectors at both the TOA and the surface have accuracy comparable to counterparts based on numerically rigorous methods.
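The adding principle behind the adding-doubling solver can be shown in its simplest scalar form, ignoring polarization and angular dependence: combine two layers' reflectance and transmittance, with the factor 1/(1 - r1 r2) summing all interreflection orders. The numbers below are arbitrary.

```python
# Scalar illustration of the adding/doubling principle.
def add_layers(r1, t1, r2, t2):
    """Return (R, T) of layer 1 stacked on layer 2 (illumination from above)."""
    denom = 1.0 - r1 * r2                # sums the multiple-reflection series
    R = r1 + t1 * r2 * t1 / denom
    T = t1 * t2 / denom
    return R, T

# doubling: build a layer of 16x the optical thickness from one thin layer
r, t = 0.05, 0.90
for _ in range(4):                        # 2, 4, 8, 16 times the initial depth
    r, t = add_layers(r, t, r, t)
print(f"R = {r:.4f}, T = {t:.4f}")
```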
40 CFR 63.772 - Test methods, compliance procedures, and compliance demonstrations.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Oil and Natural Gas Production Facilities § 63.772 Test methods, compliance procedures, and compliance...) A mixture of methane in air at a concentration less than 10,000 parts per million by volume. (5) An... rate of either TOC (minus methane and ethane) or total HAP (Ei, Eo) shall be computed using the...
40 CFR 63.772 - Test methods, compliance procedures, and compliance demonstrations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Oil and Natural Gas Production Facilities § 63.772 Test methods, compliance procedures, and compliance...) A mixture of methane in air at a concentration less than 10,000 parts per million by volume. (5) An... rate of either TOC (minus methane and ethane) or total HAP (Ei, Eo) shall be computed using the...
N-person differential games. Part 2: The penalty method
NASA Technical Reports Server (NTRS)
Chen, G.; Mills, W. H.; Zheng, Q.; Shaw, W. H.
1983-01-01
The equilibrium strategy for N-person differential games can be found by studying a min-max problem subject to differential systems constraints. The differential constraints are penalized and finite elements are used to compute numerical solutions. Convergence proof and error estimates are given. Numerical results are also included and compared with those obtained by the dual method.
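The penalty idea is easy to caricature in one dimension: replace min f(x) subject to g(x) = 0 by the unconstrained minimum of f(x) + mu g(x)^2 and let mu grow. The differential-game setting penalizes the dynamics constraint in the same spirit; the functions below are toy assumptions, not the paper's finite-element formulation.

```python
# One-dimensional penalty method: the minimiser drifts to the constraint x = 1.
from scipy.optimize import minimize_scalar

f = lambda x: (x - 2.0) ** 2        # objective to minimise
g = lambda x: x - 1.0               # constraint g(x) = 0, i.e. x = 1

for mu in [1.0, 10.0, 100.0, 1000.0]:
    res = minimize_scalar(lambda x, m=mu: f(x) + m * g(x) ** 2)
    print(f"mu = {mu:7.1f}   x* = {res.x:.4f}")   # approaches x = 1 as mu grows
```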
A fully resolved consensus between fully resolved phylogenetic trees.
Quitzau, José Augusto Amgarten; Meidanis, João
2006-03-31
Nowadays, there are many phylogeny reconstruction methods, each with advantages and disadvantages. We explored the advantages of each method, putting together the common parts of trees constructed by several methods, by means of a consensus computation. A number of phylogenetic consensus methods are already known. Unfortunately, there is also a taboo concerning consensus methods, because most biologists see them mainly as comparators and not as phylogenetic tree constructors. We challenged this taboo by defining a consensus method that builds a fully resolved phylogenetic tree based on the most common parts of fully resolved trees in a given collection. We also generated results showing that this consensus is in a way a kind of "median" of the input trees; as such it can be closer to the correct tree in many situations.
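A greatly simplified version of the "most common parts" idea can be sketched by ranking clades by frequency and greedily keeping those compatible with the clades already accepted (two clades are compatible when they are disjoint or nested). The input trees below are hand-made, and the paper's actual construction is richer.

```python
# Greedy frequency-ranked consensus over clades, each clade a frozenset of leaves.
from collections import Counter

trees = [                                   # each tree represented by its clades
    {frozenset("AB"), frozenset("ABC"), frozenset("ABCD")},
    {frozenset("AB"), frozenset("ABD"), frozenset("ABCD")},
    {frozenset("AB"), frozenset("ABC"), frozenset("ABCD")},
]

counts = Counter(c for t in trees for c in t)
compatible = lambda a, b: a <= b or b <= a or not (a & b)

consensus = []
for clade, _ in counts.most_common():       # most frequent clades first
    if all(compatible(clade, kept) for kept in consensus):
        consensus.append(clade)

for clade in sorted(consensus, key=len):
    print(sorted(clade))                    # AB, ABC, ABCD; ABD is rejected
```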
NASA Technical Reports Server (NTRS)
Shih, T. I.-P.; Bailey, R. T.; Nguyen, H. L.; Roelke, R. J.
1990-01-01
An efficient computer program, called GRID2D/3D was developed to generate single and composite grid systems within geometrically complex two- and three-dimensional (2- and 3-D) spatial domains that can deform with time. GRID2D/3D generates single grid systems by using algebraic grid generation methods based on transfinite interpolation in which the distribution of grid points within the spatial domain is controlled by stretching functions. All single grid systems generated by GRID2D/3D can have grid lines that are continuous and differentiable everywhere up to the second-order. Also, grid lines can intersect boundaries of the spatial domain orthogonally. GRID2D/3D generates composite grid systems by patching together two or more single grid systems. The patching can be discontinuous or continuous. For continuous composite grid systems, the grid lines are continuous and differentiable everywhere up to the second-order except at interfaces where different single grid systems meet. At interfaces where different single grid systems meet, the grid lines are only differentiable up to the first-order. For 2-D spatial domains, the boundary curves are described by using either cubic or tension spline interpolation. For 3-D spatial domains, the boundary surfaces are described by using either linear Coon's interpolation, bi-hyperbolic spline interpolation, or a new technique referred to as 3-D bi-directional Hermite interpolation. Since grid systems generated by algebraic methods can have grid lines that overlap one another, GRID2D/3D contains a graphics package for evaluating the grid systems generated. With the graphics package, the user can generate grid systems in an interactive manner with the grid generation part of GRID2D/3D. GRID2D/3D is written in FORTRAN 77 and can be run on any IBM PC, XT, or AT compatible computer. In order to use GRID2D/3D on workstations or mainframe computers, some minor modifications must be made in the graphics part of the program; no modifications are needed in the grid generation part of the program. This technical memorandum describes the theory and method used in GRID2D/3D.
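The transfinite-interpolation core of such a generator reduces, in its plainest 2-D form, to a Coons patch: blend the four boundary curves and subtract the doubly counted corner contributions. Stretching functions, orthogonality control and the spline boundary descriptions of GRID2D/3D are omitted; the boundary curves below are arbitrary examples.

```python
# 2-D transfinite interpolation (Coons patch) over four boundary curves.
import numpy as np

def coons_patch(b, t, l, r):
    """b, t: (nu, 2) curves along u at v=0 and v=1; l, r: (nv, 2) curves
    along v at u=0 and u=1. Corners must match: b[0]=l[0], b[-1]=r[0],
    t[0]=l[-1], t[-1]=r[-1]."""
    nu, nv = len(b), len(l)
    u = np.linspace(0.0, 1.0, nu)[:, None, None]        # (nu,1,1)
    v = np.linspace(0.0, 1.0, nv)[None, :, None]        # (1,nv,1)
    ruled_v = (1 - v) * b[:, None, :] + v * t[:, None, :]
    ruled_u = (1 - u) * l[None, :, :] + u * r[None, :, :]
    corners = ((1 - u) * (1 - v) * b[0] + u * (1 - v) * b[-1]
               + (1 - u) * v * t[0] + u * v * t[-1])
    return ruled_v + ruled_u - corners                  # (nu, nv, 2) grid points

# unit square with a sinusoidally bulged top edge
s = np.linspace(0.0, 1.0, 21)
bottom = np.stack([s, np.zeros_like(s)], axis=1)
top    = np.stack([s, 1.0 + 0.2 * np.sin(np.pi * s)], axis=1)
left   = np.stack([np.zeros_like(s), s], axis=1)
right  = np.stack([np.ones_like(s), s], axis=1)
grid = coons_patch(bottom, top, left, right)
print(grid.shape)                                       # (21, 21, 2)
```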
Niskanen, Toivo; Lehtelä, Jouni; Länsikallio, Riina
2014-01-01
Employers and workers need concrete guidance to plan and implement changes in the ergonomics of computer workstations. The Näppärä method is a screening tool for identifying problems requiring further assessment and corrective actions. The aim of this study was to assess the work of occupational safety and health (OSH) government inspectors who used Näppärä as part of their OSH enforcement inspections (430 assessments) related to computer work. The modifications in workstation ergonomics involved mainly adjustments to the screen, mouse, keyboard, forearm supports, and chair. One output of the assessment is an index indicating the percentage of compliance items. The method can be considered an exposure assessment and ergonomics intervention tool whose index serves as a benchmark for the level of ergonomics. Future research could use Näppärä to investigate the effectiveness of participatory ergonomics interventions.
Aprà, E; Kowalski, K
2016-03-08
In this paper we discuss the implementation of multireference coupled-cluster formalism with singles, doubles, and noniterative triples (MRCCSD(T)), which is capable of taking advantage of the processing power of the Intel Xeon Phi coprocessor. We discuss the integration of two levels of parallelism underlying the MRCCSD(T) implementation with computational kernels designed to offload the computationally intensive parts of the MRCCSD(T) formalism to Intel Xeon Phi coprocessors. Special attention is given to the enhancement of the parallel performance by task reordering that has improved load balancing in the noniterative part of the MRCCSD(T) calculations. We also discuss aspects regarding efficient optimization and vectorization strategies.
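The benefit of task reordering for load balancing can be illustrated with the classic longest-processing-time heuristic: sort task costs in decreasing order and always hand the next task to the least-loaded worker. The task costs below are invented stand-ins for the (T) tensor-contraction blocks, not measured MRCCSD(T) timings.

```python
# Makespan with and without longest-processing-time (LPT) task reordering.
import heapq

def makespan(tasks, n_workers, sort_first):
    if sort_first:
        tasks = sorted(tasks, reverse=True)     # LPT reordering
    loads = [0.0] * n_workers
    heapq.heapify(loads)
    for t in tasks:                             # greedy: next task to least-loaded
        lightest = heapq.heappop(loads)
        heapq.heappush(loads, lightest + t)
    return max(loads)

tasks = [7, 1, 1, 5, 3, 9, 2, 8, 2, 6, 1, 4]    # hypothetical task costs
print("arrival order :", makespan(tasks, 3, sort_first=False))   # 18
print("LPT reordered :", makespan(tasks, 3, sort_first=True))    # 17
```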
Efficient Computation of Difference Vibrational Spectra in Isothermal-Isobaric Ensemble.
Joutsuka, Tatsuya; Morita, Akihiro
2016-11-03
Difference spectroscopy between two closely related systems is widely used to enhance selectivity to the differing parts of the observed system, though the molecular dynamics calculation of tiny difference spectra would be computationally extraordinarily demanding if done by subtracting two spectra. Therefore, we have proposed an efficient computational algorithm for difference spectra that does not resort to subtraction. The present paper reports our extension of the theoretical method to the isothermal-isobaric (NPT) ensemble. The present theory expands our applications of analysis, including pressure dependence of the spectra. We verified that the present theory yields accurate difference spectra in the NPT condition as well, with remarkable computational efficiency over the straightforward subtraction by several orders of magnitude. This method is further applied to vibrational spectra of liquid water with varying pressure and succeeded in reproducing tiny difference spectra under pressure change. The anomalous pressure dependence is elucidated in relation to other properties of liquid water.
Computational path planner for product assembly in complex environments
NASA Astrophysics Data System (ADS)
Shang, Wei; Liu, Jianhua; Ning, Ruxin; Liu, Mi
2013-03-01
Assembly path planning is a crucial problem in assembly-related design and manufacturing processes. Sampling based motion planning algorithms are used for computational assembly path planning. However, the performance of such algorithms may degrade considerably in environments with complex product structure, narrow passages, or other challenging scenarios. A computational path planner for automatic assembly path planning in complex 3D environments is presented. The global planning process is divided into three phases based on the environment, and specific algorithms are proposed and utilized in each phase to solve the challenging issues. A novel ray test based stochastic collision detection method is proposed to evaluate the intersection between two polyhedral objects. This method avoids the fake collisions of conventional methods and relaxes the geometric constraint when a part has to be removed while in surface contact with other parts. A refined history based rapidly-exploring random tree (RRT) algorithm, which biases the growth of the tree based on its planning history, is proposed and employed in the planning phase where the path is simple but the space is highly constrained. A novel adaptive RRT algorithm is developed for path planning problems with challenging scenarios and uncertain environments. With extending values assigned to each tree node and extending schemes applied, the tree can adapt its growth to explore complex environments more efficiently. Experiments on the key algorithms are carried out, and comparisons are made between conventional path planning algorithms and the presented ones. The comparison results show that, based on the proposed algorithms, the path planner can compute assembly paths in challenging complex environments more efficiently and with a higher success rate. This research provides references for the study of computational assembly path planning under complex environments.
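For reference, the RRT family the planner builds on can be written in a bare-bones 2-D form as below; the history biasing, adaptive extension values and polyhedral ray-test collision check of the paper are replaced here by uniform sampling and a single circular obstacle.

```python
# Minimal 2-D rapidly-exploring random tree (RRT) with goal biasing.
import math, random

random.seed(1)
start, goal = (0.1, 0.1), (0.9, 0.9)
obstacle, radius, step = (0.5, 0.5), 0.2, 0.05

def collision_free(p):
    return math.dist(p, obstacle) > radius

nodes, parent = [start], {0: None}
for _ in range(2000):
    sample = goal if random.random() < 0.1 else (random.random(), random.random())
    i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
    near = nodes[i]
    d = math.dist(near, sample) or 1e-9
    new = (near[0] + step * (sample[0] - near[0]) / d,
           near[1] + step * (sample[1] - near[1]) / d)
    if collision_free(new):
        parent[len(nodes)] = i              # remember the edge for path recovery
        nodes.append(new)
        if math.dist(new, goal) < step:
            print("reached goal after", len(nodes), "nodes")
            break
```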
Perspectives on Emerging/Novel Computing Paradigms and Future Aerospace Workforce Environments
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
2003-01-01
The accelerating pace of computing technology development shows no signs of abating. Computing power of 100 Tflop/s is likely to be reached by 2004 and Pflop/s (10^15 Flop/s) by 2007. The fundamental physical limits of computation, including information storage limits, communication limits and computation rate limits, will likely be reached by the middle of the present millennium. To overcome these limits, novel technologies and new computing paradigms will be developed. An attempt is made in this overview to put the diverse activities related to new computing paradigms in perspective and to set the stage for the succeeding presentations. The presentation is divided into five parts. In the first part, a brief historical account is given of the development of computer and networking technologies. The second part provides brief overviews of the three emerging computing paradigms: grid, ubiquitous and autonomic computing. The third part lists future computing alternatives and the characteristics of the future computing environment. The fourth part describes future aerospace workforce research, learning and design environments. The fifth part lists the objectives of the workshop and some of the sources of information on future computing paradigms.
2016-01-01
Background Computer networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices ranging from mobile phones to other household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. Purpose It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of Agent-based Modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a complex communication network problem. Method We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. Results The conducted experiments demonstrated two important results: first, a CABC-based modeling approach such as Agent-based Modeling can be effective for modeling complex problems in the domain of IoT; second, the specific problem of managing the carbon footprint can be solved using a multiagent system approach. PMID:26812235
Standard biological parts knowledgebase.
Galdzicki, Michal; Rodriguez, Cesar; Chandran, Deepak; Sauro, Herbert M; Gennari, John H
2011-02-24
We have created the Knowledgebase of Standard Biological Parts (SBPkb) as a publicly accessible Semantic Web resource for synthetic biology (sbolstandard.org). The SBPkb allows researchers to query and retrieve standard biological parts for research and use in synthetic biology. Its initial version includes all of the information about parts stored in the Registry of Standard Biological Parts (partsregistry.org). SBPkb transforms this information so that it is computable, using our semantic framework for synthetic biology parts. This framework, known as SBOL-semantic, was built as part of the Synthetic Biology Open Language (SBOL), a project of the Synthetic Biology Data Exchange Group. SBOL-semantic represents commonly used synthetic biology entities, and its purpose is to improve the distribution and exchange of descriptions of biological parts. In this paper, we describe the data, our methods for transformation to SBPkb, and finally, we demonstrate the value of our knowledgebase with a set of sample queries. We use RDF technology and SPARQL queries to retrieve candidate "promoter" parts that are known to be both negatively and positively regulated. This method provides new web-based data access to perform searches for parts that were not previously possible.
NASA Technical Reports Server (NTRS)
Middleton, W. D.; Lundry, J. L.
1976-01-01
An integrated system of computer programs was developed for the design and analysis of supersonic configurations. The system uses linearized theory methods for the calculation of surface pressures and supersonic area rule concepts in combination with linearized theory for calculation of aerodynamic force coefficients. Interactive graphics are optional at the user's request. Schematics of the program structure and the individual overlays and subroutines are described.
TomoBank: a tomographic data repository for computational x-ray science
De Carlo, Francesco; Gürsoy, Doğa; Ching, Daniel J.; ...
2018-02-08
There is a widening gap between the fast advancement of computational methods for tomographic reconstruction and their successful implementation in production software at various synchrotron facilities. This is due in part to the lack of readily available instrument datasets and phantoms representative of real materials for validation and comparison of new numerical methods. Recent advancements in detector technology made sub-second and multi-energy tomographic data collection possible [1], but also increased the demand to develop new reconstruction methods able to handle in-situ [2] and dynamic systems [3] that can be quickly incorporated in beamline production software [4]. The X-ray Tomography Data Bank, tomoBank, provides a repository of experimental and simulated datasets with the aim to foster collaboration among computational scientists, beamline scientists, and experimentalists and to accelerate the development and implementation of tomographic reconstruction methods for synchrotron facility production software by providing easy access to challenging datasets and their descriptors.
12 CFR Appendix G to Part 226 - Open-End Model Forms and Clauses
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 3 2012-01-01 2012-01-01 false Open-End Model Forms and Clauses G Appendix G... RESERVE SYSTEM TRUTH IN LENDING (REGULATION Z) Pt. 226, App. G Appendix G to Part 226—Open-End Model Forms and Clauses G-1Balance Computation Methods Model Clauses (Home-equity Plans) (§§ 226.6 and 226.7) G-1...
Mendikute, Alberto; Yagüe-Fabra, José A; Zatarain, Mikel; Bertelsen, Álvaro; Leizea, Ibai
2017-01-01
Photogrammetry methods are being used more and more as a 3D technique for large scale metrology applications in industry. Optical targets are placed on an object and images are taken around it, where measuring traceability is provided by precise off-process pre-calibrated digital cameras and scale bars. According to the 2D target image coordinates, target 3D coordinates and camera views are jointly computed. One of the applications of photogrammetry is the measurement of raw part surfaces prior to machining. For this application, post-process bundle adjustment has usually been adopted for computing the 3D scene. With that approach, a high computation time is observed, leading in practice to time-consuming and user-dependent iterative review and re-processing procedures until an adequate set of images is taken, limiting its potential for fast, easy-to-use, and precise measurements. In this paper, a new efficient procedure is presented for solving the bundle adjustment problem in portable photogrammetry. In-process bundle computing capability is demonstrated on a consumer grade desktop PC, enabling quasi-real-time 2D image and 3D scene computing. Additionally, a method for the self-calibration of camera and lens distortion has been integrated into the in-process approach due to its potential for highest precision when using low cost non-specialized digital cameras. Measurement traceability is set only by scale bars available in the measuring scene, avoiding the uncertainty contribution of off-process camera calibration procedures or the use of special purpose calibration artifacts. The developed self-calibrated in-process photogrammetry has been evaluated both in a pilot case scenario and in industrial scenarios for raw part measurement, showing a total in-process computing time typically below 1 s per image, up to a maximum of 2 s during the last stages of the computed industrial scenes, along with a relative precision of 1/10,000 (e.g., 0.1 mm error in 1 m) and an error RMS below 0.2 pixels at the image plane, matching the performance reported for portable photogrammetry with precise off-process pre-calibrated cameras. PMID:28891946
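The bundle adjustment at the heart of such a pipeline, reduced to a toy, can be posed as a nonlinear least-squares problem: jointly refine 3-D target coordinates and camera positions by minimizing 2-D reprojection error. Rotations, lens distortion and the paper's self-calibration are held fixed here, and all geometry below is synthetic.

```python
# Toy bundle adjustment: refine 8 targets and 3 camera centres from
# noiseless 2-D observations, starting from perturbed initial guesses.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
pts = rng.uniform(-1, 1, (8, 3)) + [0.0, 0.0, 5.0]      # true 3-D targets
cams = np.array([[0.0, 0, 0], [1.0, 0, 0], [0.0, 1, 0]])  # true camera centres
f = 800.0                                                # focal length, pixels

def project(p, c):                       # pinhole model, identity rotations
    q = p - c
    return f * q[:, :2] / q[:, 2:3]

obs = np.vstack([project(pts, c) for c in cams])         # observed image points

def residuals(x):
    p = x[:24].reshape(8, 3)
    c = x[24:].reshape(3, 3)
    pred = np.vstack([project(p, ci) for ci in c])
    return (pred - obs).ravel()

x0 = np.concatenate([(pts + rng.normal(0, 0.05, pts.shape)).ravel(),
                     (cams + rng.normal(0, 0.05, cams.shape)).ravel()])
sol = least_squares(residuals, x0, method="trf")
print("rms reprojection error:", np.sqrt(np.mean(sol.fun ** 2)), "px")
```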
Sled, Elizabeth A.; Sheehy, Lisa M.; Felson, David T.; Costigan, Patrick A.; Lam, Miu; Cooke, T. Derek V.
2010-01-01
The objective of the study was to evaluate the reliability of frontal plane lower limb alignment measures using a landmark-based method by (1) comparing inter- and intra-reader reliability between measurements of alignment obtained manually with those using a computer program, and (2) determining inter- and intra-reader reliability of computer-assisted alignment measures from full-limb radiographs. An established method for measuring alignment was used, involving selection of 10 femoral and tibial bone landmarks. 1) To compare manual and computer methods, we used digital images and matching paper copies of five alignment patterns simulating healthy and malaligned limbs drawn using AutoCAD. Seven readers were trained in each system. Paper copies were measured manually and repeat measurements were performed daily for 3 days, followed by a similar routine with the digital images using the computer. 2) To examine the reliability of computer-assisted measures from full-limb radiographs, 100 images (200 limbs) were selected as a random sample from 1,500 full-limb digital radiographs which were part of the Multicenter Osteoarthritis (MOST) Study. Three trained readers used the software program to measure alignment twice from the batch of 100 images, with two or more weeks between batch handling. Manual and computer measures of alignment showed excellent agreement (intraclass correlations [ICCs] 0.977 – 0.999 for computer analysis; 0.820 – 0.995 for manual measures). The computer program applied to full-limb radiographs produced alignment measurements with high inter- and intra-reader reliability (ICCs 0.839 – 0.998). In conclusion, alignment measures using a bone landmark-based approach and a computer program were highly reliable between multiple readers. PMID:19882339
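As a reminder of the machinery behind such reliability figures, an intraclass correlation of the ICC(3,1) type can be computed from a targets-by-readers matrix with a two-way ANOVA decomposition, as sketched below on invented ratings (the study's own ICC variant may differ).

```python
# ICC(3,1) from a two-way ANOVA decomposition of a targets x readers matrix.
import numpy as np

x = np.array([[178.2, 178.4, 178.3],      # rows: limbs, cols: readers (degrees)
              [181.0, 181.1, 180.9],
              [175.6, 175.9, 175.7],
              [179.3, 179.2, 179.4],
              [183.1, 182.8, 183.0]])
n, k = x.shape
grand = x.mean()
row_means, col_means = x.mean(axis=1), x.mean(axis=0)

msr = k * np.sum((row_means - grand) ** 2) / (n - 1)     # between-target mean square
sse = np.sum((x - row_means[:, None] - col_means[None, :] + grand) ** 2)
mse = sse / ((n - 1) * (k - 1))                          # residual mean square

icc31 = (msr - mse) / (msr + (k - 1) * mse)
print(f"ICC(3,1) = {icc31:.3f}")
```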
NASA Astrophysics Data System (ADS)
Adrich, Przemysław
2016-05-01
In Part I of this work a new method for designing dual foil electron beam forming systems was introduced. In this method, an optimal configuration of the dual foil system is found by means of a systematic, automatized scan of system performance as a function of its parameters. At each point of the scan, the Monte Carlo method is used to calculate the off-axis dose profile in water, taking into account the detailed and complete geometry of the system. The new method, while being computationally intensive, minimizes the involvement of the designer. In this Part II paper, the feasibility of practical implementation of the new method is demonstrated. For this, prototype software tools were developed and applied to solve a real-life design problem. It is demonstrated that system optimization can be completed within a few hours using rather moderate computing resources. It is also demonstrated that, perhaps for the first time, the designer can gain deep insight into system behavior, such that the construction can be simultaneously optimized with respect to a number of functional characteristics besides the flatness of the off-axis dose profile. In the presented example, the system is optimized with respect to both the flatness of the off-axis dose profile and the beam transmission. A number of practical issues related to application of the new method as well as its possible extensions are discussed.
[A computer aided design approach of all-ceramics abutment for maxilla central incisor].
Sun, Yu-chun; Zhao, Yi-jiao; Wang, Yong; Han, Jing-yun; Lin, Ye; Lü, Pei-jun
2010-10-01
To establish a computer aided design (CAD) software platform for individualized abutments for the maxillary central incisor. Three-dimensional data of the incisor were collected by scanning and geometric transformation. The data mainly included the occlusal part of the healing abutment, the location carinae of the bedpiece, the occlusal 1/3 part of the artificial gingiva's inner surface, and so on. The all-ceramic crown designed in advance was virtually cut back to obtain the original data of the abutment's supragingival part. The abutment's in-gum part was designed to simulate the individual natural tooth root. Functions such as "data offset", "bi-rail sweep surface" and "loft surface" were used in the CAD process. The CAD route of the individualized all-ceramic abutment was set up, the functions and application methods were decided, and the complete CAD process was realized. The software platform was set up according to the requirements of the dental clinic.
Experimental magic state distillation for fault-tolerant quantum computing.
Souza, Alexandre M; Zhang, Jingfu; Ryan, Colm A; Laflamme, Raymond
2011-01-25
Any physical quantum device for quantum information processing (QIP) is subject to errors in implementation. In order to be reliable and efficient, quantum computers will need error-correcting or error-avoiding methods. Fault-tolerance achieved through quantum error correction will be an integral part of quantum computers. Of the many methods that have been discovered to implement it, a highly successful approach has been to use transversal gates and specific initial states. A critical element for its implementation is the availability of high-fidelity initial states, such as |0〉 and the 'magic state'. Here, we report an experiment, performed in a nuclear magnetic resonance (NMR) quantum processor, showing sufficient quantum control to improve the fidelity of imperfect initial magic states by distilling five of them into one with higher fidelity.
Software life cycle methodologies and environments
NASA Technical Reports Server (NTRS)
Fridge, Ernest
1991-01-01
Products of this project will significantly improve the quality and productivity of Space Station Freedom Program software processes by improving software reliability and safety and by broadening the range of problems that can be solved with computational solutions. The project brings in Computer Aided Software Engineering (CASE) technology in the form of environments, such as the Engineering Script Language/Parts Composition System (ESL/PCS) application generator, an intelligent user interface for cost avoidance in setting up operational computer runs, the Framework programmable platform for defining process and software development work flow control, a process for bringing CASE technology into an organization's culture, and the CLIPS/CLIPS Ada language for developing expert systems; and methodologies, such as a method for developing fault tolerant, distributed systems and a method for developing systems for common sense reasoning and for solving expert systems problems when only approximate truths are known.
Imaging in anatomy: a comparison of imaging techniques in embalmed human cadavers
2013-01-01
Background A large variety of imaging techniques is an integral part of modern medicine. Introducing radiological imaging techniques into the dissection course serves as a basis for improved learning of anatomy and multidisciplinary learning in pre-clinical medical education. Methods Four different imaging techniques (ultrasound, radiography, computed tomography, and magnetic resonance imaging) were performed in embalmed human body donors to analyse possibilities and limitations of the respective techniques in this peculiar setting. Results The quality of ultrasound and radiography images was poor, images of computed tomography and magnetic resonance imaging were of good quality. Conclusion Computed tomography and magnetic resonance imaging have a superior image quality in comparison to ultrasound and radiography and offer suitable methods for imaging embalmed human cadavers as a valuable addition to the dissection course. PMID:24156510
Use of CYBER 203 and CYBER 205 computers for three-dimensional transonic flow calculations
NASA Technical Reports Server (NTRS)
Melson, N. D.; Keller, J. D.
1983-01-01
Experiences are discussed for modifying two three-dimensional transonic flow computer programs (FLO 22 and FLO 27) for use on the CDC CYBER 203 computer system. Both programs were originally written for use on serial machines. Several methods were attempted to optimize the execution of the two programs on the vector machine: leaving the program in a scalar form (i.e., serial computation) with compiler software used to optimize and vectorize the program, vectorizing parts of the existing algorithm in the program, and incorporating a vectorizable algorithm (ZEBRA I or ZEBRA II) in the program. Comparison runs of the programs were made on CDC CYBER 175. CYBER 203, and two pipe CDC CYBER 205 computer systems.
Prediction of unsteady transonic flow around missile configurations
NASA Technical Reports Server (NTRS)
Nixon, D.; Reisenthel, P. H.; Torres, T. O.; Klopfer, G. H.
1990-01-01
This paper describes the preliminary development of a method for predicting the unsteady transonic flow around missiles at transonic and supersonic speeds, with the final goal of developing a computer code for use in aeroelastic calculations or during maneuvers. The basic equations derived for this method are an extension of those derived by Klopfer and Nixon (1989) for steady flow and are a subset of the Euler equations. In this approach, the five Euler equations are reduced to an equation similar to the three-dimensional unsteady potential equation, and a two-dimensional Poisson equation. In addition, one of the equations in this method is almost identical to the potential equation for which there are well tested computer codes, allowing the development of a prediction method based in part on proved technology.
Scope and applications of translation invariant wavelets to image registration
NASA Technical Reports Server (NTRS)
Chettri, Samir; LeMoigne, Jacqueline; Campbell, William
1997-01-01
The first part of this article introduces the notion of translation invariance in wavelets and discusses several wavelets that have this property. The second part discusses the possible applications of such wavelets to image registration. In the case of registration of affinely transformed images, we would conclude that the notion of translation invariance is not really necessary. What is needed is affine invariance and one way to do this is via the method of moment invariants. Wavelets or, in general, pyramid processing can then be combined with the method of moment invariants to reduce the computational load.
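The moment-invariant route mentioned above can be shown in miniature with Hu's first two invariants, built from normalized central moments and unchanged under translation (the full set also covers rotation and scale). The test image below is a synthetic rectangle, not registration imagery.

```python
# Hu's first two moment invariants; identical for an image and its translate.
import numpy as np

def central_moment(img, p, q):
    y, x = np.mgrid[:img.shape[0], :img.shape[1]].astype(float)
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    return ((x - xc) ** p * (y - yc) ** q * img).sum()

def hu_first_two(img):
    m00 = img.sum()
    eta = lambda p, q: central_moment(img, p, q) / m00 ** (1 + (p + q) / 2)
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2

img = np.zeros((64, 64))
img[20:35, 10:40] = 1.0                       # a rectangle
shifted = np.roll(img, (12, 7), axis=(0, 1))  # translated copy (no wrap here)
print(hu_first_two(img))
print(hu_first_two(shifted))                  # ~identical invariants
```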
Document Form and Character Recognition using SVM
NASA Astrophysics Data System (ADS)
Park, Sang-Sung; Shin, Young-Geun; Jung, Won-Kyo; Ahn, Dong-Kyu; Jang, Dong-Sik
2009-08-01
With the development of computers and information communication, EDI (Electronic Data Interchange) has been advancing. OCR (Optical Character Recognition) is a pattern recognition technology that supports EDI. OCR has contributed to automating much work that was previously done manually. However, building a more complete document database still requires considerable manual work to exclude unnecessary recognition results. To resolve this problem, we propose a document form based character recognition method in this study. The proposed method is divided into a document form recognition part and a character recognition part. In particular, in the character recognition part, characters are binarized and more accurate feature values are extracted by using the SVM algorithm.
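A character-recognition stage of this kind can be tried in a few lines on the classic scikit-learn digits set: binarize the images, then train and score a support vector classifier. This stands in for the paper's own form segmentation and feature extraction, which are not reproduced here.

```python
# SVM character recognition on binarised digit images.
from sklearn import datasets, svm
from sklearn.model_selection import train_test_split

digits = datasets.load_digits()
X = (digits.data > 7).astype(float)          # crude binarisation of 0-16 pixel values
X_tr, X_te, y_tr, y_te = train_test_split(X, digits.target, random_state=0)

clf = svm.SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
print("test accuracy:", round(clf.score(X_te, y_te), 3))
```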
NASA Astrophysics Data System (ADS)
Tubman, Norm; Whaley, Birgitta
The development of exponential scaling methods has seen great progress in tackling larger systems than previously thought possible. One such technique, full configuration interaction quantum Monte Carlo, allows exact diagonalization through stochastic sampling of determinants. The method derives its utility from the information in the matrix elements of the Hamiltonian, together with a stochastic projected wave function, which are used to explore the important parts of Hilbert space. However, a stochastic representation of the wave function is not required to search Hilbert space efficiently, and new deterministic approaches have recently been shown to find the important parts of determinant space efficiently. We shall discuss the technique of Adaptive Sampling Configuration Interaction (ASCI) and the related heat-bath Configuration Interaction approach for ground state and excited state simulations. We will present several applications for strongly correlated Hamiltonians. This work was supported through the Scientific Discovery through Advanced Computing (SciDAC) program funded by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences.
NASA Astrophysics Data System (ADS)
Sattarpanah Karganroudi, Sasan
The competitive industrial market demands manufacturing companies to provide the markets with a higher quality of production. The quality control department in industrial sectors verifies geometrical requirements of products with consistent tolerances. These requirements are presented in Geometric Dimensioning and Tolerancing (GD&T) standards. However, conventional measuring and dimensioning methods for manufactured parts are time-consuming and costly. Nowadays manual and tactile measuring methods have been replaced by Computer-Aided Inspection (CAI) methods. The CAI methods apply improvements in computational calculations and 3-D data acquisition devices (scanners) to compare the scan mesh of manufactured parts with the Computer-Aided Design (CAD) model. Metrology standards, such as ASME-Y14.5 and ISO-GPS, require implementing the inspection in free-state, wherein the part is only under its weight. Non-rigid parts are exempted from the free-state inspection rule because of their significant geometrical deviation in a free-state with respect to the tolerances. Despite the developments in CAI methods, inspection of non-rigid parts still remains a serious challenge. Conventional inspection methods apply complex fixtures for non-rigid parts to retrieve the functional shape of these parts on physical fixtures; however, the fabrication and setup of these fixtures are sophisticated and expensive. The cost of fixtures has doubled since the client and manufacturing sectors require repetitive and independent inspection fixtures. To eliminate the need for costly and time-consuming inspection fixtures, fixtureless inspection methods of non-rigid parts based on CAI methods have been developed. These methods aim at distinguishing flexible deformations of parts in a free-state from defects. Fixtureless inspection methods are required to be automatic, reliable, reasonably accurate and repeatable for non-rigid parts with complex shapes. The scan model, which is acquired as point clouds, represent the shape of a part in a free-state. Afterward, the inspection of defects is performed by comparing the scan and CAD models, but these models are presented in different coordinate systems. Indeed, the scan model is presented in the measurement coordinate system whereas the CAD model is introduced in the designed coordinate system. To accomplish the inspection and facilitate an accurate comparison between the models, the registration process is required to align the scan and CAD models in a common coordinate system. The registration includes a virtual compensation for the flexible deformation of the parts in a free-state. Then, the inspection is implemented as a geometrical comparison between the CAD and scan models. This thesis focuses on developing automatic and accurate fixtureless CAI methods for non-rigid parts along with assessing the robustness of the methods. To this end, an automatic fixtureless CAI method for non-rigid parts based on filtering registration points is developed to identify and quantify defects more accurately on the surface of scan models. The flexible deformation of parts in a free-state in our developed automatic fixtureless CAI method is compensated by applying FE non-rigid Registration (FENR) to deform the CAD model towards the scan mesh. The displacement boundary conditions (BCs) for FENR are determined based on the corresponding sample points, which are generated by the Generalized Numerical Inspection Fixture (GNIF) method on the CAD and scan models. 
These corresponding sample points are evenly distributed on the surface of the models. The comparison between this deformed CAD model and the scan mesh is intended to evaluate and quantify the defects on the scan model. However, some sample points can be located close to or on defect areas, which results in an inaccurate estimation of defects. These sample points are automatically filtered out in our CAI method based on curvature and von Mises stress criteria. After filtering, the remaining sample points are used in a new FENR, which allows an accurate evaluation of defects with respect to the tolerances. The performance and robustness of all CAI methods are generally required to be assessed with respect to actual measurements. This thesis also introduces a new validation metric for Verification and Validation (V&V) of CAI methods based on ASME recommendations. The developed V&V approach uses a nonparametric statistical hypothesis test, namely the Kolmogorov-Smirnov (K-S) test. In addition to validating the defect size, the K-S test allows a deeper evaluation based on the distance distribution of defects. The robustness of the CAI method with respect to uncertainties such as scanning noise is quantitatively assessed using the developed validation metric. Due to the compliance of non-rigid parts, a geometrically deviated part can still be assembled in the assembly-state. This thesis also presents a fixtureless CAI method for geometrically deviated (presenting defects) non-rigid parts to evaluate the feasibility of mounting these parts in the functional assembly-state. Our developed Virtual Mounting Assembly-State Inspection (VMASI) method performs a non-rigid registration to virtually mount the scan mesh in the assembly-state. To this end, the point cloud of the scan model representing the part in a free-state is deformed to meet the assembly constraints such as fixation position (e.g. mounting holes). In some cases, the functional shape of a deviated part can be retrieved by applying assembly loads, limited to permissible loads, on the surface of the part. The required assembly loads are estimated through our developed Restraining Pressures Optimization (RPO), aiming at displacing the deviated scan model to achieve the tolerance for mounting holes. Therefore, the deviated scan model can be assembled if the mounting holes on the predicted functional shape of the scan model attain the tolerance range. Different industrial parts are used to evaluate the performance of the methods developed in this thesis. The automatic inspection for identifying different types of small (local) and big (global) defects on the parts results in an accurate evaluation of defects. The robustness of this inspection method is also validated with respect to different levels of scanning noise, which shows promising results. Meanwhile, the VMASI method is performed on various parts with different types of defects, which shows that in some cases the functional shape of deviated parts can be retrieved by mounting them on a virtual fixture in the assembly-state under restraining loads.
Methods in Symbolic Computation and p-Adic Valuations of Polynomials
NASA Astrophysics Data System (ADS)
Guan, Xiao
Symbolic computation appears widely in many mathematical fields such as combinatorics, number theory and stochastic processes. The techniques created in the area of experimental mathematics provide us with efficient ways of symbolic computing and verification of complicated relations. Part I consists of three problems. The first one focuses on a unimodal sequence derived from a quartic integral. Many of its properties are explored with the help of hypergeometric representations and automatic proofs. The second problem tackles the generating function of the reciprocal of the Catalan numbers. It springs from the closed form given by Mathematica. Furthermore, three methods in special functions are used to justify this result. The third problem addresses closed form solutions for the moments of products of generalized elliptic integrals, which combine experimental mathematics and classical analysis. Part II concentrates on the p-adic valuations of polynomials from the perspective of trees. For a given polynomial f(n) indexed by the positive integers, the package developed in Mathematica creates a certain tree structure following a couple of rules. The evolution of such trees is studied both rigorously and experimentally from the viewpoint of field extensions, nonparametric statistics and random matrices.
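The tree construction of Part II can be sketched as follows: a node is a residue class mod p^k; it is a leaf once the sampled valuations v_p(f(n)) on that class are constant, and otherwise it splits into its p children mod p^(k+1). The polynomial, sample count and depth cap below are arbitrary choices, and this is a plain-Python sketch, not the author's Mathematica package.

```python
# Valuation tree of f(n) = n^2 + 1 at p = 2, built by residue-class splitting.
def vp(m, p):
    """p-adic valuation of a nonzero integer m."""
    v = 0
    while m % p == 0:
        m //= p
        v += 1
    return v

def node(f, p, a, k, depth):
    ns = [a + j * p ** k for j in range(1, 30)]          # samples with n = a mod p^k
    vals = {vp(f(n), p) for n in ns if f(n) != 0}
    pad = "  " * k
    if len(vals) == 1:                                   # valuation constant: leaf
        print(f"{pad}n = {a} mod {p**k}: v_p(f) = {vals.pop()}")
    elif k == depth:                                     # give up at the depth cap
        print(f"{pad}n = {a} mod {p**k}: not yet constant")
    else:                                                # split into p children
        print(f"{pad}n = {a} mod {p**k}: splits")
        for j in range(p):
            node(f, p, a + j * p ** k, k + 1, depth)

node(lambda n: n ** 2 + 1, 2, a=0, k=0, depth=4)
```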
High Performance Parallel Computational Nanotechnology
NASA Technical Reports Server (NTRS)
Saini, Subhash; Craw, James M. (Technical Monitor)
1995-01-01
At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics; and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to control mini robotic manipulators for positional control; and scalable numerical algorithms for reliability, verification and testability. There appears to be no fundamental obstacle to simulating molecular compilers and molecular computers on high performance parallel computers, just as the Boeing 777 was simulated on a computer before manufacturing it.
On Fitting a Multivariate Two-Part Latent Growth Model
Xu, Shu; Blozis, Shelley A.; Vandewater, Elizabeth A.
2017-01-01
A 2-part latent growth model can be used to analyze semicontinuous data to simultaneously study change in the probability that an individual engages in a behavior, and if engaged, change in the behavior. This article uses a Monte Carlo (MC) integration algorithm to study the interrelationships between the growth factors of 2 variables measured longitudinally where each variable can follow a 2-part latent growth model. A SAS macro implementing Mplus is developed to estimate the model to take into account the sampling uncertainty of this simulation-based computational approach. A sample of time-use data is used to show how maximum likelihood estimates can be obtained using a rectangular numerical integration method and an MC integration method. PMID:29333054
Multimodal determination of Rayleigh dispersion and attenuation curves using the circle fit method
NASA Astrophysics Data System (ADS)
Verachtert, R.; Lombaert, G.; Degrande, G.
2018-03-01
This paper introduces the circle fit method for the determination of multi-modal Rayleigh dispersion and attenuation curves as part of a Multichannel Analysis of Surface Waves (MASW) experiment. The wave field is transformed to the frequency-wavenumber (fk) domain using a discretized Hankel transform. In a Nyquist plot of the fk-spectrum, displaying the imaginary part against the real part, the Rayleigh wave modes correspond to circles. The experimental Rayleigh dispersion and attenuation curves are derived from the angular sweep of the central angle of these circles. The method can also be applied to the analytical fk-spectrum of the Green's function of a layered half-space in order to compute dispersion and attenuation curves, as an alternative to solving an eigenvalue problem. A MASW experiment is subsequently simulated for a site with a regular velocity profile and a site with a soft layer trapped between two stiffer layers. The performance of the circle fit method to determine the dispersion and attenuation curves is compared with the peak picking method and the half-power bandwidth method. The circle fit method is found to be the most accurate and robust method for the determination of the dispersion curves. When determining attenuation curves, the circle fit method and half-power bandwidth method are accurate if the mode exhibits a sharp peak in the fk-spectrum. Furthermore, simulated and theoretical attenuation curves determined with the circle fit method agree very well. A similar correspondence is not obtained when using the half-power bandwidth method. Finally, the circle fit method is applied to measurement data obtained for a MASW experiment at a site in Heverlee, Belgium. In order to validate the soil profile obtained from the inversion procedure, force-velocity transfer functions were computed and found in good correspondence with the experimental transfer functions, especially in the frequency range between 5 and 80 Hz.
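The geometric heart of the method is an ordinary circle fit in the Nyquist plane, for which the algebraic Kasa least-squares fit is one common choice (the paper's exact fitting procedure is not reproduced here). The "spectrum" points below are synthetic samples of a circle plus noise.

```python
# Kasa circle fit: x^2 + y^2 + D x + E y + F = 0 solved in least squares.
import numpy as np

def kasa_circle_fit(x, y):
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2, -E / 2                  # circle centre
    r = np.sqrt(cx ** 2 + cy ** 2 - F)       # circle radius
    return cx, cy, r

rng = np.random.default_rng(7)
theta = np.linspace(0.3, 4.0, 40)            # a partial arc, as near a real mode
x = 1.5 + 2.0 * np.cos(theta) + rng.normal(0, 0.02, 40)
y = -0.5 + 2.0 * np.sin(theta) + rng.normal(0, 0.02, 40)
print(kasa_circle_fit(x, y))                 # ~ (1.5, -0.5, 2.0)
```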
NASA Astrophysics Data System (ADS)
MacLean, L. S.; Romanowicz, B. A.; French, S.
2015-12-01
Seismic wavefield computations using the Spectral Element Method are now regularly used to recover tomographic images of the upper mantle and crust at the local, regional, and global scales (e.g. Fichtner et al., GJI, 2009; Tape et al., Science 2010; Lekic and Romanowicz, GJI, 2011; French and Romanowicz, GJI, 2014). However, the heaviness of the computations remains a challenge and contributes to limiting the resolution of the produced images. Source stacking, as suggested by Capdeville et al. (GJI, 2005), can considerably speed up the process by reducing the wavefield computations to one per set of N sources. This method was demonstrated through synthetic tests on low-frequency datasets and therefore should work for global mantle tomography. However, the large amplitudes of surface waves dominate the stacked seismograms, and individual phases can no longer be separated by windowing in the time domain. We have developed a processing approach that helps address this issue and demonstrate its usefulness through a series of synthetic tests performed at long periods (T > 60 s) on toy upper mantle models. The summed synthetics are computed using the CSEM code (Capdeville et al., 2002). As for the inverse part of the procedure, we use a quasi-Newton method, computing Frechet derivatives and the Hessian using normal mode perturbation theory.
Higgs-differential cross section at NNLO in dimensional regularisation
Dulat, Falko; Lionetti, Simone; Mistlberger, Bernhard; ...
2017-07-05
We present an analytic computation of the Higgs production cross section in the gluon fusion channel, which is differential in the components of the Higgs momentum and inclusive in the associated partonic radiation through NNLO in perturbative QCD. Our computation includes the necessary higher-order terms in the dimensional regulator beyond the finite part that are required for renormalisation and collinear factorisation at N^3LO. We outline in detail the computational methods which we employ. We present numerical predictions for realistic final state observables, specifically distributions for the decay products of the Higgs boson in the γγ decay channel.
NASA Astrophysics Data System (ADS)
Vereshchagin, Gregory V.; Aksenov, Alexey G.
2017-02-01
Preface; Acknowledgements; Acronyms and definitions; Introduction; Part I. Theoretical Foundations: 1. Basic concepts; 2. Kinetic equation; 3. Averaging; 4. Conservation laws and equilibrium; 5. Relativistic BBGKY hierarchy; 6. Basic parameters in gases and plasmas; Part II. Numerical Methods: 7. The basics of computational physics; 8. Direct integration of Boltzmann equations; 9. Multidimensional hydrodynamics; Part III. Applications: 10. Wave dispersion in relativistic plasma; 11. Thermalization in relativistic plasma; 12. Kinetics of particles in strong fields; 13. Compton scattering in astrophysics and cosmology; 14. Self-gravitating systems; 15. Neutrinos, gravitational collapse and supernovae; Appendices; Bibliography; Index.
Hierarchical Parallelism in Finite Difference Analysis of Heat Conduction
NASA Technical Reports Server (NTRS)
Padovan, Joseph; Krishna, Lala; Gute, Douglas
1997-01-01
Based on the concept of hierarchical parallelism, this research effort resulted in highly efficient parallel solution strategies for very large scale heat conduction problems. Overall, the method of hierarchical parallelism involves partitioning thermal models into several substructured levels wherein an optimal balance among the associated bandwidths is achieved. The details are described in this report, which is organized into two parts. Part 1 describes the parallel modelling methodology and associated multilevel direct, iterative and mixed solution schemes. Part 2 establishes both the formal and computational properties of the scheme.
Prediction of destination entry and retrieval times using keystroke-level models
DOT National Transportation Integrated Search
1998-04-01
Thirty-six drivers entered and retrieved destinations using an Ali-Scout navigation computer. Retrieval involved keying in part of the destination name, scrolling through a list of names, or a combination of those methods. Entry required keying in th...
TomoBank: a tomographic data repository for computational x-ray science
NASA Astrophysics Data System (ADS)
De Carlo, Francesco; Gürsoy, Doğa; Ching, Daniel J.; Joost Batenburg, K.; Ludwig, Wolfgang; Mancini, Lucia; Marone, Federica; Mokso, Rajmund; Pelt, Daniël M.; Sijbers, Jan; Rivers, Mark
2018-03-01
There is a widening gap between the fast advancement of computational methods for tomographic reconstruction and their successful implementation in production software at various synchrotron facilities. This is due in part to the lack of readily available instrument datasets and phantoms representative of real materials for validation and comparison of new numerical methods. Recent advancements in detector technology have made sub-second and multi-energy tomographic data collection possible (Gibbs et al 2015 Sci. Rep. 5 11824), but have also increased the demand to develop new reconstruction methods able to handle in situ (Pelt and Batenburg 2013 IEEE Trans. Image Process. 22 5238-51) and dynamic systems (Mohan et al 2015 IEEE Trans. Comput. Imaging 1 96-111) that can be quickly incorporated in beamline production software (Gürsoy et al 2014 J. Synchrotron Radiat. 21 1188-93). The x-ray tomography data bank, tomoBank, provides a repository of experimental and simulated datasets with the aim to foster collaboration among computational scientists, beamline scientists, and experimentalists and to accelerate the development and implementation of tomographic reconstruction methods for synchrotron facility production software by providing easy access to challenging datasets and their descriptors.
Lystrom, David J.
1972-01-01
Various methods of verifying real-time streamflow data are outlined in part II. Relatively large errors (those greater than 20-30 percent) can be detected readily by use of well-designed verification programs for a digital computer, and smaller errors can be detected only by discharge measurements and field observations. The capability to substitute a simulated discharge value for missing or erroneous data is incorporated in some of the verification routines described. The routines represent concepts ranging from basic statistical comparisons to complex watershed modeling and provide a selection from which real-time data users can choose a suitable level of verification.
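A minimal sketch of the basic statistical comparison described above, with hypothetical discharge values; the flagging threshold and the substitution of simulated values for flagged observations follow the report's description, but the function and data are illustrative:

```python
import numpy as np

def verify_discharge(observed, simulated, tolerance=0.25):
    """Flag real-time discharge values deviating from a simulated estimate
    by more than `tolerance` (25% here, within the 20-30% range that simple
    verification programs detect readily) and substitute the simulated
    value wherever an observation is flagged."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    rel_err = np.abs(observed - simulated) / simulated
    flagged = rel_err > tolerance
    cleaned = np.where(flagged, simulated, observed)
    return flagged, cleaned

obs = [102.0, 98.0, 250.0, 101.0]     # hypothetical hourly discharges, cfs
sim = [100.0, 100.0, 100.0, 100.0]    # watershed-model estimates
print(verify_discharge(obs, sim))     # the 250.0 reading is flagged and replaced
```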
Calculating Relativistic Transition Matrix Elements for Hydrogenic Atoms Using Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Alexander, Steven; Coldwell, R. L.
2015-03-01
The nonrelativistic transition matrix elements for hydrogen atoms can be computed exactly and these expressions are given in a number of classic textbooks. The relativistic counterparts of these equations can also be computed exactly but these expressions have been described in only a few places in the literature. In part, this is because the relativistic equations lack the elegant simplicity of the nonrelativistic equations. In this poster I will describe how variational Monte Carlo methods can be used to calculate the energy and properties of relativistic hydrogen atoms and how the wavefunctions for these systems can be used to calculate transition matrix elements.
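As a concrete illustration of the variational Monte Carlo machinery (for the nonrelativistic hydrogen atom, not the relativistic case treated in the poster), a Metropolis walker sampling |psi|^2 for the trial wavefunction psi = exp(-alpha*r) yields the variational energy as the average local energy:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.9                  # variational parameter; exact ground state at alpha = 1

def local_energy(r):
    # E_L = -alpha^2/2 + (alpha - 1)/r for psi = exp(-alpha*r), atomic units
    return -0.5 * alpha**2 + (alpha - 1.0) / r

R = np.array([0.5, 0.5, 0.5])          # walker position
energies = []
for step in range(50_000):
    R_new = R + 0.3 * rng.normal(size=3)
    r_old, r_new = np.linalg.norm(R), np.linalg.norm(R_new)
    # Metropolis acceptance on |psi|^2 = exp(-2*alpha*r)
    if rng.random() < np.exp(-2 * alpha * (r_new - r_old)):
        R = R_new
    if step > 5_000:                    # discard burn-in
        energies.append(local_energy(np.linalg.norm(R)))

print(np.mean(energies))                # near -0.495 hartree for alpha = 0.9
```

The exact variational energy for this trial function is alpha^2/2 - alpha, so the Monte Carlo average can be checked directly.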
NASA Astrophysics Data System (ADS)
Elantkowska, Magdalena; Ruczkowski, Jarosław; Sikorski, Andrzej; Dembczyński, Jerzy
2017-11-01
A parametric analysis of the hyperfine structure (hfs) for the even parity configurations of atomic terbium (Tb I) is presented in this work. We introduce the complete set of 4fN-core states in our high-performance computing (HPC) calculations. For calculations of the huge hyperfine structure matrix, requiring approximately 5000 hours when run on a single CPU, we propose methods utilizing a personal computer cluster or, alternatively, a cluster of Microsoft Azure virtual machines (VMs). These methods give a factor of 12 performance boost, enabling the calculations to complete in an acceptable time.
ERIC Educational Resources Information Center
Altmann, Berthold; Brown, William G.
The first-generation Approach by Concept (ABC) storage and retrieval method, a method which utilizes as a subject approach appropriate standardized English-language statements processed and printed in a permuted index format, underwent a performance test, the primary objective of which was to spot deficiencies and to develop a second-generation…
NASA Technical Reports Server (NTRS)
Middleton, W. D.; Lundry, J. L.; Coleman, R. G.
1976-01-01
An integrated system of computer programs was developed for the design and analysis of supersonic configurations. The system uses linearized theory methods for the calculation of surface pressures and supersonic area rule concepts in combination with linearized theory for calculation of aerodynamic force coefficients. Interactive graphics are optional at the user's request. This user's manual contains a description of the system, an explanation of its usage, the input definition, and example output.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuller, L.C.
The ORCENT-II digital computer program will perform calculations at valves-wide-open design conditions, maximum guaranteed rating conditions, and an approximation of part-load conditions for steam turbine cycles supplied with throttle steam characteristic of contemporary light-water reactors. Turbine performance calculations are based on a method published by the General Electric Company. Output includes all information normally shown on a turbine-cycle heat balance diagram. The program is written in FORTRAN IV for the IBM System 360 digital computers at the Oak Ridge National Laboratory.
Planetary Data Workshop, Part 2
NASA Technical Reports Server (NTRS)
1984-01-01
Technical aspects of the Planetary Data System (PDS) are addressed. Methods and tools for maintaining and accessing large, complex sets of data are discussed. The specific software and applications needed for processing imaging and non-imaging science data are reviewed. The need for specific software that provides users with information on the location and geometry of scientific observations is discussed. Computer networks and the user interface to the PDS are covered, along with the computer hardware available to this data system.
NASA Astrophysics Data System (ADS)
Krishnanathan, Kirubhakaran; Anderson, Sean R.; Billings, Stephen A.; Kadirkamanathan, Visakan
2016-11-01
In this paper, we derive a system identification framework for continuous-time nonlinear systems, for the first time using a simulation-focused computational Bayesian approach. Simulation approaches to nonlinear system identification have been shown to outperform regression methods under certain conditions, such as non-persistently exciting inputs and fast-sampling. We use the approximate Bayesian computation (ABC) algorithm to perform simulation-based inference of model parameters. The framework has the following main advantages: (1) parameter distributions are intrinsically generated, giving the user a clear description of uncertainty, (2) the simulation approach avoids the difficult problem of estimating signal derivatives as is common with other continuous-time methods, and (3) as noted above, the simulation approach improves identification under conditions of non-persistently exciting inputs and fast-sampling. Term selection is performed by judging parameter significance using parameter distributions that are intrinsically generated as part of the ABC procedure. The results from a numerical example demonstrate that the method performs well in noisy scenarios, especially in comparison to competing techniques that rely on signal derivative estimation.
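A minimal ABC rejection sampler illustrates the simulation-based inference step; the toy system, prior, distance measure, and tolerance below are assumptions made for the sketch and are far simpler than the continuous-time identification framework of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(theta, x0=1.0, dt=0.01, n=500):
    """Euler simulation of a toy nonlinear system dx/dt = -theta * x**3."""
    x = np.empty(n); x[0] = x0
    for k in range(n - 1):
        x[k + 1] = x[k] - dt * theta * x[k] ** 3
    return x

theta_true = 2.0
data = simulate(theta_true) + 0.01 * rng.normal(size=500)   # noisy observations

# ABC rejection: draw from the prior, simulate, keep parameters whose
# simulated output lies close to the data; accepted draws approximate
# the posterior distribution and so convey parameter uncertainty directly
accepted = []
for _ in range(5_000):
    theta = rng.uniform(0.0, 5.0)                 # prior draw
    dist = np.sqrt(np.mean((simulate(theta) - data) ** 2))
    if dist < 0.02:                               # tolerance epsilon
        accepted.append(theta)

print(len(accepted), np.mean(accepted))           # posterior sample size and mean
```

Note that the distance is computed on simulated trajectories, so no signal derivatives are ever estimated, which is advantage (2) claimed above.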
NASA Astrophysics Data System (ADS)
Köbler, Jonathan; Schneider, Matti; Ospald, Felix; Andrä, Heiko; Müller, Ralf
2018-06-01
For short fiber reinforced plastic parts the local fiber orientation has a strong influence on the mechanical properties. To enable multiscale computations using surrogate models we advocate a two-step identification strategy. Firstly, for a number of sample orientations an effective model is derived by numerical methods available in the literature. Secondly, to cover a general orientation state, these effective models are interpolated. In this article we develop a novel and effective strategy to carry out this interpolation. Firstly, taking into account symmetry arguments, we reduce the fiber orientation phase space to a triangle in R^2. For an associated triangulation of this triangle we furnish each node with a surrogate model. Then, we use linear interpolation on the fiber orientation triangle to equip each fiber orientation state with an effective stress. The proposed approach is quite general, and works for any physically nonlinear constitutive law on the micro-scale, as long as surrogate models for single fiber orientation states can be extracted. To demonstrate the capabilities of our scheme we study the viscoelastic creep behavior of short glass fiber reinforced PA66, and use Schapery's collocation method together with FFT-based computational homogenization to derive single orientation state effective models. We discuss the efficient implementation of our method, and present results of a component scale computation on a benchmark component using ABAQUS®.
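The interpolation step reduces to barycentric weighting inside the orientation triangle; a minimal sketch with scalar stand-ins for the nodal surrogate models (the vertex labels and values are hypothetical, and in practice the nodal quantities are full effective models rather than scalars):

```python
import numpy as np

def barycentric_weights(p, v0, v1, v2):
    """Barycentric coordinates of point p inside triangle (v0, v1, v2)."""
    T = np.column_stack([v1 - v0, v2 - v0])
    l1, l2 = np.linalg.solve(T, p - v0)
    return np.array([1.0 - l1 - l2, l1, l2])

# hypothetical effective stiffness values stored at the three vertices
# of a fiber-orientation triangle
verts = [np.array([0.0, 0.0]),    # e.g. an isotropic orientation state
         np.array([1.0, 0.0]),    # unidirectional
         np.array([0.5, 0.5])]    # planar isotropic
C_nodes = np.array([3.0, 9.0, 5.0])

p = np.array([0.4, 0.2])                  # query orientation state
w = barycentric_weights(p, *verts)
print(w, w @ C_nodes)                     # weights and interpolated effective model
```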
NASA Astrophysics Data System (ADS)
Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan
2017-11-01
Precipitation plays an important role in determining the climate of a region. Precise estimation of precipitation is required to manage and plan water resources, as well as for other related applications in hydrology, climatology, meteorology and agriculture. Time series of hydrologic variables such as precipitation are composed of deterministic and stochastic parts. Despite this fact, the stochastic part of the precipitation data is not usually considered in modeling of the precipitation process. As an innovation, the present study introduces three new hybrid models by integrating soft computing methods, including multivariate adaptive regression splines (MARS), Bayesian networks (BN) and gene expression programming (GEP), with a time series model, namely generalized autoregressive conditional heteroscedasticity (GARCH), for modeling of the monthly precipitation. For this purpose, the deterministic (obtained by soft computing methods) and stochastic (obtained by the GARCH time series model) parts are combined with each other. To carry out this research, monthly precipitation data of the Babolsar, Bandar Anzali, Gorgan, Ramsar, Tehran and Urmia stations, with different climates in Iran, were used during the period of 1965-2014. Root mean square error (RMSE), relative root mean square error (RRMSE), mean absolute error (MAE) and the determination coefficient (R2) were employed to evaluate the performance of the conventional/single MARS, BN and GEP models, as well as the proposed MARS-GARCH, BN-GARCH and GEP-GARCH hybrid models. It was found that the proposed novel models are more precise than the single MARS, BN and GEP models. Overall, MARS-GARCH and BN-GARCH models yielded better accuracy than GEP-GARCH. The results of the present study confirmed the suitability of the proposed methodology for precise modeling of precipitation.
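The deterministic-plus-stochastic decomposition can be sketched as follows; a simple harmonic regression stands in for the MARS/BN/GEP stage, the synthetic series is invented for the example, and the `arch` package (an assumption about tooling, not the authors' software) supplies the GARCH(1,1) fit to the residuals:

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(3)
t = np.arange(600)
# toy monthly precipitation: deterministic seasonal part plus noise
precip = 50 + 20 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 5, 600)

# deterministic part: a harmonic regression stands in for MARS/BN/GEP;
# the stochastic part is whatever remains in the residual series
X = np.column_stack([np.sin(2 * np.pi * t / 12),
                     np.cos(2 * np.pi * t / 12),
                     np.ones_like(t)])
coef, *_ = np.linalg.lstsq(X, precip, rcond=None)
resid = precip - X @ coef

# GARCH(1,1) fitted to the residuals models the stochastic part
garch = arch_model(resid, vol="GARCH", p=1, q=1).fit(disp="off")
print(garch.params)          # mu, omega, alpha[1], beta[1]
```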
Parallelization of Lower-Upper Symmetric Gauss-Seidel Method for Chemically Reacting Flow
NASA Technical Reports Server (NTRS)
Yoon, Seokkwan; Jost, Gabriele; Chang, Sherry
2005-01-01
Development of technologies for exploration of the solar system has revived an interest in computational simulation of chemically reacting flows, since planetary probe vehicles exhibit non-equilibrium phenomena during atmospheric entry of a planet or a moon, as well as during reentry to the Earth. Stability in combustion is essential for new propulsion systems. Numerical solution of real-gas flows often increases computational work by an order of magnitude compared to perfect gas flow, partly because of the increased complexity of the equations to solve. Recently, as part of Project Columbia, NASA has integrated a cluster of interconnected SGI Altix systems to provide a ten-fold increase in current supercomputing capacity that includes an SGI Origin system. Both the new and existing machines are based on cache coherent non-uniform memory access architecture. The Lower-Upper Symmetric Gauss-Seidel (LU-SGS) relaxation method has been implemented into both perfect and real gas flow codes, including the Real-Gas Aerodynamic Simulator (RGAS). However, the vectorized RGAS code runs inefficiently on cache-based shared-memory machines such as the SGI systems. Parallelization of a Gauss-Seidel method is nontrivial due to its sequential nature. The LU-SGS method has been vectorized on an oblique plane in the INS3D-LU code, which has been one of the base codes for the NAS Parallel Benchmarks. The oblique plane has been called a hyperplane by computer scientists. It is straightforward to parallelize a Gauss-Seidel method by partitioning the hyperplanes once they are formed. Another way of parallelization is to schedule processors like a pipeline using software. Both hyperplane and pipeline methods have been implemented using OpenMP directives. The present paper reports the performance of the parallelized RGAS code on SGI Origin and Altix systems.
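The hyperplane idea is that in a Gauss-Seidel sweep of a 3-D stencil, every cell with the same index sum i + j + k depends only on neighboring hyperplanes, so all cells on one hyperplane can be updated concurrently. A serial Python sketch of the ordering on an assumed Laplace test problem (the per-hyperplane cell list is where a parallel dispatch to threads would go; this is an illustration of the technique, not the RGAS implementation):

```python
import numpy as np

n = 32
phi = np.zeros((n, n, n))
phi[0, :, :] = 1.0                      # boundary condition

# Gauss-Seidel for the 3-D Laplace equation: cells on the hyperplane
# i + j + k = h only reference neighbors on hyperplanes h - 1 and h + 1,
# so every cell in `cells` below could be updated in parallel
for sweep in range(50):
    for h in range(3, 3 * (n - 2) + 1):           # loop over hyperplanes
        cells = [(i, j, h - i - j)
                 for i in range(1, n - 1)
                 for j in range(1, n - 1)
                 if 1 <= h - i - j < n - 1]
        for i, j, k in cells:                     # parallelizable loop
            phi[i, j, k] = (phi[i-1, j, k] + phi[i+1, j, k] +
                            phi[i, j-1, k] + phi[i, j+1, k] +
                            phi[i, j, k-1] + phi[i, j, k+1]) / 6.0
```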
PREFACE: New trends in Computer Simulations in Physics and not only in physics
NASA Astrophysics Data System (ADS)
Shchur, Lev N.; Krashakov, Serge A.
2016-02-01
In this volume we have collected papers based on the presentations given at the International Conference on Computer Simulations in Physics and beyond (CSP2015), held in Moscow, September 6-10, 2015. We hope that this volume will be helpful and scientifically interesting for readers. The Conference was organized for the first time with the common efforts of the Moscow Institute for Electronics and Mathematics (MIEM) of the National Research University Higher School of Economics, the Landau Institute for Theoretical Physics, and the Science Center in Chernogolovka. The name of the Conference emphasizes the multidisciplinary nature of computational physics, whose methods are applied to a broad range of current research in science and society. The choice of venue was motivated by the multidisciplinary character of the MIEM, a former independent university which has recently become part of the National Research University Higher School of Economics. The Conference Computer Simulations in Physics and beyond (CSP) is planned to be organized biannually. This year's Conference featured 99 presentations, including 21 plenary and invited talks, ranging from the analysis of Irish myths with recent methods of statistical physics to computing with the novel quantum computers D-Wave and D-Wave2. This volume covers various areas of computational physics and emerging subjects within the computational physics community. Each section was preceded by invited talks presenting the latest algorithms and methods in computational physics, as well as new scientific results. Both parallel and poster sessions paid special attention to numerical methods, applications and results. For all the abstracts presented at the conference please follow the link http://csp2015.ac.ru/files/book5x.pdf
Multilocus lod scores in large pedigrees: combination of exact and approximate calculations.
Tong, Liping; Thompson, Elizabeth
2008-01-01
To detect the positions of disease loci, lod scores are calculated at multiple chromosomal positions given trait and marker data on members of pedigrees. Exact lod score calculations are often impossible when the size of the pedigree and the number of markers are both large. In this case, a Markov Chain Monte Carlo (MCMC) approach provides an approximation. However, to provide accurate results, mixing performance is always a key issue in these MCMC methods. In this paper, we propose two methods to improve MCMC sampling and hence obtain more accurate lod score estimates in shorter computation time. The first improvement generalizes the block-Gibbs meiosis (M) sampler to multiple meiosis (MM) sampler in which multiple meioses are updated jointly, across all loci. The second one divides the computations on a large pedigree into several parts by conditioning on the haplotypes of some 'key' individuals. We perform exact calculations for the descendant parts where more data are often available, and combine this information with sampling of the hidden variables in the ancestral parts. Our approaches are expected to be most useful for data on a large pedigree with a lot of missing data. (c) 2007 S. Karger AG, Basel
Simulating the x-ray image contrast to setup techniques with desired flaw detectability
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2015-04-01
The paper provides simulation data extending the author's previous work on a model for estimating the detectability of crack-like flaws in radiography. The methodology was developed to help in implementation of the NASA Special x-ray radiography qualification, but is generically applicable to radiography. The paper describes a method for characterizing the detector resolution, and the applicability of ASTM E 2737 resolution requirements to the model is also discussed. The paper describes a model for simulating the detector resolution. A computer calculator application, discussed here, also performs predicted contrast and signal-to-noise ratio calculations. Results of various simulation runs in calculating the x-ray flaw size parameter and image contrast for varying input parameters such as crack depth, crack width, part thickness, x-ray angle, part-to-detector distance, part-to-source distance, source size, and detector sensitivity and resolution are given as 3D surfaces. These results demonstrate the effect of the input parameters on the flaw size parameter and the simulated image contrast of the crack. These simulations demonstrate the utility of the flaw size parameter model in setting up x-ray techniques that provide the desired flaw detectability in radiography. The method is applicable to film radiography, computed radiography, and digital radiography.
3D image acquisition by fiber-based fringe projection
NASA Astrophysics Data System (ADS)
Pfeifer, Tilo; Driessen, Sascha
2005-02-01
In macroscopic production processes several measuring methods are used to assure the quality of 3D parts. One of the most widespread techniques is fringe projection: a fast and accurate method to obtain the topography of a part as a computer file which can be processed in further steps, e.g. to compare the measured part to a given CAD file. In this article it will be shown how the fringe projection method is applied to a fiber-optic system. The fringes generated by a miniaturized fringe projector (MiniRot) are first projected onto the front-end of an image guide using special optics. The image guide serves as a transmitter for the fringes in order to get them onto the surface of a micro part. A second image guide is used to observe the micro part. It is mounted at an angle relative to the illuminating image guide so that the triangulation condition is fulfilled. With a CCD camera connected to the second image guide the projected fringes are recorded, and these data are analyzed by an image processing system.
Stehle, A; Gross, M
1998-12-01
With the increasing capacity of personal computers, more and more multimedia training programs are becoming available which make use of these possibilities. Computer-based presentation is usually interesting because it is visually attractive. However, the extent to which computer-based training programs correspond to international standards of quality in software ergonomics has never been the subject of systematic research. Another question is how much these programs motivate learning and what increase in knowledge can be achieved by using them. Using a multimedia interactive training program developed in our facility, 100 medical students were asked to evaluate the program after they had been using it for about one hour. In a questionnaire they first rated suitability for the task, self-descriptiveness, controllability, conformity with user expectation, error tolerance, suitability for individualization, and suitability for learning on a bipolar scale from "-3" to "+3" (in numbers 1, worst result, to 7, best result). The median values achieved were rated between 6.0 and 6.2; the software ergonomic criteria of the program ranged from good to very good. The second part was a subjective evaluation of the program's ability to deliver "medical knowledge which is relevant for the exam" (median = 6.0), "knowledge about systematic procedure in medicine" (median = 5.5), "knowledge about sensible use of diagnostic methods" (median = 6.0), "knowledge about clinical methods", and "experience with selective learning" (median = 6.0). This part was also rated good to very good. The third part of the questionnaire involved a pretest-posttest comparison. Two groups of students were asked how much benefit they had achieved by using the program. It was shown that the students were able to answer the exam questions significantly better than the control questions after they had used the program. This study confirms that the interactive computer-based training program is very well suited for providing knowledge in an appealing manner in an instructional setting.
Vareková, R Svobodová; Koca, J
2006-02-01
The most common way to calculate charge distribution in a molecule is ab initio quantum mechanics (QM). Some faster alternatives to QM have also been developed, the so-called "equalization methods" EEM and ABEEM, which are based on DFT. We have implemented and optimized the EEM and ABEEM methods and created the EEM SOLVER and ABEEM SOLVER programs. It has been found that the most time-consuming part of equalization methods is the reduction of the matrix belonging to the equation system generated by the method. Therefore, for both methods this part was replaced by the parallel algorithm WIRS and implemented within the PVM environment. The parallelized versions of the programs EEM SOLVER and ABEEM SOLVER showed promising results, especially on a single computer with several processors (compact PVM). The implemented programs are available through the Web page http://ncbr.chemi.muni.cz/~n19n/eem_abeem.
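The EEM charge calculation amounts to solving one bordered linear system, whose reduction is the step the authors parallelized; a minimal dense-solver sketch, with invented electronegativity/hardness parameters and geometry (real EEM programs use calibrated empirical parameters and far larger molecules):

```python
import numpy as np

def eem_charges(coords, chi, eta, Q_total=0.0):
    """Solve the EEM linear system for atomic charges.

    For each atom i: chi_i + 2*eta_i*q_i + sum_{j != i} q_j / R_ij = chi_bar,
    plus the total-charge constraint sum_i q_i = Q_total. Unknowns are the
    N charges and the equalized molecular electronegativity chi_bar."""
    N = len(chi)
    A = np.zeros((N + 1, N + 1))
    b = np.zeros(N + 1)
    for i in range(N):
        A[i, i] = 2.0 * eta[i]
        for j in range(N):
            if i != j:
                A[i, j] = 1.0 / np.linalg.norm(coords[i] - coords[j])
        A[i, N] = -1.0            # -chi_bar moved to the left-hand side
        b[i] = -chi[i]
    A[N, :N] = 1.0                # charge-conservation row
    b[N] = Q_total
    q = np.linalg.solve(A, b)     # the dense solve that was parallelized
    return q[:N], q[N]            # charges and chi_bar

coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.1]])   # hypothetical diatomic
charges, chi_bar = eem_charges(coords, chi=[2.0, 3.1], eta=[1.5, 1.8])
print(charges, chi_bar)
```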
A first-principle calculation of the XANES spectrum of Cu2+ in water
NASA Astrophysics Data System (ADS)
La Penna, G.; Minicozzi, V.; Morante, S.; Rossi, G. C.; Stellato, F.
2015-09-01
The progress in high performance computing we are witnessing today offers the possibility of accurate electron density calculations of systems in realistic physico-chemical conditions. In this paper, we present a strategy aimed at performing a first-principle computation of the low energy part of the X-ray Absorption Spectroscopy (XAS) spectrum based on the density functional theory calculation of the electronic potential. To test its effectiveness, we apply the method to the computation of the X-ray absorption near edge structure part of the XAS spectrum in the paradigmatic, but simple case of Cu2+ in water. In order to take into account the effect of the metal site structure fluctuations in determining the experimental signal, the theoretical spectrum is evaluated as the average over the computed spectra of a statistically significant number of simulated metal site configurations. The comparison of experimental data with theoretical calculations suggests that Cu2+ lives preferentially in a square-pyramidal geometry. The remarkable success of this approach in the interpretation of XAS data makes us optimistic about the possibility of extending the computational strategy we have outlined to the more interesting case of molecules of biological relevance bound to transition metal ions.
Thimmaiah, Tim; Voje, William E; Carothers, James M
2015-01-01
With progress toward inexpensive, large-scale DNA assembly, the demand for simulation tools that allow the rapid construction of synthetic biological devices with predictable behaviors continues to increase. By combining engineered transcript components, such as ribosome binding sites, transcriptional terminators, ligand-binding aptamers, catalytic ribozymes, and aptamer-controlled ribozymes (aptazymes), gene expression in bacteria can be fine-tuned, with many corollaries and applications in yeast and mammalian cells. The successful design of genetic constructs that implement these kinds of RNA-based control mechanisms requires modeling and analyzing kinetically determined co-transcriptional folding pathways. Transcript design methods using stochastic kinetic folding simulations to search spacer sequence libraries for motifs enabling the assembly of RNA component parts into static ribozyme- and dynamic aptazyme-regulated expression devices with quantitatively predictable functions (rREDs and aREDs, respectively) have been described (Carothers et al., Science 334:1716-1719, 2011). Here, we provide a detailed practical procedure for computational transcript design by illustrating a high throughput, multiprocessor approach for evaluating spacer sequences and generating functional rREDs. This chapter is written as a tutorial, complete with pseudo-code and step-by-step instructions for setting up a computational cluster with an Amazon, Inc. web server and performing the large numbers of kinefold-based stochastic kinetic co-transcriptional folding simulations needed to design functional rREDs and aREDs. The method described here should be broadly applicable for designing and analyzing a variety of synthetic RNA parts, devices and transcripts.
SU-C-209-06: Improving X-Ray Imaging with Computer Vision and Augmented Reality
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacDougall, R.D.; Scherrer, B; Don, S
Purpose: To determine the feasibility of using a computer vision algorithm and augmented reality interface to reduce repeat rates and improve consistency of image quality and patient exposure in general radiography. Methods: A prototype device, designed for use with commercially available hardware (Microsoft Kinect 2.0) capable of depth sensing and high resolution/frame rate video, was mounted to the x-ray tube housing as part of a Philips DigitalDiagnost digital radiography room. Depth data and video were streamed to a Windows 10 PC. Proprietary software created an augmented reality interface where overlays displayed selectable information projected over real-time video of the patient. The information displayed prior to and during x-ray acquisition included: recognition and position of ordered body part, position of image receptor, thickness of anatomy, location of AEC cells, collimated x-ray field, degree of patient motion and suggested x-ray technique. Pre-clinical data were collected in a volunteer study to validate patient thickness measurements; x-ray images were not acquired. Results: Proprietary software correctly identified the ordered body part, measured patient motion, and calculated thickness of anatomy. Pre-clinical data demonstrated accuracy and precision of body part thickness measurement when compared with other methods (e.g. laser measurement tool). Thickness measurements provided the basis for developing a database of thickness-based technique charts that can be automatically displayed to the technologist. Conclusion: The utilization of computer vision and commercial hardware to create an augmented reality view of the patient and imaging equipment has the potential to drastically improve the quality and safety of x-ray imaging by reducing repeats and optimizing technique based on patient thickness. Society of Pediatric Radiology Pilot Grant; Washington University Bear Cub Fund.
49 CFR Appendix A to Part 227 - Noise Exposure Computation
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 4 2011-10-01 2011-10-01 false Noise Exposure Computation A Appendix A to Part... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION OCCUPATIONAL NOISE EXPOSURE Pt. 227, App. A Appendix A to Part 227—Noise Exposure Computation This appendix is mandatory. I. Computation of Employee Noise Exposure A...
49 CFR Appendix A to Part 227 - Noise Exposure Computation
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 4 2010-10-01 2010-10-01 false Noise Exposure Computation A Appendix A to Part... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION OCCUPATIONAL NOISE EXPOSURE Pt. 227, App. A Appendix A to Part 227—Noise Exposure Computation This appendix is mandatory. I. Computation of Employee Noise Exposure A...
Numerical characteristics of quantum computer simulation
NASA Astrophysics Data System (ADS)
Chernyavskiy, A.; Khamitov, K.; Teplov, A.; Voevodin, V.; Voevodin, Vl.
2016-12-01
The simulation of quantum circuits is significantly important for the implementation of quantum information technologies. The main difficulty of such modeling is the exponential growth of dimensionality, so the use of modern high-performance parallel computing is essential. As is well known, an arbitrary quantum computation in the circuit model can be carried out using only single- and two-qubit gates, and we analyze the computational structure and properties of the simulation of such gates. The unique properties of quantum nature lead to the computational properties of the considered algorithms: quantum parallelism makes the simulation of quantum gates highly parallel, while quantum entanglement leads to the problem of computational locality during simulation. We use the methodology of the AlgoWiki project (algowiki-project.org) to analyze the algorithm. This methodology consists of theoretical (sequential and parallel complexity, macro structure, and visual information graph) and experimental (locality and memory access, scalability, and more specific dynamic characteristics) parts. The experimental part was carried out using the petascale Lomonosov supercomputer (Moscow State University, Russia). We show that the simulation of quantum gates is a good basis for the research and testing of development methods for data-intensive parallel software, and the considered methodology of analysis can be successfully used for the improvement of algorithms in quantum information science.
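A minimal state-vector sketch of the quantum parallelism mentioned above: applying one single-qubit gate touches every amplitude, but the work decomposes into 2^(n-1) independent 2x2 multiplies (the function and example are illustrative, not the code analyzed in the paper):

```python
import numpy as np

def apply_single_qubit_gate(state, gate, target, n_qubits):
    """Apply a 2x2 gate to qubit `target` of an n-qubit state vector.

    Reshaping isolates the target axis; the single matrix multiply then
    acts on 2**(n-1) independent amplitude pairs at once, which is the
    source of the 'quantum parallelism' in the simulation."""
    psi = state.reshape([2] * n_qubits)
    psi = np.moveaxis(psi, target, 0).reshape(2, -1)
    psi = gate @ psi                      # all amplitude pairs updated together
    psi = np.moveaxis(psi.reshape([2] * n_qubits), 0, target)
    return psi.reshape(-1)

n = 3
state = np.zeros(2 ** n, dtype=complex); state[0] = 1.0   # |000>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)              # Hadamard gate
print(apply_single_qubit_gate(state, H, target=0, n_qubits=n))
```

Two-qubit gates work the same way with two isolated axes, but entangling gates couple amplitudes that are far apart in memory, which is exactly the locality problem the paper studies.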
N-person differential games. Part 1: Duality-finite element methods
NASA Technical Reports Server (NTRS)
Chen, G.; Zheng, Q.
1983-01-01
The duality approach, which is motivated by computational needs and proceeds by introducing N + 1 Lagrange multipliers, is addressed. For N-person linear quadratic games, the primal min-max problem is shown to be equivalent to the dual min-max problem.
Roy, Tapta Kanchan; Sharma, Rahul; Gerber, R Benny
2016-01-21
First-principles quantum calculations for the anharmonic vibrational spectroscopy of three protected dipeptides are carried out and compared with experimental data. Using hybrid HF/MP2 potentials, the Vibrational Self-Consistent Field with Second-Order Perturbation Correction (VSCF-PT2) algorithm is used to compute the spectra without any ad hoc scaling or fitting. All of the vibrational modes (135 for the largest system) are treated quantum mechanically and anharmonically, using full pair-wise coupling potentials to represent the interaction between different modes. In the hybrid potential scheme the MP2 method is used for the harmonic part of the potential and a modified HF method is used for the anharmonic part. The overall agreement between computed spectra and experiment is very good and reveals different signatures for different conformers. This study shows that first-principles spectroscopic calculations of good accuracy are possible for dipeptides, and hence opens possibilities for the determination of dipeptide conformer structures by comparison of spectroscopic calculations with experiment.
Double-multiple streamtube model for Darrieus wind turbines
NASA Technical Reports Server (NTRS)
Paraschivoiu, I.
1981-01-01
An analytical model is proposed for calculating the rotor performance and aerodynamic blade forces for Darrieus wind turbines with curved blades. The method of analysis uses a multiple-streamtube model, divided into two parts: one modeling the upstream half-cycle of the rotor and the other the downstream half-cycle. The upwind and downwind components of the induced velocities at each level of the rotor were obtained using the principle of two actuator disks in tandem. Variation of the induced velocities in the two parts of the rotor produces larger forces in the upstream zone and smaller forces in the downstream zone. Comparisons of the overall rotor performance with previous methods and field test data show the significant improvement obtained with the present model. The calculations were made using the computer code CARDAA developed at IREQ. The double-multiple streamtube model presented has two major advantages: it requires a much shorter computer time than the three-dimensional vortex model and is more accurate than the multiple-streamtube model in predicting the aerodynamic blade loads.
Three-Dimensional Computed Tomography as a Method for Finding Die Attach Voids in Diodes
NASA Technical Reports Server (NTRS)
Brahm, E. N.; Rolin, T. D.
2010-01-01
NASA analyzes electrical, electronic, and electromechanical (EEE) parts used in space vehicles to understand failure modes of these components. The diode is an EEE part critical to NASA missions that can fail due to excessive voiding in the die attach. Metallography, one established method for studying the die attach, is a time-intensive, destructive, and equivocal process whereby mechanical grinding of the diodes is performed to reveal voiding in the die attach. Problems such as die attach pull-out tend to complicate results and can lead to erroneous conclusions. The objective of this study is to determine whether three-dimensional computed tomography (3DCT), a nondestructive technique, is a viable alternative to metallography for detecting die attach voiding. The die attach voiding in two-dimensional planes created from 3DCT scans was compared to several physical cross sections of the same diode to determine whether the 3DCT scan accurately recreates the die attach volumetric variability.
Ligand design by a combinatorial approach based on modeling and experiment: application to HLA-DR4
NASA Astrophysics Data System (ADS)
Evensen, Erik; Joseph-McCarthy, Diane; Weiss, Gregory A.; Schreiber, Stuart L.; Karplus, Martin
2007-07-01
Combinatorial synthesis and large scale screening methods are being used increasingly in drug discovery, particularly for finding novel lead compounds. Although these "random" methods sample larger areas of chemical space than traditional synthetic approaches, only a relatively small percentage of all possible compounds is practically accessible. It is therefore helpful to select regions of chemical space that have greater likelihood of yielding useful leads. When three-dimensional structural data are available for the target molecule, this can be achieved by applying structure-based computational design methods to focus the combinatorial library. This is advantageous over the standard usage of computational methods to design a small number of specific novel ligands, because here computation is employed as part of the combinatorial design process and so is required only to determine a propensity for binding of certain chemical moieties in regions of the target molecule. This paper describes the application of the Multiple Copy Simultaneous Search (MCSS) method, an active site mapping and de novo structure-based design tool, to design a focused combinatorial library for the class II MHC protein HLA-DR4. Methods for synthesizing and screening the computationally designed library are presented, and evidence is provided to show that binding was achieved. Although the structure of the protein-ligand complex could not be determined, experimental results, including cross-exclusion of a known HLA-DR4 peptide ligand (HA) by a compound from the library, and computational model building suggest that at least one of the ligands designed and identified by the methods described binds in a mode similar to that of native peptides.
Comparing DNA damage-processing pathways by computer analysis of chromosome painting data.
Levy, Dan; Vazquez, Mariel; Cornforth, Michael; Loucas, Bradford; Sachs, Rainer K; Arsuaga, Javier
2004-01-01
Chromosome aberrations are large-scale illegitimate rearrangements of the genome. They are indicative of DNA damage and informative about damage processing pathways. Despite extensive investigations over many years, the mechanisms underlying aberration formation remain controversial. New experimental assays such as multiplex fluorescent in situ hybridization (mFISH) allow combinatorial "painting" of chromosomes and are promising for elucidating aberration formation mechanisms. Recently observed mFISH aberration patterns are so complex that computer and graph-theoretical methods are needed for their full analysis. An important part of the analysis is decomposing a chromosome rearrangement process into "cycles." A cycle of order n, characterized formally by the cyclic graph with 2n vertices, indicates that n chromatin breaks take part in a single irreducible reaction. We here describe algorithms for computing cycle structures from experimentally observed or computer-simulated mFISH aberration patterns. We show that analyzing cycles quantitatively can distinguish between different aberration formation mechanisms. In particular, we show that homology-based mechanisms do not generate the large number of complex aberrations, involving higher-order cycles, observed in irradiated human lymphocytes.
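One way to compute cycle orders, sketched under the assumption that an aberration is encoded as two pairings of break ends (the pre-break adjacencies and the rejoined ones); alternating between the two pairings traces out the cycles. This is an illustrative reconstruction, not the authors' algorithm:

```python
def cycle_orders(original_joins, rearranged_joins):
    """Compute cycle orders from break-end pairings.

    Each break creates two free ends labelled by integers; `original_joins`
    pairs the ends as they were before breakage and `rearranged_joins` pairs
    them as rejoined in the aberration. Alternating the two pairings traces
    out cycles; a cycle's order is the number of breaks it contains."""
    orig, rearr = {}, {}
    for a, b in original_joins:
        orig[a], orig[b] = b, a
    for a, b in rearranged_joins:
        rearr[a], rearr[b] = b, a
    seen, orders = set(), []
    for start in orig:
        if start in seen:
            continue
        n, end = 0, start
        while end not in seen:
            seen.add(end)
            partner = orig[end]        # follow an original adjacency
            seen.add(partner)
            n += 1
            end = rearr[partner]       # follow a rearranged adjacency
        orders.append(n)
    return orders

# two breaks whose ends are swapped pairwise (a reciprocal exchange)
# form a single cycle of order 2
print(cycle_orders([(0, 1), (2, 3)], [(0, 3), (2, 1)]))   # -> [2]
```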
Development of a Hybrid RANS/LES Method for Turbulent Mixing Layers
NASA Technical Reports Server (NTRS)
Georgiadis, Nicholas J.; Alexander, J. Iwan D.; Reshotko, Eli
2001-01-01
Significant research has been underway for several years in NASA Glenn Research Center's nozzle branch to develop advanced computational methods for simulating turbulent flows in exhaust nozzles. The primary efforts of this research have concentrated on improving our ability to calculate the turbulent mixing layers that dominate flows both in the exhaust systems of modern-day aircraft and in those of hypersonic vehicles under development. As part of these efforts, a hybrid numerical method was recently developed to simulate such turbulent mixing layers. The method developed here is intended for configurations in which a dominant structural feature provides an unsteady mechanism to drive the turbulent development in the mixing layer. Interest in Large Eddy Simulation (LES) methods has increased in recent years, but applying an LES method to calculate the wide range of turbulent scales from small eddies in the wall-bounded regions to large eddies in the mixing region is not yet possible with current computers. As a result, the hybrid method developed here uses a Reynolds-averaged Navier-Stokes (RANS) procedure to calculate wall-bounded regions entering a mixing section and uses a LES procedure to calculate the mixing-dominated regions. A numerical technique was developed to enable the use of the hybrid RANS-LES method on stretched, non-Cartesian grids. With this technique, closure for the RANS equations is obtained by using the Cebeci-Smith algebraic turbulence model in conjunction with the wall-function approach of Ota and Goldberg. The LES equations are closed using the Smagorinsky subgrid scale model. Although the function of the Cebeci-Smith model to replace all of the turbulent stresses is quite different from that of the Smagorinsky subgrid model, which only replaces the small subgrid turbulent stresses, both are eddy viscosity models and both are derived at least in part from mixing-length theory. The similar formulation of these two models enables the RANS and LES equations to be solved with a single solution scheme and computational grid. The hybrid RANS-LES method has been applied to a benchmark compressible mixing layer experiment in which two isolated supersonic streams, separated by a splitter plate, provide the flows to a constant-area mixing section. Although the configuration is largely two dimensional in nature, three-dimensional calculations were found to be necessary to enable disturbances to develop in three spatial directions and to transition to turbulence. The flow in the initial part of the mixing section consists of a periodic vortex shedding downstream of the splitter plate trailing edge. This organized vortex shedding then rapidly transitions to a turbulent structure, which is very similar to the flow development observed in the experiments. Although the qualitative nature of the large-scale turbulent development in the entire mixing section is captured well by the LES part of the current hybrid method, further efforts are planned to directly calculate a greater portion of the turbulence spectrum and to limit the subgrid scale modeling to only the very small scales. This will be accomplished by the use of higher accuracy solution schemes and more powerful computers, measured both in speed and memory capabilities.
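For reference, the standard textbook form of the Smagorinsky subgrid-scale closure named above, with C_s the Smagorinsky constant and Delta the filter width (background material, not a formula quoted from the report):

```latex
% Smagorinsky subgrid-scale model: the subgrid eddy viscosity is built
% from the filter width \Delta and the resolved strain-rate tensor.
\[
\nu_t = (C_s \Delta)^2 \,\bigl|\bar{S}\bigr|,
\qquad
\bigl|\bar{S}\bigr| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}},
\qquad
\bar{S}_{ij} = \tfrac{1}{2}\!\left(
\frac{\partial \bar{u}_i}{\partial x_j} +
\frac{\partial \bar{u}_j}{\partial x_i}\right).
\]
```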
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aliaga, José I., E-mail: aliaga@uji.es; Alonso, Pedro; Badía, José M.
We introduce a new iterative Krylov subspace-based eigensolver for the simulation of macromolecular motions on desktop multithreaded platforms equipped with multicore processors and, possibly, a graphics accelerator (GPU). The method consists of two stages, with the original problem first reduced into a simpler band-structured form by means of a high-performance compute-intensive procedure. This is followed by a memory-intensive but low-cost Krylov iteration, which is off-loaded to be computed on the GPU by means of an efficient data-parallel kernel. The experimental results reveal the performance of the new eigensolver. Concretely, when applied to the simulation of macromolecules with a few thousand degrees of freedom, and when the number of eigenpairs to be computed is small to moderate, the new solver outperforms other methods implemented as part of high-performance numerical linear algebra packages for multithreaded architectures.
Retrieving Storm Electric Fields from Aircraft Field Mill Data: Part II: Applications
NASA Technical Reports Server (NTRS)
Koshak, William; Mach, D. M.; Christian, H. J.; Stewart, M. F.; Bateman, M. G.
2006-01-01
The Lagrange multiplier theory developed in Part I of this study is applied to complete a relative calibration of a Citation aircraft that is instrumented with six field mill sensors. When side constraints related to average fields are used, the Lagrange multiplier method performs well in computer simulations. For mill measurement errors of 1 V m^-1 and a 5 V m^-1 error in the mean fair-weather field function, the 3D storm electric field is retrieved to within an error of about 12%. A side constraint that involves estimating the detailed structure of the fair-weather field was also tested using computer simulations. For mill measurement errors of 1 V m^-1, the method retrieves the 3D storm field to within an error of about 8% if the fair-weather field estimate is typically within 1 V m^-1 of the true fair-weather field. Using this type of side constraint and data from fair-weather field maneuvers taken on 29 June 2001, the Citation aircraft was calibrated. Absolute calibration was completed using the pitch-down method developed in Part I and conventional analyses. The resulting calibration matrices were then used to retrieve storm electric fields during a Citation flight on 2 June 2001. The storm field results are encouraging and agree favorably in many respects with results derived from earlier (iterative) techniques of calibration.
Solving satisfiability problems using a novel microarray-based DNA computer.
Lin, Che-Hsin; Cheng, Hsiao-Ping; Yang, Chang-Biau; Yang, Chia-Ning
2007-01-01
An algorithm based on a modified sticker model, accompanied by an advanced MEMS-based microarray technology, is demonstrated to solve the SAT problem, which has long served as a benchmark in DNA computing. Unlike conventional DNA computing algorithms, which need an initial data pool covering correct and incorrect answers and then execute a series of separation procedures to destroy the unwanted ones, we build solutions in parts, satisfying one clause at each step, and eventually solve the entire Boolean formula step by step. No time-consuming sample preparation procedures and no delicate sample application equipment are required for the computing process. Moreover, experimental results show that the bound DNA sequences can withstand the chemical solutions used during the computing processes, so that the proposed method should be useful in dealing with large-scale problems.
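An in-silico analogue of the clause-by-clause construction (a plain Python sketch of the idea, not a model of the chemistry; the clause encoding below is an assumption):

```python
from itertools import product

def solve_sat(clauses):
    """Build solutions clause by clause, in the spirit of the modified
    sticker model: keep only partial assignments that satisfy every clause
    processed so far, extending them with new variables as they appear.

    A clause is a tuple of literals; literal +i / -i means variable i is
    true / false. Variables absent from all clauses are left free."""
    partials = [dict()]                       # start with the empty assignment
    for clause in clauses:
        vars_in_clause = {abs(l) for l in clause}
        extended = []
        for p in partials:
            new_vars = sorted(vars_in_clause - p.keys())
            for values in product([False, True], repeat=len(new_vars)):
                q = {**p, **dict(zip(new_vars, values))}
                # keep q only if it satisfies the current clause
                if any(q[abs(l)] == (l > 0) for l in clause):
                    extended.append(q)
        partials = extended
    return partials

# (x1 or not x2) and (x2 or x3): four satisfying assignments over x1..x3
print(len(solve_sat([(1, -2), (2, 3)])))      # -> 4
```

At no point does the procedure enumerate and then destroy a full solution pool, mirroring the advantage the abstract claims over the conventional initial-data-pool approach.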
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.
1998-01-01
Robust control system analysis and design is based on an uncertainty description, called a linear fractional transformation (LFT), which separates the uncertain (or varying) part of the system from the nominal system. These models are also useful in the design of gain-scheduled control systems based on Linear Parameter Varying (LPV) methods. Low-order LFT models are difficult to form for problems involving nonlinear parameter variations. This paper presents a numerical computational method for constructing an LFT model for a given LPV model. The method is developed for multivariate polynomial problems, and uses simple matrix computations to obtain an exact low-order LFT representation of the given LPV system without the use of model reduction. Although the method is developed for multivariate polynomial problems, multivariate rational problems can also be solved using this method by reformulating the rational problem into a polynomial form.
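For readers unfamiliar with the notation, the standard upper LFT that pulls the varying block out of the nominal interconnection has the textbook form below (background material, not the paper's specific construction):

```latex
% Upper linear fractional transformation: the uncertain/varying block
% \Delta is separated from the nominal interconnection matrix M.
\[
M = \begin{pmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{pmatrix},
\qquad
F_u(M,\Delta) = M_{22} + M_{21}\,\Delta\,(I - M_{11}\Delta)^{-1} M_{12},
\]
% For an LPV system the parameter block is typically structured as
% \Delta(\rho) = \mathrm{diag}(\rho_1 I_{r_1}, \dots, \rho_k I_{r_k}),
% and a low-order LFT means small repetition counts r_1, \dots, r_k.
```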
Efficient Fluid Dynamic Design Optimization Using Cartesian Grids
NASA Technical Reports Server (NTRS)
Dadone, A.; Grossman, B.; Sellers, Bill (Technical Monitor)
2004-01-01
This report is subdivided into three parts. The first reviews a new approach to the computation of inviscid flows using Cartesian grid methods. The crux of the method is the curvature-corrected symmetry technique (CCST) developed by the present authors for body-fitted grids. The method introduces ghost cells near the boundaries whose values are developed from an assumed flow-field model in the vicinity of the wall, consisting of a vortex flow which satisfies the normal momentum equation and the non-penetration condition. The CCST boundary condition was shown to be substantially more accurate than traditional boundary condition approaches. This improved boundary condition is adapted to a Cartesian mesh formulation, which we call the Ghost Body-Cell Method (GBCM). In this approach, all cell centers exterior to the body are computed with fluxes at the four surrounding cell edges. There is no need for the special treatment of cut cells that complicates other Cartesian mesh methods.
Cardiac magnetic resonance imaging and computed tomography in ischemic cardiomyopathy: an update*
Assunção, Fernanda Boldrini; de Oliveira, Diogo Costa Leandro; Souza, Vitor Frauches; Nacif, Marcelo Souto
2016-01-01
Ischemic cardiomyopathy is one of the major health problems worldwide, currently accounting for a significant share of mortality in the general population. Cardiac magnetic resonance imaging (CMRI) and cardiac computed tomography (CCT) are noninvasive imaging methods that serve as useful tools in the diagnosis of coronary artery disease and may also help in screening individuals with risk factors for developing this illness. Technological developments in CMRI and CCT have given rise to several clinical indications for these imaging methods as complements to other investigation methods, particularly in cases where those are inconclusive. In terms of accuracy, CMRI and CCT are similar to the other imaging methods, with few absolute contraindications and minimal risk of adverse side effects, which strengthens these methods as powerful and safe tools in the management of patients. The present study is aimed at describing the role played by CMRI and CCT in the diagnosis of ischemic cardiomyopathies. PMID:26929458
Standard Biological Parts Knowledgebase
Galdzicki, Michal; Rodriguez, Cesar; Chandran, Deepak; Sauro, Herbert M.; Gennari, John H.
2011-01-01
We have created the Knowledgebase of Standard Biological Parts (SBPkb) as a publicly accessible Semantic Web resource for synthetic biology (sbolstandard.org). The SBPkb allows researchers to query and retrieve standard biological parts for research and use in synthetic biology. Its initial version includes all of the information about parts stored in the Registry of Standard Biological Parts (partsregistry.org). SBPkb transforms this information so that it is computable, using our semantic framework for synthetic biology parts. This framework, known as SBOL-semantic, was built as part of the Synthetic Biology Open Language (SBOL), a project of the Synthetic Biology Data Exchange Group. SBOL-semantic represents commonly used synthetic biology entities, and its purpose is to improve the distribution and exchange of descriptions of biological parts. In this paper, we describe the data, our methods for transformation to SBPkb, and finally, we demonstrate the value of our knowledgebase with a set of sample queries. We use RDF technology and SPARQL queries to retrieve candidate "promoter" parts that are known to be both negatively and positively regulated. This method provides new web-based data access to perform searches for parts that are not currently possible. PMID:21390321
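A minimal sketch of the query pattern using Python's rdflib; the file name is hypothetical and the class/property URIs below are illustrative stand-ins, not the authoritative SBOL-semantic ontology terms:

```python
import rdflib

# Load an RDF export of the knowledgebase (file name hypothetical)
g = rdflib.Graph()
g.parse("sbpkb_parts.rdf")

# SPARQL query for parts typed as promoters; the sbol: URIs are
# placeholders for the real SBOL-semantic vocabulary
query = """
PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX sbol: <http://sbols.org/sbol.owl#>

SELECT ?part ?name WHERE {
    ?part rdf:type sbol:Promoter .
    ?part sbol:name ?name .
}
"""
for row in g.query(query):
    print(row.part, row.name)
```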
High-Performance High-Order Simulation of Wave and Plasma Phenomena
NASA Astrophysics Data System (ADS)
Klockner, Andreas
This thesis presents results aiming to enhance and broaden the applicability of the discontinuous Galerkin ("DG") method in a variety of ways. DG was chosen as a foundation for this work because it yields high-order finite element discretizations with very favorable numerical properties for the treatment of hyperbolic conservation laws. In a first part, I examine progress that can be made on implementation aspects of DG. In adapting the method to mass-market massively parallel computation hardware in the form of graphics processors ("GPUs"), I obtain an increase in computation performance per unit of cost by more than an order of magnitude over conventional processor architectures. Key to this advance is a recipe that adapts DG to a variety of hardware through automated self-tuning. I discuss new parallel programming tools supporting GPU run-time code generation which are instrumental in the DG self-tuning process and contribute to its reaching application floating point throughput greater than 200 GFlops/s on a single GPU and greater than 3 TFlops/s on a 16-GPU cluster in simulations of electromagnetics problems in three dimensions. I further briefly discuss the solver infrastructure that makes this possible. In the second part of the thesis, I introduce a number of new numerical methods whose motivation is partly rooted in the opportunity created by GPU-DG: First, I construct and examine a novel GPU-capable shock detector, which, when used to control an artificial viscosity, helps stabilize DG computations in gas dynamics and a number of other fields. Second, I describe my pursuit of a method that allows the simulation of rarefied plasmas using a DG discretization of the electromagnetic field. Finally, I introduce new explicit multi-rate time integrators for ordinary differential equations with multiple time scales, with a focus on applicability to DG discretizations of time-dependent problems.
Categorisation of visualisation methods to support the design of Human-Computer Interaction Systems.
Li, Katie; Tiwari, Ashutosh; Alcock, Jeffrey; Bermell-Garcia, Pablo
2016-07-01
During the design of Human-Computer Interaction (HCI) systems, the creation of visual artefacts forms an important part of design. On the one hand, producing a visual artefact has a number of advantages: it helps designers to externalise their thoughts and acts as a common language between different stakeholders. On the other hand, if an inappropriate visualisation method is employed, it could hinder the design process. To support the design of HCI systems, this paper reviews the categorisation of visualisation methods used in HCI. A keyword search is conducted to identify (a) current HCI design methods and (b) approaches for selecting these methods. The resulting design methods are filtered to create a list of just visualisation methods. These are then categorised using the approaches identified in (b). As a result, 23 HCI visualisation methods are identified and categorised under 5 selection approaches (The Recipient, Primary Purpose, Visual Archetype, Interaction Type, and The Design Process). Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhang, H.-m.; Chen, X.-f.; Chang, S.
It is difficult to compute synthetic seismograms for a layered half-space when sources and receivers are at close or equal depths using the generalized R/T coefficient method (Kennett, 1983; Luco and Apsel, 1983; Yao and Harkrider, 1983; Chen, 1993), because the wavenumber integration converges very slowly. A semi-analytic method for accelerating the convergence, in which part of the integration is implemented analytically, was adopted by some authors (Apsel and Luco, 1983; Hisada, 1994, 1995). In this study, based on the principle of the Repeated Averaging Method (Dahlquist and Björck, 1974; Chang, 1988), we propose an alternative, efficient, numerical method, the peak-trough averaging method (PTAM), to overcome the difficulty mentioned above. Compared with the semi-analytic method, PTAM is not only much simpler mathematically and easier to implement in practice, but also more efficient. Using numerical examples, we illustrate the validity, accuracy and efficiency of the new method.
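The principle behind such an averaging scheme can be illustrated on a model problem: the running wavenumber integral oscillates about its limit, so its values at successive peaks and troughs can be averaged repeatedly. A minimal sketch follows; the integrand, step size and number of extrema are illustrative, not those of the seismogram computation:

import numpy as np

def ptam_integral(f, dk=1e-3, k_max=200.0, n_ext=20):
    # running (partial) integral of f from 0 to k_max
    k = np.arange(0.0, k_max, dk)
    s = np.cumsum(f(k)) * dk
    # local peaks and troughs of the partial-integral curve, i.e. the
    # points where the integrand changes sign
    ds = np.diff(s)
    ext = np.where(np.sign(ds[1:]) != np.sign(ds[:-1]))[0] + 1
    vals = s[ext][:n_ext]
    while len(vals) > 1:               # repeated averaging of neighbours
        vals = 0.5 * (vals[:-1] + vals[1:])
    return vals[0]

f = lambda k: np.cos(10.0 * k) / (1.0 + k)   # model oscillatory integrand
print(ptam_integral(f))   # settles near the limit using only ~20 extrema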
Use of parallel computing in mass processing of laser data
NASA Astrophysics Data System (ADS)
Będkowski, J.; Bratuś, R.; Prochaska, M.; Rzonca, A.
2015-12-01
The first part of the paper includes a description of the rules used to generate the algorithm needed for the purpose of parallel computing and also discusses the origins of the idea of research on the use of graphics processors in large scale processing of laser scanning data. The next part of the paper includes the results of an efficiency assessment performed for an array of different processing options, all of which were substantially accelerated with parallel computing. The processing options were divided into the generation of orthophotos using point clouds, coloring of point clouds, transformations, and the generation of a regular grid, as well as advanced processes such as the detection of planes and edges, point cloud classification, and the analysis of data for the purpose of quality control. Most algorithms had to be formulated from scratch in the context of the requirements of parallel computing. A few of the algorithms were based on existing technology developed by the Dephos Software Company and then adapted to parallel computing in the course of this research study. Processing time was determined for each process employed for a typical quantity of data processed, which helped confirm the high efficiency of the solutions proposed and the applicability of parallel computing to the processing of laser scanning data. The high efficiency of parallel computing yields new opportunities in the creation and organization of processing methods for laser scanning data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matenine, D; Cote, G; Mascolo-Fortin, J
2016-06-15
Purpose: Iterative reconstruction algorithms in computed tomography (CT) require a fast method for computing the intersections between the photons' trajectories and the object, also called ray-tracing or system matrix computation. This work evaluates different ways to store the system matrix, aiming to reconstruct dense image grids in reasonable time. Methods: We propose an optimized implementation of Siddon's algorithm using graphics processing units (GPUs) with a novel data storage scheme. The algorithm computes a part of the system matrix on demand, typically for one projection angle. The proposed method was enhanced with accelerating options: storage of larger subsets of the system matrix, systematic reuse of data via geometric symmetries, an arithmetic-rich parallel code and code configuration via machine learning. It was tested on geometries mimicking a cone beam CT acquisition of a human head. To realistically assess the execution time, the ray-tracing routines were integrated into a regularized Poisson-based reconstruction algorithm. The proposed scheme was also compared to a different approach, where the system matrix is fully pre-computed and loaded at reconstruction time. Results: Fast ray-tracing of realistic acquisition geometries, which often lack spatial symmetry properties, was enabled via the proposed method. Ray-tracing interleaved with projection and backprojection operations required significant additional time. In most cases, ray-tracing was shown to use about 66% of the total reconstruction time. In absolute terms, tracing times varied from 3.6 s to 7.5 min, depending on the problem size. The presence of geometrical symmetries allowed for non-negligible ray-tracing and reconstruction time reduction. Arithmetic-rich parallel code and machine learning permitted a modest reconstruction time reduction, on the order of 1%. Conclusion: Partial system matrix storage permitted the reconstruction of higher 3D image grid sizes and larger projection datasets at the cost of additional time, when compared to the fully pre-computed approach. This work was supported in part by the Fonds de recherche du Quebec - Nature et technologies (FRQ-NT). The authors acknowledge partial support by the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council of Canada (Grant No. 432290).
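A minimal 2D version of Siddon's ray tracing, the kernel computed on demand above, conveys the idea with the GPU storage scheme, symmetries and tuning stripped away; grid and ray parameters are illustrative:

import numpy as np

def siddon_2d(p0, p1, nx, ny, dx=1.0, dy=1.0):
    # Return (ix, iy, length) for every pixel crossed by the segment
    # p0 -> p1 on a grid whose lower-left corner sits at the origin.
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    alphas = [0.0, 1.0]                       # parametric grid crossings
    if d[0] != 0.0:
        alphas += list((np.arange(nx + 1) * dx - p0[0]) / d[0])
    if d[1] != 0.0:
        alphas += list((np.arange(ny + 1) * dy - p0[1]) / d[1])
    alphas = np.unique(np.clip(alphas, 0.0, 1.0))
    seg_len = np.linalg.norm(d) * np.diff(alphas)
    mids = p0 + np.outer(0.5 * (alphas[:-1] + alphas[1:]), d)
    hits = []
    for (mx, my), L in zip(mids, seg_len):
        ix, iy = int(mx // dx), int(my // dy)
        if 0 <= ix < nx and 0 <= iy < ny and L > 0.0:
            hits.append((ix, iy, L))          # weight of this matrix entry
    return hits

for hit in siddon_2d((0.0, 0.0), (4.0, 4.0), 4, 4):
    print(hit)    # diagonal ray: pixels (0,0)...(3,3), each length sqrt(2)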
Gega, L; Norman, I J; Marks, I M
2007-03-01
Exposure therapy is effective for phobic anxiety disorders (specific phobias, agoraphobia, social phobia) and panic disorder. Despite their high prevalence in the community, sufferers often get no treatment or, if they do, it is usually after a long delay. This is largely due to the scarcity of healthcare professionals trained in exposure therapy, which is due, in part, to the high cost of training. Traditional teaching methods employed are labour intensive, being based mainly on role-play in small groups with feedback and coaching from experienced trainers. In an attempt to increase knowledge and skills in exposure therapy, there is now some interest in providing relevant teaching as part of pre-registration nurse education. Computers have been developed to teach terminology and simulate clinical scenarios for health professionals, and offer a potentially cost effective alternative to traditional teaching methods. The aims were to test whether student nurses would learn about exposure therapy for phobia/panic as well by computer-aided self-instruction as by face-to-face teaching, and to compare the individual and combined effects of two educational methods on students' knowledge, skills and satisfaction: traditional face-to-face teaching, comprising a presentation with discussion and questions/answers by a specialist cognitive behaviour nurse therapist, and a computer-aided self-instructional programme based on a self-help programme for patients with phobia/panic called FearFighter. Randomised controlled trial, with a crossover, completed in 2 consecutive days over a period of 4 h per day. Ninety-two mental health pre-registration nursing students, of mixed gender, age and ethnic origin, with no previous training in cognitive behaviour therapy, studying at one UK university. The two teaching methods led to similar improvements in knowledge and skills, and to similar satisfaction, when used alone. Using them in tandem conferred no added benefit. Computer-aided self-instruction was more efficient as it saved teacher preparation and delivery time, and needed no specialist tutor. Computer-aided self-instruction saved almost all preparation time and delivery effort for the expert teacher. When added to past results in medical students, the present results in nurses justify the use of computer-aided self-instruction for learning about exposure therapy and phobia/panic, and of research into its value for other areas of health education.
Acoustic Signature from Flames as a Combustion Diagnostic Tool
1983-11-01
empirical visual flame length had to be input to the computer for the inversion method to give good results. That is, if the experiment and inversion...method were asked to yield the flame length, poor results were obtained. Since this was part of the information sought for practical application of the...to small experimental uncertainty. The method gave reasonably good results for the open flame but substantial input (the flame length) had to be
Numerical analysis of laser ablation using the axisymmetric two-temperature model
NASA Astrophysics Data System (ADS)
Dziatkiewicz, Jolanta; Majchrzak, Ewa
2018-01-01
Laser ablation of the axisymmetric micro-domain is analyzed. To describe the thermal processes occurring in the micro-domain, the two-temperature hyperbolic model, supplemented by boundary and initial conditions, is used. This model takes into account the phase changes of the material (solid-liquid and liquid-vapour) and the ablation process. At the stage of numerical computations, the finite difference method with a staggered grid is used. In the final part, the results of the computations are presented.
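As a rough illustration of this model class (not the authors' axisymmetric hyperbolic formulation with phase changes and ablation), a 1D parabolic two-temperature system can be advanced with explicit finite differences; every constant below is illustrative:

import numpy as np

# 1D parabolic two-temperature model (electron Te, lattice Tl):
#   Ce dTe/dt = ke d2Te/dx2 - G (Te - Tl) + S(x, t)
#   Cl dTl/dt =              G (Te - Tl)
# explicit finite differences; all material constants are illustrative
Ce, Cl, ke, G = 2.1e4, 2.5e6, 315.0, 2.6e16   # J/m3/K, J/m3/K, W/m/K, W/m3/K
nx, dx = 200, 5e-9                            # 1 micron domain
dt, nt = 5e-16, 4000                          # stable explicit step, 2 ps

Te = np.full(nx, 300.0)
Tl = np.full(nx, 300.0)

def source(t):                                # surface-absorbed laser pulse
    s = np.zeros(nx)
    s[0] = 1e21 * np.exp(-((t - 5e-13) / 1e-13) ** 2)   # W/m3, illustrative
    return s

for step in range(nt):
    lap = np.zeros(nx)
    lap[1:-1] = (Te[2:] - 2.0 * Te[1:-1] + Te[:-2]) / dx**2
    lap[0] = (Te[1] - Te[0]) / dx**2          # insulated boundaries
    lap[-1] = (Te[-2] - Te[-1]) / dx**2
    coupling = G * (Te - Tl)
    Te = Te + dt * (ke * lap - coupling + source(step * dt)) / Ce
    Tl = Tl + dt * coupling / Cl

print(f"peak electron T: {Te.max():.0f} K, peak lattice T: {Tl.max():.0f} K")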
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swartz, J.D.; Goodman, R.S.; Russell, K.B.
1983-08-01
High-resolution computed tomography (CT) provides an excellent method for examination of the surgically altered middle ear and mastoid. Closed-cavity and open-cavity types of mastoidectomy are illustrated. Recurrent cholesteatoma in the mastoid bowl is easily diagnosed. Different types of tympanoplasty are discussed and illustrated, as are tympanostomy tubes and various ossicular reconstructive procedures. Baseline high-resolution CT of the postoperative middle ear and mastoid is recommended at approximately 3 months following the surgical procedure.
Bürger, Raimund; Diehl, Stefan; Mejías, Camilo
2016-01-01
The main purpose of the recently introduced Bürger-Diehl simulation model for secondary settling tanks was to resolve spatial discretization problems when both hindered settling and the phenomena of compression and dispersion are included. Straightforward time integration unfortunately means long computational times. The next step in the development is to introduce and investigate time-integration methods for more efficient simulations, while equally considering other aspects such as implementation complexity and robustness. This is done for batch settling simulations. The key contributions are a new time-discretization method and its comparison with other specially tailored and standard methods. Several advantages and disadvantages of each method are given. One conclusion is that the new linearly implicit method is easier to implement than another one (the semi-implicit method), but less efficient based on two types of batch sedimentation tests.
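A generic linearly implicit Euler step conveys the flavor of such a method: one linear solve per step with the Jacobian frozen at the current state, and no nonlinear iteration. This sketch is not the Bürger-Diehl discretization itself:

import numpy as np

def linearly_implicit_euler(f, jac, u0, dt, nsteps):
    # one linear solve per step, no nonlinear iteration:
    #   (I - dt * J(u_n)) (u_{n+1} - u_n) = dt * f(u_n)
    u = np.array(u0, dtype=float)
    I = np.eye(len(u))
    for _ in range(nsteps):
        du = np.linalg.solve(I - dt * jac(u), dt * f(u))
        u = u + du
    return u

# stiff linear test problem u' = A u; explicit Euler at dt = 0.1 blows up
A = np.array([[-1000.0, 0.0],
              [1.0, -0.1]])
f = lambda u: A @ u
jac = lambda u: A
print(linearly_implicit_euler(f, jac, [1.0, 1.0], dt=0.1, nsteps=100))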
A simple and efficient algorithm operating with linear time for MCEEG data compression.
Titus, Geevarghese; Sudhakar, M S
2017-09-01
The popularisation of electroencephalography (EEG) signals in diversified fields has increased the need for devices capable of operating at lower power and storage requirements. This has led to a great deal of research in data compression that can address (a) low latency in the coding of the signal, (b) reduced hardware and software dependencies, (c) quantification of system anomalies, and (d) effective reconstruction of the compressed signal. This paper proposes a computationally simple and novel coding scheme named spatial pseudo codec (SPC), to achieve lossy to near lossless compression of multichannel EEG (MCEEG). In the proposed system, MCEEG signals are initially normalized, followed by two parallel processes: one operating on the integer part and the other on the fractional part of the normalized data. The redundancies in the integer part are exploited using a spatial domain encoder, and the fractional part is coded as pseudo integers. The proposed method has been tested on a wide range of databases having variable sampling rates and resolutions. Results indicate that the algorithm has a good recovery performance with an average percentage root mean square deviation (PRD) of 2.72 for an average compression ratio (CR) of 3.16. Furthermore, the algorithm has a complexity of only O(n) with an average encoding and decoding time per sample of 0.3 ms and 0.04 ms respectively. The performance of the algorithm is comparable with recent methods like fast discrete cosine transform (fDCT) and tensor decomposition methods. The results validated the feasibility of the proposed compression scheme for practical MCEEG recording, archiving and brain computer interfacing systems.
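The integer/fraction split at the heart of the scheme can be sketched as follows; this is a simplified illustration of the idea rather than the published SPC codec, and the channel-delta coder and pseudo-integer width are assumptions:

import numpy as np

def split_encode(x, frac_bits=8):
    # normalize, separate integer and fractional parts, delta-code the
    # integer part across channels, store the fraction as small
    # "pseudo-integers"; lossy only in the fraction
    scale = np.abs(x).max()
    xn = x / scale * 100.0                       # fixed working range
    ipart = np.floor(xn).astype(np.int32)
    fpart = xn - ipart                           # in [0, 1)
    dint = np.diff(ipart, prepend=np.zeros_like(ipart[:1]), axis=0)
    pseudo = np.round(fpart * (1 << frac_bits)).astype(np.uint16)
    return scale, dint, pseudo

def split_decode(scale, dint, pseudo, frac_bits=8):
    ipart = np.cumsum(dint, axis=0)              # undo the channel deltas
    xn = ipart + pseudo.astype(float) / (1 << frac_bits)
    return xn / 100.0 * scale

x = np.random.randn(8, 1000)                     # 8 channels of toy "EEG"
rec = split_decode(*split_encode(x))
prd = 100.0 * np.linalg.norm(x - rec) / np.linalg.norm(x)
print(f"PRD: {prd:.3f} %")                       # small reconstruction error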
NASA Technical Reports Server (NTRS)
Ratner, R. S.; Shapiro, E. B.; Zeidler, H. M.; Wahlstrom, S. E.; Clark, C. B.; Goldberg, J.
1973-01-01
This final report summarizes the work on the design of a fault tolerant digital computer for aircraft. Volume 2 is composed of two parts. Part 1 is concerned with the computational requirements associated with an advanced commercial aircraft. Part 2 reviews the technology that will be available for the implementation of the computer in the 1975-1985 period. With regard to the computational task, 26 computations have been categorized according to computational load, memory requirements, criticality, permitted down-time, and the need to save data in order to effect a roll-back. The technology part stresses the impact of large scale integration (LSI) on the realization of logic and memory. Also considered were module interconnection possibilities so as to minimize fault propagation.
Applications of Stochastic Analyses for Collaborative Learning and Cognitive Assessment
2007-04-01
models (Visser, Maartje, Raijmakers, & Molenaar, 2002). The second part of this paper illustrates two applications of the methods described in the...clustering three-way data sets. Computational Statistics and Data Analysis, 51(11), 5368-5376. Visser, I., Maartje, E., Raijmakers, E. J., & Molenaar
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Wei; Reddy, T. A.; Gurian, Patrick
2007-01-31
A companion paper to Jiang and Reddy that presents a general and computationally efficient methodology for dynamic scheduling and optimal control of complex primary HVAC&R plants using a deterministic engineering optimization approach.
Really big data: Processing and analysis of large datasets
USDA-ARS?s Scientific Manuscript database
Modern animal breeding datasets are large and getting larger, due in part to the recent availability of DNA data for many animals. Computational methods for efficiently storing and analyzing those data are under development. The amount of storage space required for such datasets is increasing rapidl...
NASA Astrophysics Data System (ADS)
Chibani, Wael; Ren, Xinguo; Scheffler, Matthias; Rinke, Patrick
2016-04-01
We present an embedding scheme for periodic systems that facilitates the treatment of the physically important part (here a unit cell or a supercell) with advanced electronic structure methods, that are computationally too expensive for periodic systems. The rest of the periodic system is treated with computationally less demanding approaches, e.g., Kohn-Sham density-functional theory, in a self-consistent manner. Our scheme is based on the concept of dynamical mean-field theory formulated in terms of Green's functions. Our real-space dynamical mean-field embedding scheme features two nested Dyson equations, one for the embedded cluster and another for the periodic surrounding. The total energy is computed from the resulting Green's functions. The performance of our scheme is demonstrated by treating the embedded region with hybrid functionals and many-body perturbation theory in the GW approach for simple bulk systems. The total energy and the density of states converge rapidly with respect to the computational parameters and approach their bulk limit with increasing cluster (i.e., computational supercell) size.
Imaging simulation of active EO-camera
NASA Astrophysics Data System (ADS)
Pérez, José; Repasi, Endre
2018-04-01
A modeling scheme for active imaging through atmospheric turbulence is presented. The model consists of two parts: In the first part, the illumination laser beam is propagated to a target that is described by its reflectance properties, using the well-known split-step Fourier method for wave propagation. In the second part, the reflected intensity distribution imaged on a camera is computed using an empirical model developed for passive imaging through atmospheric turbulence. The split-step Fourier method requires carefully chosen simulation parameters. These simulation requirements together with the need to produce dynamic scenes with a large number of frames led us to implement the model on GPU. Validation of this implementation is shown for two different metrics. This model is well suited for Gated-Viewing applications. Examples of imaging simulation results are presented here.
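A bare-bones split-step Fourier loop, alternating a Fresnel diffraction step with thin random phase screens, illustrates the first part of such a model; the screen statistics here are simplified (smoothed white noise rather than calibrated Kolmogorov screens) and all parameters are illustrative:

import numpy as np

n, dx, wl = 256, 1e-3, 1.55e-6          # grid, sample spacing [m], wavelength
fx = np.fft.fftfreq(n, dx)
FX, FY = np.meshgrid(fx, fx)
dz = 50.0                                # propagation step [m]
H = np.exp(-1j * np.pi * wl * dz * (FX**2 + FY**2))   # Fresnel transfer fn

x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
u = np.exp(-(X**2 + Y**2) / (2.0 * (5e-3) ** 2))      # Gaussian source beam

rng = np.random.default_rng(0)
for _ in range(10):                      # 10 screens -> 500 m total path
    u = np.fft.ifft2(H * np.fft.fft2(u))               # vacuum diffraction
    screen = rng.standard_normal((n, n))               # white noise ...
    screen = np.real(np.fft.ifft2(np.fft.fft2(screen) *
                                  np.exp(-(FX**2 + FY**2) * (20 * dx) ** 2)))
    u *= np.exp(1j * 0.5 * screen / screen.std())      # ... as weak phase screen

print("on-axis received intensity:", np.abs(u[n // 2, n // 2]) ** 2)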
40 CFR Appendix C to Part 66 - Computer Program
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 15 2011-07-01 2011-07-01 false Computer Program C Appendix C to Part...) ASSESSMENT AND COLLECTION OF NONCOMPLIANCE PENALTIES BY EPA Pt. 66, App. C Appendix C to Part 66—Computer Program Note: For text of appendix C see appendix C to part 67. ...
System Characterizations and Optimized Reconstruction Methods for Novel X-ray Imaging Modalities
NASA Astrophysics Data System (ADS)
Guan, Huifeng
In the past decade there have been many new emerging X-ray based imaging technologies developed for different diagnostic purposes or imaging tasks. However, there exist one or more specific problems that prevent them from being effectively or efficiently employed. In this dissertation, four different novel X-ray based imaging technologies are discussed, including propagation-based phase-contrast (PB-XPC) tomosynthesis, differential X-ray phase-contrast tomography (D-XPCT), projection-based dual-energy computed radiography (DECR), and tetrahedron beam computed tomography (TBCT). System characteristics are analyzed or optimized reconstruction methods are proposed for these imaging modalities. In the first part, we investigated the unique properties of the propagation-based phase-contrast imaging technique when combined with X-ray tomosynthesis. The Fourier slice theorem implies that the high frequency components collected in the tomosynthesis data can be more reliably reconstructed. It is observed that the fringes or boundary enhancement introduced by the phase-contrast effects can serve as an accurate indicator of the true depth position in the tomosynthesis in-plane image. In the second part, we derived a sub-space framework to reconstruct images from few-view D-XPCT data sets. By introducing a proper mask, the high frequency contents of the image can be theoretically preserved in a certain region of interest. A two-step reconstruction strategy is developed to mitigate the risk of subtle structures being oversmoothed when the commonly used total-variation regularization is employed in the conventional iterative framework. In the third part, we proposed a practical method to improve the quantitative accuracy of projection-based dual-energy material decomposition. It is demonstrated that applying a total-projection-length constraint along with the dual-energy measurements can achieve a stabilized numerical solution of the decomposition problem, thus overcoming the disadvantages of the conventional approach, which was extremely sensitive to noise corruption. In the final part, we described the modified filtered backprojection and iterative image reconstruction algorithms specifically developed for TBCT. Special parallelization strategies are designed to facilitate the use of GPU computing, showing a demonstrated capability of producing high quality reconstructed volumetric images with very fast computational speed.
NASA Technical Reports Server (NTRS)
Bailey, R. T.; Shih, T. I.-P.; Nguyen, H. L.; Roelke, R. J.
1990-01-01
An efficient computer program, called GRID2D/3D, was developed to generate single and composite grid systems within geometrically complex two- and three-dimensional (2- and 3-D) spatial domains that can deform with time. GRID2D/3D generates single grid systems by using algebraic grid generation methods based on transfinite interpolation in which the distribution of grid points within the spatial domain is controlled by stretching functions. All single grid systems generated by GRID2D/3D can have grid lines that are continuous and differentiable everywhere up to the second-order. Also, grid lines can intersect boundaries of the spatial domain orthogonally. GRID2D/3D generates composite grid systems by patching together two or more single grid systems. The patching can be discontinuous or continuous. For continuous composite grid systems, the grid lines are continuous and differentiable everywhere up to the second-order except at interfaces where different single grid systems meet. At interfaces where different single grid systems meet, the grid lines are only differentiable up to the first-order. For 2-D spatial domains, the boundary curves are described by using either cubic or tension spline interpolation. For 3-D spatial domains, the boundary surfaces are described by using either linear Coons interpolation, bi-hyperbolic spline interpolation, or a new technique referred to as 3-D bi-directional Hermite interpolation. Since grid systems generated by algebraic methods can have grid lines that overlap one another, GRID2D/3D contains a graphics package for evaluating the grid systems generated. With the graphics package, the user can generate grid systems in an interactive manner with the grid generation part of GRID2D/3D. GRID2D/3D is written in FORTRAN 77 and can be run on any IBM PC, XT, or AT compatible computer. In order to use GRID2D/3D on workstations or mainframe computers, some minor modifications must be made in the graphics part of the program; no modifications are needed in the grid generation part of the program. The theory and method used in GRID2D/3D are described.
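The algebraic core of such a generator, transfinite interpolation of four boundary curves, fits in a few lines; this sketch omits the stretching functions, spline boundary descriptions and continuity controls that GRID2D/3D provides:

import numpy as np

def transfinite_grid(bottom, top, left, right):
    # Blend four boundary curves, each an (n, 2) or (m, 2) point array
    # with matching corners, into an interior grid (2D Coons patch).
    m, n = left.shape[0], bottom.shape[0]
    s = np.linspace(0.0, 1.0, n)[None, :, None]   # xi  along bottom/top
    t = np.linspace(0.0, 1.0, m)[:, None, None]   # eta along left/right
    grid = ((1 - t) * bottom[None, :, :] + t * top[None, :, :]
            + (1 - s) * left[:, None, :] + s * right[:, None, :]
            - (1 - s) * (1 - t) * bottom[0] - s * (1 - t) * bottom[-1]
            - (1 - s) * t * top[0] - s * t * top[-1])
    return grid                                    # shape (m, n, 2)

# unit square with a bulged top boundary
n = m = 21
xi = np.linspace(0.0, 1.0, n)
bottom = np.stack([xi, np.zeros(n)], axis=1)
top = np.stack([xi, 1.0 + 0.2 * np.sin(np.pi * xi)], axis=1)
eta = np.linspace(0.0, 1.0, m)
left = np.stack([np.zeros(m), eta], axis=1)
right = np.stack([np.ones(m), eta], axis=1)
g = transfinite_grid(bottom, top, left, right)
print(g.shape, g[m // 2, n // 2])   # an interior point of the blended grid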
High-Order Implicit-Explicit Multi-Block Time-stepping Method for Hyperbolic PDEs
NASA Technical Reports Server (NTRS)
Nielsen, Tanner B.; Carpenter, Mark H.; Fisher, Travis C.; Frankel, Steven H.
2014-01-01
This work seeks to explore and improve the current time-stepping schemes used in computational fluid dynamics (CFD) in order to reduce overall computational time. A high-order scheme has been developed using a combination of implicit and explicit (IMEX) time-stepping Runge-Kutta (RK) schemes which increases numerical stability with respect to the time step size, resulting in decreased computational time. The IMEX scheme alone does not yield the desired increase in numerical stability, but when used in conjunction with an overlapping partitioned (multi-block) domain a significant increase in stability is observed. To show this, the Overlapping-Partition IMEX (OP IMEX) scheme is applied to both one-dimensional (1D) and two-dimensional (2D) problems, the nonlinear viscous Burgers' equation and the 2D advection equation, respectively. The method uses two different summation by parts (SBP) derivative approximations, second-order and fourth-order accurate. The Dirichlet boundary conditions are imposed using the Simultaneous Approximation Term (SAT) penalty method. The 6-stage additive Runge-Kutta IMEX time integration schemes are fourth-order accurate in time. An increase in numerical stability 65 times greater than the fully explicit scheme is demonstrated to be achievable with the OP IMEX method applied to the 1D Burgers' equation. Results from the 2D, purely convective, advection equation show stability increases on the order of 10 times the explicit scheme using the OP IMEX method. Also, the domain partitioning method in this work shows potential for breaking the computational domain into manageable sizes such that implicit solutions for full three-dimensional CFD simulations can be computed using direct solving methods rather than the standard iterative methods currently used.
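The implicit-explicit splitting idea can be conveyed by a first-order IMEX Euler step on a 1D advection-diffusion model problem, treating stiff diffusion implicitly and advection explicitly; the paper's 6-stage fourth-order additive Runge-Kutta scheme with SBP-SAT boundaries is far more elaborate than this toy version:

import numpy as np

# First-order IMEX Euler for u_t = -a u_x + nu u_xx on a periodic grid:
# stiff diffusion is treated implicitly, advection explicitly.
n, a, nu = 100, 1.0, 0.01
dx = 1.0 / n
dt = 0.5 * dx / a                      # advective CFL only; diffusion is implicit
x = np.arange(n) * dx
u = np.exp(-100.0 * (x - 0.5) ** 2)

# periodic second-difference operator for the implicit part
D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1))
D2[0, -1] = D2[-1, 0] = 1.0
M = np.eye(n) - dt * nu / dx**2 * D2   # (I - dt*L) factor, reused every step

for _ in range(200):
    adv = -a * (u - np.roll(u, 1)) / dx          # explicit upwind advection
    u = np.linalg.solve(M, u + dt * adv)         # implicit diffusion solve

print("mass conserved:", np.isclose(u.sum(), np.exp(-100.0 * (x - 0.5) ** 2).sum()))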
Systematic review of computational methods for identifying miRNA-mediated RNA-RNA crosstalk.
Li, Yongsheng; Jin, Xiyun; Wang, Zishan; Li, Lili; Chen, Hong; Lin, Xiaoyu; Yi, Song; Zhang, Yunpeng; Xu, Juan
2017-10-25
Posttranscriptional crosstalk and communication between RNAs yield large regulatory competing endogenous RNA (ceRNA) networks via shared microRNAs (miRNAs), as well as miRNA synergistic networks. The ceRNA crosstalk represents a novel layer of gene regulation that controls both physiological and pathological processes such as development and complex diseases. The rapidly expanding catalogue of ceRNA regulation has provided evidence for exploitation as a general model to predict the ceRNAs in silico. In this article, we first reviewed the current progress of RNA-RNA crosstalk in human complex diseases. Then, the widely used computational methods for modeling ceRNA-ceRNA interaction networks are further summarized into five types: two types of global ceRNA regulation prediction methods and three types of context-specific prediction methods, which are based on miRNA-messenger RNA regulation alone, or by integrating heterogeneous data, respectively. To provide guidance in the computational prediction of ceRNA-ceRNA interactions, we finally performed a comparative study of different combinations of miRNA-target methods as well as five types of ceRNA identification methods by using literature-curated ceRNA regulation and gene perturbation. The results revealed that integration of different miRNA-target prediction methods and context-specific miRNA/gene expression profiles increased the performance for identifying ceRNA regulation. Moreover, different computational methods were complementary in identifying ceRNA regulation and captured different functional parts of similar pathways. We believe that the application of these computational techniques provides valuable functional insights into ceRNA regulation and is a crucial step for informing subsequent functional validation studies. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Coupled Finite Volume and Finite Element Method Analysis of a Complex Large-Span Roof Structure
NASA Astrophysics Data System (ADS)
Szafran, J.; Juszczyk, K.; Kamiński, M.
2017-12-01
The main goal of this paper is to present a coupled Computational Fluid Dynamics and structural analysis for the precise determination of wind impact on internal forces and deformations of structural elements of a long-span roof structure. The Finite Volume Method (FVM) serves for the solution of the fluid flow problem to model the air flow around the structure, whose results are in turn applied as boundary tractions in the Finite Element Method structural solution for linear elastostatics with small deformations. The first part is carried out with the use of the ANSYS 15.0 computer system, whereas the FEM system Robot supports the stress analysis in particular roof members. A comparison of the wind pressure distribution throughout the roof surface shows some differences with respect to that available in engineering design codes like Eurocode, which deserves separate further numerical studies. Coupling of these two separate numerical techniques appears to be promising in view of future computational models of stochastic nature in large scale structural systems due to the stochastic perturbation method.
Computing the Evans function via solving a linear boundary value ODE
NASA Astrophysics Data System (ADS)
Wahl, Colin; Nguyen, Rose; Ventura, Nathaniel; Barker, Blake; Sandstede, Bjorn
2015-11-01
Determining the stability of traveling wave solutions to partial differential equations can oftentimes be computationally intensive but of great importance to understanding the effects of perturbations on the physical systems (chemical reactions, hydrodynamics, etc.) they model. For waves in one spatial dimension, one may linearize around the wave and form an Evans function - an analytic Wronskian-like function which has zeros that correspond in multiplicity to the eigenvalues of the linearized system. If eigenvalues with a positive real part do not exist, the traveling wave will be stable. Two methods exist for calculating the Evans function numerically: the exterior-product method and the method of continuous orthogonalization. The first is numerically expensive, and the second reformulates the originally linear system as a nonlinear system. We develop a new algorithm for computing the Evans function through appropriate linear boundary-value problems. This algorithm is cheaper than the previous methods, and we prove that it preserves analyticity of the Evans function. We also provide error estimates and implement it on some classical one- and two-dimensional systems, one being the Swift-Hohenberg equation in a channel, to show the advantages.
Calculus: A Computer Oriented Presentation, Part 1 [and] Part 2.
ERIC Educational Resources Information Center
Stenberg, Warren; Walker, Robert J.
Parts one and two of a one-year computer-oriented calculus course (without analytic geometry) are presented. The ideas of calculus are introduced and motivated through computer (i.e., algorithmic) concepts. An introduction to computing via algorithms and a simple flow chart language allows the book to be self-contained, except that material on…
NASA Technical Reports Server (NTRS)
1972-01-01
The Performance Analysis and Design Synthesis (PADS) computer program has a two-fold purpose. It can size launch vehicles in conjunction with calculus-of-variations optimal trajectories and can also be used as a general-purpose branched trajectory optimization program. In the former use, it has the Space Shuttle Synthesis Program as well as a simplified stage weight module for optimally sizing manned recoverable launch vehicles. For trajectory optimization alone or with sizing, PADS has two trajectory modules. The first trajectory module uses the method of steepest descent; the second employs the method of quasilinearization, which requires a starting solution from the first trajectory module. For Volume 1 see N73-13199.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Pengchen; Settgast, Randolph R.; Johnson, Scott M.
2014-12-17
GEOS is a massively parallel, multi-physics simulation application utilizing high performance computing (HPC) to address subsurface reservoir stimulation activities with the goal of optimizing current operations and evaluating innovative stimulation methods. GEOS enables coupling of different solvers associated with the various physical processes occurring during reservoir stimulation in unique and sophisticated ways, adapted to various geologic settings, materials and stimulation methods. Developed at the Lawrence Livermore National Laboratory (LLNL) as a part of a Laboratory-Directed Research and Development (LDRD) Strategic Initiative (SI) project, GEOS represents the culmination of a multi-year ongoing code development and improvement effort that has leveraged existing code capabilities and staff expertise to design new computational geosciences software.
NASA Astrophysics Data System (ADS)
Wisniewski, Nicholas Andrew
This dissertation is divided into two parts. First we present an exact solution to a generalization of the Behrens-Fisher problem by embedding the problem in the Riemannian manifold of Normal distributions. From this we construct a geometric hypothesis testing scheme. Secondly we investigate the most commonly used geometric methods employed in tensor field interpolation for DT-MRI analysis and cardiac computer modeling. We computationally investigate a class of physiologically motivated orthogonal tensor invariants, both at the full tensor field scale and at the scale of a single interpolation by doing a decimation/interpolation experiment. We show that Riemannian-based methods give the best results in preserving desirable physiological features.
NASA Astrophysics Data System (ADS)
Palmesi, P.; Exl, L.; Bruckner, F.; Abert, C.; Suess, D.
2017-11-01
The long-range magnetic field is the most time-consuming part in micromagnetic simulations. Computational improvements can relieve problems related to this bottleneck. This work presents an efficient implementation of the Fast Multipole Method [FMM] for the magnetic scalar potential as used in micromagnetics. The novelty lies in extending FMM to linearly magnetized tetrahedral sources, making it interesting also for other areas of computational physics. We treat the near field directly and use (exact) numerical integration of the multipole expansion in the far field. This approach tackles important issues like the vectorial and continuous nature of the magnetic field. By using FMM the calculations scale linearly in time and memory.
Transonic flow analysis for rotors. Part 2: Three-dimensional, unsteady, full-potential calculation
NASA Technical Reports Server (NTRS)
Chang, I. C.
1985-01-01
A numerical method is presented for calculating the three-dimensional unsteady, transonic flow past a helicopter rotor blade of arbitrary geometry. The method solves the full-potential equations in a blade-fixed frame of reference by a time-marching implicit scheme. At the far-field, a set of first-order radiation conditions is imposed, thus minimizing the reflection of outgoing wavelets from computational boundaries. Computed results are presented to highlight radial flow effects in three dimensions, to compare surface pressure distributions to quasi-steady predictions, and to predict the flow field on a swept-tip blade. The results agree well with experimental data for both straight- and swept-tip blade geometries.
Design optimization of axial flow hydraulic turbine runner: Part I - an improved Q3D inverse method
NASA Astrophysics Data System (ADS)
Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji
2002-06-01
With the aim of constructing a comprehensive design optimization procedure for axial flow hydraulic turbines, an improved quasi-three-dimensional inverse method has been proposed from a systems viewpoint, and a set of rotational flow governing equations as well as a blade geometry design equation has been derived. In the inverse method, the computation domain is taken from the inlet of the guide vane to the far outlet of the runner blade, and flows in different regions are solved simultaneously. Thus the influence of wicket gate parameters on the runner blade design can be considered, and the difficulty of defining the flow condition at the runner blade inlet is surmounted. As a pre-computation of the initial blade design on the S2m surface is newly adopted, the iteration of the S1 and S2m surfaces has been reduced greatly and the convergence of the inverse computation has been improved. The present model has been applied to the inverse computation of a Kaplan turbine runner. Experimental results and the direct flow analysis have proved the validity of the inverse computation. Numerical investigations show that a proper enlargement of the guide vane distribution diameter is advantageous for improving the performance of an axial hydraulic turbine runner.
NASA Astrophysics Data System (ADS)
Dinges, David F.; Venkataraman, Sundara; McGlinchey, Eleanor L.; Metaxas, Dimitris N.
2007-02-01
Astronauts are required to perform mission-critical tasks at a high level of functional capability throughout spaceflight. Stressors can compromise their ability to do so, making early objective detection of neurobehavioral problems in spaceflight a priority. Computer optical approaches offer a completely unobtrusive way to detect distress during critical operations in space flight. A methodology was developed and a study completed to determine whether optical computer recognition algorithms could be used to discriminate facial expressions during stress induced by performance demands. Stress recognition from a facial image sequence is a subject that has not received much attention although it is an important problem for many applications beyond space flight (security, human-computer interaction, etc.). This paper proposes a comprehensive method to detect stress from facial image sequences by using a model-based tracker. The image sequences were captured as subjects underwent a battery of psychological tests under high- and low-stress conditions. A cue integration-based tracking system accurately captured the rigid and non-rigid parameters of different parts of the face (eyebrows, lips). The labeled sequences were used to train the recognition system, which consisted of generative (hidden Markov model) and discriminative (support vector machine) parts that yield results superior to using either approach individually. The current optical algorithm methods performed at a 68% accuracy rate in an experimental study of 60 healthy adults undergoing periods of high-stress versus low-stress performance demands. Accuracy and practical feasibility of the technique is being improved further with automatic multi-resolution selection for the discretization of the mask, and automated face detection and mask initialization algorithms.
Guided Versus Unguided Learning: Which One To Choose?
NASA Astrophysics Data System (ADS)
Speck, Angela; Ruzhitskaya, L.
2011-01-01
We present the results of a study that measures the effectiveness of two types of computer-based tutorials for teaching the concept of stellar parallax to non-science major students in a college-level introductory astronomy course. A number of previous studies on the use of computer technology in education suggested that a method of inquiry-based learning rooted in a discovery method must prevail over direct instruction. At the same time, a number of researchers raised a concern that the discovery approach, especially in combination with interactive computer-based environments, may present students with additional distractions and thus hinder the educational value of such interactions. This study was designed to test both approaches and to identify the preferable method for engaging students in active and meaningful learning. The study consisted of guided and unguided computer-based tutorials and used a control group in which students were engaged in paper-based exercises. The guided tutorial was an adaptive tutorial designed to respond to students' input and to provide them with the next step: an exercise, an animated visualization, or a set of additional questions. The unguided tutorial allowed students to explore any part of the tutorial in any order. Both tutorials consisted of four parts and reviewed simple geometry, trigonometric parallax, angular sizes in astronomy, resolution and conversion of units, and had a concluding chapter on finding the distance to a star. The control group used Lecture-Tutorials (Prather et al.) to learn angular sizes and stellar parallax. The efficacy of each treatment was validated through a 14-question pretest and two posttests to evaluate and contrast students' immediate recall and their long-term knowledge, corroborated by a number of interviews with selected students. We present our preliminary results based on the analyzed work of over 200 participants.
Estimating Temperature Rise Due to Flashlamp Heating Using Irreversible Temperature Indicators
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
1999-01-01
One of the nondestructive thermography inspection techniques uses photographic flashlamps. The flashlamps provide a short duration (about 0.005 sec) heat pulse. The short burst of energy results in a momentary rise in the surface temperature of the part. The temperature rise may be detrimental to the top layer of the part being exposed. Therefore, it is necessary to ensure the nondestructive nature of the technique. The amount of temperature rise determines whether the flashlamp heating would be detrimental to the part. A direct method for the temperature measurement is to use an infrared pyrometer that has a much shorter response time than the flash duration. In this paper, an alternative technique is given using irreversible temperature indicators. This is an indirect technique: it measures the temperature rise on the irreversible temperature indicators and computes the incident heat flux. Once the heat flux is known, the temperature rise on the part can be computed. A wedge shaped irreversible temperature indicator for measuring the heat flux is proposed. A procedure is given to use the wedge indicator.
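The final step, from a known absorbed energy to the temperature rise on the part, can be illustrated with the textbook semi-infinite-solid response to an instantaneous surface energy deposit, dT(t) = Q/(e*sqrt(pi*t)) with effusivity e = sqrt(k*rho*c); the material values below are illustrative, and this stand-in omits the paper's wedge-indicator procedure:

import numpy as np

def flash_surface_rise(Q, k, rho, c, t):
    # Surface temperature rise of a semi-infinite solid after an
    # instantaneous surface energy deposit Q [J/m^2] (classic flash
    # thermography result): dT(t) = Q / (e * sqrt(pi * t)),
    # with thermal effusivity e = sqrt(k * rho * c).
    e = np.sqrt(k * rho * c)
    return Q / (e * np.sqrt(np.pi * t))

# illustrative values: 5 kJ/m^2 absorbed by an epoxy-like surface layer
Q, k, rho, c = 5.0e3, 0.3, 1200.0, 1100.0       # SI units
for t in (0.005, 0.05, 0.5):                    # seconds after the flash
    print(f"t = {t:5.3f} s  ->  dT = {flash_surface_rise(Q, k, rho, c, t):6.1f} K")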
Multi-Component Diffusion with Application To Computational Aerothermodynamics
NASA Technical Reports Server (NTRS)
Sutton, Kenneth; Gnoffo, Peter A.
1998-01-01
The accuracy and complexity of solving multicomponent gaseous diffusion using the detailed multicomponent equations, the Stefan-Maxwell equations, and two commonly used approximate equations have been examined in a two-part study. Part I examined the equations in a basic study with specified inputs in which the results are applicable for many applications. Part II addressed the application of the equations in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) computational code for high-speed entries in Earth's atmosphere. The results showed that the presented iterative scheme for solving the Stefan-Maxwell equations is an accurate and effective method as compared with solutions of the detailed equations. In general, good accuracy with the approximate equations cannot be guaranteed for a species or all species in a multicomponent mixture. 'Corrected' forms of the approximate equations that ensured the diffusion mass fluxes sum to zero, as required, were more accurate than the uncorrected forms. Good accuracy, as compared with the Stefan-Maxwell results, was obtained with the 'corrected' approximate equations in defining the heating rates for the three Earth entries considered in Part II.
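For reference, the Stefan-Maxwell system for the molar diffusion fluxes can also be solved directly as a constrained linear system, which provides an easy check for an iterative scheme such as the one presented; the ternary mixture below is a toy example, and the standard molar-flux formulation is assumed:

import numpy as np

def stefan_maxwell_fluxes(x, grad_x, D, c=1.0):
    # Solve the Stefan-Maxwell equations for molar diffusion fluxes J:
    #   grad_x[i] = sum_{j != i} (x[i] J[j] - x[j] J[i]) / (c * D[i, j])
    # subject to sum(J) = 0 (the last, dependent row is replaced).
    n = len(x)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                A[i, j] = x[i] / (c * D[i, j])
                A[i, i] -= x[j] / (c * D[i, j])
    b = np.array(grad_x, dtype=float)
    A[-1, :] = 1.0
    b[-1] = 0.0
    return np.linalg.solve(A, b)

# ternary toy mixture: mole fractions, gradients summing to zero, symmetric D_ij
x = np.array([0.2, 0.3, 0.5])
grad_x = np.array([1.0e-2, -4.0e-3, -6.0e-3])
D = np.array([[0.0, 1.0e-5, 2.0e-5],
              [1.0e-5, 0.0, 3.0e-5],
              [2.0e-5, 3.0e-5, 0.0]])
J = stefan_maxwell_fluxes(x, grad_x, D)
print(J, J.sum())                       # fluxes; constraint holds to round-off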
Manual for obscuration code with space station applications
NASA Technical Reports Server (NTRS)
Marhefka, R. J.; Takacs, L.
1986-01-01
The Obscuration Code, referred to as SHADOW, is a user-oriented computer code to determine the cast shadow of an antenna in a complex environment onto the far zone sphere. The surrounding structure can be composed of multiple composite cone frustums and multi-sided flat plates. These structural pieces are ideal for modeling space station configurations. The means of describing the geometry input is compatible with the NEC-BASIC Scattering Code. In addition, an interactive mode of operation has been provided for DEC VAX computers. The first part of this document is a user's manual designed to give a description of the method used to obtain the shadow map, to provide an overall view of the operation of the computer code, to instruct a user in how to model structures, and to give examples of inputs and outputs. The second part is a code manual that details how to set up the interactive and non-interactive modes of the code and provides a listing and brief description of each of the subroutines.
A world-wide databridge supported by a commercial cloud provider
NASA Astrophysics Data System (ADS)
Tat Cheung, Kwong; Field, Laurence; Furano, Fabrizio
2017-10-01
Volunteer computing has the potential to provide significant additional computing capacity for the LHC experiments. One of the challenges with exploiting volunteer computing is to support a global community of volunteers that provides heterogeneous resources. However, high energy physics applications require more data input and output than the CPU intensive applications that are typically used by other volunteer computing projects. While the so-called databridge has already been successfully proposed as a method to span the untrusted and trusted domains of volunteer computing and Grid computing respectively, globally transferring data between potentially poor-performing residential networks and CERN could be unreliable, leading to wasted resource usage. The expectation is that by placing a storage endpoint that is part of a wider, flexible geographical databridge deployment closer to the volunteers, the transfer success rate and the overall performance can be improved. This contribution investigates the provision of a globally distributed databridge implemented upon a commercial cloud provider.
Computers and the orthopaedic office.
Berumen, Edmundo; Barllow, Fidel Dobarganes; Fong, Fransisco Javier; Lopez, Jorge Arturo
2002-01-01
The advance of modern medicine has been linked closely to the history of computers over the last twenty years. Computers were first built to help us with mathematical calculations. This has changed recently, and computers are now linked to x-ray machines, CT scanners, and MRIs. Being able to share information is one of the goals of the future. Today's computer technology has helped a great deal to allow orthopaedic surgeons from around the world to consult on a difficult case or to become part of a large database. Obtaining the results from a method of treatment using a multicentric information study can be done on a regular basis. In the future, computers will help us to retrieve information from patients' clinical history directly from a hospital database or by portable memory cards that will carry every radiograph or video from previous surgeries.
ERIC Educational Resources Information Center
Leach, Jenny
1996-01-01
The Open University of United Kingdom's Postgraduate Certificate of Education program is an 18-month, part-time course that annually trains over 1000 graduate teachers via electronic conferencing and open learning methods. The program provides every student and tutor with a Macintosh computer, printer, and modem and builds on face-to-face contacts…
There is a need to properly develop the application of Computational Fluid Dynamics (CFD) methods in support of air quality studies involving pollution sources near buildings at industrial sites. CFD models are emerging as a promising technology for such assessments, in part due ...
Detecting Satisficing in Online Surveys
ERIC Educational Resources Information Center
Salifu, Shani
2012-01-01
The proliferation of computers and high speed internet services is making online activities an integral part of people's lives as they connect with friends, shop, and exchange data. The increasing ability of the internet to handle sophisticated data exchanges is endearing it to researchers interested in gathering all kinds of data. This method has the…
DOT National Transportation Integrated Search
2006-03-01
The previous study showed that many colors were used in air traffic control displays. We also found that colors were used mainly for three purposes: capturing controllers' immediate attention, identifying targets, and segmenting information. This r...
Frozen-Orbital and Downfolding Calculations with Auxiliary-Field Quantum Monte Carlo.
Purwanto, Wirawan; Zhang, Shiwei; Krakauer, Henry
2013-11-12
We describe the implementation of the frozen-orbital and downfolding approximations in the auxiliary-field quantum Monte Carlo (AFQMC) method. These approaches can provide significant computational savings, compared to fully correlating all of the electrons. While the many-body wave function is never explicit in AFQMC, its random walkers are Slater determinants, whose orbitals may be expressed in terms of any one-particle orbital basis. It is therefore straightforward to partition the full N-particle Hilbert space into active and inactive parts to implement the frozen-orbital method. In the frozen-core approximation, for example, the core electrons can be eliminated in the correlated part of the calculations, greatly increasing the computational efficiency, especially for heavy atoms. Scalar relativistic effects are easily included using the Douglas-Kroll-Hess theory. Using this method, we obtain a way to effectively eliminate the error due to single-projector, norm-conserving pseudopotentials in AFQMC. We also illustrate a generalization of the frozen-orbital approach that downfolds high-energy basis states to a physically relevant low-energy sector, which allows a systematic approach to produce realistic model Hamiltonians to further increase efficiency for extended systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balkey, K.; Witt, F.J.; Bishop, B.A.
1995-06-01
Significant attention has been focused on the issue of reactor vessel pressurized thermal shock (PTS) for many years. Pressurized thermal shock transient events are characterized by a rapid cooldown at potentially high pressure levels that could lead to a reactor vessel integrity concern for some pressurized water reactors. As a result of regulatory and industry efforts in the early 1980s, a probabilistic risk assessment methodology has been established to address this concern. Probabilistic fracture mechanics analyses are performed as part of this methodology to determine the conditional probability of significant flaw extension for given pressurized thermal shock events. While recent industry efforts are underway to benchmark probabilistic fracture mechanics computer codes that are currently used by the nuclear industry, Part I of this report describes the comparison of two independent computer codes used at the time of the development of the original U.S. Nuclear Regulatory Commission (NRC) pressurized thermal shock rule. The work that was originally performed in 1982 and 1983 to compare the U.S. NRC - VISA and Westinghouse (W) - PFM computer codes has been documented and is provided in Part I of this report. Part II of this report describes the results of more recent industry efforts to benchmark PFM computer codes used by the nuclear industry. This study was conducted as part of the USNRC-EPRI Coordinated Research Program for reviewing the technical basis for pressurized thermal shock (PTS) analyses of the reactor pressure vessel. The work focused on the probabilistic fracture mechanics (PFM) analysis codes and methods used to perform the PTS calculations. An in-depth review of the methodologies was performed to verify the accuracy and adequacy of the various different codes. The review was structured around a series of benchmark sample problems to provide a specific context for discussion and examination of the fracture mechanics methodology.
Exactly energy conserving semi-implicit particle in cell formulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lapenta, Giovanni, E-mail: giovanni.lapenta@kuleuven.be
We report a new particle in cell (PIC) method based on the semi-implicit approach. The novelty of the new method is that, unlike any of its semi-implicit predecessors, it at the same time retains the explicit computational cycle and conserves energy exactly. Recent research has presented fully implicit methods where energy conservation is obtained as part of a non-linear iteration procedure. The new method (referred to as the Energy Conserving Semi-Implicit Method, ECSIM), instead, does not require any non-linear iteration and its computational cycle is similar to that of explicit PIC. The properties of the new method are: i) it conserves energy exactly to round-off for any time step or grid spacing; ii) it is unconditionally stable in time, freeing the user from the need to resolve the electron plasma frequency and allowing the user to select any desired time step; iii) it eliminates the constraint of the finite grid instability, allowing the user to select any desired resolution without being forced to resolve the Debye length; iv) the particle mover has a computational complexity identical to that of the explicit PIC; only the field solver has an increased computational cost. The new ECSIM is tested in a number of benchmarks where accuracy and computational performance are tested. - Highlights: • We present a new fully energy conserving semi-implicit particle in cell (PIC) method based on the implicit moment method (IMM), called the Energy Conserving Implicit Moment Method (ECIMM). • The novelty of the new method is that, unlike any of its predecessors, it at the same time retains the explicit computational cycle and conserves energy exactly. • The new method is unconditionally stable in time, freeing the user from the need to resolve the electron plasma frequency. • The new method eliminates the constraint of the finite grid instability, allowing the user to select any desired resolution without being forced to resolve the Debye length. • These features are achieved at a reduced cost compared with either previous IMM or fully implicit implementations of PIC.
Sines and Cosines. Part 2 of 3
NASA Technical Reports Server (NTRS)
Apostol, Tom M. (Editor)
1993-01-01
The Law of Sines and the Law of Cosines are introduced and demonstrated in this 'Project Mathematics' series video using both film footage and computer animation. This video deals primarily with the mathematical field of Trigonometry and explains how these laws were developed and their applications. One significant use is geographical and geological surveying. This includes both the triangulation method and the spirit leveling method. With these methods, it is shown how the height of the tallest mountain in the world, Mt. Everest, was determined.
A computer program for the localization of small areas in roentgenological images
NASA Technical Reports Server (NTRS)
Keller, R. A.; Baily, N. A.
1976-01-01
A method and associated algorithm are presented which allow a simple and accurate determination to be made of the location of small symmetric areas presented in roentgenological images. The method utilizes an operator to visually spot object positions but eliminates the need for critical positioning accuracy on the operator's part. The rapidity of measurement allows results to be evaluated on-line. Parameters associated with the algorithm have been analyzed, and methods to facilitate an optimum choice for any particular experimental setup are presented.
12 CFR 516.10 - How does OTS compute time periods under this part?
Code of Federal Regulations, 2010 CFR
2010-01-01
... APPLICATION PROCESSING PROCEDURES § 516.10 How does OTS compute time periods under this part? In computing time periods under this part, OTS does not include the day of the act or event that commences the time... 12 Banks and Banking 5 2010-01-01 2010-01-01 false How does OTS compute time periods under this...
Yiu, Sean; Tom, Brian D. M.
2017-01-01
Several researchers have described two-part models with patient-specific stochastic processes for analysing longitudinal semicontinuous data. In theory, such models can offer greater flexibility than the standard two-part model with patient-specific random effects. However, in practice, the high-dimensional integrations involved in the marginal likelihood (i.e. integrated over the stochastic processes) significantly complicate model fitting. Thus, only non-standard, computationally intensive procedures based on simulating the marginal likelihood have so far been proposed. In this paper, we describe an efficient method of implementation by demonstrating how the high-dimensional integrations involved in the marginal likelihood can be computed efficiently. Specifically, by using a property of the multivariate normal distribution and the standard marginal cumulative distribution function identity, we transform the marginal likelihood so that the high-dimensional integrations are contained in the cumulative distribution function of a multivariate normal distribution, which can then be efficiently evaluated. Hence, maximum likelihood estimation can be used to obtain parameter estimates and asymptotic standard errors (from the observed information matrix) of model parameters. We describe our proposed efficient implementation procedure for the standard two-part model parameterisation and for when it is of interest to directly model the overall marginal mean. The methodology is applied to a psoriatic arthritis data set concerning functional disability.
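As a rough illustration of the computational point, the sketch below shows the kind of term the transformed likelihood reduces to: a multivariate normal CDF evaluated by an off-the-shelf routine instead of explicit high-dimensional quadrature. The construction of the mean, covariance and threshold vector from the two-part model is paper-specific; the values here are arbitrary stand-ins.

```python
import numpy as np
from scipy.stats import multivariate_normal

def mvn_cdf_term(mu, Sigma, b):
    """P(Z <= b) for Z ~ N(mu, Sigma), replacing explicit d-dim quadrature."""
    return multivariate_normal(mean=mu, cov=Sigma, allow_singular=True).cdf(b)

rng = np.random.default_rng(0)
d = 6                                   # e.g. number of zero-valued visits
A = rng.standard_normal((d, d))
Sigma = A @ A.T + d * np.eye(d)         # an arbitrary valid covariance
print(mvn_cdf_term(np.zeros(d), Sigma, rng.standard_normal(d)))
```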
NASA Astrophysics Data System (ADS)
Li, Mingchao; Han, Shuai; Zhou, Sibao; Zhang, Ye
2018-06-01
Based on a 3D model of a discrete fracture network (DFN) in a rock mass, an improved projective method for computing the 3D mechanical connectivity rate was proposed. The Monte Carlo simulation method, 2D Poisson process and 3D geological modeling technique were integrated into a polyhedral DFN modeling approach, and the simulation results were verified by numerical tests and graphical inspection. Next, the traditional projective approach for calculating the rock mass connectivity rate was improved using the 3D DFN models by (1) using the polyhedral model to replace the Baecher disk model; (2) taking the real cross section of the rock mass, rather than a part of the cross section, as the test plane; and (3) dynamically searching the joint connectivity rates using different dip directions and dip angles at different elevations to calculate the maximum, minimum and average values of the joint connectivity at each elevation. In a case study, the improved method and traditional method were used to compute the mechanical connectivity rate of the slope of a dam abutment. The results of the two methods were further used to compute the cohesive force of the rock masses. Finally, a comparison showed that the cohesive force derived from the traditional method had a higher error, whereas the cohesive force derived from the improved method was consistent with the suggested values. According to the comparison, the effectivity and validity of the improved method were verified indirectly.
NASA Astrophysics Data System (ADS)
Din, Tengku Noor Daimah Tengku; Jamayet, Nafij; Rajion, Zainul Ahmad; Luddin, Norhayati; Abdullah, Johari Yap; Abdullah, Abdul Manaf; Yahya, Suzana
2016-12-01
Facial defects are either congenital or caused by trauma or cancer, and most affect the person's appearance. Emotional pressure and low self-esteem are problems commonly experienced by patients with facial defects. To overcome these problems, a silicone prosthesis is designed to cover the defective part. This study describes the techniques for designing and fabricating a facial prosthesis using computer-aided design and manufacturing (CAD/CAM). The steps of fabricating the facial prosthesis were based on a patient case. The patient was diagnosed with Gorlin-Goltz syndrome and came to Hospital Universiti Sains Malaysia (HUSM) for a prosthesis. The 3D image of the patient was reconstructed from CT data using MIMICS software. Based on the 3D image, the intercanthal and zygomatic measurements of the patient were compared with available data in the database to find a suitable nose shape. The normal nose shape for the patient was retrieved from the nasal digital library. A mirror-imaging technique was used to mirror the facial part. The final design of the facial prosthesis, including eye, nose and cheek, was superimposed to inspect the result virtually. After the final design was confirmed, the mould design was created. The mould of the nasal prosthesis was printed using an Objet 3D printer. Silicone casting was done using the 3D-printed mould. The final prosthesis produced by the computer-aided method was acceptable for use in facial rehabilitation, providing better quality of life.
An interior-point method-based solver for simulation of aircraft parts riveting
NASA Astrophysics Data System (ADS)
Stefanova, Maria; Yakunin, Sergey; Petukhova, Margarita; Lupuleac, Sergey; Kokkolaras, Michael
2018-05-01
The particularities of simulating the aircraft parts riveting process necessitate the solution of a large number of contact problems. A primal-dual interior-point method-based solver is proposed for solving such problems efficiently. The proposed method features a worst-case polynomial complexity bound of O(√n log(1/ε)) on the number of iterations, where n is the dimension of the problem and ε is a threshold related to the desired accuracy. In practice, convergence is often faster than this worst-case bound, which makes the method applicable to large-scale problems. The computational challenge is solving the system of linear equations, because the associated matrix is ill-conditioned. To that end, the authors introduce a preconditioner and a strategy for determining effective initial guesses based on the physics of the problem. Numerical results are compared with ones obtained using the Goldfarb-Idnani algorithm. The results demonstrate the efficiency of the proposed method.
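For orientation, a minimal primal-dual interior-point iteration of the kind the abstract describes is sketched below for the simplest contact-like problem, a convex QP with non-negativity constraints. It is a generic textbook scheme, not the authors' solver: their physics-based preconditioner and initial-guess strategy are not reproduced, and all names are illustrative.

```python
import numpy as np

def ipm_qp_nonneg(Q, c, tol=1e-8, sigma=0.2, max_iter=100):
    """Primal-dual interior-point sketch for min 0.5*x'Qx + c'x s.t. x >= 0."""
    n = len(c)
    x = np.ones(n)          # strictly positive primal start
    z = np.ones(n)          # dual slacks for the x >= 0 constraints
    for _ in range(max_iter):
        r = Q @ x + c - z   # dual residual of the KKT system
        mu = x @ z / n      # duality measure, driven toward zero
        if np.linalg.norm(r) < tol and mu < tol:
            break
        # Newton step on the perturbed KKT conditions, reduced to dx.
        rhs = -r + (sigma * mu - x * z) / x
        dx = np.linalg.solve(Q + np.diag(z / x), rhs)
        dz = (sigma * mu - x * z - z * dx) / x
        # Fraction-to-boundary rule keeps the iterates strictly positive.
        alpha = 1.0
        for w, dw in ((x, dx), (z, dz)):
            neg = dw < 0
            if np.any(neg):
                alpha = min(alpha, 0.995 * float(np.min(-w[neg] / dw[neg])))
        x, z = x + alpha * dx, z + alpha * dz
    return x
```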
Thresholding histogram equalization.
Chuang, K S; Chen, S; Hwang, I M
2001-12-01
The drawbacks of adaptive histogram equalization techniques are the loss of definition on the edges of the object and overenhancement of noise in the images. These drawbacks can be avoided if the noise is excluded in the equalization transformation function computation. A method has been developed to separate the histogram into zones, each with its own equalization transformation. This method can be used to suppress the nonanatomic noise and enhance only certain parts of the object. This method can be combined with other adaptive histogram equalization techniques. Preliminary results indicate that this method can produce images with superior contrast.
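A minimal sketch of the zone idea follows: the grey-level histogram is cut at chosen thresholds and each zone is equalized with its own transformation, so that noise confined below the first threshold is never amplified. Threshold selection and the combination with adaptive methods are left out; the function and parameter names are illustrative.

```python
import numpy as np

def zoned_hist_eq(img, thresholds):
    """Equalize each grey-level zone separately; zones set by 'thresholds'."""
    out = np.zeros(img.shape, dtype=np.float64)
    edges = [float(img.min())] + sorted(thresholds) + [float(img.max()) + 1.0]
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (img >= lo) & (img < hi)
        if not mask.any():
            continue
        vals = img[mask].astype(np.float64)
        hist, bin_edges = np.histogram(vals, bins=256)
        cdf = np.cumsum(hist) / hist.sum()              # zone's own transform
        idx = np.clip(np.searchsorted(bin_edges, vals, side="right") - 1, 0, 255)
        out[mask] = lo + cdf[idx] * (hi - 1.0 - lo)     # stay within the zone
    return out.astype(img.dtype)
```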
Synchronous computer mediated group discussion.
Gallagher, Peter
2005-01-01
Over the past 20 years, focus groups have become increasingly popular with nursing researchers as a data collection method, as has the use of computer-based technologies to support all forms of nursing research. This article describes the conduct of a series of focus groups in which the participants were in the same room as part of a "real-time" discussion during which they also used personal computers as an interface between each other and the moderator. Synchronous Computer Mediated Group Discussion differed from other forms of focus group discussion in that participants used personal computers rather than verbal expressions to respond to specific questions, engage in communication with other participants, and to record their thoughts. This form of focus group maintained many of the features of spoken exchanges, a cornerstone of the focus group, while capturing the advantages of online discussion.
Reduced circuit implementation of encoder and syndrome generator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trager, Barry M; Winograd, Shmuel
An error correction method and system includes an encoder and syndrome generator that operate in parallel to reduce the amount of circuitry used to compute check symbols and syndromes for error-correcting codes. The system and method compute the contributions to the syndromes and check symbols 1 bit at a time instead of 1 symbol at a time. As a result, the even syndromes can be computed as powers of the odd syndromes. Further, the system assigns symbol addresses so that there are, for an example GF(2^8) code which has 72 symbols, three (3) blocks of addresses which differ by a cube root of unity to allow the data symbols to be combined, reducing the size and complexity of the odd syndrome circuits. Further, the implementation circuit for generating check symbols is derived from the syndrome circuit using the inverse of the part of the syndrome matrix for check locations.
Probabilistic Modeling and Visualization of the Flexibility in Morphable Models
NASA Astrophysics Data System (ADS)
Lüthi, M.; Albrecht, T.; Vetter, T.
Statistical shape models, and in particular morphable models, have gained widespread use in computer vision, computer graphics and medical imaging. Researchers have started to build models of almost any anatomical structure in the human body. While these models provide a useful prior for many image analysis tasks, relatively little information about the shape represented by the morphable model is exploited. We propose a method for computing and visualizing the remaining flexibility when a part of the shape is fixed. Our method, which is based on Probabilistic PCA, not only leads to an approach for reconstructing the full shape from partial information, but also allows us to investigate and visualize the uncertainty of a reconstruction. To show the feasibility of our approach, we performed experiments on a statistical model of the human face and the femur bone. The visualization of the remaining flexibility allows for greater insight into the statistical properties of the shape.
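The core computation can be illustrated with standard Gaussian conditioning. Under a Gaussian shape model (as in Probabilistic PCA, where the covariance is C = WWᵀ + σ²I), fixing part of the shape vector yields a conditional normal distribution whose mean gives the reconstruction and whose covariance expresses the remaining flexibility. The sketch below assumes the full mean and covariance are available as arrays; it is an illustration, not the authors' implementation.

```python
import numpy as np

def conditional_shape(mu, C, idx_fixed, s_fixed):
    """Condition a Gaussian shape model N(mu, C) on fixed entries s_fixed."""
    n = len(mu)
    idx_free = np.setdiff1d(np.arange(n), idx_fixed)
    C_aa = C[np.ix_(idx_fixed, idx_fixed)]
    C_ab = C[np.ix_(idx_fixed, idx_free)]
    C_bb = C[np.ix_(idx_free, idx_free)]
    K = np.linalg.solve(C_aa, C_ab).T            # equals C_ba C_aa^{-1}
    mu_cond = mu[idx_free] + K @ (s_fixed - mu[idx_fixed])
    C_cond = C_bb - K @ C_ab                     # remaining flexibility
    return mu_cond, C_cond
```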
Investigations into the shape-preserving interpolants using symbolic computation
NASA Technical Reports Server (NTRS)
Lam, Maria
1988-01-01
Shape representation is a central issue in computer graphics and computer-aided geometric design. Many physical phenomena involve curves and surfaces that are monotone (in some directions) or are convex. The corresponding representation problem is: given monotone or convex data, find a monotone or convex interpolant. Standard interpolants need not be monotone or convex even though they may match monotone or convex data. Most methods of investigating this problem use quadratic splines or Hermite polynomials, and a similar approach is adopted in this investigation. These methods require derivative information at the given data points, and the key to the problem is the selection of the derivative values to be assigned to the given data points. Schemes for choosing derivatives were examined. Along the way, fitting the given data points with a conic section was also investigated as part of the effort to study shape-preserving quadratic splines.
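As an example of the derivative-selection step discussed above, the sketch below applies a Fritsch-Carlson-style limiter: interior derivatives start from a harmonic mean of neighboring secant slopes and are then scaled so each interval stays inside the monotonicity region. This is one standard scheme among those such investigations examine, not a reconstruction of this study's specific choice.

```python
import numpy as np

def monotone_derivatives(x, y):
    """Choose nodal derivatives so a cubic Hermite interpolant is monotone."""
    h = np.diff(x)
    delta = np.diff(y) / h                       # secant slopes
    d = np.zeros(len(y))
    for i in range(1, len(y) - 1):               # harmonic mean, 0 at extrema
        if delta[i - 1] * delta[i] > 0:
            d[i] = 2.0 / (1.0 / delta[i - 1] + 1.0 / delta[i])
    d[0], d[-1] = delta[0], delta[-1]            # one-sided end conditions
    for i in range(len(h)):                      # limit into the monotone region
        if delta[i] == 0.0:
            d[i] = d[i + 1] = 0.0
        else:
            r = np.hypot(d[i] / delta[i], d[i + 1] / delta[i])
            if r > 3.0:
                d[i] *= 3.0 / r
                d[i + 1] *= 3.0 / r
    return d
```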
COMSAC: Computational Methods for Stability and Control. Part 2
NASA Technical Reports Server (NTRS)
Fremaux, C. Michael (Compiler); Hall, Robert M. (Compiler)
2004-01-01
The unprecedented advances being made in computational fluid dynamic (CFD) technology have demonstrated the powerful capabilities of codes in applications to civil and military aircraft. Used in conjunction with wind-tunnel and flight investigations, many codes are now routinely used by designers in diverse applications such as aerodynamic performance predictions and propulsion integration. Typically, these codes are most reliable for attached, steady, and predominantly turbulent flows. As a result of increasing reliability and confidence in CFD, wind-tunnel testing for some new configurations has been substantially reduced in key areas, such as wing trade studies for mission performance guarantees. Interest is now growing in the application of computational methods to other critical design challenges. One of the most important disciplinary elements for civil and military aircraft is prediction of stability and control characteristics. CFD offers the potential for significantly increasing the basic understanding, prediction, and control of flow phenomena associated with requirements for satisfactory aircraft handling characteristics.
A Bitslice Implementation of Anderson's Attack on A5/1
NASA Astrophysics Data System (ADS)
Bulavintsev, Vadim; Semenov, Alexander; Zaikin, Oleg; Kochemazov, Stepan
2018-03-01
The A5/1 keystream generator is a part of Global System for Mobile Communications (GSM) protocol, employed in cellular networks all over the world. Its cryptographic resistance was extensively analyzed in dozens of papers. However, almost all corresponding methods either employ a specific hardware or require an extensive preprocessing stage and significant amounts of memory. In the present study, a bitslice variant of Anderson's Attack on A5/1 is implemented. It requires very little computer memory and no preprocessing. Moreover, the attack can be made even more efficient by harnessing the computing power of modern Graphics Processing Units (GPUs). As a result, using commonly available GPUs this method can quite efficiently recover the secret key using only 64 bits of keystream. To test the performance of the implementation, a volunteer computing project was launched. 10 instances of A5/1 cryptanalysis have been successfully solved in this project in a single week.
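The bitslicing idea itself is easy to show in a few lines: one machine word holds the same state bit of 64 independent cipher instances, so a single bitwise expression advances all 64 at once. The toy below steps a 19-bit LFSR with the tap positions usually quoted for A5/1's R1 register and shows the majority vote used in clocking; the full three-register layout and the attack logic are not reproduced.

```python
import random

WORD = (1 << 64) - 1     # one 64-bit word = the same bit of 64 cipher instances

def majority(a, b, c):
    # 64 independent majority votes (A5/1's clocking rule) in one expression.
    return (a & b) | (a & c) | (b & c)

def lfsr_step(state):
    """state[i] holds bit i of 64 parallel 19-bit LFSRs (R1-style taps)."""
    feedback = state[18] ^ state[17] ^ state[16] ^ state[13]
    return [feedback & WORD] + state[:-1]   # shift all 64 registers at once

random.seed(1)
regs = [random.getrandbits(64) for _ in range(19)]   # 64 random instances
regs = lfsr_step(regs)
votes = majority(regs[8], regs[3], regs[14])         # e.g. 64 clock votes at once
```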
High-Performance Computation of Distributed-Memory Parallel 3D Voronoi and Delaunay Tessellation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peterka, Tom; Morozov, Dmitriy; Phillips, Carolyn
2014-11-14
Computing a Voronoi or Delaunay tessellation from a set of points is a core part of the analysis of many simulated and measured datasets: N-body simulations, molecular dynamics codes, and LIDAR point clouds are just a few examples. Such computational geometry methods are common in data analysis and visualization; but as the scale of simulations and observations surpasses billions of particles, the existing serial and shared-memory algorithms no longer suffice. A distributed-memory scalable parallel algorithm is the only feasible approach. The primary contribution of this paper is a new parallel Delaunay and Voronoi tessellation algorithm that automatically determines which neighbor points need to be exchanged among the subdomains of a spatial decomposition. Other contributions include periodic and wall boundary conditions, comparison of our method using two popular serial libraries, and application to numerous science datasets.
Computed tomography (CT) as a nondestructive test method used for composite helicopter components
NASA Astrophysics Data System (ADS)
Oster, Reinhold
1991-09-01
The first components of primary helicopter structures to be made of glass fiber reinforced plastics were the main and tail rotor blades of the Bo105 and BK 117 helicopters. These blades are now successfully produced in series. New developments in rotor components, e.g., the rotor blade technology of the Bo108 and PAH2 programs, make use of very complex fiber reinforced structures to achieve simplicity and strength. Computer tomography was found to be an outstanding nondestructive test method for examining the internal structure of components. A CT scanner generates x-ray attenuation measurements which are used to produce computer reconstructed images of any desired part of an object. The system images a range of flaws in composites in a number of views and planes. Several CT investigations and their results are reported taking composite helicopter components as an example.
Suggested Approaches to the Measurement of Computer Anxiety.
ERIC Educational Resources Information Center
Toris, Carol
Psychologists can gain insight into human behavior by examining what people feel about, know about, and do with, computers. Two extreme reactions to computers are computer phobia, or anxiety, and computer addiction, or "hacking". A four-part questionnaire was developed to measure computer anxiety. The first part is a projective technique which…
NASA Astrophysics Data System (ADS)
Bez'iazychnyi, V. F.
The paper is concerned with the problem of optimizing the machining of aircraft engine parts in order to satisfy certain requirements for tool wear, machining precision and surface layer characteristics, and hardening depth. A generalized multiple-objective function and its computer implementation are developed which make it possible to optimize the machining process without the use of experimental data. Alternative methods of controlling the machining process are discussed.
Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca
2016-10-15
The Expected Value of Perfect Partial Information (EVPPI) is a decision-theoretic measure of the 'cost' of parametric uncertainty in decision making, used principally in health economic decision making. Despite this decision-theoretic grounding, the uptake of EVPPI calculations in practice has been slow. This is in part due to the prohibitive computational time required to estimate the EVPPI via Monte Carlo simulations. However, recent developments have demonstrated that the EVPPI can be estimated by non-parametric regression methods, which have significantly decreased the computation time required to approximate the EVPPI. Under certain circumstances, high-dimensional Gaussian Process (GP) regression is suggested, but this can still be prohibitively expensive. Applying fast computation methods developed in spatial statistics using Integrated Nested Laplace Approximations (INLA) and projecting from a high-dimensional into a low-dimensional input space allows us to decrease the computation time for fitting these high-dimensional GPs, often substantially. We demonstrate that the EVPPI calculated using our method for GP regression is in line with the standard GP regression method and that, despite the apparent methodological complexity of this new method, R functions are available in the package BCEA to implement it simply and efficiently. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
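The regression idea underlying these estimators can be sketched compactly: regress the simulated net benefits on the parameter(s) of interest to approximate the inner conditional expectation, then combine the fitted values as EVPPI = E_φ[max_d E(NB_d | φ)] - max_d E(NB_d). The version below uses a simple polynomial regression as a stand-in for the GP/INLA machinery; the names and the regression choice are illustrative.

```python
import numpy as np

def evppi_regression(phi, nb, degree=3):
    """EVPPI = E_phi[max_d E(NB_d|phi)] - max_d E(NB_d), via regression."""
    S, D = nb.shape                          # S simulations, D decision options
    X = np.vander(phi, degree + 1)           # polynomial design matrix in phi
    fitted = np.empty((S, D))
    for dec in range(D):
        coef, *_ = np.linalg.lstsq(X, nb[:, dec], rcond=None)
        fitted[:, dec] = X @ coef            # estimate of E[NB_dec | phi]
    return float(np.mean(np.max(fitted, axis=1)) - np.max(nb.mean(axis=0)))
```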
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Carlo, Francesco; Gürsoy, Doğa; Ching, Daniel J.
There is a widening gap between the fast advancement of computational methods for tomographic reconstruction and their successful implementation in production software at various synchrotron facilities. This is due in part to the lack of readily available instrument datasets and phantoms representative of real materials for validation and comparison of new numerical methods. Recent advancements in detector technology made sub-second and multi-energy tomographic data collection possible [1], but also increased the demand to develop new reconstruction methods able to handle in-situ [2] and dynamic systems [3] that can be quickly incorporated in beamline production software [4]. The X-ray Tomography Data Bank, tomoBank, provides a repository of experimental and simulated datasets with the aim to foster collaboration among computational scientists, beamline scientists, and experimentalists and to accelerate the development and implementation of tomographic reconstruction methods for synchrotron facility production software by providing easy access to challenging datasets and their descriptors.
A Validation Summary of the NCC Turbulent Reacting/non-reacting Spray Computations
NASA Technical Reports Server (NTRS)
Raju, M. S.; Liu, N.-S. (Technical Monitor)
2000-01-01
This paper provides a validation summary of the spray computations performed as a part of the NCC (National Combustion Code) development activity. NCC is being developed with the aim of advancing the current prediction tools used in the design of advanced technology combustors based on multidimensional computational methods. The solution procedure combines the novelty of applying the scalar Monte Carlo PDF (probability density function) method to the modeling of turbulent spray flames with the ability to perform the computations on unstructured grids with parallel computing. The calculation procedure was applied to predict the flow properties of three different spray cases: a nonswirling unconfined reacting spray, a nonswirling unconfined nonreacting spray, and a confined swirl-stabilized spray flame. The comparisons involving both gas-phase and droplet velocities, droplet size distributions, and gas-phase temperatures show reasonable agreement with the available experimental data. The comparisons involve both the results obtained from the use of the Monte Carlo PDF method and those obtained from the conventional computational fluid dynamics (CFD) solution. Detailed comparisons in the case of a reacting nonswirling spray clearly highlight the importance of chemistry/turbulence interactions in the modeling of reacting sprays. The results from the PDF and non-PDF methods were found to be markedly different, and the PDF solution is closer to the reported experimental data. The PDF computations predict that most of the combustion occurs in a predominantly diffusion-flame environment, whereas the non-PDF solution predicts incorrectly that the combustion occurs in a predominantly vaporization-controlled regime. The Monte Carlo temperature distribution shows that the functional form of the PDF for the temperature fluctuations varies substantially from point to point. The results also bring to the fore some of the deficiencies associated with the use of assumed-shape PDF methods in spray computations.
Iyatomi, Hitoshi; Oka, Hiroshi; Saito, Masataka; Miyake, Ayako; Kimoto, Masayuki; Yamagami, Jun; Kobayashi, Seiichiro; Tanikawa, Akiko; Hagiwara, Masafumi; Ogawa, Koichi; Argenziano, Giuseppe; Soyer, H Peter; Tanaka, Masaru
2006-04-01
The aims of this study were to provide a quantitative assessment of the tumour area extracted by dermatologists and to evaluate computer-based methods from dermoscopy images for refining a computer-based melanoma diagnostic system. Dermoscopic images of 188 Clark naevi, 56 Reed naevi and 75 melanomas were examined. Five dermatologists manually drew the border of each lesion with a tablet computer. The inter-observer variability was evaluated and the standard tumour area (STA) for each dermoscopy image was defined. Manual extractions by 10 non-medical individuals and by two computer-based methods were evaluated with STA-based assessment criteria: precision and recall. Our new computer-based method introduced a region-growing approach in order to yield results close to those obtained by dermatologists. The effectiveness of our extraction method with regard to diagnostic accuracy was evaluated. Two linear classifiers were built using the results of the conventional and new computer-based tumour area extraction methods. The final diagnostic accuracy was evaluated by drawing the receiver operating characteristic (ROC) curve of each classifier, and the area under each ROC curve was evaluated. The standard deviations of the tumour area extracted by the five dermatologists and 10 non-medical individuals were 8.9% and 10.7%, respectively. After assessment of the extraction results by dermatologists, the STA was defined as the area that was selected by more than two dermatologists. Dermatologists selected the melanoma area with statistically smaller divergence than that of Clark naevus or Reed naevus (P = 0.05). By contrast, non-medical individuals did not show this difference. Our new computer-based extraction algorithm showed superior performance (precision, 94.1%; recall, 95.3%) to the conventional thresholding method (precision, 99.5%; recall, 87.6%). These results indicate that our new algorithm extracted a tumour area close to that obtained by dermatologists and that, in particular, the border part of the tumour was adequately extracted. With this refinement, the area under the ROC curve increased from 0.795 to 0.875 and the diagnostic accuracy showed an increase of approximately 20% in specificity when the sensitivity was 80%. It can be concluded that our computer-based tumour extraction algorithm extracted almost the same area as that obtained by dermatologists and provided improved computer-based diagnostic accuracy.
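The STA-based criteria reduce to simple mask arithmetic: precision is the fraction of the extracted area lying inside the standard tumour area, and recall is the fraction of the standard tumour area recovered. A minimal sketch, assuming boolean lesion masks:

```python
import numpy as np

def precision_recall(extracted, sta):
    """Precision/recall of a boolean extraction mask against the STA mask."""
    overlap = np.logical_and(extracted, sta).sum()
    return overlap / extracted.sum(), overlap / sta.sum()
```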
Arbitrary order 2D virtual elements for polygonal meshes: part II, inelastic problem
NASA Astrophysics Data System (ADS)
Artioli, E.; Beirão da Veiga, L.; Lovadina, C.; Sacco, E.
2017-10-01
The present paper is the second part of a twofold work, whose first part is reported in Artioli et al. (Comput Mech, 2017. doi: 10.1007/s00466-017-1404-5), concerning a newly developed virtual element method (VEM) for 2D continuum problems. The first part of the work proposed a study of the linear elastic problem. The aim of this part is to explore the features of the VEM formulation when material nonlinearity is considered, showing that the accuracy and ease of implementation found in the analysis of the first part of the work are retained. Three different nonlinear constitutive laws are considered in the VEM formulation: the generalized viscoelastic model, classical Mises plasticity with isotropic/kinematic hardening, and a shape memory alloy constitutive law. The versatility with respect to all the considered nonlinear material constitutive laws is demonstrated through several numerical examples, also remarking that the proposed 2D VEM formulation can be straightforwardly implemented in a standard nonlinear structural finite element method framework.
Natural flow and water consumption in the Milk River basin, Montana and Alberta, Canada
Thompson, R.E.
1986-01-01
A study was conducted to determine the differences between natural and nonnatural Milk River streamflow, to delineate and quantify the types and effects of water consumption on streamflow, and to refine the current computation procedure into one that computes and apportions natural flow. Water consumption consists principally of irrigated agriculture, municipal use, and evapotranspiration. Mean daily water consumption by irrigation ranged from 10 cu ft/sec to 26 cu ft/sec in the Canadian part and from 6 cu ft/sec to 41 cu ft/sec in the US part. Two Canadian municipalities consume about 320 acre-ft and one US municipality consumes about 20 acre-ft yearly. Evaporation from the water surface comprises 80% to 90% of the flow reduction in the Milk River attributed to total evapotranspiration. The current water-budget approach for computing natural flow of the Milk River where it reenters the US was refined into an interim procedure that includes allowances for man-induced consumption and a method for apportioning computed natural flow between the US and Canada. The refined procedure is considered interim because further study of flow routing, tributary inflow, and man-induced consumption is needed before a more accurate procedure for computing natural flow can be developed. (Author's abstract)
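The interim procedure's accounting is plain addition: natural flow at the boundary is the recorded flow plus the man-induced depletions upstream. A sketch with placeholder numbers (not figures from the study):

```python
# All values in cu ft/sec; placeholders, not figures from the study.
recorded_flow = 250.0            # gauged flow where the river reenters the US
irrigation_canada = 18.0         # mean daily depletion (study range: 10-26)
irrigation_us = 24.0             # mean daily depletion (study range: 6-41)
municipal = 0.5                  # flow equivalent of yearly municipal use
net_evapotranspiration = 12.0    # mostly open-water evaporation (80-90%)

# Interim water-budget idea: add man-induced depletions back to recorded flow.
natural_flow = (recorded_flow + irrigation_canada + irrigation_us
                + municipal + net_evapotranspiration)
print(f"estimated natural flow: {natural_flow:.1f} cu ft/sec")
```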
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false How are my retirement benefits computed... Provisions § 839.1101 How are my retirement benefits computed if I elect CSRS or CSRS Offset under this part? Unless otherwise stated in this part, your retirement benefit is computed as if you were properly put in...
Simulating the X-Ray Image Contrast to Set-Up Techniques with Desired Flaw Detectability
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2015-01-01
The paper provides simulation data from previous work by the author on developing a model for estimating the detectability of crack-like flaws in radiography. The methodology is being developed to help in the implementation of NASA Special x-ray radiography qualification, but is generically applicable to radiography. The paper describes a method for characterizing X-ray detector resolution for crack detection, and the applicability of ASTM E 2737 resolution requirements to the model is also discussed. The paper describes a model for simulating the detector resolution. A computer calculator application, discussed here, also performs predicted contrast and signal-to-noise ratio calculations. Results of various simulation runs calculating the x-ray flaw size parameter and image contrast for varying input parameters, such as crack depth, crack width, part thickness, x-ray angle, part-to-detector distance, part-to-source distance, source size, and detector sensitivity and resolution, are given as 3D surfaces. These results demonstrate the effect of the input parameters on the flaw size parameter and the simulated image contrast of the crack, and they demonstrate the utility of the flaw size parameter model in setting up x-ray techniques that provide the desired flaw detectability in radiography. The method is applicable to film radiography, computed radiography, and digital radiography.
Extraction of gravitational waves in numerical relativity.
Bishop, Nigel T; Rezzolla, Luciano
2016-01-01
A numerical-relativity calculation yields in general a solution of the Einstein equations including also a radiative part, which is in practice computed in a region of finite extent. Since gravitational radiation is properly defined only at null infinity and in an appropriate coordinate system, the accurate estimation of the emitted gravitational waves represents an old and non-trivial problem in numerical relativity. A number of methods have been developed over the years to "extract" the radiative part of the solution from a numerical simulation and these include: quadrupole formulas, gauge-invariant metric perturbations, Weyl scalars, and characteristic extraction. We review and discuss each method, in terms of both its theoretical background as well as its implementation. Finally, we provide a brief comparison of the various methods in terms of their inherent advantages and disadvantages.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flathers, M.B.; Bache, G.E.
1999-10-01
Radial loads and direction of a centrifugal gas compressor containing a high specific speed mixed flow impeller and a single tongue volute were determined both experimentally and computationally at both design and off-design conditions. The experimental methodology was developed in conjunction with a traditional ASME PTC-10 closed-loop test to determine radial load and direction. The experimental study is detailed in Part 1 of this paper (Moore and Flathers, 1998). The computational method employs a commercially available, fully three-dimensional viscous code to analyze the impeller and the volute interaction. An uncoupled scheme was initially used, where the impeller and volute were analyzed as separate models using a common vaneless diffuser geometry. The two calculations were then repeated until the boundary conditions at a chosen location in the common vaneless diffuser were nearly the same. Subsequently, a coupled scheme was used, where the entire stage geometry was analyzed in one calculation, thus eliminating the need for manual iteration of the two independent calculations. In addition to radial load and direction information, this computational procedure also provided aerodynamic stage performance. The effect of impeller front face and rear face cavities was also quantified. The paper will discuss computational procedures, including grid generation and boundary conditions, as well as comparisons of the various computational schemes to experiment. The results of this study will show the limitations and benefits of Computational Fluid Dynamics (CFD) for determination of radial load, direction, and aerodynamic stage performance.
Robust and accurate vectorization of line drawings.
Hilaire, Xavier; Tombre, Karl
2006-06-01
This paper presents a method for vectorizing the graphical parts of paper-based line drawings. The method consists of separating the input binary image into layers of homogeneous thickness, skeletonizing each layer, segmenting the skeleton by a method based on random sampling, and simplifying the result. The segmentation method is robust with a best bound of 50 percent noise reached for indefinitely long primitives. Accurate estimation of the recognized vector's parameters is enabled by explicitly computing their feasibility domains. Theoretical performance analysis and expression of the complexity of the segmentation method are derived. Experimental results and comparisons with other vectorization systems are also provided.
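The random-sampling segmentation step can be illustrated with a plain RANSAC line fit over skeleton points: sample two points, score the implied line by its inliers, and keep the best-supported primitive. Arc primitives and the feasibility-domain computation described above are omitted; this is a generic sketch, not the authors' code.

```python
import numpy as np

def ransac_line(points, n_trials=500, tol=1.5, rng=None):
    """Return the inlier mask of the best-supported line through 2D points."""
    if rng is None:
        rng = np.random.default_rng()
    best = np.zeros(len(points), dtype=bool)
    for _ in range(n_trials):
        p, q = points[rng.choice(len(points), size=2, replace=False)]
        d = q - p
        norm = np.hypot(d[0], d[1])
        if norm == 0.0:
            continue
        nvec = np.array([-d[1], d[0]]) / norm    # unit normal of the line p-q
        inliers = np.abs((points - p) @ nvec) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best
```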
An open source software for fast grid-based data-mining in spatial epidemiology (FGBASE).
Baker, David M; Valleron, Alain-Jacques
2014-10-30
Examining whether disease cases are clustered in space is an important part of epidemiological research. Another important part of spatial epidemiology is testing whether patients suffering from a disease are more, or less, exposed to environmental factors of interest than adequately defined controls. Both approaches involve determining the number of cases and controls (or population at risk) in specific zones. For cluster searches, this often must be done for millions of different zones, and doing it by calculating distances can lead to very lengthy computations. In this work we discuss the computational advantages of geographical grid-based methods and introduce an open source software package (FGBASE) which we have created for this purpose. Geographical grids based on the Lambert azimuthal equal-area projection are well suited for spatial epidemiology because they preserve area: each cell of the grid has the same area. We describe how data are projected onto such a grid, as well as grid-based algorithms for spatial epidemiological data-mining. The software program (FGBASE) that we have developed implements these grid-based methods. The grid-based algorithms perform extremely fast, particularly for cluster searches. When applied to a cohort of French Type 1 Diabetes (T1D) patients, as an example, the grid-based algorithms detected potential clusters in a few seconds on a modern laptop. This compares very favorably to an equivalent cluster search using distance calculations instead of a grid, which took over 4 hours on the same computer. In the case study we discovered 4 potential clusters of T1D cases near the cities of Le Havre, Dunkerque, Toulouse and Nantes. One example of environmental analysis with our software was to study whether a significant association could be found with distance to vineyards subject to heavy pesticide use; none was found. In both examples, the software facilitates the rapid testing of hypotheses. Grid-based algorithms for mining spatial epidemiological data provide advantages in terms of computational complexity, thus improving the speed of computations. We believe that these methods and this software tool (FGBASE) will lower the computational barriers to entry for those performing epidemiological research.
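The computational advantage comes from replacing distance tests with integer cell indexing. A minimal sketch, assuming coordinates are already in an equal-area projection (in practice obtained via a library such as pyproj); the function and parameter names are illustrative:

```python
import numpy as np

def grid_counts(xy, cell_size):
    """Bin projected (x, y) points into equal-area grid cells and count them."""
    idx = np.floor(np.asarray(xy) / cell_size).astype(np.int64)  # (N, 2) indices
    cells, counts = np.unique(idx, axis=0, return_counts=True)
    return {tuple(c): int(n) for c, n in zip(cells, counts)}

# A zone query is then a sum over the cell indices covering the zone,
# with no per-point distance computation.
```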
Design and Analysis of a Subcritical Airfoil for High Altitude, Long Endurance Missions.
1982-12-01
Appendix C: Airfoil Design and Analysis Method; Appendix D: Boundary Layer Analysis Method; Appendix E: Detailed Results. … attack. Computer codes designed by Richard Eppler were used for this study. The airfoil was analyzed by using a viscous effects analysis program. … An inverse program designed by Eppler (Ref 5) was used in this study to accomplish this part. The second step involved the analysis of the airfoil under …
40 CFR Appendix C to Part 66 - Computer Program
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 16 2012-07-01 2012-07-01 false Computer Program C Appendix C to Part 66 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ASSESSMENT AND COLLECTION OF NONCOMPLIANCE PENALTIES BY EPA Pt. 66, App. C Appendix C to Part 66—Computer...
40 CFR Appendix C to Part 66 - Computer Program
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 16 2014-07-01 2014-07-01 false Computer Program C Appendix C to Part 66 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ASSESSMENT AND COLLECTION OF NONCOMPLIANCE PENALTIES BY EPA Pt. 66, App. C Appendix C to Part 66—Computer...
40 CFR Appendix C to Part 67 - Computer Program
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 16 2013-07-01 2013-07-01 false Computer Program C Appendix C to Part 67 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) EPA APPROVAL OF STATE NONCOMPLIANCE PENALTY PROGRAM Pt. 67, App. C Appendix C to Part 67—Computer Program Note...
40 CFR Appendix C to Part 66 - Computer Program
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 16 2013-07-01 2013-07-01 false Computer Program C Appendix C to Part 66 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ASSESSMENT AND COLLECTION OF NONCOMPLIANCE PENALTIES BY EPA Pt. 66, App. C Appendix C to Part 66—Computer...
40 CFR Appendix C to Part 67 - Computer Program
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 16 2014-07-01 2014-07-01 false Computer Program C Appendix C to Part 67 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) EPA APPROVAL OF STATE NONCOMPLIANCE PENALTY PROGRAM Pt. 67, App. C Appendix C to Part 67—Computer Program Note...
40 CFR Appendix C to Part 67 - Computer Program
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 16 2012-07-01 2012-07-01 false Computer Program C Appendix C to Part 67 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) EPA APPROVAL OF STATE NONCOMPLIANCE PENALTY PROGRAM Pt. 67, App. C Appendix C to Part 67—Computer Program Note...
40 CFR Appendix C to Part 66 - Computer Program
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 15 2010-07-01 2010-07-01 false Computer Program C Appendix C to Part 66 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ASSESSMENT AND COLLECTION OF NONCOMPLIANCE PENALTIES BY EPA Pt. 66, App. C Appendix C to Part 66—Computer...
40 CFR Appendix C to Part 67 - Computer Program
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 15 2010-07-01 2010-07-01 false Computer Program C Appendix C to Part 67 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) EPA APPROVAL OF STATE NONCOMPLIANCE PENALTY PROGRAM Pt. 67, App. C Appendix C to Part 67—Computer Program Note...
NASA Technical Reports Server (NTRS)
Thomas, P. D.
1980-01-01
A computer implemented numerical method for predicting the flow in and about an isolated three dimensional jet exhaust nozzle is summarized. The approach is based on an implicit numerical method to solve the unsteady Navier-Stokes equations in a boundary conforming curvilinear coordinate system. Recent improvements to the original numerical algorithm are summarized. Equations are given for evaluating nozzle thrust and discharge coefficient in terms of computed flowfield data. The final formulation of models that are used to simulate flow turbulence effect is presented. Results are presented from numerical experiments to explore the effect of various quantities on the rate of convergence to steady state and on the final flowfield solution. Detailed flowfield predictions for several two and three dimensional nozzle configurations are presented and compared with wind tunnel experimental data.
Assessment of Computational Fluid Dynamics (CFD) Models for Shock Boundary-Layer Interaction
NASA Technical Reports Server (NTRS)
DeBonis, James R.; Oberkampf, William L.; Wolf, Richard T.; Orkwis, Paul D.; Turner, Mark G.; Babinsky, Holger
2011-01-01
A workshop on the computational fluid dynamics (CFD) prediction of shock boundary-layer interactions (SBLIs) was held at the 48th AIAA Aerospace Sciences Meeting. As part of the workshop numerous CFD analysts submitted solutions to four experimentally measured SBLIs. This paper describes the assessment of the CFD predictions. The assessment includes an uncertainty analysis of the experimental data, the definition of an error metric and the application of that metric to the CFD solutions. The CFD solutions provided very similar levels of error and in general it was difficult to discern clear trends in the data. For the Reynolds Averaged Navier-Stokes methods the choice of turbulence model appeared to be the largest factor in solution accuracy. Large-eddy simulation methods produced error levels similar to RANS methods but provided superior predictions of normal stresses.
Analysis of dynamics and fit of diving suits
NASA Astrophysics Data System (ADS)
Mahnic Naglic, M.; Petrak, S.; Gersak, J.; Rolich, T.
2017-10-01
The paper presents research on the dynamic behaviour and fit analysis of customised diving suits. Diving suit models are developed using the 3D flattening method, which enables the construction of a garment model directly on the 3D computer body model, the separation of discrete 3D surfaces, and their transformation into 2D cutting parts. 3D body scanning of male and female test subjects was performed for the analysis of body measurements in static and dynamic postures, and the processed body models were used for the construction and simulation of diving suit prototypes. All parameters necessary for 3D simulation were applied to the obtained cutting parts, along with values for the mechanical properties of the neoprene material. The developed computer diving suit prototypes were used for stretch analysis on areas relevant to body dimensional changes according to dynamic anthropometrics. Garment pressure against the body in static and dynamic conditions was also analysed. The garment patterns for which the computer prototype verification was conducted were used for real prototype production, and the real prototypes were likewise used for stretch and pressure analysis in static and dynamic conditions. Based on the obtained results, a correlation analysis between body changes in dynamic positions and the dynamic stress determined on the computer and real prototypes was performed.
Slicing Method for curved façade and window extraction from point clouds
NASA Astrophysics Data System (ADS)
Iman Zolanvari, S. M.; Laefer, Debra F.
2016-09-01
Laser scanning technology is a fast and reliable method to survey structures. However, the automatic conversion of such data into solid models for computation remains a major challenge, especially where non-rectilinear features are present. Since openings and the overall dimensions of a building are the most critical elements in computational models for structural analysis, this article introduces the Slicing Method as a new, computationally efficient method for extracting overall façade and window boundary points and reconstructing a façade into a geometry compatible with computational modelling. After finding a principal plane, the technique slices a façade into limited portions, with each slice representing a unique, imaginary section passing through the building. This is done along a façade's principal axes to segregate window and door openings from structural portions of the load-bearing masonry walls. The method detects each opening area's boundaries, as well as the overall boundary of the façade, in part by using a one-dimensional projection to accelerate processing. Slice counts were optimised at 14.3 slices per vertical metre of building and 25 slices per horizontal metre, irrespective of building configuration or complexity. The proposed procedure was validated by application to three highly decorative, historic brick buildings. Accuracy in excess of 93% was achieved with no manual intervention on highly complex buildings, and nearly 100% on simple ones. Furthermore, computational times were less than 3 sec for data sets of up to 2.6 million points, while similar existing approaches required more than 16 hr for such datasets.
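The heart of the method is the one-dimensional projection within each slice: project the slice's points onto a principal axis of the façade and read openings off the empty runs of the occupancy histogram. A minimal sketch with illustrative parameter values, not the paper's implementation:

```python
import numpy as np

def opening_spans(slice_u, bin_size=0.05, min_gap=0.3):
    """Return (left, right) spans of empty runs along one facade slice."""
    u = np.sort(np.asarray(slice_u))              # in-plane coordinates, metres
    bins = np.arange(u[0], u[-1] + bin_size, bin_size)
    occupied = np.histogram(u, bins=bins)[0] > 0  # 1D occupancy of the slice
    spans, start = [], None
    for i, occ in enumerate(occupied):
        if not occ and start is None:
            start = bins[i]                       # an empty run begins
        elif occ and start is not None:
            if bins[i] - start >= min_gap:        # wide enough to be an opening
                spans.append((start, bins[i]))
            start = None
    return spans
```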
GENERAL ELECTROMAGNETIC MODEL FOR THE ANALYSIS OF COMPLEX SYSTEMS (GEMACS) Computer Code Documentation (Version 3)
The BDM Corporation
1983-09-01
Routing performance analysis and optimization within a massively parallel computer
Archer, Charles Jens; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen
2013-04-16
An apparatus, program product and method optimize the operation of a massively parallel computer system by, in part, receiving actual performance data concerning an application executed by the plurality of interconnected nodes, and analyzing the actual performance data to identify an actual performance pattern. A desired performance pattern may be determined for the application, and an algorithm may be selected from among a plurality of algorithms stored within a memory, the algorithm being configured to achieve the desired performance pattern based on the actual performance data.
Spectrum orbit utilization program technical manual SOUP5 Version 3.8
NASA Technical Reports Server (NTRS)
Davidson, J.; Ottey, H. R.; Sawitz, P.; Zusman, F. S.
1984-01-01
The underlying engineering and mathematical models as well as the computational methods used by the SOUP5 analysis programs, which are part of the R2BCSAT-83 Broadcast Satellite Computational System, are described. Included are the algorithms used to calculate the technical parameters and references to the relevant technical literature. The system provides the following capabilities: requirements file maintenance, data base maintenance, elliptical satellite beam fitting to service areas, plan synthesis from specified requirements, plan analysis, and report generation/query. Each of these functions are briefly described.
NASA Technical Reports Server (NTRS)
Fertis, D. G.; Simon, A. L.
1981-01-01
The requisite methodology is developed to solve linear and nonlinear problems associated with the static and dynamic analysis of rotating machinery, their static and dynamic behavior, and the interaction between the rotating and nonrotating parts of an engine. Linear and nonlinear structural engine problems are investigated by developing solution strategies and interactive computational methods whereby the user and computer can communicate directly in making analysis decisions. Representative examples include modifying structural models, changing material parameters, selecting analysis options, and coupling with an interactive graphical display for pre- and postprocessing capability.
NASA Astrophysics Data System (ADS)
Tsuboi, S.; Miyoshi, T.; Obayashi, M.; Tono, Y.; Ando, K.
2014-12-01
Recent progress in large-scale computing, using waveform modeling techniques and high-performance computing facilities, has demonstrated the possibility of performing full-waveform inversion of the three-dimensional (3D) seismological structure inside the Earth. We apply the adjoint method (Liu and Tromp, 2006) to obtain the 3D structure beneath the Japanese Islands. First, we implemented the Spectral-Element Method on the K computer in Kobe, Japan. We optimized SPECFEM3D_GLOBE (Komatitsch and Tromp, 2002) using OpenMP so that the code fits the hybrid architecture of the K computer. We could then use 82,134 nodes of the K computer (657,072 cores) to compute synthetic waveforms with about 1 sec accuracy for a realistic 3D Earth model, and its performance was 1.2 PFLOPS. We use this optimized SPECFEM3D_GLOBE code, take one chunk around the Japanese Islands from the global mesh, and compute synthetic seismograms with an accuracy of about 10 seconds. We use the GAP-P2 mantle tomography model (Obayashi et al., 2009) as an initial 3D model and use as many broadband seismic stations available in this region as possible to perform the inversion. We then use time windows for body waves and surface waves to compute adjoint sources and calculate adjoint kernels for the seismic structure. We have performed several iterations and obtained an improved 3D structure beneath the Japanese Islands. The result demonstrates that waveform misfits between observed and theoretical seismograms improve as the iteration proceeds. We are now preparing to use much shorter periods in our synthetic waveform computation and to obtain seismic structure for basin-scale models, such as the Kanto basin, where there is a dense seismic network and high seismic activity. Acknowledgements: This research was partly supported by the MEXT Strategic Program for Innovative Research. We used F-net seismograms of the National Research Institute for Earth Science and Disaster Prevention.
NASA Technical Reports Server (NTRS)
Stremel, Paul M.
1995-01-01
A method has been developed to accurately compute the viscous flow in three-dimensional (3-D) enclosures. This method is the 3-D extension of a two-dimensional (2-D) method developed for the calculation of flow over airfoils. The 2-D method has been tested extensively and has been shown to accurately reproduce experimental results. As in the 2-D method, the 3-D method provides for the non-iterative solution of the incompressible Navier-Stokes equations by means of a fully coupled implicit technique. The solution is calculated on a body-fitted computational mesh incorporating a staggered grid methodology. In the staggered grid method, the three components of vorticity are defined at the centers of the computational cell sides, while the velocity components are defined as normal vectors at the centers of the computational cell faces. The staggered grid orientation provides for the accurate definition of the vorticity components at the vorticity locations, the divergence of vorticity at the mesh cell nodes, and the conservation of mass at the mesh cell centers. The solution is obtained by utilizing a fractional step solution technique in the three coordinate directions. The boundary conditions for the vorticity and velocity are calculated implicitly as part of the solution. The method provides for the non-iterative solution of the flow field and satisfies the conservation of mass and divergence of vorticity to machine zero at each time step. To test the method, simple driven cavity flows have been computed. The driven cavity flow is defined as the flow in an enclosure driven by a moving plate at the top of the enclosure. To demonstrate the ability of the method to predict the flow in arbitrary cavities, results will be shown for both cubic and curved cavities.
High-speed extended-term time-domain simulation for online cascading analysis of power system
NASA Astrophysics Data System (ADS)
Fu, Chuan
A high-speed extended-term (HSET) time domain simulator (TDS), intended to become a part of an energy management system (EMS), has been newly developed for use in online extended-term dynamic cascading analysis of power systems. HSET-TDS includes the following attributes for providing situational awareness of high-consequence events: (i) online analysis, including n-1 and n-k events; (ii) the ability to simulate both fast and slow dynamics for 1-3 hours in advance; (iii) inclusion of rigorous protection-system modeling; (iv) intelligence for corrective action identification, storage, and fast retrieval; and (v) high-speed execution. Very fast online computational capability is the most desired attribute of this simulator. Based on the process of solving the differential-algebraic equations describing the dynamics of a power system, HSET-TDS seeks computational efficiency at each of the following hierarchical levels: (i) hardware, (ii) strategies, (iii) integration methods, (iv) nonlinear solvers, and (v) linear solver libraries. This thesis first describes the Hammer-Hollingsworth 4 (HH4) implicit integration method. Like the trapezoidal rule, HH4 is symmetrically A-stable, but it possesses greater high-order precision (h^4) than the trapezoidal rule. Such precision enables larger integration steps and therefore improves simulation efficiency for variable-step-size implementations. This thesis provides the underlying theory on which we advocate use of HH4 over other numerical integration methods for power system time-domain simulation. Second, motivated by the need to perform high-speed extended-term time domain simulation for online purposes, this thesis presents principles for designing numerical solvers of differential-algebraic systems associated with power system time-domain simulation, including DAE construction strategies (direct solution method), integration methods (HH4), nonlinear solvers (very dishonest Newton), and linear solvers (SuperLU). We have implemented a design appropriate for HSET-TDS and compare it to various solvers, including the commercial-grade PSSE program, with respect to computational efficiency and accuracy, using as examples the New England 39-bus system, the expanded 8775-bus system, and the PJM 13029-bus system. Third, we have explored a stiffness-decoupling method, intended to be part of a parallel design of time domain simulation software for supercomputers. The stiffness-decoupling method is able to combine the advantages of implicit methods (A-stability) and explicit methods (less computation). With the new stiffness detection method proposed herein, the stiffness can be captured. The expanded 975-bus system is used to test simulation efficiency. Finally, several parallel strategies for supercomputer deployment to simulate power system dynamics are proposed and compared. Design A partitions the task via scale with the stiffness-decoupling method, waveform relaxation, and a parallel linear solver. Design B partitions the task via the time axis using a highly precise integration method, the order-8 Kuntzmann-Butcher method (KB8). The strategy of partitioning events is designed to partition the whole simulation via the time axis through a simulated sequence of cascading events. For all strategies proposed, the strategy of partitioning cascading events is recommended, since the sub-tasks for each processor are totally independent and therefore minimum communication time is needed.
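The "very dishonest Newton" (VDHN) tactic named above is simple to sketch: factorize the Jacobian once and reuse the factorization across Newton iterations (and often across time steps), refreshing it only occasionally. A generic sketch, not the thesis code; the refresh policy shown is illustrative.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def vdhn_solve(F, J, x0, tol=1e-10, max_iter=50, refresh_every=5):
    """Newton iteration that reuses a stale Jacobian factorization (VDHN)."""
    x = x0.copy()
    lu_piv = None
    for k in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        if lu_piv is None or k % refresh_every == 0:
            lu_piv = lu_factor(J(x))        # occasional "honest" refactorization
        x = x - lu_solve(lu_piv, r)         # cheap solves with the old factors
    return x
```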
An Algorithm for Computing Matrix Square Roots with Application to Riccati Equation Implementation
1977-01-01
…pansion is compared to Euclid's method. The a priori upper and lower bounds are also calculated. The third part of this paper extends the scalar square root algorithm… (Aerospace Medical Research Laboratory, Aerospace Medical Division, Air Force Systems Command, Wright-Patterson Air Force Base, Ohio 45433)
Web Pages: An Effective Method of Providing CAI Resource Material in Histology.
ERIC Educational Resources Information Center
McLean, Michelle
2001-01-01
Presents research that introduces computer-aided instruction (CAI) resource material as an integral part of the second-year histology course at the University of Natal Medical School. Describes the ease with which this software can be developed, using limited resources and available skills, while providing students with valuable learning…
A System for English Vocabulary Acquisition Based on Code-Switching
ERIC Educational Resources Information Center
Mazur, Michal; Karolczak, Krzysztof; Rzepka, Rafal; Araki, Kenji
2016-01-01
Vocabulary plays an important part in second language learning and there are many existing techniques to facilitate word acquisition. One of these methods is code-switching, or mixing the vocabulary of two languages in one sentence. In this paper the authors propose an experimental system for computer-assisted English vocabulary learning in…
A WebGIS-Based Teaching Assistant System for Geography Field Practice (TASGFP)
ERIC Educational Resources Information Center
Wang, Jiechen; Ni, Haochen; Rui, Yikang; Cui, Can; Cheng, Liang
2016-01-01
Field practice is an important part of training geography research talents. However, traditional teaching methods may not adequately manage, share and implement instruction resources and thus may limit the instructor's ability to conduct field instruction. A possible answer is found in the rapid development of computer-assisted instruction (CAI),…
ERIC Educational Resources Information Center
Milet, Lynn K.; Harvey, Francis A.
Hypermedia and object-oriented programming systems (OOPs) represent examples of "open" computer environments that allow the user access to parts of the code or operating system. Both systems share fundamental intellectual concepts (objects, messages, methods, classes, and inheritance), so that an understanding of hypermedia can help in…
Practical sliced configuration spaces for curved planar pairs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sacks, E.
1999-01-01
In this article, the author presents a practical configuration-space computation algorithm for pairs of curved planar parts, based on the general algorithm developed by Bajaj and the author. The general algorithm advances the theoretical understanding of configuration-space computation, but is too slow and fragile for some applications. The new algorithm solves these problems by restricting the analysis to parts bounded by line segments and circular arcs, whereas the general algorithm handles rational parametric curves. The trade-off is worthwhile, because the restricted class handles most robotics and mechanical engineering applications. The algorithm reduces run time by a factor of 60 on nine representative engineering pairs, and by a factor of 9 on two human-knee pairs. It also handles common special pairs by specialized methods. A survey of 2,500 mechanisms shows that these methods cover 90% of pairs and yield an additional factor of 10 reduction in average run time. The theme of this article is that application requirements, as well as intrinsic theoretical interest, should drive configuration-space research.
Identification of internal defects of hardfacing coatings in regeneration of machine parts
NASA Astrophysics Data System (ADS)
Józwik, Jerzy; Dziedzic, Krzysztof; Pashechko, Mykhalo; Łukasiewicz, Andrzej
2017-10-01
The quality control of hardfacing is one of the areas where non-destructive testing is applied. To detect defects and inconsistencies, industrial practice uses the same methods as in the testing of welded joints. Computed tomography is an X-ray-based diagnostic method that provides layered images of the examined hardfacing. The paper presents the use of computed tomography for the evaluation of defects and errors in hardfaced parts. Padding welds were produced using GMA consumable-electrode welding with CO2 active gas. The padding materials were FILTUB DUR 16 cored wires and wires produced from a Fe-Mn-C-Si-Cr-Mo-Ti-W alloy. The layers were padded onto different surfaces: C45, 165CrV12, 42CrMo4, and S235JR steel. Typical defects occurring in the pads and the influence of the type of wire on the concentration of defects were characterized. The resulting pads exhibited inconsistencies in the form of pores, inclusions, and fractures.
NASA Astrophysics Data System (ADS)
Wan, Tian
This work is motivated by the lack of a fully coupled computational tool that successfully solves the turbulent chemically reacting Navier-Stokes equations, the electron energy conservation equation, and the electric current Poisson equation. In the present work, the abovementioned equations are solved in a fully coupled manner using fully implicit parallel GMRES methods. The system of Navier-Stokes equations is solved using a GMRES method with combined Schwarz and ILU(0) preconditioners. The electron energy equation and the electric current Poisson equation are solved using a GMRES method with combined SOR and Jacobi preconditioners. The fully coupled method has also been implemented successfully in an unstructured solver, US3D, and convergence test results are presented. This new method is shown to be two to five times faster than the original DPLR method. The Poisson solver is validated with analytic test problems. Then, four problems are selected; two of them are computed to explore the possibility of onboard MHD control and power generation, and the other two are simulations of experiments. First, the possibility of onboard re-entry shock control by a magnetic field is explored. As part of a previous project, MHD power generation onboard a re-entry vehicle is also simulated. Then, the MHD acceleration experiments conducted at the NASA Ames Research Center are simulated. Lastly, the MHD power generation experiments known as the HVEPS project are simulated. For code validation, the scramjet experiments at the University of Queensland are simulated first; the generator section of the HVEPS test facility is computed next. The main conclusion is that the computational tool is accurate for different types of problems and flow conditions, and its accuracy and efficiency are necessary when the flow complexity increases.
Growth Control and Disease Mechanisms in Computational Embryogeny
NASA Technical Reports Server (NTRS)
Shapiro, Andrew A.; Yogev, Or; Antonsson, Erik K.
2008-01-01
This paper presents a novel approach to applying growth control and disease mechanisms in computational embryogeny. Our method, which mimics fundamental processes from biology, enables individuals to reach maturity in a controlled process through a stochastic environment. Three different mechanisms were implemented: disease mechanisms, gene suppression, and thermodynamic balancing. This approach was integrated as part of a structural evolutionary model. The model evolved continuum 3-D structures that support an external load. By using these mechanisms we were able to evolve individuals that reached a fixed size limit through the growth process. The growth process was an integral part of the complete development process. The size of the individuals was determined purely by the evolutionary process, with different individuals maturing to different sizes. Individuals that evolved with these characteristics have been found to be very robust in supporting a wide range of external loads.
NASA Technical Reports Server (NTRS)
2000-01-01
The molecule modeling method known as Multibody Order (N) Dynamics, or MBO(N)D, was developed by Moldyn, Inc. at Goddard Space Flight Center through funding provided by the SBIR program. The software can model the dynamics of molecules through technology which simulates low-frequency molecular motions and properties, such as movements among a molecule's constituent parts. With MBO(N)D, a molecule is substructured into a set of interconnected rigid and flexible bodies. These bodies replace the computational burden of mapping individual atoms. Moldyn's technology cuts computation time while increasing accuracy. The MBO(N)D technology is available as Insight II 97.0 from Molecular Simulations, Inc. Currently the technology is used to account for forces on spacecraft parts and to perform molecular analyses for pharmaceutical purposes. It permits the solution of molecular dynamics problems on a moderate workstation, as opposed to on a supercomputer.
Computational wave dynamics for innovative design of coastal structures
GOTOH, Hitoshi; OKAYASU, Akio
2017-01-01
For innovative designs of coastal structures, Numerical Wave Flumes (NWFs), which are solvers of the Navier-Stokes equations for free-surface flows, are key tools. In this article, various methods and techniques for NWFs are overviewed. In the first half, key techniques of NWFs, namely interface capturing (MAC, VOF, C-CUP), and the significance of NWFs in comparison with conventional wave models are described. In the latter part of this article, recent improvements of the particle method are shown as one of the cores of NWFs. Methods for attenuating unphysical pressure fluctuation and improving accuracy, such as the CMPS method for momentum conservation, Higher-order Source of the Poisson Pressure Equation (PPE), Higher-order Laplacian, Error-Compensating Source in the PPE, and Gradient Correction for ensuring Taylor-series consistency, are reviewed briefly. Finally, the latest frontier of the accurate particle method, including Dynamic Stabilization for providing the minimum-required artificial repulsive force to improve stability of computation, and Space Potential Particles for describing the exact free-surface boundary condition, is described. PMID:29021506
Cooley, R.L.; Hill, M.C.
1992-01-01
Three methods of solving nonlinear least-squares problems were compared for robustness and efficiency using a series of hypothetical and field problems. A modified Gauss-Newton/full Newton hybrid method (MGN/FN) and an analogous method for which part of the Hessian matrix was replaced by a quasi-Newton approximation (MGN/QN) solved some of the problems with appreciably fewer iterations than required using only a modified Gauss-Newton (MGN) method. In these problems, model nonlinearity and a large variance for the observed data apparently caused MGN to converge more slowly than MGN/FN or MGN/QN after the sum of squared errors had almost stabilized. Other problems were solved as efficiently with MGN as with MGN/FN or MGN/QN. Because MGN/FN can require significantly more computer time per iteration and more computer storage for transient problems, it is less attractive for a general purpose algorithm than MGN/QN.
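A minimal sketch of the baseline modified Gauss-Newton iteration from which such comparisons start; the exponential-fit example is hypothetical, and the full-Newton and quasi-Newton hybrids discussed above would add (approximations of) the second-order residual terms to the normal equations.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, n_iter=50, tol=1e-12):
    """Basic Gauss-Newton for min ||r(x)||^2; the full-Newton and quasi-Newton
    hybrids augment J^T J with (approximations of) sum_i r_i * Hess(r_i)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r, J = residual(x), jacobian(x)
        dx = np.linalg.lstsq(J, -r, rcond=None)[0]  # solves J^T J dx = -J^T r
        x += dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# hypothetical example: fit y = a * exp(b * t) to data
t = np.linspace(0, 1, 20)
y = 2.0 * np.exp(-1.5 * t)
residual = lambda p: p[0] * np.exp(p[1] * t) - y
jacobian = lambda p: np.column_stack([np.exp(p[1] * t),
                                      p[0] * t * np.exp(p[1] * t)])
print(gauss_newton(residual, jacobian, [1.0, -1.0]))  # converges to ~ [2.0, -1.5]
```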
NASA Technical Reports Server (NTRS)
Desmarais, R. N.
1982-01-01
This paper describes an accurate, economical method for generating approximations to the kernel of the integral equation relating unsteady pressure to normalwash in nonplanar flow. The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the non-elementary integrals in the kernel by exponential approximations and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation that is currently most often used for this purpose. Coefficients for 8-, 12-, 24-, and 72-term approximations are tabulated in the report. Also, since the method is automated, it can be used to generate approximations attaining any desired trade-off between accuracy and computing cost.
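The core fitting step described above reduces to linear least squares once the exponents are fixed on a geometric grid. Below is a sketch under stated assumptions: the target function, the base exponent, and the ratio are all illustrative, not the report's tabulated values.

```python
import numpy as np

def fit_exponential_sum(g, u, n_terms, b=0.01, ratio=1.5):
    """Fit g(u) ~ sum_k a_k exp(-b * ratio**k * u) on samples u by linear
    least squares; the exponents form a geometric sequence as in the paper."""
    exponents = b * ratio ** np.arange(n_terms)
    basis = np.exp(-np.outer(u, exponents))          # design matrix
    coeffs, *_ = np.linalg.lstsq(basis, g(u), rcond=None)
    return coeffs, exponents

# illustrative algebraic target of the kind that appears in kernel work
u = np.linspace(0.0, 50.0, 2000)
g = lambda u: 1.0 / np.sqrt(1.0 + u ** 2)
a, lam = fit_exponential_sum(g, u, n_terms=12)
approx = np.exp(-np.outer(u, lam)) @ a
print(np.max(np.abs(approx - g(u))))                 # maximum fit error
```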
39 CFR Appendix A to Part 265 - Fees for Computer Searches
Code of Federal Regulations, 2013 CFR
2013-07-01
... 39 Postal Service 1 2013-07-01 2013-07-01 false Fees for Computer Searches A Appendix A to Part 265 Postal Service UNITED STATES POSTAL SERVICE ORGANIZATION AND ADMINISTRATION RELEASE OF INFORMATION Pt. 265, App. A Appendix A to Part 265—Fees for Computer Searches When requested information must be...
39 CFR Appendix A to Part 265 - Fees for Computer Searches
Code of Federal Regulations, 2012 CFR
2012-07-01
... 39 Postal Service 1 2012-07-01 2012-07-01 false Fees for Computer Searches A Appendix A to Part 265 Postal Service UNITED STATES POSTAL SERVICE ORGANIZATION AND ADMINISTRATION RELEASE OF INFORMATION Pt. 265, App. A Appendix A to Part 265—Fees for Computer Searches When requested information must be...
39 CFR Appendix A to Part 265 - Fees for Computer Searches
Code of Federal Regulations, 2011 CFR
2011-07-01
... 39 Postal Service 1 2011-07-01 2011-07-01 false Fees for Computer Searches A Appendix A to Part 265 Postal Service UNITED STATES POSTAL SERVICE ORGANIZATION AND ADMINISTRATION RELEASE OF INFORMATION Pt. 265, App. A Appendix A to Part 265—Fees for Computer Searches When requested information must be...
39 CFR Appendix A to Part 265 - Fees for Computer Searches
Code of Federal Regulations, 2014 CFR
2014-07-01
... 39 Postal Service 1 2014-07-01 2014-07-01 false Fees for Computer Searches A Appendix A to Part 265 Postal Service UNITED STATES POSTAL SERVICE ORGANIZATION AND ADMINISTRATION RELEASE OF INFORMATION Pt. 265, App. A Appendix A to Part 265—Fees for Computer Searches When requested information must be...
39 CFR Appendix A to Part 265 - Fees for Computer Searches
Code of Federal Regulations, 2010 CFR
2010-07-01
... 39 Postal Service 1 2010-07-01 2010-07-01 false Fees for Computer Searches A Appendix A to Part 265 Postal Service UNITED STATES POSTAL SERVICE ORGANIZATION AND ADMINISTRATION RELEASE OF INFORMATION Pt. 265, App. A Appendix A to Part 265—Fees for Computer Searches When requested information must be...
10 CFR Appendix II to Part 504 - Fuel Price Computation
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 4 2012-01-01 2012-01-01 false Fuel Price Computation II Appendix II to Part 504 Energy DEPARTMENT OF ENERGY (CONTINUED) ALTERNATE FUELS EXISTING POWERPLANTS Pt. 504, App. II Appendix II to Part 504—Fuel Price Computation (a) Introduction. This appendix provides the equations and parameters...
40 CFR Appendix C to Part 67 - Computer Program
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 15 2011-07-01 2011-07-01 false Computer Program C Appendix C to Part... APPROVAL OF STATE NONCOMPLIANCE PENALTY PROGRAM Pt. 67, App. C Appendix C to Part 67—Computer Program Note: EPA will make copies of appendix C available from: Director, Stationary Source Compliance Division, EN...
Cornuet, Jean-Marie; Santos, Filipe; Beaumont, Mark A; Robert, Christian P; Marin, Jean-Michel; Balding, David J; Guillemaud, Thomas; Estoup, Arnaud
2008-12-01
Genetic data obtained on population samples convey information about their evolutionary history. Inference methods can extract part of this information but they require sophisticated statistical techniques that have been made available to the biologist community (through computer programs) only for simple and standard situations typically involving a small number of samples. We propose here a computer program (DIY ABC) for inference based on approximate Bayesian computation (ABC), in which scenarios can be customized by the user to fit many complex situations involving any number of populations and samples. Such scenarios involve any combination of population divergences, admixtures and population size changes. DIY ABC can be used to compare competing scenarios, estimate parameters for one or more scenarios and compute bias and precision measures for a given scenario and known values of parameters (the current version applies to unlinked microsatellite data). This article describes key methods used in the program and provides its main features. The analysis of one simulated and one real dataset, both with complex evolutionary scenarios, illustrates the main possibilities of DIY ABC. The software DIY ABC is freely available at http://www.montpellier.inra.fr/CBGP/diyabc.
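DIY ABC's customizable scenarios are far richer than this, but the core accept/reject logic of approximate Bayesian computation fits in a few lines. The toy model, prior range, and tolerance below are assumptions for illustration, not the program's actual summary statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_rejection(observed_stat, simulate, prior_sample, n_draws=100_000, eps=0.05):
    """Minimal rejection ABC: keep parameter draws whose simulated summary
    statistic falls within eps of the observed one."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if abs(simulate(theta) - observed_stat) < eps:
            accepted.append(theta)
    return np.array(accepted)

# toy stand-in for a population-genetics summary statistic:
# the sample mean from a normal model with unknown mean theta
observed = 1.3
prior_sample = lambda: rng.uniform(-5, 5)
simulate = lambda theta: rng.normal(theta, 1.0, size=30).mean()
post = abc_rejection(observed, simulate, prior_sample)
print(post.mean(), post.std())   # approximate posterior mean and spread
```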
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carrington, David Bradley; Waters, Jiajia
KIVA-hpFE is high-performance computer software for solving the physics of multi-species and multiphase turbulent reactive flow in complex geometries having immersed moving parts. The code is written in Fortran 90/95 and can be used on any computer platform with any popular compiler. The code comes in two versions, a serial version and a parallel version utilizing MPICH2-type Message Passing Interface (MPI or Intel MPI) for solving distributed domains. The parallel version is at least 30x faster than the serial version and faster than our previous generation of parallel engine-modeling software by many factors. The 5th-generation algorithm construction is a Galerkin-type Finite Element Method (FEM) solving conservative momentum, species, and energy transport equations along with a two-equation turbulence model, the k-ω Reynolds-Averaged Navier-Stokes (RANS) model, and a Vreman-type dynamic Large Eddy Simulation (LES) method. The LES method is capable of modeling transitional flow from laminar to fully turbulent; therefore, this LES method does not require special hybrid or blending treatment near walls. The FEM projection method also uses a Petrov-Galerkin (P-G) stabilization along with pressure stabilization. We employ hierarchical basis sets, constructed on the fly, with enrichment in areas associated with relatively larger error as determined by error-estimation methods. In addition, when not using the hp-adaptive module, the code employs Lagrangian basis or shape functions. The shape functions are constructed for hexahedral, prismatic, and tetrahedral elements. The software is designed to solve many types of reactive flow problems, from burners to internal combustion engines and turbines. In addition, the formulation allows for direct integration of solid bodies (conjugate heat transfer), as in heat transfer through housings, parts, and cylinders. It can also easily be extended to stress modeling of solids, as used in fluid-structure interaction problems, solidification, porous media modeling, and magnetohydrodynamics.
NASA Astrophysics Data System (ADS)
Wu, Hsin-Hung; Tsai, Ya-Ning
2012-11-01
This study uses both the analytic hierarchy process (AHP) and the decision-making trial and evaluation laboratory (DEMATEL) method to evaluate criteria in the auto spare parts industry in Taiwan. Traditionally, AHP does not consider indirect effects for each criterion and assumes that criteria are independent, without further addressing the interdependence between or among them. Thus, the importance computed by AHP can be viewed as a short-term improvement opportunity. In contrast, the DEMATEL method not only evaluates the importance of criteria but also depicts their causal relations. By observing the causal diagrams, improvement based on cause-oriented criteria might improve performance effectively and efficiently from the long-term perspective. As a result, the major advantage of integrating the AHP and DEMATEL methods is that the decision maker can continuously improve suppliers' performance from both short-term and long-term viewpoints.
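A minimal sketch of the two computations being combined, under illustrative assumptions: the AHP priority vector is the principal eigenvector of a pairwise-comparison matrix (short-term importance), while DEMATEL forms a total-relation matrix from a direct-influence matrix to expose indirect effects (long-term causes). The three criteria and both matrices are hypothetical.

```python
import numpy as np

def ahp_weights(pairwise):
    """AHP priorities = principal right eigenvector of the pairwise-comparison
    matrix, normalized to sum to one (short-term importance)."""
    vals, vecs = np.linalg.eig(pairwise)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

def dematel(influence):
    """DEMATEL: normalize the direct-influence matrix, form the total-relation
    matrix T = D (I - D)^-1, and return prominence (r+c) and relation (r-c);
    a positive relation marks a cause criterion (a long-term lever)."""
    D = influence / influence.sum(axis=1).max()
    T = D @ np.linalg.inv(np.eye(len(D)) - D)
    r, c = T.sum(axis=1), T.sum(axis=0)
    return r + c, r - c

# hypothetical 3-criteria example (e.g. price, quality, delivery)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
X = np.array([[0.0, 2.0, 3.0],
              [1.0, 0.0, 2.0],
              [1.0, 1.0, 0.0]])
print("AHP weights:", ahp_weights(A))
print("DEMATEL prominence, relation:", dematel(X))
```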
[Orthogonal Vector Projection Algorithm for Spectral Unmixing].
Song, Mei-ping; Xu, Xing-wei; Chang, Chein-I; An, Ju-bai; Yao, Li
2015-12-01
Spectral unmixing is an important part of hyperspectral technology and is essential for material quantity analysis in hyperspectral imagery. Most linear unmixing algorithms require matrix multiplication and matrix inversion or matrix determinant computations. These are difficult to program and especially hard to realize in hardware. At the same time, the computational cost of these algorithms increases significantly as the number of endmembers grows. Here, based on the traditional Orthogonal Subspace Projection algorithm, a new method called Orthogonal Vector Projection is proposed using the orthogonality principle. It simplifies the process by avoiding matrix multiplication and inversion. It first computes the final orthogonal vector via the Gram-Schmidt process for each endmember spectrum. These orthogonal vectors are then used as projection vectors for the pixel signature. The unconstrained abundance can be obtained directly by projecting the signature onto the projection vectors and computing the ratio of the projected vector length to the orthogonal vector length. Compared to the Orthogonal Subspace Projection and Least Squares Error algorithms, this method does not need matrix inversion, which is computationally costly and hard to implement in hardware. It completes the orthogonalization process by repeated vector operations, making it easy to apply in both parallel computation and hardware. The reasonability of the algorithm is proved by its relationship with the Orthogonal Subspace Projection and Least Squares Error algorithms. Its computational complexity is also compared with that of the other two algorithms and is found to be the lowest. Finally, experimental results on synthetic and real images are provided, giving further evidence for the effectiveness of the method.
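A sketch of the procedure as the abstract describes it: for each endmember, Gram-Schmidt its spectrum against all the others to obtain an orthogonal vector q, then estimate the unconstrained abundance as a ratio of projections, (q·x)/(q·d), with no matrix inversion. The synthetic 4-band mixture is an illustrative assumption.

```python
import numpy as np

def ovp_abundances(endmembers, pixel):
    """Orthogonal Vector Projection unmixing via repeated vector operations."""
    M = endmembers                      # shape (bands, p)
    p = M.shape[1]
    abund = np.zeros(p)
    for k in range(p):
        others = np.delete(M, k, axis=1)
        # Gram-Schmidt: orthonormalize the other endmembers ...
        Q = []
        for j in range(others.shape[1]):
            v = others[:, j].copy()
            for q in Q:
                v -= (q @ v) * q
            Q.append(v / np.linalg.norm(v))
        # ... then project them out of endmember k to get its orthogonal vector
        d = M[:, k].copy()
        for q in Q:
            d -= (q @ d) * q
        abund[k] = (d @ pixel) / (d @ M[:, k])   # ratio of projections
    return abund

# synthetic check: one 4-band pixel mixed from 3 endmembers
rng = np.random.default_rng(1)
M = rng.random((4, 3))
true = np.array([0.5, 0.3, 0.2])
print(ovp_abundances(M, M @ true))   # recovers ~ [0.5, 0.3, 0.2]
```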
An investigation into NVC characteristics of vehicle behaviour using modal analysis
NASA Astrophysics Data System (ADS)
Hanouf, Zahir; Faris, Waleed F.; Ahmad, Kartini
2017-03-01
NVC characterization of vehicle behavior is an essential part of the development targets in the automotive industry. Therefore, understanding the dynamic behavior of each structural part of the vehicle is a major requirement for improving the NVC characteristics of a vehicle. The main focus of this research is to investigate the structural dynamic behavior of a passenger car using a part-by-part modal analysis technique and to apply this method to derive the interior noise sources. In the first part of this work, computational part-by-part modal analysis tests were carried out to identify the dynamic parameters of the passenger car. Finite element models of the different parts of the car were constructed using VPG 3.2 software. LS-DYNA pre- and post-processing was used to identify and analyze the dynamic behavior of each car component's panels. These tests successfully produced natural frequencies and associated mode shapes of panels such as the trunk, hood, roof, and door panels. In the second part of this research, experimental part-by-part modal analysis was performed on the selected car panels to extract the modal parameters, namely frequencies and mode shapes. The study establishes step-by-step procedures for carrying out experimental modal analysis on the car structures, using the single-input excitation and multi-output response (SIMO) technique. To validate the results obtained by this method, an inverse test was performed by fixing the response and moving the excitation; the results were identical. Finally, comparison between the results obtained from both analyses showed good similarity in both frequencies and mode shapes. The conclusion drawn from this part of the study is that part-by-part modal analysis can reliably be used to establish the dynamic characteristics of the whole car. Furthermore, the developed method can also be used to show the relationship between the structural vibration of the car panels and the passengers' noise comfort inside the cabin.
NASA Astrophysics Data System (ADS)
Ramos, José A.; Mercère, Guillaume
2016-12-01
In this paper, we present an algorithm for identifying two-dimensional (2D) causal, recursive and separable-in-denominator (CRSD) state-space models in the Roesser form with deterministic-stochastic inputs. The algorithm implements the N4SID, PO-MOESP and CCA methods, well known in the literature on 1D system identification, for the 2D CRSD Roesser model. The algorithm solves the 2D system identification problem by maintaining the constraint structure imposed by the problem (i.e. Toeplitz and Hankel) and computes the horizontal and vertical system orders, system parameter matrices and covariance matrices of a 2D CRSD Roesser model. From a computational point of view, the algorithm is presented in a unified framework, where the user can select which of the three methods to use. Furthermore, the identification task is divided into three main parts: (1) computing the deterministic horizontal model parameters, (2) computing the deterministic vertical model parameters and (3) computing the stochastic components. Specific attention has been paid to the computation of a stabilised Kalman gain matrix and a positive real solution when required. The efficiency and robustness of the unified algorithm are demonstrated via a thorough simulation example.
FastMag: Fast micromagnetic simulator for complex magnetic structures (invited)
NASA Astrophysics Data System (ADS)
Chang, R.; Li, S.; Lubarda, M. V.; Livshitz, B.; Lomakin, V.
2011-04-01
A fast micromagnetic simulator (FastMag) for general problems is presented. FastMag solves the Landau-Lifshitz-Gilbert equation and can handle multiscale problems with high computational efficiency. The simulator derives its high performance from efficient methods for evaluating the effective field and from implementations on massively parallel graphics processing unit (GPU) architectures. FastMag discretizes the computational domain into tetrahedral elements and therefore is highly flexible for general problems. The magnetostatic field is computed via the superposition principle for both volume and surface parts of the computational domain. This is accomplished by implementing efficient quadrature rules and analytical integration for overlapping elements in which the integral kernel is singular. Thus, discretized superposition integrals are computed using a nonuniform grid interpolation method, which evaluates the field from N sources at N collocated observers in O(N) operations. This approach allows handling objects of arbitrary shape, allows easy calculation of the field outside the magnetized domains, does not require solving a linear system of equations, and requires little memory. FastMag is implemented on GPUs, with GPU-CPU speed-ups of two orders of magnitude. Simulations are shown of a large array of magnetic dots and a recording head fully discretized down to the exchange length, with over a hundred million tetrahedral elements, on an inexpensive desktop computer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rouet, François-Henry; Li, Xiaoye S.; Ghysels, Pieter
In this paper, we present a distributed-memory library for computations with dense structured matrices. A matrix is considered structured if its off-diagonal blocks can be approximated by a rank-deficient matrix with low numerical rank. Here, we use Hierarchically Semi-Separable (HSS) representations. Such matrices appear in many applications, for example, finite-element methods, boundary element methods, and so on. Exploiting this structure allows for fast solution of linear systems and/or fast computation of matrix-vector products, which are the two main building blocks of matrix computations. The compression algorithm that we use, which computes the HSS form of an input dense matrix, relies on randomized sampling with a novel adaptive sampling mechanism. We discuss the parallelization of this algorithm and also present the parallelization of structured matrix-vector product, structured factorization, and solution routines. The efficiency of the approach is demonstrated on large problems from different academic and industrial applications, on up to 8,000 cores. Finally, this work is part of a more global effort, the STRUctured Matrices PACKage (STRUMPACK) software package for computations with sparse and dense structured matrices. Hence, although useful in their own right, the routines also represent a step in the direction of a distributed-memory sparse solver.
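The building block of such randomized HSS compression is range sampling of a numerically low-rank off-diagonal block. The sketch below shows that single building block only (not the STRUMPACK API or its adaptive mechanism); the kernel block and rank are illustrative assumptions.

```python
import numpy as np

def randomized_low_rank(A, rank, oversample=10):
    """Randomized sampling step used in HSS-type compression: sample the range
    of a block with a Gaussian test matrix, orthonormalize, and project.
    Returns Q, B with A ~ Q @ B."""
    rng = np.random.default_rng(0)
    Y = A @ rng.standard_normal((A.shape[1], rank + oversample))  # range sampling
    Q, _ = np.linalg.qr(Y)                                        # orthonormal basis
    return Q, Q.T @ A                                             # A ~ Q (Q^T A)

# off-diagonal blocks of smooth kernel matrices are numerically low-rank
x = np.linspace(0.0, 1.0, 400)
A = 1.0 / (1.0 + np.abs(np.subtract.outer(x[:200], x[200:] + 1.0)))
Q, B = randomized_low_rank(A, rank=15)
print(np.linalg.norm(A - Q @ B) / np.linalg.norm(A))  # small relative error
```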
Walsh, Timothy F.; Jones, Andrea; Bhardwaj, Manoj; ...
2013-04-01
Finite element analysis of transient acoustic phenomena on unbounded exterior domains is very common in engineering analysis. In these problems there is a common need to compute the acoustic pressure at points outside of the acoustic mesh, since meshing to points of interest is impractical in many scenarios. In aeroacoustic calculations, for example, the acoustic pressure may be required at tens or hundreds of meters from the structure. In these cases, a method is needed for post-processing the acoustic results to compute the response at far-field points. In this paper, we compare two methods for computing far-field acoustic pressures, one derived directly from the infinite element solution, and the other from the transient version of the Kirchhoff integral. Here, we show that the infinite element approach alleviates the large storage requirements that are typical of Kirchhoff integral and related procedures, and also does not suffer from loss of accuracy that is an inherent part of computing numerical derivatives in the Kirchhoff integral. In order to further speed up and streamline the process of computing the acoustic response at points outside of the mesh, we also address the nonlinear iterative procedure needed for locating parametric coordinates within the host infinite element of far-field points, the parallelization of the overall process, linear solver requirements, and system stability considerations.
Wang, Shijun; McKenna, Matthew T; Nguyen, Tan B; Burns, Joseph E; Petrick, Nicholas; Sahiner, Berkman; Summers, Ronald M
2012-05-01
In this paper, we present development and testing results for a novel colonic polyp classification method for use as part of a computed tomographic colonography (CTC) computer-aided detection (CAD) system. Inspired by the interpretative methodology of radiologists using 3-D fly-through mode in CTC reading, we have developed an algorithm which utilizes sequences of images (referred to here as videos) for classification of CAD marks. For each CAD mark, we created a video composed of a series of intraluminal, volume-rendered images visualizing the detection from multiple viewpoints. We then framed the video classification question as a multiple-instance learning (MIL) problem. Since a positive (negative) bag may contain negative (positive) instances, which in our case depends on the viewing angles and camera distance to the target, we developed a novel MIL paradigm to accommodate this class of problems. We solved the new MIL problem by maximizing a L2-norm soft margin using semidefinite programming, which can optimize relevant parameters automatically. We tested our method by analyzing a CTC data set obtained from 50 patients from three medical centers. Our proposed method showed significantly better performance compared with several traditional MIL methods.
Computation of breast ptosis from 3D surface scans of the female torso
Li, Danni; Cheong, Audrey; Reece, Gregory P.; Crosby, Melissa A.; Fingeret, Michelle C.; Merchant, Fatima A.
2016-01-01
Stereophotography is now finding a niche in clinical breast surgery, and several methods for quantitatively measuring breast morphology from 3D surface images have been developed. Breast ptosis (sagging of the breast), which refers to the extent by which the nipple is lower than the inframammary fold (the contour along which the inferior part of the breast attaches to the chest wall), is an important morphological parameter that is frequently used for assessing the outcome of breast surgery. This study presents a novel algorithm that utilizes three-dimensional (3D) features such as surface curvature and orientation for the assessment of breast ptosis from 3D scans of the female torso. The performance of the proposed computational approach was compared against the consensus of manual ptosis ratings by nine plastic surgeons and against current 2D photogrammetric methods. Compared to the 2D methods, the average accuracy for 3D features was ~13% higher, with increases in precision, recall, and F-score of 37%, 29%, and 33%, respectively. The proposed computational approach provides an improved, unbiased, and objective method for rating ptosis compared to qualitative visualization by observers and distance-based 2D photogrammetry approaches. PMID:27643463
NASA Astrophysics Data System (ADS)
Benković, T.; Kenđel, A.; Parlov-Vuković, J.; Kontrec, D.; Chiş, V.; Miljanić, S.; Galić, N.
2018-02-01
Structural analyses of aroylhydrazones were performed by computational and spectroscopic methods (solid-state NMR, 1D and 2D NMR spectroscopy, FT-IR (ATR) spectroscopy, Raman spectroscopy, UV-Vis spectrometry, and spectrofluorimetry) in the solid state and in solution. The studied compounds were N′-(2,3-dihydroxyphenylmethylidene)-3-pyridinecarbohydrazide (1), N′-(2,5-dihydroxyphenylmethylidene)-3-pyridinecarbohydrazide (2), N′-(3-chloro-2-hydroxyphenylmethylidene)-3-pyridinecarbohydrazide (3), and N′-(2-hydroxy-4-methoxyphenylmethylidene)-3-pyridinecarbohydrazide (4). Both in the solid state and in solution, all compounds were in the ketoamine form (form I, –CO–NH–N=C–), stabilized by an intramolecular H-bond between the hydroxyl proton and the nitrogen atom of the C=N group. In the solid state, the C=O groups of 1-4 were involved in an additional intermolecular H-bond between closely packed molecules. Among the hydrazones studied, the chloro- and methoxy-derivatives showed pH-dependent and reversible fluorescence emission connected to deprotonation/protonation of the salicylidene part of the molecules. All findings acquired by the experimental methods (NMR, IR, Raman, and UV-Vis spectra) were in excellent agreement with those obtained by computational methods.
Cvitaš, Marko T; Althorpe, Stuart C
2011-01-14
We extend to full dimensionality a recently developed wave packet method [M. T. Cvitaš and S. C. Althorpe, J. Phys. Chem. A 113, 4557 (2009)] for computing the state-to-state quantum dynamics of AB + CD → ABC + D reactions and also increase the computational efficiency of the method. This is done by introducing a new set of product coordinates, by applying the Crank-Nicolson approximation to the angular kinetic energy part of the split-operator propagator and by using a symmetry-adapted basis-to-grid transformation to evaluate integrals over the potential energy surface. The newly extended method is tested on the benchmark OH + H(2) → H(2)O + H reaction, where it allows us to obtain accurately converged state-to-state reaction probabilities (on the Wu-Schatz-Fang-Lendvay-Harding potential energy surface) with modest computational effort. These methodological advances will make possible efficient calculations of state-to-state differential cross sections on this system in the near future.
Sword, Charles K.
2000-01-01
The present invention relates to an ultrasonic scanner system and method for the imaging of a part, the scanner comprising: a probe assembly spaced apart from the surface of the part, including at least two tracking signals for emitting radiation and a transmitter for emitting ultrasonic waves onto a surface in order to induce at least a portion of the waves to be reflected from the part; at least one detector for receiving the radiation, wherein the detector is positioned to receive the radiation from the tracking signals; an analyzer for recognizing a three-dimensional location of the tracking signals based on the emitted radiation; a differential converter for generating an output signal representative of the waveform of the reflected waves; and a device such as a computer for relating said tracking signal location with the output signal and projecting an image of the resulting data. The scanner and method are particularly useful for acquiring ultrasonic inspection data by scanning the probe over a complex part surface in an arbitrary scanning pattern.
Segmentation algorithm of colon based on multi-slice CT colonography
NASA Astrophysics Data System (ADS)
Hu, Yizhong; Ahamed, Mohammed Shabbir; Takahashi, Eiji; Suzuki, Hidenobu; Kawata, Yoshiki; Niki, Noboru; Suzuki, Masahiro; Iinuma, Gen; Moriyama, Noriyuki
2012-02-01
CT colonography is a radiology test that examines the large intestine (colon). It offers a screening option for colon cancer and is used to detect polyps or cancers of the colon. CT colonography is safe and reliable, and it can be used when patients are too sick to undergo other forms of colon cancer screening. In our research, we propose a method for automatic segmentation of the colon from abdominal computed tomography (CT) images. Our multistage detection method extracts the colon and splits it into different parts according to colon anatomy information. We found that among the five segmented parts of the colon, the sigmoid (20%) and rectum (50%) are more prone to polyps and masses than the other three parts. Our research therefore focuses on detecting colorectal lesions through individual diagnosis of the sigmoid and rectum. We believe this would make rapid, early-stage diagnosis of the colon easier, help doctors analyze the correct position of each part, and make colorectal cancer easier to detect.
Methodology and Estimates of Scour at Selected Bridge Sites in Alaska
Heinrichs, Thomas A.; Kennedy, Ben W.; Langley, Dustin E.; Burrows, Robert L.
2001-01-01
The U.S. Geological Survey estimated scour depths at 325 bridges in Alaska as part of a cooperative agreement with the Alaska Department of Transportation and Public Facilities. The department selected these sites from approximately 806 State-owned bridges as potentially susceptible to scour during extreme floods. Pier scour and contraction scour were computed for the selected bridges by using methods recommended by the Federal Highway Administration. The U.S. Geological Survey used a four-step procedure to estimate scour: (1) Compute magnitudes of the 100- and 500-year floods. (2) Determine cross-section geometry and hydraulic properties for each bridge site. (3) Compute the water-surface profile for the 100- and 500-year floods. (4) Compute contraction and pier scour. This procedure is unique because the cross sections were developed from existing data on file to make a quantitative estimate of scour. This screening method has the advantage of providing scour depths and bed elevations for comparison with bridge-foundation elevations without the time and expense of a field survey. Four examples of bridge-scour analyses are summarized in the appendix.
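The FHWA pier-scour methods referenced above are documented in HEC-18; the sketch below shows the form of its widely used CSU pier-scour relation, with the correction factors lumped into a single assumed value K and the input numbers purely illustrative. It is a sketch of the relation's structure, not a reproduction of the study's computations.

```python
import math

def pier_scour_hec18(y1, v1, a, K=1.1, g=9.81):
    """Sketch of the CSU pier-scour relation from FHWA's HEC-18:
        ys / y1 = 2.0 * K * (a / y1)**0.65 * Fr**0.43
    y1: approach flow depth [m], v1: approach velocity [m/s], a: pier width [m],
    K: product of the HEC-18 correction factors, lumped here as an assumption."""
    Fr = v1 / math.sqrt(g * y1)                 # approach Froude number
    return 2.0 * y1 * K * (a / y1) ** 0.65 * Fr ** 0.43

# illustrative numbers only
print(round(pier_scour_hec18(y1=3.0, v1=2.0, a=1.2), 2), "m of pier scour")
```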
An efficient and robust method for predicting helicopter rotor high-speed impulsive noise
NASA Technical Reports Server (NTRS)
Brentner, Kenneth S.
1996-01-01
A new formulation for the Ffowcs Williams-Hawkings quadrupole source, which is valid for a far-field in-plane observer, is presented. The far-field approximation is new and unique in that no further approximation of the quadrupole source strength is made and integrands with r^(-2) and r^(-3) dependence are retained. This paper focuses on the development of a retarded-time formulation in which time derivatives are analytically taken inside the integrals to avoid unnecessary computational work when the observer moves with the rotor. The new quadrupole formulation is similar to Farassat's thickness and loading formulation 1A. Quadrupole noise prediction is carried out in two parts: a preprocessing stage in which the previously computed flow field is integrated in the direction normal to the rotor disk, and a noise computation stage in which quadrupole surface integrals are evaluated for a particular observer position. Preliminary predictions for hover and forward flight agree well with experimental data. The method is robust and requires computer resources comparable to thickness and loading noise prediction.
Robust Duplication with Comparison Methods in Microcontrollers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quinn, Heather Marie; Baker, Zachary Kent; Fairbanks, Thomas D.
Commercial microprocessors could be useful computational platforms in space systems, as long as the risk is bounded. Many spacecraft are computationally constrained because all of the computation is done on a single radiation-hardened microprocessor. It is possible that a commercial microprocessor could be used for configuration, monitoring, and background tasks that are not mission critical. Most commercial microprocessors are affected by radiation, including single-event effects (SEEs) that could be destructive to the component or corrupt the data. Part screening can help designers avoid components with destructive failure modes, and mitigation can suppress data corruption. We have been experimenting with a method for masking radiation-induced faults through the software executing on the microprocessor. While triple-modular redundancy (TMR) techniques are very effective at masking faults in software, the increased execution time to complete the computation is not desirable. In this article we present a technique for combining duplication with compare (DWC) with TMR that decreases observable errors by as much as 145 times with only a 2.35-fold decrease in performance.
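A minimal sketch of the combination idea, not the authors' implementation: run the cheap duplication-with-compare check first, and fall back to a triple-redundant majority vote only when the two copies disagree. The stand-in computation is hypothetical.

```python
def dwc(compute, x):
    """Duplication with compare: run twice, flag a fault on mismatch."""
    a, b = compute(x), compute(x)
    return a, a == b            # (result, ok-flag)

def dwc_then_tmr(compute, x):
    """Cheap path: accept a DWC result that self-checks clean; fall back to
    triple modular redundancy (majority vote) only on disagreement."""
    result, ok = dwc(compute, x)
    if ok:
        return result           # common case: ~2x cost instead of 3x plus a vote
    votes = [compute(x) for _ in range(3)]
    return max(set(votes), key=votes.count)   # majority vote masks the upset

# usage with a stand-in computation (a radiation upset would make copies differ)
print(dwc_then_tmr(lambda v: v * v, 7))
```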
Lai, Chintu
1977-01-01
Two-dimensional unsteady flows of homogeneous density in estuaries and embayments can be described by hyperbolic, quasi-linear partial differential equations involving three dependent and three independent variables. A linear combination of these equations leads to a parametric equation of characteristic form, which consists of two parts: total differentiation along the bicharacteristics and partial differentiation in space. For its numerical solution, the specified-time-interval scheme has been used. The unknown partial space-derivative terms can be eliminated first by suitable combinations of difference equations, converted from the corresponding differential forms and written along four selected bicharacteristics and a streamline. Other unknowns are thus made solvable from the known variables on the current time plane. The computation is carried to second-order accuracy by using the trapezoidal rule of integration. Means to handle complex boundary conditions are developed for practical application. Computer programs have been written and a mathematical model has been constructed for flow simulation. The favorable computer outputs suggest that further exploration and development of the model are worthwhile. (Woodard-USGS)
High resolution wind measurements for offshore wind energy development
NASA Technical Reports Server (NTRS)
Nghiem, Son Van (Inventor); Neumann, Gregory (Inventor)
2013-01-01
A method, apparatus, system, article of manufacture, and computer readable storage medium provide the ability to measure wind. Data at a first resolution (i.e., low-resolution data) is collected by a satellite scatterometer. Thin slices of the data are determined. A collocation of the data slices is determined at each grid cell center to obtain ensembles of collocated data slices. Each ensemble of collocated data slices is decomposed into a mean part and a fluctuating part. The data is reconstructed at a second resolution from the mean part and a residue of the fluctuating part. A wind measurement is determined from the data at the second resolution using a wind model function. A description of the wind measurement is output.
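A minimal sketch of the decompose-and-reconstruct step for one grid cell, under stated assumptions: the ensemble values and the residue weighting are hypothetical stand-ins for the patent's actual slice processing.

```python
import numpy as np

def reconstruct(ensemble, residue_weight=0.5):
    """Decompose an ensemble of collocated slice measurements at one grid cell
    into a mean part and a fluctuating part, then rebuild a finer-scale value
    as mean + a residue (fraction) of the fluctuation. The residue weighting
    is a hypothetical stand-in for the method's actual filtering."""
    mean = ensemble.mean()
    fluctuation = ensemble - mean
    return mean + residue_weight * fluctuation

# toy ensemble of backscatter-derived wind speeds collocated at one cell
slices = np.array([7.9, 8.3, 8.1, 7.7, 8.6])
print(reconstruct(slices))
```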
ERIC Educational Resources Information Center
Paisley, William; Butler, Matilda
This study of the computer/user interface investigated the role of the computer in performing information tasks that users now perform without computer assistance. Users' perceptual/cognitive processes are to be accelerated or augmented by the computer; a long term goal is to delegate information tasks entirely to the computer. Cybernetic and…
The Computer and Its Functions; How to Communicate with the Computer.
ERIC Educational Resources Information Center
Ward, Peggy M.
A brief discussion of why it is important for students to be familiar with computers and their functions and a list of some practical applications introduce this two-part paper. Focusing on how the computer works, the first part explains the various components of the computer, different kinds of memory storage devices, disk operating systems, and…
Click! 101 Computer Activities and Art Projects for Kids and Grown-Ups.
ERIC Educational Resources Information Center
Bundesen, Lynne; And Others
This book presents 101 computer activities and projects geared toward children and adults. The activities for both personal computers (PCs) and Macintosh were developed on the Windows 95 computer operating system, but they are adaptable to non-Windows personal computers as well. The book is divided into two parts. The first part provides an…
Layout optimization using the homogenization method
NASA Technical Reports Server (NTRS)
Suzuki, Katsuyuki; Kikuchi, Noboru
1993-01-01
A generalized layout problem involving sizing, shape, and topology optimization is solved by using the homogenization method for three-dimensional linearly elastic shell structures, in order to explore the possibility of establishing an integrated design system for automotive car bodies, as an extension of the previous work by Bendsoe and Kikuchi. A formulation of a three-dimensional homogenized shell, a solution algorithm, and several examples of computing the optimum layout are presented in this first part of the two articles.
Aircraft optimization by a system approach: Achievements and trends
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1992-01-01
Recently emerging methodology for the optimal design of aircraft, treated as a system of interacting physical phenomena and parts, is examined. The methodology is found to coalesce into methods for hierarchic, non-hierarchic, and hybrid systems, all dependent on sensitivity analysis. A separate category of methods has also evolved independently of sensitivity analysis, hence suitable for discrete problems. References and numerical applications are cited. Massively parallel computer processing is seen as an enabling technology for practical implementation of the methodology.
Engineering Design Handbook. Explosions in Air. Part One
1974-07-15
NASA Technical Reports Server (NTRS)
Ito, K.; Teglas, R.
1984-01-01
The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.
Today's Personal Computers: Products for Every Need--Part II.
ERIC Educational Resources Information Center
Personal Computing, 1981
1981-01-01
Looks at microcomputers manufactured by Altos Computer Systems, Cromemco, Exidy, Intelligent Systems, Intertec Data Systems, Mattel, Nippon Electronics, Northstar, Personal Micro Computers, and Sinclair. (Part I of this article, examining other computers, appeared in the May 1981 issue.) Journal availability: Hayden Publishing Company, 50 Essex…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Y.; Pariat, E.; Moraitis, K.
We study the writhe, twist, and magnetic helicity of different magnetic flux ropes, based on models of the solar coronal magnetic field structure. These include an analytical force-free Titov–Démoulin equilibrium solution, non-force-free magnetohydrodynamic simulations, and nonlinear force-free magnetic field models. The geometrical boundary of the magnetic flux rope is determined by the quasi-separatrix layer and the bottom surface, and the axis curve of the flux rope is determined by its overall orientation. The twist is computed by the Berger–Prior formula, which is suitable for arbitrary geometry and both force-free and non-force-free models. The magnetic helicity is estimated by the twist multiplied by the square of the axial magnetic flux. We compare the obtained values with those derived by a finite volume helicity estimation method. We find that the magnetic helicity obtained with the twist method agrees with the helicity carried by the purely current-carrying part of the field within uncertainties for most test cases. It is also found that the current-carrying part of the model field is relatively significant at the very location of the magnetic flux rope. This qualitatively explains the agreement between the magnetic helicity computed by the twist method and the helicity contributed purely by the current-carrying magnetic field.
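The helicity estimate stated above is a one-line formula; the sketch below just makes it explicit, with illustrative input values.

```python
def helicity_from_twist(twist, axial_flux):
    """Helicity estimate used in the study: H ~ Tw * Phi_axial**2
    (twist in turns; flux in Mx gives H in Mx^2)."""
    return twist * axial_flux ** 2

print(helicity_from_twist(twist=1.2, axial_flux=5e20))  # illustrative values
```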
NASA Astrophysics Data System (ADS)
Milde, Ján; Morovič, Ladislav
2016-09-01
The paper investigates the influence of infill (the internal structure of components) in the Fused Deposition Modeling (FDM) method on the dimensional and geometrical accuracy of components. The components in this case were real models of a human mandible obtained by Computed Tomography (CT), which is mostly used in medical applications. In the production phase, the manufacturing device was a Zortrax M200 3D printer based on FDM technology. In the second phase, the mandibles made by the printer were digitized using the GOM ATOS Triple Scan II optical scanning device. They were subsequently evaluated in the final phase. The practical part of this article describes the procedure of jaw-model modification, the production of components using a 3D printer, the procedure of digitizing the printed parts with the optical scanning device, and the comparison procedure. The outcome of this article is a comparative analysis of the individual printed parts, containing tables with mean deviations for individual printed parts, as well as tables for groups of printed parts with the same infill parameter.
NASA Astrophysics Data System (ADS)
Sourbier, Florent; Operto, Stéphane; Virieux, Jean; Amestoy, Patrick; L'Excellent, Jean-Yves
2009-03-01
This is the first paper in a two-part series that describes a massively parallel code that performs 2D frequency-domain full-waveform inversion of wide-aperture seismic data for imaging complex structures. Full-waveform inversion methods, namely quantitative seismic imaging methods based on the solution of the full wave equation, are computationally expensive. Therefore, designing efficient algorithms which take advantage of parallel computing facilities is critical for the appraisal of these approaches when applied to representative case studies and for further improvements. Full-waveform modelling requires the solution of a large sparse system of linear equations, which is performed with the massively parallel direct solver MUMPS for efficient multiple-shot simulations. Efficiency of the multiple-shot solution phase (forward/backward substitutions) is improved by using the BLAS3 library. The inverse problem relies on a classic local optimization approach implemented with a gradient method. The direct solver returns the multiple-shot wavefield solutions distributed over the processors according to a domain decomposition driven by the distribution of the LU factors. The domain decomposition of the wavefield solutions is used to compute in parallel the gradient of the objective function and the diagonal Hessian, the latter providing a suitable scaling of the gradient. The algorithm allows one to test different strategies for multiscale frequency inversion, ranging from successive mono-frequency inversion to simultaneous multifrequency inversion. These different inversion strategies will be illustrated in the following companion paper. The parallel efficiency and the scalability of the code will also be quantified.
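A minimal sketch of the model update described above: a gradient step preconditioned by the diagonal Hessian, with damping. The arrays, step length, and damping value are hypothetical; the real code distributes this computation over the domain decomposition.

```python
import numpy as np

def scaled_gradient_step(model, gradient, diag_hessian, step, damping=1e-3):
    """One local-optimization update: the misfit gradient is scaled by the
    (damped) diagonal Hessian before the model update,
        m_{k+1} = m_k - step * g / (diag(H) + damping)."""
    return model - step * gradient / (diag_hessian + damping)

# hypothetical slowness-model update for one frequency
rng = np.random.default_rng(2)
m = np.full(1000, 0.5)                        # toy slowness model [s/km]
g = rng.normal(0, 1e-3, 1000)                 # misfit gradient (stand-in)
h = np.abs(g) * 10 + 1e-2                     # stand-in for the diagonal Hessian
m_new = scaled_gradient_step(m, g, h, step=0.1)
print(np.linalg.norm(m_new - m))              # size of the update
```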
Albert, Jaroslav
2016-01-01
Modeling the stochastic behavior of chemical reaction networks is an important endeavor in many aspects of chemistry and systems biology. The chemical master equation (CME) and the Gillespie algorithm (GA) are the two most fundamental approaches to such modeling; however, each has its own limitations: the GA may require long computing times, while the CME may demand unrealistic memory storage capacity. We propose a method that combines the CME and the GA, allowing one to stochastically simulate part of a reaction network. First, a reaction network is divided into two parts. The first part is simulated via the GA, while the solution of the CME for the second part is fed into the GA in order to update its propensities. The advantage of this method is that it avoids the need to solve the CME or stochastically simulate the entire network, which makes it highly efficient. One of its drawbacks, however, is that most of the information about the second part of the network is lost in the process. Therefore, this method is most useful when only partial information about a reaction network is needed. We tested this method against the GA on two systems of interest in biology (the gene switch and the Griffith model of a genetic oscillator) and showed it to be highly accurate. Comparing this method to four different stochastic algorithms revealed it to be at least an order of magnitude faster than the fastest among them.
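The coupling described here is easy to picture in code: an ordinary Gillespie loop in which one propensity is refreshed from the externally computed CME solution of the unsimulated subnetwork. The sketch below is a toy birth-death example, not the paper's implementation; the regulator statistics and rate constants are invented, and only the CME mean is fed back.

import numpy as np
rng = np.random.default_rng(0)

def cme_mean_regulator(t):
    # Stand-in for the CME solution of the second (unsimulated) part:
    # mean copy number of a regulator species at time t.
    return 5.0 + 3.0*np.sin(0.1*t)

x, t, t_end = 10, 0.0, 100.0              # GA-simulated species copy number
k_prod, k_deg = 1.0, 0.05
while t < t_end:
    a1 = k_prod * cme_mean_regulator(t)   # propensity updated from the CME part
    a2 = k_deg * x
    a0 = a1 + a2
    t += rng.exponential(1.0/a0)          # waiting time to the next reaction
    if rng.random() < a1/a0:              # pick which reaction fires
        x += 1
    else:
        x -= 1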
Structure and properties of parts produced by electron-beam additive manufacturing
NASA Astrophysics Data System (ADS)
Klimenov, Vasilii; Klopotov, Anatolii; Fedorov, Vasilii; Abzaev, Yurii; Batranin, Andrey; Kurgan, Kirill; Kairalapov, Daniyar
2017-12-01
The paper deals with the study of the structure, microstructure, composition and microhardness of a tube processed by electron-beam additive manufacturing, using optical and scanning electron microscopy. The structure and macrodefects of the tube, made of Grade 2 titanium alloy, are studied using X-ray computed tomography. The principles of layer-by-layer assembly and the boundaries after powder sintering are set out in this paper. It is found that the titanium alloy has two phases. Future work will involve methods to improve the properties of created parts.
Digital image processing: a primer for JVIR authors and readers: Part 3: Digital image editing.
LaBerge, Jeanne M; Andriole, Katherine P
2003-12-01
This is the final installment of a three-part series on digital image processing intended to prepare authors for online submission of manuscripts. In the first two articles of the series, the fundamentals of digital image architecture were reviewed and methods of importing images to the computer desktop were described. In this article, techniques are presented for editing images in preparation for online submission. A step-by-step guide to basic editing with use of Adobe Photoshop is provided and the ethical implications of this activity are explored.
Space and time resolved representation of a vacuum arc light emission
NASA Astrophysics Data System (ADS)
Georgescu, N.; Sandolache, G.; Zoita, V.
1999-04-01
An optoelectronic multichannel detection system for the study of the visible light emission of a vacuum circuit breaker arc is described. The system consists of two multiple-slit collimator assemblies coupled directly to the arc discharge chamber and an electronic detection part. The light emitted by the arc is collected by the two collimator assemblies and is transmitted through optical fibres to the electronic detection part. By using a new, simple computational method, two-dimensional plots of the vacuum arc light emission at different times are obtained.
Manufacturing technology methodology for propulsion system parts
NASA Astrophysics Data System (ADS)
McRae, M. M.
1992-07-01
A development history and a current status evaluation are presented for lost-wax casting of such gas turbine engine components as turbine vanes and blades. The most advanced such systems employ computer-integrated manufacturing methods for high process repeatability, reprogramming versatility, and feedback monitoring. Stereolithography-based plastic model 3D prototyping has also been incorporated for the wax part of the investment casting; it may ultimately be possible to produce the 3D prototype in wax directly, or even to create a ceramic mold directly. Nonintrusive inspections are conducted by X-radiography and neutron radiography.
Retrieving Storm Electric Fields From Aircraft Field Mill Data. Part 2: Applications
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Mach, D. M.; Christian, H. J.; Stewart, M. F.; Bateman, M. G.
2005-01-01
The Lagrange multiplier theory and "pitch down method" developed in Part I of this study are applied to complete the calibration of a Citation aircraft that is instrumented with six field mill sensors. When side constraints related to average fields are used, the method performs well in computer simulations. For mill measurement errors of 1 V/m and a 5 V/m error in the mean fair weather field function, the 3-D storm electric field is retrieved to within an error of about 12%. A side constraint that involves estimating the detailed structure of the fair weather field was also tested using computer simulations. For mill measurement errors of 1 V/m, the method retrieves the 3-D storm field to within an error of about 8% if the fair weather field estimate is typically within 1 V/m of the true fair weather field. Using this side constraint and data from fair weather field maneuvers taken on 29 June 2001, the Citation aircraft was calibrated. The resulting calibration matrix was then used to retrieve storm electric fields during a Citation flight on 2 June 2001. The storm field results are encouraging and agree favorably with the results obtained from earlier calibration analyses that were based on iterative techniques.
NASA Technical Reports Server (NTRS)
Stanitz, J. D.
1985-01-01
The general design method for three-dimensional, potential, incompressible or subsonic-compressible flow developed in part 1 of this report is applied to the design of simple, unbranched ducts. A computer program, DIN3D1, is developed and five numerical examples are presented: a nozzle, two elbows, an S-duct, and the preliminary design of a side inlet for turbomachines. The two major inputs to the program are the upstream boundary shape and the lateral velocity distribution on the duct wall. As a result of these inputs, boundary conditions are overprescribed and the problem is ill posed. However, it appears that there are degrees of compatibility between these two major inputs and that, for reasonably compatible inputs, satisfactory solutions can be obtained. By not prescribing the shape of the upstream boundary, the problem presumably becomes well posed, but it is not clear how to formulate a practical design method under this circumstance. Nor does it appear desirable, because the designer usually needs to retain control over the upstream (or downstream) boundary shape. The problem is further complicated by the fact that, unlike the two-dimensional case, and irrespective of the upstream boundary shape, some prescribed lateral velocity distributions do not have proper solutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rybicki, E.F.; Luiskutty, C.T.; Sutrick, J.S.
This research is part of a larger program sponsored by the United States Department of Energy with the objective of developing better methods to produce gas from low permeability formations in western gas sands. This large research program involves several universities and research centers. Each group is involved in a different area of study to answer specific questions. The hydraulic fracturing computer model has three components: a model for fracture geometry, a model for proppant transport, and a computer program that couples the two models. The fracture geometry model was developed at Oral Roberts University and the proppant transport model was developed at The University of Tulsa prior to the start of the present work. The present work is directed at enhancing the capabilities of these two models and coupling them to obtain a single model for evaluating the final fracture geometry and proppant distribution within the fracture. The report is organized into four parts. Part 1 describes the fracture geometry modeling effort accomplished at Oral Roberts University, NIPER and recently at The University of Tulsa. The proppant transport model, developed for constant height fractures at the University of Tulsa, is contained in Part 2. The coupling of the proppant transport model and the model for the variable height fracture geometry constitutes Part 3 of this report. Part 4 presents a summary of accomplishments and recommendations of this study. 112 refs., 147 figs., 70 tabs.
NASA Astrophysics Data System (ADS)
Kuwajima, Satoru; Kikuchi, Hiroaki; Fukuda, Mitsuhiro
2006-03-01
A novel free-energy perturbation method is developed for the computation of the free energy of transferring a molecule between fluid phases. The methodology consists of drawing a free-energy profile of the target molecule moving across a binary-phase structure built in the computer. The novelty of the method lies in the difference of the definition of the free-energy profile from the common definition. As an important element of the method, the process of making a correction to the transfer free energy with respect to the cutoff of intermolecular forces is elucidated. In order to examine the performance of the method in application to fluid-phase equilibrium properties, molecular-dynamics computations are carried out for the evaluation of the gas solubility and vapor pressure of liquid n-hexane at 298.15 K. The gas species treated are methane, ethane, propane, and n-butane, with the gas solubility expressed as Henry's constant. It is shown that the method works well and the calculated results are generally in good agreement with experiments. It is found that the cutoff correction is strikingly large, constituting a dominant part of the calculated transfer free energy at a cutoff of 8 Å.
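For orientation, two standard relations behind such calculations, given here only as a sketch (conventions vary, and the paper's own cutoff correction is defined through its free-energy profile rather than the plain Lennard-Jones tail shown here): Henry's constant follows from the transfer free energy as

k_H = \rho_{\mathrm{liq}}\, k_B T\, \exp\!\left(\frac{\Delta G_{\mathrm{transfer}}}{k_B T}\right),

while the familiar Lennard-Jones tail correction per pair of species,

u_{\mathrm{tail}} = \frac{8\pi\rho\epsilon\sigma^{3}}{3}\left[\frac{1}{3}\left(\frac{\sigma}{r_c}\right)^{9} - \left(\frac{\sigma}{r_c}\right)^{3}\right],

illustrates why a cutoff of r_c = 8 Å can leave a correction large enough to dominate the computed transfer free energy.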
NASA Astrophysics Data System (ADS)
Pandey, Preeti; Srivastava, Rakesh; Bandyopadhyay, Pradipta
2018-03-01
The relative performance of MM-PBSA and MM-3D-RISM methods to estimate the binding free energy of protein-ligand complexes is investigated by applying these to three proteins (Dihydrofolate Reductase, Catechol-O-methyltransferase, and Stromelysin-1) differing in the number of metal ions they contain. None of the computational methods could distinguish all the ligands based on their calculated binding free energies (as compared to experimental values). The difference between the two comes from both the polar and non-polar parts of solvation. For the charged-ligand case, MM-PBSA and MM-3D-RISM give qualitatively different results for the polar part of solvation.
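For readers unfamiliar with the decomposition being compared, the end-state binding free energy in both methods takes the generic form

\Delta G_{\mathrm{bind}} \approx \langle \Delta E_{\mathrm{MM}} \rangle + \langle \Delta G_{\mathrm{solv}}^{\mathrm{polar}} \rangle + \langle \Delta G_{\mathrm{solv}}^{\mathrm{nonpolar}} \rangle - T\Delta S,

where MM-PBSA obtains the polar term from a Poisson-Boltzmann solve and the nonpolar term from a surface-area model, while MM-3D-RISM obtains both solvation terms from the 3D reference interaction site model; this is where the differences reported above arise. (A textbook summary, not the paper's equations.)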
A computer graphics system for visualizing spacecraft in orbit
NASA Technical Reports Server (NTRS)
Eyles, Don E.
1989-01-01
To carry out unanticipated operations with resources already in space is part of the rationale for a permanently manned space station in Earth orbit. The astronauts aboard a space station will require an on-board, spatial display tool to assist the planning and rehearsal of upcoming operations. Such a tool can also help astronauts to monitor and control such operations as they occur, especially in cases where first-hand visibility is not possible. A computer graphics visualization system designed for such an application and currently implemented as part of a ground-based simulation is described. The visualization system presents to the user the spatial information available in the spacecraft's computers by drawing a dynamic picture containing the planet Earth, the Sun, a star field, and up to two spacecraft. The point of view within the picture can be controlled by the user to obtain a number of specific visualization functions. The elements of the display, the methods used to control the display's point of view, and some of the ways in which the system can be used are described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Birmingham, D.; Kantowski, R.; Milton, K.A.
We use two methods of computing the unique logarithmically divergent part of the Casimir energy for massive scalar and spinor fields defined on even-dimensional Kaluza-Klein spaces of the form M^4 x S^{N_1} x S^{N_2} x ... Both methods (heat kernel and direct) give identical results. The first evaluates the required internal zeta function by identifying it in the asymptotic expansion of the trace of the heat kernel, and the second evaluates the zeta function directly using the Euler-Maclaurin sum formula. In Appendix C we tabulate these energies for all spaces of total internal dimension less than or equal to 6. These methods are easily applied to vector and tensor fields needed in computing one-loop vacuum gravitational energies on these spaces. Stable solutions are given for internal structure S^2 x S^2.
Commercialization of NESSUS: Status
NASA Technical Reports Server (NTRS)
Thacker, Ben H.; Millwater, Harry R.
1991-01-01
A plan was initiated in 1988 to commercialize the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) probabilistic structural analysis software. The goal of the ongoing commercialization effort is to begin the transfer of technology developed under the Probabilistic Structural Analysis Method (PSAM) program into industry and to develop additional funding resources in the general area of structural reliability. The commercialization effort is summarized. The SwRI NESSUS software system is a general-purpose probabilistic finite element computer program using state-of-the-art methods for predicting stochastic structural response due to random loads, material properties, part geometry, and boundary conditions. NESSUS can be used to assess structural reliability, to compute probability of failure, to rank the input random variables by importance, and to provide a more cost-effective design than traditional methods. The goal is to develop a general probabilistic structural analysis methodology to assist in the certification of critical components in the next-generation Space Shuttle Main Engine.
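The probability-of-failure computation that NESSUS performs with advanced probabilistic finite element methods can be illustrated, in heavily simplified form, by plain Monte Carlo on a scalar limit state; the distributions below are invented, and the real code replaces both the sampling strategy and the structural model.

import numpy as np
rng = np.random.default_rng(1)
N = 100_000
load = rng.normal(100.0, 15.0, N)                   # random applied stress (MPa)
strength = rng.lognormal(np.log(160.0), 0.10, N)    # random material strength (MPa)
g = strength - load                                 # limit state: failure when g < 0
print(f"P_f ~ {np.mean(g < 0.0):.2e}")
# crude importance ranking: correlation of each input with the limit state
for name, v in (("load", load), ("strength", strength)):
    print(name, round(np.corrcoef(v, g)[0, 1], 3))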
Final report for “Extreme-scale Algorithms and Solver Resilience”
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gropp, William Douglas
2017-06-30
This is a joint project with principal investigators at Oak Ridge National Laboratory, Sandia National Laboratories, the University of California at Berkeley, and the University of Tennessee. Our part of the project involves developing performance models for highly scalable algorithms and the development of latency tolerant iterative methods. During this project, we extended our performance models for the Multigrid method for solving large systems of linear equations and conducted experiments with highly scalable variants of conjugate gradient methods that avoid blocking synchronization. In addition, we worked with the other members of the project on alternative techniques for resilience and reproducibility. We also presented an alternative approach for reproducible dot-products in parallel computations that performs almost as well as the conventional approach by separating the order of computation from the details of the decomposition of vectors across the processes.
Joint statistics of strongly correlated neurons via dimensionality reduction
NASA Astrophysics Data System (ADS)
Deniz, Taşkın; Rotter, Stefan
2017-06-01
The relative timing of action potentials in neurons recorded from local cortical networks often shows a non-trivial dependence, which is then quantified by cross-correlation functions. Theoretical models emphasize that such spike train correlations are an inevitable consequence of two neurons being part of the same network and sharing some synaptic input. For non-linear neuron models, however, explicit correlation functions are difficult to compute analytically, and perturbative methods work only for weak shared input. In order to treat strong correlations, we suggest here an alternative non-perturbative method. Specifically, we study the case of two leaky integrate-and-fire neurons with strong shared input. Correlation functions derived from simulated spike trains fit our theoretical predictions very accurately. Using our method, we computed the non-linear correlation transfer as well as correlation functions that are asymmetric due to inhomogeneous intrinsic parameters or unequal input.
From atomistic interfaces to dendritic patterns
NASA Astrophysics Data System (ADS)
Galenko, P. K.; Alexandrov, D. V.
2018-01-01
Transport processes around phase interfaces, together with thermodynamic properties and kinetic phenomena, control the formation of dendritic patterns. Using the thermodynamic and kinetic data of phase interfaces obtained on the atomic scale, one can analyse the formation of a single dendrite and the growth of a dendritic ensemble. This is the result of recent progress in theoretical methods and computational algorithms calculated using powerful computer clusters. Great benefits can be attained from the development of micro-, meso- and macro-levels of analysis when investigating the dynamics of interfaces, interpreting experimental data and designing the macrostructure of samples. The review and research articles in this theme issue cover the spectrum of scales (from nano- to macro-length scales) in order to exhibit recently developing trends in the theoretical analysis and computational modelling of dendrite pattern formation. Atomistic modelling, the flow effect on interface dynamics, the transition from diffusion-limited to thermally controlled growth existing at a considerable driving force, two-phase (mushy) layer formation, the growth of eutectic dendrites, the formation of a secondary dendritic network due to coalescence, computational methods, including boundary integral and phase-field methods, and experimental tests for theoretical models: all these themes are highlighted in the present issue. This article is part of the theme issue 'From atomistic interfaces to dendritic patterns'.
Adaptive bearing estimation and tracking of multiple targets in a realistic passive sonar scenario
NASA Astrophysics Data System (ADS)
Rajagopal, R.; Challa, Subhash; Faruqi, Farhan A.; Rao, P. R.
1997-06-01
In a realistic passive sonar environment, the received signal consists of multipath arrivals from closely separated moving targets. The signals are contaminated by spatially correlated noise. The differential MUSIC has been proposed to estimate the DOAs in such a scenario. This method estimates the 'noise subspace' in order to estimate the DOAs. However, the 'noise subspace' estimate has to be updated as and when new data become available. In order to save computational costs, a new adaptive noise subspace estimation algorithm is proposed in this paper. The salient features of the proposed algorithm are: (1) Noise subspace estimation is done by QR decomposition of the difference matrix, which is formed from the data covariance matrix. Thus, as compared to standard eigen-decomposition based methods, which require O(N^3) computations, the proposed method requires only O(N^2) computations. (2) The noise subspace is updated by updating the QR decomposition. (3) The proposed algorithm works in a realistic sonar environment. In the second part of the paper, the estimated bearing values are used to track multiple targets. In order to achieve this, the proposed nonlinear-system/linear-measurement extended Kalman filter is applied. Computer simulation results are also presented to support the theory.
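A rough sketch of the O(N^2) idea, with the caveat that the paper's exact difference-matrix construction is not reproduced: the version below uses a backward-difference form (R - J R* J, with J the exchange matrix), which is known to cancel certain symmetric correlated-noise structures, takes the trailing QR columns as an approximate noise subspace, and scans a MUSIC-type pseudo-spectrum.

import numpy as np

def noise_subspace(R, n_sig):
    # Difference matrix from the data covariance; this particular
    # construction is an assumption standing in for the paper's.
    N = R.shape[0]
    J = np.eye(N)[::-1]
    D = R - J @ R.conj() @ J
    Q, _ = np.linalg.qr(D)            # O(N^2)-updatable factorization
    return Q[:, 2*n_sig:]             # trailing columns ~ noise subspace

def music_spectrum(En, n_sensors, angles, d=0.5):
    # Pseudo-spectrum over candidate DOAs for a uniform linear array
    # with element spacing d (in wavelengths).
    spec = []
    for th in angles:
        a = np.exp(2j*np.pi*d*np.arange(n_sensors)*np.sin(th))
        spec.append(1.0/np.linalg.norm(En.conj().T @ a)**2)
    return np.array(spec)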
A straightforward frequency-estimation technique for GPS carrier-phase time transfer.
Hackman, Christine; Levine, Judah; Parker, Thomas E; Piester, Dirk; Becker, Jürgen
2006-09-01
Although Global Positioning System (GPS) carrier-phase time transfer (GPSCPTT) offers frequency stability approaching 10^-15 at averaging times of 1 d, a discontinuity occurs in the time-transfer estimates between the end of one processing batch (1-3 d in length) and the beginning of the next. The average frequency over a multiday analysis period often has been computed by first estimating and removing these discontinuities, i.e., through concatenation. We present a new frequency-estimation technique in which frequencies are computed from the individual batches then averaged to obtain the mean frequency for a multiday period. This allows the frequency to be computed without the uncertainty associated with the removal of the discontinuities and requires fewer computational resources. The new technique was tested by comparing the fractional frequency-difference values it yields to those obtained using a GPSCPTT concatenation method and those obtained using two-way satellite time-and-frequency transfer (TWSTFT). The clocks studied were located in Braunschweig, Germany, and in Boulder, CO. The frequencies obtained from the GPSCPTT measurements using either method agreed with those obtained from TWSTFT at several parts in 10^16. The frequency values obtained from the GPSCPTT data by use of the new method agreed with those obtained using the concatenation technique at 1-4 x 10^-16.
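The essence of the new technique is that a frequency (a slope) survives batch boundaries even though the phase offset does not, so no discontinuity ever needs to be estimated. A minimal sketch with synthetic data and an unweighted average; an operational version would weight by batch length and quality.

import numpy as np

def mean_frequency(batches):
    # batches: list of (t, x) arrays, time (s) and time difference (s),
    # one pair per 1-3 day processing batch. The slope of x(t) is the
    # fractional frequency; averaging slopes sidesteps the batch-boundary
    # discontinuities that concatenation must estimate and remove.
    return np.mean([np.polyfit(t, x, 1)[0] for t, x in batches])

t = np.arange(0.0, 86400.0, 300.0)
batches = [(t, 3e-15*t + 1e-10),      # two 1-day batches with the same
           (t, 3e-15*t + 2e-10)]      # frequency but a phase jump between them
print(mean_frequency(batches))        # ~3e-15, jump never estimated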
Approach to Computer Implementation of Mathematical Model of 3-Phase Induction Motor
NASA Astrophysics Data System (ADS)
Pustovetov, M. Yu
2018-03-01
This article discusses the development of a computer model of an induction motor based on the mathematical model in a three-phase stator reference frame. It uses an approach that allows two methods to be combined during preparation of the computer model: visual circuit programming (in the form of electrical schematics) and logical programming (in the form of block diagrams). The approach enables easy integration of the induction motor model as part of more complex models of electrical complexes and systems. The developed computer model gives the user access to the beginning and the end of the winding of each of the three phases of the stator and rotor. This property is particularly important when considering asymmetric modes of operation or when the motor is powered by special semiconductor converter circuitry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neil, Joshua Charles; Fisk, Michael Edward; Brugh, Alexander William
A system, apparatus, computer-readable medium, and computer-implemented method are provided for detecting anomalous behavior in a network. Historical parameters of the network are determined in order to determine normal activity levels. A plurality of paths in the network are enumerated as part of a graph representing the network, where each computing system in the network may be a node in the graph and the sequence of connections between two computing systems may be a directed edge in the graph. A statistical model is applied to the plurality of paths in the graph on a sliding window basis to detect anomalous behavior. Data collected by a Unified Host Collection Agent ("UHCA") may also be used to detect anomalous behavior.
Word spotting for handwritten documents using Chamfer Distance and Dynamic Time Warping
NASA Astrophysics Data System (ADS)
Saabni, Raid M.; El-Sana, Jihad A.
2011-01-01
A large number of handwritten historical documents are held in libraries around the world. The desire to access, search, and explore these documents paves the way for a new age of knowledge sharing and promotes collaboration and understanding between human societies. Currently, the indexes for these documents are generated manually, which is very tedious and time consuming. Results produced by state-of-the-art techniques for converting complete images of handwritten documents into textual representations are not yet sufficient. Therefore, word-spotting methods have been developed to archive and index images of handwritten documents in order to enable efficient searching within documents. In this paper, we present a new matching algorithm to be used in word-spotting tasks for historical Arabic documents. We present a novel algorithm based on the Chamfer Distance to compute the similarity between shapes of word-parts. Matching results are used to cluster images of Arabic word-parts into different classes using the Nearest Neighbor rule. To compute the distance between two word-part images, the algorithm subdivides each image into equal-sized slices (windows). A modified version of the Chamfer Distance, incorporating geometric gradient features and distance transform data, is used as a similarity distance between the different slices. Finally, the Dynamic Time Warping (DTW) algorithm is used to measure the distance between two images of word-parts. By using the DTW we enabled our system to cluster similar word-parts, even though they are transformed non-linearly due to the nature of handwriting. We tested our implementation of the presented methods using various documents in different writing styles, taken from Juma'a Al Majid Center - Dubai, and obtained encouraging results.
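The pipeline described above (slice, compare slices, warp) can be sketched compactly; this simplified version omits the paper's geometric gradient features and uses a plain symmetric chamfer cost over binary ink masks.

import numpy as np
from scipy.ndimage import distance_transform_edt

def window_chamfer(a, b):
    # Symmetric chamfer-style cost between two binary slices: mean distance
    # from each ink pixel of one slice to the nearest ink pixel of the other.
    # Assumes each slice contains at least one ink pixel.
    return (distance_transform_edt(~b)[a].mean() +
            distance_transform_edt(~a)[b].mean())

def dtw(cost):
    # Classic O(nm) dynamic time warping over a precomputed cost matrix.
    n, m = cost.shape
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = cost[i-1, j-1] + min(D[i-1, j], D[i, j-1], D[i-1, j-1])
    return D[n, m]

def wordpart_distance(img1, img2, width=8):
    # Slice each binary word-part image into equal-width windows, build the
    # pairwise window cost matrix, and let DTW absorb non-linear stretching.
    s1 = [img1[:, i:i+width] for i in range(0, img1.shape[1]-width+1, width)]
    s2 = [img2[:, i:i+width] for i in range(0, img2.shape[1]-width+1, width)]
    return dtw(np.array([[window_chamfer(a, b) for b in s2] for a in s1]))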
Evaluation of ADINA. Part I. Theory and Programing Descriptions.
1980-06-08
Problem," Numerical and Computer Methods in Structural Mechanics, S. J. Feaves, N. Pe-rone, J. Robinson and W.C. Schnobrich, eds., Academic Press, New...connectivity array N102 ’NDM*NlJME-ITW0 YL Element nodal coordinates N103 ---- NUME IELT Element number of nodes N104 NUME IPST Stress printing flag N105 NUME
D-Move: A Mobile Communication Based Delphi for Digital Natives to Support Embedded Research
ERIC Educational Resources Information Center
Petrovic, Otto
2017-01-01
Digital Natives are raised with computers and the Internet, which are a familiar part of their daily life. To gain insights into their attitude and behavior, methods and media for empirical research face new challenges like gamification, context oriented embedded research, integration of multiple data sources, and the increased importance of…
ERIC Educational Resources Information Center
Chung, Gregory K. W. K.; Delacruz, Girlie C.; Dionne, Gary B.; Baker, Eva L.; Lee, John J.; Osmundson, Ellen
2016-01-01
This report addresses a renewed interest in individualized instruction, driven in part by advances in technology and assessment as well as a persistent desire to increase the access, efficiency, and cost effectiveness of training and education. Using computer-based instruction we delivered extremely efficient instruction targeted to low knowledge…
40 CFR 60.50Da - Compliance determination procedures and methods.
Code of Federal Regulations, 2010 CFR
2010-07-01
... paragraphs (g)(1) and (2) of this section to calculate emission rates based on electrical output to the grid... of appendix A of this part shall be used to compute the emission rate of PM. (2) For the particular... reduction from fuel pretreatment, percent; and %Rg = Percent reduction by SO2 control system, percent. (2...
Mitigating component performance variation
Gara, Alan G.; Sylvester, Steve S.; Eastep, Jonathan M.; Nagappan, Ramkumar; Cantalupo, Christopher M.
2018-01-09
Apparatus and methods may provide for characterizing a plurality of similar components of a distributed computing system based on a maximum safe operation level associated with each component and storing characterization data in a database and allocating non-uniform power to each similar component based at least in part on the characterization data in the database to substantially equalize performance of the components.
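One way to picture the allocation step (a toy sketch of the idea, not the patented mechanism; all numbers are illustrative): give each component the power that brings its expected performance to a common target, capped at its characterized maximum safe level.

import numpy as np
max_safe = np.array([95.0, 100.0, 88.0, 92.0])       # characterized safe power (W)
perf_per_watt = np.array([1.00, 0.93, 1.10, 1.02])   # measured component variation
target = np.min(perf_per_watt * max_safe)            # best performance all can reach
power = np.minimum(target / perf_per_watt, max_safe) # non-uniform per-component budget
print(power, perf_per_watt * power)                  # near-equal performance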
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alam, B., E-mail: badrul.alam@uniroma1.it; Veroli, A.; Benedetti, A.
2016-08-28
A structure featuring vertical directional coupling of long-range surface plasmon polaritons between strip waveguides at λ = 1.55 μm is investigated with the aim of producing efficient elements that enable optical multilayer routing for 3D photonics. We have introduced a practical computational method to calculate the interaction on the bent part. This method allows us both to assess the importance of the interaction in the bent part and to control it by a suitable choice of the fabrication parameters, which also helps to restrain effects due to fabrication issues. The scheme adopted here allows the insertion losses to be reduced compared with other planar and multilayer devices.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 3 2014-01-01 2014-01-01 false Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions K Appendix K to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED..., App. K Appendix K to Part 226—Total Annual Loan Cost Rate Computations for Reverse Mortgage...
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 3 2013-01-01 2013-01-01 false Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions K Appendix K to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED..., App. K Appendix K to Part 226—Total Annual Loan Cost Rate Computations for Reverse Mortgage...
HOLEGAGE 1.0 - STRAIN GAGE HOLE DRILLING ANALYSIS PROGRAM
NASA Technical Reports Server (NTRS)
Hampton, R. W.
1994-01-01
There is no simple and perfect way to measure residual stresses in metal parts that have been welded or deformed to make complex structures such as pressure vessels and aircraft, yet these locked-in stresses can contribute to structural failure by fatigue and fracture. However, one proven and tested technique for determining the internal stress of a metal part is to drill a test hole while measuring the relieved strains around the hole, such as the hole-drilling strain gage method described in ASTM E 837. The program HOLEGAGE processes strain gage data and provides additional calculations of internal stress variations that are not obtained with standard E 837 analysis methods. The typical application of the technique uses a three gage rosette with a special hole-drilling fixture for drilling a hole through the center of the rosette to produce a hole with very small gage pattern eccentricity error. Another device is used to control the drilling and halt the drill at controlled depth steps. At each step, strains from all three strain gages are recorded. The influence coefficients used by HOLEGAGE to compute stresses from relieved hole strains were developed by published finite element method studies of thick plates for specific hole sizes and depths. The program uses a parabolic fit and an interpolating scheme to project the coefficients to other hole sizes and depths. Additionally, published experimental data are used to extend the coefficients to relatively thin plates. These influence coefficients are used to compute the stresses in the original part from the strain data. HOLEGAGE will compute interior planar stresses using strain data from each drilled hole depth layer. Planar stresses may be computed in three ways including: a least squares fit for a linear variation with depth, an integral method to give incremental stress data for each layer, or by a linear fit to the integral data (with some surface data points omitted) to predict surface stresses before strain gage sanding preparations introduced additional residual stresses. Options are included for estimating the effect of hole eccentricity on calculations, smoothing noise from the strain data, and inputting the program data either interactively or from a data file. HOLEGAGE was written in FORTRAN 77 for DEC VAX computers under VMS, and is transportable except for system-unique TIME and DATE system calls. The program requires 54K of main memory and was developed in 1990. The program is available on a 9-track 1600 BPI VAX BACKUP format magnetic tape (standard media) or a TK50 tape cartridge. The documentation is included on the tape. DEC VAX and VMS are trademarks of Digital Equipment Corporation.
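For reference, the core of the standard hole-drilling data reduction that the influence-coefficient approach generalizes can be written compactly (a textbook summary of the ASTM E 837 relations; sign conventions and the depth dependence of the calibration coefficients \bar{a} and \bar{b} vary between editions):

p = \frac{\varepsilon_3 + \varepsilon_1}{2}, \quad q = \frac{\varepsilon_3 - \varepsilon_1}{2}, \quad t = \frac{\varepsilon_3 + \varepsilon_1 - 2\varepsilon_2}{2}

\sigma_{\max},\ \sigma_{\min} = -\frac{E\,p}{\bar{a}\,(1+\nu)} \pm \frac{E}{\bar{b}}\sqrt{q^{2} + t^{2}}

where \varepsilon_1, \varepsilon_2, \varepsilon_3 are the relieved strains from the three-gage rosette, E is Young's modulus, and \nu is Poisson's ratio.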
NASA Technical Reports Server (NTRS)
Kraft, R. E.
1996-01-01
A computational method to predict modal reflection coefficients in cylindrical ducts has been developed based on the work of Homicz, Lordi, and Rehm, which uses the Wiener-Hopf method to account for the boundary conditions at the termination of a thin cylindrical pipe. The purpose of this study is to develop a computational routine to predict the reflection coefficients of higher order acoustic modes impinging on the unflanged termination of a cylindrical duct. This effort was conducted under Task Order 5 of the NASA Lewis LET Program, Active Noise Control of Aircraft Engines: Feasibility Study, and will be used as part of the development of an integrated source noise, acoustic propagation, ANC actuator coupling, and control system algorithm simulation. The reflection coefficient prediction will be incorporated into an existing cylindrical duct modal analysis to account for the reflection of modes from the duct termination. This will provide a more accurate, rapid computational design tool for evaluating the effect of reflected waves on active noise control systems mounted in the duct, as well as a tool for the design of acoustic treatment in inlet ducts. As an active noise control system design tool, the method can be used preliminary to more accurate but more numerically intensive acoustic propagation models such as finite element methods. The resulting computer program has been shown to give reasonable results, some examples of which are presented. Reliable data to use for comparison are scarce, so complete checkout is difficult, and further checkout is needed over a wider range of system parameters. In future efforts the method will be adapted as a subroutine to the GEAE segmented cylindrical duct modal analysis program.
Attribute And-Or Grammar for Joint Parsing of Human Pose, Parts and Attributes.
Park, Seyoung; Nie, Xiaohan; Zhu, Song-Chun
2017-07-25
This paper presents an attribute and-or grammar (A-AOG) model for jointly inferring human body pose and human attributes in a parse graph with attributes augmented to nodes in the hierarchical representation. In contrast to other popular methods in the current literature that train separate classifiers for poses and individual attributes, our method explicitly represents the decomposition and articulation of body parts and accounts for the correlations between poses and attributes. The A-AOG model is an amalgamation of three traditional grammar formulations: (i) a phrase structure grammar representing the hierarchical decomposition of the human body from whole to parts; (ii) a dependency grammar modeling the geometric articulation by a kinematic graph of the body pose; and (iii) an attribute grammar accounting for the compatibility relations between different parts in the hierarchy so that their appearances follow a consistent style. The parse graph outputs human detection, pose estimation, and attribute prediction simultaneously, which are intuitive and interpretable. We conduct experiments on two tasks on two datasets, and the experimental results demonstrate the advantage of joint modeling in comparison with computing poses and attributes independently. Furthermore, our model obtains better performance over existing methods for both pose estimation and attribute prediction tasks.
Quantum mechanics/coarse-grained molecular mechanics (QM/CG-MM)
NASA Astrophysics Data System (ADS)
Sinitskiy, Anton V.; Voth, Gregory A.
2018-01-01
Numerous molecular systems, including solutions, proteins, and composite materials, can be modeled using mixed-resolution representations, of which the quantum mechanics/molecular mechanics (QM/MM) approach has become the most widely used. However, the QM/MM approach often faces a number of challenges, including the high cost of repetitive QM computations, the slow sampling even for the MM part in those cases where a system under investigation has a complex dynamics, and a difficulty in providing a simple, qualitative interpretation of numerical results in terms of the influence of the molecular environment upon the active QM region. In this paper, we address these issues by combining QM/MM modeling with the methodology of "bottom-up" coarse-graining (CG) to provide the theoretical basis for a systematic quantum-mechanical/coarse-grained molecular mechanics (QM/CG-MM) mixed resolution approach. A derivation of the method is presented based on a combination of statistical mechanics and quantum mechanics, leading to an equation for the effective Hamiltonian of the QM part, a central concept in the QM/CG-MM theory. A detailed analysis of different contributions to the effective Hamiltonian from electrostatic, induction, dispersion, and exchange interactions between the QM part and the surroundings is provided, serving as a foundation for a potential hierarchy of QM/CG-MM methods varying in their accuracy and computational cost. A relationship of the QM/CG-MM methodology to other mixed resolution approaches is also discussed.
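As a very rough orientation (an assumed generic form, not the paper's equation, which is derived term by term for the electrostatic, induction, dispersion, and exchange contributions), an effective Hamiltonian of this kind can be introduced statistically by integrating out the CG degrees of freedom:

e^{-\beta H_{\mathrm{eff}}(\mathbf{r}_{\mathrm{QM}})} \propto \int d\mathbf{R}_{\mathrm{CG}}\; e^{-\beta\left[H_{\mathrm{QM}}(\mathbf{r}_{\mathrm{QM}}) + V_{\mathrm{int}}(\mathbf{r}_{\mathrm{QM}}, \mathbf{R}_{\mathrm{CG}}) + V_{\mathrm{CG}}(\mathbf{R}_{\mathrm{CG}})\right]}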
Computational and design methods for advanced imaging
NASA Astrophysics Data System (ADS)
Birch, Gabriel C.
This dissertation merges the optical design and computational aspects of imaging systems to create novel devices that solve engineering problems in optical science, and attempts to expand the solution space available to the optical designer. This dissertation is divided into two parts: the first discusses a new active illumination depth sensing modality, while the second part discusses a passive illumination system called plenoptic, or lightfield, imaging. The new depth sensing modality introduced in part one is called depth through controlled aberration. This technique illuminates a target with a known, aberrated projected pattern and takes an image using a traditional, unmodified imaging system. Knowing how the added aberration in the projected pattern changes as a function of depth, we are able to quantitatively determine the depth of a series of points from the camera. A major advantage this method permits is the ability for the illumination and imaging axes to be coincident. Plenoptic cameras capture both spatial and angular data simultaneously. This dissertation presents a new set of parameters that permit the design and comparison of plenoptic devices outside the traditionally published plenoptic 1.0 and plenoptic 2.0 configurations. Additionally, a series of engineering advancements are presented, including full system raytraces of raw plenoptic images, Zernike compression techniques for raw image files, and non-uniform lenslet arrays to compensate for plenoptic system aberrations. Finally, a new snapshot imaging spectrometer is proposed based on the plenoptic configuration.
Analysis and selection of optimal function implementations in massively parallel computer
Archer, Charles Jens [Rochester, MN; Peters, Amanda [Rochester, MN; Ratterman, Joseph D [Rochester, MN
2011-05-31
An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.
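The mechanism reads naturally as a measure-then-dispatch pattern; the sketch below is an illustration of that idea under invented names, not the patented implementation (which, per the abstract, also performs deeper comparative analysis of the collected data).

import itertools, time

def _time(f, kwargs, reps=3):
    t0 = time.perf_counter()
    for _ in range(reps):
        f(**kwargs)
    return time.perf_counter() - t0

def measure(impls, param_grid):
    # Collect performance data for each implementation across all input
    # dimensions, recording the fastest implementation per grid point.
    table = {}
    for params in itertools.product(*param_grid.values()):
        kwargs = dict(zip(param_grid, params))
        table[params] = min(impls, key=lambda f: _time(f, kwargs))
    return table

def make_selector(table, param_grid):
    # The generated "selection program code" is then just a lookup that
    # calls the implementation chosen for these input parameters.
    def selector(**kwargs):
        return table[tuple(kwargs[k] for k in param_grid)](**kwargs)
    return selector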
Adiabatic Quantum Anomaly Detection and Machine Learning
NASA Astrophysics Data System (ADS)
Pudenz, Kristen; Lidar, Daniel
2012-02-01
We present methods of anomaly detection and machine learning using adiabatic quantum computing. The machine learning algorithm is a boosting approach which seeks to optimally combine somewhat accurate classification functions to create a unified classifier which is much more accurate than its components. This algorithm then becomes the first part of the larger anomaly detection algorithm. In the anomaly detection routine, we first use adiabatic quantum computing to train two classifiers which detect two sets, the overlap of which forms the anomaly class. We call this the learning phase. Then, in the testing phase, the two learned classification functions are combined to form the final Hamiltonian for an adiabatic quantum computation, the low energy states of which represent the anomalies in a binary vector space.
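The boosting step maps naturally onto the quadratic binary optimization an adiabatic machine minimizes. A schematic QUBO construction, not the authors' exact formulation: binary weights w select weak classifiers so as to minimize ||Hw - y||^2 + lam*sum(w), which an annealer receives as the matrix Q below.

import numpy as np

def boosting_qubo(H, y, lam=0.1):
    # H[i, j]: output (+/-1) of weak classifier j on training sample i;
    # y: labels (+/-1). For binary w, w_i^2 = w_i, so the linear terms
    # fold into the diagonal of the QUBO matrix.
    Q = H.T @ H + lam * np.eye(H.shape[1])
    np.fill_diagonal(Q, np.diag(Q) - 2.0 * (H.T @ y))
    return Q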
pyNS: an open-source framework for 0D haemodynamic modelling.
Manini, Simone; Antiga, Luca; Botti, Lorenzo; Remuzzi, Andrea
2015-06-01
A number of computational approaches have been proposed for the simulation of haemodynamics and vascular wall dynamics in complex vascular networks. Among them, 0D pulse wave propagation methods allow to efficiently model flow and pressure distributions and wall displacements throughout vascular networks at low computational costs. Although several techniques are documented in literature, the availability of open-source computational tools is still limited. We here present python Network Solver, a modular solver framework for 0D problems released under a BSD license as part of the archToolkit ( http://archtk.github.com ). As an application, we describe patient-specific models of the systemic circulation and detailed upper extremity for use in the prediction of maturation after surgical creation of vascular access for haemodialysis.
NASA Technical Reports Server (NTRS)
Coles, W. A.
1975-01-01
The CAD/CAM interactive computer graphics system was described; uses to which it has been put were shown, and current developments of the system were outlined. The system supports batch, time sharing, and fully interactive graphic processing. Engineers using the system may switch between these methods of data processing and problem solving to make the best use of the available resources. It is concluded that the introduction of on-line computing in the form of teletypes, storage tubes, and fully interactive graphics has resulted in large increases in productivity and reduced timescales in the geometric computing, numerical lofting and part programming areas, together with a greater utilization of the system in the technical departments.
NASA Technical Reports Server (NTRS)
Grove, R. D.; Mayhew, S. C.
1973-01-01
A computer program (Langley program C1123) has been developed for estimating aircraft stability and control parameters from flight test data. These parameters are estimated by the maximum likelihood estimation procedure implemented on a real-time digital simulation system, which uses the Control Data 6600 computer. This system allows the investigator to interact with the program in order to obtain satisfactory results. Part of this system, the control and display capabilities, is described for this program. This report also describes the computer program by presenting the program variables, subroutines, flow charts, listings, and operational features. Program usage is demonstrated with a test case using pseudo or simulated flight data.
NASA Astrophysics Data System (ADS)
Bucha, Blažej; Janák, Juraj
2013-07-01
We present a novel graphical user interface program GrafLab (GRAvity Field LABoratory) for spherical harmonic synthesis (SHS) created in MATLAB®. The program allows the user to comfortably compute 38 various functionals of the geopotential up to ultra-high degrees and orders of spherical harmonic expansion. For the most difficult part of the SHS, namely the evaluation of the fully normalized associated Legendre functions (fnALFs), we used three different approaches according to the required maximum degree: (i) the standard forward column method (up to maximum degree 1800, in some cases up to degree 2190); (ii) the modified forward column method combined with Horner's scheme (up to maximum degree 2700); (iii) the extended-range arithmetic (up to an arbitrary maximum degree). For the maximum degree 2190, the SHS with fnALFs evaluated using the extended-range arithmetic approach takes only approximately 2-3 times longer than its standard arithmetic counterpart, i.e. the standard forward column method. In GrafLab, the functionals of the geopotential can be evaluated on a regular grid or point-wise, while the input coordinates can either be read from a data file or entered manually. For computation on a regular grid we decided to apply the lumped coefficients approach due to the significant time-efficiency of this method. Furthermore, if a full variance-covariance matrix of spherical harmonic coefficients is available, it is possible to compute the commission errors of the functionals. When computing on a regular grid, the output functionals or their commission errors may be depicted on a map using an automatically selected cartographic projection.
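For concreteness, the standard forward column method named in (i) can be sketched as follows (a Holmes & Featherstone-style recursion, shown for one order m as an illustration; GrafLab's own MATLAB code is not reproduced, and this plain-arithmetic version underflows near the poles at high degree, which is exactly why options (ii) and (iii) exist).

import numpy as np

def fnalf_column(theta, nmax, m):
    # Fully normalized associated Legendre functions P(n, m) at colatitude
    # theta, fixed order m, degrees n = m..nmax, via forward column recursion.
    t, u = np.cos(theta), np.sin(theta)
    pmm = 1.0
    for k in range(1, m + 1):         # sectorial seed P(m, m)
        pmm *= u * (np.sqrt(3.0) if k == 1 else np.sqrt((2*k + 1) / (2*k)))
    P = {(m, m): pmm}
    if nmax > m:
        P[(m + 1, m)] = np.sqrt(2*m + 3) * t * pmm
    for n in range(m + 2, nmax + 1):
        a = np.sqrt((2*n - 1) * (2*n + 1) / ((n - m) * (n + m)))
        b = np.sqrt((2*n + 1) * (n + m - 1) * (n - m - 1)
                    / ((n - m) * (n + m) * (2*n - 3)))
        P[(n, m)] = a * t * P[(n - 1, m)] - b * P[(n - 2, m)]
    return P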
A Novel Quasi-3D Method for Cascade Flow Considering Axial Velocity Density Ratio
NASA Astrophysics Data System (ADS)
Chen, Zhiqiang; Zhou, Ming; Xu, Quanyong; Huang, Xudong
2018-03-01
A novel quasi-3D Computational Fluid Dynamics (CFD) method for mid-span flow simulation of compressor cascades is proposed. The two-dimensional (2D) Reynolds-Averaged Navier-Stokes (RANS) method is shown to face challenges in predicting mid-span flow with a unity Axial Velocity Density Ratio (AVDR). Three-dimensional (3D) RANS solutions also show distinct discrepancies if the AVDR is not predicted correctly. In this paper, the discrepancies between 2D and 3D CFD results are analyzed and a novel quasi-3D CFD method is proposed. The new quasi-3D model is derived by reducing the 3D RANS Finite Volume Method (FVM) discretization over a structured mesh cell spanning a single spanwise layer. The sidewall effect is accounted for in two parts. The first part consists of explicit interface fluxes of mass, momentum and energy, as well as turbulence. The second part is a cell boundary scaling factor representing sidewall boundary layer contraction. The performance of the novel quasi-3D method is validated on mid-span pressure distribution, pressure loss and shock prediction for two typical cascades. The results show good agreement with the experimental data for cascade SJ301-20 and cascade AC6-10 at all test conditions. The proposed quasi-3D method shows superior accuracy over the traditional 2D RANS method and the 3D RANS method in performance prediction of compressor cascades.
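For readers outside the cascade-testing community: the AVDR referred to throughout is the outlet-to-inlet ratio of axial mass flux at mid-span,

\mathrm{AVDR} = \frac{\rho_2 V_{x,2}}{\rho_1 V_{x,1}},

which equals unity only if the sidewall boundary layers do not squeeze the mid-span streamtube; 2D simulations implicitly fix it at one, which is the discrepancy the quasi-3D model is built to remove.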
Panel methods: An introduction
NASA Technical Reports Server (NTRS)
Erickson, Larry L.
1990-01-01
Panel methods are numerical schemes for solving (the Prandtl-Glauert equation) for linear, inviscid, irrotational flow about aircraft flying at subsonic or supersonic speeds. The tools at the panel-method user's disposal are (1) surface panels of source-doublet-vorticity distributions that can represent nearly arbitrary geometry, and (2) extremely versatile boundary condition capabilities that can frequently be used for creative modeling. Panel-method capabilities and limitations, basic concepts common to all panel-method codes, different choices that were made in the implementation of these concepts into working computer programs, and various modeling techniques involving boundary conditions, jump properties, and trailing wakes are discussed. An approach for extending the method to nonlinear transonic flow is also presented. Three appendices supplement the main text. In appendix 1, additional detail is provided on how the basic concepts are implemented into a specific computer program (PANAIR). In appendix 2, it is shown how to evaluate analytically the fundamental surface integral that arises in the expressions for influence coefficients, and how to evaluate its jump property. In appendix 3, a simple example is used to illustrate the so-called finite part of the improper integrals.
Nei, M; Tateno, Y
1981-01-01
Conducting computer simulations, Nei and Tateno (1978) have shown that Jukes and Holmquist's (1972) method of estimating the number of nucleotide substitutions tends to give an overestimate and the estimate obtained has a large variance. Holmquist and Conroy (1980) repeated some parts of our simulation and claim that the overestimation of nucleotide substitutions in our paper occurred mainly because we used selected data. Examination of Holmquist and Conroy's simulation indicates that their results are essentially the same as ours when the Jukes-Holmquist method is used, but since they used a different method of computation their estimates of nucleotide substitutions differed substantially from ours. Another problem in Holmquist and Conroy's Letter is that they confused the expected number of nucleotide substitutions with the number in a sample. This confusion has resulted in a number of unnecessary arguments. They also criticized our χ2 measure, but this criticism is apparently due to a misunderstanding of the assumptions of our method and a failure to use our method in the way we described. We believe that our earlier conclusions remain unchanged.