Sample records for microholographic computer generated

  1. Apparatus for generating x-ray holograms

    DOEpatents

    Rhodes, C.K.; Boyer, K.; Solem, J.C.; Haddad, W.S.

    1990-09-11

    Apparatus for x-ray microholography of living biological materials. A Fourier transform holographic configuration is described as being most suitable for the 3-dimensional recording of the physical characteristics of biological specimens. The use of a spherical scatterer as a reference and a charge-coupled device two-dimensional detector array placed in the forward direction relative to the incident x-radiation for viewing electromagnetic radiation simultaneously scattered from both the specimen and the reference scatterer permits the ready reconstruction of the details of the specimen from the fringe pattern detected by the charge-coupled device. For example, by using a nickel reference scatterer at 4.5 nm, sufficient reference illumination is provided over a wide enough angle to allow similar resolution in both transverse and longitudinal directions. Both laser and synchrotron radiation sources are feasible for generating microholographs. Operation in the water window (2.4 to 4.5 nm) should provide maximum contrast for features of the specimen, and spatial resolution on the order of the wavelength of x-radiation should be possible in all three dimensions, which is sufficient for the visualization of many biological features. It is anticipated that the present apparatus will find utility in other areas as well where microscopic physical details of a specimen are important. A computational procedure has been developed which enables the holographic data collected by the detector to be used to correct for misalignments introduced by inexact knowledge of the relative positions of the spherical reference scatterer and the sample under investigation. If the correction is performed prior to reconstruction, full compensation can be achieved and a faithfully reconstructed image produced. 7 figs.

  2. Apparatus for generating x-ray holograms

    DOEpatents

    Rhodes, Charles K.; Boyer, Keith; Solem, Johndale C.; Haddad, Waleed S.

    1990-01-01

    Apparatus for x-ray microholography of living biological materials. A Fourier transform holographic configuration is described as being most suitable for the 3-dimensional recording of the physical characteristics of biological specimens. The use of a spherical scatterer as a reference and a charge-coupled device two-dimensional detector array placed in the forward direction relative to the incident x-radiation for viewing electromagnetic radiation simultaneously scattered from both the specimen and the reference scatterer permits the ready reconstruction of the details of the specimen from the fringe pattern detected by the charge-coupled device. For example, by using a nickel reference scatterer at 4.5 nm, sufficient reference illumination is provided over a wide enough angle to allow similar resolution in both transverse and longitudinal directions. Both laser and synchrotron radiation sources are feasible for generating microholographs. Operation in the water window (2.4 to 4.5 nm) should provide maximum contrast for features of the specimen, and spatial resolution on the order of the wavelength of x-radiation should be possible in all three dimensions, which is sufficient for the visualization of many biological features. It is anticipated that the present apparatus will find utility in other areas as well where microscopic physical details of a specimen are important. A computational procedure has been developed which enables the holographic data collected by the detector to be used to correct for misalignments introduced by inexact knowledge of the relative positions of the spherical reference scatterer and the sample under investigation. If the correction is performed prior to reconstruction, full compensation can be achieved and a faithfully reconstructed image produced.

  3. 48 CFR 53.105 - Computer generation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 2 2010-10-01 2010-10-01 false Computer generation. 53...) CLAUSES AND FORMS FORMS General 53.105 Computer generation. (a) Agencies may computer-generate the... be computer generated by the public. Unless prohibited by agency regulations, forms prescribed by...

  4. 48 CFR 53.105 - Computer generation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 2 2011-10-01 2011-10-01 false Computer generation. 53...) CLAUSES AND FORMS FORMS General 53.105 Computer generation. (a) Agencies may computer-generate the... be computer generated by the public. Unless prohibited by agency regulations, forms prescribed by...

  5. Computer-Generated Feedback on Student Writing

    ERIC Educational Resources Information Center

    Ware, Paige

    2011-01-01

    A distinction must be made between "computer-generated scoring" and "computer-generated feedback". Computer-generated scoring refers to the provision of automated scores derived from mathematical models built on organizational, syntactic, and mechanical aspects of writing. In contrast, computer-generated feedback, the focus of this article, refers…

  6. 48 CFR 52.253-1 - Computer Generated Forms.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 2 2012-10-01 2012-10-01 false Computer Generated Forms....253-1 Computer Generated Forms. As prescribed in FAR 53.111, insert the following clause: Computer... by the Federal Acquisition Regulation (FAR) may be submitted on a computer generated version of the...

  7. 48 CFR 52.253-1 - Computer Generated Forms.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 2 2013-10-01 2013-10-01 false Computer Generated Forms....253-1 Computer Generated Forms. As prescribed in FAR 53.111, insert the following clause: Computer... by the Federal Acquisition Regulation (FAR) may be submitted on a computer generated version of the...

  8. 48 CFR 52.253-1 - Computer Generated Forms.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 2 2014-10-01 2014-10-01 false Computer Generated Forms....253-1 Computer Generated Forms. As prescribed in FAR 53.111, insert the following clause: Computer... by the Federal Acquisition Regulation (FAR) may be submitted on a computer generated version of the...

  9. 27 CFR 19.634 - Computer-generated reports and transaction forms.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2013-04-01 2013-04-01 false Computer-generated reports... Reports Filing Forms and Reports § 19.634 Computer-generated reports and transaction forms. TTB will accept computer-generated reports of operations and transaction forms made using a computer printer on...

  10. 27 CFR 19.634 - Computer-generated reports and transaction forms.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2014-04-01 2014-04-01 false Computer-generated reports... Reports Filing Forms and Reports § 19.634 Computer-generated reports and transaction forms. TTB will accept computer-generated reports of operations and transaction forms made using a computer printer on...

  11. 27 CFR 19.634 - Computer-generated reports and transaction forms.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2012-04-01 2012-04-01 false Computer-generated reports... Reports Filing Forms and Reports § 19.634 Computer-generated reports and transaction forms. TTB will accept computer-generated reports of operations and transaction forms made using a computer printer on...

  12. 27 CFR 19.634 - Computer-generated reports and transaction forms.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2011-04-01 2011-04-01 false Computer-generated reports... Reports Filing Forms and Reports § 19.634 Computer-generated reports and transaction forms. TTB will accept computer-generated reports of operations and transaction forms made using a computer printer on...

  13. 48 CFR 52.253-1 - Computer Generated Forms.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 2 2011-10-01 2011-10-01 false Computer Generated Forms....253-1 Computer Generated Forms. As prescribed in FAR 53.111, insert the following clause: Computer... by the Federal Acquisition Regulation (FAR) may be submitted on a computer generated version of the...

  14. 48 CFR 52.253-1 - Computer Generated Forms.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 2 2010-10-01 2010-10-01 false Computer Generated Forms....253-1 Computer Generated Forms. As prescribed in FAR 53.111, insert the following clause: Computer... by the Federal Acquisition Regulation (FAR) may be submitted on a computer generated version of the...

  15. 78 FR 68058 - Next Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-13

    ... Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational, and Systems Biology... Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational, and Systems Biology..., computational, and systems biology data can better inform risk assessment. This draft document is available for...

  16. The RANDOM computer program: A linear congruential random number generator

    NASA Technical Reports Server (NTRS)

    Miles, R. F., Jr.

    1986-01-01

    The RANDOM Computer Program is a FORTRAN program for generating random number sequences and testing linear congruential random number generators (LCGs). The linear congruential form of random number generator is discussed, and the selection of parameters of an LCG for a microcomputer is described. This document describes the following: (1) The RANDOM Computer Program; (2) RANDOM.MOD, the computer code needed to implement an LCG in a FORTRAN program; and (3) The RANCYCLE and the ARITH Computer Programs that provide computational assistance in the selection of parameters for an LCG. The RANDOM, RANCYCLE, and ARITH Computer Programs are written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles. With only minor modifications, the RANDOM Computer Program and its LCG can be run on most microcomputers or mainframe computers.
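
    The linear congruential recurrence itself is compact enough to sketch directly. The Python fragment below is only an illustration of the LCG form; the modulus, multiplier, and increment are common 32-bit textbook values, not the parameters selected in the report.

```python
# Minimal linear congruential generator (LCG) sketch.
# The recurrence is x_{n+1} = (a * x_n + c) mod m; the constants below are
# common illustrative 32-bit values, not the parameters chosen in the report.

class LCG:
    def __init__(self, seed, a=1664525, c=1013904223, m=2**32):
        self.state = seed % m
        self.a, self.c, self.m = a, c, m

    def next_int(self):
        """Advance the recurrence and return the next integer in [0, m)."""
        self.state = (self.a * self.state + self.c) % self.m
        return self.state

    def next_float(self):
        """Return a pseudo-random float in [0, 1)."""
        return self.next_int() / self.m


if __name__ == "__main__":
    rng = LCG(seed=12345)
    print([round(rng.next_float(), 4) for _ in range(5)])
```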

  17. Non-harmful insertion of data mimicking computer network attacks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neil, Joshua Charles; Kent, Alexander; Hash, Jr, Curtis Lee

    Non-harmful data mimicking computer network attacks may be inserted in a computer network. Anomalous real network connections may be generated between a plurality of computing systems in the network. Data mimicking an attack may also be generated. The generated data may be transmitted between the plurality of computing systems using the real network connections and measured to determine whether an attack is detected.

  18. Geometry definition and grid generation for a complete fighter aircraft

    NASA Technical Reports Server (NTRS)

    Edwards, T. A.

    1986-01-01

    Recent advances in computing power and numerical solution procedures have enabled computational fluid dynamicists to attempt increasingly difficult problems. In particular, efforts are focusing on computations of complex three-dimensional flow fields about realistic aerodynamic bodies. To perform such computations, a very accurate and detailed description of the surface geometry must be provided, and a three-dimensional grid must be generated in the space around the body. The geometry must be supplied in a format compatible with the grid generation requirements, and must be verified to be free of inconsistencies. This paper presents a procedure for performing the geometry definition of a fighter aircraft that makes use of a commercial computer-aided design/computer-aided manufacturing system. Furthermore, visual representations of the geometry are generated using a computer graphics system for verification of the body definition. Finally, the three-dimensional grids for fighter-like aircraft are generated by means of an efficient new parabolic grid generation method. This method exhibits good control of grid quality.

  19. Geometry definition and grid generation for a complete fighter aircraft

    NASA Technical Reports Server (NTRS)

    Edwards, Thomas A.

    1986-01-01

    Recent advances in computing power and numerical solution procedures have enabled computational fluid dynamicists to attempt increasingly difficult problems. In particular, efforts are focusing on computations of complex three-dimensional flow fields about realistic aerodynamic bodies. To perform such computations, a very accurate and detailed description of the surface geometry must be provided, and a three-dimensional grid must be generated in the space around the body. The geometry must be supplied in a format compatible with the grid generation requirements, and must be verified to be free of inconsistencies. A procedure for performing the geometry definition of a fighter aircraft that makes use of a commercial computer-aided design/computer-aided manufacturing system is presented. Furthermore, visual representations of the geometry are generated using a computer graphics system for verification of the body definition. Finally, the three-dimensional grids for fighter-like aircraft are generated by means of an efficient new parabolic grid generation method. This method exhibits good control of grid quality.

  20. Low-Energy Truly Random Number Generation with Superparamagnetic Tunnel Junctions for Unconventional Computing

    NASA Astrophysics Data System (ADS)

    Vodenicarevic, D.; Locatelli, N.; Mizrahi, A.; Friedman, J. S.; Vincent, A. F.; Romera, M.; Fukushima, A.; Yakushiji, K.; Kubota, H.; Yuasa, S.; Tiwari, S.; Grollier, J.; Querlioz, D.

    2017-11-01

    Low-energy random number generation is critical for many emerging computing schemes proposed to complement or replace von Neumann architectures. However, current random number generators are always associated with an energy cost that is prohibitive for these computing schemes. We introduce random number bit generation based on specific nanodevices: superparamagnetic tunnel junctions. We experimentally demonstrate high-quality random bit generation that represents an orders-of-magnitude improvement in energy efficiency over current solutions. We show that the random generation speed improves with nanodevice scaling, and we investigate the impact of temperature, magnetic field, and cross talk. Finally, we show how alternative computing schemes can be implemented using superparamagnetic tunnel junctions as random number generators. These results open the way for fabricating efficient hardware computing devices leveraging stochasticity, and they highlight an alternative use for emerging nanodevices.
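
    As a rough toy model of the idea (not the authors' device physics), the sketch below treats a junction as a two-state telegraph signal with exponentially distributed dwell times, samples it periodically, and XOR-whitens pairs of samples. The dwell-time statistics, sampling period, and whitening step are all assumptions made for illustration.

```python
# Toy model of harvesting random bits from a two-state (telegraph) fluctuator,
# loosely inspired by superparamagnetic tunnel junction RNGs. The exponential
# dwell times, sampling period, and XOR whitening are illustrative assumptions.
import random

def telegraph_bits(n_bits, mean_dwell=1.0, sample_period=0.7, seed=None):
    rng = random.Random(seed)
    state = 0
    t, t_next_flip = 0.0, rng.expovariate(1.0 / mean_dwell)
    bits = []
    while len(bits) < n_bits:
        t += sample_period                  # read the junction state periodically
        while t >= t_next_flip:             # apply any state flips that occurred
            state ^= 1
            t_next_flip += rng.expovariate(1.0 / mean_dwell)
        bits.append(state)
    return bits

def xor_whiten(bits):
    """XOR consecutive pairs of samples to reduce bias and correlation."""
    return [a ^ b for a, b in zip(bits[0::2], bits[1::2])]

if __name__ == "__main__":
    whitened = xor_whiten(telegraph_bits(20000, seed=42))
    print(sum(whitened) / len(whitened))    # fraction of ones, should be near 0.5
```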

  1. Two schemes for rapid generation of digital video holograms using PC cluster

    NASA Astrophysics Data System (ADS)

    Park, Hanhoon; Song, Joongseok; Kim, Changseob; Park, Jong-Il

    2017-12-01

    Computer-generated holography (CGH), which is a process of generating digital holograms, is computationally expensive. Recently, several methods/systems of parallelizing the process using graphic processing units (GPUs) have been proposed. Indeed, use of multiple GPUs or a personal computer (PC) cluster (each PC with GPUs) enabled great improvements in the process speed. However, extant literature has less often explored systems involving rapid generation of multiple digital holograms and specialized systems for rapid generation of a digital video hologram. This study proposes a system that uses a PC cluster and is able to more efficiently generate a video hologram. The proposed system is designed to simultaneously generate multiple frames and accelerate the generation by parallelizing the CGH computations across a number of frames, as opposed to separately generating each individual frame while parallelizing the CGH computations within each frame. The proposed system also enables the subprocesses for generating each frame to execute in parallel through multithreading. With these two schemes, the proposed system significantly reduced the data communication time for generating a digital hologram when compared with that of the state-of-the-art system.
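
    The frame-level parallelization idea lends itself to a compact illustration. Below is a minimal sketch that computes whole frames in parallel workers, assuming a simple Fresnel point-source accumulation per frame and Python multiprocessing in place of the authors' GPU/PC-cluster setup; the wavelength, pixel pitch, resolution, and random point clouds are all illustrative.

```python
# Sketch of frame-level parallelism for computer-generated holography (CGH):
# each worker computes one whole frame's hologram from that frame's point cloud,
# instead of parallelizing within a single frame. The Fresnel point-source
# accumulation and every parameter below are illustrative assumptions.
import numpy as np
from multiprocessing import Pool

WAVELENGTH = 532e-9      # m
PITCH = 8e-6             # hologram pixel pitch, m
N = 256                  # hologram resolution, N x N pixels

def hologram_frame(points):
    """points: array of rows (x, y, z, amplitude) describing one frame's object."""
    xs = (np.arange(N) - N / 2) * PITCH
    X, Y = np.meshgrid(xs, xs)
    k = 2.0 * np.pi / WAVELENGTH
    field = np.zeros((N, N), dtype=complex)
    for x0, y0, z0, a in points:
        # Fresnel approximation of the spherical wave from each object point
        field += a * np.exp(1j * k * ((X - x0) ** 2 + (Y - y0) ** 2) / (2.0 * z0))
    return np.angle(field)   # phase-only hologram for this frame

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = [rng.uniform([-1e-3, -1e-3, 0.1, 0.5], [1e-3, 1e-3, 0.2, 1.0], (50, 4))
              for _ in range(8)]           # 8 frames, 50 random object points each
    with Pool() as pool:                   # one frame per worker, computed in parallel
        holograms = pool.map(hologram_frame, frames)
    print(len(holograms), holograms[0].shape)
```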

  2. 76 FR 79609 - Federal Acquisition Regulation; Clarification of Standards for Computer Generation of Forms

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-22

    ... Regulation; Clarification of Standards for Computer Generation of Forms AGENCY: Department of Defense (DoD... American National Standards Institute X12, as the valid standard to use for computer-generated forms. FAR... optional forms on their computers. In addition to clarifying that FIPS 161 is no longer in use, public...

  3. A new generation in computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kahn, R.E.

    1983-11-01

    The fifth generation of computers is described. The three disciplines involved in bringing such a new generation to reality are microelectronics; artificial intelligence; and computer systems and architecture. Applications in industry, offices, aerospace, education, health care, and retailing are outlined. An analysis is given of research efforts in the US, Japan, U.K., and Europe. Fifth generation programming languages are detailed.

  4. Software Surface Modeling and Grid Generation Steering Committee

    NASA Technical Reports Server (NTRS)

    Smith, Robert E. (Editor)

    1992-01-01

    It is a NASA objective to promote improvements in the capability and efficiency of computational fluid dynamics. Grid generation, the creation of a discrete representation of the solution domain, is an essential part of computational fluid dynamics. However, grid generation about complex boundaries requires sophisticated surface-model descriptions of the boundaries. The surface modeling and the associated computation of surface grids consume an extremely large percentage of the total time required for volume grid generation. Efficient and user friendly software systems for surface modeling and grid generation are critical for computational fluid dynamics to reach its potential. The papers presented here represent the state-of-the-art in software systems for surface modeling and grid generation. Several papers describe improved techniques for grid generation.

  5. Computer Applications in Teaching and Learning.

    ERIC Educational Resources Information Center

    Halley, Fred S.; And Others

    Some examples of the usage of computers in teaching and learning are examination generation, automatic exam grading, student tracking, problem generation, computational examination generators, program packages, simulation, and programing skills for problem solving. These applications are non-trivial and do fulfill the basic assumptions necessary…

  6. Automated apparatus and method of generating native code for a stitching machine

    NASA Technical Reports Server (NTRS)

    Miller, Jeffrey L. (Inventor)

    2000-01-01

    A computer system automatically generates CNC code for a stitching machine. The computer determines the locations of a present stitching point and a next stitching point. If a constraint is not found between the present stitching point and the next stitching point, the computer generates code for making a stitch at the next stitching point. If a constraint is found, the computer generates code for changing a condition (e.g., direction) of the stitching machine's stitching head.
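
    The decision loop described in the abstract is simple to sketch. The fragment below is a hedged illustration, not the patented implementation: the "constraint" is modeled as an obstacle lying too close to the move, and the emitted G-code-like commands (G01, M50) are invented placeholders.

```python
# Sketch of the described code-generation loop for a stitching machine: if no
# constraint is found between the present and next stitching points, emit a
# stitch command; otherwise emit a command that changes a condition of the
# stitching head. The proximity-based constraint test and the G-code-like
# commands (G01, M50) are invented placeholders, not the patented implementation.

def constrained(present, nxt, obstacles, min_clearance=5.0):
    """Toy constraint: is any obstacle too close to the midpoint of the move?"""
    mx, my = (present[0] + nxt[0]) / 2.0, (present[1] + nxt[1]) / 2.0
    return any((mx - ox) ** 2 + (my - oy) ** 2 < min_clearance ** 2
               for ox, oy in obstacles)

def generate_stitch_code(points, obstacles):
    code = []
    for present, nxt in zip(points, points[1:]):
        if constrained(present, nxt, obstacles):
            # a constraint exists: generate code to change a head condition
            code.append(f"M50 ; change stitching-head direction near X{nxt[0]:.1f} Y{nxt[1]:.1f}")
        else:
            # no constraint: generate code for a stitch at the next point
            code.append(f"G01 X{nxt[0]:.1f} Y{nxt[1]:.1f} ; stitch")
    return code

if __name__ == "__main__":
    path = [(0, 0), (10, 0), (20, 0), (30, 0)]
    print("\n".join(generate_stitch_code(path, obstacles=[(21, 2)])))
```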

  7. Fast generation of computer-generated holograms using wavelet shrinkage.

    PubMed

    Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2017-01-09

    Computer-generated holograms (CGHs) are generated by superimposing complex amplitudes emitted from a number of object points. However, this superposition process remains very time-consuming even when using the latest computers. We propose a fast calculation algorithm for CGHs that uses a wavelet shrinkage method, eliminating small wavelet coefficient values to express approximated complex amplitudes using only a few representative wavelet coefficients.
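
    As a rough sketch of the shrinkage step (not the authors' algorithm), the code below decomposes a complex amplitude field with PyWavelets, keeps only the largest few percent of coefficients, and reconstructs an approximation; the wavelet family, decomposition level, keep-fraction, and test field are assumptions.

```python
# Sketch of wavelet shrinkage applied to a complex amplitude field: decompose,
# zero all but the largest coefficients, and reconstruct an approximation.
# The wavelet family, decomposition level, keep-fraction, and test field are
# illustrative choices, not those of the cited method.
import numpy as np
import pywt

def shrink(field, wavelet="db4", level=3, keep=0.05):
    """Keep only the largest `keep` fraction of wavelet coefficients."""
    coeffs = pywt.wavedec2(field, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    threshold = np.quantile(np.abs(arr), 1.0 - keep)
    arr[np.abs(arr) < threshold] = 0.0
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs, wavelet)

if __name__ == "__main__":
    x = np.linspace(-1.0, 1.0, 256)
    X, Y = np.meshgrid(x, x)
    amplitude = np.exp(1j * 200.0 * (X ** 2 + Y ** 2))     # chirp-like test field
    # shrink the real and imaginary parts separately, then recombine
    approx = shrink(amplitude.real) + 1j * shrink(amplitude.imag)
    print(np.mean(np.abs(approx - amplitude)))             # approximation error
```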

  8. Manufacturing Magic and Computational Creativity

    PubMed Central

    Williams, Howard; McOwan, Peter W.

    2016-01-01

    This paper describes techniques in computational creativity, blending mathematical modeling and psychological insight, to generate new magic tricks. The details of an explicit computational framework capable of creating new magic tricks are summarized, and evaluated against a range of contemporary theories about what constitutes a creative system. To allow further development of the proposed system we situate this approach to the generation of magic in the wider context of other areas of application in computational creativity in performance arts. We show how approaches in these domains could be incorporated to enhance future magic generation systems, and critically review possible future applications of such magic generating computers. PMID:27375533

  9. Computer-Generated, Three-Dimensional Character Animation: A Report and Analysis.

    ERIC Educational Resources Information Center

    Kingsbury, Douglas Lee

    This master's thesis details the experience gathered in the production "Snoot and Muttly," a short character animation with 3-D computer generated images, and provides an analysis of the computer-generated 3-D character animation system capabilities. Descriptions are provided of the animation environment at the Ohio State University…

  10. Turbofan noise generation. Volume 2: Computer programs

    NASA Technical Reports Server (NTRS)

    Ventres, C. S.; Theobald, M. A.; Mark, W. D.

    1982-01-01

    The use of a package of computer programs developed to calculate the in-duct acoustic modes excited by a fan/stator stage operating at subsonic tip speed is described. The following three noise source mechanisms are included: (1) sound generated by the rotor blades interacting with turbulence ingested into, or generated within, the inlet duct; (2) sound generated by the stator vanes interacting with the turbulent wakes of the rotor blades; and (3) sound generated by the stator vanes interacting with the velocity deficits in the mean wakes of the rotor blades. The computations for the three different noise mechanisms are coded as three separate computer program packages. The computer codes are described by means of block diagrams, tables of data and variables, and example program executions; FORTRAN listings are included.

  11. Turbofan noise generation. Volume 2: Computer programs

    NASA Astrophysics Data System (ADS)

    Ventres, C. S.; Theobald, M. A.; Mark, W. D.

    1982-07-01

    The use of a package of computer programs developed to calculate the in-duct acoustic modes excited by a fan/stator stage operating at subsonic tip speed is described. The following three noise source mechanisms are included: (1) sound generated by the rotor blades interacting with turbulence ingested into, or generated within, the inlet duct; (2) sound generated by the stator vanes interacting with the turbulent wakes of the rotor blades; and (3) sound generated by the stator vanes interacting with the velocity deficits in the mean wakes of the rotor blades. The computations for the three different noise mechanisms are coded as three separate computer program packages. The computer codes are described by means of block diagrams, tables of data and variables, and example program executions; FORTRAN listings are included.

  12. Integrated circuit test-port architecture and method and apparatus of test-port generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teifel, John

    A method and apparatus are provided for generating RTL code for a test-port interface of an integrated circuit. In an embodiment, a test-port table is provided as input data. A computer automatically parses the test-port table into data structures and analyzes it to determine input, output, local, and output-enable port names. The computer generates address-detect and test-enable logic constructed from combinational functions. The computer generates one-hot multiplexer logic for at least some of the output ports. The one-hot multiplexer logic for each port is generated so as to enable the port to toggle between data signals and test signals. The computer then completes the generation of the RTL code.
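
    The abstract does not reproduce the generated RTL, so the sketch below only illustrates the kind of one-hot output multiplexer such a generator might emit, with Python producing Verilog-flavored text; the port names, select signal, and single-bit signal widths are invented for illustration.

```python
# Sketch of emitting Verilog-flavored RTL text for a one-hot output multiplexer
# that lets a port toggle between its functional (data) signal and a test signal.
# Port names, the select signal, and the single-bit widths are invented here.

def one_hot_mux_rtl(port, data_sig, test_sig, sel="tp_sel"):
    """Two-way one-hot mux: sel[0] routes functional data, sel[1] routes test data."""
    return "\n".join([
        "// " + port + ": one-hot select between functional data and test-port data",
        "assign " + port + " = (" + sel + "[0] & " + data_sig + ") |",
        "       " + " " * len(port) + "   (" + sel + "[1] & " + test_sig + ");",
    ])

if __name__ == "__main__":
    for port, data, test in [("uart_tx", "uart_tx_func", "uart_tx_test"),
                             ("gpio_out", "gpio_out_func", "gpio_out_test")]:
        print(one_hot_mux_rtl(port, data, test), end="\n\n")
```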

  13. Computer image generation: Reconfigurability as a strategy in high fidelity space applications

    NASA Technical Reports Server (NTRS)

    Bartholomew, Michael J.

    1989-01-01

    The demand for realistic, high fidelity, computer image generation systems to support space simulation is well established. However, as the number and diversity of space applications increase, the complexity and cost of computer image generation systems also increase. One strategy used to harmonize cost with varied requirements is the establishment of a reconfigurable image generation system that can be adapted rapidly and easily to meet new and changing requirements. The reconfigurability strategy through the life cycle of system conception, specification, design, implementation, operation, and support for high fidelity computer image generation systems is discussed. The discussion is limited to those issues directly associated with reconfigurability and adaptability of a specialized scene generation system in a multi-faceted space applications environment. Examples and insights gained through the recent development and installation of the Improved Multi-function Scene Generation System at the Johnson Space Center Systems Engineering Simulator are reviewed and compared with current simulator industry practices. The results are clear; the strategy of reconfigurability applied to space simulation requirements provides a viable path to supporting diverse applications with an adaptable computer image generation system.

  14. Computer Series, 13: Bits and Pieces, 11.

    ERIC Educational Resources Information Center

    Moore, John W., Ed.

    1982-01-01

    Describes computer programs (with ordering information) on various topics including, among others, modeling of thermodynamics and economics of solar energy, radioactive decay simulation, stoichiometry drill/tutorial (in Spanish), computer-generated safety quiz, medical chemistry computer game, medical biochemistry question bank, generation of…

  15. Applications of computer-graphics animation for motion-perception research

    NASA Technical Reports Server (NTRS)

    Proffitt, D. R.; Kaiser, M. K.

    1986-01-01

    The advantages and limitations of using computer-animated stimuli in studying motion perception are presented and discussed. Most current programs of motion perception research could not be pursued without the use of computer graphics animation. Computer-generated displays afford latitudes of freedom and control that are almost impossible to attain through conventional methods. There are, however, limitations to this presentational medium. At present, computer-generated displays present simplified approximations of the dynamics in natural events. Very little is known about how the differences between natural events and computer simulations influence perceptual processing. In practice, it is assumed that the differences are irrelevant to the questions under study and that findings with computer-generated stimuli will generalize to natural events.

  16. Performing process migration with allreduce operations

    DOEpatents

    Archer, Charles Jens; Peters, Amanda; Wallenfelt, Brian Paul

    2010-12-14

    Compute nodes perform allreduce operations that swap processes at nodes. A first allreduce operation generates a first result and uses a first process from a first compute node, a second process from a second compute node, and zeros from other compute nodes. The first compute node replaces the first process with the first result. A second allreduce operation generates a second result and uses the first result from the first compute node, the second process from the second compute node, and zeros from others. The second compute node replaces the second process with the second result, which is the first process. A third allreduce operation generates a third result and uses the first result from first compute node, the second result from the second compute node, and zeros from others. The first compute node replaces the first result with the third result, which is the second process.
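
    The abstract does not name the reduction operator, but a bitwise XOR makes the three steps behave exactly as described (the second result equals the first process and the third equals the second), so the sketch below uses an XOR allreduce over small integer payloads that stand in for process images. This is one illustrative reading of the patent, not its implementation; run with at least two ranks, e.g. mpirun -n 4 python swap.py.

```python
# Sketch of swapping two "processes" (here, integer payloads standing in for
# process images) between ranks 0 and 1 with three allreduce operations, all
# other ranks contributing zeros. A bitwise-XOR reduction is assumed: the three
# results are then P0^P1, P0, and P1, matching the swap described above.
# Run with at least two ranks, e.g.: mpirun -n 4 python swap.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

payload = np.array([{0: 0xA5, 1: 0x3C}.get(rank, 0)], dtype=np.uint32)
zeros = np.zeros(1, dtype=np.uint32)

def xor_allreduce(contribution):
    result = np.zeros(1, dtype=np.uint32)
    comm.Allreduce(contribution, result, op=MPI.BXOR)
    return result

# Step 1: ranks 0 and 1 contribute their payloads; rank 0 keeps the result (P0 ^ P1).
r1 = xor_allreduce(payload if rank in (0, 1) else zeros)
if rank == 0:
    payload = r1.copy()

# Step 2: rank 0 contributes the first result, rank 1 its original payload;
# rank 1 keeps the result, which equals P0.
r2 = xor_allreduce(payload if rank in (0, 1) else zeros)
if rank == 1:
    payload = r2.copy()

# Step 3: rank 0 contributes the first result, rank 1 the second result;
# rank 0 keeps the result, which equals P1. The two payloads are now swapped.
r3 = xor_allreduce(payload if rank in (0, 1) else zeros)
if rank == 0:
    payload = r3.copy()

print(f"rank {rank} holds 0x{int(payload[0]):02X}")
```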

  17. Colour computer-generated holography for point clouds utilizing the Phong illumination model.

    PubMed

    Symeonidou, Athanasia; Blinder, David; Schelkens, Peter

    2018-04-16

    A technique integrating the bidirectional reflectance distribution function (BRDF) is proposed to generate realistic high-quality colour computer-generated holograms (CGHs). We build on prior work, namely a fast computer-generated holography method for point clouds that handles occlusions. We extend the method by integrating the Phong illumination model so that the properties of the objects' surfaces are taken into account to achieve natural light phenomena such as reflections and shadows. Our experiments show that rendering holograms with the proposed algorithm provides realistic looking objects without any noteworthy increase to the computational cost.
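
    The Phong model itself is classical; for readers unfamiliar with it, the sketch below evaluates the ambient, diffuse, and specular terms for a single surface point, independent of the CGH pipeline. The material coefficients and vectors are arbitrary illustrative values.

```python
# The classical Phong illumination model: I = ka*Ia + kd*(L.N)*Id + ks*(R.V)^n*Is,
# where R is the reflection of the light direction L about the surface normal N.
# Material coefficients and vectors below are arbitrary illustrative values.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong(normal, to_light, to_viewer,
          ka=0.1, kd=0.6, ks=0.3, shininess=32, ambient=1.0, light=1.0):
    n, l, v = (normalize(x) for x in (normal, to_light, to_viewer))
    diff = max(np.dot(n, l), 0.0)                    # diffuse (Lambertian) term
    r = 2.0 * np.dot(n, l) * n - l                   # reflection of L about N
    spec = max(np.dot(normalize(r), v), 0.0) ** shininess if diff > 0.0 else 0.0
    return ka * ambient + kd * diff * light + ks * spec * light

if __name__ == "__main__":
    print(phong(normal=np.array([0.0, 0.0, 1.0]),
                to_light=np.array([1.0, 1.0, 1.0]),
                to_viewer=np.array([0.0, 0.0, 1.0])))
```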

  18. Guidelines in preparing computer-generated plots for NASA technical reports with the LaRC graphics output system

    NASA Technical Reports Server (NTRS)

    Taylor, N. L.

    1983-01-01

    In response to a need for improved computer-generated plots that are acceptable to the Langley publication process, the LaRC Graphics Output System has been modified to encompass the publication requirements, and a guideline has been established. This guideline deals only with the publication requirements of computer-generated plots. This report explains the capability that authors of NASA technical reports can use to obtain publication-quality computer-generated plots for the Langley publication process. The rules applied in developing this guideline and examples illustrating the rules are included.

  19. Ray tracing on the MPP

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    1987-01-01

    Generating graphics to faithfully represent information can be a computationally intensive task. A way of using the Massively Parallel Processor to generate images by ray tracing is presented. This technique uses sort computation, a method of performing generalized routing interspersed with computation on a single-instruction-multiple-data (SIMD) computer.

  20. 78 FR 59927 - Next Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-30

    ... Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational, and Systems Biology..., Computational, and Systems Biology [External Review Draft]'' (EPA/600/R-13/214A). EPA is also announcing that... Advances in Molecular, Computational, and Systems Biology [External Review Draft]'' is available primarily...

  1. Hardware based redundant multi-threading inside a GPU for improved reliability

    DOEpatents

    Sridharan, Vilas; Gurumurthi, Sudhanva

    2015-05-05

    A system and method for verifying computation output using computer hardware are provided. Instances of computation are generated and processed on hardware-based processors. As instances of computation are processed, each instance of computation receives a load accessible to other instances of computation. Instances of output are generated by processing the instances of computation. The instances of output are verified against each other in a hardware based processor to ensure accuracy of the output.
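
    A minimal software sketch of the redundancy-and-compare idea follows; the patent describes a hardware mechanism inside a GPU, so the thread pool and toy workload here are only stand-ins for the verification step.

```python
# Sketch of redundant execution with output comparison: the same computation is
# run on the same input load in independent workers and the outputs are verified
# against each other. Purely illustrative; the patent describes a hardware
# mechanism inside a GPU, not a Python thread pool.
from concurrent.futures import ThreadPoolExecutor

def computation(load):
    """Stand-in workload; every redundant instance receives the same load."""
    return sum(x * x for x in load)

def verified_run(load, instances=2):
    with ThreadPoolExecutor(max_workers=instances) as pool:
        futures = [pool.submit(computation, load) for _ in range(instances)]
        outputs = [f.result() for f in futures]
    if len(set(outputs)) != 1:              # verify instances against each other
        raise RuntimeError(f"redundant outputs disagree: {outputs}")
    return outputs[0]

if __name__ == "__main__":
    print(verified_run(range(1_000_000)))
```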

  2. Study of the modifications needed for efficient operation of NASTRAN on the Control Data Corporation STAR-100 computer

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The NASA structural analysis (NASTRAN) computer program is operational on three series of third generation computers. The problems and difficulties involved in adapting NASTRAN to a fourth generation computer, namely the Control Data STAR-100, are discussed. The salient features which distinguish the Control Data STAR-100 from third generation computers are hardware vector processing capability and virtual memory. A feasible method is presented for transferring NASTRAN to the Control Data STAR-100 system while retaining much of the machine-independent code. Basic matrix operations are noted as candidates for vector-processing optimization.

  3. Application of computer generated color graphic techniques to the processing and display of three dimensional fluid dynamic data

    NASA Technical Reports Server (NTRS)

    Anderson, B. H.; Putt, C. W.; Giamati, C. C.

    1981-01-01

    Color coding techniques used in the processing of remote sensing imagery were adapted and applied to the fluid dynamics problems associated with turbofan mixer nozzles. The computer generated color graphics were found to be useful in reconstructing the measured flow field from low resolution experimental data to give more physical meaning to this information and in scanning and interpreting the large volume of computer generated data from the three dimensional viscous computer code used in the analysis.

  4. New Generation General Purpose Computer (GPC) compact IBM unit

    NASA Technical Reports Server (NTRS)

    1991-01-01

    New Generation General Purpose Computer (GPC) compact IBM unit replaces a two-unit earlier generation computer. The new IBM unit is documented in table top views alone (S91-26867, S91-26868), with the onboard equipment it supports including the flight deck CRT screen and keypad (S91-26866), and next to the two earlier versions it replaces (S91-26869).

  5. Recursive computer architecture for VLSI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Treleaven, P.C.; Hopkins, R.P.

    1982-01-01

    A general-purpose computer architecture based on the concept of recursion and suitable for VLSI computer systems built from replicated (Lego-like) computing elements is presented. The recursive computer architecture is defined by presenting a program organisation, a machine organisation, and an experimental machine implementation oriented to VLSI. The experimental implementation is restricted to simple, identical microcomputers, each containing a memory, a processor, and a communications capability. This future generation of Lego-like computer systems is termed fifth generation computers by the Japanese. 30 references.

  6. The Fifth Generation and Training Strategies

    ERIC Educational Resources Information Center

    Ennals, Richard

    2008-01-01

    Fifth Generation computers should not simply be regarded as an enhancement of current computer technology: the intention is that a fresh approach should be taken to computer science and to the use of computers. The argument of this paper is that the fresh approach must encompass education and training, with implications that extend far beyond the…

  7. Operational procedure for computer program for design point characteristics of a compressed-air generator with through-flow combustor for V/STOL applications

    NASA Technical Reports Server (NTRS)

    Krebs, R. P.

    1971-01-01

    The computer program described in this report calculates the design-point characteristics of a compressed-air generator for use in V/STOL applications such as systems with a tip-turbine-driven lift fan. The program computes the dimensions and mass, as well as the thermodynamic performance of a model air generator configuration which involves a straight through-flow combustor. Physical and thermodynamic characteristics of the air generator components are also given. The program was written in FORTRAN IV language. Provision has been made so that the program will accept input values in either SI units or U.S. customary units. Each air generator design-point calculation requires about 1.5 seconds of 7094 computer time for execution.

  8. Explorations in Space and Time: Computer-Generated Astronomy Films

    ERIC Educational Resources Information Center

    Meeks, M. L.

    1973-01-01

    Discusses the use of the computer animation technique to travel through space and time and watch models of astronomical systems in motion. Included is a list of eight computer-generated demonstration films entitled "Explorations in Space and Time." (CC)

  9. Computer-generated vs. physician-documented history of present illness (HPI): results of a blinded comparison.

    PubMed

    Almario, Christopher V; Chey, William; Kaung, Aung; Whitman, Cynthia; Fuller, Garth; Reid, Mark; Nguyen, Ken; Bolus, Roger; Dennis, Buddy; Encarnacion, Rey; Martinez, Bibiana; Talley, Jennifer; Modi, Rushaba; Agarwal, Nikhil; Lee, Aaron; Kubomoto, Scott; Sharma, Gobind; Bolus, Sally; Chang, Lin; Spiegel, Brennan M R

    2015-01-01

    Healthcare delivery now mandates shorter visits with higher documentation requirements, undermining the patient-provider interaction. To improve clinic visit efficiency, we developed a patient-provider portal that systematically collects patient symptoms using a computer algorithm called Automated Evaluation of Gastrointestinal Symptoms (AEGIS). AEGIS also automatically "translates" the patient report into a full narrative history of present illness (HPI). We aimed to compare the quality of computer-generated vs. physician-documented HPIs. We performed a cross-sectional study with a paired sample design among individuals visiting outpatient adult gastrointestinal (GI) clinics for evaluation of active GI symptoms. Participants first underwent usual care and then subsequently completed AEGIS. Each individual thereby had both a physician-documented and a computer-generated HPI. Forty-eight blinded physicians assessed HPI quality across six domains using 5-point scales: (i) overall impression, (ii) thoroughness, (iii) usefulness, (iv) organization, (v) succinctness, and (vi) comprehensibility. We compared HPI scores within patient using a repeated measures model. Seventy-five patients had both computer-generated and physician-documented HPIs. The mean overall impression score for computer-generated HPIs was higher than physician HPIs (3.68 vs. 2.80; P<0.001), even after adjusting for physician and visit type, location, mode of transcription, and demographics. Computer-generated HPIs were also judged more complete (3.70 vs. 2.73; P<0.001), more useful (3.82 vs. 3.04; P<0.001), better organized (3.66 vs. 2.80; P<0.001), more succinct (3.55 vs. 3.17; P<0.001), and more comprehensible (3.66 vs. 2.97; P<0.001). Computer-generated HPIs were of higher overall quality, better organized, and more succinct, comprehensible, complete, and useful compared with HPIs written by physicians during usual care in GI clinics.

  10. The origins of informatics.

    PubMed Central

    Collen, M F

    1994-01-01

    This article summarizes the origins of informatics, which is based on the science, engineering, and technology of computer hardware, software, and communications. In just four decades, from the 1950s to the 1990s, computer technology has progressed from slow, first-generation vacuum tubes, through the invention of the transistor and its incorporation into microprocessor chips, and ultimately, to fast, fourth-generation very-large-scale-integrated silicon chips. Programming has undergone a parallel transformation, from cumbersome, first-generation, machine languages to efficient, fourth-generation application-oriented languages. Communication has evolved from simple copper wires to complex fiberoptic cables in computer-linked networks. The digital computer has profound implications for the development and practice of clinical medicine. PMID:7719803

  11. College and the Digital Generation: Assessing and Training Students for the Technological Demands of College by Exploring Relationships between Computer Self-Efficacy and Computer Proficiency

    ERIC Educational Resources Information Center

    Morris, Kathleen M.

    2010-01-01

    Today's college students are often labeled the "Net Generation" and assumed to be computer savvy and technological minded. Exposure to and use of technologies can increase self-efficacy regarding ability to complete desired computer tasks, but students arrive on campuses unable to pass computer proficiency exams. This is concerning because some…

  12. The Effects of Embedded Generative Learning Strategies and Collaboration on Knowledge Acquisition in a Cognitive Flexibility-Based Computer Learning Environment

    DTIC Science & Technology

    1998-08-07

    cognitive flexibility theory and generative learning theory, which focus primarily on the individual student's cognitive development... collaborative... develop "Handling Transfusion Hazards," a computer program based upon cognitive flexibility theory principles. The Program: Handling Transfusion Hazards... computer program was developed according to cognitive flexibility theory principles. A generative version was then developed by embedding...

  13. Optical testing of aspheres based on photochromic computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Pariani, Giorgio; Bianco, Andrea; Bertarelli, Chiara; Spanó, Paolo; Molinari, Emilio

    2010-07-01

    Aspherical optics are widely used in modern optical telescopes and instrumentation because of their ability to reduce aberrations with a simple optical system. Testing their optical quality through null interferometry is not trivial, as reference optics are not available. Computer-Generated Holograms (CGHs) are efficient devices that make it possible to generate a well-defined optical wavefront. We developed rewritable Computer-Generated Holograms, based on photochromic layers, for the interferometric testing of aspheres. These photochromic holograms are cost-effective, and the method of production does not need any post-exposure process.

  14. Automatic Generation of Overlays and Offset Values Based on Visiting Vehicle Telemetry and RWS Visuals

    NASA Technical Reports Server (NTRS)

    Dunne, Matthew J.

    2011-01-01

    The development of computer software as a tool to generate visual displays has led to an overall expansion of automated computer generated images in the aerospace industry. These visual overlays are generated by combining raw data with pre-existing data on the object or objects being analyzed on the screen. The National Aeronautics and Space Administration (NASA) uses this computer software to generate on-screen overlays when a Visiting Vehicle (VV) is berthing with the International Space Station (ISS). In order for Mission Control Center personnel to be a contributing factor in the VV berthing process, computer software similar to that on the ISS must be readily available on the ground to be used for analysis. In addition, this software must perform engineering calculations and save data for further analysis.

  15. Evaluating the impact of computer-generated rounding reports on physician workflow in the nursing home: a feasibility time-motion study.

    PubMed

    Thorpe-Jamison, Patrice T; Culley, Colleen M; Perera, Subashan; Handler, Steven M

    2013-05-01

    To determine the feasibility and impact of a computer-generated rounding report on physician rounding time and perceived barriers to providing clinical care in the nursing home (NH) setting. Three NHs located in Pittsburgh, PA. Ten attending NH physicians. Time-motion method to record the time taken to gather data (pre-rounding), to evaluate patients (rounding), and document their findings/develop an assessment and plan (post-rounding). Additionally, surveys were used to determine the physicians' perception of barriers to providing optimal clinical care, as well as physician satisfaction before and after the use of a computer-generated rounding report. Ten physicians were observed during half-day sessions both before and 4 weeks after they were introduced to a computer-generated rounding report. A total of 69 distinct patients were evaluated during the 20 physician observation sessions. Each physician evaluated, on average, four patients before implementation and three patients after implementation. The observations showed a significant increase (P = .03) in the pre-rounding time, and no significant difference in the rounding (P = .09) or post-rounding times (P = .29). Physicians reported that information was more accessible (P = .03) following the implementation of the computer-generated rounding report. Most (80%) physicians stated that they would prefer to use the computer-generated rounding report rather than the paper-based process. The present study provides preliminary data suggesting that the use of a computer-generated rounding report can decrease some perceived barriers to providing optimal care in the NH. Although the rounding report did not improve rounding time efficiency, most NH physicians would prefer to use the computer-generated report rather than the current paper-based process. Improving the accuracy and harmonization of medication information with the electronic medication administration record and rounding reports, as well as improving facility network speeds might improve the effectiveness of this technology. Copyright © 2013 American Medical Directors Association, Inc. Published by Elsevier Inc. All rights reserved.

  16. Optical Interconnections for VLSI Computational Systems Using Computer-Generated Holography.

    NASA Astrophysics Data System (ADS)

    Feldman, Michael Robert

    Optical interconnects for VLSI computational systems using computer generated holograms are evaluated in theory and experiment. It is shown that by replacing particular electronic connections with free-space optical communication paths, connection of devices on a single chip or wafer and between chips or modules can be improved. Optical and electrical interconnects are compared in terms of power dissipation, communication bandwidth, and connection density. Conditions are determined for which optical interconnects are advantageous. Based on this analysis, it is shown that by applying computer generated holographic optical interconnects to wafer scale fine grain parallel processing systems, dramatic increases in system performance can be expected. Some new interconnection networks, designed to take full advantage of optical interconnect technology, have been developed. Experimental Computer Generated Holograms (CGH's) have been designed, fabricated and subsequently tested in prototype optical interconnected computational systems. Several new CGH encoding methods have been developed to provide efficient high performance CGH's. One CGH was used to decrease the access time of a 1 kilobit CMOS RAM chip. Another was produced to implement the inter-processor communication paths in a shared memory SIMD parallel processor array.

  17. Design and evaluation of brushless electrical generators

    NASA Technical Reports Server (NTRS)

    Collins, F. A.; Ellis, J. N.

    1970-01-01

    Ten design manuals are assembled and nine computer programs developed for the evaluation of proposed designs of brushless rotating electrical generators. The design manual package provides all information required for generator design, and the computer programs permit calculation of the performance of specific designs, including the effects of materials.

  18. Efficiency of including first-generation information in second-generation ranking and selection: results of computer simulation.

    Treesearch

    T.Z. Ye; K.J.S. Jayawickrama; G.R. Johnson

    2006-01-01

    Using computer simulation, we evaluated the impact of using first-generation information to increase selection efficiency in a second-generation breeding program. Selection efficiency was compared in terms of increase in rank correlation between estimated and true breeding values (i.e., ranking accuracy), reduction in coefficient of variation of correlation...

  19. Grid Generation for Multidisciplinary Design and Optimization of an Aerospace Vehicle: Issues and Challenges

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    2000-01-01

    The purpose of this paper is to discuss grid generation issues and to challenge the grid generation community to develop tools suitable for automated multidisciplinary analysis and design optimization of aerospace vehicles. Special attention is given to the grid generation issues of computational fluid dynamics and computational structural mechanics disciplines.

  20. Prediction of sound radiated from different practical jet engine inlets

    NASA Technical Reports Server (NTRS)

    Zinn, B. T.; Meyer, W. L.

    1980-01-01

    Existing computer codes for calculating the far field radiation patterns surrounding various practical jet engine inlet configurations under different excitation conditions were upgraded. The computer codes were refined and expanded so that they are now more efficient computationally by a factor of about three and they are now capable of producing accurate results up to nondimensional wave numbers of twenty. Computer programs were also developed to help generate accurate geometrical representations of the inlets to be investigated. This data is required as input for the computer programs which calculate the sound fields. This new geometry generating computer program considerably reduces the time required to generate the input data which was one of the most time consuming steps in the process. The results of sample runs using the NASA-Lewis QCSEE inlet are presented and comparison of run times and accuracy are made between the old and upgraded computer codes. The overall accuracy of the computations is determined by comparison of the results of the computations with simple source solutions.

  1. A Combinatorial Geometry Computer Description of the MEP-021A Generator Set

    DTIC Science & Technology

    1979-02-01

    Keywords: Generator Computer Description; Gasoline Generator; GIFT; MEP-021A. This... GIFT code is also stored on magnetic tape for future vulnerability analysis. ...the Geometric Information for Targets (GIFT) computer code. The GIFT code traces shotlines through a COM-GEOM description from any specified attack...

  2. Computer-Generated Movies for Mission Planning

    NASA Technical Reports Server (NTRS)

    Roberts, P. H., Jr.; vanDillen, S. L.

    1973-01-01

    Computer-generated movies help the viewer to understand mission dynamics and get quantitative details. Sample movie frames demonstrate the uses and effectiveness of movies in mission planning. Tools needed for movie-making include computer programs to generate images on film and film processing to give the desired result. Planning scenes to make an effective product requires some thought and experience. Viewpoints and timing are particularly important. Lessons learned so far and problems still encountered are discussed.

  3. Current Grid Generation Strategies and Future Requirements in Hypersonic Vehicle Design, Analysis and Testing

    NASA Technical Reports Server (NTRS)

    Papadopoulos, Periklis; Venkatapathy, Ethiraj; Prabhu, Dinesh; Loomis, Mark P.; Olynick, Dave; Arnold, James O. (Technical Monitor)

    1998-01-01

    Recent advances in computational power enable computational fluid dynamic modeling of increasingly complex configurations. A review of grid generation methodologies implemented in support of the computational work performed for the X-38 and X-33 is presented. In strategizing topological constructs and blocking structures, the factors considered are the geometric configuration, optimal grid size, numerical algorithms, accuracy requirements, the physics of the problem at hand, computational expense, and the available computer hardware. Also addressed are grid refinement strategies, the effects of wall spacing, and convergence. The significance of the grid is demonstrated through a comparison of computational and experimental results for the aeroheating environment experienced by the X-38 vehicle. Special topics in grid generation strategies are also addressed, including the modeling of control surface deflections and material mapping.

  4. Computer-Generated Phase Diagrams for Binary Mixtures.

    ERIC Educational Resources Information Center

    Jolls, Kenneth R.; And Others

    1983-01-01

    Computer programs that generate projections of thermodynamic phase surfaces through computer graphics were used to produce diagrams representing properties of water and steam and the pressure-volume-temperature behavior of most of the common equations of state. The program, program options emphasizing thermodynamic features of interest, and…

  5. The Next Generation of Personal Computers.

    ERIC Educational Resources Information Center

    Crecine, John P.

    1986-01-01

    Discusses factors converging to create high-capacity, low-cost nature of next generation of microcomputers: a coherent vision of what graphics workstation and future computing environment should be like; hardware developments leading to greater storage capacity at lower costs; and development of software and expertise to exploit computing power…

  6. NEMAR plotting computer program

    NASA Technical Reports Server (NTRS)

    Myler, T. R.

    1981-01-01

    A FORTRAN coded computer program which generates CalComp plots of trajectory parameters is examined. The trajectory parameters are calculated and placed on a data file by the Near Earth Mission Analysis Routine computer program. The plot program accesses the data file and generates the plots as defined by inputs to the plot program. Program theory, user instructions, output definitions, subroutine descriptions and detailed FORTRAN coding information are included. Although this plot program utilizes a random access data file, a data file of the same type and formatted in 102 numbers per record could be generated by any computer program and used by this plot program.

  7. Integrated geometry and grid generation system for complex configurations

    NASA Technical Reports Server (NTRS)

    Akdag, Vedat; Wulf, Armin

    1992-01-01

    A grid generation system was developed that enables grid generation for complex configurations. The system called ICEM/CFD is described and its role in computational fluid dynamics (CFD) applications is presented. The capabilities of the system include full computer aided design (CAD), grid generation on the actual CAD geometry definition using robust surface projection algorithms, interfacing easily with known CAD packages through common file formats for geometry transfer, grid quality evaluation of the volume grid, coupling boundary condition set-up for block faces with grid topology generation, multi-block grid generation with or without point continuity and block to block interface requirement, and generating grid files directly compatible with known flow solvers. The interactive and integrated approach to the problem of computational grid generation not only substantially reduces manpower time but also increases the flexibility of later grid modifications and enhancements which is required in an environment where CFD is integrated into a product design cycle.

  8. Computer methods for sampling from the gamma distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, M.E.; Tadikamalla, P.R.

    1978-01-01

    Considerable attention has recently been directed at developing ever faster algorithms for generating gamma random variates on digital computers. This paper surveys the current state of the art including the leading algorithms of Ahrens and Dieter, Atkinson, Cheng, Fishman, Marsaglia, Tadikamalla, and Wallace. General random variate generation techniques are explained with reference to these gamma algorithms. Computer simulation experiments on IBM and CDC computers are reported.
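
    For a concrete instance of the acceptance-rejection approach these algorithms share, the sketch below implements the later Marsaglia-Tsang squeeze method for shape parameters of at least one; it is not one of the algorithms surveyed in the paper, only a representative of the family.

```python
# Acceptance-rejection sampling of gamma(shape a >= 1, scale 1) variates using
# the Marsaglia-Tsang squeeze method. This is a later algorithm than those
# surveyed in the paper, shown only to illustrate the rejection technique.
import math
import random

def gamma_variate(a, rng=random):
    if a < 1.0:
        raise ValueError("this sketch handles shape a >= 1 only")
    d = a - 1.0 / 3.0
    c = 1.0 / math.sqrt(9.0 * d)
    while True:
        x = rng.gauss(0.0, 1.0)
        v = (1.0 + c * x) ** 3
        if v <= 0.0:
            continue                          # candidate outside the support
        u = rng.random()
        if u < 1.0 - 0.0331 * x ** 4:
            return d * v                      # fast acceptance ("squeeze") test
        if math.log(u) < 0.5 * x * x + d * (1.0 - v + math.log(v)):
            return d * v                      # full acceptance test

if __name__ == "__main__":
    samples = [gamma_variate(3.0) for _ in range(100_000)]
    print(sum(samples) / len(samples))        # sample mean should be near the shape, 3.0
```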

  9. French Plans for Fifth Generation Computer Systems.

    DTIC Science & Technology

    1984-12-07

    ...centrally managed project in France that covers all the facets of the... French industry in electronics, computers, software, and services and to make the... of Japan's Fifth Generation Project, the... Centre National de Recherche Scientifique (CNRS) Cooperative Research... French scientific and industrial... The National Projects... The French Ministry of Research and... systems, man-computer interaction, novel computer structures, knowledge-based computer systems

  10. Computer-Based Arithmetic Test Generation

    ERIC Educational Resources Information Center

    Trocchi, Robert F.

    1973-01-01

    The computer can be a welcome partner in the instructional process, but only if there is man-machine interaction. Man should not compromise system design because of available hardware; the computer must fit the system design for the result to represent an acceptable solution to instructional technology. The Arithmetic Test Generator system fits…

  11. HOMAR: A computer code for generating homotopic grids using algebraic relations: User's manual

    NASA Technical Reports Server (NTRS)

    Moitra, Anutosh

    1989-01-01

    A computer code for fast automatic generation of quasi-three-dimensional grid systems for aerospace configurations is described. The code employs a homotopic method to algebraically generate two-dimensional grids in cross-sectional planes, which are stacked to produce a three-dimensional grid system. Implementation of the algebraic equivalents of the homotopic relations for generating body geometries and grids is explained. Procedures for controlling grid orthogonality and distortion are described. Test cases with description and specification of inputs are presented in detail. The FORTRAN computer program and notes on implementation and use are included.
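
    The homotopic idea can be sketched as a blend between an inner body curve and an outer boundary curve. The minimal NumPy example below uses a straight-line homotopy and invented example curves; it is only loosely analogous to the algebraic relations implemented in HOMAR.

```python
import numpy as np

def homotopic_grid(inner, outer, n_u=65, n_t=33):
    """Blend an inner and an outer closed curve into a 2-D grid.

    inner, outer: callables mapping a parameter u in [0, 1] to (x, y) arrays.
    Returns coordinate arrays X, Y of shape (n_t, n_u).
    """
    u = np.linspace(0.0, 1.0, n_u)
    t = np.linspace(0.0, 1.0, n_t)[:, None]   # blending parameter, one per grid layer
    xi, yi = inner(u)
    xo, yo = outer(u)
    X = (1.0 - t) * xi + t * xo               # straight-line homotopy in x
    Y = (1.0 - t) * yi + t * yo               # and in y
    return X, Y

# Example: ellipse-like body cross-section inside a circular outer boundary.
body  = lambda u: (0.5 * np.cos(2 * np.pi * u), 0.2 * np.sin(2 * np.pi * u))
outer = lambda u: (2.0 * np.cos(2 * np.pi * u), 2.0 * np.sin(2 * np.pi * u))
X, Y = homotopic_grid(body, outer)
```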

  12. Shaded-Color Picture Generation of Computer-Defined Arbitrary Shapes

    NASA Technical Reports Server (NTRS)

    Cozzolongo, J. V.; Hermstad, D. L.; Mccoy, D. S.; Clark, J.

    1986-01-01

    SHADE computer program generates realistic color-shaded pictures from computer-defined arbitrary shapes. Objects defined for computer representation displayed as smooth, color-shaded surfaces, including varying degrees of transparency. Output also used for presentation of computational results. By performing color mapping, SHADE colors model surface to display analysis results such as pressures, stresses, and temperatures. NASA has used SHADE extensively in design and analysis of high-performance aircraft. Industry should find applications for SHADE in computer-aided design and computer-aided manufacturing. SHADE written in VAX FORTRAN and MACRO Assembler for either interactive or batch execution.

  13. Next Generation Distributed Computing for Cancer Research

    PubMed Central

    Agarwal, Pankaj; Owzar, Kouros

    2014-01-01

    Advances in next generation sequencing (NGS) and mass spectrometry (MS) technologies have provided many new opportunities and angles for extending the scope of translational cancer research while creating tremendous challenges in data management and analysis. The resulting informatics challenge is invariably not amenable to the use of traditional computing models. Recent advances in scalable computing and associated infrastructure, particularly distributed computing for Big Data, can provide solutions for addressing these challenges. In this review, the next generation of distributed computing technologies that can address these informatics problems is described from the perspective of three key components of a computational platform, namely computing, data storage and management, and networking. A broad overview of scalable computing is provided to set the context for a detailed description of Hadoop, a technology that is being rapidly adopted for large-scale distributed computing. A proof-of-concept Hadoop cluster, set up for performance benchmarking of NGS read alignment, is described as an example of how to work with Hadoop. Finally, Hadoop is compared with a number of other current technologies for distributed computing. PMID:25983539

  14. Next generation distributed computing for cancer research.

    PubMed

    Agarwal, Pankaj; Owzar, Kouros

    2014-01-01

    Advances in next generation sequencing (NGS) and mass spectrometry (MS) technologies have provided many new opportunities and angles for extending the scope of translational cancer research while creating tremendous challenges in data management and analysis. The resulting informatics challenge is invariably not amenable to the use of traditional computing models. Recent advances in scalable computing and associated infrastructure, particularly distributed computing for Big Data, can provide solutions for addressing these challenges. In this review, the next generation of distributed computing technologies that can address these informatics problems is described from the perspective of three key components of a computational platform, namely computing, data storage and management, and networking. A broad overview of scalable computing is provided to set the context for a detailed description of Hadoop, a technology that is being rapidly adopted for large-scale distributed computing. A proof-of-concept Hadoop cluster, set up for performance benchmarking of NGS read alignment, is described as an example of how to work with Hadoop. Finally, Hadoop is compared with a number of other current technologies for distributed computing.

  15. Computational System For Rapid CFD Analysis In Engineering

    NASA Technical Reports Server (NTRS)

    Barson, Steven L.; Ascoli, Edward P.; Decroix, Michelle E.; Sindir, Munir M.

    1995-01-01

    Computational system comprising modular hardware and software sub-systems developed to accelerate and facilitate use of techniques of computational fluid dynamics (CFD) in engineering environment. Addresses integration of all aspects of CFD analysis process, including definition of hardware surfaces, generation of computational grids, CFD flow solution, and postprocessing. Incorporates interfaces for integration of all hardware and software tools needed to perform complete CFD analysis. Includes tools for efficient definition of flow geometry, generation of computational grids, computation of flows on grids, and postprocessing of flow data. System accepts geometric input from any of three basic sources: computer-aided design (CAD), computer-aided engineering (CAE), or definition by user.

  16. Intelligent supercomputers: the Japanese computer sputnik

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walter, G.

    1983-11-01

    Japan's government-supported fifth-generation computer project has had a pronounced effect on the American computer and information systems industry. The US firms are intensifying their research on and production of intelligent supercomputers, a combination of computer architecture and artificial intelligence software programs. While the present generation of computers is built for the processing of numbers, the new supercomputers will be designed specifically for the solution of symbolic problems and the use of artificial intelligence software. This article discusses new and exciting developments that will increase computer capabilities in the 1990s. 4 references.

  17. Challenges in scaling NLO generators to leadership computers

    NASA Astrophysics Data System (ADS)

    Benjamin, D.; Childers, JT; Hoeche, S.; LeCompte, T.; Uram, T.

    2017-10-01

    Exascale computing resources are roughly a decade away and will be capable of 100 times more computing than current supercomputers. In the last year, Energy Frontier experiments crossed a milestone of 100 million core-hours used at the Argonne Leadership Computing Facility, Oak Ridge Leadership Computing Facility, and NERSC. The Fortran-based leading-order parton generator called Alpgen was successfully scaled to millions of threads to achieve this level of usage on Mira. Sherpa and MadGraph are next-to-leading order generators used heavily by LHC experiments for simulation. Integration times for high-multiplicity or rare processes can take a week or more on standard Grid machines, even when using all 16 cores. We will describe our ongoing work to scale the Sherpa generator to thousands of threads on leadership-class machines and reduce run-times to less than a day. This work allows the experiments to leverage large-scale parallel supercomputers for event generation today, freeing tens of millions of grid hours for other work, and paving the way for future applications (simulation, reconstruction) on these and future supercomputers.

  18. Improved object optimal synthetic description, modeling, learning, and discrimination by GEOGINE computational kernel

    NASA Astrophysics Data System (ADS)

    Fiorini, Rodolfo A.; Dacquino, Gianfranco

    2005-03-01

    GEOGINE (GEOmetrical enGINE), a state-of-the-art OMG (Ontological Model Generator) based on n-D Tensor Invariants for n-Dimensional shape/texture optimal synthetic representation, description, and learning, was presented at previous conferences. Improved computational algorithms based on the computational invariant theory of finite groups in Euclidean space, together with a demo application, are presented here. Progressive automatic model generation is discussed. GEOGINE can be used as an efficient computational kernel for fast, reliable application development and delivery, mainly in advanced biomedical engineering, biometrics, intelligent computing, target recognition, content image retrieval, and data mining. Ontology can be regarded as a logical theory accounting for the intended meaning of a formal dictionary, i.e., its ontological commitment to a particular conceptualization of the world object. According to this approach, "n-D Tensor Calculus" can be considered a "Formal Language" to reliably compute optimized "n-Dimensional Tensor Invariants" as specific object "invariant parameter and attribute words" for automated n-Dimensional shape/texture optimal synthetic object description by incremental model generation. The class of those "invariant parameter and attribute words" can be thought of as a specific "Formal Vocabulary" learned from a "Generalized Formal Dictionary" of the "Computational Tensor Invariants" language. Even object chromatic attributes can be effectively and reliably computed from object geometric parameters into robust colour shape invariant characteristics. Any sophisticated application that needs effective, robust capture and parameterization of object geometric/colour invariant attributes for reliable automated object learning and discrimination can benefit from the performance of the GEOGINE progressive automated model generation kernel. Main operational advantages over previous, similar approaches are: 1) Progressive Automated Invariant Model Generation, 2) Invariant Minimal Complete Description Set for computational efficiency, and 3) Arbitrary Model Precision for robust object description and identification.

  19. Pseudo-random number generator for the Sigma 5 computer

    NASA Technical Reports Server (NTRS)

    Carroll, S. N.

    1983-01-01

    A technique is presented for developing a pseudo-random number generator based on the linear congruential form. The two numbers used for the generator are a prime number and a corresponding primitive root, where the prime is the largest prime number that can be accurately represented on a particular computer. The primitive root is selected by applying Marsaglia's lattice test. The technique presented was applied to write a random number program for the Sigma 5 computer. The new program, named S:RANDOM1, is judged to be superior to the older program named S:RANDOM. For applications requiring several independent random number generators, a table is included showing several acceptable primitive roots. The technique and programs described can be applied to any computer having word length different from that of the Sigma 5.
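
    For reference, the linear congruential form described above can be sketched as a multiplicative (Lehmer) generator; the modulus and multiplier below are the well-known Park-Miller constants for 32-bit machines, chosen here for illustration rather than taken from the Sigma 5 program.

```python
class LehmerGenerator:
    """Multiplicative linear congruential generator: x <- a * x mod m.

    m is a prime and a is a primitive root mod m; these particular
    constants are the Park-Miller choice, used here only as an example.
    """
    M = 2**31 - 1      # Mersenne prime 2147483647
    A = 16807          # a primitive root of M

    def __init__(self, seed=1):
        if not 0 < seed < self.M:
            raise ValueError("seed must lie strictly between 0 and M")
        self.state = seed

    def next_uniform(self):
        """Advance the state and return a uniform variate in (0, 1)."""
        self.state = (self.A * self.state) % self.M
        return self.state / self.M

gen = LehmerGenerator(seed=12345)
print([round(gen.next_uniform(), 6) for _ in range(3)])
```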

  20. Parallel grid generation algorithm for distributed memory computers

    NASA Technical Reports Server (NTRS)

    Moitra, Stuti; Moitra, Anutosh

    1994-01-01

    A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and implementation of multiple levels of parallelism on multiple instruction multiple data machines is indicated. The algorithm is capable of providing near orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communications. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution time speed-up ratios are given.

  1. High-efficiency photorealistic computer-generated holograms based on the backward ray-tracing technique

    NASA Astrophysics Data System (ADS)

    Wang, Yuan; Chen, Zhidong; Sang, Xinzhu; Li, Hui; Zhao, Linmin

    2018-03-01

    Holographic displays can provide the complete optical wave field of a three-dimensional (3D) scene, including the depth perception. However, it often takes a long computation time to produce traditional computer-generated holograms (CGHs) without more complex and photorealistic rendering. The backward ray-tracing technique is able to render photorealistic high-quality images and noticeably reduces the computation time thanks to its high degree of parallelism. Here, a high-efficiency photorealistic computer-generated hologram method is presented based on the ray-tracing technique. Rays are launched and traced in parallel under different illuminations and circumstances. Experimental results demonstrate the effectiveness of the proposed method. Compared with the traditional point cloud CGH, the computation time is decreased to 24 s to reconstruct a 3D object of 100 × 100 rays with continuous depth change.

  2. HORN-6 special-purpose clustered computing system for electroholography.

    PubMed

    Ichihashi, Yasuyuki; Nakayama, Hirotaka; Ito, Tomoyoshi; Masuda, Nobuyuki; Shimobaba, Tomoyoshi; Shiraki, Atsushi; Sugie, Takashige

    2009-08-03

    We developed the HORN-6 special-purpose computer for holography. We designed and constructed the HORN-6 board to handle an object image composed of one million points and constructed a cluster system composed of 16 HORN-6 boards. Using this HORN-6 cluster system, we succeeded in creating a computer-generated hologram of a three-dimensional image composed of 1,000,000 points at a rate of 1 frame per second, and a computer-generated hologram of an image composed of 100,000 points at a rate of 10 frames per second, which is near video rate, when the size of a computer-generated hologram is 1,920 x 1,080. The calculation speed is approximately 4,600 times faster than that of a personal computer with an Intel 3.4-GHz Pentium 4 CPU.

  3. Theoretical, Experimental, and Computational Evaluation of Several Vane-Type Slow-Wave Structures

    NASA Technical Reports Server (NTRS)

    Wallett, Thomas M.; Qureshi, A. Haq

    1994-01-01

    Several types of periodic vane slow-wave structures were fabricated. The dispersion characteristics were found by theoretical analysis, experimental testing, and computer simulation using the MAFIA code. Computer-generated characteristics agreed to within approximately 2 percent of the experimental characteristics for all structures. The theoretical characteristics, however, deviated increasingly as the width-to-height ratio became smaller. Interaction impedances were also computed based on the experimental and computer-generated resonance frequency shifts due to the introduction of a perturbing dielectric rod.

  4. Speedup computation of HD-sEMG signals using a motor unit-specific electrical source model.

    PubMed

    Carriou, Vincent; Boudaoud, Sofiane; Laforet, Jeremy

    2018-01-23

    Nowadays, bio-reliable modeling of muscle contraction is becoming more accurate and complex. This increasing complexity induces a significant increase in computation time, which prevents the use of this model in certain applications and studies. Accordingly, the aim of this work is to significantly reduce the computation time of high-density surface electromyogram (HD-sEMG) generation. This will be done through a new model of motor unit (MU)-specific electrical source based on the fibers composing the MU. In order to assess the efficiency of this approach, we computed the normalized root mean square error (NRMSE) between several simulations of a single generated MU action potential (MUAP) using the usual fiber electrical sources and the MU-specific electrical source. This NRMSE was computed for five different simulation sets wherein hundreds of MUAPs are generated and summed into HD-sEMG signals. The obtained results display less than 2% error on the generated signals compared to the same signals generated with fiber electrical sources. Moreover, the computation time of the HD-sEMG signal generation model is reduced by about 90% compared to the fiber electrical source model. Using this model with MU electrical sources, we can simulate HD-sEMG signals of a physiological muscle (hundreds of MU) in less than an hour on a classical workstation. Graphical Abstract: Overview of the simulation of HD-sEMG signals using the fiber scale and the MU scale. Upscaling the electrical source to the MU scale reduces the computation time by 90% while inducing only small deviations in the simulated HD-sEMG signals.
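
    The NRMSE comparison described above can be computed with a few lines of NumPy; the normalization by the reference signal's peak-to-peak range below is one common convention and may differ from the one used in the paper.

```python
import numpy as np

def nrmse(reference, test):
    """Root mean square error between two signals, normalized by the
    reference peak-to-peak range (one common choice of denominator)."""
    reference = np.asarray(reference, dtype=float)
    test = np.asarray(test, dtype=float)
    rmse = np.sqrt(np.mean((reference - test) ** 2))
    span = reference.max() - reference.min()
    return rmse / span

# Example with synthetic stand-in waveforms.
t = np.linspace(0.0, 1.0, 1000)
fiber_based = np.sin(2 * np.pi * 5 * t)
mu_based = fiber_based + 0.01 * np.random.default_rng(0).normal(size=t.size)
print(f"NRMSE = {100 * nrmse(fiber_based, mu_based):.2f} %")
```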

  5. Spiking Neural P Systems With Rules on Synapses Working in Maximum Spiking Strategy.

    PubMed

    Tao Song; Linqiang Pan

    2015-06-01

    Spiking neural P systems (called SN P systems for short) are a class of parallel and distributed neural-like computation models inspired by the way neurons process information and communicate with each other by means of impulses or spikes. In this work, we introduce a new variant of SN P systems, called SN P systems with rules on synapses working in maximum spiking strategy, and investigate the computation power of the systems as both number and vector generators. Specifically, we prove that i) if no limit is imposed on the number of spikes in any neuron during any computation, such systems can generate the sets of Turing computable natural numbers and the sets of vectors of positive integers computed by k-output register machines; ii) if an upper bound is imposed on the number of spikes in each neuron during any computation, such systems can characterize semi-linear sets of natural numbers as number generating devices; as vector generating devices, such systems can only characterize the family of sets of vectors computed by sequential monotonic counter machines, which is strictly included in the family of semi-linear sets of vectors. This gives a positive answer to the problem formulated in Song et al., Theor. Comput. Sci., vol. 529, pp. 82-95, 2014.

  6. Fifth Generation Computers: Their Implications for Further Education. An Occasional Paper.

    ERIC Educational Resources Information Center

    Ennals, Richard; Cotterell, Arthur

    Research to develop a fifth generation of computers is underway in several countries. These computers, which will be distinguished by the ability to provide knowledge information processing and respond to natural language commands, will have a profound impact on the labor market and hence on further education. Rather than being a separate…

  7. Children Learning from Artfully Designed, Three-Dimensional Computer Animation

    ERIC Educational Resources Information Center

    Ju, Yoomi Choi; Cifuentes, Lauren

    2002-01-01

    An artfully designed, 3-D computer-generated video story was created to demonstrate the mixing of primary colors to obtain secondary colors. Two research questions were explored in this research: Do artfully designed 3-D computer-generated video stories enhance learning or are such entertaining works a distraction from learning? And, do children…

  8. 25 CFR 542.10 - What are the minimum internal control standards for keno?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...) The random number generator shall be linked to the computer system and shall directly relay the... information shall be generated by the computer system. (2) This documentation shall be restricted to... to the computer system shall be adequately restricted (i.e., passwords are changed at least quarterly...

  9. A computer graphics based model for scattering from objects of arbitrary shapes in the optical region

    NASA Technical Reports Server (NTRS)

    Goel, Narendra S.; Rozehnal, Ivan; Thompson, Richard L.

    1991-01-01

    A computer-graphics-based model, named DIANA, is presented for generation of objects of arbitrary shape and for calculating bidirectional reflectances and scattering from them, in the visible and infrared region. The computer generation is based on a modified Lindenmayer system approach which makes it possible to generate objects of arbitrary shapes and to simulate their growth, dynamics, and movement. Rendering techniques are used to display an object on a computer screen with appropriate shading and shadowing and to calculate the scattering and reflectance from the object. The technique is illustrated with scattering from canopies of simulated corn plants.
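
    The Lindenmayer-system idea behind the object generation can be illustrated with plain string rewriting; the toy branching rule below is a standard textbook example, not DIANA's modified L-system.

```python
def lsystem(axiom, rules, iterations):
    """Expand an L-system string by parallel rewriting of every symbol."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Classic branching rule reminiscent of simple plant models:
# 'F' = draw forward, '+'/'-' = turn, '[' / ']' = push / pop turtle state.
rules = {"F": "F[+F]F[-F]F"}
print(lsystem("F", rules, 2))
```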

  10. Domain Decomposition By the Advancing-Partition Method for Parallel Unstructured Grid Generation

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.; Zagaris, George

    2009-01-01

    A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.

  11. Surface Modeling and Grid Generation of Orbital Sciences X34 Vehicle. Phase 1

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    1997-01-01

    The surface modeling and grid generation requirements, motivations, and methods used to develop Computational Fluid Dynamic volume grids for the X34-Phase 1 are presented. The requirements set forth by the Aerothermodynamics Branch at the NASA Langley Research Center serve as the basis for the final techniques used in the construction of all volume grids, including grids for parametric studies of the X34. The Integrated Computer Engineering and Manufacturing code for Computational Fluid Dynamics (ICEM/CFD), the Grid Generation code (GRIDGEN), the Three-Dimensional Multi-block Advanced Grid Generation System (3DMAGGS) code, and Volume Grid Manipulator (VGM) code are used to enable the necessary surface modeling, surface grid generation, volume grid generation, and grid alterations, respectively. All volume grids generated for the X34, as outlined in this paper, were used for CFD simulations within the Aerothermodynamics Branch.

  12. Generative models for clinical applications in computational psychiatry.

    PubMed

    Frässle, Stefan; Yao, Yu; Schöbi, Dario; Aponte, Eduardo A; Heinzle, Jakob; Stephan, Klaas E

    2018-05-01

    Despite the success of modern neuroimaging techniques in furthering our understanding of cognitive and pathophysiological processes, translation of these advances into clinically relevant tools has been virtually absent until now. Neuromodeling represents a powerful framework for overcoming this translational deadlock, and the development of computational models to solve clinical problems has become a major scientific goal over the last decade, as reflected by the emergence of clinically oriented neuromodeling fields like Computational Psychiatry, Computational Neurology, and Computational Psychosomatics. Generative models of brain physiology and connectivity in the human brain play a key role in this endeavor, striving for computational assays that can be applied to neuroimaging data from individual patients for differential diagnosis and treatment prediction. In this review, we focus on dynamic causal modeling (DCM) and its use for Computational Psychiatry. DCM is a widely used generative modeling framework for functional magnetic resonance imaging (fMRI) and magneto-/electroencephalography (M/EEG) data. This article reviews the basic concepts of DCM, revisits examples where it has proven valuable for addressing clinically relevant questions, and critically discusses methodological challenges and recent methodological advances. We conclude this review with a more general discussion of the promises and pitfalls of generative models in Computational Psychiatry and highlight the path that lies ahead of us. This article is categorized under: Neuroscience > Computation Neuroscience > Clinical Neuroscience. © 2018 Wiley Periodicals, Inc.

  13. Computational Burden Resulting from Image Recognition of High Resolution Radar Sensors

    PubMed Central

    López-Rodríguez, Patricia; Fernández-Recio, Raúl; Bravo, Ignacio; Gardel, Alfredo; Lázaro, José L.; Rufo, Elena

    2013-01-01

    This paper presents a methodology for high resolution radar image generation and automatic target recognition emphasizing the computational cost involved in the process. In order to obtain focused inverse synthetic aperture radar (ISAR) images, certain signal processing algorithms must be applied to the information sensed by the radar. From actual data collected by radar, the stages and algorithms needed to obtain ISAR images are reviewed, including high resolution range profile generation, motion compensation and ISAR formation. Target recognition is achieved by comparing the generated set of actual ISAR images with a database of ISAR images generated by electromagnetic software. High resolution radar image generation and target recognition processes are burdensome and time consuming, so, to determine the most suitable implementation platform, the analysis of the computational complexity is of great interest. To this end, and since target identification must be completed in real time, the computational burden of both processes, the generation and the comparison with a database, is explained separately. Conclusions are drawn about implementation platforms and calculation efficiency in order to reduce time consumption in a possible future implementation. PMID:23609804

  14. Computational burden resulting from image recognition of high resolution radar sensors.

    PubMed

    López-Rodríguez, Patricia; Fernández-Recio, Raúl; Bravo, Ignacio; Gardel, Alfredo; Lázaro, José L; Rufo, Elena

    2013-04-22

    This paper presents a methodology for high resolution radar image generation and automatic target recognition emphasizing the computational cost involved in the process. In order to obtain focused inverse synthetic aperture radar (ISAR) images, certain signal processing algorithms must be applied to the information sensed by the radar. From actual data collected by radar, the stages and algorithms needed to obtain ISAR images are reviewed, including high resolution range profile generation, motion compensation and ISAR formation. Target recognition is achieved by comparing the generated set of actual ISAR images with a database of ISAR images generated by electromagnetic software. High resolution radar image generation and target recognition processes are burdensome and time consuming, so, to determine the most suitable implementation platform, the analysis of the computational complexity is of great interest. To this end, and since target identification must be completed in real time, the computational burden of both processes, the generation and the comparison with a database, is explained separately. Conclusions are drawn about implementation platforms and calculation efficiency in order to reduce time consumption in a possible future implementation.

  15. Sparsity-based fast CGH generation using layer-based approach for 3D point cloud model

    NASA Astrophysics Data System (ADS)

    Kim, Hak Gu; Jeong, Hyunwook; Ro, Yong Man

    2017-03-01

    The computer-generated hologram (CGH) is becoming increasingly important for 3-D displays in various applications, including virtual reality. In CGH, holographic fringe patterns are generated by numerical calculation on computer simulation systems. However, a heavy computational cost is required to calculate the complex amplitude on the CGH plane for all points of a 3D object. This paper proposes a new fast CGH generation method based on the sparsity of the CGH of a 3D point cloud model. The aim of the proposed method is to significantly reduce computational complexity while maintaining the quality of the holographic fringe patterns. To that end, we present a new layer-based approach for calculating the complex amplitude distribution on the CGH plane by using the sparse FFT (sFFT). We observe that the CGH of a layer of a 3D object is sparse, so that the dominant CGH components can be rapidly generated from a small set of signals by sFFT. Experimental results have shown that the proposed method is one order of magnitude faster than recently reported fast CGH generation methods.
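
    A stripped-down version of the layer-based calculation is sketched below: each depth layer is propagated to the hologram plane with ordinary FFT-based angular-spectrum propagation and the fields are summed. The sparse-FFT acceleration that is the paper's actual contribution is not shown, and the wavelength and pixel pitch are arbitrary example values.

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a complex field by distance z with the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

def layer_based_cgh(layers, depths, wavelength=532e-9, pitch=8e-6):
    """Sum propagated fields of amplitude layers into one phase-only hologram."""
    hologram = np.zeros_like(layers[0], dtype=complex)
    for amp, z in zip(layers, depths):
        # Random initial phase spreads each layer's light over the hologram plane.
        field = amp * np.exp(1j * 2 * np.pi * np.random.rand(*amp.shape))
        hologram += angular_spectrum(field, wavelength, pitch, z)
    return np.angle(hologram)
```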

  16. Chimera Grid Tools

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Rogers, Stuart E.; Nash, Steven M.; Buning, Pieter G.; Meakin, Robert

    2005-01-01

    Chimera Grid Tools (CGT) is a software package for performing computational fluid dynamics (CFD) analysis utilizing the Chimera-overset-grid method. For modeling flows with viscosity about geometrically complex bodies in relative motion, the Chimera-overset-grid method is among the most computationally cost-effective methods for obtaining accurate aerodynamic results. CGT contains a large collection of tools for generating overset grids, preparing inputs for computer programs that solve equations of flow on the grids, and post-processing of flow-solution data. The tools in CGT include grid editing tools, surface-grid-generation tools, volume-grid-generation tools, utility scripts, configuration scripts, and tools for post-processing (including generation of animated images of flows and calculating forces and moments exerted on affected bodies). One of the tools, denoted OVERGRID, is a graphical user interface (GUI) that serves to visualize the grids and flow solutions and provides central access to many other tools. The GUI facilitates the generation of grids for a new flow-field configuration. Scripts that follow the grid generation process can then be constructed to mostly automate grid generation for similar configurations. CGT is designed for use in conjunction with a computer-aided-design program that provides the geometry description of the bodies, and a flow-solver program.

  17. Computer Aided Grid Interface: An Interactive CFD Pre-Processor

    NASA Technical Reports Server (NTRS)

    Soni, Bharat K.

    1997-01-01

    NASA maintains an applications-oriented computational fluid dynamics (CFD) effort complementary to and in support of the aerodynamic-propulsion design and test activities. This is especially true at NASA/MSFC, where the goal is to advance and optimize present and future liquid-fueled rocket engines. Numerical grid generation plays a significant role in the fluid flow simulations utilizing CFD. An overall goal of the current project was to develop a geometry-grid generation tool that will help engineers, scientists, and CFD practitioners to analyze design problems involving complex geometries in a timely fashion. This goal is accomplished by developing CAGI: the Computer Aided Grid Interface system. The CAGI system is developed by integrating CAD/CAM (Computer Aided Design/Computer Aided Manufacturing) geometric system output and/or Initial Graphics Exchange Specification (IGES) files (including all the NASA-IGES entities), geometry manipulations and generations associated with grid constructions, and robust grid generation methodologies. This report describes the development process of the CAGI system.

  18. Computer Aided Grid Interface: An Interactive CFD Pre-Processor

    NASA Technical Reports Server (NTRS)

    Soni, Bharat K.

    1996-01-01

    NASA maintains an applications-oriented computational fluid dynamics (CFD) effort complementary to and in support of the aerodynamic-propulsion design and test activities. This is especially true at NASA/MSFC, where the goal is to advance and optimize present and future liquid-fueled rocket engines. Numerical grid generation plays a significant role in the fluid flow simulations utilizing CFD. An overall goal of the current project was to develop a geometry-grid generation tool that will help engineers, scientists, and CFD practitioners to analyze design problems involving complex geometries in a timely fashion. This goal is accomplished by developing the Computer Aided Grid Interface system (CAGI). The CAGI system is developed by integrating CAD/CAM (Computer Aided Design/Computer Aided Manufacturing) geometric system output and/or Initial Graphics Exchange Specification (IGES) files (including all the NASA-IGES entities), geometry manipulations and generations associated with grid constructions, and robust grid generation methodologies. This report describes the development process of the CAGI system.

  19. Electromagnetic tracking of motion in the proximity of computer generated graphical stimuli: a tutorial.

    PubMed

    Schnabel, Ulf H; Hegenloh, Michael; Müller, Hermann J; Zehetleitner, Michael

    2013-09-01

    Electromagnetic motion-tracking systems have the advantage of capturing the tempo-spatial kinematics of movements independently of the visibility of the sensors. However, they are limited in that they cannot be used in the proximity of electromagnetic field sources, such as computer monitors. This prevents exploiting the tracking potential of the sensor system together with that of computer-generated visual stimulation. Here we present a solution for presenting computer-generated visual stimulation that does not distort the electromagnetic field required for precise motion tracking, by means of a back projection medium. In one experiment, we verify that cathode ray tube monitors, as well as thin-film-transistor monitors, distort electro-magnetic sensor signals even at a distance of 18 cm. Our back projection medium, by contrast, leads to no distortion of the motion-tracking signals even when the sensor is touching the medium. This novel solution permits combining the advantages of electromagnetic motion tracking with computer-generated visual stimulation.

  20. Test Generators: Teacher's Tool or Teacher's Headache?

    ERIC Educational Resources Information Center

    Eiser, Leslie

    1988-01-01

    Discusses the advantages and disadvantages of test generation programs. Includes setting up, printing exams and "bells and whistles." Reviews eight computer packages for Apple and IBM personal computers. Compares features, costs, and usage. (CW)

  1. Computer-generated reminders and quality of pediatric HIV care in a resource-limited setting.

    PubMed

    Were, Martin C; Nyandiko, Winstone M; Huang, Kristin T L; Slaven, James E; Shen, Changyu; Tierney, William M; Vreeman, Rachel C

    2013-03-01

    To evaluate the impact of clinician-targeted computer-generated reminders on compliance with HIV care guidelines in a resource-limited setting. We conducted this randomized, controlled trial in an HIV referral clinic in Kenya caring for HIV-infected and HIV-exposed children (<14 years of age). For children randomly assigned to the intervention group, printed patient summaries containing computer-generated patient-specific reminders for overdue care recommendations were provided to the clinician at the time of the child's clinic visit. For children in the control group, clinicians received the summaries, but no computer-generated reminders. We compared differences between the intervention and control groups in completion of overdue tasks, including HIV testing, laboratory monitoring, initiating antiretroviral therapy, and making referrals. During the 5-month study period, 1611 patients (49% female, 70% HIV-infected) were eligible to receive at least 1 computer-generated reminder (ie, had an overdue clinical task). We observed a fourfold increase in the completion of overdue clinical tasks when reminders were availed to providers over the course of the study (68% intervention vs 18% control, P < .001). Orders also occurred earlier for the intervention group (77 days, SD 2.4 days) compared with the control group (104 days, SD 1.2 days) (P < .001). Response rates to reminders varied significantly by type of reminder and between clinicians. Clinician-targeted, computer-generated clinical reminders are associated with a significant increase in completion of overdue clinical tasks for HIV-infected and exposed children in a resource-limited setting.

  2. Generational affinities and discourses of difference: a case study of highly skilled information technology workers.

    PubMed

    McMullin, Julie Ann; Duerden Comeau, Tammy; Jovic, Emily

    2007-06-01

    Sociologists theorizing the concept of 'generation' have traditionally looked to birth cohorts sharing major social upheavals such as war or decolonization to explain issues of generational solidarity and identity affiliation. More recently, theorists have drawn attention to the cultural elements where generations are thought to be formed through affinities with music or other types of popular culture during the 'coming of age' stage of life. In this paper, we ask whether developments in computer technology, which have both productive and cultural components, provide a basis for generational formation and identity and whether generational discourse is invoked to create cultures of difference in the workplace. Qualitative data from a sample of Information Technology workers show that these professionals mobilize 'generational' discourse and draw upon notions of 'generational affinity' with computing technology (e.g. the fact that people of different ages were immersed to varying degrees in different computing technologies) in explaining the youthful profile of IT workers and employees' differing levels of technological expertise.

  3. The comparative effect of individually-generated vs. collaboratively-generated computer-based concept mapping on science concept learning

    NASA Astrophysics Data System (ADS)

    Kwon, So Young

    Using a quasi-experimental design, the researcher investigated the comparative effects of individually-generated and collaboratively-generated computer-based concept mapping on middle school science concept learning. Qualitative data were analyzed to explain quantitative findings. One hundred sixty-one students (74 boys and 87 girls) in eight seventh-grade science classes at a middle school in Southeast Texas completed the entire study. Using prior science performance scores to assure equivalence of student achievement across groups, the researcher assigned the teacher's classes to one of the three experimental groups. The independent variable, group, consisted of three levels: 40 students in a control group, 59 students trained to individually generate concept maps on computers, and 62 students trained to collaboratively generate concept maps on computers. The dependent variables were science concept learning as demonstrated by comprehension test scores, and quality of concept maps created by students in experimental groups as demonstrated by rubric scores. Students in the experimental groups received concept mapping training and used their newly acquired concept mapping skills to individually or collaboratively construct computer-based concept maps during study time. The control group, the individually-generated concept mapping group, and the collaboratively-generated concept mapping group had equivalent learning experiences for 50 minutes during five days, except that students in the control group worked independently without concept mapping activities, students in the individual group worked individually to construct concept maps, and students in the collaborative group worked collaboratively to construct concept maps during their study time. Both collaboratively and individually generated computer-based concept mapping had a positive effect on seventh grade middle school science concept learning, but neither strategy was more effective than the other. However, the students who collaboratively generated concept maps created significantly higher quality concept maps than those who individually generated concept maps. The researcher concluded that the concept mapping software, Inspiration(TM), fostered construction of students' concept maps individually or collaboratively for science learning and helped students capture their evolving creative ideas and organize them for meaningful learning. Students in both the individual and the collaborative concept mapping groups had positive attitudes toward concept mapping using Inspiration(TM) software.

  4. Developing the next generation of diverse computer scientists: the need for enhanced, intersectional computing identity theory

    NASA Astrophysics Data System (ADS)

    Rodriguez, Sarah L.; Lehman, Kathleen

    2017-10-01

    This theoretical paper explores the need for enhanced, intersectional computing identity theory for the purpose of developing a diverse group of computer scientists for the future. Greater theoretical understanding of the identity formation process specifically for computing is needed in order to understand how students come to understand themselves as computer scientists. To ensure that the next generation of computer scientists is diverse, this paper presents a case for examining identity development intersectionally, understanding the ways in which women and underrepresented students may have difficulty identifying as computer scientists and be systematically oppressed in their pursuit of computer science careers. Through a review of the available scholarship, this paper suggests that creating greater theoretical understanding of the computing identity development process will inform the way in which educational stakeholders consider computer science practices and policies.

  5. A study of digital holographic filter generation

    NASA Technical Reports Server (NTRS)

    Calhoun, M.; Ingels, F.

    1976-01-01

    Problems associated with digital computer generation of holograms are discussed along with a criterion for producing optimum digital holograms. This criterion revolves around the amplitude resolution and spatial frequency limitations induced by the computer and plotter process.

  6. Solution of Poisson equations for 3-dimensional grid generations. [computations of a flow field over a thin delta wing

    NASA Technical Reports Server (NTRS)

    Fujii, K.

    1983-01-01

    A method for generating three-dimensional finite difference grids about complicated geometries by using Poisson equations is developed. The inhomogeneous terms are automatically chosen such that orthogonality and spacing restrictions at the body surface are satisfied. Spherical variables are used to avoid the axis singularity, and an alternating-direction-implicit (ADI) solution scheme is used to accelerate the computations. Computed results are presented that show the capability of the method. Since most of the results presented have been used as grids for flow-field computations, this indicates that the method is a useful tool for generating three-dimensional grids about complicated geometries.
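
    As a much-simplified relative of the Poisson grid generation described above, the sketch below relaxes interior grid points with Jacobi sweeps of the discrete Laplace equations while keeping boundary points fixed; the paper's inhomogeneous control terms, spherical variables, and ADI solver are omitted.

```python
import numpy as np

def laplace_grid_relax(X, Y, n_iter=500):
    """Relax interior grid nodes toward the discrete Laplace equations.

    X, Y: 2-D arrays of physical coordinates whose boundary rows and
    columns are already set; interior nodes are smoothed, boundaries
    are never modified.
    """
    for _ in range(n_iter):
        # Jacobi update: each interior node moves to the average of its
        # four neighbours (right-hand sides use the previous sweep's values).
        X[1:-1, 1:-1] = 0.25 * (X[2:, 1:-1] + X[:-2, 1:-1]
                                + X[1:-1, 2:] + X[1:-1, :-2])
        Y[1:-1, 1:-1] = 0.25 * (Y[2:, 1:-1] + Y[:-2, 1:-1]
                                + Y[1:-1, 2:] + Y[1:-1, :-2])
    return X, Y
```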

  7. ECDSA B-233 with Precomputation 1.0 Beta Version

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Draelos, Timothy; Schroeppel, Richard; Schoeneman, Barry

    2009-12-11

    This software, written in C, performs two functions: 1) the generation of digital signatures using ECDSA with the B-233 curve and a table of precomputed values, and 2) the generation and encryption of a table of precomputed values to support the generation of many digital signatures. The computationally expensive operations of ECDSA signature generation are precomputed, stored in a table, and protected with AES encryption. This allows digital signatures to be generated in low-power, computationally-constrained environments, such as are often found in non-proliferation monitoring applications. The encrypted, precomputed table and digital signature generation software are used to provide public key data authentication for sensor data. When digital data is presented for signing, a set of values from the table is decrypted and used to generate an ECDSA digital signature.
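
    The split between offline precomputation and cheap online signing can be illustrated with the ECDSA signing equation alone. The sketch below assumes a table entry holding a precomputed r = (kG)_x mod n and k^-1 mod n; the curve order n, private key d, and table values are hypothetical toy numbers, and the real software's B-233 curve arithmetic and AES protection of the table are not shown.

```python
import hashlib

def sign_with_precomputed(message: bytes, d: int, n: int, entry):
    """ECDSA signing given a precomputed (r, k_inv) pair.

    Each table entry corresponds to a fresh secret nonce k and must be
    used for at most one signature. Only cheap modular arithmetic and a
    hash remain to be done online: s = k_inv * (z + r * d) mod n.
    """
    r, k_inv = entry
    z = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    s = (k_inv * (z + r * d)) % n
    return r, s

# Toy, non-secure example values (NOT the B-233 curve).
n_toy, d_toy = 9973, 1234
entry_toy = (3141, 2718)            # hypothetical precomputed (r, k_inv)
print(sign_with_precomputed(b"sensor reading", d_toy, n_toy, entry_toy))
```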

  8. Installation of new Generation General Purpose Computer (GPC) compact unit

    NASA Technical Reports Server (NTRS)

    1991-01-01

    In the Kennedy Space Center's (KSC's) Orbiter Processing Facility (OPF) high bay 2, Spacecraft Electronics technician Ed Carter (right), wearing clean suit, prepares for (26864) and installs (26865) the new Generation General Purpose Computer (GPC) compact IBM unit in Atlantis', Orbiter Vehicle (OV) 104's, middeck avionics bay as Orbiter Systems Quality Control technician Doug Snider looks on. Both men work for NASA contractor Lockheed Space Operations Company. All three orbiters are being outfitted with the compact IBM unit, which replaces a two-unit earlier generation computer.

  9. Generating finite cyclic and dihedral groups using sequential insertion systems with interactions

    NASA Astrophysics Data System (ADS)

    Fong, Wan Heng; Sarmin, Nor Haniza; Turaev, Sherzod; Yosman, Ahmad Firdaus

    2017-04-01

    The operation of insertion has been studied extensively over the years for its impact in many areas of theoretical computer science, such as DNA computing. Insertion was first introduced as a generalization of the concatenation operation, and many variants have since been proposed, each with its own computational properties. In this paper, we introduce a new variant, called sequential insertion systems with interactions, that enables the generation of some special types of groups. We show that these new systems are able to generate all finite cyclic and dihedral groups.

  10. Automatic finite element generators

    NASA Technical Reports Server (NTRS)

    Wang, P. S.

    1984-01-01

    The design and implementation of a software system for generating finite elements and related computations are described. Exact symbolic computational techniques are employed to derive strain-displacement matrices and element stiffness matrices. Methods for dealing with the excessive growth of symbolic expressions are discussed. Automatic FORTRAN code generation is described with emphasis on improving the efficiency of the resultant code.
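
    The flavor of exact symbolic element generation can be shown with SymPy on the simplest possible case, a two-node linear bar element; this is an independent illustration, not the system described in the report.

```python
import sympy as sp

# Symbolic derivation of the stiffness matrix of a 1-D linear bar element.
x, L, E, A = sp.symbols("x L E A", positive=True)
N = sp.Matrix([1 - x / L, x / L])      # linear shape functions
B = N.diff(x)                          # strain-displacement vector (2 x 1)
integrand = E * A * B * B.T            # 2 x 2 symbolic integrand
K = integrand.applyfunc(lambda e: sp.integrate(e, (x, 0, L)))
print(sp.simplify(K))                  # -> E*A/L * Matrix([[1, -1], [-1, 1]])
```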

  11. Imaging quality analysis of computer-generated holograms using the point-based method and slice-based method

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen; Chen, Siqing; Zheng, Huadong; Sun, Tao; Yu, Yingjie; Gao, Hongyue; Asundi, Anand K.

    2017-06-01

    Computer holography has made notable progress in recent years. The point-based method and the slice-based method are the chief calculation algorithms for generating holograms in holographic display. Although both methods have been validated numerically and optically, the differences in the imaging quality of these methods have not been specifically analyzed. In this paper, we analyze the imaging quality of computer-generated phase holograms generated by point-based Fresnel zone plates (PB-FZP), the point-based Fresnel diffraction algorithm (PB-FDA), and the slice-based Fresnel diffraction algorithm (SB-FDA). The calculation formulas and hologram generation with the three methods are demonstrated. In order to suppress speckle noise, sequential phase-only holograms are generated in our work. Numerically and experimentally reconstructed images are also exhibited. By comparing the imaging quality, the merits and drawbacks of the three methods are analyzed. Conclusions are given at the end.
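
    A bare-bones point-based calculation is sketched below: spherical-wave contributions from object points are accumulated on the hologram plane and the phase is kept. Resolution, pixel pitch, and wavelength are arbitrary example values, and the zone-plate and slice-based variants compared in the paper are not reproduced.

```python
import numpy as np

def point_based_phase_hologram(points, n=512, pitch=8e-6, wavelength=532e-9):
    """Accumulate spherical waves from object points on the hologram plane.

    points: iterable of (x, y, z, amplitude) in metres, z > 0 behind the plane.
    Returns the phase-only hologram as an n x n array of angles.
    """
    k = 2 * np.pi / wavelength
    ys, xs = np.mgrid[:n, :n]
    X = (xs - n / 2) * pitch
    Y = (ys - n / 2) * pitch
    field = np.zeros((n, n), dtype=complex)
    for px, py, pz, amp in points:
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += amp * np.exp(1j * k * r) / r     # spherical wave from one point
    return np.angle(field)

# Two toy object points 0.1 m behind the hologram plane.
holo = point_based_phase_hologram([(0.0, 0.0, 0.1, 1.0),
                                   (1e-3, 0.0, 0.1, 1.0)])
```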

  12. Domain Decomposition By the Advancing-Partition Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2008-01-01

    A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.

  13. Encryption and display of multiple-image information using computer-generated holography with modified GS iterative algorithm

    NASA Astrophysics Data System (ADS)

    Xiao, Dan; Li, Xiaowei; Liu, Su-Juan; Wang, Qiong-Hua

    2018-03-01

    In this paper, a new scheme of multiple-image encryption and display based on computer-generated holography (CGH) and maximum length cellular automata (MLCA) is presented. With this scheme, the computer-generated hologram, which carries the information of the three primitive images, is first generated by a modified Gerchberg-Saxton (GS) iterative algorithm using three different fractional orders in the fractional Fourier domain. The hologram is then encrypted using an MLCA mask. The ciphertext can be decrypted by combining the fractional orders and the rules of the MLCA. Numerical simulations and experimental display results verify the validity and feasibility of the proposed scheme.
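
    For orientation, the plain Gerchberg-Saxton iteration (ordinary Fourier transforms, a single target image) is sketched below; the paper's modification with three fractional Fourier orders and three primitive images, and the MLCA encryption step, are not shown.

```python
import numpy as np

def gerchberg_saxton(target_amplitude, n_iter=50, seed=0):
    """Plain GS iteration for a phase-only hologram whose Fourier transform
    should reproduce target_amplitude."""
    rng = np.random.default_rng(seed)
    phase = np.exp(1j * 2 * np.pi * rng.random(target_amplitude.shape))
    field = target_amplitude * phase                    # start in the image plane
    hologram = np.fft.ifft2(field)
    for _ in range(n_iter):
        hologram = np.exp(1j * np.angle(hologram))      # keep phase only (hologram constraint)
        image = np.fft.fft2(hologram)                   # propagate to the image plane
        field = target_amplitude * np.exp(1j * np.angle(image))  # enforce target amplitude
        hologram = np.fft.ifft2(field)                  # back to the hologram plane
    return np.angle(hologram)
```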

  14. Generative Representations for Computer-Automated Evolutionary Design

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.

    2006-01-01

    With the increasing computational power of computers, software design systems are progressing from being tools for architects and designers to express their ideas to tools capable of creating designs under human guidance. One of the main limitations for these computer-automated design systems is the representation with which they encode designs. If the representation cannot encode a certain design, then the design system cannot produce it. To be able to produce new types of designs, and not just optimize pre-defined parameterizations, evolutionary design systems must use generative representations. Generative representations are assembly procedures, or algorithms, for constructing a design thereby allowing for truly novel design solutions to be encoded. In addition, by enabling modularity, regularity and hierarchy, the level of sophistication that can be evolved is increased. We demonstrate the advantages of generative representations on two different design domains: the evolution of spacecraft antennas and the evolution of 3D objects.

  15. Accelerated computer generated holography using sparse bases in the STFT domain.

    PubMed

    Blinder, David; Schelkens, Peter

    2018-01-22

    Computer-generated holography at high resolutions is a computationally intensive task. Efficient algorithms are needed to generate holograms at acceptable speeds, especially for real-time and interactive applications such as holographic displays. We propose a novel technique to generate holograms using a sparse basis representation in the short-time Fourier space combined with a wavefront-recording plane placed in the middle of the 3D object. By computing the point spread functions in the transform domain, we update only a small subset of the precomputed largest-magnitude coefficients to significantly accelerate the algorithm over conventional look-up table methods. We implement the algorithm on a GPU, and report a speedup factor of over 30. We show that this transform is superior to wavelet-based approaches, and show quantitative and qualitative improvements over the state-of-the-art WASABI method; we report accuracy gains of 2 dB PSNR, as well as improved view preservation.

  16. Fast calculation method for computer-generated cylindrical holograms.

    PubMed

    Yamaguchi, Takeshi; Fujii, Tomohiko; Yoshikawa, Hiroshi

    2008-07-01

    Since a general flat hologram has a limited viewable area, we usually cannot see the other side of a reconstructed object. There are some holograms that can solve this problem. A cylindrical hologram is well known to be viewable in 360 deg. Most cylindrical holograms are optical holograms, but there are few reports of computer-generated cylindrical holograms. The lack of computer-generated cylindrical holograms is because the spatial resolution of output devices is not great enough; therefore, we have to make a large hologram or use a small object to fulfill the sampling theorem. In addition, in calculating the large fringe, the calculation amount increases in proportion to the hologram size. Therefore, we propose what we believe to be a new calculation method for fast calculation. Then, we print these fringes with our prototype fringe printer. As a result, we obtain a good reconstructed image from a computer-generated cylindrical hologram.

  17. Generative Representations for Computer-Automated Design Systems

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.

    2004-01-01

    With the increasing computational power of computers, software design systems are progressing from being tools for architects and designers to express their ideas to tools capable of creating designs under human guidance. One of the main limitations for these computer-automated design programs is the representation with which they encode designs. If the representation cannot encode a certain design, then the design program cannot produce it. Similarly, a poor representation makes some types of designs extremely unlikely to be created. Here we define generative representations as those representations which can create and reuse organizational units within a design and argue that reuse is necessary for design systems to scale to more complex and interesting designs. To support our argument, we describe GENRE, an evolutionary design program that uses both a generative and a non-generative representation, and compare the results of evolving designs with both types of representations.

  18. Identification of natural images and computer-generated graphics based on statistical and textural features.

    PubMed

    Peng, Fei; Li, Jiao-ting; Long, Min

    2015-03-01

    To discriminate the acquisition pipelines of digital images, a novel scheme for the identification of natural images and computer-generated graphics is proposed based on statistical and textural features. First, the differences between them are investigated from the viewpoint of statistics and texture, and a 31-dimensional feature vector is extracted for identification. Then, LIBSVM is used for the classification. Finally, the experimental results are presented. The results show that the scheme achieves an identification accuracy of 97.89% for computer-generated graphics and 97.75% for natural images. The analyses also demonstrate that the proposed method performs well compared with some existing methods based only on statistical features or other features. The method has great potential to be implemented for the identification of natural images and computer-generated graphics. © 2014 American Academy of Forensic Sciences.
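
    The pipeline described (extract statistical and textural features, then classify with LIBSVM) can be sketched with scikit-learn, whose SVC class is backed by LIBSVM. The feature set below is a small illustrative stand-in rather than the paper's 31-dimensional descriptor, and the `images` and `labels` inputs are assumed to be supplied by the caller.

    ```python
    # Hedged sketch of a statistical/textural-feature + SVM classifier (not the paper's features).
    import numpy as np
    from scipy import stats
    from sklearn.svm import SVC                      # sklearn's SVC wraps LIBSVM
    from sklearn.model_selection import cross_val_score
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    def simple_features(img):
        """img: HxWx3 uint8 array; returns a small statistical/textural feature vector."""
        feats = []
        for c in range(3):
            ch = img[..., c].astype(float)
            feats += [ch.mean(), ch.std(), stats.skew(ch.ravel()), stats.kurtosis(ch.ravel())]
            # crude texture proxy: mean absolute horizontal and vertical gradients
            feats += [np.abs(np.diff(ch, axis=0)).mean(), np.abs(np.diff(ch, axis=1)).mean()]
        return np.array(feats)

    def evaluate(images, labels):
        """images: list of HxWx3 arrays; labels: 1 = natural, 0 = computer-generated (assumed)."""
        X = np.stack([simple_features(im) for im in images])
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        return cross_val_score(clf, X, labels, cv=5).mean()
    ```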

  19. Effects of Static Visuals and Computer-Generated Animations in Facilitating Immediate and Delayed Achievement in the EFL Classroom

    ERIC Educational Resources Information Center

    Lin, Huifen; Chen, Tsuiping; Dwyer, Francis M.

    2006-01-01

    The purpose of this experimental study was to compare the effects of using static visuals versus computer-generated animation to enhance learners' comprehension and retention of a content-based lesson in a computer-based learning environment for learning English as a foreign language (EFL). Fifty-eight students from two EFL reading sections were…

  20. Computer program MCAP-TOSS calculates steady-state fluid dynamics of coolant in parallel channels and temperature distribution in surrounding heat-generating solid

    NASA Technical Reports Server (NTRS)

    Lee, A. Y.

    1967-01-01

    Computer program calculates the steady state fluid distribution, temperature rise, and pressure drop of a coolant, the material temperature distribution of a heat generating solid, and the heat flux distributions at the fluid-solid interfaces. It performs the necessary iterations automatically within the computer, in one machine run.

  1. Generation and physical characteristics of the ERTS MSS system corrected computer compatible tapes

    NASA Technical Reports Server (NTRS)

    Thomas, V. L.

    1973-01-01

    The generation and format of the ERTS system corrected multispectral scanner computer compatible tapes are discussed. The discussion includes spacecraft sensors, scene characteristics, data transmission, and conversion of data to computer compatible tapes at the NASA Data Processing Facility. Geometric and radiometric corrections, tape formats, and the physical characteristics of the tapes are also included.

  2. Interactive Computation for Undergraduates: The Next Generation

    NASA Astrophysics Data System (ADS)

    Kolan, Amy J.

    2017-05-01

    A generation ago (29 years ago), Leo Kadanoff and Michael Vinson created the Computers, Chaos, and Physics course. A major pedagogical thrust of this course was to help students form and test hypotheses via computer simulation of small problems in physics. Recently, this aspect of the 1987 course has been revived for use with first year physics undergraduate students at St. Olaf College.

  3. Verifiable Measurement-Only Blind Quantum Computing with Stabilizer Testing.

    PubMed

    Hayashi, Masahito; Morimae, Tomoyuki

    2015-11-27

    We introduce a simple protocol for verifiable measurement-only blind quantum computing. Alice, a client, can perform only single-qubit measurements, whereas Bob, a server, can generate and store entangled many-qubit states. Bob generates copies of a graph state, which is a universal resource state for measurement-based quantum computing, and sends each of their qubits to Alice one by one. Alice adaptively measures each qubit according to her program. If Bob is honest, he generates the correct graph state, and therefore Alice can obtain the correct computation result. Regarding security, whatever Bob does, he cannot obtain any information about Alice's computation because of the no-signaling principle. Furthermore, a malicious Bob does not necessarily send copies of the correct graph state, but Alice can check the correctness of Bob's state by directly verifying the stabilizers of some copies.
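
    The final check in the protocol, verifying that a claimed graph state is fixed by each stabilizer (X on a qubit together with Z on each of its neighbours), can be illustrated by brute force on a few qubits. The sketch below is a toy statevector check, not the measurement-only procedure itself, and the 3-qubit linear cluster is an assumed example.

    ```python
    # Brute-force statevector check of graph-state stabilizers (toy scale only).
    import numpy as np
    from functools import reduce

    I = np.eye(2)
    X = np.array([[0, 1], [1, 0]])
    Z = np.diag([1.0, -1.0])
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

    def graph_state(n, edges):
        psi = reduce(np.kron, [H @ np.array([1.0, 0.0]) for _ in range(n)])   # |+>^n
        for a, b in edges:                                                    # CZ on each edge
            cz = np.eye(2**n)
            for k in range(2**n):
                if (k >> (n - 1 - a)) & 1 and (k >> (n - 1 - b)) & 1:
                    cz[k, k] = -1
            psi = cz @ psi
        return psi

    def stabilizer(n, edges, i):
        ops = [I] * n
        ops[i] = X
        for a, b in edges:                      # Z on every neighbour of qubit i
            if a == i: ops[b] = Z
            if b == i: ops[a] = Z
        return reduce(np.kron, ops)

    n, edges = 3, [(0, 1), (1, 2)]              # linear cluster state
    psi = graph_state(n, edges)
    honest = all(np.allclose(stabilizer(n, edges, i) @ psi, psi) for i in range(n))
    print("all stabilizers satisfied:", honest)  # True for an honestly prepared graph state
    ```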

  4. Verifiable Measurement-Only Blind Quantum Computing with Stabilizer Testing

    NASA Astrophysics Data System (ADS)

    Hayashi, Masahito; Morimae, Tomoyuki

    2015-11-01

    We introduce a simple protocol for verifiable measurement-only blind quantum computing. Alice, a client, can perform only single-qubit measurements, whereas Bob, a server, can generate and store entangled many-qubit states. Bob generates copies of a graph state, which is a universal resource state for measurement-based quantum computing, and sends each of their qubits to Alice one by one. Alice adaptively measures each qubit according to her program. If Bob is honest, he generates the correct graph state, and therefore Alice can obtain the correct computation result. Regarding security, whatever Bob does, he cannot obtain any information about Alice's computation because of the no-signaling principle. Furthermore, a malicious Bob does not necessarily send copies of the correct graph state, but Alice can check the correctness of Bob's state by directly verifying the stabilizers of some copies.

  5. Unstructured mesh generation and adaptivity

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.

    1995-01-01

    An overview of current unstructured mesh generation and adaptivity techniques is given. Basic building blocks taken from the field of computational geometry are first described. Various practical mesh generation techniques based on these algorithms are then constructed and illustrated with examples. Issues of adaptive meshing and stretched mesh generation for anisotropic problems are treated in subsequent sections. The presentation is organized in an educational manner for readers familiar with computational fluid dynamics who wish to learn more about current unstructured mesh techniques.
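
    One of the basic computational-geometry building blocks mentioned above is the Delaunay triangulation of a scattered point set; the short sketch below (an illustration using scipy, not code from the paper) shows how such a triangulation and its adjacency information are obtained.

    ```python
    # Delaunay triangulation of scattered points, a common unstructured-mesh building block.
    import numpy as np
    from scipy.spatial import Delaunay

    rng = np.random.default_rng(0)
    pts = rng.random((200, 2))             # scattered points in the unit square
    tri = Delaunay(pts)

    print("triangles:", len(tri.simplices))
    # Each row of tri.simplices holds the three point indices of one triangle;
    # tri.neighbors gives the adjacent triangle across each edge (-1 on the boundary).
    ```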

  6. Digital computer technique for setup and checkout of an analog computer

    NASA Technical Reports Server (NTRS)

    Ambaruch, R.

    1968-01-01

    A computer program technique, called Analog Computer Check-Out Routine Digitally (ACCORD), generates complete setup and checkout data for an analog computer. In addition, it validates the correctness of the analog program implementation.

  7. Emerging Approach of Natural Language Processing in Opinion Mining: A Review

    NASA Astrophysics Data System (ADS)

    Kim, Tai-Hoon

    Natural language processing (NLP) is a subfield of artificial intelligence and computational linguistics. It studies the problems of automated generation and understanding of natural human languages. This paper outlines a framework for using computer and natural language techniques to help learners at various levels learn foreign languages in a computer-based learning environment. We propose some ideas for using the computer as a practical tool for learning foreign languages, where most of the courseware is generated automatically. We then describe how to build computer-based learning tools, discuss their effectiveness, and conclude with some possibilities for using online resources.

  8. Software on diffractive optics and computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Doskolovich, Leonid L.; Golub, Michael A.; Kazanskiy, Nikolay L.; Khramov, Alexander G.; Pavelyev, Vladimir S.; Seraphimovich, P. G.; Soifer, Victor A.; Volotovskiy, S. G.

    1995-01-01

    The 'Quick-DOE' software for an IBM PC-compatible computer is aimed at calculating the masks of diffractive optical elements (DOEs) and computer generated holograms, at computer simulation of DOEs, and at executing a number of auxiliary functions. In particular, the auxiliary functions include file format conversions, mask visualization on a display from a file, implementation of fast Fourier transforms, and arranging and preparing composite images for output on a photoplotter. The software is intended for use by opticians, DOE designers, and programmers developing software for DOE computation.

  9. Computer Aided Design of Computer Generated Holograms for electron beam fabrication

    NASA Technical Reports Server (NTRS)

    Urquhart, Kristopher S.; Lee, Sing H.; Guest, Clark C.; Feldman, Michael R.; Farhoosh, Hamid

    1989-01-01

    Computer Aided Design (CAD) systems that have been developed for electrical and mechanical design tasks are also effective tools for the process of designing Computer Generated Holograms (CGHs), particularly when these holograms are to be fabricated using electron beam lithography. CAD workstations provide efficient and convenient means of computing, storing, displaying, and preparing for fabrication many of the features that are common to CGH designs. Experience gained in the process of designing CGHs with various types of encoding methods is presented. Suggestions are made so that future workstations may further accommodate the CGH design process.

  10. Prosthetically directed implant placement using computer software to ensure precise placement and predictable prosthetic outcomes. Part 2: rapid-prototype medical modeling and stereolithographic drilling guides requiring bone exposure.

    PubMed

    Rosenfeld, Alan L; Mandelaris, George A; Tardieu, Philippe B

    2006-08-01

    The purpose of this paper is to expand on part 1 of this series (published in the previous issue) regarding the emerging future of computer-guided implant dentistry. This article will introduce the concept of rapid-prototype medical modeling as well as describe the utilization and fabrication of computer-generated surgical drilling guides used during implant surgery. The placement of dental implants has traditionally been an intuitive process, whereby the surgeon relies on mental navigation to achieve optimal implant positioning. Through rapid-prototype medical modeling and the stereolithographic process, surgical drilling guides (eg, SurgiGuide) can be created. These guides are generated from a surgical implant plan created with a computer software system that incorporates all relevant prosthetic information from which the surgical plan is developed. The utilization of computer-generated planning and stereolithographically generated surgical drilling guides embraces the concept of collaborative accountability and supersedes traditional mental navigation on all levels of implant therapy.

  11. Accretor: Generative Materiality in the Work of Driessens and Verstappen.

    PubMed

    Whitelaw, Mitchell

    2015-01-01

    Accretor, by the Dutch artists Erwin Driessens and Maria Verstappen, is a generative artwork that adopts and adapts artificial life techniques to produce intricate three-dimensional forms. This article introduces and analyzes Accretor, considering the enigmatic quality of the generated objects and in particular the role of materiality in this highly computational work. Accretor demonstrates a tangled continuity between digital and physical domains, where the constraints and affordances of matter inform both formal processes and aesthetic interpretations. Drawing on Arp's notion of the concrete artwork and McCormack and Dorin's notion of the computational sublime, the article finally argues that Accretor demonstrates what might be called a processual sublime, evoking expansive processes that span both computational and non-computational systems.

  12. Computational Tools and Facilities for the Next-Generation Analysis and Design Environment

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Malone, John B. (Compiler)

    1997-01-01

    This document contains presentations from the joint UVA/NASA Workshop on Computational Tools and Facilities for the Next-Generation Analysis and Design Environment held at the Virginia Consortium of Engineering and Science Universities in Hampton, Virginia on September 17-18, 1996. The presentations focused on the computational tools and facilities for analysis and design of engineering systems, including real-time simulations, immersive systems, collaborative engineering environments, Web-based tools, and interactive media for technical training. Workshop attendees represented NASA, commercial software developers, the aerospace industry, government labs, and academia. The workshop objectives were to assess the level of maturity of a number of computational tools and facilities and their potential for application to the next-generation integrated design environment.

  13. Towards a theory of automated elliptic mesh generation

    NASA Technical Reports Server (NTRS)

    Cordova, J. Q.

    1992-01-01

    The theory of elliptic mesh generation is reviewed and the fundamental problem of constructing computational space is discussed. It is argued that the construction of computational space is an NP-Complete problem and therefore requires a nonstandard approach for its solution. This leads to the development of graph-theoretic, combinatorial optimization and integer programming algorithms. Methods for the construction of two dimensional computational space are presented.

  14. Oscillatory threshold logic.

    PubMed

    Borresen, Jon; Lynch, Stephen

    2012-01-01

    In the 1940s, the first generation of modern computers used vacuum tube oscillators as their principal components; however, with the development of the transistor, such oscillator-based computers quickly became obsolete. As the demand for faster and lower-power computers continues, transistors are themselves approaching their theoretical limit, and emerging technologies must eventually supersede them. With the development of optical oscillators and Josephson junction technology, we are again presented with the possibility of using oscillators as the basic components of computers, and it is possible that the next generation of computers will be composed almost entirely of oscillatory devices. Here, we demonstrate how coupled threshold oscillators may be used to perform binary logic in a manner entirely consistent with modern computer architectures. We describe a variety of computational circuitry and demonstrate working oscillator models of both computation and memory.
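
    As a loose illustration of the encoding, where a logical 1 is carried by an oscillating unit and a logical 0 by a quiescent one, and gates are realized as downstream threshold oscillators, here is a toy Python model. It is not the coupled-oscillator dynamics used in the paper; the square-wave "oscillation" and the gate thresholds are assumptions made for the sketch.

    ```python
    # Toy threshold-oscillator logic: a unit oscillates when its drive reaches threshold.
    import numpy as np

    def osc(drive, threshold, steps=100):
        """Square-wave train (amplitude 1) if drive reaches threshold, else silence."""
        t = np.arange(steps)
        return (t % 2 == 0).astype(float) if drive >= threshold else np.zeros(steps)

    def is_on(signal):
        """A bit is 'on' when the unit is oscillating."""
        return bool(signal.std() > 0)

    def gate(a, b, threshold):
        """Drive a downstream threshold oscillator from two input trains."""
        drive = int(is_on(a)) + int(is_on(b))
        return osc(drive, threshold)

    ONE, ZERO = osc(1, 1), osc(0, 1)
    for x, y in [(ZERO, ZERO), (ZERO, ONE), (ONE, ZERO), (ONE, ONE)]:
        AND = is_on(gate(x, y, threshold=2))   # oscillates only if both inputs oscillate
        OR = is_on(gate(x, y, threshold=1))    # oscillates if at least one input oscillates
        print(int(is_on(x)), int(is_on(y)), "-> AND:", int(AND), "OR:", int(OR))
    ```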

  15. Inflight IFR procedures simulator

    NASA Technical Reports Server (NTRS)

    Parker, L. C. (Inventor)

    1984-01-01

    An inflight IFR procedures simulator for generating signals and commands to conventional instruments provided in an airplane is described. The simulator includes a signal synthesizer which, upon being activated, generates predetermined simulated signals corresponding to signals normally received from remote sources. A computer is connected to the signal synthesizer and causes it to produce simulated signals responsive to programs fed into the computer. A switching network is connected to the signal synthesizer, the antenna of the aircraft, and the navigational instruments and communication devices, for selectively connecting the instruments and devices to the synthesizer and disconnecting the antenna from the navigational instruments and communication devices. Pressure transducers are connected to the altimeter and speed indicator for supplying electrical signals to the computer indicating the altitude and speed of the aircraft. A compass is connected to supply electrical signals to the computer indicating the heading of the airplane. The computer, upon receiving signals from the pressure transducers and compass, computes the signals that are fed to the signal synthesizer, which, in turn, generates simulated navigational signals.

  16. Blind Quantum Signature with Blind Quantum Computation

    NASA Astrophysics Data System (ADS)

    Li, Wei; Shi, Ronghua; Guo, Ying

    2017-04-01

    Blind quantum computation allows a client without quantum abilities to interact with a quantum server to perform an unconditionally secure computing protocol while protecting the client's privacy. Motivated by the confidentiality of blind quantum computation, a blind quantum signature scheme with a laconic structure is designed. Different from traditional signature schemes, the signing and verifying operations are performed through measurement-based quantum computation. The inputs of the blind quantum computation are securely controlled with multi-qubit entangled states. The unique signature of the transmitted message is generated by the signer without leaking information in imperfect channels, while the receiver can verify the validity of the signature using the quantum matching algorithm. The security is guaranteed by the entanglement of the quantum system for blind quantum computation. It provides a potential practical application for e-commerce in cloud computing and first-generation quantum computation.

  17. 47 CFR 3.60 - Reports.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... FEDERAL COMMUNICATIONS COMMISSION GENERAL AUTHORIZATION AND ADMINISTRATION OF ACCOUNTING AUTHORITIES IN... be typewritten or computer generated, be annotated to indicate it is the initial inventory and be in... typewritten or computer generated and be in the following general format: Additions to Current Vessel...

  18. 47 CFR 3.60 - Reports.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... FEDERAL COMMUNICATIONS COMMISSION GENERAL AUTHORIZATION AND ADMINISTRATION OF ACCOUNTING AUTHORITIES IN... be typewritten or computer generated, be annotated to indicate it is the initial inventory and be in... typewritten or computer generated and be in the following general format: Additions to Current Vessel...

  19. Programmable Pulse Generator

    NASA Technical Reports Server (NTRS)

    Rhim, W. K.; Dart, J. A.

    1982-01-01

    New pulse generator programmed to produce pulses from several ports at different pulse lengths and intervals and virtually any combination and sequence. Unit contains a 256-word-by-16-bit memory loaded with instructions either manually or by computer. Once loaded, unit operates independently of computer.

  20. High-resolution computer-generated reflection holograms with three-dimensional effects written directly on a silicon surface by a femtosecond laser.

    PubMed

    Wædegaard, Kristian J; Balling, Peter

    2011-02-14

    An infrared femtosecond laser has been used to write computer-generated holograms directly on a silicon surface. The high resolution offered by short-pulse laser ablation is employed to write highly detailed holograms with resolution up to 111 kpixels/mm2. It is demonstrated how three-dimensional effects can be realized in computer-generated holograms. Three-dimensional effects are visualized as a relative motion between different parts of the holographic reconstruction, when the hologram is moved relative to the reconstructing laser beam. Potential security applications are briefly discussed.

  1. Image communication scheme based on dynamic visual cryptography and computer generated holography

    NASA Astrophysics Data System (ADS)

    Palevicius, Paulius; Ragulskis, Minvydas

    2015-01-01

    Computer generated holograms are often exploited to implement optical encryption schemes. This paper proposes the integration of dynamic visual cryptography (an optical technique based on the interplay of visual cryptography and time-averaging geometric moiré) with the Gerchberg-Saxton algorithm. A stochastic moiré grating is used to embed the secret into a single cover image. The secret can be visually decoded by the naked eye only if the amplitude of harmonic oscillations corresponds to an accurately preselected value. The proposed visual image encryption scheme is based on computer generated holography, optical time-averaging moiré and principles of dynamic visual cryptography. Dynamic visual cryptography is used both for the initial encryption of the secret image and for the final decryption. Phase data of the encrypted image are computed by using the Gerchberg-Saxton algorithm. The optical image is decrypted using the computationally reconstructed field of amplitudes.
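
    The phase-computation step can be sketched with a minimal Gerchberg-Saxton loop in numpy: it searches for a phase-only hologram whose Fourier transform approximates a prescribed amplitude. Only the generic algorithm is shown; the coupling to time-averaged moiré and dynamic visual cryptography described in the paper is not reproduced, and the iteration count is an assumption.

    ```python
    # Minimal Gerchberg-Saxton phase retrieval (generic algorithm, illustrative parameters).
    import numpy as np

    def gerchberg_saxton(target_amplitude, iterations=100, seed=0):
        rng = np.random.default_rng(seed)
        phase = rng.uniform(0, 2 * np.pi, target_amplitude.shape)
        field = np.exp(1j * phase)                      # unit amplitude, random phase
        for _ in range(iterations):
            far = np.fft.fft2(field)
            far = target_amplitude * np.exp(1j * np.angle(far))   # impose target amplitude
            field = np.fft.ifft2(far)
            field = np.exp(1j * np.angle(field))        # keep phase only (phase hologram)
        return np.angle(field)

    # usage: target = a normalized grayscale image as a 2-D array (assumed input)
    # hologram_phase = gerchberg_saxton(target)
    # reconstruction = np.abs(np.fft.fft2(np.exp(1j * hologram_phase)))
    ```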

  2. Has computational creativity successfully made it "Beyond the Fence" in musical theatre?

    NASA Astrophysics Data System (ADS)

    Jordanous, Anna

    2017-10-01

    A significant test for software is to task it with replicating human performance, as done recently with creative software and the commercial project Beyond the Fence (undertaken for a television documentary Computer Says Show). The remit of this project was to use computer software as much as possible to produce "the world's first computer-generated musical". Several creative systems were used to generate this musical, which was performed in London's West End in 2016. This paper considers the challenge of evaluating this project. Current computational creativity evaluation methods are ill-suited to evaluating projects that involve creative input from multiple systems and people. Following recent inspiration within computational creativity research from interaction design, here the DECIDE evaluation framework is applied to evaluate the Beyond the Fence project. Evaluation finds that the project was reasonably successful at achieving the task of using computational generation to produce a credible musical. Lessons have been learned for future computational creativity projects though, particularly for affording creative software more agency and enabling software to interact with other creative partners. Upon reflection, the DECIDE framework emerges as a useful evaluation "checklist" (if not a tangible operational methodology) for evaluating multiple creative systems participating in a creative task.

  3. Inertial confinement fusion quarterly report, October--December 1992. Volume 3, No. 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dixit, S.N.

    1992-12-31

    This report contains papers on the following topics: The Beamlet Front End: prototype of a new pulse generation system; imaging biological objects with x-ray lasers; coherent XUV generation via high-order harmonic generation in rare gases; theory of high-order harmonic generation; two-dimensional computer simulations of ultra-intense, short-pulse laser-plasma interactions; neutron detectors for measuring the fusion burn history of ICF targets; the recirculator; and LASNEX evolves to exploit computer industry advances.

  4. Eddylicious: A Python package for turbulent inflow generation

    NASA Astrophysics Data System (ADS)

    Mukha, Timofey; Liefvendahl, Mattias

    2018-01-01

    A Python package for generating inflow for scale-resolving computer simulations of turbulent flow is presented. The purpose of the package is to unite existing inflow generation methods in a single code-base and make them accessible to users of various Computational Fluid Dynamics (CFD) solvers. The currently existing functionality consists of an accurate inflow generation method suitable for flows with a turbulent boundary layer inflow and input/output routines for coupling with the open-source CFD solver OpenFOAM.

  5. Scaffolding High School Students' Divergent Idea Generation in a Computer-Mediated Design and Technology Learning Environment

    ERIC Educational Resources Information Center

    Yeo, Tiong-Meng; Quek, Choon-Lang

    2014-01-01

    This comparative study investigates how two groups of design and technology students generated ideas in an asynchronous computer-mediated communication setting. The generated ideas were design ideas in the form of sketches. Each group comprised five students who were all 15 years of age. All the students were from the same secondary school but…

  6. GOTO Poetry.

    ERIC Educational Resources Information Center

    Kern, Alfred

    1983-01-01

    Describes an experimental course at Allegheny College in computer-generated poetry, which required students to deal simultaneously with grammar and rhetoric, poetics, the computer and BASIC, logic and artificial intelligence in order to create programs that would generate poetry. Examples of verses produced by course participants are included.…

  7. A System for Generating Instructional Computer Graphics.

    ERIC Educational Resources Information Center

    Nygard, Kendall E.; Ranganathan, Babusankar

    1983-01-01

    Description of the Tektronix-Based Interactive Graphics System for Instruction (TIGSI), which was developed for generating graphics displays in computer-assisted instruction materials, discusses several applications (e.g., reinforcing learning of concepts, principles, rules, and problem-solving techniques) and presents advantages of the TIGSI…

  8. GASPRNG: GPU accelerated scalable parallel random number generator library

    NASA Astrophysics Data System (ADS)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

    Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to be able to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications. Catalogue identifier: AEOI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: UTK license. No. of lines in distributed program, including test data, etc.: 167900 No. of bytes in distributed program, including test data, etc.: 1422058 Distribution format: tar.gz Programming language: C and CUDA. Computer: Any PC or workstation with NVIDIA GPU (Tested on Fermi GTX480, Tesla C1060, Tesla M2070). Operating system: Linux with CUDA version 4.0 or later. Should also run on MacOS, Windows, or UNIX. Has the code been vectorized or parallelized?: Yes. Parallelized using MPI directives. RAM: 512 MB˜ 732 MB (main memory on host CPU, depending on the data type of random numbers.) / 512 MB (GPU global memory) Classification: 4.13, 6.5. Nature of problem: Many computational science applications are able to consume large numbers of random numbers. For example, Monte Carlo simulations are able to consume limitless random numbers for the computation as long as resources for the computing are supported. Moreover, parallel computational science applications require independent streams of random numbers to attain statistically significant results. The SPRNG library provides this capability, but at a significant computational cost. The GASPRNG library presented here accelerates the generators of independent streams of random numbers using graphical processing units (GPUs). Solution method: Multiple copies of random number generators in GPUs allow a computational science application to consume large numbers of random numbers from independent, parallel streams. GASPRNG is a random number generators library to allow a computational science application to employ multiple copies of random number generators to boost performance. Users can interface GASPRNG with software code executing on microprocessors and/or GPUs. 
Running time: The tests provided take a few minutes to run.
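
    The usage model, many workers each consuming an independent stream of pseudorandom numbers, can be illustrated in plain Python. This is a hedged stand-in for the idea rather than GASPRNG's C/CUDA interface: numpy's SeedSequence spawning provides the independent streams, and a toy Monte Carlo estimate of pi plays the role of the application.

    ```python
    # Independent parallel random streams via seed spawning (not GASPRNG's API).
    import numpy as np

    def monte_carlo_pi(seed_seq, n=1_000_000):
        rng = np.random.default_rng(seed_seq)          # one independent generator per stream
        xy = rng.random((n, 2))
        return 4.0 * np.mean((xy**2).sum(axis=1) < 1.0)

    children = np.random.SeedSequence(12345).spawn(8)  # 8 statistically independent streams
    estimates = [monte_carlo_pi(s) for s in children]  # run serially here; in practice each
                                                       # stream would go to its own CPU/GPU worker
    print("pi estimate:", np.mean(estimates))
    ```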

  9. Generation and physical characteristics of the LANDSAT-1, -2 and -3 MSS computer compatible tapes

    NASA Technical Reports Server (NTRS)

    Thomas, V. L.

    1977-01-01

    The generation and format of the LANDSAT 1, 2, and 3 system corrected multispectral scanner computer compatible tapes are discussed. Included in the discussion are the spacecraft sensors, scene characteristics, the transmission of data, and the conversion of the data to computer compatible tapes. Also included in the discussion are geometric and radiometric corrections, tape formats, and the physical characteristics of the tape.

  10. Generation and physical characteristics of the Landsat 1 and 2 MSS computer compatible tapes

    NASA Technical Reports Server (NTRS)

    Thomas, V. L.

    1975-01-01

    The generation and format of the Landsat 1 and 2 system corrected multispectral scanner computer compatible tapes are discussed. Included in the discussion are the spacecraft sensors, scene characteristics, the transmission of data, and the conversion of the data to computer compatible tapes at the NASA Data Processing Facility. Geometric and radiometric corrections, tape formats, and the physical characteristics of the tape are also described.

  11. A ‘reader’ unit of the chemical computer

    PubMed Central

    Smelov, Pavel S.

    2018-01-01

    We suggest the main principles and functional units of the parallel chemical computer, namely, (i) a generator (which is a network of coupled oscillators) of oscillatory dynamic modes, (ii) a unit which is able to recognize these modes (a ‘reader’) and (iii) a decision-making unit, which analyses the current mode, compares it with the external signal and sends a command to the mode generator to switch it to another dynamical regime. Three main methods for the functioning of the reader unit are suggested and tested computationally: (a) the polychronization method, which explores the differences between the phases of the generator oscillators; (b) the amplitude method, which detects clusters of the generator; and (c) the resonance method, which is based on the resonances between the frequencies of the generator modes and the internal frequencies of the damped oscillations of the reader cells. The pros and cons of these methods are analysed. PMID:29410852

  12. A study of sound generation in subsonic rotors, volume 2

    NASA Technical Reports Server (NTRS)

    Chalupnik, J. D.; Clark, L. T.

    1975-01-01

    Computer programs were developed for use in the analysis of sound generation by subsonic rotors. Program AIRFOIL computes the spectrum of radiated sound from a single airfoil immersed in a laminar flow field. Program ROTOR extends this to a rotating frame and provides a model for sound generation in subsonic rotors; it also computes tone sound generation due to steady-state forces on the blades. Program TONE uses a moving source analysis to generate a time series for an array of forces moving in a circular path. The resultant time series are then Fourier transformed to render the results in spectral form. Program SDATA is a standard time series analysis package. It reads in two discrete time series, forms auto and cross covariances, and normalizes these to form correlations. The program then transforms the covariances to yield auto and cross power spectra by means of a Fourier transformation.
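
    A compact modern analogue of the SDATA processing chain (cross-covariance, normalization to a correlation, and auto/cross power spectra via the Fourier transform) can be written with scipy. The two signals and the sampling rate below are assumed test data, not the rotor measurements.

    ```python
    # Auto/cross covariance and power spectra for two time series (illustrative data).
    import numpy as np
    from scipy import signal

    fs = 1000.0                                   # sampling rate in Hz (assumed)
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(1)
    x = np.sin(2 * np.pi * 60 * t) + 0.5 * rng.standard_normal(t.size)
    y = np.sin(2 * np.pi * 60 * t + 0.8) + 0.5 * rng.standard_normal(t.size)

    # cross-covariance and its normalized form (cross-correlation coefficient)
    xcov = signal.correlate(x - x.mean(), y - y.mean(), mode="full") / x.size
    rho = xcov / (x.std() * y.std())

    # auto and cross power spectra via Welch's method
    f, Pxx = signal.welch(x, fs=fs)
    f, Pyy = signal.welch(y, fs=fs)
    f, Pxy = signal.csd(x, y, fs=fs)
    print("peak of |Pxy| near", f[np.argmax(np.abs(Pxy))], "Hz")   # close to the 60 Hz tone
    ```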

  13. An empirical generative framework for computational modeling of language acquisition.

    PubMed

    Waterfall, Heidi R; Sandbank, Ben; Onnis, Luca; Edelman, Shimon

    2010-06-01

    This paper reports progress in developing a computer model of language acquisition in the form of (1) a generative grammar that is (2) algorithmically learnable from realistic corpus data, (3) viable in its large-scale quantitative performance and (4) psychologically real. First, we describe new algorithmic methods for unsupervised learning of generative grammars from raw CHILDES data and give an account of the generative performance of the acquired grammars. Next, we summarize findings from recent longitudinal and experimental work that suggests how certain statistically prominent structural properties of child-directed speech may facilitate language acquisition. We then present a series of new analyses of CHILDES data indicating that the desired properties are indeed present in realistic child-directed speech corpora. Finally, we suggest how our computational results, behavioral findings, and corpus-based insights can be integrated into a next-generation model aimed at meeting the four requirements of our modeling framework.

  14. Raster Scan Computer Image Generation (CIG) System Based On Refresh Memory

    NASA Astrophysics Data System (ADS)

    Dichter, W.; Doris, K.; Conkling, C.

    1982-06-01

    A full color, Computer Image Generation (CIG) raster visual system has been developed which provides a high level of training sophistication by utilizing advanced semiconductor technology and innovative hardware and firmware techniques. Double buffered refresh memory and efficient algorithms eliminate the problem of conventional raster line ordering by allowing the generated image to be stored in a random fashion. Modular design techniques and simplified architecture provide significant advantages in reduced system cost, standardization of parts, and high reliability. The major system components are a general purpose computer to perform interfacing and data base functions; a geometric processor to define the instantaneous scene image; a display generator to convert the image to a video signal; an illumination control unit which provides final image processing; and a CRT monitor for display of the completed image. Additional optional enhancements include texture generators, increased edge and occultation capability, curved surface shading, and data base extensions.

  15. Helical gears with circular arc teeth: Generation, geometry, precision and adjustment to errors, computer aided simulation of conditions of meshing and bearing contact

    NASA Technical Reports Server (NTRS)

    Litvin, Faydor L.; Tsay, Chung-Biau

    1987-01-01

    The authors have proposed a method for the generation of circular arc helical gears which is based on the application of standard equipment, worked out all aspects of the geometry of the gears, proposed methods for the computer aided simulation of conditions of meshing and bearing contact, investigated the influence of manufacturing and assembly errors, and proposed methods for the adjustment of gears to these errors. The results of computer aided solutions are illustrated with computer graphics.

  16. Extracting Depth From Motion Parallax in Real-World and Synthetic Displays

    NASA Technical Reports Server (NTRS)

    Hecht, Heiko; Kaiser, Mary K.; Aiken, William; Null, Cynthia H. (Technical Monitor)

    1994-01-01

    In psychophysical studies on human sensitivity to visual motion parallax (MP), the use of computer displays is pervasive. However, a number of potential problems are associated with such displays: cue conflicts arise when observers accommodate to the screen surface, and observer head and body movements are often not reflected in the displays. We investigated observers' sensitivity to depth information in MP (slant, depth order, relative depth) using various real-world displays and their computer-generated analogs. Angle judgments of real-world stimuli were consistently superior to judgments that were based on computer-generated stimuli. Similar results were found for perceived depth order and relative depth. Perceptual competence of observers tends to be underestimated in research that is based on computer generated displays. Such findings cannot be generalized to more realistic viewing situations.

  17. Conservative zonal schemes for patched grids in 2 and 3 dimensions

    NASA Technical Reports Server (NTRS)

    Hessenius, Kristin A.

    1987-01-01

    The computation of flow over complex geometries, such as realistic aircraft configurations, poses difficult grid generation problems for computational aerodynamicists. The creation of a traditional, single-module grid of acceptable quality about an entire configuration may be impossible even with the most sophisticated of grid generation techniques. A zonal approach, wherein the flow field is partitioned into several regions within which grids are independently generated, is a practical alternative for treating complicated geometries. This technique not only alleviates the problems of discretizing a complex region, but also facilitates a block processing approach to computation thereby circumventing computer memory limitations. The use of such a zonal scheme, however, requires the development of an interfacing procedure that ensures a stable, accurate, and conservative calculation for the transfer of information across the zonal borders.

  18. Oscillatory Threshold Logic

    PubMed Central

    Borresen, Jon; Lynch, Stephen

    2012-01-01

    In the 1940s, the first generation of modern computers used vacuum tube oscillators as their principal components; however, with the development of the transistor, such oscillator-based computers quickly became obsolete. As the demand for faster and lower-power computers continues, transistors are themselves approaching their theoretical limit, and emerging technologies must eventually supersede them. With the development of optical oscillators and Josephson junction technology, we are again presented with the possibility of using oscillators as the basic components of computers, and it is possible that the next generation of computers will be composed almost entirely of oscillatory devices. Here, we demonstrate how coupled threshold oscillators may be used to perform binary logic in a manner entirely consistent with modern computer architectures. We describe a variety of computational circuitry and demonstrate working oscillator models of both computation and memory. PMID:23173034

  19. Generating and executing programs for a floating point single instruction multiple data instruction set architecture

    DOEpatents

    Gschwind, Michael K

    2013-04-16

    Mechanisms for generating and executing programs for a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA) are provided. A computer program product comprising a computer recordable medium having a computer readable program recorded thereon is provided. The computer readable program, when executed on a computing device, causes the computing device to receive one or more instructions and execute the one or more instructions using logic in an execution unit of the computing device. The logic implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA), based on data stored in a vector register file of the computing device. The vector register file is configured to store both scalar and floating point values as vectors having a plurality of vector elements.

  20. Development of New Generation of Multibody System Computer Software

    DTIC Science & Technology

    2012-04-12

    Shabana, Ahmed A. (University of Illinois at Chicago); Jayakumar, Paramsothy; Letherwood, Michael

  1. Molecular electronics: The technology of sixth generation computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarvis, M.T.; Miller, R.K.

    1987-01-01

    In February 1986, Japan began the 6th Generation project. At the 1987 Economic Summit in Venice, Prime Minister Yasuhiro Nakasone opened the project to world collaboration. A project director suggests that the 6th Generation "may just be a turning point for human society." The major rationale for building molecular electronic devices is to achieve advances in computational densities and speeds. Proposed chromophore chains for molecular-scale chips, for example, could be spaced closer than today's silicon elements by a factor of almost 100. This book describes the research and proposed designs for molecular electronic devices and computers. It examines specific potential applications and the relationship of molecular electronics to silicon technology, presents the first published survey of experts on research issues, applications, and forecasts of future developments, and also includes a market forecast. An interesting suggestion of the survey is that the chemical industry may become a significant factor in the computer industry as the sixth generation unfolds.

  2. CAGI: Computer Aided Grid Interface. A work in progress

    NASA Technical Reports Server (NTRS)

    Soni, Bharat K.; Yu, Tzu-Yi; Vaughn, David

    1992-01-01

    Progress realized in the development of a Computer Aided Grid Interface (CAGI) software system for integrating CAD/CAM geometric system output and/or Initial Graphics Exchange Specification (IGES) files, geometry manipulations associated with grid generation, and robust grid generation methodologies is presented. CAGI is being developed in a modular fashion and will offer a fast, efficient, and economical response to geometry/grid preparation, allowing basic geometry to be upgraded step by step, interactively and under permanent visual control, while minimizing the differences between the actual hardware surface descriptions and the corresponding numerical analog. The computer code GENIE is used as a basis. The Non-Uniform Rational B-Splines (NURBS) representation of sculptured surfaces is utilized for surface grid redistribution. The computer aided analysis system, PATRAN, is adapted as a CAD/CAM system. The progress realized in NURBS surface grid generation, the development of the IGES transformer, and geometry adaptation using PATRAN will be presented along with their applicability to grid generation associated with rocket propulsion applications.

  3. The Old Computers' Home.

    ERIC Educational Resources Information Center

    Angier, Natalie

    1983-01-01

    The Computer Museum in Marlborough, Massachusetts houses old and not-so-old calculators, famous old computers and parts of computers, photographs and assorted memorabilia, computer-generated murals, and even a computer made of Tinkertoys that plays tick-tack-toe. The development of the museum and selected exhibits is described. (Author/JN)

  4. User's guide to the NOZL3D and NOZLIC computer programs

    NASA Technical Reports Server (NTRS)

    Thomas, P. D.

    1980-01-01

    Complete FORTRAN listings and running instructions are given for a set of computer programs that perform an implicit numerical solution to the unsteady Navier-Stokes equations to predict the flow characteristics and performance of nonaxisymmetric nozzles. The set includes the NOZL3D program, which performs the flow computations; the NOZLIC program, which sets up the flow field initial conditions for general nozzle configurations, and also generates the computational grid for simple two dimensional and axisymmetric configurations; and the RGRIDD program, which generates the computational grid for complicated three dimensional configurations. The programs are designed specifically for the NASA-Langley CYBER 175 computer, and employ auxiliary disk files for primary data storage. Input instructions and computed results are given for four test cases that include two dimensional, three dimensional, and axisymmetric configurations.

  5. Computer programming for generating visual stimuli.

    PubMed

    Bukhari, Farhan; Kurylo, Daniel D

    2008-02-01

    Critical to vision research is the generation of visual displays with precise control over stimulus metrics. Generating stimuli often requires adapting commercial software or developing specialized software for specific research applications. In order to facilitate this process, we give here an overview that allows nonexpert users to generate and customize stimuli for vision research. We first give a review of relevant hardware and software considerations, to allow the selection of display hardware, operating system, programming language, and graphics packages most appropriate for specific research applications. We then describe the framework of a generic computer program that can be adapted for use with a broad range of experimental applications. Stimuli are generated in the context of trial events, allowing the display of text messages, the monitoring of subject responses and reaction times, and the inclusion of contingency algorithms. This approach allows direct control and management of computer-generated visual stimuli while utilizing the full capabilities of modern hardware and software systems. The flowchart and source code for the stimulus-generating program may be downloaded from www.psychonomic.org/archive.
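
    The generic trial framework described above (present a text message, record the response and reaction time, and apply a simple contingency) can be sketched as a bare-bones console program. This is an illustrative stand-in with hypothetical stimuli and response keys, not the downloadable program referenced in the abstract; a real experiment would replace the console I/O with a graphics toolkit and a calibrated display.

    ```python
    # Console skeleton of a trial loop: stimulus, timed response, simple contingency.
    import random
    import time

    def run_trial(stimulus, correct_key):
        print("\n" + stimulus)
        t0 = time.perf_counter()
        response = input("response (f/j): ").strip().lower()
        rt = time.perf_counter() - t0
        return response == correct_key, rt

    def run_block(n_trials=10):
        results = []
        for _ in range(n_trials):
            side = random.choice(["LEFT", "RIGHT"])
            correct, rt = run_trial("Target on the " + side, "f" if side == "LEFT" else "j")
            results.append((correct, rt))
            if rt > 2.0:                     # contingency: prompt if the response was slow
                print("Please respond faster.")
        return results

    if __name__ == "__main__":
        data = run_block(5)
        print("accuracy:", sum(c for c, _ in data) / len(data))
    ```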

  6. Method of performing computational aeroelastic analyses

    NASA Technical Reports Server (NTRS)

    Silva, Walter A. (Inventor)

    2011-01-01

    Computational aeroelastic analyses typically use a mathematical model for the structural modes of a flexible structure and a nonlinear aerodynamic model that can generate a plurality of unsteady aerodynamic responses based on the structural modes for conditions defining an aerodynamic condition of the flexible structure. In the present invention, a linear state-space model is generated using a single execution of the nonlinear aerodynamic model for all of the structural modes where a family of orthogonal functions is used as the inputs. Then, static and dynamic aeroelastic solutions are generated using computational interaction between the mathematical model and the linear state-space model for a plurality of periodic points in time.

  7. Computer Training for Seniors: An Academic-Community Partnership

    ERIC Educational Resources Information Center

    Sanders, Martha J.; O'Sullivan, Beth; DeBurra, Katherine; Fedner, Alesha

    2013-01-01

    Computer technology is integral to information retrieval, social communication, and social interaction. However, only 47% of seniors aged 65 and older use computers. The purpose of this study was to determine the impact of a client-centered computer program on computer skills, attitudes toward computer use, and generativity in novice senior…

  8. A Call for Computational Thinking in Undergraduate Psychology

    ERIC Educational Resources Information Center

    Anderson, Nicole D.

    2016-01-01

    Computational thinking is an approach to problem solving that is typically employed by computer programmers. The advantage of this approach is that solutions can be generated through algorithms that can be implemented as computer code. Although computational thinking has historically been a skill that is exclusively taught within computer science,…

  9. Improving Undergraduates' Critique via Computer Mediated Communication

    ERIC Educational Resources Information Center

    Mohamad, Maslawati; Musa, Faridah; Amin, Maryam Mohamed; Mufti, Norlaila; Latiff, Rozmel Abdul; Sallihuddin, Nani Rahayu

    2014-01-01

    Our current university students, labeled as "Generation Y" or Millennials, are different from previous generations due to wide exposure to media. Being technologically savvy, they are accustomed to Internet for information and social media for socializing. In line with this current trend, teaching through computer mediated communication…

  10. SYNTOR: A synthetic daily weather generator version 3.4 user manual

    USDA-ARS?s Scientific Manuscript database

    Existing records of weather observations are often too short to conduct long duration hydrologic and environmental computer simulations. A computer program can be used to generate synthetic weather data to increase the length of existing weather records. SYNTOR, which stands for SYNthetic weather g...
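
    Daily weather generators of this kind are commonly built around a two-state Markov chain for wet/dry occurrence and a gamma distribution for wet-day precipitation depths. The sketch below illustrates that common structure; the transition probabilities and gamma parameters are assumed values, and SYNTOR's own model and parameterization may differ.

    ```python
    # Two-state Markov chain + gamma precipitation: a common daily weather-generator core.
    import numpy as np

    def generate_precip(n_days, p_wet_given_dry=0.2, p_wet_given_wet=0.6,
                        shape=0.8, scale=8.0, seed=0):
        rng = np.random.default_rng(seed)
        precip = np.zeros(n_days)
        wet = False
        for d in range(n_days):
            p = p_wet_given_wet if wet else p_wet_given_dry
            wet = rng.random() < p                    # Markov-chain wet/dry occurrence
            if wet:
                precip[d] = rng.gamma(shape, scale)   # wet-day depth in mm
        return precip

    series = generate_precip(365 * 30)                # 30 synthetic years
    print("mean annual precipitation (mm):", series.sum() / 30)
    ```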

  11. 19 CFR 191.7 - General manufacturing drawback ruling.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Section 191.7 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY... production under § 191.2(q) of this subpart. (2) Computer-generated number. With the letter of acknowledgment the drawback office shall include the unique computer-generated number assigned to the acknowledgment...

  12. 19 CFR 191.7 - General manufacturing drawback ruling.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Section 191.7 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY... production under § 191.2(q) of this subpart. (2) Computer-generated number. With the letter of acknowledgment the drawback office shall include the unique computer-generated number assigned to the acknowledgment...

  13. 19 CFR 191.7 - General manufacturing drawback ruling.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Section 191.7 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY... production under § 191.2(q) of this subpart. (2) Computer-generated number. With the letter of acknowledgment the drawback office shall include the unique computer-generated number assigned to the acknowledgment...

  14. Fog Computing and Edge Computing Architectures for Processing Data From Diabetes Devices Connected to the Medical Internet of Things.

    PubMed

    Klonoff, David C

    2017-07-01

    The Internet of Things (IoT) is generating an immense volume of data. With cloud computing, medical sensor and actuator data can be stored and analyzed remotely by distributed servers. The results can then be delivered via the Internet. The number of devices in IoT includes such wireless diabetes devices as blood glucose monitors, continuous glucose monitors, insulin pens, insulin pumps, and closed-loop systems. The cloud model for data storage and analysis is increasingly unable to process the data avalanche, and processing is being pushed out to the edge of the network closer to where the data-generating devices are. Fog computing and edge computing are two architectures for data handling that can offload data from the cloud, process it nearby the patient, and transmit information machine-to-machine or machine-to-human in milliseconds or seconds. Sensor data can be processed near the sensing and actuating devices with fog computing (with local nodes) and with edge computing (within the sensing devices). Compared to cloud computing, fog computing and edge computing offer five advantages: (1) greater data transmission speed, (2) less dependence on limited bandwidths, (3) greater privacy and security, (4) greater control over data generated in foreign countries where laws may limit use or permit unwanted governmental access, and (5) lower costs because more sensor-derived data are used locally and less data are transmitted remotely. Connected diabetes devices almost all use fog computing or edge computing because diabetes patients require a very rapid response to sensor input and cannot tolerate delays for cloud computing.

  15. A Computational-Experimental Development of Vortex Generator Use for a Transitioning S-Diffuser

    NASA Technical Reports Server (NTRS)

    Wendt, Bruce J.; Dudek, Julianne C.

    1996-01-01

    The development of an effective design strategy for surface-mounted vortex generator arrays in a subsonic diffuser is described in this report. This strategy uses the strengths of both computational and experimental analyses to determine beneficial vortex generator locations and sizes. A parabolized Navier-Stokes solver, RNS3D, was used to establish proper placement of the vortex generators for reduction in circumferential total pressure distortion. Experimental measurements were used to determine proper vortex generator sizing to minimize total pressure recovery losses associated with vortex generator device drag. The best result achieved a 59% reduction in the distortion index DC60, with a 0.3% reduction in total pressure recovery.

  16. Bayesian Computation Emerges in Generic Cortical Microcircuits through Spike-Timing-Dependent Plasticity

    PubMed Central

    Nessler, Bernhard; Pfeiffer, Michael; Buesing, Lars; Maass, Wolfgang

    2013-01-01

    The principles by which networks of neurons compute, and how spike-timing dependent plasticity (STDP) of synaptic weights generates and maintains their computational function, are unknown. Preceding work has shown that soft winner-take-all (WTA) circuits, where pyramidal neurons inhibit each other via interneurons, are a common motif of cortical microcircuits. We show through theoretical analysis and computer simulations that Bayesian computation is induced in these network motifs through STDP in combination with activity-dependent changes in the excitability of neurons. The fundamental components of this emergent Bayesian computation are priors that result from adaptation of neuronal excitability and implicit generative models for hidden causes that are created in the synaptic weights through STDP. In fact, a surprising result is that STDP is able to approximate a powerful principle for fitting such implicit generative models to high-dimensional spike inputs: Expectation Maximization. Our results suggest that the experimentally observed spontaneous activity and trial-to-trial variability of cortical neurons are essential features of their information processing capability, since their functional role is to represent probability distributions rather than static neural codes. Furthermore it suggests networks of Bayesian computation modules as a new model for distributed information processing in the cortex. PMID:23633941

  17. Computer generated animation and movie production at LARC: A case study

    NASA Technical Reports Server (NTRS)

    Gates, R. L.; Matthews, C. G.; Vonofenheim, W. H.; Randall, D. P.; Jones, K. H.

    1984-01-01

    The process of producing computer generated 16mm movies using the MOVIE.BYU software package developed by Brigham Young University and the currently available hardware technology at the Langley Research Center is described. A general overview relates the procedures to a specific application. Details are provided which describe the data used, preparation of a storyboard, key frame generation, the actual animation, title generation, filming, and processing/developing the final product. Problems encountered in each of these areas are identified. Both hardware and software problems are discussed along with proposed solutions and recommendations.

  18. A Computer Program for the Calculation of Three-Dimensional Transonic Nacelle/Inlet Flowfields

    NASA Technical Reports Server (NTRS)

    Vadyak, J.; Atta, E. H.

    1983-01-01

    A highly efficient computer analysis was developed for predicting transonic nacelle/inlet flowfields. This algorithm can compute the three dimensional transonic flowfield about axisymmetric (or asymmetric) nacelle/inlet configurations at zero or nonzero incidence. The flowfield is determined by solving the full-potential equation in conservative form on a body-fitted curvilinear computational mesh. The difference equations are solved using the AF2 approximate factorization scheme. This report presents a discussion of the computational methods used to both generate the body-fitted curvilinear mesh and to obtain the inviscid flow solution. Computed results and correlations with existing methods and experiment are presented. Also presented are discussions on the organization of the grid generation (NGRIDA) computer program and the flow solution (NACELLE) computer program, descriptions of the respective subroutines, definitions of the required input parameters for both algorithms, a brief discussion on interpretation of the output, and sample cases to illustrate application of the analysis.

  19. Next Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational, and Systems Biology (Final Report)

    EPA Science Inventory

    EPA announced the release of the final report, Next Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational, and Systems Biology. This report describes new approaches that are faster, less resource intensive, and more robust that can help ...

  20. Associative Algorithms for Computational Creativity

    ERIC Educational Resources Information Center

    Varshney, Lav R.; Wang, Jun; Varshney, Kush R.

    2016-01-01

    Computational creativity, the generation of new, unimagined ideas or artifacts by a machine that are deemed creative by people, can be applied in the culinary domain to create novel and flavorful dishes. In fact, we have done so successfully using a combinatorial algorithm for recipe generation combined with statistical models for recipe ranking…

  1. An Examination of the Utility of Non-Linear Dynamics Techniques for Analyzing Human Information Behaviors.

    ERIC Educational Resources Information Center

    Snyder, Herbert; Kurtze, Douglas

    1992-01-01

    Discusses the use of chaos, or nonlinear dynamics, for investigating computer-mediated communication. A comparison between real, human-generated data from a computer network and similarly constructed random-generated data is made, and mathematical procedures for determining chaos are described. (seven references) (LRW)

  2. A note on the generation of phase plane plots on a digital computer. [for solution of nonlinear differential equations

    NASA Technical Reports Server (NTRS)

    Simon, M. K.

    1980-01-01

    A technique is presented for generating phase plane plots on a digital computer which circumvents the difficulties associated with more traditional methods of numerically solving nonlinear differential equations. In particular, the nonlinear differential equation of operation is formulated.
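
    As a present-day illustration of the same task, the following sketch integrates a standard nonlinear oscillator and plots its trajectories in the phase plane; the van der Pol equation and all numerical settings are stand-ins, not Simon's formulation.

      import numpy as np
      from scipy.integrate import solve_ivp
      import matplotlib.pyplot as plt

      def van_der_pol(t, y, mu=1.0):
          """y = (x, xdot); a classic nonlinear oscillator used as a stand-in example."""
          x, xdot = y
          return [xdot, mu * (1.0 - x**2) * xdot - x]

      # integrate several trajectories from different initial conditions
      for x0 in [0.5, 1.5, 3.0]:
          sol = solve_ivp(van_der_pol, (0.0, 30.0), [x0, 0.0], max_step=0.01)
          plt.plot(sol.y[0], sol.y[1], lw=0.8)

      plt.xlabel("x")
      plt.ylabel("dx/dt")
      plt.title("Phase plane of the van der Pol oscillator")
      plt.savefig("phase_plane.png")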

  3. Enhancing Learning Outcomes in Computer-Based Training via Self-Generated Elaboration

    ERIC Educational Resources Information Center

    Cuevas, Haydee M.; Fiore, Stephen M.

    2014-01-01

    The present study investigated the utility of an instructional strategy known as the "query method" for enhancing learning outcomes in computer-based training. The query method involves an embedded guided, sentence generation task requiring elaboration of key concepts in the training material that encourages learners to "stop and…

  4. CINDA-3G: Improved Numerical Differencing Analyzer Program for Third-Generation Computers

    NASA Technical Reports Server (NTRS)

    Gaski, J. D.; Lewis, D. R.; Thompson, L. R.

    1970-01-01

    The goal of this work was to develop a new and versatile program to supplement or replace the original Chrysler Improved Numerical Differencing Analyzer (CINDA) thermal analyzer program in order to take advantage of the improved systems software and machine speeds of the third-generation computers.

  5. Development of a fourth generation predictive capability maturity model.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hills, Richard Guy; Witkowski, Walter R.; Urbina, Angel

    2013-09-01

    The Predictive Capability Maturity Model (PCMM) is an expert elicitation tool designed to characterize and communicate the completeness of the approaches used for computational model definition, verification, validation, and uncertainty quantification for an intended application. The primary application of this tool at Sandia National Laboratories (SNL) has been for physics-based computational simulations in support of nuclear weapons applications. The two main goals of a PCMM evaluation are 1) the communication of computational simulation capability, accurately and transparently, and 2) the development of input for effective planning. As a result of the increasing importance of computational simulation to SNL's mission, the PCMM has evolved through multiple generations with the goal of providing more clarity, rigor, and completeness in its application. This report describes the approach used to develop the fourth generation of the PCMM.

  6. Automated CFD Database Generation for a 2nd Generation Glide-Back-Booster

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.; Rogers, Stuart E.; Aftosmis, Michael J.; Pandya, Shishir A.; Ahmad, Jasim U.; Tejmil, Edward

    2003-01-01

    A new software tool, AeroDB, is used to compute thousands of Euler and Navier-Stokes solutions for a 2nd generation glide-back booster in one week. The solution process exploits a common job-submission grid environment using 13 computers located at 4 different geographical sites. Process automation and web-based access to the database greatly reduces the user workload, removing much of the tedium and tendency for user input errors. The database consists of forces, moments, and solution files obtained by varying the Mach number, angle of attack, and sideslip angle. The forces and moments compare well with experimental data. Stability derivatives are also computed using a monotone cubic spline procedure. Flow visualization and three-dimensional surface plots are used to interpret and characterize the nature of computed flow fields.
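
    The final step can be illustrated with a monotone cubic (PCHIP) fit through force or moment coefficients, differentiated to obtain a stability derivative; the pitching-moment values below are made up for the example and are not AeroDB data.

      import numpy as np
      from scipy.interpolate import PchipInterpolator

      # hypothetical pitching-moment coefficient vs. angle of attack (deg)
      alpha = np.array([-4.0, -2.0, 0.0, 2.0, 4.0, 6.0])
      cm    = np.array([0.06, 0.03, 0.00, -0.04, -0.09, -0.15])

      spline = PchipInterpolator(alpha, cm)   # monotone cubic interpolant through the data
      cm_alpha = spline.derivative()          # d(Cm)/d(alpha), per degree

      print("Cm_alpha at 1 deg AoA:", float(cm_alpha(1.0)))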

  7. Preliminary Results from the Application of Automated Adjoint Code Generation to CFL3D

    NASA Technical Reports Server (NTRS)

    Carle, Alan; Fagan, Mike; Green, Lawrence L.

    1998-01-01

    This report describes preliminary results obtained using an automated adjoint code generator for Fortran to augment a widely-used computational fluid dynamics flow solver to compute derivatives. These preliminary results with this augmented code suggest that, even in its infancy, the automated adjoint code generator can accurately and efficiently deliver derivatives for use in transonic Euler-based aerodynamic shape optimization problems with hundreds to thousands of independent design variables.

  8. Forecasting hotspots using predictive visual analytics approach

    DOEpatents

    Maciejewski, Ross; Hafen, Ryan; Rudolph, Stephen; Cleveland, William; Ebert, David

    2014-12-30

    A method for forecasting hotspots is provided. The method may include the steps of receiving input data at an input of the computational device, generating a temporal prediction based on the input data, generating a geospatial prediction based on the input data, and generating output data based on the time series and geospatial predictions. The output data may be configured to display at least one user interface at an output of the computational device.
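
    A loose sketch of the two prediction stages named in the claim, assuming a seasonal-naive temporal forecast and a Gaussian kernel density estimate for the geospatial component; the synthetic data and these particular model choices are placeholders rather than the patented method.

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(1)

      # input data: daily event counts and event coordinates (synthetic)
      daily_counts = rng.poisson(lam=[5, 6, 8, 12, 9, 6, 5] * 8)      # 8 weeks with weekly seasonality
      coords = rng.normal(loc=[[0.0, 0.0]], scale=0.5, size=(400, 2))  # event locations

      # temporal prediction: seasonal-naive forecast for the next 7 days
      period = 7
      temporal_forecast = daily_counts[-period:]

      # geospatial prediction: kernel density surface over a grid
      kde = gaussian_kde(coords.T)
      xx, yy = np.mgrid[-2:2:64j, -2:2:64j]
      density = kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)

      # combined output: expected events per grid cell for each forecast day
      cell_prob = density / density.sum()
      hotspot_maps = np.stack([n * cell_prob for n in temporal_forecast])
      print(hotspot_maps.shape)   # (7, 64, 64)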

  9. On the Stefan Problem with Volumetric Energy Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John Crepeau; Ali Siahpush; Blaine Spotten

    2009-11-01

    This paper presents results of solid-liquid phase change, driven by volumetric energy generation, in a vertical cylinder. We show excellent agreement between a quasi-static, approximate analytical solution valid for Stefan numbers less than one, and a computational model solved using the CFD code FLUENT®. A computational study also shows the effect that the volumetric energy generation has on both the mushy zone thickness and convection in the melt during phase change.

  10. Computer generated hologram from point cloud using graphics processor.

    PubMed

    Chen, Rick H-Y; Wilkinson, Timothy D

    2009-12-20

    Computer generated holography is an extremely demanding and complex task when it comes to providing realistic reconstructions with full parallax, occlusion, and shadowing. We present an algorithm designed for data-parallel computing on modern graphics processing units to alleviate the computational burden. We apply Gaussian interpolation to create a continuous surface representation from discrete input object points. The algorithm maintains a potential occluder list for each individual hologram plane sample to keep the number of visibility tests to a minimum. We experimented with two approximations that simplify and accelerate occlusion computation. It is observed that letting several neighboring hologram plane samples share visibility information on object points leads to significantly faster computation without causing noticeable artifacts in the reconstructed images. Computing a reduced sample set via nonuniform sampling is also found to be an effective acceleration technique.
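
    Stripped of the Gaussian surface interpolation, occlusion handling, and GPU parallelism described above, the core computation is a superposition of spherical waves from the object points; a minimal CPU version of that inner sum might look as follows (wavelength, pixel pitch, and object points are arbitrary assumptions).

      import numpy as np

      wavelength = 532e-9                 # metres
      k = 2 * np.pi / wavelength
      pitch = 8e-6                        # hologram sample pitch (metres)
      N = 512                             # hologram is N x N samples

      # hypothetical object points: (x, y, z, amplitude)
      points = np.array([
          [ 0.0005,  0.0,    0.05, 1.0],
          [-0.0003,  0.0004, 0.06, 0.8],
          [ 0.0,    -0.0006, 0.07, 0.6],
      ])

      xs = (np.arange(N) - N / 2) * pitch
      X, Y = np.meshgrid(xs, xs)

      field = np.zeros((N, N), dtype=complex)
      for px, py, pz, amp in points:
          r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
          field += amp * np.exp(1j * k * r) / r   # spherical wave from each object point

      hologram = np.angle(field)                  # phase-only CGH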

  11. Computer-generated mineral commodity deposit maps

    USGS Publications Warehouse

    Schruben, Paul G.; Hanley, J. Thomas

    1983-01-01

    This report describes an automated method of generating deposit maps of mineral commodity information. In addition, it serves as a user's manual for the authors' mapping system. Procedures were developed which allow commodity specialists to enter deposit information, retrieve selected data, and plot deposit symbols in any geographic area within the conterminous United States. The mapping system uses both micro- and mainframe computers. The microcomputer is used to input and retrieve information, thus minimizing computing charges. The mainframe computer is used to generate map plots which are printed by a Calcomp plotter. Selector V data base system is employed for input and retrieval on the microcomputer. A general mapping program (Genmap) was written in FORTRAN for use on the mainframe computer. Genmap can plot fifteen symbol types (for point locations) in three sizes. The user can assign symbol types to data items interactively. Individual map symbols can be labeled with a number or the deposit name. Genmap also provides several geographic boundary file and window options.

  12. Learning Universal Computations with Spikes

    PubMed Central

    Thalmeier, Dominik; Uhlmann, Marvin; Kappen, Hilbert J.; Memmesheimer, Raoul-Martin

    2016-01-01

    Providing the neurobiological basis of information processing in higher animals, spiking neural networks must be able to learn a variety of complicated computations, including the generation of appropriate, possibly delayed reactions to inputs and the self-sustained generation of complex activity patterns, e.g. for locomotion. Many such computations require previous building of intrinsic world models. Here we show how spiking neural networks may solve these different tasks. Firstly, we derive constraints under which classes of spiking neural networks lend themselves as substrates of powerful general-purpose computing. The networks contain dendritic or synaptic nonlinearities and have a constrained connectivity. We then combine such networks with learning rules for outputs or recurrent connections. We show that this allows even difficult benchmark tasks to be learned, such as the self-sustained generation of desired low-dimensional chaotic dynamics or memory-dependent computations. Furthermore, we show how spiking networks can build models of external world systems and use the acquired knowledge to control them. PMID:27309381

  13. Efficient storage, computation, and exposure of computer-generated holograms by electron-beam lithography.

    PubMed

    Newman, D M; Hawley, R W; Goeckel, D L; Crawford, R D; Abraham, S; Gallagher, N C

    1993-05-10

    An efficient storage format was developed for computer-generated holograms for use in electron-beam lithography. This method employs run-length encoding and Lempel-Ziv-Welch compression and succeeds in exposing holograms that were previously infeasible owing to the hologram's tremendous pattern-data file size. These holograms also require significant computation; thus the algorithm was implemented on a parallel computer, which improved performance by 2 orders of magnitude. The decompression algorithm was integrated into the Cambridge electron-beam machine's front-end processor. Although this provides a much-needed capability, some hardware enhancements will be required in the future to overcome inadequacies in the current front-end processor that result in a lengthy exposure time.
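
    As a toy illustration of the first stage of such a storage format, a run-length encoder for a binary fringe-pattern row is sketched below; the Lempel-Ziv-Welch stage and the actual pattern-data format of the paper are not reproduced.

      def run_length_encode(bits):
          """Encode a binary sequence as (value, run_length) pairs."""
          runs = []
          prev, count = bits[0], 1
          for b in bits[1:]:
              if b == prev:
                  count += 1
              else:
                  runs.append((prev, count))
                  prev, count = b, 1
          runs.append((prev, count))
          return runs

      row = [0, 0, 0, 1, 1, 0, 0, 0, 0, 1]
      print(run_length_encode(row))   # [(0, 3), (1, 2), (0, 4), (1, 1)]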

  14. A grid-embedding transonic flow analysis computer program for wing/nacelle configurations

    NASA Technical Reports Server (NTRS)

    Atta, E. H.; Vadyak, J.

    1983-01-01

    An efficient grid-interfacing zonal algorithm was developed for computing the three-dimensional transonic flow field about wing/nacelle configurations. The algorithm uses the full-potential formulation and the AF2 approximate factorization scheme. The flow field solution is computed using a component-adaptive grid approach in which separate grids are employed for the individual components in the multi-component configuration, where each component grid is optimized for a particular geometry such as the wing or nacelle. The wing and nacelle component grids are allowed to overlap, and flow field information is transmitted from one grid to another through the overlap region using trivariate interpolation. This report presents a discussion of the computational methods used to generate both the wing and nacelle component grids, the technique used to interface the component grids, and the method used to obtain the inviscid flow solution. Computed results and correlations with experiment are presented. Also presented are discussions on the organization of the wing grid generation (GRGEN3) and nacelle grid generation (NGRIDA) computer programs, the grid interface (LK) computer program, and the wing/nacelle flow solution (TWN) computer program. Descriptions of the respective subroutines, definitions of the required input parameters, a discussion on interpretation of the output, and sample cases illustrating application of the analysis are provided for each of the four computer programs.
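
    The grid-interface step can be pictured as repeated trivariate (trilinear) interpolation inside donor cells of the overlap region; a generic kernel for one hexahedral cell is sketched below and is not taken from the GRGEN3, NGRIDA, LK, or TWN programs.

      import numpy as np

      def trilinear(corner_values, s, t, u):
          """Interpolate a quantity within one hexahedral donor cell.

          corner_values: shape (2, 2, 2) array of the quantity at the 8 cell corners.
          s, t, u: fractional coordinates in [0, 1] along the three cell directions.
          """
          c = corner_values
          c00 = c[0, 0, 0] * (1 - s) + c[1, 0, 0] * s
          c01 = c[0, 0, 1] * (1 - s) + c[1, 0, 1] * s
          c10 = c[0, 1, 0] * (1 - s) + c[1, 1, 0] * s
          c11 = c[0, 1, 1] * (1 - s) + c[1, 1, 1] * s
          c0 = c00 * (1 - t) + c10 * t
          c1 = c01 * (1 - t) + c11 * t
          return c0 * (1 - u) + c1 * u

      phi = np.arange(8, dtype=float).reshape(2, 2, 2)   # dummy potential values at the corners
      print(trilinear(phi, 0.25, 0.5, 0.75))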

  15. Optical Interconnection Via Computer-Generated Holograms

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang; Zhou, Shaomin

    1995-01-01

    Method of free-space optical interconnection developed for data-processing applications like parallel optical computing, neural-network computing, and switching in optical communication networks. In method, multiple optical connections between multiple sources of light in one array and multiple photodetectors in another array made via computer-generated holograms in electrically addressed spatial light modulators (ESLMs). Offers potential advantages of massive parallelism, high space-bandwidth product, high time-bandwidth product, low power consumption, low cross talk, and low time skew. Also offers advantage of programmability with flexibility of reconfiguration, including variation of strengths of optical connections in real time.

  16. A computer-based physics laboratory apparatus: Signal generator software

    NASA Astrophysics Data System (ADS)

    Thanakittiviroon, Tharest; Liangrocapart, Sompong

    2005-09-01

    This paper describes a computer-based physics laboratory apparatus to replace expensive instruments such as high-precision signal generators. This apparatus uses a sound card in a common personal computer to give sinusoidal signals with an accurate frequency that can be programmed to give different frequency signals repeatedly. An experiment on standing waves on an oscillating string uses this apparatus. In conjunction with interactive lab manuals, which have been developed using personal computers in our university, we achieve a complete set of low-cost, accurate, and easy-to-use equipment for teaching a physics laboratory.
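
    A minimal stand-in for such a software signal generator, using only the Python standard library, synthesizes an accurate-frequency sine tone and writes it as a WAV file for the sound card to play; the 440 Hz tone and file name are arbitrary, and this is not the authors' software.

      import math
      import struct
      import wave

      def write_sine(filename, freq_hz=440.0, seconds=2.0, rate=44100, amplitude=0.8):
          """Write a 16-bit mono sine tone of the requested frequency to a WAV file."""
          n_samples = int(seconds * rate)
          with wave.open(filename, "wb") as wav:
              wav.setnchannels(1)
              wav.setsampwidth(2)          # 16-bit samples
              wav.setframerate(rate)
              frames = bytearray()
              for n in range(n_samples):
                  sample = amplitude * math.sin(2.0 * math.pi * freq_hz * n / rate)
                  frames += struct.pack("<h", int(sample * 32767))
              wav.writeframes(bytes(frames))

      write_sine("tone_440Hz.wav")   # e.g. drive a string near its resonant frequency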

  17. Operational procedure for computer program for design point characteristics of a gas generator or a turbojet lift engine for V/STOL applications

    NASA Technical Reports Server (NTRS)

    Krebs, R. P.

    1972-01-01

    The computer program described calculates the design-point characteristics of a gas generator or a turbojet lift engine for V/STOL applications. The program computes the dimensions and mass, as well as the thermodynamic performance of the model engine and its components. The program was written in FORTRAN 4 language. Provision has been made so that the program accepts input values in either SI Units or U.S. Customary Units. Each engine design-point calculation requires less than 0.5 second of 7094 computer time.

  18. Parallel-Processing Test Bed For Simulation Software

    NASA Technical Reports Server (NTRS)

    Blech, Richard; Cole, Gary; Townsend, Scott

    1996-01-01

    Second-generation Hypercluster computing system is multiprocessor test bed for research on parallel algorithms for simulation in fluid dynamics, electromagnetics, chemistry, and other fields with large computational requirements but relatively low input/output requirements. Built from standard, off-the-shelf hardware readily upgraded as improved technology becomes available. System used for experiments with such parallel-processing concepts as message-passing algorithms, debugging software tools, and computational steering. First-generation Hypercluster system described in "Hypercluster Parallel Processor" (LEW-15283).

  19. Input data requirements for special processors in the computation system containing the VENTURE neutronics code. [LMFBR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.

    1979-07-01

    User input data requirements are presented for certain special processors in a nuclear reactor computation system. These processors generally read data in formatted form and generate binary interface data files. Some data processing is done to convert from the user oriented form to the interface file forms. The VENTURE diffusion theory neutronics code and other computation modules in this system use the interface data files which are generated.

  20. New coding technique for computer generated holograms.

    NASA Technical Reports Server (NTRS)

    Haskell, R. E.; Culver, B. C.

    1972-01-01

    A coding technique is developed for recording computer generated holograms on a computer controlled CRT in which each resolution cell contains two beam spots of equal size and equal intensity. This provides a binary hologram in which only the position of the two dots is varied from cell to cell. The amplitude associated with each resolution cell is controlled by selectively diffracting unwanted light into a higher diffraction order. The recording of the holograms is fast and simple.

  1. 10 CFR 961.11 - Text of the contract.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... requires permanent isolation. 13. The term electricity (kilowatt hours) generated and sold means gross...-type documents or computer software (including computer programs, computer software data bases, and...

  2. Perceptual factors that influence use of computer enhanced visual displays

    NASA Technical Reports Server (NTRS)

    Littman, David; Boehm-Davis, Debbie

    1993-01-01

    This document is the final report for the NASA/Langley contract entitled 'Perceptual Factors that Influence Use of Computer Enhanced Visual Displays.' The document consists of two parts. The first part contains a discussion of the problem to which the grant was addressed, a brief discussion of work performed under the grant, and several issues suggested for follow-on work. The second part, presented as Appendix I, contains the annual report produced by Dr. Ann Fulop, the Postdoctoral Research Associate who worked on-site in this project. The main focus of this project was to investigate perceptual factors that might affect a pilot's ability to use computer generated information that is projected into the same visual space that contains information about real world objects. For example, computer generated visual information can identify the type of an attacking aircraft, or its likely trajectory. Such computer generated information must not be so bright that it adversely affects a pilot's ability to perceive other potential threats in the same volume of space. Or, perceptual attributes of computer generated and real display components should not contradict each other in ways that lead to problems of accommodation and, thus, distance judgments. The purpose of the research carried out under this contract was to begin to explore the perceptual factors that contribute to effective use of these displays.

  3. External audio for IBM-compatible computers

    NASA Technical Reports Server (NTRS)

    Washburn, David A.

    1992-01-01

    Numerous applications benefit from the presentation of computer-generated auditory stimuli at points discontiguous with the computer itself. Modification of an IBM-compatible computer for use of an external speaker is relatively easy but not intuitive. This modification is briefly described.

  4. A Short History of the Computer.

    ERIC Educational Resources Information Center

    Reid-Green, Keith

    1981-01-01

    Beginning with Blaise Pascal's adding machine (1642), a brief look is taken at mechanical computers, electronic developments, transistors, and first, second, and third generation computers. A glossary is appended. (KC)

  5. Deterrence of device counterfeiting, cloning, and subversion by substitution using hardware fingerprinting

    DOEpatents

    Hamlet, Jason R; Bauer, Todd M; Pierson, Lyndon G

    2014-09-30

    Deterrence of device subversion by substitution may be achieved by including a cryptographic fingerprint unit within a computing device for authenticating a hardware platform of the computing device. The cryptographic fingerprint unit includes a physically unclonable function ("PUF") circuit disposed in or on the hardware platform. The PUF circuit is used to generate a PUF value. A key generator is coupled to generate a private key and a public key based on the PUF value while a decryptor is coupled to receive an authentication challenge posed to the computing device and encrypted with the public key and coupled to output a response to the authentication challenge decrypted with the private key.
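
    The patent derives an asymmetric key pair from the PUF value; the sketch below substitutes a symmetric HMAC keyed by a PUF-derived secret to illustrate the same enrollment and challenge-response flow in a runnable form. The PUF itself is faked in software, so every identifier here is a placeholder rather than the patented construction.

      import hashlib
      import hmac
      import os

      def puf_response(challenge_bits: bytes) -> bytes:
          """Stand-in for the physically unclonable function: in real hardware this
          value comes from device-specific physical variation, not from software."""
          device_specific_noise = b"\x17\x42\x9a\x03" * 8     # placeholder only
          return hashlib.sha256(device_specific_noise + challenge_bits).digest()

      # enrollment: derive a device key from the PUF value
      puf_value = puf_response(b"enrollment-challenge")
      device_key = hashlib.sha256(b"key-derivation" + puf_value).digest()

      # authentication: the verifier poses a fresh challenge, the device answers with a MAC
      challenge = os.urandom(16)
      response = hmac.new(device_key, challenge, hashlib.sha256).digest()

      # the verifier, holding the enrolled key, checks the response
      expected = hmac.new(device_key, challenge, hashlib.sha256).digest()
      assert hmac.compare_digest(response, expected)
      print("device authenticated")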

  6. The pedagogical toolbox: computer-generated visual displays, classroom demonstration, and lecture.

    PubMed

    Bockoven, Jerry

    2004-06-01

    This analogue study compared the effectiveness of computer-generated visual displays, classroom demonstration, and traditional lecture as methods of instruction used to teach neuronal structure and processes. Randomly assigned 116 undergraduate students participated in 1 of 3 classrooms in which they experienced the same content but different teaching approaches presented by 3 different student-instructors. Then participants completed a survey of their subjective reactions and a measure of factual information designed to evaluate objective learning outcomes. Participants repeated this factual measure 5 wk. later. Results call into question the use of classroom demonstration methods as well as the trend towards devaluing traditional lecture in favor of computer-generated visual display.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakosi, Jozsef; Christon, Mark A.; Francois, Marianne M.

    Progress is reported on computational capabilities for the grid-to-rod-fretting (GTRF) problem of pressurized water reactors. Numeca's Hexpress/Hybrid mesh generator is demonstrated as an excellent alternative for generating computational meshes for complex flow geometries, such as in GTRF. Mesh assessment is carried out using standard industrial computational fluid dynamics practices. Hydra-TH, a simulation code developed at LANL for reactor thermal-hydraulics, is demonstrated on hybrid meshes containing different element types. A series of new Hydra-TH calculations has been carried out collecting turbulence statistics. Preliminary results on the newly generated meshes are discussed; the full analysis will be documented in the L3 milestone, THM.CFD.P5.05, Sept. 2012.

  8. Adapting the serial Alpgen parton-interaction generator to simulate LHC collisions on millions of parallel threads

    NASA Astrophysics Data System (ADS)

    Childers, J. T.; Uram, T. D.; LeCompte, T. J.; Papka, M. E.; Benjamin, D. P.

    2017-01-01

    As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial-application to a large-scale parallel-application and the performance that was achieved.

  9. Evaluation of the efficiency and fault density of software generated by code generators

    NASA Technical Reports Server (NTRS)

    Schreur, Barbara

    1993-01-01

    Flight computers and flight software are used for GN&C (guidance, navigation, and control), engine controllers, and avionics during missions. The software development requires the generation of a considerable amount of code. The engineers who generate the code make mistakes, and the generation of a large body of code with high reliability requires considerable time. Computer-aided software engineering (CASE) tools are available which generate code automatically from inputs supplied through graphical interfaces. These tools are referred to as code generators. In theory, code generators could write highly reliable code quickly and inexpensively. The various code generators offer different levels of reliability checking: some check only the finished product, while others allow checking of individual modules and combined sets of modules as well. Considering NASA's requirement for reliability, an in-house comparison against manually generated code is needed. Furthermore, automatically generated code is reputed to be as efficient as the best manually generated code when executed; in-house verification of this claim is warranted.

  10. Acceleration of color computer-generated hologram from three-dimensional scenes with texture and depth information

    NASA Astrophysics Data System (ADS)

    Shimobaba, Tomoyoshi; Kakue, Takashi; Ito, Tomoyoshi

    2014-06-01

    We propose acceleration of color computer-generated holograms (CGHs) from three-dimensional (3D) scenes that are expressed as texture (RGB) and depth (D) images. These images are obtained by 3D graphics libraries and RGB-D cameras: for example, OpenGL and Kinect, respectively. We can regard them as two-dimensional (2D) cross-sectional images along the depth direction. The generation of CGHs from the 2D cross-sectional images requires multiple diffraction calculations. If we use convolution-based diffraction such as the angular spectrum method, the diffraction calculation takes a long time and requires large memory usage because the convolution diffraction calculation requires the expansion of the 2D cross-sectional images to avoid wraparound noise. In this paper, we first describe the acceleration of the diffraction calculation using "Band-limited double-step Fresnel diffraction," which does not require the expansion. Next, we describe color CGH acceleration using color space conversion. In general, color CGHs are generated in RGB color space; however, the same calculation must be repeated for each color component, so the computational burden of color CGH generation increases three-fold compared with monochrome CGH generation. We can reduce the computational burden by using YCbCr color space because the 2D cross-sectional images in YCbCr color space can be down-sampled without impairing the image quality.
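
    The color-space step can be sketched as follows: convert an RGB layer to YCbCr with BT.601 coefficients and downsample only the chroma channels before the per-channel diffraction calculations. The band-limited double-step Fresnel diffraction itself is not reproduced, and the image data here are random placeholders.

      import numpy as np

      def rgb_to_ycbcr(rgb):
          """rgb: float array (..., 3) in [0, 1]; returns Y, Cb, Cr (BT.601 coefficients)."""
          r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
          y  =  0.299 * r + 0.587 * g + 0.114 * b
          cb = -0.168736 * r - 0.331264 * g + 0.5 * b
          cr =  0.5 * r - 0.418688 * g - 0.081312 * b
          return y, cb, cr

      rng = np.random.default_rng(0)
      layer = rng.random((512, 512, 3))        # color part of one RGB-D cross-sectional slice

      y, cb, cr = rgb_to_ycbcr(layer)
      cb_small = cb[::2, ::2]                  # 2x2 chroma downsampling preserves perceived quality
      cr_small = cr[::2, ::2]                  # while reducing the CGH work for the chroma channels

      print(y.shape, cb_small.shape, cr_small.shape)   # (512, 512) (256, 256) (256, 256)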

  11. BT-Nurse: computer generation of natural language shift summaries from complex heterogeneous medical data.

    PubMed

    Hunter, James; Freer, Yvonne; Gatt, Albert; Reiter, Ehud; Sripada, Somayajulu; Sykes, Cindy; Westwater, Dave

    2011-01-01

    The BT-Nurse system uses data-to-text technology to automatically generate a natural language nursing shift summary in a neonatal intensive care unit (NICU). The summary is based solely on data held in an electronic patient record system; no additional data entry is required. BT-Nurse was tested for two months in the Royal Infirmary of Edinburgh NICU. Nurses were asked to rate the understandability, accuracy, and helpfulness of the computer-generated summaries; they were also asked for free-text comments about the summaries. The nurses found the majority of the summaries to be understandable, accurate, and helpful (p<0.001 for all measures). However, nurses also pointed out many deficiencies, especially with regard to extra content they wanted to see in the computer-generated summaries. In conclusion, natural language NICU shift summaries can be automatically generated from an electronic patient record, but our proof-of-concept software needs considerable additional development work before it can be deployed.

  12. BT-Nurse: computer generation of natural language shift summaries from complex heterogeneous medical data

    PubMed Central

    Freer, Yvonne; Gatt, Albert; Reiter, Ehud; Sripada, Somayajulu; Sykes, Cindy; Westwater, Dave

    2011-01-01

    The BT-Nurse system uses data-to-text technology to automatically generate a natural language nursing shift summary in a neonatal intensive care unit (NICU). The summary is based solely on data held in an electronic patient record system; no additional data entry is required. BT-Nurse was tested for two months in the Royal Infirmary of Edinburgh NICU. Nurses were asked to rate the understandability, accuracy, and helpfulness of the computer-generated summaries; they were also asked for free-text comments about the summaries. The nurses found the majority of the summaries to be understandable, accurate, and helpful (p<0.001 for all measures). However, nurses also pointed out many deficiencies, especially with regard to extra content they wanted to see in the computer-generated summaries. In conclusion, natural language NICU shift summaries can be automatically generated from an electronic patient record, but our proof-of-concept software needs considerable additional development work before it can be deployed. PMID:21724739

  13. Computational Nanotechnology of Molecular Materials, Electronics, and Actuators with Carbon Nanotubes and Fullerenes

    NASA Technical Reports Server (NTRS)

    Srivastava, Deepak; Menon, Madhu; Cho, Kyeongjae; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The role of computational nanotechnology in developing the next generation of multifunctional materials, molecular-scale electronic and computing devices, sensors, actuators, and machines is described through a brief review of enabling computational techniques and a few recent examples derived from computer simulations of carbon nanotube based molecular nanotechnology.

  14. Computational Fluid Dynamics: Past, Present, And Future

    NASA Technical Reports Server (NTRS)

    Kutler, Paul

    1988-01-01

    Paper reviews development of computational fluid dynamics and explores future prospects of technology. Report covers such topics as computer technology, turbulence, development of solution methodology, development of algorithms, definition of flow geometries, generation of computational grids, and pre- and post-data processing.

  15. Reference Computational Meshing Strategy for Computational Fluid Dynamics Simulation of Departure from Nucleate Boiling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pointer, William David

    The objective of this effort is to establish a strategy and process for generation of suitable computational mesh for computational fluid dynamics simulations of departure from nucleate boiling (DNB) in a 5 by 5 fuel rod assembly held in place by PWR mixing vane spacer grids. This mesh generation process will support ongoing efforts to develop, demonstrate and validate advanced multi-phase computational fluid dynamics methods that enable more robust identification of dryout conditions and DNB occurrence. Building upon prior efforts and experience, multiple computational meshes were developed using the native mesh generation capabilities of the commercial CFD code STAR-CCM+. These meshes were used to simulate two test cases from the Westinghouse 5 by 5 rod bundle facility. The sensitivity of predicted quantities of interest to the mesh resolution was then established using two evaluation methods, the Grid Convergence Index method and the Least Squares method. This evaluation suggests that the Least Squares method can reliably establish the uncertainty associated with local parameters such as vector velocity components at a point in the domain or surface-averaged quantities such as outlet velocity magnitude. However, neither method is suitable for characterization of uncertainty in global extrema such as peak fuel surface temperature, primarily because such parameters are not necessarily associated with a fixed point in space. This shortcoming is significant because the current-generation algorithm for identification of DNB event conditions relies on identification of such global extrema. Ongoing efforts to identify DNB based on local surface conditions will address this challenge.
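
    For reference, a small sketch of the Grid Convergence Index calculation in Roache's common form, applied to a quantity of interest from three successively refined meshes; the safety factor of 1.25 and the sample velocity values are illustrative, not results from this study.

      import math

      def grid_convergence_index(f_fine, f_medium, f_coarse, refinement_ratio, safety_factor=1.25):
          """Return the observed order of accuracy and the GCI (as a fraction) for the fine mesh."""
          e21 = f_medium - f_fine
          e32 = f_coarse - f_medium
          p = math.log(abs(e32 / e21)) / math.log(refinement_ratio)   # observed order of accuracy
          rel_err = abs(e21 / f_fine)
          gci = safety_factor * rel_err / (refinement_ratio ** p - 1.0)
          return p, gci

      # hypothetical outlet-velocity magnitudes from fine/medium/coarse meshes, r = 2
      p, gci = grid_convergence_index(3.982, 3.964, 3.910, refinement_ratio=2.0)
      print(f"observed order = {p:.2f}, GCI_fine = {100 * gci:.2f}%")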

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Hang Bae

    Reliability testing was performed for the software of the Shutdown (SDS) Computers for Wolsong Nuclear Power Plant Units 2, 3 and 4. Test profiles were applied to the SDS Computers and the outputs were compared with the predicted results generated by the oracle. Test software was written to execute the tests automatically. Random test profiles were generated using an analysis code. 11 refs., 1 fig.

  17. Student Engagement with Computer-Generated Feedback: A Case Study

    ERIC Educational Resources Information Center

    Zhang, Zhe

    2017-01-01

    In order to benefit from feedback on their writing, students need to engage effectively with it. This article reports a case study on student engagement with computer-generated feedback, known as automated writing evaluation (AWE) feedback, in an EFL context. Differing from previous studies that explored commercially available AWE programs, this…

  18. SNAP: A computer program for generating symbolic network functions

    NASA Technical Reports Server (NTRS)

    Lin, P. M.; Alderson, G. E.

    1970-01-01

    The computer program SNAP (symbolic network analysis program) generates symbolic network functions for networks containing R, L, and C type elements and all four types of controlled sources. The program is efficient with respect to program storage and execution time. A discussion of the basic algorithms is presented, together with user's and programmer's guides.
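
    The flavor of a symbolic network function can be reproduced with a modern computer algebra system; the snippet below derives the voltage transfer function of a series-R, shunt-C divider by symbolic nodal analysis. It illustrates the kind of output SNAP produces but is not the SNAP algorithm.

      import sympy as sp

      s, R, C, Vin, Vout = sp.symbols("s R C V_in V_out")

      # nodal equation at the output node of a series-R, shunt-C divider:
      # (Vout - Vin)/R + Vout*s*C = 0
      node_eq = sp.Eq((Vout - Vin) / R + Vout * s * C, 0)

      transfer = sp.simplify(sp.solve(node_eq, Vout)[0] / Vin)
      print(transfer)          # 1/(C*R*s + 1)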

  19. Computer Generated Optical Illusions: A Teaching and Research Tool.

    ERIC Educational Resources Information Center

    Bailey, Bruce; Harman, Wade

    Interactive computer-generated simulations that highlight psychological principles were investigated in this study in which 33 female and 19 male undergraduate college student volunteers of median age 21 matched line and circle sizes in six variations of Ponzo's illusion. Prior to working with the illusions, data were collected based on subjects'…

  20. Alleviating Search Uncertainty through Concept Associations: Automatic Indexing, Co-Occurrence Analysis, and Parallel Computing.

    ERIC Educational Resources Information Center

    Chen, Hsinchun; Martinez, Joanne; Kirchhoff, Amy; Ng, Tobun D.; Schatz, Bruce R.

    1998-01-01

    Grounded on object filtering, automatic indexing, and co-occurrence analysis, an experiment was performed using a parallel supercomputer to analyze over 400,000 abstracts in an INSPEC computer engineering collection. A user evaluation revealed that system-generated thesauri were better than the human-generated INSPEC subject thesaurus in concept…

  1. Dr. Sanger's Apprentice: A Computer-Aided Instruction to Protein Sequencing.

    ERIC Educational Resources Information Center

    Schmidt, Thomas G.; Place, Allen R.

    1985-01-01

    Modeled after the program "Mastermind," this program teaches students the art of protein sequencing. The program (written in Turbo Pascal for the IBM PC, requiring 128K, a graphics adapter, and an 8087 mathematics coprocessor) generates a polypeptide whose sequence and length can be user-defined (for practice) or computer-generated (for…

  2. Advanced Computer Image Generation Techniques Exploiting Perceptual Characteristics. Final Report.

    ERIC Educational Resources Information Center

    Stenger, Anthony J.; And Others

    This study suggests and identifies computer image generation (CIG) algorithms for visual simulation that improve the training effectiveness of CIG simulators and identifies areas of basic research in visual perception that are significant for improving CIG technology. The first phase of the project entailed observing three existing CIG simulators.…

  3. The Generative Effects of Instructional Organizers with Computer-Based Interactive Video.

    ERIC Educational Resources Information Center

    Kenny, Richard F.

    This study compared the use of three instructional organizers--the advance organizer (AO), the participatory pictorial graphic organizer (PGO), and the final form pictorial graphic organizer (FGO)--in the design and use of computer-based interactive video (CBIV) programs. That is, it attempted to determine whether a less generative or more…

  4. Improving Learning in Computer-Based Instruction through Questioning and Grouping Strategies

    ERIC Educational Resources Information Center

    Niemczyk, Mary; Savenye, Wilhelmina

    2010-01-01

    This study investigated the comparative effects of adjunct questions, student self-generated questions, and note taking on learning from a multimedia database. High school students worked individually or in cooperative dyads on a computer-based multimedia unit using a study guide to answer either adjunct questions, generate self-questions, or take…

  5. An Empirical Generative Framework for Computational Modeling of Language Acquisition

    ERIC Educational Resources Information Center

    Waterfall, Heidi R.; Sandbank, Ben; Onnis, Luca; Edelman, Shimon

    2010-01-01

    This paper reports progress in developing a computer model of language acquisition in the form of (1) a generative grammar that is (2) algorithmically learnable from realistic corpus data, (3) viable in its large-scale quantitative performance and (4) psychologically real. First, we describe new algorithmic methods for unsupervised learning of…

  6. THREE-PEE SAMPLING THEORY and program 'THRP' for computer generation of selection criteria

    Treesearch

    L. R. Grosenbaugh

    1965-01-01

    Theory necessary for sampling with probability proportional to prediction ('three-pee,' or '3P,' sampling) is first developed and then exemplified by numerical comparisons of several estimators. Program 'THRP' for computer generation of appropriate 3P-sample-selection criteria is described, and convenient random integer dispensers are...
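
    The 3P selection rule itself is simple enough to sketch: a unit is taken into the sample when a uniform random integer drawn up to a preset constant K falls at or below the unit's predicted value, so selection probability is proportional to prediction. The code below is an illustrative reimplementation, not Grosenbaugh's THRP program.

      import random

      def three_p_sample(predictions, K, seed=0):
          """Select units with probability proportional to prediction (3P sampling).

          predictions: guessed values (e.g. tree volumes) for each unit.
          K: selection constant (>= max prediction); each unit is selected with
             probability prediction / K, so the expected sample size is sum(predictions) / K.
          """
          rng = random.Random(seed)
          selected = []
          for i, guess in enumerate(predictions):
              if rng.randint(1, K) <= guess:
                  selected.append(i)
          return selected

      guesses = [12, 30, 7, 55, 21, 9, 40]     # hypothetical predicted tree volumes
      print(three_p_sample(guesses, K=60))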

  7. Use of an Automatic Problem Generator to Teach Basic Skills in a First Course in Assembly Language.

    ERIC Educational Resources Information Center

    Benander, Alan; And Others

    1989-01-01

    Discussion of the use of computer aided instruction (CAI) and instructional software in college level courses highlights an automatic problem generator, AUTOGEN, that was written for computer science students learning assembly language. Design of the software is explained, and student responses are reported. (nine references) (LRW)

  8. Generative Computer-Assisted Instruction and Artificial Intelligence. Report No. 5.

    ERIC Educational Resources Information Center

    Sinnott, Loraine T.

    This paper reviews the state-of-the-art in generative computer-assisted instruction and artificial intelligence. It divides relevant research into three areas of instructional modeling: models of the subject matter; models of the learner's state of knowledge; and models of teaching strategies. Within these areas, work sponsored by Advanced…

  9. Physical Principle for Generation of Randomness

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2009-01-01

    A physical principle (more precisely, a principle that incorporates mathematical models used in physics) has been conceived as the basis of a method of generating randomness in Monte Carlo simulations. The principle eliminates the need for conventional random-number generators. The Monte Carlo simulation method is among the most powerful computational methods for solving high-dimensional problems in physics, chemistry, economics, and information processing. The Monte Carlo simulation method is especially effective for solving problems in which computational complexity increases exponentially with dimensionality. The main advantage of the Monte Carlo simulation method over other methods is that the demand on computational resources becomes independent of dimensionality. As augmented by the present principle, the Monte Carlo simulation method becomes an even more powerful computational method that is especially useful for solving problems associated with dynamics of fluids, planning, scheduling, and combinatorial optimization. The present principle is based on coupling of dynamical equations with the corresponding Liouville equation. The randomness is generated by non-Lipschitz instability of dynamics triggered and controlled by feedback from the Liouville equation. (In non-Lipschitz dynamics, the derivatives of solutions of the dynamical equations are not required to be bounded.)

  10. GRID3O- FAST GENERATION OF MULTILEVEL, THREE-DIMENSIONAL BOUNDARY-CONFORMING O-TYPE COMPUTATIONAL GRIDS

    NASA Technical Reports Server (NTRS)

    Dulikravich, D. S.

    1994-01-01

    A fast algorithm has been developed for accurately generating boundary-conforming, three-dimensional consecutively refined computational grids applicable to arbitrary wing-body and axial turbomachinery geometries. This algorithm has been incorporated into the GRID3O computer program. The method employed in GRID3O is based on using an analytic function to generate two-dimensional grids on a number of coaxial axisymmetric surfaces positioned between the centerbody and the outer radial boundary. These grids are of the O-type and are characterized by quasi-orthogonality, geometric periodicity, and an adequate resolution throughout the flow field. Because the built-in nonorthogonal coordinate stretching and shearing cause the grid lines leaving the blade or wing trailing-edge to end at downstream infinity, use of the generated grid simplifies the numerical treatment of three-dimensional trailing vortex sheets. The GRID3O program is written in FORTRAN IV for batch execution and has been implemented on an IBM 370 series computer with a central memory requirement of approximately 450K of 8 bit bytes. The GRID3O program was developed in 1981.

  11. Design, fabrication and characterization of Computer Generated Holograms for anti-counterfeiting applications using OAM beams as light decoders.

    PubMed

    Ruffato, Gianluca; Rossi, Roberto; Massari, Michele; Mafakheri, Erfan; Capaldo, Pietro; Romanato, Filippo

    2017-12-21

    In this paper, we present the design, fabrication and optical characterization of computer-generated holograms (CGH) encoding information for light beams carrying orbital angular momentum (OAM). Through the use of a numerical code, based on an iterative Fourier transform algorithm, a phase-only diffractive optical element (PO-DOE) specifically designed for OAM illumination has been computed, fabricated and tested. In order to shape the incident beam into a helicoidal phase profile and generate light carrying phase singularities, a method based on transmission through high-order spiral phase plates (SPPs) has been used. The phase pattern of the designed holographic DOEs has been fabricated using high-resolution Electron-Beam Lithography (EBL) over glass substrates coated with a positive photoresist layer (polymethylmethacrylate). To the best of our knowledge, the present study is the first attempt, in a comprehensive work, to design, fabricate and characterize computer-generated holograms encoding information for structured light carrying OAM and phase singularities. These optical devices appear promising as high-security optical elements for anti-counterfeiting applications.
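
    A generic iterative Fourier transform algorithm of the Gerchberg-Saxton type, the family of methods named above, is sketched below: it computes a phase-only element whose far field approximates a target intensity. The OAM-specific illumination, the SPP modeling, and the authors' actual design code are not reproduced.

      import numpy as np

      def ifta_phase_hologram(target_amplitude, iterations=50, seed=0):
          """Return a phase-only DOE whose far field approximates target_amplitude."""
          rng = np.random.default_rng(seed)
          phase = rng.uniform(0, 2 * np.pi, target_amplitude.shape)
          for _ in range(iterations):
              far = np.fft.fft2(np.exp(1j * phase))                  # propagate to the Fourier plane
              far = target_amplitude * np.exp(1j * np.angle(far))    # impose the target amplitude
              near = np.fft.ifft2(far)                               # propagate back to the DOE plane
              phase = np.angle(near)                                 # keep phase only (unit-amplitude DOE)
          return phase

      # toy target: a bright off-axis square spot
      target = np.zeros((256, 256))
      target[96:112, 160:176] = 1.0
      doe_phase = ifta_phase_hologram(np.fft.ifftshift(target))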

  12. Auto-Generated Semantic Processing Services

    NASA Technical Reports Server (NTRS)

    Davis, Rodney; Hupf, Greg

    2009-01-01

    Auto-Generated Semantic Processing (AGSP) Services is a suite of software tools for automated generation of other computer programs, denoted cross-platform semantic adapters, that support interoperability of computer-based communication systems that utilize a variety of both new and legacy communication software running in a variety of operating- system/computer-hardware combinations. AGSP has numerous potential uses in military, space-exploration, and other government applications as well as in commercial telecommunications. The cross-platform semantic adapters take advantage of common features of computer- based communication systems to enforce semantics, messaging protocols, and standards of processing of streams of binary data to ensure integrity of data and consistency of meaning among interoperating systems. The auto-generation aspect of AGSP Services reduces development time and effort by emphasizing specification and minimizing implementation: In effect, the design, building, and debugging of software for effecting conversions among complex communication protocols, custom device mappings, and unique data-manipulation algorithms is replaced with metadata specifications that map to an abstract platform-independent communications model. AGSP Services is modular and has been shown to be easily integrable into new and legacy NASA flight and ground communication systems.

  13. Computer Animated Representations to Optically Observe Numerical Evaluations (CARTOONE). Computer Generated Animations of Solid Bodies.

    DTIC Science & Technology

    1983-04-01

    This report documents the work done in-house by personnel of ASD/ENFTC to develop a ... unfamiliar with the system. This report contains a User's Guide and documents the work done to develop CARTOONE. The work was accomplished from

  14. Mechanical Aspects of Interfaces and Surfaces in Ceramic Containing Systems.

    DTIC Science & Technology

    1984-12-14

    ... of a computer model to simulate the crack damage. The model is based on the fracture mechanics of cracks engulfed by the short stress pulse generated ... by drop impact. Inertial effects of the crack faces are a particularly important aspect of the model. The computer scheme thereby allows the stress ...

  15. The “Silent Dog” Method: Analyzing the Impact of Self-Generated Rules When Teaching Different Computer Chains to Boys with Autism

    PubMed Central

    Arntzen, Erik; Halstadtro, Lill-Beathe; Halstadtro, Monica

    2009-01-01

    The purpose of the study was to extend the literature on verbal self-regulation by using the “silent dog” method to evaluate the role of verbal regulation over nonverbal behavior in 2 individuals with autism. Participants were required to talk aloud while performing functional computer tasks. Then the effects of distracters with increasing demands on target behavior were evaluated, as well as whether self-talk emitted by Participant 1 could be used to alter Participant 2's performance. Results suggest that participants' tasks seemed to be under the control of self-instructions, and the rules generated from Participant 1's self-talk were effective in teaching computer skills to Participant 2. The silent dog method was useful in evaluating the possible role of self-generated rules in teaching computer skills to participants with autism. PMID:22477428

  16. The "silent dog" method: analyzing the impact of self-generated rules when teaching different computer chains to boys with autism.

    PubMed

    Arntzen, Erik; Halstadtro, Lill-Beathe; Halstadtro, Monica

    2009-01-01

    The purpose of the study was to extend the literature on verbal self-regulation by using the "silent dog" method to evaluate the role of verbal regulation over nonverbal behavior in 2 individuals with autism. Participants were required to talk aloud while performing functional computer tasks. Then the effects of distracters with increasing demands on target behavior were evaluated, as well as whether self-talk emitted by Participant 1 could be used to alter Participant 2's performance. Results suggest that participants' tasks seemed to be under the control of self-instructions, and the rules generated from Participant 1's self-talk were effective in teaching computer skills to Participant 2. The silent dog method was useful in evaluating the possible role of self-generated rules in teaching computer skills to participants with autism.

  17. Parametric Design of Injectors for LDI-3 Combustors

    NASA Technical Reports Server (NTRS)

    Ajmani, Kumud; Mongia, Hukam; Lee, Phil

    2015-01-01

    Application of a partially calibrated National Combustion Code (NCC) for providing guidance in the design of the 3rd generation of the Lean-Direct Injection (LDI) multi-element combustion configuration (LDI-3) is summarized. NCC was used to perform non-reacting and two-phase reacting flow computations on several LDI-3 injector configurations in a single-element and a five-element injector array. All computations were performed with a consistent approach for mesh-generation, turbulence, spray simulations, ignition and chemical kinetics-modeling. Both qualitative and quantitative assessment of the computed flowfield characteristics of the several design options led to selection of an optimal injector LDI-3 design that met all the requirements including effective area, aerodynamics and fuel-air mixing criteria. Computed LDI-3 emissions (namely, NOx, CO and UHC) will be compared with the prior generation LDI-2 combustor experimental data at relevant engine cycle conditions.

  18. RANDOM MATRIX DIAGONALIZATION--A COMPUTER PROGRAM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuchel, K.; Greibach, R.J.; Porter, C.E.

    A computer program is described which generates random matrices, diagonalizes them and sorts appropriately the resulting eigenvalues and eigenvector components. FAP and FORTRAN listings for the IBM 7090 computer are included. (auth)
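
    A present-day equivalent of that routine, for a real symmetric (GOE-like) random matrix, reduces to a few lines; this is an illustration, not the original FAP/FORTRAN code.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 200

      a = rng.normal(size=(n, n))
      h = (a + a.T) / 2.0                             # real symmetric random matrix (GOE-like)

      eigenvalues, eigenvectors = np.linalg.eigh(h)   # eigh returns eigenvalues already sorted ascending
      spacings = np.diff(eigenvalues)                 # nearest-neighbour spacings for level statistics
      print(spacings.mean())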

  19. Simple proof of equivalence between adiabatic quantum computation and the circuit model.

    PubMed

    Mizel, Ari; Lidar, Daniel A; Mitchell, Morgan

    2007-08-17

    We prove the equivalence between adiabatic quantum computation and quantum computation in the circuit model. An explicit adiabatic computation procedure is given that generates a ground state from which the answer can be extracted. The amount of time needed is evaluated by computing the gap. We show that the procedure is computationally efficient.
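
    For context, the textbook adiabatic runtime criterion that such gap computations feed into can be written as follows (a standard heuristic form, not a formula quoted from this paper), for an interpolating Hamiltonian H(s) with instantaneous ground and first excited states |E_0(s)>, |E_1(s)> and minimum gap g_min:

      T \;\gg\; \frac{\max_{s\in[0,1]} \left|\left\langle E_1(s)\right| \tfrac{dH}{ds} \left|E_0(s)\right\rangle\right|}{g_{\min}^{2}},
      \qquad
      g_{\min} \;=\; \min_{s\in[0,1]} \bigl(E_1(s) - E_0(s)\bigr).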

  20. Lander Trajectory Reconstruction computer program

    NASA Technical Reports Server (NTRS)

    Adams, G. L.; Bradt, A. J.; Ferguson, J. B.; Schnelker, H. J.

    1971-01-01

    The Lander Trajectory Reconstruction (LTR) computer program is a tool for analysis of the planetary entry trajectory and atmosphere reconstruction process for a lander or probe. The program can be divided into two parts: (1) the data generator and (2) the reconstructor. The data generator provides the real environment in which the lander or probe is presumed to find itself. The reconstructor reconstructs the entry trajectory and atmosphere using sensor data generated by the data generator and a Kalman-Schmidt consider filter. A wide variety of vehicle and environmental parameters may be either solved-for or considered in the filter process.

  1. Description of a computer program and numerical techniques for developing linear perturbation models from nonlinear systems simulations

    NASA Technical Reports Server (NTRS)

    Dieudonne, J. E.

    1978-01-01

    A numerical technique was developed which generates linear perturbation models from nonlinear aircraft vehicle simulations. The technique is very general and can be applied to simulations of any system that is described by nonlinear differential equations. The computer program used to generate these models is discussed, with emphasis placed on generation of the Jacobian matrices, calculation of the coefficients needed for solving the perturbation model, and generation of the solution of the linear differential equations. An example application of the technique to a nonlinear model of the NASA terminal configured vehicle is included.
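
    The Jacobian-generation step amounts to central-difference perturbations of the nonlinear state equations about a trim point; a generic sketch (with a toy two-state model standing in for the vehicle simulation) is given below and is not the NASA program itself.

      import numpy as np

      def jacobian(f, x0, u0, eps=1e-6):
          """Linearize xdot = f(x, u) about (x0, u0): returns A = df/dx and B = df/du."""
          x0, u0 = np.asarray(x0, float), np.asarray(u0, float)
          n, m = x0.size, u0.size
          A, B = np.zeros((n, n)), np.zeros((n, m))
          for j in range(n):
              dx = np.zeros(n)
              dx[j] = eps
              A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
          for j in range(m):
              du = np.zeros(m)
              du[j] = eps
              B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
          return A, B

      # toy nonlinear "vehicle": state derivatives depend nonlinearly on state and control
      def f(x, u):
          return np.array([x[1], -np.sin(x[0]) - 0.1 * x[1] + u[0]])

      A, B = jacobian(f, x0=[0.2, 0.0], u0=[0.0])
      print(A)
      print(B)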

  2. A comparative study between xerographic, computer-assisted overlay generation and animated-superimposition methods in bite mark analyses.

    PubMed

    Tai, Meng Wei; Chong, Zhen Feng; Asif, Muhammad Khan; Rahmat, Rabiah A; Nambiar, Phrabhakaran

    2016-09-01

    This study compared the suitability and precision of xerographic and computer-assisted methods for bite mark investigations. Eleven subjects were asked to bite on their forearm and the bite marks were photographically recorded. Alginate impressions of the subjects' dentition were taken and their casts were made using dental stone. The overlays generated by the xerographic method were obtained by photocopying the subjects' casts and then transferring the incisal edge outlines onto a transparent sheet. The bite mark images were imported into Adobe Photoshop® software and printed to life-size. The bite mark analyses using xerographically generated overlays were done by manually comparing an overlay to the corresponding printed bite mark images. In the computer-assisted method, the subjects' casts were scanned into Adobe Photoshop®. The bite mark analyses using computer-assisted overlay generation were done by matching an overlay and the corresponding bite mark images digitally using Adobe Photoshop®. Another comparison method was superimposing the cast images with the corresponding bite mark images employing Adobe Photoshop® CS6 and GIF-Animator©. A score with a range of 0-3 was given during analysis to each precision-determining criterion, and the score increased with better matching. The Kruskal-Wallis H test showed a significant difference between the three sets of data (H=18.761, p<0.05). In conclusion, bite mark analysis using the computer-assisted animated-superimposition method was the most accurate, followed by the computer-assisted overlay generation and lastly the xerographic method. The superior precision contributed by the digital methods is discernible despite the human skin being a poor recording medium of bite marks. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  3. Fast generation of Fresnel holograms based on multirate filtering.

    PubMed

    Tsang, Peter; Liu, Jung-Ping; Cheung, Wai-Keung; Poon, Ting-Chung

    2009-12-01

    One of the major problems in computer-generated holography is the high computation cost involved for the calculation of fringe patterns. Recently, the problem has been addressed by imposing a horizontal parallax only constraint whereby the process can be simplified to the computation of one-dimensional sublines, each representing a scan plane of the object scene. Subsequently the sublines can be expanded to a two-dimensional hologram through multiplication with a reference signal. Furthermore, economical hardware is available with which sublines can be generated in a computationally free manner with high throughput of approximately 100 M pixels/second. Apart from decreasing the computation loading, the sublines can be treated as intermediate data that can be compressed by simply downsampling the number of sublines. Despite these favorable features, the method is suitable only for the generation of white light (rainbow) holograms, and the resolution of the reconstructed image is inferior to the classical Fresnel hologram. We propose to generate holograms from one-dimensional sublines so that the above-mentioned problems can be alleviated. However, such an approach also leads to a substantial increase in computation loading. To overcome this problem we encapsulated the conversion of sublines to holograms as a multirate filtering process and implemented the latter by use of a fast Fourier transform. Evaluation reveals that, for holograms of moderate size, our method is capable of operating 40,000 times faster than the calculation of Fresnel holograms based on the precomputed table lookup method. Although there is no relative vertical parallax between object points at different distance planes, a global vertical parallax is preserved for the object scene as a whole and the reconstructed image can be observed easily.

  4. Fourth Computational Aeroacoustics (CAA) Workshop on Benchmark Problems

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D. (Editor)

    2004-01-01

    This publication contains the proceedings of the Fourth Computational Aeroacoustics (CAA) Workshop on Benchmark Problems. In this workshop, as in previous workshops, the problems were devised to gauge the technological advancement of computational techniques to calculate all aspects of sound generation and propagation in air directly from the fundamental governing equations. A variety of benchmark problems have been previously solved ranging from simple geometries with idealized acoustic conditions to test the accuracy and effectiveness of computational algorithms and numerical boundary conditions; to sound radiation from a duct; to gust interaction with a cascade of airfoils; to the sound generated by a separating, turbulent viscous flow. By solving these and similar problems, workshop participants have shown the technical progress from the basic challenges to accurate CAA calculations to the solution of CAA problems of increasing complexity and difficulty. The fourth CAA workshop emphasized the application of CAA methods to the solution of realistic problems. The workshop was held at the Ohio Aerospace Institute in Cleveland, Ohio, on October 20 to 22, 2003. At that time, workshop participants presented their solutions to problems in one or more of five categories. Their solutions are presented in this proceedings along with the comparisons of their solutions to the benchmark solutions or experimental data. The five categories for the benchmark problems were as follows: Category 1: Basic Methods. The numerical computation of sound is affected by, among other issues, the choice of grid used and by the boundary conditions. Category 2: Complex Geometry. The ability to compute the sound in the presence of complex geometric surfaces is important in practical applications of CAA. Category 3: Sound Generation by Interacting With a Gust. The practical application of CAA for computing noise generated by turbomachinery involves the modeling of the noise source mechanism as a vortical gust interacting with an airfoil. Category 4: Sound Transmission and Radiation. Category 5: Sound Generation in Viscous Problems. Sound is generated under certain conditions by a viscous flow as the flow passes an object or a cavity.

  5. Computer Recreations.

    ERIC Educational Resources Information Center

    Dewdney, A. K.

    1989-01-01

    Reviews the performance of computer programs for writing poetry and prose, including MARK V. SHANEY, MELL, POETRY GENERATOR, THUNDER THOUGHT, and ORPHEUS. Discusses the writing principles of the programs. Provides additional information on computer magnification techniques. (YP)

  6. Computer-generated predictions of the structure and of the IR and Raman spectra of VX. Final report, May-August 1992

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hameka, H.F.; Jensen, J.O.

    1993-05-01

    This report presents the computed optimized geometry and vibrational IR and Raman frequencies of the V-agent VX. The computations are performed with the Gaussian 90 Program Package using 6-31G* basis sets. We assign the vibrational frequencies and correct each frequency by multiplying it with a previously derived 6-31G* correction factor. The result is a computer-generated prediction of the IR and Raman spectra of VX. This study was intended as a blind test of the utility of IR spectral prediction. Therefore, we intentionally did not look at experimental data on the IR and Raman spectra of VX.... IR Spectra, VX, Raman spectra, Computer predictions.

  7. Extending compile-time reverse mode and exploiting partial separability in ADIFOR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bischof, C.H.; El-Khadiri, M.

    1992-10-01

    The numerical methods employed in the solution of many scientific computing problems require the computation of the gradient of a function f: R^n -> R. ADIFOR is a source translator that, given a collection of subroutines to compute f, generates Fortran 77 code for computing the derivative of this function. Using the so-called torsion problem from the MINPACK-2 test collection as an example, this paper explores two issues in automatic differentiation: the efficient computation of derivatives for partially separable functions and the use of the compile-time reverse mode for the generation of derivatives. We show that orders of magnitude of improvement are possible when exploiting partial separability and maximizing use of the reverse mode.
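
    For readers unfamiliar with the reverse mode that ADIFOR exploits, the short Python sketch below implements a toy tape-based reverse-mode differentiator for scalar expressions; it is only a conceptual illustration, not ADIFOR's Fortran source translation.

        class Var:
            """Toy reverse-mode AD node: records local partial derivatives."""
            def __init__(self, value, parents=()):
                self.value, self.parents, self.grad = value, parents, 0.0

            def __add__(self, other):
                return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

            def __mul__(self, other):
                return Var(self.value * other.value,
                           [(self, other.value), (other, self.value)])

            def backward(self, seed=1.0):
                # Propagate the adjoint back along every recorded path.
                self.grad += seed
                for parent, local in self.parents:
                    parent.backward(seed * local)

        x, y = Var(2.0), Var(3.0)
        f = x * y + x          # f = x*y + x
        f.backward()
        print(x.grad, y.grad)  # df/dx = y + 1 = 4.0, df/dy = x = 2.0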

  8. XSECT: A computer code for generating fuselage cross sections - user's manual

    NASA Technical Reports Server (NTRS)

    Ames, K. R.

    1982-01-01

    A computer code, XSECT, has been developed to generate fuselage cross sections from a given area distribution and wing definition. The cross sections are generated to match the wing definition while conforming to the area requirement. An iterative procedure is used to generate each cross section. Fuselage area balancing may be included in this procedure if desired. The code is intended as an aid for engineers who must first design a wing under certain aerodynamic constraints and then design a fuselage for the wing such that the constraints remain satisfied. This report contains the information necessary for accessing and executing the code, which is written in FORTRAN to execute on the Cyber 170 series computers (NOS operating system) and produces graphical output for a Tektronix 4014 CRT. The LRC graphics software is used in combination with the interface between this software and the PLOT 10 software.

  9. Fast dictionary generation and searching for magnetic resonance fingerprinting.

    PubMed

    Jun Xie; Mengye Lyu; Jian Zhang; Hui, Edward S; Wu, Ed X; Ze Wang

    2017-07-01

    A super-fast dictionary generation and searching (DGS) algorithm was developed for MR parameter quantification using magnetic resonance fingerprinting (MRF). MRF is a new technique for simultaneously quantifying multiple MR parameters using one temporally resolved MR scan, but it has a multiplicative computation complexity, resulting in a large burden of dictionary generation, storage, and retrieval that can easily become intractable for state-of-the-art computers. Based on retrospective analysis of the dictionary-matching objective function, a multi-scale ZOOM-like DGS algorithm, dubbed MRF-ZOOM, was proposed. MRF-ZOOM is quasi-parameter-separable, so the multiplicative computation complexity is broken into an additive one. Evaluations showed that MRF-ZOOM was hundreds or thousands of times faster than the original MRF parameter quantification method, even without counting the dictionary generation time. Using real data, it yielded nearly the same results as the original method. MRF-ZOOM provides a super-fast solution for MR parameter quantification.
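
    A minimal Python sketch of the brute-force dictionary-matching step that MRF-ZOOM accelerates, assuming a complex dictionary matrix D whose rows are normalized fingerprint evolutions and an acquired signal s; the multi-scale MRF-ZOOM search itself is not reproduced here.

        import numpy as np

        def match_fingerprint(D, s):
            """Return the index of the dictionary entry best matching signal s.

            D : (n_entries, n_timepoints) complex array, rows L2-normalized.
            s : (n_timepoints,) complex measured signal.
            """
            scores = np.abs(D.conj() @ s)      # magnitude of inner products
            return int(np.argmax(scores))

        # Toy example: 3 entries, 5 time points (illustrative values only)
        rng = np.random.default_rng(0)
        D = rng.standard_normal((3, 5)) + 1j * rng.standard_normal((3, 5))
        D /= np.linalg.norm(D, axis=1, keepdims=True)
        s = D[1] * 2.0                          # a scaled copy of entry 1
        print(match_fingerprint(D, s))          # -> 1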

  10. Implementation of cryptographic hash function SHA256 in C++

    NASA Astrophysics Data System (ADS)

    Shrivastava, Akash

    2012-02-01

    This abstract explains an implementation of SHA-256 (Secure Hash Algorithm 256) using C++. SHA-2 is a strong hashing algorithm used in almost all kinds of security applications. The algorithm consists of two phases: preprocessing and hash computation. Preprocessing involves padding a message, parsing the padded message into m-bit blocks, and setting initialization values to be used in the hash computation. The hash computation generates a message schedule from the padded message and uses that schedule, along with functions, constants, and word operations, to iteratively generate a series of hash values. The final hash value generated by the computation is used to determine the message digest. SHA-2 includes a significant number of changes from its predecessor, SHA-1, and consists of a set of four hash functions with digests of 224, 256, 384, or 512 bits. The SHA-256 variant outputs a 256-bit message digest, with an internal state of 256 bits and a block size of 512 bits. The maximum message length is 2^64 - 1 bits, and the digest is computed over a series of 64 rounds consisting of operations such as AND, OR, XOR, SHR, and ROTR. The code provides a clear understanding of the hash algorithm and generates hash values to retrieve the message digest.
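
    The C++ implementation itself is not reproduced in the abstract; as a quick sanity check for any implementation, the Python snippet below uses the standard hashlib module and the well-known FIPS 180 test vector for the message "abc".

        import hashlib

        digest = hashlib.sha256(b"abc").hexdigest()
        print(digest)
        # Expected (FIPS 180 test vector):
        # ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
        assert len(digest) * 4 == 256   # 64 hex characters = 256 bits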

  11. Design quadrilateral apertures in binary computer-generated holograms of large space bandwidth product.

    PubMed

    Wang, Jing; Sheng, Yunlong

    2016-09-20

    A new approach for designing the binary computer-generated hologram (CGH) of a very large number of pixels is proposed. Diffraction of the CGH apertures is computed by the analytical Abbe transform and by considering the aperture edges as the basic diffracting elements. The computation cost is independent of the CGH size. The arbitrary-shaped polygonal apertures in the CGH consist of quadrilateral apertures, which are designed by assigning the binary phases using the parallel genetic algorithm with a local search, followed by optimizing the locations of the co-vertices with a direct search. The design results in high performance with low image reconstruction error.

  12. Computing Shapes Of Cascade Diffuser Blades

    NASA Technical Reports Server (NTRS)

    Tran, Ken; Prueger, George H.

    1993-01-01

    Computer program generates sizes and shapes of cascade-type blades for use in axial or radial turbomachine diffusers. Generates shapes of blades rapidly, incorporating extensive cascade data to determine optimum incidence and deviation angle for blade design based on the 65-series data base of the National Advisory Committee for Aeronautics (NACA). Allows great variability in blade profile through input variables. Also provides for design of three-dimensional blades by allowing variable blade stacking. Enables designer to obtain computed blade-geometry data in various forms: as input for blade-loading analysis; as input for quasi-three-dimensional analysis of flow; or as points for transfer to computer-aided design.

  13. Program Helps Generate And Manage Graphics

    NASA Technical Reports Server (NTRS)

    Truong, L. V.

    1994-01-01

    Living Color Frame Maker (LCFM) computer program generates computer-graphics frames. Graphical frames saved as text files, in readable and disclosed format, easily retrieved and manipulated by user programs for wide range of real-time visual information applications. LCFM implemented in frame-based expert system for visual aids in management of systems. For monitoring, diagnosis, and/or control, diagrams of circuits or systems brought to "life" by use of designated video colors and intensities to symbolize status of hardware components (via real-time feedback from sensors). Status of systems can be displayed. Written in C++ using Borland C++ 2.0 compiler for IBM PC-series computers and compatible computers running MS-DOS.

  14. Digital computer programs for generating oblique orthographic projections and contour plots

    NASA Technical Reports Server (NTRS)

    Giles, G. L.

    1975-01-01

    User and programer documentation is presented for two programs for automatic plotting of digital data. One of the programs generates oblique orthographic projections of three-dimensional numerical models and the other program generates contour plots of data distributed in an arbitrary planar region. A general description of the computational algorithms, user instructions, and complete listings of the programs is given. Several plots are included to illustrate various program options, and a single example is described to facilitate learning the use of the programs.

  15. Advances in Parallelization for Large Scale Oct-Tree Mesh Generation

    NASA Technical Reports Server (NTRS)

    O'Connell, Matthew; Karman, Steve L.

    2015-01-01

    Despite great advancements in the parallelization of numerical simulation codes over the last 20 years, it is still common to perform grid generation in serial. Generating large scale grids in serial often requires using special "grid generation" compute machines that can have more than ten times the memory of average machines. While some parallel mesh generation techniques have been proposed, generating very large meshes for LES or aeroacoustic simulations is still a challenging problem. An automated method for the parallel generation of very large scale off-body hierarchical meshes is presented here. This work enables large scale parallel generation of off-body meshes by using a novel combination of parallel grid generation techniques and a hybrid "top down" and "bottom up" oct-tree method. Meshes are generated using hardware commonly found in parallel compute clusters. The capability to generate very large meshes is demonstrated by the generation of off-body meshes surrounding complex aerospace geometries. Results are shown including a one billion cell mesh generated around a Predator Unmanned Aerial Vehicle geometry, which was generated on 64 processors in under 45 minutes.
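
    The paper's parallel top-down/bottom-up machinery is not shown in the abstract; the Python sketch below only illustrates the basic serial oct-tree idea of recursively splitting a cubic cell until a user-supplied refinement test is satisfied, with hypothetical names throughout.

        def refine(center, half, level, needs_split, max_level, cells):
            """Recursively split a cube into octants; collect leaf cells."""
            if level >= max_level or not needs_split(center, half):
                cells.append((center, half))
                return
            q = half / 2.0
            for dx in (-q, q):
                for dy in (-q, q):
                    for dz in (-q, q):
                        child = (center[0] + dx, center[1] + dy, center[2] + dz)
                        refine(child, q, level + 1, needs_split, max_level, cells)

        # Example: refine near the origin (a stand-in for "near the body surface")
        near_origin = lambda c, h: sum(v * v for v in c) ** 0.5 < 4 * h
        leaves = []
        refine((0.0, 0.0, 0.0), 1.0, 0, near_origin, 4, leaves)
        print(len(leaves), "leaf cells")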

  16. Horizontal steam generator thermal-hydraulics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ubra, O.; Doubek, M.

    1995-09-01

    Horizontal steam generators are typical components of nuclear power plants with pressure water reactor type VVER. Thermal-hydraulic behavior of horizontal steam generators is very different from the vertical U-tube steam generator, which has been extensively studied for several years. To contribute to the understanding of the horizontal steam generator thermal-hydraulics a computer program for 3-D steady state analysis of the PGV-1000 steam generator has been developed. By means of this computer program, a detailed thermal-hydraulic and thermodynamic study of the horizontal steam generator PGV-1000 has been carried out and a set of important steam generator characteristics has been obtained. The 3-D distribution of the void fraction and 3-D level profile as functions of load and secondary side pressure have been investigated and secondary side volumes and masses as functions of load and pressure have been evaluated. Some of the interesting results of calculations are presented in the paper.

  17. Computation of the turbulent boundary layer downstream of vortex generators

    NASA Astrophysics Data System (ADS)

    Chang, Paul K.

    1987-12-01

    The approximate analysis of the three-dimensional incompressible turbulent boundary layer downstream of vortex generators is presented. Extensive numerical computations are carried out to assess the effectiveness of single-row, counter-rotating vane-type vortex generators in alleviating flow separation. Flow separation lines downstream of the vortex generators on a thick airfoil are determined in terms of the size, location, and arrangement of the vortex generators. These lines are compared with the separation line without the vortex generators. High efficiency is obtained with the moderately slender rectangular blade of the generator. The results indicate that separation is alleviated more effectively in the region closer to the symmetry axis of the generator than in the outer region of the symmetry axis. No optimum conditions for the alleviation of flow separation are established in this investigation, and no comparisons are made with other analytical results and experimental data.

  18. An Empirical Model for Vane-Type Vortex Generators in a Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Dudek, Julianne C.

    2005-01-01

    An empirical model which simulates the effects of vane-type vortex generators in ducts was incorporated into the Wind-US Navier-Stokes computational fluid dynamics code. The model enables the effects of the vortex generators to be simulated without defining the details of the geometry within the grid, and makes it practical for researchers to evaluate multiple combinations of vortex generator arrangements. The model determines the strength of each vortex based on the generator geometry and the local flow conditions. Validation results are presented for flow in a straight pipe with a counter-rotating vortex generator arrangement, and the results are compared with experimental data and computational simulations using a gridded vane generator. Results are also presented for vortex generator arrays in two S-duct diffusers, along with accompanying experimental data. The effects of grid resolution and turbulence model are also examined.

  19. Networked Microcomputers--The Next Generation in College Computing.

    ERIC Educational Resources Information Center

    Harris, Albert L.

    The evolution of computer hardware for college computing has mirrored the industry's growth. When computers were introduced into the educational environment, they had limited capacity and served one user at a time. Then came large mainframes with many terminals sharing the resource. Next, the use of computers in office automation emerged. As…

  20. Can Computers Be Used Successfully for Teaching College Mathematics?

    ERIC Educational Resources Information Center

    Hatfield, Steven H.

    1976-01-01

    Author states that the use of computers in mathematics courses tends to generate interest in course subject matter and make learning a less passive experience. Computers also introduce students to computer science as a field of study, and provide basic knowledge of computers as an important aspect of today's technology. (Author/RW)

  1. Future trends in computer waste generation in India.

    PubMed

    Dwivedy, Maheshwar; Mittal, R K

    2010-11-01

    The objective of this paper is to estimate the future projection of computer waste in India and to subsequently analyze its flow at the end of the useful phase. For this purpose, the study utilizes the logistic model-based approach proposed by Yang and Williams to forecast future trends in computer waste. The model estimates the future projection of the computer penetration rate using the first-lifespan distribution and historical sales data. A bounding analysis of the future carrying capacity was simulated using the three-parameter logistic curve. The obsolete generation quantities observed from the extrapolated penetration rates are then used to model the disposal phase. The results of the bounding analysis indicate that in the year 2020, around 41-152 million units of computers will become obsolete. The obsolete computer generation quantities are then used to estimate the end-of-life outflows by utilizing a time-series multiple lifespan model. Even a conservative estimate of the future recycling capacity of PCs reaches upwards of 30 million units during 2025; apparently, more than 150 million units could potentially be recycled in the upper bound case. However, considering significant future investment in the e-waste recycling sector from all stakeholders in India, we propose a logistic growth in the recycling rate and estimate the requirement of recycling capacity between 60 and 400 million units for the lower and upper bound cases during 2025. Finally, we compare the future obsolete PC generation amounts of the US and India. Copyright © 2010 Elsevier Ltd. All rights reserved.
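
    A minimal sketch of the kind of logistic (S-curve) projection used in such forecasts, with entirely hypothetical carrying capacity, growth rate, and lifespan parameters rather than the paper's fitted values.

        import numpy as np

        def logistic(t, K, r, t0):
            """Three-parameter logistic curve: carrying capacity K,
            growth rate r, inflection year t0."""
            return K / (1.0 + np.exp(-r * (t - t0)))

        years = np.arange(2000, 2026)
        # Hypothetical parameters (illustrative only, not the paper's fit):
        installed = logistic(years, K=400e6, r=0.35, t0=2015)

        # Units becoming obsolete ~ first difference shifted by a mean lifespan
        lifespan = 7
        obsolete = np.diff(logistic(years - lifespan, 400e6, 0.35, 2015))
        print(f"Installed base in 2025: {installed[-1]:.2e} units")
        print(f"Projected obsolete units in 2025: {obsolete[-1]:.2e}")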

  2. Path planning on cellular nonlinear network using active wave computing technique

    NASA Astrophysics Data System (ADS)

    Yeniçeri, Ramazan; Yalçın, Müstak E.

    2009-05-01

    This paper introduces a simple algorithm for solving the robot path-finding problem using active wave computing techniques. A two-dimensional Cellular Neural/Nonlinear Network (CNN), consisting of relaxation oscillators, has been used to generate active waves and to process the visual information. The network, which has been implemented on a Field Programmable Gate Array (FPGA) chip, can be programmed, controlled and observed by a host computer. The arena of the robot is modelled as the medium of the active waves on the network. Active waves are employed to cover the whole medium with their own dynamics, starting from an initial point. The proposed algorithm works by observing the motion of the wave-front of the active waves. The host program first loads the arena model onto the active wave generator network and commands it to start the generation. It then periodically pulls the network image from the generator hardware to analyze the evolution of the active waves. When the algorithm is completed, a vectorial data image is generated; the path from any pixel on this image to the active-wave-generating pixel is drawn by the vectors on this image. The robot arena may be a complicated labyrinth or may have a simple geometry, but the arena surface must always be flat. Our Autowave Generator CNN implementation, which is settled on the Xilinx University Program Virtex-II Pro Development System, is operated by a MATLAB program running on the host computer. As the active wave generator hardware has 16,384 neurons, an arena with 128 × 128 pixels can be modeled and solved by the algorithm. The system also has a monitor on which the network image is depicted simultaneously.
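
    In software, the wave-front bookkeeping that the CNN performs in hardware can be mimicked by a breadth-first expansion over the arena grid. The Python sketch below is only that serial analogue, with a hypothetical 0/1 obstacle map, not the FPGA implementation.

        from collections import deque

        def wavefront(grid, goal):
            """Breadth-first 'wave' from the goal; each free cell stores the
            neighbour to step to, so any start cell can follow the vectors."""
            rows, cols = len(grid), len(grid[0])
            step_to = {goal: goal}
            queue = deque([goal])
            while queue:
                r, c = queue.popleft()
                for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
                    if (0 <= nr < rows and 0 <= nc < cols
                            and grid[nr][nc] == 0 and (nr, nc) not in step_to):
                        step_to[(nr, nc)] = (r, c)
                        queue.append((nr, nc))
            return step_to

        arena = [[0, 0, 0, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]]           # 1 = obstacle
        vectors = wavefront(arena, goal=(0, 3))
        # Follow the vectors from a start cell to the goal:
        cell, path = (2, 0), []
        while cell != (0, 3):
            path.append(cell)
            cell = vectors[cell]
        print(path + [(0, 3)])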

  3. Two-Dimensional Grids About Airfoils and Other Shapes

    NASA Technical Reports Server (NTRS)

    Sorenson, R.

    1982-01-01

    GRAPE computer program generates two-dimensional finite-difference grids about airfoils and other shapes by use of Poisson differential equation. GRAPE can be used with any boundary shape, even one specified by tabulated points and including limited number of sharp corners. Numerically stable and computationally fast, GRAPE provides aerodynamic analyst with efficient and consistent means of grid generation.

  4. Infrared Testing of the Wide-field Infrared Survey Telescope Grism Using Computer Generated Holograms

    NASA Technical Reports Server (NTRS)

    Dominguez, Margaret Z.; Content, David A.; Gong, Qian; Griesmann, Ulf; Hagopian, John G.; Marx, Catherine T; Whipple, Arthur L.

    2017-01-01

    Infrared Computer Generated Holograms (CGHs) were designed, manufactured and used to measure the performance of the grism (grating prism) prototype which includes testing Diffractive Optical Elements (DOE). The grism in the Wide Field Infrared Survey Telescope (WFIRST) will allow the surveying of a large section of the sky to find bright galaxies.

  5. Monitor Tone Generates Stress in Computer and VDT Operators: A Preliminary Study.

    ERIC Educational Resources Information Center

    Dow, Caroline; Covert, Douglas C.

    A near-ultrasonic pure tone of 15,570 hertz generated by flyback transformers in computer and video display terminal (VDT) monitors may cause severe non-specific irritation or stress disease in operators. Women hear higher frequency sounds than men and are twice as sensitive to "too loud" noise. Pure tones at high frequencies are more…

  6. Interpretation of forest characteristics from computer-generated images.

    Treesearch

    T.M. Barrett; H.R. Zuuring; T. Christopher

    2006-01-01

    The need for effective communication in the management and planning of forested landscapes has led to a substantial increase in the use of visual information. Using forest plots from California, Oregon, and Washington, and a survey of 183 natural resource professionals in these states, we examined the use of computer-generated images to convey information about forest...

  7. JPKWIC - General key word in context and subject index report generator

    NASA Technical Reports Server (NTRS)

    Jirka, R.; Kabashima, N.; Kelly, D.; Plesset, M.

    1968-01-01

    JPKWIC computer program is a general key word in context and subject index report generator specifically developed to help nonprogrammers and nontechnical personnel to use the computer to access files, libraries and mass documentation. This program is designed to produce a KWIC index, a subject index, an edit report, a summary report, and an exclusion list.

  8. 21 CFR 514.80 - Records and reports concerning experience with approved new animal drugs.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... and 2301? How can I get them? Can I use computer-generated equivalents? 514.80(d) Reporting forms. How... provided on the forms. Computer-generated equivalents of Form FDA 1932 or Form FDA 2301, approved by FDA... Medicine, Division of Surveillance (HFV-210), 7500 Standish Pl., Rockville, MD 20855-2764. (e) Records to...

  9. 21 CFR 514.80 - Records and reports concerning experience with approved new animal drugs.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... and 2301? How can I get them? Can I use computer-generated equivalents? 514.80(d) Reporting forms. How... provided on the forms. Computer-generated equivalents of Form FDA 1932 or Form FDA 2301, approved by FDA... Medicine, Division of Surveillance (HFV-210), 7500 Standish Pl., Rockville, MD 20855-2764. (e) Records to...

  10. 21 CFR 514.80 - Records and reports concerning experience with approved new animal drugs.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... and 2301? How can I get them? Can I use computer-generated equivalents? 514.80(d) Reporting forms. How... provided on the forms. Computer-generated equivalents of Form FDA 1932 or Form FDA 2301, approved by FDA... Medicine, Division of Surveillance (HFV-210), 7500 Standish Pl., Rockville, MD 20855-2764. (e) Records to...

  11. 21 CFR 514.80 - Records and reports concerning experience with approved new animal drugs.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... and 2301? How can I get them? Can I use computer-generated equivalents? 514.80(d) Reporting forms. How... provided on the forms. Computer-generated equivalents of Form FDA 1932 or Form FDA 2301, approved by FDA... Medicine, Division of Surveillance (HFV-210), 7500 Standish Pl., Rockville, MD 20855-2764. (e) Records to...

  12. 21 CFR 514.80 - Records and reports concerning experience with approved new animal drugs.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... and 2301? How can I get them? Can I use computer-generated equivalents? 514.80(d) Reporting forms. How... provided on the forms. Computer-generated equivalents of Form FDA 1932 or Form FDA 2301, approved by FDA... Medicine, Division of Surveillance (HFV-210), 7500 Standish Pl., Rockville, MD 20855-2764. (e) Records to...

  13. Leveraging Learning for Generation I [and] The Haves and Have Nots of the Digital Divide.

    ERIC Educational Resources Information Center

    Angulo, Martha; Feldman, Sandra

    2001-01-01

    The Internet's effects are spreading. Schools are purchasing computer programs, assisted by state, federal, and corporate grants. K-12 schools spent nearly $7 billion on instructional technology in 2000. The digital divide is narrowing; Generation I kids have greater computer access at home and at school. In a sidebar, Sandra Feldman urges…

  14. Monte Carlo Simulation Using HyperCard and Lotus 1-2-3.

    ERIC Educational Resources Information Center

    Oulman, Charles S.; Lee, Motoko Y.

    Monte Carlo simulation is a computer modeling procedure for mimicking observations on a random variable. A random number generator is used in generating the outcome for the events that are being modeled. The simulation can be used to obtain results that otherwise require extensive testing or complicated computations. This paper describes how Monte…

  15. Compensation for Transport Delays Produced by Computer Image Generation Systems. Cooperative Training Series.

    ERIC Educational Resources Information Center

    Ricard, G. L.; And Others

    The cooperative Navy/Air Force project described is aimed at the problem of image-flutter encountered when visual displays that present computer generated images are used for the simulation of certain flying situations. Two experiments are described which extend laboratory work on delay compensation schemes to the simulation of formation flight in…

  16. Chemistry for Kids: Generating Carbon Dioxide in Elementary School Chemistry and Using a Computer To Write about It.

    ERIC Educational Resources Information Center

    Schlenker, Richard M.; Yoshida, Sarah

    This material describes an activity using vinegar and baking soda to generate carbon dioxide, and writing a report using the Appleworks word processing program for grades 3 to 8 students. Time requirement, relevant process skills, vocabulary, mathematics skills, computer skills, and materials are listed. Activity procedures including class…

  17. 75 FR 78806 - Agency Information Collection (Create Payment Request for the VA Funding Fee Payment System (VA...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-16

    ... Payment Request for the VA Funding Fee Payment System (VA FFPS); a Computer Generated Funding Fee Receipt.... 2900-0474.'' SUPPLEMENTARY INFORMATION: Title: Create Payment Request for the VA Funding Fee Payment System (VA FFPS); a Computer Generated Funding Fee Receipt, VA Form 26-8986. OMB Control Number: 2900...

  18. Effects of Self-Regulatory Status and Practice Type on Student Performance in the Mobile Learning Environment

    ERIC Educational Resources Information Center

    Tutty, Jeremy Ian

    2013-01-01

    The next generation of computer-based learning environments has arrived. This generation of technology is characterized by mobile and portable devices such as smartphones and tablet computers with wireless broadband access. With these devices comes the promise of extending the online learning revolution. The purpose of this study was to…

  19. Teaching French Transformational Grammar by Means of Computer-Generated Video-Tapes.

    ERIC Educational Resources Information Center

    Adler, Alfred; Thomas, Jean Jacques

    This paper describes a pilot program in an integrated media presentation of foreign languages and the production and usage of seven computer-generated video tapes which demonstrate various aspects of French syntax. This instructional set could form the basis for CAI lessons in which the student is presented images identical to those on the video…

  20. Chrysler improved numerical differencing analyzer for third generation computers CINDA-3G

    NASA Technical Reports Server (NTRS)

    Gaski, J. D.; Lewis, D. R.; Thompson, L. R.

    1972-01-01

    New and versatile method has been developed to supplement or replace use of original CINDA thermal analyzer program in order to take advantage of improved systems software and machine speeds of third generation computers. CINDA-3G program options offer variety of methods for solution of thermal analog models presented in network format.

  1. Next Generation Multimedia Distributed Data Base Systems

    NASA Technical Reports Server (NTRS)

    Pendleton, Stuart E.

    1997-01-01

    The paradigm of client/server computing is changing. The model of a server running a monolithic application and supporting clients at the desktop is giving way to a different model that blurs the line between client and server. We are on the verge of plunging into the next generation of computing technology--distributed object-oriented computing. This is not only a change in requirements but a change in opportunities, and requires a new way of thinking for Information System (IS) developers. The information system demands caused by global competition are requiring even more access to decision making tools. Simply put, object-oriented technology has been developed to supersede the current design process of information systems, which is not capable of handling next generation multimedia.

  2. Adapting the serial Alpgen parton-interaction generator to simulate LHC collisions on millions of parallel threads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Childers, J. T.; Uram, T. D.; LeCompte, T. J.

    As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial-application to a large-scale parallel-application and the performance that was achieved.

  3. Adapting the serial Alpgen parton-interaction generator to simulate LHC collisions on millions of parallel threads

    DOE PAGES

    Childers, J. T.; Uram, T. D.; LeCompte, T. J.; ...

    2016-09-29

    As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial-application to a large-scale parallel-application and the performance that was achieved.

  4. Adapting the serial Alpgen parton-interaction generator to simulate LHC collisions on millions of parallel threads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Childers, J. T.; Uram, T. D.; LeCompte, T. J.

    As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial-application to a large-scale parallel-application and the performance that was achieved.

  5. STS-41 mission charts, computer-generated and artist concept drawings, photos

    NASA Technical Reports Server (NTRS)

    1990-01-01

    STS-41 related charts, computer-generated and artist concept drawings, and photos of the Ulysses spacecraft and mission flight path provided by the European Space Agency (ESA). Charts show the Ulysses mission flight path and encounter with Jupiter (45980, 45981) and sun (illustrating cosmic dust, gamma ray burst, magnetic field, x-rays, solar energetic particles, visible corona, interstellar gas, plasma wave, cosmic rays, solar radio noise, and solar wind) (45988). Computer-generated view shows the Ulysses spacecraft (45983). Artist concept illustrates Ulysses spacecraft deploy from the space shuttle payload bay (PLB) with the inertial upper stage (IUS) and payload assist module (PAM-S) visible (45984). Ulysses spacecraft is also shown undergoing preflight testing in the manufacturing facility (45985, 45986, 45987).

  6. Computational solutions to large-scale data management and analysis

    PubMed Central

    Schadt, Eric E.; Linderman, Michael D.; Sorenson, Jon; Lee, Lawrence; Nolan, Garry P.

    2011-01-01

    Today we can generate hundreds of gigabases of DNA and RNA sequencing data in a week for less than US$5,000. The astonishing rate of data generation by these low-cost, high-throughput technologies in genomics is being matched by that of other technologies, such as real-time imaging and mass spectrometry-based flow cytometry. Success in the life sciences will depend on our ability to properly interpret the large-scale, high-dimensional data sets that are generated by these technologies, which in turn requires us to adopt advances in informatics. Here we discuss how we can master the different types of computational environments that exist — such as cloud and heterogeneous computing — to successfully tackle our big data problems. PMID:20717155

  7. Direct Synthesis of Microwave Waveforms for Quantum Computing

    NASA Astrophysics Data System (ADS)

    Raftery, James; Vrajitoarea, Andrei; Zhang, Gengyan; Leng, Zhaoqi; Srinivasan, Srikanth; Houck, Andrew

    Current state-of-the-art quantum computing experiments in the microwave regime use control pulses generated by modulating microwave tones with baseband signals generated by an arbitrary waveform generator (AWG). Recent advances in digital-to-analog conversion technology have made it possible to directly synthesize arbitrary microwave pulses with sampling rates of 65 gigasamples per second (GSa/s) or higher. These new ultra-wide bandwidth AWGs could dramatically simplify the classical control chain for quantum computing experiments, presenting potential cost savings and reducing the number of components that need to be carefully calibrated. Here we use a Keysight M8195A AWG to study the viability of such a simplified scheme, demonstrating randomized benchmarking of a superconducting qubit with high fidelity.

  8. A revision of the subtract-with-borrow random number generators

    NASA Astrophysics Data System (ADS)

    Sibidanov, Alexei

    2017-12-01

    The most popular and widely used subtract-with-borrow generator, also known as RANLUX, is reimplemented as a linear congruential generator using large integer arithmetic with the modulus size of 576 bits. Modern computers, as well as the specific structure of the modulus inferred from RANLUX, allow for the development of a fast modular multiplication - the core of the procedure. This was previously believed to be slow and to have too high a cost in terms of computing resources. Our tests show a significant gain in generation speed which is comparable with other fast, high quality random number generators. An additional feature is the fast skipping of generator states leading to a seeding scheme which guarantees the uniqueness of random number sequences. Licensing provisions: GPLv3. Programming language: C++, C, Assembler.
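
    For reference, a plain Python sketch of the classic Marsaglia-Zaman subtract-with-borrow recurrence that underlies RANLUX (base 2^24, lags 24 and 10) is given below; it omits the luxury-level decimation and the large-integer LCG reformulation that the paper describes.

        B, R, S = 1 << 24, 24, 10          # base and lags used by RANLUX

        def swb_stream(seed_state, carry=0):
            """Yield x_n = (x_{n-S} - x_{n-R} - c) mod B with borrow bit c."""
            state = list(seed_state)        # needs R = 24 seed words in [0, B)
            assert len(state) == R
            while True:
                x = state[-S] - state[-R] - carry
                carry = 1 if x < 0 else 0
                x %= B
                state.append(x)
                state.pop(0)
                yield x

        gen = swb_stream(range(1, 25))      # toy seed, not a recommended one
        print([next(gen) for _ in range(5)])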

  9. Network Information System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    1996-05-01

    The Network Information System (NWIS) was initially implemented in May 1996 as a system in which computing devices could be recorded so that unique names could be generated for each device. Since then the system has grown to be an enterprise-wide information system which is integrated with other systems to provide the seamless flow of data through the enterprise. The system tracks data for two main entities: people and computing devices. The following are the types of functions performed by NWIS for these two entities. People: provides source information to the enterprise person data repository for select contractors and visitors; generates and tracks unique usernames and Unix user IDs for every individual granted cyber access; tracks accounts for centrally managed computing resources, and monitors and controls the reauthorization of the accounts in accordance with the DOE mandated interval. Computing devices: generates unique names for all computing devices registered in the system; tracks the following information for each computing device: manufacturer, make, model, Sandia property number, vendor serial number, operating system and operating system version, owner, device location, amount of memory, amount of disk space, and level of support provided for the machine; tracks the hardware address for network cards; tracks the IP address registered to computing devices along with the canonical and alias names for each address; updates the Dynamic Domain Name Service (DDNS) for canonical and alias names; creates the configuration files for DHCP to control the DHCP ranges and allow access to only properly registered computers; tracks and monitors classified security plans for stand-alone computers; tracks the configuration requirements used to set up the machine; tracks the roles people have on machines (system administrator, administrative access, user, etc...); allows systems administrators to track changes made on the machine (both hardware and software); generates an adjustment history of changes on selected fields.

  10. Interactive algebraic grid-generation technique

    NASA Technical Reports Server (NTRS)

    Smith, R. E.; Wiese, M. R.

    1986-01-01

    An algebraic grid generation technique and use of an associated interactive computer program are described. The technique, called the two-boundary technique, is based on Hermite cubic interpolation between two fixed, nonintersecting boundaries. The boundaries are referred to as the bottom and top, and they are defined by two ordered sets of points. Left and right side boundaries which intersect the bottom and top boundaries may also be specified by two ordered sets of points. When side boundaries are specified, linear blending functions are used to conform interior interpolation to the side boundaries. Spacing between physical grid coordinates is determined as a function of boundary data and uniformly spaced computational coordinates. Control functions relating computational coordinates to parametric intermediate variables that affect the distance between grid points are embedded in the interpolation formulas. A versatile control function technique with smooth cubic-spline functions is presented. The technique works best in an interactive graphics environment where computational displays and user responses are quickly exchanged. An interactive computer program based on the technique, called TBGG (two boundary grid generation), is also described.
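
    A stripped-down Python sketch of the two-boundary idea: Hermite cubic blending between a bottom and a top curve with prescribed transverse derivatives. It ignores the side boundaries, blending functions, and interactive control features of TBGG, and all inputs are made up for illustration.

        import numpy as np

        def two_boundary_grid(bottom, top, d_bottom, d_top, n_eta):
            """Hermite cubic interpolation between two boundary curves.

            bottom, top     : (n_xi, 2) arrays of boundary points
            d_bottom, d_top : (n_xi, 2) transverse derivative vectors
            n_eta           : number of grid lines between the boundaries
            """
            eta = np.linspace(0.0, 1.0, n_eta)[:, None, None]
            h00 = 2 * eta**3 - 3 * eta**2 + 1          # Hermite basis functions
            h10 = eta**3 - 2 * eta**2 + eta
            h01 = -2 * eta**3 + 3 * eta**2
            h11 = eta**3 - eta**2
            return (h00 * bottom + h10 * d_bottom
                    + h01 * top + h11 * d_top)          # shape (n_eta, n_xi, 2)

        xi = np.linspace(0.0, 1.0, 21)
        bottom = np.stack([xi, 0.1 * np.sin(np.pi * xi)], axis=1)
        top = np.stack([xi, np.ones_like(xi)], axis=1)
        normal = np.stack([np.zeros_like(xi), np.ones_like(xi)], axis=1)
        grid = two_boundary_grid(bottom, top, 0.3 * normal, 0.3 * normal, 11)
        print(grid.shape)                                # (11, 21, 2)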

  11. Design and Construction of Detector and Data Acquisition Elements for Proton Computed Tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fermi Research Alliance; Northern Illinois University

    2015-07-15

    Proton computed tomography (pCT) offers an alternative to x-ray imaging with potential for three-dimensional imaging, reduced radiation exposure, and in-situ imaging. Northern Illinois University (NIU) is developing a second-generation proton computed tomography system with a goal of demonstrating the feasibility of three-dimensional imaging within clinically realistic imaging times. The second-generation pCT system is comprised of a tracking system, a calorimeter, data acquisition, a computing farm, and software algorithms. The proton beam encounters the upstream tracking detectors, the patient or phantom, the downstream tracking detectors, and a calorimeter. The schematic layout of the pCT system is shown. The data acquisition sends the proton scattering information to an offline computing farm. Major innovations of the second generation pCT project involve an increased data acquisition rate (MHz range) and development of three-dimensional imaging algorithms. The Fermilab Particle Physics Division and Northern Illinois Center for Accelerator and Detector Development at Northern Illinois University worked together to design and construct the tracking detectors, calorimeter, readout electronics and detector mounting system.

  12. Teaching ocean wave forecasting using computer-generated visualization and animation—Part 2: swell forecasting

    NASA Astrophysics Data System (ADS)

    Whitford, Dennis J.

    2002-05-01

    This paper, the second of a two-part series, introduces undergraduate students to ocean wave forecasting using interactive computer-generated visualization and animation. Verbal descriptions and two-dimensional illustrations are often insufficient for student comprehension. Fortunately, the introduction of computers in the geosciences provides a tool for addressing this problem. Computer-generated visualization and animation, accompanied by oral explanation, have been shown to be a pedagogical improvement to more traditional methods of instruction. Cartographic science and other disciplines using geographical information systems have been especially aggressive in pioneering the use of visualization and animation, whereas oceanography has not. This paper will focus on the teaching of ocean swell wave forecasting, often considered a difficult oceanographic topic due to the mathematics and physics required, as well as its interdependence on time and space. Several MATLAB® software programs are described and offered to visualize and animate group speed, frequency dispersion, angular dispersion, propagation, and wave height forecasting of deep water ocean swell waves. Teachers may use these interactive visualizations and animations without requiring an extensive background in computer programming.
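
    The core deep-water relations behind swell forecasting are short enough to show directly. The Python snippet below (a rough analogue of the MATLAB® programs, not the authors' code) computes wavelength, phase speed, group speed, and the travel time of a swell over an assumed great-circle distance.

        import math

        g = 9.81                    # gravitational acceleration (m/s^2)
        T = 14.0                    # wave period in seconds (assumed)
        distance_km = 3000.0        # storm-to-coast distance (assumed)

        wavelength = g * T**2 / (2 * math.pi)       # deep-water wavelength (m)
        phase_speed = g * T / (2 * math.pi)         # c = L / T (m/s)
        group_speed = phase_speed / 2.0             # deep-water swell energy travels at c/2
        travel_hours = distance_km * 1000 / group_speed / 3600

        print(f"L = {wavelength:.0f} m, c = {phase_speed:.1f} m/s, "
              f"cg = {group_speed:.1f} m/s, arrival in {travel_hours:.0f} h")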

  13. End-user satisfaction of a patient education tool manual versus computer-generated tool.

    PubMed

    Tronni, C; Welebob, E

    1996-01-01

    This article reports a nonexperimental comparative study of end-user satisfaction before and after implementation of a vendor-supplied computerized system (Micromedex, Inc) for providing up-to-date patient instructions regarding diseases, injuries, procedures, and medications. The purpose of this research was to measure the satisfaction of nurses who directly interact with a specific patient educational software application and to compare user satisfaction with manual versus computer-generated materials. A computing satisfaction questionnaire using a scale of 1 to 5 (1 being the lowest) was used to measure end-user computing satisfaction in five constructs: content, accuracy, format, ease of use, and timeliness. Summary statistics were used to calculate mean ratings for each of the questionnaire's 12 items and for each of the five constructs. Mean differences between the before- and after-implementation ratings on the five constructs were significant by paired t test. Total user satisfaction improved with the computerized system, and the computer-generated materials were given a higher rating than the manual materials. Implications of these findings are discussed.

  14. Professional Computer Education Organizations--A Resource for Administrators.

    ERIC Educational Resources Information Center

    Ricketts, Dick

    Professional computer education organizations serve a valuable function by generating, collecting, and disseminating information concerning the role of the computer in education. This report touches briefly on the reasons for the rapid and successful development of professional computer education organizations. A number of attributes of effective…

  15. Defining Computational Thinking for Mathematics and Science Classrooms

    ERIC Educational Resources Information Center

    Weintrop, David; Beheshti, Elham; Horn, Michael; Orton, Kai; Jona, Kemi; Trouille, Laura; Wilensky, Uri

    2016-01-01

    Science and mathematics are becoming computational endeavors. This fact is reflected in the recently released Next Generation Science Standards and the decision to include "computational thinking" as a core scientific practice. With this addition, and the increased presence of computation in mathematics and scientific contexts, a new…

  16. Genetic algorithms in teaching artificial intelligence (automated generation of specific algebras)

    NASA Astrophysics Data System (ADS)

    Habiballa, Hashim; Jendryscik, Radek

    2017-11-01

    Teaching essential Artificial Intelligence (AI) methods is an important task for an educator in the branch of soft computing. The key focus is often on a proper understanding of the principles of AI methods in two essential points - why we use soft-computing methods at all, and how we apply these methods to generate reasonable results in sensible time. We present one interesting problem, solved in non-educational research, concerning the automated generation of specific algebras in a huge search space. We emphasize the above-mentioned points as an educational case study of an interesting problem in the automated generation of specific algebras.

  17. Topological transformation of fractional optical vortex beams using computer generated holograms

    NASA Astrophysics Data System (ADS)

    Maji, Satyajit; Brundavanam, Maruthi M.

    2018-04-01

    Optical vortex beams with fractional topological charges (TCs) are generated by the diffraction of a Gaussian beam using computer generated holograms embedded with mixed screw-edge dislocations. When the input Gaussian beam has a finite wave-front curvature, the generated fractional vortex beams show distinct topological transformations in comparison to the integer charge optical vortices. The topological transformations at different fractional TCs are investigated through the birth and evolution of the points of phase singularity, the azimuthal momentum transformation, occurrence of critical points in the transverse momentum and the vorticity around the singular points. This study is helpful to achieve better control in optical micro-manipulation applications.
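
    A common way to realize such holograms is a "forked" grating whose transmittance encodes the (possibly fractional) topological charge. The Python sketch below builds a binary version of that pattern under assumed grating and charge parameters; it is a generic fork hologram, not the authors' specific CGH design with mixed screw-edge dislocations.

        import numpy as np

        N, pitch = 512, 10e-6            # array size and pixel pitch (assumed)
        charge, period = 2.5, 80e-6      # fractional TC and carrier period (assumed)

        x = (np.arange(N) - N / 2) * pitch
        X, Y = np.meshgrid(x, x)
        theta = np.arctan2(Y, X)

        # Fork hologram: azimuthal phase of the chosen charge plus a linear carrier
        phase = charge * theta + 2 * np.pi * X / period
        binary_cgh = (np.cos(phase) > 0).astype(np.uint8)   # 0/1 transmittance
        print(binary_cgh.shape, binary_cgh.mean())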

  18. ORNL Resolved Resonance Covariance Generation for ENDF/B-VII.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leal, Luiz C.; Guber, Klaus H.; Wiarda, Dorothea

    2012-12-01

    Resonance-parameter covariance matrix (RPCM) evaluations in the resolved resonance region were done at the Oak Ridge National Laboratory (ORNL) for the chromium isotopes, titanium isotopes, 19F, 58Ni, 60Ni, 35Cl, 37Cl, 39K, 41K, 55Mn, 233U, 235U, 238U, and 239Pu using the computer code SAMMY. The retroactive approach of the code SAMMY was used to generate the RPCMs for 233U. For 235U, the approach used for covariance generation was similar to the retroactive approach, with the distinction that real experimental data were used as opposed to data generated from the resonance parameters. RPCMs for 238U and 239Pu were generated together with the resonance parameter evaluations. The RPCMs were then converted into the ENDF format using the FILE32 representation. Alternatively, for computer storage reasons, the FILE32 representation was converted into the FILE33 cross section covariance matrix (CSCM). Both representations were processed using the computer code PUFF-IV. This paper describes the procedures used to generate the RPCM and CSCM in the resonance region for ENDF/B-VII.1. The impact of data uncertainty in nuclear reactor benchmark calculations is also presented.

  19. Small file aggregation in a parallel computing system

    DOEpatents

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Grider, Gary; Zhang, Jingwang

    2014-09-02

    Techniques are provided for small file aggregation in a parallel computing system. An exemplary method for storing a plurality of files generated by a plurality of processes in a parallel computing system comprises aggregating the plurality of files into a single aggregated file; and generating metadata for the single aggregated file. The metadata comprises an offset and a length of each of the plurality of files in the single aggregated file. The metadata can be used to unpack one or more of the files from the single aggregated file.
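
    The idea in the claim can be pictured with a few lines of Python: concatenate the small files into one aggregate and keep (offset, length) metadata so any member can be unpacked later. The helper names and in-memory layout are invented for illustration; the patented system operates inside a parallel file system, not on local byte strings.

        def aggregate(files):
            """files: dict name -> bytes. Returns (blob, metadata)."""
            blob, metadata, offset = bytearray(), {}, 0
            for name, data in files.items():
                metadata[name] = (offset, len(data))   # offset and length
                blob.extend(data)
                offset += len(data)
            return bytes(blob), metadata

        def unpack(blob, metadata, name):
            offset, length = metadata[name]
            return blob[offset:offset + length]

        small_files = {"a.dat": b"alpha", "b.dat": b"bravo!", "c.dat": b"c"}
        blob, meta = aggregate(small_files)
        assert unpack(blob, meta, "b.dat") == b"bravo!"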

  20. Computer-generated forces in distributed interactive simulation

    NASA Astrophysics Data System (ADS)

    Petty, Mikel D.

    1995-04-01

    Distributed Interactive Simulation (DIS) is an architecture for building large-scale simulation models from a set of independent simulator nodes communicating via a common network protocol. DIS is most often used to create a simulated battlefield for military training. Computer Generated Forces (CGF) systems control large numbers of autonomous battlefield entities in a DIS simulation using computer equipment and software rather than humans in simulators. CGF entities serve as both enemy forces and supplemental friendly forces in a DIS exercise. Research into various aspects of CGF systems is ongoing. Several CGF systems have been implemented.

  1. Embedded Process Modeling, Analogy-Based Option Generation and Analytical Graphic Interaction for Enhanced User-Computer Interaction: An Interactive Storyboard of Next Generation User-Computer Interface Technology. Phase 1

    DTIC Science & Technology

    1988-03-01

    The structure of the interface is a mapping from the physical world (for example, the use of icons, which have inherent meaning to users but represent ... design alternatives). Mechanisms for linking the user to the computer include physical devices (keyboards), actions taken with the devices (keystrokes) ... [Remaining scanned text is figure content; recoverable caption: Fig. 9. INTACVAL. Objects are physical entities or conceptual entities.]

  2. Use of UNIX in large online processor farms

    NASA Astrophysics Data System (ADS)

    Biel, Joseph R.

    1990-08-01

    There has been a recent rapid increase in the power of RISC computers running the UNIX operating system. Fermilab has begun to make use of these computers in the next generation of offline computer farms. It is also planning to use such computers in online computer farms. Issues involved in constructing online UNIX farms are discussed.

  3. Failure detection in high-performance clusters and computers using chaotic map computations

    DOEpatents

    Rao, Nageswara S.

    2015-09-01

    A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
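
    The detection principle can be sketched in a few lines: identical chaotic-map computations are run on the components being checked, and any divergence between trajectories flags a fault, since a correct deterministic computation must reproduce the same orbit. The Python below uses a logistic map and an injected bit-level error purely to illustrate that idea; it is not the patented implementation.

        def logistic_orbit(x0, steps, fault_at=None):
            """Iterate x -> 3.9*x*(1-x); optionally inject a tiny error."""
            x, orbit = x0, []
            for i in range(steps):
                if i == fault_at:
                    x += 1e-12            # simulated hardware fault
                x = 3.9 * x * (1.0 - x)
                orbit.append(x)
            return orbit

        reference = logistic_orbit(0.123456789, 60)
        healthy   = logistic_orbit(0.123456789, 60)
        faulty    = logistic_orbit(0.123456789, 60, fault_at=20)

        print(reference == healthy)                       # True: trajectories agree
        diverged = next(i for i, (a, b) in enumerate(zip(reference, faulty))
                        if abs(a - b) > 1e-6)
        print("fault detected near step", diverged)       # error amplified by chaos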

  4. Applications in Data-Intensive Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shah, Anuj R.; Adkins, Joshua N.; Baxter, Douglas J.

    2010-04-01

    This book chapter, to be published in Advances in Computers, Volume 78, in 2010 describes applications of data intensive computing (DIC). This is an invited chapter resulting from a previous publication on DIC. This work summarizes efforts coming out of the PNNL's Data Intensive Computing Initiative. Advances in technology have empowered individuals with the ability to generate digital content with mouse clicks and voice commands. Digital pictures, emails, text messages, home videos, audio, and webpages are common examples of digital content that are generated on a regular basis. Data intensive computing facilitates human understanding of complex problems. Data-intensive applications provide timely and meaningful analytical results in response to exponentially growing data complexity and associated analysis requirements through the development of new classes of software, algorithms, and hardware.

  5. Computation of shock wave/target interaction

    NASA Technical Reports Server (NTRS)

    Mark, A.; Kutler, P.

    1983-01-01

    Computational results of shock waves impinging on targets and the ensuing diffraction flowfield are presented. A number of two-dimensional cases are computed with finite difference techniques. The classical case of a shock wave/cylinder interaction is compared with shock tube data and shows the quality of the computations on a pressure-time plot. Similar results are obtained for a shock wave/rectangular body interaction. Here resolution becomes important, and the use of grid clustering techniques tends to give good agreement with experimental data. Computational results are also compared with pressure data resulting from shock impingement experiments for a complicated truck-like geometry. Here of significance are the grid generation and clustering techniques used. For these very complicated bodies, grids are generated by numerically solving a set of elliptic partial differential equations.

  6. Extending compile-time reverse mode and exploiting partial separability in ADIFOR. ADIFOR Working Note No. 7

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bischof, C.H.; El-Khadiri, M.

    1992-10-01

    The numerical methods employed in the solution of many scientific computing problems require the computation of the gradient of a function f: R^n -> R. ADIFOR is a source translator that, given a collection of subroutines to compute f, generates Fortran 77 code for computing the derivative of this function. Using the so-called torsion problem from the MINPACK-2 test collection as an example, this paper explores two issues in automatic differentiation: the efficient computation of derivatives for partially separable functions and the use of the compile-time reverse mode for the generation of derivatives. We show that orders of magnitude of improvement are possible when exploiting partial separability and maximizing use of the reverse mode.

  7. Mind the Noise When Identifying Computational Models of Cognition from Brain Activity.

    PubMed

    Kolossa, Antonio; Kopp, Bruno

    2016-01-01

    The aim of this study was to analyze how measurement error affects the validity of modeling studies in computational neuroscience. A synthetic validity test was created using simulated P300 event-related potentials as an example. The model space comprised four computational models of single-trial P300 amplitude fluctuations which differed in terms of complexity and dependency. The single-trial fluctuation of simulated P300 amplitudes was computed on the basis of one of the models, at various levels of measurement error and at various numbers of data points. Bayesian model selection was performed based on exceedance probabilities. At very low numbers of data points, the least complex model generally outperformed the data-generating model. Invalid model identification also occurred at low levels of data quality and under low numbers of data points if the winning model's predictors were closely correlated with the predictors from the data-generating model. Given sufficient data quality and numbers of data points, the data-generating model could be correctly identified, even against models which were very similar to the data-generating model. Thus, a number of variables affects the validity of computational modeling studies, and data quality and numbers of data points are among the main factors relevant to the issue. Further, the nature of the model space (i.e., model complexity, model dependency) should not be neglected. This study provided quantitative results which show the importance of ensuring the validity of computational modeling via adequately prepared studies. The accomplishment of synthetic validity tests is recommended for future applications. Beyond that, we propose to render the demonstration of sufficient validity via adequate simulations mandatory to computational modeling studies.
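
    The abstract does not include the simulation code; as a rough, hedged analogue of the validity test it describes, the sketch below (predictor names, noise levels, and trial counts are invented) generates single-trial amplitudes from one linear model and asks which of three candidate models wins under BIC, used here only as a simple stand-in for the Bayesian model selection with exceedance probabilities employed in the study. It illustrates the qualitative point that few trials, heavy noise, and closely correlated predictors make the data-generating model harder to identify.

```python
# Hedged sketch: BIC-based model identification under varying noise and trial counts.
import numpy as np

rng = np.random.default_rng(0)

def fit_bic(X, y):
    """Ordinary least squares fit plus the Bayesian information criterion."""
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    n, k = X.shape
    rss = float(res[0]) if res.size else float(np.sum((y - X @ beta) ** 2))
    return n * np.log(rss / n) + k * np.log(n)

def identify(n_trials, noise_sd):
    x1 = rng.normal(size=n_trials)                       # data-generating predictor
    x2 = 0.9 * x1 + 0.1 * rng.normal(size=n_trials)      # closely correlated competitor
    y = 2.0 * x1 + rng.normal(scale=noise_sd, size=n_trials)  # simulated amplitudes
    ones = np.ones(n_trials)
    models = {
        "null":       np.column_stack([ones]),
        "generating": np.column_stack([ones, x1]),
        "competitor": np.column_stack([ones, x2]),
    }
    return min(models, key=lambda m: fit_bic(models[m], y))

for n, sd in [(20, 5.0), (20, 0.5), (500, 5.0), (500, 0.5)]:
    print(f"n={n:4d}, noise={sd}: winner = {identify(n, sd)}")
```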

  8. An analytical computation of magnetic field generated from a cylinder ferromagnet

    NASA Astrophysics Data System (ADS)

    Taniguchi, Tomohiro

    2018-04-01

    An analytical formulation to compute the magnetic field generated by a uniformly magnetized cylinder ferromagnet is developed. Exact solutions of the magnetic field generated by the magnetization pointing in an arbitrary direction are derived, which are applicable both inside and outside the ferromagnet. The validity of the present formulas is confirmed by comparing them with demagnetization coefficients estimated in earlier works. The results will be useful for designing practical applications, such as high-density magnetic recording and microwave generators, where nanostructured ferromagnets are coupled to each other through the dipole interactions and show cooperative phenomena such as synchronization. As an example, the magnetic field generated from a spin torque oscillator for magnetic recording based on microwave assisted magnetization reversal is studied.
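
    The exact arbitrary-direction solutions are given in the paper itself; as a hedged sanity-check example, the sketch below implements only the textbook special case of the on-axis field of a cylinder uniformly magnetized along its axis (radius R, length L, magnetization M), which is valid both inside and outside the magnet. The numerical values are illustrative only.

```python
# Hedged special case: axial flux density on the symmetry axis of an axially
# magnetized cylinder; z is measured from the cylinder center.
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [T*m/A]

def bz_on_axis(z, R, L, M):
    """B_z [T] on the axis of a cylinder of radius R, length L, magnetization M [A/m]."""
    zp, zm = z + L / 2.0, z - L / 2.0
    return 0.5 * MU0 * M * (zp / np.hypot(zp, R) - zm / np.hypot(zm, R))

# Example: a nanopillar-like magnet, R = 50 nm, L = 20 nm, M = 1e6 A/m (made-up values)
z = np.linspace(-100e-9, 100e-9, 5)
print(bz_on_axis(z, R=50e-9, L=20e-9, M=1e6))
```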

  9. Translator program converts computer printout into braille language

    NASA Technical Reports Server (NTRS)

    Powell, R. A.

    1967-01-01

    Computer program converts print image tape files into six dot Braille cells, enabling a blind computer programmer to monitor and evaluate data generated by his own programs. The Braille output is printed 8 lines per inch.
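
    The 1967 program itself is not reproduced in the record; the sketch below is only an illustrative, assumed mapping showing how print characters can be encoded as 6-dot Braille cells, here rendered as Unicode Braille patterns with dots 1-6 mapped to bit values 0x01-0x20 above the U+2800 base. Only the letters a-j of the standard English Braille alphabet are included.

```python
# Illustrative sketch only: encode a few lowercase letters as 6-dot Braille cells.
DOTS = {  # letter -> raised dots (standard English Braille, letters a-j)
    'a': (1,), 'b': (1, 2), 'c': (1, 4), 'd': (1, 4, 5), 'e': (1, 5),
    'f': (1, 2, 4), 'g': (1, 2, 4, 5), 'h': (1, 2, 5), 'i': (2, 4), 'j': (2, 4, 5),
}

def to_braille(text):
    cells = []
    for ch in text.lower():
        if ch in DOTS:
            mask = sum(1 << (d - 1) for d in DOTS[ch])  # dot n -> bit (n-1)
            cells.append(chr(0x2800 + mask))
        else:
            cells.append(' ')   # unmapped characters become blank cells here
    return ''.join(cells)

print(to_braille("bad face"))
```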

  10. Cognitive Model Exploration and Optimization: A New Challenge for Computational Science

    DTIC Science & Technology

    2010-03-01

    the generation and analysis of computational cognitive models to explain various aspects of cognition. Typically the behavior of these models...computational scale of a workstation, so we have turned to high performance computing (HPC) clusters and volunteer computing for large-scale...computational resources. The majority of applications on the Department of Defense HPC clusters focus on solving partial differential equations (Post

  11. Optimization of the Heat Exchangers of a Thermoelectric Generation System

    NASA Astrophysics Data System (ADS)

    Martínez, A.; Vián, J. G.; Astrain, D.; Rodríguez, A.; Berrio, I.

    2010-09-01

    The thermal resistances of the heat exchangers have a strong influence on the electric power produced by a thermoelectric generator. In this work, the heat exchangers of a thermoelectric generator have been optimized in order to maximize the electric power generated. This thermoelectric generator harnesses heat from the exhaust gas of a domestic gas boiler. Statistical design of experiments was used to assess the influence of five factors on both the electric power generated and the pressure drop in the chimney: height of the generator, number of modules per meter of generator height, length of the fins of the hot-side heat exchanger (HSHE), length of the gap between fins of the HSHE, and base thickness of the HSHE. The electric power has been calculated using a computational model, whereas Fluent computational fluid dynamics (CFD) has been used to obtain the thermal resistances of the heat exchangers and the pressure drop. Finally, the thermoelectric generator has been optimized, taking into account the restrictions on the pressure drop.

  12. The Nature of Computer Assisted Learning.

    ERIC Educational Resources Information Center

    Whiting, John

    Computer assisted learning (CAL) is an old technology which has generated much new interest. Computers can: reduce data to a directly comprehensible form; reduce administration; communicate worldwide and exchange, store, and retrieve data; and teach. The computer's limitation is in its dependence on the user's ability and perceptive nature.…

  13. Computer Skill Acquisition and Retention: The Effects of Computer-Aided Self-Explanation

    ERIC Educational Resources Information Center

    Chi, Tai-Yin

    2016-01-01

    This research presents an experimental study to determine to what extent computer skill learners can benefit from generating self-explanation with the aid of different computer-based visualization technologies. Self-explanation was stimulated with dynamic visualization (Screencast), static visualization (Screenshot), or verbal instructions only,…

  14. Advanced Certification Program for Computer Graphic Specialists. Final Performance Report.

    ERIC Educational Resources Information Center

    Parkland Coll., Champaign, IL.

    A pioneer program in computer graphics was implemented at Parkland College (Illinois) to meet the demand for specialized technicians to visualize data generated on high performance computers. In summer 1989, 23 students were accepted into the pilot program. Courses included C programming, calculus and analytic geometry, computer graphics, and…

  15. Network Coding for Function Computation

    ERIC Educational Resources Information Center

    Appuswamy, Rathinakumar

    2011-01-01

    In this dissertation, the following "network computing problem" is considered. Source nodes in a directed acyclic network generate independent messages and a single receiver node computes a target function f of the messages. The objective is to maximize the average number of times f can be computed per network usage, i.e., the "computing…

  16. 25 CFR 542.10 - What are the minimum internal control standards for keno?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... keno? (a) Computer applications. For any computer applications utilized, alternate documentation and/or... restricted transaction log or computer storage media concurrently with the generation of the ticket. (3) Keno personnel shall be precluded from having access to the restricted transaction log or computer storage media...

  17. 25 CFR 542.10 - What are the minimum internal control standards for keno?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... keno? (a) Computer applications. For any computer applications utilized, alternate documentation and/or... restricted transaction log or computer storage media concurrently with the generation of the ticket. (3) Keno personnel shall be precluded from having access to the restricted transaction log or computer storage media...

  18. Donald Norman's "The Invisible Computer" and Its Implications for Education.

    ERIC Educational Resources Information Center

    Frey, Joanne M.

    In "The Invisible Computer," Donald Norman illustrates his theory of invisible computers turning into information appliances with examples of past inventions like the radio, automobile, and phonograph. Second generation computers have evolved as far as technology will allow. At the present time, the technology itself is the driving force…

  19. Programmable personality interface for the dynamic infrared scene generator (IRSG2)

    NASA Astrophysics Data System (ADS)

    Buford, James A., Jr.; Mobley, Scott B.; Mayhall, Anthony J.; Braselton, William J.

    1998-07-01

    As scene generator platforms begin to rely specifically on commercial off-the-shelf (COTS) hardware and software components, high speed programmable personality interfaces (PPIs) are required for interfacing to Infrared (IR) flight computers/processors and complex IR projectors in hardware-in-the-loop (HWIL) simulation facilities. Recent technological advances and innovative applications of established technologies are beginning to allow development of cost effective PPIs to interface to COTS scene generators. At the U.S. Army Aviation and Missile Command (AMCOM) Missile Research, Development, and Engineering Center (MRDEC), researchers have developed such a PPI to reside between the AMCOM MRDEC IR Scene Generator (IRSG) and either a missile flight computer or the dynamic Laser Diode Array Projector (LDAP). AMCOM MRDEC has developed several PPIs for the first and second generation IRSGs (IRSG1 and IRSG2), which are based on Silicon Graphics Incorporated (SGI) Onyx and Onyx2 computers with Reality Engine 2 (RE2) and Infinite Reality (IR/IR2) graphics engines. This paper provides an overview of PPIs designed, integrated, tested, and verified at AMCOM MRDEC, specifically the IRSG2's PPI.

  20. Object tracking mask-based NLUT on GPUs for real-time generation of holographic videos of three-dimensional scenes.

    PubMed

    Kwon, M-W; Kim, S-C; Yoon, S-E; Ho, Y-S; Kim, E-S

    2015-02-09

    A new object tracking mask-based novel-look-up-table (OTM-NLUT) method is proposed and implemented on graphics-processing-units (GPUs) for real-time generation of holographic videos of three-dimensional (3-D) scenes. Since the proposed method is designed to be matched with software and memory structures of the GPU, the number of compute-unified-device-architecture (CUDA) kernel function calls and the computer-generated hologram (CGH) buffer size of the proposed method have been significantly reduced. It therefore results in a great increase of the computational speed of the proposed method and enables real-time generation of CGH patterns of 3-D scenes. Experimental results show that the proposed method can generate 31.1 frames of Fresnel CGH patterns with 1,920 × 1,080 pixels per second, on average, for three test 3-D video scenarios with 12,666 object points on three GPU boards of NVIDIA GTX TITAN, and confirm the feasibility of the proposed method in the practical application of electro-holographic 3-D displays.
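
    The OTM-NLUT algorithm itself is not given in the abstract; as a hedged baseline, the sketch below implements the plain point-source (Fresnel zone plate) accumulation that NLUT-type methods accelerate, for a tiny invented point cloud and made-up hologram parameters. Real-time GPU kernels, the object-tracking mask, and the look-up table are all omitted.

```python
# Hedged baseline sketch: accumulate real-valued Fresnel fringes of 3-D object points.
import numpy as np

def fresnel_cgh(points, nx=512, ny=512, pitch=8e-6, wavelength=532e-9):
    """Sum Fresnel zone-plate patterns of object points onto a hologram plane."""
    x = (np.arange(nx) - nx / 2) * pitch
    y = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(x, y)
    hologram = np.zeros((ny, nx))
    for (xp, yp, zp, amp) in points:            # zp: distance from hologram plane [m]
        r2 = (X - xp) ** 2 + (Y - yp) ** 2
        hologram += amp * np.cos(np.pi * r2 / (wavelength * zp))
    return hologram

pts = [(0.0, 0.0, 0.2, 1.0), (3e-4, -2e-4, 0.25, 0.8)]   # (x, y, z, amplitude), made up
H = fresnel_cgh(pts)
print(H.shape, H.min(), H.max())
```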

  1. Three-directional motion-compensation mask-based novel look-up table on graphics processing units for video-rate generation of digital holographic videos of three-dimensional scenes.

    PubMed

    Kwon, Min-Woo; Kim, Seung-Cheol; Kim, Eun-Soo

    2016-01-20

    A three-directional motion-compensation mask-based novel look-up table method is proposed and implemented on graphics processing units (GPUs) for video-rate generation of digital holographic videos of three-dimensional (3D) scenes. Since the proposed method is designed to be well matched with the software and memory structures of GPUs, the number of compute-unified-device-architecture kernel function calls can be significantly reduced. This results in a great increase of the computational speed of the proposed method, allowing video-rate generation of the computer-generated hologram (CGH) patterns of 3D scenes. Experimental results reveal that the proposed method can generate 39.8 frames of Fresnel CGH patterns with 1920×1080 pixels per second for the test 3D video scenario with 12,088 object points on dual GPU boards of NVIDIA GTX TITANs, and they confirm the feasibility of the proposed method in the practical application fields of electroholographic 3D displays.

  2. ALOG: A spreadsheet-based program for generating artificial logs

    Treesearch

    Matthew F. Winn; Randolph H. Wynne; Philip A. Araman

    2004-01-01

    Log sawing simulation computer programs can be valuable tools for training sawyers as well as for testing different sawing patterns. Most available simulation programs rely on databases from which to draw logs and can be very costly and time-consuming to develop. ALOG (Artificial LOg Generator) is a Microsoft Excel®-based computer program that was developed to...

  3. NNSA Administrator Addresses the Next Generation of Nuclear Security Professionals: Part 2

    ScienceCinema

    Thomas D'Agostino

    2017-12-09

    Administrator Thomas D'Agostino of the National Nuclear Security Administration addressed the next generation of nuclear security professionals during the opening session of today's 2009 Department of Energy (DOE) Computational Science Graduate Fellowship Annual Conference. Administrator D'Agostino discussed NNSA's role in implementing President Obama's nuclear security agenda and encouraged the computing science fellows to consider careers in nuclear security.

  4. Is the "Net Generation" Ready for Digital Citizenship? Perspectives from the IEA International Computer and Information Literacy Study 2013. Policy Brief No. 6

    ERIC Educational Resources Information Center

    Watkins, Ryan; Engel, Laura C.; Hastedt, Dirk

    2015-01-01

    The rise of digital information and communication technologies (ICT) has made the acquisition of computer and information literacy (CIL) a leading factor in creating an engaged, informed, and employable citizenry. However, are young people, often described as "digital natives" or the "net generation," developing the necessary…

  5. 75 FR 61252 - Proposed Information Collection (Create Payment Request for the VA Funding Fee Payment System (VA...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-04

    ... Payment Request for the VA Funding Fee Payment System (VA FFPS); A Computer Generated Funding Fee Receipt... Payment Request for the VA Funding Fee Payment System (VA FFPS); A Computer Generated Funding Fee Receipt... information through the Federal Docket Management System (FDMS) at http://www.Regulations.gov or to Nancy J...

  6. 75 FR 61859 - Proposed Information Collection (Create Payment Request for the VA Funding Fee Payment System (VA...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-06

    ... Payment Request for the VA Funding Fee Payment System (VA FFPS); A Computer Generated Funding Fee Receipt... Payment Request for the VA Funding Fee Payment System (VA FFPS); A Computer Generated Funding Fee Receipt... information through the Federal Docket Management System (FDMS) at http://www.Regulations.gov or to Nancy J...

  7. NNSA Administrator Addresses the Next Generation of Nuclear Security Professionals: Part 1

    ScienceCinema

    Thomas D'Agostino

    2017-12-09

    Administrator Thomas D'Agostino of the National Nuclear Security Administration addressed the next generation of nuclear security professionals during the opening session of today's 2009 Department of Energy (DOE) Computational Science Graduate Fellowship Annual Conference. Administrator D'Agostino discussed NNSA's role in implementing President Obama's nuclear security agenda and encouraged the computing science fellows to consider careers in nuclear security.

  8. 78 FR 59771 - Proposed Information Collection (Create Payment Request for the VA Funding Fee Payment System (VA...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-27

    ... Payment Request for the VA Funding Fee Payment System (VA FFPS); a Computer Generated Funding Fee Receipt.... Title: Create Payment Request for the VA Funding Fee Payment System (VA FFPS); A Computer Generated Funding Fee Receipt, VA Form 26-8986. OMB Control Number: 2900-0474. Type of Review: Revision of a...

  9. The Automation of Stochastization Algorithm with Use of SymPy Computer Algebra Library

    NASA Astrophysics Data System (ADS)

    Demidova, Anastasya; Gevorkyan, Migran; Kulyabov, Dmitry; Korolkova, Anna; Sevastianov, Leonid

    2018-02-01

    SymPy computer algebra library is used for automatic generation of ordinary and stochastic systems of differential equations from the schemes of kinetic interaction. Schemes of this type are used not only in chemical kinetics but also in biological, ecological and technical models. This paper describes the automatic generation algorithm with an emphasis on application details.
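
    The authors' implementation is not reproduced here; the sketch below is a minimal SymPy example of the general idea, turning an invented two-reaction kinetic scheme into its deterministic mass-action ODEs. The stochastic (SDE) part of the stochastization algorithm is not shown.

```python
# Hedged sketch: generate mass-action ODEs from a simple kinetic interaction scheme.
# Scheme (made up for illustration): X + Y -> 2Y with rate k1, Y -> 0 with rate k2.
import sympy as sp

X, Y, k1, k2 = sp.symbols('X Y k1 k2', positive=True)

# Each reaction: (net stoichiometric change per species, mass-action rate law)
reactions = [
    ({X: -1, Y: +1}, k1 * X * Y),   # X + Y -> 2Y
    ({Y: -1},        k2 * Y),       # Y -> 0
]

species = [X, Y]
odes = {s: sp.Integer(0) for s in species}
for change, rate in reactions:
    for s, nu in change.items():
        odes[s] += nu * rate        # accumulate nu_i * rate into dX_i/dt

for s in species:
    print(sp.Eq(sp.Symbol(f"d{s}/dt"), sp.simplify(odes[s])))
```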

  10. GIM code user's manual for the STAR-100 computer. [for generating numerical analogs of the conservation laws]

    NASA Technical Reports Server (NTRS)

    Spradley, L.; Pearson, M.

    1979-01-01

    The General Interpolants Method (GIM), a three dimensional, time dependent, hybrid procedure for generating numerical analogs of the conservation laws, is described. The Navier-Stokes equations written for an Eulerian system are considered. The conversion of the GIM code to the STAR-100 computer, and the implementation of 'GIM-ON-STAR' is discussed.

  11. Methodological Advances in Political Gaming: The One-Person Computer Interactive, Quasi-Rigid Rule Game.

    ERIC Educational Resources Information Center

    Shubik, Martin

    The main problem in computer gaming research is the initial decision of choosing the type of gaming method to be used. Free-form games lead to exciting open-ended confrontations that generate much information. However, they do not easily lend themselves to analysis because they generate far too much information and their results are seldom…

  12. Etch depth mapping of phase binary computer-generated holograms by means of specular spectroscopic scatterometry

    NASA Astrophysics Data System (ADS)

    Korolkov, Victor P.; Konchenko, Alexander S.; Cherkashin, Vadim V.; Mironnikov, Nikolay G.; Poleshchuk, Alexander G.

    2013-09-01

    Detailed analysis of the etch depth map of phase binary computer-generated holograms intended for testing aspheric optics is a very important task. In particular, diffractive Fizeau null lenses need to be carefully tested for uniformity of etch depth. We offer a simplified version of the specular spectroscopic scatterometry method. It is based on the spectral properties of binary phase multi-order gratings. The zero-order intensity is a periodic function of the illumination wavenumber. The grating groove depth can be calculated because it is inversely proportional to this period. Measurement in reflection allows one to increase the phase depth of the grooves by a factor of 2 and to measure shallow phase gratings more precisely. Measurement uncertainty is mainly defined by the following parameters: shifts of the spectral maxima that occur due to tilted groove sidewalls, uncertainty of the light incidence angle measurement, and spectrophotometer wavelength error. It is theoretically and experimentally shown that the method we describe can ensure 1% error. However, fiber spectrometers are more convenient for scanning measurements of large area computer-generated holograms. Our experimental system for characterization of binary computer-generated holograms was developed using a fiber spectrometer.
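
    As a hedged illustration of the measurement principle only (the paper's corrections for tilted sidewalls, incidence angle, and spectrometer calibration are ignored), the sketch below simulates the zero-order reflectance of a binary phase grating at normal incidence, which is periodic in wavenumber with period 1/(2d), and recovers the groove depth d from that period. The depth, duty cycle, and spectral range are made-up values, with the depth chosen deep enough to show several spectral periods in a 400-800 nm window.

```python
# Hedged sketch: recover binary-grating groove depth from the wavenumber period
# of the zero-order reflected intensity (normal incidence, ideal vertical sidewalls).
import numpy as np

d_true = 1.5e-6          # groove depth [m] (made-up value)
f = 0.5                  # duty cycle
wavelengths = np.linspace(400e-9, 800e-9, 2000)
nu = 1.0 / wavelengths   # wavenumber [1/m]

# Zero-order reflected intensity; round-trip phase in reflection is 4*pi*d*nu
I0 = f**2 + (1 - f)**2 + 2 * f * (1 - f) * np.cos(4 * np.pi * d_true * nu)

# Estimate the wavenumber period from successive maxima, then invert for depth
order = np.argsort(nu)
nu_sorted, I_sorted = nu[order], I0[order]
peaks = np.where((I_sorted[1:-1] > I_sorted[:-2]) & (I_sorted[1:-1] > I_sorted[2:]))[0] + 1
delta_nu = np.mean(np.diff(nu_sorted[peaks]))
print("estimated depth [nm]:", 1.0 / (2.0 * delta_nu) * 1e9)   # ~1500 nm
```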

  13. Generation, estimation, utilization, availability and compatibility aspects of geodetic and meteorological data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luetzow, H.B.v.

    1983-08-01

    Following an introduction, the paper discusses in section 2 the collection or generation of final geodetic data from conventional surveys, satellite observations, satellite altimetry, the Global Positioning System, and moving base gravity gradiometers. Section 3 covers data utilization and accuracy aspects including gravity programmed inertial positioning and subterraneous mass detection. Section 4 addresses the usefulness and limitation of the collocation method of physical geodesy. Section 5 is concerned with the computation of classical climatological data. In section 6, meteorological data assimilation is considered. Section 7 deals with correlated aspects of initial data generation with emphasis on initial wind field determination, parameterized and classical hydrostatic prediction models, non-hydrostatic prediction, computational networks, and computer capacity. The paper concludes that geodetic and meteorological data are expected to become increasingly more diversified and voluminous both regionally and globally, that its general availability will be more or less restricted for some time to come, that its quality and quantity are subject to change, and that meteorological data generation, accuracy and density have to be considered in conjunction with advanced as well as cost-effective numerical weather prediction models and associated computational efforts.

  14. First-Principles Framework to Compute Sum-Frequency Generation Vibrational Spectra of Semiconductors and Insulators.

    PubMed

    Wan, Quan; Galli, Giulia

    2015-12-11

    We present a first-principles framework to compute sum-frequency generation (SFG) vibrational spectra of semiconductors and insulators. The method is based on density functional theory and the use of maximally localized Wannier functions to compute the response to electric fields, and it includes the effect of electric field gradients at surfaces. In addition, it includes quadrupole contributions to SFG spectra, thus enabling the verification of the dipole approximation, whose validity determines the surface specificity of SFG spectroscopy. We compute the SFG spectra of ice Ih basal surfaces and identify which spectral components are affected by bulk contributions. Our results are in good agreement with experiments at low temperature.

  15. A Zonal Approach for Prediction of Jet Noise

    NASA Technical Reports Server (NTRS)

    Shih, S. H.; Hixon, D. R.; Mankbadi, Reda R.

    1995-01-01

    A zonal approach for direct computation of sound generation and propagation from a supersonic jet is investigated. The present work splits the computational domain into a nonlinear, acoustic-source regime and a linear acoustic wave propagation regime. In the nonlinear regime, the unsteady flow is governed by the large-scale equations, which are the filtered compressible Navier-Stokes equations. In the linear acoustic regime, the sound wave propagation is described by the linearized Euler equations. Computational results are presented for a supersonic jet at M = 2.1. It is demonstrated that no spurious modes are generated in the matching region and the computational expense is reduced substantially as opposed to fully large-scale simulation.

  16. Reactor transient control in support of PFR/TREAT TUCOP experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burrows, D.R.; Larsen, G.R.; Harrison, L.J.

    1984-01-01

    Unique energy deposition and experiment control requirements posed by the PFR/TREAT series of transient undercooling/overpower (TUCOP) experiments resulted in equally unique TREAT reactor operations. New reactor control computer algorithms were written and used with the TREAT reactor control computer system to perform such functions as early power burst generation (based on test train flow conditions), burst generation produced by a step insertion of reactivity following a controlled power ramp, and shutdown (SCRAM) initiators based on both test train conditions and energy deposition. Specialized hardware was constructed to simulate test train inputs to the control computer system so that computer algorithms could be tested in real time without irradiating the experiment.

  17. Assessment of gene order computing methods for Alzheimer's disease

    PubMed Central

    2013-01-01

    Background: Computational genomics of Alzheimer disease (AD), the most common form of senile dementia, is a nascent field in AD research. The field includes AD gene clustering by computing gene order, which generates higher quality gene clustering patterns than most other clustering methods. However, only a few gene order computing methods are available, such as the Genetic Algorithm (GA) and Ant Colony Optimization (ACO). Further, their performance in gene order computation using AD microarray data is not known. We thus set forth to evaluate the performance of current gene order computing methods with different distance formulas, and to identify additional features associated with gene order computation. Methods: Using different distance formulas (Pearson distance, Euclidean distance, and the squared Euclidean distance) and other conditions, gene orders were calculated by the ACO and GA (including standard GA and improved GA) methods, respectively. The qualities of the gene orders were compared, and new features of the calculated gene orders were identified. Results: Compared to the GA methods tested in this study, ACO fits the AD microarray data best when calculating gene order. In addition, the following features were revealed: different distance formulas generated gene orders of different quality, and the commonly used Pearson distance was not the best distance formula when used with either the GA or ACO methods for AD microarray data. Conclusion: Compared with the Pearson distance and Euclidean distance, the squared Euclidean distance generated the best quality gene order computed by the GA and ACO methods. PMID:23369541
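
    The GA and ACO gene-order computations themselves are not reproduced; the sketch below only spells out the three inter-gene distance formulas compared in the study, applied to two invented expression vectors.

```python
# Minimal sketch of the compared distance formulas (expression values are made up).
import numpy as np

def pearson_distance(a, b):
    """1 minus the Pearson correlation coefficient."""
    return 1.0 - np.corrcoef(a, b)[0, 1]

def euclidean_distance(a, b):
    return float(np.linalg.norm(a - b))

def squared_euclidean_distance(a, b):
    return float(np.sum((a - b) ** 2))

g1 = np.array([2.1, 3.4, 1.0, 0.7, 5.2])
g2 = np.array([1.9, 3.0, 1.2, 0.9, 4.8])
print(pearson_distance(g1, g2), euclidean_distance(g1, g2), squared_euclidean_distance(g1, g2))
```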

  18. Use of parallel computing in mass processing of laser data

    NASA Astrophysics Data System (ADS)

    Będkowski, J.; Bratuś, R.; Prochaska, M.; Rzonca, A.

    2015-12-01

    The first part of the paper includes a description of the rules used to generate the algorithm needed for the purpose of parallel computing and also discusses the origins of the idea of research on the use of graphics processors in large scale processing of laser scanning data. The next part of the paper includes the results of an efficiency assessment performed for an array of different processing options, all of which were substantially accelerated with parallel computing. The processing options were divided into the generation of orthophotos using point clouds, coloring of point clouds, transformations, and the generation of a regular grid, as well as advanced processes such as the detection of planes and edges, point cloud classification, and the analysis of data for the purpose of quality control. Most algorithms had to be formulated from scratch in the context of the requirements of parallel computing. A few of the algorithms were based on existing technology developed by the Dephos Software Company and then adapted to parallel computing in the course of this research study. Processing time was determined for each process employed for a typical quantity of data processed, which helped confirm the high efficiency of the solutions proposed and the applicability of parallel computing to the processing of laser scanning data. The high efficiency of parallel computing yields new opportunities in the creation and organization of processing methods for laser scanning data.

  19. Several examples where turbulence models fail in inlet flow field analysis

    NASA Technical Reports Server (NTRS)

    Anderson, Bernhard H.

    1993-01-01

    Computational uncertainties in turbulence modeling for three dimensional inlet flow fields include flows approaching separation, strength of secondary flow field, three dimensional flow predictions of vortex liftoff, and influence of vortex-boundary layer interactions; computational uncertainties in vortex generator modeling include representation of generator vorticity field and the relationship between generator and vorticity field. The objectives of the inlet flow field studies presented in this document are to advance the understanding, prediction, and control of intake distortion and to study the basic interactions that influence this design problem.

  20. GenIce: Hydrogen-Disordered Ice Generator.

    PubMed

    Matsumoto, Masakazu; Yagasaki, Takuma; Tanaka, Hideki

    2018-01-05

    GenIce is an efficient and user-friendly tool to generate hydrogen-disordered ice structures. It makes ice and clathrate hydrate structures in various file formats. More than 100 kinds of structures are preset. Users can install their own crystal structures, guest molecules, and file formats as plugins. The algorithm certifies that the generated structures are completely randomized hydrogen-disordered networks obeying the ice rule with zero net polarization. © 2017 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc.

  1. The generative power of weighted one-sided and regular sticker systems

    NASA Astrophysics Data System (ADS)

    Siang, Gan Yee; Heng, Fong Wan; Sarmin, Nor Haniza; Turaev, Sherzod

    2014-06-01

    Sticker systems were introduced in 1998 as a DNA computing model that uses the recombination behavior of DNA molecules. The Watson-Crick complementarity principle of DNA molecules is used abstractly in sticker systems to perform computation. In this paper, the generative power of weighted one-sided sticker systems and weighted regular sticker systems is investigated. Moreover, the relationship of the families of languages generated by these two variants of sticker systems to the Chomsky hierarchy is also presented.

  2. Automatic code generation in SPARK: Applications of computer algebra and compiler-compilers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nataf, J.M.; Winkelmann, F.

    We show how computer algebra and compiler-compilers are used for automatic code generation in the Simulation Problem Analysis and Research Kernel (SPARK), an object oriented environment for modeling complex physical systems that can be described by differential-algebraic equations. After a brief overview of SPARK, we describe the use of computer algebra in SPARK's symbolic interface, which generates solution code for equations that are entered in symbolic form. We also describe how the Lex/Yacc compiler-compiler is used to achieve important extensions to the SPARK simulation language, including parametrized macro objects and steady-state resetting of a dynamic simulation. The application of these methods to solving the partial differential equations for two-dimensional heat flow is illustrated.

  3. Automatic code generation in SPARK: Applications of computer algebra and compiler-compilers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nataf, J.M.; Winkelmann, F.

    We show how computer algebra and compiler-compilers are used for automatic code generation in the Simulation Problem Analysis and Research Kernel (SPARK), an object oriented environment for modeling complex physical systems that can be described by differential-algebraic equations. After a brief overview of SPARK, we describe the use of computer algebra in SPARK's symbolic interface, which generates solution code for equations that are entered in symbolic form. We also describe how the Lex/Yacc compiler-compiler is used to achieve important extensions to the SPARK simulation language, including parametrized macro objects and steady-state resetting of a dynamic simulation. The application of these methods to solving the partial differential equations for two-dimensional heat flow is illustrated.

  4. Evaluation of computer-generated guidelines for companions of paediatric patients undergoing chemotherapy.

    PubMed

    Lopes, Vagner José; Shmeil, Marcos Augusto Hochuli

    2017-04-27

    To compare computer-generated guidelines with and without the use of a Clinical Decision Support System - Oncology Care and Healthcare for Chemotherapy Patients, for the caregivers of children undergoing chemotherapy. This is a descriptive, evaluative, and quantitative study conducted at a paediatric hospital in Curitiba, Paraná, Brazil, from December 2015 to January 2016. The sample consisted of 58 participants divided into two groups: Group 1, without the aid of the software, and Group 2, with the aid of the software. The data were analysed using the Mann-Whitney U test. The comparison revealed a statistically significant difference (p<0.05), with a higher average concordance in Group 2 than in Group 1. Computer-generated guidelines are a valuable qualitative support tool for nurses.

  5. Symmetric and asymmetric hybrid cryptosystem based on compressive sensing and computer generated holography

    NASA Astrophysics Data System (ADS)

    Ma, Lihong; Jin, Weimin

    2018-01-01

    A novel symmetric and asymmetric hybrid optical cryptosystem is proposed based on compressive sensing combined with computer generated holography. In this method there are six encryption keys, among which the two decryption phase masks are different from the two random phase masks used in the encryption process. Therefore, the encryption system has the feature of both symmetric and asymmetric cryptography. On the other hand, because computer generated holography can flexibly digitize the encrypted information, compressive sensing can significantly reduce the data volume, and the final encrypted image is a real-valued function after phase truncation, the method favors the storage and transmission of the encrypted data. The experimental results demonstrate that the proposed encryption scheme boosts security and has high robustness against noise and occlusion attacks.

  6. Aircraft geometry verification with enhanced computer generated displays

    NASA Technical Reports Server (NTRS)

    Cozzolongo, J. V.

    1982-01-01

    A method for visual verification of aerodynamic geometries using computer generated, color shaded images is described. The mathematical models representing aircraft geometries are created for use in theoretical aerodynamic analyses and in computer aided manufacturing. The aerodynamic shapes are defined using parametric bi-cubic splined patches. This mathematical representation is then used as input to an algorithm that generates a color shaded image of the geometry. A discussion of the techniques used in the mathematical representation of the geometry and in the rendering of the color shaded display is presented. The results include examples of color shaded displays, which are contrasted with wire frame type displays. The examples also show the use of mapped surface pressures in terms of color shaded images of V/STOL fighter/attack aircraft and advanced turboprop aircraft.

  7. CFD Evaluation of a 3rd Generation LDI Combustor

    NASA Technical Reports Server (NTRS)

    Ajmani, Kumud; Mongia, Hukam; Lee, Phil

    2017-01-01

    An effort was undertaken to perform CFD analysis of fluid flow in Lean-Direct Injection (LDI) combustors with axial swirl-venturi elements for next-generation LDI-3 combustor design. The National Combustion Code (NCC) was used to perform non-reacting and two-phase reacting flow computations for a nineteen-element injector array arranged in a three-module, 7-5-7 element configuration. All computations were performed with a consistent approach of mesh-optimization, spray-modeling, ignition and kinetics-modeling with the NCC. Computational predictions of the aerodynamics of the injector were used to arrive at an optimal injector design that meets effective area and fuel-air mixing criteria. LDI-3 emissions (EINOx, EICO and UHC) were compared with the previous generation LDI-2 combustor experimental data at representative engine cycle conditions.

  8. Description of the F-16XL Geometry and Computational Grids Used in CAWAPI

    NASA Technical Reports Server (NTRS)

    Boelens, O. J.; Badcock, K. J.; Gortz, S.; Morton, S.; Fritz, W.; Karman, S. L., Jr.; Michal, T.; Lamar, J. E.

    2009-01-01

    The objective of the Cranked-Arrow Wing Aerodynamics Project International (CAWAPI) was to allow a comprehensive validation of Computational Fluid Dynamics methods against the CAWAP flight database. A major part of this work involved the generation of high-quality computational grids. Prior to the grid generation, an IGES file containing the air-tight geometry of the F-16XL aircraft was generated by a cooperation of the CAWAPI partners. Based on this geometry description both structured and unstructured grids have been generated. The baseline structured (multi-block) grid (and a family of derived grids) has been generated by the National Aerospace Laboratory NLR. Although the algorithms used by NLR had become available just before CAWAPI and thus only a limited experience with their application to such a complex configuration had been gained, a grid of good quality was generated well within four weeks. This time compared favourably with that required to produce the unstructured grids in CAWAPI. The baseline all-tetrahedral and hybrid unstructured grids have been generated at NASA Langley Research Center and the USAFA, respectively. To provide more geometrical resolution, trimmed unstructured grids have been generated at EADS-MAS, the UTSimCenter, Boeing Phantom Works and KTH/FOI. All grids generated within the framework of CAWAPI will be discussed in the article. Both the results obtained on the structured grids and those obtained on the unstructured grids showed a significant improvement in agreement with flight test data in comparison with those obtained on the structured multi-block grid used during CAWAP.

  9. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing.

    PubMed

    Xu, Jason; Minin, Vladimir N

    2015-07-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes.

  10. Low cost, high performance processing of single particle cryo-electron microscopy data in the cloud.

    PubMed

    Cianfrocco, Michael A; Leschziner, Andres E

    2015-05-08

    The advent of a new generation of electron microscopes and direct electron detectors has realized the potential of single particle cryo-electron microscopy (cryo-EM) as a technique to generate high-resolution structures. Calculating these structures requires high performance computing clusters, a resource that may be limiting to many likely cryo-EM users. To address this limitation and facilitate the spread of cryo-EM, we developed a publicly available 'off-the-shelf' computing environment on Amazon's elastic cloud computing infrastructure. This environment provides users with single particle cryo-EM software packages and the ability to create computing clusters with 16-480+ CPUs. We tested our computing environment using a publicly available 80S yeast ribosome dataset and estimate that laboratories could determine high-resolution cryo-EM structures for $50 to $1500 per structure within a timeframe comparable to local clusters. Our analysis shows that Amazon's cloud computing environment may offer a viable computing environment for cryo-EM.

  11. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing

    PubMed Central

    Xu, Jason; Minin, Vladimir N.

    2016-01-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes. PMID:26949377

  12. Automated land-use mapping from spacecraft data. [Oakland County, Michigan

    NASA Technical Reports Server (NTRS)

    Chase, P. E. (Principal Investigator); Rogers, R. H.; Reed, L. E.

    1974-01-01

    The author has identified the following significant results. In response to the need for a faster, more economical means of producing land use maps, this study evaluated the suitability of using ERTS-1 computer compatible tape (CCT) data as a basis for automatic mapping. Significant findings are: (1) automatic classification accuracy greater than 90% is achieved on categories of deep and shallow water, tended grass, rangeland, extractive (bare earth), urban, forest land, and nonforested wet lands; (2) computer-generated printouts by target class provide a quantitative measure of land use; and (3) the generation of map overlays showing land use from ERTS-1 CCTs offers a significant breakthrough in the rate at which land use maps are generated. Rather than uncorrected classified imagery or computer line printer outputs, the processing results in geometrically-corrected computer-driven pen drawing of land categories, drawn on a transparent material at a scale specified by the operator. These map overlays are economically produced and provide an efficient means of rapidly updating maps showing land use.

  13. The development of computer networks: First results from a microeconomic model

    NASA Astrophysics Data System (ADS)

    Maier, Gunther; Kaufmann, Alexander

    Computer networks like the Internet are gaining importance in social and economic life. The accelerating pace of the adoption of network technologies for business purposes is a rather recent phenomenon. Many applications are still in the early, sometimes even experimental, phase. Nevertheless, it seems certain that networks will change the socioeconomic structures we know today. This is the background for our special interest in the development of networks, in the role of spatial factors influencing the formation of networks, in the consequences of networks for spatial structures, and in the role of externalities. This paper discusses a simple economic model - based on a microeconomic calculus - that incorporates the main factors that generate the growth of computer networks. The paper provides analytic results about the generation of computer networks and discusses (1) under what conditions economic factors will initiate the process of network formation, (2) the relationship between individual and social evaluation, and (3) the efficiency of a network that is generated based on economic mechanisms.

  14. Computer grading of examinations

    NASA Technical Reports Server (NTRS)

    Frigerio, N. A.

    1969-01-01

    A method, using IBM cards and computer processing, automates examination grading and recording and permits use of computational problems. The student generates his own answers, and the instructor has much greater freedom in writing questions than is possible with multiple choice examinations.

  15. Energy expenditure in adolescents playing new generation computer games.

    PubMed

    Graves, Lee; Stratton, Gareth; Ridgers, N D; Cable, N T

    2008-07-01

    Objective: To compare the energy expenditure of adolescents when playing sedentary and new generation active computer games. Design: Cross sectional comparison of four computer games. Setting: Research laboratories. Participants: Six boys and five girls aged 13-15 years. Intervention: Participants were fitted with a monitoring device validated to predict energy expenditure. They played four computer games for 15 minutes each. One of the games was sedentary (XBOX 360) and the other three were active (Wii Sports). Main outcome measure: Predicted energy expenditure, compared using repeated measures analysis of variance. Results: Mean (standard deviation) predicted energy expenditure when playing Wii Sports bowling (190.6 (22.2) kJ/kg/min), tennis (202.5 (31.5) kJ/kg/min), and boxing (198.1 (33.9) kJ/kg/min) was significantly greater than when playing sedentary games (125.5 (13.7) kJ/kg/min) (P<0.001). Predicted energy expenditure was at least 65.1 (95% confidence interval 47.3 to 82.9) kJ/kg/min greater when playing active rather than sedentary games. Conclusions: Playing new generation active computer games uses significantly more energy than playing sedentary computer games but not as much energy as playing the sport itself. The energy used when playing active Wii Sports games was not of high enough intensity to contribute towards the recommended daily amount of exercise in children.

  16. TBGG- INTERACTIVE ALGEBRAIC GRID GENERATION

    NASA Technical Reports Server (NTRS)

    Smith, R. E.

    1994-01-01

    TBGG, Two-Boundary Grid Generation, applies an interactive algebraic grid generation technique in two dimensions. The program incorporates mathematical equations that relate the computational domain to the physical domain. TBGG has application to a variety of problems using finite difference techniques, such as computational fluid dynamics. Examples include the creation of a C-type grid about an airfoil and a nozzle configuration in which no left or right boundaries are specified. The underlying two-boundary technique of grid generation is based on Hermite cubic interpolation between two fixed, nonintersecting boundaries. The boundaries are defined by two ordered sets of points, referred to as the top and bottom. Left and right side boundaries may also be specified, and call upon linear blending functions to conform interior interpolation to the side boundaries. Spacing between physical grid coordinates is determined as a function of boundary data and uniformly spaced computational coordinates. Control functions relating computational coordinates to parametric intermediate variables that affect the distance between grid points are embedded in the interpolation formulas. A versatile control function technique with smooth cubic spline functions is also presented. The TBGG program is written in FORTRAN 77. It works best in an interactive graphics environment where computational displays and user responses are quickly exchanged. The program has been implemented on a CDC Cyber 170 series computer using NOS 2.4 operating system, with a central memory requirement of 151,700 (octal) 60 bit words. TBGG requires a Tektronix 4015 terminal and the DI-3000 Graphics Library of Precision Visuals, Inc. TBGG was developed in 1986.
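
    TBGG itself is FORTRAN 77 and interactive; the sketch below is a hedged, non-interactive Python rendition of the underlying two-boundary idea: Hermite cubic interpolation in the eta direction between fixed bottom and top boundary curves, with the eta-derivative (tangent) vectors playing the role of the control that shapes grid lines near the boundaries. Boundary shapes and tangent magnitudes are made up.

```python
# Hedged sketch of two-boundary grid generation via Hermite cubic blending in eta.
import numpy as np

def two_boundary_grid(bottom, top, t_bottom, t_top, n_eta=21):
    """bottom/top: (n_xi, 2) boundary points; t_*: (n_xi, 2) eta-derivative vectors."""
    eta = np.linspace(0.0, 1.0, n_eta)[:, None, None]
    h00 = 2 * eta**3 - 3 * eta**2 + 1          # Hermite basis functions
    h01 = -2 * eta**3 + 3 * eta**2
    h10 = eta**3 - 2 * eta**2 + eta
    h11 = eta**3 - eta**2
    return h00 * bottom + h01 * top + h10 * t_bottom + h11 * t_top  # (n_eta, n_xi, 2)

xi = np.linspace(0.0, 1.0, 41)
bottom = np.column_stack([xi, 0.1 * np.sin(np.pi * xi)])   # curved lower boundary
top    = np.column_stack([xi, np.ones_like(xi)])           # flat upper boundary
t_b    = np.tile([0.0, 0.5], (xi.size, 1))                 # leave the bottom vertically
t_t    = np.tile([0.0, 0.5], (xi.size, 1))                 # arrive at the top vertically

grid = two_boundary_grid(bottom, top, t_b, t_t)
print(grid.shape)   # (21, 41, 2): eta index, xi index, (x, y)
```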

  17. Computer Simulated Visual and Tactile Feedback as an Aid to Manipulator and Vehicle Control,

    DTIC Science & Technology

    1981-05-08

    Artificial Intelligence Versus Supervisory Control; Computer Generation of Operator Feedback ...operator. The use of computers to aid human operators can be divided into two categories: artificial ...operator. Artificial intelligence (A.I.) attempts to give the computer maximum intelligence and to replace all operator functions by the computer

  18. High-End Computing for Incompressible Flows

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Kiris, Cetin

    2001-01-01

    The objective of the First MIT Conference on Computational Fluid and Solid Mechanics (June 12-14, 2001) is to bring together industry and academia (and government) to nurture the next generation in computational mechanics. The objective of the current talk, 'High-End Computing for Incompressible Flows', is to discuss some of the current issues in large scale computing for mission-oriented tasks.

  19. Is There Computer Graphics after Multimedia?

    ERIC Educational Resources Information Center

    Booth, Kellogg S.

    Computer graphics has been driven by the desire to generate real-time imagery subject to constraints imposed by the human visual system. The future of computer graphics, when off-the-shelf systems have full multimedia capability and when standard computing engines render imagery faster than real-time, remains to be seen. A dedicated pipeline for…

  20. Changing a Generation's Way of Thinking: Teaching Computational Thinking through Programming

    ERIC Educational Resources Information Center

    Buitrago Flórez, Francisco; Casallas, Rubby; Hernández, Marcela; Reyes, Alejandro; Restrepo, Silvia; Danies, Giovanna

    2017-01-01

    Computational thinking (CT) uses concepts that are essential to computing and information science to solve problems, design and evaluate complex systems, and understand human reasoning and behavior. This way of thinking has important implications in computer sciences as well as in almost every other field. Therefore, we contend that CT should be…

  1. Computer-Based Tutoring of Visual Concepts: From Novice to Experts.

    ERIC Educational Resources Information Center

    Sharples, Mike

    1991-01-01

    Description of ways in which computers might be used to teach visual concepts discusses hypermedia systems; describes computer-generated tutorials; explains the use of computers to create learning aids such as concept maps, feature spaces, and structural models; and gives examples of visual concept teaching in medical education. (10 references)…

  2. 13 CFR 120.194 - Use of computer forms.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 13 Business Credit and Assistance 1 2014-01-01 2014-01-01 false Use of computer forms. 120.194... Applying to All Business Loans Computerized Sba Forms § 120.194 Use of computer forms. Any Applicant or Participant may use computer generated SBA application forms, closing forms, and other forms designated by SBA...

  3. 13 CFR 120.194 - Use of computer forms.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 13 Business Credit and Assistance 1 2012-01-01 2012-01-01 false Use of computer forms. 120.194... Applying to All Business Loans Computerized Sba Forms § 120.194 Use of computer forms. Any Applicant or Participant may use computer generated SBA application forms, closing forms, and other forms designated by SBA...

  4. 13 CFR 120.194 - Use of computer forms.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 13 Business Credit and Assistance 1 2013-01-01 2013-01-01 false Use of computer forms. 120.194... Applying to All Business Loans Computerized Sba Forms § 120.194 Use of computer forms. Any Applicant or Participant may use computer generated SBA application forms, closing forms, and other forms designated by SBA...

  5. Parallel computational fluid dynamics '91; Conference Proceedings, Stuttgart, Germany, Jun. 10-12, 1991

    NASA Technical Reports Server (NTRS)

    Reinsch, K. G. (Editor); Schmidt, W. (Editor); Ecer, A. (Editor); Haeuser, Jochem (Editor); Periaux, J. (Editor)

    1992-01-01

    A conference on parallel computational fluid dynamics was held and produced the related papers. Topics discussed in these papers include: parallel implicit and explicit solvers for compressible flow, parallel computational techniques for Euler and Navier-Stokes equations, grid generation techniques for parallel computers, and aerodynamic simulation on massively parallel systems.

  6. Data Science and Optimal Learning for Material Discovery and Design

    Science.gov Websites

    Advances in computation and experimental techniques are generating vast arrays of data, often without a clear link between experimental and computational data, the design of new materials, and computational models. This meeting addresses the linking of computational and experimental data and the analysis of data from probes such as light sources, as well as other topics.

  7. 13 CFR 120.194 - Use of computer forms.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 13 Business Credit and Assistance 1 2011-01-01 2011-01-01 false Use of computer forms. 120.194... Applying to All Business Loans Computerized Sba Forms § 120.194 Use of computer forms. Any Applicant or Participant may use computer generated SBA application forms, closing forms, and other forms designated by SBA...

  8. 13 CFR 120.194 - Use of computer forms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Use of computer forms. 120.194... Applying to All Business Loans Computerized Sba Forms § 120.194 Use of computer forms. Any Applicant or Participant may use computer generated SBA application forms, closing forms, and other forms designated by SBA...

  9. Computing exact bundle compliance control charts via probability generating functions.

    PubMed

    Chen, Binchao; Matis, Timothy; Benneyan, James

    2016-06-01

    Compliance with evidence-based practices, individually and in 'bundles', remains an important focus of healthcare quality improvement for many clinical conditions. The exact probability distribution of composite bundle compliance measures used to develop corresponding control charts and other statistical tests is based on a fairly large convolution whose direct calculation can be computationally prohibitive. Various series expansions and other approximation approaches have been proposed, each with computational and accuracy tradeoffs, especially in the tails. This same probability distribution also arises in other important healthcare applications, such as for risk-adjusted outcomes and bed demand prediction, with the same computational difficulties. As an alternative, we use probability generating functions to rapidly obtain exact results and illustrate the improved accuracy and detection over other methods. Numerical testing across a wide range of applications demonstrates the computational efficiency and accuracy of this approach.
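
    The authors' implementation is not given in the abstract; the sketch below illustrates the probability-generating-function idea in its simplest form: for independent items with possibly different compliance probabilities p_i, multiplying the per-item PGFs (1 - p_i) + p_i z by polynomial convolution yields the exact (Poisson-binomial) distribution of the number of compliant items, from which control limits can be read off. The per-item probabilities are invented.

```python
# Hedged sketch: exact distribution of the bundle compliance count via PGF products.
import numpy as np

def exact_compliance_pmf(probs):
    """Exact PMF of the number of compliant items (a Poisson-binomial distribution)."""
    pmf = np.array([1.0])                       # PGF coefficients of the constant polynomial 1
    for p in probs:
        pmf = np.convolve(pmf, [1.0 - p, p])    # multiply by the item's PGF (1 - p) + p*z
    return pmf                                  # pmf[k] = P(exactly k items compliant)

p_items = [0.95, 0.90, 0.85, 0.99]              # made-up per-item compliance rates
pmf = exact_compliance_pmf(p_items)
print("P(all compliant)       =", pmf[-1])
print("P(at most 2 compliant) =", pmf[:3].sum())
```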

  10. Material identification based upon energy-dependent attenuation of neutrons

    DOEpatents

    Marleau, Peter

    2015-10-06

    Various technologies pertaining to identifying a material in a sample and imaging the sample are described herein. The material is identified by computing energy-dependent attenuation of neutrons that is caused by presence of the sample in travel paths of the neutrons. A mono-energetic neutron generator emits the neutron, which is downscattered in energy by a first detector unit. The neutron exits the first detector unit and is detected by a second detector unit subsequent to passing through the sample. Energy-dependent attenuation of neutrons passing through the sample is computed based upon a computed energy of the neutron, wherein such energy can be computed based upon 1) known positions of the neutron generator, the first detector unit, and the second detector unit; or 2) computed time of flight of neutrons between the first detector unit and the second detector unit.
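
    As a small aside on the time-of-flight variant mentioned above, the neutron energy follows from the flight distance between the two detector units and the measured time of flight. The sketch below is a generic non-relativistic back-of-the-envelope calculation, not the patented reconstruction; the 1 m path and 50 ns flight time are made-up numbers.

      M_N = 1.674927e-27        # neutron rest mass, kg
      J_PER_MEV = 1.602177e-13  # joules per MeV

      def neutron_energy_mev(distance_m, tof_s):
          """Non-relativistic kinetic energy (MeV) from flight distance and time of flight."""
          v = distance_m / tof_s
          return 0.5 * M_N * v ** 2 / J_PER_MEV

      # Hypothetical example: 1.0 m between detector units, 50 ns time of flight (~2 MeV)
      print(round(neutron_energy_mev(1.0, 50e-9), 2))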

  11. Computer generated maps from digital satellite data - A case study in Florida

    NASA Technical Reports Server (NTRS)

    Arvanitis, L. G.; Reich, R. M.; Newburne, R.

    1981-01-01

    Ground cover maps are important tools to a wide array of users. Over the past three decades, much progress has been made in supplementing planimetric and topographic maps with ground cover details obtained from aerial photographs. The present investigation evaluates the feasibility of using computer maps of ground cover from satellite input tapes. Attention is given to the selection of test sites, a satellite data processing system, a multispectral image analyzer, general purpose computer-generated maps, the preliminary evaluation of computer maps, a test for areal correspondence, the preparation of overlays and acreage estimation of land cover types on the Landsat computer maps. There is every indication to suggest that digital multispectral image processing systems based on Landsat input data will play an increasingly important role in pattern recognition and mapping land cover in the years to come.

  12. HyperForest: A high performance multi-processor architecture for real-time intelligent systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, P. Jr.; Rebeil, J.P.; Pollard, H.

    1997-04-01

    Intelligent Systems are characterized by the intensive use of computer power. The computer revolution of the last few years is what has made possible the development of the first generation of Intelligent Systems. Software for second-generation Intelligent Systems will be more complex and will require more powerful computing engines in order to meet real-time constraints imposed by new robots, sensors, and applications. A multiprocessor architecture was developed that merges the advantages of message-passing and shared-memory structures: expandability and real-time compliance. The HyperForest architecture will provide an expandable real-time computing platform for computationally intensive Intelligent Systems and open the doors for the application of these systems to more complex tasks in environmental restoration and cleanup projects, flexible manufacturing systems, and DOE's own production and disassembly activities.

  13. Depth compensating calculation method of computer-generated holograms using symmetry and similarity of zone plates

    NASA Astrophysics Data System (ADS)

    Wei, Hui; Gong, Guanghong; Li, Ni

    2017-10-01

    The computer-generated hologram (CGH) is a promising 3D display technology, but it is challenged by a heavy computation load and vast memory requirements. To solve these problems, a depth compensating CGH calculation method based on the symmetry and similarity of zone plates is proposed and implemented on a graphics processing unit (GPU). An improved LUT method is put forward to compute the distances between object points and hologram pixels in the XY direction. The concept of a depth compensating factor is defined and used to calculate the holograms of points at different depth positions, in place of layer-based methods. The proposed method is suitable for arbitrarily sampled objects and offers lower memory usage and higher computational efficiency than other CGH methods. The effectiveness of the proposed method is validated by numerical and optical experiments.
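
    For orientation, the sketch below shows the generic point-source CGH accumulation that methods of this kind accelerate: a transverse (XY) distance table is precomputed once and reused for every object point. It is a minimal Fresnel-approximation illustration in Python with assumed wavelength, pixel pitch and object points; the paper's depth compensating factor and GPU implementation are not reproduced.

      import numpy as np

      wavelength = 532e-9          # m (assumed)
      pitch = 8e-6                 # hologram pixel pitch, m (assumed)
      N = 512                      # hologram resolution (N x N)
      k = 2 * np.pi / wavelength

      # Precompute transverse coordinates and squared distances once (the "LUT")
      x = (np.arange(N) - N / 2) * pitch
      X, Y = np.meshgrid(x, x)
      R2_LUT = X ** 2 + Y ** 2

      def point_cloud_hologram(points):
          """points: iterable of (x0, y0, z0, amplitude) object points in meters."""
          field = np.zeros((N, N), dtype=complex)
          for x0, y0, z0, a in points:
              # Fresnel approximation of the spherical wave emitted by the point
              r2 = R2_LUT - 2 * (X * x0 + Y * y0) + x0 ** 2 + y0 ** 2
              field += a * np.exp(1j * k * r2 / (2 * z0))
          return np.angle(field)   # phase-only hologram

      hologram = point_cloud_hologram([(0.0, 0.0, 0.20, 1.0), (1e-3, -1e-3, 0.25, 0.8)])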

  14. Microprocessor Control of Low Speed VSTOL Flight.

    DTIC Science & Technology

    1979-06-08

    Analog; IAS, Indicated Air Speed; I/O, Input/Output; KIAS, Knots, Indicated Air Speed; NATOPS, Naval Air Training and Operating Procedures Standardization; SAS ... computer programming necessary in the research, and contain, in the form of computer-generated time histories, the results of the project ... of the aircraft causes airflow over the wings and therefore produces aerodynamic lift. As the transition progresses, wing-generated lift gradually ...

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bolstad, J.W.; Haarman, R.A.

    Two transients involving the loss of a steam generator in a pressurized water reactor with single-pass steam generators have been analyzed using a state-of-the-art thermal-hydraulic computer code. Computed results include the formation of a steam bubble in the core while the pressurizer is solid. Calculations show that continued injection of high-pressure water would have stopped the scenario. These results are similar to the events at Three Mile Island.

  16. Test Generation for Highly Sequential Circuits

    DTIC Science & Technology

    1989-08-01

    Sequential Circuits. Abhijit Ghosh, Srinivas Devadas, and A. Richard Newton. Abstract: We address the problem of generating test sequences for stuck-at ... Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94720. Devadas: Department of Electrical Engineering and Computer ...

  17. Reliable Early Classification on Multivariate Time Series with Numerical and Categorical Attributes

    DTIC Science & Technology

    2015-05-22

    design a procedure of feature extraction in REACT named MEG (Mining Equivalence classes with shapelet Generators) based on the concept of ... Equivalence Classes Mining [12, 15]. MEG can efficiently and effectively generate the discriminative features. In addition, several strategies are proposed ... technique of parallel computing [4] to propose a process of parallel MEG for substantially reducing the computational overhead of discovering shapelet

  18. COED Transactions, Vol. 8, No. 10, October 1976. The Computer Generation of Thermodynamic Phase Diagrams.

    ERIC Educational Resources Information Center

    Jolls, Kenneth R.; And Others

    A technique is described for the generation of perspective views of three-dimensional models using computer graphics. The technique is applied to models of familiar thermodynamic phase diagrams and the results are presented for the ideal gas and van der Waals equations of state as well as the properties of liquid water and steam from the Steam…

  19. Users manual for coordinate generation code CRDSRA

    NASA Technical Reports Server (NTRS)

    Shamroth, S. J.

    1985-01-01

    Generation of a viable coordinate system represents an important component of an isolated airfoil Navier-Stokes calculation. The manual describes a computer code for generation of such a coordinate system. The coordinate system is a general nonorthogonal one in which high resolution normal to the airfoil is obtained in the vicinity of the airfoil surface, and high resolution along the airfoil surface is obtained in the vicinity of the airfoil leading edge. The method of generation is a constructive technique which leads to a C type coordinate grid. The method of construction as well as input and output definitions are contained herein. The computer code itself as well as a sample output is being submitted to COSMIC.

  20. Requirements for Next Generation Comprehensive Analysis of Rotorcraft

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne; Datta, Anubhav

    2008-01-01

    The unique demands of rotorcraft aeromechanics analysis have led to the development of software tools that are described as comprehensive analyses. The next generation of rotorcraft comprehensive analyses will be driven and enabled by the tremendous capabilities of high performance computing, particularly modular and scalable software executed on multiple cores. Development of a comprehensive analysis based on high performance computing both demands and permits a new analysis architecture. This paper describes a vision of the requirements for this next generation of comprehensive analyses of rotorcraft. The requirements are described and substantiated for what must be included, and justification is provided for what should be excluded. With this guide, a path to the next generation code can be found.

  1. The influence of prophylactic factor VIII in severe hemophilia A

    PubMed Central

    Gissel, Matthew; Whelihan, Matthew F; Ferris, Lauren A; Mann, Kenneth G; Rivard, Georges E; Brummel-Ziedins, Kathleen E

    2013-01-01

    Introduction: Hemophilia A individuals displaying a similar genetic defect have heterogeneous clinical phenotypes. Aim: To evaluate the underlying effect of exogenous factor (f)VIII on tissue factor (Tf)-initiated blood coagulation in severe hemophilia utilizing both empirical and computational models. Methods: We investigated twenty-five clinically severe hemophilia A patients. All individuals were on fVIII prophylaxis and had not received fVIII from 0.25 to 4 days prior to phlebotomy. Coagulation was initiated by the addition of Tf to contact-pathway inhibited whole blood ± an anti-fVIII antibody. Aliquots were quenched over 20 min and analyzed for thrombin generation and fibrin formation. Coagulation factor levels were obtained and used to computationally predict thrombin generation with fVIII set to either zero or its value at the time of the draw. Results: Due to prophylactic fVIII, at the time of the blood draw, the individuals had fVIII levels that ranged from <1% to 22%. Thrombin generation (maximum level and rate) in both empirical and computational systems increased as the level of fVIII increased. FXIII activation rates also increased as the fVIII level increased. Upon suppression of fVIII, thrombin generation became comparable in both systems. Plasma composition analysis showed a negative correlation between bleeding history and computational thrombin generation in the absence of fVIII. Conclusion: Residual prophylactic fVIII directly causes an increase in thrombin generation and fibrin cross-linking in individuals with clinically severe hemophilia A. The combination of each individual's coagulation factors (outside of fVIII) determines each individual's baseline thrombin potential and may affect bleeding risk. PMID:21899664

  2. Streamwise Vorticity Generation in Laminar and Turbulent Jets

    NASA Technical Reports Server (NTRS)

    Demuren, Ayodeji O.; Wilson, Robert V.

    1999-01-01

    Complex streamwise vorticity fields are observed in the evolution of non-circular jets. Generation mechanisms are investigated via Reynolds-averaged (RANS), large-eddy (LES) and direct numerical (DNS) simulations of laminar and turbulent rectangular jets. Complex vortex interactions are found in DNS of laminar jets, but axis-switching is observed only when a single instability mode is present in the incoming mixing layer. With several modes present, the structures are not coherent and no axis-switching occurs; RANS computations also produce no axis-switching. On the other hand, LES of high Reynolds number turbulent jets produce axis-switching even for cases with several instability modes in the mixing layer. Analysis of the source terms of the mean streamwise vorticity equation through post-processing of the instantaneous results shows that complex interactions of gradients of the normal and shear Reynolds stresses are responsible for the generation of streamwise vorticity which leads to axis-switching. RANS computations confirm these results: k-epsilon turbulence model computations fail to reproduce the phenomenon, whereas algebraic Reynolds stress model (ASM) computations, in which the secondary normal and shear stresses are computed explicitly, succeed in reproducing the phenomenon accurately.

  3. Automation of reliability evaluation procedures through CARE - The computer-aided reliability estimation program.

    NASA Technical Reports Server (NTRS)

    Mathur, F. P.

    1972-01-01

    Description of an on-line interactive computer program called CARE (Computer-Aided Reliability Estimation) which can model self-repair and fault-tolerant organizations and perform certain other functions. Essentially CARE consists of a repository of mathematical equations defining the various basic redundancy schemes. These equations, under program control, are then interrelated to generate the desired mathematical model to fit the architecture of the system under evaluation. The mathematical model is then supplied with ground instances of its variables and is then evaluated to generate values for the reliability-theoretic functions applied to the model.

  4. From micro to mainframe. A practical approach to perinatal data processing.

    PubMed

    Yeh, S Y; Lincoln, T

    1985-04-01

    A new, practical approach to perinatal data processing for a large obstetric population is described. This was done with a microcomputer for data entry and a mainframe computer for data reduction. The Screen Oriented Data Access (SODA) program was used to generate the data entry form and to input data into the Apple II Plus computer. Data were stored on diskettes and transmitted through a modem and telephone line to the IBM 370/168 computer. The Statistical Analysis System (SAS) program was used for statistical analyses and report generation. This approach was found to be most practical, flexible, and economical.

  5. Qualifying for the Green500: Experience with the newest generation of supercomputers at LANL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yilk, Todd

    The High Performance Computing Division of Los Alamos National Laboratory recently brought four new supercomputing platforms on line: Trinity with separate partitions built around the Haswell and Knights Landing CPU architectures for capability computing and Grizzly, Fire, and Ice for capacity computing applications. The power monitoring infrastructure of these machines is significantly enhanced over previous supercomputing generations at LANL and all were qualified at the highest level of the Green500 benchmark. Here, this paper discusses supercomputing at LANL, the Green500 benchmark, and notes on our experience meeting the Green500's reporting requirements.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rebay, S.

    This work is devoted to the description of an efficient unstructured mesh generation method entirely based on the Delaunay triangulation. The distinctive characteristic of the proposed method is that point positions and connections are computed simultaneously. This result is achieved by taking advantage of the sequential way in which the Bowyer-Watson algorithm computes the Delaunay triangulation. Two methods are proposed which have great geometrical flexibility, in that they allow us to treat domains of arbitrary shape and topology and to generate arbitrarily nonuniform meshes. The methods are computationally efficient and are applicable both in two and three dimensions. 11 refs., 20 figs., 1 tab.
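
    As a rough illustration of coupling point creation to the evolving triangulation, the Python sketch below inserts the circumcenter of any triangle whose circumradius exceeds a target size and re-triangulates. It uses SciPy's Delaunay wrapper for brevity rather than the incremental Bowyer-Watson insertion the paper exploits, and the square domain and target size are arbitrary choices.

      import numpy as np
      from scipy.spatial import Delaunay

      def circumcenter_radius(a, b, c):
          ax, ay = a; bx, by = b; cx, cy = c
          d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
          ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
                + (cx**2 + cy**2) * (ay - by)) / d
          uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
                + (cx**2 + cy**2) * (bx - ax)) / d
          center = np.array([ux, uy])
          return center, np.hypot(*(center - np.asarray(a)))

      def refine(points, target_size, max_passes=10):
          pts = np.asarray(points, dtype=float)
          for _ in range(max_passes):
              tri = Delaunay(pts)
              new_pts = []
              for simplex in tri.simplices:
                  center, radius = circumcenter_radius(*pts[simplex])
                  # Create a new point wherever the current mesh is still too coarse
                  if radius > target_size and tri.find_simplex(center.reshape(1, 2))[0] >= 0:
                      new_pts.append(center)
              if not new_pts:
                  break
              pts = np.unique(np.vstack([pts, new_pts]), axis=0)   # drop exact duplicates
          return pts, Delaunay(pts)

      square = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0), (1, 0.5), (0.5, 1), (0, 0.5)]
      points, triangulation = refine(square, target_size=0.1)
      print(len(points), len(triangulation.simplices))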

  7. Qualifying for the Green500: Experience with the newest generation of supercomputers at LANL

    DOE PAGES

    Yilk, Todd

    2018-02-17

    The High Performance Computing Division of Los Alamos National Laboratory recently brought four new supercomputing platforms on line: Trinity with separate partitions built around the Haswell and Knights Landing CPU architectures for capability computing and Grizzly, Fire, and Ice for capacity computing applications. The power monitoring infrastructure of these machines is significantly enhanced over previous supercomputing generations at LANL and all were qualified at the highest level of the Green500 benchmark. Here, this paper discusses supercomputing at LANL, the Green500 benchmark, and notes on our experience meeting the Green500's reporting requirements.

  8. Can virtual reality improve anatomy education? A randomised controlled study of a computer-generated three-dimensional anatomical ear model.

    PubMed

    Nicholson, Daren T; Chalk, Colin; Funnell, W Robert J; Daniel, Sam J

    2006-11-01

    The use of computer-generated 3-dimensional (3-D) anatomical models to teach anatomy has proliferated. However, there is little evidence that these models are educationally effective. The purpose of this study was to test the educational effectiveness of a computer-generated 3-D model of the middle and inner ear. We reconstructed a fully interactive model of the middle and inner ear from a magnetic resonance imaging scan of a human cadaver ear. To test the model's educational usefulness, we conducted a randomised controlled study in which 28 medical students completed a Web-based tutorial on ear anatomy that included the interactive model, while a control group of 29 students took the tutorial without exposure to the model. At the end of the tutorials, both groups were asked a series of 15 quiz questions to evaluate their knowledge of 3-D relationships within the ear. The intervention group's mean score on the quiz was 83%, while that of the control group was 65%. This difference in means was highly significant (P < 0.001). Our findings stand in contrast to the handful of previous randomised controlled trials that evaluated the effects of computer-generated 3-D anatomical models on learning. The equivocal and negative results of these previous studies may be due to the limitations of these studies (such as small sample size) as well as the limitations of the models that were studied (such as a lack of full interactivity). Given our positive results, we believe that further research is warranted concerning the educational effectiveness of computer-generated anatomical models.

  9. Vector computer memory bank contention

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.

    1985-01-01

    A number of vector supercomputers feature very large memories. Unfortunately the large capacity memory chips that are used in these computers are much slower than the fast central processing unit (CPU) circuitry. As a result, memory bank reservation times (in CPU ticks) are much longer than on previous generations of computers. A consequence of these long reservation times is that memory bank contention is sharply increased, resulting in significantly lowered performance rates. The phenomenon of memory bank contention in vector computers is analyzed using both a Markov chain model and a Monte Carlo simulation program. The results of this analysis indicate that future generations of supercomputers must either employ much faster memory chips or else feature very large numbers of independent memory banks.
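
    A toy Monte Carlo version of the contention effect described above can be written in a few lines: a single vector stream tries to issue one reference per CPU tick to a randomly chosen bank and stalls whenever that bank is still reserved. The bank counts and reservation times below are illustrative, and the model omits the Markov-chain analysis and architectural details of the paper.

      import numpy as np

      def contention_efficiency(n_banks, reserve_ticks, n_refs=200_000, seed=0):
          """Fraction of peak issue rate achieved by a single random-access vector stream."""
          rng = np.random.default_rng(seed)
          busy_until = np.zeros(n_banks)                  # tick at which each bank frees up
          tick = 0.0
          for bank in rng.integers(0, n_banks, size=n_refs):
              tick = max(tick + 1.0, busy_until[bank])    # stall if the bank is still reserved
              busy_until[bank] = tick + reserve_ticks     # reserve the bank again
          return n_refs / tick                            # references delivered per tick (<= 1)

      for reserve in (4, 16, 64):                         # longer reservations -> more contention
          print(reserve, round(contention_efficiency(n_banks=64, reserve_ticks=reserve), 3))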

  10. Computer-generated holograms by multiple wavefront recording plane method with occlusion culling.

    PubMed

    Symeonidou, Athanasia; Blinder, David; Munteanu, Adrian; Schelkens, Peter

    2015-08-24

    We propose a novel fast method for full-parallax computer-generated holograms with occlusion processing, suitable for volumetric data such as point clouds. A novel light-wave propagation strategy relying on the sequential use of the wavefront recording plane method is proposed, which employs look-up tables in order to reduce the computational complexity of calculating the fields. A novel technique for occlusion culling with little additional computation cost is also introduced. Additionally, the method applies a Gaussian distribution to the individual points in order to improve visual quality. Performance tests show that for a full-parallax high-definition CGH a speedup factor of more than 2,500 compared to the ray-tracing method can be achieved without hardware acceleration.

  11. Automating FEA programming

    NASA Technical Reports Server (NTRS)

    Sharma, Naveen

    1992-01-01

    In this paper we briefly describe a combined symbolic and numeric approach for solving mathematical models on parallel computers. An experimental software system, PIER, is being developed in Common Lisp to synthesize computationally intensive and domain formulation dependent phases of finite element analysis (FEA) solution methods. Quantities for domain formulation like shape functions, element stiffness matrices, etc., are automatically derived using symbolic mathematical computations. The problem specific information and derived formulae are then used to generate (parallel) numerical code for FEA solution steps. A constructive approach to specify a numerical program design is taken. The code generator compiles application oriented input specifications into (parallel) FORTRAN77 routines with the help of built-in knowledge of the particular problem, numerical solution methods and the target computer.
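
    The symbolic-then-numeric division of labor described here can be mimicked in miniature with SymPy: derive an element matrix symbolically, then emit Fortran expressions for the numeric phase. The sketch below uses a 1-D linear bar element purely as an illustration; it is not the PIER system or its code generator.

      import sympy as sp

      x, L, E, A = sp.symbols("x L E A", positive=True)

      # Linear shape functions on a bar element spanning [0, L]
      N = sp.Matrix([1 - x / L, x / L])
      B = N.diff(x)                                    # strain-displacement vector

      # Element stiffness: integral of E*A * B*B^T over the element
      K = (E * A * (B * B.T)).applyfunc(lambda e: sp.integrate(e, (x, 0, L)))
      sp.pprint(K)                                     # [[E*A/L, -E*A/L], [-E*A/L, E*A/L]]

      # Emit Fortran assignments, standing in for automatic numerical code generation
      for i in range(2):
          for j in range(2):
              print(f"K({i + 1},{j + 1}) = " + sp.fcode(K[i, j], standard=95).strip())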

  12. A preliminary study on the short-term efficacy of chairside computer-aided design/computer-assisted manufacturing-generated posterior lithium disilicate crowns.

    PubMed

    Reich, Sven; Fischer, Sören; Sobotta, Bernhard; Klapper, Horst-Uwe; Gozdowski, Stephan

    2010-01-01

    The purpose of this preliminary study was to evaluate the clinical performance of chairside-generated crowns over a preliminary time period of 24 months. Forty-one posterior crowns made of a machinable lithium disilicate ceramic for full-contour crowns were inserted in 34 patients using a chairside computer-aided design/computer-assisted manufacturing technique. The crowns were evaluated at baseline and after 6, 12, and 24 months according to modified United States Public Health Service criteria. After 2 years, all reexamined crowns (n = 39) were in situ; one abutment exhibited secondary caries and two abutments received root canal treatment. Within the limited observation period, the crowns revealed clinically satisfying results.

  13. Vector computer memory bank contention

    NASA Technical Reports Server (NTRS)

    Bailey, David H.

    1987-01-01

    A number of vector supercomputers feature very large memories. Unfortunately the large capacity memory chips that are used in these computers are much slower than the fast central processing unit (CPU) circuitry. As a result, memory bank reservation times (in CPU ticks) are much longer than on previous generations of computers. A consequence of these long reservation times is that memory bank contention is sharply increased, resulting in significantly lowered performance rates. The phenomenon of memory bank contention in vector computers is analyzed using both a Markov chain model and a Monte Carlo simulation program. The results of this analysis indicate that future generations of supercomputers must either employ much faster memory chips or else feature very large numbers of independent memory banks.

  14. Evaluating the Psychometric Characteristics of Generated Multiple-Choice Test Items

    ERIC Educational Resources Information Center

    Gierl, Mark J.; Lai, Hollis; Pugh, Debra; Touchie, Claire; Boulais, André-Philippe; De Champlain, André

    2016-01-01

    Item development is a time- and resource-intensive process. Automatic item generation integrates cognitive modeling with computer technology to systematically generate test items. To date, however, items generated using cognitive modeling procedures have received limited use in operational testing situations. As a result, the psychometric…

  15. Computer Generated Holography with Intensity-Graded Patterns

    PubMed Central

    Conti, Rossella; Assayag, Osnath; de Sars, Vincent; Guillon, Marc; Emiliani, Valentina

    2016-01-01

    Computer Generated Holography achieves patterned illumination at the sample plane through phase modulation of the laser beam at the objective back aperture. This is obtained by using liquid crystal-based spatial light modulators (LC-SLMs), which modulate the spatial phase of the incident laser beam. A variety of algorithms is employed to calculate the phase modulation masks addressed to the LC-SLM. These algorithms range from simple gratings-and-lenses approaches that generate multiple diffraction-limited spots, to iterative Fourier-transform algorithms capable of generating arbitrary illumination shapes tailored to the target contour. Applications for holographic light patterning include multi-trap optical tweezers, patterned voltage imaging, and optical control of neuronal excitation using uncaging or optogenetics. Past implementations of computer generated holography used binary input profiles to generate binary light distributions at the sample plane. Here we demonstrate that using graded input sources enables the generation of intensity-graded light patterns and extends the range of application of holographic light illumination. First, we use intensity-graded holograms to compensate for the LC-SLM's position-dependent diffraction efficiency or for sample fluorescence inhomogeneity. Finally, we show that intensity-graded holography can be used to equalize photo-evoked currents from cells expressing different levels of channelrhodopsin-2 (ChR2), one of the most commonly used optogenetic light-gated channels, taking into account the non-linear dependence of channel opening on incident light. PMID:27799896
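
    The iterative Fourier-transform step mentioned above can be sketched with a plain Gerchberg-Saxton-style loop in Python: the far-field amplitude is repeatedly replaced by the (graded) target while the SLM plane keeps unit amplitude and retains only phase. Grid size, spot positions and brightness values are arbitrary; this is a generic textbook algorithm, not the authors' code.

      import numpy as np

      def gs_phase_mask(target_intensity, n_iter=50, seed=0):
          rng = np.random.default_rng(seed)
          target_amp = np.sqrt(target_intensity)
          phase = rng.uniform(0, 2 * np.pi, target_amp.shape)   # random initial SLM phase
          for _ in range(n_iter):
              far = np.fft.fft2(np.exp(1j * phase))             # SLM plane -> sample plane
              far = target_amp * np.exp(1j * np.angle(far))     # impose the graded target amplitude
              near = np.fft.ifft2(far)                          # back to the SLM plane
              phase = np.angle(near)                            # keep only the phase (unit amplitude)
          return phase

      # Intensity-graded target: two spots of different brightness
      target = np.zeros((256, 256))
      target[100, 100], target[150, 180] = 1.0, 0.4
      mask = gs_phase_mask(target)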

  16. A pedagogical example of second-order arithmetic sequences applied to the construction of computer passwords by upper elementary grade students

    NASA Astrophysics Data System (ADS)

    Coggins, Porter E.

    2015-04-01

    The purpose of this paper is (1) to present how general-education elementary-school-age students constructed computer passwords using digital root sums and second-order arithmetic sequences, (2) to argue that computer password construction can be used as an engaging introduction that generates interest among elementary school students in studying mathematics related to computer science, and (3) to share additional mathematical ideas accessible to elementary school students that can be used to create computer passwords. This paper fills a current gap in the literature regarding the integration of mathematical content accessible to upper elementary school students with aspects of computer science in general, and computer password construction in particular. In addition, the protocols presented here can serve as a hook to generate further interest in mathematics and computer science. Students learned to create a random-looking computer password by using biometric measurements of their shoe size, height, and age in months to create a second-order arithmetic sequence, then converting the resulting numbers into characters that became their computer passwords. This password protocol can be used to introduce students to good computer password habits that can serve as a foundation for a life-long awareness of data security. A refinement of the password protocol is also presented.
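
    One plausible reading of the protocol is sketched below: digital roots of the three measurements seed a sequence with a constant second difference, and each term is mapped to a character. The specific seeding, sequence length and character mapping are guesses for illustration; the paper's exact protocol may differ.

      import string

      def digital_root(n: int) -> int:
          """Digital root: repeatedly sum decimal digits until one digit remains."""
          return 1 + (n - 1) % 9 if n > 0 else 0

      def second_order_sequence(start, first_diff, second_diff, length):
          seq, term, diff = [], start, first_diff
          for _ in range(length):
              seq.append(term)
              term += diff          # add the current first difference
              diff += second_diff   # first differences grow by a constant second difference
          return seq

      def make_password(shoe_size_mm, height_cm, age_months, length=10):
          start = digital_root(shoe_size_mm)
          d1 = digital_root(height_cm)
          d2 = digital_root(age_months)
          alphabet = string.ascii_letters + string.digits
          terms = second_order_sequence(start, d1, d2, length)
          return "".join(alphabet[t % len(alphabet)] for t in terms)

      print(make_password(shoe_size_mm=245, height_cm=152, age_months=132))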

  17. Model-based VQ for image data archival, retrieval and distribution

    NASA Technical Reports Server (NTRS)

    Manohar, Mareboyana; Tilton, James C.

    1995-01-01

    An ideal image compression technique for image data archival, retrieval and distribution would be one with the asymmetrical computational requirements of Vector Quantization (VQ), but without the complications arising from VQ codebooks. Codebook generation and maintenance are stumbling blocks which have limited the use of VQ as a practical image compression algorithm. Model-based VQ (MVQ), a variant of VQ described here, has the computational properties of VQ but does not require explicit codebooks. The codebooks are internally generated using mean-removed error and Human Visual System (HVS) models. The error model assumed is the Laplacian distribution with mean lambda, computed from a sample of the input image. A Laplacian distribution with mean lambda is generated with a uniform random number generator. These random numbers are grouped into vectors, which are further conditioned to make them perceptually meaningful by filtering the DCT coefficients of each vector. The DCT coefficients are filtered by multiplying them by a weight matrix that is found to be optimal for human perception, and the inverse DCT is performed to produce the conditioned vectors for the codebook. The only image-dependent parameter used in codebook generation is the mean, lambda, which is included in the coded file so that the codebook generation process can be repeated for decoding.
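
    A toy version of the internally generated codebook can be written as follows: Laplacian vectors are drawn from uniforms by inverse-transform sampling, weighted in the DCT domain, and inverse-transformed. The weight matrix here is a simple low-pass stand-in, not the paper's HVS model, and the block size and lambda are assumed values.

      import numpy as np
      from scipy.fft import dctn, idctn

      def mvq_codebook(n_vectors=256, block=8, lam=4.0, seed=0):
          rng = np.random.default_rng(seed)
          u = rng.uniform(-0.5, 0.5, size=(n_vectors, block, block))
          laplacian = -lam * np.sign(u) * np.log1p(-2.0 * np.abs(u))   # Laplace(0, lam) samples

          # Placeholder perceptual weights: attenuate higher spatial frequencies
          fx, fy = np.meshgrid(np.arange(block), np.arange(block))
          weights = 1.0 / (1.0 + 0.25 * (fx + fy))

          codebook = np.empty_like(laplacian)
          for i, vec in enumerate(laplacian):
              coeffs = dctn(vec, norm="ortho") * weights    # filter the DCT coefficients
              codebook[i] = idctn(coeffs, norm="ortho")     # conditioned code vector
          return codebook

      print(mvq_codebook().shape)    # (256, 8, 8)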

  18. Controller/Computer Interface with an Air-Ground Data Link

    DOT National Transportation Integrated Search

    1976-06-01

    This report describes the results of an experiment for evaluating the controller/computer interface in an ARTS III/M&S system modified for use with a simulated digital data link and a voice link utilizing a computer-generated voice system. A modified...

  19. Using NCAR Yellowstone for PhotoVoltaic Power Forecasts with Artificial Neural Networks and an Analog Ensemble

    NASA Astrophysics Data System (ADS)

    Cervone, G.; Clemente-Harding, L.; Alessandrini, S.; Delle Monache, L.

    2016-12-01

    A methodology based on Artificial Neural Networks (ANN) and an Analog Ensemble (AnEn) is presented to generate 72-hour deterministic and probabilistic forecasts of the power generated by photovoltaic (PV) power plants, using input from a numerical weather prediction model and computed astronomical variables. ANN and AnEn are used individually and in combination to generate forecasts for three solar power plants located in Italy. The computational scalability of the proposed solution is tested using synthetic data simulating 4,450 PV power stations. The NCAR Yellowstone supercomputer is employed to test the parallel implementation of the proposed solution, ranging from 1 node (32 cores) to 4,450 nodes (141,140 cores). Results show that a combined AnEn + ANN solution yields the best results, and that the proposed solution is well suited for massive-scale computation.
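
    The analog ensemble part lends itself to a compact sketch: for each new forecast, find the k most similar historical forecasts of the predictor variables and take the PV power observed at those times as the ensemble. The predictors, distance metric and synthetic history below are assumptions for illustration only.

      import numpy as np

      def analog_ensemble(new_forecast, past_forecasts, past_power, k=20):
          """Return the k-member ensemble of observed power for the closest past forecasts."""
          mu = past_forecasts.mean(axis=0)
          sd = past_forecasts.std(axis=0) + 1e-12            # normalize each predictor
          dist = np.linalg.norm((past_forecasts - mu) / sd - (new_forecast - mu) / sd, axis=1)
          analogs = np.argsort(dist)[:k]
          return past_power[analogs]

      rng = np.random.default_rng(1)
      hist_X = rng.normal(size=(5000, 4))    # e.g. irradiance, temperature, cloud cover, solar elevation
      hist_y = np.clip(0.8 * hist_X[:, 0] + rng.normal(0, 0.1, 5000), 0, None)
      ens = analog_ensemble(hist_X[0] + 0.05, hist_X, hist_y)
      print(ens.mean(), np.percentile(ens, [10, 90]))        # deterministic and probabilistic output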

  20. CFD-Based Design of a Filming Injector for N+3 Combustors

    NASA Technical Reports Server (NTRS)

    Ajmani, Kumud; Mongia, Hukam; Lee, Phil

    2016-01-01

    An effort was undertaken to perform CFD analysis of fluid flow in Lean-Direct Injection (LDI) combustors with axial swirl-venturi elements coupled with a new fuel-filming injector design for next-generation N+3 combustors. The National Combustion Code (NCC) was used to perform non-reacting and two-phase reacting flow computations on a N+3 injector configuration, in a single-element and a five-element injector array. All computations were performed with a consistent approach towards mesh-generation, spray-, ignition- and kinetics-modeling with the NCC. Computational predictions of the aerodynamics of the injector were used to arrive at an optimal injector design that met effective area, aerodynamics, and fuel-air mixing criteria. LDI-3 emissions (EINOx, EICO and UHC) were compared with the previous generation LDI-2 combustor experimental data at representative engine cycle conditions.

  1. A PC-based generator of surface ECG potentials for computer electrocardiograph testing.

    PubMed

    Franchi, D; Palagi, G; Bedini, R

    1994-02-01

    The system is composed of an electronic circuit connected to a PC; starting from ECGs digitally collected by commercial interpretative electrocardiographs, its outputs simulate virtual patients' limb and chest electrode potentials. Appropriate software manages the D/A conversion and lines up the original short-term signal in a ring buffer to generate continuous ECG traces. The device also permits the addition of artifacts and/or baseline wanders/shifts on each lead separately. The system has been thoroughly tested, and statistical indexes have been computed to quantify the reproduction accuracy by analyzing, in the generated signal, both the errors induced in the fiducial-point measurements and the ability to retain diagnostic significance. The device, integrated with an annotated ECG database, constitutes a reliable and powerful system for use in the quality-assurance testing of computer electrocardiographs.

  2. Concept of a Cloud Service for Data Preparation and Computational Control on Custom HPC Systems in Application to Molecular Dynamics

    NASA Astrophysics Data System (ADS)

    Puzyrkov, Dmitry; Polyakov, Sergey; Podryga, Viktoriia; Markizov, Sergey

    2018-02-01

    At the present stage of computer technology development it is possible to study the properties and processes of complex systems at the molecular and even atomic levels, for example by means of molecular dynamics methods. The most interesting problems are those related to the study of complex processes under real physical conditions. Solving such problems requires the use of high performance computing systems of various types, for example GRID systems and HPC clusters. Because such computational tasks are time consuming, software is needed for the automatic and unified monitoring of these computations. A complex computational task can be performed over different HPC systems, which requires synchronization of output data between the storage chosen by a scientist and the HPC system used for the computations. The design of the computational domain is also challenging and requires complex software tools and algorithms for proper atomistic data generation on HPC systems. The paper describes the prototype of a cloud service intended for the design of large-volume atomistic systems for further detailed molecular dynamics calculations and for the computational management of these calculations, and presents the part of its concept aimed at initial data generation on the HPC systems.

  3. A Computational Workflow for the Automated Generation of Models of Genetic Designs.

    PubMed

    Misirli, Göksel; Nguyen, Tramy; McLaughlin, James Alastair; Vaidyanathan, Prashant; Jones, Timothy S; Densmore, Douglas; Myers, Chris; Wipat, Anil

    2018-06-05

    Computational models are essential to engineer predictable biological systems and to scale up this process for complex systems. Computational modeling often requires expert knowledge and data to build models. Clearly, manual creation of models is not scalable for large designs. Despite several automated model construction approaches, computational methodologies to bridge knowledge in design repositories and the process of creating computational models have still not been established. This paper describes a workflow for automatic generation of computational models of genetic circuits from data stored in design repositories using existing standards. This workflow leverages the software tool SBOLDesigner to build structural models that are then enriched by the Virtual Parts Repository API using Systems Biology Open Language (SBOL) data fetched from the SynBioHub design repository. The iBioSim software tool is then utilized to convert this SBOL description into a computational model encoded using the Systems Biology Markup Language (SBML). Finally, this SBML model can be simulated using a variety of methods. This workflow provides synthetic biologists with easy to use tools to create predictable biological systems, hiding away the complexity of building computational models. This approach can further be incorporated into other computational workflows for design automation.

  4. Construction of Blaze at the University of Illinois at Chicago: A Shared, High-Performance, Visual Computer for Next-Generation Cyberinfrastructure-Accelerated Scientific, Engineering, Medical and Public Policy Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Maxine D.; Leigh, Jason

    2014-02-17

    The Blaze high-performance visual computing system serves the high-performance computing research and education needs of the University of Illinois at Chicago (UIC). Blaze consists of a state-of-the-art, networked computer cluster and an ultra-high-resolution visualization system called CAVE2(TM) that is currently not available anywhere else in Illinois. This system is connected via a high-speed 100-Gigabit network to the State of Illinois' I-WIRE optical network, as well as to national and international high-speed networks, such as Internet2 and the Global Lambda Integrated Facility. This enables Blaze to serve as an on-ramp to national cyberinfrastructure, such as the National Science Foundation’s Blue Waters petascale computer at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign and the Department of Energy’s Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory. DOE award # DE-SC005067, leveraged with NSF award #CNS-0959053 for “Development of the Next-Generation CAVE Virtual Environment (NG-CAVE),” enabled us to create a first-of-its-kind high-performance visual computing system. The UIC Electronic Visualization Laboratory (EVL) worked with two U.S. companies to advance their commercial products and maintain U.S. leadership in the global information technology economy. New applications are being enabled with the CAVE2/Blaze visual computing system that are advancing scientific research and education in the U.S. and globally and helping to train the next-generation workforce.

  5. Towards pattern generation and chaotic series prediction with photonic reservoir computers

    NASA Astrophysics Data System (ADS)

    Antonik, Piotr; Hermans, Michiel; Duport, François; Haelterman, Marc; Massar, Serge

    2016-03-01

    Reservoir Computing is a bio-inspired computing paradigm for processing time-dependent signals that is particularly well suited for analog implementations. Our team has demonstrated several photonic reservoir computers with performance comparable to digital algorithms on a series of benchmark tasks such as channel equalisation and speech recognition. Recently, we showed that our opto-electronic reservoir computer could be trained online with a simple gradient descent algorithm programmed on an FPGA chip. This setup makes it in principle possible to feed the output signal back into the reservoir, and thus highly enrich the dynamics of the system. This will allow us to tackle complex prediction tasks in hardware, such as pattern generation and chaotic and financial series prediction, which have so far only been studied in digital implementations. Here we report simulation results of our opto-electronic setup with an FPGA chip and output feedback applied to pattern generation and Mackey-Glass chaotic series prediction. The simulations take into account the major aspects of our experimental setup. We find that pattern generation can be easily implemented on the current setup with very good results. The Mackey-Glass series prediction task is more complex and requires a large reservoir and a more elaborate training algorithm. With these adjustments promising results are obtained, and we now know what improvements are needed to match previously reported numerical results. These simulation results will serve as a basis of comparison for the experiments we will carry out in the coming months.
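
    A purely numerical sketch of pattern generation with output feedback is shown below using a conventional echo state network: the reservoir is driven by the target during training (teacher forcing), a ridge-regression readout is fitted, and the trained output is then fed back so the network runs autonomously. The reservoir size, spectral radius and sine-wave target are illustrative and do not model the opto-electronic hardware.

      import numpy as np

      rng = np.random.default_rng(0)
      N, T_train, T_free = 200, 2000, 500
      target = np.sin(2 * np.pi * np.arange(T_train + T_free) / 50)   # pattern to generate

      W = rng.normal(0, 1, (N, N))
      W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # rescale to spectral radius 0.9
      W_fb = rng.uniform(-1, 1, N)                      # output-feedback weights

      # Teacher-forced reservoir states
      x = np.zeros(N)
      states = np.zeros((T_train, N))
      for t in range(T_train):
          fb = target[t - 1] if t > 0 else 0.0
          x = np.tanh(W @ x + W_fb * fb)
          states[t] = x

      # Ridge-regression readout: W_out @ x(t) ~ target(t)
      ridge = 1e-6
      W_out = np.linalg.solve(states.T @ states + ridge * np.eye(N), states.T @ target[:T_train])

      # Autonomous generation with output feedback
      y = target[T_train - 1]
      outputs = []
      for t in range(T_free):
          x = np.tanh(W @ x + W_fb * y)
          y = W_out @ x
          outputs.append(y)
      print(np.max(np.abs(np.array(outputs) - target[T_train:])))     # free-running generation error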

  6. Computer ray tracing speeds.

    PubMed

    Robb, P; Pawlowski, B

    1990-05-01

    The results of measuring the ray trace speed and compilation speed of thirty-nine computers in fifty-seven configurations, ranging from personal computers to supercomputers, are described. A correlation of ray trace speed has been made with the LINPACK benchmark, which allows the ray trace speed to be estimated using LINPACK performance data. The results indicate that the latest generation of workstations, using CPUs based on RISC (Reduced Instruction Set Computer) technology, are as fast as or faster than mainframe computers in compute-bound situations.

  7. The applicability of a computer model for predicting head injury incurred during actual motor vehicle collisions.

    PubMed

    Moran, Stephan G; Key, Jason S; McGwin, Gerald; Keeley, Jason W; Davidson, James S; Rue, Loring W

    2004-07-01

    Head injury is a significant cause of both morbidity and mortality. Motor vehicle collisions (MVCs) are the most common source of head injury in the United States. No studies have conclusively determined the applicability of computer models for accurate prediction of head injuries sustained in actual MVCs. This study sought to determine the applicability of such models for predicting head injuries sustained by MVC occupants. The Crash Injury Research and Engineering Network (CIREN) database was queried for restrained drivers who sustained a head injury. These collisions were modeled using occupant dynamic modeling (MADYMO) software, and head injury scores were generated. The computer-generated head injury scores then were evaluated with respect to the actual head injuries sustained by the occupants to determine the applicability of MADYMO computer modeling for predicting head injury. Five occupants meeting the selection criteria for the study were selected from the CIREN database. The head injury scores generated by MADYMO were lower than expected given the actual injuries sustained. In only one case did the computer analysis predict a head injury of a severity similar to that actually sustained by the occupant. Although computer modeling accurately simulates experimental crash tests, it may not be applicable for predicting head injury in actual MVCs. Many complicating factors surrounding actual MVCs make accurate computer modeling difficult. Future modeling efforts should consider variables such as age of the occupant and should account for a wider variety of crash scenarios.
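
    The abstract does not name the injury score used; as general background, occupant-dynamics codes such as MADYMO commonly report the Head Injury Criterion (HIC), computed from the resultant (non-negative) head acceleration history in g. The sketch below is the textbook HIC15 calculation applied to a synthetic pulse, not the study's procedure or data.

      import numpy as np

      def hic(time_s, accel_g, max_window_s=0.015):
          """HIC15 from a resultant head acceleration history: time in s, acceleration in g."""
          # Running integral of acceleration (trapezoidal rule)
          integral = np.concatenate(
              ([0.0], np.cumsum(np.diff(time_s) * 0.5 * (accel_g[1:] + accel_g[:-1]))))
          best = 0.0
          n = len(time_s)
          for i in range(n - 1):
              for j in range(i + 1, n):
                  dt = time_s[j] - time_s[i]
                  if dt > max_window_s:
                      break
                  avg = (integral[j] - integral[i]) / dt      # mean acceleration over the window
                  best = max(best, dt * avg ** 2.5)
          return best

      # Synthetic example: a half-sine acceleration pulse peaking at 120 g over 20 ms
      t = np.linspace(0, 0.05, 501)
      a = 120 * np.sin(np.pi * t / 0.02) * (t <= 0.02)
      print(round(hic(t, a), 1))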

  8. Examining Computer Gaming Addiction in Terms of Different Variables

    ERIC Educational Resources Information Center

    Kurt, Adile Askim; Dogan, Ezgi; Erdogmus, Yasemin Kahyaoglu; Emiroglu, Bulent Gursel

    2018-01-01

    The computer gaming addiction is one of the newer concepts that young generations face and can be defined as the excessive and problematic use of computer games leading to social and/or emotional problems. The purpose of this study is to analyse through variables the computer gaming addiction levels of secondary school students. The research was…

  9. SYMBOD - A computer program for the automatic generation of symbolic equations of motion for systems of hinge-connected rigid bodies

    NASA Technical Reports Server (NTRS)

    Macala, G. A.

    1983-01-01

    A computer program is described that can automatically generate symbolic equations of motion for systems of hinge-connected rigid bodies with tree topologies. The dynamical formulation underlying the program is outlined, and examples are given to show how a symbolic language is used to code the formulation. The program is applied to generate the equations of motion for a four-body model of the Galileo spacecraft. The resulting equations are shown to be a factor of three faster in execution time than conventional numerical subroutines.
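
    The flavor of symbolic equation-of-motion generation can be shown with a tiny SymPy example that derives the Euler-Lagrange equation of a planar pendulum and prints it; a numerical routine could then be emitted from the result. This is a generic illustration, not the SYMBOD formulation for hinge-connected rigid bodies.

      import sympy as sp
      from sympy.calculus.euler import euler_equations

      t = sp.symbols("t")
      m, l, g = sp.symbols("m l g", positive=True)
      theta = sp.Function("theta")(t)

      # Lagrangian of a simple planar pendulum: L = T - V
      T_kin = sp.Rational(1, 2) * m * (l * theta.diff(t)) ** 2
      V_pot = -m * g * l * sp.cos(theta)
      L = T_kin - V_pot

      # Euler-Lagrange equation, equivalent to m*l**2*theta'' + m*g*l*sin(theta) = 0
      eom = euler_equations(L, [theta], t)[0]
      print(eom)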

  10. Generation of dynamo magnetic fields in protoplanetary and other astrophysical accretion disks

    NASA Technical Reports Server (NTRS)

    Stepinski, T. F.; Levy, E. H.

    1988-01-01

    A computational method for treating the generation of dynamo magnetic fields in astrophysical disks is presented. The numerical difficulty of handling the boundary condition at infinity in the cylindrical disk geometry is overcome by embedding the disk in a spherical computational space and matching the solutions to analytically tractable spherical functions in the surrounding space. The lowest lying dynamo normal modes for a 'thick' astrophysical disk are calculated. The generated modes found are all oscillatory and spatially localized. Tha potential implications of the results for the properties of dynamo magnetic fields in real astrophysical disks are discussed.

  11. Evaluation of experimental design and computational parameter choices affecting analyses of ChIP-seq and RNA-seq data in undomesticated poplar trees.

    Treesearch

    Lijun Liu; V. Missirian; Matthew S. Zinkgraf; Andrew Groover; V. Filkov

    2014-01-01

    Background: One of the great advantages of next generation sequencing is the ability to generate large genomic datasets for virtually all species, including non-model organisms. It should be possible, in turn, to apply advanced computational approaches to these datasets to develop models of biological processes. In a practical sense, working with non-model organisms...

  12. Unstructured mesh methods for CFD

    NASA Technical Reports Server (NTRS)

    Peraire, J.; Morgan, K.; Peiro, J.

    1990-01-01

    Mesh generation methods for Computational Fluid Dynamics (CFD) are outlined. Geometric modeling is discussed. An advancing front method is described. Flow past a two-engine Falcon aeroplane is studied. An algorithm and associated data structure called the alternating digital tree, which efficiently solves the geometric searching problem, is described. The computation of an initial approximation to the steady-state solution of a given problem is described. Mesh generation for transient flows is described.

  13. Subpicosecond Optical Digital Computation Using Conjugate Parametric Generators

    DTIC Science & Technology

    1989-03-31

    Using Phase Conjugate Parametric Generators. PERSONAL AUTHOR(S): Alfano, Robert; Eichmann, George; Dorsinville, Roger; Li, Yao. Cited references include: "...conjugation-based optical residue arithmetic processor," Y. Li, G. Eichmann, R. Dorsinville, and R. R. Alfano, Opt. Lett. 13 (1988); "Parallel ultrafast ... optical digital and symbolic computation via optical phase conjugation," Y. Li, G. Eichmann, R. Dorsinville, Appl. Opt. 27, 2025 (1988); ...

  14. Computer assisted generation of the matrix elements between contracted wavefunctions in a Complete Active Space scheme

    NASA Astrophysics Data System (ADS)

    Angeli, C.; Cimiraglia, R.

    2005-02-01

    Starting from a CAS-SCF calculation, a sequence of contracted functions can be generated by applying strings of spin-traced replacement operators to the CAS-SCF solution. The laborious task of producing the Hamiltonian matrix elements between such functions can be substantially reduced by making use of a computer algebra system. An implementation employing the MuPAD system is presented and illustrated.

  15. Rotational control of computer generated holograms.

    PubMed

    Preece, Daryl; Rubinsztein-Dunlop, Halina

    2017-11-15

    We develop a basis for three-dimensional rotation of arbitrary light fields created by computer generated holograms. By adding an extra phase function into the kinoform, any light field or holographic image can be tilted in the focal plane with minimized distortion. We present two different approaches to rotate an arbitrary hologram: the Scheimpflug method and a novel coordinate transformation method. Experimental results are presented to demonstrate the validity of both proposed methods.

  16. A Systematic Review of Tablet Computers and Portable Media Players as Speech Generating Devices for Individuals with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Lorah, Elizabeth R.; Parnell, Ashley; Whitby, Peggy Schaefer; Hantula, Donald

    2015-01-01

    Powerful, portable, off-the-shelf handheld devices, such as tablet based computers (i.e., iPad®; Galaxy®) or portable multimedia players (i.e., iPod®), can be adapted to function as speech generating devices for individuals with autism spectrum disorders or related developmental disabilities. This paper reviews the research in this new and rapidly…

  17. Generating series for GUE correlators

    NASA Astrophysics Data System (ADS)

    Dubrovin, Boris; Yang, Di

    2017-11-01

    We extend to the Toda lattice hierarchy the approach of Bertola et al. (Phys D Nonlinear Phenom 327:30-57, 2016; IMRN, 2016) to computation of logarithmic derivatives of tau-functions in terms of the so-called matrix resolvents of the corresponding difference Lax operator. As a particular application we obtain explicit generating series for connected GUE correlators. On this basis an efficient recursive procedure for computing the correlators in full genera is developed.

  18. Computer-Generated Dot Maps as an Epidemiologic Tool: Investigating an Outbreak of Toxoplasmosis

    PubMed Central

    Werker, Denise H.; King, Arlene S.; Marion, Stephen A.; Bell, Alison; Issac-Renton, Judith L.; Irwin, G. Stewart; Bowie, William R.

    1999-01-01

    We used computer-generated dot maps to examine the spatial distribution of 94 Toxoplasma gondii infections associated with an outbreak in British Columbia, Canada. The incidence among patients served by one water distribution system was 3.52 times that of patients served by other sources. Acute T. gondii infection among 3,812 pregnant women was associated with the incriminated distribution system. PMID:10603218

  19. Application of fiber spectrometers for etch depth measurement of binary computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Korolkov, V. P.; Konchenko, A. S.; Poleshchuk, A. G.

    2013-01-01

    A novel spectrophotometric method for measuring the etch depth of computer-generated holograms is presented. It is based on the spectral properties of binary phase multi-order gratings: the zero-order intensity is a periodic function of the wavenumber of the illuminating light, and the groove depth, being inversely proportional to that period, can be calculated from it. Measurement in reflection doubles the effective phase depth of the grooves and allows shallow phase gratings to be measured more precisely. Binary diffractive structures with depths from several hundred to thousands of nanometers can be measured by the method. The measurement uncertainty is mainly determined by the following factors: shifts of the spectral maxima caused by tilted groove sidewalls, uncertainty in the measured angle of light incidence, and the wavelength error of the spectrophotometer. It is shown theoretically and experimentally that the method can ensure a 0.25-1% error for desktop spectrophotometers. However, fiber spectrometers are more convenient for building a real measurement system with scanning measurement of large-area computer-generated holograms, which are used for optical testing of aspheric optics. In particular, diffractive Fizeau null lenses need to be carefully tested for uniformity of etch depth. An experimental system for the characterization of binary computer-generated holograms was developed using the spectrophotometric unit of the confocal sensor CHR-150 (STIL SA).

  20. Dispensing Processes Impact Apparent Biological Activity as Determined by Computational and Statistical Analyses

    PubMed Central

    Ekins, Sean; Olechno, Joe; Williams, Antony J.

    2013-01-01

    Dispensing and dilution processes may profoundly influence estimates of biological activity of compounds. Published data show Ephrin type-B receptor 4 IC50 values obtained via tip-based serial dilution and dispensing versus acoustic dispensing with direct dilution differ by orders of magnitude with no correlation or ranking of datasets. We generated computational 3D pharmacophores based on data derived by both acoustic and tip-based transfer. The computed pharmacophores differ significantly depending upon dispensing and dilution methods. The acoustic dispensing-derived pharmacophore correctly identified active compounds in a subsequent test set where the tip-based method failed. Data from acoustic dispensing generates a pharmacophore containing two hydrophobic features, one hydrogen bond donor and one hydrogen bond acceptor. This is consistent with X-ray crystallography studies of ligand-protein interactions and automatically generated pharmacophores derived from this structural data. In contrast, the tip-based data suggest a pharmacophore with two hydrogen bond acceptors, one hydrogen bond donor and no hydrophobic features. This pharmacophore is inconsistent with the X-ray crystallographic studies and automatically generated pharmacophores. In short, traditional dispensing processes are another important source of error in high-throughput screening that impacts computational and statistical analyses. These findings have far-reaching implications in biological research. PMID:23658723

  1. Computational power and generative capacity of genetic systems.

    PubMed

    Igamberdiev, Abir U; Shklovskiy-Kordi, Nikita E

    2016-01-01

    Semiotic characteristics of genetic sequences are based on the general principles of linguistics formulated by Ferdinand de Saussure, such as the arbitrariness of the sign and the linear nature of the signifier. Besides these semiotic features, which are attributable to the basic structure of the genetic code, the principle of generativity of genetic language is important for understanding biological transformations. The problem of generativity in genetic systems arises from the possibility of different interpretations of genetic texts, and corresponds to what Alexander von Humboldt called "the infinite use of finite means". These interpretations appear in individual development as spatiotemporal sequences of realizations of different textual meanings, as well as in the emergence of hyper-textual statements about the text itself, which underlies the process of biological evolution. These interpretations are accomplished at the level of the readout of genetic texts by the structures defined by Efim Liberman as "the molecular computer of the cell", which includes DNA, RNA and the corresponding enzymes operating with molecular addresses. The molecular computer performs physically manifested mathematical operations and possesses both reading and writing capacities. Generativity paradoxically resides in the biological computational system as the possibility of incorporating meta-statements about the system, and thus establishes the internal capacity for its evolution. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  2. Remembrance of phases past: An autoregressive method for generating realistic atmospheres in simulations

    NASA Astrophysics Data System (ADS)

    Srinath, Srikar; Poyneer, Lisa A.; Rudy, Alexander R.; Ammons, S. M.

    2014-08-01

    The advent of expensive, large-aperture telescopes and complex adaptive optics (AO) systems has strengthened the need for detailed simulation of such systems, from the top of the atmosphere down to the control algorithms. The credibility of any simulation is underpinned by the quality of the atmosphere model used for introducing phase variations into the incident photons. Hitherto, simulations which incorporate wind layers have relied upon phase-screen generation methods that tax the computation and memory capacities of the platforms on which they run. This places limits on the parameters of a simulation, such as exposure time or resolution, and thus compromises its utility. As aperture sizes and fields of view increase, the problem will only get worse. We present an autoregressive method for evolving atmospheric phase that is efficient in its use of computational resources and allows for variability in the power contained in the frozen-flow and stochastic components of the atmosphere. Users have the flexibility of generating atmosphere datacubes in advance of runs, where memory constraints allow, to save on computation time, or of computing the phase at each time step for long exposure times. Preliminary tests of model atmospheres generated using this method show power spectral density and rms phase in accordance with established metrics for Kolmogorov models.
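
    The autoregressive idea can be sketched in a few lines of Python: each Fourier mode of a Kolmogorov-filtered phase screen is updated as an AR(1) process whose complex coefficient encodes frozen-flow translation (its phase) and stochastic decorrelation (its magnitude). The absolute phase scaling, wind value and AR coefficient below are illustrative assumptions rather than the paper's calibrated parameters.

      import numpy as np

      def ar_phase_screens(n=256, wind_pix=(0.5, 0.0), alpha_mag=0.999, n_steps=100, seed=0):
          """Yield successive phase screens evolved mode-by-mode with an AR(1) update."""
          rng = np.random.default_rng(seed)
          fx = np.fft.fftfreq(n)                      # spatial frequencies in cycles/pixel
          kx, ky = np.meshgrid(fx, fx)
          k = np.hypot(kx, ky)
          k[0, 0] = 1.0
          filt = k ** (-11.0 / 6.0)                   # sqrt of a Kolmogorov PSD (scaling omitted)
          filt[0, 0] = 0.0                            # no piston term

          def fourier_noise():
              w = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
              return filt * w

          # AR(1) coefficient per mode: the phase ramp translates the screen by
          # wind_pix pixels per step, while |alpha| < 1 injects stochastic "boiling".
          alpha = alpha_mag * np.exp(-2j * np.pi * (kx * wind_pix[0] + ky * wind_pix[1]))
          state = fourier_noise()
          for _ in range(n_steps):
              state = alpha * state + np.sqrt(1.0 - alpha_mag ** 2) * fourier_noise()
              yield np.fft.ifft2(state).real

      screens = list(ar_phase_screens(n_steps=10))
      print(len(screens), screens[0].shape)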

  3. Get Ready for Generation Next.

    ERIC Educational Resources Information Center

    Wellner, Alison

    1999-01-01

    "Generation Next" are the 68 million people born between 1977 and 1994. They are the first generation that has grown up with such technologies as computers, the Internet, compact disks, and microwaves and they have more education than previous generations. They will have an effect on trainers and training methods in the workplace. (JOW)

  4. The Role of Item Models in Automatic Item Generation

    ERIC Educational Resources Information Center

    Gierl, Mark J.; Lai, Hollis

    2012-01-01

    Automatic item generation represents a relatively new but rapidly evolving research area where cognitive and psychometric theories are used to produce tests that include items generated using computer technology. Automatic item generation requires two steps. First, test development specialists create item models, which are comparable to templates…

  5. AER synthetic generation in hardware for bio-inspired spiking systems

    NASA Astrophysics Data System (ADS)

    Linares-Barranco, Alejandro; Linares-Barranco, Bernabe; Jimenez-Moreno, Gabriel; Civit-Balcells, Anton

    2005-06-01

    Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows for real-time virtual massive connectivity between huge numbers of neurons located on different chips. By exploiting high speed digital communication circuits (with nanosecond timing), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timing) are sampled at low frequencies. Also, neurons generate 'events' according to their activity levels. More active neurons generate more events per unit time, and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. When building multi-chip, multi-layered AER systems it is absolutely necessary to have a computer interface that allows (a) reading AER interchip traffic into the computer and visualizing it on screen, and (b) converting a conventional frame-based video stream in the computer into AER and injecting it at some point of the AER structure. This is necessary for testing and debugging of complex AER systems. This paper addresses the problem of converting, in a computer, a conventional frame-based video stream into the spike-event-based AER representation. There exist several proposed software methods for synthetic generation of AER for bio-inspired systems. This paper presents a hardware implementation for one method, which is based on Linear-Feedback-Shift-Register (LFSR) pseudo-random number generation. The sequence of events generated by this hardware, which follows a Poisson distribution like a biological neuron, has been reconstructed using two AER integrator cells. The error of reconstruction for a set of images that produce different traffic loads of events on the AER bus is used as the evaluation criterion. A VHDL description of the method, which includes the Xilinx PCI Core, has been implemented and tested using a general purpose PCI-AER board. This PCI-AER board has been developed by the authors, and uses a Spartan II 200 FPGA. This system for AER Synthetic Generation is capable of transforming frames of 64x64 pixels, received through a standard computer PCI bus, at a frame rate of 25 frames per second, producing spike events at a peak rate of 10⁷ events per second.
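
    As a hedged software analogue of the hardware scheme described above (a NumPy stand-in rather than the authors' LFSR-based VHDL; all names and rates are illustrative), each pixel is compared against a pseudo-random number in every time slot, so brighter pixels emit proportionally more address events with Poisson-like statistics. Integrating the events, as the AER integrator cells do, recovers an approximation of the frame whose error falls as the number of time slots grows.

    import numpy as np

    def frame_to_aer(frame, n_slots, peak_rate_per_slot=1.0, seed=0):
        """Return a list of (time_slot, row, col) address events for one frame."""
        rng = np.random.default_rng(seed)
        prob = np.clip(frame.astype(float) / frame.max(), 0.0, 1.0) * peak_rate_per_slot
        events = []
        for t in range(n_slots):
            fired = rng.random(frame.shape) < prob          # Bernoulli draw per pixel per slot
            rows, cols = np.nonzero(fired)
            events.extend((t, int(r), int(c)) for r, c in zip(rows, cols))
        return events

    def reconstruct(events, shape, n_slots):
        """Integrate events back into an image, mimicking the AER integrator cells."""
        img = np.zeros(shape)
        for _, r, c in events:
            img[r, c] += 1.0
        return img / n_slots

    frame = np.random.default_rng(1).integers(0, 256, size=(64, 64))
    ev = frame_to_aer(frame, n_slots=100)
    approx = reconstruct(ev, frame.shape, n_slots=100)      # error shrinks as n_slots grows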

  6. Idea Generation in Student Writing: Computational Assessments and Links to Successful Writing

    ERIC Educational Resources Information Center

    Crossley, Scott A.; Muldner, Kasia; McNamara, Danielle S.

    2016-01-01

    Idea generation is an important component of most major theories of writing. However, few studies have linked idea generation in writing samples to assessments of writing quality or examined links between linguistic features in a text and idea generation. This study uses human ratings of idea generation, such as "idea fluency, idea…

  7. Mechanism and computational model for Lyman-α-radiation generation by high-intensity-laser four-wave mixing in Kr-Ar gas

    NASA Astrophysics Data System (ADS)

    Louchev, Oleg A.; Bakule, Pavel; Saito, Norihito; Wada, Satoshi; Yokoyama, Koji; Ishida, Katsuhiko; Iwasaki, Masahiko

    2011-09-01

    We present a theoretical model combined with a computational study of a laser four-wave mixing process under optical discharge in which the non-steady-state four-wave amplitude equations are integrated with the kinetic equations of initial optical discharge and electron avalanche ionization in Kr-Ar gas. The model is validated by earlier experimental data showing strong inhibition of the generation of pulsed, tunable Lyman-α (Ly-α) radiation when using sum-difference frequency mixing of 212.6 nm and tunable infrared radiation (820-850 nm). The rigorous computational approach to the problem reveals the possibility and mechanism of strong auto-oscillations in sum-difference resonant Ly-α generation due to the combined effect of (i) 212.6-nm (2+1)-photon ionization producing initial electrons, followed by (ii) the electron avalanche dominated by 843-nm radiation, and (iii) the final breakdown of the phase matching condition. The model shows that the final efficiency of Ly-α radiation generation can achieve a value of ˜5×10⁻⁴ which is restricted by the total combined absorption of the fundamental and generated radiation.

  8. Computer-Based Linguistic Analysis.

    ERIC Educational Resources Information Center

    Wright, James R.

    Noam Chomsky's transformational-generative grammar model may effectively be translated into an equivalent computer model. Phrase-structure rules and transformations are tested as to their validity and ordering by the computer via the process of random lexical substitution. Errors appearing in the grammar are detected and rectified, and formal…

  9. DREAMS and IMAGE: A Model and Computer Implementation for Concurrent, Life-Cycle Design of Complex Systems

    NASA Technical Reports Server (NTRS)

    Hale, Mark A.; Craig, James I.; Mistree, Farrokh; Schrage, Daniel P.

    1995-01-01

    Computing architectures are being assembled that extend concurrent engineering practices by providing more efficient execution and collaboration on distributed, heterogeneous computing networks. Built on the successes of initial architectures, requirements for a next-generation design computing infrastructure can be developed. These requirements concentrate on those needed by a designer in decision-making processes from product conception to recycling and can be categorized in two areas: design process and design information management. A designer both designs and executes design processes throughout design time to achieve better product and process capabilities while expending fewer resources. In order to accomplish this, information, or more appropriately design knowledge, needs to be adequately managed during product and process decomposition as well as recomposition. A foundation has been laid that captures these requirements in a design architecture called DREAMS (Developing Robust Engineering Analysis Models and Specifications). In addition, a computing infrastructure, called IMAGE (Intelligent Multidisciplinary Aircraft Generation Environment), is being developed that satisfies design requirements defined in DREAMS and incorporates enabling computational technologies.

  10. The HEPiX Virtualisation Working Group: Towards a Grid of Clouds

    NASA Astrophysics Data System (ADS)

    Cass, Tony

    2012-12-01

    The use of virtual machine images, as for example with Cloud services such as Amazon's Elastic Compute Cloud, is attractive for users as they have a guaranteed execution environment, something that cannot today be provided across sites participating in computing grids such as the Worldwide LHC Computing Grid. However, Grid sites often operate within computer security frameworks which preclude the use of remotely generated images. The HEPiX Virtualisation Working Group was set up with the objective of enabling the use of remotely generated virtual machine images at Grid sites and, to this end, has introduced the idea of trusted virtual machine images which are guaranteed to be secure and configurable by sites such that security policy commitments can be met. This paper describes the requirements and details of these trusted virtual machine images and presents a model for their use to facilitate the integration of Grid- and Cloud-based computing environments for High Energy Physics.

  11. Improved look-up table method of computer-generated holograms.

    PubMed

    Wei, Hui; Gong, Guanghong; Li, Ni

    2016-11-10

    Heavy computation load and vast memory requirements are major bottlenecks of computer-generated holograms (CGHs), which are promising and challenging in three-dimensional displays. To solve these problems, an improved look-up table (LUT) method suitable for arbitrarily sampled object points is proposed and implemented on a graphics processing unit (GPU); its reconstructed object quality is consistent with that of the coherent ray-trace (CRT) method. The concept of distance factor is defined, and the distance factors are pre-computed off-line and stored in a look-up table. The results show that while reconstruction quality close to that of the CRT method is obtained, the on-line computation time is dramatically reduced compared with the LUT method on the GPU, and the memory usage is considerably lower than that of the novel-LUT method. Optical experiments are carried out to validate the effectiveness of the proposed method.
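
    A minimal sketch of the look-up-table idea, assuming a paraxial Fresnel point-source fringe as the pre-computed "distance factor" (the wavelength, pitch, and function names are assumptions, not the paper's implementation): the fringe for each quantized depth is computed once off-line, and the on-line step only shifts and accumulates the stored tables, one per object point.

    import numpy as np

    WAVELENGTH = 532e-9
    PITCH = 8e-6              # hologram pixel pitch
    N = 512                   # hologram is N x N pixels

    def fringe_for_depth(z):
        """Off-line step: complex Fresnel fringe of a point source at depth z (oversized table)."""
        x = np.arange(-N, N) * PITCH
        xx, yy = np.meshgrid(x, x, indexing="ij")
        return np.exp(1j * np.pi * (xx**2 + yy**2) / (WAVELENGTH * z))

    def build_lut(depths):
        return {round(z, 6): fringe_for_depth(z) for z in depths}

    def cgh_from_points(points, lut):
        """On-line step: shift and add the stored fringe of each object point."""
        holo = np.zeros((N, N), dtype=complex)
        half = N // 2
        for px, py, z, amp in points:    # px, py: pixel offsets from centre, |px|, |py| <= N//2
            table = lut[round(z, 6)]
            holo += amp * table[half - px: half - px + N, half - py: half - py + N]
        return holo

    lut = build_lut([0.10, 0.12, 0.15])
    points = [(40, -30, 0.10, 1.0), (-80, 10, 0.15, 0.5)]    # arbitrarily sampled object points
    hologram = np.angle(cgh_from_points(points, lut))        # phase-only CGH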

  12. Parallel file system with metadata distributed across partitioned key-value store c

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary; Torres, Aaron

    2017-09-19

    Improved techniques are provided for storing metadata associated with a plurality of sub-files associated with a single shared file in a parallel file system. The shared file is generated by a plurality of applications executing on a plurality of compute nodes. A compute node implements a Parallel Log Structured File System (PLFS) library to store at least one portion of the shared file generated by an application executing on the compute node and metadata for the at least one portion of the shared file on one or more object storage servers. The compute node is also configured to implement a partitioned data store for storing a partition of the metadata for the shared file, wherein the partitioned data store communicates with partitioned data stores on other compute nodes using a message passing interface. The partitioned data store can be implemented, for example, using Multidimensional Data Hashing Indexing Middleware (MDHIM).
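
    A toy sketch of the partitioning idea (not PLFS or MDHIM code; the class and method names are hypothetical): metadata for the shared file is range-partitioned by logical offset across node-local stores, so any node can locate the record describing where a given portion of the file was actually written. In the actual system the partitions live on different compute nodes and communicate over a message passing interface; here everything runs in one process for illustration.

    from bisect import bisect_right
    from dataclasses import dataclass, field
    from typing import Dict, Tuple

    @dataclass
    class MetadataPartition:
        """Metadata held by one node: logical offset -> (object server, physical offset, length)."""
        records: Dict[int, Tuple[str, int, int]] = field(default_factory=dict)

    class PartitionedMetadataStore:
        """Range-partitions the shared file's logical offsets across node-local stores."""

        def __init__(self, n_nodes: int, file_size: int):
            self.bounds = [i * file_size // n_nodes for i in range(1, n_nodes)]
            self.partitions = [MetadataPartition() for _ in range(n_nodes)]

        def _owner(self, logical_offset: int) -> MetadataPartition:
            return self.partitions[bisect_right(self.bounds, logical_offset)]

        def record_write(self, logical_offset, server, physical_offset, length):
            self._owner(logical_offset).records[logical_offset] = (server, physical_offset, length)

        def lookup(self, logical_offset):
            return self._owner(logical_offset).records.get(logical_offset)

    store = PartitionedMetadataStore(n_nodes=4, file_size=1 << 20)
    store.record_write(300_000, server="oss-2", physical_offset=0, length=4096)
    print(store.lookup(300_000))      # answered by the partition owning that offset range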

  13. Comparison of energy expenditure in adolescents when playing new generation and sedentary computer games: cross sectional study

    PubMed Central

    2007-01-01

    Objective To compare the energy expenditure of adolescents when playing sedentary and new generation active computer games. Design Cross sectional comparison of four computer games. Setting Research laboratories. Participants Six boys and five girls aged 13-15 years. Procedure Participants were fitted with a monitoring device validated to predict energy expenditure. They played four computer games for 15 minutes each. One of the games was sedentary (XBOX 360) and the other three were active (Wii Sports). Main outcome measure Predicted energy expenditure, compared using repeated measures analysis of variance. Results Mean (standard deviation) predicted energy expenditure when playing Wii Sports bowling (190.6 (22.2) kJ/kg/min), tennis (202.5 (31.5) kJ/kg/min), and boxing (198.1 (33.9) kJ/kg/min) was significantly greater than when playing sedentary games (125.5 (13.7) kJ/kg/min) (P<0.001). Predicted energy expenditure was at least 65.1 (95% confidence interval 47.3 to 82.9) kJ/kg/min greater when playing active rather than sedentary games. Conclusions Playing new generation active computer games uses significantly more energy than playing sedentary computer games but not as much energy as playing the sport itself. The energy used when playing active Wii Sports games was not of high enough intensity to contribute towards the recommended daily amount of exercise in children. PMID:18156227

  14. Comparison of energy expenditure in adolescents when playing new generation and sedentary computer games: cross sectional study.

    PubMed

    Graves, Lee; Stratton, Gareth; Ridgers, N D; Cable, N T

    2007-12-22

    To compare the energy expenditure of adolescents when playing sedentary and new generation active computer games. Cross sectional comparison of four computer games. Research laboratories. Six boys and five girls aged 13-15 years. Procedure Participants were fitted with a monitoring device validated to predict energy expenditure. They played four computer games for 15 minutes each. One of the games was sedentary (XBOX 360) and the other three were active (Wii Sports). Predicted energy expenditure, compared using repeated measures analysis of variance. Mean (standard deviation) predicted energy expenditure when playing Wii Sports bowling (190.6 (22.2) kJ/kg/min), tennis (202.5 (31.5) kJ/kg/min), and boxing (198.1 (33.9) kJ/kg/min) was significantly greater than when playing sedentary games (125.5 (13.7) kJ/kg/min) (P<0.001). Predicted energy expenditure was at least 65.1 (95% confidence interval 47.3 to 82.9) kJ/kg/min greater when playing active rather than sedentary games. Playing new generation active computer games uses significantly more energy than playing sedentary computer games but not as much energy as playing the sport itself. The energy used when playing active Wii Sports games was not of high enough intensity to contribute towards the recommended daily amount of exercise in children.

  15. Generation of nanosecond neutron pulses in vacuum accelerating tubes

    NASA Astrophysics Data System (ADS)

    Didenko, A. N.; Shikanov, A. E.; Rashchikov, V. I.; Ryzhkov, V. I.; Shatokhin, V. L.

    2014-06-01

    The generation of neutron pulses with a duration of 1-100 ns using small vacuum accelerating tubes is considered. Two physical models of acceleration of short deuteron bunches in pulse neutron generators are described. The dependences of an instantaneous neutron flux in accelerating tubes on the parameters of pulse neutron generators are obtained using computer simulation. The results of experimental investigation of short-pulse neutron generators based on the accelerating tube with a vacuum-arc deuteron source, connected in the circuit with a discharge peaker, and an accelerating tube with a laser deuteron source, connected according to the Arkad'ev-Marx circuit, are given. In the experiments, the neutron yield per pulse reached 10⁷ for a pulse duration of 10-100 ns. The resultant experimental data are in satisfactory agreement with the results of computer simulation.

  16. Strategic Computing. New-Generation Computing Technology: A Strategic Plan for Its Development and Application to Critical Problems in Defense

    DTIC Science & Technology

    1983-10-28

    Computing. By seizing an opportunity to leverage recent advances in artificial intelligence, computer science, and microelectronics, the Agency plans...occurred in many separated areas of artificial intelligence, computer science, and microelectronics. Advances in "expert system" technology now...and expert knowledge o Advances in Artificial Intelligence: Mechanization of speech recognition, vision, and natural language understanding. o

  17. Computer Based Education.

    ERIC Educational Resources Information Center

    Fauley, Franz E.

    1980-01-01

    A case study of what one company did to increase the productivity of its sales force and generate cost savings by using computer-assisted instruction to teach salespeople at regional offices. (Editor)

  18. A distributed system for fast alignment of next-generation sequencing data.

    PubMed

    Srimani, Jaydeep K; Wu, Po-Yen; Phan, John H; Wang, May D

    2010-12-01

    We developed a scalable distributed computing system using the Berkeley Open Interface for Network Computing (BOINC) to align next-generation sequencing (NGS) data quickly and accurately. NGS technology is emerging as a promising platform for gene expression analysis due to its high sensitivity compared to traditional genomic microarray technology. However, despite the benefits, NGS datasets can be prohibitively large, requiring significant computing resources to obtain sequence alignment results. Moreover, as the data and alignment algorithms become more prevalent, it will become necessary to examine the effect of the multitude of alignment parameters on various NGS systems. We validate the distributed software system by (1) computing simple timing results to show the speed-up gained by using multiple computers, (2) optimizing alignment parameters using simulated NGS data, and (3) computing NGS expression levels for a single biological sample using optimal parameters and comparing these expression levels to those of a microarray sample. Results indicate that the distributed alignment system achieves an approximately linear speed-up and correctly distributes sequence data to and gathers alignment results from multiple compute clients.
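
    A hedged stand-in for the work-distribution pattern described above, using Python multiprocessing in place of BOINC and a toy exact-match "aligner" in place of a real NGS aligner (the reference, reads, and function names are illustrative): reads are split into independent work units, processed in parallel, and the per-unit results are merged, which is why a near-linear speed-up is expected.

    from multiprocessing import Pool

    REFERENCE = "ACGTACGTTAGCACGTACGGAT" * 50     # stand-in for a reference sequence

    def align_chunk(reads):
        """Toy aligner: first exact-match position of each read in the reference (or -1)."""
        return {read: REFERENCE.find(read) for read in reads}

    def distribute(reads, n_workers=4):
        chunks = [reads[i::n_workers] for i in range(n_workers)]    # one work unit per worker
        with Pool(n_workers) as pool:
            partial = pool.map(align_chunk, chunks)                 # align in parallel
        merged = {}
        for result in partial:
            merged.update(result)                                   # gather per-unit results
        return merged

    if __name__ == "__main__":
        reads = [REFERENCE[i:i + 12] for i in range(0, 600, 7)]
        print(len(distribute(reads)), "distinct reads aligned")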

  19. Low cost, high performance processing of single particle cryo-electron microscopy data in the cloud

    PubMed Central

    Cianfrocco, Michael A; Leschziner, Andres E

    2015-01-01

    The advent of a new generation of electron microscopes and direct electron detectors has realized the potential of single particle cryo-electron microscopy (cryo-EM) as a technique to generate high-resolution structures. Calculating these structures requires high performance computing clusters, a resource that may be limiting to many likely cryo-EM users. To address this limitation and facilitate the spread of cryo-EM, we developed a publicly available ‘off-the-shelf’ computing environment on Amazon's elastic cloud computing infrastructure. This environment provides users with single particle cryo-EM software packages and the ability to create computing clusters with 16–480+ CPUs. We tested our computing environment using a publicly available 80S yeast ribosome dataset and estimate that laboratories could determine high-resolution cryo-EM structures for $50 to $1500 per structure within a timeframe comparable to local clusters. Our analysis shows that Amazon's cloud computing environment may offer a viable computing environment for cryo-EM. DOI: http://dx.doi.org/10.7554/eLife.06664.001 PMID:25955969

  20. Trends in computer hardware and software.

    PubMed

    Frankenfeld, F M

    1993-04-01

    Previously identified and current trends in the development of computer systems and in the use of computers for health care applications are reviewed. Trends identified in a 1982 article were increasing miniaturization and archival ability, increasing software costs, increasing software independence, user empowerment through new software technologies, shorter computer-system life cycles, and more rapid development and support of pharmaceutical services. Most of these trends continue today. Current trends in hardware and software include the increasing use of reduced instruction-set computing, migration to the UNIX operating system, the development of large software libraries, microprocessor-based smart terminals that allow remote validation of data, speech synthesis and recognition, application generators, fourth-generation languages, computer-aided software engineering, object-oriented technologies, and artificial intelligence. Current trends specific to pharmacy and hospitals are the withdrawal of vendors of hospital information systems from the pharmacy market, improved linkage of information systems within hospitals, and increased regulation by government. The computer industry and its products continue to undergo dynamic change. Software development continues to lag behind hardware, and its high cost is offsetting the savings provided by hardware.

  1. Poster — Thur Eve — 74: Distributed, asynchronous, reactive dosimetric and outcomes analysis using DICOMautomaton

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, Haley; BC Cancer Agency, Surrey, B.C.; BC Cancer Agency, Vancouver, B.C.

    2014-08-15

    Many have speculated about the future of computational technology in clinical radiation oncology. It has been advocated that the next generation of computational infrastructure will improve on the current generation by incorporating richer aspects of automation, more heavily and seamlessly featuring distributed and parallel computation, and providing more flexibility toward aggregate data analysis. In this report we describe how a recently created — but currently existing — analysis framework (DICOMautomaton) incorporates these aspects. DICOMautomaton supports a variety of use cases but is especially suited for dosimetric outcomes correlation analysis, investigation and comparison of radiotherapy treatment efficacy, and dose-volume computation. We describe: how it overcomes computational bottlenecks by distributing workload across a network of machines; how modern, asynchronous computational techniques are used to reduce blocking and avoid unnecessary computation; and how issues of out-of-date data are addressed using reactive programming techniques and data dependency chains. We describe internal architecture of the software and give a detailed demonstration of how DICOMautomaton could be used to search for correlations between dosimetric and outcomes data.

  2. PREPARING FOR EXASCALE: ORNL Leadership Computing Application Requirements and Strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joubert, Wayne; Kothe, Douglas B; Nam, Hai Ah

    2009-12-01

    In 2009 the Oak Ridge Leadership Computing Facility (OLCF), a U.S. Department of Energy (DOE) facility at the Oak Ridge National Laboratory (ORNL) National Center for Computational Sciences (NCCS), elicited petascale computational science requirements from leading computational scientists in the international science community. This effort targeted science teams whose projects received large computer allocation awards on OLCF systems. A clear finding of this process was that in order to reach their science goals over the next several years, multiple projects will require computational resources in excess of an order of magnitude more powerful than those currently available. Additionally, for the longer term, next-generation science will require computing platforms of exascale capability in order to reach DOE science objectives over the next decade. It is generally recognized that achieving exascale in the proposed time frame will require disruptive changes in computer hardware and software. Processor hardware will become necessarily heterogeneous and will include accelerator technologies. Software must undergo the concomitant changes needed to extract the available performance from this heterogeneous hardware. This disruption portends to be substantial, not unlike the change to the message passing paradigm in the computational science community over 20 years ago. Since technological disruptions take time to assimilate, we must aggressively embark on this course of change now, to ensure that science applications and their underlying programming models are mature and ready when exascale computing arrives. This includes initiation of application readiness efforts to adapt existing codes to heterogeneous architectures, support of relevant software tools, and procurement of next-generation hardware testbeds for porting and testing codes. The 2009 OLCF requirements process identified numerous actions necessary to meet this challenge: (1) Hardware capabilities must be advanced on multiple fronts, including peak flops, node memory capacity, interconnect latency, interconnect bandwidth, and memory bandwidth. (2) Effective parallel programming interfaces must be developed to exploit the power of emerging hardware. (3) Science application teams must now begin to adapt and reformulate application codes to the new hardware and software, typified by hierarchical and disparate layers of compute, memory and concurrency. (4) Algorithm research must be realigned to exploit this hierarchy. (5) When possible, mathematical libraries must be used to encapsulate the required operations in an efficient and useful way. (6) Software tools must be developed to make the new hardware more usable. (7) Science application software must be improved to cope with the increasing complexity of computing systems. (8) Data management efforts must be readied for the larger quantities of data generated by larger, more accurate science models. Requirements elicitation, analysis, validation, and management comprise a difficult and inexact process, particularly in periods of technological change. Nonetheless, the OLCF requirements modeling process is becoming increasingly quantitative and actionable, as the process becomes more developed and mature, and the process this year has identified clear and concrete steps to be taken.
This report discloses (1) the fundamental science case driving the need for the next generation of computer hardware, (2) application usage trends that illustrate the science need, (3) application performance characteristics that drive the need for increased hardware capabilities, (4) resource and process requirements that make the development and deployment of science applications on next-generation hardware successful, and (5) summary recommendations for the required next steps within the computer and computational science communities.

  3. A method for the computational modeling of the physics of heart murmurs

    NASA Astrophysics Data System (ADS)

    Seo, Jung Hee; Bakhshaee, Hani; Garreau, Guillaume; Zhu, Chi; Andreou, Andreas; Thompson, William R.; Mittal, Rajat

    2017-05-01

    A computational method for direct simulation of the generation and propagation of blood flow induced sounds is proposed. This computational hemoacoustic method is based on the immersed boundary approach and employs high-order finite difference methods to resolve wave propagation and scattering accurately. The current method employs a two-step, one-way coupled approach for the sound generation and its propagation through the tissue. The blood flow is simulated by solving the incompressible Navier-Stokes equations using the sharp-interface immersed boundary method, and the equations governing the generation and propagation of the three-dimensional elastic wave corresponding to the murmur are resolved with a high-order, immersed-boundary-based finite-difference method in the time domain. The proposed method is applied to a model problem of aortic stenosis murmur and the simulation results are verified and validated by comparing with known solutions as well as experimental measurements. The murmur propagation in a realistic model of a human thorax is also simulated by using the computational method. The roles of hemodynamics and elastic wave propagation on the murmur are discussed based on the simulation results.

  4. Fast precalculated triangular mesh algorithm for 3D binary computer-generated holograms.

    PubMed

    Yang, Fan; Kaczorowski, Andrzej; Wilkinson, Tim D

    2014-12-10

    A new method for constructing computer-generated holograms using a precalculated triangular mesh is presented. The speed of calculation can be increased dramatically by exploiting both the precalculated base triangle and GPU parallel computing. Unlike algorithms using point-based sources, this method can reconstruct a more vivid 3D object instead of a "hollow image." In addition, there is no need to do a fast Fourier transform for each 3D element every time. A ferroelectric liquid crystal spatial light modulator is used to display the binary hologram within our experiment and the hologram of a base right triangle is produced by utilizing just a one-step Fourier transform in the 2D case, which can be expanded to the 3D case by multiplying by a suitable Fresnel phase plane. All 3D holograms generated in this paper are based on Fresnel propagation; thus, the Fresnel plane is treated as a vital element in producing the hologram. A GeForce GTX 770 graphics card with 2 GB memory is used to achieve parallel computing.
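
    A brief sketch of the "one-step Fourier transform plus Fresnel phase plane" idea under my own simplifications (a filled 2-D base triangle, illustrative wavelength and pixel pitch, and a naive sign-based binarization for a binary phase SLM); it is not the authors' GPU implementation.

    import numpy as np

    N, PITCH, WL = 512, 8e-6, 532e-9

    def filled_triangle(n):
        """A filled base triangle drawn on an n x n grid."""
        y, x = np.mgrid[0:n, 0:n]
        return ((x > y) & (x + y < n)).astype(float)

    def fresnel_phase_plane(z):
        coords = (np.arange(N) - N // 2) * PITCH
        xx, yy = np.meshgrid(coords, coords, indexing="ij")
        return np.exp(1j * np.pi * (xx**2 + yy**2) / (WL * z))

    def binary_hologram(field2d, z):
        spectrum = np.fft.fftshift(np.fft.fft2(field2d))     # one-step Fourier transform
        hologram_field = spectrum * fresnel_phase_plane(z)   # quadratic phase shifts the plane to depth z
        return (np.angle(hologram_field) > 0).astype(np.uint8)   # 0/pi binary phase pattern

    holo = binary_hologram(filled_triangle(N), z=0.25)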

  5. Structural variation discovery in the cancer genome using next generation sequencing: Computational solutions and perspectives

    PubMed Central

    Liu, Biao; Conroy, Jeffrey M.; Morrison, Carl D.; Odunsi, Adekunle O.; Qin, Maochun; Wei, Lei; Trump, Donald L.; Johnson, Candace S.; Liu, Song; Wang, Jianmin

    2015-01-01

    Somatic Structural Variations (SVs) are a complex collection of chromosomal mutations that could directly contribute to carcinogenesis. Next Generation Sequencing (NGS) technology has emerged as the primary means of interrogating the SVs of the cancer genome in recent investigations. Sophisticated computational methods are required to accurately identify the SV events and delineate their breakpoints from the massive amounts of reads generated by a NGS experiment. In this review, we provide an overview of current analytic tools used for SV detection in NGS-based cancer studies. We summarize the features of common SV groups and the primary types of NGS signatures that can be used in SV detection methods. We discuss the principles and key similarities and differences of existing computational programs and comment on unresolved issues related to this research field. The aim of this article is to provide a practical guide of relevant concepts, computational methods, software tools and important factors for analyzing and interpreting NGS data for the detection of SVs in the cancer genome. PMID:25849937

  6. The 3D Euler solutions using automated Cartesian grid generation

    NASA Technical Reports Server (NTRS)

    Melton, John E.; Enomoto, Francis Y.; Berger, Marsha J.

    1993-01-01

    Viewgraphs on 3-dimensional Euler solutions using automated Cartesian grid generation are presented. Topics covered include: computational fluid dynamics (CFD) and the design cycle; Cartesian grid strategy; structured body fit; grid generation; prolate spheroid; and ONERA M6 wing.

  7. Concept Learning through Image Processing.

    ERIC Educational Resources Information Center

    Cifuentes, Lauren; Yi-Chuan, Jane Hsieh

    This study explored computer-based image processing as a study strategy for middle school students' science concept learning. Specifically, the research examined the effects of computer graphics generation on science concept learning and the impact of using computer graphics to show interrelationships among concepts during study time. The 87…

  8. Computer tomography of the neurocranium.

    PubMed

    Liliequist, B; Forssell, A

    1976-07-01

    The experience with computer tomography of the neurocranium in 300 patients submitted for computer tomography of the brain is reported. The more appropriate projections which may be obtained with the second generation of scanners in combination with an elaborated reconstruction technique seem to constitute a replacement of conventional skull films.

  9. Exploring the Issues: Humans and Computers.

    ERIC Educational Resources Information Center

    Walsh, Huber M.

    This presentation addresses three basic social issues generated by the computer revolution. The first section, "Money Matters," focuses on the economic effects of computer technology. These include the replacement of workers by fully automated machines, the threat to professionals posed by expanded access to specialized information, and the…

  10. Workshop on Aircraft Surface Representation for Aerodynamic Computation

    NASA Technical Reports Server (NTRS)

    Gregory, T. J. (Editor); Ashbaugh, J. (Editor)

    1980-01-01

    Papers and discussions on surface representation and its integration with aerodynamics, computers, graphics, wind tunnel model fabrication, and flow field grid generation are presented. Surface definition is emphasized.

  11. Generalized pipeline for preview and rendering of synthetic holograms

    NASA Astrophysics Data System (ADS)

    Pappu, Ravikanth; Sparrell, Carlton J.; Underkoffler, John S.; Kropp, Adam B.; Chen, Benjie; Plesniak, Wendy J.

    1997-04-01

    We describe a general pipeline for the computation and display of either fully-computed holograms or holographic stereograms using the same 3D database. A rendering previewer on a Silicon Graphics Onyx allows a user to specify viewing geometry, database transformations, and scene lighting. The previewer then generates one of two descriptions of the object--a series of perspective views or a polygonal model--which is then used by a fringe rendering engine to compute fringes specific to hologram type. The images are viewed on the second generation MIT Holographic Video System. This allows a viewer to compare holographic stereograms with fully-computed holograms originating from the same database and comes closer to the goal of a single pipeline being able to display the same data in different formats.

  12. Experimental quantum computing without entanglement.

    PubMed

    Lanyon, B P; Barbieri, M; Almeida, M P; White, A G

    2008-11-14

    Deterministic quantum computation with one pure qubit (DQC1) is an efficient model of computation that uses highly mixed states. Unlike pure-state models, its power is not derived from the generation of a large amount of entanglement. Instead it has been proposed that other nonclassical correlations are responsible for the computational speedup, and that these can be captured by the quantum discord. In this Letter we implement DQC1 in an all-optical architecture, and experimentally observe the generated correlations. We find no entanglement, but large amounts of quantum discord-except in three cases where an efficient classical simulation is always possible. Our results show that even fully separable, highly mixed, states can contain intrinsically quantum mechanical correlations and that these could offer a valuable resource for quantum information technologies.
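
    A small numerical illustration of the DQC1 model itself (a density-matrix toy of my own, not the all-optical experiment reported above): one pure control qubit plus a maximally mixed register estimates the normalized trace of a unitary, because after the circuit the control qubit's coherences carry Tr(U)/2^n.

    import numpy as np

    def dqc1_trace_estimate(U):
        dim = U.shape[0]
        # Initial state: |0><0| on the control, maximally mixed register.
        rho = np.kron(np.array([[1, 0], [0, 0]], dtype=complex), np.eye(dim) / dim)
        H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
        had = np.kron(H, np.eye(dim))
        # Controlled-U: identity when the control is |0>, U when it is |1>.
        cu = np.block([[np.eye(dim), np.zeros((dim, dim))],
                       [np.zeros((dim, dim)), U]])
        rho = cu @ had @ rho @ had.conj().T @ cu.conj().T
        # Reduced state of the control qubit; its off-diagonals hold Tr(U)/dim.
        ctrl = np.trace(rho.reshape(2, dim, 2, dim), axis1=1, axis2=3)
        sx = np.array([[0, 1], [1, 0]], dtype=complex)
        sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
        return np.trace(ctrl @ sx).real + 1j * np.trace(ctrl @ sy).real

    rng = np.random.default_rng(0)
    A = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
    U, _ = np.linalg.qr(A)                               # a random 3-qubit unitary
    print(dqc1_trace_estimate(U), np.trace(U) / 8)       # the two values agree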

  13. Aspects of Unstructured Grids and Finite-Volume Solvers for the Euler and Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    1992-01-01

    One of the major achievements in engineering science has been the development of computer algorithms for solving nonlinear differential equations such as the Navier-Stokes equations. In the past, limited computer resources have motivated the development of efficient numerical schemes in computational fluid dynamics (CFD) utilizing structured meshes. The use of structured meshes greatly simplifies the implementation of CFD algorithms on conventional computers. Unstructured grids on the other hand offer an alternative to modeling complex geometries. Unstructured meshes have irregular connectivity and usually contain combinations of triangles, quadrilaterals, tetrahedra, and hexahedra. The generation and use of unstructured grids poses new challenges in CFD. The purpose of this note is to present recent developments in the unstructured grid generation and flow solution technology.

  14. PIFCGT: A PIF autopilot design program for general aviation aircraft

    NASA Technical Reports Server (NTRS)

    Broussard, J. R.

    1983-01-01

    This report documents the PIFCGT computer program. Written in FORTRAN, PIFCGT is a computer design aid for determining Proportional-Integral-Filter (PIF) control laws for aircraft autopilots implemented with a Command Generator Tracker (CGT). The program uses Linear-Quadratic-Regulator synthesis algorithms to determine feedback gains, and includes software to solve the feedforward matrix equation which is useful in determining the command generator tracker feedforward gains. The program accepts aerodynamic stability derivatives and computes the corresponding aerodynamic linear model. The nine autopilot modes that can be designed include four maneuver modes (ROLL SEL, PITCH SEL, HDG SEL, ALT SEL), four final approach modes (APR GS, APR LOCI, APR LOCR, APR LOCP), and a BETA HOLD mode. The program has been compiled and executed on a CDC computer.

  15. Computer Model Of Fragmentation Of Atomic Nuclei

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Townsend, Lawrence W.; Tripathi, Ram K.; Norbury, John W.; Khan, Ferdous; Badavi, Francis F.

    1995-01-01

    High Charge and Energy Semiempirical Nuclear Fragmentation Model (HZEFRG1) computer program developed to be computationally efficient, user-friendly, physics-based program for generating data bases on fragmentation of atomic nuclei. Data bases generated used in calculations pertaining to such radiation-transport applications as shielding against radiation in outer space, radiation dosimetry in outer space, cancer therapy in laboratories with beams of heavy ions, and simulation studies for designing detectors for experiments in nuclear physics. Provides cross sections for production of individual elements and isotopes in breakups of high-energy heavy ions by combined nuclear and Coulomb fields of interacting nuclei. Written in ANSI FORTRAN 77.

  16. Computer interface system

    NASA Technical Reports Server (NTRS)

    Anderson, T. O. (Inventor)

    1976-01-01

    An interface logic circuit permitting the transfer of information between two computers having asynchronous clocks is disclosed. The information transfer involves utilization of control signals (including request, return-response, ready) to generate properly timed data strobe signals. Noise problems are avoided because each control signal, upon receipt, is verified by at least two clock pulses at the receiving computer. If control signals are verified, a data strobe pulse is generated to accomplish a data transfer. Once initiated, the data strobe signal is properly completed independently of signal disturbances in the control signal initiating the data strobe signal. Completion of the data strobe signal is announced by automatic turn-off of a return-response control signal.

  17. Laser Signature Prediction Using The VALUE Computer Program

    NASA Astrophysics Data System (ADS)

    Akerman, Alexander; Hoffman, George A.; Patton, Ronald

    1989-09-01

    A variety of enhancements are being made to the 1976-vintage LASERX computer code. These include: - Surface characterization with BRDF tabular data - Specular reflection from transparent surfaces - Generation of glint direction maps - Generation of relative range imagery - Interface to the LOWTRAN atmospheric transmission code - Interface to the LEOPS laser sensor code - User friendly menu prompting for easy setup Versions of VALUE have been written for both VAX/VMS and PC/DOS computer environments. Outputs have also been revised to be user friendly and include tables, plots, and images for (1) intensity, (2) cross section, (3) reflectance, (4) relative range, (5) region type, and (6) silhouette.

  18. Grand Challenges: High Performance Computing and Communications. The FY 1992 U.S. Research and Development Program.

    ERIC Educational Resources Information Center

    Federal Coordinating Council for Science, Engineering and Technology, Washington, DC.

    This report presents a review of the High Performance Computing and Communications (HPCC) Program, which has as its goal the acceleration of the commercial availability and utilization of the next generation of high performance computers and networks in order to: (1) extend U.S. technological leadership in high performance computing and computer…

  19. Using High Performance Computing to Examine the Processes of Neurogenesis Underlying Pattern Separation/Completion of Episodic Information.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aimone, James Bradley; Betty, Rita

    Using High Performance Computing to Examine the Processes of Neurogenesis Underlying Pattern Separation/Completion of Episodic Information - Sandia researchers developed novel methods and metrics for studying the computational function of neurogenesis, thus generating substantial impact to the neuroscience and neural computing communities. This work could benefit applications in machine learning and other analysis activities.

  20. NASA Ames potential flow analysis (POTFAN) geometry program (POTGEM), version 1

    NASA Technical Reports Server (NTRS)

    Medan, R. T.; Bullock, R. B.

    1976-01-01

    A computer program known as POTGEM is reported which has been developed as an independent segment of a three-dimensional linearized, potential flow analysis system and which is used to generate a panel point description of arbitrary, three-dimensional bodies from convenient engineering descriptions consisting of equations and/or tables. Due to the independent, modular nature of the program, it may be used to generate corner points for other computer programs.

  1. Development of Anthropometric Analogous Headforms. Phase 1.

    DTIC Science & Technology

    1994-10-31

    shown in figure 5. This surface mesh can then be transformed into polygon faces that are able to be rendered by the AutoCAD rendering tools. Rendering of... computer-generated surfaces. The material removal techniques require the programming of the tool path of the cutter and in some cases requires specialized... tooling. Tool path programs are available to transfer the computer-generated surface into actual paths of the cutting tool. In cases where the

  2. Reduced circuit implementation of encoder and syndrome generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trager, Barry M; Winograd, Shmuel

    An error correction method and system includes an Encoder and Syndrome-generator that operate in parallel to reduce the amount of circuitry used to compute check symbols and syndromes for error correcting codes. The system and method computes the contributions to the syndromes and check symbols 1 bit at a time instead of 1 symbol at a time. As a result, the even syndromes can be computed as powers of the odd syndromes. Further, the system assigns symbol addresses so that there are, for an example GF(2⁸) which has 72 symbols, three (3) blocks of addresses which differ by a cube root of unity to allow the data symbols to be combined for reducing size and complexity of odd syndrome circuits. Further, the implementation circuit for generating check symbols is derived from syndrome circuit using the inverse of the part of the syndrome matrix for check locations.
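
    A hedged illustration of the algebra behind computing syndrome contributions one bit at a time (not the patented circuit; the primitive polynomial and data length below are illustrative): when the processed coefficients lie in GF(2), squaring distributes over sums in a field of characteristic 2, so the even syndromes are squares of the odd ones, S_2j = S_j^2.

    import random

    # GF(2^8) exp/log tables over the common primitive polynomial x^8+x^4+x^3+x^2+1 (0x11d).
    PRIM = 0x11d
    EXP, LOG = [0] * 512, [0] * 256
    x = 1
    for i in range(255):
        EXP[i], LOG[x] = x, i
        x <<= 1
        if x & 0x100:
            x ^= PRIM
    for i in range(255, 512):
        EXP[i] = EXP[i - 255]

    def gf_mul(a, b):
        return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

    def alpha_pow(n):
        return EXP[n % 255]                     # alpha^n, with alpha a primitive element

    def syndrome(bits, j):
        """S_j = sum_i b_i * alpha^(i*j), accumulated one bit at a time."""
        s = 0
        for i, b in enumerate(bits):
            if b:
                s ^= alpha_pow(i * j)
        return s

    random.seed(0)
    bits = [random.randint(0, 1) for _ in range(64)]
    s1, s2, s3, s6 = (syndrome(bits, j) for j in (1, 2, 3, 6))
    assert s2 == gf_mul(s1, s1)                 # S_2 = S_1^2
    assert s6 == gf_mul(s3, s3)                 # S_6 = S_3^2: even syndromes from odd ones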

  3. Multiple exciton generation in chiral carbon nanotubes: Density functional theory based computation

    NASA Astrophysics Data System (ADS)

    Kryjevski, Andrei; Mihaylov, Deyan; Kilina, Svetlana; Kilin, Dmitri

    2017-10-01

    We use a Boltzmann transport equation (BE) to study time evolution of a photo-excited state in a nanoparticle including phonon-mediated exciton relaxation and the multiple exciton generation (MEG) processes, such as exciton-to-biexciton multiplication and biexciton-to-exciton recombination. BE collision integrals are computed using Kadanoff-Baym-Keldysh many-body perturbation theory based on density functional theory simulations, including exciton effects. We compute internal quantum efficiency (QE), which is the number of excitons generated from an absorbed photon in the course of the relaxation. We apply this approach to chiral single-wall carbon nanotubes (SWCNTs), such as (6,2) and (6,5). We predict efficient MEG in the (6,2) and (6,5) SWCNTs within the solar spectrum range starting at the 2Eg energy threshold and with QE reaching ˜1.6 at about 3Eg, where Eg is the electronic gap.

  4. Accuracy and Landmark Error Calculation Using Cone-Beam Computed Tomography–Generated Cephalograms

    PubMed Central

    Grauer, Dan; Cevidanes, Lucia S. H.; Styner, Martin A.; Heulfe, Inam; Harmon, Eric T.; Zhu, Hongtu; Proffit, William R.

    2010-01-01

    Objective To evaluate systematic differences in landmark position between cone-beam computed tomography (CBCT)–generated cephalograms and conventional digital cephalograms and to estimate how much variability should be taken into account when both modalities are used within the same longitudinal study. Materials and Methods Landmarks on homologous cone-beam computed tomographic–generated cephalograms and conventional digital cephalograms of 46 patients were digitized, registered, and compared via the Hotelling T² test. Results There were no systematic differences between modalities in the position of most landmarks. Three landmarks showed statistically significant differences but did not reach clinical significance. A method for error calculation while combining both modalities in the same individual is presented. Conclusion In a longitudinal follow-up for assessment of treatment outcomes and growth of one individual, the error due to the combination of the two modalities might be larger than previously estimated. PMID:19905853

  5. Real-time fuzzy inference based robot path planning

    NASA Technical Reports Server (NTRS)

    Pacini, Peter J.; Teichrow, Jon S.

    1990-01-01

    This project addresses the problem of adaptive trajectory generation for a robot arm. Conventional trajectory generation involves computing a path in real time to minimize a performance measure such as expended energy. This method can be computationally intensive, and it may yield poor results if the trajectory is weakly constrained. Typically some implicit constraints are known, but cannot be encoded analytically. The alternative approach used here is to formulate domain-specific knowledge, including implicit and ill-defined constraints, in terms of fuzzy rules. These rules utilize linguistic terms to relate input variables to output variables. Since the fuzzy rulebase is determined off-line, only high-level, computationally light processing is required in real time. Potential applications for adaptive trajectory generation include missile guidance and various sophisticated robot control tasks, such as automotive assembly, high speed electrical parts insertion, stepper alignment, and motion control for high speed parcel transfer systems.
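
    A minimal Mamdani-style sketch of the approach described above, with a toy rulebase of my own (the linguistic variables, membership functions, and rules are assumptions, not the project's): the rulebase is fixed off-line, and the run-time work reduces to a few membership evaluations, rule clipping, and a centroid computation.

    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function rising from a, peaking at b, falling to c."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    SPEED = np.linspace(0.0, 1.0, 101)           # output universe: normalized joint speed
    SPEED_SETS = {"slow": tri(SPEED, -0.5, 0.0, 0.5),
                  "medium": tri(SPEED, 0.2, 0.5, 0.8),
                  "fast": tri(SPEED, 0.5, 1.0, 1.5)}

    def commanded_speed(dist_to_goal, obstacle_prox):
        # Input membership degrees (both inputs normalized to [0, 1]).
        far, near = tri(dist_to_goal, 0.4, 1.0, 1.6), tri(dist_to_goal, -0.6, 0.0, 0.6)
        clear, crowded = tri(obstacle_prox, -0.6, 0.0, 0.6), tri(obstacle_prox, 0.4, 1.0, 1.6)
        # Rulebase determined off-line: IF far AND clear THEN fast, and so on.
        rules = [(min(far, clear), "fast"), (min(far, crowded), "medium"),
                 (min(near, clear), "medium"), (min(near, crowded), "slow")]
        agg = np.zeros_like(SPEED)
        for strength, label in rules:
            agg = np.maximum(agg, np.minimum(strength, SPEED_SETS[label]))   # clip and aggregate
        return float(np.sum(agg * SPEED) / (np.sum(agg) + 1e-12))            # centroid defuzzification

    print(commanded_speed(dist_to_goal=0.9, obstacle_prox=0.1))   # mostly "fast": high speed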

  6. A MATLAB based 3D modeling and inversion code for MT data

    NASA Astrophysics Data System (ADS)

    Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.

    2017-07-01

    The development of a MATLAB based computer code, AP3DMT, for modeling and inversion of 3D Magnetotelluric (MT) data is presented. The code comprises two independent components: grid generator code and modeling/inversion code. The grid generator code performs model discretization and acts as an interface by generating various I/O files. The inversion code performs core computations in modular form - forward modeling, data functionals, sensitivity computations and regularization. These modules can be readily extended to other similar inverse problems like Controlled-Source EM (CSEM). The modular structure of the code provides a framework useful for implementation of new applications and inversion algorithms. The use of MATLAB and its libraries makes it more compact and user friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities the results of inversion for two complex models are presented.

  7. Production and characterization of pure cryogenic inertial fusion targets

    NASA Astrophysics Data System (ADS)

    Boyd, B. A.; Kamerman, G. W.

    An experimental cryogenic inertial fusion target generator and two optical techniques for automated target inspection are described. The generator produces 100 microns diameter solid hydrogen spheres at a rate compatible with fueling requirements of conceptual inertial fusion power plants. A jet of liquified hydrogen is disrupted into droplets by an ultrasonically excited nozzle. The droplets solidify into microspheres while falling through a chamber maintained below the hydrogen triple point pressure. Stable operation of the generator has been demonstrated for up to three hours. The optical inspection techniques are computer aided photomicrography and coarse diffraction pattern analysis (CDPA). The photomicrography system uses a conventional microscope coupled to a computer by a solid state camera and digital image memory. The computer enhances the stored image and performs feature extraction to determine pellet parameters. The CDPA technique uses Fourier transform optics and a special detector array to perform optical processing of a target image.

  8. Radiative properties of flame-generated soot

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koeylue, U.O.; Faeth, G.M.

    1993-05-01

    Approximate methods for estimating the optical properties of flame-generated soot aggregates were evaluated using existing computer simulations and measurements in the visible and near-infrared portions of the spectrum. The following approximate methods were evaluated for both individual aggregates and polydisperse aggregate populations: the Rayleigh scattering approximation, Mie scattering for an equivalent sphere, and Rayleigh-Debye-Gans (R-D-G) scattering for both given and fractal aggregates. Results of computer simulations involved both prescribed aggregate geometry and numerically generated aggregates by cluster-cluster aggregation; multiple scattering was considered exactly using the mean-field approximation, and ignored using the R-D-G approximation. Measurements involved the angular scattering properties of soot in the postflame regions of both premixed and nonpremixed flames. The results show that available computer simulations and measurements of soot aggregate optical properties are not adequate to provide a definitive evaluation of the approximate prediction methods. 40 refs., 7 figs., 1 tab.

  9. Use of Computer-Generated Holograms in Security Hologram Applications

    NASA Astrophysics Data System (ADS)

    Bulanovs, A.; Bakanas, R.

    2016-10-01

    The article discusses the use of computer-generated holograms (CGHs) as one of the security features in relief-phase protective holograms. An improved method of calculating CGHs is presented, based on a ray-tracing approach for the case of interference of parallel rays. Software is developed for the calculation of multilevel phase CGHs and their integration into security holograms. The topology of the calculated computer-generated phase holograms was recorded on photoresist by optical greyscale lithography. Parameters of the recorded microstructures were investigated with the help of atomic-force microscopy (AFM) and scanning electron microscopy (SEM) methods. The results of the research have shown the highly protective properties of security elements based on CGH microstructures. In our opinion, wide use of CGHs is very promising in the structure of complex security holograms for increasing the level of protection against counterfeiting.

  10. Multiple exciton generation in chiral carbon nanotubes: Density functional theory based computation.

    PubMed

    Kryjevski, Andrei; Mihaylov, Deyan; Kilina, Svetlana; Kilin, Dmitri

    2017-10-21

    We use a Boltzmann transport equation (BE) to study time evolution of a photo-excited state in a nanoparticle including phonon-mediated exciton relaxation and the multiple exciton generation (MEG) processes, such as exciton-to-biexciton multiplication and biexciton-to-exciton recombination. BE collision integrals are computed using Kadanoff-Baym-Keldysh many-body perturbation theory based on density functional theory simulations, including exciton effects. We compute internal quantum efficiency (QE), which is the number of excitons generated from an absorbed photon in the course of the relaxation. We apply this approach to chiral single-wall carbon nanotubes (SWCNTs), such as (6,2) and (6,5). We predict efficient MEG in the (6,2) and (6,5) SWCNTs within the solar spectrum range starting at the 2Eg energy threshold and with QE reaching ∼1.6 at about 3Eg, where Eg is the electronic gap.

  11. Two-boundary grid generation for the solution of the three dimensional compressible Navier-Stokes equations. Ph.D. Thesis - Old Dominion Univ.

    NASA Technical Reports Server (NTRS)

    Smith, R. E.

    1981-01-01

    A grid generation technique called the two boundary technique is developed and applied for the solution of the three dimensional Navier-Stokes equations. The Navier-Stokes equations are transformed from a cartesian coordinate system to a computational coordinate system, and the grid generation technique provides the Jacobian matrix describing the transformation. The two boundary technique is based on algebraically defining two distinct boundaries of a flow domain and the distribution of the grid is achieved by applying functions to the uniform computational grid which redistribute the computational independent variables and consequently concentrate or disperse the grid points in the physical domain. The Navier-Stokes equations are solved using a MacCormack time-split technique. Grids and supersonic laminar flow solutions are obtained for a family of three dimensional corners and two spike-nosed bodies.
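
    A 2-D analogue of the two-boundary technique described above (a sketch under my own simplifications, not the thesis code): two algebraically defined boundary curves are blended across the computational coordinate, and a stretching function redistributes the uniform computational points so that grid lines cluster near the lower boundary, as a viscous-flow calculation would require.

    import numpy as np

    def lower_boundary(xi):                  # body surface (illustrative shape)
        return 0.1 * np.sin(np.pi * xi)

    def upper_boundary(xi):                  # outer boundary of the flow domain
        return np.ones_like(xi)

    def stretch(eta, beta=3.0):
        """Monotone map of [0, 1] onto itself that clusters points near eta = 0."""
        return (np.exp(beta * eta) - 1.0) / (np.exp(beta) - 1.0)

    def two_boundary_grid(n_xi=41, n_eta=21, beta=3.0):
        xi = np.linspace(0.0, 1.0, n_xi)
        eta = stretch(np.linspace(0.0, 1.0, n_eta), beta)
        XI, ETA = np.meshgrid(xi, eta, indexing="ij")
        X = XI                                                       # grid lines straight in x
        Y = (1.0 - ETA) * lower_boundary(XI) + ETA * upper_boundary(XI)
        return X, Y

    X, Y = two_boundary_grid()
    # Metric terms of the mapping (needed for the transformed equations) by finite differences:
    dY_deta = np.gradient(Y, axis=1)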

  12. Reviewing the Need for Gaming in Education to Accommodate the Net Generation

    ERIC Educational Resources Information Center

    Bekebrede, G.; Warmelink, H. J. G.; Mayer, I. S.

    2011-01-01

    There is a growing interest in the use of simulations and games in Dutch higher education. This development is based on the perception that students belong to the "gamer generation" or "net generation": a generation that has grown up with computer games and other technology affecting their preferred learning styles, social…

  13. GRID2D/3D: A computer program for generating grid systems in complex-shaped two- and three-dimensional spatial domains. Part 2: User's manual and program listing

    NASA Technical Reports Server (NTRS)

    Bailey, R. T.; Shih, T. I.-P.; Nguyen, H. L.; Roelke, R. J.

    1990-01-01

    An efficient computer program, called GRID2D/3D, was developed to generate single and composite grid systems within geometrically complex two- and three-dimensional (2- and 3-D) spatial domains that can deform with time. GRID2D/3D generates single grid systems by using algebraic grid generation methods based on transfinite interpolation in which the distribution of grid points within the spatial domain is controlled by stretching functions. All single grid systems generated by GRID2D/3D can have grid lines that are continuous and differentiable everywhere up to the second-order. Also, grid lines can intersect boundaries of the spatial domain orthogonally. GRID2D/3D generates composite grid systems by patching together two or more single grid systems. The patching can be discontinuous or continuous. For continuous composite grid systems, the grid lines are continuous and differentiable everywhere up to the second-order except at interfaces where different single grid systems meet. At interfaces where different single grid systems meet, the grid lines are only differentiable up to the first-order. For 2-D spatial domains, the boundary curves are described by using either cubic or tension spline interpolation. For 3-D spatial domains, the boundary surfaces are described by using either linear Coon's interpolation, bi-hyperbolic spline interpolation, or a new technique referred to as 3-D bi-directional Hermite interpolation. Since grid systems generated by algebraic methods can have grid lines that overlap one another, GRID2D/3D contains a graphics package for evaluating the grid systems generated. With the graphics package, the user can generate grid systems in an interactive manner with the grid generation part of GRID2D/3D. GRID2D/3D is written in FORTRAN 77 and can be run on any IBM PC, XT, or AT compatible computer. In order to use GRID2D/3D on workstations or mainframe computers, some minor modifications must be made in the graphics part of the program; no modifications are needed in the grid generation part of the program. The theory and method used in GRID2D/3D is described.
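
    A compact sketch of 2-D transfinite interpolation of the kind described above (the boundary curves and array layout are my own illustrative choices; GRID2D/3D additionally applies stretching functions and spline boundary descriptions): the four boundary point arrays are blended linearly and the corner terms are subtracted, so the generated grid matches every boundary point exactly.

    import numpy as np

    def tfi(bottom, top, left, right):
        """Transfinite interpolation from four boundary point arrays.

        bottom, top: shape (n_xi, 2); left, right: shape (n_eta, 2), with matching corners.
        Returns a grid of shape (n_xi, n_eta, 2).
        """
        n_xi, n_eta = bottom.shape[0], left.shape[0]
        xi = np.linspace(0.0, 1.0, n_xi)[:, None, None]
        eta = np.linspace(0.0, 1.0, n_eta)[None, :, None]
        B, T = bottom[:, None, :], top[:, None, :]
        L, R = left[None, :, :], right[None, :, :]
        corners = ((1 - xi) * (1 - eta) * bottom[0] + xi * (1 - eta) * bottom[-1]
                   + (1 - xi) * eta * top[0] + xi * eta * top[-1])
        return (1 - eta) * B + eta * T + (1 - xi) * L + xi * R - corners

    # Example: a channel with a sinusoidal lower wall.
    s, t = np.linspace(0.0, 1.0, 41), np.linspace(0.0, 1.0, 21)
    bottom = np.stack([s, 0.1 * np.sin(np.pi * s)], axis=1)
    top = np.stack([s, np.ones_like(s)], axis=1)
    left = np.stack([np.zeros_like(t), (1 - t) * bottom[0, 1] + t * top[0, 1]], axis=1)
    right = np.stack([np.ones_like(t), (1 - t) * bottom[-1, 1] + t * top[-1, 1]], axis=1)
    grid = tfi(bottom, top, left, right)     # matches all four boundaries exactly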

  14. GRID2D/3D: A computer program for generating grid systems in complex-shaped two- and three-dimensional spatial domains. Part 1: Theory and method

    NASA Technical Reports Server (NTRS)

    Shih, T. I.-P.; Bailey, R. T.; Nguyen, H. L.; Roelke, R. J.

    1990-01-01

    An efficient computer program, called GRID2D/3D was developed to generate single and composite grid systems within geometrically complex two- and three-dimensional (2- and 3-D) spatial domains that can deform with time. GRID2D/3D generates single grid systems by using algebraic grid generation methods based on transfinite interpolation in which the distribution of grid points within the spatial domain is controlled by stretching functions. All single grid systems generated by GRID2D/3D can have grid lines that are continuous and differentiable everywhere up to the second-order. Also, grid lines can intersect boundaries of the spatial domain orthogonally. GRID2D/3D generates composite grid systems by patching together two or more single grid systems. The patching can be discontinuous or continuous. For continuous composite grid systems, the grid lines are continuous and differentiable everywhere up to the second-order except at interfaces where different single grid systems meet. At interfaces where different single grid systems meet, the grid lines are only differentiable up to the first-order. For 2-D spatial domains, the boundary curves are described by using either cubic or tension spline interpolation. For 3-D spatial domains, the boundary surfaces are described by using either linear Coon's interpolation, bi-hyperbolic spline interpolation, or a new technique referred to as 3-D bi-directional Hermite interpolation. Since grid systems generated by algebraic methods can have grid lines that overlap one another, GRID2D/3D contains a graphics package for evaluating the grid systems generated. With the graphics package, the user can generate grid systems in an interactive manner with the grid generation part of GRID2D/3D. GRID2D/3D is written in FORTRAN 77 and can be run on any IBM PC, XT, or AT compatible computer. In order to use GRID2D/3D on workstations or mainframe computers, some minor modifications must be made in the graphics part of the program; no modifications are needed in the grid generation part of the program. This technical memorandum describes the theory and method used in GRID2D/3D.

  15. Fast generation of three-dimensional computational boundary-conforming periodic grids of C-type [for turbine blades and propellers]

    NASA Technical Reports Server (NTRS)

    Dulikravich, D. S.

    1982-01-01

    A fast computer program, GRID3C, was developed to generate multilevel, three-dimensional, C-type, periodic, boundary-conforming grids for the calculation of realistic turbomachinery and propeller flow fields. The technique is based on two analytic functions that conformally map a cascade of semi-infinite slits to a cascade of doubly infinite strips on different Riemann sheets. Up to four consecutively refined three-dimensional grids are automatically generated and permanently stored on four different computer tapes. Grid nonorthogonality is introduced by a separate coordinate shearing and stretching performed in each of the three coordinate directions. The grids are easily clustered closer to the blade surface, the trailing and leading edges, and the hub or shroud regions by changing appropriate input parameters. The hub and the duct (or outer free boundary) can have different axisymmetric shapes. A vortex sheet of arbitrary thickness emanating smoothly from the blade trailing edge is generated automatically by GRID3C. Blade cross-sectional shape, chord length, twist angle, sweep angle, and dihedral angle can vary in an arbitrary smooth fashion in the spanwise direction.
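
    The clustering of grid points near the blade, edge, and hub/shroud regions mentioned above is typically achieved with analytic stretching functions applied alongside the shearing. Below is a hedged Python sketch of one common choice, a one-sided hyperbolic-tangent stretching; it illustrates the general technique only, not the actual GRID3C transformation, and the clustering parameter is arbitrary.

        import numpy as np

        def tanh_cluster(n, beta=3.0):
            """Map n uniformly spaced parameters on [0, 1] to coordinates on
            [0, 1] clustered near s = 0 (e.g., a blade or hub surface).
            Larger beta gives stronger clustering."""
            u = np.linspace(0.0, 1.0, n)
            return 1.0 + np.tanh(beta * (u - 1.0)) / np.tanh(beta)

        s = tanh_cluster(33)     # spacing grows smoothly away from s = 0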

  16. Improved atomistic simulation of diffusion and sorption in metal oxides

    NASA Astrophysics Data System (ADS)

    Skouras, E. D.; Burganos, V. N.; Payatakes, A. C.

    2001-01-01

    Gas diffusion and sorption on the surface of metal oxides are investigated using atomistic simulations that make use of two different force fields for the description of the intramolecular and intermolecular interactions. Molecular dynamics (MD) and Monte Carlo (MC) computations are presented, and estimates of the mean residence time, Henry's constant, and the heat of adsorption are provided for various common gases (CO, CO2, O2, CH4, Xe) and for semiconducting substrates that hold promise for gas sensor applications (SnO2, BaTiO3). Comparison is made between the performance of a simple, first-generation force field (Universal) and a more detailed, second-generation field (COMPASS) under the same conditions and the same assumptions regarding the generation of the working configurations. It is found that the two force fields yield qualitatively similar results in all cases examined here. However, direct comparison with experimental data reveals that the accuracy of the COMPASS-based computations is not only higher than that of the first-generation force field but exceeds even that of published specialized methods based on ab initio computations.
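
    As an illustration of how a Henry's-law constant can be estimated from a force field, here is a hedged Python sketch of Widom-style test-particle insertion over a toy 9-3 Lennard-Jones wall potential. The potential form and parameter values are illustrative stand-ins, not the Universal or COMPASS force fields used in the study.

        import numpy as np

        EPS, SIGMA = 1.2, 3.0      # kJ/mol and Angstrom; illustrative values only
        KB = 0.0083145             # Boltzmann constant, kJ/(mol K)

        def wall_potential(z):
            # 9-3 Lennard-Jones potential for a gas atom a height z above a flat surface.
            s = SIGMA / z
            return EPS * ((2.0 / 15.0) * s**9 - s**3)

        def henry_constant(T, z_min=0.5, z_max=30.0, n_insert=200_000, seed=0):
            """Average Boltzmann factor of random test insertions above the
            surface; proportional to the Henry's-law adsorption constant."""
            z = np.random.default_rng(seed).uniform(z_min, z_max, n_insert)
            boltz = np.exp(-wall_potential(z) / (KB * T))
            return (z_max - z_min) * boltz.mean()   # Angstrom per unit surface area

        print(henry_constant(300.0))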

  17. Computer laboratory in medical education for medical students.

    PubMed

    Hercigonja-Szekeres, Mira; Marinović, Darko; Kern, Josipa

    2009-01-01

    Five generations of second-year students at the Zagreb University School of Medicine were interviewed through an anonymous questionnaire on their use of personal computers, the Internet, computer laboratories, and computer-assisted education in general. Results show an advance in students' usage of information and communication technology during the period from 1998/99 to 2002/03. However, their positive opinion of the computer laboratory depends on the installed capacity: the better the computer laboratory technology, the better the students' acceptance and use of it.

  18. Computer algebra and operators

    NASA Technical Reports Server (NTRS)

    Fateman, Richard; Grossman, Robert

    1989-01-01

    The symbolic computation of operator expansions is discussed. Some of the capabilities that prove useful when performing computer algebra computations involving operators are considered. These capabilities may be broadly divided into three areas: the algebraic manipulation of expressions from the algebra generated by operators; the algebraic manipulation of the actions of the operators upon other mathematical objects; and the development of appropriate normal forms and simplification algorithms for operators and their actions. Brief descriptions are given of the computer algebra computations that arise when working with various operators and their actions.
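
    A small, hedged illustration of the first capability listed above (manipulating expressions in the algebra generated by operators), using SymPy's noncommutative symbols; the operator names and the truncated expansion are placeholders, not the systems discussed in the report.

        import sympy as sp

        # Noncommutative symbols stand in for abstract operators A and B.
        A, B = sp.symbols("A B", commutative=False)
        t = sp.symbols("t")

        print(sp.expand((A + B)**2))     # A**2 + A*B + B*A + B**2 (order preserved)

        # Commutator and the first terms of exp(t*A) * B * exp(-t*A),
        # i.e. B + t*[A, B] + t**2/2*[A, [A, B]] + ...
        comm = A*B - B*A
        expansion = B + t*comm + t**2/2 * (A*comm - comm*A)
        print(sp.expand(expansion))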

  19. Beyond Moore's law: towards competitive quantum devices

    NASA Astrophysics Data System (ADS)

    Troyer, Matthias

    2015-05-01

    A century after the invention of quantum theory and fifty years after Bell's inequality, we see the first quantum devices emerge as products that aim to be competitive with the best classical computing devices. While a universal quantum computer of non-trivial size is still out of reach, there exist a number of commercial and experimental devices: quantum random number generators, quantum simulators, and quantum annealers. In this colloquium I will present some of these devices and the validation tests we performed on them. Quantum random number generators use the inherent randomness in quantum measurements to produce true random numbers, unlike classical pseudorandom number generators, which are inherently deterministic. Optical lattice emulators use ultracold atomic gases in optical lattices to mimic typical models of condensed matter physics. In my talk I will focus especially on the devices built by the Canadian company D-Wave Systems, which are special-purpose quantum simulators for solving hard classical optimization problems. I will review the controversy around the quantum nature of these devices and will compare them to state-of-the-art classical algorithms. I will end with an outlook towards universal quantum computing and the question: which important problems that are intractable even for post-exascale classical computers could we expect to solve once we have a universal quantum computer?
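
    To make the contrast between quantum and pseudorandom number generation concrete, here is a hedged Python sketch: a seeded pseudorandom generator is fully deterministic, while an ideal single-qubit quantum RNG draws each bit from a genuinely probabilistic measurement. Classical code can only simulate the quantum device's statistics, which is exactly the point.

        import numpy as np

        def prng_bits(n, seed=42):
            # Deterministic: the same seed always reproduces the same bit stream.
            return np.random.default_rng(seed).integers(0, 2, n)

        def simulated_qrng_bits(n):
            # An ideal QRNG measures a qubit prepared in |+> = (|0> + |1>)/sqrt(2);
            # each outcome occurs with probability |amplitude|**2 = 0.5.
            probs = np.abs(np.array([1.0, 1.0]) / np.sqrt(2.0))**2
            return np.random.default_rng().choice([0, 1], size=n, p=probs)

        assert (prng_bits(10) == prng_bits(10)).all()   # reproducible by construction
        print(simulated_qrng_bits(10))                  # mimics QRNG statistics only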

  20. Use of Computer Kiosks for Breast Cancer Education in Five Community Settings

    ERIC Educational Resources Information Center

    Kreuter, Matthew W.; Black, Wynona J.; Friend, LaBraunna; Booker, Angela C.; Klump, Paula; Bobra, Sonal; Holt, Cheryl L.

    2006-01-01

    Finding ways to bring effective computer-based behavioral interventions to those with limited access to technology is a continuing challenge for health educators. Computer kiosks placed in community settings may help reach such populations. The "Reflections of You" kiosk generates individually tailored magazines on breast cancer and…

  1. Using a Computer Game to Reinforce Skills in Addition Basic Facts in Second Grade.

    ERIC Educational Resources Information Center

    Kraus, William H.

    1981-01-01

    A computer-generated game called Fish Chase was developed to present drill-and-practice exercises on addition facts. The subjects of the study were 19 second-grade pupils. The results indicate a computer game can be used effectively to increase proficiency with basic facts. (MP)

  2. A Framework for Understanding Physics Students' Computational Modeling Practices

    ERIC Educational Resources Information Center

    Lunk, Brandon Robert

    2012-01-01

    With the growing push to include computational modeling in the physics classroom, we are faced with the need to better understand students' computational modeling practices. While existing research on programming comprehension explores how novices and experts generate programming algorithms, little of this discusses how domain content…

  3. Onboard Flow Sensing For Downwash Detection and Avoidance On Small Quadrotor Helicopters

    DTIC Science & Technology

    2015-01-01

    onboard computers, one for flight stabilization and a Linux computer for sensor integration and control calculations. The Linux computer runs Robot... Hirokawa, D. Kubo, S. Suzuki, J. Meguro, and T. Suzuki. Small UAV for immediate hazard map generation. In AIAA Infotech@Aerospace Conf., May 2007.

  4. Some Measurement and Instruction Related Considerations Regarding Computer Assisted Testing.

    ERIC Educational Resources Information Center

    Oosterhof, Albert C.; Salisbury, David F.

    The Assessment Resource Center (ARC) at Florida State University provides computer assisted testing (CAT) for approximately 4,000 students each term. Computer capabilities permit a small proctoring staff to administer tests simultaneously to large numbers of students. Programs provide immediate feedback for students and generate a variety of…

  5. Using the Computer in Evolution Studies

    ERIC Educational Resources Information Center

    Mariner, James L.

    1973-01-01

    Describes a high school biology exercise in which a computer greatly reduces time spent on calculations. Genetic equilibrium demonstrated by the Hardy-Weinberg principle and the subsequent effects of violating any of its premises are more readily understood when frequencies of alleles through many generations are calculated by the computer. (JR)
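
    A minimal Python sketch of the kind of calculation the exercise automates: iterating allele frequencies under Hardy-Weinberg assumptions, then violating one premise (selection against the aa homozygote) to show the resulting frequency shift. The fitness value is an arbitrary illustration, not taken from the article.

        def next_p(p, w_aa=1.0):
            # One generation of random mating; w_aa < 1 models selection against aa.
            q = 1.0 - p
            mean_fitness = p*p + 2*p*q + q*q*w_aa
            return (p*p + p*q) / mean_fitness

        p = 0.6
        for gen in range(6):
            print("equilibrium", gen, round(p, 4))   # stays at 0.6
            p = next_p(p)

        p = 0.6
        for gen in range(6):
            print("selection  ", gen, round(p, 4))   # A allele frequency rises
            p = next_p(p, w_aa=0.8)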

  6. Connecting Kids and Computers

    ERIC Educational Resources Information Center

    Giles, Rebecca McMahon

    2006-01-01

    Exposure to cell phones, DVD players, video games, computers, digital cameras, and iPods has made today's young people more technologically advanced than those of any previous generation. As a result, parents are now concerned that their children are spending too much time in front of the computer. In this article, the author focuses her…

  7. Computer Assisted Problem Solving in an Introductory Statistics Course. Technical Report No. 56.

    ERIC Educational Resources Information Center

    Anderson, Thomas H.; And Others

    The computer assisted problem solving system (CAPS) described in this booklet administered "homework" problem sets designed to develop students' computational, estimation, and procedural skills. These skills were related to important concepts in an introductory statistics course. CAPS generated unique data, judged student performance,…

  8. Oklahoma's Mobile Computer Graphics Laboratory.

    ERIC Educational Resources Information Center

    McClain, Gerald R.

    This Computer Graphics Laboratory houses an IBM 1130 computer, U.C.C. plotter, printer, card reader, two key punch machines, and seminar-type classroom furniture. A "General Drafting Graphics System" (GDGS) is used, based on repetitive use of basic coordinate and plot generating commands. The system is used by 12 institutions of higher education…

  9. The I-Generation--From Toddlers to Teenagers: A Conversation with Jane M. Healy.

    ERIC Educational Resources Information Center

    Tell, Carol

    2000-01-01

    In "Failure to Connect" (1998), Jane Healy examined pros and cons of computer use, warning that good teachers, small classes, and challenging curricula trump high-tech products. Computers can impede youngsters' development. Computers enhance learning only if teachers comprehend them, use appropriate applications, and define learning…

  10. A Novel Use of Computer Simulation in an Applied Pharmacokinetics Course.

    ERIC Educational Resources Information Center

    Sullivan, Timothy J.

    1982-01-01

    The use of a package of interactive computer programs designed to simulate pharmacokinetic monitoring of drug therapy in a required undergraduate applied pharmacokinetics course is described. Students were assigned the problem of maintaining therapeutic drug concentrations in a computer-generated "patient" as an adjunct to classroom instruction.…
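
    For illustration, a hedged Python sketch of the kind of model such monitoring software simulates: a one-compartment pharmacokinetic model with first-order elimination and repeated intravenous bolus doses. The drug parameters below are hypothetical and are not taken from the described course package.

        import numpy as np

        def concentration(t_hours, dose_mg=500.0, interval_h=8.0,
                          vd_L=40.0, half_life_h=6.0):
            """Plasma concentration (mg/L) by superposition of exponentially
            decaying bolus doses given every interval_h hours."""
            k = np.log(2.0) / half_life_h
            c = np.zeros_like(t_hours, dtype=float)
            for i in range(int(t_hours.max() // interval_h) + 1):
                t_dose = i * interval_h
                given = t_hours >= t_dose
                c[given] += (dose_mg / vd_L) * np.exp(-k * (t_hours[given] - t_dose))
            return c

        t = np.linspace(0.0, 48.0, 97)
        print(round(concentration(t)[-1], 2))   # trough nears steady state after ~5 half-lives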

  11. Culture and Risk: Does the Future Compute? A Symposium.

    ERIC Educational Resources Information Center

    Barnes, Susan B.; Perkinson, Henry J.; Talbott, Stephen L.

    1998-01-01

    Presents a symposium on the impact of computers on culture. Argues that the computer has mathematized culture and that widespread risk aversion has been generated everywhere. Finds that the way communication technologies are used in social contexts is a topic of concern to communication scholars. (PA)

  12. Business Technology Education in the Early 21st Century: The Ongoing Quest for Relevance

    ERIC Educational Resources Information Center

    Andriole, Stephen J.

    2006-01-01

    The field of information technology is changing and those responsible for educating the next generation of technology professionals have responded with a new computing curriculum, which identifies five distinct technology majors: computer engineering, computer science, software engineering, information systems and information technology.…

  13. The Instrument of the Future: Computers in Education.

    ERIC Educational Resources Information Center

    Leonard, Rex; LeCroy, Barbara

    Before computers will be able to fulfill their potential in education, two major challenges must be overcome--the lack of well-trained teachers and a lack of general knowledge about software and its capabilities. Teachers must acquire some computer literacy skills, including programming, word processing, materials generation and record keeping. In…

  14. Tenth Workshop for Computational Fluid Dynamic Applications in Rocket Propulsion, part 1

    NASA Technical Reports Server (NTRS)

    Williams, R. W. (Compiler)

    1992-01-01

    Experimental and computational fluid dynamic activities in rocket propulsion were discussed. The workshop was an open meeting of government, industry, and academia. A broad range of topics was discussed, including computational fluid dynamic methodology, liquid and solid rocket propulsion, turbomachinery, combustion, heat transfer, and grid generation.

  15. Computer Description of the Field Artillery Ammunition Supply Vehicle

    DTIC Science & Technology

    1983-04-01

    Keywords: Combinatorial Geometry (COM-GEOM); GIFT computer code; computer target description. ... input to the GIFT computer code to generate target vulnerability data. ... Combinatorial Geometry (COM-GEOM) description. The "Geometric Information for Targets" (GIFT) computer code accepts the COM-GEOM description and...
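
    The combinatorial-geometry idea behind such target descriptions can be sketched briefly: a target is a Boolean combination of primitive solids, and a point-membership (or ray-intersection) test of that combination is what a vulnerability code evaluates along each shotline. The Python sketch below is a toy illustration under that assumption; the primitives and their combination are hypothetical, not an actual COM-GEOM deck or the GIFT code.

        import numpy as np

        def in_box(p, lo, hi):
            return bool(np.all((p >= lo) & (p <= hi)))

        def in_sphere(p, center, r):
            return bool(np.sum((p - center)**2) <= r*r)

        def in_target(p):
            # Union of a hull box and a turret sphere, minus a hatch cut-out.
            hull   = in_box(p, np.array([0.0, 0.0, 0.0]), np.array([4.0, 2.0, 2.0]))
            turret = in_sphere(p, np.array([2.0, 1.0, 2.0]), 1.0)
            hatch  = in_box(p, np.array([1.8, 0.8, 2.5]), np.array([2.2, 1.2, 3.0]))
            return (hull or turret) and not hatch

        print(in_target(np.array([2.0, 1.0, 2.2])))   # True: inside the turret sphere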

  16. Space Spurred Computer Graphics

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Dicomed Corporation was asked by NASA in the early 1970s to develop processing capabilities for recording images sent from Mars by the Viking spacecraft. The company produced a film recorder which increased the intensity levels and the capability for color recording. This development led to a strong technology base resulting in sophisticated computer graphics equipment. Dicomed systems are used to record output from CAD (computer-aided design) and CAM (computer-aided manufacturing) equipment, to update maps, and to produce computer-generated animation.

  17. Short-term Temperature Prediction Using Adaptive Computing on Dynamic Scales

    NASA Astrophysics Data System (ADS)

    Hu, W.; Cervone, G.; Jha, S.; Balasubramanian, V.; Turilli, M.

    2017-12-01

    When predicting temperature, there are specific places and times where high-accuracy predictions are harder to obtain. For example, not all sub-regions in the domain require the same amount of computing resources to generate an accurate prediction. Plateau areas might require fewer computing resources than mountainous areas because of the steeper gradient of temperature change in the latter. However, it is difficult to estimate beforehand the optimal allocation of computational resources because several parameters, in addition to orography, play a role in determining the accuracy of the forecasts. The allocation of resources to perform simulations can become a bottleneck because it requires human intervention to stop jobs or start new ones. The goal of this project is to design and develop a dynamic approach to generating short-term temperature predictions that automatically determines the required computing resources and the geographic scales of the predictions based on the spatial and temporal uncertainties. The predictions and the prediction quality metrics are computed using a numerical weather prediction model, the Analog Ensemble (AnEn), and the parallelization on high-performance computing systems is accomplished using the Ensemble Toolkit, one component of the RADICAL-Cybertools family of tools. RADICAL-Cybertools decouples the science needs from the computational capabilities by building an intermediate layer to run general ensemble patterns, regardless of the science. In this research, we show how the Ensemble Toolkit allows generating high-resolution temperature forecasts at different spatial and temporal resolutions. The AnEn algorithm is run using NAM analysis and forecast data for the continental United States for a period of 2 years. AnEn results show that the temperature forecasts perform well according to different probabilistic and deterministic statistical tests.
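
    The core of the analog step described above can be sketched compactly: for a new forecast, find the most similar past forecasts and use the observations that verified them as the prediction ensemble. The Python sketch below uses synthetic arrays and a plain Euclidean similarity metric; the operational AnEn metric weights predictors over a time window, so treat this only as a simplified illustration.

        import numpy as np

        def analog_ensemble(new_fcst, past_fcsts, past_obs, n_analogs=20):
            """Rank historical forecasts by distance to the new forecast and
            return the mean and spread of the verifying observations."""
            dist = np.linalg.norm(past_fcsts - new_fcst, axis=1)
            members = past_obs[np.argsort(dist)[:n_analogs]]
            return members.mean(), members.std()

        rng = np.random.default_rng(1)
        past_fcsts = rng.normal(size=(2000, 3))   # e.g., temperature, wind, humidity
        past_obs = past_fcsts[:, 0] + rng.normal(scale=0.3, size=2000)  # verifying temperature
        mean, spread = analog_ensemble(np.array([0.5, -0.2, 1.0]), past_fcsts, past_obs)
        print(round(mean, 2), round(spread, 2))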

  18. Computational Geometry and Computer-Aided Design

    NASA Technical Reports Server (NTRS)

    Fay, T. H. (Compiler); Shoosmith, J. N. (Compiler)

    1985-01-01

    Extended abstracts of papers addressing the analysis, representation, and synthesis of shape information are presented. Curves and shape control, grid generation and contouring, solid modelling, surfaces, and curve intersection are specifically addressed.

  19. Fundamentals, current state of the development of, and prospects for further improvement of the new-generation thermal-hydraulic computational HYDRA-IBRAE/LM code for simulation of fast reactor systems

    NASA Astrophysics Data System (ADS)

    Alipchenkov, V. M.; Anfimov, A. M.; Afremov, D. A.; Gorbunov, V. S.; Zeigarnik, Yu. A.; Kudryavtsev, A. V.; Osipov, S. L.; Mosunova, N. A.; Strizhov, V. F.; Usov, E. V.

    2016-02-01

    The conceptual fundamentals of the development of the new-generation system thermal-hydraulic computational HYDRA-IBRAE/LM code are presented. The code is intended to simulate the thermal-hydraulic processes that take place in the loops and the heat-exchange equipment of liquid-metal-cooled fast reactor systems under normal operation, during anticipated operational occurrences, and during accidents. The paper provides a brief overview of Russian and foreign system thermal-hydraulic codes for modeling liquid-metal coolants and substantiates the need to develop a new-generation HYDRA-IBRAE/LM code. Considering the specific engineering features of the nuclear power plants (NPPs) equipped with the BN-1200 and the BREST-OD-300 reactors, the processes and phenomena that require detailed analysis and dedicated models are singled out so that they can be correctly described by the system thermal-hydraulic code in question. Information on the functionality of the computational code is provided, viz., the thermal-hydraulic two-phase model, the properties of the sodium and lead coolants, the closing equations for simulation of the heat and mass exchange processes, the models describing the processes that take place during a steam-generator tube rupture, etc. The article gives a brief overview of the usability of the computational code, including a description of the support documentation and the supply package, as well as the possibilities of taking advantage of modern computer technologies such as parallel computations. The paper shows the current state of verification and validation of the computational code; it also presents information on the principles of constructing and populating the verification matrices for the BREST-OD-300 and the BN-1200 reactor systems. The prospects are outlined for further development of the HYDRA-IBRAE/LM code, the introduction of new models into it, and the enhancement of its usability. It is shown that the planned program of development and practical application of the code will make it possible, in the near future, to carry out safety analyses of prospective NPP designs at a qualitatively higher level.

  20. Adversarial Threshold Neural Computer for Molecular de Novo Design.

    PubMed

    Putin, Evgeny; Asadulaev, Arip; Vanhaelen, Quentin; Ivanenkov, Yan; Aladinskaya, Anastasia V; Aliper, Alex; Zhavoronkov, Alex

    2018-03-30

    In this article, we propose the deep neural network Adversarial Threshold Neural Computer (ATNC). The ATNC model is intended for the de novo design of novel small-molecule organic structures. The model is based on generative adversarial network architecture and reinforcement learning. ATNC uses a Differentiable Neural Computer as a generator and has a new specific block, called adversarial threshold (AT). AT acts as a filter between the agent (generator) and the environment (discriminator + objective reward functions). Furthermore, to generate more diverse molecules we introduce a new objective reward function named Internal Diversity Clustering (IDC). In this work, ATNC is tested and compared with the ORGANIC model. Both models were trained on the SMILES string representation of the molecules, using four objective functions (internal similarity, Muegge druglikeness filter, presence or absence of sp3-rich fragments, and IDC). The SMILES representations of 15K druglike molecules from the ChemDiv collection were used as a training data set. For the different functions, ATNC outperforms ORGANIC. Combined with the IDC, ATNC generates 72% valid and 77% unique SMILES strings, while ORGANIC generates only 7% valid and 86% unique SMILES strings. For each set of molecules generated by ATNC and ORGANIC, we analyzed distributions of four molecular descriptors (number of atoms, molecular weight, logP, and TPSA) and calculated five chemical statistical features (internal diversity, number of unique heterocycles, number of clusters, number of singletons, and number of compounds that have not been passed through medicinal chemistry filters). Analysis of key molecular descriptors and chemical statistical features demonstrated that the molecules generated by ATNC exhibited better druglikeness properties. We also performed in vitro validation of the molecules generated by ATNC; results indicated that ATNC is an effective method for producing hit compounds.
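
    The validity and uniqueness percentages quoted above are conventionally computed by parsing each generated SMILES string and deduplicating the canonical forms. A hedged Python sketch of that bookkeeping is given below using RDKit; it assumes RDKit is available, and the sample strings are placeholders rather than output of ATNC or ORGANIC.

        from rdkit import Chem

        def validity_and_uniqueness(smiles_list):
            """Fraction of strings that parse to a molecule, and fraction of
            the valid ones that are unique after canonicalization."""
            canonical = []
            for s in smiles_list:
                mol = Chem.MolFromSmiles(s)
                if mol is not None:
                    canonical.append(Chem.MolToSmiles(mol))
            valid = len(canonical) / len(smiles_list)
            unique = len(set(canonical)) / len(canonical) if canonical else 0.0
            return valid, unique

        samples = ["CCO", "c1ccccc1", "CCO", "C1CC1N", "not_a_smiles"]
        print(validity_and_uniqueness(samples))   # (0.8, 0.75)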
